Website Story

How the Trustworthy AI Showcases website began.

I am Kyi Thar. Trustworthy AI Showcases did not begin as a general website. It grew out of my research and implementation work in trustworthy AI, especially from the VINNOVA-supported mobility project, Trustworthy AI and Mobile Generative AI for 6G Networks and Smart Industry Applications, developed through collaboration between Mid Sweden University and Nanyang Technological University. As the work expanded into implementations, prototypes, demonstrations, and draft resources, I created this website as an open public space to share it more clearly for dissemination, learning, and collaboration.

  • Started from: VINNOVA-supported mobility research and implementation work
  • Research context: Mid Sweden University and Nanyang Technological University collaboration
  • Public purpose: Dissemination, learning, and practical communication of research outputs
  • Now includes: Showcases, books, and public-facing research resources

Project Context

Organizations behind the work

The website grew from a funded collaboration between research institutions and international partners.

  • Funding support: VINNOVA
  • Home institution: Mid Sweden University (MIUN)
  • International collaboration: Nanyang Technological University (NTU)

Origin

From research implementation to public communication

This website did not begin as a marketing page. It started as a practical way for me to present trustworthy-AI-related implementations and demonstrations developed through a VINNOVA-supported collaboration on trustworthy AI, mobile generative AI, 6G networks, and smart industry applications.

01

Research first

The initial work grew from hands-on implementation in a VINNOVA-supported MIUN-NTU project focused on trustworthy AI for automated industrial systems, mobile generative AI, and 6G-related applications.

02

Need for dissemination

As the outputs became more visible, it became clear that public dissemination needed a better format than isolated project notes, slides, or internal prototypes.

03

Website as a bridge

This site became that bridge: a place where research-driven implementations can be presented in a simpler, more public-facing form for students, collaborators, industry, and general visitors.

Purpose

Why this website exists

The purpose of Trustworthy AI Showcases is to make research outputs easier to access, understand, and reuse. Instead of leaving demos and resources scattered across different platforms, the site gathers them into one place with enough context for public understanding.

It is also a living space. Some items are mature live showcases. Some are drafts or works in progress. Together, they show how trustworthy AI work evolves from implementation to communication, dissemination, and learning material.

  • Present trustworthy AI implementations in a clearer public format
  • Support dissemination beyond project-specific audiences
  • Connect demos, books, and applied resources in one place
  • Make ongoing work visible even while some outputs are still evolving

Today

What the website is becoming

The site is gradually becoming more than a demo hub. It is turning into a public-facing collection of trustworthy AI showcases, educational material, and project-linked resources.

Showcases

Interactive explainability and wireless-trust AI demonstrations that visitors can open and explore directly.

Books and guides

Longer-form writing such as AI for Leaders, designed to translate project learning into accessible guidance.

Research dissemination

A stable public layer for sharing progress from funded work with broader audiences beyond the core project team.

Profile

Kyi Thar at a glance

A concise view of my research focus, academic role, background, and public profiles.

What I work on

Applied machine learning for Industrial IoT, cyber-physical systems, explainable AI, adversarial robustness, federated learning, edge AI, intrusion detection, and resilient wireless systems.

  • Trustworthy AI and explainability for mission-critical environments
  • Industrial IoT, cyber-physical systems, and resilient wireless infrastructures
  • Federated learning, edge AI, intrusion detection, and robust deployment

Current role

I am currently Associate Senior Lecturer (Assistant Professor) at Mid Sweden University, where I lead and contribute to externally funded research on trustworthy AI, distributed intelligence, industrial communication, and mission-critical digital infrastructure.

Academic background

I completed my Master's and Ph.D. in Computer Engineering at Kyung Hee University under Professor Choong Seon Hong, and earned my Bachelor of Computer Technology from the University of Computer Studies, Yangon.

Publications

Selected papers from Google Scholar and DBLP

A selected set of papers from my Google Scholar and DBLP records that connect most clearly to trustworthy AI, explainability, and trust-related evaluation.

2025

ACHILLES: A Machine Learning Framework for Explainable and Generalized Automotive Intrusion Detection System

Authorea Preprints

A recent explainable and generalized automotive intrusion detection framework that directly addresses trustworthiness and explainability in vehicle security.

2025

Enhancing Intrusion Detection in CPS and IIoT with Lightweight Explainable AI Models

WFCS 2025

A recent explainable AI paper aligned with the website's focus on trustworthy, interpretable systems for mission-critical and industrial environments.

2025

Evaluating Trust-Related Principles in an Implemented Distributed Edge AI System

SNCNW 2025

A direct trust-oriented paper that evaluates trust principles in a real distributed edge AI implementation.

2022

Experimental Analysis of Trustworthy In-Vehicle Intrusion Detection System Using eXplainable Artificial Intelligence (XAI)

IEEE Access 10

The most direct publication match for the site's Trustworthy AI theme, combining trustworthy intrusion detection with explainability.

Thanks

Acknowledgements

This website reflects not only my own work, but also the guidance, collaboration, and support of people and institutions around me.

I would like to express my sincere thanks to Professor Mikael Gidlund at Mid Sweden University and Professor Dusit Niyato at Nanyang Technological University for their support, collaboration, and continued encouragement in trustworthy AI, wireless systems, and 6G-related research.

I would also like to sincerely thank Professor Choong Seon Hong and Kyung Hee University for the academic foundation and guidance that influenced my research journey. I am equally grateful to the broader research and dissemination environment around MIUN, NTU, and the partner projects that helped shape this website into a public-facing platform.

  • Mid Sweden University and the STC Research Centre
  • Nanyang Technological University and Professor Dusit Niyato's research team
  • Kyung Hee University and Professor Choong Seon Hong
  • TRUST project collaborators, including the University of Vaasa partnership
  • VINNOVA, Interreg Aurora, and KK-stiftelsen for support that made dissemination possible