Keynotes and Plenaries
Keynote Speaker: Dr. ChangWon Pyo, Director, Pyo Institute of Crime Science & Adjunct Professor of Hallym University, Korea
Keynote Title: AI Governance and ‘3’ policing problems in South Korea (‘Korea’)
Abstract: Korea has been working hard to create an AI-friendly environment since 2019, when the government announced its ‘National AI Strategy’. In 2020, the Korean Government set up a roadmap for reforming AI-related legislation, systems, and regulations [1]. More recently, in 2024, the Korean National Assembly passed the ‘Basic Act on the Development of Artificial Intelligence and the Establishment of Trust’ (the ‘AI Basic Act’), making Korea the second jurisdiction in the world, after the European Union, to enact such a law [2].
The structural limitations facing the Korean economy, such as a low birth rate and rapid super-aging, are cited as the background for Korea’s strong AI drive. The Korean Government expects and hopes to achieve not only economic but also social and security benefits from this drive.
However, there are serious problems to solve and barriers to overcome on Korea’s path to successful AI institutionalization and reform: a general reluctance of traditional industries to adopt new technology, excessive regulation of AI and digital technology, a rigid education system and labor market yet to absorb AI knowledge and skills, and the risks of hacking against AI and of crimes committed using AI.
To address these, the Korean Government has set out three main tasks: (1) to build an AI-native network across the whole of society; (2) to establish a new digital order and governance, led by industry and civil society; and (3) to increase the security and safety of national and social AI and digital systems by enhancing and upgrading the security of data centers, incorporating and internalizing security into AI systems and environments, and increasing investment.
[1] Korean Government, ‘National AI & Digital Reform and Growth Strategy – Toward AI G3 Nation’, 4 April 2024.
[2] Chosun Biz, ‘Korea passes AI Basic Act, second globally, enhancing national AI competitiveness’, 26 December 2024.
Biography: Dr. ChangWon Pyo received his PhD in Policing Studies from the University of Exeter, UK. He served as a Member of Parliament for the Yongin constituency in the South Korean National Assembly (韓國國會議員) from 2016 to 2020. He is also a seasoned police inspector and a professor at the National Police University, South Korea.
Dr. Pyo has been Director of the Pyo Institute of Crime Science since April 2014 and Adjunct Professor in the Forensic Information & Technology Department of Hallym University since 2023. He has served as an Adviser on criminal profiling and behavior analysis to the Korea National Police Agency since 1999 and as an Adviser on criminal justice policy to the Korean Institute of Criminology and Justice since 2021. He is also a television presenter in South Korea.
Keynote Speaker: Prof. Kam-Fai WONG, Member of the National Committee of the CPPCC, Member of the Legislative Council, Associate Dean (External Affairs), Faculty of Engineering, The Chinese University of Hong Kong, Hong Kong
Keynote Title: The Impacts of Digital Humanities on Linguistics, History, Philosophy and Arts
Abstract: In 2024, with the rapid development of the ‘Digital Economy’, ‘Digital Humanities’ has become a popular research field in many institutions worldwide. Over the past year, Generative AI (GenAI) has emerged, bringing many conveniences to humans. Officials, industries, academia, and research sectors around the globe are all vying to use it. This trend is unstoppable and will continue to be the driving force in the innovation and technology industry in 2024. The goal of AI research is to let machines replace humans, so the subtle relationship between artificial intelligence (digital) and human intelligence (humanities), and how the two interact and cooperate, are key topics of concern for many Digital Humanities scholars (including the author).
Theoretically, Digital (D) Humanities (H) can be divided into three categories. The first is D2H: How to use data to analyze and understand the culture of the real world? This is a typical application of big data. The second is H2D: How to imitate real human culture, transform it to the virtual world, and achieve the effect of “Digital Twins”? The third is D&H: How to promote the interaction between the real and virtual worlds, and build an efficient “Cyber-Physical System (CPS)” to network the physical world?
Simply put, from an academic perspective, Digital Humanities encompasses the four major disciplines of Linguistics, History, Philosophy, and Arts. Computer scientists have been continuously researching how to digitize these subjects, expand and deepen their content, promote interdisciplinary studies, and help optimize teaching and learning outcomes. However, if digitization is applied inappropriately, it will inevitably affect the connotation of the subjects. Regardless of the discipline, Digital Humanities is closely related to data. This article lays down the objective: to highlight the impact of “Digital” on “Humanities”.
For AI in linguistics, Natural Language Processing (NLP) technology is used for language analysis and understanding. NLP capabilities are based on Deep Learning and require the support of large corpora (i.e., text big data) for system training. Corpus training can easily lead to an effect of “language discrimination”, which in turn raises the issue of “language conservation”. Large corpora are mainly based on commonly used online languages. For this reason, ChatGPT can fluently converse with users in English, Chinese, Spanish, Arabic, etc. (the languages most used on the internet currently), but it is helpless with languages that have not been digitized. For example, the least used language in the world is Ayapaneco, an ancient language spoken by a very small number of people in Mexico. There is no digital form of the language online. Some experts estimate that these less-used, low-resource languages will vanish from the Internet, leading to the disappearance of their related cultures. What is even more frightening is that if this unhealthy situation continues, the culture of the future online world will be manipulated by big (powerful) nations.
For AI in philosophy, take ChatGPT as an example. Building ChatGPT relies extensively on Deep Learning; the method is like a parrot mimicking speech, learning conversational skills from a large corpus. Therefore, the quality of the corpus is critical. The most common flaw is the hallucination effect – ChatGPT will make things up and answer off-topic due to insufficient training data. Moreover, hallucination can produce chain effects, where one wrong answer naturally affects the next user prompt, and the subsequent reasoning and answers, resulting in a series of mistakes.
History, in turn, is based on archives of past events. Deep learning technology can certainly make historical knowledge deeper and broader, but this advantage depends greatly on the authenticity of the training data. However, deep learning is mainly a set of calculations based on statistics. It does not care about the authenticity of the data, as it does not perform “fact checking”. Furthermore, whether the output historical event is true or false, the system cannot explain its results. The digitization of history also has a domino effect: if unchecked historical events are spread inaccurately, the credibility of future digital history will be greatly discounted.
In the realm of art, one may refer to the recent copyright infringement lawsuit against ChatGPT, filed by The New York Times in the United States. In this case, the defendants, OpenAI and Microsoft, were alleged to have unlawfully utilized the New York Times articles to train ChatGPT without obtaining prior permission. The resultant articles generated by ChatGPT were essentially verbatim, reproducing the original text without any modifications. Moreover, a similar scenario frequently arises with the Image Generator, MidJourney, raising suspicions of potential copyright infringements. Consequently, this prompts the question: will the future of AI necessitate a redefinition of the creative ecosystem and copyright parameters for automatically generated art? Furthermore, how can the value produced in this process be equitably distributed?
It is emphasized that “Safety underpins development, while development ensures safety. Both safety and development must be advanced concurrently.” Consequently, as Hong Kong fosters the digital economy, it must also give due consideration to “digital security”. In the ongoing fourth (AI) industrial revolution, data serves as the pivotal resource for innovation and production, and its integrity must be safeguarded against invasion or contamination. Compromised data not only impedes the economic progression of the Special Administrative Region, but also creates a loophole that could jeopardize national security, providing criminals with an opportunity for exploitation. Hence, as we move into 2024, digital security, encompassing network security, artificial intelligence security, and the like, is of paramount importance to economies worldwide. The Special Administrative Region Government must not overlook it.
Biography: Kam-fai Wong is the Associate Dean (External Affairs) of the Faculty of Engineering, Professor in the Department of Systems Engineering and Engineering Management, and Director of the Centre for Innovation and Technology at The Chinese University of Hong Kong.
Prof. Wong’s research interests primarily revolve around Chinese computing, databases, and information retrieval. He is an ACL Fellow and is very active in professional and public service. He serves as Member of the 13th & 14th National Committee of the CPPCC, Member of the 7th Legislative Council of the HKSAR, Advisor of Our Hong Kong Foundation, Vice-Chairman of the Hong Kong Professionals and Senior Executives Association, Vice Chairman & Secretary General of the Hong Kong Alliance of Technology and Innovation, Director of the Finance Dispute Resolution Centre, Executive Member of the Council for the Promotion of Guangdong-Hong Kong-Macao Cooperation, Member of the Standing Committee of the Shenzhen Association for Science and Technology, and Advisor of the Guangzhou Association for Science and Technology, among others.
Prof. Wong was awarded the Medal of Honour (MH) by the HKSAR Government in 2011 for his contributions to IT development in Hong Kong.
Keynote Speaker: Prof. Chao XI, The Chinese University of Hong Kong, Hong Kong
Keynote Title: Regulating cross-border data flows: Divergence and convergence in national approaches
Abstract: The increasing digitisation of the modern economy has led to a surge in data flows across borders, raising concerns over issues ranging from privacy to national security. In response, national and regional authorities have begun to develop regulatory systems that govern the way in which data flow in and out of their borders. Of particular significance is the recent proliferation of national regulatory regimes placing various types of restrictions on outbound data transfers. This research critically evaluates the national approaches and, on that basis, reflects on the scope of harmonization of these national regimes.
Biography: Prof. Chao XI serves as the Dean and Professor at the Faculty of Law, The Chinese University of Hong Kong. He is internationally recognised for his research and scholarship in Chinese law, corporate law and governance, securities regulation, financial regulation, and empirical legal studies. In recognition of his deep commitment to teaching excellence, Prof. Xi has received the esteemed CUHK Vice-Chancellor’s Exemplary Teaching Award in 2010, 2017, and 2022.
Keynote Speaker: Prof. Chen Jie, Founding Dean of the School of Biomedical Engineering at Fudan University, China
Keynote Title: Guardians of the Mind: How Generative AI is Redefining Mental Healthcare—and Why Cybersecurity Must Lead the Revolution
Abstract: Imagine a world where no one suffers in silence—where cutting-edge AI meets human empathy to diagnose depression with nuance, guide a PTSD survivor toward healing, or simulate a patient’s darkest struggles to train the next generation of clinicians. This is not science fiction. Today, generative AI is reshaping mental healthcare, offering hope to millions while demanding urgent conversations about security, ethics, and the future of trust in technology. My research sits at this crossroads, pioneering AI systems that act as both healers and learners: AI Doctors designed to democratize mental health support, and AI Patients that mirror human complexity to revolutionize medical education. But as we build these digital allies, we face a pivotal question: How do we safeguard their promise in an era of escalating cyber threats?
At the heart of this vision are AI Doctors—intelligent systems blending large language models (LLMs) with clinical rigor. These are not chatbots, but empathetic partners trained on decades of psychiatric research, structured into dynamic knowledge graphs that map the labyrinth of mental disorders. By grounding responses in evidence-based frameworks like Cognitive Behavioral Therapy (CBT) and Dialectical Behavior Therapy (DBT) through retrieval-augmented generation (RAG) and knowledge graphs, and by orchestrating multi-step reasoning via agentic workflows, these AI Doctors can detect subtle signs of depression in a patient’s language, deliver personalized coping strategies, and even recognize when to escalate care to a human provider. Yet their power is twofold: they learn from every interaction, refining their understanding of cultural nuances, trauma responses, and the fragile dance of therapeutic trust.
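To make the retrieval-augmented pattern described above more concrete, here is a minimal Python sketch of how an LLM response can be grounded in evidence-based therapy material before generation. It is an illustration only, not the speaker’s system: the toy knowledge base, the word-overlap retriever, and the call_llm stub are hypothetical stand-ins for a real clinical knowledge graph, vector retriever, and model API.

```python
# Minimal RAG-style sketch: retrieve evidence-based therapy snippets,
# then prepend them to the prompt sent to a language model.
# All data, names, and the call_llm stub are hypothetical illustrations.
from collections import Counter
import math

# Toy knowledge base of evidence-based framework snippets (stand-in for a
# curated clinical knowledge graph).
KNOWLEDGE_BASE = [
    ("CBT", "Cognitive restructuring: help the patient identify and "
            "challenge automatic negative thoughts."),
    ("CBT", "Behavioral activation: schedule small, achievable activities "
            "to counter withdrawal in depression."),
    ("DBT", "Distress tolerance: use grounding and paced breathing to "
            "ride out acute emotional crises."),
]

def _vector(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k snippets most similar to the patient's message."""
    qv = _vector(query)
    scored = sorted(KNOWLEDGE_BASE,
                    key=lambda item: _cosine(qv, _vector(item[1])),
                    reverse=True)
    return [f"[{framework}] {text}" for framework, text in scored[:k]]

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call; simply echoes the grounded prompt.
    return f"(model response grounded in)\n{prompt}"

def ai_doctor_reply(patient_message: str) -> str:
    evidence = "\n".join(retrieve(patient_message))
    prompt = (f"Clinical guidance:\n{evidence}\n\n"
              f"Patient: {patient_message}\nRespond empathetically:")
    return call_llm(prompt)

if __name__ == "__main__":
    print(ai_doctor_reply("I feel hopeless and I avoid doing anything lately"))
```

In a production system, the retrieval step would query a curated knowledge graph and the stub would be replaced by a guarded call to an actual LLM inside an agentic workflow, but the flow (retrieve evidence, assemble a grounded prompt, generate) stays the same.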
But what good is an AI Doctor without rigorous training? Enter AI Patients—digital twins powered by LLMs to simulate the lived experience of mental illness. These are not static avatars, but dynamic entities that emulate human behavior, from the hesitation in a veteran’s voice when recounting trauma to the spiraling thoughts of a teen battling anxiety. Medical students can practice delicate interventions in hyper-realistic scenarios, researchers can stress-test therapies at scale, and policymakers can explore “what-if” simulations of public health crises. Through self-play evaluation, AI Doctors even refine their own protocols by engaging AI Patients in thousands of simulated sessions, iterating toward compassion and accuracy.
This future—where AI becomes a lifeline for mental health—demands more than innovation; it requires an unshakable commitment to cybersecurity. Each leap forward carries profound responsibility: the same LLM that deciphers depression from a whispered confession must guard against breaches that could expose fragile human stories. The digital twin crafted to mirror vulnerability must never become a tool for manipulation, and the therapeutic chatbot born to heal must be armored against attacks seeking to corrupt its purpose. Here lies the frontier of progress: privacy-preserving federated learning shielding data like a decentralized fortress, real-time anomaly detection standing sentinel against poisoned prompts, and blockchain-verified audits ensuring every AI decision aligns with an ethical compass. Yet defenses forged in code alone are incomplete. This mission calls for a collective awakening—a fusion of minds across disciplines, where clinicians, technologists, and cybersecurity guardians unite to weave security into AI’s very DNA. From the first line of training data to the final interaction, every layer must embody a promise: that the algorithms designed to mend minds will themselves be impervious to fracture. The stakes are nothing less than humanity’s trust in tomorrow’s care.
Biography: Chen Jie, Ph.D., is a Fellow of the Canadian Academy of Engineering, a Fellow of the Institute of Electrical and Electronics Engineers (IEEE Fellow), a Fellow of the American Institute for Medical and Biological Engineering, and a Fellow of the Asia-Pacific Artificial Intelligence Association. He currently serves as the Founding Dean of the School of Biomedical Engineering at Fudan University. His research areas include microfluidic in vitro diagnostic chips, organ-neural chips, biomedical integrated circuits and wearable devices, and AI-based mental health diagnosis and therapy.
He has published 263 papers in top international journals (PNAS, ACS Nano, Physical Review Letters, Small, and Nature Microsystems & Nanoengineering) and leading IEEE journals (Proceedings of the IEEE, IEEE Journal on Solid-State Circuits, IEEE Transactions on Circuits and Systems, IEEE Transactions on Biomedical Engineering, and IEEE Transactions on Biomedical Circuits and Systems) as well as in major international conferences. His research has been cited over 9,500 times, with an h-index of 46.
He has received numerous prestigious awards, including:
- Distinguished Alumnus Award from the University of Maryland
- Killam Prize for Canada’s Best Professors (one of the highest honors for Canadian professors)
- Canada National Innovation Fund Leader Award
- Canadian Provincial Science and Technology Progress and Invention Award
He has served as a Board Member of the IEEE Circuits and Systems Society and held editorial positions as an Associate Editor for multiple IEEE journals. He has also served as a General Chair or Technical Committee Chair for various IEEE conferences.
He helped found two Bell Labs spin-off companies. One of them, Flarion Technologies Inc., focused on developing 4G wireless networks and was later acquired by QUALCOMM (San Diego, USA). The other company, iBiquity Digital, specialized in HD Radios, which are now widely used in BMW, General Motors, Ford, Toyota, and Honda vehicles and are available in major retail chains such as Walmart and Best Buy. In 2015, iBiquity Digital was acquired by DTS (California, USA).
Keynote Speaker: Prof. Takako Hashimoto, Professor, Chiba University of Commerce; IEEE R10 Director (2025-26)
Keynote Title: Structuring Topics on Large-Scale Social Media for Discovering People’s Perceptions
Abstract: X (Twitter) is currently one of the most influential microblogging services, on which users interact through messages. It is imperative to grasp the big picture of X by analyzing its huge data stream. In this study, we propose a two-stage clustering method that automatically discovers coarse-grained topics from X data. In the first stage, we use graph clustering to extract micro-clusters from the word co-occurrence graph. All the tweets in a micro-cluster share a fine-grained topic. We then obtain the time series of each micro-cluster by counting the number of tweets posted in each time window. In the second stage, we use time series clustering to identify the clusters corresponding to coarse-grained topics.
We evaluate the computational efficiency of the proposed method and demonstrate that its scalability improves systematically as the data volume increases. Next, we apply the proposed method to large-scale X data (26 million tweets) about COVID-19 vaccination in Japan. The proposed method separately identifies the reactions to news and the reactions to tweets.
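For readers who want a concrete picture of the pipeline, the following Python sketch illustrates the two-stage structure under simplified assumptions; it is not the authors’ implementation. Connected components stand in for the graph-clustering step, and Pearson correlation between count series stands in for the time-series clustering step.

```python
# Toy illustration of a two-stage clustering pipeline:
#   Stage 1: cluster the word co-occurrence graph into micro-clusters.
#   Stage 2: cluster micro-cluster time series into coarse-grained topics.
# Simplified stand-ins are used for both clustering algorithms.
from collections import defaultdict
from itertools import combinations
import statistics

def word_cooccurrence_graph(tweets, min_count=2):
    """Build edges between words that co-occur in at least min_count tweets."""
    counts = defaultdict(int)
    for words in tweets:
        for a, b in combinations(sorted(set(words)), 2):
            counts[(a, b)] += 1
    graph = defaultdict(set)
    for (a, b), c in counts.items():
        if c >= min_count:
            graph[a].add(b)
            graph[b].add(a)
    return graph

def micro_clusters(graph):
    """Stage 1: connected components as a stand-in for graph clustering."""
    seen, clusters = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

def cluster_time_series(cluster, tweets, windows, n_windows):
    """Count, per time window, the tweets containing any of the cluster's words."""
    series = [0] * n_windows
    for words, w in zip(tweets, windows):
        if cluster & set(words):
            series[w] += 1
    return series

def _corr(a, b):
    try:
        return statistics.correlation(a, b)
    except statistics.StatisticsError:  # constant series have no correlation
        return 0.0

def coarse_topics(series_list, threshold=0.9):
    """Stage 2: greedily group micro-clusters whose time series correlate highly."""
    groups = []  # each group is a list of indices into series_list
    for i, s in enumerate(series_list):
        for g in groups:
            if _corr(series_list[g[0]], s) >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

if __name__ == "__main__":
    tweets = [["vaccine", "booking", "site"], ["vaccine", "booking", "error"],
              ["side", "effect", "fever"], ["side", "effect", "arm"]]
    windows = [0, 0, 1, 1]  # time window index of each tweet
    micros = micro_clusters(word_cooccurrence_graph(tweets))
    series = [cluster_time_series(c, tweets, windows, n_windows=2) for c in micros]
    print(micros, coarse_topics(series))
```

In the actual study, dedicated graph-clustering and time-series-clustering algorithms would replace these stand-ins, but the division of labor is the same: fine-grained topics come from the word co-occurrence graph, and coarse-grained topics from the temporal behavior of the resulting micro-clusters.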
Biography: Prof. Takako Hashimoto graduated from Ochanomizu University in Japan and received her Ph.D. in computer science, specializing in multimedia information processing, from the Graduate School of Systems and Information Engineering of the University of Tsukuba in 2005. She worked at the software R&D center of Ricoh Co., Ltd. in Japan for 24 years and participated in the development of many software products as a technical leader. In April 2009, she joined Chiba University of Commerce. In 2015, she was a visiting researcher at the University of California, Los Angeles. She is currently a Professor in the Faculty of Commerce and Economics at Chiba University of Commerce. Her research focuses on data mining and social media analysis, especially topic extraction from millions of tweets related to disasters and topical problems such as COVID-19. She has served as a Board Member of the Database Society of Japan, Chair of the IEEE Japan Council (2021-22), a member of the Board of Governors of the IEEE Computer Society (2021-23), and Chair of IEEE Women in Engineering (2015-16). She was also elected IEEE R10 Director-Elect (2023-24). In 2019, she received the IEEE MGA Larry K. Wilson Transnational Award and was named a Fellow of the Information Processing Society of Japan.