Conference Keynotes

Björn Schuller
Don’t Fall in Love with Your AI: Super-Empathic CUIs Incoming—High Noon for Guardrails

Super-empathic Conversational User Interfaces (CUIs) are becoming feasible: affect sensing and computational paralinguistics have matured from lab curiosities into deployable signal-processing and representation-learning pipelines that enable a new quality of input analysis and output generation. This keynote frames “empathy” as an engineering stack for machine perception and generation: extracting stable socio-emotional cues from noisy, real-world interaction streams (speech, language, timing, and, when available, vision, physiology, and beyond), mapping them into a continuous latent state, and using that state to drive controllable generation. The talk details practical architectures that turn paralinguistic evidence into actionable CUI behaviour: self- and weakly-supervised embeddings for voice and interaction dynamics; multimodal fusion that remains robust under missing channels; and uncertainty-aware affect inference with personalisation layers that adapt to individual speakers without catastrophic drift. The result is a pipeline in which “felt empathy” emerges from calibrated inference plus policy-conditioned response shaping. This raises the need for mechanisms that bound relational behaviour at a time when users begin to judge CUIs as more empathetic than humans. From a technical point of view, a first step is dialogue analysis in search of attachment and dependency cues derived from paralinguistic and conversational markers. The field now has the measurement substrate and generation ability to build super-empathic CUIs; the next leap is to make them controllable, robust, and testable under the very conditions that modern affective computing has targeted for years.
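To make the abstract's notion of "multimodal fusion that remains robust under missing channels" plus "uncertainty-aware affect inference" concrete, here is a minimal, hypothetical sketch: each modality emits an affect estimate with its own uncertainty, absent channels are skipped, and the fused value is an inverse-variance weighted mean. All names, values, and the fusion rule itself are illustrative assumptions, not the speaker's architecture.

```python
# Hedged sketch of late fusion over affect estimates. Each modality reports a
# valence prediction plus a variance expressing its own uncertainty; missing
# modalities (None) are simply skipped, so the pipeline degrades gracefully
# instead of failing when a channel drops out. All numbers are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModalityEstimate:
    valence: float   # predicted affect value, e.g. in [-1, 1]
    variance: float  # the model's uncertainty about that value (> 0)

def fuse_affect(estimates: dict[str, Optional[ModalityEstimate]]) -> tuple[float, float]:
    """Inverse-variance weighted fusion over the modalities that are present.

    Returns (fused_valence, fused_variance). Raises if no channel is
    available; a real system might instead fall back to a neutral prior.
    """
    present = [e for e in estimates.values() if e is not None]
    if not present:
        raise ValueError("no modality available")
    # Precision (1/variance) weights: confident channels dominate the fusion.
    precisions = [1.0 / e.variance for e in present]
    total_precision = sum(precisions)
    fused_valence = sum(p * e.valence for p, e in zip(precisions, present)) / total_precision
    fused_variance = 1.0 / total_precision  # fusion is more certain than any single channel
    return fused_valence, fused_variance

# Example turn: the vision channel dropped out; speech and text still fuse.
fused, var = fuse_affect({
    "speech": ModalityEstimate(valence=0.6, variance=0.04),
    "text": ModalityEstimate(valence=0.2, variance=0.16),
    "vision": None,  # camera unavailable in this turn
})
```

The fused variance shrinking below every per-channel variance is what makes the estimate "uncertainty-aware": downstream response shaping can condition on it, e.g. hedging the generated reply when the fused uncertainty is high.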

Björn W. Schuller is a leading figure at the intersection of artificial intelligence, signal processing, and digital health. He holds a dual appointment as Full Professor and Chair of Health Informatics at TUM in Munich and Full Professor of Artificial Intelligence at Imperial College London, while also driving technology beyond the lab as co-founding CEO and current CSO of audEERING, an audio intelligence company in Munich and Berlin. Trained end-to-end in EE/IT at TUM (diploma, PhD, habilitation, and Adjunct Teaching Professor in Machine Intelligence and Signal Processing), he now sits in the engine rooms of Europe’s AI ecosystem as Core Member of the Munich Data Science Institute (MDSI), Principal Investigator in the Munich Center for Machine Learning (MCML), and Fellow of the Imperial Data Science Institute, with long-standing international roles including permanent Honourable Dean and Visiting Professor roles in China. A Fellow of the ACM and IEEE (including the IEEE Computer Society Golden Core Award), and Fellow of AAAC (President-Emeritus), BCS, ELLIS, and ISCA, Schuller has (co-)authored 1,700+ publications with 80,000+ citations and an h-index of 125. He shapes the field’s soundtrack from the editorial booth as Field Chief Editor of Frontiers in Digital Health, Editor-in-Chief of AI Open, and former Editor-in-Chief of the IEEE Transactions on Affective Computing. His 50+ awards include the World Economic Forum’s recognition as one of 40 extraordinary scientists under 40 in 2015, and he currently serves as both an ACM Distinguished Speaker and an IEEE Signal Processing Society Distinguished Lecturer. With highest-level grants, advisory work for companies such as Barclays, GN, Huawei, and Samsung, and 300+ media appearances, he mixes rock-solid science with arena-level presence.

Simone Stumpf
Responsible AI

Many Conversational AI technologies have now been integrated into everyday life, and new frontiers driven by Generative and Agentic AI are on the horizon. But how can we ensure that AI is developed and used ‘responsibly’? In this talk, I will review current efforts in Responsible AI and what is currently not being considered. I will focus on research and practices for developing AI responsibly, making sure that AI is transparent, robust, and trustworthy. I will close by discussing the future of responsible Conversational AI and how to include everyone in its development.

Simone Stumpf is Professor of Responsible and Interactive AI at the School of Computing Science at University of Glasgow. She has a long-standing research focus on user interactions with AI systems. Her research includes self-management systems for people living with long-term conditions, developing teachable AI systems for people who don’t have a technical background, and investigating Responsible AI development, including AI fairness and AI auditing. Her work has contributed to Explainable AI (XAI) through the Explanatory Debugging approach for interactive machine learning, providing design principles to enable better human-computer interaction and investigating the effects of greater transparency. The prime aim of her work is to empower everyone to use AI effectively.

https://www.gla.ac.uk/schools/computing/staff/simonestumpf/