Keynote Speakers

Reality Design: Shaping Experiences Beyond Interfaces through Human-Centered AI

Albrecht Schmidt photo
Professor,
Ludwig-Maximilians-Universität München

Time

March 25th | 9:00 CET

About

As artificial intelligence advances, human-computer interaction is no longer just about designing interfaces; it is about designing reality itself. The interface is no longer a separate entity; it is interwoven with our interactions in the world. Our capabilities and limitations increasingly depend on digital tools and the way we interact with them. AI, Augmented Reality (AR), Virtual Reality (VR), and Large Language Models (LLMs) are fundamentally changing the way we perceive and manipulate our environment. This is shaping our reality! This talk asks: what meaningful contributions can humans make in a world where AI exhibits advanced creativity, solves complex problems, and makes decisions based on more comprehensive information than any human can comprehend? Reflecting on automation and the more than 70-year-old HABA-MABA ("humans are better at / machines are better at") principle, we explore what remains uniquely human today. Do we need to rethink our relationship with digital technology and our approach to designing large, interconnected, interactive systems? How can human-centered AI systems empower people and enhance human agency? We will discuss and share examples of how to move from user interface design to reality design, creating experiences that are meaningful for individuals and responsible toward society at large.

Biography

Albrecht Schmidt is Professor of Computer Science at the Ludwig-Maximilians-Universität (LMU) in Munich, where he holds the Chair for Human-Centered Ubiquitous Media. His research and teaching interests are human-centered artificial intelligence, intelligent interactive systems, ubiquitous computing, digital media technologies, and digital technologies for human augmentation. He studied computer science in Ulm and Manchester and received his PhD from Lancaster University in 2003. Albrecht was co-chair of the ACM SIGCHI 2023 conference, serves on the editorial board of the ACM TOCHI journal, and is a co-founder of the ACM TEI and Automotive User Interfaces conferences. He was inducted into the ACM SIGCHI Academy in 2018, elected to the German Academy of Sciences Leopoldina in 2020, and named an ACM Fellow in 2023.

Interactive machine learning

Jerry Fails photo
Chair & Professor,
Boise State University

Time

March 26th | 9:00 CET

About

The paper “Interactive machine learning” was presented at the ACM Intelligent User Interfaces (IUI) conference in 2003, where it received an Outstanding Paper Award, and it has been highly cited since. Dr. Fails will present that research paper along with a brief overview of related work. The original abstract summarizes the initial research and approach: Perceptual user interfaces (PUIs) are an important part of ubiquitous computing. Creating such interfaces is difficult because of the image and signal processing knowledge required for creating classifiers. We propose an interactive machine-learning (IML) model that allows users to train, classify/view, and correct the classifications. The concept and implementation details of IML are discussed and contrasted with classical machine learning models. Evaluations of two algorithms are also presented. We also briefly describe Image Processing with Crayons (Crayons), a tool for creating new camera-based interfaces using a simple painting metaphor. The Crayons tool embodies our notions of interactive machine learning.
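The train / classify-view / correct loop described in the abstract can be sketched as follows. This is a toy illustration only: the classifier is a simple nearest-centroid model standing in for the fast classifiers the paper evaluates, and names such as `IMLSession` are hypothetical, not taken from the original Crayons system.

```python
# Minimal sketch of the interactive machine learning (IML) loop:
# the user trains with labeled examples, views the resulting
# classifications, and corrects mistakes; each correction immediately
# becomes new training data so the model updates within the loop.

class IMLSession:
    def __init__(self):
        self.examples = {}  # label -> list of feature vectors

    def train(self, features, label):
        """Train step: the user labels an example (e.g. paints pixels)."""
        self.examples.setdefault(label, []).append(features)

    def classify(self, features):
        """Classify/view step: label with the nearest class centroid."""
        def centroid(vecs):
            n = len(vecs)
            return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

        def sq_dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))

        return min(self.examples,
                   key=lambda lbl: sq_dist(features, centroid(self.examples[lbl])))

    def correct(self, features, right_label):
        """Correct step: a fixed mistake is fed straight back as training data."""
        self.train(features, right_label)


session = IMLSession()
session.train([0.9, 0.1], "skin")
session.train([0.1, 0.9], "background")
print(session.classify([0.8, 0.2]))        # -> skin
session.correct([0.5, 0.5], "background")  # user fixes a borderline example
print(session.classify([0.45, 0.5]))       # -> background (after correction)
```

The point of the design, as contrasted with classical machine learning in the paper, is that retraining is cheap enough to happen inside the user's feedback loop rather than as a separate offline stage.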

Biography

Jerry Alan Fails is a Professor and the Chair of the Department of Computer Science at Boise State University in Boise, Idaho, USA. His research is in human-computer interaction (HCI). His early-career research focused on the intersection of novel user interfaces, machine learning, and image processing. More recently, his research leverages participatory user-centered design methods to support children as they search for information online, develop recommender systems for children, support children’s privacy and security needs online, understand privacy and fear within family contexts, and expand methods of designing technologies with and for children to online, hybrid, and in-person modalities at the local and global scale.

Human-Centered AI Transparency: Bridging the Sociotechnical Gap

Vera Liao photo
Principal Researcher,
Microsoft Research

Time

March 27th | 9:00 CET

About

Transparency, enabling appropriate understanding of AI technologies, is considered a pillar of Responsible AI. The AI community has developed an abundance of techniques in the hope of achieving transparency, including explainable AI (XAI), model evaluation, and quantification of model uncertainty. However, there is an inevitable sociotechnical gap between these computational techniques and the nuanced, contextual human needs for understanding AI. Mitigating the sociotechnical gap has long been a mission of the HCI community, but the age of AI has brought new challenges to this mission. In this talk, I will discuss these new challenges and some of our approaches to bridging the sociotechnical gap for AI transparency: conducting critical investigations of dominant AI transparency paradigms; studying people’s transparency needs in diverse contexts; and shaping technical development by embedding sociotechnical perspectives in evaluation practices.

Biography

Q. Vera Liao is an incoming Associate Professor at the University of Michigan and a Principal Researcher at Microsoft Research, where she is part of the FATE (Fairness, Accountability, Transparency, and Ethics in AI) group. Her current research interests are in human-AI interaction and responsible AI, with an overarching goal of bridging emerging AI technologies and human-centered perspectives. Her work has received many paper awards at HCI and AI venues. She currently serves as the co-editor-in-chief for the Springer HCI Book Series and on the Editorial Board of ACM TiiS. She has also served on the organizing committee or as a senior PC member for CHI, CSCW, FAccT, and IUI conferences.