TUTORIALS
AI4Qual
Half-Day Tutorial
Organizers
He “Albert” Zhang, Penn State University
Jie Cai, Tsinghua University
Jingyi Xie, San José State University
Chuhao Wu, Clemson University
ChanMin Kim, Penn State University
John M. Carroll, Penn State University
AI4Qual: A Comprehensive Field Guide to LLM-Supported Qualitative Research
Qualitative research is central to understanding human experiences and contextual phenomena. However, it remains labor-intensive, difficult to scale, and challenging to teach consistently. Recent advances in Large Language Models (LLMs) and Multimodal LLMs (MLLMs) are prompting researchers to explore how these technologies can augment qualitative workflows. Despite this, significant gaps persist between technologists’ understanding of qualitative rigor and researchers’ experience deploying AI tools. This tutorial provides a comprehensive, practice-oriented introduction to LLM-supported qualitative research across two key stages: data collection and qualitative analysis. In the first stage, participants will learn how LLMs can enhance interview design, generate probes, support interviewer training, adapt tone, and even automate semi-structured interviews. Hands-on exercises will allow participants to create interview guides and conduct AI-assisted mock interviews. The second stage focuses on using LLMs for first-cycle coding and thematic development, emphasizing transparency, analytic rigor, and reflexivity. Through guided demonstrations, participants will gain practical skills, a critical understanding of AI’s strengths and limitations, and concrete methods for responsibly integrating LLMs into their qualitative research practice.
DASH
Quarter-Day Tutorial
Organizers
Michelle Brachman, IBM Research, United States
Heloisa Candello, IBM Research, Brazil
Amanda da Silveira, IBM Research, Brazil
DASH: Designing and Developing Agentic Systems for Humans
Recent developments in generative AI have opened new avenues for designing and developing agentic AI systems. New methods and frameworks continue to emerge that leverage generative AI to create novel types of agentic systems. These new agentic AI capabilities raise questions about both how to design these systems and how best to build them. In this tutorial, we introduce the core ideas necessary to design and build generative AI-powered agentic systems in ways that enable effective human-AI interaction. In particular, this tutorial will focus on levels of autonomy in generative agentic AI systems within human workflows and on how we can best enable users to interact effectively with such systems, drawing on existing knowledge about intelligent user interfaces.
Hitchhiker’s Guide to Temporal Analysis
Half-Day Tutorial
Organizers
Veronika Bogina, University of Haifa
Julia Sheidin, Braude College of Engineering
Hitchhiker’s Guide to Temporal Analysis: Modeling, Causality, and Visualization for User Interaction Data
Many systems generate rich streams of time-stamped events, from interaction logs to sensor readings, but extracting actionable temporal insights remains challenging. This half-day hands-on tutorial offers a practical introduction to temporal modeling, causality analysis, and time-oriented visualization for event-based data. Participants will learn how to detect meaningful temporal patterns, identify event influences, and reason about cause–effect relationships in dynamic systems. The tutorial combines short conceptual modules with guided Jupyter notebooks and runnable examples. We show how temporal analysis can directly support intelligent and interactive systems. By the end of the session, attendees will be able to apply a range of temporal analysis techniques, interpret causal signals, and design effective visual representations of time-oriented data. All materials, including code and templates, will be shared in a public GitHub repository. This tutorial is suitable for researchers, students, and practitioners with basic Python experience who work with interaction, event, or sensor data.
NLDATA
Quarter-Day Tutorial
Organizers
Vidya Setlur, Tableau Research
NLDATA: Supporting Human-Centric Data Exploration Through Semantics and Natural Language Interaction
Data science increasingly drives decision-making across domains, yet the quality of these decisions depends not only on advanced computational methods but also on how effectively systems support human interpretation, exploration, and communication of data. This tutorial provides a structured, interactive introduction to designing human-centric data exploration tools that integrate semantics, natural language processing (NLP), and human-computer interaction (HCI) to enhance accessibility, trust, and transparency in intelligent interfaces. Drawing from research across the HCI, NLP, and visualization communities, participants will learn about generating meaningful visual encodings of data, applying NLP techniques for query interpretation and ambiguity resolution, and designing conversational and multimodal interfaces to support data exploration. Through guided case studies and research examples, this 1.5-hour session will demonstrate how human-centered design principles can be integrated into data exploration interfaces, supporting adaptive defaults, mixed-initiative interaction, and intelligent query handling. The tutorial will also highlight emerging challenges and opportunities, including AI-augmented data workflows, semantic inferencing for unstructured data, retrieval-augmented generation (RAG), and the ethics of fairness, explainability, and user agency.
P2P
Half-Day Tutorial
Organizers
Akram Bayat, Northeastern University
Ziyuan “Zoey” Zhu, IDEO
Zihan Zhan, Northeastern University
Pegah Zargarian, EVENNESS
Fatemeh Mottaghian, Boston University
Aisha Abdur Rahim, Northeastern University
P2P: From Prompt to Prototype – Functional UI Design with LLMs and MCP
P2P: From Prompt to Prototype is a half-day, hands-on tutorial that teaches participants how to design and build functional intelligent user interfaces using large language models (LLMs) and the Model Context Protocol (MCP). While most generative design tools stop at static mockups, this tutorial shows how to translate structured prompts into deployable, testable UI prototypes grounded in human-centered design principles. Participants will learn practical workflows for crafting effective prompts, generating accessible React/HTML interfaces, connecting prototypes to live data through MCP servers, and running automated evaluation pipelines for usability, accessibility, and performance. Through step-by-step exercises, attendees will create three working prototypes and develop a reusable toolkit of prompt templates, design patterns, and MCP configurations. The tutorial is designed for HCI researchers, educators, UX practitioners, and students who want to integrate AI-assisted prototyping into their research and teaching. By bridging conceptual design and implementation, P2P equips participants with a scalable framework for rapid iteration and for exploring the next generation of human–AI collaborative interface design.
REFLECT
Half-Day Tutorial
Organizers
Antonela Tommasel, Johannes Kepler University Linz, Austria – ISISTAN, CONICET-UNCPBA, Argentina
Markus Schedl, Johannes Kepler University Linz, Institute of Computational Perception – Linz Institute of Technology, Artificial Intelligence Lab, Austria
Ralph Hertwig, Max Planck Institute for Human Development, Research Center for Adaptive Rationality, Germany
REFLECT: Tutorial on Reflecting on Bias in LLMs through Human-Centered Perspectives
Large Language Models (LLMs) increasingly shape how people access, produce, and reason with information. Far from being neutral tools, they mirror the data, discourse, and cognitive patterns on which they are trained, often reproducing and amplifying social and cognitive biases that influence what is visible, credible, and valued. Understanding these reflections requires moving beyond technical detection toward examining how bias emerges in LLM outputs, how users perceive and respond to it, and how design choices can reinforce or mitigate its effects. REFLECT offers a human-centered exploration of bias in LLMs, bridging perspectives from computer science, human–computer interaction, and cognitive psychology. This interactive tutorial examines bias as an emergent property of generative models (arising through data, modeling, and interaction processes) and discusses design and interaction strategies that make these reflections visible and open to critical interpretation. By the end, participants will be equipped with conceptual and practical tools to identify, analyze, and interpret how LLMs reflect biases, fostering more transparent, accountable, and trustworthy human–AI interactions.