{"id":245831,"date":"2025-12-08T11:08:20","date_gmt":"2025-12-08T11:08:20","guid":{"rendered":"https:\/\/iui.acm.org\/2026\/?page_id=245831"},"modified":"2026-01-07T16:35:55","modified_gmt":"2026-01-07T16:35:55","slug":"tutorials","status":"publish","type":"page","link":"https:\/\/iui.acm.org\/2026\/tutorials\/","title":{"rendered":"Tutorials"},"content":{"rendered":"<p>[et_pb_section fb_built=&#8221;1&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; background_color=&#8221;rgba(0,0,0,0.43)&#8221; background_image=&#8221;https:\/\/iui.acm.org\/2026\/wp-content\/uploads\/2025\/10\/Conference-Image-1-scaled.jpg&#8221; background_blend=&#8221;overlay&#8221; min_height=&#8221;290px&#8221; custom_margin=&#8221;||22px|||&#8221; custom_padding=&#8221;98px||0px|||&#8221; bottom_divider_style=&#8221;curve2&#8243; bottom_divider_color=&#8221;#FFFFFF&#8221; bottom_divider_height=&#8221;80px&#8221; bottom_divider_repeat=&#8221;1x&#8221; da_disable_devices=&#8221;off|off|off&#8221; box_shadow_style=&#8221;preset6&#8243; locked=&#8221;off&#8221; global_colors_info=&#8221;{}&#8221; da_is_popup=&#8221;off&#8221; da_exit_intent=&#8221;off&#8221; da_has_close=&#8221;on&#8221; da_alt_close=&#8221;off&#8221; da_dark_close=&#8221;off&#8221; da_not_modal=&#8221;on&#8221; da_is_singular=&#8221;off&#8221; da_with_loader=&#8221;off&#8221; da_has_shadow=&#8221;on&#8221;][et_pb_row _builder_version=&#8221;4.19.4&#8243; _module_preset=&#8221;default&#8221; background_enable_color=&#8221;off&#8221; custom_margin=&#8221;-30px|auto||auto||&#8221; custom_padding=&#8221;0px|||||&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.19.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][dsm_text_divider header=&#8221;TUTORIALS&#8221; color=&#8221;#FFFFFF&#8221; _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; header_level=&#8221;h1&#8243; 
header_text_color=&#8221;#FFFFFF&#8221; header_font_size=&#8221;40px&#8221; background_enable_color=&#8221;off&#8221; global_colors_info=&#8221;{}&#8221;][\/dsm_text_divider][\/et_pb_column][\/et_pb_row][\/et_pb_section][et_pb_section fb_built=&#8221;1&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;-8px|||||&#8221; custom_padding=&#8221;34px|||||&#8221; da_disable_devices=&#8221;off|off|off&#8221; global_colors_info=&#8221;{}&#8221; da_is_popup=&#8221;off&#8221; da_exit_intent=&#8221;off&#8221; da_has_close=&#8221;on&#8221; da_alt_close=&#8221;off&#8221; da_dark_close=&#8221;off&#8221; da_not_modal=&#8221;on&#8221; da_is_singular=&#8221;off&#8221; da_with_loader=&#8221;off&#8221; da_has_shadow=&#8221;on&#8221;][et_pb_row column_structure=&#8221;1_3,2_3&#8243; _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;-40px|auto||auto||&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3><b>AI4Qual<\/b><\/h3>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span style=\"color: #ceda35;\"><strong>Half-Day Tutorial<\/strong><\/span><\/p>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<span style=\"color: #ceda35;\"><strong>Organizers<\/strong><\/span><br \/>\n<span><strong>He &#8220;Albert&#8221; Zhang<\/strong>, Penn State University<br \/>\n<\/span><span><strong>Jie Cai<\/strong>, Tsinghua University<br \/>\n<\/span><span><strong>Jingyi Xie<\/strong>, San Jos\u00e9 State University<br 
\/>\n<\/span><span><strong>Chuhao Wu<\/strong>, Clemson University<br \/>\n<\/span><span><strong>ChanMin Kim<\/strong>, Penn State University<br \/>\n<\/span><span><strong>John M. Carroll<\/strong>, Penn State University<\/span>[\/et_pb_text][\/et_pb_column][et_pb_column type=&#8221;2_3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h4><strong>AI4Qual: A Comprehensive Field Guide to LLM-Supported Qualitative Research<\/strong><\/h4>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span style=\"font-size: 15px;\">Qualitative research is central to understanding human experiences and contextual phenomena. However, it remains labor-intensive, difficult to scale, and challenging to teach consistently. Recent advances in Large Language Models (LLMs) and Multimodal LLMs (MLLMs) are prompting researchers to explore how these technologies can augment qualitative workflows. Despite this, significant gaps persist between technologists&#8217; understanding of qualitative rigor and researchers&#8217; experience deploying AI tools. This tutorial provides a comprehensive, practice-oriented introduction to LLM-supported qualitative research across two key stages: data collection and qualitative analysis. In the first stage, participants will learn how LLMs can enhance interview design, generate probes, support interviewer training, adapt tone, and even automate semi-structured interviews. Hands-on exercises will allow participants to create interview guides and conduct AI-assisted mock interviews. The second part focuses on using LLMs for first-cycle coding and thematic development, emphasizing transparency, analytic rigor, and reflexivity. 
Through guided demonstrations, participants will gain practical skills, a critical understanding of AI\u2019s strengths and limitations, and concrete methods for responsibly integrating LLMs into their qualitative research practice.<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><\/span><\/p>\n<p>[\/et_pb_text][et_pb_button button_url=&#8221;https:\/\/agentcraft-iui.github.io\/2026&#8243; url_new_window=&#8221;on&#8221; button_text=&#8221;More&#8221; disabled_on=&#8221;on|on|on&#8221; _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; custom_button=&#8221;on&#8221; button_text_color=&#8221;#FFFFFF&#8221; button_bg_color=&#8221;#ceda35&#8243; button_border_radius=&#8221;46px&#8221; disabled=&#8221;on&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_button][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_3,2_3&#8243; _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3><strong>DASH<\/strong><\/h3>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span style=\"color: #ceda35;\"><strong>Quarter-Day Tutorial<\/strong><\/span><\/p>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span style=\"color: #ceda35;\"><strong>Organizers<\/strong><\/span><\/p>\n<p><span><strong>Michelle Brachman<\/strong>, IBM Research, United States<br \/><\/span><span><strong>Heloisa Candello<\/strong>, IBM Research, Brazil<br \/><\/span><span><strong>Amanda da Silveira<\/strong>, IBM Research, 
Brazil<\/span><\/p>\n<p>[\/et_pb_text][\/et_pb_column][et_pb_column type=&#8221;2_3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h4><strong>DASH: Designing and Developing Agentic Systems for Humans<\/strong><\/h4>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span>Recent developments in generative AI have opened new avenues for designing and developing agentic AI systems. New methods and frameworks continue to emerge to leverage generative AI to create novel types of agentic systems. These new agentic AI capabilities raise questions about both how to design these systems and how best to build them. In this tutorial, we introduce the core ideas necessary to design and build generative AI-powered agentic systems in ways that enable effective human-AI interaction. In particular, this tutorial will focus on levels of autonomy in generative agentic AI systems within human workflows and how we can best enable users to interact effectively with such systems, drawing on existing knowledge about intelligent user interfaces. 
<\/span><\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_3,2_3&#8243; _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; hover_enabled=&#8221;0&#8243; global_colors_info=&#8221;{}&#8221; sticky_enabled=&#8221;0&#8243;]<\/p>\n<h3><strong>Hitchhiker\u2019s Guide to Temporal Analysis<\/strong><\/h3>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span style=\"color: #ceda35;\"><strong>Half-Day Tutorial<\/strong><\/span><\/p>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span style=\"color: #ceda35;\"><strong>Organizers<\/strong><\/span><\/p>\n<p><span><strong>Veronika Bogina<\/strong>, University of Haifa<br \/><\/span><span><strong>Julia Sheidin<\/strong>, Braude College of Engineering<\/span><\/p>\n<p>[\/et_pb_text][et_pb_icon _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_icon][\/et_pb_column][et_pb_column type=&#8221;2_3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||12px|||&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h4><strong>Modeling, Causality, and Visualization for User Interaction Data<\/strong><\/h4>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span>Many 
systems generate rich streams of time-stamped events, from interaction logs to sensor readings, but extracting actionable temporal insights remains challenging. This half-day hands-on tutorial offers a practical introduction to temporal modelling, causality analysis, and time-oriented visualisation for event-based data. Participants will learn how to detect meaningful temporal patterns, identify event influences, and reason about cause\u2013effect relationships in dynamic systems. The tutorial combines short conceptual modules with guided Jupyter notebooks and runnable examples. We show how temporal analysis can directly support intelligent and interactive systems. By the end of the session, attendees will be able to apply a range of temporal analysis techniques, interpret causal signals, and design effective visual representations of time-oriented data. All materials, including code and templates, will be shared in a public GitHub repository. This tutorial is suitable for researchers, students, and practitioners with basic Python experience who work with interaction, event, or sensor data.<\/span><\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_3,2_3&#8243; _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3><strong>NLDATA<\/strong><\/h3>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span style=\"color: #ceda35;\"><strong>Quarter-Day Tutorial<\/strong><\/span><\/p>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; 
global_colors_info=&#8221;{}&#8221;]<span style=\"color: #ceda35;\"><strong>Organizers<\/strong><\/span><br \/>\n<span><strong>Vidya Setlur<\/strong>, Tableau Research<\/span>[\/et_pb_text][et_pb_icon _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_icon][\/et_pb_column][et_pb_column type=&#8221;2_3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||12px|||&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h4><strong>NLDATA: Supporting Human-Centric Data Exploration Through Semantics and Natural Language Interaction<\/strong><\/h4>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>Data science increasingly drives decision-making across domains, yet the quality of these decisions depends not only on advanced computational methods but also on how effectively systems support human interpretation, exploration, and communication of data. This tutorial provides a structured, interactive introduction to designing human-centric data exploration tools that integrate semantics, natural language processing (NLP), and human-computer interaction (HCI) to enhance accessibility, trust, and transparency in intelligent interfaces. Drawing from research across the HCI, NLP, and visualization communities, participants will learn about research on generating meaningful visual encodings of data, applying NLP techniques for query interpretation and ambiguity resolution, and designing conversational and multimodal interfaces to support data exploration. 
Through guided case studies and research examples, this 1.5-hour session will demonstrate how human-centered design principles can be integrated into data exploration interfaces, supporting adaptive defaults, mixed-initiative interaction, and intelligent query handling. The tutorial will also highlight emerging challenges and opportunities, including AI-augmented data workflows, semantic inferencing for unstructured data, retrieval-augmented generation (RAG), and the ethics of fairness, explainability, and user agency.<\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_3,2_3&#8243; _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3><strong>P2P<\/strong><\/h3>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span style=\"color: #ceda35;\"><strong>Half-Day Tutorial<\/strong><\/span><\/p>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span style=\"color: #ceda35;\"><strong>Organizers<\/strong><\/span><\/p>\n<p><span><strong>Akram Bayat<\/strong>, Northeastern University<br \/><\/span><span><strong>Ziyuan \u201cZoey\u201d Zhu<\/strong>, IDEO<br \/><\/span><span><strong>Zihan Zhan<\/strong>, Northeastern University<br \/><\/span><span><strong>Pegah Zargarian<\/strong>, EVENNESS<br \/><\/span><span><strong>Fatemeh Mottaghian<\/strong>, Boston University<br \/><\/span><span><strong>Aisha Abdur Rahim<\/strong>, Northeastern University<\/span><\/p>\n<p>[\/et_pb_text][et_pb_icon 
_builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_icon][\/et_pb_column][et_pb_column type=&#8221;2_3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||12px|||&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h4><strong>P2P: From Prompt to Prototype &#8211; Functional UI Design with LLMs and MCP<\/strong><\/h4>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span>P2P: From Prompt to Prototype is a half-day, hands-on tutorial that teaches participants how to design and build functional intelligent user interfaces using large language models (LLMs) and the Model Context Protocol (MCP). While most generative design tools stop at static mockups, this tutorial shows how to translate structured prompts into deployable, testable UI prototypes grounded in human-centered design principles. Participants will learn practical workflows for crafting effective prompts, generating accessible React\/HTML interfaces, connecting prototypes to live data through MCP servers, and running automated evaluation pipelines for usability, accessibility, and performance. Through step-by-step exercises, attendees will create three working prototypes and develop a reusable toolkit of prompt templates, design patterns, and MCP configurations. The tutorial is designed for HCI researchers, educators, UX practitioners, and students who want to integrate AI-assisted prototyping into their research and teaching. 
By bridging conceptual design and implementation, P2P equips participants with a scalable framework for rapid iteration and for exploring the next generation of human\u2013AI collaborative interface design.<\/span><\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_3,2_3&#8243; _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3><strong>REFLECT<\/strong><\/h3>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span style=\"color: #ceda35;\"><strong>Half-Day Tutorial<\/strong><\/span><\/p>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span style=\"color: #ceda35;\"><strong>Organizers<\/strong><\/span><\/p>\n<p><span><strong>Antonela Tommasel<\/strong>, Johannes Kepler University Linz, Austria &#8211; ISISTAN, CONICET-UNCPBA, Argentina<br \/><\/span><span><strong>Markus Schedl<\/strong>, Johannes Kepler University Linz, Institute of Computational Perception &#8211; Linz Institute of Technology, Artificial Intelligence Lab, Austria<br \/><\/span><span><strong>Ralph Hertwig<\/strong>, Max Planck Institute for Human Development, Research Center for Adaptive Rationality, Germany<\/span><\/p>\n<p>[\/et_pb_text][et_pb_icon _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_icon][\/et_pb_column][et_pb_column type=&#8221;2_3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; 
global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||12px|||&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h4><strong>REFLECT: Tutorial on Reflecting on Bias in LLMs through Human-Centered Perspectives<\/strong><\/h4>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.27.5&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>Large Language Models (LLMs) increasingly shape how people access, produce, and reason with information. Far from being neutral tools, they mirror the data, discourse, and cognitive patterns on which they are trained, often reproducing and amplifying social and cognitive biases that influence what is visible, credible, and valued. Understanding these reflections requires moving beyond technical detection toward examining how bias emerges in LLM outputs, how users perceive and respond to it, and how design choices can reinforce or mitigate its effects. REFLECT offers a human-centered exploration of bias in LLMs, bridging perspectives from computer science, human\u2013computer interaction, and cognitive psychology. This interactive tutorial examines bias as an emergent property of generative models (arising through data, modeling, and interaction processes) and discusses design and interaction strategies that make these reflections visible and open to critical interpretation. 
By the end, participants will be equipped with conceptual and practical tools to identify, analyze, and interpret how LLMs reflect biases, fostering more transparent, accountable, and trustworthy human\u2013AI interactions.<\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][\/et_pb_section]<\/p>\n","protected":false},"excerpt":{"rendered":"<p><div class=\"et_pb_module dsm_text_divider dsm_text_divider_0\">\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t<div class=\"et_pb_module_inner\">\n\t\t\t\t\t<div class=\"dsm-text-divider-wrapper dsm-text-divider-align-center et_pb_bg_layout_light\">\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t<div class=\"dsm-text-divider-before dsm-divider\"><\/div>\n\t\t\t\t<h1 class=\"dsm-text-divider-header et_pb_module_header\"><span>TUTORIALS<\/span><\/h1>\n\t\t\t\t<div class=\"dsm-text-divider-after dsm-divider\"><\/div>\n\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t<\/div>AI4QualHalf-Day TutorialOrganizers He &#8220;Albert&#8221; Zhang, Penn State University Jie Cai, Tsinghua University Jingyi Xie, San Jos\u00e9 State University Chuhao Wu, Clemson University ChanMin Kim, Penn State University John M. Carroll, Penn State UniversityAI4Qual: A Comprehensive Field Guide to LLM-Supported Qualitative ResearchQualitative research is central to understanding human experiences and contextual phenomena. However, it remains labor-intensive, difficult to scale, and challenging to teach consistently. Recent advances in Large Language Models (LLMs) and Multimodal LLMs (MLLMs) are prompting researchers to explore how these technologies can augment qualitative workflows. Despite this, significant gaps persist between technologists&#8217; understanding of qualitative rigor and researchers&#8217; experience deploying AI tools. This tutorial provides a comprehensive, practice-oriented introduction to LLM-supported qualitative research across two key stages: data collection and qualitative analysis. 
In the first stage, participants will learn how LLMs can enhance interview design, generate probes, support interviewer training, adapt tone, and even automate semi-structured interviews. Hands-on exercises will allow participants to create interview guides and conduct AI-assisted mock interviews. The second part focuses on using LLMs for first-cycle coding and thematic development, emphasizing transparency, analytic rigor, and reflexivity. Through guided demonstrations, participants will gain practical skills, a critical [&hellip;]<\/p>\n","protected":false},"author":8,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_et_pb_use_builder":"on","_et_pb_old_content":"","_et_gb_content_width":"","footnotes":""},"class_list":["post-245831","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/iui.acm.org\/2026\/wp-json\/wp\/v2\/pages\/245831","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/iui.acm.org\/2026\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/iui.acm.org\/2026\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/iui.acm.org\/2026\/wp-json\/wp\/v2\/users\/8"}],"replies":[{"embeddable":true,"href":"https:\/\/iui.acm.org\/2026\/wp-json\/wp\/v2\/comments?post=245831"}],"version-history":[{"count":5,"href":"https:\/\/iui.acm.org\/2026\/wp-json\/wp\/v2\/pages\/245831\/revisions"}],"predecessor-version":[{"id":245940,"href":"https:\/\/iui.acm.org\/2026\/wp-json\/wp\/v2\/pages\/245831\/revisions\/245940"}],"wp:attachment":[{"href":"https:\/\/iui.acm.org\/2026\/wp-json\/wp\/v2\/media?parent=245831"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}