Conference Program

For the interactive program with full details, please refer to the program on SIGCHI's website. The program at a glance is shown below.

Workshops, Tutorials, and Doctoral Consortium - Monday March 24

11:00 - 11:30 AM, Coffee Break | 1:00 - 2:30 PM, Lunch Break | 4:00 - 4:30 PM, Coffee Break | 6:00 PM, Closing

Room | Morning | Afternoon
T4 | HAI-GEN 2025 | HAI-GEN 2025
T1a | HealthIUI | AXAI
T1b | MIND | MIND
T1c | BEHAVE AI | BEHAVE AI
T3a | SOCIALIZE | STEP-HAI
T3b | Doct. Cons. | Doct. Cons.
T8 | DECI | DECI

Each paper slot consists of a 10-minute presentation followed by 5 minutes for questions. Sessions listed within the same cell (e.g., a T1 session and a T3 session) run in parallel.

Main Conference (March 25-27)

6:00 PM, Tuesday 25 – Welcome Cocktail
7:30 PM, Wednesday 26 – Social Dinner

Time | Tuesday, March 25th | Wednesday, March 26th | Thursday, March 27th
8:30 AM - 8:45 AM | Registration
8:45 AM - 9:00 AM | Welcome
9:00 AM - 10:00 AM | Keynote #1: Albrecht Schmidt (Chair: Kaisa Väänänen) | Impact paper: Jerry Fails; Panel: M. Burnett, E. Churchill, K. Gajos (Chair: Fabio Paternò) | Keynote #2: Q. Vera Liao (Chair: Toby Li)
10:05 AM - 11:05 AM | T1 AR/VR (4); T3 TiiS papers (4) | Coffee break + Posters and Demos | T1 Democratisation of AI (3); T3 User adaptation (4)
11:05 AM - 11:25 AM | Coffee break | Coffee break + Posters and Demos | Coffee break
11:25 AM - 12:55 PM | T1 Interactive ML (6); T3 Multimodal AI (4) | T1 Generative models (5); T3 User studies (5 + 1 TiiS) | T1 Collaboration and interaction (4); T3 Recommendation (5)
12:55 PM - 2:25 PM | Lunch
2:25 PM - 4:25 PM | T1 XAI methods (7); T3 Knowledge-based approaches (7) | T1 LLM 1 (7); T3 XAI methods 2 (7) | T1 LLM 2 (6+1); T3 Video Presentation (12)
4:25 PM - 4:45 PM | Coffee break
4:45 PM - 5:45 PM | T1 Visualisation (4); T3 User modeling (4) | Posters and Demos | Town hall and Closing
6:00 PM | Welcome Cocktail | |
7:30 PM | | Social Dinner |

Session List

  • AR/VR
    • GlideRX: Enhancing Situation Awareness for Collision Prevention in Glider Flight through Extended Reality
    • A picture is worth a thousand words? Investigating the Impact of Image Aids in AR on Memory Recall for Everyday Tasks
    • KHAIT: K-9 Handler Artificial Intelligence Teaming for Collaborative Sensemaking
    • Video2MR: Automatically Generating Mixed Reality 3D Instructions by Augmenting Extracted Motion from 2D Videos
  • TiiS Papers
    • "I Want It That Way": Enabling Interactive Decision Support Using Large Language Models and Constraint Programming
    • "It would work for me too": How Online Communities Shape Software Developers’ Trust in AI-Powered Code Generation Tools
    • Ajna: A Wearable Shared Perception System for Extreme Sensemaking
    • 🌳-generAItor: Tree-in-the-loop Text Generation for Language Model Explainability and Adaptation
  • Democratisation of AI
    • DeepFlow: A Flow-Based Visual Programming Tool for Deep Learning Development
    • More than Marketing? On the Information Value of AI Benchmarks for Practitioners
    • AiModerator: A Co-Pilot for Hyper-Contextualization in Political Debate Video
  • User Adaptation
    • An Exploratory Study on How AI Awareness Impacts Human-AI Design Collaboration
    • Controlling AI Agent Participation in Group Conversations: A Human-Centered Approach
    • ShareFlows: Seamless knowledge capture and proactive push for efficient teacher workflows in higher education
    • Under the Hood of Carousels: Investigating User Engagement and Navigation Effort in Multi-list Recommender Systems
  • Interactive ML
    • Cluster-Based Approach for Visual Anomaly Detection in Multivariate Welding Process Data Supported by User Guidance
    • Interoceptive Objects: opportunistic harnessing of internal signals for state self-monitoring
    • HEPHA: A Mixed-Initiative Image Labeling Tool for Specialized Domains
    • Empowering Medical Data Labeling for Non-Experts with DANNY: Enhancing Accuracy and Mitigating Over-Reliance on AI
    • Towards Trustable Intelligent Clinical Decision Support Systems: A User Study with Ophthalmologists
    • User-Guided Correction of Reconstruction Errors in Structure-from-Motion
  • Multimodal AI
    • ILuvUI: Instruction-tuned LangUage-Vision modeling of UIs from Machine Conversations
    • Text-to-Image Generation for Vocabulary Learning Using the Keyword Method
    • TellTime: An AI-Augmented Calendar with a Voice Interface for Collecting Time-Use Data
    • MemPal: Leveraging Multimodal AI and LLMs for Voice-Activated Object Retrieval in Homes of Older Adults
  • Generative Models
    • Diffuse Your Data Blues: Augmenting Low-Resource Datasets via User-Assisted Diffusion
    • Interactive High-Quality Skin Lesion Generation using Diffusion Models for VR-based Dermatological Education
    • Unequal Opportunities: Examining the Bias in Geographical Recommendations by Large Language Models
    • Exploring the Design Space of Cognitive Engagement Techniques with AI-Generated Code for Enhanced Learning
    • Simulating Cooperative Prosocial Behavior with Multi-Agent LLMs: Evidence and Mechanisms for AI Agents to Inform Policy Decisions
  • User Studies
    • Slip Through the Chat: Subtle Injection of False Information in LLM Chatbot Conversation Increases False Memory Formation
    • TSConnect: An Enhanced MOOC Platform for Bridging Communication Gaps Between Instructors and Students in Light of the Curse of Knowledge
    • ArtInsight: Enabling AI-Powered Artwork Engagement for Mixed Visual-Ability Families
    • Evaluating the Impact of Automated Hints in a 3D Educational Escape Game: A Comparative Study of Accessibility and Computer Science Versions
    • Counselor-AI Collaborative Transcription and Editing System for Child Counseling Analysis
    • Measuring User Experience Inclusivity in Human-AI Interaction via Five User Problem-Solving Styles
  • Collaboration and Interaction
    • One Does Not Simply Meme Alone: Evaluating Co-Creativity Between LLMs and Humans in the Generation of Humor
    • MaGEL: A Soft, Transparent Input Device Enabling Deformation Gesture Recognition
    • 3D Touch Force Estimation from Capacitive Images
    • Real-Time Full-body Interaction with AI Dance Models: Responsiveness to Contemporary Dance
  • Recommendation
    • FretMate: ChatGPT-Powered Adaptive Guitar Learning Assistant
    • Prefer2SD: A Human-in-the-Loop Approach to Balancing Similarity and Diversity in In-Game Friend Recommendations
    • Orbit: A Framework for Designing and Evaluating Multi-objective Rankers
    • Words as Bridges: Exploring Computational Support for Cross-Disciplinary Translation Work
    • Can LLMs Recommend More Responsible Prompts?
  • XAI Methods (1/2)
    • SkinGEN: an Explainable Dermatology Diagnosis-to-Generation Framework with Interactive Vision-Language Models
    • Robust Relatable Explanations of Machine Learning with Disentangled Cue-specific Saliency
    • From Oracular to Judicial: enhancing clinical decision making through contrasting explanations and a novel interaction protocol
    • Less or More: Towards Glanceable Explanations for LLM Recommendations Using Ultra-Small Devices
    • The Influence of Curiosity Traits and On-Demand Explanations in AI-Assisted Decision-Making
    • Building Appropriate Mental Models: What Users Know and Want to Know about an Agentic AI Chatbot
    • Counterfactual Explanations May Not Be the Best Algorithmic Recourse Approach
  • Knowledge-based Approaches
    • Dynamik: Syntactically-Driven Dynamic Font Sizing for Emphasis of Key Information
    • CGAT-Net: Context-Aware Graph Attention Transformer Network for Scene Sketch Recognition
    • Text-to-SQL Domain Adaptation via Human-LLM Collaborative Data Annotation
    • To Guide Or To Disturb - How To Teach Dexterous Skills Using AI?
    • A Design Space for Intelligent Dialogue Augmentation
    • DreamDirector: Designing a Generative AI System to Aid Therapists in Treating Clients’ Nightmares
    • Technologies Supporting Self-Reflection on Social Interactions: A Systematic Review
  • LLM 1
    • Limitations of the LLM-as-a-Judge Approach for Evaluating LLM Outputs in Expert Knowledge Tasks
    • Advancing Affective Intelligence in Virtual Agents Using Affect Control Theory
    • Lotus: Creating Short Videos From Long Videos With Abstractive and Extractive Summarization
    • EditIQ: Automated Cinematic Editing of Static Wide-Angle Videos via Dialogue Interpretation and Saliency Cues
    • A Framework for Efficient Development and Debugging of Role-Playing Agents with Large Language Models
    • Mental Models of Generative AI Chatbot Ecosystem
    • From Interaction to Impact: Towards Safer AI Agent Through Understanding and Evaluating Mobile UI Operation Impacts
  • XAI Methods (2/2)
    • Is Conversational XAI All You Need? Human-AI Decision Making With a Conversational XAI Assistant
    • Personalising AI assistance based on overreliance rate in AI-assisted decision making
    • How Dynamic vs. Static Presentation Shapes User Perception and Emotional Connection to Text-Based AI
    • Benefits of Machine Learning Explanations: Improved Learning in an AI-assisted Sequence Prediction Task
    • VibE: A Visual Analytics Workflow for Semantic Error Analysis of CVML Models at Subgroup Level
    • Evaluating the Impact of AI-Generated Visual Explanations on Decision-Making for Image Matching
    • Will Health Experts Adopt a Clinical Decision Support System for Game-Based Digital Biomarkers? Investigating the Impact of Different Explanations on Perceived Ease-of-Use, Perceived Usefulness, and Trust
  • LLM 2
    • CLEAR: Towards Contextual LLM-Empowered Privacy Policy Analysis and Risk Generation for Large Language Model Applications
    • “You Always Get an Answer”: Analyzing Users’ Interaction with AI-Generated Personas Given Unanswerable Questions and Risk of Hallucination
    • PromptMap: An Alternative Interaction Style for AI-Based Image Generation
    • SimTube: Simulating Audience Feedback on Videos using Generative AI and User Personas
    • Enhancing Visitor Engagement in Interactive Art Exhibitions with Visual-Enhanced Conversational Agents
    • DancingBoard: Streamlining the Creation of Motion Comics to Enhance Narratives
    • A Prompt Chaining Framework for Long-Term Recall in LLM-Powered Intelligent Assistant
  • Video Presentation
    • Enhancing Immersive Sensemaking with Gaze-Driven Recommendation Cues
    • A Design Space of Behavior Change Interventions for Responsible Data Science
    • Designing LLM-simulated Immersive Spaces to Enhance Autistic Children's Social Affordances Understanding in Traffic Settings
    • VideoMix: Aggregating How-To Videos for Task-Oriented Learning
    • A Dynamic Bayesian Network Based Framework for Multimodal Context-Aware Interactions
    • SAE: A Multimodal Sentiment Analysis Large Language Model
    • Navigating the Unknown: A Chat-Based Collaborative Interface for Personalized Exploratory Tasks
    • NoTeeline: Supporting Real-Time, Personalized Notetaking with LLM-Enhanced Micronotes
    • Can VTA Empower Learners to Ask Critical Questions?
    • Help Wanted – or Not: Bridging the Empathy Gap between Wheelchair Users and Passersby through AI-Mediated Communication with Politeness Strategies
    • Gensors: Authoring Personalized Visual Sensors with Multimodal Foundation Models and Reasoning
    • Authoring LLM-Based Assistance for Real-World Contexts and Tasks
    • Conversational Explanations: Discussing Explainable AI with Non-AI Experts
    • CoPrompter: User-Centric Evaluation of LM Instruction Alignment for Improved Prompt Engineering
  • Visualisation
    • StratIncon Detector: Analyzing Strategy Inconsistencies Between Real-Time Strategy and Preferred Professional Strategy in MOBA Esports
    • Guidance Source Matters: How Guidance from AI, Expert, or a Group of Analysts Impacts Visual Data Preparation and Analysis
    • Pluto: Authoring Semantically Aligned Text and Charts for Data-Driven Communication
    • Analyzing the Shifts in Users' Data Focus in Exploratory Visual Analysis
  • User Modeling
    • Crying Jaywalker! Notifying Take-Over-Requests and Critical Events in Operational Driving Domain of Autonomous Vehicles via Multimodal Interfaces
    • Redefining Affordance via Computational Rationality
    • The Effects of Customisation on the Usability of Visual Analytics Dashboards: the Good, the Bad, and the Ugly
    • Coalesce: An Accessible Mixed-Initiative System for Designing Community-Centric Questionnaires