Call for Workshops

Deadlines (AoE)

  • Workshops
      • Proposal - Sept 22, 2020
      • Decisions Sent - Oct 7, 2020
      • Submissions to Workshop - Jan 15, 2021
      • Camera Ready - Feb 15, 2021

Workshop Chairs

Dorota Glowacka
University of Helsinki
Vinayak Krishnamurthy
Texas A&M University

Accepted Workshops

This year's conference will be completely virtual using Whova.

IUI 2021 is pleased to announce the following 7 workshops to be held in conjunction with the conference. The goal of the workshops is to provide a venue for presenting research on focused topics of interest and an informal forum to discuss research questions and challenges. Workshops will be held on the first day of the conference.

An extended abstract with a summary of the workshop goals and an overview of the workshop topics will be included in the ACM Digital Library for IUI 2021.

If the workshop organizers wish, we will arrange a joint volume of online proceedings for the workshop papers. Workshops should ask authors to submit papers in the ACM SIGCHI Paper Format.

Workshops with few submissions by January 3, 2021 may be cancelled, shortened, merged with other workshops, or otherwise restructured. Any such changes will be decided jointly by the IUI 2021 workshop chairs and the workshop organizers.

Healthy Interfaces (HEALTHI)

Michael Sobolev
Cornell Tech, Northwell Health
Katrin Hänsel
Cornell Tech, Northwell Health
Tanzeem Choudhury
Cornell Tech

This multidisciplinary workshop on Healthy Interfaces (HEALTHI) offers a forum that brings together academic and industry researchers, and it seeks submissions broadly related to the design of smart user interfaces for promoting health. It builds on the fields of psychology, behavioral health, human-computer interaction, ubiquitous computing, and artificial intelligence. The workshop aims to discuss intelligent user interfaces such as personalized and adaptive displays, wearable devices, and voice and conversational assistants in the context of supporting health, healthy behavior, and wellbeing.

SOcial and Cultural IntegrAtion with PersonaLIZEd Interfaces (SOCIALIZE)

Francesco Agrusti
Roma Tre University
Fabio Gasparetti
Roma Tre University
Cristina Gena
University of Torino
Giuseppe Sansonetti
Roma Tre University
Marko Tkalčič
University of Primorska

The SOCIALIZE workshop aims to bring together all those interested in the development of interactive techniques that may contribute to fostering the social and cultural inclusion of a broad range of users. More specifically, we intend to attract research that takes into account the interaction peculiarities typical of different realities, with a focus on disadvantaged and at-risk categories (e.g., refugees and migrants) and vulnerable groups (e.g., children, the elderly, and autistic and disabled people). Among other topics, we are also interested in human-robot interaction techniques aimed at the development of social robots, that is, autonomous robots that interact with people by engaging in the social-affective behaviors, abilities, and rules associated with their collaborative role.

Transparency and Explainability in Adaptive Systems through User Modeling Grounded in Psychological Theory - HUMANIZE

Mark Graus
Maastricht University
Bruce Ferwerda
Jönköping University
Marko Tkalčič
University of Primorska
Panagiotis Germanakos
Intelligent Enterprise Group SAP SE, Germany
University of Cyprus

HUMANIZE aims to investigate how intelligent, adaptive systems can benefit from combining quantitative, data-driven approaches with qualitative, theory-driven approaches. In particular, we invite work from researchers who incorporate features grounded in psychological theory (e.g., personality, cognitive styles) into the predictive models underlying their adaptive/intelligent systems (e.g., recommender systems, website morphing). Beyond research investigating how this approach can improve such systems, we are interested in research on its potential to improve explainability, fairness, and transparency and to reduce bias in the data or output of intelligent systems.

CUI@IUI: Theoretical and Methodological Challenges in Intelligent Conversational User Interface Interactions

Philip R Doyle
University College Dublin
Daniel John Rough
University of Dundee
Leigh Clark
Swansea University
Martin Porcheron
Swansea University
Benjamin R. Cowan
University College Dublin
Justin Edwards
University College Dublin
Stephan Schlögl
MCI Management Center Innsbruck
Minha Lee
Eindhoven University of Technology
Cosmin Munteanu
University of Toronto Mississauga
Christine Murad
University of Toronto
Jaisie Sin
University of Toronto
María Inés Torres
Universidad del Pais Vasco
Matthew Peter Aylett
CereProc Ltd.
Heloisa Candello
IBM Research

Our workshop brings together the Intelligent User Interface (IUI) and Conversational User Interface (CUI) research communities to map out the theoretical and methodological challenges in designing and evaluating CUIs. Whilst CUI use continues to grow, significant challenges remain in establishing theoretical and methodological approaches for researching CUI interactions. These include assessing the impact of interface design on user behaviours and perceptions, developing design guidelines, understanding the role of personalisation, and addressing issues around ethics and privacy. We invite the submission of short (3-6 page) position papers or, if you’d prefer to attend without submitting a paper, short position statements (1 page).

Fourth Workshop on Exploratory Search and Interactive Data Analytics (ESIDA)

Dorota Glowacka
University of Helsinki
Evangelos Milios
Dalhousie University
Axel J. Soto
CONICET, DCIC-UNS
Fernando V. Paulovich
Dalhousie University
Denis Parra
Pontificia Universidad Católica
Osnat Mokryn
University of Haifa

The workshop focuses on systems that personalize, summarize, and visualize data to support interactive information seeking and discovery, along with tools that enable user modeling and methods for incorporating user needs and preferences into both analytics and visualization. Our aim is to bring together researchers and practitioners working on different personalization aspects and applications of exploratory search and interactive data analytics. This will allow us to achieve four goals: (1) propose new strategies for systems that need to convey the rationale behind their decisions or inferences, and the sequence of steps that lead to specific (search) results; (2) develop new user modeling and personalization techniques for exploratory search and interactive data analytics; (3) develop a common set of design principles for this type of system across platforms, contexts, users, and applications; (4) develop a set of evaluation metrics for personalization in exploratory search.

2nd Workshop on Human-AI Co-Creation with Generative Models (HAI-GEN 2021)

Werner Geyer
IBM Research AI
Lydia B. Chilton
Columbia University
Justin D. Weisz
IBM Research AI
Mary Lou Maher
University of North Carolina, Charlotte

Recent advances in deep learning approaches to generative AI, such as generative adversarial networks (GANs), variational autoencoders (VAEs), and language models, are enabling new kinds of user experiences around content creation. These advances allow content to be produced with an unprecedented level of fidelity: in many cases, content generated by generative models is either indistinguishable from human-generated content or could not have been produced by human hands. We believe that people skilled in their creative domain can realize great benefits by incorporating generative models into their own work: as a source of inspiration, as a tool for manipulation, or as a creative partner. However, recent deep-fake examples involving prominent business leaders highlight the significant societal, ethical, and organizational challenges generative AI poses around issues such as security, privacy, and ownership.

The goal of this workshop is to bring together researchers and practitioners from the HCI and AI communities to establish a joint community, deepen our understanding of the human-AI co-creative process, and explore the opportunities and challenges of creating powerful user experiences with deep generative models. We envision that the user experience of creating both physical and digital artifacts will become a partnership between people and AI: people will take on specification, goal setting, steering, high-level creativity, curation, and governance, whereas AI will augment human abilities through inspiration, creativity, low-level detail work, and the ability to design at scale. The central questions of our workshop are: how can we build co-creative systems that make people feel they have “creative superpowers”? How will user needs drive the development of generative AI algorithms, and how can the capabilities of generative models be leveraged to create effective co-creative user experiences?

Transparency and Explanations in Smart Systems (TExSS)

Alison Smith-Renner
Machine Learning Visualization Lab
DAC/WBB
Styliani Kleanthous Loizou
Cyprus Centre for Algorithmic Transparency
Open University of Cyprus
Jonathan Dodge
Oregon State University
Casey Dugan
IBM Research
Min Kyung Lee
University of Texas at Austin
Brian Y Lim
National University of Singapore
Tsvi Kuflik
University of Haifa
Advait Sarkar
Microsoft Research
Avital Shulner-Tal
University of Haifa
Simone Stumpf
Centre for HCI Design
City, University of London

Smart algorithmic systems that apply complex reasoning to make decisions, such as decision support or recommender systems, are difficult for people to understand. Algorithms allow the exploitation of rich and varied data sources to support human decision-making; however, there are increasing concerns surrounding their fairness, bias, and accountability, as these processes are typically opaque to users. Transparency and accountability of algorithmic systems have attracted increasing interest as routes toward more effective system training, better reliability, appropriate trust, and improved usability. The workshop on Transparency and Explanations in Smart Systems (TExSS) provides a venue for exploring issues that arise when designing, developing, or evaluating transparent intelligent user interfaces, with an additional focus on explaining systems and models toward ensuring fairness and social justice. It is a place for researchers and practitioners to meet and exchange ideas on how to make algorithmic systems more transparent, fair, and accountable.