This year's conference will be completely virtual using Whova.
IUI 2021 is pleased to announce the following 7 workshops to be held in conjunction with the conference. The goal of the workshops is to provide a venue for presenting research on focused topics of interest and an informal forum to discuss research questions and challenges. Workshops will be held on the first day of the conference.
An extended abstract with a summary of the workshop goals and an overview of the workshop topics will be included in the ACM Digital Library for IUI 2021.
At the discretion of the workshop organizers, we will arrange a joint volume of online proceedings for the workshop papers. Workshop organizers should ask authors to submit papers in the ACM SIGCHI Paper Format.
Workshops with few submissions by January 3, 2021 may be cancelled, shortened, merged with other workshops, or otherwise restructured. Such decisions will be made by the IUI 2021 workshop chairs in consultation with the workshop organizers.
This multidisciplinary workshop on Healthy Interfaces (HEALTHI) offers a forum that brings together academic and industry researchers and seeks submissions broadly related to the design of smart user interfaces for promoting health. It builds on the fields of psychology, behavioural health, human-computer interaction, ubiquitous computing, and artificial intelligence. The workshop aims to discuss topics related to intelligent user interfaces, such as personalized and adaptive displays, wearable devices, and voice and conversational assistants, in the context of supporting health, healthy behavior, and wellbeing.
The SOCIALIZE workshop aims to bring together all those interested in developing interactive techniques that may help foster the social and cultural inclusion of a broad range of users. More specifically, we intend to attract research that takes into account the interaction peculiarities typical of different realities, with a focus on disadvantaged and at-risk categories (e.g., refugees and migrants) and vulnerable groups (e.g., children, the elderly, autistic and disabled people). Among others, we are also interested in human-robot interaction techniques aimed at developing social robots, that is, autonomous robots that interact with people by engaging in the social-affective behaviors, abilities, and rules associated with their collaborative role.
HUMANIZE aims to investigate how intelligent, adaptive systems can benefit from combining quantitative, data-driven approaches with qualitative, theory-driven approaches. In particular, we invite work from researchers who incorporate features grounded in psychological theory (e.g., personality, cognitive styles) into the predictive models underlying their adaptive/intelligent systems (e.g., recommender systems, website morphing). Beyond research investigating how this approach can improve such systems, we are interested in its potential to improve the explainability, fairness, and transparency of intelligent systems and to reduce bias in their data or output.
Our workshop brings together the Intelligent User Interface (IUI) and Conversational User Interface (CUI) research communities to map out the theoretical and methodological challenges in designing and evaluating CUIs. Whilst CUI use continues to grow, significant challenges remain in establishing theoretical and methodological approaches for researching CUI interactions. These include assessing the impact of interface design on user behaviours and perceptions, developing design guidelines, understanding the role of personalisation, and addressing issues around ethics and privacy. We invite the submission of short (3-6 page) position papers or, if you'd prefer to attend without submitting a paper, short position statements (1 page).
The workshop focuses on systems that personalize, summarize, and visualize data to support interactive information seeking and information discovery, along with tools that enable user modeling and methods for incorporating user needs and preferences into both analytics and visualization. Our aim is to bring together researchers and practitioners working on different personalization aspects and applications of exploratory search and interactive data analytics. This will allow us to achieve four goals: (1) propose new strategies for systems that need to convey the rationale behind their decisions or inferences and the sequence of steps that leads to specific (search) results; (2) develop new user modeling and personalization techniques for exploratory search and interactive data analytics; (3) develop a common set of design principles for this type of system across platforms, contexts, users, and applications; and (4) develop a set of evaluation metrics for personalization in exploratory search.
Recent advances in deep learning approaches to generative AI, such as generative adversarial networks (GANs), variational autoencoders (VAEs), and language models, will enable new kinds of user experiences around content creation. These advances have enabled content to be produced with an unprecedented level of fidelity: in many cases, content generated by generative models is either indistinguishable from human-generated content or could not be produced by human hands. We believe that people skilled within their creative domain can realize great benefits by incorporating generative models into their own work: as a source of inspiration, as a tool for manipulation, or as a creative partner. However, recent deepfake examples involving prominent business leaders highlight the significant societal, ethical, and organizational challenges that generative AI poses around issues such as security, privacy, and ownership.
The goal of this workshop is to bring together researchers and practitioners from the HCI and AI communities to establish a joint community that deepens our understanding of the human-AI co-creative process and explores the opportunities and challenges of creating powerful user experiences with deep generative models. We envision that the user experience of creating both physical and digital artifacts will become a partnership between people and AI: people will take on the roles of specification, goal setting, steering, high-level creativity, curation, and governance, whereas AI will augment human abilities through inspiration, creativity, low-level detail work, and the ability to design at scale. The central questions of our workshop are: How can we build co-creative systems that make people feel they have “creative superpowers”? How will user needs drive the development of generative AI algorithms, and how can the capabilities of generative models be leveraged to create effective co-creative user experiences?
Smart algorithmic systems that apply complex reasoning to make decisions, such as decision support or recommender systems, are difficult for people to understand. Algorithms allow rich and varied data sources to be exploited in support of human decision-making; however, there are increasing concerns surrounding their fairness, bias, and accountability, as these processes are typically opaque to users. The transparency and accountability of algorithmic systems have attracted increasing interest as routes toward more effective system training, better reliability, appropriate trust, and improved usability. The workshop on Transparency and Explanations in Smart Systems (TExSS) provides a venue for exploring issues that arise when designing, developing, or evaluating transparent intelligent user interfaces, with an additional focus on explaining systems and models toward ensuring fairness and social justice. It is a place for researchers and practitioners to meet and exchange ideas on how to make algorithmic systems more transparent, fair, and accountable.