ACM IUI 2023 Call for Workshop and Tutorial Proposals

In conjunction with the 28th International ACM Conference on Intelligent User Interfaces (ACM IUI 2023)

Sydney, Australia

March 27-31, 2023

https://iui.acm.org/2023/

WORKSHOP AND TUTORIAL CO-CHAIRS

Alison Smith-Renner, Human AI Innovation Team, Dataminr, USA

Paul Taele, Sketch Recognition Lab, Texas A&M University, USA

wt2023@iui.acm.org

REVIEW CYCLE DATES

  • Submissions to workshops: 9 January 2023
  • Status report of workshops to Workshops & Tutorials Chairs: 14 January 2023
  • Final go/no-go decision on workshops: 16 January 2023
  • Notifications to workshop submission authors (recommended): 29 January 2023
  • Notifications to workshop submission authors (latest): 9 February 2023
  • Camera-ready for workshop/tutorial summary: 16 February 2023
  • Workshops & tutorials date: 27 March 2023

ACCEPTED WORKSHOPS AND TUTORIALS

IUI 2023 is pleased to announce the following 8 workshops and 3 tutorials to be held in conjunction with the conference. The goal of the workshops is to provide a venue for presenting research on focused topics of interest and an informal forum to discuss research questions and challenges. Tutorials are designed to provide fundamental knowledge and experience on topics related to intelligent user interfaces, and the intersection between Human-Computer Interaction (HCI) and Artificial Intelligence (AI). Workshops and tutorials will be held on the first day of the conference.

Workshops with few submissions by 14 January 2023 may be cancelled, shortened, merged with other workshops, or otherwise restructured. The organizers of accepted workshops and tutorials are responsible for producing a call for participation and publicizing it, such as distributing the call to relevant newsgroups and electronic mailing lists, especially to potential audiences outside the IUI conference community. Workshop and tutorial organizers will maintain their own website with information about the workshop or tutorial, and the IUI 2023 website will link to it. The workshop organizers will coordinate the paper solicitation, collection, and review process. A workshop and tutorial summary will be included in the ACM Digital Library for IUI 2023, and we will separately publish joint workshop proceedings for accepted workshop submissions (through CEUR or similar).

DECI: Tutorial on Designing Effective Conversational Interfaces

Ujwal Gadiraju
Delft University of Technology
Tahir Abbas
Delft University of Technology

Conversational interfaces have been argued to hold advantages over traditional GUIs because they support more human-like interaction. The rise in popularity of conversational agents has enabled humans to interact with machines more naturally. People are increasingly familiar with technology-mediated conversational interaction due to the widespread use of mobile devices and messaging services such as WhatsApp, WeChat, and Telegram. Today, over half of the world's population has access to the Internet, with ever-lowering barriers to access. This tutorial will showcase the benefits of employing novel conversational interfaces in the domains of human-AI decision making, health and well-being, information retrieval, and crowd computing. We will discuss the potential of conversational interfaces in facilitating and mediating people's interactions with AI systems. The tutorial will include interactive elements and discussions, and provide participants with materials to build conversational interfaces.

HAI-GEN: Workshop on Human-AI Co-Creation with Generative Models

Mary Lou Maher
University of North Carolina, Charlotte
Justin D. Weisz
IBM Research AI
Hendrik Strobelt
IBM Research AI
Lydia B. Chilton
Columbia University
Werner Geyer
IBM Research AI

Recent advances in generative AI through deep learning approaches such as generative adversarial networks (GANs), variational autoencoders (VAEs), and large language models will enable new kinds of user experiences around content creation, across a range of media types (text, images, audio, and video). These advances have enabled content to be produced with an unprecedented level of fidelity, for tasks such as generating faces, prose and poems, deep fake videos of celebrities, music, and even code. In many cases, content generated by generative models is either indistinguishable from human-generated content or could not be produced by human hands. These examples also highlight some of the significant societal, ethical, and organizational challenges generative AI is posing around issues such as security, privacy, ownership, quality metrics, and evaluation of generated content.

HCDxML: Tutorial on Harnessing Design Thinking for Human-Centred Modelling

Lauren Pak
QuantumBlack, AI by McKinsey
Mariana Neves da Silva
PhysicsX
Viktoriia Oliinyk
QuantumBlack, AI by McKinsey
Ismail Ngem
QuantumBlack, AI by McKinsey

Data scientists often pair with analytical translators to inform feature selection for models. Although translators bring their domain expertise, this industry knowledge is primarily focused on business context and operational metrics; a deep understanding of human behavior is lacking. The result is models built on reductive assumptions that do not fully capture human experience. Although features can be tweaked to improve accuracy or prevent overfitting, these tactics only further perpetuate criticisms of AI's bias and lack of inclusivity. Furthermore, human-in-the-loop approaches involving hyperparameter tuning are not accessible to business stakeholders. The current experience for business users translating model recommendations into action is poor, let alone for continuing to track and tune performance. The purpose of this tutorial is to leverage ethnography and explore participatory methods for feature co-creation and model sustainability. Design research, which is objective-led and focused on deeply understanding a given research area, can provide the lens to contextualize behavior holistically and reveal latent human needs. Features can be informed by insight rather than determined by business metrics or SHAP values. This tutorial will look at lessons from industry and explore the ways in which design ethnography and ML best practice can come together to design human-centred models.

ITAH: Workshop on Interactive Technologies for AI in Healthcare

Öznur Alkan
Optum Ireland
Oya Celiktutan
King's College London
Hanan Salam
New York University Abu Dhabi
Marwa Mahmoud
University of Glasgow
Greg Buckley
Optum Ireland
Niamh Phelan
Optum Ireland

AI for healthcare has been a very active research area in recent years. AI can support many processes in healthcare, including but not limited to automatic screening and diagnostic tools, health management applications, administrative workflow automation, clinical documentation, patient outreach, specialized support via image analysis, and medical device automation. Although AI systems have been shown to reduce medical errors and improve patient outcomes, adoption of these systems in practice remains a challenge due to the lack of user-centered design, the lack of personalisation, and the opaqueness of algorithms. Target users for AI systems in the healthcare space include clinicians, patients, and healthcare payers, where clinicians include all practitioners who diagnose, treat, or care for patients. Studies have shown that clinicians are more likely to accept a decision support system if it matches their own decision-making processes, which becomes possible by allowing them to interact with the system and placing the user in the loop. Across different application areas, feedback from not only clinicians but also other target user groups can significantly improve system performance and the end-user experience. From the perspective of target users, it is increasingly desirable that such technologies are tailored to their specific needs and profiles. While modern technologies can offer many benefits, e.g., planning and delivery of clinical care and management of special conditions at home, they still suffer from unidirectional interaction and a one-size-fits-all paradigm. Motivated by these points, we aim to address interactivity in AI solutions for the healthcare domain by bringing together multidisciplinary researchers and practitioners from the AI, healthcare, medicine, and user interaction and experience design communities and by facilitating discussion in this critical space. We aim to cover topics around different means of interactivity in AI solutions for healthcare, challenges associated with the adoption of AI models in the healthcare space from the end-user's perspective, and how human-AI interaction can help build better solutions that lead to a better user experience.

MILC: Workshop on Intelligent Music Interfaces for Listening and Creation

Peter Knees
TU Wien
Alexander Lerch
Georgia Institute of Technology

Today's music ecosystem is permeated by digital technology, from recording to production to distribution to consumption. Intelligent technologies and interfaces play a crucial role during all these steps. On the music creation side, tools and interfaces like new sensor-based musical instruments or software like digital audio workstations (DAWs) and sound and sample browsers support creativity. Generative systems can support novice and professional musicians by automatically synthesizing new sounds or even new musical material. On the music consumption side, tools and interfaces such as recommender systems, automatic radio stations, or active listening applications allow users to navigate the virtually endless spaces of music repositories. Since the workshop's first two editions in Tokyo in 2018 and Los Angeles in 2019, we have witnessed a rapid technical evolution and, reflecting this trend, an increase in the volume of work on interfaces for music listening and creation. In addition to technical developments, we now see further challenges arising in gaining a deeper understanding of user intent, in human-AI co-creation, and in building systems for the automatic curation of generated content. To address these and other challenges, the 3rd Workshop on Intelligent Music Interfaces for Listening and Creation (MILC 2023) will again bring together researchers from the communities of music information retrieval (MIR), in particular content-based retrieval, as well as recommender systems, machine learning, human-computer interaction, adaptive systems, and beyond.

Multi-Criteria Decision Making and Recommender Systems: Tutorial

Yong Zheng
Illinois Institute of Technology
David (Xuejun) Wang
Morningstar, Inc.

Recommender systems (RS) can assist users' decision making by recommending a list of items tailored to user preferences. These preferences may be captured from different aspects of the items (i.e., criteria). Take hotel recommendations, for example: a user may consider location, safety, room cleanliness, etc. In a movie recommendation scenario, a user may take story, visual effects, directing, and movie stars into account. Clearly, multi-criteria decision making (MCDM) is involved in the decision and recommendation process, though the criteria are not always explicitly available. MCDM is well developed, especially in the business and finance areas. Multi-criteria recommender systems (MCRS) have also been proposed and developed to serve recommendations in hotel booking (e.g., TripAdvisor.com), restaurant reservation (e.g., OpenTable), and movie watching (e.g., Yahoo!Movies). However, there is a gap between MCDM and MCRS: the knowledge and skills in MCDM have not been fully utilized to help build better MCRS. In addition, human-centric MCDM has been examined in different applications, but there are no such human-centric evaluations in MCRS. In this tutorial, we deliver knowledge, skills, and existing developments in MCDM and MCRS techniques, and offer an open discussion of future development and human-centric evaluation in MCRS.
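
As a minimal, illustrative sketch (not part of the tutorial materials), the following Python snippet shows one simple MCDM aggregation strategy for recommendation: a weighted sum over hypothetical per-criterion hotel ratings, where the weights stand in for a user's preferences. All data, names, and the choice of aggregation function are assumptions made here for illustration only.

    # Hypothetical per-criterion ratings (1-5) for three hotels.
    ratings = {
        "Hotel A": {"location": 5, "safety": 4, "cleanliness": 3},
        "Hotel B": {"location": 3, "safety": 5, "cleanliness": 5},
        "Hotel C": {"location": 4, "safety": 3, "cleanliness": 4},
    }

    # Hypothetical user preferences: weights over criteria (sum to 1).
    weights = {"location": 0.5, "safety": 0.3, "cleanliness": 0.2}

    def overall_score(criteria_ratings, w):
        # Weighted-sum aggregation, one simple MCDM strategy among many.
        return sum(w[c] * r for c, r in criteria_ratings.items())

    # Rank hotels by aggregated score and recommend the top one.
    ranked = sorted(ratings, key=lambda h: overall_score(ratings[h], weights), reverse=True)
    print("Recommended:", ranked[0])

In a real MCRS, the per-criterion ratings would typically come from user feedback or review data, and the aggregation function itself is often learned from data rather than fixed in advance.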

SHAI 2023: Workshop on Designing for Safety in Human-AI Interactions

Nitesh Goyal
Google Research
Sungsoo Ray Hong
George Mason University
Regan L. Mandryk
University of Saskatchewan
Toby Jia-Jun Li
University of Notre Dame
Kurt Luther
Virginia Polytechnic Institute and State University
DaKuo Wang
IBM Research

Generative ML models have unprecedented potential to produce unsafe outcomes and harms at a volume that can be incredibly challenging to manage during human-AI interactions. Despite best intentions, inadvertent outcomes might accrue, leading to harms, especially to marginalized groups in society. On the other hand, those motivated and skilled at causing harm might be able to perpetrate even deeper harms. Our workshop is aimed at practitioners and academic researchers at the intersection of AI and HCI who are interested in understanding these socio-technical challenges and in identifying opportunities to address them collaboratively.

SketchRec: Workshop on Sketch Recognition

Rachel Blagojevic
Massey University
Paul Taele
Texas A&M University
Tracy Hammond
Texas A&M University
Josh Cherian
Texas A&M University
Jung In Koh
Texas A&M University
Samantha Ray
Texas A&M University

Sketch recognition is the interpretation of hand-drawn diagrams; it seeks to understand the user's intent while allowing them to draw unconstrained diagrams. Sketch recognition research has been ongoing for approximately half a century and has advanced iteratively due to the difficulty of the problem. As pen- and touch-capable devices such as smartphones, tablets, touch-driven monitors, and large touchscreen devices have become ubiquitous, and as emergent technologies such as virtual and augmented reality become more advanced, sketch recognition remains an open field for researchers exploring the continuing interaction and recognition challenges these technologies pose. The Workshop on Sketch Recognition aims to share and discuss state-of-the-art innovations and challenges in IUI research topics related to sketch interaction and recognition. We especially focus on highlighting research contributions and engaging in healthy dialogue on topics pertaining to sketch recognition user interfaces and techniques.

SOCIALIZE: Workshop on Social and Cultural Integration with Personalized Interfaces

Fabio Gasparetti
Roma Tre University
Cristina Gena
University of Torino
Giuseppe Sansonetti
Roma Tre University
Marko Tkalčič
University of Primorska

The SOCIALIZE workshop aims to bring together all those interested in the development of interactive techniques that may contribute to fostering the social and cultural inclusion of a broad range of users. More specifically, we intend to attract research that takes into account the interaction peculiarities typical of different realities, with a focus on disadvantaged and at-risk categories (e.g., refugees and migrants) and vulnerable groups (e.g., children, the elderly, and autistic and disabled people). Among others, we are also interested in human-robot interaction techniques aimed at developing social robots, that is, autonomous robots that interact with people by exhibiting the socially affective behaviors, abilities, and rules associated with their collaborative role.