**Tentative dates of the review cycle**
Report submission status to IUI workshop chairs
Final go/no-go decision
Camera-ready of workshop summary: January 14, 2019
Notifications to authors: February 15, 2019
Camera-ready of accepted papers: March 20, 2019
Christoph Trattner, University of Bergen, Norway
Denis Parra, Pontifical Catholic University, Chile
Nathalie Riche, Microsoft Research
Both music creation and music listening interfaces rely heavily on, and benefit from, intelligent approaches that enable users to access sound and music in unprecedented ways. This ongoing trend draws on manifold areas, including interactive machine learning, music information retrieval (MIR) and in particular content-based retrieval systems, recommender systems, human-computer interaction, and adaptive systems, to name but a few prominent examples. The 2nd Workshop on Intelligent Music Interfaces for Listening and Creation (MILC 2019) will bring together researchers from these communities and provide a forum for the latest trends in user-centric machine learning and interfaces for music consumption and creation.
The aim of this workshop is to explore new methods and interface/system designs for interactive data analytics and management in various domains, including specialised text collections (e.g. legal, medical, scientific), multimedia, and bioinformatics, as well as for various tasks, such as semantic information retrieval, conceptual organization and clustering of data collections for sensemaking, semantic expert profiling, and document/multimedia recommender systems. The primary audience of the workshop is researchers and practitioners, from both academia and industry, working on interactive and personalised system design as well as interactive machine learning.
This workshop will follow on from the very successful ExSS 2018 workshop held at IUI. It will bring together researchers in academia and industry who have an interest in making smart systems explainable to users, and therefore more intelligible and transparent. This topic has attracted increasing interest, as providing glimpses into the black-box behavior of these systems enables more effective steering or training of the system, better reliability, and improved usability. This workshop will provide a venue for exploring issues that arise in designing, developing, and evaluating smart systems that use or provide explanations of their behavior.
The HUMANIZE workshop aims to provide a venue for scholars to discover and discuss research findings on how to incorporate psychological theory into personalized interfaces. Personalization is often done through mining behavior data for patterns, but many examples exist of improving personalization techniques by incorporating psychological understanding of users in the form of user models. HUMANIZE aims to explore the interface between purely data-driven approaches and approaches that incorporate theoretical knowledge of users of a system.
IoT technologies (e.g. smart homes, m-health, public tracking) are revolutionizing the way we interact with our environments. Designing interfaces for IoT is challenging due to the inherent nature of IoT environments, which accommodate many devices at once. The IUIoT workshop invites researchers working on interface design for Internet of Things environments. This half-day workshop aims to serve as a targeted venue for discussing ongoing research and sharing ideas, and as a potential collaboration venue for researchers working on topics such as interface design for IoT; interaction paradigms for IoT; usability, usage, and adoption studies; privacy and security; user modeling; adaptive IoT systems; voice-controlled systems; and accessibility.
Building sets of complete, correct, and unbiased information – whether it is domain knowledge or training data – is an iterative and ongoing process that is necessary for producing systems that have the requisite knowledge to be effective in their environment. Currently, this is an unintuitive task for users who have little to no knowledge of how the system and its underlying algorithms function. But splitting the process between AI-experts who understand the system and subject matter experts who understand the domain is inefficient and can stagnate systems that need to keep up with rapid changes in the real world where they are meant to operate. This workshop seeks to bring together researchers across different AI spaces to share and discuss the challenges of building interactions for guiding novice users through knowledge collection and model building or training.
Humanity generates many large and complex datasets that tend to be heterogeneous (texts, images, videos, etc.), huge, and rapidly growing and changing across space and time. Hence, dedicated solutions for visualizing such data, together with effective user interfaces that assist users in efficient analysis, need to be proposed and used. Effective data pre-processing and management techniques are also needed for constructing large-scale real-world applications and for investigating complex interaction patterns in such data in order to detect useful knowledge. This workshop aims at sharing the latest progress and developments, current challenges, and potential applications for exploiting large amounts of spatial-temporal data.
Conversational agent systems present an extremely rich and challenging research space for many topics in user awareness and adaptation, such as user profiles, contexts, personalities, emotions, social dynamics, and conversational styles. The user2agent workshop aims to bring together researchers interested in these topics from different communities, including user modeling, HCI, NLP, and ML. Through a focused and open exchange of ideas and discussions, we will work to identify central research topics in user-aware conversational agents and develop an interdisciplinary agenda to address them.
IUI ATEC’s goal is to focus on three principles for combating algorithmic biases that researchers can apply even without access to a given system’s inner workings:
- Awareness: Raise stakeholders’ awareness of the potential for biases and social harms that could result from developing and using a given analytic system.
- Data provenance: Facilitate the exploration of potential biases introduced by the human and automated data-gathering processes used to create training data for algorithmic systems.
- Validation and testing of outputs: Develop rigorous techniques for testing the models and assumptions used in analytic systems, evaluating the potential for social, discriminatory harm.