KEYNOTE SPEAKERS

This year's conference will be completely virtual using Whova.

Accessibility as an Opportunity and Challenge for Intelligent User Interfaces

Meredith Morris
Director, People + AI Research
Google Research

Time

April 14, 2021, 12:30 PM CDT

About

The World Health Organization estimates that over one billion people worldwide are disabled. Innovations at the intersection of AI and HCI have the potential to increase the accessibility of the digital and physical worlds for people experiencing long-term, temporary, and/or situational disabilities. Considering accessibility scenarios can illuminate opportunities and challenges for designers of intelligent user interfaces. In this keynote, I will use two scenarios to illustrate this concept: automatic alt text generation for images and augmentative and alternative communication technologies.

Alternative text (“alt text”) descriptions can be read aloud by screen reader software to make images accessible to people who are blind or have low vision. However, many content authors fail to provide alt text metadata, leaving billions of digital images inaccessible to screen reader users. Advances in vision-to-language technologies offer promise for scaling the accessibility of digital imagery, but they also present challenges, such as providing user-understandable error metrics and selecting which details are relevant to describe.

Augmentative and alternative communication (“AAC”) technologies facilitate communication for people with speech disabilities. Many users with extremely limited mobility rely on eye gaze input to control AAC, typically resulting in a limited communication bandwidth of 10–20 words per minute (compared with nearly 200 words per minute for spoken English). Advances in predictive language technologies have the potential to enhance the speed and expressivity of AAC communications, but they present challenges around preserving user autonomy and authenticity.

For both scenarios (automatic alt text and predictive AAC), I will share research on end-user preferences that can inform technology design, as well as present novel prototypes that combine human and machine intelligence to support these user needs. I will close by identifying opportunities for future work at the intersection of intelligent user interfaces and accessibility.

Equitable AI: Using AI to Achieve Diversity in Admissions

Juan Gilbert
Department Chair
University of Florida

Time

April 15, 2021, 9:00 AM CDT

About

It has been nearly 20 years since the U.S. Supreme Court ruled on the use of race/ethnicity, gender, and national origin in university admissions in the University of Michigan cases. It is now 2021, and universities are still struggling with how to diversify their admissions offers within the bounds of the law. In response to this ongoing issue, I created Applications Quest, an equitable AI tool that adheres to the legal use of race, gender, national origin, etc. in admissions and hiring decisions. In this keynote address, I will explain how AI, specifically Applications Quest, can be used to create equitable recommendations for admissions decisions. I will demonstrate how Applications Quest increases diversity compared to admissions committees while achieving the same levels of academic achievement as the committees. Because Applications Quest is an unsupervised AI, it has the advantage of being ignorant of race/ethnicity, gender, national origin, etc. Therefore, when used in admissions, it provides unbiased recommendations that can be interrogated by human evaluators. Applications Quest is a human-centered AI tool for achieving equity in admissions.

Achieving Health Equity: The Power & Pitfalls of Intelligent Interfaces

Andrea G. Parker, Ph.D.
Associate Professor
Georgia Tech

Time

April 16, 2021, 9:00 AM CDT

About

Digital health research—the investigation of how technology can be designed to support wellbeing—has exploded in recent years. Much of this innovation has stemmed from advances in the fields of human-computer interaction and artificial intelligence. A growing segment of this work is examining how information and communication technologies (ICTs) can be used to achieve health equity, that is, fair opportunities for all people to live a healthy life. Such advances are sorely needed, as there exist large disparities in morbidity and mortality across population groups. These disparities are due in large part to social determinants of health, that is, social, physical, and economic conditions that disproportionately inhibit wellbeing in populations such as low-socioeconomic status and racial and ethnic minority groups.

Despite years of digital health research and commercial innovation, profound health disparities persist. In this talk, I will argue that to reduce health disparities, ICTs must address social determinants of health. Intelligent interfaces have much to offer in this regard, and yet their affordances—such as the ability to deliver personalized health interventions—can also act as pitfalls. For example, a focus on personalized health interventions has led to the design of various interfaces focused on individual-level behavior change. While such innovations are important, achieving health equity also requires complementary systems that address social relationships. Social ties are a crucial point of focus for digital health research, as they can provide meaningful support for positive health, especially in populations that disproportionately experience health barriers. I will offer a vision for health equity research in which interactive and intelligent systems are designed to help people build social relationships that support wellbeing. By conceptualizing the purview of digital health research as encompassing not only individual but also social change, there is tremendous opportunity to create disruptive health interventions that help achieve health equity.

Intelligent Visualization Interfaces

Kwan-Liu Ma
Distinguished Professor
University of California, Davis

Time

April 17, 2021, 9:00 AM CDT

About

Visualization transforms large quantities of data into pictures in which relations, patterns, or trends of interest reveal themselves, effectively guiding the user through the data reasoning and discovery process. Visualization has become an essential tool in many areas of study that take a data-driven approach to problem solving and decision making. However, when the data is large, relational, or high-dimensional, it can take both novices and experts substantial effort to derive and interpret visualization results. Following the resurgence of AI and machine learning in recent years, there is growing interest in, and opportunity for, applying these technologies in the field of visualization to perform data transformation and to assist in the generation and interpretation of visualizations, aiming to strike a balance between cost and performance. In this talk, I will present designs by my group that make effective use of machine learning for general data visualization and analytics tasks [1–6], resulting in better visualization interfaces to the data.