End-User Developers program because they have to and not because they want to. This includes scientists, data analysts, and the general public when they write code. We have been working for many years on various ways to make end-user development more successful. In this talk, I will focus on two new projects where we are applying intelligent user interfaces to this long-standing challenge. In Sugilite, the user can teach an agent new skills interactively through the user interfaces of relevant smartphone apps, using a combination of programming by example (PBE) and natural language instructions. For instance, a user can teach Sugilite how to order the cheaper car between Uber and Lyft, even though Sugilite has no access to their APIs, no knowledge of the task domain, and no prior understanding of the concept of "cheap." Another project, called Verdant, focuses on helping data scientists, including those using Machine Learning and other AI algorithms, do exploratory programming. Verdant supports micro-versioning, understanding the differences between the output and code of different versions, backtracking, tracing the provenance of output to its code, and searching the history. A goal for Verdant is to intelligently organize and filter the raw history data to help data scientists make effective choices from it.
Brad A. Myers is a Professor in the Human-Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University. He was chosen to receive the ACM SIGCHI Lifetime Achievement Award in Research in 2017, for outstanding fundamental and influential research contributions to the study of human-computer interaction. He is an IEEE Fellow, ACM Fellow, member of the CHI Academy, and winner of 15 Best Paper type awards and 5 Most Influential Paper Awards. He is the author or editor of over 500 publications, including the books "Creating User Interfaces by Demonstration" and "Languages for Developing User Interfaces," and he has been on the editorial board of six journals. He has been a consultant on user interface design and implementation to over 90 companies, and regularly teaches courses on user interface design and software. Myers received a PhD in computer science at the University of Toronto where he developed the Peridot user interface tool. He received the MS and BSc degrees from the Massachusetts Institute of Technology during which time he was a research intern at Xerox PARC. From 1980 until 1983, he worked at PERQ Systems Corporation. His research interests include user interfaces, programming environments, programming language design, end-user software engineering (EUSE), API usability, developer experience (DevX or DX), interaction techniques, programming by example, handheld computers, and visual programming. He belongs to ACM, SIGCHI, IEEE, and the IEEE Computer Society.
Address: Human Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, PA 15213-3890. email@example.com, http://www.cs.cmu.edu/~bam
Explainable AI (XAI) has started experiencing explosive growth, echoing the explosive growth of AI's use for purposes that impact the general public. This spread of AI into the world outside of research labs brings with it pressures and requirements on XAI that many of us have perhaps not thought about deeply enough. In this keynote address, I'll explain why I think we have a long way to go before we'll be able to achieve our long-term goal: to explain AI not only well, but also fairly. I'll start with challenges in (1) how we go about XAI research and in (2) what we can succeed at explaining so far. Then I'll go in more depth into a third challenge: (3) who we can explain to. Who are the people we've even tried to explain AI to, so far? What are the societal implications of who we explain to well and who we do not? I'll discuss why we have to explain to populations to whom we've given little thought: people diverse in many dimensions, including gender diversity, cognitive diversity, and age diversity. Addressing all of these challenges is necessary before we can claim to explain AI fairly and well.
Margaret Burnett (http://web.engr.oregonstate.edu/~burnett/) is an OSU Distinguished Professor at Oregon State University. She began her career in industry, where she was the first woman software developer ever hired at Procter & Gamble Ivorydale. A few degrees and start-ups later, she joined academia, with a research focus on people who are engaged in some form of software development. Together with her collaborators and students, she has contributed some of the seminal work on explaining AI to ordinary end users. She also co-founded the area of end-user software engineering, which aims to enable computer users not trained in programming to improve their own software, and co-leads the team that created GenderMag (gendermag.org), a software inspection process that uncovers user-facing gender biases in software from smart systems to programming environments. Burnett is an ACM Fellow and a member of the ACM CHI Academy.
Intelligent agents and robots will become part of our daily lives. As they do, they will not only carry out specific tasks in the environment but also partner with us socially and collaboratively. Groups of social robots may interact with groups of humans performing joint activities. Yet, research related to hybrid groups of humans and robots is still limited. What does it mean to be part of a hybrid group? Can social robots team up with humans? How do humans respond to social robots as partners? Do humans trust them? To research these questions, we need a deep understanding of how robots can interact socially in groups. That involves giving robots social competencies that allow them to identify and characterize group members, evaluate the dependencies between the behaviors of different members, understand and consider different roles, and infer the dynamics of group interactions, shaped by a common past to build an anticipated future. In this talk I will discuss how to engineer social robots that act autonomously as members of a group, collaborating with both humans and other robots. I will start by providing an overview of recent work in social human-robot teams, and will present different scenarios to illustrate the work.
Ana Paiva is a Full Professor in the Department of Computer Engineering at Instituto Superior Técnico (IST) of the University of Lisbon and is also the Coordinator of GAIPS – “Group on AI for People and Society” at INESC-ID (see http://gaips.inesc-id.pt/gaips/). Her group investigates the creation of complex systems using an agent-based approach, with a special focus on social agents. Prof. Paiva’s main research focuses on the problems and techniques for creating social agents that can simulate human-like behaviours, be transparent, natural and, eventually, give the illusion of life. Over the years she has addressed this problem by engineering agents that exhibit specific social capabilities, including emotions, personality, culture, non-verbal behaviour, empathy, collaboration, and others. She has published extensively in the area of social agents and has received best paper awards at many conferences; in particular, she won first prize in the Blue Sky Awards at AAAI 2018. She has further advanced the area of artificial intelligence and social agents worldwide, having served on the Global Agenda Council on Artificial Intelligence and Robotics of the World Economic Forum and as a member of the Scientific Advisory Board of Science Europe. She is a EurAI fellow.