March 28th, Tuesday | 9:30-10:30am (Sydney Local Time, GMT+11)
With the emergence of a new generation of embodied AI agents (e.g., cognitive robots), it has become increasingly important to empower these agents with the ability to learn and collaborate with humans through language communication. Despite recent advances, language communication in embodied AI still faces many challenges. Human language not only needs to be grounded in agents’ perception and action but also needs to facilitate collaboration between humans and agents. To address these challenges, I will introduce several efforts in my lab that study pragmatic communication with embodied agents. I will talk about how language use is shaped by shared experience and knowledge (i.e., common ground) and how collaborative effort is important in mediating perceptual differences and handling exceptions. I will discuss task learning by following language instructions and highlight the need for neurosymbolic representations for situation awareness and transparency. I will further present explicit modeling of partners’ goals, beliefs, and abilities (i.e., theory of mind) and discuss its role in language communication for situated collaborative tasks.
Joyce Chai is a Professor in the Department of Electrical Engineering and Computer Science at the University of Michigan. She holds a Ph.D. in Computer Science from Duke University. Her research interests span from natural language processing and embodied AI to human-AI collaboration. She is fascinated by how experience with the world and social pragmatics shape language learning and language use, and is excited about developing language technology that is sensorimotor-grounded, pragmatically rich, and cognitively motivated. Her current work explores the intersection between language, perception, and action to enable situated communication with embodied agents. She served on the executive board of NAACL and as Program Co-Chair for multiple conferences, most recently ACL 2020. She is a recipient of the NSF CAREER Award (2004), the Best Long Paper Award at ACL (2010), and an Outstanding Paper Award at EMNLP (2021). She is a Fellow of ACL.
March 29th, Wednesday | 9:30-10:30am (Sydney Local Time, GMT+11)
This talk will give an overview of the advances in AI that have enabled the development of autonomous virtual characters with lifelike behaviours. Drawing from the research and development undertaken at Soul Machines, examples will be shown of virtual characters that recognize and respond to human emotion, learn human-like behaviours, and provide a realistic human face to computer interfaces. The research challenges for the next generation of digital humans will be outlined, along with examples of how they could transform human-computer interaction.
Mark Sagar, CEO of Soul Machines, wants you to imagine a future with emotionally intelligent digital humans. Sagar is also director of the Laboratory for Animate Technologies at the Auckland Bioengineering Institute at the University of Auckland. He is humanizing the interface between people and machines so that we might better cooperate with them in the future. His team is developing autonomously animated virtual humans with virtual brains and nervous systems, capable of highly expressive face-to-face interaction and real-time learning. His research brings technology to life, pioneering techniques that embody biologically based models of neural networks and neural systems in highly expressive faces, creating live interactive virtual humans capable of emotional response and real-time learning and thereby redefining human interaction with artificial intelligence. This work has the potential to impact everything from human-machine cooperation in assistive, commercial, educational, and creative tasks to the future of storytelling with autonomous characters.
March 31st, Friday | 9:30-10:30am (Sydney Local Time, GMT+11)
In 2022, Google Research Australia (GRA) was announced as part of the Digital Future Initiative, a program aimed at contributing to a stronger digital future for Australians. GRA is part of Google Brain, an arm of research that has significantly shaped the evolution of AI. In this talk I will explore the history of disruptive innovations within Google Brain and some of the exciting trends in Machine Learning, and I will showcase some of the recent advances in generative AI and the use of AI for creativity.
Grace Chung (BE/BSc UNSW; Ph.D. Elec. Eng. Comp. Sci. MIT) is Head of Google Research and Engineering Site Lead for Google Australia. In her 14-year tenure at Google, Grace has amassed extensive experience growing and leading full-stack engineering teams, developing core products across Google such as ChromeOS, Chrome, Google Plus, and Knowledge Graph. Prior to Google, Grace was a researcher with expertise spanning Speech Recognition, Natural Language Processing, Text Mining, Spoken Dialogue Systems, Information Extraction, Machine Learning, and Information Retrieval. In 2022, Grace was the founding member and lead of the Australia Google Research Hub.