Today’s most common user interfaces represent an incremental change from the GUI popularized by the Apple Macintosh in 1984. Over the last 30 years the dominant hardware has changed drastically while the user interface has barely moved: from one hand on a mouse to two fingers on a panel of glass. I will illustrate how we are building on-body interfaces of the future that further engage our bodies by using muscle sensing for input and vibrotactile output, offering discreet and natural interaction on the go. I will also show how other interfaces we are designing take an even more radical approach, moving the interface off the human body altogether and onto drones that project into the space around them. Finally, I will introduce a new project in which we envision buildings as hybrid physical-digital spaces that both sense and actuate to improve human wellbeing.
James Landay is a Professor of Computer Science and the Anand Rajaraman and Venky Harinarayan Professor in the School of Engineering at Stanford University. He specializes in human-computer interaction. He is the founder and co-director of the World Lab, a joint research and educational effort with Tsinghua University in Beijing. Previously, Landay was a Professor of Information Science at Cornell Tech in New York City, a Professor of Computer Science & Engineering at the University of Washington, and a Professor in EECS at UC Berkeley. From 2003 through 2006 he was the Laboratory Director of Intel Labs Seattle, a university-affiliated research lab that explored new usage models, applications, and technology for ubiquitous computing. He was also the chief scientist and co-founder of NetRaker, which was acquired by KeyNote Systems in 2004. Landay received his BS in EECS from UC Berkeley in 1990, and MS and PhD in Computer Science from Carnegie Mellon University in 1993 and 1996, respectively. He is a member of the ACM SIGCHI Academy and he is an ACM Fellow.
Automatic music-understanding technologies (automatic analysis of music signals) make possible the creation of intelligent music interfaces that enrich music experiences and open up new ways of listening to music. In the past, it was common to listen to music in a somewhat passive manner; in the future, people will be able to enjoy music in a more active manner by using music technologies. Listening to music through active interactions is called active music listening.
In this keynote speech I first introduce active music listening interfaces, demonstrating how end users can benefit from music-understanding technologies based on signal processing and/or machine learning. By analyzing the music structure (chorus sections), for example, the SmartMusicKIOSK interface enables people to access their favorite part of a song directly (skipping other parts) while viewing a visual representation of the song's structure. I then introduce our recent efforts to deploy such research-level music interfaces as web services open to the public. Those services augment people's understanding of music, enable music-synchronized control of computer-graphics animation and robots, and provide various bird's-eye views of a large music collection. In the future, further advances in music-understanding technologies and music interfaces based on them will make interaction between people and music even more active and enriching.
Masataka Goto received the Doctor of Engineering degree from Waseda University in 1998. He is currently a Prime Senior Researcher at the National Institute of Advanced Industrial Science and Technology (AIST). In 1992 he was one of the first to start working on automatic music understanding and has since been at the forefront of research in music technologies and music interfaces based on those technologies. Over the past 25 years he has published more than 250 papers in refereed journals and international conferences and has received 46 awards, including several best paper awards, best presentation awards, the Tenth Japan Academy Medal, and the Tenth JSPS PRIZE.
He has served as a committee member of over 110 scientific societies and conferences, including serving as General Chair of the 10th and 15th International Society for Music Information Retrieval Conferences (ISMIR 2009 and 2014). As the Research Director he began a 5-year research project (OngaCREST Project) in 2011 and a follow-on 5-year research project (OngaACCEL Project) in 2016, both of which have focused on music technologies and been funded by the Japan Science and Technology Agency (CREST/ACCEL, JST).
Personalization, recommendations, and user modeling can be powerful tools to improve people's experiences with technology and to help them find information. However, we also know that people underestimate how much of their personal information is used by our technology, and they generally do not understand how much algorithms can discover about them.
Both privacy and ethical technology have issues of consent at their heart. While many personalization systems assume most users would consent to the way they employ personal data, research shows this is not necessarily the case. This talk will look at how to consider issues of privacy and consent when users cannot explicitly state their preferences, the "Creepy Factor," and how to balance users' concerns with the benefits personalized technology can offer.
Jennifer Golbeck is an Associate Professor in the College of Information Studies at the University of Maryland, College Park, where she is Director of the Social Intelligence Lab.
Her research focuses on analyzing and computing with social media, particularly predicting user attributes, and on using the results to design and build systems that improve the way people interact with information online. She also studies malicious behavior online, including bot detection, online harassment, and fake news.
She received an AB in Economics and an SB and SM in Computer Science at the University of Chicago, and a Ph.D. in Computer Science from the University of Maryland, College Park.