Reflecting on Societal Implications of IUI Research

We encourage authors to consider the societal implications of their work throughout their projects and to include a reflection on those implications in their papers. We recognize that technology is rarely neutral — simply by making some things easier than others, it reshapes society. Further, given the remarkably short invention-to-application cycles for AI-related technologies, it is becoming increasingly unlikely that "somebody else" will carefully consider how an emerging intelligent user interface technology might impact the world before it is deployed. Because it is often difficult to anticipate the cumulative or indirect impacts of an invention, we offer a few concrete suggestions to consider throughout the research process and for reflecting on one's work at the end.

Our purpose is to help authors ensure that the likely societal consequences of their work are consistent with their intentions and values.

For colleagues who are not yet experienced with incorporating societal impacts into their IUI research but who are willing to give it a try, we have compiled a set of simple ideas to consider.

Ideas to consider

  • Wait, my work has societal consequences? If you are not yet convinced that technology is rarely (if ever) neutral, consider Langdon Winner's 1980 essay "Do Artifacts Have Politics?" or Ben Green's 2020 essay, which adapts and elaborates Winner's argument for information technologies. Different people have different values and take different moral stances. What are yours?
  • Ask stakeholders about their needs and aspirations. Throughout a project — particularly if you are working toward a specific application — converse, however informally, with all relevant stakeholders. Make sure that your assumptions about their needs, aspirations, and the potential impacts of your work are well informed. Designing a tool for doctors? Chat with a few patients (and maybe even a hospital administrator). Designing for law enforcement? Chat with a social worker, or someone living in a community with a lot of police activity.
  • Strive for diverse, representative participants and data. Throughout a project, look for opportunities to make your data and/or your participant pool diverse and representative. One particular harm that AI- and design-based innovation can introduce is bias in outcomes: the innovation systematically benefits some people more than others. An example is computer vision systems misclassifying the gender of dark-skinned women much more frequently than that of light-skinned men.
  • Anticipate indirect consequences of your work (yes, it can be done!). At the end of the project, write a brief section that reflects on your contribution and its potential impacts on society. The key step here is to imagine a broad range of diverse and plausible future scenarios. It is harder than it sounds. A fun, quick, yet effective tool to try first is the Tarot Cards of Tech. If you want to dive deeper, see Scenario types and techniques: Towards a user’s guide for a range of techniques that colleagues in futures studies use. As Section 3 of that paper shows, most techniques involve conversations with other people (experts, stakeholders). Once you envision possible scenarios, help future researchers or implementers anticipate situations where additional work or care may be needed to avoid outcomes that disagree with your values. Such a reflection could be part of your discussion or a section of its own.
  • Does your work reduce or amplify existing inequalities? At the end of the project you can also audit your results for signs of bias. For machine learning contributions, consider partitioning your test data so that they represent different groups of people. For interactive contributions, consider disaggregating your usability test data by age group or gender (or any characteristic relevant to your contribution). Even if everybody benefited, did the gaps (e.g., between older and younger adults) decrease, increase, or stay the same? If you detect an unfavorable result, it is still valuable to report it; your documentation of a disparate outcome may serve as a starting point for another research project. Such an analysis could be reported as a subsection of your results or as a short stand-alone section.
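The disaggregated audit suggested in the last point can be as simple as computing your evaluation metric per group and comparing the gaps. The sketch below illustrates the idea with hypothetical test records; the group labels, record format, and threshold for concern are all assumptions for illustration, not part of any prescribed method.

```python
# Minimal sketch of a disaggregated accuracy audit.
# The records and group labels below are hypothetical.
from collections import defaultdict

def accuracy_by_group(records):
    """Compute per-group accuracy from (group, correct) test records."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    return {g: hits[g] / totals[g] for g in totals}

# Illustrative test records: (group label, whether the system was correct).
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

per_group = accuracy_by_group(records)
gap = max(per_group.values()) - min(per_group.values())
# A large gap suggests the system benefits some groups more than others,
# which is worth reporting even if overall accuracy looks acceptable.
```

With the illustrative records above, group_a scores 0.75 and group_b scores 0.5, a gap of 0.25 — the kind of disparity the bullet suggests reporting rather than hiding.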

Additional resources

Example papers

Review process

The inclusion of a reflection on societal impacts is not required, and we will instruct reviewers that its presence or absence should have no negative impact on acceptance recommendations. Excellent treatments of societal impacts will, of course, be recognized positively. It is very likely that careful consideration of possible future outcomes (or an audit of a project's results) will uncover negative implications or evidence of bias. This is OK.

Thanks

We have many colleagues to thank for pointers and advice: Ofra Amir, Zana Buçinca, Stevie Chancellor, Henrik Korsgaard, Vineet Pandey, Deb Raji, Mat Rawsthorne, Herman Saksono, Mark Sendak, Ben Shneiderman, Alarith Uhde, Jenn Wortman Vaughan.

Special thanks to Priyanka Nanayakkara, Jessica Hullman and Nicholas Diakopoulos whose papers (Unpacking the Expressed Consequences of AI Research in Broader Impact Statements and Anticipatory Ethics and the Role of Uncertainty) were particularly helpful to us.