"A cold, technical decision-maker": Can AI provide explainability, negotiability, and humanity?

Allison Woodruff, Yasmin Asare Anderson, Katherine Jameson Armstrong, Marina Gkiza, Jay Jennings, Christopher Moessner, Fernanda Viegas, Martin Wattenberg, Lynette Webb, Fabian Wrede, and Patrick Gage Kelley

Algorithmic systems are increasingly deployed to make decisions in many areas of people's lives. The shift from human to algorithmic decision-making has been accompanied by concern about potentially opaque decisions that are not aligned with social values, as well as proposed remedies such as explainability. We present the results of a qualitative study of algorithmic decision-making, comprising five workshops conducted with a total of 60 participants in Finland, Germany, the United Kingdom, and the United States. We invited participants to reason about decision-making qualities such as explainability and accuracy in a variety of domains. Participants viewed AI as a decision-maker that follows rigid criteria and performs mechanical tasks well, but is largely incapable of subjective or morally complex judgments. We discuss participants' consideration of humanity in decision-making, and introduce the concept of 'negotiability,' the ability to go beyond formal criteria and work flexibly around the system.
