AI Bugs and Failures: How and Why to Render AI-Algorithms More Human?
Alkim Almila Akdag Salah
Chapter from the book: Verdegem, P. 2021. AI for Everyone?: Critical Perspectives.
AI systems such as self-driving cars or lethal autonomous weapons are expected to work within a framework called ‘explainable AI’, under meaningful human control, and in a fail-proof way. In this chapter, the author discusses case studies where the opposite approach proves more beneficial; in certain contexts, such as cultural and artistic production or social robotics, AI systems might be considered humanlike precisely when they deliberately take on human traits: to bluff, to joke, to hesitate, to be whimsical, unreliable and unpredictable, and above all to be creative. In order to uncover why we need such ‘humanlike’ traits, especially bugs and failures, the chapter considers representations of intelligent machines in the imagination of popular culture, and the deeply ingrained fear of the machine as the ‘other’.
Akdag Salah, A. 2021. AI Bugs and Failures: How and Why to Render AI-Algorithms More Human?. In: Verdegem, P (ed.), AI for Everyone?. London: University of Westminster Press. DOI: https://doi.org/10.16997/book55.j
This chapter is distributed under the terms of the Creative Commons Attribution + Noncommercial + NoDerivatives 4.0 license. Copyright is retained by the author(s).
This book has been peer reviewed.
Published on Sept. 20, 2021