Primed Prediction: A Critical Examination of the Consequences of Exclusion of the Ontological Now in AI Protocol
Carrie O’Connell and Chad Van de Wiele
Chapter from the book: Verdegem, P. 2021. AI for Everyone?: Critical Perspectives.
Revisiting Norbert Wiener’s cybernetic prediction as the theoretical foundation of AI, this chapter argues that we need to open the black box of what lies behind prediction and simulation. It explores the shortcomings of cybernetic prediction through the lens of Jean Baudrillard’s theory of simulacra and simulation. Specifically, what prediction excludes, namely an accounting for the ontological now, is precisely what Baudrillard warned against in his analysis of the role technological innovations play in untethering reality from the material plane, leading to a crisis of the simulacrum of experience. From this perspective, any deep-learning system rooted in Wiener’s view of cybernetic feedback loops risks creating behaviour as much as predicting it. As this chapter argues, such prediction is a narrow, self-referential system of feedback that ultimately becomes a self-fulfilling prophecy girded by the psycho-social effects of the very chaos it seeks to rationalise.
O’Connell C. & Van de Wiele C. 2021. Primed Prediction: A Critical Examination of the Consequences of Exclusion of the Ontological Now in AI Protocol. In: Verdegem, P (ed.), AI for Everyone?. London: University of Westminster Press. DOI: https://doi.org/10.16997/book55.k
This chapter is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 license (CC BY-NC-ND 4.0). Copyright is retained by the author(s).
This book has been peer reviewed. See our Peer Review Policies for more information.
Published on 20 September 2021