Talk #9: Ethics

Philosophy of Emergent Technologies

Tuesday, 12.09.2023, 13:00 – 14:00
Andreas-Pfitzmann-Bau APB/E023

by Konstanze Möller-Jansen

After completing a bachelor’s degree in Business and Economics (B.Sc.) at TU Dresden and Ca’ Foscari Venezia, Konstanze Möller-Jansen studied Philosophy at the Humboldt-Universität zu Berlin (B.A.) and Freie Universität Berlin (M.A.). Her research in political philosophy focuses on themes such as (structural) domination, freedom, and philosophy of technology. She is currently working on her dissertation: “The impact of algorithmic rule – a normative evaluation from the perspective of freedom as non-domination”.

Abstract

Our mature information society is marked by an increasing erasure of the distinction between being online and offline, as the technologies responsible for this development have become ubiquitous in areas as different as work, medicine, mobility, and communication. These technologies clearly have many practical uses that simplify our lives. At the same time, their ubiquity means that disengagement has become almost impossible. It is increasingly difficult to avoid products and services that rely on big data or predictive analytics, due to a lack of serious alternatives as well as countless optimized incentives to use them.

Furthermore, it stands to reason that AI systems are not only simplifying our lives but can also have severely harmful impacts. To address these problems, some suggest that practitioners must be more sensitive to discriminatory biases in their data; others frame these issues as problems of privacy infringements or of opaque algorithms. Increasingly, however, theorists have paid attention to the fact that by framing these as “ethical” challenges, we fail to grasp a critical dimension of asymmetrical power dynamics underlying many of these problems. In this talk I investigate how AI applications should be addressed not only by balancing harms and benefits, but also by considering wider implications, such as the possibility that AI systems might pose a threat to people’s freedom.