The algorithm of ethics

Imlisanen Jamir

In 2018, researchers from the MIT Media Lab, Harvard University, the University of British Columbia, and Université Toulouse Capitole published the results of a large-scale moral experiment known as the “Moral Machine.” Their aim was to collect a vast number of ethical decisions from around the world. Participants from 233 countries and territories were presented with variations of the famous trolley problem, but with a twist: instead of a trolley, they had to decide what a self-driving car should do. Should it swerve and collide with jaywalking pedestrians, or maintain its current trajectory and risk the lives of the passengers inside? To make matters more complex, factors such as the age of the pedestrians or the occupation of the passengers were introduced.

The results of the study revealed distinct preferences and implicit biases in how people make such decisions. Notably, respondents tended to spare doctors over elderly individuals. Cultural variations also emerged, with countries clustering into broadly Western, Eastern, and Southern groups; countries in the Southern cluster, for example, showed a strong inclination to spare physically fit individuals. However, there are debates about the validity of the trolley problem as a framework for examining ethical choices. Critics argue that decisions made in a simulation may not accurately reflect real-life behaviour, and the experiment itself has faced criticism for its methodology and underlying assumptions.

Despite the controversies surrounding it, the Moral Machine offers an intriguing glimpse into how an algorithm can profile an individual's moral preferences in high-stakes driving scenarios. It generates a concise summary of a person's ethical values, including preferences such as saving babies over dogs, and compares these with the global average. It computes these supposed values with an efficiency that no lengthy human debate over ethical theories, such as utilitarianism, could match.

In Tara Isabella Burton's story, "I Know Thy Works," the desire to simplify ethical decision-making takes centre stage. The narrative revolves around the Arete system, which seeks to "outsource morality" to an algorithmic framework. Users input their preferred guiding principle, known as their meta-ethic, such as pursuing truth at any cost or maximising happiness for the greatest number. The app then generates personalised recommendations aligned with the chosen ethical framework. It also tracks and assigns a publicly displayed ethical score to each user, which shapes everything from their curriculum vitae to their dating profiles. The plot follows characters who seek temporary liberation from the system through secretive gatherings called Black Dinners, during which they detach from their smartphones and the constraints of their meta-ethics.

The Arete system provides an intriguing reflection of our increasing dependence on technology and algorithms in various aspects of life. What started with measuring physical quantities has now extended to abstract concepts such as credit scores and ideal partners. Smartphones and wearable devices have become extensions of ourselves, shaping our cognitive processes and making us reliant on algorithms that organise our daily lives, from grocery shopping to medication reminders.

However, the next frontier in the integration of technology into our lives lies in the realm of ethics and morality. Can we truly delegate our ethical decision-making to machines? And if we can, do we actually want to? These questions are at the heart of the Arete system.

In today's world, a culture of tracking and surveillance permeates our online lives, particularly through personal well-being and self-help apps. We willingly share personal information and consent to being monitored in exchange for convenience. This monitoring extends beyond individual apps to larger governmental and economic systems that use surveillance data to make algorithmic decisions, often producing biased outcomes and discriminatory practices against marginalised groups.

Comments can be sent to imlisanenjamir@gmail.com