Building A Moral Machine: Who Will Be Responsible For The Ethics Of Self-driving Cars? - Alternative View

You are driving along a highway when a man suddenly runs out onto the busy road. Cars are moving all around you, and you have a split second to decide: swerve to avoid the person and risk causing an accident? Keep going and hope he gets out of the way in time? Brake? Do the odds change if there is a child strapped into the back seat? In many ways, this is the classic “moral dilemma”, the trolley problem. It has a million variations, each exposing a different human bias, but the essence is the same.

You are in a situation where life and death are at stake, there is no simple choice, and your decision will, in fact, determine who lives and who dies.

The trolley dilemma and artificial intelligence

A new paper from MIT, published last week in the journal Nature, attempts to arrive at a working answer to the trolley problem by recruiting millions of volunteers. The experiment launched in 2014 and was a success, collecting over 40 million responses from 233 countries and territories, making it one of the largest moral studies ever conducted.

A human makes such decisions instinctively. There is no time to weigh competing ethical systems and moral premises while your car is hurtling down the road. But in our world, decisions are increasingly made by algorithms, and computers can easily respond faster than we do.

Hypothetical self-driving-car scenarios are not the only place where algorithms will have to make moral decisions. Medical algorithms will choose who receives treatment when resources are limited. Automated drones will choose how much “collateral damage” is acceptable in a given military engagement.


Not all moral principles are equal

The “solutions” to the trolley problem are as varied as the problems themselves. How will machines make moral decisions when the foundations of morality and ethics are not universally accepted and may admit no solution at all? Who gets to decide whether an algorithm is acting rightly or wrongly?

The crowdsourcing approach taken by the Moral Machine researchers is quite pragmatic. Ultimately, for the public to accept self-driving cars, it must accept the moral framework behind their decisions. It will do little good if ethicists or lawyers come up with a solution that ordinary drivers find unpalatable or unacceptable.

The results lead to the curious conclusion that moral priorities (and hence the algorithmic decisions people are willing to accept) depend on where in the world you are.

First of all, the researchers acknowledge that it is impossible to know the frequency or nature of these situations in real life. People who have been involved in an accident very often cannot say exactly what happened, and the range of possible situations defies simple classification. Therefore, for the problem to be tractable at all, it has to be broken down into simplified scenarios that probe for universal moral rules and principles.

When you take the survey, you are presented with thirteen scenarios, each requiring a simple either-or choice, designed to narrow your answers down to nine factors.

Should the car swerve into another lane or stay its course? Should young people be saved rather than old ones? Women or men? Animals or humans? Should as many lives as possible be saved, or is one child “worth” two elderly people? Should the car's passengers be saved rather than pedestrians? Those who follow the traffic rules, or those who do not? Should physically stronger people be saved? What about people with higher social status, such as doctors or business executives?
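To make the structure of these questions concrete, here is a minimal sketch of how one such binary scenario could be represented in code. The class and field names are illustrative assumptions, not the Moral Machine's actual data schema.

```python
# Illustrative only: a possible in-code representation of one binary
# Moral Machine-style scenario. Field names are hypothetical, not the
# platform's real schema.
from dataclasses import dataclass
from typing import List

@dataclass
class Group:
    """One of the two groups the respondent must choose between."""
    count: int            # how many lives (more vs fewer)
    species: str          # "human" or "animal"
    ages: List[str]       # e.g. ["child", "elderly"]
    genders: List[str]    # e.g. ["female", "male"]
    fitness: str          # "fit", "average", "large"
    status: str           # "doctor", "executive", "criminal", "none", ...
    law_abiding: bool     # crossing legally or jaywalking
    role: str             # "passengers" or "pedestrians"

@dataclass
class Scenario:
    """A single either-or question: which group does the car spare?"""
    stay_course: Group    # outcome if the car does not swerve
    swerve: Group         # outcome if the car changes lane

# The respondent's answer is simply which of the two outcomes they prefer.
example = Scenario(
    stay_course=Group(2, "human", ["elderly", "elderly"], ["male", "female"],
                      "average", "none", True, "pedestrians"),
    swerve=Group(1, "human", ["child"], ["male"],
                 "average", "none", False, "pedestrians"),
)
print(example)
```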

In this harsh hypothetical world, someone has to die, and you answer each of these questions with varying degrees of enthusiasm. Making these decisions, however, also reveals deeply ingrained cultural norms and biases.

Processing the researchers' huge dataset yields near-universal rules as well as curious exceptions. The three most dominant factors, averaged across the entire population, were preferences for saving more lives rather than fewer, humans rather than animals, and the young rather than the old.
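As a rough illustration of what “averaged across the entire population” means here, the toy sketch below aggregates made-up binary responses into a per-factor preference score. The data and column names are invented, and the real study estimates preferences with a conjoint-analysis technique rather than the simple averaging shown here.

```python
# Illustrative only: a toy aggregation of Moral Machine-style responses.
# The data are fabricated for demonstration purposes.
import pandas as pd

# Each row: one binary decision, with the factor being tested and whether
# the respondent spared the group holding that attribute (1) or not (0).
responses = pd.DataFrame([
    {"country": "US", "factor": "spare_young",  "spared": 1},
    {"country": "US", "factor": "spare_humans", "spared": 1},
    {"country": "JP", "factor": "spare_young",  "spared": 0},
    {"country": "CN", "factor": "spare_more",   "spared": 1},
    {"country": "FR", "factor": "spare_young",  "spared": 1},
    {"country": "JP", "factor": "spare_humans", "spared": 1},
])

# Global preference strength per factor: share of decisions in which the
# attribute was spared, centred on 0.5 so positive means "preferred".
global_pref = responses.groupby("factor")["spared"].mean() - 0.5
print(global_pref.sort_values(ascending=False))

# The same aggregation per country exposes the regional differences
# discussed below.
country_pref = responses.groupby(["country", "factor"])["spared"].mean() - 0.5
print(country_pref)
```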

Regional differences

You may agree with these points, but the deeper you think about them, the more troubling the moral conclusions become. More respondents chose to spare a criminal than a cat, yet on the whole they preferred to spare a dog over a criminal. On the global average, being old was rated higher than being homeless, but homeless people were spared less often than overweight people.

And these rules were not universal: respondents from France, the United Kingdom, and the United States preferred the young, while respondents from China and Taiwan were more willing to save the elderly. Respondents from Japan preferred to spare pedestrians over passengers, while respondents from China preferred passengers over pedestrians.

The researchers found they could group the responses by country into three clusters: a “Western” cluster, mainly North America and Europe, where morality has been shaped largely by Christian doctrine; an “Eastern” cluster, including Japan, Taiwan, and the Middle East, where Confucianism and Islam predominate; and a “Southern” cluster, including Central and South America along with countries under strong French cultural influence. The Southern cluster shows a much stronger preference for sparing women than anywhere else, while the Eastern cluster shows a weaker tendency to favor the young.
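For intuition about how country-level answers can fall into “Western”, “Eastern”, and “Southern” groupings, here is a minimal sketch that applies hierarchical clustering to made-up country preference vectors. The numbers, the chosen features, and the linkage settings are assumptions for illustration; they are not the study's actual estimates or pipeline.

```python
# Illustrative only: hierarchical clustering of fabricated country-level
# preference vectors, loosely mimicking how cultural clusters could emerge.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

countries = ["US", "UK", "FR", "JP", "TW", "CN", "BR", "CO"]
# Columns: preference strength for sparing the young, sparing women,
# sparing pedestrians (hypothetical values, not the study's estimates).
prefs = np.array([
    [0.8, 0.5, 0.6],   # US
    [0.8, 0.5, 0.6],   # UK
    [0.9, 0.7, 0.6],   # FR
    [0.3, 0.4, 0.8],   # JP
    [0.3, 0.4, 0.7],   # TW
    [0.2, 0.4, 0.3],   # CN
    [0.7, 0.9, 0.5],   # BR
    [0.7, 0.9, 0.5],   # CO
])

# Ward linkage on the preference vectors, cut into three clusters.
tree = linkage(prefs, method="ward")
labels = fcluster(tree, t=3, criterion="maxclust")
for country, label in zip(countries, labels):
    print(country, "-> cluster", label)
```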

Filtering by different respondent attributes yields endlessly interesting cuts. “Very religious” respondents are unlikely to prefer sparing an animal over a human, but both religious and non-religious respondents express roughly equal preference for sparing people of high social status (even though this arguably contradicts some religious doctrines). Both men and women prefer to save women, though men are less inclined to do so.

Unanswered questions

Nobody claims that this research somehow “solves” these weighty moral problems. The study's authors note that crowdsourcing online data does introduce bias. And even with an enormous sample size, the number of questions was limited. What if the risks change depending on the decision you make? What if the algorithm can calculate that, given the speed you were moving at, you had only a 50 percent chance of killing the pedestrians?

Edmond Awad, one of the study's authors, has cautioned against over-interpreting the results. In his view, the discussion should move toward risk analysis (who is at greater or lesser risk) rather than toward deciding who dies and who does not.

But the most important result of the study is the discussion it has sparked. As algorithms begin to make more and more important decisions affecting people's lives, an ongoing conversation about the ethics of AI is essential. Designing an “artificial conscience” must include everyone's voice. While the answers will not always be easy to find, it is better to try to shape a moral framework for algorithms than to let algorithms shape the world on their own, without human oversight.