Moral Machine: Engineers Endowed Artificial Intelligence With Morality And Ethics - Alternative View

The deeper AI penetrates our daily lives, the more often it faces complex moral and ethical dilemmas that even a human can find hard to resolve. Scientists at MIT tried to tackle this problem by giving a machine the ability to reason about morality, based on the opinions of a large number of real people.

Some experts believe that the best way to train artificial intelligence to handle morally difficult situations is to draw on "crowd experience". Others argue that this approach is inherently biased, and that different algorithms can reach different conclusions from the same data set. Where does that leave machines that will clearly have to make difficult moral and ethical decisions when working with real people?

Intelligence and morality

As artificial intelligence (AI) systems evolve, experts are increasingly concerned with how best to give a system an ethical and moral basis for its actions. The most popular idea is to have AI draw conclusions by studying human decisions. To test this assumption, researchers at the Massachusetts Institute of Technology created the Moral Machine. Website visitors were asked to decide how an autonomous vehicle should act when faced with a hard choice. A typical example is the familiar dilemma of a potential accident with only two possible outcomes: the car can hit three adults to save the lives of two children, or it can do the opposite. Which option should you choose? And is it acceptable, say, to sacrifice the life of an elderly person to save a pregnant woman?

As a result, the project assembled a huge database of test results, and Ariel Procaccia of the Computer Science Department at Carnegie Mellon University decided to use it to improve machine intelligence. In a new study, he and one of the project's creators, Iyad Rahwan, loaded the complete Moral Machine database into an AI and asked the system to predict how a car on autopilot would react to similar, but slightly different, scenarios. Procaccia wanted to demonstrate how a voting-based system can be a path to "ethical" artificial intelligence. The author of the work himself admits that such a system is still far too early to apply in practice, but it is an excellent proof that the concept is possible.
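
To get a feel for the voting idea, here is a minimal sketch in Python: recorded human judgements on similar dilemmas "vote" on the decision for a new one. The scenario features, the toy data and the nearest-neighbour aggregation are illustrative assumptions, not the method used in the actual study.

    # Minimal sketch of the voting idea: predict the choice for a new dilemma
    # by letting recorded human judgements on similar dilemmas vote.
    # The feature encoding and toy data are illustrative assumptions only.
    from collections import Counter

    # Each recorded judgement: (scenario features, choice made by a respondent).
    # Features here: (pedestrians at risk, passengers at risk, pedestrians are children)
    judgements = [
        ((3, 2, 0), "protect_passengers"),
        ((1, 4, 0), "protect_pedestrians"),
        ((2, 2, 1), "protect_pedestrians"),
        ((5, 1, 0), "protect_passengers"),
    ]

    def distance(a, b):
        # Simple dissimilarity measure between two scenarios.
        return sum(abs(x - y) for x, y in zip(a, b))

    def predict(scenario, k=3):
        # The k most similar recorded judgements vote; the majority wins.
        nearest = sorted(judgements, key=lambda j: distance(j[0], scenario))[:k]
        votes = Counter(choice for _, choice in nearest)
        return votes.most_common(1)[0][0]

    print(predict((2, 3, 1)))  # -> "protect_pedestrians" on this toy data

The point of the sketch is only that aggregated human verdicts, not a hand-written rulebook, determine the answer for a scenario no one has voted on directly.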

Cross morality

The very idea of choosing between two morally negative outcomes is not new. Ethicists have a separate term for it: the principle of double effect. But that belongs to bioethics; no one had previously applied such a framework to a car, which is why the study attracted particular interest from experts around the world. OpenAI co-chairman Elon Musk believes that creating "ethical" AI is a matter of developing clear guidelines or policies to guide program development. Politicians are gradually listening to him: Germany, for example, has created the world's first ethical principles for autonomous cars. Even DeepMind, Alphabet's (Google's) AI company, now has a department for ethics and public morals.

Other experts, including a team of researchers from Duke University, believe the best way forward is to create a "general framework" that describes how AI should make ethical decisions in a given situation. They believe that unifying collective moral views, as the Moral Machine does, will make artificial intelligence even more moral in the future than modern human society is.

Criticism of the "moral machine"

Be that as it may, the "majority opinion" principle is currently far from reliable. For example, one group of respondents may have biases not shared by everyone else. As a result, AIs given exactly the same dataset may come to different conclusions depending on which sample of that information they were trained on.
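
A toy illustration of this sampling problem: small samples drawn from the very same pool of judgements can point to opposite "majority" answers. The numbers below are invented purely to demonstrate the effect.

    # Toy illustration: how often does a small sample's majority contradict
    # the true majority of the pool? The 55/45 split is an invented example.
    import random

    random.seed(0)
    pool = ["A"] * 55 + ["B"] * 45   # true majority: option A

    def majority(sample):
        return max(set(sample), key=sample.count)

    trials = 10_000
    flipped = sum(majority(random.sample(pool, 15)) != "A" for _ in range(trials))
    print(f"{flipped / trials:.1%} of small samples point the other way")

Two systems trained on different samples of the same crowd data could therefore endorse opposite decisions, which is exactly the critics' concern.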

To Professor James Grimmelmann, who specializes in the dynamics between software, wealth, and power, the very idea of public morality looks flawed. "It cannot teach AI ethics; it can only endow it with a semblance of the ethical standards held by a certain part of the population," he says. And Procaccia himself, as mentioned above, admits that their research is nothing more than a successful proof of concept. He is confident, however, that such an approach could eventually bring success to the entire effort to create a highly moral AI. "Democracy undoubtedly has a number of shortcomings, but as a unified system it works, even if some people still make decisions the majority does not agree with."

Vasily Makarov
