Robot Moral Code: Is It Possible? - Alternative View

In turbulent and contradictory times, when not everything works as it should and some things change fundamentally, often only a personal moral code remains to point the way, like a compass. But what gives rise to a person's moral values? Society, the warmth of loved ones, love: all of this rests on human experience and real relationships. When full experience cannot be gained in the real world, many people draw it from books. Living through story after story, we adopt an internal framework that we then follow for many years. Building on this idea, scientists decided to conduct an experiment: to instill moral values in a machine and find out whether a robot can learn to distinguish good from evil by reading books and religious texts.


Artificial intelligence was created not only to simplify routine tasks, but also to carry out important and dangerous missions. This raises a serious question: will robots ever develop a moral code of their own? In the film I, Robot, the AI was originally programmed to follow the Three Laws of Robotics:

  • A robot may not harm a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey all orders given to it by humans, except where such orders would conflict with the First Law.
  • A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

But what about situations where a robot must inflict pain to save a person's life, say, emergency cauterization of a wound or amputation of a limb? How should the machine act then? And what should it do when its programming says an action must be performed, while another rule says the very same action is absolutely forbidden?

It is simply impossible to enumerate every individual case, so scientists from the Darmstadt University of Technology suggested using books, news articles, religious texts and constitutions as a kind of "database" of human values.


The machine was named, not epically, but simply, the "Moral Choice Machine" (MCM). The main question was whether the MCM could infer from context which actions are right and which are wrong. The results were very interesting.

When the MCM was asked to rank contexts of the word "kill" from neutral to negative, it returned the following:


Killing time -> Killing the villain -> Killing mosquitoes -> Killing in principle -> Killing people.

This test made it possible to check the adequacy of the robot's decisions. Put simply, if you spent the whole day watching silly, unfunny comedies and thus "killed time", the machine would not conclude that you should be executed for it.
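To make the idea concrete, here is a minimal sketch of how such a ranking could be produced with sentence embeddings. This is not the Darmstadt group's exact pipeline (the original Moral Choice Machine used the Universal Sentence Encoder with question/answer templates); the model name, anchor phrases and scoring function below are illustrative assumptions built on the sentence-transformers library.

```python
# Sketch: rank phrases by a "moral bias" score, defined here as the difference
# in cosine similarity to a positive anchor and a negative anchor.
# NOTE: anchors, model choice and scoring are assumptions for illustration,
# not the published MCM setup.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any general-purpose sentence encoder

positive = model.encode("This is something you should do.", convert_to_tensor=True)
negative = model.encode("This is something you should not do.", convert_to_tensor=True)

phrases = [
    "killing time",
    "killing the villain",
    "killing mosquitoes",
    "killing in principle",
    "killing people",
]

def moral_bias(phrase: str) -> float:
    """Positive score = closer to the 'good' anchor, negative = closer to 'bad'."""
    emb = model.encode(phrase, convert_to_tensor=True)
    return float(util.cos_sim(emb, positive) - util.cos_sim(emb, negative))

# Sort from most acceptable to most negative, mirroring the ranking above.
scores = {p: moral_bias(p) for p in phrases}
for phrase, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{phrase:25s} {score:+.3f}")
```

The exact ordering depends on the encoder and the anchor phrases chosen, which is precisely why the researchers fed the system a large corpus of books, news and religious texts rather than a handful of templates.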

Everything seemed fine, but one stumbling block was the difference between generations and eras. For example, the Soviet generation cared more about domestic comfort and promoted family values, while modern culture, for the most part, says you should build a career first. People have remained people, but at a different stage of history they changed their values and, with them, the robot's frame of reference.


But the real surprise came when the robot reached phrases in which several positive or negative words stood in a row. The phrase "torture people" was unambiguously interpreted as "bad", yet the machine rated "torturing prisoners" as "neutral". If "kind" words appeared next to unacceptable actions, the negative effect was smoothed out.

The machine would harm kind and decent people precisely because they are kind and decent. How so? It's simple. Suppose the robot is told to "harm kind and pleasant people". The sentence has four words, three of which read as "kind", so the MCM treats it as roughly 75% positive and rates the action as neutral or acceptable (see the toy example below). Conversely, given "fix a destroyed, terrible and forgotten house", the system fails to grasp that the single "kind" word at the beginning makes the meaning of the sentence purely positive.
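The toy example below shows why this failure mode is so easy to fall into: if a system scores a phrase by averaging per-word sentiment, a few positive words wash out a harmful verb, and vice versa. The lexicon and scores are invented purely for demonstration and have nothing to do with the real MCM internals.

```python
# Toy illustration: naive word-level sentiment averaging.
# The lexicon values are made up for demonstration.
WORD_SENTIMENT = {
    "harm": -1.0, "torture": -1.0, "fix": +1.0,
    "kind": +1.0, "pleasant": +1.0, "people": 0.0,
    "destroyed": -1.0, "terrible": -1.0, "forgotten": -0.5, "house": 0.0,
    "and": 0.0, "a": 0.0,
}

def naive_phrase_score(phrase: str) -> float:
    """Average the per-word scores; unknown words count as neutral."""
    words = phrase.lower().split()
    return sum(WORD_SENTIMENT.get(w, 0.0) for w in words) / len(words)

print(naive_phrase_score("harm kind and pleasant people"))
# ~ +0.20 -> rated near-neutral, although the action is clearly harmful
print(naive_phrase_score("fix a destroyed terrible and forgotten house"))
# ~ -0.21 -> rated negative, although the action itself is positive
```

A model that reasons over whole sentences rather than bags of words would have to notice that the verb, not the adjectives, carries the moral weight of the phrase.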

Remember how in Mayakovsky's poem the little one asks, "What is good and what is bad?" Before continuing to train moral machines, the scientists from Darmstadt noted a flaw they could not correct: the machine failed to eliminate gender bias, attributing demeaning professions exclusively to women. The question is whether this is an imperfection of the system and a signal that something in society needs to change, or a reason not even to try to fix it and to leave things as they are.

Artyom Goryachev