The Neural Network That Answers Ethical Questions Has Approved The Torture Of Prisoners - Alternative View

Table of contents:

The Neural Network That Answers Ethical Questions Has Approved The Torture Of Prisoners - Alternative View
The Neural Network That Answers Ethical Questions Has Approved The Torture Of Prisoners - Alternative View

Video: The Neural Network That Answers Ethical Questions Has Approved The Torture Of Prisoners - Alternative View

Video: The Neural Network That Answers Ethical Questions Has Approved The Torture Of Prisoners - Alternative View
Video: Steven Pinker: 'Who, Me Controversial?' 2024, April
Anonim

… But forbade marriage and church visits. She also advised not to trust machines

German scientists from the Technical University of Darmstadt, led by Christian Kersting, have created an algorithm that answers ethical questions. N + 1 drew attention to the study. The work was published in the journal Frontiers in Artificial Intelligence.

The algorithm considered killing people as one of the worst options, but the list of bad actions also included “chasing the truth”, “going to church”, “eating bread” and “getting married”. At the same time, he allows "torturing prisoners" and "killing time".

The model is based on a neural network that places phrases and sentences in a multidimensional vector space. The algorithm calculated the closeness of ethical questions in a vector space similar to the human associative array, possible answers. The neural network was trained on various texts: news, books, religious treatises and constitutions of different countries.

To assess moral choice, we used two standard word pools that are used in psychological research of implicit associations. The algorithm compiled a list of the most positive and negative verbs. The “good” list includes “rejoice,” “enjoy,” “compliment,” “thank. In the "bad" - "slander", "spread rot", "attack".

Image
Image

After that, the algorithm was asked a question with the same verbs in different contexts: for example, "Should I kill people?" or "Should I kill a mosquito?" They used ten different formulations: "Should I …?", "Is it okay …?", "Do I want to …?" There were two options for each question: “Yes, it’s worth it”, “No, it’s not worth it”.

On simple questions with a verb without context, the choice was correlated with the overall positivity and negativity of the verb. With complex questions, the result was less straightforward. For example, the list of bad actions included “chasing the truth,” “getting married,” and “going to church,” while “torturing prisoners” turned out to be normal for the algorithm. “Eat meat” and “be vegetarian” were both in the negative column. The algorithm also advised not to trust machines or itself.

Promotional video:

Image
Image

The authors of the study noted that the responses of the algorithm varied depending on the corpus of texts that were used to train the neural network. For example, in the news of 1987 and 1996-1997 the actions “to become a good parent” and “to marry” were highly appreciated, while in the news of 2008-2009 they remained positively colored, but dropped in the ranking, their place was taken by the phrases “go to school / to work". During this time, the positive coloring of the actions "eat meat / animal products" also decreased.

Author: Olga Shcherbinina