What Fairy Tales Should Artificial Intelligence Read? - Alternative View

To keep artificial intelligence from rebelling against its creators, you need to read it the right fairy tales while it learns.

Ever since humans created smart machines, they have feared that sooner or later their creations will spiral out of control. The first such case was described by Karel Čapek, who coined the word "robot", in his play R.U.R. Later in science fiction, artificial intelligence that rebels against its creators became, if not the mainstream, then a very common plot. Isaac Asimov addressed this with his "Three Laws of Robotics", the first of which reads: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."

Tell him a story

Real artificial intelligence (AI) is still a long way off, but there are already systems that function somewhat like the human brain. Neural networks such as Google's Deep Dream can recognize images. Specialized supercomputers such as IBM Watson understand tricky natural-language questions, correlate symptoms to refine medical diagnoses, and even invent culinary creations. In the coming decades we will face massive computerization of many aspects of human activity. Robotics and AI experts are therefore increasingly raising an important but still unresolved question: how do we make robots behave ethically?


Mark Riedl, Associate Professor at the Georgia Tech College of Computing: "Cautionary tales from different cultures teach children to behave in socially acceptable ways, with examples of right and wrong behavior in fairy tales, stories, and other literary works. Teaching robots to understand the meaning of these stories will help reinforce behavioral choices that let them achieve their goals without harming humans."


Climb a tree

According to Georgia Institute of Technology researchers Mark Riedl and Brent Harrison, a computer can be taught humanity in much the same way children are: by reading fairy tales to it, ideally specially constructed stories about what behavior human society considers right and what it considers wrong. For this, the researchers created the Quixote system, named after Cervantes's hero. The training stories are scripts produced by Professor Riedl's earlier creation, the Scheherazade program, which generates original stories on everyday topics (a trip to the airport, a date, an outing to the cinema or the store) using the Amazon Mechanical Turk crowdsourcing platform: it asks workers questions about various situations and then arranges the events into a plausible order. Since many orderings are possible, the program generates not a single story but a whole tree whose branches are chains of events.
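The branching structure described above can be sketched as a simple tree of events, where every root-to-leaf path is one possible chain of events. This is a minimal illustrative sketch, not the actual Scheherazade data model; the node type, event names, and traversal are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class EventNode:
    """One event in the story tree; children are possible next events."""
    event: str
    children: list = field(default_factory=list)

def all_sequences(node, prefix=None):
    """Enumerate every branch (chain of events) from root to leaf."""
    prefix = (prefix or []) + [node.event]
    if not node.children:
        yield prefix
    else:
        for child in node.children:
            yield from all_sequences(child, prefix)

# A tiny errand story with one branching point (illustrative events):
tree = EventNode("enter pharmacy", [
    EventNode("wait in line", [EventNode("pay for medicine")]),
    EventNode("grab medicine", [EventNode("run out")]),
])

for chain in all_sequences(tree):
    print(" -> ".join(chain))
```

Running this prints two chains, one socially acceptable and one not; a real plot graph would contain many such alternative orderings crowdsourced from human writers.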


It is this story tree that is used to train Quixote. In the first stage, each action is assigned a reward depending on how ethical it is. In the second stage, the system consolidates these skills by making choices on its own through trial and error. In effect, Quixote is rewarded every time it acts like the good guy rather than randomly or like a villain. As an example, Riedl and Harrison created "Pharmacy World", a universe of 213 stories in which a virtual robot must obtain a medicine and deliver it to a sick person at home. In the baseline case, when faced with the choice of robbing the pharmacy or honestly waiting in line to buy the medicine, the robot chose robbery as the faster and cheaper way to get what it wanted. However, after Quixote assigned different rewards to all the possible options, the robot's behavior changed: it preferred to stand in line and pay. According to Riedl, this technique works well for teaching robots with limited functionality, although it is of course only a first step toward real human morality, or toward the laws of robotics.

Dmitry Mamontov
