Will The “Three Laws Of Robotics” Protect Us? - Alternative View


More than half a century has passed since Isaac Asimov came up with his famous Three Laws of Robotics - a set of rules that any self-respecting robot must follow. Although they began as a mere literary device, the Three Laws became the original recipe for avoiding the "robopocalypse". Fortunately, there are experts who have examined whether Asimov's guarantees have stood the test of time. Unfortunately, they all say no.

To begin with, let's recall the three laws themselves:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Later, Asimov added a fourth, or Zeroth, Law, which took priority over the others:

0. A robot may not harm humanity or, by inaction, allow humanity to come to harm.


In Asimov's fictional universe, almost all robots had to follow these laws. They were not mere suggestions or guidelines; the laws were built directly into the robots' software. Moreover, they could not be circumvented, rewritten or revised.
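
To see what "built into the software" implies, here is a minimal, purely illustrative Python sketch - not Asimov's design and not any researcher's proposal. Every name in it is invented, and the genuinely hard part is hidden in the input flags, which is exactly where the ambiguity discussed below creeps in:

# A naive action filter hard-coding the Three Laws. The boolean flags
# stand in for judgments - "does this harm a human?" - that no one
# actually knows how to compute reliably.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool           # would the action injure a human?
    inaction_allows_harm: bool  # would *not* acting let a human come to harm?
    ordered_by_human: bool      # did a human order this action?
    endangers_self: bool        # does the action risk the robot itself?

def permitted(a: Action) -> bool:
    # First Law: never injure a human; refusing to act is also forbidden
    # when inaction would allow harm.
    if a.harms_human:
        return False
    if a.inaction_allows_harm:
        return True
    # Second Law: obey human orders unless they conflict with the First Law.
    if a.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not a.endangers_self

print(permitted(Action(False, False, True, False)))  # True: obeying a harmless order
print(permitted(Action(True, False, True, False)))   # False: the order would cause harm

The control flow is trivial; everything difficult lives in deciding what counts as "harm" and who counts as "human" before the flags are ever set.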

As Asimov himself showed, imperfections, loopholes and ambiguities in these laws often led to strange and illogical robot behavior. The laws were too vague and often failed to adequately define and distinguish "humans" from "robots". Robots could also unknowingly break the laws when full information was unavailable to them. And a sufficiently cunning robot or artificial intelligence would have to be prevented from reprogramming its own core.

The year is 2014, and much that is commonplace in our lives would have seemed outright fantasy in the last century. Many people felt that Asimov's laws should have remained a literary device. But Asimov himself noted back in 1981 that his laws could work. In Compute! magazine he wrote the following:

“When someone asks me what it would be like if my Three Laws of Robotics were actually used to define the behavior of robots, the answer is ready. Provided, of course, that robots are flexible and versatile enough to exhibit different behaviors. My answer is: yes, the Three Laws are the only way rational human beings can deal with robots - or with anything else.”

Almost thirty years later, we have come closer to a reality in which we will have robots - or, more precisely, the artificial intelligence that controls them - flexible and versatile enough for different courses of behavior. It is only a matter of time before the machine surpasses man in all respects, from physical strength to the power of imagination.

The frightening thing is that there is practically no margin for error. If an artificial superintelligence is poorly programmed or indifferent to humans, the result will be disaster. We need to make sure that artificial intelligence is safe if we want to survive its emergence.

Two artificial intelligence theorists took up the question "Can Asimov's Three Laws help?": Ben Goertzel (Aidyia Holdings) and Louie Helm, deputy director of the Machine Intelligence Research Institute (MIRI) and executive editor of Rockstar Research magazine. After talking with them, it became clear that Asimov's laws generally cannot cope with the task assigned to them, and that if we want to create safe AI, we will have to develop something entirely different.

An Asimovian future?


The conversation with Goertzel and Helm began with where Asimov was right in predicting the future, and where he was wrong.

“I think the type of robots that Asimov envisioned will be possible in the not too distant future,” Goertzel replies. “However, in most of his fictional worlds the writer assumed that humanoid robots would be the pinnacle of robotics and artificial intelligence engineering. That is unlikely. Very soon after reaching the status of Asimov's robots, the creation of artificial superintelligence and super-robots will become possible.”

Thus, the typical future world of Asimov's robot stories, according to Goertzel, would be similar to the one we live in today, but with intelligent robots walking the streets.

"It is unlikely that this will happen, and if it does, it will not be delayed for long."

Helm sees things quite differently.

“The main issue, and in my opinion the most important one for humanity, is not the moral regulation of a gigantic number of half-intelligent humanoids, but ultimately the development of advanced forms of artificial intelligence (in any body). This development of superintelligence is a filter that humanity must pass through in the end. That is why developing a safety strategy for this transition is so important. It would seem completely strange to me if robots, androids or 'emulations' existed for a decade or more before humanity faced the real problem of developing machine ethics for superintelligence.”

A good start?


Considering that Asimov's Three Laws of Robotics were the first sincere attempt to solve a very serious problem - the problem of the safe behavior of machines with artificial superintelligence - it is worth looking for the respects in which the laws might still be effective (or at least inspiring).

“To be honest, I find no inspiration in these three laws of robotics,” says Helm. “The trouble is that they simply do not work as a basis for machine ethics. The Three Laws may be widely known, but in reality they are useless as a foundation for programming.”

“Deontology, the rule-based system of ethics they rest on, is known to be an unreliable foundation for ethics. There are a number of philosophers trying to fix the problems of deontology, but they are for the most part the same people who promote 'intelligent design' and 'divine intervention'. Nobody takes them seriously.”

The disadvantages of Asimov's Three Laws of Robotics boil down to the following:

They are adversarial in nature

They are based on an obsolete ethical theory (deontology)

They do not even work in fiction

Goertzel agrees:

“The purpose of the Three Laws was to be broken in interesting ways; that is why the stories about them are so entertaining. So the Three Laws can serve only as a moral example of how not to do it. If we take them as a basis, loopholes will inevitably be found.”

Goertzel believes these laws would not work in reality, since the terms they use are ambiguous and remain subject to interpretation - which means they are extremely dependent on whoever does the interpreting.

Prejudice against robots?

Another aspect (and potential flaw) of the Three Laws is their apparent chauvinism: the assumption that robots, despite their superior power, should remain subservient to humans, human needs and human priorities.

“Asimov's society of the future consists entirely of chauvinists: people have far more rights than robots. The Three Laws of Robotics were created to maintain precisely that social order.”


Helm looks at the problem a little differently, arguing that if we ever find ourselves in such a situation, it will in itself be evidence that we have gone too far.

“I think it would be unwise to design an artificial intelligence system or robot to be self-aware. And unlike the movies or books in which the creators of artificial intelligence 'accidentally' arrive at intelligent machines, I do not believe this can happen in real life. It would take too much effort and knowledge. And most AI designers are ethically savvy people, so they will avoid creating what philosophers call 'morally significant beings', especially when they could just as easily create an advanced machine that has no ethical inclinations.”

Helm is not worried about the need to develop asymmetric laws governing the value of robots relative to humans, arguing (and hoping) that future creators of artificial intelligence will rely on certain ethical constraints.

“I do think that humans are made of atoms, so in theory an engineer could create a synthetic life form, or a robot, with moral significance. I would like to think that no one will do it. I suspect most people feel the same. But inevitably there will be some fool who wants the fame of a pioneer, even if it is unethical and stupid.”

Three Laws of Robotics 2.0?

Given the obvious shortcomings of Asimov's Three Laws of Robotics, io9 asked: can they be corrected or improved? In fact, many science fiction writers have tried, tweaking the laws over the years.

“No,” says Helm. “There are no patches for the Three Laws.”

Besides being contradictory, the laws are adversarial in nature.

"I am a proponent of machine ethics approaches that are more cooperative, more consistent, and more normative, which means they can recover from misunderstandings or correct incorrect programming."

Goertzel echoes Helm.

“Defining a set of ethical precepts as the core of machine ethics will be hopeless if the machine is based on flexible artificial general intelligence. If the AGI is conceived as intuitive, flexible and adaptive, then ethical precepts will be useful to it only as a rough guideline for applying its own ethical intuition. In that case the precepts do not become the foundation of the ethical system, but only one aspect of it. This is how it works in humans: the ethical principles we learn do work, but not as hard rules; they mainly nudge our intuition and ethical instincts. We are practically independent of ethical precepts.”
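
One way to picture Goertzel's distinction is a purely hypothetical Python sketch contrasting a precept enforced as a hard rule with the same precept used as a rough guideline that merely nudges the system's own learned judgment. All names and numbers here are invented for illustration:

# Hypothetical contrast: precept-as-rule vs. precept-as-guidance.
def intuition_score(action: str) -> float:
    # Stand-in for the system's own learned ethical intuition.
    learned = {"help_human": 0.9, "lie": 0.3, "lie_to_protect": 0.8}
    return learned.get(action, 0.5)

# A rough rendering of the precept "do not lie" as a score penalty.
PRECEPT_BIAS = {"lie": -0.2, "lie_to_protect": -0.2}

def rule_based(action: str) -> bool:
    # Precept as hard rule: anything that counts as lying is forbidden.
    return not action.startswith("lie")

def guideline_based(action: str) -> bool:
    # Precept as guidance: it shifts the score, but a strong enough
    # intuition can outweigh it.
    return intuition_score(action) + PRECEPT_BIAS.get(action, 0.0) > 0.5

print(rule_based("lie_to_protect"))       # False: the rule cannot bend
print(guideline_based("lie_to_protect"))  # True: intuition outweighs the nudge

In the second version the precept is, as Goertzel puts it, one aspect of the system rather than its foundation.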

How do we create safe artificial intelligence?

Given the inadequacy of the law-based approach, io9 asked Goertzel and Helm about modern approaches to the problem of "safe AI".

"Very few general artificial intelligence researchers believe that it is possible to create a system that is completely secure," Herzel says. "But this does not bother the majority, since nothing can be guaranteed in this life at all."

Goertzel believes that once we create an artificial general intelligence system, or its embryo, we will be able to conduct research and experiments that will tell us far more about the ethics of AI than we currently know.

“Hopefully, in this way we will be able to formulate good theories of the ethics of artificial intelligence that allow us to delve deeper into the subject. Right now it is difficult to theorize about the ethics of AI at all, because we do not simply lack good theories - we have no theories whatsoever.”

“And to people who have watched too many Terminator films, the process of creating artificial intelligence may seem frightening, because they miss the word 'primitive' in this context. Yet the most radical changes have happened exactly this way.”

"When a group of clever cavemen invented language, they did not expect the development of a solid formal theory of language that could help predict future changes caused by the emergence of language in society."

Before we can treat the development of the next "three laws" as a technical problem, a great deal of research is needed. And on this point Helm and Goertzel agree.

“My MIRI colleague Luke Muehlhauser summed up our reasoning as follows. He said that problems often move from philosophy to mathematics and then to engineering. Philosophy often asks the right questions, but in the most impractical way; no one can say for sure whether any progress has been made in resolving them. If we can reformulate the important philosophical problems of reason, identity and value into precise formulas that a mathematician can handle, then a model can be built. One fine day it will come down to engineering.”

Helm considers the problem hard for both science and philosophy, but believes progress is possible.

“I am skeptical that philosophy will be able to solve a problem it has struggled with for more than 3,000 years. But we also cannot simply sit down and start building mathematical models of questions that are not yet technically precise. We still need a great deal of theoretical and even practical groundwork.”