To Avoid Death, Humanity Needs To Lock Up Artificial Intelligence - Alternative View


Superintelligent computers or robots that threaten the existence of the human race are a favorite plot of science fiction. In reality, scientists say, such a doomsday scenario can be prevented only by creating a virtual "prison" to contain artificial intelligence (AI). And this must be done before it learns to become self-aware and improve itself.

If we let this genie out of the bottle, it could turn from an oracle that solves our problems into an apocalyptic threat, says Roman Yampolsky, a computer scientist at the University of Louisville, Kentucky. The environment in which the AI is contained must be carefully designed so that it cannot break free through threats, bribery, temptation or hacking.

“It can invent many ways of exerting influence, executing complex social-engineering schemes and using existing hardware components for purposes we cannot even imagine,” Yampolsky believes. “Such software has unlimited possibilities for infecting computers and networks. An AI could find ways to influence the human psyche, and bribe or blackmail the people in contact with it.”

The emergence of a new area of research devoted to building a "prison" for artificial intelligence has side benefits as well: such studies are very useful for improving computer security and cryptography, Yampolsky is sure. His proposals were detailed in the March issue of the academic Journal of Consciousness Studies.

How to set up a trap for Skynet

For a start, the artificial intelligence can be enclosed inside a "virtual machine". Letting it run within an ordinary operating system while limiting its access to the host computer's software and hardware makes it far more harmless. At the very least, a clever AI would be unable to send covert messages to its human sympathizers by tapping out Morse code with the cooling fans.

Putting the artificial intelligence on a computer with no Internet access is a good way to thwart its insidious plans and protect the Earth from the likes of you and me. Should anything else go wrong, researchers can always slow the AI's thinking by throttling the computer's processing speed, regularly pressing the "reset" button, or cutting the AI's power supply.
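The "reset button" idea above can be sketched in miniature: a supervisor process runs untrusted code in a separate, network-free child process and simply kills it when it exceeds a time budget. This is a toy illustration of the containment principle, not a real AI sandbox; `run_contained` and its parameters are names invented here for the example.

```python
import subprocess
import sys

def run_contained(code: str, timeout_s: float = 1.0) -> str:
    """Run untrusted code in a separate child process with a hard timeout.

    A minimal sketch of the 'slow it down / pull the plug' idea from the
    article: the supervisor keeps the kill switch and can terminate the
    contained process at any moment.
    """
    try:
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True,
            text=True,
            timeout=timeout_s,  # the "reset button": kill the process if it runs too long
        )
        return result.stdout.strip()
    except subprocess.TimeoutExpired:
        # The runaway process is terminated by the supervisor.
        return "TERMINATED"

# A well-behaved task completes normally...
print(run_contained("print(2 + 2)"))      # → 4
# ...while an endless loop is cut off by the timeout.
print(run_contained("while True: pass"))  # → TERMINATED
```

A real containment scheme would of course also need to strip network access, filesystem access and side channels, which a timeout alone does not address.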


These security measures are designed to deal with highly intelligent and dangerous computer viruses, but how effective they actually turn out to be is anyone's guess.

The catch-22 (after the novel by the American writer Joseph Heller) is that we cannot run full-scale tests of our ideas until AI has actually been developed, yet to make AI safe the appropriate measures must be designed in advance, says Yampolsky. "The best option is to impose restrictions on the use of AI systems and then, as necessary, change those restrictions in line with the AI's growing capabilities."


The scientist Roman Yampolsky has suggested using the "@" sign, or another symbol in the spirit of those used today to denote radiation or biological hazards, to mark the presence of a danger associated with artificial intelligence.

Never set a human guard over the machine

In the course of a seemingly innocent conversation with a guard, nothing would stop the AI from deploying psychological techniques such as friendship or blackmail. A person could be bribed with promises of perfect health, immortality, or even the return of dead relatives and friends, or simply coerced with threats.

The safest way to communicate with an AI, Yampolsky explains, is to have it answer a specific scientific or technical question by choosing from a set of pre-approved options. That keeps the super-smart oracle on a short leash.
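The "short leash" above amounts to restricting the output channel: any reply outside the approved option set is discarded, so each question leaks at most a few bits. A minimal sketch, in which `query_ai` stands in for whatever interface the contained system exposes (an assumption for illustration, not a real API):

```python
def constrained_oracle(query_ai, question: str, options: list[str]) -> str:
    """Ask the contained AI a question, but accept only a pre-approved answer.

    `query_ai` is a hypothetical callable representing the contained system.
    Any free-form output is rejected: the channel carries no more than
    log2(len(options)) bits of information per question.
    """
    answer = query_ai(question, options)
    if answer not in options:
        raise ValueError("Answer outside the approved option set; discarded.")
    return answer

# A toy stand-in for the contained system, which always picks the first option:
def toy_ai(question, options):
    return options[0]

print(constrained_oracle(toy_ai, "Is the conjecture true?", ["yes", "no", "unknown"]))
# → yes
```

The design choice here is that safety lives entirely in the gatekeeper, not in the AI: even a deceptive answerer can only select among choices the humans wrote themselves.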

Many researchers believe that, despite all precautions, keeping a smart AI locked up indefinitely will not work. A recent experiment by Eliezer Yudkowsky, a researcher at the Singularity Institute for Artificial Intelligence, showed that even human-level intelligence can deceive a person and "escape" from confinement.

Nevertheless, Yampolsky insists on keeping AI in strict isolation: it is still better than giving up and granting it complete freedom. But if AI development reaches the point where it can manipulate a person through the subconscious, foresee the future, or wield telepathy or telekinesis, the consequences will be unpredictable.

“If such an AI self-improves and reaches a level far exceeding the capabilities of human intelligence, the consequences cannot even be imagined,” Yampolsky warns.