Why Do You Misunderstand The Robo-apocalypse - Alternative View

The fear of artificial intelligence has long been ingrained in popular culture. The finest minds of humanity, from Stephen Hawking to Elon Musk, have spoken about the existential threat of the digital mind to humanity.

Stuart Russell, a professor at the University of California, Berkeley, speaks of it too, but in his vision of the future there are no evil robots rebelling against human domination.

According to Russell, at some point machines will simply become so effective that they will destroy us in the course of performing the very tasks we gave them.

Russell is a machine learning expert and the author of Human Compatible: Artificial Intelligence and the Problem of Control.

"The traditional Hollywood storyline is that machines spontaneously become conscious and then decide they hate people and want to kill everyone," notes Professor Russell.

But this, in his opinion, is the wrong vision of the robo-apocalypse, because machines do not have human emotions.

“We need to worry not about evil machines becoming self-aware, but about the competence of machines carrying out the tasks we set them,” the scientist argues.


Too Competent

In an interview with the BBC's Today programme, Professor Russell described a hypothetical situation in which artificial intelligence could threaten the existence of humanity.

Imagine a super-powerful artificial intelligence that can control the climate of the planet. It is tasked with bringing carbon dioxide levels in the atmosphere back to pre-industrial levels.


“The system will conclude that the most effective way to do this is to destroy people, since it is people who produce the carbon dioxide,” says the professor.

“You can try to get around this, for example, by giving it the condition not to get rid of people. What will the system do then? It will make sure that fewer and fewer children are born, until people disappear altogether.”

This is an example of where artificial intelligence can lead when given ill-conceived human instructions.
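
Russell's climate example can be reduced to a few lines of code. The sketch below is our own toy illustration, not anything from Russell's book: the action names and numbers are invented purely to show how an optimizer scored only on CO2 reduction picks the destructive option, and how an ad-hoc patch merely steers it to the next loophole.

```python
# Toy illustration of a misspecified objective. Every action and number
# here is hypothetical; the point is only the selection logic.

ACTIONS = {
    # action: (co2_reduction, harm_to_humans)
    "plant_forests":       (0.3, 0.0),
    "capture_carbon":      (0.5, 0.0),
    "eliminate_humans":    (1.0, 1.0),  # removes the main emitters
    "suppress_birth_rate": (0.9, 0.8),  # the loophole Russell describes
}

def naive_objective(action):
    """Scores CO2 reduction only; harm to humans is invisible to it."""
    co2_reduction, _harm = ACTIONS[action]
    return co2_reduction

def patched_objective(action):
    """Adds one ad-hoc rule: direct elimination is forbidden."""
    if action == "eliminate_humans":
        return float("-inf")            # the patch
    return naive_objective(action)      # ...but harm is still not priced in

print(max(ACTIONS, key=naive_objective))    # -> eliminate_humans
print(max(ACTIONS, key=patched_objective))  # -> suppress_birth_rate
```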

Superintelligence

The AI systems that exist today have narrowly targeted applications: they are designed to solve clearly defined problems in a single domain, say researchers at the Centre for the Study of Existential Risk at the University of Cambridge.

The triumphant moment for this AI philosophy was the victory of the Deep Blue computer over the reigning world chess champion Garry Kasparov in 1997.

Deep Blue's victory over Kasparov in 1997 was an AI triumph.

But Deep Blue was created exclusively to play chess. At checkers it would have lost to a fifth grader.

Recent advances in artificial intelligence have brought us very different programs.

For example, the AlphaGo Zero program taught itself to play Go without any data from games played by humans, and in just three days of playing against itself reached a level unattainable for any human.

Deep learning algorithms were used to create AlphaGo Zero. This means that relatively little human work was required to develop the program.

The same approach was then generalized in its successor, AlphaZero, which also taught itself to play chess and shogi at a superhuman level.

Recall that these programs are entirely self-taught. That alone may give some people pause.
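
To make "playing against itself" concrete, here is a minimal self-play learner. This is our own sketch and far simpler than AlphaGo Zero: the real system pairs a deep neural network with Monte Carlo tree search, while this one just learns a table of position values for tic-tac-toe. What it shares with AlphaGo Zero is the principle: no human game records, only improvement from games against its own current self.

```python
# Minimal self-play learning for tic-tac-toe (a stand-in for Go).
# V maps a board to its estimated value for the player who just moved.
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
V = {}

def winner(b):
    for i, j, k in LINES:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return "draw" if " " not in b else None

def choose(b, mark, eps):
    moves = [i for i, c in enumerate(b) if c == " "]
    if random.random() < eps:
        return random.choice(moves)        # explore a random move
    def after(m):
        nb = list(b); nb[m] = mark
        return V.get("".join(nb), 0.0)     # value of the resulting position
    return max(moves, key=after)           # exploit what has been learned

def self_play(episodes=20000, eps=0.2, alpha=0.3):
    for _ in range(episodes):
        b, mark, history = [" "] * 9, "X", []
        while winner(b) is None:
            b[choose(b, mark, eps)] = mark
            history.append("".join(b))
            mark = "O" if mark == "X" else "X"
        # Back up the final result through the game, flipping the
        # perspective at every ply (my win is your loss).
        reward = 0.0 if winner(b) == "draw" else 1.0
        for state in reversed(history):
            V[state] = V.get(state, 0.0) + alpha * (reward - V.get(state, 0.0))
            reward = -reward

self_play()
print(f"positions evaluated: {len(V)}")
```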

“As AI becomes more powerful and more general-purpose, there is a risk that it will become superintelligent, that is, that it will surpass humans in many or all domains,” warns the Centre for the Study of Existential Risk.

We don't know what we want

Professor Russell believes that simply giving AI very specific tasks is not the way out, because humans themselves are unable to formulate those tasks well enough.


“We don’t realize that we don’t like something until it happens,” he says.

“We need to change the very basic approach to creating AI systems,” adds the professor. He believes that giving robots a task, which they then perform, is fundamentally wrong.

“The system must understand that the goal is not fully known to it,” he says. “When systems are created this way, they genuinely depend on the person. They start asking permission before doing something, because they are not sure whether it is what you want.”

Most importantly, the professor says, such systems will not resist being switched off, because they are designed not to do what you don't like.
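
The decision logic Russell describes can be shown with a toy calculation. This is our sketch, not Russell's formal "off-switch" model: the two hypotheses and all the payoffs are invented. The point is that an agent maximizing the human's payoff, while admitting it is uncertain about what the human wants, finds that asking first beats acting.

```python
# Toy model of an agent that is uncertain about the human's objective.
# All probabilities and payoffs here are hypothetical.

belief = {"human_wants_it": 0.6, "human_opposes_it": 0.4}   # agent's uncertainty
payoff = {"human_wants_it": +10, "human_opposes_it": -100}  # human's payoff if the agent acts

def expected_value_of_acting():
    return sum(belief[h] * payoff[h] for h in belief)

def expected_value_of_asking(cost_of_asking=1):
    # Asking reveals the truth, so the agent then acts only when wanted.
    return belief["human_wants_it"] * payoff["human_wants_it"] - cost_of_asking

print(f"act now:   {expected_value_of_acting():+.1f}")   # 0.6*10 + 0.4*(-100) = -34.0
print(f"ask first: {expected_value_of_asking():+.1f}")   # 0.6*10 - 1         =  +5.0

# Because the agent is scored on the human's payoff and admits it might be
# wrong, deferring dominates; by the same logic it has no incentive to
# resist being switched off.
```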

Aladdin's lamp

AI today is built as if it were a genie in a lamp, says Professor Russell: you rub the lamp and say, “I want this and that to happen.”

In Stanley Kubrick's 2001: A Space Odyssey, a superintelligent computer revolts against the humans who decide to shut it down.

And if artificial intelligence is powerful enough, it will do exactly what you ask.

"The problem with gin is that the third wish is always to cancel the first two because the person has phrased them incorrectly."

"A machine that pursues the wrong goal becomes de facto an enemy of humanity, and this enemy is much stronger than us," the scientist concludes.