Stephen Hawking Called Artificial Intelligence The Greatest Mistake In History - Alternative View


Video: Stephen Hawking: 'AI could spell end of the human race'

The famous British physicist Stephen Hawking, in an article inspired by the science fiction film "Transcendence" starring Johnny Depp, warned that underestimating the threat posed by artificial intelligence could be the biggest mistake in human history.

In the article, co-authored with computer science professor Stuart Russell of the University of California, Berkeley, and physics professors Max Tegmark and Frank Wilczek of MIT, Hawking points to recent advances in artificial intelligence, citing self-driving cars, the Siri voice assistant, and the supercomputer that defeated a human champion on the TV quiz show Jeopardy!

“All these achievements pale in comparison with what awaits us in the coming decades. The successful creation of artificial intelligence will be the biggest event in human history. Unfortunately, it may also turn out to be the last, unless we learn to avoid the risks,” Hawking is quoted as saying by the British newspaper The Independent.

The professors write that in the future, nothing and no one may be able to stop machines with superhuman intelligence from improving themselves. This would trigger the so-called technological singularity, a period of extremely rapid technological development. In the film starring Depp, the word "transcendence" is used in this sense.

Imagine such technology surpassing humans: outsmarting financial markets, out-inventing human researchers, manipulating human leaders, and developing weapons beyond our understanding. While the short-term effect of artificial intelligence depends on who controls it, the long-term effect depends on whether it can be controlled at all.

It is difficult to say what consequences artificial intelligence may have for humanity. Hawking believes that little serious research is devoted to these questions outside of a handful of non-profit organizations such as the Centre for the Study of Existential Risk at Cambridge, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.

According to him, each of us should ask what we can do now to avoid the worst-case scenarios of the future.