A Neuroscientist Talks About A Future With Superhuman Intelligence - Alternative View

Are you waiting for the future to arrive? That is a mistake, says scientist and writer Sam Harris.

Superhuman artificial intelligence is only a matter of time, according to Sam Harris, a Stanford graduate who holds a Ph.D. in neuroscience from the University of California, Los Angeles. Five of Harris's books have become New York Times bestsellers. “It's pretty hard to imagine that they won't destroy us or force us to destroy each other,” he says.

How do we prepare for this? According to Sam, our current attitude toward the issue - broad enthusiasm and passive expectation - is fatally flawed.

“If you and I are at all alike, then you probably also enjoy dreaming about these things. But that attitude is fundamentally wrong, because these things should worry you,” he notes. These remarks come from his TED talk on artificial intelligence.

"It seems that we are unable to adequately respond to all the dangers that await us in the future."

Inevitability of superhuman AI

According to Sam, unless humanity's progress is interrupted by a new world war, an asteroid striking the Earth, or some other unexpected catastrophe, superhuman artificial intelligence will be created. Harris's logic is as follows:


1. We like smart things.

2. Smart things are useful because they make life easier (up to a certain point).

3. Thus, as long as we have the opportunity to make things smarter, we will do it.

At some point, our smart software will reach a level of development at which it can independently create even smarter software - and at that point, we are finished. The pursuit of digital intelligence will lead to self-replication, and the human brain will become a biological relic.

Superhuman AI in the near future

Sam identifies two most likely paths for the logical development of superhuman AI:

1. Independent evolution. This path resembles the relationship between humans and ants. You don't hate ants. You may even try not to step on them when you see them. They can barely harm you. However, if they get in your way - for example, if they get into your house - you will destroy them. AI may treat us the same way.

2. Collaborative evolution. Perhaps with the help of brain implants (neuroprostheses) made by ambitious companies like Kernel, we can merge a superhuman mind with our own. “Most likely, this is the only safe and viable path for us. However, you need to take every precaution before connecting anything to your head,” notes Harris.

Artificial intelligence, you and me, and the spectrum of minds

To make it easier for us to grasp how soon AI will become smarter than humans, Sam suggests picturing a curve that shows the range of possible intelligence. Here we are. Here are the chickens, a couple of steps behind us. How far are we from the far end of the curve? Unknown - and perhaps we are not the species destined to find out. Harris is confident that, for the sake of optimization, artificial intelligence will explore the further reaches of this curve as quickly as possible. “Most likely, we are nowhere near the peak of possible intelligence. It is imperative to understand this, because it is what puts us at risk and makes our misconceptions about that risk so dangerous,” he says.

Sam's talk does not address the possibility that we are already living in some kind of simulation, as Elon Musk often discusses. (Elon also fears the uncontrolled development of AI and launched the OpenAI initiative to address many of the problems Sam talks about.) Sam also declines to speculate that the merging of artificial and human intelligence might be a new stage in our evolution, or that we will turn into cyborgs, as Yuval Noah Harari of the Hebrew University of Jerusalem suggests. In addition, Harris does not believe that the emergence of superhuman artificial intelligence can be prevented - it will certainly come. But how exactly it will happen, how we will prepare for it, and how decisively we will act are questions the scientist urges us to think about as soon as possible: “Stuart Russell has a great analogy. He suggests that we imagine receiving a message from an extraterrestrial civilization that says: ‘Inhabitants of the Earth, we will arrive on your planet in 50 years. Get ready.’ After that, would we just count the months and wait for the appointed hour? We would feel a far greater urgency to act than we do now.”

Alexey Zenkov
