"We Need To Think Now How To Secure Artificial Intelligence" - Alternative View


Video: The danger of AI is weirder than you think | Janelle Shane

Stephen Hawking continues the "crusade" against artificial intelligence (AI).

During his highly anticipated Ask Me Anything (AMA) session on Reddit, he wrote that work on developing an AI safety protocol should be done now, not in the distant future.

“Our goal should not be to create abstract artificial intelligence, but to create useful intelligence,” wrote the famous physicist. “It will take decades to figure out how to do this, so let's start researching this issue now, not the night before the first advanced AI is switched on.”

Hawking has been using his fame to support the AI safety movement since 2014, when he co-wrote an editorial with other researchers warning of the existential threat that advanced machines pose to humanity.

Other notable tech figures, including Elon Musk and Steve Wozniak, have joined Hawking. Earlier this year, all three signed an open letter calling for a ban on autonomous weapons, or combat robots.

Expectations that humanity will soon create artificial intelligence have existed since the 1950s and have yet to be borne out. Nevertheless, most AI researchers believe that machines with human-level intelligence will appear within this century.


Photo: ANDREW COWIE / AFP


Hawking warns that the portrayal of artificial intelligence in science fiction films distorts the real danger posed by machines. Movies like The Terminator depict demonic killer robots bent on destroying humanity for motives that a real artificial intelligence would not have. Such films also suggest that AI risk would be easy to mitigate (for example, by simply not building killer cyborgs).

“The real threat from AI is not that it is evil, but that it is very capable. An artificial superintelligence will be extremely good at accomplishing its goals, and if those goals are not aligned with ours, we will have problems,” Hawking wrote. “You probably don't hate ants, but if you are in charge of a green energy project and the area where the anthill sits needs to be flooded, too bad for the ants. Let's not allow humanity to end up in the position of those ants.”

Nick Bostrom, a philosopher who studies AI, imagines doomsday scenarios in which a program pursues its goal in an unintended direction. For example, an artificial intelligence tasked with developing a plan to protect the environment might decide that the simplest way to solve the environmental problem is to destroy humanity.

Elon Musk has already committed $10 million to AI safety research to prevent the emergence of a destructive artificial intelligence like Skynet from The Terminator. But Hawking wants people to change their overall perception of AI.

“Please tell your students to think not only about how to create AI, but also about how to ensure that it is used for beneficial purposes,” Hawking replied to a teacher who lectures on artificial intelligence. “The emergence of an AI that surpasses humans will be either the best or the worst event in human history. Therefore, it is important that everything goes as it should.”
