Elon Musk's Paranoia Begins To Annoy Artificial Intelligence Researchers - Alternative View


The fear of superintelligent machines is real to Elon Musk. The founder of Tesla and SpaceX has spoken about it many times, but perhaps his most emotional comment on the issue was made to American senators, when he stated that artificial intelligence (AI) poses "a fundamental risk to the existence of human civilization." The comment attracted the attention not only of government officials but also of AI researchers themselves. And, apparently, they don't really agree with him.

“While there is a need for dialogue about the effects that AI technology can bring, it seems that the bulk of Mr. Musk's concern is centered around contrived scenarios of AI taking over the world,” said Arizona State University computer scientist Subbarao Kambampati in an interview with Inverse.

“Mr. Musk's panic distorts the meaning of such public discussions. And that is very sad.”

Kambampati, who also serves as president of the Association for the Advancement of Artificial Intelligence (AAAI) and a trustee of the Partnership on AI, has not been the only person to respond to Musk's recent statements about AI. François Chollet and David Ha, deep learning researchers at Google, also took the opportunity to speak up for AI and machine learning (ML) on their Twitter accounts.

“In my opinion, the biggest danger is mass population control through message targeting and entire armies of propaganda bots,” Chollet wrote in one post.

“Artificial intelligence and machine learning have the potential to exacerbate several existing problems. But it's hard to believe that they are capable of creating new ones,” Chollet added in another post.

Researcher Pedro Domingos of the University of Washington offered no argument at all in reply to Musk's statements, responding instead with a single laconic tweet: "Sigh."

Is the AI threat real?


Google's David Ha also questioned whether Musk, despite his involvement with OpenAI, Tesla's work on self-driving technology, and his recently launched Neuralink venture, all of which give him access to advanced AI (as he himself has stated), actually understands what he is talking about.

“I, too, have access to state-of-the-art AI technologies and, to be honest, I'm not very impressed with what they are capable of,” Ha wrote on his Twitter page.

Kambampati, in turn, pointed to a 2016 report on the development of AI technologies published by the administration of former President Barack Obama, which made timely and constructive recommendations on policy for regulating artificial intelligence research. The White House report contained "no mention of Mr. Musk's concerns about artificial superintelligence," Kambampati told Inverse, and in his view this is fairly convincing evidence that such fears cannot be considered well-founded.

As Futurism notes, it seems rather unfair that Musk alone gets all the attention, since he is far from the only prominent figure in science and technology to draw attention to the possible threats of superintelligence. The famous physicist Stephen Hawking, for example, never hesitated to comment on the prospect of an AI apocalypse. Yet the most important question remains: should we really be afraid of AI?

If we are talking about its current state, the answer is no. At the same time, the rapid development of technology in recent years has led some experts to believe that we are drawing ever closer to the technological singularity, the point at which computers surpass their creators. But it is clear that today's AI and robotics are nowhere near the killer robots of science fiction. And in fact, it is not clear whether they will ever come close at all.

Famed futurist and singularity enthusiast Ray Kurzweil believes the singularity itself is not something to fear; what should worry us is how people might use AI. That is why the best course for us now is to form a clear, shared understanding of what we expect from AI research. Musk is definitely right about one thing: we need to develop ways of regulating this whole process. But Kambampati, Chollet, and Ha are also right to point out that there is no reason to panic.

Nikolay Khizhnyak