At The Investor Meeting, Elon Musk Tried To Explain His Concerns About AI - Alternative View

At a meeting with Tesla investors this Wednesday, CEO and founder Elon Musk once again voiced his concerns about AI development. This time, however, Musk was more restrained than he had been in his remarks to US political leaders in the middle of last month, and he tried to explain that he is not opposed to AI development as such.

“I’m not advocating that we stop developing AI, whatever the press writes about me,” Musk said in response to a question on the topic raised at the meeting.

He noted that he is a co-chairman and co-founder of OpenAI, a non-profit organization whose stated goal is to "find and promote ways to develop safe artificial general intelligence."

In line with OpenAI's goals, Musk's concern centers on the need to develop clear standards for AI development.

"The creation of artificial intelligence can be accompanied by some risks, so I believe that the public has the right to at least be informed by the government about these risks, because one of the tasks of the government is precisely to take care of their society," Musk commented at the meeting.

While many artificial intelligence experts have voiced displeasure at Musk's apparent alarmist tone in his interactions with American leaders, plenty of others share the Tesla CEO's heightened caution. Some groups have even drawn up basic sets of rules that would be worth following when developing AI, such as the IEEE's ethical principles and the so-called Asilomar AI Principles presented at the Beneficial AI conference in 2017.

The main concern, Musk clarified, is not so much the development of AI itself as how it will be used and by whom.

“I believe we can get a lot of benefits from artificial intelligence. We just need to make sure that AI is really useful and not a tool for doing something stupid,” Musk added.


And if something does go wrong, OpenAI will not be Musk's only means of giving humanity a chance to defend itself. His other project, Neuralink, for example, aims to merge the human mind with the machine. If that is not enough, SpaceX can come to the rescue with its promise to relocate humanity to Mars - an option that physicist Stephen Hawking considered the most promising way to avoid an apocalypse involving artificial intelligence.

For Musk himself, the most important task right now is to convince the government to take a more deliberate approach to AI and to move toward transparent rules for its development.

"Deliberation and oversight are different things," Musk said during the meeting.

“At the very least, if the government fully understands what is happening, it will be able to decide for itself which rules are appropriate for ensuring public safety. That's what I'm talking about.”

Nikolay Khizhnyak