"A Fundamental Threat To Humanity". Correspondence Dispute Between Musk And Zuckerberg About Artificial Intelligence - Alternative View


Two billionaires who made their fortunes in the tech industry, Tesla and SpaceX owner Elon Musk and Facebook founder Mark Zuckerberg, have gotten into a long-distance dispute over the dangers of artificial intelligence.

Both entrepreneurs are confident that the creation of full-fledged artificial intelligence (AI) is a matter of the near future. But Musk believes that it is time for humanity to think about regulating developments in this area, and Zuckerberg is confident that artificial intelligence will significantly improve people's lives.

In mid-July, Elon Musk spoke to the governors of the US states and urged the authorities to consider introducing legislative restrictions on development in the field of AI.

Zuckerberg addressed the question of AI's dangers while chatting with Facebook users. He was asked bluntly whether he thought Musk's fears were valid. The Facebook creator said he did not understand how Musk could seriously call for slowing down progress.

Musk responded on Twitter: "I talked about this with Mark. His understanding of the subject is limited."

Present Tense fully publishes the speeches of the two billionaires.

"Until we see robots walking the streets and killing people, we won't know how to react." Elon Musk's fears


I have access to the very latest in AI, and I think people really should be concerned about this issue. I know how it sounds, but until people see robots walking the streets and killing people, they won't know how to react.

It seems unrealistic. I think we really need to worry about AI. And I think this is a rare case where our legislative restrictions must be ahead of the curve, rather than making decisions later, when it is too late.

Usually, regulation works like this: at first, a lot of bad things happen, society protests, and after many years regulatory bodies are set up for that industry. Companies that dislike outside control resist, and the process takes a very long time. In the past this has led to bad consequences, but it did not pose a fundamental threat to the existence of humanity.

AI is a fundamental risk to human existence. And this is the kind of risk that neither car accidents, nor plane crashes, nor problem drugs, nor bad food have ever had. They weren't a threat. Maybe they were for specific people, but they were not a risk for the whole society.

AI is a fundamental existential threat to civilization, and I don't think humans are fully aware of this. Yes, being regulated is not fun. But in the automotive industry we are overseen by the Department of Transportation and many others, and every country has regulatory bodies. In the space industry, we are overseen by the Federal Aviation Administration. But if you ask an ordinary person, "Hey, do you want to get rid of the FAA and just hope that manufacturers don't cut corners on planes because profits dropped this quarter?" the person will answer: "Of course not." That sounds like a bad idea.

I think that even very libertarian-minded people would still say: "Let's keep an eye on aircraft manufacturers; we need to make sure they make good aircraft." Good cars, things like that. So regulators have a role to play. Of course, I am against heavy-handed control. But I think we'd better hurry up with AI.

Of course, this [invention of AI] will lead to job losses, because robots can do everything better than us. That is, better than all of us. Yes. I don't know what to do about it. This is the worst problem for me. Yes, I am actually sure that we will need state control, because that way we can ensure public safety.

Companies are in a hurry; they need to build AI in time, otherwise they will become uncompetitive. If your competitor is racing to build AI and you are not, it will destroy you. And companies say: "We don't want to die; it looks like we also need to create AI."

And this is where regulation should come in: "Everyone, you all need to slow down and make sure it's safe." If the regulators are convinced that it is safe, then you can continue working. Otherwise, slow down. But the regulators must apply this to everyone who builds it; otherwise the shareholders will say, "Hey, why aren't you developing AI faster than your competitors? You should be!"

I believe that this is the greatest risk that we take as a civilization. I think the role of the state is to take care of society and pay attention to what threatens it. Hence the limitations. First, you need to learn as much as possible about this in order to understand the nature of the problem, to observe the progress and incredible achievements in the field of AI.

Take go, a game that is hard enough to win, and one in which people thought a computer would never beat the best human player, or that it would take another 20 years. Last year AlphaGo, created by DeepMind, the company acquired by Google, beat the best go player in the world. And now it can play 150 opponents simultaneously and defeat them all.

This is an impressive rate of progress, and it will show up more and more often. Robotics, for example: there are robots that learn to walk from scratch within hours, faster than any biological species. But the most dangerous thing, and the hardest to grasp, is not anything physical. It is intelligence itself.

It would seem, well, what could a powerful [artificial] intelligence do on the Web? It could, for example, start a war: by creating fake news, fake press releases, and spoofed email accounts, and simply by manipulating information. The pen is mightier than the sword.

As an example (and I want to emphasize, I don't think this really happened; I am speaking purely hypothetically): remember that Malaysian airliner that was shot down near the border between Russia and Ukraine? It greatly increased friction between Russia and the EU.

Imagine you had an artificial intelligence whose goal was to maximize the value of a stock portfolio. One way would be to buy defense stocks, short consumer stocks, and start a war.

How could it do that? Hack into Malaysia Airlines' flight-route servers, plot routes over a war zone, and then send an anonymous message that "an enemy plane is flying over you."

"All technologies can be used for harm and good." Mark Zuckerberg's optimism

I have pretty strong beliefs about this. I am very optimistic; I am an optimistic person in general. I think we can create things that make the world a better place, and I'm particularly optimistic about AI. I honestly don't understand people who are constantly skeptical and keep drumming up these doomsday scenarios. I think it is really very bad and irresponsible, because over the next five to ten years AI will deliver real improvements to the quality of our lives.

If you are concerned about people's safety and health, AI is already helping to diagnose diseases more accurately and to match medicines to patients based on their illnesses and how best to treat them. Many people will receive better health care than ever before.

As for self-driving cars, they will be safer than cars that are driven by people. It's just a matter of time. Car accidents remain one of the main causes of death. And if they could be eliminated with the help of AI, it would greatly improve people's lives.

When I hear the reasoning, "Oh, you know, AI is going to harm people in the future," I think that indeed, all technology can be used for harm and good. And you need to be careful when you create something, you need to be careful and think about how it will be applied.

But when people insist on slowing down work on AI, I find that very questionable. It's really hard for me to understand, because if you are against AI, then you are against safer cars that won't get into accidents, and against the ability to diagnose diseases better. I just don't see how anyone can argue this in good faith. So overall I'm much more optimistic about this [the creation of AI] than perhaps many people are.
