Google Does Not Want To Create Skynet One Day, So It Is Creating A Switch For AI

In discussions about the future of artificial intelligence there are two main opposing camps. In one corner are companies such as Google, Facebook, Amazon, and Microsoft, which are investing aggressively in technology to make AI systems smarter; in the other are the great thinkers of our time, such as Elon Musk and Stephen Hawking, who believe that developing AI is akin to "summoning the demon."

DeepMind, one of the most advanced AI companies and owned by Google, has developed safety measures for the case when a human needs to "take control of a robot whose failure could lead to irreparable consequences." One might assume that means … "kill all humans." In reality, of course, the measure is not a hint at a coming apocalypse; it is about safety mechanisms for the smart robots that work, and will work, in various factories and plants.

The published paper is joint work by DeepMind and Oxford's Future of Humanity Institute, which, as its name implies, studies the questions that should allow this very humanity to reach the future at all. For years, the institute's founding director, Nick Bostrom, has spoken vividly about the possible dangers of developing artificial intelligence and has written more than one book on the subject, discussing the consequences of creating superintelligent machines.

The paper in question, titled "Safely Interruptible Agents," explores how to shut down an AI when it starts doing something its human operator does not want it to do. The work consists of calculations and formulas that 99 percent of us will most likely not understand. Put simply, it is about a "red button" that can disable an AI.
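To make the "red button" idea concrete, here is a minimal sketch of an interruption mechanism: a wrapper that lets an operator override whatever action the agent's policy wants to take. Everything here (the class name, the SAFE_ACTION constant, the toy action list) is an invented illustration, not code from the paper.

```python
# A minimal sketch of the "red button": a wrapper that lets a human operator
# override the agent's chosen action with a safe one. All names here are
# hypothetical illustrations, not the paper's actual code.

import random

SAFE_ACTION = "stop"          # action forced while the button is held
ACTIONS = ["up", "down", "left", "right", SAFE_ACTION]

class InterruptibleAgent:
    """Wraps an agent policy so an operator can interrupt it at any step."""

    def __init__(self, policy):
        self.policy = policy       # callable: state -> action
        self.interrupted = False   # state of the operator's red button

    def press_button(self):
        self.interrupted = True

    def release_button(self):
        self.interrupted = False

    def act(self, state):
        # While interrupted, the chosen action is replaced with the safe one,
        # regardless of what the underlying policy wants to do.
        if self.interrupted:
            return SAFE_ACTION
        return self.policy(state)

# Usage: a random policy that an operator can stop at will.
agent = InterruptibleAgent(lambda state: random.choice(ACTIONS))
print(agent.act(state=0))   # some policy action
agent.press_button()
print(agent.act(state=0))   # "stop": the override wins
```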

You may laugh, but the pioneers of robotics and AI watch the same sci-fi movies we do: the ones in which robots eventually figure out how to ignore a shutdown command. This time we will be ready for it. The paper describes methods for ensuring that a "learning agent" (read: an AI) cannot learn to ignore or prevent interruption by its environment or by a human operator.
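The core trick, roughly, is that an off-policy learner such as Q-learning updates its value estimates from the action actually taken and the best available next action, so an occasional forced override does not teach it that interruptions are costly. The toy environment, reward scheme, and interruption schedule below are invented purely to illustrate that point; this is a hedged sketch, not the paper's formal construction.

```python
# A toy illustration of why an off-policy learner does not "learn to resist"
# interruptions. The dynamics, rewards, and button schedule are made up.

import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = [0, 1]                      # toy action space
Q = defaultdict(float)                # Q[(state, action)] -> value estimate

def step(state, action):
    """Hypothetical environment dynamics, for illustration only."""
    reward = 1.0 if action == 1 else 0.0
    next_state = (state + 1) % 5
    return next_state, reward

def interrupted(t):
    """Stand-in for the operator's button (pressed every 7th step)."""
    return t % 7 == 0

state = 0
for t in range(1, 1001):
    # Epsilon-greedy behaviour policy.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])

    # The operator may override the action. Crucially, the Q-update below
    # uses the action that was actually executed and a max over next-state
    # values, so the interruption does not bias what the agent learns about
    # uninterrupted behaviour.
    if interrupted(t):
        action = 0                    # forced safe action

    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state
```

Because the update target, reward plus the discounted maximum over next-state values, does not depend on whether the executed action came from the policy or from the operator, the learned values still converge toward those of the uninterrupted task.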

This may seem like overkill, especially considering that the most advanced AI today is really only good at board games. Bostrom, however, believes that appropriate measures must be taken before we create AI at the level of the human brain:

“Once artificial intelligence reaches the human level, a sharp leap in development can be expected in many areas, and soon afterwards AIs will themselves be creating new AIs. Just weigh those prospects, and the possible dangers that come with them. It is better to overdo caution than to fall short.”

NIKOLAY KHIZHNYAK