By 2040, Artificial Intelligence Can Revolutionize Nuclear Safety Principles - Alternative View

A recently published RAND Corporation study concluded that by 2040, artificial intelligence (AI) could radically change the concept of nuclear deterrence.

An AI-driven "doomsday machine" seems unlikely; the real risk AI poses to nuclear safety lies in its ability to induce people to make unjustified, potentially apocalyptic decisions.

During the Cold War, the concept of mutually assured destruction (MAD) maintained a shaky peace between the superpowers: any attack would be met with devastating retaliation. MAD thus ensured strategic stability, sobering both sides against actions that could actually escalate into nuclear war.

The RAND publication says that in the coming decades, AI may well erode the concept of mutually assured destruction and thereby undermine the strategic stability of our world. Increasingly sophisticated sensors make it plausible that retaliatory weapons, such as nuclear submarines and mobile missiles, could be located and destroyed.

Nations may be tempted to pursue first-strike capabilities as a source of leverage over their rivals, even if they have no intention of striking first. This undermines stability: even if the side capable of attacking first never intends to use that capability, its opponent cannot be sure of it.

"The link between nuclear war and AI is not new; in fact, their histories are intertwined," says Edward Geist, co-author of the study and a researcher at RAND. "Initially, much of the AI research was carried out with the support of the military in pursuit of their goals."

As an example, he cited the Survivable Adaptive Planning Experiment of the 1980s, which sought to use AI to combine intelligence data with targeting for nuclear weapons.

Under conditions of uncertainty, AI could enhance strategic stability by improving the accuracy of intelligence collection and analysis.

The researchers believe that as AI develops it will become less error-prone than humans, making it a stabilizing factor in the long term.

Vadim Tarabarko