Can China And Russia Win The Artificial Intelligence Arms Race? - Alternative View

In October, the Beijing Institute of Technology, one of China's leading military research institutions, selected 31 teenagers from more than 5,000 applicants. According to the Chinese authorities, these young people will develop a new generation of weapons equipped with artificial intelligence (AI), possibly including microscopic robots, computer viruses, submarines, drones, and tanks.

Plans like this are a powerful reminder of the direction the next major arms race could take. Growing computing power and improving self-learning algorithms are creating new opportunities for both warfare and governance.

The consultancy PwC estimates that AI-powered systems could contribute $15.7 trillion to the global economy by 2030, with China and the United States likely to be the leading players. What worries governments most, however, are the potential military implications: on the one hand, the fear of falling behind; on the other, the fear that untested technologies may bring new dangers.

Pentagon officials have asked the Defense Innovation Board - a panel of senior Silicon Valley figures that provides technical advice to the US military - to develop ethical guidelines for the use of AI in war. Last month, France and Canada announced the creation of an international expert group to discuss similar issues. Until now, Western states have held that vital decisions in a conflict must always be made by humans, with computers and algorithms merely supporting their implementation. Some countries, however - Russia and China in particular - are considering a different path.

Last year, Russia reported doubling its investment in AI. Earlier this month, it was announced that Moscow would publish a roadmap for a new national AI strategy by mid-2019, which Russian officials see as the key to dominance in cyberspace and information operations. Alleged Russian "troll farms" are already believed to be using automated social media accounts to spread disinformation.

Experts argue that building an advanced AI system requires adequate processing power, sufficient training data, and skilled specialists. As the world's most powerful authoritarian states, Russia and China have these capabilities and intend to use AI both to maintain the government's grip at home and to defeat enemies abroad.

Beijing already uses mass automated surveillance, including facial recognition software, to quell dissent, especially in the predominantly Uyghur Muslim northwest. Such systems are likely to grow more powerful as the technology improves. When it comes to monitoring its citizens' communications, China (like Russia) has far fewer qualms and regulatory constraints than Western states.

Western democracies, and America in particular, have traditionally been better than dictatorships at harnessing new technologies. When it comes to AI, however, Washington's efforts to bridge the gap between Silicon Valley and the military have run into a number of obstacles. In June, Google employees pressured the company into not renewing a contract with the Pentagon. Many scientists are reluctant to work on defense projects for fear of eventually creating uncontrollable killer robots.

The United States and its allies are nevertheless developing their own autonomous weapons. In October, Microsoft quietly announced that it intends to sell the Pentagon whatever advanced AI systems are needed to "create a strong defense." US Air Force officials say their classified future long-range strike aircraft, designed to replace the B-2 stealth bomber, will be able to operate with or without a crew. Western militaries are investing more and more in self-driving trucks and other supply vehicles, seeking to hand off "dirty, boring and dangerous" battlefield tasks without risking human lives.

Plans also call for drone "swarms", in which numerous unmanned aerial vehicles coordinate themselves. In drone-versus-drone battles, Western leaders have no problem letting unmanned systems make their own decisions. But when humans are involved in combat, US Department of Defense policy requires that the decisions be made by humans. Such control may become harder and harder to maintain, however, especially if an adversary's automated systems make decisions far faster than humans can.

Large Chinese unmanned submarines capable of carrying weapons are expected to enter the world's oceans by the early 2020s, targeting enemy forces in disputed areas such as the South China Sea. Such vessels can travel great distances and remain undetected for long periods. For now, Chinese scientists insist that the decision to attack will rest with humans, but that may not be entirely true. The Pentagon reported last January that Russia is also building large unmanned nuclear submarines, possibly capable of carrying nuclear weapons.

In addition, both Moscow and Beijing favor unmanned robotic tanks, with Russia already testing its latest models in Syria. Such systems could greatly complicate targeting decisions for Western commanders, who may not know whether a piece of combat equipment has a human crew. Mistakes could trigger a conflict or sharply escalate one.

Reportedly, the teenagers chosen for the Beijing Institute of Technology program were those who expressed a "readiness for battle." Such priorities can be very dangerous when the technology involved is both under-tested and destructive.

Natalia Golovakha
