Artificial Intelligence As A Terrorist: New Security Threats - Alternative View

Russian scientists have examined and classified the threats posed by the use of artificial intelligence (AI) from the standpoint of international information and psychological security. Particular attention was paid to countering the malicious use of AI by terrorists. The research results are published in the journal Russia in Global Affairs.

Dangers of AI Malicious Use

The potential danger of artificial intelligence to humanity has repeatedly been the subject of works of art and is today one of the most discussed issues of scientific and technological development. Although Strong AI, capable of independent systemic thinking and possibly possessing self-awareness and will, is still far from being created, various advanced versions of Weak AI already perform specialized tasks that seemed like science fiction ten years ago. The use of AI for good purposes (for example, in medicine) is already bringing undoubted benefits. In the hands of terrorist and other criminal organizations, however, steadily cheaper and more capable AI technologies could become a weapon more terrible than nuclear arms.

Researchers from different countries are actively studying the threats that the malicious use of AI poses to society as a whole and to particular spheres of human activity, be it politics, economics, military affairs, and so on. Threats directly to international information and psychological security, however, have not yet been singled out as an independent area of study. Meanwhile, the use of AI to destabilize international relations through targeted, high-tech information and psychological influence on people is clearly a grave danger.

The authors of the study proposed a classification of threats from the malicious use of artificial intelligence based on a number of criteria, including territorial coverage and the speed and form of propagation. Applying this classification can help identify countermeasures and develop response tools.

“The future reduction in the cost of AI technologies may create threats of fundamentally new terrorist attacks. It is possible that a chatbot could be used to construct a message about a fake event and convince potential victims to attend it. To counteract this, it is advisable to familiarize society with the new threats, instructing citizens to be wary of remote contacts with people they do not know personally. A way out could be the certification of public events, confirming the reality of the information posted about them. The task of technical specialists in this case would, of course, be the protection of databases on events and of the certification mechanism itself,” said Daria Bazarkina, professor at the Department of International Security and Foreign Policy of Russia, RANEPA, to RIA Novosti.

The current level of AI technologies and capabilities makes it possible to identify a number of fundamentally new threats to the information and psychological security of individuals.

Using "fake people" and deepfake technologies for international provocations

The American technology company NVIDIA recently shared the results of a generative adversarial network trained to generate images of people ("fake people"). Drawing on a vast collection of images of real faces, the neural network creates high-resolution faces of non-existent people in seconds and can add any cultural or ethnic features, emotions, or mood. Other developers may well replicate this work in the future. Such images can be used by criminals for all sorts of provocations, which only a society with systemic polytechnical knowledge may be able to recognize.

“Deepfake technology can synthesize the image and voice of a person. It has already begun to be used to create video images of world leaders, including US President Donald Trump and Russian President Vladimir Putin. Deepfake videos are capable of manipulating the behavior of large target groups and can be used in information and psychological confrontation (IPC) to provoke financial panic or war,” said Evgeny Pashentsev, a leading researcher at the Institute of Contemporary International Issues of the Russian Foreign Ministry's Diplomatic Academy and professor at Moscow State University, to RIA Novosti.

Sentiment analysis is a class of content-analysis methods in computational linguistics designed for the automated detection of emotionally colored vocabulary and of authors' emotional assessments in texts. Sentiment analysis draws on a wide range of sources, such as blogs, articles, forums, and polls. It can be a very effective tool in information and psychological confrontation, as is confirmed, for example, by the current high interest in such developments on the part of the leadership of the United States Special Operations Command (SOCOM).
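The simplest form of the method described above is lexicon-based: count emotionally colored words against lists of positive and negative vocabulary and derive a score. The sketch below is a minimal illustration of that idea; the word lists and scoring formula are toy assumptions, not any real sentiment resource or SOCOM system.

```python
# A minimal sketch of lexicon-based sentiment analysis: score a text by
# counting emotionally colored words from small, illustrative word lists.
# POSITIVE/NEGATIVE are hypothetical toy lexicons, not a real resource.

POSITIVE = {"good", "great", "calm", "safe", "hope"}
NEGATIVE = {"bad", "panic", "fear", "threat", "attack"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]; negative values indicate negative sentiment."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("Panic and fear spread after the attack"))   # -1.0
print(sentiment_score("There is hope the situation stays calm"))   # 1.0
```

Real systems replace the word lists with large annotated lexicons or trained classifiers, but the core task — mapping text to an emotional polarity — is the same.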

“Predictive Weapons”: Predicting Human Behavior Based on Social Media Data

In 2012, the US Intelligence Advanced Research Projects Activity (IARPA) launched the Early Model-Based Event Recognition Using Surrogates (EMBERS) program. Its stated purpose is to forecast civil unrest using AI, predicting the date, the place, and the protesting population group.

The system processes data from media and social networks, as well as higher-quality sources such as economic indicators. The fall of such programs into the hands of terrorists would, of course, be extremely dangerous. In that case, major terrorist attacks timed to the largest mass demonstrations become a likely threat. An additional criterion for choosing the targets of a terrorist attack could be the degree of social and psychological tension in particular areas.
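The general idea behind such systems — fusing noisy social-media signals with slower-moving indicators into a single risk estimate per region — can be sketched in a few lines. The field names, weights, and thresholds below are purely illustrative assumptions, not EMBERS internals.

```python
# A toy sketch of signal fusion for unrest forecasting: combine protest-related
# post volume, negative sentiment, and an economic indicator into one risk
# score per region. All weights and fields are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class RegionSignals:
    name: str
    protest_keyword_rate: float   # protest-related posts per 1000 messages
    negative_sentiment: float     # share of negative posts, 0..1
    unemployment_change: float    # month-over-month change, percentage points

def unrest_risk(s: RegionSignals) -> float:
    """Weighted sum of normalized signals; higher means higher assessed risk."""
    return (0.5 * min(s.protest_keyword_rate / 50.0, 1.0)
            + 0.3 * s.negative_sentiment
            + 0.2 * max(min(s.unemployment_change / 2.0, 1.0), 0.0))

regions = [
    RegionSignals("A", protest_keyword_rate=40.0,
                  negative_sentiment=0.7, unemployment_change=1.5),
    RegionSignals("B", protest_keyword_rate=5.0,
                  negative_sentiment=0.2, unemployment_change=-0.3),
]
for r in sorted(regions, key=unrest_risk, reverse=True):
    print(f"{r.name}: risk={unrest_risk(r):.2f}")
```

Production systems learn the weights from historical event data and predict structured outputs (date, place, group) rather than a single score, but the fusion of heterogeneous signals is the common core.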

According to the authors of the study, predictive analytics can serve as a countermeasure when used by state and supranational authorities themselves to prevent social tension, through the timely adoption of social, economic, and political measures aimed at long-term stability.

In addition, terrorist groups can use bots during political campaigns to inflict reputational damage, as well as to attract new supporters or organize assassinations of politicians.

Hijacking drones and robotic infrastructure

Self-learning, AI-controlled transport systems can be convenient targets for high-tech terrorist attacks. Terrorist seizure of control over a large city's transport management system could lead to numerous casualties, cause panic, and create a psychological climate that facilitates other criminal activities.

Commercial systems can be used to deploy unmanned aerial vehicles or autonomous vehicles to deliver explosives and cause accidents. A series of major disasters could cause a stir in the international media and inflict significant damage on information and psychological security.

Researchers from the Russian Presidential Academy of National Economy and Public Administration and the Diplomatic Academy of the Ministry of Foreign Affairs of the Russian Federation based their research on a systems analysis of the role of AI in the security sphere, scenario analysis, historical analogies, and case studies. Together with a colleague from Uppsala University (Sweden), Greg Simons, the authors of the study became co-editors of the forthcoming book "Terrorism and Advanced Technologies in Information-Psychological Confrontation: New Risks, New Opportunities to Counter the Terrorist Threat," with contributions from researchers in 11 countries.

At the initiative of Russian researchers, the problem of the role of AI in the context of threats to international information and psychological security is being actively discussed, and will continue to be discussed, at international conferences and autonomous international research seminars. Among them is a conference held under the auspices of the Commission of the Russian Federation for UNESCO and a number of other Russian and international structures in Khanty-Mansiysk on June 9-12. The Governor of the Khanty-Mansiysk Autonomous Okrug, Natalya Vladimirovna Komarova, took an active part in the session of the section on the malicious use of AI, organized with the support of the European-Russian Expert Network for Communication Management (ERESKM). Upcoming venues include the IV Ibero-American Forum in St. Petersburg on October 1-3, 2019, and the European Conference on the Impact of Artificial Intelligence and Robotics on October 31 - November 1, 2019. The authors of the studies have presented scientific reports in Argentina, Uruguay, Brazil, South Africa, and Italy.

In the opinion of specialists from RANEPA and the Diplomatic Academy, it is extremely important for Russia to establish long-term targeted programs, since the problem of AI and international information and psychological security is a problem of the near future.