How Neural Networks Were Taught To Falsify The News Picture - Alternative View


Fakes are produced using artificial intelligence systems and technologies for generating video and audio forgeries.

Today, the efforts of a single programmer are enough to create an “individual” news picture for any of us and to falsify selected media reports within it, artificial intelligence and cybersecurity specialists told Izvestia. Only recently, they estimated that this would require the work of multiple teams. The acceleration has become possible with the development of technologies for attacking neural networks and for generating audio and video forgeries using “deepfake” software. Izvestia itself recently came under such an attack, when three Libyan news portals simultaneously published a report that had allegedly appeared in one of its issues. According to experts, within three to five years we can expect an invasion of robotic manipulators capable of automatically producing large numbers of fakes.

Brave new world

There are more and more projects that tailor the information picture to the perception of specific users. One example of their work was the recent action of three Libyan portals, which published news allegedly printed in the November 20 issue of Izvestia. The creators of the fake modified the newspaper's front page, placing on it a report about negotiations between Field Marshal Khalifa Haftar and Fayez Sarraj, Prime Minister of the Government of National Accord (GNA). The fake, set in Izvestia's typeface, was accompanied by a photograph of the two leaders taken in May 2017. The banner with the publication's logo was cut from the actual November 20 issue, while all the other texts on the page were taken from the October 23 issue.

In specialists' view, such falsifications could be produced automatically in the foreseeable future.


“Artificial intelligence technologies are now completely open, and devices for collecting and processing data are becoming smaller and cheaper,” Yuri Vilsiter, Doctor of Physical and Mathematical Sciences, Professor of the Russian Academy of Sciences and head of a department at FSUE GosNIIAS, told Izvestia. “It is therefore highly likely that in the near future not even the state and large corporations, but simply private individuals, will be able to eavesdrop and spy on us, as well as manipulate reality. In the coming years it will be possible, by analyzing a user's preferences, to influence them through news feeds and very clever fakes.”


According to Yuri Vilsiter, technologies that could be used for such an intervention in the mental sphere already exist. In theory, an invasion of manipulator bots can be expected within a few years, he said. A limiting factor is the need to collect large databases of real people's responses to artificial stimuli, with tracking of long-term consequences. Such tracking will likely require several more years of research before targeted attacks become consistently effective.

Vision attack

Alexei Parfentiev, head of the analytics department at SearchInform, agrees with Yuri Vilsiter. He notes that experts are already predicting attacks on neural networks, even though there are currently almost no real examples of them.

“Researchers at Gartner believe that by 2022, 30% of all cyberattacks will aim to corrupt the data a neural network is trained on or to steal ready-made machine learning models. Then, for example, unmanned vehicles could suddenly start mistaking pedestrians for other objects. And we would be talking not about financial or reputational risk, but about the lives and health of ordinary people,” the expert believes.

For now, attacks on computer vision systems are carried out as part of research. Their purpose is to force a neural network to detect something in an image that is not there or, conversely, to miss what it was expected to see.


“One of the actively developing topics in the field of neural networks is so-called adversarial attacks,” explained Vladislav Tushkanov, a web analyst at Kaspersky Lab. “In most cases they target computer vision systems. To carry out such an attack, you usually need full access to the neural network (so-called white-box attacks) or at least to the results of its work (black-box attacks). There is no method that can deceive any computer vision system in 100% of cases. Moreover, tools have already been created that allow you to test neural networks for resistance to adversarial attacks and to increase that resistance.”

In such an attack, the attacker subtly alters the image being recognized so that the neural network fails. Noise is often superimposed on the photo, similar to what appears when shooting in a poorly lit room. A person barely notices this interference, but the neural network begins to malfunction. To carry out such an attack, however, the attacker needs access to the algorithm.
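The “noise” attack described above matches the classic fast gradient sign method (FGSM). Below is a minimal sketch in Python with PyTorch, assuming a generic pretrained classifier stands in for the attacked system; the model choice and the epsilon value are illustrative, not details from the article.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# A standard pretrained image classifier as a stand-in for the attacked system.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, true_label, epsilon=0.03):
    """Add barely visible noise that pushes the model away from the correct label."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)  # white-box: gradients are available
    loss.backward()
    # Step every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage: x is a (1, 3, 224, 224) tensor scaled to [0, 1], y its correct class index.
# To a human, fgsm_attack(x, y) looks identical to x, yet is often misclassified.
```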

According to Stanislav Ashmanov, General Director of Neuroset Ashmanov, there are currently no methods for dealing with this problem. Moreover, the technology is available to anyone: an average programmer can use it by downloading the necessary open source software from GitHub.


“An attack on a neural network is a set of techniques and algorithms for deceiving the network into producing false results, in effect breaking it open like a door lock,” Ashmanov believes. “For example, it is now quite easy to make a face recognition system think that it is not you but Arnold Schwarzenegger standing in front of it; this is done by mixing additives imperceptible to the human eye into the data fed to the neural network. The same attacks are possible against speech recognition and analysis systems.”
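The impersonation trick described here corresponds to a targeted adversarial attack: instead of merely breaking recognition, the noise steers the model toward a chosen identity. A minimal sketch, assuming a classification-style recognition model and hypothetical epsilon and step values:

```python
import torch
import torch.nn.functional as F

def targeted_attack(model, image, target_class, epsilon=0.005, steps=20):
    """Iteratively nudge pixels so the model predicts `target_class` instead of the truth."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), target_class)  # loss toward the chosen identity
        loss.backward()
        # Take a tiny step that lowers the loss for the target class,
        # keeping the overall change imperceptible to a human observer.
        adv = (adv - epsilon * adv.grad.sign()).clamp(0.0, 1.0).detach()
    return adv
```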

The expert is sure that things will only get worse: these technologies have reached the masses, fraudsters are already using them, and there is no protection against them, just as there is no protection against the automated creation of video and audio forgeries.

Deepfakes

Deepfake technologies, based on deep learning of neural networks, already pose a real threat. Video or audio fakes are created by editing footage or superimposing the faces of famous people, who then appear to pronounce the required text and play the required role in the plot.

“Deepfakes make it possible to replace lip movements and human speech in video, which creates a sense of realism,” says Andrey Busargin, director of the department for innovative brand protection and intellectual property at Group-IB. “Fake celebrities ‘offer’ users on social networks the chance to take part in draws for valuable prizes (smartphones, cars, sums of money), and so on. Links from these videos often lead to fraudulent and phishing sites, where users are asked to enter personal information, including bank card details. Such schemes threaten both ordinary users and the public figures who appear in the commercials. Because of this kind of scam, celebrity images become associated with fraud or with the advertised goods, and that is where we run into personal brand damage,” he says.


Another threat involves the use of “fake voices” for telephone fraud. In Germany, for example, cybercriminals used a voice deepfake in a telephone conversation, posing as a company executive, to make the head of a UK subsidiary urgently transfer €220,000 to the account of a Hungarian supplier. The head of the British firm suspected a trick when his “boss” asked for a second transfer, but the call came from an Austrian number. By that time the first tranche had already been transferred to an account in Hungary, from where the money was withdrawn to Mexico.

It turns out that current technologies make it possible to create an individual news picture filled with fake news. Moreover, it will soon be possible to distinguish fakes from genuine video and audio only with specialized equipment. According to experts, measures prohibiting the development of neural networks are unlikely to be effective. So we will soon be living in a world in which everything constantly has to be rechecked.

“We need to prepare for this, and we need to accept it,” stressed Yuri Vilsiter. “This is not the first time humanity has passed from one reality to another. Our world, way of life and values are radically different from the world in which our ancestors lived 60,000 years ago, 5,000 years ago, 2,000 years ago, and even 100-200 years ago. In the near future people will be largely deprived of privacy and will therefore be forced to hide nothing and to act honestly. At the same time, nothing in the surrounding reality or in one's own personality can be taken on faith; everything will have to be questioned and constantly rechecked. But will this future reality be dire? No. It will simply be completely different.”

Anna Urmantseva