What Dangers Of Neural Networks Do We Underestimate? - Alternative View

Have you ever met a person on the street who looked exactly like you? Clothes, face, gait, manner of speaking, habits, all completely identical to yours, as if you had been scanned and printed out on a printer. Sounds a little creepy, doesn't it? Now imagine that you have seen a video in which such a person says something about himself. At best, you will rack your memory for a moment when you could have said such a thing on camera and forgotten about it. For now this sounds like idle speculation, but the technology is already very close to creating such people. They already exist, and soon there will be many more.

Where do fakes come from?

There are already too many things that are commonly called fakes. They are everywhere: in photographs, in the news, in the production of goods, and in information services. It is easier to say where this word does not apply. For now, you can still fight them: you can trace the origin of a photo, check the features that distinguish a branded product from a counterfeit, and double-check the news. Although the news is a separate topic.

Today's content consumer does not want to wait and demands instant production from its creator; sometimes he does not even care about quality, as long as it is fast. This is how situations arise where someone says something and everyone else, without checking it, reposts it on their websites and in their newspapers. In some cases, it takes a long time to unwind this tangle and prove that it was all wrong.

There is no point in explaining at length why any of this is done. On the one hand, there are those who just want to laugh at the situation; on the other, those who genuinely did not know they were wrong. A separate place, roughly in the middle, is occupied by those who simply want to profit. These can be interests of influence at different levels, including the political one. Sometimes the goal is financial gain, for example sowing panic in the stock market and making profitable trades in securities. But often it comes down to hostility toward a person (company, product, etc.) in order to belittle them. A simple example is "tanking" the ratings of a movie or an establishment that someone finds undesirable. Of course, this requires an army of people (sometimes even bots) who will go and leave dislikes, but that is another story.

What is Deep Learning?

Recently, this term has come up more and more often. Sometimes it is not even relevant to the matter at hand and is confused with something else, simply because it makes a software product sound more impressive.

Don't think that the concept and basic principles of machine learning appeared only a few years ago. In fact, they are old enough that many of us had not even been born yet. The basic principles of deep learning systems and the mathematical models behind them were known back in the 1980s.

At the time, they did not make much practical sense because one important component was missing: high computing power. Only in the mid-2000s did systems appear that could carry out all the necessary calculations. Since then, machines have evolved even further, and some systems for machine vision, speech recognition and other tasks now work so efficiently that they sometimes even surpass human capabilities. Still, they are not yet "put in charge" of critical areas; they remain an addition to human capabilities, with humans keeping control over them.


What is Deepfake? When did Deepfake appear?

It is easy to guess that Deepfake is a small play on words combining Deep Learning and the very fakes I talked about above. That is, Deepfake takes fakery to a new level and relieves a person of this difficult work, allowing them to create fake content without wasting energy.

First of all, such fakes concern video. Any person can sit in front of a camera and say something, and his face will be replaced with that of another person. It looks scary because, in essence, the system only needs to capture a person's basic movements, and then the fake becomes simply impossible to distinguish. Let's see how it all started.

The first generative adversarial network was created in 2014 by Ian Goodfellow, then a PhD student at the University of Montreal. In essence, he pitted two neural networks against each other: one generated human faces, while the second analyzed them and judged whether they were real or fake. They trained each other until, one day, the second network began to get confused and mistake the generated images for real ones. It is this ever more sophisticated system that gave birth to Deepfake.
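To make the idea concrete, here is a minimal sketch of that adversarial setup in PyTorch. It is an illustration only, not Goodfellow's original code: instead of faces, the generator learns to produce numbers from a simple Gaussian distribution, so the whole loop runs in seconds; generating faces scales up the same principle.

```python
import torch
import torch.nn as nn

# Generator: maps 8-dimensional noise to a single "sample".
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: outputs a logit answering "real or generated?".
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0      # "real data": N(3, 0.5^2)
    fake = G(torch.randn(64, 8))               # generated samples

    # Train the discriminator: label real samples 1, generated ones 0.
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator say "real" (1).
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, generated samples should cluster around 3.0.
print(G(torch.randn(1000, 8)).mean().item())
```

The key design point is the detach() call: while the discriminator trains, gradients are cut off from the generator, so each network improves only at its own job, exactly the mutual training the paragraph above describes.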

Today, one of the main promoters of the Deepfake idea is Hao Li. He works on much more than this, and has repeatedly received various awards for it, including unofficial ones. He is, by the way, one of the people to thank for the appearance of Animoji on the iPhone X. If you are interested, you can learn more on his website. Today, however, he is not the main topic of discussion.

We only mention him because, at the World Economic Forum in Davos, he showed an application that replaces the face of a person sitting in front of the camera with any other face. In particular, he demonstrated how the system works using the faces of Leonardo DiCaprio, Will Smith and other famous people.

It looks a little creepy. On the one hand, you can only admire modern technology that can scan a face, swap it for another on the fly and produce a new picture. All this takes a split second, and the system does not even slow down. That is, it allows you not only to process a finished video and replace the face, but also to appear as such a character in live video communication.

The dangers of Deepfake. How is a face in a video replaced?

You can argue all you like that people need this technology, that it is very cool, and that there is no reason to slander it. You can even go to the extreme and say that this is the position of a fierce oldfag who is simply afraid of everything new. But there really are more dangers here than benefits.

With such technology, especially if it is open source, anyone will be able to get hold of it and record any video they like. It is one thing if it merely defames someone's honor and dignity; it is much worse if it is a statement made on behalf of an important person. For example, a video only 30-40 seconds long, made on behalf of Tim Cook, could bring down almost the entire US IT sector, led by Apple. The stock market would be hit so hard that it would sow panic among investors. As a result, thousands of people would lose billions of dollars.

Anyone who dislikes this way of making money will say that it serves such people right and they should go work at a factory. But in the saddest scenario, there will be no factory left afterwards. On top of that, we simply end up with someone who profits from fluctuations in the value of securities; all they have to do is buy and sell at the right time.

The situation could be even worse if the "joker" speaks out on behalf of the leader of a large state. Of course, everything would eventually come to light, but in the meantime a great deal of damage could be done. Against this background, simply pasting a celebrity's face onto an actor in an adult movie would be an innocent prank.

With such technologies, the main thing is to get scanned, and then it's a matter of technique. In the truest sense of the word.

You can also imagine the opposite situation, when a real person says something and then assures everyone that he was framed. What to do in this situation is equally unclear. It will bring such confusion to the news feeds that double-checking them against another source will simply be impossible. As a result, it will become generally unclear what in this world is true and what is false. A picture emerges from movies about a grim future, like Surrogates or Terminator 2, where the T-1000 impersonated other people and, among other things, spoke to John Connor over the phone in the voice of his foster mother.

And I am not even talking about another kind of abuse: the fabrication of false evidence. Against this background, all the fun of this toy becomes far too dubious.

How to detect Deepfake?

The problem is not even that such systems should be banned, but that this is no longer possible. They already exist, and the development of technologies, including face capture, is what led to their appearance and to the spread of open source code. Even if we imagine that the system in its current form ceased to exist, we must understand that it would be created anew. People would simply teach neural networks to work against each other once again, and that would be it.

So far, things are not all that scary, and you can literally identify a fake with the naked eye. The picture is similar but rather rough. In addition, it sometimes has blending problems, especially around the edges of the face. But nothing stands still, and refining it further is not difficult at all. Hao Li himself is sure that this will take no more than a few months, while creating "masks" that even a computer cannot distinguish will take several more years. After that, there will be no turning back.
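As a toy illustration of that naked-eye check, here is a small heuristic sketch in Python with OpenCV. It is an assumption for demonstration purposes, not a real detector: the idea is simply that a crude face swap often blurs the seam where the synthetic face is blended in, so high-frequency detail in a thin ring around the face drops well below the detail inside the face itself.

```python
# Toy heuristic: compare image sharpness inside a detected face with the
# sharpness of a thin border ring around it. A heavily smoothed ring may
# hint at a blended (swapped) face. Not a real deepfake detector.
import cv2
import numpy as np

def seam_scores(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    lap = cv2.Laplacian(gray, cv2.CV_64F)      # high-frequency detail map
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    scores = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        pad = max(4, w // 10)                  # thickness of the border ring
        inner = lap[y + pad:y + h - pad, x + pad:x + w - pad]
        ring = lap[y:y + h, x:x + w].copy()
        ring[pad:h - pad, pad:w - pad] = np.nan   # exclude the interior
        # Ratio of interior detail to seam-ring detail; values far above 1
        # mean the edges are suspiciously smooth compared to the face.
        scores.append(inner.var() / max(np.nanvar(ring), 1e-9))
    return scores

frame = cv2.imread("frame.png")                # any video frame will do
print(seam_scores(frame))
```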

On the one hand, the algorithms that YouTube and Facebook are already developing can offer protection against this. By the way, the latter has even opened a competition for the development of recognition technology, the Deepfake Detection Challenge, with a prize fund of $10 million. The competition is already underway and will end in March 2020. You still have time to participate.

Replacing a face in a video is no longer a problem.

Perhaps this generosity is due to a fake video featuring Mark Zuckerberg himself. If the two things are related, the emergence of such a competition is not surprising.

If the replaced face fully matches the original, a counterforce in the form of a special neural network will be powerless. In that case, it would have to catch minimal differences in facial expressions, movements and manner of speaking. For famous people, such a problem could be solved at the level of the video service, since YouTube, say, already knows how the proverbial Donald Trump moves. For a lesser-known person, it gets trickier. Although this, too, could be solved by placing him in front of a camera for a casual conversation while a neural network analyzes his movements. The result would be something like a fingerprint study, but, as we can see, this again leads to unnecessary difficulties.
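To show what such a "movement fingerprint" might look like in the simplest possible form, here is a hedged sketch in Python. Everything in it is assumed for illustration: the landmark arrays would come from some face tracker (dlib, MediaPipe, and the like), and the "signature" is just the average per-landmark motion, compared with cosine similarity.

```python
# Toy "behavioral fingerprint": summarize how facial landmarks move over
# time and compare two clips. Input shape: (frames, landmarks, 2).
import numpy as np

def motion_signature(landmarks):
    deltas = np.diff(landmarks, axis=0)         # frame-to-frame displacement
    return np.abs(deltas).mean(axis=0).ravel()  # average "style of movement"

def similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Synthetic stand-in data: each "person" moves each landmark with a
# characteristic amplitude profile.
rng = np.random.default_rng(0)
amp_a = rng.uniform(0.5, 2.0, size=(1, 68, 2))   # person A's style
amp_b = rng.uniform(0.5, 2.0, size=(1, 68, 2))   # person B's style
clip_a1 = rng.normal(size=(300, 68, 2)) * amp_a
clip_a2 = rng.normal(size=(300, 68, 2)) * amp_a
clip_b  = rng.normal(size=(300, 68, 2)) * amp_b

sig = motion_signature
print(similarity(sig(clip_a1), sig(clip_a2)))  # same person: close to 1.0
print(similarity(sig(clip_a1), sig(clip_b)))   # different person: lower
```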

Sewing video authentication systems into cameras can also be circumvented. You could make the camera sign the footage it captures, making it clear that the video was not filmed through a third-party application or processed in a special program. But what about videos that have merely been legitimately processed, for example an edited interview? We would end up with a video in which the original key is no longer present.
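As a rough sketch of the signing idea, and of exactly why editing breaks it, here is a minimal example using only Python's standard library. The key, frame bytes and function names are all hypothetical; a real camera scheme would more likely use asymmetric signatures, so that verifiers never need the secret.

```python
# Minimal sketch of "the camera signs its footage". Assumptions: frames
# arrive as raw bytes, and the camera holds a secret key that never
# leaves the device. HMAC is used only to keep the example short.
import hashlib
import hmac

CAMERA_KEY = b"secret-key-burned-into-the-camera"  # hypothetical key

def sign_frame(frame_bytes: bytes, frame_index: int) -> bytes:
    # Bind the tag to the frame's position so frames cannot be
    # reordered or dropped without detection.
    msg = frame_index.to_bytes(8, "big") + hashlib.sha256(frame_bytes).digest()
    return hmac.new(CAMERA_KEY, msg, hashlib.sha256).digest()

def verify_frame(frame_bytes: bytes, frame_index: int, tag: bytes) -> bool:
    return hmac.compare_digest(sign_frame(frame_bytes, frame_index), tag)

# Any edit to the pixels invalidates the tag, which is exactly the
# problem above: a legitimately edited interview loses the original key.
frame = b"\x00" * 1024
tag = sign_frame(frame, 0)
print(verify_frame(frame, 0, tag))          # True
print(verify_frame(frame + b"x", 0, tag))   # False
```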

A few memes at the end.

Can we say that we have sketched one of the scenarios of a dark future? On the whole, yes. If technologies created for good purposes get out of control, they can bring a lot of grief. There are actually many such dangerous technologies, but most of them are well protected. Nuclear fusion, for example. Here, however, we are dealing with code that anyone can get hold of.

Write in the comments how you envision protection against such forgery, assuming that the masking system has learned to make masks completely identical to the original faces. And since they appear on video, you cannot even apply depth and volume recognition to them. In addition, let's assume that any code or key embedded in the image can be cracked, given a good enough reason, as they say. Now we can discuss; all the introductory conditions are in place.

Artem Sutyagin