Robots Will Create The Most Plausible Information Fakes - Alternative View


Imagine that one day some child prodigy invented a technology that allows people or material objects to pass through walls, and posted instructions on the Internet for cheaply building such a miracle machine from common household materials. How would the world change?

Many industries would likely become significantly more productive. Being able to walk through walls instead of having to use doors would make it easier for people to move around offices, move goods in warehouses, and carry out countless daily household tasks. However, the negative consequences could well outweigh the positive ones. Keeping valuables under lock and key would lose all meaning. Anyone could easily enter a warehouse, a bank vault, or a house.

Most of the methods we use to protect private property involve walls in one way or another, so all of them would become useless overnight. Thieves would act with impunity until society came up with alternative ways to prevent such crimes. Such a turn of events could end in economic collapse or social chaos.

The above illustrates a general principle: technological innovation is not always beneficial to society, at least in the short term. Technologies are capable of creating negative externalities - an economic term for harm imposed on third parties. If these negative externalities outweigh the usefulness of the technology itself, the invention actually makes the world worse, not better, at least for a while.

Machine learning, especially a variation known as deep learning, is arguably the hottest technological breakthrough on the planet. It empowers computers to perform many tasks that previously only humans could do: recognize images, drive cars, trade on the stock market, and more. This has stirred anxiety in some people that machine learning will render humans unnecessary in the workplace. That is indeed possible, but there is a far more serious danger from machine learning that has not yet received enough attention. The point is that if machines can learn, they can learn to lie as well.

People today can falsify images such as photographs, but this is a laborious and complex process, and falsifying a voice or video is generally beyond the capabilities of an ordinary person. Soon, however, thanks to machine learning, it will likely be possible to easily create believable fake video of someone's face and make it appear that the person is speaking in their own voice. There is already lip-synchronization technology that can literally put any words into a person's mouth. And this is just the tip of the iceberg. Soon, 12-year-olds in their bedrooms will be able to create photorealistic, believable-sounding fake footage of politicians, business magnates, relatives, or friends saying anything they can imagine.

This in itself invites some obvious abuses. Political hoaxes - so-called "fake news" or "information stuffing" - will spread like wildfire. Of course, the deception will eventually be detected: no digital fabrication can be so good that other technology cannot expose it. But that will happen only after the harmful idea has already penetrated the minds of people who believe it. Imagine a perfectly fabricated fake video of presidential candidates shouting racial slurs or confessing to serious crimes.

However, this is just the beginning. Imagine the potential for manipulating the stock market. Suppose someone circulated a fake video in which Tesla CEO Elon Musk admits that Tesla cars are not safe. The video would spread across the Internet and the company's stock would crash. Soon after the fake was exposed, the price would recover - but in the meantime, manipulators who had bet against Tesla shares could make enormous profits.


And this is far from the most extreme scenario. Imagine such a prankster creating a realistic fake video of President Donald Trump stating that a strike on North Korean targets is imminent, and then posting that video where North Koreans can see it. What are the chances that the North Korean leadership will realize that this is a fake before they have to decide to start a war?

Those inclined to dismiss such extreme scenarios as alarmist will reasonably point out that any fake can be detected, since the same machine learning technologies used to create them can be used to detect them. But that does not mean we are not in danger of finding ourselves in a world saturated with counterfeits. Once fakes become believable enough that people cannot detect them on their own, we will forever lose confidence in what we see and hear. Instead of trusting our own senses, we will be forced to rely on algorithms to detect forgeries and authenticate information. Throughout evolution, we have learned to trust our own senses, and handing that function over to machine intelligence may prove too drastic a change for most people.

This could be bad news for the economy. Business and commerce networks are built on trust and communication. If machine learning one day unleashes an endless hurricane of illusions and fakes into the public information sphere, the walls evolution built to distinguish reality from fiction will collapse. The level of trust in society could then fall rapidly, which would quickly take its toll on the global well-being of humanity.

For this reason, governments should probably take steps to ensure that digital counterfeiting is punished severely. Unfortunately, the current administration, given its own fondness for partisan misinformation, is unlikely to take such a step. And governments like Russia's are likely to be even less inclined to curb these practices. Ultimately, the combination of poor governance with powerful new technologies poses a far greater threat to human society than the technologies themselves.

By Noah Smith - Bloomberg News Columnist

Igor Abramov