Artificial Intelligence: A Dangerous Enemy Or A Kind Helper - Alternative View

The use of artificial intelligence (AI) in everyday life is rapidly gaining momentum, but the trend increasingly worries specialists. NYU staff have prepared a cautionary note on the risks of deploying AI. For the most part, the authors focused on ethical and social issues, as well as on the lack of regulation. The report by the AI Now group says little that is new, but around the world the question of the need to limit the work of artificial thinking is being raised more and more often.

Almost all the visionaries of technological progress have spoken about the potential threats of AI. In November last year, Stephen Hawking voiced the main fears that accompany reflections on the development of the technology. In his opinion, computers will sooner or later learn to reproduce themselves. Biological creatures are likely to lose out to silicon intelligence both in mental ability and in adaptability to the environment, especially on another planet. Sooner or later, smart robots will consider humans an obsolete form and want to get rid of them.

Hawking's words still sound like a horror story played out dozens of times in films and books, but even those working on the implementation of smart systems have grown increasingly afraid of a future with AI, though there are those who consider such fears premature. "Fearing the development of AI today is like worrying about the overpopulation of Mars," said Andrew Ng, head of Google Brain. He did not, however, reassure everyone.

Among the biggest phobias associated with the introduction of computers into all spheres of life, three are worth noting. First, the computer mind will be used as a weapon of war that far surpasses even nuclear weapons in destructive power. Second, AI will enslave humans. Third, like any human creation, the computer is prone to errors. Each of these points deserves a closer look.

On August 21, 2017, leading experts in the research and implementation of artificial intelligence, including the heads of companies such as SpaceX and DeepMind, sent an open letter to the UN. In it, they warned the world community against developing lethal autonomous weapons, be they robots or other mechanisms designed to kill the enemy. The authors of the document drew attention to the fatal consequences of using such weapons. Given their phenomenal effectiveness, it is fair to ask whose hands they might fall into; they could be used by a narrow circle of elites as easily as by terrorists.

The authors of the letter called on the oversight organization, and with it the scientific community, to outline the boundaries of harmful uses of AI. The responsibility a person bears for his own creation obliges him to work seriously on preventing risks. Beyond laws, creators must also limit the power of the robots themselves, puzzling out how to switch machines off in extreme cases.

The political abuse of electronic systems has already been demonstrated through the work of social networks and data-analysis algorithms. In the spring of this year, the Cambridge Analytica scandal broke around the world. Its specialists conducted an in-depth analysis of user data and placed ads on Facebook individually tailored to each person.

The company's employees not only crossed ethical lines but also applied modern technologies whose workings were impossible to analyze. The effectiveness of machine learning is a recurring theme among leading mathematicians: they admit in unison that they are amazed by the extraordinary success of computer programs yet are completely unable to understand the workings of a complex system. In Myanmar in April this year, Facebook posts likewise caused unrest among the masses, but unlike in the United States, where the result was protests, in the Asian state it was a massacre that killed several hundred people. Manipulating huge numbers of people is already a reality, and here the machines can play a cruel joke on us.

It is worth recalling the same Facebook, where AI-based chatbots were launched. The virtual assistants were trained to hold a dialogue with an interlocutor. Over time the bots became indistinguishable from real people, and the authors then decided to bring the robots together. Before long the bots began to shorten their lexical structures and exchange gibberish. The media inflated the news into a sensation: "the machines have revolted." But setting the journalists' exaggeration aside, it is fair to admit that when machines begin to communicate with each other, a person may not notice. And how they will live their own lives there, nobody knows.

The ever more complex structure of the computer mind moves us further every day from understanding the principles of its work. Yet while most algorithms do their job, even today complex machines are far from ideal and make mistakes. For the further development of AI, it is important to understand not so much its strengths as its vulnerabilities. A large number of scientific groups are focused on exactly this, among them a team including MIT specialist Anish Athalye. Just a couple of days ago, he told reporters about the most common mistakes made by image-recognition systems.

His colleagues showed objects to the machine and found that individual objects were often perceived incorrectly by electronic vision. The computer could call a baseball a cup of coffee, and a 3D-printed turtle an assault rifle. The group has already collected about 200 items that mislead the algorithms.

Instead of perceiving an object as a whole, the artificial intelligence concentrated on its individual parameters. AI tends to learn from "ideal" samples; when it encounters phenomena that do not conform, it cannot always abandon its usual processing routine. Instead of admitting its inability to process the image, it keeps trying to read it, which sometimes leads to funny results. Instead of the shape of a turtle, the electronic brain tried to read its texture, which resembled camouflage. For roughly the same reason, the autopilots of self-driving cars are not yet 100% reliable: it is difficult for a car to see a silhouette and infer that it consists of separate elements.
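
To make the idea of such "adversarial" inputs concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one common way researchers perturb a picture so that a classifier misreads it. This is an illustration only, not the MIT group's actual method; the model choice, the placeholder tensors, and the epsilon value are all assumptions.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # A pretrained classifier stands in for the "electronic vision" above.
    model = models.resnet18(weights="IMAGENET1K_V1").eval()

    def fgsm_attack(image, label, epsilon=0.01):
        # Nudge every pixel slightly in the direction that increases the loss.
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # The change is nearly invisible to a human eye, yet it can flip
        # the model's prediction, much like the camouflage-textured turtle.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()  # keep pixels in [0, 1]

    # Hypothetical usage: `image` is a 1x3x224x224 tensor scaled to [0, 1],
    # and `label` is the correct class index for it.
    # image = torch.rand(1, 3, 224, 224)
    # label = torch.tensor([0])
    # fooled_class = model(fgsm_attack(image, label)).argmax(dim=1)

Epsilon controls how far each pixel moves; even very small values are often enough to change the predicted class, which is what makes the group's 200 physical objects so unsettling.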

And while some of these shortcomings can be corrected in the coming years, there is no guarantee that hackers will not exploit the vulnerabilities. The hacking of electronic devices is today perhaps the main cause of fear. A small team of programmers can not only gain access to personal information but also reconfigure autonomous systems, seizing control of tools of colossal power. And then we will all be in trouble. But the main conclusion to be drawn is perhaps the traditional one: it is not machines you should fear, but people.

Alexander Lavrenov