Killer Robots? Stop, Even Good Robots Are Terribly Unpredictable - Alternative View

The leaders of more than a hundred of the world's leading artificial intelligence companies are very concerned about the development of "killer robots". In an open letter to the UN, these business leaders, including Elon Musk of Tesla and the founders of Google's DeepMind, warned that autonomous weapons technology could be adopted by terrorists and despots, or be hacked to behave in undesirable ways.

But the real threat is much more serious - and it lies not only in human misconduct, but in the misdeeds of the machines themselves. The study of complex systems shows that they can behave far more unpredictably than the sum of their individual parts would suggest. On the one hand, this means that human society can behave very differently than you might expect from studying the behavior of individuals. On the other hand, the same applies to technology. Even ecosystems of simple artificial intelligence programs - which we might call dumb, good bots - can surprise us. Even individual bots can behave horribly.

The individual elements that make up complex systems, such as economic markets or global weather, tend not to interact in a simple linear fashion. This makes these systems very difficult to model and understand. For example, even after years of climate research, it remains impossible to predict the long-term behavior of the weather. These systems are sensitive to the smallest changes and can react to them explosively. It is also very difficult to know the exact state of such a system at any given moment. All of this makes these systems inherently unpredictable.

All of these principles apply to large groups of individuals each acting in their own way, be they human societies or groups of AI bots. Scientists recently studied one such complex system: the benign bots that automatically edit Wikipedia articles. These diverse bots are designed, written and used by trusted Wikipedia editors, and their underlying software is open source and available to everyone. Individually, they share a common goal of improving the encyclopedia. Collectively, however, their behavior turned out to be surprisingly inefficient.

Well-established rules and conventions underpin how these bots work on Wikipedia, but because the website has no central control system, there is no effective coordination between the people running different bots. As a result, pairs of bots spent years undoing each other's edits without anyone noticing. And since these bots do not learn, they did not notice either.

These bots are designed to speed up the editing process. But small differences in how the bots are designed, or between the people who use them, can lead to a massive waste of resources in an ongoing "edit war" that human editors would resolve much faster.
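The mechanics of such an edit war are simple to sketch. The following toy simulation (hypothetical rules and link names, not Wikipedia's actual bots) shows two well-intentioned rule-based bots, each "fixing" an article to its own preferred form, undoing each other indefinitely:

```python
# Toy sketch of a bot "edit war": two rule-based bots, each convinced it is
# correcting the article, revert each other forever. Purely illustrative.

def make_bot(preferred_link):
    """Each bot 'fixes' the article so it points at the bot's preferred link."""
    def edit(article):
        if article["link"] != preferred_link:
            article["link"] = preferred_link
            return 1  # one edit made
        return 0
    return edit

article = {"link": "Palestine"}
bots = [make_bot("Palestine"), make_bot("State of Palestine")]

wasted_edits = 0
for _round in range(1000):
    for bot in bots:
        wasted_edits += bot(article)

print(wasted_edits)  # prints 1999: after round one, every edit is undone
```

Neither bot ever "wins": without coordination or learning, the system simply burns edits at a constant rate, exactly the kind of waste the researchers observed.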

The researchers also found that bots behaved differently in different language versions of Wikipedia. The rules, it would seem, are almost the same, the goals identical, the technologies similar. Yet on the German-language Wikipedia, bot cooperation was much more efficient and productive than on the Portuguese one, for example. This can only be explained by differences between the human editors who ran these bots in the different environments.


Exponential confusion

Wikipedia's bots have little autonomy, and yet the system's behavior already diverges from the goals of the individual bots. But the Wikimedia Foundation plans to use AI that will give these bots more autonomy, and that is likely to lead to even more unpredictable behavior.

A good example of what can happen is provided by bots designed to talk to people when they are forced to talk to each other. The responses of personal assistants like Siri no longer surprise us. But make such assistants communicate with one another and they quickly begin to behave in unexpected ways, arguing and even insulting each other.

The larger the system becomes and the more autonomous each bot becomes, the more complex and unpredictable the system's future behavior will be. Wikipedia is an example of a large number of relatively simple bots at work. The chatbot case is an example of a small number of relatively complex and inventive bots - and in both cases, unforeseen conflicts arise. Complexity, and therefore unpredictability, grows exponentially as agents are added to the system. So when systems with very many highly complex robots appear in the future, their unpredictability will exceed our imagination.
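The exponential claim can be made concrete with a back-of-the-envelope count. If each bot can be in one of k internal states, the joint system has k to the power n possible configurations for n bots (the numbers below are illustrative assumptions, not figures from the article):

```python
# Back-of-the-envelope sketch: with k possible internal states per bot,
# a system of n bots has k**n joint configurations - exponential in n.
# k = 4 here is an arbitrary illustrative choice.

def joint_states(k: int, n: int) -> int:
    """Number of joint configurations of n agents with k states each."""
    return k ** n

for n in (2, 10, 50):
    print(n, joint_states(4, n))
```

Already at 10 bots there are over a million joint configurations, and at 50 the count dwarfs anything that could be exhaustively tested, which is why the behavior of large multi-bot systems cannot simply be enumerated and checked in advance.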

Blazing madness

Self-driving cars, for example, promise breakthroughs in efficiency and road safety. But we don't yet know what will happen once we have a large, wild system of fully autonomous vehicles. Their behavior can vary even within a small fleet of individual vehicles in a controlled environment. And even more unpredictable behavior can emerge when self-driving cars "trained" by different people in different places begin to interact with each other.

People can adapt to new rules and conventions relatively quickly, but they have difficulty switching between systems. For artificial agents this can be even harder. If a car "trained" in Germany drives to Italy, for example, we do not know how it will master the unwritten cultural conventions that the many cars "trained" in Italy follow. Something as mundane as crossing an intersection can become deadly risky, because we simply don't know whether the cars will interact as expected or behave unpredictably.

Now think about the killer robots that worry Musk and his colleagues. One killer robot in the wrong hands can be very dangerous. But a whole system of unpredictable killer robots? Think for yourself.

Ilya Khel