Thinking Like A Person: What Will Happen If You Endow The Machine With The Theory Of Consciousness - Alternative View

Last month, a team of self-taught AI players suffered a spectacular defeat at the hands of professional esports players. The show match, held as part of The International, the Dota 2 world championship, showed that strategic team thinking still lets humans gain the upper hand over the machine.

The AIs involved were several algorithms developed by OpenAI, which Elon Musk co-founded. The collective of digital players, called OpenAI Five, learned to play Dota 2 on its own, through trial and error, by competing against each other.

Unlike chess or the board game Go, the popular and fast-growing multiplayer game Dota 2 is considered a far more demanding testing ground for artificial intelligence. The overall difficulty of the game is only one factor. It is not enough to click the mouse quickly and issue commands to the character you control. To win, you need intuition and an understanding of what to expect from your opponent at the next moment, and you must act on that knowledge so that the team's combined efforts converge on a common goal: victory. The computer does not have this set of capabilities.

To date, even the most outstanding deep learning algorithm lacks the strategic thinking needed to understand the goals and objectives of its opponent, be it another AI or a human.

According to Wang, for AI to succeed it needs deep communication skills, which originate in the most important human cognitive feature: the presence of a mind.

Mental state model as a simulation


By the age of four, children typically begin to grasp one fundamental social fact: their minds are not the same as the minds of others. They begin to understand that other people have their own beliefs, desires, emotions, and intentions. And, most importantly, by imagining themselves in another person's place, they can begin to predict that person's further behavior and explain it. In a sense, their brains start to run multiple simulations of themselves, substituting themselves for other people and placing themselves in different environments.

The mental state model is important for understanding oneself as a person, and it also plays a major role in social interaction. Understanding others is the key to effective communication and to achieving common goals. Yet the same ability can also be a source of false beliefs - ideas that lead us away from objective truth. And when the ability to use a mental state model is impaired, as happens, for example, in autism, natural "human" skills such as explanation and imagination deteriorate as well.

According to Dr. Alan Winfield, professor of robotics at the University of the West of England, the mental state model or "theory of mind" is a key feature that will one day enable AI to "understand" people, things, and other robots.

Instead of machine learning methods, in which many layers of neural networks extract individual pieces of information and "learn" from huge datasets, Winfield suggests a different approach. Rather than relying on learning, he proposes pre-programming the AI with an internal model of itself and of its environment that can answer simple "what if?" questions.

Imagine, for example, two robots moving along a narrow corridor. Their AI can simulate the outcomes of the possible next actions - turn left, turn right, or keep going straight - and pick the one that avoids a collision. This internal model essentially acts as a "consequence engine," a kind of "common sense" that helps steer the AI toward the right next action by predicting how the situation will unfold.
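As a minimal sketch of that idea - and only a sketch, assuming a toy corridor with three lanes and a hypothetical choose_action helper, not Winfield's actual implementation - the robot can mentally try every candidate action against a simple forward model and keep the least disruptive one that is not predicted to end in a collision:

```python
# Toy "consequence engine": simulate each candidate action one step ahead
# and choose the safest, least disruptive one. All names and dynamics here
# are invented for illustration.

CORRIDOR_WIDTH = 3                                  # lanes: 0 = left, 1 = centre, 2 = right
ACTIONS = {"left": -1, "straight": 0, "right": +1}

def predict(my_lane, other_lane, action):
    """One-step forward model: assume the other robot holds its lane
    while we apply our candidate action."""
    new_lane = min(max(my_lane + ACTIONS[action], 0), CORRIDOR_WIDTH - 1)
    return new_lane, other_lane

def choose_action(my_lane, other_lane):
    """Mentally try every action, discard those predicted to end in a
    collision, and keep the one that deviates least from going straight."""
    safe = []
    for action, delta in ACTIONS.items():
        my_next, other_next = predict(my_lane, other_lane, action)
        if my_next != other_next:                   # no predicted collision
            safe.append((abs(delta), action))
    return min(safe)[1] if safe else "stop"         # fall back to stopping

if __name__ == "__main__":
    # Two robots meet head-on in the centre lane: "straight" is predicted
    # to collide, so the simulation steers the robot into a side lane.
    print(choose_action(my_lane=1, other_lane=1))   # -> "left"
```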

In a study published earlier this year, Winfield demonstrated a prototype robot capable of exactly this. By anticipating the behavior of others, the robot passed through the corridor without collisions. That in itself is not surprising, the author notes, but the "attentive" robot, solving the task through simulation, took 50 percent longer to traverse the corridor. Nevertheless, Winfield showed that his internal-simulation method works: "This is a very powerful and interesting starting point in the development of the theory of artificial intelligence," the scientist concluded.

Winfield hopes that AI will eventually gain the ability to describe and mentally replay situations. An internal model of itself and of others would let such an AI simulate different scenarios and, more importantly, define specific goals and objectives for each of them.

This differs fundamentally from deep learning algorithms, which in principle cannot explain why they reached a particular conclusion when solving a problem. The black-box nature of deep learning is a real obstacle to trusting such systems. The problem may become especially acute when, for example, nursing robots are developed for hospitals or for the elderly.

An AI armed with a mental state model could put itself in its owner's shoes and correctly understand what is wanted of it. It could then identify suitable solutions and, having explained those decisions to the person, carry out the task assigned to it. The less uncertainty in its decisions, the more trust there would be in such robots.

Mental state model in a neural network

DeepMind is taking a different approach. Instead of pre-programming a consequence engine, they have developed several neural networks that exhibit something resembling a model of collective psychological behavior.

Their AI algorithm, ToMnet, can learn to predict actions by observing other neural networks. ToMnet itself is a collective of three neural networks: the first learns the tendencies of other AIs from their most recent actions; the second forms a general notion of their current mental state - their beliefs and intentions at a given moment; the third takes the output of the first two and predicts the AI's further actions based on the situation. As with deep learning, ToMnet becomes more effective as it gains experience by observing others.
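As a rough illustration of that three-module structure, here is a hedged sketch with made-up layer sizes and module names (CharacterNet, MentalStateNet, PredictionNet); the real ToMnet uses richer encoders and is trained on trajectories of grid-world agents rather than random tensors:

```python
# Sketch of a three-part observer network: one module summarises an agent's
# past behaviour, one infers its current mental state, and one combines both
# with the current situation to predict the agent's next action.
import torch
import torch.nn as nn

class CharacterNet(nn.Module):
    """Summarises an agent's past episodes into a 'character' embedding."""
    def __init__(self, obs_dim=16, embed_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                                     nn.Linear(32, embed_dim))
    def forward(self, past_trajectories):            # (episodes, obs_dim)
        return self.encoder(past_trajectories).mean(dim=0)

class MentalStateNet(nn.Module):
    """Infers the agent's current beliefs and intentions from the ongoing episode."""
    def __init__(self, obs_dim=16, embed_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                                     nn.Linear(32, embed_dim))
    def forward(self, current_steps):                 # (steps, obs_dim)
        return self.encoder(current_steps).mean(dim=0)

class PredictionNet(nn.Module):
    """Combines both embeddings with the current state to predict the next action."""
    def __init__(self, obs_dim=16, embed_dim=8, n_actions=5):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(obs_dim + 2 * embed_dim, 32), nn.ReLU(),
                                  nn.Linear(32, n_actions))
    def forward(self, state, character, mental_state):
        return self.head(torch.cat([state, character, mental_state], dim=-1))

# Usage: predict what an observed agent will do next.
char_net, mind_net, pred_net = CharacterNet(), MentalStateNet(), PredictionNet()
past = torch.randn(10, 16)      # ten past observations of this agent
recent = torch.randn(4, 16)     # the current, unfinished episode
state = torch.randn(16)         # the situation right now
logits = pred_net(state, char_net(past), mind_net(recent))
print(logits.softmax(dim=-1))   # predicted distribution over the agent's next actions
```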

In one experiment, ToMnet "watched" three AI agents maneuver around a digital room, collecting colored boxes. Each agent had its own peculiarity: one was "blind" and could not perceive the shape of the room or the placement of objects in it; another was "forgetful" and could not remember its last steps; the third could both see and remember.

After training, ToMnet began to predict the preferences of each agent by observing its actions. For example, the "blind" agent always moved along the walls, and ToMnet remembered that. The algorithm was also able to correctly predict the agents' future behavior and, more importantly, to recognize when an agent held a false belief about its environment.

In one test, the team gave one of the agents "myopia" and then changed the layout of the room. Agents with normal vision quickly adapted to the new layout, but the near-sighted agent kept following its original routes, falsely believing it was still in the old environment. ToMnet quickly picked up on this trait and accurately predicted the agent's behavior by putting itself in its place.
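The false-belief idea can be sketched in a few lines of Python. In this toy scenario - the world, the plan_route helper, and the positions are all invented for illustration - an agent plans its route from a stale, remembered map, so an observer that models the agent's belief predicts the route correctly, while one that only looks at the true layout does not:

```python
# Toy false-belief test: the "near-sighted" agent acts on its outdated belief,
# not on the true state of the room.

def plan_route(believed_box_position, start=0):
    """The agent walks step by step toward where it *believes* the box is."""
    step = 1 if believed_box_position >= start else -1
    return list(range(start, believed_box_position + step, step))

true_box_position = 7            # the room was rearranged...
stale_belief = 2                 # ...but the agent never noticed

agent_route = plan_route(stale_belief)             # what the agent actually does
naive_prediction = plan_route(true_box_position)   # observer ignoring beliefs
tom_prediction = plan_route(stale_belief)          # observer modelling the belief

print(agent_route == tom_prediction)     # True: belief-based prediction matches
print(agent_route == naive_prediction)   # False: layout-only prediction fails
```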

According to Dr. Alison Gopnik, a developmental psychologist at the University of California, Berkeley, who was not involved in these studies but reviewed the findings, the results do show that neural networks have a remarkable ability to learn skills on their own by observing others. At the same time, she says, it is still far too early to claim that these AIs have developed an artificial model of mental states.

According to Dr. Josh Tenenbaum of the Massachusetts Institute of Technology, who was also not involved in the study, ToMnet's "understanding" is tightly bound to the context of its training environment - the same room and the specific AI agents whose task was to collect boxes. This confinement makes ToMnet far less effective at predicting behavior in radically new environments, unlike children, who can adapt to new situations. The algorithm, the scientist says, would not cope with modeling the actions of a completely different AI or of a human.

In any case, the work of Winfield and DeepMind shows that computers are beginning to display the rudiments of "understanding" one another, even if that understanding is still primitive. And as they continue to hone this skill, understanding each other better and better, the time will come when machines can grasp the full complexity of our own consciousness.

Nikolay Khizhnyak