Four Types Of Artificial Intelligence: From Reactive Robots To Conscious Creatures - Alternative View


It is widely believed that, with the latest advances in artificial intelligence research, living and intelligent machines will soon appear on the horizon. Machines understand voice commands, recognize images, drive cars, and play games better than we do. How long until they walk among us?

A recently released White House report on artificial intelligence takes a skeptical stance. It says that in the next 20 years, we are unlikely to see machines "exhibiting intellectual capabilities comparable to or superior to humans," but in the coming years "machines will achieve human capabilities to perform more and more tasks." However, this report is missing a few important things.

Artificial intelligence researcher Arend Hintze argues that the report focuses exclusively on a dull kind of AI. It cuts short a huge branch of AI research: how evolution can help us develop ever-better AI systems, and how computational models can help us understand the evolution of our own human intelligence.

The report focuses on what the scientist calls the main tools of AI: machine learning and deep learning. These technologies have allowed machines to win quiz shows and beat masters of the game of Go. Such systems can process colossal amounts of data and perform complex calculations very quickly. But they lack an element that will be key to building the intelligent machines we would like to have in the future.

We need more than teaching machines to learn. We need to overcome the boundaries that define four different types of artificial intelligence. The barriers that separate machines from us - and us from them.

AI type I: reactive machines

The most basic AI systems are purely reactive: they cannot form memories or use past experience to inform current decisions. Deep Blue, IBM's chess-playing supercomputer that beat Grandmaster Garry Kasparov in the late 1990s, is a perfect example of this type of machine.


Deep Blue can identify the pieces on a chessboard and knows how they move. It can predict moves, both its own and its opponent's, and it chooses the most promising move available.

However, it has no notion of the past and no memory of what has happened. Aside from the rarely invoked chess rule against repeating the same position three times, Deep Blue ignores everything that came before the present moment. It simply looks at the pieces on the board and chooses the next move.

This type of intelligence involves a computer that perceives the world directly and acts on what it sees. It does not rely on an internal concept of the world. In his work, AI researcher Rodney Brooks argued that we should only build machines of this kind. In his view, humans are not very good at programming accurate simulated worlds for computers, at creating what is called a "representation" of the world.
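The perceive-and-act loop described above can be sketched in a few lines. This is a toy illustration, not Deep Blue's actual logic: the state names and the rule are invented for the example. The essential point is that the function keeps no history, so the same percept always produces the same action.

```python
# A minimal sketch of a purely reactive agent (Type I).
# The states and rules here are invented for illustration.

def reactive_policy(perceived_state: str) -> str:
    """Map the current percept directly to an action; no memory is kept."""
    if perceived_state == "obstacle_ahead":
        return "turn_left"
    return "move_forward"

# Identical input always yields identical output -- the agent cannot
# learn, get bored, or be surprised.
print(reactive_policy("obstacle_ahead"))  # turn_left
print(reactive_policy("clear_road"))      # move_forward
```

Because the policy is a pure function of the current percept, such a system is perfectly predictable, which is exactly the trade-off the article discusses: reliable, but incapable of drawing on experience.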

The modern intelligent machines we admire either have no such concept of the world, or only a very limited one tailored to specific tasks. The innovation in Deep Blue's design was not expanding the range of possible moves the computer considers. On the contrary, the developers found a way to narrow its view, discarding some future moves depending on how they were evaluated.
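A classic technique for discarding branches of a game tree in this way is alpha-beta pruning, which skips moves that provably cannot change the outcome. The sketch below works on an abstract tree supplied through two caller-provided functions (`children` and `value`, both assumptions of this example), not on real chess positions.

```python
# Minimax search with alpha-beta pruning over an abstract game tree.
# `children(node)` returns the successor states of a node;
# `value(node)` scores a leaf (higher is better for the maximizer).

def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, value))
            alpha = max(alpha, best)
            if alpha >= beta:  # remaining siblings cannot affect the result
                break
        return best
    best = float("inf")
    for child in kids:
        best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                   True, children, value))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

# Tiny example tree: the maximizer chooses between two leaves.
tree = {"root": ["a", "b"], "a": [], "b": []}
vals = {"root": 0, "a": 3, "b": 5}
print(alphabeta("root", 2, float("-inf"), float("inf"), True,
                lambda n: tree[n], lambda n: vals[n]))  # 5
```

The pruning never changes the chosen value; it only avoids evaluating branches whose outcome is already bounded away from the answer, which is what lets a search "narrow its vision" without losing correctness.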

Likewise, Google's AlphaGo, which beat the world champion at Go, cannot evaluate every possible future move either. Its analysis method is more sophisticated than Deep Blue's: it uses a neural network to evaluate how the game is unfolding.

These techniques improve the capabilities of AI systems and make them play certain games better, but they are not easy to change or apply to other situations. These computational "imaginations" have no concept of the world as a whole, which means they cannot go beyond the specific tasks they were built for, and they are easily fooled.

They cannot participate interactively in the world, which is exactly what we would like future AI systems to do. Instead, these machines behave exactly the same way every time they face the same situation. If we want an AI system to be reliable and trustworthy, that can be a good thing: you want your autonomous vehicle to be dependable. But if we want machines that truly engage with us and with the world, it is bad. The simplest AI systems never get bored; they cannot be interested or upset.

AI type II: limited memory

Type II includes machines that can look into the past. Self-driving cars already do this to a limited extent: for example, they observe the speed and direction of other vehicles. That cannot be done in a single instant; it requires identifying specific objects and tracking them over time.

These observations are added to the self-driving car's pre-programmed representation of the world, which includes lane markings, traffic lights, and other critical elements. They come into play when the car decides to change lanes without colliding with another vehicle.

But these simple pieces of information about the past are only transient. They are not saved as part of a library of experience from which the car can learn, the way human drivers accumulate experience over years behind the wheel.
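The transient memory described above can be illustrated with a short rolling window of observations. This is a hedged sketch, not how any real self-driving stack works: the class, window size, and one-dimensional positions are all invented for the example. The key property is that old observations simply fall out of the window.

```python
from collections import deque

class LimitedMemoryTracker:
    """Toy Type II agent: keeps only a short rolling window of
    observations of another vehicle's position (1-D, illustrative)."""

    def __init__(self, window: int = 3):
        # deque with maxlen silently discards the oldest entry when full.
        self.positions = deque(maxlen=window)

    def observe(self, position: float) -> None:
        self.positions.append(position)

    def estimated_speed(self):
        """Average change in position per step over the window,
        or None if there are too few observations."""
        if len(self.positions) < 2:
            return None
        pts = list(self.positions)
        deltas = [b - a for a, b in zip(pts, pts[1:])]
        return sum(deltas) / len(deltas)

tracker = LimitedMemoryTracker(window=3)
for p in [0.0, 2.0, 4.0]:
    tracker.observe(p)
print(tracker.estimated_speed())  # 2.0
tracker.observe(10.0)  # the oldest position (0.0) is forgotten
print(tracker.estimated_speed())  # 4.0
```

Nothing outside the window survives: the tracker can react to the recent past but builds no lasting library of experience, which is exactly the limitation of this type of system.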

How do we build AI systems that form complete representations, remember their experiences, and learn to cope with new situations? Brooks was right that this is very hard to do. Perhaps it is worth looking to Darwinian evolution for inspiration?

AI type III: theory of mind

Here we should pause and mark this point as the important gap between the machines we have and the machines we would like to build. The first step, however, is to be more specific about the representations machines will need to form.

Machines of the next, more advanced class form representations not only of the world, but also of other agents and entities in it. In psychology, this is called "theory of mind": the understanding that people, creatures, and objects in the world can have thoughts and emotions that affect their behavior.

This matters for how we humans form societies, because it makes social interaction possible. Without understanding one another's motives and intentions, and without accounting for what someone else knows about me or the environment, working together is difficult at best and impossible at worst.

If AI systems are ever truly to walk among us, they will need to understand what we think and feel, at least at the level of assumptions, and adjust their behavior accordingly.

IV type of AI: self-awareness

The ultimate goal of artificial intelligence development is to create systems that can form representations of themselves. Ultimately, AI researchers must not only understand consciousness, but also build machines that have it.

This is, in a sense, an extension of the "theory of mind" described in the previous type of AI. When we talk about consciousness, we also mean self-awareness. "I want this thing" is different from "I know that I want this thing." Conscious beings are self-aware, aware of their inner states, and able to anticipate the behavior and feelings of others. We assume that someone honking at us in traffic is angry or impatient, because that is how we might feel in their place. Without a theory of mind, we could not make such inferences.

While we are probably far from building self-aware machines, we should focus our efforts on understanding memory, learning, and the ability to make decisions based on past experience. This is an important step toward understanding the human mind itself. And it is essential if we want to design or evolve machines that can do much more than classify what they see in front of them.

ILYA KHEL