Could Machines Have Consciousness, According to Neuroscientists? It Seems the Answer Is Yes - Alternative View

As much as the director wants to make you believe it, the protagonist of Alex Garland's 2015 film Ex Machina isn't Caleb, the young programmer tasked with evaluating machine consciousness. No, the main character is Ava, a striking humanoid AI, naive in appearance and mysterious inside. Like most films of its kind, Ex Machina leaves the viewer to answer the question for themselves: was Ava really conscious? At the same time, the film skillfully sidesteps the thorny question that high-profile AI films have tried to answer: what is consciousness, and can a computer have it?

Hollywood producers aren't the only ones trying to answer this question. As machine intelligence evolves at breakneck speed - not only surpassing human ability at games like DOTA 2 and Go, but doing so without human assistance - the question is being raised again, among experts and the general public alike.

Will consciousness emerge in machines?

This week, the prestigious journal Science published a review by the cognitive scientists Stanislas Dehaene of the Collège de France, Hakwan Lau of UCLA, and Sid Kouider of PSL Research University. Their answer: not yet, but there is a clear path forward.

Why? Consciousness is "absolutely computable," the authors say, because it arises from specific types of information processing made possible by the brain's hardware.

There is no magic broth, no divine spark - not even an experiential component ("what is it like to have consciousness?") is required to implement consciousness.

If consciousness arises purely from the computations within our one-and-a-half-kilogram organ, then endowing machines with a similar property is just a matter of translating biology into code.

Just as today's powerful machine learning methods borrow heavily from neuroscience, we may achieve artificial consciousness by studying the structures in our own brains that generate consciousness and implementing those ideas as computer algorithms.

From brain to robot

There is no doubt that the field of AI has received a great boost from the study of our own brains, in both form and function.

For example, deep neural networks, the algorithmic architecture that formed the basis of AlphaGo, are modeled on the multilayered networks of biological neurons organized in our brains.
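
To make "multilayered" concrete, here is a minimal sketch of a two-layer forward pass in the spirit of such networks (the layer sizes, random weights, and input are arbitrary choices for illustration, not AlphaGo's actual architecture):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical sizes: 4 inputs -> 8 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

def forward(x):
    h = relu(W1 @ x + b1)  # first "layer" of artificial neurons
    return W2 @ h + b2     # second layer reads the first layer's activity

print(forward(np.array([1.0, 0.5, -0.3, 2.0])))
```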

Reinforcement learning, a type of "learning" in which an AI learns from millions of examples, is rooted in a centuries-old dog-training technique: if the dog does something right, it gets a reward; otherwise it has to try again.
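
That reward loop can be written down in a few lines. A minimal sketch of tabular reinforcement learning, with a toy task and reward function invented purely for illustration:

```python
import random

ACTIONS = [0, 1]

def reward(action):
    # Toy "trainer": action 1 is the trick that earns a treat.
    return 1.0 if action == 1 else 0.0

q = {a: 0.0 for a in ACTIONS}  # the agent's value estimate per action
alpha, epsilon = 0.1, 0.2      # learning rate, exploration rate

for _ in range(1000):
    # Explore occasionally; otherwise exploit the current best estimate.
    a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
    # If it does something right, it gets rewarded; otherwise, try again.
    q[a] += alpha * (reward(a) - q[a])

print(q)  # q[1] converges toward 1.0, q[0] stays near 0.0
```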

In this sense, translating the architecture of human consciousness into machines seems like a simple step towards artificial consciousness. There is only one big problem.

“No one in AI is working on building conscious machines, because we simply have nothing to work with. We just don't know what to do,” says Dr. Stuart Russell.

Multi-layered consciousness

The hardest hurdle on the way to building thinking machines is understanding what consciousness is in the first place.

For Dehaene and colleagues, consciousness is a multi-layered construct with two "dimensions": C1, the information readily held in the mind, and C2, the ability to obtain and track information about oneself. Both are important for consciousness and cannot exist without each other.

Let's say you are driving a car and an indicator lights up to warn you that fuel is low. Perceiving the indicator is C1, a mental representation we can interact with: we notice it, act on it (refuel), and talk about it later ("The gas ran out on the downhill - lucky I could coast").

“The first meaning we want to separate from consciousness is the notion of global availability,” explains Dehaene. When you become aware of a word, your whole brain has access to it; you can pass the information across different modalities.

But C1 is not just a "mental sketchpad." This dimension is an entire architecture that allows the brain to draw on multiple modalities of information, from the senses or, say, from memories of related events.

Unlike subconscious processing, which often relies on specific "modules" competent at a particular set of tasks, C1 is a global workspace that allows the brain to integrate information, decide on an action, and follow through.

By “consciousness,” we mean a particular representation that, at a given moment, competes for access to the mental workspace and wins. The winner is shared among the brain's various computational circuits and held at the center of attention throughout the decision-making process that shapes behavior.

C1 consciousness is stable and global - all connected brain circuits are involved, the authors explain.

For a machine as sophisticated as a smart car, C1 is the first step toward solving an impending problem like low fuel. Here, the indicator itself is a subconscious signal: when it lights up, all of the car's other processes remain uninformed, and the car - even one equipped with the latest visual processing tools - rushes past the gas station without hesitation.

With C1, the fuel tank would notify the car's computer (letting the indicator enter the car's "conscious mind") so that it, in turn, could activate the GPS to find the nearest station.

“We believe a machine would translate this into a system that extracts information from all the modules available to it and makes that information available to any other processing unit that can use it,” Dehaene says. "This is the first sense of consciousness."
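
One way to picture this "first sense of consciousness" in code: independent modules post signals with a salience score, the most salient signal wins the workspace, and the winner is broadcast to every module. This is only a sketch of the idea, not the authors' implementation; the module names and salience scheme are invented:

```python
class Workspace:
    """Minimal global-workspace sketch: modules compete, the winner is broadcast."""
    def __init__(self):
        self.modules = []

    def register(self, module):
        self.modules.append(module)

    def cycle(self):
        # Each module may nominate a (salience, content) signal.
        candidates = [s for m in self.modules if (s := m.nominate()) is not None]
        if not candidates:
            return
        _, content = max(candidates)  # the most salient signal wins...
        for m in self.modules:        # ...and becomes globally available
            m.receive(content)

class FuelSensor:
    def nominate(self):
        return (0.9, "low_fuel")  # a high-salience warning
    def receive(self, content):
        pass

class GPS:
    def nominate(self):
        return None  # nothing to report on its own
    def receive(self, content):
        if content == "low_fuel":
            print("GPS: routing to the nearest gas station")

ws = Workspace()
ws.register(FuelSensor())
ws.register(GPS())
ws.cycle()  # prints the GPS reaction: the signal reached every module
```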

Meta-cognition

In a sense, C1 reflects the mind's ability to take in information from outside. C2 turns inward: it is introspective.

The authors define the second network of consciousness, C2, as "meta-cognition": it reflects on whether you know or perceive something, or whether you have just made a mistake ("I think I should have refueled at the last station, but I forgot"). This dimension reflects the link between consciousness and the sense of self.

C2 is the level of consciousness that allows you to feel more or less confident in a decision. In computational terms, it is an algorithm that estimates the probability that a decision (or computation) is correct, even though we often experience it as a "sixth sense."
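
In code, that "sixth sense" can be as simple as attaching a confidence estimate to every decision and flagging the uncertain ones. A minimal sketch, assuming a decision expressed as probabilities over options (the threshold and option names are invented for illustration):

```python
import math

def decide(probs):
    """Pick the most probable option and report confidence in that choice."""
    best = max(probs, key=probs.get)
    return best, probs[best]  # the C2-style self-estimate rides along

def entropy(probs):
    # An alternative monitor: how spread out the overall belief is.
    return -sum(p * math.log(p) for p in probs.values() if p > 0)

choice, conf = decide({"refuel_now": 0.55, "keep_driving": 0.45})
if conf < 0.7:
    print(f"low confidence ({conf:.2f}) in '{choice}': gather more information")
```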

C2 also has roots in memory and curiosity. These self-monitoring algorithms allow us to know what we know and what we don't know - the "meta-memory" that helps you find the word sitting "on the tip of your tongue." Monitoring what we know (or don't know) is especially important for children, Dehaene says.

“It is absolutely essential for young children to keep track of what they know in order to learn and be curious,” he says.

These two aspects of consciousness work together: C1 pulls relevant information into our working mental space (discarding other “possible” ideas or solutions), and C2 helps with long-term reflection about whether conscious thinking has led to a useful result or response.

Going back to the low fuel indicator example, C1 allows the car to solve the problem instantly - these algorithms globalize the information, and the car learns about the problem.

But to solve the problem, the car needs a catalog of its "cognitive abilities" - an awareness of which resources are readily available to it, such as a GPS map of gas stations.

“A car that monitors itself in this way is what we would call operating with C2,” says Dehaene. Because the signal is globally available and is being monitored - as if the car were watching itself from the outside - the car would treat the low-fuel indicator the way a person would: reduce fuel consumption and find a gas station.
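
Putting the two dimensions together for the fuel example: C1 makes the warning globally available, while a C2-style monitor consults the car's model of its own resources before acting. A minimal sketch, with a hypothetical capability catalog invented for illustration:

```python
# A C2-style self-model: what the car "knows it has."
capabilities = {"gps_station_map": True, "fuel_flow_control": True}

def on_broadcast(signal):
    """React to a C1-style globally broadcast signal, checking the self-model first."""
    if signal != "low_fuel":
        return
    if capabilities.get("fuel_flow_control"):
        print("reducing fuel consumption")
    if capabilities.get("gps_station_map"):
        print("searching the station map for the nearest gas station")
    else:
        # Knowing what it does NOT have is also C2: degrade gracefully.
        print("no station map available: alerting the driver instead")

on_broadcast("low_fuel")
```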

"Most modern machine learning systems have no self-control," the authors note.

But their theory seems to be on the right track. In the examples where a self-monitoring system was implemented - whether as a structured set of algorithms or as a separate network - the AI developed "internal models that were meta-cognitive in nature, allowing the agent to develop a (limited, implicit, practical) understanding of itself."

Toward conscious machines

Would a machine with C1 and C2 behave as if it had consciousness? Very likely: a smart car would "know" that it is seeing something, express confidence in it, report it to others, and find the best solution to a problem. If its introspection mechanisms broke down, it might also experience "hallucinations" or the kinds of visual illusions humans have.

Thanks to C1 it could use the information it has flexibly, and thanks to C2 it would know the limits of what it knows, Dehaene says. "I think this machine would have consciousness," and not merely appear conscious to people.

If you are left with the feeling that consciousness is about much more than a global exchange of information and self-observation, you are not alone.

"This purely functional definition of consciousness may leave some readers unsatisfied," the authors admit. “But we are trying to take a radical step, perhaps simplifying the problem. Consciousness is a functional property, and as we continue to add functions to machines, at some point these properties will characterize what we mean by consciousness,”concludes Dehane.

Ilya Khel