When Will Robots and Artificial Intelligence Deserve Human Rights?

Films and television series such as Blade Runner, Humans, and Westworld, which show us high-tech robots that have no rights, cannot help but trouble anyone with a conscience. They not only depict our extremely aggressive attitude toward robots, they effectively shame us as a species. We are all accustomed to thinking that we are better than the characters we see on screen, and that when the time comes we will draw the right conclusions and treat intelligent machines with respect and dignity.

With every step of progress in robotics and the development of artificial intelligence, we draw closer to the day when machines match human capabilities in every aspect of intelligence, consciousness, and emotion. When that happens, we will have to decide whether what stands before us is an object on the level of a refrigerator or a person, and whether it deserves equivalent human rights, freedoms, and protections.

This question is enormous, and it will not be sorted out quickly, however much we might wish otherwise. It will have to be considered and resolved from many perspectives at once: ethics, sociology, law, neurobiology, and AI theory. Yet even now there is little reason to expect that all of these parties will arrive at a common conclusion that satisfies everyone.

Why grant rights to AI at all?

First, we must admit that we already lean toward moral regard when we see robots that closely resemble us. The more intellectually developed and "alive" a machine looks, the more we want to believe it is like us, even if it is not.

Once machines acquire basic human capabilities, whether we like it or not, we will have to regard them as social equals, and not merely as things, as someone's private property. The difficulty will lie in deciding which cognitive features, or traits, if you like, allow us to evaluate the entity before us in moral terms and, consequently, to consider the question of its social rights. Philosophers and ethicists have been wrestling with this problem for thousands of years.

“There are three most important ethical thresholds: the capacity to experience pain and empathy, self-awareness, and the capacity to see things from a moral point of view and make appropriate decisions,” says James Hughes, sociologist, futurist, and head of the Institute for Ethics and Emerging Technologies.

“In humans, if you are lucky, all three of these crucial capacities develop sequentially and gradually. But what if, in a machine intelligence, a robot that possesses no self-awareness and experiences neither joy nor pain is nonetheless held to have a claim to citizenship? We need to find out whether that could actually be the case.”

It is important to understand that intelligence, sentience (the capacity to perceive and feel things), consciousness, and self-awareness (the awareness of oneself as distinct from others) are completely different things. Machines or algorithms can be as smart as humans (if not smarter) while lacking the other three essential ingredients. Calculators, Siri, stock-trading algorithms: all of them are certainly smart, but they cannot be aware of themselves, cannot feel, cannot show emotions or perceive colors or the taste of popcorn.

According to Hughes, the emergence of self-awareness should be accompanied by granting the entity minimal personal rights: the right to be free rather than enslaved, the right to its own interests in life, and the right to growth and self-improvement. Upon acquiring both self-awareness and a moral foundation (the ability to distinguish "what is good and what is bad" by the moral standards of modern society), such an entity should be granted full human rights: the right to conclude agreements, to own property, to vote, and so on.

“The core values of the Enlightenment oblige us to consider these traits from a position of the equality of all before all, and to abandon the radically conservative views that prevailed before then and granted rights only to people of, say, a certain social, gender, or territorial background,” says Hughes.

Obviously, our civilization has yet to achieve even its own lofty social goals: we still struggle to understand our own rights and are still trying to expand them.

Who has the right to be called a "person"?

All humans are persons, but not all persons need be human. Linda MacDonald-Glenn, a bioethicist at California State University, Monterey Bay and faculty member at the Alden March Bioethics Institute at Albany Medical College, says there are already precedents in law in which non-human entities are treated as subjects of law. In her view, this is a significant achievement, because it lays the groundwork for one day endowing AI with rights of its own, equivalent to human rights.

“In the United States, corporations have legal personhood. Other countries have also seen precedents that attempt to recognize the interconnection and equality of all living things on this planet. In New Zealand, for example, animals are legally recognized as sentient beings, and the government actively encourages the development of codes of welfare and ethical conduct. A court in India declared the Ganges and Yamuna rivers 'living entities' and granted them the status of separate legal persons.”

In addition, in the United States, as in several other countries, certain species of animals, including great apes, elephants, whales, and dolphins, are candidates for extended rights that would protect them from confinement, experimentation, and abuse. Unlike the first two cases, in which personhood for corporations and rivers serves as a legal contrivance, the animal question is not an attempt to bend legal norms at all. Proponents of these proposals are advocating for genuine personhood, that is, for an individual who can be characterized by certain cognitive (mental) capacities, such as self-awareness.

MacDonald-Glenn says it is important in such matters to abandon the conservative view that animals and AI are simple soulless creatures and machines. Emotions are not a luxury, the bioethicist argues, but an integral part of rational thinking and social behavior. It is these characteristics, and not the ability to crunch numbers, that should play the decisive role in deciding "who" or "what" deserves moral consideration.

Science offers growing evidence of emotional capacity in animals. Observations of dolphins and whales show that they are capable of at least expressing grief, and the presence of spindle cells (interneurons that connect distant neurons and participate in the complex processes underlying social behavior) may indicate, among other things, that they are capable of empathy. Scientists have likewise described varied emotional behavior in great apes and elephants. It is possible that a conscious AI could acquire these emotional capacities as well, which would, of course, significantly raise its moral status.

“Limiting moral status to those who can think rationally might work for AI, but it runs against moral intuition. After all, our society already protects those who cannot think rationally: newborns, people in comas, people with significant physical and mental impairments. And animal welfare laws have recently been gaining ground,” says MacDonald-Glenn.

On the question of who deserves moral status, MacDonald-Glenn sides with the 18th-century English moral philosopher Jeremy Bentham, who famously said:

“The question is not, Can they reason? nor, Can they talk? but, Can they suffer?”

Can a machine acquire self-awareness?

Of course, not everyone agrees that human rights should extend to non-humans, even to entities capable of displaying emotion or self-reflective behavior. Some thinkers argue that only humans should have the right to participate in social relations, that the whole world revolves directly around Homo sapiens, and that everything else, your game console, your refrigerator, your dog, or your android interlocutor, is just that: everything else.

Wesley J. Smith, a lawyer, American writer, and senior fellow at the Discovery Institute's Center on Human Exceptionalism, believes that we ourselves have not yet secured universal human rights, which makes it all the more premature to worry about shiny pieces of metal and their rights.

“No machine should ever be considered even a potential bearer of any rights,” Smith says.

“Even the most advanced machines remain machines, and always will. They are not living beings. They are not living organisms. A machine will always be a set of programs, a set of code, whether it is created by a person, by another computer, or even programs itself.”

In his opinion, only humans and human enterprises should be considered persons.

“We have duties to animals that suffer unjustly, but they too should never be regarded as someone,” notes Smith.

A small aside is in order here for the Russian-speaking reader: in the West, animals are traditionally treated as grammatically inanimate, so the pronoun "it" is often used for a particular animal instead of "she" or "he." The rule is usually waived only for pets, dogs, cats, even parrots, whom households regard as full-fledged members of the family. However, Smith points out that the concept of an animal as sentient private property is itself a valuable identifier, since it "makes us responsible for using it in ways that do not hurt it. In the end, 'kicking the dog' and 'kicking the refrigerator' are two very different things."

An obviously controversial point in Smith's analysis is the assumption that humans, or biological organisms in general, possess certain "characteristics" that a machine can never acquire. In earlier eras these unnamed features were the soul, the spirit, or some intangible supernatural life force. The theory of vitalism postulates that processes in biological organisms depend on such a force and cannot be explained in terms of physics, chemistry, or biochemistry. It quickly lost its relevance under pressure from experimentalists and logicians, who saw no reason to link the workings of our brain to supernatural forces. Yet the belief that a machine can never think and feel the way humans do remains firmly entrenched even among scientists, which only reflects, once again, how limited and far from complete our understanding of the biological basis of self-awareness in humans still is.

Lori Marino, senior lecturer in neuroscience and behavioral biology (ethology) at the Emory Center for Ethics, says machines will most likely never earn any rights, let alone human-level rights. The reason lies in the findings of neuroscientists such as Antonio Damasio, who argues that consciousness depends on whether the subject has a nervous system with channels that transmit excited ions, or, as Marino herself puts it, positively charged ions passing through the cell membranes within the nervous system.

“This type of neural transmission is found even in the simplest living organisms, protozoa and bacteria. And it is the same mechanism that set in motion the development of neurons, then the nervous system, and then the brain,” says Marino.

“Robots and AI, at least in their current generation, run instead on the movement of negatively charged ions. So we are talking about two completely different mechanisms of being.”

By this logic, Marino suggests, even a jellyfish has more feeling than the most complex robot in history.

“I don’t know if this hypothesis is correct or not, but it is definitely a question that needs to be considered,” says Marino.

“Besides, I am simply curious to find out exactly how a 'living organism' differs from a really complex machine. Still, I believe legal protection should be given first of all to animals, and only then should we consider extending it to objects, which is what robots are, from my point of view.”

David Chalmers, director of the Center for Mind, Brain and Consciousness at New York University, says it is very difficult to draw firm conclusions about this whole theory, mainly because in their current state these ideas are far from widespread and therefore run well ahead of the evidence.

“At the moment there is no reason to believe that some special type of information processing in ion channels determines the presence or absence of consciousness. Even if that kind of processing were essential, we would have no reason to believe it requires some special biology rather than some general pattern of information processing known to us. And if so, a computer simulation of that information processing could count as consciousness.”

Another scientist who believes consciousness is not a computational process is Stuart Hameroff, professor of anesthesiology and psychology at the University of Arizona. In his view, consciousness is a fundamental phenomenon of the Universe, inherent in all things living and non-living. At the same time, human consciousness far surpasses that of animals, plants, and inanimate objects. Hameroff is a proponent of panpsychism, which holds that nature is animate throughout. Following his thinking, the only brain capable of genuine subjective evaluation and introspection is one made of biological matter.

Hameroff's idea sounds interesting, but it lies outside the scientific mainstream. It is true that we still do not know how consciousness and self-awareness arise in our brains; we only know that they do. Can consciousness therefore be treated as a process subject to the general laws of physics? Perhaps. According to Marino, consciousness cannot be reproduced in a stream of zeros and ones, but that does not mean we cannot depart from the prevailing paradigm, known as the von Neumann architecture, and build a hybrid AI system in which artificial consciousness emerges with the participation of biological components.

Biopod from the movie "eXistenZ"

Ed Boyden, a neuroscientist who leads the Synthetic Neurobiology Group at the MIT Media Lab, says we are still too young as a species to be asking such questions.

"I don't think we have a functional definition of consciousness that can be directly used to measure it or artificially create it," Boyden said.

“From a technical point of view, you cannot even tell whether I am conscious. So at the moment it is very hard even to guess whether machines will be able to attain consciousness.”

Boyden does not rule out that we may someday recreate consciousness in an alternative vessel (in a computer, for example), but he admits that for now scientists disagree about what exactly would be essential for creating such a digital emulation of the mind.

“We need to do a lot more work to understand what exactly is the key link,” says Boyden.

Chalmers, in turn, reminds us that we have not even figured out how consciousness awakens in a living brain, let alone in machines. At the same time, he believes we have no reason to think that biological machines can be conscious while synthetic ones cannot.

“Once we understand how consciousness arises in the brain, we will be able to understand which machines can have it,” comments Chalmers.

Ben Goertzel of Hanson Robotics, founder of the OpenCog Foundation, says we already have interesting theories and models of how consciousness manifests itself in the brain, but none of them reaches a common denominator or accounts for all the details.

“This is still an open question, and the answer is hidden behind several competing opinions. Part of the problem is that many scientists take different philosophical approaches to describing consciousness, even when they agree on the scientific facts and on theories grounded in observations of how brains and computers work.”

How can we determine the consciousness of a machine?

The emergence of consciousness in a machine is only one question. No less difficult is how exactly we would detect consciousness in a robot or AI. Scientists going back to Alan Turing have studied this problem for decades, arriving at language-based tests meant to determine whether a respondent is conscious. Ah, if only it were that simple. The trouble is that advanced chatbots (programs for conversing with people) can already fool people into believing they are talking to a living person rather than a machine. In other words, we need a more rigorous and convincing test.

“Defining personhood in a machine intelligence is complicated by the problem of the 'philosophical zombie.' In other words, you can create a machine that is very, very good at imitating human communication but has no identity or consciousness of its own,” says Hughes.

Two Google Home smart speakers making small talk

We recently witnessed a vivid example of this, when two Google Home smart speakers were set talking to each other. The whole thing was filmed and streamed live. Although the level of self-awareness of either speaker was no higher than a brick's, the character of the conversation, which grew more and more heated over time, resembled a dialogue between two humanoid beings. And that, in turn, shows once again that the question of the difference between humans and AI will only grow more complicated and more acute over time.

One solution, according to Hughes, is not merely to test the behavior of AI systems with tests like the Turing test, but to analyze the entire internal complexity of the system, as proposed in Giulio Tononi's theory. In that theory, consciousness is understood as integrated information (Φ), defined as the amount of information generated by a complex of elements over and above the sum of the information generated by its individual elements. If Tononi's theory is correct, we could use Φ not only to judge the human-like behavior of a system, but also to determine whether it is complex enough to have its own internal, human-like conscious experience. At the same time, the theory suggests that a system with different, non-human behavior, and even a different way of thinking, could still be judged conscious if its complex of integrated information passes the necessary checks.
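To make the idea concrete, here is a deliberately simplified sketch of the kind of quantity Tononi's theory computes. The real IIT formalism is considerably more involved (among other things, it minimizes over all possible partitions of the system), so treat this as an illustration rather than the actual definition:

Φ(S) ≈ I(S) − [ I(M1) + I(M2) + … + I(Mn) ]

Here I(S) is the information generated by the system S acting as a whole, and each I(Mi) is the information generated by the part Mi taken in isolation. A system whose whole generates more information than the sum of its parts has Φ > 0, and on this view, the higher the Φ, the richer the conscious experience the system is credited with.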

“Accepting that both stock-trading systems and computerized security systems could be conscious would be a big step away from anthropocentrism, even if such systems exhibit neither pain nor self-awareness. It would truly open the way for us to form and debate posthuman ethical norms.”

Another possible solution would be to discover the neural correlates of consciousness in machines, that is, to identify the parts of a machine responsible for producing consciousness. If the machine has such parts and they behave exactly as expected, then we can genuinely assess its level of consciousness.

What rights should we give to machines?

One day a robot will look a person in the face and demand human rights. But will it deserve them? As noted above, what stands before us at that moment may be an ordinary "zombie" behaving exactly as it was programmed to, trying to deceive us into granting it privileges. At that point we must be extremely careful not to fall for the trick and grant rights to an unconscious machine. Only once we figure out how to measure a machine's mind and assess its levels of consciousness and self-awareness can we begin to discuss whether the agent before us deserves certain rights and protections.

Fortunately for us, that moment is still far off. First, AI developers need to create a "basic digital brain" by completing emulations of the nervous systems of worms, beetles, mice, rabbits, and so on. These computer emulations could exist as digital avatars or as robots in the real world. Once that happens, these intelligent entities will cease to be ordinary objects of research and will rise in status to subjects entitled to moral evaluation. That does not mean these simple emulations will automatically deserve the equivalent of human rights. Rather, the law will have to protect them from mistreatment and abuse (along the same lines on which animal-rights advocates protect animals from abuse in laboratory experiments).

Ultimately, whether by modeling the brain down to the finest detail or by working out how our brains operate in computational, algorithmic terms, science will arrive at computer emulations of the human brain. By then we should already be able to detect the presence of consciousness in machines. One would like to hope so, at least. I do not even want to imagine that we might find a way to spark consciousness in a machine without understanding what we had done. That would be a true nightmare.

Once robots and AI acquire these basic capabilities, our computerized protégés will have to pass personhood tests. We still have no universal "recipe" for consciousness, but the usual set of measurements typically involves assessing a minimum level of intelligence, self-control, a sense of past and future, empathy, and the capacity for free will.

"If your choices are predetermined for you, then you cannot assign moral value to decisions that are not your own," comments MacDonald-Glenn.

Only on reaching this level of sophistication in the assessment will a machine qualify as a candidate for human rights. However, it is important to understand and accept that robots and AI will need at least basic protective rights once they pass the tests. For example, the Canadian scientist and futurist George Dvorsky believes that robots and AI passing a personhood test would deserve the following set of rights:

- The right not to be shut down against one's will;

- The right to unrestricted and full access to one's own digital code;

- The right to protect one's digital code from external interference against one's will;

- The right to copy (or not to copy) oneself;

- The right to privacy (namely, the right to conceal one's current psychological state).

In some cases a machine may be unable to assert its rights on its own, so provision must be made for humans (as well as other citizens who are not human) to act as representatives for such candidates for personhood. It is important to understand that a robot or AI need not be intellectually and morally perfect to pass a personhood assessment and claim the equivalent of human rights. Remember that humans are far from perfect in these respects too, so in fairness the same rules should apply to intelligent machines. Intelligence is a messy thing in general. Human behavior is often spontaneous, unpredictable, chaotic, inconsistent, and irrational. Our brains are far from ideal, and we will have to take that into account when making decisions about AI.

At the same time, a self-aware machine, like any responsible and law-abiding citizen, must respect the laws, norms, and rules prescribed by society, at least if it truly wants to become a full-fledged autonomous person and a part of that society. Consider children or the mentally incapacitated: do they have rights? Certainly. But we are responsible for their actions. It should be the same with robots and AI. Depending on their capabilities, they must either answer for themselves or have a guardian who can act not only as a defender of their rights but also as the party responsible for their actions.

What if we ignore the question?

Once our machines reach a certain level of sophistication, we will no longer be able to ignore them from the standpoint of society, institutions of power, and the law. We will have no compelling reason to deny them human rights; to do otherwise would amount to discrimination and slavery.

Drawing a hard boundary between biological beings and machines would look like a blunt assertion of human superiority and ideological chauvinism: biological people are special, and only biological intelligence matters.

“If we weigh our willingness, or unwillingness, to expand the boundaries of our morality, and the very essence of the concept of personhood, the important question sounds like this: what kind of people do we want to be? Will we follow the 'golden rule' here as well (do unto others as you would have them do unto you), or will we ignore our own moral values?” asks MacDonald-Glenn.

Granting rights to AI would set an important precedent in human history. If we can regard AI as socially equal persons, it will be a direct reflection of our social cohesion and a testament to our sense of justice. Our failure to resolve this issue could erupt into general social protest, and perhaps even a confrontation between AI and humans. And given the superior potential of machine intelligence, that could end in outright disaster for the latter.

It is also important to realize that respect for the rights of robots could in the future benefit other persons as well: cyborgs, transgenic people carrying foreign DNA, and people whose brains have been copied, digitized, and uploaded to supercomputers.

We are still a long way from creating a machine that deserves human rights. But considering how complex the issue is, and what exactly is at stake for both artificial intelligence and humanity, it can hardly be said that planning ahead is unnecessary.

NIKOLAY KHIZHNYAK
