Dangerous Humanity: Why Do We Need Intelligent Artificial Intelligence? - Alternative View

When Norbert Wiener, the father of cybernetics, wrote his book The Human Use of Human Beings in 1950, vacuum tubes were still the primary electronic building blocks and only a handful of computers were actually in operation. Yet he envisioned the future we now inhabit with remarkable precision, erring only in minor details.

Before any other philosopher of artificial intelligence, he realized that AI would not only imitate, and eventually replace, human beings in many kinds of intellectual activity, but would also change people in the process. "We are but whirlpools in a river of ever-flowing water," he wrote. "We are not stuff that abides, but patterns that perpetuate themselves."

For example, when enticing new opportunities appear, we are usually willing to pay the small cost of doing business to gain access to them. And very soon we become dependent on the new tools and lose the ability to exist without them. Options become obligations.

This is a very old evolutionary story, and many of its chapters are well known to us. Most mammals can synthesize their own vitamin C, but fruit-eating primates have lost that built-in ability. The self-perpetuating patterns we call humans now depend on clothing, processed food, vitamins, syringes, credit cards, smartphones, and the internet. And tomorrow, if not today, on artificial intelligence.

Wiener foresaw several problems with this state of affairs that Alan Turing and other early AI optimists largely overlooked. The real threat, he wrote, is:

… that such machines, though helpless by themselves, may be used by a human being or a block of human beings to increase their control over the rest of the human race, or that political leaders may attempt to control their populations by means not of the machines themselves but through political techniques as narrow and indifferent to human possibility as if they had, in fact, been conceived mechanically.

Obviously, these dangers are now very relevant.

In the media, for example, the innovations of digital audio and video let us pay a small price (in the eyes of audiophiles and film lovers) for abandoning analog formats, and in return give us an extremely simple, indeed too simple, way of reproducing recordings with almost no restrictions.

But there is a huge hidden cost. Orwell's Ministry of Truth has become a real possibility. AI techniques for creating virtually undetectable fake "recordings" are making obsolete the investigative tools we have relied on for the past 150 years.

We will either have to abandon the brief era of photographic evidence and return to the old world where human memory and trust were the gold standard, or develop new methods of defense and attack in the battle for truth. One of the most sobering recent lessons is that destroying a reputation is far cheaper than earning and protecting one. Wiener saw this phenomenon in very broad terms: "In the long run, there is no distinction between arming ourselves and arming our enemies." The information age has also become an age of disinformation.

What can we do? The key is Wiener's own observation that these machines are "helpless by themselves." We create tools, not colleagues, and the real threat is failing to see the difference.

Artificial intelligence in its current form is parasitic on human intelligence. It quite unceremoniously appropriates everything human creators have produced and extracts the patterns in it, including our most private habits. These machines do not yet have goals or strategies, are not capable of self-criticism or innovation; they merely study our databases, with no thoughts or goals of their own.
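
To make "extracting patterns" concrete, here is a toy sketch in Python of the simplest possible pattern-learner, a bigram model trained on an invented miniature corpus. Everything it can ever say is a recombination of transitions human authors already produced; it has no goals, no plan, and no understanding. It illustrates the idea only, not any particular AI product.

```python
import random
from collections import defaultdict

# Toy illustration of "extracting patterns" from human-made material:
# a bigram model trained on a tiny, invented corpus. It can only
# recombine word transitions that human authors already produced.
corpus = (
    "we are not stuff that abides "
    "we are patterns that perpetuate themselves"
).split()

# Pattern extraction: record which words follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Emit text by sampling learned transitions; no goals, no plan."""
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # dead end: the model knows nothing beyond its data
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("we"))  # e.g. "we are not stuff that perpetuate themselves"
```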

They are, as Wiener put it, helpless, not in the sense that they are chained or immobilized; no, they are not agents at all: they lack the capacity to "act for reasons," as Kant might put it.

In the long term, "strong AI," or artificial general intelligence, is possible in principle, but not desirable. The far more limited AI that is possible in practice today will not be evil. But it poses a threat, in part because it can be mistaken for strong AI.

How strong is artificial intelligence today?

The gap between today's systems and the science-fiction systems that flood the popular imagination is still huge, although many people, amateurs and professionals alike, tend to underestimate it. Consider IBM's Watson, as worthy a representative as any of the current state of the art.

This supercomputer is the result of an extremely large-scale R&D (research and development) process involving many people and many person-centuries of intelligent design, and it uses thousands of times more energy than a human brain. Its victory in Jeopardy! was a genuine triumph, made possible by the formulaic constraints of the Jeopardy! rules, but even those rules had to be revised for it to take part. It had to give up a little versatility and gain a bit of humanity to make the show work.

Watson is not good company, despite misleading advertisements from IBM suggesting a general conversational ability, and turning Watson into a plausible multidimensional agent would be like turning a pocket calculator into Watson. Watson could be a useful computational component of such an agent, but it is more like a cerebellum or an amygdala than a mind: at best a special-purpose subsystem that provides support, not remotely a system that plans and formulates goals on the basis of its conversational experience.

Why would we want to make a thinking, creative agent out of Watson? Perhaps Turing's brilliant idea, the famous Turing test, lured us into a trap: we became obsessed with creating at least the illusion of a real person sitting behind the screen, bypassing the "uncanny valley."

The danger is that ever since Turing posed his challenge, which was, above all, a challenge to deceive the judges, AI creators have tried to meet it with funny humanoid dolls, "cartoon" versions that fascinate and disarm the uninitiated. Joseph Weizenbaum's ELIZA, the very first chatbot, was a prime example of such an illusion: an extremely simple algorithm that could nevertheless convince people they were having a sincere, heartfelt conversation with another human being.
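
To show how little machinery such an illusion requires, here is a minimal ELIZA-style sketch in Python. It is not Weizenbaum's original program; the keyword patterns and response templates are invented for this example, but the technique, matching keywords and reflecting the user's own words back as a question, is the one ELIZA made famous.

```python
import random
import re

# A few keyword -> response-template rules in the spirit of ELIZA's
# script for a Rogerian psychotherapist. These patterns and templates
# are invented for this illustration, not Weizenbaum's originals.
RULES = [
    (r"\bI need (.+)", ["Why do you need {0}?",
                        "Would getting {0} really help you?"]),
    (r"\bI am (.+)", ["How long have you been {0}?",
                      "Why do you think you are {0}?"]),
    (r"\bmy (mother|father|family)\b", ["Tell me more about your {0}."]),
]
# Fallbacks when no keyword matches; ELIZA leaned on these heavily.
DEFAULTS = ["Please go on.", "How does that make you feel?"]

# Crude first-to-second-person swaps applied to captured text,
# so "I need my job back" reflects as "your job back".
SWAPS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(text: str) -> str:
    return " ".join(SWAPS.get(word.lower(), word) for word in text.split())

def respond(utterance: str) -> str:
    for pattern, templates in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            template = random.choice(templates)
            return template.format(*(reflect(g) for g in match.groups()))
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(respond("I need a holiday"))    # e.g. "Why do you need a holiday?"
    print(respond("I am sad about work")) # e.g. "How long have you been sad about work?"
```

A handful of rules like these, plus the reflection trick, was enough to persuade many of ELIZA's users that something understood them.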

Weizenbaum himself was worried by the ease with which people were willing to believe it. And if we have learned anything from the annual Loebner Prize competition, a restricted Turing test, it is that even very smart people who are not versed in computer programming fall easily for these simple tricks.

Attitudes within the AI field toward such techniques range from condemnation to celebration, and the consensus is that the tricks are not very deep but can be useful. A shift in attitude that would be very welcome is a sincere admission that doll-like humanoid embellishments are false advertising, something to condemn, not encourage.

How can this be achieved? Once we understand that people are beginning to make life-and-death decisions by following the "advice" of AI systems whose internal operations are almost unfathomable, we will see a good reason why those who urge people to trust such systems should be held to the norms of morality and law.

Artificial intelligence systems are very powerful tools. So powerful that even experts have good reason not to trust their own judgment when it conflicts with the "judgments" these tools deliver. But if the users of these tools are going to benefit, financially or otherwise, from popularizing them, they must make sure they know how to do so with full responsibility, maximum control, and justification.

Licensing and certifying the operators of such systems, just as we license pharmacists, crane operators, and other professionals whose mistakes and misjudgments can have dire consequences, might, with the support of insurance companies and other underwriters, oblige the creators of AI systems to go to great lengths to seek out the weaknesses and shortcomings of their products, and to train those who are going to work with them.

One can imagine a kind of inverted Turing test in which the judge is the subject of the assessment: until he can find the weaknesses, the boundary violations, the gaps in a system, he gets no license. Such a judge would need serious training to earn certification. The urge to attribute the human capacity for thought to an object, as we habitually do when we meet anything that seems like an intelligent agent, is very, very strong.

In fact, the ability to resist the urge to anthropomorphize is a strange talent, and many people would consider cultivating it a dubious project, since even the most pragmatic users of a system are occasionally "friendly" toward their tools.

No matter how carefully AI designers work to eliminate fake "human" touches in their products, we should expect a flourishing of labels, workarounds, and tolerated distortions of what both the systems and their operators actually "understand." Just as drugs are advertised on TV with a long list of side effects, and alcohol with an abundance of fine print carrying all the legally required warnings, AI developers will comply with the law while growing ever more sophisticated in their warnings.

Why do we need artificial intelligence?

We do not need artificial conscious agents. There is an abundance of natural conscious agents, enough to carry out whatever tasks should be reserved for such special and privileged beings. We need intelligent tools. Tools have no rights, and they should not have feelings that can be hurt or abused.

One reason not to make artificial conscious agents is that, although they could become autonomous (and in principle as autonomous, self-improving, or self-creating as any person), they would not, without special provision, share with us natural conscious agents our vulnerability or our mortality.

Daniel Dennett, a professor of philosophy at Tufts University, once challenged students in a seminar on artificial agents and autonomy: give me the specifications of a robot that could sign a contract with you, not as a surrogate owned by some other person, but in its own right. This is not a question of understanding the reasons or of manipulating a pen on paper, but of owning, and deservedly owning, legal status and moral responsibility. Young children cannot sign such contracts, nor can the disabled, whose legal status places them under guardianship and puts responsibility on their guardians.

The problem with robots that might want to attain such exalted status is that, like Superman, they are too invulnerable to make credible commitments. If they renege, what happens? What would the penalty be for breaking a promise? Being locked in a cage or taken apart? Jail would be no inconvenience to an artificial intelligence unless we first installed a thirst for freedom that the AI itself could not ignore or disable, and disassembly would not kill the information stored on its disks and in its software.

The ease of digitally recording and transmitting data, the breakthrough that allowed software and data to achieve, in effect, immortality, makes robots invulnerable. If that does not seem obvious, think about how human morality would change if people could be backed up every week. Jumping off a bridge without a bungee cord on Sunday, after a Friday backup, would be a rash decision; you could watch the footage of your untimely death later.

That is why we are creating not conscious humanoid agents, nor should we want to, but an entirely new kind of being: oracles of a sort, unconscious, with no fear of death, no distracting loves and hates, no personality. Mirrors of truth that will almost certainly be contaminated with lies.

The human use of human beings will soon change, once again, forever. But if we take responsibility for our evolutionary trajectory, we can avoid unnecessary dangers.

Ilya Khel