All Our Knowledge About Artificial Intelligence Is Just Delusion - Alternative View

Table of contents:

All Our Knowledge About Artificial Intelligence Is Just Delusion - Alternative View
All Our Knowledge About Artificial Intelligence Is Just Delusion - Alternative View

Video: All Our Knowledge About Artificial Intelligence Is Just Delusion - Alternative View

Video: All Our Knowledge About Artificial Intelligence Is Just Delusion - Alternative View
Video: Matthew De Abaitua - Artificial Imagination: Dreams and Delusions of AI (DotYork 2018) 2024, May
Anonim

Perhaps one of the most difficult tests of machine intelligence was the chess game that took place almost 20 years ago between the Deep Blue computer and the world chess champion Garry Kasparov. The car won. At the moment, the series of games in the logic game of go has ended, where the AI AlphaGo from DeepMind (owner of Google) and the legendary go champion from China Li Si Dol competed. In four of the five games the machine won, showing its superiority over the human in this game. The incredibly complex game between human and AI shows that machine intelligence has developed very strongly during this time. It seems that the fateful day, when machines really become smarter than humans, is now closer than ever. However, it seems that many do not understand at all, but rather are even mistaken about the consequences that may await us in the future.

We do underestimate some very serious and even dangerous consequences of the development of artificial intelligence. Last year, SpaceX co-founder Elon Musk issued a statement in which he expressed concern about the possible enslavement of the world by artificial intelligence, which in turn prompted a huge amount of comments from both supporters and opponents of this opinion.

Image
Image

For such a fundamentally revolutionary future that can await us, it is surprising that there are so many disagreements today about whether it will happen at all and what it will eventually turn out to be. It is especially strange to deny the incredible benefits that we can get from creating a truly smart AI, of course, taking into account all possible risks. All these questions are incredibly difficult to get the right answers, since AI, unlike any other inventions of mankind, can really "rejuvenate" this mankind or completely destroy it.

Now it is difficult to understand what to believe and what to hope for. However, thanks to pioneers of computer science, neuroscientists and theorists of AI development, a clear picture is slowly starting to clear up. Below is a list of the most common misconceptions and myths regarding artificial intelligence.

We will never create AI with humanoid intelligence

Reality, in turn, suggests that we already have computers that match and even exceed human capabilities in some areas. Take chess or the same game of go, trading on stock exchanges, or the role of a virtual interlocutor. Computers and the algorithms that control them will only get better over time, and it’s only a matter of time until they become equal to human capabilities.

Promotional video:

Image
Image

New York University researcher Gary Marcus once said that "virtually everyone" who works with AI believes that machines will one day surpass us:

"The only dispute between supporters and skeptics is the timing of this event."

Futurists such as Ray Kurzweil believe this could happen in the coming decades, while others say it will take several centuries.

AI skeptics are unconvincing in their evidence that the creation of artificial intelligence as something unique and very similar to a real living human brain in technological terms is somewhere beyond reality. Our brain is also a machine. Biological machine. It exists in the same world and obeys the same laws of physics as everything else. And over time, we will fully figure out the whole principle of its work.

AI will have consciousness

There is a general consensus that machine intelligence will be conscious. That is, AI will think in the same way as a human does. In addition, some critics, such as Microsoft co-founder Paul Allen, believe that due to an incomplete theoretical base describing the mechanisms and principles of self-awareness, we have not yet created even general artificial intelligence, that is, intelligence capable of performing any intellectual tasks with which a person can handle. However, according to Murray Shanahan, assistant professor in the Department of Cognitive Robotics at Imperial College London, we should not equate the two.

Image
Image

“Self-awareness is definitely a very interesting and important subject of research, but I do not believe that self-awareness should be a mandatory attribute of human-like artificial intelligence,” says Shanahan.

"By and large, we use the term 'self-awareness' only to indicate several psychological and cognitive attributes that are usually linked in a person."

It's possible to imagine a very smart car that lacks one or more of these attributes. Someday we will be able to build a truly incredibly intelligent AI, but at the same time devoid of the ability to self-awareness, as well as subjective and conscious understanding of the world around us. Shanahan notes that the unification of intelligence and self-awareness within the machine will still be possible, but we must not overlook the fact that these are two completely separate concepts.

And although one of the variants of the "Turing Test", in which a machine showed that it is no different from a person, was successfully passed, this does not mean at all that this machine has consciousness. From our (human) point of view, advanced artificial intelligence may appear to us as something possessing self-awareness, but the machine itself will be no more aware of itself than the same stone or calculator.

We shouldn't be afraid of AI

In January of this year, Facebook founder Mark Zuckerberg shared his thoughts on how we shouldn't be afraid of AI, adding that this technology can bring tremendous benefits around the world. The truth is, he's only partly right. We will indeed be able to reap amazing benefits with AI at our disposal (from self-driving cars to new discoveries in medicine), but no one can guarantee that every application of AI will be beneficial.

A highly intelligent system, perhaps, will know everything necessary for certain tasks (for example, resolving a complex global financial situation or hacking the enemy's computer systems), however, outside of highly specialized tasks, the potential of AI is still completely unclear and therefore potentially dangerous. For example, DeepMind's system is specialized in the game of go, but it has no ability (and reason) to explore areas outside of this realm.

Flame computer virus, whose task was to track the Middle East countries

Image
Image

Many of these systems can pose serious security risks. A good example is the powerful and highly cunning Stuxnet virus, a militarized worm created by the US and Israeli military to infiltrate and attack Iranian nuclear power plants. Only this malicious program somehow (accidentally or deliberately) infected one of the Russian nuclear power plants.

Another example is the Flame virus, developed for targeted cyber espionage in the Middle East. It is easy to imagine how "future versions" of Stuxnet and Flame viruses will independently go beyond their assigned tasks and will be able to infect almost the entire infrastructure of a country. And they will do it very quietly and imperceptibly.

Artificial superintelligence will be too smart to make mistakes

Richard Luzmore, mathematician, artificial intelligence researcher and founder of the robotics company Surfing Samurai Robots, believes that most AI doomsday scenarios look unlikely because they are all broadly based on what AI will one day say, “I understand that destruction people is a mistake in my program code, but I still have to follow the task."

Image
Image

Luzmore believes that if AI behaves according to this scenario, it will face logical contradictions that will question its entire accumulated knowledge base and will eventually lead to its own awareness of its stupidity and uselessness.

The researcher also believes that those people who say that "AI will only do what is included in its program" are mistaken, just like those people who once said the same words, but towards computers, declaring that computer systems will never have universality.

Peter McIntyre and Stuart Armstrong, both at the Institute for the Future of Humanity at Oxford University, in turn, disagree with this opinion, arguing that AI behavior will be mandatory and largely justified by the program code. Scientists do not believe that AI will never make mistakes or, on the contrary, will be too stupid to understand exactly what we want from them.

“By definition, artificial superintelligence (ISI) is an agent whose intelligence will many times surpass the best minds of humanity in almost all areas,” says McIntyre.

"He will definitely understand what we want from him."

McIntyre and Armstrong believe that AI will perform only those tasks for which it was programmed, but if it somehow develops on its own, then, most likely, it will try to figure out how much its actions will differ from human tasks and inherent in it. laws.

McIntyre compares the future situation of humans to that faced by mice. Mice have a powerful instinct for finding food and shelter, but their goals very often conflict with a person who does not want to see them in his home.

"Like our knowledge of mice and their desires, the superintelligence system can also know everything about us and know what we want, but at the same time it will be completely indifferent to our desires."

Simple solution eliminates AI control problem

As shown in the movie "From the Machine", it will be very difficult to control the AI, which will be much smarter than us

Image
Image

If we assume that one day we will create superintelligence that will be smarter than humans, then we will have to face a serious problem, the problem of control. Futurists and artificial intelligence theorists are not yet able to explain how we can control and contain ISI after its creation. It is also unclear how we can ensure that he will be friendly towards people. More recently, researchers at the Georgia Institute of Technology (USA) naively suggested that AI can learn and absorb human values and knowledge of social norms by simply reading simple fairy tales. Yes, yes, simple children's fairy tales and stories that parents read to us in childhood. But in reality, everything will turn out to be much more complicated than all this.

"Many so-called 'solutions' have been proposed for the issue of artificial intelligence control," says Stuart Armstrong.

One example of such solutions can be programming the ISI in such a way that it constantly tries to please or please a person. An alternative would be to integrate concepts such as love or respect into its source code. And in order to avoid such a development of the scenario, according to which AI can simplify all these concepts and perceive this world through the prism of these super-simplified concepts, dividing it only into black and white, it can be programmed to understand and accept intellectual, cultural and social diversity.

The three laws of robotics created by Isaac Asimov fit perfectly into the concept of fiction, but in reality we need something more comprehensive to address the issue of control

Image
Image

Unfortunately, these solutions are too simple and look like an attempt to fit all the complexity of human likes and dislikes within one general definition or concept, or an attempt to fit the entire complexity of human values into one word, phrase or idea. Try, for example, to fit within this framework a consistent and adequate definition for such a thing as "respect."

“Of course, one should not think that such simple options are completely useless. Many of them offer a great reason to think, and perhaps stimulate the search for a solution to the ultimate problem,”says Armstrong.

"But we cannot rely solely on them without more complex work, without clear research and looking for the consequences of using certain solutions."

We will be destroyed by artificial superintelligence

No one can guarantee that AI will one day destroy us, just as no one can say with certainty that we will not be able to find ways to control and use AI for our own purposes. As the American expert on artificial intelligence Eliezer Yudkowski once said: "AI cannot love or hate you, but you are made of atoms that it could use for something else."

Oxford philosopher Nick Bostrom in his book "Superintelligence: Ways, Dangers and Strategies" writes that real superintelligence will one day be able to realize itself, which will make it more dangerous than any invention ever created by man. Prominent modern thinkers such as Elon Musk, Bill Gates and Stephen Hawking (many of whom believe that "AI could be the worst mistake in human history") are more likely to agree with this opinion and are already sounding the alarm.

Peter McIntyer believes that for most of the tasks that superintelligence can set for itself, people will look like an extra link.

“AI will one day be able to conclude - and it should be noted that it will calculate this quite correctly - that people do not want to use it to maximize the profitability of any particular company at any cost, regardless of the consequences for consumers, the environment and living beings. Therefore, he will have a huge incentive to develop a plan and strategy, thanks to which a person will not be able to interfere with the solution of the task, by changing it or even turning off the AI."

According to MacIntyer, if the AI's tasks are directly opposite to our own, then it will give him great reasons to not let us stop him. And if we consider that the level of his intelligence will be much higher than ours, it will really be very, very difficult to stop him, if at all possible.

However, it is impossible to say anything with certainty right now, and no one will be able to say which form of AI we will have to deal with and how it can threaten humanity. As Elon Musk once said, AI could actually be used to control, track and control other AIs. And perhaps human values and initial friendliness to people will be introduced into it.

Artificial superintelligence will be friendly

The philosopher Immanuel Kant believed that intelligence is strongly interconnected with the principles of morality. In his work Singularity: A Philosophical Analysis, neuroscientist David Chalmers took Kant's famous idea as a basis and tried to apply it to analyze the formation of artificial superintelligence.

Image
Image

“If the principles described in this paper are correct, then along with the sharp development of AI, one should expect a sharp development of moral principles. Further development will lead to the emergence of ISI systems, which will have supermorale and superintelligence. Therefore, we should expect only benevolent qualities on their part."

The truth is that the idea of an advanced AI with moral principles and exceptional virtue does not hold water. As Armstrong points out, the world is full of smart war criminals. Intellect and morality in a person, for example, are not connected in any way, therefore the scientist questions that such a relationship will exist in other forms of intelligence.

“Smart people who behave immorally tend to create far more problems and pain than their less intelligent counterparts. Intelligence enables them to be more sophisticated in their bad rather than good actions,”says Armstrong.

McIntyre explains that the ability of an agent to achieve a given task has nothing to do with what that task is.

“We will be very lucky if our AIs become more moral, not just smart. Relying on luck is, of course, the last thing in this matter, but perhaps it is luck that will determine our position in the future,”the researcher says.

The risks associated with AI and robotics are the same

This is a particularly common misconception used in the media, as well as in Hollywood blockbusters like The Terminator.

Image
Image

If artificial superintelligence such as Skynet, for example, really wants to destroy all of humanity, then it is unlikely he will use paramilitary androids with machine guns in each hand. His cunning and effective thinking will allow him to understand that it will be much more convenient to use, say, a new type of biological plague or, for example, some nanotechnological disaster. Or maybe he will just take and destroy the atmosphere of our planet. AI is potentially dangerous not just because its development is closely related to the development of robotics. The reason for his potential danger lies in the methods and means through which he can declare his presence to the world.

AI featured in science fiction reflects our future

Without a doubt, for many years, writers and science fiction writers have used the science fiction environment as a springboard for speculating about our real future, but the actual creation of ISI and the real consequences of this are still beyond the horizon of our real knowledge. Moreover, such an artificial and clearly inhuman nature of AI does not even allow us to assume with any certainty what this AI will actually be like.

Image
Image

In most sci-fi works, AI is as close to humans as possible.

“Before us is actually a whole spectrum of the most diverse possible types of mind. Even if we take only the human form. For example, your mind is far from identical with your neighbor's mind. But this comparison is just a drop in the sea of all the possible diversity of minds that could exist,”says McIntyre.

Most sci-fi fictions are, of course, created primarily to tell a story, not to be as scientifically convincing as possible. If it were the other way around (science is more important than the plot), then it would be uninteresting to follow such works.

“Just imagine how boring all these stories would be, where AI, without self-awareness, the ability to rejoice, love or hate, destroys all people practically without any resistance to achieve its goal, which, by the way, may not be very interesting either. for the reader or viewer,”comments Armstrong.

AI will take our jobs

AI's ability to automate the processes that we do manually, and its potential to destroy all of humanity are not the same thing. However, according to Martin Ford, author of Rise of the Robots: Technology and the Threat of a Jobless Future, these concepts are often tried to be compared and combined into a coherent whole. Of course, it's great that we try to anticipate the consequences of creating AI so far, but only if this effort does not distract us from the problems that we may face in a couple of decades if we do nothing. And one of the main such problems is mass automation.

Image
Image

Few would argue that one of the challenges of artificial intelligence will be to find a way to automate many jobs, from factory jobs to some white-collar jobs. Some experts predict that half of all jobs, at least in the US, may be automated in the near future.

But this does not mean at all that we will not be able to accept such changes. It is important to understand that the desire to relieve oneself from unnecessary physical and psychological stress at work has actually been a dream of our human species for a long time.

“In a few decades, AI will indeed replace people in many jobs. And this is actually very good. For example, self-driving cars will replace truckers, which in turn will not only lower the cost of shipping goods, but also lower the cost of the goods themselves. If you work as a truck driver, then you, of course, remain the last one, but everyone else in this case is more likely to receive a raise in their salaries. And the more this money can be saved, the more this money can be spent on other goods and services that people will be engaged in at their workplaces."

Most likely, artificial intelligence will search for new ways of well-being and feed people while they do other things. Advances in AI development will be accompanied by successes in other areas. Especially in the manufacturing sectors. In the future, all of this is likely to greatly simplify rather than complicate the satisfaction of our basic needs.