When Will We Have Real Artificial Intelligence? - Alternative View

The field of artificial intelligence research has come a long way, but many believe it was officially born when a group of scientists gathered at Dartmouth College in the summer of 1956. Computers had improved dramatically in the preceding years; they were already performing calculations far faster than humans. Given all this incredible progress, the scientists' optimism was understandable. The brilliant computer scientist Alan Turing had proposed the idea of thinking machines a few years earlier, and the scientists arrived at a simple idea: intelligence, in essence, is just a mathematical process. The human brain is, to a degree, a machine. Isolate the process of thinking, and a machine can imitate it.

At the time, the problem did not seem particularly difficult. The Dartmouth scientists wrote: "We believe that significant progress can be made on one or more of these problems if a carefully selected group of scientists work on it together over the summer." This proposal, incidentally, contained one of the earliest uses of the term "artificial intelligence". There were many ideas: perhaps imitating the wiring of neurons in the brain could teach machines the abstract rules of human language.

The scientists were optimistic, and their efforts were rewarded. Their programs seemed to understand human language and could solve algebra problems. People confidently predicted that human-level machine intelligence would appear within twenty years.

By a happy coincidence, the business of forecasting when we will have human-level artificial intelligence was born at about the same time as the field of AI itself. It goes all the way back to Turing's first paper on "thinking machines," in which he predicted that the Turing test - in which a machine must convince a person that it, too, is human - would be passed fifty years later, by the year 2000. Today, of course, people still predict that it will happen within the next 20 years; Ray Kurzweil is among the most famous of these "prophets." There are so many opinions and predictions that it sometimes seems AI researchers should put the following message on their answering machines: "I already predicted what your question would be, but no, I cannot predict it exactly."

The problem with trying to predict an exact date for human-level AI is that we don't know how far we have to go. It is nothing like Moore's Law. Moore's Law - the doubling of computing power roughly every two years - makes a specific prediction about a specific phenomenon. We roughly understand how to keep moving forward - by improving silicon chip technology - and we know that our current approach is not limited in principle (at least until we reach chips at the atomic scale). The same cannot be said of artificial intelligence.
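To see why Moore's Law is a forecaster's dream, consider how concrete its prediction is. The sketch below is illustrative only: the two-year doubling period and the 1971 starting point are rounded assumptions, not an exact fit to historical data.

```python
# A minimal sketch of Moore's Law as a concrete, testable prediction.
# Assumptions (rounded, not exact history): transistor counts double
# every 2 years, starting from ~2,300 transistors (Intel 4004, 1971).

def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Transistor count predicted by Moore's Law for a given year."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```

The point is not the exact numbers but that the law names a measurable quantity and a rate, so every new chip either confirms or refutes it. Predictions about human-level AI name neither.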

Common mistakes

Stuart Armstrong's research focused on trends in these forecasts. Specifically, he looked at two major cognitive biases. The first is the idea that AI experts predict AI will arrive (and make them immortal) just before they die. This is the "rapture of the nerds" criticism often leveled at Kurzweil: that his predictions are motivated by a fear of death and a desire for immortality, and are fundamentally irrational. The creator of a superintelligence becomes almost an object of worship. This criticism usually comes from people working in the field of AI, who know firsthand the frustrations and limitations of modern systems.

The second is that people almost always choose a span of 15-20 years. That is close enough to convince people they are working on something that will be revolutionary soon (since people are less drawn to efforts that will pay off only generations later), but not so close that they immediately turn out to be dead wrong. People are happy to predict the arrival of AI before they die, just preferably not tomorrow or next year, but in 15-20 years.

Measuring progress

Armstrong notes that if you want to gauge the validity of a particular prediction, there are many parameters to consider. For example, the idea that human-level intelligence will be developed by simulating the human brain at least gives you a clear framework for measuring progress. Each time we map the brain in more detail, or successfully simulate some part of it, we move toward a specific goal that, presumably, ends in human-level AI. Twenty years may not be enough to reach that goal, but at least progress can be measured scientifically.

Now compare this approach with that of those who say AI, or something conscious, will simply "emerge" once a network is complex enough and has sufficient computing power. Perhaps this is how we imagine human intelligence and consciousness arose over the course of evolution, though evolution took billions of years, not decades. The problem is that we have no empirical evidence: we have never seen consciousness emerge from a complex network. Not only do we not know whether it is possible, we cannot even know when to expect it, because we cannot measure progress along this path.

There is a tremendous difficulty in figuring out which tasks are genuinely hard, and it has haunted us from the birth of AI to this day. Understanding human language, chance, creativity, self-improvement - it is simply impossible to crack them all at once. We have learned to process natural language, but do our computers understand what they are processing? We have built AI that feels "creative," but is there any real creativity in its actions? The exponential self-improvement that would lead to the singularity seems altogether transcendent.

We ourselves do not understand what intelligence is. For example, AI experts have consistently underestimated AI's ability to play Go. In 2015, many thought AI would not master Go until 2027. It took only two years, not twelve. Does this mean AI will write the greatest novel within a few years? Understand the world conceptually? Approach human-level intelligence? Unknown.

Not human, but smarter than people

We may have been looking at the problem the wrong way. The Turing test, for example, has not yet been passed in the sense of an AI convincing a person in conversation that they are talking to a human; but AI's computational ability, along with its capacity to recognize patterns and drive cars, already far exceeds what humans can do. The more decisions are made by "weak" AI algorithms, the more the Internet of Things grows and the more data is fed to neural networks, the greater the impact of this "artificial intelligence" will be.

We may not yet know how to create human-level intelligence, but neither do we know how far the current generation of algorithms can go. So far, they are nowhere near the terrifying algorithms that could undermine the social order or become some vague superintelligence. Nor does that mean we should cling to optimistic forecasts. We will have to make sure that the value of human life, morals, and ethics are always embedded in these algorithms, so that the algorithms themselves are not utterly inhuman.

Any forecast should be taken with a healthy dose of skepticism. Remember that in the early days of AI it seemed success would come very quickly - and we think the same today. Sixty years have passed since scientists gathered at Dartmouth in 1956 to "create intelligence in twenty years," and we are still continuing their work.

Ilya Khel