What's Going On With Artificial Intelligence? Analysis Of 16,625 Works Over The Past 25 Years - Alternative View


Pretty much everything you hear about artificial intelligence today comes from deep learning. This category of algorithms uses statistics to find patterns in data, and it has proven extremely powerful at mimicking human skills such as seeing and hearing. To a very limited extent, it can even imitate our ability to reason. These algorithms power Google Search, Facebook's news feed, and Netflix's recommendation engine, and they are shaping industries such as healthcare and education.

How deep learning develops

Although deep learning is almost single-handedly responsible for bringing artificial intelligence into the public eye, it represents only a small blip in humanity's historic quest to reproduce its own intelligence. It has been at the forefront of that quest for less than a decade. Set against the full history of the field, it is easy to see that it, too, may soon fade from view.

The sudden rise and fall of different techniques has long characterized AI research, says computer scientist Pedro Domingos. Every decade has seen fierce competition between different ideas. Then, every once in a while, a switch flips and the whole community starts doing one thing.

Our colleagues at MIT Technology Review wanted to visualize these fits and starts. To do so, they turned to arXiv, one of the largest databases of open scientific papers. They downloaded the abstracts of all 16,625 papers available in the artificial-intelligence section through November 18, 2018, and tracked the words mentioned over the years to see how the field has evolved.
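The kind of analysis described above can be sketched in a few lines: count how often key terms appear in abstracts per year and compare the trends. The abstracts below are invented placeholders for illustration only; the actual dataset is not reproduced here.

```python
# A minimal sketch of term-frequency tracking over years, assuming
# (year, abstract) pairs stand in for the downloaded arXiv abstracts.

from collections import Counter, defaultdict
import re

papers = [
    (1998, "A rule-based system with logic constraints for planning."),
    (2005, "Learning network performance from labeled data."),
    (2013, "A deep neural network trained on large-scale data."),
    (2017, "Deep reinforcement learning with a reward network."),
]

# Terms the article tracks across the decades
TERMS = {"logic", "constraints", "rule", "data",
         "network", "performance", "reinforcement"}

mentions = defaultdict(Counter)  # year -> term -> count
for year, abstract in papers:
    for word in re.findall(r"[a-z]+", abstract.lower()):
        if word in TERMS:
            mentions[year][word] += 1

for year in sorted(mentions):
    print(year, dict(mentions[year]))
```

On real data, plotting each term's yearly count is what exposes the shifts between ideological camps.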

Their analysis revealed three major trends: a shift toward machine learning in the late 1990s and early 2000s, a surge in the popularity of neural networks beginning in the early 2010s, and the growth of reinforcement learning in the past few years.


But first, a few caveats. First, arXiv's AI section dates back only to 1993, while the term "artificial intelligence" dates to the 1950s, so the database covers only the latest chapters of the field's history. Second, the papers added to the database each year represent only a fraction of the work being done in the field at any given moment. Still, arXiv offers an excellent resource for identifying some of the major research trends and for watching the tug of war between different ideological camps.

The machine learning paradigm

The biggest shift the researchers found was the move away from knowledge-based systems by the early 2000s. Such computer systems are built on the idea that all human knowledge can be encoded as a system of rules. In their place, scientists turned to machine learning, the umbrella category of algorithms that includes deep learning.

Among the 100 most frequently mentioned words, those associated with knowledge-based systems - "logic", "constraints" and "rule" - declined the most, while those related to machine learning - "data", "network", "performance" - grew the most.

The reason for this sea change is simple. In the 1980s, knowledge-based systems enjoyed great popularity, fueled by excitement around ambitious projects that tried to recreate common sense in machines. But as those projects unfolded, researchers ran into a major problem: too many rules had to be encoded for a system to do anything useful. That drove up costs and significantly slowed progress.

Machine learning became the answer to this problem. Instead of requiring people to manually encode hundreds of thousands of rules, this approach programs machines to extract those rules automatically from piles of data. Just like that, the field abandoned knowledge-based systems and turned to refining machine learning.
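The contrast can be made concrete with a toy example. Below, a hand-written rule stands in for the knowledge-based approach, while a "decision stump" learner stands in for machine learning: it finds the separating rule itself from labeled examples. All names and data here are invented for illustration.

```python
# Knowledge-based approach: an expert writes the rule by hand.
def hand_coded_rule(x):
    return 1 if x >= 5.0 else 0

# Machine-learning approach: search for the threshold rule that
# best fits the labeled data, instead of writing it by hand.
def learn_stump(xs, ys):
    best_t, best_err = None, float("inf")
    for t in sorted(set(xs)):  # candidate thresholds
        err = sum(1 for x, y in zip(xs, ys)
                  if (1 if x >= t else 0) != y)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

xs = [1.0, 2.0, 3.0, 6.0, 7.0, 8.0]   # feature values
ys = [0, 0, 0, 1, 1, 1]               # labels
t = learn_stump(xs, ys)
print(t)  # threshold recovered from the data alone
```

A real system learns hundreds of thousands of such rules jointly, but the principle is the same: the rules come out of the data, not out of an expert's head.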

The boom of neural networks

Within the new machine-learning paradigm, the transition to deep learning did not happen overnight. Instead, the analysis of key terms shows that scientists tested many methods besides neural networks, the core mechanism of deep learning. Other popular techniques included Bayesian networks, support vector machines, and evolutionary algorithms, all of which take different approaches to finding patterns in data.

Throughout the 1990s and 2000s, these methods competed fiercely. Then, in 2012, a dramatic breakthrough triggered another sea change. In the annual ImageNet competition, designed to spur progress in computer vision, a researcher named Geoffrey Hinton and his colleagues at the University of Toronto achieved the best image-recognition accuracy by a margin of more than 10 percentage points.

The deep learning technique he used spawned a new wave of research, first within the computer-vision community and then beyond. As more and more scientists began using it to achieve impressive results, its popularity, along with that of neural networks, skyrocketed.

The growth of reinforcement learning

The analysis showed that, a few years into deep learning's heyday, a third and final shift took place in AI research.

Beyond the various techniques of machine learning, there are three different paradigms: supervised, unsupervised, and reinforcement learning. Supervised learning, which involves feeding a machine labeled data, is the most commonly used and has by far the most practical applications today. In the past few years, however, reinforcement learning, which mimics the way animals learn through carrots and sticks, punishments and rewards, has seen a rapid rise in mentions in paper abstracts.
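The carrot-and-stick idea can be sketched with tabular Q-learning, one of the classic reinforcement-learning algorithms, on a toy corridor world. The environment, reward values, and hyperparameters below are invented for illustration, not taken from any system mentioned in the article.

```python
import random

# Toy corridor: states 0..4. Reaching state 4 gives +1 (carrot),
# state 0 gives -1 (stick). The agent starts in the middle and
# learns action values purely from these rewards.
N_STATES = 5
ACTIONS = [-1, +1]          # step left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):                      # training episodes
    s = 2
    while 0 < s < N_STATES - 1:
        # epsilon-greedy: mostly exploit, occasionally explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = s + a
        r = 1.0 if s2 == N_STATES - 1 else (-1.0 if s2 == 0 else 0.0)
        # terminal states carry no future value
        future = 0.0 if s2 in (0, N_STATES - 1) else max(
            Q[(s2, b)] for b in ACTIONS)
        # Q-learning update: nudge toward reward plus discounted future
        Q[(s, a)] += ALPHA * (r + GAMMA * future - Q[(s, a)])
        s = s2

# The greedy policy after training: best action per interior state
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(1, N_STATES - 1)}
print(policy)
```

After training, the learned policy moves right from every interior state, toward the reward; nothing told the agent that directly, only the rewards and punishments did.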

The idea itself is not new, but for many decades it didn't really work. "The supervised-learning specialists laughed at the reinforcement-learning specialists," says Domingos. But as with deep learning, one turning point suddenly brought the method to the fore.

That moment came in October 2015, when DeepMind's AlphaGo, trained with reinforcement learning, defeated the world champion in the ancient game of Go. The effect on the research community was immediate.

The next ten years

The MIT Technology Review analysis provides only the most recent snapshot of the competition among ideas that characterizes AI research. But it illustrates the fickleness of the quest to duplicate intelligence. "It is important to understand that no one knows how to solve this problem," says Domingos.

Many of the techniques used over the past 25 years originated around the same time, in the 1950s, and have fallen in and out of favor with the challenges and successes of each decade. Neural networks, for example, peaked in the 1960s, revived briefly in the 1980s, and had nearly died out before regaining their popularity through deep learning.

Each decade, in other words, has been dominated by a different technique: neural networks in the late 1950s and 1960s, various symbolic approaches in the 1970s, knowledge-based systems in the 1980s, Bayesian networks in the 1990s, support vector machines in the 2000s, and neural networks again in the 2010s.

The 2020s will be no different, says Domingos, which means the era of deep learning may soon be over. But what comes next - an old technique returning to glory or an entirely new paradigm - is the subject of fierce debate in the community.

Ilya Khel