Real Artificial Intelligence Can Be Created By Solving Three Main Problems


At a conference on deep machine learning in London last month, one topic came up again and again: the importance of understanding what we are really doing. While companies such as Google keep claiming that we are all living in the "first century of AI," when machine learning is only beginning to conquer new fields (such as speech and image recognition), those who actually stand at the forefront of AI research are keen to emphasize that many more challenges must be solved before the true age of AI arrives. Even if we already have digital assistants that talk like the computers in sci-fi movies, that does not mean we are meaningfully close to creating real artificial intelligence.

In the end, the problems standing in the way of real AI come down to three things: the amount of information that has to be fed into such systems; our inability to build an AI that copes equally well with several tasks at once; and the fact that we have no real idea how such systems should actually work. Machine learning technologies can already do wonderful things in 2016, but those things can be hard to explain, even for their creators, to say nothing of how much it all costs. Let's take a closer look at the difficulties AI engineers face today.

Information first, then AI

We all understand perfectly well that an AI needs access to information to learn about the world around it, but we do not quite understand how much information that takes. According to Neil Lawrence, professor of machine learning at the University of Sheffield and a member of Amazon's AI technology team, these systems will need hundreds or even thousands of times more information than humans do in order to learn to understand the world and recognize particular objects.

“If you look at all the industries and areas where engineers have had some success with deep machine learning, you can see right away how much information was used to solve those problems,” says Lawrence, citing speech and image recognition technologies as examples.

Companies like Google and Facebook have access to mountains of information, which naturally makes it easier to build all sorts of useful tools (Android's voice search, for example).

For Lawrence, information today is what coal was in the early years of the industrial revolution. As an example he cites Thomas Newcomen, an Englishman who in 1712 (some 60 years before James Watt built his famous engine) created a primitive, coal-fired version of the steam engine. Newcomen's invention was not perfect: compared with Watt's engine it was inefficient and too expensive to run. For the most part it could only be used in coal mines, where the abundance of fuel made up for the machine's shortcomings.

An example of Facebook's open image recognition technology


Lawrence believes there may be hundreds of such Newcomens around the world developing their own machine learning models. Some of their ideas may be truly revolutionary, but without access to huge databases of information, most likely no one will ever know. Big companies like Google, Facebook and Microsoft are the modern "coal miners": they have access to practically unlimited information, so they can afford to build inefficient machine learning systems and improve them later. Small startups may well have great ideas, but they will never achieve anything worthwhile without access to data.

The problem becomes even clearer when you look at areas where the necessary information is much harder to obtain. Take healthcare, where AI could handle machine-vision tasks such as finding and recognizing malignant tumors on X-rays. Access to such data is usually very limited; the main barrier, according to Lawrence, is the current perception that it is unethical to give third parties access to this kind of information. The real challenge, he argues, is not finding ways to disseminate information but making machine learning systems more efficient and teaching them to work with less data. And those improvements in efficiency, the scientist believes, could take the same 60 years they took with Watt's engine.

Specialization is a dead end. AI must be able to multitask

Another key challenge for truly deep machine learning is the fact that all our current AI systems are, at bottom, very dumb. According to Raia Hadsell, a research scientist at Google DeepMind, these systems can be taught to recognize cats or to play a particular game, and can become very effective at that single task. But "at the moment there is not a single full-fledged neural network in the world, nor any method of training one, that can recognize images, play Space Invaders and listen to music." And neural networks are the key building block of deep learning systems.

And this problem is much more significant than it might seem at first glance. When DeepMind announced last February that it had built a system capable of playing 49 Atari games, it could rightly be seen as a big achievement. But it turned out that once the system finished learning one game, it had to be retrained from scratch to play the next. Hadsell notes that we cannot teach a system all the games at once, because the rules of each would mix together and interfere with one another. The machine has to be retrained every time, and every time the system "forgets" how to play the previous game.
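This retraining effect, often called catastrophic forgetting, is easy to reproduce at toy scale. Below is a minimal sketch, assuming only NumPy: a tiny logistic-regression "network" and two invented tasks (nothing like DeepMind's actual Atari setup) show how sequentially training the same weights on task B erases what was learned on task A.

```python
# Minimal catastrophic-forgetting demo: one set of weights, two tasks.
# The tasks and the tiny model are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def make_task(direction):
    # Binary classification: label = which side of a hyperplane a point is on.
    X = rng.normal(size=(500, 2))
    y = (X @ direction > 0).astype(float)
    return X, y

def train(w, X, y, steps=2000, lr=0.1):
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)     # gradient step on log-loss
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0) == (y > 0.5)).mean())

task_a = make_task(np.array([1.0, 0.0]))     # task A: split on the x-axis
task_b = make_task(np.array([0.0, 1.0]))     # task B: split on the y-axis

w = np.zeros(2)
w = train(w, *task_a)
print("task A after training on A:", accuracy(w, *task_a))  # close to 1.0
w = train(w, *task_b)                        # sequential training, no replay
print("task A after training on B:", accuracy(w, *task_a))  # drops toward chance
```

The same weights that encoded task A are overwritten by the gradients of task B, which is exactly why DeepMind's system had to relearn each game from scratch.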

“In order to create general artificial intelligence, we need something that will help us teach a machine to perform several tasks at once. Now we can't even train them to play games,” says Hadsell.

The solution may lie in so-called progressive neural networks, in which independent deep learning systems are combined into a single whole to work with information more efficiently. In a published scientific paper on the issue, Hadsell and her team of researchers described how their progressive neural network adapted to the game Pong under conditions that differed slightly each time (in one case the colors were changed; in another the controls were inverted) much faster than an "ordinary" neural network, which had to be trained anew every time.

The basic principle of a progressive neural network
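In code, the core idea looks roughly like this. The sketch below is a bare-bones forward pass, assuming two single-hidden-layer columns; the layer sizes, names, and the single lateral adapter `U_12` are illustrative simplifications, not the architecture from DeepMind's paper.

```python
# Rough forward-pass sketch of a progressive network with two columns.
import numpy as np

rng = np.random.default_rng(1)
relu = lambda x: np.maximum(x, 0.0)

D, H, OUT = 4, 8, 2                     # input, hidden, output sizes

# Column 1: trained on the first task, then frozen.
W1_h = rng.normal(size=(D, H))
W1_o = rng.normal(size=(H, OUT))

# Column 2: fresh weights for the second task, plus a lateral adapter U_12
# that feeds column 1's hidden features into column 2's output layer.
W2_h = rng.normal(size=(D, H))
W2_o = rng.normal(size=(H, OUT))
U_12 = rng.normal(size=(H, OUT))        # lateral connection: the key idea

def forward_task2(x):
    h1 = relu(x @ W1_h)                 # frozen features from task 1
    h2 = relu(x @ W2_h)                 # new features for task 2
    # Task-2 output reuses task-1 knowledge without overwriting it:
    return h2 @ W2_o + h1 @ U_12

x = rng.normal(size=(1, D))
print(forward_task2(x))                 # only W2_h, W2_o, U_12 would be trained
```

Because column 1 is frozen, learning the second task cannot erase the first one; the price is that every new task adds a new column, which is the scaling limit discussed next.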


The method has proven very promising and was recently used to tune robotic arms, cutting their training time from a week to a single day. Unfortunately, it has its limits. Hadsell notes that with progressive neural networks the learning process cannot be reduced to simply adding new tasks to their memory: keep bolting such systems together and sooner or later you end up with "a model too complex to trace." And that is a challenge of a different order, one at which very different tasks would have to be handled in essentially the same way. Building an AI that can design chairs is not the same as building a human-level intelligence that can write poems and solve differential equations.

AI can only be called AI if we can show how it works

Another daunting obstacle is understanding how an artificial intelligence arrives at its conclusions. Neural networks are usually opaque to the observer: even though we know how they are constructed and how information flows through them, the decisions they make usually defy explanation.

An excellent example of this problem is an experiment at Virginia Tech. Researchers built a tracking system that records which pixels of a digital image a neural network starts analyzing first. They showed the network pictures of a bedroom and asked it: "What is hanging on the windows?" Instead of looking straight at the windows, the machine began analyzing the image from the floor up. A bed came into its field of view, and the machine answered: "There are curtains on the windows." The answer was correct, but only because the system had been trained on a limited set of data. From the picture it was shown, the network concluded that if a photo depicts a bedroom, there are most likely curtains on the windows. So as soon as it spotted a detail found in almost any bedroom (in this case, a bed), it stopped analyzing the image. It may never even have seen the curtains; it saw the bed. Logical, but superficial and far-fetched. Besides, many bedrooms don't have curtains!
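The article does not describe how the Virginia Tech tracking system works internally, so the sketch below shows a common alternative way to ask the same question: occlusion-based saliency. A patch is slid across the image, and the drop in the model's answer score marks the pixels the model actually relied on. The `model` function here is a stand-in invented for the example, not a trained network.

```python
# Occlusion-based saliency: mask regions and measure the score drop.
import numpy as np

def model(image):
    # Stand-in "curtain score"; a real setup would query a trained
    # network's confidence for the answer being probed.
    return float(image[5:15, 5:15].mean())

image = np.random.default_rng(2).random((20, 20))
base = model(image)
saliency = np.zeros_like(image)

patch = 4
for i in range(0, 20 - patch + 1):
    for j in range(0, 20 - patch + 1):
        occluded = image.copy()
        occluded[i:i + patch, j:j + patch] = 0.0  # mask a small region
        drop = base - model(occluded)             # score drop = importance
        saliency[i:i + patch, j:j + patch] += drop

# High values mark the pixels the model actually used -- which is how one
# could discover that "curtains" was answered by looking at the bed.
print(np.unravel_index(saliency.argmax(), saliency.shape))
```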

Tracking technology is just one tool that may help us understand what leads a machine to a particular decision, but there are better approaches that could add more logic and deeper analysis to machine learning systems. Murray Shanahan, professor of cognitive robotics at Imperial College London, believes the best solution is to revive an old-fashioned AI paradigm: symbolic AI, or GOFAI (Good Old-Fashioned Artificial Intelligence). Its premise is that absolutely any task can be broken down into basic logical elements, in which each word is just an elaborate set of simple symbols. By combining these symbols into actions, events, objects and so on, thinking can be synthesized. Just consider that this work was done in the days when computers were room-sized boxes running on magnetic tape (it began in the mid-1950s and continued until the late 1980s).
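A toy example makes the GOFAI premise concrete. In the sketch below, a scene is reduced to atomic symbols and if-then rules, and new conclusions are derived by forward chaining; the facts and rules are invented to echo the bedroom example above.

```python
# Toy GOFAI-style reasoning: symbols plus rules, forward chaining.
facts = {"bed_in_room", "window_in_room"}
rules = [
    ({"bed_in_room"}, "room_is_bedroom"),
    ({"room_is_bedroom", "window_in_room"}, "curtains_on_window"),
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:                       # repeat until no rule fires
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# {'bed_in_room', 'window_in_room', 'room_is_bedroom', 'curtains_on_window'}
```

Unlike the neural network in the Virginia Tech experiment, every conclusion here can be traced back to the exact rules and facts that produced it, which is the transparency Shanahan wants to recover.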

Shanahan's proposal is to combine GOFAI's symbolic descriptions with deep learning technologies. Instead of merely feeding such systems information and waiting for them to derive patterns of behavior and problem-solving from it, Shanahan's approach would give them built-in starting points for understanding the world. This, in his opinion, would address not only the AI transparency problem but also the transfer learning problem Hadsell describes.

“It is fair to say that Breakout is very similar to Pong, because both use paddles and balls, but from the point of view of human perception and logic they are two completely different games. It is virtually impossible to draw parallels between them. It's like trying to compare the structure of the atom with the structure of the entire solar system.”
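A rough sketch of the hybrid idea as described above, assuming two stand-in stages: a learned perception function that maps raw pixels to discrete symbols, and a symbolic policy that reasons over them. Everything here (the grid size, the symbols, the rules) is invented for illustration; the point is that rules over symbols like `("ball", "center")` could carry over between visually different games such as Pong and Breakout.

```python
# Hybrid sketch: neural-style perception emits symbols, rules decide.
import numpy as np

def perceive(frame):
    # Stand-in for a deep network: map raw pixels to discrete symbols.
    symbols = set()
    if frame[:, :2].max() > 0.5:
        symbols.add(("paddle", "left"))
    if frame[8:12, 8:12].max() > 0.5:
        symbols.add(("ball", "center"))
    return symbols

def decide(symbols):
    # Symbolic policy: rules over symbols, not pixels, so the same rules
    # could in principle transfer to a visually different game.
    if ("ball", "center") in symbols and ("paddle", "left") in symbols:
        return "move_right"
    return "stay"

frame = np.zeros((16, 16))
frame[0, 0] = 1.0        # paddle pixel on the left edge
frame[9, 9] = 1.0        # ball pixel near the center
print(decide(perceive(frame)))   # -> "move_right"
```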

Shanahan and his colleagues at Imperial College London are currently working on a new machine learning method (which they call deep symbolic reinforcement learning) and have published a few small experiments. The method is still in its infancy, so it is hard to say whether it will scale to larger systems working with different kinds of data. Still, there is a chance it will grow into something more. After all, deep learning was itself a neglected corner of AI development until researchers gained fast access to data and massive processing power. It may well be time to return to old AI paradigms and try them in a new environment.

NIKOLAY KHIZHNYAK