Google: "Artificial Intelligence Should Complement, Not Replace Humans" - Alternative View


Greg Corrado, the lead programmer of the Google Brain project (which builds machine learning and artificial intelligence systems and integrates them into Google services), told RIA Novosti what artificial intelligence is, why an uprising of self-improving machines does not threaten us, and how humanity will adapt to life in the era of "intelligent machines".

In recent years, Corrado and his colleagues have developed new machine translation systems, taught the Google Images image search engine to recognize cats, dogs and other objects, and created a neural network that improves photo resolution. They are currently working on a voice assistant for Android phones.

Greg, judging by your publications, your scientific career began in neurophysiology, not in the development of artificial intelligence. How did it happen that AI systems became important to you personally and to Google in general?

- Initially, I was interested in completely different things: first I studied physics, and then I moved into brain science. At the time, the brain seemed to me the most interesting physical system, and I wanted to understand how its development led to the emergence of intelligent, self-aware beings.

After working in that field for a while, I realized that studying all forms of intelligence, artificial as well as biological, interested me no less. Besides, it seemed to me at the time that we would be able to advance much faster in artificial intelligence than in the study of the human brain.

With Google, the situation is different - the company has actually been engaged in artificial intelligence since its inception. The very implementation of the search engine, as conceived by the founders of the company, Sergey Brin and Larry Page, requires at least some form of artificial intelligence.

For us, the main task is not to create AI as some kind of product that we could pack in a box and sell, but to use such systems to organize and process information. The amount of information is constantly growing, and the more data we receive, the harder it becomes to make sense of it and to tell what is important from what is not.

The mathematical ideas behind deep neural networks and nonlinear learning methods appeared more than ten years ago, but they have become popular only in the last two or three years. What is the reason for this?


- I can say more: the basic principles of organizing these networks appeared even earlier, in the 1960s, and they were finally formalized mathematically in the 1980s and 1990s. They have hardly changed since. The only reason their renaissance is happening right now is that computers have become noticeably faster in recent years and we can cheaply increase their power.

In the nineties and early 2000s we did not understand this, so neural networks were regarded at the time as an interesting but useless toy. In reality, they simply did not have enough computing resources to do anything useful. When data volumes and computational resources were small, simpler mathematical approaches used them more efficiently than neural networks did.

We first realized that deep networks and machine learning could be useful around 2010 or 2011, and it took us about five years to test this idea and prove that it was true. Now everyone in the industry understands this and is moving in that direction.
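The answer above attributes the renaissance of neural networks to cheap compute rather than new mathematics. As a rough illustration of the underlying idea (a minimal sketch for this article, not anything Google ships), here is a tiny two-layer network in NumPy learning XOR, a function no linear model can represent; the nonlinear hidden layer is what makes it solvable:

```python
import numpy as np

# A minimal two-layer network learning XOR. A purely linear model cannot
# fit this data; the nonlinear (tanh) hidden layer can.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)          # nonlinear hidden layer
    out = sigmoid(h @ W2 + b2)        # prediction in (0, 1)

    # Backward pass: gradient of mean squared error
    d_out = (out - y) * out * (1 - out)       # sigmoid derivative
    d_h = (d_out @ W2.T) * (1 - h ** 2)       # tanh derivative

    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

# Recompute predictions with the final weights
h = np.tanh(X @ W1 + b1)
out = sigmoid(h @ W2 + b2)
predictions = (out > 0.5).astype(int)
```

Exactly this kind of network was well understood decades ago; what changed, as Corrado notes, is that modern hardware makes it cheap to scale the same recipe up by many orders of magnitude.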

Every year our civilization produces more and more data, which will soon be impossible to process without artificial intelligence. Could humanity continue to develop without using such systems, as the opponents of AI believe?

- I don't think artificial intelligence or machine learning is inherently good or bad - it is just a set of new technologies, and how they are used will depend entirely on us. As a society, we must decide what they will be applied to and how - for example, we could abandon them entirely for some ethical or political reason.

I personally think that such a decision would be extremely inefficient, comparable to banning automatic sewing machines and forcing all factories to sew clothes by hand. Instead, we need to understand where their use will bring real benefit and align with our values.

And it seems to me that there are countless areas where the use of both AI and machine learning will bring enormous benefits to society, many of which we still cannot imagine.

Google today uses neural networks in Google Translate, Google Image Search and many other products. Where will the main breakthrough in their application be made in the near future?

- When people talk about the development of artificial intelligence, they often imagine something far more impressive than it really is. You need to understand that we are simply trying to make machines less stupid than they are today and to have them interact with humans in the most natural way possible.

In addition, artificial intelligence allows us to correct the errors of machines and make them learn from those errors. There is a kind of dialogue between man and machine.

I hope that in the next ten years we will have a system, possibly running on phones and other mobile devices, that interacts dynamically with its owner and takes the owner's personality and specific wishes into account.

For example, you ask such an assistant to help you reach a point on the map. When it offers you a route, you can ask it to route you closer to the park or express other wishes, and the system will understand and comply. We want communication with such systems to feel as natural as possible.

In recent years, scientists have increasingly talked about creating self-improving artificial intelligence systems, which many serious scientists, such as Stephen Hawking, consider very dangerous for humans. Is that really so?

- In fact, self-improving artificial intelligence has not yet been created. From a technical point of view, it is possible that someday such systems will appear, but they do not exist yet, and they are unlikely to appear in the foreseeable future.

For example, today there are robots that work on assembly lines and assemble cars and other complex technical devices. Those robots were themselves assembled on the same lines, with the participation of similar robots, but that does not mean the robots improved themselves. The same is true of machine learning technologies - we use them to analyze data, including within other artificial intelligence systems, but they still cannot improve themselves without human creativity.

The AlphaGo system, for example, is capable of a very limited form of self-learning, but it cannot radically change its own architecture and remake itself into something fundamentally different - say, adapted to solving other problems. What exists today is only a small part of what is needed to create self-learning systems, and the rest of the way has not yet been traveled.
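The "limited self-learning" Corrado describes can be illustrated with a system far simpler than AlphaGo (a toy analogy for this article, not AlphaGo's actual method): an epsilon-greedy bandit agent improves its play from experience, yet its learning rule and architecture stay exactly as the programmer fixed them - it cannot redesign itself:

```python
import random

# Toy "limited self-learning": the agent improves its estimates of three
# slot-machine arms by trial and error, but its learning rule and
# architecture are fixed by the programmer and never change.
random.seed(42)

true_payouts = [0.2, 0.5, 0.8]    # hidden average reward of each arm
estimates = [0.0, 0.0, 0.0]       # the agent's learned value estimates
counts = [0, 0, 0]                # how often each arm was pulled
epsilon = 0.1                     # fraction of exploratory moves

for step in range(5000):
    if random.random() < epsilon:
        arm = random.randrange(3)               # explore a random arm
    else:
        arm = estimates.index(max(estimates))   # exploit the best-known arm
    reward = 1 if random.random() < true_payouts[arm] else 0
    counts[arm] += 1
    # Incremental average: nudge the estimate toward the observed reward
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

best_arm = estimates.index(max(estimates))
```

After enough plays the agent reliably identifies the best arm, but everything it can ever "learn" is confined to updating three numbers - which is the distinction Corrado draws between systems that learn within a fixed design and systems that could improve the design itself.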

That is why I personally think that creating artificial intelligence today resembles art more than engineering or science, and the leaders in our industry often surprise themselves and their colleagues with unexpected discoveries and findings. In general, we can say that today the creation of AI remains the exclusive prerogative of humans.

Can such systems be used to solve the most unusual tasks - for example, as a "brain" for rovers or probes that would independently study distant planets and worlds?

- In principle, this is quite possible, and it seems to me that in the future artificial intelligence will be used to conduct research in space or on the ocean floor, where direct control of technology is impossible for one reason or another. AI systems will help such rovers or robots solve some routine tasks and avoid dangers, but real scientific research will still be carried out by humans.

Only a person can set specific scientific tasks and determine how to solve them. We can tell a robot, "Come closer to this crater and try to find white rocks in it," but the robot itself cannot, in principle, understand why those rocks interest us. Humans are very good at setting tasks and finding ways to solve them; machines do not yet have that ability. Here robots can only be assistants, not researchers.

This question is directly related to the fact that many people do not understand what artificial intelligence is and believe that scientists are creating a complete analogue of a human being, with a mind, feelings and everything else. Why is this happening?

- It seems to me that this is a consequence of people not understanding why artificial intelligence exists at all. They believe that we are creating machines like humans. I completely reject this: we create AI not to replace people but to complement their capabilities.

For example, I use a computer every day, and there are things that both of us can do on our own, such as adding or multiplying numbers. The computer does them faster, and I value it for that too. So I can "divide the labor," handing the calculations to the computer and getting more done than I could without its help. That, it seems to me, is the essence of artificial intelligence.

The idea of creating a complete analogue of a person is rooted, it seems to me, in science fiction or philosophy. Philosophers have long wondered whether we can create machines that are like humans, but that question has nothing to do with whether such "artificial people" would be useful to us, or whether they are possible at all.

Therefore, from the point of view of practice, it is better to create intelligence that will complement our own mind and expand its capabilities, and not copy it.

Already today there are systems, such as cars with an autopilot, in which human life depends on the actions of artificial intelligence. Do we need fundamentally new legal norms to define who is responsible for the actions of AI?

- This is a very important question. Let me emphasize right away that I am not a lawyer, but it seems to me that there have already been examples of similar problems in the past. For example, for many centuries there have been laws that determine who is responsible for the fact that someone's horse kicked a stranger, a dog attacked a stranger, a machine broke down and injured a worker, or some other incident occurred.

We, it seems to me, need to use this experience when determining responsibility for the behavior of artificial intelligence systems. Accordingly, it remains only to discuss how we can adapt these principles to determine how we will interact with autonomous machines and other devices of this type.

This discussion, I am more than sure, will be heated and lengthy, but it seems to me that it will not require introducing any completely new legal norms and principles. Of course, some people suggest treating AI systems as "personalities" and so on, but to me personally that seems very strange and incomprehensible.

Futurologists and some of your colleagues predict that the further development of AI will leave many people in skilled but routine jobs - accountants, drivers, officials - out of work. Does this problem exist, and how can it be solved?

- Of course, this problem exists. My parents and grandparents were accountants: my grandmother started out with pen and paper and ended with calculators, and my mother moved from calculators to spreadsheets on a computer. These changes radically transformed how accountants work, but the essence of the profession did not change - accountants did not disappear; they simply began to work faster and better.

It seems to me that something similar will happen with artificial intelligence. The content of many professions will change while their essence remains the same; some may even disappear, but that fate will befall only a small share of them.

In addition, something may happen similar to what happened with web design in the late 1990s - before the advent and spread of the Internet, no one could have imagined that such a job would exist. On the other hand, it is foolish and wrong to take this issue lightly and assume that nothing will change. You need to understand that these technologies will change the world, and that you will need to adapt to them and help others adapt as well.
