Scientist: There Are No Fundamental Restrictions On The Creation Of An Intelligent Machine - Alternative View

Mikhail Burtsev, head of the Laboratory of Neural Systems and Deep Learning at the Moscow Institute of Physics and Technology (MIPT), discusses whether it is possible to create a full-fledged computer analogue of the human mind, why scientists are developing one, and how to protect it from troll attacks.

At the end of July, scientists from MIPT launched an international competition for conversational artificial intelligence systems capable of imitating a living person, and invited everyone to talk to them and rate the resulting dialogues. With the help of volunteers, the scientists hope to create, within the next three years, a voice assistant that can communicate with a person almost as well as a living interlocutor.

The competition was co-organized by researchers from the University of Montreal, McGill University, and Carnegie Mellon University. You can take part in testing the dialogue systems by following the link.

In fact, these ideas are not new: the modern voice assistants from Google, Apple, Amazon and other IT companies have roots reaching back to the very beginning of the computer era. The first such talking program, named ELIZA, was created in 1966 and was, in essence, a joke: a parody of a psychotherapist giving useless advice to a patient.

In the following years and decades, programmers created ever more complex and lifelike systems for communicating with a computer. The most advanced of them can recognize the user's mood, remember past requests and preferences, and take over some routine everyday tasks: ordering food or goods from a store, or playing the role of a call-center operator.

Mikhail, about 50 years have passed since ELIZA was created. What has changed in that time, and can we, in principle, expect that scientists will someday build a system that people cannot distinguish from a living interlocutor?

- I think that in the near future it will be possible to create conversational intelligence technology that will allow machines to approach the level of human dialogue. We are working on this task within the iPavlov project, which is part of the National Technology Initiative.

Communicating with an automatic dialogue system should be as comfortable for the user as communicating with a living person. This will make it possible to build information systems that better understand what a person wants from them and respond in natural language.


Conversational intelligence can be used to automate many voice and text interfaces, including messengers such as Telegram. Statistics show that messengers are now used more actively than social networks, and a very large amount of information passes through text communication channels.

For example, they are convenient to use in transit, and adding an interactive assistant - a chatbot - will let users not only communicate with each other but also receive the information they need, make purchases, and much more. As a result, messengers and voice assistants will gradually replace familiar web pages and applications, and will also play the role of online consultants and call-center specialists.

Given the presence of Apple, Google, and Amazon in this market, can Russia compete here? Is there anything specific about the Russian language that could hinder potential competitors of Russian companies and scientists?

- Russian is indeed more complex, and some of the methods used today around the world to develop dialogue systems and voice assistants cannot be applied without refinement and significant modification to handle its richer grammar.

On the other hand, the basic algorithms behind Siri, Cortana, the Google Assistant and other digital assistants are not secret - they are available to us at least at the level of research and concepts. Research papers and program code are often publicly available, and in principle they can be adapted to Russian.

Mikhail Burtsev, Head of the Laboratory of Neural Systems and Deep Learning, MIPT

That said, there have been few attempts to implement this at an "industrial" level. The only major project is being conducted by Yandex, which is developing the Alice assistant.

In our project we are trying to create tools that would simplify and speed up the creation of such "industrial" dialogue systems for a variety of purposes. But developing a universal voice assistant capable of solving any problem is extremely difficult even for large companies.

On the other hand, automating a small business with a specialized dialogue system is much easier. We hope that the tools we create will help entrepreneurs and programmers solve such problems quickly, without deep expertise or extraordinary effort.

Some scientists, such as Roger Penrose and Stuart Hameroff, believe that the human mind is quantum in nature and that a machine analogue of it is impossible in principle. Do you agree with them?

- In my opinion, given what we know today about the structure of the brain and the nature of human consciousness, there are no fundamental obstacles to reproducing its work on a computer.

Penrose and Hameroff have a set of hypotheses that, in their view, explain why this cannot be done. So far, neurophysiologists have found no experimental evidence that these hypotheses are correct, and our current knowledge points the other way.

Another question is the time frame: when such a machine will be created remains uncertain. It seems to me this could take at least 50, or even 100, years.

Will this require fundamentally new technologies and computers that are closer in their principles of operation to neurons than to digital logic?

- If we believe that human intelligence is based on some form of computation, then any universal computing system equivalent to a Turing machine can, in theory, emulate the work of the human brain.
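The notion of universality can be made concrete with a toy simulator: any ordinary computer can execute a program like the one below, which is what "equivalent to a Turing machine" means in practice. This is a minimal sketch, not anything from the iPavlov project; the example machine, which inverts a binary string, is invented for illustration.

```python
# Minimal Turing machine simulator. The transition table maps
# (state, symbol) -> (symbol_to_write, head_move, next_state).
def run_tm(transitions, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = transitions[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    # Read the tape back as a string, skipping blank cells.
    return "".join(cells[i] for i in sorted(cells) if cells[i] != blank)

# Example machine: walk right, inverting each bit, halt on the blank.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm(invert, "10110"))  # -> 01001
```

Swapping in a different transition table yields a different machine; the simulator itself never changes, which is the essence of a universal computer.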

Another matter is that such a machine might run very slowly, which would make it useless from a practical point of view. Today it is hard to guess what computer-building technologies we will need here.

What other tasks can digital assistants solve besides the things they do today? Can they be used to decipher texts in dead languages, or ciphers such as the Voynich manuscript?

- As far as I know, no one has yet tried to use neural networks to uncover the secrets of dead languages and decipher such texts, but it seems to me someone will try in the near future. We, for our part, have not yet looked into such things.

"Helper" is actually a very broad concept that can include many very different things. If we take, for example, the same ELIZA, the virtual “psychotherapist”, the question arises: is she an assistant or not?

Dialogue systems can be used not only for solving practical problems, but also in order to entertain people or maintain their mood. The question here, in fact, is what we mean by the concept of a personal assistant and how broad or narrow it is. In the broadest sense, such systems can solve all issues related to communication, albeit with varying degrees of success.

Conversational interfaces, in addition to direct communication with people, can also be used to teach machines to quickly find a common language and transfer information from one system to another.

This would sidestep the problem of establishing links and transferring data between existing and future services, since they would not need to know each other's API specifications to communicate. They could exchange data in natural language, or in an artificial language invented - by machines or by humans - for that purpose.

Roughly speaking, even systems "unfamiliar" with each other would be able to come to an agreement using a shared language of communication rather than fixed rules for exchanging information. If they do not understand something, they can ask each other about it, which would make the entire infrastructure of online services incredibly flexible and let it integrate new services quickly without human help.
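The idea of asking about unknown things instead of failing on a rigid API can be sketched in a few lines. Everything below - the `Service` class, its vocabulary, and the field names - is invented for illustration; real systems of this kind would use far richer language understanding.

```python
# Toy illustration: a service that, instead of rejecting a request
# with an unknown field, asks the sender what the field means.

class Service:
    def __init__(self, vocabulary):
        # vocabulary maps terms this service understands
        # to its internal field names.
        self.vocabulary = dict(vocabulary)

    def handle(self, message):
        unknown = [term for term in message if term not in self.vocabulary]
        if unknown:
            # No shared spec: ask for clarification in "natural language".
            return {"question": f"what do you mean by '{unknown[0]}'?"}
        return {self.vocabulary[t]: v for t, v in message.items()}

weather = Service({"city": "location", "day": "date"})

# The sender's first attempt uses a term the receiver does not know...
print(weather.handle({"town": "Moscow", "day": "Friday"}))
# -> {'question': "what do you mean by 'town'?"}

# ...so it rephrases, and the exchange succeeds.
print(weather.handle({"city": "Moscow", "day": "Friday"}))
# -> {'location': 'Moscow', 'date': 'Friday'}
```

The contrast with a fixed API is that the failure mode is a question, not an error, so the two sides can converge on a working protocol without a human updating either one.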

This raises a question: who should be responsible for the recommendations of the "psychotherapist" ELIZA, computer doctors, and other voice assistants whose advice can greatly affect a person's well-being and health?

- This is a very difficult question, since today there are no clear criteria for how to act in such cases. Many Internet services that issue recommendations to users begin working only after the user agrees to the terms of service and accepts the consequences that may arise from using them.

It seems to me that chatbots and voice assistants - at least in the early stages of their existence - could be regulated in a similar way. For example, if a bot simply searches for and analyzes information, acting much like a search engine, the same rules can apply to it. If it gives medical or legal advice, the form of responsibility should be different.

For example, such systems should clearly notify the user of the consequences of choosing between artificial intelligence and an ordinary doctor. A person would then have a choice: trust a doctor who is wrong in, say, 10% of cases, or bet on a machine that gives a wrong answer in 3% of cases. In the first case the doctor is responsible for an error; in the second, the user.

Last year, Microsoft launched the Tay chatbot, which it had to shut down literally a day later because Internet users turned the teenage-girl persona into an outright racist. Is it possible to protect such dialogue systems from trolls and pranksters?

- It seems to me that such protection is possible, but whether it is worth doing depends on the purpose of the system. Clearly, if the system must not produce certain kinds of remarks - rude or extremist ones - then we can filter its responses. This filtering can happen either at the training stage or during response generation.
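Generation-time filtering, the second option mentioned above, can be as simple as screening candidate responses before one is shown to the user. A minimal sketch, with an invented blocklist and invented placeholder responses (real moderation would use learned classifiers rather than a handful of patterns):

```python
# Output-side filtering: drop candidate responses that match a
# blocklist, and fall back to a neutral reply if none survive.
import re

BLOCKLIST = [r"\bidiot\b", r"\bstupid\b"]  # placeholder patterns

def filter_responses(candidates, fallback="Sorry, let's change the subject."):
    safe = [
        c for c in candidates
        if not any(re.search(p, c, re.IGNORECASE) for p in BLOCKLIST)
    ]
    # If every candidate was filtered out, return the neutral fallback.
    return safe[0] if safe else fallback

print(filter_responses(["You idiot!", "I see your point."]))
# -> I see your point.
```

Training-stage filtering applies the same idea one step earlier: offensive examples are removed from the data before the system ever learns from them.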

Incidentally, a related task - assessing the quality of a dialogue - was tackled by teams at the DeepHack Turing school-hackathon held at MIPT a few weeks ago. Its participants developed algorithms that could predict, from the turns of a dialogue, what rating a person would give the dialogue system.

The next step in developing this approach is a program that would evaluate the acceptability of phrases, or the reliability of the sources used, when generating responses to user queries. It seems to me this would help solve the problem.