And What Is It All About, This Artificial Intelligence Of Yours? - Alternative View

"I think, therefore I am." We have all heard this famous statement, which is one of the foundations of modern philosophy, self-awareness, individualism and Western rationalism. However, the French philosopher Rene Descartes, who put it forward, did not particularly bother to give it a precise definition. The statement was put forward as a primary certainty, a truth that cannot be doubted. But is it really so?

What are thoughts? What is intelligence? And can machines have them? In the field of artificial intelligence (AI), the questions matter just as much as the answers. First of all, we need to understand whether intelligent machines would think the way we do.

To illustrate what is at stake, here is an interesting thought experiment.

Chinese room

Imagine an isolated room. Inside, many people sit at tables. At one end of the room, a note covered in strange marks and symbols is fed through a slot. The task of the people in the room is to break what is written on the paper into pieces and place the symbols in the appropriate baskets, following a set of instructions. For example, if you see a straight diagonal line, place the symbol in basket 2-B; if you see a cruciform symbol, place it in basket 17-Y; and so on. The instructions, however, say nothing about what these symbols mean. People simply do what is written in them. When this work is done, the papers are passed to another part of the room, where other people sit, trained to do a different job, again according to instructions: if basket 2-B was used, draw a horizontal line; if basket 17-Y was used, draw a circle. Finally, all the pieces go to the last person, who assembles them and passes the final result out through a slot in another part of the room.

The note fed in through the first slot was written in Chinese. The note passed out through the slot on the other side of the room is in perfect English. The catch is that no one in the room knows either language.
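The point is easy to make concrete in code: the room is nothing but lookup tables and blind rule-following. Below is a minimal sketch of such a room; the two rules from the description above are kept, and everything else (the symbols, the marks, the function name) is invented purely for illustration.

```python
# A toy "Chinese room": symbols are moved into baskets and turned into marks
# purely by table lookup. Nothing here "understands" what any symbol means.
# The symbols and tables are illustrative, not real translation rules.

SORTING_RULES = {      # first group of workers: incoming symbol -> basket
    "丿": "2-B",       # a straight diagonal line goes to basket 2-B
    "十": "17-Y",      # a cruciform symbol goes to basket 17-Y
}

WRITING_RULES = {      # second group of workers: basket -> mark to draw
    "2-B": "-",        # basket 2-B was used, so draw a horizontal line
    "17-Y": "o",       # basket 17-Y was used, so draw a circle
}

def chinese_room(note: str) -> str:
    """Process an incoming note by blindly following the instructions."""
    baskets = [SORTING_RULES[s] for s in note if s in SORTING_RULES]
    marks = [WRITING_RULES[b] for b in baskets]
    return "".join(marks)   # the "answer" passed back out through the slot

print(chinese_room("丿十丿"))   # -> "-o-", produced with zero understanding
```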

This thought experiment, first published in 1980 by the philosopher John Searle, is often used to illustrate how hard it is to pin down the very definition of intelligence. With enough people in the room, you can do almost any task: draw or describe pictures, translate texts or correct translation errors, calculate enormous numbers, and so on. But can all these actions be gathered under the single concept of "intelligence"? Many people outside the room would say yes. Everyone inside it would disagree.


If you imagine transistors in place of the people in the room, what you have is, in effect, a computer. So the more pointed question is this: can a computer ever be more than an overcomplicated version of the "Chinese room"? One answer, as so often happens in this area, only raises more questions: what if, instead of transistors, the room is filled with neurons? What if the brain is nothing more than a more sophisticated version of the "Chinese room"?

This rabbit hole goes as deep as you want it to, but today we will not try to settle questions that philosophers have been unable to answer for millennia. Let's move on instead to something more practical.

Weak and strong AI

These days the term "AI" gets used almost everywhere, often without much thought about what it actually means. Nevertheless, it has a fairly precise definition. In short, AI is a program that can reproduce the processes of human thinking, or their results. That formulation may sound vague, but it will do for our purposes.

AI will one day be applied everywhere. AI will pick the next song from your music collection; AI will control the limbs of bipedal robots; AI will pick out objects in an image and describe them; AI will translate text from German to English and from Russian to Korean in any combination. In general, it will do all the things we now do ourselves but would like, in the future, to automate.

Even though this list includes some very complex tasks, they remain just that: "tasks." A neural network trained on millions of words, phrases and sentences, able to translate freely between eight different languages, is still nothing more than a very complicated set of numbers manipulated according to rules laid down by its creators. And if solving a problem can be reduced to following a fixed set of rules and instructions, as in that same "Chinese room," can we really speak of intelligence rather than mere computation?
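To make that point concrete, here is a minimal sketch of what a single "layer" of such a network actually does. The sizes and weights below are random numbers chosen purely for illustration, not a real translation model; applying the network is nothing but multiplying, adding and squashing numbers according to fixed rules.

```python
import math
import random

# A toy one-layer "neural network": just lists of numbers (weights) plus
# fixed arithmetic rules for combining them with the input. The sizes and
# values are arbitrary; a real translation model simply has more of them.
random.seed(0)
INPUT_SIZE, OUTPUT_SIZE = 4, 3
weights = [[random.uniform(-1, 1) for _ in range(INPUT_SIZE)]
           for _ in range(OUTPUT_SIZE)]
biases = [random.uniform(-1, 1) for _ in range(OUTPUT_SIZE)]

def forward(x):
    """Apply the layer: weighted sums followed by a squashing function."""
    outputs = []
    for w_row, b in zip(weights, biases):
        total = sum(w * xi for w, xi in zip(w_row, x)) + b  # multiply and add
        outputs.append(1.0 / (1.0 + math.exp(-total)))      # sigmoid squashing
    return outputs

print(forward([0.2, -0.5, 0.1, 0.9]))  # three output numbers; no "understanding" anywhere
```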

This is where the distinction between "weak" and "strong" AI comes in. These are not really types of AI but rather ways of approaching the idea itself. As with many philosophical positions, neither option is "more correct" than the other; each is important and relevant in its own way.

On one side are those who believe that the best AI can ever achieve is an emulation of the minds of its creators. However complex and massive its structure becomes, its mechanical nature means it can never break out of the framework imposed on it. Even within that framework it can be an amazing tool for solving extremely complex problems, yet such an AI remains nothing more than very advanced software. This is the perspective of weak AI: given its fundamental limits, it is useful only for clearly formulated tasks.

On the other side are the supporters of strong AI, who believe that one day an AI will be created whose capabilities not only match but exceed those of the human mind. These are the people who believe that the brain itself is a "Chinese room." And if the mass of biological circuitry in each of our heads can produce what we call intelligence and self-awareness, why couldn't artificial circuitry do the same? That is the core idea behind strong AI.

It would seem the goals are set and the priorities are clear, so what are we waiting for? There is one small "but" that keeps us from getting started: we still don't have a working definition of intelligence, let alone a blueprint for it.

The meaning of the second "I"

Frankly, it is hard to say how much progress we have made in defining intelligence even over the past 3,000 years. But at least we have managed to abandon obviously muddled ideas, such as the notion that intelligence is something easily measured, or that it depends on some biological indicator like the size of the head or the brain.

Each of us probably has our own idea of what intelligence really is, but that only makes it harder to answer the question at hand: can AI fall under the concept at all? Over the years many definitions have been proposed, each trying to be more precise, yet they all ended up describing the same thing from different angles. Some were too narrow, others too general, and some were, I'm not afraid of the word, insane.

Still, perhaps the simplest and at the same time most fundamental of them is this: intelligence is the ability to solve new problems.

Ultimately, this is what lies at the heart of "adaptability," "generalization" and "initiative," which in turn reflect the "motivation," "judgment" and "perception" bound up with intelligence. It matters that an individual can solve problems through the prism of the world around them. But it matters even more that the individual can not only solve particular problems but also adapt to solving new ones. It is this property, adaptability, that is the key to intelligence, even though no one has yet managed to state the idea more precisely and, as they say, lay it all out neatly.

Will the AI we create one day be able to adapt to solving other problems? Will it one day be able, on its own and without instructions from its creators, to tackle questions and tasks that no one set before it? Scientists are working hard to create new generations of AI that can learn and perform tasks with unprecedented precision, AI that learns not only from orders handed down from above but also on its own initiative. Whether these AIs will ultimately "think" or merely "calculate" is a question we can leave to philosophers and computer scientists. But the very fact that sooner or later we will have to face it is astonishing in itself.

NIKOLAY KHIZHNYAK
