Artificial Neural Networks: How To Teach A Machine To Think? - Alternative View


Publications on the prospects of artificial intelligence have become more frequent recently, discussing both the practical and the moral-ethical aspects of humanity's coexistence with it. How timely are these discussions? Can we really expect "thinking machines" to appear?

All projects to create artificial intelligence can be roughly divided into two areas. The first accumulates databases and processes them with programs that imitate the activity of the human brain. The second is based on studying models of intelligent behavior. The key drawback of both is that we still do not know well enough what mind and intelligent behavior actually are, and serious study of the human brain, frankly, began only relatively recently.

There is an opinion that the problem can be circumvented with cyborgs, that is, by fusing a living brain (an ape's, and eventually a human's) with a computer. This path, however, is fraught with enormous difficulties, and, even worse, in that case it would be impossible to speak of fully artificial intelligence.

However, scientists believe it is quite realistic to jump over several steps by allowing artificial intelligence to develop independently, just as it developed in living nature, with the difference that its evolution will take place in virtual rather than material space. Here the bet is placed on artificial neural networks (ANNs).

Let's recall what a neuron is. This is the name of a nerve cell, which differs from other cells in that it can store and transmit information via electrical and chemical signals. The function of neurons was discovered at the end of the 19th century, which, of course, played into the hands of the materialists then gaining authority around the world: they immediately declared that it was neurons that contained the "soul." Hence the idea that if one could somehow grow an exact copy of the brain, a "soul" would be born in it. But a philosophical question arose: can one speak of a "soul" without reason? After all, reason is a product of upbringing, as shown by studies of "Mowgli children," human children raised by animals. Accordingly, it is not enough to create a copy of the brain; it must also be "educated" to acquire intelligence.

TECHNICAL SUBTLETIES

A normal adult's brain contains approximately 86 billion neurons. Not so long ago, the idea of creating a digital analogue of it seemed absolutely fantastic. However, today, with the development of information technology, this already seems quite achievable.


The famous American mathematician Norbert Wiener, the "father" of cybernetics, is considered the founder of the theory of modeling complex biological processes, including those of the brain. In 1949, the Canadian psychologist Donald Hebb, a specialist in the processes of thinking, built on Wiener's work to compile the first training algorithm for neural networks (incidentally, Hebb at one time worked for the CIA, where he dealt with the problem of brainwashing).
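Hebb's rule is often summarized as "neurons that fire together, wire together": a connection is strengthened in proportion to the joint activity of the two units it links. The sketch below is an illustrative toy, not Hebb's original formulation; the array sizes, data, and learning rate are arbitrary assumptions.

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.1):
    """One Hebbian step: each weight grows by lr * post * pre,
    i.e. in proportion to the product of the activities of the
    units it connects (an outer product over the two layers)."""
    return weights + lr * np.outer(post, pre)

pre = np.array([1.0, 0.0, 1.0])   # presynaptic (input) activity
post = np.array([1.0, 0.0])       # postsynaptic (output) activity
w = np.zeros((2, 3))              # connections start at zero
w = hebbian_update(w, pre, post)
# only the connections between jointly active units are strengthened
```

Note that this rule only strengthens connections; practical variants add decay or normalization to keep the weights bounded.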

In 1957, the American artificial-intelligence theorist Frank Rosenblatt, building on this earlier work, designed the logic of the perceptron, a self-learning cybernetic model of the brain, which was implemented three years later on the basis of the Mark-1 electronic computer. The perceptron transmits signals from photocells (sensors, S-cells) to randomly connected blocks of electromechanical memory cells. If a cell receives a signal exceeding a threshold value, it passes it on to an adder (R-element) with a certain coefficient (the "weight" of the A-R connection). Depending on the sum of the signals multiplied by the weight coefficients, the adder outputs one of three possible results for the whole system: -1, 0, or +1.

Training the perceptron consists of adjusting these weight coefficients. For example, we place a square figure in front of the photocells and set a rule: when a square appears in the field of view, the perceptron should give a positive result (+1), and when any other object appears, a negative one (-1). We then show objects one by one, increasing the weights whenever a square is present and decreasing them when it is absent. As a result, we obtain a unique array of weight values within the system for any variant of the appearance of a square, and from then on we can use it to recognize squares. The Mark-1, despite its primitiveness compared to modern computers, could recognize not only geometric shapes but also letters of the alphabet, even written in different handwritings.
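The training procedure described above can be sketched in modern terms. This is a minimal toy, not a reconstruction of the Mark-1: the "sensor field" is a 2x2 binary image flattened to four inputs, the target class is a fully filled square, and the learning rate and epoch count are illustrative choices.

```python
import numpy as np

def step(x):
    # threshold unit: +1 if the weighted sum is positive, else -1
    return 1 if x > 0 else -1

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Rosenblatt-style rule: when the output disagrees with the
    target, nudge each weight toward the target by lr * error * input."""
    w = np.zeros(samples.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, labels):
            y = step(np.dot(w, x) + b)
            w += lr * (t - y) * x
            b += lr * (t - y)
    return w, b

# toy sensor field: 2x2 binary images; +1 means "filled square"
X = np.array([[1, 1, 1, 1],   # the square
              [1, 0, 0, 1],
              [0, 1, 1, 0],
              [1, 1, 1, 0]])
y = np.array([1, -1, -1, -1])
w, b = train_perceptron(X, y)
preds = [step(np.dot(w, x) + b) for x in X]
# preds == [1, -1, -1, -1]: the trained weights single out the square
```

Because the square differs from the other patterns by a simple linear criterion (here, the total number of lit sensors), the weights settle into values that separate it, which is exactly the kind of task the original perceptron could handle.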

SMART THINGS

Of course, many more complex circuits, algorithms, and variants of perceptrons have emerged since then. Nevertheless, this approach to organizing a neural network model has fundamental limitations: for example, perceptrons are powerless to solve the problem of dividing a figure into separate parts or determining the relative position of figures.

When it became clear that it was impossible to build artificial intelligence on perceptrons alone, interest in them fell. Nevertheless, in the early 1980s new variants of self-learning and self-organizing neural networks appeared: the Hopfield network, the Hamming network, the Kohonen network, the Jordan network, and others. In 1986, a kind of revolution took place: Soviet and American scientists developed the backpropagation method (an iterative gradient algorithm), which made it possible to overcome the previously discovered limitations. After that, neural networks developed rapidly, and the results were immediately put to use in applied software.
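The idea behind backpropagation, the iterative gradient algorithm mentioned above, can be shown on a tiny two-layer network. The sketch below is illustrative: the 2-4-1 sigmoid architecture, the XOR task (a classic example of a problem no single-layer perceptron can solve), the learning rate, and the iteration count are all assumed choices, not part of the original 1986 formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])  # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# small random initial weights for a 2-4-1 network
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)
lr, losses = 0.5, []

for _ in range(10000):
    # forward pass: compute activations layer by layer
    h = sigmoid(X @ W1 + b1)            # hidden layer
    out = sigmoid(h @ W2 + b2)          # network output
    losses.append(np.mean((out - y) ** 2))
    # backward pass: propagate the error gradient back through
    # each layer (the chain rule applied layer by layer)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent weight updates
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)
# the squared error shrinks as the weights are iteratively adjusted
```

The crucial point is the hidden layer: by assigning each hidden unit its share of the output error, backpropagation can train the intermediate weights that a single-layer perceptron simply does not have.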

Modern software packages built on artificial neural networks are capable of recognizing arbitrarily complex texts, voice commands, faces, gestures, and facial expressions. But these are only the simplest use cases; there are more unusual ones: self-learning autopilots that respond to developing emergencies earlier than pilots; exchange monitors that identify suspicious transactions on stock markets; network advertising agents that track the preferences of potential customers; medical diagnostic systems that detect pathologies in infants.

It is clear that as information technologies improve, neural networks will also become more complex. They will manage household appliances and the life support of homes, factories, and supermarkets. They will monitor threats, analyze trends, and give advice, for example, on the optimal investment of money. They will even be able to create works of art: there are already paintings and poems produced by neural networks!

SLAVERY OR FRIENDSHIP?

Everything points to neural networks someday becoming indispensable assistants in a thousand matters large and small. Futurists fear this. They believe that at some point quantity will turn into quality, artificial intelligence will arise in neural networks, and it will immediately challenge humanity and destroy it. Another option is also possible: people will become so dependent on the decisions made by neural networks that they will not even notice how they turn into their slaves.

Scary scenarios like these seem far-fetched. The fact is that neural networks are structured from the outset to adapt to the needs of a particular person or group of people. They can help correct a mistake or give advice, highlight a problem or notice a deception, but they are not able to make a choice between equivalent options themselves, because we (unfortunately or fortunately) cannot teach them the main thing: morality. Therefore, neural networks will always be like domestic dogs: obedient, loyal, and friendly.

Anton Pervushin