Will Robots Be Able To Find A Soul: Emotional Artificial Intelligence - Alternative View

Every year, artificial intelligence grows more capable: face recognition, intelligent voice assistants, and even art created by algorithms are becoming part of everyday life. But will AI be able to cross the last frontier of human capability and learn to experience emotions? Together with the Theory and Practice platform, we invite you to look at the future of man and machine.

The sacred versus knowledge

In the popular view, artificial intelligence can never come close to human emotionality because of our special mental organization, which cannot be transplanted into a machine. In reality, the difficulty of creating emotional AI stems partly from the fact that humans themselves are not all that good at empathy. We are nothing like ideal emotional machines that effortlessly decipher the feelings of others: our empathy is seriously limited by unique experience, learned stereotypes, and individual psycho-emotional reactions. A middle-class European is unlikely to understand what feelings the leader of an African tribe is expressing, and vice versa.


On the one hand, we believe that emotionality is a sacred gift, an exclusive privilege of people. On the other hand, we know too little about it, says Sergey Markov, an AI and machine learning specialist and founder of the 22century.ru portal. In his view, abandoning the perception of emotionality as something sacred will open up new methods for studying empathy. Reverse engineering (studying a finished device or program to understand how it works and to uncover non-obvious possibilities), research into neural networks, and machine learning can teach us something fundamentally new about human emotionality. “In a number of cases machine learning lets us, as the saying goes, test harmony with algebra: more reliable knowledge based on big-data statistics is replacing guesses and hypotheses,” Markov believes.

Establish communication


We are not trying to teach machines empathy out of pure curiosity: the growing number of automated systems, from voice assistants to self-driving cars, makes emotional AI a necessity. The main challenge facing machine learning specialists is to simplify how we work with these interfaces, both when information goes in and when it comes out. We communicate with computers more and more often, but the services and systems themselves still do not understand why we are shaking the phone: out of anger or out of laughter.

Emotional intelligence is already in demand in many business projects: from advertising that adjusts to the emotional state of a potential client to increase sales, to recognition technologies that help catch a criminal by spotting the most nervous person in a crowd.

Researchers will also have to address the safety side of emotional intelligence. “Decisions made by computers shouldn't seem psychopathic. If a machine operates in a world where people live, it must be able to take 'human circumstances' into account, that is, be capable of empathy. A typical example: a robot diagnostician sending an elderly person for a complex operation must take into account the risks associated with stress. A driverless car completely devoid of empathy can, in a certain context, also cause trouble,” says philosopher Kirill Martynov.

Alarmists such as Nick Bostrom, a philosopher who studies existential risks, argue, Martynov notes, that the problem of a "loss of sensitivity" in a superintelligence that sharply surpasses the human level is quite real. Attempts are already being made to forestall this problem with legal restrictions: under this approach, the creators of AI would be legally obliged to endow their systems with the elements of emotional intelligence necessary for empathy.

Teach emotions

The nontrivial task of creating emotional AI is becoming easier with the advent of new tools such as machine learning. Sergey Markov describes the process as follows: “You can take several hundred thousand audio recordings of human utterances and ask a group of human annotators to match each phrase with a set of markers from an 'emotional alphabet.' Then 80% of the phrases are selected at random, and the neural network is trained on this sample to guess the emotional markers. The remaining 20% can be used to check that the artificial intelligence is working properly.” In another learning model Markov describes, the neural network gains more independence: the AI itself groups phrases by similar emotional coloring, speech rate, and intonation, and later learns to synthesize its own utterances based on the resulting categories. Either way, big data is becoming the main resource for training artificial intelligence.
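To make the 80/20 procedure concrete, here is a minimal sketch in Python (not from the article). It assumes the phrases have already been converted to fixed-length feature vectors and labeled by annotators, and it uses a simple classifier in place of the neural network Markov mentions; `load_annotated_phrases` is a hypothetical placeholder.

```python
# Sketch of the supervised setup described above, under the assumption that each
# audio phrase has already been turned into a feature vector and assigned an
# emotion label by human annotators.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def load_annotated_phrases():
    """Placeholder: return (features, labels) for the annotated recordings.
    Random data is fabricated here purely so the sketch runs end to end."""
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 40))     # e.g. 40 acoustic features per phrase
    y = rng.integers(0, 4, size=1000)   # e.g. 4 classes from the "emotional alphabet"
    return X, y

X, y = load_annotated_phrases()

# Randomly hold out 20% of phrases: 80% to train, 20% to check generalization.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)  # any classifier; a neural net in the original
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

On real annotated data, the held-out 20% gives an honest estimate of how well the model guesses emotional markers for phrases it has never seen, which is the point of the split.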

Evolutionary race

“The fact that we experience our own emotions as 'real' is due only to the way our cognitive system, which arose in the course of evolution, is tuned. Individuals capable of experiencing emotions and of controlling their behavior gained an edge in the evolutionary race. Computers are unlikely to come close to modeling the actual evolution of primates; in this sense, their emotions will not be 'real',” Martynov believes.


The key question, Martynov says, is whether it is possible to model the subjective experience of emotions, what Aristotle would call the soul and Descartes the cogito. Science still gives no direct answer, and philosophers hold conferences on the nature of qualia (the irreducible elements of subjective experience). However, there are optimists, such as philosopher and cognitive scientist Daniel Dennett, who argue that subjective experience is ultimately the ability to tell yourself and others how you felt. We will certainly receive convincing verbal reports of emotions from machines in the near future, Martynov thinks.

But most likely, Sergey Markov believes, our joint future with emotional artificial intelligence will take forms that cannot be imagined today through the stereotypical opposition of people and machines: “Rather, people and machines will merge into heterogeneous synthetic systems in which it is no longer possible to draw even a conditional line between a person and the products of his technologies. Emotional intelligence has a big role to play in this scenario.”