When Will Artificial Intelligence Begin To Understand Human Emotions?

Would you trust a robot if it were your doctor? Emotionally intelligent machines may not be as far off as they seem. Over the past few decades, artificial intelligence has become dramatically better at reading people's emotional responses.

But reading emotions doesn't mean understanding them. If AI itself can't experience them, will it ever be able to fully understand us? And if not, do we risk attributing properties to robots that they do not have?

The latest generation of artificial intelligence owes its existence to the growth in the amount of data computers can learn from and to increases in processing power. These machines are steadily improving at tasks that were once reserved exclusively for humans.

Today, artificial intelligence can, among other things, recognize faces, turn sketches of faces into photographs, recognize speech, and play Go.

Identifying criminals

Not so long ago, scientists developed an artificial intelligence that can tell whether a person is a criminal just by looking at their facial features. The system was evaluated on a database of Chinese photographs, and the results were stunning: the AI misclassified innocent people as criminals in only 6% of cases and correctly identified 83% of criminals, for an overall accuracy of nearly 90%.
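
As a quick sanity check of those numbers: if we assume, purely for illustration, that the test set contained equal numbers of criminal and non-criminal photographs (the article does not say), the reported rates are consistent with the stated accuracy:

```python
# Rough consistency check of the reported figures, assuming (purely
# for illustration) equal numbers of criminal and non-criminal photos.
false_positive_rate = 0.06  # innocent people labelled criminal
true_positive_rate = 0.83   # criminals correctly identified

accuracy = ((1 - false_positive_rate) + true_positive_rate) / 2
print(f"{accuracy:.1%}")    # 88.5% -- "nearly 90%"
```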

This system is based on an approach called "deep learning", which has proven successful in, for example, face recognition. Deep learning combined with a "face rotation model" allows artificial intelligence to determine whether two photographs show the same person's face, even when the lighting or angle changes.
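
The usual way such verification works (a general sketch, not the study's actual pipeline) is to map each photo to a feature vector and compare the vectors. The embed() function below is a deliberately crude placeholder for the deep network that would normally produce that vector:

```python
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    """Map a face image to a fixed-length feature vector.

    Placeholder only: a real system would run a pretrained deep
    network here. This version just flattens and normalizes pixels,
    which is enough to show the comparison step."""
    v = face_image.astype(np.float64).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def same_person(img_a: np.ndarray, img_b: np.ndarray,
                threshold: float = 0.8) -> bool:
    """Judge two photos to show the same face when their embeddings
    have high cosine similarity, which a good embedding keeps stable
    across lighting and pose changes."""
    return float(np.dot(embed(img_a), embed(img_b))) >= threshold

# Toy usage with two synthetic 64x64 "photos" of the same face:
rng = np.random.default_rng(0)
photo_a = rng.random((64, 64))
photo_b = photo_a * 0.9 + 0.05        # same content, different lighting
print(same_person(photo_a, photo_b))  # True
```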

Deep learning builds a "neural network" loosely modeled on the human brain, consisting of hundreds of thousands of artificial neurons organized into layers. Each layer transforms its input, such as a face image, into a higher level of abstraction, such as a set of edges at particular orientations and locations, and automatically extracts the features most relevant to the task at hand.
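
To make the layered structure concrete, here is a minimal sketch of such a network (assuming PyTorch; the layer sizes are arbitrary and chosen only for illustration):

```python
import torch
from torch import nn

# Layers move from concrete pixels to increasingly abstract features:
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # low level: edges
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 64x64 -> 32x32
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # mid level: facial parts
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),                  # task output: two classes
)

image = torch.randn(1, 1, 64, 64)  # one hypothetical grayscale face
logits = model(image)
print(logits.shape)                # torch.Size([1, 2])
```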

Given the success of deep learning, it is not surprising that artificial neural networks can distinguish criminals from the innocent, provided there really are facial features that differ between the two. The study identified three such features. One is the angle between the tip of the nose and the corners of the mouth, which is on average 19.6% smaller for criminals. The curvature of the upper lip is on average 23.4% greater for criminals, and the distance between the inner corners of the eyes is on average 5.6% narrower.
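
Measurements like these are straightforward to compute once facial landmarks have been located. The sketch below is illustrative only: the landmark coordinates are hypothetical, and the curvature proxy is a simplification rather than the study's exact definition:

```python
import math

def nose_mouth_angle(nose_tip, mouth_left, mouth_right):
    """Angle in degrees at the nose tip between the rays to the
    two mouth corners."""
    def ray(p):
        return math.atan2(p[1] - nose_tip[1], p[0] - nose_tip[0])
    return abs(math.degrees(ray(mouth_left) - ray(mouth_right)))

def upper_lip_curvature(mouth_left, mouth_right, lip_top_mid):
    """Crude curvature proxy: how far the upper-lip midpoint rises
    above the line joining the mouth corners, relative to mouth
    width (image y grows downward, hence the sign)."""
    width = math.dist(mouth_left, mouth_right)
    corners_y = (mouth_left[1] + mouth_right[1]) / 2
    return (corners_y - lip_top_mid[1]) / width

def inner_eye_distance(inner_left, inner_right):
    """Distance between the inner corners of the eyes."""
    return math.dist(inner_left, inner_right)

# Hypothetical landmark coordinates in pixels (x, y):
print(nose_mouth_angle((100, 120), (80, 150), (120, 150)))     # ~67.4
print(upper_lip_curvature((80, 150), (120, 150), (100, 143)))  # 0.175
print(inner_eye_distance((88, 90), (112, 90)))                 # 24.0
```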

At first glance, this analysis suggests that the outdated view that criminals can be identified by their physical attributes is not so wrong after all. But this is not the whole story. Remarkably, the two most relevant features involve the lips, our most expressive facial features. The photographs used in the study were required to have a neutral facial expression, yet the AI still managed to find hidden emotions in them, perhaps so faint that people cannot detect them.

It is hard to resist the temptation to examine the sample photographs yourself (the paper is still under review). Close examination does show a slight smile in the photographs of the innocent, but the samples contain too few photographs to draw conclusions about the entire database.

The power of affective computing

This is not the first time a computer has been able to recognize human emotions. The field of so-called "affective computing" (or "emotional computing") has been around for a long time. The premise is that if we want to live comfortably alongside robots and interact with them, these machines must be able to understand and respond appropriately to human emotions. The possibilities in this area are quite extensive.

For example, researchers have used facial analysis to identify students struggling with computer-based lessons. The AI was taught to recognize different levels of engagement and frustration, so the system could tell when students found the tasks too easy or too hard. This technology could help improve the learning experience on online platforms.
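
The decision logic in such a system can be very simple once the expression model produces scores. A toy sketch, in which the scores, thresholds, and function names are all hypothetical:

```python
# Toy decision rule for an adaptive lesson. The scores would come
# from a facial-expression model (not shown); everything here,
# including names and thresholds, is hypothetical.
def adjust_difficulty(engagement: float, frustration: float) -> str:
    """Both scores are assumed to lie in [0, 1]."""
    if frustration > 0.7:
        return "easier"   # student is struggling
    if engagement < 0.3:
        return "harder"   # material is likely too easy or boring
    return "keep"

print(adjust_difficulty(engagement=0.2, frustration=0.1))  # harder
```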

Sony is trying to develop a robot that can form emotional bonds with people. It is not yet entirely clear how the company intends to achieve this or what exactly the robot will do, but Sony says it is trying to "integrate hardware and services to provide an emotionally compelling experience."

Emotional artificial intelligence would have a number of potential advantages, whether in the role of companion or professional: it could identify a criminal or discuss a course of treatment.

There are also ethical concerns and risks. Would it be right to let a patient with dementia rely on an AI companion and tell them that it is emotionally alive when it is not? Can you put a person behind bars because an AI says they are guilty? Of course not. Artificial intelligence would, first of all, be not a judge but an investigator, flagging people as "suspicious" but certainly not as guilty.

Subjective things like emotions and feelings are hard to teach to artificial intelligence, in part because AI does not have access to good enough data to analyze them objectively. Will AI ever understand sarcasm? A sentence can be sarcastic in one context and mean something entirely different in another.

In any case, the amount of data and processing power continue to grow. Barring a few exceptions, AI may well learn to recognize different types of emotions within the next few decades. But could it ever experience them itself? That is a matter of debate.

ILYA KHEL