People Don't Trust Artificial Intelligence. How To Fix It?

Artificial intelligence can already predict the future. Police use it to map when and where crime is likely to occur. Doctors use it to predict when a patient is at risk of a stroke or heart attack. Scientists are even trying to give AI an imagination so that it can anticipate unexpected events.

Many decisions in our lives require good predictions, and AI agents are almost always better at making them than humans. Yet for all these technological advances, we still lack confidence in the predictions that artificial intelligence provides. People are not used to relying on AI and prefer to trust human experts, even when those experts are wrong.

If we want artificial intelligence to benefit humanity, we need to learn to trust it. To do that, we need to understand why people are so stubbornly unwilling to trust AI.

Trust Dr. Robot

IBM's attempt to introduce its supercomputing program Watson for Oncology to oncologists failed. The AI promised to deliver high-quality treatment recommendations for 12 types of cancer, which account for 80% of cases worldwide. To date, more than 14,000 patients have received recommendations based on its calculations.

But when doctors first encountered Watson, they found themselves in a difficult position. On the one hand, when Watson's treatment guidance agreed with their own opinions, they saw little value in it: the supercomputer was simply telling them what they already knew, and its recommendations did not change the actual treatment. This may have given doctors peace of mind and confidence in their own decisions, but IBM has yet to prove that Watson actually improves cancer survival rates.

On the other hand, when Watson made recommendations that contradicted expert opinion, doctors concluded that Watson was incompetent. And the machine could not explain why its proposed treatment should work, because its machine learning algorithms were too complex for humans to understand. This bred even more distrust, and many doctors simply ignored the AI's recommendations and relied on their own experience.

As a result, MD Anderson Cancer Center, IBM Watson's flagship medical partner, recently announced that it was canceling the program. A Danish hospital likewise said it was abandoning the program after finding that its oncologists disagreed with Watson in two out of three cases.

The problem with Watson for Oncology was that doctors simply didn't trust it. Human trust is often based on an understanding of how other people think, together with experience that builds confidence in their judgment; this creates a psychological sense of security. AI, by contrast, is still relatively new and opaque to most people. It makes decisions by analyzing large amounts of data to uncover hidden patterns and weak signals.

Even when it can be explained in technical terms, the AI decision-making process is usually too complex for most people to grasp. Interacting with something we don't understand can cause anxiety and a sense of lost control. Many people simply do not know how, and on what data, AI works, because it all happens behind the scenes.

For the same reason, people are more acutely aware of cases where AI gets it wrong: think of the Google algorithm that classified Black people as gorillas, the Microsoft chatbot that turned into a Nazi in less than a day, or the Tesla that crashed fatally while driving in Autopilot mode. These unfortunate examples received disproportionate media attention, reinforcing the narrative that we cannot rely on the technology. Machine learning is not 100% reliable, in part because humans design it.

A split in society?

The feelings that artificial intelligence evokes run deep in human nature. Scientists recently conducted an experiment in which they showed people science-fiction films about artificial intelligence and then surveyed them about automation in everyday life. It turned out that, regardless of whether the AI was portrayed positively or negatively, simply watching a cinematic vision of our technological future polarized the participants' attitudes: optimists became even more optimistic, and skeptics became even more guarded.

This suggests that people judge AI through the lens of their own prior reasoning, a deep-seated tendency known as confirmation bias: the tendency to seek out or interpret information in ways that confirm pre-existing beliefs. As AI appears in the media more and more often, this could deepen divisions in society, opening a rift between those who use AI and those who reject it, with whichever group predominates gaining a major advantage or suffering a serious handicap.

Three ways out of the AI confidence crisis

Fortunately, we already have some ideas about how to improve trust in AI. Simply having experience with AI can dramatically improve people's attitude toward the technology. There is also evidence that the more often you use a particular technology (such as the Internet), the more you trust it.

Another solution might be to open up the black box of machine learning algorithms and make their workings more transparent. Companies such as Google, Airbnb and Twitter already publish transparency reports on government inquiries and data disclosures. A similar practice for AI systems would help people get the understanding they need of how algorithms make decisions, as the sketch below illustrates.
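To make the idea concrete, here is a minimal, purely illustrative sketch of one way to "open the black box": instead of an opaque model, train an inherently interpretable one and print its decision rules so a human can audit them. The library (scikit-learn), dataset and model choice are assumptions for illustration, not anything described in the article or used by the systems it mentions.

```python
# Illustrative sketch: a transparent model whose decisions can be inspected.
# Assumes scikit-learn is installed; the iris dataset stands in for real data.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()

# A shallow decision tree: less powerful than a deep neural network,
# but every prediction follows a short, human-readable chain of rules.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# export_text renders the learned rules as plain text, so anyone can see
# exactly which feature thresholds lead to each prediction.
print(export_text(model, feature_names=list(data.feature_names)))
```

The point of such a design choice is not that every AI must be a decision tree, but that when a system can show its reasoning in a form people can read, users no longer have to take its predictions on faith.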

Research also shows that involving people in the AI decision-making process increases trust and lets AI learn from human experience. One study found that people who were given the opportunity to slightly modify an algorithm felt more satisfied with its results, most likely because of a greater sense of agency and the ability to influence future outcomes.

We don't need to understand the intricate inner workings of AI systems. But if people are given even a little information about, and control over, how those systems are implemented, they will be more confident and more willing to embrace AI in their everyday lives.

Ilya Khel