Artificial Intelligence Turned Out to Be Racist

A study by scientists at the Massachusetts Institute of Technology has revealed in detail how artificial intelligence analyses data, showing that its decisions are often guided by sexist and racist stereotypes. Several of the systems examined in the experiments proved susceptible to human prejudice.

The British newspaper "The Daily Mail" writes that, after obtaining these results, the team set about retraining the artificial intelligence to eliminate the problems it had uncovered. According to Irene Chen, a researcher at the Massachusetts Institute of Technology, computer scientists tend to jump to the conclusion that the only way to remove racism and sexism from artificial intelligence algorithms is to improve the software code. In fact, the quality of an algorithm is directly proportional to the quality of the data it operates on. Chen's research with David Sontag and Fredrik D. Johannson shows that making more data available can radically change things for the better.

In one experiment, the team examined a system that predicted a person's income from the information available about them. The analysis showed that in 50% of cases the algorithm tended to predict that a woman's income would be lower, on average, than a man's. When the amount of available data was increased tenfold, the scientists found that such errors became 40% less frequent.
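
Below is a minimal sketch of how such a per-group error gap can be measured and how it can shrink with more data. It is not the MIT team's code: the dataset is synthetic, and the features, effect sizes and model choice (a scikit-learn logistic regression) are illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_data(n):
        # Synthetic population: income truly depends only on a latent skill,
        # so any sex-dependent behaviour of the model is estimation error.
        sex = rng.integers(0, 2, n)                  # 0 = male, 1 = female
        skill = rng.normal(0, 1, n)
        high_income = (skill + rng.normal(0, 0.5, n) > 0).astype(int)
        X = np.column_stack([sex, skill + rng.normal(0, 1, n)])
        return X, high_income, sex

    def group_error_gap(n_train):
        X, y, _ = make_data(n_train)
        model = LogisticRegression().fit(X, y)
        X_t, y_t, sex_t = make_data(10_000)
        err = model.predict(X_t) != y_t
        # Absolute difference in error rate between women and men.
        return abs(err[sex_t == 1].mean() - err[sex_t == 0].mean())

    for n in (500, 5_000, 50_000):
        print(n, round(group_error_gap(n), 4))

With more training data, the spurious influence of the sex feature is estimated away, so the per-group error gap generally shrinks, which is the qualitative effect the study describes.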

Moreover, when studying a system used in hospitals to predict whether patients will survive major surgery, the researchers found its predictions were far less accurate for Asian patients than for white and Black patients. The scientists argue, however, that advanced analysis techniques can significantly narrow this accuracy gap. The finding shows that simply having more data does not always correct an algorithm's errors; instead, researchers need to gather more information about the groups being discriminated against.
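
The sketch below illustrates that point on synthetic data (it is not the hospital system from the study): a model trained where one group is heavily under-sampled shows a wide per-group accuracy gap, and adding data specifically for that group narrows it. The group ids, features and effect sizes are assumptions made for illustration.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)

    def make_group(n, group_id, shift):
        # Each group relates features to outcome slightly differently.
        X = rng.normal(0, 1, (n, 5))
        y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
        g = np.full((n, 1), group_id)        # group membership as a feature
        return np.hstack([X, g]), y

    def accuracy_by_group(n_majority, n_minority):
        Xa, ya = make_group(n_majority, 0, shift=0.0)
        Xb, yb = make_group(n_minority, 1, shift=1.5)
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))
        Ta, ta = make_group(5_000, 0, 0.0)   # held-out test set per group
        Tb, tb = make_group(5_000, 1, 1.5)
        return model.score(Ta, ta), model.score(Tb, tb)

    print(accuracy_by_group(10_000, 500))    # minority under-sampled: wide gap
    print(accuracy_by_group(10_000, 5_000))  # targeted extra data narrows it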

The new method raises a further question for machine-learning researchers: how to analyse data efficiently without reproducing the discrimination already embedded in it.

How does machine learning work in AI-powered systems?

Artificial intelligence systems are based on artificial neural networks (ANNs), which carry the information-storage and learning techniques of the human brain over to machines. ANNs are trained to find patterns in available sources of information, including speech, text and images. Improving the accuracy of data analysis is one of the fundamental prerequisites underlying the latest advances in artificial intelligence.

"Normal" artificial intelligence uses the input data to tell the algorithm about the subject of analysis, while operating with a huge amount of information.

Practical applications of machine learning include Google's translation services, face recognition in Facebook photos, and the Snapchat filters that scan faces before applying visual effects.

The data-entry process is often time-consuming and is usually limited to a stream of information about a single aspect of the object under study. A newer type of ANN, the generative adversarial network, pits two artificial-intelligence systems against each other at once, prompting the less capable system to learn from the other without human involvement. This technique dramatically improves the efficiency and speed of machine learning while raising the quality of data analysis.
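
A minimal sketch of the adversarial idea follows, assuming PyTorch is available; the task (mimicking a one-dimensional Gaussian), the network sizes and the learning rates are illustrative, not the architecture of any system mentioned above.

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2_000):
        real = torch.randn(64, 1) * 2 + 3        # target distribution N(3, 2)
        fake = G(torch.randn(64, 8))

        # Discriminator: separate real samples (label 1) from fakes (label 0).
        d_loss = bce(D(real), torch.ones(64, 1)) + \
                 bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: fool the discriminator into labelling fakes as real.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    samples = G(torch.randn(1_000, 8))
    print("generated mean/std:", samples.mean().item(), samples.std().item())

Neither network ever sees a human-provided label for what the data should look like; each improves only by competing with the other, which is the self-supervising behaviour the paragraph describes.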

Oliy Kurilov