Scientists Have Ceased To Understand How Artificial Intelligence Works - Alternative View

Scientists and programmers no longer understand how artificial intelligence makes its decisions. Several specialists raised the problem at the field's main conference, Neural Information Processing Systems, held in Long Beach, California.

Experts interviewed by Quartz say they need to act before these systems become too complex to untangle.

“We don't want to take AI decisions for granted without understanding their logic,” says Jason Yosinski of Uber. “For society to accept machine learning models, we need to know how AI arrives at certain conclusions.”

The problem, which many experts call the “black box,” is serious. Past experience has shown that AI tends to make biased decisions and draw analogies where none exist. As neural networks gradually make their way into law enforcement, healthcare, scientific research, and the algorithms that decide what you see in your Facebook news feed, an AI error can be very costly.

As an example, Kiri Wagstaff, an AI expert at NASA's Jet Propulsion Laboratory, cites missions to Mars. The spacecraft are 200 million miles from Earth and cost hundreds of millions of dollars, so errors in an AI's work are simply unacceptable.

“People need to know what AI is doing and why. Otherwise, how can they trust it to control expensive equipment?” Wagstaff says.

Wagstaff is currently working on an algorithm that sorts through images taken by various NASA spacecraft. Since the images number in the millions, the algorithm lets researchers sift them and highlight the most interesting ones without spending an enormous amount of time on the process. The problem, however, is that often only the AI itself knows why the images it selects are unusual.

Thus, concludes Wagstaff, if there is an error inside this algorithm, one day it may miss very important information.


“Basically, the computer gives you an image and says, ‘Look, this is interesting.’ But you cannot always tell why it is interesting: whether it is the color, the shape of the objects, or their location in space, you probably don’t know,” the scientist says.
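To make the idea concrete, here is a minimal, purely illustrative sketch of how such an "interestingness" selector might work in its simplest form. This is not NASA's actual algorithm: it assumes each image has already been reduced to a short feature vector, and it scores each image by its distance from the average of the collection, so outliers rank as "interesting."

```python
# Hypothetical sketch of distance-based novelty detection: flag images whose
# feature vectors lie far from the average of the collection. Real systems
# use learned features; here each "image" is just a hand-made list of numbers.
import math

def novelty_scores(features):
    """Return one score per feature vector: its distance from the mean vector."""
    n = len(features)
    dim = len(features[0])
    mean = [sum(f[i] for f in features) / n for i in range(dim)]
    return [math.dist(f, mean) for f in features]

# Toy collection: three "ordinary" images and one outlier.
images = [
    [0.9, 1.0, 1.1],
    [1.0, 1.0, 1.0],
    [1.1, 0.9, 1.0],
    [5.0, 4.8, 5.2],  # the unusual image
]
scores = novelty_scores(images)
most_interesting = scores.index(max(scores))
print(most_interesting)  # prints 3: the outlier gets the highest score
```

Note how this toy version already exhibits the black-box problem the article describes: the score tells you *that* an image is unusual, but not *which* feature (color, shape, position) made it so.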

Hanna Wallach, a senior researcher at Microsoft, agrees with her colleagues' conclusions.

“As machine learning becomes more widespread and the stakes rise, we can no longer view these systems as black boxes. We need to understand what is happening inside them and what they are doing,” said the researcher.

Fortunately, scientists are already looking for ways to follow the logic of artificial intelligence. Google researcher Maithra Raghu, for example, presented a paper describing how to track the behavior of individual “neurons” in a neural network. By analyzing millions of operations, she was able to identify which artificial neurons had latched onto misleading concepts, and to turn them off. This shows that translating the work of neural networks into a form accessible to human understanding is not such an impossible task.
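The idea of "turning off" a neuron can be sketched in a few lines. This is an illustration of the general technique (often called ablation), not Raghu's actual method: a tiny one-layer network is run once normally and once with one hidden unit forced to zero, so you can see exactly how much that unit contributed to the output.

```python
# Illustrative sketch of neuron ablation: run a tiny network, then zero out
# ("turn off") one hidden unit and compare the outputs.

def relu(x):
    return max(0.0, x)

def forward(x, w_hidden, w_out, disabled=()):
    """One hidden layer; units listed in `disabled` are forced to zero."""
    hidden = [0.0 if i in disabled
              else relu(sum(wi * xi for wi, xi in zip(w, x)))
              for i, w in enumerate(w_hidden)]
    output = sum(wo * h for wo, h in zip(w_out, hidden))
    return hidden, output

w_hidden = [[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]]  # three hidden units
w_out = [0.5, 0.5, 2.0]
x = [2.0, 1.0]

hidden, out = forward(x, w_hidden, w_out)
# hidden == [2.0, 1.0, 1.0]; unit 2 reacts to the *difference* of the inputs
_, out_ablated = forward(x, w_hidden, w_out, disabled={2})
print(out, out_ablated)  # prints 3.5 1.5: unit 2 contributed 2.0 to the output
```

Comparing the two outputs attributes part of the network's decision to a single unit, which is the basic move behind tracking down neurons that have "focused on misconceptions."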

Another option for solving the problem is to regularly test the skills developed by artificial intelligence.

“It’s like school teachers asking children to retell in their own words what they understood from the teacher’s explanation,” says Wagstaff.

That said, understanding an algorithm's internals matters for more than keeping a hypothetical rover from driving off a Martian cliff: once you know why a system fails, you can make existing systems better.

“If your system doesn’t work and you don’t know why, then it’s very difficult to do something about it,” says Yosinski. "If you know what happened, then the situation can always be corrected."

Based on materials from hightech.fm