DeepMind Has Tested IQ Of AI Models - Alternative View

In a blog post, DeepMind, a subsidiary of Google, described an experiment that tested artificial intelligence models for generalization and abstract reasoning. The researchers built a generator that produces questions based on progressions and on relationships between the colors, shapes, and sizes of objects. Similar tasks appear in IQ tests for humans.
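To make the idea of a procedural question generator concrete, here is a minimal toy sketch. It is not DeepMind's actual generator; the attribute names and the progression-style question format are illustrative assumptions.

```python
# Toy sketch of a progression-based question generator, loosely inspired
# by the kind of abstract-reasoning puzzles described in the article.
# Attribute names and the answer-choice scheme are assumptions.

ATTRIBUTES = ["color", "shape", "size"]

def make_progression_question(attribute, start=1, step=1, length=3):
    """Build a sequence that advances by `step` along one attribute
    and return the context panels, answer choices, and correct answer."""
    panels = [start + i * step for i in range(length)]
    answer = start + length * step
    # Distractors: plausible but wrong continuations of the sequence.
    choices = sorted({answer, answer - 1, answer + 1, answer + step + 1})
    return {
        "attribute": attribute,
        "panels": panels,
        "choices": choices,
        "answer": answer,
    }

q = make_progression_question("size", start=2, step=3)
print(q["panels"])  # [2, 5, 8]
print(q["answer"])  # 11
```

A real generator would render the abstract values as visual panels; the point here is only that each question is parameterized by an attribute plus a relation over its values.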


IQ Test Accuracy

Most of the models answered the questions with 75% accuracy. The researchers also found a strong correlation between task performance and the ability to identify the underlying abstractions. They improved performance further by training the models to explain their answers, that is, to indicate which relationships and properties matter in a given question.

However, some models transfer learned relationships poorly to new properties: for example, a model trained to identify logical sequences in the colors of objects may fail when the task requires finding the same dependence in their shapes.
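This transfer test can be sketched as a held-out-attribute split: every (relation, attribute) combination involving one attribute is reserved for evaluation, so the model never sees that attribute during training. The relation and attribute names below are illustrative assumptions, not DeepMind's actual dataset schema.

```python
# Sketch of a held-out-attribute generalization split: train on
# relations applied to some attributes (e.g. color, size) and test
# the same relations on an unseen attribute (e.g. shape).

from itertools import product

RELATIONS = ["progression", "conjunction", "exclusion"]  # illustrative
ATTRIBUTES = ["color", "shape", "size"]

def generalization_split(held_out_attribute):
    """Return (train, test) lists of (relation, attribute) pairs, with
    every pair involving the held-out attribute reserved for testing."""
    all_pairs = list(product(RELATIONS, ATTRIBUTES))
    train = [p for p in all_pairs if p[1] != held_out_attribute]
    test = [p for p in all_pairs if p[1] == held_out_attribute]
    return train, test

train, test = generalization_split("shape")
print(len(train), len(test))  # 6 3
```

A model that only memorizes surface statistics scores well on the train pairs but collapses on the held-out ones, which is exactly the failure mode the article describes.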

The team found that when a neural network correctly extrapolated its knowledge of relationships to a new combination of values, its accuracy rose to 87%; when it extrapolated incorrectly, accuracy dropped to 32%.

The researchers published the detailed methodology and results in a paper.


In February 2018, Facebook AI Research developers likewise trained artificial intelligence to explain its actions. In general, this technique makes it possible to trace the logic behind a model's solution and to identify what prevents it from solving a given problem.