Deepmind Teaches Its Artificial Intelligence To Think Like A Human Being - Alternative View


Last year, the AlphaGo artificial intelligence beat the world champion at Go for the first time. The victory was unprecedented and unexpected, given the enormous difficulty of the ancient Chinese board game. And while AlphaGo's win was certainly impressive, this AI, which has since beaten other Go champions, is still considered a "narrow" type of AI - one that can outperform humans only in a limited range of tasks.

So although we are unlikely to beat a computer at Go or chess without the help of another computer, we still cannot rely on one for everyday chores. An AI will not make you tea or book your car in for its MOT.

By contrast, AI in science fiction is often portrayed as "general" artificial intelligence - that is, artificial intelligence with the same breadth and flexibility as a human mind. While we already have many kinds of artificial intelligence that can do everything from diagnosing diseases to driving our cars, we have not yet figured out how to integrate these abilities at a more general level.

Last week, DeepMind researchers presented several papers that, they argue, lay foundations for general artificial intelligence. Although no final conclusions can be drawn yet, the first results are encouraging: in some areas, the AI has already surpassed human ability.

Both DeepMind papers focus on relational reasoning, a critical cognitive ability that allows people to make comparisons between different objects or ideas - for example, to judge which object is larger or smaller, or which is on the left and which is on the right. People use relational reasoning every time they try to solve a problem, but scientists had yet to figure out how to give AI this deceptively simple ability.

The DeepMind scientists took two different routes. One team trained a neural network - a type of AI architecture loosely modeled on the human brain - using CLEVR, a database of scenes built from simple, static 3D objects. Another team taught a neural network to understand how two-dimensional objects change over time.

In the CLEVR test, the neural network was presented with a scene of simple shapes such as pyramids, cubes, and spheres. The scientists then asked the artificial intelligence questions in natural language, such as "Is the cube made of the same material as the cylinder?" Remarkably, the neural network correctly answered questions about the relational attributes of CLEVR scenes 95.5% of the time, surpassing even humans, who score 92.6% on the same task.
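The core idea behind this kind of relational question answering can be sketched in a few lines. The snippet below is a toy illustration in the spirit of DeepMind's approach, not their actual model: a small function g scores every ordered pair of object feature vectors, and the pair scores are summed into a single order-independent relational representation. The weights and feature sizes here are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(pair, W):
    """Score one (object_i, object_j) pair with a tiny ReLU layer."""
    return np.maximum(W @ pair, 0.0)

def relational_summary(objects, W):
    """Sum g over all ordered pairs of distinct objects."""
    total = np.zeros(W.shape[0])
    for i, oi in enumerate(objects):
        for j, oj in enumerate(objects):
            if i != j:
                total += g(np.concatenate([oi, oj]), W)
    return total

# Three toy "objects" (say, a cube, a sphere, a cylinder), 4 features each.
objects = rng.normal(size=(3, 4))
W = rng.normal(size=(8, 8))  # maps a concatenated 8-dim pair to 8 scores
summary = relational_summary(objects, W)
print(summary.shape)  # (8,)
```

Because the sum runs over every pair, the summary does not depend on the order in which objects are listed - a useful property when the question is about relations ("same material?", "to the left of?") rather than about any single object.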

In the second test, DeepMind researchers created a Visual Interaction Network (VIN), a neural network trained to predict the future states of objects in a video from their previous movements. To do this, the scientists first fed the VIN three consecutive video frames, which the network translated into a code: a list of vectors - such as the position and velocity - for each object in the frame. The VIN then combined sequences of these codes to predict the code for the next frame.
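To make the "code" idea concrete, here is an illustrative stand-in for the predict-the-next-code step. The names and the constant-velocity dynamics below are our own simplifying assumptions, not DeepMind's learned model: each code is a per-object list of [position, velocity] vectors, and the next code is rolled forward in time.

```python
import numpy as np

def next_code(code, dt=1.0):
    """Advance every object's 2D position by its velocity."""
    new = code.copy()
    new[:, :2] += dt * code[:, 2:]  # position += dt * velocity
    return new

# Two objects, each encoded as [x, y, vx, vy].
code = np.array([[0.0, 0.0, 1.0, 0.0],
                 [1.0, 2.0, 0.0, -1.0]])
pred = next_code(code)
print(pred)  # object 0 moves to (1, 0); object 1 to (1, 1)
```

The trained VIN learns far richer dynamics than this - including interactions between objects - but the input and output have this same shape: a list of state vectors in, a predicted list of state vectors out.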


To train the VIN, the scientists used five different types of physics systems in which 2D objects moved against "natural image" backgrounds and collided with different forces. In one system, for example, the simulated objects interacted with each other according to Newton's law of gravitation. In another, the neural network was shown billiards and asked to predict the future positions of the balls. According to the scientists, the VIN successfully predicted the behavior of the objects in the video.
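The Newtonian-gravity system mentioned above is the kind of simulation the network has to implicitly learn. A minimal 2D N-body step under Newton's law of gravitation looks like this (the toy units, masses, and the simple Euler integration scheme are our own simplifications, not the paper's setup):

```python
import numpy as np

G = 1.0  # gravitational constant in toy units

def gravity_step(pos, vel, mass, dt=0.01):
    """Advance all bodies one time step under pairwise gravity."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    vel = vel + dt * acc
    pos = pos + dt * vel
    return pos, vel

# A heavy body at the origin and a light body orbiting past it.
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
vel = np.array([[0.0, 0.0], [0.0, 1.0]])
mass = np.array([1.0, 0.001])
pos, vel = gravity_step(pos, vel, mass)
```

After one step the light body has been pulled slightly toward the heavy one while the heavy body barely moves - exactly the kind of regularity the VIN must extract from raw pixels rather than from equations.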

This work represents an important step towards general AI, but there is still a lot of work to be done before artificial intelligence can take over the world. And besides, superhuman performance does not imply superhuman intelligence.

Not yet, anyway.

ILYA KHEL