For The First Time, Scientists Have Extracted Images From Thoughts

Neuroscientists have reconstructed the moving images that appeared in a person's mind while watching randomly selected videos. In the future, such technology could help doctors watch patients' hallucinations on a screen, and let wealthy eccentrics upload their dreams to YouTube.

A fantastic result was obtained at the University of California, Berkeley: subjects were shown various videos from YouTube (film fragments, trailers, and so on) while a magnetic resonance scanner recorded in detail the activity of cells in different parts of the visual cortex. Using this information, the scientists were able to reconstruct the viewed frames, in color and in motion (see the impressive video at the end of the article).

The quality of the recovered clips may strike some as poor, but it is incomparably better and more accurate than what was extracted “from a person's head” in the first experiment of this kind.

The scientists who created the new software for recognizing mental images took turns acting as the experimental subjects themselves, since the work required spending many hours at a time inside the scanner.

To begin with, the researchers recorded brain activity while the subjects watched a number of Hollywood film trailers. They built a three-dimensional computer model of the brain made up of small volumes of cells (voxels) and recorded how each voxel responds to changes in the shape and movement of objects on the screen. In this way they obtained a rough correspondence between the visual input and how it is processed in the visual cortex.
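A minimal sketch of what fitting such a voxel-wise model might look like, assuming each trailer has already been reduced to a matrix of shape and motion features per time point; the ridge-regression choice, the omitted feature extraction, and all names below are illustrative assumptions, not the study's actual code:

```python
# Hypothetical sketch: fit one linear encoding model per voxel.
# X_train: (n_timepoints, n_features) visual features of the training trailers
# Y_train: (n_timepoints, n_voxels)   recorded responses of visual-cortex voxels
from sklearn.linear_model import Ridge

def fit_encoding_models(X_train, Y_train, alpha=10.0):
    """Learn how each voxel responds to the features on the screen."""
    model = Ridge(alpha=alpha)      # regularized linear regression, one weight vector per voxel
    model.fit(X_train, Y_train)     # multi-output fit: all voxels at once
    return model

def predict_activity(model, X_new):
    """Predicted pattern of voxel activity for unseen video features."""
    return model.predict(X_new)     # shape: (n_timepoints, n_voxels)
```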

To test and fine-tune the algorithm, the scientists fed it thousands of hours of footage from millions of randomly chosen videos from the same YouTube service, running the model in the opposite direction: simulating the brain activity that would be observed if a person were watching each of those clips.
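Continuing the sketch above (again, the names and shapes are assumptions), the fitted models can be run forward over a large pool of clips to build a library of simulated activity patterns:

```python
# Hypothetical continuation: simulate the brain activity each candidate clip would evoke.
# candidate_features[i]: (n_timepoints, n_features) features of the i-th YouTube clip
predicted_library = [predict_activity(model, feats) for feats in candidate_features]
```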

Frames from the original videos, and on the right the same frames as reconstructed from the observer's brain activity

Finally, the algorithm was reversed once more. While people watched a test video in the scanner, the computer selected the 100 clips from the Web whose predicted responses were most likely to produce exactly that pattern of activity. Then, second by second, the program blended frames from these clips, yielding a blurry reconstructed movie that matched well what the person was actually seeing.
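A rough sketch of that matching step, assuming the candidate clips have all been resampled to the same length and using simple correlation in place of the study's probabilistic scoring:

```python
# Hypothetical sketch: decode by matching observed activity against the library
# and averaging frames from the best-matching clips.
# observed:             (n_timepoints, n_voxels) activity recorded during the test video
# predicted_library[i]: simulated activity for candidate clip i (same shape)
# candidate_frames[i]:  (n_timepoints, height, width, 3) frames of candidate clip i
import numpy as np

def reconstruct(observed, predicted_library, candidate_frames, top_k=100):
    scores = np.array([
        np.corrcoef(observed.ravel(), pred.ravel())[0, 1]   # similarity of activity patterns
        for pred in predicted_library
    ])
    best = np.argsort(scores)[::-1][:top_k]                  # the 100 closest matches
    return np.mean([candidate_frames[i] for i in best], axis=0)  # blurry averaged movie
```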

(Details of the experiment can be found in an article in Current Biology and a university press release.)

“We are opening a window onto the movies in our minds,” says co-author Jack Gallant. Gallant, incidentally, is already known for an earlier experiment in which brain scans were used to identify photographs a person had seen.

The scientists explain why reading the mind with magnetic resonance imaging is hard to pull off. One of the problems is that the scanner registers changes in blood flow through the cortex, and these change much more slowly than the nerve signals themselves.

That is why, until now, this trick could only be performed with static pictures. “We addressed this problem by developing a two-stage model that separately describes the nerve cells and the blood-flow signals,” says Shinji Nishimoto, lead author of the study.
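A rough illustration of the two-stage idea: a fast, feature-driven neural response is smoothed by a slow blood-flow (hemodynamic) response, which is what the scanner actually measures. The response function below is a generic illustrative choice, not the one fitted in the study.

```python
# Hypothetical illustration of the two stages: a fast neural response (stage 1)
# convolved with a slow hemodynamic response function, or HRF (stage 2).
import numpy as np

def canonical_hrf(t):
    """A simple gamma-shaped HRF peaking around 5 seconds (illustrative only)."""
    return (t ** 5) * np.exp(-t) / 120.0

def predicted_bold(neural_response, dt=1.0):
    """Smear the fast neural signal into the sluggish blood-flow signal the scanner sees."""
    t = np.arange(0.0, 30.0, dt)                 # 30 seconds of HRF
    hrf = canonical_hrf(t)
    return np.convolve(neural_response, hrf)[: len(neural_response)]
```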

It may take a decade before the new technology finds practical application. But even in this raw form it can help neurophysiologists better understand what is happening inside the human head.

Leonid Popov