Japanese Scientists Taught The Computer To Read Minds. Video Examples - Alternative View

Anonymous

With the spread of recommendation systems and targeted advertising, suggested videos and friend recommendations on social networks can give users the illusion that the computer is reading their thoughts. Yet, for better or worse, we really are moving toward computers and systems that read minds. A new study by researchers from Japan is, without exaggeration, a breakthrough step in this direction.

A team from Kyoto University set out to use an artificial neural network to read and interpret thoughts in a living neural network: the human brain. Although it sounds fairly crazy, the experiment is not fundamentally new, and the Japanese group is not the first to work in this direction. What sets the Kyoto team apart from its predecessors is that earlier techniques reconstructed images only from raw pixels and basic geometric shapes. The new technique, dubbed "deep image reconstruction," moves beyond simple pixel maps and lets researchers decode images with multiple layers of color and structure.

“Our brains process visual information by hierarchically extracting different levels of features, or components of different complexity,” said Yukiyasu Kamitani, one of the scientists involved in the study, in an interview. “These neural networks, or AI models, can be used as an approximation of the hierarchical structure of the human brain.”

The study lasted ten months. Three volunteers viewed, for varying lengths of time, images from three categories: natural objects (such as animals or people), artificial geometric shapes, and letters of the alphabet.


Brain activity was recorded while the volunteers viewed the images. Each image was then removed, and the subject was asked to think about the picture they had just seen. Brain activity was recorded again and compared with the first measurement, and the results were fed into an artificial neural network, which later used them to interpret brain activity as specific thoughts.

In humans (and indeed in all mammals), the visual cortex is located at the back of the brain, in the occipital lobe above the cerebellum. Activity in the visual cortex was measured with functional magnetic resonance imaging (fMRI), and the resulting measurements were translated into the hierarchical features of a deep neural network.
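The translation from measured brain activity to network features can be sketched as a regression problem: learn a map from voxel activity to the network's feature values. The actual study uses a more elaborate decoder; the synthetic data, array sizes, and plain least-squares fit below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_voxels, n_features = 200, 50, 10

# Synthetic stand-in for fMRI data: feature values are a hidden linear
# function of voxel activity, plus measurement noise.
true_W = rng.normal(size=(n_voxels, n_features))
voxels = rng.normal(size=(n_samples, n_voxels))
features = voxels @ true_W + 0.1 * rng.normal(size=(n_samples, n_features))

# Fit a linear decoder: brain activity -> network feature values.
W, *_ = np.linalg.lstsq(voxels, features, rcond=None)

# Decode the feature vector for a new brain-activity pattern.
new_voxels = rng.normal(size=(1, n_voxels))
decoded = new_voxels @ W
print(decoded.shape)  # (1, 10)
```

On this toy data the recovered weights closely match the hidden ones; with real fMRI recordings the fit is far noisier, which is one reason the reconstructions described below look blurry.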

Starting from a random image, the network optimizes that image's pixel values over many iterations, until the neural-network features of the input image become similar to the features decoded from brain activity.
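That iterative loop can be illustrated with a toy gradient-descent sketch. A fixed linear map stands in for the deep network's feature extractor, and a random vector stands in for the target features decoded from brain activity; both are simplifying assumptions, not the study's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

n_pixels, n_features = 64, 16
F = rng.normal(size=(n_features, n_pixels))  # stand-in feature extractor
target = rng.normal(size=n_features)         # "decoded" target features

image = rng.normal(size=n_pixels)            # start from a random image
lr = 0.01
for step in range(500):
    error = F @ image - target               # feature mismatch
    grad = F.T @ error                       # gradient of 0.5 * ||error||^2
    image -= lr * grad                       # nudge the pixel values

# After optimization the image's features nearly match the target.
print(float(np.linalg.norm(F @ image - target)) < 1e-3)
```

With a real deep network the gradient comes from backpropagation rather than a closed-form matrix product, but the principle is the same: adjust pixels until the image "looks", to the network, like what the brain saw.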


Importantly, the experimental model covered not only natural images (people or scenes from nature) but also the generation and recognition of artificial structures and geometric shapes:

As the video shows, the system has a much harder time decoding an image when the person is not looking at it but merely recalling it. This seems entirely natural: the brain does not retain every detail of an image it has just seen, say, a page from a book. Our memories are usually blurry and fuzzy.

At this stage of the study, images reconstructed from brain activity bear only a loose resemblance to the originals viewed by the participants; mostly they look like coarse clumps of pixels. But this is only the beginning, and recognition accuracy will keep improving. Even now, the system can reliably indicate which object the subject is thinking about.

All this opens up remarkable prospects for developers. Imagine "instant modeling": you simply picture an object in your head, an art concept or a mechanical part, and the computer immediately, without a single button press, creates the corresponding three-dimensional model.


Or could AI go much further, recording your brain activity while you sleep and then recreating your dreams in a 3D world?

The potential applications of this work are countless, and the Japanese team is hard at work on them. Still, it is in everyone's interest that mind-reading technology advance gradually and carefully, since it carries many dangers.
