Artificial Intelligence Converts Brain Signals Into Human Speech - Listen For Yourself

Video: New AI Creates Human Speech From Brain Signals

In the future, people who have lost the ability to speak may be able to pronounce words freely and clearly thanks to devices that convert their brain activity into a synthesized voice. Researchers at the University of California, San Francisco recently took a big step toward improving this technology by training artificial intelligence to reproduce a voice not by reading thoughts directly, but by decoding the movements of a person's lips and mouth. The result is impressive: you can listen to the synthesized voice in the video above.

The device is expected to work when a person mentally or physically performs the mouth movements of speech, even without producing any sound. To identify which areas of the brain are activated by particular mouth movements, the researchers recruited five volunteers for the trial. The volunteers read short excerpts from children's stories aloud while electrodes implanted in their brains recorded their neural activity.

Ultimately, the researchers trained two neural networks: the first mapped brain signals to lip and mouth movements, and the second turned those movements into synthesized speech. The approach did indeed allow fragments of sentences to be reproduced: about 69% of the synthesized words were easily recognizable in the recordings. As in similar studies, the shorter the sentence, the more accurate the result.
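For readers curious about how such a two-stage decoder fits together, here is a minimal, purely illustrative sketch in Python using PyTorch. All layer sizes, dimensions, and names here are assumptions chosen for demonstration; this is not the study's actual model, only the general brain-signals-to-movements-to-speech chain it describes.

```python
# Illustrative two-stage decoder: brain activity -> articulatory movements -> speech features.
# Every dimension below (electrode count, articulator count, mel bins) is a hypothetical choice.
import torch
import torch.nn as nn

class BrainToArticulation(nn.Module):
    """Stage 1: map electrode features to lip/mouth (articulatory) movement trajectories."""
    def __init__(self, n_electrodes=256, n_articulators=33, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulators)

    def forward(self, ecog):          # ecog: (batch, time, n_electrodes)
        h, _ = self.rnn(ecog)
        return self.out(h)            # (batch, time, n_articulators)

class ArticulationToSpeech(nn.Module):
    """Stage 2: map movement trajectories to acoustic features (e.g., a mel spectrogram)."""
    def __init__(self, n_articulators=33, n_mels=80, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_articulators, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_mels)

    def forward(self, kinematics):    # kinematics: (batch, time, n_articulators)
        h, _ = self.rnn(kinematics)
        return self.out(h)            # (batch, time, n_mels)

# Chained at inference time; a vocoder would then turn the acoustic features into audio.
stage1 = BrainToArticulation()
stage2 = ArticulationToSpeech()
ecog = torch.randn(1, 200, 256)       # 200 time steps of simulated electrode features
mel = stage2(stage1(ecog))
print(mel.shape)                      # torch.Size([1, 200, 80])
```

The point of the intermediate step is that mapping brain signals to physical mouth movements, and movements to sound, is presumably easier to learn than mapping brain signals straight to audio, which is exactly the two-network split the article describes.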

Researchers can improve the technology by using brain implants with denser electrode arrays and more sophisticated machine-learning algorithms. Common features were found across the brain responses of different participants, which suggests that future speech-synthesis devices could be readily customized to each person. The researchers also noticed that the artificial intelligence sometimes recognized sounds that were not used in training, which is another encouraging sign.

Ramis Ganiev