The Neural Network Has Learned To Animate Barack Obama - Alternative View

Neural networks are developing so quickly these days that it is hard to keep up with their progress. One day they are painting pictures, the next composing music or voicing text in any person's voice; the list of their achievements goes on. Researchers from the University of Washington have demonstrated yet another remarkable technology: they taught a neural network to animate the facial expressions of former US President Barack Obama, and the result looks very plausible.

To work, the system needs only an audio recording of a speech by Obama and some time. The researchers first fed the neural network many hours of video footage of the former president so that it could study his facial expressions and identify their key features. And voila: the output is a video of President Obama delivering, on camera, phrases he actually uttered on some talk show or elsewhere. This is, of course, best seen with one's own eyes; explaining it in words is no easy task.
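The pipeline described above can be caricatured in a few lines of code: slice the audio into frames, compute a feature per frame, and map each feature to a mouth-shape parameter. This is a deliberately toy sketch, not the researchers' actual method; the function names and the fixed sigmoid mapping are illustrative assumptions (the real system learns its audio-to-mouth mapping from hours of video).

```python
# Toy sketch of audio-driven mouth animation: per-frame audio energy
# is mapped to a mouth-opening value in [0, 1]. All names and the
# fixed sigmoid mapping are illustrative assumptions, not the
# University of Washington system's actual architecture.
import numpy as np

def frame_features(audio, frame_len=160):
    """Split audio into frames and return per-frame RMS energy."""
    n = len(audio) // frame_len
    frames = audio[: n * frame_len].reshape(n, frame_len)
    return np.sqrt((frames ** 2).mean(axis=1))

def mouth_openings(features, w=10.0, b=-4.0):
    """Map audio features to mouth-opening values in [0, 1].
    In the real system a neural network learns this mapping from
    training video; here a fixed sigmoid stands in for it."""
    return 1.0 / (1.0 + np.exp(-(w * features + b)))

# Synthetic "audio": silence followed by a loud tone burst.
audio = np.concatenate([np.zeros(1600),
                        0.8 * np.sin(np.linspace(0, 100, 1600))])
opens = mouth_openings(frame_features(audio))
# Silent frames keep the mouth nearly closed; loud frames open it wide.
```

Even this caricature shows why hours of footage matter: the interesting part is not the feature extraction but learning a mapping that reproduces one specific person's mouth shapes, which the toy sigmoid above merely fakes.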

Ira Kemelmacher-Shlizerman, an assistant professor at the University of Washington, explains that the technology could be useful in several areas. In virtual reality, for example, a person's speech could be transferred almost perfectly to their computer avatar. Such an approach could also make life much easier for video game developers, who currently rely on very complex and expensive facial motion-capture technology: instead, one would simply record a person's voice and feed the recording to the neural network.

The system is still far from perfect. It first needs to study a person's facial expressions carefully, which requires several hours of video recordings, though the creators believe the process will be greatly simplified in the future. The technology will be demonstrated in more detail at SIGGRAPH 2017, which opens next month.

Sergey Gray