Virtual reality is not limited to entertainment. It is also being adopted in more practical areas, for example to assemble car-engine parts, or to let people "try on" new trends from home. Yet the technology still struggles with problems of human perception. VR certainly has some impressive applications: at the University of Bath, it is used for exercise. Imagine heading to the gym to ride the Tour de France alongside the best cyclists in the world.
In a technical sense, virtual reality does not sit comfortably with human perception, that is, with the way we take in information about the world and build an understanding of it. Our perception of reality shapes our decisions and relies heavily on our senses. Designing an interactive system therefore means accounting not only for hardware and software, but also for the people using it.
Designing virtual reality systems that transport people to new worlds with a convincing sense of presence is very difficult. The more complex a VR experience becomes, the harder it is to quantify how much each element contributes to what someone perceives in a headset.
When watching a 360-degree movie in virtual reality, for example, how would we determine which contributes more to engagement: the computer graphics (CGI) or the surround sound? VR has to be studied with a knife-and-axe approach: cut away the unnecessary and trim the excess before adding new elements, evaluating how each addition affects human perception.
There is a theory at the intersection of computer science and psychology. Maximum likelihood estimation explains how we combine the information we receive from all of our senses, integrating it into an understanding of the environment. In its simplest form, the theory states that we combine sensory information optimally: every sense contributes to our estimate of the environment, but each signal is noisy.
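In this model of cue integration, each sense provides a noisy estimate, and the combined estimate is an average weighted by each cue's reliability (the inverse of its variance). A sketch of the standard formulation, where $\hat{s}_i$ is the estimate from sense $i$ and $\sigma_i^2$ its noise variance:

```latex
\hat{s} = \sum_i w_i \,\hat{s}_i,
\qquad
w_i = \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2},
\qquad
\sigma^2 = \left(\sum_i \frac{1}{\sigma_i^2}\right)^{-1}
```

The combined variance $\sigma^2$ is smaller than that of any individual cue, which is why integrating the senses beats relying on any one of them.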
Imagine a person with good hearing walking down a quiet side street at night. They see a dark shadow in the distance and hear the distinct sound of footsteps approaching. Because it is dark, they cannot be sure of what they are seeing: the visual signal is noisy. So they rely on hearing, because in a quiet environment sound is the more reliable signal.
In this scenario, the estimates from the eyes and the ears are combined to give an optimal conclusion that lies somewhere in between.
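The night-street scenario can be made concrete with a short numerical sketch. The distances and variances below are hypothetical, chosen only to match the story: vision is noisy because it is dark, hearing is reliable because the street is quiet.

```python
def combine_cues(estimates, variances):
    """Optimally combine independent noisy sensory estimates.

    Each cue is weighted by its reliability (inverse variance);
    the combined variance is lower than that of any single cue.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined_mean = sum(w * e for w, e in zip(weights, estimates)) / total
    combined_variance = 1.0 / total
    return combined_mean, combined_variance

# Hypothetical distance estimates to the approaching figure (in metres):
vision_mean, vision_var = 12.0, 9.0   # high variance: it is dark
hearing_mean, hearing_var = 8.0, 1.0  # low variance: the street is quiet

mean, var = combine_cues([vision_mean, hearing_mean],
                         [vision_var, hearing_var])
# The combined estimate (8.4 m) lands close to the auditory one,
# because hearing carries 90% of the weight here, and its variance
# (0.9) is lower than either cue's on its own.
```

The weighted average is exactly the "optimal conclusion somewhere in between" from the text: it leans toward whichever sense is less noisy, without discarding the other.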
Of course, this cannot go unnoticed by VR developers. Scientists at the University of Bath have applied this approach to the problem of how people estimate distances when using virtual reality headsets. A driving simulator used to teach people to drive, for instance, can suffer from compressed distances in VR, which is dangerous in a setting where risks must be judged accurately.
Understanding how people integrate information from their senses is critical to the long-term success of VR, because VR is not just about visuals. Maximum likelihood estimation helps model how faithfully a virtual reality system needs to render a multisensory environment. Better knowledge of human perception will lead to ever more immersive VR experiences.
Simply put, the question is not how to separate signal from noise; it is how to take in all the noisy signals and produce the highest-quality virtual environment.