Neurointerfaces - technologies that connect the brain to a computer - are gradually becoming routine: we have already seen how, with mental commands alone, a person can control a prosthesis or type text on a screen. Does this mean that the promises of science fiction writers - full-fledged reading of thoughts by computer, or even the transfer of human consciousness into a machine - will soon become reality? The same theme, "Augmented Personality", is the subject of the 2019 science fiction story competition "Future Time", organized by the Sistema charitable foundation. Together with the organizers of the competition, the N + 1 editors have looked into what modern neurointerfaces are capable of and whether we can really create a full-fledged brain-computer connection. We were helped by Alexander Kaplan, founder of the first Russian brain-computer interface laboratory at Lomonosov Moscow State University.
Hack the body
Neil Harbisson has congenital achromatopsia, which deprived him of color vision. Deciding to outwit nature, the Briton had a special camera implanted that converts color into sound and sends it to his inner ear. Neil considers himself the first cyborg officially recognized by a government.
In 2012, in the United States, Andrew Schwartz of the University of Pittsburgh demonstrated a paralyzed 53-year-old patient who sent signals to a robotic arm through electrodes implanted in her brain. She learned to control the arm well enough to feed herself a bar of chocolate.
In 2016, in the same laboratory, a 28-year-old patient with a severe spinal injury extended a brain-controlled artificial hand to Barack Obama, who was visiting him. Sensors on the hand allowed the patient to feel the handshake of the 44th President of the United States.
Modern biotechnology lets people "hack" the limitations of their own bodies, creating a symbiosis between the human brain and the computer. Everything seems to be heading toward bioengineering becoming part of everyday life.
What will happen next? Since the end of the last century, the philosopher and futurist Max More, a proponent of transhumanism, has been developing the idea of humanity's transition to a new stage of evolution with the help of, among other things, computer technology. The literature and cinema of the past two centuries have indulged in similar flights of futuristic imagination.
In the world of William Gibson's science fiction novel Neuromancer, published in 1984, implants let their wearers connect to the Internet, expand their intellectual capabilities and relive memories. Masamune Shirow, the author of the cult Japanese sci-fi manga "Ghost in the Shell", recently adapted as a US film, describes a future in which any organ can be replaced with bionics, up to the complete transfer of consciousness into a robot's body.
How far can neurointerfaces go in a world where, on the one hand, ignorance multiplies fantasies and, on the other, fantasies often turn out to be prophetic?
Potential difference
The central nervous system (CNS) is a complex communication network. The brain alone contains more than 80 billion neurons, with trillions of connections between them. Every millisecond, the distribution of positive and negative ions inside and outside each nerve cell changes, determining how and when it will react to the next signal. At rest, a neuron has a negative potential relative to its surroundings (on average -70 millivolts), the "resting potential": the cell is polarized. For an electrical signal received from another neuron to be passed on, positive ions must enter the nerve cell; this is depolarization. When depolarization reaches a threshold value (roughly -55 millivolts, though the figure varies), the cell fires and admits a rush of positively charged ions, which produces a positive potential - the "action potential".
Action potential.
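To make the threshold behavior concrete, here is a minimal sketch in Python of the textbook leaky integrate-and-fire model - a deliberate simplification, not real biophysics. The resting potential and threshold follow the figures in the text; the time constant and input strengths are illustrative assumptions.

```python
# A toy leaky integrate-and-fire neuron: a simplified sketch, not biophysics.
# Resting potential (-70 mV) and threshold (-55 mV) follow the text;
# the time constant and input strengths are illustrative assumptions.
V_REST, V_THRESHOLD = -70.0, -55.0  # millivolts
TAU_MS, DT_MS = 10.0, 0.1           # membrane time constant and time step

def simulate(input_mv, steps=2000):
    """Integrate a constant depolarizing input; fire when threshold is crossed."""
    v, spikes = V_REST, 0
    for _ in range(steps):
        # The leak pulls the membrane back toward rest; the input depolarizes it.
        v += (-(v - V_REST) + input_mv) / TAU_MS * DT_MS
        if v >= V_THRESHOLD:   # threshold reached: an action potential fires
            spikes += 1        # (the brief overshoot to positive values is implied)
            v = V_REST         # simplified repolarization back to rest
    return spikes

# A subthreshold input never fires; a stronger one fires repeatedly.
print("spikes with 10 mV input:", simulate(10.0))   # settles at -60 mV: silent
print("spikes with 20 mV input:", simulate(20.0))   # crosses -55 mV: fires
```

Real neurons integrate thousands of excitatory and inhibitory inputs rather than a constant current, but the core logic - depolarize to threshold, fire, reset - is the same.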
The action potential then travels along the axon (the cell's output channel) toward a dendrite - the receiving channel of the next cell. The axon and dendrite, however, are not directly connected, and the electrical impulse cannot simply jump from one to the other. Their point of contact is called a synapse. Synapses produce, transmit and receive neurotransmitters - chemical compounds that "forward" the signal from the axon of one cell to the dendrite of another.
When the impulse reaches the end of the axon, neurotransmitters are released into the synaptic cleft; they cross the gap between the cells and attach to the end of the dendrite. There they force the dendrite to admit positively charged ions, shift from the resting potential to the action potential, and pass the signal on to the cell body.
The type of neurotransmitter also determines what signal is sent on. Glutamate, for example, drives neurons to fire, gamma-aminobutyric acid (GABA) is an important inhibitory mediator, and acetylcholine can do either, depending on the situation.
This is how a neuron looks schematically:
Neuron diagram.
And this is how it looks in reality:
Neuron under the microscope.
Moreover, the response of the receiving cell depends on the number and rhythm of incoming impulses, on information arriving from other cells, and on which brain area sent the signal. Various auxiliary cells, the endocrine and immune systems, the external environment and previous experience all shape the current state of the central nervous system and thereby influence human behavior.
And although, as we can now see, the central nervous system is far more than a set of "wires", neurointerfaces work precisely with the electrical activity of the nervous system.
Positive leap
The main task of a neurointerface is to decode the electrical signal coming from the brain. The program holds a set of "templates", or "events", defined by various signal characteristics: oscillation frequencies, spikes (peaks of activity), locations on the cortex, and so on. The program analyzes the incoming data and tries to detect these events in it.
The commands sent further depend on the result obtained, as well as on the functionality of the system as a whole.
An example of such a pattern is the P300 (Positive 300) evoked potential, often used in so-called spellers - systems for typing text with brain signals.
When a person sees the symbol they need on the screen, a positive jump in electrical potential appears in the recording of brain activity about 300 milliseconds later. On detecting the P300, the system sends a command to type the corresponding character.
The algorithm cannot detect the potential from a single presentation, because the signal is drowned in the noise of random electrical activity. The symbol therefore has to be presented several times and the recorded data averaged.
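A short sketch shows why averaging works: noise that is uncorrelated with the stimulus cancels out across presentations, while the time-locked P300 bump survives. The data here are synthetic, and the sampling rate, amplitudes and trial count are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 250                                 # sampling rate in Hz (assumed)
N_TRIALS, N_SAMPLES = 30, FS             # thirty 1-second presentations

# Synthetic P300: a positive bump ~300 ms after the stimulus, buried in
# noise several times larger than the signal itself.
t = np.arange(N_SAMPLES) / FS
p300 = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05**2))       # ~2 uV bump
trials = p300 + rng.normal(0.0, 5.0, (N_TRIALS, N_SAMPLES))  # noisy epochs

# Averaging N epochs shrinks the noise by ~sqrt(N) while the bump stays put.
for name, epoch in (("single trial", trials[0]),
                    ("30-trial average", trials.mean(axis=0))):
    print(f"{name:>16}: peak at {t[np.argmax(epoch)] * 1000:.0f} ms")
```

On a single trial the peak lands almost anywhere; after thirty averaged trials it sits close to 300 ms, which is exactly what the speller looks for.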
Besides a one-off change in potential, a neurointerface can look for changes in the brain's rhythmic (that is, oscillatory) activity caused by a particular event. When a sufficiently large group of neurons falls into a synchronized rhythm of activity, this shows up on the signal's spectrogram as event-related synchronization (ERS). If, on the contrary, the oscillations desynchronize, the spectrogram shows event-related desynchronization (ERD).
At the moment when a person makes or simply imagines a hand movement, ERD is observed in the motor cortex of the opposite hemisphere at an oscillation frequency of about 10–20 hertz.
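ERD is typically quantified as the percentage change in band power relative to a pre-event baseline. Below is a minimal sketch of that calculation on synthetic data; the 12 Hz rhythm, noise level and one-second windows are assumptions for illustration, not parameters from the article.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Crude power estimate in the [low, high] Hz band via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return psd[(freqs >= low) & (freqs <= high)].sum()

def erd_percent(baseline, event, fs, low=10.0, high=20.0):
    """Percentage change in 10-20 Hz power relative to baseline.
    Negative = desynchronization (ERD); positive = synchronization (ERS)."""
    p_base = band_power(baseline, fs, low, high)
    return (band_power(event, fs, low, high) - p_base) / p_base * 100.0

# Toy example: a 12 Hz rhythm whose amplitude halves during imagined movement.
fs = 250
t = np.arange(fs) / fs                     # one second of data
rng = np.random.default_rng(1)
baseline = np.sin(2 * np.pi * 12 * t) + rng.normal(0, 0.2, t.size)
event = 0.5 * np.sin(2 * np.pi * 12 * t) + rng.normal(0, 0.2, t.size)
print(f"ERD: {erd_percent(baseline, event, fs):.0f}%")  # roughly -75%
```

Since power scales with the square of amplitude, halving the rhythm's amplitude cuts band power to about a quarter, which the interface reads as a strong ERD.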
These and other templates can be set in the program by hand, but they are often built up in the course of working with each specific person. Every brain, like the features of its activity, is individual, and the system has to be adapted to it.
Recording electrodes
Most neurointerfaces record activity with electroencephalography (EEG), a non-invasive neuroimaging method favored for its relative simplicity and safety. Electrodes attached to the surface of the head register changes in the electric field caused by changes in the potential of dendrites after the action potential has "crossed" the synapse.
At the moment positive ions enter the dendrite, a negative potential forms in the surrounding medium. At the other end of the neuron, ions of the same charge leave the cell, creating a positive potential outside, and the space around the neuron becomes a dipole. The electric field propagating from this dipole is what the electrode records.
Unfortunately, the method has several limitations. The skull, skin and other layers separating the nerve cells from the electrodes conduct electricity, but not well enough to leave the signal undistorted.
Electrodes can record only the summed activity of many neighboring neurons. The main contribution to the measurement comes from neurons in the upper layers of the cortex whose processes run perpendicular to its surface, since it is they that create the dipole whose electric field the sensor captures best.
All this leads to the loss of information from deep structures and a decrease in accuracy, so the system is forced to work with incomplete data.
Invasive electrodes, implanted on the surface or directly inside the brain, allow for much greater accuracy.
If the function of interest is associated with the surface layers of the brain (motor or sensory activity, for example), implantation amounts to a craniotomy and the attachment of electrodes to the surface of the cortex. The sensors read the summed electrical activity of many cells, but this signal is far less distorted than in EEG.
If deeper activity matters, the electrodes are inserted into the cortex itself. With special microelectrodes it is even possible to record the activity of single neurons. Unfortunately, invasive techniques pose a potential danger to humans and are used in medical practice only in extreme cases.
There is hope, however, that the technique will become less traumatic. The American company Neuralink plans to implant thousands of thin, flexible electrodes safely, using a laser beam instead of drilling through the skull.
Several other laboratories are working on biodegradable sensors, which would spare patients the surgery to remove electrodes from the brain.
Banana or orange?
Recording the signal is only the first step. Next it has to be "read" to determine the intentions behind it. There are two possible ways to decode brain activity: let an algorithm pick out the relevant characteristics from the dataset on its own, or give the system a description of the parameters to look for.
In the first case the algorithm, unconstrained by preset search parameters, classifies the "raw" signal itself and finds the elements that predict intention with the highest probability. If, for example, a subject alternately imagines moving the right hand and the left, the program can find the signal parameters that best distinguish one option from the other.
The problem with this approach is that the parameters describing the brain's electrical activity are highly multidimensional, and the data are contaminated with all kinds of noise.
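As an illustration of this data-driven route, here is a hedged sketch: a linear discriminant classifier (LDA, a common choice in BCI work, though the article names no specific method) receives high-dimensional, noisy feature vectors in which only a few components actually differ between imagined left- and right-hand movement, and must find them on its own. All dimensions and data are synthetic assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
N_TRIALS, N_FEATURES = 200, 64     # e.g. band-power features from 64 channels

# Synthetic stand-in for "raw" features: only the first three components
# carry a weak class-dependent shift; the other 61 are pure noise.
y = rng.integers(0, 2, N_TRIALS)                   # 0 = left hand, 1 = right
X = rng.normal(0.0, 1.0, (N_TRIALS, N_FEATURES))
X[:, :3] += np.outer(y, [0.8, -0.6, 0.7])          # the informative features

# The classifier must discover those features itself from the training data.
clf = LinearDiscriminantAnalysis()
clf.fit(X[:150], y[:150])
print(f"held-out accuracy: {clf.score(X[150:], y[150:]):.0%}")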
With the second approach, one has to know in advance where and what to look for. In the P300 speller described above, for example, we know that when a person sees the right symbol, the electric potential changes in a particular way, and we teach the system to look for that change.
In such a situation, the ability to decipher a person's intentions is tied to our knowledge of how brain functions are encoded in neural activity. How does this or that intention or state show up in the signal? Unfortunately, in most cases we have no answer to this question.
Neurobiological research into cognitive functions continues, but for now we can decipher only a very small fraction of the signals. The brain and consciousness remain, for the time being, a "black box".
Alexander Kaplan - neurophysiologist, Doctor of Biological Sciences and founder of the Laboratory of Neurophysiology and Neurointerfaces at Lomonosov Moscow State University, who received Russia's first grant for developing a neurointerface for communication between brain and computer - says that researchers can automatically decipher some of a person's intentions, or images they have imagined, from the EEG.
At the moment, however, there are no more than a dozen such intentions and images. They are, as a rule, states associated with relaxation and mental effort, or with imagined movements of body parts. And even these are recognized with errors: establishing from the EEG that a person intends to clench the right hand into a fist succeeds, even in the best laboratories, in no more than 80-85 percent of attempts.
And if you try to tell from the EEG whether a person is imagining a banana or an orange, the number of correct answers will only slightly exceed the level of random guessing.
Most discouraging of all, for more than 15 years it has proved impossible either to make neurointerface systems more reliable at recognizing human intentions from the EEG or to expand the list of such intentions, despite the significant advances in algorithms and computing power achieved over the same period.
Apparently, the EEG reflects only a small part of a person's mental activity. Neurointerface systems should therefore be approached with moderate expectations, and the areas where they can really be applied should be clearly delineated.
Lost in translation
Why can't we create a system that does what the brain can easily do? In short, the way the brain works is too complex for our analytical and computational capabilities.
First, we do not know the "language" in which the nervous system communicates. Beyond trains of impulses, it is characterized by many variables: the properties of the pathways and of the cells themselves, the chemical reactions occurring at the moment information is transferred, the workings of neighboring neural networks and of other body systems.
Besides being complex in itself, the "grammar" of this "language" may differ between different pairs of nerve cells. The situation is aggravated by the fact that the rules of communication, like the functions of cells and the relationships between them, are all highly dynamic, constantly changing under the influence of new events and conditions. This vastly increases the amount of information that has to be taken into account.
Data that fully described brain activity would simply drown any algorithm that undertook to analyze them. Decoding intentions, memories and movements in full is therefore a practically intractable task.
The second hurdle is that we know very little about the very brain functions we are trying to detect. What is a memory, or a visual image? What are they made of? Neurophysiology and psychology have long tried to answer these questions, but so far progress has been modest.
The simplest functions, such as motor and sensory ones, have an advantage here: they are better understood. So the neurointerfaces available today interact mainly with them.
These can recognize tactile sensations, the imagined movement of a limb, responses to visual stimulation, and simple reactions to environmental events, such as the response to an error or to a mismatch between an expected stimulus and the real one. But higher nervous activity remains a great mystery to us today.
Two-way communication
Until now we have discussed only one-way reading of information, without any influence in the opposite direction. Yet technology for transmitting signals from computer to brain - CBI (computer-brain interface) - already exists, making the neurointerface's communication channel two-way.
Information (for example, sound, tactile sensations, even data on the brain's own functioning) enters the computer, is analyzed, and is delivered to the brain by stimulating cells of the central or peripheral nervous system. All of this can happen entirely bypassing the natural organs of perception, and it is successfully used to replace them.
According to Alexander Kaplan, there are no longer any theoretical obstacles to equipping a person with artificial sensory "organs" connected directly to brain structures. Indeed, such devices are actively entering everyday life, for example to replace impaired natural sense organs.
So-called cochlear implants - microchip devices that couple a microphone to the auditory receptors - are already available to people with hearing impairments. Trials of retinal implants for restoring vision have begun.
According to Kaplan, there are no technical restrictions for connecting any other sensors to the brain that respond to ultrasound, changes in radioactivity, speed or pressure.
The problem is that these technologies have to rest entirely on our knowledge of how the brain works - knowledge that, as we have already seen, is rather limited.
The only way around this problem, according to Kaplan, is to create a fundamentally new communication channel with its own language, and to teach not only the computer but also the brain to recognize the new signals.
Such work has already begun. Several years ago, for example, the Applied Physics Laboratory at Johns Hopkins University tested a bionic hand capable of transmitting tactile information to the brain.
When the artificial hand's sensors are touched, electrodes stimulate pathways of the peripheral nervous system, which then carry the signal to the sensory areas of the brain. The person learns to recognize the incoming signals as different kinds of touch. Thus, instead of trying to reproduce the tactile signaling natural to humans, a new channel and a new language of communication are created.
This path of development, however, is limited by the number of new channels we can create and by how informative they will be for the brain, says Alexander Kaplan.
Future tense
Kaplan believes that neurointerface technologies currently have no fundamentally new path of development. According to him, the very possibility of an interface between brain and computer was discovered back in the 1970s, the principles of brain function on which today's devices rest were described some thirty years ago, and since then hardly any new ideas have appeared.
The now widely used P300 potential, for instance, was discovered in the 1960s, motor imagery was described in the 1980s-1990s, and mismatch negativity in the 1970s.
Scientists once hoped to establish a deeper informational contact between the brain and computing technology, but today it is clear that those hopes have not come true.
However, Kaplan says, it has become clear that neurointerfaces can be put to medical use. According to the scientist, the development of neurointerfaces is now driven above all by the introduction of the technology into the clinic.
Still, thanks to brain research and advances in technology, today's neurointerfaces are capable of what once seemed impracticable. We do not know for sure what will happen in 30, 50 or 100 years. The historian of science Thomas Kuhn argued that science develops in cycles: periods of steady work are punctuated by paradigm shifts and the scientific revolutions that follow. It is quite possible that a future revolution will take the brain out of its black box - and it may come from the most unexpected direction.
Maria Ermolova