How to Understand the Brain in Order to Build "Thinking" Machines?


Take a three-year-old child to the zoo, and he will intuitively recognize that the long-necked animal chewing leaves is the same giraffe from his picture book. This simple feat is actually quite complex. The drawing in the book is a frozen silhouette of simple lines, while the living creature is a masterpiece of color, texture, movement and light. It looks different from different angles, and it can change shape, position and perspective.

In general, people do well at these kinds of tasks. We can easily grasp the most important features of an object from a few simple examples and apply that knowledge to something unfamiliar. Computers, on the other hand, usually need an entire database of giraffes, shown in different positions and from different perspectives, before they learn to recognize the animal accurately.

Visual recognition is one of the many areas in which humans easily beat computers. We are also better at picking out relevant information from a stream of data; we solve unstructured problems; we learn through play, like a child who learns about gravity by stacking blocks.

“People are much, much more versatile,” says Tai Sing Lee, a computer scientist and neuroscientist at Carnegie Mellon University in Pittsburgh. “We are still more flexible in our thinking, able to foresee, imagine and create future events.”

But the U.S. government is funding an ambitious new program that aims to bring artificial intelligence up to the level of our own mental abilities. Three teams of neuroscientists and computer scientists are trying to figure out how the brain performs these feats of visual recognition, and then to build machines that do the same.

"Modern machine learning fails where humans thrive," says Jacob Vogelstein, who leads the program at the Intelligence Advanced Research Projects Activity (IARPA). "We want to revolutionize machine learning by reverse-engineering the brain's algorithms and computations."

There is very little time. Each group is currently modeling a patch of cortex in unprecedented detail. Together, they are developing algorithms based on what they have learned. By next summer, each of these algorithms will be given an example of an unfamiliar object and asked to find it among thousands of images in an unfamiliar database. "The timeline is very tight," says Christof Koch, president and chief scientist of the Allen Institute for Brain Science in Seattle, who works with one of the teams.

Koch and his colleagues are creating a complete wiring diagram for a small cube of brain - a million cubic microns, about one five-hundredth of the volume of a poppy seed. That is an order of magnitude more than the largest and most complete wiring map of the brain to date, which was published in June of last year and took about six years to create.


By the end of the five-year IARPA project, called Machine Intelligence from Cortical Networks (Microns), the scientists plan to map a cubic millimeter of cortex. This tiny chunk contains close to 100,000 neurons, 3 to 15 million neural connections, or synapses, and enough neural wiring to span a major city if untangled and stretched out.

No one has yet tried to reconstruct a piece of the brain at this scale. But smaller-scale efforts have shown that such maps can shed light on the inner workings of the cerebral cortex. In a paper published in the journal Nature in March, Wei-Chung Allen Lee - a neuroscientist at Harvard University who works with Koch's team - and his colleagues mapped the connections of 50 neurons and more than 1,000 of their partners. By combining this map with information about each neuron's function in the brain - some respond to a visual signal, for example - the scientists deduced a simple rule for the anatomical wiring of neurons in this part of the cortex: neurons with similar functions are more likely to connect to each other, and to form larger connections with each other, than with neurons of other types.

And although the goal of the Microns project is decidedly technological - IARPA funds research that could lead to data-analysis tools for intelligence agencies, among other things - the scientists will, in parallel, gain data on how the brain works. Andreas Tolias, a neuroscientist at Baylor College of Medicine who is one of the lead members of Koch's team, likens our current knowledge of the cortex to a blurry photograph. He hopes the unprecedented scale of the Microns project will help bring that picture into focus and reveal the more complex rules that govern our neural circuits. Without knowing all the constituent parts, "we may be missing the beauty of this structure."

Brain processor

The intricate folds that cover the surface of the brain form the cerebral cortex, which is squeezed into our skulls. In many ways it is the brain's microprocessor. This layer, about three millimeters thick, consists of a series of repeating modules, or microcircuits, like an array of logic gates in a computer chip. Each module consists of approximately 100,000 neurons arranged in a complex network of interconnected cells. There is evidence that the basic structure of these modules is roughly the same throughout the cortex. However, modules in different regions of the brain are specialized for specific purposes such as vision, movement and hearing.

Scientists have only a rough idea of what these modules look like and how they work. They have largely been limited to studying the brain at smaller scales: tens or hundreds of neurons. New technologies designed to track the shape, activity and connectivity of thousands of neurons are only now allowing scientists to begin analyzing how the cells within a module interact with each other, and how activity in one part of the system can generate activity in another. “For the first time in history, we are able to interrogate these modules instead of just guessing at their contents,” says Vogelstein. "Different teams have different guesses about what's inside."

The researchers will focus on the part of the cortex responsible for vision. This sensory system has been studied intensively by neurophysiologists, and computer modelers have long tried to emulate it. “Vision seems simple - just open your eyes - but teaching computers to do the same is very difficult,” says David Cox, a neuroscientist at Harvard University who leads one of the IARPA teams.

Andreas Tolias (left)


Each team starts with the same basic idea of how vision works: an old theory known as analysis by synthesis. According to this idea, the brain makes predictions about what will happen in the near future, and then checks those predictions against what it sees. The strength of this approach is in its efficiency - it requires less computation than continuously recreating each moment in time.
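
The prediction-then-correction cycle at the heart of analysis by synthesis can be sketched in a few lines. This is only a toy illustration, not anything the teams have published: the single scalar "feature," the update rule, and the learning rate are all assumptions made for the sake of the example.

```python
def predictive_update(prediction, observation, learning_rate=0.5):
    """One analysis-by-synthesis step: compare a prediction with what
    is actually observed, then nudge the internal model toward the
    mismatch instead of recomputing the scene from scratch."""
    error = observation - prediction           # prediction error signal
    return prediction + learning_rate * error  # corrected estimate

# A toy "world" feature (say, the brightness of a patch) the brain tracks.
estimate = 0.0
for observed in [1.0, 1.0, 1.0, 1.0, 1.0]:
    estimate = predictive_update(estimate, observed)

print(round(estimate, 4))  # -> 0.9688: the estimate converges on reality
```

The efficiency the article mentions shows up here: each step processes only the small error term, not the whole observation anew.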

The brain could implement analysis by synthesis in many different ways, so each team is exploring a different possibility. Cox's group sees the brain as a kind of physics engine that uses existing physical models to simulate what the world should look like. Tai Sing Lee's team, together with George Church, hypothesizes that the brain has a built-in library of parts - bits and pieces of objects and people - and learns rules for how to put those parts together. Leaves, for example, usually appear on branches. Tolias's group is working on a more data-driven approach, in which the brain builds statistical expectations about the world it lives in. His group will test different hypotheses about how different parts of the circuit learn to communicate.

All three groups will monitor the neural activity of tens of thousands of neurons in the target cube of brain. Each will then use a different method to create a wiring diagram of those cells. Cox's team, for example, will slice the brain tissue into layers thinner than a human hair and analyze each slice with electron microscopy. The scientists will then stitch the cross sections together on a computer into a densely packed 3D map of how millions of nerve wires thread their way through the cortex.

With the map and activity chart in hand, each team will try to understand the basic rules governing the circuit. Then they program these rules into the simulation and measure how well the simulation matches the real brain.

Andreas Tolias and colleagues mapped the connections between pairs of neurons and recorded their electrical activity. The complex anatomy of five neurons (top left) can be summarized in a simple schematic diagram (top right). If an electric current is run through neuron 2, it is activated, triggering electrical activity in two downstream cells, neurons 1 and 5 (bottom).


Tolias and his colleagues have already had a taste of this approach. In a paper published in Science in November, they mapped the connections of 11,000 neuron pairs, revealing five new types of neurons. “We still do not have a complete list of the parts that make up the cortex - the types of individual cells, their connections,” says Koch. “That's where Tolias started.”

Among the thousands of neural connections, Tolias's group discovered three general rules governing how the cells connect: some communicate primarily with neurons of their own type; others avoid their own type, dealing mainly with other types; and a third group communicates with only a few other neurons. (Tolias's group defined its cells by anatomy rather than function, unlike Wei-Chung Allen Lee's group.) Using just these three wiring rules, the scientists were able to reproduce the circuit fairly accurately. “The challenge now is to figure out what these wiring rules mean algorithmically,” says Tolias. "What type of computation are they doing?"
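
The idea of regenerating a circuit from a handful of wiring rules can be sketched as follows. The rule names and connection probabilities here are invented for illustration; they are not the cell types or numbers from the published paper.

```python
import random

# Hypothetical connection probabilities for three wiring rules of the
# kind the Tolias group describes; the numbers are assumptions, not the
# published values.
RULES = {
    "like_seeking":  {"same": 0.60, "other": 0.10},  # prefers its own type
    "like_avoiding": {"same": 0.10, "other": 0.60},  # prefers other types
    "sparse":        {"same": 0.05, "other": 0.05},  # connects to few cells
}

def connect(rule, pre_type, post_type, rng):
    """Decide whether a presynaptic cell wires onto a postsynaptic cell,
    using only the cell's rule and whether the partner shares its type."""
    p = RULES[rule]["same" if pre_type == post_type else "other"]
    return rng.random() < p

rng = random.Random(0)
# Count the connections a "like_seeking" type-A cell makes onto 1,000
# type-A cells versus 1,000 type-B cells.
same = sum(connect("like_seeking", "A", "A", rng) for _ in range(1000))
other = sum(connect("like_seeking", "A", "B", rng) for _ in range(1000))
print(same > other)  # like-seeking cells end up favouring their own type
```

The point of such a simulation is the one the article makes: if a circuit drawn from a few probabilistic rules statistically matches the measured wiring, those rules are a good compressed description of the anatomy.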

Neural networks based on real neurons

Brain-based artificial intelligence is not a new idea. So-called neural networks, which mimic the basic structure of the brain, were extremely popular in the 1980s. But at the time, scientists in the field lacked both the computing power and the data needed to make the algorithms effective; the internet's millions of cat pictures did not yet exist. And although neural networks have undergone a major renaissance - it is hard to imagine life today without voice- and face-recognition programs, and the AlphaGo program recently beat the world's best Go player - the rules those networks use to change their connections are almost certainly different from the ones the brain uses.

Modern neural networks "are based on what we knew about the brain in the 1960s," says Terrence Sejnowski, a computational neuroscientist at the Salk Institute in San Diego, who developed the first neural network algorithms with Geoffrey Hinton, a computer scientist at the University of Toronto. "Our knowledge of how the brain is organized has grown enormously since then."

For example, modern neural networks are built on a feedforward architecture, in which information flows from input to output through a series of layers. Each layer is trained to recognize certain features, such as an eye or a whisker. The analysis then moves onward, with each successive layer performing more and more complex computations. Eventually the program recognizes a cat in a series of colored pixels.
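
A minimal sketch of such a feedforward pass, with random weights standing in for the trained feature detectors; the layer sizes and activation function are arbitrary choices for the example, not anyone's actual model.

```python
import numpy as np

def relu(x):
    """Standard rectifier nonlinearity applied after each layer."""
    return np.maximum(0.0, x)

def feedforward(x, layers):
    """Pass an input through a stack of layers, one after another.
    Information flows strictly from input to output - there are no
    loops and no feedback, which is exactly the limitation the next
    paragraph discusses."""
    for weights, bias in layers:
        x = relu(weights @ x + bias)
    return x

rng = np.random.default_rng(0)
# Three layers: in a trained network these would detect features of
# increasing complexity (edges -> parts -> whole objects).
layers = [(rng.standard_normal((8, 4)), np.zeros(8)),
          (rng.standard_normal((8, 8)), np.zeros(8)),
          (rng.standard_normal((2, 8)), np.zeros(2))]

output = feedforward(rng.standard_normal(4), layers)
print(output.shape)  # -> (2,): e.g. scores for "cat" vs "not cat"
```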

But this feedforward structure lacks an important component of the biological system: feedback, both within individual layers and from higher-order layers to lower ones. In a real brain, neurons in one layer of the cortex are connected to their neighbors as well as to neurons in the layers above and below, forming a complex network of loops. “Feedback is an extremely important part of cortical networks,” says Sejnowski. "There are as many feedback signals as there are feedforward connections."
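
One toy way to picture such a loop is a two-layer network in which the upper layer's state is fed back to reshape the lower layer's response, iterated for a few steps. The weights, sizes and update rule here are illustrative assumptions, not a model of real cortex.

```python
import numpy as np

def run_with_feedback(x, w_up, w_down, steps=5):
    """Toy recurrent loop: the lower layer drives the upper layer
    (feedforward), and the upper layer's state is fed back down to
    bias the lower layer's response to the same input (feedback)."""
    lower = np.tanh(x)
    upper = np.zeros(w_up.shape[0])
    for _ in range(steps):
        upper = np.tanh(w_up @ lower)        # feedforward sweep
        lower = np.tanh(x + w_down @ upper)  # feedback reshapes the lower layer
    return lower, upper

rng = np.random.default_rng(1)
w_up = 0.5 * rng.standard_normal((3, 4))    # lower (4 units) -> upper (3 units)
w_down = 0.5 * rng.standard_normal((4, 3))  # upper -> lower feedback
lower, upper = run_with_feedback(rng.standard_normal(4), w_up, w_down)
print(lower.shape, upper.shape)  # -> (4,) (3,)
```

Unlike the feedforward pass, the lower layer's final state here depends not only on the input but also on what the upper layer "made of" that input, which is the kind of loop the analysis-by-synthesis theory relies on.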

Neuroscientists do not yet fully understand what the feedback loops do, although they know these loops are essential to our ability to focus. They help us listen to a voice on the phone without being distracted by the sounds of the city, for example. Part of the appeal of the analysis-by-synthesis theory is that it provides a rationale for all these recurrent connections: they help the brain compare its predictions with reality.

The Microns researchers seek to decipher the rules governing the feedback loops - for example, which cells these loops connect, what triggers their activity, and how that activity affects the circuit's output - and then to translate those rules into an algorithm. “Machines now lack imagination and introspection. I believe the feedback loop will allow us to imagine and to self-reflect at many different levels,” says Tai Sing Lee.

Perhaps feedback loops will one day endow machines with traits we consider uniquely human. “If you could implement a feedback loop in a deep network, you could go from a network capable only of a knee-jerk response - input in, output out - to a more reflective network that begins to make sense of its inputs and test hypotheses,” says Sejnowski.

The key to the mystery of consciousness

Like all IARPA programs, the Microns project is high-risk. The technologies scientists need for large-scale mapping of neural activity and wiring exist, but no one has applied them at this scale before. The scientists will have to deal with huge amounts of data - 1 to 2 petabytes per cubic millimeter of brain. They will probably have to develop new machine-learning tools to analyze all that data, which is rather ironic.

It is also unclear whether lessons learned from one small piece of the brain can hint at the brain's broader talents. “The brain is not just a piece of cortex,” says Sejnowski. "The brain is hundreds of systems specialized for different functions."

The cerebral cortex itself is made up of repeating units that all look roughly the same. But other parts of the brain may work very differently. “If you want AI that goes beyond simple pattern recognition, you will need a lot of different parts,” says Sejnowski.

If the project succeeds, however, it will do more than analyze intelligence data. A successful algorithm will reveal important truths about how the brain makes sense of the world. In particular, it will help confirm whether the brain really does work by analysis by synthesis - comparing its predictions about the world with incoming data from the senses. It would show that a key ingredient in the recipe for consciousness is an ever-changing mixture of imagination and perception. By building a machine that can think, scientists hope to uncover the secrets of thought itself.
