Scientists Have Created an Artificial Brain From Silver and Made It Learn

A tiny, self-organized network of artificial synapses remembers its experiences and can solve simple problems. Its creators hope that such artificial brains will one day lead to devices that rival the brain's computational power while approaching its energy efficiency. Brains, quite apart from their feats of thinking and problem-solving, are marvels of energy efficiency: the brain runs on roughly the power that a 20-watt incandescent bulb consumes. Meanwhile, one of the most powerful and fastest supercomputers in the world, the K computer in Kobe, Japan, draws up to 9.89 megawatts, about as much as 10,000 homes. Yet in 2013, even with all that power, the machine needed 40 minutes to simulate just one second of 1% of the human brain's activity.

Research engineers at the California NanoSystems Institute at the University of California, Los Angeles, hope to rival the brain's computational and energy efficiency with systems that mirror the brain's structure. They are creating a device, perhaps the first of its kind, that is “inspired by the brain to generate the properties that enable the brain to do what it does,” says Adam Stieg, a research scientist and associate director of the institute, who leads the project with Jim Gimzewski, a professor of chemistry at UCLA.

Their design is nothing like that of ordinary computers, which are based on tiny wires printed on silicon chips in highly ordered circuits. The current experimental version is a 2 x 2 mm grid of silver nanowires connected by artificial synapses. Unlike silicon circuitry, with its geometric precision, this device is tangled like a "well-mixed dish of spaghetti," says Stieg. And its fine structure emerged from random chemical and electrical processes rather than from careful design.

In its complexity, this silver web resembles a brain. There are a billion artificial synapses per square centimeter of the grid, within a few orders of magnitude of the density of a real brain. The network's electrical activity also exhibits a property characteristic of complex systems like the brain: "criticality," a state between order and chaos that indicates maximum efficiency.


This network of highly intertwined nanowires may look chaotic and random, but its structure and behavior resemble those of neurons in the brain. Researchers at the California NanoSystems Institute are developing it as a brain-like device for learning and computing.

Moreover, preliminary experiments suggest that this neuromorphic (that is, brain-like) silver wire mesh has great functional potential. It can already perform simple learning and logic operations. It can filter unwanted noise out of an incoming signal, an ability important for speech recognition and similar tasks that give traditional computers trouble. And its existence is a proof of principle that it may one day be possible to build devices with energy efficiency close to the brain's.

These advantages look especially appealing as silicon microprocessors approach the limits of miniaturization and efficiency. "Moore's Law is dead, semiconductors can no longer get smaller, and people are wailing, 'What do we do?'" says Alex Nugent, CEO of Knowm, a neuromorphic computing company not involved in the UCLA project. “I like this idea, this direction. Conventional computing platforms are a billion times less efficient."

Switches as synapses

When Gimzewski began working on his silver grid project 10 years ago, he wasn't interested in energy efficiency at all. He was bored. After using scanning tunneling microscopes to study electronics at the atomic scale for 20 years, he finally said, "I'm tired of perfection and precise control, and a little tired of reductionism."

Reductionism, presumably, underlies all modern microprocessors, in which complex circuits and phenomena are built up from, and explained by, simple elements.

In 2007, he was asked to study individual atomic switches developed by Masakazu Aono's group at the International Center for Materials Nanoarchitectonics in Tsukuba, Japan. Those switches contained the same ingredient that turns a silver spoon black when it touches an egg: silver sulfide, sandwiched between layers of metallic silver.

Applying voltage to the devices pushes positively charged silver ions out of the silver sulfide and toward the silver cathode layer, where they are reduced to metallic silver. Filaments of atomic silver grow, eventually closing the gap between the metallic silver sides. The switch is on, and current can flow. Reversing the current has the opposite effect: the silver bridges shrink and the switch turns off.

However, shortly after developing the switch, Aono's group began to observe unusual behavior. The more often the switch was used, the easier it was to turn on. If it was not used for some time, it gradually turned off on its own. Essentially, the switch remembered its history. Aono and his colleagues also found that the switches appeared to interact with each other, such that turning on one switch would sometimes block or turn off others nearby.
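
This facilitate-and-forget dynamic is the signature of a memristive element, and it can be captured in a toy model: a single state variable for the filament bridging the gap, which grows under applied voltage and slowly relaxes on its own. The Python sketch below is purely illustrative; the update rule, rate constants, and conductance values are assumptions chosen for clarity, not the Aono group's actual device physics.

```python
import numpy as np

# Toy model of a history-dependent atomic switch (illustrative assumptions,
# not the Aono group's actual device equations). The state w in [0, 1]
# stands for the silver filament bridging the gap: positive voltage grows
# it, and it slowly decays on its own, so an idle switch "forgets".

GROWTH_RATE = 0.5         # filament growth per volt per time step (assumed)
DECAY_RATE = 0.02         # spontaneous decay fraction per time step (assumed)
G_OFF, G_ON = 1e-6, 1e-3  # conductance when open / closed, in siemens (assumed)

def step(w, voltage, dt=1.0):
    """Advance the filament state by one time step."""
    w += GROWTH_RATE * voltage * dt  # voltage-driven growth (or shrinkage)
    w -= DECAY_RATE * w * dt         # slow relaxation toward the off state
    return float(np.clip(w, 0.0, 1.0))

def conductance(w):
    """Conductance interpolates between the off and on values."""
    return G_OFF + (G_ON - G_OFF) * w

# Repeated pulses make the switch progressively easier to turn on ...
w = 0.0
for pulse in range(5):
    w = step(w, voltage=0.3)
    print(f"after pulse {pulse + 1}: w = {w:.3f}, G = {conductance(w):.2e} S")

# ... while a long rest lets the filament decay and the switch turn off.
for _ in range(100):
    w = step(w, voltage=0.0)
print(f"after resting: w = {w:.3f}")
```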

Most of Aono's group wanted to engineer these strange properties out of the switches. But Gimzewski and Stieg (who had just completed his PhD in Gimzewski's group) were reminded of synapses, the junctions between nerve cells in the human brain, which also change their connections with experience and interaction. And so the idea was born. “We thought: why not try to translate all of this into a structure resembling the mammalian cerebral cortex and study it?” says Stieg.

Building such a complex structure was certainly difficult, but Stieg and Audrius Avizienis, who had just joined the group as a graduate student, developed a protocol for it. By pouring silver nitrate onto tiny copper spheres, they could get microscopically thin, intersecting silver wires to grow. They could then pump sulfur gas through the mesh to form a layer of silver sulfide between the silver wires, as in the Aono team's original atomic switch.

Self-organized criticality

When Gimzewski and Stieg told others about their project, no one believed it would work. Some said the device would settle into a single type of static activity and stay there, Stieg recalled. Others suggested the opposite: “They said the switching would cascade and the whole structure would just burn out,” Gimzewski says.

But the device didn't melt. On the contrary, when Gimzewski and Stieg watched it through an infrared camera, the input current kept changing the paths it took through the device, proof that activity in the network was not localized but distributed, as in the brain.

One fall day in 2010, as Avizienis and his colleague Henry Sillin were increasing the input voltage to the device, they suddenly noticed that the output voltage had begun to fluctuate at random, as if the wire mesh had come alive. “We sat down and stared at it; we were shocked,” says Sillin.

They suspected they had found something interesting. When Avizienis analyzed several days' worth of monitoring data, he found that the network stayed at a given activity level for short stretches more often than for long ones. They later found that small areas of activity were more common than large ones.

“My jaw dropped,” says Avizienis, because that was the first time a power law had emerged from their device. Power laws describe mathematical relationships in which one variable varies as a power of another. They apply to systems in which larger scales and longer events are rarer than smaller, shorter ones, yet are still far too common to be accidental. Per Bak, a Danish physicist who died in 2002, first proposed power laws as the hallmark of all kinds of complex dynamical systems that organize themselves across large scales and long distances. Power-law behavior, he said, indicates that a complex system is balanced on the golden mean between order and chaos, in a state of "criticality," with all its parts interacting and interconnected for maximum efficiency.
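
To make the analysis concrete, here is a minimal sketch of how a power-law exponent is typically estimated from event-size data: bin the sizes logarithmically and fit a straight line on log-log axes. The Pareto-distributed samples are synthetic stand-ins for the device's measured cascades, and the least-squares fit is a crude first pass (careful analyses use maximum-likelihood estimators).

```python
import numpy as np

# Estimating a power-law exponent from event-size data. The Pareto samples
# below are synthetic stand-ins for measured activity cascades; a power law
# P(s) ~ s**(-alpha) appears as a straight line on log-log axes.

rng = np.random.default_rng(0)
sizes = rng.pareto(a=1.5, size=100_000) + 1.0  # synthetic "avalanche" sizes

# Histogram with logarithmically spaced bins, normalized to a density.
bins = np.logspace(0.0, np.log10(sizes.max()), 40)
density, edges = np.histogram(sizes, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])      # geometric bin centers

# Fit a straight line to log(density) vs. log(size); its slope is -alpha.
mask = density > 0
slope, _ = np.polyfit(np.log(centers[mask]), np.log(density[mask]), 1)
print(f"estimated exponent alpha ~ {-slope:.2f}")  # expect roughly 2.5
```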

As Bak predicted, power-law behavior has been observed in the human brain: in 2003, Dietmar Plenz, a neurophysiologist at the National Institutes of Health, observed that groups of nerve cells activated others, which in turn activated others, often triggering system-wide cascades of activation. Plenz found that the sizes of these cascades followed a power-law distribution, and that the brain was indeed acting to maximize the spread of activity without risking losing control of it.

The fact that the UCLA device also exhibits power-law behavior is very important, Plenz says, because it implies that, as in the brain, a delicate balance between activation and inhibition keeps the sum of its parts working. Activity neither overwhelms the network nor dies out.

Gimzewski and Stieg later found another similarity between the silver network and the brain: just as the sleeping human brain exhibits fewer short activation cascades than the waking brain, short bursts of activation in the silver network become less common at lower input energies. In a way, reducing the device's power consumption creates a state resembling the sleeping human brain.

Learning and computing

Which raises the question: if a network of silver wires has brain-like properties, can it solve computational problems? Preliminary experiments suggest the answer is yes, although the device is, of course, nothing remotely like a conventional computer.

For one thing, there is no software. Instead, the researchers exploit the fact that the network can distort an incoming signal in many different ways, depending on where the output is measured. This suggests possible uses in voice or image recognition, since the device can clean up a noisy input signal.

It also means the device could be used for so-called reservoir computing. Because a single input can, in principle, generate millions of different outputs (hence the "reservoir"), users can select or combine outputs so that the result is a desired computation on the inputs. For example, if you stimulate the device in two different places at once, there is a chance that one of the millions of outputs will represent the sum of the two inputs.

The challenge is to find the right outputs and decode them, and to figure out how best to encode information so that the network can understand it. This can be done by training the device: running a task hundreds or thousands of times, first with one type of input, then with another, and comparing which outputs handle the task best. “We don't program the device; we select the best way to encode the information so that the behavior of the network is useful and interesting,” says Gimzewski.
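
A minimal software analogue of this scheme is the echo state network, in which a fixed random recurrent network plays the role of the reservoir and only a linear readout is trained. In the sketch below, every size, scaling, and regularization value is an illustrative assumption; the readout is trained by ridge regression to recover the sum of two input streams, echoing the example above. The physical device, by contrast, is "trained" by selecting among its existing outputs rather than by regression.

```python
import numpy as np

# Minimal echo-state-network sketch of reservoir computing: a software
# analogue, not the silver-wire device itself. A fixed random recurrent
# network nonlinearly mixes two input streams into many internal signals;
# only a linear readout over those signals is trained, here to recover
# the SUM of the two inputs. All sizes and scalings are assumptions.

rng = np.random.default_rng(1)
N, T = 300, 2000                        # reservoir size, training steps
W_in = rng.uniform(-0.5, 0.5, (N, 2))   # fixed input weights for two inputs
W = rng.normal(0.0, 1.0, (N, N))        # fixed random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep dynamics stable

u = rng.uniform(-1.0, 1.0, (T, 2))      # two random input streams
target = u.sum(axis=1)                  # desired computation: their sum

# Drive the reservoir and record its state at every time step.
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])    # nonlinear recurrent update
    states[t] = x

# Train only the readout, by ridge regression over the recorded states.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N),
                        states.T @ target)

pred = states @ W_out
rmse = np.sqrt(np.mean((pred[100:] - target[100:]) ** 2))  # skip warm-up
print(f"readout RMS error on the sum task: {rmse:.4f}")
```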

In a paper to be published soon, the scientists explain how they trained the wire network to perform simple logical operations. And in as-yet-unpublished experiments, they trained it on a simple memory task usually given to rats, the T-maze. In the T-maze test, a rat is rewarded if it makes the correct turn in response to a light. In its own version of the test, the network could make the right choice 94% of the time.

So far, these results are little more than a proof of principle, Nugent says. “A little rat making a decision in a T-maze is nowhere near what machine-learning systems can do” on a traditional computer, he says. He doubts the device can be turned into a useful chip within the next few years.

But the potential is huge, he stresses, because the network, like the brain, does not separate processing and memory. Traditional computers must shuttle information between the separate regions that handle those two functions. “All that extra communication adds up, because the wires consume power,” Nugent says. With traditional computers, you would need the power output of France to simulate a complete human brain at decent resolution. If devices like the silver network can eventually solve problems as effectively as the machine-learning algorithms that run on traditional computers, they could do so with a billion times less power. And after that, the rest is details.

The scientists' findings also support the notion that, under the right conditions, intelligent systems can form by self-organization, without any template or process guiding their development. The silver network "emerged spontaneously," says Todd Hylton, a former DARPA program manager who supported the project in its early stages.

Gimzewski believes that the silver wire network and devices like it may prove better than traditional computers at predicting complex processes. Traditional computers model the world with equations that often describe complex phenomena only approximately. Atomic-switch neuromorphic networks match their own intrinsic structural complexity to the phenomenon they are simulating. And they are fast: the state of the network can fluctuate at up to tens of thousands of changes per second. “We are using a complex system to understand complex phenomena,” says Gimzewski.

Earlier this year, at a meeting of the American Chemical Society in San Francisco, Gimzewski, Stieg and their colleagues presented the results of an experiment in which they fed the device the first three years of a six-year Los Angeles traffic dataset, as a series of pulses indicating the number of cars passing per hour. After hundreds of training hours, the output eventually predicted the statistical trend of the second half of the dataset quite well, even though the device had never seen it.
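
In reservoir-computing terms, that experiment amounts to training a readout on the first half of a time series and testing its predictions on the unseen second half. The sketch below repeats that protocol with the same kind of echo state network as before, using a synthetic noisy daily cycle as a stand-in for the Los Angeles car counts; all parameters are again assumptions.

```python
import numpy as np

# Train on the first half of a series, test on the unseen second half,
# echoing the traffic experiment. The "traffic" signal here is a synthetic
# noisy daily cycle, a stand-in for the real hourly car counts.

rng = np.random.default_rng(2)
hours = np.arange(4000)
traffic = np.sin(2 * np.pi * hours / 24) + 0.1 * rng.normal(size=hours.size)

# Fixed random reservoir, as in the previous sketch (assumed parameters).
N = 200
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

x = np.zeros(N)
states = np.empty((hours.size, N))
for t in range(hours.size):
    x = np.tanh(W @ x + W_in * traffic[t])
    states[t] = x

half = hours.size // 2
# Readout trained ONLY on the first half, to predict the next hour's value.
A, y = states[:half - 1], traffic[1:half]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ y)

# One-step-ahead prediction, evaluated on the held-out second half.
pred = states[half:-1] @ W_out
rmse = np.sqrt(np.mean((pred - traffic[half + 1:]) ** 2))
print(f"RMS error on the held-out second half: {rmse:.3f}")
```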

Perhaps one day, Gimzewski jokes, he will use the network to predict the stock market.

Ilya Khel
