The Five Most Frightening Advances In Robotics In Recent Memory

If robots don't scare you, you are either developing them, or you yourself are one of them.

Already, the electronic brains of the most advanced models outdo humans at certain tasks and are capable of things that will make any of us cringe uncomfortably. Think we're just rabid neo-Luddites calling for an end to thinking machines? Maybe.

But what will you say once you've heard about achievements like these in robotics and artificial intelligence?

No. 5. Schizophrenic robot

Scientists at the University of Texas at Austin simulated mental illness on a computer, endowing artificial intelligence with schizophrenia.

The victim of the experiment was DISCERN, a supercomputer that functions like a biological neural network, mimicking the workings of the human brain. In trying to recreate the mechanism behind schizophrenia, the scientists started from the theory of hyperlearning, which holds that the brain of a schizophrenic processes and stores too much information, memorizing everything, even unnecessary small details.

The scientists then emulated schizophrenia in the artificial intelligence (we are not sure this experiment isn't a crime against humanity) by feeding the computer a large number of stories. As long as the machine was allowed to store only the key details in memory, it worked perfectly and retold what it had learned normally. Then the experimenters reprogrammed its memory, forcing it to record absolutely every detail, important and insignificant alike.

The result was a schizo-robot! The computer could no longer properly remember what it had been taught or retell any coherent story. How crazy did it get? At one point it claimed responsibility for a terrorist attack, telling the researchers that it had planted a bomb; the artificial intelligence did this because it confused a third-party report of the bombing with its own memories. In another case the computer went the other way and began talking about itself in the third person, because it simply could no longer work out who it was. All in all, DISCERN developed a personality disorder. Fortunately, the University of Texas scientists came up with a kind of robotic antipsychotic to bring their ward back to normal.
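
If you want to picture roughly how the "hyperlearning" setup works, here is a little Python toy of our own, not the actual DISCERN model: the stories, the capacity limit and the way attribution gets scrambled are all invented for illustration.

```python
import random

# Invented toy, NOT the DISCERN model: a memory that stores only salient details
# keeps its source tags straight, while one forced to store everything gets
# overloaded and starts scrambling who said what -- so a bomb report can come
# back attributed to "own experience".
stories = {
    "someone else's report": ["a bomb exploded downtown", "police closed the street",
                              "the weather that day was mild"],
    "own experience":        ["my batteries were charged", "I parsed three stories",
                              "the lab lights flickered"],
}
CAPACITY = 4  # pretend source attribution only survives up to 4 stored items

def memorize(keep_everything):
    memory = []
    for source, details in stories.items():
        for i, detail in enumerate(details):
            if keep_everything or i == 0:   # "healthy" mode keeps only the key detail
                memory.append((source, detail))
    return memory

def recall(memory):
    overloaded = len(memory) > CAPACITY
    for source, detail in memory:
        if overloaded:                      # attribution breaks down when overloaded
            source = random.choice(list(stories))
        print(f"recalled {detail!r}, attributed to: {source}")

recall(memorize(keep_everything=False))     # coherent recall
print("---")
recall(memorize(keep_everything=True))      # overloaded recall: mixed-up attribution
```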

No. 4. Cheating robot

Professor Ronald Arkin of the School of Interactive Computing at the Georgia Institute of Technology presented the results of an experiment in which scientists managed to teach a group of robots to cheat and deceive. The deceptive strategies were modeled on the behavior of birds and squirrels.

The experiment involved two robots: the first had to find a place to hide, and the second had to discover where the first was hiding. The course was lined with markers that toppled over as the machines passed. The first robot plotted its route, and the second pursued it, working out where its quarry had gone from the tracks left behind.

After a while, the hiding robot began deliberately knocking over markers purely to lay a false trail, while the sly machine itself hid somewhere far away from all that mess. A simple tactic, but with it the hiding droid managed to fool its pursuer 75% of the time.

Moreover, this strategy was not initially programmed - the robot developed it independently, by trial and error. It's good that this is all just a harmless university experiment, right?
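
To make the trick concrete, here is a minimal Python sketch of the false-trail idea; the hiding spots, marker naming and the seeker's one-line reasoning are all invented, and this is not Arkin's actual control code.

```python
# Invented toy, NOT the real controller from the experiment: the hider tips a
# marker along the path to a decoy spot and then hides elsewhere; a naive seeker
# that trusts the tracks ends up searching the wrong place.
HIDING_SPOTS = ["left alcove", "center corridor", "right alcove"]

def hider_plan(real_spot):
    decoy = next(s for s in HIDING_SPOTS if s != real_spot)
    false_trail = [f"tipped marker on the path to the {decoy}"]
    return false_trail, real_spot

def seeker_guess(trail):
    # The seeker reasons only from the tracks it can see.
    for marker in trail:
        return marker.split("path to the ")[-1]
    return "center corridor"            # default guess if no trail is found

trail, real_spot = hider_plan(real_spot="right alcove")
guess = seeker_guess(trail)
print(f"seeker searches the {guess}; the hider is actually in the {real_spot}")
print("deception succeeded" if guess != real_spot else "deception failed")
```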

No. 3. Ruthless robot

Scientists at the Laboratory of Intelligent Systems (EPFL, Lausanne) placed a group of robots in a room with simulated sources of "food" and "poison". The machines earned points for staying close to the "food" and lost points if they strayed too near the "poison". Every robot in the experiment carried a small blue light that blinked at random, along with a camera sensor that let it spot the lights of the other robots.

They could also switch their own lights off if they wanted. Once the trials began, it took the robots only a short time to work out that the densest concentration of blue light was wherever other robots had gathered, which meant next to the "food". In other words, by blinking their lights, the robots were showing competitors where the right source was.

After several rounds of the experiment, almost all the robots had switched off their "beacons", refusing to help one another. And it didn't stop there: some of the bots figured out how to lure the others away from the "food" by blinking more intensely, like the Pied Piper with his flute. Then again, maybe that works in humanity's favor: if robots start fighting among themselves, they may not have much time left over for us.
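
Here is a crude evolutionary toy of our own, not the original lab's code, showing why the blinking habit dies out: a robot that lights up near the "food" attracts competitors and loses points, so over generations the "blink" trait stops being copied. The population size, scoring and mutation rate are all invented.

```python
import random

# Invented toy, NOT the original experiment's code: evolve a population of robots
# that either blink near the food or stay dark. Blinking attracts rivals, which
# costs points, so dark robots out-reproduce blinkers.
POP, GENERATIONS = 30, 60

def fitness(blinks, blinkers):
    score = 10.0                        # everyone eventually finds the food
    if blinks:
        score -= 0.5 * (blinkers - 1)   # crowding near your light costs points
    return score

population = [random.random() < 0.8 for _ in range(POP)]   # most start as blinkers

for g in range(GENERATIONS):
    blinkers = sum(population)
    scored = sorted(((fitness(b, blinkers), b) for b in population), reverse=True)
    parents = [b for _, b in scored[:POP // 2]]             # top half reproduces
    population = [(random.choice(parents) if random.random() > 0.02
                   else not random.choice(parents))         # copy a parent, rare mutation
                  for _ in range(POP)]
    if g % 15 == 0:
        print(f"generation {g:2d}: {sum(population)}/{POP} robots still blinking")
```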

No. 2. Supercomputer with imagination

Among the many Google projects that will no doubt one day finish off our civilization, one stands out: a self-learning computer simulation of a neural network. It is much like the schizophrenic computer from entry No. 5, except that this one hasn't gone mad yet. On the contrary, it has its own highly organized consciousness, and even a penchant for creativity.

During the experiment, this supercomputer was given free access to the Internet and allowed to study whatever it found there. No restrictions, no instructions: the mighty superintelligence was simply let loose on the whole of human history and experience. And do you know what it picked out of all that wealth? It started looking at pictures of cats.

Yes, it turns out we all use the Internet the same way, whether we are thinking meat or high-tech digital minds: left to ourselves, we go for cute kittens. A little later, Google discovered that the computer had even developed its own concept of what a cat looks like, independently generating an image with its analogue of the cerebral cortex, based on the photographs it had viewed earlier.
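
As a drastically simplified stand-in for what the network did, here is a Python sketch using plain k-means clustering on synthetic vectors: given a pile of unlabeled "images", the learner forms a prototype on its own that ends up close to the hidden cat pattern. The templates, sizes and noise levels are invented, and this is nothing like Google's actual system in scale or architecture.

```python
import numpy as np

# Invented toy, NOT Google's network: unsupervised clustering discovers a
# recurring pattern in unlabeled data, loosely analogous to the network
# forming its own internal "picture" of a cat.
rng = np.random.default_rng(0)

cat_template   = rng.random(64)          # pretend this is the "typical cat" pattern
other_template = rng.random(64)          # the everything-else pattern
images = np.vstack([cat_template + 0.1 * rng.standard_normal((200, 64)),
                    other_template + 0.1 * rng.standard_normal((200, 64))])
rng.shuffle(images)                      # no labels anywhere

# Plain 2-means clustering: the learner finds two recurring patterns on its own.
centroids = images[rng.choice(len(images), 2, replace=False)]
for _ in range(20):
    dists = np.linalg.norm(images[:, None, :] - centroids[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    centroids = np.array([images[assign == k].mean(axis=0) for k in range(2)])

# One learned centroid ends up very close to the hidden cat template --
# an internal "concept" built purely from unlabeled examples.
print([round(float(np.linalg.norm(c - cat_template)), 3) for c in centroids])
```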

No. 1. Seer robot

The Nautilus is another self-learning supercomputer. This machine was fed millions of news stories dating back to 1945, analyzed along two criteria: the tone of each report and its location. Using that storehouse of information about past events, it was asked to make guesses about what would happen in the "future". Its guesses turned out to be surprisingly accurate. How accurate? Well, for example, it found bin Laden.

Of course, this doesn't mean a hulking, angry cyborg kicked in Osama's door with an ion cannon at the ready: Nautilus ran its search after bin Laden had already been found. But given enough information, the artificial intelligence was able to work out the approximate location of terrorist No. 1.

The American government and its allies needed 11 years, two wars, two presidents and billions of dollars. Nautilus needed far less time, and all it did was analyze the news coverage of the terrorist leader and connect the dots between his reported whereabouts. In the end, Nautilus narrowed the search down to a 200-kilometer zone in northern Pakistan, exactly where Osama's hideout was in fact found.
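
The location-narrowing step can be pictured with a toy like this: average the coordinates mentioned in stories about the target and see how wide the cluster is. The coordinates below are invented, and this is not the real Nautilus pipeline.

```python
from statistics import mean
from math import cos, radians, sqrt

# Invented toy, NOT the real Nautilus pipeline: take the locations mentioned in
# news stories about the target, find their center, and measure the radius of
# the cluster. All coordinates are made up.
mentions = [(34.2, 73.2), (33.9, 72.9), (34.5, 73.5), (34.1, 73.1), (33.8, 73.0)]

lat_c = mean(p[0] for p in mentions)
lon_c = mean(p[1] for p in mentions)

def km_between(a, b):
    # crude flat-earth approximation, good enough for a small region
    dlat = (a[0] - b[0]) * 111.0
    dlon = (a[1] - b[1]) * 111.0 * cos(radians((a[0] + b[0]) / 2))
    return sqrt(dlat ** 2 + dlon ** 2)

radius = max(km_between((lat_c, lon_c), p) for p in mentions)
print(f"search centred on ({lat_c:.2f}, {lon_c:.2f}), radius ~{radius:.0f} km")
```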

But Nautilus' achievements are not limited to the hunt for bin Laden. It was also able to anticipate the Arab Spring protests simply by reading news reports and gauging their "mood" from how often they used positive or negative language.
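
The "mood" measurement can be sketched just as simply: count positive and negative words per article and watch the average drift. The word lists and headlines below are invented, and real tone analysis is far more sophisticated.

```python
# Invented toy, NOT the real Nautilus pipeline: score each article by counting
# positive vs. negative words. A sustained slide below zero is the kind of
# signal the researchers read as a warning of coming upheaval.
POSITIVE = {"stable", "growth", "calm", "reform"}
NEGATIVE = {"protest", "crackdown", "unrest", "anger", "crisis"}

articles = [
    "economic growth keeps the region calm and stable",
    "promised reform stalls as anger spreads",
    "mass protest met with crackdown amid deepening crisis and unrest",
]

def tone(text):
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

for a in articles:
    print(f"{tone(a):+d}  {a}")
```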

So robots can predict the future, and they will surely predict the imminent extinction of mankind the moment we bolt legs onto them. To be fair, the Nautilus experiment was retrospective: the machine was only allowed to "predict" events that had already happened. But scientists are now considering letting it forecast the actual future.