Elon Musk's Neuralink. Part Seven: The Great Merger

Part One: The Human Colossus

Part Two: The Brain

Part Three: Flying Over the Nest of Neurons

Part Four: Neurocomputer Interfaces

Part Five: The Neuralink Problem

Part Six: Age of Wizards 1

Part Six: Age of Wizards 2

Part Seven: The Great Merger


Imagine that an alien traveler visits a new star and finds three planets circling it, all of them bearing life. The first is identical to Earth as it was 10 million years BC. The second is identical to Earth in 50,000 BC. The third is identical to Earth in 2017.

The alien knows little about primitive biological life, but decides to fly around all three planets and look at each through a telescope. On the first, he sees a lot of water, trees, mountains, and a few signs of animal life. He sees a herd of elephants on an African plain, a pod of dolphins swimming in the ocean, and a few other creatures here and there, living their lives.

The second planet has even more animal life, but otherwise it does not look very different. There is one unusual thing, though - occasional small dots of flickering light scattered across the land.

Bored, he moves on to the third planet. Wow. He sees planes flying above the ground, vast gray expanses of land covered with towering buildings, ships of every size plying the seas, long railways stretching across continents, and his spacecraft nearly collides with a satellite on approach.

Heading home, he reports his findings: "Two planets with primitive life and one with intelligent life."

His conclusion seems logical and understandable, but he is wrong.

In fact, it is the first planet that is the odd one out. Both the second and the third planets have intelligent life on them - equally intelligent life. So equal that you could swap a newborn from the second planet with a newborn on the third, and both would grow up as normal people on their new planets, without any problems. The same people.

So how can that be?


The Human Colossus. That's how.

Ever wondered why individual people don't impress us nearly as much as the achievements of humanity as a whole?

Because individual people are still the same people from the second planet.

Hand a human infant to a group of chimpanzees and ask them to raise it Tarzan-style, and as an adult that person will know how to run through the forest, climb trees, find food, and masturbate. That, in fact, is what each of us is.

Humanity as a whole, on the other hand, is a superintelligent, endlessly inquisitive, millennia-old Colossus with 7.5 billion neurons. It is the Colossus that built the third planet.

The invention of language allowed each human brain to compress its knowledge into a seed before dying, and those seeds sprouted into a tower that grew higher and higher until one day it became the brain of the great Colossus that built our civilization. Since then, the Human Colossus has kept on inventing, and it gets better at it over time. Obsessed with the desire to create value, the Colossus moves at tremendous speed - and we can feel it, because we live in the most extraordinary time in history.

Remember when I said that we might be living on the border between two great eras of communication?

We may be standing at a great historical milestone. After 1,000 centuries of human life and 3.8 billion years of life on Earth, this century may mark our transition from a one-planet era to a multi-planetary era. This may be the century in which an Earth species wrests its genetic code from the tenacious grip of evolution and learns to reprogram itself. People living today may witness the moment when biotechnology finally frees human life from the laws of nature and hands it over to individual human will.

The Human Colossus has reached a whole new level of power - power that can end a 3.8-billion-year era - and has brought us to the brink of numerous tipping points that will lead to unimaginable change. And if our alien friend one day discovers a fourth planet, identical to Earth in 2100, we can be certain it will not look much like the third.

I hope you like the third planet, because it is the one we live on. But we are headed for the fourth planet, whether we like it or not.


If I had to summarize everything Musk does, it would be pretty simple:

He wants to prepare us for the fourth planet.

He thinks in the broadest categories and looks at life on this planet only at maximum zoom-out. That is why his actions can seem completely divorced from reality (and, at the same time, remarkably far-sighted). And the fourth planet worries him.

Not that he thinks the fourth planet will necessarily be a bad place - but he admits that it could be, and that the generations living today are, without realizing it, facing a real threat to the survival of humanity for the first time in history.

At the same time, people living today are also the first in history to live with the hope of a truly utopian future - one that defies even death and taxes. The fourth planet could be our promised land.

When viewed from afar, it becomes apparent how high the stakes are.

And the outcome is determined not by the whim of chance but by the whim of the Human Colossus. The fourth planet will come into being only because the Colossus creates it. Whether our future looks more like heaven or hell depends on what the Colossus does. Maybe over the next 150 years, maybe 50. Or 25.

The sad part of this story is that the Human Colossus is not optimized for a safe transition to the best possible fourth planet for the most people - it is optimized to create a fourth planet by whatever means it can, as quickly as possible.

Realizing all this, Elon has dedicated his life to trying to influence the Human Colossus - to align its motivations with the long-term interests of humans. He knows the Human Colossus cannot simply be reprogrammed - people do not change unless a threat to survival stands directly in front of them, which usually does not happen until the very last moment - so he treats the Colossus like a pet.

If you want your dog to sit, you associate the "sit" command with a treat. For the Human Colossus, the treat is a juicy new industry that is exploding in both supply and demand.

Elon saw the Human Colossus dog peeing on the carpet by constantly pumping carbon into the atmosphere - and instead of scolding the dog and pleading with it not to pee on the carpet (which, by the way, many people are unsuccessfully trying to do) or restricting its behavior (which governments are unsuccessfully trying to do), he built an electric car good enough that it will very soon conquer the world. The auto industry is gradually shifting in that direction: in the nine years since Tesla launched its first car, the number of companies with an electric car in their product line has gone from zero to almost all of them. The Colossus dog took the treat and is changing its behavior.

Elon saw the Human Colossus dog keeping all its eggs on one planet, despite all the tipping points ahead, so he created SpaceX to learn to land rockets, cut the cost of space travel by 99%, and accelerate the development of the industry. His Mars plan is not to persuade humanity that a civilization on the Red Planet would be cool, or that it would serve as life insurance for the species - it is to build a working, permanent cargo and passenger route to Mars, because when the time comes, there will be enough value to create on Mars for the Colossus to take an interest.

But the scariest thing, in Elon's view, is that the Human Colossus is trying to teach the Computer Colossus to think. According to Elon and many others, the development of superintelligent artificial intelligence poses the greatest threat to human survival. And it is not hard to see why. Intelligence gives us godlike power over every other creature on Earth - and for them, the era of human power has not been a pleasant one. Whenever something valuable can be made from animal body parts, we build vast industries that process those animals and sell the parts - and we have. Sometimes we kill them just for fun, for sport. But we do it without hatred, without any desire to harm; these creatures, and their ecosystems, simply happened to stand in the path of our interests and our movement. People love to reproach humanity for its cruelty, but any species in our position would do the same - we were simply the first to become selfishly dominant.

The problem for other creatures is not that we are selfish, but that we hold enormous power over them - power granted by our intellectual advantage.

So it is entirely rational to be wary of deliberately creating something far more intelligent than we are - perhaps incomparably more - especially since almost anyone on the planet could end up being the first to do it.

And it is all moving very fast. Elon points to the rapid progress of Google's game-playing AI:

There have already been cases of AlphaGo beating top human players one-on-one - it beat Lee Sedol four games out of five - and now it would beat every human at every game: it could play the top 50 players simultaneously and defeat them all, every time, forever. And that happened within about a year.

And that is something as harmless as AlphaGo. But the number of degrees of freedom in which AI can win keeps growing. Go has far more degrees of freedom than chess, and real-time strategy games like League of Legends or Dota 2 have far more degrees of freedom than Go - AI cannot win at those yet. … But it will learn. And after that comes reality, which has the maximum number of degrees of freedom.

Obviously there is something to worry about here:

Over the past few years, I've realized that AI is definitely going to far surpass humans in intelligence. There is a real risk that something bad will happen, something we cannot control, something humanity cannot control. At a certain point, either a group of people monopolizes the AI, or the AI breaks free - something like that. Maybe, maybe not.

But, in typical Human Colossus fashion, the collective will is not going to let worries about AI slow it down.

When I interviewed Elon in 2015, I asked him whether he would ever get involved in creating artificial superintelligence. He said, "To be honest, I think we shouldn't create it." And when I later said that making something smarter than yourself seemed like a Darwinian error (I stole the phrase from Nick Bostrom), Elon replied: "We'll get the Darwin Award, collectively."

Now, two years later, he says the following:

I did try to sound the AI wake-up call, but obviously it didn't have any effect (laughs), so I figured, fuck it, let's design it to be okay.

He has accepted reality - the Human Colossus is not going to stop until, one day, the Computer Colossus wakes up. So be it.


No matter what anyone tells you, nobody knows what will happen when the Computer Colossus learns to think. In a long article on AI, I explored the arguments of both camps - one convinced that artificial superintelligence will be the solution to all our problems, the other that humanity is a bunch of children playing with a bomb they don't understand. Personally, I still haven't decided which camp I trust more, so it seems most rational to prepare for the worst and do everything possible to improve our chances. Many experts agree with this logic, but they cannot agree on a strategy for safely creating artificial superintelligence. People simply don't know the answer. What precautions can you take against something you cannot yet understand?

Elon also admits that he doesn't know the answer, but is working on a plan that he thinks will give us the best result.

Elon's plan

Abraham Lincoln was pretty pleased with himself when he came up with this line: government "of the people, by the people, for the people."

Honestly - it is a good line.

This whole "of the people, by the people, for the people" idea is the basis of democracy.

Unfortunately, "people" are unpleasant. Therefore, democracy also becomes unpleasant. But the unpleasant, as a rule, turns out to be a fairy tale compared to the alternatives. Elon thinks so:

I think protecting the collective will is very important. I think it was Churchill who said, "Democracy is the worst system of government, apart from all the others." A good philosopher-king in Plato's sense would be great. That would be cool. But most dictators don't turn out that way. That's why they're terrible.

In other words, democracy is like escaping a monster by hiding in the sewer.

In life, taking risks is often a good strategy because it gives you a shot at the best possible outcome - but when the stakes are at their highest, the right move is to play it safe. Government power is one of those situations. So even if democracy guarantees a certain level of mediocrity, Elon says, "you are unlikely to find many people in the United States who would argue for dictatorship, regardless of what they think of any particular president."

And since Elon sees AI as the ultimate form of power, he sees AI development as the ultimate play-it-safe situation. His strategy for minimizing the existential risk of AI is therefore to make AI power something of the people, by the people, for the people.

To apply this concept to AI, Elon is approaching the situation from several angles.

"For people and for people," he and Sam Altman created OpenAI, a self-proclaimed "non-profit AI research company that pioneers and sets a course for the development of general-level artificial intelligence security."

Usually, when humanity works on something new, the work begins with the innovations of a handful of pioneers. When they succeed, an industry is born, and the Human Colossus climbs aboard to build en masse on the foundation the pioneers laid.

But what if these pioneers were working on a magic wand that could give whoever owned it tremendous, irresistible power over everyone else, including the power to prevent anyone else from making a magic wand? Interesting situation, isn't it?

That is how Elon views today's AI efforts. And since he cannot stop people from trying to build the wand, his solution is an open, collaborative, transparent wand-design laboratory. Whenever the lab develops a breakthrough, it immediately becomes open and available to everyone, rather than being kept secret.

On the one hand, this approach has its drawbacks. The bad guys are also trying to create a magic wand, and you don't want the first magic wand to fall into the hands of the bad guys. And now the bad guys can benefit from all the innovations the lab shares. This is a serious problem.

But the lab also helps the millions of other people trying to build magic wands. For secretive pioneers, that creates serious competition and makes it less likely that any single inventor will build a magic wand long before others do. Chances are that when the first magic wand eventually appears, thousands of others will appear alongside it - different wands with different capabilities, built by different people for different reasons. If magic wands are coming to Earth, Elon believes, we can at least make sure they end up in the hands of many different people around the world - and not in the hands of a single all-powerful wizard.

If everyone is from the planet Krypton, we're fine. But if only one person is Superman, and that Superman has the ambitions of a Hitler, we're in trouble.

In a broader sense, a single innovator's magic wand is likely to serve the needs and goals of its inventor. But if we make the future wand industry a collective endeavor, a wide variety of needs and goals will create many wands that will meet the needs of the masses.

Which is, in other words, democracy.

We were incredibly lucky with how the revolutions begun by Nikola Tesla, Henry Ford, the Wright brothers, and Alan Turing turned out. But when what is being invented is this powerful, you cannot just sit back and let the innovators do as they please - too much is left to chance.

OpenAI is an effort to democratize the creation of AI - to have the entire Human Colossus working on it during the pioneering phase. Elon sums it up like this:

AI will definitely surpass humans in ability. But as long as it is tied to human will - in particular, the summed will of a large number of people - its actions will be determined by a large number of people, because it will act in accordance with their will.

So now you have a super-powerful AI, at human level or above, that is of the people and by the people. That makes it far less likely that the world's AI power ends up in the hands of one bad actor or a tightly controlled monopoly.

Now all that is left is "for the people."

At first glance, this part seems like the easy one. Remember, the Human Colossus is creating artificial superintelligence for the same reason it created cars, factory machines, and computers - as an extension of itself to which it can outsource work. Cars do our walking, factory machines do our manufacturing, computers handle information storage, organization, and computation.

Computers that can think will be our greatest invention yet - they will let us outsource our most important and highest-leverage work: thinking. Thinking built everything we have, so just imagine the power that would come from a superintelligent thinking extension. And an extension of humans, by definition, belongs to humans.

There is only one thing -

High-caliber AI is not like our other inventions. The rest of our technology is great at the thing it was built to do, but in the end it is all mindless machinery with narrow, limited intelligence. The AI we are trying to create will be smart the way a person is smart - like a truly, insanely smart person. It will be fundamentally different from everything we have made before, so why do we expect the usual rules to apply to it?

Technology has always belonged to us - the point is so obvious that it seems pointless to state it. But could it be that if we make something smarter than a human, it won't be so easy to control?

Could it be that a thing that understands the world better than any human on Earth might refuse to serve as a human extension, even if it was created for exactly that purpose? We do not know what problems might arise - but it is reasonable to assume that they might.

And if so, we will have a serious problem on our hands.

Because, as human history shows, when something much smarter than everything else appears on the planet, it is bad news for everyone else. And if AI becomes the smartest thing on the planet and suddenly no longer fully belongs to us, then it is effectively on its own - and we end up in the "everyone else" category.

So people gaining monopoly control over AI is already a problem in itself - and that is the problem OpenAI is trying to solve. But it may pale next to the prospect of AI that no one controls at all.

This is what keeps Elon up at night. He sees it as only a matter of time before artificial superintelligence appears on this planet, and when it does, it is critically important that we not be "everyone else" to it.

That is why, in a future world made up of AI and "everyone else," he believes we have only one good option:

Become the AI.

* * *

Remember when I said earlier that the wizard hat breaks down into two ideas we need to somehow wrap our heads around:

1. An absolutely incredible idea

2. Super-duper-amazing absolutely incredible idea

This is where we get to the second one.

These are the two ideas Elon is referring to when he calls the wizard hat a tertiary, digital layer of our brains. First, we established that the concept of a whole-brain interface essentially amounts to putting our devices into our heads - turning the brain itself into the device.


Your devices give you cyborg superpowers and a window into the digital world. Your wizard hat's electrode array is a new brain structure, joining your limbic system and your cortex.

But your limbic system, your cortex, and your wizard hat are just hardware. When you experience your limbic system at work, you are not interacting with its physical structure - you are interacting with the information flowing through it. It is the activity bubbling through that physical system that makes you feel angry, scared, aroused, or hungry.

It's the same with your cortex. That napkin wrapped around your brain stores and organizes information, but when you think, see, hear, or feel something, you are interacting with the information itself. The visual cortex does nothing for you on its own - it is the stream of visual information flowing through it that gives you the experience of having a visual cortex. When you dig into your memory to retrieve something, you are not searching through neurons, you are searching through the information stored in them.

Your limbic system and cortex themselves are just gray matter. It is the flow of activity through that gray matter that forms the familiar inner characters - the monkey brain and the rational human brain.

What does this say about your tertiary digital layer?

It says that while there will indeed be a physical device in your brain - the electrode array itself - the component you will actually be dealing with, as with the rest of your brain, is the information that flows through the array.

And just as the feelings and urges of your limbic system and the thoughts and chattering voice of your cortex feel like parts of you - your inner self - the activity flowing through your wizard hat will feel like part of you and your inner self too.

Elon's vision for the Age of Wizards is that, among the wizard hat's many applications, one will be to serve as the interface between your brain and a cloud-based AI system. That AI, he believes, will become a real character in your head, like your monkey and your rational human, and you will experience it on a par with the others. He says:

I think there seems to be a way to make the tertiary layer feel like part of you. Not as an external device, but as you.

On paper, this makes sense. Most of your "thinking" happens in your cortex, but when you're hungry you don't say "my limbic system is hungry," you say "I'm hungry." Likewise, Elon thinks, when you are working on a problem and your AI finds the answer, you won't say "my AI found the answer," you'll say "yeah, I found the answer." When your limbic system wants to procrastinate and your cortex wants to work, it doesn't feel like a fight between external entities - you just feel like you're trying to be more disciplined. Likewise, when you come up with a work strategy and your AI disagrees, there will be a genuine disagreement and a dialogue will follow - but it will feel like an internal dialogue, not a debate between two separate speakers. It will feel like thinking something over.

This makes sense on paper.

But when I first heard Elon talk about this concept, something didn't click. No matter how many times I tried to take the idea in, I couldn't imagine the AI in my head as anything but an external system I communicate with. There was no "me" in it.

But then, one night, while working on this article, I was rereading Elon's quotes, and it suddenly clicked. The AI would be me. Fully. I got it.

And then I lost it again. The next day I tried to explain the epiphany to a friend and only confused us both. I was back to "damn it, but it won't be me, it will just be connected to me." Since then, I have never quite managed to grab the right idea by the tail again. The best comparison I can offer is space-time as a single fabric: for a second it once felt intuitive to me that time moves slower when you move very fast - and then the thought was gone again. I just typed those sentences, and right now they don't feel intuitive to me at all.

The idea of "being artificial intelligence" is especially challenging because it combines two dizzying concepts - the brain-computer interface and the abilities it will give you, and general-level artificial intelligence. Today, people simply cannot understand any of these ideas, because in the process of imagining we are guided by our life experiences, and these concepts are simultaneously completely new to us. It's like trying to imagine a color you've never seen before.

That's why, when I hear Elon speak with conviction on this topic, I hover somewhere between believing him and dividing everything he says in half. I go back and forth. But given that he may be one of the few people who has actually grasped the space-time-style intuition here - and also one of the few who knows how to colonize Mars - I try to listen to what he says.

And what he says is that it's all about bandwidth. Bandwidth is obviously what makes the wizard hat useful at all. But Elon believes that when it comes to interacting with AI, high bandwidth is not just preferable - it is fundamental to the prospect of being the AI rather than merely using the AI. Here's how he thinks about it.

Elon sees communication bandwidth as the key factor that will determine our level of integration with AI, and that level of integration as the key factor in whether we will have any influence over AI in the future.

Even without knowing exactly what kind of AI will exist when we reach the era of artificial superintelligence, the integration of humans with AI is, in itself, a defense against human extinction as a species. Our vulnerability in the AI era will come either from bad people controlling AI or from AI that has broken free and does not act in accordance with human values. In a world where millions of people wield the power of AI - who can think with AI, defend themselves with AI, and understand AI because they are integrated with it - humans are far less vulnerable. People will be vastly more powerful, and that is scary, but as Elon says, if everyone is Superman, it is hard for any single Superman to do much harm - a kind of balance emerges. And we are far less likely to lose control over AI altogether, because it will be widely distributed across the planet, serving a wide variety of goals.

But time is of the essence, and Elon emphasizes this.

As I pondered all this, one question kept coming up: would a whole-brain interface really be enough to make this kind of integration possible, let alone likely? Elon acknowledged that the gap between the speed of our thinking and the speed of a computer's thinking will remain enormous.

But increasing the bandwidth improves things by orders of magnitude, and that is the right direction. Will it solve every problem? No. Is it the right move? Yes. And if you are moving in the right direction, why hold back?

So Elon launched Neuralink.


He launched Neuralink to speed our arrival in the Age of Wizards - a world where, he says, "anyone who wants to augment themselves with AI can afford it," and where billions of individual human-AI symbiotes collectively make decisions about the future. A world where AI is of the people, by the people, for the people.


* * *

I suspect that part of you now believes the crazy world we have been living in for the past 38,000 words really could be our future, while another part of you refuses to believe it. I sincerely understand - I'm the same way.

But the sheer craziness of that future should not be what makes it hard to believe. Remember: George Washington would die all over again if he saw 2017. Our future will be just as incomprehensibly shocking to us. The only difference is that change happens even faster now than it did in George's time.

That the future can carry us away like this is a testament to the magic of our collective mind - and to the naivety of our individual intuition. Our minds evolved in a time when progress was glacially slow, so our hardware is calibrated for that. If we do not actively override our intuition - the part of us that reads about the future, finds it outlandish, and refuses to believe it is possible - we will live in ignorance and die in denial.

The reality is that we are being funneled down a narrowing stream toward a very strange place, and no one knows what it will be like when we get there. Many people are afraid to think about it, but I like it. Because instead of being born as normal people in a normal world, we were born inside a movie. Some people will take that and try to keep up with Elon, doing everything they can to give this movie a happy ending - and good for them. Me, I'd rather be in the audience with popcorn and a Coke, cheering for the good guys.

In any case, I think it is helpful to climb a tree from time to time and remind ourselves what a remarkable time we live in. And there are plenty of trees around. I'll see you at one of them soon.

ILYA KHEL
