People With Enhanced Intelligence Can Be More Effective Than Artificial Intelligence - Alternative View


Despite the widespread attention paid to the development of artificial intelligence, some believe that the potential of significantly enhanced human intelligence is far greater. The question of which will come first remains open, but a technologically augmented brain could be as powerful (and, frankly, as dangerous) as AI.

As a species, humans have spent many millennia developing their thinking, or at least trying to. In the effort to overcome our cognitive limitations, we have used everything available, from writing, language, and meditative techniques to modern nootropics.

Why, instead of trying to develop general artificial intelligence (AGI) or even artificial superintelligence (ASI), not focus on the human brain and its still far-from-exhausted potential? After all, a significant enhancement of the mind, whether through genetics, cybernetics, or external devices, could produce the same result we expect from advanced AI.

Gizmodo decided to contact the futurist Michael Anissimov, author of the Accelerating Future blog and co-founder of the Singularity Summit, to ask about the likely prospects for enhancing the human brain. Anissimov takes the issue very seriously and believes we should be as wary of enhanced intelligence as we are of AI.

Michael, when we talk about increasing intelligence, what do we really mean by this? Are we talking about creating new Einsteins? Or is it something even more ambitious?

The real goal of the intelligence-enhancement effort is to create "super-Einsteins", people far smarter than anyone who has ever lived. Achieving that will require several key steps.

The first step is to create a direct neural connection with information. Think of it as a kind of "telepathic Google search".

The next step will be brain-computer interfaces that augment the part of the brain responsible for visual perception. This would improve not only our vision but also the efficiency with which we process information. Imagine being able to construct a detailed mental schematic of a complex object almost instantly, or to memorize a diagram you have just seen in near-perfect detail. Similar augmentation would also be needed in the other sensory cortices, those responsible for touch and hearing.


The third step will be augmenting the prefrontal cortex. This is the holy grail of intelligence-enhancement research: improving the efficiency of perception and the integration of sensory information into concepts. The end result would be people capable of almost unbelievable intellectual feats. We are talking, for example, about the ability to sway other people by sheer force of mind, to see every winning position in the stock market, or to produce world-changing inventions practically overnight. This may seem impossible now, just as our modern achievements would have seemed impossible to people of the Stone Age. But the possibility is quite real.

Looking ahead over the next ten years, how will these technologies evolve? Can the human brain really compete with AI?

The human brain really cannot be altered very much. It is, after all, the product of seven million years of evolutionary optimization and fine-tuning, which, admittedly, has been very successful given its constraints. Attempts to "overclock" it tend to end in failure; consider, for instance, the effects that accompany amphetamine dependence.

Trailer for the film Limitless (released in Russia as "Areas of Darkness")

Chemicals alone are incapable of enhancing a person's cognitive abilities, and the evidence offered for today's "brain boosters" is in fact highly controversial and doubtful. Real success will require brain implants connected to millions of neurons, which in turn means millions of tiny electrodes and a dedicated control system to synchronize them all. The most advanced brain-computer interfaces today have about 1,000 such connections at best. In other words, current designs would have to be scaled up more than a thousandfold to achieve anything really interesting. Even assuming exponential growth, that will take at least 15 to 20 years.
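A rough back-of-envelope calculation shows where an estimate like this comes from. The doubling period below is purely an illustrative assumption (a Moore's-law-like rate), not a figure from the interview:

```python
import math

# Figures from the text: ~1,000 connections today, millions needed.
current_connections = 1_000
target_connections = 1_000_000   # a thousandfold scale-up

# Assumption for illustration only: connection counts double every 2 years.
doubling_period_years = 2

# Number of doublings n such that 2**n covers the required scale-up.
doublings = math.ceil(math.log2(target_connections / current_connections))
years_needed = doublings * doubling_period_years

print(doublings)     # 10 doublings
print(years_needed)  # 20 years at the assumed rate
```

At the assumed rate, a thousandfold scale-up takes about ten doublings, landing squarely in the 15-to-20-year window mentioned above; a slower doubling period pushes the estimate out further.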

The development of enhanced intelligence depends above all on progress in nanofabrication, and the results of engineers working on brain-computer interfaces, such as Ed Boyden at the Massachusetts Institute of Technology, depend entirely on progress in that direction. Given how immature atomically precise manufacturing still is, nanoscale self-assembly looks like the most plausible route to brain-computer interfaces with millions of electrodes. Nanoscale self-assembly, it should be noted, is not an atomically precise manufacturing method, but it is accurate enough by the standards of bulk manufacturing and photolithography.

What are the potential psychological side effects for people with significantly enhanced intelligence? Could they even still be considered human?

One of the most common side effects will be insanity. The human brain is an incredibly finely tuned machine, and any change in its mode of operation tends to produce what we usually call "madness." There are many more varieties of insanity than there are patterns of sane behavior. And madness that stays hidden can look, from the outside, like perfectly healthy behavior, so we will most likely have trouble identifying such individuals.

Even among those who remain fully sane, various side effects can occur, including seizures, information overload, and possibly egomania and total alienation. Intelligent people already tend to feel alienated from the world around them, and knowing you are the smartest person on the planet would amplify that effect many times over.

Most very smart people are not as sociable and cheerful as, say, Richard Feynman or, in our day, Neil deGrasse Tyson. Hemingway once said: "An intelligent man is sometimes forced to be drunk to spend time with fools." But what happens when alcohol is no longer enough to sustain camaraderie and mutual affection? The result can be a loss of empathy, which ultimately shades into psychopathy.

So which comes first: amplified or artificial intelligence?

It is very difficult to predict. There is a strong desire for enhanced intelligence to arrive before AI, a desire fueled largely by all the fiction and games whose protagonists possess such abilities. But wishing for enhanced intelligence does not actually make it any easier to build; the technological complexity remains. Still, personally, I think enhanced intelligence will come first.

Be that as it may, both directions are very difficult to realize. We may not see either until the 2060s, the 2070s, or even later. Ultimately, though, both will reach their goals. If you think about it, the demand for greater intelligence is enormous, and it would probably take a global uprising of Luddites (participants in the spontaneous early-19th-century protests against the introduction of machinery during England's industrial revolution - ed.) to halt the technological progress that will carry us to these results.

What are the advantages and disadvantages of these two different development approaches?

The main advantage of the AI path is that development is disproportionately cheaper and easier: it happens on paper and in code, and the pieces only need to be assembled correctly. The most significant research on intelligence enhancement, by contrast, is currently simply illegal, because serious work would require very risky experiments in neurosurgery and brain implants. Such implants can fail, causing seizures, insanity, or even death. Meaningfully enhancing human intelligence is not a matter of taking a few pills and gaining superpowers; if you want serious results, you really do have to experiment with brain implants.

Most research in this area is heavily regulated and incredibly expensive, and all animal experiments are costly. Theodore Berger, for example, has been working for many years on a hippocampal implant to restore and improve memory. The first tests on living tissue were carried out in 2004, but news since then has been sparse. Every few years a note appears in the press, but I am very skeptical about the final result. Now compare that with the level of progress we have achieved in artificial intelligence!

Does enhanced intelligence look safer than AI in terms of predictability and control? Which is more important to create first: enhanced intelligence or superpowerful AI?

Enhanced intelligence seems more unpredictable and uncontrollable than superpowerful AI, and in the long run it actually looks even more dangerous. I recently wrote an article reflecting on global political transformation triggered by immense power concentrated in the hands of a small group of people with enhanced intelligence or exclusive access to molecular manufacturing. In it I coined the term "Maximilian," meaning "the best of the best," for a supremely influential leader who uses intelligence-enhancement technology to elevate himself above everyone else.

Cognitively enhanced Reginald Barclay from Star Trek: The Next Generation


The key point of my argument in the article is that with enhanced intelligence you are dealing with people, and people, as we know, are imperfect. Enhanced individuals might retain some level of human morality, but they would most likely turn the full power of their minds to hedonistic ends and possibly even to genocide.

Artificial superintelligence, by contrast, can be created from scratch and simply made to follow a set of built-in motivations that are benevolent from the start, stable, and self-reinforcing.

People ask: "Wouldn't an ASI reject those motivations?" No, it would not, because those motivations will form the core of its values, provided they are properly programmed. There will be no "ghost in the machine" capable of ignoring or overriding its programmed motives.

The philosopher Nick Bostrom gave an excellent analysis of this in his paper "The Superintelligent Will." Its key idea is that selfish goals cannot appear on their own if the AI's core goal system is fundamentally selfless from the outset and the very essence of the machine's existence is to preserve that selflessness. Evolution shows that selfish goals matter for survival, especially given the evolutionary design constraints of living organisms, but that does not mean we are unable to program selfless agents from scratch.

What difficulties (technological, medical, or ethical) stand in the way of development?

The biggest challenge is developing suitable manufacturing technologies, and we are not even close. Another pitfall is working out exactly what each individual neuron does and pinpointing the location of those neurons in a given person. We have not come close to that either.

Third, we need a fast way to test the many theories of brain function, what Ed Boyden calls "high-throughput circuit screening" of neural circuits. The most obvious solution would be somehow to render a human being unconscious and experiment on them free of pangs of conscience or moral dilemmas, but I have a feeling that idea would meet with hostility from most ethics committees.

Failing that, we need to find a way to create an unprecedentedly detailed model of the human brain. And we are not talking about today's laborious "brain simulation" projects that attract so much hype, but about a new level of fidelity that we will not reach until 2050-2080. An Oxford analysis suggests the 2080s are more likely. All of this, of course, now looks like reading tea leaves, but it is still closer to reality than not.
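To get a feel for why such a model is so far off, here is a hedged back-of-envelope estimate of the storage side alone. The neuron and synapse counts are commonly cited anatomical figures; the bytes-per-synapse value is a deliberately optimistic assumption (a single static weight, no geometry or dynamics):

```python
# Commonly cited anatomical figures; not from the interview itself.
NEURONS = 86e9          # ~86 billion neurons in an adult human brain
SYNAPSES = 1e14         # ~100 trillion synapses
BYTES_PER_SYNAPSE = 4   # assumption: one 32-bit weight, no dynamics

total_bytes = SYNAPSES * BYTES_PER_SYNAPSE
petabytes = total_bytes / 1e15
print(f"{petabytes:.1f} PB")  # 0.4 PB just for static connection weights
```

Even this crude lower bound, which ignores neuron state, dendritic geometry, and all temporal dynamics, lands at hundreds of terabytes; a model detailed enough for the screening described above would be orders of magnitude larger.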

NIKOLAY KHIZHNYAK