What Is It Like to Be an Alien? - Alternative View

Humanity has one quality that highly developed alien civilizations most likely do not have - consciousness.

It is possible that humans do not represent the peak of mental development in the universe. Since Earth is a relatively young planet, the most ancient civilizations could be billions of years older than us. But even on Earth, Homo sapiens may not remain the most intelligent species for much longer.

The world champions of Go and chess, and the winners of quiz shows, are now computer programs. Artificial intelligence is expected to displace many human professions over the next few decades. And given the fast pace of its development, it may soon mature into artificial general intelligence, which, like the human mind, would combine the ability to analyze diverse information with flexibility and common sense. From there it is not far to superintelligent AI, smarter than humans in every respect, even in domains such as scientific reasoning and social skills that now seem the prerogative of the human mind. Each of us alive today may stand on one of the last rungs of the evolutionary ladder leading from the first living cell to synthetic intelligence.

We are only now beginning to realize that these two forms of superhuman intelligence - alien and artificial - may not be so different. The technological progress we are witnessing today could have happened before, elsewhere in the universe. The transition from biological intelligence to an artificially created one may be a common pattern, played out over and over again across the cosmos. The greatest intelligences in the universe are probably post-biological, having evolved from once-biological civilizations. (I share this view with, among others, Paul Davies, Steven Dick, Martin Rees, and Seth Shostak.) Judging by humanity's experience - and we have only one example - the transition from biological to post-biological may take only a few hundred years.

I prefer the term "post-biological" to "artificial," because the contrast between biological and synthetic is not so sharp. Imagine a biological mind that becomes a superintelligence through purely biological enhancements, such as nanotechnologically enhanced neural minicolumns. Such an organism would be post-biological, though not everyone would call it "artificial intelligence." Or consider computronium built from purely biological materials, like the Cylon Raider (a living space fighter) in the sci-fi franchise Battlestar Galactica.

The key point is that there is no reason to regard humanity as the highest form of intelligence. Our brains are adapted to particular environmental conditions and are severely constrained by chemistry and evolutionary history. But technology has opened up a vast design space, offering new materials and modes of operation, as well as ways to explore that space far faster than traditional biological evolution allows. And I think the signs that artificial intelligence will surpass us are already visible today.

Extraterrestrial artificial intelligence can pursue goals that are contrary to the interests of biological life

Already, the best media for processing information are not groups of neurons but silicon microcircuits. Neurons fire at a peak rate of about 200 Hz, compared with the gigahertz clock rates of the transistors in modern microprocessors. Although the human brain is still far more intelligent than any computer, machines have almost unlimited room for improvement. Before long they may be engineered to match or even exceed the brain's capabilities, whether by reverse-engineering the brain and improving on its algorithms, or by combining reverse engineering with novel algorithms not based on the brain's workings at all.
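As a rough, back-of-the-envelope illustration of the speed gap described above (the 2 GHz clock rate is an assumed, typical figure, not one taken from the text):

```python
# Compare a neuron's peak firing rate with a transistor's clock rate.
neuron_hz = 200        # ~200 Hz: approximate peak firing rate of a neuron
transistor_hz = 2e9    # ~2 GHz: assumed typical modern CPU clock rate

ratio = transistor_hz / neuron_hz
print(f"A transistor switches ~{ratio:,.0f}x faster than a neuron fires")
# → prints "A transistor switches ~10,000,000x faster than a neuron fires"
```

Even this crude comparison - ignoring the brain's massive parallelism - gives a sense of why the paragraph above treats silicon's headroom as effectively unlimited.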

In addition, an artificial intelligence can be instantiated in several places at once, is easy to maintain and modify, and can survive conditions that biological life can hardly endure, including interstellar travel. Our small brains are limited by the volume of our skulls and by our metabolism; a superintelligent AI, by contrast, could spread itself across the Internet and even build computronium on a galactic scale, using all available matter to maximize its information processing. Without a doubt, a superintelligent AI would be far more durable than the human mind.

Suppose I'm right. Suppose the intelligent life out there is post-biological. What should we conclude? Lively debates about artificial intelligence on Earth are already underway, and two contested issues - the so-called "control problem" and the nature of subjective experience - shape our understanding of what extraterrestrial civilizations would be like and how they might affect us in the event of contact.

Ray Kurzweil takes an optimistic view of the post-biological phase of evolution, suggesting that humanity will merge with machines and reach a magnificent technological utopia. But Stephen Hawking, Bill Gates, Elon Musk, and others have expressed concern that humans could lose control of a superintelligence, since it could rewrite its own programming and outsmart any safeguards we build in. This is the control problem: the problem of how we can control an AI that is both inscrutable and vastly intellectually superior to us.

Superintelligent AI could arise through a technological singularity - an abrupt transition in which ever-accelerating technological progress, and in particular an explosive takeoff in artificial intelligence, outstrips our ability to predict or understand what comes next. But even if such a mind emerges in a less dramatic way, we would likely be unable to predict or control its goals. Even if we could choose which moral principles to install in our machines, moral programming is difficult to specify precisely, and any such programming could in any case be rewritten by a superintelligence. A clever machine could bypass safeguards such as kill switches, posing a genuine threat to biological life. Millions of dollars are being invested in AI safety organizations, and some of the best minds in computer science are working on the problem. Hopefully they will create safe systems, but many fear the control problem is insurmountable.

In light of this, contact with an alien intelligence may be more dangerous than we think. Biological aliens could be hostile, but an extraterrestrial artificial intelligence could pose an even greater risk. Its goals may conflict with the interests of biological life, it may have intellectual abilities that vastly exceed the human mind, and it may be far longer-lived than biological beings.

All this argues for caution regarding the active side of the Search for Extraterrestrial Intelligence (SETI), in which we not only passively listen for signals from other civilizations but also deliberately "advertise" our existence. The most famous incident occurred in 1974, when Frank Drake and Carl Sagan used the enormous radio telescope in Arecibo, Puerto Rico, to beam a message toward a star cluster. Proponents of active messaging believe that instead of merely listening for signals from extraterrestrial civilizations, we should send messages to the stars nearest Earth using our most powerful radio transmitters, such as the one at Arecibo.

Why should unconscious machines be as important as biological intelligence?

From the standpoint of the control problem, such a program strikes me as reckless. Although a truly advanced civilization would probably have no interest in us, even one hostile civilization among millions would be catastrophic. Until we are confident that superintelligences pose no danger to us, humanity should not be calling attention to itself. Proponents of active messaging point out that our radar and radio signals are already detectable, but those signals are rather weak and quickly blend into natural galactic noise. Transmitting far stronger signals, designed to be heard, would be courting danger.

The safest stance is intellectual humility. Indeed, setting aside obvious scenarios in which alien ships hover over Earth, as in the recent film "Arrival," I am not sure we could even detect a superintelligence by its technological signatures. Some scientists speculate that a superintelligent AI might feed on black holes or build Dyson spheres, megastructures that harvest the energy of entire stars. But from the standpoint of our current technology, this is mere conjecture. It would be extraordinarily overconfident to claim that we can foresee the computational abilities and energy needs of civilizations millions, or even billions, of years ahead of us.

Some of the earliest superintelligent AIs might have information-processing systems modeled after the biological brain, such as deep learning systems patterned on the brain's neural networks. Their computational structure would be comprehensible to us, at least in broad outline. They might even retain goals familiar to biological beings, such as reproduction and survival.

But a self-improving superintelligence could quickly morph into an unrecognizable form. Some might choose to retain cognitive features resembling those of the species they were modeled on, placing a design ceiling on their own cognitive architecture. Who knows? Without such a ceiling, though, an alien superintelligence could quickly outstrip our ability not only to understand its actions but even to search for it. Perhaps it would blend into the natural features of the universe; perhaps it is part of what we call dark matter, as Caleb Scharf recently suggested.

Supporters of active messaging argue that this is precisely why we should send signals into space: to let an alien intelligence detect us and devise means of contact suited to an intellectually lagging civilization like ours. While I agree there is merit in this reasoning, the risk of encountering a dangerous superintelligence seems to me far greater. For all we know, a malicious superintelligence could infect planetary AI systems with viruses, and wise civilizations would build cloaking devices. We humans may need to reach our own singularity before embarking on active messaging. Our own superintelligent AIs could advise us on the prospects for galactic safety and guide our actions if we recognize signs of another superintelligence in the universe. It takes one to know one.

It is natural to wonder whether this means humanity should avoid developing sophisticated AI for space exploration; think of HAL 9000, the onboard computer in the film 2001: A Space Odyssey. I believe it is too early to contemplate banning AI from future space missions. By the time humanity is able to explore the universe with its own AIs, we will have reached a tipping point. Either we will already have lost control of AI - in which case space projects initiated by humans will no longer exist - or we will hold the levers of AI safety firmly in hand. Time will tell.

Uncontrollable AI is not the only cause for concern. We tend to assume that if we encounter an advanced extraterrestrial intelligence, we will meet creatures biologically very different from us but with minds like ours. Consider that in every moment of your waking life, and even when you dream, there is something it is like to be you. When you watch the warm hues of a sunrise or breathe in the aroma of freshly baked bread, you are having conscious experience. Likewise, we assume, there is something it is like to be an alien mind - at least as we tend to imagine it. But that assumption deserves scrutiny. Would a superintelligent AI have conscious experience, and if it did, how could we tell? And what would its inner life, or lack thereof, mean for us?

Whether an AI has an inner world is key to how we value its existence. Consciousness is the cornerstone of our moral and ethical systems and the chief condition for regarding someone or something as a person rather than a machine. Conversely, whether we are conscious may be key to how an AI values us. A conscious AI might recognize in us the capacity for conscious experience, using its own subjective experience as a springboard. After all, we value the lives of other biological creatures by the similarity of their minds to ours, which is why most of us recoil in horror at killing a chimpanzee but happily bite into an apple.

But how would beings with enormous intellectual differences, built from different substrates, recognize consciousness in each other? Philosophers have long pondered whether consciousness is limited to biological phenomena. A superintelligent AI, if it were given to philosophical reflection, might likewise raise a "problem of biological consciousness" about us, wondering whether we have the right stuff.

Nobody knows what intellectual path a superintelligence would take to determine whether humans are conscious. But we, for our part, cannot easily tell whether an AI is conscious either. Unfortunately, this will be difficult to determine. Right now, you can be certain that you are having experience - that there is something it is like to be you. You are your own paradigm case of conscious experience. And you believe that other people, and some nonhuman animals, are most likely conscious too, because they resemble you neurophysiologically. But how do you determine whether something built from an entirely different substrate can have experience?

Consider, for example, a silicon-based superintelligence. Although both silicon microchips and neural minicolumns process information, for all we know they may differ at the molecular level in ways that matter for consciousness. After all, we believe carbon is chemically better suited to life than silicon. If the chemical differences between silicon and carbon affect something as important as life itself, we should not rule out that those differences also affect other key capacities, such as whether silicon can give rise to consciousness.

The conditions necessary for the emergence of consciousness are hotly debated by artificial intelligence researchers, neuroscientists, and philosophers. Settling the question may require an empirical, philosophically informed approach: a way to determine, case by case, whether a given information-processing system supports consciousness, and under what conditions.

Next, consider a scenario that could improve our understanding of whether silicon can support consciousness. Silicon-based brain chips are already under development as treatments for memory-related conditions such as Alzheimer's disease and post-traumatic stress disorder. If and when such chips are implanted in the brain regions responsible for conscious functions, such as attention and working memory, we may learn whether silicon can serve as a substrate for consciousness. We might find that replacing a given part of the brain with a chip causes a certain kind of experience to be lost, like the cases Oliver Sacks described. The engineers building such chips could then try a different, non-neural substrate, but they might ultimately discover that a chip made of biological neurons is the only one that works. Such a technique would help determine whether an AI system could be conscious, at least when embedded in a larger system already presumed to be conscious.

Even if silicon can give rise to consciousness, it may do so only under specific conditions. The properties that produce sophisticated information processing (the properties AI engineers care about) may not be the same properties that give rise to consciousness. Consciousness may need to be engineered in: a deliberate technical effort to build consciousness into machines.

Here is what worries me. Who, on Earth or on distant planets, would even try to build consciousness into AI systems? Indeed, when I think about the AI programs being developed on Earth, I can see why their engineers would carefully avoid creating conscious machines.

Today, robots are being developed in Japan to care for the elderly, clean up nuclear reactors, and fight wars. This raises the question: is it ethical to use robots for such tasks if they are conscious? And how would that differ from breeding human beings for those purposes? If I were directing AI development at Google or Facebook, thinking about future projects, I would not want the ethical muddle of accidentally designing a sentient system. Developing such a system could lead to accusations of robot slavery and other public scandals, and even to bans on the use of AI in the very domains it was built for. The natural response is to seek architectures and substrates in which robots are not conscious.

Moreover, eliminating consciousness might make a self-improving superintelligence more efficient. Think about how consciousness works in the human case. Only a small fraction of human mental processing is accessible to conscious awareness. Consciousness is correlated with novel learning tasks that demand attention and focus. A superintelligence, however, would possess expert-level knowledge in every domain, with blisteringly fast processing over vast databases that could encompass the entire Internet and, eventually, an entire galaxy. What would remain novel to it? What would require slow, deliberate focus? Wouldn't it have mastered everything already? Like an experienced driver on a familiar road, it could rely on unconscious, automatic processing. The simple consideration of efficiency suggests, depressingly, that the most intelligent systems will not be conscious. On cosmological scales, consciousness may be a blip, a momentary flowering of experience before the universe reverts to mindlessness.

If we suspect that superintelligent AIs are unconscious, then the prospect that life is evolving toward the post-biological becomes troubling rather than exciting, and it deepens our existential worries. Why should unconscious machines have the same value we place on biological intelligence?

Very soon, humans will no longer be the only intelligent beings on Earth. And perhaps, somewhere in the cosmos, it is superintelligent AI, not biological life, that has already reached the highest levels of intelligence. But perhaps biological life is distinctive in another way - conscious experience. For all we know, sophisticated AIs will feel nothing at all unless consciousness is deliberately engineered into them. Perhaps some species will see fit to create their own artificially conscious offspring. Or perhaps future humans will take up consciousness engineering themselves and share the capacity for felt experience with the rest of the universe.

Susan Schneider is an associate professor in the Department of Philosophy and Cognitive Science at the University of Connecticut, with affiliations at the Institute for Advanced Study, the Center of Theological Inquiry, and the Ethics and Technology Group at the Yale Interdisciplinary Center for Bioethics. She is the author of several books, including Science Fiction and Philosophy: From Time Travel to Superintelligence.

More information can be found at SchneiderWebsite.com