Threats To Existence: Analyzing Scenarios For Human Extinction And Other Similar Dangers

With the acceleration of technological progress, humanity may be rapidly approaching a critical point in its development. In addition to well-known threats such as nuclear holocaust, the prospect of rapidly advancing technologies such as nanosystems and machine intelligence presents us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, depends on how we deal with these challenges. With rapidly advancing technology, we need a better understanding of the dynamics of the transition from human to “post-human” society. It is especially important to know where the traps lie: the paths where things can go deadly wrong.

While we have a long history of exposure to a variety of personal, local, and tolerable global hazards, this article analyzes a newly emerging category: existential risks. This is what we call the risks of events that could lead to our extinction or drastically damage the potential of intelligent life that has developed on Earth. Some of these dangers are relatively well known, while others are completely overlooked. Existential threats have a number of features that make conventional risk management ineffective against them. The final chapter of this article discusses some of the ethical and political implications of this problem. A clearer understanding of the threat landscape will enable us to formulate better strategies.

Life is dangerous, and danger is everywhere. Fortunately, not all risks are equally serious. For our purposes, we can use three dimensions to describe risks: scale, intensity, and probability. By "scale" I mean the size of the group of people at risk. By "intensity" I mean how much harm would be done to each individual in the group. And by "probability" I mean the best current subjective estimate of the likelihood of a negative outcome.

1. Typology of risks

We can distinguish six qualitatively different groups of risks, depending on their scale and intensity (Table 1). A third dimension, probability, can be superimposed on these two dimensions. All things being equal, a risk is more serious if it has a significant likelihood and if our actions can increase or decrease it.

Table 1. Six risk categories

Scale / Intensity | Bearable intensity             | Lethal intensity
Global            | Ozone depletion                | X
Local             | Economic downturn in a country | Genocide
Personal          | Car theft                      | Death

“Personal”, “local” or “global” refers to the size of the population directly affected; a global risk affects all of humanity (and our descendants). “Bearable intensity” and “lethal intensity” refer to how badly the population at risk is affected. A bearable risk can also cause great destruction, but the possibility remains of recovering from the damage or finding ways to overcome the negative consequences. In contrast, a terminal risk is one in which those exposed either die or are irreversibly damaged in a way that radically reduces their potential to live the life they seek to live. In the case of personal risks, the terminal outcome may be, for example, death, irreversible serious brain damage, or life imprisonment. An example of a local terminal risk is genocide leading to the destruction of an entire people (which happened to several Indian peoples). Another example is permanent enslavement.
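As an illustrative sketch only (my restatement of Table 1, not part of the original text), the two-dimensional typology can be written as a small classification routine; the existential category X is exactly the global-and-lethal cell:

    from enum import Enum

    class Scale(Enum):
        PERSONAL = 1
        LOCAL = 2
        GLOBAL = 3

    class Intensity(Enum):
        BEARABLE = 1
        LETHAL = 2

    def is_existential(scale: Scale, intensity: Intensity) -> bool:
        """Category X from Table 1: global scale combined with lethal intensity."""
        return scale is Scale.GLOBAL and intensity is Intensity.LETHAL

    # Examples from Table 1:
    assert not is_existential(Scale.GLOBAL, Intensity.BEARABLE)  # ozone depletion
    assert not is_existential(Scale.LOCAL, Intensity.LETHAL)     # genocide
    assert is_existential(Scale.GLOBAL, Intensity.LETHAL)        # category X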

2. Existence risks

In this article we will discuss risks of the sixth category, which is marked in the table as X. This is a category of global deadly risks. I will call them existential threats.

Existential threats are different from global tolerable risks. Examples of the latter are: threats to the biodiversity of the terrestrial ecosphere, moderate (and even severe) global warming, and perhaps stifling cultural and religious eras such as the "dark ages", even if they encompass the whole of society, provided they sooner or later end (though see the section on Shrieks below). To say that a global risk is tolerable is obviously not to say that it is acceptable or not very serious. A world war fought with conventional weapons or a decade of a Nazi-style world Reich would be extremely dire events, even though they fall into the category of tolerable global risks, since humanity could eventually recover. (On the other hand, these events would be a local terminal risk for many individuals and for persecuted ethnic groups.)

I will use the following definition of existential risk:

A threat to existence is a risk in which a negative result either destroys the intelligent life that has arisen on Earth, or irreversibly and significantly reduces its potential.

An existential risk is one that threatens humanity as a whole. Such catastrophes would have huge negative consequences for the entire future of earthly civilization.

3. The uniqueness of the problem of threats to existence

Risks in this sixth category have emerged only recently. This is one of the reasons it is useful to treat them as a separate category. We have not developed mechanisms, natural or cultural, for dealing with such risks. Our institutions and defensive strategies have been shaped by exposure to risks such as dangerous animals, hostile people or tribes, poisoned food, car accidents, Chernobyl, Bhopal, volcanic eruptions, earthquakes, droughts, World War I, World War II, flu epidemics, smallpox, the black plague and AIDS. Disasters of this type have happened many times, and our cultural attitude to risk has been shaped by trial and error in managing such threats. But while these events were tragedies for their direct participants, from a broad point of view - from the point of view of all humanity - even the most terrible of these catastrophes were mere ripples on the surface of the great sea of life. They did not significantly affect the total number of happy and suffering people, nor did they determine the long-term fate of our species.

Except for species-destroying comets and asteroid collisions (which are extremely rare), there probably weren't significant threats to existence until the mid-20th century, and there was definitely nothing we could do about any of them.

The first human-created threat to existence was the first atomic bomb. At the time, there was some concern that the explosion would set off a chain reaction by "setting fire to" the atmosphere. Although we now know that such an outcome was physically impossible, at the time the assumption was consistent with the definition of an existential threat. For something to be a risk, given the available knowledge and understanding, it is enough that there is a subjective probability of an unfavorable outcome, even if it later turns out that objectively there was not a single chance of something bad happening. If we do not know whether something is objectively risky, it is risky in the subjective sense. This subjective sense is, of course, what we must base our decisions on. At any given time, we must use our best current subjective estimate of what the objective risk factors are.

A much greater threat to existence arose with the build-up of nuclear arsenals in the USSR and the USA. A full-scale nuclear war was possible with a significant probability, and with consequences that might have been so persistent as to be characterized as global and terminal. Among the people best acquainted with the information available at the time, there was real concern that a nuclear Armageddon could happen and that it might wipe out our species or permanently destroy human civilization. Russia and the United States continue to possess huge nuclear arsenals that could be used in a future confrontation, by accident or on purpose. There is also the risk that other countries may one day build up large arsenals. Note, however, that a small exchange of nuclear strikes, for example between India and Pakistan, is not a threat to existence, since it would not destroy humanity or irreversibly damage human potential. Such a war would, however, be a local terminal risk for the cities targeted. Unfortunately, we shall see that nuclear Armageddon and cometary or asteroid impacts are only a prelude to the threats to existence of the 21st century.

The special nature of the challenges posed by existential threats can be illustrated by the following remarks.

Our approach to existential threats cannot be based on trial and error. There is no way to learn from mistakes. The reactive approach - watching what happens, limiting the damage, and learning from experience - does not work. Rather, we must take a proactive approach. This requires the foresight to detect new types of risks and a willingness to take decisive preventive measures and to pay their moral and economic costs.

We cannot confidently rely on our institutions, moral standards, social attitudes, or national security policies that have evolved from our experience in managing other types of risk. Existence threats are a different beast. It may be difficult for us to take them as seriously as they deserve, since we have never experienced such disasters. Our collective fear response is likely poorly calibrated to the scale of the threat.

Reduction of existential threats is a global public good (Kaul 1999) and may therefore be undersupplied by the market (Feldman 1980). Existential threats are a threat to everyone and may require an international response. Respect for national sovereignty is not a legitimate excuse for failing to take countermeasures against a critical existential risk.

If we take into account the well-being of future generations, then the damage from existential threats is multiplied by another factor, which depends on whether and how much we discount future benefits (Caplin, Leahy 2000; Schelling 2000: 833-837).
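To make this multiplier concrete, here is a minimal sketch (my illustration under simple assumptions, not the cited authors' model): treat the value at stake as a discounted sum of future generations' well-being; an existential catastrophe deletes every term of that sum, so the damage grows sharply as the discount factor approaches one:

    def value_at_stake(per_generation_value: float,
                       discount_factor: float,
                       generations: int) -> float:
        """Discounted sum of well-being over future generations.
        An existential catastrophe forfeits this entire sum."""
        return sum(per_generation_value * discount_factor ** t
                   for t in range(generations))

    # Illustrative numbers only: one unit of value per generation.
    print(value_at_stake(1.0, 0.95, 1000))   # ~20   (heavy discounting)
    print(value_at_stake(1.0, 0.999, 1000))  # ~632  (near-zero discounting)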

It is surprising, given the undeniable importance of the topic, how little systematic work has been done in this area. This is in part because the most serious risks arise (as we will show later) from anticipated future technologies that we have only recently begun to understand. Another part of the explanation may be the inevitably interdisciplinary and speculative nature of the research subject. And in part, the neglect can be attributed to a reluctance to think seriously about depressing topics. This does not mean that we should be discouraged, but that we need to take a hard look at what can go wrong so that we can create robust strategies to improve our chances of survival. To do this, we need to know where to focus our efforts.

4. Classification of risks to existence

We will use the following 4 categories to classify existential risks:

Explosions (Bangs) - The intelligent life that has arisen on Earth is exterminated as a result of a relatively sudden catastrophe, which can occur either as a result of an accident or on purpose.

Crunches - Humanity's ability to evolve into posthumanity is irreversibly damaged, although humans continue to live somehow.

Shrieks - Some form of posthumanity will be achieved, but it will only be an extremely narrow fraction of the spectrum of what is possible and desired.

Whimpers - Posthuman civilization arises, but develops in a direction that leads gradually, but irrevocably, to the complete disappearance of things that we value, or to a state where these values are realized only to a small extent from the level that could be achieved.

Armed with such a classification, we can begin to analyze the most likely scenarios in each category. The definitions will also become clearer as we progress.

Bangs

This is the most obvious form of global risk, and the concept is the easiest to understand. Below are some of the most likely ways for the world to end in a bang. I have tried to arrange them in ascending order (by my estimate) of the probability that they cause the extinction of intelligent life on Earth; but my intention with the ordering was to create a basis for further discussion rather than to make definitive claims.

1. Intentional abuse of nanotechnology

In its mature form, molecular nanotechnology will allow the creation of self-replicating robots the size of bacteria that can feed on sewage or other organic matter (Drexler 1985, 1992; Merkle et al. 1991: 187-195; Freitas 2000). Such replicators could eat up the biosphere or destroy it in other ways, such as poisoning it, burning it, or blocking out sunlight. A person with criminal intentions in possession of this nanotechnology could cause the destruction of intelligent life on Earth by releasing such nanobots into the environment.

The technology for creating destructive nanobots seems to be much simpler than the technology for creating an effective defense against such an attack (a global nanotechnological immune system, an "active shield" (Drexler 1985)). Therefore, there will likely be a period of vulnerability during which these technologies must be prevented from falling into the wrong hands. These technologies may also prove difficult to regulate, because they do not require rare radioactive isotopes or huge, easily detectable factories, as the manufacture of nuclear weapons does (Drexler 1985).

Even if effective defenses against a limited nanotechnological attack are in place before dangerous replicators are developed and acquired by suicidal regimes or terrorists, there is still the danger of an arms race between nanotechnology-possessing states. It has been argued (Gubrud 2000) that molecular manufacturing would lead to greater arms-race instability and greater crisis instability than nuclear weapons. Arms-race instability means that each competing side would be dominated by the desire to strengthen its armaments, leading to a rapid escalation of the race. Crisis instability means that each side would have a strong incentive to strike first. Two roughly equally strong opponents, having acquired nanotechnological weapons, would, on this view, begin mass production and weapons design, continuing until a crisis occurs and a war breaks out, one potentially capable of causing universal final destruction. That this arms race could be predicted is no guarantee that an international security system will be put in place in time to prevent the catastrophe: the nuclear arms race between the USSR and the USA was predicted, but it happened nonetheless.
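As a toy illustration of crisis instability (my own sketch with hypothetical payoffs, not Gubrud's analysis): when a successful first strike disarms the opponent, striking first can dominate waiting for both sides, even though mutual restraint is better for everyone:

    # Toy payoff matrix for two rival states (hypothetical numbers).
    # Each entry: (payoff to A, payoff to B); higher is better.
    payoffs = {
        ("wait", "wait"): (0, 0),        # uneasy standoff
        ("strike", "wait"): (1, -10),    # first strike disarms the opponent
        ("wait", "strike"): (-10, 1),
        ("strike", "strike"): (-8, -8),  # mutual devastation
    }

    # "Strike" beats "wait" for A against either choice by B,
    # so both sides are pushed toward striking first.
    for b_choice in ("wait", "strike"):
        assert payoffs[("strike", b_choice)][0] > payoffs[("wait", b_choice)][0]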

2. Nuclear Holocaust

The United States and Russia still have huge reserves of nuclear weapons. But will a full-fledged nuclear war lead to the real extermination of humanity? Note that:

(a) for it to become an existential risk, it is sufficient that we are not sure that it will not happen.

(b) the climatic effects of a large-scale nuclear war are little known (there is a possibility of a nuclear winter).

(c) future arms races between countries cannot be ruled out, and these could lead to even larger arsenals than those that existed at the height of the Cold War. The world stock of plutonium is steadily increasing and has reached 2,000 tons, about ten times more than remains in warheads (Leslie 1996: 26). Even if some people survive the short-term effects of a nuclear war, it could lead to the collapse of civilization. A human race living under stone-age conditions may or may not be more resistant to extinction than other animal species.

3. We live in a simulation and it turns off

It can be argued that the hypothesis that we are living in a computer simulation should be assigned a significant probability (Bostrom 2001). The main idea behind this so-called simulation argument is that huge amounts of computing power may be available in the future (Moravec 1989, 1999), and that it could be used, among other things, to run large numbers of fine-grained simulations of past human civilizations. Under a few not-so-incredible assumptions, the conclusion is that most minds like ours are simulated minds, and so we should assign a significant probability to being such simulated minds rather than the (subjectively indistinguishable) minds of naturally evolved beings. And if so, we run the risk that the simulation may be shut down at any moment. The decision to terminate our simulation may be prompted by our actions or by external factors.
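The quantitative core of that argument can be sketched as a simple fraction (an illustration with made-up numbers, not Bostrom's actual figures): if civilizations that run ancestor simulations create many simulated minds per real mind, then most minds like ours are simulated:

    def fraction_simulated(real_minds: float,
                           fraction_running_sims: float,
                           sims_per_real_mind: float) -> float:
        """Fraction of all human-like minds that are simulated."""
        simulated = real_minds * fraction_running_sims * sims_per_real_mind
        return simulated / (simulated + real_minds)

    # Hypothetical inputs: if even 1% of civilizations run simulations
    # producing a million simulated minds per real mind, almost all
    # minds like ours are simulated.
    print(fraction_simulated(1.0, 0.01, 1e6))  # ~0.9999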

While it may seem frivolous to place such a radical hypothesis next to the concrete threat of nuclear holocaust, we must base our conclusions on reasoning, not untrained intuition. Until a rebuttal of the arguments presented by Bostrom (2001) appears, it would be intellectually dishonest to disregard simulation shutdown as a possible cause of human extinction.

4. Poorly programmed superintelligence

When we create the first superintelligent entity (Moravec 1989, 1998, 1999; Vinge 1993; Bostrom 1998; Kurzweil 1999; Hanson et al. 1998), we may make a mistake and give it goals that direct it towards the destruction of humanity, given that its colossal intellectual advantage gives it the power to do so. For example, we may mistakenly elevate a lower-level goal to the status of a supergoal. We tell it to solve some mathematical problem, and it obeys, turning all the matter in the solar system into a giant computing device, along the way killing the person who asked the question. (For further analysis of this topic, see Yudkowsky 2001.)

5. Genetically engineered biological object

As a result of the tremendous advances in genetic technology now taking place, it may be possible for a tyrant, terrorist or madman to create a “doomsday virus”: an organism that combines long latency with high virulence and lethality (National Intelligence Council 2000).

Dangerous viruses can even be created inadvertently, as Australian researchers recently demonstrated when they produced a modified mousepox (Ectromelia) virus with 100% lethality while trying to design a contraceptive virus for mice to be used in controlling rodent pests (Nowak 2001). Although this particular virus does not infect humans, it is suspected that an analogous change would increase the lethality of the human smallpox virus. What adds to the potential danger here is that the study was quickly published in the open scientific literature (Jackson et al. 2001: 1479-1491). It is hard to see how information produced in open biotechnology projects could be kept secret, no matter how grave the potential danger it poses - and the same applies to research in nanotechnology.

Genetic medicine will lead to better drugs and vaccines, but there is no guarantee that defense will keep pace with offense. (Even the inadvertently created mousepox virus had a 50% fatality rate in vaccinated mice.) In the long run the dangers of biological weapons may be superseded by the advance of nanomedicine, but while nanotechnology has enormous long-term potential for medicine (Freitas 1999), it carries its own dangers.

6. Mistaken use of dangerous nanotechnology ("gray goo")

The possibility of an accident can never be completely ruled out.

However, there are many ways to make human-destroying accidents unlikely through reliable engineering solutions. Dangerous use of self-replicating systems can be avoided: nanobots can be made dependent on some rare chemical that does not exist in nature; they can be confined in a sealed environment; they can be designed so that any mutation almost certainly causes the nanobot to stop functioning (Foresight Institute 2000). For this reason, accidental misuse of nanobots is far less troubling than malicious misuse (Drexler 1985; Freitas 2000; Foresight Institute 2000).

However, the distinction between the accidental and the intentional can become blurred. Although in principle it seems possible to make terminal global nanotechnological accidents very improbable, specific circumstances may prevent this ideal level of security from being realized. Compare nanotechnology with nuclear technology. From an engineering perspective it is of course possible to use nuclear technology only for peaceful purposes, for example only in nuclear reactors, which have a zero probability of destroying the whole planet. But in practice it has proved impossible to avoid nuclear technology also being used to build nuclear weapons, which led to an arms race. With nuclear arsenals on high alert, a high risk of accidental war is inevitable. The same can happen with nanotechnology: it may be pressed into serving military objectives in a way that creates an imminent risk of serious accidents.

In some situations, it can even be strategically advantageous to deliberately make a certain technology or control system risky, for example to create a "fundamentally unpredictable threat in which there is always an element of chance" (Schelling 1960).

7. Something unexpected

We need such a unifying category. It would be foolish to believe that we have already invented and predicted all significant threats. Future technological or scientific discoveries could easily create new ways to destroy the world.

Some foreseeable hazards (which therefore do not belong in this category) have been excluded from the list of Bangs because they seem too unlikely to cause a global terminal catastrophe, namely: solar flares, supernovae, black hole explosions and mergers, gamma-ray bursts, outbursts in the galactic center, supervolcanoes, loss of biodiversity, growing air pollution, the gradual loss of the human ability to reproduce, and numerous religious doomsday scenarios. The hypothesis that we will one day achieve "enlightenment" and commit collective suicide or stop reproducing, as supporters of VHEMT (The Voluntary Human Extinction Movement) hope (Knight 2001), seems unlikely. If it were really better not to exist (as Silenus told King Midas in the Greek myth, and as Arthur Schopenhauer argued (Schopenhauer 1891), although for reasons specific to his particular philosophical system he did not advocate suicide), then we would not consider this scenario a global disaster. The assumption that being alive is not a bad thing should be regarded as an implicit assumption in the definition of Bangs. Mistaken collective suicide is an existential risk, although its probability seems extremely small. (For more on the ethics of human extinction, see chapter 4 of Leslie 1996.)

8. Disasters as a result of physical experiments

The worry of the Manhattan Project's atomic bomb designers that the explosion might ignite the atmosphere has modern counterparts.

There has been speculation that experiments with future high-energy particle accelerators might destroy the metastable vacuum state our cosmos may occupy, converting it into a "true" vacuum of lower energy density (Coleman, Luccia 1980: 3305-3315). This would create an expanding bubble of total annihilation that would spread through the galaxy and beyond at the speed of light, tearing apart all matter as it goes.

Another idea is that accelerator experiments could create negatively charged, stable "strangelets" (a hypothetical form of nuclear matter) or a microscopic black hole that sinks to the center of the Earth and begins to consume the rest of the planet (Dar et al. 1999: 142-148). Such scenarios seem impossible based on our best physical theories. But the reason we conduct the experiments is precisely that we do not know what will actually happen. A more reassuring argument is that the energy densities attained in modern accelerators are much lower than those occurring naturally in cosmic-ray collisions (Dar et al. 1999: 142-148; Turner, Wilczek 1982: 633-634). It is possible, however, that factors other than energy density matter for these hypothetical processes, and that those factors will be brought together in novel future experiments.

The main cause for concern about "physics disasters" is the meta-level observation that discoveries of all kinds of dangerous physical phenomena happen all the time, so even if all the physics disasters we currently think of are completely improbable or impossible, there may still be more realistic paths to disaster waiting to be discovered. The ones given here are nothing more than illustrations of the general case.

9. Naturally occurring disease

What if AIDS was as contagious as the common cold?

There are several features of the modern world that could make a global pandemic much more likely than ever before. Travel, food trade, and urban life have all increased significantly in modern times, making it easier for a new disease to quickly infect most of the world's population.

10. Collision with an asteroid or comet

There is a real but very small risk that we will be exterminated by an asteroid or comet impact (Morrison et al. 1994).

To cause the extinction of humanity, the impacting body would probably need to be more than 1 km in diameter (and probably 3-6 km). There have been at least five, and maybe more than a dozen, mass extinctions on Earth, and at least some of these were probably caused by impacts (Leslie 1996: 81 f). In particular, the extinction of the dinosaurs 65 million years ago has been linked to the impact of an asteroid 10-15 km in diameter on the Yucatan Peninsula. It is estimated that a body 1 km or more in diameter collides with the Earth on average about once every half-million years. So far we have cataloged only a small fraction of the potentially dangerous bodies.
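A quick back-of-the-envelope conversion of that rate (my arithmetic, modeling impacts as a Poisson process) shows how small the probability is on a human timescale:

    import math

    rate_per_year = 1 / 500_000   # one 1 km+ impact per half-million years
    years = 100                   # one century

    # Poisson process: P(at least one impact) = 1 - exp(-rate * t)
    p_century = 1 - math.exp(-rate_per_year * years)
    print(f"{p_century:.6f}")     # ~0.000200, i.e. about 0.02% per century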

If we can spot an approaching body in time, we will have a good chance of deflecting it by intercepting it with a nuclear missile (Gold 1999).

11. Unstoppable global warming

There is a scenario in which the release of greenhouse gases into the atmosphere turns out to be a process with strong positive feedback. Perhaps this is what happened to Venus, which now has an atmosphere dominated by CO2 and a temperature of about 450 °C. Hopefully, however, we will have the technological means to counteract this trend by the time it becomes really dangerous.
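As a toy illustration of such a positive feedback (my sketch, not a climate model): if an initial warming of F degrees induces a further g degrees of warming per degree, the total warming is F/(1-g), which diverges - a runaway - as g approaches 1:

    def total_warming(forcing: float, gain: float, rounds: int = 10_000) -> float:
        """Iterate a positive feedback: each degree of warming feeds back
        `gain` extra degrees. Converges to forcing / (1 - gain) for gain < 1;
        grows without bound (runaway) for gain >= 1."""
        total, increment = 0.0, forcing
        for _ in range(rounds):
            total += increment
            increment *= gain
        return total

    print(total_warming(1.0, 0.5))   # ~2.0 degrees: stable
    print(total_warming(1.0, 0.99))  # ~100 degrees: near-runaway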

Nick Bostrom
