EDIT: I was wrong. We have much more than one galaxy group to sterilize. Because of that mistake, I'll make a separate material about it. In short, my assumption was too naive. The universe available to us in some distant future will be limited to the Local Group, because space expands between galaxy groups, and that expansion accelerates with distance: the farther a galaxy is, the faster it recedes. Several factors matter here, such as the actual rate of expansion and acceleration, the maximum speed of any man-made ship, and the moment we launch it, since the further into the future we send such a ship, the smaller our reachable space becomes. In theory, it would be possible to reach galaxies 3 billion light-years from us with spaceships traveling at 20% of light speed. But there are problems with such a view, and Isaac Arthur, in this video at 24:10 [ https://youtu.be/xRB7a89Jh7w ], claims we could colonize (I hope sterilize) a bubble of space a billion light-years from the Earth. That is many thousands of galaxies within reach. I'm sorry for the mistake. It changes little in the video itself, except that we now bear a much greater responsibility and should expect less intelligent life in the universe. Thank you for understanding and have a great day!
My favorite continent is Antarctica. Symbolically, of course; I could just as well say Rodinia, which existed in the early stages of the Earth's history before sentient life emerged, or some continent on a planet with no life at all. But as I've said, it's symbolic, with no deeper significance. Antarctica is simply spatiotemporally closer to me, and somehow close to my mind. The continent has only two native species of vascular plants, the Antarctic pearlwort and the Antarctic hair grass. Its insect species can be counted on one's fingers, and the number of native fully terrestrial vertebrates is exactly zero. I am deliberately omitting the ocean, where the krill biomass is the largest in the world. The continent itself, however, covered with ice under which rest mountain ranges once clad in forests, can fill the mind with a certain peace; its lifelessness is in some way more beautiful than all the nature known to me.
I love the history of the earth, the development of life, the diversity of living creatures. The biosphere delights me with its complexity and its uniqueness. It's beautiful. Of course, this is my opinion, and I won't go into its details, as they are not important here. Whoever can no longer feel that beauty, let him imagine that it is simulated and that there is no suffering in the world, only that astounding complexity. How beautiful it is to me can be imagined by anyone who, like me, has ever had mystical experiences related to nature, known in psychology as among the most wonderful experiences people can feel. But I see no reason to assume that the beauty of nature somehow matters in the final moral sense. We also have no clear grounds to believe that the known biosphere is simulated, and even if we had reasons to think so, in practice we should assume it is not: a careless mistake would carry too great a cost. I sincerely admire the biodiversity of the earth and the ability of the universe to create life. Only the universe itself knows what beautiful and terrible wonders are hidden in the depths of the biological past, the future, or under the shells of atmospheres of distant exoplanets.
But life is not only beautiful. While the terror of biological existence is known to us in general terms, hardly anyone realizes even a fraction of its actual atrocity, and even fewer of us consider suffering in nature a moral problem. And it is a problem. Let's set aside considerations about the nature of morality and focus on what we can observe when we look at reality, setting back our culture-dependent model of nature and looking only at its pure existence. We see the constant, endless struggle, ubiquitous death, and suffering that biological minds must endure. There is little contentment, satisfaction, or fulfillment in nature; most of its activity consists of destroying maladjusted unfortunates in myriad inventive ways and turning those of dubious luck into DNA-duplicating machines, in the end only to destroy them as well, as if they meant nothing to nature. And apart from what they mean to themselves, their existence has no greater cosmic meaning.
Nature, of course, is no being; nothing matters to it, just as nothing matters except to minds. Only conscious systems can create meaning, can feel meaning, and the reduction of unfulfilled desires matters to them like nothing else. I would like to pray to Nonentity that not all active brains can feel suffering. Let nematodes, mites, insects, and zooplankton not exist as subjective beings; let their potential semi-conscious despair and desires rest forever in non-existence. We do not know where the limits of conscious suffering lie, how many neurons, arranged in which ways, are needed to subjectively exist and morally count. The amount of suffering, with or without invertebrates, is nonetheless unambiguously enormous. Even if we assume that some lives can have absolute positive value, rather than all positive value coming from fulfilling already existing, and therefore unfulfilled, desires and preferences, most lives in nature are not worth living. While I am indeed inclined to believe that no life is worth existing, the conclusion that almost all animals live lives not worth living is sufficient to consider the option of euthanizing the Earth. One can approach this from other perspectives, arguing that trying to make all life happy, or wiping out only part of life, should be the priority. Recognizing my epistemic uncertainty, I believe this may be the case, and for this reason it is important to gather as much data as possible before taking drastic action. I will try to elaborate on that as an objection to immediate extinction. In this material, however, I do not intend to argue that the extinction of all life is inadvisable, because I am very curious about the ways in which life could be erased. I will therefore try to present a sketch of the thoughts and opinions that are particularly close to me in this respect.
Depending on the adopted system of axiological assumptions, one can prefer different ways of solving the problems of life, as well as differently define where the problem lies. Here I will focus on a radical solution, on the physical erasure of all earthly life, and potentially all life in the available universe.
I have encountered several plans to erase life. It is often imagined as making the earth uninhabited: sterilizing its surface with atomic bombs, smashing it through collisions with other celestial bodies, or destabilizing its orbit, which would end in freezing in the vacuum of space or being devoured by the sun. Assuming the classical model of the development of the solar system, complex life on earth could persist for about another 800 million years without human intervention, which is longer than the Phanerozoic eon, the period after the Precambrian, which began 540 million years before the appearance of man. Multicellular life existed long before, but abundant fossils appear only moments before the Cambrian, and biodiversity has been increasing fairly steadily since then. This trend could continue for several hundred million years until the planet overheats, fried by the slowly aging sun on its way to the red giant phase. Without intelligent intervention or an improbable gamma-ray burst, that is how life capable of feeling pain would perish.
Humanity, by its existence, can save a lot of potential beings from suffering, although humans can also add even more pain to the cosmos. Let us assume, however, that, as Brian Tomasik calls for, humanity will ultimately not scatter life across inhospitable space or imprison conscious beings in simulated worlds. Suppose that the moral progress of humanity, if it goes in the direction I hope it goes, toward minimizing suffering also by minimizing life, will allow us to consider a scenario of controlled extinction: of all earthly non-human life, of all earthly life, or even of all life in the available space. How, then, could mankind remove life from the planet and the universe as effectively as possible, as permanently as possible, and with as little unneeded suffering as possible?
Of course, I don't know. What other intellectually honest answer could there be? But I will try to present what I think about it: which scenarios I consider rational, which are poorly thought out, and which are worth considering; what factors I consider necessary to take into account; what difficulties will arise along the way; and what chance of victory, in the fight against the mocking world for eternal silence, belongs to the idea toward which an efilist hopefully raises his eyes: the hope of the extinction of all life.
It can be said that extinction is inevitable, and under certain commonly accepted assumptions, namely with the rejection of multiverse immortality, it likely is. The age of star formation will end, all the large-scale structures of the universe will die, and if Boltzmann brains are denied or ignored, and the universe cannot re-emerge, we will have a perfectly empty eternal death without conscious beings. This is probably an optimistic scenario for many, but our goal is not to wait while life thoughtlessly destroys everyone and everything we allow it to create and kill amidst torment and torture. Advanced civilizations can prevent those lives earlier. Whether we are advanced enough to do this, and how it could be achieved, has intrigued many philosophical pessimists, though few have come up with any feasible solution.
"What should be known beforehand"
Finding out whether it is rational to wipe out all life should come before the act of wiping out all life. Perhaps there will be ways to turn off consciousness in a manner that allows it to be recreated, for instance if we existed in a simulation, or moved into one, and that simulation could be paused. Some people, the simulation's makers, or a superintelligence created for that purpose would then have enough time to gather the data needed to determine whether life is worth continuing. From the perspective of the creatures existing in the simulation, time would not stop flowing: if the simulation were turned off, they would unconsciously cease to exist; if it were paused and turned back on billions of years from now, no one inside would know. We can imagine this at any moment: if there were millennia between our individual conscious seconds, we couldn't tell the difference. Most of us, however, assume that we are not in such a simulation, that our death will be irrevocable one way or another, or at least that bringing us back to life will not be as easy as restarting a simulation.
It is possible that the number of factors on which the answer depends is extremely large; it is even possible that the human mind is incapable of reaching a definitive reply. The existence of higher intelligences in the future should provide a solution in such a case, and it would probably be helpful anyway, even if we can already know whether it is better to die or to live. Having developed the appropriate technologies, we could transfer our minds to other media and then lie down into a centuries-long sleep, during which a small group of posthumans or a superintelligence would be tasked with determining whether life is worth living, and therefore whether to wake us in the best possible world or never wake us again, following the conclusion that there is not even one good world. Freezing people's consciousness would be a way to avoid harming them by taking their lives, if death itself could be a harm at all. Of course, such technology may only be achievable by creating a superintelligence that, perhaps within a day, could answer every question definitively and finally. Freezing minds would also be difficult with animals, and keeping the earth alive without sentient creatures until then seems impossible.
But what if we wanted to keep the biosphere functioning while supercomputers gathered the data to determine whether life is worth continuing? Perhaps nanotechnology could keep ecosystems functioning. Creating it, however, would again require superintelligence, and after gathering and analyzing the data, that superintelligence could decide whether to live or not within a few hours. Neurons conduct impulses far more slowly than optical fibers, and Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence estimated that if the human brain could be simulated in a machine of similar size, operating at a temperature similar to the human body's, the simulation could run a million times faster than the biological brain, experiencing about a year of subjective time in 31 seconds. The answer from the SI might therefore come faster than we could expect.
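As a quick sanity check of the figure cited above (the million-fold speedup is Yudkowsky's illustrative number, not mine), the arithmetic does work out to roughly 31 seconds per subjective year:

```python
# A subjective year experienced by a mind running a million times
# faster than biological real time, measured in wall-clock seconds.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.56 million seconds
SPEEDUP = 1_000_000                    # Yudkowsky's illustrative factor

wall_clock_seconds = SECONDS_PER_YEAR / SPEEDUP
print(f"{wall_clock_seconds:.1f} s")   # ~31.6 s, matching the claim
```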
I do not think it is necessary to wait long for an answer to whether it is worth living, if such an objective answer exists at all. It is possible that some people alive today already hold a definitive and correct answer to this question, whether they be opponents of life or its worshipers. The problem, however, may not be reaching the correct conclusion but convincing the undecided and the opponents, and for this one needs to prove the thesis beyond all reasonable doubt, and ideally beyond any doubt. I don't think many things have been proven that way so far. I believe it would be wise to consider irresponsible any decision to destroy all life taken before gaining sufficient knowledge of the world.
Knowledge is our only weapon against the world, the world that works to hurt us, to get in the way of our fulfillment. Obviously, this is a metaphor, but knowledge has an instrumental value that is impossible to overestimate. To act morally and rationally, which I believe are synonyms, we need a huge amount of data and an efficient system for processing it, one not subject to fatal errors. Our mind was not created for this purpose and is far from fully reliable. Each of us can be wrong. This epistemic uncertainty should stop us from destroying all life until we get an unambiguously objective answer: one that is virtually indisputable, on which every rational mind has to agree, and of which any rational mind can be convinced. Until we are reasonably sure, I don't think it would be good to irrevocably destroy earthly life.
Achieving a sufficient level of certainty requires our continued existence. I believe that creating hyper-rationality and improving our cognitive abilities and logical thinking, therefore becoming a superintelligence ourselves, should be the goal of civilization. We want to know how to act, and we can only know it if we have a universal model of the world and morality, the simplest coherent model of all existence. So I think we should become hyper-rational before we destroy life, to find out whether this is really a rational move in the first place. Besides reducing epistemic uncertainty, developing an effective way to wipe life out of all the available cosmos requires an intelligence higher than ours, a superintelligence again.
Something important must be noted here. I am not arguing against taking any steps to change the current state until we are sure what to do. Practically no view holds that it is irrational not to create as much life as possible, but there are views that proclaim the negativity of all existence. Following the precautionary principle, we should minimize the creation of new entities until we know whether it is wrong. We should stop rewilding and try to reduce the suffering of sentient beings as efficiently as possible, striving to depopulate the earth and remove most of nature. I don't find it wise to die out before the answer is known, but neither do I think it rational to sustain and fuel the suffering of the world. If effective measures exist to prevent the proliferation of life, they should be applied, at least in theory, to the point where only a handful of people remain, working solely on creating hyper-rationality and superintelligence.
In practice, I very much doubt this approach will be effective; I merely wanted to outline what I consider worth doing short of extinction. In fact, I am convinced that superintelligence will arise before the suffering-reduction meme becomes globally widespread in society, although the rise of superintelligence is itself not an unequivocally positive phenomenon. Depending on its design, it may even turn out to be the most terrible creature ever to come into existence.
For this reason, some argue for the destruction of life before we create technology powerful enough to actualize simulated hells or other astronomical sufferings. Perhaps this is the best solution, though I hold no strong beliefs about it; I rather believe we should exercise extreme caution and investigate the risk as research progresses. Superintelligence does not have to be all-powerful if we create only certain forms of it, for example by avoiding artificial superintelligence and focusing on improving existing human minds, which are easier to control, even if they came to possess intellectual capabilities surpassing any other human's. Building ever-better supercomputers that lack self-awareness is another possibility. Maximum caution is the priority: artificial superintelligence should appear only when we are sure it does not threaten sentient beings with astronomical suffering. We may have a full model of the world before its creation, thanks to the limited superintelligence of transhumans and posthumans, hyper-rationality, and supercomputers. Ideally, creating artificial superintelligence should not be needed at all.
The second factor requiring increased intellectual abilities is carrying out the extinction effectively on the largest, cosmic scale. The planetary scale is the minimum; the goal is the greatest possible effectiveness, carried out in a way that causes the least suffering, preferably zero. I doubt that zero suffering is possible here, but we should get as close to that level as we can.
It must not be forgotten that by waiting, we are allowing suffering to exist now, so waiting too long would also be ineffective, but I believe some factors should make us postpone the plan to destroy life until we have much more advanced technology.
"On scenarios of controlled extinction"
The factors important for the decision to destroy life are: first, the possibility of answering whether it is in fact rational; second, the effectiveness of destroying life on earth; third, the possibility of influencing life in the available cosmos; and fourth, the possibility of the same minds being resurrected, the problem of multiverse immortality. These four issues are the most important to me, and my current views on them are presented below.
The effectiveness of destroying life on earth will be commented on in a moment. On a larger than planetary scale, to decide to destroy life, we need more knowledge about life in space and the possibilities of immortality.
In 100 billion to 1 trillion years, all the galaxies of the Local Group will coalesce into a single large galaxy. In 100-500 billion years, the Universe's expansion will carry all galaxies beyond the former Milky Way's Local Group past the cosmic light horizon, removing them from the observable universe. The reason is the expansion of the universe: since the space between galaxy clusters expands faster than light can cross it, it is not possible to get outside the local galaxy group. Conquering space would mean inhabiting about 80 galaxies, mostly dwarf galaxies, with the Milky Way, Andromeda, and Triangulum being the only larger ones, and it would take millions of years to reach them. In practice, this seems to be the whole universe we could influence in any way; making bigger plans would be meaningless. The problem of life in space is essential, as efilists should aim to wipe out life not only from the earth but also from the available cosmos. Assuming we only have access to the local galaxy group, the amount of such life will be limited. Even if the part of the universe we can influence turns out to be much larger, it is still finite; we cannot reach the entire universe, and only a portion of it will be even theoretically available to us. And it is not only the annihilation of existing life that matters: the sterilization of planets with microbes and the prevention of future life are further elements we must take into account when planning an effective extinction.
The second large-scale issue is the possibility of some form of subjective immortality. If we copy your brain during sleep, then kill you and put the copy in an identical body and wake it up, will it still be you? Knowing this will happen tonight, should you expect to wake up? If not, we must rule out the option of potential immortality; some form of physical continuity is needed to sustain existence. Some find this intuitive, others see a smuggled-in concept of the soul. If we assume that only an informational connection is needed to remain ourselves, we open the way to the resurrection of the dead. Imagine a dying man whose brain was scanned just before he died. The man died, but a million years later his brain is perfectly recreated from the scan. We have to ask whether it is possible that, from the perspective of the dying person, he merely fell asleep and then found himself alive again, in another world, perhaps simulated as a scanned mind, reconstructed, but remaining the same person. If the universe is large enough, especially if it is infinite or contains all possible human minds, then every observer-moment, that is, every moment that can be felt, has a possible next moment; so if we are the collection of all copies of ourselves, we should expect some kind of real subjective immortality. This phenomenon is known as Big World immortality, and I take it into account when analyzing the effectiveness of extinction. A more detailed discussion of the phenomenon will appear in a future material.
Determining whether some form of subjective immortality is a real phenomenon occupies, for me, the first position on the list of truths about reality we must discover. Depending on whether eternal termination is possible, or whether we should expect immortality because all perfect copies of our mind are our mind, different modes of action should be adopted, and extinction will not always be preferred.
In the scenario where Big World immortality is true, I would favor the creation of simulated worlds in which our lives would be continued in the best possible way, without any suffering or non-fulfillment, a state that can be imagined as an indifferent nirvana; I consider such a state the least negative state of being for any conscious creature. Controlling your future, rather than letting it drift with the tide of the random choices of the inanimate universe, is the better idea in this already tragic scenario of immortality. Raising dead beings, or simulating a copy of their brains at every conscious moment to keep them from suffering and from reliving death in ever more degraded forms, is another potential possibility. Using supercomputers to simulate future lives is more profitable than extinction if death as eternal non-existence is impossible. How to overcome immortality will be discussed in the future. If for various reasons you have already rejected this hypothesis, you may ignore the references to it; however, it is one of the factors I consider worth weighing in the individual extinction scenarios.
"How to erase earthly life"
I see no logical obstacle to recognizing that life on earth could be driven completely extinct with the use of technology. Perhaps life on earth could be erased even today. From my previous statements it follows that even if it could be done, life should, I believe, only be limited until we reduce our uncertainty about the functioning of the world to the necessary minimum and create highly advanced technologies. This would be a major objection if we had the means to wipe out all lives effectively and with certainty. However, I share the opinion that we currently lack the resources to carry out such an operation effectively.
I have the impression that relatively few people with an extinctionist worldview believe we should destroy the world tomorrow if it could be done, although it probably depends on the details. But what if you wanted to start implementing a practical extinction plan now? In the future, it may be possible to convince most people of the rationality of such a move, if it is rational, by working to improve the mind, eliminate cognitive biases, and increase intelligence. Currently, however, efilists constitute a tiny fraction of the population, roughly 0.000125% if there were as many as 10,000 of them, which I think is an overestimate. That's not much. We can predict that suffering-focused ethics will gain popularity, but a view as radical as the erasure of all life will not gain a significant number of followers even if that number increases exponentially. It is hard to resist the conclusion that believing a controlled extinction could realistically be carried out today would be an illusion. Postulating such a solution in theory and popularizing extinctionist views may be the best way to increase the probability of extinction, but it's important to keep calm and think realistically, as the mere mention of the ethics of extinction, or even of interfering with nature to reduce suffering, seems crazy and absurd to most people. Only a lost idealist would hope for an imminent controlled extinction while believing it could be carried out satisfactorily efficiently.
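The fraction mentioned above is easy to verify; the 10,000 headcount is the essay's own upper bound, and the world population of roughly 8 billion is my assumption here:

```python
# Share of the world population represented by a hypothetical
# 10,000 efilists, against an assumed population of ~8 billion.
efilists = 10_000
population = 8_000_000_000  # assumption; adjust for the actual year

share = efilists / population
print(f"{share:.6%}")  # 0.000125%
```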
Neither society nor the technology accessible to individuals would allow a controlled extinction before the era of nanotechnology, artificial intelligence, and transhumanism, and making humanity more rational is likely to be an arduous process. Guessing specific dates is rather meaningless here, because the whole process depends on technologies we cannot say when, or whether, they will arise; but if civilization avoids catastrophe, the span of a few to a dozen or so generations should lead humanity to transhumanism and posthumanism, gradually or through a technological singularity. Controlled extinction then becomes exponentially more likely, which does not change the fact that the chances of society adopting such a goal may still be slim. As usual in predicting the future, we have too little data and can only play the guessing game more or less seriously.
By now, or "in the near future", I therefore mean the period before the domination of transhumanism, regardless of whether the transhumanist revolution happens in a decade or a few hundred years from now. Before then, I estimate the chances of a controlled extinction to be close to zero. Some place their hopes in uncontrolled extinction, and many fear it too. A global climate catastrophe, a third world war, or another destructive event of natural or civilizational origin poses a real risk, but the chances that any of them would actually cause the extinction of all mankind are not great. Don't get me wrong: this does not mean they are not dangerous or that there is no hope for extinction, but unless the surface of the planet is made impossible to survive on, life, and probably humanity, will survive. In the past, disasters such as the great Permian extinction wiped 95% of marine life and 70% of terrestrial life off the face of the earth, leaving the globe devastated for a span unimaginable to humans, yet a blink of an eye on the geological scale. Neither global warming, nor total war with hydrogen bombs, nor an asteroid strike, nor a temporary disappearance of the magnetic field will cause all life to die out. After a few million years, life would return to its previous diversity, and if humans died, nature would kill and tear itself apart for the next 800 million years.
Such would be the effect of a premature human extinction. Probably this is the ideal world imagined by those who wish for human extinction, naively judging a world without people to be better. In fact, man, in all his mindless cruelty, has possibly prevented more suffering than he has caused, which in no way justifies the atrocities that animals on factory farms, slaves, or the tortured have to endure. The chances of human extinction as a result of a global catastrophe are, however, also not great. We live on all continents, including my favorite, Antarctica; there are not millions but billions of us all over the world. The worst of catastrophes would wipe out most people. The apocalypse would take the majority, the vast majority, the overwhelming majority, the astronomical majority, but not all. It takes only thousands of people to rebuild the population, and only thousands of years for a mankind thrown back to the Stone Age to take over the earth again, repeating all the tragic mistakes of the previous civilization. We would rebuild civilization by once again torturing, murdering, and enslaving, ruthlessly harming everything that feels and spreading destruction, while unconsciously preventing some natural suffering. It is better for life's sake that mankind does not become extinct, so that it can prevent that entire future, and preferably that it does not even come close to extinction and never has to recreate civilization.
Uncontrolled global catastrophes thus appear to be ineffective. Especially tragic are the hopes of those who count on them for fear that, before humanity grasps what it is doing, it will spread life into space or create sentient beings in simulations. Given how large a capacity simulations could have, it could indeed be better if humanity died out, even at the cost of 800 million additional years of suffering on earth. Simulating just one biosphere in supercomputers running a million times faster than reality could create 800 million years of suffering in just 800 years, and with supercomputers in space, many more biospheres could be simulated. The argument from avoiding the risk of astronomical suffering is therefore worth considering. I have too little data to even attempt to say which outcome is in fact the most profitable, and I doubt anyone could cause extinction before we reach the technology I'm talking about.
So what might a global extermination of life look like, using current technologies and those potentially available in the future, ignoring the factor of human resistance and focusing on technical details? Activities could be globally coordinated or led by a small group of people, billionaires, or leaders, but let's leave the conspiracies for another day, probably filing them among fairy tales anyway, as the only effective conspiracy would be one of posthumans or an artificial superintelligence. The erasure of life would consist of two elements: destroying life itself, and rendering the planet uninhabitable so as to prevent a future reemergence of life, possibly by destroying the planet itself or throwing it out of its life-sustaining orbit.
"Sterilization"
Sterilization is often seen as a more moral way to end all life. Merely preferring physical elimination over sterilization can provoke outrage. I can see two reasons why. First, sterilization may be considered a way of ending life with less suffering than physical extermination; second, death itself may be considered a harm, in which case preventing new creatures from coming into existence is better than even the painless killing of already existing ones, even an unconscious and painless death.
In a thought experiment in which we sterilize all animal populations instantaneously, the result is that no new beings are generated, assuming that we also inhibit non-sexual processes such as parthenogenesis or fragmentation. Thus, any potential for new sentient beings ceases immediately. Imagine this in a simulation if, like me, you prefer to feel a certain realism in the experiment.
Let us consider the version in which only nature exists, leaving humanity out of the account. What we can expect is the quick extinction of short-lived creatures like insects, the killing of a large proportion of herbivores by predators, and the death of predators by starvation once prey becomes too rare. Long-lived herbivores could linger in an empty world, eventually dying of disease or old age, as there would be no more predators on the planet. The entire sentient part of the biosphere would cease to exist in less than a hundred years, and probably within a dozen or so there would be almost no sentient life left. It is not a pleasant vision; I think it would be better to implement some scenario of immediate extermination. However, it is not such a terrible vision either. Given a choice between gradual and immediate sterilization, it would be better to choose the simultaneous, immediate version, because instead of merely starving predators, in the gradual version we would have to deal with more life, and so with more death and suffering. The unnatural lives and deaths of sterilized beings are a small price to pay for preventing the existence of myriads of new ones.
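The die-off dynamics described above, herbivores removed by predation and mortality, predators starving as prey thins out, and no births anywhere, can be illustrated with a toy discrete-time model. Every rate below is an invented placeholder, not an empirical value; the model only shows the qualitative shape of the collapse:

```python
# Toy model of a fully sterilized food web: no reproduction anywhere,
# herbivores die from predation plus background mortality, predators
# starve as prey becomes scarce. All parameters are invented.
herbivores, predators = 1_000_000.0, 10_000.0
years = 0
while (herbivores > 1 or predators > 1) and years < 1000:
    # predation pressure saturates as herbivores become hard to find
    predation = 0.2 * predators * herbivores / (herbivores + 10_000)
    herbivores = max(0.0, herbivores * 0.85 - predation)  # ~15%/yr mortality
    # predator survival falls toward 50%/yr as prey disappears
    prey_ratio = herbivores / 1_000_000
    predators = max(0.0, predators * (0.5 + 0.45 * prey_ratio))
    years += 1
print(f"sentient populations effectively gone after ~{years} years")
```

Under these placeholder rates the collapse completes within roughly a human lifetime, consistent with the "less than a hundred years" intuition, though real lifespans of whales, tortoises, or parrots would stretch the tail considerably.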
One-off sterilization would also be better than the gradual reduction of ecosystems, which may prove to be the least unrealistic option today, and which is already argued for by those concerned with suffering in nature.
The obvious problem is the striking impossibility of an immediate sterilization solution. Should we catch zebras and lions to sterilize and release them? Sterilization would presumably take place under anesthesia, and an anesthetized animal could just as well be killed without any harm. The stress and suffering of the large mammals themselves, soon living in a world devoid of other animals, would probably be a good argument for putting them to sleep instead of sterilizing them. Large mammals are not the biggest problem anyway. How do we sterilize birds, amphibians, spiders, insects, sea fish, or zooplankton? Sterilization sounds peaceful, but it would be neither peaceful, nor effective, nor perhaps even possible.
I can only imagine one universal sterilization method, and that is the use of nanotechnology. Imagine nanomachines endowed with some form of targeted intention and capable of being controlled, for example, by superintelligent satellites. Nanobots would spread all over the world, perhaps even multiplying, since they are biotechnology, like viruses, infecting every living animal organism within a few years. They would then painlessly and unconsciously sterilize all life on the same day. The level of speculation in this solution deserves a science fiction book, but knowing how powerful the possibilities of technology, especially superintelligence, can be, I think it is reasonable to consider this scenario. But with such a level of technology, would it not be better if the nanomachines painlessly and unconsciously first put all lives to sleep, then shut down the functioning of the body? Perhaps even making death the most wonderful experience possible?
reduction of ecosystems
An alternative to the physical elimination of sentient beings, as well as to global sterilization, in the near future would be to reduce existing ecosystems. Brian Tomasik, in his essays on reducing suffering, suggests that the best way to reduce suffering in nature today is to limit the amount of material that animals can eat, i.e. to reduce the primary production for which plants on land and phytoplankton in the ocean are responsible. Policies that prevent rewilding, limit conservation, and increase the management of pristine sites are beneficial in reducing suffering. The destruction of ecosystems, deforestation, or the destruction of the ocean floor makes it impossible to sustain as many creatures as before. Replacing tropical forest with arable land and grassland with an airport decreases the number of new entities, as a certain area of the planet will no longer be able to maintain its biological functioning. It is important to note that the wide-ranging effects of individual measures are unclear; it is not known, for example, how much logging the Amazon will affect climate change, or what role climate change plays in the long-term minimization of suffering. However, I am presenting the idea that, as a rule, preventing the spread of natural ecosystems and managing the existing ones seems beneficial in terms of reducing suffering.
Letting the biosphere function while eliminating fragments of it may be most effective, given that we are already doing it. If caring about suffering in nature is another step in moral development, then combating the idea that nature is good, and replacing it with a realistic view of nature as beautiful and delightful in its complexity but at the same time an environment of atrocities and torture, might allow even a non-antinatalist society to understand the essence of the problem. We can imagine a world where transhumanism becomes popular and then ubiquitous, where transhumans would seek to replace nature with cities, keeping the planet functioning by technology, sustaining the circulation of elements and the formation of oxygen, while sentient biological life is gradually eliminated. Replacing the natural biosphere with a different, biotechnological one, designed, more beautiful than the natural biosphere, and without suffering, could even be a solution suitable for instrumental efilists, and it should certainly be taken into account by a future civilization even if the total extinction of all life proved irrational.
This does not mean that the future development of civilization will lead there; on the contrary, it may be that the development of technology will allow us to effectively protect nature and spread it, which is now a much more popular endeavor. Instead of a planet 90% covered in infrastructure, we would have a planet with futuristic cities surrounded by a beautiful jungle or a savannah restored to its original form, in which sentient beings torture and eat each other.
The vision of a planet covered with cities is also not pleasing to the ears of an antinatalist. Ideally, such a procedure would be performed by artificial, non-sentient intelligence, with a minimal number of people or posthumans, or an artificial superintelligence, overseeing the entire procedure. With such advanced technology, however, it seems possible to eliminate all life faster. First of all, it is necessary to get rid of notions about the gradual destruction of the world by contaminating successive regions or sterilizing certain places with atomic bombs. That would not be effective, giving a short-term effect at the cost of unnecessary suffering. Assuming possession of the nanotechnology described above, it could be used to control the biosphere even if the goal of future humanity were not its complete elimination or immediate annihilation.
It is possible that humans, upon reaching the level of transhumanism or posthumanism, will eliminate the problem of suffering from society, or even, as David Pearce for example argues, from the rest of the biosphere as well. Perhaps it is actually possible to eliminate suffering in nature without eliminating sentience. Some consider a world without suffering to be positive, so they would not be opposed to striving for a society of transhumans or posthumans where suffering does not exist; for them, even a future consisting mostly of beings hypothetically incapable of suffering would not be problematic. For those who see suffering not as the source but as a manifestation of a deeper problem, such as the existence of desires or unsatisfied preferences, negativity is caused by a mechanism more fundamental than suffering, and the existence of such a society would still be negative.
If humanity were to remain and only the rest of nature were to disappear, the gradual replacement of nature by technology seems potentially realistic, though this realism is still limited. At the same time, it is immediately apparent that replacing nature with civilization in this way is not particularly effective if eliminating non-human life is the main goal. It may only ever be the preference of future people not to spread or maintain the existence of nature, or simply a side effect of the development of transhumanist societies.
nuclear option
Sometimes the proposed solution is to use atomic bombs to sterilize the planet's surface. In practice, this would destroy both nature and humans, but we can imagine an enlightened humanity that moves to Mars or into silicon bodies and just wants to destroy old nature. This is an abstract situation, yet some extinctionists seem to consider the atomic solution realistic today, while others at least use it as an example.
The most obvious practical problem is who would do it. It's hard to imagine a group of people digging up a mass of radioactive materials unnoticed and preparing a plan to destroy the entire surface of the earth this way. Even a global government would have a problem with it, and we live in a system far removed from global government. The very idea that such a scenario could be carried out seems divorced from reality to many.
Leaving aside the human factor, the Federation of American Scientists estimates there are around 19,000 nuclear warheads on earth, 95 percent of which are Russian and American. Their explosive power varies enormously: the strategic thermonuclear weapons of the superpowers pack a punch measured to be equivalent to several megatons, while warheads tested by India and Pakistan are around 100 times less powerful.
There are around 40 trillion tons of uranium in Earth's crust, but most of it is distributed at trace concentrations of a few parts per million throughout the crust's 3×10^19-ton mass. Estimates of the amount concentrated into ores affordable to extract today can be less than a millionth of that total. According to a different source, if humanity mined every bit of available uranium from the Earth, it would amount to approximately 35 million tons. That's enough to build ten billion Hiroshima bombs, and that would be an extinction-level event on a par with the asteroid that ended the Mesozoic era. Whether we detonated several huge bombs or tried to destroy each site separately, we would only have the chance to cause an extinction as severe as those that have already happened in the past. Let 99% of earthly life perish, let all mammals and birds die, let everything heavier than a mouse go extinct. Let everything heavier than a fly be annihilated. How many millions of years would it take for nature to restore biodiversity? On the timescale of complex life's existence, no more than a long blink of an eye.
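As a sanity check on the "on a par with the asteroid" claim, one can compare the fission energy locked in those 35 million tons with the estimated energy of the Chicxulub impact. The constants are standard textbook values, but the full burn-up of all isotopes via breeding is my own optimistic assumption, so this is order-of-magnitude only:

```python
# Order-of-magnitude check: total fission energy of Earth's mineable
# uranium vs. the Chicxulub impact. Assumes, optimistically, complete
# burn-up of all uranium isotopes via breeder reactors.
uranium_kg = 35e6 * 1000          # 35 million tons -> kg
fission_j_per_kg = 8.2e13         # J released per kg of fully fissioned uranium
total_fission_j = uranium_kg * fission_j_per_kg

chicxulub_j = 4e23                # commonly cited impact-energy estimate, J
hiroshima_j = 6.3e13              # ~15 kt of TNT in joules

print(f"total fission energy: {total_fission_j:.1e} J")
print(f"= {total_fission_j / hiroshima_j:.1e} Hiroshima-scale bombs")
print(f"= {total_fission_j / chicxulub_j:.0f}x the Chicxulub impact")
```

Under full burn-up the total lands within an order of magnitude of the quoted ten billion bombs and a few times the Chicxulub energy; with realistic weapon efficiencies the figure drops sharply, which only strengthens the essay's point that nuclear extermination could at best repeat a past mass extinction.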
But maybe such destruction of life would be enough if we later also destroyed the planet itself, making it uninhabitable? The suffering of destroying life with atomic bombs may be comparable to the suffering of destroying the earth by turning its surface into lava in a collision with a large enough body, and it may be smaller than that of trying to freeze or burn the earth by throwing it out of orbit. Especially in the absence of a more efficient method, such as nanotechnology, it could be necessary to source large amounts of bomb-building materials from parts of the system other than the earth, like Mars and the asteroids. Carrying out a controlled extinction calculated so that almost all beings evaporate, after which the earth would be rendered completely inhospitable, could be effective. However, it would certainly require many more centuries of the world's existence, making it possible to extract raw materials from other parts of the system, or at least to create technology powerful enough to mine uranium that is currently impossible to extract and to destroy the potential for the further development of life after atomic sterilization. In both scenarios, we could assume centuries are needed.
Unfortunately, when the use of atomic bombs is proposed, it is usually in a form far less careful and hedged than this; in my opinion, it is better not to bring up this solution at all than to cite it without clarification. The image most people will see is something terrifying. Not that exterminating life is not terrifying, but fueling that fear with visions of atomic mushroom clouds probably won't cool the discussion. The default way of eliminating life should be a less catastrophic version, like using nanotechnology, which most people obviously don't associate negatively, or the means of obliterating consciousness should be left indeterminate, since for an efilist the technical question of how to cause extinction matters less than whether it is rational to obliterate Earth's nature at all. So allow me some criticism. Efilists who seem to believe that destroying life this way, as soon as possible, is the preferred solution, or an effective solution at all, can spread a problematic meme by replicating negative stereotypes, which is at worst extremely irresponsible rhetoric and at best simply ineffective or barely effective. Focusing mainly on a nuclear option to be realized soon is too short-sighted and not very creative. To make the vision of destroying the world with nuclear bombs an icon would be something of a failure. I think so, and since this is a matter of opinion, please forgive me this criticism if you think otherwise. Personally, I like atomic explosions in a way, and they would be a meaningful symbol to me. Like other powerful phenomena they are majestic, and here additionally unambiguous in their destructiveness. I hope I am wrong to think that associating them with efilism would be so negative, but it is only hope, and I do not find it useful to test it myself.
I will not dwell on the seemingly ill-considered appeals to destroy life as soon as possible; it is clear to me that the use of atomic bombs would, first, be unfeasible because of the human factor, and second, even if some conspiracy were possible, horribly ineffective, leaving the earth after yet another great disaster but still filled with life. Ironically, the best outcome would then be for mankind to survive as well, thrown back to the Stone Age, able to repeat the attempt to destroy life, but this time thoughtfully and effectively.
There are rational reasons to argue for quick extinction, or even a mere attempt at extinction, or an extinction only effective enough to wipe out humanity, if we are concerned about the risk of astronomical suffering created by evil AI and conscious simulations. The situation then becomes particularly tragic, as there is most likely no real chance of a quick extinction, and intelligence could arise again even after the human species were successfully exterminated.
In the end, mining asteroids for radioactive elements and destroying the planet after nuclear sterilization could be effective, as could other types of bombs, perhaps still unknown, maybe much more effective than the nuclear option or complementary to it. However, this is only possible in a technologically relatively distant future, and better solutions may appear by then. Now imagine a technology that could less brutally wipe out life from within.
nanotechnology and nanobiotechnology
Nanotechnology is present in countless science fiction works such as, just to name a few, Children of Time by Adrian Tchaikovsky,
Peace on Earth by Stanislaw Lem, the 2008 film The Day the Earth Stood Still, or the 2014 film Transcendence, as well as in dozens of other books, films, and computer games. The development of powerful nanotechnology is not just fiction but one of the goals of transhumanism, a tool that could be used globally in the future. In particular, advances in other technologies, such as supercomputers and intelligent materials, may allow nanotechnology to be used for almost any purpose in the future; at least, such a scenario is considered likely by current transhumanists.
The terms nanobiotechnology and bionanotechnology refer to the intersection of nanotechnology and biology. Given that the subject has emerged only recently, they serve as blanket terms for various related technologies and indicate the merger of biological research with various fields of nanotechnology. Concepts enhanced through nanobiology include nanodevices such as biological machines, nanoparticles, and nanoscale phenomena studied within the discipline of nanotechnology. The use of nanobiotechnology to wipe out Earth's life, as well as life in the available space, seems to be a promising prospect. I will use the terms nanotechnology and nanobiotechnology here largely synonymously; although both nanobiotechnology and non-biological nanotechnology could be used to erase life, the division is not clear-cut in practice, as we are only at the initial stages of work on broadly understood nanotechnology.
Imagine the creation of such nanotechnology, controlled by satellites or antennas on the surface of the planet, possibly endowed with the intelligence of a swarm. It does not have to be like a biological entity, but visualizing a nanovirus is helpful. A mechanism that is very stable, or even unable to mutate so as to prevent side effects or catastrophe, could allow the nanotechnology to copy itself, which would facilitate its spread. Of course, it should be assumed that biological or artificial superintelligence is responsible for its design. Within a few years, such an intelligent virus could find its way into every sentient organism on the planet, including deep-water fish and nematodes. There is no need to infect fungi or plants if the next step is to make the planet inhospitable. We can now imagine sending a signal causing the nanotechnology to immediately put all sentient beings to sleep, then painlessly shut down the brain of every living organism. Depending on the scenario, people may have long since existed in virtual worlds or developed their suffering-free world on other celestial bodies, maybe in silicon flesh; or a small caste of posthumans may have judged the destruction of the world profitable, so that the fate of eternal sleep meets not only the entire biosphere but billions of people as well. It could also be that humanity is eliminated earlier and the biosphere erased gradually, for example to obtain as much data as possible, which requires monitoring its functioning. Such data could be used to model other biospheres, enabling their more efficient erasure.
It is hard to imagine a better way to eliminate life than putting it to sleep immediately. Death would be unconscious, and there is no obstacle to making it the most wonderful experience, causing mystical sensations of deep peace. Continuous monitoring of the body's condition until the brain is shut down by nanotechnology eliminates any risk of suffering or physical pain. Unlike all the previous, mechanical methods, it would make it possible to kill any animal without suffering. The high level of speculation may scare some people away, but taking into account the current development of technology and the anticipated technological revolutions of the future, above all the achievement of superintelligence, placing some hope in the described scenario seems justified to me. The creation of quantum supercomputers, ever-increasing knowledge of how the world works, and the improvement of the human mind allow us to take seriously the vision of using nanobiotechnology to wipe out the biosphere. If we take into account a technological singularity, for example in the form of an explosion of superintelligence designing better and better versions of itself, this solution should probably be the default. At the moment, the possibilities of futuristic nanotechnology seem almost limitless.
We can imagine and hope for a situation where effective nanobiotechnology emerges very soon after we are sure which ethical system is complete and consistent, or even before we know it. The use of nanotechnology to erase life then seems optimal. It is possible, however, that posthumans or a superintelligence would discover the most complete model of the world before such nanotechnology exists and recognize that destroying life sooner is better, and that a group of superintelligences left alive especially for this purpose would then send nanobiotechnology, or completely mechanical nanotechnology, into space. If more suffering came from waiting for effective nanotechnology to be created than from mechanically wiping out life, which would be a rather tragic situation, one would perhaps have to wipe out life in ways other than putting it painlessly to sleep. It may be better to destroy the world painfully sooner than to let it exist for a long time in order to wipe it out painlessly afterward.
Other, even more exotic scenarios may be considered, such as turning off consciousness in animals while keeping the biosphere functioning, if that were possible and nanotechnology itself did not become the new consciousness, or replacing the gradient of dissatisfaction with a gradient of bliss. The latter, although metaphysically it would probably still be negative, would imply only different degrees of subjective satisfaction. David Pearce and his hedonistic imperative are in favor of such a concept. With nanotechnology, one could even create biospheres, an animate cosmos without consciousness, or with only pleasure-feeling consciousness, if that were the goal of some abstract civilization. Since this material is about extinction, and I am not convinced by the above views, I cite them only as an interesting alternative, possibly even a very attractive one for conditional efilists.
making the planet uninhabitable
Erasing all sentient life is not enough; we must ensure that there is a mechanism preventing it from arising again. Life, if eliminated incompletely, or if the potential for its creation is ignored, may arise once more. The methods described here can be used both to wipe out life and to destroy a planet on which life no longer exists. I would place them near the nuclear option, because they would be brutal if there were still conscious lives on earth, far from the ideal of nanobiotechnology that could wipe out life painlessly; ideally, life would first be erased painlessly, and only then would the planet be mechanically rendered uninhabitable.
So our goal is not only to effectively wipe out sentient life from the earth but also to make the earth uninhabitable for sentient beings. Destroying 99.9999% of sentient life is insufficient, as is leaving the earth with microbes, or with water, from which life could start from scratch. In fact, even destroying 100% of life is not enough. Better to accept a thousand more years of human and other animal suffering and then wipe out life in an effective way. Extremophiles live very deep in the crust; according to Wikipedia, "The deep biosphere is the part of the biosphere that resides below the first few meters of the surface. It extends down at least 5 kilometers below the continental surface and 10.5 kilometers below the sea surface, at temperatures that may reach beyond 120°C," and it includes all three domains of life, with genetic diversity rivaling that on the surface. Over the next 800 million years, such organisms could rise to the surface and recolonize it, perhaps creating feeling brains again. While, if we assign a high probability to astronomical suffering, e.g. in simulations in the case of technological development, it might even be better to leave the earth not completely sterilized, here I will focus on potential ways of making its resettlement impossible.
Effective prevention of life consists not only of erasing all of it, but also of reducing the potential for sentience to emerge afresh. This potential, assuming the reality of Boltzmann brains, is never zero, but it is especially great if we leave behind a planet with existing, though still unconscious, life.
Potential scenarios, with their advantages and the objections to them, are therefore presented below; the list is not exhaustive and I will update it in the future. Each of these ways of making a planet uninhabitable or destroying it can also be a way of destroying the life on its surface, although we should prefer approaches in which life is eliminated as painlessly as possible before we start destroying the planet. Depending on the possibilities and the final, unknown effectiveness, no one can yet say which scenario is best. It may be that destroying the earth together with its life in one collision would be the most effective and bring the least suffering. Destabilizing the orbit, leading to the freezing, collision, or burning of the planet, or targeting an asteroid, moon, or dwarf planet at it, are examples of such scenarios.
Creating a runaway greenhouse effect
The least effective solution, entailing the most potential suffering and therefore the most questionable, but still potentially worth considering in some rare situations, is I think to destabilize the planet's climate. Making the surface particularly cold or, more realistically, particularly warm by creating a runaway greenhouse effect could prevent life from advancing and cause the extinction of existing life. Let me make it clear that this concept has almost nothing to do with the current climate catastrophe: the goal would be heating the planet to the level found on Venus, not simply making it very warm. Heating the planet to a level where life can exist only in the seas and at the poles is not enough, and contributing to the human-driven climate crisis will not even lead to that. The climate option means completely destroying any kind of life-friendly climate, probably with entirely unknown techniques. In a scenario where for some reason space options cannot be used, this may be the best solution, but by design, space options seem much more reliable and effective, and, paradoxically, much less brutal if there were still sentient life on earth.
Space options
Throwing the earth out of its orbit could in some cases prevent the existence of life as we know it. Orbital destabilization seems possible when another body or group of bodies is sent to pass close to the planet; a collision with a large body also has the potential to cause destabilization.
Causing the earth to be thrown into a more distant orbit, or even out of the system, should not be the preferred option. The effectiveness of such a model would be high, but not sufficient. The earth would freeze, but due to geothermal activity and the influence of the moon, life beneath the surface of ice-bound oceans could still exist, relying on chemosynthesis instead of photosynthesis. In fact, a galaxy may contain a similar number of orbiting planets and rogue planets, though different estimates give different results, and on some of them life could thrive even in total darkness. So as far as possible, we should not send the earth beyond the boundaries of the solar system. Moving the earth into a more distant orbit around the sun does not seem good either: for the same reasons, life could survive or develop anew, perhaps even in penetrating cold. Another problem would be a future in which the sun, as a swelling red giant, could warm the earth again, allowing liquid surface water to exist for millions of years.
It therefore seems more effective to knock the earth out of its current orbit and cause it to collide with another celestial body, for example Mars. The disadvantage is the possibility that the two planets would merge and the water on their surface would allow life to re-emerge, as Mars is within the ecosphere, and the ecosphere itself will shift outward in the future. It would be better, therefore, to direct the earth towards the outer planets, as long as it does not become one of their moons, and cause it to collide with Saturn or Jupiter, which should annihilate the smaller rocky planet. Along the way, it might also be possible to destroy certain icy moons, which could otherwise be a problem if they hold any potential for life.
The Earth has some chance of being engulfed by the growing sun in the distant future, when there will be no life left on it anyway. Current models indicate that, unlike the planets closer to the sun, it is likely to avoid this fate; even so, directing the earth into a lower orbit or throwing it into the sun seems the most effective way to eliminate the planet.
Such, briefly, are my visions of throwing the earth from its orbit. It seems difficult but potentially doable. With the help of atomic bombs or more advanced technology, the larger asteroids in the asteroid belt could be directed towards the Earth, and their near pass or collision could knock the planet out of its current orbit. Smaller asteroids would probably require many years of systematic small orbital changes, so this option is suitable only when there is no longer any life on earth that could suffer from drastic climate change.
Perhaps the more effective or simpler ways of making the earth uninhabitable do not involve throwing it out of orbit. We probably need some kind of body to hit the earth in any case; I mainly mean asteroids here, but perhaps even the icy moons of the gas giants or dwarf planets. Whatever body we consider, it should cause an impact so large that it turns the earth's crust into lava. Even this, however, leaves the possibility that the earth will cool down and the oceans will reappear, allowing life to resume. The fact that life appeared in Earth's oceans almost immediately after their formation indicates that we should prevent them from re-forming in the future. A collision with a larger body could not only keep the planet's crust liquefied for longer; the chance of shattering the planet entirely is perhaps worth considering. However, creating a new asteroid belt in place of the earth seems to require a collision with so large a body, and of such force, that it would be more effective to knock the planet into the sun with the same collision. The larger the body we intend to use, the more force we would need to knock it out of its previous orbit, which seems a difficult and lengthy process requiring special precision. The closest thing to such a scenario seems to be sending a body that would destabilize the moon. As in other cases, without a change of orbit the earth would still cool down within the ecosphere, and although without a moon stabilizing the tilt of its rotational axis, which can then fluctuate significantly, the formation of complex life could be difficult, it is still worth preferring a scenario in which not only the potential for life on the planet but the planet itself ceases to exist. A collision of the moon with the earth, if properly planned, could push the resulting ball of lava towards the sun before the planet cools down.
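For a sense of scale, the vis-viva equation lets one compare the velocity change needed to eject the earth from the solar system with that needed to drop it into the sun. The constants are standard; the comparison deliberately ignores gravity assists, on which any real attempt would rely:

```python
import math

# Delta-v budgets for moving the Earth, from the vis-viva equation
# v^2 = GM * (2/r - 1/a). Gravity assists are ignored here, though a
# real attempt would depend on them entirely.
GM_SUN = 1.327e20          # sun's gravitational parameter, m^3/s^2
R_ORBIT = 1.496e11         # Earth's orbital radius (1 AU), m
R_SUN = 6.96e8             # solar radius, m

v_circ = math.sqrt(GM_SUN / R_ORBIT)        # current orbital speed, m/s
v_escape = math.sqrt(2 * GM_SUN / R_ORBIT)  # solar escape speed at 1 AU

# Transfer ellipse grazing the sun: aphelion at 1 AU, perihelion ~1 solar radius.
a_transfer = (R_ORBIT + R_SUN) / 2
v_aphelion = math.sqrt(GM_SUN * (2 / R_ORBIT - 1 / a_transfer))

dv_eject = (v_escape - v_circ) / 1000       # km/s to leave the system
dv_sun = (v_circ - v_aphelion) / 1000       # km/s to fall into the sun
print(f"eject from the system: ~{dv_eject:.1f} km/s")
print(f"drop into the sun:     ~{dv_sun:.1f} km/s")
```

Counterintuitively, ejecting the earth is far "cheaper" than dropping it into the sun, since nearly all of the ~30 km/s orbital speed must be cancelled to fall inward. Applying even the smaller budget to a 6×10^24 kg planet is what makes a massive impactor, or a very long series of carefully planned passes, unavoidable.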
Considering that the moon was formed from proto-earth material and a planet called Theia that collided with the earth, ejected into space by the impact, it would be particularly poetic to merge the earth and moon at their death.
Space sterilization
Even if we could not get outside our solar system, life can exist or arise on bodies such as Europa, Titan, and Enceladus, icy moons of the gas giants with enormous oceans beneath their ice crusts. Such life might be based solely on chemosynthesis, without creating complex biospheres, but in the future, as the sun swells, these moons will find themselves in the habitable zone, which may result in more sophisticated life. Even if for some reason it is ineffective to leave the system, we should destroy the other places within it where life could develop.
Ice moons, however, are not the only place where life in space has a chance to thrive. Efilism is the view that the existence of conscious life is a fundamentally negative condition, and that it is, therefore, rational to eliminate it effectively. This means not only preventing the reemergence of life on Earth but also sterilizing as much of the available space as possible. Effective sterilization involves doing everything as painlessly as possible, as well as achieving the appropriate extent both in time and space.
The question of the existence and rarity of life in space is of key importance here. Complex life on earth has existed for more than 540 million years, roughly the length of the Phanerozoic eon. Against those 540 million years, intelligence capable of creating civilization appeared only a blink of an eye ago. The truly human era is no longer than several hundred thousand years; symbolically, 12 thousand years ago we built the first temple, and only a few dozen years ago the Internet spread across the globe. For 540 million years, life on earth has existed, killed itself, and suffered. Without superluminal travel, which is impossible according to the current understanding of physics, the realm of space we can reach is very limited. The Hubble Deep Field, an extremely long exposure of a relatively empty part of the sky, provided evidence that there are about 125 billion galaxies in the observable universe, which is 93 billion light-years in diameter. However, the expansion of space continues to accelerate, making it impossible to reach certain regions even when traveling at the speed of light. As far as I understand, the space available to us includes about 80 galaxies, of which only a few are large, the largest being Andromeda, the Milky Way, and the Triangulum galaxy.
I will therefore call the local group the available cosmos. If I have interpreted the data incorrectly, the available cosmos will simply be greater in volume and contain more galaxies.
We do not know how much life exists in the local group. There is a chance that in dwarf galaxies, due to the scarcity of heavy elements, the formation of life-friendly (such an irony) planets is at best extremely rare, at worst impossible, at the present age of the cosmos. The same is true for the outermost edges of large galaxies. Near galactic centers, frequent and powerful supernovae can sterilize planets, making life impossible there, which would limit its occurrence to the so-called galactic ecosphere. Life as we know it can also exist only around a few kinds of stars, as overly hot A-, B-, and O-type stars live too briefly, ending their lives in powerful explosions. F-type stars, G-type stars (the yellow dwarfs, represented by the sun), and K-type stars (the so-called orange dwarfs) allow life to exist. F, G, and K stars make up approximately 3.03%, 7.5%, and 12% of all main-sequence stars, respectively. The remaining 76% of main-sequence stars are red dwarfs, around which the existence of life is debatable. And there are brown dwarfs and rogue planets, of course, but we know too little to say that life is impossible around or on them.
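Putting those percentages together gives a rough sense of scale. This is only a back-of-envelope sketch: it takes the quoted F/G/K fractions at face value, treats every such star as a candidate, and uses a 200 billion midpoint for the Milky Way's star count, all of which are simplifying assumptions.

```python
# Back-of-envelope: what fraction of main-sequence stars are the
# F, G, and K types treated above as friendly to life?
fractions = {"F": 0.0303, "G": 0.075, "K": 0.12}

candidates = sum(fractions.values())  # ~0.225, i.e. about 22.5%
red_dwarfs = 0.76                     # M-type; habitability debatable

print(f"Life-friendly candidates: {candidates:.1%} of main-sequence stars")
print(f"Red dwarfs (debatable):   {red_dwarfs:.0%}")

# With ~200 billion stars in the Milky Way (midpoint of 100-400 billion),
# that is on the order of tens of billions of F/G/K stars.
print(f"F/G/K stars in the Milky Way: ~{200e9 * candidates:.1e}")
```

So even excluding all red dwarfs, roughly one star in five remains a candidate, which is why the rarity of life cannot be argued from star types alone.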
We do not know how much life exists in the galaxy. There are 100-400 billion stars in the Milky Way alone. We do not know on how many planets life can thrive, or how advanced it usually becomes; it is estimated that there are approximately 300 million habitable planets in the Milky Way. There may even be superhabitable planets around orange dwarfs, with biospheres much more complex and richer than Earth's.
The most optimistic news would be if life did not exist anywhere else in the local group of galaxies, although even then one should take into account the possibility of its emergence over the 10^6 to 10^14 years of the stelliferous era, after which new stars cease to form. However, given the number of habitable planets, it is safe to assume that life may already exist, even if intelligence and civilizations are extremely rare. Perhaps life rarely even enters the phase of sentience, but sterilizing microbe-bearing planets to reduce the potential for life also requires spreading some form of technology into space. Our credence that there is another civilization in the galaxy should be low, as we see no signs of one, and perhaps we should expect some evidence, given that von Neumann probes, that is, self-replicating spacecraft, make it possible to reach every nook and cranny of the galaxy in a relatively short time. It has been theorized that a self-replicating starship using relatively conventional methods of interstellar travel, with no exotic faster-than-light propulsion and speeds limited to an "average cruising speed" of 0.1 the speed of light, could spread throughout a galaxy the size of the Milky Way in as little as half a million years. It should therefore be expected that we ourselves, given the appropriate technology, would be able to reach every planet in the galaxy in roughly such a period.
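The half-million-year figure is easy to sanity-check. The sketch below assumes a wavefront of probes expanding from near the galactic center at a sustained 0.1 c, ignoring acceleration, deceleration, and time spent replicating; these assumptions are mine, not part of the quoted estimate.

```python
# Back-of-envelope: time for a colonization wavefront to sweep a
# Milky-Way-sized galaxy, assuming expansion from the center.
GALAXY_RADIUS_LY = 50_000  # light-years, roughly half the disk diameter
CRUISE_SPEED = 0.1         # fraction of light speed

# At 0.1 c, covering one light-year takes 10 years, so:
sweep_time_years = GALAXY_RADIUS_LY / CRUISE_SPEED

print(f"Wavefront sweep time: {sweep_time_years:,.0f} years")

# Starting from the rim instead (the full 100,000 ly diameter) merely
# doubles this to about a million years, still a blink against the
# 10^6 to 10^14-year span of the stelliferous era mentioned above.
```

Either way the result is on the order of 10^5 to 10^6 years, which is what makes the absence of visible probes from other civilizations informative.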
Being potentially the only civilization in the galaxy entails a form of moral responsibility, for only we can effectively sterilize planets covered with suffering biospheres. Von Neumann probes are devices that, after reaching their destination, create copies of themselves from available materials and spread across space. The design of intelligent probes, in the form of artificial super-technology aimed at reducing broadly understood suffering, or the fusion of superintelligent posthumans with such systems, would enable the effective sterilization of space. Nanotechnologies designed to put biospheres to sleep could be crafted individually for each planet.
My preferred way of wiping disvalue out of space, even before I encountered the concept of efilism, was for humanity to create or become superintelligent machines that would build swarms to sterilize the planets they encounter and destroy any planets where life could arise. Such a superintelligence, again in the form of artificial intelligence or posthumans, would need a reliable mechanism of self-control, and of combating possible faults in its fundamental code, so as not to end up painfully destroying life, or re-creating it, by mistake. Operating in swarms could probably ensure this sufficiently. Such guardians should exist in the local group as long as there is potential for life on its planets, thus many times longer than the present age of the universe.
In theory, when encountering another civilization, the SI could present it with all its knowledge, therefore all the knowledge one can have about the workings of the universe, or even turn the encountered biological creatures into post-biological entities. Such a scenario deserves its own science fiction work, but in practice, assuming that this superintelligence is the maximum intelligence possible, painlessly putting the entire planet to sleep seems to be the preferred option. A safety valve such as SI shutdown or reprogramming, in case its initial values turn out not to be rational, is welcome, but assuming we are dealing with a hyper-rational entity that knows all physically available data, it seems unlikely to ever be needed.
We should be concerned about life in the cosmos, as it is rational to assume that the amount of suffering in the local group is much greater than that on earth. Having access to the local group only, we cannot count on numerous other civilizations sterilizing the cosmos; it is safer to assume that we are the only advanced civilization in the galaxy. We bear the burden of the suffering of all the other planets of 80 galaxies on our beastly shoulders, not only now but for billions of years of future existence. We should reason as we would expect other civilizations to reason: if each thought it more efficient to eliminate life from only its own system, most biospheres would be born and die naturally, creating many billions of years of unstoppable suffering. And every planet with a civilization would first have to go through a painful Way of the Cross before a benevolent intelligence wiped life from its surface. Therefore, eliminating only one planet should not be a universal line of reasoning. As long as the difficulties are not insurmountable, we should take care of the elimination of suffering in the entire space and time available to us.
The sterilization of the cosmos need not extend the existence of the biosphere on earth; it can be accomplished by a group of posthuman superintelligences once the life of the solar system has been wiped out. The plan in which life on earth is physically wiped out, followed by the destruction of the planet and then the sterilization of the cosmos, seems effective even if the technology necessary for the last step is achieved only after the death of the earth.
That creatures, be they artificial superintelligences or machines that used to be humans, spread through space to sterilize it need not mean that the erasure of sentient life is incomplete. Intelligence can exist without feeling, or at least without the possibility of suffering or dissatisfaction, which would mean that such hyper-rational posthumans are not moral subjects, in the sense that their mental states would not present a negative value in themselves. Still, the ability to have some kind of empathy, not to feel emotionally, but to understand and sustain one's purpose, would be essential; I believe it can be built on pure reason, without the need to physically feel anything. In this way, the vision Eduard von Hartmann hoped for, of wiping out all life, could eventually be realized.
Death bubbles
In quantum field theory, a false vacuum is a hypothetical vacuum that is stable but not in the most stable state possible (it is metastable). It may last for a very long time in that state, but could eventually decay to the more stable state, an event known as false vacuum decay. The most common suggestion of how such a decay might happen in our universe is called bubble nucleation. If a small region of the universe by chance reached a more stable vacuum, this "bubble" would spread.
If our universe is in a false vacuum state rather than a true vacuum state, then the decay from the less stable false vacuum to the more stable true vacuum, called false vacuum decay, could have dramatic consequences. The effects could range from the complete cessation of the existing fundamental forces, elementary particles, and the structures comprising them, to a subtle change in some cosmological parameters, depending mostly on the potential difference between the true and false vacuum. Some false vacuum decay scenarios are compatible with the survival of structures like galaxies and stars, or even biological life, while others involve the full destruction of baryonic matter or even the immediate gravitational collapse of the universe, although in these more extreme cases the likelihood of a "bubble" forming may be very low, or false vacuum decay may be impossible altogether. Here, let us consider the kind of bubble that annihilates life.
Thus, initiating the decay of a false vacuum would create an annihilation bubble whose boundary would propagate at the maximum speed possible in space, the speed of light. If such a bubble were approaching the earth, we would not be able to observe it, since the last photons from distant stars would reach us just ahead of its border, and a femtosecond later we would no longer exist. It is possible that such a wave is heading toward the earth at this very moment, and that it will annihilate you before you finish this material. None of us would feel anything, none of us could be aware of anything, and no device would be able to detect anything in time.
I do not know if it would be possible to cause false vacuum decay on request at all; I consider it probable that it is not. If, however, this proved feasible, then, assuming that we are one copy and that annihilating it entails the annihilation of consciousness, we would hold in our hands the most powerful, effective, and humane tool for wiping out worlds. The creation of one death bubble on earth or in the solar system, even assuming a cosmic-scale accelerator would be needed for it, could be possible for an advanced civilization. To initiate the breakdown of the vacuum would be to annihilate the entire local group, eliminating all life and all potential for it. Unfortunately, the phenomenon is hypothetical.
The problem arises when we assume that death is not annihilation from a subjective point of view, and that we are a mental state existing in every perfectly identical copy of us throughout the universe. This form of trans-world identity results in so-called big world immortality. There is only one mind of ours, but it is present in infinite copies, assuming the size of the universe is infinite, and since there is always a version of our mind that survives any given event, we should expect to survive every death. The fraction of copies of a given type then determines which future is more to be expected. If big world immortality is true, it is possible that the creation of death bubbles is routine in space, so much so that we are annihilated by many such bubbles every second. From a subjective point of view, the infinite number of our copies, called our measure, is declining, i.e. decreasing in each volume of spacetime, but there is still a conscious mind reproduced by these copies, so we still exist. So no matter how many times we get annihilated, we will always feel some future. By creating a death bubble, we would observe nothing out of the ordinary; suffering and non-fulfillment would still exist. It should be noted that reducing the measure can be a very useful mechanism if we intend to cheat this immortality by simulating a larger measure of copies of our future selves in satisfied states than is average in the material universe, thereby increasing the probability that our future will be as designed. This idea was originally presented by Alexey Turchin in 2018 and deserves a separate discussion of the hopes and problems associated with it.
Assuming that subjective death exists, using false vacuum decay for universal, immediate annihilation is undoubtedly the most effective option. Even if we merely reduce our measure this way, the situation is still a net gain.
The end of humanity
It is impossible to convince most people today that extinction is rational, assuming it is. We could not convince everyone with rational arguments even if an AI spent all its time with each of them. Destroying an entire worldview and building a new one from scratch is impossible on a global scale, but the situation may be different with transhumans or posthumans, free of the limitations of the human mind and risen above its cognitive errors.
To exterminate life, we would need the consent of all mankind; the existence of a ruling entity, such as a superintelligence that formed a singleton or a group of superintelligent posthumans; or a conspiratorial organization unrelated to the management of the world. The first and last scenarios seem particularly improbable to me today. Creating a group of people wanting to destroy the world could be feasible in a world where no one knows that such a thing could happen at all; only then could the appropriate resources be obtained and used destructively. But we live in a reality where this way of operating is known, so such a group would not succeed. The first option becomes more likely if humanity turns into transhumans or posthumans, who might then consider extinction the only rational decision, which would result in omnicide.
To clarify: when writing about transhumans and posthumans, we need not imagine all of transhumanity or posthumanity. We can imagine the emergence of a group of posthumans while the rest of the world remains biological transhumans or ordinary human-animals. Making the global and final decision could realistically belong to such a group, whether we like it or not, and regardless of how rational the decision might be.
Today, even if 1% of humanity considered extinction desirable, the effective destruction of the world seems impossible. It would be impossible to maintain such a conspiracy; this group would have to consist of people who are themselves powerful and who possess unimaginably powerful technology. If we are not supporters of conspiracy theories, I think it is safer not to entertain such an extinction scenario. Conversely, if only 1% were not in favor of the idea of extinction, the 99% would still have enough problems fighting a side trying to destroy the extinction plans by all means; this would lead to a war, probably ending with the genocide of that 1%, a terrifying vision, as it would certainly not be painless. The very process of forming such divisions would realistically be extremely painful.
Another problem would arise if divided humanity tried to annihilate itself, or if part of it tried to escape into space. If, for example, in the process of implementing transhumanism on a global scale, there were division, war, or genocide, it could be the greatest atrocity in the history of mankind. I can even imagine reprogramming people so that they could not treat the idea of extinction as rational, imposed on successive generations of transhumans. It would be another, this time conscious, conspiracy against the human race.
However, we can assume that transhumans and posthumans will universally value rationality, which will lead them all to the same worldview and to the same, best decision, whatever it may be. I see three options for such a decision: creating the ultimate paradise; plunging into technological nirvana, if death is impossible because of multiverse immortality; or extinction, having taken care of the sterilization of space in advance.
The operation of one mighty being or group of beings, a singleton, for example in the form of a superintelligence, would be a more realistic way of carrying out a controlled extinction. Depending on the stage of development of the rest of humanity, whether they were the basic biological model, transhumans, or posthumans, there is the option of convincing everyone, though probably not a very practical one, and an unnecessary one, given that a superintelligence need not flaunt its goals and could simply kill everyone benevolently. All life could die in its sleep. We can even imagine a situation where the SI announces the transfer of all human minds to a virtual Paradise, lets everyone lie down in the mind-scanning device, and then puts the happy people to sleep, ending our species with such a trick. The SI, depending on the circumstances, could create a Paradise simulation but turn it off immediately after transferring all people to it. Disappearing without any fear or awareness, dying by never waking up, could not hurt people who no longer existed, but knowing that the SI could do this would probably itself be a harm, leading to uncertainty and non-fulfillment. The practical execution of any such tricks, however, I leave to the superintelligence.
Creating a superintelligence that takes control of the world therefore seems the most effective route, but a problem arises if any error occurs in the design of such an SI. For extensive descriptions of the dangers, see Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies. For this reason, humanity achieving superintelligence by modifying our own minds, or creating a neuromorphic SI in a physical body, seems much safer, as it at least does not lead to a situation where a virtually all-powerful existence tortures all those who did not want it to arise. The most important point in the history of mankind may be precisely the one at which the first superintelligence capable of producing a better version of itself arises; the so-called technological singularity will end only when higher intelligences are no longer possible. It is crucial, possibly the most important thing we can ever influence, that the purpose of such an intelligence be beneficial to us.
There are also those who, fearing the risk of superintelligence errors and the risk of astronomical future suffering, consider it necessary to destroy the world earlier. Depending on what data we have in the future, this option may be the most rational.
Another problem arises when the creation of a singleton is impossible, when there are several equally, maximally powerful superintelligences with different goals. One would intuitively expect a great final war in which the winning side pushes the defeated Satan down into the abyss. In practice, however, if both thrones are maximally superintelligent, I suppose it is rational to assume that both already know the future and each other's every move. Making random decisions from a certain pool could help to some extent, but it seems to me that a war between an efilistic and a lifeistic SI simply would not happen. The superintelligent sides would have nothing to fight for, knowing in advance which side would win or lose. Both would be able to model all possible futures, and there are probably scenarios with no middle ground that would satisfy each side, so a war would end with the devastation of one party; but seeing that both sides should act hyper-rationally, I find that difficult to imagine. The SIs of both camps, or the superintelligent posthumans of both camps, would then probably try to compromise, knowing that neither could ever beat the other. We would face a particularly tragic situation if it were possible to come to two equally rational conclusions; then we could also have the problem of a divided posthumanity. But I doubt such a situation would actually happen, as I assume there is only one consistent, simplest model of the world, only one truth. In a situation where we had posthumans wanting to destroy all life and posthumans wanting to continue it, perhaps the lifeists would move to a perfect simulation in which, beyond being immortal, they would create no new life, or would create new life only within an eternal paradise, while the efilistic SI sterilized space. Paradoxically, perhaps the conflicting parties would find it better to merge into one, combining both goals, as creating the best possible world would be the most effective outcome when there are two visions of it.
Anticipating superintelligence conflicts, however, is beyond my present ability to imagine realistically.
It seems to me, however, that an SI that emerges even a second ahead of its rivals has a decisive advantage, which would probably result in a singleton. I therefore consider a world with an SI conflict unlikely, as every SI would from the very beginning try to achieve its goal to the fullest and would not allow competition to exist.
These are my thoughts on extinction.
In the end, I think it is ineffective to focus mainly on how to cause extinction. Other, for now more realistic, ways of reducing suffering should be our priorities.