Simulation Ethics: Will Virtual People Be Happy? - Alternative View


If you've ever played role-playing games - online or the old-fashioned backyard kind, like the Russian children's game Cossacks and Robbers - you know how easy it is to get attached to your avatar. You almost literally feel pain when your character is beaten by a troll, roasted by a dragon or killed by a shaman. The American sociologist (and avid gamer) William Sims Bainbridge took this relationship even further, creating virtual representations of 17 deceased family members. In a 2013 essay on online avatars, he envisioned a time when we will be able to load our personalities into artificially intelligent simulations of ourselves that can act independently of us and even persist after we die.

What kind of responsibility do we bear towards these simulated people? However suspicious we may be of violent computer games, no one seriously considers it murder to blow up a virtual attacker. Yet it is no longer absurd to imagine that simulated people could one day exist, and possess some degree of autonomy and consciousness. Many philosophers believe that a mind like ours need not be housed in the network of neurons in our brains, but could exist in material systems of quite different kinds. If they are right, there is no reason why sufficiently powerful computers could not host consciousness in their chips.

Will you kill a digital fly?

Today many moral philosophers grapple with the ethics of changing the human population, asking questions like these: What is a human life worth? What kinds of lives should we strive to create? How much value should we place on human diversity? But when it comes to the ethics of handling simulated beings, it is not at all clear that we should rely on the same logic and intuitions we use in our world of flesh and blood. Deep down, we feel it is wrong to kill a dog, or even a fly. But would switching off a simulated fly - or a simulated human brain - count as killing? When "life" takes on new digital forms, our own experience can no longer serve as a reliable moral guide.

Adrian Kent, a theoretical physicist at the University of Cambridge, decided to explore this gap in moral reasoning. Suppose, he writes in a paper on the subject, that we have learned to emulate human consciousness on a computer cheaply and easily. We would want to give such a virtual creature a rich and rewarding environment - a life worth living. Perhaps we could even do this for real people, by scanning their brains in great detail and replicating them on a computer. One can imagine such a technology "saving" people from a fatal disease; some transhumanists see it as the path to immortal consciousness.

Of course, this could all be a pipe dream - but let's assume otherwise. Now let's pick up the set of utilitarian moral principles introduced by Jeremy Bentham in the late 18th century and subsequently refined by John Stuart Mill. All things considered, Bentham said, we should strive to bring the greatest happiness (or "utility") to the greatest number of people. Or, in Mill's words, "actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness."

Utilitarianism has many critics. For example, how can we measure or compare kinds of happiness - weighing the value of a grandmother's love, say, on the same scales as admiration for a virtuoso concert pianist? "Even if you want to take utilitarianism seriously, you don't know which qualities you put into your calculations really matter," Kent says. Still, most belief systems today take it for granted that a moral compass pointing towards greater happiness is preferable to one pointing away from it.


In Kent's scenario, if we proceed from utilitarian motives, it would seem that we ought to go ahead and multiply our modelled, or simulated, creatures - call them sims - without restriction. In the real world, such unchecked procreation has obvious drawbacks. Large families suffer, emotionally and economically; overpopulation already strains global resources; and so on. But in a virtual world there may be no such limits. You could create a utopia with almost unlimited resources. Why not, then, create as many worlds as possible and fill them with joyful sims?

Intuition suggests an objection: but why? Perhaps a conscious sim simply would not have the same intrinsic value as a person of flesh and blood? This point is raised by Michael Madary, a philosopher of mind and virtual-reality ethicist at Tulane University in New Orleans, who believes it should be taken seriously.

Some philosophers believe that we ourselves may be simulated beings of just this kind. We cannot rule out the possibility, and yet we still consider these questions meaningful. By the same token, one would have to grant sims the right to ask them too.

Kent then asks: which would be better - to create a population of identical creatures, or of completely different ones? Identical creatures would obviously be more efficient: the information needed to specify one would suffice to create N of them. But our instincts tell us that diversity has meaning and value. And yet, is there actually any reason to believe that N different individuals will be happier than N identical ones?

Kent's position is that different lives are preferable to multiple copies of the same one. "I cannot get rid of the thought that a universe with a billion independent identical emulations of Alice would be less interesting and less valuable than a universe with a billion different individuals," he says. He calls this idea replication inferiority.

If we consider a cosmos inhabited by a billion Alices, should we speak of one life multiplied many times over, or of one life spread across many worlds? It could follow that many Alices in identical environments would have no more value than a single one. Kent calls this scenario replication futility. "I am inclined to that view," he says, admitting, however, that he cannot find an irrefutable argument in its defence.

Kent's thought experiment touches on several old puzzles of moral philosophy that have never been solved. The English philosopher Derek Parfit, who died last year, addressed them in his monumental work on personhood and the self, Reasons and Persons (1984). Parfit pondered questions such as how many people there should be, and whether it is always morally better, when we have the opportunity, to add another life worth living to the bustle of the world.

Even if you accept the utilitarian point of view, there is a problem with seeking the greatest happiness for the greatest number: the dual criteria create ambiguity. Imagine, for example, that we control how many people live in a world of finite resources. One would think there should be some optimal number of people that (in principle) makes the best use of those resources to secure happiness and prosperity for all. But would there not be room in such a utopia for one more person? Wouldn't it be acceptable to diminish everyone's happiness just a little to secure another happy life?

The problem is that this process has no end. As the numbers rise, the added happiness of new lives may keep outweighing the costs to those already alive. Ultimately, Parfit says, you arrive at the "repugnant conclusion": a scenario in which the best outcome is a bloated population of people who are hopelessly miserable, yet still better off than if they had never lived at all. Taken together, their meagre scraps of happiness add up to more than the sum enjoyed by a small number of genuinely happy people. "It is hard for me to accept this conclusion," Parfit wrote. But can the approach be justified? Kent doesn't know. "I don't know whether there can be any balanced resolution of the repugnant conclusion," he says.
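The arithmetic behind the repugnant conclusion is easy to make concrete. Here is a minimal sketch in Python; the population sizes and happiness scores are hypothetical, chosen purely for illustration:

```python
# Toy illustration of the repugnant conclusion under total utilitarianism,
# where total utility = population size x average happiness per life.
def total_utility(population: int, happiness_per_person: float) -> float:
    return population * happiness_per_person

# Hypothetical numbers for the two worlds being compared.
small_flourishing_world = total_utility(1_000, 100.0)     # a few very happy lives
vast_barely_happy_world = total_utility(10_000_000, 0.1)  # many lives barely worth living

# Summing happiness ranks the vast, barely-happy world as the better outcome.
assert vast_barely_happy_world > small_flourishing_world
```

On a pure sum-of-happiness reckoning, the second world wins - which is exactly the conclusion Parfit found so hard to accept.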

At the root of this lies what Parfit called the "non-identity problem": how can we reason rationally about the interests of individuals whose very existence depends on the choices we make (such as finding room for "one more")? Weighing the pros and cons of decisions that affect an existing individual is, in principle, not so difficult. But once we allow that this person might never have existed at all, we no longer know how to do the sums. Compared with nothingness, almost anything counts as a gain, so even the worst lives can seem morally justified.

There is another very strange scenario in this utilitarian population game. What if there were a being with such an enormous capacity for happiness that it could demand more from others than they could afford to give? The American philosopher Robert Nozick called this creature a "utility monster" and criticised the idea in his book Anarchy, State, and Utopia (1974). Utilitarianism, Nozick argued, would seem to require that we all be sacrificed to the monster in order to increase overall utility. Much of Parfit's book was an attempt - ultimately unsuccessful - to escape both the repugnant conclusion and the utility monster.

Now let's return to Kent's virtual worlds full of sims and his principle of replication inferiority - the idea that a certain number of different lives is worth more than the same number of identical ones. Perhaps this lets us avoid Parfit's repugnant conclusion. Despite what Leo Tolstoy says at the opening of Anna Karenina (1878) about each unhappy family being unhappy in its own way, it seems that a huge number of barely happy lives would be much the same in their grey misery. Consequently, you could not pile them up and raise overall happiness drop by drop.

But by the same token, replication inferiority licenses the utility monster: by definition it is unique, and therefore more "worthwhile" than the inevitably similar lives that feed it. That resolution satisfies no one either. "It would be nice if people thought more about these issues somehow," Kent admits. "I'm somewhat bewildered by them."

For the American libertarian economist Robin Hanson, professor of economics at George Mason University in Virginia, these reflections are not so much thought experiments as predictions of the future. His book The Age of Em (2016) depicts a society in which people upload their minds into computers and live out their lives as "emulations" - not sims but ems. Billions of such uploads, he writes, could live and work in a single high-rise building, with room for everyone.

Hanson examines in detail how such an economy might work. Ems could be of any size - some would be very small - and time could run at different rates for them than for humans. They would face close supervision and low wages, though an em could escape such hardships by choosing a life outside work. (Hanson believes we may already be living in a world like this.)

This scenario allows for self-duplication: once a mind has been transferred to a computer, making copies is straightforward. Hanson says that the problem of identity becomes blurred here: duplicates start out as "the same person", but their identities gradually diverge as they begin to live separate lives.

Hanson suggests that duplicating people will be not just possible but desirable. In the coming age of em, people with especially valuable mental abilities would be "uploaded" many times over. And people would want multiple copies of themselves anyway, as a form of insurance. "They may prefer redundancy in their own implementation to make sure they can survive unexpected disasters," Hanson says.

But he doesn't think they would prefer Kent's scenario of identical lives. Ems "will not place any special value on living the same life at different times and in different places. They will invest value in multiple copies because those copies can work or communicate with others. But that kind of work and relationship requires each copy to be causally independent, its history intertwined with others through its tasks or partners."

In any case, ems will have to grapple with moral difficulties we can barely anticipate. "I don't think that the morality humans have is general enough or robust enough to give confident answers to situations like this, so far removed from the 'experience' of our ancestors," says Hanson. "I predict that ems will hold many different, conflicting opinions about such things."

Ems, sims and our virtual future

For now, all this may sound very strange - like medieval disputes about how many angels can dance on the head of a pin. Will we ever be able to create virtual life that genuinely resembles real life? "I don't think anyone can confidently say whether this is possible or not," says Kent, in part because "we don't have a good scientific understanding of consciousness."

Even so, technology keeps moving forward and these questions remain open. The Swedish philosopher Nick Bostrom of the Future of Humanity Institute has argued that the computing power available to a "posthuman" civilisation would make it easy to simulate beings who experience their world as being as real as ours feels to us. (Bostrom also thinks we could be living in such a simulation.) Asking how we should populate such worlds presents "a real dilemma for the programmers, scientists and legislators of the future - and perhaps not so distant a future," says Kent.

Kent's scenario may already have real-world implications. Utility-maximisation arguments and the non-identity problem arise in debates over promoting and preventing human conception. When should conception be avoided because of risk - say, the risk of abnormalities in the child? No new reproductive method can guarantee complete safety, and none ever has; if such a guarantee were a prerequisite, IVF would never have been attempted. Instead, a new technique is expected to fall below a certain risk threshold. But the utilitarian approach challenges this idea.

What if a new assisted-reproduction method carried a moderate risk of minor defects at birth, such as birthmarks? (This is a real argument: Nathaniel Hawthorne's 1843 story "The Birth-Mark", about a scientist who tries, with fatal consequences, to remove a mark from his wife's cheek, was cited by the US President's Council on Bioethics in 2002.) It is hard to argue that people born with such a mark would be so much worse off than others that the method should be ruled out. But where is the line to be drawn? How severe must a birth defect be before a life is better not begun at all?

Some have raised this dilemma in defence of human cloning. Would dangers such as social stigma, or distorted parental motives and expectations, outweigh the benefit of giving someone life? Who are we to make that choice on behalf of the cloned human? And who would we be if we made the choice before that human even existed?

This kind of reasoning seems to require us to make godlike decisions. A feminist observer, though, might suspect that we are in the grip of a version of the Frankenstein fantasy. In other words, isn't this just a group of men dreaming of finally getting to fabricate people, when women have been doing it for centuries? The sense of novelty driving all these controversies has a patriarchal flavour.

Even so, the prospect of virtual consciousness raises some genuinely fresh and fascinating ethical questions - questions that, Kent argues, force us to examine the intuitive value we place on life and on populations. It is extremely difficult to find a strong argument that a certain number of different lives is morally better than the same number of identical ones. Why, then, do we believe it? And how might we shed the prejudice?

One might say that perceived homogeneity in human populations erodes our capacity for empathy and ethical reasoning. The "faceless" crowd set against the lone individual is a familiar cliché, one that evokes a sense of heroism. But it is not necessarily correct.

Perhaps we have an evolutionary aversion to identical individuals, given that genetic diversity helps ensure a population's survival. Just think of films about identical twins or clones - they rarely look benign; often they look ominous. In 1919, Sigmund Freud linked this feeling to the idea of the sinister double, the doppelgänger. And if the feeling is understandable in the case of identical twins, a hundred "identical" characters would look terrifying.

It doesn't look as if we will face armies of duplicates any time soon, in the real world or the virtual one. But the value of thought experiments is that they give us new ways of examining old questions. By imagining the ethics of our relations with sims, we expose how shaky - or non-existent - the logic is that we instinctively use to weigh the moral value of our own lives.

Ilya Khel