What Does a Developer of Artificial Intelligence Fear in His Creation - Alternative View

As an AI researcher, I often encounter the opinion that many people are afraid of AI and of what it might become. Given human history, and given what the entertainment industry keeps feeding us, it is hardly surprising that people fear a cybernetic uprising that would force us to live in isolated territories and turn the rest of us into "Matrix"-style human batteries.

And yet, looking at the evolutionary computer models I use in AI development, I find it hard to imagine that the harmless creatures on my screen, innocent as a baby's tears, could one day turn into the monsters of a futuristic dystopia. Could I really become a "destroyer of worlds", as Oppenheimer regretfully described himself after leading the program that created the atomic bomb?

Perhaps I would accept such fame, or perhaps the critics of my work are right after all? Maybe it is time I stopped dodging the question of what fears I, as an AI expert, actually have about AI.

Fear of unpredictability

HAL 9000, conceived by science fiction writer Arthur C. Clarke and brought to life by filmmaker Stanley Kubrick in 2001: A Space Odyssey, is an excellent example of a system that failed because of unforeseen circumstances.

In many complex systems - the Titanic, NASA's Space Shuttle, the Chernobyl nuclear power plant - engineers had to combine many components. The architects of these systems may have known perfectly well how each element worked on its own, but they did not understand well enough how all the components would behave together.

The result was systems that their own creators never fully understood, with well-known consequences. In each case - the ship sank, two shuttles were destroyed, and almost all of Europe and parts of Asia faced radioactive contamination - a set of relatively small problems that happened to occur at the same time combined into a catastrophe.

I can easily imagine how we, the creators of AI, could arrive at similar results. We take the latest developments and research in cognitive science, translate them into computer algorithms, and bolt all of this onto existing systems. We are trying to develop AI without fully understanding our own intelligence and consciousness.

Systems like IBM's Watson or Google's AlphaGo are artificial neural networks with impressive computational power and the ability to tackle genuinely complex tasks. But so far, the worst an error in their work can cause is a lost round of the quiz game Jeopardy! or a missed chance to beat one of the world's best players of the board game Go.

These consequences are not global in scale. In fact, the worst that can happen to people in such cases is that someone loses money on a bet.

However, AI architectures are becoming more complex and computers are becoming faster. The capabilities of AI will only grow over time, and that will lead us to assign it more and more responsibility, even as the risk of unforeseen circumstances rises.

We know very well that "to err is human", so it is simply impossible for us to create a system that is truly safe in every respect.

Fear of misuse

Unpredictability does not really worry me in the AI that I develop using an approach called neuroevolution. I create virtual environments and populate them with digital creatures, giving their brains tasks of increasing complexity to solve.

Over time these creatures get better at solving their tasks; their performance evolves. Those that perform best are selected for reproduction and become the basis of the next generation. Over many generations, these digital creatures develop cognitive abilities.
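
As a rough illustration of this select-and-mutate loop (a minimal sketch of the general technique, not the author's actual research code), here is a neuroevolution example in Python. A population of tiny fixed-topology networks, encoded as flat weight vectors, is scored on a toy XOR task; the best performers become parents, and mutated copies of them fill out the next generation. The task, network size, and mutation settings are all arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task standing in for the "simple decisions" described above: XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_HIDDEN = 4
N_WEIGHTS = 2 * N_HIDDEN + N_HIDDEN  # input->hidden plus hidden->output weights

def forward(weights, x):
    """One tiny fixed-topology network, encoded as a flat weight vector."""
    w_in = weights[:2 * N_HIDDEN].reshape(2, N_HIDDEN)
    w_out = weights[2 * N_HIDDEN:]
    hidden = np.tanh(x @ w_in)
    return 1.0 / (1.0 + np.exp(-(hidden @ w_out)))  # sigmoid output in (0, 1)

def fitness(weights):
    """Higher is better: negative mean squared error over the task."""
    predictions = np.array([forward(weights, x) for x in X])
    return -np.mean((predictions - y) ** 2)

POP_SIZE, N_PARENTS, N_GENERATIONS, MUTATION_STD = 100, 20, 300, 0.2

# Start from a random population of "digital creatures" (weight vectors).
population = rng.normal(size=(POP_SIZE, N_WEIGHTS))

for generation in range(N_GENERATIONS):
    scores = np.array([fitness(creature) for creature in population])
    # Selection: the creatures that perform best become parents.
    parents = population[np.argsort(scores)[-N_PARENTS:]]
    # Reproduction: children are mutated copies of randomly chosen parents.
    children = parents[rng.integers(0, N_PARENTS, size=POP_SIZE - N_PARENTS)]
    children += rng.normal(scale=MUTATION_STD, size=children.shape)
    population = np.vstack([parents, children])

best = max(population, key=fitness)
print("Best fitness after evolution:", fitness(best))
```

Real neuroevolution systems are richer than this, often evolving the network topology itself along with the weights, but the select-and-mutate cycle above is the core of the approach described here.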

Right now, for example, we are taking the first steps toward evolving machines that can perform simple navigation tasks, make simple decisions, or memorize a couple of bits of information. But soon we will see machines evolve that can perform more complex tasks and operate at a far more capable general level of intelligence. Our ultimate goal is to create human-level intelligence.

In the course of this evolution, we will try to find and fix all bugs and problems. Each new generation of machines will cope with errors better than the previous one. That increases the chances that we will be able to identify unintended consequences in simulation and eliminate them before they can ever appear in the real world.

Another opportunity that evolutionary development offers is the chance to endow artificial intelligence with ethics. It is likely that human ethical and moral traits such as trustworthiness and altruism are a product of our evolution, and a factor in its continuation.

We can construct virtual environments that reward machines for demonstrating kindness, honesty, and empathy; a sketch of the idea follows below. This could be one way to make sure we develop obedient servants rather than ruthless killer robots.
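
One way to picture such rewards is fitness shaping: the score that drives selection credits prosocial behavior alongside task success. The sketch below is purely hypothetical; the episode measures and the weighting constant are invented names for illustration, not an existing API or the author's actual setup.

```python
# Hypothetical sketch of fitness shaping: selection rewards prosocial
# behavior alongside task success. The measures and the weighting are
# invented for illustration, not the author's actual setup.
from dataclasses import dataclass

ALTRUISM_WEIGHT = 0.5  # how strongly evolution favors prosocial behavior

@dataclass
class EpisodeResult:
    task_score: float     # e.g. fraction of navigation goals reached
    sharing_score: float  # e.g. fraction of collected food given to others

def shaped_fitness(result: EpisodeResult) -> float:
    """Selection score = task performance + weighted prosocial behavior."""
    return result.task_score + ALTRUISM_WEIGHT * result.sharing_score

# Under this weighting, a selfish expert and a cooperative generalist
# can end up with the same selection score:
print(shaped_fitness(EpisodeResult(task_score=0.9, sharing_score=0.1)))  # 0.95
print(shaped_fitness(EpisodeResult(task_score=0.7, sharing_score=0.5)))  # 0.95
```

The open question, of course, is whether a measure like sharing_score actually captures "kindness" rather than a behavior that merely resembles it.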

However, while neuroevolution can reduce the level of unintended consequences in AI behavior, it cannot prevent the misuse of artificial intelligence.

As a scientist, I must live up to my commitment to the truth and report whatever I discover in my experiments, whether I like the results or not. My job is not to decide what I like and what I do not. The only thing that matters is that I can make my work public.

Fear of wrong social priorities

Being a scientist does not mean giving up my humanity. At some level I must reconnect with my hopes and fears. As a morally and politically motivated individual, I have to consider the potential consequences of my work and its possible effect on society.

As scientists, and as members of society, we have still not arrived at a clear idea of what exactly we want AI to do and what it should eventually become. This is, of course, partly because we do not yet fully understand its potential. But we still need to decide clearly what we want to get from truly advanced artificial intelligence.

One of the biggest issues people raise when talking about AI is employment. Robots already do complex physical work for us, such as assembling and welding car body parts. But one day robots will be assigned cognitive tasks as well, tasks previously considered a uniquely human ability. Self-driving cars can replace taxi drivers; self-flying planes will not need pilots.

Instead of receiving medical care in emergency rooms staffed by perpetually tired doctors and nurses, patients will be able to get examinations and diagnoses from expert systems with instant access to all medical knowledge. Surgical operations will be performed by robots that never tire, with perfectly "trained hands".

Legal advice will come from a comprehensive legal knowledge base. Investment advice will come from expert market-forecasting systems. Perhaps one day all human work will be done by machines. Even my own job could be done faster by a large number of machines tirelessly researching how to make machines even smarter.

In our current society, automation is already pushing people out of jobs, making the wealthy owners of such automated machines even richer and everyone else poorer. But that is not a scientific problem; it is a political and socioeconomic problem that society itself must solve.

My research will not change that, but my political convictions, together with the rest of humanity's, may help create circumstances in which AI becomes an extremely useful tool rather than something that widens the gap between the one percent of the world's elite and the rest of us.

Fear of a catastrophic scenario

We have come to the final fear, the one imposed on us by the mad HAL 9000, the Terminator, and every other villainous superintelligence. If AI keeps evolving until it surpasses human intelligence, will an artificial superintelligent system (or a collection of such systems) come to view humans as worthless? How can we justify our existence in the face of a superintelligence capable of doing and creating what no human can? Will we be able to avoid being wiped off the face of the Earth by the machines we helped create?

The most important question in such circumstances is therefore this: why would an artificial superintelligence need us at all?

If such a situation ever arose, I would probably point out that I am a good person who even contributed to the creation of the very superintelligence standing before me. I would appeal to its compassion and empathy, asking that it spare me, a compassionate and empathetic being. I would also add that species diversity has value in itself, and that the universe is so vast that the existence of the human species within it is actually quite insignificant.

But I cannot speak for all of humanity, and it would be hard for me to find a convincing argument on behalf of all of us. When I look at us honestly, I see that we have done, and continue to do, a great deal wrong. Hatred for one another reigns in the world. We wage war on each other. We distribute food, knowledge, and health care unfairly. We pollute the planet. There is, of course, much that is good in this world, but looking at all the bad we have created and continue to create, it would be very hard to find an argument for our continued existence.

Fortunately, we do not have to justify our existence just yet. We still have time: somewhere between 50 and 250 years, depending on how quickly artificial intelligence develops. As a species, we have the chance to come together and find a good answer to the question of why a superintelligence should not wipe us off the face of the planet.

That will be very difficult. Saying that we support diversity and ethnocultural differences and actually acting on it are completely different things, just as saying that we want to save the planet differs from actually managing to do so.

All of us, whether as individuals or as a society, must prepare for a catastrophic scenario, using that time to show and prove why our creations should allow us to continue to exist. Or we can simply go on blindly believing that such a turn of events is impossible and stop talking about the subject altogether.

However, whatever physical danger a superintelligence may pose, we should not forget that it will also pose political and economic dangers. If we do not find a way to improve our standard of living, we will end up simply fueling capitalism with an artificial-intelligence labor force that serves only the select few who own all the means of production.

This article was originally published at theconversation.com by Arend Hintze, Associate Professor in the Department of Integrative Biology, Computer Science and Engineering, Michigan State University.

Nikolay Khizhnyak