Artificial Intelligence: A Double-edged Sword? - Alternative View

They say the robots will rise, that they will take over the world and take control of everything. We have heard these warnings for decades, and at the beginning of the 21st century they are only multiplying. The fear is growing that artificial intelligence will bring our species to an end.

Such scenarios are not only snapped up by Hollywood; they increasingly find supporters in scientific and philosophical circles. Ray Kurzweil, for example, wrote that the exponential growth of artificial intelligence will lead to a technological singularity, the point at which machine intelligence surpasses human intelligence. Some see this as the end of the world, others see only opportunities. Nick Bostrom thinks a superintelligence could help us deal with disease, poverty and environmental destruction, and could help us improve ourselves.

On Tuesday, the famous scientist Stephen Hawking joined the ranks of the singularity prophets, or more precisely their pessimistic wing, telling the BBC that "the development of full artificial intelligence could spell the end of the human race." He believes that humans will not be able to compete with an AI that can redesign itself and reach a level far beyond our own. Hawking also believes, incidentally, that we should not try to contact aliens at all, since they would have only one goal: to subjugate us or destroy us altogether.

The problem with these scenarios is not that they are necessarily wrong (who can predict the future?), nor that there is anything wrong with entertaining science-fiction scenarios. The latter is inevitable if we want to understand and evaluate modern technologies and their possible impact on us in the future. It is important to put the philosophical questions such scenarios raise on the table and to examine our fears, in order to figure out what matters most to us.

The problem with focusing exclusively on artificial intelligence in the context of "doomsday" and other fatal scenarios is that it distracts us from other, more pressing and important ethical and social issues raised by developments in these fields. For example, is there still a place for privacy in the world of information and communication technologies? Do Google, Facebook and Apple threaten freedom in the technological world? Will further automation lead to job losses? Can new financial technologies threaten the global economy? How do mobile devices affect the environment? (To Hawking's credit, he did mention privacy in the interview, but then still turned to the end of the human era.)

These questions are far less glamorous than superintelligence or the end of humanity. Nor are they really questions about intelligence or robots; they are questions about what kind of society we need and how we want to live our lives.

These questions are ancient; they have been with us since the very birth of science and philosophy, and today's fast-growing information technologies, which are changing our world, force us to ask them anew. Let us hope that the best human minds of our time will focus most of their energy on finding answers to the right questions rather than on demagoguery about exaggerated threats.

Ilya Khel
