Battle Robots Are Coming! - Alternative View

Real killer robots, machines that carry out their missions relentlessly and without human supervision, could enter service with the world's armies within a year, experts say. Anticipating the danger, they are debating whether killer robots and their use on the battlefield should be banned in advance.

Killer robots could be in service with the troops within a year if the UN does not act in advance to restrict their use, warns Noel Sharkey, professor at the University of Sheffield. In his view, these machines will inevitably cause mass civilian deaths, since they cannot distinguish a soldier from a civilian non-combatant. His statement came just as experts from 120 countries were meeting in Geneva to discuss the potential risks of operating autonomous weapons systems. Professor Sharkey believes that discussion alone is not enough: a document limiting the production of such robots must be adopted urgently, or within a year robots will flood the armies of developed countries.

These days in Geneva, representatives of more than 90 countries are discussing the danger posed by "lethal autonomous weapons systems." One of the key questions under discussion is how to ensure the safety of their use in combat. Many experts, including Professor Sharkey, insist that despite the systems' autonomy, a human must always retain meaningful control over them, for the safety of humanity. "A robot is still unable to make decisions like a human," Sharkey says. "It cannot fully comprehend the logic of hostilities, may not distinguish an ally from an enemy, and cannot judge whether certain actions are permissible. We cannot let a robot decide whether, for example, Osama bin Laden's life is worth the deaths of 50 old women and 20 children alongside him. The robot doesn't care. Fundamental decisions, such as the choice of a target, must be made by a human." According to Professor Sharkey, if such decisions are left to robots, "the consequences of their use will be no less devastating than the use of chemical weapons."

Professor Sharkey is one of 57 experts who signed an open letter to a South Korean university protesting against its program to develop intelligent weapons. The Korea Advanced Institute of Science and Technology (KAIST) is launching the program together with the military equipment manufacturer Hanwha Systems, and the news has shaken the scientific community. "A university, a scientific institution, cannot do this from a moral point of view," says Professor Sharkey, a sentiment the other signatories fully share.

According to experts, autonomous killer robots would open a Pandora's box, revolutionizing the conduct of war.

Representatives of KAIST have already responded to the open letter, assuring their colleagues that they have no plans to create Skynet-style droids.

Representatives of KAIST assured their colleagues that, as an academic institution, they "highly value universal standards of ethics and morality" and do not intend to develop robotic soldiers operating outside human control; to date, the institute is only conducting research in this area. With that, the authors of the open letter were reassured and did not boycott their colleagues.

However, the question of whether humanity can agree on general principles of total control over the actions of combat robots remains open, and it is by no means merely theoretical. Today's highly functional robots are already such complex mechanisms that even specialists do not fully understand how their artificial intelligence works, and without that understanding they cannot predict exactly when it might fail. "If humanity is destined to be destroyed, then artificial intelligence will most likely play a significant role in the process," says one participant in the Geneva conference. According to Elon Musk, artificial intelligence today poses a greater threat to humanity than North Korea. Musk is convinced that robots must remain under strict, constant human control, and he has repeatedly appealed to governments around the world to develop clear rules for overseeing artificial intelligence systems.

Varvara Lyutova