Killer Robots Will Explain Their Decisions To People

American researchers are developing a neural network that can explain the decisions made by combat robots. This is expected to increase human confidence in the "electronic soldiers".

The Explainable Artificial Intelligence (XAI) project is intended to help humans and robots understand each other better. The XAI neural network will build a logical chain of the robot's "reasoning" and show a person why the machine made a particular decision - for example, to open fire on a specific target.
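What such an explanation might look like can be sketched with a simple feature-attribution example. This is only a minimal illustration of one common explanation technique (input-gradient attribution), not the XAI system itself: the tiny model, the feature names and the input values below are all hypothetical.

```python
# Minimal sketch: explain a "threat" score by the gradient of the score
# with respect to each input feature. Everything here is a stand-in.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical features a targeting model might weigh.
features = ["speed", "heat_signature", "radio_emissions", "distance"]

# Stand-in classifier that outputs a threat probability.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())

x = torch.tensor([[0.9, 0.8, 0.1, 0.3]], requires_grad=True)
threat_score = model(x)[0, 0]
threat_score.backward()  # gradients show how each feature pushed the score

for name, grad in zip(features, x.grad[0].tolist()):
    print(f"{name:>16}: {grad:+.3f}")
print(f"threat score: {threat_score.item():.2f}")
```

A real explanation system would present this kind of attribution in human-readable form, but the underlying idea is the same: show which inputs drove the decision.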

The US Army is working to create mixed human-robot combat units. It is hard to imagine today, but within a few decades robots may become full-fledged soldiers, just like people. They could replace a human in situations where it becomes necessary to open fire on a living target. Such a replacement would formally relieve a person of responsibility for the lethal decision, but it cannot spare them doubts and remorse, and it can also breed distrust of the robots. Who knows - maybe the machines are actually electronic maniacs that enjoy killing?

The XAI project is meant to solve this problem of human-robot interaction. With the new system, the military (those of them who are still living people) will be able to see by what criteria the machine judged a particular object or person to be a threat. XAI should also make it easier for combat robots to adapt to the realities of a specific conflict and reduce the amount of senseless cruelty.

The new development will be useful not only to the military. It can also be applied in the civilian sphere - in industry, security systems and medicine. An example is given by Bill Ferguson, a lead specialist at Raytheon, which is directly involved in developing the neural network. “If artificial intelligence detects signs of lung cancer on an X-ray, the doctor can find out why the machine reached that conclusion. The computer will highlight all of the suspicious areas, including ones the doctor might have dismissed as imaging artifacts. The doctor can then decide whether the diagnosis should be revised or whether the neural network was mistaken,” explains the scientist.
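The highlighting Ferguson describes can be approximated with occlusion-based saliency, a simple technique that masks parts of an image and measures how the model's confidence changes. The sketch below is only an assumption-laden illustration: the stand-in model, the random "X-ray" and the patch size are hypothetical, not anything from the actual project.

```python
# Minimal sketch of occlusion-based saliency: regions whose masking lowers the
# model's "suspicious" score the most are the ones it relied on, and could be
# highlighted for the doctor. All components here are stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in CNN that outputs a probability that the scan is suspicious.
model = nn.Sequential(
    nn.Conv2d(1, 4, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(4, 1), nn.Sigmoid(),
)
model.eval()

image = torch.rand(1, 1, 64, 64)          # stand-in for a chest X-ray
with torch.no_grad():
    baseline = model(image).item()         # original confidence

patch = 16                                 # size of the sliding gray mask
heatmap = torch.zeros(64 // patch, 64 // patch)
for i in range(0, 64, patch):
    for j in range(0, 64, patch):
        occluded = image.clone()
        occluded[..., i:i + patch, j:j + patch] = 0.5
        with torch.no_grad():
            heatmap[i // patch, j // patch] = baseline - model(occluded).item()

print(heatmap)  # large positive values mark regions to highlight on the scan
```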