Why Is Artificial Intelligence Judging Criminals Dangerous? - Alternative View

Artificial intelligence is already shaping your future in some ways: when you search for something on a search engine, when a service like Netflix recommends what to watch, or when a bank evaluates your suitability for a mortgage. But what happens when artificial intelligence has to determine, in court, whether you are guilty or not? Oddly enough, in some countries this may already be happening. Recently, US Chief Justice John Roberts was asked whether he could imagine a day when "smart machines, controlled by artificial intelligence, will assist in finding evidence or even in making judicial decisions." He replied: "This day has already come, and it is significantly helping the judicial authorities in their proceedings."

Perhaps Roberts was referring to the recent case of Eric Loomis, who was sentenced to six years in prison partly on the recommendation of secret proprietary software from a private company. Loomis, who already had a criminal record and was sentenced for fleeing from the police in a stolen car, now claims that his right to due process was violated, since neither he nor his representatives were able to examine or challenge the algorithm behind the recommendation.

The report was produced by Compas, software that Northpointe sells to courts. The program embodies a new trend in AI research: helping judges make "better" (or at least more data-driven) decisions in court.

While the specific details of the Loomis case remain sealed, it certainly contains charts and numbers describing Loomis's life, behavior, and likelihood of recidivism. These include age, race, gender identity, habits, browser history, and some skull measurements. Nobody knows exactly.
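How exactly Compas turns such attributes into a score is unknown; the model is proprietary and has never been disclosed. Purely as an illustration, with invented feature names, weights, and thresholds, a risk tool of this general kind might look like the following sketch.

```python
import math

# Purely hypothetical weights -- Compas's real model is secret and unknown.
WEIGHTS = {"age": -0.04, "prior_convictions": 0.35, "failed_to_appear": 0.80}
BIAS = -1.0

def risk_score(defendant):
    """Combine a defendant's attributes into a probability-like score (logistic model)."""
    z = BIAS + sum(WEIGHTS[name] * defendant.get(name, 0) for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def risk_category(score):
    """Bucket the score into the low/medium/high bands a judge might be shown."""
    if score < 0.3:
        return "low"
    if score < 0.6:
        return "medium"
    return "high"

defendant = {"age": 34, "prior_convictions": 3, "failed_to_appear": 1}
score = risk_score(defendant)
print(f"score={score:.2f}, category={risk_category(score)}")  # prints something like: score=0.38, category=medium
```

The point is only that a handful of numeric inputs and weights can yield a label such as "high risk" without the defendant ever seeing how it was computed.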

It is known that the prosecutor in the case told the judge that Loomis had demonstrated "a high risk of violence, a high risk of recidivism, and a high pre-trial risk." This is standard language when it comes to sentencing. The judge agreed and told Loomis that "Compas identified him as a person of high risk to society."

The Wisconsin Supreme Court ruled against Loomis, noting that the Compas report added valuable information to the decision but that the same sentence would have been handed down without it. Of course, there is no way to verify that for certain. What kinds of cognitive bias come into play when an all-powerful "smart" system like Compas is advising judges on what to do?

Unknown extent of use


Let's be honest, there is nothing "illegal" about what the Wisconsin court did - this is just an example. Other courts can and will do the same.

Unfortunately, we do not know to what extent AI and other algorithms are used in sentencing. It is believed that some courts are "testing" systems like Compas in closed trials, but are not allowed to disclose these partnerships. It is also believed that several AI startups are developing such smart systems.


However, the use of AI in the legal system does not begin or end with sentencing; it begins with investigation. The UK has already developed the VALCRI system, which performs time-consuming analytical work in seconds, sifting through tons of data such as texts, laboratory reports and police documents to highlight things that may require further investigation.

The UK's West Midlands Police will test VALCRI over the next three years using anonymized data containing over 6.5 million records. A similar trial is being carried out by the Antwerp police in Belgium. However, in the past, AI and deep learning projects involving massive datasets have been problematic.
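VALCRI's internal workings have not been published, so the following is no more than a minimal sketch of the general idea described above: scanning large collections of free-text records and flagging those that mention details of interest. The record fields and keywords are invented.

```python
# Toy sketch of sifting free-text records for leads. The records and keywords
# are invented; a real system like VALCRI is far more sophisticated.
records = [
    {"id": 1, "type": "witness statement", "text": "Suspect fled in a blue van near the docks."},
    {"id": 2, "type": "lab report", "text": "No usable prints recovered from the door handle."},
    {"id": 3, "type": "police log", "text": "Blue van reported stolen the same evening."},
]

keywords = {"blue van", "docks"}

def flag_records(records, keywords):
    """Return the records whose text mentions any keyword, noting which ones matched."""
    hits = []
    for rec in records:
        matched = sorted(kw for kw in keywords if kw in rec["text"].lower())
        if matched:
            hits.append({"id": rec["id"], "type": rec["type"], "matched": matched})
    return hits

for hit in flag_records(records, keywords):
    print(hit)
```

Even this toy version shows why such tools save analysts hours of reading, and also why the criteria they are given matter so much.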

Benefits for the few

Technology has provided many useful aids to courtrooms, from photocopiers to DNA fingerprinting to sophisticated surveillance techniques. But that does not mean any technology is an improvement.

While the use of AI in investigations and sentencing can potentially save time and money, it poses acute problems. A ProPublica report on Compas made clear that black defendants were wrongly flagged by the program as likely reoffenders more often than white defendants. Even the most sophisticated AI systems can inherit the racial and gender biases of those who create them.
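ProPublica's core finding, roughly, was that among defendants who did not go on to reoffend, black defendants had been labelled high-risk far more often than white defendants. A minimal sketch of that kind of disparity check, run here on invented data, could look like this.

```python
# Toy illustration of the disparity check ProPublica ran on real court records.
# The rows below are invented; only the metric (false positive rate per group) matches.
defendants = [
    # (group, labelled_high_risk, actually_reoffended)
    ("black", True,  False), ("black", True,  True),  ("black", False, False),
    ("black", True,  False), ("white", False, False), ("white", True,  True),
    ("white", False, False), ("white", False, False),
]

def false_positive_rate(rows, group):
    """Share of non-reoffenders in `group` who were nevertheless labelled high risk."""
    non_reoffenders = [r for r in rows if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

for group in ("black", "white"):
    print(group, round(false_positive_rate(defendants, group), 2))
# With these invented rows: black 0.67, white 0.0
```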

Moreover, what is the point of shifting decision-making, even in part, onto an algorithm for questions that are uniquely human? In the United States, defendants are tried by a jury of their peers. That standard has never been perfect, but jury trials are still considered among the most democratic and effective ways of reaching a verdict. We make mistakes, but over time we accumulate knowledge of how to avoid them and refine the system.

Compas and similar systems are a black box in the legal system, and there should be no place for one. Legal systems depend on continuity, transparency of information and the ability to review decisions. Society does not want a system that encourages a race among AI startups to build fast, cheap, and closed solutions. A hastily made AI would be terrible.

An updated, open-source version of Compas would be an improvement. But first, the standards of the justice system will have to be raised before we begin shifting responsibility onto algorithms.

ILYA KHEL