Commentary: Imperfect AI-driven justice may be better than none at all
Some fear automation reinforces bias and dehumanises people, but “robot judges” can also tackle backlogs and make the legal process more accessible, says the Financial Times’ Jemima Kelly.
LONDON: A few years ago, I was hauled up in front of three magistrates in a south London courthouse, the culmination of a months-long bureaucratic nightmare that landed me with a criminal conviction for being unable to prove I had paid a £1.50 (S$2.35) bus fare when my phone ran out of battery.
After nervously recounting my tangled tale of woe, I was vindicated: I was told my conviction would be withdrawn. “I bet you’re very relieved,” the presiding justice said to me with a sympathetic smile. I was indeed.
But such human interaction in the judicial system might become rarer. Several countries are experimenting with using artificial intelligence-driven algorithms to mete out judgments, replacing humans with so-called “robot judges” (although no robot is involved).
The main arguments in favour of AI-driven adjudication revolve around two purported benefits: greater efficiency, and the possibility of reducing human bias, error and “noise”. The latter refers to unwanted variability between judgments, influenced by factors such as how tired a judge is or how their sports team performed the night before.
IMPACT OF ARTIFICIAL INTELLIGENCE ON JUDGMENT PROCESS
There are strong reasons to be cautious, however. For instance, there are concerns that, contrary to removing bias and discrimination, AI-driven algorithms — which use brute processing power on human data, finding patterns, categorising and generalising — will duplicate and reinforce those that already exist. Some studies have shown this happens.
The most common arguments against AI-driven adjudication, like this one, concern outcomes. But in a draft research paper, John Tasioulas, director of Oxford University’s Institute for Ethics in AI, says more attention should be given to the process by which judgments are arrived at.
Tasioulas quotes a passage from Plato’s The Laws, in which Plato describes the qualitative difference between a “free doctor” — one who is trained and can explain the treatment to his patient — and a “slave doctor”, who cannot explain what he is doing and instead works by trial and error.
Even if both doctors bring their patients back to health, the free doctor’s approach is superior, argues Plato, because he is able to keep the patient co-operative and to teach as he goes. The patient is thus not just a subject, but also an active participant.
Tasioulas, like Plato, uses this example to argue why process matters in law. We might think of the slave doctor as an algorithm, doling out treatment on the basis of something akin to machine learning, while the way the free doctor treats his patients has value in and of itself. Only the process by which a human judge arrives at a final decision, Tasioulas argues, can provide three important intrinsic values.
ALGORITHMIC JUSTICE CAN BE DEHUMANISING
The first is explainability. Tasioulas argues that even if an AI-driven algorithm could be programmed to provide some kind of explanation of its decision, this could only be an ex post rationalisation rather than a real justification, given that the decision is not arrived at through the kind of thought processes a human uses.
The second is accountability. Because an algorithm has no rational autonomy, it cannot be held accountable for its judgments. “As a rational autonomous agent who can make choices ... I can be held accountable for these decisions in a way that a machine cannot,” Tasioulas tells me.
The third is reciprocity — the idea that there is value in the dialogue between two rational agents, the litigant and the judge, which forges a sense of community and solidarity.
There is a dehumanising aspect to algorithmic justice that comes from the lack of these three elements. There are other issues, too, such as the extent to which using AI-driven adjudication would transfer legal authority from public bodies to the private entities that build the algorithms.
AUTOMATION CAN HELP TACKLE LEGAL BACKLOG
But for Richard Susskind, technology adviser to the UK lord chief justice, while there are many valid arguments against AI-led justice, there is also an urgent moral need to make the legal process more accessible, and automation can help meet it. According to the OECD, less than half the global population lives under the protection of law; Brazil has a backlog of 100 million court cases, while India has 30 million.
“It’s a gap on a monstrous scale,” Susskind tells me. “What many of us are ... trying to do is reduce manifest injustice, rather than achieve some metaphysical, perfect justice.”
As the aphorism goes, we must not let the perfect be the enemy of the good. “Computer says no” — the catchphrase from a Little Britain TV sketch — might be a highly imperfect and often frustrating form of justice, but access to an imperfect judicial system is surely better than no access at all.