Can we soon leave our justice system to AI judges?

UGent'ers on AI
Artificial intelligence is all around us and seems poised to change our society for good. What do Ghent University researchers think about it?

We asked legal expert Frederik Peeraer what he sees as the opportunities, challenges and pitfalls of using AI in the justice system. Could we soon outsource our justice system to AI? Frederik Peeraer wrote this opinion piece in connection with the event ARTIFICIËLE INTELLIGENTIE on 4 March 2026.

“What do we think of justice? Justice is slow, expensive and unpredictable. God save us from the fairness of the courts!”

These words were not spoken by citizens of today, but by people in the spring of 1789. Instead of chaos, they wanted clarity and certainty. How could that be achieved? Through legal codes. These would contain clear rules that judges would no longer have to interpret. Today we have more than enough legal codes, yet the complaint that justice is slow, expensive and unpredictable remains.

Could AI perhaps succeed where legal codes have not?

At first glance, the idea is very appealing. Justice is slow because lawyers and judges need a great deal of time to study case files. The file of the Bende van Nijvel (the Brabant Killers case), for instance, has grown over the past forty years to around four million pages. It takes years for humans to go through that amount of material. A shortage of human judges also means that people sometimes have to wait more than ten years before their case is heard. AI judges could work in the blink of an eye and be deployed without limit, making justice much faster.

AI could also make justice much cheaper. Not only the cost of judges, but also, and perhaps even more so, the cost of lawyers. If lawyers can work faster or even be replaced, legal proceedings would become far more affordable. Large law firms are already building their own AI systems. These systems allow them to quickly search and compare previous cases, helping lawyers assess more easily whether a case has a chance of success, or even before which judge it might succeed.

Would justice also become more predictable? Human judges often disagree with one another. Like all people, they are not always consistent. AI judges would not necessarily deliver identical judgments every time, but certain human differences would certainly disappear.

That sounds promising, but I see at least three problems with AI judges: reliability, responsibility and legitimacy.

AI decisions are never completely reliable. Every AI system hallucinates. Perhaps only rarely, but it does happen. Even small hallucinations can cause irreversible damage. Imagine, for example, that your name is accidentally hallucinated as the perpetrator in a sexual offence case.

In addition, AI decisions are difficult to correct. AI judges reproduce patterns. If the data contain historical inequalities, AI judges will reproduce them on a large scale. Moreover, AI judges cannot critically reflect on these inequalities. If an employer trains AI with CVs of previously hired candidates and almost all of those candidates are men, the AI will discriminate against women. Human judges can break such patterns. AI judges cannot.

A second problem is the lack of responsibility, both from AI and for AI. AI judges cannot handle data responsibly. As long as the input data end up with big tech companies, AI judges are a nightmare for privacy. Would you really want Elon Musk to know every detail of your divorce? Nor can AI judges provide accountability. AI systems are black boxes: no one can explain on what basis an AI judge reaches its conclusion. Yet judges are required to justify their decisions, something people were already demanding in 1789. Such justification is meant to prevent arbitrariness and to allow oversight of judges.

Responsibility for AI is equally problematic. It is unclear who would be responsible if an AI judge makes a wrong decision, and that uncertainty undermines trust in the justice system.

This brings us to the third and biggest problem with AI judges: legitimacy. Law is power. Judicial decision-making is never simply the mechanical application of rules. Legislators cannot foresee the future or anticipate every possible situation. Judges must therefore make decisions in cases the legislator never considered. In the nineteenth century no one could imagine that people would one day go joyriding in cars, or that electricity would be stolen to power cannabis plantations. Yet judges must find solutions to such situations. In doing so, they help shape the law of tomorrow. This also means they make choices, and those choices are never completely neutral.

The question therefore arises why we accept that judges hold such power. There are three reasons.

First, we accept the power of judges because it is democratically embedded. Judges do not decide their own role. The law determines what they may do, how they are appointed and what their powers are. AI judges operate outside such a legal framework.

Second, every power requires a counterpower. Judges must also be able to set limits on politics. Consider the childcare benefits scandal in the Netherlands. People applied for benefits and often received automatic advances. The legislator demanded a strict anti-fraud approach. As a result, a minor administrative error could ruin someone’s life. Many people lost their homes. It is the task of a judge to put a stop to such injustice. An AI judge would not provide such a counterbalance to political power.

This touches on judicial independence. Judges derive their legitimacy from the fact that they are institutionally independent from political and economic power. AI judges are not. They are developed, trained and maintained by big tech companies. Whoever controls the AI judges also controls the outcome. Power would shift from democratically accountable institutions to private actors that prioritise their own interests.

Let me give one example. Last year there was much controversy about the door handles of certain Tesla models. In ordinary cars you can always open the door, but in those models a working battery is required. If there is a problem with the battery, the door cannot be opened. In several accidents people died because they were trapped inside the car and could not open the door, or at least not easily enough. Would an AI judge hold Tesla responsible for those deaths? Possibly not, if the companies that build and control the AI judge share Tesla's interests.

In summary, justice could become faster, cheaper and more predictable with AI, and human judges can certainly make use of these advantages. But replacing human judges with AI judges is not a good idea. Judicial decisions have profound consequences for individuals and for society as a whole. Justice must be reliable, responsible and legitimate. AI judges are not. They could only offer us the fairness of big tech, and I hope we will be spared that.

In short

  • AI has the potential to make the justice system faster and cheaper.
  • But legal scholar Frederik Peeraer also sees risks such as hallucinations, privacy concerns and a lack of neutrality and accountability.
  • According to Peeraer, AI-based justice could also undermine our democracy.

Frederik Peeraer is a legal expert and researcher at the Faculty of Law and Criminology at Ghent University. His research focuses on legal theory, legal argumentation, and the methods by which lawyers interpret and apply the law. He was involved as an advisor in the development of the new Belgian Civil Code.

Read more about Artificial Intelligence

Ghent University is brimming with AI expertise. Dive in and read more opinion pieces from our researchers.

Read also

Does AI make our brains smarter or lazier?

We asked neurologist Kristl Vonck how she views the impact of AI on our brains. What happens to our brain when we systematically outsource thinking to an algorithm?


Will robots soon leave us without work?

We asked economist Amy Van Looy how she views the use of robots in the workplace. Are our jobs at risk?


Evil genius: what René Descartes can tell us about AI

We asked philosopher Ignaas Devisch to share his thoughts on how we should deal with AI. He revisits Descartes’ thought experiment.


Who decides what AI is allowed to say?

We asked AI professor Tijl De Bie how he views the limits we should or should not impose on AI.
