An interesting snippet, from a digest by a lawyer, no less. He makes the case that AIs are better at following the law than human judges. And we might note that AI judges would be less likely to step out of their proper role and try to rule the world:
"In 2015, Harvard ran a judicial experiment. Thirty-five federal judges participated in a workshop. … The all-too-predictable result was most judges based their decisions on the bleeding-heart, emotional factors— and not the law. …
"But this week, Eric A. Posner and Shivam Saran at the University of Chicago Law School released a new paper in which they repeated the 2015 experiment, but this time used ChatGPT to evaluate the cases. … ChatGPT was completely unpersuaded by the emotional factors. In the 2015 study, almost all the human judges who got “sympathetic defendant” packets reversed the conviction. But GPT didn’t take the bait. It affirmed the conviction in all 25 cases. …"
Perhaps yes, likely no. ChatGPT will rule according to its training. If it is trained to give preference to, for example, a certain minority group on the grounds of systemic racism and the principle of disparate outcomes, it will do so regardless of the black-letter law. In the end, the rulings of AI judges will be determined by the biases of their human creators.
But it’s even worse than that. Forget about human-judge emotionalism or biased training sets in the abstract. If the AI judge simply interpreted the law literally, the results would be — to say the least — unexpected. This was the point made by John Hasnas in 1995. Review this post from 2022 for a discussion of the topic. The original link to Hasnas’s article is broken, so you can use this one.
Paraphrasing Euclid: There is no AI road to justice.
I don’t know what criteria ChatGPT relies on, but I do know it isn’t case or statutory law. I had to redo a pretrial memorandum at the last minute because a (former) law clerk had relied on “superGPT”. It came up with a lot of cases exactly on point. All completely fabricated.
Presumably, a Large Language Model could be trained specifically on Black Letter Law. There have been reports of LLMs like ChatGPT fabricating academic references – and, supposedly, enhancements are being implemented to stop GPT from lying. If such reports are correct, then similar enhancements could be implemented to stop an LLM trained on Black Letter Law from making up citations.
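To make that concrete, here is a minimal sketch of what such an enhancement might look like: a post-processing check that refuses to pass along any citation it cannot verify against an authoritative database. This is purely illustrative, not what OpenAI or any vendor has actually implemented; the KNOWN_CITATIONS set and the citation pattern below are invented stand-ins for a real citator service.

```python
import re

# Illustrative stand-in for an authoritative citation database (e.g. a citator service).
# A real enhancement would query a verified reporter index, not a hard-coded set.
KNOWN_CITATIONS = {
    "347 U.S. 483",   # entries shown purely for demonstration
    "410 U.S. 113",
}

# Very rough pattern for "volume Reporter page" citations such as "347 U.S. 483".
CITATION_RE = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z0-9.\s]{1,20}?\s+\d{1,4}\b")

def flag_unverified_citations(draft: str) -> list[str]:
    """Return citation-looking strings in the draft that are not in the database."""
    candidates = [c.strip() for c in CITATION_RE.findall(draft)]
    return [c for c in candidates if c not in KNOWN_CITATIONS]

draft = "Plaintiff relies on 347 U.S. 483 and on Smith v. Jones, 999 F.3d 123 (2021)."
problems = flag_unverified_citations(draft)
if problems:
    print("Unverified citations, do not file:", problems)
```

Nothing about the check is clever; the work is done by the database. The point is only that a citation which cannot be verified never reaches the filing.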
One of the logical failings of the legal system is that it often enshrines incompatible (if worthy) goals. One example is international law giving primacy both to the inviolability of current national borders and to the right of peoples to self-determination – both good, but quite inconsistent.
A first step towards “AutoJudge” would be an LLM trained only on current law, which would identify any inconsistencies between a new law and all existing laws & regulations. If there are any incompatibilities, then the representatives who voted for the new law would be publicly flogged and have their citizenship revoked. (I can dream!)
As a side note, in Arthur C. Clarke’s book “2001”, he explained HAL going off the rails in terms of incompatible instructions which humans had unintentionally fed into the machine. HAL had been programmed to give the human astronauts all relevant information, and had also been programmed not to reveal the fact that the mission was to explore the Monolith. HAL dealt with the logical incompatibility by killing the human astronauts. It is tempting to see a parallel in what today’s excessive laws are doing to human society.
This really is kind of an outmoded discussion. You can’t use ChatGPT for legal research; that has been demonstrated conclusively. But so what? I have Westlaw Precision, which is incapable of making stuff up because it only has access to the Westlaw database. And it is wonderful.
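For what it’s worth, the design point behind that last claim can be sketched in a few lines. This says nothing about how Westlaw Precision actually works internally; it only illustrates why a system that retrieves verbatim records from a closed corpus cannot produce a case that is not in the corpus, the way a free-form text generator can. The corpus and citations below are hypothetical.

```python
from dataclasses import dataclass

# A sketch of retrieval over a closed corpus, NOT a description of any real product's
# internals. The point: every result is a verbatim record, so nothing can be invented.

@dataclass
class Case:
    citation: str
    text: str

# Hypothetical corpus; a real service indexes millions of reported decisions.
CORPUS = [
    Case("123 F.3d 456", "Sets out the standard for summary judgment in diversity cases."),
    Case("789 F.2d 1011", "Addresses the admissibility of expert testimony at trial."),
]

def search(query: str) -> list[Case]:
    """Return only documents that exist in the corpus and contain every query term."""
    terms = query.lower().split()
    return [c for c in CORPUS if all(t in c.text.lower() for t in terms)]

# A query for something not in the corpus simply returns nothing; it cannot fabricate a case.
for hit in search("summary judgment"):
    print(hit.citation, "-", hit.text)
```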