First reported by Vice, Colombian First Circuit Judge Juan Manuel Padilla Garcia claims to have used ChatGPT in the process of deciding a case in the city of Cartagena. The judge attests to the tool’s use in a court document from January 30.
“What we are really looking for is to optimize the time spent drafting judgements after corroborating the information provided by AI,” Garcia explained in the court document (translated from the original Spanish). Garcia was hearing a case between a health insurance company and the family of an autistic child over the child’s medical coverage.
Garcia seems to have based his final decision in the case on the chatbot’s responses regarding legal precedents and jurisprudence, and asserts that all responses were fact-checked in this case—an important stipulation given these bots’ occasional bias and unreliability. The judge claims to have wanted to speed up the decision-making process in this largely unprecedented use of AI in a legal proceeding.
Now, I’m no lawyer, but doesn’t this feel exceptionally bad? Isn’t it kind of a judge’s job to be able to do this on their own—to go through an exceptionally rigorous educational and professional weeding-out process to prove they should be a member of an elite caste that determines the character of our laws and governance? Well, now one of them’s using the homework-cheating website to weigh in on a child’s medical care.
Here in the US, meanwhile, the California legal community recently roundly rejected the use of AI chatbots as legal counsel. AI certainly isn’t going away, but stories like this one make me revise my internal assessment of the likelihood of a Butlerian Jihad happening somewhere down the line. At the very least, if ChatGPT decided you were ineligible for medical insurance coverage, wouldn’t you want to recreate that one scene from Office Space?