From the article: Large language models have already been used to cheat in school and spread misinformation in news reports. Now they’re creeping into the courts, fueling bogus filings that judges face amid heavy caseloads – raising new risks for a legal system already stretched thin.
A recent Ars Technica report detailed a Georgia appeals court decision highlighting a growing risk for the US legal system: AI-generated hallucinations creeping into court filings and even influencing judicial rulings. In the divorce dispute, the husband’s lawyer submitted a draft order peppered with citations to cases that do not exist – likely invented by generative AI tools like ChatGPT. The initial trial court signed off on the document and subsequently ruled in the husband’s favor.
Only when the wife appealed did the fabricated citations come to light. The appellate panel, led by Judge Jeff Watkins, vacated the order, noting that the bogus cases had undermined the court’s ability to review the decision. Watkins didn’t mince words, calling the citations possible generative-artificial intelligence hallucinations. The court fined the husband’s lawyer $2,500.
That might sound like a one-off, but a lawyer was fined $15,000 in February under similar circumstances. Legal experts warn it is likely a sign of things to come. Generative AI tools are notoriously prone to fabricating information with convincing confidence – a behavior labeled “hallucination.” As AI becomes more accessible to both overwhelmed lawyers and self-represented litigants, experts say judges will increasingly face filings filled with fake cases, phantom precedents, and garbled legal reasoning dressed up to look legitimate.
The problem is compounded by a legal system already stretched thin. In many jurisdictions, judges routinely rubberstamp orders drafted by attorneys. However, the use of AI raises the stakes.
Pert02 on
Start disbarring legal personnel if they bring AI nonsense to courts. Problem will solve itself real quick.
Edit: Start holding judges fucking accountable too. If they allow AI slop arguments in court there should be consequences
SufficientPoophole on
Pfft. The family court is open door and anyone can walk in with an “emergency” and have their family completely destroyed by a stranger.
You are concerned about fighting back 🙄
Lick boots much?
I should mention that people should know ALL JUDGES AND CERTAIN LAWYERS ARE FRENEMIES.
Look into the Jesters and other “clubs”
NotObviouslyARobot on
We need strict statutory liability for generative AI use, falling on the owners of the AI.
Use an AI to operate a company and harm is declared? Your shareholders get sued.
puertomateo on
These articles are always so bad.
They always focus on the hallucinated cases that schlock lawyers get from ChatGPT. And the solution is easy: don't use ChatGPT.
There are implementations of LLMs in the legal field built as walled gardens that will only draw from actual caselaw. And that is the vast majority of the current usage of GenAI (for purposes of assisting in drafting briefs) within the legal field.
These articles are written as if the error was lawyers using GenAI, when that's not what they're writing about at all. What they're writing about is lawyers using ChatGPT. The more interesting question is lawyers using legal-specific GenAI in far less obvious ways, and whether they are even ethically compelled to do so, under their ethical obligations to stay current with technology and deliver efficient work product for their clients.