Lawyers blame ChatGPT for tricking them into citing bogus case law

glitcher | 150 points

Misrepresenting wholly fabricated case law to a judge out of sheer incompetence has been grounds for disbarment in most US states for over a century.

ChatGPT told me that. It might be true.

xbar | a year ago

>[to the judge on behalf of Schwartz] Mr. Schwartz, someone who barely does federal research, chose to use this new technology.

That’s a horrible excuse. I’m not a lawyer and don’t do caselaw research on any sort of regular basis, but I have still poked around a bit when something strikes my interest. Compared to Google, the legal databases are clunky and have poor matching, but I don’t remember it taking me more than half an hour or so to figure out which system would have the case I want and drill down to find it. ChatGPT was giving the lawyer the (made up) case. It should really be a trivial task to find it in a caselaw database. Heck, if I were the lawyer I would really, really want to find the full text of the case! Who knows what broader context or additional nuggets of useful information it might have for my current client’s issue?

I would not be surprised if he went looking, couldn’t find it easily, and just said “whatever it has to be there somewhere and I can get by without the entire thing”

ineedasername | a year ago

The lawyer said: “I did not comprehend that ChatGPT could fabricate cases."

I wonder how many other people using ChatGPT do not comprehend that ChatGPT can be a confident bullshitter...

I'm surprised that this one case is getting so much attention because there must be so many instances of people using false information they got from ChatGPT.

DtNZNkLN | a year ago

If they were able to be tricked by ChatGPT, they are definitely not good at being lawyers. Trying to blame the AI is like trying to blame MS Word for offering an inappropriate homonym when spell checking. The computer did not put the citations in front of the judge.

voakbasda | a year ago

Disbarment is appropriate here.

Why not blame your laptop manufacturer for creating the hardware you used to file your fraudulent court documents?

rank0 | a year ago

“There’s simply no way we could have known these were bogus cases.” the lawyers are quoted as saying.

They are currently using Bard to help draft a lawsuit against OpenAI, claiming the company knowingly misrepresents the capabilities of their technology.

BaculumMeumEst | a year ago

The cynic in me wonders if this isn’t part of a plan to create a legal precedent banning AI from handling legal disputes.

Think about it: the legal profession is possibly one of the most threatened by the development of AI models. What better way to secure the professional future of the long tail of lawyers and paralegals?

omginternets | a year ago

When you start giving ChatGPT a plugin to query LexisNexis and do proper citations (as Bing Chat does), then things get interesting.

Unfortunately Lexis's API fees are currently quite steep, so only very wealthy law firms will be able to afford to use such a service in the short term.

sys42590 | a year ago

They saw ChatGPT passed the bar in the 90th percentile and thought they were on easy street. What a dumb way to lose your law license.

bastardoperator | a year ago

Oh god, I suppose a part of the reason why those lawyers are well paid is because if something goes wrong then they're going to be responsible...

summerlight | a year ago

People confuse creating a response that makes sense in a language (which LLMs are designed to do) with conveying facts and truths in a language (which LLMs are not designed to do).

LLMs are revolutionary because they provide a more fluent interface to data… but that does not mean the data is correct. Especially not in these early phases.

For most people, any sufficiently advanced technology is indistinguishable from magic. The average Joe thinks this is magic.

John23832 | a year ago

Just once it would be refreshing for someone who gets caught doing something wrong to say, "Wow, ya got me. I'm sorry. Not for doing it, I'm only sorry I got caught. I really thought I'd skate right by on this one. Legitimately won't do it again. Or if I do I'll proof-read it better at least. Please let me know the punishment and I'll accept it."

300bps | a year ago

I'm so confused. Why did the lawyers not simply check whether the references were real? How hard is it to look through an LLM's output and do a quick search to see whether any of the laws or cases mentioned in it are in fact unsubstantiated?
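To 300bps's point: even before a real database lookup, a few lines of code can triage an LLM's output for citations that need verifying. The sketch below is purely illustrative (the regex and reporter list are rough assumptions, nowhere near a full Bluebook parser), and passing it only means a citation is well-formed, not that the case exists:

```python
import re

# A few common federal reporter abbreviations (illustrative, far from exhaustive).
KNOWN_REPORTERS = {"U.S.", "S. Ct.", "F.", "F.2d", "F.3d",
                   "F. Supp.", "F. Supp. 2d", "F. Supp. 3d"}

# Rough "volume REPORTER page" pattern; real citation parsing is far more involved.
CITATION_RE = re.compile(r"(\d+)\s+([A-Z][A-Za-z0-9\. ]*?)\s+(\d+)")

def extract_citations(text):
    """Pull (volume, reporter, page) triples out of free text."""
    return [(int(v), r.strip(), int(p)) for v, r, p in CITATION_RE.findall(text)]

def looks_plausible(volume, reporter, page):
    """Flag citations whose reporter abbreviation is unrecognized.

    Passing this check only means the citation is well-formed; confirming
    that the case actually exists still requires a real database lookup.
    """
    return reporter in KNOWN_REPORTERS and volume > 0 and page > 0

# "Varghese" is one of the fabricated citations from this very case: it is
# well-formed, so a format check alone cannot catch it.
text = "See Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019)."
for cite in extract_citations(text):
    print(cite, "plausible" if looks_plausible(*cite) else "unrecognized reporter")
# prints: (925, 'F.3d', 1339) plausible
```

The real check is still the lookup: feed each extracted citation into an actual case-law database and fail loudly when nothing comes back. The fabricated citations here were plausibly formatted, which is exactly why a format check alone is not enough.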

anon25783 | a year ago

Blaming ChatGPT for making stuff up is like blaming sex dice after you followed their instructions to "spank" + "hair".

causi | a year ago

"How dare you allow me to use my laziness against myself ChatGPT!"

midnitewarrior | a year ago

OpenAI wanted to preach safety. I think we should hold them liable for literally everything ChatGPT does or says, until the model is open and they can argue that they have no control over it.

They wanted this liability, they accepted this liability, they said they'd keep it safe and they haven't. It's on them.

bioemerl | a year ago

While the lawyers blamed ChatGPT, the totality of the circumstances seems to indicate that they're being less than honest in doing so. There is a live-tweet of the hearing here: https://twitter.com/innercitypress/status/166683852676213965..., and you can follow along with the lawyerly cringe there.

Okay, lawyer #1 (LoDuca, the one on the case in the first place) appears to have played essentially no role in the entire case; he was effectively a sockpuppet for lawyer #2 (Schwartz), since LoDuca was admitted to federal court and Schwartz was not. He admits to not having read the things Schwartz asked him to file, nor the "WTF?" missives that came back. He lied to the court about when he was going to be on vacation, because that is when Schwartz was on vacation. But other than doing nothing when he was supposed to do something (supposed to do a lot of somethings), he is otherwise uninvolved in the shenanigans.

So everything happened because of Schwartz, but before we get to that part, let me fill in relevant background information. The client is suing an airline for an injury governed by the Montreal Convention. Said airline went bankrupt, and when that happened, the lawyer dismissed the lawsuit, only to refile it when the airline emerged from bankruptcy. This was a mistake; dismissing-and-refiling meant the second case fell outside the statute of limitations. The airline filed a motion to dismiss because, well, outside the statute of limitations, and it is Schwartz's response that is at the center of this controversy.

What appears to have happened is that there is no case law to justify why the case shouldn't be dismissed. Schwartz used ChatGPT to try to come up with case law [1] for the argument. He claims in the hearing that he treated it like a search engine and didn't understand that it could come up with fake arguments. But I'm skeptical of those claims, because even if he was using ChatGPT to search for cases, he clearly wasn't reading them.

When the airline basically said "uh, we can't find these cases," Schwartz responded by providing fake case texts from ChatGPT, at which point alarm bells should have been ringing: "SOMETHING IS HORRIBLY, HORRIBLY WRONG." The purported cases in the reply had blindingly obvious flaws that ought to have made him realize something was up before he was off the first page. It is only when the judge turned around and issued the order to show cause that the lawyers attempted to start coming clean.

But wait, there's more! The response was improperly notarized: it had the wrong month. So the judge asked them to provide the original document before signature to justify why it wasn't notary fraud. And, uh, there's a clear OCR error (compare last page of [2] and [3]).

When we get to these parts of the hearing, Schwartz's responses aren't encouraging. He tries to dodge the issue of why he was citing cases he hadn't read. Believing his explanation of why he thought the cases were unpublished ("F.3d means Federal district, third department") really requires you to assume he is an incompetent lawyer at best. The inconsistencies in the affidavits are glossed over, and the stories don't entirely add up. It seems like a minor issue, but it really gives the impression that even with all the attention on them right now, they're still being less than candid with the court.

The attorneys for Schwartz are trying hard to frame it as "he didn't know what he was getting into with ChatGPT; it's not his fault," but honestly, it really strikes me that he knew what he was getting into and somehow thought he wouldn't get caught.

[1] His conversation can be found here: https://storage.courtlistener.com/recap/gov.uscourts.nysd.57..., it's one of the affidavits in the case.

[2] https://storage.courtlistener.com/recap/gov.uscourts.nysd.57...

[3] https://storage.courtlistener.com/recap/gov.uscourts.nysd.57...

jcranmer | a year ago

Given that they're attorneys, we can guess what their next course of action may be. I've noticed that ChatGPT's onboarding now includes a screen with a big disclaimer where there wasn't one before. I can only assume it is related to cases like these.

jameslk | a year ago

Related:

- "A man sued Avianca Airline – his lawyer used ChatGPT" (13 days ago, 174 points, 139 comments): https://news.ycombinator.com/item?id=36095352

- "Lawyer who used ChatGPT faces penalty for made up citations" (1 day ago, 106 points, 128 comments): https://news.ycombinator.com/item?id=36242462

Original court documents: https://www.courtlistener.com/docket/63107798/mata-v-avianca...

isp | a year ago
[deleted]
| a year ago

That's weird. I tried really hard to convince it that there's such case law establishing a legal category of "praiseworthy homicide" and it refused to believe me. I thought it was overtrained / patched on all law-related applications.

IIAOPSW | a year ago

Comments here are like "what dumb lawyers". Sure, OK. But then what does that say about "GPT-4 passed the bar exam!", and how useful is that data point, given that passing the exam clearly does not imply GPT-4 has the actual skills of a human lawyer?

zzzeek | a year ago

A poor workman blames his tools.

hackerfactor1 | a year ago

I don't understand how they messed this up so badly. They say they didn't know it could hallucinate and that they thought it was just like any other search engine. But even if it had worked the way they thought, they'd still have fucked up.

If it's just like a normal person: if that person isn't a lawyer, you wouldn't ask them to do your lawyerly work. I'd hope this lawyer doesn't ask his kids to do his work for him.

If it's just like a normal search engine: we all know how much bullshit, spam, and misinformation there is on the internet (mostly written by good old-fashioned humans!). So that wouldn't have been trustworthy either!

There's no way this kind of thing is excusable.

6gvONxR4sf7o | a year ago

A lawyer is certainly at fault if they do not fact-check the material they present at trial. But the conmen selling ChatGPT and the like are extremely irresponsible for the way they market LLMs as magical AI that arrives at factually correct answers by reasoning, rather than as the consequence of the law of large numbers applied to stochastic text generation.

plorg | a year ago

Would it have been that difficult for the lawyers to actually check the case law ChatGPT cited?

Seems like pure laziness.

bequanna | a year ago

Boy, that's an elementary-school-level excuse. "Your honor, my dog ate my homework."

ww520 | 10 months ago

Why don't they sue it?

seydor | a year ago

in this thread (and all threads on this topic):

angry armchair legalists trying to stick it to The Man!! by pretending negligent homicide is the same as premeditated murder

meghan_rain | a year ago

“I’m an idiot, my bad” is rarely a useful defense when you have a fiduciary duty to your client and work in a tightly regulated field.

cainxinth | 10 months ago

The lawyers are either idiots, totally dishonest, or both. OpenAI should make ChatGPT off-limits to lawyers in their TOS.

RagnarD | 10 months ago

This is interesting. How long until someone gets sick because they followed what ChatGPT told them to do? Medical advice? Political misinformation?

How will things unfold this decade? Banning ChatGPT from certain topics (medicine, law, etc.)? This decade will be really interesting indeed.

elforce002 | a year ago

Surely CGPT can tell them how to get out of this one.

activiation | a year ago

It makes sense in a Dunning-Kruger way that lawyers this dumb would consider ChatGPT qualified for their purposes.

pengaru | a year ago

[dead]

theknocker | a year ago

[flagged]

cowmix | a year ago