Lawyer Uses ChatGPT In Federal Court And It Goes Horribly Wrong
A lawyer representing a man in a personal injury lawsuit in Manhattan has thrown himself on the mercy of the court. What did the lawyer do wrong? He submitted a federal court filing that cited at least six cases that don’t exist. Sadly, the lawyer relied on the AI chatbot ChatGPT, which invented the cases out of thin air.
The lawyer in the case, Steven A. Schwartz, is representing a man who’s suing Avianca Airlines after a serving cart allegedly hit his knee in 2019. Schwartz said he’d never used ChatGPT before and had no idea it would just invent cases.
In fact, Schwartz said he even asked ChatGPT whether the cases were real, and the chatbot insisted they were. Schwartz only discovered his error after the airline’s lawyers pointed out in a new filing that the cases didn’t exist. (Or the computer’s error, depending on how you look at it.)
The judge in the case, P. Kevin Castel, is holding a hearing on June 8 about what to do in this tangled mess, according to the New York Times. But, needless to say, the judge is not happy.
ChatGPT was launched in late 2022 and instantly became a hit. The chatbot is part of a family of new technologies called generative AI that can hold conversations with users for hours on end. The conversations feel so organic and natural that ChatGPT can sometimes seem to have a mind of its own. But the technology is notoriously inaccurate, often inventing facts, along with sources for those facts, that are completely fake. Google’s competitor product, Bard, has similar problems.