Sweet Little AI Lies: New York Lawyer Faces Sanctions After Using ChatGPT to Write Brief Filled with Fake Citations

A New York-based attorney is facing potential sanctions after using OpenAI’s ChatGPT to write a legal brief he submitted to the court. The problem? The AI chatbot filled the brief with citations to fictitious cases, a failure mode of AI chatbots known as “hallucinating.” In an affidavit, the lawyer claimed, “I was unaware of the possibility that [ChatGPT’s] content could be false.”

Engadget reports that attorney Steven Schwartz of the law firm Levidow, Levidow & Oberman used AI to help with a lawsuit against the Colombian airline Avianca, a groundbreaking case that highlights AI’s potential drawbacks in legal practice. The resulting legal brief prepared with ChatGPT’s help, however, was filled with references to court rulings that simply did not exist.

OpenAI logo seen on screen with ChatGPT website displayed on mobile seen in this illustration in Brussels, Belgium, on December 12, 2022. (Photo by Jonathan Raa/NurPhoto via Getty Images)

Roberto Mata, who claims he was injured on a flight to New York City’s John F. Kennedy International Airport, is represented by Schwartz’s law firm. After Avianca asked for the case to be dismissed, Schwartz submitted a 10-page brief arguing that the lawsuit should continue.

More than a dozen court rulings were cited in the document, including “Miller v. United Airlines,” “Martinez v. Delta Airlines,” and “Varghese v. China Southern Airlines.” These rulings, however, were nowhere to be found: ChatGPT had made them up entirely. Such fabrications, referred to in the tech industry as “hallucinations,” are an extremely common occurrence with ChatGPT and similar tools.

Schwartz claimed in an affidavit that he had used the chatbot to “supplement” his case-related research. “I was unaware of the possibility that [ChatGPT’s] content could be false,” he wrote. Screenshots submitted by Schwartz reveal that he had questioned ChatGPT about the veracity of the cases it cited. The AI responded affirmatively, asserting that the rulings could be found in “reputable legal databases” such as Westlaw and LexisNexis.

Expressing his regret, Schwartz stated, “I greatly regret using ChatGPT and will never do so in the future without absolute verification of its authenticity.”

The case has drawn attention in the legal community because it is believed to be the first time artificial intelligence has been used in this way. The judge overseeing the case has scheduled a hearing for June 8 to discuss possible penalties for Schwartz’s actions. As the legal profession grapples with how to integrate AI, the case serves as a stark reminder of the need for caution and verification when using such tools.

Read more at Engadget here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan
