Artificial intelligence is becoming an ally of law firms around the world. Some firms use software powered by GPT-4 to automate tasks, extract terms from databases, or draft case documents. But although the technology makes work easier, it is not always reliable, and ChatGPT is proof of that.
Roberto Mata, a man traveling from El Salvador to New York, sued Avianca after a metal service cart allegedly struck his knee during the flight. The Colombian airline asked a federal judge in Manhattan to dismiss the case. Mata’s lawyers, however, fired back with a forceful brief citing more than half a dozen court decisions that favored the lawsuit. The problem? Almost all of those cases were fake.
It was Avianca’s lawyers who flagged the irregularity: they had searched for the cited decisions and never found them. US District Judge Kevin Castel agreed: “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” he wrote, as reported by The New York Times.
The court requested an explanation from the lawyer responsible for the brief, Steven Schwartz, who works for the firm Levidow & Oberman. In an affidavit, Schwartz acknowledged that he had used ChatGPT to draft the document, but clarified that he had tried to verify that the cases existed. How? By asking ChatGPT itself whether it had lied in its research on the Avianca case.
ChatGPT defended its lies in the case against Avianca
The lawyer asked ChatGPT to clarify the source of one of the cited precedents. The famous chatbot, developed by OpenAI, insisted that the case was real and cited a reference, which later turned out not to be real either. Schwartz recounts that he pressed further: “Are the other cases you gave me false?” ChatGPT responded: “No, the other cases I provided are real and can be found in reputable legal databases.”
At the hearing held last Thursday, Schwartz explained that he had consulted ChatGPT “to complement” his own work on the case against Avianca. The lawyer assured the court that he had never used ChatGPT before: “I was not aware of the possibility that its content could be false.”
Schwartz, who has more than 30 years of experience, said he deeply regretted trusting ChatGPT and promised never to do it again. The judge handling the case scheduled a hearing for June 8 to discuss possible sanctions.
The margin of error of artificial intelligence
Several studies have warned of the major impact that ChatGPT and similar tools will have on a range of professions and trades, among them the legal profession. The Wall Street Journal reported this month that dozens of law firms were already using software powered by GPT-4, one of the models developed by OpenAI that powers ChatGPT and Microsoft’s new Bing.
Firms with a global reach are using the technology to simplify tasks that demand a lot of time and resources, such as drafting contracts or researching documents. Work that junior lawyers usually handle is now completed in minutes thanks to artificial intelligence.
Some firms, such as DLA Piper, have nevertheless acknowledged that they cannot do without human supervision, having confirmed errors made by their AI-driven systems.
Journalists at The Guardian discovered in April that, as in the Avianca case, ChatGPT was citing articles that had never been published, and it has done so many times since. Researchers at NewsGuard, for example, have repeatedly shown that ChatGPT and Bard (Google’s chatbot) easily produce false content that supports well-known conspiracy theories.