With the hype around artificial intelligence (AI) reaching a fever pitch in recent months, many fear that programs like ChatGPT will one day put them out of work. For one New York lawyer, that nightmare may come true sooner than expected, though not quite in the way he imagined.
According to The New York Times, as quoted by Engadget, attorney Steven Schwartz of the law firm Levidow, Levidow and Oberman recently turned to the OpenAI chatbot for help writing a legal brief, with predictably disastrous results.
Schwartz’s firm has been suing the Colombian airline Avianca on behalf of Roberto Mata, who claims he was injured on a flight to John F. Kennedy International Airport in New York City. When the airline recently asked a federal judge to dismiss the case, Mata’s attorneys responded with a 10-page brief arguing why the lawsuit should proceed.
The document… created by ChatGPT
The document cited more than half a dozen court decisions, including “Varghese v. China Southern Airlines,” “Martinez v. Delta Airlines” and “Miller v. United Airlines.” Unfortunately for everyone involved, no one who read the brief could find any of the decisions cited by Mata’s lawyers. Why? Because ChatGPT had made them all up.
In an affidavit filed Thursday, Schwartz said he had used the chatbot to “supplement” his research into the case.
The lawyer wrote that he “was unaware of the possibility that its content could be false.” He even shared screenshots showing that he had asked the AI whether the cases it cited were real, to which the chatbot replied that they were, claiming the decisions could be found in “reputable legal databases” including Westlaw and LexisNexis.
Schwartz said he “greatly regrets” using ChatGPT “and will never do so in the future without absolute verification of its authenticity.” The judge overseeing the case has ordered a hearing for June 8 to discuss possible sanctions over the “unprecedented circumstance” created by the lawyer’s actions, and Schwartz may well never write a brief this way again.