Unbelievable! A lawyer is in trouble for relying on ChatGPT's responses in a case against Avianca.
The American faces possible sanctions after using this popular artificial-intelligence tool to compose a brief. What he didn't know is that the chatbot doesn't always tell the truth, and it invented a whole series of supposed legal precedents.
As reported by the newspaper The New York Times, the lawyer in trouble is Steven Schwartz, counsel in a case before a New York court: a lawsuit against the airline Avianca filed by a passenger who claims he was injured when he was struck by a service cart during a flight.
Schwartz represents the plaintiff and used ChatGPT to prepare a brief opposing a defense request to have the case dismissed.
In the ten-page document, the lawyer cited several judicial decisions to support his arguments, but it soon emerged that the well-known chatbot from the company OpenAI had invented them.
“The Court finds itself before an unprecedented situation. A filing submitted by plaintiff’s counsel in opposition to a motion to dismiss (the case) is replete with citations to non-existent cases,” Judge Kevin Castel wrote.
The judge then issued an order scheduling a hearing for June 8, at which Schwartz will have to explain why he should not be sanctioned for attempting to rely on entirely fabricated precedents.
The order came one day after the lawyer himself filed an affidavit in which he admitted to using ChatGPT to prepare the brief and acknowledged that the only verification he had carried out was to ask the application whether the cases it cited were real.
Schwartz justified himself by saying that he had never used such a tool before and that, therefore, he “was not aware of the possibility that its content could be false.”