A New York lawyer is facing a court hearing of his own after his firm used AI tool ChatGPT for legal research.
A judge said the court was faced with an "unprecedented circumstance" after a filing was found to reference example legal cases that did not exist.
The lawyer who used the tool told the court he was "unaware that its content could be false".
ChatGPT creates original text on request, but comes with warnings it can "produce inaccurate information".
The original case involved a man suing an airline over an alleged personal injury. His legal team submitted a brief that cited several previous court cases in an attempt to prove, using precedent, why the case should move forward.
But the airline's lawyers later wrote to the judge to say they could not find several of the cases that were referenced in the brief.
"Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations," Judge Castel wrote in an order demanding the man's legal team explain itself.
Over the course of several filings, it emerged that the research had not been prepared by Peter LoDuca, the lawyer for the plaintiff, but by a colleague of his at the same law firm. Steven A Schwartz, who has been an attorney for more than 30 years, used ChatGPT to look for similar previous cases.