The case reported in the New York Times (and elsewhere) under the headline Here’s What Happens When Your Lawyer Uses ChatGPT has caused great excitement: a ten-page pleading submitted by a law firm for its client
cited more than half a dozen relevant court decisions. There was Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and, of course, Varghese v. China Southern Airlines, with its learned discussion of federal law and “the tolling effect of the automatic stay on a statute of limitations.”
But all these decisions had been invented by ChatGPT, which the lawyer had used to help him write the pleading (known in the US as a brief).
There has been some discussion of German lawyers using AI on the beck-community blog.
ChatGPT – Nutzungen durch Anwälte: gefährliche rechtliche Klippen sind zu umschiffen (ChatGPT – use by lawyers: dangerous legal reefs must be steered around) is an entry by Dr. Axel Spies. It refers to an article to which I don’t have access. Its main conclusion is that entering a client’s name, for example, into ChatGPT violates the GDPR (German: DSGVO). It’s hard to imagine a case like the New York one happening in Europe. But obviously, even in the USA the judge soon noticed the problem. I suppose ChatGPT could devise deceptive arguments, but once it starts inventing facts, the falsity should be obvious.
One commenter on the blog entry actually asked ChatGPT what lawyers should think of a chatbot’s legal advice:
This is what ChatGPT itself says on the subject:
As an AI chatbot, I cannot give legal advice, but I can provide you with general information. …
Secondly, lawyers must ensure that the information provided by ChatGPT is correct and up to date. Lawyers cannot rely on ChatGPT alone to answer legal questions, but must check their research carefully and gather additional information in order to obtain a complete and reliable answer.
Peter Winslow also reports on the US case, in German, on the beck-community blog.