Next week, a veteran New York lawyer of 30 years’ standing will face a disciplinary hearing over a novel kind of misdemeanour: including bogus AI-generated content in a legal brief.
Steven Schwartz, of the firm Levidow, Levidow & Oberman, had submitted a 10-page document to a New York court as part of a personal injury claim against the airline Avianca. The trouble was that, as the judge discovered on closer reading, the submission contained entirely fictional judicial decisions and citations that the generative AI model ChatGPT had “hallucinated”.
In an affidavit, the mortified Schwartz admitted he had used OpenAI’s chatbot to help research the case. The generative AI model had even reassured him the legal precedents it cited were real. But he acknowledged that ChatGPT had proved to be an unreliable source. Greatly regretting his over-reliance on the computer-generated content, he added that he would never use it again “without absolute verification of its authenticity”. One only hopes we can all profit from his “learning experience” — as teachers nowadays call mistakes.