The Hazards of Solely Relying on ChatGPT for Legal Research: Lessons Learned from a Cautionary Tale
Introduction
This article examines the pitfalls of using ChatGPT as the primary tool for legal research, drawing attention to a cautionary tale in which a seasoned lawyer's blind reliance on ChatGPT led to a catastrophic outcome for him and his firm.
History and Analogy
It is common practice for senior lawyers to give assignments to law clerks or junior lawyers in their firms to draft articles or even legal pleadings for the senior lawyer. In these cases, the junior lawyers and law clerks act as ghost-writers for the senior lawyers.
AI language models like ChatGPT can serve the same purpose, to a very limited extent, at least for articles or short arguments. However, just as a senior lawyer should not rely exclusively on the drafting or legal research of a junior lawyer or law student, they should not blindly rely on ChatGPT either. Once your name goes on a paper, you own all the glory and all the infamy that comes with it, as the case may be.
The New York Case
From The New York Times, Saturday, May 27, 2023:
The lawsuit began like so many others: A man named Roberto Mata sued the airline Avianca, saying he was injured when a metal serving cart struck his knee during a flight to Kennedy International Airport in New York.
When Avianca asked a Manhattan federal judge to toss out the case, Mr. Mata’s lawyers vehemently objected, submitting a 10-page brief that cited more than half a dozen relevant court decisions. There was Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and, of course, Varghese v. China Southern Airlines, with its learned discussion of federal law and “the tolling effect of the automatic stay on a statute of limitations.”
There was just one hitch: No one — not the airline’s lawyers, not even the judge himself — could find the decisions or the quotations cited and summarized in the brief.
That was because ChatGPT had invented everything.
The lawyer who created the brief, [name omitted], threw himself on the mercy of the court on Thursday, saying in an affidavit that he had used the artificial intelligence program to do his legal research — “a source that has revealed itself to be unreliable.”
The article goes on to state that the lawyer in question has over thirty years’ experience. The lawyer’s failure to check citations and cases defies explanation and is, IMHO, inexcusable. I suggest people read the entire article, because it is one huge cautionary tale for the practitioner.
Perils of Apparent Certainty
As I stated in a previous blog post, several months ago I also tried to save some time and effort by using ChatGPT in a legal matter I was researching. I posed a very narrow prompt and asked it to limit its answer to Florida Supreme Court cases. Because the issue I was researching was narrow and somewhat esoteric, I expected one or two cases at most. ChatGPT returned several, and to my surprise they were all completely on point and favorable to my argument. This was both a pleasant surprise and a bit unsettling.
I then queried it for more cases. It provided another list of recent Florida Supreme Court cases, all exactly on point and favorable. Now I was skeptical. I queried it a third time and got the same result: yet another list of very recent Florida Supreme Court cases on a matter I thought was esoteric and uncommon. Now I was really skeptical, so I did my own legal research.
Using Westlaw, I found that none of the cases ChatGPT had provided were on point and that many of them did not even exist. Every single citation was either wrong or completely made up. Not even a first-year law student would mess up this badly. I then conducted my own research and came up with my answer in about thirty minutes. Instead of saving time, I had lost about twenty minutes down an AI rabbit hole. I had come face-to-face with an AI hallucination, and it proved a valuable lesson.
At present, IMHO AI legal research is not fit for purpose.
Contextual Blind Spots and Hallucinations
ChatGPT, as an advanced language model, lacks the contextual understanding and legal expertise possessed by experienced legal professionals. It simply does not understand what it is doing. And it gets worse.
AI hallucination refers to a phenomenon where artificial intelligence systems generate information or responses that are fictional or not based on real-world data. It occurs when the AI system, due to its training on vast amounts of information, produces outputs that seem plausible but are actually fabricated or lack factual basis. This can lead to misleading or inaccurate results, potentially causing significant consequences when relied upon for decision-making or information dissemination.
Neglecting Professional Responsibility
Lawyers have a profound responsibility to provide competent representation to their clients. No competent lawyer should rely on the pleadings or research of a law student or junior attorney, at least not without checking the sources and actually reading the documents. They certainly should not rely on AI without checking, at least not at this point – and probably not ever.
The New York lawyer's reliance on ChatGPT without conducting due diligence reflects a failure to meet this professional obligation. By forgoing the necessary human expertise and neglecting to verify sources, the lawyer compromised the integrity of his legal argument and jeopardized his client's best interests. Further, as this case was removed to federal court, Rule 11 sanctions are likely. Federal judges have no patience for sloppy practice.
Conclusion
The case of the seasoned lawyer who fell victim to the pitfalls of relying solely on ChatGPT for legal research serves as a poignant reminder of the hazards associated with unquestioning dependence on AI technology. Contextual blind spots, the perils of misinformation, the potential for biases, and the erosion of professional responsibility highlight the need for caution and critical thinking when integrating AI into legal research practices. Lawyers must recognize that AI models like ChatGPT should be viewed as valuable tools that augment human expertise, rather than serving as substitutes for diligent human analysis and verification. By maintaining a balance between AI assistance and human judgment, legal professionals can navigate the complexities of legal research effectively, ensuring competent representation and upholding the principles of justice.