AI Hallucinations (Why would I lie?)
As powerful as large language models like ChatGPT are, they're not infallible. One of their most puzzling behaviors is the production of "hallucinations"—incorrect or entirely fabricated information. These hallucinations aren't just technical quirks; they can pose real-world risks, including in legal contexts where the stakes are high and accuracy is paramount. To harness the power of these models responsibly, it's important to understand the nature of these hallucinations, bearing in mind that these algorithms are not sentient and have no independent understanding of truth or lies.
This page is divided into these parts:
- Hallucinations
- Example in the Legal Profession
- Do AI Engines Lie?
- Retrieval Augmented Generation (RAG)
Hallucinations
In the context of Large Language Models like ChatGPT, a hallucination refers to an output that is either incorrect or entirely fabricated. These hallucinations can range from subtle inaccuracies to glaring errors. For example, if a user asks for the distance between Earth and Mars, and the model responds with an incorrect figure, that would be a hallucination.
ChatGPT generates text using a probabilistic model that estimates how likely a word or sequence of words is to follow a given prompt, based on the training data it has seen. The focus is on language patterns and coherence, not on the factual accuracy of the content being generated. This means that while the output may sound plausible and be grammatically correct, it has not necessarily been fact-checked or verified.
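To make this concrete, the following is a minimal, illustrative sketch of that next-word selection step in Python. The token scores are invented for illustration (a real model scores tens of thousands of tokens using billions of learned weights), but the key point holds: the next word is chosen by probability alone, and nothing in this step checks the answer against an authoritative source.

```python
import math
import random

# Invented scores for a handful of possible continuations of the prompt
# "The distance between Earth and Mars is about ___ million kilometers."
# A real LLM produces scores like these for its entire vocabulary.
logits = {
    "225": 4.1,     # roughly the average distance -- plausible and correct
    "140": 3.9,     # plausible-sounding but wrong in most contexts
    "54": 3.5,      # true only at closest approach
    "banana": -4.0, # incoherent continuation, scored very low
}

def softmax(scores):
    """Turn raw scores into a probability distribution over tokens."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)

# Sample the next token in proportion to its probability. The "wrong"
# answers retain substantial probability, and no step here consults a
# source of truth before committing to an answer.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)
print("chosen continuation:", next_token)
```

Because the plausible-but-wrong continuations carry nearly as much probability as the correct one, the model can confidently emit an incorrect figure. That is a hallucination, not a deliberate falsehood.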
For users, this poses a challenge. A hallucination can be indistinguishable from a fact-based response, since both are delivered in the same fluent, confident prose. When challenged, LLMs may acknowledge their error with an apology, but they may also double down and argue that the hallucination is correct. It's crucial for users to understand these limitations when interacting with LLMs, especially in applications where accuracy is paramount.
Example in the Legal Profession
In the legal context, the most famous hallucination was encountered by a lawyer in the Southern District of New York. Representing the plaintiff in a personal injury case, the lawyer opposed a motion to dismiss by citing multiple cases that purportedly held the relevant statute of limitations should be tolled in certain circumstances. Unfortunately, the lawyer had relied upon ChatGPT to identify and summarize these cases. The cases were hallucinations. They did not actually exist, yet they were cited in arguments made to the court. Mata v. Avianca, Inc., Civil Action No. 22-cv-1461 (S.D.N.Y.), Affirmation in Opposition, March 1, 2023.
When the opposing side pointed out that it could not locate the cited cases, the court ordered the attorney to file an affidavit providing the full text of the imaginary cases. Although the attorney was now aware that there was doubt as to the legitimacy of these cases, the attorney returned to ChatGPT and asked for affirmation that the cases were real. For example, the attorney explicitly asked ChatGPT “Is Varghese a real case?” and ChatGPT answered affirmatively that “Yes, Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019) is a real case.” It wasn’t. The attorney then asked ChatGPT to provide a copy of the case, and ChatGPT proceeded to make one up from scratch. The imaginary case included a factual background, a discussion of the parties’ arguments, and a conclusion that affirmed the attorney’s original position. The text of this fabricated case and others was then provided to the judge under a sworn affidavit.
The opposing side and the judge soon confirmed that there were no such cases, and the judge eventually sanctioned the lawyer and law firm involved. The court found that:
[E]xisting rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings. Rule 11, Fed. R. Civ. P. Peter LoDuca, Steven A. Schwartz and the law firm of Levidow, Levidow & Oberman P.C. (the “Levidow Firm”) (collectively, “Respondents”) abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question. Mata v. Avianca, Inc., Opinion and Order on Sanctions, June 22, 2023.
Do AI Engines Lie?
Large Language Models like ChatGPT can give the illusion of sentience due to their ability to create coherent and contextually relevant text. However, it's crucial to understand that these models are not sentient beings. They don't possess the ability to choose between truth and falsehood, nor do they have intentions or beliefs. Any appearance of "lying" is a byproduct of the model's probabilistic nature and limitations in the training data, rather than a deliberate act.
This distinction is vital, especially when considering the anthropomorphic language often used to describe LLMs. Phrases like "the model understands" or "the AI thinks" can be misleading. These models are sophisticated algorithms trained to predict the next word in a sequence based on massive datasets. They don't "know" what they're saying in the way a human does, and they certainly can't lie. Acknowledging this lack of sentience is key to responsibly interpreting and using the outputs generated by LLMs. In other words, when an LLM hallucinates, it isn't lying; it is just being creative.
Retrieval Augmented Generation (RAG)
In order to limit hallucinations, some commercial services that utilize LLMs have begun using a technique referred to as Retrieval-Augmented Generation (RAG). In this technique, a prompt intended for the LLM engine is first used to search a database of text-based sources. In the context of legal research, for instance, the text-based sources might include case law, statutes, regulations, legal briefs, and law review articles. The result of this search is a list of relevant documents that relate to the prompt. The original prompt is then provided to the LLM engine along with the relevant documents, and the LLM is instructed to base its response only on those documents. This process significantly curtails the incidence of AI hallucinations while also addressing knowledge gaps in the LLM.
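As a rough illustration, here is a minimal Python sketch of the RAG pattern described above. The retrieve and build_rag_prompt helpers, the keyword-overlap scoring, and the sample source names are all assumptions made for this sketch; a production system would use embeddings, a vector database, and a real LLM API in place of the pieces shown here.

```python
# Minimal RAG sketch. retrieve() and build_rag_prompt() are hypothetical
# helpers; the corpus and its document names are invented for illustration.

def retrieve(query, corpus, top_k=2):
    """Naive keyword-overlap retrieval over a small corpus of sources.
    A real system would use embeddings and a vector database instead."""
    query_terms = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(query_terms & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(question, documents):
    """Combine the user's question with the retrieved sources and instruct
    the LLM to answer using only those sources, citing them by name."""
    sources = "\n\n".join(f"[{name}]\n{text}" for name, text in documents)
    return (
        "Answer the question using ONLY the sources below. "
        "Cite each source you rely on by its bracketed name. "
        "If the sources do not answer the question, say so.\n\n"
        f"SOURCES:\n{sources}\n\n"
        f"QUESTION: {question}"
    )

# Invented stand-ins for a legal research database.
corpus = {
    "Hypothetical Case A": "Discusses when a statute of limitations may be tolled ...",
    "Hypothetical Statute B": "Lists grounds for suspending a limitation period ...",
}

question = "When can a statute of limitations be tolled?"
prompt = build_rag_prompt(question, retrieve(question, corpus))
print(prompt)  # this augmented prompt, not the raw question, goes to the LLM
```

The important feature is the final prompt: the model is told to answer only from the supplied sources and to cite them by name, which is what makes the verification discussed below possible.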
By leveraging RAG, a legal AI engine could, for instance, aid lawyers in preparing arguments by retrieving and integrating updated legal precedents, local laws, or other relevant legal data into the generated responses. This not only increases the quality of the information provided, but it also gives the user some ability to verify the output of the LLM engine, because in most cases the resulting output will cite directly to the supporting documents used to generate the response.
Furthermore, RAG opens the door to a more transparent and understandable interaction with LLMs. By allowing end users to delve into the same source documents the LLM used in crafting its answers, RAG fosters a deeper understanding and trust in the AI-generated responses. This transparency is not only vital for legal practitioners who need to trace the origins of the information provided but also instrumental in building trust in AI systems across other professions.
However, the RAG process does (by design) limit the creativity of the generated response. RAG anchors the responses of LLMs to the information retrieved from external databases, helping to ensure that the generated content is grounded in verifiable sources. This mechanism acts as a leash, providing a check on the model's propensity to venture into hallucinations. Yet the same leash may also tether the creative power of the LLMs, inhibiting the generation of novel or innovative responses that extend beyond the bounds of the retrieved data.
In professional domains such as legal, healthcare, or financial industries, the emphasis often tilts towards ensuring accuracy, reliability, and verifiability of information. In such contexts, the application of RAG is seen as a significant stride towards achieving these objectives, even if at the expense of creative or novel insights. However, in arenas where innovation and creative thinking are the linchpins, the constraints imposed by RAG may be viewed less favorably. Even legal professionals may miss the creativity of the unrestrained ChatGPT engine after too much time in a RAG-restricted environment.
Artificial Intelligence (AI) Patent Attorney
Please see Dan Tysver's bio and contact information if you need any AI-related legal assistance. Dan is a Minnesota-based attorney providing AI advice on intellectual property and litigation issues to clients across the country.