Both Humans and AI Hallucinate — But Not in the Same Way

In the past six months, the advent of highly capable large language models (LLMs) like GPT-3.5 has captivated the world's attention. However, as users have discovered these models' fallibility, trust in them has waned, revealing that LLMs, like the humans who built them, are imperfect.

Both AI and Humans Hallucinate

How Humans Hallucinate

Humans, too, can hallucinate: we sometimes generate false information, either intentionally or unintentionally. Our unintentional hallucinations often stem from cognitive biases, the by-products of mental shortcuts known as "heuristics."

These shortcuts arise out of necessity. With limited cognitive capacity, we can process only a fraction of the information bombarding our senses, so our brains rely on learned associations to fill in the gaps and respond swiftly to questions or challenges.

But the same shortcuts can lead to flawed judgment. For instance, automation bias inclines us to favor information generated by automated systems, such as ChatGPT, over non-automated sources, causing us to overlook errors or act on false information. The halo effect, in which an initial impression of a person or system colors all of our subsequent interactions with it, is another example.

How AI Hallucinates

In the context of LLMs, hallucination means something different. An LLM is not trying to conserve limited mental resources to make sense of the world efficiently. Here, "hallucinating" describes a failed attempt to predict a suitable response to a given input.

Even so, there is a similarity between how humans and LLMs hallucinate: both are attempting to fill in the gaps. An LLM generates a response by predicting which word is most likely to come next in a sequence, based on the preceding context and the associations it learned during training.

Like humans, LLMs try to predict the most plausible response; unlike humans, they do so without understanding what they are saying. As for why LLMs hallucinate, contributing factors include training on flawed or insufficient data, design and programming choices, and reinforcement through human-guided training.
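
To make the prediction step concrete, here is a minimal sketch of next-token prediction, assuming the Hugging Face transformers library and the small, publicly available gpt2 checkpoint (an illustrative stand-in; GPT-3.5 itself cannot be inspected this way). It shows the model assigning a probability to every candidate next word:

```python
# A minimal sketch of next-token prediction, assuming the Hugging Face
# "transformers" library and the small "gpt2" checkpoint (illustrative
# choices, not the proprietary models discussed above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The first person to walk on the Moon was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Turn the scores for the final position into a probability
# distribution over the whole vocabulary.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>12}  p={prob.item():.3f}")
```

The distribution rewards fluency, not truth: the model picks whatever continuation its training data makes most probable, so if the data associates a prompt with a wrong answer often enough, the wrong answer can carry the highest probability.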

Pursuing Improvement Together

In reality, human and technological shortcomings are intertwined, so fixing one means addressing the other. Here are some strategies for doing that:

Responsible Data Management
Addressing biases in AI requires diverse and representative training data, along with bias-aware algorithms and techniques such as data balancing to eliminate skewed or discriminatory patterns.
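
As a deliberately tiny illustration of data balancing, here is a sketch of one such technique, random oversampling of an under-represented class; the labels and counts are hypothetical:

```python
# A toy sketch of one data-balancing technique: randomly oversampling
# an under-represented class before training. The labels and counts
# here are hypothetical, purely for illustration.
import random
from collections import Counter

random.seed(0)

# An imbalanced toy dataset of (text, label) pairs.
dataset = [("example text", "majority")] * 950 + [("example text", "minority")] * 50

counts = Counter(label for _, label in dataset)
target = max(counts.values())

balanced = list(dataset)
for label, count in counts.items():
    if count < target:
        pool = [example for example in dataset if example[1] == label]
        balanced += random.choices(pool, k=target - count)

print(Counter(label for _, label in balanced))
# Counter({'majority': 950, 'minority': 950})
```

Oversampling is only one lever: reweighting the training loss or, better, collecting more representative data avoids the overfitting that duplicated examples can encourage.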

Transparency and Explainable AI
Despite efforts to address biases, they can persist and be challenging to detect. Studying how biases enter and propagate within AI systems allows for improved understanding and transparency in decision-making processes—an essential aspect of explainable AI.

Prioritizing the Public Interest
Achieving unbiased AI systems involves human accountability and integrating human values. Ensuring diverse representation among stakeholders, encompassing different backgrounds, cultures, and perspectives, is key.

By collaborating in these ways, we can build smarter AI systems that help us keep our own hallucinations in check. In healthcare, for example, AI can analyze human decisions, flag inconsistencies, and prompt clinicians to take a second look, improving diagnostic accuracy while keeping humans accountable.

While we strive to enhance the accuracy of LLMs, we must not disregard how their current fallibility serves as a mirror reflecting our imperfections.