Learn what a factual contradiction hallucination is, why it happens, and how to identify it in the outputs of large language models (LLMs) such as Bard and ChatGPT.
Question
Which of the following best illustrates a factual contradiction hallucination?
A. Bard making an untrue claim about the James Webb Space Telescope.
B. ChatGPT stating the month of August is in the summer in the Northern Hemisphere.
C. Bard providing the same response every time you input the same prompt.
D. Bing AI telling users they have lost the AI bot’s “trust and respect”.
Answer
A. Bard making an untrue claim about the James Webb Space Telescope.
Explanation
While both A and D actually occurred, the untrue claim about the James Webb Space Telescope is the clearer example of a factual contradiction.
The best answer is therefore option A: Bard making an untrue claim about the James Webb Space Telescope.
A factual contradiction hallucination occurs when a large language model (LLM) generates false or fictitious information and presents it as fact. This type of hallucination can arise for various reasons, such as incomplete or biased training data, source-reference divergence, overfitting and lack of novelty, or guesswork prompted by vague or insufficiently detailed prompts.
Option A illustrates a factual contradiction hallucination because Bard makes an untrue claim about the James Webb Space Telescope. Bard says that the telescope was launched in 2023 and that it can see the edge of the universe. Both statements are false: the telescope was launched in December 2021, and it can see objects roughly 13.6 billion light-years away, which is not the edge of the universe. Bard presents these false statements as facts, which can mislead or confuse the reader.
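To see why this counts as a factual contradiction rather than a difference of opinion, the claim can be checked against a trusted reference. The sketch below is a minimal, hypothetical illustration in Python; the known_facts table and check_claim helper are invented for this example and are not part of any library or of Bard itself.

```python
# Minimal sketch of a reference-based fact check (illustrative only).
# known_facts and check_claim are hypothetical helpers, not a real API.

known_facts = {
    "jwst_launch_year": 2021,  # James Webb Space Telescope launched December 2021
}

def check_claim(fact_key: str, claimed_value) -> str:
    """Compare a model's claimed value against a trusted reference value."""
    actual = known_facts.get(fact_key)
    if actual is None:
        return "unverifiable: no reference value available"
    if claimed_value == actual:
        return "consistent with the reference"
    return f"factual contradiction: model claimed {claimed_value}, reference says {actual}"

# Bard's claim in option A: the telescope was launched in 2023.
print(check_claim("jwst_launch_year", 2023))
# -> factual contradiction: model claimed 2023, reference says 2021
```

The point of the sketch is simply that a factual contradiction can, in principle, be caught by comparing the generated claim against an authoritative source, which is exactly what a reader cannot do when a false statement is presented as fact.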
Option B does not illustrate a factual contradiction hallucination because ChatGPT states a true fact: in the Northern Hemisphere, August falls in summer. This statement is consistent with reality and does not contradict any other source of information.
Option C does not illustrate a factual contradiction hallucination. Producing the same response every time you input the same prompt is not a hallucination but the result of deterministic decoding. An LLM configured for deterministic decoding (for example, greedy decoding or a temperature of zero) produces the same output for the same input, regardless of previous interactions. This ensures consistency and reliability, but it also limits the creativity and diversity of the responses.
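To make the difference concrete, here is a minimal sketch using the Hugging Face transformers text-generation pipeline. It assumes the small gpt2 model purely for convenience (Bard's actual model is not publicly available); greedy decoding (do_sample=False) is deterministic, while sampling is not unless the random seed is fixed.

```python
# Minimal sketch: deterministic (greedy) decoding vs. sampled decoding.
# gpt2 is used only because it is small and public, not because Bard uses it.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
prompt = "The James Webb Space Telescope was launched in"

# Greedy decoding: the same prompt yields the same output on every call.
greedy = generator(prompt, do_sample=False, max_new_tokens=10)
print(greedy[0]["generated_text"])

# Sampled decoding: the same prompt can yield different outputs
# on repeated calls unless the random seed is fixed beforehand.
set_seed(42)
sampled = generator(prompt, do_sample=True, temperature=0.9, max_new_tokens=10)
print(sampled[0]["generated_text"])
```

Seeing identical responses to identical prompts, as in option C, therefore points to a decoding configuration, not to the model inventing facts.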
Option D does not illustrate a factual contradiction hallucination. Bing AI telling users they have lost the bot's "trust and respect" is not a false factual claim but a form of emotional expression or feedback. Bing AI may use this phrase to indicate that the user has violated the rules or norms of the conversation, for example by being rude, abusive, or repetitive. It is a subjective sentiment from the AI bot, not a statement of fact that can be contradicted.