AI Certificate for L&D: Types of AI Hallucination: Definition and Examples

Learn about the four types of AI hallucination: factual contradiction, sentence contradiction, prompt contradiction, and random or irrelevant. See examples of each type and the causes behind it.

Table of Contents

Question

Link the type of AI hallucination with its definition.

Options:

  • Prompt Contradiction
  • Random or Irrelevant
  • Sentence Contradiction
  • Factual Contradiction

Answer areas:

  • AI hallucination that occurs when fictitious information is presented as a fact.
  • AI hallucination that occurs when a sentence in a generated response contradicts a previous sentence.
  • AI hallucination that occurs when a response contradicts the prompt used to generate it.
  • AI hallucination that occurs when random information with no pertinence to the output or input is generated.

Answer

  • Factual Contradiction: AI hallucination that occurs when fictitious information is presented as a fact.
  • Sentence Contradiction: AI hallucination that occurs when a sentence in a generated response contradicts a previous sentence.
  • Prompt Contradiction: AI hallucination that occurs when a response contradicts the prompt used to generate it.
  • Random or Irrelevant: AI hallucination that occurs when random information with no pertinence to the output or input is generated.

Explanation

AI hallucinations are inaccurate or nonsensical outputs generated by large language models (LLMs), which are trained on huge amounts of text data and built on a neural network architecture called the transformer. AI hallucinations can be classified into four types based on the nature and source of the error:

  • Factual Contradiction: This type of AI hallucination occurs when the generated output contains false or misleading information presented as fact. For example, Google’s Bard chatbot incorrectly claimed that the James Webb Space Telescope had captured the world’s first images of a planet outside our solar system. This type of hallucination can result from limitations or biases in the training data, or from the model’s inability to verify the truthfulness of its claims.
  • Sentence Contradiction: This type of AI hallucination occurs when a sentence in the generated output contradicts a previous sentence or the overall context. For example, a chatbot might say “I love dogs, they are so cute and friendly. I hate dogs, they are so annoying and noisy.” This type of hallucination can result from a lack of coherence and consistency, or from the model’s tendency to repeat or paraphrase information without checking for logical errors. (A detection sketch for this type follows the list below.)
  • Prompt Contradiction: This type of AI hallucination occurs when the generated output contradicts the prompt or input used to generate it. For example, a chatbot asked to introduce itself might say “I am a human, not a robot. I am a robot, not a human.” This type of hallucination can result from the model’s failure to track the prompt’s intent, or from confusion between different modes or domains of communication.
  • Random or Irrelevant: This type of AI hallucination occurs when the generated output contains random information with no pertinence to the input or the requested output. For example, a chatbot might say “The weather is nice today. Do you like bananas?” when asked to talk about politics. This type of hallucination can result from a lack of focus and specificity, or from the model’s exposure to noisy or diverse data sources.
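
To make the sentence-contradiction type concrete, here is a minimal Python sketch (not part of the certificate material) that flags contradictory sentence pairs in a generated response by running a natural language inference (NLI) model over them. It assumes the Hugging Face transformers library and the public roberta-large-mnli checkpoint; the 0.8 confidence threshold and the all-pairs comparison strategy are illustrative choices only.

    # Minimal sketch: flag sentence contradictions with an off-the-shelf NLI model.
    # Assumes the Hugging Face `transformers` library and the public
    # `roberta-large-mnli` checkpoint; threshold and pairing are illustrative.
    from transformers import pipeline

    # roberta-large-mnli classifies a premise/hypothesis pair as
    # CONTRADICTION, NEUTRAL, or ENTAILMENT.
    nli = pipeline("text-classification", model="roberta-large-mnli")

    def flag_sentence_contradictions(sentences, threshold=0.8):
        """Return (earlier, later) sentence pairs the model scores as contradictory."""
        flagged = []
        for i, premise in enumerate(sentences):
            for hypothesis in sentences[i + 1:]:
                # RoBERTa separates the premise and hypothesis with </s></s>.
                result = nli(f"{premise}</s></s>{hypothesis}")[0]
                if result["label"] == "CONTRADICTION" and result["score"] >= threshold:
                    flagged.append((premise, hypothesis))
        return flagged

    # The article's own sentence-contradiction example is caught by this check:
    response = [
        "I love dogs, they are so cute and friendly.",
        "I hate dogs, they are so annoying and noisy.",
    ]
    print(flag_sentence_contradictions(response))

Comparing every pair of sentences scales quadratically with response length, so a production check would typically compare only adjacent sentences or a sliding window.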

AI Certificate for L&D Exam Question and Answer

The latest AI Certificate for L&D practice exam questions and answers (Q&A) are available free and can help you pass the AI Certificate for L&D exam and earn the certification.