Discover the bias identified in the Know Your Data (KYD) analysis of the Horses or Humans dataset. Learn how poor lighting for darker face tones affected the model’s performance, and why diverse, high-quality data matters in AI development.
Question
In the Know Your Data (KYD) Horses or Humans dataset, what was the bias found in the model?
A. Using models with multiple skin tones
B. Using adequate data to represent gender
C. Using images of faces covered and uncovered
D. Using poor lighting for darker face tones
Answer
D. Using poor lighting for darker face tones
Explanation
In the Know Your Data (KYD) Horses or Humans dataset, the bias found in the model was (D) using poor lighting for darker face tones. The dataset was used to train a model to distinguish between images of horses and images of humans.
On closer examination, however, the images of people with darker skin tones were often more poorly lit than those of people with lighter skin tones. This discrepancy in lighting quality biased the model’s performance: it struggled to accurately classify images of individuals with darker skin tones.
This finding highlights the importance of using diverse, high-quality data when training AI models. Ensuring that the dataset represents a wide range of skin tones, genders, and other attributes is crucial for creating fair and unbiased AI systems. Lighting and overall image quality should also be consistent across all categories to avoid introducing unintended biases; a simple per-group brightness check, sketched below, is one way to catch such an imbalance early.
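Know Your Data itself is an interactive browser tool, but a lighting check of this kind can be approximated offline. The sketch below is a minimal example, not part of the KYD tool: it assumes Pillow and NumPy are installed, and the directory layout and the audit_brightness helper are hypothetical, assuming images have been sorted into one subdirectory per group being compared.

```python
import numpy as np
from PIL import Image
from pathlib import Path

def mean_luminance(path):
    """Average per-pixel luminance (0-255), via Pillow's
    grayscale conversion (ITU-R 601 weights)."""
    img = Image.open(path).convert("L")  # "L" = 8-bit grayscale
    return float(np.asarray(img).mean())

def audit_brightness(root):
    """Print mean luminance per subgroup, assuming images are
    sorted into one subdirectory per subgroup (hypothetical layout)."""
    for group_dir in sorted(Path(root).iterdir()):
        if not group_dir.is_dir():
            continue
        values = [mean_luminance(p) for p in group_dir.iterdir()
                  if p.suffix.lower() in {".png", ".jpg", ".jpeg"}]
        if values:
            print(f"{group_dir.name}: mean {np.mean(values):.1f}, "
                  f"std {np.std(values):.1f}, n={len(values)}")

# Hypothetical usage: a large gap in mean luminance between groups
# signals the kind of lighting imbalance described above.
# audit_brightness("humans_by_skin_tone/")
```

A consistently lower mean luminance for one group than the others would flag exactly the kind of lighting disparity found in this dataset, and would be worth correcting before training.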
By identifying and addressing these biases, AI developers can create more accurate, inclusive, and equitable models that perform well for all users, regardless of their skin tone or other characteristics.
This Google AI for Anyone certification exam practice question and answer (Q&A), with a detailed explanation, is available free to help you pass the Google AI for Anyone exam and earn the Google AI for Anyone certification.