When evaluating fairness in AI, how are data groups typically defined? Understand the correct answer to this IBM Artificial Intelligence Fundamentals certification exam question, with a detailed explanation of how to identify attributes that may cause disparities in outcomes.
Question
When evaluating fairness in AI, how are data groups typically defined?
A. Using a legally mandated list of protected attributes.
B. Based on attributes that may cause disparities in outcomes.
C. By selecting the highest and lowest performers in a sample.
D. Through random sampling of all possible values in the dataset.
Answer
B. Based on attributes that may cause disparities in outcomes.
Explanation
Data groups in AI fairness analysis are defined by attributes believed or shown to lead to unequal outcomes or potential bias. These attributes—often called “protected” or “sensitive”—can include factors like gender, race, age, or socioeconomic background.
Grouping on such attributes allows disparate impact to be detected and corrected, helping ensure that AI models do not produce systematically unfair results.
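As an illustration only (not taken from the certification material), the following is a minimal sketch of how disparate impact might be checked once groups are defined by a sensitive attribute. The dataset, the column names "gender" and "approved", the choice of privileged group, and the 80% threshold are all assumptions made for the example.

```python
import pandas as pd

# Hypothetical loan-approval predictions; "gender" is the sensitive
# attribute used to define data groups, "approved" is the model output (0/1).
df = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [1,   0,   0,   1,   1,   1,   1,   0],
})

# Selection rate (share of positive outcomes) for each group.
selection_rates = df.groupby("gender")["approved"].mean()

# Disparate impact ratio: unprivileged group's rate divided by the
# privileged group's rate. "M" is treated as privileged purely for illustration.
di_ratio = selection_rates["F"] / selection_rates["M"]

print(selection_rates)
print(f"Disparate impact ratio: {di_ratio:.2f}")

# A common rule of thumb (the "80% rule") flags ratios below 0.8
# as potential disparate impact worth investigating.
if di_ratio < 0.8:
    print("Potential disparate impact detected for group 'F'.")
```

The same pattern applies to any attribute suspected of driving unequal outcomes: compute per-group outcome rates, compare them, and investigate groups that fall below the chosen threshold.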
Legally mandated lists (A) provide a starting point, but a thorough fairness assessment extends to any attribute observed to cause disparities, not only those listed by law.
Defining groups by performance extremes (C) or random sampling (D) does not address the crux of fairness analysis, which centers on bias and equitable treatment across relevant attribute groups.
Correctly identifying and evaluating these data groups is essential to developing and testing fair AI systems, as emphasized in IBM’s Artificial Intelligence Fundamentals certification material.
This IBM Artificial Intelligence Fundamentals certification exam practice question and answer (Q&A), with a detailed explanation and reference, is available free and is helpful for passing the Artificial Intelligence Fundamentals graded quizzes and final assessment and for earning the IBM Artificial Intelligence Fundamentals digital credential and badge.