Learn about Microsoft’s principle of fairness in responsible AI, which states that AI systems should not reflect biases from the data sets used to train them. Understand how this principle helps ensure AI is developed and used ethically.
Question
According to which of Microsoft’s principles of responsible AI should AI systems not reflect biases from the data sets used to train them? Select the correct option.
A. Inclusiveness
B. Privacy and security
C. Fairness
D. Accountability
Answer
According to Microsoft’s principles of responsible AI, the correct answer is:
C. Fairness
Explanation
The principle of fairness states that AI systems should treat all people fairly and not discriminate. A key aspect of this is that AI systems should avoid reflecting or amplifying biases that may exist in the data used to train them. Training data sets can contain historical biases and unfairness (e.g., against certain genders or races). If AI models learn these biases, they can perpetuate unfair treatment and discrimination at scale.
To uphold the principle of fairness, significant care must be taken when selecting and preparing training data for AI systems. The data should be reviewed carefully to identify potential biases, and where biases exist it should be adjusted to mitigate them. Techniques such as oversampling underrepresented groups can help create a more balanced data set, as sketched below.
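As a rough illustration (not exam material), here is a minimal Python sketch of oversampling, assuming a hypothetical pandas DataFrame with a sensitive attribute column named "group". It simply resamples smaller groups, with replacement, up to the size of the largest group.

```python
import pandas as pd

# Hypothetical training data with a sensitive attribute column named "group".
df = pd.DataFrame({
    "feature": [0.2, 0.5, 0.1, 0.9, 0.4, 0.7, 0.3, 0.8],
    "group":   ["A", "A", "A", "A", "A", "A", "B", "B"],
    "label":   [1, 0, 1, 0, 1, 0, 1, 0],
})

# Count how many rows each group has and find the largest group size.
counts = df["group"].value_counts()
target_size = counts.max()

# Oversample each underrepresented group (with replacement) up to the
# size of the largest group, then recombine into a balanced data set.
balanced_parts = [
    df[df["group"] == g].sample(n=target_size, replace=True, random_state=42)
    for g in counts.index
]
balanced_df = pd.concat(balanced_parts, ignore_index=True)

print(balanced_df["group"].value_counts())
```

Oversampling is only one option; in practice, collecting more representative data or reweighting examples may be preferable, since duplicating rows can cause models to overfit the minority group.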
During model training and evaluation, fairness metrics should be measured to assess if the model is treating different groups equitably. Unfair models should be adjusted or discarded. AI systems should also be monitored for fairness once deployed.
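To make "fairness metrics" concrete, the following sketch (again, illustrative only, with made-up predictions and group labels) computes per-group selection rates and the demographic parity difference, one common way to check whether a model grants favourable outcomes at similar rates across groups.

```python
import numpy as np

# Hypothetical model predictions (1 = favourable outcome) and group labels.
y_pred = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

# Selection rate = share of favourable predictions within each group.
selection_rates = {
    g: y_pred[groups == g].mean() for g in np.unique(groups)
}

# Demographic parity difference: gap between the highest and lowest
# selection rates. A value near 0 suggests groups are treated similarly.
dp_difference = max(selection_rates.values()) - min(selection_rates.values())

print(selection_rates)   # {'A': 0.8, 'B': 0.4}
print(dp_difference)     # 0.4 -> a gap this large would warrant investigation
```

Demographic parity is just one of several fairness metrics (others include equalized odds and equal opportunity); which metric is appropriate depends on the scenario and should be chosen with care.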
By striving for AI systems that reflect the principle of fairness, we can work to ensure that AI is a positive force that provides equitable benefits for all. Adhering to this principle helps prevent AI from causing or exacerbating harm and discrimination.
The other Microsoft principles of responsible AI are:
- Inclusiveness
- Reliability & Safety
- Transparency
- Privacy & Security
- Accountability
But the principle most directly related to preventing AI systems from reflecting biases in training data is Fairness.
In short, the principle of fairness in Microsoft’s responsible AI guidelines is the one that addresses AI systems reflecting biases from the data sets used to train them. It emphasizes that AI should treat all users and groups fairly, avoiding discrimination and unfair outcomes.