Discover the key principles of responsible AI and learn which practice contradicts them, as tested on the Google AI for Anyone certification exam.
Question
Which of the following is NOT a responsible AI practice?
A. Avoid creating or reinforcing unfair bias, particularly bias related to sensitive characteristics
B. Take into account a broad range of social and economic factors, and proceed where the overall likely benefits exceed the risks
C. Build AI technologies such that they will not be subject to human direction and control to decrease bias
D. Provide appropriate opportunities for feedback, relevant explanations, and appeal
Answer
C. Build AI technologies such that they will not be subject to human direction and control to decrease bias
Explanation
Building AI systems that are not subject to human oversight and control is contrary to responsible AI practices. Responsible AI development requires that humans remain in control of AI systems so they can verify the systems are operating as intended, monitor for issues such as unfair bias, and intervene when problems arise.
The other answer choices are all examples of responsible AI practices:
A) Avoiding the creation or reinforcement of unfair bias, especially bias related to sensitive attributes such as race, gender, and age, is critical for responsible AI.
B) Weighing a broad range of social and economic factors, and proceeding with AI development only where the likely societal benefits outweigh the risks.
D) Providing mechanisms for feedback, explanations of AI decisions, and ways to appeal or contest AI outputs.
In summary, building AI systems to operate independently of human control is not a responsible practice, because it removes the ability to oversee the AI and make corrections when issues occur. Responsible AI keeps humans in the loop.
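To make "humans in the loop" concrete, here is a minimal sketch of one common pattern: a system that acts automatically only on high-confidence predictions and escalates everything else to a human reviewer. All names here (the model stub, the confidence threshold, the escalation label) are hypothetical illustrations, not part of any Google API or the exam material.

```python
# Minimal human-in-the-loop sketch (illustrative only; the model,
# threshold, and labels below are hypothetical assumptions).
# Low-confidence predictions are routed to a human reviewer
# instead of being acted on automatically.

from dataclasses import dataclass


@dataclass
class Prediction:
    label: str
    confidence: float


def predict(text: str) -> Prediction:
    # Stand-in for a real model; returns a fixed low-confidence result.
    return Prediction(label="approve", confidence=0.62)


CONFIDENCE_THRESHOLD = 0.90  # assumed policy value, set by humans


def decide(text: str) -> str:
    pred = predict(text)
    if pred.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto:{pred.label}"
    # Keep a human in control: defer uncertain cases for review,
    # where the decision can be explained, corrected, or appealed.
    return "escalate:human_review"


if __name__ == "__main__":
    print(decide("loan application #1234"))  # -> escalate:human_review
```

The design choice this sketch illustrates is exactly why option C fails: the threshold, the escalation path, and the final say all remain under human direction, which is what allows feedback, explanation, and appeal (option D) to work at all.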