
AI-900: Building an Ethical AI App: Key Principles for Responsible AI Implementation

Discover the essential principles for responsible AI when building an AI-based app. Learn how to ensure ethical practices by incorporating AI model validation into the software review process and establishing a risk governance committee, and why concealing the use of AI-based algorithms for automated decision making undermines transparency. Create an app that prioritizes responsible AI and earns user trust.

Question

You are building an AI-based app. You need to ensure that the app uses the principles for responsible AI. Which two principles should you follow? (Each correct answer presents part of the solution. Choose two.)

A. Implement an Agile software development methodology.
B. Implement a process of AI model validation as part of the software review process.
C. Establish a risk governance committee that includes members of the legal team, members of the risk management team, and a privacy officer.
D. Prevent the disclosure of the use of AI-based algorithms for automated decision making.

Answer

B. Implement a process of AI model validation as part of the software review process.
C. Establish a risk governance committee that includes members of the legal team, members of the risk management team, and a privacy officer.

Explanation

The correct answers are B. Implement a process of AI model validation as part of the software review process and C. Establish a risk governance committee that includes members of the legal team, members of the risk management team, and a privacy officer. These two principles are aligned with the responsible AI principles of reliability and safety, transparency, accountability, and privacy and security. Let me explain why:

  • AI model validation is the process of confirming that an AI model meets the desired performance and quality standards and adheres to ethical and legal requirements. It can involve techniques such as testing, debugging, auditing, monitoring, and documenting the model, and it helps ensure that the model is reliable, safe, fair, transparent, and accountable. Making AI model validation part of the software review process helps identify and mitigate potential risks or issues before the model is deployed to end users (a minimal sketch of such a validation gate follows this list). This principle is supported by the Microsoft Responsible AI Standard, which states that “AI systems should be designed, developed, and deployed in a manner that ensures they are reliable and safe, and that they operate in a manner that is consistent with the expectations and needs of their users and other stakeholders”.
  • A risk governance committee is a group of people responsible for overseeing and managing the risks associated with the AI-based app. It helps ensure that the app complies with relevant laws, regulations, policies, and ethical standards, and that it respects the rights and interests of end users and other stakeholders. The committee can also establish and communicate the goals, values, and principles of the app and monitor and evaluate its performance and impact. It should include members of the legal team, members of the risk management team, and a privacy officer, because they have the expertise and authority to address the legal, ethical, and privacy aspects of the app. This principle is supported by Responsible AI Principles and Approach | Microsoft AI, which states that “Microsoft is committed to ensuring that our AI systems are designed and deployed in ways that warrant people’s trust. To do this, we have established a set of Responsible AI principles that guide our work: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability”.
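As an illustration of the first point, here is a minimal, hypothetical Python sketch of a validation gate that could run as part of a software review pipeline. The thresholds, the group labels, and the validate_model helper are assumptions made for this example, not part of any Microsoft standard or prescribed process.

# A minimal, hypothetical sketch of an AI model validation gate for a review
# pipeline. Thresholds, group labels, and the `model` object are illustrative
# assumptions, not a prescribed Microsoft process.

def _accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def validate_model(model, X_test, y_test, groups, min_accuracy=0.90, max_gap=0.05):
    """Run simple reliability and fairness checks; return (passed, report)."""
    preds = model.predict(X_test)
    overall = _accuracy(y_test, preds)

    # Fairness check: accuracy gap across a sensitive attribute (e.g. an age band).
    per_group = {
        g: _accuracy(
            [y for y, grp in zip(y_test, groups) if grp == g],
            [p for p, grp in zip(preds, groups) if grp == g],
        )
        for g in set(groups)
    }
    gap = max(per_group.values()) - min(per_group.values())

    report = {
        "overall_accuracy": overall,
        "per_group_accuracy": per_group,
        "accuracy_gap": gap,
    }
    passed = overall >= min_accuracy and gap <= max_gap
    return passed, report

# In a review pipeline, a failed validation could block deployment, e.g.:
# passed, report = validate_model(model, X_test, y_test, groups)
# assert passed, f"Model failed responsible AI validation: {report}"

In practice the checks, metrics, and thresholds would be defined by the risk governance committee and documented alongside the model, so the review process has an auditable record of why a model was or was not approved.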

The other two options are incorrect because:

  • Implementing an Agile software development methodology is not a principle for responsible AI; it is a general software development approach that emphasizes collaboration, feedback, and adaptation. While Agile development is compatible with responsible AI practices, it is neither necessary nor sufficient to ensure that the AI-based app follows the responsible AI principles.
  • Preventing the disclosure of the use of AI-based algorithms for automated decision making is not a principle for responsible AI; it violates the principle of transparency. Transparency means that the AI-based app should be clear and understandable to end users and other stakeholders, especially when it affects their rights, interests, or well-being. Hiding the use of AI-based algorithms for automated decision making undermines the trust and confidence of end users and other stakeholders and prevents them from exercising their rights to information, explanation, and appeal (a minimal sketch of such a disclosure follows this list). This point is supported by the 13 Principles for Using AI Responsibly, which states that “AI systems should be transparent and explainable, enabling users to understand how and why the systems make certain decisions or recommendations”.
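For illustration, here is a minimal, hypothetical Python sketch of how an app could surface a transparency disclosure alongside an automated decision. The function name, response fields, and wording are assumptions made for this example, not a prescribed format.

# A minimal, hypothetical sketch of disclosing AI involvement in an
# automated decision. Field names and wording are illustrative assumptions.

def automated_decision_response(decision, confidence):
    """Wrap an automated decision with a transparency disclosure for the user."""
    return {
        "decision": decision,
        "confidence": confidence,
        "disclosure": (
            "This decision was made automatically by an AI-based algorithm. "
            "You can request an explanation or a human review."
        ),
    }

# Example usage:
# automated_decision_response("loan_approved", 0.93)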

References

Microsoft Docs > Azure > Cloud Adoption Framework > Adopt > Innovate > Responsible and trusted AI
Microsoft Docs > Implications of responsible AI – Practical guide

Free Microsoft Azure AI Fundamentals AI-900 certification exam practice questions and answers (Q&A) with detailed explanations and references, helpful for passing the AI-900 exam and earning the Microsoft Azure AI Fundamentals certification.


Alex Lim is a certified IT Technical Support Architect with over 15 years of experience in designing, implementing, and troubleshooting complex IT systems and networks. He has worked for leading IT companies, such as Microsoft, IBM, and Cisco, providing technical support and solutions to clients across various industries and sectors. Alex has a bachelor’s degree in computer science from the National University of Singapore and a master’s degree in information security from the Massachusetts Institute of Technology. He is also the author of several best-selling books on IT technical support, such as The IT Technical Support Handbook and Troubleshooting IT Systems and Networks. Alex lives in Bandar, Johore, Malaysia with his wife and two children. You can reach him at [email protected] or follow him on Website | Twitter | Facebook
