The latest Microsoft AI-900 Azure AI Fundamentals certification practice exam questions and answers (Q&A) are available free of charge to help you pass the Microsoft AI-900 Azure AI Fundamentals exam and earn the Microsoft AI-900 Azure AI Fundamentals certification.
Question 591
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
You have a database that contains a list of employees and their photos.
You are tagging new photos of the employees.
Statement 1: The Face service can be used to perform facial recognition for employees.
Statement 2: The Face service will be more accurate if you provide more sample photos of each employee from different angles.
Statement 3: If an employee is wearing sunglasses, the Face service will always fail to recognize the employee.
Answer
Statement 1: The Face service can be used to perform facial recognition for employees. Yes
Statement 2: The Face service will be more accurate if you provide more sample photos of each employee from different angles. Yes
Statement 3: If an employee is wearing sunglasses, the Face service will always fail to recognize the employee. No
Question 592
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Statement 1: The Custom Vision service can be used to detect objects in an image.
Statement 2: The Custom Vision service requires that you provide your own data to train the model.
Statement 3: The Custom Vision service can be used to analyze video files.
Answer
Statement 1: The Custom Vision service can be used to detect objects in an image. Yes
Statement 2: The Custom Vision service requires that you provide your own data to train the model. Yes
Statement 3: The Custom Vision service can be used to analyze video files. No
Explanation
<box 1>: Yes
Custom Vision functionality can be divided into two features. Image classification applies one or more labels to an image. Object detection is similar, but it also returns the coordinates in the image where the applied label(s) can be found.
<box 2>: Yes
The Custom Vision service uses a machine learning algorithm to analyze images. You, the developer, submit groups of images that feature and lack the characteristics in question, and you label the images yourself at the time of submission. The algorithm then trains on this data and calculates its own accuracy by testing itself on those same images.
<box 3>: No
The Custom Vision service can be used only on image files, not on video files.
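The sketch below is purely illustrative (it does not call any Azure SDK and the field names are hypothetical); it shows the difference described above between an image classification result, which returns labels only, and an object detection result, which also returns coordinates.

```python
# Illustrative sketch only: hypothetical result shapes contrasting image
# classification with object detection in a Custom Vision-style workflow.

# Image classification: one or more labels, each with a confidence score.
classification_result = [
    {"tag": "banana", "probability": 0.94},
    {"tag": "fruit bowl", "probability": 0.61},
]

# Object detection: the same kind of labels, plus a bounding box giving the
# coordinates (normalized 0-1) where each labeled object was found.
detection_result = [
    {"tag": "banana", "probability": 0.92,
     "bounding_box": {"left": 0.12, "top": 0.30, "width": 0.25, "height": 0.40}},
]

for prediction in detection_result:
    box = prediction["bounding_box"]
    print(f'{prediction["tag"]}: {prediction["probability"]:.0%} at '
          f'({box["left"]}, {box["top"]})')
```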
Question 593
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Statement 1: A restaurant can use a chatbot to empower customers to make reservations by using a website or an app.
Statement 2: A restaurant can use a chatbot to answer inquiries about business hours from a webpage.
Statement 3: A restaurant can use a chatbot to automate responses to customer reviews on an external website.
Answer
Statement 1: A restaurant can use a chatbot to empower customers to make reservations by using a website or an app. Yes
Statement 2: A restaurant can use a chatbot to answer inquiries about business hours from a webpage. Yes
Statement 3: A restaurant can use a chatbot to automate responses to customer reviews on an external website. Yes
Question 594
What is the ideal value of AUC?
Answer
1.0
Explanation
Area Under Curve (AUC) is a performance metric for classification models. For binary classification models, an AUC value of 0.5 represents random predictions: the model's predictions are no better than randomly selecting “Yes” or “No”. If the AUC value is below 0.5, the model performs worse than random. Ideally, the best-fitted model has an AUC of 1.0; such an ideal model predicts all values correctly.
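A minimal sketch, assuming scikit-learn is available, illustrating the two reference points from the explanation: a model that ranks every positive above every negative scores the ideal 1.0, while scores with no discriminating power yield 0.5.

```python
# A minimal sketch (assumes scikit-learn) of the AUC values described above.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 0, 1]

perfect_scores = [0.1, 0.2, 0.9, 0.8, 0.3, 0.7, 0.2, 0.95]  # every positive ranked above every negative
random_scores = [0.5] * len(y_true)                          # no discrimination at all

print(roc_auc_score(y_true, perfect_scores))  # 1.0 -> the ideal value
print(roc_auc_score(y_true, random_scores))   # 0.5 -> equivalent to random predictions
```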
Question 595
What are the three main authoring tools on the Azure ML Studio home screen?
Answer
Designer, Automated ML, Notebooks
Question 596
You created a classification model with four possible classes. What will be the size of the confusion matrix?
Answer
4×4
Explanation
The confusion matrix provides a tabulated view of predicted versus actual values for each class. If we are predicting four classes, the confusion matrix will be 4×4.
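A minimal sketch, assuming scikit-learn, confirming that a four-class problem produces a 4×4 matrix (rows are actual classes, columns are predicted classes in scikit-learn's convention); the class names are made up for illustration.

```python
# A minimal sketch (assumes scikit-learn): four classes -> a 4x4 confusion matrix.
from sklearn.metrics import confusion_matrix

classes = ["cat", "dog", "horse", "sheep"]
y_actual = ["cat", "dog", "dog", "horse", "sheep", "cat", "horse", "sheep"]
y_predicted = ["cat", "dog", "cat", "horse", "sheep", "cat", "sheep", "sheep"]

matrix = confusion_matrix(y_actual, y_predicted, labels=classes)
print(matrix.shape)  # (4, 4)
print(matrix)        # rows = actual classes, columns = predicted classes
```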
Question 597
What is the name of the responsible AI principle that directs AI solutions design to include resistance to harmful manipulation?
Answer
Reliability and Safety.
Explanation
Microsoft recognizes six principles of responsible AI: Fairness, Reliability and safety, Privacy and security, Transparency, Inclusiveness and Accountability.The principle of Reliability and safety directs AI solutions to respond safely to non-standard situations and to resist harmful manipulations.
Question 598
What are the three metrics that help evaluate Custom Vision model performance?
Answer
Recall, Average Precision (AP), Precision
Explanation
Custom Vision is one of the Computer Vision services; it helps you create your own computer vision model. There are three main performance metrics for Custom Vision models: Precision, Recall, and Average Precision (AP).
Precision is the percentage of the model's class predictions that are correct. For example, if the model predicts that ten images are bananas but only seven of them actually are, the precision is 70%.
Recall is the percentage of actual class instances that the model correctly identifies. For example, if there are ten apple images and the model identifies only eight, the recall is 80%.
Average Precision (AP) is a combined metric that summarizes both Precision and Recall.
Accuracy is a classification model metric and is not used for Custom Vision model performance assessment.
Number of Points is a clustering model metric and is not used for Custom Vision model performance assessment.
Mean Absolute Error (MAE) is a regression model metric and is not used for Custom Vision model performance assessment.
Question 599
What metrics does Azure ML use for the evaluation of regression models?
Answer
Coefficient of Determination, Mean Absolute Error (MAE), Root Mean Squared Error (RMSE)
Explanation
Azure ML uses model evaluation to measure a trained model's accuracy. For regression models, the Evaluate Model module provides the following five metrics: Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Relative Absolute Error (RAE), Relative Squared Error (RSE), and Coefficient of Determination (R2).
- Root Mean Squared Error (RMSE) is a regression model evaluation metric. It is the square root of the mean of the squared errors between predicted and actual values.
- Mean Absolute Error (MAE) is a regression model evaluation metric. It produces a score that measures how close the model's predictions are to the actual values; the lower the score, the better the model performance.
- Coefficient of Determination, or R2, is a regression model evaluation metric. It reflects the model performance: the closer R2 is to 1, the better the model fits the data.
- Accuracy is a classification model evaluation metric, not a regression model evaluation metric.
- Number of Points is a clustering model evaluation metric, not a regression model evaluation metric.
- Combined Evaluation is a clustering model evaluation metric, not a regression model evaluation metric.
- Recall is a classification model evaluation metric, not a regression model evaluation metric.
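As shown in the sketch below, the three metrics named in the answer can be computed directly, assuming scikit-learn is available; the actual and predicted values are made up for illustration.

```python
# A minimal sketch (assumes scikit-learn) computing MAE, RMSE, and R2
# for a toy set of actual vs. predicted values.
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_actual = [3.0, 5.0, 2.5, 7.0, 4.5]
y_predicted = [2.8, 5.4, 2.9, 6.5, 4.2]

mae = mean_absolute_error(y_actual, y_predicted)
rmse = mean_squared_error(y_actual, y_predicted) ** 0.5  # square root of the mean squared error
r2 = r2_score(y_actual, y_predicted)                     # closer to 1 means a better fit

print(f"MAE:  {mae:.3f}")
print(f"RMSE: {rmse:.3f}")
print(f"R2:   {r2:.3f}")
```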
Question 600
You need to create a language model. What are the essential elements that you need to supply as training data for your language model?
Answer
Utterances, Entities, Intents
Explanation
For language model training, we need to provide three key elements: Entities, Intents, and Utterances. We can define them by using the Azure Cognitive Services Language Understanding (LUIS) portal.
- Entity is the word or phrase that is the focus of the utterance, such as the word “lights” in the utterance “Turn the lights on.”
- Intent is the action or task that the user wants to execute. It is reflected in the utterance as a goal or purpose. We can define the intent as “TurnOn” for the utterance “Turn the lights on.”
- Utterance is the user's input that your model needs to interpret, such as “Turn the lights on” or “Turn on the lights.”
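An illustrative sketch only (this is not the exact LUIS portal schema; the field names are hypothetical) showing how the three elements relate for the example utterance used above.

```python
# Illustrative sketch: how an utterance, its intent, and its entity fit together.
labeled_example = {
    "utterance": "Turn the lights on",  # the user input the model must interpret
    "intent": "TurnOn",                  # the task or goal expressed by the utterance
    "entities": [
        {"entity": "device", "value": "lights"},  # the word/phrase the utterance is about
    ],
}

print(f'"{labeled_example["utterance"]}" -> intent {labeled_example["intent"]}, '
      f'entity {labeled_example["entities"][0]["value"]}')
```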