Master the key visual features—objects, captions, and text extraction—needed for Azure AI image analysis. Ace the AI-102 exam with this detailed guide on configuring applications for advanced image processing.
Question
Your organization, Xerigon Corporation, is developing a social-media application that uses Azure AI services to analyze user-uploaded images. You need to configure the application to identify objects, generate captions, and extract text from the images. Below is a snippet of C# code to analyze an image using the Azure AI services:
using Azure;
using Azure.AI.Vision.ImageAnalysis;

ImageAnalysisClient client = new ImageAnalysisClient(
    new Uri(Environment.GetEnvironmentVariable("ENDPOINT")),
    new AzureKeyCredential(Environment.GetEnvironmentVariable("KEY")));

ImageAnalysisResult result = client.Analyze(
    new Uri("<url>"),
    VisualFeatures.__?????__ | VisualFeatures.__?????__ | VisualFeatures.__?????__,
    new ImageAnalysisOptions { GenderNeutralCaption = true });
Which of the following visual features should you select to configure the application to identify objects, generate captions, and extract text from the images?
A. Tags
B. Read
C. SmartCrops
D. People
E. Caption
F. Objects
Answer
B. Read
E. Caption
F. Objects
Explanation
VisualFeatures.Caption, VisualFeatures.Objects, and VisualFeatures.Read are the correct visual features for the given scenario: VisualFeatures.Caption for generating a natural language caption, VisualFeatures.Objects for identifying objects, and VisualFeatures.Read for extracting readable text.
using Azure;
using Azure.AI.Vision.ImageAnalysis;

ImageAnalysisClient client = new ImageAnalysisClient(
    new Uri(Environment.GetEnvironmentVariable("ENDPOINT")),
    new AzureKeyCredential(Environment.GetEnvironmentVariable("KEY")));

ImageAnalysisResult result = client.Analyze(
    new Uri("<url>"),
    VisualFeatures.Caption | VisualFeatures.Objects | VisualFeatures.Read,
    new ImageAnalysisOptions { GenderNeutralCaption = true });
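The requested features can then be read from the ImageAnalysisResult. Below is a minimal sketch, assuming the result model of the Azure.AI.Vision.ImageAnalysis 1.0 SDK (property names such as Caption.Text, Objects.Values, and Read.Blocks should be verified against the SDK version you target):

// Caption: a single natural-language description of the image.
Console.WriteLine($"Caption: {result.Caption.Text} (confidence {result.Caption.Confidence:F2})");

// Objects: each detected object comes with tags and a bounding box.
foreach (DetectedObject detectedObject in result.Objects.Values)
{
    Console.WriteLine($"Object: {detectedObject.Tags[0].Name} at {detectedObject.BoundingBox}");
}

// Read: extracted text is organized into blocks of lines.
foreach (DetectedTextBlock block in result.Read.Blocks)
{
    foreach (DetectedTextLine line in block.Lines)
    {
        Console.WriteLine($"Text: {line.Text}");
    }
}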
The other options are incorrect because they do not fulfill the given requirements.
Below are the visual features available when using the Azure AI Vision Image Analysis service:
- VisualFeatures.Tags helps identify tags related to the image, such as objects, scenery, settings, and actions.
- VisualFeatures.Objects provides the bounding box for each object detected in the image.
- VisualFeatures.Caption generates a natural language caption describing the image.
- VisualFeatures.DenseCaptions generates more detailed, region-level captions for up to 10 areas of the image, including one for the whole image.
- VisualFeatures.People delivers the bounding box for detected individuals.
- VisualFeatures.SmartCrops returns the bounding box of a suggested crop region for each aspect ratio you specify, preserving the main subject of the image (see the sketch after this list).
- VisualFeatures.Read extracts readable text from the image.
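As a rough sketch of how smart crops might be requested together with an aspect ratio, reusing the client from above (the SmartCropsAspectRatios option, the 1.78 ratio, and the CropRegion result type are illustrative assumptions and worth double-checking against your SDK version):

// Ask the service for a suggested 16:9 crop region (the 1.78 ratio is only an example).
ImageAnalysisResult cropResult = client.Analyze(
    new Uri("<url>"),
    VisualFeatures.SmartCrops,
    new ImageAnalysisOptions { SmartCropsAspectRatios = new float[] { 1.78f } });

foreach (CropRegion region in cropResult.SmartCrops.Values)
{
    Console.WriteLine($"Aspect ratio {region.AspectRatio}: crop at {region.BoundingBox}");
}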
The following shows the proper syntax for combining all of the available visual features when calling the Image Analysis 4.0 API:
VisualFeatures visualFeatures =
VisualFeatures.Caption |
VisualFeatures.DenseCaptions |
VisualFeatures.Objects |
VisualFeatures.Read |
VisualFeatures.Tags |
VisualFeatures.People |
VisualFeatures.SmartCrops;
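A combined value like this can then be passed straight to Analyze in a single call; a brief sketch reusing the client and the <url> placeholder from above:

// Request every available visual feature in one round trip.
ImageAnalysisResult fullResult = client.Analyze(
    new Uri("<url>"),
    visualFeatures,
    new ImageAnalysisOptions { GenderNeutralCaption = true });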