Learn how to implement Azure Computer Vision in C# to analyze images for tags, objects, and descriptions. Perfect for AI-102 exam prep and real-world AI projects.
Question
Your organization, Xerigon Corporation, is developing an AI application that uses computer vision to analyze images. You plan to use the Azure Computer Vision service to analyze images and extract information such as tags, objects, and descriptions.
You are writing code for the AI application using C#.
You use the following code to call the Azure AI Vision service to analyze the image.
// Authenticate Azure AI Vision client
ApiKeyServiceClientCredentials credentials = new ApiKeyServiceClientCredentials(cogSvcKey);
cvClient = new ComputerVisionClient(credentials)
{
    Endpoint = cogSvcEndpoint
};
Which is the correct C# code snippet to use in your AI application to identify the features to be retrieved, such as tags, objects, and descriptions, from the images?
A.
features = [VisualFeatureTypes.description,
VisualFeatureTypes.tags,
VisualFeatureTypes.categories,
VisualFeatureTypes.brands,
VisualFeatureTypes.objects,
VisualFeatureTypes.adult]
B.
using (var imageData = File.OpenRead(imageFile))
{
    var analysis = await cvClient.AnalyzeImageInStreamAsync(imageData, features);
    foreach (var caption in analysis.Description.Captions)
    {
        Console.WriteLine($"Description: {caption.Text} (confidence: {caption.Confidence.ToString("P")})");
    }
}
C.
with open(image_file, mode="rb") as image_data:
    analysis = cv_client.analyze_image_in_stream(image_data, features)
    for caption in analysis.description.captions:
        print("Description: '{}' (confidence: {:.2f}%)".format(caption.text, caption.confidence * 100))
D.
List<VisualFeatureTypes?> features = new List<VisualFeatureTypes?>()
{
VisualFeatureTypes.Description,
VisualFeatureTypes.Tags,
VisualFeatureTypes.Categories,
VisualFeatureTypes.Brands,
VisualFeatureTypes.Objects,
VisualFeatureTypes.Adult
};
Answer
D.
List<VisualFeatureTypes?> features = new List<VisualFeatureTypes?>()
{
VisualFeatureTypes.Description,
VisualFeatureTypes.Tags,
VisualFeatureTypes.Categories,
VisualFeatureTypes.Brands,
VisualFeatureTypes.Objects,
VisualFeatureTypes.Adult
};
Explanation
Below is the correct C# code snippet you would use in your application to extract information on tags, objects, and descriptions from the images.
List<VisualFeatureTypes?> features = new List<VisualFeatureTypes?>()
{
VisualFeatureTypes.Description,
VisualFeatureTypes.Tags,
VisualFeatureTypes.Categories,
VisualFeatureTypes.Brands,
VisualFeatureTypes.Objects,
VisualFeatureTypes.Adult
};
The above code uses a List&lt;VisualFeatureTypes?&gt; collection initializer to specify the features to be retrieved.
The code snippet below is not C# but Python. It is used when developing AI applications with the Python SDK.
features = [VisualFeatureTypes.description,
VisualFeatureTypes.tags,
VisualFeatureTypes.categories,
VisualFeatureTypes.brands,
VisualFeatureTypes.objects,
VisualFeatureTypes.adult]
The C# code snippet below is not correct for identifying the features to be retrieved; it is the code that performs the image analysis itself. You could add further code to read tags, categories, objects, or moderation ratings from the analysis result. VisualFeatureTypes is an enumeration that defines a set of named constants representing the types of visual features that can be analyzed or extracted from an image, such as tags, categories, and brands. The feature list is defined separately and passed to the analysis call; the enumeration values should not be embedded in the image analysis code.
// Get image analysis
using (var imageData = File.OpenRead(imageFile))
{
    var analysis = await cvClient.AnalyzeImageInStreamAsync(imageData, features);
    // Get image captions
    foreach (var caption in analysis.Description.Captions)
    {
        Console.WriteLine($"Description: {caption.Text} (confidence: {caption.Confidence.ToString("P")})");
    }
}
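As a sketch of the "additional code" mentioned above, the tags in the same analysis result could be read as follows. This is a hypothetical continuation, assuming the analysis variable from the block above and the property names of the Microsoft.Azure.CognitiveServices.Vision.ComputerVision SDK (ImageAnalysis.Tags, with Name and Confidence on each tag):

```csharp
// Hypothetical continuation: list the tags from the analysis result.
// Assumes `analysis` is the ImageAnalysis object returned above and that
// VisualFeatureTypes.Tags was included in the features list.
foreach (var tag in analysis.Tags)
{
    // Each tag carries a Name and a Confidence score between 0 and 1.
    Console.WriteLine($"Tag: {tag.Name} (confidence: {tag.Confidence:P})");
}
```

This mirrors the caption loop: each requested VisualFeatureTypes value populates a corresponding property on the analysis result.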
The code below is used in Python for image analysis and is not valid C#.
with open(image_file, mode="rb") as image_data:
    analysis = cv_client.analyze_image_in_stream(image_data, features)
    for caption in analysis.description.captions:
        print("Description: '{}' (confidence: {:.2f}%)".format(caption.text, caption.confidence * 100))
This Microsoft Azure AI Engineer Associate (AI-102) practice question, answer, and detailed explanation are provided free to help you prepare for and pass the AI-102 certification exam.