AI-102: Which Azure Endpoint Should You Use for Spanish Image Tagging?

Struggling with the AI-102 exam? Learn how to configure Azure endpoints for Spanish image tagging using Azure AI Vision. Master this critical skill to ace your certification.

Question

Your organization, Xerigon Corporation, is developing a social-media application that uses Azure AI services to analyze user-uploaded images. You want to configure the application to label the images with a detailed list of words in Spanish that relate to their content.

Which URL should you use for the Azure service endpoint?

A. <endpoint>/contentmoderator/moderate/v1.0/ProcessImage&language=es
B. <endpoint>/vision/v4.0/analyze?visualFeatures=Read&language=es
C. <endpoint>/vision/v4.0/analyze?visualFeatures=Tags&language=es
D. <endpoint>/contentmoderator/moderate/v1.0/Read&language=es
E. <endpoint>/vision/v4.0/analyze?visualFeatures=Objects&language=es

Answer

C. <endpoint>/vision/v4.0/analyze?visualFeatures=Tags&language=es

Explanation

You would use the following URL:

<endpoint>/vision/v4.0/analyze?visualFeatures=Tags&language=es

This URL selects the Tags visual feature of the Azure AI Vision Image Analysis API and sets the language parameter to Spanish (es). The visual features available for image analysis are listed below, followed by a sketch of calling the endpoint:

  • VisualFeatures.Tags – returns content tags for the image, covering objects, scenery, setting, and actions.
  • VisualFeatures.Objects – provides the bounding box for each object detected in the image.
  • VisualFeatures.Caption – generates a natural-language caption describing the image.
  • VisualFeatures.DenseCaptions – generates detailed captions for individual regions detected in the image.
  • VisualFeatures.People – returns the bounding box for each person detected in the image.
  • VisualFeatures.SmartCrops – returns the bounding box of a crop at the requested aspect ratio, preserving the image's area of interest.
  • VisualFeatures.Read – extracts readable text from the image.
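
To make the call concrete, below is a minimal sketch of a tagging request in Python using the requests library. The endpoint, key, and image URL are placeholders, and the v4.0 path follows the exam's wording; check the API version your Azure AI Vision resource actually exposes before relying on this in production.

import requests

# Placeholder values -- substitute your own Azure AI Vision resource details.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-key>"

# The analyze URL from the answer: the Tags feature with Spanish tag names.
url = f"{ENDPOINT}/vision/v4.0/analyze"
params = {"visualFeatures": "Tags", "language": "es"}
headers = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}

# Analyze a publicly reachable image by URL; a binary upload with
# Content-Type: application/octet-stream also works.
body = {"url": "https://example.com/user-upload.jpg"}

response = requests.post(url, params=params, headers=headers, json=body)
response.raise_for_status()

# Print each Spanish tag with its confidence score.
for tag in response.json().get("tags", []):
    print(f"{tag['name']}: {tag['confidence']:.2f}")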

You would not use the <endpoint>/vision/v4.0/analyze?visualFeatures=Read&language=es URL. This URL uses the Read feature, which extracts readable text from the image rather than generating descriptive tags.

You would not use the <endpoint>/vision/v4.0/analyze?visualFeatures=Objects&language=es URL. This URL uses the Objects feature, which detects objects in an image and returns a bounding box for each one; it does not produce the detailed word list that tagging does.
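
The contrast is easiest to see in the response shapes. The snippets below are illustrative, modeled on the documented v3.2 GA response format rather than copied from live output: Tags yields a flat list of words with confidence scores, while Objects yields named regions with pixel coordinates.

# Illustrative response shapes (modeled on the v3.2 GA format, not live output).

# visualFeatures=Tags&language=es -- a detailed list of Spanish words:
tags_response = {
    "tags": [
        {"name": "perro", "confidence": 0.99},
        {"name": "césped", "confidence": 0.95},
        {"name": "al aire libre", "confidence": 0.92},
    ]
}

# visualFeatures=Objects -- bounding boxes, not a descriptive word list:
objects_response = {
    "objects": [
        {"rectangle": {"x": 25, "y": 43, "w": 172, "h": 140},
         "object": "dog",
         "confidence": 0.90}
    ]
}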

You would not use either of the following URLs because they call the Content Moderator service, not the AI Vision Image Analysis API:

<endpoint>/contentmoderator/moderate/v1.0/ProcessImage&language=es

<endpoint>/contentmoderator/moderate/v1.0/Read&language=es

The Content Moderator service scans images, videos, and text and applies content flags if needed.

This practice question and answer, with a detailed explanation and references, is provided free to help you prepare for and pass the Microsoft Azure AI Engineer Associate AI-102 certification exam.