
AI-102: How to Use Azure AI Content Safety REST API for Text Moderation Effectively?

Learn step-by-step how to leverage Azure AI Content Safety’s REST API for text moderation. Discover how to flag objectionable content, configure categories, and interpret severity levels with ease.

Question

You use the REST API to do basic text moderation with the Azure AI Content Safety service to flag objectionable content. You run the following cURL statement:

curl --location --request POST '<endpoint>/contentsafety/text:analyze?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <your_subscription_key>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "text": "I don’t like Philadelphia Eagle fans because they are all so aggressive and stupid. They attack opposing fans for no reason at all.",
  "categories": ["Hate", "Sexual", "SelfHarm", "Violence"],
  "blocklistNames": ["string"],
  "haltOnBlocklistHit": true,
  "outputType": "EightSeverityLevels"
}'
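For readers who prefer to issue the same call from code, the request can be sketched in Python with only the standard library. This is a minimal sketch, not an official SDK sample; the `ENDPOINT` and `SUBSCRIPTION_KEY` values are placeholders you must replace with your own Azure resource details, and the request is only constructed here, not sent.

```python
import json
import urllib.request

# Placeholder values -- substitute your own Content Safety resource details.
ENDPOINT = "https://your-resource.cognitiveservices.azure.com"
SUBSCRIPTION_KEY = "<your_subscription_key>"

def build_analyze_request(text):
    """Build the text:analyze POST request matching the cURL call above."""
    url = f"{ENDPOINT}/contentsafety/text:analyze?api-version=2023-10-01"
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/json",
    }
    body = {
        "text": text,
        "categories": ["Hate", "Sexual", "SelfHarm", "Violence"],
        "haltOnBlocklistHit": True,
        "outputType": "EightSeverityLevels",
    }
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers=headers,
        method="POST",
    )

# To actually send it: urllib.request.urlopen(build_analyze_request("..."))
```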

The text moderation results are displayed in the following JSON data:

{
  "blocklistsMatch": [
    {
      "blocklistName": "string",
      "blocklistItemId": "string",
      "blocklistItemText": "string"
    }
  ],
  "categoriesAnalysis": [
    {
      "category": "Hate",
      "severity": A
    },
    {
      "category": "SelfHarm",
      "severity": B
    },
    {
      "category": "Sexual",
      "severity": C
    },
    {
      "category": "Violence",
      "severity": D
    }
  ]
}

Replace the missing values for each category.

Severity Level:

  • 0
  • 2
  • 3
  • 4

Answer

A. 3
B. 0
C. 0
D. 0

Explanation

The text moderation results are displayed as follows:

{
  "blocklistsMatch": [
    {
      "blocklistName": "string",
      "blocklistItemId": "string",
      "blocklistItemText": "string"
    }
  ],
  "categoriesAnalysis": [
    {
      "category": "Hate",
      "severity": 3
    },
    {
      "category": "SelfHarm",
      "severity": 0
    },
    {
      "category": "Sexual",
      "severity": 0
    },
    {
      "category": "Violence",
      "severity": 0
    }
  ]
}

In this scenario, the text submitted in the cURL statement to the Azure AI Content Safety service was as follows:

“I don’t like Philadelphia Eagle fans because they are all so aggressive and stupid. They attack opposing fans for no reason at all.”

This statement does not fall into the Violence, Sexual, or SelfHarm categories, but it does fall into the Hate category. According to Microsoft's hate severity definitions, content that is judgmental or stereotypical, or that negatively characterizes an identity group such as Philadelphia Eagles fans, is rated at severity level 3.
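A client typically needs to read these per-category scores out of the response and decide which categories to act on. The sketch below parses the sample response from this explanation; the `flagged` helper and its `threshold=2` default are hypothetical choices for illustration, not part of the Content Safety API itself.

```python
import json

# Sample response taken from the explanation above.
RESPONSE = """
{
  "categoriesAnalysis": [
    {"category": "Hate", "severity": 3},
    {"category": "SelfHarm", "severity": 0},
    {"category": "Sexual", "severity": 0},
    {"category": "Violence", "severity": 0}
  ]
}
"""

def severities(response_json):
    """Map each analyzed category name to its severity score."""
    data = json.loads(response_json)
    return {item["category"]: item["severity"] for item in data["categoriesAnalysis"]}

def flagged(response_json, threshold=2):
    """Return categories at or above a (hypothetical) moderation threshold."""
    return [cat for cat, sev in severities(response_json).items() if sev >= threshold]

# severities(RESPONSE) -> {"Hate": 3, "SelfHarm": 0, "Sexual": 0, "Violence": 0}
# flagged(RESPONSE)    -> ["Hate"]
```

In this example only Hate crosses the threshold, which matches the answer above.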

The output type specifies eight levels, 0-7 ("outputType": "EightSeverityLevels"). If it had specified "FourSeverityLevels", the classifier would only return severity levels 0, 2, 4, and 6. The following shows how the eight severity levels are mapped to four:

  • [0,1] -> 0
  • [2,3] -> 2
  • [4,5] -> 4
  • [6,7] -> 6
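The mapping above collapses each adjacent pair of eight-level scores onto the even four-level scale, which can be expressed as integer division by two followed by doubling. This is a sketch of that arithmetic for reference, not code from the Content Safety service:

```python
def to_four_level(severity):
    """Collapse an EightSeverityLevels score (0-7) onto the FourSeverityLevels scale.

    Pairs map as: [0,1] -> 0, [2,3] -> 2, [4,5] -> 4, [6,7] -> 6.
    """
    return (severity // 2) * 2
```

So the Hate score of 3 in this scenario would be reported as 2 under "FourSeverityLevels".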

Microsoft's Content Safety documentation on harm categories describes the differences between the severity levels of the Hate, Sexual, Violence, and SelfHarm categories.

This Microsoft Azure AI Engineer Associate AI-102 practice question and answer, with detailed explanation and references, is available free and is intended to help you pass the AI-102 exam and earn the Azure AI Engineer Associate certification.