Struggling with speech translation configuration on the Azure AI-102 exam? Get a step-by-step solution for using the SpeechRecognitionLanguage property and the AddTargetLanguage method in C# to fix speech-to-speech translation errors and ace Microsoft's AI Engineer certification.
Question
Your organization, Xerigon Corporation, is developing a customer service app using C#. The app will include functionality to perform real-time speech-to-speech translation, enabling conversations between people who speak different languages. You want to configure the app to translate from English to French.
You are writing the C# code to perform speech-to-speech translation. You have created a speech translation configuration to call the Speech service using the Speech SDK.
Complete the code below to achieve the objective.
```csharp
public class Program
{
    static readonly string SPEECH__SUBSCRIPTION__KEY =
        Environment.GetEnvironmentVariable(nameof(SPEECH__SUBSCRIPTION__KEY));
    static readonly string SPEECH__SERVICE__REGION =
        Environment.GetEnvironmentVariable(nameof(SPEECH__SERVICE__REGION));

    static Task Main() => TranslateSpeechAsync();

    static async Task TranslateSpeechAsync()
    {
        var speechTranslationConfig = SpeechTranslationConfig.FromSubscription(
            SPEECH__SUBSCRIPTION__KEY, SPEECH__SERVICE__REGION);

        speechTranslationConfig.A = "en-US";
        speechTranslationConfig.B("fr");
    }
}
```
Code:
- TranslationRecognizer
- AddTargetLanguage
- SpeechSynthesisVoiceName
- SpeechRecognitionLanguage
- AudioConfig
Answer
A. SpeechRecognitionLanguage
B. AddTargetLanguage
Explanation
You would use the SpeechRecognitionLanguage property and the AddTargetLanguage method to complete the C# code. Below is the completed code:
```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Translation;

public class Program
{
    static readonly string SPEECH__SUBSCRIPTION__KEY =
        Environment.GetEnvironmentVariable(nameof(SPEECH__SUBSCRIPTION__KEY));
    static readonly string SPEECH__SERVICE__REGION =
        Environment.GetEnvironmentVariable(nameof(SPEECH__SERVICE__REGION));

    static Task Main() => TranslateSpeechAsync();

    static async Task TranslateSpeechAsync()
    {
        var speechTranslationConfig = SpeechTranslationConfig.FromSubscription(
            SPEECH__SUBSCRIPTION__KEY, SPEECH__SERVICE__REGION);

        // Language of the input speech to recognize.
        speechTranslationConfig.SpeechRecognitionLanguage = "en-US";
        // Language to translate the recognized speech into.
        speechTranslationConfig.AddTargetLanguage("fr");
    }
}
```
The SpeechRecognitionLanguage property sets the language of the input speech that the system will recognize. The provided code correctly sets it to “en-US” (English, United States). This configuration is essential because it tells the service what language to expect from the input, ensuring accurate recognition and translation.
The AddTargetLanguage method is used to specify the target language into which the speech should be translated. In the provided code, this method is correctly used to add “fr” (French) as the target language. You can add multiple target languages by calling this method multiple times. This is a crucial part of configuring the speech translation, as it determines the output language for the translation process.
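For example, calling AddTargetLanguage once per language makes the service return one translation per target. A short sketch (the "de" and "es" codes are illustrative additions, not part of the question):

```csharp
// Each call registers one additional output language.
speechTranslationConfig.AddTargetLanguage("fr"); // French (from the question)
speechTranslationConfig.AddTargetLanguage("de"); // German (illustrative)
speechTranslationConfig.AddTargetLanguage("es"); // Spanish (illustrative)
// Translation results arrive keyed by language code,
// e.g. result.Translations["fr"] holds the French text.
```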
You would not use TranslationRecognizer, SpeechSynthesisVoiceName, or AudioConfig to complete the code.
TranslationRecognizer is a class in the Azure Speech SDK used to perform real-time speech translation. It integrates both speech recognition and translation, converting spoken language into text in multiple languages simultaneously.
SpeechSynthesisVoiceName is a property used to specify the synthesis voice that should be used for text-to-speech (TTS) synthesis.
AudioConfig is used to specify the audio input and output settings, such as the microphone or speaker configuration.
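To put the distractors in context, the pieces above fit together in a full speech-to-speech flow. The sketch below assumes the Microsoft.CognitiveServices.Speech NuGet package; the voice name "fr-FR-DeniseNeural" and the key/region parameters are illustrative, and the exact event-handling pattern may vary by SDK version:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.CognitiveServices.Speech.Translation;

public static class SpeechToSpeechDemo
{
    public static async Task TranslateOnceAsync(string key, string region)
    {
        var config = SpeechTranslationConfig.FromSubscription(key, region);
        config.SpeechRecognitionLanguage = "en-US";  // input language
        config.AddTargetLanguage("fr");              // output language
        config.VoiceName = "fr-FR-DeniseNeural";     // voice for synthesized French audio (illustrative)

        // AudioConfig selects the audio input source (default microphone here).
        using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();

        // TranslationRecognizer combines speech recognition and translation.
        using var recognizer = new TranslationRecognizer(config, audioConfig);

        // Synthesized audio in the target language arrives via the Synthesizing event.
        recognizer.Synthesizing += (_, e) =>
        {
            var audio = e.Result.GetAudio();
            Console.WriteLine($"Received {audio.Length} bytes of synthesized audio.");
        };

        var result = await recognizer.RecognizeOnceAsync();
        if (result.Reason == ResultReason.TranslatedSpeech)
        {
            Console.WriteLine($"Recognized (en-US): {result.Text}");
            Console.WriteLine($"Translated (fr): {result.Translations["fr"]}");
        }
    }
}
```

Note that this example requires a valid Azure Speech resource key and region at runtime, which is why the exam question focuses only on the configuration step.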