Learn how to implement real-time speech-to-text functionality for medical applications using the Azure AI Speech SDK in Python. Perfect for AI-102 certification prep!
Question
Your organization, Nutex Inc., is developing a voice-based application for a healthcare organization that allows doctors to dictate their notes after patient consultations. The application needs to convert spoken medical terms and patient details into text in real time, ensuring accuracy and efficiency. You decide to implement this functionality using the Azure AI Speech service's speech-to-text capability with the Speech SDK in Python.
You have set the required environment variables and are creating a console application to recognize speech from the microphone.
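For reference, those environment variables can be set from a terminal like this (the key and region values below are placeholders, not real credentials):

```shell
# Placeholder values -- substitute your own Speech resource key and region
export SPEECH_KEY="your-speech-resource-key"
export SPEECH_REGION="eastus"
```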
You have installed the Speech SDK and written the following code. Complete the code by replacing MISSING with the correct class.
import os
import azure.cognitiveservices.speech as speechsdk

def recognize_from_microphone():
    # This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
    speech_config = speechsdk.SpeechConfig(subscription=os.environ.get('SPEECH_KEY'), region=os.environ.get('SPEECH_REGION'))
    speech_config.speech_recognition_language = "en-US"

    audio_config = speechsdk.audio.MISSING(use_default_microphone=True)
    speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

    print("Speak into your microphone.")
    speech_recognition_result = speech_recognizer.recognize_once_async().get()

    if speech_recognition_result.reason == speechsdk.ResultReason.RecognizedSpeech:
        print("Recognized: {}".format(speech_recognition_result.text))
    elif speech_recognition_result.reason == speechsdk.ResultReason.NoMatch:
        print("No speech could be recognized: {}".format(speech_recognition_result.no_match_details))
    elif speech_recognition_result.reason == speechsdk.ResultReason.Canceled:
        cancellation_details = speech_recognition_result.cancellation_details
        print("Speech Recognition canceled: {}".format(cancellation_details.reason))
        if cancellation_details.reason == speechsdk.CancellationReason.Error:
            print("Error details: {}".format(cancellation_details.error_details))
            print("Did you set the speech resource key and region values?")

recognize_from_microphone()
A. RecognizedSpeech
B. SpeechConfig
C. SpeechRecognizer
D. AudioConfig
Answer
D. AudioConfig
Explanation
You would use AudioConfig to complete the code. AudioConfig is the class used to configure the audio input source for speech recognition. In the provided code, `speechsdk.audio.AudioConfig(use_default_microphone=True)` tells the SDK to capture audio from the default microphone, which is a critical part of setting up recognition from the microphone.
You would not use RecognizedSpeech in the blank. RecognizedSpeech is not a class but a value of the ResultReason enumeration that indicates a successful recognition result. In the provided code, it is used only to check whether recognition succeeded before printing the recognized text.
You would not use SpeechConfig in the blank. SpeechConfig is the class used to supply the subscription key and region for the Azure AI Speech service. It is essential for setting up the recognition environment, but in the provided code it has already been created from the SPEECH_KEY and SPEECH_REGION environment variables.
You would not use SpeechRecognizer in the blank. SpeechRecognizer is the class that performs the actual recognition, processing the audio input and converting it into text. In the provided code, it is instantiated after the SpeechConfig and AudioConfig objects have been created, so it is not the missing piece.
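The branching on `speech_recognition_result.reason` in the question's code is simply a dispatch over the SDK's ResultReason values. A minimal sketch of that pattern, using a stand-in enum instead of the real SDK (the enum member names mirror `speechsdk.ResultReason`, but `describe_result` is a hypothetical helper, not part of the SDK):

```python
from enum import Enum

# Stand-in for speechsdk.ResultReason; the real SDK defines these members.
class ResultReason(Enum):
    RecognizedSpeech = 1
    NoMatch = 2
    Canceled = 3

def describe_result(reason, text=None, details=None):
    # Hypothetical helper mirroring the if/elif chain in the question's code.
    if reason == ResultReason.RecognizedSpeech:
        return "Recognized: {}".format(text)
    elif reason == ResultReason.NoMatch:
        return "No speech could be recognized: {}".format(details)
    elif reason == ResultReason.Canceled:
        return "Speech Recognition canceled: {}".format(details)
    return "Unknown result reason"

print(describe_result(ResultReason.RecognizedSpeech, text="Patient presents with mild fever."))
```

Checking the reason before touching `.text` matters because a NoMatch or Canceled result has no recognized text to read.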
This Microsoft Azure AI Engineer Associate (AI-102) practice question, answer, and detailed explanation are provided free to help you prepare for and pass the AI-102 certification exam.