FAQ documents offer a fast track to chatbots. Learn how Azure QnA Maker can easily extract questions and answers from PDFs to power natural language conversations.
Question
You have a frequently asked questions (FAQ) PDF file. You need to create a conversational support system based on the FAQ. Which service should you use?
A. QnA Maker
B. Text Analytics
C. Computer Vision
D. Language Understanding (LUIS)
Answer
A. QnA Maker
Explanation
The correct answer is A. QnA Maker.
QnA Maker is a cloud-based API service that lets you create a conversational question-and-answer layer over your existing data. You use it to build a knowledge base by extracting question-answer pairs from semi-structured content such as FAQ documents, support websites, product manuals, SharePoint documents, and editorial content, and you can add personality to your bot with pre-built chit-chat datasets. QnA Maker automatically answers users' questions with the best match from the knowledge base, and the knowledge base keeps improving as it learns from user behavior. You can build, train, and publish the bot, and design multi-turn conversations, through the easy-to-use QnA Maker portal or via REST APIs.
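As a rough illustration, the sketch below calls the QnA Maker v4.0 create-knowledgebase REST operation to build a knowledge base from an FAQ file. The resource name, authoring key, and PDF URL are placeholders you would replace with your own values.

```python
import requests

# Placeholders (assumptions): substitute your own QnA Maker resource and authoring key.
ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"
SUBSCRIPTION_KEY = "<your-qnamaker-authoring-key>"

# QnA Maker v4.0 create-knowledgebase operation (runs asynchronously).
create_kb_url = f"{ENDPOINT}/qnamaker/v4.0/knowledgebases/create"

payload = {
    "name": "Support FAQ KB",
    # Point QnA Maker at the FAQ PDF; it extracts question-answer pairs itself.
    "files": [
        {
            "fileName": "faq.pdf",
            "fileUri": "https://example.com/docs/faq.pdf",  # assumed public URL
        }
    ],
}

response = requests.post(
    create_kb_url,
    headers={
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/json",
    },
    json=payload,
)
response.raise_for_status()

# The call returns an operation id; poll the operation until the knowledge base is built.
print(response.json().get("operationId"))
```

Once the knowledge base is built and published, your bot answers user questions by querying the knowledge base's generateAnswer runtime endpoint.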
Text Analytics is a service that provides advanced natural language processing over raw text, and includes four main functions: sentiment analysis, key phrase extraction, language detection, and named entity recognition. Text Analytics is not suitable for creating a conversational support system based on an FAQ document.
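For contrast, here is a minimal sketch of the Text Analytics v3.0 key phrase endpoint (the resource endpoint and key are placeholders). It returns key phrases for raw text rather than a question-and-answer knowledge base.

```python
import requests

# Placeholders (assumptions): your Text Analytics resource endpoint and key.
ENDPOINT = "https://<your-text-analytics-resource>.cognitiveservices.azure.com"
KEY = "<your-text-analytics-key>"

documents = {
    "documents": [
        {"id": "1", "language": "en", "text": "How do I reset my password?"}
    ]
}

# Key phrase extraction: returns important phrases, not answers to questions.
resp = requests.post(
    f"{ENDPOINT}/text/analytics/v3.0/keyPhrases",
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json=documents,
)
resp.raise_for_status()
print(resp.json()["documents"][0]["keyPhrases"])
```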
Computer Vision is a service that provides state-of-the-art algorithms to process images and return information. It can analyze the content of an image in different ways based on the visual features you request, including tagging, image description, celebrity and landmark recognition, optical character recognition (OCR), and detection of faces, emotions, and colors. Computer Vision is not suitable for creating a conversational support system based on an FAQ document.
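Likewise, a minimal sketch of the Computer Vision analyze operation (placeholder endpoint, key, and image URL). Its input is an image and its output is tags and a caption, which is unrelated to extracting questions and answers from an FAQ document.

```python
import requests

# Placeholders (assumptions): your Computer Vision resource endpoint and key.
ENDPOINT = "https://<your-computer-vision-resource>.cognitiveservices.azure.com"
KEY = "<your-computer-vision-key>"

# Request tags and a description for an image; the input is an image, not a document.
resp = requests.post(
    f"{ENDPOINT}/vision/v3.1/analyze",
    params={"visualFeatures": "Description,Tags"},
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json={"url": "https://example.com/images/sample.jpg"},  # assumed image URL
)
resp.raise_for_status()
print(resp.json()["description"]["captions"])
```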
Language Understanding (LUIS) is a service that enables you to build natural language understanding into apps, bots, and IoT devices. LUIS uses machine learning so your application can take user input in natural language and extract the user's intent and key details from it. LUIS is not suitable for creating a conversational support system directly from an FAQ document, because you would have to manually define the intents, entities, and example utterances rather than extracting question-answer pairs from the document.
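Finally, a minimal sketch of a LUIS v3.0 prediction request (placeholder app id, key, and endpoint). It returns the top intent and entities for an utterance, but those intents and example utterances must first be authored by hand; nothing is extracted from an FAQ document.

```python
import requests

# Placeholders (assumptions): your LUIS prediction resource, key, and app id.
PREDICTION_ENDPOINT = "https://<your-luis-resource>.cognitiveservices.azure.com"
PREDICTION_KEY = "<your-luis-prediction-key>"
APP_ID = "<your-luis-app-id>"  # a LUIS app with hand-authored intents and utterances

# Query the published production slot for the meaning of one utterance.
resp = requests.get(
    f"{PREDICTION_ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict",
    params={
        "subscription-key": PREDICTION_KEY,
        "query": "I forgot my password",
    },
)
resp.raise_for_status()
prediction = resp.json()["prediction"]
print(prediction["topIntent"], prediction["entities"])
```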
Incorrect:
Not B: Text Analytics extracts sentiment, key phrases, language, and named entities from raw text; it does not build a question-and-answer knowledge base from a document.
Not C: Computer Vision analyzes images, not FAQ documents.
Not D: LUIS identifies intents and entities in user utterances, but every intent and example utterance must be authored manually; it cannot extract question-answer pairs from an FAQ.
References
Microsoft Docs > Azure > Services > Cognitive Services > Question answering