AI-900: What is the Best Option to Deploy a Real-Time Inference Pipeline in Azure ML Designer?

Learn the correct option for deploying a real-time inference pipeline as a service using Azure ML designer. Discover why AKS (Azure Kubernetes Service) is the ideal choice for deploying models for others to consume.

Question

You are using Azure ML designer to deploy a real-time inference pipeline as a service for others to consume. Where will you deploy the model? Select the correct option.

A. A local web service.
B. AKS (Azure Kubernetes Service).
C. Azure ML compute.
D. Azure containers.

Answer

When using Azure ML designer to deploy a real-time inference pipeline as a service for others to consume, the correct option is B. AKS (Azure Kubernetes Service).

Explanation

Azure Kubernetes Service (AKS) is a fully managed container orchestration service that allows you to deploy, scale, and manage containerized applications efficiently. When deploying a real-time inference pipeline in Azure ML designer, AKS is the ideal choice for several reasons:

  1. Scalability: AKS enables you to easily scale your deployed model to handle varying workloads. It automatically provisions and manages the underlying infrastructure, allowing you to focus on your model deployment.
  2. High Availability: AKS ensures high availability for your deployed model by automatically distributing containers across multiple nodes. If a node fails, AKS automatically reschedules the containers on healthy nodes, minimizing downtime.
  3. Load Balancing: AKS provides built-in load balancing capabilities, distributing incoming requests evenly across the deployed containers. This ensures optimal performance and responsiveness for your inference pipeline.
  4. Integration with Azure ML: Azure ML designer integrates seamlessly with AKS, making it straightforward to deploy your trained models as a web service. You can configure and manage the deployment directly from the designer interface, and the sketch after this list shows the equivalent step in code.
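
For context: the designer performs this deployment through its UI, but the equivalent step with the Azure ML Python SDK (v1) looks roughly like the sketch below. Names such as aks-inference, realtime-endpoint, score.py, my-registered-model, and my-inference-env are placeholders, not values from the question.

```python
from azureml.core import Workspace, Model, Environment
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AksWebservice

ws = Workspace.from_config()  # assumes a config.json for your workspace

# Provision an AKS cluster as the inference target (or attach an existing one)
prov_config = AksCompute.provisioning_configuration(vm_size="Standard_DS3_v2", agent_count=3)
aks_target = ComputeTarget.create(ws, name="aks-inference", provisioning_configuration=prov_config)
aks_target.wait_for_completion(show_output=True)

# Per-replica resources plus autoscaling to handle varying workloads
deploy_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=2, autoscale_enabled=True)

# score.py and the environment come from your own inference pipeline (placeholder names)
inference_config = InferenceConfig(
    entry_script="score.py",
    environment=Environment.get(ws, name="my-inference-env"),
)

model = Model(ws, name="my-registered-model")  # a model already registered in the workspace
service = Model.deploy(
    workspace=ws,
    name="realtime-endpoint",
    models=[model],
    inference_config=inference_config,
    deployment_config=deploy_config,
    deployment_target=aks_target,
)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)  # the URL consumers call for real-time predictions
```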

The other options mentioned are not suitable for deploying a real-time inference pipeline as a service:

  • Option A, a local web service, is not appropriate for serving models to others as it runs on your local machine and is not accessible to external consumers.
  • Option C, Azure ML compute, is used for training models and running experiments, not for deploying inference pipelines as a service.
  • Option D, Azure containers, refers broadly to container offerings in Azure (for example, Azure Container Instances, which is intended mainly for dev/test deployments) rather than the production-grade target Azure ML designer uses for real-time inference.

In summary, when using Azure ML designer to deploy a real-time inference pipeline as a service for others to consume, AKS (Azure Kubernetes Service) is the correct option. Its scalability, high availability, built-in load balancing, and seamless integration with Azure ML make it the right target for models that others will consume.

Deploying a model to AKS allows you to manage and scale the service for real-time inference effectively.
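
Once the service is running, consumers call its scoring endpoint over REST. Below is a minimal sketch, assuming a key-authenticated AKS endpoint and a simple tabular input; the actual URI, key, and payload shape come from your deployed endpoint (for example, the Consume tab in Azure ML studio) and will differ for your pipeline.

```python
import json

import requests  # third-party HTTP client

# Placeholders: copy the real values from your deployed endpoint
scoring_uri = "http://<your-aks-endpoint>/api/v1/service/realtime-endpoint/score"
api_key = "<primary-or-secondary-key>"  # AKS deployments use key-based auth by default

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}",
}

# The input schema is defined by your inference pipeline; this assumes two numeric features
payload = {"Inputs": {"input1": [{"feature_1": 3.5, "feature_2": 1.2}]}}

response = requests.post(scoring_uri, data=json.dumps(payload), headers=headers)
response.raise_for_status()
print(response.json())  # predictions returned by the real-time inference pipeline
```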

Microsoft Azure AI Fundamentals AI-900 certification exam practice question and answer (Q&A) dump

This Microsoft Azure AI Fundamentals AI-900 certification exam practice question and answer (Q&A), with detailed explanation and references, is available free of charge and is helpful for passing the Microsoft Azure AI Fundamentals AI-900 exam and earning the certification.