The latest Microsoft AZ-303 Microsoft Azure Architect Technologies certification practice exam questions and answers (Q&A) are available free. They can help you prepare for and pass the Microsoft AZ-303 Microsoft Azure Architect Technologies exam and earn the Microsoft AZ-303 Microsoft Azure Architect Technologies certification.
Exam Question 101
You are designing the disaster recovery strategy for an application.
The application uses a private blob container in a storage account named storage1. The application needs to read and write blobs in storage1 even if a disaster impacting a whole Azure region occurs.
You need to configure storage1 to maximize availability and indicate an action to perform in case of an outage.
Which redundancy option and action should you use?
Redundancy option for storage1:
- GRS
- LRS
- RA-GRS
- ZRS
Action to perform in case of an outage:
- Copy the files to a new storage account.
- Initiate a storage account failover.
- Use the storage account secondary endpoint.
Correct Answer:
Redundancy option for storage1: Geo-redundant storage (GRS)
Action to perform in case of an outage: Initiate a storage account failover.
Answer Description:
You should configure Geo-redundant storage (GRS) for storage1. This redundancy option continuously replicates the storage account data to a secondary region that is geographically distant from the primary region, so the data remains available if a disaster impacts an entire Azure region.
You should initiate a storage account failover if an outage occurs. An account failover promotes the secondary endpoint of the storage account to become the primary endpoint. Once failover is completed, the application can read and write to the new primary region and maintain high availability.
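For reference, a minimal Azure PowerShell sketch of these two steps; the resource group and account names are placeholders:

```powershell
# Configure geo-redundant storage (GRS) on the existing account (placeholder names).
Set-AzStorageAccount -ResourceGroupName "rg1" -Name "storage1" -SkuName "Standard_GRS"

# During a regional outage, initiate the account failover so that the
# secondary region becomes the new primary region.
Invoke-AzStorageAccountFailover -ResourceGroupName "rg1" -Name "storage1" -Force
```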
You should not configure Read-access geo-redundant storage (RA-GRS) and use the storage account secondary endpoint. Using RA-GRS and redirecting the application to the secondary endpoint solves the read availability problem. However, the secondary endpoint is read-only, so the application cannot write to the replicated blobs.
You should not configure Locally redundant storage (LRS) or Zone-redundant storage (ZRS) for storage1. These redundancy options do not replicate the storage account to a geographically isolated region.
You should not copy the files to a new storage account. If the primary region is not available, you may not be able to copy the original blobs from the storage account.
References:
Microsoft Docs > Disaster recovery and storage account failover
Microsoft Docs > Initiate a storage account failover
Microsoft Docs > Azure Storage redundancy
Exam Question 102
You are designing a new storage solution using Azure Storage Accounts for your company.
The company security team requires that the solution uses Azure Active Directory (Azure AD) as the authentication platform with the storage account.
You need to indicate which storage account services are compatible with Azure AD.
Choose all that apply:
A. You can use Azure AD role-based access control (RBAC) to access Azure Blobs and Azure Queues.
B. You can access Azure Files over the Server Message Block (SMB) protocol with Azure AD by enabling Azure AD Domain Services.
C. You can use Azure AD managed identities to access Azure Tables.
Correct Answer:
A. You can use Azure AD role-based access control (RBAC) to access Azure Blobs and Azure Queues.
B. You can access Azure Files over the Server Message Block (SMB) protocol with Azure AD by enabling Azure AD Domain Services.
Answer Description:
You can use Azure AD role-based access control (RBAC) to access Azure Blobs and Azure Queues. You can use Azure AD to authorize requests to Azure Blobs and Azure Queues by assigning RBAC roles to a security principal, for example the Storage Blob Data Contributor or Storage Queue Data Reader built-in roles.
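As an illustration, a minimal Azure PowerShell sketch of such a role assignment; the user, subscription ID, resource group, and account names are placeholders:

```powershell
# Grant a user data-plane access to blobs in a storage account (placeholder names).
New-AzRoleAssignment -SignInName "user@contoso.com" `
  -RoleDefinitionName "Storage Blob Data Contributor" `
  -Scope "/subscriptions/<subscription-id>/resourceGroups/rg1/providers/Microsoft.Storage/storageAccounts/storage1"
```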
You can access Azure Files over the Server Message Block (SMB) protocol with Azure AD by enabling Azure AD Domain Services. After you enable Azure AD Domain Services, you can mount an Azure file share on a domain-joined machine and enforce authorization on user access using the same credentials stored in Azure AD. SMB is a network file-sharing protocol that provides access to files, printers, and serial ports over a network.
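A minimal Azure PowerShell sketch of enabling Azure AD Domain Services authentication on an existing storage account; the resource group and account names are placeholders:

```powershell
# Enable Azure AD Domain Services authentication for Azure Files (placeholder names).
Set-AzStorageAccount -ResourceGroupName "rg1" -Name "storage1" `
  -EnableAzureActiveDirectoryDomainServicesForFile $true
```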
You cannot use Azure AD managed identities to access Azure Tables. Only storage account keys and shared access signatures (SAS) are supported for authorizing access to Azure Tables.
References:
Microsoft Docs > Authorizing access to data in Azure Storage
Microsoft Docs > Authorize access to blobs and queues using Azure Active Directory
Microsoft Docs > Overview of Azure Files identity-based authentication support for SMB access
Exam Question 103
You are the Azure administrator for a web API that uses the Free plan.
You need to monitor the web API to determine whether or not you should change the plan to Basic.
Which metric should you monitor?
A. Requests
B. Average Response Time
C. CPU Time
D. Thread Count
Correct Answer:
C. CPU Time
Answer Description:
You should monitor CPU time. This represents the number of CPU minutes used by the web API. For the Free plan, a web app or web API is allowed 60 CPU minutes per day. By monitoring this metric, you can decide whether or not to scale up the web API.
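As a reference, a minimal Azure PowerShell sketch that queries this metric; the resource ID is a placeholder:

```powershell
# Placeholder resource ID of the web API's App Service app.
$appId = "/subscriptions/<subscription-id>/resourceGroups/rg1/providers/Microsoft.Web/sites/mywebapi"

# Retrieve the CpuTime metric for the last day, aggregated per hour.
Get-AzMetric -ResourceId $appId -MetricName "CpuTime" -TimeGrain 01:00:00 `
  -StartTime (Get-Date).AddDays(-1) -EndTime (Get-Date)
```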
You should not monitor Average Response Time. This represents the average number of milliseconds used to serve a single request. An App Service plan can affect the response time, but response time alone is not a reliable indicator for scaling up, because other factors also influence it. For example, the average response time can increase simply because more simultaneous requests are made during peak times.
You should not monitor Requests. This represents the total number of HTTP requests made to the web API. An App Service Plan does not limit the number of requests made to a web API.
You should not monitor Thread Count. This represents the total number of working threads used to service requests. An App Service Plan does not limit the number of threads used by a web API.
References:
Microsoft Docs > Monitor apps in Azure App Service
Azure Pricing > App Service Pricing
Exam Question 104
You need to enable encryption for a running Windows Infrastructure-as-a-Service (IaaS) virtual machine (VM).
Which PowerShell cmdlet should you use?
A. Set-AzDiskDiskEncryptionKey
B. Set-AzVMDataDisk
C. Set-AzVMDiskEncryptionExtension
D. ConvertTo-AzVMManagedDisk
Correct Answer:
C. Set-AzVMDiskEncryptionExtension
Answer Description:
You should use the Set-AzVMDiskEncryptionExtension cmdlet. This cmdlet enables encryption on a running VM by installing the Azure Disk Encryption extension. It works for Windows VMs and supported Linux VMs. You should create a snapshot of the VM before enabling encryption.
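For reference, a minimal Azure PowerShell sketch; the resource group, VM, and key vault names are placeholders, and the key vault must already be enabled for disk encryption:

```powershell
# Retrieve the key vault that holds the encryption keys (placeholder names).
$kv = Get-AzKeyVault -VaultName "myKeyVault" -ResourceGroupName "rg1"

# Enable Azure Disk Encryption on the running VM for all volumes.
Set-AzVMDiskEncryptionExtension -ResourceGroupName "rg1" -VMName "vm1" `
  -DiskEncryptionKeyVaultUrl $kv.VaultUri `
  -DiskEncryptionKeyVaultId $kv.ResourceId `
  -VolumeType "All"
```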
You should not use the Set-AzVMDataDisk cmdlet. This cmdlet is used to modify properties for a VM data disk but does not include properties related to encryption.
You should not use the Set-AzDiskDiskEncryptionKey cmdlet. This cmdlet sets the disk encryption key properties on a disk but does not enable encryption.
You should not use the ConvertTo-AzVMManagedDisk cmdlet. This cmdlet is used to convert a VM with blob-based disks to a VM with managed disks.
References:
Microsoft Docs > Azure Disk Encryption for virtual machines and virtual machine scale sets
Microsoft Docs > Azure Disk Encryption for Linux VMs
Microsoft Docs > Quickstart: Create and encrypt a Windows virtual machine in Azure with PowerShell
Microsoft Docs > Set-AzVMDiskEncryptionExtension
Microsoft Docs > Set-AzVMDataDisk
Microsoft Docs > Set-AzDiskDiskEncryptionKey
Microsoft Docs > Update-AzDisk
Exam Question 105
Your company is researching ways to improve data security for Windows and Linux Infrastructure-as-a-Service (IaaS) virtual machines (VMs). You need to determine if Azure Disk Encryption can meet the company’s requirements.
Choose all that apply:
A. Azure Disk Encryption is supported for Basic, Standard, and Premium tier VMs.
B. You must encrypt the OS volume before you can encrypt any data volumes on a Windows VM.
C. You can use an on-premises key management service to safeguard encryption keys.
Correct Answer:
B. You must encrypt the OS volume before you can encrypt any data volumes on a Windows VM.
Answer Description:
Azure Disk Encryption is not supported for Basic tier VMs. It is supported for Standard and Premium tier VMs. Azure Disk Encryption supports Windows Server 2008 R2 and later, and a subset of the Azure Linux gallery images. Custom Linux images are not supported.
You must encrypt the OS volume before you can encrypt any data volumes on a Windows VM. Azure Disk Encryption does not let you encrypt a data volume unless you first encrypt the OS volume. This is different for Linux VMs, which let you encrypt data volumes without first encrypting the OS volume.
You cannot use an on-premises key management service to safeguard encryption keys. You are required to use Azure Key Vault. Azure Key Vault is a prerequisite for implementing Azure Disk Encryption.
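A minimal Azure PowerShell sketch of creating a key vault that satisfies this prerequisite; the names and region are placeholders:

```powershell
# Create a key vault enabled for Azure Disk Encryption (placeholder names).
New-AzKeyVault -Name "myKeyVault" -ResourceGroupName "rg1" `
  -Location "eastus" -EnabledForDiskEncryption
```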
References:
Microsoft Docs > Creating and configuring a key vault for Azure Disk Encryption
Microsoft Docs > Azure Disk Encryption for virtual machines and virtual machine scale sets
Exam Question 106
Your company plans to use a custom image based on an existing Azure Windows virtual machine (VM) to provision new VMs in multiple regions.
You need to prepare the VM so it can be used to create a custom image.
Which three commands should you run first in sequence?
A. 1. Sysprep; 2. Stop-AzVM; 3. Set-AzVM
B. 1. Sysprep; 2. Set-AzVM; 3. Stop-AzVM
C. 1. Set-AzVM; 2. Sysprep; 3. Stop-AzVM
D. 1. Set-AzVM; 2. Stop-AzVM; 3. Sysprep
Correct Answer:
A. 1. Sysprep; 2. Stop-AzVM; 3. Set-AzVM
Answer Description:
You need to start by running the following commands in order:
- Sysprep
- Stop-AzVM
- Set-AzVM
A custom image is similar to an Azure marketplace image. The primary difference is that you create the image yourself from an existing VM. The result is a reusable image that can be used to create as many VMs as you want.
You start by running the Sysprep command inside the VM to remove personal information and generalize the image. You then use the Stop-AzVM cmdlet to deallocate the VM. Finally, you mark the VM as generalized in Azure by using the Set-AzVM cmdlet with the -Generalized parameter.
Once you have prepared the VM, you run Get-AzVM to retrieve the VM and load it into a variable, New-AzImageConfig to create the image configuration from the source VM and a location, and finally New-AzImage to create the image, specifying the image name and resource group.
At this point, you can use New-AzVM to create new VMs from the image.
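For reference, a minimal Azure PowerShell sketch of the full sequence; the resource group, VM, image, and region names are placeholders:

```powershell
# Inside the VM, first run:
#   C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown

# Then, from Azure PowerShell (placeholder names):
Stop-AzVM -ResourceGroupName "rg1" -Name "vm1" -Force
Set-AzVM -ResourceGroupName "rg1" -Name "vm1" -Generalized

# Capture the generalized VM as a managed image.
$vm = Get-AzVM -ResourceGroupName "rg1" -Name "vm1"
$imageConfig = New-AzImageConfig -Location "eastus" -SourceVirtualMachineId $vm.Id
New-AzImage -Image $imageConfig -ImageName "myImage" -ResourceGroupName "rg1"
```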
References:
Microsoft Docs > Tutorial: Create a custom image of an Azure VM with Azure PowerShell
Microsoft Docs > Tutorial: Create and Manage Windows VMs with Azure PowerShell
Microsoft Docs > Create a VM from a managed image
Microsoft Docs > Create a managed image of a generalized VM in Azure
Exam Question 107
You need to recommend a solution that will monitor Azure subscription activity and send alerts to a non-Azure system for processing.
Notification of alerts sent to the external system must be automated.
Which mechanism should you recommend?
A. Azure Stream Analytics
B. Azure Event Hubs
C. Webhook
D. Power BI
Correct Answer:
C. Webhook
Answer Description:
You should recommend using a webhook. Azure alerts use HTTP POST to send the alert contents in JSON format to a webhook URI that you provide when you create the alert. Azure sends a single POST request to the webhook each time an alert is activated.
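As an illustration, a minimal Azure PowerShell sketch that registers an external webhook in an action group that alert rules can reference; the names and URI are placeholders, and the exact cmdlets may vary across Az.Monitor module versions:

```powershell
# Define a webhook receiver pointing at the external system (placeholder URI).
$webhook = New-AzActionGroupReceiver -Name "externalSystem" `
  -WebhookReceiver -ServiceUri "https://example.com/alerts"

# Create or update an action group that uses the webhook receiver.
Set-AzActionGroup -Name "ag-external" -ResourceGroupName "rg1" `
  -ShortName "agext" -Receiver $webhook
```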
You should not recommend Power BI. This service is used to present and analyze both historical and live data. The external system would need to retrieve the data from Power BI.
You should not recommend Azure Event Hubs. Although this service is used to ingest data, you would need an additional component to send data to an external system.
You should not recommend Azure Stream Analytics. This service is used to process large amounts of data on the fly and to perform complex data analytics and aggregations.
References:
Microsoft Docs > Overview of Azure platform logs
Microsoft Docs > Webhook actions for log alert rules
Microsoft Docs > Connect to the services you use with Power BI
Microsoft Docs > Stream Azure platform logs to Azure Event Hubs
Microsoft Docs > Stream data as input into Stream Analytics
Exam Question 108
You plan to move a batch processing solution that currently runs on multiple on-premises Virtual Machines (VMs) to the Azure cloud.
The solution requires you to control when maintenance events occur and provide hardware isolation at the physical server level.
You need to implement a solution to meet the requirements.
What should you use?
A. App Service Environments
B. Azure Dedicated Hosts
C. Azure Kubernetes Service
D. Azure VM scale sets
Correct Answer:
B. Azure Dedicated Hosts
Answer Description:
You should use Azure Dedicated Hosts. You can use Azure Dedicated Hosts to provide a physical server dedicated to one Azure subscription. Azure Dedicated Hosts provide hardware isolation at the physical server level and total control over Azure’s maintenance events by defining a custom maintenance window. You can also host one or more virtual machines on a single Dedicated Host.
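For reference, a minimal Azure PowerShell sketch of provisioning a Dedicated Host; the names, region, and SKU are placeholders:

```powershell
# Create a host group and a dedicated host in it (placeholder names).
New-AzHostGroup -ResourceGroupName "rg1" -Name "hostgroup1" `
  -Location "eastus" -PlatformFaultDomain 1

New-AzHost -ResourceGroupName "rg1" -HostGroupName "hostgroup1" -Name "host1" `
  -Location "eastus" -Sku "DSv3-Type1"

# New VMs are then placed on this host by referencing its resource ID when they are created.
```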
You should not use Azure VM scale sets. You can use Azure VM scale sets to manage a group of load-balanced VMs that run a similar workload. You can use update domains to improve high availability during maintenance events. However, you cannot control when maintenance events will be applied, and VM scale sets do not provide hardware isolation at the physical server level.
You should not use App Service Environments. You can use App Service Environments to provide an isolated and dedicated environment for running App Service apps. App Service Environments provide hardware isolation at the physical server level. However, they are better suited for running web applications than batch processing workloads.
You should not use Azure Kubernetes Service. You can use Azure Kubernetes Service to provide a managed Kubernetes cluster in Azure and reduce the complexity of managing the cluster. Azure Kubernetes Service is better suited for running microservices and containerized applications, and it does not provide hardware isolation at the physical server level by itself.
References:
Microsoft Docs > Azure Dedicated Hosts
Microsoft Docs > What are virtual machine scale sets?
Microsoft Docs > Maintenance for virtual machines in Azure
Microsoft Docs > Introduction to the App Service Environments
Microsoft Docs > Azure Kubernetes Service (AKS)
Exam Question 109
You are implementing an n-tier application that runs on three Azure Virtual Machines (VMs) in your Azure subscription.
The application requires the lowest possible network latency between the Azure VMs.
You need to deploy the application using the most cost-effective solution.
What should you do?
A. Create a proximity placement group.
B. Deploy the VMs on a Dedicated Host.
C. Use a VM scale set and deploy the VMs in the same update domain.
D. Use a VM scale set and deploy the VMs in the same fault domain.
Correct Answer:
A. Create a proximity placement group.
Answer Description:
You should create a proximity placement group. You can use a proximity placement group to provision resources like Azure VMs or VM scale sets that are physically located close to each other. This achieves the lowest network latency between this group of resources.
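As a reference, a minimal Azure PowerShell sketch; the names and region are placeholders:

```powershell
# Create the proximity placement group (placeholder names).
$ppg = New-AzProximityPlacementGroup -ResourceGroupName "rg1" -Name "ppg1" `
  -Location "eastus" -ProximityPlacementGroupType Standard

# Each of the three VMs is then created with a reference to $ppg.Id so that
# Azure places them physically close to each other.
```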
You should not deploy the VMs on a Dedicated Host. You can use a Dedicated Host to run one or more VMs on a physical server dedicated to your subscription. However, a Dedicated Host is a more expensive option, and it does not by itself guarantee the lowest network latency between the VMs.
You should not use a VM scale set and deploy the VMs in the same fault domain or update domain. You can use a VM scale set to provide high availability for each application tier by increasing VM redundancy. A fault domain is a logical group of hardware that shares the same power source and network switch. An update domain is a logical group of VMs that can undergo planned maintenance and reboots at the same time, so that different update domains are updated at different times. Fault domains and update domains are designed for availability, not proximity: you cannot control exactly where the VMs in a given fault or update domain are placed, and you cannot guarantee that the resources are physically located close to each other. Therefore, this approach would not necessarily achieve the lowest network latency.
References:
Microsoft Docs > Create a proximity placement group using the portal
Microsoft Docs > Azure Dedicated Hosts
Microsoft Docs > What are virtual machine scale sets?
Microsoft Docs > Availability options for virtual machines in Azure
Exam Question 110
You plan to deploy 15 identical virtual machines (VMs) to Azure. All 15 VMs must be based on the settings of a local on-premises computer.
You need to choose the best strategy for deploying the VMs.
What should you do?
A. Create a VM in Azure. Use Azure CLI to copy that VM 14 times.
B. Create a VM in Azure. Use PowerShell to copy that VM 14 times.
C. Create an Extensible Markup Language (XML) file that describes a single VM. Use Azure CLI to deploy a template to Azure.
D. Create a JavaScript Object Notation (JSON) file that describes a single VM. Use PowerShell to deploy a template to Azure.
Correct Answer:
D. Create a JavaScript Object Notation (JSON) file that describes a single VM. Use PowerShell to deploy a template to Azure.
Answer Description:
You should create a JSON file that describes a single VM. In Azure, this file is referred to as an Azure Resource Manager (ARM) template. You then use deployment commands to deploy the template to Azure; one tool that you can use is PowerShell. Each deployment of the template creates the actual VM, and you can deploy the same template repeatedly to create the 15 identical VMs.
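For reference, a minimal Azure PowerShell sketch; the resource group, template file, and the vmName template parameter are placeholders:

```powershell
# Deploy the same ARM template 15 times, giving each VM a unique name
# (assumes the template exposes a 'vmName' parameter; placeholder names).
1..15 | ForEach-Object {
    New-AzResourceGroupDeployment -ResourceGroupName "rg1" `
      -TemplateFile ".\vm-template.json" `
      -TemplateParameterObject @{ vmName = "vm$_" }
}
```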
You should not create an XML file to describe a single VM. ARM templates must be written in JSON syntax.
You should not create the VM in Azure and then use PowerShell or Azure CLI to copy it. Azure does not provide a single command that copies an existing VM; an ARM template provides a declarative, repeatable way to describe a VM before you create it.
References:
Microsoft Docs > Quickstart: Create and deploy ARM templates by using the Azure portal