AZ-303 Microsoft Azure Architect Technologies Exam Questions and Answers – Page 1

The latest Microsoft AZ-303 Microsoft Azure Architect Technologies practice exam questions and answers (Q&A) are available free to help you prepare for the Microsoft AZ-303 exam and earn the Microsoft Azure Architect Technologies certification.

AZ-303 Microsoft Azure Architect Technologies Exam Questions and Answers

Exam Question 81

You are the IT administrator for an automobile dealership on the west coast of the United States. The dealership wants to take advantage of Microsoft Azure by first moving its website to the cloud. The dealership wants to use the lowest cost solution possible.
Business Requirements: One of the problems the dealership has been facing is website downtime. The dealership typically provides maintenance every Sunday and Wednesday at 2:00 A.M. Eastern Time. However, because the dealership wants to attract customers all over the world, it wants to ensure that the website is always available. During peak seasons, the dealership notices that the website responds slower. The dealership wants this bottleneck eliminated.
Technical Requirements: The website is currently hosted at the dealership’s domain registrar. The dealership wants to move the site to Azure on Windows Server virtual machines (VMs). Users must be able to use the same domain name to reach the website. The website must be hosted in only one Azure region. The VMs must use a four-gigabyte (GB) solid state drive (SSD). The dealership expects there to be less hands-on maintenance and administration once the infrastructure is moved to Azure.
You need to eliminate the bottleneck during peak seasons.
Which two Azure resources should you create? Each correct answer presents part of the solution.

A. API Management gateway
B. Traffic Manager profile
C. Scale set
D. Load balancer
E. Service Fabric cluster
Correct Answer:
C. Scale set
D. Load balancer
Answer Description:
You should create a scale set. A scale set contains one or more identical VMs. It can be configured to automatically scale out additional VMs when a CPU usage threshold is exceeded.
You should also create a load balancer. A load balancer distributes traffic evenly across a set of VMs.
You should not create a Service Fabric cluster. Service Fabric allows you to scale out microservices. In this scenario, you need to scale out VMs.
You should not create a Traffic Manager profile. Traffic Manager distributes traffic across Azure regions. It uses DNS to determine the nearest Azure datacenter to which external traffic should be routed.
You should not create an API Management gateway. API Management allows API developers to publish and secure web APIs.
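The scale set behavior described above amounts to a simple threshold rule: add an instance when average CPU exceeds an upper bound, remove one when it falls below a lower bound, always staying within configured limits. This is an illustrative Python sketch, not the actual Azure autoscale engine; the thresholds and instance bounds are hypothetical defaults.

```python
def desired_instance_count(current: int, avg_cpu: float,
                           scale_out_cpu: float = 75.0,
                           scale_in_cpu: float = 25.0,
                           min_instances: int = 2,
                           max_instances: int = 10) -> int:
    """Return the instance count a scale-set autoscale rule would target.

    Adds a VM when average CPU is above the scale-out threshold, removes
    one when it falls below the scale-in threshold, and always stays
    within the configured minimum and maximum instance counts.
    """
    if avg_cpu > scale_out_cpu:
        return min(current + 1, max_instances)
    if avg_cpu < scale_in_cpu:
        return max(current - 1, min_instances)
    return current
```

With a load balancer distributing traffic across the instances, scaling out during peak season raises aggregate capacity and removes the bottleneck.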
References:
Microsoft Docs > What are virtual machine scale sets?
Microsoft Docs > Overview of Azure Service Fabric
Microsoft Docs > About API Management
Microsoft Docs > What is Traffic Manager?

Exam Question 82

You are the IT administrator for an automobile dealership on the west coast of the United States. The dealership wants to take advantage of Microsoft Azure by first moving its website to the cloud. The dealership wants to use the lowest cost solution possible.
Business Requirements: One of the problems the dealership has been facing is website downtime. The dealership typically provides maintenance every Sunday and Wednesday at 2:00 A.M. Eastern Time. However, because the dealership wants to attract customers all over the world, it wants to ensure that the website is always available. During peak seasons, the dealership notices that the website responds slower. The dealership wants this bottleneck eliminated.
Technical Requirements: The website is currently hosted at the dealership’s domain registrar. The dealership wants to move the site to Azure on Windows Server virtual machines (VMs). Users must be able to use the same domain name to reach the website. The website must be hosted in only one Azure region. The VMs must use a four-gigabyte (GB) solid state drive (SSD). The dealership expects there to be less hands-on maintenance and administration once the infrastructure is moved to Azure.
You need to configure Azure to automatically notify the owner of the dealership when peak season appears to have started. The solution must minimize expense and implementation difficulty.
What should you do?

A. Create a Function that uses a timed trigger to monitor the CPU usage and send a text message when a CPU threshold is exceeded.
B. Use Monitor to capture the average CPU percentage over time and create an alert when a CPU threshold is exceeded.
C. Create a WebJob that uses a timed trigger to monitor memory usage and invoke WebHook when consumption is high.
D. Use Machine Learning to create a model that examines historical memory usage and send an email when consumption is high.
Correct Answer:
B. Use Monitor to capture the average CPU percentage over time and create an alert when a CPU threshold is exceeded.
Answer Description:
You should use Monitor to create an alert when a CPU threshold is exceeded. With Monitor, you first choose a resource to monitor. In this scenario, the resource is a VM. You then choose a condition to monitor. In this scenario, when peak season starts, the website’s response time is slower. This means that the CPU is doing more work than usual. Therefore, you should create a condition that monitors CPU percentage. You then choose an action. You can configure an action to e-mail the owner of the dealership when the CPU percentage exceeds a specific threshold.
You should not use Machine Learning. With Machine Learning, you import historical data into a model to predict future outcomes. It is not a monitoring tool and cannot watch live VM metrics such as CPU usage or memory consumption.
You should not create a Function. This requires you to create an App Service resource. Also, you would need to manually write code to monitor CPU usage on the VM and send the text message.
You should not create a WebJob. This requires you to create an App Service resource. Also, you would need to manually write code to monitor memory consumption on the VM and invoke the WebHook. You would also need to code the WebHook to send the message to the owner.
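The Monitor alert described above reduces to evaluating an aggregated metric over a time window against a threshold, then running an action. A minimal sketch of that evaluation (the sample values and the 80% threshold are hypothetical, not an Azure default):

```python
def alert_fires(cpu_samples: list[float], threshold: float = 80.0) -> bool:
    """Fire when the average CPU percentage over the window exceeds the
    threshold, as a metric alert with an 'Average' aggregation would."""
    return sum(cpu_samples) / len(cpu_samples) > threshold
```

In Azure Monitor this condition, plus an action group that emails the owner, is configured entirely in the portal, which is why it is cheaper and simpler than writing custom Function or WebJob code.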
References:
Microsoft Docs > How to monitor virtual machines in Azure
Microsoft Docs > What is automated machine learning (AutoML)?
Microsoft Docs > Azure Functions triggers and bindings concepts
Microsoft Docs > Run background tasks with WebJobs in Azure App Service

Exam Question 83

You are the IT administrator for an automobile dealership on the west coast of the United States. The dealership wants to take advantage of Microsoft Azure by first moving its website to the cloud. The dealership wants to use the lowest cost solution possible.
Business Requirements: One of the problems the dealership has been facing is website downtime. The dealership typically provides maintenance every Sunday and Wednesday at 2:00 A.M. Eastern Time. However, because the dealership wants to attract customers all over the world, it wants to ensure that the website is always available. During peak seasons, the dealership notices that the website responds slower. The dealership wants this bottleneck eliminated.
Technical Requirements: The website is currently hosted at the dealership’s domain registrar. The dealership wants to move the site to Azure on Windows Server virtual machines (VMs). Users must be able to use the same domain name to reach the website. The website must be hosted in only one Azure region. The VMs must use a four-gigabyte (GB) solid state drive (SSD). The dealership expects there to be less hands-on maintenance and administration once the infrastructure is moved to Azure.
You need to ensure that users can reach the website hosted in Azure with the existing domain name.
Which two actions should you perform on the VM? Each correct answer presents part of the solution.

A. Assign a public static IP address.
B. Add an extension.
C. Add an inbound port rule.
D. Add an outbound port rule.
E. Add a DNS A record.
Correct Answer:
C. Add an inbound port rule.
E. Add a DNS A record.
Answer Description:
You should add an inbound port rule to the VM. This rule should allow traffic over an HTTP port, which by default is port 80. (For HTTPS, the port is 443.)
You should also add a DNS A record so that the public IP address assigned to the VM can be resolved. The public IP address is dynamic by default, which does not cost any more money. At the domain registrar, you can create a CNAME record that points your website domain name to Azure at [dnsnamelabel].[region].cloudapp.azure.com.
You should not add a VM extension. A VM extension is a small application that performs post-deployment tasks. For example, an extension can automatically install anti-virus software whenever a VM is deployed through script.
You should not add an outbound port rule to the VM. The VM should allow all outbound traffic by default.
You should not assign a public static IP address to the VM. This causes the IP address assigned to it to always remain the same. However, this is not necessary and costs more money. You can use a CNAME record at the domain registrar and a DNS name label in Azure.
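The DNS setup described above (registrar CNAME pointing at the Azure DNS name label, which in turn resolves to the VM's public IP) can be modeled as a tiny resolver. The domain, label, and IP address below are made up purely for illustration:

```python
# Hypothetical records: the registrar maps the custom domain to the Azure
# DNS name label via a CNAME, and Azure's A record resolves that label to
# the VM's current public IP (which may change, since it is dynamic).
CNAME = {"www.dealership.example": "dealership.westus.cloudapp.azure.com"}
A = {"dealership.westus.cloudapp.azure.com": "40.112.0.10"}

def resolve(name: str) -> str:
    """Follow CNAME records until an A record yields an IP address."""
    while name in CNAME:
        name = CNAME[name]
    return A[name]
```

Because clients always chase the CNAME chain at lookup time, the dynamic IP behind the Azure label can change without the registrar record ever being touched.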
References:
Microsoft Docs > Tutorial: Map an existing custom DNS name to Azure Web Apps
Microsoft Docs > Create a virtual machine with a static public IP address using the Azure portal
Microsoft Docs > Network security groups
Microsoft Docs > Virtual machine extensions and features for Windows

Exam Question 84

You are implementing a big data solution that runs on two Azure Virtual Machines (VMs). A VM named model1 is used to train a deep learning algorithm that uses GPU processing. A VM named database1 runs a NoSQL database that requires high disk throughput and IO.
You need to implement the most appropriate VM sizes for these VMs.
Which VM sizes should you implement?

A. Implement a high performance compute VM for model1 and a Dsv3 size VM for database1.
B. Implement a GPU optimized VM for model1 and an Lsv2 size VM for database1.
C. Implement a memory optimized VM for model1 and an Fsv2 size VM for database1.
Correct Answer:
B. Implement a GPU optimized VM for model1 and an Lsv2 size VM for database1.
Answer Description:
Implement a high performance compute VM for model1 and a Dsv3 size VM for database1: This solution does not meet the goal. You can use high performance compute (HPC) VMs for workloads that might use high-throughput network interfaces like remote direct memory access (RDMA), such as genomics, computational chemistry, and financial risk modeling. You can use a Dsv3 size VM for general-purpose workloads with a good CPU-to-memory ratio, like small or medium databases and web servers.
Implement a GPU optimized VM for model1 and an Lsv2 size VM for database1: This solution meets the goal. You can use a GPU optimized VM for model1, which provides access to GPU hardware to train the deep learning algorithm. You can use an Lsv2 size VM, which is a storage optimized VM with high disk throughput and IO. This is ideal for big data solutions, NoSQL databases, data warehousing, and large transactional databases.
Implement a memory optimized VM for model1 and an Fsv2 size VM for database1: This solution does not meet the goal. You can use memory optimized VMs for workloads that require a high memory-to-CPU ratio, such as medium to large caching solutions like Redis and in-memory analytics. You can use an Fsv2 size VM for compute optimized workloads with a high CPU-to-memory ratio, like network appliances, batch processes, and application servers.
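The sizing guidance above boils down to matching a workload profile to a VM family. A sketch with a hypothetical, deliberately simplified lookup table (the real Azure size catalog is much larger):

```python
# Illustrative mapping of workload profile to VM family, following the
# guidance above; this table is a simplification, not an exhaustive catalog.
VM_FAMILY = {
    "gpu-training": "GPU optimized (e.g. NC-series)",
    "high-disk-throughput": "Storage optimized (e.g. Lsv2)",
    "general-purpose": "General purpose (e.g. Dsv3)",
    "in-memory-cache": "Memory optimized (e.g. Esv3)",
    "cpu-bound-batch": "Compute optimized (e.g. Fsv2)",
}

def pick_family(workload: str) -> str:
    """Look up the VM family recommended for a given workload profile."""
    return VM_FAMILY[workload]
```

Under this table, model1 ("gpu-training") lands on a GPU optimized size and database1 ("high-disk-throughput") on a storage optimized Lsv2, matching the correct answer.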
References:
Microsoft Docs > Sizes for virtual machines in Azure
Microsoft Docs > Memory optimized virtual machine sizes
Microsoft Docs > Compute optimized virtual machine sizes

Exam Question 85

You manage an Azure subscription for your company.
The subscription has one hundred Azure virtual machines (VMs) that run different workloads.
You need to identify underutilized VMs and suggest a less expensive service tier for these VMs.
What should you use?

A. Azure Monitor
B. Azure Log Analytics
C. Azure Advisor
D. Application Insights
Correct Answer:
C. Azure Advisor
Answer Description:
You should use Azure Advisor to identify underutilized VMs. You can use Azure Advisor to display personalized recommendations for your subscription. These recommendations are divided into five different categories. The Cost category includes recommendations on how to optimize VM costs by resizing or shutting down underutilized instances. Azure Advisor uses multiple metrics to identify underutilized VMs and suggests the most appropriate service tier for each workload.
You should not use Application Insights to identify underutilized VMs. You can use Application Insights as an Application Performance Management (APM) platform to monitor the applications to give visibility about performance anomalies, unhandled exceptions, and how users behave when using the applications.
You should not use Azure Monitor to identify underutilized VMs. Azure Monitor is a complete monitoring service that centralizes performance and availability by monitoring applications and services with the use of metrics and logs. You can use Azure Monitor to aggregate multiple metrics, like CPU usage percentage, network utilization, and others, to determine if a VM is underutilized. However, you need to adjust which metrics to use based on the VM workload and determine manually the most appropriate service tier to use.
You should not use Azure Log Analytics to identify underutilized VMs. Log Analytics is a tool in the Azure portal for writing log queries and analyzing their results. You can write a log query to calculate and correlate performance records and identify underutilized VMs. However, you need to write different queries based on the VM workload and determine manually the most appropriate service tier to use.
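The kind of heuristic Advisor applies can be approximated: flag VMs whose average CPU stays below a low threshold. This is a deliberately simplified sketch; Advisor's real rules evaluate multiple metrics over a multi-day window, and the 5% threshold here is hypothetical:

```python
def underutilized(vm_cpu_averages: dict[str, float],
                  threshold: float = 5.0) -> list[str]:
    """Return the names of VMs whose average CPU percentage is below the
    threshold, sorted for stable output."""
    return sorted(vm for vm, cpu in vm_cpu_averages.items() if cpu < threshold)
```

The point of the question is that Advisor does this analysis, and the tier recommendation, for you, whereas Monitor and Log Analytics would require you to build the equivalent logic per workload yourself.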
References:
Microsoft Docs > Introduction to Azure Advisor
Microsoft Docs > Reduce service costs using Azure Advisor
Microsoft Docs > What is Application Insights?
Microsoft Docs > Azure Monitor overview
Microsoft Docs > Overview of log queries in Azure Monitor

Exam Question 86

You are the solution architect for an IT company. Your company has a solution that is provisioned in your customer’s Azure subscription. You build a monitoring dashboard that uses Azure Monitor Workbooks to monitor the provisioned solution.
You need to evaluate how to publish and secure the dashboard in the customer’s subscription.
Choose all that apply:

A. You can publish the dashboard template in the customer subscription’s gallery template.
B. You can customize the dashboard by saving the template as a shared report.
C. You can use role-based access control (RBAC) to limit access to the workbook templates.
Correct Answer:
A. You can publish the dashboard template in the customer subscription’s gallery template.
B. You can customize the dashboard by saving the template as a shared report.
C. You can use role-based access control (RBAC) to limit access to the workbook templates.
Answer Description:
You can publish the dashboard template in the customer subscription’s gallery template. After you design the dashboard, you can export the template using the gallery template in the advanced editor. You can combine the exported template with an Azure Resource Manager (ARM) template and deploy it in the customer’s subscription.
You can customize the dashboard by saving the template as a shared report. You can create a custom dashboard based on a workbook template and save it as a shared report, so that other users can use this custom dashboard, or save it as a private report.
You can use RBAC to limit access to the workbook templates. Deploying a workbook template in a resource group creates an Azure resource. You can assign an RBAC role at the resource group or resource level to limit access to the report template.
References:
Microsoft Docs > Azure Monitor Workbooks
Microsoft Docs > Programmatically manage workbooks
Microsoft Docs > Access control

Exam Question 87

Your team is using role-based access control (RBAC) to manage access to Azure resources.
You need to programmatically retrieve the team’s most recent 100 events.
Which cmdlet should you use?

A. Get-AzMetric
B. Get-AzLog
C. Get-AzLogProfile
D. Get-AzDiagnosticSetting
Correct Answer:
B. Get-AzLog
Answer Description:
You should use the Get-AzLog cmdlet with the MaxRecord parameter to retrieve the last 100 events. You can also filter the events by start and end time and display detailed information.
You should not use the Get-AzLogProfile cmdlet to retrieve the last 100 events. This cmdlet is used for retrieving information about the log profile.
You should not use the Get-AzMetric cmdlet to retrieve the last 100 events. This cmdlet is used for retrieving information about all metrics values connected to a specified resource.
You should not use the Get-AzDiagnosticSetting cmdlet to retrieve the last 100 events. This cmdlet gets the categories and time grains that are logged for a resource. A time grain is the aggregation interval of a metric.
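Get-AzLog itself is a PowerShell cmdlet (e.g. `Get-AzLog -MaxRecord 100`). As a language-neutral illustration, the "newest N records" selection it performs can be sketched in Python; the sample events below are synthetic:

```python
def most_recent(events: list[dict], max_record: int = 100) -> list[dict]:
    """Return the newest `max_record` events, newest first, mirroring what
    a -MaxRecord style parameter limits a log query to."""
    return sorted(events, key=lambda e: e["timestamp"], reverse=True)[:max_record]
```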
References:
Microsoft Docs > Get-AzLog
Microsoft Docs > Get-AzLogProfile
Microsoft Docs > Get-AzMetric
Microsoft Docs > Get-AzDiagnosticSetting

Exam Question 88

You have an ASP.Net Core application running in a Windows App Service.
The application generates log messages that should be stored for at least one week.
You need to enable diagnostics logging and store only logs with a severity level of Warning or higher.
How should you configure the diagnostics logging? To answer, select the appropriate options from the drop-down menus.

Diagnostic Logging:

  • Application Logging (Blob)
  • Application Logging (Filesystem)
  • Detailed Error Messages
  • Web server logging (Storage)

Severity level:

  • Error
  • Information
  • Verbose
  • Warning

Correct Answer:
Diagnostic Logging: Application Logging (Blob)
Severity level: Warning
Answer Description:
You should enable the Application Logging (Blob) diagnostics logging. This setting stores logs generated by the application in Blob storage, where they can be retained for more than one week.
You should also configure the severity level to Warning. You should use the Warning severity level to store only Warning, Error, and Critical log messages.
You should not enable the Application Logging (Filesystem) diagnostics logging. This setting saves the application log directly in the App Service filesystem. This option should be used only for debugging purposes because it is enabled for only 12 hours before turning itself off.
You should not enable the Detailed Error Messages diagnostics logging. This setting stores detailed error pages in HTML format that are otherwise hidden from clients using the application.
You should not enable the Web server logging (Storage) diagnostics logging. This setting can store raw HTTP request data from the webserver in a Blob Storage. You can use this setting in Windows App Services only.
You should not configure the severity level to Error. This severity level stores Error and Critical log messages. However, log messages with the Warning severity level will not be stored.
You should not configure the severity level to Information or Verbose. These severity levels store Warning, Error, and Critical log messages. However, they also store Info log messages for Information level, and also Trace for Verbose level, storing more log messages than necessary by the requirements.
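The severity levels form an ordered threshold: choosing Warning keeps Warning, Error, and Critical messages and drops Information and Trace. A sketch of that filtering rule (the ordering reflects the behavior described above):

```python
# Ordered from most to least verbose; picking a configured level keeps
# messages at that level and everything more severe.
LEVELS = ["Verbose", "Information", "Warning", "Error", "Critical"]

def kept(message_level: str, configured_level: str) -> bool:
    """Return True if a message at message_level is stored when diagnostics
    logging is configured at configured_level."""
    return LEVELS.index(message_level) >= LEVELS.index(configured_level)
```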
References:
Microsoft Docs > Enable diagnostics logging for apps in Azure App Service

Exam Question 89

You are enabling diagnostics logging for the App Services below:
App1 – an ASP.Net Core application running on the Windows platform
App2 – a Node.js application running on the Linux platform
You need to determine which diagnostics logging setting could be enabled for each application.
Choose all that apply:

A. You can enable Detailed Error Messages and Failed request tracing, and store these logs directly to a Blob Storage in App1.
B. You can enable Application logging to store application logs at the App Service filesystem in App2.
C. You can enable Web server logging to store HTTP request data log messages in App2.
Correct Answer:
B. You can enable Application logging to store application logs at the App Service filesystem in App2.
Answer Description:
You cannot enable Detailed Error Messages and Failed request tracing and store these logs directly in Blob storage for App1. These diagnostics logs can only be stored in the App Service filesystem.
You can enable Application logging to store application logs in the App Service filesystem for App2. You can enable Application logging in the App Service filesystem on both the Windows and Linux platforms. You can also store Application logging directly in Blob storage, but only on the Windows platform.
You cannot enable Web server logging to store HTTP request data log messages for App2. You can only enable Web server logging on the Windows platform. This setting stores the IIS server logs in the App Service filesystem using the W3C extended log file format.
References:
Microsoft Docs > Enable diagnostics logging for apps in Azure App Service
Microsoft Docs > Enable and Configure App Service Application Logging

Exam Question 90

You have two Azure Virtual Machines (VMs) and three Storage accounts provisioned in an Azure subscription. The subscription configuration is shown in the exhibit.

Resource Group    Region
rg1               Central US
rg2               East US

Azure VM    Operating System       Resource Group    Region
vm1         Ubuntu 20.04 LTS       rg1               Central US
vm2         Windows Server 2019    rg2               East US

Storage Account    Type                       Replication                        Resource Group    Region
storage1           Premium storage account    Locally-redundant storage (LRS)    rg1               Central US
storage2           Storage account v1         Locally-redundant storage (LRS)    rg2               East US
storage3           Storage account v2         Geo-redundant storage (GRS)        rg1               Central US

You need to enable boot diagnostics in the Azure VMs using the Storage accounts available.
Which Storage accounts should you use?

Enable boot diagnostics in vm1 by using:

  • storage1 only
  • storage1 or storage2 only
  • storage1, storage2, or storage3
  • storage2 only
  • storage2 or storage3 only
  • storage3 only

Enable boot diagnostics in vm2 by using:

  • storage1 only
  • storage1 or storage2 only
  • storage1, storage2, or storage3
  • storage2 only
  • storage2 or storage3 only
  • storage3 only

Correct Answer:
Enable boot diagnostics in vm1 by using: storage3 only
Enable boot diagnostics in vm2 by using: storage2 only
Answer Description:
You should enable boot diagnostics on vm1 by using storage3 only. You can use a standard storage account v2 in the same region as the Azure VM is provisioned, which is storage3 for vm1. You can use Geo-redundant storage (GRS) or Read-Access Geo-redundant storage (RA-GRS) replication in the storage account to provide additional redundancy.
You should enable boot diagnostics on vm2 by using storage2 only. You can use a standard storage account v1 in the same region as the Azure VM is provisioned, which is storage2 for vm2.
You should not enable boot diagnostics by using storage1. Boot diagnostics does not support premium storage accounts, even though storage1 uses Locally-redundant storage (LRS) replication and is provisioned in the same Azure region as one of the VMs. Using premium storage might result in the StorageAccountTypeNotSupported error when you start the VM.
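The selection rule in this question can be expressed directly: an eligible storage account must be a standard (non-premium) account in the same region as the VM. A sketch using the accounts from the exhibit:

```python
def eligible_accounts(vm_region: str, accounts: list[dict]) -> list[str]:
    """Standard (non-premium) storage accounts in the VM's own region
    qualify as boot diagnostics targets."""
    return [a["name"] for a in accounts
            if a["region"] == vm_region and not a["premium"]]

# The three accounts from the exhibit.
ACCOUNTS = [
    {"name": "storage1", "region": "Central US", "premium": True},
    {"name": "storage2", "region": "East US",    "premium": False},
    {"name": "storage3", "region": "Central US", "premium": False},
]
```

Applying the rule for vm1 (Central US) leaves only storage3, and for vm2 (East US) only storage2, matching the correct answer.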
References:
Microsoft Docs > How to use boot diagnostics to troubleshoot virtual machines in Azure
Microsoft Docs > Create diagnostic settings to send platform logs and metrics to different destinations