
AZ-303 Microsoft Azure Architect Technologies Exam Questions and Answers – Page 2

The latest Microsoft AZ-303 Microsoft Azure Architect Technologies practice exam questions and answers (Q&A) are available free to help you prepare for and pass the AZ-303 exam and earn the Microsoft Azure Architect Technologies certification.

Exam Question 171

You obtain a Docker container image from a third-party source.
You need to push the image to an Azure Container Registry that you created.
What should you do first?

A. Tag the image with the login server.
B. Create a load balancer.
C. Deploy an Azure virtual machine (VM).
D. Assign the Owner role to the Owner security group.
Correct Answer:
A. Tag the image with the login server.
Answer Description:
You should tag the image with the login server name of your registry (for example, <registry-name>.azurecr.io). This is required before you can push the image, because the tag tells Docker which registry to push to.
You should not deploy an Azure VM. A container image runs in a container. It does not require a VM.
You should not assign the Owner role to the appropriate security group. Owner role assignment is not required to deploy a Docker container image.
You should not create a load balancer. A load balancer distributes load to a pool of VMs. This is not required for a Docker container image.
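In practice, the tag-then-push sequence looks like the following sketch. The registry and image names are hypothetical, and the docker commands are commented out because they require Docker and an authenticated session (az acr login):

```shell
# Hypothetical registry and image names for illustration only.
ACR_NAME=registry1
LOGIN_SERVER="${ACR_NAME}.azurecr.io"
# The target tag embeds the login server so Docker knows where to push:
TARGET="${LOGIN_SERVER}/app1:v1"
echo "$TARGET"
# Requires Docker and `az acr login --name registry1`:
# docker tag app1:v1 "$TARGET"
# docker push "$TARGET"
```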
References:
Microsoft Docs > Tutorial: Create an Azure container registry and push a container image
Microsoft Docs > What is Azure Load Balancer?
Microsoft Docs > Azure built-in roles
Microsoft Docs > What is Azure Container Instances?

Exam Question 172

You recently moved a critical production workload to Azure Kubernetes Service (AKS). A second AKS cluster is used for other application workloads.
You want to collect performance metrics directly from the AKS cluster that is used for the critical workloads.
Which four actions should you perform in sequence?

A. 1. Create a Log Analytics workspace if you do not have one. 2. From the Azure portal, enable monitoring for the cluster. 3. Add Azure Monitor for Containers to the workspace. 4. View charts on the Insights page of the AKS cluster.
B. 1. Retrieve entries from the event log. 2. From the Azure portal, enable monitoring for the cluster. 3. Add Azure Monitor for Containers to the workspace. 4. View charts on the Insights page of the AKS cluster.
C. 1. Retrieve entries from the event log. 2. Run a query on the cluster in Log Analytics. 3. Add Azure Monitor for Containers to the workspace. 4. View charts on the Insights page of the AKS cluster.
D. 1. Create a Log Analytics workspace if you do not have one. 2. Run a query on the cluster in Log Analytics. 3. Add Azure Monitor for Containers to the workspace. 4. View charts on the Insights page of the AKS cluster.
Correct Answer:
A. 1. Create a Log Analytics workspace if you do not have one. 2. From the Azure portal, enable monitoring for the cluster. 3. Add Azure Monitor for Containers to the workspace. 4. View charts on the Insights page of the AKS cluster.
Answer Description:
You should perform the following steps in order:

  1. Create a Log Analytics workspace if you do not have one.
  2. From the Azure portal, enable monitoring for the cluster.
  3. Add Azure Monitor for Containers to the workspace.
  4. View charts on the Insights page of the AKS cluster.

You must first create a Log Analytics workspace if you do not already have one. You must then enable monitoring for the target cluster. Next, you add Azure Monitor for Containers to the workspace. This allows the collection of performance data from the nodes in the cluster. Although you can use Azure Monitor to view performance data on all clusters, you can also view this data directly from the cluster.
After you have enabled monitoring for a cluster, you do not need to run queries to see detailed performance data.
Because you are gathering metrics instead of logs, you do not need to retrieve entries from the event log.
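The same sequence can be sketched with the Azure CLI. All resource names below are hypothetical, and the az commands are commented out because they require an Azure subscription and az login:

```shell
# Hypothetical resource names for illustration only.
RG=aksRG
WORKSPACE=aksWorkspace
CLUSTER=criticalCluster
# 1. Create a Log Analytics workspace if you do not have one:
#    az monitor log-analytics workspace create -g "$RG" -n "$WORKSPACE"
# 2-3. Enable monitoring (the Azure Monitor for containers add-on) on the
#      cluster, attaching it to the workspace from step 1:
#    az aks enable-addons -g "$RG" -n "$CLUSTER" --addons monitoring \
#        --workspace-resource-id "<workspace resource ID from step 1>"
# 4. View charts on the Insights page of the AKS cluster in the Azure portal.
echo "$CLUSTER"
```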
References:
Microsoft Docs > How to enable Azure Monitor for containers
Microsoft Docs > Monitor your Kubernetes cluster performance with Azure Monitor for containers

Exam Question 173

You use the following commands to create a container in Azure:
az group create --name app1RG --location eastus
az container create --resource-group app1RG --name app1Container --image company1/app1Image --dns-name-label app1 --ports 80

You need to navigate to the application that is hosted in the container.
Which URL should you use?

A. eastus.app1-azurecontainer.io
B. app1.eastus.azurecontainer.io
C. app1.azurecontainer.eastus.io
D. eastus.azurecontainer.app1.io
Correct Answer:
B. app1.eastus.azurecontainer.io
Answer Description:
You should navigate to app1.eastus.azurecontainer.io. The container application’s URL format is [DNS label].[Azure region].azurecontainer.io.
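The FQDN can be assembled from the deployment values like so (a minimal sketch using the values from the az container create command above):

```shell
# Values taken from the --dns-name-label parameter and the resource group's
# region; azurecontainer.io is the fixed suffix for container instances.
DNS_LABEL=app1
REGION=eastus
FQDN="${DNS_LABEL}.${REGION}.azurecontainer.io"
echo "$FQDN"   # app1.eastus.azurecontainer.io
```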
References:
Microsoft Docs > What is Azure Container Instances?
Microsoft Docs > Quickstart: Deploy a container instance in Azure using the Azure CLI

Exam Question 174

You use the following Azure CLI command to create an Azure container instance:
az container create --resource-group testgroup --name testcontainer --image company1/c1app1
You need to be able to browse to the container’s URL.
Which two parameters must you set?

A. --dns-name-label
B. --environment-variables
C. --ports
D. --os-type
E. --protocol
Correct Answer:
A. --dns-name-label
C. --ports
Answer Description:
You should set the --dns-name-label parameter. This parameter is necessary so that Azure can resolve the DNS name to the IP address that hosts the container instance.
You should set the --ports parameter. This parameter is necessary so that Azure opens the appropriate TCP ports. If the default port (80) is used, it can be omitted.
You do not need to set the --environment-variables parameter. This parameter allows you to set environment variables for container instances, which is unnecessary in this scenario.
You do not need to set the --os-type parameter. This parameter specifies the operating system for the container instance. The type of operating system is irrelevant in this scenario.
You do not need to set the --protocol parameter. This parameter specifies either TCP or UDP. When browsing from a web browser, the protocol is TCP.
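A completed version of the command might look like the following sketch. The DNS label value is hypothetical, and the command is kept as a string because actually running it requires az login:

```shell
# Hypothetical completed command: the original plus --dns-name-label and --ports.
CMD='az container create --resource-group testgroup --name testcontainer --image company1/c1app1 --dns-name-label c1app1 --ports 80'
echo "$CMD"
```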
References:
Microsoft Docs > az container

Exam Question 175

You are deploying a container solution in Azure.
You create a Docker image and add it to an Azure Container Registry named registry1.
You need to deploy the Docker image to Azure Container Instances.
Which command should you run?

A. docker push
B. az acr create
C. az aks create
D. az container create
Correct Answer:
D. az container create
Answer Description:
You should run the az container create command. This command creates a container in Azure Container Instances and deploys the image specified in the --image parameter.
You should not run the az acr create command. This command creates an Azure Container Registry. In this scenario, you already have a container registry named registry1.
You should not run the docker push command. This command pushes an image to a registry. You already pushed the image to registry1.
You should not run the az aks create command. This command creates an Azure Kubernetes Service cluster, and in this scenario you need to create an Azure Container Instance to deploy the image.
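A sketch of such a deployment command, with hypothetical resource and image names and the image referenced through the registry's login server (kept as a string because running it requires az login and credentials for registry1):

```shell
# Hypothetical deployment of an image from registry1 to Azure Container Instances.
CMD='az container create --resource-group app1RG --name app1 --image registry1.azurecr.io/app1:v1'
echo "$CMD"
```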
References:
Microsoft Docs > Quickstart: Deploy a container instance in Azure using the Azure CLI
Microsoft Docs > Tutorial: Create an Azure container registry and push a container image
Microsoft Docs > Quickstart: Deploy an Azure Kubernetes Service cluster using the Azure CLI

Exam Question 176

You are planning to create a new Azure Cosmos DB account for an existing application.
The application runs the following queries:
CREATE KEYSPACE app
WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'datacenter1' : 1 };
CREATE TABLE IF NOT EXISTS app.users (
    user_id int PRIMARY KEY,
    user_name text,
    user_age int,
    user_bcity text
);
SELECT user_name, occupation AS user_occupation FROM app.users WHERE user_age > 40;
You need to create the Cosmos DB account for the application.
Choose all that apply:

A. Create a Cosmos DB account with SQL API.
B. Create a Cosmos DB account with Gremlin API.
C. Create a Cosmos DB account with Cassandra API.
D. Create a Cosmos DB account with MongoDB API.
Correct Answer:
C. Create a Cosmos DB account with Cassandra API.
Answer Description:
You can use the Cassandra API to store data for applications written for Apache Cassandra. Apache Cassandra uses a SQL-like query language named Cassandra Query Language (CQL). Cassandra stores data in tables, where the data schema is defined. Those tables are grouped in a keyspace that defines options common to all the tables, such as the replication strategy to use.
You can use the SQL API to query data with SQL-like statements such as SELECT. However, the SQL API does not support DDL statements like CREATE KEYSPACE or CREATE TABLE to create a container.
You cannot use SQL-like statements to query a graph database. You should use the Gremlin query language to query data from a Cosmos DB Gremlin API graph database:
g.V().hasLabel('users').has('user_age', gt(40))
You cannot use SQL-like statements to query from a document database. You should use MongoDB queries to query data from Cosmos DB MongoDB API:
db.users.find({user_age: {$gt: 40}})
References:
Microsoft Docs > Introduction to the Azure Cosmos DB Cassandra API
Microsoft Docs > Quickstart: Build a Cassandra app with .NET Core and Azure Cosmos DB
Microsoft Docs > Tutorial: Create a Cassandra API account in Azure Cosmos DB by using a Java application to store key/value data
Microsoft Docs > Tutorial: Query data from a Cassandra API account in Azure Cosmos DB
Cassandra > Cassandra Query Language (CQL) > Data definition (DDL) > Data Definition

Exam Question 177

Your company creates an Azure Cosmos DB in Azure portal. The database must be a graph database with the ability to model and traverse relationships between entities in the database.
You need to recommend the appropriate Cosmos DB API to use.
Which API should you use?

A. API for MongoDB
B. Table API
C. Cassandra API
D. SQL API
E. Gremlin API
Correct Answer:
E. Gremlin API
Answer Description:
You should choose the Gremlin API. This is the API that is used to build a graph database. In the Azure portal, the API is identified as Gremlin (graph). Note that Microsoft's Gremlin API quickstart assumes an Azure subscription plus Visual Studio 2017 with the Azure development workload enabled.
The APIs supported by Azure Cosmos DB are:
Azure Cosmos DB’s API for MongoDB – Used when migrating from MongoDB; it supports the MongoDB wire protocol and connections from MongoDB client drivers.
Cassandra API – Used to create a data store for use with apps written for Apache Cassandra with compatibility with existing applications and support for the Cassandra Query Language (CQL).
Gremlin API – Used when creating graph databases for modeling and traversing relationships between entities.
SQL API – Default Cosmos DB API that supports building a non-relational document database that supports SQL syntax queries.
Table API – Provides premium database support for applications written for Azure Table storage.
Of these APIs, only the Gremlin API can be used to build a graph database.
If you need to support multiple APIs, you must create a separate database with a unique account name for each.
References:
Microsoft Docs > Azure Cosmos DB Documentation
Microsoft Docs > Frequently asked questions about different APIs in Azure Cosmos DB
Microsoft Docs > Introduction to Azure Cosmos DB: Gremlin API
Microsoft Docs > Azure Cosmos DB Gremlin graph support
Introduction to Azure Cosmos DB database and the SQL API

Exam Question 178

You plan to create new Azure Cosmos accounts for four Cosmos DB databases.
The databases have the following requirements:

  • db1: A Core (SQL) API multi-region database with multi-region writes and an estimated 700 Request Units (RU/s) of provisioned throughput
  • db2: A MongoDB single-region database, with an estimated 400 RU/s of provisioned throughput
  • db3: A Core (SQL) API single-region database, with an estimated 4500 RU/s of provisioned throughput
  • db4: A MongoDB single-region database, with an estimated 500 RU/s of provisioned throughput

You need to deploy these databases using the minimum number of Cosmos accounts while minimizing the cost.
How many Cosmos accounts should you deploy?

A. One Cosmos account for db1, another for db3, and a third for db2 and db4
B. One Cosmos account for all databases
C. Separate Cosmos accounts for each database
D. One Cosmos account for db1 and db3, and a second one for db2 and db4
Correct Answer:
A. One Cosmos account for db1, another for db3, and a third for db2 and db4
Answer Description:
You should deploy one Cosmos account for db1, another for db3, and a third for db2 and db4. You can deploy two Core (SQL) API Cosmos accounts, one with multi-region and multi-region writes for db1 and another account with single-region for db3. You can deploy db2 and db4 in the same MongoDB API account because they have the same single-region replication configuration. You need to configure the RU/s of provisioned throughput according to the database estimates.
You should not deploy one Cosmos account for all databases. You can select only one API type for a Cosmos account. You cannot use the same Cosmos account for Core (SQL) API and MongoDB API databases.
You should not deploy one Cosmos account for db1 and db3, and a second one for db2 and db4. Although this is a possible configuration, db3 would then be provisioned in a Cosmos account with multi-region writes enabled, which roughly doubles the cost per 100 RU/s per hour and results in a more expensive solution.
You should not deploy one Cosmos account for each database. Although this configuration results in an optimal cost for the solution, you would achieve the same cost by deploying three Cosmos accounts: one Cosmos account for db1, another for db3, and a third for db2 and db4, instead of four Cosmos accounts.
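The cost reasoning can be sketched with back-of-the-envelope arithmetic, using a hypothetical unit price of 1 per 100 RU/s per hour and assuming multi-region writes doubles the per-RU rate (the real Azure prices differ, but the ratio is what matters):

```shell
# Hypothetical unit price per 100 RU/s per hour.
UNIT=1
# Option A: db1 alone with multi-region writes (700 RU/s), db3 alone
# single-region (4500 RU/s), db2 and db4 together (400 + 500 = 900 RU/s):
COST_A=$(( 7*UNIT*2 + 45*UNIT + 9*UNIT ))
# Option D: db1 and db3 share a multi-region-writes account
# (700 + 4500 = 5200 RU/s), db2 and db4 together (900 RU/s):
COST_D=$(( 52*UNIT*2 + 9*UNIT ))
echo "$COST_A $COST_D"   # option A is cheaper than option D
```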
References:
Microsoft Docs > Work with Azure Cosmos account
Microsoft Docs > Manage an Azure Cosmos account
Microsoft Docs > Plan and manage costs for Azure Cosmos DB

Exam Question 179

You have an Azure subscription named Subscription1.
You have two virtual networks on Subscription1:

  • vnet1: Address Space 10.0.0.0/16 in the East US region
  • vnet2: Address Space 10.1.0.0/16 in the Central US region

You create a new Azure Cosmos DB account named sqlaccount1 configured as shown in the exhibit.

You need to determine the network connectivity to sqlaccount1.
Which virtual networks can access sqlaccount1 in the current configuration?

A. vnet1 and vnet2 by using the public endpoint
B. vnet1 and vnet2 by using the private endpoint
C. Only vnet1 by using the private endpoint
D. vnet1 by using the private endpoint and vnet2 by using the public endpoint
Correct Answer:
C. Only vnet1 by using the private endpoint
Answer Description:
You can access sqlaccount1 from only vnet1 by using the private endpoint. You have deployed sqlaccount1 using a private endpoint as the connectivity method. With this method, your Cosmos DB account can only be accessed through a private endpoint, which is configured with vnet1 as shown in the exhibit. You can create a private endpoint with virtual networks in the same region as your Cosmos DB account.
You cannot access sqlaccount1 from the public endpoint. When you create a Cosmos DB account with a private endpoint, the public endpoint is disabled by default and your account receives traffic only from the private endpoint.
You cannot access sqlaccount1 from vnet2. You can only connect through a private endpoint with virtual networks that have previously been configured. You cannot configure a private endpoint with virtual networks in other regions. Instead, you can configure a virtual network peering between vnet1 and vnet2, and access sqlaccount1 from vnet2 through the vnet1 endpoint.
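The peering workaround mentioned above can be sketched as follows. The resource group and peering names are hypothetical, and the commands are commented out because they require az login; note that a peering must be created in both directions for two-way traffic:

```shell
# Hypothetical names for illustration only.
RG=networkRG
PEERING_OUT=vnet2-to-vnet1
PEERING_BACK=vnet1-to-vnet2
# az network vnet peering create -g "$RG" -n "$PEERING_OUT" \
#     --vnet-name vnet2 --remote-vnet vnet1 --allow-vnet-access
# az network vnet peering create -g "$RG" -n "$PEERING_BACK" \
#     --vnet-name vnet1 --remote-vnet vnet2 --allow-vnet-access
echo "$PEERING_OUT $PEERING_BACK"
```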
References:
Microsoft Docs > Configure Azure Private Link for an Azure Cosmos account

Exam Question 180

You have a .NET Core application that stores key-value data in an Azure Table storage named table1.
Users report that the application performance is slow during peak usages. You identify that table1 is the bottleneck.
You need to evaluate the impacts and create a plan to migrate table1 to a Cosmos DB account.
Choose all that apply:

A. You can use the Table API to migrate table1 to Cosmos DB.
B. You need to change the application code to use the Cosmos DB SDK.
C. You can use the AzCopy utility to move data from table1 to Cosmos DB.
Correct Answer:
A. You can use the Table API to migrate table1 to Cosmos DB.
C. You can use the AzCopy utility to move data from table1 to Cosmos DB.
Answer Description:
You can use the Table API to migrate table1 to Cosmos DB. The Table API provides premium capabilities for applications written for Azure Table storage, such as dedicated throughput, guaranteed high availability, and better latency.
You do not need to change the application code to use the Cosmos DB SDK. You can still use the Table storage SDK with the Cosmos DB Table API, although updating to the Cosmos DB SDK is recommended for the best support and improved performance.
You can use the AzCopy utility to move data from table1 to Cosmos DB. You can use the AzCopy utility or the Azure Cosmos DB Data Migration Tool to migrate the data from table1 to Cosmos DB.
References:
Microsoft Docs > Introduction to Azure Cosmos DB: Table API
Microsoft Docs > Frequently asked questions about the Table API in Azure Cosmos DB
Microsoft Docs > Move Azure Table Storage data to Azure Cosmos DB