AZ-900 Microsoft Azure Fundamentals Exam Questions and Answers – Page 5 Part 2

The latest Microsoft AZ-900 Azure Fundamentals practice exam questions and answers (Q&A) are available free to help you prepare for the Microsoft AZ-900 exam and earn the Microsoft Azure Fundamentals certification.

Question 471

Your company has purchased a subscription to Microsoft Azure. Microsoft Azure uses a consumption-based model. Which of the following are benefits of the consumption-based model? (Choose three.)

*A. No need to purchase and manage infrastructure.
*B. Pay for additional resources if and when needed.
*C. No upfront cost.
D. Resources that are purchased but not used are credited back.

Explanation

Cloud service providers such as Microsoft Azure operate on a consumption-based model. With Azure’s pay-as-you-go pricing, the consumer only pays for the resources that they use.

Some of the benefits of the consumption-based model are:

  • No upfront costs
  • No need to purchase and manage infrastructure that may or may not be fully used
  • Pay for additional resources if and when needed.

With a consumption-based model, you do not have to purchase services that you do not need or may not use. Since the consumption-based model is on a pay-as-you-go basis, there is no need to issue credit for resources not used.

Question 472

You have been asked to consolidate on-premises file shares that support a line-of-business application for the marketing department into Azure Files. Part of this project includes using Azure File Sync to ensure that an on-premises Windows Server 2016 server maintains a cache of the Azure file share.

What is the correct sequence of steps required to deploy the Azure File Sync service to meet the requirement?

Unordered Choices:

  • Register the Windows 2016 server and create a server endpoint
  • Install the Azure File Sync agent on Windows 2016
  • Sync to the specified Azure file share
  • Create a sync group
  • Create a Storage Sync Service

Answer:

Correct Order:

  1. Create a Storage Sync Service
  2. Create a sync group
  3. Install the Azure File Sync agent on Windows 2016
  4. Register the Windows 2016 server and create a server endpoint
  5. Sync to the specified Azure file share

Explanation

You should perform the following steps:

  1. Create a Storage Sync Service.
  2. Create a sync group.
  3. Install and configure the Azure File Sync agent on the Windows Server 2016 server.
  4. Register the Windows 2016 server and create a server endpoint.
  5. Sync to the specified Azure file share.

To start the deployment of Azure File Sync, you first place a Storage Sync Service resource into a resource group of your subscription. You can deploy a Storage Sync Service by selecting Azure File Sync in the Azure portal.

You should then create a sync group. This group specifies the sync topology for a set of files.

You should then install and configure the Azure File Sync agent on the server with the full data set. The Storage Sync Service must be deployed before configuring the Azure File Sync agent.

After the Azure File Sync agent has been configured, you will need to register the server and create a server endpoint on the share. The registration establishes a trust relationship between your server or cluster and the Storage Sync Service. A server can be registered with only one Storage Sync Service, and it can only sync with other servers and Azure file shares that use that same Storage Sync Service. A sync group must contain one cloud endpoint. You can create multiple sync groups to adhere to your desired sync topology.

You should then allow the synchronization to upload all files to the Azure file share.

After the upload is complete, if you want to have the Azure file share on other servers, you will need to install the Azure File Sync agent on those servers and create file shares on those servers.
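The sketch below, written in Python, summarizes this ordering. The functions are hypothetical stand-ins for whatever tooling you use (the Azure portal, ARM templates, or the Azure SDK), not real SDK calls; only the sequence of steps and the dependencies between them are the point.

```python
# Hypothetical stand-ins for portal/SDK operations; only the ordering matters here.

def create_storage_sync_service(name: str) -> str:
    # Step 1: a Storage Sync Service must exist before anything else.
    print(f"1. Created Storage Sync Service '{name}'")
    return name

def create_sync_group(service: str, group: str, file_share: str) -> str:
    # Step 2: a sync group defines the sync topology and must contain one
    # cloud endpoint (the Azure file share).
    print(f"2. Created sync group '{group}' in '{service}' with cloud endpoint '{file_share}'")
    return group

def install_file_sync_agent(server: str) -> None:
    # Step 3: the agent is installed on the on-premises server that holds the full data set.
    print(f"3. Installed Azure File Sync agent on '{server}'")

def register_server_and_create_endpoint(service: str, group: str, server: str, path: str) -> None:
    # Step 4: registration establishes trust between the server and the Storage
    # Sync Service; only then can the server path be added as a server endpoint.
    print(f"4. Registered '{server}' with '{service}'; added endpoint '{path}' to '{group}'")

if __name__ == "__main__":
    service = create_storage_sync_service("nutex-sync-service")
    group = create_sync_group(service, "marketing-sync-group", "marketing-share")
    install_file_sync_agent("onprem-win2016")
    register_server_and_create_endpoint(service, group, "onprem-win2016", r"D:\MarketingShare")
    # Step 5: synchronization then uploads the on-premises files to the Azure
    # file share automatically; no separate call is required.
```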

Question 473

Your company’s Chief Financial Officer wants tighter control over spending on the cloud infrastructure.

She wants a tool to estimate the up-front costs associated with the Azure cloud. An associate recommends that she use Azure Cost Management.

Will this solution meet the CFO’s needs?

*A. No
B. Yes

Explanation

Azure Cost Management does not estimate your up-front cloud costs. The Azure Pricing Calculator is a tool that can perform that task.

Azure Cost Management is part of Cost Management + Billing, a suite of tools that helps you analyze, manage, and optimize your workload costs. You can use these tools to perform the following tasks:

  • Streamline bill-paying tasks
  • Manage costs
  • Download cost and usage data from your invoice
  • Analyze monthly costs
  • Limit spending through the use of thresholds
  • Find opportunities for changes in workloads that can reduce spending

Question 474

The Nutex Corporation plans to comply with all the privacy, compliance, and data protection standards. You are asked to investigate the security, compliance, and privacy offerings and commitments from Microsoft.

Which of the following statements about the Azure Trust Center are TRUE? (Choose two.)

*A. Customers, including controllers and processors, who are not GDPR-compliant can be fined up to 4% of their annual global turnover or €20 million.
B. Azure Sentinel is a compliance management tool available with Trust Center.
C. Azure Trust Center is built on the three foundational principles of trust.
*D. Azure is K-ISMS certified.

Explanation

Azure is K-ISMS certified. Customers, including controllers and processors, who are not GDPR-compliant can be fined up to 4% of their annual global turnover or €20 million.

K-ISMS certification is designed to ensure the security and privacy of data in the Korean region. Azure meets the latest K-ISMS compliance requirements.

The EU General Data Protection Regulation (GDPR) was developed to harmonize data privacy laws across Europe. It replaces Data Protection Directive 95/46/EC and differs in several significant ways, such as:

  • Broader jurisdiction
  • Larger fines
  • Consent must be requested in a clear and easily accessible manner
  • Breach notifications are mandatory and must be made within 72 hours of becoming aware of a breach
  • Expanded privacy rights for data subjects

Azure Trust Center is built on four, not three, foundational principles of trust:

  1. Security (keeping customers’ data secure)
  2. Privacy (customers are in control of their data)
  3. Compliance (a comprehensive list of compliance offerings and solutions)
  4. Transparency (being transparent about how Microsoft uses customers’ data)

Azure Sentinel is not a compliance management tool available with Trust Center. Azure Sentinel is a tool that provides intelligent security analytics. The data for this analysis tool is stored in an Azure Monitor Log Analytics workspace. Azure Sentinel collects data at cloud scale, finds uncovered threats, minimizes false positives using analytics and threat intelligence, investigates threats, and responds to incidents rapidly with built-in orchestration and automation of common tasks.

Question 475

Josephine must ensure the network peering implementation for Nutex VNet VMs utilizes bandwidth most effectively. Nutex_VNetA currently hosts a SQL Server VM that provides databases for DevWeb_VNetB and ProdWeb_VNetC, which are in turn being used by on-premises users.

The development platforms running in DevWeb_VNetB do not need to communicate with the production environments running in ProdWeb_VNetC.

How should Josephine set up the network topology to most effectively use these resources? (Choose two.)

A. She should implement network peering between DevWeb_VNetB and ProdWeb_VNetC
*B. She should implement network peering between Nutex_VNetA and DevWeb_VNetB
C. She should implement network peering between Nutex_VNetA and the on-premises subnet
*D. She should implement network peering between Nutex_VNetA and ProdWeb_VNetC

Explanation

Josephine should implement network peering between Nutex_VNetA and DevWeb_VNetB, and between Nutex_VNetA and ProdWeb_VNetC.

Virtual network peering is a feature of Azure that enables you to seamlessly connect two Azure virtual networks so that the virtual networks appear as one for connectivity purposes.

She should not configure peering between the DevWeb_VNetB and ProdWeb_VNetC because they do not communicate with each other.

She cannot set up network peering with Nutex_VNetA and the on-premises subnet because network peering is only implemented between two VNets.

Peering Nutex_VNetA with DevWeb_VNetB and Nutex_VNetA with ProdWeb_VNetC is effective as a hub-and-spoke topology using network peering and service chaining.
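A minimal sketch of this hub-and-spoke peering, assuming the azure-identity and azure-mgmt-network Python SDKs, might look like the following. The subscription ID, resource group, and peering options are illustrative placeholders, not the actual Nutex environment.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"
resource_group = "nutex-rg"
network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

def vnet_id(vnet_name: str) -> str:
    # Build the full resource ID that a peering uses to reference the remote VNet.
    return (f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
            f"/providers/Microsoft.Network/virtualNetworks/{vnet_name}")

def peer(local_vnet: str, remote_vnet: str) -> None:
    # Peering is directional, so each VNet pair needs a peering object on both sides.
    network_client.virtual_network_peerings.begin_create_or_update(
        resource_group,
        local_vnet,
        f"{local_vnet}-to-{remote_vnet}",
        {
            "remote_virtual_network": {"id": vnet_id(remote_vnet)},
            "allow_virtual_network_access": True,
            "allow_forwarded_traffic": True,
        },
    ).result()

# Hub-and-spoke: the hub (Nutex_VNetA) peers with each spoke, and the spokes
# are deliberately NOT peered with each other.
for spoke in ("DevWeb_VNetB", "ProdWeb_VNetC"):
    peer("Nutex_VNetA", spoke)
    peer(spoke, "Nutex_VNetA")
```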

Question 476

You are an administrator for the Nutex Corporation. You must implement and manage virtual networking and configure endpoints on subnets to improve security for your Azure resources.

Which kind of information do you need if you want to allow traffic from the on-premises network through ExpressRoute for public peering or Microsoft peering?

A. NAT private IP addresses
*B. NAT public IP addresses

Explanation

You would choose network address translation (NAT) public IP addresses. Azure service resources secured to virtual networks cannot be accessed from on-premises networks by default. To allow traffic from on-premises, you must also allow public IP addresses from your on-premises network through ExpressRoute. These IP addresses must be added through the IP firewall configuration for Azure service resources.

You would not use NAT private IP addresses. If you use public peering or Microsoft peering from your on-premises network through ExpressRoute, you need to identify the NAT public IP addresses in use. Each ExpressRoute circuit uses two NAT IP addresses for public peering by default, and these addresses are applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone.

The NAT IP addresses are provided either by the customer or by the service provider for Microsoft peering. You must allow these public IP addresses in the resource IP firewall setting to allow access to your service resources.
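As a sketch of that firewall change, assuming the azure-mgmt-storage Python SDK and a storage account as the secured service resource, the on-premises NAT public IP range could be allowed like this. The subscription, resource group, account name, and 203.0.113.0/24 range are illustrative placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

storage_client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

storage_client.storage_accounts.update(
    "nutex-rg",
    "nutexstorageacct",
    {
        "network_rule_set": {
            # Deny everything by default, then allow only the NAT public IPs
            # that your ExpressRoute public/Microsoft peering traffic uses.
            "default_action": "Deny",
            "ip_rules": [
                {"ip_address_or_range": "203.0.113.0/24", "action": "Allow"},
            ],
        }
    },
)
```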

Question 477

The Nutex Corporation wants you to get detailed reports for costs incurred to host and deliver apps on Azure. You want to use the Azure Cost Management feature to get reports for expenses incurred by the services.

Match the resource provider namespace for a resource type on the left with the Azure service that uses the resource provider namespace on the right.

Resource Provider Namespace:

  • Microsoft.SecurityInsights
  • Microsoft.Compute
  • Microsoft.OffAzure
  • Microsoft.Consumption
  • Microsoft.Visualstudio
  • Microsoft.AlertsManagement

Azure Service:

  • Azure Monitor
  • Azure Cost Management
  • Azure Migrate
  • Azure Sentinel
  • Azure DevOps
  • Azure Virtual Machine Scale Sets

Answer:

  • Azure Monitor: Microsoft.AlertsManagement
  • Azure Cost Management: Microsoft.Consumption
  • Azure Migrate: Microsoft.OffAzure
  • Azure Sentinel: Microsoft.SecurityInsights
  • Azure DevOps: Microsoft.Visualstudio
  • Azure Virtual Machine Scale Sets: Microsoft.Compute

Explanation

You would map the resource provider namespaces to the Azure services that use them as shown in the answer above.

Azure services use Azure Resource Providers to choose the type of resource required to perform the service. The name of a resource type is in the format: {resource-provider}/{resource-type}. For example, the resource type for a key vault is Microsoft.KeyVault/vaults.

You can monitor the usage costs with the Cost Management feature and get data for the cost incurred per resource provider. This feature helps you find out the cost associated with using the service that requires the resource provider.

To monitor usage costs by resource provider, open the Cost Management + Billing hub in the Azure portal, click Cost Management, select the scope, specify the time interval, add a Resource type filter, and specify the resource types.
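The short Python snippet below illustrates the {resource-provider}/{resource-type} naming format, using the namespace-to-service mapping from this question; the example resource types passed in are only for illustration.

```python
# Map each resource provider namespace to the Azure service from this question.
SERVICE_BY_NAMESPACE = {
    "Microsoft.AlertsManagement": "Azure Monitor",
    "Microsoft.Consumption": "Azure Cost Management",
    "Microsoft.OffAzure": "Azure Migrate",
    "Microsoft.SecurityInsights": "Azure Sentinel",
    "Microsoft.Visualstudio": "Azure DevOps",
    "Microsoft.Compute": "Azure Virtual Machine Scale Sets",
}

def service_for_resource_type(resource_type: str) -> str:
    # A resource type such as "Microsoft.KeyVault/vaults" starts with the
    # resource provider namespace, followed by the type within that provider.
    namespace, _, _ = resource_type.partition("/")
    return SERVICE_BY_NAMESPACE.get(namespace, f"unknown service for {namespace}")

print(service_for_resource_type("Microsoft.Consumption/budgets"))              # Azure Cost Management
print(service_for_resource_type("Microsoft.Compute/virtualMachineScaleSets"))  # Azure Virtual Machine Scale Sets
```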

Question 478

The Nutex Corporation plans to add a large amount of data from a company that it purchased. It plans on using Azure Blob storage.

Match the Attribute or Setting for Azure Blob storage with its appropriate description.

Description:

  • A virtual file system driver that accesses block blob data in the Storage account through the Linux file system
  • A service used to transfer on-premises data to Blob storage when large datasets or network constraints make it impractical to upload the data over the wire
  • An Access tier that stores blob data that is not accessed frequently and is stored for at least 30 days
  • A command-line tool for Windows and Linux to copy data to and from Blob storage, across containers, or across storage accounts
  • A type of blob that stores VHD files and serves as disks for Azure virtual machines

Attribute/Setting:

  • Blobfuse
  • Azure Data Box
  • Page
  • Cool
  • AzCopy

Answer:

  • Blobfuse: A virtual file system driver that accesses block blob data in the Storage account through the Linux file system
  • Azure Data Box: A service used to transfer on-premises data to Blob storage when large datasets or network constraints make it impractical to upload the data over the wire
  • Page: A type of blob that stores VHD files and serves as disks for Azure virtual machines
  • Cool: An Access tier that stores blob data that is not accessed frequently and is stored for at least 30 days
  • AzCopy: A command-line tool for Windows and Linux to copy data to and from Blob storage, across containers, or across storage accounts

Explanation

You should map the attributes/settings of Azure Blob Storage to their descriptions as shown in the answer above.

Blobfuse is a virtual file system driver for Azure Blob storage. You can use Blobfuse to access your existing block blob data in your Storage account through the Linux file system. Blobfuse can be installed on Ubuntu 14.04, 16.04, and 18.04 editions.

Azure Data Box transfers on-premises data to Blob storage when large datasets or network constraints make uploading data over the wire unrealistic. One of Azure Data Box Disk, Azure Data Box, or Azure Data Box Heavy devices from Microsoft can be used, depending on the size of data to be transferred. You can then copy your data to those devices and ship them back to Microsoft to be uploaded into Blob storage.

The three types of Blobs in Azure Blob Storage are Block, Append, and Page. Block blobs store text and binary data, up to about 4.7 TB. Block blobs are made up of blocks of data that can be managed individually. Append blobs are made up of blocks like block blobs but are optimized for append operations. Page blobs store random access files up to 8 TB in size. Page blobs store virtual hard drive (VHD) files and serve as disks for virtual machines.

The three Access tiers available with Azure Blob storage are Hot, Cool, and Archive. Hot is optimized for storing data that is accessed frequently. Cool is optimized for storing data that is infrequently accessed and stored for at least 30 days. Archive is optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements.
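As an illustration of the Cool tier, the sketch below uses the azure-storage-blob Python SDK to upload a block blob directly into the Cool access tier. The connection string, container, blob, and file names are placeholders.

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
blob = service.get_blob_client(container="archive-data", blob="reports/2023-q4.csv")

with open("2023-q4.csv", "rb") as data:
    # Block blobs hold text/binary data; the Cool tier suits data that is
    # accessed infrequently and kept for at least 30 days.
    blob.upload_blob(data, overwrite=True, standard_blob_tier="Cool")
```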

AzCopy is a command-line tool that copies data to and from Blob storage, across containers, or across storage accounts. AzCopy executable files are available for Windows, Linux, and macOS computers.

Question 479

Verigon Corporation is an industrial HVAC vendor. Their systems rely on many Azure services, including Azure Blob Storage and Azure IoT Hub. They take advantage of the Azure Functions serverless environment. Verigon would like to automate some steps to be taken when a rare occurrence is detected, such as an unplanned major increase in temperature.

What Azure service would best allow Azure Functions to react to such status change incidents?

*A. Azure Event Grid
B. Azure Service Bus
C. Azure Event Hub
D. Azure Data Factory
E. Azure Kubernetes

Explanation

Azure Event Grid would be the service for Verigon to use. Azure Event Grid serves as a fully-managed event routing service. It can raise events from almost any source (such as IoT hub) and route them anywhere (such as Azure Functions). It is intended for reactive programming to discrete events, such as status change. An event is the smallest amount of information that describes something that happened. It has information that is only relevant to that type of event. Event Grid offers durable delivery, meaning that if an event is not acknowledged by the endpoint, it will retry.
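As a sketch of that pattern, assuming the azure-eventgrid Python SDK and a custom Event Grid topic with an Azure Function subscription, a monitoring process could publish a discrete status-change event like this. The endpoint, access key, and event schema are illustrative, not Verigon's actual implementation.

```python
from azure.core.credentials import AzureKeyCredential
from azure.eventgrid import EventGridPublisherClient, EventGridEvent

# Topic endpoint hostname and access key come from the custom topic in the portal.
client = EventGridPublisherClient(
    "<topic-name>.<region>-1.eventgrid.azure.net",
    AzureKeyCredential("<topic-access-key>"),
)

client.send([
    EventGridEvent(
        subject="hvac/unit-42/temperature",
        event_type="Verigon.HVAC.TemperatureSpike",   # illustrative custom event type
        data={"unitId": "unit-42", "temperatureC": 68.5},
        data_version="1.0",
    )
])
```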

The Azure Service Bus is not the best choice in this scenario. The Azure Service Bus is based on messages. A message is raw data that is to be stored or consumed elsewhere; financial transactions would be a good example. Verigon wants to know about events, not messages. However, the Service Bus can be configured to send events to Event Grid when there are messages in a queue.

Kubernetes does not meet Verigon’s needs in this scenario. The Azure Kubernetes Service allows for the deployment and management of containers. It is not an event-routing service.

The Azure Data Factory service does not apply to the Verigon scenario. It is a cloud-based data integration service to transform data at scale from data stores. It is not an event-routing service.

The Azure Event Hub is a data streaming service intended for millions of events per second. It is designed to ingest a massive volume of data. The scenario does not indicate a need for such speed and transaction processing, as Verigon is looking for rare occurrences.

Question 480

The Nutex Corporation wants to use Azure RBAC to limit the privileges given to some of its Azure users for security reasons.

Which of the following statements about Azure RBAC is NOT true?

A. The Owner role has full access to all resources, including the right to delegate access to other users.
B. Deny assignments block users from performing specific Azure resource actions even if a role assignment grants them access.
C. Up to 5,000 custom roles can be created per Azure AD.
*D. Transferring a subscription to a different Azure AD tenant permanently deletes all role assignments from the source Azure AD tenant and migrates the role assignments to the target Azure AD tenant.

Explanation

Transferring a subscription to a different Azure AD tenant does NOT permanently delete all role assignments from the source Azure AD tenant and migrate the role assignments to the target Azure AD tenant. If you transfer a subscription to another Azure AD tenant, the role assignments in the source tenant are permanently deleted, but they are not migrated to the target tenant. You will need to recreate the role assignments in the target tenant. You also must manually recreate managed identities for Azure resources.

The fundamental built-in roles with Azure RBAC are as follows:

  • The Owner role has full access to all resources, including the right to delegate access to other users.
  • The Contributor role can create and manage all types of Azure resources but cannot grant access to other users.
  • The Reader role can view existing Azure resources.
  • The User Access Administrator role can manage user access to Azure resources.

Up to 5,000 custom roles can be created for each Azure Active Directory. Built-in roles may not always meet all of your specific needs. In such cases, custom roles can be created. Custom roles can be shared across subscriptions and are stored in an Azure Active Directory. For specialized clouds, such as Azure Government, Azure Germany, and Azure China 21Vianet, the limit is 2,000 custom roles. Custom roles can be created using Azure PowerShell, Azure CLI, or the REST API.
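As an illustration, the sketch below shows the general JSON shape of a custom role definition expressed as a Python dict; the role name, actions, and scope are illustrative, and such a definition would then be submitted with Azure PowerShell, the Azure CLI, or the REST API, as noted above.

```python
import json

custom_role = {
    "Name": "Nutex Virtual Machine Operator",          # illustrative role name
    "IsCustom": True,
    "Description": "Can start and restart virtual machines but not create or delete them.",
    "Actions": [
        "Microsoft.Compute/virtualMachines/read",
        "Microsoft.Compute/virtualMachines/start/action",
        "Microsoft.Compute/virtualMachines/restart/action",
    ],
    "NotActions": [],
    "AssignableScopes": [
        "/subscriptions/<subscription-id>"              # illustrative scope
    ],
}

print(json.dumps(custom_role, indent=2))
```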

Deny assignments block users from performing specific actions even if a role assignment grants them access. Like a role assignment, a deny assignment attaches a set of deny actions to a user, group, or service principal at a scope for the purpose of denying access. Deny assignments are created and managed by Azure to protect resources.