
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 20

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free and are helpful for passing the AWS Certified Solutions Architect – Associate SAA-C03 exam and earning the certification.

Question 911

Exam Question

A company is running several business applications in three separate VPCs within the us-east-1 Region. The applications must be able to communicate between VPCs. The applications also must be able to consistently send hundreds of gigabytes of data each day to a latency-sensitive application that runs in a single on-premises data center. A solutions architect needs to design a network connectivity solution that maximizes cost-effectiveness.

Which solution meets these requirements?

A. Configure three AWS Site-to-Site VPN connections from the data center to AWS. Establish connectivity by configuring one VPN connection for each VPC.
B. Launch a third-party virtual network appliance in each VPC. Establish an IPsec VPN tunnel between the data center and each virtual appliance.
C. Set up three AWS Direct Connect connections from the data center to a Direct Connect gateway in us-east-1. Establish connectivity by configuring each VPC to use one of the Direct Connect connections.
D. Set up one AWS Direct Connect connection from the data center to AWS. Create a transit gateway, and attach each VPC to the transit gateway. Establish connectivity between the Direct Connect connection and the transit gateway.

Correct Answer

D. Set up one AWS Direct Connect connection from the data center to AWS. Create a transit gateway, and attach each VPC to the transit gateway. Establish connectivity between the Direct Connect connection and the transit gateway.

Explanation

To meet the requirements of enabling communication between VPCs and sending data to a latency-sensitive application in an on-premises data center in a cost-effective manner, the recommended solution is to use AWS Direct Connect with a transit gateway.

Setting up one Direct Connect connection from the data center to AWS allows for a dedicated and reliable network connection. By creating a transit gateway and attaching each VPC to it, you can establish connectivity between the Direct Connect connection and the transit gateway. This allows for efficient and cost-effective communication between the VPCs and the on-premises data center.

Using a transit gateway simplifies network management by centralizing the connectivity and routing between VPCs and on-premises networks. It provides a scalable and efficient solution for interconnecting multiple VPCs and enables consistent and secure communication between them.

Option A is not the best choice as it would require three separate VPN connections, which can be complex to manage and may not provide the desired performance and reliability for the given data volume.

Option B introduces the complexity of deploying and managing third-party virtual network appliances in each VPC, which may not be the most cost-effective solution.

Option C suggests using multiple Direct Connect connections, but it would require additional resources and could result in higher costs compared to using a transit gateway.

Therefore, option D is the most suitable solution for maximizing cost-effectiveness while meeting the requirements.

Question 912

Exam Question

A solutions architect is tasked with transferring 750 TB of data from a network-attached file system located at a branch office to Amazon S3 Glacier. The solution must avoid saturating the branch office’s low-bandwidth internet connection.

What is the MOST cost-effective solution?

A. Create a site-to-site VPN tunnel to an Amazon S3 bucket and transfer the files directly. Create a bucket VPC endpoint.
B. Order 10 AWS Snowball appliances and select an S3 Glacier vault as the destination. Create a bucket policy to enforce a VPC endpoint.
C. Mount the network-attached file system to Amazon S3 and copy the files directly. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.
D. Order 10 AWS Snowball appliances and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.

Correct Answer

D. Order 10 AWS Snowball appliances and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.

Explanation

Given the requirement to transfer a large amount of data (750 TB) from a branch office to Amazon S3 Glacier while avoiding saturating the branch office’s low-bandwidth internet connection, the most cost-effective solution is to use AWS Snowball appliances.

AWS Snowball is a data transfer service that uses physical devices to securely and efficiently transfer large amounts of data to and from AWS. By ordering 10 Snowball appliances, the data can be physically shipped to AWS, bypassing the need for internet bandwidth.

The selected Amazon S3 bucket can serve as the destination for the Snowball transfer. A lifecycle policy can be applied to the S3 objects in the bucket, specifying the transition of the objects to Amazon S3 Glacier. This allows for long-term storage at a lower cost while still maintaining access to the data when needed.
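The lifecycle rule described above can be sketched as the configuration dictionary that boto3’s `put_bucket_lifecycle_configuration` accepts. This is a minimal illustration only; the rule ID and the 0-day transition are assumptions, not values from the question.

```python
# Lifecycle rule transitioning imported objects to S3 Glacier.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-to-glacier",   # illustrative rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},     # apply to all objects in the bucket
            "Transitions": [
                {
                    "Days": 0,            # transition as soon as possible after import
                    "StorageClass": "GLACIER",
                }
            ],
        }
    ]
}

# With boto3 and credentials configured, this would be applied with:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-snowball-landing-bucket",  # hypothetical bucket name
#     LifecycleConfiguration=lifecycle_config,
# )

assert lifecycle_config["Rules"][0]["Transitions"][0]["StorageClass"] == "GLACIER"
```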

Option A suggests creating a site-to-site VPN tunnel and transferring the files directly, which would still rely on the low-bandwidth internet connection and could result in prolonged transfer times and potential saturation of the network.

Option B involves ordering AWS Snowball appliances and selecting an S3 Glacier vault as the destination. However, Snowball imports data into an Amazon S3 bucket, not directly into an S3 Glacier vault, so this option is not viable as written.

Option C suggests mounting the network-attached file system to S3 and copying the files directly. However, this would still rely on the branch office’s low-bandwidth internet connection and may not be the most efficient solution for transferring such a large amount of data.

Therefore, option D with AWS Snowball appliances and an S3 bucket as the destination, along with a lifecycle policy for transitioning to S3 Glacier, is the most cost-effective solution that addresses the given requirements.

Question 913

Exam Question

A company wants to share data that is collected from self-driving cars with the automobile community. The data will be made available from within an Amazon S3 bucket. The company wants to minimize its cost of making this data available to other AWS accounts.

What should a solutions architect do to accomplish this goal?

A. Create an S3 VPC endpoint for the bucket.
B. Configure the S3 bucket to be a Requester Pays bucket.
C. Create an Amazon CloudFront distribution in front of the S3 bucket.
D. Require that the files be accessible only with the use of the BitTorrent protocol.

Correct Answer

B. Configure the S3 bucket to be a Requester Pays bucket.

Explanation

To minimize the cost of making the data available to other AWS accounts, the company should configure the S3 bucket as a Requester Pays bucket. With Requester Pays, the requester (the AWS account accessing the data) is responsible for paying the data transfer and request costs associated with accessing the objects in the bucket.

By enabling Requester Pays, the company can share the data with the automobile community while ensuring that the costs incurred for data transfer and access are borne by the requesters rather than the company itself. This allows the company to minimize its cost of making the data available.
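One practical detail of Requester Pays is that cross-account requesters must explicitly acknowledge the charges on every request. A minimal sketch of the request parameters (bucket and key names are hypothetical):

```python
def requester_pays_get(bucket: str, key: str) -> dict:
    """Build the parameters a cross-account requester must send.

    With Requester Pays enabled, S3 rejects requests that do not
    acknowledge the charges via RequestPayer="requester".
    """
    return {"Bucket": bucket, "Key": key, "RequestPayer": "requester"}

params = requester_pays_get("self-driving-car-data", "telemetry/run-001.json")
# A real download would then be: boto3.client("s3").get_object(**params)
assert params["RequestPayer"] == "requester"
```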

Option A, creating an S3 VPC endpoint for the bucket, is not necessary in this case as the goal is to make the data available to other AWS accounts, not restrict access to the bucket within a specific VPC.

Option C, creating an Amazon CloudFront distribution in front of the S3 bucket, can improve data transfer performance and provide caching capabilities, but it does not directly address the goal of minimizing cost.

Option D, requiring the use of the BitTorrent protocol, is not an appropriate solution as it may introduce additional complexity and limitations for accessing the data, and it does not specifically address cost optimization.

Therefore, the most suitable solution is to configure the S3 bucket as a Requester Pays bucket, ensuring that the costs of data transfer and access are shifted to the requesters.

Question 914

Exam Question

A company has deployed an API in a VPC behind an internet-facing Application Load Balancer (ALB). An application that consumes the API as a client is deployed in a second account in private subnets behind a NAT gateway. When requests to the client application increase, the NAT gateway costs are higher than expected. A solutions architect has configured the ALB to be internal.

Which combination of architectural changes will reduce the NAT gateway costs? (Choose two.)

A. Configure a VPC peering connection between the two VPCs. Access the API using the private address.
B. Configure an AWS Direct Connect connection between the two VPCs. Access the API using the private address.
C. Configure a ClassicLink connection for the API into the client VPC. Access the API using the ClassicLink address.
D. Configure a PrivateLink connection for the API into the client VPC. Access the API using the PrivateLink address.
E. Configure an AWS Resource Access Manager connection between the two accounts. Access the API using the private address.

Correct Answer

A. Configure a VPC peering connection between the two VPCs. Access the API using the private address.
D. Configure a PrivateLink connection for the API into the client VPC. Access the API using the PrivateLink address.

Explanation

To reduce the NAT gateway costs, you can implement the following architectural changes:

A. Configure a VPC peering connection between the two VPCs: By establishing a VPC peering connection, the client application in the second account can directly access the API in the VPC without going through the NAT gateway. This eliminates the need for NAT gateway traffic and reduces the associated costs.

D. Configure a PrivateLink connection for the API into the client VPC: PrivateLink enables you to privately access services hosted on AWS over private network connections. By setting up a PrivateLink connection for the API in the client VPC, the client application can access the API directly without going through the internet or the NAT gateway. This eliminates the need for NAT gateway traffic and reduces the associated costs.

These two solutions allow the client application to access the API using private addresses or PrivateLink addresses, avoiding the need for NAT gateway traffic and reducing costs.

Options B, C, and E are not relevant to reducing the NAT gateway costs:

B. AWS Direct Connect is primarily used for establishing dedicated network connections between on-premises environments and AWS, and it does not directly address NAT gateway costs.

C. ClassicLink allowed EC2-Classic instances to communicate with a VPC; it is deprecated and not applicable to the scenario described.

E. AWS Resource Access Manager enables sharing resources across AWS accounts, but it does not specifically address NAT gateway costs.

Therefore, the most suitable solutions to reduce NAT gateway costs are to configure a VPC peering connection and a PrivateLink connection between the two VPCs.
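To make the cost argument concrete, here is a rough back-of-the-envelope comparison of per-GB data-processing charges on the two paths. The rates and the monthly volume below are illustrative assumptions, not AWS's published prices; check current AWS pricing pages for real figures.

```python
# Illustrative monthly traffic between the client application and the API.
gb_per_month = 10_000

nat_per_gb = 0.045         # assumed NAT gateway data-processing rate ($/GB)
privatelink_per_gb = 0.01  # assumed interface-endpoint processing rate ($/GB)

nat_cost = gb_per_month * nat_per_gb
privatelink_cost = gb_per_month * privatelink_per_gb

# Routing the traffic privately avoids the NAT gateway's per-GB charge.
assert privatelink_cost < nat_cost
print(f"NAT: ${nat_cost:.2f}  PrivateLink: ${privatelink_cost:.2f}")
```

VPC peering is similar: same-Region peering traffic bypasses the NAT gateway entirely, so the per-GB NAT processing charge disappears.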

Question 915

Exam Question

A company is running an application on Amazon EC2 instances hosted in a private subnet of a VPC. The EC2 instances are configured in an Auto Scaling group behind an Elastic Load Balancer (ELB). The EC2 instances use a NAT gateway for outbound internet access. However, the EC2 instances are not able to connect to the public internet to download software updates.

What are the possible root causes of this issue? (Choose two.)

A. The ELB is not configured with a proper health check.
B. The route tables in the VPC are configured incorrectly.
C. The EC2 instances are not associated with an Elastic IP address.
D. The security group attached to the NAT gateway is configured incorrectly.
E. The outbound rules on the security group attached to the EC2 instances are configured incorrectly.

Correct Answer

B. The route tables in the VPC are configured incorrectly.
E. The outbound rules on the security group attached to the EC2 instances are configured incorrectly.

Explanation

The possible root causes of the EC2 instances not being able to connect to the public internet to download software updates are:

B. The route tables in the VPC are configured incorrectly: Route tables control how traffic leaves a subnet. If the route table associated with the private subnet does not have a default route (0.0.0.0/0) that targets the NAT gateway, or if the NAT gateway’s public subnet lacks a route to the internet gateway, the instances will not be able to reach the public internet.

E. The outbound rules on the security group attached to the EC2 instances are configured incorrectly: Security groups act as virtual firewalls for EC2 instances. If the outbound rules of the security group attached to the EC2 instances do not allow outbound traffic to the necessary destination ports (e.g., port 80 for HTTP or port 443 for HTTPS), the instances will not be able to connect to the public internet.
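A correctly configured pair of route tables for this layout can be sketched as follows. The CIDR block and the gateway IDs are hypothetical placeholders.

```python
# Route tables for a private subnet (EC2 instances) and the public
# subnet hosting the NAT gateway. IDs and CIDRs are illustrative.
private_subnet_routes = [
    {"Destination": "10.0.0.0/16", "Target": "local"},      # intra-VPC traffic
    {"Destination": "0.0.0.0/0", "Target": "nat-0abc123"},  # internet via NAT
]
public_subnet_routes = [
    {"Destination": "10.0.0.0/16", "Target": "local"},
    {"Destination": "0.0.0.0/0", "Target": "igw-0def456"},  # NAT's path out
]

def default_target(routes):
    # Return the target of the 0.0.0.0/0 (default) route, if any.
    for route in routes:
        if route["Destination"] == "0.0.0.0/0":
            return route["Target"]
    return None

assert default_target(private_subnet_routes).startswith("nat-")
assert default_target(public_subnet_routes).startswith("igw-")
```

If `default_target` returned `None` for the private subnet, that would be exactly the misconfiguration option B describes.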

The other options are not related to the issue of the EC2 instances not being able to connect to the public internet:

A. The ELB health check is related to the health of the instances behind the ELB and does not directly affect outbound internet access.

C. Elastic IP addresses provide static public IP addresses. Instances in a private subnet send outbound traffic through the NAT gateway, which has its own Elastic IP, so the instances do not need one.

D. NAT gateways do not have security groups associated with them, so a misconfigured security group on the NAT gateway cannot be the root cause.

Therefore, the possible root causes are incorrectly configured route tables in the VPC and incorrect outbound rules on the security group attached to the EC2 instances.

Question 916

Exam Question

A company is running an ecommerce application on Amazon EC2. The application consists of a stateless web tier that requires a minimum of 10 instances and a peak of 250 instances to support the application’s usage. The application requires 80 instances 80% of the time.

Which solution should be used to minimize costs?

A. Purchase Reserved Instances to cover 250 instances.
B. Purchase Reserved Instances to cover 80 instances. Use Spot Instances to cover the remaining instances.
C. Purchase On-Demand Instances to cover 40 instances. Use Spot Instances to cover the remaining instances.
D. Purchase Reserved Instances to cover 50 instances. Use On-Demand and Spot Instances to cover the remaining instances.

Correct Answer

B. Purchase Reserved Instances to cover 80 instances. Use Spot Instances to cover the remaining instances.

Explanation

To minimize costs for the ecommerce application, the best solution would be:

B. Purchase Reserved Instances to cover 80 instances. Use Spot Instances to cover the remaining instances.

By purchasing Reserved Instances for the 80 instances the application needs most of the time, the company benefits from significant cost savings compared to On-Demand Instances. Using Spot Instances for the remaining capacity during peaks provides further cost optimization: Spot Instances are available at steep discounts, allowing the company to scale up to 250 instances while keeping costs under control. The stateless web tier is well suited to Spot Instances, because any interrupted instances can be replaced without affecting the application’s overall functionality.
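The capacity mix implied by option B can be written out as simple arithmetic, using the instance counts from the answer:

```python
# Reserved Instances cover the steady 80-instance base from option B;
# Spot Instances cover the burst up to the 250-instance peak.
minimum, baseline, peak = 10, 80, 250

reserved = baseline           # always-on capacity, cheapest as Reserved
spot_burst = peak - baseline  # extra capacity needed only at peak

assert reserved == 80
assert spot_burst == 170
```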

Question 917

Exam Question

A solutions architect needs to design the architecture for an application that a vendor provides as a Docker container image. The container needs 50 GB of storage available for temporary files. The infrastructure must be serverless.

Which solution meets these requirements with the LEAST operational overhead?

A. Create an AWS Lambda function that uses the Docker container image with an Amazon S3 mounted volume that has more than 50 GB of space.
B. Create an AWS Lambda function that uses the Docker container image with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch type. Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume. Create a service with that task definition.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the Amazon EC2 launch type with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space. Create a task definition for the container image. Create a service with that task definition.

Correct Answer

C. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch type. Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume. Create a service with that task definition.

Explanation

The solution that meets the requirements with the least operational overhead is:

C. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch type. Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume. Create a service with that task definition.

By using AWS Fargate, you can run containers without managing the underlying infrastructure, which eliminates the operational overhead of managing servers. A Fargate task can mount an Amazon EFS file system, making it straightforward to provide the 50 GB of space the container needs for temporary files.

Using an Amazon Elastic File System (Amazon EFS) volume allows you to have shared storage that can be mounted by multiple containers. This meets the requirement of providing 50 GB of storage available for temporary files.

Overall, this solution provides a serverless infrastructure with minimal operational overhead, ensuring that the application’s requirements are met efficiently.
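The wiring between the Fargate task and the EFS volume lives in the task definition. A sketch of the relevant fragment is below; the family name, image URI, file system ID, and mount path are hypothetical placeholders.

```python
# ECS task definition fragment attaching an EFS volume to a Fargate task.
task_definition = {
    "family": "vendor-app",                      # hypothetical
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "1024",
    "memory": "4096",
    "volumes": [
        {
            "name": "scratch",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-0123456789abcdef0",  # hypothetical
                "transitEncryption": "ENABLED",
            },
        }
    ],
    "containerDefinitions": [
        {
            "name": "vendor-container",
            "image": "example.com/vendor/app:latest",    # hypothetical
            "mountPoints": [
                # The container sees the EFS volume at this path and can
                # use it for its 50 GB of temporary files.
                {"sourceVolume": "scratch", "containerPath": "/tmp/work"}
            ],
        }
    ],
}

assert task_definition["volumes"][0]["efsVolumeConfiguration"]["fileSystemId"].startswith("fs-")
```

Registering this with `register_task_definition` and creating a service from it completes the setup the answer describes.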

Question 918

Exam Question

A company is planning to use Amazon S3 to store images uploaded by its users. The images must be encrypted at rest in Amazon S3. The company does not want to spend time managing and rotating the keys, but it does want to control who can access those keys.

What should a solutions architect use to accomplish this?

A. Server-Side Encryption with keys stored in an S3 bucket
B. Server-Side Encryption with Customer-Provided Keys (SSE-C)
C. Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
D. Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)

Correct Answer

D. Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)

Explanation

To encrypt the images at rest in Amazon S3 without spending time managing and rotating keys, while still controlling who can access those keys, the solutions architect should use:

D. Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS).

With SSE-KMS, AWS KMS creates, stores, and can automatically rotate the encryption keys, so the company avoids the burden of key management. At the same time, the KMS key policy and IAM policies let the company control exactly which principals can use the key, which satisfies the requirement to control access to the keys.

SSE-S3 (option C) also removes key-management overhead, but the keys are fully managed by Amazon S3 and the company has no control over who can access them. SSE-C (option B) would require the company to supply and manage its own keys, and storing keys in an S3 bucket (option A) is not a supported encryption mechanism.
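For reference, an upload that uses SSE-KMS (option D) names the KMS key explicitly, and that key’s policy is what controls access. The bucket, object key, and KMS key ARN below are hypothetical.

```python
def sse_kms_put_params(bucket: str, key: str, body: bytes, kms_key_id: str) -> dict:
    """Parameters for an S3 upload encrypted with SSE-KMS.

    The named KMS key's key policy controls which principals can use it,
    which is how access to the encryption key itself is restricted.
    """
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",
        "SSEKMSKeyId": kms_key_id,
    }

params = sse_kms_put_params(
    "user-images", "uploads/img001.png", b"...",
    "arn:aws:kms:us-east-1:111122223333:key/example-key-id",  # hypothetical ARN
)
# A real upload would then be: boto3.client("s3").put_object(**params)
assert params["ServerSideEncryption"] == "aws:kms"
```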

Question 919

Exam Question

A company is designing a shared storage solution for a gaming application that is hosted in the AWS Cloud. The company needs the ability to use SMB clients to access data. The solution must be fully managed.

Which AWS solution meets these requirements?

A. Create an AWS DataSync task that shares the data as a mountable file system. Mount the file system to the application server.
B. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to the file share.
C. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the file system.
D. Create an Amazon S3 bucket. Assign an IAM role to the application to grant access to the S3 bucket. Mount the S3 bucket to the application server.

Correct Answer

C. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the file system.

Explanation

The AWS solution that meets the requirements of providing a fully managed shared storage solution for a gaming application with the ability to use SMB clients (such as Windows machines) to access data is:

C. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the file system.

Amazon FSx for Windows File Server is a fully managed, native Windows file system that is accessible using the SMB protocol. It provides shared storage for Windows-based applications and supports the use of SMB clients.

By creating an Amazon FSx for Windows File Server file system, the company can easily share data using SMB and mount the file system to the application server. This solution eliminates the need to manage the underlying infrastructure and ensures high availability, durability, and performance for the shared storage.

Question 920

Exam Question

A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time.

What should a solutions architect propose to ensure users see all of their documents at once?

A. Copy the data so both EBS volumes contain all the documents.
B. Configure the Application Load Balancer to direct a user to the server with the documents.
C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS.
D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server.

Correct Answer

C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS.

Explanation

To ensure that users can see all of their documents at once when accessing the web application, a solutions architect should propose the following:

C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS.

By copying the data from both EBS volumes to Amazon EFS (Elastic File System), the documents will be stored in a shared file system that can be accessed by both EC2 instances behind the Application Load Balancer. This will ensure that both instances have access to the same set of documents, allowing users to see all of their documents regardless of which instance they are directed to.

Additionally, the application should be modified to save new documents to Amazon EFS instead of the individual EBS volumes. This will ensure that any new documents uploaded by users are stored in a centralized location that is accessible to both instances.
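The application-side change amounts to writing documents under the shared mount instead of a local EBS path. A minimal sketch, assuming the EFS file system is mounted at a path such as /mnt/efs (a hypothetical mount point) on both instances:

```python
import os

def save_document(mount_path: str, user_id: str, filename: str, data: bytes) -> str:
    """Write a user's document under the shared EFS mount.

    Every EC2 instance mounts the same EFS file system at mount_path,
    so a document written by one instance is immediately visible to
    the other, and users see all their documents on every refresh.
    """
    user_dir = os.path.join(mount_path, user_id)
    os.makedirs(user_dir, exist_ok=True)
    path = os.path.join(user_dir, filename)
    with open(path, "wb") as f:
        f.write(data)
    return path
```

On the real instances, `mount_path` would point at the EFS mount; the function itself is ordinary file I/O, which is exactly why EFS is a drop-in fix here.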

This solution provides a scalable and highly available architecture with shared storage, allowing users to access all of their documents regardless of the EC2 instance they are connected to.