The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available for free. They are intended to help you prepare for the SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.
Table of Contents
- Question 1011
- Exam Question
- Correct Answer
- Explanation
- Question 1012
- Exam Question
- Correct Answer
- Explanation
- Question 1013
- Exam Question
- Correct Answer
- Explanation
- Question 1014
- Exam Question
- Correct Answer
- Explanation
- Question 1015
- Exam Question
- Correct Answer
- Explanation
- Question 1016
- Exam Question
- Correct Answer
- Explanation
- Question 1017
- Exam Question
- Correct Answer
- Explanation
- Question 1018
- Exam Question
- Correct Answer
- Explanation
- Question 1019
- Exam Question
- Correct Answer
- Explanation
- Question 1020
- Exam Question
- Correct Answer
- Explanation
Question 1011
Exam Question
A company designs a mobile app for its customers to upload photos to a website. The app needs a secure login with multi-factor authentication (MFA). The company wants to limit the initial build time and the maintenance of the solution.
Which solution should a solutions architect recommend to meet these requirements?
A. Use Amazon Cognito Identity with SMS-based MFA.
B. Edit IAM policies to require MFA for all users.
C. Federate IAM against the corporate Active Directory that requires an MFA.
D. Use Amazon API Gateway and require server-side encryption (SSE) for photos.
Correct Answer
A. Use Amazon Cognito Identity with SMS-based MFA.
Explanation
To meet the requirements of secure login with multi-factor authentication (MFA) for a mobile app while minimizing build time and maintenance, a solutions architect should recommend using Amazon Cognito Identity with SMS-based MFA.
Amazon Cognito provides a fully managed user authentication service that supports MFA and integrates easily with mobile apps. It offers a simple and secure way to add user sign-up, sign-in, and access control to mobile and web apps. Here’s how this solution meets the requirements:
- Amazon Cognito Identity: It provides user identity management and authentication capabilities. You can create a user pool within Amazon Cognito to store user credentials securely.
- SMS-based MFA: Amazon Cognito offers MFA functionality out of the box, including SMS-based MFA. Users can receive a verification code via SMS to verify their identity during the login process. This adds an extra layer of security to the authentication process.
- Quick implementation: Amazon Cognito provides SDKs and integration options for various platforms, making it easier to integrate user authentication into mobile apps. The development effort and time required to implement user authentication and MFA are reduced.
- Low maintenance: Amazon Cognito is a fully managed service, meaning that AWS takes care of the underlying infrastructure, security, and maintenance. This reduces the maintenance overhead for the company, allowing them to focus on other aspects of their application.
The other options mentioned are not as suitable for the requirements:
- Option B: Editing IAM policies to require MFA for all users is more applicable for managing access to AWS resources and services, rather than providing user authentication for a mobile app.
- Option C: Federating IAM against the corporate Active Directory would require additional setup and complexity, which may increase build time and maintenance efforts.
- Option D: Using Amazon API Gateway and server-side encryption (SSE) for photos does not address the requirement of secure login with MFA. It focuses on securing data in transit and at rest but does not handle user authentication and MFA.
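As a rough illustration of option A, the following boto3 sketch creates a Cognito user pool with SMS-based MFA required. The pool name, SNS caller role ARN, and external ID are placeholders, not values from the question.

```python
import boto3

# Cognito Identity Provider client (Region is an assumption)
cognito = boto3.client("cognito-idp", region_name="us-east-1")

# Create a user pool that requires SMS-based MFA.
# The SNS caller role ARN and external ID are placeholder values.
response = cognito.create_user_pool(
    PoolName="photo-app-users",
    MfaConfiguration="ON",                    # require MFA for every sign-in
    AutoVerifiedAttributes=["phone_number"],  # verify phone numbers so SMS codes can be delivered
    SmsConfiguration={
        "SnsCallerArn": "arn:aws:iam::123456789012:role/cognito-sns-role",
        "ExternalId": "photo-app-external-id",
    },
)

print("User pool ID:", response["UserPool"]["Id"])
```

The mobile app would then sign users in through the Cognito SDKs, with Cognito delivering the SMS verification code during login, so no authentication infrastructure has to be built or maintained.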
Question 1012
Exam Question
A company has an application that runs on Amazon EC2 instances within a private subnet in a VPC. The instances access data in an Amazon S3 bucket in the same AWS Region. The VPC contains a NAT gateway in a public subnet to access the S3 bucket. The company wants to reduce costs by replacing the NAT gateway without compromising security or redundancy.
Which solution meets these requirements?
A. Replace the NAT gateway with a NAT instance.
B. Replace the NAT gateway with an internet gateway.
C. Replace the NAT gateway with a gateway VPC endpoint.
D. Replace the NAT gateway with an AWS Direct Connect connection.
Correct Answer
C. Replace the NAT gateway with a gateway VPC endpoint.
Explanation
To reduce costs while maintaining security and redundancy, the recommended solution is to replace the NAT gateway with a gateway VPC endpoint.
A gateway VPC endpoint allows private connectivity to AWS services without the need for a NAT gateway, internet gateway, or public IP addresses. In this case, you can create a VPC endpoint for Amazon S3 within your VPC. Here’s how this solution meets the requirements:
- Cost Reduction: By replacing the NAT gateway, you eliminate the cost associated with its operation. NAT gateways incur hourly charges and data processing charges, which can be significant depending on the amount of data transferred.
- Security: The gateway VPC endpoint for Amazon S3 provides a highly secure connection between your VPC and the S3 bucket. It does not require traffic to traverse the public internet, eliminating exposure to potential security threats.
- Redundancy: Gateway VPC endpoints are highly available and redundant by default. They leverage the underlying AWS network infrastructure, ensuring that your connectivity to Amazon S3 remains resilient.
The other options mentioned are not suitable for the requirements:
- Option A: Replacing the NAT gateway with a NAT instance would still involve the cost of running an EC2 instance, and it may not provide the same level of scalability, availability, and management simplicity as the NAT gateway.
- Option B: Replacing the NAT gateway with an internet gateway would expose the private EC2 instances to the public internet, compromising security and not meeting the requirement of maintaining a private subnet.
- Option D: AWS Direct Connect provides dedicated, private network connectivity between an on-premises network and AWS. It does not address access from EC2 instances in a VPC to Amazon S3 in the same Region, and it would add cost and complexity rather than reduce it.
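A minimal boto3 sketch of option C, assuming placeholder VPC and route table IDs and the us-east-1 Region:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # Region is an assumption

# Create a gateway VPC endpoint for S3 and associate it with the private
# subnet's route table. The VPC and route table IDs are placeholders.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234567890def",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0abc1234567890def"],
)

print("Endpoint ID:", response["VpcEndpoint"]["VpcEndpointId"])
```

Once the endpoint is created, S3-bound traffic from the private subnet follows the endpoint route instead of the NAT gateway, and the NAT gateway can be removed if nothing else depends on it.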
Question 1013
Exam Question
A company has an application that uses overnight digital images of products on store shelves to analyze inventory data. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB) and obtains the images from an Amazon S3 bucket so that the image metadata can be processed by worker nodes for analysis. A solutions architect needs to ensure that every image is processed by the worker nodes.
What should the solutions architect do to meet this requirement in the MOST cost-efficient way?
A. Send the image metadata from the application directly to a second ALB for the worker nodes that use an Auto Scaling group of EC2 Spot Instances as the target group.
B. Process the image metadata by sending it directly to EC2 Reserved Instances in an Auto Scaling group. With a dynamic scaling policy, use an Amazon CloudWatch metric for average CPU utilization of the Auto Scaling group as soon as the front-end application obtains the images.
C. Write messages to Amazon Simple Queue Service (Amazon SQS) when the front-end application obtains an image. Process the images with EC2 On- Demand instances in an Auto Scaling group with instance scale-in protection and a fixed number of instances with periodic health checks.
D. Write messages to Amazon Simple Queue Service (Amazon SQS) when the application obtains an image. Process the images with EC2 Spot Instances in an Auto Scaling group with instance scale-in protection and a dynamic scaling policy using a custom Amazon CloudWatch metric for the current number of messages in the queue.
Correct Answer
D. Write messages to Amazon Simple Queue Service (Amazon SQS) when the application obtains an image. Process the images with EC2 Spot Instances in an Auto Scaling group with instance scale-in protection and a dynamic scaling policy using a custom Amazon CloudWatch metric for the current number of messages in the queue.
Explanation
This approach offers the most cost-efficient solution to ensure that every image is processed by the worker nodes. Here’s how it meets the requirements:
- Amazon SQS: By writing messages to an SQS queue when the application obtains an image, you decouple the image processing from the application itself. This ensures that every image is processed, even if the processing infrastructure is not immediately available.
- EC2 Spot Instances: By using EC2 Spot Instances in the Auto Scaling group to process the images, you can take advantage of the cost savings provided by Spot Instances. Spot Instances use spare EC2 capacity at a significant discount (often up to 90 percent) compared to On-Demand prices, making them the most cost-efficient choice for this interruptible batch workload.
- Instance scale-in protection: Enabling instance scale-in protection ensures that the worker instances processing the images are not terminated prematurely during scale-in events. This helps to maintain the desired processing capacity and ensure that every image is processed.
- Dynamic scaling policy: Using a dynamic scaling policy based on a custom CloudWatch metric for the current number of messages in the SQS queue allows you to automatically adjust the number of Spot Instances based on the workload. As the number of messages in the queue increases, the scaling policy can dynamically add more instances to handle the increased processing load.
Overall, this solution provides cost-efficiency by leveraging Spot Instances, ensures every image is processed through decoupling with SQS, and dynamically scales the processing capacity based on the workload.
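A hedged sketch of the two key pieces of option D: the front end writing image metadata to SQS, and a target tracking policy on the worker Auto Scaling group driven by queue depth. The queue name, group name, object keys, and target value are illustrative placeholders.

```python
import json

import boto3

REGION = "us-east-1"                  # Region is an assumption
QUEUE_NAME = "image-metadata-queue"   # placeholder queue name

sqs = boto3.client("sqs", region_name=REGION)
autoscaling = boto3.client("autoscaling", region_name=REGION)

# Front end: write one message per image so that no image is ever dropped.
queue_url = sqs.get_queue_url(QueueName=QUEUE_NAME)["QueueUrl"]
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"bucket": "shelf-images", "key": "store-42/shelf-7.jpg"}),
)

# Worker tier: target tracking scaling policy on the Spot Auto Scaling group,
# driven by queue depth. AWS generally recommends a "backlog per instance"
# custom metric; the raw SQS metric is used here to keep the sketch short.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="image-worker-spot-asg",   # placeholder group name
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": QUEUE_NAME}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,   # placeholder target queue depth
    },
)
```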
Question 1014
Exam Question
A mobile gaming company runs application servers on Amazon EC2 instances. The servers receive updates from players every 15 minutes. The mobile game creates a JSON object of the progress made in the game since the last update and sends the JSON object to an Application Load Balancer. As the mobile game is played, game updates are being lost. The company wants to create a durable way to get the updates in order.
What should a solutions architect recommend to decouple the system?
A. Use Amazon Kinesis Data Streams to capture the data and store the JSON object in Amazon S3.
B. Use Amazon Kinesis Data Firehose to capture the data and store the JSON object in Amazon S3.
C. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues to capture the data and EC2 instances to process the messages in the queue.
D. Use Amazon Simple Notification Service (Amazon SNS) to capture the data and EC2 instances to process the messages sent to the Application Load Balancer.
Correct Answer
B. Use Amazon Kinesis Data Firehose to capture the data and store the JSON object in Amazon S3.
Explanation
By recommending the use of Amazon Kinesis Data Firehose, the system can be decoupled effectively and provide durability for the game updates. Here’s how this solution meets the requirements:
- Amazon Kinesis Data Firehose: It is designed to capture and load streaming data in a reliable and scalable manner. By configuring Kinesis Data Firehose to capture the JSON object updates, you can ensure that the updates are reliably ingested and processed.
- Amazon S3: Kinesis Data Firehose can be configured to store the JSON object updates directly into an Amazon S3 bucket. Amazon S3 provides durability and high availability for storing the updates, ensuring that they are not lost.
- Decoupling: By introducing Kinesis Data Firehose between the mobile game and the application servers, the system is decoupled. The game updates are sent to Kinesis Data Firehose, and the application servers can retrieve and process the updates from S3 as needed. This decoupling ensures that updates are not lost and can be processed independently from the game.
Overall, this solution using Kinesis Data Firehose and S3 provides durability for the game updates, decouples the system, and allows for efficient retrieval and processing of the updates by the application servers.
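For illustration, a minimal boto3 sketch of the producer side of option B, assuming a delivery stream named game-progress-updates that already exists and is configured with an S3 destination:

```python
import json

import boto3

firehose = boto3.client("firehose", region_name="us-east-1")  # Region is an assumption

# Send one game-progress update to the existing Firehose delivery stream.
# The stream name and payload are illustrative placeholders.
update = {"player_id": "player-123", "level": 7, "score": 4200}

firehose.put_record(
    DeliveryStreamName="game-progress-updates",
    Record={"Data": (json.dumps(update) + "\n").encode("utf-8")},
)
```

Firehose buffers the records and delivers them to the configured S3 bucket, where they are stored durably until the application servers process them.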
Question 1015
Exam Question
A solutions architect needs to host a high performance computing (HPC) workload in the AWS Cloud. The workload will run on hundreds of Amazon EC2 instances and will require parallel access to a shared file system to enable distributed processing of large datasets. Datasets will be accessed across multiple instances simultaneously. The workload requires access latency within 1 ms. After processing has completed, engineers will need access to the dataset for manual post processing.
Which solution will meet these requirements?
A. Use Amazon Elastic File System (Amazon EFS) as a shared file system. Access the dataset from Amazon EFS.
B. Mount an Amazon S3 bucket to serve as the shared file system. Perform post processing directly from the S3 bucket.
C. Use Amazon FSx for Lustre as a shared file system. Link the file system to an Amazon S3 bucket for post processing.
D. Configure AWS Resource Access Manager to share an Amazon S3 bucket so that it can be mounted to all instances for processing and postprocessing.
Correct Answer
C. Use Amazon FSx for Lustre as a shared file system. Link the file system to an Amazon S3 bucket for post processing.
Explanation
Amazon FSx for Lustre is specifically designed for high-performance computing workloads and provides parallel access to shared file systems. It can meet the requirements of running a high-performance computing workload on hundreds of EC2 instances with low latency and distributed processing capabilities.
By linking the Amazon FSx for Lustre file system to an Amazon S3 bucket, you can leverage the benefits of both services. The datasets can be stored in the S3 bucket and accessed by the EC2 instances through the Lustre file system, allowing for parallel access to the data.
Additionally, after processing has completed, engineers can perform post processing directly from the linked S3 bucket, enabling efficient and convenient access to the dataset for manual post processing.
This solution provides the necessary performance, parallel access, low latency, and post processing capabilities required by the HPC workload.
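A minimal boto3 sketch of option C, assuming a scratch deployment type, a placeholder subnet ID, and a placeholder S3 bucket for the linked data repository:

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")  # Region is an assumption

# Create a scratch FSx for Lustre file system linked to an S3 bucket so the
# dataset can be loaded for parallel processing and results exported back to
# S3. Subnet ID, bucket name, and capacity are placeholders.
response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                       # GiB; minimum size for SCRATCH_2
    SubnetIds=["subnet-0abc1234567890def"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://hpc-dataset-bucket",          # load input data from S3
        "ExportPath": "s3://hpc-dataset-bucket/results",  # destination for exported results
    },
)

print("File system ID:", response["FileSystem"]["FileSystemId"])
```

The EC2 instances then mount the file system with the Lustre client, and processed results can be exported to the linked bucket (for example, with a data repository export task) for manual post processing.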
Question 1016
Exam Question
A company previously migrated its data warehouse solution to AWS. The company also has an AWS Direct Connect connection. Corporate office users query the data warehouse using a visualization tool. The average size of a query returned by the data warehouse is 50 MB and each web page sent by the visualization tool is approximately 500 KB. Result sets returned by the data warehouse are not cached.
Which solution provides the LOWEST data transfer egress cost for the company?
A. Host the visualization tool on premises and query the data warehouse directly over the internet.
B. Host the visualization tool in the same AWS Region as the data warehouse. Access it over the internet.
C. Host the visualization tool on premises and query the data warehouse directly over a Direct Connect connection at a location in the same AWS Region.
D. Host the visualization tool in the same AWS Region as the data warehouse and access it over a Direct Connect connection at a location in the same Region.
Correct Answer
D. Host the visualization tool in the same AWS Region as the data warehouse and access it over a Direct Connect connection at a location in the same Region.
Explanation
By hosting the visualization tool in the same AWS Region as the data warehouse and accessing it over a Direct Connect connection within the same Region, the data transfer between the two services will incur the lowest egress cost.
When the visualization tool runs in the same AWS Region as the data warehouse, the 50 MB result sets never leave AWS; they move in-Region between the tool and the data warehouse at little or no cost. Only the roughly 500 KB web pages rendered by the tool are sent to the corporate office users, which is about one hundredth of the data that would leave AWS if the tool queried the warehouse from on premises.
Delivering those pages over the Direct Connect connection reduces costs further, because data transfer out over Direct Connect is billed at a lower rate than data transfer out over the internet, while also providing a dedicated, private connection between the corporate office and the AWS Region.
Therefore, hosting the visualization tool in the same AWS Region as the data warehouse and accessing it over a Direct Connect connection at a location in the same Region will provide the lowest data transfer egress cost for the company.
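A quick back-of-the-envelope comparison, using only the sizes stated in the question, shows why hosting the tool in AWS dominates the egress cost:

```python
# Data leaving AWS per 1,000 user interactions, using the sizes from the question.
QUERY_RESULT_MB = 50      # result set returned by the data warehouse
WEB_PAGE_MB = 0.5         # 500 KB page sent by the visualization tool
INTERACTIONS = 1_000

# Tool hosted on premises (options A and C): each 50 MB result set leaves AWS.
egress_on_premises_gb = QUERY_RESULT_MB * INTERACTIONS / 1_000

# Tool hosted in the same Region (options B and D): only the rendered page leaves AWS.
egress_in_region_gb = WEB_PAGE_MB * INTERACTIONS / 1_000

print(f"On-premises tool: {egress_on_premises_gb:.1f} GB egress")  # 50.0 GB
print(f"In-Region tool:   {egress_in_region_gb:.1f} GB egress")    # 0.5 GB
```

Option D combines this roughly hundredfold reduction in egress volume with the lower Direct Connect data transfer rate, which is why it is the cheapest choice.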
Question 1017
Exam Question
A company is using Amazon Route 53 latency-based routing to route requests to its UDP-based application for users around the world. The application is hosted on redundant servers in the company's on-premises data centers in the United States, Asia, and Europe. The company's compliance requirements state that the application must be hosted on premises. The company wants to improve the performance and availability of the application.
What should a solutions architect do to meet these requirements?
A. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the NLBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS.
B. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the ALBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS.
C. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three NLBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAME that points to the CloudFront DNS.
D. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three ALBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAME that points to the CloudFront DNS.
Correct Answer
A. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the NLBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS.
Explanation
To meet the requirements of improving the performance and availability of the application while hosting it on-premises, a solutions architect should use AWS Global Accelerator in combination with Network Load Balancers (NLBs).
Here’s how this solution addresses the requirements:
- Configure three Network Load Balancers (NLBs) in the three AWS Regions: By setting up NLBs in the US, Asia, and Europe, the company ensures redundancy and load balancing across its on-premises data centers.
- Create an accelerator with AWS Global Accelerator: AWS Global Accelerator is a service that improves the availability and performance of applications by directing traffic through the AWS global network. By creating an accelerator, the company can take advantage of the AWS global infrastructure to optimize the routing of UDP-based requests.
- Register the NLBs as endpoints for the accelerator: The NLBs in each AWS Region can be registered as endpoints with the accelerator. This allows the accelerator to direct traffic to the closest available endpoint based on the location of the user, improving performance and reducing latency.
- Provide access to the application using a CNAME that points to the accelerator DNS: The company can configure a CNAME record in Route 53 that points to the DNS name of the AWS Global Accelerator. This provides a user-friendly and globally accessible domain name for the application.
By implementing this solution, the company can leverage AWS Global Accelerator to optimize the routing of UDP-based requests, improve the performance and availability of the application, and meet the compliance requirement of hosting the application on premises. This is also the only option that supports a UDP workload: Application Load Balancers handle only HTTP and HTTPS traffic, and Amazon CloudFront does not serve UDP, which rules out options B, C, and D.
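A hedged boto3 sketch of option A, with placeholder names, port, and NLB ARNs; the three NLBs are assumed to already exist with IP targets pointing at the on-premises servers.

```python
import boto3

# The Global Accelerator API is served from the us-west-2 endpoint.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Create the accelerator and a UDP listener. Names and port are placeholders.
accelerator = ga.create_accelerator(Name="udp-app-accelerator", Enabled=True)
accelerator_arn = accelerator["Accelerator"]["AcceleratorArn"]

listener = ga.create_listener(
    AcceleratorArn=accelerator_arn,
    Protocol="UDP",
    PortRanges=[{"FromPort": 5000, "ToPort": 5000}],
)
listener_arn = listener["Listener"]["ListenerArn"]

# Register one NLB per Region as an endpoint group. The NLB ARNs are placeholders.
nlbs_by_region = {
    "us-east-1": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/us-nlb/abc",
    "ap-northeast-1": "arn:aws:elasticloadbalancing:ap-northeast-1:123456789012:loadbalancer/net/asia-nlb/def",
    "eu-west-1": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/eu-nlb/ghi",
}

for region, nlb_arn in nlbs_by_region.items():
    ga.create_endpoint_group(
        ListenerArn=listener_arn,
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 100}],
    )

print("Accelerator DNS name:", accelerator["Accelerator"]["DnsName"])
```

The accelerator DNS name printed at the end is the target for the CNAME record that gives users access to the application.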
Question 1018
Exam Question
A company has an application that generates a large number of files, each approximately 5 MB in size. The files are stored in Amazon S3. Company policy requires the files to be stored for 4 years before they can be deleted. Immediate accessibility is always required as the files contain critical business data that is not easy to reproduce. The files are frequently accessed in the first 30 days of the object creation but are rarely accessed after the first 30 days.
Which storage solution is MOST cost-effective?
A. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Glacier 30 days from object creation. Delete the files 4 years after object creation.
B. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days from object creation. Delete the files 4 years after object creation.
C. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object creation. Delete the files 4 years after object creation.
D. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object creation. Move the files to S3 Glacier 4 years after object creation.
Correct Answer
B. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days from object creation. Delete the files 4 years after object creation.
Explanation
Here’s how this solution addresses the requirements:
- Move files to S3 One Zone-Infrequent Access (S3 One Zone-IA): Since the files are frequently accessed in the first 30 days but rarely accessed after that, moving them to S3 One Zone-IA storage class after 30 days helps reduce costs while maintaining immediate accessibility. S3 One Zone-IA provides a lower-cost storage option compared to S3 Standard, while still offering durability and availability within a single Availability Zone.
- Delete the files 4 years after object creation: The company policy requires storing the files for 4 years before deletion. By setting up a lifecycle policy to delete the files after 4 years, unnecessary storage costs are avoided, ensuring compliance with the retention policy.
This solution strikes a balance between cost optimization and accessibility. The files are initially stored in S3 Standard for immediate accessibility, then transitioned to the more cost-effective S3 One Zone-IA storage class after 30 days. Finally, the files are deleted after 4 years to prevent unnecessary long-term storage costs.
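For illustration, a minimal boto3 sketch of the lifecycle rule in option B, using a placeholder bucket name and approximating 4 years as 1,460 days:

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle rule matching option B: transition to S3 One Zone-IA after 30 days
# and expire after roughly 4 years. The bucket name is a placeholder.
s3.put_bucket_lifecycle_configuration(
    Bucket="business-files-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "onezone-ia-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},       # apply to every object in the bucket
                "Transitions": [
                    {"Days": 30, "StorageClass": "ONEZONE_IA"},
                ],
                "Expiration": {"Days": 1460},   # ~4 years after object creation
            }
        ]
    },
)
```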
Question 1019
Exam Question
A company manages its own Amazon EC2 instances that run MySQL databases. The company is manually managing replication and scaling as demand increases or decreases. The company needs a new solution that simplifies the process of adding or removing compute capacity to or from its database tier as needed. The solution also must offer improved performance, scaling, and durability with minimal effort from operations.
Which solution meets these requirements?
A. Migrate the databases to Amazon Aurora Serverless for Aurora MySQL.
B. Migrate the databases to Amazon Aurora Serverless for Aurora PostgreSQL.
C. Combine the databases into one larger MySQL database. Run the larger database on larger EC2 instances.
D. Create an EC2 Auto Scaling group for the database tier. Migrate the existing databases to the new environment.
Correct Answer
A. Migrate the databases to Amazon Aurora Serverless for Aurora MySQL.
Explanation
The solution that best meets the requirements for simplifying the process of adding or removing compute capacity, improving performance, scaling, and durability with minimal effort from operations is:
A. Migrate the databases to Amazon Aurora Serverless for Aurora MySQL.
Here’s how this solution addresses the requirements:
- Simplified capacity management: With Amazon Aurora Serverless, the database automatically scales capacity up and down based on demand. This eliminates the need for manual management of replication and scaling, providing a more simplified and automated solution.
- Improved performance: Amazon Aurora is a high-performance database service that offers several performance enhancements compared to traditional MySQL databases. Aurora Serverless is designed to automatically scale up and down to meet the demands of your application, ensuring optimal performance.
- Scalability: Aurora Serverless scales capacity dynamically based on the workload, allowing you to handle varying levels of demand without the need for manual intervention. This provides seamless scalability as the demand increases or decreases.
- Durability: Amazon Aurora offers high durability by replicating data across multiple Availability Zones. This ensures that your data is protected against infrastructure failures.
- Minimal operations effort: Aurora Serverless manages many aspects of database operations, including scaling, patching, and backups, with minimal effort required from operations teams. This reduces the operational overhead and allows the team to focus on other critical tasks.
By migrating the databases to Amazon Aurora Serverless for Aurora MySQL, the company can leverage the benefits of a fully managed, scalable, and high-performance database solution, while significantly reducing the manual effort involved in managing replication, scaling, and durability.
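As a rough sketch, the following boto3 calls create a MySQL-compatible Aurora cluster using the Serverless v2 parameters available in the current RDS API; the exam answer simply says Aurora Serverless, and the identifiers and capacity limits here are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # Region is an assumption

# Aurora Serverless v2 sketch: a MySQL-compatible cluster that scales between
# 0.5 and 16 ACUs, plus one db.serverless instance.
rds.create_db_cluster(
    DBClusterIdentifier="app-aurora-mysql",
    Engine="aurora-mysql",
    MasterUsername="admin",
    ManageMasterUserPassword=True,   # let Secrets Manager generate and store the password
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 16},
)

rds.create_db_instance(
    DBInstanceIdentifier="app-aurora-mysql-instance-1",
    DBClusterIdentifier="app-aurora-mysql",
    DBInstanceClass="db.serverless",  # Serverless v2 capacity type
    Engine="aurora-mysql",
)
```

Because the MySQL engine is retained, the existing databases can be migrated with standard tools such as native dump/restore or AWS DMS without application rewrites.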
Question 1020
Exam Question
A company is preparing to deploy a data lake on AWS. A solutions architect must define the encryption strategy for data at rest in Amazon S3. The company's security policy states:
- Keys must be rotated every 90 days.
- Strict separation of duties between key users and key administrators must be implemented.
- Auditing key usage must be possible.
What should the solutions architect recommend?
A. Server-side encryption with AWS KMS managed keys (SSE-KMS) with customer managed customer master keys (CMKs)
B. Server-side encryption with AWS KMS managed keys (SSE-KMS) with AWS managed customer master keys (CMKs)
C. Server-side encryption with Amazon S3 managed keys (SSE-S3) with customer managed customer master keys (CMKs)
D. Server-side encryption with Amazon S3 managed keys (SSE-S3) with AWS managed customer master keys (CMKs)
Correct Answer
A. Server-side encryption with AWS KMS managed keys (SSE-KMS) with customer managed customer master keys (CMKs)
Explanation
Based on the requirements of key rotation, strict separation of duties, and auditing key usage, the recommended encryption strategy for data at rest in Amazon S3 is:
A. Server-side encryption with AWS KMS managed keys (SSE-KMS) with customer managed customer master keys (CMKs).
Here’s how this solution addresses the requirements:
- Key Rotation: AWS KMS allows you to rotate customer managed CMKs, which can be set to automatically rotate every 90 days. This ensures compliance with the key rotation policy stated in the company’s security policy.
- Separation of Duties: With AWS KMS, you can define granular access controls and policies to enforce strict separation of duties between key users and key administrators. This ensures that only authorized personnel can manage and use the encryption keys.
- Auditing Key Usage: AWS KMS provides detailed logging and auditing capabilities, allowing you to track and monitor key usage. You can view key usage logs in AWS CloudTrail and integrate with other monitoring and logging tools for comprehensive auditing.
- SSE-KMS Security: SSE-KMS provides more control and visibility than SSE-S3. Each object's data key is encrypted under the customer managed CMK (envelope encryption), so the company controls the key policy, rotation schedule, and access to the key, and every use of the key is auditable; none of this is possible with S3 managed keys.
By using SSE-KMS with customer managed CMKs, the company can meet the requirements of key rotation, separation of duties, and auditing key usage. This solution offers a secure and compliant approach for encrypting data at rest in Amazon S3.
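A hedged boto3 sketch of option A, with placeholder names; the key policy that actually separates key users from key administrators is omitted for brevity.

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")  # Region is an assumption
s3 = boto3.client("s3")

# Create a customer managed key for the data lake. Separation of duties is
# enforced through the key policy (not shown), which grants encrypt/decrypt
# permissions to key users and administrative actions to a separate key-admin role.
key = kms.create_key(Description="Data lake S3 encryption key")
key_id = key["KeyMetadata"]["KeyId"]

# Enable automatic rotation. RotationPeriodInDays requires a recent KMS API /
# boto3 version; older versions rotate customer managed keys annually.
kms.enable_key_rotation(KeyId=key_id, RotationPeriodInDays=90)

# Set SSE-KMS with this key as the bucket's default encryption.
# The bucket name is a placeholder.
s3.put_bucket_encryption(
    Bucket="data-lake-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key["KeyMetadata"]["Arn"],
                },
                "BucketKeyEnabled": True,  # reduces the number of KMS requests and their cost
            }
        ]
    },
)
```

Every encrypt and decrypt call against the key is then recorded in AWS CloudTrail, which satisfies the auditing requirement.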