The latest AWS Certified Solutions Architect – Associate (SAA-C03) practice exam questions and answers (Q&A) are available for free to help you pass the SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.
Table of Contents
- Question 1211
- Exam Question
- Correct Answer
- Explanation
- Question 1212
- Exam Question
- Correct Answer
- Explanation
- Question 1213
- Exam Question
- Correct Answer
- Explanation
- Question 1214
- Exam Question
- Correct Answer
- Explanation
- Question 1215
- Exam Question
- Correct Answer
- Explanation
- Question 1216
- Exam Question
- Correct Answer
- Explanation
- Question 1217
- Exam Question
- Correct Answer
- Explanation
- Question 1218
- Exam Question
- Correct Answer
- Explanation
- Question 1219
- Exam Question
- Correct Answer
- Explanation
- Question 1220
- Exam Question
- Correct Answer
- Explanation
Question 1211
Exam Question
A solutions architect is creating a new Amazon CloudFront distribution for an application. Some of the information submitted by users is sensitive. The application uses HTTPS but needs another layer of security. The sensitive information should be protected throughout the entire application stack, and access to the information should be restricted to certain applications.
Which action should the solutions architect take?
A. Configure a CloudFront signed URL.
B. Configure a CloudFront signed cookie.
C. Configure a CloudFront field-level encryption profile.
D. Configure a CloudFront distribution and set the Origin Protocol Policy setting to HTTPS Only for the Viewer Protocol Policy.
Correct Answer
C. Configure a CloudFront field-level encryption profile.
Explanation
To provide an additional layer of security for sensitive information throughout the entire application stack and restrict access to certain applications, the solutions architect should take the following action:
C. Configure a CloudFront field-level encryption profile.
CloudFront field-level encryption allows you to encrypt specific fields within HTTP POST requests and protect sensitive information in transit. With field-level encryption, the sensitive data is encrypted by the client application using a public key and can only be decrypted by authorized applications with the corresponding private key. This ensures that the sensitive data remains encrypted throughout the entire application stack and can only be accessed by authorized applications.
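For illustration, here is a minimal boto3 sketch of creating a field-level encryption profile. It assumes an RSA public key has already been uploaded to CloudFront; the profile name, key ID, provider ID, and field patterns are all hypothetical.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Assumes a public key was already uploaded to CloudFront; only applications
# holding the matching private key can decrypt the protected fields.
cloudfront.create_field_level_encryption_profile(
    FieldLevelEncryptionProfileConfig={
        "Name": "sensitive-fields-profile",   # hypothetical name
        "CallerReference": str(time.time()),  # must be unique per request
        "Comment": "Encrypt sensitive POST fields end to end",
        "EncryptionEntities": {
            "Quantity": 1,
            "Items": [
                {
                    "PublicKeyId": "K2EXAMPLEKEYID",  # hypothetical key ID
                    "ProviderId": "payments-app",     # hypothetical provider
                    "FieldPatterns": {
                        "Quantity": 2,
                        # POST field names to encrypt at the edge (hypothetical)
                        "Items": ["credit-card-number", "phone-number"],
                    },
                }
            ],
        },
    }
)
```

The profile is then referenced from a field-level encryption configuration that is attached to the distribution's cache behavior.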
Configuring a CloudFront signed URL (option A) or signed cookie (option B) provides control over access to content at the edge, but it does not specifically address the protection of sensitive information or restrict access to certain applications.
Option D, configuring the CloudFront protocol policy settings to enforce HTTPS, only ensures that traffic is encrypted in transit between the viewer, CloudFront, and the origin server. It does not provide an additional layer of security for sensitive information within the application stack or restrict access to certain applications.
Therefore, option C, configuring a CloudFront field-level encryption profile, is the most appropriate action to protect sensitive information throughout the entire application stack and restrict access to certain applications.
Question 1212
Exam Question
A company is developing a real-time multiplayer game that uses UDP for communication between clients and servers in an Auto Scaling group. Spikes in demand are anticipated during the day, so the game server platform must adapt accordingly. Developers want to store gamer scores and other non-relational data in a database solution that will scale without intervention.
Which solution should a solution architect recommend?
A. Use Amazon Route 53 for traffic distribution and Amazon Aurora Serverless for data storage.
B. Use a Network Load Balancer for traffic distribution and Amazon DynamoDB on demand for data storage.
C. Use a Network Load Balancer for traffic distribution and Amazon Aurora Global Database for data storage.
D. Use an Application Load Balancer for traffic distribution and Amazon DynamoDB global tables for data storage.
Correct Answer
B. Use a Network Load Balancer for traffic distribution and Amazon DynamoDB on demand for data storage.
Explanation
In this scenario, a real-time multiplayer game communicates over UDP, demand spikes during the day, and gamer scores and other non-relational data must be stored in a database that scales without intervention. The recommended solution is to use a Network Load Balancer for traffic distribution and Amazon DynamoDB on-demand for data storage.
A Network Load Balancer (NLB) operates at the connection level (Layer 4) and supports the UDP protocol, so it can distribute game traffic across the servers in the Auto Scaling group and adapt as instances are added or removed during spikes in demand.
Amazon DynamoDB is a fully managed NoSQL database service, which matches the non-relational data requirement. In on-demand capacity mode, DynamoDB automatically accommodates increases and decreases in traffic without any capacity planning, so gamer scores and other game data can be stored and retrieved at any scale without manual intervention.
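As a minimal boto3 sketch, a table in on-demand mode needs only a key schema and the PAY_PER_REQUEST billing mode; the table and attribute names below are hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# On-demand (PAY_PER_REQUEST) mode scales read and write throughput
# automatically, with no capacity planning or manual intervention.
dynamodb.create_table(
    TableName="GamerScores",  # hypothetical table
    AttributeDefinitions=[
        {"AttributeName": "GamerId", "AttributeType": "S"},
        {"AttributeName": "GameId", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "GamerId", "KeyType": "HASH"},  # partition key
        {"AttributeName": "GameId", "KeyType": "RANGE"},  # sort key
    ],
    BillingMode="PAY_PER_REQUEST",
)
```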
Option A suggests using Amazon Route 53 for traffic distribution and Amazon Aurora Serverless for data storage. Route 53 is a DNS service rather than a load balancer, so it cannot react to scaling events and instance health as quickly as an NLB. In addition, Aurora Serverless is a relational database, which does not match the non-relational data requirement.
Option C suggests using a Network Load Balancer and Amazon Aurora Global Database for data storage. While the NLB is appropriate for UDP traffic, Aurora Global Database is a relational, multi-Region offering and likewise does not meet the non-relational requirement.
Option D suggests using an Application Load Balancer and Amazon DynamoDB global tables. While DynamoDB global tables satisfy the storage requirement, an Application Load Balancer operates at Layer 7 and supports only HTTP and HTTPS, so it cannot distribute UDP game traffic.
Therefore, option B, using a Network Load Balancer for traffic distribution and Amazon DynamoDB on-demand for data storage, is the recommended solution.
Question 1213
Exam Question
A solutions architect is redesigning a monolithic application to be a loosely coupled application composed of two microservices: Microservice A and Microservice B. Microservice A places messages in a main Amazon Simple Queue Service (Amazon SQS) queue for Microservice B to consume. When Microservice B fails to process a message after four retries, the message needs to be removed from the queue and stored for further investigation.
What should the solutions architect do to meet these requirements?
A. Create an SQS dead-letter queue. Microservice B adds failed messages to that queue after it receives and fails to process the message four times.
B. Create an SQS dead-letter queue. Configure the main SQS queue to deliver messages to the dead-letter queue after the message has been received four times.
C. Create an SQS queue for failed messages. Microservice A adds failed messages to that queue after Microservice B receives and fails to process the message four times.
D. Create an SQS queue for failed messages. Configure the SQS queue for failed messages to pull messages from the main SQS queue after the original message has been received four times.
Correct Answer
B. Create an SQS dead-letter queue. Configure the main SQS queue to deliver messages to the dead-letter queue after the message has been received four times.
Explanation
To meet the requirements of removing messages from the main Amazon SQS queue for further investigation after four retries by Microservice B, the solutions architect should create an SQS dead-letter queue and configure the main SQS queue to deliver messages to the dead-letter queue after the message has been received four times.
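As a minimal boto3 sketch, the redrive policy on the main queue is what wires in the dead-letter queue: with maxReceiveCount set to 4, SQS moves a message to the dead-letter queue after it has been received four times without being deleted. Queue names are hypothetical.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Create the dead-letter queue first.
dlq_url = sqs.create_queue(QueueName="microservice-b-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Create the main queue with a redrive policy: after a message has been
# received 4 times without being deleted, SQS moves it to the DLQ.
sqs.create_queue(
    QueueName="microservice-main",
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "4"}
        )
    },
)
```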
Option A suggests creating an SQS dead-letter queue and having Microservice B add failed messages to that queue after four retries. However, in SQS, it is the responsibility of the SQS service itself to move messages to the dead-letter queue when they are not successfully processed after a specified number of retries. Microservice B does not need to explicitly add messages to the dead-letter queue.
Option C suggests creating an SQS queue for failed messages and having Microservice A add failed messages to that queue after Microservice B fails to process the message four times. However, it is more appropriate to use the dead-letter queue feature provided by SQS to handle message retries and failures.
Option D suggests creating an SQS queue for failed messages and configuring it to pull messages from the main SQS queue after the original message has been received four times. This option is not necessary because the dead-letter queue feature provided by SQS automatically moves messages to the dead-letter queue after the specified number of retries.
Therefore, option B, creating an SQS dead-letter queue and configuring the main SQS queue to deliver messages to the dead-letter queue after the message has been received four times, is the correct approach to meet the requirements.
Question 1214
Exam Question
A company needs to share an Amazon S3 bucket with an external vendor. The bucket owner must be able to access all objects.
Which action should be taken to share the S3 bucket?
A. Update the bucket to be a Requester Pays bucket.
B. Update the bucket to enable cross-origin resource sharing (CORS).
C. Create a bucket policy to require users to grant bucket-owner-full-control when uploading objects.
D. Create an IAM policy to require users to grant bucket-owner-full-control when uploading objects.
Correct Answer
C. Create a bucket policy to require users to grant bucket-owner-full-control when uploading objects.
Explanation
To share an Amazon S3 bucket with an external vendor while ensuring that the bucket owner has access to all objects, you should create a bucket policy that requires users to grant bucket-owner-full-control when uploading objects.
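For illustration, here is a sketch of such a bucket policy applied with boto3. It denies any upload that does not grant the bucket owner full control of the object; the bucket name is hypothetical. (On buckets created with the newer bucket owner enforced object ownership setting, the owner owns all objects automatically and an ACL condition like this is unnecessary.)

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny uploads that do not grant the bucket owner full control of the object.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireBucketOwnerFullControl",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-shared-bucket/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            },
        }
    ],
}
s3.put_bucket_policy(Bucket="example-shared-bucket", Policy=json.dumps(policy))
```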
Option A, updating the bucket to be a Requester Pays bucket, is not relevant to the requirement of allowing the bucket owner to access all objects. Requester Pays is used to require the requester (external users) to pay for the data transfer and request costs.
Option B, enabling cross-origin resource sharing (CORS), is used to control access to resources from different origins (domains) in web browsers. It is not directly related to sharing the bucket with an external vendor.
Option D, creating an IAM policy to require users to grant bucket-owner-full-control when uploading objects, is not the recommended approach for sharing the bucket. IAM policies are used to manage permissions for IAM users and roles within an AWS account, but they do not provide the mechanism to enforce access requirements for external users.
Therefore, option C, creating a bucket policy to require users to grant bucket-owner-full-control when uploading objects, is the appropriate action to take in order to share the S3 bucket while ensuring that the bucket owner has access to all objects.
Question 1215
Exam Question
A company is launching an ecommerce website on AWS. This website is built with a three-tier architecture that includes a MySQL database in a Multi-AZ deployment of Amazon Aurora MySQL. The website application must be highly available and will initially be launched in an AWS Region with three Availability Zones. The application produces a metric that describes the load the application experiences.
Which solution meets these requirements?
A. Configure an Application Load Balancer (ALB) with Amazon EC2 Auto Scaling behind the ALB with scheduled scaling.
B. Configure an Application Load Balancer (ALB) and Amazon EC2 Auto Scaling behind the ALB with a simple scaling policy.
C. Configure a Network Load Balancer (NLB) and launch a Spot Fleet with Amazon EC2 Auto Scaling behind the NLB.
D. Configure an Application Load Balancer (ALB) and Amazon EC2 Auto Scaling behind the ALB with a target tracking scaling policy.
Correct Answer
D. Configure an Application Load Balancer (ALB) and Amazon EC2 Auto Scaling behind the ALB with a target tracking scaling policy.
Explanation
To meet the requirements of a highly available ecommerce website with a three-tier architecture and an Amazon Aurora Multi-AZ deployment, the best solution is to configure an Application Load Balancer (ALB) and Amazon EC2 Auto Scaling behind the ALB with a target tracking scaling policy.
An Application Load Balancer (ALB) distributes incoming traffic to multiple EC2 instances running the website application, providing high availability and load balancing across multiple Availability Zones.
Amazon EC2 Auto Scaling automatically adjusts the number of instances in the Auto Scaling group based on the configured scaling policy. This ensures that the application can handle the expected load and automatically scales up or down based on demand.
A target tracking scaling policy is a type of scaling policy that adjusts the desired capacity of the Auto Scaling group to maintain a specified target value for a specific metric. In this case, the target tracking scaling policy can be based on the metric that describes the load the application experiences, ensuring that the capacity of the Auto Scaling group is dynamically adjusted to handle the load.
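As a minimal boto3 sketch, a target tracking policy can track the application's own load metric through a customized metric specification; the group, policy, metric, and namespace names below are hypothetical, as is the target value.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep the group sized so the application's own load metric averages near the
# target; Auto Scaling adds or removes instances as the metric moves.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",      # hypothetical group
    PolicyName="track-application-load",      # hypothetical policy name
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApplicationLoad",  # hypothetical custom metric
            "Namespace": "Ecommerce/WebApp",  # hypothetical namespace
            "Statistic": "Average",
        },
        "TargetValue": 100.0,                 # hypothetical target
    },
)
```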
Option A, configuring an Application Load Balancer (ALB) with Amazon EC2 Auto Scaling behind the ALB with scheduled scaling, does not provide dynamic scaling based on the load metric. Scheduled scaling relies on predefined schedules and may not be responsive to real-time changes in the load.
Option B, configuring an Application Load Balancer (ALB) and Amazon EC2 Auto Scaling behind the ALB with a simple scaling policy, does not provide the flexibility of dynamic scaling based on the load metric. Simple scaling policies rely on static thresholds and may not be able to handle varying levels of load.
Option C, configuring a Network Load Balancer (NLB) and launching a Spot Fleet with Amazon EC2 Auto Scaling behind the NLB, relies on Spot capacity that can be interrupted, which works against the high availability requirement, and it does not specify a target tracking scaling policy to adjust capacity based on the load metric.
Therefore, option D, configuring an Application Load Balancer (ALB) and Amazon EC2 Auto Scaling behind the ALB with a target tracking scaling policy, is the most appropriate solution for achieving high availability and dynamic scaling based on the load metric in a Multi-AZ deployment.
Question 1216
Exam Question
A company has multiple AWS accounts for various departments. One of the departments wants to share an Amazon S3 bucket with all other departments.
Which solution will require the LEAST amount of effort?
A. Enable cross-account S3 replication for the bucket.
B. Create a pre-signed URL for the bucket and share it with other departments.
C. Set the S3 bucket policy to allow cross-account access to other departments.
D. Create IAM users for each of the departments and configure a read-only IAM policy.
Correct Answer
C. Set the S3 bucket policy to allow cross-account access to other departments.
Explanation
Among the provided options, setting the S3 bucket policy to allow cross-account access to other departments requires the least amount of effort to share the Amazon S3 bucket with multiple AWS accounts.
By setting the S3 bucket policy, you can define the access permissions for the bucket at a more granular level, including cross-account access. You can specify the AWS accounts that are allowed to access the bucket and the actions they can perform.
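For illustration, here is a sketch of a bucket policy granting read access to one other department's account; the account ID and bucket name are hypothetical, and the principal list would be extended for each additional department.

```python
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowDepartmentAccountRead",
            "Effect": "Allow",
            # The root principal delegates access to the other account.
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-department-bucket",
                "arn:aws:s3:::example-department-bucket/*",
            ],
        }
    ],
}
s3.put_bucket_policy(
    Bucket="example-department-bucket", Policy=json.dumps(policy)
)
```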
This approach avoids the need for additional configurations or setup, such as enabling cross-account S3 replication (Option A), creating pre-signed URLs for each department (Option B), or creating IAM users and configuring IAM policies (Option D).
Therefore, setting the S3 bucket policy to allow cross-account access is the simplest and least effort-intensive solution to share the Amazon S3 bucket with other departments in multiple AWS accounts.
Question 1217
Exam Question
A company is re-architecting a tightly coupled application to be loosely coupled. Previously, the application used a request/response pattern to communicate between tiers. The company plans to use Amazon Simple Queue Service (Amazon SQS) to achieve its decoupling requirements. The initial design contains one queue for requests and one for responses. However, this approach is not processing all the messages as the application scales.
What should a solutions architect do to resolve this issue?
A. Configure a dead-letter queue on the ReceiveMessage API action of the SQS queue.
B. Configure a FIFO queue, and use the message deduplication ID and message group ID.
C. Create a temporary queue, with the Temporary Queue Client to receive each response message.
D. Create a queue for each request and response on startup for each producer, and use a correlation ID message attribute.
Correct Answer
D. Create a queue for each request and response on startup for each producer, and use a correlation ID message attribute.
Explanation
To resolve the issue of not processing all messages when using Amazon SQS for achieving decoupling requirements, a solutions architect should create a separate queue for each request and response on startup for each producer and use a correlation ID message attribute.
By creating a separate queue for each request and response, it ensures that each message is delivered to the intended recipient. This approach allows for better scalability and ensures that messages are not missed or lost due to the scaling of the application.
Using a correlation ID message attribute helps to associate a response message with the corresponding request message, enabling proper message routing and processing. The application can use this correlation ID to match response messages with the original request messages.
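As a minimal boto3 sketch of the producer side, each request carries a correlation ID and the name of the producer's own response queue as message attributes; all queue URLs and attribute names are hypothetical conventions of the application, not an SQS-defined protocol.

```python
import json
import uuid
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URLs: one shared request queue, plus a response queue
# created by this producer at startup.
REQUEST_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/requests"
MY_RESPONSE_QUEUE = (
    "https://sqs.us-east-1.amazonaws.com/123456789012/producer-1-responses"
)

# Producer: tag each request with a correlation ID and a reply-to queue.
correlation_id = str(uuid.uuid4())
sqs.send_message(
    QueueUrl=REQUEST_QUEUE,
    MessageBody=json.dumps({"action": "save-score", "score": 42}),
    MessageAttributes={
        "CorrelationId": {"DataType": "String", "StringValue": correlation_id},
        "ReplyTo": {"DataType": "String", "StringValue": MY_RESPONSE_QUEUE},
    },
)
# The consumer (Microservice B) sends its response to the ReplyTo queue and
# echoes CorrelationId, so the producer can match responses to requests.
```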
Options A, B, and C do not directly address the issue of not processing all messages when scaling the application.
Option A, configuring a dead-letter queue, is used for handling messages that cannot be processed successfully after a certain number of retries. It does not address the issue of missing messages.
Option B, configuring a FIFO queue with message deduplication ID and message group ID, is useful when strict message ordering is required, but it does not directly address the issue of processing all messages when scaling.
Option C, creating a temporary queue with the Temporary Queue Client to receive each response message, is not a recommended approach for resolving the issue. It adds complexity and may not be necessary to address the problem of missing messages during scaling.
Therefore, creating a queue for each request and response on startup for each producer and using a correlation ID message attribute is the appropriate approach to resolve the issue and ensure proper message processing when using Amazon SQS for achieving loose coupling.
Question 1218
Exam Question
A solutions architect is performing a security review of a recently migrated workload. The workload is a web application that consists of Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The solutions architect must improve the security posture and minimize the impact of a DDoS attack on resources.
Which solution is MOST effective?
A. Configure an AWS WAF ACL with rate-based rules. Create an Amazon CloudFront distribution that points to the Application Load Balancer. Enable the WAF ACL on the CloudFront distribution.
B. Create a custom AWS Lambda function that adds identified attacks into a common vulnerability pool to capture a potential DDoS attack. Use the identified information to modify a network ACL to block access.
C. Enable VPC Flow Logs and store them in Amazon S3. Create a custom AWS Lambda function that parses the logs looking for a DDoS attack. Modify a network ACL to block identified source IP addresses.
D. Enable Amazon GuardDuty and configure findings to be written to Amazon CloudWatch. Create a CloudWatch Events rule for DDoS alerts that triggers Amazon Simple Notification Service (Amazon SNS). Have Amazon SNS invoke a custom AWS Lambda function that parses the logs looking for a DDoS attack. Modify a network ACL to block identified source IP addresses.
Correct Answer
A. Configure an AWS WAF ACL with rate-based rules. Create an Amazon CloudFront distribution that points to the Application Load Balancer. Enable the WAF ACL on the CloudFront distribution.
Explanation
The most effective solution to improve the security posture and minimize the impact of a DDoS attack on resources is to configure an AWS WAF (Web Application Firewall) ACL (Access Control List) with rate-based rules, create an Amazon CloudFront distribution that points to the Application Load Balancer, and enable the WAF ACL on the CloudFront distribution.
AWS WAF allows you to create rules to filter and monitor HTTP or HTTPS requests based on specific conditions. Rate-based rules help protect against DDoS attacks by limiting the number of requests from a particular source IP address over time. By configuring rate-based rules in the AWS WAF ACL, you can control and block excessive or malicious traffic that may be part of a DDoS attack.
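As a minimal boto3 sketch, a rate-based rule blocks any single IP address that exceeds a request threshold over a five-minute window; note that web ACLs for CloudFront must be created in us-east-1 with the CLOUDFRONT scope. All names and the limit value are hypothetical.

```python
import boto3

# WAF ACLs attached to CloudFront are created in us-east-1, Scope=CLOUDFRONT.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="ddos-rate-limit",  # hypothetical name
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "block-high-request-rates",
            "Priority": 0,
            # Block any single IP exceeding 2,000 requests per 5 minutes.
            "Statement": {
                "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "RateLimitRule",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "DdosRateLimitAcl",
    },
)
```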
By creating an Amazon CloudFront distribution and pointing it to the Application Load Balancer, you can distribute the traffic globally and benefit from CloudFront’s built-in DDoS protection and scalability features. CloudFront acts as a content delivery network (CDN) and can absorb and mitigate DDoS attacks at the edge locations, reducing the impact on the underlying resources.
Enabling the AWS WAF ACL on the CloudFront distribution ensures that the traffic passing through CloudFront is filtered and protected by the defined rules, including the rate-based rules to mitigate DDoS attacks.
Option B, creating a custom AWS Lambda function to add identified attacks into a common vulnerability pool and modifying a network ACL to block access, is not as effective as using AWS WAF with rate-based rules. It requires more manual configuration and does not provide the same level of DDoS protection and scalability as the combined solution of AWS WAF and CloudFront.
Option C, enabling VPC Flow Logs and creating a custom AWS Lambda function to parse the logs for a DDoS attack and modify a network ACL, is not as effective as the AWS WAF and CloudFront solution. VPC Flow Logs provide visibility into the network traffic, but they do not provide real-time protection against DDoS attacks.
Option D, enabling Amazon GuardDuty, configuring CloudWatch findings, and using CloudWatch Events and AWS Lambda to parse logs and modify a network ACL, provides some level of threat detection and response, but it does not provide the same level of DDoS protection and real-time mitigation as the AWS WAF and CloudFront solution.
Therefore, configuring an AWS WAF ACL with rate-based rules, creating an Amazon CloudFront distribution, and enabling the WAF ACL on the CloudFront distribution is the most effective solution to improve security and minimize the impact of a DDoS attack on resources.
Question 1219
Exam Question
A company is building its web application using containers on AWS. The company requires three instances of the web application to run at all times. The application must be able to scale to meet increases in demand. Management is extremely sensitive to cost but agrees that the application should be highly available.
What should a solutions architect recommend?
A. Create an Amazon Elastic Container Service (Amazon ECS) cluster using the Fargate launch type. Create a task definition for the web application. Create an ECS service with a desired count of three tasks.
B. Create an Amazon Elastic Container Service (Amazon ECS) cluster using the Amazon EC2 launch type with three container instances in one Availability Zone. Create a task definition for the web application. Place one task for each container instance.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster using the Fargate launch type with one container instance in three different Availability Zones. Create a task definition for the web application. Create an ECS service with a desired count of three tasks.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster using the Amazon EC2 launch type with one container instance in two different Availability Zones. Create a task definition for the web application. Place two tasks on one container instance and one task on the remaining container instance.
Correct Answer
A. Create an Amazon Elastic Container Service (Amazon ECS) cluster using the Fargate launch type. Create a task definition for the web application. Create an ECS service with a desired count of three tasks.
Explanation
In order to meet the requirement of having three instances of the web application running at all times, along with the ability to scale to meet increases in demand, a solutions architect should recommend the following approach:
- Create an Amazon ECS cluster using the Fargate launch type. Fargate allows you to run containers without managing the underlying infrastructure.
- Create a task definition for the web application. The task definition specifies how the container should be run, including the container image, resource requirements, and any container dependencies.
- Create an ECS service with a desired count of three tasks. The service will ensure that the specified number of tasks are always running. If a task fails or is terminated, the service will automatically launch a new one to maintain the desired count.
This approach provides high availability for the web application as it ensures that there are always three instances running. It also allows for scalability by automatically adding or removing tasks based on the desired count specified in the ECS service.
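As a minimal boto3 sketch, the service below keeps three Fargate tasks running, spread across the subnets supplied in its network configuration (one per Availability Zone for high availability); all cluster, service, task definition, subnet, and security group identifiers are hypothetical.

```python
import boto3

ecs = boto3.client("ecs")

# Keep three copies of the task running; subnets in different Availability
# Zones give the service high availability.
ecs.create_service(
    cluster="web-app-cluster",          # hypothetical cluster
    serviceName="web-app-service",      # hypothetical service
    taskDefinition="web-app-task:1",    # hypothetical task definition
    desiredCount=3,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaa111", "subnet-bbb222", "subnet-ccc333"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```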
Option B, which suggests creating an Amazon ECS cluster using the EC2 launch type with three container instances in one Availability Zone, does not provide the same level of scalability as the Fargate launch type. With EC2 launch type, you would need to manually manage and scale the underlying EC2 instances to meet the demand.
Option C, creating an ECS cluster using the Fargate launch type with one container instance in three different Availability Zones, is internally inconsistent: Fargate does not expose container instances to manage. An ECS service on Fargate, as in option A, already spreads the three tasks across Availability Zones without this additional cost and complexity.
Option D, creating an ECS cluster using the EC2 launch type with one container instance in two different Availability Zones and placing two tasks on one instance and one task on the other, does not provide the same level of high availability and scalability as the Fargate launch type. Additionally, it introduces a single point of failure with one instance running multiple tasks.
Therefore, the recommended approach is to create an Amazon ECS cluster using the Fargate launch type, create a task definition for the web application, and create an ECS service with a desired count of three tasks.
Question 1220
Exam Question
A company hosts its website on AWS. To address the highly variable demand, the company has implemented Amazon EC2 Auto Scaling. Management is concerned that the company is over-provisioning its infrastructure, especially at the front end of the three-tier application. A solutions architect needs to ensure costs are optimized without impacting performance.
What should the solutions architect do to accomplish this?
A. Use Auto Scaling with Reserved Instances.
B. Use Auto Scaling with a scheduled scaling policy.
C. Use Auto Scaling with the suspend-resume feature.
D. Use Auto Scaling with a target tracking scaling policy.
Correct Answer
D. Use Auto Scaling with a target tracking scaling policy.
Explanation
To optimize costs without impacting performance, a solutions architect should use Auto Scaling with a target tracking scaling policy. This policy automatically adjusts the number of EC2 instances based on a predefined target metric, such as CPU utilization or request count per instance.
By using a target tracking scaling policy, the Auto Scaling group can dynamically scale the number of instances up or down to maintain the desired target metric. This ensures that the infrastructure scales in response to the demand, preventing over-provisioning during low-traffic periods and ensuring sufficient capacity during high-traffic periods.
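As a minimal boto3 sketch, a target tracking policy for the front-end tier can track ALB requests per target instead of CPU utilization; the resource label identifies a hypothetical load balancer and target group, and the target value is an assumption.

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="front-end-asg",  # hypothetical group
    PolicyName="requests-per-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ALBRequestCountPerTarget",
            # Identifies the ALB and target group (hypothetical IDs).
            "ResourceLabel": (
                "app/front-end-alb/0123456789abcdef"
                "/targetgroup/front-end-tg/fedcba9876543210"
            ),
        },
        # Roughly 1,000 requests per instance before scaling out (assumption).
        "TargetValue": 1000.0,
    },
)
```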
Option A, using Auto Scaling with Reserved Instances, helps optimize costs by providing discounted pricing for a specified amount of capacity. However, it does not dynamically adjust the number of instances based on demand, so it may not be suitable for highly variable workloads.
Option B, using Auto Scaling with a scheduled scaling policy, allows for predefined scaling actions at specific times. While this can be useful for predictable traffic patterns, it may not effectively handle highly variable demand.
Option C, using Auto Scaling with the suspend-resume feature, allows for temporarily suspending and resuming scaling activities. However, this is not a dynamic scaling solution and requires manual intervention, which may not be suitable for optimizing costs based on variable demand.
Therefore, the recommended approach is to use Auto Scaling with a target tracking scaling policy, which allows for automatic scaling based on predefined target metrics, ensuring optimized costs without impacting performance.