
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 9

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free of charge to help you prepare for the SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.

Question 801

Exam Question

A company hosts its enterprise content management platform in one AWS Region but needs to operate the platform across multiple Regions. The company has an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that runs its microservices. The EKS cluster stores and retrieves objects from Amazon S3. The EKS cluster also stores and retrieves metadata from Amazon DynamoDB.

Which combination of steps should a solutions architect take to deploy the platform across multiple Regions? (Choose two.)

A. Replicate the EKS cluster with cross-Region replication.
B. Use Amazon API Gateway to create a global endpoint to the EKS cluster.
C. Use AWS Global Accelerator endpoints to distribute the traffic to multiple Regions.
D. Use Amazon S3 access points to give access to the objects across multiple Regions. Configure DynamoDB Accelerator (DAX). Connect DAX to the relevant tables.
E. Deploy an EKS cluster and an S3 bucket in another Region. Configure cross-Region replication on both S3 buckets. Turn on global tables for DynamoDB.

Correct Answer

C. Use AWS Global Accelerator endpoints to distribute the traffic to multiple Regions.
E. Deploy an EKS cluster and an S3 bucket in another Region. Configure cross-Region replication on both S3 buckets. Turn on global tables for DynamoDB.

Explanation

To deploy the enterprise content management platform across multiple regions, the following steps should be taken:

C. Use AWS Global Accelerator endpoints: AWS Global Accelerator can be used to distribute traffic across multiple AWS Regions, improving the availability and performance of the platform.

E. Deploy an EKS cluster and an S3 bucket in another Region: By deploying an EKS cluster and an S3 bucket in another region, you can ensure redundancy and availability. Cross-Region replication should be configured on both S3 buckets to keep the objects synchronized. Additionally, turning on global tables for DynamoDB enables the replication of metadata across multiple Regions, ensuring data availability and consistency.

Option A is incorrect because Amazon EKS has no built-in cross-Region cluster replication feature; a separate cluster must be deployed in each Region.

Option B is incorrect because Amazon API Gateway does not provide a multi-Region global endpoint for an EKS cluster, so it would not distribute traffic across Regions on its own.

Option D is incorrect because S3 access points control access to objects within a Region rather than replicating them across Regions, and DynamoDB Accelerator (DAX) is an in-Region caching layer; neither step makes the platform multi-Region.
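The two pieces of option E can be sketched as configuration documents. Below is a minimal sketch of the S3 cross-Region replication rule (the shape accepted by boto3's `put_bucket_replication`); the bucket names and IAM role ARN are hypothetical placeholders, and versioning must already be enabled on both buckets.

```python
# Sketch of an S3 cross-Region replication configuration. Applying the
# same document with source and destination swapped on the second
# bucket gives two-way replication. All ARNs are hypothetical.

def build_replication_config(role_arn: str, dest_bucket_arn: str) -> dict:
    """Replicate all new objects in the source bucket to the bucket
    in the other Region."""
    return {
        "Role": role_arn,
        "Rules": [
            {
                "ID": "replicate-all-objects",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # empty filter = every object
                "DeleteMarkerReplication": {"Status": "Enabled"},
                "Destination": {"Bucket": dest_bucket_arn},
            }
        ],
    }

config = build_replication_config(
    "arn:aws:iam::123456789012:role/s3-crr-role",  # hypothetical role
    "arn:aws:s3:::cms-objects-us-west-2",          # hypothetical bucket
)
```

With boto3 this document would be supplied as `s3.put_bucket_replication(Bucket="cms-objects-us-east-1", ReplicationConfiguration=config)`, and the DynamoDB global table would be enabled separately, for example with `dynamodb.update_table(TableName=..., ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}])`.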

Question 802

Exam Question

A recently acquired company is required to build its own infrastructure on AWS and migrate multiple applications to the cloud within a month. Each application has approximately 50 TB of data to be transferred. After the migration is complete, this company and its parent company will both require secure network connectivity with consistent throughput from their data centers to the applications. A solutions architect must design for a one-time data migration and ongoing network connectivity.

Which solution will meet these requirements?

A. AWS Direct Connect for both the initial transfer and ongoing connectivity.
B. AWS Site-to-Site VPN for both the initial transfer and ongoing connectivity.
C. AWS Snowball for the initial transfer and AWS Direct Connect for ongoing connectivity.
D. AWS Snowball for the initial transfer and AWS Site-to-Site VPN for ongoing connectivity.

Correct Answer

C. AWS Snowball for the initial transfer and AWS Direct Connect for ongoing connectivity.

Explanation

To meet the requirements of one-time data migration and ongoing network connectivity, the following solution is recommended:

C. AWS Snowball for the initial transfer: AWS Snowball is a physical data transport solution that enables secure and efficient transfer of large amounts of data to AWS. It is well suited to the one-time migration of approximately 50 TB of data per application from the company’s data centers to AWS within the one-month deadline.

AWS Direct Connect for ongoing connectivity: AWS Direct Connect provides a dedicated network connection between on-premises data centers and AWS. It offers consistent network throughput and low latency, making it suitable for establishing secure and reliable connectivity between the company’s and its parent company’s data centers and the applications hosted in AWS.

Option A is incorrect because a new AWS Direct Connect circuit typically takes weeks to provision, leaving little of the one-month window for transferring roughly 50 TB per application over the network.

Option B is incorrect because an AWS Site-to-Site VPN runs over the public internet and cannot guarantee the consistent throughput the companies require, and transferring this volume of data over a VPN would be slow.

Option D is incorrect because, although AWS Snowball is appropriate for the initial transfer, a Site-to-Site VPN does not provide the consistent throughput required for the ongoing connectivity.
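A back-of-the-envelope calculation shows why the network-only options fail the one-month window. The link speed and efficiency factor below are illustrative assumptions, not figures from the question.

```python
# Rough transfer time for one application's 50 TB over a network link,
# assuming a sustained 1 Gbps circuit at 80% effective throughput.

def transfer_days(terabytes: float, gbps: float, efficiency: float = 0.8) -> float:
    bits = terabytes * 1e12 * 8                 # decimal TB -> bits
    seconds = bits / (gbps * 1e9 * efficiency)  # sustained effective rate
    return seconds / 86400

# About 5.8 days per application under these assumptions; several
# applications quickly consume the month, before even counting the
# weeks-long lead time to provision a Direct Connect circuit.
days = transfer_days(50, 1.0)
```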

Question 803

Exam Question

A company uses AWS to run all components of its three-tier application. The company wants to automatically detect any potential security breaches within the environment. The company wants to track any findings and notify administrators if a potential breach occurs.

Which solution meets these requirements?

A. Set up AWS WAF to evaluate suspicious web traffic. Create AWS Lambda functions to log any findings in Amazon CloudWatch and send email notifications to administrators.
B. Set up AWS Shield to evaluate suspicious web traffic. Create AWS Lambda functions to log any findings in Amazon CloudWatch and send email notifications to administrators.
C. Deploy Amazon Inspector to monitor the environment and generate findings in Amazon CloudWatch. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to notify administrators by email.
D. Deploy Amazon GuardDuty to monitor the environment and generate findings in Amazon CloudWatch. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to notify administrators by email.

Correct Answer

D. Deploy Amazon GuardDuty to monitor the environment and generate findings in Amazon CloudWatch. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to notify administrators by email.

Explanation

To automatically detect potential security breaches within the environment and track findings with notifications to administrators, the following solution is recommended:

D. Deploy Amazon GuardDuty: Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior within an AWS environment. It generates findings based on analysis of network traffic, DNS data, and AWS CloudTrail event logs.

Configure an Amazon EventBridge rule: An EventBridge rule can be set up to trigger actions based on specific events. In this case, the rule should be configured to detect new GuardDuty findings and publish a message to an Amazon SNS topic.

Publish a message to Amazon SNS: By publishing a message to an Amazon SNS topic, administrators can be notified via email or other subscribed endpoints about potential security breaches detected by GuardDuty.

Option A is incorrect because AWS WAF is primarily used for web application firewall protection and not for detecting potential security breaches within the environment.

Option B is incorrect because AWS Shield is a DDoS protection service and not suitable for detecting other types of security breaches.

Option C is incorrect because while Amazon Inspector is a vulnerability assessment service, it does not continuously monitor for potential security breaches. Additionally, it does not natively provide email notifications, requiring the use of other services like Amazon SNS for notification purposes. Amazon GuardDuty is a more suitable option for continuous monitoring and detection of potential security breaches.
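The EventBridge rule described above matches on GuardDuty findings and forwards them to the SNS topic. The sketch below shows an event pattern for such a rule; the severity filter and rule name are illustrative additions, not part of the question.

```python
# Sketch of an EventBridge event pattern matching GuardDuty findings.
import json

event_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    # Optional refinement: only notify on medium severity and above.
    "detail": {"severity": [{"numeric": [">=", 4]}]},
}

# With boto3 this would be supplied roughly as:
#   events.put_rule(Name="guardduty-findings",
#                   EventPattern=json.dumps(event_pattern))
# with the SNS topic attached via events.put_targets(...).
pattern_json = json.dumps(event_pattern)
```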

Reference

AWS > Documentation > Amazon Simple Notification Service > Developer Guide > Amazon SNS event sources

Question 804

Exam Question

An application running on AWS uses an Amazon Aurora Multi-AZ deployment for its database. When evaluating performance metrics, a solutions architect discovered that the database reads are causing high I/O and adding latency to the write requests against the database.

What should the solutions architect do to separate the read requests from the write requests?

A. Enable read-through caching on the Amazon Aurora database.
B. Update the application to read from the Multi-AZ standby instance.
C. Create a read replica and modify the application to use the appropriate endpoint.
D. Create a second Amazon Aurora database and link it to the primary database as a read replica.

Correct Answer

C. Create a read replica and modify the application to use the appropriate endpoint.

Explanation

To separate the read requests from the write requests and improve performance, the following solution is recommended:

C. Create a read replica: By creating a read replica of the Amazon Aurora database, you can offload read traffic from the primary database to the replica. The replica will handle read requests, reducing the I/O and latency impact on write requests.

Modify the application: The application needs to be updated to use the appropriate endpoint for read requests. The application should be configured to send read queries to the read replica’s endpoint, while write queries are sent to the primary database’s endpoint.

Option A is incorrect because read-through caching is not applicable to Amazon Aurora. It is a feature provided by other caching mechanisms like Amazon ElastiCache.

Option B is incorrect because reading from the Multi-AZ standby instance is not a recommended practice. The standby instance is meant for failover purposes and should not be used for read operations.

Option D is incorrect because creating a completely separate Aurora database and linking it as a read replica adds unnecessary complexity and cost. An Aurora Replica is simply an additional reader instance within the existing DB cluster, which is exactly what option C describes.
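The "modify the application" step amounts to routing statements by endpoint. A minimal sketch, assuming hypothetical hostnames: Aurora exposes a cluster (writer) endpoint and a reader endpoint that load-balances across replicas, and the application picks between them per statement.

```python
# Sketch of read/write endpoint selection in the application layer.
# Hostnames are hypothetical placeholders for the cluster's endpoints.

WRITER_ENDPOINT = "mydb.cluster-abc123.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "mydb.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

def endpoint_for(sql: str) -> str:
    """Send read-only statements to the reader endpoint, everything
    else (INSERT, UPDATE, DELETE, DDL) to the writer endpoint."""
    first_word = sql.lstrip().split(None, 1)[0].upper()
    return READER_ENDPOINT if first_word == "SELECT" else WRITER_ENDPOINT
```

In practice many drivers and proxies (for example, a read/write splitting layer) implement this routing, but the principle is the same: reads target the reader endpoint so replica capacity absorbs the I/O.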

Reference

AWS > Documentation > Amazon RDS > User Guide > Working with DB instance read replicas

Question 805

Exam Question

A company is running a multi-tier ecommerce web application in the AWS Cloud. The web application is running on Amazon EC2 instances. The database tier is on a provisioned Amazon Aurora MySQL DB cluster with a writer and a reader in a Multi-AZ environment. The new requirement for the database tier is to serve the application to achieve continuous write availability through an instance failover.

What should a solutions architect do to meet this new requirement?

A. Add a new AWS Region to the DB cluster for multiple writes.
B. Add a new reader in the same Availability Zone as the writer.
C. Migrate the database tier to an Aurora multi-master cluster.
D. Migrate the database tier to an Aurora DB cluster with parallel query enabled.

Correct Answer

C. Migrate the database tier to an Aurora multi-master cluster.

Explanation

Bring-your-own-shard (BYOS): A situation where you already have a database schema and associated applications that use sharding. You can transfer such deployments relatively easily to Aurora multi-master clusters. In this case, you can devote your effort to investigating the Aurora benefits such as server consolidation and high availability. You don’t need to create new application logic to handle multiple connections for write requests.

Global read-after-write (GRAW): A setting that introduces synchronization so that any read operations always see the most current state of the data. By default, the data seen by a read operation in a multi-master cluster is subject to replication lag, typically a few milliseconds. During this brief interval, a query on one DB instance might retrieve stale data if the same data is modified at the same time by a different DB instance. To enable this setting, change aurora_mm_session_consistency_level from its default setting of INSTANCE_RAW to REGIONAL_RAW. Doing so ensures cluster-wide consistency for read operations regardless of the DB instances that perform the reads and writes. For details on GRAW mode, see Consistency model for multi-master clusters.
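The GRAW setting described above is a session-level variable, so enabling it is a single SQL statement issued on each connection. A minimal sketch, assuming any DB-API compatible MySQL driver supplies the cursor:

```python
# Sketch of enabling global read-after-write consistency for a session
# on an Aurora multi-master cluster, per the documentation quoted above.
GRAW_SQL = "SET SESSION aurora_mm_session_consistency_level = 'REGIONAL_RAW'"

def enable_graw(cursor) -> None:
    """Ensure cluster-wide read-after-write consistency for this
    session, at the cost of slightly higher read latency."""
    cursor.execute(GRAW_SQL)
```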

To achieve continuous write availability through an instance failover in the database tier, the following solution is recommended:

C. Migrate the database tier to an Aurora multi-master cluster: An Aurora multi-master cluster allows for simultaneous writes to multiple instances in the cluster. This enables continuous write availability even during an instance failover. With multi-master, write requests can be distributed across multiple instances, providing higher availability and fault tolerance.

Option A is incorrect because adding a new AWS Region to the DB cluster does not inherently provide continuous write availability through instance failover. It introduces complexity by expanding the deployment across multiple regions.

Option B is incorrect because adding a new reader in the same Availability Zone as the writer does not address the requirement of achieving continuous write availability through an instance failover. Readers are not capable of handling writes in an Amazon Aurora DB cluster.

Option D is incorrect because enabling parallel query in an Aurora DB cluster does not address the requirement of achieving continuous write availability through an instance failover. Parallel query improves query performance by utilizing multiple CPU threads, but it does not provide continuous write availability during an instance failover.

Reference

AWS > Documentation > Amazon RDS > User Guide for Aurora > Replication with Amazon Aurora MySQL

Question 806

Exam Question

A company runs a multi-tier web application that hosts news content. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones and use an Amazon Aurora database. A solutions architect needs to make the application more resilient to periodic increases in request rates.

Which architecture should the solutions architect implement? (Choose two.)

A. Add AWS Shield.
B. Add Aurora Replica.
C. Add AWS Direct Connect.
D. Add AWS Global Accelerator.
E. Add an Amazon CloudFront distribution in front of the Application Load Balancer.

Correct Answer

B. Add Aurora Replica.
E. Add an Amazon CloudFront distribution in front of the Application Load Balancer.

Explanation

AWS Global Accelerator –
Acceleration for latency-sensitive applications
Many applications, especially in areas such as gaming, media, mobile apps, and financials, require very low latency for a great user experience. To improve the user experience, Global Accelerator directs user traffic to the application endpoint that is nearest to the client, which reduces internet latency and jitter. Global Accelerator routes traffic to the closest edge location by using Anycast, and then routes it to the closest regional endpoint over the AWS global network. Global Accelerator quickly reacts to changes in network performance to improve your users’ application performance.

Amazon CloudFront –
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment.

To make the application more resilient to periodic increases in request rates, the following architectures should be implemented:

B. Add Aurora Replica: Adding an Aurora Replica to the Amazon Aurora database provides read scalability. The replica can handle read requests, offloading the read traffic from the primary database and improving overall application performance.

E. Add an Amazon CloudFront distribution: Adding an Amazon CloudFront distribution in front of the Application Load Balancer provides several benefits. CloudFront acts as a content delivery network (CDN), caching static and dynamic content at edge locations closer to the users. This helps reduce latency and improves the application’s ability to handle periodic increases in request rates by distributing the load across multiple edge locations.

Option A is incorrect because adding AWS Shield alone does not directly address the need to make the application more resilient to periodic increases in request rates. AWS Shield is a DDoS protection service that helps protect against DDoS attacks.

Option C is incorrect because adding AWS Direct Connect does not directly address the need to make the application more resilient to periodic increases in request rates. AWS Direct Connect is used to establish dedicated network connectivity between on-premises data centers and AWS.

Option D is incorrect because adding AWS Global Accelerator alone does not directly address the need to make the application more resilient to periodic increases in request rates. AWS Global Accelerator is a service that improves the availability and performance of applications by directing traffic through the global AWS network backbone.

Reference

AWS > Documentation > AWS Global Accelerator > Developer Guide > AWS Global Accelerator use cases

Question 807

Exam Question

A development team is creating an event-based application that uses AWS Lambda functions. Events will be generated when files are added to an Amazon S3 bucket. The development team currently has Amazon Simple Notification Service (Amazon SNS) configured as the event target for Amazon S3.

What should a solutions architect do to process the events from Amazon S3 in a scalable way?

A. Create an SNS subscription that processes the event in Amazon Elastic Container Service (Amazon ECS) before the event runs in Lambda.
B. Create an SNS subscription that processes the event in Amazon Elastic Kubernetes Service (Amazon EKS) before the event runs in Lambda.
C. Create an SNS subscription that sends the event to Amazon Simple Queue Service (Amazon SQS). Configure the SQS queue to trigger a Lambda function.
D. Create an SNS subscription that sends the event to AWS Server Migration Service (AWS SMS). Configure the Lambda function to poll from the SMS event.

Correct Answer

C. Create an SNS subscription that sends the event to Amazon Simple Queue Service (Amazon SQS). Configure the SQS queue to trigger a Lambda function.

Explanation

You can subscribe one or more Amazon SQS queues to an Amazon Simple Notification Service (Amazon SNS) topic. When you publish a message to a topic, Amazon SNS sends the message to each of the subscribed queues. Amazon SQS manages the subscription and any necessary permissions. For more information about Amazon SNS, see What is Amazon Simple Notification Service? in the Amazon Simple Notification Service Developer Guide.

When you subscribe an Amazon SQS queue to an SNS topic, Amazon SNS uses HTTPS to forward messages to Amazon SQS. For more information about using Amazon SNS with encrypted Amazon SQS queues, see Configure KMS permissions for AWS services.

To process events from Amazon S3 in a scalable way, the following solution is recommended:

C. Create an SNS subscription that sends the event to Amazon SQS: Set up an SNS subscription that forwards the events triggered by Amazon S3 to an Amazon SQS queue. This decouples the event source (S3) from the processing (Lambda), allowing for scalability and resilience.

Configure the SQS queue to trigger a Lambda function: Set up an event source mapping on the SQS queue to trigger a Lambda function. This ensures that the Lambda function is invoked whenever new events are available in the queue, allowing for scalable and asynchronous processing of events.

Options A and B are incorrect because they insert Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS) in front of Lambda to pre-process the events, which adds unnecessary infrastructure and complexity; the SNS-to-SQS-to-Lambda pattern is the direct, scalable approach.

Option D is incorrect because AWS Server Migration Service (AWS SMS) is not designed for processing events from Amazon S3. It is used for migrating on-premises servers to AWS.
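At the end of the S3-to-SNS-to-SQS chain, each SQS record's body is the SNS envelope, whose Message field carries the original S3 event JSON. A minimal sketch of the Lambda handler that unwraps this nesting (names and the sample key are illustrative):

```python
# Sketch of a Lambda handler for SQS records that carry SNS-wrapped
# S3 event notifications.
import json

def handler(event, context=None):
    """Return the S3 object keys referenced by the batch of records."""
    keys = []
    for record in event["Records"]:                # SQS records
        sns_envelope = json.loads(record["body"])  # SNS notification
        s3_event = json.loads(sns_envelope["Message"])
        for s3_record in s3_event["Records"]:
            keys.append(s3_record["s3"]["object"]["key"])
    return keys

# Local smoke test with a hand-built sample event:
sample = {
    "Records": [{
        "body": json.dumps({
            "Message": json.dumps({
                "Records": [{"s3": {"object": {"key": "uploads/report.pdf"}}}]
            })
        })
    }]
}
```

Because the SQS event source mapping invokes the function with batches, Lambda scales out concurrent invocations automatically as the queue depth grows, which is the scalability the question asks for.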

Reference

AWS > Documentation > Amazon Simple Queue Service > Developer Guide > Subscribing an Amazon SQS queue to an Amazon SNS topic (console)

Question 808

Exam Question

A company’s application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. On the first day of every month at midnight, the application becomes much slower when the month-end financial calculation batch executes. This causes the CPU utilization of the EC2 instances to immediately peak to 100%, which disrupts the application.

What should a solutions architect recommend to ensure the application is able to handle the workload and avoid downtime?

A. Configure an Amazon CloudFront distribution in front of the ALB.
B. Configure an EC2 Auto Scaling simple scaling policy based on CPU utilization.
C. Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule.
D. Configure Amazon ElastiCache to remove some of the workload from the EC2 instances.

Correct Answer

C. Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule.

Explanation

Scheduled Scaling for Amazon EC2 Auto Scaling
Scheduled scaling allows you to set your own scaling schedule. For example, let’s say that every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling actions based on the predictable traffic patterns of your web application. Scaling actions are performed automatically as a function of time and date.

To ensure the application can handle the predictable month-end workload and avoid downtime, the following solution is recommended:

C. Configure an EC2 Auto Scaling scheduled scaling policy: The batch job runs at a known time (midnight on the first day of every month), so the Auto Scaling group can be scaled out in advance. Scheduled scaling launches the additional instances before the batch starts, ensuring the capacity is already in service when the CPU demand arrives, and can return the group to its normal size after the batch completes.

Option A is incorrect because an Amazon CloudFront distribution caches and accelerates content delivery; it does not add compute capacity for a CPU-intensive batch job.

Option B is incorrect because a simple scaling policy is reactive: it triggers only after CPU utilization has already spiked to 100%, and newly launched instances take several minutes to become available, so the application is disrupted before the extra capacity arrives. Because the spike is predictable, proactive scheduled scaling is the better fit.

Option D is incorrect because Amazon ElastiCache offloads read and caching workloads; it would not relieve the CPU-intensive month-end calculations running on the EC2 instances.
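The scheduled scaling mechanism quoted above can be sketched as the parameters for two scheduled actions (the shape accepted by boto3's `put_scheduled_update_group_action`). The group name, capacities, and times are hypothetical; in practice the scale-out would be set a few minutes before midnight so instances are in service when the batch starts.

```python
# Sketch of scheduled scaling actions around the month-end batch.
# Recurrence uses standard cron syntax, evaluated in UTC.

def monthly_batch_actions(group: str, peak: int, normal: int) -> list:
    return [
        {   # add capacity for the month-end batch
            "AutoScalingGroupName": group,
            "ScheduledActionName": "month-end-scale-out",
            "Recurrence": "0 0 1 * *",   # midnight, 1st of each month
            "MinSize": peak,
            "DesiredCapacity": peak,
        },
        {   # return to normal capacity after the batch window
            "AutoScalingGroupName": group,
            "ScheduledActionName": "month-end-scale-in",
            "Recurrence": "0 6 1 * *",   # 06:00, 1st of each month
            "MinSize": normal,
            "DesiredCapacity": normal,
        },
    ]

actions = monthly_batch_actions("web-asg", peak=12, normal=4)
```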

Reference

AWS > Documentation > Amazon EC2 Auto Scaling > User Guide > Scheduled scaling for Amazon EC2 Auto Scaling

Question 809

Exam Question

A company wants to use AWS Systems Manager to manage a fleet of Amazon EC2 instances. According to the company’s security requirements, no EC2 instances can have internet access. A solutions architect needs to design network connectivity from the EC2 instances to Systems Manager while fulfilling this security obligation.

Which solution will meet these requirements?

A. Deploy the EC2 instances into a private subnet with no route to the internet.
B. Configure an interface VPC endpoint for Systems Manager. Update routes to use the endpoint.
C. Deploy a NAT gateway into a public subnet. Configure private subnets with a default route to the NAT gateway.
D. Deploy an internet gateway. Configure a network ACL to deny traffic to all destinations except Systems Manager.

Correct Answer

B. Configure an interface VPC endpoint for Systems Manager. Update routes to use the endpoint.

Explanation

To meet the requirement of no internet access for the EC2 instances while allowing network connectivity to AWS Systems Manager, the following solution is recommended:

B. Configure an interface VPC endpoint for Systems Manager: An interface VPC endpoint allows private connectivity between the VPC and AWS services. By configuring an interface VPC endpoint for Systems Manager, the EC2 instances can communicate with Systems Manager securely without requiring internet access. This solution ensures that the security requirement of no internet access is fulfilled while enabling management of the EC2 instances using Systems Manager.

Option A is incorrect because a private subnet with no internet route does not, by itself, provide any path to the Systems Manager service endpoints; without a VPC endpoint, the instances could not reach Systems Manager at all.

Option C is incorrect because deploying a NAT gateway would provide internet access to the private subnets, which is not in line with the security requirement of no internet access for the EC2 instances.

Option D is incorrect because deploying an internet gateway would grant internet access to the EC2 instances, which is not permitted according to the security requirements. Additionally, relying on network ACLs alone may not provide sufficient security controls to enforce the restriction effectively.
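In practice the SSM Agent communicates with three services (ssm, ec2messages, and ssmmessages), so one interface endpoint per service is created in the VPC. A minimal sketch of the parameters (the shape accepted by `ec2.create_vpc_endpoint` in boto3); the VPC, subnet, and security group IDs are hypothetical placeholders.

```python
# Sketch of the interface VPC endpoints Systems Manager needs for
# instances with no internet access.

def ssm_endpoint_params(region: str, vpc_id: str,
                        subnet_ids: list, sg_id: str) -> list:
    services = ["ssm", "ec2messages", "ssmmessages"]
    return [
        {
            "VpcEndpointType": "Interface",
            "VpcId": vpc_id,
            "ServiceName": f"com.amazonaws.{region}.{svc}",
            "SubnetIds": subnet_ids,
            "SecurityGroupIds": [sg_id],
            # Keeps the default SSM hostnames resolving to the endpoint.
            "PrivateDnsEnabled": True,
        }
        for svc in services
    ]

params = ssm_endpoint_params(
    "us-east-1", "vpc-0abc1234",
    ["subnet-0a", "subnet-0b"], "sg-0123",  # hypothetical IDs
)
```

The attached security group must allow inbound HTTPS (port 443) from the instances for the agent traffic to reach the endpoints.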

Reference

AWS > Documentation > AWS Systems Manager > User Guide > What is AWS Systems Manager?

Question 810

Exam Question

A company captures clickstream data from multiple websites and analyzes it using batch processing. The data is loaded nightly into Amazon Redshift and is consumed by business analysts. The company wants to move towards near-real-time data processing for timely insights. The solution should process the streaming data with minimal effort and operational overhead.

Which combination of AWS services are MOST cost-effective for this solution? (Choose two.)

A. Amazon EC2
B. AWS Lambda
C. Amazon Kinesis Data Streams
D. Amazon Kinesis Data Firehose
E. Amazon Kinesis Data Analytics

Correct Answer

D. Amazon Kinesis Data Firehose
E. Amazon Kinesis Data Analytics

Explanation

To achieve near-real-time data processing with minimal effort and operational overhead, the most cost-effective combination of AWS services would be:

D. Amazon Kinesis Data Firehose: Amazon Kinesis Data Firehose is a fully managed service that allows you to reliably capture, transform, and load streaming data into storage and analytics services. It can directly load the streaming data into Amazon Redshift in near real-time without the need for any additional infrastructure or operational management.

E. Amazon Kinesis Data Analytics: Amazon Kinesis Data Analytics enables you to process and analyze streaming data using standard SQL queries. It integrates seamlessly with Amazon Kinesis Data Firehose, allowing you to apply real-time analytics to the streaming data as it flows into Amazon Redshift.

Option A is incorrect because using Amazon EC2 would require manual setup and management of the infrastructure, which would increase operational overhead and may not be the most cost-effective solution.

Option B is incorrect because AWS Lambda is primarily used for serverless compute functions and may not be the most optimal choice for processing and analyzing streaming data at scale.

Option C is incorrect because Amazon Kinesis Data Streams requires you to provision and scale shards and to build and operate your own consumer applications, which adds operational overhead and complexity compared with the fully managed delivery that Kinesis Data Firehose provides.
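The Firehose-to-Redshift path described above is driven by a destination configuration: Firehose stages records in an intermediate S3 bucket and issues a Redshift COPY. A minimal sketch of that configuration document; every ARN, URL, table, and user name here is a hypothetical placeholder, and a real stream would also carry credentials and buffering settings.

```python
# Sketch of a Firehose delivery stream destination targeting Redshift
# (roughly the shape of RedshiftDestinationConfiguration in the
# create_delivery_stream API). All identifiers are hypothetical.

redshift_destination = {
    "RoleARN": "arn:aws:iam::123456789012:role/firehose-redshift",
    "ClusterJDBCURL": (
        "jdbc:redshift://analytics.abc123.us-east-1"
        ".redshift.amazonaws.com:5439/clicks"
    ),
    "CopyCommand": {
        "DataTableName": "clickstream_events",
        "CopyOptions": "json 'auto'",   # records are JSON documents
    },
    "Username": "firehose_user",
    "S3Configuration": {                # staging bucket for the COPY
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-redshift",
        "BucketARN": "arn:aws:s3:::clickstream-staging",
    },
}
```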

Reference

Streaming Data Solutions on AWS