
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 61

These free practice questions and answers (Q&A) for the AWS Certified Solutions Architect – Associate SAA-C03 exam are intended to help you prepare for the exam and earn the AWS Certified Solutions Architect – Associate SAA-C03 certification.

Question 1321

Exam Question

What should a solutions architect do to ensure that all objects uploaded to an Amazon S3 bucket are encrypted?

A. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set.
B. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set to private.
C. Update the bucket policy to deny if the PutObject does not have an aws:SecureTransport header set to true.
D. Update the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set.

Correct Answer

D. Update the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set.

Explanation

To ensure that all objects uploaded to an Amazon S3 bucket are encrypted, a solutions architect should update the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set. Therefore, the correct option is D.

By updating the bucket policy to deny uploads that lack the x-amz-server-side-encryption header, the solutions architect enforces server-side encryption for every object stored in the bucket. Any PutObject request that omits the header is rejected, so unencrypted objects never land in the bucket.
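As an illustration, a bucket policy like the one below can enforce the header. This is a minimal boto3 sketch, and the bucket name is a placeholder:

```python
import json

import boto3

# Hypothetical bucket name, used for illustration only.
BUCKET = "example-media-bucket"

# Deny any PutObject request whose x-amz-server-side-encryption header is
# neither SSE-S3 (AES256) nor SSE-KMS (aws:kms). Because negated condition
# operators also match when the key is absent, requests that omit the header
# entirely are denied as well.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": ["AES256", "aws:kms"]
                }
            },
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```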

Options A and B are not the best choices because they focus on the ACL (Access Control List) settings rather than encryption. While it is important to set appropriate access permissions for objects, it does not guarantee encryption.

Option C, denying requests when the aws:SecureTransport condition is not true, enforces HTTPS for the connection and therefore addresses encryption in transit rather than encryption of the stored objects. While using a secure transport is important, it does not meet the requirement to encrypt the objects at rest.

Therefore, the most appropriate solution is option D: Update the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set.

Question 1322

Exam Question

A solutions architect is designing a multi-Region disaster recovery solution for an application that will provide public API access. The application will use Amazon EC2 instances with a userdata script to load application code and an Amazon RDS for MySQL database. The Recovery Time Objective (RTO) is 3 hours and the Recovery Point Objective (RPO) is 24 hours.

Which architecture would meet these requirements at the LOWEST cost?

A. Use an Application Load Balancer for Region failover. Deploy new EC2 instances with the userdata script. Deploy separate RDS instances in each Region.
B. Use Amazon Route 53 for Region failover. Deploy new EC2 instances with the userdata script. Create a read replica of the RDS instance in a backup Region.
C. Use Amazon API Gateway for the public APIs and Region failover. Deploy new EC2 instances with the userdata script. Create a MySQL read replica of the RDS instance in a backup Region.
D. Use Amazon Route 53 for Region failover. Deploy new EC2 instances with the userdata script for APIs, and create a snapshot of the RDS instance daily for a backup. Replicate the snapshot to a backup Region.

Correct Answer

B. Use Amazon Route 53 for Region failover. Deploy new EC2 instances with the userdata script. Create a read replica of the RDS instance in a backup Region.

Explanation

To meet the given requirements of a multi-Region disaster recovery solution with a Recovery Time Objective (RTO) of 3 hours and a Recovery Point Objective (RPO) of 24 hours at the lowest cost, the most suitable architecture would be option B: Use Amazon Route 53 for Region failover. Deploy new EC2 instances with the userdata script. Create a read replica of the RDS instance in a backup Region.

Option B offers an efficient and cost-effective solution. Here’s how it meets the requirements:

1. Amazon Route 53 for Region failover: By utilizing Amazon Route 53’s failover routing policy, the architect can configure health checks and failover between Regions. In the event of a disaster, Route 53 can automatically route traffic to the backup Region where the application is deployed.

2. Deploy new EC2 instances with the userdata script: The userdata script can be used to automate the deployment and configuration of EC2 instances with the application code. By launching new instances in the backup Region, the application can be quickly restored in case of a disaster.

3. Create a read replica of the RDS instance in a backup Region: By creating a read replica of the RDS instance in a different Region, the database can be replicated and kept up to date with changes from the primary Region. In the event of a failure, the read replica can be promoted to a standalone instance to ensure continuity of the database.

Option A suggests deploying separate RDS instances in each Region, which can incur higher costs due to additional infrastructure and management complexity.

Option C suggests using Amazon API Gateway for public APIs and deploying EC2 instances with a read replica of the RDS instance. While API Gateway can provide additional functionality and scalability, it might introduce additional costs and complexity compared to the simpler Route 53 solution.

Option D suggests using Route 53 for failover, deploying new EC2 instances, and replicating daily RDS snapshots to a backup Region. Daily snapshots sit at the very limit of the 24-hour RPO, and any delay in snapshot creation or cross-Region copying risks exceeding it, whereas a cross-Region read replica keeps the backup database continuously up to date.

Therefore, the most cost-effective solution meeting the requirements is option B: Use Amazon Route 53 for Region failover, deploy new EC2 instances with the userdata script, and create a read replica of the RDS instance in a backup Region.
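For the failover step itself, promoting the cross-Region read replica can be scripted. The sketch below uses boto3 with hypothetical Region and instance identifiers and assumes the replica already exists:

```python
import boto3

# Hypothetical identifiers for illustration; the replica lives in the backup Region.
BACKUP_REGION = "us-west-2"
REPLICA_ID = "app-db-replica"

rds = boto3.client("rds", region_name=BACKUP_REGION)

# Promote the cross-Region read replica to a standalone, writable instance.
rds.promote_read_replica(DBInstanceIdentifier=REPLICA_ID)

# Wait until the promoted instance is available before pointing the application at it.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier=REPLICA_ID)
```

After promotion, Route 53 failover records (or a manual DNS change) direct API traffic to the EC2 instances launched in the backup Region.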

Question 1323

Exam Question

A company has three VPCs named Development, Testing, and Production in the us-east-1 Region. The three VPCs need to be connected to an on-premises data center and are designed to be separate to maintain security and prevent any resource sharing. A solutions architect needs to find a scalable and secure solution.

What should the solutions architect recommend?

A. Create an AWS Direct Connect connection and a VPN connection for each VPC to connect back to the data center.
B. Create VPC peers from all the VPCs to the Production VPC. Use an AWS Direct Connect connection from the Production VPC back to the data center.
C. Connect VPN connections from all the VPCs to a VPN in the Production VPC. Use a VPN connection from the Production VPC back to the data center.
D. Create a new VPC called Network. Within the Network VPC, create an AWS Transit Gateway with an AWS Direct Connect connection back to the data center. Attach all the other VPCs to the Network VPC.

Correct Answer

D. Create a new VPC called Network. Within the Network VPC, create an AWS Transit Gateway with an AWS Direct Connect connection back to the data center. Attach all the other VPCs to the Network VPC.

Explanation

To meet the requirements of connecting three separate VPCs (Development, Testing, and Production) in the us-east-1 Region to an on-premises data center in a scalable and secure manner while maintaining their separation, the solutions architect should recommend option D: Create a new VPC called Network. Within the Network VPC, create an AWS Transit Gateway with an AWS Direct Connect connection back to the data center. Attach all the other VPCs to the Network VPC.

Here’s how this solution meets the requirements:

1. Create a new VPC called Network: By creating a separate VPC called Network, the architect can establish a central hub for connecting the other VPCs and the on-premises data center.

2. Create an AWS Transit Gateway: The AWS Transit Gateway provides a scalable and resilient solution for connecting multiple VPCs and on-premises networks. It allows for secure and efficient communication between the separate VPCs while maintaining their isolation.

3. AWS Direct Connect connection: By connecting the AWS Transit Gateway in the Network VPC with an AWS Direct Connect connection, a dedicated and private connection is established between the data center and AWS, ensuring a secure and reliable network link.

4. Attach the other VPCs to the Network VPC: By attaching the Development, Testing, and Production VPCs to the Network VPC via the AWS Transit Gateway, secure and controlled communication can be established between the VPCs and the data center, while maintaining their individual security boundaries.

Option A suggests creating separate AWS Direct Connect and VPN connections for each VPC, which can lead to complexity, increased costs, and management overhead.

Option B suggests peering the other VPCs to the Production VPC and routing through its AWS Direct Connect connection. VPC peering is not transitive, so the Development and Testing VPCs could not reach the data center through the Production VPC, and peering every VPC to Production would also undermine the required separation.

Option C routes every VPC’s traffic through a VPN terminating in the Production VPC. This couples the environments together, undermining the required separation, and a chain of VPN connections neither scales well nor provides the bandwidth and reliability of a dedicated connection to the data center.

Therefore, the most scalable and secure solution meeting the requirements is option D: Create a new VPC called Network, create an AWS Transit Gateway with an AWS Direct Connect connection back to the data center, and attach all the other VPCs to the Network VPC.
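A rough sketch of the hub setup with boto3 is shown below. The VPC and subnet IDs are placeholders, and the Direct Connect gateway association back to the data center is only noted in a comment because it involves several additional steps:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the transit gateway that acts as the central hub in the Network VPC design.
# Disabling the default route table association/propagation keeps the VPCs isolated
# until explicit routes are added.
tgw = ec2.create_transit_gateway(
    Description="Hub for Development, Testing, and Production VPCs",
    Options={
        "DefaultRouteTableAssociation": "disable",
        "DefaultRouteTablePropagation": "disable",
    },
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Hypothetical VPC and subnet IDs for illustration only. In practice, wait until
# the transit gateway state is "available" before creating attachments.
vpc_attachments = {
    "vpc-0aaaa1111dev00000": ["subnet-0aaaa1111dev00000"],
    "vpc-0bbbb2222test0000": ["subnet-0bbbb2222test0000"],
    "vpc-0cccc3333prod0000": ["subnet-0cccc3333prod0000"],
}

for vpc_id, subnet_ids in vpc_attachments.items():
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=subnet_ids,
    )

# The AWS Direct Connect gateway and its association with the transit gateway are
# configured separately (directconnect APIs) and are omitted from this sketch.
```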

Question 1324

Exam Question

An application is running on an Amazon EC2 instance and must have millisecond latency when running the workload. The application makes many small reads and writes to the file system, but the file system itself is small.

Which Amazon Elastic Block Store (Amazon EBS) volume type should a solutions architect attach to their EC2 instance?

A. Cold HDD (sc1)
B. General Purpose SSD (gp2)
C. Provisioned IOPS SSD (io1)
D. Throughput Optimized HDD (st1)

Correct Answer

C. Provisioned IOPS SSD (io1)

Explanation

To achieve millisecond latency for an application running on an Amazon EC2 instance with many small reads and writes to the file system, the most suitable Amazon Elastic Block Store (Amazon EBS) volume type to attach to the instance would be option C: Provisioned IOPS SSD (io1).

Here’s why this choice is appropriate:

1. Provisioned IOPS SSD (io1): This EBS volume type is designed for applications that require consistent and high-performance storage with low latency. It allows you to specify the desired number of IOPS (Input/Output Operations Per Second) when provisioning the volume. This level of control ensures that the application can achieve the required millisecond latency for its workload.

Option A, Cold HDD (sc1), is the lowest-cost EBS volume type, optimized for infrequently accessed, throughput-oriented workloads. It has much higher latency than SSD-based volume types and cannot provide the millisecond latency required for this workload.

Option B, General Purpose SSD (gp2), is a balanced and cost-effective EBS volume type suitable for most workloads. While it provides good performance, it may not consistently deliver millisecond latency for the application’s specific requirements.

Option D, Throughput Optimized HDD (st1), is designed for workloads with large amounts of data throughput but with higher latency compared to SSD-based volume types. It may not meet the millisecond latency requirement for the workload that involves many small reads and writes.

Therefore, to ensure millisecond latency for the application’s workload, the most appropriate choice is option C: Provisioned IOPS SSD (io1).
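For reference, a Provisioned IOPS SSD volume can be created and attached as follows. The size, IOPS value, Availability Zone, and instance ID are illustrative assumptions (io1 allows up to 50 provisioned IOPS per GiB):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provisioned IOPS SSD volume sized for a small file system with many small I/Os.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    VolumeType="io1",
    Size=100,    # GiB
    Iops=5000,   # provisioned IOPS (within the 50 IOPS per GiB limit)
)

# Attach the volume to the instance once it becomes available.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
    Device="/dev/sdf",
)
```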

Question 1325

Exam Question

A company receives inconsistent service from its data center provider because the company is headquartered in an area affected by natural disasters. The company is not ready to fully migrate to the AWS Cloud, but it wants a failover environment on AWS in case the on-premises data center fails. The company runs web servers that connect to external vendors. The data available on AWS and on premises must be uniform.

Which solution should a solutions architect recommend that has the LEAST amount of downtime?

A. Configure an Amazon Route 53 failover record. Run application servers on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3.
B. Configure an Amazon Route 53 failover record. Execute an AWS CloudFormation template from a script to create Amazon EC2 instances behind an Application Load Balancer. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3.
C. Configure an Amazon Route 53 failover record. Set up an AWS Direct Connect connection between a VPC and the data center. Run application servers on Amazon EC2 in an Auto Scaling group. Run an AWS Lambda function to execute an AWS CloudFormation template to create an Application Load Balancer.
D. Configure an Amazon Route 53 failover record. Run an AWS Lambda function to execute an AWS CloudFormation template to launch two Amazon EC2 instances. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3. Set up an AWS Direct Connect connection between a VPC and the data center.

Correct Answer

A. Configure an Amazon Route 53 failover record. Run application servers on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3.

Explanation

To provide a failover environment with the least amount of downtime and ensure uniformity of data between the on-premises data center and AWS, the solutions architect should recommend option A: Configure an Amazon Route 53 failover record. Run application servers on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3.

Here’s why this option is the most suitable:

1. Configure an Amazon Route 53 failover record: Amazon Route 53 can be used to set up DNS failover, which allows traffic to be routed to the appropriate environment (on-premises or AWS) based on the health of the endpoints. In case of a failure in the on-premises data center, traffic can be automatically redirected to the AWS environment with minimal downtime.

2. Run application servers on Amazon EC2 instances behind an Application Load Balancer: By deploying application servers on EC2 instances within an Auto Scaling group behind an Application Load Balancer, the workload can be automatically scaled based on demand and ensure high availability. The load balancer helps distribute traffic evenly across the instances.

3. Set up AWS Storage Gateway with stored volumes: AWS Storage Gateway can be used to bridge the gap between on-premises data and Amazon S3. By using stored volumes, the company can back up its data to Amazon S3 in a consistent and uniform manner, ensuring data availability and synchronization between the on-premises environment and AWS.

Option B suggests executing an AWS CloudFormation template from a script to create EC2 instances, which may introduce additional complexity and manual steps compared to using an Auto Scaling group.

Option C suggests setting up an AWS Direct Connect connection between a VPC and the data center, which might not be necessary for a failover environment and could add cost and complexity.

Option D suggests using an AWS Lambda function to execute a CloudFormation template to launch EC2 instances, but it does not mention load balancing or Auto Scaling, which are crucial for high availability and scalability.

Therefore, the most appropriate solution with the least amount of downtime and data uniformity is option A: Configure an Amazon Route 53 failover record, run application servers on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group, and set up AWS Storage Gateway with stored volumes to back up data to Amazon S3.
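The failover routing piece can be expressed as a pair of Route 53 records: a PRIMARY record for the on-premises endpoint guarded by a health check and a SECONDARY record for the AWS environment. The boto3 sketch below uses hypothetical hosted zone, health check, hostname, and load balancer values; note that failover record pairs must share the same name and record type:

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical identifiers and endpoints, for illustration only.
HOSTED_ZONE_ID = "Z0000000000EXAMPLE"
PRIMARY_HEALTH_CHECK_ID = "11111111-2222-3333-4444-555555555555"
ON_PREM_HOSTNAME = "web.onprem.example.com"
ALB_DNS_NAME = "failover-alb-123456.us-east-1.elb.amazonaws.com"

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {   # Primary record points at the on-premises web servers.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "on-premises-primary",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ON_PREM_HOSTNAME}],
                    "HealthCheckId": PRIMARY_HEALTH_CHECK_ID,
                },
            },
            {   # Secondary record points at the AWS environment behind the ALB.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "aws-secondary",
                    "Failover": "SECONDARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ALB_DNS_NAME}],
                },
            },
        ]
    },
)
```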

Question 1326

Exam Question

A company provides an API to its users that automates inquiries for tax computations based on item prices. The company experiences a much larger number of inquiries during the holiday season only, which causes slower response times. A solutions architect needs to design a solution that is scalable and elastic.

What should the solutions architect do to accomplish this?

A. Provide an API hosted on an Amazon EC2 instance. The EC2 instance performs the required computations when the API request is made.
B. Design a REST API using Amazon API Gateway that accepts the item names. API Gateway passes item names to AWS Lambda for tax computations.
C. Create an Application Load Balancer that has two Amazon EC2 instances behind it. The EC2 instances will compute the tax on the received item names.
D. Design a REST API using Amazon API Gateway that connects with an API hosted on an Amazon EC2 instance. API Gateway accepts and passes the item names to the EC2 instance for tax computations.

Correct Answer

B. Design a REST API using Amazon API Gateway that accepts the item names. API Gateway passes item names to AWS Lambda for tax computations.

Explanation

To design a scalable and elastic solution for the company’s API that automates inquiries for tax computations based on item prices, the solutions architect should recommend option B: Design a REST API using Amazon API Gateway that accepts the item names. API Gateway passes item names to AWS Lambda for tax computations.

Here’s why this option is the most suitable:

1. Amazon API Gateway: API Gateway provides a fully managed service for building, deploying, and managing APIs at scale. It acts as a front-end for the API, allowing easy configuration and management of endpoints, authentication, rate limiting, and more.

2. AWS Lambda: Lambda is a serverless compute service that allows you to run code without provisioning or managing servers. By integrating API Gateway with Lambda, you can leverage its event-driven architecture and auto-scaling capabilities. In this case, Lambda functions can be used to perform the tax computations based on the received item names.

Benefits of this approach:

  • Scalability: AWS Lambda automatically scales based on the incoming requests. It can handle a large number of inquiries during the holiday season without the need to provision or manage servers, ensuring the responsiveness and performance of the API.
  • Elasticity: With Lambda, you pay only for the actual compute time consumed by the tax computation function. The resources scale up and down automatically based on demand, resulting in cost optimization.
  • Serverless architecture: The serverless nature of Lambda removes the need for infrastructure management, allowing the company to focus on developing and maintaining the business logic of the tax computation without worrying about underlying infrastructure.

Option A suggests hosting an API on an EC2 instance, which requires manual scaling and management of the instances, and may not provide the desired scalability and elasticity.

Option C suggests using an Application Load Balancer with EC2 instances for tax computation. While load balancing can help distribute the load, it requires manual provisioning and scaling of the EC2 instances, resulting in increased management overhead.

Option D suggests using API Gateway to connect with an API hosted on an EC2 instance. This approach lacks the serverless and auto-scaling capabilities of Lambda, and it involves managing and scaling EC2 instances manually.

Therefore, the most appropriate solution for a scalable and elastic API for tax computations is option B: Design a REST API using Amazon API Gateway that accepts the item names and passes them to AWS Lambda for tax computations.
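To make the Lambda side concrete, a handler for an API Gateway proxy integration might look like the following sketch. The tax rate, item catalog, and query parameter name are invented for illustration:

```python
import json

# Hypothetical flat tax rate and item catalog, used only to illustrate the
# API Gateway -> Lambda integration.
TAX_RATE = 0.08
ITEM_PRICES = {"notebook": 3.50, "backpack": 42.00}


def lambda_handler(event, context):
    """Handle an API Gateway proxy request such as GET /tax?item=notebook."""
    params = event.get("queryStringParameters") or {}
    item = params.get("item")

    price = ITEM_PRICES.get(item)
    if price is None:
        return {"statusCode": 404, "body": json.dumps({"error": "unknown item"})}

    tax = round(price * TAX_RATE, 2)
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"item": item, "price": price, "tax": tax}),
    }
```

Because API Gateway and Lambda scale per request, no capacity planning is needed for the holiday spike, and there is no idle cost outside it.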

Question 1327

Exam Question

A company stores user data in AWS. The data is used continuously with peak usage during business hours. Access patterns vary, with some data not being used for months at a time. A solutions architect must choose a cost-effective solution that maintains the highest level of durability while maintaining high availability.

Which storage solution meets these requirements?

A. Amazon S3 Standard
B. Amazon S3 Intelligent-Tiering
C. Amazon S3 Glacier Deep Archive
D. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

Correct Answer

B. Amazon S3 Intelligent-Tiering

Explanation

To meet the requirements of cost-effectiveness, high durability, and high availability for storing user data in AWS, the most suitable storage solution would be option B: Amazon S3 Intelligent-Tiering.

Here’s why this option is the best choice:

1. Amazon S3 Intelligent-Tiering: This storage class is designed to optimize costs while maintaining high durability and availability. It automatically moves objects between access tiers based on changing access patterns: frequently accessed data stays in a low-latency tier, while data that has not been accessed for a while is moved to lower-cost tiers, reducing storage costs without operational overhead.

2. Cost-effectiveness: With Intelligent-Tiering, the storage charge for each object adjusts automatically to how that object is actually accessed, and there are no retrieval fees; the only added cost is a small monthly monitoring and automation charge per object. This makes it well suited to data whose access patterns vary and that may go unused for months.

3. High durability: Amazon S3 is designed for 99.999999999 percent (11 nines) of data durability, protecting objects against hardware failures and errors. Amazon S3 Intelligent-Tiering inherits the same durability characteristics as Amazon S3 Standard, ensuring that your data is highly durable and protected.

4. High availability: Amazon S3 provides high availability by replicating data across multiple Availability Zones within a region. This ensures that your data remains accessible even in the event of hardware failures or other disruptions. Intelligent-Tiering leverages the same infrastructure and replication mechanisms as Amazon S3, ensuring high availability for your data.

Option A, Amazon S3 Standard, is a highly durable and available storage class but may not be the most cost-effective choice for data that is not frequently accessed.

Option C, Amazon S3 Glacier Deep Archive, is a low-cost storage class designed for long-term archiving with retrieval times in hours. While it provides high durability, the retrieval times may not be suitable for data that is accessed during business hours.

Option D, Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA), provides cost savings compared to Amazon S3 Standard, but it stores data in a single Availability Zone instead of multiple zones, which introduces a higher risk of data loss in case of an Availability Zone failure.

Therefore, considering the requirements of cost-effectiveness, high durability, and high availability, the most appropriate choice is option B: Amazon S3 Intelligent-Tiering.
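New objects can be written directly to Intelligent-Tiering, and existing objects can be transitioned with a lifecycle rule. The boto3 sketch below uses hypothetical bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Upload a new object directly into the Intelligent-Tiering storage class.
s3.put_object(
    Bucket="example-user-data",
    Key="users/12345/profile.json",
    Body=b'{"name": "example"}',
    StorageClass="INTELLIGENT_TIERING",
)

# Move existing objects with a lifecycle rule instead of re-uploading them.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-user-data",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "move-to-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)
```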

Question 1328

Exam Question

A company is building a media sharing application and decides to use Amazon S3 for storage. When a media file is uploaded, the company starts a multi-step process to create thumbnails, identify objects in the images, transcode videos into standard formats and resolutions, and extract and store the metadata to an Amazon DynamoDB table. The metadata is used for searching and navigation. The amount of traffic is variable. The solution must be able to scale to handle spikes in load without unnecessary expenses.

What should a solutions architect recommend to support this workload?

A. Build the processing into the website or mobile app used to upload the content to Amazon S3. Save the required data to the DynamoDB table when the objects are uploaded.
B. Trigger AWS Step Functions when an object is stored in the S3 bucket. Have the Step Functions perform the steps needed to process the object and then write the metadata to the DynamoDB table.
C. Trigger an AWS Lambda function when an object is stored in the S3 bucket. Have the Lambda function start AWS Batch to perform the steps to process the object. Place the object data in the DynamoDB table when complete.
D. Trigger an AWS Lambda function to store an initial entry in the DynamoDB table when an object is uploaded to Amazon S3. Use a program running on an Amazon EC2 instance in an Auto Scaling group to poll the index for unprocessed items, and use the program to perform the processing.

Correct Answer

B. Trigger AWS Step Functions when an object is stored in the S3 bucket. Have the Step Functions perform the steps needed to process the object and then write the metadata to the DynamoDB table.

Explanation

To support the media sharing application’s workload with multi-step processing, thumbnail creation, object identification, video transcoding, and metadata extraction, while ensuring scalability and cost-efficiency, the recommended approach is option B: Trigger AWS Step Functions when an object is stored in the S3 bucket and have the Step Functions perform the required processing steps and write the metadata to the DynamoDB table.

Here’s why option B is the best choice:

1. AWS Step Functions: Step Functions allow you to build serverless workflows to coordinate and execute the different processing steps in a reliable and scalable manner. You can define the sequence of steps, dependencies, error handling, and parallel processing using the Step Functions state machine.

2. Scalability: Step Functions can handle variable traffic and scale automatically to accommodate spikes in load. By defining the workflow steps and parallel processing, you can efficiently utilize the available resources and scale when needed, without incurring unnecessary expenses.

3. Cost-efficiency: Step Functions are priced based on the number of state transitions, which means you only pay for the actual execution of the workflow. This provides cost optimization as you are not charged for idle resources or unnecessary processing time.

4. Integration with Amazon S3 and DynamoDB: Step Functions can be triggered by an event when an object is stored in the S3 bucket. The Step Functions workflow can perform the necessary processing steps, such as thumbnail creation, object identification, video transcoding, and metadata extraction. The extracted metadata can then be written to the DynamoDB table for searching and navigation.

Option A suggests building the processing into the website or mobile app used to upload content to Amazon S3. This approach may introduce complexities and tightly couple the processing logic with the user-facing application, making it harder to manage and scale.

Option C suggests using AWS Batch triggered by an AWS Lambda function. While this can be a valid solution for batch processing, it may introduce additional complexity and overhead, especially if the processing steps need to be coordinated and managed.

Option D suggests using an initial entry in the DynamoDB table and polling with an EC2 instance in an Auto Scaling group. This approach requires manual management of the polling program and the EC2 instances, which can be less scalable and may not be as cost-efficient as a fully serverless approach.

Therefore, the most suitable recommendation for supporting the media sharing application’s workload with scalability and cost-efficiency is option B: Trigger AWS Step Functions when an object is stored in the S3 bucket and have the Step Functions handle the multi-step processing and writing of metadata to the DynamoDB table.
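One common way to wire this up is an S3 event notification that invokes a small Lambda function, which in turn starts a Step Functions execution for each uploaded object. The sketch below assumes the state machine ARN is supplied through an environment variable; the variable name is an assumption for illustration:

```python
import json
import os
import urllib.parse

import boto3

sfn = boto3.client("stepfunctions")

# Hypothetical environment variable holding the state machine ARN.
STATE_MACHINE_ARN = os.environ["MEDIA_PIPELINE_STATE_MACHINE_ARN"]


def lambda_handler(event, context):
    """Start one Step Functions execution per uploaded S3 object."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({"bucket": bucket, "key": key}),
        )
```

The state machine itself would then orchestrate the thumbnail, object-detection, transcoding, and DynamoDB steps, scaling with the number of uploads.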

Question 1329

Exam Question

A company wants to migrate its MySQL database from on premises to AWS. The company recently experienced a database outage that significantly impacted the business. To ensure this does not happen again, the company wants a reliable database solution on AWS that minimizes data loss and stores every transaction on at least two nodes.

Which solution meets these requirements?

A. Create an Amazon RDS DB instance with synchronous replication to three nodes in three Availability Zones.
B. Create an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data.
C. Create an Amazon RDS MySQL DB instance and then create a read replica in a separate AWS Region that synchronously replicates the data.
D. Create an Amazon EC2 instance with a MySQL engine installed that triggers an AWS Lambda function to synchronously replicate the data to an Amazon RDS MySQL DB instance.

Correct Answer

B. Create an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data.

Explanation

To meet the requirements of a reliable database solution on AWS that minimizes data loss and stores every transaction on at least two nodes, the most suitable solution is option B: Create an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data.

Here’s why option B is the best choice:

1. Multi-AZ functionality: Amazon RDS provides the Multi-AZ feature, which allows for synchronous replication of the primary database instance to a standby instance in a different Availability Zone (AZ). This ensures that every transaction is synchronously replicated to the standby instance, minimizing data loss in the event of a failure.

2. High availability: With Multi-AZ enabled, Amazon RDS automatically handles the replication and failover process. If the primary database instance becomes unavailable, Amazon RDS automatically promotes the standby instance as the new primary, reducing downtime and providing high availability for the database.

3. Data durability: The synchronous replication provided by Multi-AZ ensures that every transaction is written to at least two nodes—the primary instance and the standby instance in a different AZ. This redundancy enhances data durability and protects against the loss of a single node or AZ.

4. Managed service: Amazon RDS is a fully managed database service that handles tasks such as backups, software patching, automatic failover, and replication. This simplifies database management and reduces operational overhead for the company.

Option A suggests using Amazon RDS DB instance with synchronous replication to three nodes in three Availability Zones. While this approach provides redundancy, option B with Multi-AZ functionality is sufficient to meet the requirements while being more cost-effective.

Option C suggests creating a read replica in a separate AWS Region. While this can provide additional redundancy and data durability, it does not ensure synchronous replication or meet the requirement of storing every transaction on at least two nodes.

Option D suggests using an EC2 instance triggering an AWS Lambda function for synchronous replication to an Amazon RDS MySQL DB instance. This approach adds complexity and manual management compared to the built-in Multi-AZ functionality provided by Amazon RDS.

Therefore, option B is the most appropriate solution for ensuring a reliable database solution on AWS with minimal data loss and synchronous replication across at least two nodes.
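Enabling Multi-AZ is a single parameter when the DB instance is created (or later modified). The boto3 sketch below uses illustrative identifiers, sizing, and credentials:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Identifiers, instance class, storage, and credentials are illustrative assumptions.
rds.create_db_instance(
    DBInstanceIdentifier="company-mysql-db",
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-example",  # store real credentials in Secrets Manager
    MultiAZ=True,             # synchronous standby replica in another Availability Zone
    BackupRetentionPeriod=7,  # enable automated backups as well
)
```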

Question 1330

Exam Question

A company has an automobile sales website that stores its listings in a database on Amazon RDS. When an automobile is sold, the listing needs to be removed from the website and the data must be sent to multiple target systems.

Which design should a solutions architect recommend?

A. Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple Queue Service (Amazon SQS) queue for the targets to consume.
B. Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple Queue Service (Amazon SQS) FIFO queue for the targets to consume.
C. Subscribe to an RDS event notification and send an Amazon Simple Queue Service (Amazon SQS) queue fanned out to multiple Amazon Simple Notification Service (Amazon SNS) topics. Use AWS Lambda functions to update the targets.
D. Subscribe to an RDS event notification and send an Amazon Simple Notification Service (Amazon SNS) topic fanned out to multiple Amazon Simple Queue Service (Amazon SQS) queues. Use AWS Lambda functions to update the targets.

Correct Answer

D. Subscribe to an RDS event notification and send an Amazon Simple Notification Service (Amazon SNS) topic fanned out to multiple Amazon Simple Queue Service (Amazon SQS) queues. Use AWS Lambda functions to update the targets.

Explanation

For the given scenario of removing automobile listings from the website and sending the data to multiple target systems when an automobile is sold, a solutions architect should recommend the following design:

Option D: Subscribe to an RDS event notification and send an Amazon Simple Notification Service (Amazon SNS) topic fanned out to multiple Amazon Simple Queue Service (Amazon SQS) queues. Use AWS Lambda functions to update the targets.

Here’s why option D is the best choice:

1. RDS event notification: By subscribing to RDS event notifications, you can receive notifications whenever a database event occurs, such as an update or deletion of a listing. This allows you to react to changes in real-time.

2. Amazon SNS topic: Upon receiving an RDS event notification, the relevant information is published to an SNS topic. SNS is a scalable publish/subscribe service that can deliver a single message to many subscribers at once.

3. Amazon SQS queues: By fanning the SNS topic out to multiple SQS queues, one per target system, each target receives its own durable copy of the message and can process it at its own pace without losing data.

4. AWS Lambda functions: Use Lambda functions to process the messages from the SQS queues and update the target systems accordingly. Lambda provides serverless compute capabilities, allowing you to execute code in response to events, such as receiving messages from the queues.

Option A suggests sending the information to a single SQS queue for the targets to consume. Because each SQS message is processed by only one consumer, a single queue cannot reliably deliver the same event to multiple independent target systems.

Option B has the same limitation and adds an SQS FIFO queue. FIFO ordering and deduplication are not required in this scenario, and a FIFO queue still cannot fan one message out to multiple targets.

Option C reverses the fan-out pattern: an SQS queue cannot be fanned out to multiple SNS topics, so this design is not workable.

Therefore, option D is the most suitable design for removing automobile listings from the website and sending the data to multiple target systems when an automobile is sold. It leverages RDS event notifications, an SNS topic fanned out to SQS queues, and Lambda functions to achieve the desired functionality efficiently and effectively.
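A minimal sketch of the SNS-to-SQS fan-out with boto3 follows; the topic name, queue names, and message contents are hypothetical:

```python
import json

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# Hypothetical topic and target queues used for illustration.
topic_arn = sns.create_topic(Name="listing-sold")["TopicArn"]

for target in ["crm-sync", "analytics", "inventory"]:
    queue_url = sqs.create_queue(QueueName=f"listing-sold-{target}")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Allow the topic to deliver messages to this queue.
    sqs.set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={
            "Policy": json.dumps({
                "Version": "2012-10-17",
                "Statement": [{
                    "Effect": "Allow",
                    "Principal": {"Service": "sns.amazonaws.com"},
                    "Action": "sqs:SendMessage",
                    "Resource": queue_arn,
                    "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
                }],
            })
        },
    )

    # Subscribe the queue so every published message is fanned out to it.
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Publishing a single "listing sold" event now reaches every target queue,
# where Lambda functions (or other consumers) can update each target system.
sns.publish(TopicArn=topic_arn, Message=json.dumps({"listing_id": "12345"}))
```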