
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 25

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free to help you pass the SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.

Question 961

Exam Question

A company hosts an application on AWS Lambda functions that are invoked by an Amazon API Gateway API. The Lambda functions save customer data to an Amazon Aurora MySQL database. Whenever the company upgrades the database, the Lambda functions fail to establish database connections until the upgrade is complete. The result is that customer data is not recorded for some of the events. A solutions architect needs to design a solution that stores customer data that is created during database upgrades.

Which solution will meet these requirements?

A. Provision an Amazon RDS proxy to sit between the Lambda functions and the database. Configure the Lambda functions to connect to the RDS proxy.
B. Increase the run time of the Lambda functions to the maximum. Create a retry mechanism in the code that stores the customer data in the database.
C. Persist the customer data to Lambda local storage. Configure new Lambda functions to scan the local storage to save the customer data to the database.
D. Store the customer data in an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Create a new Lambda function that polls the queue and stores the customer data in the database.

Correct Answer

D. Store the customer data in an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Create a new Lambda function that polls the queue and stores the customer data in the database.

Explanation

To ensure that customer data is stored even during database upgrades in an AWS Lambda and Amazon Aurora MySQL setup, you should consider the following solution:

D. Store the customer data in an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Create a new Lambda function that polls the queue and stores the customer data in the database.

By implementing option D, you can decouple the customer data storage process from the direct interaction between the Lambda functions and the Amazon Aurora MySQL database. Here’s how the solution works:

  • Store the customer data in an Amazon SQS FIFO queue: Modify your Lambda functions to send the customer data to an SQS FIFO queue instead of directly writing it to the database. SQS FIFO queues guarantee that messages within the same message group are processed in the order they are received, which is crucial to maintaining the correct sequence of customer data.
  • Create a new Lambda function to process the queue: Develop a separate Lambda function that polls the SQS FIFO queue and retrieves messages in the correct order. This function can then establish a database connection and store the customer data in the Amazon Aurora MySQL database.

By implementing this solution, even during database upgrades that temporarily disrupt the Lambda functions’ ability to establish database connections, the customer data will still be stored in the SQS FIFO queue. Once the database upgrade is complete, the new Lambda function responsible for processing the queue will resume storing the customer data in the Amazon Aurora MySQL database.
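As a rough illustration, the following Python sketch shows the two halves of this pattern with boto3. The queue URL, the payload fields (customer_id, event_id), and the save_to_aurora helper are hypothetical placeholders, not details taken from the question.

```python
import json
import os

import boto3

sqs = boto3.client("sqs")

# Hypothetical FIFO queue URL, supplied through the function's environment.
QUEUE_URL = os.environ["CUSTOMER_DATA_QUEUE_URL"]


def api_handler(event, context):
    """API-facing Lambda: enqueue customer data instead of writing to Aurora."""
    payload = json.loads(event["body"])
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(payload),
        # FIFO queues require a message group ID; grouping by customer
        # preserves per-customer ordering.
        MessageGroupId=str(payload["customer_id"]),
        # The deduplication ID guards against duplicate submissions
        # (content-based deduplication on the queue is an alternative).
        MessageDeduplicationId=str(payload["event_id"]),
    )
    return {"statusCode": 202, "body": json.dumps({"status": "queued"})}


def writer_handler(event, context):
    """Second Lambda, invoked by the SQS event source mapping: writes to the DB."""
    for record in event["Records"]:
        save_to_aurora(json.loads(record["body"]))


def save_to_aurora(data):
    """Hypothetical database write; raising on failure makes SQS retry the message."""
    ...
```

If the writer raises while the database is mid-upgrade, SQS simply redelivers the message after the visibility timeout, which is what makes the buffering work.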

Options A, B, and C do not address the specific requirement of storing customer data during database upgrades:

Option A suggests provisioning an Amazon RDS proxy between the Lambda functions and the database. While RDS Proxy pools and manages database connections and can smooth over brief failovers, it cannot accept writes while the database is unavailable, so customer data created during an upgrade would still be lost.

Option B suggests increasing the run time of the Lambda functions to the maximum (15 minutes) and implementing a retry mechanism in code. Retries can mask brief interruptions, but a database upgrade can outlast the function timeout, so this approach does not guarantee data storage during upgrades.

Option C suggests persisting the customer data to Lambda local storage and scanning it later to save it to the database. However, Lambda's local /tmp storage is ephemeral and scoped to a single execution environment; another function cannot reliably scan it, so it does not provide durable storage for customer data.

Therefore, option D, storing customer data in an Amazon SQS FIFO queue and creating a new Lambda function to process the queue and store the data in the database, is the recommended solution to meet the requirement of storing customer data during database upgrades.

Question 962

Exam Question

A gaming company has multiple Amazon EC2 instances in a single Availability Zone for its multiplayer game that communicates with users on Layer 4. The chief technology officer (CTO) wants to make the architecture highly available and cost-effective.

What should a solutions architect do to meet these requirements? (Choose two.)

A. Increase the number of EC2 instances.
B. Decrease the number of EC2 instances.
C. Configure a Network Load Balancer in front of the EC2 instances.
D. Configure an Application Load Balancer in front of the EC2 instances.
E. Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically.

Correct Answer

C. Configure a Network Load Balancer in front of the EC2 instances.
E. Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically.

Explanation

To make the architecture highly available and cost-effective for the multiplayer game running on Amazon EC2 instances in a single Availability Zone, you should consider the following solutions:

C. Configure a Network Load Balancer in front of the EC2 instances.
E. Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically.

  • Configure a Network Load Balancer (NLB) in front of the EC2 instances: By setting up an NLB, you can distribute the incoming traffic among the EC2 instances. The NLB operates at Layer 4 of the OSI model, allowing it to efficiently handle the game’s communication requirements. The NLB ensures high availability by automatically routing traffic to healthy instances and providing fault tolerance. It also helps optimize costs by efficiently distributing the load across instances, making the most out of available resources.
  • Configure an Auto Scaling group (ASG) to add or remove instances in multiple Availability Zones: By using an ASG, you can automatically scale the number of EC2 instances based on demand. Configuring the ASG across multiple Availability Zones ensures high availability and fault tolerance. The ASG can dynamically adjust the number of instances in response to changes in traffic, automatically adding instances during peak usage and removing them during low demand. This approach helps optimize costs by only provisioning the necessary instances to handle the current load, preventing overprovisioning.
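As a rough sketch of how these two pieces fit together, the boto3 calls below create a TCP target group and Network Load Balancer and attach them to an Auto Scaling group that spans two Availability Zones. All names, IDs, and the game port are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

SUBNETS = ["subnet-aaaa1111", "subnet-bbbb2222"]  # two Availability Zones

nlb = elbv2.create_load_balancer(
    Name="game-nlb",
    Type="network",          # Layer 4 load balancer
    Scheme="internet-facing",
    Subnets=SUBNETS,
)

tg = elbv2.create_target_group(
    Name="game-tg",
    Protocol="TCP",          # matches the game's Layer 4 traffic
    Port=7777,               # hypothetical game port
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="TCP",
    Port=7777,
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="game-asg",
    LaunchTemplate={"LaunchTemplateName": "game-server", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier=",".join(SUBNETS),   # spreads instances across AZs
    # Instances the group launches register with the NLB's target group.
    TargetGroupARNs=[tg["TargetGroups"][0]["TargetGroupArn"]],
    HealthCheckType="ELB",
)
```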

Option A, increasing the number of EC2 instances, is not a recommended solution because manually increasing the number of instances does not provide automatic scalability or high availability.

Option B, decreasing the number of EC2 instances, is not a recommended solution because reducing the number of instances can lead to insufficient capacity to handle user traffic, which may result in degraded performance or service disruptions.

Option D, configuring an Application Load Balancer (ALB) in front of the EC2 instances, is not the best choice for Layer 4 communication requirements. ALBs are designed for Layer 7 application-level traffic and offer advanced features like content-based routing, SSL termination, and request/response modification. For a Layer 4 communication requirement, an NLB is a more suitable choice.

Therefore, the recommended solutions to make the architecture highly available and cost-effective for the multiplayer game are to configure a Network Load Balancer (NLB) in front of the EC2 instances and configure an Auto Scaling group (ASG) to add or remove instances in multiple Availability Zones automatically.

Question 963

Exam Question

A company needs to ingest and handle large amounts of streaming data that its application generates. The application runs on Amazon EC2 instances and sends data to Amazon Kinesis Data Streams, which is configured with default settings. Every other day, the application consumes the data and writes the data to an Amazon S3 bucket for business intelligence (BI) processing. The company observes that Amazon S3 is not receiving all the data that the application sends to Kinesis Data Streams.

What should a solutions architect do to resolve this issue?

A. Update the Kinesis Data Streams default settings by modifying the data retention period.
B. Update the application to use the Kinesis Producer Library (KPL) to send the data to Kinesis Data Streams.
C. Update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams.
D. Turn on S3 Versioning within the S3 bucket to preserve every version of every object that is ingested in the S3 bucket.

Correct Answer

C. Update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams.

Explanation

To resolve the issue of Amazon S3 not receiving all the data sent to Amazon Kinesis Data Streams, a solutions architect should:

C. Update the number of Kinesis shards to handle the throughput of the data sent to Kinesis Data Streams.

Kinesis Data Streams uses shards to partition the data and allow for parallel processing. Each shard provides a fixed capacity for ingestion (1 MB per second or 1,000 records per second) and retrieval. A stream created with the default settings starts with a minimal number of shards, which may not be sufficient to handle the incoming data from the application.

To ensure that all the data sent to Kinesis Data Streams is successfully processed and delivered to the downstream consumers, you need to scale up the number of shards based on the throughput requirements. Increasing the number of shards allows for higher data ingestion rates and improves the overall capacity of the stream.
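For a provisioned-mode stream, scaling is a single API call. A minimal sketch, assuming a hypothetical stream name and target count:

```python
import boto3

kinesis = boto3.client("kinesis")

# Each shard ingests up to 1 MB/s or 1,000 records/s, so the target count
# should be sized from the application's measured throughput.
kinesis.update_shard_count(
    StreamName="app-data-stream",     # hypothetical stream name
    TargetShardCount=4,               # hypothetical target
    ScalingType="UNIFORM_SCALING",
)
```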

Option A, updating the Kinesis Data Streams default settings by modifying the data retention period, is not relevant to the issue of data not being received by Amazon S3.

Option B, updating the application to use the Kinesis Producer Library (KPL), does not directly address the issue of missing data in S3. The KPL can help improve the efficiency and reliability of data ingestion into Kinesis Data Streams but does not resolve the data loss issue.

Option D, turning on S3 Versioning within the S3 bucket, is unrelated to the issue of data not being received by S3. S3 Versioning is used to preserve multiple versions of an object stored in the bucket and does not address the root cause of missing data.

Therefore, the recommended action is to update the number of Kinesis shards to handle the throughput of the data sent to Kinesis Data Streams.

Question 964

Exam Question

A company must generate sales reports at the beginning of every month. The reporting process launches 20 Amazon EC2 instances on the first of the month. The process runs for 7 days and cannot be interrupted. The company wants to minimize costs.

Which pricing model should the company choose?

A. Reserved Instances
B. Spot Block Instances
C. On-Demand Instances
D. Scheduled Reserved Instances

Correct Answer

D. Scheduled Reserved Instances

Explanation

To minimize costs for the sales reporting process that runs for a specific duration every month, the company should choose:

D. Scheduled Reserved Instances

Scheduled Reserved Instances are a pricing model in AWS that allows you to reserve instances for specific time periods. With Scheduled Reserved Instances, you can specify the recurring schedule (e.g., the first seven days of each month) during which the instances will be launched. This model is well-suited for workloads that have predictable and recurring patterns, such as the monthly sales reporting process described in the scenario.

By using Scheduled Reserved Instances, the company can reserve the required number of instances specifically for the reporting period and benefit from lower pricing compared to On-Demand instances. This ensures that the instances are available and launched automatically at the start of each month, eliminating the need for manual intervention and ensuring continuous operation for the entire duration of the reporting process.

Option A, Reserved Instances, provides discounted pricing in exchange for a one- or three-year commitment, but the reservation is billed for the full term whether or not the instances run. Paying for entire months to cover seven days of usage each month is not cost-effective.

Option B, Spot Block Instances, allows you to request Spot capacity for a fixed duration at a discount, without interruption during that window. However, Spot Blocks run for a maximum of six hours, far short of the 7-day, uninterruptible reporting process.

Option C, On-Demand Instances, offer flexibility and no long-term commitments but can be more expensive compared to Reserved Instances or Scheduled Reserved Instances for a workload that runs continuously for a known period every month.

Therefore, to minimize costs and ensure availability for the monthly sales reporting process, the company should choose Scheduled Reserved Instances.

Question 965

Exam Question

A company wants to migrate its accounting system from an on-premises data center to the AWS Cloud in a single AWS Region. Data security and an immutable audit log are the top priorities. The company must monitor all AWS activities for compliance auditing. The company has enabled AWS CloudTrail but wants to make sure it meets these requirements.

Which actions should a solutions architect take to protect and secure CloudTrail? (Choose two.)

A. Enable CloudTrail log file validation.
B. Install the CloudTrail Processing Library.
C. Enable logging of Insights events in CloudTrail.
D. Enable custom logging from the on-premises resources.
E. Create an AWS Config rule to monitor whether CloudTrail is configured to use server-side encryption with AWS KMS managed encryption keys (SSE-KMS).

Correct Answer

A. Enable CloudTrail log file validation.
E. Create an AWS Config rule to monitor whether CloudTrail is configured to use server-side encryption with AWS KMS managed encryption keys (SSE-KMS).

Explanation

To protect and secure CloudTrail for data security and an immutable audit log, the following actions should be taken:

A. Enable CloudTrail log file validation.
E. Create an AWS Config rule to monitor whether CloudTrail is configured to use server-side encryption with AWS KMS managed encryption keys (SSE-KMS).

A. Enable CloudTrail log file validation: Enabling log file validation for CloudTrail helps ensure the integrity of log files by confirming that they haven’t been tampered with. It provides an additional layer of security by allowing detection of any unauthorized modifications or deletions of log files.

E. Create an AWS Config rule to monitor whether CloudTrail is configured to use server-side encryption with AWS KMS managed encryption keys (SSE-KMS): This action ensures that CloudTrail is configured to use server-side encryption with AWS Key Management Service (KMS) managed encryption keys. SSE-KMS provides strong encryption and control over the encryption keys, enhancing the security of the CloudTrail logs.
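A minimal boto3 sketch of both actions follows; the trail name and rule name are placeholders. CLOUD_TRAIL_ENCRYPTION_ENABLED is the AWS managed Config rule that checks trails for SSE-KMS encryption.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")
config = boto3.client("config")

# Enable log file validation: CloudTrail then delivers signed digest files
# that can later be verified with `aws cloudtrail validate-logs`.
cloudtrail.update_trail(
    Name="management-trail",          # hypothetical trail name
    EnableLogFileValidation=True,
)

# Managed Config rule that flags trails not encrypted with SSE-KMS.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "cloudtrail-encryption-enabled",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "CLOUD_TRAIL_ENCRYPTION_ENABLED",
        },
    }
)
```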

B. Installing the CloudTrail Processing Library is not a necessary action for protecting and securing CloudTrail. The CloudTrail Processing Library is used for processing and analyzing CloudTrail logs, but it does not directly contribute to the security or protection of CloudTrail itself.

C. Enabling logging of Insights events in CloudTrail is not directly related to securing CloudTrail or ensuring data security and an immutable audit log. CloudTrail Insights provides intelligent event analysis and anomaly detection capabilities but does not directly impact the security and protection of CloudTrail.

D. Enabling custom logging from on-premises resources is not relevant to securing CloudTrail in the AWS Cloud. CloudTrail primarily captures and logs AWS API activities, so custom logging from on-premises resources would not be directly applicable or necessary for securing CloudTrail.

Therefore, to protect and secure CloudTrail for data security and an immutable audit log, enabling CloudTrail log file validation and creating an AWS Config rule for SSE-KMS encryption are the recommended actions.

Question 966

Exam Question

A company has been storing analytics data in an Amazon RDS instance for the past few years. The company asked a solutions architect to find a solution that allows users to access this data using an API. The expectation is that the application will experience periods of inactivity but could receive bursts of traffic within seconds.

Which solution should the solutions architect suggest?

A. Set up an Amazon API Gateway and use Amazon ECS.
B. Set up an Amazon API Gateway and use AWS Elastic Beanstalk.
C. Set up an Amazon API Gateway and use AWS Lambda functions.
D. Set up an Amazon API Gateway and use Amazon EC2 with Auto Scaling.

Correct Answer

C. Set up an Amazon API Gateway and use AWS Lambda functions.

Explanation

To provide access to analytics data using an API with the ability to handle periods of inactivity and bursts of traffic, the following solution should be suggested:

C. Set up an Amazon API Gateway and use AWS Lambda functions.

Using an Amazon API Gateway in combination with AWS Lambda functions provides a scalable and cost-effective solution for accessing the analytics data through an API. AWS Lambda functions are serverless and automatically scale to handle incoming requests, making them suitable for handling bursts of traffic within seconds. When there is no traffic, Lambda functions are not consuming any resources, resulting in cost savings during periods of inactivity.

The Amazon API Gateway acts as a front-end for the API, providing authentication, request routing, and other features. It integrates seamlessly with Lambda functions, allowing them to be invoked in response to API requests. This combination allows for a flexible and scalable architecture that can handle variable traffic patterns while keeping costs optimized.
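As an illustrative sketch, a handler for an API Gateway Lambda proxy integration might look like the following. The environment variables, the analytics table name, and the bundled pymysql dependency are assumptions for the example, not details from the question.

```python
import json
import os

import pymysql  # assumes the library is bundled with the function or in a layer

# Connection details come from environment variables set on the function.
CONN_PARAMS = dict(
    host=os.environ["DB_HOST"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database=os.environ["DB_NAME"],
)


def handler(event, context):
    """Lambda proxy integration handler for a hypothetical GET /analytics route."""
    conn = pymysql.connect(**CONN_PARAMS, connect_timeout=5)
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT metric, value FROM analytics LIMIT 100")
            rows = cur.fetchall()
    finally:
        conn.close()
    # Proxy integrations expect a statusCode/body shape in the response.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps([{"metric": m, "value": v} for m, v in rows]),
    }
```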

A. Setting up an Amazon API Gateway and using Amazon ECS (Elastic Container Service) would involve managing and scaling containers, which adds complexity and operational overhead. It may not be as well-suited for handling bursts of traffic within seconds.

B. Setting up an Amazon API Gateway and using AWS Elastic Beanstalk is an option, but Elastic Beanstalk primarily focuses on deploying and managing web applications. It may not provide the same level of flexibility and scalability as using AWS Lambda functions.

D. Setting up an Amazon API Gateway and using Amazon EC2 with Auto Scaling can provide scalability but requires manual management of EC2 instances. It may not be as cost-effective and efficient as using Lambda functions, which automatically scale based on the incoming request volume.

Therefore, the recommended solution is to set up an Amazon API Gateway and use AWS Lambda functions to provide access to the analytics data through an API, ensuring scalability, cost-effectiveness, and the ability to handle bursts of traffic within seconds.

Question 967

Exam Question

An online photo-sharing company stores its photos in an Amazon S3 bucket that exists in the us-west-1 Region. The company needs to store a copy of all existing and new photos in another geographical location.

Which solution will meet this requirement with the LEAST operational effort?

A. Create a second S3 bucket in us-east-1. Enable S3 Cross-Region Replication from the existing S3 bucket to the second S3 bucket.
B. Create a cross-origin resource sharing (CORS) configuration of the existing S3 bucket. Specify us-east-1 in the CORS rule’s AllowedOrigin element.
C. Create a second S3 bucket in us-east-1 across multiple Availability Zones. Create an S3 Lifecycle management rule to save photos into the second S3 bucket.
D. Create a second S3 bucket in us-east-1 to store the replicated photos. Configure S3 event notifications on object creation and update events that invoke an AWS Lambda function to copy photos from the existing S3 bucket to the second S3 bucket.

Correct Answer

A. Create a second S3 bucket in us-east-1. Enable S3 Cross-Region Replication from the existing S3 bucket to the second S3 bucket.

Explanation

The solution that will meet the requirement with the least operational effort is:

A. Create a second S3 bucket in us-east-1. Enable S3 Cross-Region Replication from the existing S3 bucket to the second S3 bucket.

Enabling S3 Cross-Region Replication allows for automatic and asynchronous replication of objects from one S3 bucket to another in a different region. This eliminates the need for manual intervention or additional configurations to copy the photos to another geographical location.

By creating a second S3 bucket in the us-east-1 Region and enabling Cross-Region Replication from the existing S3 bucket in the us-west-1 Region to the second bucket, all existing and new photos will be automatically replicated to the second bucket. S3 Cross-Region Replication handles the replication process, ensuring that any changes or additions to the photos in the source bucket are mirrored in the destination bucket in us-east-1 without any further operational effort.
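As a sketch of the setup, assuming hypothetical bucket names and a pre-created replication IAM role: versioning must be enabled on both buckets first, and replication applies to newly created objects by default (existing objects can be copied once with S3 Batch Replication).

```python
import boto3

s3 = boto3.client("s3")

# Both buckets must have versioning enabled before replication can be configured.
for bucket in ("photos-us-west-1", "photos-us-east-1"):   # hypothetical names
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

s3.put_bucket_replication(
    Bucket="photos-us-west-1",
    ReplicationConfiguration={
        # IAM role that S3 assumes to replicate objects (placeholder ARN).
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-all-photos",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},                 # empty filter = all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::photos-us-east-1"},
            }
        ],
    },
)
```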

B. Creating a CORS configuration and specifying us-east-1 as the AllowedOrigin element in the existing S3 bucket does not provide a mechanism for automatically replicating the photos to another geographical location. CORS allows for cross-origin access to resources but does not handle the replication of data.

C. Creating a second S3 bucket in us-east-1 and using S3 Lifecycle management rules to save photos into the second bucket does not provide automatic replication of the photos. S3 Lifecycle management rules can help manage the lifecycle of objects within a bucket but do not facilitate replication across regions.

D. Creating a second S3 bucket in us-east-1 and configuring S3 event notifications with an AWS Lambda function to copy photos from the existing bucket to the second bucket can achieve the replication but requires manual configuration and ongoing management of the Lambda function. It involves more operational effort compared to enabling Cross-Region Replication.

Therefore, the most efficient and least operationally demanding solution is to create a second S3 bucket in us-east-1 and enable S3 Cross-Region Replication from the existing S3 bucket in us-west-1 to the second bucket.

Question 968

Exam Question

A company’s website runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The website has a mix of dynamic and static content. Users around the globe are reporting that the website is slow.

Which set of actions will improve website performance for users worldwide?

A. Create an Amazon CloudFront distribution and configure the ALB as an origin. Then update the Amazon Route 53 record to point to the CloudFront distribution.
B. Create a latency-based Amazon Route 53 record for the ALB. Then launch new EC2 instances with larger instance sizes and register the instances with the ALB.
C. Launch new EC2 instances hosting the same web application in different Regions closer to the users. Then register instances with the same ALB using cross-Region VPC peering.
D. Host the website in an Amazon S3 bucket in the Regions closest to the users and delete the ALB and EC2 instances. Then update an Amazon Route 53 record to point to the S3 buckets.

Correct Answer

A. Create an Amazon CloudFront distribution and configure the ALB as an origin. Then update the Amazon Route 53 record to point to the CloudFront distribution.

Explanation

The set of actions that will improve website performance for users worldwide is:

A. Create an Amazon CloudFront distribution and configure the ALB as an origin. Then update the Amazon Route 53 record to point to the CloudFront distribution.

Amazon CloudFront is a content delivery network (CDN) that improves website performance by caching content at edge locations around the globe, bringing the content closer to users and reducing latency. By creating a CloudFront distribution and configuring the ALB as an origin, the static and dynamic content from the website can be cached and delivered to users from the nearest CloudFront edge location.

Updating the Amazon Route 53 record to point to the CloudFront distribution ensures that user requests are directed to the nearest CloudFront edge location, further reducing latency and improving website performance.
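A trimmed boto3 sketch of the distribution setup follows. The ALB domain name is a placeholder, and the cache policy shown is the AWS managed CachingOptimized policy; dynamic paths would typically get a separate cache behavior with caching disabled.

```python
import time

import boto3

cloudfront = boto3.client("cloudfront")

# DNS name of the existing ALB (placeholder).
ALB_DNS = "my-alb-1234567890.us-east-1.elb.amazonaws.com"

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),   # must be unique per request
        "Comment": "Website distribution with ALB origin",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "alb-origin",
                    "DomainName": ALB_DNS,
                    # An ALB is a custom origin, not an S3 origin.
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "https-only",
                    },
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "alb-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # ID of the AWS managed "CachingOptimized" cache policy.
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }
)
```

Once the distribution deploys, the Route 53 record is switched to an alias pointing at the distribution's domain name.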

B. Creating a latency-based Amazon Route 53 record for the ALB allows Route 53 to direct user traffic to the endpoint with the lowest latency, but latency-based routing provides no benefit when there is only one ALB in a single Region. Additionally, larger EC2 instance sizes may improve server-side processing time but do nothing to reduce network latency for distant users.

C. Launching new EC2 instances in different Regions closer to users and using cross-Region VPC peering to register them with the same ALB can improve performance for users in those specific regions. However, this approach does not address the website’s performance issue for users worldwide. Additionally, managing instances in multiple Regions may introduce additional complexity and operational overhead.

D. Hosting the website in an Amazon S3 bucket in the Regions closest to users and deleting the ALB and EC2 instances would be suitable for static content hosting, but it does not address the presence of dynamic content in the website. Moreover, it would require redesigning the website and ensuring the necessary functionality is supported by S3 alone, which may not be feasible depending on the website’s requirements.

Therefore, the most appropriate set of actions to improve website performance for users worldwide is to create an Amazon CloudFront distribution, configure the ALB as an origin, and update the Amazon Route 53 record to point to the CloudFront distribution. This leverages the global edge locations of CloudFront to cache and deliver the website content closer to users, resulting in improved performance.

Question 969

Exam Question

A company has hired a solutions architect to design a reliable architecture for its application. The application consists of one Amazon RDS DB instance and two manually provisioned Amazon EC2 instances that run web servers. The EC2 instances are located in a single Availability Zone. An employee recently deleted the DB instance, and the application was unavailable for 24 hours as a result. The company is concerned with the overall reliability of its environment.

What should the solutions architect do to maximize reliability of the application’s infrastructure?

A. Delete one EC2 instance and enable termination protection on the other EC2 instance. Update the DB instance to be Multi-AZ, and enable deletion protection.
B. Update the DB instance to be Multi-AZ, and enable deletion protection. Place the EC2 instances behind an Application Load Balancer, and run them in an EC2 Auto Scaling group across multiple Availability Zones.
C. Create an additional DB instance along with an Amazon API Gateway and an AWS Lambda function. Configure the application to invoke the Lambda function through API Gateway. Have the Lambda function write the data to the two DB instances.
D. Place the EC2 instances in an EC2 Auto Scaling group that has multiple subnets located in multiple Availability Zones. Use Spot Instances instead of On-Demand Instances. Set up Amazon CloudWatch alarms to monitor the health of the instances. Update the DB instance to be Multi-AZ, and enable deletion protection.

Correct Answer

B. Update the DB instance to be Multi-AZ, and enable deletion protection. Place the EC2 instances behind an Application Load Balancer, and run them in an EC2 Auto Scaling group across multiple Availability Zones.

Explanation

To maximize the reliability of the application’s infrastructure, the solutions architect should:

B. Update the DB instance to be Multi-AZ, and enable deletion protection. Place the EC2 instances behind an Application Load Balancer, and run them in an EC2 Auto Scaling group across multiple Availability Zones.

  • Updating the DB instance to be Multi-AZ ensures that a standby replica of the DB instance is created in a different Availability Zone, providing high availability and automatic failover in the event of a failure in the primary Availability Zone.
  • Enabling deletion protection for the DB instance prevents accidental deletion, reducing the risk of downtime caused by human error.
  • Placing the EC2 instances behind an Application Load Balancer (ALB) allows for load balancing and distribution of incoming traffic across multiple instances. This helps improve application availability and provides the ability to scale the instances horizontally as needed.
  • Running the EC2 instances in an EC2 Auto Scaling group across multiple Availability Zones ensures that the application is resilient to failures in a single Availability Zone. The Auto Scaling group will automatically launch new instances or terminate unhealthy instances based on scaling policies and health checks.
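The database-side changes amount to a single API call. A minimal sketch, assuming a hypothetical instance identifier:

```python
import boto3

rds = boto3.client("rds")

# Convert the existing instance to Multi-AZ and block accidental deletion.
rds.modify_db_instance(
    DBInstanceIdentifier="app-db",    # hypothetical identifier
    MultiAZ=True,
    DeletionProtection=True,
    # Applying immediately starts building the standby right away;
    # omit this to defer the change to the next maintenance window.
    ApplyImmediately=True,
)
```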

Option A: Deleting one EC2 instance and enabling termination protection on the other does not address the reliability concerns and does not provide a solution for the deleted DB instance issue.

Option C: Creating an additional DB instance along with an API Gateway and Lambda function introduces unnecessary complexity and may not address the reliability concerns of the infrastructure.

Option D: Placing the EC2 instances in an Auto Scaling group across multiple subnets and using Spot Instances can help with availability and cost optimization, but it does not address the issue of the deleted DB instance and the overall reliability of the infrastructure.

Therefore, option B provides the best approach to maximize the reliability of the application’s infrastructure by leveraging Multi-AZ for the DB instance, using an ALB for load balancing and distributing traffic, and running the EC2 instances in an Auto Scaling group across multiple Availability Zones.

Question 970

Exam Question

A manufacturing company wants to implement predictive maintenance on its machinery equipment. The company will install thousands of IoT sensors that will send data to AWS in real time. A solutions architect is tasked with implementing a solution that will receive events in an ordered manner for each machinery asset and ensure that data is saved for further processing at a later time.

Which solution would be MOST efficient?

A. Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.
B. Use Amazon Kinesis Data Streams for real-time events with a shard for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon EBS.
C. Use an Amazon SQS FIFO queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function for the SQS queue to save data to Amazon EFS.
D. Use an Amazon SQS standard queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function from the SQS queue to save data to Amazon S3.

Correct Answer

A. Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.

Explanation

The most efficient solution for receiving events in an ordered manner for each machinery asset and ensuring data is saved for further processing at a later time would be:

A. Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.

  • Amazon Kinesis Data Streams is designed for real-time streaming data ingestion and processing. By using a partition key for each equipment asset, all of an asset's records hash to the same shard, so events from each asset are processed in an ordered manner (see the sketch after this list).
  • Amazon Kinesis Data Firehose provides an easy way to save streaming data to Amazon S3, which is a highly scalable and durable storage service.
  • By using this combination, the solution can receive real-time events in an ordered manner and save the data to Amazon S3 for further processing at a later time.
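On the producer side, per-asset ordering comes from the partition key alone. A minimal sketch, assuming a hypothetical stream name and payload:

```python
import json

import boto3

kinesis = boto3.client("kinesis")


def publish_reading(asset_id: str, reading: dict) -> None:
    """Send one sensor reading; records sharing a partition key land on the
    same shard, so each asset's events stay in order."""
    kinesis.put_record(
        StreamName="machinery-events",     # hypothetical stream name
        Data=json.dumps(reading).encode("utf-8"),
        PartitionKey=asset_id,
    )


publish_reading("press-042", {"temperature_c": 81.5, "vibration_mm_s": 4.2})
```

A Kinesis Data Firehose delivery stream reading from the same stream then batches and writes the records to Amazon S3 without custom consumer code.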

Option B suggests using Amazon Kinesis Data Streams with a shard for each equipment asset and saving data to Amazon EBS. Kinesis Data Firehose cannot deliver to Amazon EBS; its supported destinations include Amazon S3, Amazon Redshift, and Amazon OpenSearch Service. EBS volumes are also block storage attached to individual EC2 instances, making them a poor fit for durable storage of streaming data.

Option C suggests using Amazon SQS FIFO queues for real-time events with one queue for each equipment asset and saving data to Amazon EFS. Creating and managing one FIFO queue per asset across thousands of sensors adds significant operational overhead, and Amazon EFS file storage is a less suitable target for later batch processing than Amazon S3.

Option D suggests using Amazon SQS standard queues for real-time events with one queue for each equipment asset and saving data to Amazon S3 using AWS Lambda. Standard queues provide only best-effort ordering, so they cannot guarantee that each asset's events are processed in order, which fails a core requirement of the scenario.

Therefore, option A is the most efficient solution for receiving ordered events from IoT sensors and saving the data for further processing at a later time.