The latest AWS Certified Solutions Architect – Associate (SAA-C03) practice exam questions and answers (Q&A) are available free of charge. They are intended to help you prepare for and pass the SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.
Table of Contents
- Question 821
- Question 822
- Question 823
- Question 824
- Question 825
- Question 826
- Question 827
- Question 828
- Question 829
- Question 830
Question 821
Exam Question
A solutions architect is designing a workload that will store hourly energy consumption by business tenants in a building. The sensors will feed a database through HTTP requests that will add up usage for each tenant. The solutions architect must use managed services when possible. The workload will receive more features in the future as the solutions architect adds independent components.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process the data, and store the data in an Amazon DynamoDB table.
B. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances to receive and process the data from the sensors. Use an Amazon S3 bucket to store the processed data.
C. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process the data, and store the data in a Microsoft SQL Server Express database on an Amazon EC2 instance.
D. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances to receive and process the data from the sensors. Use an Amazon Elastic File System (Amazon EFS) shared file system to store the processed data.
Correct Answer
A. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process the data, and store the data in an Amazon DynamoDB table.
Explanation
The solution that will meet the requirements with the least operational overhead is option A: Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process the data, and store the data in an Amazon DynamoDB table.
By using Amazon API Gateway with AWS Lambda, the workload can receive the data from the sensors through HTTP requests. The Lambda functions can then process the data and store it in an Amazon DynamoDB table. This solution leverages fully managed services, eliminating the need for manual infrastructure provisioning and management. DynamoDB is a highly scalable and fully managed NoSQL database, providing automatic scaling and high availability.
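Below is a minimal sketch of what the Lambda function behind the API Gateway endpoint might look like, assuming a Lambda proxy integration and a hypothetical DynamoDB table named `EnergyUsage` with partition key `tenant_id` and sort key `hour` (the field names and table name are illustrative, not from the question):

```python
import json
import os
from decimal import Decimal

import boto3

# Hypothetical table: partition key "tenant_id", sort key "hour"
TABLE_NAME = os.environ.get("TABLE_NAME", "EnergyUsage")
table = boto3.resource("dynamodb").Table(TABLE_NAME)


def lambda_handler(event, context):
    """Invoked by API Gateway (proxy integration) for each sensor HTTP request."""
    body = json.loads(event["body"])

    # Atomically add the reported consumption to the running total for this tenant/hour
    table.update_item(
        Key={"tenant_id": body["tenant_id"], "hour": body["hour"]},
        UpdateExpression="ADD consumption_kwh :c",
        ExpressionAttributeValues={":c": Decimal(str(body["consumption_kwh"]))},
    )

    return {"statusCode": 200, "body": json.dumps({"status": "recorded"})}
```

Because each reading arrives as an independent HTTP request, future features can be added as separate Lambda functions behind additional API Gateway routes without changing this component.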
Option B suggests using an Elastic Load Balancer (ELB) with an Auto Scaling group of EC2 instances to receive and process the data, and storing the processed data in an Amazon S3 bucket. While this solution can work, it introduces operational overhead due to the need for managing and scaling EC2 instances, along with the associated infrastructure. It also requires manual management of the storage infrastructure in S3.
Option C involves using Amazon API Gateway with AWS Lambda, similar to option A, but suggests storing the data in a Microsoft SQL Server Express database on an EC2 instance. This introduces additional management and maintenance overhead with the need to provision and manage the EC2 instance, install and configure SQL Server Express, and handle database maintenance tasks.
Option D suggests using an Elastic Load Balancer with an Auto Scaling group of EC2 instances to receive and process the data, and storing the processed data in an Amazon EFS shared file system. While EFS provides a scalable and shared file storage solution, it introduces additional operational overhead for managing and configuring the EFS file system, as well as maintaining and scaling the EC2 instances.
Therefore, option A with API Gateway and Lambda functions using DynamoDB is the recommended solution that offers the least operational overhead and leverages managed services for scalability and ease of management.
Reference
What’s New with AWS: 2021 Archive
Question 822
Exam Question
A company runs a web application that is backed by Amazon RDS. A new database administrator caused data loss by accidentally editing information in a database table. To help recover from this type of incident, the company wants the ability to restore the database to its state from 5 minutes before any change within the last 30 days.
Which feature should the solutions architect include in the design to meet this requirement?
A. Read replicas
B. Manual snapshots
C. Automated backups
D. Multi-AZ deployments
Correct Answer
C. Automated backups
Explanation
To meet the requirement of being able to restore the database to its state from 5 minutes before any change within the last 30 days, the solutions architect should include the feature of Automated backups in the design.
Automated backups, a feature provided by Amazon RDS, automatically perform backups of the database at regular intervals defined by the backup retention period. By enabling automated backups for the RDS instance, the database administrator can easily restore the database to a specific point in time within the last 30 days.
With automated backups, RDS captures incremental changes to the database and stores them as transaction logs. When a restore is requested, RDS applies the relevant transaction logs to the backup to bring the database to the desired point in time. This point-in-time recovery allows the database to be restored to any second within the retention period, up to roughly the last five minutes, which satisfies the requirement of restoring to its state from 5 minutes before any change within the last 30 days.
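As a minimal sketch of how such a restore could be requested with boto3, assuming a hypothetical source instance identifier of `app-db` and a restore point five minutes before the accidental edit (the timestamp and identifiers are illustrative):

```python
from datetime import datetime, timedelta, timezone

import boto3

rds = boto3.client("rds")

# Hypothetical timestamp of the accidental edit; restore to five minutes earlier
bad_change_time = datetime(2024, 5, 1, 14, 30, tzinfo=timezone.utc)
restore_time = bad_change_time - timedelta(minutes=5)

# The restore creates a NEW DB instance; the original instance is left untouched
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="app-db",           # hypothetical source instance
    TargetDBInstanceIdentifier="app-db-restored",  # new instance created by the restore
    RestoreTime=restore_time,
)
```

Note that covering the required 30-day window means the instance's backup retention period must be set to 30 days (Amazon RDS supports a maximum of 35 days).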
While read replicas (option A) can provide benefits such as improved read performance and high availability, they do not directly address the requirement of restoring the database to a specific point in time.
Manual snapshots (option B) can be used to create point-in-time backups of the database, but they require manual intervention to capture the snapshots and may not provide the desired level of granularity for a 5-minute restore window.
Multi-AZ deployments (option D) are used to achieve high availability by automatically replicating the database to a standby instance in a different Availability Zone. While they can help with failover and availability, they do not provide the capability for point-in-time recovery.
Therefore, option C – Automated backups, is the most suitable feature to include in the design to meet the requirement of restoring the database to its state from 5 minutes before any change within the last 30 days.
Reference
AWS > Documentation > Amazon RDS > User Guide > Restoring a DB instance to a specified time
Question 823
Exam Question
A large media company hosts a web application on AWS. The company wants to start caching confidential media files so that users around the world will have reliable access to the files. The content is stored in Amazon S3 buckets. The company must deliver the content quickly, regardless of where the requests originate geographically.
Which solution will meet these requirements?
A. Use AWS DataSync to connect the S3 buckets to the web application.
B. Deploy AWS Global Accelerator to connect the S3 buckets to the web application.
C. Deploy Amazon CloudFront to connect the S3 buckets to CloudFront edge servers.
D. Use Amazon Simple Queue Service (Amazon SQS) to connect the S3 buckets to the web application.
Correct Answer
C. Deploy Amazon CloudFront to connect the S3 buckets to CloudFront edge servers.
Explanation
The solution that will meet the requirements of caching confidential media files for reliable and fast access from users around the world is option C: Deploy Amazon CloudFront to connect the S3 buckets to CloudFront edge servers.
Amazon CloudFront is a content delivery network (CDN) service that caches content at edge locations worldwide. By configuring CloudFront with the S3 buckets hosting the media files, the content can be cached at edge locations closer to the users, resulting in reduced latency and improved performance.
CloudFront supports secure and confidential content delivery by allowing you to restrict access to the media files using various authentication and authorization mechanisms, such as signed URLs or CloudFront private content. This ensures that only authorized users can access the confidential media files.
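As an illustration of the signed-URL mechanism, here is a minimal sketch using botocore's `CloudFrontSigner`. It assumes the distribution has already been configured with a trusted key group; the key pair ID, distribution domain, object path, and private key file are hypothetical, and the third-party `rsa` package is used for the RSA-SHA1 signature:

```python
from datetime import datetime, timedelta, timezone

import rsa  # third-party package used here for RSA-SHA1 signing
from botocore.signers import CloudFrontSigner

KEY_PAIR_ID = "K2JCJMDEHXQW5F"  # hypothetical CloudFront public key ID


def rsa_signer(message):
    # Private key matching the public key uploaded to CloudFront
    with open("cloudfront_private_key.pem", "rb") as f:
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, "SHA-1")


signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# The URL is valid for one hour; later requests are rejected at the edge location
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/confidential/report.pdf",
    date_less_than=datetime.now(timezone.utc) + timedelta(hours=1),
)
print(signed_url)
```

Combined with an origin access control on the S3 origin, this keeps the confidential files unreachable except through authorized CloudFront requests.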
AWS DataSync (option A) is a data transfer service that is typically used for migrating and synchronizing data between on-premises storage systems and AWS. While it can be used to transfer data to S3 buckets, it does not provide the caching and global content delivery capabilities required in this scenario.
AWS Global Accelerator (option B) is a service that improves the performance and availability of applications by directing traffic through the AWS global network infrastructure. While it can improve network performance, it does not provide the caching capabilities needed for delivering and caching media files.
Amazon Simple Queue Service (Amazon SQS) (option D) is a fully managed message queuing service used for decoupling and asynchronous processing in distributed systems. It is not designed for caching or content delivery, and it does not provide the performance and latency benefits required for media file delivery.
Therefore, option C – Deploy Amazon CloudFront to connect the S3 buckets to CloudFront edge servers – is the recommended solution for caching confidential media files and delivering them quickly to users around the world.
Question 824
Exam Question
A company is running an application on Amazon EC2 instances. Traffic to the workload increases substantially during business hours and decreases afterward. The CPU utilization of an EC2 instance is a strong indicator of end-user demand on the application. The company has configured an Auto Scaling group to have a minimum group size of 2 EC2 instances and a maximum group size of 10 EC2 instances. The company is concerned that the current scaling policy that is associated with the Auto Scaling group might not be correct. The company must avoid over-provisioning EC2 instances and incurring unnecessary costs.
What should a solutions architect recommend to meet these requirements?
A. Configure Amazon EC2 Auto Scaling to use a scheduled scaling plan and launch an additional 8 EC2 instances during business hours.
B. Configure AWS Auto Scaling to use a scaling plan that enables predictive scaling. Configure predictive scaling with a scaling mode of forecast and scale, and to enforce the maximum capacity setting during scaling.
C. Configure a step scaling policy to add 4 EC2 instances at 50% CPU utilization and add another 4 EC2 instances at 90% CPU utilization. Configure scale-in policies to perform the reverse and remove EC2 instances based on the two values.
D. Configure AWS Auto Scaling to have a desired capacity of 5 EC2 instances, and disable any existing scaling policies. Monitor the CPU utilization metric for 1 week. Then create dynamic scaling policies that are based on the observed values.
Correct Answer
B. Configure AWS Auto Scaling to use a scaling plan that enables predictive scaling. Configure predictive scaling with a scaling mode of forecast and scale, and to enforce the maximum capacity setting during scaling.
Explanation
A solutions architect should recommend option B: Configure AWS Auto Scaling to use a scaling plan that enables predictive scaling. This option will help meet the requirements of avoiding over-provisioning EC2 instances and incurring unnecessary costs.
Predictive scaling in AWS Auto Scaling leverages machine learning algorithms to analyze historical data and forecast future demand patterns. It uses this forecast to proactively scale the Auto Scaling group based on predicted load, allowing the group to scale ahead of time to handle the anticipated increase in traffic during business hours.
By configuring predictive scaling with a scaling mode of forecast and scale, and enforcing the maximum capacity setting during scaling, the company can ensure that the Auto Scaling group scales up to the appropriate capacity based on predicted demand while avoiding over-provisioning. This approach helps optimize costs by scaling the group only when necessary and scaling down when demand decreases.
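Option B describes an AWS Auto Scaling scaling plan; as a closely related sketch, the same behavior can be expressed as a predictive scaling policy on the Auto Scaling group through the Amazon EC2 Auto Scaling API. The group name and target value below are hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical Auto Scaling group name
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",
    PolicyName="business-hours-predictive-scaling",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                # Scale so that average CPU utilization tracks toward 50%
                "TargetValue": 50.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        # Forecast demand and launch capacity ahead of it
        "Mode": "ForecastAndScale",
        # Never exceed the group's configured maximum capacity (10 instances)
        "MaxCapacityBreachBehavior": "HonorMaxCapacity",
    },
)
```

Honoring the maximum capacity keeps the group within the 2-to-10 instance bounds the company already defined, which is what prevents over-provisioning.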
The other options have certain drawbacks:
- Option A suggests using a scheduled scaling plan to launch additional instances during business hours. While this approach can help scale the group, it does not take into account the variability of demand during those hours. It may result in under-provisioning during peak periods or over-provisioning during periods of lower demand.
- Option C proposes a step scaling policy based on fixed CPU utilization thresholds. This approach may not be responsive enough to handle fluctuating demand patterns effectively, and it requires manual configuration of specific thresholds, which can be time-consuming and less accurate.
- Option D suggests starting with a fixed desired capacity and monitoring the CPU utilization for 1 week before creating dynamic scaling policies based on observed values. This approach is reactive and does not provide immediate scalability based on demand. It also requires manual intervention and monitoring, which may not be efficient for dynamically changing workloads.
Therefore, option B is the most suitable recommendation for effectively scaling the EC2 instances based on predicted demand patterns while optimizing costs.
Reference
AWS > Documentation > Amazon EC2 Auto Scaling > User Guide > Step and simple scaling policies for Amazon EC2 Auto Scaling
Question 825
Exam Question
A company wants to migrate its 1 PB on-premises image repository to AWS. The images will be used by a serverless web application. Images stored in the repository are rarely accessed, but they must be immediately available. Additionally, the images must be encrypted at rest and protected from accidental deletion.
Which solution meets these requirements?
A. Implement client-side encryption and store the images in an Amazon S3 Glacier vault. Set a vault lock to prevent accidental deletion.
B. Store the images in an Amazon S3 bucket in the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Enable versioning, default encryption, and MFA Delete on the S3 bucket.
C. Store the images in an Amazon FSx for Windows File Server file share. Configure the Amazon FSx file share to use an AWS Key Management Service (AWS KMS) customer master key (CMK) to encrypt the images in the file share. Use NTFS permission sets on the images to prevent accidental deletion.
D. Store the images in an Amazon Elastic File System (Amazon EFS) file share in the Infrequent Access storage class. Configure the EFS file share to use an AWS Key Management Service (AWS KMS) customer master key (CMK) to encrypt the images in the file share. Use NFS permission sets on the images to prevent accidental deletion.
Correct Answer
B. Store the images in an Amazon S3 bucket in the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Enable versioning, default encryption, and MFA Delete on the S3 bucket.
Explanation
The solution that meets the requirements of migrating the 1 PB image repository to AWS while ensuring immediate availability, encryption at rest, and protection from accidental deletion is option B: Store the images in an Amazon S3 bucket in the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Enable versioning, default encryption, and MFA Delete on the S3 bucket.
Here’s how this solution addresses the requirements:
- Immediate availability: Storing the images in an Amazon S3 bucket ensures immediate availability. S3 provides high durability and availability, allowing quick access to the images when needed.
- Encryption at rest: By enabling default encryption on the S3 bucket, all objects (images) stored in the bucket will be encrypted automatically. This ensures that the images are encrypted at rest, protecting them from unauthorized access.
- Protection from accidental deletion: Enabling versioning on the S3 bucket allows for the preservation of multiple versions of an object. In case of accidental deletion or modification, previous versions of the images can be retrieved. Additionally, enabling MFA Delete adds an extra layer of protection by requiring Multi-Factor Authentication (MFA) before permanently deleting objects, preventing accidental deletions.
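As a minimal sketch, the three bucket protections could be enabled with boto3 as shown below. The bucket name and KMS key alias are hypothetical, and note that MFA Delete can only be enabled by the bucket owner's root credentials, with a current MFA code supplied in the request:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "image-repository-example"  # hypothetical bucket name

# Default encryption: every new object is encrypted at rest with SSE-KMS
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/image-repo-key",  # hypothetical KMS key alias
                }
            }
        ]
    },
)

# Versioning plus MFA Delete: overwritten or deleted objects keep prior versions,
# and permanently deleting a version requires an MFA code (root credentials only)
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",  # device ARN + current code
)
```

The images themselves would be uploaded with `StorageClass="STANDARD_IA"` (or moved there by a lifecycle rule) to get the Standard-IA pricing for rarely accessed objects while keeping millisecond retrieval.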
Option A (using S3 Glacier vault), Option C (using Amazon FSx for Windows File Server), and Option D (using Amazon EFS file share) do not provide immediate availability of the images as required. S3 Glacier is suitable for long-term archival storage, but it has a longer retrieval time. Amazon FSx and Amazon EFS are not optimized for storing rarely accessed files and may not provide cost-effective storage for the large image repository.
Therefore, option B is the most appropriate solution for migrating the image repository to AWS while meeting the specified requirements.
Reference
AWS > Documentation > Amazon Simple Storage Service (S3) > User Guide > Using Amazon S3 storage classes
Question 826
Exam Question
A company has been running a web application with an Oracle relational database in an on-premises data center for the past 15 years. The company must migrate the database to AWS. The company needs to reduce operational overhead without having to modify the application’s code.
Which solution meets these requirements?
A. Use AWS Database Migration Service (AWS DMS) to migrate the database servers to Amazon RDS.
B. Use Amazon EC2 instances to migrate and operate the database servers.
C. Use AWS Database Migration Service (AWS DMS) to migrate the database servers to Amazon DynamoDB.
D. Use an AWS Snowball Edge Storage Optimized device to migrate the data from Oracle to Amazon Aurora.
Correct Answer
A. Use AWS Database Migration Service (AWS DMS) to migrate the database servers to Amazon RDS.
Explanation
The solution that meets the requirements of migrating the Oracle relational database to AWS while reducing operational overhead without modifying the application’s code is option A: Use AWS Database Migration Service (AWS DMS) to migrate the database servers to Amazon RDS.
Here’s how this solution addresses the requirements:
- Migrate the database servers: AWS DMS is a fully managed service that simplifies and streamlines the process of migrating databases to AWS. It supports heterogeneous migrations, including Oracle to Amazon RDS for Oracle. By using AWS DMS, the company can migrate its Oracle database to Amazon RDS without significant modifications to the application’s code.
- Reduce operational overhead: Amazon RDS is a managed database service that offloads many operational tasks, such as hardware provisioning, software patching, database backups, and automatic software updates, to AWS. By migrating to Amazon RDS, the company can leverage the managed service capabilities and reduce the operational overhead associated with maintaining the database infrastructure.
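As a minimal sketch of the migration step, the replication task could be created with boto3 as shown below, assuming the replication instance and the Oracle source and RDS for Oracle target endpoints already exist. All ARNs, identifiers, and the `APP` schema name are hypothetical:

```python
import json

import boto3

dms = boto3.client("dms")

# Select which schemas/tables to migrate; "%" wildcards include every table in the APP schema
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-app-schema",
            "object-locator": {"schema-name": "APP", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

# Full load of existing data, then ongoing replication (CDC) until cutover
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-rds-oracle",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",    # hypothetical
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",    # hypothetical
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",  # hypothetical
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
```

Because the target is still Oracle (RDS for Oracle), this is a homogeneous migration and the application's SQL does not need to change.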
Option B (using Amazon EC2 instances) would require the company to manage the underlying infrastructure and perform manual tasks such as provisioning, patching, and backups, which would increase operational overhead.
Option C (migrating to Amazon DynamoDB) is not suitable for a relational database like Oracle. DynamoDB is a NoSQL database, and migrating a relational database to it would require significant changes to the application’s code, which is not desired.
Option D (using AWS Snowball Edge Storage Optimized device) is a data migration option but may not be the most suitable for migrating a relational database. Snowball Edge is typically used for large-scale data transfer and offline data migration scenarios.
Therefore, option A is the most appropriate solution for migrating the Oracle database to AWS while reducing operational overhead and minimizing the need for application code modifications.
Reference
AWS > Documentation > AWS Prescriptive Guidance > Patterns > Migrate an on-premises Oracle database to Amazon RDS for Oracle
Question 827
Exam Question
A solutions architect is creating an application. The application will run on Amazon EC2 instances in private subnets across multiple Availability Zones in a VPC. The EC2 instances will frequently access large files that contain confidential information. These files are stored in Amazon S3 buckets for processing. The solutions architect must optimize the network architecture to minimize data transfer costs.
What should the solutions architect do to meet these requirements?
A. Create a gateway endpoint for Amazon S3 in the VPC. In the route tables for the private subnets, add an entry for the gateway endpoint.
B. Create a single NAT gateway in a public subnet. In the route tables for the private subnets, add a default route that points to the NAT gateway.
C. Create an AWS PrivateLink interface endpoint for Amazon S3 in the VPC. In the route tables for the private subnets, add an entry for the interface endpoint.
D. Create one NAT gateway for each Availability Zone in public subnets. In each of the route tables for the private subnets, add a default route that points to the NAT gateway in the same Availability Zone.
Correct Answer
A. Create a gateway endpoint for Amazon S3 in the VPC. In the route tables for the private subnets, add an entry for the gateway endpoint.
Explanation
To optimize the network architecture and minimize data transfer costs when EC2 instances in private subnets access large files in Amazon S3, the solutions architect should choose option A: Create a gateway endpoint for Amazon S3 in the VPC and add an entry for the gateway endpoint in the route tables for the private subnets.
Here’s how this solution meets the requirements:
- Gateway endpoint for Amazon S3: A gateway VPC endpoint lets instances in private subnets reach Amazon S3 over the AWS network without an internet gateway or NAT gateway. Gateway endpoints incur no additional charge, so frequently transferring large files to and from S3 does not generate NAT gateway data processing fees or per-gigabyte endpoint charges, and the confidential traffic never traverses the public internet.
- Route table configuration: Associating the gateway endpoint with the private subnets’ route tables adds a route for the S3 prefix list that targets the endpoint, so traffic destined for Amazon S3 is routed through the endpoint automatically.
Option B (a single NAT gateway) and option D (one NAT gateway per Availability Zone) would work functionally, but NAT gateways charge per gigabyte of data processed, so routing frequent, large S3 transfers through them increases data transfer costs rather than minimizing them.
Option C (an AWS PrivateLink interface endpoint for Amazon S3) also keeps traffic on the AWS network, but interface endpoints carry hourly and per-gigabyte data processing charges, and they are reached through elastic network interfaces and DNS names rather than route table entries. For high-volume, cost-sensitive access to S3 from within a VPC, the gateway endpoint is the lower-cost choice.
Therefore, option A is the most suitable solution for minimizing data transfer costs while giving the EC2 instances private access to the confidential files in Amazon S3.
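A minimal sketch of creating the gateway endpoint and associating it with the private subnets’ route tables is shown below; the region, VPC ID, and route table IDs are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3; associating it with the route tables adds the S3 prefix-list
# route automatically, so S3-bound traffic stays on the AWS network at no extra charge.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234def567890",              # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=[
        "rtb-0aaa1111bbb22222a",                # hypothetical private subnet route tables
        "rtb-0ccc3333ddd44444b",
    ],
)
```

Gateway endpoints are available only for Amazon S3 and Amazon DynamoDB, which is why this low-cost option applies to this scenario.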
For comparison, the NAT gateway approach in options B and D relies on route tables like the following. The first is the route table associated with the public subnet: the first entry is the default entry for local routing in the VPC, and the second entry sends internet-bound traffic to the internet gateway.

| Destination | Target |
|---|---|
| 10.0.0.0/16 | local |
| 0.0.0.0/0 | internet-gateway-id |

The following is the route table associated with the private subnet in Availability Zone B. The first entry is the default entry for local routing in the VPC; it enables the instances in the VPC to communicate with each other. The second entry sends all other subnet traffic, such as internet-bound traffic, to the NAT gateway.

| Destination | Target |
|---|---|
| 10.0.0.0/16 | local |
| 0.0.0.0/0 | nat-gateway-id |
Reference
AWS > Documentation > Amazon VPC > User Guide > NAT gateways
Question 828
Exam Question
An Amazon EC2 instance is located in a private subnet in a new VPC. This subnet does not have outbound internet access, but the EC2 instance needs the ability to download monthly security updates from an outside vendor.
What should a solutions architect do to meet these requirements?
A. Create an internet gateway, and attach it to the VPC. Configure the private subnet route table to use the internet gateway as the default route.
B. Create a NAT gateway, and place it in a public subnet. Configure the private subnet route table to use the NAT gateway as the default route.
C. Create a NAT instance, and place it in the same subnet where the EC2 instance is located. Configure the private subnet route table to use the NAT instance as the default route.
D. Create an internet gateway, and attach it to the VPC. Create a NAT instance, and place it in the same subnet where the EC2 instance is located. Configure the private subnet route table to use the internet gateway as the default route.
Correct Answer
B. Create a NAT gateway, and place it in a public subnet. Configure the private subnet route table to use the NAT gateway as the default route.
Explanation
To allow an EC2 instance located in a private subnet to download monthly security updates from an outside vendor, a solutions architect should choose option B: Create a NAT gateway and place it in a public subnet. Then, configure the private subnet’s route table to use the NAT gateway as the default route.
Here’s how this solution meets the requirements:
- NAT gateway: By creating a NAT gateway in a public subnet, the EC2 instance in the private subnet can access the internet indirectly through the NAT gateway. The NAT gateway provides outbound internet access to resources in the private subnet without exposing them directly to the public internet.
- Route table configuration: Configuring the private subnet’s route table to use the NAT gateway as the default route allows the EC2 instance to send outbound traffic, such as the security updates, to the outside vendor through the NAT gateway.
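As a minimal sketch of this setup with boto3, assuming the public subnet, an allocated Elastic IP, and the private subnet's route table already exist (all IDs below are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# The NAT gateway lives in the PUBLIC subnet and uses an Elastic IP for outbound traffic
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0pub1234567890abc",        # hypothetical public subnet
    AllocationId="eipalloc-0abc1234567890def",  # hypothetical Elastic IP allocation
)
nat_gateway_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before routing traffic through it
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gateway_id])

# Default route in the PRIVATE subnet's route table points at the NAT gateway
ec2.create_route(
    RouteTableId="rtb-0priv1234567890ab",       # hypothetical private subnet route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gateway_id,
)
```

With this in place, the instance can initiate outbound HTTPS connections to the vendor's update server, while inbound connections from the internet remain impossible.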
Option A (creating an internet gateway and configuring the private subnet route table to use it as the default route) is incorrect because it would provide direct internet access to the private subnet, which is not desired in this case.
Option C (creating a NAT instance in the same subnet as the EC2 instance) is less preferred compared to a NAT gateway because NAT instances require additional configuration and management overhead, while NAT gateways are managed services provided by AWS.
Option D (creating both an internet gateway and a NAT instance) is unnecessary and would introduce additional complexity without providing any significant benefits for this use case.
Therefore, option B is the most appropriate solution, allowing the EC2 instance in the private subnet to download security updates from the outside vendor by utilizing a NAT gateway in a public subnet.
Question 829
Exam Question
A company has a legacy data processing application that runs on Amazon EC2 instances. Data is processed sequentially, but the order of results does not matter. The application uses a monolithic architecture. The only way that the company can scale the application to meet increased demand is to increase the size of the instances. The company’s developers have decided to rewrite the application to use a microservices architecture on Amazon Elastic Container Service (Amazon ECS).
What should a solutions architect recommend for communication between the microservices?
A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Add code to the data producers, and send data to the queue. Add code to the data consumers to process data from the queue.
B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Add code to the data producers, and publish notifications to the topic. Add code to the data consumers to subscribe to the topic.
C. Create an AWS Lambda function to pass messages. Add code to the data producers to call the Lambda function with a data object. Add code to the data consumers to receive a data object that is passed from the Lambda function.
D. Create an Amazon DynamoDB table. Enable DynamoDB Streams. Add code to the data producers to insert data into the table. Add code to the data consumers to use the DynamoDB Streams API to detect new table entries and retrieve the data.
Correct Answer
A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Add code to the data producers, and send data to the queue. Add code to the data consumers to process data from the queue.
Explanation
A solutions architect should recommend option A: Create an Amazon Simple Queue Service (Amazon SQS) queue for communication between the microservices.
Here’s why this is the recommended approach:
- Loose coupling: Using an SQS queue decouples the data producers and data consumers. The data producers can send data to the queue without needing to know the details of the data consumers. Similarly, the data consumers can process data from the queue without being tightly coupled to the data producers.
- Scalability and elasticity: With an SQS queue, the system can easily scale to handle increased demand by adding more data producers or data consumers. The decoupled nature of the queue allows for independent scaling of the different microservices based on their specific needs.
- Fault tolerance: SQS provides a reliable and highly available message queuing service. Messages sent to the queue are stored redundantly across multiple availability zones, ensuring durability and availability even in the event of failures.
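A minimal sketch of the producer and consumer sides with boto3 is shown below, assuming a hypothetical standard queue named `data-processing-queue` and an illustrative message format:

```python
import json

import boto3

sqs = boto3.resource("sqs")
queue = sqs.get_queue_by_name(QueueName="data-processing-queue")  # hypothetical queue

# Producer microservice: enqueue one unit of work
queue.send_message(MessageBody=json.dumps({"record_id": 42, "payload": "raw data"}))

# Consumer microservice: long-poll, process, then delete so the message is not redelivered
while True:
    messages = queue.receive_messages(MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for message in messages:
        record = json.loads(message.body)
        print(f"processing record {record['record_id']}")  # real processing goes here
        message.delete()
```

Because the order of results does not matter, a standard queue is sufficient here; a FIFO queue is not required.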
Option B (using Amazon Simple Notification Service – Amazon SNS) is not the best choice for this scenario because SNS is more suited for use cases where notifications need to be broadcasted to multiple subscribers rather than passing messages between microservices.
Option C (using AWS Lambda) introduces unnecessary complexity by involving serverless functions for message passing. While Lambda functions can be used effectively for certain tasks within a microservices architecture, they are not the ideal choice for communication between microservices.
Option D (using Amazon DynamoDB and DynamoDB Streams) is also not the most suitable solution for microservices communication. DynamoDB Streams is primarily designed for capturing and reacting to changes in DynamoDB tables, and it may not provide the flexibility and ease of use required for general message passing between microservices.
Therefore, option A with Amazon SQS is the recommended solution, providing a scalable, reliable, and decoupled communication mechanism between the microservices in the new architecture.
Reference
AWS Database Blog > DynamoDB Streams Use Cases and Design Patterns
Question 830
Exam Question
A company’s facility has badge readers at every entrance throughout the building. When badges are scanned, the readers send a message over HTTPS to indicate who attempted to access that particular entrance. A solutions architect must design a system to process these messages from the sensors. The solution must be highly available, and the results must be made available for the company’s security team to analyze.
Which system architecture should the solutions architect recommend?
A. Launch an Amazon EC2 instance to serve as the HTTPS endpoint and to process the messages. Configure the EC2 instance to save the results to an Amazon S3 bucket.
B. Create an HTTPS endpoint in Amazon API Gateway. Configure the API Gateway endpoint to invoke an AWS Lambda function to process the messages and save the results to an Amazon DynamoDB table.
C. Use Amazon Route 53 to direct incoming sensor messages to an AWS Lambda function. Configure the Lambda function to process the messages and save the results to an Amazon DynamoDB table.
D. Create a gateway VPC endpoint for Amazon S3. Configure a Site-to-Site VPN connection from the facility network to the VPC so that sensor data can be written directly to an S3 bucket by way of the VPC endpoint.
Correct Answer
B. Create an HTTPS endpoint in Amazon API Gateway. Configure the API Gateway endpoint to invoke an AWS Lambda function to process the messages and save the results to an Amazon DynamoDB table.
Explanation
A solutions architect should recommend option B: Create an HTTPS endpoint in Amazon API Gateway. Configure the API Gateway endpoint to invoke an AWS Lambda function to process the messages and save the results to an Amazon DynamoDB table.
Here’s why this is the recommended architecture:
- Scalability and Availability: Amazon API Gateway is a highly scalable and fully managed service that can handle the incoming messages from badge readers at scale. It automatically scales to handle high traffic loads, ensuring the system remains highly available.
- Serverless Processing: By configuring the API Gateway endpoint to invoke an AWS Lambda function, the system can benefit from the serverless architecture. AWS Lambda allows you to run your code without provisioning or managing servers. It automatically scales based on the incoming workload and charges only for the actual usage, providing cost optimization.
- Data Storage: Storing the results in an Amazon DynamoDB table provides a highly available and scalable NoSQL database solution. DynamoDB can handle the read and write throughput required to store and retrieve the message processing results efficiently.
- Integration and Analysis: The processed results in DynamoDB can be made available to the company’s security team for analysis. The security team can use various methods to analyze the data, such as querying the DynamoDB table directly or integrating with other analytics and visualization tools.
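A minimal sketch of the Lambda function invoked by API Gateway is shown below, assuming a Lambda proxy integration and a hypothetical DynamoDB table named `BadgeScans` keyed on `entrance_id` and `scan_time` (the field names and payload shape are illustrative):

```python
import json
import os
from datetime import datetime, timezone

import boto3

# Hypothetical table: partition key "entrance_id", sort key "scan_time"
table = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "BadgeScans"))


def lambda_handler(event, context):
    """Invoked by API Gateway (proxy integration) for each badge-reader message."""
    scan = json.loads(event["body"])

    table.put_item(
        Item={
            "entrance_id": scan["entrance_id"],
            "scan_time": datetime.now(timezone.utc).isoformat(),
            "badge_id": scan["badge_id"],
            "access_granted": scan.get("access_granted", False),
        }
    )

    return {"statusCode": 200, "body": json.dumps({"status": "stored"})}
```

Keying on entrance and timestamp lets the security team query scans per door over time; other access patterns could be served with additional indexes or by exporting the table to analytics tools.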
Options A, C, and D are not the recommended solutions for this scenario:
- Option A (EC2 instance with S3): Using an EC2 instance to serve as the HTTPS endpoint adds operational overhead and requires managing and maintaining the infrastructure. It is not as scalable and fault-tolerant as the serverless architecture provided by API Gateway and Lambda.
- Option C (Route 53 with Lambda): Route 53 is a DNS service; it resolves domain names to endpoints but cannot itself serve as an HTTPS endpoint or invoke a Lambda function directly, and it does not provide the request-handling features of API Gateway, such as throttling, authentication, and caching, that are useful for building robust and secure APIs.
- Option D (VPC endpoint with VPN): Creating a VPC endpoint for S3 and setting up a Site-to-Site VPN connection adds unnecessary complexity to the architecture. It also doesn’t provide the processing and analysis capabilities required for this use case.
Therefore, option B with Amazon API Gateway and AWS Lambda is the recommended architecture, providing a highly available, scalable, and serverless solution for processing the badge reader messages and storing the results for analysis.