
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 12

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam question and answer (Q&A) dumps are available free of charge to help you pass the AWS Certified Solutions Architect – Associate SAA-C03 exam and earn the certification.

Question 831

Exam Question

A company receives data from millions of users totaling about 1 each day. The company provides its users with usage reports going back 12 months. All usage data must be stored for at least 5 years to comply with regulatory and auditing requirements.

Which storage solution is MOST cost-effective?

A. Store the data in Amazon S3 Standard. Set a lifecycle rule to transition the data to S3 Glacier Deep Archive after 1 year. Set a lifecycle rule to delete the data after 5 years.
B. Store the data in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA). Set a lifecycle rule to transition the data to S3 Glacier after 1 year. Set the lifecycle rule to delete the data after 5 years.
C. Store the data in Amazon S3 Standard. Set a lifecycle rule to transition the data to S3 Standard Infrequent Access (S3 Standard-IA) after 1 year. Set a lifecycle rule to delete the data after 5 years.
D. Store the data in Amazon S3 Standard. Set a lifecycle rule to transition the data to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 year. Set a lifecycle rule to delete the data after 5 years.

Correct Answer

B. Store the data in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA). Set a lifecycle rule to transition the data to S3 Glacier after 1 year. Set the lifecycle rule to delete the data after 5 years.

Explanation

The most cost-effective storage solution for this scenario would be option B: Store the data in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA). Set a lifecycle rule to transition the data to S3 Glacier after 1 year and set the lifecycle rule to delete the data after 5 years.

Here’s why this option is the most cost-effective:

  1. Storage Cost: Amazon S3 One Zone-IA storage class is designed to provide a lower-cost option compared to Amazon S3 Standard. It offers a reduced storage price while still providing durability and availability within a single Availability Zone. Since the company needs to store the data for at least 5 years, using S3 One Zone-IA helps reduce the overall storage costs.
  2. Lifecycle Policy: By setting a lifecycle rule to transition the data to S3 Glacier after 1 year, the less frequently accessed data is moved to a lower-cost storage class. Amazon S3 Glacier provides long-term archival storage at a lower cost compared to both S3 Standard and S3 One Zone-IA. This further optimizes the storage costs for the data that is not frequently accessed.
  3. Data Deletion: Setting a lifecycle rule to delete the data after 5 years ensures compliance with the regulatory and auditing requirements while avoiding unnecessary storage costs for retaining data beyond the required retention period.

Option A (S3 Standard with a transition to S3 Glacier Deep Archive) keeps the data in S3 Standard for the first year, which costs more than S3 One Zone-IA for the same volume of data, even though S3 Glacier Deep Archive itself is the cheapest archival tier. Options C and D also store the first year of data in S3 Standard and then transition it only to S3 Standard-IA or S3 One Zone-IA, both of which cost considerably more than S3 Glacier for long-term retention.

Therefore, option B with Amazon S3 One Zone-IA, transitioning to S3 Glacier, and setting a data deletion lifecycle rule is the most cost-effective solution for storing the data from millions of users for 5 years while providing access to the usage reports for the past 12 months.
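As a rough sketch of how this lifecycle policy might be expressed with boto3 (the bucket name is a placeholder; the initial S3 One Zone-IA storage class would be set when each object is uploaded):

```python
import boto3

s3 = boto3.client("s3")

BUCKET = "usage-data-example-bucket"  # hypothetical bucket name

# Objects are written to S3 One Zone-IA at upload time, for example:
# s3.put_object(Bucket=BUCKET, Key="2024/06/01/usage.csv", Body=data,
#               StorageClass="ONEZONE_IA")

s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-1-year-delete-after-5",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Transitions": [
                    {"Days": 365, "StorageClass": "GLACIER"}  # archive after 1 year
                ],
                "Expiration": {"Days": 1825},  # delete after roughly 5 years
            }
        ]
    },
)
```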


Reference

AWS > Documentation > Amazon Simple Storage Service (S3) > User Guide > Examples of S3 Lifecycle configuration

Question 832

Exam Question

A company uses Amazon RDS for PostgreSQL databases for its data tier. The company must implement password rotation for the databases.

Which solution meets this requirement with the LEAST operational overhead?

A. Store the password in AWS Secrets Manager. Enable automatic rotation on the secret.
B. Store the password in AWS Systems Manager Parameter Store. Enable automatic rotation on the parameter.
C. Store the password in AWS Systems Manager Parameter Store. Write an AWS Lambda function that rotates the password.
D. Store the password in AWS Key Management Service (AWS KMS). Enable automatic rotation on the customer master key (CMK).

Correct Answer

A. Store the password in AWS Secrets Manager. Enable automatic rotation on the secret.

Explanation

The solution that meets the password rotation requirement with the least operational overhead is option A: Store the password in AWS Secrets Manager and enable automatic rotation on the secret.

Here’s why option A is the best choice:

  1. AWS Secrets Manager: AWS Secrets Manager is a fully managed service that helps you protect secrets such as database credentials, API keys, and other sensitive information. It provides built-in features for secure storage and automatic rotation of secrets.
  2. Automatic Rotation: Enabling automatic rotation on the secret in AWS Secrets Manager allows the company to automate the process of regularly updating the password for the PostgreSQL databases. This eliminates the need for manual intervention and reduces operational overhead.
  3. Integration with RDS: AWS Secrets Manager integrates seamlessly with Amazon RDS, including PostgreSQL databases. It provides a secure and efficient way to manage the database credentials and automatically update them during rotation.

Option B (AWS Systems Manager Parameter Store) is not the best fit because Parameter Store does not provide built-in automatic rotation; rotating a parameter value would require building and maintaining a separate process, which adds operational overhead.

Option C (AWS Systems Manager Parameter Store with a custom Lambda function for rotation) requires developing and maintaining a custom solution for password rotation, which increases operational overhead.

Option D (AWS KMS with automatic rotation on the CMK) is focused on key management and encryption rather than password rotation for the databases. While AWS KMS can be used to encrypt and manage secrets, it does not provide direct support for automatic rotation of database passwords.

Therefore, option A with AWS Secrets Manager and automatic rotation on the secret is the recommended solution with the least operational overhead for implementing password rotation for the Amazon RDS PostgreSQL databases.
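As an illustration, automatic rotation can be turned on for an existing secret with a single boto3 call. The secret name and rotation function ARN below are placeholders; in practice, Secrets Manager supplies managed rotation Lambda templates for Amazon RDS for PostgreSQL, so no custom rotation code has to be written.

```python
import boto3

secrets = boto3.client("secretsmanager")

# Hypothetical secret name and rotation Lambda ARN for illustration only.
SECRET_ID = "prod/postgres/app-credentials"
ROTATION_LAMBDA_ARN = (
    "arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerRDSRotation"
)

secrets.rotate_secret(
    SecretId=SECRET_ID,
    RotationLambdaARN=ROTATION_LAMBDA_ARN,
    RotationRules={"AutomaticallyAfterDays": 30},  # rotate the password every 30 days
)
```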

Reference

AWS Security Blog > Rotate Amazon RDS database credentials automatically with AWS Secrets Manager

Question 833

Exam Question

A solutions architect is designing a shared storage solution for a web application that is deployed across multiple Availability Zones. The web application runs on Amazon EC2 instances that are in an Auto Scaling group. The company plans to make frequent changes to the content. The solution must have strong consistency in returning the new content as soon as the changes occur.

Which solutions meet these requirements? (Choose two.)

A. Use AWS Storage Gateway Volume Gateway Internet Small Computer Systems Interface (iSCSI) block storage that is mounted to the individual EC2 instances.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the individual EC2 instances.
C. Create a shared Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the individual EC2 instances.
D. Use AWS DataSync to perform continuous synchronization of data between EC2 hosts in the Auto Scaling group.
E. Create an Amazon S3 bucket to store the web content. Set the metadata for the Cache-Control header to no-cache. Use Amazon CloudFront to deliver the content.

Correct Answer

B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the individual EC2 instances.
E. Create an Amazon S3 bucket to store the web content. Set the metadata for the Cache-Control header to no-cache. Use Amazon CloudFront to deliver the content.

Explanation

The solutions that meet the requirements of strong consistency and returning new content as soon as changes occur in a shared storage solution for a web application deployed across multiple Availability Zones are:

B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the individual EC2 instances.
E. Create an Amazon S3 bucket to store the web content. Set the metadata for the Cache-Control header to no-cache. Use Amazon CloudFront to deliver the content.

Here’s why these options are the correct choices:

B. Amazon EFS: Amazon EFS provides a fully managed, scalable, and shared file system that can be mounted on multiple EC2 instances simultaneously. It offers strong consistency, meaning that changes made to the file system are immediately visible to all instances. This makes it suitable for a web application deployed across multiple Availability Zones.

E. Amazon S3 and CloudFront: Storing web content in an Amazon S3 bucket and delivering it through Amazon CloudFront is a highly scalable and reliable solution. By setting the metadata for the Cache-Control header to no-cache, CloudFront ensures that it retrieves the latest content from the S3 bucket, providing strong consistency. This combination allows for frequent changes to the content and immediate availability across multiple instances.

A. AWS Storage Gateway Volume Gateway iSCSI block storage is not recommended for this scenario as it provides individual EC2 instance-level access to block storage rather than shared access across instances.

C. Creating a shared Amazon EBS volume and mounting it on individual EC2 instances does not meet the requirement: an EBS volume lives in a single Availability Zone and, apart from the limited Multi-Attach feature for certain volume types, can be attached to only one instance at a time, so it cannot provide consistent shared storage across Availability Zones.

D. AWS DataSync is a data transfer service and does not provide shared storage or strong consistency capabilities.

Therefore, options B and E are the correct solutions for achieving strong consistency and returning new content as soon as changes occur in a shared storage solution for the web application.
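For option E, the no-cache behavior is simply object metadata applied when the content is uploaded. A minimal sketch with boto3 (the bucket name, key, and body are placeholders):

```python
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="web-content-example-bucket",  # hypothetical bucket name
    Key="reports/index.html",
    Body=b"<html>...</html>",
    ContentType="text/html",
    CacheControl="no-cache",  # CloudFront revalidates with the S3 origin on each request
)
```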


Question 834

Exam Question

An online gaming company is designing a game that is expected to be popular all over the world. A solutions architect needs to define an AWS Cloud architecture that supports near-real-time recording and displaying of current game statistics for each player, along with the names of the top 25 players in the world, at any given time.

Which AWS database solution and configuration should the solutions architect use to meet these requirements?

A. Use Amazon RDS for MySQL as the data store for player activity. Configure the RDS DB instance for Multi-AZ support.
B. Use Amazon DynamoDB as the data store for player activity. Configure DynamoDB Accelerator (DAX) for the player data.
C. Use Amazon DynamoDB as the data store for player activity. Configure global tables in each required AWS Region for the player data.
D. Use Amazon RDS for MySQL as the data store for player activity. Configure cross-Region read replicas in each required AWS Region based on player proximity.

Correct Answer

C. Use Amazon DynamoDB as the data store for player activity. Configure global tables in each required AWS Region for the player data.

Explanation

To meet the requirements of near-real-time recording and displaying of game statistics for each player, along with the names of the top 25 players in the world, at any given time, the solutions architect should use:

C. Use Amazon DynamoDB as the data store for player activity. Configure global tables in each required AWS Region for the player data.

Here’s why this option is the correct choice:

C. Amazon DynamoDB: DynamoDB is a fully managed NoSQL database service that provides fast and scalable performance. By configuring global tables in each required AWS Region, you can achieve low-latency access to player data from around the world. Global tables allow you to replicate the data automatically across multiple Regions, ensuring that the latest game statistics are available in near-real-time regardless of the player’s location.

A. Amazon RDS for MySQL: While RDS for MySQL can be a suitable choice for many applications, it may not provide the same level of scalability and low-latency access as DynamoDB. Additionally, configuring Multi-AZ support in RDS ensures high availability and fault tolerance but does not directly address the global nature of the game.

B. DynamoDB Accelerator (DAX): DAX is an in-memory cache that can improve the performance of read-intensive DynamoDB workloads, but it is a caching layer within a single Region; it does not replicate player data to other Regions, so it does not address the global, low-latency access this game requires.

D. Cross-Region read replicas with RDS for MySQL: While cross-Region read replicas can improve read performance and provide data redundancy, they may introduce additional replication delays and complexity. DynamoDB with global tables is a more suitable choice for real-time data access and consistency in a globally distributed game.

Therefore, option C, using Amazon DynamoDB and configuring global tables in each required AWS Region for player data, is the recommended solution for achieving near-real-time recording and displaying of game statistics for each player, along with the names of the top 25 players in the world, at any given time.
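A minimal sketch of creating the player-activity table and then adding a replica Region with boto3 (the table name, key attribute, and Regions are illustrative):

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

TABLE = "PlayerActivity"  # hypothetical table name

dynamodb.create_table(
    TableName=TABLE,
    AttributeDefinitions=[{"AttributeName": "player_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "player_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    # Streams are required so that writes can be replicated to other Regions.
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)
dynamodb.get_waiter("table_exists").wait(TableName=TABLE)

# Add a replica in Europe (global tables version 2019.11.21).
dynamodb.update_table(
    TableName=TABLE,
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)
```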

Question 835

Exam Question

A company runs a three-tier web application in a VPC across multiple Availability Zones. Amazon EC2 instances run in an Auto Scaling group for the application tier. The company needs to make an automated scaling plan that will analyze each resource’s daily and weekly historical workload trends. The configuration must scale resources appropriately according to both the forecast and live changes in utilization.

Which scaling strategy should a solutions architect recommend to meet these requirements?

A. Implement dynamic scaling with step scaling based on average CPU utilization from the EC2 instances.
B. Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking.
C. Create an automated scheduled scaling action based on the traffic patterns of the web application.
D. Set up a simple scaling policy. Increase the cooldown period based on the EC2 instance startup time.

Correct Answer

B. Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking.

Explanation

To meet the requirements of analyzing historical workload trends, forecasting, and scaling resources appropriately, a solutions architect should recommend:

B. Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking.

Here’s why this option is the correct choice:

B. Predictive Scaling: Predictive scaling uses machine learning algorithms to forecast future resource utilization based on historical patterns. It analyzes historical workload trends and automatically adjusts the scaling configuration to meet predicted demand. By enabling predictive scaling and configuring dynamic scaling with target tracking, the resources can scale proactively based on the forecasted utilization, allowing the application to handle changes in workload effectively.

A. Step Scaling: Step scaling is a scaling strategy that adjusts the capacity of a resource based on predefined thresholds. While it can be effective for scaling based on average CPU utilization, it may not provide the same level of automation and intelligence as predictive scaling, which leverages historical trends and machine learning algorithms for more accurate forecasting.

C. Scheduled Scaling: Scheduled scaling actions are based on predefined time intervals and may not take into account live changes in utilization or provide the level of automation and responsiveness required for dynamic scaling based on workload patterns.

D. Simple Scaling Policy: Simple scaling policies define static thresholds for scaling actions and may not adapt to changing workload patterns or provide the forecasted scaling capabilities required in the scenario.

Therefore, option B, enabling predictive scaling to forecast and scale and configuring dynamic scaling with target tracking, is the recommended scaling strategy to meet the requirement of analyzing historical workload trends and scaling resources appropriately based on both forecasted and live changes in utilization.
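A rough sketch of the two policies with boto3 (the Auto Scaling group name and the 50 percent CPU target are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

ASG = "web-app-tier-asg"  # hypothetical Auto Scaling group name

# Predictive scaling: forecast from daily and weekly history, then scale ahead of demand.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG,
    PolicyName="predictive-cpu",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 50.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastAndScale",
    },
)

# Target tracking: react to live changes in utilization between forecasts.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG,
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```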


Reference

AWS News Blog > New – Predictive Scaling for EC2, Powered by Machine Learning

Question 836

Exam Question

A company is running an ASP.NET MVC application on a single Amazon EC2 instance. A recent increase in application traffic is causing slow response times for users during lunch hours. The company needs to resolve this concern with the least amount of configuration.

What should a solutions architect recommend to meet these requirements?

A. Move the application to AWS Elastic Beanstalk. Configure load-based auto scaling and time-based scaling to handle scaling during lunch hours.
B. Move the application to Amazon Elastic Container Service (Amazon ECS). Create an AWS Lambda function to handle scaling during lunch hours.
C. Move the application to Amazon Elastic Container Service (Amazon ECS). Configure scheduled scaling for AWS Application Auto Scaling during lunch hours.
D. Move the application to AWS Elastic Beanstalk. Configure load-based auto scaling, and create an AWS Lambda function to handle scaling during lunch hours.

Correct Answer

A. Move the application to AWS Elastic Beanstalk. Configure load-based auto scaling and time-based scaling to handle scaling during lunch hours.

Explanation

To resolve the slow response times for users during lunch hours with the least amount of configuration, a solutions architect should recommend:

A. Move the application to AWS Elastic Beanstalk. Configure load-based auto scaling and time-based scaling to handle scaling during lunch hours.

Here’s why this option is the correct choice:

A. AWS Elastic Beanstalk: AWS Elastic Beanstalk provides an easy way to deploy and manage applications. By moving the application to Elastic Beanstalk, you can leverage its built-in load-based auto scaling feature to automatically adjust the number of instances based on the application’s traffic. Additionally, you can configure time-based scaling rules specifically for lunch hours to handle the increased traffic during that period. This allows the application to dynamically scale and handle the higher load without manual intervention.

B. Amazon ECS and AWS Lambda: While Amazon ECS is a container orchestration service that can provide scalability and manage containers, using AWS Lambda alone is not suitable for scaling an application like ASP.NET MVC. AWS Lambda is event-driven and suited for executing short-lived functions rather than managing the application’s infrastructure and handling HTTP requests.

C. Amazon ECS and scheduled scaling: While Amazon ECS can provide scalability and manage containers, configuring scheduled scaling for AWS Application Auto Scaling may not be the most efficient solution for handling dynamic traffic during lunch hours. Scheduled scaling is more appropriate for predictable, time-based scaling events rather than real-time fluctuations in application traffic.

D. AWS Elastic Beanstalk and AWS Lambda: While AWS Elastic Beanstalk can provide load-based auto scaling for the application, using AWS Lambda alone may not be the optimal solution for handling scaling during lunch hours. AWS Lambda functions are event-driven and not designed for managing infrastructure or handling HTTP requests at scale.

Therefore, option A, moving the application to AWS Elastic Beanstalk and configuring load-based auto scaling and time-based scaling, is the recommended approach. It provides a scalable and automated solution to handle the increased traffic during lunch hours with minimal configuration effort.
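As a hedged sketch, the time-based scaling rules can be supplied as Elastic Beanstalk option settings in the aws:autoscaling:scheduledaction namespace; the environment name, scheduled-action names, capacities, and cron expressions below are placeholders:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

eb.update_environment(
    EnvironmentName="aspnet-mvc-prod",  # hypothetical environment name
    OptionSettings=[
        # Scale out shortly before lunch hours...
        {"Namespace": "aws:autoscaling:scheduledaction", "ResourceName": "LunchScaleOut",
         "OptionName": "MinSize", "Value": "4"},
        {"Namespace": "aws:autoscaling:scheduledaction", "ResourceName": "LunchScaleOut",
         "OptionName": "MaxSize", "Value": "8"},
        {"Namespace": "aws:autoscaling:scheduledaction", "ResourceName": "LunchScaleOut",
         "OptionName": "Recurrence", "Value": "45 11 * * *"},
        # ...and scale back in afterward.
        {"Namespace": "aws:autoscaling:scheduledaction", "ResourceName": "LunchScaleIn",
         "OptionName": "MinSize", "Value": "1"},
        {"Namespace": "aws:autoscaling:scheduledaction", "ResourceName": "LunchScaleIn",
         "OptionName": "MaxSize", "Value": "2"},
        {"Namespace": "aws:autoscaling:scheduledaction", "ResourceName": "LunchScaleIn",
         "OptionName": "Recurrence", "Value": "15 13 * * *"},
    ],
)
```

Load-based auto scaling (for example, a CPU or request-count trigger on the environment's Auto Scaling group) continues to handle any traffic that does not follow the lunchtime pattern.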

Reference

AWS > Documentation > AWS Elastic Beanstalk > Developer Guide > Scheduled Auto Scaling actions

Question 837

Exam Question

A solutions architect needs to design a system to store client case files. The files are core company assets and are important. The number of files will grow over time. The files must be simultaneously accessible from multiple application servers that run on Amazon EC2 instances. The solution must have built-in redundancy.

Which solution meets these requirements?

A. Amazon Elastic File System (Amazon EFS)
B. Amazon Elastic Block Store (Amazon EBS)
C. Amazon S3 Glacier Deep Archive
D. AWS Backup

Correct Answer

A. Amazon Elastic File System (Amazon EFS)

Explanation

To meet the requirements of storing client case files that are important, simultaneously accessible from multiple application servers, and with built-in redundancy, the recommended solution is:

A. Amazon Elastic File System (Amazon EFS)

Amazon EFS provides a fully managed and scalable file storage service that is designed to provide concurrent access to files from multiple EC2 instances. It is suitable for scenarios where multiple application servers need to access the same set of files simultaneously. With Amazon EFS, the storage capacity automatically scales as the number of files grows over time, eliminating the need for manual capacity planning.

Amazon EFS also offers built-in redundancy by storing data across multiple Availability Zones within a region. This ensures high availability and durability of the files, as well as protection against the failure of individual servers or Availability Zones.

Option B, Amazon Elastic Block Store (Amazon EBS), provides block-level storage for individual EC2 instances and is not suitable for simultaneous access from multiple instances. It is designed to be attached to a single EC2 instance at a time.

Option C, Amazon S3 Glacier Deep Archive, is a long-term archival storage service and may not be suitable for storing actively accessed case files due to the retrieval time and costs associated with Glacier storage.

Option D, AWS Backup, is a service for managing backups of various AWS resources, but it is not specifically designed for storing and accessing client case files with simultaneous access from multiple servers.

Therefore, the most appropriate solution for storing client case files with simultaneous access and built-in redundancy is Amazon Elastic File System (Amazon EFS) (Option A).
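A minimal provisioning sketch with boto3 (the subnet IDs are placeholders; in practice you would wait for the file system to report an available state and attach appropriate security groups before creating the mount targets):

```python
import boto3

efs = boto3.client("efs")

# Hypothetical subnet IDs, one per Availability Zone used by the application servers.
SUBNET_IDS = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]

fs = efs.create_file_system(
    PerformanceMode="generalPurpose",
    Encrypted=True,
    Tags=[{"Key": "Name", "Value": "client-case-files"}],
)
fs_id = fs["FileSystemId"]

# One mount target per Availability Zone lets instances in every AZ mount the
# same file system over NFS.
for subnet_id in SUBNET_IDS:
    efs.create_mount_target(FileSystemId=fs_id, SubnetId=subnet_id)
```

Each EC2 instance can then mount the file system (for example, at /mnt/case-files) so that all application servers see the same files at the same time.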

Question 838

Exam Question

An ecommerce company is creating an application that requires a connection to a third-party payment service to process payments. The payment service needs to explicitly allow the public IP address of the server that is making the payment request. However, the company’s security policies do not allow any server to be exposed directly to the public internet.

Which solution will meet these requirements?

A. Provision an Elastic IP address. Host the application servers on Amazon EC2 instances in a private subnet. Assign the public IP address to the application servers.
B. Create a NAT gateway in a public subnet. Host the application servers on Amazon EC2 instances in a private subnet. Route payment requests through the NAT gateway.
C. Deploy an Application Load Balancer (ALB). Host the application servers on Amazon EC2 instances in a private subnet. Route the payment requests through the ALB.
D. Set up an AWS Client VPN connection to the payment service. Host the application servers on Amazon EC2 instances in a private subnet. Route the payment requests through the VPN.

Correct Answer

B. Create a NAT gateway in a public subnet. Host the application servers on Amazon EC2 instances in a private subnet. Route payment requests through the NAT gateway.

Explanation

To meet the requirement of allowing the public IP address of the server making payment requests while adhering to the company’s security policies, the recommended solution is:

B. Create a NAT gateway in a public subnet. Host the application servers on Amazon EC2 instances in a private subnet. Route payment requests through the NAT gateway.

By creating a NAT gateway in a public subnet, the application servers hosted in a private subnet can communicate with the third-party payment service using the NAT gateway’s public IP address. The NAT gateway acts as a bridge between the private subnet and the public internet, allowing outbound communication initiated by the servers in the private subnet while preventing inbound connections from the internet.

This solution ensures that the servers in the private subnet are not directly exposed to the public internet, aligning with the company’s security policies, while still allowing the payment requests to reach the third-party payment service through the NAT gateway.

Option A, provisioning an Elastic IP address and assigning it to the application servers in a private subnet, would expose the servers directly to the public internet, which contradicts the security policies.

Option C, deploying an Application Load Balancer (ALB), routes incoming requests to multiple servers but does not provide a direct solution for outbound communication from the servers to a third-party service.

Option D, setting up an AWS Client VPN connection, allows secure remote access to the VPC but is not specifically designed for outbound communication to a third-party service.

Therefore, the most suitable solution in this case is to create a NAT gateway (Option B).
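A rough sketch of wiring this up with boto3 (the subnet and route table IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical resource IDs for illustration.
PUBLIC_SUBNET_ID = "subnet-0123456789abcdef0"
PRIVATE_ROUTE_TABLE_ID = "rtb-0123456789abcdef0"

# Allocate a static public IP address and create the NAT gateway in the public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId=PUBLIC_SUBNET_ID,
    AllocationId=eip["AllocationId"],
)
nat_gateway_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gateway_id])

# Send internet-bound traffic from the private subnet through the NAT gateway.
ec2.create_route(
    RouteTableId=PRIVATE_ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gateway_id,
)
```

The Elastic IP address attached to the NAT gateway is the single, stable public IP that the payment service can add to its allow list.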

Question 839

Exam Question

A company sells datasets to customers who do research in artificial intelligence and machine learning (AI/ML). The datasets are large, formatted files that are stored in an Amazon S3 bucket in the us-east-1 Region. The company hosts a web application that the customers use to purchase access to a given dataset. The web application is deployed on multiple Amazon EC2 instances behind an Application Load Balancer. After a purchase is made, customers receive an S3 signed URL that allows access to the files. The customers are distributed across North America and Europe. The company wants to reduce the cost that is associated with data transfers and wants to maintain or improve performance.

What should a solutions architect do to meet these requirements?

A. Configure S3 Transfer Acceleration on the existing S3 bucket. Direct customer requests to the S3 Transfer Acceleration endpoint. Continue to use S3 signed URLs for access control.
B. Deploy an Amazon CloudFront distribution with the existing S3 bucket as the origin. Direct customer requests to the CloudFront URL. Switch to CloudFront signed URLs for access control.
C. Set up a second S3 bucket in the eu-central-1 Region with S3 Cross-Region Replication between the buckets. Direct customer requests to the closest Region. Continue to use S3 signed URLs for access control.
D. Modify the web application to enable streaming of the datasets to end users. Configure the web application to read the data from the existing S3 bucket. Implement access control directly in the application.

Correct Answer

B. Deploy an Amazon CloudFront distribution with the existing S3 bucket as the origin. Direct customer requests to the CloudFront URL. Switch to CloudFront signed URLs for access control.

Explanation

To reduce the cost associated with data transfers and improve performance for customers distributed across North America and Europe, the recommended solution is:

B. Deploy an Amazon CloudFront distribution with the existing S3 bucket as the origin. Direct customer requests to the CloudFront URL. Switch to CloudFront signed URLs for access control.

By deploying an Amazon CloudFront distribution, the static files stored in the S3 bucket can be cached and delivered to customers from edge locations that are geographically closer to them. This reduces the latency and improves performance by serving the content from the nearest edge location.

Switching to CloudFront signed URLs for access control provides secure access to the files. CloudFront signed URLs can be generated with time-limited access and additional security features, ensuring that only authorized customers can access the datasets.

Option A, configuring S3 Transfer Acceleration, can speed up transfers over long distances, but it adds a per-gigabyte acceleration charge on top of the normal data transfer rates, so it would increase rather than reduce data transfer costs, and it does not cache content close to customers the way a content delivery network such as CloudFront does.

Option C, setting up a second S3 bucket with Cross-Region Replication, can improve data availability and durability but does not directly address the goal of reducing data transfer costs and improving performance.

Option D, modifying the web application to enable streaming and implementing access control directly, would require substantial changes to the application architecture and does not specifically address the cost and performance requirements.

Therefore, the most suitable solution in this case is to deploy an Amazon CloudFront distribution (Option B) to improve performance and reduce data transfer costs while using CloudFront signed URLs for secure access control.
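For illustration, a CloudFront signed URL can be generated with botocore's CloudFrontSigner together with the cryptography package; the key pair ID, private key file, distribution domain, and object path below are placeholders:

```python
import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = "K2JCJMDEHXQW5F"            # hypothetical CloudFront public key ID
PRIVATE_KEY_PATH = "cloudfront_key.pem"   # hypothetical private key file


def rsa_signer(message: bytes) -> bytes:
    """Sign the CloudFront policy with the distribution's private key."""
    with open(PRIVATE_KEY_PATH, "rb") as key_file:
        private_key = serialization.load_pem_private_key(key_file.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# The URL stops working one hour after it is issued.
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/datasets/sample-dataset.parquet",
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(signed_url)
```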

Question 840

Exam Question

A company is planning to migrate a TCP-based application into the company’s VPC. The application is publicly accessible on a nonstandard TCP port through a hardware appliance in the company’s data center. This public endpoint can process up to 3 million requests per second with low latency. The company requires the same level of performance for the new public endpoint in AWS.

What should a solutions architect recommend to meet this requirement?

A. Deploy a Network Load Balancer (NLB). Configure the NLB to be publicly accessible over the TCP port that the application requires.
B. Deploy an Application Load Balancer (ALB). Configure the ALB to be publicly accessible over the TCP port that the application requires.
C. Deploy an Amazon CloudFront distribution that listens on the TCP port that the application requires. Use an Application Load Balancer as the origin.
D. Deploy an Amazon API Gateway API that is configured with the TCP port that the application requires. Configure AWS Lambda functions with provisioned concurrency to process the requests.

Correct Answer

A. Deploy a Network Load Balancer (NLB). Configure the NLB to be publicly accessible over the TCP port that the application requires.

Explanation

To meet the requirement of migrating a TCP-based application with the same level of performance, a solutions architect should recommend:

A. Deploy a Network Load Balancer (NLB) and configure it to be publicly accessible over the TCP port that the application requires.

Network Load Balancer (NLB) is designed to handle high levels of incoming traffic with low latency, making it suitable for high-performance TCP-based applications. It provides ultra-low latency at the connection level, making it ideal for use cases that require the same level of performance as the hardware appliance in the company’s data center.

By deploying an NLB, incoming requests can be evenly distributed across multiple instances of the application in the VPC, ensuring scalability, fault tolerance, and high performance. The NLB can be configured to listen on the same nonstandard TCP port that the application requires, allowing for a seamless migration.

Option B, deploying an Application Load Balancer (ALB), is not recommended in this scenario as ALB is primarily designed for HTTP and HTTPS traffic and may not provide the same level of performance and low latency as NLB for TCP-based applications.

Option C, deploying an Amazon CloudFront distribution with an ALB as the origin, is not suitable for directly migrating a TCP-based application as CloudFront primarily focuses on delivering HTTP, HTTPS, and some streaming protocols.

Option D, using Amazon API Gateway with AWS Lambda, is primarily for handling API requests using REST or WebSocket protocols, not for TCP-based applications.

Therefore, the most appropriate choice is to deploy a Network Load Balancer (NLB) and configure it to be publicly accessible over the TCP port required by the application.
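A minimal sketch of provisioning the NLB, target group, and TCP listener with boto3 (the subnet IDs, VPC ID, and port number are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical values for illustration.
SUBNETS = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]
VPC_ID = "vpc-0123456789abcdef0"
APP_PORT = 7777  # the application's nonstandard TCP port

nlb = elbv2.create_load_balancer(
    Name="tcp-app-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=SUBNETS,
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

target_group = elbv2.create_target_group(
    Name="tcp-app-targets",
    Protocol="TCP",
    Port=APP_PORT,
    VpcId=VPC_ID,
    TargetType="instance",
)
tg_arn = target_group["TargetGroups"][0]["TargetGroupArn"]

elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TCP",
    Port=APP_PORT,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```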