
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 14

The latest AWS Certified Solutions Architect – Associate (SAA-C03) practice exam questions and answers (Q&A) are available free to help you prepare for the SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.

Question 851

Exam Question

A solutions architect at a company is designing the architecture for a two-tiered web application. The web application is composed of an internet-facing Application Load Balancer (ALB) that forwards traffic to an Auto Scaling group of Amazon EC2 instances. The EC2 instances must be able to access a database that runs on Amazon RDS. The company has requested a defense-in-depth approach to the network layout. The company does not want to rely solely on security groups or network ACLs. Only the minimum resources that are necessary should be routable from the internet.

Which network design should the solutions architect recommend to meet these requirements?

A. Place the ALB, EC2 instances, and RDS database in private subnets.
B. Place the ALB in public subnets. Place the EC2 instances and RDS database in private subnets.
C. Place the ALB and EC2 instances in public subnets. Place the RDS database in private subnets.
D. Place the ALB outside the VPC. Place the EC2 instances and RDS database in private subnets.

Correct Answer

B. Place the ALB in public subnets. Place the EC2 instances and RDS database in private subnets.

Explanation

To meet the given requirements of a defense-in-depth approach and minimizing routable resources from the internet, the solutions architect should recommend the following network design:

B. Place the ALB in public subnets. Place the EC2 instances and RDS database in private subnets.

Option B aligns with the principle of least privilege by exposing only the Application Load Balancer (ALB) to the public internet. Placed in public subnets, the ALB can receive traffic from the internet and forward it to the backend EC2 instances in the private subnets.

The EC2 instances and the RDS database are placed in private subnets, which are not directly accessible from the internet. This ensures that only the necessary resources are exposed to the public internet, providing a defense-in-depth approach.

Option A suggests placing the ALB, EC2 instances, and RDS database in private subnets. While this approach secures the EC2 instances and the RDS database, it does not allow internet traffic to reach the ALB, making it inaccessible to users.

Option C suggests placing the ALB and EC2 instances in public subnets and the RDS database in private subnets. This design exposes both the ALB and EC2 instances to the internet, which contradicts the requirement of minimizing routable resources from the internet.

Option D suggests placing the ALB outside the VPC. An ALB must be deployed inside a VPC, so this design is not possible; it would also prevent the load balancer from communicating with the EC2 instances and the RDS database over private networking.

Therefore, option B is the correct choice to meet the requirements while providing a defense-in-depth network layout.

Review the recommended security group settings for Application Load Balancers or Classic Load Balancers; a minimal sketch of these rules follows the list. Be sure that:

  • Your load balancer has open listener ports and security groups that allow access to the ports.
  • The security group for your instance allows traffic on instance listener ports and health check ports from the load balancer.
  • The load balancer security group allows inbound traffic from the client.
  • The load balancer security group allows outbound traffic to the instances and the health check port.
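
For illustration, here is a minimal boto3 sketch of the two security groups described above. The security group IDs and ports are assumptions for the example, not values from the question.

```python
import boto3

ec2 = boto3.client("ec2")

# ALB security group: allow inbound HTTPS from clients on the internet.
ec2.authorize_security_group_ingress(
    GroupId="sg-alb-EXAMPLE",  # hypothetical ALB security group
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "Client traffic"}],
    }],
)

# Instance security group: allow the listener/health check port only from
# the ALB's security group, never directly from the internet.
ec2.authorize_security_group_ingress(
    GroupId="sg-instances-EXAMPLE",  # hypothetical instance security group
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "UserIdGroupPairs": [{
            "GroupId": "sg-alb-EXAMPLE",
            "Description": "Traffic from the ALB only",
        }],
    }],
)
```

Referencing the ALB's security group by ID rather than by CIDR range keeps the instance rules correct even as the load balancer's IP addresses change.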

Reference

How do I attach backend instances with private IP addresses to my internet-facing load balancer in ELB?

Question 852

Exam Question

A company is building an online multiplayer game. The game communicates by using UDP, and low latency between the client and the backend is important. The backend is hosted on Amazon EC2 instances that can be deployed to multiple AWS Regions to meet demand. The company needs the game to be highly available so that users around the world can access the game at all times.

What should a solutions architect do to meet these requirements?

A. Deploy Amazon CloudFront to support the global traffic. Configure CloudFront with an origin group to allow access to EC2 instances in multiple Regions.
B. Deploy an Application Load Balancer in one Region to distribute traffic to EC2 instances in each Region that hosts the game’s backend instances.
C. Deploy Amazon CloudFront to support an origin access identity (OAI). Associate the OAI with EC2 instances in each Region to support global traffic.
D. Deploy a Network Load Balancer in each Region to distribute the traffic. Use AWS Global Accelerator to route traffic to the correct Regional endpoint.

Correct Answer

D. Deploy a Network Load Balancer in each Region to distribute the traffic. Use AWS Global Accelerator to route traffic to the correct Regional endpoint.

Explanation

To meet the requirements of hosting an online multiplayer game with low latency and high availability across multiple AWS Regions, a solutions architect should recommend the following approach:

D. Deploy a Network Load Balancer in each Region to distribute the traffic. Use AWS Global Accelerator to route traffic to the correct Regional endpoint.

Option D provides the best solution for achieving low latency and high availability for the online multiplayer game. By deploying a Network Load Balancer (NLB) in each AWS Region, incoming UDP traffic can be evenly distributed to the backend EC2 instances within that Region. NLB is specifically designed for handling traffic with very low latency and high throughput, making it suitable for real-time applications like online games.

To route traffic from users around the world to the appropriate Regional endpoint, AWS Global Accelerator can be utilized. Global Accelerator uses the AWS global network infrastructure to optimize the network path and direct traffic to the nearest available Regional endpoint based on latency and health checks. This ensures that players can access the game from any location with reduced latency and improved availability.
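
As a sketch of how these pieces fit together, the boto3 example below creates an accelerator with a UDP listener and one endpoint group per Region, each pointing at that Region's NLB. The game port, account ID, and NLB ARNs are assumptions; the Global Accelerator API itself is served from the us-west-2 Region.

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="game-accelerator", Enabled=True)

listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 9000, "ToPort": 9000}],  # assumed game port
)

# One endpoint group per Region, each fronted by that Region's NLB.
nlb_arns = {  # hypothetical NLB ARNs
    "us-east-1": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/game/EXAMPLE1",
    "eu-west-1": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/game/EXAMPLE2",
}
for region, nlb_arn in nlb_arns.items():
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
    )
```

Global Accelerator then advertises two static anycast IP addresses, so game clients connect to a single, stable address while traffic enters the AWS network at the nearest edge location.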

Options A, B, and C do not adequately address the requirements for low latency and high availability across multiple AWS Regions. CloudFront (options A and C) is primarily a content delivery service for HTTP(S) and does not proxy arbitrary UDP game traffic to EC2 origins. An Application Load Balancer (option B) supports only HTTP and HTTPS, so it cannot carry UDP-based game communications, and a load balancer deployed in a single Region cannot provide multi-Region high availability.

Therefore, option D is the most suitable choice for achieving low latency, high availability, and global accessibility for the online multiplayer game.

Reference

AWS > Documentation > Amazon CloudFront > Developer Guide > Restricting access to an Amazon S3 origin

Question 853

Exam Question

A company is designing an application. The application uses an AWS Lambda function to receive information through Amazon API Gateway and to store the information in an Amazon Aurora PostgreSQL database. During the proof-of-concept stage, the company has to increase the Lambda quotas significantly to handle the high volumes of data that the company needs to load into the database. A solutions architect must recommend a new design to improve scalability and minimize the configuration effort.

Which solution will meet these requirements?

A. Refactor the Lambda function code to Apache Tomcat code that runs on Amazon EC2 instances. Connect the database by using native Java Database Connectivity (JDBC) drivers.
B. Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster. Use the DAX client SDK to point the existing DynamoDB API calls at the DAX cluster.
C. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using Amazon Simple Notification Service (Amazon SNS).
D. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.

Correct Answer

D. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.

Explanation

To improve scalability and minimize configuration effort for an application that uses an AWS Lambda function to receive information through Amazon API Gateway and store it in an Amazon Aurora PostgreSQL database, a solutions architect should recommend the following approach:

D. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.

Option D provides a scalable and efficient solution by using two separate Lambda functions and an Amazon SQS queue for integration. This approach decouples the receiving of information and the loading into the database, allowing each function to scale independently based on demand.

The first Lambda function, configured to receive the information, can handle the initial processing, validation, and transformation of the data received through API Gateway. It then places the information onto an Amazon SQS queue.

The second Lambda function, configured to load the information into the database, is triggered by the SQS queue. It retrieves messages from the queue and performs the database insertion or other relevant operations.

This design allows for increased scalability as multiple instances of each Lambda function can be automatically scaled based on the number of messages in the SQS queue. It also helps minimize configuration effort as each function can be independently managed and scaled as needed.
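
A minimal sketch of the two Lambda handlers follows. The queue URL environment variable and the Aurora helper function are assumptions for illustration, not part of the question.

```python
import json
import os

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["QUEUE_URL"]  # assumed environment variable

def receive_handler(event, context):
    """Invoked by API Gateway: validate the payload and enqueue it."""
    body = json.loads(event["body"])
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(body))
    return {"statusCode": 202, "body": json.dumps({"status": "queued"})}

def load_handler(event, context):
    """Invoked by an SQS event source mapping: write each record to Aurora."""
    for record in event["Records"]:
        item = json.loads(record["body"])
        insert_into_aurora(item)  # hypothetical helper wrapping a DB driver

def insert_into_aurora(item):
    # Placeholder: a real implementation would use a PostgreSQL driver
    # (for example psycopg2) or the RDS Data API, with pooled connections.
    pass
```

Because the SQS event source mapping scales the number of concurrent consumer invocations with queue depth, the loading function absorbs traffic spikes without manual quota tuning.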

Options A, B, and C do not address the requirements as effectively as option D:

  • Option A suggests refactoring the Lambda function to use Apache Tomcat on EC2 instances, which adds complexity and operational overhead, deviating from the serverless and event-driven nature of Lambda functions.
  • Option B suggests changing the database platform to DynamoDB and using DynamoDB Accelerator (DAX), which may require significant changes to the application’s data model and access patterns, as well as additional development effort.
  • Option C suggests integrating the two Lambda functions with Amazon SNS. SNS is designed for pub/sub fan-out and pushes messages to subscribers immediately, so it provides no queue-based buffering or backpressure; under high volume, spikes would hit the database-loading function directly instead of draining from a queue at a controlled rate, as SQS allows.

Therefore, option D is the most suitable choice for improving scalability and minimizing configuration effort in this scenario.

Reference

Amazon DynamoDB Accelerator (DAX)

Question 854

Exam Question

A company recently launched a new service that involves medical images. The company scans the images and sends them from its on-premises data center through an AWS Direct Connect connection to Amazon EC2 instances. After processing is complete, the images are stored in an Amazon S3 bucket. A company requirement states that the EC2 instances cannot be accessible through the internet. The EC2 instances run in a private subnet, which has a default route back to the on-premises data center for outbound internet access. Usage of the new service is increasing rapidly. A solutions architect must recommend a solution that meets the company’s requirements and reduces the Direct Connect charges.

Which solution accomplishes these goals MOST cost-effectively?

A. Configure a VPC endpoint for Amazon S3. Add an entry to the private subnet’s route table for the S3 endpoint.
B. Configure a NAT gateway in a public subnet. Configure the private subnet’s route table to use the NAT gateway.
C. Configure Amazon S3 as a file system mount point on the EC2 instances. Access Amazon S3 through the mount.
D. Move the EC2 instances into a public subnet. Configure the public subnet route table to point to an internet gateway.

Correct Answer

A. Configure a VPC endpoint for Amazon S3. Add an entry to the private subnet’s route table for the S3 endpoint.

Explanation

To meet the company’s requirements of not allowing internet access to the Amazon EC2 instances running in a private subnet while reducing the AWS Direct Connect charges, the most cost-effective solution would be:

A. Configure a VPC endpoint for Amazon S3. Add an entry to the private subnet’s route table for the S3 endpoint.

By configuring a VPC endpoint for Amazon S3 in the private subnet, you can securely access Amazon S3 without the need for internet access. A VPC endpoint allows private connectivity to S3 within your VPC, leveraging AWS’s private network infrastructure.

By adding an entry to the private subnet’s route table for the S3 endpoint, traffic destined for S3 is routed through the VPC gateway endpoint instead of over the Direct Connect connection back through the on-premises network. This reduces the Direct Connect data transfer charges, and gateway endpoints for Amazon S3 incur no additional cost.
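
As a sketch, creating the gateway endpoint and associating it with the private subnet’s route table is a single API call. The VPC ID, route table ID, and Region are assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A gateway endpoint adds a route for the S3 prefix list to the given
# route tables, so S3 traffic stays on the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-EXAMPLE",                    # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-private-EXAMPLE"],  # hypothetical route table
)
```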

Option B, configuring a NAT gateway in a public subnet, would enable internet access for the EC2 instances, which is against the company’s requirements and would incur additional costs for NAT gateway data transfer.

Option C, configuring Amazon S3 as a file system mount point on the EC2 instances, does not address the requirement of restricting internet access to the EC2 instances and would not reduce the Direct Connect charges.

Option D, moving the EC2 instances into a public subnet and configuring the route table to point to an internet gateway, also contradicts the company’s requirement of not allowing internet access to the EC2 instances.

Therefore, option A is the most suitable and cost-effective solution for securely accessing Amazon S3 while adhering to the company’s requirements and reducing Direct Connect charges.

Question 855

Exam Question

A company is developing a serverless web application that gives users the ability to interact with real-time analytics from online games. The data from the games must be streamed in real time. The company needs a durable, low-latency database option for user data. The company does not know how many users will use the application. Any design considerations must provide response times of single-digit milliseconds as the application scales.

Which combination of AWS services will meet these requirements? (Choose two.)

A. Amazon CloudFront
B. Amazon DynamoDB
C. Amazon Kinesis
D. Amazon RDS
E. AWS Global Accelerator

Correct Answer

B. Amazon DynamoDB
C. Amazon Kinesis

Explanation

For background on AWS low-latency data options: for applications that need microsecond latency and support for millions of requests per second, Amazon ElastiCache is a fully managed, in-memory caching service, and Amazon MemoryDB for Redis is a Redis-compatible, durable, in-memory database service that delivers microsecond read latency and single-digit millisecond write latency. When you need consistent millisecond latency with Amazon DynamoDB, you can add DynamoDB Accelerator (DAX), an in-memory cache.

On scalability: ElastiCache can scale out, scale in, and scale up to meet fluctuating application demands. MemoryDB scales seamlessly from a few gigabytes to over one hundred terabytes of storage per cluster. With DynamoDB, you can build applications with virtually unlimited throughput and storage.

To meet the requirements of a serverless web application with real-time analytics and a durable, low-latency database for user data, the following combination of AWS services would be suitable:

B. Amazon DynamoDB: DynamoDB is a fully managed NoSQL database service that provides low-latency access to data at any scale. It is designed for high-performance applications and can handle millions of requests per second with single-digit millisecond latency. DynamoDB is a durable and highly available database option that can scale automatically to accommodate any number of users.

C. Amazon Kinesis: Kinesis is a real-time data streaming service that can be used to ingest, process, and analyze streaming data at scale. It can handle large volumes of data with low latency and provides capabilities for real-time analytics. With Kinesis, you can stream data from online games to your application in real-time, enabling real-time analytics and interactions with users.
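
A minimal sketch of how the two services could work together is shown below. The stream name, table name, and item schema are assumptions for illustration.

```python
import json

import boto3

kinesis = boto3.client("kinesis")
table = boto3.resource("dynamodb").Table("GameUsers")  # assumed table

def record_event(user_id: str, event: dict) -> None:
    # Real-time ingest: partitioning by user keeps each player's events ordered.
    kinesis.put_record(
        StreamName="game-events",  # assumed stream
        PartitionKey=user_id,
        Data=json.dumps(event).encode(),
    )
    # Durable user state with single-digit-millisecond reads and writes.
    table.put_item(Item={"user_id": user_id, **event})
```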

Option A (Amazon CloudFront) is a content delivery network (CDN) service that improves the performance and scalability of web applications by caching content closer to the users. While CloudFront can improve the delivery of static content, it is not directly related to providing a durable, low-latency database option for user data.

Option D (Amazon RDS) is a managed relational database service. While RDS offers durability, it does not scale seamlessly to an unknown number of users and cannot guarantee single-digit millisecond response times at that scale. DynamoDB, as a NoSQL database, is better suited for this low-latency, high-scale workload.

Option E (AWS Global Accelerator) is a service that improves the availability and performance of applications by routing traffic through the AWS global network infrastructure. While Global Accelerator can improve the network performance, it is not directly related to providing a durable, low-latency database option.

Therefore, the combination of services that would meet the requirements is Amazon DynamoDB and Amazon Kinesis.

Question 856

Exam Question

A company is building an application that consists of several microservices. The company has decided to use container technologies to deploy its software on AWS. The company needs a solution that minimizes the amount of ongoing effort for maintenance and scaling. The company cannot manage additional infrastructure.

Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)

A. Deploy an Amazon Elastic Container Service (Amazon ECS) cluster.
B. Deploy the Kubernetes control plane on Amazon EC2 instances that span multiple Availability Zones.
C. Deploy an Amazon Elastic Container Service (Amazon ECS) service with an Amazon EC2 launch type. Specify a desired task number level of greater than or equal to 2.
D. Deploy an Amazon Elastic Container Service (Amazon ECS) service with a Fargate launch type. Specify a desired task number level of greater than or equal to 2.
E. Deploy Kubernetes worker nodes on Amazon EC2 instances that span multiple Availability Zones. Create a deployment that specifies two or more replicas for each microservice.

Correct Answer

A. Deploy an Amazon Elastic Container Service (Amazon ECS) cluster.
D. Deploy an Amazon Elastic Container Service (Amazon ECS) service with a Fargate launch type. Specify a desired task number level of greater than or equal to 2.

Explanation

To meet the requirements of minimizing ongoing maintenance and scaling efforts while using container technologies on AWS, the following combination of actions should be taken:

A. Deploy an Amazon Elastic Container Service (Amazon ECS) cluster: ECS is a fully managed container orchestration service that simplifies the deployment, management, and scaling of containerized applications. By deploying an ECS cluster, the underlying infrastructure and cluster management tasks are abstracted, reducing the ongoing maintenance effort.

D. Deploy an Amazon Elastic Container Service (Amazon ECS) service with a Fargate launch type: Fargate is a serverless compute engine for containers provided by ECS. With Fargate, you don’t need to provision or manage the underlying infrastructure, allowing you to focus solely on deploying and scaling your containers. By using Fargate as the launch type for your ECS service, you eliminate the need to manage additional infrastructure and further reduce maintenance efforts.
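
As an illustrative sketch, the boto3 call below creates such a service. The cluster name, task definition, subnets, and security group are assumptions.

```python
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="microservices",       # assumed ECS cluster
    serviceName="orders-service",
    taskDefinition="orders:1",     # assumed registered task definition
    launchType="FARGATE",          # no EC2 instances to manage
    desiredCount=2,                # at least two tasks for availability
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-EXAMPLE1", "subnet-EXAMPLE2"],
            "securityGroups": ["sg-EXAMPLE"],
        }
    },
)
```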

Option B, deploying the Kubernetes control plane on EC2 instances, would introduce additional management and maintenance overhead as you would need to manage the EC2 instances hosting the control plane and handle the associated scaling and availability considerations.

Option C, deploying an ECS service with an EC2 launch type, still requires managing EC2 instances and their scaling, which goes against the goal of minimizing ongoing maintenance and scaling efforts.

Option E, deploying Kubernetes worker nodes on EC2 instances, introduces additional management complexity by having to manage the Kubernetes infrastructure and worker nodes.

Therefore, the recommended combination of actions is to deploy an Amazon ECS cluster and an Amazon ECS service with a Fargate launch type. This allows you to leverage the fully managed capabilities of ECS and Fargate, reducing maintenance and scaling efforts.

Question 857

Exam Question

A solutions architect is designing the cloud architecture for a new application that is being deployed on AWS. The application’s users will interactively download and upload files. Files that are more than 90 days old will be accessed less frequently than newer files, but all files need to be instantly available. The solutions architect must ensure that the application can scale to store petabytes of data with maximum durability.

Which solution meets these requirements?

A. Store the files in Amazon S3 Standard. Create an S3 Lifecycle policy that moves objects that are more than 90 days old to S3 Glacier.
B. Store the files in Amazon S3 Standard. Create an S3 Lifecycle policy that moves objects that are more than 90 days old to S3 Standard-Infrequent Access (S3 Standard-IA).
C. Store the files in Amazon Elastic Block Store (Amazon EBS) volumes. Schedule snapshots of the volumes. Use the snapshots to archive data that is more than 90 days old.
D. Store the files in RAID-striped Amazon Elastic Block Store (Amazon EBS) volumes. Schedule snapshots of the volumes. Use the snapshots to archive data that is more than 90 days old.

Correct Answer

B. Store the files in Amazon S3 Standard. Create an S3 Lifecycle policy that moves objects that are more than 90 days old to S3 Standard-Infrequent Access (S3 Standard-IA).

Explanation

To store petabytes of data with maximum durability while keeping every file instantly available, the solutions architect should choose option B: store the files in Amazon S3 Standard and create an S3 Lifecycle policy that moves objects that are more than 90 days old to S3 Standard-Infrequent Access (S3 Standard-IA).

Amazon S3 scales to petabytes and beyond and is designed for 99.999999999% (11 nines) of durability. Files that users interactively upload and download are well served by S3 Standard, and the Lifecycle transition lowers storage costs for files older than 90 days while S3 Standard-IA still provides millisecond first-byte latency, so those files remain instantly available.

Option A fails the instant-availability requirement because objects in S3 Glacier must be restored before they can be accessed. Options C and D use Amazon EBS volumes, which are attached to individual instances, are limited in size, and archive data through snapshots that are not instantly accessible, so they cannot meet the scale, durability, and availability requirements.

Therefore, the solution that meets these requirements is B.
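
For illustration, a minimal boto3 sketch of such a Lifecycle rule follows; the bucket name is an assumption.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="app-files-EXAMPLE",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "standard-ia-after-90-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Transitions": [{"Days": 90, "StorageClass": "STANDARD_IA"}],
        }]
    },
)
```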

Reference

AWS > Documentation > Amazon Simple Storage Service (S3) > User Guide > Examples of S3 Lifecycle configuration

Question 858

Exam Question

A company hosts a multi-tier web application that uses an Amazon Aurora MySQL DB cluster for storage. The application tier is hosted on Amazon EC2 instances. The company’s IT security guidelines mandate that the database credentials be encrypted and rotated every 14 days.

What should a solutions architect do to meet this requirement with the LEAST operational effort?

A. Create a new AWS Key Management Service (AWS KMS) encryption key. Use AWS Secrets Manager to create a new secret that uses the KMS key with the appropriate credentials. Associate the secret with the Aurora DB cluster. Configure a custom rotation period of 14 days.
B. Create two parameters in AWS Systems Manager Parameter Store: one for the user name as a string parameter and one that uses the SecureString type for the password. Select AWS Key Management Service (AWS KMS) encryption for the password parameter, and load these parameters in the application tier. Implement an AWS Lambda function that rotates the password every 14 days.
C. Store a file that contains the credentials in an AWS Key Management Service (AWS KMS) encrypted Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system in all EC2 instances of the application tier. Restrict the access to the file on the file system so that the application can read the file and that only super users can modify the file. Implement an AWS Lambda function that rotates the key in Aurora every 14 days and writes new credentials into the file.
D. Store a file that contains the credentials in an AWS Key Management Service (AWS KMS) encrypted Amazon S3 bucket that the application uses to load the credentials. Download the file to the application regularly to ensure that the correct credentials are used. Implement an AWS Lambda function that rotates the Aurora credentials every 14 days and uploads these credentials to the file in the S3 bucket.

Correct Answer

B. Create two parameters in AWS Systems Manager Parameter Store: one for the user name as a string parameter and one that uses the SecureString type for the password. Select AWS Key Management Service (AWS KMS) encryption for the password parameter, and load these parameters in the application tier. Implement an AWS Lambda function that rotates the password every 14 days.

Explanation

To meet the requirement of encrypting and rotating the database credentials every 14 days with the least operational effort, the recommended solution is:

B. Create two parameters in AWS Systems Manager Parameter Store: one for the user name as a string parameter and one that uses the SecureString type for the password. Select AWS Key Management Service (AWS KMS) encryption for the password parameter and load these parameters in the application tier. Implement an AWS Lambda function that rotates the password every 14 days.

  • AWS Systems Manager Parameter Store provides a secure and centralized location for storing sensitive data, such as database credentials.
  • By creating two parameters, one for the user name and another for the password, you can securely store the credentials in Parameter Store.
  • Selecting AWS KMS encryption for the password parameter ensures that the credentials are encrypted using AWS KMS, providing an additional layer of security.
  • Loading the credentials from Parameter Store in the application tier allows the application to access the credentials securely.
  • Implementing an AWS Lambda function to rotate the password every 14 days automates the process of credential rotation, reducing operational effort.
  • With this approach, the application can retrieve the updated credentials from Parameter Store during the rotation process, ensuring uninterrupted access to the database.
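
A minimal sketch of such a rotation function follows, assuming hypothetical parameter names, a hypothetical KMS key alias, and a placeholder for the SQL that changes the database password. Scheduling it with an Amazon EventBridge rule such as rate(14 days) completes the 14-day rotation.

```python
import secrets

import boto3

ssm = boto3.client("ssm")

def handler(event, context):
    new_password = secrets.token_urlsafe(32)
    # Change the Aurora user's password first (hypothetical helper).
    update_aurora_password("app_user", new_password)
    # Then publish the new value, encrypted with KMS, for the application.
    ssm.put_parameter(
        Name="/app/db/password",   # assumed parameter name
        Value=new_password,
        Type="SecureString",
        KeyId="alias/app-db-key",  # assumed customer managed key
        Overwrite=True,
    )

def update_aurora_password(user: str, password: str) -> None:
    # Placeholder: run ALTER USER ... WITH PASSWORD via a PostgreSQL driver.
    pass
```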

Option A, using AWS Secrets Manager, can also meet the requirements. However, it involves more configuration steps and additional complexity compared to using Parameter Store.

Options C and D, involving storing credentials in encrypted file systems (EFS or S3) and implementing custom rotation processes, introduce unnecessary complexity and operational overhead.

Therefore, the recommended solution is to use AWS Systems Manager Parameter Store to store the credentials, implement an AWS Lambda function for credential rotation, and load the credentials in the application tier.

Reference

AWS > Documentation > AWS Systems Manager > User Guide > AWS Systems Manager Parameter Store

Question 859

Exam Question

A company is deploying an application that processes large quantities of data in parallel. The company plans to use Amazon EC2 instances for the workload. The network architecture must be configurable to prevent groups of nodes from sharing the same underlying hardware.

Which networking solution meets these requirements?

A. Run the EC2 instances in a spread placement group.
B. Group the EC2 instances in separate accounts.
C. Configure the EC2 instances with dedicated tenancy.
D. Configure the EC2 instances with shared tenancy.

Correct Answer

A. Run the EC2 instances in a spread placement group.

Explanation

The networking solution that meets the requirement of preventing groups of nodes from sharing the same underlying hardware is:

A. Run the EC2 instances in a spread placement group.

  • A spread placement group is a placement strategy in Amazon EC2 that spreads instances across underlying hardware to minimize the risk of simultaneous failures. It ensures that EC2 instances are deployed on separate underlying hardware.
  • By running the EC2 instances in a spread placement group, you can prevent groups of nodes from sharing the same hardware, providing isolation and minimizing the impact of hardware failures.
  • This networking solution allows for configurable network architecture while ensuring that instances are distributed across different hardware resources.
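
A brief boto3 sketch follows; the AMI ID and instance type are assumptions.

```python
import boto3

ec2 = boto3.client("ec2")

# A spread placement group places each instance on distinct underlying hardware.
ec2.create_placement_group(GroupName="parallel-workers", Strategy="spread")

ec2.run_instances(
    ImageId="ami-EXAMPLE",    # hypothetical AMI
    InstanceType="c5.large",  # assumed instance type
    MinCount=5,
    MaxCount=5,               # spread groups allow up to 7 running instances per AZ
    Placement={"GroupName": "parallel-workers"},
)
```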

Option B, grouping the EC2 instances in separate accounts, does not provide control over the underlying hardware and is not a direct solution to prevent sharing of hardware resources.

Option C, configuring the EC2 instances with dedicated tenancy, ensures that instances run on hardware dedicated to a specific AWS account but does not guarantee separation between groups of instances within the same account.

Option D, configuring the EC2 instances with shared tenancy, allows instances to share the same hardware, which does not meet the requirement of preventing groups of nodes from sharing underlying hardware.

Therefore, the recommended networking solution is to run the EC2 instances in a spread placement group.


Reference

AWS Compute Blog > Using partition placement groups for large distributed and replicated workloads in Amazon EC2

Question 860

Exam Question

A company needs to provide its employees with secure access to confidential and sensitive files. The company wants to ensure that the files can be accessed only by authorized users. The files must be downloaded securely to the employees’ devices. The files are stored in an on-premises Windows file server. However, due to an increase in remote usage, the file server is running out of capacity.

Which solution will meet these requirements?

A. Migrate the file server to an Amazon EC2 instance in a public subnet. Configure the security group to limit inbound traffic to the employees’ IP addresses.
B. Migrate the files to an Amazon FSx for Windows File Server file system. Integrate the Amazon FSx file system with the on-premises Active Directory. Configure AWS Client VPN.
C. Migrate the files to Amazon S3, and create a private VPC endpoint. Create a signed URL to allow download.
D. Migrate the files to Amazon S3, and create a public VPC endpoint. Allow employees to sign on with AWS Single Sign-On.

Correct Answer

B. Migrate the files to an Amazon FSx for Windows File Server file system. Integrate the Amazon FSx file system with the on-premises Active Directory. Configure AWS Client VPN.

Explanation

The solution that meets the requirements of providing secure access to confidential and sensitive files, ensuring authorized user access, and securely downloading files to employees’ devices while addressing the capacity issue is:

B. Migrate the files to an Amazon FSx for Windows File Server file system. Integrate the Amazon FSx file system with the on-premises Active Directory. Configure AWS Client VPN.

  • Amazon FSx for Windows File Server is a fully managed, highly reliable, and scalable file storage service that is compatible with Windows-based applications. It provides native Windows file system access and supports integration with on-premises Active Directory, allowing seamless access to files by authorized users.
  • By migrating the files to Amazon FSx for Windows File Server, you can benefit from its scalability and alleviate the capacity issue of the on-premises file server.
  • Integrating Amazon FSx with the on-premises Active Directory ensures that user access and permissions can be managed centrally and in accordance with existing security policies.
  • Configuring AWS Client VPN provides a secure and encrypted connection for remote employees to access the Amazon FSx file system securely, ensuring that files are downloaded securely to employees’ devices.
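
A sketch of the FSx side of this design is below. The subnet ID, sizing values, domain name, and directory credentials are assumptions; in practice the service account password would come from a secrets store rather than appearing in code.

```python
import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,          # GiB, assumed; sized above the on-premises server
    StorageType="SSD",
    SubnetIds=["subnet-EXAMPLE"],  # hypothetical private subnet
    WindowsConfiguration={
        "ThroughputCapacity": 32,  # MB/s, assumed
        # Join the file system to the existing on-premises Active Directory.
        "SelfManagedActiveDirectoryConfiguration": {
            "DomainName": "corp.example.com",      # assumed domain
            "UserName": "FsxServiceAccount",       # assumed service account
            "Password": "<from-a-secrets-store>",  # placeholder only
            "DnsIps": ["10.0.0.10"],               # assumed on-premises DNS
        },
    },
)
```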

Option A, migrating the file server to an EC2 instance in a public subnet with restricted inbound traffic, may provide accessibility but does not ensure the level of security required for confidential and sensitive files.

Option C, migrating the files to Amazon S3 and creating a private VPC endpoint with a signed URL, allows for secure storage and download of files, but it does not address the need for secure access control or integration with the on-premises Active Directory.

Option D, migrating the files to Amazon S3 and creating a public VPC endpoint with AWS Single Sign-On, does not provide the necessary level of security for confidential and sensitive files, as it would make the files accessible to anyone with access to the public VPC endpoint.

Therefore, the recommended solution is to migrate the files to an Amazon FSx for Windows File Server file system, integrate it with the on-premises Active Directory, and configure AWS Client VPN for secure access by employees.
