
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 53

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&As) are available free of charge to help you pass the AWS Certified Solutions Architect – Associate SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.

Question 1241

Exam Question

A company wants to improve the availability and performance of its stateless UDP-based workload. The workload is deployed on Amazon EC2 instances in multiple AWS Regions.

What should a solutions architect recommend to accomplish this?

A. Place the EC2 instances behind Network Load Balancers (NLBs) in each Region. Create an accelerator using AWS Global Accelerator. Use the NLBs as endpoints for the accelerator.
B. Place the EC2 instances behind Application Load Balancers (ALBs) in each Region. Create an accelerator using AWS Global Accelerator. Use the ALBs as endpoints for the accelerator.
C. Place the EC2 instances behind Network Load Balancers (NLBs) in each Region. Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to the NLBs.
D. Place the EC2 instances behind Application Load Balancers (ALBs) in each Region. Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to the ALBs.

Correct Answer

A. Place the EC2 instances behind Network Load Balancers (NLBs) in each Region. Create an accelerator using AWS Global Accelerator. Use the NLBs as endpoints for the accelerator.

Explanation

To improve the availability and performance of the stateless UDP-based workload deployed on EC2 instances in multiple AWS Regions, the recommended solution is to use Network Load Balancers (NLBs) in combination with AWS Global Accelerator.

Option A suggests placing the EC2 instances behind NLBs in each Region. NLBs operate at layer 4 and support TCP, UDP, and TLS listeners, making them the Elastic Load Balancing option suited to UDP-based workloads, and they are designed to handle high volumes of traffic with very low latency. Distributing the workload across multiple EC2 instances behind NLBs improves both availability and scalability.

In addition, an accelerator can be created using AWS Global Accelerator. AWS Global Accelerator is a service that improves the availability and performance of applications by routing traffic through the AWS global network infrastructure. By using the NLBs as endpoints for the accelerator, traffic can be efficiently routed to the EC2 instances in each Region, providing low-latency and high-performance access to the workload.
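
As a rough sketch of this setup with boto3 (the Global Accelerator API is served only from the us-west-2 Region; the accelerator name, UDP port, and NLB ARN below are hypothetical):

```python
import boto3

# Global Accelerator is a global service; its API endpoint lives in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="udp-workload", Enabled=True)
accelerator_arn = accelerator["Accelerator"]["AcceleratorArn"]

# Listen for UDP traffic on the (hypothetical) application port.
listener = ga.create_listener(
    AcceleratorArn=accelerator_arn,
    Protocol="UDP",
    PortRanges=[{"FromPort": 4000, "ToPort": 4000}],
)

# One endpoint group per Region, each pointing at that Region's NLB.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:"
                      "123456789012:loadbalancer/net/udp-nlb/0123456789abcdef",
        "Weight": 128,
    }],
)
```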

Option B suggests using Application Load Balancers (ALBs) instead of NLBs. However, ALBs are primarily designed for HTTP and HTTPS traffic and may not be suitable for stateless UDP-based workloads.

Option C suggests using CloudFront with Route 53 latency-based routing. CloudFront can improve performance by caching content at edge locations, but it accepts only HTTP and HTTPS requests and cannot proxy UDP traffic, so it is not viable for this workload.

Option D suggests using ALBs with CloudFront and Route 53 latency-based routing. This combination inherits both limitations: neither ALBs nor CloudFront support UDP, so it is also unsuitable for a stateless UDP-based workload.

Therefore, option A is the most appropriate solution to improve the availability and performance of the stateless UDP-based workload.

Question 1242

Exam Question

A company owns an asynchronous API that is used to ingest user requests and, based on the request type, dispatch requests to the appropriate microservice for processing. The company is using Amazon API Gateway to deploy the API front end, and an AWS Lambda function that invokes Amazon DynamoDB to store user requests before dispatching them to the processing microservices. The company provisioned as much DynamoDB throughput as its budget allows, but the company is still experiencing availability issues and is losing user requests.

What should a solutions architect do to address this issue without impacting existing users?

A. Add throttling on the API Gateway with server-side throttling limits.
B. Use DynamoDB Accelerator (DAX) and Lambda to buffer writes to DynamoDB.
C. Create a secondary index in DynamoDB for the table with the user requests.
D. Use the Amazon Simple Queue Service (Amazon SQS) queue and Lambda to buffer writes to DynamoDB.

Correct Answer

D. Use the Amazon Simple Queue Service (Amazon SQS) queue and Lambda to buffer writes to DynamoDB.

Explanation

To address the availability issues and prevent the loss of user requests without impacting existing users, the recommended solution is to place an Amazon Simple Queue Service (Amazon SQS) queue in front of DynamoDB and use AWS Lambda to buffer the writes.

Option A suggests adding throttling on the API Gateway with server-side throttling limits. Throttling rejects requests that exceed the configured rate, so requests above the limit are still lost and existing users see errors. It manages load but does not add any buffering.

Option C suggests creating a secondary index in DynamoDB for the table with the user requests. A secondary index improves query flexibility but adds no write capacity; a global secondary index actually consumes additional write capacity units for every item written, which would make the throughput problem worse.

Option B suggests using DynamoDB Accelerator (DAX) and Lambda to buffer writes to DynamoDB. DAX is an in-memory read cache for DynamoDB. It operates as a write-through cache: every write still passes to DynamoDB synchronously, so DAX cannot buffer writes or prevent request loss when the table's write throughput is exhausted.

Option D suggests using an Amazon SQS queue and Lambda to buffer writes to DynamoDB. Because the API is already asynchronous, inserting a queue changes nothing for existing users. The front-end Lambda function enqueues each request, where SQS stores it durably; a second Lambda function consumes the queue and writes to DynamoDB at a rate the provisioned throughput can sustain. If a write is throttled, the message returns to the queue after its visibility timeout and is retried, so no user request is lost.
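
A minimal sketch of the queue consumer, assuming the messages carry JSON-encoded requests and a hypothetical UserRequests table:

```python
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("UserRequests")  # hypothetical table name

def handler(event, context):
    # Invoked by the SQS event source mapping with a batch of messages.
    # If a DynamoDB write is throttled, the raised exception makes the
    # batch return to the queue for a retry, so no request is lost.
    for record in event["Records"]:
        request = json.loads(record["body"])
        table.put_item(Item=request)
```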

Therefore, option D is the most appropriate solution to address the availability issues and prevent the loss of user requests without impacting existing users.

Question 1243

Exam Question

A company wants a storage option that enables its data science team to analyze its data on premises and in the AWS Cloud. The team needs to be able to run statistical analyses by using the data on premises and by using a fleet of Amazon EC2 instances across multiple Availability Zones.

What should a solutions architect do to meet these requirements?

A. Use an AWS Storage Gateway tape gateway to copy the on-premises files into Amazon S3.
B. Use an AWS Storage Gateway volume gateway to copy the on-premises files into Amazon S3.
C. Use an AWS Storage Gateway file gateway to copy the on-premises files into Amazon S3.
D. Attach an Amazon Elastic File System (Amazon EFS) file system to the on-premises servers. Copy the files to Amazon EFS.

Correct Answer

C. Use an AWS Storage Gateway file gateway to copy the on-premises files into Amazon S3.

Explanation

To meet the requirements of enabling the data science team to analyze data both on premises and in the AWS Cloud, the recommended solution is to use an AWS Storage Gateway file gateway to copy the on-premises files into Amazon S3.

Option A suggests using an AWS Storage Gateway tape gateway to copy the on-premises files into Amazon S3. A tape gateway presents a virtual tape library to backup applications; the data is stored as virtual tapes intended for long-term archival, not as directly accessible files, so it cannot support interactive analysis.

Option B suggests using an AWS Storage Gateway volume gateway to copy the on-premises files into Amazon S3. Volume gateways expose block storage over iSCSI and store the data in a format that is accessible in the cloud only through EBS snapshots, not as files, so the EC2 fleet could not analyze the data directly.

Option D suggests attaching an Amazon Elastic File System (Amazon EFS) file system to the on-premises servers and copying the files to Amazon EFS. Mounting EFS from on premises requires AWS Direct Connect or VPN connectivity, and every on-premises read would then traverse the network; unlike a file gateway, EFS provides no local cache for the on-premises analyses.

Option C suggests using an AWS Storage Gateway file gateway. A file gateway presents an NFS or SMB share to the on-premises servers, keeps frequently used data in a local cache for low-latency on-premises access, and stores the files as native objects in Amazon S3. The EC2 fleet across multiple Availability Zones can then read the same objects directly from S3, giving both environments access to a single copy of the data.
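
A hedged sketch of the file-share creation, assuming the file gateway is already activated; every ARN below is a placeholder for a real gateway, IAM role, and S3 bucket:

```python
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")

# Expose the S3 bucket to on-premises servers as an NFS share. The gateway
# caches hot data locally, while EC2 instances read the same objects from S3.
share = sgw.create_nfs_file_share(
    ClientToken="data-science-share-0001",  # idempotency token
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12A3456B",
    Role="arn:aws:iam::123456789012:role/StorageGatewayS3Access",
    LocationARN="arn:aws:s3:::example-data-science-bucket",
)
print(share["FileShareARN"])
```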

Therefore, option C is the most appropriate solution to meet the requirements of analyzing data on premises and in the AWS Cloud.

Question 1244

Exam Question

A company is moving its on-premises applications to Amazon EC2 instances. However, as a result of fluctuating compute requirements, the EC2 instances must always be ready to use between 8 AM and 5 PM in specific Availability Zones.

Which EC2 instances should the company choose to run the applications?

A. Scheduled Reserved Instances
B. On-Demand Instances
C. Spot Instances as part of a Spot Fleet
D. EC2 instances in an Auto Scaling group

Correct Answer

A. Scheduled Reserved Instances

Explanation

To guarantee that EC2 capacity is ready to use every day between 8 AM and 5 PM in specific Availability Zones, the company should choose Scheduled Reserved Instances.

Option A, Scheduled Reserved Instances, matches the scenario exactly. They reserve capacity on a recurring daily, weekly, or monthly schedule for a one-year term, so the instances are guaranteed to be available in the specified Availability Zones during the reserved time windows, and the company pays only for the scheduled hours. (AWS no longer sells new Scheduled Reserved Instances, but they remain the intended answer for this exam scenario.)
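
For illustration only, a sketch of searching for matching daily schedules with boto3; the instance type, Availability Zone, and date range are hypothetical:

```python
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

now = datetime.now(timezone.utc)

# Look for capacity that recurs daily and covers at least the nine-hour
# 8 AM - 5 PM window in a specific Availability Zone.
offers = ec2.describe_scheduled_instance_availability(
    FirstSlotStartTimeRange={
        "EarliestTime": now + timedelta(days=1),
        "LatestTime": now + timedelta(days=7),
    },
    Recurrence={"Frequency": "Daily", "Interval": 1},
    MinSlotDurationInHours=9,
    Filters=[
        {"Name": "availability-zone", "Values": ["us-east-1a"]},
        {"Name": "instance-type", "Values": ["c5.large"]},
    ],
)
for offer in offers["ScheduledInstanceAvailabilitySet"]:
    print(offer["InstanceType"], offer["AvailabilityZone"], offer["HourlyPrice"])
```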

Option B, On-Demand Instances, would run the workload but neither reserves capacity in the required Availability Zones nor offers any discount for a usage pattern this predictable.

Option C, Spot Instances as part of a Spot Fleet, can provide cost savings, but Spot Instances can be interrupted when the Spot price exceeds your maximum price or when EC2 reclaims the capacity, so they cannot guarantee that the application is always ready to use during business hours.

Option D, EC2 instances in an Auto Scaling group, describes a deployment mechanism rather than a purchasing option. Scheduled scaling actions could launch instances at 8 AM, but an Auto Scaling group does not by itself reserve capacity in specific Availability Zones, so the instances are not guaranteed to be available.

Therefore, option A, Scheduled Reserved Instances, is the correct choice for applications that must be ready to use between 8 AM and 5 PM in specific Availability Zones.

Question 1245

Exam Question

A development team stores its Amazon RDS MySQL DB instance user name and password credentials in a configuration file. The configuration file is stored as plaintext on the root device volume of the team's Amazon EC2 instance. When the team's application needs to reach the database, it reads the file and loads the credentials into the code. The team has modified the permissions of the configuration file so that only the application can read its content. A solutions architect must design a more secure solution.

What should the solutions architect do to meet this requirement?

A. Store the configuration file in Amazon S3. Grant the application access to read the configuration file.
B. Create an IAM role with permission to access the database. Attach this IAM role to the EC2 instance.
C. Enable SSL connections on the database instance. Alter the database user to require SSL when logging in.
D. Move the configuration file to an EC2 instance store, and create an Amazon Machine Image (AMI) of the instance. Launch new instances from this AMI.

Correct Answer

B. Create an IAM role with permission to access the database. Attach this IAM role to the EC2 instance.

Explanation

To meet the requirement of a more secure solution for storing and accessing the Amazon RDS MySQL DB instance credentials, a solutions architect should create an IAM role with permission to access the database and then attach this IAM role to the EC2 instance.

Option A, storing the configuration file in Amazon S3, merely relocates the plaintext credentials: the user name and password would still exist as a long-lived secret in a file, now in S3, and anything with read access to the bucket could retrieve them.

Option C, enabling SSL connections on the database instance and altering the database user to require SSL, provides encryption for the data in transit but does not address the issue of securing the credentials stored in the configuration file.

Option D, moving the configuration file to an EC2 instance store and creating an Amazon Machine Image (AMI) of the instance, would still leave the credentials accessible to anyone who can access the instance store or launch instances from the AMI.

Therefore, the most appropriate solution is to create an IAM role with permission to access the database (for example, the rds-db:connect permission used by RDS IAM database authentication) and attach that role to the EC2 instance. The application then obtains short-lived authentication tokens at runtime instead of reading a long-lived password from a file on disk, eliminating the plaintext credentials entirely.
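
Option B is typically realized with RDS IAM database authentication. A minimal sketch, assuming IAM authentication is enabled on the DB instance, the instance role carries the rds-db:connect permission, and the endpoint, user, and CA bundle path below are placeholders (PyMySQL is just one possible driver):

```python
import boto3
import pymysql  # assumed driver; any MySQL client that supports TLS works

DB_HOST = "mydb.abcdefghijkl.us-east-1.rds.amazonaws.com"  # placeholder endpoint

# Credentials come from the EC2 instance's IAM role -- nothing is read from disk.
rds = boto3.client("rds", region_name="us-east-1")
token = rds.generate_db_auth_token(
    DBHostname=DB_HOST,
    Port=3306,
    DBUsername="app_user",
)

# The short-lived token replaces the stored password; IAM auth requires TLS.
conn = pymysql.connect(
    host=DB_HOST,
    port=3306,
    user="app_user",
    password=token,
    ssl={"ca": "/opt/certs/rds-ca-bundle.pem"},  # placeholder CA bundle path
)
```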

Question 1246

Exam Question

A company is launching a new application deployed on an Amazon Elastic Container Service (Amazon ECS) cluster and is using the Fargate launch type for ECS tasks. The company is monitoring CPU and memory usage because it is expecting high traffic to the application upon its launch. However, the company wants to reduce costs when utilization decreases.

What should a solutions architect recommend?

A. Use Amazon EC2 Auto Scaling to scale at certain periods based on previous traffic patterns.
B. Use an AWS Lambda function to scale Amazon ECS based on metric breaches that trigger an Amazon CloudWatch alarm.
C. Use Amazon EC2 Auto Scaling with simple scaling policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm.
D. Use AWS Application Auto Scaling with target tracking policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm.

Correct Answer

D. Use AWS Application Auto Scaling with target tracking policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm.

Explanation

To achieve cost optimization while maintaining scalability for an Amazon ECS cluster using the Fargate launch type, a solutions architect should recommend using AWS Application Auto Scaling with target tracking policies.

AWS Application Auto Scaling is designed to automatically scale resources based on predefined scaling policies. With a target tracking policy, the ECS service's desired task count is adjusted dynamically to hold a specific metric, such as average CPU or memory utilization, near a target value. When the metric breaches the threshold, an Amazon CloudWatch alarm managed by the policy triggers, and the policy adds or removes Fargate tasks accordingly.

This approach allows the ECS service to scale up to handle high traffic and scale down when utilization decreases, resulting in cost optimization. It eliminates the need for manual intervention and provides automatic scaling based on real-time metrics.
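
A sketch of the two Application Auto Scaling calls involved, using hypothetical cluster and service names and a 70% average-CPU target:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Hypothetical ECS cluster and service names.
resource_id = "service/production-cluster/web-service"

# Register the Fargate service's desired task count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Target tracking manages the CloudWatch alarms and keeps average CPU near 70%.
autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 120,
    },
)
```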

Option A, using Amazon EC2 Auto Scaling, is not applicable in this case because the company is using the Fargate launch type, which abstracts the underlying EC2 instances.

Option B, using an AWS Lambda function to scale Amazon ECS, is not a recommended approach because AWS Application Auto Scaling provides native integration and automation for scaling ECS tasks.

Option C, using Amazon EC2 Auto Scaling with simple scaling policies, is not suitable for the Fargate launch type, which does not utilize EC2 instances.

Therefore, the most appropriate solution is to use AWS Application Auto Scaling with target tracking policies for dynamic and cost-effective scaling of the Amazon ECS cluster.

Question 1247

Exam Question

A company is designing an internet-facing web application. The application runs on Linux-based Amazon EC2 instances that store sensitive user data in Amazon RDS MySQL Multi-AZ DB instances. The EC2 instances are in public subnets, and the RDS DB instances are in private subnets. The security team has mandated that the DB instances be secured against web-based attacks.

What should a solutions architect recommend?

A. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Configure the EC2 instance iptables rules to drop suspicious web traffic. Create a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the individual EC2 instances.
B. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Move DB instances to the same subnets that EC2 instances are located in. Create a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the individual EC2 instances.
C. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Use AWS WAF to monitor inbound web traffic for threats. Create a security group for the web application servers and a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the web application server security group.
D. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Use AWS WAF to monitor inbound web traffic for threats. Configure the Auto Scaling group to automatically create new DB instances under heavy traffic. Create a security group for the RDS DB instances. Configure the RDS security group to only allow port 3306 inbound.

Correct Answer

C. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Use AWS WAF to monitor inbound web traffic for threats. Create a security group for the web application servers and a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the web application server security group.

Explanation

To secure the internet-facing web application and protect the sensitive user data stored in Amazon RDS MySQL Multi-AZ DB instances, a solutions architect should recommend the following:

  1. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer (ALB). The ALB acts as a single point of entry for web traffic and provides load balancing and scalability for the EC2 instances.
  2. Use AWS WAF, a web application firewall, to monitor and protect against web-based attacks. AWS WAF helps identify and filter out malicious traffic, including common web attack patterns such as SQL injection and cross-site scripting (XSS), and attaches directly to the ALB.
  3. Create a security group for the web application servers and a separate security group for the DB instances. The security group for the web application servers should allow inbound traffic on port 80 or 443 (HTTP or HTTPS) for web traffic. The security group for the DB instances should only allow inbound traffic on port 3306 from the security group associated with the web application servers. This limits access to the DB instances to only trusted sources.

By implementing these recommendations, the web application will be protected against web-based attacks, and access to the DB instances will be restricted to only the necessary sources, enhancing the overall security of the system.
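
A brief sketch of the key rule, with hypothetical security group IDs: the DB group admits MySQL traffic only when the source is the web tier's security group, not a CIDR range.

```python
import boto3

ec2 = boto3.client("ec2")

WEB_SG = "sg-0aaaaaaaaaaaaaaaa"  # hypothetical web tier security group
DB_SG = "sg-0bbbbbbbbbbbbbbbb"   # hypothetical DB tier security group

# Reference the web tier's security group as the source, so any instance
# launched into that group -- including new Auto Scaling instances -- can
# reach the database, and nothing else can.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": WEB_SG}],
    }],
)
```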

Option A is not recommended because it suggests configuring EC2 instance iptables rules, which can be complex and difficult to manage in a dynamic environment.

Option B suggests moving the DB instances to the same subnets as the EC2 instances, which compromises the security principle of isolating sensitive resources in private subnets.

Option D is not optimal because it suggests automatically creating new DB instances under heavy traffic, which may not be necessary for securing against web-based attacks.

Question 1248

Exam Question

A company is building an application on Amazon EC2 instances that generates temporary transactional data. The application requires access to data storage that can provide configurable and consistent IOPS.

What should a solutions architect recommend?

A. Provision an EC2 instance with a Throughput Optimized HDD (st1) root volume and a Cold HDD (sc1) data volume.
B. Provision an EC2 instance with a Throughput Optimized HDD (st1) volume that will serve as the root and data volume.
C. Provision an EC2 instance with a General Purpose SSD (gp2) root volume and Provisioned IOPS SSD (io1) data volume.
D. Provision an EC2 instance with a General Purpose SSD (gp2) root volume. Configure the application to store its data in an Amazon S3 bucket.

Correct Answer

C. Provision an EC2 instance with a General Purpose SSD (gp2) root volume and Provisioned IOPS SSD (io1) data volume.

Explanation

To meet the requirements of configurable and consistent IOPS for the application’s data storage, a solutions architect should recommend provisioning an EC2 instance with a General Purpose SSD (gp2) root volume and a Provisioned IOPS SSD (io1) data volume.

The General Purpose SSD (gp2) volume is suitable for the EC2 instance’s root volume as it provides a balance of price and performance for a wide range of workloads.

For the data storage requiring configurable and consistent IOPS, the Provisioned IOPS SSD (io1) volume is recommended. With io1 volumes, you can provision a specific amount of IOPS (input/output operations per second) to meet the performance requirements of your application. This allows you to configure the IOPS to match the workload needs, ensuring consistent and predictable performance.
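
A sketch of provisioning such a data volume with boto3; the size, IOPS, Availability Zone, instance ID, and device name are all hypothetical (io1 allows up to 50 IOPS per GiB, so 100 GiB supports up to 5,000 provisioned IOPS):

```python
import boto3

ec2 = boto3.client("ec2")

# Create a 100 GiB io1 volume provisioned for a consistent 5,000 IOPS.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # must match the instance's AZ
    Size=100,
    VolumeType="io1",
    Iops=5000,
)
volume_id = volume["VolumeId"]

# Wait until the volume is ready, then attach it as the data volume.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
ec2.attach_volume(
    VolumeId=volume_id,
    InstanceId="i-0123456789abcdef0",  # hypothetical instance
    Device="/dev/sdf",
)
```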

Option A suggests using a Throughput Optimized HDD (st1) root volume, which is designed for large, sequential workloads, rather than transactional data that requires configurable and consistent IOPS.

Option B suggests using a Throughput Optimized HDD (st1) volume for both the root and data volume, which may not provide the required performance characteristics for transactional data.

Option D suggests using a General Purpose SSD (gp2) root volume and storing the application's data in an Amazon S3 bucket. Amazon S3 is highly scalable and durable object storage accessed over HTTP; it does not offer provisioned IOPS or block-level access, so it is unsuitable for transactional data that requires configurable and consistent IOPS.

Question 1249

Exam Question

A company has a popular gaming platform running on AWS. The application is sensitive to latency because latency can impact the user experience and introduce unfair advantages to some players. The application is deployed in every AWS Region and runs on Amazon EC2 instances that are part of Auto Scaling groups configured behind Application Load Balancers (ALBs). A solutions architect needs to implement a mechanism to monitor the health of the application and redirect traffic to healthy endpoints.

Which solution meets these requirements?

A. Configure an accelerator in AWS Global Accelerator. Add a listener for the port that the application listens on and attach it to a Regional endpoint in each Region. Add the ALB as the endpoint.
B. Create an Amazon CloudFront distribution and specify the ALB as the origin server. Configure the cache behavior to use origin cache headers. Use AWS Lambda functions to optimize the traffic.
C. Create an Amazon CloudFront distribution and specify Amazon S3 as the origin server. Configure the cache behavior to use origin cache headers. Use AWS Lambda functions to optimize the traffic.
D. Configure an Amazon DynamoDB database to serve as the data store for the application. Create a DynamoDB Accelerator (DAX) cluster to act as the in-memory cache for DynamoDB hosting the application data.

Correct Answer

A. Configure an accelerator in AWS Global Accelerator. Add a listener for the port that the application listens on and attach it to a Regional endpoint in each Region. Add the ALB as the endpoint.

Explanation

To monitor the health of the application and redirect traffic to healthy endpoints, a solutions architect should configure an accelerator in AWS Global Accelerator. AWS Global Accelerator is a service that improves the availability and performance of your applications by routing user traffic to the nearest healthy endpoint across multiple AWS Regions.

By adding a listener for the port that the application listens on and attaching it to a Regional endpoint in each Region, the solutions architect ensures that traffic is directed to the closest and healthiest endpoint, reducing latency and improving the user experience.

Using Application Load Balancers (ALBs) as the endpoints allows for efficient load balancing among the EC2 instances running the application. Global Accelerator continuously runs its own health checks against each endpoint and automatically routes traffic away from unhealthy endpoints or entire Regions, while the ALBs perform target-level health checks and send traffic only to healthy instances, ensuring high availability.
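
A sketch of attaching one Region's ALB to an existing accelerator listener with explicit health check settings; the ARNs are placeholders. After ThresholdCount consecutive failed checks, Global Accelerator stops sending that endpoint traffic:

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

ga.create_endpoint_group(
    ListenerArn=(
        "arn:aws:globalaccelerator::123456789012:"
        "accelerator/1234abcd/listener/5678efgh"  # placeholder listener ARN
    ),
    EndpointGroupRegion="eu-west-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:"
                      "123456789012:loadbalancer/app/game-alb/0123456789abcdef",
        # Preserve client IPs so the latency-sensitive game servers see real addresses.
        "ClientIPPreservationEnabled": True,
    }],
    HealthCheckProtocol="TCP",
    HealthCheckPort=443,
    HealthCheckIntervalSeconds=10,  # probe every 10 seconds
    ThresholdCount=3,               # mark unhealthy after 3 failed checks
)
```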

Option B suggests using Amazon CloudFront with the ALB as the origin server, but CloudFront is primarily a content delivery network (CDN) service designed for caching and serving static and dynamic content. While it can help improve latency by caching content close to the users, it may not be the most suitable solution for monitoring the health of the application and redirecting traffic based on health status.

Option C suggests using Amazon CloudFront with Amazon S3 as the origin server, but this solution is more suitable for serving static content from S3 rather than monitoring application health and redirecting traffic based on health status.

Option D suggests using Amazon DynamoDB and DynamoDB Accelerator (DAX) for data storage and caching, which is unrelated to monitoring application health and traffic redirection.

Therefore, option A is the most appropriate solution for monitoring the application’s health and redirecting traffic to healthy endpoints.

Question 1250

Exam Question

A solutions architect needs to design a resilient solution for Windows users’ home directories. The solution must provide fault tolerance, file-level backup and recovery, and access control, based upon the company’s Active Directory.

Which storage solution meets these requirements?

A. Configure Amazon S3 to store the users’ home directories. Join Amazon S3 to Active Directory.
B. Configure a Multi-AZ file system with Amazon FSx for Windows File Server. Join Amazon FSx to Active Directory.
C. Configure Amazon Elastic File System (Amazon EFS) for the users’ home directories. Configure AWS Single Sign-On with Active Directory.
D. Configure Amazon Elastic Block Store (Amazon EBS) to store the users’ home directories. Configure AWS Single Sign-On with Active Directory.

Correct Answer

B. Configure a Multi-AZ file system with Amazon FSx for Windows File Server. Join Amazon FSx to Active Directory.

Explanation

To meet the requirements of fault tolerance, file-level backup and recovery, and access control based on the company’s Active Directory, the most suitable storage solution is to configure a Multi-AZ file system with Amazon FSx for Windows File Server and join it to Active Directory.

Amazon FSx for Windows File Server provides a fully managed, highly available file storage service that is compatible with Windows-based applications and workloads. It offers built-in support for integrating with Active Directory, allowing you to use existing user and group permissions for access control.

By configuring a Multi-AZ file system, the solution ensures fault tolerance and high availability. With Multi-AZ, Amazon FSx automatically replicates your file system data to a standby file system in a different Availability Zone, providing data redundancy and ensuring that the file system remains accessible even in the event of an AZ-level failure.

This solution also includes features for file-level backup and recovery. Amazon FSx provides integrated backup capabilities, allowing you to create automatic backups of your file systems. These backups are stored in Amazon S3, providing durability and data protection. You can easily restore files or file systems from these backups as needed.
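
A sketch of creating such a Multi-AZ file system with boto3, assuming an existing AWS Managed Microsoft AD directory (a self-managed AD is configured differently) and two subnets in different Availability Zones; all IDs below are hypothetical:

```python
import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,  # GiB of storage for the home directories
    SubnetIds=[
        "subnet-0aaaaaaaaaaaaaaaa",  # preferred (active) file server
        "subnet-0bbbbbbbbbbbbbbbb",  # standby file server in a second AZ
    ],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-0123456789",  # AWS Managed Microsoft AD
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-0aaaaaaaaaaaaaaaa",
        "ThroughputCapacity": 32,             # MB/s
        "AutomaticBackupRetentionDays": 7,    # daily file-level backups
    },
)
```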

Option A suggests using Amazon S3 to store the users’ home directories and joining S3 to Active Directory. However, Amazon S3 is an object storage service: it cannot be joined to Active Directory and does not provide SMB file shares or Windows file-level access control.

Option C suggests using Amazon Elastic File System (Amazon EFS) for the users’ home directories and configuring AWS Single Sign-On with Active Directory. Amazon EFS is an NFS file system designed for Linux workloads; it does not support the SMB protocol or Windows ACLs and does not natively integrate with Active Directory for file-level access control.

Option D suggests using Amazon Elastic Block Store (Amazon EBS) for storing the users’ home directories and configuring AWS Single Sign-On with Active Directory. Amazon EBS provides block storage that attaches to individual EC2 instances; it is not a shared file system and offers no file-level access control or Active Directory integration.

Therefore, option B is the most suitable storage solution as it provides fault tolerance, file-level backup and recovery, and access control based on the company’s Active Directory.