AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 29

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free to help you pass the SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.

Question 1001

Exam Question

A company wants to reduce its Amazon S3 storage costs in its production environment without impacting durability or performance of the stored objects.

What is the FIRST step the company should take to meet these objectives?

A. Enable Amazon Macie on the business-critical S3 buckets to classify the sensitivity of the objects.
B. Enable S3 analytics to identify S3 buckets that are candidates for transitioning to S3 Standard-Infrequent Access (S3 Standard-IA).
C. Enable versioning on all business-critical S3 buckets.
D. Migrate the objects in all S3 buckets to S3 Intelligent-Tiering.

Correct Answer

B. Enable S3 analytics to identify S3 buckets that are candidates for transitioning to S3 Standard-Infrequent Access (S3 Standard-IA).

Explanation

To reduce Amazon S3 storage costs without impacting durability or performance, the company should first enable S3 analytics to identify S3 buckets that are candidates for transitioning to S3 Standard-Infrequent Access (S3 Standard-IA).

S3 analytics provides insights into the storage usage and access patterns of S3 buckets. By analyzing this data, the company can identify which S3 buckets have objects that are infrequently accessed and could be moved to the more cost-effective S3 Standard-IA storage class.

Enabling S3 analytics helps the company make informed decisions about optimizing storage costs without sacrificing the durability or performance of the stored objects. By transitioning infrequently accessed objects to S3 Standard-IA, the company can achieve cost savings while still maintaining the same durability and performance characteristics.
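As a sketch of what enabling S3 analytics (Storage Class Analysis) looks like in practice, the payload below follows the shape that boto3's `put_bucket_analytics_configuration` call expects. The bucket names and the configuration ID are illustrative assumptions; no AWS call is made here.

```python
import json

# Storage Class Analysis configuration for an S3 bucket. The export
# destination lets you analyze access patterns before choosing a
# transition to S3 Standard-IA. Names below are hypothetical.
analytics_config = {
    "Id": "EntireBucket",
    "StorageClassAnalysis": {
        "DataExport": {
            "OutputSchemaVersion": "V_1",
            "Destination": {
                "S3BucketDestination": {
                    "Format": "CSV",
                    "Bucket": "arn:aws:s3:::example-analytics-results",
                    "Prefix": "analytics/production-bucket/",
                },
            },
        },
    },
}

# Applied with boto3 (assumes credentials and existing buckets):
#   s3 = boto3.client("s3")
#   s3.put_bucket_analytics_configuration(
#       Bucket="example-production-bucket",
#       Id=analytics_config["Id"],
#       AnalyticsConfiguration=analytics_config,
#   )
print(json.dumps(analytics_config, indent=2))
```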

Option A, enabling Amazon Macie on the business-critical S3 buckets to classify the sensitivity of the objects, may help in understanding the sensitivity of the data but does not directly address the objective of reducing storage costs.

Option C, enabling versioning on all business-critical S3 buckets, does not directly contribute to reducing storage costs but rather introduces a mechanism for maintaining multiple versions of objects.

Option D, migrating the objects in all S3 buckets to S3 Intelligent-Tiering, may be a consideration after identifying the appropriate storage classes using S3 analytics, but it is not the first step in the process.

Therefore, option B is the first step the company should take to identify S3 buckets that are candidates for transitioning to S3 Standard-IA and reduce their Amazon S3 storage costs while maintaining durability and performance.

Question 1002

Exam Question

A company has an API-based inventory reporting application running on Amazon EC2 instances. The application stores information in an Amazon DynamoDB table. The company's distribution centers have an on-premises shipping application that calls an API to update the inventory before printing shipping labels. The company has been experiencing application interruptions several times each day, resulting in lost transactions.

What should a solutions architect recommend to improve application resiliency?

A. Modify the shipping application to write to a local database.
B. Modify the application APIs to run serverless using AWS Lambda.
C. Configure Amazon API Gateway to call the EC2 inventory application APIs.
D. Modify the application to send inventory updates using Amazon Simple Queue Service (Amazon SQS).

Correct Answer

D. Modify the application to send inventory updates using Amazon Simple Queue Service (Amazon SQS).

Explanation

To improve application resiliency and prevent lost transactions, a solutions architect should recommend modifying the application to send inventory updates using Amazon Simple Queue Service (Amazon SQS).

By integrating Amazon SQS into the application, the inventory updates can be decoupled from the immediate processing and stored in a reliable and durable message queue. The shipping application can send inventory update requests to the SQS queue, and the EC2 instances running the inventory reporting application can retrieve and process these messages at their own pace, ensuring that no transactions are lost even during application interruptions.

Using SQS helps to mitigate interruptions and ensure the resilience of the overall system. The shipping application can continue sending inventory updates to the SQS queue regardless of the availability of the inventory reporting application. Once the inventory reporting application is back online, it can retrieve the queued messages from SQS and process them.
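As a minimal sketch of the decoupling step, the shipping application wraps each inventory update in an SQS message instead of calling the inventory API directly. The queue URL and message fields below are hypothetical.

```python
import json

# Build the parameters for SQS's send_message API. The producer (shipping
# application) enqueues the update; the consumer (EC2 inventory application)
# processes it whenever it is healthy.
update = {
    "sku": "ABC-123",
    "quantity_delta": -2,
    "distribution_center": "DC-07",
}
send_message_params = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/inventory-updates",
    "MessageBody": json.dumps(update),
}

# Producer side (boto3): boto3.client("sqs").send_message(**send_message_params)
# Consumer side: poll with receive_message, write the update to DynamoDB,
# and only then call delete_message. A message that fails mid-processing
# returns to the queue after its visibility timeout instead of being lost.
```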

Option A, modifying the shipping application to write to a local database, may introduce limitations in terms of resiliency, as an on-premises database may not offer the same durability and availability as a managed service like DynamoDB.

Option B, modifying the application APIs to run serverless using AWS Lambda, can enhance scalability and reduce operational overhead, but it may not directly address the issue of lost transactions during application interruptions.

Option C, configuring Amazon API Gateway to call the EC2 inventory application APIs, does not address the issue of lost transactions. It primarily focuses on managing and securing the API endpoints.

Therefore, option D is the recommended solution as it ensures that inventory updates are reliably processed without loss, even during application interruptions, by leveraging the message queuing capabilities of Amazon SQS.

Question 1003

Exam Question

An application calls a service run by a vendor. The vendor charges based on the number of calls. The finance department needs to know the number of calls that are made to the service to validate the billing statements.

How can a solutions architect design a system to durably store the number of calls without requiring changes to the application?

A. Call the service through an internet gateway.
B. Decouple the application from the service with an Amazon Simple Queue Service (Amazon SQS) queue.
C. Publish a custom Amazon CloudWatch metric that counts calls to the service.
D. Call the service through a VPC peering connection.

Correct Answer

C. Publish a custom Amazon CloudWatch metric that counts calls to the service.

Explanation

To durably store the number of calls made to the vendor’s service without requiring changes to the application, a solutions architect can design a system by publishing a custom Amazon CloudWatch metric that counts the calls to the service.

The calls can be counted outside the application itself, for example by an agent or proxy on the instances that observes outbound requests, and published to CloudWatch as a custom metric. In this case, a custom metric tracks the number of calls made to the vendor's service, incremented once per invocation. CloudWatch retains metric data durably (up to 15 months), so the finance department can validate the billing statements against the call counts recorded in the metric.
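A sketch of the per-call metric publication follows; it builds the request parameters that CloudWatch's `put_metric_data` API expects. The namespace, metric name, and dimension are hypothetical.

```python
# Parameters for one put_metric_data call, emitted once per vendor call.
# Summing this metric over a billing period yields the total call count.
metric_call = {
    "Namespace": "Custom/VendorService",   # hypothetical custom namespace
    "MetricData": [
        {
            "MetricName": "VendorServiceCalls",
            "Dimensions": [{"Name": "Service", "Value": "billing-api"}],
            "Value": 1,
            "Unit": "Count",
        }
    ],
}
# With boto3: boto3.client("cloudwatch").put_metric_data(**metric_call)
# Finance can then query the metric's Sum statistic for the billing period.
```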

Option A, calling the service through an internet gateway, does not provide a mechanism to durably store and track the number of calls without changes to the application.

Option B, decoupling the application from the service with an Amazon Simple Queue Service (Amazon SQS) queue, addresses the decoupling aspect but does not directly solve the requirement of tracking and storing the number of calls made to the service.

Option D, calling the service through a VPC peering connection, also does not provide a solution for durably storing and tracking the number of calls without modifying the application.

Therefore, option C is the most suitable solution as it allows the application to publish custom CloudWatch metrics that count the calls made to the service, providing the necessary data for the finance department to validate the billing statements.

Question 1004

Exam Question

A company has a website deployed on AWS. The database backend is hosted on Amazon RDS for MySQL with a primary instance and five read replicas to support scaling needs. The read replicas should lag no more than 1 second behind the primary instance to support the user experience. As traffic on the website continues to increase, the replicas are falling further behind during periods of peak load, resulting in complaints from users when searches yield inconsistent results. A solutions architect needs to reduce the replication lag as much as possible, with minimal changes to the application code or operational requirements.

Which solution meets these requirements?

A. Migrate the database to Amazon Aurora MySQL. Replace the MySQL read replicas with Aurora Replicas and enable Aurora Auto Scaling.
B. Deploy an Amazon ElastiCache for Redis cluster in front of the database. Modify the website to check the cache before querying the database read endpoints.
C. Migrate the database from Amazon RDS to MySQL running on Amazon EC2 compute instances. Choose very large compute optimized instances for all replica nodes.
D. Migrate the database to Amazon DynamoDB. Initially provision a large number of read capacity units (RCUs) to support the required throughput, with on-demand capacity scaling enabled.

Correct Answer

A. Migrate the database to Amazon Aurora MySQL. Replace the MySQL read replicas with Aurora Replicas and enable Aurora Auto Scaling.

Explanation

To reduce replication lag and improve the user experience, a solutions architect should recommend migrating the database to Amazon Aurora MySQL. By replacing the MySQL read replicas with Aurora Replicas and enabling Aurora Auto Scaling, the company can achieve the desired results with minimal changes to the application code or operational requirements.

Amazon Aurora is a MySQL-compatible, fully managed relational database service that offers enhanced performance, durability, and scalability. Because Aurora Replicas read from the same distributed storage volume as the primary instance rather than replaying a replication log, replica lag is typically measured in milliseconds, keeping the replicas much closer to the primary than MySQL binlog replication can.

Enabling Aurora Auto Scaling ensures that the number of replicas can automatically scale up or down based on the workload, helping to maintain optimal performance and reduce replication lag during periods of peak load.
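Aurora replica Auto Scaling is configured through the Application Auto Scaling API. The sketch below builds the two request payloads involved; the cluster identifier and capacity limits are hypothetical, while the namespace, dimension, and predefined metric names are the documented values for Aurora.

```python
# Register the Aurora cluster's replica count as a scalable target.
scalable_target = {
    "ServiceNamespace": "rds",
    "ResourceId": "cluster:example-aurora-cluster",   # hypothetical cluster
    "ScalableDimension": "rds:cluster:ReadReplicaCount",
    "MinCapacity": 5,
    "MaxCapacity": 15,
}

# Target-tracking policy: add or remove Aurora Replicas to keep the
# average reader CPU near the target value.
scaling_policy = {
    "PolicyName": "aurora-replica-cpu-target",
    "ServiceNamespace": "rds",
    "ResourceId": scalable_target["ResourceId"],
    "ScalableDimension": scalable_target["ScalableDimension"],
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 60.0,  # hypothetical 60% average reader CPU target
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
}

# With boto3:
#   aas = boto3.client("application-autoscaling")
#   aas.register_scalable_target(**scalable_target)
#   aas.put_scaling_policy(**scaling_policy)
```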

Option B, deploying an Amazon ElastiCache for Redis cluster in front of the database, can improve read performance but does not directly address the replication lag issue or provide the consistency required by the users.

Option C, migrating the database from Amazon RDS to MySQL running on Amazon EC2 instances, may provide more flexibility but would require significant operational overhead and management compared to using managed services like Amazon Aurora or RDS.

Option D, migrating the database to Amazon DynamoDB, would require significant changes to the application code and data model as DynamoDB is a NoSQL database, which may not align with the existing MySQL-based application.

Therefore, option A is the most suitable solution as it leverages Amazon Aurora’s performance, durability, and scalability features, replaces the MySQL read replicas with Aurora Replicas for efficient replication, and enables Aurora Auto Scaling to reduce replication lag and support the increasing traffic on the website.

Question 1005

Exam Question

A company is planning to migrate a legacy application to AWS. The application currently uses NFS to communicate to an on-premises storage solution to store application data. The application cannot be modified to use any other communication protocols other than NFS for this purpose.

Which storage solution should a solutions architect recommend for use after the migration?

A. AWS DataSync
B. Amazon Elastic Block Store (Amazon EBS)
C. Amazon Elastic File System (Amazon EFS)
D. Amazon EMR File System (Amazon EMRFS)

Correct Answer

C. Amazon Elastic File System (Amazon EFS)

Explanation

To meet the requirement of maintaining NFS communication for storing application data, a solutions architect should recommend using Amazon Elastic File System (Amazon EFS) after the migration.

Amazon EFS is a fully managed, scalable, and elastic file storage service that supports the NFSv4.0 and NFSv4.1 protocols, making it compatible with applications that rely on NFS for communication. It provides a simple and seamless way to migrate NFS-based applications to AWS without requiring significant modifications to the application code.

By mounting Amazon EFS file systems to the application instances, the legacy application can continue to use NFS to communicate with the storage solution, ensuring compatibility and data accessibility. Amazon EFS also offers scalability, durability, and high availability, making it suitable for production workloads.
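As a sketch of the mount step, the snippet below assembles the NFSv4.1 mount command for an EFS file system, using the NFS options AWS documents for EFS. The file system DNS name and mount point are hypothetical.

```python
# Build the mount command an instance would run to attach the EFS file
# system over NFSv4.1. Values below are illustrative placeholders.
fs_dns = "fs-0123456789abcdef0.efs.us-east-1.amazonaws.com"
mount_point = "/mnt/app-data"
options = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport"
mount_cmd = f"sudo mount -t nfs4 -o {options} {fs_dns}:/ {mount_point}"
print(mount_cmd)
```

In practice the amazon-efs-utils mount helper (`mount -t efs`) wraps these options and adds TLS support, but the plain NFSv4.1 mount above is what the legacy application's existing NFS workflow maps onto.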

Option A, AWS DataSync, is a data transfer service used for moving large amounts of data between on-premises storage and AWS storage services. It is not designed to provide NFS-compatible storage for ongoing application data storage.

Option B, Amazon Elastic Block Store (Amazon EBS), provides block-level storage volumes for EC2 instances but does not offer native NFS compatibility.

Option D, Amazon EMR File System (Amazon EMRFS), is a distributed file system used with Amazon EMR (Elastic MapReduce) for big data processing. It is not designed as a standalone storage solution for general application data storage.

Therefore, option C, Amazon Elastic File System (Amazon EFS), is the most suitable storage solution as it supports NFS communication and provides a scalable, managed, and durable file storage service for the legacy application’s data storage needs.

Question 1006

Exam Question

An ecommerce website is deploying its web application as Amazon Elastic Container Service (Amazon ECS) container instances behind an Application Load Balancer (ALB). During periods of high activity, the website slows down and availability is reduced. A solutions architect uses Amazon CloudWatch alarms to receive notifications whenever there is an availability issue so they can scale out resources. Company management wants a solution that automatically responds to such events.

Which solution meets these requirements?

A. Set up AWS Auto Scaling to scale out the ECS service when there are timeouts on the ALB. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high.
B. Set up AWS Auto Scaling to scale out the ECS service when the ALB CPU utilization is too high. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high.
C. Set up AWS Auto Scaling to scale out the ECS service when the service CPU utilization is too high. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high.
D. Set up AWS Auto Scaling to scale out the ECS service when the ALB target group CPU utilization is too high. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high.

Correct Answer

C. Set up AWS Auto Scaling to scale out the ECS service when the service CPU utilization is too high. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high.

Explanation

To automatically respond to availability issues and scale resources accordingly, a solutions architect should set up AWS Auto Scaling at two levels: the ECS service (the number of running tasks) and the ECS cluster (the number of container instances).

Option C is the correct solution. Amazon ECS publishes a CPUUtilization metric for each service, so Service Auto Scaling can add tasks when the service's own CPU utilization is too high, which directly reflects the load on the web application. Scaling the cluster on the CPUReservation or MemoryReservation metrics ensures that additional container instances are added before the cluster runs out of capacity to place the new tasks, preventing resource exhaustion and availability issues during peak load.

Option A is incorrect because ALB timeouts are a symptom with many possible causes, including network connectivity and external dependencies, and are too noisy a signal to drive scaling decisions reliably.

Options B and D are incorrect because an Application Load Balancer is a managed service that does not expose a CPU utilization metric, either for the load balancer itself or for its target groups. Target group metrics cover request counts, latency, and target health, not CPU.

Therefore, option C automatically responds to high-activity events by scaling out the ECS service based on service CPU utilization and scaling out the ECS cluster based on CPU or memory reservation.
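The cluster-level half of the setup, scaling out container instances when CPU or memory reservation is too high, can be sketched as a CloudWatch alarm wired to an EC2 Auto Scaling policy. The cluster name, policy ARN, and thresholds below are hypothetical; the AWS/ECS namespace and CPUReservation metric are the documented cluster metrics.

```python
# CloudWatch alarm definition: fires when the ECS cluster's reserved CPU
# stays above 75% for three minutes, triggering a scale-out policy on the
# cluster's Auto Scaling group. Identifiers below are illustrative.
alarm = {
    "AlarmName": "ecs-cluster-cpu-reservation-high",
    "Namespace": "AWS/ECS",
    "MetricName": "CPUReservation",
    "Dimensions": [{"Name": "ClusterName", "Value": "example-web-cluster"}],
    "Statistic": "Average",
    "Period": 60,
    "EvaluationPeriods": 3,
    "Threshold": 75.0,
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": [
        # hypothetical scale-out policy ARN
        "arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:"
        "policy-id:autoScalingGroupName/ecs-asg:policyName/scale-out"
    ],
}
# With boto3: boto3.client("cloudwatch").put_metric_alarm(**alarm)
```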

Question 1007

Exam Question

A team has an application that detects new objects being uploaded into an Amazon S3 bucket. The uploads trigger an AWS Lambda function that writes object metadata into an Amazon DynamoDB table and an Amazon RDS for PostgreSQL database.

Which action should the team take to ensure high availability?

A. Enable Cross-Region Replication in the S3 bucket.
B. Create a Lambda function for each Availability Zone the application is deployed in.
C. Enable Multi-AZ on the RDS for PostgreSQL database.
D. Create a DynamoDB stream for the DynamoDB table.

Correct Answer

C. Enable Multi-AZ on the RDS for PostgreSQL database.

Explanation

To ensure high availability for the application, the team should enable Multi-AZ (Multi-Availability Zone) deployment for the Amazon RDS for PostgreSQL database.

Enabling Multi-AZ ensures that a standby replica of the database is automatically created in a different Availability Zone. This replica remains in sync with the primary database through synchronous replication. In the event of a failure or outage in the primary Availability Zone, Amazon RDS automatically fails over to the standby replica, minimizing downtime and providing high availability.
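Enabling Multi-AZ on an existing instance is a single ModifyDBInstance call. The sketch below builds its parameters; the instance identifier is hypothetical, and `ApplyImmediately=False` defers the change to the next maintenance window.

```python
# Parameters for RDS's modify_db_instance API to convert a single-AZ
# PostgreSQL instance to a Multi-AZ deployment. Identifier is illustrative.
multi_az_params = {
    "DBInstanceIdentifier": "example-postgres-db",
    "MultiAZ": True,
    "ApplyImmediately": False,  # apply during the next maintenance window
}
# With boto3: boto3.client("rds").modify_db_instance(**multi_az_params)
```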

Enabling Cross-Region Replication in the S3 bucket (option A) is not directly related to ensuring high availability for the application. It is primarily used for data replication and backup purposes across different AWS Regions.

Creating a Lambda function for each Availability Zone (option B) can help distribute the workload and provide fault tolerance. However, it may not directly address the high availability requirement for the database.

Creating a DynamoDB stream for the DynamoDB table (option D) allows capturing and processing the changes made to the table. While this can be useful for different purposes such as triggering downstream processes, it doesn’t specifically address high availability.

Therefore, option C is the most appropriate action to ensure high availability for the application by enabling Multi-AZ on the RDS for PostgreSQL database.

Question 1008

Exam Question

An application uses an Amazon RDS MySQL DB instance. The RDS database is becoming low on disk space. A solutions architect wants to increase the disk space without downtime.

Which solution meets these requirements with the LEAST amount of effort?

A. Enable storage auto scaling in RDS.
B. Increase the RDS database instance size.
C. Change the RDS database instance storage type to Provisioned IOPS.
D. Back up the RDS database, increase the storage capacity, restore the database and stop the previous instance.

Correct Answer

A. Enable storage auto scaling in RDS.

Explanation

Enabling storage auto scaling in Amazon RDS is the solution that requires the least amount of effort to increase the disk space without downtime.

When storage auto scaling is enabled, Amazon RDS monitors free storage space and automatically increases the allocated storage when the instance runs low, up to a configurable maximum, without manual intervention and without downtime. Note that storage auto scaling only grows the allocation; it does not shrink it. This allows you to meet the increasing storage demands of your application without any disruptions.
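Storage auto scaling is enabled by setting `MaxAllocatedStorage` (the upper limit, in GiB) on the instance. The sketch below builds the ModifyDBInstance parameters; the identifier and limit are hypothetical.

```python
# Parameters for RDS's modify_db_instance API. Setting MaxAllocatedStorage
# above the current allocation turns on storage auto scaling; RDS then grows
# storage automatically as free space runs low, up to this ceiling.
storage_params = {
    "DBInstanceIdentifier": "example-mysql-db",  # hypothetical instance
    "MaxAllocatedStorage": 1000,                 # hypothetical 1,000 GiB cap
    "ApplyImmediately": True,
}
# With boto3: boto3.client("rds").modify_db_instance(**storage_params)
```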

Option B, increasing the RDS database instance size, can provide more CPU, memory, and networking resources to the database instance but does not directly address the low disk space issue.

Option C, changing the RDS database instance storage type to Provisioned IOPS, allows you to provision a specific amount of IOPS (Input/Output Operations Per Second) for your database storage but does not automatically increase the disk space.

Option D, backing up the RDS database, increasing the storage capacity, restoring the database, and stopping the previous instance, involves manual steps, downtime, and additional effort.

Therefore, enabling storage auto scaling in RDS (option A) is the recommended solution as it provides automatic and seamless scaling of the storage capacity to address the low disk space issue without requiring any downtime or manual intervention.

Question 1009

Exam Question

A user owns a MySQL database that is accessed by various clients who expect, at most, 100 ms latency on requests. Once a record is stored in the database, it is rarely changed. Clients only access one record at a time. Database access has been increasing exponentially due to increased client demand. The resultant load will soon exceed the capacity of the most expensive hardware available for purchase. The user wants to migrate to AWS, and is willing to change database systems.

Which service would alleviate the database load issue and offer virtually unlimited scalability for the future?

A. Amazon RDS
B. Amazon DynamoDB
C. Amazon Redshift
D. AWS Data Pipeline

Correct Answer

B. Amazon DynamoDB

Explanation

To alleviate the database load issue and offer virtually unlimited scalability for the future, the user should consider using Amazon DynamoDB.

Amazon DynamoDB is a fully managed NoSQL database service provided by AWS. It is designed for high scalability, low latency, and can handle massive workloads with ease. DynamoDB automatically scales its throughput capacity to handle any amount of traffic, allowing it to accommodate increasing client demand without impacting performance.

Key features of Amazon DynamoDB that make it suitable for this scenario include:

  • Scalability: DynamoDB provides automatic scaling of throughput capacity based on demand. As the client demand increases, DynamoDB will automatically adjust its provisioned capacity to accommodate the increased workload, ensuring low latency and high performance.
  • Low latency: DynamoDB is designed for fast, single-digit millisecond latency, making it well-suited for applications that require quick response times.
  • NoSQL data model: DynamoDB is a NoSQL database, which offers flexible schema design and supports key-value and document data models. This makes it suitable for scenarios where records are rarely changed and accessed individually.
  • Serverless option: DynamoDB offers a serverless mode called DynamoDB on-demand. With on-demand capacity mode, you pay only for the read and write requests your application makes, and DynamoDB automatically scales to handle the traffic without the need for capacity planning or management.
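The "one record at a time" access pattern described in the question maps directly onto DynamoDB's GetItem operation, sketched below. The table name and key attribute are hypothetical.

```python
# Parameters for DynamoDB's get_item API: retrieve exactly one item by its
# primary key, the access pattern DynamoDB scales best for. Names are
# illustrative placeholders.
get_item_params = {
    "TableName": "Records",
    "Key": {"record_id": {"S": "r-1001"}},
    "ConsistentRead": False,  # eventually consistent reads cost half as much
}
# With boto3: boto3.client("dynamodb").get_item(**get_item_params)
```

Since records are rarely changed after being written, eventually consistent reads are usually acceptable here and keep both latency and cost down.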

Amazon RDS (option A) is a managed relational database service and may have scalability limitations depending on the chosen database engine.

Amazon Redshift (option C) is a fully managed data warehousing service designed for analytics workloads and may not be the best fit for a transactional database with low latency requirements.

AWS Data Pipeline (option D) is an orchestration service for data processing and data migration, but it does not directly address the scalability and performance requirements of the database.

Therefore, in this scenario, Amazon DynamoDB (option B) would be the recommended service as it offers virtually unlimited scalability, low latency, and can handle the increasing client demand with ease.

Question 1010

Exam Question

A company hosts a website on premises and wants to migrate it to the AWS Cloud. The website exposes a single hostname to the internet but it routes its functions to different on-premises server groups based on the path of the URL. The server groups are scaled independently depending on the needs of the functions they support. The company has an AWS Direct Connect connection configured to its on-premises network.

What should a solutions architect do to provide path-based routing to send the traffic to the correct group of servers?

A. Route all traffic to an internet gateway. Configure pattern matching rules at the internet gateway to route traffic to the group of servers supporting that path.
B. Route all traffic to a Network Load Balancer (NLB) with target groups for each group of servers. Use pattern matching rules at the NLB to route traffic to the correct target group.
C. Route all traffic to an Application Load Balancer (ALB). Configure path-based routing at the ALB to route traffic to the correct target group for the servers supporting that path.
D. Use Amazon Route 53 as the DNS server. Configure Route 53 path-based alias records to route traffic to the correct Elastic Load Balancer for the group of servers supporting that path.

Correct Answer

C. Route all traffic to an Application Load Balancer (ALB). Configure path-based routing at the ALB to route traffic to the correct target group for the servers supporting that path.

Explanation

To provide path-based routing and send traffic to the correct group of servers, a solutions architect should use an Application Load Balancer (ALB) in AWS. ALB supports path-based routing, allowing you to route incoming requests based on the path of the URL.

Here’s how the solution would work:

  1. Deploy an ALB in AWS and configure it with a listener for the desired port (e.g., HTTP or HTTPS).
  2. Create target groups for each group of servers that support different functions. Each target group represents a specific server group that will handle requests for a particular path.
  3. Configure path-based routing rules at the ALB. Associate each path with the corresponding target group. For example, you can configure a path-based rule that routes requests with a specific path (e.g., /api) to the target group that handles API requests and another path-based rule that routes requests with a different path (e.g., /images) to the target group that handles image requests.
  4. Update the DNS configuration for the website’s hostname to point to the ALB’s DNS name.

By setting up path-based routing at the ALB, the traffic will be directed to the appropriate group of servers based on the path specified in the URL. This allows different server groups to be scaled independently based on their specific needs.
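The path-based rule from step 3 can be sketched as the parameters for Elastic Load Balancing's CreateRule API. The listener and target group ARNs, paths, and priority below are hypothetical.

```python
# Parameters for elbv2's create_rule API: requests whose path matches
# /api/* are forwarded to the API servers' target group. ARNs and the
# priority are illustrative placeholders.
create_rule_params = {
    "ListenerArn": (
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "listener/app/example-alb/1234567890abcdef/abcdef1234567890"
    ),
    "Priority": 10,
    "Conditions": [{"Field": "path-pattern", "Values": ["/api/*"]}],
    "Actions": [
        {
            "Type": "forward",
            "TargetGroupArn": (
                "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                "targetgroup/api-servers/0123456789abcdef"
            ),
        }
    ],
}
# With boto3: boto3.client("elbv2").create_rule(**create_rule_params)
```

A second rule with `Values=["/images/*"]` would forward to the image servers' target group. Because the server groups remain on premises, the target groups can use target type `ip` and register on-premises addresses reachable over the Direct Connect connection.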

The other options mentioned are not suitable for achieving path-based routing in this scenario:

  • Option A: Routing traffic through an internet gateway and configuring pattern matching rules at the gateway is not a native feature and would require custom implementation.
  • Option B: Network Load Balancer (NLB) does not support path-based routing, so it would not be suitable for this requirement.
  • Option D: While Amazon Route 53 can be used for DNS routing, it does not support path-based routing at the level required in this scenario. It is better suited for DNS-based routing and global traffic management.