The latest AWS Certified Solutions Architect – Associate (SAA-C03) practice exam questions and answers (Q&A) are available free to help you prepare for the AWS Certified Solutions Architect – Associate SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.
Table of Contents
- Question 811
- Exam Question
- Correct Answer
- Explanation
- Question 812
- Exam Question
- Correct Answer
- Explanation
- Question 813
- Exam Question
- Correct Answer
- Explanation
- Question 814
- Exam Question
- Correct Answer
- Explanation
- Reference
- Question 815
- Exam Question
- Correct Answer
- Explanation
- Reference
- Question 816
- Exam Question
- Correct Answer
- Explanation
- Question 817
- Exam Question
- Correct Answer
- Explanation
- Reference
- Question 818
- Exam Question
- Correct Answer
- Explanation
- Reference
- Question 819
- Exam Question
- Correct Answer
- Explanation
- Reference
- Question 820
- Exam Question
- Correct Answer
- Explanation
Question 811
Exam Question
A company is creating a prototype of an ecommerce website on AWS. The website consists of an Application Load Balancer, an Auto Scaling group of Amazon EC2 instances for web servers, and an Amazon RDS for MySQL DB instance that runs with the Single-AZ configuration. The website is slow to respond during searches of the product catalog. The product catalog is a group of tables in the MySQL database that the company does not update frequently. A solutions architect has determined that the CPU utilization on the DB instance is high when product catalog searches occur.
What should the solutions architect recommend to improve the performance of the website during searches of the product catalog?
A. Migrate the product catalog to an Amazon Redshift database. Use the COPY command to load the product catalog tables.
B. Implement an Amazon ElastiCache for Redis cluster to cache the product catalog. Use lazy loading to populate the cache.
C. Add an additional scaling policy to the Auto Scaling group to launch additional EC2 instances when database response is slow.
D. Turn on the Multi-AZ configuration for the DB instance. Configure the EC2 instances to throttle the product catalog queries that are sent to the database.
Correct Answer
B. Implement an Amazon ElastiCache for Redis cluster to cache the product catalog. Use lazy loading to populate the cache.
Explanation
By using Amazon ElastiCache for Redis, the frequently accessed data from the product catalog can be cached in-memory, reducing the need to query the MySQL database repeatedly. This will improve the response time and overall performance of the website during searches. Lazy loading ensures that data is only loaded into the cache when requested, avoiding unnecessary cache population.
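The lazy-loading pattern described above can be sketched as follows. This is a minimal illustration, not ElastiCache code: a plain dict stands in for the Redis cluster, and `query_database` is a placeholder for the MySQL catalog query. Against a real cluster you would use a Redis client library pointed at the ElastiCache endpoint instead of the dict.

```python
# In-memory stand-in for the ElastiCache for Redis cluster.
cache = {}

def query_database(product_id):
    # Placeholder for the expensive MySQL product-catalog query.
    return {"id": product_id, "name": f"product-{product_id}"}

def get_product(product_id):
    # Lazy loading: check the cache first, query the database only on a
    # miss, then populate the cache so later reads are served from memory.
    if product_id in cache:
        return cache[product_id]
    record = query_database(product_id)
    cache[product_id] = record
    return record
```

Because entries are written only on a cache miss, catalog rows that are never searched for never occupy cache memory, which is the "avoiding unnecessary cache population" benefit noted above.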
Migrating to Amazon Redshift (option A) is not recommended as it is a data warehousing solution and may not be suitable for a frequently accessed product catalog.
Adding additional EC2 instances to the Auto Scaling group (option C) may help with scaling the web servers but will not directly address the high CPU utilization on the DB instance.
Enabling Multi-AZ configuration for the DB instance (option D) provides high availability but does not directly address the high CPU utilization issue during product catalog searches. Throttling queries sent to the database can lead to performance degradation rather than improvement.
Question 812
Exam Question
A company has a legacy application that processes data in two parts. The second part of the process takes longer than the first, so the company has decided to rewrite the application as two microservices running on Amazon ECS that can scale independently.
How should a solutions architect integrate the microservices?
A. Implement code in microservice 1 to send data to an Amazon S3 bucket. Use S3 event notifications to invoke microservice 2.
B. Implement code in microservice 1 to publish data to an Amazon SNS topic. Implement code in microservice 2 to subscribe to this topic.
C. Implement code in microservice 1 to send data to Amazon Kinesis Data Firehose. Implement code in microservice 2 to read from Kinesis Data Firehose.
D. Implement code in microservice 1 to send data to an Amazon SQS queue. Implement code in microservice 2 to process messages from the queue.
Correct Answer
D. Implement code in microservice 1 to send data to an Amazon SQS queue. Implement code in microservice 2 to process messages from the queue.
Explanation
Using an Amazon SQS queue enables asynchronous communication between the microservices. Microservice 1 can send data to the queue without waiting for the processing to be completed by microservice 2. This allows for decoupling and scalability, as microservice 1 can continue processing new data without being blocked by the longer processing time of microservice 2. Microservice 2 can then retrieve messages from the queue and process them independently.
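The asynchronous hand-off can be sketched with Python's standard-library queue standing in for SQS. The function names are illustrative; with boto3, microservice 1 would call `send_message` on the SQS client, and microservice 2 would call `receive_message` followed by `delete_message` once processing succeeds.

```python
import json
import queue

# In-memory stand-in for the Amazon SQS queue.
q = queue.Queue()

def microservice_1(record):
    # Fast first stage: enqueue the result and return immediately,
    # never blocked by the slower second stage.
    q.put(json.dumps(record))

def microservice_2():
    # Slower second stage: pull messages at its own pace and process
    # them independently of the producer.
    body = q.get()
    result = json.loads(body)
    q.task_done()
    return result
```

Because the two stages only share the queue, each microservice's ECS service can scale on its own metrics (for example, queue depth for microservice 2).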
Option A, using Amazon S3 event notifications, is not the most suitable choice for integrating microservices in this scenario as it does not provide the necessary decoupling and asynchronous processing capability.
Option B, using Amazon SNS, is also not the best fit as it is more suited for fan-out scenarios where multiple subscribers receive notifications, whereas in this case, microservice 2 needs to consume the data directly.
Option C, using Kinesis Data Firehose, is designed for streaming data delivery to data lakes or analytics services and may not be the most appropriate choice for integrating these microservices.
Question 813
Exam Question
A company has a remote factory that has unreliable connectivity. The factory needs to gather and process machine data and sensor data so that it can sense products on its conveyor belts and initiate a robotic movement to direct the products to the right location. Predictable low-latency compute processing is essential for the on-premises control systems.
Which solution should the factory use to process the data?
A. Amazon CloudFront Lambda@Edge functions.
B. An Amazon EC2 instance that has enhanced networking enabled.
C. An Amazon EC2 instance that uses an AWS Global Accelerator.
D. An Amazon Elastic Block Store (Amazon EBS) volume on an AWS Snowball Edge cluster.
Correct Answer
D. An Amazon Elastic Block Store (Amazon EBS) volume on an AWS Snowball Edge cluster.
Explanation
The factory should use an Amazon Elastic Block Store (Amazon EBS) volume on an AWS Snowball Edge cluster to process the data.
Given the unreliable connectivity at the remote factory, using on-premises processing capabilities provided by an AWS Snowball Edge cluster with an attached Amazon EBS volume would be the most suitable solution. This setup allows the factory to gather and process machine data and sensor data locally, ensuring predictable low-latency compute processing for the on-premises control systems.
Option A, Amazon CloudFront Lambda@Edge functions, is primarily used for content delivery network (CDN) edge computing and would not be the best fit for processing machine data and sensor data in a remote factory.
Option B, an Amazon EC2 instance with enhanced networking, would still rely on connectivity to the AWS cloud and may not provide the desired low-latency processing required for the on-premises control systems.
Option C, an Amazon EC2 instance using AWS Global Accelerator, improves network performance for applications deployed across multiple AWS Regions but does not address the need for on-premises processing in an unreliable connectivity scenario.
Question 814
Exam Question
A company is migrating from an on-premises infrastructure to the AWS Cloud. One of the company’s applications stores files on a Windows file server farm that uses Distributed File System Replication (DFSR) to keep data in sync. A solutions architect needs to replace the file server farm.
Which service should the solutions architect use?
A. Amazon EFS.
B. Amazon FSx.
C. Amazon S3.
D. AWS Storage Gateway.
Correct Answer
B. Amazon FSx.
Explanation
Migrating existing files to Amazon FSx for Windows File Server using AWS DataSync
We recommend using AWS DataSync to migrate existing files to Amazon FSx for Windows File Server file systems. DataSync is a data transfer service that simplifies, automates, and accelerates moving and replicating data between on-premises storage systems and AWS storage services over the internet or AWS Direct Connect. DataSync can transfer your file system data and metadata, such as ownership, time stamps, and access permissions.
Amazon FSx for Windows File Server is a fully managed Windows file system that is compatible with the Microsoft Distributed File System (DFS) and DFS Replication (DFSR). It is designed to provide a scalable and highly available file storage solution in the AWS Cloud. By using Amazon FSx, the company can replace the on-premises file server farm while maintaining compatibility with DFSR for data synchronization.
Option A, Amazon EFS (Elastic File System), is a fully managed file storage service, but it does not have built-in support for DFSR. It is not the recommended choice for replacing a DFSR-based file server farm.
Option C, Amazon S3 (Simple Storage Service), is an object storage service and is not suitable for directly replacing a Windows file server farm that relies on DFSR.
Option D, AWS Storage Gateway, is a hybrid cloud storage service that enables on-premises applications to seamlessly use AWS storage services. While it provides integration between on-premises and cloud storage, it does not directly replace a file server farm or support DFSR.
Reference
AWS > Documentation > Amazon FSx > Windows User Guide > Migrating existing files to FSx for Windows File Server using AWS DataSync
Question 815
Exam Question
A company wants to relocate its on-premises MySQL database to AWS. The database accepts regular imports from a client-facing application, which causes a high volume of write operations. The company is concerned that the amount of traffic might be causing performance issues within the application.
How should a solutions architect design the architecture on AWS?
A. Provision an Amazon RDS for MySQL DB instance with Provisioned IOPS SSD storage. Monitor write operation metrics by using Amazon CloudWatch. Adjust the provisioned IOPS if necessary.
B. Provision an Amazon RDS for MySQL DB instance with General Purpose SSD storage. Place an Amazon ElastiCache cluster in front of the DB instance. Configure the application to query ElastiCache instead.
C. Provision an Amazon DocumentDB (with MongoDB compatibility) instance with a memory optimized instance type. Monitor Amazon CloudWatch for performance-related issues. Change the instance class if necessary.
D. Provision an Amazon Elastic File System (Amazon EFS) file system in General Purpose performance mode. Monitor Amazon CloudWatch for IOPS bottlenecks. Change to Provisioned Throughput performance mode if necessary.
Correct Answer
A. Provision an Amazon RDS for MySQL DB instance with Provisioned IOPS SSD storage. Monitor write operation metrics by using Amazon CloudWatch. Adjust the provisioned IOPS if necessary.
Explanation
The solutions architect should recommend option A: Provision an Amazon RDS for MySQL DB instance with Provisioned IOPS SSD storage and monitor write operation metrics using Amazon CloudWatch. Adjust the provisioned IOPS if necessary.
This option addresses the company’s concern about performance issues caused by a high volume of write operations. Provisioned IOPS (Input/Output Operations Per Second) allows you to allocate a specific amount of IOPS to your Amazon RDS for MySQL DB instance, ensuring consistent and predictable performance. By monitoring the write operation metrics using Amazon CloudWatch, you can identify if the provisioned IOPS is sufficient or if adjustments are needed.
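As a sketch, a monitoring script might query the WriteIOPS metric for the instance like this. The identifier `catalog-db` is a placeholder, and only the query construction is shown; the returned dict is what you would pass to boto3's CloudWatch client as `cloudwatch.get_metric_statistics(**params)`.

```python
from datetime import datetime, timedelta, timezone

def write_iops_query(db_instance_id, hours=1):
    # Build the parameters for a CloudWatch GetMetricStatistics call
    # covering the last `hours` of WriteIOPS for one RDS instance.
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/RDS",
        "MetricName": "WriteIOPS",
        "Dimensions": [
            {"Name": "DBInstanceIdentifier", "Value": db_instance_id}
        ],
        "StartTime": now - timedelta(hours=hours),
        "EndTime": now,
        "Period": 300,  # 5-minute datapoints
        "Statistics": ["Average", "Maximum"],
    }
```

Comparing the observed Maximum against the provisioned IOPS value tells you whether to raise (or lower) the allocation.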
Option B, using an Amazon RDS for MySQL DB instance with General Purpose SSD storage and placing an Amazon ElastiCache cluster in front of it, does not directly address the write operation concerns and introduces unnecessary complexity by adding a caching layer.
Option C, using Amazon DocumentDB (with MongoDB compatibility), is not a suitable choice for migrating an existing MySQL database.
Option D, provisioning an Amazon Elastic File System (Amazon EFS) file system, is not the appropriate solution for hosting a MySQL database and optimizing write operations. It is a scalable file storage service and not designed for running database systems.
Amazon DocumentDB supports the following instance classes:
- R6G—Latest generation of memory-optimized instances powered by Arm-based AWS Graviton2 processors that provide up to 30% better performance over R5 instances at 5% cheaper cost.
- R5—Memory-optimized instances that provide up to 100% better performance over R4 instances for the same instance cost.
- R4—Previous generation of memory-optimized instances.
- T4G—Latest-generation low cost burstable general-purpose instance type powered by Arm-based AWS Graviton2 processors that provides a baseline level of CPU performance, delivering up to 35% better price performance over T3 instances and ideal for running applications with moderate CPU usage that experience temporary spikes in usage.
- T3—Low cost burstable general-purpose instance type that provides a baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required.
Reference
AWS > Documentation > Amazon DocumentDB > Developer Guide > Managing Instance Classes
Question 816
Exam Question
A company wants to host a scalable web application on AWS. The application will be accessed by users from different geographic regions of the world.
Application users will be able to download and upload unique data up to gigabytes in size. The development team wants a cost-effective solution to minimize upload and download latency and maximize performance.
What should a solutions architect do to accomplish this?
A. Use Amazon S3 with Transfer Acceleration to host the application.
B. Use Amazon S3 with CacheControl headers to host the application.
C. Use Amazon EC2 with Auto Scaling and Amazon CloudFront to host the application.
D. Use Amazon EC2 with Auto Scaling and Amazon ElastiCache to host the application.
Correct Answer
C. Use Amazon EC2 with Auto Scaling and Amazon CloudFront to host the application.
Explanation
The solutions architect should recommend option C: Use Amazon EC2 with Auto Scaling and Amazon CloudFront to host the application.
By using Amazon EC2 instances with Auto Scaling, the application can scale dynamically based on demand, ensuring that it can handle varying levels of user traffic from different geographic regions. This helps maximize performance and minimize latency.
Additionally, using Amazon CloudFront as a content delivery network (CDN) in front of the EC2 instances can significantly improve upload and download latency. CloudFront caches and delivers content from edge locations located in various geographic regions, reducing the distance and network hops between the application users and the hosting infrastructure. This improves the overall performance and responsiveness of the application, delivering a better user experience.
Option A, using Amazon S3 with Transfer Acceleration, is not the optimal solution for hosting a scalable web application. While Transfer Acceleration improves upload and download speeds for large files, it is primarily designed for object storage and not suitable for hosting an entire web application.
Option B, using Amazon S3 with CacheControl headers, is also not the recommended approach for hosting a scalable web application. CacheControl headers control the caching behavior of objects in S3 but do not provide the necessary features for hosting and scaling a dynamic web application.
Option D, using Amazon EC2 with Auto Scaling and Amazon ElastiCache, is focused on scaling the compute resources and providing in-memory caching for the application’s data. While ElastiCache can enhance performance, it does not directly address the need for global availability and low latency for users in different geographic regions. Combining EC2 Auto Scaling with CloudFront (option C) provides a more comprehensive solution for the given requirements.
Question 817
Exam Question
A solutions architect is migrating a document management workload to AWS. The workload keeps 7 TiB of contract documents on a shared storage file system and tracks them on an external database. Most of the documents are stored and retrieved eventually for reference in the future. The application cannot be modified during the migration, and the storage solution must be highly available. Documents are retrieved and stored by web servers that run on Amazon EC2 instances in an Auto Scaling group. The Auto Scaling group can have up to 12 instances.
Which solution meets these requirements MOST cost-effectively?
A. Provision an enhanced networking optimized EC2 instance to serve as a shared NFS storage system.
B. Create an Amazon S3 bucket that uses the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Mount the S3 bucket to the EC2 instances in the Auto Scaling group.
C. Create an SFTP server endpoint by using AWS Transfer for SFTP and an Amazon S3 bucket. Configure the EC2 instances in the Auto Scaling group to connect to the SFTP server.
D. Create an Amazon Elastic File System (Amazon EFS) file system that uses the EFS Standard-Infrequent Access (EFS Standard-IA) storage class. Mount the file system to the EC2 instances in the Auto Scaling group.
Correct Answer
D. Create an Amazon Elastic File System (Amazon EFS) file system that uses the EFS Standard-Infrequent Access (EFS Standard-IA) storage class. Mount the file system to the EC2 instances in the Auto Scaling group.
Explanation
The most cost-effective solution that meets the given requirements is option D: Create an Amazon Elastic File System (Amazon EFS) file system that uses the EFS Standard-Infrequent Access (EFS Standard-IA) storage class. Mount the file system to the EC2 instances in the Auto Scaling group.
Amazon EFS provides a scalable, highly available shared file storage solution that all EC2 instances in the Auto Scaling group can mount concurrently, so the existing application can keep using a shared file system without modification. The EFS Standard-IA storage class is priced for files that are accessed infrequently, which matches a workload where most documents are stored once and retrieved only occasionally for future reference, making it the most cost-effective choice for the 7 TiB of contract documents.
Option A, provisioning an enhanced networking optimized EC2 instance as a shared NFS storage system, would require additional management and maintenance efforts. It may not be as cost-effective as using a managed file storage service like Amazon EFS.
Option B, using an Amazon S3 bucket with the S3 Standard-IA storage class, is not the most suitable choice for a shared file system use case. S3 is an object storage service, and mounting it to EC2 instances as a file system can introduce additional complexities and may not provide optimal performance for a large number of small files.
Option C, creating an SFTP server endpoint using AWS Transfer for SFTP and an Amazon S3 bucket, would require modifications to the existing application to use SFTP for storing and retrieving documents. Since the application cannot be modified during the migration, this option is not viable in this scenario.
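As a provisioning sketch, mounting the EFS file system on each EC2 instance (for example, from the Auto Scaling group's user data) might look like the following. The file system ID `fs-0123456789abcdef0` and the mount point are placeholders; the `efs` mount type is provided by the amazon-efs-utils package.

```shell
# Install the EFS mount helper (provides the "efs" mount type with TLS).
sudo yum install -y amazon-efs-utils
sudo mkdir -p /mnt/contracts

# Example /etc/fstab entry so the mount survives reboots:
# fs-0123456789abcdef0:/ /mnt/contracts efs _netdev,tls 0 0
sudo mount -t efs -o tls fs-0123456789abcdef0:/ /mnt/contracts
```

Because every instance mounts the same file system at the same path, the web servers see one shared document store, just as they did with the on-premises file system.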
AWS services
- Amazon CloudWatch Logs helps you centralize the logs from all your systems, applications, and AWS services so you can monitor them and archive them securely.
- AWS Identity and Access Management (IAM) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
- Amazon Simple Storage Service (Amazon S3) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data. This pattern uses Amazon S3 as the storage system for file transfers.
- AWS Transfer for SFTP helps you transfer files into and out of AWS storage services over the SFTP protocol.
- Amazon Virtual Private Cloud (Amazon VPC) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.
Reference
AWS > Documentation > AWS Prescriptive Guidance > Patterns > Migrate an on-premises SFTP server to AWS using AWS Transfer for SFTP
Question 818
Exam Question
A solutions architect is designing a high performance computing (HPC) workload on Amazon EC2. The EC2 instances need to communicate to each other frequently and require network performance with low latency and high throughput.
Which EC2 configuration meets these requirements?
A. Launch the EC2 instances in a cluster placement group in one Availability Zone.
B. Launch the EC2 instances in a spread placement group in one Availability Zone.
C. Launch the EC2 instances in an Auto Scaling group in two Regions and peer the VPCs.
D. Launch the EC2 instances in an Auto Scaling group spanning multiple Availability Zones.
Correct Answer
A. Launch the EC2 instances in a cluster placement group in one Availability Zone.
Explanation
Placement groups
When you launch a new EC2 instance, the EC2 service attempts to place the instance so that all of your instances are spread out across underlying hardware to minimize correlated failures. Depending on the type of workload, you can use placement groups to influence the placement of a group of interdependent instances to meet its needs.
The cluster strategy packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network performance necessary for the tightly coupled node-to-node communication that is typical of HPC applications.
The EC2 configuration that meets the requirements of low latency and high throughput for frequent communication between instances is option A: Launch the EC2 instances in a cluster placement group in one Availability Zone.
A cluster placement group allows for tightly packed instances within a single Availability Zone, providing low-latency and high-bandwidth communication between instances. This configuration is ideal for HPC workloads that require intensive inter-instance communication.
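As a sketch, the request parameters for this configuration might be built as follows. The group name, AMI ID, and instance type are placeholders; with boto3 you would pass these dicts to `ec2.create_placement_group(**group_params)` and `ec2.run_instances(**run_params)`.

```python
# Parameters for the cluster placement group.
group_params = {
    "GroupName": "hpc-cluster",
    "Strategy": "cluster",  # pack instances close together in one AZ
}

# Parameters for launching the HPC nodes into that group.
run_params = {
    "ImageId": "ami-0123456789abcdef0",  # placeholder AMI ID
    "InstanceType": "c5n.18xlarge",      # placeholder network-optimized type
    "MinCount": 4,
    "MaxCount": 4,
    "Placement": {"GroupName": group_params["GroupName"]},
}
```

Launching all nodes in a single request, as above, also improves the chance that EC2 can place them on nearby hardware.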
Option B, using a spread placement group in one Availability Zone, does not guarantee the instances will be physically close to each other, which may impact the latency and throughput requirements of the HPC workload.
Option C, launching the instances in an Auto Scaling group across multiple regions and peering the VPCs, is not necessary for achieving low latency and high throughput within a single HPC workload.
Option D, launching the instances in an Auto Scaling group spanning multiple Availability Zones, does not provide the same level of low-latency and high-bandwidth communication as a cluster placement group in a single Availability Zone. While it provides availability and fault tolerance, it may introduce additional network latency and limitations for inter-instance communication.
Reference
AWS > Documentation > Amazon EC2 > User Guide for Linux Instances > Placement groups
Question 819
Exam Question
A company has deployed a database in Amazon RDS for MySQL. Due to increased transactions, the database support team is reporting slow reads against the DB instance and recommends adding a read replica.
Which combination of actions should a solutions architect take before implementing this change? (Choose two.)
A. Enable binlog replication on the RDS primary node.
B. Choose a failover priority for the source DB instance.
C. Allow long-running transactions to complete on the source DB instance.
D. Create a global table and specify the AWS Regions where the table will be available.
E. Enable automatic backups on the source instance by setting the backup retention period to a value other than 0.
Correct Answer
A. Enable binlog replication on the RDS primary node.
E. Enable automatic backups on the source instance by setting the backup retention period to a value other than 0.
Explanation
When creating a read replica, there are a few things to consider. First, you must enable automatic backups on the source DB instance by setting the backup retention period to a value other than 0. This requirement also applies to a read replica that is the source DB instance for another read replica. An active, long-running transaction can slow the process of creating the read replica, so we recommend that you wait for long-running transactions to complete before creating one. If you create multiple read replicas in parallel from the same source DB instance, Amazon RDS takes only one snapshot at the start of the first create action.
The two actions that should be taken before implementing the addition of a read replica for the Amazon RDS for MySQL database are:
A. Enable binlog replication on the RDS primary node.
E. Enable automatic backups on the source instance by setting the backup retention period to a value other than 0.
Enabling binlog replication on the RDS primary node (option A) is necessary for replication to work between the primary instance and the read replica. Binlog replication allows changes made on the primary instance to be replicated to the read replica, ensuring data consistency.
Enabling automatic backups on the source instance (option E) is important to ensure data durability and recovery capabilities. By setting a backup retention period greater than 0, regular automated backups are taken, providing a point-in-time restore option in case of any issues during or after the addition of the read replica.
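The two prerequisite steps can be sketched as request parameters. The instance identifiers are placeholders; with boto3 you would pass these dicts to `rds.modify_db_instance(**backup_params)` and then `rds.create_db_instance_read_replica(**replica_params)`.

```python
# Step 1: turn on automated backups on the source instance.
# A retention period greater than 0 is required before a read
# replica can be created.
backup_params = {
    "DBInstanceIdentifier": "catalog-db",  # placeholder identifier
    "BackupRetentionPeriod": 7,            # days; must be > 0
    "ApplyImmediately": True,
}

# Step 2: create the read replica from that source instance.
replica_params = {
    "DBInstanceIdentifier": "catalog-db-replica-1",
    "SourceDBInstanceIdentifier": "catalog-db",
}
```

Once the replica is available, read traffic (such as reporting queries) can be pointed at the replica's endpoint to relieve the primary.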
Options B, C, and D are not directly related to the implementation of a read replica:
Option B, choosing a failover priority for the source DB instance, is applicable when configuring Multi-AZ failover but is not directly related to the addition of a read replica.
Option C, allowing long-running transactions to complete on the source DB instance, is a general consideration but does not specifically relate to implementing a read replica.
Option D, creating a global table and specifying AWS Regions, is not directly related to adding a read replica. Global tables are used for multi-region replication and data distribution, which is a different concept from read replicas in a single region.
Reference
AWS > Documentation > Amazon RDS > User Guide > Working with DB instance read replicas
Question 820
Exam Question
A solutions architect is designing a solution where users will be directed to a backup static error page if the primary website is unavailable. The primary website’s DNS records are hosted in Amazon Route 53 where their domain is pointing to an Application Load Balancer (ALB).
Which configuration should the solutions architect use to meet the company’s needs while minimizing changes and infrastructure overhead?
A. Point a Route 53 alias record to an Amazon CloudFront distribution with the ALB as one of its origins. Then, create custom error pages for the distribution.
B. Set up a Route 53 active-passive failover configuration. Direct traffic to a static error page hosted within an Amazon S3 bucket when Route 53 health checks determine that the ALB endpoint is unhealthy.
C. Update the Route 53 record to use a latency-based routing policy. Add the backup static error page hosted within an Amazon S3 bucket to the record so the traffic is sent to the most responsive endpoints.
D. Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance hosting a static error page as endpoints. Route 53 will only send requests to the instance if the health checks fail for the ALB.
Correct Answer
B. Set up a Route 53 active-passive failover configuration. Direct traffic to a static error page hosted within an Amazon S3 bucket when Route 53 health checks determine that the ALB endpoint is unhealthy.
Explanation
The configuration that meets the requirements of directing users to a backup static error page while minimizing changes and infrastructure overhead is option B: Set up a Route 53 active-passive failover configuration. Direct traffic to a static error page hosted within an Amazon S3 bucket when Route 53 health checks determine that the ALB endpoint is unhealthy.
In an active-passive failover configuration, Route 53 monitors the health of the primary endpoint (ALB) using health checks. If the ALB becomes unhealthy, Route 53 automatically directs traffic to the backup endpoint, which in this case is a static error page hosted within an Amazon S3 bucket. This setup ensures that users are directed to the backup error page only when the primary website is unavailable, minimizing changes and infrastructure overhead.
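As a sketch, the two failover record sets might look like the following. Every identifier here (domain, hosted zone IDs, ALB DNS name, health check ID) is a placeholder; each dict is one record set you would submit via the Route 53 `ChangeResourceRecordSets` API (boto3's `route53.change_resource_record_sets`).

```python
# PRIMARY record: alias to the ALB, with health evaluation so Route 53
# fails over when the ALB endpoint is unhealthy.
primary = {
    "Name": "www.example.com.",
    "Type": "A",
    "SetIdentifier": "primary",
    "Failover": "PRIMARY",
    "AliasTarget": {
        "HostedZoneId": "Z0PLACEHOLDERALB",  # ALB's hosted zone ID
        "DNSName": "my-alb-123.us-east-1.elb.amazonaws.com.",
        "EvaluateTargetHealth": True,
    },
}

# SECONDARY record: alias to the S3 static-website endpoint hosting
# the error page; traffic arrives here only when the primary fails.
secondary = {
    "Name": "www.example.com.",
    "Type": "A",
    "SetIdentifier": "secondary",
    "Failover": "SECONDARY",
    "AliasTarget": {
        "HostedZoneId": "Z0PLACEHOLDERS3",  # S3 website endpoint zone ID
        "DNSName": "s3-website-us-east-1.amazonaws.com.",
        "EvaluateTargetHealth": False,
    },
}
```

Both records share the same name and type; the `Failover` role and `SetIdentifier` distinguish them, which is what makes this an active-passive pair rather than two competing answers.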
Option A, pointing a Route 53 alias record to an Amazon CloudFront distribution with the ALB as one of its origins, does not provide a straightforward solution for directing users to a backup error page in case of unavailability. It introduces additional complexity by involving CloudFront.
Option C, updating the Route 53 record to use a latency-based routing policy and adding the backup static error page hosted in an S3 bucket, is not the best fit for the scenario. Latency-based routing is primarily used for directing traffic to the most responsive endpoints based on the user’s location.
Option D, setting up a Route 53 active-active configuration with the ALB and an EC2 instance hosting a static error page, is not necessary for the given requirements. It introduces additional complexity and infrastructure (EC2 instance) that may not be needed. An active-passive failover configuration with an S3 bucket as the backup endpoint is a simpler and more suitable solution in this case.