The latest AWS Certified Solutions Architect – Associate (SAA-C03) practice exam questions and answers (Q&A) are available free of charge to help you prepare for and pass the AWS Certified Solutions Architect – Associate (SAA-C03) exam and earn the AWS Certified Solutions Architect – Associate certification.
Table of Contents
- Question 1291
- Exam Question
- Correct Answer
- Explanation
- Question 1292
- Exam Question
- Correct Answer
- Explanation
- Question 1293
- Exam Question
- Correct Answer
- Explanation
- Question 1294
- Exam Question
- Correct Answer
- Explanation
- Question 1295
- Exam Question
- Correct Answer
- Explanation
- Question 1296
- Exam Question
- Correct Answer
- Explanation
- Question 1297
- Exam Question
- Correct Answer
- Explanation
- Question 1298
- Exam Question
- Correct Answer
- Explanation
- Question 1299
- Exam Question
- Correct Answer
- Explanation
- Question 1300
- Exam Question
- Correct Answer
- Explanation
Question 1291
Exam Question
A company has an on-premises application that collects data and stores it on an on-premises NFS server. The company recently set up a 10 Gbps AWS Direct Connect connection. The company is running out of storage capacity on premises. The company needs to migrate the application data from on premises to the AWS Cloud while maintaining low-latency access to the data from the on-premises application.
What should a solutions architect do to meet these requirements?
A. Deploy AWS Storage Gateway for the application data, and use the file gateway to store the data in Amazon S3. Connect the on-premises application servers to the file gateway using NFS.
B. Attach an Amazon Elastic File System (Amazon EFS) file system to the NFS server, and copy the application data to the EFS file system. Then connect the on-premises application to Amazon EFS.
C. Configure AWS Storage Gateway as a volume gateway. Make the application data available to the on-premises application from the NFS server and with Amazon Elastic Block Store (Amazon EBS) snapshots.
D. Create an AWS DataSync agent with the NFS server as the source location and an Amazon Elastic File System (Amazon EFS) file system as the destination for application data transfer. Connect the on-premises application to the EFS file system.
Correct Answer
A. Deploy AWS Storage Gateway for the application data, and use the file gateway to store the data in Amazon S3. Connect the on-premises application servers to the file gateway using NFS.
Explanation
To meet the requirements of migrating the application data from on-premises to the AWS Cloud while maintaining low-latency access to the data, the most suitable option would be:
A. Deploy AWS Storage Gateway for the application data and use the file gateway to store the data in Amazon S3. Connect the on-premises application servers to the file gateway using NFS.
This option allows you to leverage the AWS Storage Gateway, specifically the file gateway, to seamlessly integrate your on-premises application with the AWS Cloud. By connecting the on-premises application servers to the file gateway using NFS (Network File System), you can maintain low-latency access to the data. The data can be stored in Amazon S3, which provides scalable storage capacity in the AWS Cloud.
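To make this concrete, here is a minimal boto3 sketch of creating an NFS file share on an already-activated file gateway. It is illustrative only: the gateway ARN, IAM role, bucket, and client CIDR are placeholder assumptions, and the gateway appliance itself would first be deployed and activated on premises.

```python
import boto3

storagegateway = boto3.client("storagegateway", region_name="us-east-1")

# Create an NFS file share on an already-activated file gateway.
# All ARNs and the client CIDR below are placeholders for this sketch.
response = storagegateway.create_nfs_file_share(
    ClientToken="unique-request-token-123",
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE",
    Role="arn:aws:iam::123456789012:role/StorageGatewayS3AccessRole",
    LocationARN="arn:aws:s3:::example-application-data-bucket",
    ClientList=["10.0.0.0/16"],  # on-premises CIDR allowed to mount the share
    DefaultStorageClass="S3_STANDARD",
)

print("File share ARN:", response["FileShareARN"])
# The on-premises application servers then mount this share over NFS; the
# gateway caches frequently accessed data locally to keep access latency low.
```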
Option B, attaching an Amazon Elastic File System (Amazon EFS) file system to the NFS server, would require modifying the existing NFS server and may not provide the desired low-latency access.
Option C, configuring AWS Storage Gateway as a volume gateway, would involve using Amazon EBS snapshots and may not offer the same level of flexibility and scalability as storing data in Amazon S3.
Option D, using AWS DataSync to transfer data from the NFS server to Amazon EFS, may introduce additional complexity and may not be necessary if the goal is to directly store the data in Amazon S3.
Therefore, option A is the most appropriate solution for this scenario.
Question 1292
Exam Question
Management has decided to deploy all AWS VPCs with IPv6 enabled. After some time, a solutions architect tries to launch a new instance and receives an error stating that there is not enough IP address space available in the subnet.
What should the solutions architect do to fix this?
A. Check to make sure that only IPv6 was used during the VPC creation.
B. Create a new IPv4 subnet with a larger range, and then launch the instance.
C. Create a new IPv6-only subnet with a large range, and then launch the instance.
D. Disable the IPv4 subnet and migrate all instances to IPv6 only. Once that is complete, launch the instance.
Correct Answer
C. Create a new IPv6-only subnet with a large range, and then launch the instance.
Explanation
In this scenario, the error suggests that there is a shortage of IP addresses in the subnet when attempting to launch a new instance with IPv6 enabled. To resolve this issue, the solutions architect should consider the following options:
A. Checking to make sure that only IPv6 was used during VPC creation is a good practice, but it may not directly address the IP address shortage issue. This option is not likely to fix the problem.
B. Creating a new IPv4 subnet with a larger range might temporarily address the IP address shortage for IPv4, but it doesn’t specifically address the issue related to IPv6. Therefore, this option is not the most appropriate solution.
C. Creating a new IPv6-only subnet with a larger range is the correct approach to resolving the IP address space shortage issue. By creating an IPv6-only subnet and ensuring that it has a larger range of available addresses, the solutions architect can allocate sufficient IP addresses for launching new instances with IPv6.
D. Disabling the IPv4 subnet and migrating all instances to IPv6-only is a drastic and unnecessary solution for addressing the IP address shortage in this case. It’s not practical to disable IPv4 altogether, especially if there are existing instances and applications that rely on IPv4. This option is not recommended.
Therefore, the most appropriate solution in this scenario is to choose option C: create a new IPv6-only subnet with a larger range and then launch the instance.
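For illustration, the following boto3 sketch creates an IPv6-only subnet from the VPC's associated IPv6 CIDR block. The VPC ID, Availability Zone, and /64 range are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an IPv6-only subnet carved out of the VPC's associated IPv6 CIDR block.
# The VPC ID, Availability Zone, and IPv6 /64 range below are placeholders.
response = ec2.create_subnet(
    VpcId="vpc-0abc1234def567890",
    Ipv6CidrBlock="2600:1f18:1234:5600::/64",  # must come from the VPC's IPv6 block
    Ipv6Native=True,  # IPv6-only subnet; no IPv4 CIDR block is assigned
    AvailabilityZone="us-east-1a",
)

subnet_id = response["Subnet"]["SubnetId"]
print("Created IPv6-only subnet:", subnet_id)
# Instances launched into this subnet receive only IPv6 addresses, avoiding the
# IPv4 address exhaustion that caused the original launch error.
```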
Question 1293
Exam Question
A company is planning on deploying a newly built application on AWS in a default VPC. The application will consist of a web layer and a database layer. The web servers were created in public subnets, and the MySQL database was created in private subnets. All subnets are created with the default network ACL settings, and the default security group in the VPC will be replaced with new custom security groups. The following are the key requirements:
- The web servers must be accessible only to users on an SSL connection.
- The database should be accessible to the web layer, which is created in a public subnet only.
- All traffic to and from the IP range 182.20.0.0/16 subnet should be blocked.
Which combination of steps meets these requirements? (Select two.)
A. Create a database server security group with inbound and outbound rules for MySQL port 3306 traffic to and from anywhere (0.0.0.0/0).
B. Create a database server security group with an inbound rule for MySQL port 3306 and specify the source as a web server security group.
C. Create a web server security group with an inbound allow rule for HTTPS port 443 traffic from anywhere (0.0.0.0/0) and an inbound deny rule for IP range 182.20.0.0/16.
D. Create a web server security group with an inbound rule for HTTPS port 443 traffic from anywhere (0.0.0.0/0). Create network ACL inbound and outbound deny rules for IP range 182.20.0.0/16.
E. Create a web server security group with inbound and outbound rules for HTTPS port 443 traffic to and from anywhere (0.0.0.0/0). Create a network ACL inbound deny rule for IP range 182.20.0.0/16.
Correct Answer
B. Create a database server security group with an inbound rule for MySQL port 3306 and specify the source as a web server security group.
C. Create a web server security group with an inbound allow rule for HTTPS port 443 traffic from anywhere (0.0.0.0/0) and an inbound deny rule for IP range 182.20.0.0/16.
Explanation
To meet the requirements mentioned in the scenario, the following combination of steps should be taken:
C. Create a web server security group with an inbound allow rule for HTTPS port 443 traffic from anywhere (0.0.0.0/0) and an inbound deny rule for IP range 182.20.0.0/16.
This step ensures that the web servers are accessible only through SSL on port 443, and it also blocks traffic from the specified IP range.
B. Create a database server security group with an inbound rule for MySQL port 3306 and specify the source as a web server security group.
This step allows the database server to be accessible to the web layer by allowing inbound traffic on MySQL port 3306, but only from the specified web server security group. This restricts access to the database layer from the authorized web layer.
Therefore, the correct combination of steps is C and B.
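As a hedged illustration, the boto3 sketch below applies the two security-group allow rules described above: HTTPS from anywhere on the web tier, and MySQL access on the database tier restricted to the web tier's security group. The security group IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

web_sg_id = "sg-0webEXAMPLE"  # placeholder: custom web server security group
db_sg_id = "sg-0dbEXAMPLE"    # placeholder: custom database security group

# Web tier: allow inbound HTTPS (port 443) from anywhere.
ec2.authorize_security_group_ingress(
    GroupId=web_sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from the internet"}],
    }],
)

# Database tier: allow inbound MySQL (port 3306) only from the web security group.
ec2.authorize_security_group_ingress(
    GroupId=db_sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": web_sg_id, "Description": "MySQL from web tier"}],
    }],
)
```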
Question 1294
Exam Question
A company has a hybrid application hosted on multiple on-premises servers with static IP addresses. There is already a VPN that provides connectivity between the VPC and the on-premises network. The company wants to distribute TCP traffic across the on-premises servers for internet users.
What should a solutions architect recommend to provide a highly available and scalable solution?
A. Launch an internet-facing Network Load Balancer (NLB) and register on-premises IP addresses with the NLB.
B. Launch an internet-facing Application Load Balancer (ALB) and register on-premises IP addresses with the ALB.
C. Launch an Amazon EC2 instance, attach an Elastic IP address, and distribute traffic to the on-premises servers.
D. Launch an Amazon EC2 instance with public IP addresses in an Auto Scaling group and distribute traffic to the on-premises servers.
Correct Answer
B. Launch an internet-facing Application Load Balancer (ALB) and register on-premises IP addresses with the ALB.
Explanation
To provide a highly available and scalable solution for distributing TCP traffic across the on-premises servers for internet users in a hybrid application scenario, the solutions architect should recommend the following option:
B. Launch an internet-facing Application Load Balancer (ALB) and register on-premises IP addresses with the ALB.
- An Application Load Balancer (ALB) is a highly scalable and managed load balancer service provided by AWS.
- By launching an internet-facing ALB, the company can distribute TCP traffic across the on-premises servers for internet users efficiently.
- ALB supports integration with on-premises resources through the use of AWS Direct Connect or VPN connectivity, which is already in place according to the scenario.
- By registering the on-premises IP addresses with the ALB, the traffic can be evenly distributed across the on-premises servers, allowing for load balancing and high availability.
- ALB can perform health checks on the on-premises servers to ensure they are available to handle traffic and automatically route traffic only to healthy servers.
- ALB also provides features like SSL termination, content-based routing, and request routing based on various conditions, which can enhance the functionality and scalability of the solution.
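As a hedged sketch, the boto3 calls below create an internet-facing load balancer and register the static on-premises IP addresses as IP targets. The subnet, VPC, and IP values are placeholder assumptions, and the on-premises targets must be reachable from the VPC over the existing VPN.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Internet-facing load balancer in two public subnets (placeholders).
elbv2.create_load_balancer(
    Name="hybrid-app-lb",
    Scheme="internet-facing",
    Type="application",
    Subnets=["subnet-0aaaEXAMPLE", "subnet-0bbbEXAMPLE"],
)

# Target group that uses IP addresses (not instance IDs) as targets.
tg = elbv2.create_target_group(
    Name="onprem-servers",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0abc1234def567890",
    TargetType="ip",
)

# Register the static on-premises IP addresses. AvailabilityZone="all" marks
# targets that live outside the VPC and are reached over the VPN connection.
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroups"][0]["TargetGroupArn"],
    Targets=[
        {"Id": "192.168.10.11", "AvailabilityZone": "all"},
        {"Id": "192.168.10.12", "AvailabilityZone": "all"},
    ],
)
```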
Options A, C, and D are not the recommended solutions:
A. Launching an internet-facing Network Load Balancer (NLB) and registering on-premises IP addresses with the NLB is not the best approach in this scenario. NLB is designed for routing traffic to instances within a VPC, and although it supports IP addresses as targets, it doesn’t directly integrate with on-premises resources. ALB is a more suitable choice for this hybrid application scenario.
C. Launching an Amazon EC2 instance, attaching an Elastic IP address, and distributing traffic to the on-premises servers is not an optimal solution for high availability and scalability. It introduces a single point of failure with the EC2 instance and does not provide the built-in load balancing capabilities and scalability of ALB.
D. Launching an Amazon EC2 instance with public IP addresses in an Auto Scaling group and distributing traffic to the on-premises servers is not an ideal solution. It does not provide the load balancing functionality and ease of management offered by ALB. Additionally, it requires managing the Auto Scaling group and manually configuring the traffic distribution to on-premises servers, which is less efficient than utilizing ALB’s features.
Question 1295
Exam Question
A company recently migrated a message processing system to AWS. The system receives messages into an ActiveMQ queue running on an Amazon EC2 instance. Messages are processed by a consumer application running on Amazon EC2. The consumer application processes the messages and writes results to a MySQL database running on Amazon EC2. The company wants this application to be highly available with low operational complexity.
Which architecture offers the HIGHEST availability?
A. Add a second ActiveMQ server to another Availability Zone. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.
B. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.
C. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Use Amazon RDS for MySQL with Multi-AZ enabled.
D. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use Amazon RDS for MySQL with Multi-AZ enabled.
Correct Answer
C. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Use Amazon RDS for MySQL with Multi-AZ enabled.
Explanation
Option C offers the highest availability for the given scenario.
Here’s the breakdown of each option:
A. This option adds a second ActiveMQ server to another Availability Zone, an additional consumer EC2 instance in another Availability Zone, and replicates the MySQL database to another Availability Zone. While it provides some level of redundancy, it still relies on self-managed EC2 instances and does not provide automated failover for the database. Availability could be impacted if any of the components fail.
B. This option uses Amazon MQ with active/standby brokers configured across two Availability Zones, adds an additional consumer EC2 instance in another Availability Zone, and replicates the MySQL database to another Availability Zone. Amazon MQ provides managed message broker service with high availability, and using active/standby brokers across Availability Zones improves reliability. However, the MySQL database is still self-managed on EC2, which introduces complexity and potential availability challenges.
C. This option also uses Amazon MQ with active/standby brokers configured across two Availability Zones, adds an additional consumer EC2 instance in another Availability Zone, but instead of using self-managed MySQL on EC2, it utilizes Amazon RDS for MySQL with Multi-AZ enabled. Amazon RDS provides managed database service with automated backups, replication, and failover. This setup ensures higher availability for both the messaging system and the database, reducing operational complexity.
D. This option is similar to option C, but it also adds an Auto Scaling group for the consumer EC2 instances across two Availability Zones. While Auto Scaling helps with scalability and fault tolerance, it does not provide the same level of managed service and automated failover as using Amazon RDS for MySQL with Multi-AZ enabled.
Therefore, option C, using Amazon MQ with active/standby brokers, an additional consumer EC2 instance, and Amazon RDS for MySQL with Multi-AZ enabled, offers the highest availability with low operational complexity.
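For illustration only, here is a hedged boto3 sketch of the two managed components in option C: an Amazon MQ broker deployed as an active/standby pair and an Amazon RDS for MySQL instance with Multi-AZ enabled. The broker name, engine version, instance sizes, and credentials are placeholder assumptions.

```python
import boto3

mq = boto3.client("mq", region_name="us-east-1")
rds = boto3.client("rds", region_name="us-east-1")

# Amazon MQ ActiveMQ broker with an active/standby pair across two AZs.
mq.create_broker(
    BrokerName="message-broker",
    EngineType="ACTIVEMQ",
    EngineVersion="5.17.6",  # placeholder engine version
    DeploymentMode="ACTIVE_STANDBY_MULTI_AZ",
    HostInstanceType="mq.m5.large",
    PubliclyAccessible=False,
    AutoMinorVersionUpgrade=True,
    Users=[{"Username": "mqadmin", "Password": "ChangeMe-ExamplePassword1"}],
)

# Amazon RDS for MySQL with Multi-AZ for automatic failover of the database tier.
rds.create_db_instance(
    DBInstanceIdentifier="results-db",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="ChangeMe-ExamplePassword1",
    MultiAZ=True,
)
```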
Question 1296
Exam Question
A company runs a production application on a fleet of Amazon EC2 instances. The application reads the data from an Amazon SQS queue and processes the messages in parallel. The message volume is unpredictable and often has intermittent traffic. This application should continually process messages without any downtime.
Which solution meets these requirements MOST cost-effectively?
A. Use Spot Instances exclusively to handle the maximum capacity required.
B. Use Reserved Instances exclusively to handle the maximum capacity required.
C. Use Reserved Instances for the baseline capacity and use Spot Instances to handle additional capacity.
D. Use Reserved Instances for the baseline capacity and use On-Demand Instances to handle additional capacity.
Correct Answer
C. Use Reserved Instances for the baseline capacity and use Spot Instances to handle additional capacity.
Explanation
To meet the requirements of continually processing messages without any downtime, while also being cost-effective, the most suitable solution would be:
C. Use Reserved Instances for the baseline capacity and use Spot Instances to handle additional capacity.
Here’s why:
- Reserved Instances (RIs) provide a significant discount compared to On-Demand Instances. By using RIs for the baseline capacity, you can take advantage of the lower hourly rates they offer. RIs are best suited for workloads that have a predictable baseline level of usage.
- Spot Instances are ideal for handling additional capacity since they offer the lowest cost among the EC2 instance purchasing options. Spot Instances are spare EC2 instances that are available at a significantly discounted rate compared to On-Demand Instances. The price is determined by supply and demand, and you can optionally set a maximum price you are willing to pay. While the availability of Spot Instances is not guaranteed, they can handle intermittent traffic effectively and help reduce costs during periods of lower message volume.
By combining Reserved Instances for the baseline capacity and leveraging Spot Instances for additional capacity, you can achieve cost savings while maintaining the ability to scale up when needed. This approach allows you to handle unpredictable and intermittent message volumes efficiently while minimizing costs.
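One hedged way to express this pattern is an Auto Scaling group with a mixed instances policy: a small On-Demand base (which the Reserved Instance purchase then discounts on the bill) with Spot Instances above it. The launch template name and subnet IDs below are placeholder assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Baseline capacity runs On-Demand (covered by the Reserved Instance purchase
# on the bill); all capacity above the baseline is launched as Spot Instances.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="sqs-consumer-asg",
    MinSize=2,
    MaxSize=20,
    VPCZoneIdentifier="subnet-0aaaEXAMPLE,subnet-0bbbEXAMPLE",  # placeholders
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "sqs-consumer-template",  # placeholder
                "Version": "$Latest",
            },
            "Overrides": [
                {"InstanceType": "m5.large"},
                {"InstanceType": "m5a.large"},
            ],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 2,                 # baseline covered by RIs
            "OnDemandPercentageAboveBaseCapacity": 0,  # everything else is Spot
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```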
Question 1297
Exam Question
A company is moving its on-premises Oracle database to Amazon Aurora PostgreSQL. The database has several applications that write to the same tables. The applications need to be migrated one by one, with a month in between each migration. Management has expressed concerns that the database has a high number of reads and writes. The data must be kept in sync across both databases throughout the migration.
What should a solutions architect recommend?
A. Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a change data capture (CDC) replication task and a table mapping to select all tables.
B. Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a full load plus change data capture (CDC) replication task and a table mapping to select all tables.
C. Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a memory-optimized replication instance. Create a full load plus change data capture (CDC) replication task and a table mapping to select all tables.
D. Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a compute optimized replication instance. Create a full load plus change data capture (CDC) replication task and a table mapping to select the largest tables.
Correct Answer
C. Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a memory-optimized replication instance. Create a full load plus change data capture (CDC) replication task and a table mapping to select all tables.
Explanation
Given the scenario described, the recommended solution would be:
C. Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a memory optimized replication instance. Create a full load plus change data capture (CDC) replication task and a table mapping to select all tables.
- AWS DMS is a service designed to migrate databases to AWS. It supports heterogeneous migrations, including Oracle to Amazon Aurora PostgreSQL.
- The AWS Schema Conversion Tool (SCT) can be used to assess the compatibility of the Oracle database with Aurora PostgreSQL and perform the necessary schema conversions.
- Since the applications need to be migrated one by one with a month in between each migration, a CDC replication task is appropriate. CDC captures the changes made to the database after the initial load and keeps the target database in sync.
- A memory-optimized replication instance is recommended for handling the high number of reads and writes to ensure optimal performance during the migration process.
- Selecting all tables for migration ensures that all the necessary data is migrated and kept in sync across both databases.
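A hedged sketch of the AWS DMS pieces follows: a table-mapping rule that selects every schema and table, and a full-load-plus-CDC replication task. The endpoint and replication instance ARNs are placeholders, and the source/target endpoints and memory-optimized replication instance are assumed to already exist.

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Selection rule that includes every table in every schema.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all-tables",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

# Full load plus ongoing change data capture keeps both databases in sync
# while the applications are migrated one at a time.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-postgresql",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE-EXAMPLE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET-EXAMPLE",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:MEMORY-OPT-EXAMPLE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
```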
Option A is incorrect because AWS DataSync is designed for file and object-level transfers, not for database migration. It is not suitable for this scenario.
Option B is incorrect because it also relies on AWS DataSync for the initial migration, which is not designed for database workloads; the initial load should be handled by AWS DMS as part of a full load plus change data capture (CDC) task, and the heterogeneous Oracle-to-PostgreSQL migration also requires the AWS Schema Conversion Tool for schema conversion.
Option D is incorrect because a compute-optimized replication instance may not provide the necessary resources to handle the high number of reads and writes efficiently. A memory-optimized replication instance is better suited for this scenario. Additionally, selecting only the largest tables may result in data inconsistencies as it does not ensure all necessary data is migrated.
Question 1298
Exam Question
A company hosts a training site on a fleet of Amazon EC2 instances. The company anticipates that its new course, which consists of dozens of training videos on the site, will be extremely popular when it is released in 1 week.
What should a solutions architect do to minimize the anticipated server load?
A. Store the videos in Amazon ElastiCache for Redis. Update the web servers to serve the videos using the ElastiCache API.
B. Store the videos in Amazon Elastic File System (Amazon EFS). Create a user data script for the web servers to mount the EFS volume.
C. Store the videos in an Amazon S3 bucket. Create an Amazon CloudFront distribution with an origin access identity (OAI) of that S3 bucket. Restrict Amazon S3 access to the OAI.
D. Store the videos in an Amazon S3 bucket. Create an AWS Storage Gateway file gateway to access the S3 bucket. Create a user data script for the web servers to mount the file gateway.
Correct Answer
C. Store the videos in an Amazon S3 bucket. Create an Amazon CloudFront distribution with an origin access identity (OAI) of that S3 bucket. Restrict Amazon S3 access to the OAI.
Explanation
To minimize the anticipated server load in hosting the training videos, the most suitable option would be:
C. Store the videos in an Amazon S3 bucket. Create an Amazon CloudFront distribution with an origin access identity (OAI) of that S3 bucket. Restrict Amazon S3 access to the OAI.
Storing the videos in an Amazon S3 bucket provides a highly scalable and durable storage solution. By creating an Amazon CloudFront distribution with an OAI, you can improve the performance and reduce the load on the web servers.
Amazon CloudFront is a content delivery network (CDN) that caches content at edge locations worldwide. When users request a video, CloudFront will serve it from the edge location closest to them, reducing latency and improving the user experience.
By restricting Amazon S3 access to the OAI, you ensure that the videos can only be accessed through CloudFront. This prevents direct access to the S3 bucket, enhancing security and reducing the load on the web servers.
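As a hedged sketch, the boto3 calls below create an origin access identity and apply a bucket policy that allows only that OAI to read objects. The bucket name and caller reference are placeholders, and the CloudFront distribution would reference the OAI in its S3 origin configuration.

```python
import json
import boto3

cloudfront = boto3.client("cloudfront")
s3 = boto3.client("s3")

bucket_name = "example-training-videos"  # placeholder bucket name

# Create the origin access identity (OAI) used by the CloudFront distribution.
oai = cloudfront.create_cloud_front_origin_access_identity(
    CloudFrontOriginAccessIdentityConfig={
        "CallerReference": "training-videos-oai-1",  # placeholder unique value
        "Comment": "OAI for the training videos bucket",
    }
)
oai_id = oai["CloudFrontOriginAccessIdentity"]["Id"]

# Bucket policy that lets only the OAI read objects, blocking direct S3 access.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket_name}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(policy))
```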
Options A, B, and D are not the best choices for minimizing server load in this scenario:
Option A suggests using Amazon ElastiCache for Redis, which is a caching service primarily used for accelerating database queries. It is not suitable for storing and serving video content.
Option B suggests using Amazon Elastic File System (Amazon EFS) to store the videos. While EFS provides scalable and shared file storage, it may not be the most efficient solution for serving video content.
Option D suggests using an AWS Storage Gateway file gateway to access the S3 bucket. This introduces additional complexity and overhead, which may not be necessary for serving videos.
Therefore, option C is the most appropriate choice for minimizing server load in this situation.
Question 1299
Exam Question
A company has developed a new video game as a web application. The application is in a three-tier architecture in a VPC with Amazon RDS for MySQL in the database layer. Several players will compete concurrently online. The game developers want to display a top-10 scoreboard in near-real time and offer the ability to stop and restore the game while preserving the current scores.
What should a solutions architect do to meet these requirements?
A. Set up an Amazon ElastiCache for Memcached cluster to cache the scores for the web application to display.
B. Set up an Amazon ElastiCache for Redis cluster to compute and cache the scores for the web application to display.
C. Place an Amazon CloudFront distribution in front of the web application to cache the scoreboard in a section of the application.
D. Create a read replica on Amazon RDS for MySQL to run queries to compute the scoreboard and serve the read traffic to the web application.
Correct Answer
B. Set up an Amazon ElastiCache for Redis cluster to compute and cache the scores for the web application to display.
Explanation
To meet the requirements of displaying a top-10 scoreboard in near-real time and preserving current scores while allowing the game to be stopped and restored, the most appropriate solution for a three-tier architecture with Amazon RDS for MySQL would be:
B. Set up an Amazon ElastiCache for Redis cluster to compute and cache the scores for the web application to display.
Using Amazon ElastiCache for Redis allows for fast and efficient caching of frequently accessed data, such as the scores in this case. By utilizing Redis, the scores can be computed and stored in memory, providing near-real-time access to the top-10 scoreboard. Additionally, Redis supports data persistence, which means the current scores can be preserved even when the game is stopped and restored.
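To make the pattern concrete, here is a hedged sketch using the redis-py client and a Redis sorted set, which keeps scores ordered so the top 10 can be read in near-real time. The ElastiCache endpoint and key names are placeholder assumptions.

```python
import redis

# Placeholder ElastiCache for Redis endpoint.
r = redis.Redis(
    host="scoreboard.example.cache.amazonaws.com",
    port=6379,
    decode_responses=True,
)

def record_score(player, score):
    """Store or update a player's score in a sorted set ordered by score."""
    r.zadd("game:scoreboard", {player: score})

def top_10():
    """Return the 10 highest scores, best first, as (player, score) pairs."""
    return r.zrevrange("game:scoreboard", 0, 9, withscores=True)

record_score("player-42", 1870)
record_score("player-7", 2210)
print(top_10())
```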
Option A, using Amazon ElastiCache for Memcached, is not the best choice in this scenario because Memcached does not support data persistence. Therefore, it may not be suitable for preserving the current scores when the game is stopped and restored.
Option C, placing an Amazon CloudFront distribution in front of the web application to cache the scoreboard, may improve performance by caching static content. However, it does not provide the necessary capabilities to compute and cache the scores as required.
Option D, creating a read replica on Amazon RDS for MySQL, is not the most efficient solution for this scenario. It would require running queries on the read replica to compute the scoreboard, which may impact performance and add unnecessary complexity. ElastiCache for Redis is a more suitable choice for this use case.
Therefore, the best option is B. Set up an Amazon ElastiCache for Redis cluster to compute and cache the scores for the web application to display.
Question 1300
Exam Question
A company hosts its static website content from an Amazon S3 bucket in the us-east-1 Region. Content is made available through an Amazon CloudFront origin pointing to that bucket. Cross-Region replication is set to create a second copy of the bucket in the ap-southeast-1 Region. Management wants a solution that provides greater availability for the website.
Which combination of actions should a solutions architect take to increase availability? (Choose two.)
A. Add both buckets to the CloudFront origin.
B. Configure failover routing in Amazon Route 53.
C. Create a record in Amazon Route 53 pointing to the replica bucket.
D. Create an additional CloudFront origin pointing to the ap-southeast-1 bucket.
E. Set up a CloudFront origin group with the us-east-1 bucket as the primary and the ap-southeast-1 bucket as the secondary.
Correct Answer
B. Configure failover routing in Amazon Route 53.
E. Set up a CloudFront origin group with the us-east-1 bucket as the primary and the ap-southeast-1 bucket as the secondary.
Explanation
To increase availability for the website hosted on Amazon S3 with CloudFront, the solutions architect should take the following two actions:
B. Configure failover routing in Amazon Route 53: By configuring failover routing in Route 53, the solutions architect can set up health checks for the primary bucket in the us-east-1 Region. In the event of a failure, Route 53 can automatically route traffic to the secondary bucket in the ap-southeast-1 Region, providing failover and increased availability.
E. Set up a CloudFront origin group with the us-east-1 bucket as the primary and the ap-southeast-1 bucket as the secondary: By setting up a CloudFront origin group, the solutions architect can configure CloudFront to automatically switch to the secondary bucket in the ap-southeast-1 Region if the primary bucket in the us-east-1 Region becomes unavailable. This ensures that the website content remains accessible even in the case of a failure.
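For illustration, here is a hedged sketch of the origin group fragment of a CloudFront DistributionConfig that pairs the two buckets; the origin IDs are placeholders and must match origins already attached to the distribution. The Route 53 part of the solution would add failover records backed by health checks.

```python
# Hedged sketch: the OriginGroups portion of a CloudFront DistributionConfig,
# as passed to cloudfront.update_distribution(). Origin IDs are placeholders
# and must match origins already defined on the distribution.
origin_groups = {
    "Quantity": 1,
    "Items": [{
        "Id": "static-site-origin-group",
        "FailoverCriteria": {
            # Fail over to the secondary origin on these HTTP status codes.
            "StatusCodes": {"Quantity": 3, "Items": [500, 502, 503]},
        },
        "Members": {
            "Quantity": 2,
            "Items": [
                {"OriginId": "s3-us-east-1-primary"},
                {"OriginId": "s3-ap-southeast-1-secondary"},
            ],
        },
    }],
}
```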
So the correct combination of actions to increase availability would be:
B. Configure failover routing in Amazon Route 53.
E. Set up a CloudFront origin group with the us-east-1 bucket as the primary and the ap-southeast-1 bucket as the secondary.