The latest AWS Certified Solutions Architect – Associate (SAA-C03) practice exam questions and answers (Q&A) are available free below to help you prepare for and pass the AWS Certified Solutions Architect – Associate (SAA-C03) exam and earn the AWS Certified Solutions Architect – Associate certification.
Table of Contents
- Question 1251
- Exam Question
- Correct Answer
- Explanation
- Question 1252
- Exam Question
- Correct Answer
- Explanation
- Question 1253
- Exam Question
- Correct Answer
- Explanation
- Question 1254
- Exam Question
- Correct Answer
- Explanation
- Question 1255
- Exam Question
- Correct Answer
- Explanation
- Question 1256
- Exam Question
- Correct Answer
- Explanation
- Question 1257
- Exam Question
- Correct Answer
- Explanation
- Question 1258
- Exam Question
- Correct Answer
- Explanation
- Question 1259
- Exam Question
- Correct Answer
- Explanation
- Question 1260
- Exam Question
- Correct Answer
- Explanation
Question 1251
Exam Question
A solutions architect needs to design a highly available application consisting of web, application, and database tiers. HTTPS content delivery should be as close to the edge as possible, with the least delivery time.
Which solution meets these requirements and is MOST secure?
A. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in public subnets. Configure Amazon CloudFront to deliver HTTPS content using the public ALB as the origin.
B. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in private subnets. Configure Amazon CloudFront to deliver HTTPS content using the EC2 instances as the origin.
C. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in private subnets. Configure Amazon CloudFront to deliver HTTPS content using the public ALB as the origin.
D. Configure a public Application Load Balancer with multiple redundant Amazon EC2 instances in public subnets. Configure Amazon CloudFront to deliver HTTPS content using the EC2 instances as the origin.
Correct Answer
C. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in private subnets. Configure Amazon CloudFront to deliver HTTPS content using the public ALB as the origin.
Explanation
To meet the requirements of high availability, secure content delivery, and least delivery time, the most suitable solution is to configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in private subnets and use Amazon CloudFront to deliver HTTPS content using the public ALB as the origin.
By placing the EC2 instances in private subnets, they are not directly accessible from the internet, providing a more secure architecture. The ALB acts as the entry point for incoming traffic and distributes requests to the EC2 instances in private subnets based on the configured load balancing algorithm.
Configuring Amazon CloudFront with the ALB as the origin allows for HTTPS content delivery as close to the edge as possible. CloudFront operates a global network of edge locations, enabling low-latency content delivery to users around the world. By caching content at the edge locations, CloudFront reduces the delivery time for subsequent requests.
Option A suggests using multiple redundant EC2 instances in public subnets and using the public ALB as the origin for CloudFront. This configuration exposes the EC2 instances directly to the internet, which may introduce potential security risks.
Option B suggests using the EC2 instances in private subnets directly as the origin for CloudFront. This configuration does not work well because CloudFront cannot reach instances that have no public endpoint, and it bypasses the load balancing and health checks that the ALB provides, so it does not meet the requirement of high availability for the application.
Option D suggests using EC2 instances in public subnets and using them as the origin for CloudFront. This configuration exposes the EC2 instances directly to the internet, which may compromise security.
Therefore, option C is the most suitable solution as it provides high availability, secure content delivery, and least delivery time by configuring a public ALB with EC2 instances in private subnets and using CloudFront with the ALB as the origin.
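For illustration, a minimal boto3 sketch of this configuration might look like the following; the ALB DNS name and other values are placeholders rather than details from the question, and a production distribution would also attach an ACM certificate and a managed cache policy.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

alb_dns_name = "my-public-alb-123456789.us-east-1.elb.amazonaws.com"  # hypothetical ALB DNS name

# Create a distribution that uses the public ALB as a custom origin and
# redirects all viewer requests to HTTPS at the edge.
response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # unique token for idempotency
        "Comment": "Web tier behind public ALB",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "public-alb-origin",
                    "DomainName": alb_dns_name,
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "https-only",  # encrypt CloudFront-to-ALB traffic
                    },
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "public-alb-origin",
            "ViewerProtocolPolicy": "redirect-to-https",  # HTTPS delivery at the edge
            "MinTTL": 0,
            "ForwardedValues": {
                "QueryString": True,
                "Cookies": {"Forward": "all"},
            },
        },
    }
)
print(response["Distribution"]["DomainName"])
```

Because the EC2 instances stay in private subnets, only the ALB and CloudFront are exposed, which is what makes this option the most secure of the four.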
Question 1252
Exam Question
A company wants to move a multi-tiered application from on premises to the AWS Cloud to improve the application’s performance. The application consists of application tiers that communicate with each other by way of RESTful services. Transactions are dropped when one tier becomes overloaded. A solutions architect must design a solution that resolves these issues and modernizes the application.
Which solution meets these requirements and is the MOST operationally efficient?
A. Use Amazon API Gateway and direct transactions to the AWS Lambda functions as the application layer. Use Amazon Simple Queue Service (Amazon SQS) as the communication layer between application services.
B. Use Amazon CloudWatch metrics to analyze the application performance history to determine the server’s peak utilization during the performance failures. Increase the size of the application server’s Amazon EC2 instances to meet the peak requirements.
C. Use Amazon Simple Notification Service (Amazon SNS) to handle the messaging between application servers running on Amazon EC2 in an Auto Scaling group. Use Amazon CloudWatch to monitor the SNS queue length and scale up and down as required.
D. Use Amazon Simple Queue Service (Amazon SQS) to handle the messaging between application servers running on Amazon EC2 in an Auto Scaling group. Use Amazon CloudWatch to monitor the SQS queue length and scale up when communication failures are detected.
Correct Answer
A. Use Amazon API Gateway and direct transactions to the AWS Lambda functions as the application layer. Use Amazon Simple Queue Service (Amazon SQS) as the communication layer between application services.
Explanation
The most operationally efficient solution for modernizing the multi-tiered application and resolving the performance and communication issues is to use Amazon API Gateway with AWS Lambda functions as the application layer and Amazon SQS as the communication layer between application services.
By leveraging Amazon API Gateway, you can easily create RESTful APIs and integrate them with AWS Lambda functions. AWS Lambda provides a highly scalable and managed compute service that automatically scales to handle incoming requests. This allows for efficient handling of the application tier’s workload and eliminates the risk of dropped transactions when one tier becomes overloaded.
Using Amazon SQS as the communication layer between application services helps decouple the communication and provides fault tolerance. It acts as a buffer to store messages when one tier is overwhelmed, ensuring that transactions are not dropped. Amazon SQS scales automatically to handle virtually any message volume, so there is no queue capacity to provision or manage.
Option B suggests analyzing the application performance history using Amazon CloudWatch metrics and increasing the size of the EC2 instances to meet peak requirements. While this approach may address performance issues, it does not provide a modernized solution and does not resolve the communication and transaction dropping issues.
Options C and D suggest using Amazon SNS and Amazon SQS for messaging between application servers. While both services are suitable for messaging, Amazon API Gateway with AWS Lambda offers a more efficient and modernized approach to handling the application layer, allowing for better scalability and ease of development.
Therefore, option A is the most operationally efficient solution as it leverages Amazon API Gateway with AWS Lambda functions for the application layer and uses Amazon SQS for communication between application services, ensuring efficient handling of the workload, resolving communication issues, and modernizing the application.
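As a rough sketch of the decoupling pattern, the Lambda function behind API Gateway can place each transaction on an SQS queue instead of calling the next tier synchronously; the queue URL environment variable and payload fields below are hypothetical.

```python
import json
import os

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["ORDER_QUEUE_URL"]  # hypothetical environment variable set on the function


def handler(event, context):
    """API Gateway -> Lambda: accept the transaction and queue it for the next tier."""
    transaction = json.loads(event["body"])

    # SQS buffers the message if the downstream service is busy,
    # so the transaction is not dropped under load.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(transaction),
    )

    return {"statusCode": 202, "body": json.dumps({"status": "queued"})}
```

A worker (another Lambda function or service) then consumes the queue at its own pace, which is what removes the risk of dropped transactions when one tier becomes overloaded.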
Question 1253
Exam Question
A company uses on-premises servers to host its applications. The company is running out of storage capacity. The applications use both block storage and NFS storage. The company needs a high-performing solution that supports local caching without re-architecting its existing applications.
Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)
A. Mount Amazon S3 as a file system to the on-premises servers.
B. Deploy an AWS Storage Gateway file gateway to replace NFS storage.
C. Deploy AWS Snowball Edge to provision NFS mounts to on-premises servers.
D. Deploy an AWS Storage Gateway volume gateway to replace the block storage.
E. Deploy Amazon Elastic File System (Amazon EFS) volumes and mount them to on-premises servers.
Correct Answer
B. Deploy an AWS Storage Gateway file gateway to replace NFS storage.
D. Deploy an AWS Storage Gateway volume gateway to replace the block storage.
Explanation
AWS Storage Gateway provides hybrid cloud storage that integrates with existing on-premises applications and caches frequently accessed data locally for high performance, which matches the requirement for local caching without re-architecting the applications.
A file gateway (option B) exposes NFS (and SMB) file shares that are backed by Amazon S3, so it can replace the existing NFS storage without any application changes while keeping a local cache of recently used files.
A volume gateway (option D) in cached volumes mode presents iSCSI block volumes to the on-premises servers, stores the full data set in AWS, and retains frequently accessed blocks in a local cache, so it can replace the block storage without re-architecting the applications.
Option A is not suitable because Amazon S3 is object storage and cannot be mounted natively as a file system by the applications. Option C is incorrect because AWS Snowball Edge is intended for data transfer and edge computing jobs, not for permanently extending on-premises storage capacity. Option E is not appropriate because Amazon EFS is designed to be mounted from within a VPC (or over AWS Direct Connect or VPN) and does not provide local caching for on-premises servers.
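As an illustration of the file-share side, the boto3 sketch below creates an NFS file share on an already-activated file gateway; the gateway ARN, IAM role, and S3 bucket ARN are placeholders.

```python
import uuid

import boto3

storagegateway = boto3.client("storagegateway")

# Placeholders for an already-activated file gateway, an IAM role that the
# gateway assumes to access S3, and the destination bucket.
GATEWAY_ARN = "arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE"
ROLE_ARN = "arn:aws:iam::111122223333:role/StorageGatewayS3AccessRole"
BUCKET_ARN = "arn:aws:s3:::example-on-prem-nfs-data"

# Create an NFS file share backed by S3; the gateway keeps a local cache
# of recently used files for low-latency access on premises.
response = storagegateway.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),
    GatewayARN=GATEWAY_ARN,
    Role=ROLE_ARN,
    LocationARN=BUCKET_ARN,
)
print(response["FileShareARN"])
```

The on-premises servers then mount this share exactly as they mounted the old NFS export, so no application changes are needed.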
Question 1254
Exam Question
A company serves a multilingual website from a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). This architecture is currently running in the us-west-1 Region but is exhibiting high request latency for users located in other parts of the world. The website needs to serve requests quickly and efficiently regardless of a user’s location. However, the company does not want to recreate the existing architecture across multiple Regions.
How should a solutions architect accomplish this?
A. Replace the existing architecture with a website served from an Amazon S3 bucket. Configure an Amazon CloudFront distribution with the S3 bucket as the origin.
B. Configure an Amazon CloudFront distribution with the ALB as the origin. Set the cache behavior settings to only cache based on the Accept-Language request header.
C. Set up Amazon API Gateway with the ALB as an integration. Configure API Gateway to use an HTTP integration type. Set up an API Gateway stage to enable the API cache.
D. Launch an EC2 instance in each additional Region and configure NGINX to act as a cache server for that Region. Put all the instances plus the ALB behind an Amazon Route 53 record set with a geolocation routing policy.
Correct Answer
B. Configure an Amazon CloudFront distribution with the ALB as the origin. Set the cache behavior settings to only cache based on the Accept-Language request header.
Explanation
To serve requests quickly and efficiently regardless of user location without recreating the existing architecture across multiple Regions, a solutions architect should choose option B: Configure an Amazon CloudFront distribution with the ALB as the origin and set the cache behavior settings to only cache based on the Accept-Language request header.
By using Amazon CloudFront as a global content delivery network (CDN), the content will be cached at edge locations closer to users, reducing latency. The ALB will serve as the origin server for CloudFront, ensuring that requests are properly routed to the fleet of EC2 instances. By configuring the cache behavior to cache based on the Accept-Language header, CloudFront will cache and serve localized content to users based on their language preferences, further improving performance.
This solution leverages CloudFront’s global infrastructure and caching capabilities to deliver content quickly to users worldwide without the need to replicate the entire architecture in multiple Regions.
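One way to express the "cache on Accept-Language" behavior is a custom CloudFront cache policy that includes only that header in the cache key, sketched below with boto3; the policy name is hypothetical, and the returned policy ID would then be attached to the distribution's default cache behavior.

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Custom cache policy: cache objects per Accept-Language value so each
# language variant of a page is cached separately at the edge.
response = cloudfront.create_cache_policy(
    CachePolicyConfig={
        "Name": "cache-on-accept-language",  # hypothetical policy name
        "DefaultTTL": 86400,
        "MaxTTL": 31536000,
        "MinTTL": 0,
        "ParametersInCacheKeyAndForwardedToOrigin": {
            "EnableAcceptEncodingGzip": True,
            "HeadersConfig": {
                "HeaderBehavior": "whitelist",
                "Headers": {"Quantity": 1, "Items": ["Accept-Language"]},
            },
            "CookiesConfig": {"CookieBehavior": "none"},
            "QueryStringsConfig": {"QueryStringBehavior": "none"},
        },
    }
)
print(response["CachePolicy"]["Id"])  # reference this Id from the distribution's cache behavior
```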
Question 1255
Exam Question
A company recently started using Amazon Aurora as the data store for its global ecommerce application. When large reports are run, developers report that the ecommerce application is performing poorly. After reviewing metrics in Amazon CloudWatch, a solutions architect finds that the ReadIOPS and CPUUtilization metrics are spiking when monthly reports run.
What is the MOST cost-effective solution?
A. Migrate the monthly reporting to Amazon Redshift.
B. Migrate the monthly reporting to an Aurora Replica.
C. Migrate the Aurora database to a larger instance class.
D. Increase the Provisioned IOPS on the Aurora instance.
Correct Answer
B. Migrate the monthly reporting to an Aurora Replica.
Explanation
The MOST cost-effective solution to address the performance issue during monthly reports on the global ecommerce application using Amazon Aurora is option B: Migrate the monthly reporting to an Aurora Replica.
Aurora Replicas share the same underlying storage volume as the primary instance, so adding a replica does not require duplicating the data. Directing the monthly reporting queries to the replica, for example through the cluster's reader endpoint, isolates the heavy read workload from the primary instance, so the spikes in ReadIOPS and CPUUtilization no longer affect the ecommerce application.
Migrating the monthly reporting to Amazon Redshift (option A) could improve analytics performance, but it introduces a separate data warehouse, an ongoing data-loading pipeline, and additional cost. Migrating the database to a larger instance class (option C) means paying for extra capacity all month to handle a workload that runs only once a month. Increasing Provisioned IOPS (option D) is not applicable to Aurora, because Aurora storage scales I/O automatically and does not use Provisioned IOPS volumes.
Therefore, offloading the monthly reports to an Aurora Replica is the most cost-effective way to resolve the performance issue in this scenario.
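A minimal sketch of adding the replica with boto3 is shown below; the cluster identifier and instance class are assumptions, and the reporting jobs would then connect to the cluster's reader endpoint instead of the writer endpoint.

```python
import boto3

rds = boto3.client("rds")

# Add a reader instance to the existing Aurora cluster. Aurora Replicas
# share the cluster's storage volume, so no data copy is required.
rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-aurora-reporting-replica",  # hypothetical name
    DBClusterIdentifier="ecommerce-aurora-cluster",             # existing cluster (assumed)
    DBInstanceClass="db.r6g.large",                             # sized for the reporting workload
    Engine="aurora-mysql",
)

# The reader endpoint load-balances connections across the cluster's replicas;
# point the monthly reporting jobs at this endpoint.
cluster = rds.describe_db_clusters(DBClusterIdentifier="ecommerce-aurora-cluster")
print(cluster["DBClusters"][0]["ReaderEndpoint"])
```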
Question 1256
Exam Question
A solutions architect is designing the cloud architecture for a company that needs to host hundreds of machine learning models for its users. During startup, the models need to load up to 10 GB of data from Amazon S3 into memory, but they do not need disk access. Most of the models are used sporadically, but the users expect all of them to be highly available and accessible with low latency.
Which solution meets the requirements and is MOST cost-effective?
A. Deploy models as AWS Lambda functions behind an Amazon API Gateway for each model.
B. Deploy models as Amazon Elastic Container Service (Amazon ECS) services behind an Application Load Balancer for each model.
C. Deploy models as AWS Lambda functions behind a single Amazon API Gateway with path-based routing where one path corresponds to each model.
D. Deploy models as Amazon Elastic Container Service (Amazon ECS) services behind a single Application Load Balancer with path-based routing where one path corresponds to each model.
Correct Answer
C. Deploy models as AWS Lambda functions behind a single Amazon API Gateway with path-based routing where one path corresponds to each model.
Explanation
The solution that meets the requirements and is MOST cost-effective in this scenario is option C: Deploy models as AWS Lambda functions behind a single Amazon API Gateway with path-based routing where one path corresponds to each model.
By deploying models as AWS Lambda functions, the architecture can benefit from the serverless nature of Lambda, which automatically scales the functions based on the incoming request volume. This allows for efficient resource utilization and cost optimization, especially for sporadically used models.
Using a single Amazon API Gateway with path-based routing for each model simplifies the architecture and reduces operational overhead. Each model can have its own path, allowing users to access the models through a single API Gateway endpoint. The low latency requirement can be achieved by leveraging the automatic scaling capabilities of AWS Lambda and the global availability of API Gateway. In contrast, option A would create a separate API Gateway for every model, adding unnecessary cost and management overhead.
Deploying models as Amazon Elastic Container Service (Amazon ECS) services behind an Application Load Balancer (option B) would require managing and scaling the underlying container infrastructure, which may result in higher operational complexity and costs compared to serverless AWS Lambda functions.
While option D also utilizes path-based routing, deploying models as Amazon ECS services behind a single Application Load Balancer would introduce additional overhead in managing the container infrastructure and scaling. This option may not be as cost-effective and operationally efficient as using AWS Lambda functions.
Therefore, option C, deploying models as AWS Lambda functions behind a single Amazon API Gateway with path-based routing, is the recommended solution that meets the requirements and is the most cost-effective.
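The sketch below shows the in-memory loading pattern for one such Lambda function; the bucket, key, and prediction logic are placeholders, and the model bytes are read from S3 into memory during initialization so that warm invocations reuse them without any disk access.

```python
import json
import os

import boto3

s3 = boto3.client("s3")

MODEL_BUCKET = os.environ["MODEL_BUCKET"]  # hypothetical bucket holding model artifacts
MODEL_KEY = os.environ["MODEL_KEY"]        # hypothetical key for this model

# Load the model data from S3 into memory once, during function initialization.
# Warm invocations of the same execution environment reuse this object.
model_bytes = s3.get_object(Bucket=MODEL_BUCKET, Key=MODEL_KEY)["Body"].read()


def handler(event, context):
    """Invoked by API Gateway via path-based routing (one path per model)."""
    payload = json.loads(event["body"])
    # Placeholder for real inference logic that would use model_bytes and payload.
    prediction = {"input_size": len(payload), "model_size_bytes": len(model_bytes)}
    return {"statusCode": 200, "body": json.dumps(prediction)}
```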
Question 1257
Exam Question
An ecommerce company is experiencing an increase in user traffic. The company's store is deployed on Amazon EC2 instances as a two-tier application consisting of a web tier and a separate database tier. As traffic increases, the company notices that the architecture is causing significant delays in sending timely marketing and order confirmation emails to users. The company wants to reduce the time it spends resolving complex email delivery issues and minimize operational overhead.
What should a solutions architect do to meet these requirements?
A. Create a separate application tier using EC2 instances dedicated to email processing.
B. Configure the web instance to send email through Amazon Simple Email Service (Amazon SES).
C. Configure the web instance to send email through Amazon Simple Notification Service (Amazon SNS).
D. Create a separate application tier using EC2 instances dedicated to email processing. Place the instances in an Auto Scaling group.
Correct Answer
B. Configure the web instance to send email through Amazon Simple Email Service (Amazon SES).
Explanation
To meet the requirements of reducing time spent on resolving email delivery issues and minimizing operational overhead, a solutions architect should recommend option B: Configure the web instance to send email through Amazon Simple Email Service (Amazon SES).
Amazon SES is a fully managed email sending service that provides a scalable and reliable solution for sending emails. By configuring the web instance to send email through Amazon SES, the company can offload the complexities of email delivery management to the service, reducing the operational overhead and eliminating the need to handle complex email delivery issues.
Amazon SES integrates seamlessly with existing applications and offers features such as built-in email validation, reputation monitoring, bounce and complaint handling, and content filtering. It also provides robust email delivery capabilities, including the ability to send high volumes of email efficiently and handle deliverability challenges.
By using Amazon SES, the company can ensure timely delivery of marketing and order confirmation emails to users, even during periods of high user traffic. It simplifies the email delivery process, allowing the company to focus on other aspects of their ecommerce application.
Options A, C, and D are not the recommended solutions in this scenario. Creating a separate application tier dedicated to email processing (option A) would introduce additional complexity and operational overhead. Configuring the web instance to send email through Amazon SNS (option C) is not suitable for email delivery, as Amazon SNS is primarily designed for sending notifications rather than full-fledged email messages. Creating a separate application tier for email processing and placing the instances in an Auto Scaling group (option D) would add unnecessary complexity and infrastructure management overhead.
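A minimal boto3 sketch of sending an order confirmation through Amazon SES is shown below; the sender address, recipient, and message content are placeholders, and the sender must belong to an identity that has already been verified in SES.

```python
import boto3

ses = boto3.client("ses")

# Send an order confirmation email through Amazon SES. The "Source" address
# must belong to a verified SES identity (domain or email address).
ses.send_email(
    Source="orders@example.com",  # hypothetical verified sender
    Destination={"ToAddresses": ["customer@example.com"]},
    Message={
        "Subject": {"Data": "Your order has been confirmed"},
        "Body": {
            "Text": {"Data": "Thank you for your purchase. Your order is being processed."},
        },
    },
)
```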
Question 1258
Exam Question
A company has created a multi-tier application for its ecommerce website. The website uses an Application Load Balancer that resides in the public subnets, a web tier in the public subnets, and a MySQL cluster hosted on Amazon EC2 instances in the private subnets. The MySQL database needs to retrieve product catalog and pricing information that is hosted on the internet by a third-party provider. A solutions architect must devise a strategy that maximizes security without increasing operational overhead.
What should the solutions architect do to meet these requirements?
A. Deploy a NAT instance in the VPC. Route all the internet-based traffic through the NAT instance.
B. Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all internet-bound traffic to the NAT gateway.
C. Configure an internet gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the internet gateway.
D. Configure a virtual private gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the virtual private gateway.
Correct Answer
B. Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all internet-bound traffic to the NAT gateway.
Explanation
To maximize security without increasing operational overhead in the given scenario, the solutions architect should recommend option B: Deploy a NAT gateway in the public subnets and modify the private subnet route table to direct all internet-bound traffic to the NAT gateway.
A NAT gateway is a managed network address translation (NAT) service provided by AWS. It allows resources in private subnets to communicate with the internet while keeping them isolated and protected from direct internet access. By deploying a NAT gateway in the public subnets, the private subnets can leverage it for outbound internet connectivity.
Here’s how the recommended solution addresses the requirements:
- Security: By using a NAT gateway, the private instances do not have direct internet connectivity. Outbound traffic from the private subnets is routed through the NAT gateway, which resides in the public subnets. This helps to secure the private instances by limiting their exposure to the internet.
- Operational Overhead: Deploying a NAT gateway is a managed service provided by AWS, which reduces operational overhead. It eliminates the need to manage and maintain a NAT instance manually (as suggested in option A) and simplifies the configuration and management of internet-bound traffic.
- Internet-Bound Traffic: Modifying the private subnet route table to direct all internet-bound traffic to the NAT gateway ensures that the MySQL cluster in the private subnets can access the product catalog and pricing information hosted by the third-party provider on the internet.
Option C suggests configuring an internet gateway and modifying the private subnet route table, but this would provide direct internet access to the private instances, which may not be desirable from a security standpoint.
Option D suggests configuring a virtual private gateway, but this is typically used for site-to-site VPN connections rather than providing outbound internet access for the private instances.
Therefore, option B is the most suitable and secure choice in this scenario.
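The boto3 sketch below shows the two steps, creating the NAT gateway in a public subnet and adding the default route in the private subnets' route table; the subnet, Elastic IP allocation, and route table IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

PUBLIC_SUBNET_ID = "subnet-0public0example0"     # public subnet for the NAT gateway (placeholder)
EIP_ALLOCATION_ID = "eipalloc-0example0id0"      # Elastic IP allocation for the NAT gateway (placeholder)
PRIVATE_ROUTE_TABLE_ID = "rtb-0private0example"  # route table of the private subnets (placeholder)

# 1. Create the NAT gateway in the public subnet.
nat = ec2.create_nat_gateway(SubnetId=PUBLIC_SUBNET_ID, AllocationId=EIP_ALLOCATION_ID)
nat_gateway_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before routing traffic through it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gateway_id])

# 2. Send all internet-bound traffic from the private subnets to the NAT gateway.
ec2.create_route(
    RouteTableId=PRIVATE_ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gateway_id,
)
```

With this in place, the MySQL instances can reach the third-party provider on the internet for outbound requests while remaining unreachable from the internet themselves.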
Question 1259
Exam Question
A company wants to migrate its web application to AWS. The legacy web application consists of a web tier, an application tier, and a MySQL database. The re-architected application must consist of technologies that do not require the administration team to manage instances or clusters.
Which combination of services should a solutions architect include in the overall architecture? (Choose two.)
A. Amazon Aurora Serverless
B. Amazon EC2 Spot Instances
C. Amazon Elasticsearch Service (Amazon ES)
D. Amazon RDS for MySQL
E. AWS Fargate
Correct Answer
A. Amazon Aurora Serverless
E. AWS Fargate
Explanation
To meet the requirement of not requiring the administration team to manage instances or clusters, the solutions architect should include the following services in the overall architecture:
A. Amazon Aurora Serverless: Amazon Aurora Serverless is a fully managed database service that automatically scales capacity based on the application’s needs. It eliminates the need to provision and manage database instances or clusters manually. It provides the benefits of Aurora’s performance and availability while automatically handling the database scaling and administration tasks.
E. AWS Fargate: AWS Fargate is a serverless compute engine for containers that allows you to run containers without the need to manage the underlying infrastructure. With Fargate, you can focus on deploying and running your containerized applications without worrying about server or cluster management. It provides the flexibility and scalability of containers without the operational overhead.
By using Amazon Aurora Serverless for the MySQL database tier, the administration team doesn’t need to manage instances or clusters manually. Aurora Serverless automatically scales the database capacity up or down based on the application’s demand.
By using AWS Fargate for the web and application tiers, the administration team can deploy and run containers without managing the underlying infrastructure. Fargate takes care of the infrastructure provisioning, scaling, and patching, allowing the team to focus on deploying and managing the application itself.
Therefore, the combination of Amazon Aurora Serverless and AWS Fargate provides a serverless and managed solution for the web application while relieving the administration team from managing instances or clusters.
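As a rough sketch under assumed names, the two managed pieces could be provisioned with boto3 roughly as follows: an Aurora Serverless v2 cluster for the database tier and an ECS service on Fargate for the web/application containers. The cluster, task definition, subnets, and security group IDs are placeholders.

```python
import boto3

rds = boto3.client("rds")
ecs = boto3.client("ecs")

# Database tier: Aurora Serverless v2 cluster plus a serverless instance.
rds.create_db_cluster(
    DBClusterIdentifier="webapp-aurora-serverless",  # hypothetical name
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_SECRET",        # use AWS Secrets Manager in practice
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 8},
)
rds.create_db_instance(
    DBInstanceIdentifier="webapp-aurora-serverless-1",
    DBClusterIdentifier="webapp-aurora-serverless",
    DBInstanceClass="db.serverless",                 # serverless v2 instance class
    Engine="aurora-mysql",
)

# Web/application tier: ECS service on Fargate, so no instances or clusters to manage.
ecs.create_service(
    cluster="webapp-cluster",                        # existing ECS cluster (assumed)
    serviceName="webapp-service",
    taskDefinition="webapp-task:1",                  # registered task definition (assumed)
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0aaa0example", "subnet-0bbb0example"],
            "securityGroups": ["sg-0example0id"],
            "assignPublicIp": "DISABLED",
        }
    },
)
```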
Question 1260
Exam Question
A company is backing up on-premises databases to local file server shares using the SMB protocol. The company requires immediate access to 1 week of backup files to meet recovery objectives. Recovery after a week is less likely to occur, and the company can tolerate a delay in accessing those older backup files.
What should a solutions architect do to meet these requirements with the LEAST operational effort?
A. Deploy Amazon FSx for Windows File Server to create a file system with exposed file shares with sufficient storage to hold all the desired backups.
B. Deploy an AWS Storage Gateway file gateway with sufficient storage to hold 1 week of backups. Point the backups to SMB shares from the file gateway.
C. Deploy Amazon Elastic File System (Amazon EFS) to create a file system with exposed NFS shares with sufficient storage to hold all the desired backups.
D. Continue to back up to the existing file shares. Deploy AWS Database Migration Service (AWS DMS) and define a copy task to copy backup files older than 1 week to Amazon S3, and delete the backup files from the local file store.
Correct Answer
B. Deploy an AWS Storage Gateway file gateway with sufficient storage to hold 1 week of backups. Point the backups to SMB shares from the file gateway.
Explanation
To meet the requirements with the least operational effort, the solutions architect should consider option B: deploy an AWS Storage Gateway file gateway with sufficient cache storage to hold 1 week of backups and point the backups to SMB shares from the file gateway.
Here's why option B is the most suitable:
Least operational effort: The file gateway exposes SMB shares, so the existing backup jobs only need to be repointed to the new shares. No changes to the backup software or workflow are required, and the gateway is a managed appliance.
Immediate access to recent backups: The file gateway keeps the most recently written data in its local cache, which satisfies the requirement for immediate access to 1 week of backup files.
Cost-effective long-term retention: All backup files are stored durably in Amazon S3 behind the gateway. Older backups can be retrieved through the gateway or transitioned to lower-cost storage classes with an S3 Lifecycle rule, and the company can tolerate the delay in accessing them.
Option A (Amazon FSx for Windows File Server) would require sizing and paying for enough file system storage to hold all of the backups and does not automatically tier older data to S3. Option C (Amazon EFS) exposes NFS shares rather than SMB, so the backup jobs would have to change. Option D is not valid because AWS Database Migration Service migrates databases between data stores; it does not copy arbitrary backup files from SMB file shares to Amazon S3.
Overall, option B lets the company keep its existing SMB-based backup process, provides immediate local access to 1 week of backups through the gateway cache, and stores all backups in Amazon S3 with minimal operational effort.
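As a sketch of option B with boto3, the SMB file share below is created on an already-activated file gateway; the gateway ARN, IAM role, and bucket ARN are placeholders, and the local cache sizing (enough for 1 week of backups) is configured on the gateway appliance itself.

```python
import uuid

import boto3

storagegateway = boto3.client("storagegateway")

GATEWAY_ARN = "arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE"  # placeholder
ROLE_ARN = "arn:aws:iam::111122223333:role/StorageGatewayBackupRole"               # placeholder
BUCKET_ARN = "arn:aws:s3:::example-database-backups"                                # placeholder

# Create an SMB file share backed by S3. The backup jobs keep writing over SMB,
# recent files stay in the gateway's local cache, and everything is stored in S3.
response = storagegateway.create_smb_file_share(
    ClientToken=str(uuid.uuid4()),
    GatewayARN=GATEWAY_ARN,
    Role=ROLE_ARN,
    LocationARN=BUCKET_ARN,
    DefaultStorageClass="S3_STANDARD_IA",  # lower-cost class for the older, rarely read backups
)
print(response["FileShareARN"])
```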