The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free of charge to help you prepare for and pass the AWS Certified Solutions Architect – Associate SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.
Table of Contents
- Exam Question 141
- Correct Answer
- Exam Question 142
- Correct Answer
- Exam Question 143
- Correct Answer
- Answer Description
- References
- Exam Question 144
- Correct Answer
- Answer Description
- Exam Question 145
- Correct Answer
- Answer Description
- References
- Exam Question 146
- Correct Answer
- Answer Description
- References
- Exam Question 147
- Correct Answer
- Answer Description
- References
- Exam Question 148
- Correct Answer
- Answer Description
- References
- Exam Question 149
- Correct Answer
- Answer Description
- References
- Exam Question 150
- Correct Answer
- Answer Description
- References
Exam Question 141
A company uses Application Load Balancers (ALBs) in different AWS Regions. The ALBs receive inconsistent traffic that can spike and drop throughout the year. The company’s networking team needs to allow the IP addresses of the ALBs in the on-premises firewall to enable connectivity.
Which solution is the MOST scalable with minimal configuration changes?
A. Write an AWS Lambda script to get the IP addresses of the ALBs in different Regions. Update the on-premises firewall’s rule to allow the IP addresses of the ALBs.
B. Migrate all ALBs in different Regions to Network Load Balancers (NLBs). Update the on-premises firewall’s rule to allow the Elastic IP addresses of all the NLBs.
C. Launch AWS Global Accelerator. Register the ALBs in different Regions to the accelerator. Update the on-premises firewall’s rule to allow static IP addresses associated with the accelerator.
D. Launch a Network Load Balancer (NLB) in one Region. Register the private IP addresses of the ALBs in different Regions with the NLB. Update the on-premises firewall’s rule to allow the Elastic IP address attached to the NLB.
Correct Answer
C. Launch AWS Global Accelerator. Register the ALBs in different Regions to the accelerator. Update the on-premises firewall’s rule to allow static IP addresses associated with the accelerator.
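As a rough sketch of how this could be set up with boto3 (the accelerator name, Regions, and ALB ARNs below are placeholders, not values from the question), the accelerator exposes static anycast IP addresses that the on-premises firewall can allow once:

```python
import boto3

# The Global Accelerator API is served from the us-west-2 Region.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Create the accelerator; its static IP addresses are what the firewall rule allows.
accelerator = ga.create_accelerator(Name="alb-front-door", IpAddressType="IPV4", Enabled=True)
accelerator_arn = accelerator["Accelerator"]["AcceleratorArn"]
print(accelerator["Accelerator"]["IpSets"])  # static IP addresses to allow on-premises

# Listen for HTTPS traffic.
listener = ga.create_listener(
    AcceleratorArn=accelerator_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# Register each Regional ALB (placeholder ARNs) in an endpoint group for its Region.
regional_albs = {
    "us-east-1": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/api/abc123",
    "eu-west-1": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/app/api/def456",
}
for region, alb_arn in regional_albs.items():
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": alb_arn, "Weight": 128}],
    )
```

ALBs can later be added or removed behind the accelerator without the static IP addresses changing, which is why no further firewall updates are needed.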
Exam Question 142
A company runs a high performance computing (HPC) workload on AWS. The workload requires low-latency network performance and high network throughput with tightly coupled node-to-node communication. The Amazon EC2 instances are properly sized for compute and storage capacity, and are launched using default options.
What should a solutions architect propose to improve the performance of the workload?
A. Choose a cluster placement group while launching Amazon EC2 instances.
B. Choose dedicated instance tenancy while launching Amazon EC2 instances.
C. Choose an Elastic Inference accelerator while launching Amazon EC2 instances.
D. Choose the required capacity reservation while launching Amazon EC2 instances.
Correct Answer
A. Choose a cluster placement group while launching Amazon EC2 instances.
Exam Question 143
A solutions architect is designing a high performance computing (HPC) workload on Amazon EC2. The EC2 instances need to communicate to each other frequently and require network performance with low latency and high throughput.
Which EC2 configuration meets these requirements?
A. Launch the EC2 instances in a cluster placement group in one Availability Zone.
B. Launch the EC2 instances in a spread placement group in one Availability Zone.
C. Launch the EC2 instances in an Auto Scaling group in two Regions and peer the VPCs.
D. Launch the EC2 instances in an Auto Scaling group spanning multiple Availability Zones.
Correct Answer
A. Launch the EC2 instances in a cluster placement group in one Availability Zone.
Answer Description
When you launch a new EC2 instance, the EC2 service attempts to place the instance in such a way that all of your instances are spread out across underlying hardware to minimize correlated failures. You can use placement groups to influence the placement of a group of interdependent instances to meet the needs of your workload.
Depending on the type of workload, you can create a placement group using one of the following placement strategies:
- Cluster – packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network performance necessary for tightly-coupled node-to-node communication that is typical of HPC applications.
- Partition – spreads your instances across logical partitions such that groups of instances in one partition do not share the underlying hardware with groups of instances in different partitions. This strategy is typically used by large distributed and replicated workloads, such as Hadoop, Cassandra, and Kafka.
- Spread – strictly places a small group of instances across distinct underlying hardware to reduce correlated failures.
For this scenario, a cluster placement group should be used as this is the best option for providing low-latency network performance for an HPC application.
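As a minimal sketch (the AMI ID, instance type, and group name are placeholders), the cluster placement group is created first and then referenced when the instances are launched:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a cluster placement group so instances are packed close together in one AZ.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch the HPC nodes into the placement group (placeholder AMI and instance type).
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},
)
```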
CORRECT: “Launch the EC2 instances in a cluster placement group in one Availability Zone” is the correct answer.
INCORRECT: “Launch the EC2 instances in a spread placement group in one Availability Zone” is incorrect as the spread placement group is used to spread instances across distinct underlying hardware.
INCORRECT: “Launch the EC2 instances in an Auto Scaling group in two Regions and peer the VPCs” is incorrect as this does not achieve the stated requirement of low-latency, high-throughput network performance between instances; traffic between peered VPCs in different Regions crosses the inter-Region network and adds latency.
INCORRECT: “Launch the EC2 instances in an Auto Scaling group spanning multiple Availability Zones” is incorrect as this does not reduce network latency or improve performance.
References
- Amazon Elastic Compute Cloud > User Guide for Linux Instances > Placement groups
Exam Question 144
A company’s application is running on Amazon EC2 instances in a single Region. In the event of a disaster, a solutions architect needs to ensure that the resources can also be deployed to a second Region.
Which combination of actions should the solutions architect take to accomplish this? (Choose two.)
A. Detach a volume on an EC2 instance and copy it to Amazon S3.
B. Launch a new EC2 instance from an Amazon Machine Image (AMI) in a new Region.
C. Launch a new EC2 instance in a new Region and copy a volume from Amazon S3 to the new instance.
D. Copy an Amazon Machine Image (AMI) of an EC2 instance and specify a different Region for the destination.
E. Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an EC2 instance in the destination Region using that EBS volume.
Correct Answer
B. Launch a new EC2 instance from an Amazon Machine Image (AMI) in a new Region.
D. Copy an Amazon Machine Image (AMI) of an EC2 instance and specify a different Region for the destination.
Answer Description
Cross Region EC2 AMI Copy
We know that you want to build applications that span AWS Regions and we’re working to provide you with the services and features needed to do so. We started out by launching the EBS Snapshot Copy feature, which gave you the ability to copy a snapshot from Region to Region with just a couple of clicks. We have also made a significant reduction (26% to 83%) in the cost of transferring data between AWS Regions, making it less expensive to operate in more than one AWS Region.
Today we are introducing a new feature: Amazon Machine Image (AMI) Copy. AMI Copy enables you to easily copy your Amazon Machine Images between AWS Regions. AMI Copy helps enable several key scenarios including:
- Simple and Consistent Multi-Region Deployment – You can copy an AMI from one Region to another, enabling you to easily launch consistent instances based on the same AMI into different Regions.
- Scalability – You can more easily design and build world-scale applications that meet the needs of your users, regardless of their location.
- Performance – You can increase performance by distributing your application and locating critical components of your application in closer proximity to your users. You can also take advantage of Region-specific features such as instance types or other AWS services.
- Even Higher Availability – You can design and deploy applications across AWS Regions to increase availability.
Once the new AMI is in the Available state, the copy is complete.
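A minimal sketch of the cross-Region AMI copy with boto3 (the AMI ID and Regions are placeholders); note that copy_image is called on a client in the destination Region:

```python
import boto3

# The copy is initiated from the destination (disaster recovery) Region.
ec2_dest = boto3.client("ec2", region_name="us-west-2")

copy = ec2_dest.copy_image(
    Name="web-app-dr-copy",
    SourceImageId="ami-0123456789abcdef0",  # AMI in the source Region (placeholder)
    SourceRegion="us-east-1",
)
print(copy["ImageId"])  # new AMI ID in us-west-2; launch instances from it once it is Available
```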
Exam Question 145
A manufacturing company wants to implement predictive maintenance on its machinery equipment. The company will install thousands of IoT sensors that will send data to AWS in real time. A solutions architect is tasked with implementing a solution that will receive events in an ordered manner for each machinery asset and ensure that data is saved for further processing at a later time.
Which solution would be MOST efficient?
A. Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.
B. Use Amazon Kinesis Data Streams for real-time events with a shard for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon EBS.
C. Use an Amazon SQS FIFO queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function for the SQS queue to save data to Amazon EFS.
D. Use an Amazon SQS standard queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function from the SQS queue to save data to Amazon S3.
Correct Answer
A. Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.
Answer Description
Amazon SQS Introduces FIFO Queues with Exactly-Once Processing and Lower Prices for Standard Queues
You can now use Amazon Simple Queue Service (SQS) for applications that require messages to be processed in a strict sequence and exactly once using First-in, First-out (FIFO) queues. FIFO queues are designed to ensure that the order in which messages are sent and received is strictly preserved and that each message is processed exactly once.
Amazon SQS is a reliable and highly-scalable managed message queue service for storing messages in transit between application components. FIFO queues complement the existing Amazon SQS standard queues, which offer high throughput, best-effort ordering, and at-least-once delivery. FIFO queues have essentially the same features as standard queues, but provide the added benefits of supporting ordering and exactly-once processing. FIFO queues provide additional features that help prevent unintentional duplicates from being sent by message producers or from being received by message consumers. Additionally, message groups allow multiple separate ordered message streams within the same queue.
Amazon Kinesis Data Streams collect and process data in real time. A Kinesis data stream is a set of shards. Each shard has a sequence of data records. Each data record has a sequence number that is assigned by Kinesis Data Streams. A shard is a uniquely identified sequence of data records in a stream.
A partition key is used to group data by shard within a stream. Kinesis Data Streams segregates the data records belonging to a stream into multiple shards. It uses the partition key that is associated with each data record to determine which shard a given data record belongs to.
For this scenario, the solutions architect can use a partition key for each equipment asset. This ensures that the records for a given asset are grouped into the same shard, and the shard preserves the order of its records. Amazon S3 is a valid destination for saving the data records.
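A minimal producer sketch (the stream name and asset ID are placeholders) showing the asset identifier used as the partition key, so every event for one asset lands, in order, on the same shard:

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def publish_sensor_event(asset_id: str, reading: dict) -> None:
    # Using the asset ID as the partition key keeps all records for that asset
    # on the same shard, which preserves their ordering.
    kinesis.put_record(
        StreamName="machinery-telemetry",  # placeholder stream name
        Data=json.dumps(reading).encode("utf-8"),
        PartitionKey=asset_id,
    )

publish_sensor_event("press-042", {"vibration_mm_s": 3.1, "temp_c": 74.5})
```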
CORRECT: “Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3” is the correct answer.
INCORRECT: “Use Amazon Kinesis Data Streams for real-time events with a shard for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon EBS” is incorrect as Kinesis Data Firehose cannot deliver data to Amazon EBS.
INCORRECT: “Use an Amazon SQS FIFO queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function for the SQS queue to save data to Amazon EFS” is incorrect as SQS is not the most efficient service for streaming real-time data.
INCORRECT: “Use an Amazon SQS standard queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function from the SQS queue to save data to Amazon S3” is incorrect as SQS is not the most efficient service for streaming real-time data, and a standard queue does not preserve ordering.
References
- Amazon Kinesis Data Streams > Developer Guide > Amazon Kinesis Data Streams Terminology and Concepts
Exam Question 146
A company currently operates a web application backed by an Amazon RDS MySQL database. It has automated backups that are run daily and are not encrypted. A security audit requires future backups to be encrypted and the unencrypted backups to be destroyed. The company will make at least one encrypted backup before destroying the old backups.
What should be done to enable encryption for future backups?
A. Enable default encryption for the Amazon S3 bucket where backups are stored.
B. Modify the backup section of the database configuration to toggle the Enable encryption check box.
C. Create a snapshot of the database. Copy it to an encrypted snapshot. Restore the database from the encrypted snapshot.
D. Enable an encrypted read replica on RDS for MySQL. Promote the encrypted read replica to primary. Remove the original database instance.
Correct Answer
C. Create a snapshot of the database. Copy it to an encrypted snapshot. Restore the database from the encrypted snapshot.
Answer Description
Because you can encrypt a copy of an unencrypted DB snapshot, you can effectively add encryption to an unencrypted DB instance. That is, you can create a snapshot of your DB instance, and then create an encrypted copy of that snapshot. You can then restore a DB instance from the encrypted snapshot, and thus you have an encrypted copy of your original DB instance.
DB instances that are encrypted can’t be modified to disable encryption.
You can’t have an encrypted read replica of an unencrypted DB instance or an unencrypted read replica of an encrypted DB instance.
Encrypted read replicas must be encrypted with the same key as the source DB instance when both are in the same AWS Region.
You can’t restore an unencrypted backup or snapshot to an encrypted DB instance.
To copy an encrypted snapshot from one AWS Region to another, you must specify the KMS key identifier of the destination AWS Region. This is because KMS encryption keys are specific to the AWS Region that they are created in.
Amazon RDS uses snapshots for backup. Snapshots are encrypted when created only if the database is encrypted, and you can only select encryption for the database when you first create it. In this case the database, and hence the snapshots, are unencrypted.
However, you can create an encrypted copy of a snapshot. Restoring from that encrypted copy creates a new DB instance that has encryption enabled. From that point on, encryption will be enabled for all of its snapshots.
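The snapshot, copy, and restore sequence could be scripted along these lines (the instance identifiers and KMS key are placeholders; each step must complete before the next starts):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# 1. Snapshot the existing unencrypted DB instance.
rds.create_db_snapshot(
    DBInstanceIdentifier="prod-mysql",
    DBSnapshotIdentifier="prod-mysql-unencrypted",
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="prod-mysql-unencrypted")

# 2. Copy the snapshot, enabling encryption on the copy.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="prod-mysql-unencrypted",
    TargetDBSnapshotIdentifier="prod-mysql-encrypted",
    KmsKeyId="alias/aws/rds",  # placeholder: default aws/rds key or a customer managed key
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="prod-mysql-encrypted")

# 3. Restore a new, encrypted DB instance from the encrypted snapshot.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="prod-mysql-v2",
    DBSnapshotIdentifier="prod-mysql-encrypted",
)
```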
CORRECT: “Create a snapshot of the database. Copy it to an encrypted snapshot. Restore the database from the encrypted snapshot” is the correct answer.
INCORRECT: “Enable an encrypted read replica on RDS for MySQL. Promote the encrypted read replica to primary. Remove the original database instance” is incorrect as you cannot create an encrypted read replica from an unencrypted master.
INCORRECT: “Modify the backup section of the database configuration to toggle the Enable encryption check box” is incorrect as you cannot add encryption for an existing database.
INCORRECT: “Enable default encryption for the Amazon S3 bucket where backups are stored” is incorrect because you do not have access to the S3 bucket in which snapshots are stored.
References
- Amazon Relational Database Service > User Guide > Encrypting Amazon RDS resources
Exam Question 147
An application runs on Amazon EC2 instances across multiple Availability Zones. The instances run in an Amazon EC2 Auto Scaling group behind an Application Load Balancer. The application performs best when the CPU utilization of the EC2 instances is at or near 40%.
What should a solutions architect do to maintain the desired performance across all instances in the group?
A. Use a simple scaling policy to dynamically scale the Auto Scaling group.
B. Use a target tracking policy to dynamically scale the Auto Scaling group.
C. Use an AWS Lambda function to update the desired Auto Scaling group capacity.
D. Use scheduled scaling actions to scale up and scale down the Auto Scaling group.
Correct Answer
B. Use a target tracking policy to dynamically scale the Auto Scaling group.
Answer Description
With target tracking scaling policies, you select a scaling metric and set a target value. Amazon EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value.
The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value. In addition to keeping the metric close to the target value, a target tracking scaling policy also adjusts to changes in the metric due to a changing load pattern.
For example, you can configure a target tracking scaling policy to keep the average aggregate CPU utilization of your Auto Scaling group at 40 percent, or to keep the request count per target of your Application Load Balancer target group at 1000.
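A minimal sketch of such a policy (the Auto Scaling group name is a placeholder) that keeps average CPU utilization at the 40 percent target:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",  # placeholder group name
    PolicyName="cpu-40-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 40.0,
    },
)
```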
CORRECT: “Use a target tracking policy to dynamically scale the Auto Scaling group” is the correct answer.
INCORRECT: “Use a simple scaling policy to dynamically scale the Auto Scaling group” is incorrect as target tracking is a better way to keep the aggregate CPU utilization at around 40%.
INCORRECT: “Use an AWS Lambda function to update the desired Auto Scaling group capacity” is incorrect as Auto Scaling can manage this automatically with a target tracking policy; a custom Lambda function adds unnecessary complexity.
INCORRECT: “Use scheduled scaling actions to scale up and scale down the Auto Scaling group” is incorrect as dynamic scaling is required to respond to changes in utilization.
References
- Amazon EC2 Auto Scaling > User Guide > Target tracking scaling policies for Amazon EC2 Auto Scaling
Exam Question 148
A company runs an internal browser-based application. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales up to 20 instances during work hours, but scales down to 2 instances overnight. Staff are complaining that the application is very slow when the day begins, although it runs well by mid-morning.
How should the scaling be changed to address the staff complaints and keep costs to a minimum?
A. Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens.
B. Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period.
C. Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period.
D. Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens.
Correct Answer
A. Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens.
Answer Description
Though dynamic scaling policies such as step scaling and target tracking could eventually add capacity, they are reactive: they only scale out after the metric has already risen, so staff would still experience slow performance at the start of the day. Because the morning demand is predictable, a scheduled action that sets the desired capacity to 20 shortly before the office opens pre-provisions the instances so they are ready when staff arrive.
Setting only the desired capacity (rather than the minimum and maximum capacity) allows the group to scale back down when demand drops, keeping costs to a minimum. Setting the minimum and maximum capacity to 20 would keep 20 instances running regardless of actual demand and would increase costs unnecessarily.
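A minimal sketch of the scheduled action (the group name and schedule are placeholders); the recurrence is a cron expression evaluated in UTC that fires shortly before the office opens on weekdays:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="internal-app-asg",        # placeholder group name
    ScheduledActionName="pre-warm-for-office-hours",
    Recurrence="30 7 * * MON-FRI",                  # 07:30 UTC on weekdays (adjust for local office hours)
    DesiredCapacity=20,                             # desired only; min/max unchanged so the group can still scale down
)
```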
References
- Amazon EC2 Auto Scaling > User Guide > Scheduled scaling for Amazon EC2 Auto Scaling
Exam Question 149
A company hosts a static website on-premises and wants to migrate the website to AWS. The website should load as quickly as possible for users around the world. The company also wants the most cost-effective solution.
What should a solutions architect do to accomplish this?
A. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Replicate the S3 bucket to multiple AWS Regions.
B. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Configure Amazon CloudFront with the S3 bucket as the origin.
C. Copy the website content to an Amazon EBS-backed Amazon EC2 instance running Apache HTTP Server. Configure Amazon Route 53 geolocation routing policies to select the closest origin.
D. Copy the website content to multiple Amazon EBS-backed Amazon EC2 instances running Apache HTTP Server in multiple AWS Regions. Configure Amazon CloudFront geolocation routing policies to select the closest origin.
Correct Answer
B. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Configure Amazon CloudFront with the S3 bucket as the origin.
Answer Description
What Is Amazon CloudFront?
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you’re serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.
Using Amazon S3 Buckets for Your Origin
When you use Amazon S3 as an origin for your distribution, you place any objects that you want CloudFront to deliver in an Amazon S3 bucket. You can use any method that is supported by Amazon S3 to get your objects into Amazon S3, for example, the Amazon S3 console or API, or a third-party tool. You can create a hierarchy in your bucket to store the objects, just as you would with any other Amazon S3 bucket.
Using an existing Amazon S3 bucket as your CloudFront origin server doesn’t change the bucket in any way; you can still use it as you normally would to store and access Amazon S3 objects at the standard Amazon S3 price. You incur regular Amazon S3 charges for storing the objects in the bucket.
The most cost-effective option is to migrate the website to an Amazon S3 bucket and configure that bucket for static website hosting. To enable good performance for global users the solutions architect should then configure a CloudFront distribution with the S3 bucket as the origin. This will cache the static content around the world closer to users.
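A compressed sketch of the two steps (the bucket name and caller reference are placeholders): enable static website hosting on the bucket, then create a CloudFront distribution that uses the bucket as its origin.

```python
import boto3

s3 = boto3.client("s3")
cloudfront = boto3.client("cloudfront")

bucket = "example-static-site-bucket"  # placeholder bucket name

# Serve static content from S3.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Put CloudFront in front of the bucket so content is cached at edge locations worldwide.
cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "static-site-migration-001",  # placeholder unique reference
        "Comment": "Static website distribution",
        "Enabled": True,
        "DefaultRootObject": "index.html",
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "s3-origin",
                "DomainName": f"{bucket}.s3.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
            "MinTTL": 0,
        },
    },
)
```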
CORRECT: “Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Configure Amazon CloudFront with the S3 bucket as the origin” is the correct answer.
INCORRECT: “Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Replicate the S3 bucket to multiple AWS Regions” is incorrect as there is no mechanism here for directing users to the closest Region, and replicating the bucket to multiple Regions adds storage costs without the edge caching that CloudFront provides.
INCORRECT: “Copy the website content to an Amazon EBS-backed Amazon EC2 instance running Apache HTTP Server. Configure Amazon Route 53 geolocation routing policies to select the closest origin” is incorrect as running an Amazon EC2 instance is less cost-effective than hosting the website on S3. Also, geolocation routing achieves nothing with only a single origin.
INCORRECT: “Copy the website content to multiple Amazon EBS-backed Amazon EC2 instances running Apache HTTP Server in multiple AWS Regions. Configure Amazon CloudFront geolocation routing policies to select the closest origin” is incorrect as running Amazon EC2 instances in multiple Regions is less cost-effective than hosting the website on S3 with CloudFront.
References
- Amazon CloudFront > Developer Guide > Using Amazon S3 Buckets for Your Origin
Exam Question 150
A company has deployed an API in a VPC behind an internet-facing Application Load Balancer (ALB). An application that consumes the API as a client is deployed in a second account in private subnets behind a NAT gateway. When requests to the client application increase, the NAT gateway costs are higher than expected. A solutions architect has configured the ALB to be internal.
Which combination of architectural changes will reduce the NAT gateway costs? (Choose two.)
A. Configure a VPC peering connection between the two VPCs. Access the API using the private address.
B. Configure an AWS Direct Connect connection between the two VPCs. Access the API using the private address.
C. Configure a ClassicLink connection for the API into the client VPC. Access the API using the ClassicLink address.
D. Configure a PrivateLink connection for the API into the client VPC. Access the API using the PrivateLink address.
E. Configure an AWS Resource Access Manager connection between the two accounts. Access the API using the private address.
Correct Answer
A. Configure a VPC peering connection between the two VPCs. Access the API using the private address.
D. Configure a PrivateLink connection for the API into the client VPC. Access the API using the PrivateLink address.
Answer Description
AWS PrivateLink makes it easy to connect services across different accounts and VPCs, which significantly simplifies the network architecture. There is no API resource type listed in the shareable resources for AWS Resource Access Manager (RAM), so option E is not viable. A VPC peering connection likewise lets the client application reach the API over private IP addresses, so traffic no longer flows through the NAT gateway.
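For the VPC peering option, a minimal cross-account sketch (the VPC IDs, account ID, route table ID, and CIDR are placeholders): the account that owns the API’s VPC requests the peering connection, the client account accepts it, and both sides then add routes that point at the peering connection.

```python
import boto3

# In the account that owns the API's VPC.
ec2_api_account = boto3.client("ec2", region_name="us-east-1")
peering = ec2_api_account.create_vpc_peering_connection(
    VpcId="vpc-0aaaaaaaaaaaaaaaa",       # API VPC (placeholder)
    PeerVpcId="vpc-0bbbbbbbbbbbbbbbb",   # client VPC (placeholder)
    PeerOwnerId="222233334444",          # client account ID (placeholder)
)
peering_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# In the client application's account (would use that account's credentials).
ec2_client_account = boto3.client("ec2", region_name="us-east-1")
ec2_client_account.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)

# Route the API VPC's CIDR through the peering connection from the client subnets.
ec2_client_account.create_route(
    RouteTableId="rtb-0ccccccccccccccc",  # client subnet route table (placeholder)
    DestinationCidrBlock="10.0.0.0/16",   # API VPC CIDR (placeholder)
    VpcPeeringConnectionId=peering_id,
)
```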
References
- AWS Resource Access Manager > User Guide > Shareable AWS resources