The latest AWS Certified Solutions Architect – Associate (SAA-C03) practice exam questions and answers (Q&A) are available free of charge to help you prepare for and pass the AWS Certified Solutions Architect – Associate (SAA-C03) exam and earn the certification.
Table of Contents
- Exam Question 111
- Correct Answer
- Answer Description
- Exam Question 112
- Correct Answer
- Answer Description
- Exam Question 113
- Correct Answer
- Answer Description
- Exam Question 114
- Correct Answer
- Answer Description
- Exam Question 115
- Correct Answer
- References
- Exam Question 116
- Correct Answer
- Answer Description
- References
- Exam Question 117
- Correct Answer
- Exam Question 118
- Correct Answer
- Answer Description
- Exam Question 119
- Correct Answer
- Answer Description
- Exam Question 120
- Correct Answer
- Answer Description
Exam Question 111
A company has multiple AWS accounts for various departments. One of the departments wants to share an Amazon S3 bucket with all other departments.
Which solution will require the LEAST amount of effort?
A. Enable cross-account S3 replication for the bucket.
B. Create a pre-signed URL for the bucket and share it with other departments.
C. Set the S3 bucket policy to allow cross-account access to other departments.
D. Create IAM users for each of the departments and configure a read-only IAM policy.
Correct Answer
C. Set the S3 bucket policy to allow cross-account access to other departments.
Answer Description
An S3 bucket policy can grant principals in other AWS accounts access to the bucket directly, so a single policy covers every department without duplicating data or creating new identities. This makes it the lowest-effort solution.
CORRECT: “Set the S3 bucket policy to allow cross-account access to other departments” is the correct answer.
INCORRECT: “Enable cross-account S3 replication for the bucket” is incorrect as replication copies objects into separate buckets in other accounts rather than sharing the existing bucket, and it must be configured for each destination.
INCORRECT: “Create a pre-signed URL for the bucket and share it with other departments” is incorrect as pre-signed URLs are generated per object and expire, so they cannot provide ongoing access to a whole bucket.
INCORRECT: “Create IAM users for each of the departments and configure a read-only IAM policy” is incorrect as creating and maintaining IAM users for every department is considerably more effort than a single bucket policy.
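As an illustration of how little effort this takes, the sketch below applies such a policy with the AWS SDK for Python (boto3). The bucket name and department account IDs are placeholders, not values from the question:

```python
import json

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name and department account IDs.
bucket = "shared-department-bucket"
department_accounts = ["111111111111", "222222222222"]
principals = [f"arn:aws:iam::{acct}:root" for acct in department_accounts]

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountList",
            "Effect": "Allow",
            "Principal": {"AWS": principals},
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{bucket}",
        },
        {
            "Sid": "AllowCrossAccountRead",
            "Effect": "Allow",
            "Principal": {"AWS": principals},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        },
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```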
Exam Question 112
A company needs to share an Amazon S3 bucket with an external vendor. The bucket owner must be able to access all objects.
Which action should be taken to share the S3 bucket?
A. Update the bucket to be a Requester Pays bucket.
B. Update the bucket to enable cross-origin resource sharing (CORS).
C. Create a bucket policy to require users to grant bucket-owner-full-control when uploading objects.
D. Create an IAM policy to require users to grant bucket-owner-full-control when uploading objects.
Correct Answer
C. Create a bucket policy to require users to grant bucket-owner-full-control when uploading objects.
Answer Description
By default, an S3 object is owned by the AWS account that uploaded it. This is true even when the bucket is owned by another account. To get access to the object, the object owner must explicitly grant you (the bucket owner) access. The object owner can grant the bucket owner full control of the object by updating the access control list (ACL) of the object. The object owner can update the ACL either during a put or copy operation, or after the object is added to the bucket.
Resolution: Add a bucket policy that grants users access to put objects in your bucket only when they grant you (the bucket owner) full control of the object.
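A minimal sketch of that bucket policy, applied with boto3 (the bucket name is hypothetical):

```python
import json

import boto3

s3 = boto3.client("s3")

bucket = "vendor-upload-bucket"  # hypothetical bucket name

# Deny any PutObject request that does not grant the bucket owner
# full control of the uploaded object via the x-amz-acl header.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireBucketOwnerFullControl",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```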
Exam Question 113
A company has a custom application running on an Amazon EC2 instance that:
- Reads a large amount of data from Amazon S3
- Performs a multi-stage analysis
- Writes the results to Amazon DynamoDB
The application writes a significant number of large, temporary files during the multi-stage analysis. The process performance depends on the temporary storage performance.
What would be the fastest storage option for holding the temporary files?
A. Multiple Amazon S3 buckets with Transfer Acceleration for storage.
B. Multiple Amazon EBS drives with Provisioned IOPS and EBS optimization.
C. Multiple Amazon EFS volumes using the Network File System version 4.1 (NFSv4.1) protocol.
D. Multiple instance store volumes with software RAID 0.
Correct Answer
D. Multiple instance store volumes with software RAID 0.
Answer Description
Instance store volumes are physically attached to the host computer, so they provide the lowest latency and highest throughput of the listed options, and striping several volumes together with RAID 0 increases the throughput further. Because the files are temporary, the ephemeral nature of instance store is acceptable. Amazon S3, Amazon EBS, and Amazon EFS are all network-attached and therefore slower for scratch storage.
Exam Question 114
A leasing company generates and emails PDF statements every month for all its customers. Each statement is about 400 KB in size.
Customers can download their statements from the website for up to 30 days from when the statements were generated. At the end of their 3-year lease, the customers are emailed a ZIP file that contains all the statements.
What is the MOST cost-effective storage solution for this situation?
A. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 1 day.
B. Store the statements using the Amazon S3 Glacier storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier Deep Archive storage after 30 days.
C. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) storage after 30 days.
D. Store the statements using the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 30 days.
Correct Answer
D. Store the statements using the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 30 days.
Answer Description
The statements must remain immediately downloadable for 30 days, which rules out storing them in S3 Glacier from day one. Each 400 KB statement is accessed at most a few times in that window, so S3 Standard-IA keeps it available at a lower storage price than S3 Standard, and after 30 days a lifecycle transition to S3 Glacier is the cheapest option that still allows the statements to be retrieved for the end-of-lease ZIP file.
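For illustration, the lifecycle rule could be created with boto3 roughly as follows; the bucket name, key prefix, and expiration window are assumptions, not part of the question:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket holding the monthly PDF statements.
bucket = "customer-statements"

s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-statements",
                "Status": "Enabled",
                "Filter": {"Prefix": "statements/"},
                # Move each statement to Glacier 30 days after creation.
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                # Delete after the 3-year lease plus a margin (an assumption).
                "Expiration": {"Days": 1200},
            }
        ]
    },
)
```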
Exam Question 115
A company recently released a new type of internet-connected sensor. The company is expecting to sell thousands of sensors, which are designed to stream high volumes of data each second to a central location. A solutions architect must design a solution that ingests and stores data so that engineering teams can analyze it in near-real-time with millisecond responsiveness.
Which solution should the solutions architect recommend?
A. Use an Amazon SQS queue to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon Redshift.
B. Use an Amazon SQS queue to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon DynamoDB.
C. Use Amazon Kinesis Data Streams to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon Redshift.
D. Use Amazon Kinesis Data Streams to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon DynamoDB.
Correct Answer
D. Use Amazon Kinesis Data Streams to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon DynamoDB.
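A minimal sketch of the consumer side of this pipeline, assuming a Lambda function configured with a Kinesis event source mapping and a hypothetical DynamoDB table named SensorReadings keyed on sensor_id and timestamp:

```python
import base64
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SensorReadings")  # hypothetical table name

def handler(event, context):
    """Triggered by a Kinesis Data Streams event source mapping."""
    with table.batch_writer() as batch:
        for record in event["Records"]:
            # Kinesis record payloads arrive base64-encoded; the field
            # names below are assumptions about the sensor payload.
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            batch.put_item(
                Item={
                    "sensor_id": payload["sensor_id"],
                    "timestamp": payload["timestamp"],
                    "value": payload["value"],
                }
            )
```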
References
AWS Big Data Blog > Analyze data in Amazon DynamoDB using Amazon SageMaker for real-time prediction
Exam Question 116
A website runs a web application that receives a burst of traffic each day at noon. Users upload new pictures and content daily, but have been complaining of timeouts. The architecture uses Amazon EC2 Auto Scaling groups, and the custom application consistently takes 1 minute to initialize after boot before it can respond to user requests.
How should a solutions architect redesign the architecture to better respond to changing traffic?
A. Configure a Network Load Balancer with a slow start configuration.
B. Configure AWS ElastiCache for Redis to offload direct requests to the servers.
C. Configure an Auto Scaling step scaling policy with an instance warmup condition.
D. Configure Amazon CloudFront to use an Application Load Balancer as the origin.
Correct Answer
C. Configure an Auto Scaling step scaling policy with an instance warmup condition.
Answer Description
If you create a step scaling policy, you can specify the number of seconds that it takes for a newly launched instance to warm up. Until its specified warm-up time has expired, an instance is not counted toward the aggregated metrics of the Auto Scaling group. For example, suppose a group has a current and desired capacity of 10 instances, and the policy adds 10 percent of the group's capacity when the metric reaches 60 and 30 percent when it reaches 70. If the metric gets to 60 and then to 62 while the new instance is still warming up, the current capacity is still 10 instances, so 1 instance is added (10 percent of 10 instances). However, the desired capacity of the group is already 11 instances, so the scaling policy does not increase the desired capacity further. If the metric gets to 70 while the new instance is still warming up, the policy should add 3 instances (30 percent of 10 instances). However, the desired capacity is already 11, so only 2 instances are added, for a new desired capacity of 13 instances.
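A step scaling policy with a warm-up period could be created with boto3 roughly as sketched below, assuming the policy is attached to a CloudWatch alarm with a threshold of 60 and an Auto Scaling group name that is purely illustrative:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",  # hypothetical group name
    PolicyName="scale-out-on-cpu",
    PolicyType="StepScaling",
    AdjustmentType="PercentChangeInCapacity",
    # Matches the application's 1-minute start-up time: new instances
    # are excluded from the group metrics for their first 60 seconds.
    EstimatedInstanceWarmup=60,
    # Bounds are relative to the assumed alarm threshold of 60:
    # metric 60-70 adds 10 percent, metric >= 70 adds 30 percent.
    StepAdjustments=[
        {
            "MetricIntervalLowerBound": 0,
            "MetricIntervalUpperBound": 10,
            "ScalingAdjustment": 10,
        },
        {
            "MetricIntervalLowerBound": 10,
            "ScalingAdjustment": 30,
        },
    ],
)
```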
References
Amazon EC2 Auto Scaling > User Guide > Step and simple scaling policies for Amazon EC2 Auto Scaling
Exam Question 117
A company has an application that posts messages to Amazon SQS. Another application polls the queue and processes the messages in an I/O-intensive operation. The company has a service level agreement (SLA) that specifies the maximum amount of time that can elapse between receiving the messages and responding to the users. Due to an increase in the number of messages, the company has difficulty meeting its SLA consistently.
What should a solutions architect do to help improve the application’s processing time and ensure it can handle the load at any level?
A. Create an Amazon Machine Image (AMI) from the instance used for processing. Terminate the instance and replace it with a larger size.
B. Create an Amazon Machine Image (AMI) from the instance used for processing. Terminate the instance and replace it with an Amazon EC2 Dedicated Instance.
C. Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy to keep its aggregate CPU utilization below 70%.
D. Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy based on the age of the oldest message in the SQS queue.
Correct Answer
D. Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy based on the age of the oldest message in the SQS queue.
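One way the target tracking policy could look with boto3 is sketched below. The group name, queue name, and the 5-minute target are placeholders to be derived from the SLA; note that AWS's own SQS scaling tutorial tracks a backlog-per-instance metric instead, while this sketch follows the option's wording and tracks the age of the oldest message:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="sqs-worker-asg",  # hypothetical group name
    PolicyName="keep-oldest-message-under-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "Namespace": "AWS/SQS",
            "MetricName": "ApproximateAgeOfOldestMessage",
            "Dimensions": [{"Name": "QueueName", "Value": "orders-queue"}],
            "Statistic": "Maximum",
        },
        # Scale out whenever the oldest message is older than 5 minutes.
        "TargetValue": 300.0,
    },
)
```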
Exam Question 118
A company is designing a new web service that will run on Amazon EC2 instances behind an Elastic Load Balancer. However, many of the web service clients can only reach IP addresses whitelisted on their firewalls.
What should a solutions architect recommend to meet the clients’ needs?
A. A Network Load Balancer with an associated Elastic IP address.
B. An Application Load Balancer with an associated Elastic IP address
C. An A record in an Amazon Route 53 hosted zone pointing to an Elastic IP address
D. An EC2 instance with a public IP address running as a proxy in front of the load balancer
Correct Answer
A. A Network Load Balancer with an associated Elastic IP address.
Answer Description
A Network Load Balancer supports one Elastic IP address per Availability Zone, which gives the web service static IP addresses that clients can whitelist on their firewalls. An Application Load Balancer cannot be associated with an Elastic IP address, a Route 53 A record pointing to an Elastic IP address would route traffic around the load balancer to a single instance, and a proxy EC2 instance would add a single point of failure and operational overhead.
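For illustration, a Network Load Balancer with pre-allocated Elastic IP addresses could be created with boto3 as sketched below; the subnet IDs and Elastic IP allocation IDs are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# One Elastic IP allocation per Availability Zone subnet; clients can
# whitelist the addresses behind these hypothetical allocation IDs.
response = elbv2.create_load_balancer(
    Name="whitelisted-web-service",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-0abc1234", "AllocationId": "eipalloc-0abc1234"},
        {"SubnetId": "subnet-0def5678", "AllocationId": "eipalloc-0def5678"},
    ],
)
print(response["LoadBalancers"][0]["DNSName"])
```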
Exam Question 119
A company’s packaged application dynamically creates and returns single-use text files in response to user requests. The company is using Amazon CloudFront for distribution, but wants to further reduce data transfer costs. The company cannot modify the application’s source code.
What should a solutions architect do to reduce costs?
A. Use Lambda@Edge to compress the files as they are sent to users.
B. Enable Amazon S3 Transfer Acceleration to reduce the response times.
C. Enable caching on the CloudFront distribution to store generated files at the edge.
D. Use Amazon S3 multipart uploads to move the files to Amazon S3 before returning them to users.
Correct Answer
A. Use Lambda@Edge to compress the files as they are sent to users.
Answer Description
Lambda@Edge can compress the files as CloudFront serves them, reducing the amount of data transferred out to users without any change to the application's source code, so A is the best answer. B is incorrect because S3 Transfer Acceleration speeds up transfers into and out of Amazon S3 for an additional fee and would increase costs. C is incorrect because the files are single-use and would never be served from the cache again. D is incorrect because multipart upload is intended for uploading large objects and does not reduce the data transferred to users.
Exam Question 120
A company is planning to deploy an Amazon RDS DB instance running Amazon Aurora. The company has a backup retention policy requirement of 90 days.
Which solution should a solutions architect recommend?
A. Set the backup retention period to 90 days when creating the RDS DB instance.
B. Configure RDS to copy automated snapshots to a user-managed Amazon S3 bucket with a lifecycle policy set to delete after 90 days.
C. Create an AWS Backup plan to perform a daily snapshot of the RDS database with the retention set to 90 days. Create an AWS Backup job to schedule the execution of the backup plan daily.
D. Use a daily scheduled event with Amazon CloudWatch Events to execute a custom AWS Lambda function that makes a copy of the RDS automated snapshot. Purge snapshots older than 90 days.
Correct Answer
C. Create an AWS Backup plan to perform a daily snapshot of the RDS database with the retention set to 90 days. Create an AWS Backup job to schedule the execution of the backup plan daily.
Answer Description
The automated backup retention period for Amazon RDS and Aurora is limited to a maximum of 35 days, so it cannot meet the 90-day requirement on its own, and RDS automated snapshots cannot be copied into a user-managed S3 bucket. AWS Backup supports Amazon Aurora and can enforce a daily backup schedule with a 90-day retention lifecycle without custom code.
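A sketch of such a plan with boto3; the plan name, vault, IAM role ARN, cluster ARN, and schedule are all assumptions for illustration:

```python
import boto3

backup = boto3.client("backup")

# Daily backup rule that deletes recovery points after 90 days.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "aurora-90-day-retention",
        "Rules": [
            {
                "RuleName": "daily-snapshot",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",  # daily, 05:00 UTC
                "Lifecycle": {"DeleteAfterDays": 90},
            }
        ],
    }
)

# Assign the Aurora cluster to the plan (hypothetical ARNs).
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "aurora-cluster",
        "IamRoleArn": "arn:aws:iam::111111111111:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:rds:us-east-1:111111111111:cluster:my-aurora-cluster"],
    },
)
```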