
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 45

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free of charge and can help you prepare for the AWS Certified Solutions Architect – Associate SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate SAA-C03 certification.

Question 1161

Exam Question

A disaster response team is using drones to collect images of recent storm damage. The response team laptops lack the storage and compute capacity to transfer the images and process the data. While the team has Amazon EC2 instances for processing and Amazon S3 buckets for storage, network connectivity is intermittent and unreliable. The images need to be processed to evaluate the damage.

What should a solutions architect recommend?

A. Use AWS Snowball Edge devices to process and store the images.
B. Upload the images to Amazon Simple Queue Service (Amazon SQS) during intermittent connectivity to EC2 instances.
C. Configure Amazon Kinesis Data Firehose to create multiple delivery streams aimed separately at the S3 buckets for storage and the EC2 instances for processing the images.
D. Use AWS Storage Gateway pre-installed on a hardware appliance to cache the images locally for Amazon S3 to process the images when connectivity becomes available.

Correct Answer

A. Use AWS Snowball Edge devices to process and store the images.

Explanation

Given the intermittent and unreliable network connectivity, AWS Snowball Edge is the most suitable option because it provides both local storage and local compute in the field.

A. AWS Snowball Edge devices are rugged, portable appliances that offer on-board storage and compute, including the ability to run EC2-compatible instances locally. The response team can copy the drone images directly to the device, process them on the device to evaluate the damage, and move the data to Amazon S3 later, either by syncing when connectivity allows or by returning the device to AWS. This removes the dependency on a reliable network link and on the laptops' limited storage and compute capacity.

The other options (B, C, and D) do not address both constraints. Amazon SQS (option B) is a message queue service with a maximum message size of 256 KB and is not designed for transferring or storing large image files. Amazon Kinesis Data Firehose (option C) requires a working network connection to deliver data to its destinations, which is not available here. AWS Storage Gateway (option D) caches data locally but still depends on network connectivity to move data into Amazon S3, and it provides no local compute for processing the images; Amazon S3 itself does not process images.

Therefore, option A provides the most appropriate solution for the given requirements.
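
As a rough illustration of how the team could work with the device in the field, the sketch below uploads a drone image to an Amazon S3 compatible bucket on the Snowball Edge through its local endpoint. The endpoint address, bucket name, file name, credentials, and Region are placeholders; the real values come from unlocking and configuring the device with the Snowball Edge client.

```python
import boto3

# Placeholder values: replace with the endpoint, credentials, and bucket
# configured on your Snowball Edge device via the Snowball Edge client.
SNOWBALL_S3_ENDPOINT = "https://192.168.1.100:8443"
BUCKET = "storm-damage-images"

# Point the S3 client at the device's local S3-compatible endpoint instead
# of an AWS Region endpoint, so uploads work without internet connectivity.
s3 = boto3.client(
    "s3",
    endpoint_url=SNOWBALL_S3_ENDPOINT,
    region_name="us-east-1",                   # Region the Snowball job was created in (placeholder)
    aws_access_key_id="LOCAL_ACCESS_KEY",      # from the Snowball Edge client (list-access-keys)
    aws_secret_access_key="LOCAL_SECRET_KEY",  # from the Snowball Edge client (get-secret-access-key)
    verify=False,                              # the device uses a self-signed certificate by default
)

# Upload a drone image to the bucket on the device for local processing.
s3.upload_file("drone-image-001.jpg", BUCKET, "incoming/drone-image-001.jpg")
```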

Question 1162

Exam Question

A web application runs on Amazon EC2 instances behind an Application Load Balancer. The application allows users to create custom reports of historical weather data. Generating a report can take up to 5 minutes. These long-running requests use many of the available incoming connections, making the system unresponsive to other users.

How can a solutions architect make the system more responsive?

A. Use Amazon SQS with AWS Lambda to generate reports.
B. Increase the idle timeout on the Application Load Balancer to 5 minutes.
C. Update the client-side application code to increase its request timeout to 5 minutes.
D. Publish the reports to Amazon S3 and use Amazon CloudFront for downloading to the user.

Correct Answer

A. Use Amazon SQS with AWS Lambda to generate reports.

Explanation

By offloading the long-running report generation process to AWS Lambda, the system can become more responsive. Instead of users waiting for the report generation to complete, their requests can be handled asynchronously by placing them in an Amazon SQS queue. AWS Lambda functions can be triggered to process the requests from the queue and generate the reports in the background. This approach allows the web application to quickly respond to user requests and frees up resources to handle other incoming connections.

Option B, increasing the idle timeout on the Application Load Balancer, would not directly address the issue of long-running report generation requests consuming all available incoming connections. It would only extend the time until the idle connections are terminated, but the system would still be unresponsive during the report generation process.

Option C, updating the client-side application code to increase its request timeout, would only address the timeout issue from the client’s perspective. The long-running report generation requests would still consume all available incoming connections, making the system unresponsive to other users.

Option D, publishing the reports to Amazon S3 and using Amazon CloudFront for downloading, would improve the delivery of the reports to the users but would not address the issue of long-running report generation requests consuming all available incoming connections.

Therefore, option A provides a scalable and asynchronous solution to make the system more responsive by leveraging Amazon SQS and AWS Lambda for report generation.
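
A minimal sketch of this pattern is shown below, assuming a hypothetical queue named report-requests and a hypothetical generate_report helper: the web tier enqueues the request and returns immediately, and a Lambda function subscribed to the queue does the slow work in the background.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL; the web application enqueues the report request
# and returns immediately instead of holding the connection open.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/report-requests"

def submit_report_request(user_id: str, report_params: dict) -> None:
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"user_id": user_id, "params": report_params}),
    )

# Lambda handler triggered by the SQS queue (event source mapping).
# Each record is one queued report request; generation happens in the
# background, so the web tier stays responsive.
def lambda_handler(event, context):
    for record in event["Records"]:
        request = json.loads(record["body"])
        generate_report(request["params"])

def generate_report(params: dict) -> None:
    ...  # hypothetical long-running report generation, e.g. writing the result to Amazon S3
```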

Question 1163

Exam Question

A solutions architect is developing a multiple-subnet VPC architecture. The solution will consist of six subnets in two Availability Zones. The subnets are defined as public, private and dedicated for databases. Only the Amazon EC2 instances running in the private subnets should be able to access a database.

Which solution meets these requirements?

A. Create a new route table that excludes the route to the public subnets' CIDR blocks. Associate the route table with the database subnets.
B. Create a security group that denies ingress from the security group used by instances in the public subnets. Attach the security group to an Amazon RDS DB instance.
C. Create a security group that allows ingress from the security group used by instances in the private subnets. Attach the security group to an Amazon RDS DB instance.
D. Create a new peering connection between the public subnets and the private subnets. Create a different peering connection between the private subnets and the database subnets.

Correct Answer

C. Create a security group that allows ingress from the security group used by instances in the private subnets. Attach the security group to an Amazon RDS DB instance.

Explanation

To ensure that only the EC2 instances running in the private subnets can access the database, create a security group for the Amazon RDS DB instance that allows ingress only from the security group used by the instances in the private subnets. Security groups are deny-by-default and support only allow rules, so any traffic that is not explicitly allowed, including traffic from the public subnets, is blocked.

Option A, creating a new route table that excludes the route to the public subnets' CIDR blocks and associating it with the database subnets, does not control access to the database. Route tables control the routing of network traffic, not access to resources, and the local route covering the VPC's own CIDR block cannot be removed.

Option B, creating a security group that denies ingress from the security group used by instances in the public subnets, is not possible. Security groups do not support deny rules; only network ACLs can explicitly deny traffic.

Option D, creating peering connections between subnets, is not valid. VPC peering connects two VPCs, not subnets within the same VPC, and it does not provide fine-grained control over access to specific resources.

Therefore, option C provides the most appropriate solution: a security group that allows ingress only from the security group used by instances in the private subnets ensures that only those instances can access the database.
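
As an illustration, the following sketch adds such an allow rule with the AWS SDK for Python (Boto3); the security group IDs and the MySQL port 3306 are assumptions, not values taken from the question.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs: the database security group (attached to the RDS
# instance) and the security group used by the EC2 instances in the
# private subnets.
DB_SG_ID = "sg-0db0000000000000a"
PRIVATE_APP_SG_ID = "sg-0app000000000000b"

# Allow database traffic (TCP 3306) into the database security group only
# when the source is the private-subnet application security group.
# Everything else is implicitly denied.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG_ID,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": PRIVATE_APP_SG_ID}],
        }
    ],
)
```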

Question 1164

Exam Question

An application is running on Amazon EC2 instances. Sensitive information required for the application is stored in an Amazon S3 bucket. The bucket needs to be protected from internet access while only allowing services within the VPC access to the bucket.

Which combination of actions should a solutions architect take to accomplish this? (Choose two.)

A. Create a VPC endpoint for Amazon S3.
B. Enable server access logging on the bucket.
C. Apply a bucket policy to restrict access to the S3 endpoint.
D. Add an S3 ACL to the bucket that has sensitive information.
E. Restrict users using the IAM policy to use the specific bucket.

Correct Answer

A. Create a VPC endpoint for Amazon S3.
C. Apply a bucket policy to restrict access to the S3 endpoint.

Explanation

To protect the Amazon S3 bucket from internet access and only allow services within the VPC to access it, the following actions should be taken:

A. Create a VPC endpoint for Amazon S3: This allows communication between resources in your VPC and Amazon S3 without going over the internet. By creating a VPC endpoint, you can ensure that traffic to S3 stays within your VPC.

C. Apply a bucket policy to restrict access to the S3 endpoint: You can create a bucket policy that specifies which resources or entities are allowed to access the S3 bucket. By configuring the bucket policy, you can restrict access to only the services or resources within your VPC, effectively preventing internet access to the bucket.

The remaining options are not directly related to achieving the desired outcome:

B. Enable server access logging on the bucket: Server access logging is used to capture detailed records for requests made to the S3 bucket. While it is a good practice for monitoring and auditing, it does not directly address the requirement of restricting internet access to the bucket.

D. Add an S3 ACL to the bucket that has sensitive information: S3 Access Control Lists (ACLs) control access to individual objects within the bucket, but they do not provide a mechanism to restrict overall bucket access or prevent internet access to the bucket.

E. Restrict users using the IAM policy to use the specific bucket: IAM policies are used to control access to AWS resources, including S3 buckets. However, they do not specifically address the requirement of restricting internet access to the bucket; they control access based on IAM user permissions.

Therefore, options A and C provide the necessary steps to protect the S3 bucket from internet access while allowing access only from within the VPC.
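
The sketch below shows what these two actions could look like with Boto3, assuming a hypothetical bucket, VPC, and route table in us-east-1; a deny-unless-from-VPC-endpoint bucket policy is one common way to express the restriction.

```python
import json
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

BUCKET = "sensitive-app-data-bucket"      # hypothetical bucket name
VPC_ID = "vpc-0123456789abcdef0"          # hypothetical VPC ID
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"  # hypothetical route table ID

# Action A: create a gateway VPC endpoint for S3 so traffic from the VPC
# reaches S3 over the AWS network instead of the internet.
endpoint = ec2.create_vpc_endpoint(
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=[ROUTE_TABLE_ID],
)
vpce_id = endpoint["VpcEndpoint"]["VpcEndpointId"]

# Action C: apply a bucket policy that denies all access to the bucket
# unless the request arrives through that VPC endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessUnlessFromVpce",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"StringNotEquals": {"aws:SourceVpce": vpce_id}},
        }
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```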

Question 1165

Exam Question

A company that operates a web application on premises is preparing to launch a newer version of the application on AWS. The company needs to route requests to either the AWS-hosted or the on-premises-hosted application based on the URL query string. The on-premises application is not available from the internet, and a VPN connection is established between Amazon VPC and the company’s data center. The company wants to use an Application Load Balancer (ALB) for this launch.

Which solution meets these requirements?

A. Use two ALBs: one for on-premises and one for the AWS resource. Add hosts to each target group of each ALB. Route with Amazon Route 53 based on the URL query string.
B. Use two ALBs: one for on-premises and one for the AWS resource. Add hosts to the target group of each ALB. Create a software router on an EC2 instance based on the URL query string.
C. Use one ALB with two target groups: one for the AWS resource and one for on premises. Add hosts to each target group of the ALB. Configure listener rules based on the URL query string.
D. Use one ALB with two AWS Auto Scaling groups: one for the AWS resource and one for on premises. Add hosts to each Auto Scaling group. Route with Amazon Route 53 based on the URL query string.

Correct Answer

C. Use one ALB with two target groups: one for the AWS resource and one for on premises. Add hosts to each target group of the ALB. Configure listener rules based on the URL query string.

Explanation

To route requests based on the URL query string between the AWS-hosted and on-premises-hosted applications, the recommended solution is to use one Application Load Balancer (ALB) with two target groups.

Here’s how the solution would work:

  1. Create an ALB: Set up an ALB in your AWS environment.
  2. Create target groups: Create two target groups—one for the AWS-hosted application and one for the on-premises-hosted application. Each target group should contain the appropriate hosts or instances.
  3. Configure listener rules: Configure the ALB’s listener rules to route requests based on the URL query string. You can define the conditions and actions in the listener rules to forward traffic to the corresponding target group based on the query string value.
  4. Add hosts to target groups: Add the hosts or instances of the AWS-hosted application and the on-premises-hosted application to their respective target groups.

With this configuration, when a request comes in, the ALB examines the URL query string and forwards the request to the appropriate target group based on the defined listener rules. The target group, in turn, routes the request to the corresponding hosts or instances.

Option C provides a simpler and more efficient approach compared to the other options, as it leverages the capabilities of the ALB and eliminates the need for additional ALBs or software routers.
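
As an illustration of step 3, the following Boto3 sketch creates a listener rule that matches a hypothetical version=new query-string parameter and forwards matching requests to the AWS target group, while the default action sends everything else to the target group containing the on-premises hosts (registered by IP address and reached over the VPN). All ARNs and the query-string key are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical ARNs for the ALB listener and the two target groups.
LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/web/abc/def"
AWS_TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/aws-app/111"
ONPREM_TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/onprem-app/222"

# Listener rule: requests whose query string contains version=new are
# forwarded to the AWS-hosted target group.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[
        {
            "Field": "query-string",
            "QueryStringConfig": {"Values": [{"Key": "version", "Value": "new"}]},
        }
    ],
    Actions=[{"Type": "forward", "TargetGroupArn": AWS_TG_ARN}],
)

# The listener's default action forwards all other requests to the target
# group containing the on-premises hosts.
elbv2.modify_listener(
    ListenerArn=LISTENER_ARN,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": ONPREM_TG_ARN}],
)
```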

Question 1166

Exam Question

A company uses Amazon S3 as its object storage solution. The company has thousands of S3 buckets it uses to store data. Some of the S3 buckets have data that is accessed less frequently than others. A solutions architect found that lifecycle policies are not consistently implemented, or are implemented only partially, resulting in data being stored in high-cost storage.

Which solution will lower costs without compromising the availability of objects?

A. Use S3 ACLs.
B. Use Amazon Elastic Block Store (EBS) automated snapshots.
C. Use S3 Intelligent-Tiering storage.
D. Use S3 One Zone-Infrequent Access (S3 One Zone-IA).

Correct Answer

C. Use S3 Intelligent-Tiering storage.

Explanation

To lower costs without compromising the availability of objects, the best solution is to use S3 Intelligent-Tiering storage.

S3 Intelligent-Tiering is an Amazon S3 storage class designed to optimize costs automatically by moving objects between access tiers as access patterns change. It monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the Infrequent Access tier, which has lower storage costs; objects are moved back to the Frequent Access tier automatically when they are accessed again.

By using S3 Intelligent-Tiering, the company can benefit from cost savings without compromising the availability of objects. The objects will still be readily accessible, and there will be no impact on performance or latency when accessing them.

S3 Intelligent-Tiering is a good fit for the scenario described because it automatically adjusts the storage tier based on access patterns, eliminating the need for manual lifecycle policies. It also provides cost savings compared to storing all objects in high-cost storage classes.
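
For illustration, the Boto3 sketch below writes new objects directly to Intelligent-Tiering and applies a single lifecycle rule that moves existing objects into it; the bucket and object names are hypothetical.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-data-bucket"  # hypothetical bucket name

# New uploads can be written directly to the Intelligent-Tiering storage class.
s3.put_object(
    Bucket=BUCKET,
    Key="reports/2023/usage.csv",
    Body=b"...",
    StorageClass="INTELLIGENT_TIERING",
)

# A single lifecycle rule can transition existing objects into
# Intelligent-Tiering, after which tiering decisions are automatic and no
# further per-bucket lifecycle tuning is required.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "move-to-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
            }
        ]
    },
)
```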

Options A, B, and D are not the most suitable choices for this scenario.

Option A suggests using S3 ACLs (Access Control Lists), which are used for managing access permissions to S3 objects. While ACLs are important for controlling access to objects, they do not directly address the cost optimization requirement.

Option B suggests using Amazon Elastic Block Store (EBS) automated snapshots, which are point-in-time backups of EBS volumes. EBS snapshots are not directly related to the management of S3 objects and do not provide a solution for lowering costs of storing data in S3 buckets.

Option D suggests using S3 One Zone-Infrequent Access (S3 One Zone-IA), which is a storage class in S3 that stores data in a single availability zone. While S3 One Zone-IA can offer cost savings compared to other storage classes, it does compromise the availability of objects as it does not replicate data across multiple availability zones. In the scenario where availability is a requirement, using S3 Intelligent-Tiering would be a better choice as it provides cost optimization while maintaining high availability.

Question 1167

Exam Question

A company is migrating a Linux-based web server group to AWS. The web servers must access files in a shared file store for some content. To meet the migration date, only minimal changes can be made.

What should a solutions architect do to meet these requirements?

A. Create an Amazon S3 Standard bucket with access to the web server.
B. Configure an Amazon CloudFront distribution with an Amazon S3 bucket as the origin.
C. Create an Amazon Elastic File System (Amazon EFS) volume and mount it on all web servers.
D. Configure Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1) volumes and mount them on all web servers.

Correct Answer

C. Create an Amazon Elastic File System (Amazon EFS) volume and mount it on all web servers.

Explanation

To meet the requirements of accessing files in a shared file store with minimal changes, the best option is to create an Amazon Elastic File System (Amazon EFS) volume and mount it on all web servers.

Amazon EFS provides a scalable and fully managed file storage service that can be easily mounted on multiple EC2 instances simultaneously. It is designed to provide shared access to files across multiple instances, making it an ideal choice for the scenario where web servers need to access files in a shared file store.

By creating an Amazon EFS volume and mounting it on all web servers, the web servers can access the shared files without requiring significant modifications to the existing web server configuration. Amazon EFS provides a file system interface that is compatible with standard Linux file system semantics, so the web servers can continue to access the files using familiar file system operations.
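
A rough Boto3 sketch of the setup is shown below; the subnet and security group IDs are placeholders, and the mount command in the final comment assumes the amazon-efs-utils package is installed on the web servers.

```python
import boto3

efs = boto3.client("efs")

# Hypothetical subnet and security group IDs for the web server subnets.
SUBNET_IDS = ["subnet-0aaa0000000000001", "subnet-0bbb0000000000002"]
NFS_SG_ID = "sg-0efs000000000000c"  # must allow inbound NFS (TCP 2049) from the web servers

# Create the shared file system.
fs = efs.create_file_system(
    CreationToken="web-shared-content",
    PerformanceMode="generalPurpose",
    Tags=[{"Key": "Name", "Value": "web-shared-content"}],
)
fs_id = fs["FileSystemId"]

# Create one mount target per subnet (one per Availability Zone) so every
# web server can reach the file system from its own AZ.
for subnet_id in SUBNET_IDS:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=[NFS_SG_ID],
    )

# Each web server then mounts the file system like any NFS share, e.g.:
#   sudo mount -t efs <fs_id>:/ /var/www/shared
# so the existing Linux applications need no code changes.
```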

Options A, B, and D are not the most suitable choices for this scenario.

Option A suggests using an Amazon S3 Standard bucket, but Amazon S3 is an object storage service and does not provide a traditional file system interface. While it is possible to mount S3 buckets using third-party tools or AWS services like AWS Storage Gateway, it would require more changes to the web server configuration and may not provide the desired level of compatibility.

Option B suggests using Amazon CloudFront with an Amazon S3 bucket as the origin. While this can provide caching and improved performance for static content, it does not provide a shared file store for the web servers to access files.

Option D suggests using Amazon Elastic Block Store (Amazon EBS) volumes, which are block-level storage volumes attached to EC2 instances. While it is possible to create and mount EBS volumes on the web servers, it does not provide the shared access and scalability that is required for a shared file store.

Question 1168

Exam Question

A media company is evaluating the possibility of moving its systems to the AWS Cloud. The company needs at least 10 TB of storage with the maximum possible I/O performance for video processing, 300 TB of very durable storage for storing media content, and 900 TB of storage to meet requirements for archival media that is no longer in use.

Which set of services should a solutions architect recommend to meet these requirements?

A. Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage.
B. Amazon EBS for maximum performance, Amazon EFS for durable data storage, and Amazon S3 Glacier for archival storage.
C. Amazon EC2 instance store for maximum performance, Amazon EFS for durable data storage, and Amazon S3 for archival storage.
D. Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage.

Correct Answer

A. Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage.

Explanation

To meet the requirements of the media company, a solutions architect should recommend the following set of services:

  1. Amazon EBS for maximum performance: Amazon Elastic Block Store (EBS) provides block-level storage volumes that can be attached to Amazon EC2 instances. It offers high-performance storage options such as Amazon EBS Provisioned IOPS (input/output operations per second) SSD (io1) volumes, which are designed to deliver predictable and consistent I/O performance. By utilizing Amazon EBS, the media company can achieve the maximum possible I/O performance for video processing.
  2. Amazon S3 for durable data storage: Amazon Simple Storage Service (S3) is an object storage service that provides highly durable and scalable storage for storing media content. It offers high availability, durability, and low latency access to data. With its durability and scalability, Amazon S3 is a suitable choice for storing the 300 TB of media content that requires very durable storage.
  3. Amazon S3 Glacier for archival storage: Amazon S3 Glacier is a storage service designed for long-term archival of data. It offers low-cost storage with high durability for data that is infrequently accessed or no longer in active use. By utilizing Amazon S3 Glacier, the media company can meet the requirement of 900 TB of storage for archival media that is not in use anymore.

Option A provides the most appropriate combination of services to meet the storage requirements of the media company, with Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage.
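
A brief Boto3 sketch of this combination follows; the volume size, IOPS values, bucket name, and file names are purely illustrative.

```python
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# High-performance scratch volume for video processing: a Provisioned IOPS
# SSD (io1) EBS volume sized and provisioned for the workload.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=10000,       # roughly 10 TB, expressed in GiB
    VolumeType="io1",
    Iops=50000,
)

BUCKET = "media-content-archive"  # hypothetical bucket

# Durable media content goes to S3 Standard (the default storage class)...
s3.upload_file("finished-episode.mp4", BUCKET, "library/finished-episode.mp4")

# ...while archival media that is no longer in use is stored in the
# Glacier storage class for the lowest long-term cost.
s3.upload_file(
    "old-raw-footage.mov",
    BUCKET,
    "archive/old-raw-footage.mov",
    ExtraArgs={"StorageClass": "GLACIER"},
)
```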

Question 1169

Exam Question

A company has an application workflow that uses an AWS Lambda function to download and decrypt files from Amazon S3. These files are encrypted using AWS Key Management Service Customer Master Keys (AWS KMS CMKs). A solutions architect needs to design a solution that will ensure the required permissions are set correctly.

Which combination of actions accomplish this? (Choose two.)

A. Attach the kms:decrypt permission to the Lambda function resource policy.
B. Grant the decrypt permission for the Lambda IAM role in the KMS key policy.
C. Grant the decrypt permission for the Lambda resource policy in the KMS key policy.
D. Create a new IAM policy with the kms:decrypt permission and attach the policy to the Lambda function.
E. Create a new IAM role with the kms:decrypt permission and attach the execution role to the Lambda function.

Correct Answer

B. Grant the decrypt permission for the Lambda IAM role in the KMS key policy.
E. Create a new IAM role with the kms:decrypt permission and attach the execution role to the Lambda function.

Explanation

To ensure the required permissions for the Lambda function to download and decrypt files from Amazon S3 using AWS KMS CMKs, the following actions should be taken:

B. Grant the decrypt permission for the Lambda IAM role in the KMS key policy.
E. Create a new IAM role with the kms:decrypt permission and attach the execution role to the Lambda function.

  • Option B: Granting the decrypt permission for the Lambda IAM role in the KMS key policy ensures that the IAM role associated with the Lambda function has the necessary permission to decrypt the files using the KMS CMK.
  • Option E: Creating a new IAM role with the kms:decrypt permission and attaching the execution role to the Lambda function allows the Lambda function to assume the IAM role with the necessary permission to perform the decryption operation.

By combining these two actions, the Lambda function will have the required permissions to download and decrypt files from Amazon S3 using AWS KMS CMKs.
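
For illustration, the sketch below expresses both permissions with Boto3 and plain JSON; the role name, key ARN, account ID, and bucket name are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical names/ARNs for the Lambda execution role and the KMS key.
ROLE_NAME = "report-decryptor-lambda-role"
KEY_ARN = "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"

# Option E: give the Lambda execution role permission to call kms:Decrypt
# on the specific key (plus s3:GetObject to download the encrypted files).
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="AllowS3DownloadAndKmsDecrypt",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "kms:Decrypt", "Resource": KEY_ARN},
            {"Effect": "Allow", "Action": "s3:GetObject",
             "Resource": "arn:aws:s3:::encrypted-files-bucket/*"},
        ],
    }),
)

# Option B: the KMS key policy must also allow the role to use the key.
# This statement would be merged into the key policy (for example via
# kms.put_key_policy).
key_policy_statement = {
    "Sid": "AllowLambdaRoleToDecrypt",
    "Effect": "Allow",
    "Principal": {"AWS": f"arn:aws:iam::123456789012:role/{ROLE_NAME}"},
    "Action": "kms:Decrypt",
    "Resource": "*",
}
```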

Question 1170

Exam Question

A company runs a website on Amazon EC2 instances behind an ELB Application Load Balancer. Amazon Route 53 is used for the DNS. The company wants to set up a backup website with a message including a phone number and email address that users can reach if the primary website is down.

How should the company deploy this solution?

A. Use Amazon S3 website hosting for the backup website and Route 53 failover routing policy.
B. Use Amazon S3 website hosting for the backup website and Route 53 latency routing policy.
C. Deploy the application in another AWS Region and use ELB health checks for failover routing.
D. Deploy the application in another AWS Region and use server-side redirection on the primary website.

Correct Answer

A. Use Amazon S3 website hosting for the backup website and Route 53 failover routing policy.

Explanation

The company can deploy the backup website with the required message using the following solution:

A. Use Amazon S3 website hosting for the backup website and Route 53 failover routing policy.

  1. Set up an Amazon S3 bucket and configure it for static website hosting. Upload the backup website files to this bucket.
  2. Configure the desired message, including the phone number and email address, in the backup website files.
  3. In Amazon Route 53, create a failover routing policy for the DNS record associated with the primary website.
  4. Configure the primary website as the primary resource and the S3 bucket hosting the backup website as the failover resource.
  5. Create an Amazon Route 53 health check that monitors the health of the primary website (for example, the Application Load Balancer's endpoint), and associate it with the primary record.
  6. Specify the appropriate failover routing behavior, such as routing traffic to the failover resource (backup website) when the primary resource (primary website) is unhealthy or unavailable.

By implementing this solution, when the primary website is down or determined to be unhealthy, Amazon Route 53 will automatically route traffic to the backup website hosted in Amazon S3, displaying the desired message with the contact information for users to reach out.
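
As a rough sketch, the Boto3 call below creates the failover record pair described in steps 3, 4, and 6; the hosted zone, health check, domain name, and alias targets are placeholders.

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical identifiers: the hosted zone, the health check that monitors
# the primary site, the ALB alias target, and the S3 website endpoint.
HOSTED_ZONE_ID = "Z0123456789ABCDEFGHIJ"
HEALTH_CHECK_ID = "11111111-2222-3333-4444-555555555555"
ALB_ZONE_ID = "ZALBPLACEHOLDER"          # hosted zone ID of the ALB
ALB_DNS = "primary-alb-123456.us-east-1.elb.amazonaws.com"
S3_WEBSITE_ZONE_ID = "ZS3PLACEHOLDER"    # hosted zone ID of the S3 website endpoint
S3_WEBSITE_DNS = "s3-website-us-east-1.amazonaws.com"

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {   # PRIMARY: the ALB in front of the EC2 instances,
                # used while its health check passes.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "HealthCheckId": HEALTH_CHECK_ID,
                    "AliasTarget": {
                        "HostedZoneId": ALB_ZONE_ID,
                        "DNSName": ALB_DNS,
                        "EvaluateTargetHealth": True,
                    },
                },
            },
            {   # SECONDARY: the static backup site hosted on S3, served
                # only when the primary is unhealthy.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": "secondary",
                    "Failover": "SECONDARY",
                    "AliasTarget": {
                        "HostedZoneId": S3_WEBSITE_ZONE_ID,
                        "DNSName": S3_WEBSITE_DNS,
                        "EvaluateTargetHealth": False,
                    },
                },
            },
        ]
    },
)
```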