AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 34

The latest AWS Certified Solutions Architect – Associate (SAA-C03) practice exam questions and answers (Q&A) are available free below to help you prepare for the SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.

Question 1051

Exam Question

A solutions architect is moving the static content from a public website hosted on Amazon EC2 instances to an Amazon S3 bucket. An Amazon CloudFront distribution will be used to deliver the static assets. The security group used by the EC2 instances restricts access to a limited set of IP ranges. Access to the static content should be similarly restricted.

Which combination of steps will meet these requirements? (Choose two.)

A. Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects.
B. Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution.
C. Create a new security group that includes the same IP restrictions that exist in the current EC2 security group. Associate this new security group with the CloudFront distribution.
D. Create a new security group that includes the same IP restrictions that exist in the current EC2 security group. Associate this new security group with the S3 bucket hosting the static content.
E. Create a new IAM role and associate the role with the distribution. Change the permissions either on the S3 bucket or on the files within the S3 bucket so that only the newly created IAM role has read and download permissions.

Correct Answer

A. Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects.
B. Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution.

Explanation

The combination of steps that will meet the requirements is:

A. Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects.
B. Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution.

Here’s the rationale for these choices:

A. Create an origin access identity (OAI) and associate it with the distribution:
By creating an OAI and associating it with the CloudFront distribution, you can control access to the S3 bucket content directly from CloudFront. In the bucket policy, you can specify that only the OAI is allowed to read the objects in the S3 bucket. This ensures that the static content is only accessible through CloudFront and not directly from the S3 bucket, effectively restricting access.
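As a sketch of what this looks like in practice, the bucket policy below grants `s3:GetObject` only to the OAI principal; the bucket name and OAI ID are placeholders, not values from the question.

```python
import json

# Hypothetical bucket name and OAI ID; substitute your own values.
BUCKET = "example-static-content"
OAI_ID = "E2EXAMPLE1OAI"

# Bucket policy allowing only the CloudFront OAI to read objects,
# so direct requests to the S3 bucket are denied.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {OAI_ID}"
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

With this policy attached, any request that does not come through the distribution associated with the OAI receives an access-denied response.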

B. Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group:
By creating an AWS WAF web ACL and adding IP restrictions that match the ones defined in the EC2 security group, you can enforce similar IP-based access restrictions for the static content delivered through CloudFront. The web ACL allows you to define rules and conditions to control access to your content based on IP addresses. By associating this web ACL with the CloudFront distribution, you can ensure that only requests coming from the specified IP ranges are allowed to access the static assets.

Combining these two steps, you can ensure that access to the static content is restricted both at the CloudFront distribution level (through the OAI) and at the AWS WAF level (through IP restrictions). This helps to maintain the same level of access control that was present with the EC2 security group, ensuring that only authorized IP ranges can access the static assets served by CloudFront.
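To make the WAF piece concrete, the dictionaries below sketch the WAFv2 shapes for an IP set and an allow rule referencing it; the CIDR ranges, names, and the elided ARN are all placeholders standing in for the values copied from the EC2 security group.

```python
# Illustrative WAFv2 request shapes; names, ARNs, and CIDRs are placeholders.
ALLOWED_RANGES = ["203.0.113.0/24", "198.51.100.0/24"]  # mirror the SG rules

# IP set holding the permitted ranges; CloudFront web ACLs use the
# CLOUDFRONT scope (created in us-east-1).
ip_set = {
    "Name": "static-content-allow-list",
    "Scope": "CLOUDFRONT",
    "IPAddressVersion": "IPV4",
    "Addresses": ALLOWED_RANGES,
}

# Rule for the web ACL: allow requests whose source IP matches the IP set.
# The web ACL's default action would be set to Block.
web_acl_rule = {
    "Name": "allow-known-ranges",
    "Priority": 0,
    "Statement": {
        "IPSetReferenceStatement": {"ARN": "arn:aws:wafv2:..."}  # IP set ARN, elided
    },
    "Action": {"Allow": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "allow-known-ranges",
    },
}

print(ip_set["Addresses"])
```

Pairing an allow rule like this with a Block default action reproduces the security group's "only these ranges" behavior at the CloudFront edge.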

Question 1052

Exam Question

A company wants to host its web application on AWS using multiple Amazon EC2 instances across different AWS Regions. Since the application content will be specific to each geographic region, the client requests need to be routed to the server that hosts the content for that client’s Region.

What should a solutions architect do to accomplish this?

A. Configure Amazon Route 53 with a latency routing policy.
B. Configure Amazon Route 53 with a weighted routing policy.
C. Configure Amazon Route 53 with a geolocation routing policy.
D. Configure Amazon Route 53 with a multivalue answer routing policy.

Correct Answer

C. Configure Amazon Route 53 with a geolocation routing policy.

Explanation

To route client requests to the server hosting content specific to each geographic region, a solutions architect should configure Amazon Route 53 with a geolocation routing policy.

The geolocation routing policy in Amazon Route 53 allows you to route traffic based on the geographic location of the client. You can create separate resource record sets for each geographic region and associate them with the corresponding EC2 instances in each AWS Region. When a client makes a request, Route 53 determines the client’s location based on the IP address and routes the request to the appropriate EC2 instance that hosts the content for that region.

This approach ensures that clients are directed to the server closest to their geographic location, minimizing latency and providing a better user experience. By utilizing the geolocation routing policy in Route 53, the company can efficiently distribute its web application across multiple AWS Regions and serve content specific to each geographic region.
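As a sketch, a Route 53 change batch for geolocation routing might look like the following; the domain, IPs, and set identifiers are placeholders, and a real setup would add one record per targeted continent or country plus a default record for unmatched locations.

```python
# Hypothetical ChangeResourceRecordSets change batch: route European
# clients to one endpoint, everyone else to a default endpoint.
change_batch = {
    "Changes": [
        {
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": "europe",
                "GeoLocation": {"ContinentCode": "EU"},
                "TTL": 60,
                "ResourceRecords": [{"Value": "192.0.2.10"}],
            },
        },
        {
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": "default",
                "GeoLocation": {"CountryCode": "*"},  # fallback for unmatched clients
                "TTL": 60,
                "ResourceRecords": [{"Value": "198.51.100.10"}],
            },
        },
    ]
}

print(len(change_batch["Changes"]))
```

The default (`*`) record matters: without it, clients whose location cannot be matched to any geolocation record receive no answer at all.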

Question 1053

Exam Question

A company plans to store sensitive user data on Amazon S3. Internal security compliance requirements mandate encryption of data before sending it to Amazon S3.

What should a solutions architect recommend to satisfy these requirements?

A. Server-side encryption with customer-provided encryption keys
B. Client-side encryption with Amazon S3 managed encryption keys
C. Server-side encryption with keys stored in AWS Key Management Service (AWS KMS)
D. Client-side encryption with a master key stored in AWS Key Management Service (AWS KMS)

Correct Answer

D. Client-side encryption with a master key stored in AWS Key Management Service (AWS KMS)

Explanation

To satisfy the internal security compliance requirements for encrypting sensitive user data before sending it to Amazon S3, a solutions architect should recommend client-side encryption with a master key stored in AWS Key Management Service (AWS KMS).

Client-side encryption involves encrypting the data on the client side before it is uploaded to Amazon S3. With client-side encryption, the company has full control over the encryption process and the encryption keys. By using a master key stored in AWS KMS, the company can benefit from the strong encryption and key management capabilities provided by AWS KMS.

In this approach, the client application encrypts the sensitive user data using the master key retrieved from AWS KMS, and then uploads the encrypted data to Amazon S3. When retrieving the data, the client application must decrypt it using the same master key.

This solution ensures that sensitive user data is encrypted before it leaves the client’s environment and remains encrypted at rest in Amazon S3. The use of AWS KMS for key management adds an extra layer of security and control over the encryption keys.
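The flow above follows the envelope-encryption pattern: a per-object data key encrypts the payload locally, and the master key (held in KMS) protects the data key. The standard-library sketch below only illustrates that pattern; the XOR keystream is NOT a secure cipher, and a real application would use the AWS Encryption SDK with a KMS key.

```python
import hashlib
import os

# Illustrative envelope-encryption sketch. The keystream below stands in
# for a real cipher (e.g. AES-GCM) purely to show the data-key pattern.

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-keystream by hashing key || counter blocks.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(plaintext: bytes, data_key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(data_key, len(plaintext))))

decrypt = encrypt  # XOR with the same keystream reverses itself

# Simulated KMS step: generate a fresh data key for this object. Real KMS
# (GenerateDataKey) would also return the data key encrypted under the
# master key, to be stored alongside the ciphertext in S3.
data_key = os.urandom(32)

ciphertext = encrypt(b"sensitive user data", data_key)  # done client-side
recovered = decrypt(ciphertext, data_key)               # on retrieval

print(recovered)
```

The key point the pattern demonstrates: the plaintext data key never needs to leave the client, and S3 only ever stores ciphertext.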

Question 1054

Exam Question

A company provides an online service for posting video content and transcoding it for use by any mobile platform. The application architecture uses Amazon Elastic File System (Amazon EFS) Standard to collect and store the videos so that multiple Amazon EC2 Linux instances can access the video content for processing. As the popularity of the service has grown over time, the storage costs have become too expensive.

Which storage solution is MOST cost-effective?

A. Use AWS Storage Gateway for files to store and process the video content.
B. Use AWS Storage Gateway for volumes to store and process the video content.
C. Use Amazon EFS for storing the video content. Once processing is complete, transfer the files to Amazon Elastic Block Store (Amazon EBS).
D. Use Amazon S3 for storing the video content. Move the files temporarily over to an Amazon Elastic Block Store (Amazon EBS) volume attached to the server for processing.

Correct Answer

D. Use Amazon S3 for storing the video content. Move the files temporarily over to an Amazon Elastic Block Store (Amazon EBS) volume attached to the server for processing.

Explanation

To achieve the most cost-effective storage solution for the video content, a solutions architect should recommend using Amazon S3 for storing the video content and temporarily moving the files to an Amazon Elastic Block Store (Amazon EBS) volume attached to the server for processing.

Amazon S3 is a highly durable and cost-effective object storage service that is well-suited for storing large amounts of data, such as video content. By using Amazon S3, the company can take advantage of its low storage costs.

When processing the videos, instead of using Amazon EFS, which can be more expensive, the solution can temporarily move the files to an Amazon EBS volume attached to the server. Amazon EBS provides block storage volumes that can be easily attached to EC2 instances for temporary storage during processing. Once the processing is complete, the files can be moved back to Amazon S3 for long-term storage.

This approach helps optimize costs by leveraging the cost advantages of Amazon S3 for storage and using Amazon EBS volumes only for temporary processing needs, reducing the overall storage costs for the company.
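A back-of-envelope comparison shows why the storage class matters at scale. The per-GB prices below are illustrative assumptions only (actual prices vary by Region and change over time); the point is the order-of-magnitude gap between EFS Standard and S3 Standard.

```python
# Assumed, illustrative monthly storage prices (check current Region pricing).
EFS_STANDARD_PER_GB = 0.30   # assumed $/GB-month
S3_STANDARD_PER_GB = 0.023   # assumed $/GB-month

stored_gb = 10_000  # e.g. 10 TB of accumulated video content

efs_monthly = stored_gb * EFS_STANDARD_PER_GB
s3_monthly = stored_gb * S3_STANDARD_PER_GB

print(f"EFS: ${efs_monthly:,.2f}/month, S3: ${s3_monthly:,.2f}/month")
print(f"S3 is roughly {efs_monthly / s3_monthly:.0f}x cheaper at these rates")
```

Even allowing for EBS costs during the temporary processing window, the bulk of the data spends its life in the cheaper S3 tier.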

Question 1055

Exam Question

A company wants to run a hybrid workload for data processing. The data needs to be accessed by on-premises applications for local data processing using an NFS protocol, and must also be accessible from the AWS Cloud for further analytics and batch processing.

Which solution will meet these requirements?

A. Use an AWS Storage Gateway file gateway to provide file storage to AWS, then perform analytics on this data in the AWS Cloud.
B. Use an AWS Storage Gateway tape gateway to copy the backup of the local data to AWS, then perform analytics on this data in the AWS Cloud.
C. Use an AWS Storage Gateway volume gateway in a stored volume configuration to regularly take snapshots of the local data, then copy the data to AWS.
D. Use an AWS Storage Gateway volume gateway in a cached volume configuration to back up all the local storage in the AWS Cloud, then perform analytics on this data in the cloud.

Correct Answer

A. Use an AWS Storage Gateway file gateway to provide file storage to AWS, then perform analytics on this data in the AWS Cloud.

Explanation

To meet the requirements of running a hybrid workload for data processing with access by both on-premises applications and the AWS Cloud, the recommended solution is to use an AWS Storage Gateway file gateway.

The AWS Storage Gateway file gateway provides a file interface (NFS or SMB) to on-premises applications, allowing them to access the data stored in Amazon S3. The data can be ingested and processed by on-premises applications using the NFS protocol. Simultaneously, the data is accessible in the AWS Cloud for further analytics and batch processing.

By using the file gateway, the company can leverage the scalability and durability of Amazon S3 for storing and processing the data in the AWS Cloud. It provides a seamless and efficient integration between on-premises applications and the cloud, allowing the company to take advantage of cloud-based analytics capabilities while preserving the ability to perform local data processing.

Option A, using an AWS Storage Gateway file gateway, is the most suitable solution that meets the requirements of the hybrid workload for data processing.
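On the on-premises side, the file gateway's NFS export is mounted like any other NFS share. The fragment below is a sketch of an `/etc/fstab` entry; the gateway IP, export path (file gateway exports are named after the backing S3 bucket), and mount point are placeholders.

```
# /etc/fstab entry for a Storage Gateway file share (placeholder values)
203.0.113.50:/example-bucket  /mnt/videos  nfs  nolock,hard  0  0
```

Files written to `/mnt/videos` then appear as objects in the backing S3 bucket, where cloud-side analytics jobs can read them.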

Question 1056

Exam Question

A company has applications hosted on Amazon EC2 instances with IPv6 addresses. The applications must initiate communications with other external applications using the internet. However, the company security policy states that any external service cannot initiate a connection to the EC2 instances.

What should a solutions architect recommend to resolve this issue?

A. Create a NAT gateway and make it the destination of the subnet route table.
B. Create an internet gateway and make it the destination of the subnet route table.
C. Create a virtual private gateway and make it the destination of the subnet route table.
D. Create an egress-only internet gateway and make it the destination of the subnet route table.

Correct Answer

D. Create an egress-only internet gateway and make it the destination of the subnet route table.

Explanation

To resolve the issue where the company’s applications hosted on Amazon EC2 instances with IPv6 addresses need to initiate communications with other external applications using the internet, while ensuring that external services cannot initiate a connection to the EC2 instances, a solutions architect should recommend the following:

D. Create an egress-only internet gateway and make it the destination of the subnet route table.

An egress-only internet gateway is a horizontally scalable, redundant, and highly available VPC component that allows outbound communication over IPv6 from the instances in the VPC to the internet. It prevents inbound communication initiated from the internet, ensuring that external services cannot establish connections with the EC2 instances. By configuring the subnet route table to direct all outbound IPv6 traffic to the egress-only internet gateway, the EC2 instances can initiate outbound connections while remaining protected from incoming connections.

This solution satisfies the security policy requirement and allows the company’s applications to communicate with external services using IPv6 addresses while maintaining the desired level of control and security.
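The routing change itself is a single IPv6 default route. The dictionary below sketches the parameters an `ec2 create-route` call would take; the gateway ID is a placeholder.

```python
# Route table entry sending all outbound IPv6 traffic through an
# egress-only internet gateway (gateway ID is a placeholder).
route = {
    "DestinationIpv6CidrBlock": "::/0",  # all IPv6 destinations
    "EgressOnlyInternetGatewayId": "eigw-0123456789abcdef0",
}

print(route["DestinationIpv6CidrBlock"])
```

Because the target is an egress-only internet gateway rather than an internet gateway, the route carries outbound-initiated traffic (and its replies) but never inbound-initiated connections.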

Question 1057

Exam Question

A company delivers files in Amazon S3 to certain users who do not have AWS credentials. These users must be given access for a limited time.

What should a solutions architect do to securely meet these requirements?

A. Enable public access on an Amazon S3 bucket.
B. Generate a pre-signed URL to share with the users.
C. Encrypt files using AWS KMS and provide keys to the users.
D. Create and assign IAM roles that will grant GetObject permissions to the users.

Correct Answer

B. Generate a pre-signed URL to share with the users.

Explanation

To securely provide temporary access to users who do not have AWS credentials for files stored in Amazon S3, a solutions architect should recommend the following:

B. Generate a pre-signed URL to share with the users.

A pre-signed URL is a time-limited URL that grants temporary access to specific S3 objects. By generating a pre-signed URL, the architect can define the time duration for which the URL will be valid, typically for a limited period of time. This URL can then be shared with the users, allowing them to access and download the specified S3 object without requiring AWS credentials.

This approach ensures secure access to the files for a limited time as the pre-signed URL automatically expires after the specified duration. It eliminates the need to enable public access, which could compromise security, and avoids the complexity of managing and distributing encryption keys or IAM roles.

Therefore, generating a pre-signed URL is the recommended solution for securely providing temporary access to files in Amazon S3 for users without AWS credentials.
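The mechanism can be sketched with the standard library: the URL carries an expiry timestamp and an HMAC signature over the request details, so the service can validate the request without the caller holding credentials. This is a simplified illustration only; real S3 pre-signed URLs use AWS Signature Version 4, and the secret key below is a stand-in.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET_KEY = b"example-secret-key"  # stands in for the signer's secret key

def sign(bucket: str, key: str, expires: int) -> str:
    # Sign the method, bucket, object key, and expiry together.
    to_sign = f"GET\n{bucket}\n{key}\n{expires}".encode()
    return hmac.new(SECRET_KEY, to_sign, hashlib.sha256).hexdigest()

def verify(bucket: str, key: str, expires: int, signature: str, now: int) -> bool:
    if now > expires:
        return False  # the link has expired
    return hmac.compare_digest(sign(bucket, key, expires), signature)

now = int(time.time())
expires = now + 3600  # URL valid for one hour
sig = sign("example-bucket", "report.pdf", expires)
url = (
    "https://example-bucket.s3.amazonaws.com/report.pdf?"
    + urlencode({"Expires": expires, "Signature": sig})
)

print(url)
print(verify("example-bucket", "report.pdf", expires, sig, now))         # valid now
print(verify("example-bucket", "report.pdf", expires, sig, now + 7200))  # expired later
```

Tampering with the object key or extending the expiry invalidates the signature, which is why the link grants exactly one object for exactly the chosen window. In practice, `boto3`'s `generate_presigned_url` produces such URLs.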

Question 1058

Exam Question

An administrator of a large company wants to monitor for and prevent any cryptocurrency-related attacks on the company AWS accounts.

Which AWS service can the administrator use to protect the company against attacks?

A. Amazon Cognito
B. Amazon GuardDuty
C. Amazon Inspector
D. Amazon Macie

Correct Answer

B. Amazon GuardDuty

Explanation

To protect against cryptocurrency-related attacks on AWS accounts, the administrator can use:

B. Amazon GuardDuty.

Amazon GuardDuty is a threat detection service that continuously monitors AWS accounts for malicious activity and unauthorized behavior. It uses machine learning and integrated threat intelligence to identify various types of threats, including cryptocurrency mining attacks. GuardDuty analyzes VPC Flow Logs, AWS CloudTrail event logs, and DNS query logs to detect suspicious activity associated with cryptocurrency mining or other malicious activities.

By using GuardDuty, the administrator can receive real-time alerts and insights into potential cryptocurrency-related attacks, allowing them to take appropriate actions to mitigate the threats and protect the company’s AWS accounts.

Therefore, Amazon GuardDuty is the appropriate AWS service to help protect the company against cryptocurrency-related attacks.

Question 1059

Exam Question

A company is processing data on a daily basis. The results of the operations are stored in an Amazon S3 bucket, analyzed daily for one week, and then must remain immediately accessible for occasional analysis.

What is the MOST cost-effective storage solution alternative to the current configuration?

A. Configure a lifecycle policy to delete the objects after 30 days.
B. Configure a lifecycle policy to transition the objects to Amazon S3 Glacier after 30 days.
C. Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.
D. Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.

Correct Answer

C. Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.

Explanation

The most cost-effective storage solution alternative for the given scenario would be:

C. Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.

With this configuration, the data is initially stored in Amazon S3, and after 30 days, the lifecycle policy automatically transitions the objects to the S3 Standard-IA storage class. S3 Standard-IA provides the same durability and availability as S3 Standard but at a lower storage cost. It is designed for data that is accessed less frequently but still requires immediate accessibility when needed.

By using S3 Standard-IA, the company can significantly reduce storage costs compared to storing the data in the S3 Standard storage class indefinitely. It ensures that the data remains immediately accessible for occasional analysis while optimizing costs for long-term storage.

Therefore, configuring a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days is the most cost-effective storage solution alternative in this case.
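As a sketch, the lifecycle configuration for option C could take the shape below (the rule ID is a placeholder). Note that 30 days also happens to be the minimum age S3 requires before an object may transition to Standard-IA, so the timing fits the analysis window described above.

```python
# S3 lifecycle configuration transitioning all objects to Standard-IA
# after 30 days; rule ID is a placeholder.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "to-standard-ia",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"}
            ],
        }
    ]
}

print(lifecycle_configuration["Rules"][0]["Transitions"][0]["StorageClass"])
```

This dictionary matches the shape `boto3`'s `put_bucket_lifecycle_configuration` expects for its `LifecycleConfiguration` parameter.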

Question 1060

Exam Question

A company is developing a video conversion application hosted on AWS. The application will be available in two tiers: a free tier and a paid tier. Users in the paid tier will have their videos converted first, and then the free tier users will have their videos converted.

Which solution meets these requirements and is MOST cost-effective?

A. One FIFO queue for the paid tier and one standard queue for the free tier.
B. A single FIFO Amazon Simple Queue Service (Amazon SQS) queue for all file types.
C. A single standard Amazon Simple Queue Service (Amazon SQS) queue for all file types.
D. Two standard Amazon Simple Queue Service (Amazon SQS) queues with one for the paid tier and one for the free tier.

Correct Answer

D. Two standard Amazon Simple Queue Service (Amazon SQS) queues with one for the paid tier and one for the free tier.

Explanation

The most cost-effective solution that meets the requirements is:

D. Two standard Amazon Simple Queue Service (Amazon SQS) queues with one for the paid tier and one for the free tier.

By using two separate queues, one for the paid tier and one for the free tier, the application can prioritize the conversion of videos for paid tier users first. This ensures that the paid tier users receive their converted videos promptly before processing the videos for the free tier users.

Using standard Amazon SQS queues is a cost-effective choice because they provide reliable message delivery at a lower cost compared to FIFO queues. Since the order of video conversions does not need to be strictly enforced within each tier, using standard queues for both tiers is a suitable and cost-effective solution.

Therefore, having two standard Amazon SQS queues, one for the paid tier and one for the free tier, is the recommended solution that is both cost-effective and meets the requirements of prioritizing video conversions for the paid tier users.
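The worker-side logic behind the two-queue design can be sketched as a polling loop that drains the paid-tier queue first and falls back to the free-tier queue only when the paid queue is empty. In-memory deques stand in for the two SQS queues here (a real worker would call `receive_message` on each queue URL instead).

```python
from collections import deque

# Stand-ins for the two SQS queues; in production these would be
# ReceiveMessage calls against two separate queue URLs.
paid_queue = deque(["paid-video-1", "paid-video-2"])
free_queue = deque(["free-video-1"])

def next_job():
    """Return the next video to convert, always preferring the paid tier."""
    if paid_queue:
        return paid_queue.popleft()
    if free_queue:
        return free_queue.popleft()
    return None

order = []
while (job := next_job()) is not None:
    order.append(job)

print(order)  # paid jobs drain first, then free-tier jobs
```

Because the prioritization lives entirely in the consumer, plain standard queues suffice; no FIFO ordering guarantees (or their extra cost) are needed.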