AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 36

The latest AWS Certified Solutions Architect – Associate SAA-C03 actual practice exam question and answer (Q&A) dumps are available free and are helpful for passing the SAA-C03 exam and earning the AWS Certified Solutions Architect – Associate certification.

Question 1071

Exam Question

A solutions architect has created two IAM policies: Policy1 and Policy2. Both policies are attached to an IAM group.

A cloud engineer is added as an IAM user to the IAM group.

Which action will the cloud engineer be able to perform?

A. Deleting IAM users
B. Deleting directories
C. Deleting Amazon EC2 instances
D. Deleting logs from Amazon CloudWatch Logs

Correct Answer

C. Deleting Amazon EC2 instances

Question 1072

Exam Question

A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side JavaScript, and images.

Which method is the MOST cost-effective for hosting the website?

A. Containerize the website and host it in AWS Fargate.
B. Create an Amazon S3 bucket and host the website there.
C. Deploy a web server on an Amazon EC2 instance to host the website.
D. Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework.

Correct Answer

B. Create an Amazon S3 bucket and host the website there.

Explanation

The most cost-effective method for hosting a website consisting of HTML, CSS, client-side JavaScript, and images would be:

B. Create an Amazon S3 bucket and host the website there.

Amazon S3 (Simple Storage Service) is a highly scalable and cost-effective object storage service offered by AWS. It is suitable for static website hosting and can store and serve static website content directly from the S3 bucket.

By creating an S3 bucket and configuring it for static website hosting, you can upload your HTML, CSS, JavaScript, and image files to the bucket. The content will be accessible via a website endpoint provided by S3.

This approach eliminates the need to manage and maintain an EC2 instance or containers, reducing operational costs. Additionally, S3 provides high scalability and durability for storing and serving static website content.
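
As a rough sketch, the bucket can be switched to static website hosting with a couple of API calls. The example below uses boto3 with a hypothetical bucket name and file names:

```python
import boto3

s3 = boto3.client("s3")
bucket = "team-docs-site"  # hypothetical bucket name

# Enable static website hosting on the bucket
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload a page with the right content type so browsers render it
with open("index.html", "rb") as f:
    s3.put_object(
        Bucket=bucket,
        Key="index.html",
        Body=f,
        ContentType="text/html",
    )
```

For the S3 website endpoint to serve these files, the bucket policy must also allow public reads (or the bucket can stay private behind a CloudFront distribution).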

  • Option A suggests containerizing the website and hosting it in AWS Fargate, which may be more suitable for dynamic applications or scenarios requiring specific runtime environments.
  • Option C suggests deploying a web server on an EC2 instance, which involves additional operational overhead and costs compared to using S3 for static content hosting.
  • Option D suggests configuring an Application Load Balancer with an AWS Lambda target using the Express.js framework, which is more suitable for serverless applications and dynamic content generation.

Therefore, the most cost-effective method for hosting a static website would be to create an Amazon S3 bucket and host the website content there.

Question 1073

Exam Question

A company’s web application is running on Amazon EC2 instances behind an Application Load Balancer. The company recently changed its policy, which now requires the application to be accessed from one specific country only.

Which configuration will meet this requirement?

A. Configure the security group for the EC2 instances.
B. Configure the security group on the Application Load Balancer.
C. Configure AWS WAF on the Application Load Balancer in a VPC.
D. Configure the network ACL for the subnet that contains the EC2 instances.

Correct Answer

C. Configure AWS WAF on the Application Load Balancer in a VPC.

Explanation

To meet the requirement of allowing access to the web application from one specific country only when using Amazon EC2 instances behind an Application Load Balancer, the following configuration should be implemented:

C. Configure AWS WAF on the Application Load Balancer in a VPC.

AWS WAF (Web Application Firewall) is a web application firewall service that helps protect web applications from common web exploits and allows you to define customized security rules. It integrates with the Application Load Balancer (ALB) to provide advanced security features.

To restrict access to the web application from one specific country, you can use AWS WAF’s Geo Match conditions. Geo Match conditions allow you to create rules based on the geographic location of the client’s IP address. You can specify the desired country as the allowed source location.

By configuring AWS WAF on the Application Load Balancer in a VPC, you can enforce the country-based access restriction before the traffic reaches the EC2 instances. This ensures that only requests originating from the specified country are allowed to access the web application, while blocking requests from other countries.
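
A minimal sketch of this setup with boto3 is shown below. The web ACL name, the country code, and the load balancer ARN are placeholders, not values from the question; the web ACL blocks by default and allows only the specified country:

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")  # region of the ALB

# Web ACL that blocks by default and allows only one country (DE here)
acl = wafv2.create_web_acl(
    Name="allow-one-country",  # hypothetical name
    Scope="REGIONAL",          # REGIONAL scope is required for ALBs
    DefaultAction={"Block": {}},
    Rules=[
        {
            "Name": "allow-country",
            "Priority": 0,
            "Statement": {"GeoMatchStatement": {"CountryCodes": ["DE"]}},
            "Action": {"Allow": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "AllowCountry",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "AllowOneCountry",
    },
)

# Attach the web ACL to the Application Load Balancer (hypothetical ARN)
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
)
```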

  • Option A suggests configuring the security group for the EC2 instances. While security groups can control inbound and outbound traffic at the instance level, they do not provide country-based filtering capabilities.
  • Option B suggests configuring the security group on the Application Load Balancer. Similar to option A, security groups at the load balancer level control traffic flow but do not offer country-based filtering.
  • Option D suggests configuring the network ACL for the subnet containing the EC2 instances. Network ACLs can provide inbound and outbound traffic control at the subnet level, but they lack the granular country-based filtering capabilities required in this scenario.

Therefore, the most appropriate solution is to configure AWS WAF on the Application Load Balancer in a VPC and use Geo Match conditions to allow access from the specific country only.

Question 1074

Exam Question

A company’s dynamic website is hosted using on-premises servers in the United States. The company is launching its product in Europe, and it wants to optimize site loading times for new European users. The site’s backend must remain in the United States. The product is being launched in a few days, and an immediate solution is needed.

What should the solutions architect recommend?

A. Launch an Amazon EC2 instance in us-east-1 and migrate the site to it.
B. Move the website to Amazon S3. Use cross-Region replication between Regions.
C. Use Amazon CloudFront with a custom origin pointing to the on-premises servers
D. Use an Amazon Route 53 geo-proximity routing policy pointing to on-premises servers.

Correct Answer

C. Use Amazon CloudFront with a custom origin pointing to the on-premises servers

Explanation

To optimize site loading times for new European users while keeping the site’s backend in the United States, the following solution should be recommended:

C. Use Amazon CloudFront with a custom origin pointing to the on-premises servers.

Amazon CloudFront is a content delivery network (CDN) that helps improve the performance and availability of websites and applications. By deploying CloudFront with a custom origin pointing to the on-premises servers, you can leverage its global network of edge locations to cache and serve the website’s static and dynamic content to users in Europe.

Here’s how the solution works:

  1. Set up an Amazon CloudFront distribution with the appropriate settings and configurations.
  2. Configure the custom origin for the CloudFront distribution to point to the on-premises servers hosting the dynamic website (a sketch of this follows the list).
  3. Configure CloudFront caching behavior to cache static content at the edge locations, reducing latency for subsequent requests.
  4. When a European user accesses the website, their requests will be routed to the nearest CloudFront edge location in Europe, reducing the round-trip time and improving site loading times.
  5. CloudFront will fetch the content from the on-premises servers as needed and cache it at the edge locations, resulting in faster subsequent requests from users in Europe.
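
Steps 1 and 2 can be sketched with boto3 as follows. The origin hostname is a placeholder, and a real distribution would also need a TLS certificate and cache behaviors tuned to the site:

```python
import time

import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # any unique string
        "Comment": "Dynamic site served from an on-premises origin",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "onprem-origin",
                    # Hypothetical hostname of the on-premises web servers
                    "DomainName": "origin.example.com",
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "https-only",
                    },
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "onprem-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Dynamic sites typically need query strings and cookies
            # forwarded to the origin; static assets can cache longer.
            "ForwardedValues": {
                "QueryString": True,
                "Cookies": {"Forward": "all"},
            },
            "MinTTL": 0,
        },
    }
)

# The *.cloudfront.net endpoint to point the site's DNS at
print(response["Distribution"]["DomainName"])
```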

Option A suggests launching an Amazon EC2 instance in us-east-1 and migrating the site to it. Because us-east-1 is in the United States, this does nothing to reduce latency for European users, and a full site migration is not realistic in the few days available.

Option B suggests moving the website to Amazon S3 and using cross-Region replication between Regions. While Amazon S3 can serve static content efficiently, it may not be suitable for hosting dynamic content or interacting with on-premises servers.

Option D suggests using an Amazon Route 53 geo-proximity routing policy pointing to on-premises servers. While Route 53 can direct traffic based on geographic proximity, it does not provide the caching and content delivery capabilities of Amazon CloudFront, which is more suitable for optimizing site loading times.

Therefore, the most appropriate solution in this scenario is to use Amazon CloudFront with a custom origin pointing to the on-premises servers, allowing for improved site performance for European users while keeping the site’s backend in the United States.

Question 1075

Exam Question

A solutions architect is designing a mission-critical web application. It will consist of Amazon EC2 instances behind an Application Load Balancer and a relational database. The database should be highly available and fault tolerant.

Which database implementations will meet these requirements? (Choose two.)

A. Amazon Redshift
B. Amazon DynamoDB
C. Amazon RDS for MySQL
D. MySQL-compatible Amazon Aurora Multi-AZ
E. Amazon RDS for SQL Server Standard Edition Multi-AZ

Correct Answer

C. Amazon RDS for MySQL
D. MySQL-compatible Amazon Aurora Multi-AZ

Explanation

To ensure high availability and fault tolerance for a mission-critical web application with Amazon EC2 instances behind an Application Load Balancer, the following database implementations can meet these requirements:

C. Amazon RDS for MySQL
D. MySQL-compatible Amazon Aurora Multi-AZ

C. Amazon RDS for MySQL: Amazon RDS (Relational Database Service) provides managed MySQL database instances. By selecting Multi-AZ deployment for Amazon RDS for MySQL, it creates a standby replica of the primary database in a different Availability Zone (AZ) for automatic failover in the event of a failure. This configuration ensures high availability and fault tolerance for the database.

D. MySQL-compatible Amazon Aurora Multi-AZ: Amazon Aurora is a MySQL-compatible database engine provided by AWS. Aurora Multi-AZ deploys a primary database instance in one AZ and synchronously replicates it to a standby replica in another AZ. This configuration ensures automatic failover in case of a primary instance failure, providing high availability and fault tolerance.
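
A minimal boto3 sketch of a Multi-AZ RDS for MySQL deployment follows; the identifier, instance class, storage size, and credentials are hypothetical:

```python
import boto3

rds = boto3.client("rds")

# Multi-AZ keeps a synchronous standby replica in a second AZ
rds.create_db_instance(
    DBInstanceIdentifier="webapp-mysql",  # hypothetical identifier
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",      # use Secrets Manager in practice
    MultiAZ=True,                         # enables automatic failover
)
```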

Option A, Amazon Redshift, is a fully managed data warehousing service, and it is not designed for transactional web applications.

Option B, Amazon DynamoDB, is a NoSQL database service that offers high scalability and availability. However, it does not provide the relational capabilities typically required for a web application with an RDBMS.

Option E, Amazon RDS for SQL Server Standard Edition Multi-AZ, would also be highly available through its Multi-AZ deployment, but the scenario gives no reason to adopt SQL Server, so the MySQL-based options are the better fit.

Therefore, the most suitable options for highly available and fault-tolerant database implementations for the mission-critical web application are:
C. Amazon RDS for MySQL
D. MySQL-compatible Amazon Aurora Multi-AZ

Question 1076

Exam Question

A company has recently updated its internal security standards. The company must now ensure all Amazon S3 buckets and Amazon Elastic Block Store (Amazon EBS) volumes are encrypted with keys created and periodically rotated by internal security specialists. The company is looking for a native, software-based AWS service to accomplish this goal.

What should a solutions architect recommend as a solution?

A. Use AWS Secrets Manager with customer master keys (CMKs) to store master key material and apply a routine to create a new CMK periodically and replace it in AWS Secrets Manager.
B. Use AWS Key Management Service (AWS KMS) with customer master keys (CMKs) to store master key material and apply a routine to re-create a new key periodically and replace it in AWS KMS.
C. Use an AWS CloudHSM cluster with customer master keys (CMKs) to store master key material and apply a routine to re-create a new key periodically and replace it in the CloudHSM cluster nodes.
D. Use AWS Systems Manager Parameter Store with customer master keys (CMKs) to store master key material and apply a routine to re-create a new key periodically and replace it in the Parameter Store.

Correct Answer

B. Use AWS Key Management Service (AWS KMS) with customer master keys (CMKs) to store master key material and apply a routine to re-create a new key periodically and replace it in AWS KMS.

Explanation

A solutions architect should recommend the following solution:

B. Use AWS Key Management Service (AWS KMS) with customer master keys (CMKs) to store master key material and apply a routine to re-create a new key periodically and replace it in AWS KMS.

AWS Key Management Service (AWS KMS) is a native, software-based AWS service that provides key management and encryption services. It allows you to create and manage customer master keys (CMKs) to encrypt data in AWS services such as Amazon S3 and Amazon EBS volumes. AWS KMS integrates with other AWS services and provides robust key management capabilities.

To meet the company’s requirement of encrypting all Amazon S3 buckets and Amazon EBS volumes with keys created and periodically rotated by internal security specialists, AWS KMS can be used. The security specialists can create and manage CMKs in AWS KMS, periodically generate new keys, and replace the existing keys in AWS KMS. This approach ensures that all data encrypted with these keys is protected and meets the company’s security standards.
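
One way to implement this rotation routine is to keep applications pointed at a KMS alias and have the security specialists repoint the alias to a newly created key on each rotation. A boto3 sketch, with a hypothetical alias name:

```python
import boto3

kms = boto3.client("kms")

# Initial key creation, with an alias that applications reference
key = kms.create_key(Description="Encryption key for S3 and EBS data")
kms.create_alias(
    AliasName="alias/data-encryption",  # hypothetical alias
    TargetKeyId=key["KeyMetadata"]["KeyId"],
)

# Periodic manual rotation: create a new key and repoint the alias.
# Old ciphertexts remain decryptable because each records its key ID.
new_key = kms.create_key(Description="Rotated encryption key for S3 and EBS data")
kms.update_alias(
    AliasName="alias/data-encryption",
    TargetKeyId=new_key["KeyMetadata"]["KeyId"],
)
```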

Options A, C, and D are not the most suitable solutions for this scenario:

  • A. AWS Secrets Manager is primarily used for storing and managing secrets, such as database credentials, API keys, and passwords. It is not designed for managing encryption keys for Amazon S3 and Amazon EBS volumes.
  • C. AWS CloudHSM is a hardware security module (HSM) service that provides secure key storage and cryptographic operations. While it offers strong security, it is a hardware-based solution and may not be necessary for this scenario.
  • D. AWS Systems Manager Parameter Store is a managed service for storing configuration data. While it can store sensitive information, it is not specifically designed for key management and encryption purposes like AWS KMS.

Therefore, the most suitable solution for native, software-based AWS service for encrypting Amazon S3 buckets and Amazon EBS volumes with periodically rotated keys is:

B. Use AWS Key Management Service (AWS KMS) with customer master keys (CMKs) to store master key material and apply a routine to re-create a new key periodically and replace it in AWS KMS.

Question 1077

Exam Question

A Solutions Architect must design a web application that will be hosted on AWS, allowing users to purchase access to premium, shared content that is stored in an S3 bucket. Upon payment, content will be available for download for 14 days before the user is denied access.

Which of the following would be the LEAST complicated implementation?

A. Use an Amazon CloudFront distribution with an origin access identity (OAI). Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design a Lambda function to remove data that is older than 14 days.
B. Use an S3 bucket and provide direct access to the file. Design the application to track purchases in a DynamoDB table. Configure a Lambda function to remove data that is older than 14 days based on a query to Amazon DynamoDB.
C. Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design the application to set an expiration of 14 days for the URL.
D. Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design the application to set an expiration of 60 minutes for the URL and recreate the URL as necessary.

Correct Answer

C. Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design the application to set an expiration of 14 days for the URL.

Explanation

The least complicated implementation for the given scenario would be:

C. Use an Amazon CloudFront distribution with an Origin Access Identity (OAI). Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design the application to set an expiration of 14 days for the URL.

In this implementation, an Amazon CloudFront distribution is used to serve the premium, shared content stored in an Amazon S3 bucket. By configuring the distribution with an Origin Access Identity (OAI), access to the content is restricted to only be served through CloudFront. This helps protect the direct access to the S3 bucket.

Signed URLs are generated by the application and provided to users upon successful payment. These URLs have an expiration time set to 14 days, as specified in the requirement. This ensures that users can download the content for a specific duration before access is denied.

This implementation is the least complicated because it involves using CloudFront’s built-in capabilities for serving signed URLs and setting their expiration time. There is no need to track purchases in DynamoDB or create custom Lambda functions to remove data.
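
A sketch of generating such a URL with botocore's CloudFrontSigner is shown below. The key-pair ID, private key file, and content URL are placeholders; the private key must correspond to a public key registered with CloudFront:

```python
from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message: bytes) -> bytes:
    # Sign with the private key matching the registered CloudFront public key
    with open("private_key.pem", "rb") as f:  # hypothetical key file
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # hypothetical key-pair ID

signed_url = signer.generate_presigned_url(
    "https://d1234.cloudfront.net/premium/video.mp4",  # hypothetical URL
    date_less_than=datetime.utcnow() + timedelta(days=14),
)
print(signed_url)
```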

Option A is more complex because it introduces a Lambda function to remove data older than 14 days, which adds additional complexity and management overhead.

Option B involves tracking purchases in DynamoDB and using a Lambda function to remove data older than 14 days based on queries to DynamoDB. This requires setting up and maintaining a DynamoDB table, creating appropriate indexes, and managing the data removal process.

Option D sets the URL expiration to 60 minutes and requires the application to recreate the URL as necessary. While it provides a shorter access window, it adds complexity to the application logic to generate and manage the URLs more frequently.

Therefore, option C is the least complicated implementation as it leverages CloudFront’s signed URLs and expiration feature to provide access to the premium content for 14 days.

Question 1078

Exam Question

A solutions architect is designing the cloud architecture for a new application being deployed to AWS. The application allows users to interactively download and upload files. Files older than 2 years will be accessed less frequently. The solutions architect needs to ensure that the application can scale to any number of files while maintaining high availability and durability.

Which scalable solutions should the solutions architect recommend? (Choose two.)

A. Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Glacier.
B. Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Standard-Infrequent Access (S3 Standard-IA)
C. Store the files on Amazon Elastic File System (Amazon EFS) with a lifecycle policy that moves objects older than 2 years to EFS Infrequent Access (EFS IA).
D. Store the files in Amazon Elastic Block Store (Amazon EBS) volumes. Schedule snapshots of the volumes. Use the snapshots to archive data older than 2 years.
E. Store the files in RAID-striped Amazon Elastic Block Store (Amazon EBS) volumes. Schedule snapshots of the volumes. Use the snapshots to archive data older than 2 years.

Correct Answer

A. Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Glacier.
B. Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Standard-Infrequent Access (S3 Standard-IA)

Explanation

The scalable solutions that the solutions architect should recommend are:

A. Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Glacier.
B. Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Standard-Infrequent Access (S3 Standard-IA).

Amazon S3 is a highly scalable and durable storage service that can handle any number of files. It provides high availability and durability for the files stored in it.

Option A suggests storing the files on Amazon S3 and using a lifecycle policy to move objects older than 2 years to S3 Glacier. S3 Glacier is an archival storage service that offers low-cost storage for long-term retention of data. This solution allows for cost optimization by transitioning less frequently accessed files to a lower-cost storage class while still maintaining durability and availability.

Option B suggests storing the files on Amazon S3 and using a lifecycle policy to move objects older than 2 years to S3 Standard-Infrequent Access (S3 Standard-IA). S3 Standard-IA is a storage class in Amazon S3 that is designed for less frequently accessed data but still provides high durability and availability. It offers lower storage costs compared to S3 Standard, making it suitable for files that are accessed less frequently.
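
As a rough sketch, the transition in option B maps to a single S3 lifecycle rule, shown below with boto3 and a hypothetical bucket name; changing the StorageClass to GLACIER would give option A instead:

```python
import boto3

s3 = boto3.client("s3")

# Move objects to S3 Standard-IA once they are 2 years old
s3.put_bucket_lifecycle_configuration(
    Bucket="user-files-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-files",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [
                    {"Days": 730, "StorageClass": "STANDARD_IA"},
                ],
            }
        ]
    },
)
```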

Option C suggests using Amazon Elastic File System (Amazon EFS) with a lifecycle policy to move files older than 2 years to EFS Infrequent Access (EFS IA). EFS does offer an Infrequent Access storage class through its lifecycle management, but EFS is a network file system intended to be mounted by compute instances; it is considerably more expensive than Amazon S3 and is not the natural fit for an application that serves interactive file uploads and downloads at any scale.

Option D suggests storing the files in Amazon Elastic Block Store (Amazon EBS) volumes and using snapshots to archive data older than 2 years. However, EBS volumes are not the optimal choice for storing large numbers of files as they are typically used as block-level storage for EC2 instances and may not provide the same level of scalability and durability as S3.

Option E suggests using RAID-striped Amazon EBS volumes and snapshots for archiving data older than 2 years. Similar to option D, EBS volumes are not the ideal choice for scalable storage of a large number of files, and using RAID-striping does not address the scalability and durability requirements.

Therefore, options A and B are the recommended scalable solutions, utilizing Amazon S3 with lifecycle policies to transition older files to appropriate storage classes while ensuring high availability and durability.

Question 1079

Exam Question

A solutions architect observes that a nightly batch processing job is automatically scaled up for 1 hour before the desired Amazon EC2 capacity is reached. The peak capacity is the same every night and the batch jobs always start at 1 AM. The solutions architect needs to find a cost-effective solution that will allow for the desired EC2 capacity to be reached quickly and allow the Auto Scaling group to scale down after the batch jobs are complete.

What should the solutions architect do to meet these requirements?

A. Increase the minimum capacity for the Auto Scaling group.
B. Increase the maximum capacity for the Auto Scaling group.
C. Configure scheduled scaling to scale up to the desired compute level.
D. Change the scaling policy to add more EC2 instances during each scaling operation.

Correct Answer

C. Configure scheduled scaling to scale up to the desired compute level.

Explanation

To meet the requirements of quickly reaching the desired EC2 capacity for the nightly batch processing job and allowing the Auto Scaling group to scale down after the batch jobs are complete, the solutions architect should:

C. Configure scheduled scaling to scale up to the desired compute level.

Scheduled scaling allows you to plan and set specific scaling actions at predetermined times. In this case, since the batch jobs always start at 1 AM, you can configure scheduled scaling to increase the desired capacity of the Auto Scaling group just before 1 AM to the desired compute level required for the batch processing. By doing so, the Auto Scaling group will proactively scale up the EC2 instances to the desired capacity before the batch jobs start, ensuring that the capacity is available for the processing.
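
A boto3 sketch of such a schedule is shown below. The group name, capacities, and the scale-down time are hypothetical; only the 1 AM start is given in the question:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out shortly before the 1 AM batch window (times are UTC)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="batch-processing-asg",  # hypothetical ASG name
    ScheduledActionName="scale-up-for-batch",
    Recurrence="45 0 * * *",  # every day at 00:45
    DesiredCapacity=20,       # hypothetical peak capacity
)

# Scale back in after the jobs finish
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="batch-processing-asg",
    ScheduledActionName="scale-down-after-batch",
    Recurrence="0 3 * * *",   # hypothetical completion time
    DesiredCapacity=2,
)
```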

Increasing the minimum capacity (option A) or increasing the maximum capacity (option B) of the Auto Scaling group won’t address the requirement of reaching the desired capacity quickly for the batch jobs. It may result in keeping a higher capacity for a longer duration, which may incur unnecessary costs.

Changing the scaling policy to add more EC2 instances during each scaling operation (option D) might increase the scaling speed, but it doesn’t address the need for the capacity to be available specifically before the batch jobs start. Moreover, it may not scale down the instances efficiently after the batch jobs are complete.

Therefore, configuring scheduled scaling (option C) is the most appropriate solution as it allows you to plan and schedule the scaling actions to align with the timing of the nightly batch processing job.

Question 1080

Exam Question

A media company stores video content in an Amazon Elastic Block Store (Amazon EBS) volume. A certain video file has become popular and a large number of users across the world are accessing this content. This has resulted in a cost increase.

Which action will DECREASE cost without compromising user accessibility?

A. Change the EBS volume to Provisioned IOPS (PIOPS).
B. Store the video in an Amazon S3 bucket and create an Amazon CloudFront distribution.
C. Split the video into multiple, smaller segments so users are routed to the requested video segments only.
D. Create an Amazon S3 bucket in each Region and upload the videos so users are routed to the nearest S3 bucket.

Correct Answer

B. Store the video in an Amazon S3 bucket and create an Amazon CloudFront distribution.

Explanation

To decrease cost without compromising user accessibility for the popular video content, the most appropriate action would be:

B. Store the video in an Amazon S3 bucket and create an Amazon CloudFront distribution.

Storing the video content in an Amazon S3 bucket and creating an Amazon CloudFront distribution is a cost-effective solution that ensures high accessibility for users across the world. With Amazon S3, you can benefit from its durable, highly available, and scalable storage infrastructure. By creating an Amazon CloudFront distribution, the content can be cached and delivered to users globally from edge locations, reducing the load on the origin server (S3 bucket) and optimizing the delivery to users based on their geographic location.
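
The distribution setup is similar to the custom-origin sketch in Question 1074, except the origin is the S3 bucket. The sketch below uses origin access control (the newer replacement for origin access identities) and hypothetical names:

```python
import time

import boto3

cloudfront = boto3.client("cloudfront")

# Origin access control lets CloudFront sign its requests to S3,
# so the bucket itself can stay private.
oac = cloudfront.create_origin_access_control(
    OriginAccessControlConfig={
        "Name": "video-bucket-oac",  # hypothetical name
        "OriginAccessControlOriginType": "s3",
        "SigningBehavior": "always",
        "SigningProtocol": "sigv4",
    }
)

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),
        "Comment": "Global delivery for popular video content",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "video-s3-origin",
                    "DomainName": "video-bucket.s3.amazonaws.com",  # hypothetical
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                    "OriginAccessControlId": oac["OriginAccessControl"]["Id"],
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "video-s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "MinTTL": 0,
        },
    }
)
```

The bucket policy must then grant the cloudfront.amazonaws.com service principal read access, scoped to this distribution.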

  • Option A (Change the EBS volume to Provisioned IOPS) would increase cost rather than decrease it, and an EBS volume attached to a single EC2 instance is not an efficient way to serve video to a large global audience.
  • Option C (Split the video into multiple smaller segments) may provide some optimization benefits for video streaming, but it does not directly address the cost increase issue.
  • Option D (Create an Amazon S3 bucket in each Region and upload the videos) is not recommended as it would involve unnecessary duplication of the video content and may lead to synchronization and management challenges.

Therefore, storing the video content in an S3 bucket and creating an Amazon CloudFront distribution (option B) is the most suitable and cost-effective solution for providing global access to the popular video content.