
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 57

The latest AWS Certified Solutions Architect – Associate SAA-C03 certification practice exam questions and answers (Q&A) are available free of charge to help you pass the AWS Certified Solutions Architect – Associate SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate SAA-C03 certification.

Question 1281

Exam Question

A solutions architect is designing the architecture of a new application being deployed to the AWS Cloud. The application will run on Amazon EC2 On-Demand Instances and will automatically scale across multiple Availability Zones. The EC2 instances will scale up and down frequently throughout the day. An Application Load Balancer (ALB) will handle the load distribution. The architecture needs to support distributed session data management. The company is willing to make changes to code if needed.

What should the solutions architect do to ensure that the architecture supports distributed session data management?

A. Use Amazon ElastiCache to manage and store session data.
B. Use session affinity (sticky sessions) of the ALB to manage session data.
C. Use Session Manager from AWS Systems Manager to manage the session.
D. Use the GetSessionToken API operation in AWS Security Token Service (AWS STS) to manage the session.

Correct Answer

A. Use Amazon ElastiCache to manage and store session data.

Explanation

To ensure that the architecture supports distributed session data management, the solutions architect should:

A. Use Amazon ElastiCache to manage and store session data.

Here’s why this option is the correct choice:

Amazon ElastiCache is a fully managed, in-memory data store service that supports the caching of session data. By using ElastiCache, the application can store and manage session data in a distributed and scalable manner. ElastiCache provides high performance and low-latency access to the session data, which is crucial for managing sessions in a scalable and responsive manner.
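As a concrete illustration, the following is a minimal sketch of distributed session storage using the redis-py client against an ElastiCache for Redis cluster. The endpoint, key prefix, and TTL are illustrative assumptions rather than values from the question:

import json
import uuid

import redis  # redis-py client: pip install redis

# Hypothetical ElastiCache for Redis primary endpoint; replace with the
# endpoint shown in the ElastiCache console for your cluster.
SESSION_STORE = redis.Redis(
    host="my-sessions.abc123.0001.use1.cache.amazonaws.com",
    port=6379,
    decode_responses=True,
)

def save_session(data: dict, ttl_seconds: int = 1800) -> str:
    """Write session data with a TTL so abandoned sessions expire."""
    session_id = str(uuid.uuid4())
    SESSION_STORE.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))
    return session_id

def load_session(session_id: str):
    """Any EC2 instance behind the ALB can read the same session."""
    raw = SESSION_STORE.get(f"session:{session_id}")
    return json.loads(raw) if raw else None

Because every instance reads and writes the same external store, Auto Scaling can add or remove instances freely without losing session state.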

Using session affinity (sticky sessions) of the ALB (option B) is not an ideal solution because it relies on keeping the user’s session tied to a specific EC2 instance. This can limit scalability and failover capabilities, as the load balancer cannot freely distribute traffic to different instances.

Options C and D are not related to session data management. Session Manager (option C) is a capability of AWS Systems Manager for opening interactive shell sessions with EC2 instances, not for managing application sessions. The GetSessionToken API operation in AWS STS (option D) issues temporary security credentials for access to AWS resources; it has nothing to do with application session data.

Therefore, the best choice for supporting distributed session data management in this scenario is to use Amazon ElastiCache (option A).

Question 1282

Exam Question

A company is using a third-party vendor to manage its marketplace analytics. The vendor needs limited programmatic access to resources in the company’s account. All the needed policies have been created to grant appropriate access.

Which additional component will provide the vendor with the MOST secure access to the account?

A. Create an IAM user.
B. Implement a service control policy (SCP).
C. Use a cross-account role with an external ID.
D. Configure a single sign-on (SSO) identity provider.

Correct Answer

C. Use a cross-account role with an external ID.

Explanation

To provide the vendor with the most secure access to the account, the best option would be:

C. Use a cross-account role with an external ID.

Using a cross-account role with an external ID provides a secure and controlled way to grant access to the vendor. Here's how it works (a code sketch of the vendor's side follows the steps):

  1. Create an IAM role in your account specifically for the vendor.
  2. Define the necessary permissions for the vendor within the role.
  3. Generate a unique external ID and provide it to the vendor.
  4. The vendor will assume the cross-account role using the AWS Security Token Service (STS) and provide the external ID during the assume role process.
  5. With the cross-account role and the correct external ID, the vendor can access the limited resources specified in the role’s permissions.
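For illustration, the vendor's side of steps 4 and 5 might look like the boto3 sketch below. The role ARN and external ID are placeholders; on the company's side, the role's trust policy would require the same value through an sts:ExternalId condition:

import boto3

sts = boto3.client("sts")

# Placeholder role ARN and external ID, shared with the vendor out of band.
response = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/VendorAnalyticsRole",
    RoleSessionName="vendor-analytics",
    ExternalId="unique-external-id-shared-with-vendor",
    DurationSeconds=3600,
)

credentials = response["Credentials"]

# The vendor builds a session from the temporary credentials and can perform
# only the actions that the role's permission policies allow.
vendor_session = boto3.Session(
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)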

This approach ensures that the vendor can only access the specific resources and actions defined in the role, providing a secure and controlled level of access.

Question 1283

Exam Question

A company wants to improve the availability and performance of its hybrid application. The application consists of a stateful TCP-based workload hosted on Amazon EC2 instances in different AWS Regions and a stateless UDP-based workload hosted on premises.

Which combination of actions should a solutions architect take to improve availability and performance? (Choose two.)

A. Create an accelerator using AWS Global Accelerator. Add the load balancers as endpoints.
B. Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to the load balancers.
C. Configure two Application Load Balancers in each Region. The first will route to the EC2 endpoints and the second will route to the on-premises endpoints.
D. Configure a Network Load Balancer in each Region to address the EC2 endpoints. Configure a Network Load Balancer in each Region that routes to the on-premises endpoints.
E. Configure a Network Load Balancer in each Region to address the EC2 endpoints. Configure an Application Load Balancer in each Region that routes to the on-premises endpoints.

Correct Answer

A. Create an accelerator using AWS Global Accelerator. Add the load balancers as endpoints.
D. Configure a Network Load Balancer in each Region to address the EC2 endpoints. Configure a Network Load Balancer in each Region that routes to the on-premises endpoints.

Explanation

To improve the availability and performance of the hybrid application, the solutions architect should take the following two actions:

A. Create an accelerator using AWS Global Accelerator and add the load balancers as endpoints. AWS Global Accelerator improves the availability and performance of applications by routing traffic over the AWS global network through static anycast IP addresses. It supports both TCP and UDP and can fail over between Regions within seconds, which suits the stateful TCP workload on EC2 and the stateless UDP workload on premises. By adding the Regional load balancers as endpoints, traffic enters the AWS network at the edge location closest to the user and is directed to a healthy Region with the best performance.

D. Configure a Network Load Balancer in each Region to address the EC2 endpoints, and a Network Load Balancer in each Region that routes to the on-premises endpoints. Network Load Balancers operate at layer 4 and can load balance both TCP and UDP traffic, and they support IP targets, which lets them route to on-premises endpoints over AWS Direct Connect or a VPN. Application Load Balancers (options C and E) operate at layer 7 and handle only HTTP and HTTPS, so they cannot serve a raw TCP or UDP workload. Amazon CloudFront (option B) is a content delivery network for HTTP(S) content and does not address these workloads either.

Therefore, the correct combination of actions to improve availability and performance is:

A. Create an accelerator using AWS Global Accelerator. Add the load balancers as endpoints.
D. Configure a Network Load Balancer in each Region to address the EC2 endpoints. Configure a Network Load Balancer in each Region that routes to the on-premises endpoints.
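As a sketch of option A, the boto3 calls below create an accelerator, add a TCP listener, and register a Regional NLB as an endpoint. The names, port, Regions, and NLB ARN are illustrative assumptions:

import uuid

import boto3

# The Global Accelerator control-plane API is served from us-west-2,
# regardless of where the endpoints live.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="hybrid-app",
    IpAddressType="IPV4",
    Enabled=True,
    IdempotencyToken=str(uuid.uuid4()),
)["Accelerator"]

# One listener for the TCP workload; a second listener with Protocol="UDP"
# would cover the on-premises UDP workload.
listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
    Protocol="TCP",
    IdempotencyToken=str(uuid.uuid4()),
)["Listener"]

# Register the Regional NLB (placeholder ARN) as an endpoint.
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[
        {
            "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:"
                          "111122223333:loadbalancer/net/tcp-workload/abc123",
            "Weight": 128,
        }
    ],
    IdempotencyToken=str(uuid.uuid4()),
)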

Question 1284

Exam Question

A company has an on-premises volume backup solution that has reached its end of life. The company wants to use AWS as part of a new backup solution and wants to maintain local access to all the data while it is backed up on AWS. The company wants to ensure that the data backed up on AWS is automatically and securely transferred.

Which solution meets these requirements?

A. Use AWS Snowball to migrate data out of the on-premises solution to Amazon S3. Configure on-premises systems to mount the Snowball S3 endpoint to provide local access to the data.
B. Use AWS Snowball Edge to migrate data out of the on-premises solution to Amazon S3. Use the Snowball Edge file interface to provide on-premises systems with local access to the data.
C. Use AWS Storage Gateway and configure a cached volume gateway. Run the Storage Gateway software appliance on premises and configure a percentage of data to cache locally. Mount the gateway storage volumes to provide local access to the data.
D. Use AWS Storage Gateway and configure a stored volume gateway. Run the Storage Gateway software appliance on premises and map the gateway storage volumes to on-premises storage. Mount the gateway storage volumes to provide local access to the data.

Correct Answer

D. Use AWS Storage Gateway and configure a stored volume gateway. Run the Storage Gateway software appliance on premises and map the gateway storage volumes to on-premises storage. Mount the gateway storage volumes to provide local access to the data.

Explanation

The solution that meets the requirements described is option D: Use AWS Storage Gateway and configure a stored volume gateway.

AWS Storage Gateway provides a hybrid cloud storage solution that allows on-premises applications to seamlessly use AWS storage services. The key requirement in this scenario is that the company must maintain local access to all the data while it is backed up on AWS, and the stored volume configuration is designed for exactly that case.

With a stored volume gateway, the Storage Gateway software appliance runs on premises and keeps the complete primary dataset on local storage, so every read and write is served locally with low latency. The gateway then automatically and securely backs the volumes up to AWS asynchronously as point-in-time Amazon EBS snapshots. By mapping the gateway storage volumes to on-premises storage and mounting them, the company retains local access to the entire dataset while the snapshot uploads satisfy the backup requirement.

Option C (a cached volume gateway) stores the primary data in Amazon S3 and keeps only frequently accessed data in a local cache. Because only a portion of the data is on premises at any time, it does not meet the requirement of local access to all the data.

Options A (AWS Snowball) and B (AWS Snowball Edge) involve physically shipping data on appliances. They are designed for one-time, large-scale migrations rather than an ongoing backup workflow, so they do not provide the automatic and continuous transfer of data to AWS that the company requires.

Therefore, option D is the most appropriate solution for the company's requirements.
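For illustration, once a volume gateway appliance has been activated on premises, a stored volume can be created from one of its local disks with a call along the lines of the sketch below. The gateway ARN, disk filter, target name, and network interface address are placeholders, and the disk-allocation value should be confirmed against the Storage Gateway documentation:

import boto3

sg = boto3.client("storagegateway", region_name="us-east-1")

# Placeholder ARN of a volume gateway already activated on premises.
gateway_arn = "arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678"

# Choose an unallocated local disk on the appliance to hold the volume.
disks = sg.list_local_disks(GatewayARN=gateway_arn)["Disks"]
data_disk = next(d for d in disks if d.get("DiskAllocationType") == "AVAILABLE")

# PreserveExistingData=True keeps the data already on the disk, so an
# existing volume can be brought under gateway management and backed up
# to AWS as Amazon EBS snapshots.
sg.create_stored_iscsi_volume(
    GatewayARN=gateway_arn,
    DiskId=data_disk["DiskId"],
    PreserveExistingData=True,
    TargetName="backup-volume-1",
    NetworkInterfaceId="10.0.0.25",  # appliance interface serving iSCSI
)

The on-premises systems then mount the resulting iSCSI target for local access to the full dataset.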

Question 1285

Exam Question

A solutions architect wants all new users to have specific complexity requirements and mandatory rotation periods for IAM user passwords.

What should the solutions architect do to accomplish this?

A. Set an overall password policy for the entire AWS account.
B. Set a password policy for each IAM user in the AWS account.
C. Use third-party vendor software to set password requirements.
D. Attach an Amazon CloudWatch rule to the Create_newuser event to set the password with the appropriate requirements.

Correct Answer

A. Set an overall password policy for the entire AWS account.

Explanation

To enforce specific complexity requirements and mandatory rotation periods for IAM user passwords, the solutions architect should choose option A: Set an overall password policy for the entire AWS account.

In AWS, an IAM password policy is defined at the account level and applies to all IAM users in the account. The policy lets you define requirements such as minimum password length, character complexity, password expiration (rotation), and password reuse prevention. Setting this account-wide policy is the native way to ensure that every new IAM user is subject to the required complexity rules and rotation periods.

Setting a password policy for each IAM user (option B) is not possible: IAM does not support per-user password policies, only the single account-level policy.

Using third-party vendor software (option C) is unnecessary because AWS provides this capability natively through the IAM password policy.

Attaching an Amazon CloudWatch rule to the Create_newuser event (option D) is not an appropriate approach for enforcing password policies. CloudWatch rules trigger actions in response to events, but they provide no built-in mechanism for setting or enforcing password requirements for IAM users.
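As a sketch, the account-level policy is set once through the IAM UpdateAccountPasswordPolicy API. The specific thresholds below are illustrative choices, not values mandated by the question:

import boto3

iam = boto3.client("iam")

# The policy is account-wide: it governs the console password of every IAM
# user in the account, including all new users.
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    AllowUsersToChangePassword=True,
    MaxPasswordAge=90,            # mandatory rotation every 90 days
    PasswordReusePrevention=24,   # block reuse of the last 24 passwords
    HardExpiry=False,             # users can reset their own expired password
)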

Question 1286

Exam Question

The financial application at a company stores monthly reports in an Amazon S3 bucket. The vice president of finance has mandated that all access to these reports be logged and that any modifications to the log files be detected.

Which actions can a solutions architect take to meet these requirements?

A. Use S3 server access logging on the bucket that houses the reports with the read and write data events and log file validation options enabled.
B. Use S3 server access logging on the bucket that houses the reports with the read and write management events and log file validation options enabled.
C. Use AWS CloudTrail to create a new trail. Configure the trail to log read and write data events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation.
D. Use AWS CloudTrail to create a new trail. Configure the trail to log read and write management events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation.

Correct Answer

C. Use AWS CloudTrail to create a new trail. Configure the trail to log read and write data events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation.

Explanation

The correct answer is C. Use AWS CloudTrail to create a new trail. Configure the trail to log read and write data events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation.

Option A is incorrect because S3 server access logging does not provide the ability to log modifications to the log files themselves, which is a requirement in this scenario.

Option B is incorrect because using S3 server access logging with read and write management events does not satisfy the requirement of logging modifications to the log files. Additionally, log file validation is not available for S3 server access logging.

Option C is the correct answer because AWS CloudTrail is a service that provides logging and monitoring of AWS API activity. By creating a new trail and configuring it to log read and write data events on the S3 bucket, you can track access to the monthly reports. Enabling log file validation ensures that any modifications to the log files are detected. By logging these events to a new bucket, you separate the log files from the bucket that houses the reports, ensuring data integrity and compliance.

Option D is incorrect because logging read and write management events does not fulfill the requirement of tracking access to the monthly reports and detecting modifications to the log files.
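To make the configuration in option C concrete, the sketch below creates the trail with log file validation enabled, scopes it to object-level data events on the reports bucket, and starts logging. The trail and bucket names are placeholders, and the log bucket still needs the standard CloudTrail bucket policy:

import boto3

cloudtrail = boto3.client("cloudtrail")

# Deliver logs to a separate bucket from the one that houses the reports.
cloudtrail.create_trail(
    Name="finance-reports-trail",
    S3BucketName="finance-cloudtrail-logs",
    EnableLogFileValidation=True,  # digest files detect log file tampering
)

# Log S3 data events (object-level reads and writes) for the reports bucket.
cloudtrail.put_event_selectors(
    TrailName="finance-reports-trail",
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": False,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    "Values": ["arn:aws:s3:::finance-monthly-reports/"],
                }
            ],
        }
    ],
)

cloudtrail.start_logging(Name="finance-reports-trail")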

Question 1287

Exam Question

A company hosts its application using Amazon Elastic Container Service (Amazon ECS) and wants to ensure high availability. The company wants to be able to deploy updates to its application even if nodes in one Availability Zone are not accessible. The expected request volume for the application is 100 requests per second, and each container task is able to serve at least 60 requests per second. The company set up Amazon ECS with a rolling update deployment type with the minimum healthy percent parameter set to 50% and the maximum percent set to 100%.

Which configuration of tasks and Availability Zones meets these requirements?

A. Deploy the application across two Availability Zones, with one task in each Availability Zone.
B. Deploy the application across two Availability Zones, with two tasks in each Availability Zone.
C. Deploy the application across three Availability Zones, with one task in each Availability Zone.
D. Deploy the application across three Availability Zones, with two tasks in each Availability Zone.

Correct Answer

D. Deploy the application across three Availability Zones, with two tasks in each Availability Zone.

Explanation

To ensure high availability and be able to deploy updates even if nodes in one Availability Zone are not accessible, it is recommended to deploy the application across multiple Availability Zones. The chosen configuration should also consider the expected request volume and the capacity of each container task.

In this case, the expected request volume is 100 requests per second, and each container task can serve at least 60 requests per second. To handle the request volume, we need at least two container tasks.

Among the given options, the configuration that meets these requirements is:

D. Deploy the application across three Availability Zones, with two tasks in each Availability Zone.

This configuration provides redundancy across three Availability Zones and enough capacity to absorb both failure modes at once. Six tasks serve up to 360 requests per second in normal operation. If one Availability Zone becomes inaccessible, four tasks remain (240 requests per second). Even if a rolling update then takes half of those surviving tasks out of service, two tasks (120 requests per second) are still running, which exceeds the required 100 requests per second. The smaller configurations cannot make this guarantee: under the same conditions, options A and C can drop to zero or one task (at most 60 requests per second), and option B can drop to a single task as well.
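One way to sanity-check the options is the short worst-case calculation below. It assumes the worst case is an Availability Zone outage during a rolling update that takes half of the surviving tasks out of service; this is an illustrative model of the scenario, not an exact description of the ECS scheduler:

import math

REQUIRED_RPS = 100
RPS_PER_TASK = 60
MIN_HEALTHY = 0.50  # rolling update minimum healthy percent

# (Availability Zones, tasks per AZ) for each answer option
options = {"A": (2, 1), "B": (2, 2), "C": (3, 1), "D": (3, 2)}

for name, (azs, tasks_per_az) in options.items():
    total = azs * tasks_per_az
    surviving = total - tasks_per_az                     # one AZ is lost
    during_update = math.floor(surviving * MIN_HEALTHY)  # update in progress
    capacity = during_update * RPS_PER_TASK
    verdict = "meets" if capacity >= REQUIRED_RPS else "fails"
    print(f"Option {name}: {during_update} task(s), {capacity} rps -> {verdict}")

Only option D leaves two tasks (120 requests per second) in this worst case, which is above the 100 requests per second requirement.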

Question 1288

Exam Question

A company has an image processing workload running on Amazon Elastic Container Service (Amazon ECS) in two private subnets. Each private subnet uses a NAT instance for internet access. All images are stored in Amazon S3 buckets. The company is concerned about the data transfer costs between Amazon ECS and Amazon S3.

What should a solutions architect do to reduce costs?

A. Configure a NAT gateway to replace the NAT instances.
B. Configure a gateway endpoint for traffic destined to Amazon S3.
C. Configure an interface endpoint for traffic destined to Amazon S3.
D. Configure Amazon CloudFront for the S3 bucket storing the images.

Correct Answer

B. Configure a gateway endpoint for traffic destined to Amazon S3.

Explanation

To reduce data transfer costs between Amazon ECS and Amazon S3 in this scenario, the solutions architect should choose option B: Configure a gateway endpoint for traffic destined to Amazon S3.

Today, every image the ECS tasks read from or write to Amazon S3 passes through the NAT instances, so the company pays for all of that traffic to traverse the NAT path. A gateway VPC endpoint for Amazon S3 adds a route to the private subnets' route tables that sends S3-bound traffic directly over the AWS network, bypassing the NAT instances entirely. Gateway endpoints for S3 carry no hourly charge and no data processing charge, so the cost of the ECS-to-S3 traffic is eliminated, and security improves as a side effect because the traffic never leaves the AWS network.

The other options do not reduce these costs:

A. Replacing the NAT instances with a NAT gateway keeps the S3 traffic on the NAT path, and NAT gateways bill per hour and per gigabyte of data processed, so the data transfer costs remain.

C. An interface endpoint for Amazon S3 would also keep the traffic private, but interface endpoints are billed per hour per Availability Zone and per gigabyte processed, making them more expensive than a free gateway endpoint for this use case.

D. Amazon CloudFront accelerates content delivery to end users over the internet; it does not sit on the path between the ECS tasks and S3, so it would add cost rather than remove it.

Therefore, option B (configure a gateway endpoint for traffic destined to Amazon S3) is the correct choice for reducing data transfer costs between Amazon ECS and Amazon S3.
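For illustration, a gateway endpoint is created with a single API call. The Region, VPC ID, and route table IDs below are placeholders for the route tables associated with the two private subnets:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Adding the route tables of both private subnets installs a route that
# sends S3-bound traffic to the endpoint instead of the NAT instances.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc123def456789",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0aaa111", "rtb-0bbb222"],
)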

Question 1289

Exam Question

A solutions architect needs to design a network that will allow multiple Amazon EC2 instances to access a common data source used for mission-critical data that can be accessed by all the EC2 instances simultaneously. The solution must be highly scalable, easy to implement and support the NFS protocol.

Which solution meets these requirements?

A. Create an Amazon EFS file system. Configure a mount target in each Availability Zone. Attach each instance to the appropriate mount target.
B. Create an additional EC2 instance and configure it as a file server. Create a security group that allows communication between the instances, and apply it to the additional instance.
C. Create an Amazon S3 bucket with the appropriate permissions. Create a role in AWS IAM that grants the correct permissions to the S3 bucket. Attach the role to the EC2 instances that need access to the data.
D. Create an Amazon EBS volume with the appropriate permissions. Create a role in AWS IAM that grants the correct permissions to the EBS volume. Attach the role to the EC2 instances that need access to the data.

Correct Answer

A. Create an Amazon EFS file system. Configure a mount target in each Availability Zone. Attach each instance to the appropriate mount target.

Explanation

The solution that meets the requirements of allowing multiple Amazon EC2 instances to access a common data source used for mission-critical data that can be accessed simultaneously, while being highly scalable, easy to implement, and supporting the NFS protocol, is:

A. Create an Amazon EFS file system. Configure a mount target in each Availability Zone. Attach each instance to the appropriate mount target.

Amazon Elastic File System (EFS) is a fully managed file storage service that is compatible with the Network File System (NFS) protocol. It allows multiple EC2 instances to access a common data source simultaneously. By creating an EFS file system and configuring mount targets in each Availability Zone, you ensure high availability and fault tolerance. Each EC2 instance can then be attached to the appropriate mount target, allowing them to access the shared data.
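To make option A concrete, the sketch below creates the file system and one mount target per Availability Zone. The subnet IDs, security group, and the file system ID in the mount command are placeholders; the security group must allow NFS (TCP port 2049) from the instances:

import boto3

efs = boto3.client("efs", region_name="us-east-1")

fs = efs.create_file_system(
    CreationToken="shared-data-fs",   # idempotency token
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# One mount target per Availability Zone that hosts EC2 instances.
for subnet_id in ["subnet-0aaa111", "subnet-0bbb222", "subnet-0ccc333"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0ddd444"],
    )

# Each instance then mounts the file system over NFS, for example:
#   sudo mount -t nfs4 -o nfsvers=4.1 \
#       fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/shared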

Option B suggests creating an additional EC2 instance as a file server, which introduces unnecessary complexity and may not scale well.

Option C suggests using Amazon S3, which is an object storage service and does not directly support the NFS protocol.

Option D suggests using Amazon EBS volumes, but they are not shared by multiple instances simultaneously and would require additional configuration and management to achieve the desired outcome.

Therefore, option A is the correct solution for the given requirements.

Question 1290

Exam Question

A company has a build server that is in an Auto Scaling group and often has multiple Linux instances running. The build server requires consistent and mountable shared NFS storage for jobs and configurations.

Which storage option should a solutions architect recommend?

A. Amazon S3
B. Amazon FSx
C. Amazon Elastic Block Store (Amazon EBS)
D. Amazon Elastic File System (Amazon EFS)

Correct Answer

D. Amazon Elastic File System (Amazon EFS)

Explanation

In this scenario, the most suitable storage option for consistent and mountable shared NFS storage would be D. Amazon Elastic File System (Amazon EFS).

Amazon EFS is a fully managed, scalable file storage service that is designed to provide shared access to files across multiple instances. It supports the Network File System (NFS) protocol, which makes it compatible with Linux instances.

Here’s why Amazon EFS is the recommended option:

  1. Shared Access: Amazon EFS allows multiple instances to mount the same file system concurrently, enabling shared access to files. This is essential for a build server that requires consistent and mountable shared NFS storage.
  2. Scalability: Amazon EFS automatically scales storage capacity as per the requirements of the application. It can handle thousands of concurrent connections, making it suitable for an Auto Scaling group with multiple instances.
  3. Performance: Amazon EFS provides low-latency performance, ensuring efficient access to the shared storage. It also offers consistent performance, regardless of the number of instances accessing the file system.
  4. Durability and Availability: Amazon EFS is designed to provide high durability and availability. It stores data redundantly across multiple Availability Zones, ensuring data resilience and accessibility.

On the other hand, the other options are less suitable:

  • A. Amazon S3 is an object storage service and does not provide native support for NFS. While it can be used for storing files, it doesn’t offer the same level of mountable and shared access as Amazon EFS.
  • B. Amazon FSx for Windows File Server is a managed file storage service, but it is optimized for Windows-based workloads and uses the SMB protocol rather than NFS. It is not the straightforward choice for NFS-based shared storage on Linux instances.
  • C. Amazon Elastic Block Store (Amazon EBS) provides block-level storage for individual EC2 instances but does not offer the shared access and scalability needed for multiple instances to mount the same file system concurrently.

Therefore, the best choice in this scenario is D. Amazon Elastic File System (Amazon EFS).
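For illustration, instances launched by the Auto Scaling group can mount the shared file system at boot through user data in the launch template. The file system ID, AMI, instance type, and mount point below are all placeholder assumptions:

import base64

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Boot script run by every instance the Auto Scaling group launches. The
# amazon-efs-utils package provides the 'efs' mount helper on Amazon Linux.
user_data = """#!/bin/bash
yum install -y amazon-efs-utils
mkdir -p /mnt/build-shared
echo "fs-12345678:/ /mnt/build-shared efs _netdev,tls 0 0" >> /etc/fstab
mount -a
"""

ec2.create_launch_template(
    LaunchTemplateName="build-server",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",  # placeholder Amazon Linux AMI
        "InstanceType": "c5.large",
        "UserData": base64.b64encode(user_data.encode()).decode(),
    },
)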