
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 37

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free to help you prepare for and pass the AWS Certified Solutions Architect – Associate SAA-C03 exam and earn the certification.

Question 1081

Exam Question

A company runs multiple Amazon EC2 Linux instances in a VPC with applications that use a hierarchical directory structure. The applications need to rapidly and concurrently read and write to shared storage.

How can this be achieved?

A. Create an Amazon EFS file system and mount it from each EC2 instance.
B. Create an Amazon S3 bucket and permit access from all the EC2 instances in the VPC.
C. Create a file system on an Amazon EBS Provisioned IOPS SSD (io1) volume. Attach the volume to all the EC2 instances.
D. Create file systems on Amazon EBS volumes attached to each EC2 instance. Synchronize the Amazon EBS volumes across the different EC2 instances.

Correct Answer

A. Create an Amazon EFS file system and mount it from each EC2 instance.

Explanation

To achieve rapid and concurrent read and write access to shared storage for multiple EC2 instances with a hierarchical directory structure, the most suitable option is:

A. Create an Amazon EFS file system and mount it from each EC2 instance.

Amazon EFS (Elastic File System) provides a scalable and fully managed file storage service that can be easily shared across multiple EC2 instances. With Amazon EFS, you can create a file system and mount it simultaneously from multiple EC2 instances within a VPC. This allows the applications running on these instances to have concurrent access to the shared storage, enabling rapid read and write operations.
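As a concrete sketch (the file system ID and mount path below are placeholders, and these commands assume an Amazon Linux EC2 instance with network access to the file system), mounting a shared EFS file system typically looks like this:

```shell
# Install the EFS mount helper (Amazon Linux; other distros build amazon-efs-utils from source).
sudo yum install -y amazon-efs-utils

# Create a mount point and mount the shared file system with TLS encryption in transit.
sudo mkdir -p /mnt/shared
sudo mount -t efs -o tls fs-12345678:/ /mnt/shared

# Persist the mount across reboots.
echo "fs-12345678:/ /mnt/shared efs _netdev,tls 0 0" | sudo tee -a /etc/fstab
```

Every instance in the VPC can run the same mount command against the same file system ID, which is what gives the applications concurrent shared access.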

Option B (Create an Amazon S3 bucket) is not the best solution for scenarios that require concurrent read and write access to shared storage, as S3 is an object storage service and doesn’t support file-level access like a traditional file system.

Option C (Create a file system on an Amazon EBS Provisioned IOPS SSD) and Option D (Create file systems on Amazon EBS volumes attached to each EC2 instance) are not suitable for shared access scenarios, as each EC2 instance would have its own individual EBS volume or volumes. Synchronizing the EBS volumes across instances can be complex and may lead to data consistency and concurrency issues.

Therefore, creating an Amazon EFS file system and mounting it from each EC2 instance (option A) is the recommended approach for achieving rapid and concurrent read and write access to shared storage with a hierarchical directory structure.

Question 1082

Exam Question

A company’s operations team has an existing Amazon S3 bucket configured to notify an Amazon SQS queue when new objects are created within the bucket. The development team also wants to receive events when new objects are created. The existing operations team workflow must remain intact.

Which solution would satisfy these requirements?

A. Create another SQS queue. Update the S3 events in the bucket to also update the new queue when a new object is created.
B. Create a new SQS queue that only allows Amazon S3 access to the queue. Update Amazon S3 to update this queue when a new object is created.
C. Create an Amazon SNS topic and SAS queue for the bucket updates. Update the bucket to send events to the new topic. Update the queues to poll Amazon SNS.
D. Create an Amazon SNS topic and SQS queue for the bucket updates. Update the bucket to send events to the new topic. Add a subscription for both queues in the topic.

Correct Answer

D. Create an Amazon SNS topic and SQS queue for the bucket updates. Update the bucket to send events to the new topic. Add a subscription for both queues in the topic.

Explanation

To satisfy the requirements of both the operations team and the development team, the following solution would be appropriate:

D. Create an Amazon SNS topic and SQS queue for the bucket updates. Update the bucket to send events to the new topic. Add a subscription for both queues in the topic.

By creating an Amazon SNS (Simple Notification Service) topic and an SQS (Simple Queue Service) queue for the bucket updates, you can achieve the desired functionality. The bucket can be configured to send events to the SNS topic whenever a new object is created. The SNS topic acts as a message hub, and you can add subscriptions to it. In this case, you would add subscriptions for both the existing SQS queue used by the operations team and the new SQS queue desired by the development team.

This approach allows both teams to receive events when new objects are created, without impacting the existing workflow of the operations team. The operations team can continue to receive notifications through their existing SQS queue, while the development team can receive notifications through the new SQS queue.
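A sketch of this fan-out wiring with the AWS CLI (the account ID, Region, bucket name, and queue names are placeholders; each queue's access policy must also allow the topic to send it messages):

```shell
# Create the topic that will fan out S3 object-created events.
aws sns create-topic --name s3-object-created

# Subscribe both queues: the operations team's existing queue and the new development queue.
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:s3-object-created \
  --protocol sqs \
  --notification-endpoint arn:aws:sqs:us-east-1:123456789012:ops-queue
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:s3-object-created \
  --protocol sqs \
  --notification-endpoint arn:aws:sqs:us-east-1:123456789012:dev-queue

# Point the bucket's object-created notification at the topic instead of the queue.
aws s3api put-bucket-notification-configuration \
  --bucket my-bucket \
  --notification-configuration '{
    "TopicConfigurations": [{
      "TopicArn": "arn:aws:sns:us-east-1:123456789012:s3-object-created",
      "Events": ["s3:ObjectCreated:*"]
    }]
  }'
```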

Option A suggests adding a second SQS queue as a direct S3 event destination. However, Amazon S3 does not allow two notification configurations for the same event type on overlapping prefixes, so this change would conflict with, or require modifying, the operations team's existing configuration.

Option B suggests creating a new SQS queue that only allows Amazon S3 access, but this would not fulfill the requirement of the development team receiving events.

Option C suggests creating an SNS topic and an "SAS" queue; SAS is not a valid AWS service and appears to be a typo for SQS. Even read as SQS, the option is incorrect: SQS queues do not poll Amazon SNS; SNS pushes messages to queues that are subscribed to the topic. The option also never subscribes the operations team's existing queue.

Therefore, the best solution is to create an SNS topic and SQS queue for the bucket updates, update the bucket to send events to the new topic, and add subscriptions for both queues in the topic (option D).

Question 1083

Exam Question

An application requires a development environment (DEV) and production environment (PROD) for several years. The DEV instances will run for 10 hours each day during normal business hours, while the PROD instances will run 24 hours each day. A solutions architect needs to determine a compute instance purchase strategy to minimize costs.

Which solution is the MOST cost-effective?

A. DEV with Spot Instances and PROD with On-Demand Instances
B. DEV with On-Demand Instances and PROD with Spot Instances
C. DEV with Scheduled Reserved Instances and PROD with Reserved Instances
D. DEV with On-Demand Instances and PROD with Scheduled Reserved Instances

Correct Answer

C. DEV with Scheduled Reserved Instances and PROD with Reserved Instances

Explanation

The most cost-effective solution in this scenario is:

C. DEV with Scheduled Reserved Instances and PROD with Reserved Instances

Both environments are needed for several years, and both run on predictable schedules: DEV for 10 hours each day during normal business hours and PROD around the clock. Predictable, long-running workloads are exactly what Reserved Instances are priced for. Scheduled Reserved Instances let you reserve capacity on a recurring daily schedule, which matches the DEV instances' fixed 10-hour business-day window at a discount over On-Demand pricing. Standard Reserved Instances provide the deepest discount for the PROD instances, which run 24/7 for the entire term.

Options A and B rely on Spot Instances, which AWS can reclaim with only a two-minute interruption notice. That makes them unsuitable for a PROD environment that must run continuously, and a poor fit even for DEV instances that must be reliably available throughout business hours.

Option D pairs On-Demand Instances for DEV with Scheduled Reserved Instances for PROD. This is backwards: Scheduled Reserved Instances are designed for part-time recurring schedules rather than 24/7 workloads, and paying On-Demand rates for DEV forgoes the savings available for its predictable daily schedule.

Therefore, the most cost-effective strategy is Scheduled Reserved Instances for DEV and Reserved Instances for PROD (option C).

Question 1084

Exam Question

A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to the loss of an Availability Zone. Some files are accessed frequently while other files are rarely accessed in an unpredictable pattern. The solutions architect must minimize the costs of storing and retrieving the media files.

Which storage option meets these requirements?

A. S3 Standard
B. S3 Intelligent-Tiering
C. S3 Standard-Infrequent Access (S3 Standard-IA)
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)

Correct Answer

B. S3 Intelligent-Tiering

Explanation

The storage option that meets the given requirements and minimizes costs is:

B. S3 Intelligent-Tiering

S3 Intelligent-Tiering is a storage class within Amazon S3 that is designed to optimize costs while maintaining high durability and availability. It automatically moves objects between a frequent access tier and a lower-cost infrequent access tier based on usage: objects that have not been accessed for 30 consecutive days move to the infrequent access tier, and move back to the frequent access tier as soon as they are accessed again.

In the scenario described, the media files have varying access patterns, with some being accessed frequently and others being rarely accessed in an unpredictable manner. With S3 Intelligent-Tiering, the frequently accessed files will be stored in the frequent access tier, ensuring fast and immediate access when needed. The rarely accessed files will be moved to the infrequent access tier, which provides cost savings without sacrificing durability or availability.
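A sketch of how objects can land in Intelligent-Tiering, either per upload or via a lifecycle rule (the bucket and key names are placeholders):

```shell
# Upload new media directly into the Intelligent-Tiering storage class.
aws s3 cp video.mp4 s3://media-bucket/video.mp4 --storage-class INTELLIGENT_TIERING

# Or transition existing objects with a bucket lifecycle rule.
aws s3api put-bucket-lifecycle-configuration \
  --bucket media-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "to-intelligent-tiering",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}]
    }]
  }'
```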

Compared to other options:

  • Option A (S3 Standard) is suitable for frequently accessed data but may not be the most cost-effective for rarely accessed files.
  • Option C (S3 Standard-IA) is designed for infrequently accessed data but does not automatically optimize costs based on usage patterns like S3 Intelligent-Tiering.
  • Option D (S3 One Zone-IA) is similar to S3 Standard-IA but stores data in a single Availability Zone, which could result in data loss if that zone becomes unavailable.

Therefore, S3 Intelligent-Tiering (option B) is the recommended storage option that meets the requirements of resiliency, cost optimization, and varying access patterns.

Question 1085

Exam Question

A company wants to replicate its data to AWS to recover in the event of a disaster. Today, a system administrator has scripts that copy data to an NFS share. Individual backup files need to be accessed with low latency by application administrators to deal with errors in processing.

What should a solutions architect recommend to meet these requirements?

A. Modify the script to copy data to an Amazon S3 bucket instead of the on-premises NFS share.
B. Modify the script to copy data to an Amazon S3 Glacier Archive instead of the on-premises NFS share.
C. Modify the script to copy data to an Amazon Elastic File System (Amazon EFS) volume instead of the on-premises NFS share.
D. Modify the script to copy data to an AWS Storage Gateway for File Gateway virtual appliance instead of the on-premises NFS share.

Correct Answer

A. Modify the script to copy data to an Amazon S3 bucket instead of the on-premises NFS share.

Explanation

To meet the requirements of replicating data to AWS for disaster recovery and providing low-latency access to individual backup files, using Amazon S3 is the recommended solution. By modifying the existing script to copy data to an Amazon S3 bucket, the company can take advantage of the durability, scalability, and availability of Amazon S3 storage.

With Amazon S3, the data will be securely stored in multiple Availability Zones, ensuring high availability and protection against data loss. Additionally, Amazon S3 provides low-latency access to individual objects, allowing application administrators to quickly access and restore specific backup files as needed.
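The existing copy script could be adapted with a single AWS CLI call (the local path and bucket name below are placeholders):

```shell
# Replicate the backup directory to S3; only new or changed files are uploaded on each run.
aws s3 sync /backups s3://dr-backup-bucket/backups
```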

Option B, using Amazon S3 Glacier Archive, is not suitable in this case as it is optimized for long-term archival storage with higher retrieval latency.

Option C, using an Amazon Elastic File System (Amazon EFS) volume, is not the best fit: mounting EFS from the on-premises environment would require a VPN or AWS Direct Connect connection, adding cost and complexity compared with copying objects directly to Amazon S3.

Option D, using an AWS Storage Gateway File Gateway virtual appliance, is unnecessary in this scenario: it adds an extra appliance to deploy and manage when the existing script can simply be pointed directly at Amazon S3.

Therefore, modifying the script to copy data to an Amazon S3 bucket (option A) is the most appropriate recommendation to replicate data and ensure low-latency access to individual backup files in AWS.

Question 1086

Exam Question

A solutions architect has configured the following IAM policy.

Which action will be allowed by the policy?

A. An AWS Lambda function can be deleted from any network.
B. An AWS Lambda function can be created from any network.
C. An AWS Lambda function can be deleted from the 100.220.0.0/20 network.
D. An AWS Lambda function can be deleted from the 220.100.16.0/20 network.

Correct Answer

A. An AWS Lambda function can be deleted from any network.

Explanation

The given IAM policy does not specify any specific IP address or network range. The “lambda:DeleteFunction” action is allowed on all resources, as denoted by the “*” wildcard in the “Resource” field. This means that the policy allows the deletion of any AWS Lambda function regardless of the network from which the request originates.

Therefore, the policy allows an AWS Lambda function to be deleted from any network.
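The policy itself is not reproduced on this page, but based on the explanation above, a hypothetical reconstruction consistent with it would allow lambda:DeleteFunction on all resources with no IP-based condition:

```shell
# Hypothetical reconstruction: the actual exam policy is not shown on this page.
cat > allow-delete-function.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:DeleteFunction",
      "Resource": "*"
    }
  ]
}
EOF
```

For contrast, a policy that restricted deletion to a specific network would add a Condition block such as `"Condition": {"IpAddress": {"aws:SourceIp": "100.220.0.0/20"}}`; since no such condition is described here, deletion is allowed from any network.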

Question 1087

Exam Question

A company recently expanded globally and wants to make its application accessible to users in those geographic locations. The application is deployed on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. The company needs the ability to shift traffic from resources in one Region to another.

What should a solutions architect recommend?

A. Configure an Amazon Route 53 latency routing policy.
B. Configure an Amazon Route 53 geolocation routing policy.
C. Configure an Amazon Route 53 geoproximity routing policy.
D. Configure an Amazon Route 53 multivalue answer routing policy.

Correct Answer

C. Configure an Amazon Route 53 geoproximity routing policy.

Explanation

To achieve the ability to shift traffic from resources in one region to another, a solutions architect should recommend:

C. Configure an Amazon Route 53 geoproximity routing policy.

A geoproximity routing policy in Amazon Route 53 routes traffic based on the geographic locations of your users and your resources. Each resource is assigned a bias value that expands or shrinks the geographic area from which traffic is routed to it, so the company can shift traffic from resources in one Region to another simply by adjusting the bias.

Geoproximity routing suits a global deployment where the goal is to route users to nearby resources while retaining fine-grained control over how much traffic each Region receives.

Therefore, the recommended solution is to configure an Amazon Route 53 geoproximity routing policy.

Question 1088

Exam Question

A company is designing a web application using AWS that processes insurance quotes. Users will request quotes from the application. Quotes must be separated by quote type, must be responded to within 24 hours, and must not be lost. The solution should be simple to set up and maintain.

Which solution meets these requirements?

A. Create multiple Amazon Kinesis data streams based on the quote type. Configure the web application to send messages to the proper data stream. Configure each backend group of application servers to poll messages from its own data stream using the Kinesis Client Library (KCL).
B. Create multiple Amazon Simple Notification Service (Amazon SNS) topics and register Amazon SQS queues to their own SNS topic based on the quote type. Configure the web application to publish messages to the proper SNS topic. Configure each backend application server to work with its own SQS queue.
C. Create a single Amazon Simple Notification Service (Amazon SNS) topic and subscribe the Amazon SQS queues to the SNS topic. Configure SNS message filtering to publish messages to the proper SQS queue based on the quote type. Configure each backend application server to work with its own SQS queue.
D. Create multiple Amazon Kinesis Data Firehose delivery streams based on the quote type to deliver data streams to an Amazon Elasticsearch Service (Amazon ES) cluster. Configure the web application to send messages to the proper delivery stream. Configure each backend group of application servers to search for the messages from Amazon ES and process them accordingly.

Correct Answer

C. Create a single Amazon Simple Notification Service (Amazon SNS) topic and subscribe the Amazon SQS queues to the SNS topic. Configure SNS message filtering to publish messages to the proper SQS queue based on the quote type. Configure each backend application server to work with its own SQS queue.

Explanation

The solution that meets the requirements of separating quotes by quote type, responding within 24 hours, and ensuring no loss of quotes while being simple to set up and maintain is:

C. Create a single Amazon Simple Notification Service (Amazon SNS) topic and subscribe the Amazon SQS queues to the SNS topic. Configure SNS message filtering to publish messages to the proper SQS queue based on the quote type. Configure each backend application server to work with its own SQS queue.

Amazon SNS provides publish/subscribe messaging, and Amazon SQS is a fully managed message queuing service. By creating a single SNS topic and subscribing multiple SQS queues to it, you can achieve separation of quotes by quote type. SNS message filtering can be used to publish messages to the appropriate SQS queue based on the quote type. Each backend application server can then work with its own SQS queue to process the quotes.

This solution allows for easy separation of quotes, ensures that they are not lost, and provides a straightforward setup and maintenance process.
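A sketch of the filter-policy setup with the AWS CLI (the topic and subscription ARNs and the quote_type attribute name are placeholders; one subscription per quote type):

```shell
# Attach a filter policy to the "auto" quote queue's subscription; repeat per quote type.
aws sns set-subscription-attributes \
  --subscription-arn arn:aws:sns:us-east-1:123456789012:quotes:11111111-2222-3333-4444-555555555555 \
  --attribute-name FilterPolicy \
  --attribute-value '{"quote_type": ["auto"]}'

# Publishers set a matching message attribute so SNS routes the quote to the right queue.
aws sns publish \
  --topic-arn arn:aws:sns:us-east-1:123456789012:quotes \
  --message '{"quote_id": "123"}' \
  --message-attributes '{"quote_type": {"DataType": "String", "StringValue": "auto"}}'
```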

Question 1089

Exam Question

A company is seeing access requests by some suspicious IP addresses. The security team discovers the requests are from different IP addresses under the same CIDR range.

What should a solutions architect recommend to the team?

A. Add a rule in the inbound table of the security group to deny the traffic from that CIDR range.
B. Add a rule in the outbound table of the security group to deny the traffic from that CIDR range.
C. Add a deny rule in the inbound table of the network ACL with a lower number than other rules.
D. Add a deny rule in the outbound table of the network ACL with a lower rule number than other rules.

Correct Answer

C. Add a deny rule in the inbound table of the network ACL with a lower number than other rules.

Explanation

In this scenario, the requests are coming from different IP addresses within the same CIDR range, indicating a suspicious activity. To address this, a deny rule should be added to the inbound table of the network ACL (Access Control List) with a lower rule number than other rules. By adding a deny rule with a lower number, it takes precedence over other rules and blocks the traffic from the suspicious CIDR range.
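A sketch of such a rule with the AWS CLI (the network ACL ID and CIDR range are placeholders; rule number 50 is evaluated before the default allow rules, which typically start at 100):

```shell
# Deny all inbound traffic from the suspicious CIDR range with a low-numbered rule.
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --ingress \
  --rule-number 50 \
  --protocol "-1" \
  --cidr-block 203.0.113.0/24 \
  --rule-action deny
```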

Adding a rule to the security group (option A or B) would not work because security group rules are allow-only: a security group cannot contain an explicit deny rule, so it cannot block traffic from a specific CIDR range.

Option D suggests adding a deny rule in the outbound table of the network ACL, which would prevent outbound traffic from the suspicious CIDR range. However, the focus here is on addressing the inbound access requests, so the inbound table of the network ACL is the appropriate place to apply the deny rule.

Question 1090

Exam Question

A company is planning to build a new web application on AWS. The company expects predictable traffic most of the year and very high traffic on occasion. The web application needs to be highly available and fault tolerant with minimal latency.

What should a solutions architect recommend to meet these requirements?

A. Use an Amazon Route 53 routing policy to distribute requests to two AWS Regions, each with one Amazon EC2 instance.
B. Use Amazon EC2 instances in an Auto Scaling group with an Application Load Balancer across multiple Availability Zones.
C. Use Amazon EC2 instances in a cluster placement group with an Application Load Balancer across multiple Availability Zones.
D. Use Amazon EC2 instances in a cluster placement group and include the cluster placement group within a new Auto Scaling group.

Correct Answer

B. Use Amazon EC2 instances in an Auto Scaling group with an Application Load Balancer across multiple Availability Zones.

Explanation

To achieve high availability, fault tolerance, and minimal latency for the web application, it is recommended to use Amazon EC2 instances in an Auto Scaling group with an Application Load Balancer (ALB) across multiple Availability Zones.

Here’s why this option is the most suitable:

  • Auto Scaling group: By using an Auto Scaling group, the application can scale the number of EC2 instances based on the incoming traffic. This ensures that the application can handle both predictable and high traffic loads effectively.
  • Application Load Balancer: The ALB distributes incoming traffic across multiple EC2 instances, improving availability and fault tolerance. It performs health checks on instances and automatically routes traffic to healthy instances, ensuring that the application remains accessible even if some instances fail.
  • Multiple Availability Zones: Deploying EC2 instances across multiple Availability Zones provides redundancy and fault tolerance. If one Availability Zone becomes unavailable, the traffic can be automatically routed to instances in the other Availability Zones, maintaining high availability for the application.
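Assuming a launch template and an ALB target group already exist (all names, subnet IDs, and ARNs below are placeholders), the pieces above can be wired together like this:

```shell
# Create an Auto Scaling group spanning two AZs, registered with the ALB's target group.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-template LaunchTemplateName=web-template,Version='$Latest' \
  --min-size 2 --max-size 10 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222" \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-tg/abcdef1234567890 \
  --health-check-type ELB \
  --health-check-grace-period 120
```

Using ELB health checks means the group replaces instances that fail the load balancer's health check, not just instances that fail EC2 status checks.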

Option A (using Amazon Route 53 routing policy to distribute requests to two AWS Regions) may provide some level of fault tolerance, but it may introduce higher latency due to cross-region communication.

Options C and D (using cluster placement groups) do not explicitly address fault tolerance and may not be suitable for achieving high availability in the event of instance failures or Availability Zone outages.

Therefore, option B with Amazon EC2 instances in an Auto Scaling group with an Application Load Balancer across multiple Availability Zones is the recommended choice to meet the requirements.