AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 31

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free of charge to help you prepare for the AWS Certified Solutions Architect – Associate SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate SAA-C03 certification.

Question 1021

Exam Question

A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A solutions architect needs to implement a solution to ingest and store the alerts for future analysis. The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days.

What is the MOST operationally efficient solution that meets these requirements?

A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
B. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
C. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon Elasticsearch Service (Amazon ES) cluster. Set up the Amazon ES cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days.
D. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, and set the message retention period to 14 days. Configure consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is 14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.

Correct Answer

A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.

Explanation

The most operationally efficient solution that meets the requirements is:

A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.

Here’s how this solution addresses the requirements:

  • Ingestion: Amazon Kinesis Data Firehose is designed to ingest and deliver streaming data with high throughput and reliability. It can absorb alerts from thousands of edge devices and scale automatically with the incoming stream (roughly 1 TB per day at about 2 KB per alert).
  • Storage: The alerts are stored in an Amazon S3 bucket, which provides durability, availability, and scalability. By leveraging S3’s native features, the company does not need to manage additional infrastructure or worry about the operational overhead.
  • Cost Optimization: An S3 Lifecycle configuration can be set up to transition data to Amazon S3 Glacier after 14 days. This allows the company to minimize costs by moving the older data to a lower-cost storage tier while still keeping it accessible for analysis.
  • Immediate Analysis: The data is available in the S3 bucket for immediate analysis for up to 14 days. This meets the requirement of keeping 14 days of data available for immediate analysis.

By using Amazon Kinesis Data Firehose with an S3 bucket and an S3 Lifecycle configuration, the company can efficiently ingest, store, and manage the alert data without the need for additional infrastructure management. The solution provides high availability, cost optimization, and meets the specified retention period for data analysis.
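
As a rough illustration, the sketch below (using the AWS SDK for Python, boto3) creates a direct-PUT Firehose delivery stream pointed at an S3 bucket and attaches the 14-day Glacier transition rule. The bucket name, stream name, and IAM role ARN are hypothetical placeholders.

```python
import boto3

# Hypothetical names and ARNs for illustration only.
BUCKET = "alert-archive-example"
FIREHOSE_ROLE_ARN = "arn:aws:iam::123456789012:role/firehose-delivery-role"

firehose = boto3.client("firehose")
s3 = boto3.client("s3")

# Direct PUT delivery stream that batches alerts into the S3 bucket.
firehose.create_delivery_stream(
    DeliveryStreamName="edge-device-alerts",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": FIREHOSE_ROLE_ARN,
        "BucketARN": f"arn:aws:s3:::{BUCKET}",
        "BufferingHints": {"SizeInMBs": 128, "IntervalInSeconds": 300},
        "CompressionFormat": "GZIP",
    },
)

# Lifecycle rule: keep objects in S3 Standard for 14 days, then move to Glacier.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-alerts-after-14-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 14, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```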

Question 1022

Exam Question

A company has 700 TB of backup data stored in network attached storage (NAS) in its data center. This backup data needs to be accessible for infrequent regulatory requests and must be retained for 7 years. The company has decided to migrate this backup data from its data center to AWS. The migration must be complete within 1 month. The company has 500 Mbps of dedicated bandwidth on its public internet connection available for data transfer.

What should a solutions architect do to migrate and store the data at the LOWEST cost?

A. Order AWS Snowball devices to transfer the data. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.
B. Deploy a VPN connection between the data center and Amazon VPC. Use the AWS CLI to copy the data from on premises to Amazon S3 Glacier.
C. Provision a 500 Mbps AWS Direct Connect connection and transfer the data to Amazon S3. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.
D. Use AWS DataSync to transfer the data and deploy a DataSync agent on premises. Use the DataSync task to copy files from the on-premises NAS storage to Amazon S3 Glacier.

Correct Answer

A. Order AWS Snowball devices to transfer the data. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.

Explanation

To migrate and store the data at the lowest cost, the solutions architect should recommend the following approach:

A. Order AWS Snowball devices to transfer the data. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.

Here’s how this solution meets the requirements:

  • Data Transfer: AWS Snowball devices provide a physical data transfer solution, allowing for efficient and secure transfer of large amounts of data. The company can have Snowball devices shipped to its data center, load the 700 TB of backup data, and return them within the required 1-month timeframe. An online transfer is not feasible here: 500 Mbps moves roughly 5 TB per day, so 700 TB would take well over four months, far exceeding the deadline. Shipping the data on Snowball devices bypasses the internet connection entirely.
  • Data Storage: The backup data can be stored in Amazon S3, which offers cost-effective and durable object storage. By using a lifecycle policy, the files can be transitioned from the S3 Standard storage class to the lower-cost Amazon S3 Glacier Deep Archive storage class. This helps to minimize long-term storage costs while retaining the data for the required 7-year retention period.
  • Cost Optimization: AWS Snowball is priced per job (a flat service fee plus shipping and any extra days the device is kept), making it a cost-effective option for transferring data in large volumes. Additionally, transitioning the files to Amazon S3 Glacier Deep Archive, which has the lowest storage cost among the S3 storage classes, minimizes ongoing storage costs for the 7-year retention period.

By leveraging AWS Snowball for data transfer and utilizing Amazon S3 with a lifecycle policy to transition the files to Glacier Deep Archive, the company can migrate and store the backup data in AWS at the lowest cost. The solution also ensures compliance with the regulatory requirements and provides long-term data retention.
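
A minimal sketch of the lifecycle side of this answer, assuming a hypothetical bucket name: the boto3 rule below transitions objects to S3 Glacier Deep Archive and expires them after the 7-year retention period.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "nas-backup-archive-example"  # hypothetical bucket name

s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "deep-archive-and-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                # Move objects to Glacier Deep Archive as soon as the lifecycle allows.
                "Transitions": [{"Days": 0, "StorageClass": "DEEP_ARCHIVE"}],
                # Expire after the 7-year retention period (about 2,557 days).
                "Expiration": {"Days": 2557},
            }
        ]
    },
)
```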

Question 1023

Exam Question

A company has two AWS accounts: Production and Development. There are code changes ready in the Development account to push to the Production account. In the alpha phase, only two senior developers on the development team need access to the Production account. In the beta phase, more developers might need access to perform testing as well.

What should a solutions architect recommend?

A. Create two policy documents using the AWS Management Console in each account. Assign the policy to developers who need access.
B. Create an IAM role in the Development account. Give one IAM role access to the Production account. Allow developers to assume the role.
C. Create an IAM role in the Production account with the trust policy that specifies the Development account. Allow developers to assume the role.
D. Create an IAM group in the Production account and add it as a principal in the trust policy that specifies the Production account. Add developers to the group.

Correct Answer

C. Create an IAM role in the Production account with the trust policy that specifies the Development account. Allow developers to assume the role.

Explanation

A solutions architect should recommend Option C: Create an IAM role in the Production account with the trust policy that specifies the Development account. Allow developers to assume the role.

Here’s how this recommendation meets the requirements:

  • Access Control: By creating an IAM role in the Production account and specifying the Development account in the trust policy, access can be granted to developers from the Development account. This approach ensures a secure and controlled way to allow access to the Production account.
  • Least Privilege: The IAM role can be configured with the necessary permissions required for the senior developers in the alpha phase. This ensures that only the required privileges are granted to them, following the principle of least privilege. The permissions can be carefully defined based on the specific actions and resources needed in the Production account.
  • Assume Role: The senior developers can assume the IAM role in the Production account from their Development account. This allows them to switch their identity and access the Production account using their existing credentials from the Development account. This approach provides a seamless and convenient way for them to access the Production account when needed.
  • Scalability: In the beta phase, when more developers need access for testing, they can simply be granted permission in the Development account to assume the same role, or additional roles with narrower permissions can be created. The setup can be adjusted as development and testing needs evolve, without creating IAM users in the Production account.

By implementing Option C, the solutions architect can establish a secure and controlled access mechanism for the senior developers in the Development account to access the Production account. This approach follows best practices in IAM management, ensuring a clear separation of permissions and maintaining the principle of least privilege.
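
The boto3 sketch below illustrates the two halves of this pattern: creating the role with a trust policy in the Production account, and assuming it from the Development account. Account IDs, the role name, and the session name are hypothetical, and developers in the Development account would additionally need an identity-based policy allowing sts:AssumeRole on this role.

```python
import json
import boto3

# Hypothetical account IDs and names for illustration.
DEV_ACCOUNT_ID = "111111111111"
PROD_ACCOUNT_ID = "222222222222"
ROLE_NAME = "CrossAccountDeployRole"

# Run in the Production account: create a role that trusts the Development account.
iam = boto3.client("iam")
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{DEV_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}
iam.create_role(
    RoleName=ROLE_NAME,
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Run by a developer in the Development account: assume the role to get
# temporary credentials for the Production account.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn=f"arn:aws:iam::{PROD_ACCOUNT_ID}:role/{ROLE_NAME}",
    RoleSessionName="alpha-deploy",
)["Credentials"]
```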

Question 1024

Exam Question

A company hosts its web application on AWS using seven Amazon EC2 instances. The company requires that the IP addresses of all healthy EC2 instances be returned in response to DNS queries.

Which policy should be used to meet this requirement?

A. Simple routing policy
B. Latency routing policy
C. Multi-value routing policy
D. Geolocation routing policy

Correct Answer

C. Multi-value routing policy

Explanation

The policy that should be used to meet the requirement of returning the IP addresses of all healthy EC2 instances in response to DNS queries is the Multi-value routing policy.

In a Multi-value routing policy (multivalue answer routing), Amazon Route 53 responds to each DNS query with up to eight healthy records, selected at random. This policy is suitable when you want to distribute traffic across multiple resources or return multiple IP addresses for redundancy.

In this scenario, by configuring a Multi-value routing policy with health checks for the DNS records associated with the EC2 instances, the IP addresses of all healthy EC2 instances (all seven fit within the eight-record limit) are returned in response to DNS queries. This allows clients to spread traffic across the healthy instances, providing fault tolerance and basic load balancing.

Therefore, the correct answer is option C: Multi-value routing policy.
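
For illustration, the boto3 sketch below creates one multivalue answer A record per instance, each associated with a health check so that only healthy IP addresses are returned. The hosted zone ID, record name, IP addresses, and health check IDs are hypothetical.

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical hosted zone ID, record name, instance IPs, and health check IDs.
HOSTED_ZONE_ID = "Z0000000000000EXAMPLE"
INSTANCES = [
    ("198.51.100.11", "11111111-aaaa-bbbb-cccc-111111111111"),
    ("198.51.100.12", "22222222-aaaa-bbbb-cccc-222222222222"),
    # ... one entry per EC2 instance (seven in this scenario)
]

changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "TTL": 60,
            "SetIdentifier": f"instance-{i}",
            "MultiValueAnswer": True,       # multivalue answer routing
            "HealthCheckId": health_check,  # only healthy records are returned
            "ResourceRecords": [{"Value": ip}],
        },
    }
    for i, (ip, health_check) in enumerate(INSTANCES)
]

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Changes": changes},
)
```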

Question 1025

Exam Question

A company is using an Amazon S3 bucket to store data uploaded by different departments from multiple locations. During an AWS Well-Architected review, the financial manager notices that the company is being charged for 10 TB of S3 Standard storage each month. However, selecting all files and folders in the Amazon S3 console shows a total size of only 5 TB.

What are the possible causes for this difference? (Choose two.)

A. Some files are stored with deduplication.
B. The S3 bucket has versioning enabled.
C. There are incomplete S3 multipart uploads.
D. The S3 bucket has AWS Key Management Service (AWS KMS) enabled.
E. The S3 bucket has Intelligent-Tiering enabled.

Correct Answer

B. The S3 bucket has versioning enabled.
C. There are incomplete S3 multipart uploads.

Explanation

The possible causes for the difference between the reported size of 10 TB and the total size of 5 TB in the Amazon S3 bucket are:

B. The S3 bucket has versioning enabled: If versioning is enabled for the bucket, each version of an object will be stored separately, which can result in a larger total storage size compared to the size of the current versions.

C. There are incomplete S3 multipart uploads: When a multipart upload is initiated but not completed, the uploaded parts are still stored in the bucket. These incomplete uploads can contribute to the total storage size but may not be included when selecting all files and folders.

Therefore, the correct options are B and C: The S3 bucket has versioning enabled and there are incomplete S3 multipart uploads.
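
Both causes are commonly addressed with an S3 Lifecycle configuration. The boto3 sketch below, using a hypothetical bucket name and retention values, expires old noncurrent object versions and aborts incomplete multipart uploads so the hidden storage stops accumulating.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "department-uploads-example"  # hypothetical bucket name

s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "clean-up-hidden-storage",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                # Remove old object versions retained by versioning.
                "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
                # Abort multipart uploads that never completed.
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)
```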

Question 1026

Exam Question

A company is setting up an application to use an Amazon RDS MySQL DB instance. The database must be architected for high availability across Availability Zones and AWS Regions with minimal downtime.

How should a solutions architect meet this requirement?

A. Set up an RDS MySQL Multi-AZ DB instance. Configure an appropriate backup window.
B. Set up an RDS MySQL Multi-AZ DB instance. Configure a read replica in a different Region.
C. Set up an RDS MySQL Single-AZ DB instance. Configure a read replica in a different Region.
D. Set up an RDS MySQL Single-AZ DB instance. Copy automated snapshots to at least one other Region.

Correct Answer

B. Set up an RDS MySQL Multi-AZ DB instance. Configure a read replica in a different Region.

Explanation

To meet the requirement of high availability across Availability Zones and AWS Regions with minimal downtime for an Amazon RDS MySQL DB instance, a solutions architect should:

B. Set up an RDS MySQL Multi-AZ DB instance and configure a read replica in a different Region.

By setting up an RDS MySQL Multi-AZ DB instance, Amazon RDS will automatically replicate the primary database to a standby replica in a different Availability Zone. In the event of a failure, Amazon RDS will automatically failover to the standby replica, minimizing downtime.

Additionally, configuring a read replica in a different Region provides further redundancy and availability. The read replica can be used for read scaling and can be promoted to become the primary database in case of a Region-level failure.

This architecture ensures high availability across Availability Zones and AWS Regions, providing resilience and minimizing downtime.
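
A rough boto3 sketch of this architecture, with hypothetical identifiers, Regions, and instance settings: a Multi-AZ primary in one Region and a cross-Region read replica created from a second Region.

```python
import boto3

# Hypothetical identifiers; the source ARN must reference the primary Region.
SOURCE_DB_ARN = "arn:aws:rds:us-east-1:123456789012:db:app-mysql-primary"

# Primary: Multi-AZ MySQL instance in the primary Region.
rds_primary = boto3.client("rds", region_name="us-east-1")
rds_primary.create_db_instance(
    DBInstanceIdentifier="app-mysql-primary",
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-example",
    MultiAZ=True,  # synchronous standby in another Availability Zone
)

# Cross-Region read replica, created from the secondary Region.
rds_replica = boto3.client("rds", region_name="us-west-2")
rds_replica.create_db_instance_read_replica(
    DBInstanceIdentifier="app-mysql-replica",
    SourceDBInstanceIdentifier=SOURCE_DB_ARN,
    SourceRegion="us-east-1",  # lets boto3 presign the cross-Region request
)
```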

Question 1027

Exam Question

A solutions architect plans to convert a company's monolithic web application into a multi-tier application. The company wants to avoid managing its own infrastructure. The minimum requirements for the web application are high availability, scalability, and regional low latency during peak hours. The solution must also store and retrieve data with millisecond latency through the application API.

Which solution meets these requirements?

A. Use AWS Fargate to host the web application with backend Amazon RDS Multi-AZ DB instances.
B. Use Amazon API Gateway with an edge-optimized API endpoint, AWS Lambda for compute, and Amazon DynamoDB as the data store.
C. Use an Amazon Route 53 routing policy with geolocation that points to an Amazon S3 bucket with static website hosting and Amazon DynamoDB as the data store.
D. Use an Amazon CloudFront distribution that points to an Elastic Load Balancer with an Amazon EC2 Auto Scaling group, along with Amazon RDS Multi-AZ DB instances.

Correct Answer

B. Use Amazon API Gateway with an edge-optimized API endpoint, AWS Lambda for compute, and Amazon DynamoDB as the data store.

Explanation

To meet the requirements of high availability, scalability, regional low latency, and millisecond data access latency, the recommended solution is:

B. Use Amazon API Gateway with an edge-optimized API endpoint, AWS Lambda for compute, and Amazon DynamoDB as the data store.

With this solution, Amazon API Gateway provides a fully managed service for creating, deploying, and managing API endpoints. Edge-optimized endpoints route API requests through the nearest CloudFront edge location, reducing latency for users accessing the application from different geographic locations.

AWS Lambda allows you to run your application code without managing any infrastructure. It automatically scales based on the incoming request load, providing scalability to handle varying traffic patterns. Lambda functions can be integrated with Amazon API Gateway to process and respond to API requests.

Amazon DynamoDB is a highly scalable and low-latency NoSQL database service. It can handle millions of requests per second with single-digit millisecond latency. By using DynamoDB as the data store, you can achieve fast data retrieval with millisecond latency through the application API.

This solution eliminates the need for managing infrastructure and provides high availability, scalability, and low latency for the web application.
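
As a simple illustration of the compute and data tiers, the hypothetical Lambda handler below sits behind an API Gateway proxy integration and reads and writes items in a DynamoDB table; the table name, key schema, and event shape are assumptions, not part of the question.

```python
import json
import boto3

# Hypothetical table with an "id" partition key.
table = boto3.resource("dynamodb").Table("AppItems")

def handler(event, context):
    method = event.get("httpMethod")
    if method == "POST":
        item = json.loads(event["body"])
        table.put_item(Item=item)  # single-digit-millisecond write
        return {"statusCode": 201, "body": json.dumps({"stored": item["id"]})}
    if method == "GET":
        item_id = event["pathParameters"]["id"]
        resp = table.get_item(Key={"id": item_id})  # single-digit-millisecond read
        return {
            "statusCode": 200,
            "body": json.dumps(resp.get("Item", {}), default=str),
        }
    return {"statusCode": 405, "body": "Method not allowed"}
```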

Question 1028

Exam Question

A company is running an online transaction processing (OLTP) workload on AWS. This workload uses an unencrypted Amazon RDS DB instance in a Multi-AZ deployment. Daily database snapshots are taken from this instance.

What should a solutions architect do to ensure the database and snapshots are always encrypted moving forward?

A. Encrypt a copy of the latest DB snapshot. Replace existing DB instance by restoring the encrypted snapshot.
B. Create a new encrypted Amazon Elastic Block Store (Amazon EBS) volume and copy the snapshots to it. Enable encryption on the DB instance.
C. Copy the snapshots and enable encryption using AWS Key Management Service (AWS KMS). Restore encrypted snapshot to an existing DB instance.
D. Copy the snapshots to an Amazon S3 bucket that is encrypted using server-side encryption with AWS Key Management Service (AWS KMS) managed keys (SSE-KMS).

Correct Answer

C. Copy the snapshots and enable encryption using AWS Key Management Service (AWS KMS). Restore encrypted snapshot to an existing DB instance.

Explanation

To ensure the database and snapshots are always encrypted moving forward, a solutions architect should:

C. Copy the snapshots and enable encryption using AWS Key Management Service (AWS KMS). Restore the encrypted snapshot to an existing DB instance.

By copying the existing snapshots and enabling encryption using AWS KMS, you can ensure that the snapshots are encrypted. This involves creating new encrypted copies of the snapshots. Once the encrypted snapshots are available, you can restore them to an existing DB instance to ensure that the database is also encrypted.

Enabling encryption on the DB instance itself (option B) is not possible: Amazon RDS encryption can only be enabled when a DB instance is created, and RDS storage is not exposed as user-managed EBS volumes onto which snapshots can simply be copied. An existing unencrypted instance therefore stays unencrypted, and so do any new snapshots taken from it.

Creating an encrypted copy of the latest DB snapshot and replacing the existing DB instance (option A) can be a valid approach if you want to start with a fresh encrypted instance, but it involves replacing the existing instance and may result in downtime or data loss.

Copying the snapshots to an S3 bucket with server-side encryption using AWS KMS managed keys (option D) encrypts the snapshots at rest in S3, but does not encrypt the actual database or its snapshots within RDS.

Therefore, option C is the most appropriate choice for ensuring both the database and snapshots are always encrypted moving forward.
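
A minimal boto3 sketch of this procedure, with hypothetical snapshot identifiers and KMS key alias: copy the snapshot with a KMS key to produce an encrypted copy, then restore it to a DB instance that takes over the workload.

```python
import boto3

rds = boto3.client("rds")

# Supplying a KMS key when copying an unencrypted snapshot encrypts the copy.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="oltp-db-snapshot-2024-01-01",
    TargetDBSnapshotIdentifier="oltp-db-snapshot-2024-01-01-encrypted",
    KmsKeyId="alias/rds-encryption-key",  # hypothetical key alias
)

# Restore the encrypted copy; the resulting DB instance, and every snapshot
# taken from it going forward, is encrypted.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="oltp-db-encrypted",
    DBSnapshotIdentifier="oltp-db-snapshot-2024-01-01-encrypted",
    MultiAZ=True,
)
```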

Question 1029

Exam Question

A company is storing sensitive user information in an Amazon S3 bucket. The company wants to provide secure access to this bucket from the application tier running on Amazon EC2 instances inside a VPC.

Which combination of steps should a solutions architect take to accomplish this? (Choose two.)

A. Configure a VPC gateway endpoint for Amazon S3 within the VPC.
B. Create a bucket policy to make the objects in the S3 bucket public.
C. Create a bucket policy that limits access to only the application tier running in the VPC.
D. Create an IAM user with an S3 access policy and copy the IAM credentials to the EC2 instance.
E. Create a NAT instance and have the EC2 instances use the NAT instance to access the S3 bucket.

Correct Answer

A. Configure a VPC gateway endpoint for Amazon S3 within the VPC.
C. Create a bucket policy that limits access to only the application tier running in the VPC.

Explanation

To provide secure access to an Amazon S3 bucket from the application tier running on Amazon EC2 instances inside a VPC, the following steps should be taken:

A. Configure a VPC gateway endpoint for Amazon S3 within the VPC.
C. Create a bucket policy that limits access to only the application tier running in the VPC.

Explanation:
A VPC gateway endpoint for Amazon S3 (option A) allows private connectivity between the VPC and S3 without the need for internet access. This ensures that the traffic between the EC2 instances and the S3 bucket remains within the AWS network and is not exposed to the public internet.

Creating a bucket policy (option C) that restricts access to the application tier running in the VPC allows you to define granular access controls for the S3 bucket. For example, the policy can deny any request that does not arrive through the VPC endpoint by using the aws:SourceVpce or aws:SourceVpc condition keys. This enforces secure access and prevents unauthorized access to the sensitive user information.

Options B, D, and E are not recommended or necessary for achieving secure access to the S3 bucket in this scenario:

  • Making the objects in the S3 bucket public (option B) would expose the sensitive user information to the public, which is not desirable.
  • Creating an IAM user with S3 access credentials and copying them to the EC2 instance (option D) can introduce security risks, as it involves managing and distributing long-term credentials.
  • Using a NAT instance for accessing the S3 bucket (option E) adds unnecessary complexity and overhead. VPC gateway endpoints are a more secure and efficient way to establish connectivity between the VPC and S3.

Therefore, options A and C are the appropriate steps to accomplish secure access to the S3 bucket from the application tier in the VPC.
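
The boto3 sketch below, with hypothetical VPC, route table, Region, and bucket names, creates the S3 gateway endpoint and applies a bucket policy that denies requests not arriving through that endpoint (in practice you would also exempt administrative roles so you cannot lock yourself out).

```python
import json
import boto3

# Hypothetical IDs and names for illustration.
VPC_ID = "vpc-0abc123de456f7890"
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"
BUCKET = "sensitive-user-data-example"
REGION = "us-east-1"

ec2 = boto3.client("ec2", region_name=REGION)
s3 = boto3.client("s3")

# Gateway endpoint so EC2 instances reach S3 without traversing the internet.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=VPC_ID,
    ServiceName=f"com.amazonaws.{REGION}.s3",
    RouteTableIds=[ROUTE_TABLE_ID],
)
endpoint_id = endpoint["VpcEndpoint"]["VpcEndpointId"]

# Bucket policy that denies any access not coming through that endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOnlyFromVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"StringNotEquals": {"aws:SourceVpce": endpoint_id}},
        }
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```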

Question 1030

Exam Question

A company is planning to transfer multiple terabytes of data to AWS. The data is collected offline from ships. The company wants to run a complex transformation before transferring the data.

Which AWS service should a solutions architect recommend for this migration?

A. AWS Snowball
B. AWS Snowmobile
C. AWS Snowball Edge Storage Optimized
D. AWS Snowball Edge Compute Optimized

Correct Answer

A. AWS Snowball

Explanation

For the scenario described, the recommended AWS service for the migration of multiple terabytes of data, along with the requirement to run a complex transformation before transferring the data, is:

A. AWS Snowball

AWS Snowball is a service designed specifically for offline data transfer scenarios where large amounts of data need to be migrated to AWS. It provides a secure and efficient way to transfer data offline by using physical devices called Snowball appliances. You can request a Snowball appliance, a ruggedized storage device available in different capacities (for example, about 80 TB of usable storage per device), have it shipped to your location, and load your data onto it.

In this case, since the data is collected offline from ships and a complex transformation needs to be performed before transferring the data, you can leverage AWS Snowball. You can perform the necessary transformation on-premises or at the ship location, load the transformed data onto the Snowball appliance, and then ship it to an AWS data center for ingestion into AWS services.

Options B, C, and D (AWS Snowmobile, AWS Snowball Edge Storage Optimized, and AWS Snowball Edge Compute Optimized) are not the most suitable choices for this scenario.

  • AWS Snowmobile is designed for massive data migrations at petabyte to exabyte scale. It is a shipping container-sized data transfer solution that is neither necessary nor practical for the terabytes of data described here.
  • AWS Snowball Edge Storage Optimized and AWS Snowball Edge Compute Optimized are Snowball devices that combine storage and compute capabilities. They are more suitable for scenarios that require local storage and processing capabilities at remote or disconnected locations, which is not the primary requirement mentioned in the question.

Therefore, AWS Snowball (option A) is the appropriate service for this migration scenario.