The latest AWS Certified Solutions Architect – Associate (SAA-C03) practice exam questions and answers (Q&A) are available free of charge to help you prepare for the AWS Certified Solutions Architect – Associate SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.
Table of Contents
- Question 1191
- Exam Question
- Correct Answer
- Explanation
- Question 1192
- Exam Question
- Correct Answer
- Explanation
- Question 1193
- Exam Question
- Correct Answer
- Explanation
- Question 1194
- Exam Question
- Correct Answer
- Explanation
- Question 1195
- Exam Question
- Correct Answer
- Explanation
- Question 1196
- Exam Question
- Correct Answer
- Explanation
- Question 1197
- Exam Question
- Correct Answer
- Explanation
- Question 1198
- Exam Question
- Correct Answer
- Question 1199
- Exam Question
- Correct Answer
- Explanation
- Question 1200
- Exam Question
- Correct Answer
- Explanation
Question 1191
Exam Question
A company has an application hosted on Amazon EC2 instances in two VPCs across different AWS Regions. To communicate with each other, the instances use the internet for connectivity. The security team wants to ensure that no communication between the instances happens over the internet.
What should a solutions architect do to accomplish this?
A. Create a NAT gateway and update the route table of the EC2 instance subnet.
B. Create a VPC endpoint and update the route table of the EC2 instance subnet.
C. Create a VPN connection and update the route table of the EC2 instances' subnets.
D. Create a VPC peering connection and update the route table of the EC2 instances' subnets.
Correct Answer
D. Create a VPC peering connection and update the route table of the EC2 instances' subnets.
Explanation
To ensure that no communication between the instances happens over the internet, a solutions architect should:
D. Create a VPC peering connection and update the route table of the EC2 instances' subnets.
- VPC peering allows direct network connectivity between VPCs using private IP addresses. It enables instances in different VPCs to communicate with each other using private IP addresses without needing to traverse the internet.
- By creating a VPC peering connection between the two VPCs hosting the EC2 instances, the instances can communicate securely over the private network without any internet involvement.
- After creating the VPC peering connection, the route tables of the EC2 instance subnets should be updated to route traffic destined for the other VPC through the peering connection.
Option A, creating a NAT gateway, would provide internet access for instances in the VPC, which is the opposite of the desired outcome.
Option B, creating a VPC endpoint, allows instances to securely access AWS services without going over the internet but does not provide direct communication between instances in different VPCs.
Option C, creating a VPN connection, would establish a secure connection between VPCs but would still involve routing traffic over the internet.
Therefore, the correct solution is to create a VPC peering connection and update the route tables of the EC2 instance subnets accordingly.
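As a minimal illustration of option D, assuming hypothetical VPC, CIDR, and route table IDs for two VPCs in different Regions:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a peering connection to a VPC in another Region (all IDs are placeholders).
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-11111111",      # requester VPC in us-east-1
    PeerVpcId="vpc-22222222",  # accepter VPC
    PeerRegion="eu-west-1",    # Region of the accepter VPC
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The accepter side approves the request from its own Region; in practice the
# request may need a moment to propagate before this call succeeds.
boto3.client("ec2", region_name="eu-west-1").accept_vpc_peering_connection(
    VpcPeeringConnectionId=pcx_id
)

# Route traffic destined for the peer VPC's CIDR through the peering connection.
ec2.create_route(
    RouteTableId="rtb-aaaaaaaa",
    DestinationCidrBlock="10.1.0.0/16",  # CIDR of the peer VPC
    VpcPeeringConnectionId=pcx_id,
)
```

The peer VPC's route tables need the mirror-image route back through the same peering connection, and the CIDR blocks of the two VPCs must not overlap.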
Question 1192
Exam Question
A company wants to move its on-premises network-attached storage (NAS) to AWS. The company wants to make the data available to any Linux instances within its VPC and ensure changes are automatically synchronized across all instances accessing the data store. The majority of the data is accessed very rarely, and some files are accessed by multiple users at the same time.
Which solution meets these requirements and is MOST cost-effective?
A. Create an Amazon Elastic Block Store (Amazon EBS) snapshot containing the data. Share it with users within the VPC.
B. Create an Amazon S3 bucket that has a lifecycle policy set to transition the data to S3 Standard-Infrequent Access (S3 Standard-IA) after the appropriate number of days.
C. Create an Amazon Elastic File System (Amazon EFS) file system within the VPC. Set the throughput mode to Provisioned and to the required amount of IOPS to support concurrent usage.
D. Create an Amazon Elastic File System (Amazon EFS) file system within the VPC. Set the lifecycle policy to transition the data to EFS Infrequent Access (EFS IA) after the appropriate number of days.
Correct Answer
D. Create an Amazon Elastic File System (Amazon EFS) file system within the VPC. Set the lifecycle policy to transition the data to EFS Infrequent Access (EFS IA) after the appropriate number of days.
Explanation
The solution that meets the requirements and is most cost-effective is:
D. Create an Amazon Elastic File System (Amazon EFS) file system within the VPC. Set the lifecycle policy to transition the data to EFS Infrequent Access (EFS IA) after the appropriate number of days.
- Amazon EFS is a fully managed, scalable, and shared file storage service in AWS that is compatible with Linux-based instances.
- By creating an Amazon EFS file system within the VPC, the company can make the data available to any Linux instances within the VPC.
- Amazon EFS stores data redundantly across multiple Availability Zones, and every instance mounts the same shared file system, so changes are immediately visible to all instances accessing the data store.
- The majority of the data being accessed very rarely aligns well with the lifecycle policy of EFS Infrequent Access (EFS IA), which provides a lower-cost storage class for infrequently accessed data.
- With EFS IA, the data can be automatically transitioned to the lower-cost storage class after the appropriate number of days, helping to optimize costs.
Option A, creating an Amazon EBS snapshot, is not suitable for providing shared access to the data across multiple instances, and it does not offer automatic synchronization of changes.
Option B, using an Amazon S3 bucket, is an object storage solution and does not provide a traditional file system interface for Linux instances. It also does not offer automatic synchronization of changes.
Option C, setting the throughput mode of Amazon EFS to Provisioned, would require additional costs and configuration complexity, which may not be necessary based on the described requirements.
Therefore, the best option is to create an Amazon EFS file system within the VPC and set the lifecycle policy to transition the data to EFS Infrequent Access (EFS IA) after the appropriate number of days.
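A minimal boto3 sketch of option D, assuming placeholder subnet and security group IDs and a 30-day transition window:

```python
import boto3

efs = boto3.client("efs")

# Create the shared file system; the creation token makes the call idempotent.
fs = efs.create_file_system(CreationToken="nas-migration")
fs_id = fs["FileSystemId"]

# Transition files that have not been accessed for 30 days to EFS IA.
efs.put_lifecycle_configuration(
    FileSystemId=fs_id,
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)

# One mount target per subnet lets Linux instances in the VPC mount the
# file system over NFS (the security group must allow TCP 2049).
efs.create_mount_target(
    FileSystemId=fs_id,
    SubnetId="subnet-11111111",      # placeholder subnet ID
    SecurityGroups=["sg-22222222"],  # placeholder security group ID
)
```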
Question 1193
Exam Question
A web application must persist order data to Amazon S3 to support near-real time processing. A solutions architect needs to create an architecture that is both scalable and fault tolerant.
Which solutions meet these requirements? (Choose two.)
A. Write the order event to an Amazon DynamoDB table. Use DynamoDB Streams to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.
B. Write the order event to an Amazon Simple Queue Service (Amazon SQS) queue. Use the queue to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.
C. Write the order event to an Amazon Simple Notification Service (Amazon SNS) topic. Use the SNS topic to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.
D. Write the order event to an Amazon Simple Queue Service (Amazon SQS) queue. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.
E. Write the order event to an Amazon Simple Notification Service (Amazon SNS) topic. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.
Correct Answer
A. Write the order event to an Amazon DynamoDB table. Use DynamoDB Streams to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.
D. Write the order event to an Amazon Simple Queue Service (Amazon SQS) queue. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.
Explanation
The solutions that meet the requirements of being both scalable and fault-tolerant are:
A. Write the order event to an Amazon DynamoDB table. Use DynamoDB Streams to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.
D. Write the order event to an Amazon Simple Queue Service (Amazon SQS) queue. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.
A. By writing the order event to an Amazon DynamoDB table, the data is stored in a highly scalable and durable NoSQL database. DynamoDB Streams can be used to capture the order events and trigger an AWS Lambda function. The Lambda function can then parse the payload and write the data to Amazon S3. This solution is scalable as DynamoDB can handle high write rates and fault-tolerant as DynamoDB automatically replicates data across multiple Availability Zones.
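As a minimal sketch of the Lambda function behind option A, assuming the function is subscribed to the table's stream with the NEW_IMAGE view type and that the bucket and attribute names below are placeholders:

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "order-events-bucket"  # hypothetical bucket name

def handler(event, context):
    # Each record carries the stream payload for one table change.
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue  # only persist newly written orders
        new_image = record["dynamodb"]["NewImage"]  # DynamoDB-typed attributes
        order_id = new_image["orderId"]["S"]        # hypothetical key attribute
        s3.put_object(
            Bucket=BUCKET,
            Key=f"orders/{order_id}.json",
            Body=json.dumps(new_image).encode(),
        )
```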
D. By writing the order event to an Amazon SQS queue, the application can decouple the producer and consumer of the events. An Amazon EventBridge (CloudWatch Events) rule can be set up to trigger an AWS Lambda function when messages are available in the SQS queue. The Lambda function can then parse the payload and write the data to Amazon S3. This solution is scalable as SQS can handle large numbers of messages and fault-tolerant as messages are stored in the queue until they are processed.
Options B and C are not the best choices here. Although both Amazon SQS and Amazon SNS can invoke Lambda functions directly, Amazon SNS (option C) does not durably store messages, so repeated Lambda failures can result in lost order events, which weakens fault tolerance. Option B is functionally possible, since an SQS queue can trigger a Lambda function through an event source mapping, but it does not pair the durable queue with the explicit event routing that option D provides.
Option E is not a valid solution because an Amazon EventBridge (CloudWatch Events) rule cannot be triggered directly by messages published to an SNS topic. SNS would have to invoke the Lambda function itself, which is what option C already describes, so adding EventBridge introduces unnecessary complexity without providing a working trigger path.
Question 1194
Exam Question
A company plans to host a survey website on AWS. The company anticipates an unpredictable amount of traffic. This traffic results in asynchronous updates to the database. The company wants to ensure that writes to the database hosted on AWS do not get dropped.
How should the company write its application to handle these database requests?
A. Configure the application to publish to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the database to the SNS topic.
B. Configure the application to subscribe to an Amazon Simple Notification Service (Amazon SNS) topic. Publish the database updates to the SNS topic.
C. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues to queue the database connection until the database has resources to write the data.
D. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues for capturing the writes and draining the queue as each write is made to the database.
Correct Answer
D. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues for capturing the writes and draining the queue as each write is made to the database.
Explanation
To ensure that writes to the database hosted on AWS are not dropped in the event of unpredictable traffic and asynchronous updates, the company should use a queuing mechanism to decouple the application from the database. This allows the application to offload the database writes to a separate component that can handle the load.
The recommended approach for handling these database requests is:
D. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues for capturing the writes and draining the queue as each write is made to the database.
By using Amazon SQS FIFO queues, the application can send database write requests to the queue, which acts as a buffer between the application and the database. The queue ensures that the writes are not lost even during periods of high traffic or asynchronous updates. The database can then consume the queued requests at its own pace, ensuring that the writes are processed without being dropped. The FIFO nature of the SQS queue guarantees the ordering of messages, which is important for maintaining consistency in database updates.
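A minimal boto3 sketch of option D, with hypothetical queue, payload, and database-write names:

```python
import json
import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo"; content-based deduplication lets SQS
# drop accidental duplicate sends.
queue_url = sqs.create_queue(
    QueueName="survey-writes.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

# Producer: the web application enqueues each write instead of calling the
# database directly.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"survey_id": "s-123", "answer": "yes"}),
    MessageGroupId="survey-writes",  # ordering is preserved within a group
)

def write_to_database(record):
    """Placeholder for the actual database write."""
    print("writing", record)

# Consumer: drain the queue, deleting each message only after a successful write.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10,
                           WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    write_to_database(json.loads(msg["Body"]))
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```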
Option A is not the best choice because configuring the application to publish to an Amazon SNS topic and subscribing the database to the topic would introduce unnecessary complexity. SNS is typically used for pub/sub messaging, and while it can be used for decoupling, using SQS FIFO queues is a more appropriate choice for this scenario.
Option B is not suitable because configuring the application to subscribe to an SNS topic and publishing the database updates to the topic would not provide the necessary buffering and queuing functionality required to handle the unpredictable traffic and asynchronous updates.
Option C suggests using Amazon SQS FIFO queues to queue the database connection until the database has resources to write the data. However, SQS FIFO queues are designed to handle messages, not database connections. It is more appropriate to use SQS FIFO queues to handle database write requests and ensure their durability and processing.
Therefore, option D is the correct choice for handling database requests in a scalable and fault-tolerant manner, ensuring that writes are not dropped under unpredictable traffic conditions.
Question 1195
Exam Question
A company has a dynamic web application hosted on two Amazon EC2 instances. The company has its own SSL certificate, which is installed on each instance to perform SSL termination. There has been an increase in traffic recently, and the operations team determined that SSL encryption and decryption are causing the compute capacity of the web servers to reach their maximum limit.
What should a solutions architect do to increase the application performance?
A. Create a new SSL certificate using AWS Certificate Manager (ACM). Install the ACM certificate on each instance.
B. Create an Amazon S3 bucket. Migrate the SSL certificate to the S3 bucket. Configure the EC2 instances to reference the bucket for SSL termination.
C. Create another EC2 instance as a proxy server. Migrate the SSL certificate to the new instance and configure it to direct connections to the existing EC2 instances.
D. Import the SSL certificate into AWS Certificate Manager (ACM). Create an Application Load Balancer with an HTTPS listener that uses the SSL certificate from ACM.
Correct Answer
D. Import the SSL certificate into AWS Certificate Manager (ACM). Create an Application Load Balancer with an HTTPS listener that uses the SSL certificate from ACM.
Explanation
To increase the application performance and offload SSL encryption and decryption from the web servers, a solutions architect should consider using a load balancer with SSL termination.
The recommended solution is:
D. Import the SSL certificate into AWS Certificate Manager (ACM). Create an Application Load Balancer with an HTTPS listener that uses the SSL certificate from ACM.
By importing the SSL certificate into AWS Certificate Manager (ACM), you can centrally manage and store the SSL certificate securely. Then, by creating an Application Load Balancer (ALB) with an HTTPS listener, you can configure the ALB to handle the SSL termination. The ALB will handle the encryption and decryption of SSL traffic, relieving the EC2 instances from this compute-intensive task.
The SSL certificate from ACM can be associated with the HTTPS listener of the ALB, allowing the ALB to terminate SSL connections and forward the decrypted traffic to the EC2 instances over an internal network. This reduces the load on the EC2 instances and improves the overall performance of the application.
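A minimal boto3 sketch of option D, assuming the certificate files are on disk and that the load balancer and target group ARNs below are placeholders:

```python
import boto3

acm = boto3.client("acm")
elbv2 = boto3.client("elbv2")

# Import the company's existing certificate into ACM (PEM-encoded files).
cert_arn = acm.import_certificate(
    Certificate=open("server.crt", "rb").read(),
    PrivateKey=open("server.key", "rb").read(),
    CertificateChain=open("chain.pem", "rb").read(),
)["CertificateArn"]

# The HTTPS listener terminates TLS at the ALB and forwards decrypted traffic
# to the EC2 target group (both ARNs are placeholders).
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web/1234567890abcdef",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": cert_arn}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/1234567890abcdef",
    }],
)
```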
Option A suggests creating a new SSL certificate using AWS Certificate Manager (ACM) and installing it on each instance. While ACM can provide a convenient way to manage SSL certificates, installing the certificate on each instance would still require the instances to handle SSL termination, which does not address the issue of compute capacity reaching its limit.
Option B suggests migrating the SSL certificate to an Amazon S3 bucket and configuring the EC2 instances to reference the bucket for SSL termination. However, this approach is not suitable for SSL termination, as S3 is an object storage service and does not support SSL termination capabilities.
Option C suggests creating another EC2 instance as a proxy server and migrating the SSL certificate to the new instance for SSL termination. While this could potentially offload SSL termination from the existing EC2 instances, it introduces additional complexity and management overhead.
Therefore, option D is the recommended solution, as it leverages AWS Certificate Manager (ACM) to import the SSL certificate and offloads SSL termination to an Application Load Balancer (ALB), improving the performance of the web application.
Question 1196
Exam Question
A company that recently started using AWS establishes a Site-to-Site VPN between its on-premises datacenter and AWS. The company's security mandate states that traffic originating from on premises should stay within the company's private IP space when communicating with an Amazon Elastic Container Service (Amazon ECS) cluster that is hosting a sample web application.
Which solution meets this requirement?
A. Configure a gateway endpoint for Amazon ECS. Modify the route table to include an entry pointing to the ECS cluster.
B. Create a Network Load Balancer and AWS PrivateLink endpoint for Amazon ECS in the same VPC that is hosting the ECS cluster.
C. Create a Network Load Balancer in one VPC and an AWS PrivateLink endpoint for Amazon ECS in another VPC. Connect the two VPCs by using VPC peering.
D. Configure an Amazon Route 53 record with Amazon ECS as the target. Apply a server certificate to Route 53 from AWS Certificate Manager (ACM) for SSL offloading.
Correct Answer
B. Create a Network Load Balancer and AWS PrivateLink endpoint for Amazon ECS in the same VPC that is hosting the ECS cluster.
Explanation
To ensure that traffic originating from the on-premises datacenter stays within the company’s private IP space when communicating with an Amazon ECS cluster, the recommended solution is:
B. Create a Network Load Balancer and AWS PrivateLink endpoint for Amazon ECS in the same VPC that is hosting the ECS cluster.
Option A suggests configuring a gateway endpoint for Amazon ECS and modifying the route table to include an entry pointing to the ECS cluster. However, gateway endpoints exist only for Amazon S3 and Amazon DynamoDB; Amazon ECS is exposed through interface endpoints (AWS PrivateLink), so a gateway endpoint for ECS cannot be created.
Option C suggests creating a Network Load Balancer in one VPC and an AWS PrivateLink endpoint for Amazon ECS in another VPC, and connecting the two VPCs using VPC peering. While VPC peering can establish communication between VPCs, it does not ensure that the traffic originating from the on-premises datacenter stays within the private IP space.
Option D suggests configuring an Amazon Route 53 record with Amazon ECS as the target and applying a server certificate to Route 53 from AWS Certificate Manager (ACM) for SSL offloading. However, this solution does not address the requirement of keeping the traffic within the company’s private IP space.
Therefore, option B is the recommended solution. By creating a Network Load Balancer and AWS PrivateLink endpoint for Amazon ECS in the same VPC that is hosting the ECS cluster, traffic between the on-premises datacenter and the ECS cluster is routed entirely over private IP addresses. The Network Load Balancer provides load balancing, while the PrivateLink endpoint exposes the service at private IP addresses that on-premises clients can reach over the existing Site-to-Site VPN, so the traffic never needs to traverse the internet.
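One way to realize option B, sketched below with placeholder IDs, is to front the ECS tasks with an internal Network Load Balancer and publish it as a PrivateLink endpoint service; this is one interpretation of the option, not the only possible wiring:

```python
import boto3

elbv2 = boto3.client("elbv2")
ec2 = boto3.client("ec2")

# Internal NLB in the VPC that hosts the ECS cluster (all IDs are placeholders).
nlb_arn = elbv2.create_load_balancer(
    Name="ecs-web-nlb",
    Type="network",
    Scheme="internal",  # private IP addresses only, never internet-facing
    Subnets=["subnet-11111111", "subnet-22222222"],
)["LoadBalancers"][0]["LoadBalancerArn"]

# Publish the NLB as a PrivateLink endpoint service.
service_name = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[nlb_arn],
    AcceptanceRequired=False,
)["ServiceConfiguration"]["ServiceName"]

# An interface endpoint gives consumers private IP addresses for the service.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-33333333",
    ServiceName=service_name,
    SubnetIds=["subnet-44444444"],
)
```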
Question 1197
Exam Question
A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.
What should a solutions architect recommend to meet these requirements?
A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transaction data with other applications.
B. Stream the transaction data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.
C. Stream the transaction data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.
D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.
Correct Answer
C. Stream the transaction data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.
Explanation
To meet the requirements of sharing transaction details with other applications in a scalable and near-real-time manner, while also processing and storing the data with sensitive information removed, the recommended solution is:
C. Stream the transaction data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.
Option A suggests storing the transactions data in Amazon DynamoDB, setting up a rule to remove sensitive data upon write, and using DynamoDB Streams to share the data with other applications. While DynamoDB Streams can share data, DynamoDB has no built-in rule mechanism that strips sensitive fields from an item as it is written, so the premise of this option is flawed.
Option B suggests streaming the transaction data into Amazon Kinesis Data Firehose and using AWS Lambda integration to remove sensitive data. However, Kinesis Data Firehose buffers records before delivery and does not support Amazon DynamoDB as a delivery destination, and having other applications read the data from Amazon S3 does not provide near-real-time sharing.
Option D suggests storing batched transaction data in Amazon S3 as files, processing the files with AWS Lambda to remove sensitive data, and then storing the data in DynamoDB. While this option handles the processing and storage aspects, it is not a near-real-time solution for sharing the data with other applications.
Option C is the recommended solution as it leverages the capabilities of Amazon Kinesis Data Streams and AWS Lambda for scalable and near-real-time data processing and sharing. With this solution, the transaction data is streamed into Kinesis Data Streams, AWS Lambda is used to remove sensitive data from each transaction, and the processed data is stored in Amazon DynamoDB. Other applications can consume the transaction data directly from the Kinesis data stream, enabling near-real-time access to the data with sensitive information removed. This solution provides the scalability, near-real-time processing, and secure data sharing required by the company’s needs.
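A minimal sketch of option C, with hypothetical stream, table, and field names; the producer call and the Lambda consumer are shown together for brevity:

```python
import base64
import json
import boto3

kinesis = boto3.client("kinesis")
table = boto3.resource("dynamodb").Table("transactions")  # placeholder table

# Producer: the marketplace application streams each transaction.
kinesis.put_record(
    StreamName="transactions-stream",  # placeholder stream name
    Data=json.dumps({"txn_id": "t-1", "card_number": "4111111111111111",
                     "amount": 42}).encode(),
    PartitionKey="t-1",
)

# Consumer Lambda (attached to the stream via an event source mapping):
# strip sensitive fields, then store the sanitized record.
SENSITIVE_FIELDS = {"card_number", "cvv"}  # hypothetical field names

def handler(event, context):
    for record in event["Records"]:
        txn = json.loads(base64.b64decode(record["kinesis"]["data"]))
        sanitized = {k: v for k, v in txn.items() if k not in SENSITIVE_FIELDS}
        table.put_item(Item=sanitized)
```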
Question 1198
Exam Question
A solutions architect must analyze and update a company's existing IAM policies prior to deploying a new workload. The solutions architect created the following policy:
What is the net effect of this policy?
A. Users will be allowed all actions except s3:PutObject if multi-factor authentication (MFA) is enabled.
B. Users will be allowed all actions except s3:PutObject if multi-factor authentication (MFA) is not enabled.
C. Users will be denied all actions except s3:PutObject if multi-factor authentication (MFA) is enabled.
D. Users will be denied all actions except s3:PutObject if multi-factor authentication (MFA) is not enabled.
Correct Answer
D. Users will be denied all actions except s3:PutObject if multi-factor authentication (MFA) is not enabled.
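The policy document referenced in the question is not reproduced in this dump. A policy with the net effect described in answer D conventionally looks like the following hypothetical reconstruction (shown here as a Python dict for readability):

```python
import json

# Hypothetical reconstruction: a Deny that exempts only s3:PutObject and
# applies whenever MFA is absent from the request.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllExceptPutObjectWithoutMFA",
        "Effect": "Deny",
        "NotAction": "s3:PutObject",  # deny everything except s3:PutObject
        "Resource": "*",
        "Condition": {
            # true for requests made without multi-factor authentication
            "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
        },
    }],
}
print(json.dumps(policy, indent=2))
```

Because the Deny carries a NotAction and an MFA condition, users without MFA are denied everything except s3:PutObject, which matches answer D.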
Question 1199
Exam Question
A company has a three-tier environment on AWS that ingests sensor data from its users' devices. The traffic flows through a Network Load Balancer (NLB), then to Amazon EC2 instances for the web tier, and finally to EC2 instances for the application tier that makes database calls.
What should a solutions architect do to improve the security of data in transit to the web tier?
A. Configure a TLS listener and add the server certificate on the NLB.
B. Configure AWS Shield Advanced and enable AWS WAF on the NLB.
C. Change the load balancer to an Application Load Balancer and attach AWS WAF to it.
D. Encrypt the Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instances using AWS Key Management Service (AWS KMS).
Correct Answer
C. Change the load balancer to an Application Load Balancer and attach AWS WAF to it.
Explanation
To improve the security of data in transit to the web tier in a three-tier environment, a solutions architect should:
C. Change the load balancer to an Application Load Balancer and attach AWS WAF to it.
An Application Load Balancer (ALB) provides advanced application-level load balancing and allows for more fine-grained control over traffic routing. By using an ALB, the solutions architect can enable secure communication using HTTPS by configuring a TLS listener and adding the appropriate server certificate. This ensures that data transmitted between clients and the web tier is encrypted in transit, enhancing the security of the application.
Additionally, attaching AWS Web Application Firewall (WAF) to the ALB provides an extra layer of security by allowing the architect to define and enforce rules to protect the web tier from common web exploits and attacks.
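As a small illustration, assuming a web ACL already exists and using placeholder ARNs, attaching AWS WAF to the ALB is a single boto3 call:

```python
import boto3

wafv2 = boto3.client("wafv2")

# Both ARNs are placeholders; the web ACL must be in the REGIONAL scope to be
# attached to an Application Load Balancer.
wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:123456789012:regional/webacl/web/abcd1234",
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web/1234567890abcdef",
)
```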
Therefore, by changing the load balancer to an Application Load Balancer and attaching AWS WAF to it, the solutions architect can improve the security of data in transit to the web tier.
Question 1200
Exam Question
A company is running a multi-tier web application on premises. The web application is containerized and runs on a number of Linux hosts connected to a PostgreSQL database that contains user records. The operational overhead of maintaining the infrastructure and capacity planning is limiting the company's growth. A solutions architect must improve the application's infrastructure.
Which combination of actions should the solutions architect take to accomplish this? (Choose two.)
A. Migrate the PostgreSQL database to Amazon Aurora.
B. Migrate the web application to be hosted on Amazon EC2 instances.
C. Set up an Amazon CloudFront distribution for the web application content.
D. Set up Amazon ElastiCache between the web application and the PostgreSQL database.
E. Migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service (Amazon ECS).
Correct Answer
A. Migrate the PostgreSQL database to Amazon Aurora.
E. Migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service (Amazon ECS).
Explanation
To improve the infrastructure of the multi-tier web application, the solutions architect should take the following actions:
A. Migrate the PostgreSQL database to Amazon Aurora.
E. Migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service (Amazon ECS).
A. Migrating the PostgreSQL database to Amazon Aurora provides a highly available, scalable, and managed database solution. Aurora is compatible with PostgreSQL and offers benefits such as automatic backups, automated software patching, and replication across multiple Availability Zones.
E. Migrating the web application to be hosted on AWS Fargate with Amazon ECS eliminates the operational overhead of managing and scaling the underlying infrastructure. Fargate allows you to run containers without having to provision or manage the underlying EC2 instances. It provides serverless compute for containers, enabling automatic scaling and high availability.
By migrating the PostgreSQL database to Amazon Aurora and the web application to AWS Fargate with Amazon ECS, the company can offload the infrastructure management and capacity planning tasks to AWS, allowing for easier scalability and improved performance of the application.
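A minimal boto3 sketch of actions A and E together, with placeholder identifiers throughout:

```python
import boto3

# Action A: managed Aurora PostgreSQL cluster for the user records.
boto3.client("rds").create_db_cluster(
    DBClusterIdentifier="users-aurora",
    Engine="aurora-postgresql",
    MasterUsername="dbadmin",
    MasterUserPassword="change-me",  # placeholder credential
)

# Action E: run the containerized web tier on Fargate, so there are no hosts
# to patch or capacity-plan (cluster, task definition, and network IDs are
# placeholders).
boto3.client("ecs").create_service(
    cluster="web-cluster",
    serviceName="web",
    taskDefinition="web-task:1",
    desiredCount=3,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-11111111"],
            "securityGroups": ["sg-22222222"],
            "assignPublicIp": "DISABLED",
        }
    },
)
```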
The other options (B, C, D) are not the most suitable choices for improving the infrastructure in this scenario. Migrating the web application to EC2 instances (B) would still require managing and scaling the underlying infrastructure. Setting up an Amazon CloudFront distribution (C) would improve content delivery, but it does not address the underlying infrastructure and database. Setting up Amazon ElastiCache (D) would improve performance but does not address the operational overhead and capacity planning challenges mentioned in the scenario.