The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free below. They can help you prepare for the SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.
Table of Contents
- Question 881
- Exam Question
- Correct Answer
- Explanation
- Reference
- Question 882
- Exam Question
- Correct Answer
- Explanation
- Question 883
- Exam Question
- Correct Answer
- Explanation
- Reference
- Question 884
- Exam Question
- Correct Answer
- Explanation
- Question 885
- Exam Question
- Correct Answer
- Explanation
- Question 886
- Exam Question
- Correct Answer
- Explanation
- Question 887
- Exam Question
- Correct Answer
- Explanation
- Reference
- Question 888
- Exam Question
- Correct Answer
- Explanation
- Question 889
- Exam Question
- Correct Answer
- Explanation
- Question 890
- Exam Question
- Correct Answer
- Explanation
Question 881
Exam Question
A company used an AWS Direct Connect connection to copy 1 PB of data from a colocation facility to an Amazon S3 bucket in the us-east-1 Region. The company now wants to copy the data to another S3 bucket in the us-west-2 Region.
Which solution will meet this requirement?
A. Use an AWS Snowball Edge Storage Optimized device to copy the data from the colocation facility to us-west-2.
B. Use the S3 console to copy the data from the source S3 bucket to the target S3 bucket.
C. Use S3 Transfer Acceleration and the S3 copy-object command to copy the data from the source S3 bucket to the target S3 bucket.
D. Add an S3 Cross-Region Replication configuration to copy the data from the source S3 bucket to the target S3 bucket.
Correct Answer
D. Add an S3 Cross-Region Replication configuration to copy the data from the source S3 bucket to the target S3 bucket.
Explanation
Option D is the most suitable solution for copying the data from the source S3 bucket in the us-east-1 Region to the target S3 bucket in the us-west-2 Region. S3 Cross-Region Replication allows for automatic replication of objects across different AWS regions. By configuring cross-region replication, the company can ensure that any new or updated objects in the source bucket are automatically copied to the target bucket in the desired region. This solution leverages AWS infrastructure and eliminates the need for manual intervention or additional data transfer devices.
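The replication setup described above can be sketched as the configuration document that would be passed to S3's `put_bucket_replication` call. This is a minimal illustration, not the company's actual setup: the role ARN and bucket names are hypothetical placeholders.

```python
# Sketch of an S3 Cross-Region Replication configuration, in the shape
# accepted by put_bucket_replication. The IAM role ARN and bucket names
# below are hypothetical placeholders.

def build_replication_config(role_arn: str, destination_bucket: str) -> dict:
    """Build a replication configuration that copies every object
    from the source bucket to the destination bucket."""
    return {
        "Role": role_arn,
        "Rules": [
            {
                "ID": "replicate-to-us-west-2",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # empty filter -> rule applies to all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": f"arn:aws:s3:::{destination_bucket}"},
            }
        ],
    }

config = build_replication_config(
    "arn:aws:iam::123456789012:role/s3-replication-role",  # hypothetical role
    "example-target-bucket-us-west-2",                     # hypothetical bucket
)
print(config["Rules"][0]["Destination"]["Bucket"])
```

Note that versioning must be enabled on both buckets before this configuration is accepted.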
Reference
How can I copy all objects from one Amazon S3 bucket to another bucket?
Question 882
Exam Question
A company is running a global application. The application’s users submit multiple videos that are then merged into a single video file. The application uses a single Amazon S3 bucket in the us-east-1 Region to receive uploads from users. The same S3 bucket provides the download location of the single video file that is produced. The final video file output has an average size of 250 GB. The company needs to develop a solution that delivers faster uploads and downloads of the video files that are stored in Amazon S3. The company will offer the solution as a subscription to users who want to pay for the increased speed.
What should a solutions architect do to meet these requirements?
A. Enable AWS Global Accelerator for the S3 endpoint. Adjust the application’s upload and download links to use the Global Accelerator S3 endpoint for users who have a subscription.
B. Enable S3 Cross-Region Replication to S3 buckets in all other AWS Regions. Use an Amazon Route 53 geolocation routing policy to route S3 requests based on the location of users who have a subscription.
C. Create an Amazon CloudFront distribution and use the S3 bucket in us-east-1 as an origin. Adjust the application to use the CloudFront URL as the upload and download links for users who have a subscription.
D. Enable S3 Transfer Acceleration for the S3 bucket in us-east-1. Configure the application to use the bucket’s S3-accelerate endpoint domain name for the upload and download links for users who have a subscription.
Correct Answer
D. Enable S3 Transfer Acceleration for the S3 bucket in us-east-1. Configure the application to use the bucket’s S3-accelerate endpoint domain name for the upload and download links for users who have a subscription.
Explanation
To achieve faster uploads and downloads of video files stored in Amazon S3, enabling S3 Transfer Acceleration is the recommended solution. S3 Transfer Acceleration takes advantage of Amazon’s globally distributed network of edge locations to optimize data transfers. By enabling S3 Transfer Acceleration for the S3 bucket in the us-east-1 Region, the company can leverage the faster upload and download speeds provided by this feature.
To implement this solution, the solutions architect should configure the application to use the S3-accelerate endpoint domain name for the upload and download links for users who have a subscription. This ensures that users with a subscription can benefit from the accelerated transfer speeds.
Option A is not the best choice because AWS Global Accelerator is typically used to improve the performance and availability of applications that are hosted on EC2 instances or Elastic IP addresses, rather than S3 buckets.
Option B is not the most efficient solution as it involves replicating the data across multiple regions, which can incur additional costs and complexity without directly addressing the requirement for faster uploads and downloads.
Option C is also not the most suitable solution because while CloudFront can improve the delivery of static and dynamic content, it may not provide the same level of performance improvement for uploads as S3 Transfer Acceleration.
Therefore, option D is the most appropriate solution for achieving faster uploads and downloads of video files stored in Amazon S3.
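To make the endpoint change concrete, the sketch below shows how the application's links would differ for subscribed users: only the endpoint domain changes, not the bucket or key. The bucket name is a hypothetical placeholder.

```python
# Helper contrasting a standard S3 URL with the S3 Transfer Acceleration URL.
# The only difference for accelerated transfers is the s3-accelerate endpoint.
# The bucket name below is a hypothetical placeholder.

def object_url(bucket: str, key: str, accelerated: bool) -> str:
    """Return a virtual-hosted-style URL for an S3 object, using the
    s3-accelerate endpoint for users who have a subscription."""
    endpoint = (
        "s3-accelerate.amazonaws.com"
        if accelerated
        else "s3.us-east-1.amazonaws.com"
    )
    return f"https://{bucket}.{endpoint}/{key}"

standard = object_url("example-video-bucket", "final/video.mp4", accelerated=False)
fast = object_url("example-video-bucket", "final/video.mp4", accelerated=True)
print(standard)
print(fast)
```

Transfer Acceleration must first be enabled on the bucket itself before the accelerate endpoint will serve requests.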
Question 883
Exam Question
A company has a website hosted on AWS. The website is behind an Application Load Balancer (ALB) that is configured to handle HTTP and HTTPS separately. The company wants to forward all requests to the website so that the requests will use HTTPS.
What should a solutions architect do to meet this requirement?
A. Update the ALB’s network ACL to accept only HTTPS traffic.
B. Create a rule that replaces the HTTP in the URL with HTTPS.
C. Create a listener rule on the ALB to redirect HTTP traffic to HTTPS.
D. Replace the ALB with a Network Load Balancer configured to use Server Name Indication (SNI).
Correct Answer
C. Create a listener rule on the ALB to redirect HTTP traffic to HTTPS.
Explanation
To ensure that all requests to the website use HTTPS, the solutions architect should create a listener rule on the ALB to redirect HTTP traffic to HTTPS. This can be achieved by configuring a rule on the ALB listener that captures incoming HTTP requests and redirects them to the corresponding HTTPS URL.
Option A is not the correct solution as updating the ALB’s network ACL to accept only HTTPS traffic would prevent HTTP traffic from reaching the ALB altogether, rather than redirecting it to HTTPS.
Option B is also not the correct solution as simply replacing the HTTP in the URL with HTTPS would not enforce the use of HTTPS for the requests. It would only modify the URL, but the requests could still be served over HTTP.
Option D is not necessary in this scenario as the ALB is already configured to handle HTTP and HTTPS traffic separately. Switching to a Network Load Balancer configured with Server Name Indication (SNI) would not provide any additional benefit in terms of enforcing HTTPS for all requests.
Therefore, option C is the appropriate solution for ensuring that all requests to the website are forwarded to HTTPS.
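The listener rule in option C can be sketched as the parameters an elbv2 `CreateListener` call would take for the port 80 listener: a default `redirect` action that preserves the host, path, and query while switching the protocol. This is an illustrative shape, not the company's actual configuration.

```python
# Sketch of an HTTP (port 80) listener whose default action issues a
# permanent (HTTP 301) redirect to HTTPS, in the shape of the elbv2
# CreateListener API. The #{...} tokens tell the ALB to reuse the
# original host, path, and query string.

redirect_listener = {
    "Protocol": "HTTP",
    "Port": 80,
    "DefaultActions": [
        {
            "Type": "redirect",
            "RedirectConfig": {
                "Protocol": "HTTPS",
                "Port": "443",
                "Host": "#{host}",    # keep the original host
                "Path": "/#{path}",   # keep the original path
                "Query": "#{query}",  # keep the original query string
                "StatusCode": "HTTP_301",
            },
        }
    ],
}
print(redirect_listener["DefaultActions"][0]["RedirectConfig"]["StatusCode"])
```

The existing HTTPS (port 443) listener continues to serve the actual traffic; the port 80 listener only redirects.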
Reference
How can I redirect HTTP requests to HTTPS using an Application Load Balancer?
Question 884
Exam Question
A company is deploying an application that processes streaming data in near-real time. The company plans to use Amazon EC2 instances for the workload. The network architecture must be configurable to provide the lowest possible latency between nodes.
Which combination of network solutions will meet these requirements? (Choose two.)
A. Enable and configure enhanced networking on each EC2 instance.
B. Group the EC2 instances in separate accounts.
C. Run the EC2 instances in a cluster placement group.
D. Attach multiple elastic network interfaces to each EC2 instance.
E. Use Amazon Elastic Block Store (Amazon EBS) optimized instance types.
Correct Answer
A. Enable and configure enhanced networking on each EC2 instance.
C. Run the EC2 instances in a cluster placement group.
Explanation
To provide the lowest possible latency between nodes for processing streaming data in near-real time on Amazon EC2 instances, the following network solutions can be used:
A. Enable and configure enhanced networking on each EC2 instance. Enhanced networking uses single root I/O virtualization (SR-IOV) to provide higher packet per second (PPS) performance and lower network latency.
C. Run the EC2 instances in a cluster placement group. A cluster placement group ensures that the instances are placed in close proximity to each other within a single Availability Zone, reducing the network latency between instances.
Therefore, options A and C provide the combination of network solutions that will meet the requirements for low-latency network architecture.
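The two measures can be sketched as the EC2 API parameters involved: a placement group with the `cluster` strategy, and an ENA-capable instance type launched into that group. The group name, AMI ID, and instance type below are hypothetical placeholders.

```python
# Sketch of the EC2 API parameters for the two low-latency measures:
# a cluster placement group, and instances of an ENA-enabled type
# launched into it. All identifiers are hypothetical placeholders.

placement_group = {
    "GroupName": "streaming-cluster",  # hypothetical name
    "Strategy": "cluster",             # pack instances close together in one AZ
}

run_instances = {
    "ImageId": "ami-0123456789abcdef0",  # hypothetical AMI
    "InstanceType": "c5n.9xlarge",       # ENA-enabled, network-optimized type
    "MinCount": 4,
    "MaxCount": 4,
    "Placement": {"GroupName": placement_group["GroupName"]},
}
print(run_instances["Placement"]["GroupName"])
```

Current-generation Nitro instance types have enhanced networking (ENA) enabled by default, so choosing such a type is usually all that option A requires.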
Question 885
Exam Question
An image-hosting company stores its objects in Amazon S3 buckets. The company wants to avoid accidental exposure of the objects in the S3 buckets to the public. All S3 objects in the entire AWS account need to remain private.
Which solution will meet these requirements?
A. Use Amazon GuardDuty to monitor S3 bucket policies. Create an automatic remediation action rule that uses an AWS Lambda function to remediate any change that makes the objects public.
B. Use AWS Trusted Advisor to find publicly accessible S3 buckets. Configure email notifications in Trusted Advisor when a change is detected. Manually change the S3 bucket policy if it allows public access.
C. Use AWS Resource Access Manager to find publicly accessible S3 buckets. Use Amazon Simple Notification Service (Amazon SNS) to invoke an AWS Lambda function when a change is detected. Deploy a Lambda function that programmatically remediates the change.
D. Use the S3 Block Public Access feature on the account level. Use AWS Organizations to create a service control policy (SCP) that prevents IAM users from changing the setting. Apply the SCP to the account.
Correct Answer
D. Use the S3 Block Public Access feature on the account level. Use AWS Organizations to create a service control policy (SCP) that prevents IAM users from changing the setting. Apply the SCP to the account.
Explanation
To ensure that all S3 objects in the entire AWS account remain private and avoid accidental exposure, the following solution can be used:
D. Use the S3 Block Public Access feature on the account level. Enabling this feature ensures that no S3 bucket policy or access control list (ACL) can override the block, providing an added layer of protection that prevents public access to S3 objects across every bucket in the account.
Additionally, using AWS Organizations, you can create a service control policy (SCP) that prevents IAM users from changing the S3 Block Public Access setting. By applying this SCP to the account, you ensure that the setting remains in place and cannot be modified by IAM users.
Therefore, option D provides the solution that will meet the requirements of keeping all S3 objects in the AWS account private.
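The SCP half of this answer can be sketched as a deny policy on the action that changes the account-level setting. This is a minimal illustration; the statement ID is arbitrary, and the policy would be attached to the account through AWS Organizations.

```python
# Sketch of a service control policy (SCP) that denies any change to the
# account-level S3 Block Public Access setting. The Sid is arbitrary.
import json

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyPublicAccessBlockChanges",
            "Effect": "Deny",
            # The action that modifies the account-level Block Public
            # Access configuration.
            "Action": "s3:PutAccountPublicAccessBlock",
            "Resource": "*",
        }
    ],
}
print(json.dumps(scp, indent=2))
```

With this SCP applied, even IAM principals with administrator permissions in the member account cannot turn Block Public Access off.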
Question 886
Exam Question
A company has two VPCs that are located in the us-west-2 Region within the same AWS account. The company needs to allow network traffic between these VPCs. Approximately 500 GB of data transfer will occur between the VPCs each month.
What is the MOST cost-effective solution to connect these VPCs?
A. Implement AWS Transit Gateway to connect the VPCs. Update the route tables of each VPC to use the transit gateway for inter-VPC communication.
B. Implement an AWS Site-to-Site VPN tunnel between the VPCs. Update the route tables of each VPC to use the VPN tunnel for inter-VPC communication.
C. Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use the VPC peering connection for inter-VPC communication.
D. Set up a 1 GB AWS Direct Connect connection between the VPCs. Update the route tables of each VPC to use the Direct Connect connection for inter-VPC communication.
Correct Answer
C. Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use the VPC peering connection for inter-VPC communication.
Explanation
The most cost-effective solution to connect the two VPCs in the same AWS account and allow network traffic between them, considering the estimated data transfer volume, would be:
C. Set up a VPC peering connection between the VPCs and update the route tables of each VPC to use the VPC peering connection for inter-VPC communication.
VPC peering allows direct network connectivity between VPCs using private IP addresses, without the need for additional networking resources. It is a cost-effective option for connecting VPCs within the same AWS account. Because the VPCs are in the same Region, there is no per-GB peering charge on the data transfer (standard cross-Availability Zone transfer rates still apply when the traffic crosses AZs), which keeps the monthly cost of 500 GB low compared with the alternatives.
AWS Transit Gateway (option A) is designed for connecting multiple VPCs and networks, which may be more suitable for complex network architectures involving multiple AWS accounts or connecting to on-premises resources. However, for the given scenario of two VPCs in the same account, VPC peering is the simpler and more cost-effective choice.
AWS Site-to-Site VPN (option B) and AWS Direct Connect (option D) are typically used for connecting VPCs to on-premises networks or remote sites. They involve additional networking resources and can have associated costs, making them less cost-effective for inter-VPC communication within the same account and region.
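The setup in option C amounts to two API-level steps: request the peering connection, then add a route in each VPC's route table pointing at it. The sketch below uses hypothetical VPC, route table, peering connection IDs, and CIDR blocks.

```python
# Sketch of the API-level steps for same-account, same-Region VPC peering.
# All IDs and CIDR blocks below are hypothetical placeholders.

# Step 1: request the peering connection (CreateVpcPeeringConnection).
peering_request = {
    "VpcId": "vpc-1111aaaa",      # requester VPC
    "PeerVpcId": "vpc-2222bbbb",  # accepter VPC
}

# Step 2: in each VPC's route table, route the peer's CIDR through the
# peering connection (CreateRoute). Shown here for VPC 1 only; VPC 2
# needs the mirror-image route back to VPC 1's CIDR.
route_in_vpc_1 = {
    "RouteTableId": "rtb-1111aaaa",
    "DestinationCidrBlock": "10.2.0.0/16",  # the peer VPC's CIDR
    "VpcPeeringConnectionId": "pcx-0000cccc",
}
print(route_in_vpc_1["DestinationCidrBlock"])
```

Security groups on the instances must also allow the peer VPC's CIDR; peering only provides the network path.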
Question 887
Exam Question
A company is running a media store across multiple Amazon EC2 instances distributed across multiple Availability Zones in a single VPC. The company wants a high-performing solution to share data between all the EC2 instances, and prefers to keep the data within the VPC only.
What should a solutions architect recommend?
A. Create an Amazon S3 bucket and call the service APIs from each instance’s application.
B. Create an Amazon S3 bucket and configure all instances to access it as a mounted volume.
C. Configure an Amazon Elastic Block Store (Amazon EBS) volume and mount it across all instances.
D. Configure an Amazon Elastic File System (Amazon EFS) file system and mount it across all instances.
Correct Answer
D. Configure an Amazon Elastic File System (Amazon EFS) file system and mount it across all instances.
Explanation
A solutions architect should recommend:
D. Configure an Amazon Elastic File System (Amazon EFS) file system and mount it across all instances.
Amazon EFS is a fully managed, scalable, and highly available file storage service that is designed to provide shared access to files across multiple EC2 instances. It is suitable for scenarios where multiple instances need concurrent access to shared data within a VPC.
By configuring an Amazon EFS file system and mounting it across all instances, the company can achieve high-performance file sharing without the need for additional data replication or synchronization. Amazon EFS provides low-latency performance and is designed to scale automatically to handle the workload demands of multiple instances.
Options A and B suggest using Amazon S3, which is an object storage service and not designed for file sharing among instances. While it is possible to access S3 from EC2 instances using service APIs or as a mounted volume, it may not provide the same level of performance and flexibility as Amazon EFS for concurrent file access.
Option C suggests using Amazon Elastic Block Store (Amazon EBS) volumes, but they are designed for block-level storage and cannot be concurrently mounted across multiple instances. Each Amazon EBS volume is typically attached to a single EC2 instance.
Therefore, for high-performing shared file storage within the VPC, Amazon EFS is the recommended solution.
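One detail that makes EFS easy to share across all instances is its Region-wide DNS name: every instance in the VPC mounts the same name. The helper below sketches that name and an example NFS mount command; the file-system ID is a hypothetical placeholder.

```python
# Helper showing the DNS name used to mount an EFS file system from any
# instance in the VPC. The file-system ID below is a hypothetical
# placeholder; the mount itself would use amazon-efs-utils or an NFS client.

def efs_dns_name(file_system_id: str, region: str) -> str:
    """Return the mount-target DNS name for an EFS file system."""
    return f"{file_system_id}.efs.{region}.amazonaws.com"

dns = efs_dns_name("fs-0123456789abcdef0", "us-west-2")

# Example mount command each instance would run (assumes an NFS v4.1 client
# and that /mnt/efs already exists):
mount_cmd = f"sudo mount -t nfs4 -o nfsvers=4.1 {dns}:/ /mnt/efs"
print(mount_cmd)
```

Because every instance mounts the same file system, writes from one instance are visible to the others without any replication step.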
Reference
AWS > Documentation > Amazon Elastic File System (EFS) > User Guide > Step 3: Mount the file system on the EC2 instance and test
Question 888
Exam Question
A company’s HTTP application is behind a Network Load Balancer (NLB). The NLB’s target group is configured to use an Amazon EC2 Auto Scaling group with multiple EC2 instances that run the web service. The company notices that the NLB is not detecting HTTP errors for the application. These errors require a manual restart of the EC2 instances that run the web service. The company needs to improve the application’s availability without writing custom scripts or code.
What should a solutions architect do to meet these requirements?
A. Enable HTTP health checks on the NLB, supplying the URL of the company’s application.
B. Add a cron job to the EC2 instances to check the local application’s logs once each minute. If HTTP errors are detected, the application will restart.
C. Replace the NLB with an Application Load Balancer. Enable HTTP health checks by supplying the URL of the company’s application. Configure an Auto Scaling action to replace unhealthy instances.
D. Create an Amazon CloudWatch alarm that monitors the UnhealthyHostCount metric for the NLB. Configure an Auto Scaling action to replace unhealthy instances when the alarm is in the ALARM state.
Correct Answer
C. Replace the NLB with an Application Load Balancer. Enable HTTP health checks by supplying the URL of the company’s application. Configure an Auto Scaling action to replace unhealthy instances.
Explanation
To improve the application’s availability without writing custom scripts or code, a solutions architect should:
C. Replace the NLB with an Application Load Balancer. Enable HTTP health checks by supplying the URL of the company’s application. Configure an Auto Scaling action to replace unhealthy instances.
The Network Load Balancer (NLB) does not perform HTTP health checks by default. It relies on TCP health checks to determine the health of the targets. Since the company’s application requires manual restart of the EC2 instances in case of HTTP errors, enabling HTTP health checks on the NLB will not address the issue.
Instead, using an Application Load Balancer (ALB) is a more suitable solution. The ALB supports HTTP health checks, allowing it to monitor the health of the application by sending HTTP requests to a specific URL. By replacing the NLB with an ALB, and configuring the ALB to perform HTTP health checks on the application’s URL, the ALB will be able to detect HTTP errors and mark the instances as unhealthy.
Additionally, by configuring an Auto Scaling action to replace unhealthy instances, the Auto Scaling group associated with the ALB will automatically replace the instances that fail the health checks, improving the application’s availability without requiring manual intervention.
Therefore, replacing the NLB with an ALB, enabling HTTP health checks, and configuring Auto Scaling to replace unhealthy instances is the recommended solution in this scenario.
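The two configuration pieces in option C can be sketched as the ALB target-group health check parameters and the Auto Scaling group update that ties replacement to them. Names and the health check path below are hypothetical placeholders.

```python
# Sketch of (1) target-group parameters that give the ALB an HTTP health
# check against the application's URL, and (2) the Auto Scaling group
# setting that replaces instances the ALB marks unhealthy.
# All names and the path are hypothetical placeholders.

target_group = {
    "Name": "web-service-tg",
    "Protocol": "HTTP",
    "Port": 80,
    "VpcId": "vpc-1111aaaa",
    "HealthCheckProtocol": "HTTP",
    "HealthCheckPath": "/health",       # the application's health-check URL
    "Matcher": {"HttpCode": "200"},     # anything else counts as a failure
    "UnhealthyThresholdCount": 2,
}

# Switch the Auto Scaling group's health checks from "EC2" to "ELB" so a
# failed ALB health check triggers automatic instance replacement.
asg_update = {
    "AutoScalingGroupName": "web-service-asg",
    "HealthCheckType": "ELB",
    "HealthCheckGracePeriod": 300,  # seconds to wait after launch
}
print(asg_update["HealthCheckType"])
```

With `HealthCheckType` set to `ELB`, no custom script is needed: the Auto Scaling group terminates and relaunches any instance that fails the ALB's HTTP check.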
Question 889
Exam Question
A company is deploying an application that processes large quantities of data in batches as needed. The company plans to use Amazon EC2 instances for the workload. The network architecture must support a highly scalable solution and prevent groups of nodes from sharing the same underlying hardware.
Which combination of network solutions will meet these requirements? (Choose two.)
A. Create Capacity Reservations for the EC2 instances to run in a placement group.
B. Run the EC2 instances in a spread placement group.
C. Run the EC2 instances in a cluster placement group.
D. Place the EC2 instances in an EC2 Auto Scaling group.
E. Run the EC2 instances in a partition placement group.
Correct Answer
B. Run the EC2 instances in a spread placement group.
D. Place the EC2 instances in an EC2 Auto Scaling group.
Explanation
To meet the requirements of a highly scalable solution while preventing groups of nodes from sharing the same underlying hardware, the recommended combination of network solutions is:
B. Run the EC2 instances in a spread placement group.
D. Place the EC2 instances in an EC2 Auto Scaling group.
Running the EC2 instances in a spread placement group ensures that instances are placed on different underlying hardware within the same Availability Zone. This helps improve fault tolerance and reduces the likelihood of simultaneous failures.
Placing the EC2 instances in an EC2 Auto Scaling group allows for automatic scaling of the instances based on demand. Auto Scaling helps maintain the desired number of instances and automatically adds or removes instances as needed. It provides the scalability required for processing large quantities of data in batches.
Therefore, utilizing a spread placement group and an EC2 Auto Scaling group will provide a highly scalable solution while ensuring instances are spread across different underlying hardware.
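The combination above can be sketched as the placement-group strategy plus the Auto Scaling group parameter that references it. The names are hypothetical placeholders; note that a spread placement group limits you to seven running instances per Availability Zone.

```python
# Sketch contrasting the "spread" strategy (each instance on distinct
# underlying hardware, max 7 running instances per AZ) with "cluster"
# (instances packed together). Names below are hypothetical placeholders.

spread_group = {
    "GroupName": "batch-spread",
    "Strategy": "spread",  # distinct hardware -> no shared-host failures
}

# Auto Scaling group that launches its instances into the spread group.
asg = {
    "AutoScalingGroupName": "batch-asg",
    "MinSize": 2,
    "MaxSize": 7,  # stay within the spread group's per-AZ limit
    "PlacementGroup": spread_group["GroupName"],
}
print(asg["PlacementGroup"])
```

If the workload needed more than seven instances per AZ while still isolating groups of nodes, a partition placement group (option E) would be the alternative to weigh.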
Question 890
Exam Question
A company is migrating a large, mission-critical database to AWS. A solutions architect has decided to use an Amazon RDS for MySQL Multi-AZ DB instance that is deployed with 80,000 Provisioned IOPS for storage. The solutions architect is using AWS Database Migration Service (AWS DMS) to perform the data migration. The migration is taking longer than expected, and the company wants to speed up the process. The company’s network team has ruled out bandwidth as a limiting factor.
Which actions should the solutions architect take to speed up the migration? (Choose two.)
A. Disable Multi-AZ on the target DB instance.
B. Create a new DMS instance that has a larger instance size.
C. Turn off logging on the target DB instance until the initial load is complete.
D. Restart the DMS task on a new DMS instance with transfer acceleration enabled.
E. Change the storage type on the target DB instance to Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2).
Correct Answer
B. Create a new DMS instance that has a larger instance size.
E. Change the storage type on the target DB instance to Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2).
Explanation
To speed up the migration process for the large, mission-critical database using AWS Database Migration Service (AWS DMS) and an Amazon RDS for MySQL Multi-AZ DB instance, the following actions should be taken:
B. Create a new DMS instance that has a larger instance size: By using a larger DMS instance, more resources are allocated to the migration task, which can lead to faster data transfer and migration.
E. Change the storage type on the target DB instance to Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2): Provisioned IOPS storage may have a maximum performance limit, and changing to General Purpose SSD storage can improve the migration speed as it provides burst performance and can adapt to the workload.
Therefore, creating a new DMS instance with a larger instance size and changing the storage type on the target DB instance to Amazon EBS General Purpose SSD (gp2) will help speed up the migration process.
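At the API level, the two actions amount to provisioning a larger DMS replication instance and modifying the target DB instance's storage type. The sketch below uses hypothetical identifiers and an illustrative instance class.

```python
# Sketch of the two levers at the API level: a larger DMS replication
# instance, and a storage-type change on the target RDS instance.
# All identifiers below are hypothetical placeholders.

# Larger replication instance class gives DMS more CPU and memory for
# the full-load phase (CreateReplicationInstance).
dms_replication_instance = {
    "ReplicationInstanceIdentifier": "migration-repl-1",
    "ReplicationInstanceClass": "dms.c5.4xlarge",  # illustrative larger class
}

# Storage-type change on the target DB instance (ModifyDBInstance).
rds_modification = {
    "DBInstanceIdentifier": "target-mysql-db",
    "StorageType": "gp2",
    "ApplyImmediately": True,
}
print(rds_modification["StorageType"])
```

After switching the existing DMS task to the larger replication instance, the task would be restarted so the full load runs with the additional resources.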