The latest AWS Certified Solutions Architect – Professional (SAP-C02) practice exam questions and answers (Q&A) are available free. They can help you prepare for the AWS Certified Solutions Architect – Professional SAP-C02 exam and earn the AWS Certified Solutions Architect – Professional certification.
Table of Contents
- Question 711
- Exam Question
- Correct Answer
- Explanation
- Question 712
- Exam Question
- Correct Answer
- Explanation
- Reference
- Question 713
- Exam Question
- Correct Answer
- Explanation
- Reference
- Question 714
- Exam Question
- Correct Answer
- Question 715
- Exam Question
- Correct Answer
- Explanation
- Reference
- Question 716
- Exam Question
- Correct Answer
- Question 717
- Exam Question
- Correct Answer
- Explanation
- Reference
- Question 718
- Exam Question
- Correct Answer
- Question 719
- Exam Question
- Correct Answer
- Question 720
- Exam Question
- Correct Answer
- Explanation
- Reference
Question 711
Exam Question
You would like to create a mirror image of your production environment in another region for disaster recovery purposes.
Which of the following AWS resources do not need to be recreated in the second region? (Choose 2 answers)
A. Elastic IP Addresses (EIP)
B. EC2 Key Pairs
C. Route 53 Record Sets
D. Launch configurations
E. Security Groups
F. IAM Roles
Correct Answer
C. Route 53 Record Sets
F. IAM Roles
Explanation
Route 53 record sets and IAM roles are global rather than Region-specific, so they do not need to be recreated in the second Region. Elastic IP addresses, by contrast, are scoped to a Region, so new addresses must be allocated in the DR Region rather than reusing the existing ones. Elastic IP addresses are static IP addresses designed for dynamic cloud computing. Unlike traditional static IP addresses, Elastic IP addresses enable you to mask instance or Availability Zone failures by programmatically remapping your public IP addresses to instances in your account in a particular Region. For DR, you can pre-allocate some IP addresses for the most critical systems so that their IP addresses are already known before disaster strikes, which simplifies execution of the DR plan.
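As a hedged illustration (the Region, tags, and system names are assumptions, not part of the question), pre-allocating Elastic IP addresses in the DR Region with boto3 might look like:

```python
import boto3

# Assumed DR Region; replace with the Region used for the mirror environment.
DR_REGION = "us-west-2"

ec2 = boto3.client("ec2", region_name=DR_REGION)

# Pre-allocate Elastic IP addresses for the most critical systems so their
# public IPs are known before a disaster occurs.
for system in ["web-frontend", "api-gateway"]:
    address = ec2.allocate_address(Domain="vpc")
    ec2.create_tags(
        Resources=[address["AllocationId"]],
        Tags=[{"Key": "dr-role", "Value": system}],
    )
    print(system, address["PublicIp"])
```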
Question 712
Exam Question
A company has a website that runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are in an Auto Scaling group. The ALB is associated with an AWS WAF web ACL.
The website often encounters attacks in the application layer. The attacks produce sudden and significant increases in traffic on the application server. The access logs show that each attack originates from different IP addresses. A solutions architect needs to implement a solution to mitigate these attacks.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an Amazon CloudWatch alarm that monitors server access. Set a threshold based on access by IP address. Configure an alarm action that adds the IP address to the web ACL’s deny list.
B. Deploy AWS Shield Advanced in addition to AWS WAF. Add the ALB as a protected resource.
C. Create an Amazon CloudWatch alarm that monitors user IP addresses. Set a threshold based on access by IP address. Configure the alarm to invoke an AWS Lambda function to add a deny rule in the application server’s subnet route table for any IP addresses that activate the alarm.
D. Inspect access logs to find a pattern of IP addresses that launched the attacks. Use an Amazon Route 53 geolocation routing policy to deny traffic from the countries that host those IP addresses.
Correct Answer
B. Deploy AWS Shield Advanced in addition to AWS WAF. Add the ALB as a protected resource.
Explanation
AWS Shield Advanced provides expanded protection against DDoS attacks at layers 3, 4, and 7 and, when paired with AWS WAF, can automatically create and manage rules that mitigate application layer floods, so adding the ALB as a protected resource requires the least operational overhead. The AWS WAF API also supports security automation such as blocking IP addresses that exceed request limits, which is useful for mitigating HTTP flood attacks.
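A minimal sketch of option B's protected-resource step with boto3 (the ALB ARN and protection name are placeholders, and an active Shield Advanced subscription is assumed):

```python
import boto3

shield = boto3.client("shield")

# Assumed ALB ARN; replace with the load balancer in front of the application.
alb_arn = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "loadbalancer/app/my-alb/50dc6c495c0c9188"
)

# Add the ALB as a protected resource under AWS Shield Advanced.
response = shield.create_protection(
    Name="alb-layer7-protection",
    ResourceArn=alb_arn,
)
print(response["ProtectionId"])
```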
Reference
Question 713
Exam Question
A travel company built a web application that uses Amazon Simple Email Service (Amazon SES) to send email notifications to users. The company needs to enable logging to help troubleshoot email delivery issues. The company also needs the ability to do searches that are based on recipient, subject, and time sent.
Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)
A. Use Amazon Athena to query the logs in the Amazon S3 bucket for recipient, subject, and time sent.
B. Use Amazon Athena to query the logs in Amazon CloudWatch for recipient, subject, and time sent.
C. Enable AWS CloudTrail logging. Specify an Amazon S3 bucket as the destination for the logs.
D. Create an Amazon SES configuration set with Amazon Kinesis Data Firehose as the destination. Choose to send logs to an Amazon S3 bucket.
E. Create an Amazon CloudWatch log group. Configure Amazon SES to send logs to the log group.
Correct Answer
A. Use Amazon Athena to query the logs in the Amazon S3 bucket for recipient, subject, and time sent.
C. Enable AWS CloudTrail logging. Specify an Amazon S3 bucket as the destination for the logs.
Explanation
To enable you to track your email sending at a granular level, you can set up Amazon SES to publish email sending events to Amazon CloudWatch, Amazon Kinesis Data Firehose, or Amazon Simple Notification Service based on characteristics that you define.
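As a hedged illustration of the event publishing that the explanation describes (the configuration set name, Firehose stream, and IAM role are assumptions), setting up an SES configuration set with a Kinesis Data Firehose event destination might look like:

```python
import boto3

ses = boto3.client("ses")

# Hypothetical names: adjust the configuration set, Firehose stream, and IAM
# role to match your environment.
config_set = "email-delivery-logging"

ses.create_configuration_set(ConfigurationSet={"Name": config_set})

# Publish send/delivery/bounce/complaint events to a Kinesis Data Firehose
# stream that delivers to S3, where Athena can query them later.
ses.create_configuration_set_event_destination(
    ConfigurationSetName=config_set,
    EventDestination={
        "Name": "firehose-to-s3",
        "Enabled": True,
        "MatchingEventTypes": ["send", "delivery", "bounce", "complaint"],
        "KinesisFirehoseDestination": {
            "IAMRoleARN": "arn:aws:iam::123456789012:role/ses-firehose-role",
            "DeliveryStreamARN": (
                "arn:aws:firehose:us-east-1:123456789012:"
                "deliverystream/ses-event-stream"
            ),
        },
    },
)
```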
Reference
- AWS > Documentation > Amazon Simple Email Service > Developer Guide > Retrieving Amazon SES event data from Kinesis Data Firehose
- Serverless Data Processing on AWS
Question 714
Exam Question
A company wants to use AWS to create a business continuity solution in case the company’s main on-premises application fails. The application runs on physical servers that also run other applications. The on-premises application that the company is planning to migrate uses a MySQL database as a data store. All the company’s on-premises applications use operating systems that are compatible with Amazon EC2.
Which solution will achieve the company’s goal with the LEAST operational overhead?
A. Install the AWS Replication Agent on the source servers, including the MySQL servers. Set up replication for all servers. Launch test instances for regular drills. Cut over to the test instances to fail over the workload in the case of a failure event.
B. Install the AWS Replication Agent on the source servers, including the MySQL servers. Initialize AWS Elastic Disaster Recovery in the target AWS Region. Define the launch settings. Frequently perform failover and failback from the most recent point in time.
C. Create AWS Database Migration Service (AWS DMS) replication servers and a target Amazon Aurora MySQL DB cluster to host the database. Create a DMS replication task to copy the existing data to the target DB cluster. Create a local AWS Schema Conversion Tool (AWS SCT) change data capture (CDC) task to keep the data synchronized. Install the rest of the software on EC2 instances by starting with a compatible base AMI.
D. Deploy an AWS Storage Gateway Volume Gateway on premises. Mount volumes on all on-premises servers. Install the application and the MySQL database on the new volumes. Take regular snapshots. Install all the software on EC2 Instances by starting with a compatible base AMI. Launch a Volume Gateway on an EC2 instance. Restore the volumes from the latest snapshot. Mount the new volumes on the EC2 instances in the case of a failure event.
Correct Answer
B. Install the AWS Replication Agent on the source servers, including the MySQL servers. Initialize AWS Elastic Disaster Recovery in the target AWS Region. Define the launch settings. Frequently perform failover and failback from the most recent point in time.
Question 715
Exam Question
A user is creating a PIOPS volume. What is the maximum ratio the user should configure between PIOPS and the volume size?
A. 0
B. 1
C. 2
D. 3
Correct Answer
A. 0
Explanation
Provisioned IOPS volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads that are sensitive to storage performance and consistency in random access I/O throughput. A provisioned IOPS volume can range in size from 10 GB to 1 TB and the user can provision up to 4000 IOPS per volume.
The ratio of IOPS provisioned to the volume size requested can be a maximum of 30; for example, a volume with 3000 IOPS must be at least 100 GB.
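As a quick worked check of that ratio (a minimal sketch; the 30:1 figure is taken from the explanation above, and current EBS volume types have different limits):

```python
# Minimum volume size implied by a 30:1 IOPS-to-size ratio (per the
# explanation above); e.g. 3000 provisioned IOPS requires at least 100 GB.
MAX_IOPS_PER_GB = 30

def min_volume_size_gb(provisioned_iops: int) -> float:
    return provisioned_iops / MAX_IOPS_PER_GB

print(min_volume_size_gb(3000))  # 100.0
```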
Reference
AWS > Documentation > Amazon EC2 > User Guide for Linux Instances > Amazon EBS volume types
Question 716
Exam Question
A financial services company receives a regular data feed from its credit card servicing partner. Approximately 5,000 records are sent every 15 minutes in plaintext, delivered over HTTPS directly into an Amazon S3 bucket with server-side encryption. This feed contains sensitive credit card primary account number (PAN) data. The company needs to automatically mask the PAN before sending the data to another S3 bucket for additional internal processing. The company also needs to remove and merge specific fields, and then transform the record into JSON format. Additionally, extra feeds are likely to be added in the future, so any design needs to be easily expandable.
Which solution will meet these requirements?
A. Invoke an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Invoke another Lambda function when new messages arrive in the SQS queue to process the records, writing the results to a temporary location in Amazon S3. Invoke a final Lambda function once the SQS queue is empty to transform the records into JSON format and send the results to another S3 bucket for internal processing.
B. Invoke an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Configure an AWS Fargate container application to automatically scale to a single instance when the SQS queue contains messages. Have the application process each record, and transform the record into JSON format. When the queue is empty, send the results to another S3 bucket for internal processing and scale down the AWS Fargate instance.
C. Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Invoke an AWS Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing.
D. Create an AWS Glue crawler and custom classifier based upon the data feed formats and build a table definition to match. Perform an Amazon Athena query on file delivery to start an Amazon EMR ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, send the results to another S3 bucket for internal processing and scale down the EMR cluster.
Correct Answer
C. Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Invoke an AWS Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing.
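A minimal sketch of the Lambda trigger in option C (the Glue job name, job arguments, and S3 event wiring are assumptions, not details given in the question):

```python
import boto3

glue = boto3.client("glue")

# Hypothetical Glue ETL job that masks the PAN, removes/merges fields, and
# writes JSON output to the internal processing bucket.
GLUE_JOB_NAME = "pan-masking-etl"

def lambda_handler(event, context):
    # Triggered by an S3 ObjectCreated event when the partner delivers a file.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName=GLUE_JOB_NAME,
            Arguments={
                "--source_bucket": bucket,
                "--source_key": key,
            },
        )
```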
Question 717
Exam Question
Which of the following rules must be added to a mount target security group to access Amazon Elastic File System (EFS) from an on-premises server?
A. Configure an NFS proxy between Amazon EFS and the on-premises server to route traffic.
B. Set up a Point-To-Point Tunneling Protocol Server (PPTP) to allow secure connection.
C. Allow inbound traffic to the Network File System (NFS) port (2049) from the on-premises server.
D. Permit secure traffic to the Kerberos port 88 from the on-premises server.
Correct Answer
C. Allow inbound traffic to the Network File System (NFS) port (2049) from the on-premises server.
Explanation
By mounting an Amazon EFS file system on an on-premises server, on-premises data can be migrated into the AWS Cloud. Any one of the mount targets in your VPC can be used as long as the subnet of the mount target is reachable by using the AWS Direct Connect connection. To access Amazon EFS from an on-premises server, a rule must be added to the mount target security group to allow inbound traffic to the NFS port (2049) from the on-premises server.
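As a hedged illustration, adding that rule with boto3 might look like the following; the security group ID and on-premises CIDR are placeholders, not values from the question:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholders: the EFS mount target's security group ID and the on-premises
# CIDR reachable over Direct Connect (or VPN).
MOUNT_TARGET_SG = "sg-0123456789abcdef0"
ON_PREM_CIDR = "192.168.0.0/16"

# Allow inbound NFS (TCP 2049) from the on-premises network to the security
# group attached to the EFS mount target.
ec2.authorize_security_group_ingress(
    GroupId=MOUNT_TARGET_SG,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 2049,
            "ToPort": 2049,
            "IpRanges": [{"CidrIp": ON_PREM_CIDR}],
        }
    ],
)
```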
Reference
AWS > Documentation > Amazon Elastic File System (EFS) > User Guide > Amazon EFS: How it works
Question 718
Exam Question
A company is running an application in the AWS Cloud. The application runs on containers in an Amazon Elastic Container Service (Amazon ECS) cluster. The ECS tasks use the Fargate launch type. The application’s data is relational and is stored in Amazon Aurora MySQL. To meet regulatory requirements, the application must be able to recover to a separate AWS Region in the event of an application failure. In case of a failure, no data can be lost.
Which solution will meet these requirements with the LEAST amount of operational overhead?
A. Provision an Aurora Replica in a different Region.
B. Set up AWS DataSync for continuous replication of the data to a different Region.
C. Set up AWS Database Migration Service (AWS DMS) to perform a continuous replication of the data to a different Region.
D. Use Amazon Data Lifecycle Manager (Amazon DLM) to schedule a snapshot every 5 minutes.
Correct Answer
A. Provision an Aurora Replica in a different Region.
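A minimal sketch of option A, assuming an unencrypted Aurora MySQL cluster and a cross-Region Aurora read replica cluster (all identifiers, ARNs, and Regions are placeholders); Aurora Global Database is another way to achieve the same goal:

```python
import boto3

# Work in the DR Region where the replica cluster should live.
rds = boto3.client("rds", region_name="us-west-2")

# Create a cross-Region Aurora MySQL read replica cluster of the source
# cluster in the primary Region.
rds.create_db_cluster(
    DBClusterIdentifier="app-db-replica",
    Engine="aurora-mysql",
    ReplicationSourceIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:cluster:app-db"
    ),
)

# A DB instance still needs to be added to the replica cluster so it can
# serve reads and be promoted during a failover.
rds.create_db_instance(
    DBInstanceIdentifier="app-db-replica-instance-1",
    DBClusterIdentifier="app-db-replica",
    Engine="aurora-mysql",
    DBInstanceClass="db.r6g.large",
)
```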
Question 719
Exam Question
A company provides a centralized Amazon EC2 application hosted in a single shared VPC. The centralized application must be accessible from client applications running in the VPCs of other business units. The centralized application front end is configured with a Network Load Balancer (NLB) for scalability.
Up to 10 business unit VPCs will need to be connected to the shared VPC. Some of the business unit VPC CIDR blocks overlap with the shared VPC, and some overlap with each other. Network connectivity to the centralized application in the shared VPC should be allowed from authorized business unit VPCs only.
Which network configuration should a solutions architect use to provide connectivity from the client applications in the business unit VPCs to the centralized application in the shared VPC?
A. Create an AWS Transit Gateway. Attach the shared VPC and the authorized business unit VPCs to the transit gateway. Create a single transit gateway route table and associate it with all of the attached VPCs. Allow automatic propagation of routes from the attachments into the route table. Configure VPC routing tables to send traffic to the transit gateway.
B. Configure a virtual private gateway for the shared VPC and create customer gateways for each of the authorized business unit VPCs. Establish a Site-to-Site VPN connection from the business unit VPCs to the shared VPC. Configure VPC routing tables to send traffic to the VPN connection.
C. Create a VPC endpoint service using the centralized application NLB and enable the option to require endpoint acceptance. Create a VPC endpoint in each of the business unit VPCs using the service name of the endpoint service. Accept authorized endpoint requests from the endpoint service console.
D. Create a VPC peering connection from each business unit VPC to the shared VPC. Accept the VPC peering connections from the shared VPC console. Configure VPC routing tables to send traffic to the VPC peering connection.
Correct Answer
C. Create a VPC endpoint service using the centralized application NLB and enable the option to require endpoint acceptance. Create a VPC endpoint in each of the business unit VPCs using the service name of the endpoint service. Accept authorized endpoint requests from the endpoint service console.
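A minimal sketch of option C with boto3 (all ARNs and resource IDs are placeholders; the endpoint call would normally run in each business unit account):

```python
import boto3

ec2 = boto3.client("ec2")

# In the shared VPC: expose the centralized application's NLB as a VPC
# endpoint service (AWS PrivateLink) that requires acceptance.
service = ec2.create_vpc_endpoint_service_configuration(
    AcceptanceRequired=True,
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111111111111:"
        "loadbalancer/net/central-app/abcdef0123456789"
    ],
)
service_name = service["ServiceConfiguration"]["ServiceName"]

# In each authorized business unit VPC: create an interface endpoint to the
# service. PrivateLink works even when the VPC CIDR blocks overlap.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0aaaaaaaaaaaaaaaa",
    ServiceName=service_name,
    SubnetIds=["subnet-0bbbbbbbbbbbbbbbb"],
    SecurityGroupIds=["sg-0cccccccccccccccc"],
)
```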
Question 720
Exam Question
An IAM user is trying to perform an action on an object in a bucket that belongs to another AWS account. Which of the following will Amazon S3 not verify?
A. The object owner has provided access to the IAM user
B. Permission provided by the parent of the IAM user on the bucket
C. Permission provided by the bucket owner to the IAM user
D. Permission provided by the parent of the IAM user
Correct Answer
B. Permission provided by the parent of the IAM user on the bucket
Explanation
If an IAM user is trying to perform an action on an object in a bucket that belongs to another AWS account, Amazon S3 verifies whether the IAM user’s parent account has granted the user sufficient permission for the action. It also evaluates the bucket owner’s policy and the permissions granted by the object owner. What S3 does not verify is any permission the IAM user’s parent account claims to grant on the bucket itself, because the parent account does not own that bucket.
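A minimal sketch of the user-context piece that S3 does verify (the parent account granting its IAM user access to the other account's object; all names and ARNs are placeholders):

```python
import json
import boto3

iam = boto3.client("iam")

# The IAM user's parent account grants the user permission to act on the
# object; the bucket owner and the object owner must also allow the request.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::other-account-bucket/shared/report.csv",
        }
    ],
}

iam.put_user_policy(
    UserName="analyst",
    PolicyName="cross-account-object-read",
    PolicyDocument=json.dumps(policy),
)
```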
Reference
AWS > Documentation > Amazon Simple Storage Service (S3) > User Guide > How Amazon S3 authorizes a request for an object operation