The latest AWS Certified Solutions Architect – Professional (SAP-C02) practice exam questions and answers (Q&A) are available free of charge. They are intended to help you pass the SAP-C02 exam and earn the AWS Certified Solutions Architect – Professional certification.
Table of Contents
- Question 701
- Exam Question
- Correct Answer
- Question 702
- Exam Question
- Correct Answer
- Question 703
- Exam Question
- Correct Answer
- Question 704
- Exam Question
- Correct Answer
- Question 705
- Exam Question
- Correct Answer
- Explanation
- Question 706
- Exam Question
- Correct Answer
- Question 707
- Exam Question
- Correct Answer
- Question 708
- Exam Question
- Correct Answer
- Question 709
- Exam Question
- Correct Answer
- Question 710
- Exam Question
- Correct Answer
- Explanation
- Reference
Question 701
Exam Question
Your team has a Tomcat-based Java application that you need to deploy into development, test, and production environments. After some research, you opt to use Elastic Beanstalk because of its tight integration with your developer tools, and RDS because of its ease of management. Your QA team lead points out that you need to roll a sanitized set of production data into your environment on a nightly basis. Similarly, other software teams in your organization want access to that same restored data via their EC2 instances in your VPC.
What is the optimal setup for persistence and security that meets the above requirements?
A. Create your RDS instance as part of your Elastic Beanstalk definition and alter its security group to allow access to it from hosts in your application subnets.
B. Create your RDS instance separately and pass its DNS name to your app’s DB connection string as an environment variable. Create a security group for client machines and add it as a valid source for DB traffic to the security group of the RDS instance itself.
C. Create your RDS instance separately and pass its DNS name to your DB connection string as an environment variable. Alter its security group to allow access to it from hosts in your application subnets.
D. Create your RDS instance separately and add its IP address to your application’s DB connection strings in your code. Alter its security group to allow access to it from hosts within your VPC’s IP address block.
Correct Answer
B. Create your RDS instance separately and pass its DNS name to your app’s DB connection string as an environment variable. Create a security group for client machines and add it as a valid source for DB traffic to the security group of the RDS instance itself.
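As a rough illustration of answer B, the sketch below (Python/boto3) adds a client security group as an ingress source on the RDS instance’s security group; the group IDs and database port are hypothetical placeholders, and the port should match your database engine.

```python
# Minimal sketch of the security-group pattern in answer B.
# Group IDs and port are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")

CLIENT_SG_ID = "sg-0123456789abcdef0"   # security group attached to the client EC2 instances
RDS_SG_ID = "sg-0fedcba9876543210"      # security group attached to the RDS instance

# Allow DB traffic (3306 shown for MySQL; use your engine's port) from any instance
# that carries the client security group, instead of opening the database to whole
# subnets or CIDR ranges.
ec2.authorize_security_group_ingress(
    GroupId=RDS_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": CLIENT_SG_ID}],
    }],
)
```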
Question 702
Exam Question
A company is running an application that uses an Amazon ElastiCache for Redis cluster as a caching layer. A recent security audit revealed that the company has configured encryption at rest for ElastiCache. However, the company did not configure ElastiCache to use encryption in transit.
Additionally, users can access the cache without authentication.
A solutions architect must make changes to require user authentication and to ensure that the company is using end-to-end encryption.
Which solution will meet these requirements?
A. Create an AUTH token. Store the token in AWS Systems Manager Parameter Store as an encrypted parameter. Create a new cluster with AUTH, and configure encryption in transit. Update the application to retrieve the AUTH token from Parameter Store when necessary and to use the AUTH token for authentication.
B. Create an AUTH token. Store the token in AWS Secrets Manager. Configure the existing cluster to use the AUTH token, and configure encryption in transit. Update the application to retrieve the AUTH token from Secrets Manager when necessary and to use the AUTH token for authentication.
C. Create an SSL certificate. Store the certificate in AWS Secrets Manager. Create a new cluster, and configure encryption in transit. Update the application to retrieve the SSL certificate from Secrets Manager when necessary and to use the certificate for authentication.
D. Create an SSL certificate. Store the certificate in AWS Systems Manager Parameter Store, as an encrypted advanced parameter. Update the existing cluster to configure encryption in transit. Update the application to retrieve the SSL certificate from Parameter Store when necessary and to use the certificate for authentication.
Correct Answer
B. Create an AUTH token. Store the token in AWS Secrets Manager. Configure the existing cluster to use the AUTH token, and configure encryption in transit. Update the application to retrieve the AUTH token from Secrets Manager when necessary and to use the AUTH token for authentication.
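For the application side of answer B, a minimal sketch (Python/boto3 with the redis-py client) is shown below. The secret name and cluster endpoint are hypothetical, and it assumes the AUTH token is stored as a plain string secret.

```python
# Minimal sketch: retrieve the AUTH token from Secrets Manager and connect over TLS.
# Secret name and endpoint are hypothetical.
import boto3
import redis

secrets = boto3.client("secretsmanager")
auth_token = secrets.get_secret_value(SecretId="redis/auth-token")["SecretString"]

cache = redis.Redis(
    host="my-cluster.xxxxxx.ng.0001.use1.cache.amazonaws.com",  # hypothetical endpoint
    port=6379,
    password=auth_token,   # AUTH token for authentication
    ssl=True,              # encryption in transit
)
cache.ping()
```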
Question 703
Exam Question
A marketing company is migrating an application that stores data on premises in a PostgreSQL database. The company wants to migrate the database to Amazon Aurora PostgreSQL. The database size grows at an average rate of 5 GB daily and is currently 50 TB. The data center has an internet connection with 50 Mbps of available bandwidth. The migration to AWS must be completed as soon as possible, within the next 45 days.
Which data transfer strategy meets those requirements with the LEAST amount of application downtime?
A. Create a local backup of the database, and copy the backup onto an AWS Snowball Edge Storage Optimized device. Ship the device to AWS. Use native database tools to restore the backup onto the new database from Amazon S3 and to set up replication with the source database to capture any changes since the backup. Modify DNS records to point to the new database.
B. Take the application offline. Back up the database to a shared local file system. Install an AWS DataSync agent on a VM in the data center. Configure the file system as the source location, and configure an Amazon S3 bucket as the destination. Use native database tools to restore the backup onto the new database. Modify the database connection string, and bring the application online.
C. Install the Server Migration Connector VM in the local data center. Use the AWS Server Migration Service (AWS SMS) console to replicate the on-premises database to the new database. Modify DNS records to point to the new database.
D. Take the application offline. Create a local backup of the database. Transmit the database backup file over the existing connection to an Amazon S3 bucket. Use native database tools to restore the backup onto the new database and to set up replication to capture any changes since the backup. Modify the database connection string, and bring the application online.
Correct Answer
A. Create a local backup of the database, and copy the backup onto an AWS Snowball Edge Storage Optimized device. Ship the device to AWS. Use native database tools to restore the backup onto the new database from Amazon S3 and to set up replication with the source database to capture any changes since the backup. Modify DNS records to point to the new database.
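A quick back-of-the-envelope calculation shows why the 50 Mbps link alone cannot move the 50 TB database within 45 days, which is what makes the Snowball Edge approach in answer A necessary:

```python
# Rough transfer-time estimate for 50 TB over a 50 Mbps link.
db_size_bits = 50 * 10**12 * 8   # 50 TB expressed in bits
link_bps = 50 * 10**6            # 50 Mbps of available bandwidth
seconds = db_size_bits / link_bps
print(seconds / 86400)           # ~92.6 days at 100% utilization, before daily growth
```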
Question 704
Exam Question
A company is running a compute workload by using Amazon EC2 Spot Instances that are in an Auto Scaling group. The launch template uses two placement groups and a single instance type.
Recently, a monitoring system reported Auto Scaling instance launch failures that correlated with longer wait times for system users. The company needs to improve the overall reliability of the workload.
Which solution will meet this requirement?
A. Replace the launch template with a launch configuration to use an Auto Scaling group that uses attribute-based instance type selection.
B. Create a new launch template version that uses attribute-based instance type selection. Configure the Auto Scaling group to use the new launch template version.
C. Update the launch template and the Auto Scaling group to increase the number of placement groups.
D. Update the launch template to use a larger instance type.
Correct Answer
B. Create a new launch template version that uses attribute-based instance type selection. Configure the Auto Scaling group to use the new launch template version.
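A minimal sketch of answer B is shown below (Python/boto3), assuming an existing launch template and Auto Scaling group; the template name, group name, and vCPU/memory ranges are illustrative only.

```python
# Minimal sketch: new launch template version with attribute-based instance type
# selection, then point the Auto Scaling group at it. Names and ranges are hypothetical.
import boto3

ec2 = boto3.client("ec2")
asg = boto3.client("autoscaling")

# 1. Create a new launch template version that selects instance types by attributes.
#    Note: InstanceRequirements and a fixed InstanceType are mutually exclusive, so the
#    source version must not pin an instance type.
new_version = ec2.create_launch_template_version(
    LaunchTemplateName="spot-workers",
    SourceVersion="1",
    LaunchTemplateData={
        "InstanceRequirements": {
            "VCpuCount": {"Min": 2, "Max": 8},
            "MemoryMiB": {"Min": 4096},
        }
    },
)["LaunchTemplateVersion"]["VersionNumber"]

# 2. Update the Auto Scaling group so Spot requests can be fulfilled from a much
#    wider pool of instance types, improving launch reliability.
asg.update_auto_scaling_group(
    AutoScalingGroupName="spot-workers-asg",
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "spot-workers",
                "Version": str(new_version),
            }
        },
        "InstancesDistribution": {"SpotAllocationStrategy": "price-capacity-optimized"},
    },
)
```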
Question 705
Exam Question
______________ pricing offers significant savings over the normal price of DynamoDB provisioned throughput capacity.
A. Reserved Capacity
B. Reserved Point
C. Discount Voucher
D. Discount Service
Correct Answer
A. Reserved Capacity
Explanation
Reserved Capacity pricing offers significant savings over the normal price of DynamoDB provisioned throughput capacity. When you buy Reserved Capacity, you pay a one-time upfront fee and commit to paying for a minimum usage level, at the applicable hourly rates, for the duration of the Reserved Capacity term.
Question 706
Exam Question
A company is migrating a document processing workload to AWS. The company has updated many applications to natively use the Amazon S3 API to store, retrieve, and modify documents that a processing server generates at a rate of approximately 5 documents every second. After the document processing is finished, customers can download the documents directly from Amazon S3.
During the migration, the company discovered that it could not immediately update the processing server that generates many documents to support the S3 API. The server runs on Linux and requires fast local access to the files that the server generates and modifies. When the server finishes processing, the files must be available to the public for download within 30 minutes.
Which solution will meet these requirements with the LEAST amount of effort?
A. Migrate the application to an AWS Lambda function. Use the AWS SDK for Java to generate, modify, and access the files that the company stores directly in Amazon S3.
B. Set up an Amazon S3 File Gateway and configure a file share that is linked to the document store. Mount the file share on an Amazon EC2 instance by using NFS. When changes occur in Amazon S3, initiate a RefreshCache API call to update the S3 File Gateway.
C. Configure Amazon FSx for Lustre with an import and export policy. Link the new file system to an S3 bucket. Install the Lustre client and mount the document store to an Amazon EC2 instance by using NFS.
D. Configure AWS DataSync to connect to an Amazon EC2 instance. Configure a task to synchronize the generated files to and from Amazon S3.
Correct Answer
C. Configure Amazon FSx for Lustre with an import and export policy. Link the new file system to an S3 bucket. Install the Lustre client and mount the document store to an Amazon EC2 instance by using NFS.
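A minimal sketch of the file-system creation step in answer C is shown below (Python/boto3); the bucket name, subnet ID, and capacity are hypothetical, and mounting with the Lustre client on the EC2 instance is a separate step.

```python
# Minimal sketch: FSx for Lustre file system linked to an S3 bucket with an
# import/export policy. Bucket, subnet, and sizing values are hypothetical.
import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                        # GiB; minimum for this deployment type
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://document-store-bucket",  # S3 objects appear as lazy-loaded files
        "ExportPath": "s3://document-store-bucket",  # changed files are exported back to S3
        "AutoImportPolicy": "NEW_CHANGED",           # pick up new and changed S3 objects
    },
)
```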
Question 707
Exam Question
A solutions architect is designing a solution that consists of a fleet of Amazon EC2 Reserved Instances (RIs) in an Auto Scaling group that will grow over time as usage increases. The solution needs to maintain 80% RI coverage to maintain cost control, with an alert to the DevOps team using an email distribution list when coverage drops below 30%. The solution must also include the ability to generate a report to easily track and manage coverage. The company has a policy that allows only one workload for each AWS account.
Which set of steps should the solutions architect take to create the report and alert the DevOps team?
A. Create an Amazon Simple Notification Service (Amazon SNS) topic and subscribe the DevOps email distribution list. Use the Cost Explorer console to configure the report for RI utilization, set the utilization target to 30%, and link to the SNS topic created in the alert configuration.
B. Enable cost allocation tags and ensure instances populate a customer-managed cost allocation tag at startup. Use the Cost Explorer console to configure the report for RI coverage, filter using the customer-managed cost allocation tag, set the threshold to 80%, and enter the email distribution list in the alert configuration.
C. Use the AWS Billing and Cost Management console to create a reservation budget for RI utilization, set the utilization to 80%, and enter the email distribution list in the alert configuration.
D. Create an Amazon Simple Notification Service (Amazon SNS) topic and subscribe the DevOps email distribution list. Enable cost allocation tags and ensure instances populate a customer-managed cost allocation tag at startup. Use the AWS Billing and Cost Management console to create a budget for RI coverage, filter using the customer-managed cost allocation tag, set the threshold to 80%, and link to the SNS topic created in the alert configuration.
Correct Answer
A. Create an Amazon Simple Notification Service (Amazon SNS) topic and subscribe the DevOps email distribution list. Use the Cost Explorer console to configure the report for RI utilization, set the utilization target to 30%, and link to the SNS topic created in the alert configuration.
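The SNS portion of answer A could look roughly like the sketch below (Python/boto3); the topic name and email address are hypothetical, and the Cost Explorer RI utilization report itself is configured in the console.

```python
# Minimal sketch: SNS topic plus an email subscription for the DevOps distribution list.
# Topic name and address are hypothetical; recipients must confirm the subscription.
import boto3

sns = boto3.client("sns")

topic_arn = sns.create_topic(Name="ri-utilization-alerts")["TopicArn"]
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="email",
    Endpoint="devops-team@example.com",
)
```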
Question 708
Exam Question
A delivery company is running a serverless solution in the AWS Cloud. The solution manages user data, delivery information, and past purchase details. The solution consists of several microservices. The central user service stores sensitive data in an Amazon DynamoDB table. Several of the other microservices store a copy of parts of the sensitive data in different storage services.
The company needs the ability to delete user information upon request. As soon as the central user service deletes a user, every other microservice must also delete its copy of the data immediately.
Which solution will meet these requirements?
A. Activate DynamoDB Streams on the DynamoDB table. Create an AWS Lambda trigger for the DynamoDB stream that will post events about user deletion in an Amazon Simple Queue Service (Amazon SQS) queue. Configure each microservice to poll the queue and delete the user from the DynamoDB table.
B. Set up DynamoDB event notifications on the DynamoDB table. Create an Amazon Simple Notification Service (Amazon SNS) topic as a target for the DynamoDB event notification. Configure each microservice to subscribe to the SNS topic and to delete the user from the DynamoDB table.
C. Configure the central user service to post an event on a custom Amazon EventBridge event bus when the company deletes a user. Create an EventBridge rule for each microservice to match the user deletion event pattern and invoke logic in the microservice to delete the user from the DynamoDB table.
D. Configure the central user service to post a message on an Amazon Simple Queue Service (Amazon SQS) queue when the company deletes a user. Configure each microservice to create an event filter on the SQS queue and to delete the user from the DynamoDB table.
Correct Answer
A. Activate DynamoDB Streams on the DynamoDB table. Create an AWS Lambda trigger for the DynamoDB stream that will post events about user deletion in an Amazon Simple Queue Service (Amazon SQS) queue. Configure each microservice to poll the queue and delete the user from the DynamoDB table.
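The Lambda trigger in answer A could look roughly like the sketch below (Python/boto3), assuming user items are keyed by a userId attribute and the queue URL is passed in through an environment variable; both names are hypothetical.

```python
# Minimal sketch of a Lambda function triggered by the DynamoDB stream: it forwards
# user-deletion events to an SQS queue. Key name and env var are hypothetical.
import json
import os
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["USER_DELETION_QUEUE_URL"]

def handler(event, context):
    for record in event["Records"]:
        # Only REMOVE events represent a deletion in the central user table.
        if record["eventName"] != "REMOVE":
            continue
        user_id = record["dynamodb"]["Keys"]["userId"]["S"]
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"event": "USER_DELETED", "userId": user_id}),
        )
```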
Question 709
Exam Question
A company has multiple AWS accounts and manages these accounts with AWS Organizations. A developer was given IAM user credentials to access AWS resources. The developer should have read-only access to all Amazon S3 buckets in the account. However, when the developer tries to access the S3 buckets from the console, they receive an access denied error message and no buckets are listed.
A solutions architect reviews the permissions and finds that the developer’s IAM user is listed as having read-only access to all S3 buckets in the account.
Which additional steps should the solutions architect take to troubleshoot the issue? (Select TWO.)
A. Check the ACLs for all S3 buckets.
B. Check the bucket policies for all S3 buckets.
C. Check if an appropriate IAM role is attached to the IAM user.
D. Check for the permissions boundaries set for the IAM user.
E. Check the SCPs set at the organizational units (OUs).
Correct Answer
B. Check the bucket policies for all S3 buckets.
E. Check the SCPs set at the organizational units (OUs).
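The two checks in the correct answers could be scripted roughly as follows (Python/boto3); the bucket name and account ID are hypothetical, and the Organizations call requires management-account or delegated-administrator permissions.

```python
# Minimal sketch of checks B and E. Bucket name and account ID are hypothetical.
import boto3

s3 = boto3.client("s3")
org = boto3.client("organizations")

# B: inspect a bucket policy for explicit Deny statements that override IAM allows.
policy = s3.get_bucket_policy(Bucket="example-bucket")["Policy"]
print(policy)

# E: list the service control policies attached to the account (or its OU),
# since an SCP can deny S3 actions regardless of the user's IAM permissions.
scps = org.list_policies_for_target(
    TargetId="123456789012",
    Filter="SERVICE_CONTROL_POLICY",
)["Policies"]
print([p["Name"] for p in scps])
```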
Question 710
Exam Question
A company is running a web application in a VPC. The web application runs on a group of Amazon EC2 instances behind an Application Load Balancer (ALB). The ALB is using AWS WAF.
An external customer needs to connect to the web application. The company must provide IP addresses to all external customers.
Which solution will meet these requirements with the LEAST operational overhead?
A. Replace the ALB with a Network Load Balancer (NLB). Assign an Elastic IP address to the NLB.
B. Allocate an Elastic IP address. Assign the Elastic IP address to the ALB. Provide the Elastic IP address to the customer.
C. Create an AWS Global Accelerator standard accelerator. Specify the ALB as the accelerator’s endpoint. Provide the accelerator’s IP addresses to the customer.
D. Configure an Amazon CloudFront distribution. Set the ALB as the origin. Ping the distribution’s DNS name to determine the distribution’s public IP address. Provide the IP address to the customer.
Correct Answer
C. Create an AWS Global Accelerator standard accelerator. Specify the ALB as the accelerator’s endpoint. Provide the accelerator’s IP addresses to the customer.
Explanation
Option A is wrong. AWS WAF does not support associating with NLB.
Option B is wrong. An ALB does not support an Elastic IP address.
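A minimal sketch of answer C is shown below (Python/boto3); the ALB ARN and endpoint group Region are hypothetical. Note that the Global Accelerator API is served from the us-west-2 Region.

```python
# Minimal sketch: standard accelerator with the ALB as the endpoint, exposing two
# static anycast IP addresses to share with the customer. ARNs/Regions are hypothetical.
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="web-app", IpAddressType="IPV4", Enabled=True)
accelerator_arn = accelerator["Accelerator"]["AcceleratorArn"]
static_ips = accelerator["Accelerator"]["IpSets"][0]["IpAddresses"]  # give these to the customer

listener_arn = ga.create_listener(
    AcceleratorArn=accelerator_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)["Listener"]["ListenerArn"]

ga.create_endpoint_group(
    ListenerArn=listener_arn,
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web/abc123",
        "ClientIPPreservationEnabled": True,
    }],
)
```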
Reference
- AWS > Documentation > AWS Global Accelerator > Developer Guide > Add an accelerator when you create a load balancer
- AWS > Documentation > AWS WAF > Developer Guide > AWS WAF
- Elastic Load Balancing features