AWS Certified Solutions Architect – Professional SAP-C02 Exam Questions and Answers – 4

The latest AWS Certified Solutions Architect – Professional SAP-C02 practice exam questions and answers (Q&A) are available free of charge to help you pass the AWS Certified Solutions Architect – Professional SAP-C02 exam and earn the AWS Certified Solutions Architect – Professional certification.

Question 331

Exam Question

An auction website enables users to bid on collectible items. The auction rules require that each bid is processed only once and in the order it was received. The current implementation is based on a fleet of Amazon EC2 web servers that write bid records into Amazon Kinesis Data Streams. A single t2.large instance has a cron job that runs the bid processor, which reads incoming bids from Kinesis Data Streams and processes each bid. The auction site is growing in popularity, but users are complaining that some bids are not registering. Troubleshooting indicates that the bid processor is too slow during peak demand hours, sometimes crashes while processing, and occasionally loses track of which record is being processed.

What changes should be made to make bid processing more reliable?

A. Refactor the web application to use the Amazon Kinesis Producer Library (KPL) when posting bids to Kinesis Data Streams. Refactor the bid processor to flag each record in Kinesis Data Streams as being unread, processing, or processed. At the start of each bid processing run, scan Kinesis Data Streams for unprocessed records.

B. Refactor the web application to post each incoming bid to an Amazon SNS topic in place of Kinesis Data Streams. Configure the SNS topic to trigger an AWS Lambda function that processes each bid as soon as a user submits it.

C. Refactor the web application to post each incoming bid to an Amazon SQS FIFO queue in place of Kinesis Data Streams. Refactor the bid processor to continuously consume the SQS queue. Place the bid processing EC2 instance in an Auto Scaling group with a minimum and a maximum size of 1.

D. Switch the EC2 instance type from t2.large to a larger general purpose instance type. Put the bid processor EC2 instances in an Auto Scaling group that scales out the number of EC2 instances running the bid processor based on the IncomingRecords metric in Kinesis Data Streams.

Correct Answer

C. Refactor the web application to post each incoming bid to an Amazon SQS FIFO queue in place of Kinesis Data Streams. Refactor the bid processor to continuously consume the SQS queue. Place the bid processing EC2 instance in an Auto Scaling group with a minimum and a maximum size of 1.

Reference

Amazon SQS FAQs
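
For illustration, here is a minimal boto3 sketch of how the web tier could post bids to the FIFO queue described in option C; the queue URL and message fields are assumptions. Using the item ID as the MessageGroupId preserves per-item ordering, and the MessageDeduplicationId ensures each bid is accepted only once.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue; the .fifo suffix is required for FIFO queues.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/bids.fifo"

def post_bid(item_id: str, bid: dict) -> None:
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(bid),
        # All bids for one item share a group, so they are delivered in order.
        MessageGroupId=item_id,
        # The deduplication ID guarantees each bid is accepted only once.
        MessageDeduplicationId=bid["bid_id"],
    )
```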

Question 332

Exam Question

A customer is in the process of deploying multiple applications to AWS that are owned and operated by different development teams. Each development team maintains the authorization of its users independently from other teams. The customer's information security team would like to be able to delegate user authorization to the individual development teams but independently apply restrictions to the users' permissions based on factors such as the user's device and location. For example, the information security team would like to grant read-only permissions to a user who is defined by the development team as read/write whenever the user is authenticating from outside the corporate network.

What steps can the information security team take to implement this capability?

A. Operate an authentication service that generates AWS Security Token Service (STS) tokens with IAM policies from application-defined IAM roles.
B. Add additional IAM policies to the application IAM roles that deny user privileges based on information security policy.
C. Enable federation with the internal LDAP directory and grant the application teams permissions to modify users.
D. Configure IAM policies that restrict modification of the application IAM roles only to the information security team.

Correct Answer

B. Add additional IAM policies to the application IAM roles that deny user privileges based on information security policy.
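
To illustrate option B, here is a hedged sketch of the kind of context-based deny policy the information security team could layer onto the application roles; the role name, policy name, actions, and corporate CIDR are all assumptions. An explicit Deny always overrides the development team's Allow statements.

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical deny policy: block write actions when the request does not
# originate from the corporate network (the CIDR below is an assumption).
deny_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["s3:PutObject", "s3:DeleteObject"],
            "Resource": "*",
            "Condition": {
                "NotIpAddress": {"aws:SourceIp": "203.0.113.0/24"}
            },
        }
    ],
}

# Attach the restriction as an inline policy on the application team's role.
iam.put_role_policy(
    RoleName="app-team-role",  # hypothetical role name
    PolicyName="infosec-location-restriction",
    PolicyDocument=json.dumps(deny_policy),
)
```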

Question 333

Exam Question

A company runs a software-as-a-service (SaaS) application on AWS. The application consists of AWS Lambda functions and an Amazon RDS for MySQL Multi-AZ database. During market events, the application has a much higher workload than normal. Users notice slow response times during the peak periods because of the large number of database connections. The company needs to improve the scalability, performance, and availability of the database.

Which solution meets these requirements?

A. Create an Amazon CloudWatch alarm action that triggers a Lambda function to add an Amazon RDS for MySQL read replica when resource utilization hits a threshold.

B. Migrate the database to Amazon Aurora and add a read replica. Add a database connection pool outside of the Lambda handler function.

C. Migrate the database to Amazon Aurora and add a read replica. Use Amazon Route 53 weighted records.

D. Migrate the database to Amazon Aurora and add an Aurora Replica. Configure Amazon RDS Proxy to manage database connection pools.

Correct Answer

D. Migrate the database to Amazon Aurora and add an Aurora Replica. Configure Amazon RDS Proxy to manage database connection pools.
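
As a sketch of option D, a Lambda handler can connect through the RDS Proxy endpoint instead of directly to Aurora; the environment variables and query below are placeholders. The proxy pools and shares connections, so bursts of concurrent invocations no longer exhaust the database's connection limit.

```python
import os
import pymysql  # packaged with the deployment artifact; not in the Lambda runtime by default

# Connect through the RDS Proxy endpoint (placeholder values) rather than
# directly to the database. The connection is created once per execution
# environment and reused across warm invocations.
connection = pymysql.connect(
    host=os.environ["PROXY_ENDPOINT"],  # e.g. my-proxy.proxy-xxxx.us-east-1.rds.amazonaws.com
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database=os.environ["DB_NAME"],
)

def handler(event, context):
    # Hypothetical read query; the table is a placeholder.
    with connection.cursor() as cursor:
        cursor.execute("SELECT symbol, price FROM quotes LIMIT 10")
        return cursor.fetchall()
```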

Question 334

Exam Question

You are moving an existing traditional system to AWS, and during the migration discover that there is a master server which is a single point of failure. Having examined the implementation of the master server, you realize there is not enough time during migration to re-engineer it to be highly available, though you do discover that it stores its state in a local MySQL database.

In order to minimize downtime, you select RDS to replace the local database and configure the master to use it. What steps would best allow you to create a self-healing architecture?

A. Replicate the local database into an RDS Read Replica. Place the master node into a multi-AZ auto-scaling group with a minimum of one and a maximum of one with health checks.

B. Migrate the local database into a multi-AZ RDS database. Place the master node into a Cross-Zone ELB with a minimum of one and a maximum of one with health checks.

C. Replicate the local database into an RDS Read Replica. Place the master node into a Cross-Zone ELB with a minimum of one and a maximum of one with health checks.

D. Migrate the local database into a multi-AZ RDS database. Place the master node into a multi-AZ auto-scaling group with a minimum of one and a maximum of one with health checks.

Correct Answer

D. Migrate the local database into a multi-AZ RDS database. Place the master node into a multi-AZ auto-scaling group with a minimum of one and a maximum of one with health checks.
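
A minimal boto3 sketch of the self-healing pattern in option D, assuming a pre-existing launch template and subnet IDs (all names and IDs below are placeholders): an Auto Scaling group with a minimum and maximum of one replaces the master node automatically whenever it fails a health check.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Pin the master node to exactly one instance; if it fails a health check,
# the Auto Scaling group terminates and replaces it in one of the listed AZs.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="master-node-asg",  # hypothetical name
    LaunchTemplate={"LaunchTemplateName": "master-node-lt", "Version": "$Latest"},
    MinSize=1,
    MaxSize=1,
    HealthCheckType="EC2",
    HealthCheckGracePeriod=300,
    # Subnets in two Availability Zones make the group multi-AZ.
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
)
```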

Question 335

Exam Question

A company wants to retire its Oracle Solaris NFS storage arrays. The company requires rapid data migration over its internet network connection to a combination of destinations: Amazon S3, Amazon Elastic File System (Amazon EFS), and Amazon FSx for Windows File Server. The company also requires a full initial copy, as well as incremental transfers of changes until the retirement of the storage arrays. All data must be encrypted and checked for integrity.

What should a solutions architect recommend to meet these requirements?

A. Configure CloudEndure. Create a project and deploy the CloudEndure agent and token to the storage array. Run the migration plan to start the transfer.

B. Configure AWS DataSync. Configure the DataSync agent and deploy it to the local network. Create a transfer task and start the transfer.

C. Configure the aws S3 sync command. Configure the AWS client on the client side with credentials. Run the sync command to start the transfer.

D. Configure AWS Transfer for FTP. Configure the FTP client with credentials. Script the client to connect and sync to start the transfer.

Correct Answer

B. Configure AWS DataSync. Configure the DataSync agent and deploy it to the local network. Create a transfer task and start the transfer.

Explanation

AWS DataSync enables secure, high-performance transfers and supports both a full initial copy and incremental transfers of changes. DataSync provides encryption in transit and checksum validation to ensure data integrity, and it can transfer data over the internet or over a private network connection. It can also be scripted and automated, making it a good fit for this scenario.
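
For illustration, a minimal boto3 sketch of a DataSync task, assuming the source (NFS) and destination locations have already been created; the ARNs and task name are placeholders.

```python
import boto3

datasync = boto3.client("datasync")

# The location ARNs below are placeholders for a pre-created NFS source
# location and an S3 (or EFS / FSx for Windows File Server) destination.
task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-src",
    DestinationLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-dst",
    Name="solaris-nfs-retirement",
    # Verify transferred data against the source for end-to-end integrity.
    Options={"VerifyMode": "POINT_IN_TIME_CONSISTENT"},
)

# Each execution copies only changes since the previous run (incremental).
datasync.start_task_execution(TaskArn=task["TaskArn"])
```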

Question 336

Exam Question

Your company sells consumer devices and needs to record the first activation of all sold devices. Devices are not activated until the information is written to a persistent database. Activation data is very important for your company and must be analyzed daily with a MapReduce job. The execution time of the data analysis process must be less than three hours per day. Devices are usually sold evenly during the year, but when a new device model comes out there is a predictable peak in activations; that is, for a few days there are 10 times or even 100 times more activations than on an average day.

Which of the following databases and analysis framework would you implement to better optimize costs and performance for this workload?

A. Amazon Relational Database Service and Amazon Elastic MapReduce with Spot Instances

B. Amazon DynamoDB and Amazon Elastic MapReduce with Spot Instances

C. Amazon Relational Database Service and Amazon Elastic MapReduce with Reserved Instances

D. Amazon DynamoDB and Amazon Elastic MapReduce with Reserved Instances

Correct Answer

D. Amazon DynamoDB and Amazon Elastic MapReduce with Reserved Instances
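
To illustrate why DynamoDB fits the predictable activation peaks, provisioned throughput can be raised shortly before a device launch and lowered again afterward; the table name and capacity values below are assumptions.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Before a new device model launches, raise the provisioned write capacity
# on the (hypothetical) activations table to absorb the predictable peak;
# lower it again once activations return to the normal baseline.
dynamodb.update_table(
    TableName="device-activations",
    ProvisionedThroughput={
        "ReadCapacityUnits": 100,
        "WriteCapacityUnits": 1000,  # roughly 100x the everyday baseline
    },
)
```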

Question 337

Exam Question

A company has an application in the AWS Cloud. The application runs on a fleet of 20 Amazon EC2 instances. The EC2 instances are persistent and store data on multiple attached Amazon Elastic Block Store (Amazon EBS) volumes.

The company must maintain backups in a separate AWS Region. The company must be able to recover the EC2 instances and their configuration within 1 business day, with loss of no more than 1 day's worth of data.

The company has limited staff and needs a backup solution that optimizes operational efficiency and cost. The company already has created an AWS CloudFormation template that can deploy the required network configuration in a secondary Region.

Which solution will meet these requirements?

A. Create a second CloudFormation template that can recreate the EC2 instances in the secondary Region. Run daily multivolume snapshots by using AWS Systems Manager Automation runbooks. Copy the snapshots to the secondary Region. In the event of a failure, launch the CloudFormation templates, restore the EBS volumes from snapshots, and transfer usage to the secondary Region.

B. Use Amazon Data Lifecycle Manager (Amazon DLM) to create daily multivolume snapshots of the EBS volumes. In the event of a failure, launch the CloudFormation template and use Amazon DLM to restore the EBS volumes and transfer usage to the secondary Region.

C. Use AWS Backup to create a scheduled daily backup plan for the EC2 instances. Configure the backup task to copy the backups to a vault in the secondary Region. In the event of a failure, launch the CloudFormation template, restore the instance volumes and configurations from the backup vault, and transfer usage to the secondary Region.

D. Deploy EC2 instances of the same size and configuration to the secondary Region. Configure AWS DataSync daily to copy data from the primary Region to the secondary Region. In the event of a failure, launch the CloudFormation template and transfer usage to the secondary Region.

Correct Answer

C. Use AWS Backup to create a scheduled daily backup plan for the EC2 instances. Configure the backup task to copy the backups to a vault in the secondary Region. In the event of a failure, launch the CloudFormation template, restore the instance volumes and configurations from the backup vault, and transfer usage to the secondary Region.

Explanation

Using AWS Backup to create a scheduled daily backup plan for the EC2 instances takes snapshots of the instances and their attached EBS volumes. Configuring the backup task to copy the backups to a vault in the secondary Region maintains backups in a separate Region. In the event of a failure, launching the CloudFormation template deploys the network configuration in the secondary Region, restoring the instance volumes and configurations from the backup vault recovers the EC2 instances and their data, and transferring usage to the secondary Region resumes operations.
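
As a sketch of option C, a backup plan with a cross-Region copy action might look like the following in boto3; the vault names, schedule, and destination ARN are assumptions, and both vaults must already exist. Resources are then assigned to the plan (for example, by tag) with a separate backup selection.

```python
import boto3

backup = boto3.client("backup")

# Daily backup rule with a cross-Region copy action to the secondary vault.
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "ec2-daily-dr",
        "Rules": [
            {
                "RuleName": "daily",
                "TargetBackupVaultName": "primary-vault",
                "ScheduleExpression": "cron(0 5 * * ? *)",  # once per day
                "CopyActions": [
                    {
                        "DestinationBackupVaultArn": (
                            "arn:aws:backup:us-west-2:123456789012:"
                            "backup-vault:secondary-vault"
                        )
                    }
                ],
            }
        ],
    }
)
```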

Question 338

Exam Question

A utility company is building an application that stores data coming from more than 10,000 sensors. Each sensor has a unique ID and will send a data point (approximately 1 KB) every 10 minutes throughout the day. Each data point contains the information coming from the sensor as well as a timestamp. This company would like to query information coming from a particular sensor for the past week very rapidly and would like to delete all data that is older than four weeks.

Using Amazon DynamoDB for its scalability and rapidity, how would you implement this in the most cost-effective way?

A. One table for each week, with a primary key that is the concatenation of the sensor ID and the timestamp

B. One table for each week, with a primary key that is the sensor ID, and a hash key that is the timestamp

C. One table, with a primary key that is the concatenation of the sensor ID and the timestamp

D. One table, with a primary key that is the sensor ID, and a hash key that is the timestamp

Correct Answer

B. One table for each week, with a primary key that is the sensor ID, and a hash key that is the timestamp
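
For illustration, a minimal sketch of the per-week table design, assuming a string sensor ID and a numeric timestamp; the table and attribute names are hypothetical. Keying on the sensor ID with the timestamp as the range key makes "last week for one sensor" a single efficient Query, and deleting an entire table retires four-week-old data without per-item delete costs.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# One table per week (name is hypothetical). Hash key = sensor ID,
# range key = timestamp, so all points for one sensor are stored together
# in time order.
dynamodb.create_table(
    TableName="sensor-data-2024-w20",
    KeySchema=[
        {"AttributeName": "sensor_id", "KeyType": "HASH"},
        {"AttributeName": "ts", "KeyType": "RANGE"},
    ],
    AttributeDefinitions=[
        {"AttributeName": "sensor_id", "AttributeType": "S"},
        {"AttributeName": "ts", "AttributeType": "N"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
```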

Question 339

Exam Question

A company has several applications running in an on-premises data center. The data center runs a mix of Windows and Linux VMs managed by VMware vCenter. A solutions architect needs to create a plan to migrate the applications to AWS. However, the solutions architect discovers that the documentation for the applications is not up to date and that there are no complete infrastructure diagrams. The company's developers lack time to discuss their applications and current usage with the solutions architect.

What should the solutions architect do to gather the required information?

A. Install the AWS Application Discovery Service on each of the VMs to collect the configuration and utilization data.

B. Deploy the AWS Server Migration Service (AWS SMS) connector using the OVA image on the VMware cluster to collect configuration and utilization data from the VMs.

C. Use the AWS Migration Portfolio Assessment (MPA) tool to connect to each of the VMs to collect the configuration and utilization data.

D. Register the on-premises VMs with the AWS Migration Hub to collect configuration and utilization data.

Correct Answer

B. Deploy the AWS Server Migration Service (AWS SMS) connector using the OVA image on the VMware cluster to collect configuration and utilization data from the VMs.

Question 340

Exam Question

Your company runs a complex customer relations management system that consists of around 10 different software components all backed by the same Amazon Relational Database Service (RDS) database. You adopted AWS OpsWorks to simplify management and deployment of that application and created an AWS OpsWorks stack with layers for each of the individual components. An internal security policy requires that all instances should run on the latest Amazon Linux AMI and that instances must be replaced within one month after the latest Amazon Linux AMI has been released. AMI replacements should be done without incurring application downtime or capacity problems. You decide to write a script to be run as soon as a new Amazon Linux AMI is released.

Which solutions support the security policy and meet your requirements? Choose 2 answers

A. Create a new stack and layers with identical configuration, add instances with the latest Amazon Linux AMI specified as a custom AMI to the new layers, switch DNS to the new stack, and tear down the old stack

B. Identify all Amazon Elastic Compute Cloud (EC2) instances of your AWS OpsWorks stack, stop each instance, replace the AMI ID property with the ID of the latest Amazon Linux AMI, and restart the instance. To avoid downtime, make sure not more than one instance is stopped at the same time.

C. Specify the latest Amazon Linux AMI as a custom AMI at the stack level, terminate instances of the stack and let AWS OpsWorks launch new instances with the new AMI.

D. Add new instances with the latest Amazon Linux AMI specified as a custom AMI to all AWS OpsWorks layers of your stack, and terminate the old ones.

E. Assign a custom recipe to each layer which replaces the underlying AMI. Use AWS OpsWorks life-cycle events to incrementally execute this custom recipe and update the instances with the new AMI.

Correct Answer

A. Create a new stack and layers with identical configuration, add instances with the latest Amazon Linux AMI specified as a custom AMI to the new layers, switch DNS to the new stack, and tear down the old stack

D. Add new instances with the latest Amazon Linux AMI specified as a custom AMI to all AWS OpsWorks layers of your stack, and terminate the old ones.
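
As a sketch of answer D, the script could add one replacement instance per layer with the latest Amazon Linux AMI specified as a custom AMI, then retire the old instances once the new ones are online; the stack ID, AMI ID, and instance type below are placeholders.

```python
import boto3

opsworks = boto3.client("opsworks")

STACK_ID = "stack-id-placeholder"      # hypothetical OpsWorks stack ID
LATEST_AMI = "ami-0123456789abcdef0"   # latest Amazon Linux AMI (placeholder)

# Add one replacement instance with the new custom AMI to every layer.
layers = opsworks.describe_layers(StackId=STACK_ID)["Layers"]
for layer in layers:
    instance = opsworks.create_instance(
        StackId=STACK_ID,
        LayerIds=[layer["LayerId"]],
        InstanceType="t2.large",  # placeholder instance type
        Os="Custom",              # required when specifying a custom AMI
        AmiId=LATEST_AMI,
    )
    opsworks.start_instance(InstanceId=instance["InstanceId"])

# Once the new instances pass setup and come online, stop and delete the
# old instances to complete the replacement without downtime.
```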