The latest AWS Certified Solutions Architect – Professional (SAP-C02) practice exam questions and answers (Q&A) are available free to help you prepare for the SAP-C02 exam and earn the AWS Certified Solutions Architect – Professional certification.
Table of Contents
- Question 501
- Exam Question
- Correct Answer
- Question 502
- Exam Question
- Correct Answer
- Question 503
- Exam Question
- Correct Answer
- Question 504
- Exam Question
- Correct Answer
- Question 505
- Exam Question
- Correct Answer
- Question 506
- Exam Question
- Correct Answer
- Question 507
- Exam Question
- Correct Answer
- Question 508
- Exam Question
- Correct Answer
- Question 509
- Exam Question
- Correct Answer
- Question 510
- Exam Question
- Correct Answer
Question 501
Exam Question
A Wall Street-based trading firm uses the AWS Cloud for its IT infrastructure. The firm runs several trading-risk simulation applications that use complex algorithms to simulate diverse scenarios and evaluate the financial health of its customers. The firm stores customers’ financial records on Amazon S3. The engineering team needs to implement an archival solution based on Amazon S3 Glacier to enforce regulatory and compliance controls on data access.
As a Solutions Architect Professional, which of the following solutions would you recommend?
A. Use S3 Glacier to store the sensitive archived data and then use an S3 Access Control List to enforce compliance controls.
B. Use S3 Glacier to store the sensitive archived data and then use an S3 lifecycle policy to enforce compliance controls.
C. Use S3 Glacier vault to store the sensitive archived data and then use an S3 Access Control List to enforce compliance controls.
D. Use S3 Glacier vault to store the sensitive archived data and then use a vault lock policy to enforce compliance controls.
Correct Answer
D. Use S3 Glacier vault to store the sensitive archived data and then use a vault lock policy to enforce compliance controls.
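A Vault Lock policy can deny archive deletion until records reach the mandated retention age; once the lock is completed, the policy itself becomes immutable, which is what makes it a compliance control rather than an ordinary access policy. A minimal sketch, assuming a hypothetical vault name, account ID, and a 5-year (1825-day) retention:

```python
import json

# Hypothetical vault ARN for illustration.
VAULT_ARN = "arn:aws:glacier:us-east-1:111122223333:vaults/compliance-vault"

# Deny deletion of any archive younger than 1825 days. After the lock is
# completed, this policy can no longer be changed or removed.
vault_lock_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "deny-deletes-for-5-years",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "glacier:DeleteArchive",
            "Resource": VAULT_ARN,
            "Condition": {
                "NumericLessThan": {"glacier:ArchiveAgeInDays": "1825"}
            },
        }
    ],
}

# The policy would be attached and then locked with boto3, e.g.:
#   glacier = boto3.client("glacier")
#   glacier.initiate_vault_lock(vaultName="compliance-vault",
#                               policy={"Policy": json.dumps(vault_lock_policy)})
#   glacier.complete_vault_lock(vaultName="compliance-vault", lockId=...)
print(json.dumps(vault_lock_policy, indent=2))
```

An S3 ACL (options A and C) only controls who can access objects; it cannot make a retention rule irrevocable the way a completed vault lock does.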
Question 502
Exam Question
A company has several teams, and each team has its own Amazon RDS database; the databases total 100 TB. The company is building a data query platform for Business Intelligence analysts to generate a weekly business report. The new system must run ad-hoc SQL queries.
What is the MOST cost-effective solution?
A. Create a new Amazon Redshift cluster. Create an AWS Glue ETL job to copy data from the RDS databases to the Amazon Redshift cluster. Use Amazon Redshift to run the query.
B. Create an Amazon EMR cluster with enough core nodes. Run an Apache Spark job to copy data from the RDS databases to a Hadoop Distributed File System (HDFS). Use a local Apache Hive metastore to maintain the table definition. Use Spark SQL to run the query.
C. Use an AWS Glue ETL job to copy all the RDS databases to a single Amazon Aurora PostgreSQL database. Run SQL queries on the Aurora PostgreSQL database.
D. Use an AWS Glue crawler to crawl all the databases and create tables in the AWS Glue Data Catalog. Use an AWS Glue ETL job to load data from the RDS databases to Amazon S3, and use Amazon Athena to run the queries.
Correct Answer
D. Use an AWS Glue crawler to crawl all the databases and create tables in the AWS Glue Data Catalog. Use an AWS Glue ETL job to load data from the RDS databases to Amazon S3, and use Amazon Athena to run the queries.
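With the tables registered in the Glue Data Catalog and the data landed in S3, the weekly report becomes a pay-per-query Athena call with no cluster to keep running. A minimal sketch of the parameters for Athena's StartQueryExecution call, where the database name, query, and results bucket are all illustrative assumptions:

```python
# Hypothetical names: a Glue crawler would have registered this database
# in the Data Catalog; the S3 location holds Athena's query results.
DATABASE = "bi_reports"                                   # assumption
OUTPUT_LOCATION = "s3://example-athena-results/weekly/"   # assumption

def build_athena_query(sql: str) -> dict:
    """Build keyword arguments for Athena's StartQueryExecution API."""
    return {
        "QueryString": sql,
        "QueryExecutionContext": {"Database": DATABASE},
        "ResultConfiguration": {"OutputLocation": OUTPUT_LOCATION},
    }

params = build_athena_query(
    "SELECT team, SUM(revenue) FROM sales GROUP BY team"
)
# The actual call would be:
#   boto3.client("athena").start_query_execution(**params)
```

Copying 100 TB into a dedicated Redshift, EMR, or Aurora deployment (options A, B, C) means paying for always-on capacity that a weekly report does not need.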
Question 503
Exam Question
The DevOps team at a leading social media company uses Chef to automate the configurations of servers in the on-premises data center. The CTO at the company now wants to migrate the IT infrastructure to AWS Cloud with minimal changes to the server configuration workflows and at the same time account for less operational overhead post-migration to AWS. The company has hired you as an AWS Certified Solutions Architect Professional to recommend a solution for this migration.
Which of the following solutions would you recommend to address the given use-case?
A. Replatform the IT infrastructure to AWS Cloud by leveraging AWS OpsWorks as a configuration management service to automate the configurations of servers on AWS.
B. Replatform the IT infrastructure to AWS Cloud by leveraging AWS Config as a configuration management service to automate the configurations of servers on AWS.
C. Rehost the IT infrastructure to AWS Cloud by leveraging AWS Elastic Beanstalk as a configuration management service to automate the configurations of servers on AWS.
D. Rehost the IT infrastructure to AWS Cloud by leveraging AWS OpsWorks as a configuration management service to automate the configurations of servers on AWS.
Correct Answer
A. Replatform the IT infrastructure to AWS Cloud by leveraging AWS OpsWorks as a configuration management service to automate the configurations of servers on AWS.
Question 504
Exam Question
A company has an application that uses Amazon EC2 instances in an Auto Scaling group. The Quality Assurance (QA) department needs to launch a large number of short-lived environments to test the application. The application environments are currently launched by the Manager of the
department using an AWS CloudFormation template. To launch the stack, the Manager uses a role with permission to use CloudFormation, EC2, and Auto Scaling APIs. The Manager wants to allow testers to launch their own environments, but does not want to grant broad permissions to each user.
Which set up would achieve these goals?
A. Upload the AWS CloudFormation template to Amazon S3. Give users in the QA department
permission to assume the Manager’s role and add a policy that restricts the permissions to the
template and the resources it creates. Train users to launch the template from the CloudFormation console.
B. Create an AWS Service Catalog product from the environment template. Add a launch constraint to the product with the existing role. Give users in the QA department permission to use AWS Service Catalog APIs only. Train users to launch the template from the AWS Service Catalog console.
C. Upload the AWS CloudFormation template to Amazon S3. Give users in the QA department
permission to use CloudFormation and S3 APIs, with conditions that restrict the permissions to the template and the resources it creates. Train users to launch the template from the CloudFormation console.
D. Create an AWS Elastic Beanstalk application from the environment template. Give users in the QA department permission to use Elastic Beanstalk permissions only. Train users to launch Elastic Beanstalk CLI, passing the existing role to the environment as a service role.
Correct Answer
B. Create an AWS Service Catalog product from the environment template. Add a launch constraint to the product with the existing role. Give users in the QA department permission to use AWS Service Catalog APIs only. Train users to launch the template from the AWS Service Catalog console.
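A LAUNCH constraint makes Service Catalog assume the Manager's existing role when provisioning, so testers only need permission to call Service Catalog APIs, not CloudFormation, EC2, or Auto Scaling. A sketch of the parameters for the CreateConstraint API, with hypothetical portfolio, product, and role identifiers:

```python
import json

# Hypothetical IDs/ARNs for illustration.
PORTFOLIO_ID = "port-examplep0rtf0l10"
PRODUCT_ID = "prod-examplepr0duct1d"
LAUNCH_ROLE_ARN = "arn:aws:iam::111122223333:role/QAEnvLaunchRole"

def build_launch_constraint() -> dict:
    """Keyword arguments for servicecatalog.create_constraint.

    Type LAUNCH tells Service Catalog to provision the product using the
    named role instead of the end user's own permissions.
    """
    return {
        "PortfolioId": PORTFOLIO_ID,
        "ProductId": PRODUCT_ID,
        "Type": "LAUNCH",
        "Parameters": json.dumps({"RoleArn": LAUNCH_ROLE_ARN}),
    }

constraint = build_launch_constraint()
# boto3.client("servicecatalog").create_constraint(**constraint)
```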
Question 505
Exam Question
A leading internet television network company uses AWS Cloud for analytics, recommendation engines and video transcoding. To monitor and optimize this network, the engineering team at the company has developed a solution for ingesting, augmenting, and analyzing the multiple terabytes of data its network generates daily in the form of virtual private cloud (VPC) flow logs. This would enable the company to identify performance-improvement opportunities such as identifying apps that are communicating across regions and collocating them. The VPC flow logs data is funneled into Kinesis Data Streams, which in turn acts as the source of a delivery stream for Kinesis Firehose. The engineering team has now configured a Kinesis Agent to send the VPC flow logs data from another set of network devices to the same Firehose delivery stream. They noticed that data is not reaching Firehose as expected.
As a Solutions Architect Professional, which of the following options would you identify as the MOST plausible root cause behind this issue?
A. Kinesis Agent can only write to Kinesis Data Streams, not to Kinesis Firehose.
B. Kinesis Firehose delivery stream has reached its limit and needs to be scaled manually.
C. The data sent by Kinesis Agent is lost because of a configuration error.
D. Kinesis Agent cannot write to a Kinesis Firehose for which the delivery stream source is already set as Kinesis Data Streams.
Correct Answer
D. Kinesis Agent cannot write to a Kinesis Firehose for which the delivery stream source is already set as Kinesis Data Streams.
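Because the delivery stream's source is already the Kinesis Data Stream, the agent has to write to the *stream* and let it feed Firehose. A sketch of the relevant flow entry in the Kinesis Agent's agent.json, modelled here as a Python dict so it can be validated; the stream name and log path are illustrative assumptions:

```python
import json

# This dict mirrors the shape of /etc/aws-kinesis/agent.json.
agent_config = {
    "cloudwatch.emitMetrics": True,
    "flows": [
        {
            "filePattern": "/var/log/vpc-flow-logs/*.log",  # assumption
            # "kinesisStream" targets Kinesis Data Streams. Using a
            # "deliveryStream" key here (targeting Firehose directly) is
            # what caused the missing data in this scenario.
            "kinesisStream": "vpc-flow-logs-stream",        # assumption
        }
    ],
}

print(json.dumps(agent_config, indent=2))
```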
Question 506
Exam Question
A company wants to provide a desktop as a service (DaaS) to a number of employees using Amazon WorkSpaces. WorkSpaces will need to access files and services hosted on premises with authorization based on the company’s Active Directory. Network connectivity will be provided through an existing AWS Direct Connect connection. The solution has the following requirements:
- Credentials from Active Directory should be used to access on-premises files and services.
- Credentials from Active Directory should not be stored outside the company.
- End users should have single sign-on (SSO) to on-premises files and services once connected to WorkSpaces.
Which strategy should the solutions architect use for end user authentication?
A. Create an AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) directory within the WorkSpaces VPC. Use the Active Directory Migration Tool (ADMT) with the Password Export Server to copy users from the on-premises Active Directory to AWS Managed Microsoft AD. Set up a one-way trust allowing users from AWS Managed Microsoft AD to access resources in the on-premises Active Directory. Use AWS Managed Microsoft AD as the directory for WorkSpaces.
B. Create a service account in the on-premises Active Directory with the required permissions. Create an AD Connector in AWS Directory Service to be deployed on premises using the service account to communicate with the on-premises Active Directory. Ensure the required TCP ports are open from the WorkSpaces VPC to the on-premises AD Connector. Use the AD Connector as the directory for WorkSpaces.
C. Create a service account in the on-premises Active Directory with the required permissions. Create an AD Connector in AWS Directory Service within the WorkSpaces VPC using the service account to communicate with the on-premises Active Directory. Use the AD Connector as the directory for WorkSpaces.
D. Create an AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) directory in the AWS Directory Service within the WorkSpaces VPC. Set up a one-way trust allowing users from the on-premises Active Directory to access resources in the AWS Managed Microsoft AD. Use AWS Managed Microsoft AD as the directory for WorkSpaces. Create an identity provider with AWS Identity and Access Management (IAM) from an on-premises ADFS server. Allow users from this identity provider to assume a role with a policy allowing them to run WorkSpaces.
Correct Answer
C. Create a service account in the on-premises Active Directory with the required permissions. Create an AD Connector in AWS Directory Service within the WorkSpaces VPC using the service account to communicate with the on-premises Active Directory. Use the AD Connector as the directory for WorkSpaces.
Question 507
Exam Question
An e-commerce company has hired an AWS Certified Solutions Architect Professional to transform a standard three-tier web application architecture in AWS. Currently, the web and application tiers run on EC2 instances and the database tier runs on RDS MySQL. The company wants to redesign the web and application tiers to use API Gateway with Lambda Functions, with the final goal of deploying the new application within 6 months. As an immediate short-term task, the Engineering Manager has mandated the Solutions Architect to reduce costs for the existing stack.
Which of the following options should the Solutions Architect recommend as the MOST cost-effective and reliable solution?
A. Provision On-Demand Instances for the web and application tiers and Reserved Instances for the database tier.
B. Provision Reserved Instances for the web and application tiers and On-Demand Instances for the database tier.
C. Provision Spot Instances for the web and application tiers and Reserved Instances for the database tier.
D. Provision Reserved Instances for the web, application and database tiers.
Correct Answer
A. Provision On-Demand Instances for the web and application tiers and Reserved Instances for the database tier.
Question 508
Exam Question
A company recently completed a large-scale migration to AWS. Development teams that support various business units have their own accounts in AWS Organizations. A central cloud team is responsible for controlling which services and resources can be accessed, and for creating operational strategies for all teams within the company. Some teams are approaching their account service quotas. The cloud team needs to create an automated and operationally efficient solution to proactively monitor service quotas. Monitoring should occur every 15 minutes and send alerts when a team exceeds 80% utilization.
Which solution will meet these requirements?
A. Create a scheduled AWS Config rule to trigger an AWS Lambda function to call the GetServiceQuota API. If any service utilization is above 80%, publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to alert the cloud team. Create an AWS CloudFormation template and deploy the necessary resources to each account.
B. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that triggers an AWS Lambda function to refresh the AWS Trusted Advisor service limits checks and retrieve the most current utilization and service limit data. If the current utilization is above 80%, publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to alert the cloud team. Create AWS CloudFormation StackSets that deploy the necessary resources to all Organizations accounts.
C. Create an Amazon CloudWatch alarm that triggers an AWS Lambda function to call the Amazon CloudWatch GetInsightRuleReport API to retrieve the most current utilization and service limit data. If the current utilization is above 80%, publish an Amazon Simple Email Service (Amazon SES) notification to alert the cloud team. Create AWS CloudFormation StackSets that deploy the necessary resources to all Organizations accounts.
D. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that triggers an AWS Lambda function to refresh the AWS Trusted Advisor service limits checks and retrieve the most current utilization and service limit data. If the current utilization is above 80%, use Amazon Pinpoint to send an alert to the cloud team. Create an AWS CloudFormation template and deploy the necessary resources to each account.
Correct Answer
B. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that triggers an AWS Lambda function to refresh the AWS Trusted Advisor service limits checks and retrieve the most current utilization and service limit data. If the current utilization is above 80%, publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to alert the cloud team. Create AWS CloudFormation StackSets that deploy the necessary resources to all Organizations accounts.
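The scheduled Lambda would refresh the Trusted Advisor "Service Limits" check via the Support API, then publish any breach to SNS. The threshold logic itself is plain Python and can be sketched independently of AWS; the tuple layout below is an assumption loosely modelled on the check's flaggedResources metadata:

```python
THRESHOLD = 0.80  # alert when utilization exceeds 80% of the quota

def over_threshold(resources):
    """Return resources whose usage/limit ratio exceeds the threshold.

    Each resource is (service, limit_name, limit, usage) -- an assumed
    shape; verify against the Trusted Advisor check result in your account.
    """
    breaches = []
    for service, limit_name, limit, usage in resources:
        if limit and usage / limit > THRESHOLD:
            breaches.append((service, limit_name, usage, limit))
    return breaches

sample = [
    ("EC2", "Running On-Demand instances", 20, 17),  # 85% -> alert
    ("VPC", "VPCs per Region", 5, 3),                # 60% -> ok
]
alerts = over_threshold(sample)

# In the Lambda, the surrounding calls would be roughly:
#   support = boto3.client("support")
#   support.refresh_trusted_advisor_check(checkId=...)
#   result = support.describe_trusted_advisor_check_result(checkId=...)
#   for breach in over_threshold(...):
#       sns.publish(TopicArn=..., Message=str(breach))
```

Deploying this via StackSets (rather than per-account templates, as in options A and D) is what keeps the rollout operationally efficient across all Organizations accounts.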
Question 509
Exam Question
A data analytics company needs to set up a data lake on Amazon S3 for a financial services client. The data lake is split in raw and curated zones. For compliance reasons, the source data needs to be kept for a minimum of 5 years. The source data arrives in the raw zone and is then processed via an AWS Glue based ETL job into the curated zone. The business analysts run ad-hoc queries only on the data in the curated zone using Athena. The team is concerned about the cost of data storage in both the raw and curated zones as the data is increasing at a rate of 2 TB daily in each zone.
Which of the following options would you implement together as the MOST cost-optimal solution? (Select two)
A. Use Glue ETL job to write the transformed data in the curated zone using CSV format.
B. Use Glue ETL job to write the transformed data in the curated zone using a compressed file format.
C. Create a Lambda function based job to delete the raw zone data after 1 day.
D. Set up a lifecycle policy to transition the curated zone data into Glacier Deep Archive after 1 day of object creation.
E. Set up a lifecycle policy to transition the raw zone data into Glacier Deep Archive after 1 day of object creation.
Correct Answer
B. Use Glue ETL job to write the transformed data in the curated zone using a compressed file format.
E. Set up a lifecycle policy to transition the raw zone data into Glacier Deep Archive after 1 day of object creation. (The curated zone is queried ad-hoc with Athena, which cannot read objects in Glacier Deep Archive; the raw zone must be retained for 5 years but is only read once by the ETL job, so it is the zone to archive.)
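Because the raw zone must be retained for 5 years but is only read once by the ETL job, it can move to Glacier Deep Archive almost immediately, while the Athena-queried curated zone stays in S3 Standard. A sketch of the lifecycle configuration, with an illustrative bucket prefix:

```python
import json

# Transition raw-zone objects to Deep Archive one day after creation.
# The "raw/" prefix and bucket name are illustrative assumptions.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "raw-zone-to-deep-archive",
            "Status": "Enabled",
            "Filter": {"Prefix": "raw/"},
            "Transitions": [
                {"Days": 1, "StorageClass": "DEEP_ARCHIVE"}
            ],
        }
    ]
}

# Applied with:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="example-data-lake",
#       LifecycleConfiguration=lifecycle_configuration)
print(json.dumps(lifecycle_configuration, indent=2))
```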
Question 510
Exam Question
A car rental company has built a serverless REST API to provide data to its mobile app. The app consists of an Amazon API Gateway API with a Regional endpoint, AWS Lambda functions, and an Amazon Aurora MySQL Serverless DB cluster. The company recently opened the API to mobile apps of partners. A significant increase in the number of requests resulted, causing sporadic database memory errors. Analysis of the API traffic indicates that clients are making multiple HTTP GET requests for the same queries in a short period of time. Traffic is concentrated during business hours, with spikes around holidays and other events. The company needs to improve its ability to support the additional usage while minimizing the increase in costs associated with the solution.
Which strategy meets these requirements?
A. Convert the API Gateway Regional endpoint to an edge-optimized endpoint. Enable caching in the production stage.
B. Implement an Amazon ElastiCache for Redis cache to store the results of the database calls. Modify the Lambda functions to use the cache.
C. Modify the Aurora Serverless DB cluster configuration to increase the maximum amount of available memory.
D. Enable throttling in the API Gateway production stage. Set the rate and burst values to limit the incoming calls.
Correct Answer
B. Implement an Amazon ElastiCache for Redis cache to store the results of the database calls. Modify the Lambda functions to use the cache.
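In the Lambda functions, the usual pattern is cache-aside: check Redis first and only fall through to Aurora on a miss, which absorbs the repeated GET queries during spikes. A runnable sketch of the pattern, with a plain dict standing in for the ElastiCache Redis client (in the real function this would be a redis.Redis client pointed at the cluster endpoint) and simplified TTL handling:

```python
import time

_cache = {}          # stand-in for ElastiCache for Redis
TTL_SECONDS = 300    # how long a cached query result stays valid

def get_with_cache(query: str, run_query) -> dict:
    """Return a cached result for `query`, computing and caching on a miss."""
    entry = _cache.get(query)
    now = time.time()
    if entry and now - entry["at"] < TTL_SECONDS:
        return entry["value"]          # cache hit: no database call
    value = run_query(query)           # cache miss: query Aurora once
    _cache[query] = {"value": value, "at": now}
    return value

# Demonstrate that repeated identical queries hit the database only once.
calls = []
def fake_db(q):
    calls.append(q)
    return {"rows": [1, 2, 3]}

get_with_cache("SELECT * FROM cars", fake_db)
get_with_cache("SELECT * FROM cars", fake_db)   # served from cache
```

API Gateway stage caching (option A) could also help, but caching at the data layer lets all Lambda functions share results and directly relieves the Aurora memory pressure described.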