The latest AWS Certified Solutions Architect – Professional SAP-C02 certification practice exam questions and answers (Q&A) are available free to help you pass the AWS Certified Solutions Architect – Professional SAP-C02 exam and earn the AWS Certified Solutions Architect – Professional SAP-C02 certification.
Table of Contents
- Question 341
- Exam Question
- Correct Answer
- Question 342
- Exam Question
- Correct Answer
- Question 343
- Exam Question
- Correct Answer
- Question 344
- Exam Question
- Correct Answer
- Question 345
- Exam Question
- Correct Answer
- Question 346
- Exam Question
- Correct Answer
- Question 347
- Exam Question
- Correct Answer
- Question 348
- Exam Question
- Correct Answer
- Question 349
- Exam Question
- Correct Answer
- Question 350
- Exam Question
- Correct Answer
Question 341
Exam Question
A Solutions Architect is designing the storage layer for a data warehousing application. The data files are large, but they have statically placed metadata at the beginning of each file that describes the size and placement of the file’s index. The data files are read in by a fleet of Amazon EC2 instances that store the index size, index location, and other category information about the data file in a database. That database is used by Amazon EMR to group files together for deeper analysis.
What would be the MOST cost-effective, high-availability storage solution for this workflow?
A. Store the data files in Amazon S3 and use Range GET for each file’s metadata, then index the relevant data.
B. Store the data files in Amazon EFS mounted by the EC2 fleet and EMR nodes.
C. Store the data files on Amazon EBS volumes and allow the EC2 fleet and EMR to mount and unmount the volumes where they are needed.
D. Store the content of the data files in Amazon DynamoDB tables with the metadata, index, and data as their own keys.
Correct Answer
A. Store the data files in Amazon S3 and use Range GET for each file’s metadata, then index the relevant data.
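Answer A works because S3 supports ranged reads: the EC2 fleet can fetch only the fixed-size metadata header at the start of each object instead of downloading the entire file. Below is a minimal boto3 sketch of a Range GET; the bucket name, key, and header size are hypothetical placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Fetch only the first 1 KiB of the object, where the metadata header lives.
# Bucket, key, and header size are hypothetical placeholders.
response = s3.get_object(
    Bucket="example-data-warehouse",
    Key="datafiles/file-0001.dat",
    Range="bytes=0-1023",
)
header = response["Body"].read()
# Parse the index size and location from `header`, then record them in the
# database that Amazon EMR uses to group files for deeper analysis.
```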
Question 342
Exam Question
Your customer needs to create an application to allow contractors to upload videos to Amazon Simple Storage Service (S3) so they can be transcoded into a different format. She creates AWS Identity and Access Management (IAM) users for her application developers, and in just one week, they have the application hosted on a fleet of Amazon Elastic Compute Cloud (EC2) instances. An IAM role is created and attached to the instances. As expected, a contractor who authenticates to the application is given a pre-signed URL that points to the location for video upload. However, contractors are reporting that they cannot upload their videos.
Which of the following are valid reasons for this behavior? Choose 2 answers
A. The IAM role does not explicitly grant permission to upload the object
B. The contractors’ accounts have not been granted “write” access to the S3 bucket.
C. The application is not using valid security credentials to generate the pre-signed URL.
D. The developers do not have access to upload objects to the S3 bucket
E. The S3 bucket still has the associated default permissions
F. The pre-signed URL has expired.
Correct Answer
C. The application is not using valid security credentials to generate the pre-signed URL.
F. The pre-signed URL has expired.
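Both correct answers come down to the signing side: a pre-signed URL is only as valid as the credentials that signed it, and it stops working once its expiration passes. A minimal boto3 sketch follows; the bucket, key, and expiration are hypothetical.

```python
import boto3

# The client must hold valid credentials (e.g., from the instance's IAM role);
# a URL signed with invalid credentials fails regardless of its expiration.
s3 = boto3.client("s3")

upload_url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "contractor-videos", "Key": "uploads/clip-0001.mp4"},
    ExpiresIn=3600,  # seconds; after this the contractor's upload is rejected
)
```

Note that a URL signed with temporary credentials (such as those from an instance role) also becomes invalid once those temporary credentials expire, even if ExpiresIn has not yet elapsed.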
Question 343
Exam Question
A company uses an Amazon EMR cluster to process data once a day. The raw data comes from Amazon S3, and the resulting processed data is also stored in Amazon S3. The processing must complete within 4 hours; currently, it only takes 3 hours. However, the processing time is taking 5 to 10 minutes longer each week due to an increasing volume of raw data. The team is also concerned about rising costs as the compute capacity increases. The EMR cluster is currently running on three m3.xlarge instances (one master and two core nodes).
Which of the following solutions will reduce costs related to the increasing compute needs?
A. Add additional task nodes, but have the team purchase an all-upfront convertible Reserved Instance for each additional node to offset the costs.
B. Add additional task nodes, but use instance fleets with the master node in On-Demand mode and a mix of On-Demand and Spot Instances for the core and task nodes. Purchase a Scheduled Reserved Instance for the master node.
C. Add additional task nodes, but use instance fleets with the master node in Spot mode and a mix of On-Demand and Spot Instances for the core and task nodes. Purchase enough scheduled Reserved Instances to offset the cost of running any On-Demand instances.
D. Add additional task nodes, but use instance fleets with the master node in On-Demand mode and a mix of On-Demand and Spot Instances for the core and task nodes. Purchase a standard all-upfront Reserved Instance for the master node.
Correct Answer
D. Add additional task nodes, but use instance fleets with the master node in On-Demand mode and a mix of On-Demand and Spot Instances for the core and task nodes. Purchase a standard all-upfront Reserved Instance for the master node.
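The pattern in answer D (a stable On-Demand master, with core and task capacity mixed between On-Demand and Spot) maps directly onto EMR instance fleets. A minimal boto3 sketch is below; the cluster name, release label, instance types, and capacities are all hypothetical.

```python
import boto3

emr = boto3.client("emr")

# Hypothetical cluster illustrating answer D: On-Demand master,
# mixed On-Demand/Spot core fleet, and an all-Spot task fleet.
response = emr.run_job_flow(
    Name="daily-processing",
    ReleaseLabel="emr-5.36.0",
    Instances={
        "InstanceFleets": [
            {
                "InstanceFleetType": "MASTER",
                "TargetOnDemandCapacity": 1,
                "InstanceTypeConfigs": [{"InstanceType": "m5.xlarge"}],
            },
            {
                "InstanceFleetType": "CORE",
                "TargetOnDemandCapacity": 1,
                "TargetSpotCapacity": 1,
                "InstanceTypeConfigs": [{"InstanceType": "m5.xlarge"}],
            },
            {
                "InstanceFleetType": "TASK",
                "TargetSpotCapacity": 4,
                "InstanceTypeConfigs": [{"InstanceType": "m5.xlarge"}],
            },
        ],
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```

A Reserved Instance purchase matching the master node's instance type then offsets the cost of the one component that must run On-Demand.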
Question 344
Exam Question
When deploying a highly available 2-tier web application on AWS, which combination of AWS services meets the requirements?
1. AWS Direct Connect
2. Amazon Route 53
3. AWS Storage Gateway
4. Elastic Load Balancing
5. Amazon EC2
6. Auto Scaling
7. Amazon VPC
8. AWS CloudTrail
A. 2, 4, 5, and 6
B. 3, 4, 5, and 8
C. 1, 2, 5, and 6
D. 1 through 8
E. 1, 3, 5, and 7
Correct Answer
A. 2, 4, 5, and 6 (Amazon Route 53, Elastic Load Balancing, Amazon EC2, and Auto Scaling)
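Together, those four services provide DNS routing, traffic distribution, compute, and self-healing capacity across Availability Zones. As an illustrative sketch only, the boto3 call below creates an Auto Scaling group spanning two subnets and registers it with a load balancer target group; every name, subnet ID, and ARN is a hypothetical placeholder, and a launch template and target group are assumed to already exist.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Spread the web tier across two Availability Zones behind the load balancer.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier-asg",
    LaunchTemplate={"LaunchTemplateName": "web-tier-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "targetgroup/web-tier/0123456789abcdef"
    ],
    HealthCheckType="ELB",  # replace instances the load balancer marks unhealthy
)
```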
Question 345
Exam Question
A company is building an AWS landing zone and has asked a Solutions Architect to design a multi-account access strategy that will allow hundreds of users to use corporate credentials to access the AWS Console. The company is running a Microsoft Active Directory and users will use an AWS Direct Connect connection to connect to AWS. The company also wants to be able to federate to third-party services and providers, including custom applications.
Which solution meets the requirements by using the LEAST amount of management overhead?
A. Connect the Active Directory to AWS by using single sign-on and Active Directory Federation Services (AD FS) with SAML 2.0, and then configure the identity provider (IdP) system to use form-based authentication. Build the AD FS portal page with corporate branding, and integrate third-party applications that support SAML 2.0 as required.
B. Create a two-way Forest trust relationship between the on-premises Active Directory and the AWS Directory Service. Set up AWS Single Sign-On with AWS Organizations. Use single sign-on integrations for connections with third-party applications.
C. Configure single sign-on by connecting the on-premises Active Directory using the AWS Directory Service AD Connector. Enable federation to the AWS services and accounts by using the IAM applications and services linking function. Leverage third-party single sign-on as needed.
D. Connect the company’s Active Directory to AWS by using AD FS and SAML 2.0. Configure the AD FS claim rule to leverage Regex third-party single sign-on as needed, and add it to the AD FS server.
Correct Answer
A. Connect the Active Directory to AWS by using single sign-on and Active Directory Federation Services (AD FS) with SAML 2.0, and then configure the identity provider (IdP) system to use form-based authentication. Build the AD FS portal page with corporate branding, and integrate third-party applications that support SAML 2.0 as required.
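With AD FS acting as the SAML 2.0 IdP, federated console and API access boils down to exchanging the IdP's SAML assertion for temporary AWS credentials. A minimal boto3 sketch follows; the account ID, role, and provider names are hypothetical, and the assertion placeholder stands in for the base64-encoded response returned by the AD FS sign-in page.

```python
import boto3

sts = boto3.client("sts")

# Placeholder: the base64-encoded SAML response returned by the AD FS
# sign-in page after the user authenticates with corporate credentials.
saml_assertion = "<base64-encoded SAML response from AD FS>"

response = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::111122223333:role/ADFS-Developers",     # hypothetical
    PrincipalArn="arn:aws:iam::111122223333:saml-provider/ADFS",  # hypothetical
    SAMLAssertion=saml_assertion,
)
credentials = response["Credentials"]  # temporary keys for console/API access
```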
Question 346
Exam Question
You are trying to integrate two subsystems (front end and back end) with an HTTP interface into one large system. These subsystems don't store any state inside; all state is stored in an Amazon DynamoDB table. You have launched each of the two subsystems from a separate AMI. Black-box testing has shown that these servers stop running when they receive malformed requests that do not meet HTTP specifications from the client. Your developers have discovered and fixed this issue, and you must deploy the fix to the two subsystems as soon as possible without service disruption.
What are the most effective options to deploy the fixes? Choose 3 answers
A. Use VPC.
B. Use AWS OpsWorks auto healing for both the front-end and back-end instance pairs.
C. Use Elastic Load Balancing in front of the front-end subsystem and Auto Scaling to keep the specified number of instances.
D. Use Elastic Load Balancing in front of the back-end subsystem and Auto Scaling to keep the specified number of instances.
E. Use Amazon CloudFront, which accesses the front-end server on origin fetch.
F. Use Amazon Simple Queue Service (SQS) between the front-end and back-end subsystems.
Correct Answer
B. Use AWS OpsWorks auto healing for both the front-end and back-end instance pairs.
C. Use Elastic Load Balancing in front of the front-end subsystem and Auto Scaling to keep the specified number of instances.
D. Use Elastic Load Balancing in front of the back-end subsystem and Auto Scaling to keep the specified number of instances.
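With an ELB in front of each subsystem and Auto Scaling maintaining capacity, the fixed AMI can be rolled out gradually while healthy instances keep serving traffic. One modern way to express that rollout is an Auto Scaling instance refresh, sketched below with boto3; the group name and health threshold are hypothetical, and it assumes the group's launch template already points at the fixed AMI.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Replace instances in batches, keeping at least 90% of capacity in service
# so the fix rolls out without disrupting traffic behind the load balancer.
autoscaling.start_instance_refresh(
    AutoScalingGroupName="front-end-asg",
    Strategy="Rolling",
    Preferences={"MinHealthyPercentage": 90},
)
```

Run the same refresh against the back-end group to complete the deployment of both subsystems.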
Question 347
Exam Question
A Solutions Architect is designing a network solution for a company that has applications running in a data center in Northern Virginia. The applications in the company’s data center require predictable performance to applications running in a virtual private cloud (VPC) located in us-east-1, and a secondary VPC in us-west-2 within the same account. The company data center is colocated in an AWS Direct Connect facility that serves the us-east-1 region. The company has already ordered an AWS Direct Connect connection, and a cross-connect has been established.
Which solution will meet the requirements at the LOWEST cost?
A. Provision a Direct Connect gateway and attach the virtual private gateway (VGW) for the VPC in us-east-1 and the VGW for the VPC in us-west-2. Create a private VIF on the Direct Connect connection and associate it to the Direct Connect gateway.
B. Create private VIFs on the Direct Connect connection for each of the company’s VPCs in the us-east-1 and us-west-2 regions. Configure the company’s data center router to connect directly with the VPCs in those regions via the private VIFs.
C. Deploy a transit VPC solution using Amazon EC2-based router instances in the us-east-1 region. Establish IPsec VPN tunnels between the transit routers and virtual private gateways (VGWs) located in the us-east-1 and us-west-2 regions, which are attached to the company’s VPCs in those regions. Create a public VIF on the Direct Connect connection and establish IPsec VPN tunnels over the public VIF between the transit routers and the company’s data center router.
D. Order a second Direct Connect connection to a Direct Connect facility with connectivity to the us-west-2 region. Work with a partner to establish a network extension link over dark fiber from the Direct Connect facility to the company’s data center. Establish private VIFs on the Direct Connect connections for each of the company’s VPCs in the respective regions. Configure the company’s data center router to connect directly with the VPCs in those regions via the private VIFs.
Correct Answer
A. Provision a Direct Connect gateway and attach the virtual private gateway (VGW) for the VPC in us-east-1 and the VGW for the VPC in us-west-2. Create a private VIF on the Direct Connect connection and associate it to the Direct Connect gateway.
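A Direct Connect gateway lets the single existing connection reach VGWs in both regions over one private VIF, avoiding the cost of a second circuit or a transit VPC. A minimal boto3 sketch follows; the gateway name, ASN, and VGW IDs are hypothetical placeholders.

```python
import boto3

dx = boto3.client("directconnect")

# Create the Direct Connect gateway (name and private ASN are hypothetical).
gateway = dx.create_direct_connect_gateway(
    directConnectGatewayName="corp-dx-gateway",
    amazonSideAsn=64512,
)["directConnectGateway"]

# Associate the virtual private gateways from both regions (hypothetical IDs).
for vgw_id in ["vgw-useast1aaaa", "vgw-uswest2bbbb"]:
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId=gateway["directConnectGatewayId"],
        virtualGatewayId=vgw_id,
    )
```

The private VIF is then associated with the gateway rather than with a single VGW, which is what makes the cross-region reach possible over one connection.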
Question 348
Exam Question
Your social media monitoring application uses a Python app running on AWS Elastic Beanstalk to inject tweets, Facebook updates and RSS feeds into an Amazon Kinesis stream. A second AWS Elastic Beanstalk app generates key performance indicators into an Amazon DynamoDB table and powers a dashboard application.
What is the most efficient option to prevent any data loss for this application?
A. Add a second Amazon Kinesis stream in another Availability Zone and use AWS Data Pipeline to replicate data across Kinesis streams.
B. Add a third AWS Elastic Beanstalk app that uses the Amazon Kinesis S3 connector to archive data from Amazon Kinesis into Amazon S3.
C. Use AWS Data Pipeline to replicate your DynamoDB tables into another region.
D. Use the second AWS Elastic Beanstalk app to store a backup of Kinesis data onto Amazon Elastic Block Store (EBS), and then create snapshots from your Amazon EBS volumes.
Correct Answer
B. Add a third AWS Elastic Beanstalk app that uses the Amazon Kinesis S3 connector to archive data from Amazon Kinesis into Amazon S3.
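Archiving the raw stream to durable S3 storage means any records lost downstream can be replayed. The production-grade path is the Kinesis Connector Library (or Amazon Kinesis Firehose), but the boto3 sketch below shows the core read-and-archive loop for a single shard; the stream name, bucket, and key are hypothetical, and real code would iterate over all shards and checkpoint its progress.

```python
import boto3

kinesis = boto3.client("kinesis")
s3 = boto3.client("s3")

# Start reading from the oldest available record in one shard (hypothetical names).
shard_iterator = kinesis.get_shard_iterator(
    StreamName="social-feed",
    ShardId="shardId-000000000000",
    ShardIteratorType="TRIM_HORIZON",
)["ShardIterator"]

batch = kinesis.get_records(ShardIterator=shard_iterator, Limit=500)
payload = b"\n".join(record["Data"] for record in batch["Records"])
if payload:
    # Archive the batch so it can be replayed if downstream processing loses data.
    s3.put_object(Bucket="kinesis-archive", Key="feed/batch-0001", Body=payload)
```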
Question 349
Exam Question
A company has a web service deployed in the following two AWS Regions: us-west-2 and us-east-1. Each AWS Region runs an identical version of the web service. Amazon Route 53 is used to route customers to the AWS Region that has the lowest latency. The company wants to improve the availability of the web service in case an outage occurs in one of the two AWS Regions. A Solutions Architect has recommended that a Route 53 health check be performed. The health check must detect specific text on an endpoint.
What combination of conditions should the endpoint meet to pass the Route 53 health check? (Choose two.)
A. The endpoint must establish a TCP connection within 10 seconds.
B. The endpoint must return an HTTP 200 status code.
C. The endpoint must return an HTTP 2xx or 3xx status code.
D. The specific text string must appear within the first 5,120 bytes of the response.
E. The endpoint must respond to the request within the number of seconds specified when creating the health check.
Correct Answer
C. The endpoint must return an HTTP 2xx or 3xx status code.
D. The specific text string must appear within the first 5,120 bytes of the response.
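String-matching health checks in Route 53 (type HTTP_STR_MATCH) consider an endpoint healthy only when it returns a 2xx or 3xx status code and the search string appears within the first 5,120 bytes of the response body. A minimal boto3 sketch follows; the domain, path, and search string are hypothetical.

```python
import boto3

route53 = boto3.client("route53")

# HTTP_STR_MATCH requires a 2xx/3xx status code and searches the first
# 5,120 bytes of the response body for SearchString.
route53.create_health_check(
    CallerReference="web-service-check-001",  # any unique string
    HealthCheckConfig={
        "Type": "HTTP_STR_MATCH",
        "FullyQualifiedDomainName": "service.example.com",
        "Port": 80,
        "ResourcePath": "/health",
        "SearchString": "OK",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)
```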
Question 350
Exam Question
A research scientist is planning for the one-time launch of an Elastic MapReduce cluster and is encouraged by her manager to minimize costs. The cluster is designed to ingest 200TB of genomics data with a total of 100 Amazon Elastic Compute Cloud (EC2) instances and is expected to run for around four hours. The resulting data set must be stored temporarily until archived into an Amazon Relational Database Service (RDS) Oracle instance.
Which option will help save the most money while meeting requirements?
A. Deploy on-demand master, core and task nodes and store ingest and output files in Amazon Simple Storage Service (S3) Reduced Redundancy Storage (RRS).
B. Store the ingest files in Amazon S3 RRS and store the output files in S3. Deploy Reserved Instances for the master and core nodes and on-demand for the task nodes.
C. Store ingest and output files in Amazon S3. Deploy on-demand for the master and core nodes and spot for the task nodes.
D. Optimize by deploying a combination of on-demand, RI, and spot-pricing models for the master, core, and task nodes. Store ingest and output files in Amazon S3 with a lifecycle policy that archives them to Amazon Glacier.
Correct Answer
C. Store ingest and output files in Amazon S3. Deploy on-demand for the master and core nodes and spot for the task nodes.
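For a one-time four-hour run, interruption-tolerant task nodes are the natural place for Spot pricing, while the master and core nodes stay on-demand so the cluster and its HDFS survive. The boto3 sketch below adds a Spot task instance group to an existing cluster; the cluster ID, instance type, count, and bid price are hypothetical.

```python
import boto3

emr = boto3.client("emr")

# Add Spot task nodes to an existing cluster (hypothetical cluster ID) while
# the master and core nodes remain on-demand for stability.
emr.add_instance_groups(
    JobFlowId="j-EXAMPLE12345",
    InstanceGroups=[
        {
            "InstanceRole": "TASK",
            "InstanceType": "m5.xlarge",
            "InstanceCount": 50,
            "Market": "SPOT",
            "BidPrice": "0.10",  # maximum hourly Spot price in USD
        }
    ],
)
```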