The latest AWS Certified Solutions Architect – Professional SAP-C02 practice exam questions and answers (Q&A) are available free of charge to help you pass the SAP-C02 exam and earn the AWS Certified Solutions Architect – Professional certification.
Table of Contents
- Question 541
- Exam Question
- Correct Answer
- Question 542
- Exam Question
- Correct Answer
- Question 543
- Exam Question
- Correct Answer
- Question 544
- Exam Question
- Correct Answer
- Question 545
- Exam Question
- Correct Answer
- Question 546
- Exam Question
- Correct Answer
- Question 547
- Exam Question
- Correct Answer
- Question 548
- Exam Question
- Correct Answer
- Question 549
- Exam Question
- Correct Answer
- Question 550
- Exam Question
- Correct Answer
Question 541
Exam Question
The CTO at a multinational retail company is pursuing an IT re-engineering effort to set up a hybrid network architecture that will facilitate the company's envisaged long-term migration from multiple on-premises data centers to the AWS Cloud. The current on-premises data centers are in different locations and are interconnected via private fiber. Because of the unique constraints of the existing legacy applications, using NAT is not an option. During the migration period, many critical applications will need access to other applications deployed in both the on-premises data centers and the AWS Cloud.
As a Solutions Architect Professional, which of the following options would you suggest to set up a hybrid network architecture that is secure, highly available, and supports high bandwidth for a multi-Region deployment post-migration?
A. Set up multiple software VPN connections between the AWS Cloud and the on-premises data centers. Configure each subnet's traffic through different VPN connections for redundancy. Make sure that no VPC CIDR blocks overlap one another or the on-premises network.
B. Set up Direct Connect as the primary connection for all on-premises data centers, with a VPN as backup. Configure both connections to use the same virtual private gateway and BGP. Make sure that no VPC CIDR blocks overlap one another or the on-premises network.
C. Set up multiple hardware VPN connections between the AWS Cloud and the on-premises data centers. Configure each subnet's traffic through different VPN connections for redundancy. Make sure that no VPC CIDR blocks overlap one another or the on-premises network.
D. Set up a Direct Connect connection to each on-premises data center from different service providers, and configure routing to fail over to the other on-premises data center's Direct Connect connection if one connection fails. Make sure that no VPC CIDR blocks overlap one another or the on-premises network.
Correct Answer
D. Set up a Direct Connect connection to each on-premises data center from different service providers, and configure routing to fail over to the other on-premises data center's Direct Connect connection if one connection fails. Make sure that no VPC CIDR blocks overlap one another or the on-premises network.
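For context, dual Direct Connect links from different providers avoid both the bandwidth ceiling of VPN tunnels and a single-provider point of failure. Below is a minimal boto3 sketch of provisioning such redundant links; the location codes, bandwidth, VLAN, ASN, and virtual private gateway ID are illustrative placeholders, not values from the question.

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# One Direct Connect connection per on-premises data center, ideally ordered
# through different service providers for provider-level redundancy.
for name, location in [("dc1-primary", "EqDC2"), ("dc2-secondary", "CSVA1")]:
    conn = dx.create_connection(
        location=location,     # Direct Connect location code (placeholder)
        bandwidth="10Gbps",
        connectionName=name,
    )
    # A private virtual interface lets the VPC exchange routes over this link.
    dx.create_private_virtual_interface(
        connectionId=conn["connectionId"],
        newPrivateVirtualInterface={
            "virtualInterfaceName": f"{name}-vif",
            "vlan": 101,
            "asn": 65000,  # on-premises BGP ASN (placeholder)
            "virtualGatewayId": "vgw-0123456789abcdef0",  # placeholder VGW
        },
    )
```

Failover between the two links is then a matter of BGP route preference rather than anything configured in this script.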
Question 542
Exam Question
A Solutions Architect needs to migrate a legacy application from on premises to AWS. On premises, the application runs on two Linux servers behind a load balancer and accesses a database that runs in a master-master configuration on two servers. Each application server requires a license file that is tied to the MAC address of the server's network adapter. It takes the software vendor 12 hours to send new license files by email. The application requires configuration files that use static IPv4 addresses, not DNS, to access the database servers.
Given these requirements, which steps should be taken together to enable a scalable architecture for the application servers? (Choose two.)
A. Create a pool of ENIs, request license files from the vendor for the pool, and store the license files in Amazon S3. Create automation to download an unused license and attach the corresponding ENI at boot time.
B. Create a pool of ENIs, request license files from the vendor for the pool, store the license files on an Amazon EC2 instance, modify the configuration files, and create an AMI from the instance. Use this AMI for all instances.
C. Create a bootstrap automation to request a new license file from the vendor with a unique return email. Have the server configure itself with the received license file.
D. Create bootstrap automation to attach an ENI from the pool, read the database IP addresses from AWS Systems Manager Parameter Store, and inject those parameters into the local configuration files. Keep the Parameter Store values up to date using a Lambda function.
E. Install the application on an EC2 instance, configure the application, and configure the IP address information. Create an AMI from this instance and use it for all instances.
Correct Answer
A. Create a pool of ENIs, request license files from the vendor for the pool, and store the license files in Amazon S3. Create automation to download an unused license and attach the corresponding ENI at boot time.
D. Create bootstrap automation to attach an ENI from the pool, read the database IP addresses from AWS Systems Manager Parameter Store, and inject those parameters into the local configuration files. Keep the Parameter Store values up to date using a Lambda function.
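To make the winning combination concrete, here is a minimal boto3 sketch of the boot-time automation that options A and D describe: claim an unused pre-licensed ENI from the pool, then render the configuration file from the database IPs held in Parameter Store. The tag key, parameter name, file path, and instance ID are assumptions for illustration.

```python
import boto3

ec2 = boto3.client("ec2")
ssm = boto3.client("ssm")

# Claim an unattached ENI from the pre-licensed pool.
# The tag key/value marking pool members is an assumption.
enis = ec2.describe_network_interfaces(
    Filters=[
        {"Name": "tag:Purpose", "Values": ["license-pool"]},
        {"Name": "status", "Values": ["available"]},
    ]
)["NetworkInterfaces"]

# In a real bootstrap script the instance ID comes from instance metadata.
instance_id = "i-0123456789abcdef0"  # placeholder
ec2.attach_network_interface(
    NetworkInterfaceId=enis[0]["NetworkInterfaceId"],
    InstanceId=instance_id,
    DeviceIndex=1,  # eth0 stays the primary interface
)

# Inject the static database IPs from Parameter Store into the config file.
db_ips = ssm.get_parameter(Name="/legacy-app/db-ips")["Parameter"]["Value"]
with open("/etc/legacy-app/app.conf", "w") as f:
    f.write(f"db_hosts={db_ips}\n")
```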
Question 543
Exam Question
A financial services company runs more than 400 core-banking microservices on AWS, using services including Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Block Store (Amazon EBS), and Amazon Simple Storage Service (Amazon S3). The company also segregates parts of its infrastructure using separate AWS accounts, so that if one account is compromised, critical parts of the infrastructure in other accounts remain unaffected. The bank uses one account for production, one for non-production, and one for storing and managing users' login information and roles within AWS. The privileges assigned in the user account then allow users to read from or write to the production and non-production accounts. The company has set up AWS Organizations to manage several of these scenarios. The company wants to provide shared and centrally managed VPCs to all business units for certain applications that need a high degree of interconnectivity.
As a solutions architect, which of the following options would you choose to facilitate this use-case?
A. Use VPC sharing to share one or more subnets with other AWS accounts belonging to the same parent organization from AWS Organizations.
B. Use VPC sharing to share a VPC with other AWS accounts belonging to the same parent organization from AWS Organizations.
C. Use VPC peering to share one or more subnets with other AWS accounts belonging to the same parent organization from AWS Organizations.
D. Use VPC peering to share a VPC with other AWS accounts belonging to the same parent organization from AWS Organizations.
Correct Answer
A. Use VPC sharing to share one or more subnets with other AWS accounts belonging to the same parent organization from AWS Organizations.
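VPC sharing is implemented through AWS Resource Access Manager (RAM), which shares subnets (not whole VPCs) with other accounts in the same organization, which is why option A is correct over option B. A minimal boto3 sketch, with placeholder ARNs:

```python
import boto3

ram = boto3.client("ram")

# One-time prerequisite: allow RAM to share with the whole organization.
ram.enable_sharing_with_aws_organization()

# Share a subnet of the centrally managed VPC; ARNs are placeholders.
ram.create_resource_share(
    name="shared-app-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0123456789abcdef0"
    ],
    principals=["arn:aws:organizations::111122223333:organization/o-exampleorgid"],
    allowExternalPrincipals=False,  # keep sharing inside the organization
)
```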
Question 544
Exam Question
A Solutions Architect is responsible for redesigning a legacy Java application to improve its availability, data durability, and scalability. Currently, the application runs on a single high-memory Amazon EC2 instance. It accepts HTTP requests from upstream clients, adds them to an in-memory queue, and responds with a 200 status. A separate application thread reads items from the queue, processes them, and persists the results to an Amazon RDS MySQL instance. The processing time for each item averages 90 seconds, most of which is spent waiting on external service calls, but the application is written to process multiple items in parallel. Traffic to this service is unpredictable. During periods of high load, items may sit in the internal queue for over an hour while the application processes the backlog. In addition, the current system has issues with availability and data loss if the single application node fails. Clients that access this service cannot be modified. They expect to receive a response to each HTTP request they send within 10 seconds before they time out and retry the request.
Which approach would improve the availability and durability of the system while decreasing the processing latency and minimizing costs?
A. Create an Amazon API Gateway REST API that uses Lambda proxy integration to pass requests to an AWS Lambda function. Migrate the core processing code to a Lambda function and write a wrapper class that provides a handler method that converts the proxy events to the internal application data model and invokes the processing module.
B. Create an Amazon API Gateway REST API that uses a service proxy to put items in an Amazon SQS queue. Extract the core processing code from the existing application and update it to pull items from the Amazon SQS queue instead of the in-memory queue. Deploy the new processing application to smaller EC2 instances within an Auto Scaling group that scales dynamically based on the approximate number of messages in the Amazon SQS queue.
C. Modify the application to use Amazon DynamoDB instead of Amazon RDS. Configure Auto Scaling for the DynamoDB table. Deploy the application within an Auto Scaling group with a scaling policy based on CPU utilization. Back the in-memory queue with a memory-mapped file to an instance store volume and periodically write that file to Amazon S3.
D. Update the application to use a Redis task queue instead of the in-memory queue. Build a Docker container image for the application. Create an Amazon ECS task definition that includes the application container and a separate container to host Redis. Deploy the new task definition as an ECS service using AWS Fargate and enable Auto Scaling.
Correct Answer
B. Create an Amazon API Gateway REST API that uses a service proxy to put items in an Amazon SQS queue. Extract the core processing code from the existing application and update it to pull items from the Amazon SQS queue instead of the in-memory queue. Deploy the new processing application to smaller EC2 instances within an Auto Scaling group that scales dynamically based on the approximate number of messages in the Amazon SQS queue.
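On the worker side of option B, the only structural change is swapping the in-memory queue for SQS long polling. A minimal sketch, assuming a placeholder queue URL and a process() stand-in for the existing core logic:

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/work-items"  # placeholder

def process(body: str) -> None:
    """Stand-in for the extracted core processing logic (~90 s per item)."""
    ...

while True:
    # Long polling (WaitTimeSeconds) avoids tight empty-receive loops.
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,
        VisibilityTimeout=180,  # comfortably above the 90 s average
    )
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        # Delete only after success so failed items become visible again.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```

An Auto Scaling policy on the queue's ApproximateNumberOfMessagesVisible metric then grows or shrinks the worker fleet with the backlog.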
Question 545
Exam Question
A social media company has a serverless application stack that consists of CloudFront, API Gateway, and Lambda functions. The company has hired you as an AWS Certified Solutions Architect Professional to improve the current deployment process, which creates a new version of the Lambda function and then runs an AWS CLI script for deployment. If the new version errors out, another CLI script is invoked to deploy the previous working version of the Lambda function. The company has mandated that you decrease the time to deploy new versions of the Lambda functions and also reduce the time to detect and roll back when errors are identified.
Which of the following solutions would you suggest for the given use-case?
A. Set up and deploy nested CloudFormation stacks with the CloudFront distribution and the API Gateway in the parent stack. Create and deploy a child stack containing the Lambda functions. To address any changes in a Lambda function, create a CloudFormation change set and deploy it. If the Lambda function errors out, roll back the CloudFormation change set to the previous version.
B. Use the Serverless Application Model (SAM) and leverage SAM's built-in traffic-shifting feature to deploy the new Lambda version via CodeDeploy, using pre-traffic and post-traffic test functions to verify the code. Roll back if CloudWatch alarms are triggered.
C. Set up and deploy nested CloudFormation stacks with the CloudFront distribution and the API Gateway in the parent stack. Create and deploy a child stack containing the Lambda functions. To address any changes in a Lambda function, create a CloudFormation change set and deploy it. Use the pre-traffic and post-traffic test functions of the change set to verify the deployment. Roll back if CloudWatch alarms are triggered.
D. Set up and deploy a CloudFormation stack containing a new API Gateway endpoint that points to the new Lambda version. Test the updated CloudFront origin that points to this new API Gateway endpoint, and if errors are detected, revert the CloudFront origin to the previous working API Gateway endpoint.
Correct Answer
B. Use the Serverless Application Model (SAM) and leverage SAM's built-in traffic-shifting feature to deploy the new Lambda version via CodeDeploy, using pre-traffic and post-traffic test functions to verify the code. Roll back if CloudWatch alarms are triggered.
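The pre-traffic and post-traffic test functions in option B are ordinary Lambda functions that CodeDeploy invokes during the traffic shift, and they must report a verdict back to CodeDeploy. A minimal sketch of a pre-traffic hook, assuming a placeholder function name and alias for the smoke test:

```python
import boto3

codedeploy = boto3.client("codedeploy")
lambda_client = boto3.client("lambda")

def handler(event, context):
    # CodeDeploy passes these IDs so the hook can report its verdict.
    deployment_id = event["DeploymentId"]
    hook_execution_id = event["LifecycleEventHookExecutionId"]

    status = "Succeeded"
    try:
        # Invoke the new (not-yet-live) version directly to smoke-test it.
        # The function name and qualifier are illustrative placeholders.
        resp = lambda_client.invoke(
            FunctionName="my-function",
            Qualifier="new-version-alias",
            Payload=b'{"smokeTest": true}',
        )
        if resp.get("FunctionError"):
            status = "Failed"
    except Exception:
        status = "Failed"

    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=deployment_id,
        lifecycleEventHookExecutionId=hook_execution_id,
        status=status,  # "Failed" triggers an automatic rollback
    )
```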
Question 546
Exam Question
A company wants to allow its Marketing team to perform SQL queries on customer records to identify market segments. The data is spread across hundreds of files. The records must be encrypted in transit and at rest. The Team Manager must have the ability to manage users and groups, but no team members should have access to services or resources not required for the SQL queries. Additionally, Administrators need to audit the queries made and receive notifications when a query violates rules defined by the Security team. AWS Organizations has been used to create a new account and an AWS IAM user with administrator permissions for the Team Manager.
Which design meets these requirements?
A. Apply a service control policy (SCP) that allows access to IAM, Amazon RDS, and AWS CloudTrail. Load customer records into Amazon RDS MySQL and train users to execute queries using the AWS CLI. Stream the query logs to Amazon CloudWatch Logs from the RDS database instance. Use a subscription filter with AWS Lambda functions to audit and alarm on queries against personal data.
B. Apply a service control policy (SCP) that denies access to all services except IAM, Amazon Athena, Amazon S3, and AWS CloudTrail. Store customer record files in Amazon S3 and train users to execute queries using the CLI via Athena. Analyze CloudTrail events to audit and alarm on queries against personal data.
C. Apply a service control policy (SCP) that denies access to all services except IAM, Amazon DynamoDB, and AWS CloudTrail. Store customer records in DynamoDB and train users to execute queries using the AWS CLI. Enable DynamoDB streams to track the queries that are issued and use an AWS Lambda function for real-time monitoring and alerting.
D. Apply a service control policy (SCP) that allows access to IAM, Amazon Athena, Amazon S3, and AWS CloudTrail. Store customer records as files in Amazon S3 and train users to leverage the Amazon S3 Select feature and execute queries using the AWS CLI. Enable S3 object-level logging and analyze CloudTrail events to audit and alarm on queries against personal data.
Correct Answer
B. Apply a service control policy (SCP) that denies access to all services except IAM, Amazon Athena, Amazon S3, and AWS CloudTrail. Store customer record files in Amazon S3 and train users to execute queries using the CLI via Athena. Analyze CloudTrail events to audit and alarm on queries against personal data.
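A deny-list SCP of the shape option B describes can be created and attached with boto3. The statement below is a plausible sketch rather than verbatim AWS guidance, and the Glue read actions are an assumption (Athena queries often need Data Catalog access):

```python
import json
import boto3

orgs = boto3.client("organizations")

# Deny everything except the services the Marketing team needs (option B).
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "NotAction": [
                "iam:*",
                "athena:*",
                "s3:*",
                "cloudtrail:*",
                "glue:Get*",  # assumption: Athena may need Data Catalog reads
            ],
            "Resource": "*",
        }
    ],
}

policy = orgs.create_policy(
    Content=json.dumps(scp),
    Description="Restrict Marketing account to Athena-based querying",
    Name="marketing-athena-only",
    Type="SERVICE_CONTROL_POLICY",
)
# Attach to the Marketing account (account ID is a placeholder).
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="111122223333",
)
```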
Question 547
Exam Question
A media company has a static web application that is generated programmatically. The company has a build pipeline that generates HTML content that is uploaded to an Amazon S3 bucket served by Amazon CloudFront. The build pipeline runs inside a Build Account. The S3 bucket and CloudFront distribution are in a Distribution Account. The build pipeline uploads the files to Amazon S3 using an IAM role in the Build Account. The S3 bucket has a bucket policy that only allows CloudFront to read objects using an origin access identity (OAI). During testing, all attempts to access the application using the CloudFront URL result in an HTTP 403 Access Denied response.
What should a solutions architect suggest to the company to allow access to the objects in Amazon S3 through CloudFront?
A. Modify the S3 upload process in the Build Account to add the bucket-owner-full-control ACL to the objects at upload.
B. Create a new cross-account IAM role in the Distribution Account with write access to the S3 bucket. Modify the build pipeline to assume this role to upload the files to the Distribution Account.
C. Modify the S3 upload process in the Build Account to set the object owner to the Distribution Account.
D. Create a new IAM role in the Distribution Account with read access to the S3 bucket. Configure CloudFront to use this new role as its OAI. Modify the build pipeline to assume this role when uploading files from the Build Account.
Correct Answer
A. Modify the S3 upload process in the Build Account to add the bucket-owner-full-control ACL to the objects at upload.
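The fix in option A is a one-line change to the upload call: grant the bucket owner (the Distribution Account) full control so the OAI grant in the bucket policy can apply to the object. A sketch with placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Upload from the Build Account with an ACL that gives the Distribution
# Account (the bucket owner) full control of the object; otherwise the
# object stays owned by the Build Account and CloudFront's OAI cannot read it.
s3.put_object(
    Bucket="distribution-account-site-bucket",  # placeholder
    Key="index.html",                           # placeholder
    Body=open("build/index.html", "rb"),
    ContentType="text/html",
    ACL="bucket-owner-full-control",
)
```

Note that buckets created with Object Ownership set to "bucket owner enforced" disable ACLs entirely, which sidesteps this cross-account ownership problem altogether.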
Question 548
Exam Question
A company has a 24 TB MySQL database in its on-premises data center that grows at the rate of 10 GB per day. The data center is connected to the company’s AWS infrastructure with a 50 Mbps VPN connection. The company is migrating the application and workload to AWS. The application code is already installed and tested on Amazon EC2. The company now needs to migrate the database and wants to go live on AWS within 3 weeks.
Which of the following approaches meets the schedule with LEAST downtime?
A.
1. Use the VM Import/Export service to import a snapshot of the on-premises database into AWS.
2. Launch a new EC2 instance from the snapshot.
3. Set up ongoing database replication from on premises to the EC2 database over the VPN.
4. Change the DNS entry to point to the EC2 database.
5. Stop the replication.
B.
1. Launch an AWS DMS instance.
2. Launch an Amazon RDS Aurora MySQL DB instance.
3. Configure the AWS DMS instance with on-premises and Amazon RDS database information.
4. Start the replication task within AWS DMS over the VPN.
5. Change the DNS entry to point to the Amazon RDS Aurora MySQL database.
6. Stop the replication.
C.
1. Create a database export locally using database-native tools.
2. Import that into AWS using AWS Snowball.
3. Launch an Amazon RDS Aurora DB instance.
4. Load the data in the RDS Aurora DB instance from the export.
5. Set up database replication from the on-premises database to the RDS Aurora DB instance over the VPN.
6. Change the DNS entry to point to the RDS Aurora DB instance.
7. Stop the replication.
D.
1. Take the on-premises application offline.
2. Create a database export locally using database-native tools.
3. Import that into AWS using AWS Snowball.
4. Launch an Amazon RDS Aurora DB instance.
5. Load the data in the RDS Aurora DB instance from the export.
6. Change the DNS entry to point to the Amazon RDS Aurora DB instance.
7. Put the Amazon EC2 hosted application online.
Correct Answer
C.
1. Create a database export locally using database-native tools.
2. Import that into AWS using AWS Snowball.
3. Launch an Amazon RDS Aurora DB instance.
4. Load the data in the RDS Aurora DB instance from the export.
5. Set up database replication from the on-premises database to the RDS Aurora DB instance over the VPN.
6. Change the DNS entry to point to the RDS Aurora DB instance.
7. Stop the replication.
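The arithmetic explains why the bulk load cannot go over the VPN and why option D's downtime is unnecessary: at 50 Mbps, 24 TB takes well over a month, while the roughly 10 GB of daily changes replicates in well under an hour. A quick back-of-the-envelope check in Python:

```python
# Transfer times over the 50 Mbps VPN link, assuming 100% link utilization.
db_bits = 24 * 10**12 * 8    # 24 TB expressed in bits
link_bps = 50 * 10**6        # 50 Mbps

print(f"Full load: {db_bits / link_bps / 86400:.1f} days")       # ~44.4 days
print(f"Daily delta: {10 * 10**9 * 8 / link_bps / 3600:.1f} h")  # ~0.4 hours
```

Hence Snowball carries the bulk export, and VPN-based replication catches up the deltas before the DNS cutover, keeping downtime minimal.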
Question 549
Exam Question
A company is migrating its applications to AWS. The applications will be deployed to AWS accounts owned by business units. The company has several teams of developers who are responsible for the development and maintenance of all applications. The company is expecting rapid growth in the number of users. The company's chief technology officer has the following requirements:
- Developers must launch the AWS infrastructure using AWS CloudFormation.
- Developers must not be able to create resources outside of CloudFormation.
- The solution must be able to scale to hundreds of AWS accounts.
Which of the following would meet these requirements? (Choose two.)
A. Using CloudFormation, create an IAM role that can be assumed by CloudFormation that has permissions to create all the resources the company needs. Use CloudFormation StackSets to deploy this template to each AWS account.
B. In a central account, create an IAM role that can be assumed by developers, and attach a policy that allows interaction with CloudFormation. Modify the AssumeRolePolicyDocument action to allow the IAM role to be passed to CloudFormation.
C. Using CloudFormation, create an IAM role that can be assumed by developers, and attach policies that allow interaction with and passing a role to CloudFormation. Attach an inline policy to deny access to all other AWS services. Use CloudFormation StackSets to deploy this template to each AWS account.
D. Using CloudFormation, create an IAM role for each developer, and attach policies that allow interaction with CloudFormation. Use CloudFormation StackSets to deploy this template to each AWS account.
E. In a central AWS account, create an IAM role that can be assumed by CloudFormation that has permissions to create the resources the company requires. Create a CloudFormation stack policy that allows the IAM role to manage resources. Use CloudFormation StackSets to deploy the CloudFormation stack policy to each AWS account.
Correct Answer
A. Using CloudFormation, create an IAM role that can be assumed by CloudFormation that has permissions to create all the resources the company needs. Use CloudFormation StackSets to deploy this template to each AWS account.
C. Using CloudFormation, create an IAM role that can be assumed by developers, and attach policies that allow interaction with and passing a role to CloudFormation. Attach an inline policy to deny access to all other AWS services. Use CloudFormation StackSets to deploy this template to each AWS account.
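Once the two roles from options A and C are written into a template, StackSets pushes them to every account. A minimal boto3 sketch, assuming a placeholder template file, stack set name, and organizational unit ID:

```python
import boto3

cfn = boto3.client("cloudformation")

# The template body would contain the two IAM roles from options A and C.
with open("developer-roles.yaml") as f:  # placeholder file
    template = f.read()

cfn.create_stack_set(
    StackSetName="developer-cfn-roles",
    TemplateBody=template,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # required to create named IAM roles
    PermissionModel="SERVICE_MANAGED",      # deploy via AWS Organizations
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
)

# Roll the stack out to every account under an OU (OU ID is a placeholder).
cfn.create_stack_instances(
    StackSetName="developer-cfn-roles",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-examp-12345678"]},
    Regions=["us-east-1"],
)
```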
Question 550
Exam Question
A company is creating an account strategy so that it can begin using AWS. The Security team will provide each team with the permissions it needs, following the principle of least privilege. Teams would like to keep their resources isolated from other groups, and the Finance team would like each team's resource usage separated for billing purposes.
Which account creation process meets these requirements and allows for changes?
A. Create a new AWS Organizations account. Create groups in Active Directory and assign them to roles in AWS to grant federated access. Require each team to tag their resources, and separate bills based on tags. Control access to resources through IAM, granting the minimum required privileges.
B. Create individual accounts for each team. Assign the security account as the master account, and enable consolidated billing for all other accounts. Create a cross-account role for security to manage accounts, and send logs to a bucket in the security account.
C. Create a new AWS account, and use AWS Service Catalog to provide teams with the required resources. Implement a third-party billing solution to provide the Finance team with each team's resource usage based on tagging. Isolate resources using IAM to avoid account sprawl. Security will control and monitor logs and permissions.
D. Create a master account for billing using Organizations, and create each team’s account from that master account. Create a security account for logs and cross-account access. Apply service control policies on each account, and grant the Security team cross-account access to all accounts. Security will create IAM policies for each account to maintain least privilege access.
Correct Answer
D. Create a master account for billing using Organizations, and create each team’s account from that master account. Create a security account for logs and cross-account access. Apply service control policies on each account, and grant the Security team cross-account access to all accounts. Security will create IAM policies for each account to maintain least privilege access.
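A minimal boto3 sketch of the account-vending side of option D, run from the Organizations management (master) account; the team names, email pattern, and policy/account IDs are placeholders:

```python
import boto3

orgs = boto3.client("organizations")

# Create one member account per team from the management ("master") account.
for team in ["analytics", "payments", "web"]:
    resp = orgs.create_account(
        Email=f"aws+{team}@example.com",           # placeholder addresses
        AccountName=f"{team}-team",
        RoleName="OrganizationAccountAccessRole",  # cross-account admin role
    )
    request_id = resp["CreateAccountStatus"]["Id"]
    # Account creation is asynchronous; a real script would poll
    # describe_create_account_status(CreateAccountRequestId=request_id)
    # until the account exists before attaching policies.

# Attach the Security team's SCP to a team account once it exists.
orgs.attach_policy(PolicyId="p-examplescp1", TargetId="111122223333")
```

The same management account then owns consolidated billing, giving the Finance team per-account cost separation with no tagging discipline required.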