The latest AWS Certified Solutions Architect – Professional SAP-C02 certification practice exam questions and answers (Q&A) are available free of charge to help you pass the AWS Certified Solutions Architect – Professional SAP-C02 exam and earn the AWS Certified Solutions Architect – Professional SAP-C02 certification.
Table of Contents
- Question 651
- Exam Question
- Correct Answer
- Question 652
- Exam Question
- Correct Answer
- Explanation
- Question 653
- Exam Question
- Correct Answer
- Question 654
- Exam Question
- Correct Answer
- Explanation
- References
- Question 655
- Exam Question
- Correct Answer
- Question 656
- Exam Question
- Correct Answer
- Explanation
- Reference
- Question 657
- Exam Question
- Correct Answer
- Question 658
- Exam Question
- Correct Answer
- Reference
- Question 659
- Exam Question
- Correct Answer
- Question 660
- Exam Question
- Correct Answer
- Explanation
- Reference
Question 651
Exam Question
An Internet of Things (IoT) company has developed an end-to-end cloud-based solution that provides customers with integrated IoT functionality in devices including baby monitors, security cameras, and entertainment systems. The company is using Amazon Kinesis Data Streams (KDS) to process IoT data from these devices. Multiple consumer applications are using the incoming data streams, and the engineers have noticed a performance lag in the data delivery speed between producers and consumers of the data streams.
As a Solutions Architect Professional, which of the following would you recommend to improve the performance for the given use-case?
A. Swap out Kinesis Data Streams with SQS FIFO queues to support the desired read throughput for the downstream applications
B. Swap out Kinesis Data Streams with SQS Standard queues to support the desired read throughput for the downstream applications.
C. Use the Enhanced Fan-Out feature of Kinesis Data Streams to support the desired read throughput for the downstream applications.
D. Swap out Kinesis Data Streams with Kinesis Data Firehose to support the desired read throughput for the downstream applications.
Correct Answer
C. Use the Enhanced Fan-Out feature of Kinesis Data Streams to support the desired read throughput for the downstream applications.
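For reference, here is a minimal boto3 sketch of registering an enhanced fan-out consumer; the stream and consumer names are hypothetical. Each registered consumer receives its own dedicated read throughput per shard, which is what removes the contention behind the observed lag.

```python
import boto3

kinesis = boto3.client("kinesis")

# Hypothetical stream and consumer names for illustration only.
STREAM_NAME = "iot-device-stream"

stream_arn = kinesis.describe_stream_summary(StreamName=STREAM_NAME)[
    "StreamDescriptionSummary"
]["StreamARN"]

# Registering a consumer gives it a dedicated 2 MB/s pipe per shard
# (enhanced fan-out) instead of sharing the per-shard read throughput
# with every other application that polls the stream with GetRecords.
consumer = kinesis.register_stream_consumer(
    StreamARN=stream_arn,
    ConsumerName="analytics-app",
)
print(consumer["Consumer"]["ConsumerARN"])
```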
Question 652
Exam Question
A solutions architect is designing a solution to process events. The solution must have the ability to scale in and out based on the number of events that the solution receives. If a processing error occurs, the event must move into a separate queue for review.
Which solution will meet these requirements?
A. Send event details to an Amazon Simple Notification Service (Amazon SNS) topic. Configure an AWS Lambda function as a subscriber to the SNS topic to process the events. Add an on-failure destination to the function. Set an Amazon Simple Queue Service (Amazon SQS) queue as the target.
B. Publish events to an Amazon Simple Queue Service (Amazon SQS) queue. Create an Amazon EC2 Auto Scaling group. Configure the Auto Scaling group to scale in and out based on the ApproximateAgeOfOldestMessage metric of the queue. Configure the application to write failed messages to a dead-letter queue.
C. Write events to an Amazon DynamoDB table. Configure a DynamoDB stream for the table. Configure the stream to invoke an AWS Lambda function. Configure the Lambda function to process the events.
D. Publish events to an Amazon EventBridge event bus. Create and run an application on an Amazon EC2 instance with an Auto Scaling group that is behind an Application Load Balancer (ALB). Set the ALB as the event bus target. Configure the event bus to retry events. Write messages to a dead-letter queue if the application cannot process the messages.
Correct Answer
A. Send event details to an Amazon Simple Notification Service (Amazon SNS) topic. Configure an AWS Lambda function as a subscriber to the SNS topic to process the events. Add an on-failure destination to the function. Set an Amazon Simple Queue Service (Amazon SQS) queue as the target.
Explanation
Amazon Simple Notification Service (Amazon SNS) is a fully managed pub/sub messaging service that enables users to send messages to multiple subscribers. Users can send event details to an Amazon SNS topic and configure an AWS Lambda function as a subscriber to the SNS topic to process the events. Lambda is a serverless compute service that runs code in response to events and automatically manages the underlying compute resources. Users can add an on-failure destination to the function and set an Amazon Simple Queue Service (Amazon SQS) queue as the target. Amazon SQS is a fully managed message queuing service that enables users to decouple and scale microservices, distributed systems, and serverless applications. This way, if a processing error occurs, the event will move into the separate queue for review.
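As a rough sketch (the function name, account ID, and queue ARN below are placeholders), an on-failure destination for asynchronous Lambda invocations can be configured with boto3 like this:

```python
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical function name and queue ARN for illustration only.
lambda_client.put_function_event_invoke_config(
    FunctionName="process-events",
    MaximumRetryAttempts=2,
    DestinationConfig={
        "OnFailure": {
            # Events that still fail after the retries are sent here for review.
            "Destination": "arn:aws:sqs:us-east-1:123456789012:failed-events"
        }
    },
)
```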
Option B is incorrect because publishing events to an Amazon SQS queue and creating an Amazon EC2 Auto Scaling group will not have the ability to scale in and out based on the number of events that the solution receives. Amazon EC2 is a web service that provides secure, resizable compute capacity in the cloud. Auto Scaling is a feature that helps users maintain application availability and allows them to scale their EC2 capacity up or down automatically according to conditions they define. However, for this use case, using SQS and EC2 will not take advantage of the serverless capabilities of Lambda and SNS.
Option C is incorrect because writing events to an Amazon DynamoDB table and configuring a DynamoDB stream for the table will not have the ability to move events into a separate queue for review if a processing error occurs. Amazon DynamoDB is a fully managed key-value and document database that delivers single-digit millisecond performance at any scale. DynamoDB Streams is a feature that captures data modification events in DynamoDB tables. Users can configure the stream to invoke a Lambda function, but they cannot configure an on-failure destination for the function.
Option D is incorrect because publishing events to an Amazon EventBridge event bus and setting an Application Load Balancer (ALB) as the event bus target will not have the ability to move events into a separate queue for review if a processing error occurs. Amazon EventBridge is a serverless event bus service that makes it easy to connect applications with data from a variety of sources. An ALB is a load balancer that distributes incoming application traffic across multiple targets, such as EC2 instances, containers, IP addresses, Lambda functions, and virtual appliances. Users can configure EventBridge to retry events, but they cannot configure an on-failure destination for the ALB.
Question 653
Exam Question
A leading medical imaging equipment and diagnostic imaging solutions provider uses the AWS Cloud to run its healthcare data flows through more than 500,000 medical imaging devices globally. The solutions provider stores close to one petabyte of medical imaging data on Amazon S3 to provide the durability and reliability needed for their critical data. A research assistant working with the radiology department is trying to upload a high-resolution image into S3 via the public internet. The image size is approximately 5 GB. The research assistant is using S3 Transfer Acceleration (S3TA) for faster image upload. It turns out that S3TA did not result in an accelerated transfer.
Given this scenario, which of the following is correct regarding the charges for this image transfer?
A. The research assistant does not need to pay any transfer charges for the image upload.
B. The research assistant only needs to pay S3 transfer charges for the image upload.
C. The research assistant only needs to pay S3TA transfer charges for the image upload.
D. The research assistant needs to pay both S3 transfer charges and S3TA transfer charges for the image upload.
Correct Answer
A. The research assistant does not need to pay any transfer charges for the image upload.
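For context, here is a minimal boto3 sketch of an accelerated upload (the bucket and file names are hypothetical). The key billing behavior is that S3TA charges apply only when the transfer was actually accelerated, and standard data transfer into S3 from the internet is free, which is why no transfer charge applies in this scenario.

```python
import boto3
from botocore.config import Config

# Bucket name is hypothetical; acceleration must be enabled on the bucket first.
s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket="radiology-images",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Route the upload through the accelerate endpoint. S3TA bills only for
# transfers it actually accelerated; otherwise only standard S3 pricing
# applies, and data transfer IN from the internet is free.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("scan.dcm", "radiology-images", "scans/scan.dcm")
```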
Question 654
Exam Question
A company uses AWS Organizations for a multi-account setup in the AWS Cloud. The company’s finance team has a data processing application that uses AWS Lambda and Amazon DynamoDB. The company’s marketing team wants to access the data that is stored in the DynamoDB table.
The DynamoDB table contains confidential data. The marketing team can have access to only specific attributes of data in the DynamoDB table. The finance team and the marketing team have separate AWS accounts.
What should a solutions architect do to provide the marketing team with the appropriate access to the DynamoDB table?
A. Create an SCP to grant the marketing team’s AWS account access to the specific attributes of the DynamoDB table. Attach the SCP to the OU of the finance team.
B. Create an IAM role in the finance team’s account by using IAM policy conditions for specific DynamoDB attributes (fine-grained access control). Establish trust with the marketing team’s account.
In the marketing team’s account, create an IAM role that has permissions to assume the IAM role in the finance team’s account.
C. Create a resource-based IAM policy that includes conditions for specific DynamoDB attributes (fine-grained access control). Attach the policy to the DynamoDB table. In the marketing team’s account, create an IAM role that has permissions to access the DynamoDB table in the finance team’s account.
D. Create an IAM role in the finance team’s account to access the DynamoDB table. Use an IAM permissions boundary to limit the access to the specific attributes. In the marketing team’s account, create an IAM role that has permissions to assume the IAM role in the finance team’s account.
Correct Answer
C. Create a resource-based IAM policy that includes conditions for specific DynamoDB attributes (fine-grained access control). Attach the policy to the DynamoDB table. In the marketing team’s account, create an IAM role that has permissions to access the DynamoDB table in the finance team’s account.
Explanation
The company should create a resource-based IAM policy that includes conditions for specific DynamoDB attributes (fine-grained access control). The company should attach the policy to the DynamoDB table. In the marketing team’s account, the company should create an IAM role that has permissions to access the DynamoDB table in the finance team’s account. This solution will meet the requirements because a resource-based IAM policy is a policy that you attach to an AWS resource (such as a DynamoDB table) to control who can access that resource and what actions they can perform on it. You can use IAM policy conditions to specify fine-grained access control for DynamoDB items and attributes. For example, you can allow or deny access to specific attributes of all items in a table by matching on attribute names [1]. By creating a resource-based policy that allows access to only specific attributes of the DynamoDB table and attaching it to the table, the company can restrict access to confidential data. By creating an IAM role in the marketing team’s account that has permissions to access the DynamoDB table in the finance team’s account, the company can enable cross-account access.
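To illustrate the fine-grained access control referenced above, here is a hedged sketch of a policy statement that uses the dynamodb:Attributes condition key to expose only selected, non-confidential attributes. The table ARN, attribute names, and policy name are hypothetical, and the same condition block applies wherever the policy is attached or evaluated.

```python
import json
import boto3

iam = boto3.client("iam")

# Illustrative only: table ARN and attribute names are hypothetical.
# The dynamodb:Attributes condition key limits reads to the listed
# non-confidential attributes of items in the table.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/FinanceData",
            "Condition": {
                "ForAllValues:StringEquals": {
                    "dynamodb:Attributes": ["CampaignId", "Region", "SpendCategory"]
                },
                "StringEqualsIfExists": {"dynamodb:Select": "SPECIFIC_ATTRIBUTES"},
            },
        }
    ],
}

iam.create_policy(
    PolicyName="MarketingDynamoDBFineGrainedRead",
    PolicyDocument=json.dumps(policy_document),
)
```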
The other options are not correct because:
Creating an SCP to grant the marketing team’s AWS account access to the specific attributes of the DynamoDB table would not work because SCPs are policies that you can use with AWS Organizations to manage permissions in your organization’s accounts. SCPs do not grant permissions; instead, they specify the maximum permissions that identities in an account can have [2]. SCPs cannot be used to specify fine-grained access control for DynamoDB items and attributes.
Creating an IAM role in the finance team’s account by using IAM policy conditions for specific DynamoDB attributes and establishing trust with the marketing team’s account would not work because IAM roles are identities that you can create in your account that have specific permissions. You can use an IAM role to delegate access to users, applications, or services that don’t normally have access to your AWS resources [3]. However, creating an IAM role in the finance team’s account would not restrict access to specific attributes of the DynamoDB table; it would only allow cross-account access. The company would still need a resource-based policy attached to the table to enforce fine-grained access control.
Creating an IAM role in the finance team’s account to access the DynamoDB table and using an IAM permissions boundary to limit the access to the specific attributes would not work because IAM permissions boundaries are policies that you use to delegate permissions management to other users. You can use permissions boundaries to limit the maximum permissions that an identity-based policy can grant to an IAM entity (user or role) [4]. Permissions boundaries cannot be used to specify fine-grained access control for DynamoDB items and attributes.
References
- AWS > Documentation > Amazon DynamoDB > Developer Guide > Using IAM policy conditions for fine-grained access control
- AWS > Documentation > AWS Organizations > User Guide > Service control policies (SCPs)
- AWS > Documentation > AWS Identity and Access Management > User Guide > IAM roles
- AWS > Documentation > AWS Identity and Access Management > User Guide > Permissions boundaries for IAM entities
Question 655
Exam Question
A social media company has its corporate headquarters in New York with an on-premises data center using an AWS Direct Connect connection to the AWS VPC. The branch offices in San Francisco and Miami use Site-to-Site VPN connections to connect to the AWS VPC. The company is looking for a solution to have the branch offices send and receive data with each other as well as with their corporate headquarters.
As a Solutions Architect Professional, which of the following solutions would you recommend to meet these requirements?
A. Set up VPC Peering between branch offices and corporate headquarters which will enable branch offices to send and receive data with each other as well as with their corporate headquarters.
B. Set up VPC CloudHub between branch offices and corporate headquarters which will enable branch offices to send and receive data with each other as well as with their corporate headquarters.
C. Configure VPC Endpoints between branch offices and corporate headquarters which will enable branch offices to send and receive data with each other as well as with their corporate headquarters.
D. Configure Public Virtual Interfaces (VIFs) between branch offices and corporate headquarters which will enable branch offices to send and receive data with each other as well as with their corporate headquarters.
Correct Answer
B. Set up VPC CloudHub between branch offices and corporate headquarters which will enable branch offices to send and receive data with each other as well as with their corporate headquarters.
Question 656
Exam Question
A company hosts a blog post application on AWS using Amazon API Gateway, Amazon DynamoDB, and AWS Lambda. The application currently does not use API keys to authorize requests. The API model is as follows:
GET /posts/[postid] to get post details
GET /users/[userid] to get user details
GET /comments/[commentid] to get comment details
The company has noticed that users are actively discussing topics in the comments section, and the company wants to increase user engagement by making the comments appear in real time.
Which design should be used to reduce comment latency and improve user experience?
A. Use edge-optimized API with Amazon CloudFront to cache API responses.
B. Modify the blog application code to request GET /comments/[commentid] every 10 seconds.
C. Use AWS AppSync and leverage WebSockets to deliver comments.
D. Change the concurrency limit of the Lambda functions to lower the API response time.
Correct Answer
C. Use AWS AppSync and leverage WebSockets to deliver comments.
Explanation
AWS AppSync is a fully managed GraphQL service that allows applications to securely access, manipulate, and receive data as well as real-time updates from multiple data sources. AWS AppSync supports GraphQL subscriptions to perform real-time operations and can push data to clients that choose to listen to specific events from the backend. AWS AppSync uses WebSockets to establish and maintain a secure connection between the clients and the API endpoint. Therefore, using AWS AppSync and leveraging WebSockets is a suitable design to reduce comment latency and improve user experience.
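As a minimal sketch of the subscription model (the API ID and schema below are hypothetical), the @aws_subscribe directive ties a subscription field to a mutation, so every client subscribed to onCommentAdded receives new comments over the managed WebSocket connection as soon as the mutation runs:

```python
import boto3

appsync = boto3.client("appsync")

# Hypothetical API ID and schema for illustration only.
schema = """
type Comment { commentId: ID! postId: ID! body: String! }

type Mutation { addComment(postId: ID!, body: String!): Comment }

type Subscription {
  onCommentAdded(postId: ID!): Comment @aws_subscribe(mutations: ["addComment"])
}

type Query { getComment(commentId: ID!): Comment }
"""

# Uploads the schema definition to the AppSync API.
appsync.start_schema_creation(apiId="example-api-id", definition=schema.encode("utf-8"))
```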
Reference
AWS > Documentation > AWS AppSync > Developer Guide > GraphQL overview
Question 657
Exam Question
A world-leading video creation and distribution company has recently migrated to AWS Cloud for digitally transforming their movie business. The company wants to speed up its media distribution process and improve data security while also reducing costs and eliminating errors. The company wants to set up a Digital Cinema Network that would allow it to connect the space-constrained movie theater environment to content stored in Amazon S3 as well as to accelerate the online distribution of movies and advertising to theaters in 38 key media markets worldwide. The company also wants to do an accelerated online migration of hundreds of terabytes of files from their on-premises data center to Amazon S3 and then establish a mechanism to access the migrated data for ongoing updates from the on-premises applications.
As a Solutions Architect Professional, which of the following would you select as the MOST performant solution for the given use-case?
A. Use AWS DataSync to migrate existing data to Amazon S3 as well as access the S3 data for ongoing updates.
B. Use File Gateway configuration of AWS Storage Gateway to migrate data to Amazon S3 and then use S3 Transfer Acceleration for ongoing updates from the on-premises applications.
C. Use AWS DataSync to migrate existing data to Amazon S3 and then use File Gateway to retain access to the migrated data for ongoing updates from the on-premises applications.
D. Use S3 Transfer Acceleration to migrate existing data to Amazon S3 and then use DataSync for ongoing updates from the on-premises applications.
Correct Answer
C. Use AWS DataSync to migrate existing data to Amazon S3 and then use File Gateway to retain access to the migrated data for ongoing updates from the on-premises applications.
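To make the migration half of the answer concrete, here is a hedged boto3 sketch of a DataSync task that copies an on-premises NFS share into S3; every ARN, hostname, and bucket name is a placeholder. Ongoing access for the on-premises applications would then go through a File Gateway mounted against the same bucket.

```python
import boto3

datasync = boto3.client("datasync")

# All ARNs, hostnames, and bucket names below are placeholders.
source = datasync.create_location_nfs(
    ServerHostname="nas.onprem.example.com",
    Subdirectory="/exports/media",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0abc"]},
)

destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::studio-media-archive",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)

# Create the transfer task and kick off the initial bulk migration.
task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="initial-media-migration",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```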
Question 658
Exam Question
A company has an environment that has a single AWS account. A solutions architect is reviewing the environment to recommend what the company could improve specifically in terms of access to the AWS Management Console. The company’s IT support workers currently access the console for administrative tasks, authenticating with named IAM users that have been mapped to their job role.
The IT support workers no longer want to maintain both their Active Directory and IAM user accounts. They want to be able to access the console by using their existing Active Directory credentials. The solutions architect is using AWS Single Sign-On (AWS SSO) to implement this functionality.
Which solution will meet these requirements MOST cost-effectively?
A. Create an organization in AWS Organizations. Turn on the AWS SSO feature in Organizations Create and configure a directory in AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) with a two-way trust to the company’s on-premises Active Directory. Configure AWS SSO and set the AWS Managed Microsoft AD directory as the identity source. Create permission sets and map them to the existing groups within the AWS Managed Microsoft AD directory.
B. Create an organization in AWS Organizations. Turn on the AWS SSO feature in Organizations Create and configure an AD Connector to connect to the company’s on-premises Active Directory. Configure AWS SSO and select the AD Connector as the identity source. Create permission sets and map them to the existing groups within the company’s Active Directory.
C. Create an organization in AWS Organizations. Turn on all features for the organization. Create and configure a directory in AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) with a two-way trust to the company’s on-premises Active Directory. Configure AWS SSO and select the AWS Managed Microsoft AD directory as the identity source. Create permission sets and map them to the existing groups within the AWS Managed Microsoft AD directory.
D. Create an organization in AWS Organizations. Turn on all features for the organization. Create and configure an AD Connector to connect to the company’s on-premises Active Directory. Configure AWS SSO and select the AD Connector as the identity source. Create permission sets and map them to the existing groups within the company’s Active Directory.
Correct Answer
D. Create an organization in AWS Organizations. Turn on all features for the organization. Create and configure an AD Connector to connect to the company’s on-premises Active Directory. Configure AWS SSO and select the AD Connector as the identity source. Create permission sets and map them to the existing groups within the company’s Active Directory.
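Once the AD Connector is set as the identity source, access is granted by mapping permission sets to the existing Active Directory groups. As a hedged sketch (the instance ARN, permission set name, and managed policy are placeholders), a permission set can be created and given a policy like this:

```python
import boto3

sso_admin = boto3.client("sso-admin")

# Instance ARN and names are placeholders; the groups being assigned come
# from the on-premises Active Directory exposed through AD Connector.
INSTANCE_ARN = "arn:aws:sso:::instance/ssoins-example"

permission_set = sso_admin.create_permission_set(
    InstanceArn=INSTANCE_ARN,
    Name="ITSupportAdministrators",
    SessionDuration="PT8H",
)

sso_admin.attach_managed_policy_to_permission_set(
    InstanceArn=INSTANCE_ARN,
    PermissionSetArn=permission_set["PermissionSet"]["PermissionSetArn"],
    ManagedPolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)
```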
Reference
- AWS > Documentation > AWS Organizations > User Guide > Enabling all features in your organization
- AWS > Documentation > AWS IAM Identity Center > User Guide > Prerequisites and considerations
Question 659
Exam Question
An e-commerce company runs a data archival workflow once a month for its on-premises data center, which is connected to the AWS Cloud over a minimally used 10-Gbps Direct Connect connection using a private virtual interface to its virtual private cloud (VPC). The company's internet connection is 200 Mbps, and the usual archive size is around 140 TB, created on the first Friday of each month. The archive must be transferred and available in Amazon S3 by the next Monday morning.
As a Solutions Architect Professional, which of the following options would you recommend as the LEAST expensive way to address the given use-case?
A. Order multiple AWS Snowball Edge appliances, transfer the data in parallel to these appliances and ship them to AWS which will then copy the data from the Snowball Edge appliances to S3.
B. Configure a private virtual interface on the 10-Gbps Direct Connect connection and then copy the data securely to S3 over the connection.
C. Configure a public virtual interface on the 10-Gbps Direct Connect connection and then copy the data to S3 over the connection.
D. Configure a VPC endpoint for S3 and then leverage the Direct Connect connection for data transfer with VPC endpoint as the target.
Correct Answer
C. Configure a public virtual interface on the 10-Gbps Direct Connect connection and then copy the data to S3 over the connection.
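A quick back-of-the-envelope calculation (decimal units, ignoring protocol overhead) shows why the existing Direct Connect link comfortably meets the Friday-to-Monday window while the 200-Mbps internet link cannot, and why ordering Snowball Edge appliances every month would add unnecessary cost and shipping time:

```python
# Rough transfer-time estimate for a 140 TB archive.
archive_bits = 140e12 * 8                 # 140 TB expressed in bits
dx_seconds = archive_bits / 10e9          # over the 10-Gbps Direct Connect link
internet_seconds = archive_bits / 200e6   # over the 200-Mbps internet link

print(dx_seconds / 3600)        # ~31 hours -> fits the Friday-to-Monday window
print(internet_seconds / 86400) # ~65 days  -> far too slow
```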
Question 660
Exam Question
A company has an application that runs as a ReplicaSet of multiple pods in an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The EKS cluster has nodes in multiple Availability Zones. The application generates many small files that must be accessible across all running instances of the application. The company needs to back up the files and retain the backups for 1 year.
Which solution will meet these requirements while providing the FASTEST storage performance?
A. Create an Amazon Elastic File System (Amazon EFS) file system and a mount target for each subnet that contains nodes in the EKS cluster. Configure the ReplicaSet to mount the file system. Direct the application to store files in the file system. Configure AWS Backup to back up and retain copies of the data for 1 year.
B. Create an Amazon Elastic Block Store (Amazon EBS) volume. Enable the EBS Multi-Attach feature.
Configure the ReplicaSet to mount the EBS volume. Direct the application to store files in the EBS volume. Configure AWS Backup to back up and retain copies of the data for 1 year.
C. Create an Amazon S3 bucket. Configure the ReplicaSet to mount the S3 bucket. Direct the application to store files in the S3 bucket. Configure S3 Versioning to retain copies of the data. Configure an S3 Lifecycle policy to delete objects after 1 year.
D. Configure the ReplicaSet to use the storage available on each of the running application pods to store the files locally. Use a third-party tool to back up the EKS cluster for 1 year.
Correct Answer
A. Create an Amazon Elastic File System (Amazon EFS) file system and a mount target for each subnet that contains nodes in the EKS cluster. Configure the ReplicaSet to mount the file system. Direct the application to store files in the file system. Configure AWS Backup to back up and retain copies of the data for 1 year.
Explanation
In the past, an EBS volume could be attached to only one EC2 instance. That is no longer strictly true, but EBS Multi-Attach has significant limitations: it works only with io1/io2 volume types and carries other constraints, whereas EFS provides shareable storage natively across instances and Availability Zones. In terms of performance, Amazon EFS is optimized for workloads that require high levels of aggregate throughput and IOPS, whereas EBS is optimized for low-latency, random access I/O operations. Amazon EFS also scales throughput and capacity automatically as storage needs grow, while EBS volumes must be resized on demand.
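As a hedged sketch of the storage side of this answer (subnet and security group IDs are placeholders), an EFS file system is created once and a mount target is added in each Availability Zone that contains EKS nodes; the pods would then typically mount it through the EFS CSI driver.

```python
import boto3

efs = boto3.client("efs")

# Subnet and security group IDs are placeholders; one mount target is
# created per Availability Zone that contains EKS worker nodes.
fs = efs.create_file_system(
    CreationToken="eks-shared-files",
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

for subnet_id in ["subnet-aaa111", "subnet-bbb222", "subnet-ccc333"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )
```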
Reference
AWS > Documentation > Amazon EC2 > User Guide for Linux Instances > Attach a volume to multiple instances with Amazon EBS Multi-Attach