
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 51

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free of charge to help you prepare for and pass the AWS Certified Solutions Architect – Associate SAA-C03 exam and earn the certification.

Question 1221

Exam Question

A company is preparing to deploy a new serverless workload. A solutions architect needs to configure permissions for invoking an AWS Lambda function. The function will be triggered by an Amazon EventBridge (Amazon CloudWatch Events) rule. Permissions should be configured using the principle of least privilege.

Which solution will meet these requirements?

A. Add an execution role to the function with lambda:InvokeFunction as the action and * as the principal.
B. Add an execution role to the function with lambda:InvokeFunction as the action and Service:eventsamazonaws.com as the principal.
C. Add a resource-based policy to the function with lambda as the action and Service:events.amazonaws.com as the principal.
D. Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service:events.amazonaws.com as the principal.

Correct Answer

D. Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service:events.amazonaws.com as the principal.

Explanation

To configure permissions for invoking an AWS Lambda function triggered by an Amazon EventBridge rule, a resource-based policy should be added to the function. This policy should specify lambda:InvokeFunction as the action and Service:events.amazonaws.com as the principal.

The principle of least privilege dictates that permissions should be granted only to the necessary resources and actions required to perform the intended operation. In this case, the Amazon EventBridge service (Service:events.amazonaws.com) should be explicitly granted permission to invoke the Lambda function.

Option A suggests adding an execution role to the function with lambda:InvokeFunction as the action and * as the principal. An execution role defines what the function itself is allowed to do; it cannot grant another service permission to invoke the function. In addition, using * (a wildcard) as the principal would allow any entity to invoke the Lambda function, which does not adhere to the principle of least privilege.

Option B suggests adding an execution role to the function with lambda:InvokeFunction as the action and Service:eventsamazonaws.com as the principal. Beyond the misspelled principal (it should be events.amazonaws.com), an execution role is the wrong mechanism: permission for EventBridge to invoke the function must be granted through a resource-based policy on the function.

Option C suggests adding a resource-based policy to the function with lambda as the action and Service:events.amazonaws.com as the principal. The action should be specified as lambda:InvokeFunction to allow invocation of the Lambda function.

Therefore, the correct solution is to add a resource-based policy to the function with lambda:InvokeFunction as the action and Service:events.amazonaws.com as the principal, ensuring that permissions are configured using the principle of least privilege.
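
As a rough illustration, a resource-based policy statement of this kind can be added with the Lambda AddPermission API. A minimal boto3 sketch follows; the function name, statement ID, and rule ARN are placeholders, not values from the question.

```python
import boto3

lambda_client = boto3.client("lambda")

# Grant the EventBridge service principal permission to invoke the function.
# FunctionName, StatementId, and SourceArn are illustrative placeholders.
lambda_client.add_permission(
    FunctionName="my-function",
    StatementId="AllowEventBridgeInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:us-east-1:111122223333:rule/my-rule",
)
```

Scoping SourceArn to the specific EventBridge rule keeps the grant as narrow as possible, which is consistent with the principle of least privilege.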

Question 1222

Exam Question

A company is building applications in containers. The company wants to migrate its development and operations services from its on-premises data center to AWS. Management states that the production system must be cloud agnostic and use the same configuration and administrator tools across production systems. A solutions architect needs to design a managed solution that will align with open-source software.

Which solution meets these requirements?

A. Launch the containers on Amazon EC2 with EC2 instance worker nodes.
B. Launch the containers on Amazon Elastic Kubernetes Service (Amazon EKS) and EKS worker nodes.
C. Launch the containers on Amazon Elastic Container Service (Amazon ECS) with AWS Fargate instances.
D. Launch the containers on Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 instance worker nodes.

Correct Answer

B. Launch the containers on Amazon Elastic Kubernetes Service (Amazon EKS) and EKS worker nodes.

Explanation

To meet the requirement of having a cloud-agnostic production system and using the same configuration and administrator tools, the best solution is to use a container orchestration service that is compatible with open-source software. Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service that allows you to run containers using the same configuration and tooling as an on-premises Kubernetes environment. This provides a consistent platform and allows you to leverage the same tools and practices for managing your containers in the cloud.

Option A, launching containers on Amazon EC2 with EC2 instance worker nodes, does not provide the same level of managed service as a container orchestration platform like Amazon EKS. It would require more manual management and configuration of the underlying EC2 instances.

Option C, launching containers on Amazon Elastic Container Service (Amazon ECS) with AWS Fargate, uses a managed container orchestration service, but it does not provide the same compatibility with open-source software as Kubernetes. ECS uses AWS's proprietary orchestration system and may require different configuration and tooling compared to an on-premises environment.

Option D, launching containers on Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 instance worker nodes, is similar to option C, but it uses EC2 instances for running containers instead of AWS Fargate. While it provides more flexibility and control, it may require additional effort to align the configuration and administrator tools with on-premises systems.

Therefore, option B, launching the containers on Amazon Elastic Kubernetes Service (Amazon EKS) and EKS worker nodes, is the best solution as it offers a managed Kubernetes service that aligns with open-source software and provides a cloud-agnostic production system with consistent configuration and tooling.
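
For illustration only, an EKS control plane can be created through the EKS API and then administered with the same standard Kubernetes tooling (kubectl, Helm) used on premises, which is what keeps the workflow portable. The cluster name, role ARN, and subnet IDs below are assumptions made for the sketch.

```python
import boto3

eks = boto3.client("eks")

# Create a managed Kubernetes control plane; worker nodes (managed node groups
# or self-managed EC2 instances) are added separately. All identifiers are
# placeholders for illustration.
eks.create_cluster(
    name="business-apps",
    version="1.29",
    roleArn="arn:aws:iam::111122223333:role/eks-cluster-role",
    resourcesVpcConfig={
        "subnetIds": ["subnet-0abc", "subnet-0def"],
        "endpointPublicAccess": True,
    },
)
```

Once the cluster is active, the same kubectl manifests and administrator tools used against an on-premises Kubernetes cluster can be pointed at the EKS API endpoint.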

Question 1223

Exam Question

A meteorological startup company has a custom web application to sell weather data to its users online. The company uses Amazon DynamoDB to store its data and wants to build a new service that sends an alert to the managers of four internal teams every time a new weather event is recorded. The company does not want this new service to affect the performance of the current application.

What should a solutions architect do to meet these requirements with the LEAST amount of operational overhead?

A. Use DynamoDB transactions to write new event data to the table. Configure the transactions to notify internal teams.
B. Have the current application publish a message to four Amazon Simple Notification Service (Amazon SNS) topics. Have each team subscribe to one topic.
C. Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic to which the teams can subscribe.
D. Add a custom attribute to each record to flag new items. Write a cron job that scans the table every minute for items that are new and notifies an Amazon Simple Queue Service (Amazon SQS) queue to which the teams can subscribe.

Correct Answer

C. Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic to which the teams can subscribe.

Explanation

To meet the requirements with the least amount of operational overhead, the solutions architect should leverage Amazon DynamoDB Streams and Amazon SNS.

Option A suggests using DynamoDB transactions to write new event data to the table and configure transactions to notify internal teams. While transactions can ensure atomicity and consistency of writes, they do not directly provide a mechanism for sending notifications. It would require additional development effort to implement the notification functionality.

Option B suggests having the current application publish a message to four Amazon SNS topics, with each team subscribing to one topic. This would require modifying the current application and adding logic for publishing messages to multiple topics. It would increase the complexity and maintenance of the application.

Option C is the recommended solution. Enabling Amazon DynamoDB Streams on the table allows capturing changes to the table in real-time. By using DynamoDB triggers, you can configure the stream to write to a single Amazon SNS topic. This way, whenever a new weather event is recorded, it will trigger a stream event that writes to the SNS topic. The four internal teams can then subscribe to this single SNS topic to receive the alerts. This approach minimizes operational overhead as it leverages existing AWS services and requires minimal modifications to the current application.

Option D suggests adding a custom attribute to each record to flag new items and using a cron job to scan the table for new items and notify an Amazon SQS queue. This would introduce additional complexity in managing the flagging and scanning process, and it would require setting up and managing the cron job.

Therefore, option C is the most suitable choice as it leverages DynamoDB Streams and Amazon SNS to provide a scalable and low-operational-overhead solution for sending alerts to the internal teams whenever a new weather event is recorded in DynamoDB.
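
One common way to wire this up is a small Lambda function attached to the table's stream (the "trigger") that fans each new record out to the single SNS topic. A minimal sketch, assuming the topic ARN is supplied through a hypothetical ALERT_TOPIC_ARN environment variable:

```python
import json
import os

import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ["ALERT_TOPIC_ARN"]  # hypothetical environment variable


def handler(event, context):
    """Triggered by DynamoDB Streams; publishes each new weather event to SNS."""
    for record in event.get("Records", []):
        if record.get("eventName") != "INSERT":
            continue  # only alert on newly recorded weather events
        new_image = record["dynamodb"].get("NewImage", {})
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="New weather event recorded",
            Message=json.dumps(new_image),
        )
```

All four internal teams subscribe to the one topic (for example, by email), so no changes are needed in the web application itself.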

Question 1224

Exam Question

A company wants to use an AWS Region as a disaster recovery location for its on-premises infrastructure. The company has 10 TB of existing data, and the on-premises data center has a 1 Gbps internet connection. A solutions architect must find a solution so the company can have its existing data on AWS in 72 hours without transmitting it using an unencrypted channel.

Which solution should the solutions architect select?

A. Send the initial 10 TB of data to AWS using FTP.
B. Send the initial 10 TB of data to AWS using AWS Snowball.
C. Establish a VPN connection between Amazon VPC and the company’s data center.
D. Establish an AWS Direct Connect connection between Amazon VPC and the company’s data center.

Correct Answer

B. Send the initial 10 TB of data to AWS using AWS Snowball.

Explanation

To meet the requirement of having the existing data on AWS in 72 hours without transmitting it using an unencrypted channel, the most suitable solution is to use AWS Snowball.

AWS Snowball is a physical data transfer service that enables you to securely transfer large amounts of data into and out of AWS. Snowball devices are rugged, secure, and have high storage capacity. By using AWS Snowball, the company can physically transfer the 10 TB of data to AWS without relying on the internet connection. The data is encrypted during transit and at rest, ensuring security.

Option A suggests sending the initial 10 TB of data to AWS using FTP. FTP transmits data over an unencrypted channel, which directly violates the requirement, and relying on the shared internet connection may not complete the transfer within 72 hours.

Option C suggests establishing a VPN connection between Amazon VPC and the company’s data center. While VPN connections provide secure connectivity between on-premises and AWS, they are primarily used for ongoing communication rather than bulk data transfer. It may not meet the requirement of transferring the 10 TB of data within the specified timeframe.

Option D suggests establishing an AWS Direct Connect connection between Amazon VPC and the company’s data center. AWS Direct Connect provides a dedicated network connection between on-premises and AWS, but provisioning a new connection typically takes weeks, so it cannot be relied on to move the 10 TB of data within 72 hours.

Therefore, the best solution for the given scenario is to use AWS Snowball, as it allows the company to securely and efficiently transfer the existing 10 TB of data to AWS within the required timeframe.

Question 1225

Exam Question

A company is planning to use an Amazon DynamoDB table for data storage. The company is concerned about cost optimization. The table will not be used most mornings. In the evenings, the read and write traffic will often be unpredictable. When traffic spikes occur, they will happen very quickly.

What should a solutions architect recommend?

A. Create a DynamoDB table in on-demand capacity mode.
B. Create a DynamoDB table with a global secondary index.
C. Create a DynamoDB table with provisioned capacity and auto scaling.
D. Create a DynamoDB table in provisioned capacity mode, and configure it as a global table.

Correct Answer

C. Create a DynamoDB table with provisioned capacity and auto scaling.

Explanation

To address the company’s concerns about cost optimization and the unpredictable nature of read and write traffic with quick traffic spikes, a solutions architect should recommend creating a DynamoDB table with provisioned capacity and auto scaling.

Option A suggests creating a DynamoDB table in on-demand capacity mode. While this mode provides flexibility and eliminates the need for capacity planning, it may not be cost-effective for workloads with unpredictable traffic patterns and quick traffic spikes. On-demand capacity mode can be more expensive compared to provisioned capacity with auto scaling when the workload has consistent or predictable patterns.

Option B suggests creating a DynamoDB table with a global secondary index (GSI). While a GSI can improve query flexibility and performance, it does not directly address the concerns of cost optimization and unpredictable traffic patterns.

Option D suggests creating a DynamoDB table in provisioned capacity mode and configuring it as a global table. While global tables can provide multi-region replication for global availability, it does not specifically address the concerns of cost optimization and unpredictable traffic patterns.

By choosing option C and creating a DynamoDB table with provisioned capacity and auto scaling, the company can benefit from cost optimization and handle unpredictable traffic patterns efficiently. Provisioned capacity allows the company to provision a baseline capacity to handle their typical workload, while auto scaling automatically adjusts the capacity based on demand, ensuring that traffic spikes are accommodated without impacting performance and cost-effectiveness.
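
DynamoDB auto scaling is configured through Application Auto Scaling. A hedged sketch for the read dimension only, with a placeholder table name and capacity bounds chosen purely for illustration:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target (placeholder values).
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/MyTable",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Track a target utilization so provisioned capacity follows demand automatically.
autoscaling.put_scaling_policy(
    PolicyName="read-target-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/MyTable",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```

An equivalent pair of calls covers the dynamodb:table:WriteCapacityUnits dimension.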

Question 1226

Exam Question

An operations team has a standard that states IAM policies should not be applied directly to users. Some new members have not been following this standard. The operations manager needs a way to easily identify the users with attached policies.

What should a solutions architect do to accomplish this?

A. Monitor using AWS CloudTrail.
B. Create an AWS Config rule to run daily.
C. Publish IAM user changes to Amazon SNS.
D. Run AWS Lambda when a user is modified.

Correct Answer

B. Create an AWS Config rule to run daily.

Explanation

To easily identify users with attached policies and ensure compliance with the standard of not applying IAM policies directly to users, a solutions architect should create an AWS Config rule to run daily.

Option A suggests monitoring using AWS CloudTrail. While CloudTrail provides detailed event logging, it does not offer a direct solution for identifying users with attached policies. It can provide information about API calls related to IAM, but it would require manual analysis to identify users with attached policies.

Option C suggests publishing IAM user changes to Amazon SNS. While this can help in capturing events related to IAM user changes, it would still require additional processing and analysis to identify users with attached policies.

Option D suggests running an AWS Lambda function when a user is modified. While this approach can be used to trigger specific actions based on user modifications, it does not provide a direct solution for identifying users with attached policies.

By choosing option B and creating an AWS Config rule to run daily, the operations manager can easily identify users with attached policies. The AWS Config rule can be configured to evaluate IAM user resources and check for the presence of directly attached policies. This automated approach provides an efficient way to identify non-compliant users and address the issue promptly.
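
AWS Config provides a managed rule for this check, IAM_USER_NO_POLICIES_CHECK, which flags IAM users that have inline or attached policies. A brief boto3 sketch of deploying it; the rule name used here is a placeholder (this assumes an AWS Config recorder is already set up in the account):

```python
import boto3

config = boto3.client("config")

# Deploy the AWS managed rule that flags IAM users with inline or attached
# policies. ConfigRuleName is a placeholder; SourceIdentifier is the managed
# rule identifier.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "iam-users-must-not-have-policies",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "IAM_USER_NO_POLICIES_CHECK",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::IAM::User"]},
    }
)
```

Non-compliant users then appear in the AWS Config console and can be queried through the Config API.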

Question 1227

Exam Question

A new employee has joined a company as a deployment engineer. The deployment engineer will be using AWS CloudFormation templates to create multiple AWS resources. A solutions architect wants the deployment engineer to perform job activities while following the principle of least privilege.

Which combination of actions should the solutions architect take to accomplish this goal? (Choose two.)

A. Have the deployment engineer use AWS account root user credentials for performing AWS CloudFormation stack operations.
B. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached.
C. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the AdministratorAccess IAM policy attached.
D. Create a new IAM User for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only.
E. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using that IAM role.

Correct Answer

B. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached.
E. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using that IAM role.

Explanation

To ensure the principle of least privilege and provide appropriate permissions for the deployment engineer to perform AWS CloudFormation operations, the following actions should be taken:

B. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached. This approach allows the deployment engineer to have a dedicated IAM user account with predefined permissions. The PowerUsers IAM policy typically provides a comprehensive set of permissions, including those required for AWS CloudFormation stack operations.

E. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using that IAM role. By creating a specific IAM role for AWS CloudFormation operations, the solutions architect can define the precise set of permissions required for the deployment engineer. This approach follows the principle of least privilege, as the deployment engineer will only have the necessary permissions to perform AWS CloudFormation tasks and not additional unnecessary permissions.

Options A and C are incorrect because they suggest using broad and excessive permissions by using the AWS account root user credentials or granting administrative access, respectively. These approaches do not adhere to the principle of least privilege and introduce security risks.

Option D is not the best approach because it only allows AWS CloudFormation actions and may not provide sufficient permissions for the deployment engineer to create and manage the necessary resources for the application deployment.

Therefore, options B and E provide the best combination of actions to ensure the principle of least privilege and provide the necessary permissions for the deployment engineer to perform AWS CloudFormation operations.
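
For option E, the role's permissions can be written to cover only the CloudFormation actions the engineer needs. A simplified sketch follows; the role name, account ID, user ARN, and action list are illustrative assumptions rather than a complete production policy (real stacks also need permissions for the resources they create).

```python
import json

import boto3

iam = boto3.client("iam")

# Trust policy letting the engineer's IAM user assume the role (placeholder ARN).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/deployment-engineer"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="cfn-deployment-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Inline policy scoped to CloudFormation stack operations (illustrative only).
stack_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "cloudformation:CreateStack",
            "cloudformation:UpdateStack",
            "cloudformation:DeleteStack",
            "cloudformation:DescribeStacks",
        ],
        "Resource": "arn:aws:cloudformation:*:111122223333:stack/*",
    }],
}

iam.put_role_policy(
    RoleName="cfn-deployment-role",
    PolicyName="cfn-stack-operations",
    PolicyDocument=json.dumps(stack_policy),
)
```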

Question 1228

Exam Question

A company uses an Amazon S3 bucket to store static images for its website. The company configured permissions to allow access to Amazon S3 objects by privileged users only.

What should a solutions architect do to protect against data loss? (Choose two.)

A. Enable versioning on the S3 bucket.
B. Enable access logging on the S3 bucket.
C. Enable server-side encryption on the S3 bucket.
D. Configure an S3 lifecycle rule to transition objects to Amazon S3 Glacier.
E. Use MFA Delete to require multi-factor authentication to delete an object.

Correct Answer

A. Enable versioning on the S3 bucket.
C. Enable server-side encryption on the S3 bucket.

Explanation

To protect against data loss in an Amazon S3 bucket, the following actions should be taken:

A. Enable versioning on the S3 bucket. Versioning allows multiple versions of an object to be stored in the bucket. If an object is accidentally deleted or overwritten, previous versions can be restored, providing a safety net against data loss.

C. Enable server-side encryption on the S3 bucket. Server-side encryption ensures that the data stored in the bucket is encrypted at rest. By enabling server-side encryption, even if someone gains unauthorized access to the physical storage media, the data will remain encrypted and protected.

Option B (Enable access logging on the S3 bucket) is focused on monitoring and tracking access to the bucket rather than protecting against data loss. It helps in identifying who accessed the objects in the bucket and can be useful for auditing and troubleshooting purposes.

Option D (Configure an S3 lifecycle rule to transition objects to Amazon S3 Glacier) is not directly related to protecting against data loss but rather focuses on cost optimization and long-term storage. Transitioning objects to Amazon S3 Glacier is a way to archive objects that are not frequently accessed and helps reduce storage costs. However, it does not directly address the risk of data loss.

Option E (Use MFA Delete to require multi-factor authentication to delete an object) adds an additional layer of security by requiring multi-factor authentication to delete an object. While this can help prevent accidental or unauthorized deletions, it is not directly related to protecting against data loss from other causes, such as application errors or data corruption.

Therefore, the best choices to protect against data loss are to enable versioning (option A) and enable server-side encryption (option C) on the Amazon S3 bucket.
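
Both settings are one-time bucket configuration calls. A brief sketch with a placeholder bucket name:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-static-images-bucket"  # placeholder bucket name

# Keep prior versions of objects so overwrites and deletes are recoverable.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Encrypt new objects at rest by default with S3-managed keys (SSE-S3).
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```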

Question 1229

Exam Question

A company needs to store data in Amazon S3. A compliance requirement states that when any changes are made to objects the previous state of the object with any changes must be preserved. Additionally, files older than 5 years should not be accessed but need to be archived for auditing.

What should a solutions architect recommend that is MOST cost-effective?

A. Enable object-level versioning and S3 Object Lock in governance mode
B. Enable object-level versioning and S3 Object Lock in compliance mode
C. Enable object-level versioning. Enable a lifecycle policy to move data older than 5 years to S3 Glacier Deep Archive
D. Enable object-level versioning. Enable a lifecycle policy to move data older than 5 years to S3 Standard-Infrequent Access (S3 Standard-IA)

Correct Answer

A. Enable object-level versioning and S3 Object Lock in governance mode

Explanation

To meet the compliance requirements, the following actions should be taken:

  1. Enable object-level versioning: Enabling versioning allows for the preservation of previous versions of an object when changes are made. Each update to an object will create a new version, ensuring that the previous state of the object is preserved.
  2. Enable S3 Object Lock: S3 Object Lock provides the ability to enforce retention periods on objects, preventing them from being deleted or modified for a specified duration. In this case, enabling S3 Object Lock in governance mode ensures that objects cannot be deleted or modified for a defined period, satisfying the compliance requirement.

Option B (Enable object-level versioning and S3 Object Lock in compliance mode) is more strict and may not be necessary if the compliance requirement does not explicitly mandate it. Compliance mode in S3 Object Lock enforces strict immutability for objects, making them non-deletable and non-modifiable for the specified retention period. If the compliance requirement does not specify this level of strictness, using governance mode (option A) is more cost-effective.

Option C (Enable object-level versioning and a lifecycle policy to move data older than 5 years to S3 Glacier Deep Archive) addresses the archiving requirement but does not explicitly preserve the previous state of objects when changes are made. It is also important to note that S3 Glacier Deep Archive has a minimum storage duration of 180 days, so it may not be suitable for objects that need to be accessed or archived for shorter periods.

Option D (Enable object-level versioning and a lifecycle policy to move data older than 5 years to S3 Standard-Infrequent Access) addresses the archiving requirement but does not explicitly preserve the previous state of objects when changes are made.

Therefore, the most cost-effective and compliant solution is to enable object-level versioning and S3 Object Lock in governance mode (option A).
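
S3 Object Lock has to be enabled when the bucket is created, which also turns on versioning; a default governance-mode retention can then be applied to new object versions. A sketch with a placeholder bucket name and a retention period chosen only for illustration:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-compliance-archive"  # placeholder bucket name

# Object Lock must be enabled at bucket creation; this also enables versioning.
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Apply a default governance-mode retention to new object versions.
# The 5-year period here is an illustrative assumption.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Years": 5}},
    },
)
```

In governance mode, users with the appropriate permission can still override the retention if needed, which is why it is generally less restrictive (and less operationally risky) than compliance mode.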

Question 1230

Exam Question

A company is hosting multiple websites for several lines of business under its registered parent domain. Users accessing these websites will be routed to appropriate backend Amazon EC2 instances based on the subdomain. The websites host static webpages, images, and server-side scripts like PHP and JavaScript. Some of the websites experience peak access during the first two hours of business with constant usage throughout the rest of the day. A solutions architect needs to design a solution that will automatically adjust capacity to these traffic patterns while keeping costs low.

Which combination of AWS services or features will meet these requirements? (Choose two.)

A. AWS Batch
B. Network Load Balancer
C. Application Load Balancer
D. Amazon EC2 Auto Scaling
E. Amazon S3 website hosting

Correct Answer

C. Application Load Balancer
D. Amazon EC2 Auto Scaling

Explanation

To meet the requirements of automatically adjusting capacity to traffic patterns while keeping costs low, the following AWS services or features can be used:

  1. Application Load Balancer (ALB): An ALB can distribute incoming traffic across multiple EC2 instances based on the subdomain. It supports advanced routing features, including host-based routing, which allows routing based on the subdomain. ALB automatically scales to handle the incoming traffic and ensures that the traffic is distributed evenly among the backend EC2 instances.
  2. Amazon EC2 Auto Scaling: EC2 Auto Scaling allows for automatically adjusting the number of EC2 instances based on the demand. In this case, EC2 Auto Scaling can be configured to scale out during the peak hours of the first two hours of business and scale in during the remaining hours. By adjusting the desired capacity of the Auto Scaling group based on the traffic patterns, the capacity can automatically scale up and down, optimizing costs.

Option A (AWS Batch) is not suitable for hosting websites as it is a service specifically designed for batch processing of jobs.

Option B (Network Load Balancer) may not be the best choice as it does not support advanced routing features like host-based routing required for routing based on the subdomain.

Option E (Amazon S3 website hosting) is not sufficient on its own as it only provides static website hosting and does not support server-side scripts like PHP and JavaScript.

Therefore, the combination of Application Load Balancer (ALB) and Amazon EC2 Auto Scaling (option C and D) is the most suitable for this scenario, providing the required traffic routing and automatic capacity adjustment while keeping costs low.
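
Two pieces illustrate the design: a host-header rule on the ALB listener that routes each subdomain to its line-of-business target group, and scheduled scaling actions that raise capacity ahead of the two-hour morning peak and reduce it afterward. The ARNs, names, times, and capacities below are placeholders, not values from the question.

```python
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

# Route requests for one subdomain to its target group (ARNs are placeholders).
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/example/abc/def",
    Priority=10,
    Conditions=[{"Field": "host-header", "Values": ["sales.example.com"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/sales/123",
    }],
)

# Scale the fleet out shortly before the morning peak, then back in afterward.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="sales-web-asg",          # placeholder group name
    ScheduledActionName="morning-peak-scale-out",
    Recurrence="45 7 * * 1-5",                     # cron expression, UTC
    MinSize=4,
    DesiredCapacity=8,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="sales-web-asg",
    ScheduledActionName="post-peak-scale-in",
    Recurrence="0 10 * * 1-5",
    MinSize=2,
    DesiredCapacity=2,
)
```

Target-tracking scaling policies can be layered on top of the scheduled actions to absorb any remaining variation during the rest of the day.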