
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 16

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free of charge to help you prepare for the AWS Certified Solutions Architect – Associate SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate SAA-C03 certification.

Question 871

Exam Question

A solutions architect needs to design a solution that retrieves data every 2 minutes from a third party web service that is accessible through the internet. A Python script runs the data retrieval in less than 100 milliseconds for each retrieval. The response is a JSON object that contains sensor data that is less than 1 KB in size. The solutions architect needs to store the JSON object along with the timestamp.

Which solution meets these requirements MOST cost-effectively?

A. Deploy an Amazon EC2 instance with a Linux operating system. Configure a cron job to run the script every 2 minutes. Extend the script to store the JSON object along with the timestamp in a MySQL database that is hosted on an Amazon RDS DB instance.
B. Deploy an Amazon EC2 instance with a Linux operating system to extend the script to run in an infinite loop every 2 minutes. Store the JSON object along with the timestamp in an Amazon DynamoDB table that uses the timestamp as the primary key. Run the script on the EC2 instance.
C. Deploy an AWS Lambda function to extend the script to store the JSON object along with the timestamp in an Amazon DynamoDB table that uses the timestamp as the primary key. Use an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that is initiated every 2 minutes to invoke the Lambda function.
D. Deploy an AWS Lambda function to extend the script to run in an infinite loop every 2 minutes. Store the JSON object along with the timestamp in an Amazon DynamoDB table that uses the timestamp as the primary key. Ensure that the script is called by the handler function that is configured for the Lambda function.

Correct Answer

C. Deploy an AWS Lambda function to extend the script to store the JSON object along with the timestamp in an Amazon DynamoDB table that uses the timestamp as the primary key. Use an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that is initiated every 2 minutes to invoke the Lambda function.

Explanation

To meet the requirements of retrieving data every 2 minutes from a third-party web service, storing the JSON object along with the timestamp, and doing so in a cost-effective manner, a solutions architect should recommend the following:

C. Deploy an AWS Lambda function to extend the script to store the JSON object along with the timestamp in an Amazon DynamoDB table that uses the timestamp as the primary key. Use an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that is initiated every 2 minutes to invoke the Lambda function.

This solution leverages the serverless architecture of AWS Lambda, which eliminates the need to manage and provision infrastructure. The Lambda function can be triggered by an Amazon EventBridge scheduled event, ensuring that the script runs every 2 minutes.

The Lambda function can be written in Python and can execute the script to retrieve the JSON object from the third-party web service. The JSON object, along with the timestamp, can then be stored in an Amazon DynamoDB table using the timestamp as the primary key. This allows for efficient retrieval and querying of the data.
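As a rough sketch of what such a function might look like (the table name, endpoint URL, and attribute names below are placeholders, not taken from the question):

```python
import json
import time
import urllib.request

import boto3

TABLE_NAME = "sensor-readings"                   # hypothetical DynamoDB table name
SERVICE_URL = "https://example.com/api/sensor"   # placeholder third-party endpoint

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)


def handler(event, context):
    # Retrieve the JSON object (< 1 KB) from the third-party web service.
    with urllib.request.urlopen(SERVICE_URL, timeout=5) as response:
        payload = json.loads(response.read())

    # Store the payload together with a timestamp; the timestamp is the partition key.
    item = {
        "timestamp": str(int(time.time() * 1000)),  # epoch milliseconds as the primary key
        "payload": json.dumps(payload),
    }
    table.put_item(Item=item)
    return {"statusCode": 200, "body": "stored"}
```

An Amazon EventBridge (CloudWatch Events) rule with a rate(2 minutes) schedule expression would then be configured with this function as its target.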

Option A, deploying an EC2 instance with a cron job and storing the data in a MySQL database on Amazon RDS, incurs additional costs and administrative overhead for managing and maintaining the EC2 instance and database.

Option B, deploying an EC2 instance with an infinite loop and storing the data in DynamoDB, introduces unnecessary complexity and cost by utilizing an EC2 instance when a serverless approach can be used.

Option D, deploying a Lambda function with an infinite loop, is not recommended because it contradicts the serverless nature of Lambda, which is event-driven.

Therefore, option C is the most cost-effective solution that meets the given requirements by leveraging AWS Lambda, DynamoDB, and Amazon EventBridge.

Reference

AWS > Documentation > Amazon EventBridge > User Guide > Tutorial: Schedule AWS Lambda functions using EventBridge

Question 872

Exam Question

A solutions architect must provide a fully managed replacement for an on-premises solution that allows employees and partners to exchange files. The solution must be easily accessible to employees connecting from on-premises systems, remote employees, and external partners.

Which solution meets these requirements?

A. Use AWS Transfer for SFTP to transfer files into and out of Amazon S3.
B. Use AWS Snowball Edge for local storage and large-scale data transfers.
C. Use Amazon FSx to store and transfer files to make them available remotely.
D. Use AWS Storage Gateway to create a volume gateway to store and transfer files to Amazon S3.

Correct Answer

A. Use AWS Transfer for SFTP to transfer files into and out of Amazon S3.

Explanation

The solution that meets the requirements of providing a fully managed replacement for an on-premises file exchange solution, accessible to employees connecting from on-premises systems, remote employees, and external partners is:

A. Use AWS Transfer for SFTP to transfer files into and out of Amazon S3.

AWS Transfer for SFTP (part of the AWS Transfer Family) is a fully managed service that enables secure file transfers over the SSH File Transfer Protocol (SFTP). It provides a highly available and scalable SFTP endpoint that allows users to transfer files to and from Amazon S3, eliminating the need for on-premises file exchange infrastructure. With AWS Transfer for SFTP, employees connecting from on-premises systems, remote employees, and external partners can easily access and exchange files securely.
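For illustration only, a Transfer Family SFTP endpoint backed by S3 could be provisioned with a few API calls; the user name, IAM role ARN, bucket, and SSH key below are hypothetical:

```python
import boto3

transfer = boto3.client("transfer")

# Create a managed SFTP endpoint backed by Amazon S3.
server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    IdentityProviderType="SERVICE_MANAGED",
    EndpointType="PUBLIC",
)

# Map a user to an S3 home directory; the role must grant access to the bucket.
transfer.create_user(
    ServerId=server["ServerId"],
    UserName="partner-acme",                                    # hypothetical user
    Role="arn:aws:iam::111122223333:role/transfer-s3-access",   # hypothetical role
    HomeDirectory="/file-exchange-bucket/partner-acme",         # hypothetical bucket/prefix
    SshPublicKeyBody="ssh-rsa AAAA...",                         # the user's public key
)
```

Each employee or partner then connects with any standard SFTP client, and the transferred files land directly in the S3 bucket.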

Option B, AWS Snowball Edge, is primarily used for large-scale data transfers and offline data storage, and it is not designed for real-time file exchange or collaboration.

Option C, Amazon FSx, provides fully managed file systems (such as FSx for Windows File Server and FSx for Lustre) that are intended primarily for application and file-share workloads inside a network, so it is not the most suitable solution for exchanging files with external partners.

Option D, AWS Storage Gateway, is primarily used for integrating on-premises environments with cloud storage and is not specifically designed for file exchange or collaboration.

Therefore, option A, using AWS Transfer for SFTP to transfer files into and out of Amazon S3, is the most appropriate and fully managed solution for the given requirements.

Reference

AWS Transfer Family

Question 873

Exam Question

A law firm needs to share information with the public. The information includes hundreds of files that must be publicly readable. Modifications or deletions of the files by anyone before a designated future date are prohibited.

Which solution will meet these requirements in the MOST secure way?

A. Upload all files to an Amazon S3 bucket that is configured for static website hosting. Grant read only IAM permissions to any AWS principals that access the S3 bucket until the designated date.
B. Create a new Amazon S3 bucket with S3 Versioning enabled. Use S3 Object Lock with a retention period in accordance with the designated date. Configure the S3 bucket for static website hosting. Set an S3 bucket policy to allow read-only access to the objects.
C. Create a new Amazon S3 bucket with S3 Versioning enabled. Configure an event trigger to run an AWS Lambda function in case of object modification or deletion. Configure the Lambda function to replace the objects with the original versions from a private S3 bucket.
D. Upload all files to an Amazon S3 bucket that is configured for static website hosting. Select the folder that contains the files. Use S3 Object Lock with a retention period in accordance with the designated date. Grant read-only IAM permissions to any AWS principals that access the S3 bucket.

Correct Answer

B. Create a new Amazon S3 bucket with S3 Versioning enabled. Use S3 Object Lock with a retention period in accordance with the designated date. Configure the S3 bucket for static website hosting. Set an S3 bucket policy to allow read-only access to the objects.

Explanation

The solution that meets the requirements of securely sharing information with the public, allowing read access but prohibiting modifications or deletions before a designated future date, in the most secure way is:

B. Create a new Amazon S3 bucket with S3 Versioning enabled. Use S3 Object Lock with a retention period in accordance with the designated date. Configure the S3 bucket for static website hosting. Set an S3 bucket policy to allow read-only access to the objects.

By creating a new Amazon S3 bucket with S3 Versioning enabled, you can ensure that all modifications or deletions of the files are tracked and retained. With S3 Object Lock, you can enforce a retention period, preventing modifications or deletions of the objects until the designated date.

Configuring the S3 bucket for static website hosting allows the files to be publicly accessible. By setting an S3 bucket policy to allow read-only access to the objects, you ensure that the public can read the files but cannot make modifications or deletions.
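A minimal boto3 sketch of this setup, assuming a hypothetical bucket name, object key, and retention date (the account's S3 Block Public Access settings must also permit the public bucket policy):

```python
import json
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = "law-firm-public-files"                            # hypothetical bucket name
RETAIN_UNTIL = datetime(2026, 1, 1, tzinfo=timezone.utc)    # the designated future date

# Object Lock can only be enabled when the bucket is created; it also enables Versioning.
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Upload a file in COMPLIANCE mode so no one can overwrite or delete it before the date.
s3.put_object(
    Bucket=BUCKET,
    Key="case-files/document-001.pdf",
    Body=open("document-001.pdf", "rb"),
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=RETAIN_UNTIL,
)

# Bucket policy that allows anonymous, read-only access to the objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))

# Enable static website hosting for the bucket.
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)
```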

Option A, granting read-only IAM permissions to AWS principals, does not provide the same level of enforcement against modifications or deletions as S3 Object Lock.

Option C, using an AWS Lambda function to replace modified or deleted objects with the original versions from a private S3 bucket, adds complexity and introduces additional processing overhead that is not necessary for the given requirements.

Option D, applying S3 Object Lock to a selected folder and granting read-only IAM permissions, is not viable. Object Lock can be used only on buckets that have S3 Versioning enabled and is normally enabled when the bucket is created; it applies to object versions, not to a "folder" selected after upload. Relying on IAM permissions alone also does not provide anonymous public read access the way a bucket policy does.

Therefore, option B provides the most secure solution by combining S3 Versioning, S3 Object Lock, static website hosting, and a read-only S3 bucket policy to meet the requirements.

Reference

AWS > Documentation > Amazon Simple Storage Service (S3) > User Guide > Tutorial: Configuring a static website on Amazon S3

Question 874

Exam Question

A company is using Amazon Redshift for analytics and to generate customer reports. The company recently acquired 50 TB of additional customer demographic data. The data is stored in .csv files in Amazon S3. The company needs a solution that joins the data and visualizes the results with the least possible cost and effort.

What should a solutions architect recommend to meet these requirements?

A. Use Amazon Redshift Spectrum to query the data in Amazon S3 directly and join that data with the existing data in Amazon Redshift. Use Amazon QuickSight to build the visualizations.
B. Use Amazon Athena to query the data in Amazon S3. Use Amazon QuickSight to join the data from Athena with the existing data in Amazon Redshift and to build the visualizations.
C. Increase the size of the Amazon Redshift cluster, and load the data from Amazon S3. Use Amazon EMR Notebooks to query the data and build the visualizations in Amazon Redshift.
D. Export the data from the Amazon Redshift cluster into Apache Parquet files in Amazon S3. Use Amazon Elasticsearch Service (Amazon ES) to query the data. Use Kibana to visualize the results.

Correct Answer

A. Use Amazon Redshift Spectrum to query the data in Amazon S3 directly and join that data with the existing data in Amazon Redshift. Use Amazon QuickSight to build the visualizations.

Explanation

To meet the requirements of joining the additional customer demographic data stored in .csv files in Amazon S3 and visualizing the results with the least possible cost and effort, a solutions architect should recommend:

A. Use Amazon Redshift Spectrum to query the data in Amazon S3 directly and join that data with the existing data in Amazon Redshift. Use Amazon QuickSight to build the visualizations.

Amazon Redshift Spectrum lets the existing Redshift cluster query the .csv files in Amazon S3 in place, without loading the 50 TB of data into the cluster. Because Spectrum (external) tables can be referenced in the same SQL statement as local Redshift tables, the new demographic data can be joined with the existing customer data in a single query, which keeps both cost and effort low.

Amazon QuickSight can then connect to the Redshift cluster as a single data source and build the visualizations and dashboards on top of the joined result. (A sketch of the Spectrum setup follows below.)
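As a sketch of the Spectrum setup (the cluster identifier, database, IAM role, schema, and table names are all hypothetical, and the external table over the .csv files is assumed to be defined in the AWS Glue Data Catalog):

```python
import boto3

client = boto3.client("redshift-data")

CLUSTER = "analytics-cluster"   # hypothetical cluster identifier
DATABASE = "dev"                # hypothetical database
DB_USER = "analytics_user"      # hypothetical database user

# One-time setup: expose the S3 .csv files as an external (Spectrum) schema.
create_schema_sql = """
CREATE EXTERNAL SCHEMA IF NOT EXISTS demographics_ext
FROM DATA CATALOG DATABASE 'demographics'
IAM_ROLE 'arn:aws:iam::111122223333:role/redshift-spectrum-role'
CREATE EXTERNAL DATABASE IF NOT EXISTS;
"""
client.execute_statement(
    ClusterIdentifier=CLUSTER, Database=DATABASE, DbUser=DB_USER, Sql=create_schema_sql
)

# Join the external S3 data with an existing Redshift table in a single query.
join_sql = """
SELECT c.customer_id, c.total_spend, d.age_band, d.region
FROM sales.customers AS c
JOIN demographics_ext.customer_demographics AS d
  ON c.customer_id = d.customer_id;
"""
client.execute_statement(
    ClusterIdentifier=CLUSTER, Database=DATABASE, DbUser=DB_USER, Sql=join_sql
)
```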

Option B suggests using Amazon Athena to query the data in S3 and joining it with the Redshift data in Amazon QuickSight. QuickSight joins across different data sources are performed in SPICE and are subject to dataset size limits, so joining a 50 TB data set with the Redshift data inside QuickSight is not practical, and this option also adds a second query service to operate.

Option C suggests increasing the size of the Amazon Redshift cluster and loading the data from Amazon S3. This would work, but resizing the cluster and loading 50 TB of data adds cost and effort that Spectrum avoids, and Amazon EMR Notebooks add yet another service to manage.

Option D, exporting the data to Apache Parquet files in Amazon S3 and using Amazon Elasticsearch Service (Amazon ES) and Kibana for querying and visualization, introduces unnecessary complexity and dependencies on additional services, making it less cost-effective and more effort-intensive.

Therefore, option A provides the most cost-effective, lowest-effort solution by using Amazon Redshift Spectrum to query the data in Amazon S3 and join it with the existing data in Amazon Redshift, and Amazon QuickSight to build the visualizations.

Question 875

Exam Question

A company wants to enforce strict security guidelines on accessing AWS Cloud resources as the company migrates production workloads from its data centers. Company management wants all users to receive permissions according to their job roles and functions.

Which solution meets these requirements with the LEAST operational overhead?

A. Create an AWS Single Sign-On deployment. Connect to the on-premises Active Directory to centrally manage users and permissions across the company.
B. Create an IAM role for each job function. Require each employee to call the sts:AssumeRole action in the AWS Management Console to perform their job role.
C. Create individual IAM user accounts for each employee. Create an IAM policy for each job function, and attach the policy to all IAM users based on their job role.
D. Create individual IAM user accounts for each employee. Create IAM policies for each job function. Create IAM groups, and attach associated policies to each group. Assign the IAM users to a group based on their job role.

Correct Answer

D. Create individual IAM user accounts for each employee. Create IAM policies for each job function. Create IAM groups, and attach associated policies to each group. Assign the IAM users to a group based on their job role.

Explanation

To meet the requirements of enforcing strict security guidelines on accessing AWS Cloud resources with the least operational overhead, a solutions architect should recommend:

D. Create individual IAM user accounts for each employee. Create IAM policies for each job function. Create IAM groups and attach associated policies to each group. Assign the IAM users to a group based on their job role.

This approach provides a structured and scalable solution for managing permissions and access control. By creating individual IAM user accounts for each employee, you can assign specific permissions to each user based on their job role.

By creating IAM policies for each job function, you can define the necessary permissions for each role in a centralized manner. These policies can be easily managed and updated as needed.

Creating IAM groups and attaching the relevant policies to each group allows for efficient management of permissions. Users can be assigned to groups based on their job role, which simplifies the process of granting and revoking access. When a user is added to or removed from a group, their permissions are automatically updated based on the policies attached to that group.

This approach minimizes operational overhead by leveraging IAM’s built-in features for managing permissions and access control. It provides a flexible and scalable solution that aligns with the principle of least privilege, as each user receives permissions according to their job role without requiring individual policy assignments or role assumption.
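For illustration, the group-and-policy structure could be created with a few IAM API calls; the policy name, group name, user name, and actions below are hypothetical:

```python
import json

import boto3

iam = boto3.client("iam")

# Policy scoped to one job function (names and actions are illustrative only).
policy = iam.create_policy(
    PolicyName="DataAnalystJobFunction",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["athena:*", "s3:GetObject", "s3:ListBucket"],
            "Resource": "*",
        }],
    }),
)

# One group per job function, with the matching policy attached.
iam.create_group(GroupName="DataAnalysts")
iam.attach_group_policy(GroupName="DataAnalysts", PolicyArn=policy["Policy"]["Arn"])

# Each employee gets an IAM user and is placed in the group for their job role.
iam.create_user(UserName="jdoe")
iam.add_user_to_group(GroupName="DataAnalysts", UserName="jdoe")
```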

Option A suggests using AWS Single Sign-On (SSO) with on-premises Active Directory integration. While SSO can streamline the authentication process and provide centralized user management, it may introduce additional operational overhead and complexity compared to the simpler IAM-based approach.

Option B, requiring employees to assume IAM roles manually, can be burdensome and error-prone, leading to increased operational overhead and potential security risks.

Option C, creating individual IAM user accounts for each employee and attaching IAM policies directly to each user, can be difficult to manage at scale and may result in duplicative policy assignments and difficulties in maintaining consistency across job roles.

Therefore, option D provides a solution that meets the requirements with the least operational overhead by utilizing IAM groups, policies, and individual user accounts to manage permissions based on job roles.

Reference

Active Directory Domain Services on AWS

Question 876

Exam Question

A company is using AWS Organizations with two AWS accounts: Logistics and Sales. The Logistics account operates an Amazon Redshift cluster. The Sales account includes Amazon EC2 instances. The Sales account needs to access the Logistics account’s Amazon Redshift cluster.

What should a solutions architect recommend to meet this requirement MOST cost-effectively?

A. Set up VPC sharing with the Logistics account as the owner and the Sales account as the participant to transfer the data.
B. Create an AWS Lambda function in the Logistics account to transfer data to the Amazon EC2 instances in the Sales account.
C. Create a snapshot of the Amazon Redshift cluster, and share the snapshot with the Sales account. In the Sales account, restore the cluster by using the snapshot ID that is shared by the Logistics account.
D. Run COPY commands to load data from Amazon Redshift into Amazon S3 buckets in the Logistics account. Grant permissions to the Sales account to access the S3 buckets of the Logistics account.

Correct Answer

D. Run COPY commands to load data from Amazon Redshift into Amazon S3 buckets in the Logistics account. Grant permissions to the Sales account to access the S3 buckets of the Logistics account.

Explanation

To allow the Sales account to access the Logistics account’s Amazon Redshift cluster in the most cost-effective way, a solutions architect should recommend:

D. Run COPY commands to load data from Amazon Redshift into Amazon S3 buckets in the Logistics account. Grant permissions to the Sales account to access the S3 buckets of the Logistics account.

This approach exports data from the Redshift cluster into Amazon S3 buckets in the Logistics account. (Although the option is worded in terms of COPY commands, the Redshift command that writes query results out to Amazon S3 is UNLOAD; COPY loads data from S3 into Redshift.) By granting permissions to the Sales account to access the S3 buckets in the Logistics account, the Sales account can retrieve the data stored in those buckets.

Here’s how the solution works:

  1. In the Logistics account, run an UNLOAD statement on the Redshift cluster to export the required data to Amazon S3. UNLOAD lets you specify the destination S3 bucket, file format, and other parameters.
  2. Grant the Sales account permission to access the S3 buckets in the Logistics account, for example with a bucket policy in the Logistics account that allows the Sales account to read the specific buckets and objects needed.
  3. In the Sales account, use the AWS SDK or the AWS CLI on the EC2 instances to read the exported objects from the S3 buckets in the Logistics account. (A sketch of steps 1 and 2 follows this list.)
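A sketch of steps 1 and 2, assuming hypothetical cluster, role, bucket, and account identifiers:

```python
import json

import boto3

# 1. In the Logistics account: export the Redshift data to S3 with UNLOAD.
redshift_data = boto3.client("redshift-data")
unload_sql = """
UNLOAD ('SELECT * FROM sales_summary')
TO 's3://logistics-redshift-export/sales_summary_'
IAM_ROLE 'arn:aws:iam::111111111111:role/redshift-unload-role'
FORMAT AS PARQUET;
"""
redshift_data.execute_statement(
    ClusterIdentifier="logistics-cluster",   # hypothetical cluster
    Database="analytics",
    DbUser="admin",
    Sql=unload_sql,
)

# 2. Bucket policy that lets the Sales account (222222222222) read the exported objects.
s3 = boto3.client("s3")
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::222222222222:root"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::logistics-redshift-export",
            "arn:aws:s3:::logistics-redshift-export/*",
        ],
    }],
}
s3.put_bucket_policy(Bucket="logistics-redshift-export", Policy=json.dumps(bucket_policy))
```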

By leveraging Amazon S3 as an intermediate data store and granting access to the Sales account, you establish a secure and cost-effective way for the Sales account to retrieve the data from the Amazon Redshift cluster in the Logistics account. It avoids the need for complex VPC sharing, data transfers through AWS Lambda, or snapshot sharing, which can incur additional costs and operational complexity.

Therefore, option D provides a solution that meets the requirement in the most cost-effective manner by exporting data from Amazon Redshift into Amazon S3 and granting the Sales account permission to access the S3 buckets.

Question 877

Exam Question

An ecommerce company needs to run a scheduled daily job to aggregate and filter sales records for analytics. The company stores the sales records in an Amazon S3 bucket. Each object can be up to 10 GB in size. Based on the number of sales events, the job can take up to an hour to complete. The CPU and memory usage of the job are constant and are known in advance. A solutions architect needs to minimize the amount of operational effort that is needed for the job to run.

Which solution meets these requirements?

A. Create an AWS Lambda function that has an Amazon EventBridge (Amazon CloudWatch Events) notification. Schedule the EventBridge (CloudWatch Events) event to run once a day.
B. Create an AWS Lambda function. Create an Amazon API Gateway HTTP API, and integrate the API with the function. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that calls the API and invokes the function.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch type. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that launches an ECS task on the cluster to run the job.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch type and an Auto Scaling group with at least one EC2 instance. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that launches an ECS task on the cluster to run the job.

Correct Answer

C. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch type. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that launches an ECS task on the cluster to run the job.

Explanation

To minimize operational effort and meet the requirements of running a scheduled daily job to aggregate and filter sales records for analytics stored in an Amazon S3 bucket, a solutions architect should recommend:

C. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch type. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that launches an ECS task on the cluster to run the job.

The job can take up to an hour to complete, which rules out AWS Lambda: Lambda functions have a maximum execution timeout of 15 minutes, so options A and B cannot reliably run this workload. The CPU and memory requirements are constant and known in advance, which maps directly to a Fargate task size. Here's how the solution works:

  1. Package the aggregation and filtering job as a container image and register an ECS task definition that specifies the known CPU and memory settings.
  2. Create an ECS cluster that uses the Fargate launch type, so there are no EC2 instances, Auto Scaling groups, or operating systems to manage.
  3. Create an Amazon EventBridge (CloudWatch Events) scheduled rule that runs once a day and launches the ECS task on the cluster as its target.
  4. The task reads the sales records (objects up to 10 GB) from the S3 bucket, performs the aggregation and filtering, and stops when the job finishes, so you pay only for the vCPU and memory consumed while it runs.

By combining Fargate with an EventBridge schedule, you avoid provisioning and managing servers while still supporting a long-running job, which keeps operational effort to a minimum.

Option D also runs the job on ECS, but the EC2 launch type requires managing an Auto Scaling group and at least one always-on EC2 instance, which adds operational overhead and cost.

Therefore, option C is the recommended solution: a scheduled EventBridge event that launches an ECS task on a Fargate cluster runs the daily job with the least operational effort. (A sketch of the scheduled rule follows below.)
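As a sketch of the scheduled rule, assuming the cluster, task definition, IAM role, and subnet identifiers below are placeholders:

```python
import boto3

events = boto3.client("events")

# Daily schedule rule (runs once a day at 03:00 UTC; the cron expression is illustrative).
events.put_rule(Name="daily-sales-aggregation", ScheduleExpression="cron(0 3 * * ? *)")

# Target the rule at an ECS task on a Fargate cluster.
events.put_targets(
    Rule="daily-sales-aggregation",
    Targets=[{
        "Id": "sales-aggregation-task",
        "Arn": "arn:aws:ecs:us-east-1:111122223333:cluster/analytics",
        "RoleArn": "arn:aws:iam::111122223333:role/ecs-events-role",
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/sales-aggregation:1",
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": ["subnet-0123456789abcdef0"],
                    "AssignPublicIp": "ENABLED",
                },
            },
        },
    }],
)
```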

Reference

AWS > Documentation > Amazon Elastic Container Service > Developer Guide > Scheduled tasks

Question 878

Exam Question

A company has two VPCs named Management and Production. The Management VPC uses VPNs through a customer gateway to connect to a single device in the data center. The Production VPC uses a virtual private gateway with two attached AWS Direct Connect connections. The Management and Production VPCs both use a single VPC peering connection to allow communication between the applications.

What should a solutions architect do to mitigate any single point of failure in this architecture?

A. Add a set of VPNs between the Management and Production VPCs.
B. Add a second virtual private gateway and attach it to the Management VPC.
C. Add a second set of VPNs to the Management VPC from a second customer gateway device.
D. Add a second VPC peering connection between the Management VPC and the Production VPC.

Correct Answer

C. Add a second set of VPNs to the Management VPC from a second customer gateway device.

Explanation

To mitigate any single point of failure in the architecture and ensure high availability, a solutions architect should recommend:

C. Add a second set of VPNs to the Management VPC from a second customer gateway device.

In the current architecture, the Production VPC already has redundant connectivity to the data center through two AWS Direct Connect connections attached to its virtual private gateway. VPC peering is a highly available, managed connection that is not a single point of failure, and AWS does not allow more than one peering connection between the same pair of VPCs, so option D is not possible. The remaining single point of failure is the Management VPC's VPN connectivity, which terminates on a single customer gateway device in the data center. If that one device fails, the Management VPC loses its connection to the data center.

Adding a second set of VPN connections from a second customer gateway device gives the Management VPC redundant paths to the data center and eliminates the single point of failure.

Option A adds VPNs between the two VPCs, which is unnecessary because the VPC peering connection already provides highly available connectivity between them. Option B is not valid because a VPC can have only one virtual private gateway attached at a time, and the virtual private gateway is not the component at risk.

Therefore, option C is the correct choice to mitigate the single point of failure by terminating a second set of VPNs on a second customer gateway device in the data center.
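For illustration, the second customer gateway and VPN connection could be created along these lines; the public IP address, BGP ASN, and virtual private gateway ID are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Register the second on-premises device as an additional customer gateway.
cgw = ec2.create_customer_gateway(
    BgpAsn=65001,
    PublicIp="203.0.113.20",
    Type="ipsec.1",
)

# Create a second site-to-site VPN from the Management VPC's virtual private gateway
# to the new customer gateway device.
ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    Type="ipsec.1",
    VpnGatewayId="vgw-0123456789abcdef0",  # hypothetical virtual private gateway ID
)
```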

Question 879

Exam Question

A company recently announced the deployment of its retail website to a global audience. The website runs on multiple Amazon EC2 instances behind an Elastic Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The company wants to provide its customers with different versions of content based on the devices that the customers use to access the website.

Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)

A. Configure Amazon CloudFront to cache multiple versions of the content.
B. Configure a host header in a Network Load Balancer to forward traffic to different instances.
C. Configure a Lambda@Edge function to send specific objects to users based on the User-Agent header.
D. Configure AWS Global Accelerator. Forward requests to a Network Load Balancer (NLB). Configure the NLB to set up host-based routing to different EC2 instances.
E. Configure AWS Global Accelerator. Forward requests to a Network Load Balancer (NLB). Configure the NLB to set up path-based routing to different EC2 instances.

Correct Answer

A. Configure Amazon CloudFront to cache multiple versions of the content.
C. Configure a Lambda@Edge function to send specific objects to users based on the User-Agent header.

Explanation

To meet the requirements of providing different versions of content based on the devices that customers use to access the website, a solutions architect should take the following combination of actions:

A. Configure Amazon CloudFront to cache multiple versions of the content.
C. Configure a Lambda@Edge function to send specific objects to users based on the User-Agent header.

A. Configure Amazon CloudFront to cache multiple versions of the content:
Amazon CloudFront can be placed in front of the Elastic Load Balancer and configured to include device information (the User-Agent header or CloudFront's device-detection headers) in the cache key. CloudFront then caches and serves a separate version of each object per device type, which improves performance for a global audience while reducing load on the origin.

C. Configure a Lambda@Edge function to send specific objects to users based on the User-Agent header:

Lambda@Edge runs serverless functions at CloudFront edge locations, closer to the users. The function can inspect the User-Agent header (or the CloudFront device headers) of each incoming request and return or rewrite the request to the version of the content that matches the viewer's device.

Together, these actions serve device-appropriate content from the edge: Lambda@Edge selects the right version based on the User-Agent header, and CloudFront caches each version so subsequent viewers with the same device type are served directly from the cache.

Options B, D, and E all rely on a Network Load Balancer performing host-based or path-based routing. A Network Load Balancer operates at Layer 4 (TCP/UDP) and does not inspect HTTP headers or URL paths, so it cannot route requests based on a host header, a URL path, or the User-Agent header; those are features of an Application Load Balancer. AWS Global Accelerator likewise improves global network performance but does not customize content by device type.

Therefore, the correct combination of actions is to configure CloudFront to cache multiple versions of the content and to configure a Lambda@Edge function that selects content based on the User-Agent header. (A minimal Lambda@Edge sketch follows below.)
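A minimal sketch of such a Lambda@Edge function, assuming hypothetical /mobile and /desktop URI prefixes and assuming CloudFront is configured to forward its device-detection header:

```python
# A minimal Lambda@Edge origin-request handler (Python). CloudFront sets the
# CloudFront-Is-Mobile-Viewer header when the cache policy includes it; the URI
# prefixes below are hypothetical.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    is_mobile = (
        headers.get("cloudfront-is-mobile-viewer", [{}])[0].get("value") == "true"
    )

    # Rewrite the origin path so mobile viewers get the mobile version of the content.
    if is_mobile:
        request["uri"] = "/mobile" + request["uri"]
    else:
        request["uri"] = "/desktop" + request["uri"]

    return request
```

Because CloudFront caches the /mobile and /desktop variants separately (option A), most requests are answered from the cache without reaching the origin at all.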

Question 880

Exam Question

The following IAM policy is attached to an IAM group. This is the only policy applied to the group.

What are the effective IAM permissions of this policy for group members?

A. Group members are permitted any Amazon EC2 action within the us-east-1 Region. Statements after the Allow permission are not applied.
B. Group members are denied any Amazon EC2 permissions in the us-east-1 Region unless they are logged in with multi-factor authentication (MFA).
C. Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for all Regions when logged in with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action.
D. Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for the us-east-1 Region only when logged in with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action within the us-east-1 Region.

Correct Answer

D. Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for the us-east-1 Region only when logged in with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action within the us-east-1 Region.
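The policy document referenced by this question is not reproduced on this page. Purely as a hypothetical reconstruction that is consistent with answer D (not the original exhibit), such a policy would look roughly like this:

```python
import json

# Hypothetical policy consistent with answer D: EC2 actions are allowed in us-east-1,
# but stopping or terminating instances is denied unless MFA is present.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {"StringEquals": {"ec2:Region": "us-east-1"}},
        },
        {
            "Effect": "Deny",
            "Action": ["ec2:StopInstances", "ec2:TerminateInstances"],
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        },
    ],
}
print(json.dumps(policy, indent=2))
```

With a policy of this shape, the explicit Deny suppresses ec2:StopInstances and ec2:TerminateInstances whenever MFA is absent, while the Allow statement permits all other EC2 actions only in the us-east-1 Region, matching answer D.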