The latest AWS Certified Solutions Architect – Associate (SAA-C03) practice exam questions and answers (Q&A) are available free of charge to help you pass the AWS Certified Solutions Architect – Associate (SAA-C03) exam and earn the AWS Certified Solutions Architect – Associate certification.
Table of Contents
- Question 1111
- Exam Question
- Correct Answer
- Explanation
- Question 1112
- Exam Question
- Correct Answer
- Explanation
- Question 1113
- Exam Question
- Correct Answer
- Explanation
- Question 1114
- Exam Question
- Correct Answer
- Explanation
- Question 1115
- Exam Question
- Correct Answer
- Explanation
- Question 1116
- Exam Question
- Correct Answer
- Explanation
- Question 1117
- Exam Question
- Correct Answer
- Explanation
- Question 1118
- Exam Question
- Correct Answer
- Explanation
- Question 1119
- Exam Question
- Correct Answer
- Explanation
- Question 1120
- Exam Question
- Correct Answer
- Explanation
Question 1111
Exam Question
A company wants to use Amazon S3 for the secondary copy of its on-premises dataset. The company would rarely need to access this copy. The storage solution’s cost should be minimal.
Which storage solution meets these requirements?
A. S3 Standard
B. S3 Intelligent-Tiering
C. S3 Standard-Infrequent Access (S3 Standard-IA)
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)
Correct Answer
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)
Explanation
In this scenario, where the company wants to use Amazon S3 as a secondary copy of its on-premises dataset and rarely needs to access this copy, the most cost-effective storage solution would be S3 One Zone-Infrequent Access (S3 One Zone-IA).
S3 One Zone-IA costs roughly 20% less than S3 Standard-IA because it stores data in a single Availability Zone instead of across a minimum of three. It is ideal for secondary copies and for data that can be easily reproduced or re-created if the Availability Zone is lost. Since the company rarely needs to access this copy and the primary dataset remains on premises, S3 One Zone-IA minimizes storage costs.
Let’s briefly compare the options:
A. S3 Standard: This storage class offers high durability, availability, and performance but has higher costs compared to other options. It is suitable for frequently accessed data.
B. S3 Intelligent-Tiering: This storage class automatically moves objects between access tiers based on changing access patterns to optimize costs, but it adds a small per-object monitoring and automation charge. It is suitable for data with unpredictable access patterns, not for data that is already known to be rarely accessed.
C. S3 Standard-Infrequent Access (S3 Standard-IA): This storage class is designed for data that is accessed less frequently but still requires immediate access when needed. While it provides lower storage costs compared to S3 Standard, it may not be the most cost-effective option if the data is rarely accessed.
D. S3 One Zone-Infrequent Access (S3 One Zone-IA): This storage class provides cost savings by storing data in a single availability zone. While it offers lower costs compared to other options, it may not provide the same level of availability as S3 Standard or S3 Standard-IA, as it is limited to a single zone. However, if the dataset can be easily reproduced or recreated, and the company rarely needs to access this copy, S3 One Zone-IA can be a cost-effective solution.
Therefore, in this scenario, where cost minimization is a priority and the secondary copy of the dataset is rarely accessed, the most suitable storage solution would be S3 One Zone-Infrequent Access (S3 One Zone-IA) (Option D).
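As a minimal illustration, a backup object can be written directly to the S3 One Zone-IA storage class at upload time. This sketch assumes the boto3 SDK with credentials already configured; the bucket name and file path are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and file names; the secondary copy is stored
# directly in the S3 One Zone-IA storage class to minimize cost.
s3.upload_file(
    Filename="/backups/dataset-2024-01.tar.gz",
    Bucket="example-secondary-copy",
    Key="dataset-2024-01.tar.gz",
    ExtraArgs={"StorageClass": "ONEZONE_IA"},
)
```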
Question 1112
Exam Question
A company has established a new AWS account. The account is newly provisioned and no changes have been made to the default settings. The company is concerned about the security of the AWS account root user.
What should be done to secure the root user?
A. Create IAM users for daily administrative tasks. Disable the root user.
B. Create IAM users for daily administrative tasks. Enable multi-factor authentication on the root user.
C. Generate an access key for the root user. Use the access key for daily administration tasks instead of the AWS Management Console.
D. Provide the root user credentials to the most senior solution architect. Have the solution architect use the root user for daily administration tasks.
Correct Answer
B. Create IAM users for daily administrative tasks. Enable multi-factor authentication on the root user.
Explanation
The root user in an AWS account has unrestricted access to all resources and services. It is highly recommended to secure the root user to reduce the risk of unauthorized access and potential security breaches.
Creating IAM users for daily administrative tasks is a best practice as it allows you to assign appropriate permissions and manage access control more effectively. IAM users have limited permissions and can be managed individually, providing better security and accountability.
Enabling multi-factor authentication (MFA) on the root user adds an extra layer of security. MFA requires an additional authentication factor (such as a physical or virtual MFA device) along with the password to access the account. This helps prevent unauthorized access even if the password is compromised.
Option B, which suggests creating IAM users for daily administrative tasks and enabling MFA on the root user, is the recommended approach to secure the root user and maintain better security practices in the AWS account.
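As a rough sketch of the IAM-user half of this recommendation (the user name, policy choice, and password below are placeholders, and boto3 with administrative credentials is assumed), an IAM user can be created for daily administrative tasks. MFA for the root user itself is enabled interactively in the AWS Management Console.

```python
import boto3

iam = boto3.client("iam")

# "daily-admin" is a hypothetical user name for day-to-day administration.
iam.create_user(UserName="daily-admin")
iam.attach_user_policy(
    UserName="daily-admin",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)
iam.create_login_profile(
    UserName="daily-admin",
    Password="TemporaryPassword123!",  # placeholder; user must change it at first sign-in
    PasswordResetRequired=True,
)
# MFA for the root user is enabled in the AWS Management Console
# (root user > Security credentials), since the root user is not managed
# through the IAM user APIs.
```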
Question 1113
Exam Question
A company is managing health records on-premises. The company must keep these records indefinitely, disable any modifications to the records once they are stored, and granularly audit access at all levels. The chief technology officer (CTO) is concerned because there are already millions of records not being used by any application, and the current infrastructure is running out of space. The CTO has requested a solutions architect design a solution to move existing data and support future records.
Which services can the solutions architect recommend to meet these requirements?
A. Use AWS DataSync to move existing data to AWS. Use Amazon S3 to store existing and new data. Enable Amazon S3 object lock and enable AWS CloudTrail with data events.
B. Use AWS Storage Gateway to move existing data to AWS. Use Amazon S3 to store existing and new data. Enable Amazon S3 object lock and enable AWS CloudTrail with management events.
C. Use AWS DataSync to move existing data to AWS. Use Amazon S3 to store existing and new data. Enable Amazon S3 object lock and enable AWS CloudTrail with management events.
D. Use AWS Storage Gateway to move existing data to AWS. Use Amazon Elastic Block Store (Amazon EBS) to store existing and new data. Enable Amazon S3 object lock and enable Amazon S3 server access logging.
Correct Answer
A. Use AWS DataSync to move existing data to AWS. Use Amazon S3 to store existing and new data. Enable Amazon S3 object lock and enable AWS CloudTrail with data events.
Explanation
To meet the requirements of storing health records indefinitely, disabling modifications to the records, and granularly auditing access, the following services and configurations can be recommended:
- Use AWS DataSync to securely and efficiently transfer existing data from on-premises to AWS. DataSync ensures data integrity during the transfer process.
- Store the existing and new data in Amazon S3, which provides durable and scalable object storage. Amazon S3 allows you to store and manage large amounts of data cost-effectively.
- Enable Amazon S3 object lock, which helps prevent any modifications or deletions to the objects stored in S3. Once an object is locked, it cannot be modified or deleted until a specified retention period expires.
- Enable AWS CloudTrail with data events to track and log access to the health records stored in Amazon S3. CloudTrail provides detailed information about who accessed the records, what actions were performed, and when the actions occurred.
Option A suggests using AWS DataSync to move existing data, Amazon S3 to store data, enabling S3 object lock, and enabling CloudTrail with data events. This option covers all the requirements of storing, securing, and auditing the health records effectively.
Therefore, Option A is the most appropriate choice in this scenario.
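A minimal sketch of the S3 Object Lock portion of this design, assuming boto3 and a hypothetical bucket name. Object Lock must be enabled when the bucket is created; the default retention shown is only an example, since retention periods are set per policy.

```python
import boto3

s3 = boto3.client("s3")

# "health-records-archive" is a hypothetical bucket name.
s3.create_bucket(
    Bucket="health-records-archive",
    ObjectLockEnabledForBucket=True,  # Object Lock must be enabled at bucket creation
)

# Optional default retention; compliance mode prevents any modification or
# deletion until the retention period expires (adjust to the records policy).
s3.put_object_lock_configuration(
    Bucket="health-records-archive",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 10}},
    },
)
```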
Question 1114
Exam Question
A company is planning to migrate its virtual server-based workloads to AWS. The company has internet-facing load balancers backed by application servers. The application servers rely on patches from an internet-hosted repository.
Which services should a solutions architect recommend be hosted on the public subnet? (Choose two.)
A. NAT gateway
B. Amazon RDS DB instances
C. Application Load Balancers
D. Amazon EC2 application servers
E. Amazon Elastic File System (Amazon EFS) volumes
Correct Answer
A. NAT gateway
C. Application Load Balancers
Explanation
The services that should be hosted in the public subnet in this scenario are:
A. NAT gateway: The application servers rely on patches from an internet-hosted repository, so they need outbound internet access. A NAT gateway in the public subnet lets instances in private subnets initiate outbound connections to the repository without being directly reachable from the internet.
C. Application Load Balancers: The internet-facing ALBs must accept traffic from the internet, so they belong in the public subnet and forward requests to the application servers in private subnets.
The Amazon EC2 application servers (option D), Amazon RDS DB instances (option B), and Amazon EFS mount targets (option E) should remain in private subnets, where they are not directly exposed to the internet.
Therefore, options A and C are the correct choices.
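A brief sketch of wiring the NAT gateway into this layout, assuming boto3 and hypothetical subnet and route table IDs: the gateway lives in the public subnet, and the private route table sends internet-bound traffic (such as patch downloads) through it.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical subnet and route table IDs.
PUBLIC_SUBNET_ID = "subnet-0aaa1111bbbb2222c"
PRIVATE_ROUTE_TABLE_ID = "rtb-0ddd3333eeee4444f"

# Allocate an Elastic IP and create the NAT gateway in the public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(SubnetId=PUBLIC_SUBNET_ID, AllocationId=eip["AllocationId"])

# Route the private subnets' internet-bound traffic through the NAT gateway.
ec2.create_route(
    RouteTableId=PRIVATE_ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)
```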
Question 1115
Exam Question
A company is managing health records on-premises. The company must keep these records indefinitely, disable any modifications to the records once they are stored, and granularly audit access at all levels. The chief technology officer (CTO) is concerned because there are already millions of records not being used by any application, and the current infrastructure is running out of space. The CTO has requested a solutions architect design a solution to move existing data and support future records.
Which services can the solutions architect recommend to meet these requirements?
A. Use AWS DataSync to move existing data to AWS. Use Amazon S3 to store existing and new data. Enable Amazon S3 object lock and enable AWS CloudTrail with data events.
B. Use AWS Storage Gateway to move existing data to AWS. Use Amazon S3 to store existing and new data. Enable Amazon S3 object lock and enable AWS CloudTrail with management events.
C. Use AWS DataSync to move existing data to AWS. Use Amazon S3 to store existing and new data. Enable Amazon S3 object lock and enable AWS CloudTrail with management events.
D. Use AWS Storage Gateway to move existing data to AWS. Use Amazon Elastic Block Store (Amazon EBS) to store existing and new data. Enable Amazon S3 object lock and enable Amazon S3 server access logging.
Correct Answer
A. Use AWS DataSync to move existing data to AWS. Use Amazon S3 to store existing and new data. Enable Amazon S3 object lock and enable AWS CloudTrail with data events.
Explanation
The solutions architect can recommend the following services to meet the requirements:
A. Use AWS DataSync to move existing data to AWS. Use Amazon S3 to store existing and new data. Enable Amazon S3 object lock and enable AWS CloudTrail with data events.
- AWS DataSync is a data transfer service that simplifies and accelerates moving large amounts of data to AWS. It can be used to migrate the existing records to Amazon S3.
- Amazon S3 is a highly durable and scalable object storage service that can securely store the health records. It provides the necessary storage capacity for both existing and future records.
- Amazon S3 object lock can be enabled to prevent modifications to the records once they are stored. This helps ensure data immutability and compliance with the requirement to disable modifications.
- AWS CloudTrail with data events can be enabled to granularly audit access at all levels. It provides detailed logs of object-level API activity for monitoring and security analysis.
Therefore, option A is the recommended solution.
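A short sketch of the CloudTrail data-event portion of this design, assuming boto3 and hypothetical trail and bucket names: the event selector turns on object-level (data event) logging for the records bucket.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# "health-records-trail" and the bucket ARN are hypothetical names.
cloudtrail.put_event_selectors(
    TrailName="health-records-trail",
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    # Log data events for every object in the records bucket.
                    "Values": ["arn:aws:s3:::health-records-archive/"],
                }
            ],
        }
    ],
)
```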
Question 1116
Exam Question
A company runs an application using Amazon ECS. The application creates resized versions of an original image and then makes Amazon S3 API calls to store the resized images in Amazon S3.
How can a solutions architect ensure that the application has permission to access Amazon S3?
A. Update the S3 role in AWS IAM to allow read/write access from Amazon ECS, and then relaunch the container.
B. Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task definition.
C. Create a security group that allows access from Amazon ECS to Amazon S3, and update the launch configuration used by the ECS cluster.
D. Create an IAM user with S3 permissions, and then relaunch the Amazon EC2 instances for the ECS cluster while logged in as this account.
Correct Answer
B. Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task definition.
Explanation
The solutions architect can ensure that the application has permission to access Amazon S3 by following these steps:
B. Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task definition.
By creating an IAM role with the necessary permissions for accessing Amazon S3 and specifying that role as the taskRoleArn in the task definition, the ECS tasks launched by the application will assume this role and inherit the permissions. This allows the application to make the necessary Amazon S3 API calls to store the resized images.
Option B is the correct solution for granting the application permission to access Amazon S3 in the context of Amazon ECS.
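As a hedged illustration of where taskRoleArn fits, the sketch below registers a Fargate task definition that references a hypothetical IAM role with S3 permissions; the role ARNs, image URI, and sizing values are placeholders (boto3 assumed).

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical role ARNs and image URI for illustration.
response = ecs.register_task_definition(
    family="image-resizer",
    taskRoleArn="arn:aws:iam::123456789012:role/ImageResizerS3AccessRole",  # role granting S3 read/write
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    containerDefinitions=[
        {
            "name": "resizer",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/resizer:latest",
            "essential": True,
        }
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```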
Question 1117
Exam Question
A company has an application that calls AWS Lambda functions. A recent code review found database credentials stored in the source code. The database credentials need to be removed from the Lambda source code. The credentials must then be securely stored and rotated on an ongoing basis to meet security policy requirements.
What should a solutions architect recommend to meet these requirements?
A. Store the password in AWS CloudHSM. Associate the Lambda function with a role that can retrieve the password from CloudHSM given its key ID.
B. Store the password in AWS Secrets Manager. Associate the Lambda function with a role that can retrieve the password from Secrets Manager given its secret ID.
C. Move the database password to an environment variable associated with the Lambda function. Retrieve the password from the environment variable upon execution.
D. Store the password in AWS Key Management Service (AWS KMS). Associate the Lambda function with a role that can retrieve the password from AWS KMS given its key ID.
Correct Answer
B. Store the password in AWS Secrets Manager. Associate the Lambda function with a role that can retrieve the password from Secrets Manager given its secret ID.
Explanation
A solutions architect should recommend the following to meet the requirements:
B. Store the password in AWS Secrets Manager. Associate the Lambda function with a role that can retrieve the password from Secrets Manager given its secret ID.
AWS Secrets Manager is a service specifically designed for securely storing and managing secrets, such as database credentials. It provides encryption, access control, and automatic rotation of secrets. By storing the database password in Secrets Manager and associating the Lambda function with a role that has permission to retrieve the password, the credentials can be securely stored outside the Lambda source code.
Option B is the recommended solution for securely storing and rotating the database credentials in an AWS Lambda environment.
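A minimal sketch of how the Lambda function might retrieve the credentials at runtime, assuming boto3 and a hypothetical secret ID that stores a JSON username/password pair.

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

SECRET_ID = "app/db-credentials"  # hypothetical secret ID

def lambda_handler(event, context):
    # Fetch the current secret version; Secrets Manager handles rotation behind this ID.
    response = secrets.get_secret_value(SecretId=SECRET_ID)
    credentials = json.loads(response["SecretString"])
    username = credentials["username"]
    password = credentials["password"]
    # ... connect to the database using the retrieved credentials ...
    return {"statusCode": 200}
```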
Question 1118
Exam Question
A company that develops web applications has launched hundreds of Application Load Balancers (ALBs) in multiple Regions. The company wants to create an allow list for the IPs of all the load balancers on its firewall device. A solutions architect is looking for a one-time, highly available solution to address this request that will also help reduce the number of IPs that need to be allowed by the firewall.
What should the solutions architect recommend to meet these requirements?
A. Create an AWS Lambda function to keep track of the IPs for all the ALBs in different Regions. Keep refreshing this list.
B. Set up a Network Load Balancer (NLB) with Elastic IPs. Register the private IPs of all the ALBs as targets to this NLB.
C. Launch AWS Global Accelerator and create endpoints for all the Regions. Register all the ALBs in different Regions to the corresponding endpoints.
D. Set up an Amazon EC2 instance, assign an Elastic IP to this EC2 instance, and configure the instance as a proxy to forward traffic to all the ALBs.
Correct Answer
C. Launch AWS Global Accelerator and create endpoints for all the Regions. Register all the ALBs in different Regions to the corresponding endpoints.
Explanation
To meet the requirements of creating an allow list for the IPs of multiple Application Load Balancers (ALBs) in different regions while reducing the number of IPs that need to be allowed by the firewall, the following solution should be recommended:
C. Launch AWS Global Accelerator and create endpoints for all the Regions. Register all the ALBs in different Regions to the corresponding endpoints.
AWS Global Accelerator improves the availability and performance of applications by routing traffic over the AWS global network. Each accelerator provides two static anycast IP addresses that act as a fixed entry point. By creating an endpoint group for each Region where ALBs are deployed and registering the ALBs as endpoints, the firewall allow list can be reduced to just the accelerator's two static IP addresses, regardless of how many ALBs sit behind it.
This solution provides a one-time, highly available solution that simplifies the IP management and reduces the number of IPs that need to be allowed on the firewall device.
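A rough sketch of this setup with boto3, using a placeholder ALB ARN; one endpoint group would be created per Region, and the two returned static IPs are what the firewall allow list needs.

```python
import boto3

# Global Accelerator is a global service; its API endpoint is in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="alb-allow-list", IpAddressType="IPV4", Enabled=True)
accelerator_arn = accelerator["Accelerator"]["AcceleratorArn"]
static_ips = accelerator["Accelerator"]["IpSets"][0]["IpAddresses"]  # the two IPs to allow on the firewall

listener = ga.create_listener(
    AcceleratorArn=accelerator_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region, each containing that Region's ALB ARNs (placeholder ARN below).
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[
        {
            "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
            "Weight": 128,
        }
    ],
)
print(static_ips)
```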
Question 1119
Exam Question
A solutions architect is designing a web application that will run on Amazon EC2 instances behind an Application Load Balancer (ALB). The company strictly requires that the application be resilient against malicious internet activity and attacks, and protect against new common vulnerabilities and exposures.
What should the solutions architect recommend?
A. Leverage Amazon CloudFront with the ALB endpoint as the origin.
B. Deploy an appropriate managed rule for AWS WAF and associate it with the ALB.
C. Subscribe to AWS Shield Advanced and ensure common vulnerabilities and exposures are blocked.
D. Configure network ACLs and security groups to allow only ports 80 and 443 to access the EC2 instances.
Correct Answer
B. Deploy an appropriate managed rule for AWS WAF and associate it with the ALB.
Explanation
To ensure resilience against malicious internet activity and attacks, as well as protection against new common vulnerabilities and exposures (CVEs), the solutions architect should recommend the following:
B. Deploy an appropriate managed rule for AWS WAF and associate it with the ALB.
AWS WAF is a web application firewall that protects web applications against common web exploits and security vulnerabilities. By deploying an appropriate managed rule group for AWS WAF and associating the web ACL with the ALB, you add a layer of security in front of the application. AWS Managed Rules are maintained by AWS and are designed to address common attack patterns and known CVEs, providing proactive protection against them.
Using Amazon CloudFront (option A) can help improve performance and availability of the application, but it does not directly address the requirement for protection against malicious activity and new vulnerabilities. Subscribing to AWS Shield Advanced (option C) can provide advanced DDoS protection, but it does not specifically address protection against new CVEs. Configuring network ACLs and security groups (option D) is important for controlling inbound and outbound traffic, but it alone may not provide sufficient protection against web application vulnerabilities and attacks.
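As an illustrative sketch (not the only valid rule choice), the AWS Managed Rules Common Rule Set can be attached to a regional web ACL and associated with the ALB using boto3; the web ACL name and ALB ARN below are placeholders.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")  # Scope REGIONAL: use the ALB's Region

web_acl = wafv2.create_web_acl(
    Name="web-app-protection",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "AWSManagedRulesCommonRuleSet",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "CommonRuleSet",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "WebAppProtection",
    },
)

# Placeholder ALB ARN; replace with the real load balancer ARN.
wafv2.associate_web_acl(
    WebACLArn=web_acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
)
```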
Question 1120
Exam Question
A company wants to deploy a shared file system for its .NET application servers and Microsoft SQL Server databases running on Amazon EC2 instances with Windows Server 2016. The solution must be able to be integrated into the corporate Active Directory domain, be highly durable, be managed by AWS, and provide high levels of throughput and IOPS.
Which solution meets these requirements?
A. Use Amazon FSx for Windows File Server
B. Use Amazon Elastic File System (Amazon EFS)
C. Use AWS Storage Gateway in file gateway mode.
D. Deploy a Windows file server on two On-Demand Instances across two Availability Zones.
Correct Answer
A. Use Amazon FSx for Windows File Server
Explanation
To meet the requirements of Active Directory integration, high durability, AWS management, and high levels of throughput and IOPS, the recommended solution is:
A. Use Amazon FSx for Windows File Server.
Amazon FSx for Windows File Server provides a fully managed, highly durable, and highly available file storage service that is compatible with Windows applications and can be integrated into the corporate Active Directory domain. It offers high levels of throughput and IOPS, allowing for efficient file sharing and database storage.
Amazon Elastic File System (Amazon EFS) (option B) is a scalable file storage service, but it does not provide native integration with Active Directory. It is not specifically optimized for Windows applications and does not provide fine-grained control over throughput and IOPS.
AWS Storage Gateway in file gateway mode (option C) is primarily designed for hybrid cloud storage scenarios and may not provide the same level of integration, durability, and manageability as Amazon FSx for Windows File Server.
Deploying a Windows file server on EC2 instances (option D) would require manual configuration, management, and monitoring of the infrastructure, which may not be as efficient and manageable as using a fully managed service like Amazon FSx for Windows File Server.
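A minimal sketch of provisioning such a file system with boto3, assuming hypothetical subnet, security group, and directory IDs. The sketch uses an AWS Managed Microsoft AD directory ID; joining a self-managed corporate domain would instead use the SelfManagedActiveDirectoryConfiguration settings.

```python
import boto3

fsx = boto3.client("fsx")

# Hypothetical subnet, security group, and directory IDs.
response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,          # GiB
    StorageType="SSD",
    SubnetIds=["subnet-0aaa1111bbbb2222c", "subnet-0ccc3333dddd4444e"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",   # AWS Managed Microsoft AD for the corporate domain
        "DeploymentType": "MULTI_AZ_1",        # highly available across two Availability Zones
        "PreferredSubnetId": "subnet-0aaa1111bbbb2222c",
        "ThroughputCapacity": 32,              # MB/s; size to the workload's throughput needs
    },
)
print(response["FileSystem"]["FileSystemId"])
```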