The latest AWS Certified Solutions Architect – Associate (SAA-C03) practice exam questions and answers (Q&A) are available free of charge to help you prepare for the AWS Certified Solutions Architect – Associate exam and earn the SAA-C03 certification.
Table of Contents
- Question 971
- Exam Question
- Correct Answer
- Explanation
- Question 972
- Exam Question
- Correct Answer
- Explanation
- Question 973
- Exam Question
- Correct Answer
- Explanation
- Question 974
- Exam Question
- Correct Answer
- Explanation
- Question 975
- Exam Question
- Correct Answer
- Explanation
- Question 976
- Exam Question
- Correct Answer
- Explanation
- Question 977
- Exam Question
- Correct Answer
- Explanation
- Question 978
- Exam Question
- Correct Answer
- Explanation
- Question 979
- Exam Question
- Correct Answer
- Explanation
- Question 980
- Exam Question
- Correct Answer
- Explanation
Question 971
Exam Question
A company is running a publicly accessible serverless application that uses Amazon API Gateway and AWS Lambda. The application’s traffic recently spiked due to fraudulent requests from botnets.
Which steps should a solutions architect take to block requests from unauthorized users? (Choose two.)
A. Create a usage plan with an API key that is shared with genuine users only.
B. Integrate logic within the Lambda function to ignore the requests from fraudulent IP addresses.
C. Implement an AWS WAF rule to target malicious requests and trigger actions to filter them out.
D. Convert the existing public API to a private API. Update the DNS records to redirect users to the new API endpoint.
E. Create an IAM role for each user attempting to access the API. A user will assume the role when making the API call.
Correct Answer
A. Create a usage plan with an API key that is shared with genuine users only.
C. Implement an AWS WAF rule to target malicious requests and trigger actions to filter them out.
Explanation
To block requests from unauthorized users and mitigate the fraudulent botnet traffic, a solutions architect should take the following steps:
A. Create a usage plan with an API key that is shared with genuine users only. API Gateway usage plans require clients to present a valid API key with each request. Requests without a valid key are rejected before they ever invoke the Lambda function, and per-key throttling quotas can further contain abuse.
C. Implement an AWS WAF (Web Application Firewall) rule to target malicious requests and trigger actions to filter them out. AWS WAF can be associated directly with the API Gateway stage and provides managed rule groups, IP reputation lists, and rate-based rules that identify and block common botnet attack patterns before they reach the API.
Option B suggests integrating logic within the Lambda function to ignore requests from fraudulent IP addresses. This is ineffective against botnets, which rotate through large numbers of IP addresses, and it still incurs a Lambda invocation (and its cost) for every fraudulent request instead of blocking it upstream.
Option D would break the application: the API is publicly accessible, and converting it to a private API would cut off legitimate public users along with the botnets, because private APIs are reachable only from within a VPC.
Option E is impractical. Creating and managing an IAM role for every user of a public application does not scale, and it does not by itself distinguish genuine users from fraudulent ones.
Therefore, options A and C are the most appropriate steps to block requests from unauthorized users.
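For illustration, here is a minimal boto3 sketch of option C: a rate-based AWS WAF rule that blocks source IPs exceeding a request threshold, associated with an API Gateway stage. All names, the API Gateway ARN, and the 2,000-request limit are hypothetical placeholders, not values from the question.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="api-botnet-protection",        # hypothetical name
    Scope="REGIONAL",                    # REGIONAL scope is required for API Gateway
    DefaultAction={"Allow": {}},         # allow by default, block on rule match
    Rules=[
        {
            "Name": "rate-limit-per-ip",
            "Priority": 0,
            "Statement": {
                "RateBasedStatement": {
                    "Limit": 2000,       # max requests per 5-minute window per IP
                    "AggregateKeyType": "IP",
                }
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "RateLimitPerIP",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ApiBotnetProtection",
    },
)

# Associate the web ACL with the API Gateway stage (hypothetical ARN).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:apigateway:us-east-1::/restapis/abc123/stages/prod",
)
```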
Question 972
Exam Question
A company’s legacy application currently relies on a single-instance Amazon RDS MySQL database without encryption. Due to new compliance requirements, all existing and new data in this database must be encrypted.
How should this be accomplished?
A. Create an Amazon S3 bucket with server-side encryption enabled. Move all the data to Amazon S3. Delete the RDS instance.
B. Enable RDS Multi-AZ mode with encryption at rest enabled. Perform a failover to the standby instance.
C. Take a snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot.
D. Create an RDS read replica with encryption at rest enabled. Promote the read replica to master and switch over to the new master. Delete the old RDS instance.
Correct Answer
C. Take a snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot.
Explanation
To encrypt the data in an existing Amazon RDS MySQL database, the most appropriate approach is:
C. Take a snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot.
This approach allows you to create an encrypted copy of the database snapshot while preserving the existing data. Here are the steps to follow:
- Take a snapshot of the existing RDS instance. This captures a point-in-time backup of the database.
- Create an encrypted copy of the snapshot. This ensures that the data in the snapshot is encrypted at rest.
- Restore a new RDS instance from the encrypted snapshot. This creates a new RDS instance with the same data but with encryption enabled.
By using this method, you can achieve encryption for all existing and new data in the RDS MySQL database without losing any data. It provides a secure and compliant environment for the application.
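For illustration, a minimal boto3 sketch of this snapshot-copy-restore workflow might look as follows; all instance, snapshot, and key identifiers are hypothetical.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# 1. Take a snapshot of the existing unencrypted instance.
rds.create_db_snapshot(
    DBInstanceIdentifier="legacy-mysql",
    DBSnapshotIdentifier="legacy-mysql-snap",
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="legacy-mysql-snap")

# 2. Copy the snapshot; specifying a KMS key makes the copy encrypted.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="legacy-mysql-snap",
    TargetDBSnapshotIdentifier="legacy-mysql-snap-encrypted",
    KmsKeyId="alias/aws/rds",  # or a customer managed key
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="legacy-mysql-snap-encrypted"
)

# 3. Restore a new, encrypted instance from the encrypted snapshot.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="legacy-mysql-encrypted",
    DBSnapshotIdentifier="legacy-mysql-snap-encrypted",
)
```

Once the new instance is available, the application's connection string is updated to point at it and the old unencrypted instance can be deleted.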
Option A suggests moving the data to Amazon S3, which is not a direct solution for encrypting the RDS database. It involves a different storage service and does not maintain the database structure and functionality.
Option B suggests enabling Multi-AZ mode with encryption at rest, but encryption cannot be enabled on an existing unencrypted DB instance. The Multi-AZ standby is a synchronous replica of the primary and inherits its unencrypted state, so failing over does not encrypt anything.
Option D suggests creating an encrypted read replica and promoting it to master. However, RDS does not allow an encrypted read replica to be created from an unencrypted source instance, so this approach is not possible.
Question 973
Exam Question
A ride-sharing company stores historical service usage data as structured .csv data files in Amazon S3. A data analyst needs to perform SQL queries on this data. A solutions architect must recommend a solution that optimizes cost-effectiveness for the queries.
Which solution meets these requirements?
A. Create an Amazon EMR cluster. Load the data. Perform the queries.
B. Create an Amazon Redshift cluster. Import the data. Perform the queries.
C. Create an Amazon Aurora PostgreSQL DB cluster. Import the data. Perform the queries.
D. Create an Amazon Athena database. Associate the data in Amazon S3. Perform the queries.
Correct Answer
D. Create an Amazon Athena database. Associate the data in Amazon S3. Perform the queries.
Explanation
The most cost-effective solution for performing SQL queries on structured .csv data files in Amazon S3 is:
D. Create an Amazon Athena database. Associate the data in Amazon S3. Perform the queries.
Amazon Athena is a serverless interactive query service that allows you to run SQL queries directly on data stored in Amazon S3. It does not require any infrastructure setup or management, and you only pay for the queries you run. By creating an Athena database and associating the data files in Amazon S3 with it, you can easily query the structured .csv data using SQL syntax.
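For illustration, here is a minimal boto3 sketch of this approach; the database, table schema, and bucket names are hypothetical. Athena queries the .csv files where they already live in S3.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")
results_location = "s3://example-athena-results/"  # hypothetical bucket for query output

# Define an external table over the CSV files (run once).
ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS rides (
    ride_id string,
    pickup_time string,
    fare double
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://example-ride-data/usage/'
TBLPROPERTIES ('skip.header.line.count' = '1')
"""
athena.start_query_execution(
    QueryString=ddl,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": results_location},
)

# Run an ad-hoc SQL query; you pay only for the data scanned.
query = athena.start_query_execution(
    QueryString="SELECT COUNT(*) FROM rides WHERE fare > 50",
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": results_location},
)
print(query["QueryExecutionId"])
```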
Option A suggests using Amazon EMR, which is a big data processing framework suitable for complex and large-scale data processing. However, for simple SQL queries on .csv files, using EMR would be overkill in terms of cost and complexity.
Option B suggests using Amazon Redshift, which is a fully managed data warehousing service. While Redshift is powerful and optimized for analytics, it may be more costly for ad-hoc SQL queries on .csv files compared to Amazon Athena.
Option C suggests using Amazon Aurora PostgreSQL, which is a relational database service. While Aurora PostgreSQL can handle SQL queries efficiently, it requires data to be imported into the database, which may incur additional costs and complexity.
Therefore, for cost-effective SQL querying on structured .csv data files in Amazon S3, Amazon Athena is the recommended solution.
Question 974
Exam Question
A solutions architect needs to ensure that API calls to Amazon DynamoDB from Amazon EC2 instances in a VPC do not traverse the internet.
What should the solutions architect do to accomplish this? (Choose two.)
A. Create a route table entry for the endpoint.
B. Create a gateway endpoint for DynamoDB.
C. Create a new DynamoDB table that uses the endpoint.
D. Create an ENI for the endpoint in each of the subnets of the VPC.
E. Create a security group entry in the default security group to provide access.
Correct Answer
A. Create a route table entry for the endpoint.
B. Create a gateway endpoint for DynamoDB.
Explanation
To ensure that API calls to Amazon DynamoDB from Amazon EC2 instances in a VPC do not traverse the internet, the solutions architect should take the following steps:
B. Create a gateway endpoint for DynamoDB.
A gateway endpoint provides a direct, private connection between the VPC and DynamoDB, so the EC2 instances can reach the service over the AWS network without using an internet gateway or NAT device.
A. Create a route table entry for the endpoint.
Traffic reaches a gateway endpoint through routing, not through a network interface. Associating the endpoint with the VPC's route tables adds a route that directs traffic destined for DynamoDB's prefix list to the endpoint, keeping it off the internet.
C, D, and E are not the correct actions to accomplish the goal:
C. Creating a new DynamoDB table does not affect the network path taken by API calls; existing tables are reachable through the endpoint without any changes to the tables themselves.
D. Gateway endpoints do not use Elastic Network Interfaces (ENIs). ENIs in subnets are a property of interface endpoints (AWS PrivateLink), which is not how a DynamoDB gateway endpoint works.
E. Security groups control inbound and outbound traffic at the instance level; they do not determine whether traffic to DynamoDB traverses the internet.
Therefore, the correct actions are to create a gateway endpoint for DynamoDB and to create the route table entry that sends DynamoDB-bound traffic to it.
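For illustration, a minimal boto3 sketch that performs both steps at once follows; the VPC and route table IDs are hypothetical. Associating the route table during endpoint creation is what adds the route table entry.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                   # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.dynamodb",  # Region-specific service name
    RouteTableIds=["rtb-0123456789abcdef0"],         # route table entry is added here
)
```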
Question 975
Exam Question
A solutions architect is designing a two-tiered architecture that has separate private subnets for compute resources and the database. An AWS Lambda function that is deployed in the compute subnets needs connectivity to the database.
Which solution will provide this connectivity in the MOST secure way?
A. Configure the Lambda function to use Amazon RDS Proxy outside the VPC.
B. Associate a security group with the Lambda function. Authorize this security group in the database’s security group.
C. Authorize the compute subnet’s CIDR ranges in the database’s security group.
D. During the initialization phase, authorize all IP addresses in the database’s security group temporarily. Remove the rule after the initialization is complete.
Correct Answer
B. Associate a security group with the Lambda function. Authorize this security group in the database’s security group.
Explanation
The most secure solution to provide connectivity from the AWS Lambda function in the compute subnets to the database is:
B. Associate a security group with the Lambda function. Authorize this security group in the database’s security group.
By associating a security group with the Lambda function and authorizing this security group in the database’s security group, you can control the inbound traffic to the database and limit it only to the Lambda function. This approach allows you to explicitly define the source of the traffic and restrict access to the database from unauthorized sources.
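For illustration, here is a minimal boto3 sketch of option B's ingress rule; the security group IDs are hypothetical, and the port assumes PostgreSQL.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow inbound database traffic only from the Lambda function's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db11111111111111",  # database security group (hypothetical)
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 5432,        # PostgreSQL; use 3306 for MySQL
            "ToPort": 5432,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0fn22222222222222"}  # Lambda's security group (hypothetical)
            ],
        }
    ],
)
```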
Option A, configuring the Lambda function to use Amazon RDS Proxy outside the VPC, is not appropriate. RDS Proxy is itself deployed inside a VPC and reached over private networking; routing database traffic outside the VPC would weaken the security posture rather than strengthen it.
Option C, authorizing the compute subnet’s CIDR ranges in the database’s security group, is less secure because it allows access to the database from the entire subnet, potentially including other resources that should not have access.
Option D, temporarily authorizing all IP addresses in the database’s security group during the initialization phase, is not recommended as it would open up the database to potential unauthorized access during that period.
Therefore, option B is the most secure and recommended solution to provide connectivity between the Lambda function and the database.
Question 976
Exam Question
A company’s application is running on Amazon EC2 instances in a single Region. In the event of a disaster, a solutions architect needs to ensure that the resources can also be deployed to a second Region.
Which combination of actions should the solutions architect take to accomplish this? (Choose two.)
A. Detach a volume on an EC2 instance and copy it to Amazon S3.
B. Launch a new EC2 instance from an Amazon Machine Image (AMI) in a new Region.
C. Launch a new EC2 instance in a new Region and copy a volume from Amazon S3 to the new instance.
D. Copy an Amazon Machine Image (AMI) of an EC2 instance and specify a different Region for the destination.
E. Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an EC2 instance in the destination Region using that EBS volume.
Correct Answer
B. Launch a new EC2 instance from an Amazon Machine Image (AMI) in a new Region.
D. Copy an Amazon Machine Image (AMI) of an EC2 instance and specify a different Region for the destination.
Explanation
To ensure the resources can be deployed to a second Region in the event of a disaster, the following combination of actions should be taken:
B. Launch a new EC2 instance from an Amazon Machine Image (AMI) in a new Region.
D. Copy an Amazon Machine Image (AMI) of an EC2 instance and specify a different Region for the destination.
Option B involves launching a new EC2 instance from an existing AMI in the desired second Region. This allows for the quick provisioning of a new instance with the same configuration and setup as the original instance.
Option D involves creating a copy of the AMI of an existing EC2 instance and specifying a different Region as the destination. This ensures that the AMI is available in the second Region and can be used to launch instances when needed.
These two actions together provide the capability to deploy the necessary EC2 instances in a different Region in the event of a disaster, allowing for continuity of the application and its resources.
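For illustration, a minimal boto3 sketch of options D and B together might look as follows; the AMI ID, Regions, and instance type are hypothetical.

```python
import boto3

# D. Copy the AMI from the source Region into the disaster-recovery Region.
#    copy_image is called on a client in the *destination* Region.
ec2_dr = boto3.client("ec2", region_name="us-west-2")
copy = ec2_dr.copy_image(
    Name="app-server-dr",
    SourceImageId="ami-0123456789abcdef0",
    SourceRegion="us-east-1",
)
ec2_dr.get_waiter("image_available").wait(ImageIds=[copy["ImageId"]])

# B. In a disaster, launch a new instance in the DR Region from the copied AMI.
ec2_dr.run_instances(
    ImageId=copy["ImageId"],
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
)
```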
Question 977
Exam Question
A company runs an application in the AWS Cloud and uses Amazon DynamoDB as the database. The company deploys Amazon EC2 instances to a private network to process data from the database. The company uses two NAT instances to provide connectivity to DynamoDB. The company wants to retire the NAT instances. A solutions architect must implement a solution that provides connectivity to DynamoDB and that does not require ongoing management.
What is the MOST cost-effective solution that meets these requirements?
A. Create a gateway VPC endpoint to provide connectivity to DynamoDB.
B. Configure a managed NAT gateway to provide connectivity to DynamoDB.
C. Establish an AWS Direct Connect connection between the private network and DynamoDB.
D. Deploy an AWS PrivateLink endpoint service between the private network and DynamoDB.
Correct Answer
A. Create a gateway VPC endpoint to provide connectivity to DynamoDB.
Explanation
The most cost-effective solution that meets the requirements of providing connectivity to DynamoDB without ongoing management is:
A. Create a gateway VPC endpoint to provide connectivity to DynamoDB.
A gateway VPC endpoint gives instances in private subnets a private path to DynamoDB over the AWS network, with no NAT instances or gateways left to operate. Gateway endpoints are fully managed by AWS, require no patching, scaling, or other ongoing maintenance, and incur no hourly or data processing charges, which makes this the most cost-effective option. Gateway endpoints are available for exactly two services: Amazon S3 and Amazon DynamoDB.
Option B (configuring a managed NAT gateway) would remove the management burden of the NAT instances, but it is not cost-effective: NAT gateways bill per hour and per gigabyte of data processed, all of which is avoidable with a free gateway endpoint.
Option C (establishing an AWS Direct Connect connection) links an on-premises network to AWS. It is irrelevant for EC2 instances that are already inside a VPC in the same Region as DynamoDB, and it carries significant cost.
Option D (deploying an AWS PrivateLink interface endpoint) would also provide private connectivity, but interface endpoints incur hourly and data processing charges, so they are less cost-effective than a gateway endpoint for this use case.
Therefore, creating a gateway VPC endpoint is the most cost-effective solution that meets the requirements.
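For illustration, here is a minimal boto3 sketch of option A, this time including an optional endpoint policy that scopes the endpoint to DynamoDB actions; the IDs and the policy shown are hypothetical.

```python
import boto3
import json

ec2 = boto3.client("ec2", region_name="us-east-1")

# Permissive example policy; a real policy would typically scope Resource
# to specific tables instead of "*".
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "dynamodb:*",
            "Resource": "*",
        }
    ],
}

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234567890def",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0abc1234567890def"],
    PolicyDocument=json.dumps(policy),
)
```

Once the endpoint is in place and the route tables are updated, the two NAT instances can be retired.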
Question 978
Exam Question
A company wants to migrate a high performance computing (HPC) application and its data from on premises to the AWS Cloud. The company uses tiered storage on premises: hot high-performance parallel storage supports the application during its periodic runs, and more economical cold storage holds the data when the application is not actively running.
Which combination of solutions should a solutions architect recommend to support the storage needs of the application? (Choose two.)
A. Amazon S3 for cold data storage
B. Amazon EFS for cold data storage
C. Amazon S3 for high-performance parallel storage
D. Amazon FSx for Lustre for high-performance parallel storage
E. Amazon FSx for Windows for high-performance parallel storage
Correct Answer
A. Amazon S3 for cold data storage
D. Amazon FSx for Lustre for high-performance parallel storage
Explanation
To support the storage needs of the high-performance computing (HPC) application, the following combination of solutions should be recommended:
A. Amazon S3 for cold data storage
D. Amazon FSx for Lustre for high-performance parallel storage
Amazon S3 (Simple Storage Service) is a suitable option for cold data storage. It provides highly durable and scalable object storage, and it is cost-effective for storing large amounts of data that are not accessed frequently. This makes it a good choice for economical cold storage.
Amazon FSx for Lustre is a fully managed, high-performance file system designed for HPC workloads. It provides fast, scalable storage for applications that require massive parallelism and low-latency data access, which makes it suitable for the hot high-performance parallel storage tier used during periodic runs of the application. An FSx for Lustre file system can also be linked to an S3 bucket, lazily loading data from S3 when a run starts and exporting results back to S3 afterward, which maps directly onto the company's hot/cold tiering model.
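For illustration, a minimal boto3 sketch of option D might look as follows; the capacity, subnet ID, and S3 paths are hypothetical, and SCRATCH_2 is chosen because the hot tier is only needed during runs.

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                  # GiB; Lustre capacity comes in fixed increments
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://example-hpc-data",          # lazy-load cold data from S3
        "ExportPath": "s3://example-hpc-data/results",  # write results back to S3
    },
)
```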
Option B (Amazon EFS for cold data storage) is not the best choice for the cold tier. For infrequently accessed data, cost is the deciding factor, and Amazon S3 (particularly with lifecycle transitions to infrequent-access or archival storage classes) is considerably cheaper than keeping idle data on an EFS file system.
Option E (Amazon FSx for Windows for high-performance parallel storage) is not the most appropriate solution for the storage needs of the HPC application. Amazon FSx for Windows is designed for Windows-based workloads and does not provide the same level of performance and parallel processing capabilities as Amazon FSx for Lustre.
Therefore, the recommended combination of solutions for supporting the storage needs of the HPC application is Amazon S3 for cold data storage and Amazon FSx for Lustre for high-performance parallel storage.
Question 979
Exam Question
A company uses AWS Organizations to manage multiple AWS accounts for different departments. The management account has an Amazon S3 bucket that contains project reports. The company wants to limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations.
Which solution meets these requirements with the LEAST amount of operational overhead?
A. Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
B. Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.
C. Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization events. Update the S3 bucket policy accordingly.
D. Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy.
Correct Answer
A. Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
Explanation
The solution that meets the requirements with the least amount of operational overhead is:
A. Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
By adding the aws:PrincipalOrgID condition key to the S3 bucket policy and specifying the organization ID as the value, access to the S3 bucket will be limited to users of accounts within the organization in AWS Organizations. This approach is simple and requires minimal ongoing management or configuration changes.
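For illustration, here is a minimal boto3 sketch of option A; the bucket name and organization ID are hypothetical. Because aws:PrincipalOrgID matches any principal from any account in the organization, the policy never needs updating as accounts join or leave.

```python
import boto3
import json

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOrgAccountsOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-project-reports/*",
            "Condition": {
                # Restrict access to principals from accounts in this organization.
                "StringEquals": {"aws:PrincipalOrgID": "o-a1b2c3d4e5"}
            },
        }
    ],
}

s3.put_bucket_policy(
    Bucket="example-project-reports",
    Policy=json.dumps(policy),
)
```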
Option B is not the optimal choice because using the aws:PrincipalOrgPaths global condition key in the S3 bucket policy is more complex and involves specifying organizational units (OUs) for each department. It introduces additional configuration overhead and does not provide any significant advantages over the aws:PrincipalOrgID condition key in this scenario.
Option C is not a suitable solution as it suggests using AWS CloudTrail to monitor organization events, such as account creation or removal. While CloudTrail can provide audit logs for organizational events, it does not directly restrict access to the S3 bucket. The S3 bucket policy is the appropriate mechanism to control access.
Option D is not the most efficient solution as it involves manually tagging each user that needs access to the S3 bucket. Managing and maintaining user tags for access control can become cumbersome and error-prone, especially in large organizations with frequent user changes.
Therefore, the recommended solution is to add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy. This approach provides a straightforward and effective way to limit access to the S3 bucket to users of accounts within the organization in AWS Organizations, with minimal operational overhead.
Question 980
Exam Question
A company’s web application uses an Amazon RDS PostgreSQL DB instance to store its application data. During the financial closing period at the start of every month, accountants run large queries that impact the database’s performance due to high usage. The company wants to minimize the impact that the reporting activity has on the web application.
What should a solutions architect do to reduce the impact on the database with the LEAST amount of effort?
A. Create a read replica and direct reporting traffic to the replica.
B. Create a Multi-AZ database and direct reporting traffic to the standby.
C. Create a cross-Region read replica and direct reporting traffic to the replica.
D. Create an Amazon Redshift database and direct reporting traffic to the Amazon Redshift database.
Correct Answer
A. Create a read replica and direct reporting traffic to the replica.
Explanation
To reduce the impact of reporting activity on the database with the least amount of effort, the recommended solution is:
A. Create a read replica and direct reporting traffic to the replica.
By creating a read replica of the Amazon RDS PostgreSQL DB instance and directing reporting traffic to the replica, the database’s performance will be less affected by the heavy queries run during the financial closing period. The read replica can handle the reporting workload, while the primary DB instance continues to serve the web application’s normal traffic. This solution provides scalability and offloads the reporting workload from the primary DB instance, minimizing its impact on the web application.
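For illustration, a minimal boto3 sketch of option A follows; the instance identifiers are hypothetical. Once the replica is available, the reporting tools are pointed at the replica's endpoint instead of the primary's.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a read replica of the primary PostgreSQL instance for reporting traffic.
replica = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="webapp-postgres-reporting",  # new replica (hypothetical)
    SourceDBInstanceIdentifier="webapp-postgres",      # existing primary (hypothetical)
)
print(replica["DBInstance"]["DBInstanceIdentifier"])
```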
Option B, creating a Multi-AZ database and directing reporting traffic to the standby, is not the most efficient solution for minimizing impact during reporting periods. Multi-AZ is designed for high availability and automatic failover, not for offloading read-heavy workloads.
Option C, creating a cross-Region read replica, may introduce additional complexity and overhead, as it involves replicating data across different Regions. This solution may not be necessary unless there are specific requirements for disaster recovery or data locality.
Option D, creating an Amazon Redshift database, is a valid option for offloading reporting workloads and minimizing the impact on the primary database. However, it requires setting up and managing a separate data warehouse solution, which may introduce more effort and complexity compared to creating a read replica of the existing RDS PostgreSQL DB instance.
Therefore, the most straightforward and least effort-intensive solution to reduce the impact of reporting activity on the database is to create a read replica and direct reporting traffic to the replica.