The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free below. They will help you prepare for the AWS Certified Solutions Architect – Associate SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate SAA-C03 certification.
Table of Contents
- Question 921
- Exam Question
- Correct Answer
- Explanation
- Question 922
- Exam Question
- Correct Answer
- Explanation
- Question 923
- Exam Question
- Correct Answer
- Explanation
- Question 924
- Exam Question
- Correct Answer
- Explanation
- Question 925
- Exam Question
- Correct Answer
- Explanation
- Question 926
- Exam Question
- Correct Answer
- Explanation
- Question 927
- Exam Question
- Correct Answer
- Explanation
- Question 928
- Exam Question
- Correct Answer
- Explanation
- Question 929
- Exam Question
- Correct Answer
- Explanation
- Question 930
- Exam Question
- Correct Answer
- Explanation
Question 921
Exam Question
A company is using a VPC that is provisioned with a 10.10.1.0/24 CIDR block. Because of continued growth, IP address space in this block might be depleted soon. A solutions architect must add more IP address capacity to the VPC.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a new VPC. Associate a larger CIDR block.
B. Add a secondary CIDR block of 10.10.2.0/24 to the VPC.
C. Resize the existing VPC CIDR block from 10.10.1.0/24 to 10.10.1.0/16.
D. Establish VPC peering with a new VPC that has a CIDR block of 10.10.1.0/16.
Correct Answer
B. Add a secondary CIDR block of 10.10.2.0/24 to the VPC.
Explanation
To add more IP address capacity to the VPC with the least operational overhead, a solutions architect should consider the following option:
B. Add a secondary CIDR block of 10.10.2.0/24 to the VPC.
By adding a secondary CIDR block to the existing VPC, the IP address space is expanded without the need to create a new VPC or modify the existing VPC’s CIDR block. This allows for easy scalability while minimizing operational overhead.
After the secondary CIDR block is associated, new subnets can be created in the added range; the VPC's route tables automatically receive a local route for it, so the existing infrastructure and configurations remain intact. Note that option C is not possible: the primary CIDR block of a VPC cannot be resized after the VPC is created. When adding a secondary CIDR block, review any route table entries and security group rules that reference specific IP ranges, as they may need to be adjusted to accommodate the new range.
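As a rough illustration, the following boto3 sketch associates a secondary CIDR block with an existing VPC and creates a subnet in the new range; the VPC ID, subnet CIDR, and Availability Zone are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Associate a secondary CIDR block with the existing VPC
# (the VPC ID is a placeholder).
ec2.associate_vpc_cidr_block(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.10.2.0/24",
)

# Create a subnet in the newly added range; the VPC route tables
# automatically receive a local route for the secondary CIDR block.
ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.10.2.0/25",
    AvailabilityZone="us-east-1a",
)
```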
Question 922
Exam Question
A data science team requires storage for nightly log processing. The size and number of the logs are unknown, and the logs will persist for only 24 hours.
What is the MOST cost-effective solution?
A. Amazon S3 Glacier
B. Amazon S3 Standard
C. Amazon S3 Intelligent-Tiering
D. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
Correct Answer
D. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
Explanation
For the scenario described, the most cost-effective solution would be:
D. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).
Amazon S3 One Zone-IA offers a lower storage cost compared to other storage classes while providing durability and availability within a single Availability Zone. Since the logs are only needed for 24 hours, the reduced redundancy of storing them in a single zone is acceptable and helps to reduce costs.
Amazon S3 Glacier (option A) would not be the most cost-effective choice in this case because it is designed for long-term archival storage, and retrieving the logs from Glacier would incur additional costs and retrieval time.
Amazon S3 Standard (option B) provides high durability and availability but at a higher cost compared to One Zone-IA.
Amazon S3 Intelligent-Tiering (option C) is suitable for data with changing access patterns but may not be the most cost-effective option for short-term log processing where the access pattern is predictable.
Overall, Amazon S3 One Zone-IA offers a cost-effective storage solution for short-term log processing with a lower cost compared to other storage classes.
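For illustration, here is a minimal boto3 sketch that writes a log object directly into the One Zone-IA storage class and uses a lifecycle rule to expire logs after one day; the bucket name, key, and prefix are hypothetical placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Upload a nightly log directly into the One Zone-IA storage class
# (the bucket and key names are placeholders).
s3.put_object(
    Bucket="nightly-log-processing",
    Key="logs/2024-01-01/app.log",
    Body=b"...log data...",
    StorageClass="ONEZONE_IA",
)

# Expire log objects after one day, matching the 24-hour retention
# requirement so storage is not paid for longer than needed.
s3.put_bucket_lifecycle_configuration(
    Bucket="nightly-log-processing",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-logs-after-1-day",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 1},
            }
        ]
    },
)
```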
Question 923
Exam Question
A news company that has reporters all over the world is hosting its broadcast system on AWS. The reporters send live broadcasts to the broadcast system. The reporters use software on their phones to send live streams through the Real Time Messaging Protocol (RTMP). A solutions architect must design a solution that gives the reporters the ability to send the highest quality streams. The solution must provide accelerated TCP connections back to the broadcast system.
What should the solutions architect use to meet these requirements?
A. Amazon CloudFront
B. AWS Global Accelerator
C. AWS Client VPN
D. Amazon EC2 instances and AWS Elastic IP addresses
Correct Answer
B. AWS Global Accelerator
Explanation
To meet the requirements of providing reporters with the ability to send high-quality streams with accelerated TCP connections back to the broadcast system, the recommended solution is:
B. AWS Global Accelerator.
AWS Global Accelerator is specifically designed to improve the performance and availability of applications with a global user base. It uses the AWS global network infrastructure to optimize routing and accelerate TCP and UDP traffic. By deploying AWS Global Accelerator, reporters benefit from optimized network paths, reduced latency, and improved performance when sending their live streams over RTMP to the broadcast system.
Amazon CloudFront (option A) is a content delivery network (CDN) service that accelerates the delivery of static and dynamic content to end users. While it can improve the delivery of content, it is not specifically designed for real-time streaming and accelerated TCP connections.
AWS Client VPN (option C) provides secure access to AWS resources for remote users, but it does not directly address the requirement of accelerated TCP connections for live streaming.
Using Amazon EC2 instances and AWS Elastic IP addresses (option D) would require manual setup and management of the infrastructure, and it would not provide the accelerated TCP connections needed for the high-quality streams.
Therefore, AWS Global Accelerator (option B) is the most suitable solution to meet the requirements of high-quality streams and accelerated TCP connections for the news company’s broadcast system.
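As a rough sketch of how this could be set up with boto3, the snippet below creates an accelerator and a TCP listener on port 1935 (the port RTMP runs over); the accelerator name is a hypothetical placeholder.

```python
import boto3

# The Global Accelerator API is served from the us-west-2 Region,
# regardless of where the accelerated endpoints live.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="rtmp-broadcast-accelerator",
    IpAddressType="IPV4",
    Enabled=True,
)

# RTMP runs over TCP port 1935, so a TCP listener on that port
# carries the reporters' streams onto the AWS global network.
ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 1935, "ToPort": 1935}],
)
```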
Question 924
Exam Question
A security team wants to limit access to specific services or actions in all of the team's AWS accounts. All accounts belong to a large organization in AWS Organizations.
The solution must be scalable, and there must be a single point where permissions can be maintained.
What should a solutions architect do to accomplish this?
A. Create an ACL to provide access to the services or actions.
B. Create a security group to allow accounts and attach it to user groups.
C. Create cross-account roles in each account to deny access to the services or actions.
D. Create a service control policy in the root organizational unit to deny access to the services or actions.
Correct Answer
D. Create a service control policy in the root organizational unit to deny access to the services or actions.
Explanation
To limit access to specific services or actions in all of the AWS accounts belonging to a large organization in AWS Organizations, while ensuring scalability and maintaining a single point for permission management, the recommended solution is:
D. Create a service control policy (SCP) in the root organizational unit to deny access to the services or actions.
AWS Organizations provides centralized management and governance of multiple AWS accounts. A service control policy (SCP) is a policy that can be applied at the root, organizational unit (OU), or account level within an AWS Organization. SCPs are used to set permissions that define what actions or services are allowed or denied across the organization.
By creating an SCP in the root organizational unit and configuring it to deny access to specific services or actions, the security team can effectively restrict access at the organization level. This approach ensures a scalable solution where permission management is centralized, allowing for consistent enforcement of access controls across all accounts within the organization.
Option A (Create an ACL) and Option B (Create a security group) are not suitable for managing access across multiple AWS accounts in an organization. ACLs and security groups are typically used within individual accounts and do not provide centralized control across the organization.
Option C (Create cross-account roles) is a mechanism for granting permissions across accounts, but in this case, the requirement is to limit access, not grant access. SCPs are more appropriate for this purpose.
Therefore, the recommended approach is to create a service control policy (SCP) in the root organizational unit to deny access to the specific services or actions in all AWS accounts within the organization.
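As a minimal sketch, the boto3 snippet below creates a deny-list SCP and attaches it to the organization's root; the policy name and the denied actions shown are illustrative assumptions, not part of the question.

```python
import json
import boto3

org = boto3.client("organizations")

# A deny-list SCP; the denied services here are examples only.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["ec2:*", "rds:*"],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Content=json.dumps(scp_document),
    Description="Deny access to restricted services",
    Name="deny-restricted-services",
    Type="SERVICE_CONTROL_POLICY",
)

# Attaching the SCP to the root applies it to every account in the
# organization, giving a single point of permission maintenance.
root_id = org.list_roots()["Roots"][0]["Id"]
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId=root_id,
)
```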
Question 925
Exam Question
A company wants to perform an online migration of active datasets from an on-premises NFS server to an Amazon S3 bucket that is named DOC-EXAMPLE-BUCKET. Data integrity verification is required during the transfer and at the end of the transfer. The data also must be encrypted. A solutions architect is using an AWS solution to migrate the data.
Which solution meets these requirements?
A. AWS Storage Gateway file gateway
B. S3 Transfer Acceleration
C. AWS DataSync
D. AWS Snowball Edge Storage Optimized
Correct Answer
C. AWS DataSync
Explanation
The solution that meets the requirements for online migration of active datasets from an on-premises NFS server to an Amazon S3 bucket, with data integrity verification and encryption, is:
C. AWS DataSync
AWS DataSync is a data transfer service that makes it easy to migrate data between on-premises storage and Amazon S3, Amazon EFS, or Amazon FSx for Windows File Server. It supports transferring data over the network, ensuring data integrity and providing encryption options.
With AWS DataSync, you can perform an online migration of active datasets from the on-premises NFS server to an Amazon S3 bucket. Data integrity verification is built into the service, ensuring that the data transferred to the S3 bucket matches the source data. AWS DataSync also provides encryption options, allowing you to encrypt the data during transit and at rest in the S3 bucket.
Option A (AWS Storage Gateway file gateway) is not designed for online migration scenarios and is typically used for integrating on-premises file-based applications with AWS storage services.
Option B (S3 Transfer Acceleration) is a feature of Amazon S3 that helps to accelerate data transfers to and from S3 buckets but does not provide the required functionality for data migration, data integrity verification, and encryption.
Option D (AWS Snowball Edge Storage Optimized) is a data transfer and edge computing device that can be used for offline data migration scenarios but is not necessary for an online migration with data integrity verification and encryption.
Therefore, the best solution for this scenario is AWS DataSync.
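For reference, a boto3 sketch of such a DataSync setup might look like the following; the NFS hostname, agent ARN, account ID, and IAM role name are hypothetical placeholders (the bucket name comes from the question).

```python
import boto3

datasync = boto3.client("datasync")

# Source: the on-premises NFS server, reached through a deployed
# DataSync agent (hostname and agent ARN are placeholders).
source = datasync.create_location_nfs(
    ServerHostname="nfs.example.internal",
    Subdirectory="/exports/data",
    OnPremConfig={
        "AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0example"]
    },
)

# Destination: the S3 bucket, written through an IAM role that
# grants DataSync access to it.
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::DOC-EXAMPLE-BUCKET",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/datasync-s3-access"},
)

# POINT_IN_TIME_CONSISTENT verifies data integrity during the
# transfer and again at the end; data in transit is TLS-encrypted.
datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="nfs-to-s3-migration",
    Options={"VerifyMode": "POINT_IN_TIME_CONSISTENT"},
)
```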
Question 926
Exam Question
A company’s web application is using multiple Linux Amazon EC2 instances and storing data on Amazon EBS volumes. The company is looking for a solution to increase the resiliency of the application in case of a failure and to provide storage that complies with atomicity, consistency, isolation, and durability (ACID).
What should a solutions architect do to meet these requirements?
A. Launch the application on EC2 instances in each Availability Zone. Attach EBS volumes to each EC2 instance.
B. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Mount an instance store on each EC2 instance.
C. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance.
D. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data using Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).
Correct Answer
C. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance.
Explanation
To increase the resiliency of the application and provide storage that complies with ACID properties, a solutions architect should:
C. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance.
- Launching the application on EC2 instances in each Availability Zone (option A) provides redundancy, but it does not directly address the requirement for storage that complies with ACID properties.
- Creating an Application Load Balancer with Auto Scaling groups across multiple Availability Zones (option B) improves resiliency by distributing the load across multiple instances, but using instance store as storage does not provide ACID compliance.
- Storing data using Amazon S3 One Zone-Infrequent Access (option D) may provide durability but does not ensure ACID compliance, as S3 is an object storage service.
- Creating an Application Load Balancer with Auto Scaling groups across multiple Availability Zones and storing data on Amazon EFS (option C) addresses both resiliency and ACID compliance requirements. Amazon EFS is a file storage service that provides a shared, highly available, and scalable file system. It supports multiple EC2 instances mounting the same file system simultaneously, allowing for consistency and isolation. It provides durability and meets ACID compliance requirements.
Therefore, option C is the correct solution for increasing the resiliency of the application and providing storage that complies with ACID properties.
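As a rough sketch of the shared-storage piece, the boto3 snippet below creates an EFS file system and a mount target in each Availability Zone used by the Auto Scaling group; the subnet and security group IDs are hypothetical placeholders.

```python
import boto3

efs = boto3.client("efs")

# Create a file system; Regional storage (the default) keeps the
# data redundantly across multiple Availability Zones.
fs = efs.create_file_system(
    CreationToken="web-app-shared-storage",
    Encrypted=True,
)

# One mount target per Availability Zone lets instances in every AZ
# of the Auto Scaling group mount the same file system.
for subnet_id in ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )
```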
Question 927
Exam Question
A company has many projects that run in multiple AWS Regions. The projects usually have a three-tier architecture with Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Auto Scaling group and share Amazon Elastic File System (Amazon EFS) storage and Amazon RDS databases. Some projects have resources in more than one Region. A solutions architect needs to identify each project’s individual costs.
Which solution will provide this information with the LEAST amount of operational effort?
A. Use Cost Explorer to perform one-time queries for each Region and create a report that filters by project.
B. Use the AWS Billing and Cost Management details page to see the actual usage costs of the resources by project.
C. Use AWS Systems Manager to group resources by project and monitor each project’s resources and cost.
D. Use AWS Billing and Cost Management to activate cost allocation tags and create reports that are based on the project tags.
Correct Answer
D. Use AWS Billing and Cost Management to activate cost allocation tags and create reports that are based on the project tags.
Explanation
To identify each project’s individual costs with the least amount of operational effort, a solutions architect should:
D. Use AWS Billing and Cost Management to activate cost allocation tags and create reports that are based on the project tags.
- Using Cost Explorer to perform one-time queries for each Region (option A) would require manual effort and repeated queries for each project and Region, resulting in significant operational overhead.
- The AWS Billing and Cost Management details page (option B) provides a high-level view of costs but does not offer granular insights specific to each project.
- Using AWS Systems Manager to group resources by project (option C) can help in managing resources but does not directly provide cost visibility and reporting on a project level.
- Activating cost allocation tags in AWS Billing and Cost Management and creating reports based on project tags (option D) is the recommended solution. Cost allocation tags allow you to tag your resources with project-specific identifiers, and AWS Billing and Cost Management can generate reports and provide cost breakdowns based on these tags. This approach provides granular visibility into costs for each project without significant operational effort.
Therefore, option D is the most suitable solution for identifying each project’s individual costs with the least amount of operational effort.
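Once the cost allocation tag has been activated in the Billing console, per-project costs can also be queried programmatically. The boto3 sketch below groups costs by a tag key named "project"; that key and the date range are assumptions for illustration.

```python
import boto3

# The Cost Explorer API is a global service routed through us-east-1.
ce = boto3.client("ce", region_name="us-east-1")

# Group costs by the activated "project" cost allocation tag.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "project"}],
)

# Print each project's cost for the period.
for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])
```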
Question 928
Exam Question
A company runs an application in a branch office within a small data closet with no virtualized compute resources. The application data is stored on an NFS volume. Compliance standards require a daily offsite backup of the NFS volume.
Which solution meets these requirements?
A. Install an AWS Storage Gateway file gateway on premises to replicate the data to Amazon S3.
B. Install an AWS Storage Gateway file gateway hardware appliance on premises to replicate the data to Amazon S3.
C. Install an AWS Storage Gateway volume gateway with stored volumes on premises to replicate the data to Amazon S3.
D. Install an AWS Storage Gateway volume gateway with cached volumes on premises to replicate the data to Amazon S3.
Correct Answer
C. Install an AWS Storage Gateway volume gateway with stored volumes on premises to replicate the data to Amazon S3.
Explanation
To meet the requirements of performing a daily offsite backup of the NFS volume in a branch office with no virtualized compute resources, the most suitable solution would be:
C. Install an AWS Storage Gateway volume gateway with stored volumes on premises to replicate the data to Amazon S3.
- AWS Storage Gateway file gateway (options A and B) presents file shares backed by Amazon S3 and is intended for ongoing file access to objects in S3, rather than for taking scheduled point-in-time backups of an on-premises volume.
- AWS Storage Gateway volume gateway with cached volumes (option D) keeps the primary copy of the data in Amazon S3 and caches only frequently accessed data locally, so it extends on-premises storage rather than backing up a complete local dataset.
- AWS Storage Gateway volume gateway with stored volumes (option C) keeps the complete dataset on premises for low-latency access and asynchronously backs up point-in-time snapshots of the data to Amazon S3 as Amazon EBS snapshots. These snapshots can be scheduled, fulfilling the requirement of daily offsite backups.
Therefore, option C is the most appropriate solution for performing a daily offsite backup of the NFS volume in the branch office.
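As a rough boto3 sketch, a stored volume could be created and put on a daily snapshot schedule as follows; the gateway ARN, disk ID, target name, and network interface are hypothetical placeholders.

```python
import boto3

sgw = boto3.client("storagegateway")

# Create a stored volume on the gateway's local disk so the full
# dataset stays on premises (all identifiers are placeholders).
volume = sgw.create_stored_iscsi_volume(
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-0example",
    DiskId="pci-0000:03:00.0-scsi-0:0:0:0",
    PreserveExistingData=True,
    TargetName="branch-office-data",
    NetworkInterfaceId="10.0.0.10",
)

# Snapshot every 24 hours so a daily point-in-time copy lands in
# Amazon S3 as an EBS snapshot -- the offsite backup.
sgw.update_snapshot_schedule(
    VolumeARN=volume["VolumeARN"],
    StartAt=2,  # hour of day when the snapshot window begins
    RecurrenceInHours=24,
    Description="Daily offsite backup",
)
```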
Question 929
Exam Question
A company runs an application on a large fleet of Amazon EC2 instances. The application reads and writes entries into an Amazon DynamoDB table. The size of the DynamoDB table continuously grows, but the application needs only data from the last 30 days. The company needs a solution that minimizes cost and development effort.
Which solution meets these requirements?
A. Use an AWS CloudFormation template to deploy the complete solution. Redeploy the CloudFormation stack every 30 days, and delete the original stack.
B. Use an EC2 instance that runs a monitoring application from AWS Marketplace. Configure the monitoring application to use Amazon DynamoDB Streams to store the timestamp when a new item is created in the table. Use a script that runs on the EC2 instance to delete items that have a timestamp that is older than 30 days.
C. Configure Amazon DynamoDB Streams to invoke an AWS Lambda function when a new item is created in the table. Configure the Lambda function to delete items in the table that are older than 30 days.
D. Extend the application to add an attribute that has a value of the current timestamp plus 30 days to each new item that is created in the table. Configure DynamoDB to use the attribute as the TTL attribute.
Correct Answer
D. Extend the application to add an attribute that has a value of the current timestamp plus 30 days to each new item that is created in the table. Configure DynamoDB to use the attribute as the TTL attribute.
Explanation
To meet the requirements of minimizing cost and development effort while only needing data from the last 30 days in an Amazon DynamoDB table, the most suitable solution would be:
D. Extend the application to add an attribute that has a value of the current timestamp plus 30 days to each new item that is created in the table. Configure DynamoDB to use the attribute as the TTL (Time to Live) attribute.
- Option A suggests redeploying the CloudFormation stack every 30 days, which would require additional effort and potentially disrupt the application’s availability.
- Option B involves using an EC2 instance and a monitoring application to delete items older than 30 days, which introduces unnecessary complexity and additional maintenance.
- Option C suggests using DynamoDB Streams and AWS Lambda to delete items older than 30 days, which requires additional configuration and maintenance.
On the other hand, option D leverages DynamoDB’s built-in Time to Live (TTL) feature. By extending the application to add a timestamp attribute with a value of the current timestamp plus 30 days, and configuring DynamoDB to use this attribute as the TTL attribute, DynamoDB will automatically delete items that have expired beyond the specified TTL value. This approach minimizes cost and development effort as it leverages native functionality within DynamoDB, without the need for additional infrastructure or code.
Therefore, option D is the most appropriate solution for minimizing cost and development effort while maintaining only the data from the last 30 days in the DynamoDB table.
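As a minimal sketch of this pattern, the boto3 snippet below enables TTL on a table and writes an item that expires 30 days out; the table, key, and attribute names are assumptions for illustration. DynamoDB requires the TTL attribute to be an epoch timestamp in seconds.

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

# Tell DynamoDB which attribute holds the expiry timestamp.
dynamodb.update_time_to_live(
    TableName="application-entries",
    TimeToLiveSpecification={
        "Enabled": True,
        "AttributeName": "expires_at",
    },
)

# Each new item carries an epoch-seconds timestamp 30 days ahead;
# DynamoDB deletes expired items automatically at no extra cost.
expires_at = int(time.time()) + 30 * 24 * 60 * 60
dynamodb.put_item(
    TableName="application-entries",
    Item={
        "pk": {"S": "entry-123"},
        "payload": {"S": "example payload"},
        "expires_at": {"N": str(expires_at)},
    },
)
```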
Question 930
Exam Question
A company’s production application runs online transaction processing (OLTP) transactions on an Amazon RDS MySQL DB instance. The company is launching a new reporting tool that will access the same data. The reporting tool must be highly available and not impact the performance of the production application.
How can this be achieved?
A. Create hourly snapshots of the production RDS DB instance.
B. Create a Multi-AZ RDS Read Replica of the production RDS DB instance.
C. Create multiple RDS Read Replicas of the production RDS DB instance. Place the Read Replicas in an Auto Scaling group.
D. Create a Single-AZ RDS Read Replica of the production RDS DB instance. Create a second Single-AZ RDS Read Replica from the replica.
Correct Answer
B. Create a Multi-AZ RDS Read Replica of the production RDS DB instance.
Explanation
To achieve high availability for the reporting tool accessing the same data as the production application on an Amazon RDS MySQL DB instance without impacting the performance of the production application, the most suitable solution would be:
B. Create a Multi-AZ RDS Read Replica of the production RDS DB instance.
- Option A suggests creating hourly snapshots of the production RDS DB instance. While snapshots provide backup and recovery capabilities, they do not directly address the requirement for high availability or separation of workload for the reporting tool.
- Option C suggests creating multiple Read Replicas in an Auto Scaling group. Read Replicas can offload read traffic from the primary DB instance, but RDS instances are not managed through EC2 Auto Scaling groups, and this option adds complexity without providing the required high availability.
- Option D suggests creating a Single-AZ RDS Read Replica and then creating a second Single-AZ RDS Read Replica from the first replica. This does not provide the desired high availability for the reporting tool.
On the other hand, option B, creating a Multi-AZ RDS Read Replica of the production RDS DB instance, is the most appropriate solution. The Read Replica receives asynchronous replication from the production instance, so reporting queries are served without impacting the production workload. Because the replica is deployed in Multi-AZ mode, it maintains its own synchronously replicated standby in a different Availability Zone, with automatic failover if the replica instance fails. This combination gives the reporting tool high availability while isolating its read traffic from the production application.
Therefore, option B is the recommended solution to achieve high availability for the reporting tool without impacting the performance of the production application.
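As a rough sketch, such a Multi-AZ Read Replica could be created with boto3 as follows; the instance identifiers and instance class are hypothetical placeholders.

```python
import boto3

rds = boto3.client("rds")

# Create a Multi-AZ Read Replica of the production instance.
# MultiAZ=True gives the replica its own synchronously replicated
# standby in another Availability Zone with automatic failover.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="reporting-replica",
    SourceDBInstanceIdentifier="production-mysql",
    DBInstanceClass="db.r6g.large",
    MultiAZ=True,
)
```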