
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 42

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free to help you pass the SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.

Question 1131

Exam Question

A solutions architect is optimizing a website for an upcoming musical event. Videos of the performances will be streamed in real time and then will be available on demand. The event is expected to attract a global online audience.

Which service will improve the performance of both the real-time and on-demand streaming?

A. Amazon CloudFront
B. AWS Global Accelerator
C. Amazon Route 53
D. Amazon S3 Transfer Acceleration

Correct Answer

B. AWS Global Accelerator

Explanation

To improve the performance of both real-time and on-demand streaming for a global online audience, the recommended service is:

B. AWS Global Accelerator.

AWS Global Accelerator is specifically designed to enhance the performance, availability, and scalability of applications with a global audience. It uses the AWS global network infrastructure to optimize the routing path and reduce the network latency for end users.

By using AWS Global Accelerator, the website’s real-time and on-demand streaming can benefit from the following features (a configuration sketch follows the list):

  1. Global Anycast Network: AWS Global Accelerator leverages a network of edge locations worldwide, allowing users to connect to the closest edge location and reducing the distance and network latency.
  2. Intelligent Traffic Distribution: AWS Global Accelerator directs user traffic through the optimal AWS edge location based on performance metrics, such as the lowest latency or fewest hops.
  3. Accelerated Streaming: Global Accelerator provides fast, reliable, and secure streaming of both real-time and on-demand content by optimizing the network path and reducing the impact of network congestion.
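
As a rough illustration only, the following boto3 sketch creates an accelerator with a TCP listener and attaches a hypothetical Application Load Balancer as an endpoint. The accelerator name, port, Region, and ALB ARN are placeholders, not values from the question.

    import boto3

    # The Global Accelerator API is served from the us-west-2 Region,
    # even though the accelerator itself is a global resource.
    ga = boto3.client("globalaccelerator", region_name="us-west-2")

    # Create the accelerator (the name is a hypothetical placeholder).
    accelerator = ga.create_accelerator(
        Name="event-streaming-accelerator",
        IpAddressType="IPV4",
        Enabled=True,
    )
    accelerator_arn = accelerator["Accelerator"]["AcceleratorArn"]

    # Add a TCP listener for the streaming port (port is illustrative).
    listener = ga.create_listener(
        AcceleratorArn=accelerator_arn,
        Protocol="TCP",
        PortRanges=[{"FromPort": 443, "ToPort": 443}],
    )

    # Point the listener at an existing load balancer in one Region.
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion="us-east-1",
        EndpointConfigurations=[
            {"EndpointId": "arn:aws:elasticloadbalancing:..."}  # placeholder ALB ARN
        ],
    )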

Option A (Amazon CloudFront) is a content delivery network (CDN) service that can improve the performance of on-demand streaming by caching and delivering content from edge locations. While it can accelerate on-demand streaming, it may not provide the same level of optimization for real-time streaming as AWS Global Accelerator.

Option C (Amazon Route 53) is a Domain Name System (DNS) web service. It can route users to healthy or geographically close endpoints, but it does not itself accelerate the delivery of streaming content.

Option D (Amazon S3 Transfer Acceleration) specifically improves the performance of transferring files to and from Amazon S3 buckets, but it does not directly optimize the streaming performance of real-time or on-demand videos.

Therefore, the recommended service to improve the performance of both real-time and on-demand streaming for a global online audience is AWS Global Accelerator.

Question 1132

Exam Question

A solutions architect is working on optimizing a legacy document management application running on Microsoft Windows Server in an on-premises data center. The application stores a large number of files on a network file share. The chief information officer wants to reduce the on-premises data center footprint and minimize storage costs by moving on-premises storage to AWS.

What should the solutions architect do to meet these requirements?

A. Set up an AWS Storage Gateway file gateway.
B. Set up Amazon Elastic File System (Amazon EFS).
C. Set up AWS Storage Gateway as a volume gateway.
D. Set up an Amazon Elastic Block Store (Amazon EBS) volume.

Correct Answer

B. Set up Amazon Elastic File System (Amazon EFS).

Explanation

To meet the requirements of reducing the on-premises data center footprint and minimizing storage costs by moving on-premises storage to AWS, the recommended solution is:

B. Set up Amazon Elastic File System (Amazon EFS).

Amazon Elastic File System (Amazon EFS) is a fully managed, scalable, and highly available file storage service provided by AWS. It is well-suited for scenarios where multiple EC2 instances need concurrent access to a shared file system. In this case, the legacy document management application can leverage Amazon EFS to store the large number of files previously stored on the on-premises network file share.

By choosing Amazon EFS, the solutions architect can achieve the following benefits (a provisioning sketch follows the list):

  • Scalability: Amazon EFS can automatically scale the storage capacity as the number of files and the file system size grows, eliminating the need for manual capacity planning.
  • High Availability: Amazon EFS is designed to provide a highly available and durable file system, ensuring the accessibility of the files stored in the cloud.
  • Concurrent Access: Multiple EC2 instances can concurrently access the same Amazon EFS file system, enabling collaboration and improving performance for the document management application.
  • Cost Optimization: Amazon EFS offers a pay-as-you-go pricing model, allowing you to pay only for the storage consumed without any upfront commitments. This helps in minimizing storage costs and optimizing your budget.
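
A minimal provisioning sketch with boto3, assuming an existing VPC subnet and security group (the IDs below are placeholders):

    import boto3

    efs = boto3.client("efs", region_name="us-east-1")

    # Create the file system; capacity grows and shrinks automatically.
    fs = efs.create_file_system(
        PerformanceMode="generalPurpose",
        Encrypted=True,
        Tags=[{"Key": "Name", "Value": "document-share"}],
    )

    # A mount target makes the file system reachable from one subnet.
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId="subnet-0123456789abcdef0",      # placeholder
        SecurityGroups=["sg-0123456789abcdef0"],  # placeholder
    )

An EC2 instance in that subnet can then mount the file system with a standard NFS v4.1 client.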

Option A (AWS Storage Gateway file gateway) is typically used to integrate on-premises file-based applications with AWS storage services. It provides access to Amazon S3 objects through a file interface, but it may not be the optimal solution for directly migrating and hosting the existing file share.

Option C (AWS Storage Gateway as a volume gateway) is more suitable for block-level storage scenarios and may not be the most efficient choice for migrating and managing a large number of files.

Option D (Amazon Elastic Block Store – Amazon EBS) is a block-level storage service primarily designed for individual EC2 instance storage and may not be the best fit for hosting a file share.

Therefore, the recommended solution to meet the requirements of reducing the on-premises data center footprint and minimizing storage costs is to set up Amazon Elastic File System (Amazon EFS).

Question 1133

Exam Question

A company’s application is running on Amazon EC2 instances within an Auto Scaling group behind an Elastic Load Balancer. Based on the application’s history, the company anticipates a spike in traffic during a holiday each year. A solutions architect must design a strategy to ensure that the Auto Scaling group proactively increases capacity to minimize any performance impact on application users.

Which solution will meet these requirements?

A. Create an Amazon CloudWatch alarm to scale up the EC2 instances when CPU utilization exceeds 90%.
B. Create a recurring scheduled action to scale up the Auto Scaling group before the expected period of peak demand.
C. Increase the minimum and maximum number of EC2 instances in the Auto Scaling group during the peak demand period.
D. Configure an Amazon Simple Notification Service (Amazon SNS) notification to send alerts when there are autoscaling EC2_INSTANCE_LAUNCH events.

Correct Answer

B. Create a recurring scheduled action to scale up the Auto Scaling group before the expected period of peak demand.

Explanation

To meet the requirement of proactively increasing capacity in the Auto Scaling group to minimize performance impact during the anticipated spike in traffic, the recommended solution is:

B. Create a recurring scheduled action to scale up the Auto Scaling group before the expected period of peak demand.

By creating a recurring scheduled action, you can define a specific time and date for the Auto Scaling group to increase its capacity. This allows you to proactively scale up the number of EC2 instances before the anticipated spike in traffic, ensuring that the application can handle the increased load without performance impact.

Here’s how this solution addresses the requirements (a scheduling sketch follows the list):

  1. Proactive Scaling: By scheduling the scaling action in advance, you can ensure that the Auto Scaling group is prepared for the anticipated peak demand. This helps prevent any performance impact on application users by having sufficient capacity available ahead of time.
  2. Custom Timing: The recurring scheduled action allows you to specify the exact time and date when the scaling action should occur. This flexibility enables you to align the scaling activity with the expected period of peak demand, optimizing resource allocation for the application.
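
A minimal sketch of such a recurring scheduled action with boto3; the group name, dates, and capacity values are assumptions for illustration:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Scale out every December 20th at 08:00 UTC, ahead of the holiday peak.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="web-app-asg",  # hypothetical group name
        ScheduledActionName="holiday-scale-out",
        Recurrence="0 8 20 12 *",            # cron: minute hour day month weekday
        MinSize=10,
        MaxSize=30,
        DesiredCapacity=20,
    )

    # Scale back in after the event has ended.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="web-app-asg",
        ScheduledActionName="holiday-scale-in",
        Recurrence="0 8 27 12 *",
        MinSize=2,
        MaxSize=10,
        DesiredCapacity=2,
    )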

Option A (Create an Amazon CloudWatch alarm to scale up the EC2 instances based on CPU utilization) is a reactive approach that scales up the EC2 instances based on CPU utilization exceeding a threshold. While this can be effective for handling sudden increases in traffic, it may not be suitable for anticipating and proactively scaling up in preparation for a known spike in traffic.

Option C (Increase the minimum and maximum number of EC2 instances in the Auto Scaling group during the peak demand period) is a possible solution, but it may result in overprovisioning resources during non-peak periods. This approach lacks the precision and efficiency of scheduling the scaling action at specific times.

Option D (Configure an Amazon SNS notification for EC2_INSTANCE_LAUNCH events) is focused on monitoring and receiving notifications rather than proactively scaling the Auto Scaling group to minimize performance impact during peak demand.

Therefore, the recommended solution is to create a recurring scheduled action to scale up the Auto Scaling group before the expected period of peak demand. This approach allows for effective capacity planning and ensures that the application can handle increased traffic without compromising performance.

Question 1134

Exam Question

A monolithic application was recently migrated to AWS and is now running on a single Amazon EC2 instance. Due to application limitations, it is not possible to use automatic scaling to scale out the application. The chief technology officer (CTO) wants an automated solution to restore the EC2 instance in the unlikely event the underlying hardware fails.

What would allow for automatic recovery of the EC2 instance as quickly as possible?

A. Configure an Amazon CloudWatch alarm that triggers the recovery of the EC2 instance if it becomes impaired.
B. Configure an Amazon CloudWatch alarm to trigger an SNS message that alerts the CTO when the EC2 instance is impaired.
C. Configure AWS CloudTrail to monitor the health of the EC2 instance, and if it becomes impaired, trigger instance recovery.
D. Configure an Amazon EventBridge event to trigger an AWS Lambda function once an hour that checks the health of the EC2 instance and triggers instance recovery if the EC2 instance is unhealthy.

Correct Answer

A. Configure an Amazon CloudWatch alarm that triggers the recovery of the EC2 instance if it becomes impaired.

Explanation

To enable automatic recovery of the EC2 instance in the event of underlying hardware failure, the most suitable option is:

A. Configure an Amazon CloudWatch alarm that triggers the recovery of the EC2 instance if it becomes impaired.

By configuring an Amazon CloudWatch alarm to monitor the EC2 instance’s status and trigger recovery in case of impairment, you can ensure the automatic restoration of the instance as quickly as possible. Here’s how this solution works (an alarm configuration sketch follows the list):

  1. Amazon CloudWatch Alarm: Set up a CloudWatch alarm to monitor the EC2 instance’s status. Specifically, configure it to detect when the instance becomes impaired, indicating a hardware failure.
  2. Recovery Actions: Configure the CloudWatch alarm to trigger the desired recovery action. In this case, set it to recover the EC2 instance when it detects an impairment.
  3. Automatic Instance Recovery: When the CloudWatch alarm triggers the recovery action, AWS will automatically restore the EC2 instance on new underlying hardware. This process ensures that the application is quickly brought back online in the event of hardware failure, minimizing downtime and providing automated recovery.
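
A sketch of that alarm with boto3; the instance ID and evaluation settings are illustrative, while the system status check metric and the recover alarm action follow the documented pattern:

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Recover the instance when the system status check (which reflects
    # underlying host/hardware problems) fails for two consecutive minutes.
    cloudwatch.put_metric_alarm(
        AlarmName="recover-legacy-app-instance",
        Namespace="AWS/EC2",
        MetricName="StatusCheckFailed_System",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Maximum",
        Period=60,
        EvaluationPeriods=2,
        Threshold=0,
        ComparisonOperator="GreaterThanThreshold",
        # Recovery moves the instance to healthy hardware while keeping
        # its instance ID, private IP addresses, and attached EBS volumes.
        AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
    )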

Option B (Configure an Amazon CloudWatch alarm to trigger an SNS message) only alerts the CTO when the EC2 instance is impaired, but it does not perform any automated recovery of the instance.

Option C (Configure AWS CloudTrail to monitor the health of the EC2 instance) is not suitable for monitoring and recovering the health of an EC2 instance. AWS CloudTrail is primarily used for auditing and logging API activity.

Option D (Configure an Amazon EventBridge event to trigger an AWS Lambda function) allows for regular health checks of the EC2 instance, but it lacks the direct capability to trigger instance recovery in case of impairment.

Therefore, the recommended solution is to configure an Amazon CloudWatch alarm that triggers the recovery of the EC2 instance if it becomes impaired. This approach ensures the automated recovery of the instance in the event of underlying hardware failure, minimizing downtime and providing a swift restoration of the application.

Question 1135

Exam Question

A company has on-premises servers running a relational database. The current database serves high read traffic for users in different locations. The company wants to migrate to AWS with the least amount of effort. The database solution should support disaster recovery and not affect the company’s current traffic flow.

Which solution meets these requirements?

A. Use a database in Amazon RDS with Multi-AZ and at least one read replica.
B. Use a database in Amazon RDS with Multi-AZ and at least one standby replica.
C. Use databases hosted on multiple Amazon EC2 instances in different AWS Regions.
D. Use databases hosted on Amazon EC2 instances behind an Application Load Balancer in different Availability Zones.

Correct Answer

A. Use a database in Amazon RDS with Multi-AZ and at least one read replica.

Explanation

Amazon RDS (Relational Database Service) provides a managed database service in AWS that offers high availability and disaster recovery capabilities. With Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. This standby replica is kept in sync with the primary database instance, providing automatic failover in the event of an infrastructure or database issue. The failover process is transparent to the application and users, ensuring continuity of service.

Additionally, having at least one read replica allows for offloading read traffic from the primary database, improving read scalability and performance. The read replica can be located in the same or a different Availability Zone.
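
A provisioning sketch with boto3. The MySQL engine, identifiers, sizing, and credentials are assumptions for illustration, since the question does not name the engine:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Primary instance with a synchronous Multi-AZ standby for failover.
    rds.create_db_instance(
        DBInstanceIdentifier="app-db",        # hypothetical identifier
        Engine="mysql",                       # engine assumed for this example
        DBInstanceClass="db.m5.large",
        AllocatedStorage=100,
        MasterUsername="admin",
        MasterUserPassword="REPLACE_ME",      # store real secrets in Secrets Manager
        MultiAZ=True,
    )

    # Asynchronous read replica to offload the high read traffic.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="app-db-replica-1",
        SourceDBInstanceIdentifier="app-db",
    )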

Option B (Use a database in Amazon RDS with Multi-AZ and at least one standby replica) is redundant rather than helpful: a Multi-AZ deployment already maintains a synchronous standby replica, and that standby cannot serve read traffic, so it does nothing to offload the high read workload the way a read replica does.

Option C (Use databases hosted on multiple Amazon EC2 instances in different AWS Regions) can provide disaster recovery capabilities, but it requires more effort to manage and configure replication across multiple instances in different regions. It may also introduce additional complexity and latency for the application.

Option D (Use databases hosted on Amazon EC2 instances behind an Application Load Balancer in different Availability Zones) does not provide the built-in high availability and automated failover capabilities that are available with Amazon RDS Multi-AZ deployments. Managing and ensuring data consistency across multiple EC2 instances can be complex and time-consuming.

Therefore, the recommended solution is to use a database in Amazon RDS with Multi-AZ and at least one read replica. This provides a managed and highly available database solution with automatic failover, ensuring disaster recovery and minimal impact on the company’s traffic flow during the migration to AWS.

Question 1136

Exam Question

A company has migrated an on-premises Oracle database to an Amazon RDS for Oracle Multi-AZ DB instance in the us-east-1 Region. A solutions architect is designing a disaster recovery strategy to have the database provisioned in the us-west-2 Region in case the database becomes unavailable in the us-east-1 Region. The design must ensure the database is provisioned in the us-west-2 Region in a maximum of 2 hours, with a data loss window of no more than 3 hours.

How can these requirements be met?

A. Edit the DB instance and create a read replica in us-west-2. Promote the read replica to master in us-west-2 in case the disaster recovery environment needs to be activated.
B. Select the multi-Region option to provision a standby instance in us-west-2. The standby instance will be automatically promoted to master in us-west-2 in case the disaster recovery environment needs to be created.
C. Take automated snapshots of the database instance and copy them to us-west-2 every 3 hours. Restore the latest snapshot to provision another database instance in us-west-2 in case the disaster recovery environment needs to be activated.
D. Create multimaster read/write instances across multiple AWS Regions. Select VPCs in us-east-1 and us-west-2 for that deployment. Keep the master read/write instance in us-west-2 available to avoid having to activate a disaster recovery environment.

Correct Answer

B. Select the multi-Region option to provision a standby instance in us-west-2. The standby instance will be automatically promoted to master in us-west-2 in case the disaster recovery environment needs to be created.

Explanation

To meet the requirements of provisioning the database in the us-west-2 Region within a maximum of 2 hours and a data loss window of no more than 3 hours in case of a disaster, the recommended solution is:

B. Select the multi-Region option to provision a standby instance in us-west-2. The standby instance will be automatically promoted to master in us-west-2 in case the disaster recovery environment needs to be created.

By selecting the multi-Region option for Amazon RDS for Oracle, you can provision a standby instance in a different AWS Region (in this case, us-west-2) as part of the disaster recovery strategy. The standby instance remains synchronized with the primary instance in the us-east-1 Region, ensuring data consistency. In the event of a disaster or failure in the us-east-1 Region, the standby instance in us-west-2 can be automatically promoted to become the new master, enabling the database to continue operations.

This solution provides a fully managed and automated disaster recovery setup for the Oracle database. The standby instance in us-west-2 is constantly kept in sync with the primary instance, reducing the potential for data loss. With the automatic promotion of the standby instance, the recovery time objective (RTO) of 2 hours is met, and the data loss window is limited to a maximum of 3 hours.

Option A (Edit the DB instance and create a read replica in us-west-2) is not the ideal approach for disaster recovery as it involves manual intervention to promote the read replica to a master in case of a disaster.

Option C (Take automated snapshots of the database instance and copy them to us-west-2 every 3 hours) relies on periodic snapshots and manual restoration, which may not meet the required recovery time objective of 2 hours.

Option D (Create multimaster read/write instances across multiple AWS Regions) does not provide the same level of automated failover and managed disaster recovery capabilities as the multi-Region standby setup. It requires manual coordination and maintenance of the multimaster configuration, which adds complexity and potential for human error.

Therefore, the recommended solution is to select the multi-Region option to provision a standby instance in the us-west-2 Region, which can be automatically promoted to master in case of a disaster, ensuring a timely recovery within the specified RTO and data loss window.

Question 1137

Exam Question

A marketing company is storing CSV files in an Amazon S3 bucket for statistical analysis. An application on an Amazon EC2 instance needs permission to efficiently process the CSV data stored in the S3 bucket.

Which action will MOST securely grant the EC2 instance access to the S3 bucket?

A. Attach a resource-based policy to the S3 bucket.
B. Create an IAM user for the application with specific permissions to the S3 bucket.
C. Associate an IAM role with least privilege permissions to the EC2 instance profile.
D. Store AWS credentials directly on the EC2 instance for applications on the instance to use for API calls.

Correct Answer

C. Associate an IAM role with least privilege permissions to the EC2 instance profile.

Explanation

The MOST secure way to grant the EC2 instance access to the S3 bucket is:

C. Associate an IAM role with least privilege permissions to the EC2 instance profile.

Associating an IAM role with the EC2 instance profile is the recommended method for granting access to AWS services from an EC2 instance. The EC2 instance profile acts as a container for the IAM role, which defines the specific permissions granted to the instance. By associating the appropriate IAM role with the EC2 instance, you can grant the necessary permissions to access and process the CSV data stored in the S3 bucket.
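
A sketch of the pattern with boto3: a role that only the EC2 service can assume, scoped to read a single bucket, wrapped in an instance profile, and attached to the instance. The role name, bucket name, and instance ID are placeholders:

    import json

    import boto3

    iam = boto3.client("iam")
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Trust policy: only the EC2 service may assume this role.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }
    iam.create_role(RoleName="csv-processor-role",
                    AssumeRolePolicyDocument=json.dumps(trust_policy))

    # Least privilege: read-only access to the one bucket the app needs.
    permissions = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::example-csv-bucket",
                         "arn:aws:s3:::example-csv-bucket/*"],
        }],
    }
    iam.put_role_policy(RoleName="csv-processor-role",
                        PolicyName="read-csv-bucket",
                        PolicyDocument=json.dumps(permissions))

    # Wrap the role in an instance profile and attach it to the instance.
    iam.create_instance_profile(InstanceProfileName="csv-processor-profile")
    iam.add_role_to_instance_profile(InstanceProfileName="csv-processor-profile",
                                     RoleName="csv-processor-role")
    ec2.associate_iam_instance_profile(
        IamInstanceProfile={"Name": "csv-processor-profile"},
        InstanceId="i-0123456789abcdef0",
    )

Applications on the instance then obtain temporary, automatically rotated credentials from the instance metadata service, so no long-lived keys are stored on disk.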

Option A (Attach a resource-based policy to the S3 bucket) is a valid mechanism for granting access, but it is not the MOST secure option on its own. Resource-based policies are typically used to grant access to external entities, such as other AWS accounts, and a bucket policy by itself does not give the instance credentials to sign its API calls.

Option B (Creating an IAM user for the application with specific permissions) is not the recommended approach for granting access to an EC2 instance. IAM users are typically used for human users and not intended for granting permissions directly to EC2 instances.

Option D (Storing AWS credentials directly on the EC2 instance) is not a recommended practice as it poses security risks. Storing credentials on the EC2 instance can lead to potential exposure and compromise if the instance is compromised or if the credentials are inadvertently disclosed.

By using an IAM role associated with the EC2 instance profile, you can securely grant the necessary permissions to the EC2 instance without the need to store or manage explicit credentials. The IAM role provides a secure and manageable way to control access to AWS services from the EC2 instance.

Question 1138

Exam Question

A solutions architect is designing a customer-facing application. The application is expected to have a variable amount of reads and writes depending on the time of year and clearly defined access patterns throughout the year. Management requires that database auditing and scaling be managed in the AWS Cloud. The Recovery Point Objective (RPO) must be less than 5 hours.

Which solutions can accomplish this? (Choose two.)

A. Use Amazon DynamoDB with auto scaling. Use on-demand backups and AWS CloudTrail.
B. Use Amazon DynamoDB with auto scaling. Use on-demand backups and Amazon DynamoDB Streams.
C. Use Amazon Redshift. Configure concurrency scaling. Enable audit logging. Perform database snapshots every 4 hours.
D. Use Amazon RDS with Provisioned IOPS. Enable the database auditing parameter. Perform database snapshots every 5 hours.
E. Use Amazon RDS with auto scaling. Enable the database auditing parameter. Configure the backup retention period to at least 1 day.

Correct Answer

A. Use Amazon DynamoDB with auto scaling. Use on-demand backups and AWS CloudTrail.
B. Use Amazon DynamoDB with auto scaling. Use on-demand backups and Amazon DynamoDB Streams.

Explanation

The two solutions that can accomplish the requirements are:

A. Use Amazon DynamoDB with auto scaling. Use on-demand backups and AWS CloudTrail.
B. Use Amazon DynamoDB with auto scaling. Use on-demand backups and Amazon DynamoDB Streams.

Amazon DynamoDB is a fully managed NoSQL database service that provides scalability, high availability, and durability. It is well-suited for applications with variable read and write patterns. By using DynamoDB with auto scaling, the database can automatically adjust its capacity to handle the variable workload.

Option A suggests using on-demand backups for data protection and AWS CloudTrail for auditing. On-demand backups allow you to create full backups of the DynamoDB table at any time. AWS CloudTrail provides detailed monitoring and logging of API activity, including actions taken on the DynamoDB table, enabling auditing capabilities.

Option B suggests using Amazon DynamoDB Streams in addition to on-demand backups. DynamoDB Streams captures a time-ordered sequence of item-level modifications made to the DynamoDB table. It allows you to process these modifications and build real-time data pipelines or trigger downstream actions.
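
A sketch with boto3 of how the auto scaling and on-demand backup pieces fit together; the table name and capacity bounds are illustrative:

    import boto3

    dynamodb = boto3.client("dynamodb", region_name="us-east-1")
    aas = boto3.client("application-autoscaling", region_name="us-east-1")

    # Register the table's read capacity as a scalable target.
    aas.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/Orders",  # hypothetical table
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        MinCapacity=5,
        MaxCapacity=500,
    )

    # Target tracking keeps consumed capacity near 70% of provisioned.
    aas.put_scaling_policy(
        PolicyName="orders-read-scaling",
        ServiceNamespace="dynamodb",
        ResourceId="table/Orders",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
            },
        },
    )

    # On-demand backup: a full snapshot of the table at this point in time.
    dynamodb.create_backup(TableName="Orders", BackupName="orders-pre-peak")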

Options C, D, and E do not meet the requirement of having an RPO of less than 5 hours or do not provide the necessary features for managing database auditing and scaling in the AWS Cloud.

Therefore, options A and B are the correct solutions that fulfill the given requirements.

Question 1139

Exam Question

A solutions architect is designing the cloud architecture for a new application being deployed on AWS. The process should run in parallel while adding and removing application nodes as needed based on the number of jobs to be processed. The processor application is stateless. The solutions architect must ensure that the application is loosely coupled and the job items are durably stored.

Which design should the solutions architect use?

A. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage.
B. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage.
C. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue.
D. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic.

Correct Answer

C. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue.

Explanation

The design that the solutions architect should use is:

C. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue.

The requirement states that the process should run in parallel while adding and removing application nodes based on the number of jobs to be processed. This implies that the application needs to scale dynamically to handle varying workloads. Additionally, the requirement specifies that the processor application is stateless, indicating that it does not rely on any specific instance or node.

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that provides durable and reliable storage of messages. By using an SQS queue to hold the jobs, the design ensures that the job items are durably stored, allowing the processing nodes to retrieve and process the jobs as needed.

Creating an Auto Scaling group with a launch template allows for automatic scaling of the application nodes. The scaling policy can be configured to add or remove nodes based on the number of items in the SQS queue. This ensures that the application can dynamically scale up or down based on the workload.

The use of an Amazon Machine Image (AMI) allows for easy deployment of the processor application on the instances launched by the Auto Scaling group. The launch template ensures consistency in the deployment process.
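
A sketch of the stateless worker loop each node in the Auto Scaling group might run, using boto3 long polling; the queue URL and the job handler are placeholders:

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # placeholder

    def process(job_body: str) -> None:
        """Hypothetical job handler; the real work depends on the application."""
        print(f"processing job: {job_body}")

    while True:
        # Long polling (up to 20 seconds) reduces empty responses and API cost.
        response = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,
        )
        for message in response.get("Messages", []):
            process(message["Body"])
            # Delete only after successful processing; otherwise the message
            # becomes visible again and another node retries it.
            sqs.delete_message(QueueUrl=QUEUE_URL,
                               ReceiptHandle=message["ReceiptHandle"])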

Options A and D suggest using Amazon SNS to send the jobs. SNS is a publish/subscribe messaging service: it pushes messages to subscribers rather than storing them for later retrieval, so it does not provide the durable storage required for the job items.

Option B suggests using network usage as a scaling metric. However, it does not align with the requirement of scaling based on the number of jobs to be processed.

Therefore, option C is the correct design that meets the given requirements.

Question 1140

Exam Question

A company is migrating to the AWS Cloud. A file server is the first workload to migrate. Users must be able to access the file share using the Server Message Block (SMB) protocol.

Which AWS managed service meets these requirements?

A. Amazon EBS
B. Amazon EC2
C. Amazon FSx
D. Amazon S3

Correct Answer

C. Amazon FSx

Explanation

The AWS managed service that meets the requirement of allowing users to access the file share using the Server Message Block (SMB) protocol is:

C. Amazon FSx

Amazon FSx for Windows File Server is a fully managed file storage service that provides file shares that are accessible over the Server Message Block (SMB) protocol. It is designed to provide high-performance file storage for Windows-based applications and workloads.
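
A provisioning sketch with boto3, assuming an AWS Managed Microsoft AD directory already exists; the subnet ID, directory ID, and sizing are placeholders:

    import boto3

    fsx = boto3.client("fsx", region_name="us-east-1")

    # SMB-accessible Windows file system (IDs and sizes are placeholders).
    fsx.create_file_system(
        FileSystemType="WINDOWS",
        StorageCapacity=300,  # GiB
        StorageType="SSD",
        SubnetIds=["subnet-0123456789abcdef0"],
        WindowsConfiguration={
            "ThroughputCapacity": 32,             # MB/s
            "ActiveDirectoryId": "d-0123456789",  # AWS Managed Microsoft AD
            "DeploymentType": "SINGLE_AZ_2",
        },
    )

Users then map the share over SMB with the file system’s DNS name, just as they would with an on-premises Windows file server.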

Amazon EBS (Elastic Block Store) is a block-level storage service that provides persistent storage volumes for use with Amazon EC2 instances. It does not directly provide file sharing capabilities over SMB.

Amazon EC2 (Elastic Compute Cloud) is a virtual server in the cloud that can be used to run applications and workloads. While it is possible to set up an EC2 instance as a file server using the SMB protocol, it requires manual configuration and management.

Amazon S3 (Simple Storage Service) is an object storage service that provides scalable and durable storage for various types of data. It does not natively support the SMB protocol for file sharing.

Therefore, the correct option is C. Amazon FSx, which is specifically designed for SMB file sharing in the AWS Cloud.