AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 38

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free of charge to help you prepare for the SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.

Question 1091

Exam Question

A public-facing web application queries a database hosted on an Amazon EC2 instance in a private subnet. A large number of queries involve multiple table joins, and the application performance has been degrading due to an increase in complex queries. The application team will be performing updates to improve performance.

What should a solutions architect recommend to the application team? (Choose two.)

A. Cache query data in Amazon SQS
B. Create a read replica to offload queries
C. Migrate the database to Amazon Athena
D. Implement Amazon DynamoDB Accelerator to cache data.
E. Migrate the database to Amazon RDS

Correct Answer

B. Create a read replica to offload queries
E. Migrate the database to Amazon RDS

Explanation

Create a read replica to offload queries: Creating a read replica of the database hosted on the Amazon EC2 instance in a private subnet can help offload the complex query workload from the primary database. The read replica can handle read traffic, allowing the primary database to focus on write operations. This can significantly improve the application’s performance by distributing the load across multiple database instances.

Migrate the database to Amazon RDS: Migrating the database to Amazon RDS (Relational Database Service) provides managed database services that simplify database administration tasks, improve scalability, and enhance performance. Amazon RDS offers automated backups, automated software patching, and easy scaling options. By migrating the database to Amazon RDS, the application team can leverage the benefits of a fully managed service and take advantage of features such as read replicas, automated backups, and scaling capabilities to improve performance and scalability.
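
If both recommendations are followed, the read replica can be created with a single API call once the database is running on Amazon RDS. Below is a minimal boto3 sketch; the instance identifiers and instance class are illustrative placeholders, not values from the question.

```python
import boto3

rds = boto3.client("rds")

# Assumes the database has already been migrated to an RDS instance named "app-db"
# (placeholder). Creates a read replica that the application can target for its
# heavy, join-intensive read queries.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica",   # placeholder replica identifier
    SourceDBInstanceIdentifier="app-db",     # placeholder primary identifier
    DBInstanceClass="db.r6g.large",          # size to match the read workload
)

# Wait until the replica is available before pointing read traffic at its endpoint.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="app-db-replica")
```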

Options A (caching query data in Amazon SQS), C (migrating the database to Amazon Athena), and D (implementing Amazon DynamoDB Accelerator to cache data) are not suitable recommendations for improving performance in the context of the given scenario.

  • Caching query data in Amazon SQS is not applicable in this case as SQS (Simple Queue Service) is primarily used for message queuing, not for caching query data.
  • Migrating the database to Amazon Athena is not recommended as Athena is an interactive query service for analyzing data stored in Amazon S3 and may not be suitable for hosting the application’s primary database.
  • Implementing Amazon DynamoDB Accelerator (DAX) is not applicable as DynamoDB is a NoSQL database service and is not mentioned as the current database solution.

Therefore, options B (create a read replica to offload queries) and E (migrate the database to Amazon RDS) are the recommended solutions to improve the application’s performance and address the increase in complex queries.

Question 1092

Exam Question

A company is reviewing its AWS Cloud deployment to ensure its data is not accessed by anyone without appropriate authorization. A solutions architect is tasked with identifying all open Amazon S3 buckets and recording any S3 bucket configuration changes.

What should the solutions architect do to accomplish this?

A. Enable AWS Config service with the appropriate rules
B. Enable AWS Trusted Advisor with the appropriate checks.
C. Write a script using an AWS SDK to generate a bucket report
D. Enable Amazon S3 server access logging and configure Amazon CloudWatch Events.

Correct Answer

A. Enable AWS Config service with the appropriate rules

Explanation

To accomplish the task of identifying open Amazon S3 buckets and recording bucket configuration changes, a solutions architect should:

A. Enable AWS Config service with the appropriate rules.

AWS Config continuously records configuration changes to supported resources, including S3 buckets, as configuration items, which directly satisfies the requirement to record any bucket configuration changes. AWS Config also provides managed rules, such as s3-bucket-public-read-prohibited and s3-bucket-public-write-prohibited, that evaluate every bucket in the account and flag any bucket that allows public access, which satisfies the requirement to identify open buckets.

Option B (enabling AWS Trusted Advisor with the appropriate checks) includes an S3 bucket permissions check that can identify publicly accessible buckets, but Trusted Advisor does not record configuration changes over time.

Option C (writing a script using an AWS SDK to generate a bucket report) could produce a point-in-time report of bucket permissions, but the script would have to be built, scheduled, and maintained by the company, and it does not continuously record configuration changes the way AWS Config does.

Option D (enabling Amazon S3 server access logging and configuring Amazon CloudWatch Events) captures object access requests and can trigger actions on bucket-level events, but it does not evaluate bucket permissions for public access or provide a history of configuration changes.

Therefore, the recommended approach is option A: enable AWS Config with the appropriate managed rules to identify open S3 buckets and record their configuration changes.
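
As a minimal sketch of how option A could be set up with boto3, assuming an AWS Config configuration recorder and delivery channel already exist in the Region (the rule names used here are illustrative):

```python
import boto3

config = boto3.client("config")

# Add the AWS managed rules that flag publicly readable or writable S3 buckets.
for name, identifier in [
    ("s3-public-read-check", "S3_BUCKET_PUBLIC_READ_PROHIBITED"),
    ("s3-public-write-check", "S3_BUCKET_PUBLIC_WRITE_PROHIBITED"),
]:
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": name,
            "Source": {"Owner": "AWS", "SourceIdentifier": identifier},
        }
    )

# List the buckets the read rule currently evaluates as NON_COMPLIANT (i.e., open).
details = config.get_compliance_details_by_config_rule(
    ConfigRuleName="s3-public-read-check",
    ComplianceTypes=["NON_COMPLIANT"],
)
for result in details["EvaluationResults"]:
    qualifier = result["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
    print(qualifier["ResourceId"])
```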

Question 1093

Exam Question

A company has 150 TB of archived image data stored on-premises that needs to be moved to the AWS Cloud within the next month. The company’s current network connection allows up to 100 Mbps uploads for this purpose during the night only.

What is the MOST cost-effective mechanism to move this data and meet the migration deadline?

A. Use AWS Snowmobile to ship the data to AWS.
B. Order multiple AWS Snowball devices to ship the data to AWS.
C. Enable Amazon S3 Transfer Acceleration and securely upload the data.
D. Create an Amazon S3 VPC endpoint and establish a VPN to upload the data.

Correct Answer

B. Order multiple AWS Snowball devices to ship the data to AWS.

Explanation

Given the time constraint and the limitations of the current network connection, the most cost-effective mechanism to move the 150 TB of archived image data to the AWS Cloud within the next month would be:

B. Order multiple AWS Snowball devices to ship the data to AWS.

AWS Snowball is a service that provides a physical data transfer solution. It involves shipping a secure device to the customer’s location, allowing for offline data transfer. In this case, with 150 TB of data and a limited network connection of up to 100 Mbps during the night only, it would be more efficient to use AWS Snowball.

By ordering multiple AWS Snowball devices, the company can transfer large amounts of data offline, leveraging the higher capacity of the Snowball device compared to the network connection. Once the data is transferred to the Snowball devices, they can be shipped to AWS, where the data will be uploaded to the desired destination, such as an Amazon S3 bucket.

Option A (AWS Snowmobile) is not the most cost-effective choice in this scenario. AWS Snowmobile is designed for massive data transfer (exabytes) and is typically used when there is a need to transfer very large datasets. The 150 TB of data in this case does not require the scale of data transfer that Snowmobile provides, and it would likely be more costly compared to using Snowball.

Option C (enabling Amazon S3 Transfer Acceleration) would not be the most suitable choice in this case since the company’s network connection has a limited upload speed of 100 Mbps during the night only. Although Transfer Acceleration can help optimize data transfer speed for clients, it still relies on the available network connection, which is not sufficient for timely transfer of 150 TB of data.
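
The arithmetic makes the network limitation concrete. The sketch below estimates the upload time over the 100 Mbps link; the 8-hour nightly window is an assumption for illustration, since the question only states that uploads are allowed at night.

```python
# Rough transfer-time estimate for uploading 150 TB over a 100 Mbps link.
data_bits = 150e12 * 8      # 150 TB expressed in bits
link_bps = 100e6            # 100 Mbps in bits per second

hours_needed = data_bits / link_bps / 3600
print(f"{hours_needed:,.0f} hours of continuous transfer")   # ~3,333 hours

# Assume an 8-hour nightly window (illustrative assumption).
nights_needed = hours_needed / 8
print(f"{nights_needed:,.0f} nights needed")                  # ~417 nights
```

Even at 100 percent link utilization, the upload would take more than a year of nightly transfers, which is why an offline transfer with Snowball devices is required to meet the one-month deadline.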

Option D (creating an Amazon S3 VPC endpoint and establishing a VPN) would still rely on the limited network connection and would not significantly improve the data transfer speed. It would also introduce additional complexity with VPN setup and management.

Therefore, ordering multiple AWS Snowball devices would be the most cost-effective mechanism to move the 150 TB of archived image data to the AWS Cloud within the given time frame.

Question 1094

Exam Question

A company runs a web service on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across two Availability Zones. The company needs a minimum of four instances at all times to meet the required service level agreement (SLA) while keeping costs low.

If an Availability Zone fails, how can the company remain compliant with the SLA?

A. Add a target tracking scaling policy with a short cooldown period.
B. Change the Auto Scaling group launch configuration to use a larger instance type
C. Change the Auto Scaling group to use six servers across three Availability Zones
D. Change the Auto Scaling group to use eight servers across two Availability Zones

Correct Answer

C. Change the Auto Scaling group to use six servers across three Availability Zones

Explanation

If an Availability Zone fails, the company can remain compliant with the SLA by implementing the following:

C. Change the Auto Scaling group to use six servers across three Availability Zones.

By changing the Auto Scaling group to use six servers across three Availability Zones, the company ensures that even if one Availability Zone fails, there will still be a minimum of four instances running (two instances in each of the remaining Availability Zones). This meets the requirement of having a minimum of four instances at all times to meet the SLA.
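
A minimal boto3 sketch of this change is shown below; the Auto Scaling group name and subnet IDs are placeholders, and the third subnet is assumed to already exist in the additional Availability Zone.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Spread the group across three Availability Zones (one subnet per AZ) and raise
# the minimum to six so that losing any single AZ still leaves four instances.
# The group name and subnet IDs are placeholders.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-service-asg",
    MinSize=6,
    DesiredCapacity=6,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222,subnet-ccc333",
)
```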

Option A (adding a target tracking scaling policy with a short cooldown period) may help in maintaining the desired number of instances, but it does not address the issue of an Availability Zone failure. It is more focused on scaling based on target metrics rather than ensuring availability during an AZ failure.

Option B (changing the Auto Scaling group launch configuration to use a larger instance type) does not directly address the requirement of maintaining the minimum number of instances during an Availability Zone failure. It is focused on instance type and does not ensure availability in case of an AZ failure.

Option D (changing the Auto Scaling group to use eight servers across two Availability Zones) would also leave four instances running if one Availability Zone fails, but it requires eight instances during normal operation instead of six, so it does not keep costs as low as option C.

Therefore, option C is the most appropriate choice as it ensures the desired minimum number of instances across multiple Availability Zones, providing both availability and compliance with the SLA.

Question 1095

Exam Question

A company recently implemented hybrid cloud connectivity using AWS Direct Connect and is migrating data to Amazon S3. The company is looking for a fully managed solution that will automate and accelerate the replication of data between the on-premises storage systems and AWS storage services.

Which solution should a solutions architect recommend to keep the data private?

A. Deploy an AWS DataSync agent to the on-premises environment. Configure a sync job to replicate the data and connect it with an AWS service endpoint.
B. Deploy an AWS DataSync agent for the on-premises environment. Schedule a batch job to replicate point-in-time snapshots to AWS.
C. Deploy an AWS Storage Gateway volume gateway for the on-premises environment. Configure it to store data locally, and asynchronously back up point-in-time snapshots to AWS.
D. Deploy an AWS Storage Gateway file gateway for the on-premises environment. Configure it to store data locally, and asynchronously back up point-in-time snapshots to AWS.

Correct Answer

A. Deploy an AWS DataSync agent to the on-premises environment. Configure a sync job to replicate the data and connect it with an AWS service endpoint.

Explanation

To keep the data private while automating and accelerating the replication of data between on-premises storage systems and AWS storage services, a solutions architect should recommend deploying an AWS DataSync agent to the on-premises environment. AWS DataSync is a fully managed service that simplifies and accelerates data transfers between on-premises storage systems and AWS storage services.

By deploying an AWS DataSync agent on-premises, the company can securely replicate data to Amazon S3. DataSync encrypts data in transit with TLS and supports S3 server-side encryption for data at rest. The agent can also be configured to communicate with DataSync through a VPC (interface) endpoint, so transfer traffic flows over the AWS Direct Connect connection instead of the public internet. By configuring a sync job that connects the agent to this AWS service endpoint, the company can automate the replication process and keep the data private in transit.
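
A minimal boto3 sketch of the replication setup is shown below, assuming the DataSync agent has already been activated; the agent ARN, NFS share path, bucket name, and role ARN are placeholders.

```python
import boto3

datasync = boto3.client("datasync")

# Source: an NFS share on the on-premises storage system, reached through the
# already-activated DataSync agent (placeholder ARN).
source = datasync.create_location_nfs(
    ServerHostname="nas.example.internal",
    Subdirectory="/exports/archive",
    OnPremConfig={
        "AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"]
    },
)

# Destination: the target S3 bucket, accessed through an IAM role granted to DataSync.
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-archive-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/datasync-s3-access"},
)

# The task ties the two locations together; it can be scheduled or started on demand.
task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="onprem-to-s3-replication",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```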

Option B (deploying an AWS DataSync agent for the on-premises environment and scheduling batch jobs for point-in-time snapshots replication) does not provide a fully managed solution for ongoing data replication and is focused on specific snapshots rather than continuous replication.

Option C (deploying an AWS Storage Gateway volume gateway and asynchronously backing up point-in-time snapshots) is not an ideal solution as it does not provide fully managed replication between on-premises storage systems and AWS storage services. It is primarily focused on storing data locally and asynchronously backing up snapshots, rather than continuous replication.

Option D (deploying an AWS Storage Gateway file gateway and asynchronously backing up point-in-time snapshots) also does not provide a fully managed solution for ongoing replication and is focused on file-based access rather than continuous replication.

Therefore, option A is the most appropriate choice as it offers a fully managed solution with AWS DataSync for automating and accelerating data replication while keeping the data private during transit.

Question 1096

Exam Question

A company needs to implement a relational database with multi-Region disaster recovery, a Recovery Point Objective (RPO) of 1 second, and a Recovery Time Objective (RTO) of 1 minute.

Which AWS solution can achieve this?

A. Amazon Aurora Global Database
B. Amazon DynamoDB global tables.
C. Amazon RDS for MySQL with Multi-AZ enabled.
D. Amazon RDS for MySQL with a cross-Region snapshot copy.

Correct Answer

A. Amazon Aurora Global Database

Explanation

To achieve a multi-Region disaster recovery solution with a 1-second Recovery Point Objective (RPO) and a 1-minute Recovery Time Objective (RTO), the most suitable AWS solution is Amazon Aurora Global Database.

Amazon Aurora Global Database provides a globally distributed, highly available, and durable relational database solution. It allows for the replication of an Aurora database across multiple AWS Regions, providing low-latency global reads and enabling disaster recovery in the event of a Region-wide outage.

With Amazon Aurora Global Database, updates made to the primary cluster in one Region are replicated to the secondary Regions using dedicated, storage-level replication with a typical lag of under one second, supporting the 1-second RPO.

In the event of a Region-wide outage, one of the secondary Regions can be promoted to take full read/write workloads, a process that typically completes in less than 1 minute, meeting the 1-minute RTO requirement.
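
A minimal boto3 sketch of creating a global database from an existing Aurora cluster is shown below; the cluster identifiers, account ID, and Regions are placeholders.

```python
import boto3

# Promote an existing Aurora cluster (placeholder ARN) to be the primary of a
# new global database.
rds_primary = boto3.client("rds", region_name="us-east-1")
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="app-global-db",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:primary-cluster",
)

# Add a read-only secondary cluster in another Region. It receives storage-level
# replication from the primary and can be promoted during a Region-wide outage.
# (DB instances would still need to be added to this cluster separately.)
rds_secondary = boto3.client("rds", region_name="us-west-2")
rds_secondary.create_db_cluster(
    DBClusterIdentifier="app-secondary-cluster",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="app-global-db",
)
```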

Option B (Amazon DynamoDB global tables) is not a suitable solution for a relational database as it is a NoSQL database service.

Option C (Amazon RDS for MySQL with Multi-AZ enabled) provides high availability within a single AWS Region but does not offer automatic replication and failover to a different Region.

Option D (Amazon RDS for MySQL with a cross-Region snapshot copy) allows for manual replication of database snapshots to another Region, but it does not provide real-time replication and failover capabilities required for a 1-second RPO and a 1-minute RTO.

Therefore, the most appropriate choice is option A, Amazon Aurora Global Database, as it offers near real-time replication across multiple AWS Regions, providing both the desired RPO and RTO for a multi-Region disaster recovery solution.

Question 1097

Exam Question

A company recently deployed a new auditing system to centralize information about operating system versions, patching, and installed software for Amazon EC2 instances. A solutions architect must ensure all instances provisioned through EC2 Auto Scaling groups successfully send reports to the auditing system as soon as they are launched and terminated.

Which solution achieves these goals MOST efficiently?

A. Use a scheduled AWS Lambda function and execute a script remotely on all EC2 instances to send data to the audit system.
B. Use EC2 Auto Scaling lifecycle hooks to execute a custom script to send data to the audit system when instances are launched and terminated.
C. Use an EC2 Auto Scaling launch configuration to execute a custom script through user data to send data to the audit system when instances are launched and terminated.
D. Execute a custom script on the instance operating system to send data to the audit system. Configure the script to be executed by the EC2 Auto Scaling group when the instance starts and is terminated.

Correct Answer

B. Use EC2 Auto Scaling lifecycle hooks to execute a custom script to send data to the audit system when instances are launched and terminated.

Explanation

To ensure that all instances provisioned through EC2 Auto Scaling groups successfully send reports to the auditing system as soon as they are launched and terminated, the most efficient solution is to use EC2 Auto Scaling lifecycle hooks.

EC2 Auto Scaling lifecycle hooks allow you to perform custom actions as instances launch or terminate in an Auto Scaling group. By leveraging lifecycle hooks, you can execute a custom script or perform any necessary actions to send data to the auditing system at the appropriate stages.

When an instance is launched, a lifecycle hook can trigger the execution of a custom script that sends the required information to the auditing system. Similarly, when an instance is terminated, another lifecycle hook can be used to ensure that the necessary data is sent to the auditing system before the instance is terminated.

This approach ensures that the data is sent to the auditing system reliably and in a timely manner, as it is integrated with the lifecycle of the EC2 instances provisioned by the Auto Scaling group.
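
A minimal boto3 sketch of adding the two hooks is shown below; the group name, SNS topic, and IAM role ARN are placeholders, and the script that actually reports to the auditing system would run in whatever target the hooks notify (for example, a Lambda function subscribed to the topic).

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Placeholder names and ARNs; the notification target must already exist.
GROUP = "web-asg"
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:audit-notifications"
ROLE_ARN = "arn:aws:iam::111122223333:role/asg-lifecycle-hook-role"

for hook_name, transition in [
    ("audit-on-launch", "autoscaling:EC2_INSTANCE_LAUNCHING"),
    ("audit-on-terminate", "autoscaling:EC2_INSTANCE_TERMINATING"),
]:
    autoscaling.put_lifecycle_hook(
        AutoScalingGroupName=GROUP,
        LifecycleHookName=hook_name,
        LifecycleTransition=transition,
        NotificationTargetARN=TOPIC_ARN,
        RoleARN=ROLE_ARN,
        HeartbeatTimeout=300,        # seconds to wait for the audit report
        DefaultResult="CONTINUE",    # proceed even if the report never completes
    )

# After sending its report, the reporting script signals completion so the
# instance can finish launching or terminating, for example:
# autoscaling.complete_lifecycle_action(
#     AutoScalingGroupName=GROUP,
#     LifecycleHookName="audit-on-launch",
#     InstanceId="i-0123456789abcdef0",
#     LifecycleActionResult="CONTINUE",
# )
```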

Option A (using a scheduled AWS Lambda function) would require additional overhead and complexity to manage the scheduling and execution of the script on all EC2 instances.

Option C (using an EC2 Auto Scaling launch configuration with user data) would execute the script during instance launch but does not provide a mechanism to trigger the script during termination.

Option D (executing a custom script on the instance operating system) would require manual configuration and management on each instance, making it less efficient and harder to ensure consistent execution across all instances.

Therefore, the most efficient solution is option B, using EC2 Auto Scaling lifecycle hooks to execute a custom script to send data to the audit system when instances are launched and terminated.

Question 1098

Exam Question

A solutions architect must migrate a Windows Internet Information Services (IIS) web application to AWS. The application currently relies on a file share hosted on the user’s on-premises network-attached storage (NAS). The solutions architect has proposed migrating the IIS web servers.

Which replacement for the on-premises file share is MOST resilient and durable?

A. Migrate the file share to Amazon RDS.
B. Migrate the file share to AWS Storage Gateway.
C. Migrate the file share to Amazon FSx for Windows File Server.
D. Migrate the file share to Amazon Elastic File System (Amazon EFS).

Correct Answer

C. Migrate the file share to Amazon FSx for Windows File Server.

Explanation

To replace the on-premises file share in the most resilient and durable manner, the recommended option is to migrate the file share to Amazon FSx for Windows File Server.

Amazon FSx for Windows File Server provides fully managed native Windows file systems, which are highly available and durable. It is designed to provide shared file storage that is accessible over the Server Message Block (SMB) protocol, making it an ideal replacement for the Windows file share used by the IIS web application.

Key features of Amazon FSx for Windows File Server that make it resilient and durable include:

  1. High Availability: Multi-AZ deployments of Amazon FSx for Windows File Server automatically replicate data across two Availability Zones, providing built-in redundancy and ensuring availability even if one Availability Zone fails.
  2. Durability: Amazon FSx stores data on highly durable and redundant storage infrastructure, which helps protect against data loss.
  3. Backups: Amazon FSx takes automatic daily backups, allowing you to easily recover data in case of accidental deletion or corruption.
  4. Scalability: Amazon FSx allows you to scale your file system capacity and throughput as per your application’s needs.
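
As a minimal boto3 sketch of provisioning such a Multi-AZ file system (the subnet IDs, Active Directory ID, and sizing values below are placeholders):

```python
import boto3

fsx = boto3.client("fsx")

# Multi-AZ Amazon FSx for Windows File Server file system. Subnet IDs, directory
# ID, and sizing are placeholders, not values from the question.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,                     # GiB
    StorageType="SSD",
    SubnetIds=["subnet-aaa111", "subnet-bbb222"],
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",       # standby file server in a second AZ
        "PreferredSubnetId": "subnet-aaa111",
        "ThroughputCapacity": 32,             # MB/s
        "ActiveDirectoryId": "d-1234567890",  # directory used for SMB authentication
        "AutomaticBackupRetentionDays": 7,
    },
)
```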

Migrating the file share to Amazon RDS (option A) or AWS Storage Gateway (option B) is not suitable for this use case as these services are designed for database and hybrid storage needs respectively, and may not provide the same level of resilience and durability as Amazon FSx.

Amazon Elastic File System (Amazon EFS) (option D) is another managed file storage service provided by AWS. While it offers scalability and durability, it is not specifically designed to provide native Windows file systems and may not be the optimal choice for migrating a Windows-based IIS web application. Amazon FSx for Windows File Server is better suited for this scenario.

Question 1099

Exam Question

A healthcare company stores highly sensitive patient records. Compliance requires that multiple copies be stored in different locations. Each record must be stored for 7 years. The company has a service level agreement (SLA) to provide records to government agencies immediately for the first 30 days and then within 4 hours of a request thereafter.

What should a solutions architect recommend?

A. Use Amazon S3 with cross-Region replication enabled. After 30 days, transition the data to Amazon S3 Glacier using a lifecycle policy.
B. Use Amazon S3 with cross-origin resource sharing (CORS) enabled. After 30 days, transition the data to Amazon S3 Glacier using a lifecycle policy.
C. Use Amazon S3 with cross-Region replication enabled. After 30 days, transition the data to Amazon S3 Glacier Deep Archive using a lifecycle policy.
D. Use Amazon S3 with cross-origin resource sharing (CORS) enabled. After 30 days, transition the data to Amazon S3 Glacier Deep Archive using a lifecycle policy.

Correct Answer

A. Use Amazon S3 with cross-Region replication enabled. After 30 days, transition the data to Amazon S3 Glacier using a lifecycle policy.

Explanation

To meet the compliance requirements for storing highly sensitive patient records and fulfill the service level agreement (SLA) for record retrieval, the recommended solution is as follows:

  1. Use Amazon S3 for storing the patient records: Amazon S3 provides durability, scalability, and high availability, and objects in S3 Standard can be retrieved immediately, which satisfies the SLA for the first 30 days.
  2. Enable cross-Region replication on the bucket: replication automatically maintains a second copy of every record in a bucket in another AWS Region, which satisfies the compliance requirement to store multiple copies in different locations.
  3. Configure a lifecycle policy: after 30 days, transition the data from S3 Standard to Amazon S3 Glacier (Flexible Retrieval). Standard retrievals typically complete in 3-5 hours and expedited retrievals in 1-5 minutes, so records can still be provided within the 4-hour SLA at a much lower storage cost for the remainder of the 7-year retention period.

Amazon S3 Glacier Deep Archive (options C and D) offers the lowest storage cost, but its standard retrieval time is around 12 hours, which does not meet the 4-hour retrieval SLA.

Cross-origin resource sharing (options B and D) only controls which web origins a browser may use to request objects from a bucket; it does not create copies in different locations and has no effect on retrieval times, so it does not address the requirements.

Therefore, option A is the most suitable recommendation for this healthcare company.
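
A minimal boto3 sketch of the lifecycle part of this design is shown below; the bucket name and rule ID are placeholders, and cross-Region replication would additionally require versioning plus a replication configuration on the bucket.

```python
import boto3

s3 = boto3.client("s3")

# Transition every object to S3 Glacier Flexible Retrieval 30 days after creation
# and expire it after the 7-year retention period. Bucket name and rule ID are
# placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="patient-records-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},      # apply to all objects
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 7 * 365},
            }
        ]
    },
)
```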

Question 1100

Exam Question

A solutions architect is helping a developer design a new ecommerce shopping cart application using AWS services. The developer is unsure of the current database schema and expects to make changes as the ecommerce site grows. The solution needs to be highly resilient and capable of automatically scaling read and write capacity.

Which database solution meets these requirements?

A. Amazon Aurora PostgreSQL
B. Amazon DynamoDB with on-demand enabled
C. Amazon DynamoDB with DynamoDB Streams enabled
D. Amazon SQS and Amazon Aurora PostgreSQL

Correct Answer

B. Amazon DynamoDB with on-demand enabled

Explanation

In this scenario, the key requirements are a highly resilient database solution with the ability to automatically scale read and write capacity as the ecommerce site grows. Based on these requirements, Amazon DynamoDB with on-demand capacity mode is the most suitable choice.

Amazon DynamoDB is a fully managed NoSQL database service that provides high availability, durability, and automatic scaling. With on-demand capacity mode, the database automatically scales its read and write capacity to accommodate the application’s workload without the need for explicit capacity planning or provisioning.

By using DynamoDB with on-demand capacity mode, the developer can focus on building the ecommerce shopping cart application without worrying about managing the underlying database infrastructure. As the application grows, DynamoDB will automatically scale its capacity to handle increased traffic and data storage.
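
A minimal boto3 sketch of creating such a table in on-demand mode is shown below; the table name and key schema are placeholders, since the question does not define the data model.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# PAY_PER_REQUEST (on-demand) billing mode: no read/write capacity to provision;
# the table scales automatically with traffic. Table name and keys are placeholders,
# chosen only to illustrate a shopping-cart-style item layout.
dynamodb.create_table(
    TableName="shopping-cart",
    AttributeDefinitions=[
        {"AttributeName": "cart_id", "AttributeType": "S"},
        {"AttributeName": "item_id", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "cart_id", "KeyType": "HASH"},
        {"AttributeName": "item_id", "KeyType": "RANGE"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
```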

Amazon Aurora PostgreSQL (option A) is a highly performant relational database service, but it may not be the best fit for a scenario where the database schema is expected to change frequently. Modifying the schema in Aurora may require more effort and planning compared to a NoSQL database like DynamoDB.

Enabling DynamoDB Streams (option C) provides a changelog of database events, which can be useful for building real-time data processing or replication workflows. However, it does not directly address the requirements of automatic scaling and schema flexibility.

Amazon SQS and Amazon Aurora PostgreSQL (option D) represent a combination of a message queue service (SQS) and a relational database (Aurora PostgreSQL). While this combination can be used to build scalable and resilient systems, it introduces additional complexity compared to using a single database service like DynamoDB that can handle both scalability and resiliency requirements out of the box.

Therefore, the most suitable choice in this scenario is option B: Amazon DynamoDB with on-demand enabled.