The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam question and answer (Q&A) dumps are available free. They can help you pass the AWS Certified Solutions Architect – Associate SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.
Table of Contents
- Question 1181
  - Exam Question
  - Correct Answer
  - Explanation
- Question 1182
  - Exam Question
  - Correct Answer
  - Explanation
- Question 1183
  - Exam Question
  - Correct Answer
  - Explanation
- Question 1184
  - Exam Question
  - Correct Answer
  - Explanation
- Question 1185
  - Exam Question
  - Correct Answer
  - Explanation
- Question 1186
  - Exam Question
  - Correct Answer
  - Explanation
- Question 1187
  - Exam Question
  - Correct Answer
  - Explanation
- Question 1188
  - Exam Question
  - Correct Answer
  - Explanation
- Question 1189
  - Exam Question
  - Correct Answer
  - Explanation
- Question 1190
  - Exam Question
  - Correct Answer
  - Explanation
Question 1181
Exam Question
A media company has an application that tracks user clicks on its websites and performs analytics to provide near-real-time recommendations. The application has a fleet of Amazon EC2 instances that receive data from the websites and send the data to an Amazon RDS DB instance. Another fleet of EC2 instances hosts the portion of the application that is continuously checking changes in the database and executing SQL queries to provide recommendations. Management has requested a redesign to decouple the infrastructure. The solution must ensure that data analysts are writing SQL to analyze the data only. No data can be lost during the deployment.
What should a solutions architect recommend?
A. Use Amazon Kinesis Data Streams to capture the data from the websites, Kinesis Data Firehose to persist the data on Amazon S3, and Amazon Athena to query the data.
B. Use Amazon Kinesis Data Streams to capture the data from the websites. Kinesis Data Analytics to query the data, and Kinesis Data Firehose to persist the data on Amazon S3.
C. Use Amazon Simple Queue Service (Amazon SQS) to capture the data from the websites, keep the fleet of EC2 instances, and change to a bigger instance type in the Auto Scaling group configuration.
D. Use Amazon Simple Notification Service (Amazon SNS) to receive data from the websites and proxy the messages to AWS Lambda functions that execute the queries and persist the data. Change Amazon RDS to Amazon Aurora Serverless to persist the data.
Correct Answer
B. Use Amazon Kinesis Data Streams to capture the data from the websites. Kinesis Data Analytics to query the data, and Kinesis Data Firehose to persist the data on Amazon S3.
Explanation
To decouple the infrastructure and ensure data analysts can focus on writing SQL for data analysis, while also ensuring no data loss during the deployment, a solutions architect should recommend:
B. Use Amazon Kinesis Data Streams to capture the data from the websites. Use Kinesis Data Analytics to query the data, and use Kinesis Data Firehose to persist the data on Amazon S3.
- Amazon Kinesis Data Streams is a reliable and scalable data streaming service that can capture and store website data in real-time. It allows decoupling the data ingestion from the analytics processing.
- The fleet of EC2 instances can send the data to Kinesis Data Streams, which acts as a buffer and ensures no data is lost during spikes in traffic or deployment activities.
- Amazon Kinesis Data Analytics can be used to query and analyze the data in real-time. It supports SQL-like syntax and provides the ability to transform and aggregate the streaming data.
- Kinesis Data Firehose can be used to persist the data from Kinesis Data Streams to Amazon S3, ensuring durability and providing a scalable and cost-effective storage solution.
- By using Kinesis Data Analytics and Kinesis Data Firehose, the application no longer requires a fleet of EC2 instances to continuously check for changes in the database and execute SQL queries. This reduces the complexity and maintenance overhead of managing EC2 instances.
- Data analysts can directly query and analyze the data stored in Amazon S3 using tools like Amazon Athena, which provides interactive querying and analysis capabilities with SQL.
- This architecture decouples the data ingestion, processing, and analysis components, allowing for more flexibility, scalability, and separation of concerns.
Option A (Use Amazon Kinesis Data Streams, Kinesis Data Firehose, and Amazon Athena) is similar to the recommended approach, but Athena queries data only after it lands in Amazon S3, so it cannot deliver the near-real-time recommendations that Kinesis Data Analytics provides by querying the stream itself.
Option C (Use Amazon Simple Queue Service (Amazon SQS) to capture the data from the websites) does not provide real-time processing capabilities or a built-in analytics solution. It focuses on messaging and decoupling components, but it does not address the need for SQL-based data analysis.
Option D (Use Amazon Simple Notification Service (Amazon SNS) and AWS Lambda with Amazon Aurora Serverless) does not provide real-time analytics capabilities and does not leverage the strengths of Kinesis for data streaming and processing.
Therefore, the recommended solution is to use Amazon Kinesis Data Streams, Kinesis Data Analytics, and Kinesis Data Firehose (Option B).
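As a rough illustration of the ingestion side of Option B, the sketch below shows how a click event might be written to a Kinesis data stream with boto3. The stream name, region, and event fields are hypothetical placeholders, not values given in the question.

```python
import json
import boto3

# Hypothetical stream name and region; adjust to your environment.
kinesis = boto3.client("kinesis", region_name="us-east-1")

def send_click_event(user_id: str, page: str) -> None:
    """Write one click event to the stream that feeds the analytics pipeline."""
    event = {"user_id": user_id, "page": page}
    kinesis.put_record(
        StreamName="clickstream",          # assumed stream name
        Data=json.dumps(event).encode(),   # Kinesis records are raw bytes
        PartitionKey=user_id,              # spreads events across shards
    )

send_click_event("user-123", "/products/42")
```

Because the stream buffers records durably for at least 24 hours, consumers such as Kinesis Data Analytics and Kinesis Data Firehose can be deployed or restarted without losing data.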
Question 1182
Exam Question
A company has a Microsoft Windows-based application that must be migrated to AWS. This application requires the use of a shared Windows file system attached to multiple Amazon EC2 Windows instances.
What should a solutions architect do to accomplish this?
A. Configure a volume using Amazon EFS. Mount the EFS volume to each Windows instance.
B. Configure AWS Storage Gateway in Volume Gateway mode. Mount the volume to each Windows Instance.
C. Configure Amazon FSx for Windows File Server. Mount the Amazon FSx volume to each Windows Instance.
D. Configure an Amazon EBS volume with the required size. Attach each EC2 instance to the volume. Mount the file system within the volume to each Windows instance.
Correct Answer
C. Configure Amazon FSx for Windows File Server. Mount the Amazon FSx volume to each Windows Instance.
Explanation
To accomplish the requirement of having a shared Windows file system attached to multiple Amazon EC2 Windows instances, a solution architect should:
C. Configure Amazon FSx for Windows File Server. Mount the Amazon FSx volume to each Windows Instance.
- Amazon FSx for Windows File Server is a fully managed native Windows file system that is accessible from Windows instances in the AWS environment.
- By configuring Amazon FSx for Windows File Server, you can create a shared file system that can be mounted simultaneously by multiple EC2 Windows instances.
- Amazon FSx provides a highly available and scalable file system with support for Windows file sharing features such as Active Directory integration and NTFS file and folder permissions.
- Each Windows instance can mount the Amazon FSx file system using standard Windows file sharing protocols (SMB).
- This configuration allows multiple EC2 Windows instances to access the shared file system, enabling collaboration and data sharing among the instances running the application.
Option A (Configure a volume using Amazon EFS) is not suitable for Windows instances as EFS is primarily designed for Linux-based instances and does not natively support Windows file sharing.
Option B (Configure AWS Storage Gateway in Volume Gateway mode) is used for hybrid cloud scenarios and does not provide a native shared Windows file system.
Option D (Configure an Amazon EBS volume with the required size) is not suitable for a shared file system as EBS volumes can only be attached to a single EC2 instance at a time.
Therefore, the recommended solution is to configure Amazon FSx for Windows File Server (Option C) to create a shared Windows file system accessible by multiple EC2 Windows instances.
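For context, here is a minimal boto3 sketch of provisioning an FSx for Windows File Server file system. The subnet ID, directory ID, and sizing values are hypothetical placeholders; a real deployment would also need a security group allowing SMB (TCP 445).

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# All IDs and sizes below are hypothetical placeholders.
response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,                      # GiB of SSD storage
    SubnetIds=["subnet-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",  # AWS Managed Microsoft AD
        "DeploymentType": "SINGLE_AZ_2",
        "ThroughputCapacity": 32,             # MB/s
    },
)
print(response["FileSystem"]["DNSName"])      # host for the SMB UNC path
```

Each Windows instance would then map the share with a standard UNC path, for example `net use Z: \\<DNSName>\share`.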
Question 1183
Exam Question
A company has an application running on Amazon EC2 instances in a private subnet. The application needs to store and retrieve data in Amazon S3. To reduce costs, the company wants to configure its AWS resources in a cost-effective manner.
How should the company accomplish this?
A. Deploy a NAT gateway to access the S3 buckets.
B. Deploy AWS Storage Gateway to access the S3 buckets.
C. Deploy an S3 gateway endpoint to access the S3 buckets.
D. Deploy an S3 interface endpoint to access the S3 buckets.
Correct Answer
C. Deploy an S3 gateway endpoint to access the S3 buckets.
Explanation
To reduce costs and efficiently access Amazon S3 buckets from EC2 instances in a private subnet, the company should:
C. Deploy an S3 gateway endpoint to access the S3 buckets.
- A gateway endpoint for Amazon S3 allows EC2 instances in a VPC to access S3 buckets in the same AWS Region without going through the public internet.
- Gateway endpoints are implemented as entries in the route tables of the subnets, so instances in the private subnet can reach S3 without a NAT device.
- There is no additional charge for using gateway endpoints, which makes this the most cost-effective option.
- Keeping the traffic on the AWS network also improves security by avoiding exposure to the public internet.
Option A (Deploy a NAT gateway) would work but is not cost-effective: NAT gateways incur hourly charges plus per-GB data processing charges.
Option B (Deploy AWS Storage Gateway) is not necessary for accessing S3 buckets from EC2 instances, as Storage Gateway is primarily used for integrating on-premises applications with AWS storage services.
Option D (Deploy an S3 interface endpoint) would also provide private connectivity through AWS PrivateLink, but interface endpoints incur hourly and data processing charges, making them less cost-effective than a gateway endpoint for this use case.
Therefore, the recommended solution is to deploy an S3 gateway endpoint (Option C) to enable cost-effective, private access to S3 buckets from EC2 instances in a private subnet.
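A minimal boto3 sketch of creating such a gateway endpoint follows; the VPC ID, route table ID, and Region are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# VPC and route table IDs are hypothetical placeholders.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # route table of the private subnet
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])
```

Once the endpoint is created, S3 traffic from the associated subnets is routed through it automatically; no changes to the application code are required.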
Question 1184
Exam Question
A company is using Amazon EC2 to run its big data analytics workloads. These variable workloads run each night, and it is critical they finish by the start of business the following day. A solutions architect has been tasked with designing the MOST cost-effective solution.
Which solution will accomplish this?
A. Spot Fleet
B. Spot Instances
C. Reserved Instances
D. On-Demand Instances
Correct Answer
C. Reserved Instances.
Explanation
To accomplish the goal of running big data analytics workloads in a cost-effective manner and ensuring they finish by the start of business the following day, the most suitable solution is:
C. Reserved Instances.
- Reserved Instances (RIs) are the most cost-effective option for running workloads with predictable usage patterns and long-term commitments.
- By purchasing Reserved Instances, the company can obtain significant cost savings compared to On-Demand Instances.
- Reserved Instances provide capacity reservation, ensuring that the required instances are available when needed, avoiding potential resource constraints during peak usage periods.
- Reserved Instances can be purchased with specific instance types, sizes, and tenancies to match the requirements of the big data analytics workloads.
- Reserved Instances provide a stable and consistent pricing model, allowing for better budgeting and cost planning.
- The company can choose between Standard Reserved Instances, which offer the highest discount, and Convertible Reserved Instances, which allow changing instance attributes over time at a slightly lower discount rate.
- By selecting the appropriate Reserved Instances offerings, the company can achieve a balance between cost savings and capacity availability, ensuring that the workloads finish before the start of business the following day.
Option A (Spot Fleet) and Option B (Spot Instances) are not suitable for ensuring the completion of critical workloads within a specific time frame. Spot capacity can be reclaimed by EC2 with only a two-minute interruption notice, making it unsuitable for time-sensitive workloads.
Option D (On-Demand Instances) provides flexibility and convenience but results in higher costs than Reserved Instances for a workload that runs every night.
Therefore, to meet the requirements of cost-effectiveness and timely completion of the workloads, the recommended solution is to utilize Reserved Instances (Option C).
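As a sketch of how such a purchase could be automated with boto3, the snippet below finds and buys a Standard, All Upfront offering. The instance type, platform, and count are assumptions for illustration, not values from the question.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find a Standard, All Upfront offering for an assumed instance type.
offerings = ec2.describe_reserved_instances_offerings(
    InstanceType="r5.2xlarge",            # assumed analytics instance type
    ProductDescription="Linux/UNIX",
    OfferingClass="standard",
    OfferingType="All Upfront",
    MaxResults=1,
)

offering_id = offerings["ReservedInstancesOfferings"][0]["ReservedInstancesOfferingId"]
purchase = ec2.purchase_reserved_instances_offering(
    ReservedInstancesOfferingId=offering_id,
    InstanceCount=4,                      # assumed fleet size
)
print(purchase["ReservedInstancesId"])
```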
Question 1185
Exam Question
A company is preparing to migrate its on-premises application to AWS. The application consists of application servers and a Microsoft SQL Server database. The database cannot be migrated to a different engine because SQL Server features are used in the application's .NET code. The company wants to attain the greatest availability possible while minimizing operational and management overhead.
What should a solutions architect do to accomplish this?
A. Install SQL Server on Amazon EC2 in a Multi-AZ deployment.
B. Migrate the data to Amazon RDS for SQL Server in a Multi-AZ deployment.
C. Deploy the database on Amazon RDS for SQL Server with Multi-AZ Replicas.
D. Migrate the data to Amazon RDS for SQL Server in a cross-Region Multi-AZ deployment.
Correct Answer
B. Migrate the data to Amazon RDS for SQL Server in a Multi-AZ deployment.
Explanation
To attain the greatest availability while minimizing operational and management overhead for a Microsoft SQL Server database in an AWS migration scenario, the most suitable solution is:
B. Migrate the data to Amazon RDS for SQL Server in a Multi-AZ deployment.
- Amazon RDS for SQL Server provides managed database services, reducing operational overhead for the company.
- Multi-AZ deployment in Amazon RDS automatically provisions and maintains a synchronous standby replica of the database in a different Availability Zone (AZ).
- In the event of a planned or unplanned outage in the primary AZ, Amazon RDS automatically fails over to the standby replica, minimizing downtime and providing high availability.
- The Multi-AZ configuration handles the replication and failover processes, reducing the management burden on the company’s IT team.
- With Multi-AZ, the company benefits from automatic failover and improved availability without the need to manage complex replication setups or manual failover processes.
- The application can continue to use SQL Server features and .NET code, as the migration is performed to Amazon RDS for SQL Server, which runs the same SQL Server engine and is compatible with existing code and features.
- The company can leverage RDS features such as automated backups, automated software patching, and scalability options.
- Deploying the database in a Multi-AZ configuration provides protection against infrastructure failures, ensuring that the database remains highly available with minimal downtime.
Option A (Install SQL Server on Amazon EC2 in a Multi-AZ deployment) would require manual setup and management of the infrastructure, including high availability configurations and maintenance tasks.
Option C (Deploy the database on Amazon RDS for SQL Server with Multi-AZ Replicas) refers to a deployment option that does not exist: RDS Multi-AZ for SQL Server maintains a synchronous standby instance for failover, not readable replicas.
Option D (Migrate the data to Amazon RDS for SQL Server in a cross-Region Multi-AZ deployment) adds complexity and higher costs, as it involves replicating data across multiple AWS Regions.
Therefore, to achieve the greatest availability while minimizing operational and management overhead, the recommended solution is to migrate the data to Amazon RDS for SQL Server in a Multi-AZ deployment (Option B).
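A minimal boto3 sketch of creating such a Multi-AZ RDS for SQL Server instance follows. The identifier, instance class, storage size, and credentials are hypothetical placeholders; in practice, credentials would come from AWS Secrets Manager rather than being hard-coded.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Identifiers, sizing, and credentials below are hypothetical placeholders.
rds.create_db_instance(
    DBInstanceIdentifier="app-sqlserver",
    Engine="sqlserver-se",               # SQL Server Standard Edition
    LicenseModel="license-included",
    DBInstanceClass="db.m5.xlarge",
    AllocatedStorage=200,                # GiB
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",     # use Secrets Manager in practice
    MultiAZ=True,                        # synchronous standby in another AZ
)
```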
Question 1186
Exam Question
A company has several business systems that require access to data stored in a file share. The business systems will access the file share using the Server Message Block (SMB) protocol. The file share solution should be accessible from both of the company’s legacy on-premises environment and with AWS.
Which services meet the business requirements? (Choose two.)
A. Amazon EBS
B. Amazon EFS
C. Amazon FSx for Windows
D. Amazon S3
E. AWS Storage Gateway file gateway
Correct Answer
C. Amazon FSx for Windows
E. AWS Storage Gateway file gateway
Explanation
The services that meet the business requirements of providing a file share accessible over the SMB protocol from both the company’s legacy on-premises environment and AWS are:
C. Amazon FSx for Windows
E. AWS Storage Gateway file gateway
Amazon FSx for Windows:
- Amazon FSx for Windows File Server is a fully managed, native Windows file system with full support for the SMB protocol.
- It can be accessed by EC2 instances within AWS and, over AWS Direct Connect or a VPN connection, by on-premises clients as well.
- It supports Windows features such as Active Directory integration and NTFS file and folder permissions.
AWS Storage Gateway file gateway:
- AWS Storage Gateway is a hybrid cloud storage service that allows on-premises applications to seamlessly use AWS cloud storage.
- The file gateway configuration of AWS Storage Gateway supports the SMB protocol, allowing legacy on-premises systems to access the file share using SMB.
- The file gateway enables you to create an SMB file share in AWS that is backed by an S3 bucket. The data in the S3 bucket is accessible using the SMB protocol.
Amazon EBS (Option A) provides block-level storage for EC2 instances and does not support the SMB protocol.
Amazon EFS (Option B) is a fully managed file system, but it supports only the NFS protocol, not SMB, and is designed for Linux-based clients, so the Windows-based business systems could not use it as required.
Amazon S3 (Option D) is an object storage service and does not support the SMB protocol. It uses RESTful APIs for data access and is not suitable for accessing data using SMB.
Therefore, the recommended services that meet the requirements are Amazon FSx for Windows (Option C) and AWS Storage Gateway file gateway (Option E).
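To illustrate the Storage Gateway side, the boto3 sketch below creates an SMB file share on an already-activated file gateway, backed by an S3 bucket. All ARNs are hypothetical placeholders, and the gateway is assumed to be joined to the company's Active Directory domain.

```python
import uuid
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")

# ARNs below are hypothetical placeholders for an activated file gateway,
# an IAM role that can write to the bucket, and the backing S3 bucket.
share = sgw.create_smb_file_share(
    ClientToken=str(uuid.uuid4()),       # idempotency token
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678",
    Role="arn:aws:iam::123456789012:role/FileGatewayS3Access",
    LocationARN="arn:aws:s3:::company-file-share",
    Authentication="ActiveDirectory",    # SMB auth for domain-joined clients
)
print(share["FileShareARN"])
```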
Question 1187
Exam Question
A company is preparing to store confidential data in Amazon S3. For compliance reasons, the data must be encrypted at rest. Encryption key usage must be logged for auditing purposes. Keys must be rotated every year.
Which solution meets these requirements and is the MOST operationally efficient?
A. Server-side encryption with customer-provided keys (SSE-C)
B. Server-side encryption with Amazon S3 managed keys (SSE-S3)
C. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with manual rotation
D. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with automatic rotation
Correct Answer
D. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with automatic rotation
Explanation
The solution that meets the given requirements and is the most operationally efficient is:
D. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with automatic rotation.
Server-side encryption with customer-provided keys (SSE-C) (Option A) lets you supply your own encryption key with each request, but Amazon S3 does not store or manage the key, so there is no built-in key rotation and no key usage logging. Managing and rotating the keys manually is operationally cumbersome and error-prone.
Server-side encryption with Amazon S3 managed keys (SSE-S3) (Option B) encrypts data at rest, but the keys are managed entirely by S3: key usage is not logged for auditing, and rotation is not under the customer's control, so it does not meet the requirements.
Server-side encryption with AWS KMS (SSE-KMS) CMKs with manual rotation (Option C) can satisfy the compliance requirements, but it requires someone to rotate the key every year, adding recurring operational work and the risk of a missed rotation. It is therefore not the most operationally efficient option.
Server-side encryption with AWS KMS (SSE-KMS) CMKs with automatic rotation (Option D) is the recommended solution. AWS KMS logs every use of a customer managed key to AWS CloudTrail, satisfying the auditing requirement, and automatic key rotation rotates the key material every year without any manual effort.
Therefore, option D, server-side encryption with AWS KMS (SSE-KMS) customer master keys with automatic rotation, is the most operationally efficient solution that meets all the given requirements.
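A minimal boto3 sketch of this setup follows: create a customer managed key, enable automatic rotation, and upload an object encrypted with it. The bucket and object names are hypothetical placeholders.

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

# Create a customer managed key and turn on automatic annual rotation.
key = kms.create_key(Description="S3 confidential data key")
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Upload an object encrypted with that key; bucket name is a placeholder.
s3.put_object(
    Bucket="confidential-data-bucket",
    Key="reports/q1.csv",
    Body=b"example,data\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=key_id,                  # each use is logged in CloudTrail
)
```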
Question 1188
Exam Question
A company has enabled AWS CloudTrail logs to deliver log files to an Amazon S3 bucket for each of its developer accounts. The company has created a central AWS account for streamlining management and audit reviews. An internal auditor needs to access the CloudTrail logs, yet access needs to be restricted for all developer account users. The solution must be secure and optimized.
How should a solutions architect meet these requirements?
A. Configure an AWS Lambda function in each developer account to copy the log files to the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket.
B. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM user in the central account for the auditor. Attach an IAM policy providing full permissions to the bucket.
C. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket.
D. Configure an AWS Lambda function in the central account to copy the log files from the S3 bucket in each developer account. Create an IAM user in the central account for the auditor. Attach an IAM policy providing full permissions to the bucket.
Correct Answer
C. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket.
Explanation
To meet the requirements of providing restricted access to CloudTrail logs for the internal auditor while maintaining security and optimization, the following solution should be implemented:
C. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket.
Option A suggests configuring an AWS Lambda function in each developer account to copy the log files to the central account. While this would technically work, it introduces additional complexity and management overhead.
Option B suggests configuring CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account and creating an IAM user in the central account for the auditor with full permissions to the bucket. However, granting full permissions to the auditor is not necessary and can pose a security risk.
Option D suggests configuring an AWS Lambda function in the central account to copy the log files from the S3 bucket in each developer account. This introduces unnecessary complexity, requires cross-account read permissions in every developer account, and grants the auditor's IAM user full bucket permissions, which is broader access than needed.
Option C is the recommended solution. By configuring CloudTrail in each developer account to deliver the log files to an S3 bucket in the central account, the logs can be consolidated in a single location. Then, a dedicated IAM role can be created in the central account for the auditor, providing read-only permissions to the bucket. This ensures that the auditor can access the CloudTrail logs without granting excessive permissions or introducing unnecessary complexity.
Therefore, option C is the correct solution that meets all the requirements of secure and optimized access to CloudTrail logs for the internal auditor.
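To make the read-only access concrete, the boto3 sketch below creates an IAM policy that could be attached to the auditor's role in the central account. The bucket name is a hypothetical placeholder.

```python
import json
import boto3

iam = boto3.client("iam")

# Bucket name below is a hypothetical placeholder.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::central-cloudtrail-logs",
                "arn:aws:s3:::central-cloudtrail-logs/*",
            ],
        }
    ],
}

iam.create_policy(
    PolicyName="AuditorCloudTrailReadOnly",
    PolicyDocument=json.dumps(read_only_policy),
)
# The policy would then be attached to the auditor's role in the central
# account, e.g., with iam.attach_role_policy.
```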
Question 1189
Exam Question
An online shopping application accesses an Amazon RDS Multi-AZ DB instance. Database performance is slowing down the application. After upgrading to the next-generation instance type, there was no significant performance improvement. Analysis shows approximately 700 IOPS are sustained, common queries run for long durations, and memory utilization is high.
Which application change should a solutions architect recommend to resolve these issues?
A. Migrate the RDS instance to an Amazon Redshift cluster and enable weekly garbage collection.
B. Separate the long-running queries into a new Multi-AZ RDS database and modify the application to query whichever database is needed.
C. Deploy a two-node Amazon ElastiCache cluster and modify the application to query the cluster first and query the database only if needed.
D. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue for common queries and query it first and query the database only if needed.
Correct Answer
C. Deploy a two-node Amazon ElastiCache cluster and modify the application to query the cluster first and query the database only if needed.
Explanation
Based on the provided information, the recommended application change to resolve the issues with database performance would be:
C. Deploy a two-node Amazon ElastiCache cluster and modify the application to query the cluster first and query the database only if needed.
Upgrading the RDS instance to the next-generation instance type did not significantly improve performance, indicating that the performance bottleneck may not be solely related to the compute capacity of the RDS instance.
The sustained 700 IOPS and high memory utilization suggest that the database is experiencing high I/O and memory pressure. To alleviate this, introducing a cache layer with Amazon ElastiCache can help offload some of the database workload.
By deploying a two-node Amazon ElastiCache cluster, the application can query the cache cluster first for frequently accessed data. This reduces the number of queries hitting the database, improving performance and reducing the overall load on the RDS Multi-AZ DB instance.
However, it’s important to note that not all types of queries can be effectively cached. Complex or infrequently accessed queries may still need to be executed directly against the database. Therefore, it is necessary to modify the application to query the cache cluster first and fall back to the database only if needed.
Therefore, option C is the recommended application change to improve performance by deploying an Amazon ElastiCache cluster and modifying the application’s query strategy.
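The cache-aside pattern described above might look like the following sketch using redis-py against an ElastiCache (Redis) endpoint. The endpoint, key naming, TTL, and the `query_database` helper are all hypothetical placeholders standing in for the application's existing query logic.

```python
import json
import redis

# Hypothetical ElastiCache (Redis) endpoint; cluster mode disabled assumed.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_recommendations(user_id: str) -> dict:
    """Cache-aside: try ElastiCache first, fall back to the database."""
    key = f"recs:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)              # cache hit: no database query

    result = query_database(user_id)           # hypothetical RDS query helper
    cache.setex(key, 300, json.dumps(result))  # cache the result for 5 minutes
    return result

def query_database(user_id: str) -> dict:
    # Placeholder for the existing long-running SQL query against RDS.
    return {"user_id": user_id, "items": []}
```

The TTL controls how stale a recommendation can get; shorter TTLs keep results fresher at the cost of more database queries.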
Question 1190
Exam Question
A company running an on-premises application is migrating the application to AWS to increase its elasticity and availability. The current architecture uses a Microsoft SQL Server database with heavy read activity. The company wants to explore alternate database options and migrate database engines, if needed. Every 4 hours, the development team does a full copy of the production database to populate a test database. During this period, users experience latency.
What should a solutions architect recommend as a replacement database?
A. Use Amazon Aurora with Multi-AZ Aurora Replicas and restore from mysqldump for the test database.
B. Use Amazon Aurora with Multi-AZ Aurora Replicas and restore snapshots from Amazon RDS for the test database.
C. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas, and use the standby instance for the test database.
D. Use Amazon RDS for SQL Server with a Multi-AZ deployment and read replicas, and restore snapshots from RDS for the test database.
Correct Answer
B. Use Amazon Aurora with Multi-AZ Aurora Replicas and restore snapshots from Amazon RDS for the test database.
Explanation
Based on the given requirements, the recommended replacement database option would be:
B. Use Amazon Aurora with Multi-AZ Aurora Replicas and restore snapshots from Amazon RDS for the test database.
- The company wants to explore alternate database options and migrate database engines if needed. Amazon Aurora is a fully managed relational database service compatible with MySQL and PostgreSQL. It provides high performance, scalability, and durability.
- Heavy read activity suggests the need for a database engine that can handle high read throughput efficiently. Amazon Aurora is designed to deliver fast read performance, making it a suitable choice in this scenario.
- The requirement for a full copy of the production database every 4 hours to populate the test database can be addressed using Amazon Aurora’s snapshot feature. Snapshots allow you to create a point-in-time copy of the database, which can be restored to create the test database without impacting production performance.
- Multi-AZ deployment in Amazon Aurora provides high availability and automatic failover in the event of a database instance failure. Aurora Replicas can be used to offload read traffic and improve overall performance, ensuring that production performance is not affected during the test database population.
Option A would populate the test database by restoring from a mysqldump file, which is slow for a full database copy and still places load on the production database while the dump runs. Option C suggests Amazon RDS for MySQL with a Multi-AZ deployment and read replicas, but the Multi-AZ standby instance cannot be queried directly, so it cannot serve as the test database.
Option D suggests using Amazon RDS for SQL Server, but the requirement is to explore alternate database options. Switching to a different database engine may involve additional complexities and potential incompatibilities.
Therefore, option B is the recommended replacement database option, utilizing Amazon Aurora with Multi-AZ Aurora Replicas and restoring snapshots from Amazon RDS for the test database.
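A boto3 sketch of the snapshot-and-restore workflow for the test database follows. The cluster and snapshot identifiers are hypothetical placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Cluster and snapshot identifiers below are hypothetical placeholders.
rds.create_db_cluster_snapshot(
    DBClusterSnapshotIdentifier="prod-aurora-test-copy",
    DBClusterIdentifier="prod-aurora",
)
rds.get_waiter("db_cluster_snapshot_available").wait(
    DBClusterSnapshotIdentifier="prod-aurora-test-copy",
)

# Restoring creates a new cluster without touching production I/O.
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="test-aurora",
    SnapshotIdentifier="prod-aurora-test-copy",
    Engine="aurora-mysql",
)
# A DB instance must still be added to the restored cluster before it can
# serve queries (rds.create_db_instance with DBClusterIdentifier="test-aurora").
```

Because the restore reads from the snapshot rather than the live cluster, the 4-hourly test refresh no longer causes latency for production users.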