AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 18

The latest AWS Certified Solutions Architect – Associate SAA-C03 actual practice exam questions and answers (Q&A) are available free of charge to help you prepare for and pass the AWS Certified Solutions Architect – Associate SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.

Question 891

Exam Question

A company’s security policy requires that all AWS API activity in its AWS accounts be recorded for periodic auditing. The company needs to ensure that AWS CloudTrail is enabled on all of its current and future AWS accounts using AWS Organizations.

Which solution is MOST secure?

A. At the organization’s root, define and attach a service control policy (SCP) that permits enabling CloudTrail only.
B. Create IAM groups in the organization’s management account as needed. Define and attach an IAM policy to the groups that prevents users from disabling CloudTrail.
C. Organize accounts into organizational units (OUs). At the organization’s root, define and attach a service control policy (SCP) that prevents users from disabling CloudTrail.
D. Add all existing accounts under the organization’s root. Define and attach a service control policy (SCP) to every account that prevents users from disabling CloudTrail.

Correct Answer

C. Organize accounts into organizational units (OUs). At the organization’s root, define and attach a service control policy (SCP) that prevents users from disabling CloudTrail.

Explanation

The most secure solution for ensuring that AWS CloudTrail is enabled on all current and future AWS accounts in AWS Organizations is:

C. Organize accounts into organizational units (OUs). At the organization’s root, define and attach a service control policy (SCP) that prevents users from disabling CloudTrail.

An SCP attached at the organization’s root is inherited by every OU and account in the organization, including accounts that are added in the future, so the restriction is enforced automatically and consistently. Because SCPs apply even to administrators in member accounts, no user can disable or delete the CloudTrail trail. Attaching an SCP to each account individually (option D) protects only the existing accounts and must be repeated manually for every new account, option A is flawed because SCPs filter permissions rather than grant them, and IAM policies (option B) do not constrain the root user or users outside the targeted groups.
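
As a concrete illustration, here is a minimal boto3 sketch (assuming AWS Organizations with SCPs enabled and administrator credentials in the management account; the policy name and the exact set of denied CloudTrail actions are illustrative) that creates such a deny policy and attaches it at the root:

```python
import json
import boto3

org = boto3.client("organizations")

# Deny the API calls that would disable or delete CloudTrail trails.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "cloudtrail:StopLogging",
                "cloudtrail:DeleteTrail",
            ],
            "Resource": "*",
        }
    ],
}

# Create the SCP (the name is illustrative).
policy = org.create_policy(
    Name="DenyDisablingCloudTrail",
    Description="Prevent member accounts from disabling CloudTrail",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach it at the organization root so every current and future
# account inherits the restriction.
root_id = org.list_roots()["Roots"][0]["Id"]
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId=root_id,
)
```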

Question 892

Exam Question

A company has deployed a multiplayer game for mobile devices. The game requires live location tracking of players based on latitude and longitude. The data store for the game must support rapid updates and retrieval of locations. The game uses an Amazon RDS for PostgreSQL DB instance with read replicas to store the location data. During peak usage periods, the database is unable to maintain the performance that is needed for reading and writing updates. The game’s user base is increasing rapidly.

What should a solutions architect do to improve the performance of the data tier?

A. Take a snapshot of the existing DB instance. Restore the snapshot with Multi-AZ enabled.
B. Migrate from Amazon RDS to Amazon Elasticsearch Service (Amazon ES) with Kibana.
C. Deploy Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance. Modify the game to use DAX.
D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use Redis.

Correct Answer

D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use Redis.

Explanation

To improve the performance of the data tier for the multiplayer game’s live location tracking, a solutions architect should choose the following option:

D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use Redis.

Amazon ElastiCache for Redis is an in-memory data store that delivers sub-millisecond reads and writes, and Redis natively supports geospatial data types and commands (such as GEOADD and GEORADIUS) for storing and querying locations by latitude and longitude. Placing a Redis cluster in front of the existing Amazon RDS for PostgreSQL DB instance offloads the high-frequency location updates and lookups from the database, removing the bottleneck during peak usage.

Option C is not viable: Amazon DynamoDB Accelerator (DAX) is a cache purpose-built for Amazon DynamoDB and cannot be deployed in front of an RDS for PostgreSQL DB instance. The Redis-based solution provides the required rapid updates and retrieval of player locations and supports the game’s rapidly growing user base.
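
As a sketch of what the modified game code might look like, the following uses the redis-py client (4.x) against an ElastiCache for Redis endpoint; the endpoint hostname, key name, player IDs, and coordinates are all hypothetical:

```python
import redis

# Connect to the ElastiCache for Redis endpoint (hypothetical host).
r = redis.Redis(
    host="my-redis.abc123.use1.cache.amazonaws.com", port=6379
)

# GEOADD stores each player's longitude/latitude in one sorted-set key.
r.geoadd("player:locations", (-122.4194, 37.7749, "player-1001"))
r.geoadd("player:locations", (-122.4089, 37.7837, "player-1002"))

# GEORADIUS retrieves every player within 5 km of a point,
# returning member names with their coordinates.
nearby = r.georadius(
    "player:locations", -122.4194, 37.7749, 5, unit="km", withcoord=True
)
print(nearby)
```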

Question 893

Exam Question

A company has a Microsoft .NET application that runs on an on-premises Windows Server. The application stores data by using an Oracle Database Standard Edition server. The company is planning a migration to AWS and wants to minimize development changes while moving the application. The AWS application environment should be highly available.

Which combination of actions should the company take to meet these requirements? (Choose two.)

A. Refactor the application as serverless with AWS Lambda functions running .NET Core.
B. Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment.
C. Replatform the application to run on Amazon EC2 with the Amazon Linux Amazon Machine Image (AMI).
D. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Amazon DynamoDB in a Multi-AZ deployment.
E. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Oracle on Amazon RDS in a Multi-AZ deployment.

Correct Answer

B. Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment.
E. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Oracle on Amazon RDS in a Multi-AZ deployment.

Explanation

To minimize development changes while migrating the company’s on-premises .NET application that uses an Oracle Database to AWS and ensure a highly available environment, the following actions should be taken:

B. Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment.

Rehosting the application using AWS Elastic Beanstalk allows for easy migration of the application without making significant changes to the codebase. It provides a managed platform for running the application and ensures high availability by deploying it in a Multi-AZ (Availability Zone) configuration.

E. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Oracle on Amazon RDS in a Multi-AZ deployment.

Migrating the Oracle database to Oracle on Amazon RDS using AWS Database Migration Service (AWS DMS) allows for a seamless transition while minimizing development changes. It enables the company to take advantage of managed database services on AWS and provides high availability through a Multi-AZ deployment of Amazon RDS.

By combining these actions, the company can migrate its .NET application and Oracle database to AWS with minimal development changes while ensuring a highly available environment.
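
For the database side, a minimal boto3 sketch of the AWS DMS replication task is shown below; the endpoint and replication instance ARNs are placeholders, and the schema name in the table mapping is hypothetical:

```python
import json
import boto3

dms = boto3.client("dms")

# Replicate every table in the source schema (schema name is hypothetical).
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "APP", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

# Full load plus ongoing change data capture (CDC) keeps the RDS for
# Oracle target in sync until cutover. All ARNs are placeholders.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-rds-oracle",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
```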

Question 894

Exam Question

A company with a single AWS account runs its internet-facing containerized web application on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The EKS cluster is placed in a private subnet of a VPC. System administrators access the EKS cluster through a bastion host on a public subnet. A new corporate security policy requires the company to avoid the use of bastion hosts. The company also must not allow internet connectivity to the EKS cluster.

Which solution meets these requirements MOST cost-effectively?

A. Set up an AWS Direct Connect connection.
B. Create a transit gateway.
C. Establish a VPN connection.
D. Use AWS Storage Gateway.

Correct Answer

C. Establish a VPN connection.

Explanation

To meet the requirements of avoiding the use of bastion hosts and disallowing internet connectivity to the Amazon EKS cluster while remaining cost-effective, the following solution can be implemented:

C. Establish a VPN connection.

A VPN connection (for example, AWS Client VPN or an AWS Site-to-Site VPN) gives system administrators private network access to the VPC, so they can reach the EKS cluster’s private API endpoint directly without a bastion host and without exposing the cluster to the internet.

Compared with the alternatives, a VPN is the most cost-effective choice: AWS Direct Connect (option A) requires a dedicated physical connection with significant cost and lead time, a transit gateway (option B) interconnects VPCs and on-premises networks but by itself does not provide administrator access and adds hourly and data processing charges, and AWS Storage Gateway (option D) is a hybrid storage service that is unrelated to network access.
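
One possible implementation is AWS Client VPN. The boto3 sketch below assumes server and client certificates already exist in AWS Certificate Manager; all ARNs, CIDR ranges, and subnet IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a Client VPN endpoint that administrators connect to
# instead of a bastion host.
response = ec2.create_client_vpn_endpoint(
    ClientCidrBlock="10.100.0.0/22",
    ServerCertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/SERVER",
    AuthenticationOptions=[
        {
            "Type": "certificate-authentication",
            "MutualAuthentication": {
                "ClientRootCertificateChainArn": (
                    "arn:aws:acm:us-east-1:123456789012:certificate/CLIENT-CA"
                )
            },
        }
    ],
    ConnectionLogOptions={"Enabled": False},
    Description="Private admin access to the EKS VPC",
)
endpoint_id = response["ClientVpnEndpointId"]

# Associate the endpoint with the private subnet that can reach
# the EKS API endpoint, then authorize access to the VPC CIDR.
ec2.associate_client_vpn_target_network(
    ClientVpnEndpointId=endpoint_id,
    SubnetId="subnet-0123456789abcdef0",
)
ec2.authorize_client_vpn_ingress(
    ClientVpnEndpointId=endpoint_id,
    TargetNetworkCidr="10.0.0.0/16",
    AuthorizeAllGroups=True,
)
```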

Question 895

Exam Question

A company is designing an application where users upload small files into Amazon S3. After a user uploads a file, the file requires one-time simple processing to transform the data and save the data in JSON format for later analysis. Each file must be processed as quickly as possible after it is uploaded. Demand will vary. On some days, users will upload a high number of files. On other days, users will upload a few files or no files.

Which solution meets these requirements with the LEAST operational overhead?

A. Configure Amazon EMR to read text files from Amazon S3. Run processing scripts to transform the data. Store the resulting JSON file in an Amazon Aurora DB cluster.
B. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use Amazon EC2 instances to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.
C. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.
D. Configure Amazon EventBridge (Amazon CloudWatch Events) to send an event to Amazon Kinesis Data Streams when a new file is uploaded. Use an AWS Lambda function to consume the event from the stream and process the data. Store the resulting JSON file in the Amazon Aurora DB cluster.

Correct Answer

C. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.

Explanation

This solution offers the least operational overhead.

By configuring Amazon S3 to send an event notification to an Amazon SQS queue, you can decouple the file upload process from the processing step. This ensures that files are processed as quickly as possible without impacting the user experience.

Using an AWS Lambda function to read from the SQS queue and process the data allows for easy scaling and handling varying levels of demand. Lambda functions automatically scale based on the number of messages in the queue, eliminating the need for manual management.

Storing the resulting JSON file in Amazon DynamoDB provides a serverless and scalable database solution that can handle the data for later analysis. DynamoDB can handle high read and write throughput, making it suitable for storing processed data.

Overall, this solution minimizes operational overhead by leveraging managed services and serverless capabilities, allowing the company to focus on the application logic rather than infrastructure management.
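
A minimal sketch of the Lambda handler is shown below; it assumes the SQS queue is configured as the function’s event source, and the DynamoDB table name and the toy “transformation” are hypothetical:

```python
import json
import urllib.parse
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ProcessedFiles")  # table name is hypothetical
s3 = boto3.client("s3")


def handler(event, context):
    """Triggered by SQS; each record wraps an S3 event notification."""
    for sqs_record in event["Records"]:
        s3_event = json.loads(sqs_record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            # Object keys in S3 events are URL-encoded.
            key = urllib.parse.unquote_plus(s3_record["s3"]["object"]["key"])

            # Fetch and transform the uploaded file (the transformation is
            # application-specific; upper-casing stands in for it here).
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            transformed = {"source_key": key, "data": body.decode().upper()}

            # Persist the JSON result for later analysis.
            table.put_item(Item=transformed)
```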

Reference

AWS > Documentation > Amazon Simple Storage Service (S3) > User Guide > Amazon S3 Event Notifications

Question 896

Exam Question

A company has a customer relationship management (CRM) application that stores data in an Amazon RDS DB instance that runs Microsoft SQL Server. The company’s IT staff has administrative access to the database. The database contains sensitive data. The company wants to ensure that the data is not accessible to the IT staff and that only authorized personnel can view the data.

What should a solutions architect do to secure the data?

A. Use client-side encryption with an Amazon RDS managed key.
B. Use client-side encryption with an AWS Key Management Service (AWS KMS) customer managed key.
C. Use Amazon RDS encryption with an AWS Key Management Service (AWS KMS) default encryption key.
D. Use Amazon RDS encryption with an AWS Key Management Service (AWS KMS) customer managed key.

Correct Answer

D. Use Amazon RDS encryption with an AWS Key Management Service (AWS KMS) customer managed key.

Explanation

To secure the data in the CRM application and ensure that it is not accessible to the IT staff, using Amazon RDS encryption with an AWS KMS customer managed key is the recommended approach.

Amazon RDS encryption provides at-rest encryption for the database, ensuring that the data is encrypted when it is stored in the RDS DB instance. This protects the data from unauthorized access in case of physical theft or storage media compromise.

Using an AWS KMS customer managed key allows the company to have full control over the encryption keys used for encrypting the RDS data. Only authorized personnel can access and manage the customer managed key, ensuring that the data is accessible only to authorized individuals.

By combining Amazon RDS encryption with an AWS KMS customer managed key, the company can meet its requirement of securing the data and limiting access to authorized personnel, providing an additional layer of protection to sensitive data in the CRM application.
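
As an illustrative boto3 sketch (the instance identifier, class, and credentials are placeholders; note that storage encryption must be chosen at creation time, so an existing unencrypted instance would be migrated by restoring an encrypted snapshot copy):

```python
import boto3

kms = boto3.client("kms")
rds = boto3.client("rds")

# Create a customer managed key; its key policy (not shown) can
# restrict key administration and use to authorized personnel only.
key = kms.create_key(Description="CRM database encryption key")
key_id = key["KeyMetadata"]["KeyId"]

# Create the SQL Server instance with storage encryption under the
# customer managed key. Identifier and credentials are placeholders.
rds.create_db_instance(
    DBInstanceIdentifier="crm-sqlserver",
    DBInstanceClass="db.m5.large",
    Engine="sqlserver-se",
    LicenseModel="license-included",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    AllocatedStorage=200,
    StorageEncrypted=True,
    KmsKeyId=key_id,
)
```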

Question 897

Exam Question

A company is running a database on Amazon Aurora. The database is idle every evening. An application that performs extensive reads on the database experiences performance issues during morning hours when user traffic spikes. During these peak periods, the application receives timeout errors when reading from the database. The company does not have a dedicated operations team and needs an automated solution to address the performance issues.

Which actions should a solutions architect take to automatically adjust to the increased read load on the database? (Choose two.)

A. Migrate the database to Aurora Serverless.
B. Increase the instance size of the Aurora database.
C. Configure Aurora Auto Scaling with Aurora Replicas.
D. Migrate the database to an Aurora multi-master cluster.
E. Migrate the database to an Amazon RDS for MySQL Multi-AZ deployment.

Correct Answer

A. Migrate the database to Aurora Serverless.
C. Configure Aurora Auto Scaling with Aurora Replicas.

Explanation

To automatically adjust to the increased read load on the database, the following actions should be taken:

A. Migrate the database to Aurora Serverless: Aurora Serverless automatically scales the database capacity based on the workload. It eliminates the need for manual scaling and ensures that the database can handle the peak read load during user traffic spikes.

C. Configure Aurora Auto Scaling with Aurora Replicas: Aurora Auto Scaling automatically adds or removes Aurora Replicas based on the demand. By increasing the number of Aurora Replicas during peak periods, the read workload can be distributed across multiple replicas, improving the read performance and reducing timeout errors.

These two solutions work together to address the performance issues. Aurora Serverless ensures that the database capacity scales dynamically based on the workload, while Aurora Auto Scaling with Aurora Replicas distributes the read workload across multiple replicas to handle the increased load effectively.
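
A minimal boto3 sketch of the Aurora Auto Scaling configuration follows; the cluster name, capacity limits, and CPU target are hypothetical values:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the Aurora cluster's replica count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=8,
)

# Target tracking keeps average reader CPU near 60%, adding replicas
# for the morning spike and removing them when the cluster goes idle.
autoscaling.put_scaling_policy(
    PolicyName="aurora-replica-cpu-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```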

Question 898

Exam Question

A company captures ordered clickstream data from multiple websites and uses batch processing to analyze the data. The company receives 100 million event records, all approximately 1 KB in size, each day. The company loads the data into Amazon Redshift each night, and business analysts consume the data. The company wants to move toward near-real-time data processing for timely insights. The solution should process the streaming data while requiring the least possible operational overhead.

Which combination of AWS services will meet these requirements MOST cost-effectively? (Choose two.)

A. Amazon EC2
B. AWS Batch
C. Amazon Simple Queue Service (Amazon SQS)
D. Amazon Kinesis Data Firehose
E. Amazon Kinesis Data Analytics

Correct Answer

D. Amazon Kinesis Data Firehose
E. Amazon Kinesis Data Analytics

Explanation

To meet the requirements of near-real-time data processing with minimal operational overhead, the following combination of AWS services can be used:

D. Amazon Kinesis Data Firehose: Kinesis Data Firehose is a fully managed service that can collect and deliver streaming data with low latency and high throughput. It can ingest the ordered clickstream data and load it directly into Amazon Redshift in near real-time, eliminating the need for the batch processing approach. Kinesis Data Firehose handles data delivery, buffering, and auto-scaling, reducing operational overhead.

E. Amazon Kinesis Data Analytics: Kinesis Data Analytics enables real-time analytics on streaming data. By configuring a Kinesis Data Analytics application, you can process and analyze the clickstream data as it is ingested by Kinesis Data Firehose. Kinesis Data Analytics provides SQL-based querying and processing capabilities, allowing business analysts to consume the streaming data and gain timely insights without requiring custom code or complex infrastructure management.

Using these services together, the clickstream data can be processed and analyzed in near real-time, providing timely insights to the business analysts. The managed nature of Amazon Kinesis Data Firehose and Amazon Kinesis Data Analytics reduces operational overhead and simplifies the setup and management of the data processing pipeline.
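
On the producer side, sending a clickstream event to Kinesis Data Firehose is a single API call. In this sketch the delivery stream name and event fields are hypothetical:

```python
import json
import time
import boto3

firehose = boto3.client("firehose")

# A clickstream event of roughly 1 KB (fields are illustrative).
event = {
    "site": "example.com",
    "user_id": "u-42",
    "page": "/products/123",
    "timestamp": int(time.time()),
}

# Firehose buffers the records and delivers them to Amazon Redshift
# (the delivery stream name is a placeholder).
firehose.put_record(
    DeliveryStreamName="clickstream-to-redshift",
    Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
)
```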

Question 899

Exam Question

A company is building a web application that serves a content management system. The content management system runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances run in an Auto Scaling group across multiple Availability Zones. Users are constantly adding and updating files, blogs, and other website assets in the content management system. A solutions architect must implement a solution in which all the EC2 instances share up-to-date website content with the least possible lag time.

Which solution meets these requirements?

A. Update the EC2 user data in the Auto Scaling group lifecycle policy to copy the website assets from the EC2 instance that was launched most recently. Configure the ALB to make changes to the website assets only in the newest EC2 instance.
B. Copy the website assets to an Amazon Elastic File System (Amazon EFS) file system. Configure each EC2 instance to mount the EFS file system locally. Configure the website hosting application to reference the website assets that are stored in the EFS file system.
C. Copy the website assets to an Amazon S3 bucket. Ensure that each EC2 instance downloads the website assets from the S3 bucket to the attached Amazon Elastic Block Store (Amazon EBS) volume. Run the S3 sync command once each hour to keep files up to date.
D. Restore an Amazon Elastic Block Store (Amazon EBS) snapshot with the website assets. Attach the EBS snapshot as a secondary EBS volume when a new EC2 instance is launched. Configure the website hosting application to reference the website assets that are stored in the secondary EBS volume.

Correct Answer

B. Copy the website assets to an Amazon Elastic File System (Amazon EFS) file system. Configure each EC2 instance to mount the EFS file system locally. Configure the website hosting application to reference the website assets that are stored in the EFS file system.

Explanation

Using an Amazon EFS file system is the recommended solution for sharing website assets across multiple EC2 instances in a highly available and scalable manner. Here’s how it meets the requirements:

Copy the website assets to an Amazon EFS file system: You can upload the website assets to an EFS file system, which provides a scalable and fully managed shared file storage service. Through mount targets in each Availability Zone, the website assets are accessible from EC2 instances across all of the Availability Zones that the Auto Scaling group spans.

Configure each EC2 instance to mount the EFS file system: Mount the EFS file system on each EC2 instance in the Auto Scaling group. This allows all instances to have access to the same set of website assets.

Configure the website hosting application to reference the website assets in the EFS file system: Update the website hosting application to use the file paths within the mounted EFS file system. This ensures that all EC2 instances serve the latest website content without lag, as they are accessing the shared file system.

By using Amazon EFS, the website assets are stored centrally and can be accessed by all EC2 instances simultaneously. Any updates or changes to the website assets will be immediately available to all instances, reducing lag time and ensuring consistent content across the application.
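
A minimal boto3 sketch of provisioning the shared file system is shown below; the subnet IDs, security group, and creation token are placeholders, and in practice you would wait for the file system to become available before creating mount targets:

```python
import boto3

efs = boto3.client("efs")

# Create the shared file system for the website assets.
fs = efs.create_file_system(
    CreationToken="cms-assets",
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# One mount target per Availability Zone lets every EC2 instance in
# the Auto Scaling group mount the same file system.
for subnet_id in ["subnet-0aaa0000000000000", "subnet-0bbb0000000000000"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0ccc0000000000000"],
    )

# Each instance then mounts it at boot, e.g. via /etc/fstab:
#   fs-12345678:/ /var/www/assets efs defaults,_netdev 0 0
```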

Reference

AWS > Documentation > Amazon Elastic File System (EFS) > User Guide > Amazon EFS: How it works

Question 900

Exam Question

A solutions architect is designing the architecture for a new web application. The application will run on AWS Fargate containers with an Application Load Balancer (ALB) and an Amazon Aurora PostgreSQL database. The web application will perform primarily read queries against the database.

What should the solutions architect do to ensure that the website can scale with increasing traffic? (Choose two.)

A. Enable auto scaling on the ALB to scale the load balancer horizontally.
B. Configure Aurora Auto Scaling to adjust the number of Aurora Replicas in the Aurora cluster dynamically.
C. Enable cross-zone load balancing on the ALB to distribute the load evenly across containers in all Availability Zones.
D. Configure an Amazon Elastic Container Service (Amazon ECS) cluster in each Availability Zone to distribute the load across multiple Availability Zones.
E. Configure Amazon Elastic Container Service (Amazon ECS) Service Auto Scaling with a target tracking scaling policy that is based on CPU utilization.

Correct Answer

B. Configure Aurora Auto Scaling to adjust the number of Aurora Replicas in the Aurora cluster dynamically.
E. Configure Amazon Elastic Container Service (Amazon ECS) Service Auto Scaling with a target tracking scaling policy that is based on CPU utilization.

Explanation

B. Configure Aurora Auto Scaling to adjust the number of Aurora Replicas in the Aurora cluster dynamically.

Because the application performs primarily read queries, Aurora Auto Scaling can add Aurora Replicas as read traffic grows and remove them when it subsides. This allows the database tier to scale horizontally and serve the increasing read load efficiently.

E. Configure Amazon Elastic Container Service (Amazon ECS) Service Auto Scaling with a target tracking scaling policy that is based on CPU utilization.

ECS Service Auto Scaling adjusts the number of running Fargate tasks to keep average CPU utilization at the configured target, so the web tier automatically adds tasks as traffic increases and removes them as it decreases. Option A is not valid because an ALB scales itself automatically and exposes no user-configurable auto scaling setting.

Together, these two actions ensure that both the compute tier (Fargate tasks behind the ALB) and the data tier (Aurora Replicas) scale with increasing traffic.
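
A minimal boto3 sketch of the ECS Service Auto Scaling configuration follows; the cluster name, service name, capacity bounds, and CPU target are hypothetical:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the Fargate service's desired task count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/web-cluster/web-service",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Target tracking on average CPU adds tasks as traffic grows and
# removes them when it subsides.
autoscaling.put_scaling_policy(
    PolicyName="web-cpu-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/web-cluster/web-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```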