
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 22

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free to help you prepare for the SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.

Question 931

Exam Question

A company runs its infrastructure on AWS and has a registered base of 700,000 users for its document management application. The company intends to create a product that converts large .pdf files to .jpg image files. The .pdf files average 5 MB in size. The company needs to store the original files and the converted files. A solutions architect must design a scalable solution to accommodate demand that will grow rapidly over time.

Which solution meets these requirements MOST cost-effectively?

A. Save the .pdf files to Amazon S3. Configure an S3 PUT event to invoke an AWS Lambda function to convert the files to .jpg format and store them back in Amazon S3.
B. Save the .pdf files to Amazon DynamoDB. Use the DynamoDB Streams feature to invoke an AWS Lambda function to convert the files to .jpg format and store them back in DynamoDB.
C. Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Amazon Elastic Block Store (Amazon EBS) storage, and an Auto Scaling group. Use a program on the EC2 instances to convert the files to .jpg format. Save the .pdf files and the .jpg files in the EBS storage.
D. Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Amazon Elastic File System (Amazon EFS) storage, and an Auto Scaling group. Use a program on the EC2 instances to convert the files to .jpg format. Save the .pdf files and the .jpg files in the EFS storage.

Correct Answer

A. Save the .pdf files to Amazon S3. Configure an S3 PUT event to invoke an AWS Lambda function to convert the files to .jpg format and store them back in Amazon S3.

Explanation

The most cost-effective solution that meets the requirements of storing and converting large .pdf files to .jpg format for a rapidly growing user base would be:

A. Save the .pdf files to Amazon S3. Configure an S3 PUT event to invoke an AWS Lambda function to convert the files to .jpg format and store them back in Amazon S3.

  1. Option B suggests saving the .pdf files to Amazon DynamoDB. However, DynamoDB is not suitable for storing large binary files: items are limited to 400 KB, well below the 5 MB average file size, and the service is designed for key-value and document data rather than object storage.
  2. Option C suggests using an Elastic Beanstalk application with EC2 instances and EBS storage. This solution adds unnecessary complexity and management overhead, as well as additional costs for EC2 instances and EBS storage.
  3. Option D suggests using an Elastic Beanstalk application with EC2 instances and EFS storage. While EFS can provide scalable file storage, it is more expensive compared to Amazon S3 for storing large files.

On the other hand, option A provides a simple and cost-effective solution. By saving the .pdf files to Amazon S3 and configuring an S3 PUT event to trigger an AWS Lambda function, the files can be converted to .jpg format efficiently and stored back in Amazon S3. This solution takes advantage of the serverless nature of AWS Lambda, which scales automatically to handle any level of demand. Additionally, Amazon S3 provides highly durable and scalable object storage at a lower cost compared to other options.

Therefore, option A is the recommended solution to store and convert large .pdf files to .jpg format in a cost-effective and scalable manner.
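The event-driven flow in option A can be sketched as a Lambda handler. This is a minimal sketch under stated assumptions, not the company's actual code: the event shape follows the documented S3 notification format, the conversion step itself is stubbed out (a real function would download the object and render it with a PDF library), and the `converted/` prefix is an assumed naming convention.

```python
import os

def target_key(source_key, prefix="converted/"):
    """Map an uploaded .pdf key to the .jpg key the result is stored under."""
    base, _ = os.path.splitext(source_key)
    return f"{prefix}{base}.jpg"

def handler(event, context=None):
    """Entry point invoked by an s3:ObjectCreated:Put notification."""
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        if not key.lower().endswith(".pdf"):
            continue  # a suffix filter on the notification should already exclude these
        # In a real function: download the PDF from `bucket`, render its pages
        # to JPG, and upload the results back to S3 under target_key(key).
        results.append({"bucket": bucket, "source": key, "target": target_key(key)})
    return results
```

Because the converted objects are written under a different prefix, the PUT notification can be scoped to the upload prefix so the function does not re-trigger on its own output.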

Question 932

Exam Question

A company hosts a static website within an Amazon S3 bucket. A solutions architect needs to ensure that data can be recovered in case of accidental deletion.

Which action will accomplish this?

A. Enable Amazon S3 versioning.
B. Enable Amazon S3 Intelligent-Tiering.
C. Enable an Amazon S3 lifecycle policy.
D. Enable Amazon S3 cross-Region replication.

Correct Answer

A. Enable Amazon S3 versioning.

Explanation

To ensure data can be recovered in case of accidental deletion in an Amazon S3 bucket hosting a static website, the recommended action is:

A. Enable Amazon S3 versioning.

Enabling Amazon S3 versioning allows the bucket to keep multiple versions of an object. When a new version of an object is uploaded, the previous version is retained, and all versions can be accessed and restored if needed. In case of accidental deletion or modification of an object, the previous versions can be restored to recover the lost or modified data.

Option B, enabling Amazon S3 Intelligent-Tiering, is not directly related to data recovery in case of accidental deletion. Intelligent-Tiering is a storage class that automatically moves objects between different storage tiers based on their access patterns, but it doesn’t specifically address data recovery.

Option C, enabling an Amazon S3 lifecycle policy, allows for automating the transition and deletion of objects based on predefined rules. While it can help manage object lifecycle, it doesn’t provide the same level of data recovery as versioning.

Option D, enabling Amazon S3 cross-Region replication, is a mechanism to replicate data from one S3 bucket to another in a different AWS Region for purposes such as data redundancy and disaster recovery. While it can help protect against data loss in case of regional failures, it doesn’t directly address accidental deletion within the same bucket.

Therefore, enabling Amazon S3 versioning (option A) is the correct action to ensure data can be recovered in case of accidental deletion in an Amazon S3 bucket hosting a static website.
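With versioning enabled, an accidental delete only writes a "delete marker" on top of the object, and recovery means removing that marker. The helper below is an illustrative sketch, assuming a response shaped like the S3 ListObjectVersions API; bucket and key names are placeholders.

```python
# Enabling versioning is a one-time bucket setting, e.g. with boto3:
#   s3.put_bucket_versioning(Bucket=bucket,
#                            VersioningConfiguration={"Status": "Enabled"})

def find_delete_marker(versions_page, key):
    """Return the VersionId of the current delete marker for `key`, or None.

    `versions_page` has the shape of an S3 ListObjectVersions response.
    """
    for marker in versions_page.get("DeleteMarkers", []):
        if marker["Key"] == key and marker.get("IsLatest"):
            return marker["VersionId"]
    return None
```

Deleting that marker with `s3.delete_object(Bucket=..., Key=..., VersionId=marker_id)` makes the previous version of the object current again.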

Question 933

Exam Question

A company runs a containerized application on a Kubernetes cluster in an on-premises data center. The company is using a MongoDB database for data storage. The company wants to migrate some of these environments to AWS, but no code changes or deployment method changes are possible at this time. The company needs a solution that minimizes operational overhead.

Which solution meets these requirements?

A. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes for compute and MongoDB on EC2 for data storage.
B. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute and Amazon DynamoDB for data storage.
C. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes for compute and Amazon DynamoDB for data storage.
D. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon DocumentDB (with MongoDB compatibility) for data storage.

Correct Answer

D. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon DocumentDB (with MongoDB compatibility) for data storage.

Explanation

The solution that meets the requirements while minimizing operational overhead is:

D. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon DocumentDB (with MongoDB compatibility) for data storage.

Because no code changes or deployment method changes are possible, the company must stay on Kubernetes and keep a MongoDB-compatible data store. Amazon EKS runs the existing Kubernetes manifests and tooling (kubectl, Helm, and so on) without modification, and AWS Fargate removes the need to provision, patch, and scale worker nodes. Amazon DocumentDB (with MongoDB compatibility) accepts the application's existing MongoDB drivers and queries, so the data tier can move with only a connection-string change, and it is fully managed by AWS.

Options B and C replace MongoDB with Amazon DynamoDB, which has a different API and data model and would therefore require code changes, violating the stated constraint. Option B also replaces Kubernetes with Amazon ECS, which changes the deployment method.

Option A changes the deployment method to Amazon ECS and leaves MongoDB self-managed on EC2, which maximizes rather than minimizes operational overhead (instance patching, database backups, scaling).

Therefore, option D is the only choice that preserves both the Kubernetes deployment method and MongoDB compatibility while offloading infrastructure management to AWS.
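For context on the data tier: Amazon DocumentDB (option D) speaks the MongoDB wire protocol, so an application typically keeps its existing MongoDB driver and only the connection string changes. The helper below sketches the URI shape; the host and credentials are placeholders, and production connections additionally require TLS with the Amazon RDS CA bundle (`tlsCAFile`).

```python
def documentdb_uri(user, password, host, port=27017):
    """Build a DocumentDB connection URI in standard MongoDB format.

    `retryWrites=false` is required by DocumentDB; `secondaryPreferred`
    spreads reads across replica instances.
    """
    return (
        f"mongodb://{user}:{password}@{host}:{port}/"
        "?tls=true&replicaSet=rs0&readPreference=secondaryPreferred"
        "&retryWrites=false"
    )
```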

Question 934

Exam Question

A company is performing an AWS Well-Architected Framework review of an existing workload deployed on AWS. The review identified a public-facing website running on the same Amazon EC2 instance as a Microsoft Active Directory domain controller that was installed recently to support other AWS services. A solutions architect needs to recommend a new design that would improve the security of the architecture and minimize the administrative demand on IT staff.

What should the solutions architect recommend?

A. Use AWS Directory Service to create a managed Active Directory. Uninstall Active Directory on the current EC2 instance.
B. Create another EC2 instance in the same subnet and reinstall Active Directory on it. Uninstall Active Directory from the current EC2 instance.
C. Use AWS Directory Service to create an Active Directory connector. Proxy Active Directory requests to the Active Directory domain controller running on the current EC2 instance.
D. Enable AWS Single Sign-On (AWS SSO) with Security Assertion Markup Language (SAML) 2.0 federation with the current Active Directory controller. Modify the EC2 instance’s security group to deny public access to Active Directory.

Correct Answer

A. Use AWS Directory Service to create a managed Active Directory. Uninstall Active Directory on the current EC2 instance.

Explanation

The solutions architect should recommend:

A. Use AWS Directory Service to create a managed Active Directory. Uninstall Active Directory on the current EC2 instance.

By using AWS Directory Service, the company can create a managed Active Directory service that is specifically designed for AWS environments. This solution will improve the security of the architecture by separating the public-facing website from the Active Directory domain controller. The managed Active Directory service will be hosted and managed by AWS, reducing the administrative demand on IT staff. Uninstalling Active Directory from the current EC2 instance ensures a clear separation of roles and responsibilities.

Option B suggests creating another EC2 instance in the same subnet and reinstalling Active Directory on it. Although this separates the domain controller from the website, the directory remains self-managed, so the administrative burden of patching, backups, replication, and monitoring stays with IT staff, and the domain controller still sits in the same subnet as a public-facing workload.

Option C suggests using AWS Directory Service to create an Active Directory connector. However, AD Connector only proxies requests to an existing self-managed directory; the domain controller would remain on the same EC2 instance as the public-facing website, so neither the security concern nor the administrative burden is addressed.

Option D suggests enabling AWS Single Sign-On (AWS SSO) with SAML 2.0 federation with the current Active Directory controller. While this could enhance authentication capabilities, it does not address the security concern of having the public-facing website and the Active Directory on the same EC2 instance.

Therefore, option A is the most appropriate recommendation to improve security and minimize administrative demand.

Question 935

Exam Question

A company has migrated a two-tier application from its on-premises data center to the AWS Cloud. The data tier is a Multi-AZ deployment of Amazon RDS for Oracle with 12 TiB of General Purpose SSD Amazon Elastic Block Store (Amazon EBS) storage. The application is designed to process and store documents in the database as binary large objects (blobs) with an average document size of 6 MB. The database size has grown over time, reducing the performance and increasing the cost of storage. The company must improve the database performance and needs a solution that is highly available and resilient.

Which solution will meet these requirements MOST cost-effectively?

A. Reduce the RDS DB instance size. Increase the storage capacity to 24 TiB. Change the storage type to Magnetic.
B. Increase the RDS DB instance size. Increase the storage capacity to 24 TiB. Change the storage type to Provisioned IOPS.
C. Create an Amazon S3 bucket. Update the application to store documents in the S3 bucket. Store the object metadata in the existing database.
D. Create an Amazon DynamoDB table. Update the application to use DynamoDB. Use AWS Database Migration Service (AWS DMS) to migrate data from the Oracle database to DynamoDB.

Correct Answer

C. Create an Amazon S3 bucket. Update the application to store documents in the S3 bucket. Store the object metadata in the existing database.

Explanation

The solution that improves database performance most cost-effectively is:

C. Create an Amazon S3 bucket. Update the application to store documents in the S3 bucket. Store the object metadata in the existing database.

Storing 6 MB binary documents as BLOBs inside the Oracle database is what is driving both the storage growth and the performance degradation. Moving the documents to Amazon S3 and keeping only lightweight metadata (object key, size, owner, timestamps) in the database shrinks the database dramatically, which restores query performance and reduces I/O pressure. Amazon S3 provides 99.999999999% (11 nines) durability and high availability at a far lower per-GB cost than EBS-backed database storage, and it scales automatically as the document set grows.

Option A would reduce both compute capacity and disk performance; Magnetic is a previous-generation storage type, and a smaller instance would make the performance problem worse.

Option B would improve performance, but at a significantly higher cost: a larger instance plus 24 TiB of Provisioned IOPS storage is among the most expensive RDS configurations, and it does not address the root cause, so storage cost and database size continue to grow with the document set.

Option D requires rewriting the application for a different data model, and DynamoDB items are limited to 400 KB, so 6 MB documents could not be stored directly in the table anyway.

Therefore, option C meets the performance, availability, and resilience requirements at the lowest cost.
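The offload pattern can be sketched in a few lines: the blob goes to S3 and only a small metadata row goes to the relational database. This is an illustrative sketch, not the company's schema; the bucket, key scheme, and column names are assumptions, and `s3_client` stands in for a boto3-style S3 client.

```python
import hashlib
from datetime import datetime, timezone

def store_document(s3_client, bucket, doc_id, blob):
    """Upload the document blob to S3; return the metadata row for the database."""
    key = f"documents/{doc_id}.bin"
    s3_client.put_object(Bucket=bucket, Key=key, Body=blob)
    return {
        "doc_id": doc_id,
        "s3_key": key,                     # pointer stored in place of the BLOB column
        "size_bytes": len(blob),
        "sha256": hashlib.sha256(blob).hexdigest(),  # integrity check on retrieval
        "uploaded_at": datetime.now(timezone.utc).isoformat(),
    }
```

Reads then fetch the row from RDS and the blob from S3 by key, so the database only ever handles kilobyte-sized records.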

Question 936

Exam Question

A company hosts a static website on-premises and wants to migrate the website to AWS. The website should load as quickly as possible for users around the world. The company also wants the most cost-effective solution.

What should a solutions architect do to accomplish this?

A. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Replicate the S3 bucket to multiple AWS Regions.
B. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Configure Amazon CloudFront with the S3 bucket as the origin.
C. Copy the website content to an Amazon EBS-backed Amazon EC2 instance running Apache HTTP Server. Configure Amazon Route 53 geolocation routing policies to select the closest origin.
D. Copy the website content to multiple Amazon EBS-backed Amazon EC2 instances running Apache HTTP Server in multiple AWS Regions. Configure Amazon CloudFront geolocation routing policies to select the closest origin.

Correct Answer

B. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Configure Amazon CloudFront with the S3 bucket as the origin.

Explanation

To migrate the static website to AWS, provide fast global access, and achieve cost-effectiveness, the most suitable solution is:

B. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Configure Amazon CloudFront with the S3 bucket as the origin.

By copying the website content to an Amazon S3 bucket and configuring it to serve static webpage content, you leverage the highly scalable and durable storage capabilities of Amazon S3. S3 is designed to deliver content with high performance and low latency.

To ensure fast global access, you can configure Amazon CloudFront, which is a content delivery network (CDN) service. CloudFront caches the website content at edge locations worldwide, allowing users to access the content from the nearest edge location. This reduces latency and improves the website’s loading speed for users around the world.

Option A suggests replicating the S3 bucket to multiple AWS Regions. While this can provide redundancy and improved availability, it may not be necessary for a static website and could increase costs without significant benefits.

Option C suggests using an Amazon EC2 instance running Apache HTTP Server. This option introduces additional operational overhead for managing the EC2 instance and may not provide the same scalability and global reach as Amazon S3 and CloudFront.

Option D suggests using multiple EC2 instances running Apache HTTP Server in multiple AWS Regions. This option also adds complexity and cost, as well as increased management overhead. Using CloudFront with a single S3 origin is usually sufficient for serving static websites globally.

Therefore, option B offers the most suitable and cost-effective solution by leveraging the benefits of Amazon S3 for storage and CloudFront for global content delivery.
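The CloudFront side of option B can be sketched as a distribution config with the S3 website endpoint as a custom origin. This is a trimmed sketch of the key fields only; the real `create_distribution` API requires additional fields, the endpoint name is a placeholder, and `http-only` is used because S3 website endpoints do not support HTTPS on the origin side.

```python
def distribution_config(bucket_website_endpoint, ref):
    """A trimmed sketch of a CloudFront distribution config for an
    S3 static-website origin (not a complete API request body)."""
    return {
        "CallerReference": ref,
        "Comment": "static website",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "s3-website",
                "DomainName": bucket_website_endpoint,
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "http-only",  # website endpoints are HTTP-only
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-website",
            "ViewerProtocolPolicy": "redirect-to-https",  # viewers still get HTTPS
        },
    }
```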

Question 937

Exam Question

A company is designing a new web application that the company will deploy into a single AWS Region. The application requires a two-tier architecture that will include Amazon EC2 instances and an Amazon RDS DB instance. A solutions architect needs to design the application so that all components are highly available.

Which solution will meet these requirements MOST cost-effectively?

A. Deploy EC2 instances in an additional Region. Create a DB instance with the Multi-AZ option activated.
B. Deploy all EC2 instances in the same Region and the same Availability Zone. Create a DB instance with the Multi-AZ option activated.
C. Deploy EC2 instances across at least two Availability Zones within the same Region. Create a DB instance in a single Availability Zone.
D. Deploy EC2 instances across at least two Availability Zones within the same Region. Create a DB instance with the Multi-AZ option activated.

Correct Answer

D. Deploy EC2 instances across at least two Availability Zones within the same Region. Create a DB instance with the Multi-AZ option activated.

Explanation

To make all components of the two-tier application highly available within a single Region, the correct choice is:

D. Deploy EC2 instances across at least two Availability Zones within the same Region. Create a DB instance with the Multi-AZ option activated.

Deploying EC2 instances across at least two Availability Zones keeps the web tier running if one AZ fails. Activating the Multi-AZ option on the RDS DB instance maintains a synchronous standby replica in a second AZ, and RDS fails over to it automatically during an AZ outage, making the data tier highly available as well.

Option A deploys EC2 instances in an additional Region, which goes beyond the single-Region design in the requirements and adds cost and complexity that are not necessary for high availability within a Region.

Option B places all EC2 instances in a single Availability Zone, so an outage in that AZ would take down the entire web tier.

Option C leaves the DB instance in a single Availability Zone. An AZ failure there would make the database, and therefore the application, unavailable; the requirement states that all components must be highly available, so a single-AZ database does not qualify.

Although the Multi-AZ option roughly doubles the database cost compared with a single-AZ instance, option D is the most cost-effective of the choices that actually satisfy the high-availability requirement.
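Spreading the web tier across Availability Zones (in practice handled by an Auto Scaling group's AZ balancing) amounts to round-robin placement, which can be sketched in one function. The AZ names below are examples, not from the question.

```python
def place_instances(count, azs):
    """Assign each of `count` instances to an AZ, round-robin.

    With at least two AZs in the list, the loss of any single AZ
    leaves roughly half the instances still running.
    """
    return [azs[i % len(azs)] for i in range(count)]
```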

Question 938

Exam Question

A company hosts a static website on-premises and wants to migrate the website to AWS. The website should load as quickly as possible for users around the world. The company also wants the most cost-effective solution.

What should a solutions architect do to accomplish this?

A. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Replicate the S3 bucket to multiple AWS Regions.
B. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Configure Amazon CloudFront with the S3 bucket as the origin.
C. Copy the website content to an Amazon EBS-backed Amazon EC2 instance running Apache HTTP Server. Configure Amazon Route 53 geolocation routing policies to select the closest origin.
D. Copy the website content to multiple Amazon EBS-backed Amazon EC2 instances running Apache HTTP Server in multiple AWS Regions. Configure Amazon CloudFront geolocation routing policies to select the closest origin.

Correct Answer

B. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Configure Amazon CloudFront with the S3 bucket as the origin.

Explanation

To migrate a static website from on-premises to AWS and ensure fast loading times for users worldwide while maintaining cost-effectiveness, the recommended solution is:

B. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Configure Amazon CloudFront with the S3 bucket as the origin.

Here’s why this solution is the most appropriate:

  1. Amazon S3 is a highly scalable and cost-effective storage service designed for static content. By copying the website content to an S3 bucket, you benefit from S3’s durability, availability, and low cost.
  2. Configuring the S3 bucket to serve static webpage content allows you to host the website directly from the bucket, providing a simple and efficient way to deliver static files.
  3. By integrating Amazon CloudFront with the S3 bucket as the origin, you leverage CloudFront’s global content delivery network (CDN) to distribute your website’s content to edge locations worldwide. CloudFront caches the content closer to end users, reducing latency and improving the website’s loading speed.

By combining Amazon S3 and Amazon CloudFront, you achieve high availability, scalability, and low latency for your static website at a reasonable cost.

Option A suggests replicating the S3 bucket to multiple AWS Regions. While this approach can improve availability, it increases complexity and costs, which may not be necessary for a static website.

Option C suggests hosting the website on an Amazon EC2 instance running Apache HTTP Server and using geolocation routing policies in Amazon Route 53. This option involves managing and scaling EC2 instances, which can introduce more operational overhead and higher costs compared to an S3-based solution.

Option D suggests using multiple EC2 instances in multiple AWS Regions with CloudFront geolocation routing. Similar to option C, this approach involves managing and scaling EC2 instances across regions, which can be more complex and expensive compared to an S3 and CloudFront solution.

Therefore, option B provides the most efficient and cost-effective solution by utilizing S3 for storage, serving static content directly from the bucket, and leveraging CloudFront’s CDN for global content delivery.

Question 939

Exam Question

A company has an AWS Lambda function that needs read access to an Amazon S3 bucket that is located in the same AWS account.

Which solution will meet these requirements in the MOST secure manner?

A. Apply an S3 bucket policy that grants read access to the S3 bucket.
B. Apply an IAM role to the Lambda function. Apply an IAM policy to the role to grant read access to the S3 bucket.
C. Embed an access key and a secret key in the Lambda function’s code to grant the required IAM permissions for read access to the S3 bucket.
D. Apply an IAM role to the Lambda function. Apply an IAM policy to the role to grant read access to all S3 buckets in the account.

Correct Answer

B. Apply an IAM role to the Lambda function. Apply an IAM policy to the role to grant read access to the S3 bucket.

Explanation

The MOST secure solution to grant read access to an Amazon S3 bucket from an AWS Lambda function in the same AWS account is:

B. Apply an IAM role to the Lambda function. Apply an IAM policy to the role to grant read access to the S3 bucket.

Here’s why this solution is the most secure:

  1. IAM (Identity and Access Management) provides fine-grained access control for AWS resources. By applying an IAM role to the Lambda function, you can assign specific permissions to the function without embedding any access credentials within the code.
  2. Applying an IAM policy to the role allows you to define the exact permissions needed for the Lambda function to read the S3 bucket. You can specify the specific bucket and actions (such as GetObject) required, ensuring that the Lambda function has the necessary access while maintaining the principle of least privilege.
  3. IAM roles provide temporary security credentials to the Lambda function, making it easier to manage and rotate access keys. The role can be associated with the Lambda function during deployment or execution, and the credentials are automatically rotated and managed by AWS.

Option A, applying an S3 bucket policy, can grant read access to the bucket, but it is generally recommended to use IAM roles and policies for access control within the AWS ecosystem, as they provide more flexibility and centralized management.

Option C, embedding access keys and secret keys in the Lambda function’s code, is not recommended for security reasons. It can expose the access keys, making them vulnerable to unauthorized access or compromise.

Option D, applying an IAM role with an IAM policy granting read access to all S3 buckets in the account, provides more access than necessary for the specific requirement. It’s best practice to grant the minimum required permissions, which is achieved by applying an IAM policy directly to the role that is attached to the Lambda function.

Therefore, option B is the most secure and recommended solution, as it leverages IAM roles and policies to grant read access to the specific S3 bucket from the Lambda function.
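The least-privilege policy from option B can be sketched as a standard IAM policy document, here built as a Python dict. The bucket name is a placeholder; the document would be attached to the Lambda function's execution role, never embedded in code as credentials.

```python
def s3_read_policy(bucket):
    """Least-privilege read-only policy for a single S3 bucket.

    GetObject applies to objects (the /* resource); ListBucket applies
    to the bucket itself, so the two need separate statements.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": f"arn:aws:s3:::{bucket}/*",
            },
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": f"arn:aws:s3:::{bucket}",
            },
        ],
    }
```

Note how this contrasts with option D: the `Resource` fields name one bucket rather than `arn:aws:s3:::*`, keeping the grant to the minimum required.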

Question 940

Exam Question

A financial services company has a web application that serves users in the United States and Europe. The application consists of a database tier and a web server tier. The database tier consists of a MySQL database hosted in us-east-1. Amazon Route 53 geoproximity routing is used to direct traffic to instances in the closest Region. A performance review of the system reveals that European users are not receiving the same level of query performance as those in the United States.

Which changes should be made to the database tier to improve performance?

A. Migrate the database to Amazon RDS for MySQL. Configure Multi-AZ in one of the European Regions.
B. Migrate the database to Amazon DynamoDB. Use DynamoDB global tables to enable replication to additional Regions.
C. Deploy MySQL instances in each Region. Deploy an Application Load Balancer in front of MySQL to reduce the load on the primary instance.
D. Migrate the database to an Amazon Aurora global database in MySQL compatibility mode. Configure read replicas in one of the European Regions.

Correct Answer

D. Migrate the database to an Amazon Aurora global database in MySQL compatibility mode. Configure read replicas in one of the European Regions.

Explanation

To improve performance for European users in the financial services company’s web application, the following change should be made to the database tier:

D. Migrate the database to an Amazon Aurora global database in MySQL compatibility mode. Configure read replicas in one of the European Regions.

Here’s why this solution improves performance:

  1. Amazon Aurora is a MySQL-compatible database engine provided by AWS. It offers performance improvements over traditional MySQL databases, including faster read and write performance, enhanced scalability, and automatic data replication.
  2. Deploying an Amazon Aurora global database allows for cross-Region replication of data, which means that data can be replicated to a secondary cluster in a European Region. This reduces the latency for European users, as they can read data from a nearby replica instead of querying the primary database in the us-east-1 Region.
  3. By configuring read replicas in one of the European Regions, read traffic from European users can be directed to these replicas, distributing the load and improving query performance. The read replicas can handle read-intensive workloads while the primary database handles write operations.

Option A, migrating the database to Amazon RDS for MySQL with Multi-AZ in a European Region, provides fault tolerance through synchronous standby replication, but Multi-AZ operates within a single Region and its standby cannot serve read traffic. It also cannot give both user groups low-latency access: hosting the database in Europe would simply shift the latency problem to users in the United States.

Option B, migrating the database to Amazon DynamoDB with global tables, would involve a significant change in the database technology. While DynamoDB can provide high scalability and global replication, it may not be a suitable replacement for a relational database like MySQL, especially if the application heavily relies on MySQL-specific features.

Option C, deploying MySQL instances in each Region with an Application Load Balancer, might help with load balancing but does not address the issue of replicating data to European Regions. It also adds complexity in managing multiple MySQL instances across Regions.

Therefore, option D is the most suitable solution for improving performance for European users. Migrating to an Amazon Aurora global database with read replicas in a European Region leverages the benefits of Aurora’s performance and cross-Region replication capabilities, ensuring lower latency and improved query performance for European users.
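The application-side routing that option D enables is simple: writes always go to the primary Region's cluster endpoint, while reads go to the reader endpoint of the nearest secondary cluster. The endpoints and Region mapping below are illustrative placeholders, not real cluster names.

```python
# Hypothetical endpoint map for an Aurora global database with its
# primary in us-east-1 and a secondary cluster in eu-west-1.
ENDPOINTS = {
    "writer": "app.cluster-xyz.us-east-1.rds.amazonaws.com",
    "readers": {
        "us-east-1": "app.cluster-ro-xyz.us-east-1.rds.amazonaws.com",
        "eu-west-1": "app.cluster-ro-abc.eu-west-1.rds.amazonaws.com",
    },
}

def endpoint_for(operation, user_region):
    """Pick the database endpoint for a request from `user_region`."""
    if operation == "write":
        return ENDPOINTS["writer"]  # all writes go to the primary Region
    # Reads prefer a local replica; fall back to the primary Region's
    # reader endpoint when no nearby secondary cluster exists.
    return ENDPOINTS["readers"].get(user_region, ENDPOINTS["readers"]["us-east-1"])
```

European read queries thus stay within Europe, which is exactly the latency improvement the question asks for, while writes continue to flow to us-east-1.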