
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 60

The latest AWS Certified Solutions Architect – Associate (SAA-C03) practice exam questions and answers (Q&A) are available free. They can help you prepare for and pass the SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.

Question 1311

Exam Question

A company has media and application files that need to be shared internally. Users currently are authenticated using Active Directory and access files from a Microsoft Windows platform. The chief executive officer wants to keep the same user permissions, but wants the company to improve the process as the company is reaching its storage capacity limit.

What should a solutions architect recommend?

A. Set up a corporate Amazon S3 bucket and move all media and application files.
B. Configure Amazon FSx for Windows File Server and move all the media and application files.
C. Configure Amazon Elastic File System (Amazon EFS) and move all media and application files.
D. Set up Amazon EC2 on Windows, attach multiple Amazon Elastic Block Store (Amazon EBS) volumes, and move all media and application files.

Correct Answer

B. Configure Amazon FSx for Windows File Server and move all the media and application files.

Explanation

Based on the requirements, the most suitable recommendation for the solutions architect would be:

B. Configure Amazon FSx for Windows File Server and move all the media and application files.

Amazon FSx for Windows File Server provides fully managed file storage that is accessible over the Server Message Block (SMB) protocol, the native file-sharing protocol of Microsoft Windows. It is built on Windows Server and integrates directly with Microsoft Active Directory, so users keep authenticating with their existing AD credentials and the existing NTFS permissions and access control lists (ACLs) are preserved after migration. Because the service is managed and its capacity can be increased, it also removes the on-premises storage limit the company is reaching.

Option A (a corporate Amazon S3 bucket) is not suitable because Amazon S3 is an object store: it does not present an SMB file share, does not preserve NTFS file permissions, and does not natively authenticate users against Active Directory.

Option C (Amazon Elastic File System) is not suitable because Amazon EFS is an NFS-based file system designed for Linux clients. It does not support the SMB protocol that Windows clients use and does not integrate with Active Directory for Windows file permissions.

Option D (an EC2 Windows instance with multiple attached EBS volumes) would recreate a self-managed file server in the cloud. It requires manual capacity planning, patching, and volume management, and it does not provide the scalability or availability of a managed file service.

Therefore, option B is the recommended choice for this scenario.
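As a concrete sketch, option B's setup, an FSx for Windows File Server file system joined to an existing AWS Managed Microsoft AD, might be created with parameters like the following. All IDs below are hypothetical placeholders, and the boto3 call itself is left commented out.

```python
# Hypothetical parameters for boto3's fsx.create_file_system call.
# Subnet, security group, and directory IDs are placeholders.
create_file_system_params = {
    "FileSystemType": "WINDOWS",
    "StorageCapacity": 1024,          # GiB; can grow far beyond one server's disks
    "StorageType": "SSD",
    "SubnetIds": ["subnet-0123456789abcdef0"],
    "SecurityGroupIds": ["sg-0123456789abcdef0"],
    "WindowsConfiguration": {
        "ActiveDirectoryId": "d-0123456789",   # existing AWS Managed Microsoft AD
        "DeploymentType": "SINGLE_AZ_2",       # newer-generation single-AZ deployment
        "ThroughputCapacity": 32,              # MB/s
    },
}

# With credentials configured, the call would be:
# import boto3
# fsx = boto3.client("fsx")
# response = fsx.create_file_system(**create_file_system_params)
```

Because the file system is domain-joined, existing NTFS ACLs and user permissions carry over when the files are migrated (for example with AWS DataSync).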

Question 1312

Exam Question

A company is planning to migrate a commercial off-the-shelf application from its on-premises data center to AWS. The software has a licensing model based on sockets and cores, with predictable capacity and uptime requirements. The company wants to use its existing licenses, which were purchased earlier this year.

Which Amazon EC2 pricing option is the MOST cost-effective?

A. Dedicated Reserved Hosts
B. Dedicated On-Demand Hosts
C. Dedicated Reserved Instances
D. Dedicated On-Demand Instances

Correct Answer

A. Dedicated Reserved Hosts

Explanation

In the given scenario, the most cost-effective Amazon EC2 pricing option would be:

A. Dedicated Reserved Hosts

The software is licensed per socket and per core, which means the company needs visibility into, and control over, the underlying physical server. Amazon EC2 Dedicated Hosts provide exactly that: a physical server fully dedicated to your use, with visibility into its sockets and physical cores. This is what makes Bring Your Own License (BYOL) models based on sockets or cores possible.

Because the workload has predictable capacity and uptime requirements, purchasing a Dedicated Host Reservation for a one- or three-year term provides a significant discount over the On-Demand Dedicated Host rate. That makes Dedicated Reserved Hosts (option A) cheaper than Dedicated On-Demand Hosts (option B) for this steady workload.

Dedicated Reserved Instances (option C) and Dedicated On-Demand Instances (option D) run on hardware dedicated to a single customer, but they do not give you visibility into or control of the host's sockets and cores, so they cannot satisfy per-socket or per-core licensing terms.

Therefore, option A, Dedicated Reserved Hosts, is the most cost-effective pricing option in this scenario.
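The reserved-versus-on-demand trade-off comes down to simple arithmetic. The hourly rates below are made-up placeholders, not real AWS prices, which vary by instance family and Region:

```python
# Hypothetical hourly rates for one Dedicated Host; real prices vary by
# instance family and Region -- check the AWS pricing pages.
on_demand_rate = 4.50      # $/hour, Dedicated On-Demand Host
reserved_rate = 2.70       # $/hour effective, 1-year Dedicated Host Reservation

hours_per_year = 24 * 365  # 8760
on_demand_annual = on_demand_rate * hours_per_year
reserved_annual = reserved_rate * hours_per_year

savings = on_demand_annual - reserved_annual
savings_pct = 100 * savings / on_demand_annual
print(f"Annual savings: ${savings:,.0f} ({savings_pct:.0f}%)")  # $15,768 (40%)
```

For a workload that runs all year with predictable capacity, the reservation wins; sporadic or short-lived usage would favor on-demand pricing instead.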

Question 1313

Exam Question

A company uses a legacy on-premises analytics application that operates on gigabytes of .csv files and represents months of data. The legacy application cannot handle the growing size of .csv files. New .csv files are added daily from various data sources to a central on-premises storage location. The company wants to continue to support the legacy application while users learn AWS analytics services. To achieve this, a solutions architect wants to maintain two synchronized copies of all the .csv files on-premises and in Amazon S3.

Which solution should the solutions architect recommend?

A. Deploy AWS DataSync on-premises. Configure DataSync to continuously replicate the .csv files between the company’s on-premises storage and the company’s S3 bucket.
B. Deploy an on-premises file gateway. Configure data sources to write the .csv files to the file gateway. Point the legacy analytics application to the file gateway. The file gateway should replicate the .csv files to Amazon S3.
C. Deploy an on-premises volume gateway. Configure data sources to write the .csv files to the volume gateway. Point the legacy analytics application to the volume gateway. The volume gateway should replicate data to Amazon S3.
D. Deploy AWS DataSync on-premises. Configure DataSync to continuously replicate the .csv files between on-premises and Amazon Elastic File System (Amazon EFS). Enable replication from Amazon EFS to the company’s S3 bucket.

Correct Answer

A. Deploy AWS DataSync on-premises. Configure DataSync to continuously replicate the .csv files between the company’s on-premises storage and the company’s S3 bucket.

Explanation

The solutions architect should recommend option A: Deploy AWS DataSync on-premises and configure it to continuously replicate the .csv files between the company’s on-premises storage and the company’s S3 bucket.

AWS DataSync is a service designed for transferring large amounts of data between on-premises storage and AWS storage services, such as Amazon S3. It provides efficient, fast, and secure data transfer capabilities. By deploying DataSync on-premises, the solution can ensure that the .csv files are synchronized between the on-premises storage and the S3 bucket.

With DataSync, the .csv files added daily to the central on-premises storage location can be seamlessly replicated to the S3 bucket, allowing users to access the data through AWS analytics services. This solution allows the company to continue using their legacy on-premises analytics application while users learn and transition to AWS analytics services.

Option B suggests deploying an on-premises file gateway. A file gateway stores the primary data in Amazon S3 and keeps only a cache of recently used files on premises, so it would not maintain a full on-premises copy of all the .csv files as the requirement demands. It would also require repointing both the data sources and the legacy application at the gateway, adding unnecessary change.

Option C suggests deploying an on-premises volume gateway. While volume gateways are used for block-level storage, the scenario mentioned specifically involves .csv files, which are typically stored as objects rather than blocks. Therefore, this option may not be the most suitable choice.

Option D suggests using DataSync to replicate the .csv files between on-premises and Amazon EFS, and then enabling replication from Amazon EFS to the S3 bucket. While this option is technically feasible, it adds unnecessary complexity by involving an intermediate storage layer (Amazon EFS) between on-premises and S3. It’s more straightforward to directly replicate the files from on-premises storage to S3 using DataSync.
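A DataSync task tying the two locations together can be sketched as the parameters one would pass to boto3. The ARNs are hypothetical placeholders; the source location would be served by a DataSync agent deployed on premises:

```python
# Hypothetical ARNs for a DataSync task: the source is an on-premises NFS/SMB
# location registered through a DataSync agent; the destination is the S3 bucket.
create_task_params = {
    "SourceLocationArn": "arn:aws:datasync:us-east-1:123456789012:location/loc-onprem",
    "DestinationLocationArn": "arn:aws:datasync:us-east-1:123456789012:location/loc-s3",
    "Name": "csv-sync",
    # Run hourly so the .csv files added each day reach S3 promptly.
    "Schedule": {"ScheduleExpression": "rate(1 hour)"},
    "Options": {
        "VerifyMode": "ONLY_FILES_TRANSFERRED",  # integrity-check transferred files
        "OverwriteMode": "ALWAYS",               # keep the S3 copy in sync
    },
}

# With credentials configured:
# import boto3
# datasync = boto3.client("datasync")
# task = datasync.create_task(**create_task_params)
```

The legacy application keeps reading from the unchanged on-premises storage, while the S3 copy feeds the AWS analytics services.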

Question 1314

Exam Question

A company is hosting its static website in an Amazon S3 bucket, which is the origin for Amazon CloudFront. The company has users in the United States, Canada, and Europe and wants to reduce costs.

What should a solutions architect recommend?

A. Adjust the CloudFront caching time to live (TTL) from the default to a longer timeframe.
B. Implement CloudFront events with Lambda@Edge to run the website’s data processing.
C. Modify the CloudFront price class to include only the locations of the countries that are served.
D. Implement a CloudFront Secure Sockets Layer (SSL) certificate to push security closer to the locations of the countries that are served.

Correct Answer

C. Modify the CloudFront price class to include only the locations of the countries that are served.

Explanation

To reduce costs for the company’s static website hosted in an Amazon S3 bucket with Amazon CloudFront as the origin, a solutions architect should recommend the following:

C. Modify the CloudFront price class to include only the locations of the countries that are served.

CloudFront offers different price classes based on the regions it serves. By modifying the CloudFront price class to include only the locations of the countries where the company has users (United States, Canada, and Europe), unnecessary costs can be eliminated by excluding regions that are not relevant. This ensures that CloudFront’s edge locations are strategically positioned to serve the target audience while reducing the cost associated with serving traffic in unused regions.

Options A, B, and D are not directly related to cost reduction:

A. Adjusting the CloudFront caching time to live (TTL) from the default to a longer timeframe might improve performance by reducing the number of requests to the origin, but it does not directly address cost reduction.

B. Implementing CloudFront events with Lambda@Edge for data processing might be beneficial for specific use cases, but it does not directly address cost reduction unless there is a requirement to optimize and reduce processing costs.

D. Implementing a CloudFront Secure Sockets Layer (SSL) certificate improves security by enabling secure connections (HTTPS), but it does not directly address cost reduction.
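CloudFront exposes this choice as a single field on the distribution configuration. A minimal sketch (the class names are real CloudFront values; PriceClass_100 covers North America and Europe, which matches the served countries):

```python
# Fragment of a CloudFront distribution config restricting edge locations to
# the least expensive price class, covering North America and Europe only.
distribution_config_patch = {
    "PriceClass": "PriceClass_100",
}

# Valid values, from cheapest/narrowest to full global coverage:
price_classes = ["PriceClass_100", "PriceClass_200", "PriceClass_All"]
```

Applying the patched config via UpdateDistribution takes effect without changing the distribution's domain name, so no DNS changes are needed.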

Question 1315

Exam Question

A company runs a high performance computing (HPC) workload on AWS. The workload requires low-latency network performance and high network throughput with tightly coupled node-to-node communication. The Amazon EC2 instances are properly sized for compute and storage capacity, and are launched using default options.

What should a solutions architect propose to improve the performance of the workload?

A. Choose a cluster placement group while launching Amazon EC2 instances.
B. Choose dedicated instance tenancy while launching Amazon EC2 instances.
C. Choose an Elastic Inference accelerator while launching Amazon EC2 instances.
D. Choose the required capacity reservation while launching Amazon EC2 instances.

Correct Answer

A. Choose a cluster placement group while launching Amazon EC2 instances.

Explanation

To improve the performance of the high performance computing (HPC) workload on AWS, the solutions architect should propose option A: Choose a cluster placement group while launching Amazon EC2 instances.

A cluster placement group is a logical grouping of instances within a single Availability Zone that enables low-latency, high-bandwidth network communication between instances. By launching the Amazon EC2 instances within a cluster placement group, the tightly coupled node-to-node communication required by the workload can be achieved.

Option B, choosing dedicated instance tenancy, is not directly related to improving network performance. Dedicated instance tenancy ensures that the EC2 instances run on hardware dedicated exclusively to a single customer, but it does not specifically address low-latency network performance or high network throughput.

Option C, choosing an Elastic Inference accelerator, is used to attach low-cost GPU-powered inference acceleration to Amazon EC2 instances. While this option can improve performance for certain types of workloads, it is not directly related to network performance or node-to-node communication.

Option D, choosing the required capacity reservation, is used to reserve capacity for specific Amazon EC2 instances to ensure they are available when needed. This option is not directly related to improving network performance or node-to-node communication.

Therefore, option A, choosing a cluster placement group while launching Amazon EC2 instances, is the most appropriate proposal to improve the performance of the HPC workload.
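The two steps, creating the placement group and launching the nodes into it, can be sketched as boto3-style parameters. The AMI ID is a hypothetical placeholder; the instance type is just one network-optimized example:

```python
# Hypothetical parameters: create a cluster placement group, then launch the
# HPC nodes into it so they share low-latency, high-bandwidth networking
# within a single Availability Zone.
placement_group_params = {"GroupName": "hpc-cluster", "Strategy": "cluster"}

run_instances_params = {
    "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
    "InstanceType": "c5n.18xlarge",       # network-optimized instance type
    "MinCount": 8,
    "MaxCount": 8,
    "Placement": {"GroupName": "hpc-cluster"},
}

# With credentials configured:
# import boto3
# ec2 = boto3.client("ec2")
# ec2.create_placement_group(**placement_group_params)
# ec2.run_instances(**run_instances_params)
```

Launching all nodes in a single request, as above, also reduces the chance of insufficient-capacity errors when packing instances into one placement group.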

Question 1316

Exam Question

A company has a highly dynamic batch processing job that uses many Amazon EC2 instances to complete it. The job is stateless in nature, can be started and stopped at any given time with no negative impact, and typically takes upwards of 60 minutes total to complete. The company has asked a solutions architect to design a scalable and cost-effective solution that meets the requirements of the job.

What should the solutions architect recommend?

A. Implement EC2 Spot Instances.
B. Purchase EC2 Reserved Instances.
C. Implement EC2 On-Demand Instances.
D. Implement the processing on AWS Lambda.

Correct Answer

A. Implement EC2 Spot Instances.

Explanation

Based on the given requirements, the solutions architect should recommend implementing EC2 Spot Instances (option A) for the scalable and cost-effective solution.

EC2 Spot Instances let you use spare EC2 capacity at a steep discount (often up to 90 percent) compared to On-Demand prices. Since the batch processing job is stateless and can be started and stopped without any negative impact, Spot Instances are well-suited for this scenario.

Here’s why Spot Instances are a suitable choice:

1. Cost-effectiveness: Spot Instances offer the lowest pricing among the EC2 purchase options. Because you are consuming otherwise unused capacity, you can achieve substantial cost savings, which is especially beneficial for a job that runs for an hour or more at a time.

2. Scalability: Spot Instances allow you to launch and terminate instances as needed. You can scale the instance count up or down based on the workload and demand, ensuring the job completes within the desired timeframe.

3. Flexible start and stop: The stateless nature of the job means instances can be started and stopped without negative consequences. AWS can reclaim Spot capacity with a two-minute interruption notice, but since the job can simply be restarted on new instances, interruptions are easy to tolerate.

Option B (EC2 Reserved Instances) is not the most suitable choice in this case because reserved instances require a fixed-term commitment, which may not align with the dynamic and start/stop nature of the job.

Option C (EC2 On-Demand Instances) could be used, but they are generally more expensive than Spot Instances for long-duration workloads. If the job is run frequently and continuously, On-Demand Instances may be a viable option, but Spot Instances are still recommended for their cost savings.

Option D (AWS Lambda) is not ideal for batch processing jobs that typically take upwards of 60 minutes. AWS Lambda is better suited for short-duration, event-driven tasks where individual function invocations complete within seconds or a few minutes.

Therefore, the most appropriate recommendation in this scenario is to implement EC2 Spot Instances (option A).
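A Spot launch is an ordinary RunInstances request with market options attached. A sketch with a placeholder AMI ID:

```python
# Hypothetical launch parameters requesting Spot capacity for the batch job.
# With no MaxPrice set, you pay the current Spot price (never above On-Demand).
run_instances_params = {
    "ImageId": "ami-0123456789abcdef0",   # placeholder AMI with the job baked in
    "InstanceType": "m5.large",
    "MinCount": 1,
    "MaxCount": 10,                       # scale out across many workers
    "InstanceMarketOptions": {
        "MarketType": "spot",
        "SpotOptions": {
            # Terminate (not stop) on interruption; the job is stateless,
            # so a replacement Spot instance can simply pick up the work.
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
}

# With credentials configured:
# import boto3
# ec2 = boto3.client("ec2")
# ec2.run_instances(**run_instances_params)
```

In practice this request is often delegated to an Auto Scaling group or EC2 Fleet with a mixed-instances policy, which replaces interrupted Spot capacity automatically.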

Question 1317

Exam Question

A company uses Application Load Balancers (ALBs) in different AWS Regions. The ALBs receive inconsistent traffic that can spike and drop throughout the year. The company’s networking team needs to allow the IP addresses of the ALBs in the on-premises firewall to enable connectivity.

Which solution is the MOST scalable with minimal configuration changes?

A. Write an AWS Lambda script to get the IP addresses of the ALBs in different Regions. Update the on-premises firewall’s rule to allow the IP addresses of the ALBs.
B. Migrate all ALBs in different Regions to the Network Load Balancer (NLBs). Update the on-premises firewall’s rule to allow the Elastic IP addresses of all the NLBs.
C. Launch AWS Global Accelerator. Register the ALBs in different Regions to the accelerator. Update the on-premises firewall’s rule to allow static IP addresses associated with the accelerator.
D. Launch a Network Load Balancer (NLB) in one Region. Register the private IP addresses of the ALBs in different Regions with the NLB. Update the on-premises firewall’s rule to allow the Elastic IP address attached to the NLB.

Correct Answer

C. Launch AWS Global Accelerator. Register the ALBs in different Regions to the accelerator. Update the on-premises firewall’s rule to allow static IP addresses associated with the accelerator.

Explanation

The solution that is the MOST scalable with minimal configuration changes in this scenario is option C: Launch AWS Global Accelerator.

AWS Global Accelerator is a service that provides static IP addresses that act as a fixed entry point to your application endpoints in multiple AWS Regions. By registering the ALBs in different Regions with the accelerator, you can achieve global load balancing and fault tolerance while ensuring minimal changes to your configuration.

With AWS Global Accelerator, you can use static IP addresses associated with the accelerator as the allowed addresses in your on-premises firewall. This eliminates the need to update the firewall rule every time the IP addresses of the ALBs change or when you add or remove ALBs.

Additionally, AWS Global Accelerator uses Anycast routing, which allows traffic to be routed to the nearest available AWS edge location, ensuring low-latency and high-performance connectivity for your users.

Therefore, option C is the most scalable solution with minimal configuration changes in this case.
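The setup reduces to one accelerator plus one endpoint group per Region. A sketch with hypothetical ARNs and account numbers:

```python
# Hypothetical sketch of the Global Accelerator setup: one accelerator exposes
# two static anycast IPs, and each Region's ALB is registered in an endpoint group.
accelerator_params = {
    "Name": "alb-front-door",
    "IpAddressType": "IPV4",
    "Enabled": True,
}

endpoint_group_params = {
    # ListenerArn comes from a create_listener call on the accelerator.
    "ListenerArn": "arn:aws:globalaccelerator::123456789012:"
                   "accelerator/abcd1234/listener/0123abcd",
    "EndpointGroupRegion": "eu-west-1",
    "EndpointConfigurations": [
        {"EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:"
                       "loadbalancer/app/my-alb/0123456789abcdef"}
    ],
}

# With credentials configured (Global Accelerator's API lives in us-west-2):
# import boto3
# ga = boto3.client("globalaccelerator", region_name="us-west-2")
# acc = ga.create_accelerator(**accelerator_params)
```

The networking team then allows only the accelerator's two static IP addresses in the on-premises firewall; ALBs can be added or removed behind it without touching the firewall again.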

Question 1318

Exam Question

A company runs a static website through its on-premises data center. The company has multiple servers that handle all of its traffic, but on busy days, services are interrupted and the website becomes unavailable. The company wants to expand its presence globally and plans to triple its website traffic.

What should a solutions architect recommend to meet these requirements?

A. Migrate the website content to Amazon S3 and host the website on Amazon CloudFront.
B. Migrate the website content to Amazon EC2 instances with public Elastic IP addresses in multiple AWS Regions.
C. Migrate the website content to Amazon EC2 instances and vertically scale as the load increases.
D. Use Amazon Route 53 to distribute the loads across multiple Amazon CloudFront distributions for each AWS Region that exists globally.

Correct Answer

A. Migrate the website content to Amazon S3 and host the website on Amazon CloudFront.

Explanation

To expand the company’s website presence globally and handle triple the website traffic while ensuring high availability, a solutions architect should recommend option A: Migrate the website content to Amazon S3 and host the website on Amazon CloudFront.

Amazon S3 can host a static website directly, with virtually unlimited storage and request capacity and no servers to manage, which eliminates the outages the on-premises servers experience on busy days. Placing Amazon CloudFront in front of the S3 origin caches the content at edge locations around the world, so users everywhere are served with low latency. A single CloudFront distribution is already a global service; it does not need to be duplicated per Region.

Migrating the website content to Amazon EC2 instances with public Elastic IP addresses in multiple AWS Regions (option B) would require manually provisioning, managing, and scaling servers in every Region, which is complex and costly for a static site.

Migrating the website content to Amazon EC2 instances and vertically scaling as the load increases (option C) keeps a server-based architecture with a scaling ceiling and a single point of failure, and it does not address global presence.

Using Amazon Route 53 to distribute load across multiple CloudFront distributions per Region (option D) is unnecessary: CloudFront is a global content delivery network, so one distribution already serves users from edge locations worldwide, and this option does not even specify where the website content would be hosted.

Therefore, option A is the most suitable recommendation for the given requirements.
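A CloudFront distribution fronting an S3 origin is the central building block. A minimal sketch follows; the bucket name and reference string are hypothetical, and the real CreateDistribution call requires several additional fields beyond what is shown:

```python
# Minimal sketch of a CloudFront distribution config fronting an S3 origin.
# Bucket name and caller reference are placeholders; a real CreateDistribution
# request needs more fields (cache policy, TTLs, certificate settings, etc.).
distribution_config = {
    "CallerReference": "static-site-2024",
    "Comment": "Static website served from S3 via CloudFront",
    "Enabled": True,
    "DefaultRootObject": "index.html",
    "Origins": {
        "Quantity": 1,
        "Items": [{
            "Id": "s3-origin",
            "DomainName": "example-site-bucket.s3.amazonaws.com",
            "S3OriginConfig": {"OriginAccessIdentity": ""},
        }],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "s3-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
    },
}
```

With this in place, tripling the traffic requires no capacity planning at all: S3 and CloudFront scale transparently.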

Question 1319

Exam Question

A company needs a secure connection between its on-premises environment and AWS. This connection does not need high bandwidth and will handle a small amount of traffic. The connection should be set up quickly.

What is the MOST cost-effective method to establish this type of connection?

A. Implement a client VPN.
B. Implement AWS Direct Connect.
C. Implement a bastion host on Amazon EC2.
D. Implement an AWS Site-to-Site VPN connection.

Correct Answer

D. Implement an AWS Site-to-Site VPN connection.

Explanation

The most cost-effective method to establish a secure connection between an on-premises environment and AWS, which requires a small amount of traffic and a quick setup, would be option D: Implement an AWS Site-to-Site VPN connection.

An AWS Site-to-Site VPN connection allows you to establish an encrypted tunnel between your on-premises network and your Amazon Virtual Private Cloud (Amazon VPC) over the internet. This connection is established using VPN appliances or software VPN solutions on both ends. It provides secure communication between your on-premises environment and AWS without the need for dedicated physical connections or costly networking equipment.

Implementing a client VPN (option A) is suitable when individual users or devices need to establish a secure connection to AWS resources. However, for a connection between the entire on-premises environment and AWS, a Site-to-Site VPN is more appropriate and cost-effective.

AWS Direct Connect (option B) is a dedicated network connection service that provides a high-bandwidth, low-latency link between your on-premises network and AWS. It is designed for high-volume and consistent traffic requirements. Since the question states that the connection does not need high bandwidth and will handle a small amount of traffic, Direct Connect would not be the most cost-effective option.

Implementing a bastion host on Amazon EC2 (option C) is a method to securely access and manage instances within an Amazon VPC. It does not directly establish a secure connection between the on-premises environment and AWS, so it is not the best choice for this scenario.

Therefore, the most cost-effective method for establishing a secure connection with the given requirements is option D: Implement an AWS Site-to-Site VPN connection.
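The moving pieces of a Site-to-Site VPN can be sketched as boto3-style parameter dictionaries. All IDs, the ASN, and the IP address below are hypothetical placeholders:

```python
# Hypothetical sketch of a Site-to-Site VPN: a customer gateway representing
# the on-premises VPN device, and the VPN connection tying it to the VPC's
# virtual private gateway.
customer_gateway_params = {
    "BgpAsn": 65000,                 # on-premises device's BGP ASN (placeholder)
    "PublicIp": "203.0.113.12",      # placeholder public IP of the on-prem device
    "Type": "ipsec.1",
}

vpn_connection_params = {
    "CustomerGatewayId": "cgw-0123456789abcdef0",  # from create_customer_gateway
    "Type": "ipsec.1",
    "VpnGatewayId": "vgw-0123456789abcdef0",       # virtual private gateway on the VPC
}

# With credentials configured:
# import boto3
# ec2 = boto3.client("ec2")
# cgw = ec2.create_customer_gateway(**customer_gateway_params)
# vpn = ec2.create_vpn_connection(**vpn_connection_params)
```

Because everything runs over the public internet, the connection can be up within hours, with no circuit provisioning as Direct Connect would require.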

Question 1320

Exam Question

A solutions architect needs to ensure that all Amazon Elastic Block Store (Amazon EBS) volumes restored from unencrypted EBS snapshots are encrypted.

What should the solutions architect do to accomplish this?

A. Enable EBS encryption by default for the AWS Region.
B. Enable EBS encryption by default for the specific volumes.
C. Create a new volume and specify the symmetric customer master key (CMK) to use for encryption.
D. Create a new volume and specify the asymmetric customer master key (CMK) to use for encryption.

Correct Answer

A. Enable EBS encryption by default for the AWS Region.

Explanation

To ensure that all Amazon Elastic Block Store (Amazon EBS) volumes restored from unencrypted EBS snapshots are encrypted, the solutions architect should choose option A: Enable EBS encryption by default for the AWS Region.

Enabling EBS encryption by default for the AWS Region ensures that any new EBS volume created from a snapshot without encryption will be automatically encrypted. This setting applies to all volumes within that AWS Region and eliminates the need for manual encryption configuration for each volume.

Option B, enabling EBS encryption by default for specific volumes, would require identifying and configuring each volume individually, which could be error-prone and time-consuming. It would not provide a comprehensive solution for all restored volumes.

Option C, creating a new volume and specifying a symmetric customer master key (CMK) for encryption, is a manual approach that would require creating and managing keys for each volume. This option does not address the requirement for automatically encrypting restored volumes from unencrypted snapshots.

Option D, specifying an asymmetric customer master key (CMK) for encryption, is not relevant in this scenario. Asymmetric CMKs are typically used for encryption and decryption operations that require separate public and private keys, such as digital signatures and key exchange. In the context of EBS encryption, symmetric keys are used.

Therefore, the most appropriate solution is option A: Enable EBS encryption by default for the AWS Region.
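Enabling the default is a single per-Region API call; the boto3 calls below are commented out, and a tiny helper models the resulting encryption rule:

```python
# Enabling EBS encryption by default is a per-Region account setting.
# With credentials configured, one call per Region is enough:
#
#   import boto3
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   ec2.enable_ebs_encryption_by_default()
#   status = ec2.get_ebs_encryption_by_default()
#   # status["EbsEncryptionByDefault"] is now True

# Once the default is on, restoring a volume from an unencrypted snapshot
# still yields an encrypted volume. A tiny model of that rule:
def restored_volume_encrypted(snapshot_encrypted: bool, default_on: bool) -> bool:
    """A restored volume is encrypted if the snapshot was, or if the default is on."""
    return snapshot_encrypted or default_on

print(restored_volume_encrypted(snapshot_encrypted=False, default_on=True))  # True
```

Note the setting is regional, so it must be enabled in every Region where volumes are restored.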

Alex Lim is a certified IT Technical Support Architect with over 15 years of experience in designing, implementing, and troubleshooting complex IT systems and networks. He has worked for leading IT companies, such as Microsoft, IBM, and Cisco, providing technical support and solutions to clients across various industries and sectors. Alex has a bachelor’s degree in computer science from the National University of Singapore and a master’s degree in information security from the Massachusetts Institute of Technology. He is also the author of several best-selling books on IT technical support, such as The IT Technical Support Handbook and Troubleshooting IT Systems and Networks. Alex lives in Bandar, Johore, Malaysia with his wife and two children. You can reach him at [email protected] or follow him on Website | Twitter | Facebook
