Learn how to migrate a blog platform to AWS using Amazon EFS for shared file storage and AWS Snowball Edge for transferring large archival data in this Amazon AWS Certified DevOps Engineer – Professional exam question.
Question
A company is migrating its blog platform to AWS. The company’s on-premises servers connect to AWS through an AWS Site-to-Site VPN connection. The blog content is updated several times a day by multiple authors and is served from a file share on a network-attached storage (NAS) server.
The company needs to migrate the blog platform without delaying the content updates. The company has deployed Amazon EC2 instances across multiple Availability Zones to run the blog platform behind an Application Load Balancer. The company also needs to move 200 TB of archival data from its on-premises servers to Amazon S3 as soon as possible.
Which combination of steps will meet these requirements? (Choose two.)
A. Create a weekly cron job in Amazon EventBridge. Use the cron job to invoke an AWS Lambda function to update the EC2 instances from the NAS server.
B. Configure an Amazon Elastic Block Store (Amazon EBS) Multi-Attach volume for the EC2 instances to share for content access. Write code to synchronize the EBS volume with the NAS server weekly.
C. Mount an Amazon Elastic File System (Amazon EFS) file system to the on-premises servers to act as the NAS server. Copy the blog data to the EFS file system. Mount the EFS file system to the EC2 instances to serve the content.
D. Order an AWS Snowball Edge Storage Optimized device. Copy the static data artifacts to the device. Ship the device to AWS.
E. Order an AWS Snowcone SSD device. Copy the static data artifacts to the device. Ship the device to AWS.
Answer
C. Mount an Amazon Elastic File System (Amazon EFS) file system to the on-premises servers to act as the NAS server. Copy the blog data to the EFS file system. Mount the EFS file system to the EC2 instances to serve the content.
D. Order an AWS Snowball Edge Storage Optimized device. Copy the static data artifacts to the device. Ship the device to AWS.
Explanation
The correct combination of steps to meet the company’s requirements is:
C. Mount an Amazon Elastic File System (Amazon EFS) file system to the on-premises servers to act as the NAS server. Copy the blog data to the EFS file system. Mount the EFS file system to the EC2 instances to serve the content.
Explanation: Amazon EFS provides a scalable, shared NFS file system that can be mounted simultaneously by the on-premises servers (over the Site-to-Site VPN) and by the EC2 instances running the blog platform. By mounting EFS on the on-premises servers in place of the NAS share, authors can keep publishing while the blog data is copied to EFS, and the EC2 instances in each Availability Zone mount the same file system to serve the content. This allows a seamless cutover without delaying content updates.
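For illustration only (not part of the exam answer), a minimal provisioning sketch with boto3 follows. The creation token, subnet IDs, and security group ID are hypothetical placeholders; mount targets must cover each Availability Zone that hosts blog EC2 instances, and the on-premises servers reach the same file system over the Site-to-Site VPN with a standard NFSv4.1 mount.

```python
import time
import boto3

# A minimal sketch, assuming default AWS credentials and region are configured.
# Subnet and security group IDs below are hypothetical placeholders.
efs = boto3.client("efs")

# Create the shared file system that replaces the on-premises NAS share.
fs = efs.create_file_system(
    CreationToken="blog-content-migration",  # idempotency token (hypothetical)
    PerformanceMode="generalPurpose",
    Encrypted=True,
)
fs_id = fs["FileSystemId"]

# The file system must reach the 'available' state before mount targets can be created.
while efs.describe_file_systems(FileSystemId=fs_id)["FileSystems"][0]["LifeCycleState"] != "available":
    time.sleep(5)

# One mount target per Availability Zone that hosts blog EC2 instances.
for subnet_id in ["subnet-aaa111", "subnet-bbb222"]:  # hypothetical subnets
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],  # must allow NFS (TCP 2049)
    )

# On-premises servers and EC2 instances then mount the same file system, e.g.:
#   sudo mount -t nfs4 -o nfsvers=4.1 <fs_id>.efs.<region>.amazonaws.com:/ /mnt/blog
print(f"EFS file system {fs_id} is ready")
```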
D. Order an AWS Snowball Edge Storage Optimized device. Copy the static data artifacts to the device. Ship the device to AWS.
Explanation: AWS Snowball Edge moves large data sets to AWS offline on a physical device, bypassing the limited bandwidth of the VPN connection. Since the company needs to move 200 TB of archival data to Amazon S3 as soon as possible, Snowball Edge Storage Optimized devices are the fastest practical option: the company copies the static data artifacts to the device, ships it back to AWS, and AWS imports the data into the designated S3 bucket.
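A hedged sketch of creating the import job with boto3 is shown below; the shipping address details, bucket ARN, and IAM role ARN are hypothetical placeholders. Note that a single 80 TB-class Storage Optimized device cannot hold 200 TB, so in practice the transfer would be split across multiple devices or use a larger-capacity model.

```python
import boto3

# A minimal sketch, assuming default AWS credentials and region are configured.
# All identifiers below are hypothetical placeholders.
snowball = boto3.client("snowball")

# Register the on-premises shipping address for the device.
address = snowball.create_address(
    Address={
        "Name": "Data Center Ops",         # hypothetical contact
        "Company": "Example Corp",
        "Street1": "100 Example Street",
        "City": "Seattle",
        "StateOrProvince": "WA",
        "PostalCode": "98101",
        "Country": "US",
        "PhoneNumber": "+1-206-555-0100",
    }
)

# Create an import job targeting the archival S3 bucket.
job = snowball.create_job(
    JobType="IMPORT",
    SnowballType="EDGE_S",  # Snowball Edge Storage Optimized
    Resources={
        "S3Resources": [
            {"BucketArn": "arn:aws:s3:::example-blog-archive"}  # hypothetical bucket
        ]
    },
    AddressId=address["AddressId"],
    RoleARN="arn:aws:iam::123456789012:role/SnowballImportRole",  # hypothetical role
    ShippingOption="EXPRESS",
    Description="200 TB archival data import",
)
print("Snowball import job created:", job["JobId"])
```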
Options A and B are incorrect because a weekly synchronization cannot keep up with content that is updated several times a day, and an Amazon EBS Multi-Attach volume can only be attached to instances in a single Availability Zone, which conflicts with the multi-AZ deployment. Option E is incorrect because AWS Snowcone is designed for small transfer jobs: 8 TB of usable storage on the HDD model and 14 TB on the SSD model, far short of the 200 TB of archival data.
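A back-of-the-envelope calculation makes the "as soon as possible" argument concrete. Assuming a hypothetical dedicated 1 Gbps link at full utilization, which is optimistic for a Site-to-Site VPN, an online transfer of 200 TB would take more than two weeks:

```python
# Back-of-the-envelope: 200 TB over a hypothetical 1 Gbps link at 100% utilization.
data_bits = 200 * 10**12 * 8     # 200 TB expressed in bits
link_bps = 1 * 10**9             # 1 Gbps

seconds = data_bits / link_bps
days = seconds / 86_400
print(f"{days:.1f} days")        # ~18.5 days, before protocol overhead and retries
```

Real-world VPN throughput is typically well below that, pushing an online transfer toward a month or more, whereas a Snowball device round trip is usually on the order of a week.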
Amazon AWS Certified DevOps Engineer – Professional DOP-C02 certification exam practice question and answer (Q&A) with detailed explanation and references, available free, to help you pass the Amazon AWS Certified DevOps Engineer – Professional DOP-C02 exam and earn the Amazon AWS Certified DevOps Engineer – Professional DOP-C02 certification.