The latest AWS Certified Solutions Architect – Professional (SAP-C02) certification practice exam questions and answers (Q&A) are available free of charge to help you prepare for the AWS Certified Solutions Architect – Professional (SAP-C02) exam and earn the AWS Certified Solutions Architect – Professional certification.
Table of Contents
- Question 661
- Exam Question
- Correct Answer
- Question 662
- Exam Question
- Correct Answer
- Explanation
- References
- Question 663
- Exam Question
- Correct Answer
- Question 664
- Exam Question
- Correct Answer
- Explanation
- References
- Question 665
- Exam Question
- Correct Answer
- Question 666
- Exam Question
- Correct Answer
- Explanation
- Question 667
- Exam Question
- Correct Answer
- Question 668
- Exam Question
- Correct Answer
- Question 669
- Exam Question
- Correct Answer
- Explanation
- References
- Question 670
- Exam Question
- Correct Answer
- Explanation
- Reference
Question 661
Exam Question
The engineering team at a retail company has deployed a fleet of EC2 instances under an Auto Scaling group (ASG). The instances under the ASG span two Availability Zones (AZ) within the eu-west-1 region. All the incoming requests are handled by an Application Load Balancer (ALB) that routes the requests to the EC2 instances under the ASG. A planned migration went wrong last week when two instances (belonging to AZ 1) were manually terminated and the desired capacity was reduced, causing the Availability Zones to become unbalanced. Later that day, another instance (belonging to AZ 2) was detected as unhealthy by the Application Load Balancer’s health check.
Which of the following options represent the correct outcomes for the aforesaid events? (Select two)
A. Amazon EC2 Auto Scaling creates a new scaling activity for terminating the unhealthy instance and then terminates it. Later, another scaling activity launches a new instance to replace the terminated instance.
B. As the Availability Zones got unbalanced, Amazon EC2 Auto Scaling will compensate by rebalancing the Availability Zones. When rebalancing, Amazon EC2 Auto Scaling terminates old instances before launching new instances, so that rebalancing does not cause extra instances to be launched.
C. As the Availability Zones got unbalanced, Amazon EC2 Auto Scaling will compensate by rebalancing the Availability Zones. When rebalancing, Amazon EC2 Auto Scaling launches new instances before terminating the old ones, so that rebalancing does not compromise the performance or availability of your application.
D. Amazon EC2 Auto Scaling creates a new scaling activity for launching a new instance to replace the unhealthy instance. Later, EC2 Auto Scaling creates a new scaling activity for terminating the unhealthy instance and then terminates it.
E. Amazon EC2 Auto Scaling creates a new scaling activity to terminate the unhealthy instance and launch the new instance simultaneously.
Correct Answer
A. Amazon EC2 Auto Scaling creates a new scaling activity for terminating the unhealthy instance and then terminates it. Later, another scaling activity launches a new instance to replace the terminated instance.
C. As the Availability Zones got unbalanced, Amazon EC2 Auto Scaling will compensate by rebalancing the Availability Zones. When rebalancing, Amazon EC2 Auto Scaling launches new instances before terminating the old ones, so that rebalancing does not compromise the performance or availability of your application.
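To make the replacement and rebalancing behavior concrete, here is a minimal boto3 sketch (the group name `my-asg` is hypothetical) that inspects an ASG's per-AZ instance health and shows how the AZRebalance process behind option C can be suspended during a planned migration and resumed afterward:

```python
import boto3

autoscaling = boto3.client("autoscaling")

ASG_NAME = "my-asg"  # hypothetical Auto Scaling group name

# Inspect the group's current AZ spread and per-instance health.
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[ASG_NAME]
)["AutoScalingGroups"][0]
for instance in group["Instances"]:
    print(instance["AvailabilityZone"],
          instance["HealthStatus"],
          instance["LifecycleState"])

# AZRebalance is the process that performs the launch-before-terminate
# rebalancing described in option C. Suspending it (for example, during
# a planned migration) prevents automatic rebalancing; resume it after.
autoscaling.suspend_processes(
    AutoScalingGroupName=ASG_NAME,
    ScalingProcesses=["AZRebalance"],
)
autoscaling.resume_processes(
    AutoScalingGroupName=ASG_NAME,
    ScalingProcesses=["AZRebalance"],
)
```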
Question 662
Exam Question
A company has deployed its database on an Amazon RDS for MySQL DB instance in the us-east-1 Region.
The company needs to make its data available to customers in Europe. The customers in Europe must have access to the same data as customers in the United States (US) and will not tolerate high application latency or stale data. The customers in Europe and the customers in the US need to write to the database. Both groups of customers need to see updates from the other group in real time.
Which solution will meet these requirements?
A. Create an Amazon Aurora MySQL replica of the RDS for MySQL DB instance. Pause application writes to the RDS DB instance. Promote the Aurora Replica to a standalone DB cluster. Reconfigure the application to use the Aurora database and resume writes. Add eu-west-1 as a secondary Region to the DB cluster. Enable write forwarding on the DB cluster. Deploy the application in eu-west-1. Configure the application to use the Aurora MySQL endpoint in eu-west-1.
B. Add a cross-Region replica in eu-west-1 for the RDS for MySQL DB instance. Configure the replica to replicate write queries back to the primary DB instance. Deploy the application in eu-west-1. Configure the application to use the RDS for MySQL endpoint in eu-west-1.
C. Copy the most recent snapshot from the RDS for MySQL DB instance to eu-west-1. Create a new RDS for MySQL DB instance in eu-west-1 from the snapshot. Configure MySQL logical replication from us-east-1 to eu-west-1. Enable write forwarding on the DB cluster. Deploy the application in eu-west-1.
Configure the application to use the RDS for MySQL endpoint in eu-west-1.
D. Convert the RDS for MySQL DB instance to an Amazon Aurora MySQL DB cluster. Add eu-west-1 as a secondary Region to the DB cluster. Enable write forwarding on the DB cluster. Deploy the application in eu-west-1. Configure the application to use the Aurora MySQL endpoint in eu-west-1.
Correct Answer
D. Convert the RDS for MySQL DB instance to an Amazon Aurora MySQL DB cluster. Add eu-west-1 as a secondary Region to the DB cluster. Enable write forwarding on the DB cluster. Deploy the application in eu-west-1. Configure the application to use the Aurora MySQL endpoint in eu-west-1.
Explanation
An Amazon Aurora global database consists of a primary DB cluster in one Region and up to five read-only secondary DB clusters in other Regions. Aurora replicates data to the secondary Regions with typical latency of under a second, so customers in Europe see updates from customers in the US in near real time, and reads in eu-west-1 are served locally with low latency. With write forwarding enabled on the secondary DB cluster, write statements issued in eu-west-1 are forwarded to the primary cluster in us-east-1 and then replicated back, so both groups of customers can write through their local endpoint without replication conflicts. Converting the existing RDS for MySQL DB instance to an Aurora MySQL DB cluster is a supported, low-effort migration path, after which eu-west-1 can be added as a secondary Region.
The other options are not correct because:
Option A reaches the same end state but takes a longer path: creating an Aurora Replica, pausing application writes, promoting the replica, and reconfiguring the application adds steps and downtime that the direct conversion in option D avoids.
Option B does not work because RDS for MySQL cross-Region read replicas are read-only; they cannot be configured to replicate write queries back to the primary DB instance.
Option C does not work because write forwarding is a feature of Aurora global databases, not of RDS for MySQL, and manually configured logical replication between Regions is operationally complex and does not guarantee the real-time, consistent view that both groups of customers require.
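As a rough sketch of the setup in option D, the following boto3 calls promote an already converted Aurora MySQL cluster into a global database and add eu-west-1 as a secondary Region with write forwarding enabled. All identifiers and the account number are hypothetical, and supporting parameters (engine version, subnet group, DB instances for the secondary cluster) are omitted:

```python
import boto3

# Primary Region: promote the converted Aurora MySQL cluster into a
# global database. The identifiers and account number are hypothetical.
rds_us = boto3.client("rds", region_name="us-east-1")
rds_us.create_global_cluster(
    GlobalClusterIdentifier="retail-global",
    SourceDBClusterIdentifier=(
        "arn:aws:rds:us-east-1:111111111111:cluster:retail-primary"
    ),
)

# Secondary Region: add eu-west-1 with write forwarding enabled, so
# writes issued in Europe are forwarded to the primary in us-east-1.
rds_eu = boto3.client("rds", region_name="eu-west-1")
rds_eu.create_db_cluster(
    DBClusterIdentifier="retail-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="retail-global",
    EnableGlobalWriteForwarding=True,
)
```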
References
- AWS > Documentation > Amazon Aurora > User Guide > Using Amazon Aurora global databases
- AWS > Documentation > Amazon Aurora > User Guide > Using write forwarding in an Amazon Aurora global database
- AWS > Documentation > Amazon Aurora > User Guide > Migrating data from a MySQL DB instance to an Amazon Aurora MySQL DB cluster
Question 663
Exam Question
An online florist and gift retailer serves customers in the US as well as Europe. The company recently decided to go all-in on AWS and use the platform to host its website, order and stock management systems and fulfillment applications. The company wants to migrate its on-premises Oracle database to Aurora MySQL. The company has hired an AWS Certified Solutions Architect Professional to carry out the migration with minimal downtime using AWS DMS. The company has mandated that the migration must have minimal impact on the performance of the source database and the Solutions Architect must validate that the data was migrated accurately from the source to the target before the cutover.
Which of the following solutions will MOST effectively address this use-case?
A. Use the table metrics of the DMS task to verify the statistics for tables being migrated including the DDL statements completed.
B. Use AWS Schema Conversion Tool for the migration task so it can compare the source and target data and report any mismatches.
C. Configure DMS premigration assessment on the migration task so the assessment can compare the source and target data and report any mismatches.
D. Configure DMS data validation on the migration task so it can compare the source and target data for the DMS task and report any mismatches.
Correct Answer
D. Configure DMS data validation on the migration task so it can compare the source and target data for the DMS task and report any mismatches.
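For illustration, a hedged boto3 sketch of a DMS task with data validation enabled follows; the task name and the placeholder ARNs are hypothetical, and `full-load-and-cdc` is used so ongoing replication keeps downtime minimal:

```python
import json
import boto3

dms = boto3.client("dms")

# Task settings that enable data validation: DMS compares source and
# target rows after migration and reports any mismatches per table.
task_settings = {
    "ValidationSettings": {
        "EnableValidation": True,
        "ThreadCount": 5,
    }
}

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora",
    SourceEndpointArn="arn:aws:dms:...:endpoint:SOURCE",   # placeholder
    TargetEndpointArn="arn:aws:dms:...:endpoint:TARGET",   # placeholder
    ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE", # placeholder
    MigrationType="full-load-and-cdc",  # full load plus ongoing replication
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
    ReplicationTaskSettings=json.dumps(task_settings),
)
```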
Question 664
Exam Question
A company is migrating a document processing workload to AWS. The company has updated many applications to natively use the Amazon S3 API to store, retrieve, and modify documents that a processing server generates at a rate of approximately 5 documents every second. After the document processing is finished, customers can download the documents directly from Amazon S3.
During the migration, the company discovered that it could not immediately update the processing server that generates many documents to support the S3 API. The server runs on Linux and requires fast local access to the files that the server generates and modifies. When the server finishes processing, the files must be available to the public for download within 30 minutes.
Which solution will meet these requirements with the LEAST amount of effort?
A. Migrate the application to an AWS Lambda function. Use the AWS SDK for Java to generate, modify, and access the files that the company stores directly in Amazon S3.
B. Set up an Amazon S3 File Gateway and configure a file share that is linked to the document store.
Mount the file share on an Amazon EC2 instance by using NFS. When changes occur in Amazon S3, initiate a RefreshCache API call to update the S3 File Gateway.
C. Configure Amazon FSx for Lustre with an import and export policy. Link the new file system to an S3 bucket. Install the Lustre client and mount the document store to an Amazon EC2 instance by using NFS.
D. Configure AWS DataSync to connect to an Amazon EC2 instance. Configure a task to synchronize the generated files to and from Amazon S3.
Correct Answer
C. Configure Amazon FSx for Lustre with an import and export policy. Link the new file system to an S3 bucket. Install the Lustre client and mount the document store to an Amazon EC2 instance by using NFS.
Explanation
Amazon FSx for Lustre is a fully managed service that provides cost-effective, high-performance, scalable storage for compute workloads. Powered by Lustre, the world’s most popular high-performance file system, FSx for Lustre offers shared storage with sub-ms latencies, up to terabytes per second of throughput, and millions of IOPS. FSx for Lustre file systems can also be linked to Amazon Simple Storage Service (S3) buckets, allowing you to access and process data concurrently from both a high-performance file system and from the S3 API.
The company should configure Amazon FSx for Lustre with an import and export policy. The company should link the new file system to an S3 bucket. The company should install the Lustre client and mount the document store to an Amazon EC2 instance. This solution will meet the requirements with the least amount of effort because Amazon FSx for Lustre is a fully managed service that provides a high-performance file system optimized for fast processing of workloads such as machine learning, high performance computing, video processing, financial modeling, and electronic design automation1. Amazon FSx for Lustre can be linked to an S3 bucket and can import data from and export data to the bucket2. The import and export policy can be configured to automatically import new or changed objects from S3 and export new or changed files to S33. This will ensure that the files are available to the public for download within 30 minutes. Linux clients access the file system by installing the open-source Lustre client and mounting it like a local POSIX file system.
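A minimal sketch of such a file system, assuming a hypothetical subnet ID and bucket name, might look like the following; `AutoImportPolicy` set to `NEW_CHANGED` keeps the file system in sync with new and changed S3 objects:

```python
import boto3

fsx = boto3.client("fsx")

# Lustre file system linked to the document bucket. New and changed S3
# objects are imported automatically; locally written files can be
# exported back to the bucket with a data repository task.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                    # GiB (minimum shown)
    SubnetIds=["subnet-0123456789abcdef0"],  # hypothetical subnet
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_1",
        "PerUnitStorageThroughput": 50,      # MB/s per TiB
        "ImportPath": "s3://document-store-bucket",  # hypothetical bucket
        "ExportPath": "s3://document-store-bucket",
        "AutoImportPolicy": "NEW_CHANGED",
    },
)
```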
The other options are not correct because:
Migrating the application to an AWS Lambda function would require a lot of effort and may not be feasible for the existing server that generates many documents. Lambda functions have limitations on execution time, memory, disk space, and network bandwidth.
Setting up an Amazon S3 File Gateway would not work because S3 File Gateway does not support write-back caching, which means that files written to the file share are uploaded to S3 immediately and are not available locally until they are downloaded again. This would not provide fast local access to the files that the server generates and modifies.
Configuring AWS DataSync to connect to an Amazon EC2 instance would not meet the requirement of making the files available to the public for download within 30 minutes. DataSync is a service that transfers data between on-premises storage systems and AWS storage services over the internet or AWS Direct Connect. DataSync tasks can be scheduled to run at specific times or intervals, but they are not triggered by file changes.
References
- Amazon FSx for Lustre
- AWS > Documentation > Amazon FSx > Lustre User Guide > What is Amazon FSx for Lustre?
- AWS > Documentation > Amazon FSx > Lustre User Guide > Mounting Amazon FSx file systems from on-premises or a peered Amazon VPC
- AWS > Documentation > AWS Lambda > Developer Guide > Lambda quotas
- AWS > Documentation > AWS Storage Gateway > AWS Storage Gateway Documentation
- AWS > Documentation > AWS DataSync > User Guide > What is AWS DataSync?
Question 665
Exam Question
A health and beauty online retailer ships thousands of orders daily to 85 countries worldwide with more than 25,000 items and carries inventory from 600 different manufacturers. The company processes thousands of online orders each day from these countries and its website is localized in 15 languages. As a global online business, the company’s website faces continual security threats and challenges in the form of HTTP flood attacks, distributed denial of service (DDoS) attacks, rogue robots that flood its website with traffic, SQL-injection attacks designed to extract data and cross-site scripting attacks (XSS). Most of these attacks originate from certain countries. Therefore, the company wants to block access to its application from specific countries; however, the company wants to allow its remote development team (from one of the blocked countries) to have access to the application. The application is deployed on EC2 instances running under an Application Load Balancer (ALB) with AWS WAF.
As a Solutions Architect Professional, which of the following solutions would you suggest as the BEST fit for the given use-case? (Select two)
A. Create a deny rule for the blocked countries in the NACL associated with each of the EC2 instances.
B. Use a WAF IP set statement that specifies the IP addresses that you want to allow through.
C. Use a WAF geo match statement listing the countries that you want to block.
D. Use an ALB geo match statement listing the countries that you want to block.
E. Use an ALB IP set statement that specifies the IP addresses that you want to allow through.
Correct Answer
B. Use a WAF IP set statement that specifies the IP addresses that you want to allow through.
C. Use a WAF geo match statement listing the countries that you want to block.
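The following boto3 sketch shows how the two statements can be combined in a single rule: block requests from the listed countries unless the source address is in the development team's IP set. The IP set name, the CIDR (a documentation range), and the country codes are placeholder examples:

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="eu-west-1")  # region of the ALB

# IP set holding the remote development team's addresses.
ip_set = wafv2.create_ip_set(
    Name="dev-team-allow",
    Scope="REGIONAL",              # REGIONAL scope for an ALB
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.0/24"],  # documentation range as a placeholder
)

# Rule logic: block requests from the listed countries UNLESS the
# source address is in the dev-team IP set.
block_rule = {
    "Name": "block-countries-except-dev-team",
    "Priority": 0,
    "Action": {"Block": {}},
    "Statement": {
        "AndStatement": {
            "Statements": [
                {"GeoMatchStatement": {"CountryCodes": ["CN", "RU"]}},
                {"NotStatement": {"Statement": {
                    "IPSetReferenceStatement": {
                        "ARN": ip_set["Summary"]["ARN"]
                    }
                }}},
            ]
        }
    },
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockedCountries",
    },
}
# block_rule would then go in the Rules list of wafv2.create_web_acl(...)
# for the web ACL that is associated with the ALB.
```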
Question 666
Exam Question
A company that provides image storage services wants to deploy a customer-facing solution to AWS. Millions of individual customers will use the solution. The solution will receive batches of large image files, resize the files, and store the files in an Amazon S3 bucket for up to 6 months.
The solution must handle significant variance in demand. The solution must also be reliable at enterprise scale and have the ability to rerun processing jobs in the event of failure.
Which solution will meet these requirements MOST cost-effectively?
A. Use AWS Step Functions to process the S3 event that occurs when a user stores an image. Run an AWS Lambda function that resizes the image in place and replaces the original file in the S3 bucket. Create an S3 Lifecycle expiration policy to expire all stored images after 6 months.
B. Use Amazon EventBridge to process the S3 event that occurs when a user uploads an image. Run an AWS Lambda function that resizes the image in place and replaces the original file in the S3 bucket.
Create an S3 Lifecycle expiration policy to expire all stored images after 6 months.
C. Use S3 Event Notifications to invoke an AWS Lambda function when a user stores an image. Use the Lambda function to resize the image in place and to store the original file in the S3 bucket. Create an S3 Lifecycle policy to move all stored images to S3 Standard-Infrequent Access (S3 Standard-IA) after 6 months.
D. Use Amazon Simple Queue Service (Amazon SQS) to process the S3 event that occurs when a user stores an image. Run an AWS Lambda function that resizes the image and stores the resized file in an S3 bucket that uses S3 Standard-Infrequent Access (S3 Standard-IA). Create an S3 Lifecycle policy to move all stored images to S3 Glacier Deep Archive after 6 months.
Correct Answer
C. Use S3 Event Notifications to invoke an AWS Lambda function when a user stores an image. Use the Lambda function to resize the image in place and to store the original file in the S3 bucket. Create an S3 Lifecycle policy to move all stored images to S3 Standard-Infrequent Access (S3 Standard-IA) after 6 months.
Explanation
S3 Event Notifications is a feature that allows users to receive notifications when certain events happen in an S3 bucket, such as object creation or deletion1. Users can configure S3 Event Notifications to invoke an AWS Lambda function when a user stores an image in the bucket. Lambda is a serverless compute service that runs code in response to events and automatically manages the underlying compute resources2. The Lambda function can resize the image in place and store the original file in the same S3 bucket. This way, the solution can handle significant variance in demand and be reliable at enterprise scale. The solution can also rerun processing jobs in the event of failure by using the retry and dead-letter queue features of Lambda2.
S3 Lifecycle is a feature that allows users to manage their objects so that they are stored cost-effectively throughout their lifecycle3. Users can create an S3 Lifecycle policy to move all stored images to S3 Standard-Infrequent Access (S3 Standard-IA) after 6 months. S3 Standard-IA is a storage class designed for data that is accessed less frequently, but requires rapid access when needed. It offers a lower storage cost than S3 Standard, but charges a retrieval fee. Therefore, moving the images to S3 Standard-IA after 6 months can reduce the storage cost for the solution.
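A minimal sketch of such a lifecycle policy, assuming a hypothetical bucket name `image-store-bucket` and using 180 days to approximate 6 months:

```python
import boto3

s3 = boto3.client("s3")

# Transition every object to S3 Standard-IA roughly 6 months (180 days)
# after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="image-store-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-standard-ia-after-6-months",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to all objects
            "Transitions": [
                {"Days": 180, "StorageClass": "STANDARD_IA"}
            ],
        }]
    },
)
```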
Option A is incorrect because using AWS Step Functions to process the S3 event that occurs when a user stores an image is not necessary or cost-effective. AWS Step Functions is a service that lets users coordinate multiple AWS services into serverless workflows. However, for this use case, a single Lambda function can handle the image resizing task without needing Step Functions.
Option B is incorrect because using Amazon EventBridge to process the S3 event that occurs when a user uploads an image is not necessary or cost-effective. Amazon EventBridge is a serverless event bus service that makes it easy to connect applications with data from a variety of sources. However, for this use case, S3 Event Notifications can directly invoke the Lambda function without needing EventBridge.
Option D is incorrect because using Amazon Simple Queue Service (Amazon SQS) to process the S3 event that occurs when a user stores an image is not necessary or cost-effective. Amazon SQS is a fully managed message queuing service that enables users to decouple and scale microservices, distributed systems, and serverless applications. However, for this use case, S3 Event Notifications can directly invoke the Lambda function without needing SQS. Moreover, storing the resized file in an S3 bucket that uses S3 Standard-IA will incur a retrieval fee every time the file is accessed, which may not be cost-effective for frequently accessed files.
Question 667
Exam Question
A leading pharmaceutical company has significant investments in running Oracle and PostgreSQL services on Amazon RDS which provide their scientists with near real-time analysis of millions of rows of manufacturing data generated by continuous manufacturing equipment with 1,600 data points per row. The business analytics team has been running ad-hoc queries on these databases to prepare daily reports for senior management. The engineering team has observed that the database performance takes a hit whenever these reports are run by the analytics team. To facilitate the business analytics reporting, the engineering team now wants to replicate this data with high availability and consolidate these databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift.
As a Solutions Architect Professional, which of the following would you recommend as the MOST resource-efficient solution that requires the LEAST amount of development time without the need to manage the underlying infrastructure?
A. Use AWS Glue to replicate the data from the databases into Amazon Redshift.
B. Use Amazon EMR to replicate the data from the databases into Amazon Redshift.
C. Use Amazon Kinesis Data Streams to replicate the data from the databases into Amazon Redshift.
D. Use AWS Database Migration Service to replicate the data from the databases into Amazon Redshift.
Correct Answer
D. Use AWS Database Migration Service to replicate the data from the databases into Amazon Redshift.
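As a brief illustration of the DMS side of this solution, a hedged boto3 sketch of a Redshift target endpoint follows; the endpoint identifier, server name, and credentials are placeholders:

```python
import boto3

dms = boto3.client("dms")

# Target endpoint pointing at the Redshift warehouse; a replication task
# with ongoing replication then streams changes from the RDS sources.
dms.create_endpoint(
    EndpointIdentifier="redshift-target",
    EndpointType="target",
    EngineName="redshift",
    ServerName="analytics.example.us-east-1.redshift.amazonaws.com",
    Port=5439,
    DatabaseName="analytics",
    Username="dms_user",
    Password="example-password",  # use Secrets Manager in practice
)
```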
Question 668
Exam Question
A leading telecommunications company has built a portfolio of Software-as-a-Service applications focusing on voice, video, chat, contact center, and enterprise-class API solutions powered by one global cloud communications platform. As part of this strategy, they have developed their multi-cloud storage (MCS) solution on Amazon RDS for MySQL but it’s running into performance issues despite using Read Replicas. The company has hired you as an AWS Certified Solutions Architect Professional to address these performance-related challenges on an urgent basis without moving away from the underlying relational database schema. The company has branch offices across the world, and it needs the solution to work on a global scale.
Which of the following will you recommend as the MOST cost-effective and high-performance solution?
A. Spin up EC2 instances in each AWS region, install MySQL databases and migrate the existing data into these new databases.
B. Use Amazon Aurora Global Database to enable fast local reads with low latency in each region.
C. Use Amazon DynamoDB Global Tables to provide fast, local, read and write performance in each region.
D. Spin up a Redshift cluster in each AWS region. Migrate the existing data into Redshift clusters.
Correct Answer
B. Use Amazon Aurora Global Database to enable fast local reads with low latency in each region.
Question 669
Exam Question
A company has a web application that allows users to upload short videos. The videos are stored on Amazon EBS volumes and analyzed by custom recognition software for categorization.
The website contains static content that has variable traffic with peaks in certain months. The architecture consists of Amazon EC2 instances running in an Auto Scaling group for the web application and EC2 instances running in an Auto Scaling group to process an Amazon SQS queue. The company wants to re-architect the application to reduce operational overhead using AWS managed services where possible and remove dependencies on third-party software.
Which solution meets these requirements?
A. Use Amazon ECS containers for the web application and Spot Instances for the Auto Scaling group that processes the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.
B. Store the uploaded videos in Amazon EFS and mount the file system to the EC2 instances for the web application. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
C. Host the web application in Amazon S3. Store the uploaded videos in Amazon S3. Use S3 event notifications to publish events to the SQS queue. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
D. Use AWS Elastic Beanstalk to launch EC2 instances in an Auto Scaling group for the web application and launch a worker environment to process the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.
Correct Answer
C. Host the web application in Amazon S3. Store the uploaded videos in Amazon S3. Use S3 event notifications to publish events to the SQS queue. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
Explanation
Option C is correct because hosting the web application in Amazon S3, storing the uploaded videos in Amazon S3, and using S3 event notifications to publish events to the SQS queue reduces the operational overhead of managing EC2 instances and EBS volumes. Amazon S3 can serve static content such as HTML, CSS, JavaScript, and media files directly from S3 buckets. Amazon S3 can also trigger AWS Lambda functions through S3 event notifications when new objects are created or existing objects are updated or deleted. AWS Lambda can process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos. This solution eliminates the need for custom recognition software and third-party dependencies.
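A minimal sketch of the Lambda function at the heart of this pipeline follows. It assumes an SQS event-source mapping delivers S3 event notifications in each record body, and it starts asynchronous video label detection with Amazon Rekognition; results would be fetched later with `get_label_detection`:

```python
import json
import urllib.parse

import boto3

rekognition = boto3.client("rekognition")

def handler(event, context):
    """Invoked by SQS; each record body wraps an S3 event notification."""
    for record in event["Records"]:
        s3_event = json.loads(record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            # S3 event keys are URL-encoded.
            key = urllib.parse.unquote_plus(s3_record["s3"]["object"]["key"])
            # Video label detection is asynchronous; poll or use an SNS
            # topic, then fetch results with get_label_detection(JobId).
            response = rekognition.start_label_detection(
                Video={"S3Object": {"Bucket": bucket, "Name": key}}
            )
            print(f"Started job {response['JobId']} for s3://{bucket}/{key}")
```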
References
- AWS > Documentation > Amazon EC2 > User Guide for Linux Instances > Spot Instances
- Amazon EFS Pricing
- AWS > Documentation > Amazon Simple Storage Service (S3) > User Guide > Hosting a static website using Amazon S3
- AWS > Documentation > Amazon Simple Storage Service (S3) > User Guide > Amazon S3 Event Notifications
- AWS > Documentation > Amazon Rekognition > Developer Guide > What is Amazon Rekognition?
- AWS > Documentation > AWS Elastic Beanstalk > Developer Guide > What is AWS Elastic Beanstalk?
Question 670
Exam Question
A company has multiple business units that each have separate accounts on AWS. Each business unit manages its own network with several VPCs that have CIDR ranges that overlap. The company’s marketing team has created a new internal application and wants to make the application accessible to all the other business units.
The solution must use private IP addresses only.
Which solution will meet these requirements with the LEAST operational overhead?
A. Instruct each business unit to add a unique secondary CIDR range to the business unit’s VPC. Peer the VPCs and use a private NAT gateway in the secondary range to route traffic to the marketing team.
B. Create an Amazon EC2 instance to serve as a virtual appliance in the marketing account’s VPC. Create an AWS Site-to-Site VPN connection between the marketing team and each business unit’s VPC.
Perform NAT where necessary.
C. Create an AWS PrivateLink endpoint service to share the marketing application. Grant permission to specific AWS accounts to connect to the service. Create interface VPC endpoints in other accounts to access the application by using private IP addresses.
D. Create a Network Load Balancer (NLB) in front of the marketing application in a private subnet. Create an API Gateway API. Use the Amazon API Gateway private integration to connect the API to the NLB. Activate IAM authorization for the API. Grant access to the accounts of the other business units.
Correct Answer
C. Create an AWS PrivateLink endpoint service to share the marketing application. Grant permission to specific AWS accounts to connect to the service. Create interface VPC endpoints in other accounts to access the application by using private IP addresses.
Explanation
With AWS PrivateLink, the marketing team can create an endpoint service to share their internal application with other accounts securely using private IP addresses. They can grant permission to specific AWS accounts to connect to the service and create interface VPC endpoints in the other accounts to access the application by using private IP addresses. Because traffic flows through interface endpoints that receive private IP addresses from each consumer’s own subnets, the overlapping CIDR ranges do not matter. This option does not require any changes to the network of the other business units, and it does not require VPC peering or NAT. This solution is both scalable and secure.
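A hedged boto3 sketch of both sides of the PrivateLink setup follows; the NLB ARN, account IDs, VPC ID, and subnet ID are all hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Provider side (marketing account): expose the application, fronted by
# a Network Load Balancer, as a PrivateLink endpoint service.
service = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111111111111:"
        "loadbalancer/net/marketing/abc123"
    ],
    AcceptanceRequired=False,
)
service_id = service["ServiceConfiguration"]["ServiceId"]
service_name = service["ServiceConfiguration"]["ServiceName"]

# Allow a specific business-unit account to connect.
ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=service_id,
    AddAllowedPrincipals=["arn:aws:iam::222222222222:root"],
)

# Consumer side (run with the business unit's credentials): create an
# interface endpoint. Overlapping CIDRs do not matter because the
# endpoint gets private IPs from the consumer's own subnets.
consumer_ec2 = boto3.client("ec2")
consumer_ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234",
    ServiceName=service_name,
    SubnetIds=["subnet-0def5678"],
)
```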