
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 19

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free. They are intended to help you pass the SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.

Question 901

Exam Question

A company has multiple AWS accounts with applications deployed in the us-west-2 Region. Application logs are stored within Amazon S3 buckets in each account. The company wants to build a centralized log analysis solution that uses a single S3 bucket. Logs must not leave us-west-2, and the company wants to incur minimal operational overhead.

Which solution meets these requirements and is MOST cost-effective?

A. Create an S3 Lifecycle policy that copies the objects from one of the application S3 buckets to the centralized S3 bucket.
B. Use S3 Same-Region Replication to replicate logs from the S3 buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
C. Write a script that uses the PutObject API operation every day to copy the entire contents of the buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
D. Write AWS Lambda functions in these accounts that are triggered every time logs are delivered to the S3 buckets (s3:ObjectCreated:* event). Copy the logs to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.

Correct Answer

D. Write AWS Lambda functions in these accounts that are triggered every time logs are delivered to the S3 buckets (s3:ObjectCreated:* event). Copy the logs to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.

Explanation

This solution allows for a centralized log analysis solution while keeping the logs within the us-west-2 Region and incurring minimal operational overhead. By using AWS Lambda functions, you can trigger the functions whenever logs are delivered to the S3 buckets. The Lambda functions can then copy the logs to another S3 bucket in the us-west-2 Region, which will serve as the centralized location for log analysis.

This approach ensures that logs remain within the desired region and provides the flexibility to perform any required transformations or analysis on the logs before storing them in the centralized S3 bucket. Additionally, using Lambda functions allows for automation and scalability, as the functions can be easily configured to handle logs from multiple accounts and adapt to changing log volumes.
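As a sketch of option D, the copy step can be a small Lambda handler subscribed to the s3:ObjectCreated:* event. The bucket names and the CENTRAL_BUCKET environment variable below are hypothetical; the S3 `copy_object` call is the standard boto3 API.

```python
import os
import urllib.parse

def parse_s3_event(event):
    """Extract (bucket, key) pairs from an s3:ObjectCreated:* event payload."""
    pairs = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        # S3 URL-encodes object keys in event notifications (spaces become '+').
        key = urllib.parse.unquote_plus(s3["object"]["key"])
        pairs.append((s3["bucket"]["name"], key))
    return pairs

def handler(event, context):
    """Copy each newly created log object into the centralized bucket."""
    import boto3  # imported here so parse_s3_event stays SDK-free
    s3 = boto3.client("s3")
    dest = os.environ["CENTRAL_BUCKET"]  # hypothetical environment variable
    for bucket, key in parse_s3_event(event):
        s3.copy_object(
            CopySource={"Bucket": bucket, "Key": key},
            Bucket=dest,
            Key=f"{bucket}/{key}",  # prefix with source bucket to avoid key collisions
        )
```

Each account's function needs s3:GetObject on its own bucket and s3:PutObject on the central bucket's resource policy.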

Question 902

Exam Question

A company allows its developers to attach existing IAM policies to existing IAM roles to enable faster experimentation and agility. However, the security operations team is concerned that the developers could attach the existing administrator policy, which would allow the developers to circumvent any other security policies.

How should a solutions architect address this issue?

A. Create an Amazon SNS topic to send an alert every time a developer creates a new policy.
B. Use service control policies to disable IAM activity across all accounts in the organizational unit.
C. Prevent the developers from attaching any policies and assign all IAM duties to the security operations team.
D. Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy.

Correct Answer

D. Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy.

Explanation

Setting an IAM permissions boundary on the developer IAM role is an effective way to address the issue. A permissions boundary is an advanced IAM feature that sets the maximum permissions an IAM entity (such as a role) can have. By including an explicit deny for attaching the administrator policy in the boundary, the solutions architect ensures that developers cannot escalate their privileges beyond the defined limits.

Developers can still attach other existing IAM policies to existing roles for fast experimentation, so the approach balances agility with security: developers keep their workflow, but they cannot circumvent the company's other security policies by attaching the administrator policy.
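A permissions boundary document of this kind might look like the following sketch, built with only the standard library. The broad Allow statement and the managed-policy ARN are illustrative; a real boundary would scope the Allow to the services developers actually need.

```python
import json

ADMIN_POLICY_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

def build_permissions_boundary():
    """Boundary document: a broad Allow for experimentation, plus an explicit
    Deny on attaching the AdministratorAccess managed policy to any entity."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Sid": "AllowAll", "Effect": "Allow", "Action": "*", "Resource": "*"},
            {
                "Sid": "DenyAttachingAdminPolicy",
                "Effect": "Deny",
                "Action": [
                    "iam:AttachRolePolicy",
                    "iam:AttachUserPolicy",
                    "iam:AttachGroupPolicy",
                ],
                "Resource": "*",
                # The deny only fires when the policy being attached is AdministratorAccess.
                "Condition": {"ArnEquals": {"iam:PolicyARN": ADMIN_POLICY_ARN}},
            },
        ],
    }

boundary_json = json.dumps(build_permissions_boundary(), indent=2)
```

The JSON would then be set as the role's permissions boundary (for example via `iam put-role-permissions-boundary`).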

Reference

AWS > Documentation > AWS Identity and Access Management > User Guide > Permissions boundaries for IAM entities

Question 903

Exam Question

A company has an AWS account used for software engineering. The AWS account has access to the company’s on-premises data center through a pair of AWS Direct Connect connections. All non-VPC traffic routes to the virtual private gateway. A development team recently created an AWS Lambda function through the console. The development team needs to allow the function to access a database that runs in a private subnet in the company’s data center.

Which solution will meet these requirements?

A. Configure the Lambda function to run in the VPC with the appropriate security group.
B. Set up a VPN connection from AWS to the data center. Route the traffic from the Lambda function through the VPN.
C. Update the route tables in the VPC to allow the Lambda function to access the on-premises data center through Direct Connect.
D. Create an Elastic IP address. Configure the Lambda function to send traffic through the Elastic IP address without an elastic network interface.

Correct Answer

A. Configure the Lambda function to run in the VPC with the appropriate security group.

Explanation

To allow the Lambda function to access a database in a private subnet in the company’s data center, the Lambda function should be configured to run within the Virtual Private Cloud (VPC) that is connected to the data center through the AWS Direct Connect connections.

By running the Lambda function in the VPC, it can be assigned a security group that allows outbound traffic to the database in the private subnet. This ensures that the Lambda function can communicate with the database securely.

Additionally, running the Lambda function within the VPC provides network isolation and allows it to access resources within the VPC, including the on-premises data center, through the Direct Connect connections.

Therefore, configuring the Lambda function to run in the VPC with the appropriate security group is the solution that will meet the requirements.
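Attaching an existing function to the VPC can be sketched with the Lambda `UpdateFunctionConfiguration` API. The subnet IDs, security group ID, and function name below are placeholders.

```python
def vpc_config(subnet_ids, security_group_ids):
    """Build the VpcConfig argument for update_function_configuration."""
    return {
        "SubnetIds": list(subnet_ids),
        "SecurityGroupIds": list(security_group_ids),
    }

def attach_function_to_vpc(function_name, subnet_ids, security_group_ids):
    """Move an existing Lambda function into the VPC's private subnets."""
    import boto3  # deferred import so vpc_config stays SDK-free
    client = boto3.client("lambda")
    return client.update_function_configuration(
        FunctionName=function_name,
        VpcConfig=vpc_config(subnet_ids, security_group_ids),
    )

# Hypothetical usage:
# attach_function_to_vpc("db-reader", ["subnet-0abc123"], ["sg-0def456"])
```

The function's execution role also needs the AWSLambdaVPCAccessExecutionRole permissions so Lambda can create the elastic network interfaces in those subnets.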

Question 904

Exam Question

A solutions architect is designing a two-tier web application. The application consists of a public-facing web tier hosted on Amazon EC2 in public subnets. The database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the company.

How should security groups be configured in this situation? (Choose two.)

A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0.
B. Configure the security group for the web tier to allow outbound traffic on port 443 to 0.0.0.0/0.
C. Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier.
D. Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier.
E. Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier.

Correct Answer

A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0.
C. Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier.

Explanation

A. The web tier is public facing, so its security group must allow inbound HTTPS traffic on port 443 from any source (0.0.0.0/0) so that users on the internet can reach the website.

C. To allow communication between the web tier and the database tier, the security group for the database tier should allow inbound traffic on port 1433 (the default Microsoft SQL Server port) only from the security group associated with the web tier. This lets the web tier instances establish database connections while blocking every other source.

Security groups are stateful, so response traffic is allowed automatically and no explicit outbound rules (options B and D) are needed. Opening port 443 on the database tier (option E) is unnecessary: the database listens only on port 1433, and leaving unused ports open weakens the security posture.
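As a sketch, the two ingress rules can be expressed as `IpPermissions` structures for the EC2 `authorize_security_group_ingress` call; the security group IDs are placeholders.

```python
def https_from_anywhere():
    """Web tier rule: inbound HTTPS from the internet."""
    return {
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }

def sql_server_from_sg(web_tier_sg_id):
    """Database tier rule: inbound SQL Server (1433) only from the web tier's SG."""
    return {
        "IpProtocol": "tcp", "FromPort": 1433, "ToPort": 1433,
        # Referencing a security group instead of a CIDR scopes access
        # to instances that carry that group.
        "UserIdGroupPairs": [{"GroupId": web_tier_sg_id}],
    }

# Applied with boto3, for example:
# ec2 = boto3.client("ec2")
# ec2.authorize_security_group_ingress(
#     GroupId="sg-web0001", IpPermissions=[https_from_anywhere()])
# ec2.authorize_security_group_ingress(
#     GroupId="sg-db0001", IpPermissions=[sql_server_from_sg("sg-web0001")])
```

Because security groups are stateful, no matching egress rules are required for the return traffic.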

Question 905

Exam Question

A company has an application that scans millions of connected devices for security threats and pushes the scan logs to an Amazon S3 bucket. A total of 70 GB of data is generated each week, and the company needs to store 3 years of data for historical reporting. The company must process, aggregate, and enrich the data from Amazon S3 by performing complex analytical queries and joins in the least amount of time. The aggregated dataset is visualized on an Amazon QuickSight dashboard.

What should a solutions architect recommend to meet these requirements?

A. Create and run an ETL job in AWS Glue to process the data from Amazon S3 and load it into Amazon Redshift. Perform the aggregation queries on Amazon Redshift.
B. Use AWS Lambda functions based on S3 PutObject event triggers to copy the incremental changes to Amazon DynamoDB. Perform the aggregation queries on DynamoDB.
C. Use AWS Lambda functions based on S3 PutObject event triggers to copy the incremental changes to Amazon Aurora MySQL. Perform the aggregation queries on Aurora MySQL.
D. Use AWS Glue to catalog the data in Amazon S3. Perform the aggregation queries on the cataloged tables by using Amazon Athena. Query the data directly from Amazon S3.

Correct Answer

D. Use AWS Glue to catalog the data in Amazon S3. Perform the aggregation queries on the cataloged tables by using Amazon Athena. Query the data directly from Amazon S3.

Explanation

In this scenario, the company needs to process, aggregate, and enrich the data from Amazon S3 in the least amount of time for complex analytical queries and joins. AWS Glue can be used to catalog the data in Amazon S3, making it easier to query and analyze the data. Amazon Athena, an interactive query service, can then be used to perform the aggregation queries directly on the cataloged tables in Amazon S3.

This solution allows for efficient querying and analysis of the data stored in Amazon S3 without the need for additional data movement or storage. It provides a serverless and cost-effective approach for processing and analyzing large datasets. The aggregated dataset can be visualized on an Amazon QuickSight dashboard, leveraging the results of the queries performed with Amazon Athena.
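As a sketch, once Glue has cataloged the logs, an aggregation can run as a single Athena query directly over the S3 data. The database, table, and column names and the output location below are all hypothetical.

```python
def weekly_threat_counts_sql(database="scan_logs_db", table="scan_logs"):
    """Aggregation query Athena can run against the Glue-cataloged S3 data."""
    return (
        f"SELECT date_trunc('week', scan_time) AS week, "
        f"threat_type, count(*) AS findings "
        f"FROM {database}.{table} "
        f"GROUP BY 1, 2 ORDER BY week"
    )

def run_athena_query(sql, output_s3="s3://query-results-bucket/athena/"):
    """Submit the query; Athena writes results to the given S3 location."""
    import boto3  # deferred import; query building above needs no SDK
    athena = boto3.client("athena")
    return athena.start_query_execution(
        QueryString=sql,
        ResultConfiguration={"OutputLocation": output_s3},
    )["QueryExecutionId"]
```

QuickSight can then use the same Athena table as a data source, so no data ever leaves S3.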

Question 906

Exam Question

An application hosted on AWS is experiencing performance problems, and the application vendor wants to perform an analysis of the log file to troubleshoot further. The log file is stored on Amazon S3 and is 10 GB in size. The application owner will make the log file available to the vendor for a limited time.

What is the MOST secure way to do this?

A. Enable public read on the S3 object and provide the link to the vendor.
B. Upload the file to Amazon WorkDocs and share the public link with the vendor.
C. Generate a presigned URL and have the vendor download the log file before it expires.
D. Create an IAM user for the vendor to provide access to the S3 bucket and the application. Enforce multi-factor authentication.

Correct Answer

C. Generate a presigned URL and have the vendor download the log file before it expires.

Explanation

Generating a presigned URL is the most secure way to share the log file with the vendor while maintaining control over the access and the duration of access. A presigned URL is a time-limited URL that provides temporary access to an object in Amazon S3. By generating a presigned URL for the log file, the application owner can specify the expiration time, ensuring that the vendor can access the file only for a limited period.

This approach avoids the need to enable public read access or upload the file to a different service. It also provides a level of accountability as the access is tied to the specific URL and can be tracked.

Question 907

Exam Question

A company is hosting a three-tier e-commerce application in the AWS Cloud. The company hosts the website on Amazon S3 and integrates the website with an API that handles sales requests. The company hosts the API on three Amazon EC2 instances behind an Application Load Balancer (ALB). The API consists of static and dynamic front-end content along with backend workers that process sales requests asynchronously. The company is expecting a significant and sudden increase in the number of sales requests during events for the launch of new products.

What should a solutions architect recommend to ensure that all the requests are processed successfully?

A. Add an Amazon CloudFront distribution for the dynamic content. Increase the number of EC2 instances to handle the increase in traffic.
B. Add an Amazon CloudFront distribution for the static content. Place the EC2 instances in an Auto Scaling group to launch new instances based on network traffic.
C. Add an Amazon CloudFront distribution for the dynamic content. Add an Amazon ElastiCache instance in front of the ALB to reduce traffic for the API to handle.
D. Add an Amazon CloudFront distribution for the static content. Add an Amazon Simple Queue Service (Amazon SQS) queue to receive requests from the website for later processing by the EC2 instances.

Correct Answer

B. Add an Amazon CloudFront distribution for the static content. Place the EC2 instances in an Auto Scaling group to launch new instances based on network traffic.

Explanation

By adding an Amazon CloudFront distribution for the static content, the company can leverage the global edge locations of CloudFront to cache and serve the static content closer to the users, reducing the load on the EC2 instances and improving the overall performance.

Placing the EC2 instances behind an Auto Scaling group allows the company to automatically scale the number of instances based on network traffic. With the sudden increase in sales requests during product launches, the Auto Scaling group can dynamically add more instances to handle the increased load and ensure that all requests are processed successfully.

This solution provides scalability and load balancing capabilities to handle the increased traffic efficiently, ensuring that the website and API can handle the surge in sales requests effectively.
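The scaling half of option B can be sketched as a target-tracking policy on the Auto Scaling group using the predefined ASGAverageNetworkIn metric; the group name and target value below are assumptions.

```python
def network_tracking_policy(target_bytes_per_instance):
    """TargetTrackingConfiguration for put_scaling_policy (autoscaling client)."""
    return {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageNetworkIn",
        },
        # Auto Scaling adds or removes instances to hold average inbound
        # network traffic per instance near this value.
        "TargetValue": float(target_bytes_per_instance),
    }

# Applied with boto3, for example:
# autoscaling = boto3.client("autoscaling")
# autoscaling.put_scaling_policy(
#     AutoScalingGroupName="api-asg",          # placeholder group name
#     PolicyName="track-network-in",
#     PolicyType="TargetTrackingScaling",
#     TargetTrackingConfiguration=network_tracking_policy(50_000_000),
# )
```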

Question 908

Exam Question

A solutions architect is implementing a document review application using an Amazon S3 bucket for storage. The solution must prevent an accidental deletion of the documents and ensure that all versions of the documents are available. Users must be able to download, modify, and upload documents.

Which combination of actions should be taken to meet these requirements? (Choose two.)

A. Enable a read-only bucket ACL.
B. Enable versioning on the bucket.
C. Attach an IAM policy to the bucket.
D. Enable MFA Delete on the bucket.
E. Encrypt the bucket using AWS KMS.

Correct Answer

B. Enable versioning on the bucket.
D. Enable MFA Delete on the bucket.

Explanation

Enabling versioning on the Amazon S3 bucket ensures that all versions of the documents are retained. Whenever a document is modified and uploaded, a new version of the document is created, preserving the previous versions. This protects against accidental deletions or overwrites, as previous versions can be restored if needed.

Enabling MFA (Multi-Factor Authentication) Delete adds an extra layer of protection against accidental or unauthorized deletions. With MFA Delete enabled, any deletion operation on the bucket requires the use of an additional authentication factor, such as a physical token or a virtual MFA device, in addition to the standard credentials. This helps prevent accidental or unauthorized deletion of documents.

Enabling a read-only bucket ACL (Access Control List) or attaching an IAM policy to the bucket may limit write operations but won’t prevent accidental deletions or ensure versioning.

Encrypting the bucket using AWS Key Management Service (AWS KMS) with customer-managed keys (CMKs) provides data-at-rest encryption but doesn’t directly address accidental deletions or versioning.
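Both protections are applied through the same `put_bucket_versioning` call. Note that MFA Delete can only be enabled by the bucket owner's root credentials together with an MFA code, so the serial number and token below are placeholders.

```python
def versioning_request(bucket, mfa_serial=None, mfa_token=None):
    """Arguments for s3 put_bucket_versioning enabling versioning and,
    when an MFA device is supplied, MFA Delete as well."""
    req = {
        "Bucket": bucket,
        "VersioningConfiguration": {"Status": "Enabled"},
    }
    if mfa_serial and mfa_token:
        req["VersioningConfiguration"]["MFADelete"] = "Enabled"
        # The MFA parameter is the device serial number, a space, and the code.
        req["MFA"] = f"{mfa_serial} {mfa_token}"
    return req

# Applied with boto3, for example:
# s3 = boto3.client("s3")
# s3.put_bucket_versioning(**versioning_request(
#     "document-review-bucket",                       # placeholder bucket
#     "arn:aws:iam::111122223333:mfa/root-account",   # placeholder device
#     "123456"))                                      # placeholder code
```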

Question 909

Exam Question

A company has several web servers that need to frequently access a common Amazon RDS MySQL Multi-AZ DB instance. The company wants a secure method for the web servers to connect to the database while meeting a security requirement to rotate user credentials frequently.

Which solution meets these requirements?

A. Store the database user credentials in AWS Secrets Manager. Grant the necessary IAM permissions to allow the web servers to access AWS Secrets Manager.
B. Store the database user credentials in AWS Systems Manager OpsCenter. Grant the necessary IAM permissions to allow the web servers to access OpsCenter.
C. Store the database user credentials in a secure Amazon S3 bucket. Grant the necessary IAM permissions to allow the web servers to retrieve credentials and access the database.
D. Store the database user credentials in files encrypted with AWS Key Management Service (AWS KMS) on the web server file system. The web server should be able to decrypt the files and access the database.

Correct Answer

A. Store the database user credentials in AWS Secrets Manager. Grant the necessary IAM permissions to allow the web servers to access AWS Secrets Manager.

Explanation

Storing the database user credentials in AWS Secrets Manager provides a secure and centralized solution for managing and rotating credentials. Secrets Manager offers built-in integration with Amazon RDS, making it easy to manage and rotate database credentials. By granting the necessary IAM permissions to the web servers, they can securely access Secrets Manager and retrieve the credentials when needed.

Using AWS Systems Manager OpsCenter (option B) is not designed for storing and managing credentials, but rather for managing and resolving operational issues. It is not the recommended solution for this use case.

Storing the credentials in an Amazon S3 bucket (option C) or encrypted files on the web server file system (option D) can be insecure and difficult to manage, especially when it comes to rotating credentials frequently. These options do not provide the same level of security and ease of management as AWS Secrets Manager.
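Retrieving the rotated credentials at connection time can be sketched as follows. The secret name is hypothetical; the JSON field names match the format Secrets Manager uses for RDS database secrets.

```python
import json

def parse_db_secret(secret_string):
    """Pull the connection fields out of a Secrets Manager RDS secret payload."""
    data = json.loads(secret_string)
    return {k: data[k] for k in ("host", "port", "username", "password")}

def fetch_db_credentials(secret_id="prod/web/mysql"):  # hypothetical secret name
    """Fetch the current (possibly just-rotated) credentials from Secrets Manager."""
    import boto3  # deferred import so parse_db_secret stays SDK-free
    sm = boto3.client("secretsmanager")
    secret = sm.get_secret_value(SecretId=secret_id)["SecretString"]
    return parse_db_secret(secret)
```

Because the web servers fetch the secret on each connection (or on a short cache TTL), rotation happens without redeploying them.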

Question 910

Exam Question

A company has a two-tier application architecture that runs in public and private subnets. Amazon EC2 instances running the web application are in the public subnet and a database runs on the private subnet. The web application instances and the database are running in a single Availability Zone (AZ).

Which combination of steps should a solutions architect take to provide high availability for this architecture? (Choose two.)

A. Create new public and private subnets in the same AZ for high availability.
B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs.
C. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer.
D. Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in one AZ.
E. Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment.

Correct Answer

B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs.
E. Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment.

Explanation

To provide high availability for the two-tier application architecture, the following steps should be taken:

B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs.
By creating an Auto Scaling group and an Application Load Balancer (ALB) that span multiple Availability Zones (AZs), the web application instances can be distributed across multiple AZs. This ensures that the application remains available even if one AZ becomes unavailable.

E. Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment.
To provide high availability for the database, new public and private subnets should be created in the same VPC, each in a new AZ. The database should be migrated to an Amazon RDS multi-AZ deployment. In a multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different AZ. This replica can take over in case of a failure in the primary AZ, providing high availability for the database.
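The database step can be sketched as a `create_db_instance` request with Multi-AZ enabled; every value below is a placeholder for illustration.

```python
def rds_multi_az_request(identifier, username, password):
    """Arguments for rds create_db_instance with a synchronous standby replica."""
    return {
        "DBInstanceIdentifier": identifier,   # placeholder values throughout
        "Engine": "mysql",
        "DBInstanceClass": "db.m5.large",
        "AllocatedStorage": 100,
        "MasterUsername": username,
        "MasterUserPassword": password,
        # MultiAZ provisions a standby in a different AZ and fails over
        # automatically if the primary AZ becomes unavailable.
        "MultiAZ": True,
    }

# Applied with boto3, for example:
# rds = boto3.client("rds")
# rds.create_db_instance(**rds_multi_az_request("app-db", "admin", "<password>"))
```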
