
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 1 Part 1

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free and can help you prepare for the SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.

Exam Question 31

A company is planning to use Amazon S3 to store images uploaded by its users. The images must be encrypted at rest in Amazon S3. The company does not want to spend time managing and rotating the keys, but it does want to control who can access those keys.

What should a solutions architect use to accomplish this?

A. Server-Side Encryption with keys stored in an S3 bucket
B. Server-Side Encryption with Customer-Provided Keys (SSE-C)
C. Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
D. Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)

Correct Answer

D. Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)

Answer Description

“Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS) is similar to SSE-S3, but with some additional benefits and charges for using this service.

There are separate permissions for the use of a CMK that provides added protection against unauthorized access of your objects in Amazon S3. SSE-KMS also provides you with an audit trail that shows when your CMK was used and by whom.”

Server-Side Encryption: Using SSE-KMS
You can protect data at rest in Amazon S3 by using three different modes of server-side encryption: SSE-S3, SSE-C, or SSE-KMS.
SSE-S3 requires that Amazon S3 manage the data and master encryption keys. For more information about SSE-S3, see Protecting Data Using Server-Side Encryption with Amazon S3-Managed Encryption Keys (SSE-S3).

SSE-C requires that you manage the encryption key. For more information about SSE-C, see Protecting Data Using Server-Side Encryption with Customer-Provided Encryption Keys (SSE-C).

SSE-KMS requires that AWS manage the data key but you manage the customer master key (CMK) in AWS KMS.

The remainder of this topic discusses how to protect data by using server-side encryption with AWS KMS-managed keys (SSE-KMS).

You can request encryption and select a CMK by using the Amazon S3 console or API. In the console, check the appropriate box to perform encryption and select your CMK from the list. For the Amazon S3 API, specify encryption and choose your CMK by setting the appropriate headers in a GET or PUT request.
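
A minimal sketch of specifying SSE-KMS on upload with the AWS SDK for Python (boto3) is shown below; the bucket name, object key, and KMS key ARN are placeholder assumptions, not values from the scenario.

```python
import boto3

s3 = boto3.client("s3")

# Upload an image with SSE-KMS, selecting a customer managed CMK.
# The ServerSideEncryption argument maps to the x-amz-server-side-encryption header.
with open("photo.jpg", "rb") as image:
    s3.put_object(
        Bucket="example-image-bucket",                                    # placeholder bucket
        Key="uploads/photo.jpg",                                          # placeholder key
        Body=image,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-ID",  # placeholder CMK ARN
    )
```

With this approach, access to the CMK itself is governed by the key policy (and IAM policies and grants) in AWS KMS, which is what gives the company control over who can use the key.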

SSE-KMS requires that AWS manage the data key but you manage the customer master key (CMK) in AWS KMS. You can choose a customer managed CMK or the AWS managed CMK for Amazon S3 in your account.

Customer managed CMKs are CMKs in your AWS account that you create, own, and manage. You have full control over these CMKs, including establishing and maintaining their key policies, IAM policies, and grants, enabling and disabling them, rotating their cryptographic material, adding tags, creating aliases that refer to the CMK, and scheduling the CMKs for deletion.

For this scenario, the solutions architect should use SSE-KMS with a customer managed CMK. That way, AWS KMS manages the data keys, while the company can configure key policies that define who can access the keys.

CORRECT: “Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)” is the correct answer.

INCORRECT: “Server-Side Encryption with keys stored in an S3 bucket” is incorrect, as storing encryption keys in an S3 bucket is not a server-side encryption option.

INCORRECT: “Server-Side Encryption with Customer-Provided Keys (SSE-C)” is incorrect as the company does not want to manage the keys.

INCORRECT: “Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)” is incorrect as the company needs to control access to the keys, which is not possible when they are managed by Amazon S3.

References

Exam Question 32

A solutions architect is tasked with transferring 750 TB of data from a network-attached file system located at a branch office to Amazon S3 Glacier. The solution must avoid saturating the branch office’s low-bandwidth internet connection.

What is the MOST cost-effective solution?

A. Create a site-to-site VPN tunnel to an Amazon S3 bucket and transfer the files directly. Create a bucket policy to enforce a VPC endpoint.
B. Order 10 AWS Snowball appliances and select an S3 Glacier vault as the destination. Create a bucket policy to enforce a VPC endpoint.
C. Mount the network-attached file system to Amazon S3 and copy the files directly. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.
D. Order 10 AWS Snowball appliances and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.

Correct Answer

D. Order 10 AWS Snowball appliances and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.

Answer Description

Regional Limitations for AWS Snowball
The AWS Snowball service has two device types, the standard Snowball and the Snowball Edge. The availability of each device type varies by AWS Region.

Limitations on Jobs in AWS Snowball

The following limitations exist for creating jobs in AWS Snowball:

For security purposes, data transfers must be completed within 90 days of the Snowball being prepared.

Currently, the AWS Snowball Edge device doesn’t support server-side encryption with customer-provided keys (SSE-C). The AWS Snowball Edge device does support server-side encryption with Amazon S3-managed encryption keys (SSE-S3) and server-side encryption with AWS Key Management Service (AWS KMS) managed keys (SSE-KMS). For more information, see Protecting Data Using Server-Side Encryption in the Amazon Simple Storage Service Developer Guide.

In the US regions, Snowballs come in two sizes: 50 TB and 80 TB. All other regions have the 80 TB Snowballs only. If you’re using Snowball to import data, and you need to transfer more data than will fit on a single Snowball, create additional jobs. Each export job can use multiple Snowballs.

The default service limit for the number of Snowballs you can have at one time is 1. If you want to increase your service limit, contact AWS Support.

All objects transferred to the Snowball have their metadata changed. The only metadata that remains the same is filename and filesize. All other metadata is set as in the following example: -rw-rw-r-- 1 root root [filesize] Dec 31 1969 [path/filename].

Object lifecycle management
To manage your objects so that they are stored cost effectively throughout their lifecycle, configure their Amazon S3 Lifecycle. An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions:

Transition actions – Define when objects transition to another storage class. For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after you created them, or archive objects to the S3 Glacier storage class one year after creating them.

Expiration actions – Define when objects expire. Amazon S3 deletes expired objects on your behalf. The lifecycle expiration costs depend on when you choose to expire objects.

As the company’s internet link is low-bandwidth, uploading directly to Amazon S3 (ready for transition to Glacier) would saturate the link. The best alternative is to use AWS Snowball appliances. The Snowball Edge appliance can hold up to 75 TB of data, so 10 devices would be required to migrate 750 TB of data.

Snowball moves data into AWS using a hardware device and the data is then copied into an Amazon S3 bucket of your choice. From there, lifecycle policies can transition the S3 objects to Amazon S3 Glacier.
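
A minimal boto3 sketch of such a lifecycle rule follows; the bucket name and the immediate (0-day) transition are assumptions for illustration.

```python
import boto3

s3 = boto3.client("s3")

# Transition every object in the import bucket to the S3 Glacier storage class.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-snowball-import-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # empty prefix applies the rule to all objects
                "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```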

CORRECT: “Order 10 AWS Snowball appliances and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier” is the correct answer.

INCORRECT: “Order 10 AWS Snowball appliances and select an S3 Glacier vault as the destination. Create a bucket policy to enforce a VPC endpoint” is incorrect because a Glacier vault cannot be set as the Snowball destination; it must be an S3 bucket. A bucket policy that enforces a VPC endpoint also does nothing to address the data-transfer requirement.

INCORRECT: “Create a site-to-site VPN tunnel to an Amazon S3 bucket and transfer the files directly. Create a bucket policy to enforce a VPC endpoint” is incorrect because the transfer would still traverse the branch office’s low-bandwidth internet connection and would saturate it.

INCORRECT: “Mount the network-attached file system to Amazon S3 and copy the files directly. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier” is incorrect because a network-attached file system cannot be mounted to Amazon S3 directly, and copying the data over the internet link would saturate it.

References

Exam Question 33

An application hosted on AWS is experiencing performance problems, and the application vendor wants to perform an analysis of the log file to troubleshoot further. The log file is stored on Amazon S3 and is 10 GB in size. The application owner will make the log file available to the vendor for a limited time.

What is the MOST secure way to do this?

A. Enable public read on the S3 object and provide the link to the vendor.
B. Upload the file to Amazon WorkDocs and share the public link with the vendor.
C. Generate a presigned URL and have the vendor download the log file before it expires.
D. Create an IAM user for the vendor to provide access to the S3 bucket and the application. Enforce multi-factor authentication.

Correct Answer

C. Generate a presigned URL and have the vendor download the log file before it expires.

Answer Description

Share an object with others
All objects by default are private. Only the object owner has permission to access these objects. However, the object owner can optionally share objects with others by creating a presigned URL, using their own security credentials, to grant time-limited permission to download the objects.

When you create a presigned URL for your object, you must provide your security credentials and specify a bucket name, an object key, the HTTP method (GET to download the object), and an expiration date and time. The presigned URL is valid only for the specified duration.

Anyone who receives the presigned URL can then access the object. For example, if you have a video in your bucket and both the bucket and the object are private, you can share the video with others by generating a presigned URL.
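
A minimal sketch with boto3 is shown below; the bucket name, key, and 24-hour expiry are assumptions for illustration only.

```python
import boto3

s3 = boto3.client("s3")

# Create a time-limited download link for the 10 GB log file.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "example-log-bucket", "Key": "logs/application.log"},  # placeholders
    ExpiresIn=86400,  # seconds; the link stops working after 24 hours
)
print(url)  # share this URL with the vendor
```

Once the URL expires, the vendor loses access automatically, which is what makes this the most secure option for temporary sharing.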

Exam Question 34

A company allows its developers to attach existing IAM policies to existing IAM roles to enable faster experimentation and agility. However, the security operations team is concerned that the developers could attach the existing administrator policy, which would allow the developers to circumvent any other security policies.

How should a solutions architect address this issue?

A. Create an Amazon SNS topic to send an alert every time a developer creates a new policy.
B. Use service control policies to disable IAM activity across all accounts in the organizational unit.
C. Prevent the developers from attaching any policies and assign all IAM duties to the security operations team.
D. Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy.

Correct Answer

D. Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy.

Answer Description

The permissions boundary for an IAM entity (user or role) sets the maximum permissions that the entity can have. This can change the effective permissions for that user or role. The effective permissions for an entity are the permissions that are granted by all the policies that affect the user or role. Within an account, the permissions for an entity can be affected by identity-based policies, resource-based policies, permissions boundaries, Organizations SCPs, or session policies.

Therefore, the solutions architect can set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy.
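
A hedged sketch of one way to do this with boto3 follows; the policy name, role name, and the broad Allow statement are assumptions, not details from the scenario.

```python
import json
import boto3

iam = boto3.client("iam")

# Boundary document: broad developer permissions, but an explicit deny on
# attaching the AWS managed AdministratorAccess policy to any user or role.
boundary_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
        {
            "Effect": "Deny",
            "Action": ["iam:AttachUserPolicy", "iam:AttachRolePolicy"],
            "Resource": "*",
            "Condition": {
                "ArnEquals": {
                    "iam:PolicyARN": "arn:aws:iam::aws:policy/AdministratorAccess"
                }
            },
        },
    ],
}

# Create the boundary as a customer managed policy, then apply it to the role.
policy = iam.create_policy(
    PolicyName="DeveloperPermissionsBoundary",   # assumed policy name
    PolicyDocument=json.dumps(boundary_document),
)
iam.put_role_permissions_boundary(
    RoleName="developer-role",                   # assumed role name
    PermissionsBoundary=policy["Policy"]["Arn"],
)
```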

CORRECT: “Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy” is the correct answer.

INCORRECT: “Create an Amazon SNS topic to send an alert every time a developer creates a new policy” is incorrect as this would mean investigating every incident which is not an efficient solution.

INCORRECT: “Use service control policies to disable IAM activity across all accounts in the organizational unit” is incorrect as this would prevent the developers from being able to work with IAM completely.

INCORRECT: “Prevent the developers from attaching any policies and assign all IAM duties to the security operations team” is incorrect as this is not necessary. The requirement is to allow developers to work with policies; the solution needs a secure way of achieving this.

References

AWS Identity and Access Management > User Guide > Permissions boundaries for IAM entities

Exam Question 35

A company has a multi-tier application that runs six front-end web servers in an Amazon EC2 Auto Scaling group in a single Availability Zone behind an Application Load Balancer (ALB). A solutions architect needs to modify the infrastructure to be highly available without modifying the application.

Which architecture should the solutions architect choose that provides high availability?

A. Create an Auto Scaling group that uses three instances across each of two Regions.
B. Modify the Auto Scaling group to use three instances across each of two Availability Zones.
C. Create an Auto Scaling template that can be used to quickly create more instances in another Region.
D. Change the ALB in front of the Amazon EC2 instances in a round-robin configuration to balance traffic to the web tier.

Correct Answer

B. Modify the Auto Scaling group to use three instances across each of two Availability Zones.

Answer Description

Expanding Your Scaled and Load-Balanced Application to an Additional Availability Zone.

When one Availability Zone becomes unhealthy or unavailable, Amazon EC2 Auto Scaling launches new instances in an unaffected zone. When the unhealthy Availability Zone returns to a healthy state, Amazon EC2 Auto Scaling automatically redistributes the application instances evenly across all of the zones for your Auto Scaling group. Amazon EC2 Auto Scaling does this by attempting to launch new instances in the Availability Zone with the fewest instances. If the attempt fails, however, Amazon EC2 Auto Scaling attempts to launch in other Availability Zones until it succeeds.

You can expand the availability of your scaled and load-balanced application by adding an Availability Zone to your Auto Scaling group and then enabling that zone for your load balancer. After you’ve enabled the new Availability Zone, the load balancer begins to route traffic equally among all the enabled zones.

High availability can be enabled for this architecture quite simply by modifying the existing Auto Scaling group to use multiple Availability Zones. The ASG will automatically balance the instances across the zones, so you don’t need to specify the number of instances per AZ.
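
A minimal boto3 sketch of this change is below; the group name, zone names, and capacity figures are placeholder assumptions (in a VPC you would set VPCZoneIdentifier to subnets in each zone instead).

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Spread the existing web-tier group across two Availability Zones.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-tier-asg",             # placeholder group name
    AvailabilityZones=["us-east-1a", "us-east-1b"],  # placeholder zones
    MinSize=6,
    DesiredCapacity=6,
    MaxSize=12,
)
```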


CORRECT: “Modify the Auto Scaling group to use three instances across each of two Availability Zones” is the correct answer.

INCORRECT: “Create an Auto Scaling group that uses three instances across each of two Regions” is incorrect as EC2 Auto Scaling does not support multiple Regions.

INCORRECT: “Create an Auto Scaling template that can be used to quickly create more instances in another Region” is incorrect as EC2 Auto Scaling does not support multiple Regions, and this manual approach does not provide high availability.

INCORRECT: “Change the ALB in front of the Amazon EC2 instances in a round-robin configuration to balance traffic to the web tier” is incorrect as the ALB already balances traffic across the instances; with all instances in a single Availability Zone, this does not provide high availability.

References

Exam Question 36

A company runs an application on a group of Amazon Linux EC2 instances. The application writes log files using standard API calls. For compliance reasons, all log files must be retained indefinitely and will be analyzed by a reporting tool that must access all files concurrently.

Which storage service should a solutions architect use to provide the MOST cost-effective solution?

A. Amazon EBS
B. Amazon EFS
C. Amazon EC2 instance store
D. Amazon S3

Correct Answer

D. Amazon S3

Answer Description

Amazon S3: Requests to Amazon S3 can be authenticated or anonymous. Authenticated access requires credentials that AWS can use to authenticate your requests. When making REST API calls directly from your code, you create a signature using valid credentials and include the signature in your request.

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely-tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9’s) of durability, and stores data for millions of applications for companies all around the world.

The application writes the log files using standard API calls, which means it will be compatible with Amazon S3, which exposes a REST API. S3 is a massively scalable, key-based object store that is well suited to concurrent access to the files from many instances.

Amazon S3 will also be the most cost-effective choice. A rough calculation using the AWS pricing calculator shows the cost differences between 1TB of storage on EBS, EFS, and S3 Standard.

CORRECT: “Amazon S3” is the correct answer.

INCORRECT: “Amazon EFS” is incorrect because, although EFS offers concurrent access from many EC2 Linux instances, it is not the most cost-effective solution.

INCORRECT: “Amazon EBS” is incorrect. Elastic Block Store (EBS) is not a good solution for concurrent access from many EC2 instances and is not the most cost-effective option either. EBS volumes are mounted to a single instance, except when using Multi-Attach, which is a newer feature with several constraints.

INCORRECT: “Amazon EC2 instance store” is incorrect as instance store is ephemeral storage, which means the data is lost when the instance is stopped or terminated. Therefore, it is not an option for long-term data storage.

References

Exam Question 37

A media streaming company collects real-time data and stores it in a disk-optimized database system. The company is not getting the expected throughput and wants an in-memory database storage solution that performs faster and provides high availability using data replication.

Which database should a solutions architect recommend?

A. Amazon RDS for MySQL
B. Amazon RDS for PostgreSQL.
C. Amazon ElastiCache for Redis
D. Amazon ElastiCache for Memcached

Correct Answer

C. Amazon ElastiCache for Redis

Answer Description

In-memory databases on AWS: Amazon ElastiCache for Redis.
Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides submillisecond latency to power internet-scale, real-time applications. Developers can use ElastiCache for Redis as an in-memory nonrelational database. The ElastiCache for Redis cluster configuration supports up to 15 shards and enables customers to run Redis workloads with up to 6.1 TB of in-memory capacity in a single cluster.

ElastiCache for Redis also provides the ability to add and remove shards from a running cluster. You can dynamically scale out and even scale in your Redis cluster workloads to adapt to changes in demand.

Amazon ElastiCache is an in-memory database service. With ElastiCache for Memcached there is no data replication or high availability; each node is a separate partition of data.

Therefore, the Redis engine must be used, as it supports both data replication and clustering (cluster mode enabled).
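
A hedged boto3 sketch of creating a Redis replication group with replicas and automatic failover is shown below; the identifiers, node type, and counts are assumptions.

```python
import boto3

elasticache = boto3.client("elasticache")

# Redis replication group: one shard with two read replicas, with automatic
# failover and Multi-AZ for high availability.
elasticache.create_replication_group(
    ReplicationGroupId="streaming-metrics",                   # placeholder ID
    ReplicationGroupDescription="In-memory store with replication",
    Engine="redis",
    CacheNodeType="cache.r6g.large",                          # placeholder node type
    NumNodeGroups=1,                                          # number of shards
    ReplicasPerNodeGroup=2,                                   # replicas per shard
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
)
```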

CORRECT: “Amazon ElastiCache for Redis” is the correct answer.

INCORRECT: “Amazon ElastiCache for Memcached” is incorrect as Memcached does not support data replication or high availability.

INCORRECT: “Amazon RDS for MySQL” is incorrect as this is not an in-memory database.

INCORRECT: “Amazon RDS for PostgreSQL” is incorrect as this is not an in-memory database.

References

Exam Question 38

A company has on-premises servers running a relational database. The current database serves high read traffic for users in different locations. The company wants to migrate to AWS with the least amount of effort.
The database solution should support disaster recovery and not affect the company’s current traffic flow.

Which solution meets these requirements?

A. Use a database in Amazon RDS with Multi-AZ and at least one read replica.
B. Use a database in Amazon RDS with Multi-AZ and at least one standby replica.
C. Use databases hosted on multiple Amazon EC2 instances in different AWS Regions.
D. Use databases hosted on Amazon EC2 instances behind an Application Load Balancer in different Availability Zones.

Correct Answer

A. Use a database in Amazon RDS with Multi-AZ and at least one read replica.
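
As a hedged illustration of this approach, an RDS Multi-AZ primary with a read replica could be provisioned with boto3 roughly as follows; the identifiers, engine, and sizes are assumptions, not details from the scenario.

```python
import boto3

rds = boto3.client("rds")

# Primary database with Multi-AZ for disaster recovery (synchronous standby).
rds.create_db_instance(
    DBInstanceIdentifier="app-db",             # placeholder identifier
    Engine="mysql",                            # assumed engine
    DBInstanceClass="db.m5.large",             # placeholder instance class
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",     # placeholder; store real secrets securely
    MultiAZ=True,
)

# Read replica to serve the high read traffic without affecting the primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",
    SourceDBInstanceIdentifier="app-db",
)
```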

References

Exam Question 39

A company’s application is running on Amazon EC2 instances within an Auto Scaling group behind an Elastic Load Balancer. Based on the application’s history the company anticipates a spike in traffic during a holiday each year. A solutions architect must design a strategy to ensure that the Auto Scaling group proactively increases capacity to minimize any performance impact on application users.

Which solution will meet these requirements?

A. Create an Amazon CloudWatch alarm to scale up the EC2 instances when CPU utilization exceeds 90%.
B. Create a recurring scheduled action to scale up the Auto Scaling group before the expected period of peak demand.
C. Increase the minimum and maximum number of EC2 instances in the Auto Scaling group during the peak demand period.
D. Configure an Amazon Simple Notification Service (Amazon SNS) notification to send alerts when there are autoscaling EC2_INSTANCE_LAUNCH events.

Correct Answer

B. Create a recurring scheduled action to scale up the Auto Scaling group before the expected period of peak demand.

Answer Description

AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. AWS Auto Scaling refers to a collection of Auto Scaling capabilities across several AWS services.

The services within the AWS Auto Scaling family include Amazon EC2 (known as Amazon EC2 Auto Scaling), Amazon ECS, Amazon DynamoDB, and Amazon Aurora.

The scaling options define the triggers and when instances should be provisioned/de-provisioned. There are four scaling options:

Maintain – keep a specific or minimum number of instances running.

Manual – use maximum, minimum, or a specific number of instances.

Scheduled – increase or decrease the number of instances based on a schedule (see the sketch after this list).

Dynamic – scale based on real-time system metrics (e.g. CloudWatch metrics).
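
For the holiday scenario, a recurring scheduled action could be created with boto3 roughly as in the sketch below; the group name, recurrence expression, and capacities are assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Proactively scale out each year on December 20 at 08:00 UTC, ahead of the
# expected holiday peak.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="app-asg",        # placeholder group name
    ScheduledActionName="holiday-scale-out",
    Recurrence="0 8 20 12 *",              # cron: 08:00 UTC on December 20, every year
    MinSize=4,
    MaxSize=20,
    DesiredCapacity=10,
)
```

A second scheduled action would typically scale the group back down after the peak period ends.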


The scaling options are configured through scaling policies, which determine whether, when, and how the ASG scales out and in.


As an example, an Auto Scaling group might have a scaling policy with a minimum size of 1 instance, a desired capacity of 2 instances, and a maximum size of 4 instances.

Amazon EC2 Auto Scaling supports sending Amazon SNS notifications when instances are launched or terminated, via the EC2_INSTANCE_LAUNCH, EC2_INSTANCE_TERMINATE, EC2_INSTANCE_LAUNCH_ERROR, and EC2_INSTANCE_TERMINATE_ERROR event types. These notifications are reactive alerts, however, and do not proactively add capacity ahead of an expected peak.

Exam Question 40

An Amazon EC2 administrator created the following policy associated with an IAM group containing several users:

What is the effect of this policy?

A. Users can terminate an EC2 instance in any AWS Region except us-east-1.
B. Users can terminate an EC2 instance with the IP address 10.100.100.1 in the us-east-1 Region.
C. Users can terminate an EC2 instance in the us-east-1 Region when the user’s source IP is 10.100.100.254.
D. Users cannot terminate an EC2 instance in the us-east-1 Region when the user’s source IP is 10.100.100.254.

Correct Answer

C. Users can terminate an EC2 instance in the us-east-1 Region when the user’s source IP is 10.100.100.254.

Answer Description

What the policy means:
1. Allow termination of any instance if the user’s source IP address is 10.100.100.254.
2. Deny termination of instances that are not in the us-east-1 Region.
Combining these two, you get:
“Allow instance termination in the us-east-1 Region if the user’s source IP address is 10.100.100.254. Deny the termination operation in all other Regions.”
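
Based on the answer description above, the policy is assumed to be equivalent to something like the following sketch (a hypothetical reconstruction, shown as a Python dictionary for convenience; the exact statements and condition operators in the original may differ).

```python
import json

# Hypothetical reconstruction of the policy described above.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Allow terminating instances when the request comes from this source IP.
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {"IpAddress": {"aws:SourceIp": "10.100.100.254/32"}},
        },
        {
            # Deny terminating instances outside the us-east-1 Region.
            "Effect": "Deny",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {"StringNotEquals": {"ec2:Region": "us-east-1"}},
        },
    ],
}
print(json.dumps(policy, indent=2))
```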