The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam question and answer (Q&A) dumps are available free of charge to help you pass the exam and earn the AWS Certified Solutions Architect – Associate SAA-C03 certification.
Table of Contents
- Exam Question 151
- Correct Answer
- Answer Description
- Exam Question 152
- Correct Answer
- Answer Description
- References
- Exam Question 153
- Correct Answer
- Answer Description
- References
- Exam Question 154
- Correct Answer
- Answer Description
- Exam Question 155
- Correct Answer
- Answer Description
- References
- Exam Question 156
- Correct Answer
- Answer Description
- Exam Question 157
- Correct Answer
- Exam Question 158
- Correct Answer
- Exam Question 159
- Correct Answer
- Exam Question 160
- Correct Answer
Exam Question 151
A company has a two-tier application architecture that runs in public and private subnets. Amazon EC2 instances running the web application are in the public subnet and a database runs in the private subnet.
The web application instances and the database are running in a single Availability Zone (AZ).
Which combination of steps should a solutions architect take to provide high availability for this architecture? (Choose two.)
A. Create new public and private subnets in the same AZ for high availability.
B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs.
C. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer.
D. Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in one AZ.
E. Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment.
Correct Answer
B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs.
E. Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment.
Answer Description
You provide high availability for the web tier by placing the EC2 instances in an Auto Scaling group behind an Application Load Balancer that spans multiple AZs (option B), and for the data tier by creating subnets in a second AZ and migrating the database to an Amazon RDS Multi-AZ deployment (option E).
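As an illustration of options B and E, here is a minimal boto3 sketch that creates an Auto Scaling group spanning two AZs and attaches it to an Application Load Balancer target group. It assumes a launch template and target group already exist; all identifiers are hypothetical.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Two public subnets in different Availability Zones (hypothetical IDs).
web_subnets = "subnet-0aaa1111bbb2222cc,subnet-0ddd3333eee4444ff"

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",                        # hypothetical name
    LaunchTemplate={"LaunchTemplateName": "web-template",  # hypothetical template
                    "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    VPCZoneIdentifier=web_subnets,   # comma-separated subnets spanning multiple AZs
    TargetGroupARNs=[                # hypothetical ALB target group ARN
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "targetgroup/web-tg/0123456789abcdef"
    ],
    HealthCheckType="ELB",           # replace instances that fail ALB health checks
    HealthCheckGracePeriod=120,
)
```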
Exam Question 152
A marketing company is storing CSV files in an Amazon S3 bucket for statistical analysis. An application on an Amazon EC2 instance needs permission to efficiently process the CSV data stored in the S3 bucket.
Which action will MOST securely grant the EC2 instance access to the S3 bucket?
A. Attach a resource-based policy to the S3 bucket.
B. Create an IAM user for the application with specific permissions to the S3 bucket.
C. Associate an IAM role with least privilege permissions to the EC2 instance profile.
D. Store AWS credentials directly on the EC2 instance for applications on the instance to use for API calls.
Correct Answer
C. Associate an IAM role with least privilege permissions to the EC2 instance profile.
Answer Description
Keywords: least privilege + IAM role
AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.
IAM is a feature of your AWS account offered at no additional charge. You will be charged only for use of other AWS services by your users.
IAM roles for Amazon EC2
Applications must sign their API requests with AWS credentials. Therefore, if you are an application developer, you need a strategy for managing credentials for your applications that run on EC2 instances. For example, you can securely distribute your AWS credentials to the instances, enabling the applications on those instances to use your credentials to sign requests, while protecting your credentials from other users. However, it’s challenging to securely distribute credentials to each instance, especially those that AWS creates on your behalf, such as Spot Instances or instances in Auto Scaling groups. You must also be able to update the credentials on each instance when you rotate your AWS credentials.
We designed IAM roles so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use.
Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles as follows:
- Create an IAM role.
- Define which accounts or AWS services can assume the role.
- Define which API actions and resources the application can use after assuming the role.
- Specify the role when you launch your instance, or attach the role to an existing instance.
- Have the application retrieve a set of temporary credentials and use them.
For example, you can use IAM roles to grant permissions to applications running on your instances that need to use a bucket in Amazon S3. You can specify permissions for IAM roles by creating a policy in JSON format. These are similar to the policies that you create for IAM users. If you change a role, the change is propagated to all instances.
When creating IAM roles, associate least privilege IAM policies that restrict access to the specific API calls the application requires.
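For illustration, here is a minimal boto3 sketch of this setup; the role, policy, profile, bucket, and instance identifiers are all hypothetical.

```python
import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Trust policy: only the EC2 service may assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Least-privilege policy: read-only access to a single bucket.
s3_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",    # hypothetical bucket
            "arn:aws:s3:::example-bucket/*",
        ],
    }],
}

iam.create_role(RoleName="app-s3-role",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.put_role_policy(RoleName="app-s3-role", PolicyName="s3-read-csv",
                    PolicyDocument=json.dumps(s3_policy))

# An instance profile is the container that attaches the role to EC2.
iam.create_instance_profile(InstanceProfileName="app-s3-profile")
iam.add_role_to_instance_profile(InstanceProfileName="app-s3-profile",
                                 RoleName="app-s3-role")
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "app-s3-profile"},
    InstanceId="i-0123456789abcdef0",         # hypothetical instance
)
```

The application on the instance then calls S3 with the SDK as usual; temporary credentials are retrieved and rotated automatically, so no long-lived keys are stored on the instance.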
References
- AWS Identity and Access Management (IAM) FAQs
- Amazon Elastic Compute Cloud > User Guide for Linux Instances > IAM roles for Amazon EC2
Exam Question 153
A company hosts an application on multiple Amazon EC2 instances. The application processes messages from an Amazon SQS queue, writes to an Amazon RDS table, and deletes the message from the queue. Occasional duplicate records are found in the RDS table. The SQS queue does not contain any duplicate messages.
What should a solutions architect do to ensure messages are processed only once?
A. Use the CreateQueue API call to create a new queue.
B. Use the AddPermission API call to add appropriate permissions.
C. Use the ReceiveMessage API call to set an appropriate wait time.
D. Use the ChangeMessageVisibility API call to increase the visibility timeout.
Correct Answer
D. Use the ChangeMessageVisibility API call to increase the visibility timeout.
Answer Description
Keywords: duplicate records in the RDS table + no duplicate messages in the SQS queue
Since the queue itself holds no duplicates, the same message must be delivered to consumers more than once: an instance receives a message, the visibility timeout expires before processing and deletion finish, and another instance receives and processes the same message again. Increasing the visibility timeout (option D) gives the application enough time to process and delete each message. Option A only creates a new queue, option B only manages queue permissions, and option C only sets the wait time when retrieving messages; none of these prevents redelivery during processing.
FIFO queues are designed to never introduce duplicate messages. However, your message producer might introduce duplicates in certain scenarios: for example, if the producer sends a message, does not receive a response, and then resends the same message. Amazon SQS APIs provide deduplication functionality that prevents your message producer from sending duplicates. Any duplicates introduced by the message producer are removed within a 5-minute deduplication interval.
For standard queues, you might occasionally receive a duplicate copy of a message (at least once delivery). If you use a standard queue, you must design your applications to be idempotent (that is, they must not be affected adversely when processing the same message more than once).
CreateQueue – You can’t change the queue type after you create it and you can’t convert an existing standard queue into a FIFO queue. You must either create a new FIFO queue for your application or delete your existing standard queue and recreate it as a FIFO queue.
AddPermission – When you create a queue, you have full control access rights for the queue. Only you, the owner of the queue, can grant or deny permissions to the queue.
ReceiveMessage – Retrieves one or more messages (up to 10), from the specified queue.
FIFO queues provide exactly-once processing, which means that each message is delivered once and remains available until a consumer processes it and deletes it.
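For illustration, a minimal boto3 sketch of a consumer that extends the visibility timeout before processing, so the message is not redelivered to another instance mid-processing. The queue URL, timeout value, and processing function are hypothetical.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # hypothetical


def process_and_write_to_rds(body: str) -> None:
    """Placeholder for the application's RDS write logic (not shown)."""


resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)

for message in resp.get("Messages", []):
    # Extend the visibility timeout so slow processing does not let the
    # message reappear in the queue and get processed a second time.
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=message["ReceiptHandle"],
        VisibilityTimeout=300,  # seconds; set above the worst-case processing time
    )

    process_and_write_to_rds(message["Body"])

    # Delete only after the RDS write has succeeded.
    sqs.delete_message(QueueUrl=queue_url,
                       ReceiptHandle=message["ReceiptHandle"])
```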
References
- Amazon Simple Queue Service
- Amazon SQS FAQs
- Amazon Simple Queue Service > Developer Guide > What is Amazon Simple Queue Service?
Exam Question 154
A company has a three-tier image-sharing application. It uses an Amazon EC2 instance for the front-end layer, another for the backend tier, and a third for the MySQL database. A solutions architect has been tasked with designing a solution that is highly available, and requires the least amount of changes to the application.
Which solution meets these requirements?
A. Use Amazon S3 to host the front-end layer and AWS Lambda functions for the backend layer. Move the database to an Amazon DynamoDB table and use Amazon S3 to store and serve users’ images.
B. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end and backend layers. Move the database to an Amazon RDS instance with multiple read replicas to store and serve users’ images.
C. Use Amazon S3 to host the front-end layer and a fleet of Amazon EC2 instances in an Auto Scaling group for the backend layer. Move the database to a memory optimized instance type to store and serve users’ images.
D. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end and backend layers. Move the database to an Amazon RDS instance with a Multi-AZ deployment. Use Amazon S3 to store and serve users’ images.
Correct Answer
D. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end and backend layers. Move the database to an Amazon RDS instance with a Multi-AZ deployment. Use Amazon S3 to store and serve users’ images.
Answer Description
Keywords: highly available + least amount of changes to the application
High availability = Multi-AZ. Least amount of changes to the application = Elastic Beanstalk, which automatically handles deployment, from capacity provisioning and load balancing to auto scaling and application health monitoring, without requiring the application to be rewritten.
Option D is the right choice. Option A requires rewriting the application for Lambda and DynamoDB, option B uses read replicas (which add read scalability, not high availability) and serves images from the database, and option C serves images from a database instance instead of Amazon S3.
HA with Elastic Beanstalk and RDS
AWS Elastic Beanstalk
AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.
You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.
There is no additional charge for Elastic Beanstalk – you pay only for the AWS resources needed to store and run your applications.
AWS RDS
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need.
Amazon RDS is available on several database instance types – optimized for memory, performance or I/O – and provides you with six familiar database engines to choose from, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. You can use the AWS Database Migration Service to easily migrate or replicate your existing databases to Amazon RDS.
AWS S3
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely-tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9’s) of durability, and stores data for millions of applications for companies all around the world.
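As a small illustration of the database tier in option D, here is a hedged boto3 sketch that enables Multi-AZ when creating the MySQL instance; the identifier, instance class, and credentials are hypothetical.

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="image-app-db",   # hypothetical identifier
    Engine="mysql",
    DBInstanceClass="db.m5.large",         # hypothetical instance class
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",       # hypothetical; use Secrets Manager in practice
    MultiAZ=True,  # provisions a synchronous standby replica in a second AZ
)
```

With MultiAZ=True, Amazon RDS fails over to the standby automatically, so the application keeps the same endpoint and needs no code changes.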
Exam Question 155
A solutions architect is designing a system to analyze the performance of financial markets while the markets are closed. The system will run a series of compute-intensive jobs for 4 hours every night. The time to complete the compute jobs is expected to remain constant, and jobs cannot be interrupted once started. Once implemented, the system is expected to run for a minimum of 1 year.
Which type of Amazon EC2 instances should be used to reduce the cost of the system?
A. Spot Instances
B. On-Demand Instances
C. Standard Reserved Instances
D. Scheduled Reserved Instances
Correct Answer
D. Scheduled Reserved Instances
Answer Description
Scheduled Reserved Instances (Scheduled Instances) enable you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term. You reserve the capacity in advance, so that you know it is available when you need it. You pay for the time that the instances are scheduled, even if you do not use them.
Scheduled Instances are a good choice for workloads that do not run continuously, but do run on a regular schedule. For example, you can use Scheduled Instances for an application that runs during business hours or for batch processing that runs at the end of the week.
CORRECT: “Scheduled Reserved Instances” is the correct answer.
INCORRECT: “Standard Reserved Instances” is incorrect because the workload only runs for 4 hours a day, so paying for a full-time reservation would be more expensive.
INCORRECT: “On-Demand Instances” is incorrect because it would be much more expensive, as no discount is applied.
INCORRECT: “Spot Instances” is incorrect because the workload cannot be interrupted once started. With Spot Instances, workloads can be terminated if the Spot price rises or the capacity is needed elsewhere.
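For illustration, a hedged boto3 sketch of finding and purchasing a recurring daily 4-hour schedule; the dates, counts, and the assumption that a matching schedule exists are all hypothetical.

```python
import datetime
import boto3

ec2 = boto3.client("ec2")

# Search for daily schedules offering at least a 4-hour slot.
avail = ec2.describe_scheduled_instance_availability(
    FirstSlotStartTimeRange={
        "EarliestTime": datetime.datetime(2023, 1, 2, 1, 0),  # hypothetical window
        "LatestTime": datetime.datetime(2023, 1, 9, 1, 0),
    },
    Recurrence={"Frequency": "Daily", "Interval": 1},
    MinSlotDurationInHours=4,
)

# Purchase the first matching schedule for a one-year term.
slot = avail["ScheduledInstanceAvailabilitySet"][0]
ec2.purchase_scheduled_instances(
    PurchaseRequests=[{"PurchaseToken": slot["PurchaseToken"],
                       "InstanceCount": 1}]
)
```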
References
- Amazon Elastic Compute Cloud > User Guide for Linux Instances > Scheduled Reserved Instances
Exam Question 156
A bicycle sharing company is developing a multi-tier architecture to track the location of its bicycles during peak operating hours. The company wants to use these data points in its existing analytics platform. A solutions architect must determine the most viable multi-tier option to support this architecture. The data points must be accessible from the REST API.
Which action meets these requirements for storing and retrieving location data?
A. Use Amazon Athena with Amazon S3.
B. Use Amazon API Gateway with AWS Lambda.
C. Use Amazon QuickSight with Amazon Redshift.
D. Use Amazon API Gateway with Amazon Kinesis Data Analytics.
Correct Answer
B. Use Amazon API Gateway with AWS Lambda.
Answer Description
Keywords: existing analytics platform + data points accessible from a REST API + track bicycle locations during peak hours
The company already has an analytics platform, so options A (Athena) and D (Kinesis Data Analytics) are ruled out because each introduces a new analytics service, even though Amazon S3 and API Gateway can both sit behind a REST API. That leaves B and C, and option C does not expose the data through a REST API, so the answer is B.
The data in question is geolocation data for the bicycles. Amazon API Gateway provides the REST API, and AWS Lambda stores and retrieves the location data points, which can then be fed into the existing analytics platform.
CORRECT: “Use Amazon API Gateway with AWS Lambda” is the correct answer.
INCORRECT: “Use Amazon Athena with Amazon S3” is incorrect, as the company already has an analytics platform.
INCORRECT: “Use Amazon QuickSight with Amazon Redshift” is incorrect, as this combination does not expose the data through a REST API.
INCORRECT: “Use Amazon API Gateway with Amazon Kinesis Data Analytics” is incorrect, as the company already has an analytics platform.
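As a sketch of the chosen option, here is a minimal Lambda handler that API Gateway could invoke behind a REST route such as GET /bicycles/{id}/location; the table name, key attribute, and route are hypothetical.

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("bicycle-locations")  # hypothetical table


def lambda_handler(event, context):
    """Return the latest stored location for a bicycle.

    With a REST API and Lambda proxy integration, API Gateway passes
    the {id} path parameter through in event["pathParameters"].
    """
    bicycle_id = event["pathParameters"]["id"]
    resp = table.get_item(Key={"bicycle_id": bicycle_id})

    if "Item" not in resp:
        return {"statusCode": 404,
                "body": json.dumps({"error": "not found"})}

    # default=str converts DynamoDB Decimal values for JSON output.
    return {"statusCode": 200, "body": json.dumps(resp["Item"], default=str)}
```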
Exam Question 157
An application requires a development environment (DEV) and production environment (PROD) for several years. The DEV instances will run for 10 hours each day during normal business hours, while the PROD instances will run 24 hours each day. A solutions architect needs to determine a compute instance purchase strategy to minimize costs.
Which solution is the MOST cost-effective?
A. DEV with Spot Instances and PROD with On-Demand Instances
B. DEV with On-Demand Instances and PROD with Spot Instances
C. DEV with Scheduled Reserved Instances and PROD with Reserved Instances
D. DEV with On-Demand Instances and PROD with Scheduled Reserved Instances
Correct Answer
C. DEV with Scheduled Reserved Instances and PROD with Reserved Instances
DEV runs on a predictable 10-hour daily schedule, which matches Scheduled Reserved Instances, while PROD runs 24 hours a day for years, which matches Standard Reserved Instances.
Exam Question 158
A solutions architect observes that a nightly batch processing job is automatically scaled up for 1 hour before the desired Amazon EC2 capacity is reached. The peak capacity is the same every night and the batch jobs always start at 1 AM. The solutions architect needs to find a cost-effective solution that will allow for the desired EC2 capacity to be reached quickly and allow the Auto Scaling group to scale down after the batch jobs are complete.
What should the solutions architect do to meet these requirements?
A. Increase the minimum capacity for the Auto Scaling group.
B. Increase the maximum capacity for the Auto Scaling group.
C. Configure scheduled scaling to scale up to the desired compute level.
D. Change the scaling policy to add more EC2 instances during each scaling operation.
Correct Answer
C. Configure scheduled scaling to scale up to the desired compute level.
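Scheduled scaling lets the group reach the known peak capacity just before the 1 AM start instead of reacting for an hour, then scale back down afterwards. A minimal boto3 sketch follows; the group name, capacities, and cron expressions are hypothetical.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale up shortly before the nightly batch jobs start at 1 AM.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="batch-asg",          # hypothetical group
    ScheduledActionName="scale-up-for-batch",
    Recurrence="50 0 * * *",                   # 00:50 every day (UTC cron)
    MinSize=10,
    MaxSize=10,
    DesiredCapacity=10,
)

# Scale back down once the batch window is over.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="batch-asg",
    ScheduledActionName="scale-down-after-batch",
    Recurrence="0 3 * * *",                    # 03:00 every day
    MinSize=1,
    MaxSize=10,
    DesiredCapacity=1,
)
```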
Exam Question 159
A Solutions Architect must design a web application that will be hosted on AWS, allowing users to purchase access to premium, shared content that is stored in an S3 bucket. Upon payment, content will be available for download for 14 days before the user is denied access.
Which of the following would be the LEAST complicated implementation?
A. Use an Amazon CloudFront distribution with an origin access identity (OAI). Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design a Lambda function to remove data that is older than 14 days.
B. Use an S3 bucket and provide direct access to the file. Design the application to track purchases in a DynamoDB table. Configure a Lambda function to remove data that is older than 14 days based on a query to Amazon DynamoDB.
C. Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design the application to set an expiration of 14 days for the URL.
D. Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design the application to set an expiration of 60 minutes for the URL and recreate the URL as necessary.
Correct Answer
C. Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design the application to set an expiration of 14 days for the URL.
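For illustration, a hedged sketch using botocore's CloudFrontSigner to generate a signed URL that expires 14 days after purchase; the key-pair ID, private-key path, and URL are hypothetical, and the matching public key must already be registered with CloudFront.

```python
import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message: bytes) -> bytes:
    # Sign with the private key matching the CloudFront public key.
    with open("cloudfront_private_key.pem", "rb") as f:  # hypothetical path
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner("KEYPAIRID123", rsa_signer)    # hypothetical key ID

signed_url = signer.generate_presigned_url(
    "https://dxxxxxxxx.cloudfront.net/premium/report.pdf",  # hypothetical object
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(days=14),
)
print(signed_url)
```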
Exam Question 160
A company wants to run a hybrid workload for data processing. The data needs to be accessed by on-premises applications for local data processing using an NFS protocol, and must also be accessible from the AWS Cloud for further analytics and batch processing.
Which solution will meet these requirements?
A. Use an AWS Storage Gateway file gateway to provide file storage to AWS, then perform analytics on this data in the AWS Cloud.
B. Use an AWS Storage Gateway tape gateway to copy the backup of the local data to AWS, then perform analytics on this data in the AWS Cloud.
C. Use an AWS Storage Gateway volume gateway in a stored volume configuration to regularly take snapshots of the local data, then copy the data to AWS.
D. Use an AWS Storage Gateway volume gateway in a cached volume configuration to back up all the local storage in the AWS Cloud, then perform analytics on this data in the cloud.
Correct Answer
A. Use an AWS Storage Gateway file gateway to provide file storage to AWS, then perform analytics on this data in the AWS Cloud.
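As a sketch of the chosen option, a hedged boto3 example that creates an NFS file share on an already activated file gateway; the gateway ARN, IAM role, and bucket are hypothetical. On-premises applications mount the share over NFS, while the files land in the S3 bucket where cloud-side analytics and batch jobs can process them.

```python
import uuid
import boto3

sgw = boto3.client("storagegateway")

share = sgw.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),  # idempotency token for the request
    GatewayARN=("arn:aws:storagegateway:us-east-1:123456789012:"
                "gateway/sgw-12345678"),                        # hypothetical gateway
    Role="arn:aws:iam::123456789012:role/FileGatewayS3Access",  # hypothetical role
    LocationARN="arn:aws:s3:::example-analytics-bucket",        # hypothetical bucket
)
print(share["FileShareARN"])
```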