The latest AWS Certified Solutions Architect – Associate SAA-C03 certification practice exam questions and answers (Q&A) are available free of charge to help you pass the AWS Certified Solutions Architect – Associate SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate SAA-C03 certification.
Table of Contents
- Exam Question 741
- Correct Answer
- Answer Description
- References
- Exam Question 742
- Correct Answer
- Answer Description
- References
- Exam Question 743
- Correct Answer
- Answer Description
- References
- Exam Question 744
- Correct Answer
- Answer Description
- References
- Exam Question 745
- Correct Answer
- Answer Description
- References
- Exam Question 746
- Correct Answer
- Answer Description
- References
- Exam Question 747
- Correct Answer
- Answer Description
- References
- Exam Question 748
- Correct Answer
- Answer Description
- References
- Exam Question 749
- Correct Answer
- Answer Description
- References
- Exam Question 750
- Correct Answer
- Answer Description
- References
Exam Question 741
In DynamoDB, can you use IAM to grant access to Amazon DynamoDB resources and API actions?
A. In DynamoDB there is no need to grant access
B. It depends on the type of access
C. No
D. Yes
Correct Answer
D. Yes
Answer Description
Amazon DynamoDB integrates with AWS Identity and Access Management (IAM). You can use AWS IAM to grant access to Amazon DynamoDB resources and API actions. To do this, you first write an AWS IAM policy, which is a document that explicitly lists the permissions you want to grant. You then attach that policy to an AWS IAM user or role.
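As a minimal sketch of that flow using boto3 (assuming AWS credentials are configured; the table ARN, policy name, and user name below are made-up placeholders):

```python
import json
import boto3

iam = boto3.client("iam")

# Policy document that explicitly lists the DynamoDB permissions to grant.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            # Placeholder table ARN for illustration only.
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Books",
        }
    ],
}

# Create the managed policy, then attach it to an existing IAM user.
policy = iam.create_policy(
    PolicyName="DynamoDBReadBooks",
    PolicyDocument=json.dumps(policy_document),
)
iam.attach_user_policy(
    UserName="dynamodb-reader",  # placeholder user
    PolicyArn=policy["Policy"]["Arn"],
)
```

The same policy could instead be attached to an IAM role with attach_role_policy.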
References
- Amazon DynamoDB > Developer Guide > Identity and Access Management in Amazon DynamoDB
Exam Question 742
You are building infrastructure for a data warehousing solution, and an additional requirement has come through: there will be a lot of business reporting queries running all the time, and you are not sure whether your current DB instance will be able to handle the load.
What would be the best solution for this?
A. DB Parameter Groups
B. Read Replicas
C. Multi-AZ DB Instance deployment
D. Database Snapshots
Correct Answer
B. Read Replicas
Answer Description
Read Replicas make it easy to take advantage of MySQL’s built-in replication functionality to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. There are a variety of scenarios where deploying one or more Read Replicas for a given source DB Instance may make sense. Common reasons for deploying a Read Replica include:
- Scaling beyond the compute or I/O capacity of a single DB Instance for read-heavy database workloads. This excess read traffic can be directed to one or more Read Replicas.
- Serving read traffic while the source DB Instance is unavailable. If your source DB Instance cannot take I/O requests (for example, due to I/O suspension for backups or scheduled maintenance), you can direct read traffic to your Read Replica(s). For this use case, keep in mind that the data on the Read Replica may be “stale” since the source DB Instance is unavailable.
- Business reporting or data warehousing scenarios, where you may want business reporting queries to run against a Read Replica rather than your primary, production DB Instance (see the sketch after this list).
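A rough sketch of creating such a replica with boto3 (assuming credentials are configured; the instance identifiers and instance class below are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of an existing source DB instance so that
# reporting queries can be pointed at the replica instead of the primary.
response = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="prod-mysql-reporting-replica",  # placeholder name
    SourceDBInstanceIdentifier="prod-mysql",              # placeholder source
    DBInstanceClass="db.r5.large",
)
print(response["DBInstance"]["DBInstanceIdentifier"])
```

Reporting queries would then connect to the replica’s endpoint rather than the primary’s.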
References
Exam Question 743
You have been given a scope to deploy some AWS infrastructure for a large organization. The requirements are that you will have a lot of EC2 instances but may need to add more when the average utilization of your Amazon EC2 fleet is high and conversely remove them when CPU utilization is low.
Which AWS services would be best to use to accomplish this?
A. Auto Scaling, Amazon CloudWatch and AWS Elastic Beanstalk
B. Auto Scaling, Amazon CloudWatch and Elastic Load Balancing.
C. Amazon CloudFront, Amazon CloudWatch and Elastic Load Balancing.
D. AWS Elastic Beanstalk, Amazon CloudWatch and Elastic Load Balancing.
Correct Answer
B. Auto Scaling, Amazon CloudWatch and Elastic Load Balancing.
Answer Description
Auto Scaling enables you to follow the demand curve for your applications closely, reducing the need to manually provision Amazon EC2 capacity in advance. For example, you can set a condition to add new Amazon EC2 instances in increments to the Auto Scaling group when the average utilization of your Amazon EC2 fleet is high; and similarly, you can set a condition to remove instances in the same increments when CPU utilization is low. If you have predictable load changes, you can set a schedule through Auto Scaling to plan your scaling activities. You can use Amazon CloudWatch to send alarms to trigger scaling activities and Elastic Load Balancing to help distribute traffic to your instances within Auto Scaling groups. Auto Scaling enables you to run your Amazon EC2 fleet at optimal utilization.
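One way to express “add instances when average CPU is high, remove them when it is low” is a target tracking scaling policy. The sketch below assumes an existing Auto Scaling group named web-asg (a placeholder) and a 50% CPU target chosen purely for illustration:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking keeps average fleet CPU near the target: instances are
# added when utilization rises above it and removed when utilization falls.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",      # placeholder group name
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```

Behind the scenes, this policy creates the CloudWatch alarms that trigger the scaling activities, and an Elastic Load Balancer in front of the group distributes traffic across whatever instances are currently running.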
References
Exam Question 744
Which of the options below is not available when an instance is launched by Auto Scaling with EC2-Classic?
A. Public IP
B. Elastic IP
C. Private DNS
D. Private IP
Correct Answer
B. Elastic IP
Answer Description
Auto Scaling supports both EC2-Classic and EC2-VPC. When an instance is launched as a part of EC2-Classic, it will have a public IP and DNS as well as a private IP and DNS, but an Elastic IP address is not associated automatically.
References
- Amazon EC2 Auto Scaling > User Guide > Getting started with Amazon EC2 Auto Scaling
Exam Question 745
You are checking the workload on some of your General Purpose (SSD) and Provisioned IOPS (SSD) volumes and it seems that the I/O latency is higher than you require. You should probably check the _________ to make sure that your application is not trying to drive more IOPS than you have provisioned.
A. Amount of IOPS that are available
B. Acknowledgement from the storage subsystem
C. Average queue length
D. Time it takes for the I/O operation to complete
Correct Answer
C. Average queue length
Answer Description
In EBS workload demand plays an important role in getting the most out of your General Purpose (SSD) and Provisioned IOPS (SSD) volumes. In order for your volumes to deliver the amount of IOPS that are available, they need to have enough I/O requests sent to them. There is a relationship between the demand on the volumes, the amount of IOPS that are available to them, and the latency of the request (the amount of time it takes for the I/O operation to complete).
Latency is the true end-to-end client time of an I/O operation; in other words, when the client sends an I/O request, how long does it take to get an acknowledgment from the storage subsystem that the I/O read or write is complete.
If your I/O latency is higher than you require, check your average queue length to make sure that your application is not trying to drive more IOPS than you have provisioned. You can maintain high IOPS while keeping latency down by maintaining a low average queue length (which is achieved by provisioning more IOPS for your volume).
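As a rough illustration, the average queue length of an EBS volume can be read from the CloudWatch VolumeQueueLength metric. The sketch below assumes boto3 with configured credentials; the volume ID is a placeholder:

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

# Pull the average queue length for one EBS volume over the last hour,
# in 5-minute buckets.
now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeQueueLength",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],  # placeholder
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])
```

A persistently high average queue length relative to the provisioned IOPS is the signal that the application is driving more I/O than the volume can absorb.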
References
- Amazon Elastic Compute Cloud > User Guide for Linux Instances > What is Amazon EC2?
Exam Question 746
You have been asked to build a data warehouse using Amazon Redshift. You know a little about it, including that it is a SQL data warehouse solution and uses industry-standard ODBC and JDBC connections and PostgreSQL drivers. However, you are not sure what sort of storage it uses for database tables. What sort of storage does Amazon Redshift use for database tables?
A. InnoDB Tables
B. NDB data storage
C. Columnar data storage
D. NDB CLUSTER Storage
Correct Answer
C. Columnar data storage
Answer Description
Amazon Redshift achieves efficient storage and optimum query performance through a combination of massively parallel processing, columnar data storage, and very efficient, targeted data compression encoding schemes.
Columnar storage for database tables is an important factor in optimizing analytic query performance because it drastically reduces the overall disk I/O requirements and reduces the amount of data you need to load from disk.
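To make this concrete, the sketch below connects with a standard PostgreSQL driver (psycopg2) and creates a table; Redshift stores the table in columnar form with per-column compression encodings. The cluster endpoint, database name, credentials, and table definition are all placeholders:

```python
import psycopg2

# Connect to the Redshift cluster over its PostgreSQL-compatible endpoint.
conn = psycopg2.connect(
    host="my-cluster.abc123xyz.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439,
    dbname="analytics",
    user="admin",
    password="example-password",
)
with conn.cursor() as cur:
    # Each column is stored separately on disk; ENCODE sets its compression.
    cur.execute(
        """
        CREATE TABLE sales (
            sale_id BIGINT        ENCODE az64,
            region  VARCHAR(32)   ENCODE lzo,
            amount  DECIMAL(12,2) ENCODE az64
        );
        """
    )
conn.commit()
```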
References
- Amazon Redshift > Database Developer Guide > Columnar storage
Exam Question 747
When using EC2 GET requests as URLs, the _________ is the URL that serves as the entry point for the web service.
A. token
B. endpoint
C. action
D. None of these
Correct Answer
B. endpoint
Answer Description
The endpoint is the URL that serves as the entry point for the web service.
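For example, an EC2 Query API GET request is built on top of a regional endpoint. The sketch below only constructs the URL; a real request would also need Signature Version 4 signing, which is omitted here:

```python
from urllib.parse import urlencode

# The endpoint is the entry point for the web service; the query string
# carries the action and API version.
endpoint = "https://ec2.us-east-1.amazonaws.com"
params = {
    "Action": "DescribeInstances",
    "Version": "2016-11-15",
}
request_url = f"{endpoint}/?{urlencode(params)}"
print(request_url)
# https://ec2.us-east-1.amazonaws.com/?Action=DescribeInstances&Version=2016-11-15
```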
References
- Amazon Elastic Compute Cloud > API Reference > Query requests for Amazon EC2
Exam Question 748
Can you specify the security group that you created for a VPC when you launch an instance in EC2-Classic?
A. No, you can specify the security group created for EC2-Classic when you launch a VPC instance.
B. No
C. Yes
D. No, you can specify the security group created for EC2-Classic to a non-VPC based instance only.
Correct Answer
B. No
Answer Description
If you’re using EC2-Classic, you must use security groups created specifically for EC2-Classic. When you launch an instance in EC2-Classic, you must specify a security group in the same region as the instance. You can’t specify a security group that you created for a VPC when you launch an instance in EC2-Classic.
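By contrast, when launching into a VPC you reference the security group by its ID along with a subnet. In the boto3 sketch below, the AMI, subnet, and security group IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Launching into a VPC: the security group is identified by its ID and
# must belong to the same VPC as the subnet.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",          # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",       # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder VPC security group
)
```

An EC2-Classic launch could only use groups created for EC2-Classic, referenced by name rather than by ID.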
References
- Amazon Elastic Compute Cloud > User Guide for Linux Instances > Amazon EC2 security groups for Linux instances
Exam Question 749
You are setting up a VPC and you need to set up a public subnet within that VPC. Which of the following requirements must be met for this subnet to be considered a public subnet?
A. Subnet’s traffic is not routed to an internet gateway but has its traffic routed to a virtual private gateway.
B. Subnet’s traffic is routed to an internet gateway.
C. Subnet’s traffic is not routed to an internet gateway.
D. None of these answers can be considered a public subnet.
Correct Answer
B. Subnet’s traffic is routed to an internet gateway.
Answer Description
A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS cloud. You can launch your AWS resources, such as Amazon EC2 instances, into your VPC. You can configure your VPC: you can select its IP address range, create subnets, and configure route tables, network gateways, and security settings.
A subnet is a range of IP addresses in your VPC. You can launch AWS resources into a subnet that you select. Use a public subnet for resources that must be connected to the internet, and a private subnet for resources that won’t be connected to the internet.
If a subnet’s traffic is routed to an internet gateway, the subnet is known as a public subnet. If a subnet doesn’t have a route to the internet gateway, the subnet is known as a private subnet. If a subnet doesn’t have a route to the internet gateway, but has its traffic routed to a virtual private gateway, the subnet is known as a VPN-only subnet.
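As a rough sketch of what makes a subnet public, the boto3 example below adds a default route to an internet gateway and associates the route table with a subnet; the VPC, internet gateway, and subnet IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a route table in the VPC.
route_table = ec2.create_route_table(VpcId="vpc-0123456789abcdef0")  # placeholder VPC
rt_id = route_table["RouteTable"]["RouteTableId"]

# Default route to the internet gateway: this is what makes the subnet public.
ec2.create_route(
    RouteTableId=rt_id,
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0123456789abcdef0",  # placeholder internet gateway
)

# Associate the route table with the subnet that should be public.
ec2.associate_route_table(
    RouteTableId=rt_id,
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet
)
```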
References
- Amazon Virtual Private Cloud > User Guide > VPCs and subnets
Exam Question 750
In EC2, what happens to the data in an instance store if an instance reboots (either intentionally or unintentionally)?
A. Data is deleted from the instance store for security reasons.
B. Data persists in the instance store.
C. Data is partially present in the instance store.
D. Data in the instance store will be lost.
Correct Answer
B. Data persists in the instance store.
Answer Description
The data in an instance store persists only during the lifetime of its associated instance. If an instance reboots (intentionally or unintentionally), data in the instance store persists. However, data on instance store volumes is lost under the following circumstances.
- Failure of an underlying drive
- Stopping an Amazon EBS-backed instance
- Terminating an instance
References
- Amazon Elastic Compute Cloud > User Guide for Linux Instances > Amazon EC2 instance store