The latest free practice exam questions and answers (Q&A) for the Google Professional Cloud Database Engineer certification are provided below to help you prepare for the Google Professional Cloud Database Engineer exam and earn the certification.
Table of Contents
- Question 31
- Exam Question
- Correct Answer
- Question 32
- Exam Question
- Correct Answer
- Question 33
- Exam Question
- Correct Answer
- Question 34
- Exam Question
- Correct Answer
- Question 35
- Exam Question
- Correct Answer
- Question 36
- Exam Question
- Correct Answer
- Question 37
- Exam Question
- Correct Answer
- Question 38
- Exam Question
- Correct Answer
- Question 39
- Exam Question
- Correct Answer
- Question 40
- Exam Question
- Correct Answer
Question 31
Exam Question
You need to migrate existing databases from Microsoft SQL Server 2016 Standard Edition on a single Windows Server 2019 Datacenter Edition to a single Cloud SQL for SQL Server instance. During the discovery phase of your project, you notice that your on-premises server peaks at around 25,000 read IOPS. You need to ensure that your Cloud SQL instance is sized appropriately to maximize read performance. What should you do?
A. Create a SQL Server 2019 Standard on Standard machine type with 4 vCPUs, 15 GB of RAM, and 800 GB of solid-state drive (SSD).
B. Create a SQL Server 2019 Standard on High Memory machine type with at least 16 vCPUs, 104 GB of RAM, and 200 GB of SSD.
C. Create a SQL Server 2019 Standard on High Memory machine type with 16 vCPUs, 104 GB of RAM, and 4 TB of SSD.
D. Create a SQL Server 2019 Enterprise on High Memory machine type with 16 vCPUs, 104 GB of RAM, and 500 GB of SSD.
Correct Answer
C. Create a SQL Server 2019 Standard on High Memory machine type with 16 vCPUs, 104 GB of RAM, and 4 TB of SSD.
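As a rough sanity check on disk sizing: zonal SSD persistent disks deliver read IOPS proportional to provisioned capacity (on the order of 30 read IOPS per GB; exact figures vary by machine type, so verify against current documentation). A minimal sketch of the arithmetic, with that per-GB rate treated as an assumption:

```python
import math

# Assumption: ~30 read IOPS per GB for SSD persistent disk
# (check current Google Cloud persistent disk performance docs).
READ_IOPS_PER_GB = 30

def min_ssd_gb_for_iops(target_read_iops: int) -> int:
    """Smallest SSD size (GB) whose per-GB IOPS budget covers the target."""
    return math.ceil(target_read_iops / READ_IOPS_PER_GB)

required_gb = min_ssd_gb_for_iops(25_000)  # ~834 GB minimum
small_disk_iops = 200 * READ_IOPS_PER_GB   # a 200 GB disk caps at ~6,000 IOPS
```

Under this assumption, a 200 GB disk cannot reach 25,000 read IOPS, while a multi-TB disk leaves comfortable headroom; the vCPU count matters as well, since per-instance IOPS limits also scale with vCPUs.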
Question 32
Exam Question
Your company uses Cloud Spanner for a mission-critical inventory management system that is globally available. You recently loaded stock keeping unit (SKU) and product catalog data from a company acquisition and observed hotspots in the Cloud Spanner database. You want to follow Google-recommended schema design practices to avoid performance degradation. What should you do? (Choose two.)
A. Use an auto-incrementing value as the primary key.
B. Normalize the data model.
C. Promote low-cardinality attributes in multi-attribute primary keys.
D. Promote high-cardinality attributes in multi-attribute primary keys.
E. Use a bit-reversed sequential value as the primary key.
Correct Answer
D. Promote high-cardinality attributes in multi-attribute primary keys.
E. Use a bit-reversed sequential value as the primary key.
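Monotonically increasing keys (such as auto-incrementing IDs) concentrate writes on a single Cloud Spanner split, which is exactly what causes hotspots. Bit-reversing a sequential value moves the fast-changing low-order bits to the front of the key, spreading writes across splits; Spanner offers this natively via bit-reversed sequences, but the idea can be illustrated in a few lines:

```python
def bit_reverse_64(value: int) -> int:
    """Reverse the bits of a 64-bit unsigned integer.

    Sequential IDs (1, 2, 3, ...) share a common high-order prefix, so
    they sort into the same key range. Reversing the bits scatters
    consecutive IDs across the key space.
    """
    result = 0
    for _ in range(64):
        result = (result << 1) | (value & 1)
        value >>= 1
    return result

# Consecutive IDs land far apart in key space:
scattered_keys = [bit_reverse_64(i) for i in (1, 2, 3)]
```

Note that consecutive inputs 1 and 2 map to values that differ in their highest bits, so inserts no longer pile onto one split.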
Question 33
Exam Question
You are managing a small Cloud SQL instance for developers to do testing. The instance is not critical and has a recovery point objective (RPO) of several days. You want to minimize ongoing costs for this instance. What should you do?
A. Take no backups, and turn off transaction log retention.
B. Take one manual backup per day, and turn off transaction log retention.
C. Turn on automated backup, and turn off transaction log retention.
D. Turn on automated backup, and turn on transaction log retention.
Correct Answer
B. Take one manual backup per day, and turn off transaction log retention.
Question 34
Exam Question
Your ecommerce website captures user clickstream data to analyze customer traffic patterns in real time and support personalization features on your website. You plan to analyze this data using big data tools. You need a low-latency solution that can store 8 TB of data and can scale to millions of read and write requests per second. What should you do?
A. Write your data into Bigtable and use Dataproc and the Apache HBase libraries for analysis.
B. Deploy a Cloud SQL environment with read replicas for improved performance. Use Datastream to export data to Cloud Storage and analyze with Dataproc and the Cloud Storage connector.
C. Use Memorystore to handle your low-latency requirements and for real-time analytics.
D. Stream your data into BigQuery and use Dataproc and the BigQuery Storage API to analyze large volumes of data.
Correct Answer
A. Write your data into Bigtable and use Dataproc and the Apache HBase libraries for analysis.
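Bigtable throughput at this scale depends heavily on row-key design: leading with a high-cardinality field such as a user ID keeps timestamp-ordered clickstream writes from hotspotting one tablet. A minimal sketch of one common pattern (field names and the timestamp bound are illustrative assumptions, not a prescribed schema):

```python
def clickstream_row_key(user_id: str, event_ts_ms: int) -> str:
    """Build a Bigtable-style row key: user ID first, then a zero-padded
    reversed timestamp so each user's newest events sort first in a scan."""
    MAX_TS_MS = 10**13  # illustrative upper bound for epoch milliseconds
    reversed_ts = MAX_TS_MS - event_ts_ms
    return f"{user_id}#{reversed_ts:013d}"

older = clickstream_row_key("user42", 1_700_000_000_000)
newer = clickstream_row_key("user42", 1_700_000_000_500)
# The newer event sorts lexicographically before the older one.
```

Because keys start with the user ID, concurrent users write to different key ranges, which is what lets Bigtable scale to millions of requests per second.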
Question 35
Exam Question
You manage a meeting booking application that uses Cloud SQL. During an important launch, the Cloud SQL instance went through a maintenance event that resulted in a downtime of more than 5 minutes and adversely affected your production application. You need to immediately address the maintenance issue to prevent any unplanned events in the future. What should you do?
A. Set your production instance’s maintenance window to non-business hours.
B. Migrate the Cloud SQL instance to Cloud Spanner to avoid any future disruptions due to maintenance.
C. Contact Support to understand why your Cloud SQL instance had a downtime of more than 5 minutes.
D. Use Cloud Scheduler to schedule a maintenance window of no longer than 5 minutes.
Correct Answer
A. Set your production instance’s maintenance window to non-business hours.
Question 36
Exam Question
You recently launched a new product to the US market. You currently have two Bigtable clusters in one US region to serve all the traffic. Your marketing team is planning an immediate expansion to APAC. You need to roll out the regional expansion while implementing high availability according to Google-recommended practices. What should you do?
A. Maintain a target of 23% CPU utilization by locating:
cluster-a in zone us-central1-a
cluster-b in zone europe-west1-d
cluster-c in zone asia-east1-b
B. Maintain a target of 23% CPU utilization by locating:
cluster-a in zone us-central1-a
cluster-b in zone us-central1-b
cluster-c in zone us-east1-a
C. Maintain a target of 35% CPU utilization by locating:
cluster-a in zone us-central1-a
cluster-b in zone australia-southeast1-a
cluster-c in zone europe-west1-d
cluster-d in zone asia-east1-b
D. Maintain a target of 35% CPU utilization by locating:
cluster-a in zone us-central1-a
cluster-b in zone us-central2-a
cluster-c in zone asia-northeast1-b
cluster-d in zone asia-east1-b
Correct Answer
D. Maintain a target of 35% CPU utilization by locating:
cluster-a in zone us-central1-a
cluster-b in zone us-central2-a
cluster-c in zone asia-northeast1-b
cluster-d in zone asia-east1-b
Question 37
Exam Question
You are designing a highly available (HA) Cloud SQL for PostgreSQL instance that will be used by 100 databases. Each database contains 80 tables that were migrated from your on-premises environment to Google Cloud. The applications that use these databases are located in multiple regions in the US, and you need to ensure that read and write operations have low latency. What should you do?
A. Deploy 2 Cloud SQL instances in the us-central1 region with HA enabled, and create read replicas in us-east1 and us-west1.
B. Deploy 2 Cloud SQL instances in the us-central1 region, and create read replicas in us-east1 and us-west1.
C. Deploy 4 Cloud SQL instances in the us-central1 region with HA enabled, and create read replicas in us-central1, us-east1, and us-west1.
D. Deploy 4 Cloud SQL instances in the us-central1 region, and create read replicas in us-central1, us-east1, and us-west1.
Correct Answer
A. Deploy 2 Cloud SQL instances in the us-central1 region with HA enabled, and create read replicas in us-east1 and us-west1.
Question 38
Exam Question
You are designing an augmented reality game for iOS and Android devices. You plan to use Cloud Spanner as the primary backend database for game state storage and player authentication. You want to track in-game rewards that players unlock at every stage of the game. During the testing phase, you discovered that costs are much higher than anticipated, but the query response times are within the SLA. You want to follow Google-recommended practices. You need the database to be performant and highly available while you keep costs low. What should you do?
A. Manually scale down the number of nodes after the peak period has passed.
B. Use interleaving to co-locate parent and child rows.
C. Use the Cloud Spanner query optimizer to determine the most efficient way to execute the SQL query.
D. Use granular instance sizing in Cloud Spanner and Autoscaler.
Correct Answer
D. Use granular instance sizing in Cloud Spanner and Autoscaler.
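Cloud Spanner supports granular sizing in processing units: 1,000 processing units equal one node, and capacity can be provisioned in increments of 100 below 1,000 and increments of 1,000 above that, which the open-source Autoscaler tool can adjust automatically. A small sketch of the conversion and rounding rules:

```python
PU_PER_NODE = 1_000  # 1,000 processing units equal one Spanner node

def nodes_to_processing_units(nodes: int) -> int:
    """Convert a node count into its processing-unit equivalent."""
    return nodes * PU_PER_NODE

def round_up_to_valid_pu(pu: int) -> int:
    """Round a processing-unit request up to a valid granularity:
    multiples of 100 up to 1,000, multiples of 1,000 beyond."""
    if pu <= 0:
        return 100  # minimum instance size
    step = 100 if pu <= 1_000 else 1_000
    return -(-pu // step) * step  # ceiling division, then scale by step
```

This granularity is what keeps costs low: an instance that only needs a fraction of a node's capacity can run at, say, 300 processing units instead of a full node.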
Question 39
Exam Question
You work in the logistics department. Your data analysis team needs daily extracts from Cloud SQL for MySQL to train a machine learning model. The model will be used to optimize next-day routes. You need to export the data in CSV format. You want to follow Google-recommended practices. What should you do?
A. Use Cloud Scheduler to trigger a Cloud Function that will run a select * from table(s) query to call the cloudsql.instances.export API.
B. Use Cloud Scheduler to trigger a Cloud Function through Pub/Sub to call the cloudsql.instances.export API.
C. Use Cloud Composer to orchestrate an export by calling the cloudsql.instances.export API.
D. Use Cloud Composer to execute a select * from table(s) query and export results.
Correct Answer
B. Use Cloud Scheduler to trigger a Cloud Function through Pub/Sub to call the cloudsql.instances.export API.
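The `cloudsql.instances.export` API referenced in the options takes an `exportContext` request body rather than a client-side `SELECT *`; the server streams the result of the export directly to Cloud Storage. A sketch of building that body for a CSV export (the bucket path, database name, and query below are illustrative assumptions; verify the body shape against the current Cloud SQL Admin API reference):

```python
def build_csv_export_body(gcs_uri: str, database: str, select_query: str) -> dict:
    """Request body for the Cloud SQL Admin API `instances.export` method,
    per the sqladmin v1 API shape (verify against current reference docs)."""
    return {
        "exportContext": {
            "kind": "sql#exportContext",
            "fileType": "CSV",
            "uri": gcs_uri,  # destination object, e.g. gs://bucket/file.csv
            "databases": [database],
            "csvExportOptions": {"selectQuery": select_query},
        }
    }

body = build_csv_export_body(
    "gs://example-exports/daily.csv",  # hypothetical bucket
    "logistics",                       # hypothetical database name
    "SELECT * FROM deliveries",        # hypothetical extract query
)
```

A Cloud Function triggered by Pub/Sub would POST this body to the export endpoint for the instance, keeping the heavy lifting server-side.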
Question 40
Exam Question
Your organization operates in a highly regulated industry. Separation of concerns (SoC) and security principle of least privilege (PoLP) are critical. The operations team consists of:
Person A is a database administrator.
Person B is an analyst who generates metric reports.
Application C is responsible for automatic backups.
You need to assign roles to team members for Cloud Spanner. Which roles should you assign?
A. roles/spanner.databaseAdmin for Person A
roles/spanner.databaseReader for Person B
roles/spanner.backupWriter for Application C
B. roles/spanner.databaseAdmin for Person A
roles/spanner.databaseReader for Person B
roles/spanner.backupAdmin for Application C
C. roles/spanner.databaseAdmin for Person A
roles/spanner.databaseUser for Person B
roles/spanner.databaseReader for Application C
D. roles/spanner.databaseAdmin for Person A
roles/spanner.databaseUser for Person B
roles/spanner.backupWriter for Application C
Correct Answer
A. roles/spanner.databaseAdmin for Person A
roles/spanner.databaseReader for Person B
roles/spanner.backupWriter for Application C
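Under least privilege, `roles/spanner.backupWriter` fits the automated-backup application: it is documented as intended for scripts that create backups programmatically, without the delete/restore powers of `roles/spanner.backupAdmin`. Expressed as IAM policy bindings (the principal identifiers below are hypothetical placeholders; the role names are real Cloud Spanner IAM roles):

```python
def spanner_bindings() -> list:
    """IAM bindings for a least-privilege Cloud Spanner assignment.

    Principals are illustrative placeholders; each gets exactly one role.
    """
    return [
        {"role": "roles/spanner.databaseAdmin",   # Person A: full DB administration
         "members": ["user:person-a@example.com"]},
        {"role": "roles/spanner.databaseReader",  # Person B: read-only for reports
         "members": ["user:person-b@example.com"]},
        {"role": "roles/spanner.backupWriter",    # Application C: create backups only
         "members": ["serviceAccount:app-c@example.iam.gserviceaccount.com"]},
    ]
```

Each principal receives the narrowest role that covers its task, which is the separation-of-concerns outcome the question asks for.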