Google Professional Cloud Database Engineer Exam Questions and Answers – 2

The latest Google Professional Cloud Database Engineer practice exam questions and answers (Q&A) are available free, and can help you prepare for the Google Professional Cloud Database Engineer exam and earn the certification.

Question 101

Exam Question

You need to perform a one-time migration of data from a running Cloud SQL for MySQL instance in the us-central1 region to a new Cloud SQL for MySQL instance in the us-east1 region. You want to follow Google-recommended practices to minimize performance impact on the currently running instance. What should you do?

A. Create and run a Dataflow job that uses JdbcIO to copy data from one Cloud SQL instance to another.
B. Create two Datastream connection profiles, and use them to create a stream from one Cloud SQL instance to another.
C. Create a SQL dump file in Cloud Storage using a temporary instance, and then use that file to import into a new instance.
D. Create a CSV file by running the SQL statement SELECT…INTO OUTFILE, copy the file to a Cloud Storage bucket, and import it into a new instance.

Correct Answer

C. Create a SQL dump file in Cloud Storage using a temporary instance, and then use that file to import into a new instance.
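In practice, this approach maps to the `gcloud sql export sql` and `gcloud sql import sql` commands. The sketch below assumes hypothetical instance names (`source-mysql`, `target-mysql`) and bucket (`gs://my-migration-bucket`); the `--offload` flag performs a serverless export via a temporary instance, which is what minimizes load on the running primary.

```shell
# Serverless export: a temporary instance reads the data,
# so the running source instance is not impacted
gcloud sql export sql source-mysql \
  gs://my-migration-bucket/dump.sql.gz \
  --database=mydb \
  --offload

# Import the dump into the new instance in us-east1
gcloud sql import sql target-mysql \
  gs://my-migration-bucket/dump.sql.gz \
  --database=mydb
```

The Cloud SQL service account on each instance needs read/write access to the bucket for the export and import to succeed.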

Question 102

Exam Question

During an internal audit, you realized that one of your Cloud SQL for MySQL instances does not have high availability (HA) enabled. You want to follow Google-recommended practices to enable HA on your existing instance. What should you do?

A. Create a new Cloud SQL for MySQL instance, enable HA, and use the export and import option to migrate your data.
B. Create a new Cloud SQL for MySQL instance, enable HA, and use Cloud Data Fusion to migrate your data.
C. Use the gcloud instances patch command to update your existing Cloud SQL for MySQL instance.
D. Shut down your existing Cloud SQL for MySQL instance, and enable HA.

Correct Answer

A. Create a new Cloud SQL for MySQL instance, enable HA, and use the export and import option to migrate your data.
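A rough sketch of this flow, with hypothetical instance and bucket names: create the replacement instance with `--availability-type=REGIONAL` (which is how HA is expressed in `gcloud`), then move the data with an export/import cycle.

```shell
# Create the new instance with HA enabled (regional availability)
gcloud sql instances create mysql-ha \
  --database-version=MYSQL_8_0 \
  --region=us-central1 \
  --availability-type=REGIONAL \
  --tier=db-n1-standard-2

# Export from the existing instance and import into the new one
gcloud sql export sql mysql-old gs://my-bucket/dump.sql.gz --database=mydb
gcloud sql import sql mysql-ha gs://my-bucket/dump.sql.gz --database=mydb
```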

Question 103

Exam Question

Your organization works with sensitive data that requires you to manage your own encryption keys. You are working on a project that stores that data in a Cloud SQL database. You need to ensure that stored data is encrypted with your keys. What should you do?

A. Use Cloud SQL Auth proxy.
B. Export data periodically to a Cloud Storage bucket protected by Customer-Supplied Encryption Keys.
C. Use customer-managed encryption keys with Cloud SQL.
D. Connect to Cloud SQL using a connection that has SSL encryption.

Correct Answer

C. Use customer-managed encryption keys with Cloud SQL.
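Customer-managed encryption keys (CMEK) are attached at instance creation time via the `--disk-encryption-key` flag, referencing a Cloud KMS key in the same region as the instance. The project, key ring, and key names below are placeholders.

```shell
# Create a Cloud SQL instance whose storage is encrypted
# with a customer-managed Cloud KMS key (CMEK)
gcloud sql instances create encrypted-instance \
  --database-version=POSTGRES_14 \
  --region=us-central1 \
  --tier=db-custom-2-8192 \
  --disk-encryption-key=projects/MY_PROJECT/locations/us-central1/keyRings/MY_RING/cryptoKeys/MY_KEY
```

Note that CMEK cannot be enabled on an existing instance; the key must be specified when the instance is created.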

Question 104

Exam Question

You are designing a new gaming application that uses a highly transactional relational database to store player authentication and inventory data in Google Cloud. You want to launch the game in multiple regions. What should you do?

A. Use BigQuery to deploy the database.
B. Use Cloud SQL with a regional read replica to deploy the database.
C. Use Bigtable with clusters in multiple regions to deploy the database.
D. Use Cloud Spanner to deploy the database.

Correct Answer

D. Use Cloud Spanner to deploy the database.
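Cloud Spanner provides strongly consistent transactions across regions, which suits authentication and inventory data. A minimal sketch of creating a multi-region instance, assuming the `nam3` (North America) configuration and a hypothetical instance name:

```shell
# Create a multi-region Cloud Spanner instance for the game backend
gcloud spanner instances create game-db \
  --config=nam3 \
  --description="Multi-region gaming database" \
  --nodes=3
```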

Question 105

Exam Question

Your neural network model is taking days to train. You want to increase the training speed. What can you do?

A. Increase the number of layers in your neural network.
B. Increase the number of input features to your model.
C. Subsample your test dataset.
D. Subsample your training dataset.

Correct Answer

D. Subsample your training dataset.

Question 106

Exam Question

You are migrating an on-premises application to Google Cloud. The application requires a high availability (HA) PostgreSQL database to support business-critical functions. Your company’s disaster recovery strategy requires a recovery time objective (RTO) and recovery point objective (RPO) within 30 minutes of failure. You plan to use a Google Cloud managed service. What should you do to maximize uptime for your application?

A. Deploy Cloud SQL for PostgreSQL in a regional configuration. Create a read replica in a different zone in the same region and a read replica in another region for disaster recovery.
B. Deploy Cloud SQL for PostgreSQL in a regional configuration with HA enabled. Create a cross-region read replica, and promote the read replica as the primary node for disaster recovery.
C. Deploy Cloud SQL for PostgreSQL in a regional configuration with HA enabled. Take periodic backups, and use this backup to restore to a new Cloud SQL for PostgreSQL instance in another region during a disaster recovery event.
D. Migrate the PostgreSQL database to multi-regional Cloud Spanner so that a single region outage will not affect your application. Update the schema to support Cloud Spanner data types, and refactor the application.

Correct Answer

B. Deploy Cloud SQL for PostgreSQL in a regional configuration with HA enabled. Create a cross-region read replica, and promote the read replica as the primary node for disaster recovery.
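The HA configuration covers zonal failures within the region, while the cross-region replica covers regional disasters. A sketch of the replica setup and DR promotion, with hypothetical instance names:

```shell
# Create a cross-region read replica of the HA primary
gcloud sql instances create pg-replica \
  --master-instance-name=pg-primary \
  --region=us-east1

# During a disaster recovery event, promote the replica
# to a standalone, writable primary
gcloud sql instances promote-replica pg-replica
```

Because the replica is continuously replicating, promotion typically completes well within a 30-minute RTO, and the RPO is bounded by replication lag rather than by backup age.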

Question 107

Exam Question

You are the DBA of an online tutoring application that runs on a Cloud SQL for PostgreSQL database. You are testing the implementation of the cross-regional failover configuration. The database in region R1 fails over successfully to region R2, and the database becomes available for the application to process data. During testing, certain scenarios of the application work as expected in region R2, but a few scenarios fail with database errors. The application-related database queries, when executed in isolation from Cloud SQL for PostgreSQL in region R2, work as expected. The application performs completely as expected when the database fails back to region R1. You need to identify the cause of the database errors in region R2. What should you do?

A. Determine whether the failover of Cloud SQL for PostgreSQL from region R1 to region R2 is in progress or has completed successfully.
B. Determine whether Cloud SQL for PostgreSQL in region R2 is a near-real-time copy of region R1 but not an exact copy.
C. Determine whether the database patches of Cloud SQL for PostgreSQL in regions R1 and R2 are different.
D. Determine whether the versions of Cloud SQL for PostgreSQL in regions R1 and R2 are different.

Correct Answer

B. Determine whether Cloud SQL for PostgreSQL in region R2 is a near-real-time copy of region R1 but not an exact copy.

Question 108

Exam Question

You are running a large, highly transactional application on Oracle Real Application Cluster (RAC) that is multi-tenant and uses shared storage. You need a solution that ensures high-performance throughput and a low-latency connection between applications and databases. The solution must also support existing Oracle features and provide ease of migration to Google Cloud. What should you do?

A. Migrate to Bare Metal Solution for Oracle.
B. Migrate to Google Kubernetes Engine (GKE).
C. Migrate to Google Cloud VMware Engine.
D. Migrate to Compute Engine.

Correct Answer

A. Migrate to Bare Metal Solution for Oracle.

Question 109

Exam Question

You are managing a set of Cloud SQL databases in Google Cloud. Regulations require that database backups reside in the region where the database is created. You want to minimize operational costs and administrative effort. What should you do?

A. Disable automated backups, and create an on-demand backup routine to a regional Cloud Storage bucket.
B. Disable automated backups, and configure serverless exports to a regional Cloud Storage bucket.
C. Configure the automated backups to use a regional Cloud Storage bucket as a custom location.
D. Use the default configuration for the automated backups location.

Correct Answer

C. Configure the automated backups to use a regional Cloud Storage bucket as a custom location.
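Automated backups support a custom location, set as a region on the instance itself rather than by pointing at a bucket directly. A sketch, assuming a hypothetical instance in `us-central1`:

```shell
# Pin automated backups to the instance's own region
gcloud sql instances patch my-instance \
  --backup-location=us-central1
```

This keeps the managed backup schedule (minimal administrative effort) while satisfying the data-residency requirement.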

Question 110

Exam Question

You have an application that sends banking events to Bigtable cluster-a in us-east. You decide to add cluster-b in us-central1. Cluster-a replicates data to cluster-b. You need to ensure that Bigtable continues to accept read and write requests if one of the clusters becomes unavailable and that requests are routed automatically to the other cluster. What deployment strategy should you use?

A. Create a custom app profile with multi-cluster routing.
B. Use the default app profile with single-cluster routing.
C. Use the default app profile with multi-cluster routing.
D. Create a custom app profile with single-cluster routing.

Correct Answer

A. Create a custom app profile with multi-cluster routing.
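Multi-cluster routing lets Bigtable automatically route requests to the nearest available cluster and fail over transparently. A sketch of creating such an app profile, with hypothetical profile and instance names; `--route-any` is the flag that selects multi-cluster routing:

```shell
# Custom app profile that routes to any available cluster,
# so traffic fails over automatically if one cluster goes down
gcloud bigtable app-profiles create multi-cluster-profile \
  --instance=banking-instance \
  --route-any \
  --description="Multi-cluster routing for automatic failover"
```

The default app profile cannot simply be relied on here, which is why a custom profile with explicit multi-cluster routing is the recommended deployment strategy.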