This free set of practice questions and answers (Q&A) for the Google Professional Cloud Database Engineer certification exam can help you prepare to pass the exam and earn the Google Professional Cloud Database Engineer certification.
Table of Contents
- Question 21
- Exam Question
- Correct Answer
- Question 22
- Exam Question
- Correct Answer
- Question 23
- Exam Question
- Correct Answer
- Question 24
- Exam Question
- Correct Answer
- Question 25
- Exam Question
- Correct Answer
- Question 26
- Exam Question
- Correct Answer
- Question 27
- Exam Question
- Correct Answer
- Question 28
- Exam Question
- Correct Answer
- Question 29
- Exam Question
- Correct Answer
- Question 30
- Exam Question
- Correct Answer
Question 21
Exam Question
You released a popular mobile game and are using a 50 TB Cloud Spanner instance to store game data in a PITR-enabled production environment. When you analyzed the game statistics, you realized that some players are exploiting a loophole to gather more points to get on the leaderboard. Another DBA accidentally ran an emergency bugfix script that corrupted some of the data in the production environment. You need to determine the extent of the data corruption and restore the production environment. What should you do? (Choose two.)
A. If the corruption is significant, use backup and restore, and specify a recovery timestamp.
B. If the corruption is significant, perform a stale read and specify a recovery timestamp. Write the results back.
C. If the corruption is significant, use import and export.
D. If the corruption is insignificant, use backup and restore, and specify a recovery timestamp.
E. If the corruption is insignificant, perform a stale read and specify a recovery timestamp. Write the results back.
Correct Answer
A. If the corruption is significant, use backup and restore, and specify a recovery timestamp.
E. If the corruption is insignificant, perform a stale read and specify a recovery timestamp. Write the results back.
With PITR enabled, minor corruption can be repaired in place: perform a stale read at a timestamp just before the bugfix script ran and write the uncorrupted values back. Widespread corruption calls for restoring the entire database from a backup taken at a recovery timestamp, since rewriting most of a 50 TB database row by row is impractical.
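As a sketch of the recovery options above: the helper below computes a recovery timestamp (which must fall inside the PITR retention window, at most seven days) and builds a hypothetical `gcloud spanner backups create` invocation that pins the backup to that version time. The flag names are assumptions to verify against current gcloud documentation; with the Python client, a stale read would similarly pass the timestamp to `database.snapshot(read_timestamp=...)`.

```python
from datetime import datetime, timedelta, timezone

# Spanner PITR retains prior versions for up to 7 days (when extended
# retention is configured); a recovery timestamp must fall in that window.
MAX_RETENTION = timedelta(days=7)

def recovery_timestamp(minutes_ago, now=None):
    """Return a UTC timestamp suitable for a stale read or a pinned backup."""
    now = now or datetime.now(timezone.utc)
    ts = now - timedelta(minutes=minutes_ago)
    if now - ts > MAX_RETENTION:
        raise ValueError("timestamp falls outside the PITR retention window")
    return ts

def backup_with_version_time_cmd(instance, database, backup, ts):
    # Hypothetical gcloud invocation: pins the backup to a pre-corruption
    # version time. Verify the flag names against current gcloud docs.
    return [
        "gcloud", "spanner", "backups", "create", backup,
        f"--instance={instance}",
        f"--database={database}",
        f"--version-time={ts.isoformat()}",
        "--retention-period=2w",
    ]

cmd = backup_with_version_time_cmd(
    "prod-instance", "game-db", "pre-corruption", recovery_timestamp(30))
```

The instance, database, and backup names are illustrative placeholders.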
Question 22
Exam Question
Your customer is running a MySQL database on-premises with read replicas. The nightly incremental backups are expensive and add maintenance overhead. You want to follow Google-recommended practices to migrate the database to Google Cloud, and you need to ensure minimal downtime. What should you do?
A. Create a Google Kubernetes Engine (GKE) cluster, install MySQL on the cluster, and then import the dump file.
B. Use the mysqldump utility to take a backup of the existing on-premises database, and then import it into Cloud SQL.
C. Create a Compute Engine VM, install MySQL on the VM, and then import the dump file.
D. Create an external replica, and use Cloud SQL to synchronize the data to the replica.
Correct Answer
D. Create an external replica, and use Cloud SQL to synchronize the data to the replica.
Continuous replication to a Cloud SQL replica keeps the target in sync with the on-premises source, so cutover needs only a brief switchover. A one-time mysqldump export and import would require extended write downtime, which contradicts the minimal-downtime requirement.
Question 23
Exam Question
You are starting a large CSV import into a Cloud SQL for MySQL instance that has many open connections. You checked memory and CPU usage, and sufficient resources are available. You want to follow Google-recommended practices to ensure that the import will not time out. What should you do?
A. Close idle connections or restart the instance before beginning the import operation.
B. Increase the amount of memory allocated to your instance.
C. Ensure that the service account has the Storage Admin role.
D. Increase the number of CPUs for the instance to ensure that it can handle the additional import operation.
Correct Answer
A. Close idle connections or restart the instance before beginning the import operation.
Cloud SQL's import guidance notes that a large number of open connections consumes memory and can cause long-running imports to time out; since CPU and memory are already sufficient, freeing idle connections (or restarting) is the recommended step. The Storage Admin role affects access to Cloud Storage, not import timeouts.
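Closing idle connections (option A) can be scripted before the import. The sketch below filters rows shaped like MySQL's `SHOW PROCESSLIST` output and emits `KILL` statements for connections that have been sleeping past a threshold; the sample data is illustrative.

```python
# Each row mimics the (Id, User, Command, Time) columns of SHOW PROCESSLIST.
def idle_kill_statements(processlist, min_idle_seconds=300):
    """Return KILL statements for connections sleeping past the threshold."""
    return [
        f"KILL {pid};"
        for pid, user, command, idle in processlist
        if command == "Sleep" and idle >= min_idle_seconds
    ]

sample = [
    (12, "app", "Sleep", 900),  # idle long enough to kill
    (13, "app", "Query", 2),    # active, leave alone
    (14, "etl", "Sleep", 10),   # idle, but under the threshold
]
print(idle_kill_statements(sample))  # → ['KILL 12;']
```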
Question 24
Exam Question
You are evaluating Cloud SQL for PostgreSQL as a possible destination for your on-premises PostgreSQL instances. Geography is becoming increasingly relevant to customer privacy worldwide. Your solution must support data residency requirements and include a strategy to: configure where data is stored control where the encryption keys are stored govern the access to data
What should you do?
A. Replicate Cloud SQL databases across different zones.
B. Create a Cloud SQL for PostgreSQL instance on Google Cloud for the data that does not need to adhere to data residency requirements. Keep the data that must adhere to data residency requirements on-premises. Make application changes to support both databases.
C. Allow application access to data only if the users are in the same region as the Google Cloud region for the Cloud SQL for PostgreSQL database.
D. Use features like customer-managed encryption keys (CMEK), VPC Service Controls, and Identity and Access Management (IAM) policies.
Correct Answer
D. Use features like customer-managed encryption keys (CMEK), VPC Service Controls, and Identity and Access Management (IAM) policies.
Choosing the instance's region controls where data is stored, CMEK controls where the encryption keys are stored, and VPC Service Controls plus IAM policies govern access, covering all three requirements. Restricting users by location (option C) addresses neither storage location nor key management.
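For illustration, the region and CMEK controls described in option D map to concrete settings at instance-creation time. The sketch below assembles the Cloud KMS key resource name and a hypothetical `gcloud sql instances create` command; the flag names (for example `--disk-encryption-key`) are assumptions to verify against current documentation.

```python
def cmek_key_name(project, location, keyring, key):
    """Cloud KMS key resource name; its location pins where the key is stored."""
    return (f"projects/{project}/locations/{location}/"
            f"keyRings/{keyring}/cryptoKeys/{key}")

def create_instance_cmd(name, region, key_name):
    # Hypothetical gcloud invocation; check flag names against current docs.
    return [
        "gcloud", "sql", "instances", "create", name,
        "--database-version=POSTGRES_15",
        f"--region={region}",                 # data residency: required region
        f"--disk-encryption-key={key_name}",  # CMEK: customer-managed key
    ]

key = cmek_key_name("my-project", "europe-west4", "sql-ring", "sql-key")
cmd = create_instance_cmd("residency-db", "europe-west4", key)
```

All project, key, and instance names are placeholders; IAM policies and VPC Service Controls perimeters are configured separately.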
Question 25
Exam Question
You are migrating your data center to Google Cloud. You plan to migrate your applications to Compute Engine and your Oracle databases to Bare Metal Solution for Oracle. You must ensure that the applications in different projects can communicate securely and efficiently with the Oracle databases. What should you do?
A. Set up a Shared VPC, configure multiple service projects, and create firewall rules.
B. Set up Serverless VPC Access.
C. Set up Private Service Connect.
D. Set up Traffic Director.
Correct Answer
A. Set up a Shared VPC, configure multiple service projects, and create firewall rules.
Question 26
Exam Question
Your application uses Cloud SQL for MySQL. Your users run reports on data that relies on near-real time; however, the additional analytics caused excessive load on the primary database. You created a read replica for the analytics workloads, but now your users are complaining about the lag in data changes and that their reports are still slow. You need to improve the report performance and shorten the lag in data replication without making changes to the current reports. Which two approaches should you implement? (Choose two.)
A. Create secondary indexes on the replica.
B. Create additional read replicas, and partition your analytics users to use different read replicas.
C. Disable replication on the read replica, and set the flag for parallel replication on the read replica. Re-enable replication and optimize performance by setting flags on the primary instance.
D. Disable replication on the primary instance, and set the flag for parallel replication on the primary instance. Re-enable replication and optimize performance by setting flags on the read replica.
E. Move your analytics workloads to BigQuery, and set up a streaming pipeline to move data and update BigQuery.
Correct Answer
B. Create additional read replicas, and partition your analytics users to use different read replicas.
C. Disable replication on the read replica, and set the flag for parallel replication on the read replica. Re-enable replication and optimize performance by setting flags on the primary instance.
Additional replicas spread the analytics load across instances, and enabling parallel replication on the replica (with supporting flags on the primary) shortens replication lag. Moving the workload to BigQuery (option E) would require rewriting the reports, which the scenario rules out.
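The parallel-replication change described in option C ultimately comes down to setting database flags. The sketch below builds a hypothetical `gcloud sql instances patch` command; the flag names shown follow MySQL 5.7 naming and are assumptions to check against Cloud SQL's supported-flags list for your MySQL version.

```python
def patch_flags_cmd(instance, flags):
    """Build a gcloud command that sets database flags on an instance."""
    joined = ",".join(f"{k}={v}" for k, v in flags.items())
    return ["gcloud", "sql", "instances", "patch", instance,
            f"--database-flags={joined}"]

# Assumed flag names (MySQL 5.7 naming); newer MySQL versions rename them.
replica_cmd = patch_flags_cmd("analytics-replica", {
    "slave_parallel_workers": 4,
    "slave_parallel_type": "LOGICAL_CLOCK",
})
```

Replication on the replica would be stopped before applying the flags and re-enabled afterwards, per the option's wording.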
Question 27
Exam Question
You are running an instance of Cloud Spanner as the backend of your ecommerce website. You learn that the quality assurance (QA) team has doubled the number of their test cases. You need to create a copy of your Cloud Spanner database in a new test environment to accommodate the additional test cases. You want to follow Google-recommended practices. What should you do?
A. Use Cloud Functions to run the export in Avro format.
B. Use Cloud Functions to run the export in text format.
C. Use Dataflow to run the export in Avro format.
D. Use Dataflow to run the export in text format.
Correct Answer
C. Use Dataflow to run the export in Avro format.
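Running the Avro export through Dataflow typically means launching Google's Spanner-to-Avro template. The sketch below builds a hypothetical `gcloud dataflow jobs run` command; the template path and parameter names are assumptions to confirm against the template's documentation.

```python
def spanner_export_cmd(job, region, instance_id, database_id, output_dir):
    # Template path and parameter names are assumptions; confirm against the
    # "Cloud Spanner to Avro Files on Cloud Storage" template documentation.
    params = (f"instanceId={instance_id},databaseId={database_id},"
              f"outputDir={output_dir}")
    return [
        "gcloud", "dataflow", "jobs", "run", job,
        "--gcs-location=gs://dataflow-templates/latest/Cloud_Spanner_to_GCS_Avro",
        f"--region={region}",
        f"--parameters={params}",
    ]

cmd = spanner_export_cmd("qa-copy-export", "us-central1",
                         "prod-instance", "orders-db", "gs://qa-bucket/export")
```

The exported Avro files would then be imported into the test environment's Spanner instance with the matching import template.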
Question 28
Exam Question
You are building an application that allows users to customize their website and mobile experiences. The application will capture user information and preferences. User profiles have a dynamic schema, and users can add or delete information from their profile. You need to ensure that user changes automatically trigger updates to your downstream BigQuery data warehouse. What should you do?
A. Store your data in Bigtable, and use the user identifier as the key. Use one column family to store user profile data, and use another column family to store user preferences.
B. Use Cloud SQL, and create different tables for user profile data and user preferences from your recommendations model. Use SQL to join the user profile data and preferences
C. Use Firestore in Native mode, and store user profile data as a document. Update the user profile with preferences specific to that user and use the user identifier to query.
D. Use Firestore in Datastore mode, and store user profile data as a document. Update the user profile with preferences specific to that user and use the user identifier to query.
Correct Answer
C. Use Firestore in Native mode, and store user profile data as a document. Update the user profile with preferences specific to that user and use the user identifier to query.
Firestore documents accommodate a dynamic schema, and Native mode supports change triggers (for example, through Cloud Functions) that can stream profile updates into BigQuery automatically; Datastore mode does not support these real-time triggers, and Cloud SQL and Bigtable fit a dynamic document schema poorly.
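Firestore's document model handles the dynamic schema through merge-style updates (`set(..., merge=True)` in the client libraries). The stdlib sketch below mimics that shallow-merge behavior on a profile dictionary; in production, a Firestore trigger would forward the merged document downstream to BigQuery.

```python
def merge_profile(profile, updates):
    """Shallow merge mimicking Firestore's set(..., merge=True): new fields
    are added, listed fields are overwritten, everything else is untouched."""
    merged = dict(profile)
    merged.update(updates)
    return merged

profile = {"name": "Ada", "theme": "dark"}
profile = merge_profile(profile, {"theme": "light", "locale": "en-GB"})
# profile is now {'name': 'Ada', 'theme': 'light', 'locale': 'en-GB'}
```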
Question 29
Exam Question
You need to redesign the architecture of an application that currently uses Cloud SQL for PostgreSQL.
The users of the application complain about slow query response times. You want to enhance your application architecture to offer sub-millisecond query latency. What should you do?
A. Configure Firestore, and modify your application to offload queries.
B. Configure Bigtable, and modify your application to offload queries.
C. Configure Cloud SQL for PostgreSQL read replicas to offload queries.
D. Configure Memorystore, and modify your application to offload queries.
Correct Answer
D. Configure Memorystore, and modify your application to offload queries.
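Offloading reads to Memorystore usually follows the cache-aside pattern: check the cache first, fall back to the database on a miss, then populate the cache. The sketch below demonstrates the pattern with an in-process dict standing in for Redis, so it runs without any cloud resources.

```python
import time

class CacheAside:
    """Cache-aside: check the cache, fall back to the database on a miss,
    then populate the cache. A dict with expiry stands in for Redis."""
    def __init__(self, db_lookup, ttl_seconds=60):
        self._db_lookup = db_lookup
        self._ttl = ttl_seconds
        self._cache = {}

    def get(self, key):
        hit = self._cache.get(key)
        if hit is not None:
            value, expires_at = hit
            if time.monotonic() < expires_at:
                return value          # fast path: served from cache
        value = self._db_lookup(key)  # slow path: the PostgreSQL query
        self._cache[key] = (value, time.monotonic() + self._ttl)
        return value

db_calls = []
def slow_db(key):
    db_calls.append(key)              # stands in for a Cloud SQL query
    return key.upper()

cache = CacheAside(slow_db)
first, second = cache.get("sku-1"), cache.get("sku-1")
```

Only repeated reads of hot keys hit the sub-millisecond path; the TTL bounds how stale cached values can become.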
Question 30
Exam Question
You are managing multiple applications connecting to a database on Cloud SQL for PostgreSQL. You need to be able to monitor database performance to easily identify applications with long-running and resource-intensive queries. What should you do?
A. Use log messages produced by Cloud SQL.
B. Use Query Insights for Cloud SQL.
C. Use the Cloud Monitoring dashboard with available metrics from Cloud SQL.
D. Use Cloud SQL instance monitoring in the Google Cloud Console.
Correct Answer
B. Use Query Insights for Cloud SQL.
Query Insights is purpose-built for this task: it surfaces long-running and resource-intensive queries and can attribute them to specific applications through query tags, with far less effort than assembling raw logs or generic Cloud Monitoring metrics.