Here, I will discuss the common problems developers face while integrating Spring Boot with Google Cloud SQL and Google Cloud Storage.

Q-1). How to connect to Cloud SQL from a local machine?

A-1). From a local machine, you need to download a tool named Cloud SQL Proxy, through which you will be able to connect to Cloud SQL.
Q-2). How to run Cloud SQL Proxy locally?

A-2). To run Cloud SQL Proxy, you need to run a command as:

cloud-sql-proxy --port 3307 spring-gcp-sql-storage:asia-south1:gcp-sql-storage

But before that, make sure that nothing else is running on port 3307.
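
To verify the port is free before starting the proxy, you can run a quick check (Windows shown; on Linux/macOS, lsof -i :3307 does the same job):

netstat -ano | findstr :3307

If this prints nothing, the port is free.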

Q-3). Why are you running Cloud SQL Proxy on port 3307?

A-3). To be on the safe side: if you have installed MySQL locally, port 3306 (the default MySQL port) is already in use. So, to avoid a conflict, use a port other than 3306.

Q-4). What is the advantage of installing Cloud SDK?

A-4). The advantage of installing the Google Cloud SDK is that it connects your local machine with the different Google services that you have configured in the Google Cloud Console. Steps to follow:

  1. Download Google Cloud SDK
  2. Install Google Cloud SDK; it will create an icon on the Desktop
  3. Double-click the icon, and it will open a CMD/Terminal type of window
  4. Run the command `gcloud auth application-default login`
  5. It will do the authorization, where you have to sign in with the Google account that you used to register in the Google Cloud Console and configure the various services
  6. After the authorization is completed, it will create a file named application_default_credentials.json in the C:\Users\<Your Username>\AppData\Roaming\gcloud folder
  7. This file consists of different attributes like:
    • client_id
    • client_secret
    • quota_project_id
    • refresh_token
    • type
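
For reference, the file has roughly this shape (all values below are placeholders, not real credentials):

{
  "client_id": "<generated-client-id>.apps.googleusercontent.com",
  "client_secret": "<generated-secret>",
  "quota_project_id": "<your-project-id>",
  "refresh_token": "<generated-refresh-token>",
  "type": "authorized_user"
}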

Q-5). Do we have to create the initial tables in Cloud SQL?

A-5). Nowadays, we use Spring Boot with Hibernate through the Spring Data component. Because of this, we set one property in the application.properties file named spring.jpa.hibernate.ddl-auto. In the production environment, it is recommended to keep this value as validate. If we keep the value as validate, then we have to create the DDLs of all the required tables before the application starts up. Otherwise, the application will not start up and will throw an exception because the required tables are not present in the database.
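
For example, in application.properties:

spring.jpa.hibernate.ddl-auto=validate

With this setting, Hibernate only checks the schema at startup and never creates or alters tables itself.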

Similarly, in GCP, it is good practice to first create the tables required by the application and then start the application.

Create the DDLs of the necessary tables and views and import them into Cloud SQL. To import into Cloud SQL, you first have to upload the SQL files to Cloud Storage, and from Cloud Storage you can execute those DDLs against Cloud SQL.

Q-6). If I use the GenerationType.AUTO option to generate the primary key of my table in GCP Cloud SQL, will I have to do anything?

A-6). Yes, in Cloud SQL, you have to create one table named hibernate_sequence. This table is responsible for generating an auto-incremented number every time you insert a record into a table that uses this generator.
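
For context, the kind of mapping that triggers this requirement looks roughly like the following sketch (the entity and field names are illustrative, not from a real project):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Customer {

    // With AUTO on MySQL, older Hibernate versions fall back to a
    // table generator backed by the hibernate_sequence table
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String name;
}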

First create the DDL as:

CREATE TABLE `hibernate_sequence` (
  `next_val` bigint DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;

Then insert the initial record in this table as:

INSERT INTO hibernate_sequence (next_val) VALUES (1);

Both of the above statements should be placed in a single file, and that file should be named hibernate_sequence.sql.

After that, upload this hibernate_sequence.sql file to Cloud Storage and import it from there into Cloud SQL.

If the hibernate_sequence table is not there and you are using GenerationType.AUTO, then you may encounter an error like the one mentioned below:

java.sql.SQLSyntaxErrorException: Table 'sql-storage.hibernate_sequence' doesn't exist

Q-7). At the end of the above DDL script of the hibernate_sequence table, is it mandatory to write ENGINE=InnoDB?

A-7). Yes, it is mandatory to add ENGINE=InnoDB. The problem is that if you don’t mention the engine name, older Hibernate MySQL dialects generate DDL with engine=MyISAM, and Cloud SQL for MySQL supports only the InnoDB storage engine, so those statements fail. So, before you run any DDL queries, configure the DB engine properly and mention ENGINE=InnoDB explicitly in your DDL query.

If you don’t specify the engine in your DDL query, you may experience an exception like the one provided below:

org.hibernate.tool.schema.spi.CommandAcceptanceException: Error executing DDL "create table hibernate_sequence (next_val bigint) engine=MyISAM" via JDBC Statement
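
One common way to stop Hibernate from emitting engine=MyISAM (assuming Hibernate 5; adjust the dialect class to your version) is to set a dialect that defaults to InnoDB in application.properties:

spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL8Dialect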

Q-8). Why do we have to insert the first record with value 1 into the hibernate_sequence table?

A-8). You have to insert the initial value into the hibernate_sequence table yourself, because this table is unknown to Cloud SQL. You can’t expect it to behave as it does locally, where this hibernate_sequence table is created automatically and an initial value is inserted into it. The same procedure does not take place in Google Cloud when you manage the MySQL schema through the Cloud SQL interface.

If you don’t insert the initial value into this hibernate_sequence table, then you may encounter an exception like the one provided below:

could not read a hi value - you need to populate the table: hibernate_sequence

Q-9). How to run a Spring Boot application using Cloud SQL locally?

A-9). These are the steps you need to follow:

  • First, create a Google account
  • Use that Google account to log in to the Google Cloud Console
  • Create a project
  • Create a Cloud SQL (MySQL) instance
  • Download Google Cloud SDK
  • Install Google Cloud SDK
  • Run the command as:
gcloud auth application-default login
  • It will store the credentials file named application_default_credentials.json in your local directory C:\Users\<Your Username>\AppData\Roaming\gcloud
  • Download Cloud SQL Proxy
  • Extract Cloud SQL Proxy in your local directory
  • Run the Cloud SQL proxy by opening the CMD/Terminal from that directory as:
cloud-sql-proxy --port 3307 spring-gcp-sql-storage:asia-south1:gcp-sql-storage
  • I have chosen port 3307 to be on the safe side, because if MySQL is already installed locally on your machine, it is already using port 3306
  • Once done, open any of your SQL clients, like SQLyog or DBeaver, to create the connection and test locally whether it connects to Cloud SQL or not by providing the inputs as:
    • Server Host – localhost
    • Port No – 3307
    • Database Name – The name of the database that you have created under the Cloud SQL instance
    • Username – The username that you have created in the Cloud SQL instance
    • Password – The password that you have created in the Cloud SQL instance
  • Test the connection; once it gets connected, you can use this connection URL in your Spring Boot application
  • Now, to connect to Cloud SQL from your Spring Boot application, you require 2 BOM dependencies in the pom file, spring-cloud-gcp-dependencies and libraries-bom, plus one regular dependency, spring-cloud-gcp-starter (see the pom sketch under Q-10 below)
  • Also include the following properties in the application.properties file:
spring.cloud.gcp.credentials.location=file:C:/Users/<Your Username>/AppData/Roaming/gcloud/application_default_credentials.json
spring.datasource.url=jdbc:mysql://localhost:3307/sql-storage
spring.datasource.username=<Your Username that you provide in Cloud SQL>
spring.datasource.password=<Your Password that you provide in Cloud SQL>
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
  • Once you are done, start the application, then test it by doing some transactional operations. You will notice that all the data gets populated in the Cloud SQL DB.

Q-10). Is there any specific dependency I have to use while connecting to Cloud SQL from a Spring Boot application?

A-10). There are two ways:

First, by connecting to Cloud SQL locally using the Cloud SQL Proxy and then using the localhost URL in the Spring Boot application’s properties file.

Second, by using the Spring Cloud GCP dependencies in your Spring Boot application and testing directly in a Google Cloud VM or in Cloud Run.

The first approach is already discussed in the above question.

Let’s discuss the second approach.

In this approach, you don’t need the Google Cloud SDK or the Cloud SQL Proxy to connect to Cloud SQL. Here, we will use the spring-cloud-gcp dependencies, deploy the application in Cloud Run, and access it there.

Steps to follow:

  • First, open the pom.xml file
  • Include the 2 BOM dependencies, spring-cloud-gcp-dependencies and libraries-bom
  • After that, include the dependencies spring-cloud-gcp, spring-cloud-gcp-starter, and spring-cloud-gcp-starter-sql-mysql; a pom sketch follows after this list. Apart from these, use other dependencies like spring-boot-starter-data-jpa and mysql-connector-java if you are using MySQL DB.
  • After that, add the following properties in the application.properties file:
spring.cloud.gcp.sql.database-name=sql-storage
spring.cloud.gcp.sql.instance-connection-name=<Project-name>:<Region-Name>:<Instance-Name>
spring.datasource.username=<Your Username that you provide in Cloud SQL>
spring.datasource.password=<Your Password that you provide in Cloud SQL>
  • Commit the code
  • Deploy in Cloud Run
  • It will connect to Cloud SQL to do the transactions
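
A minimal sketch of the relevant pom.xml sections might look like this (the version numbers are illustrative; pick the ones matching your Spring Boot version):

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.cloud</groupId>
      <artifactId>spring-cloud-gcp-dependencies</artifactId>
      <version>3.4.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
    <dependency>
      <groupId>com.google.cloud</groupId>
      <artifactId>libraries-bom</artifactId>
      <version>26.1.4</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter</artifactId>
  </dependency>
  <dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-sql-mysql</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
  </dependency>
  <dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
  </dependency>
</dependencies>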

In this way, we can connect to Cloud SQL instances from a Spring Boot application, either locally or directly from within Google Cloud.
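
For the "Deploy in Cloud Run" step above, a typical command might look like this (the service name and image path are illustrative):

gcloud run deploy sql-storage-app --image gcr.io/spring-gcp-sql-storage/sql-storage-app --region asia-south1 --add-cloudsql-instances spring-gcp-sql-storage:asia-south1:gcp-sql-storage

The --add-cloudsql-instances flag attaches the Cloud SQL instance to the Cloud Run service so the application can reach it through the built-in connector.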

Q-11). What happens if we use other types of generators for the primary key instead of GenerationType.AUTO?

A-11). If you use another GenerationType, then you don’t need to create the hibernate_sequence table, as this table is only required when you use the GenerationType.AUTO generator.
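
For example, here is a sketch of the same illustrative entity using GenerationType.IDENTITY, which delegates key generation to MySQL’s AUTO_INCREMENT instead of the hibernate_sequence table:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Customer {

    // IDENTITY relies on the AUTO_INCREMENT column in the database,
    // so no hibernate_sequence table is required
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;
}

Note that the corresponding DDL must then declare the id column as AUTO_INCREMENT.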

Q-12). How do we connect to Cloud Storage from a Spring Boot application running locally?

A-12). Cloud Storage is a Google service where we can store files of any extension, similar to Blob-type data in SQL. Using your Google account, you can create a bucket; in this bucket, you dump your files. To organize those files, you can create folders within the bucket. But remember one thing: bucket creation is mandatory, while folder creation is not.

Steps to follow to connect to Cloud Storage from the Spring Boot application running locally:

  • Add the 2 BOM-related dependencies as mentioned in Q-10)
  • Add the 2 dependencies spring-cloud-gcp and spring-cloud-gcp-starter
  • Now add one specific dependency named google-cloud-storage
  • Now add 2 properties in the application.properties file:
spring.cloud.gcp.storage.bucket-name=gcp-sql-storage
spring.cloud.gcp.bucket.dir=sqlstorage
  • The 2nd property is not mandatory; if you create a folder inside the bucket, then you can define it
  • Now you will be able to access classes such as StorageOptions, Storage, Bucket, and Blob, all from the com.google.cloud.storage package:
// Build StorageOptions for your project (gcpProjectId comes from configuration)
StorageOptions options = StorageOptions.newBuilder().setProjectId(gcpProjectId).build();
// Obtain the Storage service client
Storage storage = options.getService();
// Look up the bucket by its name/id
Bucket bucket = storage.get(gcpBucketId, Storage.BucketGetOption.fields());
// Create a blob (object) inside the bucket folder from the file bytes and content type
Blob blob = bucket.create(gcpBucketDirectoryName + "/" + fileName, fileData, contentType);
  • Now, using this Blob class, you can upload a file to and delete a file from Cloud Storage from your Spring Boot application running locally; a usage sketch follows below
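
A minimal sketch of upload and delete helpers built on the snippet above (the class, method, and variable names are illustrative):

import com.google.cloud.storage.Blob;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.Bucket;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class CloudStorageClient {

    private final Storage storage;
    private final String bucketName;

    public CloudStorageClient(String projectId, String bucketName) {
        this.storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
        this.bucketName = bucketName;
    }

    // Upload raw bytes as an object under the given folder and return the created blob
    public Blob upload(String folder, String fileName, byte[] data, String contentType) {
        Bucket bucket = storage.get(bucketName);
        return bucket.create(folder + "/" + fileName, data, contentType);
    }

    // Delete an object by its name; returns true if it existed and was removed
    public boolean delete(String folder, String fileName) {
        return storage.delete(BlobId.of(bucketName, folder + "/" + fileName));
    }
}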

Q-13). What types of files can we store in Cloud Storage?

A-13). We can store any type of file: images, videos, docs, text files, bat and sh files, etc.

Q-14). What is a Bucket? And what is a folder? Is Bucket creation mandatory?

A-14). A bucket is the container where all the files get stored. A common example is Google Drive: a bucket is where you dump your personal files, and folders are where you organize them, like keeping your educational docs in the “education” folder and professional docs in the “professional” folder. The main mantra is: bucket creation is compulsory, whereas folder creation is not mandatory.

Q-15). What is the advantage of using com.google.cloud.storage.Blob class?

A-15). The advantage of using the Blob class is that it exposes different properties of the stored object. The main properties are:

name and mediaLink

Nowadays, we don’t store file blob data in the DB. We store the files in storage, like a local drive of your computer, and capture a reference in the DB. As the reference, we capture the name of the file and the media link of the file as provided by Cloud Storage. With the help of these references, one can easily track the media files uploaded for the respective user or entity, and based on that record, one can delete or update them as well.
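
As an illustration, here is a sketch of a JPA entity capturing those references (the entity and field names are hypothetical):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class MediaFile {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // Object name in the bucket, e.g. "sqlstorage/report.pdf"
    private String name;

    // Download link returned by Cloud Storage (Blob.getMediaLink())
    private String mediaLink;
}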