Troubleshooting Etherpad In Google Cloud

There is quite a lot that could go wrong with the work in the previous post, and I feel I experienced most of the issues!

We need to remember there are two projects in play here: one to build the container, and the other to create the infrastructure the container lives in. A problem could be caused by either of these, so the fix needs to be made in the right place. Most of the issues I encountered were caused by the infrastructure, so after I corrected each problem I re-ran the Terraform deployment.
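For me that meant the usual Terraform plan/apply cycle, something like this (a sketch; in practice the logan wrapper described further down this page runs terraform for you):

# Preview the corrected infrastructure changes, then apply them
terraform plan
terraform apply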

Connecting To The Database

Now that we have a database it is time to connect Etherpad to it.

We see in the Hands on Guide to Google Cloud that the way to achieve this is by setting some environment variables to be read by Etherpad when it starts. Let's have a look at how to do that.

Adding Environment Variables

Once again this is set up by the cloud run app module, which, as I noted before, is the basis of my webapp.
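A minimal sketch of the idea, assuming a variable name for the module; the database settings themselves (DB_TYPE, DB_HOST and friends) are the ones the Etherpad Docker image reads from its environment:

# Hypothetical fragment of webapp.tf. The variable name
# "environment_variables" is an assumption; check the module's docs.
module "webapp" {
  source = "..."  # the cloud run app module from the boilerplate

  environment_variables = {
    DB_TYPE = "postgres"
    DB_HOST = module.sql_instance.private_ip_address  # output name assumed
    DB_PORT = "5432"
    DB_NAME = "etherpad"
    DB_USER = "etherpad"
    DB_PASS = var.db_password  # should really come from a secret
  }
}

In reality the password should be injected from a secret store rather than written into the configuration, which is exactly the secrets problem mentioned in the Terraform set-up post below.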

Creating The Database

The next step in the Hands on Guide to Google Cloud says we should connect the Etherpad instance to a database. Let’s see how we create the database using the automated processes we are developing.

Creating the database using Terraform

Cloud SQL module

A SQL instance was created by the terraform code that was generated from the boilerplate. University members can see the sql.tf template. It contains the following:

resource "random_id" "sql_instance_name" {
  byte_length = 4
  prefix      = "sql-"
}

# We make use of the opinionated Cloud SQL module provided by Google at
# https://registry.terraform.io/modules/GoogleCloudPlatform/sql-db/.
#
# The double-"/" is required. No, I don't know why.
module "sql_instance" {
  source  = "GoogleCloudPlatform/sql-db/google//modules/postgresql"
  version = "4.4.0"

  name = random_id.sql_instance_name.hex

  # ... Snip ...
}

The University has decided to standardise on Postgres for new database instances. The comment helpfully links to the documentation for the Google SQL module. Incidentally, the double "/" is how Terraform separates a module package from a sub-directory within it, which answers the question left open in the code comment.

Creating Infrastructure With Terraform

This is an enormous and very complicated area. The Hands on Guide to Google Cloud says to change the memory limit for the application to make it run faster. So, how do we do this in Terraform?

In the project generated from cookiecutter, I can simply edit webapp.tf and change the webapp module to add memory_limit="1024M". Simple, but how do I know I can do that?

I need to read up on the Terraform language to understand what is going on. It seems there are two types of module. A directory of Terraform code is called a root module, but within that code are module blocks that call other modules. So my webapp module, defined with the module keyword, is a module block.
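So the change is just one more argument inside the webapp module block (a sketch; the other arguments are unchanged and elided here):

# webapp.tf: give the Etherpad container more memory
module "webapp" {
  source = "..."  # unchanged

  # ... existing arguments ...

  memory_limit = "1024M"
}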

Setting Up Terraform

Before starting to change the configuration with Terraform, there is some set up work that needs to be done.

While the getting started guides are fine, in practice they leave open the problems of how to work with colleagues and how to manage secrets.

My colleagues have created a tool called Logan which they use to run terraform. It is installed using pip, but it works by running a docker container, so it requires a working Docker installation. I have Red Hat 7 installed on my desktop, so I had to yum install python3, and I found I had to upgrade pip. Since there are other requirements to install, I created a virtual environment to install and run it in.
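The set-up went roughly like this (a sketch; the package name for logan is a placeholder, since the real instructions are internal):

# Install Python 3 on Red Hat 7
sudo yum install python3

# Create and activate a virtual environment for the tooling
python3 -m venv logan-env
source logan-env/bin/activate

# The bundled pip was too old; upgrade it, then install the tool
pip install --upgrade pip
pip install logan  # placeholder package name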

Deploying a Container to Google Cloud

Having created the container, we now need to deploy it to Google Cloud. We create a deploy project. This was created for me, I believe from our Boilerplate Google Cloud Deployment project, which can currently only be viewed from within the University. The .gitlab-ci.yml contained the following at the time of writing:

# Use UIS deployment workflow, adding jobs based on the templates to deploy
# synchronisation images

include:
  - project: 'uis/devops/continuous-delivery/ci-templates'
    file: '/auto-devops/deploy.yml'
    ref: v1.2.0

  # Include template that lints local Terraform files
  - project: 'uis/devops/continuous-delivery/ci-templates'
    file: '/auto-devops/terraform-lint.yml'
    ref: v1.2.0

# Triggered by manually running the pipeline with DEPLOY_ENABLED="development"
# and WEBAPP_DOCKER_IMAGE set to the image to deploy
deploy_webapp_development:
  extends: .deploy_webapp_template
  environment:
    name: development/$DEPLOY_COMPONENT
    url: $WEBAPP_URL_DEVELOPMENT
  variables:
    DEPLOY_ENV: "DEVELOPMENT"
  rules:
    - if: $WEBAPP_DOCKER_IMAGE == null || $WEBAPP_DOCKER_IMAGE == ""
      when: never
    - if: $DEPLOY_ENABLED == "development"
      when: on_success
    - when: never

.deploy_webapp_template:
  extends: .cloud-run-deploy
  variables:
    # Informative name for image. This is used to name the image which we push
    # to GCP. It is *not* the name of the image we pull from the GitLab
    # container registry. The fully-qualified container name to *pull* should be
    # set via the WEBAPP_DOCKER_IMAGE variable.
    IMAGE_NAME: webapp

    # Prefix for service-specific variable names
    SERVICE_PREFIX: WEBAPP

    # Variables set by upstream deploy job
    RUN_SOURCE_IMAGE: "$WEBAPP_DOCKER_IMAGE"

    # The name of the deploy component - will be prefixed with the environment
    # name to create a gitlab deploy environment name
    DEPLOY_COMPONENT: webapp

What it does is take the deploy templates from the devops CI templates in the University GitLab (publicly viewable as I write this) and use them to deploy, with some environment variables controlling what is being deployed. We can see in deploy.yml that the CI is instructed to do a docker pull on the container that we built, tag it with the Google container name, then push it to Google Cloud. There is another entry for staging, but first let's understand development.
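In shell terms, the deploy job boils down to something like this (illustrative names; the template derives the real ones from IMAGE_NAME and the GCP project):

# Pull the image built earlier in the pipeline
docker pull "$WEBAPP_DOCKER_IMAGE"

# Re-tag it with the Google Container Registry name (illustrative)
docker tag "$WEBAPP_DOCKER_IMAGE" "gcr.io/$GCP_PROJECT/webapp"

# Push it to Google Cloud
docker push "gcr.io/$GCP_PROJECT/webapp"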

Building A Container with Auto DevOps in GitLab

In the last post I set up a runner. Now let's see if we can use it to create a container.

I am looking into how my colleagues at Cambridge do things. They have a handy guide to Google Cloud using what colleagues call "click ops". When I ran through it I was given a project which already had an ID, so I had to make sure I used the project ID I was given rather than the one in the document. This also applied to the container registry.
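If you also use the gcloud CLI alongside the click ops, it is worth checking it points at the right project (a generic check, not something from the guide):

# Point gcloud at the project ID you were given, then confirm it
gcloud config set project YOUR-PROJECT-ID
gcloud config get-value project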

GitLab Runner for Auto DevOps

The Auto DevOps pipeline in GitLab is supposed to be magic! It just reads your mind and does whatever you wanted! At least that seems to be what the sales blurb says!

Over the next few posts I will investigate how we use it. It is pretty clever, but we do need to set up a couple of things first.

Requirements

This assumes Docker is installed and working on the machine that will be used for the runner. I used my Linux desktop for this exercise, which has Docker already set up.
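A quick way to confirm Docker is usable before registering the runner (standard commands, nothing GitLab-specific):

# Check the daemon is reachable and report its configuration
docker info

# Run a throwaway container end to end
docker run --rm hello-world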