Creating Infrastructure With Terraform

This is an enormous and complicated area. The hands-on guide says to change the memory limit for the application to make it run faster. So how do we do this in Terraform?

In the project generated from the cookiecutter, I can simply edit webapp.tf and change the webapp module to add memory_limit = "1024M". Simple, but how do I know I can do that?

I need to read up on the Terraform language to understand what is going on. It seems there are two kinds of module. A directory of Terraform code is called a root module, while the module definitions within that code, written with the module keyword, are called module blocks. So my webapp module is a module block.
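Putting that together, the change is a single argument added inside the existing module block. A sketch, where everything except the memory_limit line is a placeholder for what the cookiecutter generated:

module "webapp" {
  # The source path is illustrative; the generated project sets the real one.
  source = "./modules/webapp"

  # ...other arguments generated by the cookiecutter...

  # The one-line change from the hands-on guide:
  memory_limit = "1024M"
}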

Setting Up Terraform

Before starting to change the configuration with Terraform, there is some setup work that needs to be done.

While the getting-started guides are fine, in practice they leave the problem of how to work with colleagues and how to manage secrets.

My colleagues have created a tool called Logan which they use to run Terraform. It is installed using pip, but it runs as a Docker container, so it needs a working Docker installation. I have Red Hat 7 installed on my desktop, so I had to yum install python3, and I found I also had to upgrade pip. Since there are other requirements to install, I created a virtual environment to install and run it in, roughly like this (the pip package name for Logan is an assumption):
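# Red Hat 7: install Python 3, then work in an isolated environment.
sudo yum install -y python3
python3 -m venv ~/venvs/logan
source ~/venvs/logan/bin/activate

# The bundled pip was too old, so upgrade it first.
pip install --upgrade pip

# Install the tool; the package name "logan" is an assumption.
pip install logan

# Logan runs terraform inside a container, so Docker must be working.
docker info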

Deploying a Container to Google Cloud

Having created the container, we now need to deploy it to Google Cloud. We create a deploy project. This was created for me, I believe, from our Boilerplate Google Cloud Deployment project, which can currently only be viewed from within the University. The .gitlab-ci.yml contained the following at the time of writing:

# Use UIS deployment workflow, adding jobs based on the templates to deploy
# synchronisation images

include:
  - project: 'uis/devops/continuous-delivery/ci-templates'
    file: '/auto-devops/deploy.yml'
    ref: v1.2.0

  # Include template that lints local Terraform files
  - project: 'uis/devops/continuous-delivery/ci-templates'
    file: '/auto-devops/terraform-lint.yml'
    ref: v1.2.0

# Triggered by manually running the pipeline with DEPLOY_ENABLED="development"
# and WEBAPP_DOCKER_IMAGE set to the image to deploy
deploy_webapp_development:
  extends: .deploy_webapp_template
  environment:
    name: development/$DEPLOY_COMPONENT
    url: $WEBAPP_URL_DEVELOPMENT
  variables:
    DEPLOY_ENV: "DEVELOPMENT"
  rules:
    - if: $WEBAPP_DOCKER_IMAGE == null || $WEBAPP_DOCKER_IMAGE == ""
      when: never
    - if: $DEPLOY_ENABLED == "development"
      when: on_success
    - when: never

.deploy_webapp_template:
  extends: .cloud-run-deploy
  variables:
    # Informative name for image. This is used to name the image which we push
    # to GCP. It is *not* the name of the image we pull from the GitLab
    # container registry. The fully-qualified container name to *pull* should be
    # set via the WEBAPP_DOCKER_IMAGE variable.
    IMAGE_NAME: webapp

    # Prefix for service-specific variable names
    SERVICE_PREFIX: WEBAPP

    # Variables set by upstream deploy job
    RUN_SOURCE_IMAGE: "$WEBAPP_DOCKER_IMAGE"

    # The name of the deploy component - will be prefixed with the environment
    # name to create a gitlab deploy environment name
    DEPLOY_COMPONENT: webapp

What it does is take the deploy templates from the DevOps CI templates project in the University GitLab (publicly viewable as I write this) and use them to deploy, with environment variables controlling what is deployed. We can see in deploy.yml that the CI is instructed to do a docker pull of the container we built, tag it with the Google container name, and then push it to the Google cloud. There is another entry for staging, but first let's understand development.
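In outline, the template's deploy step boils down to something like the following; the registry path, region, and service name here are placeholders, not values taken from the template:

# Pull the container built by the application project's pipeline.
docker pull "$WEBAPP_DOCKER_IMAGE"

# Re-tag it with the Google registry name and push it.
docker tag "$WEBAPP_DOCKER_IMAGE" eu.gcr.io/example-project/webapp
docker push eu.gcr.io/example-project/webapp

# Deploy the pushed image to Cloud Run.
gcloud run deploy webapp \
  --image eu.gcr.io/example-project/webapp \
  --region europe-west1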

Building a Container with Auto DevOps in GitLab

In the last post I set up a runner. Now let's see if we can use it to create a container.

I am looking into how my colleagues at Cambridge do things. They have a handy guide to Google Cloud using what colleagues call "click ops". When I ran through it I was given a project which already had an ID, so I had to make sure I used the project ID I was given rather than the one in the document. The same applied to the project's registry.
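The practical lesson is to substitute your own project ID everywhere the guide shows one; for instance, with a placeholder ID:

# Point gcloud at the project you were actually given.
gcloud config set project example-project-id

# The same ID appears in the registry path when tagging images.
docker tag webapp:latest eu.gcr.io/example-project-id/webapp:latest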

GitLab Runner for Auto DevOps

The Auto DevOps pipeline in GitLab is supposed to be magic! It just reads your mind and does whatever you want! At least, that seems to be what the sales blurb says!

Over the next few posts I will investigate how we use it. It is pretty clever, but we do need to set up a couple of things first.

Requirements

This assumes Docker is installed and working on the machine that will be used for the runner. I used my Linux desktop for this exercise, which already has Docker set up.
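A quick way to verify that, and the registration step itself, looks roughly like this; the URL, token, and description are placeholders for the values in your project's runner settings:

# Confirm Docker is working on the machine that will host the runner.
docker run --rm hello-world

# Register a Docker-executor runner against the GitLab instance.
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "PLACEHOLDER-TOKEN" \
  --executor docker \
  --docker-image "alpine:latest" \
  --description "desktop-runner"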

Oracle 19c Critical Patching

It is the time of year to apply critical patches. This time round we have some databases at version 19c. We tend to install a new Oracle home for each patch, as we find this helps us manage the migration. It also reduces downtime, especially if, like us, you have not yet taken advantage of pluggable databases.

Oracle Installer

I notice from the documentation that there is an -applyRU parameter that can be passed to the installer to apply the patch. The documentation is quite poor, though, because it doesn't specify what should be passed after it. Even more annoying, if the patch attempt fails it leaves the home in an intermediate state, and the home has to be deleted and recreated.
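As far as I can piece together, what goes after -applyRU is the directory of the unzipped Release Update, applied while installing the new home. A sketch, with placeholder paths and patch number:

# Unzip the 19c base release into the new, empty home.
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_new
cd $ORACLE_HOME
unzip -q /stage/LINUX.X64_193000_db_home.zip

# Apply the Release Update as part of the installation.
# /stage/12345678 is a placeholder for the unzipped RU patch directory.
./runInstaller -applyRU /stage/12345678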

Change Assistant on Linux

Automating Tools Patches

I have already automated the VM builds, so now I want to automate the application of the patch into the database. Oracle have done some work here with their Change Assistant tool, which, once the database is set up, can be used to apply the patch. Hopefully I can find a way to run Change Assistant from the Linux command line and apply the patch using the automated procedure.
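If its documented command-line mode works the same way on Linux as it does on Windows, the invocation would be something like this; treat the install path, flags, and the environment and package names as assumptions to verify against the PeopleTools documentation:

# Run Change Assistant in Update Manager mode from the shell.
# HR92DMO and PTP859 are placeholder target-environment and package names.
cd /opt/oracle/psft/change_assistant
./changeassistant.sh -MODE UM -ACTION PTPAPPLY -TGTENV HR92DMO -UPD PTP859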

Cloning a Pluggable Database using Unix commands

I read a lot about the flexibility of Oracle commands for pluggable databases. I haven't seen as much about the old-fashioned way of copying data files around and manually creating a control file. So let's see if that still works. I have a Campus Solutions demo instance; let's see if I can copy it and rename it.

Back Up and Edit the Control File

Running a familiar command is promising. It dumps the control file as a script we can edit (the output path below is an example):
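-- Dump the current control file as a CREATE CONTROLFILE script
-- that can be edited with the new database name and file locations.
ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/tmp/create_ctl.sql';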