TLS and PeopleSoft Integration Gateway

PeopleSoft generally leaves it to the administrator to ensure that digital certificates are set up properly. Because certificates don't change very often, and the certificate provider changes even less frequently, it can be hard to remember how everything fits together and to prevent issues when renewal time comes around.

What is a digital certificate?

The two parts

There are two parts to a digital certificate. One part is the private key, which is installed, in our case, on the load balancer. It is used to decrypt incoming traffic and to sign data proving our identity, so it is important to keep it safe so nobody can impersonate our system. The public key is what we are mostly dealing with here. It is distributed inside the certificate and is available to anyone; clients use it to encrypt data sent to us and to verify that data really came from someone they trust.
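To make the two parts concrete, here is a quick sketch using openssl to generate a throwaway key pair and inspect the public half (the subject name is made up for illustration):

```shell
# Generate a throwaway private key and self-signed certificate
# (the subject name here is invented for the example)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout key.pem -out cert.pem -subj "/CN=example.test"

# The certificate (the public part) can be shown to anyone
openssl x509 -in cert.pem -noout -subject -dates

# The private key must stay protected on the server
chmod 600 key.pem
```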

Auto Restart WebLogic with systemd

The Problem

Our WebLogic server recently crashed. A number of issues were identified in the post mortem and addressed to prevent the same thing happening again. It also occurred to us that WebLogic should restart itself automatically if it falls over. This is easily achieved using systemd, but for whatever reason Oracle chose not to configure it to do this.

Oracle's Default Setup

This is a PeopleSoft system, which might be configured slightly differently to other WebLogic installations. PeopleSoft by default provides a systemd unit file which is mostly copyright warnings, so I won't include it here. Oracle has created a legacy init.d script which systemd calls as a oneshot service. This in turn calls the startPIA.sh and stopPIA.sh scripts as appropriate. Because systemd never supervises the WebLogic process itself, if the web server crashes it stays stopped.
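For illustration, a minimal unit file that makes systemd restart WebLogic on failure might look something like the sketch below. This is an assumption, not the delivered configuration: the install paths are hypothetical and would need adjusting to the actual PIA domain.

```ini
# /etc/systemd/system/pia.service -- a sketch; paths are assumptions
[Unit]
Description=PeopleSoft PIA (WebLogic)
After=network.target

[Service]
# startPIA.sh backgrounds the JVM, hence Type=forking
Type=forking
ExecStart=/opt/oracle/psft/pt/webserv/peoplesoft/bin/startPIA.sh
ExecStop=/opt/oracle/psft/pt/webserv/peoplesoft/bin/stopPIA.sh
# Restart automatically if the process dies unexpectedly
Restart=on-failure
RestartSec=30

[Install]
WantedBy=multi-user.target
```

The key difference from a oneshot is that systemd now tracks the running process, so Restart=on-failure can actually fire.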

SAML SSO for Django

The University has a Single Sign On (SSO) system. There are a number of ways that it can be used. In this case I am investigating the use of Security Assertion Markup Language (SAML). There is also Shibboleth, which is related and which our SSO also supports, but I will leave that until another time.

I am creating a test application running in django on my desktop. Django by default only listens on the loopback interface which means it can provide friendly information to developers safe in the knowledge that anyone who can view it is logged on to my desktop. Sites are identified using their URL, so I need to add a unique hostname. I edited /etc/hosts and added myhost.local as a hostname to the end of the line that starts 127.0.0.1. Now I can visit http://myhost.local:8000 in my web browser and get to my test website.
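One gotcha worth noting: Django rejects requests whose Host header it doesn't recognise, so the new hostname also has to be allowed in the project settings. A minimal sketch of the relevant settings.py fragment:

```python
# settings.py -- allow the local test hostname alongside loopback.
# With DEBUG=True Django only permits localhost-style hosts by
# default, so myhost.local must be listed explicitly.
ALLOWED_HOSTS = ["myhost.local", "localhost", "127.0.0.1"]
```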

Parsing XML with Ansible

I am trying to gather some information about an environment once it has been created and save it in a small Django app. This is about my adventures trying to discover the WebLogic version from its registry, which is an XML file.

The XML registry is in the Oracle inventory, and starts like this:

<?xml version = '1.0' encoding = 'UTF-8' standalone = 'yes'?>
<registry home="/opt/oracle/psft/pt/bea" platform="226" sessions="7" xmlns:ns2="http://xmlns.oracle.com/cie/gdr/dei" xmlns:ns3="http://xmlns.oracle.com/cie/gdr/nfo" xmlns="http://xmlns.oracle.com/cie/gdr/rgy">
   <distributions>
      <distribution status="installed" name="WebLogic Server" version="12.2.1.4.0">

I’d like to extract the WebLogic Server version from the distribution element. As I don’t know XML it’s tempting to use grep and sed to find the information I want, but I notice there is an XML module in Ansible. It is community maintained and not considered stable, so its behaviour might change. It is available from Ansible 2.4 up to at least version 4, and perhaps later.
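As a cross-check on whatever the Ansible module produces, the same extraction can be sketched in plain Python with the standard library's ElementTree. The wrinkle is the default namespace on the registry element, which has to be mapped explicitly before any XPath search will match (the snippet inlines a trimmed copy of the registry):

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the registry shown above (bytes, because
# ElementTree refuses str input carrying an encoding declaration)
REGISTRY = b"""<?xml version='1.0' encoding='UTF-8'?>
<registry home="/opt/oracle/psft/pt/bea"
          xmlns="http://xmlns.oracle.com/cie/gdr/rgy">
   <distributions>
      <distribution status="installed" name="WebLogic Server"
                    version="12.2.1.4.0"/>
   </distributions>
</registry>"""

# Map the default namespace to a prefix we can use in searches
NS = {"rgy": "http://xmlns.oracle.com/cie/gdr/rgy"}

root = ET.fromstring(REGISTRY)
dist = root.find(".//rgy:distribution[@name='WebLogic Server']", NS)
print(dist.get("version"))  # 12.2.1.4.0
```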

Kubernetes, Terraform and Secrets

Getting started with Kubernetes and Terraform

I’ve been looking into how to learn Terraform. I have also discovered that for my project I need to use Kubernetes. It turns out that it is really easy to create a Kubernetes cluster on the local desktop to have a play with. Here goes:

To get started I followed a tutorial and used Kubernetes in Docker (kind) for testing. This turned out to be really easy to install. I already had Docker installed, so I didn’t need to worry about that. I downloaded kind from its website - there is a compiled executable, which is easiest. The instructions say on Linux:
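The install boils down to fetching the static binary and putting it on the PATH; something like the following, with the release version being an example rather than a recommendation:

```shell
# Download a static kind binary (check the releases page for the
# current version; v0.20.0 here is only an example)
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# Create a single-node test cluster running inside Docker
kind create cluster
```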

Troubleshooting Etherpad In Google Cloud

There is quite a lot that could go wrong with the work in the previous post, and I feel I experienced most of the issues!

We need to remember there are two projects in play here: one to create the container, and the other to create the infrastructure the container lives in. A problem could be caused by either of these, so the correct action needs to be taken to correct the issue. Most of the issues I encountered were caused by the infrastructure, so after I corrected each problem I ran:
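Assuming the standard Terraform workflow (the exact command is an assumption on my part), the re-run after each fix looks something like:

```shell
terraform plan    # preview what the fix will change
terraform apply   # re-apply the corrected infrastructure
```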

Connecting To The Database

Now that we have a database it is time to connect Etherpad to it.

We see in the Hands on Guide to Google Cloud that the way to achieve this is by setting some environment variables to be read by Etherpad when it starts. Let’s have a look at how to do that.

Adding Environment Variables

Once again this is set up by the cloud run app module, which, as I noted before, is the basis of my webapp.
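For illustration, a sketch of what passing the database settings through might look like in Terraform. The Etherpad Docker image reads its database configuration from DB_* environment variables; the module input name and variable references below are assumptions, not the actual interface of the University's module:

```hcl
# Hypothetical sketch: environment variables for the Etherpad container.
# The input name "environment_variables" is assumed, not confirmed.
environment_variables = {
  DB_TYPE = "postgres"
  DB_HOST = module.sql_instance.private_ip_address
  DB_NAME = "etherpad"
  DB_USER = "etherpad"
  # In practice the password should come from a secret store,
  # not be hard-coded in the Terraform source.
  DB_PASS = var.db_password
}
```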

Creating The Database

The next step in the Hands on Guide to Google Cloud says we should connect the Etherpad instance to a database. Let’s see how we create the database using the automated processes we are developing.

Creating the database using Terraform

Cloud SQL module

A SQL instance was created by the Terraform code that was generated from the boilerplate. University members can see the sql.tf template. It contains the following:

resource "random_id" "sql_instance_name" {
  byte_length = 4
  prefix      = "sql-"
}

# We make use of the opinionated Cloud SQL module provided by Google at
# https://registry.terraform.io/modules/GoogleCloudPlatform/sql-db/.
#
# The double-"/" is required. No, I don't know why.
module "sql_instance" {
  source  = "GoogleCloudPlatform/sql-db/google//modules/postgresql"
  version = "4.4.0"

  name = random_id.sql_instance_name.hex

  # ... Snip ...
}

The University has decided to standardise on Postgres for new database instances. The comment helpfully links to the documentation for the Google SQL module.