
A continuous integration and delivery (CI/CD) pipeline is essential to a software project. Not only does it help us ship code faster, it also leaves less room for mistakes. This tutorial will teach you how to set up a simple CI/CD pipeline for your GKE cluster with Codeship.

WHY CODESHIP

We no longer had the time to maintain our self-hosted instance of Jenkins. Codeship works well for us because it has native Docker support. It has been extremely reliable, and their support is always quick to respond. As a bonus, adding Slack notifications is easy. It is also great for personal projects because it offers 100 free builds per month without requiring a credit card.

THE BIG PICTURE

For this tutorial, we will be setting up a pipeline for a demo application that you can find at https://gitlab.com/neso-io/cat-api. It is a simple API that returns a random image of a cat.

At Neso, the build starts when a pull/merge request is accepted. On top of Codeship, we also make use of Gitlab CI so that we only merge the PR if the automated tests pass. We will be making something similar. A git push or PR merge to the master branch will trigger the chain of events in our pipeline.

Codeship will be responsible for running the automated tests, as well as building and pushing our Docker image to Google Container Registry. It then lets Kubernetes (k8s) know that there is a new image to be rolled out. At this point, we will let Kubernetes do its magic.

At Neso, a Kubernetes namespace serves as an environment. We'll be following that convention here. The master branch goes into our staging environment/namespace; git tags matching the Semantic Versioning Specification will go into our production environment/namespace.

(Diagram: branches and tags mapped to the staging and production namespaces)

PROJECT SETUP

As with other managed CI/CD platforms, a bit of configuration is required. At the time of writing, Codeship only supports Github, Bitbucket, and Gitlab; when you add a new project, you'll need to pick which of the three hosts your code. I've selected Gitlab for this tutorial since that's where the code is hosted.

When you're done selecting your SCM provider, it's time to connect Codeship to your repository. You'll need elevated privileges in your repository before you can add it to Codeship.

(Screenshot: selecting an SCM provider)

After you add your project on Codeship, you'll need a few files to get started:

  1. codeship-services.yml - Defines the Docker images/services required for the pipeline

  2. codeship-steps.yml - Configuration for the commands that will be executed in the Docker images specified in the services configuration

  3. codeship.aes - Every project on Codeship Pro has an AES key found in the General Project Settings. Go ahead and download it to the project's root directory. You'll need this key to encrypt the credentials required to access resources in your project on Google Cloud Platform (GCP).

  4. secrets.env - A plain text file that contains the GCP service account credentials, the GCP project ID, and so on. Everything specified here will be accessible via environment variables. More on this in the next section.

  5. secrets.env.encrypted - The encrypted version of the secrets.env file, which I will show you how to generate later.

The codeship.aes and secrets.env files should not be committed, so go ahead and add them to your .gitignore file.
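A couple of lines like these in .gitignore will do it:

# keep the AES key and the unencrypted secrets out of version control
codeship.aes
secrets.env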

SECRETS

We'll be setting up Codeship to run the unit tests and build the Docker image. The built Docker image will need to be pushed somewhere that our Kubernetes cluster has access to. There are many registries available like Gitlab and DockerHub. Since we're already on GKE, we might as well use Google Container Registry (GCR).  

GCR won't let just anyone push to the registry; pushes have to be authenticated. For that, we're going to need to create a service account on GCP. The service account will need the following roles:

  1. Storage Admin - required for read-write access to GCR
  2. Kubernetes Engine Admin - required for read-write access to GKE
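If you prefer the command line to the GCP console, a sketch of creating the service account and granting those roles with gcloud could look like this (the account name, project ID and key file name are placeholders; substitute your own):

# create the service account
gcloud iam service-accounts create codeship-deploy --display-name "Codeship deploy"

# grant read-write access to GCR (backed by Cloud Storage) and to GKE
gcloud projects add-iam-policy-binding your-gcp-project \
    --member "serviceAccount:codeship-deploy@your-gcp-project.iam.gserviceaccount.com" \
    --role "roles/storage.admin"
gcloud projects add-iam-policy-binding your-gcp-project \
    --member "serviceAccount:codeship-deploy@your-gcp-project.iam.gserviceaccount.com" \
    --role "roles/container.admin"

# download the JSON key for the service account
gcloud iam service-accounts keys create codeship-key.json \
    --iam-account codeship-deploy@your-gcp-project.iam.gserviceaccount.com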

Download the JSON key and add it to the secrets.env file. For a more detailed guide to encrypting sensitive data, Codeship provides excellent documentation at https://documentation.codeship.com/pro/builds-and-configuration/environment-variables/#encrypted-environment-variables.

Before encryption, my secrets file looked like:

(Screenshot: the unencrypted secrets.env file)
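For reference, the unencrypted file is just KEY=value pairs, one per line, roughly along these lines (every value here is a placeholder, the service account JSON has to be collapsed onto a single line, and the variable names are simply whatever your deploy script expects to read):

GOOGLE_AUTH_JSON={"type": "service_account", "project_id": "your-gcp-project", ...}
GOOGLE_AUTH_EMAIL=codeship-deploy@your-gcp-project.iam.gserviceaccount.com
GOOGLE_PROJECT_ID=your-gcp-project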

SECRETS ENCRYPTION

I mentioned earlier that every project on Codeship is provided with an AES key that will be used for encrypting anything sensitive that will be needed in our pipeline. The key can be found in the project's General Project Settings page.

(Screenshot: the AES key on the General Project Settings page)

If you haven't already downloaded the project's AES key, go ahead and add it to the root of the project as codeship.aes.

(Screenshot: codeship.aes in the project root)

Before you can encrypt the secrets file, you'll need to install a CLI tool created by Codeship called Jet. Jet is a valuable tool: not only can it encrypt the secrets files, it can also help you test your Codeship configuration by running the steps on your machine.

Once you've installed Jet we'll be able to encrypt the secrets file with a single command:

$ jet encrypt secrets.env secrets.env.encrypted

There should now be a file named secrets.env.encrypted containing the encrypted contents of the secrets.env file. With all of that done, we can now start designing our pipeline.

STEP 1 - Test runner

Automated tests are a cornerstone of continuous integration. For our test runner, we'll use Neso's image with PHP and Composer built in. Before the build starts, Codeship pulls the code from the repo. We will need to mount this code into the test runner's container.

This image doesn't need much setup on our end, so we'll only need to run two commands for our test runner: one to install the dependencies and one to run the tests themselves.

This should execute no matter what branch we push to.
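As a rough sketch, assuming a placeholder name for Neso's PHP/Composer image and a PHPUnit test suite, the test runner could be wired up like this in codeship-services.yml:

test:
  image: nesoio/php-composer           # placeholder name for the PHP + Composer image
  volumes:
    - ./:/app                          # mount the code Codeship checked out

And the two commands in codeship-steps.yml (note there is no tag restriction, so they run for every branch):

- name: install dependencies
  service: test
  command: sh -c "cd /app && composer install"
- name: run test suite
  service: test
  command: sh -c "cd /app && vendor/bin/phpunit"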

STEP 2 - Docker Image builder

We won't have anything to run in our cluster without a Docker image, so the next step we'll define is how to build one. Thankfully, as I mentioned earlier, Codeship Pro has native Docker support. One important thing to take note of is the name of the image: if you're using Google Container Registry like us, you'll need to follow the convention for the image's tag: [HOSTNAME]/[PROJECT-ID]/[IMAGE].

The step is pretty straightforward too. We only need to reference the name of the service we defined.

The Docker image should only be built when we push to the master branch as indicated by the tag field.
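A sketch of what that could look like, with a hypothetical GCP project ID: in codeship-services.yml the service builds the image under its GCR-style name,

app:
  build:
    image: gcr.io/your-gcp-project/cat-api   # [HOSTNAME]/[PROJECT-ID]/[IMAGE]

and in codeship-steps.yml the step only has to reference that service:

- name: build image
  service: app
  command: true        # referencing the service is enough for Codeship to build it
  tag: master          # only build on the master branch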

STEP 3 - Push the image to the registry

Before we push the Docker image, we have to decrypt the sensitive data needed for authentication. Fortunately, Codeship has a Docker image for that; it only needs to know the name of the encrypted file.

The image that gets pushed will have two tags: the branch that triggered the build (in this case, master) and the first 8 characters of the git commit hash.

The image last tagged as master will serve as the release candidate in this pipeline. The tag for the commit hash will be used when we update the image to run in our k8s deployment.
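Here is a sketch of how that might be expressed, assuming Codeship's dockercfg generator image for GCR and its step template variables (double-check the exact keys against Codeship's documentation). In codeship-services.yml:

gcr_dockercfg:
  image: codeship/google-cloud-dockercfg-generator
  add_docker: true
  encrypted_env_file: secrets.env.encrypted

And in codeship-steps.yml, one push step per tag:

- name: push branch tag
  service: app
  type: push
  tag: master
  image_name: gcr.io/your-gcp-project/cat-api
  image_tag: "{{ .Branch }}"
  registry: https://gcr.io
  dockercfg_service: gcr_dockercfg
- name: push commit tag
  service: app
  type: push
  tag: master
  image_name: gcr.io/your-gcp-project/cat-api
  image_tag: "{{ .CommitID }}"       # the post truncates this to the first 8 characters
  registry: https://gcr.io
  dockercfg_service: gcr_dockercfg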

STEP 4 - Deploying the image

The final step in our pipeline will be rolling out the new Docker image. We need Google Cloud SDK for this. Since it involves a couple of commands, we'll write a bash script.

The authentication command is where we use the Google service account JSON we encrypted; that is the only part of the script that is Codeship specific. The gcloud commands that follow will probably look familiar from when you installed the Gcloud SDK on your machine: they select the project and zone and fetch the cluster credentials so that kubectl commands can be run. The script then works out which environment to roll the image out to, and a kubectl apply ensures that any change to the k8s manifests is applied as part of the pipeline. A sketch of the whole script appears below.

The last bit of code in the script sets the image to be rolled out to the environment or k8s namespace. For the staging environment, we'll be using the Docker image tagged with the current commit hash. Only tags matching the Semantic Versioning specification will be rolled out to the production environment.
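A minimal sketch of such a deploy script, assuming the environment name is passed as the first argument, placeholder cluster/zone/deployment names, and Codeship's standard CI_BRANCH/CI_COMMIT_ID environment variables:

#!/bin/bash
set -e

ENVIRONMENT=$1    # "staging" or "production"

# Authenticate with the service account JSON we encrypted earlier.
# codeship_google is the helper shipped in Codeship's Google Cloud image,
# and this is the only Codeship-specific part of the script.
codeship_google authenticate

# The same kind of commands you run after installing the Gcloud SDK locally:
# select the project and zone, then fetch credentials so kubectl can reach the cluster.
gcloud config set project "$GOOGLE_PROJECT_ID"
gcloud config set compute/zone europe-west1-b              # placeholder zone
gcloud container clusters get-credentials cat-api-cluster  # placeholder cluster name

# Pick the namespace and image tag for the requested environment.
if [ "$ENVIRONMENT" = "production" ]; then
  NAMESPACE=production
  IMAGE_TAG=$CI_BRANCH              # for tag builds this is the semver git tag
else
  NAMESPACE=staging
  IMAGE_TAG=${CI_COMMIT_ID:0:8}     # first 8 characters of the commit hash
fi

# Apply any changes to the k8s manifests as part of the pipeline.
kubectl apply -f k8s/ --namespace "$NAMESPACE"

# Roll out the new image to the deployment in that namespace.
kubectl set image deployment/cat-api \
  cat-api=gcr.io/"$GOOGLE_PROJECT_ID"/cat-api:"$IMAGE_TAG" \
  --namespace "$NAMESPACE"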

Since we'll need to authenticate Codeship to be able to execute commands on our k8s cluster, a Docker image with the Gcloud SDK will be required. Codeship saves us once again with its own image built for Google Cloud deployments. As with the image we used for authenticating against GCR, we'll pass it the name of the file with the encrypted data.

The image will need a copy of the deploy script, so let's mount our code into it. Now the only thing left to do is execute the script.

This step will only execute during a push or PR merge to the master branch. The command tells it to roll out the image to the staging environment.
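Roughly, the service and step could look like this (the mount path and script location are assumptions). In codeship-services.yml:

deploy:
  image: codeship/google-cloud-deployment
  encrypted_env_file: secrets.env.encrypted
  volumes:
    - ./:/deploy            # make the repo, and with it deploy.sh, available in the container

And in codeship-steps.yml:

- name: deploy to staging
  service: deploy
  tag: master
  command: /deploy/deploy.sh staging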

When we trigger the build, we should get all green.

(Screenshot: a Codeship build with every step green)

STEP 5 - Promote the master image for release

At Neso, we create a Git tag following the Semantic Versioning specification from the staging branch to trigger a release. We can copy that procedure by adding a service using the Docker image we tagged as master.

The idea here is that Codeship will pull the Image we last tagged as master, tag it again with the semver tag and push it to the registry.
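One way to sketch that, with the same placeholder image name as before: a service in codeship-services.yml that pulls the current release candidate,

release:
  image: gcr.io/your-gcp-project/cat-api:master

and a push step in codeship-steps.yml that re-tags it with the git tag that triggered the build (the semver regex is an assumption; adjust it to the tag format you use):

- name: promote release candidate
  service: release
  type: push
  tag: '^[0-9]+\.[0-9]+\.[0-9]+$'     # only run for semver git tags
  image_name: gcr.io/your-gcp-project/cat-api
  image_tag: "{{ .Branch }}"          # for tag builds this is the tag name, e.g. 1.0.0
  registry: https://gcr.io
  dockercfg_service: gcr_dockercfg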

The last step in our pipeline is rolling out the image to production.

This looks pretty much the same as the step for deploying to staging except it will only be triggered with a semver tag. It should still run the test suite then promote and roll out the Docker image from the staging environment.
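The corresponding step, reusing the deploy service and script from the staging rollout, could be sketched as:

- name: deploy to production
  service: deploy
  tag: '^[0-9]+\.[0-9]+\.[0-9]+$'
  command: /deploy/deploy.sh production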

Now, if I tag the master branch as 1.0.0, our pipeline looks like this:

(Screenshot: a Codeship build for the 1.0.0 release)

CONCLUSION

Congratulations! You now have a simple CI/CD pipeline. There's still plenty of room to improve it, like running the test suite and building the image concurrently, but I think that will serve best as an exercise for you, dear reader.

 

Published in Cloud Deployment

The main difference between a virtual machine and a Docker container lies in how they interact with the physical machine they run on. This difference affects how many resources running a virtual machine or a Docker container consumes, and it also affects the portability of an application built to run in one or the other.

Virtual Machines

A virtual machine is software that allows you to run an operating system (OS) inside another OS, called the host OS. Below is a diagram of what it looks like to run applications on a virtual machine.

While a virtual machine can only emulate one OS at a time, a single physical machine can run multiple operating systems at once. Their number is limited only by the amount of resources on that physical machine.

A virtual machine is able to run its own OS on top of the host OS with the help of a piece of software called a hypervisor. The hypervisor acts as a mediator between the OS in the virtual machine and the resources of the physical machine. Since each virtual machine runs a full OS, it takes a lot of resources to run more than a few of them.

Knowing that running multiple VMs may be resource heavy, why then are they still used when setting up an application on the web?

One of the problems software developers face is that inconsistencies in the specifications of physical machines, whether or not they run the same OS, sometimes break applications. “It works on that machine, but not on this one.” is a phrase you hear a lot, and you might think the problem could be fixed simply by making sure that all the physical machines, and all the operating systems running on them, are exactly the same. While this seems like a viable solution, you have little to no control over either of these when putting up an application on the cloud.

Since making sure that each physical machine and operating system is exactly the same is not feasible, software developers use virtual machines instead. It is far easier to set up multiple virtual machines that are perfect mirrors of each other than to do the same with physical machines. The increased use of resources is a fair price to pay for the assurance that no matter where an application is put up, as long as the virtual machine is configured a set way, it will always work.

Unique Attributes:

  • Hypervisor
  • Guest Operating System
  • Uses application code as is

Pros:

  • Easy to set up

Docker

A Docker container, though still a form of virtualization, achieves it differently than a virtual machine. Where virtual machines need their own OS to run an application and a hypervisor to allow that OS to use the physical machine's resources, a Docker container needs neither its own OS nor a hypervisor. The difference can be seen in the diagram below.

The hypervisor and the guest OSes found in a system that uses virtual machines are no longer present when using Docker. This still works because Docker takes a stand-alone, executable package of the software and runs it on the host OS.

This stand-alone package is called a Docker image. The application in the Docker image still needs an OS to run; however, instead of a Docker container having its own OS, the host OS isolates some of its resources and allocates them to the Docker container. To make sure that the application has the correct files it needs to run, a Docker image contains all of the code, runtime, system tools, system libraries and settings. As far as the application is concerned, it is running on its own machine and it has everything it needs.

Letting the Docker container use the physical machine's resources directly is the reason that Docker containers no longer need their own OS or a hypervisor.

Running a Docker container consumes fewer resources than running a virtual machine. This means that on the same machine, it is possible to run more Docker containers than virtual machines. Using Docker containers also allows for better portability. To get a Docker image up and running in a Docker container, you need only use Docker to start the container and you're good to go. Sure, you will have to do some setup, but compared to having to start by installing an OS on a virtual machine, it's little trouble.
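As a small illustration (using the public nginx image as an arbitrary example), getting a containerised application running is only a couple of commands:

# download a prebuilt image from a registry...
docker pull nginx:latest

# ...and start a container from it; there is no guest OS to install
docker run --rm -d -p 8080:80 --name web nginx:latest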

Unique Attributes:

  • Uses application code in built Docker Images

Pros:

  • Portable
  • Fast
  • Cheaper


Published in Cloud Deployment
Monday, 06 August 2018 14:07

Systematix

We have noticed that businesses that decide to leverage software to increase productivity always end up with fewer day-to-day issues and lots of extra time for further creative development.

Background

Computer training tends to be a competitive market: you need to supply high-quality training that is fully customised to clients' needs, at a time and place that suits them, and, don't forget, at a reasonable price as well. All these requirements, calls to clients, trainer organisation and sales effort need to be tracked and managed to deliver a high-quality service.

Solution

Our approach was to develop a set of applications to help staff manage every aspect of a course booking, from sale to delivery.

We devised three key applications:

  • Systematix Website

    The website was the face of the business. Its role could be compared to that of a business development executive, except it would operate 24/7, 365 days a year, bringing client enquiries to the sales team in an organised way.

  • CRM Application

    This application would track all the enquiries and communication between prospects and the sales team. It would allow the sales team to juggle more prospects and close more sales over longer periods of time.

  • Administrator Application

    This application was particularly special to Systematix, as it took their existing processes and leveraged automation to handle the bulk of the work. Since all aspects of a booking are managed in the application, the quality of delivery skyrocketed.

Result

Since the introduction of the new applications, the business has quadrupled its revenue with little to no additional fixed costs. The applications also act as a base from which the business can expand its operations with new courses simply by updating its offering on the website.

Also, because the administrator application was designed around the way they work, it has cemented their processes in the form of logical steps, creating a lasting business asset for years to come.

Published in Cloud Deployment