Reggie Bigornia

Developer

Monday, 17 September 2018 10:50

The Kubernetes config

The Kubernetes config is a file that Kubernetes uses to store its configuration settings.

Information on things like clusters, contexts and users is all stored here. Having all of these in the config allows us to easily switch between them when working with multiple projects.

If you have already started using Kubernetes, you can sneak a peek at what it looks like using the following command in your terminal:

kubectl config view

This should print all of your Kubernetes configs, merged into a single file, to the terminal.

Cluster

One of the collections in the config file is for clusters. It contains a block sequence for each of your clusters. Each block sequence contains a mapping for the name of the cluster, as well as mappings for the server and the certificate authority data of the cluster.
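
For illustration, a hypothetical cluster entry might look like the following; the name, server address and certificate data are placeholders rather than values from this walkthrough:

clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTi4uLg==
    server: https://203.0.113.10:6443
  name: example-cluster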

User

Each block sequence in the user collection contains the name of the user block and the user information. The user information contains the auth provider, which in turn contains the name of the service provider and other details like token information, cmd paths and args.
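
As a hypothetical example, a user entry that relies on a cloud auth provider might look like the following; the provider name, paths and token values are purely illustrative:

users:
- name: example-user
  user:
    auth-provider:
      name: gcp
      config:
        cmd-path: /usr/local/bin/gcloud
        cmd-args: config config-helper --format=json
        access-token: <redacted>
        expiry: 2018-09-17T11:50:00Z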

Context

For the context collection, each block sequence contains the name of the context, the user it uses and the cluster it uses. The context collection pairs users with the correct clusters.
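
A single hypothetical context entry tying the example user above to the example cluster might look like this:

contexts:
- context:
    cluster: example-cluster
    namespace: default
    user: example-user
  name: example-context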

The config file

When working with multiple clusters, being able to quickly change between them is a time saver. This can be achieved by creating a context for each cluster in your Kubernetes config.

Creating the config file

Start by creating a file named 'example-config' with the following content:

apiVersion: v1
kind: Config
preferences: {}

clusters:
- cluster: 
  name: client-A
- cluster:
  name: client-B

users:
- name: developer
- name: maintainer

contexts:
- context:
  name: dev-production
- context:
  name: dev-staging
- context:
  name: dev-testing
- context:
  name: dev-development
- context:
  name: maint-production

The configuration file now describes two clusters named client-A and client-B respectively. It also describes two users, developer and maintainer. Finally it describes five contexts, dev-production, dev-staging, dev-testing, dev-development, and maint-production.

For each of your clusters, you will also need to set the server details and, if your server is using SSL, the certificate. Don't worry if your server is not using SSL; you can set insecure-skip-tls-verify: true for that cluster instead.

You can do this using kubectl:

kubectl config --kubeconfig=example-config set-cluster client-A --server=https://1.2.3.4 --certificate-authority=/home/user/some-ca-file.crt

In the above command, we tell kubectl to edit the definition of the client-A cluster in the example-config file. We want to set the server to https://1.2.3.4 and to use the certificate file at /home/user/some-ca-file.crt.

If you open up your example-config file again, you will find that your definition for cluster client-A now looks like this:

...
- cluster:
    certificate-authority: /home/user/some-ca-file.crt
    server: https://1.2.3.4
  name: client-A
...

To set up cluster client-B, we use the same command with some of the variables changed:

kubectl config --kubeconfig=example-config set-cluster client-B --server=https://5.6.7.8 --insecure-skip-tls-verify

The client-B cluster definition will now look like this:

...
- cluster:
    insecure-skip-tls-verify: true
    server: https://5.6.7.8
  name: client-B
...

Now that we have successfully set the definitions for our clusters, we need to add user details to the configuration file.

The following two commands will add the details we require to our developer and maintainer users:

kubectl config --kubeconfig=example-config set-credentials developer --client-certificate=some-client.crt --client-key=some-client.key

kubectl config --kubeconfig=example-config set-credentials maintainer --username=user --password=some-password

The updated user collection in your example-config should now look like this:

...
- name: developer
  user:
    client-certificate: some-client.crt
    client-key: some-client.key
- name: maintainer
  user:
    password: some-password
    username: user
...

Now that we have properly defined our clusters and users, we can define the contexts. We initially created five contexts, so we will have to run five commands to add all their details:

kubectl config --kubeconfig=example-config set-context dev-development --cluster=client-A --namespace=development --user=developer

kubectl config --kubeconfig=example-config set-context dev-testing --cluster=client-A --namespace=testing --user=developer

kubectl config --kubeconfig=example-config set-context dev-staging --cluster=client-A --namespace=staging --user=developer

kubectl config --kubeconfig=example-config set-context dev-production --cluster=client-A --namespace=production --user=developer

kubectl config --kubeconfig=example-config set-context maint-production --cluster=client-B --namespace=production --user=maintainer

One last look into our example-config file and we should see this:

...
- context:
    cluster: client-A
    namespace: development
    user: developer
  name: dev-development
- context:
    cluster: client-A
    namespace: testing
    user: developer
  name: dev-testing
- context:
    cluster: client-A
    namespace: staging
    user: developer
  name: dev-staging
- context:
    cluster: client-A
    namespace: production
    user: developer
  name: dev-production
- context:
    cluster: client-B
    namespace: production
    user: maintainer
  name: maint-production
...

Using the config file to pick a context

Now that we have everything set up, we can set a current context to be able to switch between clusters, users and namespaces as we have previously defined. Use the following command to set the current context:

kubectl config --kubeconfig=example-config use-context dev-development

This command will apply the user details, cluster details and the namespace defined for that context in our example-config file.
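
If you want to confirm which context is currently active, kubectl can print it back to you:

kubectl config --kubeconfig=example-config current-context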

Viewing a config file in the terminal

It is also possible to view the config file in the terminal using the view command:

kubectl config --kubeconfig=example-config view

Or, if you only want to see the details of the current context, you can use the --minify flag with the view command:

kubectl config --kubeconfig=example-config view --minify
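
As a convenience (assuming a POSIX-style shell), you can also point the KUBECONFIG environment variable at the file so you do not have to pass --kubeconfig with every command:

export KUBECONFIG=$(pwd)/example-config
kubectl config view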

Virtual Machines vs Docker Containers

The main difference between a virtual machine and a Docker container is in how they interact with the physical machine they run on. These differences affect how many resources a virtual machine or a Docker container consumes. They also affect the portability of an application meant to be used with a virtual machine or a Docker container.

Virtual Machines

A virtual machine is software that allows you to run an operating system (OS) inside another OS, called the host OS. Below is a diagram of what it looks like to run applications on a virtual machine.

While a virtual machine can only emulate one OS at a time, a single physical machine can run multiple virtual machines, and therefore multiple operating systems, at a time. Their number is limited only by the amount of resources on that physical machine.

A virtual machine is able to run its own OS on top of the host OS with the help of a process called a hypervisor. The hypervisor acts as a mediator between the OS in the virtual machine and the resources of the physical machine. Since each virtual machine runs a full OS, running several of them takes a significant amount of CPU, memory and storage.

Knowing that running multiple VMs may be resource heavy, why then are they still used when setting up an application on the web?

One of the problems software developers face is that inconsistencies in the specifications of physical machines, whether or not they are running the same OS, sometimes break applications. “It works on that machine, but not on this one” is a phrase you hear a lot, and you might think the problem can be fixed simply by making sure that all the physical machines, and all the operating systems running on them, are exactly the same. While this seems like a viable solution, you have little to no control over either of these when putting up an application in the cloud.

Since making sure that each physical machine and operating system is exactly the same is rarely possible, software developers use virtual machines instead. It is far easier to set up multiple virtual machines that are perfect mirrors of each other than to do the same with physical machines. The increased use of resources is a fair price to pay for the assurance that no matter where an application is put up, as long as the virtual machine is configured the same way, it will always work.

Unique Attributes:

  • Hypervisor
  • Guest Operating System
  • Uses application code as is

Pros:

  • Easy to set up

Docker

A Docker container, though still a form of virtualization, works differently from a virtual machine. Where a virtual machine needs its own OS to run an application and a hypervisor to allow that OS to use the physical machine's resources, a Docker container needs neither its own OS nor a hypervisor. The difference can be seen in the diagram below.

The hypervisor and the guest OSes found in a system that uses virtual machines are no longer present when using Docker. This still works because Docker takes a stand-alone, executable package of the software and runs it on the host OS.

This stand-alone package is called a Docker image. The application in the Docker image still needs an OS to run; however, instead of the Docker container having its own OS, the host OS isolates some of its resources and allocates them to the container. To make sure that the application has everything it needs to run, a Docker image contains all of the code, runtime, system tools, system libraries and settings. As far as the application is concerned, it is running on its own machine and it has everything it needs.

Allowing the Docker container to use the physical machine's resources directly is the reason Docker containers no longer need their own OS or a hypervisor.

Running a Docker container consumes fewer resources than running a virtual machine. This means that on the same machine, it is possible to run more Docker containers than virtual machines. Using Docker containers also allows for better portability. To get a Docker image up and running in a Docker container, you need only use Docker to start the container and you're good to go. Sure, you will have to do some setup, but compared to having to start by installing an OS on a virtual machine, it's little trouble.
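
As a rough, hypothetical sketch of that workflow (the image name and application file below are made up for illustration), packaging an application into an image could look like this:

# Dockerfile - bundles the application code and its runtime into an image
FROM python:3-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]

You would then build the image and start a container from it:

docker build -t example-app .
docker run --rm example-app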

Unique Attributes:

  • Uses application code packaged into Docker images

Pros:

  • Portable
  • Fast
  • Cheaper

Learn more about Docker here