Kubernetes running on Google Cloud, also known as Google Container Engine (or GKE for short), is our go-to deployment architecture. We have helped many of our clients set up their application infrastructure using this solution with excellent results. If you're already here then you probably know all the great features and benefits Kubernetes offers.
One question that we found ourselves asking was 'What is the best ingress for Kubernetes?'. The answer isn't straightforward because, of course, it entirely depends on your setup and what you're trying to achieve. So here we will take a look through four of the most popular ingress solutions available.
For people that don't know, there are basically two out-of-the-box, natively supported solutions:
- GCE Ingress via the Google Cloud Load Balancing service
- NGINX Ingress
We will also take a look at two other solutions on offer:
- Voyager
- Træfik
A Quick Load Balancing Primer
Before we delve into the different ingress options we have on Kubernetes, let's quickly review the two types of load balancing we might want to use: Layer 4 (L4) and Layer 7 (L7). These layers correspond to layers of the OSI model.
Layer 4
At this layer, load balancing is relatively simple. Normally it is just a service that forwards traffic over TCP/UDP ports to a number of backend services. This is generally done in a round-robin fashion, with some basic health checks to determine whether a service should be sent requests. Traffic at this layer tends to be handled by simple, high-performance applications.
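In Kubernetes terms, the most direct way to get an L4 load balancer is a Service of type LoadBalancer, which on Google Cloud provisions a network load balancer forwarding TCP traffic straight to the matching pods. Below is a minimal sketch; the names, labels and ports are placeholders, not taken from any particular setup.

```yaml
# Minimal L4 example: a Service of type LoadBalancer forwarding TCP port 80
# to any pod labelled app=web in a simple round-robin fashion.
apiVersion: v1
kind: Service
metadata:
  name: web-l4          # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: web            # placeholder label selector
  ports:
    - protocol: TCP
      port: 80          # port exposed by the load balancer
      targetPort: 8080  # port the pods actually listen on
```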
Layer 7
This is where the magic happens. At Layer 7, the load balancer can intelligently route traffic by inspecting its content, which allows it to optimize the way it handles that traffic. Not only that, but it can also manipulate the content, perform security inspection and implement access controls. Because it has to inspect traffic content and apply rules, Layer 7 load balancing requires far more resources than Layer 4, and the applications used are highly tuned to carry out these operations.
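In Kubernetes, this kind of content-aware routing is exactly what an Ingress resource describes. As a rough sketch (the host, paths and service names below are invented for illustration), a single L7 entry point can route different paths on the same host to different backend services:

```yaml
# L7 routing sketch: one entry point, traffic split by path.
# Requests to example.com/api go to the api service, everything else to web.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /api
            backend:
              serviceName: api-svc   # placeholder service
              servicePort: 80
          - path: /
            backend:
              serviceName: web-svc   # placeholder service
              servicePort: 80
```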
GKE Ingress
First, we will take a look at the GKE ingress. We found this service works great for simple deployments, as it is available straight out of the box and requires very little configuration. It worked well up until the point we needed to do something other than simple host and path routing (a minimal manifest is sketched after the lists below).
Features
- Available out of the box on GKE with minimal configuration
- Backed by the Google Cloud Load Balancing service
Limitations
- Limited to simple host and path based routing
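To give a feel for it, here is a minimal GKE ingress sketch. The annotations are the GKE-specific parts: the gce ingress class and a pre-reserved global static IP (the IP name, host and service names are placeholders).

```yaml
# GKE ingress sketch: the gce class provisions a Google HTTP(S) load balancer.
# Note the backing Service must be of type NodePort for the GCE controller.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gke-ingress-example
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"  # placeholder
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: web-svc   # placeholder NodePort service
              servicePort: 80
```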
NGINX Ingress
The other out-of-the-box solution is an NGINX reverse proxy setup. This offers a lot more flexibility in configuration than the GCE ingress (a minimal example follows the lists below).
Features
- Deep configuration through the controller's ConfigMap
- SSL Enforcement via redirect
- ModSecurity Web Application Firewall – this can stop penetration attempts before they even reach your application.
Limitations
- Let's Encrypt is not natively supported, but is available by using Kube-Lego
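As an illustration of how little it takes to get SSL enforcement, the sketch below redirects all HTTP traffic to HTTPS with a single annotation (the host, TLS secret and service names are placeholders; deeper tuning goes through the controller's ConfigMap).

```yaml
# NGINX ingress sketch: the nginx class targets the NGINX controller and the
# ssl-redirect annotation forces plain HTTP requests over to HTTPS.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-example
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - example.com
      secretName: example-tls        # placeholder TLS secret
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web-svc   # placeholder service
              servicePort: 80
```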
Voyager
The Voyager ingress is backed by the well-respected HAProxy, which is known as one of the best open source load balancers available. It also comes with built-in support for Prometheus, another great piece of software that can provide powerful metrics on your traffic. This sounds like it could be very promising (a rough sketch of its ingress resource follows the feature list below).
Features
- Supports layer 4 and 7 load balancing
- Cross Namespace traffic routing
- Semi-native Let's Encrypt support, which can offer managed SSL certificates as a resource
- Allows for monitoring using Prometheus
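For a feel of how it is used, here is a rough sketch of a Voyager ingress. Voyager defines its own Ingress kind under the voyager.appscode.com API group; the exact apiVersion, field types and annotations vary between releases, so treat the details below as illustrative rather than definitive.

```yaml
# Voyager ingress sketch (illustrative only): Voyager watches its own
# CRD-based Ingress kind and programs HAProxy from it.
apiVersion: voyager.appscode.com/v1beta1   # version may differ by release
kind: Ingress
metadata:
  name: voyager-ingress-example
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /web
            backend:
              serviceName: web-svc   # placeholder service; Voyager can also
              servicePort: "80"      # route to services in other namespaces
```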
Træfik
Traefik is the new kid on the block. It bills itself as a modern HTTP reverse proxy and load balancer made for deploying microservices. It's designed to compete with the likes of NGINX and HAProxy, but is more lightweight and focused towards container deployments. This seems very promising and it is gathering quite a community around it. Support for the main backends is already on offer, including Kubernetes (a minimal sketch follows the lists below).
Features
- Web UI based on AngularJS
- Let’s Encrypt via Lego
- Choice of monitoring options natively supported: Prometheus, DataDog, StatsD and InfluxDB
Limitations
- While Traefik offers a very rich set of features, these are not all easily accessible via the ingress config
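Despite that, day-to-day use still goes through a standard Ingress resource pointed at Traefik via the ingress class annotation, with the richer features configured on the Traefik deployment itself. A minimal sketch (names are placeholders):

```yaml
# Traefik ingress sketch: the ingress class annotation tells the Traefik
# controller to pick this resource up rather than another controller.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-ingress-example
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /app
            backend:
              serviceName: app-svc   # placeholder service
              servicePort: 80
```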
Here is how the four options compare:

| | GCE Ingress | NGINX Ingress | Voyager | Træfik |
|---|---|---|---|---|
| Layer 7 Support | ✓ | ✓ | ✓ | ✓ |
| Layer 4 Support | | | ✓ | |
| Native To K8s | ✓ | ✓ | | |
| SSL Enforcement | | ✓ | ✓ | |
| Let's Encrypt Support | | via Kube-Lego | ✓ | ✓ |
| Basic Authentication | | ✓ | ✓ | ✓ |
| Access Control | | ✓ | ✓ | ✓ |
| Web Application Firewall | | ✓ (ModSecurity) | | |
| Built-in Monitoring UI | | ✓ (VTS) | ✓ (HAProxy stats) | ✓ |
| CORS | | ✓ | | |
| Prometheus Support | | ✓ | ✓ | ✓ |
Conclusions
If you're using Google Cloud and all you need is a reliable, super simple entry point, then the GCE Ingress will give you everything you need with minimal config.
Traefik has a great selection of features and, for a young project, shows great promise. However, its integration with Kubernetes is not as tight as its competitors', and this makes configuring and managing it a bit more cumbersome.
On the other hand, it offers a lot of integration options with different deployment, management and monitoring platforms, so if you are already using one of these then this might be the best choice for you. We will certainly be keeping a close eye on this project.
This then leaves us with NGINX and Voyager. Both of these platforms are backed by industry tried-and-tested load balancers and offer a similar set of features. Choosing between the two is likely to come down to a mixture of which features best match your environment and a personal preference between the two load balancing technologies.
One feature that Voyager offered that appealed to us was cross-namespace traffic routing, which worked well with the way we were employing namespaces. This allowed us to limit the total number of ingresses we had and in turn simplified our deployment.
If none of the features offered by Voyager are of any consequence to you, then you are most likely better off sticking with the NGINX ingress, as it is supported under the Kubernetes umbrella.
We have heard about some other ingresses out there if you want to explore the subject further.