Kubernetes is on the lips of just about every IT admin and developer across the planet. It doesn’t matter what segment of business you work in, Kubernetes is either already in place, or it’s being discussed.
What is Kubernetes? Kubernetes is an open-source container-orchestration system used to automate application deployment, scaling, and management. To put it more simply: Kubernetes is a means to manage containerized applications across a cluster of nodes. If you're looking for an incredibly agile development system, one that can be automated and scaled in ways few others can, then Kubernetes is the way to go.
Originally created by Google, Kubernetes is now maintained by the Cloud Native Computing Foundation and is employed by enterprise businesses everywhere. Companies such as Babylon, Booz Allen Hamilton, Booking.com, AppDirect, Adidas, Ancestry.com, Adform, Bose, Box, Buffer, Capital One, CERN, Huawei, IBM, ING, The New York Times, Nokia, Nordstrom, Spotify, Squarespace, and Yahoo! Japan have used (or are using) Kubernetes.
As popular as Kubernetes might seem, it's worth noting that it's not for the faint of heart. Anyone looking to use it for things like ERP software development or on-premises cloud solutions would be wise to consider Kubernetes, but it's very important to do your research first.
Why? Kubernetes isn't like your average software development chain or application. Unlike, say, standalone Docker containers or Docker Swarm, Kubernetes has a lot of moving parts. In fact, a few stumbling blocks prevent Kubernetes from being more widely deployed by businesses outside the realm of the Fortune 500.
What stumbling blocks, you ask? Let’s take a look at some of the issues that prevent Kubernetes from taking the next step in global acceptance.
The learning curve

This is probably the biggest issue with Kubernetes. To get up to speed with this technology, with any level of fluency, you'll wind up spending a good deal of time reading and experimenting. That's not to say you can't get a Kubernetes cluster up and running in ten or fifteen minutes; that's the easy portion of the deployment. Once the cluster is deployed, the challenges start to mount fairly quickly.
With your cluster running, you'll immediately start to realize that, while a Docker Swarm cluster is ideal for deploying a single app or service and scaling it up and down with ease, Kubernetes is more about connecting those apps and services together to create a much more powerful whole. Because of this, deploying those apps and developing the connecting glue between them is considerably more complicated than with its simpler cousin, Docker.
But that is by design. Where Docker makes deploying and scaling containers simple, the power and flexibility of Kubernetes make it a greater challenge to learn. You must understand all of the moving pieces and how they fit together. Pods, services, replication controllers: to truly understand Kubernetes, you have to understand a significant amount of technology.
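As a sketch of how a couple of those pieces fit together, the hypothetical manifest below defines a Deployment (which manages a set of replicated pods) and a Service that routes traffic to them. All names, labels, and the image are placeholders, not a prescribed setup:

```yaml
# Illustrative only; names, labels, and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3                 # Kubernetes keeps three pods running
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: web
        image: nginx:stable   # any container image works here
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  selector:
    app: demo-app             # the Service finds pods by label
  ports:
  - port: 80
    targetPort: 80
```

Notice the glue is the label selector: the Deployment stamps `app: demo-app` onto each pod, and the Service uses that same label to find them. That indirection is powerful, but it's exactly the kind of connection you have to hold in your head with Kubernetes.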
And let's not forget, there are different routes to getting started with Kubernetes. There's kubectl (the command-line client, packaged in some distributions as kubernetes-cli), kubeadm (for bootstrapping full clusters), Minikube (for local, single-node test clusters), and more ("k8s," for its part, is simply shorthand for Kubernetes itself). Which is right for you?
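To make those routes concrete, here's a hedged sketch of the typical first commands for each (a transcript, not something to run blindly; it assumes the tools are already installed):

```shell
# Route 1: a local, single-node learning cluster with Minikube
minikube start

# Route 2: bootstrapping a full cluster with kubeadm
# (run on the machine that will become the control plane)
sudo kubeadm init

# Either way, day-to-day management happens through kubectl
kubectl get nodes
```

The point isn't the commands themselves so much as the fact that you have to pick a route before you've learned enough to know which one fits your use case.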
This complexity doesn’t mean it’s impossible. It’s just significantly more challenging to do right than other technologies.
Accessing apps and services
The next issue surrounding Kubernetes is accessing those deployed apps and services. With Docker, it's very straightforward: you deploy an app or service, and it's reachable almost as if it had been installed directly on the host. For example, deploy an NGINX container on a Docker Swarm, and you can point a browser to the IP address of any one of the Swarm nodes and you should see the website.
With Kubernetes, it's not quite so simple. When you deploy a containerized application to the cluster, it's reachable from within the cluster. So if your cluster uses the IP address scheme 10.20.1.x and your LAN is on 192.168.1.x, you won't be able to reach that app or service from the 192.168.1.x LAN without some extra work. You might first have to deploy an Ingress controller to open the cluster to the LAN. Even then, it might not work.
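One of the simpler escape hatches, sketched below, is a Service of type NodePort, which opens the same port on every node's LAN-facing address so a 192.168.1.x client can reach an app living on the 10.20.1.x pod network. The name, label, and port numbers here are placeholders:

```yaml
# Illustrative only; adjust the selector and ports for your app.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort            # expose the Service on every node's IP
  selector:
    app: demo-app           # placeholder label; must match your pods
  ports:
  - port: 80                # cluster-internal port
    targetPort: 80          # port the container listens on
    nodePort: 30080         # then reachable at http://<node-LAN-IP>:30080
```

NodePort is coarse (one high-numbered port per service), which is why larger deployments still end up reaching for an Ingress controller or a LoadBalancer-type service instead.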
The lack of a good GUI
I should preface this by saying there is a good Kubernetes GUI. It's called Web UI (also known as the Kubernetes Dashboard), and it does a great job of managing your Kubernetes cluster. There's a significant caveat, though: you deploy Web UI as a container on your Kubernetes cluster, and out of the box that GUI tool is only available from within the cluster. So unless you've deployed that cluster on a server with a desktop interface, you may struggle to access the interface. Web UI wasn't meant to be accessed from your LAN, but from the cluster.
The catch is that most Kubernetes clusters are deployed on headless Linux servers, so accessing Web UI is problematic. There are other GUI tools, but most of them suffer from the same issue as Web UI.
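The usual workaround, sketched below, is to run kubectl proxy from a workstation whose kubeconfig points at the cluster; the proxy then serves the Dashboard on localhost. This is a transcript rather than a recipe, and the exact service path can vary with the Dashboard version and the namespace it was installed into:

```shell
# Run on a machine with kubectl configured to talk to the cluster.
kubectl proxy
# With the Dashboard installed in the kubernetes-dashboard namespace,
# it is then typically reachable in a local browser at:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```

Workable, yes, but it's exactly the kind of hoop-jumping that a LAN-accessible GUI would make unnecessary.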
This is probably one of the biggest weaknesses of Kubernetes: the lack of GUI tools that make it easy to deploy and manage containers. The issue is, of course, driven by the inability to access the cluster from a LAN without jumping through various hoops. In other words, many of the issues surrounding Kubernetes combine to prevent the use of a quality graphical interface for managing apps and services.
Because of this, the Kubernetes developers (or a third party) need to develop a much simpler means of accessing the cluster from a LAN. Without such a component, the barrier to entry for Kubernetes is far greater than many realize. Not having a point-and-click GUI that is simple to deploy makes the admin's job more complicated than it needs to be.
Taking the Leap to Kubernetes
The truth of the matter is, if you’re considering making that leap into Kubernetes, you simply have to give yourself the time to fully understand every component of the technology before you begin. Even then, you will run up against hurdles that might take considerable effort to surmount. However, even with that barrier to entry, Kubernetes makes it possible to deploy and scale applications and services in ways you never dreamed of.
So, in the end, the effort in overcoming the complications is probably worth it.