Kubernetes is all the rage right now. Companies are scrambling to incorporate it as quickly as they can. And while Kubernetes (sometimes abbreviated k8s, because there are 8 letters between the “k” and the “s”) is a useful tool, there are a few caveats and provisos that one would do well to note before jumping in with both feet.
First, a little background: I’ve been deploying Kubernetes projects in production since before 1.0. Eventually, we moved complete platforms over to it, serving hundreds of thousands of unique visitors per day. I monitor, instrument, supervise, and maintain these clusters.
The biggest reason to use Kubernetes is that it’s the dominant platform for scheduling “Linux containers” (namespaced processes, as popularized by Docker et al.) across a group of nodes. This kind of scheduling can improve utilization of server resources, but you have to weigh whether the engineering effort required to transition to k8s eats up any potential cost savings. It is important to make sure that the tradeoff actually comes out financially in the client’s favor.
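To make the utilization point concrete: the scheduler bin-packs containers onto nodes based on the resources each one declares. A minimal sketch of what that declaration looks like (the names, image, and numbers here are illustrative, not from any real deployment):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # hypothetical service name
spec:
  replicas: 3          # scheduler places 3 copies wherever capacity exists
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25        # placeholder image
        resources:
          requests:              # what the scheduler reserves per copy
            cpu: "250m"
            memory: "256Mi"
          limits:                # hard ceiling enforced at runtime
            cpu: "500m"
            memory: "512Mi"
```

The `requests` block is what drives the cost-savings argument: accurate requests let many small workloads share a node. Get them wrong, and you either overprovision (wasting the savings) or get evicted under pressure.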
Kubernetes is not a silver bullet. This snippet from the Linux Foundation’s Kubernetes Administration Training Program sums up the situation well:
While a powerful tool, part of the current growth in Kubernetes is making it easier to work with and handle workloads not found in a Google data center.
– Linux Foundation Training Manual
Kubernetes is the third generation of Google’s internal orchestration system, and it’s designed for Google’s specific demands. Because Google inarguably operates the most heavily visited web properties on the planet, there is an intuition that their infrastructure must be worth imitating. While this is understandable, it is not valid for something as bespoke as infrastructure orchestration.
Google has an army of PhDs on staff, including many software luminaries and inventors (some of whom they hire so they can exercise implicit influence over the project’s governance and direction). As employees, these people are committed to solving Google’s problems in the way that works best for Google, and they do a great job of it.
But Google’s problems are a special case, and the solution is not necessarily applicable to others. Most existing software will need to undergo a serious re-architecture to run safely and well within Kubernetes, and new applications must be carefully designed to ensure compatibility. This works fine for Google, as specific elements of their business forced them to develop those requirements. But those specific circumstances are not foregone conclusions for everyone else.
Kubernetes was explicitly designed for so-called “stateless applications”, and it assumes that your software scales horizontally and cooperates with all other network entities that may show up, including and especially duplicated instances of itself.
For some software, this is not a challenge at all, but for a great deal of the software out there, meeting these architectural requirements is a large project in and of itself. And there are some classes of software that are just not well-suited to the philosophy at all, like databases.
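The “cooperates with duplicated instances of itself” assumption is baked directly into the platform’s scaling primitives. For example, a HorizontalPodAutoscaler simply stamps out more copies of a workload when load rises, which only works if every copy is interchangeable (the names and thresholds below are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web            # assumes a stateless Deployment by this name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add copies when average CPU exceeds 70%
```

For a stateless web tier, adding a replica is free. For a database, a new replica means leader election, replication lag, and data placement, none of which this mechanism knows anything about. That is the mismatch in a nutshell.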
Like I said above, all of this is great for consultants like me. It’s also great for Google — they are working aggressively to market k8s because they want to use it to break Amazon’s chokehold on the cloud ecosystem. But for many companies, the functional benefits of a Kubernetes-based infrastructure are not a worthwhile trade.
Why You Might Want To Use Kubernetes Anyway
Ultimately, computing trends become benchmarks. They develop a life of their own, outgrowing their early limited value proposition and benefiting from positive feedback loops. There are many social benefits to be had from deploying Kubernetes, and who knows what things may become available for Kubernetes clusters in the near future.
Microsoft, Google, and Amazon now offer Kubernetes-as-a-Service platforms that make it simple for relatively green admins to start a Kubernetes cluster (which is not to say that it’s simple to configure or manage it properly). Tools like Helm can simplify service deployment, acting as a package manager for internal and external dependencies alike.
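The package-manager analogy is fairly literal. A typical Helm session, assuming a running cluster and a public chart repository (the repository and release names here are just examples), looks like this:

```shell
# Register a public chart repository and refresh its index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install a packaged service as a named "release", overriding one value
helm install my-cache bitnami/redis --set auth.enabled=false

# Upgrade in place, then roll back to a previous revision if it misbehaves
helm upgrade my-cache bitnami/redis --set replica.replicaCount=2
helm rollback my-cache 1
```

Install, upgrade, rollback: the same verbs as apt or yum, applied to whole services instead of single binaries.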
Regardless, Kubernetes is here to stay, in one form or another. The format and API defined by k8s are already becoming the de facto standard cloud software description and deployment language. These tools will continue to be flagrantly abused by many, Google included, but that’s par for the course.
If you’d like to pay us to set up, run, configure, or maintain your Kubernetes cluster, we’ll do so happily. Please use the magic buttons on this site to inquire.