Highly available, self-healing deployments with K8s

Until recently, software that autonomously converged on a declared, desired state was attainable only by giant technology companies with the most talented engineering teams. Now, with Kubernetes, highly available, self-healing, autoscaling software deployments are within reach of every organization, even early-stage startups.
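That declarative model is visible even in a minimal Deployment manifest: you state a desired replica count and a health check, and Kubernetes continually reconciles the cluster toward that state, restarting failed containers and rescheduling Pods. A minimal sketch, where the image name, port, and health endpoint are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # desired state: three Pods, always
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/web:1.0   # placeholder image
        ports:
        - containerPort: 8080
        livenessProbe:               # failed probes trigger container restarts
          httpGet:
            path: /healthz
            port: 8080
```

If a node dies or a container crashes, the controller notices the gap between observed and desired state and closes it without operator intervention; pairing this with a HorizontalPodAutoscaler adds the autoscaling dimension.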

There is a future in front of us where software systems accept broad, high-level directives from us and execute upon them to deliver desired outcomes by discovering conditions, navigating changing obstacles, and repairing problems without our intervention. Furthermore, these systems will do it faster and more reliably than we ever could with manual operations. Kubernetes has brought us all much closer to that future.

Kubernetes is a remarkably powerful technology and has achieved a meteoric rise in popularity. It has formed the basis for genuine advances in the way we manage software deployments. The brilliance of Kubernetes is how configurable and extensible the system is, from pluggable runtimes to storage integrations.
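That extensibility goes beyond swappable runtimes: CustomResourceDefinitions let you teach the API server entirely new object types, which custom controllers can then reconcile just like built-in resources. A hedged sketch, using a hypothetical `Backup` resource in a made-up `example.com` API group:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com
spec:
  group: example.com          # hypothetical API group
  scope: Namespaced
  names:
    kind: Backup
    plural: backups
    singular: backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object          # a real CRD would constrain spec fields here
```

Once applied, `kubectl get backups` works like any built-in resource, which is the mechanism behind most Kubernetes operators.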

However, the power and capability of Kubernetes come at the cost of some additional complexity. We have been using Kubernetes as the foundation for production deployments since as early as 2015, when it reached 1.0. Best practices exist but must always be viewed through the lens of pragmatism. There is no one-size-fits-all approach, and “It depends” is an entirely valid answer to many of the questions you’ll inevitably confront on the journey to achieving a production-grade deployment.

Why should you even consider Kubernetes?

On the path to production, it is key to consider the idea of an application platform, which we define as a viable place to run workloads.

If platforms like Cloud Foundry, OpenShift, and Heroku have solved these problems for us, why bother with Kubernetes? A major trade-off with many prebuilt application platforms is the need to conform to their view of the world. Delegating ownership of the underlying system takes a significant operational weight off your shoulders. At the same time, if the platform's approach to concerns like service discovery or secret management does not satisfy your organizational requirements, you may not have the control required to work around that issue.

Additionally, there is the notion of vendor or opinion lock-in. With abstractions come opinions on how your applications should be architected, packaged, and deployed, which means that moving to another system may not be trivial. For example, it’s significantly easier to move workloads between Google Kubernetes Engine (GKE) and Amazon Elastic Kubernetes Service (EKS) than it is between EKS and Cloud Foundry.

At Opscale, we have tried as much as possible to focus on patterns and philosophy rather than on tools, as new tools appear faster than ever but are built on the same foundations.

Want to engineer your startup for scale? Talk to us now