Everyone is talking about Kubernetes these days, and it’s no secret that Kubernetes has emerged as the leading container orchestration tool. There are a variety of reasons for that, ranging from Kubernetes’s open source, community-based development model to helpful technical features like pod security policies and automatic load balancing.
Yet just because Kubernetes is a well-designed, popular tool doesn’t mean that achieving a highly effective Kubernetes experience is as simple as installing Kubernetes and calling it a day. Kubernetes can be used in many different ways, and getting the most out of it requires taking an approach that maximizes the value of Kubernetes’s functionality.
In this article, I will explain what this approach entails by highlighting Kubernetes’s core features, and discussing how best to leverage them in order to optimize the overall Kubernetes experience — both for DevOps teams who use the tool, and for end-users whose software is orchestrated by the tool (whether they realize it or not).
Automation and self-service
One of the most common reasons for using Kubernetes is to automate tasks that would be difficult to perform manually. Out of the box, Kubernetes automates many complicated procedures.
But I’d argue that using these automation features out of the box is not enough to get the most out of Kubernetes. To maximize your team’s ability to benefit from Kubernetes automation, you should implement a self-service model that lets anyone on the DevOps team trigger these automated workflows (provided they have the proper authorization to do so, of course).
Ideally, you want a self-service environment that lets everyone trigger a deployment, from infrastructure components to applications and services. For example, a CI/CD dashboard could trigger a deployment automatically whenever a pipeline run completes. Kubernetes makes it easy to spin up CI/CD environments that enable this functionality. That way, you can deliver new features and fix bugs more easily, and you give more ownership to engineering teams.
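To make the authorization side of self-service concrete, here is a minimal sketch of a namespaced RBAC Role and RoleBinding that would let a hypothetical "app-team" group manage Deployments in its own namespace. All names here (namespace, group, role) are illustrative placeholders, not part of any standard setup:

```yaml
# Grants members of the (hypothetical) "app-team" group the right to
# manage Deployments in the "team-apps" namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-apps
  name: deployer
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-apps
  name: deployer-binding
subjects:
- kind: Group
  name: app-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
```

Applied with kubectl apply -f, this grants the team deployment rights in its own namespace without handing out cluster-wide privileges.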
Resiliency and chaos engineering

Running microservices and distributed systems in the cloud means you should expect more failures and downtime; no system is infallible. It’s critical to embrace failure and learn how to overcome it. What better way than to adopt resiliency practices like chaos engineering, in which scenarios that randomly shut down modules or services are introduced in production? By testing the behavior of systems under varying loads and failure conditions, you can verify the functionality, scalability, performance and reliability of your architecture in real time.
The benefits of this approach are many. It makes systems resilient to failure and lets teams apply changes confidently, with minimal disruption, so that when a real failure occurs, the operations staff is already prepared to handle it.
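As a minimal sketch of the idea, a chaos job can be scheduled inside the cluster itself. The manifest below is illustrative only: the namespace, service account and target namespace are assumptions, the service account would need RBAC permission to list and delete pods, the image is assumed to provide both kubectl and shuf, and older clusters would use batch/v1beta1 instead of batch/v1. It deletes one random pod in a target namespace every hour:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: pod-chaos
  namespace: chaos
spec:
  schedule: "0 * * * *"   # once an hour
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: chaos-runner   # needs list/delete on pods
          restartPolicy: Never
          containers:
          - name: chaos
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            # pick one random pod in the target namespace and delete it
            - kubectl get pods -n target-app -o name | shuf -n 1 | xargs -r kubectl delete -n target-app
```

Purpose-built tools exist for this, but even a crude job like this one forces teams to confront how their services behave when a pod disappears without warning.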
Auditing

Inspection of Kubernetes operations, logs and configuration changes should be baked into every environment, because you need to track the actions that occur there and act on any inconsistencies. Auditing is also required for compliance.
Kubernetes has first-class support for auditing, with configurable backends, policies, and log collectors that aim to give administrators a complete picture of current operational information.
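For example, a minimal audit policy might record only metadata for sensitive resources while logging full request bodies for everything else. This sketch uses the audit.k8s.io/v1 API; the file would be passed to the API server via its --audit-policy-file flag:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
# Skip the high-volume RequestReceived stage to keep logs manageable.
omitStages:
- RequestReceived
rules:
# Never record secret payloads, only who touched them and when.
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
# Log everything else with full request bodies.
- level: Request
```

Rules are evaluated in order and the first match wins, so the narrow, low-detail rules should come before the broad catch-all.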
Reproducibility

You should ideally maintain development, QA, validation and production environments that are as consistent with one another as possible, so that you can reproduce a given scenario in any of them. With so many moving parts in a containerized setup, it’s very easy for drift to creep in between environments, leading to the infamous “It Works On My Machine” moments.
This is why it’s important to invest proactively in maintaining reproducibility (sometimes also called parity) across environments, so that the engineering team can have higher confidence that a decision made about one part of the Kubernetes environment will lead to predictable behavior in the other parts.
The best way to achieve parity is to export the same configuration for every Kubernetes environment. For example, if using the kubectl tool, you can export a single parameterized definition for a resource (service, pod, secret):

kubectl get service <service-name> -o yaml --export

(Note that the --export flag was deprecated in Kubernetes 1.14 and later removed; on recent clusters, omit it and strip the cluster-generated fields, such as status and metadata.uid, from the output yourself.)
If using kube-aws you can use the export flag to create CloudFormation stack files to apply them on demand:
kube-aws up --export
If using OpenShift Origin, you can use the oc export command to export all deployment resources so they can be applied elsewhere. The exported definitions can also be used to create boilerplate templates that accept parameters.
For example, to export all services as YAML, we can run:
oc export service -o yaml
A best practice for the exported files is to commit them to a version control system so they can be reviewed and verified as part of the CI/CD pipeline.
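Once the exports live in version control, detecting drift between environments reduces to a text comparison. The sketch below is illustrative: it fabricates two exported manifests in place (in practice they would come from export runs against each environment, and the file names are placeholders) and fails loudly if they differ:

```shell
#!/bin/sh
# Minimal drift check between two exported manifests.
# The file names and contents are placeholders; real inputs would come
# from `kubectl get ... -o yaml` runs against each environment.
set -eu

# Fabricate two sample exports so the sketch is self-contained.
printf 'kind: Service\nspec:\n  ports:\n  - port: 80\n' > staging-service.yaml
printf 'kind: Service\nspec:\n  ports:\n  - port: 80\n' > production-service.yaml

if diff -u staging-service.yaml production-service.yaml; then
    echo "environments are in sync"
else
    echo "drift detected between staging and production"
fi
```

With identical inputs this prints "environments are in sync"; any difference surfaces as a unified diff instead, which is easy to wire into a CI/CD gate.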
Security

Kubernetes is a very powerful platform, but it can be misused and abused in many ways if not configured properly. To achieve the best possible Kubernetes security, you need to evaluate all of the available security controls without sacrificing usability and availability. That includes setting up access controls, configuring firewalls and ports, scanning for K8s vulnerabilities, applying throttling and load balancing, and setting up VPCs and pod security policies. Ideally, you want security to be built in and integrated into the build pipeline so that incidents are reported immediately and continuously.
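As one concrete layer, a default-deny NetworkPolicy is a common starting point: it blocks all ingress traffic to pods in a namespace, after which you explicitly allow only the flows you need. The namespace name below is an illustrative placeholder, and the policy only takes effect on clusters whose network plugin enforces NetworkPolicy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-apps
spec:
  # An empty podSelector matches every pod in the namespace.
  podSelector: {}
  # Ingress is listed but no ingress rules are given,
  # so all inbound traffic to these pods is denied.
  policyTypes:
  - Ingress
```

Additional policies can then be layered on top to whitelist specific traffic, such as ingress from a front-end namespace to a back-end service.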
But that alone is not enough. A better security model has multiple layers of security that allow segregated controls and automation without sacrificing the user experience. Luckily, with Alcide’s Kubernetes Advisor, you can enable advanced security controls that protect against vulnerabilities and attack vectors, focusing on the auditing, compliance, topology, network, policies, and threats of your K8s clusters. It then produces a report that contains:
- A summary of your cluster’s compliance and security status.
- A detailed list of identified compliance and security issues, each followed by a recommendation for quick remediation.
- A baseline profile, built from a specific cluster’s scan results, that can be compared against other clusters.
Kubernetes is an excellent tool, but it’s not magic. Getting the most out of it (and using it in a way that optimizes the experience of end-users) requires taking extra steps to improve specific dimensions of Kubernetes like security, reliability and reproducibility. These extra steps are not hard to take, and the exact approach chosen will vary from organization to organization. But whatever your Kubernetes optimization strategy entails, the key is simply to realize that the power is in your hands to make the most out of a stock Kubernetes deployment.
About Theo Despoudis
Theo Despoudis is a Senior Software Engineer and an experienced mentor. He has a keen interest in Open Source Architectures, Cloud Computing, best practices and functional programming. He occasionally blogs on several publishing platforms and enjoys creating projects from inspiration.