Last week our team came back from KubeCon Seattle, the CNCF's largest event to date with over 8,000 attendees (!), where we showcased our Microservices Firewall. I thought that now would be a good time to touch base again and talk about Kubernetes security.
Kubernetes (K8s) is a very popular open-source container orchestration tool that automatically scales, distributes, and handles faults in containerized applications. As a widely used platform that manages an entire ecosystem, Kubernetes influences many run-time security functions. Because of this influence, it is important to adhere to security best practices in order to avoid future headaches.
Security must be a priority for any production system. If the system is comprised of a set of distributed processes (a cluster, in other words), security must be even stricter. Securing a simple system involves maintaining good practices and updated dependencies, but securing a cluster requires evaluating the communications, images, operational system, and hardware issues. Data breaches, denial of service attacks, stolen sensitive information, and downtime can all be avoided with solid security policies.
At this article’s publication time, v1.13 is the latest release of Kubernetes. The previous release, v1.12, introduced the general availability of TLS bootstrapping (which simplifies adding and removing nodes in the cluster), enhanced multi-tenancy, improved horizontal and vertical auto-scaling, and many more features.
In the following sections, we’ll investigate some security practices that will help you improve your security and avoid unforeseen complications when deploying your own Kubernetes instance.
How to Keep Your Kubernetes Cluster Secure
In 2017, Kubernetes became the industry’s most popular container orchestration tool because of its features, its community, its offerings in the cloud, and its recognition by competitors like Docker and DC/OS.
Because of this huge public acceptance, threats appear more frequently in the Kubernetes environment. These threats may result in compromises or undesirable scenarios, including privilege escalation, exfiltration of sensitive data, disruption of operations, or breaches of compliance policies. It is therefore very important to follow well-established security best practices when using Kubernetes.
Maintain Good Container Practices
A cluster is a set of machines, typically running a large number of containers. The first step in securing the cluster is to take care of its smallest part: without guaranteeing the security of your containers, you can’t guarantee the security of the entire cluster.
As with every other technology, containers have their security pitfalls that need to be addressed. A new container developer should always read best practices before building new images. Some of these best practices include:
- Package a single application per container. The Docker build process is designed around running a single application per container, but there are still ways to end up with more than one. When multiple applications run in the same container, it becomes much harder to isolate a faulty or insecure application.
- Remove unnecessary tools. The more tools you add to your image, the larger your attack surface becomes. Use Alpine- or distroless-based images at the top of your image hierarchy.
- Carefully consider whether to use a public image. Publicly available images make you more productive, but they can also expose you to vulnerabilities or malicious behavior, and even trusted vendors can become problematic if their credentials are exposed. When you do use a publicly available image, pin it by its SHA digest rather than by a mutable tag.
- Use CI to eradicate vulnerabilities before container deployment. Use CI jobs to scan images before deployment. Tools are available that identify known vulnerabilities and security flaws before they cause problems in production.
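As a minimal sketch of the digest-pinning practice above, a pod spec can reference an image by its SHA256 digest instead of a tag (the pod name and digest here are illustrative, not real values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-example   # illustrative name
spec:
  containers:
  - name: app
    # Pinning by digest guarantees the exact image content,
    # unlike a tag such as :latest, which can silently change.
    # The digest below is a placeholder; use the real digest
    # reported by your registry.
    image: alpine@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```

Pinning by digest means a retagged or tampered image upstream cannot slip into your deployments unnoticed.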
User Authentication with RBAC
An important step toward a secure Kubernetes environment is making sure that everyone who signs on as an administrator actually has the authority to do so.
To assist with this, Kubernetes offers ABAC (attribute-based access control) and RBAC (role-based access control).
By disabling ABAC and using RBAC, it is possible to grant permissions based on predefined roles and privileges, increasing overall security. In several PaaS solutions, RBAC can be integrated with the platform’s own IAM solution. AWS EKS offers IAM authentication, and within that system you can assign RBAC roles directly to each IAM entity.
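As a minimal sketch of RBAC in practice, the manifest below defines a Role that grants read-only access to pods in a single namespace and a RoleBinding that attaches it to one user (the namespace and user names are illustrative):

```yaml
# Role granting read-only access to pods in the "staging" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging      # illustrative namespace
  name: pod-reader
rules:
- apiGroups: [""]         # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# Bind the role to a specific user within the same namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: staging
subjects:
- kind: User
  name: jane              # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The user can list and watch pods in that namespace and nothing else, which is exactly the least-privilege posture RBAC is meant to enforce.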
Set Resource Quotas
When many users or teams share a cluster with a fixed number of nodes, the concern arises that one user could consume more than their fair share of resources. Unbounded resources can lead to total cluster unavailability in the case of a DoS attack or a malfunctioning application: a resource with no limits can draw all the available hardware resources to itself. To deal with this situation, administrators use resource quotas, a tool provided by Kubernetes.
Resource quotas provide constraints that limit aggregate resource consumption per namespace. They limit the number of objects that can be created in a namespace, by type, as well as the total amount of compute resources that may be consumed by resources in that namespace.
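A resource quota is itself a Kubernetes object. The sketch below caps a team's namespace at 20 pods and bounds its aggregate CPU and memory (the namespace name and limits are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a        # illustrative namespace
spec:
  hard:
    pods: "20"             # at most 20 pods in this namespace
    requests.cpu: "4"      # sum of CPU requests may not exceed 4 cores
    requests.memory: 8Gi   # sum of memory requests may not exceed 8 GiB
    limits.cpu: "8"        # sum of CPU limits may not exceed 8 cores
    limits.memory: 16Gi    # sum of memory limits may not exceed 16 GiB
```

Once this quota is in place, any pod created in the namespace without explicit resource requests and limits is rejected, which forces teams to declare their resource needs up front.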
Use Pod Security Policies
A Pod Security Policy defines a set of conditions a pod must satisfy in order to be accepted into the cluster. Each policy can apply to one or more pods, and the policies can be integrated with RBAC so that not all users may use all policies.
There are a myriad of policies that can be defined. Examples include:
- Privileged rules that determine whether a pod can run in privileged mode.
- Host policies that determine whether a pod can use the host’s network, PID, or IPC namespaces.
- Volume and file system policies that determine a pod’s level of access to persistent storage.
- User and group policies that determine which user a pod runs as and whether it can run as root.
Please refer to the Kubernetes Pod Security Policy documentation for more examples.
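Putting several of the policy types above together, a restrictive Pod Security Policy might look like the sketch below (the policy name and the exact set of allowed volume types are illustrative choices):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted         # illustrative name
spec:
  privileged: false        # forbid privileged mode
  hostNetwork: false       # forbid sharing the host's network namespace
  hostPID: false           # forbid sharing the host's PID namespace
  hostIPC: false           # forbid sharing the host's IPC namespace
  runAsUser:
    rule: MustRunAsNonRoot # pods may not run as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                 # restrict pods to a safe subset of volume types
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
```

A pod that requests privileged mode, host namespaces, or a root user is rejected at admission time when this is the only policy its service account is authorized (via RBAC) to use.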
Apply Network Policies
Running diverse applications on the same Kubernetes cluster creates the risk of one compromised application attacking a neighboring application. To ensure that pods can only communicate with the pods they are supposed to communicate with, network segmentation is essential.
In the example below, a network policy applies to all pods labeled area: backend and allows them to receive TCP traffic on port 80 from pods labeled area: frontend.
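A minimal sketch of that policy (the policy name is illustrative; the labels and port come from the description above):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend   # illustrative name
spec:
  podSelector:
    matchLabels:
      area: backend              # the policy applies to these pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          area: frontend         # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 80                   # and only on TCP port 80
```

Note that because a network policy selecting a pod makes that pod deny all other ingress by default, backend pods become unreachable from any pod that is not labeled area: frontend. Also keep in mind that enforcement requires a network plugin that supports NetworkPolicy.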
Master and Worker Access Control
The Kubernetes master is a collection of three processes (kube-apiserver, kube-controller-manager, and kube-scheduler) that run on a single node in the cluster, designated as the master node. The master is responsible for maintaining the cluster’s desired state.
A Kubernetes worker machine is a node. A node can be a VM or a physical machine, depending on the cluster. Each node contains the services necessary to run pods and is managed by the master components. The services on a node include the container runtime, the kubelet, and kube-proxy.
If you are using a PaaS solution like Google Kubernetes Engine, Amazon EKS, or Azure AKS, you don’t need to handle this level of security, since you generally do not have access to the master node. However, if you are deploying your own K8s cluster, you must take care of all the low-level security aspects of the cluster yourself.
Keep Your Kubernetes Infrastructure Up-To-Date
Kubernetes itself is a complicated piece of infrastructure that evolves at a fast pace. Updating the infrastructure to the latest version every few months is often impractical. However, some changes between Kubernetes versions add new security capabilities or fix bugs with major security implications.
One example of such a vulnerability fix was published in December 2018 with Kubernetes v1.13 (and was also backported to older versions). The vulnerability, identified as CVE-2018-1002105, is triggered when specially crafted requests allow users to establish a connection through the Kubernetes API server to a backend server. Attackers can then use this established channel to execute arbitrary requests on that backend. Any user, even an unauthenticated one, can exploit this vulnerability to circumvent Kubernetes role-based access control. Furthermore, exploitation of this vulnerability cannot be detected by standard Kubernetes audit, monitoring, and logging tools, since the unauthorized malicious requests are performed over a valid, trusted connection. Monitoring tools that detect anomalous unauthorized changes can help indicate compromise, but only after the exploit succeeds. This recent example shows that, as with any critical infrastructure, Kubernetes itself should be updated at least whenever major security fixes become available.
Securing a system is a challenge, and securing a cluster is even more difficult. A cluster comprises several machines running different applications and versions, not just a single system. For this situation, Kubernetes supplies various options for creating a secure deployment. No one solution works everywhere all the time, so a certain degree of familiarity with the available options is required, as is an understanding of how each one can enhance your application’s security.
Implementing the best practices highlighted in this article is critical to establishing security, as is using Kubernetes’ flexible configuration capabilities to incorporate security checks into the continuous integration pipeline. This, in turn, builds security into the entire automated process.