Enhancing Kubernetes Security Guardrails with Admission Controllers

Apr 5, 2020 8:52:28 AM / by Yaniv Peleg Tsabari

What Is an Admission Controller?

Kubernetes admission controllers are a powerful native feature that helps define and customize the API resource configurations that can be admitted to a cluster. Described simply, an admission controller is a piece of code that acts on requests made to the Kubernetes API server. Admission controllers are invoked prior to the persistence of the object(s) defined by API requests, but after the requests have been authenticated and authorized by the API server.

An admission controller enforces semantic validation of objects during their creation, update, and deletion.

Admission controllers can be used to “validate” and/or “mutate” requested configurations, and are classified accordingly. Mutating admission controllers are able to modify the objects they admit, whilst validating controllers are not.

Admission control occurs in two distinct phases:

  1. In the first phase, mutating admission controllers are run.
  2. In the second phase, validating admission controllers are run.

To reiterate, some admission controllers operate as both validating and mutating controllers. If any of the controllers in either phase rejects the request, the entire request is immediately rejected and an error is returned to the end-user.
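
To make the decision flow concrete, here is a minimal sketch of a validating webhook written in Python with Flask. It is not Alcide's implementation, and the /validate path, the namespace rule and the port are illustrative assumptions; in a real cluster the webhook would also need TLS and a ValidatingWebhookConfiguration registering it with the API server. The API server POSTs an AdmissionReview object to the webhook, and the "allowed" field of the response decides whether the object is persisted.

    # Minimal validating admission webhook sketch (illustrative policy).
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/validate", methods=["POST"])
    def validate():
        review = request.get_json()        # AdmissionReview sent by the API server
        req = review["request"]

        # Illustrative policy: reject objects created in the "default" namespace.
        allowed = req.get("namespace") != "default"

        response = {"uid": req["uid"], "allowed": allowed}   # uid must echo the request
        if not allowed:
            # This message is surfaced to the end-user (e.g. in kubectl output).
            response["status"] = {"message": "Workloads may not run in the default namespace."}

        return jsonify({
            "apiVersion": "admission.k8s.io/v1",
            "kind": "AdmissionReview",
            "response": response,
        })

    if __name__ == "__main__":
        app.run(port=8443)

A mutating webhook answers in the same shape, but may additionally return a base64-encoded JSONPatch in the response's "patch" field to modify the object before it is persisted.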

Admission Control Use Cases

The ability to define and customize what is allowed to run in a Kubernetes cluster makes admission controllers a perfect candidate for deploying guardrails that:

  • Constantly watch your Kubernetes deployments,
  • Find deviations from desired baselines, and
  • Can alert, deny or automatically remediate issues.

Think of them as preventive security controls for your Kubernetes cluster: controls that help avoid risky configurations, ensure conformance with external or internal compliance requirements, or even enforce operational best practices for DevOps teams.

As an example, you might set a policy that validates that no secrets, API keys or passwords have been misplaced in environment variables. Each API server call is validated against this policy. When an API call contains configuration that violates the policy (i.e., contains secrets) and could result in unwanted exposure, admission of the object is denied.
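
As a rough sketch of how such a check might be expressed inside a validating webhook, the function below scans a Pod spec for environment variables whose names suggest embedded credentials. The keyword list and the helper itself are illustrative assumptions, not Alcide's actual policy.

    # Illustrative policy check: flag containers that define environment variables
    # which look like hard-coded credentials. The keyword list is an assumption.
    SUSPICIOUS_KEYWORDS = ("SECRET", "PASSWORD", "API_KEY", "TOKEN")

    def find_suspicious_env_vars(pod_spec):
        """Return (container name, env var name) pairs that violate the policy."""
        violations = []
        for container in pod_spec.get("containers", []):
            for env in container.get("env", []):
                name = env.get("name", "").upper()
                # Only literal values are a concern; values sourced from Secret
                # objects via valueFrom/secretKeyRef are acceptable.
                if "value" in env and any(k in name for k in SUSPICIOUS_KEYWORDS):
                    violations.append((container["name"], env["name"]))
        return violations

Inside the webhook handler, a non-empty result would translate into an "allowed": false response, denying admission of the Pod.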

Yet even native guardrails can become problematic if they aren’t deployed and configured properly. How do you define your business problem and set the right guardrail scope? How do you maintain the guardrail over time, ensuring it spans the organization whilst maintaining business agility and relevancy? I’ll review tips and milestones for this later on.

 

Admission Controller “Blind Spots”

Whilst admission controllers validate and mutate admitted resources, they have no visibility whatsoever into cluster operational activities, such as:

  • Pod life cycle events,
  • Pod & Service access through the Kubernetes API server (with kubectl exec, kubectl proxy or port-forward), or
  • Extraction of container logs with kubectl logs.

Regulating access to these activities is covered by Kubernetes Role-Based Access Control (RBAC), and monitoring or alerting on the occurrence of such events requires monitoring of the Kubernetes audit log.
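
For completeness, the sketch below shows one way such events could be surfaced from the audit log. It assumes the API server has audit logging enabled and writes JSON-formatted audit.k8s.io/v1 events to a local file; the file path is an assumption.

    # Scan a Kubernetes audit log (JSON lines) for Pod subresource access that
    # admission controllers never see: exec, attach, port-forward, proxy, logs.
    import json

    WATCHED_SUBRESOURCES = {"exec", "attach", "portforward", "proxy", "log"}

    def watched_events(audit_log_path="/var/log/kubernetes/audit.log"):
        with open(audit_log_path) as f:
            for line in f:
                event = json.loads(line)
                ref = event.get("objectRef") or {}
                if ref.get("resource") == "pods" and ref.get("subresource") in WATCHED_SUBRESOURCES:
                    yield {
                        "user": event.get("user", {}).get("username"),
                        "verb": event.get("verb"),
                        "pod": f"{ref.get('namespace')}/{ref.get('name')}",
                        "action": ref.get("subresource"),
                    }

    for e in watched_events():
        print(e)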

 

On-Demand vs. Constant Assessment

Guardrails can be deployed in two primary modes. The first, on-demand mode, validates persisted configuration to make sure it is within the boundaries of an organization’s policies. This is a looser control that suits situations where assessment checks are introduced after Kubernetes deployments are already in production. It also suits development and staging environments, where more “freedom” is needed to run and test new configurations and services before control policies are updated. In many cases, such early tests will be the primary reason to trigger policy updates across the organization.

In addition, on-demand mode better suits validations that require a wide Kubernetes configuration context. For example, a validation that checks for the existence of a network policy on Pods requires more than the specific admission context. Such a validation needs to inspect all Kubernetes network policy objects (in all namespaces) to determine whether the Pod is referenced (by podSelector) in either an ingress or egress rule of a network policy. This kind of validation is more suited to on-demand mode in early testing and staging scenarios.
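
To illustrate why this needs cluster-wide context, here is a simplified sketch using the official kubernetes Python client. It only evaluates the top-level podSelector matchLabels of each policy (ignoring matchExpressions and the selectors inside individual ingress/egress rules), and the namespace and labels in the example call are made up.

    # On-demand check: is a given Pod covered by any NetworkPolicy?
    # Requires the `kubernetes` Python client and cluster credentials (kubeconfig).
    from kubernetes import client, config

    def pod_is_covered(pod_namespace, pod_labels):
        config.load_kube_config()                    # or load_incluster_config() in-cluster
        net = client.NetworkingV1Api()
        for policy in net.list_network_policy_for_all_namespaces().items:
            if policy.metadata.namespace != pod_namespace:
                continue                             # a policy only selects Pods in its own namespace
            selector = policy.spec.pod_selector.match_labels or {}
            # An empty selector selects every Pod in the namespace.
            if all(pod_labels.get(k) == v for k, v in selector.items()):
                return True
        return False

    print(pod_is_covered("payments", {"app": "checkout"}))   # hypothetical namespace and labels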

The second mode employs constant assessment, which provides tighter control. This mode is event-driven: the API server rejects resource requests that fail admission control before they are persisted as objects. Admission controllers are used in this way to prevent misconfiguration, or the implementation of risky configuration that violates organizational policy. Event-driven admission control is often used for production environments, where inadvertent error or risk needs to be minimized or removed completely.

 


 

How to Build Kubernetes Guardrails Using Admission Controllers

  1. Define the problem: define the required business outcome, and then move on to solving the technical problem to better pinpoint your guardrail. Are you looking to close a specific security gap? For example, is a guardrail required to prevent the exposure of cluster edge workloads to the open internet, whilst allowing access from your organization’s internal network instead?
    Are you looking for a compliance remedy? Perhaps your organization is required to adhere to PCI-DSS compliance, including for the workloads deployed to Kubernetes.
    Understanding problems and business outcomes helps us design effective guardrails, whilst assigning criticality and priority to specific checks.
  2. Set the scope: determine the span of your guardrails. Today, in many cases, organizations manage several Kubernetes clusters. More often than not, these span multiple cloud providers and on-premises assets. These distributed assets serve a variety of applications, with varying requirements when it comes to mandatory and recommended guardrails.
  3. Pick the deployment model: there are a lot of technical specifics concerning where and how to run your guardrails. You can distribute responsibility among your application owners, with each one applying guardrails to the application clusters or resources he or she owns. Or, for a larger deployment, you might need to consider a more centralized approach to the management of guardrails. You can also choose a hybrid approach, running a centralized operation for your production environment and a distributed operation for your development and staging environments.
    Next, you need to choose between detection and prevention. In order not to overcomplicate matters, and to avoid unnecessary roadblocks, you’ll typically want on-demand assessment (detection) for development, testing and staging environments, and real-time validation and prevention for production environments. In this way, you maintain both security and agility.
  4. Define exceptions: it’s great that you scan, monitor and prevent exposure of cluster edge workloads, or prevent deployments from using unauthorized image registries. That is, until you break a production application. Exceptions allow you to tune your guardrails for a particular environment or project’s needs. Basic exceptions might match on a simple resource name, while more complex combinations may mix include and exclude rules for greater flexibility. For example, you might need rules to treat development and production environments differently. In a development environment, you may have a more open validation for image registries, allowing and alerting on the addition of new image registries without breaking development processes. In the staging and production environments, however, the same control might be stricter, failing deployments that specify unauthorized image registries (see the sketch after this list).
  5. Findings analysis: collect data associated with the use of guardrails, and analyze it in the context of the problem and scope, in order to fine-tune their implementation and application. For example, if we find a new image registry referenced while running an assessment on a staging environment, we might decide to update the image registry whitelist and allow its use. But if the same image registry reference is detected in a production environment, its use will be denied, and further investigation will be required to find out who defined it.
  6. Findings notifications: notifications for new insights or degraded behavior can be directed to email, Slack, a ticketing system, or some other channel. This allows for better collaboration, prompt action and better management of the lifecycle of guardrail issues. The ability to zoom in and take quick action separates the insights garnered from guardrails from other, noisier alerts. Based on the analysis and recommendations, it is relatively easy to determine the next steps and to perform any remedial actions.
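
The sketch below ties steps 4-6 together for the image registry example: an environment-aware exception that alerts in development but denies in staging and production. The registry whitelist, environment names and behaviour are illustrative assumptions, not a specific Alcide policy.

    # Environment-aware guardrail exception for container image registries.
    APPROVED_REGISTRIES = {"registry.example.com", "gcr.io"}    # hypothetical whitelist

    def evaluate_image(image, environment):
        """Return (allowed, finding) for a container image in a given environment."""
        registry = image.split("/")[0]              # simplified registry extraction
        if registry in APPROVED_REGISTRIES:
            return True, "approved registry"
        if environment == "development":
            # Looser control: admit the workload, but raise a finding that can be
            # routed to email, Slack or a ticketing system for review.
            return True, f"ALERT: new registry '{registry}' used in development"
        # Staging and production: fail deployments that use unapproved registries.
        return False, f"DENY: registry '{registry}' is not on the approved list"

    print(evaluate_image("docker.io/library/nginx:latest", "development"))
    print(evaluate_image("docker.io/library/nginx:latest", "production"))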

Conclusion

Kubernetes admission controllers offer a simple and secure mechanism to integrate and enforce guardrails in your Kubernetes cluster. Security and DevOps teams can now easily and automatically embed security controls to protect against risky misconfiguration, to gain visibility, and to be made aware of suggested remedial actions.

Coupled with Alcide Kubernetes Advisor, an on-demand Kubernetes assessment tool, admission control offers a complete security solution from development to production. Together, they provide automated, centralized protection and remediation, backed by supreme visibility spanning multi-cluster and multi-account environments.

Admission controller security is available as part of the Alcide ART (Alcide Runtime) module. Request your demo to see it in action.

Topics: Admission Controller
