
Secret-Hunting in Kubernetes

Aug 6, 2019 10:19:28 AM / by Guest Writer: Theo Despoudis


Applications and workloads running in a Kubernetes environment, just like any application, require secrets to gain access to data stored in databases, first- or third-party services, or APIs.
Secrets, however, are only effective if they actually remain secret. When secrets leak, attackers can gain access to sensitive data, services or APIs, potentially putting your entire environment and business at risk.

With that challenge in mind, let’s take a look at how to manage secrets effectively in Kubernetes. I’ll explain how secrets work, what they have to do with ConfigMaps and how admins can defend against the risk of secret hunting by attackers.

 

Kubernetes and Secrets

A Secret in Kubernetes is an object that contains a small amount of sensitive data such as a password, a token, or a key. The Secret object allows for more control over how sensitive data is used, and reduces the risk of accidental exposure.

To create secrets in Kubernetes, we describe them in a YAML file. For example, let's say we have a username and password pair for an admin user. First, we need to encode them as Base64 strings:

 

$> username=$(echo -n "admin" | base64)

$> password=$(echo -n "a62fjbd37942dcs" | base64)
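
Keep in mind that Base64 is an encoding, not encryption: anyone who can read these values can decode them instantly (base64 -d with GNU coreutils; some BSD/macOS versions use -D):

$> echo -n "$username" | base64 -d
admin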

 

Then, we can create a secret.yaml configuration file:

 

$> echo "apiVersion: v1

kind: Secret

metadata:

  name: test-secret

type: Opaque

data:

  username: $username

  password: $password" >> secret.yaml
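
As a shortcut, kubectl can also create this Secret directly and handle the Base64 encoding for us, which avoids keeping the values in an intermediate file:

$> kubectl create secret generic test-secret \
     --from-literal=username=admin \
     --from-literal=password=a62fjbd37942dcs

Both approaches produce the same Secret object.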

 

Using Secrets

Secrets can be mounted as data volumes or be exposed as environment variables to be used by a container in a pod. They can also be used by other parts of the system, without being directly exposed to the pod. For example, they can hold credentials that other parts of the system should use to interact with external systems on your behalf.

This YAML file can be used with kubectl to create our Secret. When launching Pods that require access to the secret, we'll refer to it by its name:

     

$> kubectl apply -f secret.yaml
$> kubectl get secrets
NAME                   TYPE                                  DATA    AGE
default-token-28tfh    kubernetes.io/service-account-token   3       4d18h
test-secret            Opaque                                2       5s
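
Note that anyone who is allowed to read the Secret can recover the original values just as easily; the Base64 encoding by itself offers no protection:

$> kubectl get secret test-secret -o jsonpath='{.data.password}' | base64 -d
a62fjbd37942dcs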

 

As of Kubernetes v1.13 we can enable encryption at rest for Secrets. ConfigMaps can be created in a similar way, but since they may contain a mixture of sensitive and non-sensitive information, it is best to avoid storing secrets in them as plain text.
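
As a rough sketch, encryption at rest is enabled by pointing the API server's --encryption-provider-config flag at an EncryptionConfiguration file along these lines (the key below is a placeholder you would generate yourself, for example with head -c 32 /dev/urandom | base64):

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      - identity: {}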

 

Hunting for Secrets

An attacker will try to uncover secrets from disk, where configuration or log files may disclose them, and from runtime components such as environment variables, process memory, or a container runtime that can introspect a running container.

  • Container Images:  Secrets may be “baked” into container images. If they are placed into the image unencrypted, the door is open for anyone who can pull the container image to simply decompress it, scan it and extract your secrets.
  • ConfigMaps:  Problems arise if we create a ConfigMap that includes secrets, or information about where the secrets are located. For example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: secret-configmap-pod
data:
  paths: |
    certificates: ~/.certs/

 

Clearly, this gives away information about specific certificates used by the application, which can be used to conveniently track down their contents. It's best to avoid giving clues in ConfigMaps about how to obtain secrets.
Also note that ConfigMaps, unlike Secret resources, may be persisted to disk, captured as part of an audit action, or even indexed by external systems, assuming those systems have the appropriate RBAC permissions.
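
One RBAC-side mitigation is to make sure that roles which legitimately need to read ConfigMaps are not also granted read access to Secrets. A minimal sketch of such a namespaced Role (the names here are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["configmaps"]   # deliberately not "secrets"
    verbs: ["get", "list", "watch"]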

 

  • Secrets from the environment: Many developers are keen to pass secrets into Pods via environment variables at runtime. However, that has a significant impact on security. Placing secret values as-is inside a Deployment resource or Helm chart exposes this sensitive information to anyone who can get, list, or describe those resources, let alone external systems that are permitted to track such resources.
    A better practice is to reference Secret objects at Pod runtime, either as mounted files or as environment variables.

For example, we can inject the previously created Secret into a container as environment variables using the following secrets-pod.yml:

apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: example
      image: alpine:latest
      command: ["sleep", "9999"]
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: test-secret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: test-secret
              key: password
  restartPolicy: Never

  

$> kubectl create -f secrets-pod.yml

 


 

With that being said, anyone who can inspect that container with docker inspect can see what was passed in, along with the run command. kubectl describe pod, on the other hand, only reveals that the Pod references specific Secret keys for its sensitive environment variables. And the attacker does not have to be local on the same machine. In addition, environment variables often get logged in crash dumps or log aggregators, where they can be read in plain sight.
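
For instance, anyone with access to the Docker daemon on the node can dump the container's environment along these lines (the container ID is illustrative, output abbreviated):

$> docker inspect --format '{{.Config.Env}}' <container-id>
[SECRET_USERNAME=admin SECRET_PASSWORD=a62fjbd37942dcs PATH=/usr/local/sbin:... ]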

A safer way is to load secrets through mounted volumes. For example, secrets-pod.yml:

apiVersion: v1
kind: Pod
metadata:
  name: secret-vol-pod
spec:
  volumes:
    - name: secret-volume
      secret:
        secretName: test-secret
  containers:
    - name: example
      image: alpine:latest
      command: ["sleep", "9999"]
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume

 

That way, inspecting the container or its logs will not reveal any secrets unless the application itself logs the file contents (which is rare in practice). Secret volumes are mounted as temporary filesystems (tmpfs) and are only held in memory, never written to the node's disk.
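
We can verify this from inside the container: the mount point shows up as tmpfs (the exact mount options may vary):

/ # mount | grep secret-volume
tmpfs on /etc/secret-volume type tmpfs (ro,relatime)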

Now bear in mind that, either way, anyone who can exec into that container can read those secrets. If we exec into the secret-env-pod container and inspect the /proc/1/environ file as root, we can see the secrets in plain sight:

/ # ps
PID   USER   TIME COMMAND
    1 root      0:00 sleep 9999
   11 root      0:00 sh
   27 root      0:00 ps

 

/ # cat /proc/1/environ

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=secret-env-podSECRET_USERNAME=adminSECRET_PASSWORD=a62fjbd37942dcsKUBERNETES_PORT=tcp://10.96.0.1:443KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443KUBERNETES_PORT_443_TCP_PROTO=tcpKUBERNETES_PORT_443_TCP_PORT=443KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1KUBERNETES_SERVICE_HOST=10.96.0.1KUBERNETES_SERVICE_PORT=443KUBERNETES_SERVICE_PORT_HTTPS=443HOME=/root/ #
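
The same goes for the volume-based Pod: the decoded files are readable by anyone who can exec into it:

$> kubectl exec secret-vol-pod -- ls /etc/secret-volume
password
username
$> kubectl exec secret-vol-pod -- cat /etc/secret-volume/password
a62fjbd37942dcs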

 

  • Timing Attacks: We are vulnerable to another common attack when we compare a value that comes from an external source, such as a REST client, against a secret value using plain string comparison. A naive way to do that looks like this:

     

    if (clientHash === secretHash) {
      ...
    }

The problem here is that this comparison does not evaluate in constant time. A cunning attacker can measure and analyze the timing differences between responses. Given enough attempts, they can work out how clientHash differs from secretHash, and thus gradually predict the latter. The recommended way to prevent this is to use cryptographic libraries that perform such comparisons in constant time.


Here is an example in Node.js using tsscmp:

var timingSafeCompare = require('tsscmp');

if (timingSafeCompare(clientHash, secretHash)) {
  console.log('good token');
} else {
  console.log('bad token');
}

 

Alcide Advisor to the Rescue

As we saw in the examples above, handling secrets and sensitive configuration values inside a Kubernetes cluster requires dedicated time and expertise to do correctly. If you have an ever-growing list of business requirements and audit risks coming your way, it's best to get help handling that risk. Happily, we don't have to look far.

Alcide Advisor is a complete security tool that helps us manage and configure rules and checks that prevent accidental exposure of secrets.

 

The primary benefits of using Alcide Advisor are the following:

 

  • Tailored for Kubernetes: Advisor is natively integrated with Kubernetes and offers top-of-the-range workload monitoring and hardening against common risks. In addition, with RBAC (Role Based Access Control) and network segregation controls, we have a secure and reliable way to limit access to information to only legitimate users or roles. By eliminating the risks of misconfigured access that enable exfiltration of secrets and other sensitive information, we can sleep well at night.

 

  • Goes the extra mile in terms of security: Alcide Advisor handles security as a whole. By integrating it into all phases of the SDLC (software development life cycle), for example in CI/CD pipelines, across all environments (development or production) and all major cloud providers (Azure, AWS, GCP), we can keep that level of protection at the highest standard. The Advisor is constantly updated to notify admins and authorised users about the latest threats and events, helping them stay ahead of the risks.

 

  • First class support for all kinds of users: Alcide is a dedicated security provider for cloud environments. They know their domain of expertise and have tailored their tools to handle all kinds of scenarios. Whether you are a developer trying to secure a deployment, a DevOps engineer trying to monitor your infrastructure, or an architect designing multi-cloud solutions with a strategic vision, Alcide has you covered. In addition, it comes with a plethora of resources and documentation to support you along the way.

 

Conclusion

Handling secrets in a secure and observable way, especially within a container orchestration environment, involves real risk. Organizations must handle that risk by introducing security controls that offer the best mitigation techniques against threats attempting to uncover those secrets. By integrating Alcide Advisor into your Kubernetes cluster you can address all of these concerns at once and focus on delivering real value to your customers. If you want to learn more about Alcide and its flagship product you can book a demonstration here.

 


 

______________

About Theo Despoudis
Theo Despoudis is a Senior Software Engineer and an experienced mentor. He has a keen interest in Open Source Architectures, Cloud Computing, best practices and functional programming. He occasionally blogs on several publishing platforms and enjoys creating projects from inspiration.

Topics: cloud security, kubernetes, microservices, devops, alcide advisor