Mount Olympus, home of the gods, is under attack. It may not yet have fallen, but without protective action, its safety is far from guaranteed. I'll be plain so you don't need to wonder what I'm going on about: the modern data center, tucked away amongst the clouds and holding the crown jewels of your digital world, is not as secure as you might think.
The old way of doing things made for a somewhat slow and restricted data center, but it had order. Order is good for security. The new way of doing things is agile, dynamic, decentralized, interoperable, and unfathomably complex. This opens doors, but unfortunately, with each new opening come new data center vulnerabilities.
The Kubernetes Effect
The comparison between the mythic Mount Olympus and today's data center is strained by the advent of Kubernetes. After all, mountains are supposed to be immovable and unchanging, but if our Olympus is the data center, we can hardly deny the change brought on by a wildly popular open-source industry standard for containerized application deployment, automation, and management.
The way I see it, Kubernetes gives a new face to Mount Olympus. You evolve your existing public cloud workloads, which are virtual-machine-based, into Kubernetes deployments, but that doesn't fundamentally change your needs. You may end up running a single interconnected Kubernetes cluster or several, but you still need to be able to see and understand what you have inside your virtual machines in order to segment traffic using security groups or some other mechanism.
You can and will elastically scale the different microservices that are running inside to address demand, but then there's the challenge of multiple moving parts that are interconnected — your microservices — each of which scale in a different way. How do you map and manage the interactions between these services and systematically predict, prevent, detect, and destroy security threats?
Answering that question is no simple feat. From a pure infrastructure security point of view, Kubernetes has not so much fundamentally changed the game as it has made it a lot more complicated to play safely.
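Those service-to-service interactions can at least be made visible. As a minimal sketch (the service names and the allowlist are hypothetical, not taken from any particular tool), you can build an interaction map from observed connections and diff it against what you've declared as allowed:

```python
from collections import defaultdict

def build_service_graph(flows):
    """Build a map of which services talk to which, from observed
    (source, destination) connection records."""
    graph = defaultdict(set)
    for src, dst in flows:
        graph[src].add(dst)
    return graph

def unexpected_edges(graph, allowed):
    """Flag service-to-service calls that were never declared as allowed."""
    return {(src, dst)
            for src, dsts in graph.items()
            for dst in dsts
            if dst not in allowed.get(src, set())}

# Hypothetical example: 'checkout' calling 'payments' is expected;
# 'checkout' calling 'admin' is not.
flows = [("checkout", "payments"), ("checkout", "admin")]
allowed = {"checkout": {"payments"}}
graph = build_service_graph(flows)
print(unexpected_edges(graph, allowed))  # {('checkout', 'admin')}
```

In practice you'd feed this from flow logs or service mesh telemetry, but the principle stands: you can't secure interactions you haven't mapped.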
The bottom line is this: Kubernetes is extremely powerful, and its use has a huge impact on the entire software ecosystem and lifecycle. The magnitude and impact radius of Kubernetes adoption raises to new prominence the question of how organizations value and protect their critical digital assets.
The data center, empowered by Kubernetes, is among the most critical (and vulnerable) of your digital assets and must be valued and protected with no less fervor than Hercules summoned in his mission to protect Olympus.
Meeting the Challenges Head On
In a recent report, Gartner proclaims that the data center as we know it is dead. Instead, analyst David J. Cappuccio argues that we should be concerning ourselves with the digital infrastructure surrounding the application and workload levels.
What does that infrastructure look like? Well, it looks a lot like clouds: vast shifting bodies that interact, diverge, and combine with one another without readily identifiable boundaries. Real, but largely intangible. Always moving. Always changing. Ethereal. It's not for nothing that we refer to our digital infrastructure networks as "clouds".
The fact is that emerging cloud-native technologies, in the sense to which Cappuccio referred, are quite complex and existing network security does not meet the new demands that come with a more cloud-forward, more container-centric approach. It may provide a top-view policy, but there needs to be more than that. What’s needed is “policy fusion” that allows for multiple policies to be unified in a cohesive manner so security can run at scale — enabling organizations to truly seize on the Kubernetes advantage.
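To make the "policy fusion" idea a bit more concrete, here's a minimal sketch in Python. The most-restrictive-wins semantics and the flow representation are my assumptions for illustration, not a defined standard:

```python
def fuse_policies(policies):
    """Fuse several per-team policy layers into one effective policy.
    A flow is allowed only if every layer that mentions it allows it
    (most-restrictive-wins), one plausible reading of 'policy fusion'."""
    effective = {}
    for layer in policies:
        for flow, allowed in layer.items():
            effective[flow] = effective.get(flow, True) and allowed
    return effective

# Hypothetical layers: dev wants broad access, security narrows it.
dev_layer = {("web", "db"): True, ("web", "slack.com"): True}
sec_layer = {("web", "slack.com"): False}
print(fuse_policies([dev_layer, sec_layer]))
# ('web', 'db') stays allowed; ('web', 'slack.com') is denied
```

The point of unifying layers this way is that each team keeps authoring its own policy, while the cluster enforces a single coherent result at scale.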
Security cannot be retrofitted to your deployment as an afterthought or add-on. To work comprehensively and consistently, it must be native to your deployment and built in by design. This point is demonstrated beautifully when it comes to containers. Say you have multiple containers sharing the same compute nodes. How do you make sure that those containers don't have any crossover effect (in the sense that one tries to manipulate the network layer to divert traffic from a neighboring container)?
Because this threat path circumvents normal network policy controls, it's a question that gives fits to the standard crop of network security solutions. Even iptables and kernel security subsystems fall short on this front, since they're simply not designed for this type of job.
You can take the question a step further and ask — as the gatekeeper of your network and cloud ops — what you're prepared to do to prevent malware from connecting to the outside world through the existing DNS service discovery infrastructure built into Kubernetes.
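As a crude illustration of the kind of tripwire you'd want for that, here's a hedged sketch of a DNS-lookup check. The allowlist entries and domain names are hypothetical; a real deployment would enforce this at the cluster's DNS resolver or via a CNI plugin, not in application code:

```python
# Hypothetical allowlist of approved external destinations.
ALLOWED_EXTERNAL = {"api.slack.com", "registry.example.com"}

def is_suspicious_lookup(name, cluster_domain="cluster.local"):
    """Flag DNS lookups that are neither in-cluster service discovery
    nor on the approved external allowlist -- a crude tripwire for
    malware using DNS to reach the outside world."""
    hostname = name.rstrip(".")
    if hostname.endswith(cluster_domain):
        return False  # normal Kubernetes service discovery
    return hostname not in ALLOWED_EXTERNAL

print(is_suspicious_lookup("payments.default.svc.cluster.local"))  # False
print(is_suspicious_lookup("c2.attacker.example"))                 # True
```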
When approaching these and similar questions, you need to think about Kubernetes security just like you think about your corporate network security — but in a way that complements the dynamic nature and scale of your clusters. The fact is that this is not something that traditional security systems are well suited for.
Instead, you'll need a solution that is cloud native and that takes the threats that exist at the Kubernetes compute node and workload levels and, as a matter of operational design, detects and quarantines them, creating a hermetic defense up through the layer-7 interconnect (think Istio and the service mesh).
Data Center Vulnerabilities: Complicated by Microservices
Whether you're referring to the data center per se or simply to digital infrastructure, the point is the same: protecting a multi-service workload running containerized applications is not simple. You're likely to use more than one cluster, so any solution you come up with for one cluster will be complicated by the need to secure interactions between clusters. Then there's the issue of containers speaking to virtual-machine-based infrastructure, third-party service providers, and serverless functions.
It's important to stay calm and take a step back to gain some perspective on the whole of your digital ecosystem. To come out consistently on top, you need to put plans, policies, procedures, and protective controls in place across all levels of your data environment. When it comes to microservices, this starts with rigorous anomaly detection.
When you're running multiple services inside your Kubernetes cluster, you need to don the Sherlock Holmes hat of anomaly detective, lean on the fact that the traffic is machine-to-machine to establish a baseline of healthy behavior patterns, and then dive deep into the deviations.
Once you've established network and runtime behavior baselines, detecting anomalies is fairly straightforward. Using this approach, you can detect a microservice breach based on a high-resolution analysis of even a single instance. Of course, you should not rely on the eagle-eyed attention to detail of network administrators alone. You should either adopt or create a tool to help automate and enforce the process, bringing it to your attention whenever metrics exceed their baseline deviations.
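A minimal sketch of that baseline-and-deviation approach, using only the Python standard library. The metric values and the three-sigma threshold are illustrative assumptions; real systems baseline many signals per service, not one:

```python
import statistics

def build_baseline(samples):
    """Compute the mean and standard deviation of a healthy metric,
    e.g. requests per second between two microservices."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations
    from the baseline mean."""
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical healthy request rates observed for one service pair.
healthy = [98, 102, 101, 99, 100, 103, 97]
mean, stdev = build_baseline(healthy)
print(is_anomalous(100, mean, stdev))   # False: within baseline
print(is_anomalous(5000, mean, stdev))  # True: worth a deep dive
```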
Sadly, this type of tool doesn't come out of the box with Kubernetes, so since you're already going to need to do some looking, you should probably also think about deeper security mechanisms that allow you to control and secure the entire cluster.
Simple Solutions for Complex Conditions
When you think about how developers, DevOps teams, and security teams come together, their normal rendezvous point is the compute node or the workload, so you have to have a mechanism for each and every one of them to contribute their share to the whole security posture.
With a multi-cluster, multi-level containerized architecture, it's a bit of a wild-west environment, and it can create a lot of chaos and the potential for a lot of conflict — both digital and human.
So, for example, if you have a microservice in your cluster that needs to connect to Slack in order to send messages (for whatever reason), then whoever is writing these components obviously needs the application know-how (i.e., dev) to provide outside access to your cluster. At the same time, you'll have your operations and security teams having conniptions over the thought that something could leave the cluster without being appropriately logged and reviewed (do we know which pod is involved? which compute node? etc.).
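One way to square that circle is to make egress both allowed and attributable. A hedged sketch (the pod name, node name, and destination are hypothetical, and the event record stands in for whatever your network layer actually emits):

```python
def audit_egress(event, egress_allowlist):
    """Decide whether an outbound connection from the cluster is allowed,
    and produce the attribution record (pod, node, destination) that
    ops and security teams want logged either way."""
    return {
        "pod": event["pod"],
        "node": event["node"],
        "dest": event["dest"],
        "allowed": event["dest"] in egress_allowlist.get(event["pod"], set()),
    }

# Hypothetical: only the notifier pod may reach Slack's webhook endpoint.
allowlist = {"notifier-7d9f": {"hooks.slack.com"}}
event = {"pod": "notifier-7d9f", "node": "node-3", "dest": "hooks.slack.com"}
print(audit_egress(event, allowlist))
# {'pod': 'notifier-7d9f', 'node': 'node-3', 'dest': 'hooks.slack.com', 'allowed': True}
```

Dev declares what the component needs; ops and security get a per-pod, per-node audit trail instead of a blanket hole in the cluster boundary.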
At the most basic level, companies need to be smarter. They need to zoom out and take stock of the situation, with all its complexities, that exists and that they need to navigate on a daily basis. In light of the exploding popularity of Kubernetes in particular and microservices in general, your data center today is far more complex, interconnected, and interdependent than ever before. Without a systematic approach and the right tooling, it would take a Herculean effort to understand, let alone secure, this complex web of ever-changing interactions.
Without that type of hardened end-to-end security framework, you're left exposed to an onslaught of data center vulnerabilities and sooner or later, Olympus will fall.