Deploying workloads in cloud environments comes with many benefits for businesses in terms of time to market, scalability, cost reduction and ease of use.
Cloud environments introduce new security challenges that differ significantly from those of on-premises datacenter deployments. One way to overcome these challenges is policy-based security. Specifically, we will focus here on embedded security policies, which enable DevOps engineers and developers to deploy security controls from day one.
Cloud Security Challenges
In my previous blog post we discussed cloud security essentials at length. In this post I would like to dive into security considerations for cloud workloads and how embedded policies can provide better control over cloud deployments.
Let’s start with some of the key security challenges that we see today in cloud-native environments:
- Elastic attack surface: Cloud-native apps are complex and elastic: the number of entities (VMs, containers, functions, etc.) changes constantly and may span multiple cloud provider accounts at any given moment, and applications can automatically scale from a handful to thousands of workloads in seconds. The result is an elastic attack surface that is hard to secure.
- Traditional perimeter dissolved: The continuous delivery model of cloud applications, combined with their elasticity and scale, makes it impossible to employ traditional firewalls effectively to build a perimeter at the microservice level.
- New DevSecOps culture: In cloud deployments, where pipelines and release cycles are measured in hours or minutes, manual provisioning and management of security policies is no longer feasible, and securing applications cannot and should not be the sole responsibility of the security team. To achieve speed and agility, developers, DevOps engineers and security teams must collaborate to plan and execute better security safeguards and avoid security issues.
- From incident to response: The elastic nature of cloud applications, combined with a growing number of moving parts, makes it extremely challenging to trace the origin of a security anomaly or incident and respond quickly.
To address these security challenges, organizations tend to apply several approaches at once: a multi-layered tools strategy (a combination of vendor, third-party and home-grown tools), automation, embracing the DevSecOps model, and so on. Shifting left is another methodology organizations should embrace to secure their workloads.
Shifting Left Security
The notion of shifting left in software development is to move activities that typically happen in later stages earlier in the development lifecycle, addressing them at their point of origin. This approach yields a positive ROI: issues caught early cost far less in time and resources than the same issues caught later in the game.
Due to the increasing complexity of securing cloud workloads, security configuration and testing must shift left, that is, move into earlier steps of the development pipeline. This means that developers are now also responsible for delivering secure code, among other things.
One way of ensuring that the shift-left methodology is deployed is to use Alcide's embedded policies. Embedded policies have the following clear benefits:
- They allow developers to stay in their comfort zone and focus on what they know (and hopefully love :-) ) best: coding.
- Security teams are more confident in deployed code, because the developers' application know-how is embedded into the workloads as a whitelist.
- They're automated: once configured, they run smoothly on any container, VM or function using the same ops mechanisms already in place.
Deploy with Confidence Using Embedded Policies
Our embedded policy is a developer/DevOps-driven policy embedded into deployed workloads. It is sourced from tags, labels, metadata or local files that capture the application/microservice know-how as a whitelist of allowed network services. Because the whitelist is application-aware, the policy understands the application's context and provides fine-grained control over what to drop or allow within the environment.
By utilizing the embedded policies capabilities, developers can bake security firewalls into their microservices at design time and enforce them automatically at runtime.
Alcide’s embedded policies are expressed as a collection of URL-like rules. The policy rules support the following targets:
- Specific protocol and IP
- DNS name
- Kubernetes service / AWS entity name
IP or DNS name:
A rule consists of an L3/L4 protocol name followed by an IP or DNS name and a specific port (for non-default ports). The word “any” allows access on any IP and/or port for the specified protocol. For example:
- For outbound traffic: allows access from this workload to the “slack.com” DNS name over HTTP on port 80 (the default port)
- For inbound traffic: allows access from IP 220.127.116.11 over TCP on port 50
- For outbound traffic: allows access to IP 18.104.22.168 over TCP on port 50
- For inbound traffic: allows access from IP 22.214.171.124 over FTP on the default port
- For outbound traffic: allows access to IP 126.96.36.199 over FTP on the default port
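Following the `<protocol>://<IP-or-DNS>[:port]` pattern described above, the rules in these examples might be written roughly as follows (an illustrative sketch; the exact Alcide rule syntax is an assumption, not taken from its documentation):

```yaml
# Hypothetical rule strings matching the examples above.
outbound:
  - "http://slack.com"          # outbound HTTP to slack.com, default port 80
  - "tcp://18.104.22.168:50"    # outbound TCP to a specific IP and port
  - "ftp://126.96.36.199"       # outbound FTP to a specific IP, default port
inbound:
  - "tcp://220.127.116.11:50"   # inbound TCP from a specific IP on port 50
  - "ftp://22.214.171.124"      # inbound FTP from a specific IP, default port
```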
Kubernetes service name:
Syntax: service://<[specific cluster.][specific namespace.]service|any>
The rule specifies a service, defined in the system, that this pod can access. One can also limit the service resolution to a specific cluster and/or namespace.
- service://kafka - allows access to all services named “kafka” that are defined in the system
- service://prod1.kafka - allows access to services named “kafka” in the namespace “prod1”
- service://us-east1.prod2.kafka - allows access to the service named “kafka” in the namespace “prod2” on the “us-east1” cluster
Let’s see this in action:
Let’s create a simple application that reads data from Twitter and stores it in S3 for further analysis.
Here is the Kubernetes deployment YAML that creates the app:
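The original post’s YAML is not reproduced here; a minimal deployment along these lines might look like the following (the name and image are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: twitter-to-s3          # hypothetical name for the demo app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: twitter-to-s3
  template:
    metadata:
      labels:
        app: twitter-to-s3
    spec:
      containers:
      - name: app
        image: example/twitter-to-s3:latest   # placeholder image
        ports:
        - containerPort: 80
```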
And this is how it looks on Alcide’s platform after applying the deployment YAML:
Image 1: An app that reads data from Twitter and stores it in S3, as seen on Alcide’s platform
Now let’s assume that our application contains a coding error: in addition to Twitter, it is also accessing Facebook, as seen in the screenshot below. Note that this behaviour could also result from a security breach, in which case Facebook could easily be replaced by crypto-mining sites or other malicious domains.
Image 2: The app contains a coding error and is trying to access Facebook as well, as seen on Alcide’s platform
Now we will apply application-aware policies and declare that our nginx deployment may only access Twitter and S3. Here is an example of the new deployment YAML:
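The original YAML is not reproduced here; conceptually, the policy is expressed as URL-like rules in the pod template’s annotations, along these lines (the annotation key, rule syntax, and names are illustrative assumptions, not Alcide’s documented format):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: twitter-to-s3          # hypothetical name for the demo app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: twitter-to-s3
  template:
    metadata:
      labels:
        app: twitter-to-s3
      annotations:
        # Hypothetical annotation key; Alcide's actual key may differ.
        # Only Twitter and S3 are whitelisted for outbound traffic; any other
        # destination (e.g. facebook.com) falls outside the whitelist.
        alcide.io/policy: "https://api.twitter.com, https://s3.amazonaws.com"
    spec:
      containers:
      - name: app
        image: example/twitter-to-s3:latest   # placeholder image
```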
Note the URL-like rules we’ve added in the YAML annotations metadata section, which describe the allowed network configuration for this specific workload.
The result, as we can see below: our microservice now accesses only the “allowed” URLs, while an alert is issued on the Facebook access:
Image 3: The microservice accesses only the “allowed” URLs while issuing an alert on the Facebook access, as seen on Alcide’s platform
Network security policies are paramount for securing cloud applications effectively. By leveraging embedded policies, you can shift your security policy left from day one of your microservice’s life cycle and gain better control and visibility early on.