Alcide Blog

Cloud-native Security Provider

The Evolution of Serverless, Part 2: From Microservices to Containers to Functions

Dec 5, 2018 10:32:47 AM / by Anatoly Aliev posted in cloud security, serverless, workload protection, microservices



Following part 1 of our blog series, here's part 2.  

Containers Enable Scale 

Right around the time that applications started being broken into microservices, containers became popular. Though the two are distinct and, in principle, unrelated technologies, the rise of containers and the rise of microservices were intertwined.

To understand containers, another history lesson is required. If we go back to the ’90s, the era in which monolithic and SOA-type applications ruled, it is important to understand that each server typically only ran one application.

Every now and again, a monolithic application would grow so large that it had to be broken up because no single server could reasonably run the application. When this occurred, the individual application components would themselves become applications, with each typically operating on its own machine.


Despite all the talk about the importance of breaking up large, monolithic applications, these bare metal application servers were massively underutilized. They were also fragile; if something happened to the server on which an application was executing, it could take several hours to restore the application to service on another server. This step often involved restoring data from backups, which could mean that a day's data might be lost during the outage.

 Virtualization and centralized storage arrays solved these problems. Together, they allowed multiple applications to be run on a single server, driving up server utilization. They also allowed applications to be run in high availability, meaning that you could recover from hardware failures in seconds.

The virtual machine model was wasteful. Each virtual machine requires its own complete operating system because every virtual machine is an emulation of an entire computer. In other words, the way that developers convinced multiple applications to securely operate on a single server was by lying to the applications and pretending that each of them had an entire server to itself.


This worked fine when the application components being hosted in individual virtual machines were largish SOA-style components. As soon as developers started breaking applications into microservices, however, putting each microservice in its own VM started to seem like overkill. The underlying operating system would often consume several times the resources of the microservice itself.

Containers are an attempt to solve the high resource consumption of virtual machines. Containers use a number of technologies to carve up a single operating system such that multiple applications can be run on that operating system but still remain completely isolated from one another. This solution provides application security similar to virtual machines while allowing even greater application density.


The downside to containers is that when an update to the host operating system requires a reboot, all containers hosted by that operating system are restarted as well. Containers can't be live migrated (vMotioned) between physical hosts the way virtual machines can. This means that every time the container host is restarted, there must be an outage of that host's containers.


Composability and Immutability

For modern applications, which are typically designed around microservices and cloud computing, these limitations of containers aren't necessarily a big deal. Modern cloud-native applications are not designed with the same expectations as traditional monolithic or SOA applications were.


Traditional applications were generally designed with several assumptions in mind. Among them was the idea that all of the components of the application would be available all of the time. These applications also generally assumed that their storage would always be available and that each application component had exclusive access to that storage.

Applications based on microservices are generally designed under different assumptions. They are designed with the idea of composability. Modern IT infrastructures can be controlled using Application Programming Interfaces (APIs). This means that the underlying infrastructure upon which an application operates, from the creation of virtual machines and containers to networking and storage resources, can be manipulated programmatically.

Applications can communicate directly with the automation and orchestration software that manages the infrastructure upon which the application operates.


This means that applications of the microservices era can do things that their predecessors could not; specifically, they can grow and shrink the number of instances of an application component to meet demand. If the web server hosting the user interface portion of an application is nearing maximum capacity, the application can cause another web server instance to be created and registered with the load balancer. If the database is a little slow, another instance of it can be spun up as well.
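The scale-out behavior described above can be sketched in a few lines. The `InfrastructureAPI` class below is purely illustrative, standing in for a real cloud or orchestrator API (the names and thresholds are assumptions, not from the original post):

```python
# Hypothetical sketch of API-driven scaling: the application (or an
# autoscaler acting on its behalf) grows and shrinks instance counts
# programmatically instead of waiting for a human operator.
class InfrastructureAPI:
    """Illustrative stand-in for a real cloud/orchestrator API."""
    def __init__(self):
        self.instances = {"web": 2, "db": 1}

    def scale(self, component, count):
        # A real implementation would create or destroy VMs/containers
        # and register the new instances with a load balancer.
        self.instances[component] = count

def autoscale(api, component, utilization, threshold=0.8):
    """Add one instance whenever utilization crosses the threshold."""
    if utilization > threshold:
        api.scale(component, api.instances[component] + 1)

api = InfrastructureAPI()
autoscale(api, "web", utilization=0.92)   # web tier near capacity: scale out
autoscale(api, "db", utilization=0.40)    # db tier fine: no change
print(api.instances)                      # {'web': 3, 'db': 1}
```

Real autoscalers (for example, a Kubernetes Horizontal Pod Autoscaler or an AWS Auto Scaling group) apply the same feedback loop, just with richer metrics and cool-down logic.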


This infrastructure awareness permeated application design. Applications were designed to survive the loss of individual components, and components were designed to stop doing work if they lost access to storage. Microservices-based applications that follow these modern design principles compensate for the limitations of containers.

Infrastructure awareness also meant that application components and/or portions of application storage could be treated as "immutable." Immutable application components or storage cannot be changed.


Immutable application components, should they become infected by malware or misconfigured, can simply be restarted to restore them to a known good, working condition. Immutable storage, similarly, cannot be deleted or overwritten.
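A minimal sketch of the immutable-storage idea, using a hypothetical write-once store (the class and key names are illustrative only):

```python
# Hypothetical write-once ("immutable") store: a key can be written
# exactly once and can never be overwritten or deleted afterward.
class WriteOnceStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        if key in self._data:
            # Overwrites are rejected, which is what protects the data
            # from tampering or accidental modification.
            raise PermissionError(f"{key!r} is immutable and already written")
        self._data[key] = value

    def get(self, key):
        return self._data[key]

store = WriteOnceStore()
store.put("audit/2018-12-05", "login event")
try:
    store.put("audit/2018-12-05", "tampered record")  # rejected
except PermissionError:
    pass
print(store.get("audit/2018-12-05"))  # login event
```

Object stores offer the same guarantee natively (for example, S3 Object Lock or WORM-mode storage), without the application having to enforce it itself.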


Message Queues and the Road to Serverless


A monolithic application doesn't have to communicate between pieces of itself. It is one application running as a single block of code with sole control of its storage on a single system. As soon as that application is broken up into pieces, however, communication between each of the application components becomes a concern.

With larger, SOA-style applications, it wasn't uncommon for each component to have its own means of communication over TCP/IP networking. Databases, for example, expose their own connection protocols, and anything that wants to talk to a given database must speak that database's proprietary protocol.

Over time, the concept of message queues was developed. Message queues are a simple means by which individual application components can communicate with one another without having to create communications protocols for each component or build awareness of those protocols into each component. Adopting message queues is often considered the first step in evolving older applications, such as SOA-based ones, over to a more microservices-based approach. At the very least, implementing message queues between components is a great way to start an argument about how microservices and SOA are all just the same concept with varying sizes of services sitting on a spectrum.
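The decoupling that message queues provide can be sketched with the Python standard library alone: the producer and consumer below share only the queue, never each other's protocols (the message shape is an illustrative assumption):

```python
# Minimal sketch of queue-decoupled components. Each side only needs
# to understand the message format, not the other component's API.
import queue
import threading

bus = queue.Queue()

def producer():
    # e.g. an order service publishing events
    for order_id in range(3):
        bus.put({"type": "order.created", "id": order_id})
    bus.put(None)  # sentinel: no more messages

processed = []

def consumer():
    # e.g. a fulfillment service consuming events at its own pace
    while True:
        msg = bus.get()
        if msg is None:
            break
        processed.append(msg["id"])

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(processed)  # [0, 1, 2]
```

A production system would substitute a networked broker (RabbitMQ, Amazon SQS, Kafka, and so on) for the in-process `queue.Queue`, but the contract between components is the same.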

As adoption of message buses grew, a new kind of application component became possible.



Despite the name, serverless is anything but. Serverless can be thought of as the ultimate evolution of the “componentization” of applications, and message queues are very important for serverless functions.

SOA broke monolithic applications into a series of smaller applications, where each application performed a cluster of related duties. Microservices then broke these down further, with each microservice performing a single, specific function, often representing a single feature or isolated piece of functionality.

Serverless takes a specific function, a very short piece of code, and performs only that function. Instead of worrying about containers or virtual machines, developers simply place the code for the function they wish to execute into the serverless management interface and set up triggers. And as we have recently shown,
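A serverless function of the kind described above is typically nothing more than a handler the platform invokes once per trigger event. The sketch below uses the AWS Lambda Python handler signature; the event shape and field names are illustrative assumptions:

```python
# Minimal sketch of a serverless-style function. The platform calls
# handler(event, context) once per trigger (queue message, HTTP
# request, file upload, etc.); no server management is involved.
import json

def handler(event, context=None):
    # Pull a field out of the triggering event; the "name" key here
    # is hypothetical, chosen just to make the example concrete.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}"}),
    }

print(handler({"name": "Alcide"}))
```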

Read More

Live from Re:Invent! Alcide Cloud Security Platform is Available on AWS Marketplace for Containers

Nov 27, 2018 3:09:19 PM / by Aviv Fattal posted in workload protection, cloud security, AWS, containers, marketplace


We are happy to announce that Alcide's Cloud-Native Security Platform is now available on the new AWS Marketplace for Containers.

AWS announced today, Tuesday, November 27th, during AWS re:Invent week, the AWS Marketplace for Containers, which adds support for software products that use Docker containers.

Read More

The Evolution of Serverless, Part 1: From Microservices to Containers to Functions

Nov 7, 2018 8:03:37 AM / by Anatoly Aliev posted in cloud security, serverless, workload protection



This post is part one of a blog series on the evolution of serverless security. The process of building applications has changed over time. Today, applications are designed to make use of multiple public clouds in addition to on-premises IT resources. They are also designed to use microservices, containers, and serverless. Each of these steps has been part of the evolution of application design, moving us towards applications that are inextricably interwoven with the infrastructure and workload automation software that controls the applications themselves.

Read More

Micro-segmentation for Better Cloud Security

Oct 10, 2018 5:20:56 AM / by Tal Rom posted in Micro segmentation, cloud security, workload protection


Micro-segmentation is an emerging practice that is quickly becoming a critical facet of cloud security. Its objective is not only to prevent compromise, but also to deal with what happens after compromise occurs. The purpose of micro-segmentation is to isolate applications and services from one another in order to prevent attackers from achieving their goals, even if they succeed in initially breaching the organization's IT defenses.

Read More