If you believe all the marketing hype, then Kubernetes is the silver bullet to make containers so routine that they’re boring, and your infrastructure will have better harmony than any boy band in history. If only this were true.
While Kubernetes is a great tool for container orchestration, there are strict limits on what it does for you out of the box. That's why getting started with Kubernetes is much more involved than simply installing it, starting it, and calling it a day. There are a number of choices you have to make to configure Kubernetes for your specific needs.
In this article, we walk through the main decisions you'll have to weigh in order to get started with Kubernetes. We'll cover which type of Kubernetes offering to use, how to get your cluster running, and how to manage and secure Kubernetes once it is set up.
Day 1: Pick your path to Kubernetes
There are three paths you can take to get started with Kubernetes. The first is to use a managed offering from a cloud provider, like AKS on Microsoft Azure. The second is to deploy your own servers and install one of the many enterprise-focused distributions, like OpenShift by Red Hat. The third option is to build it from scratch, which is the most complicated and work-intensive choice; Kubernetes the Hard Way by Kelsey Hightower is an often-used guide for those willing to put in the extra effort.
Day 2: Get Kubernetes running in your environment
On the managed path, getting a cluster running can be as simple as a single command. On Google Kubernetes Engine, for example:

$ gcloud container clusters create [CLUSTER_NAME]
On the distribution and do-it-yourself paths, there are a few more steps, starting with at least three master nodes for high availability: etcd needs a majority of its members available to keep serving, so three nodes let you lose one without an outage. Then you’ll need to add compute nodes and any other support servers required for supplementary infrastructure, like software-defined storage and more advanced load-balancing options.
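On the do-it-yourself path, kubeadm is a common bootstrapping tool. Here is a minimal sketch of what bringing up the first of three master nodes looks like; the API endpoint address is a placeholder, the commands must run on the actual servers (they are written to a script here so the steps can be reviewed first), and a real deployment also needs a load balancer in front of the API servers plus a CNI plugin afterward:

```shell
cat > bootstrap-control-plane.sh <<'EOF'
#!/bin/sh
# First master node only. --control-plane-endpoint should point at a
# load balancer shared by all three masters (placeholder address below);
# --upload-certs lets the other masters fetch the shared certificates.
kubeadm init \
  --control-plane-endpoint "k8s-api.example.internal:6443" \
  --upload-certs

# On the two remaining master nodes, run the "kubeadm join ... --control-plane"
# command that "kubeadm init" prints; add compute nodes with the plain
# "kubeadm join" command it also prints.
EOF
chmod +x bootstrap-control-plane.sh
```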
Now that you are up and running with Kubernetes, you have some more decisions to make before it is ready for use. At the highest level, these decisions cover topics like how to build and store container images, and what to use for networking: the default — wide-open networking — or a more advanced option. (There are a number of CNI plugins available if you are interested.)
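On the build-and-store side, the usual pattern is to build images and push them to a registry the cluster can pull from. A minimal sketch, where the registry host, team, and application names are all hypothetical:

```shell
cat > build-and-push.sh <<'EOF'
#!/bin/sh
set -e
# Hypothetical registry and image name; substitute your own.
IMAGE=registry.example.com/myteam/myapp:v1.0.0
docker build -t "$IMAGE" .
docker push "$IMAGE"
EOF
chmod +x build-and-push.sh
```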
Day 3: Now we need to maintain this Kubernetes monster
A full Kubernetes multi-node cluster deployment with live applications is not a part-time endeavor for the average operations team. Not only is it typically new to the environment, but it is also work on top of any existing application deployments. Over time, other application deployment models can be retired or migrated to Kubernetes, but there is a transition period where keeping operations as simple as possible is key.
Most Kubernetes deployments end up using a networking layer that supports network policies, as these are the most flexible way for developers to manage their own network security. Managing these policies only becomes more difficult as the number of configured policies grows. Combining this with having to ensure the underpinning infrastructure’s network is also properly secured is daunting, especially on a public cloud like AWS with its security groups. The best approach right now is to use a project like the one offered by Alcide, which understands both Kubernetes and the cloud it runs on, with a single, easy-to-use interface and real-time insights.
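To give a flavor of what those policies look like, a common starting point is a default-deny rule per namespace, which developers then open up selectively. A minimal sketch — the namespace name is a placeholder, and enforcement requires a CNI plugin that supports NetworkPolicy:

```shell
cat > default-deny.yaml <<'EOF'
# Deny all ingress traffic to every pod in the "team-a" namespace
# (placeholder name) until an explicit policy allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}   # empty selector = every pod in the namespace
  policyTypes:
    - Ingress
EOF
# kubectl apply -f default-deny.yaml   # needs a running cluster
```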
For monitoring and alerting, the CNCF hosts Prometheus, a top-level project that co-exists with Kubernetes and can handle most environments’ basic needs. If more in-depth diagnostics are needed from deployed applications, Jaeger, a distributed tracing system, is also supported by the CNCF.
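As a taste of what Prometheus configuration looks like, here is a minimal scrape configuration that discovers pods through the Kubernetes API. This is a sketch only; real deployments typically layer relabeling rules on top of this, or install a packaged monitoring stack instead of hand-writing the config:

```shell
cat > prometheus.yml <<'EOF'
# Minimal Prometheus config: scrape the server itself, plus every pod
# discovered through the Kubernetes API.
global:
  scrape_interval: 30s
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
EOF
```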
Don’t forget Kubernetes security
As with any modern network-connected infrastructure, security is a big part of the Kubernetes landscape. It goes beyond the initial concerns people have when security is mentioned (which often revolve around edge security, like firewalls). The network security policies mentioned above now play a big part in what used to be exclusively the domain of edge security.
Another common concern around security is secrets management. Secrets can be anything from passwords to private certificates to API keys. Kubernetes has a native way to manage secrets so they can be referenced from other points in the configuration, and it provides a solid foundation layer. If additional layers of security are required, consider products like HashiCorp Vault or Application Access Manager.
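The native mechanism looks like this in practice: create a secret, then reference it from a pod spec. A minimal sketch, where the secret name, key, value, and image are all hypothetical:

```shell
# Create the secret (needs a running cluster, so shown commented out):
# kubectl create secret generic db-creds --from-literal=password='s3cr3t'

cat > pod-with-secret.yaml <<'EOF'
# Pod that pulls the secret into an environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: registry.example.com/myteam/myapp:v1.0.0   # hypothetical image
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-creds
              key: password
EOF
```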
The area of security most often overlooked is the set of policies required to keep the platform itself running, not just the applications it supports. In Kubernetes, this covers areas like Pod Security Policies, which define the criteria a pod must meet in order to run; RBAC (authorization, or AuthZ), which controls what access different accounts have to cluster-wide and namespace-specific resources; and authentication (AuthN) services, which define the methods used to externally validate users of a Kubernetes cluster.
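RBAC, for instance, is expressed as Roles bound to users or groups. A minimal namespace-scoped sketch — the namespace and group names are placeholders — that lets one team read pods in its own namespace and nothing else:

```shell
cat > team-a-rbac.yaml <<'EOF'
# Role: the permissions themselves, scoped to the "team-a" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
  - apiGroups: [""]          # "" = the core API group (pods live here)
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# RoleBinding: attach those permissions to the "team-a" group.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-pod-readers
  namespace: team-a
subjects:
  - kind: Group
    name: team-a
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
# kubectl apply -f team-a-rbac.yaml   # needs a running cluster
```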
Whenever possible, it is best to use products that were built with containers as their native target model. Not only is container-native the fastest-growing style of development, it is also the easiest way to interact with Kubernetes. There are products that fit this model in almost every area operators need to cover, making operations from day two onward actually manageable.
About Vince Power
Vince Power is an Enterprise Architect at Medavie Blue Cross. His focus is on cloud adoption and technology planning in key areas like core computing (IaaS), identity and access management, application platforms (PaaS), and continuous delivery.