
Alcide Blog

Cloud-native Security Provider

Top 5 Best Practices for Healthy Kubernetes 1.14 Environments

May 2, 2019 4:01:32 AM / by Guest Writer: Eric Bruno


If you work with Kubernetes, you’re probably already familiar with basic Kubernetes best practices guides and patterns. But the recent release of Kubernetes v1.14 has introduced some new features, which in turn necessitate new best practices. Most of them center on security and automation, which sit at the top of the list for operations staff, management, and development alike. But there are some others that factor in as well.

In this article, we take a look at the top five best practices for Kubernetes 1.14 specifically. The goal is to help organizations update their approach to Kubernetes to reflect the latest changes to the platform.

 

Security: Using Role-Based Access Control (RBAC)

Kubernetes role-based access control gives you the ability to set permissions for specific sets of users over cluster resources. This includes how users can interact with your cluster or a cluster namespace. The Kubernetes API gives administrators control to dynamically change policies and groups. Prior to 1.14, however, the default RBAC policy allowed even unauthenticated users to query the cluster’s API discovery endpoints, a potential information leak that led some teams to distrust the RBAC defaults or avoid RBAC altogether.

 

With Kubernetes 1.14, however, API discovery endpoints are inaccessible to unauthenticated users by default, greatly improving data privacy and cluster security. The specific change was to remove the system:unauthenticated subjects from the system:discovery and system:basic-user ClusterRoleBindings. A new best practice is to use RBAC by default, knowing that users and resources won’t be able to perform sensitive operations without having the explicit permission to do so.
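With RBAC as the default, granting access becomes an explicit act. As a minimal sketch (the namespace and the user name "jane" are placeholders; real user identities come from your cluster’s authenticator), a namespaced Role plus a RoleBinding might look like this:

```yaml
# A Role granting read-only access to pods in the "default" namespace,
# and a RoleBinding granting that Role to a hypothetical user "jane".
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]              # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                   # hypothetical user supplied by your authenticator
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Anything not named in a rule is denied, which is exactly the posture the 1.14 defaults now encourage.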

 

Automation: Unify Windows and Linux Orchestration

Kubernetes now officially supports adding Windows servers as worker nodes and scheduling Windows containers, enabling Windows applications (new and legacy) to leverage the Kubernetes platform. If you have existing investments in both Windows-based and Linux-based applications, or are looking to expand with Windows-specific workloads, you no longer need separate orchestrators to manage the different workloads. In the past, this often led to operational inefficiencies across deployments. Now that Kubernetes 1.14 officially supports this, you can safely deploy Windows workloads knowing they will only be scheduled onto Windows-based worker nodes. The metrics and quotas for Windows-based pods closely match those for Linux containers, increasing consistency across platforms.
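In a mixed cluster, the usual way to keep Windows workloads on Windows nodes is a nodeSelector on the node OS label. The following is a sketch; the Deployment name and container image are placeholders:

```yaml
# A Deployment pinned to Windows worker nodes via a nodeSelector
# on the node's operating-system label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iis-sample
spec:
  replicas: 2
  selector:
    matchLabels:
      app: iis-sample
  template:
    metadata:
      labels:
        app: iis-sample
    spec:
      nodeSelector:
        kubernetes.io/os: windows   # keeps these pods off Linux nodes
      containers:
      - name: iis
        image: mcr.microsoft.com/windows/servercore/iis
```

A matching selector for `kubernetes.io/os: linux` on Linux-only workloads keeps scheduling deterministic in both directions.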

 

Enabling this leads to additional advantages, including operational efficiency (as mentioned), scalability across clusters comprised of varying platform implementations, and sharing of container knowledge across development teams, regardless of platform or language chosen.

 

Security: Pod Isolation

Most Linux users and administrators are aware of process ID (PID) limitations, especially with container-based services that quickly spin new processes up and down. PID exhaustion occurs when no new processes can be created on a node. A pod that exhausts the PID space, whether through a bug, a fork bomb, or simple misconfiguration, risks starving every other workload running on the same node.

 

Kubernetes v1.14 now allows administrators to place limits on new PID allocations, providing workload isolation via these limits and new PID reservation capabilities. As an example from a recent Kubernetes 1.14 blog, if a Linux machine supports 32,768 PIDs and 100 pods, an administrator can implement a budget of 300 PIDs per pod to prevent PID exhaustion. This feature helps increase overall system stability and guards against denial-of-service attacks on container-based applications through PID exhaustion.
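The per-pod budget from the example above is set on the kubelet, not on individual pods. A minimal sketch of the kubelet configuration, assuming the beta feature gate that ships with 1.14 is enabled:

```yaml
# Kubelet configuration enforcing a per-pod PID budget.
# In Kubernetes 1.14 this sits behind the SupportPodPidsLimit
# feature gate (beta).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  SupportPodPidsLimit: true
podPidsLimit: 300       # each pod on this node may create at most 300 processes
```

Because this is node-level configuration, the limit applies uniformly to every pod the kubelet runs, matching the 300-PIDs-per-pod budget described in the release blog.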


 

Orchestration: Using Pod Priority

The implementation of Kubernetes pod priority and preemption enables the Kubernetes scheduler to schedule more important pods and associated workloads first. When a cluster becomes resource-constrained, the scheduler will preempt (evict) less important pods, those with lower priority relative to other pods, to make room for more important (higher-priority) ones. This opens a door to abuse: a nefarious individual could create a demanding pod at the highest priority and effectively starve every other pod in the cluster. To guard against this, version 1.14 extends ResourceQuota to include priority as a “stable” feature. (The feature was first introduced in beta form in Kubernetes 1.12.) With this functionality, an admin can specify a ResourceQuota for users at specific priority levels, preventing them from creating pods with high priorities.
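Priority-scoped quotas are expressed with a scopeSelector on the ResourceQuota. As a sketch (the namespace, quota name, and the priority-class name "high" are placeholders):

```yaml
# A quota that caps how many pods may exist at the "high" priority
# level in this namespace, preventing priority abuse.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: high-priority-quota
  namespace: team-a            # hypothetical namespace
spec:
  hard:
    pods: "5"                  # at most 5 high-priority pods here
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high"]         # matches pods using the "high" PriorityClass
```

Pods at other priority levels are unaffected; only pods referencing the named PriorityClass count against this quota.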

 

As a Kubernetes 1.14 best practice, use pod priority to help the scheduler know which pods can be shut down and which pods need to be maintained when resources become constrained or activity spikes unexpectedly. Be sure to reference a PriorityClass (via priorityClassName) in your pod specs, as existing pods will not have a priority assigned by default.
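Concretely, that means defining a PriorityClass once and opting pods into it. A minimal sketch, with placeholder names and a placeholder image (the scheduling.k8s.io/v1 API reached GA in 1.14):

```yaml
# A PriorityClass, and a pod that opts into it via priorityClassName.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high
value: 1000000                 # higher value = scheduled first, preempted last
globalDefault: false
description: "For latency-critical workloads."
---
apiVersion: v1
kind: Pod
metadata:
  name: critical-app           # placeholder name
spec:
  priorityClassName: high      # pods without this field keep the default priority
  containers:
  - name: app
    image: nginx
```

Setting `globalDefault: false` keeps the class opt-in, so only workloads that explicitly name it receive the elevated priority.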

 

Performance: Local Persistent Volumes

Although this one mainly falls under the category of performance, it also touches orchestration and automation. Let’s look at the feature itself before getting to why.

 

Kubernetes workloads tend to use different types of block and file storage to persist data. While most use cases involve remote storage (e.g., cloud-based volumes), remote storage doesn’t always provide the level of consistent performance many applications need. With the Local Persistent Volume plugin, officially released as stable as of v1.14, Kubernetes workloads can now consume local storage, which typically offers higher performance, through the same volume APIs as before.

 

The difference is that, compared with the similarly featured Kubernetes HostPath Volume, the Kubernetes scheduler understands which node a Local Persistent Volume belongs to. This means, unlike HostPath Volumes, the scheduler will ensure workloads that rely on local storage stick to nodes that have that storage mounted and available. And since a Local Persistent Volume is referenceable only via a Persistent Volume Claim, it adds an additional layer of security, as you have more control over access to it.
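The node awareness described above comes from a required nodeAffinity on the volume, paired with a StorageClass that delays binding until the consuming pod is scheduled. A sketch, with a hypothetical mount path and node name:

```yaml
# A StorageClass for local volumes: no dynamic provisioner, and binding
# is delayed until a pod that claims the volume is actually scheduled.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# A Local Persistent Volume tied to one specific node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1      # hypothetical local disk mount
  nodeAffinity:                # tells the scheduler which node owns this volume
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["node-1"]   # hypothetical node name
```

A workload then claims the volume through an ordinary PersistentVolumeClaim, and the scheduler keeps the pod pinned to `node-1`, which is exactly the behavior HostPath Volumes cannot guarantee.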

 

In terms of best practices, remember that using Local Persistent Volumes limits the nodes on which the Kubernetes scheduler can run dependent workloads. This is where the orchestration advantages of this new feature become important for stability. However, for these same reasons, Local Persistent Volumes should only be considered for workloads that require the highest performance, or that handle data replication and backup at the application layer. This makes those applications resilient to node or data failures, remaining available despite the lack of such guarantees at the individual disk level.

 

Examples of candidate workloads include software defined storage systems and replicated databases. Other types of applications should continue to use highly available cloud-based storage.

 

Looking Ahead: The Operational Evolution of Kubernetes

Kubernetes continues to improve the security and automation capabilities of container-based microservices and applications. A glance through the 1.14 release notes reveals additional examples beyond the top 5 presented here, such as additional pod readiness feedback, improved metrics, automated security certificate exchanges within control planes, and much more.


With Kubernetes 1.15, planned new features such as dynamic auditing capabilities and network and application service topology-aware scheduling aim to extend the operational convenience and performance of container-based applications. The challenge is for Kubernetes to empower DevOps processes without becoming too tedious to manage, or too inflexible to adapt to different organizations’ maturing processes. So far, the team has done a good job of meeting these challenges.

 


Learn how Alcide helps you kick-start your K8s journey and sign up for the Alcide Advisor Early Access Program.

_______________

About Eric Bruno
Eric Bruno is a writer and editor for multiple online publications, with more than 25 years of experience in the information technology community. He is a highly requested moderator and speaker for a variety of conferences and other events on topics spanning the technology spectrum, from the desktop to the data center. He has written articles, blogs, white papers and books on software architecture and development for more than a decade. He is also an enterprise architect, developer, and industry analyst with expertise in full lifecycle, large-scale software architecture, design, and development for companies all over the globe. His accomplishments span highly distributed system development, multi-tiered web development, real-time development, and transactional software development. See his editorial work online at www.ericbruno.com.

Topics: cloud security, kubernetes, microservices, devops, alcide advisor