Publishing a Kubernetes Service
In Kubernetes, a Service is an abstract way to expose an application running on a set of Pods as a network service.
With Kubernetes you don’t need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
This post will describe the different ways to publish a Kubernetes service, the risks they introduce, and the methods that can be applied to mitigate those risks.
Publishing the service is done by assigning a value to the service type attribute. This attribute may have one of the following values: ClusterIP, NodePort, LoadBalancer, and ExternalName.
An alternative way is to use the Ingress object. The drawing below summarizes the different options:
- The value of ClusterIP will make the service accessible only from within the cluster
- The value of NodePort will make the service accessible from outside the cluster. The Kubernetes control plane allocates a port from a range specified by the --service-node-port-range flag (default: 30000-32767), and each node proxies that port (the same port number on every node) into the Service. This means that any cluster ingress traffic directed to that port, including traffic coming from the cluster load balancer, will be forwarded to the service.
Here is an example of NodePort configuration:
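A minimal NodePort Service manifest might look like the following sketch (the service name, selector, and port numbers are illustrative, not taken from any particular deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical service name
spec:
  type: NodePort
  selector:
    app: MyApp            # pods labeled app=MyApp back this service
  ports:
    - port: 80            # port exposed inside the cluster
      targetPort: 8080    # port the pods listen on
      nodePort: 30007     # must fall within --service-node-port-range
```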
Note that in the example a specific nodePort is specified; if it is omitted, the control plane allocates one from the configured range.
Starting with Kubernetes 1.10 it is also possible to restrict the IPs on which the port is proxied, by setting the --nodeport-addresses flag in kube-proxy to particular IP block(s).
- The value of LoadBalancer will make the service accessible via the load balancer that serves the cluster. There are two variants:
- External load balancer will make the service accessible from the public network
- Internal load balancer will make the service accessible only to nodes residing within the organization VPC. The use of an internal load balancer can be configured via a Kubernetes annotation (see a detailed description at the end of this blog).
In addition, the range of IP addresses that can access the service can be limited by using the loadBalancerSourceRanges property, which accepts a list of CIDR ranges. This property applies to both internal and external load balancers.
Below is an example for configuring the LoadBalancer service type:
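A sketch of such a manifest is shown below; the service name, selector, ports, and the CIDR ranges are illustrative placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical service name
spec:
  type: LoadBalancer
  selector:
    app: MyApp
  ports:
    - port: 80
      targetPort: 8080
  loadBalancerSourceRanges:   # only these CIDRs may reach the service
    - 10.0.0.0/24
    - 192.0.2.10/32
```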
Note the use of loadBalancerSourceRanges in the above example. This property limits the range of source IPs that can access the service.
- The value of ExternalName is outside the scope of this discussion; it maps the Service to the contents of the externalName field (e.g. foo.bar.example.com) by returning a CNAME record.
Another way to expose a service is by using an Ingress configuration. An Ingress allows both path-based and subdomain-based routing to backend services.
Below is an example for configuring the Ingress:
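A minimal sketch of such an Ingress is shown below, using the networking.k8s.io/v1 API; the Ingress name and backend port are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress     # hypothetical name
spec:
  rules:
    - host: foo.bar.com     # host-based routing rule
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service1   # backend Service receiving the traffic
                port:
                  number: 80
```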
In the above example, requests to foo.bar.com will be forwarded to the service1 service.
Some Ingress controllers support a configuration that restricts access to the application based on source IP addresses. This is done by using the ingress.kubernetes.io/whitelist-source-range annotation, whose value is a comma-separated list of CIDR blocks, e.g. 10.0.0.0/24,184.108.40.206/32
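Applied to an Ingress, the annotation might look like the following sketch (note that some controllers use a prefixed variant of the annotation, e.g. nginx.ingress.kubernetes.io/whitelist-source-range for the NGINX Ingress Controller):

```yaml
metadata:
  name: example-ingress     # hypothetical name
  annotations:
    # only clients from these CIDR blocks may reach the backends
    ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24,184.108.40.206/32"
```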
Using the LoadBalancer Value Harbors Risks
Assigning the value of LoadBalancer has two effects:
- The Service is exposed to the outside world which poses a security risk
- When running Kubernetes on a cloud provider platform that supports external load balancers, setting this value will cause the cloud provider infrastructure to provision a load balancer for the Service. The operational impact is an increase in cloud costs.
The security risk implied by using LoadBalancer is that the service can be accessed from any host on the public network, as the default source range is 0.0.0.0/0.
As mentioned in the previous section, this risk can be partially mitigated by limiting the range of IPs that can access the service. This is achieved by specifying the loadBalancerSourceRanges property.
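A minimal sketch of the relevant spec fragment, with an illustrative CIDR range:

```yaml
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 10.0.0.0/24   # hypothetical trusted range; all other sources are blocked
```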
Controlling Services Exposure via Whitelists
The service exposure implied by the different exposure mechanisms must be well managed. A good practice is to maintain four whitelists of services:
- External services accessible globally
This list will hold the names of the services that can be accessed from any IP on the public network
- External services with limited access
This list will hold the names of the services that can be accessed from IPs that are within specific ranges on the public network
- Internal services accessible globally
This list will hold the names of the services that can be accessed from any IP on the organization's internal network (VPC)
- Internal services with limited access
This list will hold the names of the services that can be accessed from IPs that are within specific ranges on the organization's internal network (VPC)
Enforcing Exposed Services Whitelists
The enforcement of such whitelists can become very tedious due to the highly dynamic nature of the Kubernetes cluster configuration and the frequent deployment of new pods and services.
One must check that each service deployment change done during both the development and production stages conforms with the whitelists.
It is especially important that the check is done during the development stage.
Consider the following scenario:
- A new service that should not be exposed is introduced as part of a new feature development
- The service is assigned the LoadBalancer service type by mistake
- During the feature development and testing the service is accessed from the application client. Everything works perfectly
- During the production deployment the LoadBalancer service type is removed as this service should not be exposed externally (does not appear in the whitelists)
- The production operation will fail and the development team will need to rework the design.
Here at Alcide, since we eat our own dog food, we’ve decided to take a step forward and add these lists as available checks in Alcide Advisor, our Kubernetes scanner that continuously scans the cluster configuration, both as part of the CI/CD pipeline and during production.
It’s important that the list of services you allow your developers to expose is maintained externally, so that you have full control over the exposed services. With Alcide Advisor you can implement your whitelist based on your preferences, and the scanner will run against it, making sure all exposed services are allowed; in case they’re not allowed, it will block the connection.
Internal Load Balancers
In a mixed environment it is sometimes necessary to route traffic from Services inside the same (virtual) network address block. In a split-horizon DNS environment you would need two Services to be able to route both external and internal traffic to your endpoints. You can achieve this by adding one of the following annotations to a Service; the annotation to add depends on the cloud service provider you’re using.
In GCP the specification would look like:
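A sketch of the relevant metadata fragment is shown below (the service name is illustrative; note that newer GKE versions use the networking.gke.io/load-balancer-type annotation instead):

```yaml
metadata:
  name: my-service   # hypothetical name
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
```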
In AWS the specification would look like:
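A sketch of the corresponding fragment for AWS (service name illustrative):

```yaml
metadata:
  name: my-service   # hypothetical name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```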
In Azure the specification would look like:
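A sketch of the corresponding fragment for Azure (service name illustrative):

```yaml
metadata:
  name: my-service   # hypothetical name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
```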
There are multiple ways to expose a Kubernetes service. Some of them are specified directly in the service definition, such as the LoadBalancer service type, and some are indirect, such as the use of an Ingress configuration.
Service exposure must be carefully controlled as it poses a severe security risk.
An effective way of controlling the service exposure is by maintaining an explicit list of services that are to be exposed and enforcing this list using an appropriate tool such as Alcide Advisor that will continuously scan the cluster configuration and alert on such a breach.
For the full details of the Kubernetes Service definition refer to: https://kubernetes.io/docs/concepts/services-networking/service/
Some useful information about the NGINX Ingress Controller can be found in its documentation.