Kubernetes offers a variety of security controls, but a default configuration exposes a wide attack surface and leaves you vulnerable.
In 2020, nearly 7 out of 10 companies reported a detected misconfiguration in their Kubernetes environment, making it by far the most common type of vulnerability (2020 State of Containers and Kubernetes Report).
Hope isn’t lost if you’re using a default configuration: There are plenty of simple best-practice changes you can make to quickly and easily improve the security posture of your platform.
Here are the top five changes you can make immediately that will have the biggest impact on your overall security posture…
1. Restrict access to the Kubernetes API
The Kubernetes API enables you to query and manipulate the state of all resources within your cluster, so restricting access to it is essential and the natural place for your best-practice security work to start.
What to do:
- Enable Role Based Access Control (RBAC) and define least privilege policies per user or service requiring access
- Externalise the authentication (authn) and authorization (authz) roles to an identity provider, whilst retaining policy implementation within Kubernetes
- Restrict access to the API endpoint and control plane Nodes
- Disable the insecure API server port (Note: This has been disabled in Kubernetes v1.20 so it’ll only be applicable if you’re using an older version)
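As an illustration of a least-privilege RBAC policy, the sketch below grants one user read-only access to Pods in a single namespace; the namespace and user names are placeholders:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: app-team        # placeholder namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only, nothing more
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: app-team
subjects:
  - kind: User
    name: jane@example.com   # placeholder identity from your external provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Using a namespaced Role rather than a ClusterRole keeps the grant scoped to one team's namespace; repeat per user or service rather than reaching for broad cluster-wide bindings.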
2. Encrypt data at rest
‘Data at rest’ is structured or unstructured data stored persistently, for example in databases, on volumes, or in object storage. It needs to be secured differently from data in motion, which is why Data At Rest Encryption (DARE) should quickly become one of your priorities to improve your overall Kubernetes security posture.
What to do:
- Configure encryption of secret data at rest
- Set predefined default Storage Classes with encryption enabled and limit user access to create new classes
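A minimal sketch of the first step: an EncryptionConfiguration passed to the kube-apiserver via its `--encryption-provider-config` flag, so that Secrets are encrypted before being written to etcd. The key below is a placeholder; generate your own 32-byte random key and manage it carefully:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # Secrets are written encrypted with the first provider listed
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder, do not commit real keys
      # identity allows reading any Secrets written before encryption was enabled
      - identity: {}
```

After enabling this, re-write existing Secrets (e.g. `kubectl get secrets --all-namespaces -o json | kubectl replace -f -`) so they are stored encrypted. On managed platforms, prefer the provider's KMS integration over a static key where available.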
3. Enforce least privileged pod security standards
Within a Pod specification, the SecurityContext is a collection of fields that specify security-relevant settings for a Pod. Without enforcement at a cluster level, these settings could be modified to allow a Pod to run in a privileged context: as the root user, on the host network with access to sensitive service endpoints, or with host file paths mounted in that may contain secret data.
By using Pod Security Standards or Admission Controllers, you can enforce the use of least privilege SecurityContext settings for all workloads running in the cluster and reject the creation of any Pods that aren’t adhering to defined policies.
What to do:
- Use admission controllers to prevent running privileged pods
- Always run containers as a non-root user
- Use read-only root filesystems
- Profile your workloads and apply seccomp policies
- Unmount the service account token from Pods if not required
- PSPs (PodSecurityPolicies) are deprecated; use PSS restricted namespaces, or check out our PSP Migration Tool for other options
- Consider Firecracker, gVisor or Kata Containers for greater isolation of workloads that require elevated permissions
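The list above can be sketched as a single Pod spec. This is an illustrative example, not a drop-in manifest; the image reference and user ID are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app          # placeholder name
spec:
  automountServiceAccountToken: false   # unmount the token if the app never calls the API
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001                    # arbitrary non-zero UID
    seccompProfile:
      type: RuntimeDefault              # apply the runtime's default seccomp profile
  containers:
    - name: app
      image: registry.example.com/team/app:1.2.3   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]                 # drop every Linux capability by default
```

Enforcing the same settings cluster-wide is then a matter of labelling namespaces for the Pod Security Standards `restricted` profile (e.g. `pod-security.kubernetes.io/enforce: restricted`) or expressing them as admission-controller policies.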
4. Container Image Policies
By default, Kubernetes can pull container images defined in your Pod specifications from any registry, as long as it’s reachable from the Node. This introduces the risk that a user (authorised or not) could accidentally or maliciously run an image from an untrusted source, which may cause direct harm to the platform and neighbouring services, resulting in data exfiltration and increased hosting and auditing costs.
Employing cluster-level image policy controls significantly reduces this risk and can be achieved via the use of custom admission controllers (i.e. OpenPolicyAgent Gatekeeper).
What to do:
- Always reference images by tag and digest, and enforce this with an admission controller (for example, OPA Gatekeeper). Tools like Renovate can keep pinned references up to date for you.
- Sign your container images and validate signatures.
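As one way to restrict image sources with Gatekeeper, the constraint below assumes the `K8sAllowedRepos` ConstraintTemplate from the Gatekeeper policy library is already installed in the cluster; the registry prefix is a placeholder for your own trusted registry:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: images-from-trusted-registry
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    repos:
      - "registry.example.com/"   # placeholder: only images from this prefix are admitted
```

Pods referencing images outside the listed prefixes are rejected at admission time, before they ever reach a Node.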
5. Implement default network policies
Kubernetes network policies control the traffic between pods and/or network endpoints. The pod selectors and labels you specify in a network policy determine which workloads the rules apply to, so you know exactly what can access what inside your clusters.
Without creating any initial policies, by default your workloads can be accessed by anything within the cluster and can reach out to any internal or external endpoints, increasing the risk of data leaks in the event of a breach.
What to do:
- Create default deny-all network policies for all ingress and egress traffic in all namespaces, and override with least privilege network policies where required for each service.
- Protect sensitive cluster endpoints, such as the cloud metadata service and etcd, from normal workloads (e.g. via global network policies, depending on your implementation and CNI choice), and prevent these protections being overridden or bypassed by users.
- Monitor policy failures; these can be early indications that something is trying to move laterally.
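The default deny-all starting point is a short manifest; apply one per namespace (the namespace name below is a placeholder) and then layer least-privilege allow rules on top:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: app-team        # placeholder: repeat for each namespace
spec:
  podSelector: {}            # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress                # no ingress rules listed, so all inbound traffic is denied
    - Egress                 # likewise all outbound traffic is denied
```

Note that NetworkPolicy objects only take effect if your CNI plugin implements them, so verify enforcement rather than assuming it.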
Do you know where you’re at risk?
The above recommendations are solid strategies to make your clusters more secure but, depending on the number of clusters involved, there’s potentially a mountain of work ahead of you.
We’ve developed a free 1-day assessment of your Kubernetes clusters to help you determine the relative robustness of your Kubernetes implementation.
After a single-day audit of your clusters carried out by one of our expert architects, you’ll receive a PDF of the results along with actionable insights on how to harden your security posture.