With the maturation of cloud, containerization, microservices, and distributed applications, zero-trust security has become a central topic of conversation in the IT world. Gartner predicts that by 2022, 75% of global organizations will be running containerized applications in production, with most application architectures relying on a container orchestration system, commonly Kubernetes. Meanwhile, a Forrester survey indicates that 43% of respondents hold back from adopting containerization and distributed application architectures in the cloud because of security challenges.
Taken together, these facts suggest that now is the right time to bring zero-trust security to distributed systems, and to Kubernetes in particular. In this article, we will explore how to apply zero-trust security to Kubernetes and why it matters for microservices + Kubernetes + cloud architectures.
Zero-Trust Security vs. Perimeter Security in the Cloud
‘Layers of perimeter security’ was the traditional data protection strategy practiced over the years. In those days, sensitive data lived in a single data center (on-premises). Today, however, notice how common it is to store sales data in a SaaS CRM, the public data of the same application in the cloud orchestrated by a platform like Kubernetes, and private or internal data in an on-premises data center. In this paradigm, can perimeter security alone ensure the security of your sensitive data?
In distributed application architectures, sensitive data resides in multiple locations (on-premises, cloud, and multi-cloud). Data consumers access it from all around the world, and employees no longer work connected to the organization's LAN. Application data therefore traverses multiple layers of trust, which makes verifying the integrity and authenticity of the data in today's containerized cloud applications anything but trivial.
Zero-Trust Security Model
The perimeter security model held the tenet that if you are connected to the network, you are trusted. In this model, a basic user has low-level access to only the application layer, but a privileged user has exclusive access to the application layer, sensitive data, and mission-critical Tier 0 assets. If not strategically managed and controlled, this lax environment inherently carries a higher risk in the face of direct hacks, accidentally introduced trojan horses, weak links in the network firewall, and so on.
The zero-trust model, by contrast, assumes that no network connection is safe until it proves otherwise, and this applies to human and non-human users alike. The zero-trust security framework focuses on tightening security from both the inside and the outside.
However, there is no prescriptive set of controls or technologies for implementing zero-trust security. Instead, only a set of principles exists, as follows:
- Security controls should apply to all the entities throughout the network.
- Network connections should authenticate at both the server and client ends. Network connections need to reauthenticate, and requests to reauthorize, when spanning more than a single transaction.
- Authorizations should follow the principle of least privilege.
- All the network connections and transactions should be subject to continuous monitoring.
How to Implement Zero-Trust Security Model in Kubernetes
1. Enforce Network Policies
The pods in a Kubernetes cluster can talk to other pods as well as to the internet (including the cloud provider's API). While this is a vital networking feature for a microservices application deployed in Kubernetes, it is a high-risk security loophole for any application that lives in a dynamic and fluid perimeter, as discussed above.
To step into the zero-trust security model in Kubernetes, start with a default deny-all network policy that stops pods from talking to anything. Then, after carefully evaluating the required connections, slowly add more permissive policies that override the default deny-all policy. By applying network policies in this manner, you are following the principle of least privilege, a basic principle of zero-trust security.
For example, you would first apply a default deny-all policy, and then an allow policy that permits access from App 1 to App 2.
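As a sketch of these two policies (the `app=app-1` and `app=app-2` labels and the `default` namespace are illustrative, not prescribed), the manifests might look like this:

```yaml
# Default deny-all policy: blocks all ingress and egress
# traffic to every pod in the "default" namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}          # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
---
# Allow policy: permits traffic from pods labelled app=app-1
# to pods labelled app=app-2, overriding the deny-all default
# for exactly this connection.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-1-to-app-2
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: app-2
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: app-1
```

Note that network policies are additive: any connection matched by at least one allow rule is permitted, so each new policy carves a narrow exception out of the deny-all baseline.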
2. Manage Workload Identity using SPIFFE
Remember, authentication, or verifying every asset that you communicate with, is the basis of the zero-trust security framework. Secure Production Identity Framework For Everyone, or SPIFFE for short, is an open standard that authenticates every node, container, cluster, cloud, or anything else on which you run your workloads. Technically, SPIFFE is the standard behind mutual Transport Layer Security (mTLS) connections.
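For illustration, a SPIFFE ID is a URI that names a workload within a trust domain. Kubernetes-based implementations commonly encode the namespace and service account into the path (the `cluster.local` trust domain and `app-2` names below are illustrative):

```
spiffe://<trust-domain>/<workload-path>
spiffe://cluster.local/ns/default/sa/app-2
```

This identity is embedded in an X.509 certificate (an SVID) that the workload presents during the mTLS handshake, so the peer can verify exactly which workload it is talking to.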
mTLS connections authenticate both the client side and the server side using TLS certificates, and they encrypt the network traffic as well.
3. Implement a Service Mesh
A service mesh brings both network policies and the SPIFFE standard together within a zero-trust architecture. Today, many open-source projects, such as Istio, offer security features fitting the zero-trust security model that you can implement alongside Kubernetes.
For example, consider the Istio service mesh. Istio's mutual TLS is based on the SPIFFE standard. istiod (which absorbed the earlier Citadel component) issues and manages certificates and keys for workloads on both the server and client sides. Pods in this network architecture therefore communicate with each other (through their Envoy sidecar proxies) over encrypted and trusted connections, obeying the zero-trust security principles.
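As a sketch of what enabling this looks like in practice, a single Istio PeerAuthentication resource applied in the root namespace (`istio-system` by default) can require strict mTLS for every workload in the mesh:

```yaml
# Mesh-wide policy: workloads accept only mTLS-encrypted
# traffic from other workloads in the mesh; plaintext is rejected.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace => applies mesh-wide
spec:
  mtls:
    mode: STRICT
```

STRICT mode enforces the zero-trust tenet that no connection is trusted by default; PERMISSIVE mode, by contrast, is useful only as a migration step while sidecars are being rolled out.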
The best way to begin enforcing zero-trust network segmentation in Kubernetes is with the Kubernetes-native controls, such as Kubernetes network policies. The second is SPIFFE. You can achieve both of these zero-trust network segmentation mechanisms by implementing a service mesh on top of Kubernetes. In addition, most service mesh architectures support different authorization sources as well, such as static lists and third-party single sign-on services.
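For example (a sketch, with the service account, namespace, and label names assumed for illustration), an Istio AuthorizationPolicy can authorize requests based on the SPIFFE-derived identity of the calling workload, layering least-privilege authorization on top of mTLS authentication:

```yaml
# Allow only workloads running as the app-1 service account
# to call pods labelled app=app-2; all other requests to
# app-2 are denied once an ALLOW policy exists for it.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-app-1
  namespace: default
spec:
  selector:
    matchLabels:
      app: app-2
  action: ALLOW
  rules:
    - from:
        - source:
            principals:
              - "cluster.local/ns/default/sa/app-1"
```

The `principals` field matches the SPIFFE identity presented in the client's certificate, so authorization decisions are tied to a verified workload identity rather than to a network location.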