Introduction
This blog series outlines the various configuration choices and recommended practices for achieving effective cluster multi-tenancy.
While sharing clusters can lead to cost savings and streamlined management, it also introduces challenges like ensuring security, maintaining fairness, and dealing with the impact of noisy neighbors.
Clusters can be shared in multiple ways. Sometimes, different applications might run side by side within the same cluster. In other scenarios, multiple instances of a single application could operate in the same cluster, with each instance dedicated to a specific end user. Collectively, these forms of resource sharing are often referred to as multi-tenancy.
Although Kubernetes doesn’t natively support the concepts of end users or tenants, it offers various features that can address diverse tenancy needs, which we’ll explore in this series.
Blog Series
Below are the critical aspects to consider when implementing multi-tenancy in Kubernetes.
Multi-Tenancy in Kubernetes & OpenShift: A Comprehensive Guide
Part 1: Use Cases & Implementations
Part 2: Namespace-Based Isolation for Workload Separation
Part 3: Network Policies for Network Isolation
Part 4: Role-Based Access Control (RBAC) for Authorization
Part 5: Resource Quotas and LimitRanges for Resource Control
Part 6: Pod Security Standards (PSS) for Workload Security
Part 7: Storage Isolation for Persistent Volume Security
Part 8: Ingress Control Isolation for External Access Segregation
Part 9: Control Plane Robustness to Safeguard Shared Kubernetes Resources
Part 10: NodePort and HostPort Restrictions for Enhanced Network Security
Part 11: Resource and Cost Tracking for Showback/Chargeback
Part 12: Multi-Tenant Considerations for Shared Tools
Use Cases
Understanding our specific use case is the first step in deciding how to share a cluster effectively. This helps us evaluate the best patterns and tools for our needs. Generally, Kubernetes multi-tenancy can be divided into two main categories, though many variations and hybrid approaches exist.
Multiple Teams
A common approach to multi-tenancy is sharing a cluster among multiple teams within an organization. Each team may run one or more workloads that often need to communicate with each other or with workloads hosted in other clusters.
In this setup, team members typically have direct access to Kubernetes resources through tools like kubectl or indirect access via GitOps controllers or release automation tools. While there’s usually a certain degree of trust between teams, it’s crucial to implement Kubernetes policies like RBAC, quotas, and network policies to ensure clusters are shared securely and equitably.
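As a sketch of what these guardrails can look like, the manifests below grant a team group edit rights in its own namespace and cap the team's resource consumption. The names `team-a` and `team-a-devs` are illustrative, and the quota values are arbitrary; the built-in `edit` ClusterRole and the `ResourceQuota` mechanics are standard Kubernetes.

```yaml
# RoleBinding: members of the (hypothetical) "team-a-devs" group get the
# built-in "edit" ClusterRole, but only inside the "team-a" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-edit
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-devs
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
---
# ResourceQuota: caps the total resources the team can request in this
# namespace, so one tenant cannot starve the others.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    pods: "50"
```

Applied with `kubectl apply -f`, these objects enforce both who can act in the namespace and how much of the cluster the team can claim.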
Multiple Customers
Another common form of multi-tenancy occurs when a Software-as-a-Service (SaaS) provider hosts multiple instances of a workload for different customers. While this model is often called "SaaS tenancy," a more accurate term might be "multi-customer tenancy," since it applies to scenarios beyond traditional SaaS models.
In this setup, customers don’t have access to the Kubernetes cluster, which is fully managed by the vendor. From the customer's perspective, Kubernetes is invisible. Cost efficiency is often a major concern, and Kubernetes policies are used to ensure workloads remain strongly isolated from one another.
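A common building block for this kind of strong isolation is a default-deny NetworkPolicy applied to each customer's namespace. The sketch below (the namespace name `customer-1` is illustrative) selects every pod in the namespace and permits no traffic until more specific policies allow it:

```yaml
# Default-deny NetworkPolicy: the empty podSelector matches every pod in
# the namespace, and because no ingress or egress rules are listed, all
# traffic in both directions is denied by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: customer-1
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```

Note that NetworkPolicy objects are only enforced when the cluster runs a CNI plugin that supports them, such as Calico or Cilium.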
Terminology
Tenants
When it comes to multi-tenancy in Kubernetes, there isn’t a universal definition for what constitutes a "tenant." Instead, the meaning of a tenant can change depending on whether we’re dealing with multi-team or multi-customer tenancy.
In a multi-team scenario, a tenant is usually defined as a team. Each team typically deploys a limited number of workloads that scale based on the complexity of their service. However, the concept of a "team" can be a bit ambiguous, as teams might be grouped into larger divisions or split into smaller sub-teams.
On the other hand, if a team deploys distinct workloads for each client, this reflects a multi-customer tenancy model. In this case, a "tenant" refers to a group of users sharing a single workload, which could range from an entire company to just one team within that company.
Many organizations apply both definitions of "tenant" depending on the context. For example, a platform team might provide shared services, such as security tools and databases, to multiple internal "customers." At the same time, a SaaS vendor might have multiple teams sharing a development cluster. Hybrid architectures are also possible, such as a SaaS provider combining dedicated per-customer workloads for sensitive data with shared multi-tenant services.
Isolation
There are various approaches to designing and building multi-tenant solutions with Kubernetes, each with its own trade-offs that impact isolation levels, implementation effort, operational complexity, and service cost.
A Kubernetes cluster consists of a control plane, which manages the Kubernetes software, and a data plane made up of worker nodes where tenant workloads run as pods. Depending on an organization’s needs, isolation can be enforced in both the control plane and the data plane.
Isolation levels are often described using terms like “hard” and “soft” multi-tenancy. "Hard" multi-tenancy implies strong isolation and is typically needed when tenants don’t trust each other, especially regarding security and resource-sharing. It’s crucial for defending against threats like data exfiltration or denial-of-service attacks. Since data planes have a larger attack surface, achieving "hard" multi-tenancy usually requires strict data-plane isolation, though control plane security remains equally important.
However, these terms—"hard" and "soft"—can be ambiguous, as their definitions aren’t universally agreed upon. Instead, tenant isolation exists on a spectrum, and different techniques can achieve varying degrees of separation depending on specific requirements.
In some cases, achieving adequate isolation might mean avoiding shared clusters altogether and giving each tenant their own dedicated cluster, possibly even on separate hardware if virtual machines aren’t considered secure enough. Managed Kubernetes services can make this approach easier, as cloud providers handle much of the cluster management overhead. Still, the benefits of stronger isolation must be balanced against the added cost and complexity of managing multiple clusters.
Implementations
There are two main approaches to achieving multi-tenancy within a shared Kubernetes cluster: using Namespaces (assigning a separate Namespace for each tenant) or virtualizing the control plane (creating a virtual control plane for each tenant).
Namespace-based isolation is a well-established feature in Kubernetes. It has minimal resource overhead and includes mechanisms for proper tenant interactions, like enabling service-to-service communication. However, it can be complex to configure and doesn’t cover certain Kubernetes resources that aren’t namespace-scoped, such as Custom Resource Definitions, Storage Classes, and Webhooks.
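As a minimal sketch, a per-tenant Namespace often carries labels for tenant identification and for admission policies such as Pod Security Standards. The tenant name and the `tenant` label are illustrative; the `pod-security.kubernetes.io/*` labels are the standard Pod Security admission labels:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
  labels:
    # Illustrative tenant label, useful as a selector for quotas,
    # network policies, and cost reporting.
    tenant: tenant-a
    # Pod Security Standards admission labels: enforce the
    # "restricted" profile for all workloads in this namespace.
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
```

Cluster-scoped resources (listable with `kubectl api-resources --namespaced=false`) remain shared across all tenants regardless of this isolation, which is the limitation noted above.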
Virtualizing the control plane, on the other hand, offers isolation for non-namespaced resources but comes with higher resource consumption and added complexity in facilitating cross-tenant interactions. This approach works well when namespace isolation alone isn’t enough, yet maintaining separate clusters for each tenant would be too costly or inefficient, especially in on-premises environments. Even with a virtualized control plane, namespaces can still provide additional benefits.
In the next blog, we’ll dive into Namespace-Based Isolation for workload separation, exploring why namespaces are essential for multi-tenancy and how to implement them effectively.