Stakater Blog


MTO on EKS

In this blog, we’ll walk you through running the Multi Tenant Operator (MTO) on an Amazon EKS (Elastic Kubernetes Service) cluster, letting teams interact with the cluster in a streamlined way.


Multi Tenant Operator

MTO is a product by Stakater, and for years we’ve been helping customers achieve robust multi-tenancy with integrations to ArgoCD and Vault, primarily on OpenShift. Since March 2022, MTO has been a Red Hat Certified Operator, and we’ve recently expanded its capabilities to work seamlessly with native Kubernetes and cloud-based Kubernetes distributions.


With MTO, we ensure proper tenancy through a well-structured RBAC model and targeted admission policies. This approach allows safe use of the cluster, providing segregated partitions both within the cluster and across its integrations.


For a more detailed breakdown of MTO’s features, check out Simplify Multitenancy with Stakater's Multi Tenant Operator.


Walk-through

In this example, we’ll demonstrate how different IAM and SSO users, acting as developers, can access an EKS cluster, and how tenancy impacts their scope of operations.


Prerequisites

  • kubectl: We need kubectl to interact with the clusters. You can visit Install Kubectl for installation details.

  • Helm CLI: To install MTO, we’ll also need the Helm CLI. Check out Installing Helm to get Helm CLI set up.

  • AWS Console User: We’ll be using a user in the AWS Console with administrator permissions to access the cluster and create groups with users.

  • Running EKS Cluster: Lastly, we’ll need a running EKS Cluster. Creating an EKS Cluster provides a good tutorial if you need to create a demo cluster.
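Before moving on, it can help to sanity-check the tooling. A minimal sketch (the AWS profile in use is whatever identity you intend to administer the cluster with):

```shell
# Verify the CLIs are installed and on the PATH
kubectl version --client
helm version

# Confirm which AWS identity will be used for cluster administration
aws sts get-caller-identity
```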


Configure an EKS Cluster

In this example, we’ve already set up a small EKS cluster with a basic node group configuration.

EKS supports multiple cluster authentication modes. We’ve configured access using both the EKS API and the `aws-auth` ConfigMap. This setup allows the admin to access the cluster through the EKS API, while IAM users are mapped to our EKS cluster using the `aws-auth` ConfigMap. You can find more details on how to connect to the EKS cluster at AWS EKS Access.
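One way to confirm the authentication mode is via the AWS CLI; a sketch, assuming a cluster named `demo-cluster` in `eu-north-1` (both are placeholders):

```shell
# Inspect the cluster's authentication mode; for this setup it should
# report API_AND_CONFIG_MAP (EKS API access entries plus aws-auth)
aws eks describe-cluster \
  --name demo-cluster \
  --region eu-north-1 \
  --query "cluster.accessConfig.authenticationMode" \
  --output text
```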


We’ve attached the `AmazonEKSClusterAdminPolicy` to our user, who is synchronized via SSO and has Administrator privileges on the AWS Console. This makes us a cluster admin.

Note: Our user is also added to the `cluster-admins` group, which we’ll use later when installing MTO.
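For reference, granting that admin access through EKS access entries can be sketched with the AWS CLI; the cluster name and the SSO role placeholder below are assumptions for illustration:

```shell
# Register the SSO-backed principal as an access entry on the cluster
aws eks create-access-entry \
  --cluster-name demo-cluster \
  --principal-arn arn:aws:iam::<account>:role/<sso-admin-role>

# Associate the cluster-admin access policy with that principal,
# scoped to the whole cluster
aws eks associate-access-policy \
  --cluster-name demo-cluster \
  --principal-arn arn:aws:iam::<account>:role/<sso-admin-role> \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```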


Installing Cert Manager and MTO

In this section, we’ll install Multi Tenant Operator (MTO) to manage tenancy between different users and groups. Since MTO has several webhooks that need certificates, we’ll need to install Cert Manager first to handle the certs automatically.


As cluster admins, we’ll begin by installing Cert Manager to automate the handling of operator certificates.
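A typical Helm-based installation looks like the following (the chart repository and flags follow the standard cert-manager setup; the exact version is left to the reader):

```shell
# Add the Jetstack repository that hosts the cert-manager chart
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Install cert-manager along with its CRDs into its own namespace
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true
```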

Let's wait for the pods to be up:

$ kubectl get pods -n cert-manager --watch

NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-7fb948f468-wgcbx              1/1     Running   0          7m18s
cert-manager-cainjector-75c5fc965c-wxtkp   1/1     Running   0          7m18s
cert-manager-webhook-757c9d4bb7-wd9g8      1/1     Running   0          7m18s

We’ll use Helm to install MTO. In this setup, we’ve set `bypassedGroups` to `cluster-admins` because our admin user is part of that group, as noted earlier.

helm install tenant-operator oci://ghcr.io/stakater/public/charts/multi-tenant-operator --version 0.12.62 --namespace multi-tenant-operator --create-namespace --set bypassedGroups=cluster-admins

We’ll wait for the pods to reach the running state:

$ kubectl get pods -n multi-tenant-operator --watch

NAME                                                              READY   STATUS    RESTARTS   AGE
tenant-operator-namespace-controller-768f9459c4-758kb             2/2     Running   0          2m
tenant-operator-pilot-controller-7c96f6589c-d979f                 2/2     Running   0          2m
tenant-operator-resourcesupervisor-controller-566f59d57b-xbkws    2/2     Running   0          2m
tenant-operator-template-quota-intconfig-controller-7fc99462dz6   2/2     Running   0          2m
tenant-operator-templategroupinstance-controller-75cf68c872pljv   2/2     Running   0          2m
tenant-operator-templateinstance-controller-d996b6fd-cx2dz        2/2     Running   0          2m
tenant-operator-tenant-controller-57fb885c84-7ps92                2/2     Running   0          2m
tenant-operator-webhook-5f8f675549-jv9n8                          2/2     Running   0          2m

Users Interaction with the Cluster

We’ll interact with the cluster using two types of users: IAM users created via the AWS Console and SSO Users.


IAM Users

We have created a user named `test-benzema-mto` in the AWS Console, with the ARN `arn:aws:iam::<account>:user/test-benzema-mto`.


This user has a policy attached that allows them to get cluster info.

{
    "Statement": [
        {
            "Action": "eks:DescribeCluster",
            "Effect": "Allow",
            "Resource": "*"
        }
    ],
    "Version": "2012-10-17"
}
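Creating such a user and attaching the policy can be sketched with the AWS CLI; the inline policy name `eks-describe-cluster` is an arbitrary choice for this example:

```shell
# Create the IAM user used in this walk-through
aws iam create-user --user-name test-benzema-mto

# Attach the DescribeCluster permission inline so the user can fetch
# cluster connection details
aws iam put-user-policy \
  --user-name test-benzema-mto \
  --policy-name eks-describe-cluster \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "eks:DescribeCluster",
        "Effect": "Allow",
        "Resource": "*"
      }
    ]
  }'
```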

We’ve mapped this user in the `aws-auth` ConfigMap in the `kube-system` namespace:

  mapUsers:
    - groups:
      - iam-devteam
      userarn: arn:aws:iam::<account>:user/test-benzema-mto
      username: test-benzema-mto

Using this AWS guide, we’ll have the user update their `kubeconfig` and try to access the cluster.
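The kubeconfig update boils down to a single command; the cluster name, region, and AWS profile below are placeholders:

```shell
# Generate a kubeconfig entry that authenticates as the IAM user
aws eks update-kubeconfig \
  --name demo-cluster \
  --region eu-north-1 \
  --profile test-benzema-mto
```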


Since we haven’t attached any RBAC to this user yet, trying to access anything in the cluster results in an error:

$ kubectl get svc

Error from server (Forbidden): services is forbidden: User "test-benzema-mto" cannot list resource "services" in API group "" in the namespace "default"

SSO Users

For SSO users, we’ll map the role `arn:aws:iam::<account>:role/aws-reserved/sso.amazonaws.com/eu-north-1/AWSReservedSSO_PowerUserAccess_b0ad9936c75e5bcc` in the `aws-auth` ConfigMap in the `kube-system` namespace. This role is attached by default to users on SSO login to the AWS Console and AWS CLI:

  mapRoles:
    - groups:
      - sso-devteam
      rolearn: arn:aws:iam::<account>:role/AWSReservedSSO_PowerUserAccess_b0ad9936c75e5bcc
      username: sso-devteam:{{SessionName}}

Since this user also doesn’t have RBAC attached, trying to access anything in the cluster will result in an error.

$ kubectl get svc

Error from server (Forbidden): services is forbidden: User "sso-devteam:random-user-stakater.com" cannot list resource "services" in API group "" in the namespace "default"

Setting up Tenant for Users

Now, we’ll set up tenants for the users mentioned above.

We’ll start by creating a `Quota CR` with some resource limits:

kubectl apply -f - <<EOF
apiVersion: tenantoperator.stakater.com/v1beta1
kind: Quota
metadata:
  name: small
spec:
  limitrange:
    limits:
    - max:
        cpu: 800m
      min:
        cpu: 200m
      type: Container
  resourcequota:
    hard:
      configmaps: "10"
EOF
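We can confirm the resource was created; using the fully qualified name avoids clashing with the built-in ResourceQuota short name `quota` (the plural CRD name `quotas` is assumed here):

```shell
# List MTO Quota CRs; "small" should appear in the output
kubectl get quotas.tenantoperator.stakater.com
```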

Now, we’ll reference this `Quota` in two `Tenant` CRs:

kubectl apply -f - <<EOF
apiVersion: tenantoperator.stakater.com/v1beta3
kind: Tenant
metadata:
  name: tenant-iam
spec:
  namespaces:
    withTenantPrefix:
    - dev
    - build
  accessControl:
    owners:
      groups:
      - iam-devteam
  quota: small
EOF
kubectl apply -f - <<EOF
apiVersion: tenantoperator.stakater.com/v1beta3
kind: Tenant
metadata:
  name: tenant-sso
spec:
  namespaces:
    withTenantPrefix:
    - dev
    - build
  accessControl:
    owners:
      groups:
      - sso-devteam
  quota: small
EOF
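As with the quota, the tenants can be verified from the admin context (the plural CRD name `tenants` is assumed):

```shell
# Both tenants should be listed, each referencing the "small" quota
kubectl get tenants.tenantoperator.stakater.com
```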

The only difference between the two tenant specs is the groups.


Accessing Tenant Namespaces

After creating the `Tenant` CRs, users can access namespaces in their respective tenants and perform create, update, and delete functions.


As a cluster admin, listing the namespaces will show us the recently created tenant namespaces:

$ kubectl get namespaces

NAME                    STATUS   AGE
cert-manager            Active   8d
default                 Active   9d
kube-node-lease         Active   9d
kube-public             Active   9d
kube-system             Active   9d
multi-tenant-operator   Active   8d
random                  Active   8d
tenant-iam-build        Active   5s
tenant-iam-dev          Active   5s
tenant-sso-build        Active   5s
tenant-sso-dev          Active   5s

IAM Users on Tenant Namespaces

We’ll now try to deploy a pod as user `test-benzema-mto` in their tenant namespace `tenant-iam-dev`:

$ kubectl run nginx --image nginx -n tenant-iam-dev

pod/nginx created

If we try the same operation in the other tenant with the same user, it will fail as expected:

$ kubectl run nginx --image nginx -n tenant-sso-dev

Error from server (Forbidden): pods is forbidden: User "test-benzema-mto" cannot create resource "pods" in API group "" in the namespace "tenant-sso-dev"

Note: `test-benzema-mto` can’t list namespaces, as expected.

$ kubectl get namespaces

Error from server (Forbidden): namespaces is forbidden: User "test-benzema-mto" cannot list resource "namespaces" in API group "" at the cluster scope
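Beyond RBAC, tenant namespaces also inherit the resource boundaries defined in the `Quota` CR. As a sketch, a pod whose container asks for more than the 800m CPU maximum should be rejected by the limit range propagated into the namespace (the pod name is arbitrary):

```shell
kubectl apply -n tenant-iam-dev -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cpu-hungry
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "1"
      limits:
        cpu: "1"
EOF
```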

SSO Users on Tenant Namespaces

We’ll repeat the above operations for our SSO user `sso-devteam:random-user-stakater.com` as well:

$ kubectl run nginx --image nginx -n tenant-sso-dev

pod/nginx created

Attempting operations outside the scope of their own tenant will also result in errors, as expected:

$ kubectl run nginx --image nginx -n tenant-iam-dev

Error from server (Forbidden): pods is forbidden: User "sso-devteam:random-user-stakater.com" cannot create resource "pods" in API group "" in the namespace "tenant-iam-dev"

Note: `sso-devteam:random-user-stakater.com` can’t list namespaces, as expected.

$ kubectl get namespaces

Error from server (Forbidden): namespaces is forbidden: User "sso-devteam:random-user-stakater.com" cannot list resource "namespaces" in API group "" at the cluster scope

Conclusion

In this blog, we explored how to achieve multi-tenancy in EKS Clusters using the Multi-Tenant Operator by adding different AWS IAM users and groups to Tenants.


Managing numerous teams can be challenging due to shared access concerns. This issue can be addressed by separating access based on usage and needs, which is easily handled using the Multi-Tenant Operator.
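If you were following along on a demo cluster and want to tear things down, a sketch of the cleanup (note that, depending on configuration, deleting a Tenant CR also removes its namespaces):

```shell
# Remove the tenants and quota created in this walk-through
kubectl delete tenants.tenantoperator.stakater.com tenant-iam tenant-sso
kubectl delete quotas.tenantoperator.stakater.com small

# Uninstall MTO and cert-manager
helm uninstall tenant-operator -n multi-tenant-operator
helm uninstall cert-manager -n cert-manager
```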
