For role-based access to AWS from EKS, we may approach this in two ways. The easiest is to attach the required IAM policy directly to the role of the underlying node (or nodegroup) used in the cluster.
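For instance, a minimal sketch of attaching a managed read-only S3 policy to the nodegroup's instance role (node-role-example is a placeholder; substitute your nodegroup's actual role name):

# Attach a managed policy to the nodegroup's instance role.
# Every pod scheduled on these nodes gets these permissions.
aws iam attach-role-policy \
  --role-name node-role-example \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess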
But this is not easily replicable for Fargate pods. Also, there is no way to control access to specific policies for specific namespaces or pods in the cluster; every pod scheduled on the node inherits the same permissions.
Another way to do this is through service accounts (which is the recommended best practice for Kubernetes). This does require the cluster's OIDC provider to be registered with the AWS account as an IAM identity provider. For an EKS cluster this can be done easily with
eksctl utils associate-iam-oidc-provider --cluster $EKS_CLUSTER_NAME --approve
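To confirm the provider was registered, you can list the account's OIDC identity providers (assuming the AWS CLI is configured for the same account):

# The cluster's OIDC issuer should appear in this list after registration.
aws iam list-open-id-connect-providers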
Here we create a service account with the annotation eks.amazonaws.com/role-arn. For example:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myserviceaccount
  namespace: mynamespace
  annotations:
    eks.amazonaws.com/role-arn: $ROLE_ARN
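Pods then pick up the role by referencing the service account in their spec; the EKS pod identity webhook injects the AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE environment variables, which AWS SDKs use automatically. A minimal sketch (the pod name and image are placeholders):

# Hypothetical pod that assumes the IAM role via the service account above.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: mynamespace
spec:
  serviceAccountName: myserviceaccount
  containers:
    - name: app
      image: public.ecr.aws/docker/library/busybox:latest  # placeholder image
      command: ["sleep", "3600"]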
For the role, make sure that it contains the following trust policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "$OIDC_ARN"
      },
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER_NAME}:sub": "system:serviceaccount:${Namespace}:${ServiceAccountName}",
          "${OIDC_PROVIDER_NAME}:aud": "sts.amazonaws.com"
        }
      },
      "Action": "sts:AssumeRoleWithWebIdentity"
    }
  ]
}
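If the role does not exist yet, it can be created from this trust policy directly; a sketch using the AWS CLI (my-irsa-role and trust-policy.json are hypothetical names, and the attached permissions policy is just an example):

# Create the role with the trust policy above saved to trust-policy.json,
# then attach whatever permissions policy the pods actually need.
aws iam create-role \
  --role-name my-irsa-role \
  --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy \
  --role-name my-irsa-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess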
In this case, for example, the OIDC provider name is something like
OIDC_PROVIDER_NAME=oidc.eks.ap-south-1.amazonaws.com/id/FE4B72EB7E2FCC92766A421E67A9311F
and the ARN after registration via eksctl can be
OIDC_ARN=arn:aws:iam::123456789011:oidc-provider/oidc.eks.ap-south-1.amazonaws.com/id/FE4B72EB7E2FCC92766A421E67A9311F
and, matching the service account above,
Namespace=mynamespace
ServiceAccountName=myserviceaccount
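These values need not be typed by hand; a common pattern (assuming the AWS CLI is configured and the cluster name is in $EKS_CLUSTER_NAME) is to derive them from the cluster itself:

# Derive the provider name from the cluster's OIDC issuer URL,
# then build the provider ARN using the account ID from STS.
OIDC_PROVIDER_NAME=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME \
  --query "cluster.identity.oidc.issuer" --output text | sed 's|^https://||')
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
OIDC_ARN=arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER_NAME}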
Also, although not strictly necessary, the StringEquals condition controls which service account can take up the role. For example, a service account called sa2 from the namespace default will not match the condition, so it will not be able to assume the role, and hence its pods will not get the corresponding permissions.
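One way to verify the whole setup end to end is to run a short-lived pod under the annotated service account and check the identity it receives (the pod name irsa-test is arbitrary, and kubectl access to the cluster is assumed):

# Launch a throwaway pod using the service account and call STS;
# the returned ARN should be the assumed IAM role, not the node's role.
kubectl run irsa-test -n mynamespace -it --rm \
  --image=amazon/aws-cli \
  --overrides='{"spec": {"serviceAccountName": "myserviceaccount"}}' \
  -- sts get-caller-identity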