Setup Dev Environment

Assumptions:

  • cluster-api repo is cloned to ${GOPATH}/src/sigs.k8s.io/cluster-api
  • All commands run from cluster-api repo root folder

Build a node image using a specific version

# assumes the kubernetes repo is cloned at ${GOPATH}/src/k8s.io/kubernetes
eval $(go env)
cd ${GOPATH}/src/k8s.io/kubernetes
git checkout v1.15.1
kinder build node-image

Create a cluster using the built image

# create a node [machine]
# 'latest' here is the image just built above
kinder create cluster --image kindest/node:latest
# Create a k8s cluster using kubeadm
kinder do kubeadm-init
# configure kubectl
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
# Go to cluster api repo for next tasks
cd ${GOPATH}/src/sigs.k8s.io/cluster-api

Relevant files and commands

tree config/rbac/
├── kustomization.yaml
├── leader_election_role_binding.yaml
├── leader_election_role.yaml
├── role_binding.yaml
└── role.yaml --------------------# Focus on this file

The rules of the ClusterRole located in config/rbac/role.yaml come from the kubebuilder RBAC markers in controllers/doc.go.
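
These markers are Go comments that controller-gen reads when generating the manifests. They look roughly like the following (illustrative excerpt; the authoritative list is whatever controllers/doc.go currently declares):

// controllers/doc.go (illustrative excerpt)
// +kubebuilder:rbac:groups=cluster.x-k8s.io,resources=clusters;clusters/status,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=cluster.x-k8s.io,resources=machines;machines/status,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups="",resources=nodes;events,verbs=get;list;watch;create;update;patch
package controllers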

To (re)generate config/rbac/role.yaml from these markers, run the following

make generate

The above command also regenerates the relevant CRDs in

tree config/crd/bases/

├── cluster.x-k8s.io_clusters.yaml
├── cluster.x-k8s.io_machinedeployments.yaml
├── cluster.x-k8s.io_machinesets.yaml
└── cluster.x-k8s.io_machines.yaml

Create the CRDs

kubectl apply -f config/crd/bases/

Build the manager

make manager

Run the manager locally

./bin/manager

# view the process and threads created by the manager
pstree -alp $(pgrep -x manager)

# Note: controller-runtime runs all the controllers as goroutines inside a single
# manager process, so pstree shows Go runtime threads, not one process per controller.

Note: Stop the local manager before continuing; the next steps run it inside the cluster instead.

Distributing RBAC rules into controllers

As described above, all the rules are currently defined in controllers/doc.go. We would like to split them so that the rules specific to each controller are declared at the beginning of that controller's file (a sketch of what this looks like follows the listing below). Note: test files can be ignored.

tree controllers/ -L 1
controllers/
├── BUILD.bazel
├── cluster_controller.go
├── cluster_controller_phases.go
├── doc.go
├── external
├── machine_controller.go
├── machine_controller_noderef.go
├── machine_controller_phases.go
├── machinedeployment_controller.go
├── machinedeployment_rolling.go
├── machinedeployment_sync.go
├── machineset_controller.go
├── machineset_controller_test.go
├── machineset_delete_policy.go
├── machineset_delete_policy_test.go
├── machineset_status.go
├── mdutil
├── noderefutil
├── remote
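
A sketch of what the split could look like, using machine_controller.go as an example. This is illustrative only; the exact marker lines to move are whatever controllers/doc.go declares for machines today.

// controllers/machine_controller.go (illustrative excerpt)
package controllers

// Rules specific to the Machine controller, moved out of controllers/doc.go.
// +kubebuilder:rbac:groups=cluster.x-k8s.io,resources=machines;machines/status,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups="",resources=nodes,verbs=get;list;watch

// ... existing MachineReconciler code stays unchanged ...

After each move, re-run make generate and diff config/rbac/role.yaml to confirm the aggregate rules did not change.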

Reference RBAC

The local manager run above uses the local kubeconfig and therefore bypasses the service account in question. We need to run the manager inside the Kubernetes cluster.

First, we run the manager with rules still in controllers/doc.go. Then we move the rules gradually without breaking anything. To that end, we do the following.

  • Build a manager
  • Build a Docker image out of it.
  • Move the image to the cluster
  • Deploy a pod with the image.
  • Monitor the log of the manager for any access control failures.
kubectl create ns system
# the following might need sudo
make manager
docker build -t manager:v1 .  # bump the tag for every test iteration
# remove any stale copy of the image from the node
docker exec kind-control-plane docker rmi manager:v1
# load the freshly built image into the node
kinder load docker-image manager:v1
# verify the image is present on the node and is the recent build
docker exec kind-control-plane docker images manager:v1

Note: The manager pod may not be scheduled if the cluster has no worker nodes. In that case, untaint the control-plane node so it can run there:

kubectl taint nodes --all node-role.kubernetes.io/master-

To deploy the manager, use config/manager/manager.yaml and make sure its container spec contains the following two lines:

image: manager:v1
imagePullPolicy: IfNotPresent

Create the manager

kubectl apply -f  config/manager/manager.yaml

Monitor logs

kubectl logs -f -n system $(kubectl get pods -n system -o name | grep manager)

Current (sample) output

E0819 19:48:23.233371       1 reflector.go:126] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to list *v1alpha2.Cluster: clusters.cluster.x-k8s.io is forbidden: User "system:serviceaccount:system:default" cannot list resource "clusters" in API group "cluster.x-k8s.io" at the cluster scope
E0819 19:48:23.272888       1 reflector.go:126] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to list *v1alpha2.MachineDeployment: machinedeployments.cluster.x-k8s.io is forbidden: User "system:serviceaccount:system:default" cannot list resource "machinedeployments" in API group "cluster.x-k8s.io" at the cluster scope
E0819 19:48:23.277532       1 reflector.go:126] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to list *v1alpha2.Machine: machines.cluster.x-k8s.io is forbidden: User "system:serviceaccount:system:default" cannot list resource "machines" in API group "cluster.x-k8s.io" at the cluster scope
E0819 19:48:23.278488       1 reflector.go:126] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to list *v1alpha2.MachineSet: machinesets.cluster.x-k8s.io is forbidden: User "system:serviceaccount:system:default" cannot list resource "machinesets" in API group "cluster.x-k8s.io" at the cluster scope
E0819 19:48:23.452609       1 leaderelection.go:306] error retrieving resource lock system/controller-leader-election-helper: configmaps "controller-leader-election-helper" is forbidden: User "system:serviceaccount:system:default" cannot get resource "configmaps" in API group "" in the namespace "system"
E0819 19:48:24.234519       1 reflector.go:126] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to list *v1alpha2.Cluster: clusters.cluster.x-k8s.io is forbidden: User "system:serviceaccount:system:default" cannot list resource "clusters" in API group "cluster.x-k8s.io" at the cluster scope
E0819 19:48:24.275411       1 reflector.go:126] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to list *v1alpha2.MachineDeployment: machinedeployments.cluster.x-k8s.io is forbidden: User "system:serviceaccount:system:default" cannot list resource "machinedeployments" in API group "cluster.x-k8s.io" at the cluster scope
E0819 19:48:24.279848       1 reflector.go:126] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to list *v1alpha2.Machine: machines.cluster.x-k8s.io is forbidden: User "system:serviceaccount:system:default" cannot list resource "machines" in API group "cluster.x-k8s.io" at the cluster scope
E0819 19:48:24.280691       1 reflector.go:126] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to list *v1alpha2.MachineSet: machinesets.cluster.x-k8s.io is forbidden: User "system:serviceaccount:system:default" cannot list resource "machinesets" in API group "cluster.x-k8s.io" at the cluster scope

Most of the access-forbidden errors are resolved by running:

kubectl apply -f config/rbac/

Some rules are still missing, as the remaining errors show:

E0819 20:02:18.094384       1 leaderelection.go:306] error retrieving resource lock system/controller-leader-election-helper: configmaps "controller-leader-election-helper" is forbidden: User "system:serviceaccount:system:default" cannot get resource "configmaps" in API group "" in the namespace "system"
E0819 20:02:22.439908       1 leaderelection.go:306] error retrieving resource lock system/controller-leader-election-helper: configmaps "controller-leader-election-helper" is forbidden: User "system:serviceaccount:system:default" cannot get resource "configmaps" in API group "" in the namespace "system"
E0819 20:02:25.528660       1 leaderelection.go:306] error retrieving resource lock system/controller-leader-election-helper: configmaps "controller-leader-election-helper" is forbidden: User "system:serviceaccount:system:default" cannot get resource "configmaps" in API group "" in the namespace "system"
E0819 20:02:27.637440       1 leaderelection.go:306] error retrieving resource lock system/controller-leader-election-helper: configmaps "controller-leader-election-helper" is forbidden: User "system:serviceaccount:system:default" cannot get resource "configmaps" in API group "" in the namespace "system"

Add missing rules for configmaps

Given the above setup and background, we iterate as follows.

A. Remove existing rules and manager

  • Delete the rules
  • Delete the pod (it will be re-created by the deployment)
kubectl delete -f config/rbac/
kubectl delete $(kubectl get pods -n system -o name) -n system

B. Change and generate new rules

# add a kubebuilder marker for configmaps (see the sketch below), then regenerate the manifests
make generate
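
The remaining errors above are about the leader-election configmap in the manager's namespace, so the marker to add is for configmaps in the core ("") API group. A hedged sketch (placed in controllers/doc.go for now, or wherever the split puts it); the verb list is an assumption based on what the leader-election client typically needs:

// Illustrative kubebuilder marker for the leader-election configmap lock.
// +kubebuilder:rbac:groups="",resources=configmaps,verbs=get;list;watch;create;update;patch;delete

After make generate, check that a configmaps rule appears in config/rbac/role.yaml before applying it in step C.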

C. Apply new rules

kubectl apply -f config/rbac/

D. Repeat A, B, and C as needed.

Shorten timeout for detecting absent rules

  • There seems to be a delay of over 4 minutes between removing the ClusterRole/ClusterRoleBinding and the manager logs starting to show errors, most likely because already-established watches keep working and the failures only surface when the reflectors re-list. A quicker way to check permissions follows below.
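
Rather than waiting for the manager's reflectors to notice, we can ask the API server directly whether the manager's service account holds a given permission. kubectl auth can-i list machines.cluster.x-k8s.io --as=system:serviceaccount:system:default does this from the command line; below is a minimal standalone sketch of the same check through a SubjectAccessReview, assuming a recent client-go (the file name and everything in it are illustrative, not part of the repo):

// cancheck.go - hypothetical helper, not part of cluster-api.
// Asks the API server whether a service account may perform a verb on a resource.
package main

import (
	"context"
	"fmt"
	"os"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Use the same kubeconfig exported earlier via KUBECONFIG.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Ask: may the manager's service account list Machines at the cluster scope?
	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User: "system:serviceaccount:system:default",
			ResourceAttributes: &authv1.ResourceAttributes{
				Group:    "cluster.x-k8s.io",
				Resource: "machines",
				Verb:     "list",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}

Running this (or the kubectl equivalent) right after kubectl delete -f config/rbac/ confirms immediately that the rule is gone, without watching the manager logs.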