How to test out Linkerd + Service topology in a kind cluster!

Linkerd service topology tutorial

Hi! Hopefully you have watched my monthly community meet-up presentation on how to use service topology in Linkerd. This is a quick guide that shows you how to set up a kind cluster, install linkerd, and then enable service topology for a service.

Prereqs:


At a minimum, we need kubectl, linkerd-cli, and kind installed. Click on the links to see install instructions; a quick version check follows the list.

  1. kind
  2. linkerd-cli >= edge-20.8.3
  3. kubectl
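Once everything is installed, a quick sanity check of the versions never hurts:

$ kind version
$ kubectl version --client
$ linkerd version --client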

Step 1: set-up


After all tools are installed, you need to create a kind cluster, install linkerd and create some deployments.

Using a kind cluster is not as straightforward as minikube: there are some extra steps to take in order to label the nodes and enable the feature gates. You can use this example configuration or, alternatively, create your own by looking at the docs.
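If that link ever goes stale, here is a minimal sketch of what such a config looks like. It assumes kind's v1alpha4 config API, enables the feature gates that topologyKeys depends on in Kubernetes 1.18 (ServiceTopology is still alpha there), and labels the workers through kubelet's node-labels flag; the region/zone values are illustrative:

# my-config.yaml -- a sketch, adjust labels to taste
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  ServiceTopology: true
  EndpointSlice: true
nodes:
- role: control-plane
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "topology.kubernetes.io/region=east,topology.kubernetes.io/zone=east-1a"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "topology.kubernetes.io/region=east,topology.kubernetes.io/zone=east-1c"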

Once you have the configuration sorted out, start up the cluster: kind create cluster --config my-config.yaml. You should see the following output:

Creating cluster "kind" ...
 βœ“ Ensuring node image (kindest/node:v1.18.2) πŸ–Ό
 βœ“ Preparing nodes πŸ“¦ πŸ“¦ πŸ“¦

After the cluster creation step finishes, verify that it is up and running. I also recommend checking that the nodes are labeled properly, so you can avoid frustration later on:

$ kubectl get nodes
NAME                 STATUS     ROLES    AGE   VERSION
kind-control-plane   Ready      master   43s   v1.18.2
kind-worker          NotReady   <none>   8s    v1.18.2
kind-worker2         NotReady   <none>   7s    v1.18.2

$ kubectl get node kind-worker2 -o yaml | grep kubernetes.io
...
kubernetes.io/hostname: kind-worker2
topology.kubernetes.io/region: east
topology.kubernetes.io/zone: east-1c
...

The next step is to install linkerd and create some deployments. If you haven't already, I suggest working through the getting started docs until the end: they show you how to use linkerd, and you'll also get to install and mesh some deployments. I will be using the emojivoto example app throughout the rest of this tutorial. You can deploy whatever application(s) you want, provided you have two services that talk to each other, so you can see what is going on.
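If you skipped the guide, the control-plane install boils down to this one-liner (the guide covers the surrounding checks in detail):

# Install the linkerd control plane into the linkerd namespace
$ linkerd install | kubectl apply -f -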

Verify linkerd is up and running:

$ linkerd check
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
√ can initialize the client
√ can query the control plane API

linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ control plane PodSecurityPolicies exist

linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor

linkerd-api
-----------
√ control plane pods are ready
√ control plane self-check
√ [kubernetes] control plane can talk to Kubernetes
√ [prometheus] control plane can talk to Prometheus
√ tap api service is running

linkerd-version
---------------
√ can determine the latest version
β€Ό cli is up-to-date
    is running version 20.8.4 but the latest edge version is 20.9.3
    see https://linkerd.io/checks/#l5d-version-cli for hints

control-plane-version
---------------------
β€Ό control plane is up-to-date
    is running version 20.8.4 but the latest edge version is 20.9.3
    see https://linkerd.io/checks/#l5d-version-control for hints
√ control plane and cli versions match

linkerd-addons
--------------
√ 'linkerd-config-addons' config map exists

linkerd-prometheus
------------------
√ prometheus add-on service account exists
√ prometheus add-on config map exists
√ prometheus pod is running

linkerd-grafana
---------------
√ grafana add-on service account exists
√ grafana add-on config map exists
√ grafana pod is running

Status check results are √

# If you haven't already, deploy some stuff in your cluster
$ curl -sL https://run.linkerd.io/emojivoto.yml \
    | linkerd inject - \
    | kubectl apply -f -

Cool! With this out of the way, we are ready to get started.

Step 2: using EndpointSlices


At the moment, linkerd is installed without EndpointSlice support. We turned the feature gate on, though, so Kubernetes will create slices for us automatically -- linkerd just won't use them!

Let's verify that the slices are in place:

$ kubectl get endpointslices -n linkerd
NAME                           ADDRESSTYPE   PORTS       ENDPOINTS    AGE
linkerd-controller-api-gl5lz   IPv4          8085        10.244.1.3   8m29s
linkerd-dst-fp5ck              IPv4          8086        10.244.1.2   8m29s
linkerd-grafana-q7z22          IPv4          3000        10.244.2.5   8m27s
linkerd-identity-j7pxt         IPv4          8080        10.244.2.2   8m29s
linkerd-prometheus-bkgnc       IPv4          9090        10.244.2.6   8m27s
linkerd-proxy-injector-tcfx8   IPv4          8443        10.244.2.4   8m28s
linkerd-sp-validator-dp2qr     IPv4          8443        10.244.1.4   8m28s
linkerd-tap-vj7t9              IPv4          8089,8088   10.244.1.5   8m28s
linkerd-web-fjz5b              IPv4          8084,9994   10.244.2.3   8m28s

We can also verify that the destination service does not use slices by describing the pod:

$ kubectl describe po linkerd-destination-7d4d4474f5-t8992 -n linkerd | grep "enable-endpoint-slices"
-enable-endpoint-slices=false

Our next step will be to upgrade linkerd to use slices:

$ linkerd upgrade --enable-endpoint-slices | kubectl apply --prune -l linkerd.io/control-plane-ns=linkerd -f -

In case you're wondering: yes, you can also install linkerd with EndpointSlice support from the get-go. For the purpose of this quick demo, I wanted to show that upgrading is also possible for those of you who have already productionised linkerd.
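For a fresh cluster, the same flag should also work at install time (assuming your edge release supports it, as the upgrade above does):

$ linkerd install --enable-endpoint-slices | kubectl apply -f -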

Let's check again if slices are enabled:

$ kubectl describe po linkerd-destination-65456bff9f-lnlmh -n linkerd | grep "enable-endpoint-slices"
  -enable-endpoint-slices=true

So far, so good. You won't see any changes yet because we haven't added our topology preference; that's our next step.

Step 3: adding a topology preference

First, let's scale up our services and add some affinity so that our pods are spread across the nodes. We'll target the emoji deployment: in emojivoto, the web service talks to the emoji service, which is why we picked it specifically. Apply the following snippet to your emoji deployment and change the replicas to any number >= 3:

       affinity:
         nodeAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - preference:
               matchExpressions:
               - key: kubernetes.io/hostname
                 operator: In
                 values:
                 - kind-worker
             weight: 10
           - preference:
               matchExpressions:
               - key: kubernetes.io/hostname
                 operator: In
                 values:
                 - kind-worker2
             weight: 90

Note: if you used any other configuration, don't forget to change the node names!
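If you would rather not hand-craft a manifest, something like this does both changes (a sketch; kubectl edit drops you into the deployment, where you paste the affinity block under spec.template.spec):

$ kubectl -n emojivoto scale deploy/emoji --replicas=3
$ kubectl -n emojivoto edit deploy/emoji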

We should now be able to see our pods spread out:

$ kubectl get pods -n emojivoto -o wide
NAME                        READY   STATUS    RESTARTS   AGE    IP            NODE           NOMINATED NODE   READINESS GATES
emoji-5d78b7946-5ls52       2/2     Running   0          31m    10.244.2.15   kind-worker2   <none>           <none>
emoji-5d78b7946-kcv7r       2/2     Running   0          32m    10.244.1.16   kind-worker    <none>           <none>
emoji-5d78b7946-r2vfn       2/2     Running   0          31m    10.244.2.14   kind-worker2   <none>           <none>
web-5d69bcfdb7-wcxzf        2/2     Running   0          44s    10.244.1.17   kind-worker    <none>           <none>

Before we proceed, let's see which pods communicate with our emoji pods. linkerd comes with the neat top command, which we can use to inspect live traffic:

$ linkerd top deploy/emoji
Source                Destination            Method      Path                                         Count    Best   Worst    Last  Success Rate
web-5d69bcfdb7-wcxzf  emoji-5d78b7946-kcv7r  POST        /emojivoto.v1.EmojiService/FindByShortcode       6     1ms     2ms     2ms       100.00%
web-5d69bcfdb7-wcxzf  emoji-5d78b7946-r2vfn  POST        /emojivoto.v1.EmojiService/ListAll               5     1ms     2ms     1ms       100.00%
web-5d69bcfdb7-wcxzf  emoji-5d78b7946-r2vfn  POST        /emojivoto.v1.EmojiService/FindByShortcode       3     1ms     2ms     1ms       100.00%
web-5d69bcfdb7-wcxzf  emoji-5d78b7946-5ls52  POST        /emojivoto.v1.EmojiService/ListAll               3     1ms     2ms     1ms       100.00%
web-5d69bcfdb7-wcxzf  emoji-5d78b7946-kcv7r  POST        /emojivoto.v1.EmojiService/ListAll               2     1ms     1ms     1ms       100.00%
web-5d69bcfdb7-wcxzf  emoji-5d78b7946-5ls52  POST        /emojivoto.v1.EmojiService/FindByShortcode       1     1ms     1ms     1ms       100.00%

So far, the web pod is sending requests to all of our emoji pods. Let's see what happens when we apply a topology filter. Add the following snippet to your emoji service spec:

topologyKeys:
- "kubernetes.io/hostname"

I like to do this in place with kubectl edit instead of using kubectl apply:

$ kubectl get services -n emojivoto
$ kubectl edit svc emoji-svc -n emojivoto
service/emoji-svc edited
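For context, after the edit the Service should look roughly like this (a sketch: the selector and port are based on the emojivoto manifests and trimmed for brevity, so double-check against your copy):

apiVersion: v1
kind: Service
metadata:
  name: emoji-svc
  namespace: emojivoto
spec:
  selector:
    app: emoji-svc
  ports:
  - name: grpc
    port: 8080
    targetPort: 8080
  # With a single key and no "*" fallback, traffic is dropped when no
  # endpoint matches the client's node
  topologyKeys:
  - "kubernetes.io/hostname"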

Let's top again!

$ linkerd top deploy/emoji -n emojivoto
Source                Destination            Method      Path                                         Count    Best   Worst    Last  Success Rate
web-5d69bcfdb7-wcxzf  emoji-5d78b7946-kcv7r  POST        /emojivoto.v1.EmojiService/ListAll               8   867Β΅s     6ms     1ms       100.00%
web-5d69bcfdb7-wcxzf  emoji-5d78b7946-kcv7r  POST        /emojivoto.v1.EmojiService/FindByShortcode       8   949Β΅s     2ms     1ms       100.00%

Turns out web is only communicating with emoji-5d78b7946-kcv7r. If we do a kubectl get pods -n emojivoto -o wide, we can see which nodes the pods are running on:

NAME                        READY   STATUS    RESTARTS   AGE    IP            NODE           NOMINATED NODE   READINESS GATES
emoji-5d78b7946-kcv7r       2/2     Running   0          45m    10.244.1.16   kind-worker    <none>           <none>
web-5d69bcfdb7-wcxzf        2/2     Running   0          12m    10.244.1.17   kind-worker    <none>           <none>
emoji-5d78b7946-r2vfn       2/2     Running   0          44m    10.244.2.14   kind-worker2   <none>           <none>
emoji-5d78b7946-5ls52       2/2     Running   0          44m    10.244.2.15   kind-worker2   <none>           <none>

Both pods run on the same node, kind-worker, whereas the two pods that were not considered are both running on kind-worker2.

You can experiment with different topology labels to build your own simple "traffic policies" based on location; one example follows below. I'll leave the rest of the exploring to you :)
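For instance, a preference chain that tries the same node first, then the same zone, and finally falls back to any endpoint looks like this ("*" matches any endpoint and may only appear as the last entry):

topologyKeys:
- "kubernetes.io/hostname"
- "topology.kubernetes.io/zone"
- "*"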

In closing, the Kubernetes docs on Service Topology and EndpointSlices, as well as the linkerd documentation, are good places to learn more.
