Hi, hopefully you have watched my monthly community meet-up presentation on how to use service topology in Linkerd. This is a quick guide that shows you how to set up a kind cluster, install linkerd, and then enable service topology for a service.
At a minimum we need to have kubectl, linkerd-cli, and kind installed. Click on the links to see install instructions.
After all the tools are installed, you need to create a kind cluster, install linkerd, and create some deployments.
Using a kind cluster is not as straightforward as minikube; there are some extra steps to take in order to label the nodes and enable the feature gates.
You can use this example configuration or, alternatively, create your own by looking at the docs.
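In case it helps, here is a minimal sketch of such a configuration. It turns on the EndpointSlice and ServiceTopology feature gates and labels the workers with topology labels; the region/zone values (east, east-1a, east-1c) are just the ones I use in this walkthrough, so adjust them to taste:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
# Feature gates needed for this demo: slices + topology-aware routing
featureGates:
  EndpointSlice: true
  ServiceTopology: true
nodes:
- role: control-plane
- role: worker
  # assumed zone for the first worker; pick whatever you like
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "topology.kubernetes.io/region=east,topology.kubernetes.io/zone=east-1a"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "topology.kubernetes.io/region=east,topology.kubernetes.io/zone=east-1c"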
Once you have the configuration sorted out, start up the cluster: kind create cluster --config my-config.yaml. You should see the following output:
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.18.2) 🖼
 ✓ Preparing nodes 📦 📦 📦
After the cluster creation step finishes, verify that it is up and running. I also recommend checking that the nodes are labeled properly, so you can avoid frustration later on:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready master 43s v1.18.2
kind-worker NotReady <none> 8s v1.18.2
kind-worker2 NotReady <none> 7s v1.18.2
$ kubectl get node kind-worker2 -o yaml | grep kubernetes.io
...
kubernetes.io/hostname: kind-worker2
topology.kubernetes.io/region: east
topology.kubernetes.io/zone: east-1c
...
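If a node is missing a label, you don't have to recreate the cluster; you can always add or fix labels after the fact with kubectl. For example, assuming you want kind-worker in zone east-1a:

$ kubectl label node kind-worker topology.kubernetes.io/region=east topology.kubernetes.io/zone=east-1a --overwrite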
The next step is to install linkerd and create some deployments. If you haven't already, I suggest you work through the getting started docs until the end; this will show you how to use linkerd and you'll also get to install and mesh some deployments. I will be using the example emojivoto app throughout the rest of this tutorial, but you can deploy whatever application(s) you want, provided you have two services that talk to each other so you can see what is going on.
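For reference, the install step from the getting started guide boils down to rendering the manifests with the CLI and applying them (a default install is assumed here; your flags may differ):

$ linkerd install | kubectl apply -f -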
Verify linkerd is up and running:
$ linkerd check
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
√ can initialize the client
√ can query the control plane API

linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ control plane PodSecurityPolicies exist

linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor

linkerd-api
-----------
√ control plane pods are ready
√ control plane self-check
√ [kubernetes] control plane can talk to Kubernetes
√ [prometheus] control plane can talk to Prometheus
√ tap api service is running

linkerd-version
---------------
√ can determine the latest version
‼ cli is up-to-date
    is running version 20.8.4 but the latest edge version is 20.9.3
    see https://linkerd.io/checks/#l5d-version-cli for hints

control-plane-version
---------------------
‼ control plane is up-to-date
    is running version 20.8.4 but the latest edge version is 20.9.3
    see https://linkerd.io/checks/#l5d-version-control for hints
√ control plane and cli versions match

linkerd-addons
--------------
√ 'linkerd-config-addons' config map exists

linkerd-prometheus
------------------
√ prometheus add-on service account exists
√ prometheus add-on config map exists
√ prometheus pod is running

linkerd-grafana
---------------
√ grafana add-on service account exists
√ grafana add-on config map exists
√ grafana pod is running

Status check results are √
# If you haven't already, deploy some stuff in your cluster
$ curl -sL https://run.linkerd.io/emojivoto.yml \
| linkerd inject - \
| kubectl apply -f -
Cool! With this out of the way, we are ready to get started.
At the moment, linkerd has been installed without EndpointSlice support. We turned the feature gate on though, so Kubernetes will create slices for us automatically; linkerd just won't use them! Let's verify that the slices are there:
$ kubectl get endpointslices -n linkerd
NAME ADDRESSTYPE PORTS ENDPOINTS AGE
linkerd-controller-api-gl5lz IPv4 8085 10.244.1.3 8m29s
linkerd-dst-fp5ck IPv4 8086 10.244.1.2 8m29s
linkerd-grafana-q7z22 IPv4 3000 10.244.2.5 8m27s
linkerd-identity-j7pxt IPv4 8080 10.244.2.2 8m29s
linkerd-prometheus-bkgnc IPv4 9090 10.244.2.6 8m27s
linkerd-proxy-injector-tcfx8 IPv4 8443 10.244.2.4 8m28s
linkerd-sp-validator-dp2qr IPv4 8443 10.244.1.4 8m28s
linkerd-tap-vj7t9 IPv4 8089,8088 10.244.1.5 8m28s
linkerd-web-fjz5b IPv4 8084,9994 10.244.2.3 8m28s
We can also verify that the destination service does not use slices by describing the pod:
$ kubectl describe po linkerd-destination-7d4d4474f5-t8992 -n linkerd | grep -enable-endpoint-slices
-enable-endpoint-slices=false
Our next step will be to upgrade linkerd to use slices:
$ linkerd upgrade --enable-endpoint-slices | kubectl apply --prune -l linkerd.io/control-plane-ns=linkerd -f -
If you're wondering, you can also install linkerd with EndpointSlice support from the get-go (see the example below). For the purpose of this quick demo, I wanted to show you that upgrading is also possible for those of you who have productionised linkerd already.
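For completeness, a fresh install with slices enabled from the start would look something like this, assuming your version of the CLI supports the same flag as the upgrade command above:

$ linkerd install --enable-endpoint-slices | kubectl apply -f -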
Let's check again if slices are enabled:
$ kubectl describe po linkerd-destination-65456bff9f-lnlmh -n linkerd | grep -enable-endpoint-slices
-enable-endpoint-slices=true
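If you're curious, you can also peek at the slices Kubernetes created for the demo app; this is what the destination service will be reading from now on (assuming you deployed emojivoto into the emojivoto namespace as above):

$ kubectl get endpointslices -n emojivoto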
So far, so good. You won't see any changes yet because we haven't added our topology preference; that's our next step.
First, let's scale up our services and add some affinity so that our pods are spread across the nodes. Using the emojivoto application, we'll target the emoji deployment.
In the emojivoto example, the web service talks to the emoji service, which is why we picked it specifically.
Apply the following snippet to your emoji deployment and change the replicas to any number >= 3 (see the example commands after the snippet):
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - preference:
        matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kind-worker
      weight: 10
    - preference:
        matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kind-worker2
      weight: 90
Note: if you have used any other configuration, don't forget to change the node names!
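If you'd rather not maintain a full manifest, a quick in-place way to do this (assuming the deployment is called emoji and lives in the emojivoto namespace, as in the demo app) is:

$ kubectl -n emojivoto edit deploy/emoji    # paste the affinity block under spec.template.spec
$ kubectl -n emojivoto scale deploy/emoji --replicas=3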
We should now be able to see our pods spread out:
$ kubectl get pods -n emojivoto
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
emoji-5d78b7946-5ls52 2/2 Running 0 31m 10.244.2.15 kind-worker2 <none> <none>
emoji-5d78b7946-kcv7r 2/2 Running 0 32m 10.244.1.16 kind-worker <none> <none>
emoji-5d78b7946-r2vfn 2/2 Running 0 31m 10.244.2.14 kind-worker2 <none> <none>
web-5d69bcfdb7-wcxzf 2/2 Running 0 44s 10.244.1.17 kind-worker <none> <none>
Before we proceed, let's see which pods communicate with our emoji pods; linkerd comes with a neat top command that we can use to inspect live traffic:
$ linkerd top deploy/emoji -n emojivoto
Source Destination Method Path Count Best Worst Last Success Rate
web-5d69bcfdb7-wcxzf emoji-5d78b7946-kcv7r POST /emojivoto.v1.EmojiService/FindByShortcode 6 1ms 2ms 2ms 100.00%
web-5d69bcfdb7-wcxzf emoji-5d78b7946-r2vfn POST /emojivoto.v1.EmojiService/ListAll 5 1ms 2ms 1ms 100.00%
web-5d69bcfdb7-wcxzf emoji-5d78b7946-r2vfn POST /emojivoto.v1.EmojiService/FindByShortcode 3 1ms 2ms 1ms 100.00%
web-5d69bcfdb7-wcxzf emoji-5d78b7946-5ls52 POST /emojivoto.v1.EmojiService/ListAll 3 1ms 2ms 1ms 100.00%
web-5d69bcfdb7-wcxzf emoji-5d78b7946-kcv7r POST /emojivoto.v1.EmojiService/ListAll 2 1ms 1ms 1ms 100.00%
web-5d69bcfdb7-wcxzf emoji-5d78b7946-5ls52 POST /emojivoto.v1.EmojiService/FindByShortcode 1 1ms 1ms 1ms 100.00%
So far, the web pod is sending requests to all of our emoji pods. Let's see what happens when we apply a topology filter. Add the following snippet to your emoji service spec:
topologyKeys:
- "kubernetes.io/hostname"
I like to do this in place instead of using kubectl apply:
$ kubectl get services -n emojivoto
$ kubectl edit svc emoji-svc -n emojivoto
service/emoji-svc edited
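To double-check that the edit stuck, you can grep for the new field in the live object:

$ kubectl get svc emoji-svc -n emojivoto -o yaml | grep -A1 topologyKeys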
Let's top again!
$ linkerd top deploy/emoji -n emojivoto
Source Destination Method Path Count Best Worst Last Success Rate
web-5d69bcfdb7-wcxzf emoji-5d78b7946-kcv7r POST /emojivoto.v1.EmojiService/ListAll 8 867µs 6ms 1ms 100.00%
web-5d69bcfdb7-wcxzf emoji-5d78b7946-kcv7r POST /emojivoto.v1.EmojiService/FindByShortcode 8 949µs 2ms 1ms 100.00%
Turns out web is only communicating with emoji-5d78b7946-kcv7r. If we do a kubectl get pods -o wide, we can see which nodes the pods are running on:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
emoji-5d78b7946-kcv7r 2/2 Running 0 45m 10.244.1.16 kind-worker <none> <none>
web-5d69bcfdb7-wcxzf 2/2 Running 0 12m 10.244.1.17 kind-worker <none> <none>
emoji-5d78b7946-r2vfn 2/2 Running 0 44m 10.244.2.14 kind-worker2 <none> <none>
emoji-5d78b7946-5ls52 2/2 Running 0 44m 10.244.2.15 kind-worker2 <none> <none>
Both pods run on the same node, kind-worker, whereas the two pods that were not considered are both running on kind-worker2.
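If you want to go further, a preference list can contain more than one key; Kubernetes walks it in order and "*" (which must come last) acts as a catch-all. A sketch using the standard topology labels:

topologyKeys:
- "kubernetes.io/hostname"          # prefer endpoints on the same node
- "topology.kubernetes.io/zone"     # then the same zone
- "topology.kubernetes.io/region"   # then the same region
- "*"                               # finally, any available endpoint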
You can experiment with different topology labels to make your own simple "traffic policies" based on location. I'll leave the rest of the exploring to you :)
In closing, you can have a look at the following links to learn more: