This is a rough outline of how we successfully did an in-place control + data plane upgrade from Istio `1.4.7` -> `1.5.4` via the official Helm charts.
The upgrade was:

- applied via scripting/automation
- on a mesh using:
  - mTLS
  - Istio RBAC via `AuthorizationPolicy`
  - telemetry v1
  - tracing enabled, but Jaeger not deployed via the istio chart
  - an istio ingress gateway + a secondary istio ingress gateway
- done with active traffic flowing through, without any observed increase in error rates
This ignores anything specifically mentioned in the upgrade notes. Gotchas we hit:
- Bug in RBAC backward compatibility with `1.4` in `1.5.0` -> `1.5.2`, fixed in `1.5.3`
- Issue with the visibility of `ServiceEntry`s being scoped using the `Sidecar` resource - istio/istio#24251, subsequently added to the upgrade notes
- All traffic ports are now captured by default; this caused our non-mTLS metrics ports to start enforcing mTLS, which they previously did not do on `1.4.7`
  - Fix: exclude the metrics ports via sidecar annotations, e.g. `traffic.sidecar.istio.io/excludeInboundPorts: "9080, 15090"` (see the sketch after this list)
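For completeness, this is roughly how the exclusion lands on a workload - a minimal sketch using `kubectl patch`, where the `my-service` deployment and `default` namespace are placeholders rather than anything from our actual setup:

```bash
# Add the exclusion to the pod template (not the Deployment metadata) so the
# injected sidecar picks it up; deployment name and namespace are placeholders.
kubectl patch deployment/my-service -n default --type merge -p '
spec:
  template:
    metadata:
      annotations:
        traffic.sidecar.istio.io/excludeInboundPorts: "9080, 15090"
'

# Patching the pod template triggers a rollout, after which the new sidecars
# no longer capture (or enforce mTLS on) the metrics ports.
kubectl rollout status deployment/my-service -n default --timeout 120s
```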
```bash
#!/usr/bin/env bash
# In 1.4 Galley manages (and owns) the webhook configuration; in 1.5 Helm manages it
# and Galley only patches it dynamically, without `ownerReferences` - so the presence
# of `ownerReferences` tells us whether we have upgraded Galley already
if kubectl get validatingwebhookconfiguration/istio-galley -n istio-system -o yaml | grep -q ownerReferences; then
  echo "Detected 1.4 installation - preparing Helm upgrade to 1.5.x by deleting galley-managed webhook..."

  # Disable webhook reconciliation so we can delete the webhook
  kubectl get deployment/istio-galley -n istio-system -o yaml | \
    sed 's/enable-reconcileWebhookConfiguration=true/enable-reconcileWebhookConfiguration=false/' | \
    kubectl apply -f -

  # Wait for Galley to come back up
  kubectl rollout status deployment/istio-galley -n istio-system --timeout 60s

  # Delete the webhook
  kubectl delete validatingwebhookconfiguration/istio-galley -n istio-system

  # Now we can proceed to the helm upgrade to 1.5, which will recreate the webhook
fi
```
Not to be taken literally - this is pseudo-script...
```bash
helm upgrade --install --wait --atomic --cleanup-on-fail istio-init istio-init-1.5.4.tgz
# scripting to wait for the istio-init jobs to complete goes here
helm upgrade --install --wait --atomic --cleanup-on-fail istio istio-1.5.4.tgz
# scripting to bounce `Deployment`s for injected services goes here
```
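To flesh out those two placeholder comments, here is a rough sketch of what that glue scripting could look like - which jobs to wait on and which namespaces to bounce are assumptions you would need to adapt to your own install:

```bash
# Between the two chart upgrades: wait for the istio-init jobs (CRD installation)
# to finish before upgrading the main chart.
for job in $(kubectl get jobs -n istio-system -o name); do
  kubectl wait --for=condition=complete "${job}" -n istio-system --timeout 300s
done

# After the main chart upgrade: bounce injected workloads so their sidecars are
# re-created at the new proxy version. The namespace list is a placeholder -
# substitute the namespaces of your injected services.
for ns in team-a team-b; do
  for deploy in $(kubectl get deployments -n "${ns}" -o name); do
    kubectl rollout restart "${deploy}" -n "${ns}"
    kubectl rollout status "${deploy}" -n "${ns}" --timeout 300s
  done
done
```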
We noticed issues with ingress gateways coming up during the control plane upgrade. There appears to have been some kind of race condition when starting new `1.5.4` `ingressgateway` instances while parts of the `1.4.7` control plane were still running - perhaps a problem with a new ingressgateway talking to an old-version Pilot?
Symptom

- lots of weird errors in the new-version ingressgateway logs about invalid configuration relating to tracing being received from Pilot
- a subset of new-version ingress gateways would not become ready, which could cause the `helm upgrade --wait` to get stuck
Fix

- delete the pods that fail to become ready (manual intervention in our case, although technically possible to automate - see the sketch below)
- the automatically re-created pods always came up ready, and the `helm upgrade` then ran to completion
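If you did want to automate that intervention, something along these lines could work - a rough sketch only, assuming the stock `app=istio-ingressgateway` label and the `istio-system` namespace:

```bash
# While the ingressgateway rollout stays stuck, delete pods that never become
# Ready so the Deployment re-creates them against the upgraded control plane.
until kubectl rollout status deployment/istio-ingressgateway -n istio-system --timeout 120s; do
  for pod in $(kubectl get pods -n istio-system -l app=istio-ingressgateway \
      -o jsonpath='{range .items[?(@.status.containerStatuses[0].ready==false)]}{.metadata.name}{"\n"}{end}'); do
    echo "Deleting not-ready gateway pod ${pod}..."
    kubectl delete pod "${pod}" -n istio-system
  done
done
```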
@jlcrow

Hmm, do you recall which resources the `1.5` upgrade timed out on? Might be able to trigger some memories. Wonder if it was waiting for a `Deployment` of some sort or another to become ready and it never did (e.g. due to the issues I noted above in ingressgateways).

This is rather odd; never saw anything like this. I can't really understand what might cause it, and we haven't had certificate issues that I can recall since we switched to node-agent/SDS, a long time prior to its retirement in `1.6`. The mTLS and cert rotation has been the most stable bit for us. Did you use SDS on your `1.4` install? Perhaps it doesn't like co-existing with a `1.6` control plane somehow. Again, skip-version upgrades and peaceful co-existence are a pretty scary and untested endeavour (as I understand it), so I guess anything could break :-/

Yeah, I hear ya. We've been using Istio since `1.0`, and upgrades were pretty smooth through to `1.4` (albeit with keeping on top of the API deprecations where we were using alpha-ish stuff such as RBAC; and our zero-downtime/zero-traffic-loss focus only really started around our `1.3` usage), but `1.5` and then `1.6` have honestly been a nightmare of breaking changes, upgrade bugs & churn in supported deployment tooling (and at times weaknesses in the "supported" deployment tooling's upgrade path, e.g. Helm install deprecated/unsupported in `1.5` with no actual migration path to `istiod` with `istioctl` being supported).

The payoff/light at the end of the tunnel is the introduction of control plane canarying in `1.6`, which should hopefully take a lot of the fear/stress out of the process in production.

We're about to do our `1.7` upgrade, so I hope that is smoother (we were blocked earlier on needing to get beyond Kubernetes 1.15).