Test NFSv3 file locking with persistent locks across multiple pods using Trident ONTAP-NAS storage.
kubectl apply -k .
kubectl exec -it deployment/nfs-test-nfs-lock-tester -- nfs-lock.sh test myfile.txt

#!/bin/bash
# Script to reproduce the read-only clone export policy bug in Trident nas-economy driver
#
# Bug Description:
# When a read-only clone is deleted on the same node as its source volume, the unpublish
# operation incorrectly removes export policy rules that are still needed by the source.
# This is NOT a race condition - it's a deterministic logic bug where the code fails to
# check for remaining publications of the source volume before removing export rules.
#
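A condensed sketch of the reproduction flow implied by the description above (all manifest and resource names are hypothetical):

# 1. Create a source PVC and a pod that mounts it on a given node
kubectl apply -f source-pvc.yaml -f source-pod.yaml

# 2. Create a read-only clone of the source PVC and a pod mounting it on the same node
kubectl apply -f clone-pvc.yaml -f clone-pod.yaml

# 3. Delete the clone; the buggy unpublish path also removes the export policy
#    rule that the still-published source volume needs
kubectl delete pod clone-pod
kubectl delete pvc clone-pvc

# 4. I/O from the source pod now fails, because its export rule is gone
kubectl exec source-pod -- sh -c 'echo probe > /mnt/data/probe.txt'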
The following manifests show how to run asciinema-server on Kubernetes, with a caching nginx reverse proxy (which also performs the requests against the S3 endpoint, for cases where the S3 server is not publicly accessible).
For more details regarding the S3 proxying, check https://github.com/asciinema/asciinema-server/pull/363/files#diff-10557452cffb1028618b3bc1bfdd9f01d79642fe2823dc078f5208a0121b8495
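A minimal sketch of the nginx side (hostnames, ports, and cache parameters are illustrative; the linked PR has the authoritative details):

proxy_cache_path /var/cache/nginx keys_zone=asciinema:10m max_size=1g;

server {
    listen 80;

    # Fetch recordings from the private S3 endpoint and cache them
    location /s3/ {
        proxy_pass        https://s3.internal.example.com/asciinema/;
        proxy_cache       asciinema;
        proxy_cache_valid 200 1h;
    }

    # Everything else goes to the asciinema-server service
    location / {
        proxy_pass http://asciinema-server:4000;
    }
}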
Kubernetes CronJob to dump, compress (with gzip --rsyncable), and finally
back up your DBs to an S3 endpoint with restic.
Advantages:
- uses the mariadb-dump binary
- uses gzip's --rsyncable option (details here), which makes gzip "regularly reset its compression algorithm to what it was at the beginning of the file", so that changes to a portion of the file do not alter the whole compressed output, which permits making incremental backups (see the sketch below).
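A minimal sketch of the job's command (repository URL, paths, and dump flags are illustrative; restic reads its repository password and S3 credentials from environment variables):

#!/bin/bash
set -o pipefail

# Dump all databases, compress with rsync-friendly gzip, then back up with restic
mariadb-dump --all-databases --single-transaction \
  | gzip --rsyncable > /backups/all-databases.sql.gz

restic -r s3:https://s3.example.com/db-backups backup /backups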
With bpftrace on Linux, it's quite simple to monitor when a specific binary is run, and to print its args and the environment variables passed to it.
This can be done with the following bpftrace "program":
tracepoint:syscalls:sys_enter_execve
/str(args->filename) == "/etc/network/if-up.d/resolved"/
{
	join(args->argv);
	join(args->envp);
}

When patching some Kubernetes control-plane nodes on which etcd also happens to be running, you might want to gracefully transfer the leadership of the etcd cluster away from the node before eventually patching it.
This can be achieved with a script along the following lines (the move-leader target below simply picks the first other cluster member; adapt it to your needs), provided you specify the adequate environment variables in /etc/profile.d/etcd-all:
set -o pipefail && \
source /etc/profile.d/etcd-all && \
AM_LEADER=$(etcdctl endpoint status | grep "$(hostname)" | cut -d ',' -f 5 | tr -d ' ') && \
if [ "$AM_LEADER" = "true" ]; then
  etcdctl move-leader "$(etcdctl member list | grep -v "$(hostname)" | head -n 1 | cut -d ',' -f 1)"
fi
If you ever tried to delete more than a few hundred files on S3, you might have noticed how slow it was.
To speed up the deletion, we can use a few bash commands to parallelize it, and we can also use a JSON description of the objects we want to delete.
Concretely, this permits us to delete e.g. 1000 files with a single S3 API request.
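A minimal sketch of the idea (bucket name, prefix, and parallelism are illustrative), relying on the fact that the DeleteObjects API accepts up to 1000 keys per call:

#!/bin/bash
export BUCKET=my-bucket

# List all keys under a prefix and split them into batch files of 1000 keys each
aws s3api list-objects-v2 --bucket "$BUCKET" --prefix logs/ \
    --query 'Contents[].Key' --output text | tr '\t' '\n' | split -l 1000 - /tmp/keys.

# Turn one batch file into the JSON payload expected by delete-objects, and send it
delete_batch() {
  aws s3api delete-objects --bucket "$BUCKET" --delete \
    "$(jq -R -s '{Objects: [split("\n")[] | select(length > 0) | {Key: .}], Quiet: true}' "$1")"
}
export -f delete_batch

# Run up to 8 deletions in parallel, i.e. up to 8000 objects per round-trip batch
ls /tmp/keys.* | xargs -P 8 -n 1 bash -c 'delete_batch "$0"'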
package main

import (
	"fmt"
	"os"
	"strings"
	"time"

	"github.com/bitfield/script"
)
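github.com/bitfield/script provides shell-like pipelines in Go; a hypothetical main using the imports above could look like this (purely illustrative):

func main() {
	start := time.Now()

	// Run a command and capture its output, shell-pipeline style
	out, err := script.Exec("uname -a").String()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	fmt.Print(strings.ToUpper(out))
	fmt.Println("took", time.Since(start))
}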
#!/bin/bash
# Read stdin if it is a pipe, then send it to the clipboard through lemonade
stdin="$([[ -p /dev/stdin ]] && cat -)"
lemonade copy "$stdin"
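For example, assuming the script is saved as clip somewhere on your PATH:

kubectl get pods | clip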
#fish
function yqblank
    # Apply a yq expression without losing blank lines: diff -B ignores
    # blank-line-only changes, so patch re-applies only yq's real edits
    yq eval "$argv[1]" "$argv[2]" | diff -B "$argv[2]" - | patch "$argv[2]" -o -
end

#bash
yqblank() {
  yq "$1" "$2" | diff -B "$2" - | patch "$2" -
}
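Usage example (hypothetical expression and file); note that the fish variant prints the result to stdout, while the bash variant patches the file in place:

yqblank '.spec.replicas = 3' deployment.yaml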