@tuannvm · Last active April 12, 2025
#Helm #Kubernetes #cheatsheet, happy helming!

ArgoCD CheatSheet


Overview

  • ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes.
  • It continuously monitors Git repositories and automatically applies desired state configurations to clusters.
  • Supports multi-cluster deployments, application rollback, and advanced sync strategies.

Architecture & Components

  • API Server: Hosts the ArgoCD API and UI.
  • Repository Server: Clones and tracks Git repositories for application manifests.
  • Application Controller: Reconciles declared application state against live clusters.
  • Dex (optional): Provides authentication integrations (OIDC, LDAP).
  • Redis: Used for caching and managing ArgoCD sessions.
  • CLI: A command-line tool to interact with ArgoCD and automate deployment operations.

Each component operates together to maintain cluster state according to the GitOps model.


Getting Started

  • Familiarize with the ArgoCD GitOps fundamentals to manage Kubernetes deployments.
  • Learn the declarative application model and integration with Git repositories.
  • Use the web UI or CLI to monitor application states, view diffs, and trigger syncs.

Installation & Setup

  • Installation:
    # Install ArgoCD into the argocd namespace
    kubectl create namespace argocd
    kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
  • Port Forwarding for Access:
    kubectl port-forward svc/argocd-server -n argocd 8080:443
  • CLI Installation:
    Download the latest ArgoCD CLI from the official releases page and add it to your PATH.

New versions up to 2025 have streamlined integration with OCI registries, enhanced authentication, and improved RBAC policies.


ArgoCD CLI Commands

Basic commands using the argocd CLI:

# Login to the ArgoCD API server
argocd login <ARGOCD-SERVER>:<PORT> --username admin --password <password>

# List all applications
argocd app list

# Get detailed information on an application
argocd app get <app-name>

# Create a new application from a Git repository
argocd app create <app-name> \
  --repo https://github.com/your-org/your-repo.git \
  --path <path-to-manifests> \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace <target-namespace>

# Sync an application to update the live state
argocd app sync <app-name>

# View differences between the desired and live state
argocd app diff <app-name>

# Rollback an application to a previous revision
argocd app rollback <app-name> <revision-number>

# Delete an application (controller will remove the resources)
argocd app delete <app-name>

# Refresh to re-fetch Git repo data
argocd app refresh <app-name>

These commands are available in the latest CLI versions and provide a consistent experience even as new features are added.


Application Management

Defining an Application

An ArgoCD application is declared as a Kubernetes custom resource. A common example:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'https://github.com/your-org/your-repo.git'
    targetRevision: HEAD
    path: overlays/production
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: my-app-namespace
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

This declarative file drives GitOps workflows and supports advanced strategies available up to the most current releases.

Sync Policies & Strategies

  • Manual Sync:
    • Default mode where a user triggers sync via CLI or UI.
  • Automated Sync:
    • Automatically applies new changes from Git.
    • Options include:
    • Prune: Automatically remove resources that are no longer in Git.
    • SelfHeal: Re-sync out-of-compliance resources automatically.
  • Sync Waves and Hooks:
    • Define pre-sync, post-sync, sync hooks using annotations such as:
    metadata:
      annotations:
        argocd.argoproj.io/hook: PreSync
    • Ensure reliable ordering and execution during application update.
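As a sketch, a PreSync hook combined with a sync wave might look like the following (the job name, image, and command are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate                                        # illustrative name
  annotations:
    argocd.argoproj.io/hook: PreSync                      # run before the sync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded  # clean up on success
    argocd.argoproj.io/sync-wave: "-1"                    # run before wave-0 resources
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: busybox
          command: ["echo", "running migrations"]
```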

Rollback & Diff

  • Rollback:
    • Roll back to a prior successful revision using the CLI or UI.
    • Useful for quickly recovering from a bad deployment.
  • Diff:
    • See the changes between the live state and Git state with the argocd app diff command.
    • Helps identify configuration drift and troubleshoot sync issues.

Project & RBAC

  • Projects:
    • Group applications under ArgoCD projects for isolation and centralized policy management.
  • RBAC:
    • Define fine-grained access rules for users and teams in the ArgoCD configuration.
    • Use ConfigMaps for RBAC policies, allowing controlled access to specific applications, projects, or actions.

Newer versions further enhance multi-tenant support and integration with enterprise identity providers.
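A minimal sketch of an RBAC policy in the argocd-rbac-cm ConfigMap (the role name, project name, and group are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly     # fallback role for authenticated users
  policy.csv: |
    # role:dev-team may sync applications in the "dev" project only
    p, role:dev-team, applications, sync, dev/*, allow
    g, my-org:developers, role:dev-team
```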


Monitoring & Troubleshooting

  • UI Dashboard:
    • Use the ArgoCD web UI to observe application statuses, view diff details, and check logs.
  • CLI Status & Logs:
    argocd app get <app-name>   # Displays detailed health and sync status
    kubectl logs -n argocd deployment/argocd-application-controller
  • Audit & Notifications:
    • Configure external notifications (e.g., Slack, email) to be alerted on sync failures or policy violations.
  • Observability:
    • Integration with Prometheus and Grafana for performance monitoring.

With improvements in observability and notification features, troubleshooting is more efficient and proactive in current releases.


Advanced Usage & New Features

  • ApplicationSets:
    • Dynamically generate multiple ArgoCD applications from a single template, suited for multi-cluster or multi-environment deployments.
  • Declarative Config Management:
    • Use Git and YAML to control not only individual applications but also global configuration for ArgoCD.
  • Enhanced OCI Support:
    • Deploy applications backed by OCI artifacts directly.
  • Improved Security:
    • Enhanced authentication via OIDC and integration with external secrets management.
  • Custom Resource Definitions:
    • Leverage extended CRD capabilities to fine-tune application behavior according to custom policies.

These advanced features, available up to the latest release, continue to strengthen ArgoCD’s role as a central tool in GitOps continuous delivery.
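The ApplicationSet pattern mentioned above can be sketched with a list generator; the cluster names, URLs, and paths here are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - cluster: staging
            url: https://staging.example.com:6443
          - cluster: production
            url: https://production.example.com:6443
  template:
    metadata:
      name: '{{cluster}}-guestbook'   # one Application per list element
    spec:
      project: default
      source:
        repoURL: https://github.com/your-org/your-repo.git
        targetRevision: HEAD
        path: guestbook
      destination:
        server: '{{url}}'
        namespace: guestbook
```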

Helm CheatSheet


Get Started

Helm v3 removed Tiller, improved security, and introduced native OCI registry support. These changes simplify chart management and enhance release practices.


Structure

.
├── Chart.yaml        # Metadata info
├── README.md
├── requirements.yaml  # For defining dependencies (now largely replaced by Chart.yaml 'dependencies')
├── templates          # Contains Kubernetes manifests with templating support
│   ├── spark-master-deployment.yaml
│   ├── spark-worker-deployment.yaml
│   ├── spark-zeppelin-deployment.yaml
│   ├── NOTES.txt      # Displayed after helm install
│   └── _helpers.tpl   # Helper templates / partials
├── values.yaml        # Variables used during template rendering
└── charts             # Directory for dependent charts
    └── apache/
        └── Chart.yaml
  • Chart.yaml
name: <chart-name>               # Required chart name
version: <semver-2-version>      # Follows SemVer 2; required
description: A one-sentence description (optional)
keywords:
  - helm
  - chart
home: https://example.com        # Homepage URL (optional)
sources:
  - https://example.com/source   # Source code URLs (optional)
maintainers:                    # One or more maintainers for the chart
  - name: "Your Name"           # Required for each maintainer
    email: [email protected]      # Optional for each maintainer
engine: gotpl                   # Template engine (ignored in Helm v3; gotpl is the only engine)
icon: https://example.com/icon.png  # Icon URL, preferably SVG or PNG
appVersion: "1.2.3"             # Version of the application contained in the chart (optional)
deprecated: false               # Mark chart as deprecated (boolean)
tillerVersion: ">2.0.0"         # REQUIRED only in legacy contexts; not needed in Helm v3+
  • requirements.yaml (deprecated in favor of dependencies key in Chart.yaml)

    Use the following syntax for adding dependencies, including alias and conditions:

dependencies:
  - name: apache
    version: 1.2.3
    repository: "http://example.com/charts"
    alias: new-subchart-1
    condition: subchart1.enabled,global.subchart1.enabled
    tags:
      - front-end
      - subchart1
  - name: mysql
    version: 3.2.1
    repository: "http://another.example.com/charts"
    alias: new-subchart-2
    condition: subchart2.enabled,global.subchart2.enabled
    tags:
      - back-end

Recent Helm versions (v3.17.3 included) consolidate dependency management into Chart.yaml’s "dependencies" key, simplifying chart packaging and updates.


General Usage

helm list --all
helm repo (list|add|update)
helm search repo <keyword>
helm show all <chart-name>           # "helm inspect" is the pre-v3 name
helm install <release-name> <chart-name> --set a=b -f config.yaml
helm status <release-name>
helm uninstall <release-name>        # "helm delete" remains as an alias
helm show values <chart-name>
helm upgrade -f config.yaml <release-name> <chart-name>
helm rollback <release-name> <revision>
helm create <chart-name>
helm package <chart-name>
helm lint <chart-name>
helm dep update <chart-name>         # Updates chart dependencies
helm get manifest <release-name>     # Prints all Kubernetes resources rendered for the release
helm install <release-name> <chart-name> --debug --dry-run

Notes on value substitution:

  • --set outer.inner=value translates to:
outer:
  inner: value
  • --set servers[0].port=80,servers[0].host=example becomes:
servers:
  - port: 80
    host: example
  • List creation with --set name={a,b,c} produces:
name:
  - a
  - b
  - c
  • Escaping commas (e.g., --set name=value1\,value2) preserves commas in strings.

  • Dot notation in keys, for example using --set nodeSelector."kubernetes\.io/role"=master, produces:

nodeSelector:
  kubernetes.io/role: master

Template

Values defined in your values.yaml or via the --set flag are available in templates through the .Values object. Helm also exposes useful built-in objects such as:

  • Release:
    .Release.Name, .Release.Namespace, .Release.IsUpgrade, .Release.Revision (.Release.Time was removed in Helm v3)
  • Chart:
    .Chart.Name, .Chart.Version, .Chart.Maintainers
  • Files:
    • Access static files via {{ .Files.Get "file.name" }} or {{ .Files.GetString "file.name" }}
  • Capabilities:
    • For example, check supported Kubernetes API versions using .Capabilities.APIVersions.Has

Other useful functions include:

  • default for providing fallback values:

    {{ default "minio" .Values.storage }}
  • quote to wrap strings in quotes:

    heritage: {{ .Release.Service | quote }}
  • Using include to call named templates and then manipulate their output:

    value: {{ include "mytpl.tpl" . | lower | quote }}
  • The required function to print an error if a needed value is missing:

    value: {{ required "A valid .Values.who entry is required!" .Values.who }}
  • Generating a checksum on an included file to trigger rolling updates, for example:

    annotations:
      checksum/config: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}

The annotation "helm.sh/resource-policy": keep instructs Helm to skip a resource during deletion, ensuring persistent resources are not accidentally removed.

Files in the templates/ directory that begin with an underscore (such as _helpers.tpl) are treated as helper files and are not rendered as full Kubernetes manifests.


Hooks

Hooks let you execute resources at specific phases in a release lifecycle. They are defined by annotations in the manifest files.

For example, a hook job defined with post-install and post-upgrade phases:

apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "-5"    # Defines the order in which hooks are executed
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: hook
        image: busybox
        command: ["echo", "This is a hook!"]

Recent improvements in Helm v3 allow hooks to integrate better with release lifecycles and help with automated testing and pre-/post-upgrade validations.


Chart Repository

  • Helm supports chart repositories with an index file that describes available charts.
  • Use commands such as helm repo add, helm repo update, and helm search repo to manage and find charts.

For more details refer to the chart repository documentation.


Signing

Chart signing helps verify the integrity and provenance of a chart.

  • Use helm package --sign --key <key-name> --keyring <keyring-path> <chart-name> to sign a chart.
  • Helm v3 improves signing workflows and integrates with provenance files.
  • For more details, see the Helm provenance documentation.

Test

Helm provides a mechanism to run tests on deployed charts. Test hooks are defined as Kubernetes test Pods which are executed after a release is installed or upgraded.

  • Define test hooks by adding the helm.sh/hook: test-success annotation.
  • Run tests with helm test <release-name>.

For further details, see chart tests documentation.


Flow Control

If/Else

{{ if .Values.someCondition }}
  # Execute this if .Values.someCondition is true
{{ else if .Values.otherCondition }}
  # Else if condition
{{ else }}
  # Default case
{{ end }}

data:
  drink: {{ .Values.favorite.drink | default "tea" | quote }}
  food: {{ .Values.favorite.food | upper | quote }}
  {{- if eq .Values.favorite.drink "lemonade" }}
  mug: true
  {{- end }}

With

with changes the current scope to an object:

data:
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  {{- end }}

Inside the restricted scope, access to objects outside the scope is not available unless using the global variable $.


Range

# Sample from values.yaml:
# pizzaToppings:
#   - mushrooms
#   - cheese
#   - peppers
#   - onions

toppings: |-
  {{- range $i, $val := .Values.pizzaToppings }}
  - {{ $val | title | quote }}
  {{- end }}

# Using a quick tuple to list sizes
sizes: |-
  {{- range tuple "small" "medium" "large" }}
  - {{ . }}
  {{- end }}

Variables

Variables are defined with the := operator and referenced with $.

data:
  myvalue: "Hello World"
  {{- $relname := .Release.Name -}}
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  release: {{ $relname }}
  {{- end }}

# In a range loop:
toppings: |-
  {{- range $index, $topping := .Values.pizzaToppings }}
  {{ $index }}: {{ $topping }}
  {{- end }}

# Global scope variable:
labels:
  app: {{ template "fullname" $ }}
  release: "{{ $.Release.Name }}"
  chart: "{{ $.Chart.Name }}-{{ $.Chart.Version }}"

Named Templates

Define reusable snippets in your helper files (e.g. _helpers.tpl):

{{/* _helpers.tpl */}}
{{- define "my_labels" -}}
labels:
  generator: helm
  date: {{ now | htmlDate }}
  version: {{ .Chart.Version }}
  name: {{ .Chart.Name }}
{{- end -}}

Call a named template inside another manifest:

# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
  {{- include "my_labels" . | indent 2 }}

Use include rather than template when you need to pass the output to other functions.


Files inside Templates

Access chart files in templates with .Files:

data:
  {{- $files := .Files }}
  {{- range tuple "config1.toml" "config2.toml" "config3.toml" }}
  {{ . }}: |-
    {{ $files.Get . }}
  {{- end }}

Glob-patterns & Encoding

Using the Glob functionality, you can load multiple files with a pattern:

apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
  {{ (.Files.Glob "foo/*").AsConfig | indent 2 }}
---
apiVersion: v1
kind: Secret
metadata:
  name: very-secret
type: Opaque
data:
  {{ (.Files.Glob "bar/*").AsSecrets | indent 2 }}
  token: |-
    {{ .Files.Get "config1.toml" | b64enc }}

YAML Reference

# Forcing a type:
age: !!str 21
port: !!int "80"

# Literal block (keeps formatting and newlines)
coffee: | 
  # Commented first line
  Latte
  Cappuccino
  Espresso

# Literal block with stripping of trailing newlines:
coffee: |-
  Latte
  Cappuccino
  Espresso

# Literal block with extra trailing newline preservation:
coffee: |+ 
  Latte
  Cappuccino
  Espresso

another: value

# Inserting a static file:
myfile: | 
  {{ .Files.Get "myfile.txt" | indent 2 }}

# Folded block (combines lines):
coffee: >
  Latte
  Cappuccino
  Espresso

New Features and Enhancements (up to v3.17.3)

  • OCI Registry Support:
    Helm v3.7 and later introduced full OCI support. You can now store and retrieve charts directly from OCI-compliant registries. Commands like helm registry login and helm pull oci://<registry>/<chart> are fully supported.

  • Improved Dependency Management:
    Enhancements allow for enhanced resolution of chart dependencies. The consolidation of dependency definitions into the Chart.yaml file along with automatic updates via helm dep update leads to a simpler, more robust dependency management experience.

  • Enhanced Security and Chart Signing:
    Chart signing and provenance verification have been further strengthened. Helm now produces a dedicated provenance file alongside packaged charts, and additional signature verification options are available during installation and upgrades.

  • Refined Hooks and Test Support:
    Hooks have received more precise control in execution order and behavior (such as using custom hook weights), and testing has been made more seamless through improved integration with test hooks and release lifecycle management.

  • Template Engine Improvements:
    New built-in functions, better error reporting with the required function, and enhanced support for global context ($) in templating have been added. These features bring more power and flexibility to chart templating.

Kubernetes cheatsheet


Getting Started

  • Fault tolerance
  • Rollback
  • Auto-healing
  • Auto-scaling
  • Load-balancing
  • Isolation (sandbox)

Tip: With rapid Kubernetes evolution through versions 1.32 and beyond, built-in resilience features are continuously enhanced to support modern cloud-native workflows.


Sample yaml

apiVersion: <apiVersion>       # Use stable versions (e.g., apps/v1 for Deployments)
kind: <Kind>                 # Pod, Service, Deployment, etc.
metadata:
  name: <object-name>
  labels:
    key: value
  annotations:
    key: value
spec:
  containers:
    - name: <container-name>
      image: <container-image>
  initContainers:            # Optional: initialization containers run before main containers
    - name: <init-container-name>
      image: <init-container-image>
  priorityClassName: <priority-class>

Adopt a declarative approach to ensure repeatability and auditability of your deployments.


Workflow

Credit: Based on community insights from 2025

  • (kube-scheduler, controller-manager, etcd) --443--> API Server
  • API Server --10250--> kubelet
    Issues such as unverified certificates and MITM are mitigated by configuring a proper kubelet-certificate-authority or using SSH tunneling.
  • API server --> (nodes, pods, services) via plain HTTP (verify this against your network security policies)

Physical components

Master

  • API Server (443)
  • Kube-scheduler
  • Controller-manager
    • Runs its control loops in a single process; cloud-specific logic is split out into the cloud-controller-manager.
  • etcd

All interactions among components occur via the API server, reducing direct cross-component communication.

Node

  • Kubelet
  • Container Engine
    • Uses the Container Runtime Interface (CRI) to interact with container runtimes.
  • Kube-proxy
    2025 note: The deprecated field “status.nodeInfo.kubeProxyVersion” is no longer present.

Everything is an object - persistent entities

  • Stored in etcd with unique client-assigned names and system-generated UIDs.
  • Management methods:
    • Imperative commands via kubectl
    • Imperative configuration objects (using YAML)
    • Declarative object configuration stored in version control
      Node Capacity
---------------------------
| kube-reserved           |
|-------------------------|
| system-reserved         |
|-------------------------|
| eviction-threshold      |
|-------------------------|
|                         |
| allocatable             |
| (available for pods)    |
|                         |
---------------------------

Namespaces

  • Pre-defined namespaces:
    default
    kube-system
    kube-public (readable by all users, including unauthenticated)
  • Certain objects (Nodes, PersistentVolumes, Namespaces) remain cluster-scoped.

Labels

  • Key/value pairs for grouping and selection.
  • Not required to be unique.

ClusterIP

  • ClusterIP provides a static service endpoint that remains unchanged regardless of pod lifecycle changes.

Controller manager

  • Maintains consistency between desired and actual state via objects like ReplicaSets, Deployments, DaemonSets, and StatefulSets.

Kube-scheduler

  • Uses nodeSelector, affinity/anti-affinity, and taints & tolerations to determine pod placement.
  • Enhanced in v1.32 with improved matching rules and more granular constraint definitions.

Pod

Create a pod with:

kubectl run mypod --image=<image>

Inside a pod you can access:

  • Filesystem (from the image and attached volumes)
  • Container and Pod Info (such as hostname, environment variables, pod name, and service details)

Access pod metadata via symlinked files or Downward API volumes:

volumes:
  - name: podinfo
    downwardAPI:
      items:
        - path: "labels"
          fieldRef:
            fieldPath: metadata.labels
        - path: "annotations"
          fieldRef:
            fieldPath: metadata.annotations

Status

  • Pod status: Pending, Running, Succeeded, Failed, Unknown.

Probe

  • Liveness probe: Triggers restarts on failure.
  • Readiness probe: Removes the pod from service endpoints on failure.
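The two probe types can be sketched on a container spec; the paths, ports, and timings below are assumptions:

```yaml
containers:
  - name: web
    image: nginx
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 10   # wait before the first liveness check
      periodSeconds: 15
    readinessProbe:
      httpGet:
        path: /ready
        port: 80
      periodSeconds: 5
      failureThreshold: 3       # removed from endpoints after 3 failures
```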

Pod priorities

  • Managed via PriorityClass objects.
  • Higher priority pods may preempt lower-priority ones, with advanced scheduling controls enhanced in recent versions.

Multi-Container Pods

  • Share memory, localhost networking, and volumes.
  • Designed as a unit for scaling and scheduling.

Init containers

  • Run sequentially before app containers are launched, ensuring required setup tasks are completed.

Lifecycle hooks

  • PostStart – triggered after a container starts
  • PreStop – invoked before a graceful termination
    lifecycle:
      postStart:
        exec:
          command: ["/bin/setup"]
      preStop:
        httpGet:
          path: "/cleanup"
          port: 8080

Quality of Service (QoS)

Kubernetes assigns one of the following QoS classes:

  • Guaranteed: All containers have matching requests and limits.
  • Burstable: At least one container specifies a resource limit or request.
  • BestEffort: No resource configurations are provided.

If resource requests are omitted but limits are set, Kubernetes defaults the requests to the limits (e.g., memory requests are set to match memory limits).
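For example, a pod lands in the Guaranteed class when every container's requests equal its limits (values below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          cpu: "500m"
          memory: 128Mi
        limits:
          cpu: "500m"          # requests == limits => Guaranteed QoS
          memory: 128Mi
```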


PodPreset

  • Enabled injection of secrets, environment variables, or volume mounts at pod creation time.
  • Removed in Kubernetes v1.20; shown here for legacy clusters only.
    apiVersion: settings.k8s.io/v1alpha1
    kind: PodPreset
    metadata:
      name: allow-database
    spec:
      selector:
        matchLabels:
          role: frontend
      env:
        - name: DB_PORT
          value: "6379"
      volumeMounts:
        - mountPath: /cache
          name: cache-volume
      volumes:
        - name: cache-volume
          emptyDir: {}

ReplicaSet

  • Oversees a fixed number of pod replicas using a defined pod template and selectors.
  • Use --cascade=false to delete a ReplicaSet without removing its pods.

Deployments

  • Supports versioning, rollback, and advanced update strategies (blue-green, canary, rolling updates).
  • On changes, new ReplicaSets are created while scaling down older ones.
  • Commands include:
    kubectl rollout undo deployment/<name> --to-revision=<number>
    kubectl set image deployment/<name> <container>=<new-image>

ReplicationController

  • Predecessor to ReplicaSets and Deployments and now largely deprecated.

DaemonSet

  • Runs a copy of a pod on every node or specific subsets of nodes.
  • Commonly used for log collection, monitoring, and node-level services.

StatefulSet

  • Provides stable identities and persistent storage associations for pods.
  • Volumes remain tied to pods even when scaled down or rescheduled.

Job (batch/v1)

  • Manages short-lived, batch-oriented workloads in both parallel and non-parallel forms.
  • spec.activeDeadlineSeconds can prevent runaway jobs.
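A minimal Job sketch using activeDeadlineSeconds as a runaway guard (the image, command, and counts are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 3               # require 3 successful pods
  parallelism: 2               # run at most 2 at a time
  activeDeadlineSeconds: 600   # terminate the job after 10 minutes
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: pi
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```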

Cronjob

  • Schedules jobs to run at specific times or intervals.
  • The jobs should be idempotent for reliable operation.
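A CronJob sketch that runs every five minutes (the schedule, image, and command are illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"        # standard cron syntax
  concurrencyPolicy: Forbid      # skip a run if the previous one is still going
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: hello
              image: busybox
              command: ["date"]
```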

Horizontal pod autoscaler

  • Scales controllers like Deployments, ReplicaSets, and ReplicationControllers based on CPU or custom metrics.
  • Includes safeguards to avoid scaling thrashing.
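An autoscaling/v2 HorizontalPodAutoscaler sketch targeting 70% average CPU utilization (the deployment name and replica bounds are assumptions):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                   # assumed deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```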

Services

Credit: Community insights

  • A Service defines both a logical set of pods and a policy to access them.
  • Types:
    • ClusterIP: Internal connectivity only.
    • NodePort: Exposes a service on every Node’s IP.
    • LoadBalancer: Integrates with external load balancers managed by cloud-controller-manager.
    • ExternalName: Maps a service to a DNS name.
  • Service discovery:
    • Uses SRV records for named ports and established pod domain naming conventions.

Volumes

Credit: Community contributions

  • Volumes persist data beyond the lifecycle of individual pods.
  • Types include:
    • configMap
    • emptyDir – shares space among containers in a pod (data is lost on pod crash)
    • gitRepo (deprecated)
    • secret – typically stored in memory
    • hostPath

Persistent volumes

  • Offer long-term storage solutions decoupled from pod lifecycles.
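The usual pairing is a PersistentVolumeClaim that a pod mounts; the claim size and storage class below are assumptions that depend on your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard    # assumed class name
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - mountPath: /data      # data survives pod restarts
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim
```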

Role-Based Access Control (RBAC)

Credit: Community insights

  • Role:
    • Grants permissions within a specific namespace.
  • ClusterRole:
    • Applies cluster-wide or to resources spanning multiple namespaces.
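A namespaced Role and its binding, sketched for a read-only pod viewer (the role, namespace, and user names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]             # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane                  # illustrative user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```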

Custom Resource Definitions

  • With Kubernetes evolution, use apiextensions.k8s.io/v1 instead of the older v1beta1 API.
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: crontabs.stable.example.com
    spec:
      group: stable.example.com
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              properties:
                spec:
                  type: object
                  properties:
                    cronSpec:
                      type: string
                      pattern: '^(\d+|\*)(/\d+)?(\s+(\d+|\*)(/\d+)?){4}$'
                    replicas:
                      type: integer
                      minimum: 1
                      maximum: 10
          subresources:
            status: {}
            scale:
              specReplicasPath: .spec.replicas
              statusReplicasPath: .status.replicas
              labelSelectorPath: .status.labelSelector
      scope: Namespaced
      names:
        plural: crontabs
        singular: crontab
        kind: CronTab
        shortNames:
          - ct
        categories:
          - all

New Features and Objects (up until Kubernetes v1.32)

  • Ephemeral Containers:
    • Allow you to attach temporary containers into a running pod for debugging purposes without modifying the pod’s specification. This has become an essential tool for in-cluster troubleshooting.

  • PodDisruptionBudget (PDB):
    • Enables you to specify the minimum number of pods that should remain available during voluntary disruptions (such as node maintenance), ensuring service continuity.

  • EndpointSlices:
    • Provide a scalable alternative to Endpoints, improving service discovery performance in large clusters by grouping endpoints into smaller subsets.

  • VolumeSnapshot & CSI Volume Expansion:
    • VolumeSnapshot objects allow you to capture the state of a persistent volume while the CSI migration and volume expansion features let you dynamically increase storage—all fully supported in the latest releases.

These objects and features contribute to improved security, scalability, and observability of your Kubernetes clusters.
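The PodDisruptionBudget mentioned above can be sketched as follows (the selector and threshold are assumptions):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2               # keep at least 2 pods up during voluntary disruptions
  selector:
    matchLabels:
      app: web                  # assumed pod label
```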


Notes

Basic commands

# Show the current context
kubectl config current-context

# Get a specific resource (pod|svc|deployment|ingress)
kubectl get <resource> <resource-name>

# View pod logs (follow mode)
kubectl logs -f <pod-name>

# List nodes with custom columns:
kubectl get nodes -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion,AGE:.metadata.creationTimestamp

# Execute a command in a pod or get an interactive shell:
kubectl exec -it <pod-name> -- <command>

# Describe a specific resource
kubectl describe <resource> <resource-name>

# Set the current namespace in the context
kubectl config set-context $(kubectl config current-context) --namespace=<namespace-name>

# Run a test pod using the Alpine image
kubectl run -it --rm --image=alpine:3.6 tuan-shell -- sh   # --generator was removed in recent kubectl versions

Quick references from community experts remain popular for both learning and troubleshooting.

To access the Kubernetes dashboard:

# For bash:
kubectl -n kube-system port-forward $(kubectl get pods -n kube-system -o wide | grep dashboard | awk '{print $1}') 9090

# For fish:
kubectl -n kube-system port-forward (kubectl get pods -n kube-system -o wide | grep dashboard | awk '{print $1}') 9090

jsonpath

JSONPath expressions let you filter and extract specific fields from JSON output. Examples include:

kubectl get pods -o json
kubectl get pods -o=jsonpath='{.items[0].metadata.name}'
kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}'
Function           Description           Example                                               Result
text               Plain text output     {.kind}                                               List
@                  Current object        {@}                                                   Same as input
. or []            Child operator        {.metadata.name}                                      Pod name
..                 Recursive descent     {..name}                                              All occurrences of "name"
*                  Wildcard expansion    {.items[*].metadata.name}                             List of pod names
[start:end:step]   Subscript operator    {.items[0].metadata.name}                             First pod name
[,]                Union operator        {.items[*]['metadata.name','status.capacity']}        Selected fields from each item
?()                Filter                {.items[?(@.metadata.name=="mypass")].status.phase}   Phase for matching pod

Resource limit

CPU

  • One CPU is equivalent to one AWS vCPU, one GCP Core, one Azure vCore, or one hyperthread in a modern Intel processor.

Memory

  • Memory can be specified in bytes, using suffixes such as K, M, G or Ki, Mi, Gi.
    Example: 129M or 123Mi yield similar memory requirements.

Chapter 13. Integrating storage solutions and Kubernetes

External services:
Use an ExternalName Service or manually configure endpoints if you need to integrate with external systems.

Example: External service without selector:

kind: Service
apiVersion: v1
metadata:
  name: external-database
spec:
  type: ExternalName
  externalName: "database.company.com"

Example: External service with fixed IP:

kind: Service
apiVersion: v1
metadata:
  name: external-ip-database
spec:
  type: ClusterIP
---
kind: Endpoints
apiVersion: v1
metadata:
  name: external-ip-database
subsets:
  - addresses:
      - ip: 192.168.0.1
    ports:
      - port: 3306

Downward API

Expose pod metadata (e.g., name, namespace, labels, annotations) to containers via environment variables or volumes using the Downward API.
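The environment-variable form of the Downward API looks like this (the container name and command are illustrative):

```yaml
containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo $POD_NAME in $POD_NAMESPACE"]
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name       # injected by the kubelet at start
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
```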


2025 Updates & Deprecations

  • Kubernetes v1.33:
    • The Endpoints API is superseded by EndpointSlices for better scalability.
    • The field status.nodeInfo.kubeProxyVersion is removed.
    • Enhanced scheduling options and dynamic resource allocation (including GPUs and FPGAs) are now available.
    • Windows pods no longer support host network mode; review your networking configuration if using Windows containers.

  • CRD Updates:
    • Use apiextensions.k8s.io/v1 as the stable API version for custom resource definitions.

Staying updated with these changes keeps your clusters secure, scalable, and ready for modern workloads.


Labs

Guaranteed Scheduling For Critical Add-On Pods

Review the official guide for Guaranteed Scheduling for Critical Add-On Pods.
Key aspects include:

  • The pod must run in the kube-system namespace.
  • It must have the scheduler.alpha.kubernetes.io/critical-pod annotation set (typically an empty string).
  • When priorities are enabled, assign the pod a priorityClass of system-cluster-critical or system-node-critical.
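Put together, a sketch of such a pod might look like the following (the pod name and image are illustrative; on clusters with priorities enabled, `priorityClassName` is the mechanism that matters, and the alpha annotation is a legacy marker):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: critical-addon
  namespace: kube-system
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
  priorityClassName: system-cluster-critical
  containers:
    - name: addon
      image: registry.k8s.io/pause:3.9
```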

Set command or arguments via env

```yaml
env:
  - name: MESSAGE
    value: "hello world"
command: ["/bin/echo"]
args: ["$(MESSAGE)"]
```

Note: `$(MESSAGE)` is expanded by Kubernetes itself, not by a shell; no shell is invoked here.
@tuannvm commented Jul 31, 2017

  • Create kubernetes user link

  • Delete evicted pods:

```shell
kubectl get po --all-namespaces -o json | \
jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) |
"kubectl delete po \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c
```

  • Check init container logs:

```shell
kubectl logs <pod-name> -c <init-container-name>
```

@tuannvm commented Sep 22, 2017

  • Run a non-exiting shell to debug:

```yaml
command: [ "/bin/sh", "-c", "--" ]
args: [ "while true; do sleep 30; done;" ]
```

@tuannvm commented Nov 16, 2017

@tuannvm commented Dec 13, 2017

Using jq with field names containing a dash (`-`): see jqlang/jq#38 (comment)

@tuannvm commented Jan 11, 2018

Sample Deployment:

```yaml
apiVersion: apps/v1 # stable since Kubernetes 1.9; older clusters used apps/v1beta1 or apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the metadata: a unique name is
      # generated from the deployment name
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```

@bikranz4u:

Thanks for the Cheat Sheet.

@nalinguptalinux:

Thanks

@prashanth-sams:

Nice! If you wish to see a few more in detail:
https://devopsqa.wordpress.com/2020/01/29/helm-cli-cheatsheet/

@sedkis commented Apr 23, 2020

Thank you

@tuannvm commented Apr 27, 2020

@tuannvm commented Sep 1, 2020

http://masterminds.github.io/sprig/defaults.html#ternary
http://masterminds.github.io/sprig/integer_slice.html#untilStep

```yaml
# if env == "qa", $count = envCount; otherwise 1
{{ $count := ternary .Values.envCount 1 (eq "qa" .Values.env) }}
# generate a list from 0 to $count
{{- range $var := untilStep 0 (int $count) 1 }}
```

Define and use a new variable:

```yaml
{{ $foo := print .Values.bar "-" .Values.pub }}
{{- if eq $foo .Values.disco }}
{{- end }}
```

@tuannvm commented Nov 11, 2020

Use a variable in values.yaml:

```yaml
# values.yaml
foo:
  foo1: bar1
  foo2: {{ .Release.Namespace }}
```

```yaml
# deployment.yaml
{{ tpl (toYaml .Values.foo) . | indent 2 }}
```

@tuannvm commented Apr 11, 2021

  • Check API access:

```shell
kubectl auth can-i create deployments --namespace dev
kubectl auth can-i list secrets --namespace dev --as dave
```

  • Get nodes created before/after a specific date:

```shell
kubectl get node -o json | jq -r '.items[] | select (.metadata.creationTimestamp <= "2021-04-13") | .metadata.name'
```

  • Get pods with more than `<restart_count>` restarts:

```shell
kubectl get pod -l <label> -o json | jq -r '.items[] | select (.status.containerStatuses != null) | {container_status: .status.containerStatuses[], name: .metadata.name} | select (.container_status.restartCount > <restart_count>) | .name'
```
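The restart-count filter can be dry-run without a cluster by feeding the same jq program a hand-written pod list (the pod names and the threshold of 3 are made up):

```shell
# Sample JSON shaped like `kubectl get pod -o json` output (hypothetical pods)
cat > /tmp/podlist.json <<'EOF'
{"items":[
  {"metadata":{"name":"stable-pod"},"status":{"containerStatuses":[{"restartCount":0}]}},
  {"metadata":{"name":"crashy-pod"},"status":{"containerStatuses":[{"restartCount":7}]}}
]}
EOF

# Same filter as above, with <restart_count> set to 3
jq -r '.items[]
       | select(.status.containerStatuses != null)
       | {container_status: .status.containerStatuses[], name: .metadata.name}
       | select(.container_status.restartCount > 3)
       | .name' /tmp/podlist.json   # crashy-pod
```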

@tuannvm commented Sep 27, 2021

  • Get the list of images used in deployments:

```shell
kubectl get deployment -o json | jq -r '.items[] | .spec.template.spec.containers[].image' | sort | uniq
```
