Assessment of the render pipeline's capabilities relative to the tf-proto reference charts. Produced after porting the sandbox component and fixing several rendering gaps.
The render pipeline handles the core mechanics well: CUE value
classification, scalar templating with defaults, if/else conditionals from
CUE comprehensions, resource-level condition wrapping, and basic toYaml for
struct/list blobs. The sandbox chart is a reasonable proof that the pipeline
works end-to-end for a single component.
The remaining gaps, grouped by significance:
_helpers.tpl and include -- Every tf-proto chart uses a shared
_helpers.tpl that defines app.name, app.fullname, app.labels,
app.selectorLabels, app.chart. Templates call
{{ include "app.labels" . | nindent N }} rather than inlining labels. Our
pipeline bakes labels inline. This isn't just cosmetic -- it's how tf-proto
achieves DRY across resources within a chart. Without include, every resource
repeats its label block, and label changes require touching every template file.
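For reference, the tf-proto pattern looks roughly like this (the helper body is a sketch; the real label set varies per chart):

```yaml
{{/* _helpers.tpl -- named templates defined once, reused by every resource */}}
{{- define "app.labels" -}}
helm.sh/chart: {{ include "app.chart" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{ include "app.selectorLabels" . }}
{{- end }}
```

```yaml
# deployment.yaml -- each resource pulls the shared block in at the right indent
metadata:
  labels:
    {{- include "app.labels" . | nindent 4 }}
```

Changing a label then means editing one define, not every template file.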
range over maps -- tf-proto uses
{{- range $key, $value := .Values.env }} and
{{- range $key, $value := .Values.pod.annotations }} extensively. Our
@helm(range) only handles list iteration. Map iteration with key-value pairs
is a different construct entirely, and it's how tf-proto handles env vars,
annotations, and configmap data -- all of which are arbitrary key-value maps
from the deployer.
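The map form, for contrast with list-only iteration (env shown; annotations and configmap data follow the same shape):

```yaml
env:
  {{- range $key, $value := .Values.env }}
  - name: {{ $key }}
    value: {{ $value | quote }}
  {{- end }}
```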
Image URI construction -- tf-proto builds images from
registry + imageName + tag with a global registry override
(containerRepositoryOverride | default .Values.deployment.container.registry).
This is a core deployment concern (airgapped environments, mirror registries).
Our pipeline has containerImage as a single string. The registry override
pattern needs to be a framework-level concern.
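The tf-proto shape is roughly the following (the values paths for imageName and tag are assumed for illustration):

```yaml
image: "{{ .Values.containerRepositoryOverride | default .Values.deployment.container.registry }}/{{ .Values.deployment.container.imageName }}:{{ .Values.deployment.container.tag }}"
```

A single deployer-level override then redirects every image pull without touching per-chart values.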
Variable scoping ($) -- Inside range blocks, tf-proto uses $.Values.X
to access root scope. Our range implementation doesn't handle this.
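A minimal illustration (the values paths are hypothetical): inside the block, `.` is rebound to the current element, while `$` still reaches the root context.

```yaml
{{- range .Values.deployment.containers }}
- name: {{ .name }}                                  # `.` is the current list element
  image: "{{ $.Values.registry }}/{{ .imageName }}"  # `$` reaches back to root scope
{{- end }}
```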
tpl for values that contain template expressions -- tf-proto puts template
expressions inside values.yaml strings, then evaluates them with tpl. We
explicitly decided against this (documented in helm-divergences.md), but it
means our env var and configmap handling is architecturally different. That's
fine, but it means we can't produce identical output for those sections.
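The tf-proto pattern, for concreteness (the value and key names are illustrative):

```yaml
# values.yaml -- the value itself contains a template expression
configMountPath: "/etc/{{ .Release.Name }}/config"
```

```yaml
# template -- tpl renders the string against the chart's own context
mountPath: {{ tpl .Values.configMountPath . }}
```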
Checksum-driven rollouts -- tf-proto uses UUID-based checksums in pod
annotations (checksum/configmap, checksum/container_image) to trigger
rolling restarts on config changes. We have nothing comparable.
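The mechanism, sketched with the common sha256sum variant (tf-proto derives the value from a UUID instead, but the trigger works the same way: any change to the annotation value changes the pod template, which forces a rollout):

```yaml
spec:
  template:
    metadata:
      annotations:
        checksum/configmap: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```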
Probe type dispatch -- tf-proto's probe config supports 4 probe types (exec/grpc/tcpSocket/httpGet) via if-else chains, with all timing fields parameterized. We bake probe structure with only the port templated. Documented as an acceptable divergence -- probe type changes are rare enough to warrant a component definition change, not a values override.
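The dispatch shape, with field names assumed for illustration:

```yaml
livenessProbe:
  {{- if eq .Values.probe.type "httpGet" }}
  httpGet:
    path: {{ .Values.probe.path }}
    port: {{ .Values.probe.port }}
  {{- else if eq .Values.probe.type "tcpSocket" }}
  tcpSocket:
    port: {{ .Values.probe.port }}
  {{- else if eq .Values.probe.type "grpc" }}
  grpc:
    port: {{ .Values.probe.port }}
  {{- else if eq .Values.probe.type "exec" }}
  exec:
    command: {{- toYaml .Values.probe.command | nindent 6 }}
  {{- end }}
  initialDelaySeconds: {{ .Values.probe.initialDelaySeconds }}
  periodSeconds: {{ .Values.probe.periodSeconds }}
```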
quote filter -- tf-proto quotes all label values
({{ .Values.labels.appPartOf | quote }}). We don't emit quote. This matters
for YAML correctness when values contain special characters.
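Why it matters: without quoting, YAML re-types values that happen to look like numbers or booleans (the label key below is illustrative):

```yaml
# Unquoted, a value of "1.20" parses as the float 1.2 and "true" as a
# boolean -- both invalid as Kubernetes label values, which must be strings.
labels:
  app-part-of: {{ .Values.labels.appPartOf | quote }}
```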
Lower-significance gaps, each affecting only a few charts:
- Ingress/Route resources (only 3 charts use them)
- RBAC resources (only models-reconciler)
- StatefulSet support (postgres, models-reconciler)
- kindIs type-checking in range blocks (for mixed-type configmap data)
- Multiple configmap support (additionalConfigMaps)
The pipeline is good at the mechanical work of turning CUE values into
templated YAML. But tf-proto's charts aren't just templated YAML -- they
have a shared templating layer (_helpers.tpl + include) and compositional
patterns (map ranges, tpl evaluation, registry override logic) that represent
real deployment flexibility.
The most impactful next step would be _helpers.tpl generation with
include-based labels, because it's foundational to how charts are structured
and it replaces the increasingly complex label substitution logic we've been
building. After that, map-based range iteration would unlock env vars and
annotations, which are the other universal pattern across all tf-proto charts.