
Rendered Manifests

Kubernetes / Jan 29, 2026 / 7 minutes to read

When I first started running a Kubernetes homelab, I deployed applications with a pattern built on Helm alongside ArgoCD. This was born of my frustrations at the time with my workflow for managing Docker Compose deployments on my mini PCs. I can't say going straight to Kubernetes for that reason alone is a good idea, but I liked the concept and wanted to try it and learn.

So my first approach paired Helm with ApplicationSet generators in ArgoCD. I used a folder structure in my infrastructure repo to organize my apps, one directory per namespace, like this:
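A simplified version of that layout (the application names here are just placeholder examples, not my real list):

```
clusters/cl01tl/
├── gitea/
│   ├── Chart.yaml
│   └── values.yaml
├── immich/
│   ├── Chart.yaml
│   └── values.yaml
└── vaultwarden/
    ├── Chart.yaml
    └── values.yaml
```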

In each application's folder was a Helm Chart, and the ApplicationSet generator created an Application for each Chart. Fairly straightforward, and it proved easy to maintain and modify. But something didn't feel quite right: I didn't find out what was actually being deployed until it reached my cluster and showed up in ArgoCD's diff. Once I heard about the rendered manifests pattern, I saw it as my solution.

Preparation

However, it was not as straightforward to migrate to rendering the Helm Charts ahead of ArgoCD consuming them. It took a bit of planning, specifically to work out how to structure the ApplicationSets, the repository directory structure, where to place the templates, and, perhaps most unknown to me, how exactly to render them appropriately.

I started by working backwards from the ApplicationSets. I knew I didn't want all of my applications controlled this way, so I made sure to have exceptions, particularly for any deployments in the kube-system namespace such as Cilium. I also took this time to consolidate away from having separate application types, since I never really used them in any meaningful way. That left me with this:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: application-set-cl01tl
  namespace: argocd
  labels:
    app.kubernetes.io/name: application-set-cl01tl
    app.kubernetes.io/instance: argocd
    app.kubernetes.io/part-of: argocd
spec:
  syncPolicy:
    applicationsSync: create-update
    preserveResourcesOnDeletion: false
  generators:
    - git:
        repoURL: http://gitea-http.gitea:3000/alexlebens/infrastructure
        revision: manifests
        directories:
          - path: clusters/cl01tl/manifests/*
          - path: clusters/cl01tl/manifests/stack
            exclude: true
          - path: clusters/cl01tl/manifests/cilium
            exclude: true
          - path: clusters/cl01tl/manifests/coredns
            exclude: true
          - path: clusters/cl01tl/manifests/metrics-server
            exclude: true
          - path: clusters/cl01tl/manifests/prometheus-operator-crds
            exclude: true
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: http://gitea-http.gitea:3000/alexlebens/infrastructure
        targetRevision: manifests
        path: '{{path}}'
      destination:
        name: in-cluster
        namespace: '{{path.basename}}'
      revisionHistoryLimit: 3
      ignoreDifferences:
        - group: ""
          kind: Service
          jqPathExpressions:
            - .spec.externalName
      syncPolicy:
        automated:
          enabled: true
          prune: true
          selfHeal: false
        retry:
          limit: 3
          backoff:
            duration: 1m
            factor: 2
            maxDuration: 15m
        syncOptions:
          - CreateNamespace=true
          - ApplyOutOfSyncOnly=true
          - ServerSideApply=true
          - PruneLast=true
          - RespectIgnoreDifferences=true

And with that, I could see how I wanted my directory structure to look while keeping it simple. Manifests get rendered to a separate branch, as is often recommended, and I use the same structure there; instead of an "application type" I use the kind of packaging system. This way I can easily adopt Kustomize, CDK8s, or other packaging systems in the future and render them separately.
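Roughly, the idea looks like this (paths and names simplified for illustration, not my exact repo layout):

```
source branch                          manifests branch (rendered)
clusters/cl01tl/helm/gitea/       →    clusters/cl01tl/manifests/gitea/
clusters/cl01tl/helm/immich/      →    clusters/cl01tl/manifests/immich/
```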

Workflows

Now onto the harder part. I had seen a few projects that could do this for me. The Source Hydrator is one, and it is part of the ArgoCD project, but at this time it's still in alpha. Kargo is a more comprehensive CI/CD pipeline project, also developed primarily by Akuity. I was rather tempted by it, but in the end I decided against it. I did not need all the extra complexity it would bring; it might be fun, but it might also be too hard to maintain at homelab scale.

That left me with the Actions feature of my Gitea instance. It works very similarly to GitHub Actions and is a simple workflow scripting system. It also means I have to learn and understand the whole process myself, which is one of the main reasons I even have a homelab at all.

It took a bit of trial and error, but I managed to build a system that just works. Some of the early challenges were working out path structures within the workflow container, particularly managing the two branches and the expectations around them.

Then there was properly figuring out the right git diff to narrow down exactly which folders to render. I started with every change rendering all the Charts, but that took upwards of 20 minutes. I still keep that workflow around as a fallback to catch anything that gets missed, but I haven't needed it.
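The narrowing step can be sketched like this. The paths and the stubbed file list below are illustrative only; in the real workflow the list would come from `git diff --name-only` between the two commits:

```shell
#!/bin/sh
# Sketch: derive the set of chart folders to render from a list of changed files.
# In the workflow this list comes from something like:
#   git diff --name-only "$BASE_SHA" "$HEAD_SHA"
# Here it is stubbed so the filtering step is visible on its own.
changed_files="clusters/cl01tl/helm/gitea/values.yaml
clusters/cl01tl/helm/gitea/Chart.yaml
clusters/cl01tl/helm/immich/values.yaml
README.md"

# Keep only chart paths, take the chart folder name, and de-duplicate
charts=$(printf '%s\n' "$changed_files" \
  | grep '^clusters/cl01tl/helm/' \
  | cut -d/ -f4 \
  | sort -u)

echo "$charts"
```

Each name in `$charts` then drives one render, instead of rendering everything on every push.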

I also learned I needed to separate each kind of change I use: push, merge, and automerge. I use Renovate and have it set to automerge any patch updates to Charts and images, so I want those to automerge in their manifests as well. But for minor or major updates, or changes I push myself, I want to review the manifests manually.

The differences in these behaviors are what led to having 4 workflow files with a lot of duplication in them. Right now I see that as more easily maintainable. I plan to eventually reduce this and handle the different cases with more internal logic, but it's work that doesn't have much benefit right now.
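As a rough sketch (the file names and branch names here are assumptions, not my actual workflows), the split is mostly about the trigger blocks, since Gitea Actions uses the same syntax as GitHub Actions:

```yaml
# .gitea/workflows/render-push.yaml - changes I push directly
on:
  push:
    branches: [main]

# .gitea/workflows/render-merge.yaml - manually reviewed PRs
on:
  pull_request:
    types: [closed]
# ...plus a job-level condition that checks the PR was actually
# merged rather than closed without merging.
```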

Rendering

The real heart of the workflow to render the Charts is this:

echo ""
echo ">> Rendering templates ..."
case "$chart_name" in
  "stack")
    echo ""
    echo ">> Special Rendering for stack into argocd namespace ..."
    TEMPLATE=$(helm template "$chart_name" ./ --namespace argocd --include-crds --dry-run=server --api-versions "gateway.networking.k8s.io/v1/HTTPRoute")
    ;;
  "cilium" | "coredns" | "metrics-server" | "prometheus-operator-crds")
    echo ""
    echo ">> Special Rendering for $chart_name into kube-system namespace ..."
    TEMPLATE=$(helm template "$chart_name" ./ --namespace kube-system --include-crds --dry-run=server --api-versions "gateway.networking.k8s.io/v1/HTTPRoute")
    ;;
  *)
    echo ""
    echo ">> Standard Rendering for $chart_name ..."
    TEMPLATE=$(helm template "$chart_name" ./ --namespace "$chart_name" --include-crds --dry-run=server --api-versions "gateway.networking.k8s.io/v1/HTTPRoute")
    ;;
esac

echo ""
echo ">> Formatting rendered template ..."
echo "$TEMPLATE" | yq '... comments=""' | yq 'select(. != null)' | yq -s '"'"$OUTPUT_FOLDER"'" + .kind + "-" + .metadata.name + ".yaml"'

# Strip comments again to ensure formatting correctness
for file in "$OUTPUT_FOLDER"/*; do
  yq -i '... comments=""' "$file"
done

I need to handle the special cases around apps that must live in the kube-system namespace, because otherwise I prefer to use the application name as its namespace. The "stack" application also needs to render into the argocd namespace, as it forms the App of Apps pattern. I need to specify the specific API version of the HTTPRoute CRD to avoid conflicts in some of the external Helm Charts that I use. This resource is relatively new in the Kubernetes space, and I hope this particular quirk gets smoothed over in the future.

But the major line I want to point out is this:

echo "$TEMPLATE" | yq '... comments=""' | yq 'select(. != null)' | yq -s '"'"$OUTPUT_FOLDER"'" + .kind + "-" + .metadata.name + ".yaml"'

This cleans up the templates and makes them deterministic. The first two commands in the pipe remove comments and empty documents. While comments don't strictly need to be stripped, at this point there is no more use for them. Removing empty documents, however, matters later in the pipe, as they would otherwise produce empty or null files that can become problematic.

The last command is certainly required: when helm template runs, its output, often stored in just one file, is not sorted. Every change can, and often will, affect the order, creating a lot of clutter in the diff for the Pull Request. Splitting each manifest into its own file named by kind and name ensures deterministic output. I also find it a lot easier to read in VS Code.

Conclusion

Once I finally sorted out the many small details, I now have a much, much better system for making changes to my cluster. Any time there is a new change, I can see in the PR that gets created exactly what is being deployed. This gives me a lot more confidence and has certainly blocked bad configuration from being deployed. It also made ArgoCD perform a bit better, though at my scale that hasn't been very noticeable.

The only real downside is the additional step of approving the merge. Sometimes I miss pushing a change and watching ArgoCD react right away. But that convenience is not worth the cost, and I am quite happy to pay this slight penalty.

As for follow-on tasks, I plan, as mentioned, to consolidate my workflows; I still feel uneasy about having 4 separate files. The other task I am planning is introducing CDK8s alongside Helm. That will require its own workflow, and at that point it will be the best time to refactor this system. Until then I expect the current state will be just fine.

Kubernetes Homelab Helm