
> Use Helm. Or some other tool for managing Kubernetes manifests, I’m not picky – the important thing is that you ~never directly use kubectl apply, edit, or delete. The resource lifecycle needs to be findable in version control.

I have to partly disagree with that one. I find tools like Helm obscure things that should be readily visible. My favoured method is to keep manifests in full (which you can source from `helm template`!) as pure YAML files and version those. If possible, freeze versions and go through regular patch cycles to review updates. Whether you apply them through `kubectl apply` or through Argo is irrelevant. I treat the repo as the state and the running cluster as stateless. If it's borked, just redeploy. I don't see it as useful to care too much about the in-cluster resource life-cycle. But I completely agree that resources need to be version-controlled.
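
For concreteness, the workflow is roughly the following (chart name, version, and paths here are just illustrative):

    # Render the chart to plain YAML and commit the result
    helm template ingress-nginx ingress-nginx/ingress-nginx \
      --version 4.10.0 \
      -f values/ingress-nginx.yaml \
      --output-dir manifests/ingress-nginx
    git add values/ manifests/
    git commit -m "ingress-nginx: render chart 4.10.0"

    # The repo is the state; the cluster is disposable and can be rebuilt from it
    kubectl apply -R -f manifests/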



> I find tools like Helm to obscure things that should be readily visible.

Goodness, is this ever true. Particularly with Prometheus Operator and all the monitoring bits that go around that. Dealing with this infrastructure breaks a number of the points in the article, like "Deploy everything all the time." and "No in-code fallbacks for configs."

A previous team built this monitoring infrastructure, so when I had to go back in and re-deploy, a bunch of the Helm charts were broken (YAML errors and the like). It hadn't been re-deployed in likely 4-6 months.

Then a lot of the components don't rely on their default configs, but the default configs are there nevertheless. So another team was troubleshooting an issue and reached the conclusion that the config for AlertManager was empty, but it isn't: the config AlertManager actually uses is in a different directory from the default config. Then an issue with Prom2Teams came up. Prom2Teams logs an error that it doesn't have permission to load its default configuration file (Prom2Teams runs as a non-root user; the file is readable only by root), so another team concluded that Prom2Teams can't read its configuration file. But that's not the file it actually uses to configure the service; it's just an unused default.

So two red herrings as a result of default config files that aren't being used at all, compounded by Helm obscuring components that should be visible, and ultimately stemming from the inherent complexity of the system.

But in reality, there are issues that make this worse which are unrelated to Helm, Kubernetes, and the Prometheus stack.


Agree with this. Helm is a great tool for making really terrible abstractions over well-designed native configuration. Half the time I end up either forking charts to fix them or just writing my own.

It can be used for good and it can be used for ill. Less is more with Helm; otherwise it will create a mess.

This (and CNI) are the rough bits of Kubernetes.


I recently discovered `kustomize` and the `kubectl apply -k` flag (which uses `kustomize`), which makes keeping full manifests pretty straightforward. There are only one or two things I dislike about `kustomize`, but they can be worked around.
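
For anyone who hasn't tried it, a minimal setup is just a `kustomization.yaml` next to the manifests (file names below are illustrative):

    # kustomization.yaml listing the plain manifests it manages
    cat > kustomization.yaml <<'EOF'
    resources:
      - deployment.yaml
      - service.yaml
    EOF

    # Build to inspect the output, or apply it directly
    kubectl kustomize .
    kubectl apply -k .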


Kustomize has the best local dev environment for k8s that I have found; hot-reloading your cluster as you edit manifests gives a very tight development feedback loop.

For my money it is worth spending the time to grok the slightly funky overlay semantics, at least for teams with infra focus / dedicated SRE.

However, I'm not certain I'd recommend it for small teams doing full-stack DevOps, i.e. engineers deploying their own code who aren't k8s experts; if you only touch the Kustomize layer infrequently, it can be a bit annoying/unintuitive.

Note you can still use a two-step GitOps process where the Kustomize scripts write to your raw config repo; I think this is a good middle option that keeps the infra legible while allowing developers the ergonomics of a bit of dynamic logic in their deploy pipelines (e.g. parameters for each environment).
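
Sketching that two-step flow, assuming a conventional base/overlays layout and a separate repo ("rendered-config" here, a made-up name) holding the rendered output:

    # Overlays hold per-environment parameters (replicas, image tags, ...)
    kustomize build overlays/staging > ../rendered-config/staging.yaml
    kustomize build overlays/prod > ../rendered-config/prod.yaml

    # Commit the rendered repo; that is what CI / Argo / Flux actually applies
    cd ../rendered-config
    git add . && git commit -m "render overlays"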


Agree 100%. Plus Helm runs against a basic tenet of microservices (the usual architecture for apps deployed on k8s these days). People tend to bundle services together when using Helm and the like, which, over time, couples those services together.


But don't you actually want a certain amount of coupling in the operations part? After all, you need to ensure services run in a place and in a manner that lets them find and talk to each other, usually in a combination determined by the goal of a specific deployment (i.e. sometimes you might not need a rail-cargo-service because the customer only ships by truck, etc.). Then scaling/autoscaling (if any) needs to be compatible, versions need to be within a certain range, and any central data store must be coordinated as well, not to speak of service meshes, chaos experiments and the like. It's a good thing to develop services with minimal coupling, but that stage has different risks and goals from devops/ops, at least in my experience on both sides of the dev/ops transition zone.


> But don't you actually want a certain amount of coupling in the operations part?

In my opinion, for sure. There's a balance to be struck between "too coupled" and "too decoupled", rather than going too far to either side. It's worth saying that this is also contextual; some projects may be fine with either more or less coupling than others, and that's OK.


I understand but it is a slippery slope.


> My favoured method is to keep manifests in full (which you can source from `helm template`!) as pure yaml files and version that.

I do something like this, but I normally find that Helm charts are not parametrized the way I want, so I have to manually modify the output manifests. When updating from Helm, it can be challenging for other team members to understand which bits we want to take from the new `helm template` output and which we don't. How do you deal with this?

Sometimes I update the Helm chart to fit our use case, but it's still hard if that change is not merged upstream (because that means maintaining our own version of the chart).


> How do you deal with this?

Isn't that the problem that kustomize is designed to solve? Flux even has a first-class declaration for "take this thing, then kustomize it". The `helm template` into git pattern could be extended to "helm template, write kustomize files, then version control both", since that would capture the end state as well as the diffs applied on top of the vanilla chart.

I think "maintaining our own version of the helm chart" is only painful if the chart itself is moving around a lot, versus using Helm releases just to carry changes of the `--set image.tag=$(( tag + 1 ))` type.
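
A rough sketch of that combination (chart name, release name, and paths are made up): render the chart verbatim into the repo, then express local modifications as a kustomize layer on top, so both the vanilla output and the diffs are version-controlled:

    # 1. Render the upstream chart untouched into a vendor directory
    helm template my-release upstream/some-chart \
      -f values.yaml --output-dir vendor

    # 2. Kustomization that consumes the rendered files and applies local patches
    cat > kustomization.yaml <<'EOF'
    resources:
      - vendor/some-chart/templates/deployment.yaml
      - vendor/some-chart/templates/service.yaml
    patches:
      - path: patches/add-resource-limits.yaml
        target:
          kind: Deployment
          name: some-chart
    EOF

    # 3. Apply the combined result
    kubectl apply -k .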


Ah that’s a great suggestion! I didn’t think about it even though I use kustomize a lot.


I'd prefer to deploy full manifests as well, but it's not my impression that you can entirely obtain those through `helm template`. Certain variables, like `.Release.Namespace`, are only available when the chart is actually being installed, AFAIK.

You will get a manifest, but it will usually be missing certain parts.

I completely agree with the philosophy of just redeploying the cluster if it's borked. I'm using NixOS myself for the task, and was originally trying to obtain full manifests through `helm template`, so I'd love to know if I was just missing something.


This isn't a problem we have seen before, and we deploy almost all of our third-party applications in this manner.

When we generate the templates, we use `-n` to override the namespace.

The command looks like this:

    helm template CHART_NAME CHART_PATH \
      -f CHART_INPUT_PATH/config.yaml \
      --output-dir CHART_OUTPUT_PATH/manifests \
      -n NAMESPACE \
      --include-crds \
      --render-subchart-notes \
      --kube-version KUBE_VERSION


My memory didn't quite serve me right, so it's not exactly as I described, and I can see that, as you say, providing the namespace to the template command does work.

The problem for me is that setting the namespace that way with `helm template` does not seem to add it to any manifests that don't explicitly set their namespace to `.Release.Namespace`.

The rancher 2.6.8 chart does not set this for all manifests, only for some, so when I set the namespace through `helm template` and deployed everything in the manifests folder, I got some objects in the default namespace (because they had none specified) and some in the intended namespace, resulting in an installation that did not work.

As another reply to my comment suggested, this can of course be handled by post-processing the output of `helm template`, though at the time I was not certain the problem was limited to this namespace issue, so I didn't feel lucky enough to go down that route. :-)
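
One cheap sanity check after rendering (just a heuristic, not a proper fix; the path is a placeholder) is to list rendered files that never mention a namespace at all:

    # Files under the rendered manifests that contain no "namespace:" line
    grep -rL 'namespace:' manifests/rancher/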


About the namespace: I usually modify it manually if it's only a few files, otherwise I use some post-processing like https://github.com/helm/helm/issues/3553#issuecomment-417800...
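
Another option, distinct from the linked approach, is to let kustomize stamp the namespace onto everything, since the `namespace:` field in a kustomization is applied to all resources it includes (namespace and file paths here are just placeholders):

    cat > kustomization.yaml <<'EOF'
    namespace: my-namespace   # applied to every namespaced resource listed below
    resources:
      - manifests/deployment.yaml
      - manifests/service.yaml
    EOF
    kubectl apply -k .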


That could definitely work. And I considered it a bit but didn't feel confident that the problem would be limited to the namespace, so it felt like the wrong tool for the job at the time. :-)

Thank you for the suggestion though. It's comforting to hear that it may actually be a viable approach.


We've deployed our helm charts with Spinnaker. Spinnaker has a nice UI that shows which charts are deployed, which environment variables were used, and the manifest files themselves.



