Helm 2.x uses a server-side component called Tiller. Typically Tiller is installed in a global namespace; alternatively you can install a Tiller in each namespace and configure your helm CLI to talk to the right Tiller in the right namespace.
However, Tiller has a number of issues…
The problem with Tiller
Tiller complicates RBAC, since it's not using the RBAC of the user or pod running the helm commands to read/write Kubernetes resources - helm talks to the remote Tiller pod to do the work. If you are using a global Tiller then it's often bound to something like the cluster-admin role, which means anyone running helm commands effectively sidesteps RBAC completely! :)
Helm also forces all releases to have a unique name within a Tiller. This means if you have one global Tiller then each release name must be globally unique across all namespaces, which leads to very long release names since the namespace typically has to be appended too. As the release name is often included in the service name in many charts, service names end up very different between environments, which breaks much of the promise of canonical service discovery in Kubernetes.
We prefer to use the same service names in every environment (development, testing, staging, production) to minimise the amount of per-environment configuration required, which avoids manual effort and reduces errors. E.g. refer to http://my-service/ in your app and it should just work in every namespace/environment your app is deployed in, without wiring up special configuration.
Tiller can also cause lots of version conflicts between helm client and Tiller versions. We've seen this a lot in Jenkins X this year: e.g. a user has, say, helm 2.9 installed locally and installs Jenkins X; then a build pod with helm 2.10 runs and barfs because the tiller and helm versions don't match.
Helm 3 will be tiller-less
Whenever Helm 3 shows up, Tiller will be a thing of the past - which is awesome - all of the above issues will be fixed! The only downside is that no-one outside of Microsoft has any clue when Helm 3 will be a thing. Very little is happening on the public helm 3 branch and no issues or pull requests are being accepted yet. I'm hoping there's loads of activity on a private branch somewhere - but it's probably a while away from being public and GA.
Going tiller-less on Helm 2.x
Until Helm 3 arrives, the Jenkins X community needed a nice workaround for Tiller on helm 2.x.
So now, if you want to use Jenkins X without Tiller, there's a new magic command line argument --no-tiller you can use when creating a cluster:
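For example, assuming you are creating a GKE cluster (the provider here is just for illustration; the --no-tiller flag is the same for other providers):

```shell
# create a cluster with tiller disabled; helm runs in template mode instead
jx create cluster gke --no-tiller
```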
To be able to change the helm behaviour via feature flags, we abstracted away the low level calls to the helm CLI behind some jx step helm commands. E.g. to apply a helm chart in an environment pipeline we use…
jx step helm apply
This lets us use feature flags to use different helm behaviours.
What --no-tiller means is that helm switches to template mode: we no longer internally use helm install mychart to install a chart, we actually use helm template mychart instead, which generates the YAML using the same helm charts and the standard helm configuration management via --set and values.yaml files.
Then we use kubectl apply to apply the YAML.
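Roughly speaking, the install step becomes something like this (the chart, release name and values shown are placeholders for illustration):

```shell
# render the chart to plain YAML using the usual helm configuration mechanisms
helm template mychart --name my-release -f values.yaml --set image.tag=1.2.3 > resources.yaml

# then apply the generated YAML to the cluster
kubectl apply -f resources.yaml
```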
Since we are using GitOps in Jenkins X, it turns out we don't really need to rely on Helm's use of Kubernetes resources to store environment-specific configuration values - everything is already in git!
One added complication though: with Helm you can add and remove resources inside a chart, and as you upgrade to newer versions of the chart the old resources get automatically removed. Helm also provides a way to remove a release by name, removing all of its resources.
So to preserve the helm semantics of removing old resources from a chart in newer versions (e.g. removing a microservice from the Staging environment) or removing an entire release we do the following:
First, the YAML generated by helm template is post-processed to add 2 labels: jenkins.io/chart-release (the release name) and jenkins.io/version (the release version).
Then after an upgrade we remove any resources with the same helm release name but a different version (to remove any old resources) via the selector jenkins.io/chart-release=my-release,jenkins.io/version!=1.2.3.
To remove a release completely, we just delete all resources matching the label selector jenkins.io/chart-release=my-release.
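With kubectl that looks something like the following (using the `all` category here is illustrative; in practice you'd delete whichever resource kinds the chart generates):

```shell
# after upgrading to 1.2.3, clean up resources left over from older versions of the release
kubectl delete all -l 'jenkins.io/chart-release=my-release,jenkins.io/version!=1.2.3'

# remove the entire release
kubectl delete all -l 'jenkins.io/chart-release=my-release'
```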
One nice benefit of using helm template to generate the YAML and then kubectl apply to apply it is that we can optionally use tools like kustomize to post-process the output of helm template, allowing resources to be overridden or enriched in ways that the chart author did not think of.
Other helm feature flags
Our first experiment at removing Tiller involved running the tiller process locally. We still have the feature flag --remote-tiller=false, which means that Jenkins X will ensure there's a local tiller process running and that the helm CLI is pointed at the localhost port. This at least helps avoid the RBAC issues with Tiller, since tiller then reuses the same RBAC rules as the caller of helm.
It turns out this kinda works; though we found some issues around multi-team support, so we ended up moving to the template mode described above, which works much more reliably.
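For reference, the local-tiller pattern looks roughly like this (a sketch of the general approach rather than the exact commands Jenkins X runs; 44134 is tiller's default listen port):

```shell
# run the tiller binary locally; it listens on :44134 by default
tiller &

# point the helm CLI at the local tiller rather than one inside the cluster
export HELM_HOST=localhost:44134

# helm commands now operate with the caller's own RBAC rules
helm ls
```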
Another feature flag we added allows different helm binaries to be used, so that we could switch between, say, helm for helm 2.x and helm3 for the 3.x version, letting folks experiment with alphas of helm 3.
Though it's looking like helm 3 is still some way off, so it's not recommended any time soon; but as helm 3 gets near to RC stage we'll be able to reuse the helm 3 feature flag again to let folks experiment with helm 3 until it's GA and we make it the default in Jenkins X.
If you use helm then we highly recommend you avoid tiller!
We're working on Jenkins X 2.0 - most of its features are already available, hidden behind feature flags. In Jenkins X 2.0 we will disable Tiller by default, along with enabling other things like Prow integration and serverless Jenkins by default (more on those in a separate blog!).
We are also really looking forward to helm 3! :). Helm rocks, but tiller does not!