GitOps your cloud native pipelines
Tekton pipelines are cloud native and designed from the ground up for Kubernetes and the cloud:
- there’s no single point of failure and the pipelines are elastically scalable
- each pipeline is completely declarative and self-defined
- each pipeline executes independently of any others
- pipelines are orchestrated via the sophisticated Kubernetes scheduler:
  - pipeline-specific metadata can drive resource limits and node selectors: memory, CPU, machine type (GPU, Windows/macOS/Linux etc.); see the sketch after this list
- it's easy to associate pipelines with Cloud IAM roles, so you avoid uploading cluster admin secrets to a public CI service, which really helps security and reduces the risk of accidental bitcoin mining on your cloud account
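As a minimal sketch of the scheduling point above, a Tekton PipelineRun can set per-step resource requests and a node selector via its pod template. The names below are hypothetical, and the accelerator label assumes GKE GPU node pools:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: gpu-build                # hypothetical name
spec:
  podTemplate:
    nodeSelector:
      cloud.google.com/gke-accelerator: nvidia-tesla-t4  # assumes GKE GPU nodes
  pipelineSpec:
    tasks:
    - name: build
      taskSpec:
        steps:
        - name: compile
          image: golang:1.19
          resources:             # per-step resource requests/limits
            requests:
              cpu: "2"
              memory: 4Gi
          script: |
            go build ./...
```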
In a previous blog we talked about how you can accelerate your use of Tekton with Jenkins X.
We are moving towards a microservice kind of world with many teams writing many bits of software in many repositories. So there are lots and lots of pipelines. These pipelines keep getting more sophisticated over time, doing much more (all kinds of building, analysis, reporting, testing, ChatOps and so on), and the software, images and approaches they use keep changing.
So how can we manage, configure and maintain many pipelines across many repositories, where each repository can customize anything it needs, yet everything remains easy to maintain continuously, easy to understand and easy to tool around?
We’ve tried to tackle this problem in a number of ways over the years; each has pros and cons.
One option is to put all your pipelines in a shared library. You can then reference the pipelines by name in each of your repositories.
But what if you want to change a bit of a pipeline for a specific repository? If you change it globally for everyone you can break things. You may just want local customisation for your repository only.
You can add parameters to your pipelines. Parameters are quite verbose on Tekton Pipelines and PipelineRuns, and it's hard to think up front of every parameterization that downstream repositories may require: changing any image, changing any command line argument in any step, adding or changing environment variables or volumes, or adding extra steps before or after a particular step. It soon gets very complex and results in pipelines that are hard to understand and use.
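To illustrate how quickly this gets verbose, here is a minimal Tekton Task sketch that parameterizes just one image and one argument; every further customization point needs the same ceremony (the names here are hypothetical):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build                # hypothetical task
spec:
  params:
  - name: builderImage       # one param just to swap the image
    type: string
    default: golang:1.19
  - name: buildFlags         # another just to tweak one argument
    type: string
    default: ./...
  steps:
  - name: build
    image: $(params.builderImage)
    script: |
      go build $(params.buildFlags)
```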
Another option, if you need to change a pipeline file, is to copy the entire file or create a fork. But then you end up with hundreds of copies or forks of pipelines that are hard to synchronise and manage. You end up using ancient image versions or outdated approaches in some repositories, which leads to a maintenance nightmare. How do you roll out security updates to images across all those repositories, copies and forks?
Another approach we tried is using a tool like kpt to share YAML files across git repositories and then upgrade them via git. This does work quite well, though the downside is that whenever you upgrade to a new version (e.g. we roll out a new pipeline catalog or a new image change to a tool for security reasons) you need to generate a pull request on every git repository, and you usually end up with merge conflicts since the Tekton YAML is not trivial; even fairly minor local customizations lead to merge conflict hell.
So how can you apply the benefits of GitOps to your cloud native pipelines while avoiding copy-pasting lots of YAML into all of your repositories, keeping things easy to understand and flexible enough that any repository can customize things when required, yet painless to move reliably forward as pipeline catalogs and images change?
GitOps your pipelines
- store your pipelines as declarative YAML files inside each of your git repositories.
- use the standard Tekton YAML syntax so that you get IDE support and easy linting
This lets each git repository configure which pipelines are triggered by which events, with which pipeline steps.
If you need to edit the pipelines in any repository, they are right there in git; each repository can then use its own version and configuration if required. This lets pipelines and repositories change over time independently to help you accelerate.
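For example, with Jenkins X and Lighthouse, a repository declares its triggers and pipeline files under `.lighthouse/jenkins-x/`. Here is a minimal sketch, assuming the standard Lighthouse trigger format; the pipeline file names are illustrative:

```yaml
# .lighthouse/jenkins-x/triggers.yaml
apiVersion: config.lighthouse.jenkins-x.io/v1alpha1
kind: TriggerConfig
spec:
  presubmits:
  - name: pr
    context: pr
    alwaysRun: true
    optional: false
    source: pullrequest.yaml   # pipeline run on each pull request
  postsubmits:
  - name: release
    context: release
    alwaysRun: true
    optional: false
    source: release.yaml       # pipeline run on merge to main
```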
Sharing Tasks and Steps across repositories
Rather than copy-pasting task and step YAML between repositories, we can refer to a Task, or a Step in a Task, as follows:
- refer to all the steps in a shared task by using:

  ```yaml
  taskSpec:
    steps:
    - image: uses:sourceURI
  ```
- refer to a single named step from a shared task by using:

  ```yaml
  taskSpec:
    stepTemplate:
      image: uses:sourceURI
    steps:
    - name: mystep
  ```
The source URI notation is enabled by a special `uses:` prefix on a step's `image`, or, when a step's `image` is blank, by a `uses:` prefix on the image in the `stepTemplate:`.
You can refer to the detailed documentation on how step inheritance and overriding works.
For a github.com source URI we use the syntax:

```yaml
- image: uses:owner/repository/pathToFile@version
```
This references the https://github.com repository for `owner/repository`, and `@version` can be a git tag, branch or SHA.
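For example, a step could pull in a task from a catalog repository like this (the repository path and tag are illustrative):

```yaml
- image: uses:jenkins-x/jx3-pipeline-catalog/tasks/go/release.yaml@v1.2.3
```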
If you are not using github.com to host your git repositories, you can access a pipeline task or step from your custom git server by using the `uses:lighthouse:` prefix:
```yaml
- image: uses:lighthouse:owner/repository/pathToFile@version
```
We recommend you version everything with GitOps so you know exactly which versions are being used from git. However, you can use `@HEAD` to reference the latest version.
To use a locked-down version based on your cluster's version stream, you can use `@versionStream`, which resolves to the git SHA configured for that repository in the version stream.
The nice thing about `@versionStream` is that the pipeline catalog you inherit tasks and steps from is locked down to an exact SHA in the version stream, yet you avoid having to go through every one of your git repositories whenever you upgrade a pipeline catalog.
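Putting the version options together, a step reference can take any of these forms (the repository path is illustrative):

```yaml
steps:
# pin to a git tag for full reproducibility
- image: uses:owner/repository/pathToFile@v1.2.3
# always track the latest version
- image: uses:owner/repository/pathToFile@HEAD
# resolve to the SHA configured in your cluster's version stream
- image: uses:owner/repository/pathToFile@versionStream
```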
Reusing Tasks and Steps from Tekton Catalog
The Tekton Catalog git repository defines a ton of Tekton tasks you can reuse in your pipelines via the `image: uses:sourceURI` notation inside any pipeline file, such as your `.lighthouse/jenkins-x/mypipeline.yaml` file, like this:
```yaml
steps:
- image: uses:tektoncd/catalog/task/git-clone/0.2/git-clone.yaml@HEAD
```
This will then include the steps from the `git-clone.yaml` file.
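In context, a complete pipeline file could look roughly like the following sketch; the task name and the local build step are assumptions for illustration:

```yaml
# .lighthouse/jenkins-x/mypipeline.yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: mypipeline
spec:
  pipelineSpec:
    tasks:
    - name: clone-and-build       # hypothetical task name
      taskSpec:
        steps:
        # inherit all the clone steps from the Tekton Catalog
        - image: uses:tektoncd/catalog/task/git-clone/0.2/git-clone.yaml@HEAD
        # then add a local step of your own
        - name: build
          image: golang:1.19
          script: |
            go build ./...
```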
How it looks
Notice that we don't have to copy and paste the exact details of the images, commands, arguments, environment variables and volume mounts required for each step; we can just reference them via git. Each pipeline in each repository can also reference different versions if required.
Customizing an inherited step
You can edit the step in your IDE and add any custom properties, such as `volumeMounts`; those values then override the inherited step.
For example, you can change any command line, add an environment variable or add a new volume mount without copy-pasting the whole step. Below we change the `script` value of the `jx-variables` step:
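Here is a minimal sketch of such an override, assuming the task inherits its steps from a catalog file via the `stepTemplate:`; the catalog path is an assumption:

```yaml
taskSpec:
  stepTemplate:
    # assumed catalog location providing the jx-variables step
    image: uses:jenkins-x/jx3-pipeline-catalog/tasks/go/release.yaml@versionStream
  steps:
  - name: jx-variables
    # only the script is overridden; everything else is inherited
    script: |
      #!/usr/bin/env sh
      source .jx/variables.sh
      echo "building version $VERSION"
```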
Any extra properties in the steps are used to override the underlying `uses:` step.
Inlining a pipeline step locally
If you want to edit a step that is inherited from a pipeline catalog, just run the jx pipeline override command from a clone of your repository:

```bash
jx pipeline override
```
This will then prompt you to pick which pipeline and step, inherited via the `image: uses:sourceURI` notation, you want to override. Once chosen, the step will be inlined into your local file so you can edit any of its properties.
You can then use a git diff to see the changes and remove any properties you don't wish to override.
Viewing the effective pipeline
To see the actual Tekton pipeline that would be executed from your local source directory, run the jx pipeline effective command:

```bash
jx pipeline effective
```
If you want to open the effective pipeline in your editor, such as VS Code or IntelliJ IDEA, you can do:

```bash
jx pipeline effective -e code
jx pipeline effective -e idea
```
If you want to always view the effective pipeline in your editor, define the `JX_EDITOR` environment variable:

```bash
export JX_EDITOR="code"

# now we will always open effective pipelines inside VS Code
jx pipeline effective
```
We’ve been on our own digital transformation journey in the world of pipelines and used many different approaches over the years to manage many pipelines across many repositories.
A few months ago we moved to the above GitOps approach for our cloud native pipelines and we are absolutely loving it!
It's super easy to:
- share pipelines across all of your git repositories without copy/paste
- customize pipelines in any project, easily understand what the local changes are, and roll them back if required
- upgrade pipelines across your repositories in a consistent way as you upgrade your images, applications and cluster via GitOps, so that new versions of pipeline catalogs are rolled out once they pass the system tests