WARNING: Jenkins X version 2.x is unmaintained. Do not use it.
Please refer to the v3 documentation for the latest supported version.
Storage
When we use a Static Jenkins Server with Jenkins X we inherit the usual Jenkins storage model: build logs, test results, and reports are stored on the Persistent Volume of the Jenkins server.
However, as we move towards a more Cloud Native Jenkins and use Serverless Jenkins, we need a better solution for storing things like logs, test results, and code coverage reports.
Storage Extensions
So we've added a storage extension point which is used for:
- storing logs when using Serverless Jenkins, which is done by the `jx controller build` command
- stashing files from a build (test or coverage reports) via the `jx step stash` command
Configuring Storage
You can set up the default location to use for storage. We currently support:
- storing files (logs, test or coverage reports) in a branch of a git repository, e.g. as part of the `gh-pages` branch for your static site
- storing files in Cloud Storage buckets such as S3, GCS, or Azure blobs
Storage uses classifications, which define the folder in which each kind of resource lives, such as:
- logs
- tests
- coverage
You can also use the special classification `default`, which is used if you don't have a configuration for the classification in question. e.g. you could define a location for `default` and then only configure where `logs` go if that's different.
If you are using `jx boot` to install and configure your setup, modify your `jx-requirements.yml` file to configure storage as described in the boot storage documentation.
Otherwise, to configure the storage location for a classification and team, use the `jx edit storage` command, e.g.:
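For example, a minimal sketch (the bucket name and git repository here are hypothetical; check `jx edit storage --help` for the exact flags available in your version):

```shell
# Store build logs in a (hypothetical) GCS bucket for this team
jx edit storage -c logs --bucket-url gs://my-team-logs

# Store test reports in a branch of a (hypothetical) git repository instead
jx edit storage -c tests --git-url https://github.com/myorg/storage.git --git-branch gh-pages
```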
You can view your team's storage settings via `jx get storage`.
Using Stash
Inside a pipeline you can then run the jx step stash command to stash files:
- specify the classifier via `-c`, such as `tests` or `coverage`
- specify the files to collect via `-p`, which supports wildcards like `*`; files will be stored with their relative directory path
- if you want to remove a directory prefix from the stashed files, like `target/reports`, you can use `--basedir` to specify the directory to create relative file names from
`jx step stash` needs to be executed inside your git repository; therefore `dir:` should be set to `/workspace/source` in your stash step.
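Putting the options above together, a sketch of a stash invocation (the report paths are illustrative):

```shell
# Stash JUnit XML test reports under the 'tests' classification.
# --basedir strips the target/reports prefix, so files are stored
# with names relative to that directory (e.g. junit/results.xml).
jx step stash -c tests -p "target/reports/junit/*.xml" --basedir target/reports
```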
By default `jx step stash` will use your team's configured location for the classification you give. If you wish, you can override the location for a stash using `--git-url` or `--bucket-url`.
Unstashing
If you need to, you can unstash previously stored files via `jx step unstash`.
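A sketch of an unstash invocation (the bucket URL is hypothetical, and the `-u` flag is an assumption; check `jx step unstash --help` for your version):

```shell
# Unstash previously stored files from a (hypothetical) bucket URL
# into the current working directory
jx step unstash -u gs://my-team-logs/myorg/myrepo/tests
```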
If you are in some Go source code and you have a URL from Jenkins X, such as a Build Log URL or an Attachment from a PipelineActivity Custom Resource, the URL could be in various forms, such as:
- `gs://anotherBucket/mydir/something.txt`: using a GCS bucket on GCP
- `s3://nameOfBucket/mydir/something.txt`: using an S3 bucket on AWS
- `azblob://thatBucket/mydir/something.txt`: using an Azure bucket
- `http://foo/bar`: file stored in a git repo not using HTTPS
- `https://foo/bar`: file stored in a git repo using HTTPS
If you want to easily read from such a URL in Go source code, you can use the `ReadURL` function.
GKE Storage Permissions
In GKE your node pool requires additional permissions to be able to write into GCS buckets, more specifically the `devstorage.full_control` permission.
If you already have a cluster, you can see these permissions in your cluster overview, under the dropdown Permissions. The description of the field Storage should be Full; by default it is Read Only.
There are two ways to change your permissions: either create a new cluster with the appropriate permissions, or migrate to a new node pool if you already have a running cluster.
Create a new GKE Cluster with Full-Control
To create a new cluster with the right permissions, you need to use the `--scopes` flag with the `storage-full` argument, as seen here:
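A minimal sketch, assuming a hypothetical cluster name and zone (`storage-full` is the gcloud alias for the `devstorage.full_control` scope):

```shell
# Create a new GKE cluster whose nodes have full access to Cloud Storage
gcloud container clusters create my-cluster \
  --zone europe-west1-b \
  --scopes storage-full
```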
Migrate to a node-pool with Full-Control
Migrating to a new node pool is quite simple; it's done in two steps.
First create a new node-pool with the required permissions:
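For example (the pool, cluster, and zone names are hypothetical):

```shell
# Create a new node pool with the storage-full scope on the existing cluster
gcloud container node-pools create new-pool \
  --cluster my-cluster \
  --zone europe-west1-b \
  --scopes storage-full
```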
Now delete the old node pool; all your Pods will be rescheduled onto the new node pool by Kubernetes.
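Continuing the sketch above (the old pool name `default-pool` is an assumption; substitute your own):

```shell
# Remove the old node pool; Kubernetes reschedules its Pods onto the new one
gcloud container node-pools delete default-pool \
  --cluster my-cluster \
  --zone europe-west1-b
```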