© 2021 The original authors.
The kubernetes-gradle-plugin brings your Gradle Java applications on to Kubernetes. This plugin focuses on two tasks:
Building container images.
Creating Kubernetes resource descriptors.
When working with kubernetes-gradle-plugin, you’ll probably be facing similar situations and following the same patterns other users do. These are some of the most common scenarios and configuration modes:
This is an example of how you can use the JKube zero configuration to build and deploy your Java application with Minikube. This is using a Quarkus project, but it could be changed to any supported Java framework.
Prerequisites
You will need the following for the scenario:
minikube installed and running on your computer
minikube ingress addon enabled
$ minikube addons enable ingress
Use the docker daemon installed in minikube
$ eval $(minikube -p minikube docker-env)
Zero configuration
1. Start by generating a new Gradle Quarkus project from https://code.quarkus.io. Make sure Gradle is selected. Download the generated project and extract the content.
2. Open the build.gradle file and add the plugin in the plugins section.
plugins {
  id 'java'
  id 'io.quarkus'
  id 'org.eclipse.jkube.kubernetes' version '1.17.0'
}
3. Run the command:
$ ./gradlew quarkusBuild k8sBuild k8sResource k8sApply
The k8sPush task is not required, since we are using Minikube’s internal container image registry.
At this point, the Quarkus app has been built, containerized, configured for Kubernetes, and deployed to the Minikube cluster. But there is no external endpoint to access it:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14h
quarkus ClusterIP 10.101.31.90 <none> 8080/TCP 13h
$ kubectl get ingress
No resources found in default namespace.
External URL
To make this service publicly available, change the settings in the project properties.
Retrieve the Minikube IP address:
$ minikube ip
192.168.99.102
Add the following properties. For the domain, we are using the nip.io service, which dynamically maps custom hostnames to IP addresses and avoids editing the /etc/hosts file.
# Enable Creating External Urls
jkube.createExternalUrls=true
# Configure host domain suffix for Ingress
jkube.domain=192.168.99.102.nip.io
Re-generate and apply the Kubernetes resources:
$ ./gradlew k8sResource k8sApply
Make sure the Ingress resource has been created:
$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
quarkus <none> quarkus.192.168.99.102.nip.io 192.168.99.102 80 30m
Give it a try:
$ curl http://quarkus.192.168.99.102.nip.io/hello
Hello RESTEasy
This is an example of how you can use the kubernetes-gradle-plugin to build and deploy your Java application to any Kubernetes cluster.
Prerequisites
You will need the following for the scenario:
Kubernetes Cluster
Access to a Container registry (Docker Hub/Quay.io)
A Java Development Kit (JDK)
Adding kubernetes-gradle-plugin to project
To use kubernetes-gradle-plugin, you need to add it to your project. Open the build.gradle file and add the plugin in the plugins section.
plugins {
  id 'java'
  id 'io.quarkus'
  id 'org.eclipse.jkube.kubernetes' version '1.17.0'
}
Building and pushing image to Container registry
Once you’ve set up your project and tested it, you can create a container image for your application using kubernetes-gradle-plugin. If you have access to a Docker daemon, try running this command:
$ ./gradlew k8sBuild
If you don’t have access to any docker daemon, you can configure kubernetes-gradle-plugin to use JIB mode as well:
$ ./gradlew k8sBuild -Pjkube.build.strategy=jib
After running this command, you’ll see that kubernetes-gradle-plugin created a container image with opinionated defaults by inspecting your project dependencies. However, you may want to configure the name of the container image.
Let’s say you want to push your image to Quay.io with the username foo. You can configure the image name by running this command:
$ ./gradlew k8sBuild -Pjkube.generator.name="quay.io/foo/%a:%l"
Once you’ve created a container image of your application, you need to push it to a container registry. Make sure you’ve already created an account on a public or private container registry. You can provide your credentials either via environment variables or via plugin configuration. kubernetes-gradle-plugin also tries to read ~/.docker/config.json, which gets created after docker login. You can read more about this in the Authentication section.
Log into your container registry:
$ docker login quay.io
Run this command to instruct kubernetes-gradle-plugin to push the container image you built in the previous step to your container registry:
$ ./gradlew k8sPush -Pjkube.generator.name="quay.io/foo/%a:%l"
Generating & applying Kubernetes manifests
Just like the container image, kubernetes-gradle-plugin can generate opinionated Kubernetes manifests. Run this command to automatically generate manifests and apply them to the currently logged-in Kubernetes cluster.
$ ./gradlew k8sResource k8sApply
After running these tasks, you can also check the Kubernetes manifests generated by kubernetes-gradle-plugin in the build/classes/java/main/META-INF/jkube/ directory.
Clean up the applied Kubernetes resources after testing:
$ ./gradlew k8sUndeploy
kubernetes-gradle-plugin works with any Spring Boot project without any configuration. It automatically detects your project dependencies and generates an opinionated container image and Kubernetes manifests.
Adding kubernetes-gradle-plugin to project
To use kubernetes-gradle-plugin, you need to add it to your project. Open the build.gradle file and add the plugin in the plugins section.
plugins {
  id 'java'
  id 'org.eclipse.jkube.kubernetes' version '1.17.0'
}
Building and pushing image to Container registry
Once you’ve set up your project and tested it, you can create a container image for your application using kubernetes-gradle-plugin. If you have access to a Docker daemon, try running this command:
$ ./gradlew k8sBuild
If you don’t have access to any docker daemon, you can configure kubernetes-gradle-plugin to use JIB mode as well:
$ ./gradlew k8sBuild -Pjkube.build.strategy=jib
After running this command, you’ll see that kubernetes-gradle-plugin created a container image with opinionated defaults by inspecting your project dependencies. However, you may want to configure the name of the container image.
Let’s say you want to push your image to Quay.io with the username foo. You can configure the image name by running this command:
$ ./gradlew k8sBuild -Pjkube.generator.name="quay.io/foo/%a:%l"
Once you’ve created a container image of your application, you need to push it to a container registry. Make sure you’ve already created an account on a public or private container registry. You can provide your credentials either via environment variables or via plugin configuration. kubernetes-gradle-plugin also tries to read ~/.docker/config.json, which gets created after docker login. You can read more about this in the Authentication section.
Log into your container registry:
$ docker login quay.io
Run this command to instruct kubernetes-gradle-plugin to push the container image you built in the previous step to your container registry:
$ ./gradlew k8sPush -Pjkube.generator.name="quay.io/foo/%a:%l"
Generating & applying Kubernetes manifests
Just like the container image, kubernetes-gradle-plugin can generate opinionated Kubernetes manifests. Run this command to automatically generate manifests and apply them to the currently logged-in Kubernetes cluster.
$ ./gradlew k8sResource k8sApply
After running these tasks, you can also check the Kubernetes manifests generated by kubernetes-gradle-plugin in the build/classes/java/main/META-INF/jkube/ directory.
Clean up the applied Kubernetes resources after testing:
$ ./gradlew k8sUndeploy
How to add a liveness and readiness probe?
kubernetes-gradle-plugin automatically adds Kubernetes liveness and readiness probes to the generated Kubernetes manifests when the Spring Boot Actuator dependency is present. To add Actuator to your project, add the following dependency:
dependencies {
  implementation 'org.springframework.boot:spring-boot-starter-actuator'
}
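With Actuator on the classpath, the enrichment typically wires the container probes to the Actuator health endpoint. A rough sketch of what the probe section of the generated manifest may look like (path, port, and timings are illustrative, not the plugin’s exact defaults):

```yaml
livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
```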
Once you run k8sResource task again, you should be able to see liveness and readiness probes added in generated manifests.
You can easily get started using kubernetes-gradle-plugin on an Eclipse Vert.x project without providing any explicit configuration. kubernetes-gradle-plugin generates an opinionated container image and manifests by inspecting your project configuration.
Adding kubernetes-gradle-plugin to project
To use kubernetes-gradle-plugin, you need to add it to your project. Open the build.gradle file and add the plugin in the plugins section.
plugins {
  id 'java'
  id 'org.eclipse.jkube.kubernetes' version '1.17.0'
}
Building and pushing image to Container registry
Once you’ve set up your project and tested it, you can create a container image for your application using kubernetes-gradle-plugin. If you have access to a Docker daemon, try running this command:
$ ./gradlew k8sBuild
If you don’t have access to any docker daemon, you can configure kubernetes-gradle-plugin to use JIB mode as well:
$ ./gradlew k8sBuild -Pjkube.build.strategy=jib
After running this command, you’ll see that kubernetes-gradle-plugin created a container image with opinionated defaults by inspecting your project dependencies. However, you may want to configure the name of the container image.
Let’s say you want to push your image to Quay.io with the username foo. You can configure the image name by running this command:
$ ./gradlew k8sBuild -Pjkube.generator.name="quay.io/foo/%a:%l"
Once you’ve created a container image of your application, you need to push it to a container registry. Make sure you’ve already created an account on a public or private container registry. You can provide your credentials either via environment variables or via plugin configuration. kubernetes-gradle-plugin also tries to read ~/.docker/config.json, which gets created after docker login. You can read more about this in the Authentication section.
Log into your container registry:
$ docker login quay.io
Run this command to instruct kubernetes-gradle-plugin to push the container image you built in the previous step to your container registry:
$ ./gradlew k8sPush -Pjkube.generator.name="quay.io/foo/%a:%l"
Generating & applying Kubernetes manifests
Just like the container image, kubernetes-gradle-plugin can generate opinionated Kubernetes manifests. Run this command to automatically generate manifests and apply them to the currently logged-in Kubernetes cluster.
$ ./gradlew k8sResource k8sApply
After running these tasks, you can also check the Kubernetes manifests generated by kubernetes-gradle-plugin in the build/classes/java/main/META-INF/jkube/ directory.
Clean up the applied Kubernetes resources after testing:
$ ./gradlew k8sUndeploy
How to set Service Port?
By default, the application port value in Vert.x applications is 8888, while kubernetes-gradle-plugin’s opinionated defaults use port 8080. If you want to change this, you’ll need to configure kubernetes-gradle-plugin to generate the image with the desired port:
kubernetes {
  generator {
    config {
      'vertx' {
        webPort = '8888'
      }
    }
  }
}
Once configured, you can go ahead and deploy the application to Kubernetes.
How to add Kubernetes readiness and liveness probes?
kubernetes-gradle-plugin doesn’t add any Kubernetes liveness and readiness probes by default. However, it does provide a rich set of configuration options to add health checks. Read Vert.x Healthchecks section for more details.
You can easily get started using kubernetes-gradle-plugin on a Quarkus project without providing any explicit configuration. kubernetes-gradle-plugin generates an opinionated container image and manifests by inspecting your project configuration.
Zero Configuration
Adding kubernetes-gradle-plugin to project
To use kubernetes-gradle-plugin, you need to add it to your project. Open the build.gradle file and add the plugin in the plugins section.
plugins {
  id 'java'
  id 'io.quarkus'
  id 'org.eclipse.jkube.kubernetes' version '1.17.0'
}
Building and pushing image to Container registry
Once you’ve set up your project and tested it, you can create a container image for your application using kubernetes-gradle-plugin. If you have access to a Docker daemon, try running this command:
$ ./gradlew k8sBuild
If you don’t have access to any docker daemon, you can configure kubernetes-gradle-plugin to use JIB mode as well:
$ ./gradlew k8sBuild -Pjkube.build.strategy=jib
After running this command, you’ll see that kubernetes-gradle-plugin created a container image with opinionated defaults by inspecting your project dependencies. However, you may want to configure the name of the container image.
Let’s say you want to push your image to Quay.io with the username foo. You can configure the image name by running this command:
$ ./gradlew k8sBuild -Pjkube.generator.name="quay.io/foo/%a:%l"
Once you’ve created a container image of your application, you need to push it to a container registry. Make sure you’ve already created an account on a public or private container registry. You can provide your credentials either via environment variables or via plugin configuration. kubernetes-gradle-plugin also tries to read ~/.docker/config.json, which gets created after docker login. You can read more about this in the Authentication section.
Log into your container registry:
$ docker login quay.io
Run this command to instruct kubernetes-gradle-plugin to push the container image you built in the previous step to your container registry:
$ ./gradlew k8sPush -Pjkube.generator.name="quay.io/foo/%a:%l"
Generating & applying Kubernetes manifests
Just like the container image, kubernetes-gradle-plugin can generate opinionated Kubernetes manifests. Run this command to automatically generate manifests and apply them to the currently logged-in Kubernetes cluster.
$ ./gradlew k8sResource k8sApply
After running these tasks, you can also check the Kubernetes manifests generated by kubernetes-gradle-plugin in the build/classes/java/main/META-INF/jkube/ directory.
Clean up the applied Kubernetes resources after testing:
$ ./gradlew k8sUndeploy
Quarkus Native Mode
While containerizing a Quarkus application in native mode, kubernetes-gradle-plugin automatically detects that the artifact is a native executable and selects a lighter base image for containerizing the application. No additional kubernetes-gradle-plugin configuration is needed for native builds.
How to add Kubernetes liveness and readiness probes?
kubernetes-gradle-plugin automatically adds Kubernetes liveness and readiness probes to the generated Kubernetes manifests when the SmallRye Health dependency is present. To add SmallRye Health to your project, add the following dependency:
dependencies {
  implementation 'io.quarkus:quarkus-smallrye-health'
}
Once you run k8sResource task again, you should be able to see liveness and readiness probes added in generated manifests.
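With SmallRye Health present, the probes generally point at the MicroProfile Health endpoints Quarkus exposes. An illustrative sketch of the resulting probe section (the plugin’s exact defaults may differ):

```yaml
livenessProbe:
  httpGet:
    path: /q/health/live
    port: 8080
readinessProbe:
  httpGet:
    path: /q/health/ready
    port: 8080
```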
You can build a container image and deploy it to Kubernetes with kubernetes-gradle-plugin just by providing a Dockerfile. kubernetes-gradle-plugin builds a container image based on your Dockerfile and generates opinionated Kubernetes manifests by inspecting it.
Placing Dockerfile in project root directory
You can place the Dockerfile in the project root directory along with build.gradle. kubernetes-gradle-plugin detects it and automatically builds an image based on this Dockerfile. No configuration is needed beyond the Dockerfile itself; the project root directory is used as the Docker context directory. The image is created with an opinionated name derived from the group, artifact, and version. The name can be overridden with the jkube.image.name property. Read the Simple Dockerfile section for more details.
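As a minimal illustration, a Dockerfile like the following, placed next to build.gradle, is enough for k8sBuild to produce an image. The base image and jar name are hypothetical placeholders for your own project:

```dockerfile
# Hypothetical minimal Dockerfile for a pre-built runnable jar
FROM eclipse-temurin:17-jre
COPY build/libs/myapp.jar /deployments/app.jar
ENTRYPOINT ["java", "-jar", "/deployments/app.jar"]
```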
Placing Dockerfile in some other directory
You can choose to place your Dockerfile in some other location. By default, the plugin assumes src/main/docker, but you’ll need to configure the Docker context directory in the plugin configuration. When not specified, the context directory is assumed to be the Dockerfile’s parent directory. You can take a look at the Docker File Provided Quickstarts for more details.
Controlling what gets copied to image
When using Dockerfile mode, every file and directory present in the Docker build context directory gets copied to the created Docker image. In case you want to ignore some files, or you want to include only a specific set of files, the kubernetes-gradle-plugin provides the following options to achieve this:
.jkube-dockerinclude: include only the specific set of files listed in this file
.jkube-dockerexclude: exclude a certain set of files from being copied into the container image
.jkube-dockerignore: same as .jkube-dockerexclude; ignore certain files from being copied into the container image
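For example, a .jkube-dockerexclude file with the following hypothetical patterns would keep temporary build output and local notes out of the image:

```
build/tmp/**
*.md
.git/**
```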
Using Property placeholders in Dockerfiles
You can reference properties in your Dockerfiles using standard Maven property placeholders ${*}. For example, given a property in your gradle.properties like this:

fromImage = fabric8/s2i-java

the Dockerfile can reference it:

FROM ${fromImage}:latest-java11
You can override placeholders using the filter field in image build configuration, see Build Filtering for more details.
Let’s have a look at some code. The following examples will demonstrate all available configurations variants:
If the zero-configuration mode doesn’t fit your use case and you want more flexibility, you can also use the kubernetes-gradle-plugin Groovy configuration to configure the plugin to your needs. kubernetes-gradle-plugin provides a rich set of configuration options in the form of a Groovy DSL, which can be used to tune the plugin’s output to your specific requirements.
The plugin configuration can be roughly divided into the following sections:
Global configuration options are responsible for tuning the behavior of plugin tasks.
images defines which container images are used and configured.
resources defines the resource descriptors for deploying on the Kubernetes cluster.
enricher configures various aspects of creating and enhancing resource descriptors.
A working example can be found in the quickstarts/gradle/groovy-dsl-config directory.
This section provides an overview of the images element, with which you can configure different aspects of the container images generated by kubernetes-gradle-plugin.
Here is an example of providing Groovy DSL configuration for a simple image:
kubernetes {
  images {
    image {
      name = "jkube/${project.name}:${project.version}" (1)
      alias = "camel-service" (2)
      build {
        from = "quay.io/jkube/jkube-java:0.0.13" (3)
        assembly { (4)
          targetDir = "/deployments" (5)
          layers = [{ (6)
            fileSets = [{ (7)
              directory = file("${project.rootDir}/build/dependencies")
            }]
          }]
        }
        env { (8)
          JAVA_LIB_DIR = "/deployments/dependencies/*"
          JAVA_MAIN_CLASS = "org.apache.camel.cdi.Main"
        }
        labels { (9)
          labelWithValue = "foo"
          version = "${project.version}"
          artifactId = "${project.name}"
        }
        ports = ["8787"] (10)
      }
    }
  }
}
1 | Name with which we want our image to be built; See name field in Image Configuration for details. |
2 | Shortcut name for image; See alias field in Image Configuration for details. |
3 | Base image on which this image would be built upon; See from field in Image Build Configuration for more details. |
4 | Assembly Configuration for copying files/directories into image. See Assembly Configuration for more details. |
5 | Target directory inside image for copying a directory into image |
6 | Assembly layer; See Assembly Inline/Layer Configuration for details. |
7 | FileSet Assembly Configuration for copying directories. See fileSets field in Assembly Layer Configuration for details. |
8 | Environment variables added to image. See Environment and Labels for details. |
9 | Labels added to image. See Environment and Labels for details. |
10 | Ports to be exposed. See port field in Image Build Configuration for details. |
You can read more about the supported fields of the image configuration element in the Image Configuration section.
If you want to copy files or directories into your image, you can make use of the kubernetes-gradle-plugin Assembly Configuration. You need to provide an assembly element inside image > build. Here is an example of copying a single jar file into the image. This configuration copies a jar file located in build/libs/ to the /deployments folder inside the image:
kubernetes {
  images {
    image {
      name = "${project.group}/${project.name}:${project.version}"
      build {
        from = "quay.io/jkube/jkube-java:0.0.13"
        assembly {
          targetDir = "/deployments"
          layers = [{
            id = "custom-assembly-for-copying-file"
            files = [{
              source = file("build/libs/${project.name}-${project.version}-all.jar")
              outputDirectory = "."
            }]
          }]
        }
      }
    }
  }
}
To copy directories, use the fileSets configuration element instead of files. The following example copies the build/dependencies directory to the /deployments directory inside the image.
kubernetes {
  images {
    image {
      name = "${project.group}/${project.name}:${project.version}"
      build {
        from = "quay.io/jkube/jkube-java:0.0.13"
        assembly {
          targetDir = "/deployments"
          layers = [{
            id = "custom-assembly-for-copying-directory"
            fileSets = [{
              directory = file("${project.rootDir}/build/dependencies")
            }]
          }]
        }
      }
    }
  }
}
Refer to Labels/Annotations Configuration in Kubernetes Resource Configuration
Refer to Kubernetes Controller Resource Generation in Kubernetes Resource Configuration
Refer to Ingress Generation in Kubernetes Resource Configuration
Refer to ServiceAccount Generation in Kubernetes Resource Configuration
You can also use external configuration in the form of YAML resource descriptors located in the src/main/jkube directory. Each resource gets its own file, which contains a skeleton of a resource descriptor. The plugin picks up each resource, enriches it, and then combines all of them into a single kubernetes.yml and openshift.yml file. Within these descriptor files you can freely use any Kubernetes feature.
Let’s have a look at an example from quickstarts/gradle/external-resources. This is a plain Spring Boot application whose images are auto-generated as in the zero-configuration case. The resource fragments are in src/main/jkube.
spec:
replicas: 1
template:
spec:
volumes:
- name: config
gitRepo:
repository: 'https://github.com/jstrachan/sample-springboot-config.git'
revision: 667ee4db6bc842b127825351e5c9bae5a4fb2147
directory: .
containers:
- volumeMounts:
- name: config
mountPath: /app/config
env:
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
serviceAccount: ribbon
As you can see, there is no metadata section as would be expected for Kubernetes resources, because it is automatically added by the kubernetes-gradle-plugin. The object’s Kind, if not given, is automatically derived from the filename. In this case, the kubernetes-gradle-plugin will create a Deployment because the file is called deployment.yml. Similar mappings between file names and resource types exist for each supported resource kind; the complete list (along with associated abbreviations) can be found in the Kind Filename Mapping section.
Additionally, if you name your fragment using a name prefix followed by a dash and the mapped file name, the plugin will automatically use that name for your resource. So, for example, if you name your deployment fragment myapp-deployment.yml, the plugin will name your resource myapp. In the absence of such a provided name, a name will be automatically derived from your project’s metadata (in particular, its project name as specified in build.gradle).
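The naming conventions above can be summarized with a hypothetical fragment layout:

```
src/main/jkube/
├── deployment.yml        # Kind derived from filename -> Deployment, name derived from the project
└── myapp-service.yml     # Kind -> Service, resource named "myapp"
```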
No image is referenced in this example either, because the plugin also fills in the image details based on the configured image you are building with (either from a generator or from a dedicated image plugin configuration, as seen before).
Enrichment of resource fragments can be fine-tuned by using profile sub-directories.
It’s very common, especially when dealing with the inner development loop, that you don’t need to provide any configuration for your Gradle project. You can get started simply by adding the plugin to your build.gradle file:
plugins {
id 'org.eclipse.jkube.kubernetes' version '1.17.0'
}
In this case, kubernetes-gradle-plugin analyzes your project and configures the container image and the cluster configuration manifests using a set of opinionated defaults.
This plugin provides a rich set of tasks for a smooth Java developer experience. These tasks can be categorized into multiple groups:
Build and Deployment tasks are all about creating and managing Kubernetes build artifacts like Docker images or S2I builds.
Development tasks help not only in deploying resource descriptors to the development cluster but also in managing the lifecycle of the development cluster itself.
Task | Description |
---|---|
k8sBuild | Build images |
k8sPush | Pushes the built images to the container image registry |
k8sResource | Generate resource manifests for your application |
k8sApply | Applies the generated resources to the connected cluster |
k8sHelm | Generate Helm charts for your application |
k8sHelmPush | Upload Helm charts to Helm repositories. |
Task | Description |
---|---|
k8sDebug | Debug your Java app running on the cluster |
k8sLog | Show the logs of your Java app running on the cluster |
k8sUndeploy | Deletes the Kubernetes resources that you deployed via the k8sApply task |
k8sRemoteDev | Start a remote development session |
k8sWatch | Watch for file changes and perform rebuilds and redeployments |
This task is for building container images for your application.
A normal Docker build is performed by default. For Kubernetes builds, the kubernetes-gradle-plugin uses the Docker remote API, so the URL of your Docker daemon must be specified. The URL can be specified by the dockerHost or machine configuration, or by the DOCKER_HOST environment variable.
The Docker remote API supports communication via SSL and authentication with certificates. The path to the certificates can be specified by the certPath or machine configuration, or by the DOCKER_CERT_PATH environment variable.
If you don’t have access to a Docker daemon, you can change the build strategy using the buildStrategy option in the Groovy DSL configuration like this:
kubernetes {
  buildStrategy = 'jib'
}
These are the different options supported by buildStrategy:

buildStrategy | Description |
---|---|
docker | Docker build with a binary source |
buildpacks | Docker build using Cloud Native Buildpacks |
jib | Daemonless container image creation using JIB build |
kubernetes-gradle-plugin by default tries to build up an opinionated Image Configuration by inspecting build.gradle. You can also provide your own Dockerfile, or provide a custom ImageConfiguration via the Gradle configuration.
This task uploads images to the registry which have a build configuration section. The images to push can be restricted with the global option filter (see Build Goal Configuration for details). The registry to push to is docker.io by default, but it can be specified as part of the image’s name in the Docker way. E.g. docker.test.org:5000/data:1.5 will push the image data with tag 1.5 to the registry docker.test.org at port 5000.
Registry credentials (i.e. username and password) can be specified in multiple ways as described in section Authentication.
This task generates Kubernetes resources based on your project. These can either be opinionated defaults, or they can be based on the configuration provided in the Groovy DSL configuration or in resource fragments in src/main/jkube. The generated resources are placed in the build/classes/java/main/META-INF/jkube/kubernetes directory.
You can find all Groovy DSL configuration options for k8sResource in Kubernetes Resource configuration section.
The resource task also validates the generated resource descriptors against the Kubernetes API specification. You can see configuration options regarding Kubernetes resource validation in the Global Configuration section.
This task applies the resources created with k8sResource to a connected Kubernetes cluster.
gradle k8sApply
This feature allows you to create Helm charts from the Kubernetes resources Eclipse JKube generates for your project. You can then use the generated charts to leverage Helm’s capabilities to install, update, or delete your app in Kubernetes.
To generate the Helm chart, you need to invoke the k8sHelm Gradle task on the command line:
gradle k8sResource k8sHelm
The k8sResource task is required to create the resource descriptors that are included in the Helm chart. If you have already generated the resources in a previous step, you can omit this task.
There are multiple ways to configure the generated Helm chart:

By providing a Chart.helm.yaml fragment in the src/main/jkube directory.
Through the helm section in the kubernetes-gradle-plugin Groovy DSL configuration.

When using the fragment approach, you simply need to create a Chart.helm.yaml file in the src/main/jkube directory with the fields you want to override. JKube will take care of merging this fragment with the opinionated and configured defaults.
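For instance, a minimal Chart.helm.yaml fragment overriding just a couple of fields might look like this (the field values are illustrative):

```yaml
# src/main/jkube/Chart.helm.yaml (illustrative fragment)
description: Chart for my sample application
home: https://example.com/my-app
keywords:
  - sample
  - demo
```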
The Groovy DSL configuration is defined in a helm section within the plugin’s configuration:
kubernetes {
  helm {
    chart = 'Jenkins'
    keywords = ['ci', 'cd', 'server']
    dependencies = [{
      name = 'ingress-nginx'
      version = '1.26.0'
      repository = 'https://kubernetes.github.io/ingress-nginx'
    }]
  }
}
This configuration section supports the following sub-elements for configuring your Helm chart.
Element | Description | Property |
---|---|---|
apiVersion | The apiVersion of the Chart.yaml schema, defaults to v1. | |
chart | The Chart name. | |
version | The Chart SemVer version. | |
debug | Enable verbose output for Helm operations. | |
description | The Chart single-sentence description. | |
home | The Chart URL for this project’s home page. | |
sources | The Chart list of URLs to source code for this project. | |
maintainers | The Chart list of maintainers (name+email). | |
icon | The Chart URL to an SVG or PNG image to be used as an icon; by default it is extracted from the Kubernetes manifest. | |
appVersion | The version of the application that the Chart contains. | |
keywords | Comma-separated list of keywords to add to the chart. | |
engine | The template engine to use. | |
additionalFiles | The list of additional files to be included in the Chart archive. | |
type / types | Platform(s) for which to generate the chart. Please note that there is no OpenShift support yet for charts, so this is experimental. | |
sourceDir | Where to find the resource descriptors generated with k8sResource. | |
outputDir | Where to create the Helm chart. | |
tarballOutputDir | Where to create the Helm chart archive; same as outputDir by default. | |
tarFileClassifier | A string appended to the Helm archive filename as a classifier. Defaults to an empty string. | |
chartExtension | The Helm chart file extension. | |
dependencies | The list of dependencies for this chart. | |
parameters | The list of parameters used to interpolate the Chart templates from the provided fragments. The parameters can represent variables, in which case the values are used to generate the values.yaml file; a parameter can also represent a Golang expression. | |
Element | Description |
---|---|
name | The maintainer’s name or organization. |
email | The maintainer’s contact email address. |
url | The maintainer’s URL address. |
Element | Description |
---|---|
name | The name of the chart dependency. |
version | Semantic version or version range for the dependency. |
repository | URL pointing to a chart repository. |
condition | Optional reference to a boolean value that toggles the inclusion of the dependency. |
alias | Optional reference to the map that will be passed as the value scope for the subchart. For more information see the Helm documentation. |
Element | Description |
---|---|
name | The name of the interpolatable parameter. Will be used to replace placeholders in the chart templates. |
required | Set to true if this is a required value (when used to generate values). |
value | The value to use when generating a values file. If the placeholder has to be replaced by an expression, the Golang expression to use. |
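Parameters can be provided through the helm section of the plugin configuration. A minimal sketch, assuming the same list-of-maps DSL style used by other helm options in this document (the parameter name and value are placeholders):

```groovy
kubernetes {
  helm {
    parameters = [{
      name = 'deployment.replicas'  // hypothetical placeholder name
      required = true
      value = '3'                   // used when generating the values file
    }]
  }
}
```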
In addition to the standard Kubernetes resource fragments, you can also provide fragments for Helm Chart.yaml
and values.yaml
files.
For the Chart.yaml
file you can provide a Chart.helm.yaml
fragment in the src/main/jkube
directory.
For the values.yaml
file you can provide a values.helm.yaml
fragment in the src/main/jkube
directory.
These fragments will be merged with the opinionated and configured defaults. The values provided in the fragments will override any of the generated default values taking precedence over them.
As a next step, you can install this via the k8sHelmInstall task as follows:
./gradlew k8sHelmInstall
In addition, this task also creates a tar archive below outputDir
which contains the chart with its templates.
You can uninstall an installed Helm release from Kubernetes via the k8sHelmUninstall task as follows:
./gradlew k8sHelmUninstall
This feature allows you to upload your Eclipse JKube-generated Helm charts to one of the supported repositories: Artifactory, Chartmuseum, Nexus, and OCI.
To publish a Helm chart you need to invoke the k8sHelmPush
Gradle task on the command line:
gradle k8sResource k8sHelm k8sHelmPush
The k8sResource and the k8sHelm tasks are required to create the resource descriptors which are included in the Helm chart and the Helm chart itself.
If you have already built the resource and created the chart, then you can omit these tasks.
|
The configuration is defined in a helm
section within the plugin’s configuration:
kubernetes {
helm {
chart = 'Jenkins'
keywords = ['ci', 'cd', 'server']
stableRepository {
name = 'stable-repo-id'
url = 'https://stable-repo-url'
type = 'ARTIFACTORY'
}
snapshotRepository {
name = 'snapshot-repo-id'
url = 'https://snapshot-repo-url'
type = 'ARTIFACTORY'
}
}
}
This configuration section knows the following sub-elements in order to configure your Helm chart.
Element | Description | Property |
---|---|---|
stableRepository |
The configuration of the stable helm repository (see Helm stable repository configuration). |
|
snapshotRepository |
The configuration of the snapshot helm repository (see Helm repository configuration). |
Element | Description | Property |
---|---|---|
name | The name (id) of the server configuration. It can select the maven server by this ID. | |
url | The url of the server. | |
username | The username of the repository. Optional. If a maven server ID is specified, the username is taken from there. | |
password | The password of the repository. Optional. If a maven server ID is specified, the password is taken from there. | |
type | The type of the repository. One of ARTIFACTORY, NEXUS, CHARTMUSEUM, OCI. | |
Element | Description | Property |
---|---|---|
name | The name (id) of the server configuration. It can select the maven server by this ID. | |
url | The url of the server. | |
username | The username of the repository. Optional. If a maven server ID is specified, the username is taken from there. | |
password | The password of the repository. Optional. If a maven server ID is specified, the password is taken from there. | |
type | The type of the repository. One of ARTIFACTORY, NEXUS, CHARTMUSEUM. | |
This feature allows you to lint your Eclipse JKube-generated Helm charts and examine them for possible issues.
It provides the same output as the helm lint
command.
To lint a Helm chart you need to invoke the k8sHelmLint
Gradle task on the command line:
gradle k8sResource k8sHelm k8sHelmLint
The k8sResource and the k8sHelm tasks are required to create the resource descriptors which are included in the Helm chart and the Helm chart itself.
If you have already built the resource and created the chart, then you can omit these tasks.
|
Element | Description | Property |
---|---|---|
lintStrict | Enable strict mode; fails on lint warnings. | |
lintQuiet | Enable quiet mode; only shows warnings and errors. | |
kubernetes {
helm {
lintStrict = true
lintQuiet = true
}
}
This feature allows you to update dependencies of your Eclipse JKube-generated Helm charts.
It provides the same output as the helm dependency update
command.
To update on-disk dependencies of a Helm chart you need to invoke the k8sHelmDependencyUpdate
Gradle task on the command line:
gradle k8sResource k8sHelm k8sHelmDependencyUpdate
The k8sResource and the k8sHelm tasks are required to create the resource descriptors which are included in the Helm chart and the Helm chart itself.
If you have already built the resource and created the chart, then you can omit these tasks.
|
Element | Description | Property |
---|---|---|
dependencyVerify | Verify the packages against signatures. | |
dependencySkipRefresh | Do not refresh the local repository cache. | |
kubernetes {
helm {
dependencyVerify = false
debug = true
dependencySkipRefresh = false
dependencies = [{
name = "foo"
version = "0.0.1"
repository = "https://charts.example.com/test"
}]
}
}
This feature allows you to install your Eclipse JKube-generated Helm charts.
To install a Helm chart you need to invoke the k8sHelmInstall
Gradle task on the command line:
gradle k8sResource k8sHelm k8sHelmInstall
The k8sResource and the k8sHelm tasks are required to create the resource descriptors which are included in the Helm chart and the Helm chart itself.
If you have already built the resource and created the chart, then you can omit these tasks.
|
Element | Description | Property |
---|---|---|
releaseName | Name of the Helm release (instance of a chart running in a Kubernetes cluster). | |
installDependencyUpdate | Update dependencies if they are missing before installing the chart. | |
installWaitReady | If set, waits until all Pods, PVCs, Services, and the minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. | |
kubernetes {
helm {
releaseName = "test-release"
installWaitReady = false
installDependencyUpdate = false
}
}
This feature allows you to remove a Helm release from Kubernetes.
To uninstall a Helm release you need to invoke the k8sHelmUninstall
Gradle task on the command line:
gradle k8sResource k8sHelm k8sHelmInstall k8sHelmUninstall
The k8sResource, k8sHelm and k8sHelmInstall tasks are required to ensure that the Helm release gets installed in Kubernetes.
If you already have the Helm release installed on Kubernetes, then you can omit these tasks.
|
Element | Description | Property |
---|---|---|
releaseName | Name of the Helm release (instance of a chart running in a Kubernetes cluster). | |
kubernetes {
helm {
releaseName = "test-release"
}
}
This task is for deleting the Kubernetes resources that you deployed via the k8sApply task.
It iterates through all the resources generated by the k8sResource task and deletes them from your current Kubernetes cluster.
gradle k8sUndeploy
This task tails the log of the app that you deployed via the k8sApply task.
gradle k8sLog
You can then terminate the output by hitting Ctrl+C.
If you wish to get the log of the app and then terminate immediately, try:
gradle k8sLog -Djkube.log.follow=false
This lets you pipe the output into grep or some other tool:
gradle k8sLog -Djkube.log.follow=false | grep Exception
If your app is running in multiple pods you can configure the pod name to log via the jkube.log.pod
property, otherwise it defaults to the latest pod:
gradle k8sLog -Djkube.log.pod=foo
If your pod has multiple containers you can configure the container name to log via the jkube.log.container
property, otherwise it defaults to the first container:
gradle k8sLog -Djkube.log.container=foo
This task enables debugging in your Java app and then port forwards from localhost to the latest running pod of your app so that you can easily debug your app from your Java IDE.
gradle k8sDebug
Then follow the on screen instructions.
The default debug port is 5005.
If you wish to change the local port to use for debugging, pass in the jkube.debug.port
parameter:
gradle k8sDebug -Djkube.debug.port=8000
Then in your IDE you start a Remote debug execution using this remote port using localhost and you should be able to set breakpoints and step through your code.
This lets you debug your apps while they are running inside a Kubernetes cluster - for example if you wish to debug a REST endpoint while another pod is invoking it.
Debug is enabled via the JAVA_ENABLE_DEBUG
environment variable being set to true
.
This environment variable is used for all the standard Java docker images used by Spring Boot, Quarkus,
flat classpath and executable JAR projects.
If you use your own custom Docker base image, you may wish to respect this environment variable as well
to enable debugging.
By default the k8sDebug
task has to edit your Deployment to enable debugging and then wait for a pod to start. During development you might frequently want to debug things and speed things up a bit.
If so you can enable debug mode for each build via the jkube.debug.enabled
property.
e.g. you can pass this property on the command line:
gradle k8sResource k8sApply -Djkube.debug.enabled=true
Then whenever you type the k8sDebug
task there is no need for the gradle task to edit the Deployment
and wait for a pod to restart; we can immediately start debugging when you type:
gradle k8sDebug
The k8sDebug
task allows you to attach a remote debugger to a running container, but the application is free to execute when the debugger is not attached.
In some cases, you may want to have complete control over the execution, e.g. to investigate the application behavior at startup. This can be done using the jkube.debug.suspend
flag:
gradle k8sDebug -Djkube.debug.suspend
The suspend flag will set the JAVA_DEBUG_SUSPEND
environment variable to true
and JAVA_DEBUG_SESSION
to a random number in your deployment.
When the JAVA_DEBUG_SUSPEND
environment variable is set, standard docker images will use suspend=y
in the JVM startup options for debugging.
The JAVA_DEBUG_SESSION
environment variable is always set to a random number (each time you run the debug task with the suspend flag) in order to tell Kubernetes to restart the pod.
The remote application will start only after a remote debugger is attached. You can use the remote debugging feature of your IDE to connect (on localhost
, port 5005
by default).
The jkube.debug.suspend flag will disable readiness probes in the Kubernetes deployment in order to start port-forwarding during the early phases of application startup
|
Eclipse JKube Remote Development allows you to run and debug code in your local machine:
While connected to and consuming services that are only available in your cluster
While exposing your locally running application to other Pods and services running on your cluster
Expose your local application to the cluster
Consume cluster services locally without having to expose them to the Internet
Connect your local toolset to the cluster services
Simple configuration
No tools required
No special or super-user permissions required in the local machine
No special features required in the cluster (should work on any kind of Kubernetes flavor)
Boosts your inner-loop developer experience when combined with live-reload frameworks such as Quarkus
The remote development configuration must be provided within the remoteDevelopment
configuration element for the project.
kubernetes {
remoteDevelopment {
localServices = [{
serviceName = "my-local-service" (1)
port = 8080 (2)
}]
remoteServices = [{
hostname = "postgresql" (3)
port = 5432 (4)
},{
hostname = "rabbit-mq"
port = 5672
localPort = 15672 (5)
}]
}
}
1 | Name of the service to be exposed in the cluster; the local application will be accessible in the cluster through this hostname/service |
2 | Port where the local application is listening for connections |
3 | Name of a cluster service that will be forwarded and exposed locally |
4 | Port where the cluster service listens for connections (by default, the same port will be used to expose the service locally) |
5 | Optional port where the cluster service will be exposed locally |
$ gradle k8sRemoteDev
Element | Description |
---|---|
localServices | The list of local services to expose in the cluster. |
remoteServices | The list of cluster services to expose locally. |
Element | Description |
---|---|
serviceName | The name of the service that will be created/hijacked in the cluster. |
type | The type of service to create. |
port | The service port; must match the port where the local application is listening for connections. |
Element | Description |
---|---|
hostname | The name of the cluster service whose port will be forwarded to the local machine. |
port | The port where the cluster service is listening for connections. |
localPort | (Optional) The port where the cluster service will be exposed locally. If not specified, the same port will be used. |
This task is used to monitor the project workspace for changes and automatically trigger a redeploy of the application running on Kubernetes. There are two kinds of watchers present at the moment:
Docker Image Watcher (watches Docker images)
Spring Boot Watcher (based on Spring Boot DevTools)
Before entering the watch mode, this task must generate the docker image and the Kubernetes resources (optionally including some development libraries/configuration), and deploy the app on Kubernetes.
For any application having k8sResource
and k8sBuild
tasks bound to the lifecycle, the following
command can be used to run the watch task.
gradle k8sWatch
This plugin supports different watcher providers, enabled automatically if the project satisfies certain conditions.
Watcher providers can also be configured manually. Here is an example:
kubernetes {
watcher {
includes = ['docker-image']
config {
'spring-boot' {
serviceUrlWaitTimeSeconds = 10
}
}
}
}
This watcher is enabled by default for all Spring Boot projects. It performs the following actions:
deploys your application with Spring Boot DevTools enabled
tails the log of the latest running pod for your application
watches the local development build of your Spring Boot based application and then triggers a reload of the application when there are changes
You need to make sure that devtools
is included in the repackaged archive, as shown in the following listing (taken from the Spring docs):
bootJar {
classpath configurations.developmentOnly
}
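For completeness, the devtools dependency itself is typically declared through Gradle's developmentOnly configuration (a sketch based on Spring Boot's Gradle documentation; the version is managed by the Spring Boot plugin):

```groovy
dependencies {
  // Spring Boot DevTools; excluded from the repackaged archive unless
  // explicitly added via the bootJar configuration shown above
  developmentOnly 'org.springframework.boot:spring-boot-devtools'
}
```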
Then you need to set a spring.devtools.remote.secret
in application.properties, as shown in the following example:
spring.devtools.remote.secret=mysecret
Spring devtools automatically ignores projects named spring-boot, spring-boot-devtools,
spring-boot-autoconfigure, spring-boot-actuator, and spring-boot-starter
|
You can try it on any Spring Boot application via:
gradle k8sWatch
Once the task has started the Spring Boot RemoteSpringApplication, it will watch for local development changes.
For example, if you edit the Java code of your app and then build it via something like this:
gradle build
You should see your app reload on the fly in the shell running the k8sWatch task!
There is also support for LiveReload.
This is a generic watcher that can be used in Kubernetes mode only. Once activated, it listens for changes in the project workspace in order to trigger a redeploy of the application. This enables rebuilding of images and restarting of containers in case of updates.
There are five watch modes, which can be specified in multiple ways:
build
: Automatically rebuild one or more Docker images when one of the files selected by an assembly changes. This works for all files included in assembly.
run
: Automatically restart your application when its associated image changes.
copy
: Copy changed files into the running container. This is the fastest way to update a container; however, the target container must also support hot deployment for this to be useful. Most application servers, like Tomcat, support this.
both
: Enables both build
and run
. This is the default.
none
: Image is completely ignored for watching.
The watcher can be activated e.g. by running this command in another shell:
gradle build
The watcher will detect that the binary artifact has changed and will first rebuild the docker image, then start a redeploy of the Kubernetes pod.
Element | Description | Property |
---|---|---|
buildStrategy |
Defines what build strategy to choose while building container image.
Possible values are |
|
buildSourceDirectory |
Default directory that contains the assembly descriptor(s) used by the plugin. The default value is |
|
authConfig |
Authentication information when pulling from or pushing to Docker registry. There is a dedicated section Authentication for how to do security. |
|
autoPull |
Decide how to pull missing base images or images to start:
|
|
Specify whether images should be pulled when looking for base images while building, or images for starting. This property can take the following values (case insensitive):
By default, a progress meter is printed out on the console, which is omitted when using Gradle in batch mode. |
|
|
certPath |
Path to SSL certificate when SSL is used for communicating with the Docker daemon. These certificates are normally
stored in |
|
dockerHost |
The URL of the Docker Daemon. If this configuration option is not given, the discovery sequence used by the plugin to determine the URL is:
|
|
filter |
In order to temporarily restrict the operation of plugin goals this configuration option can be used.
Typically, this will be set via the system property |
|
machine |
Docker machine configuration. See Docker Machine for possible values. |
|
maxConnections |
Number of parallel connections allowed to be opened to the Docker host. For parsing log output, a connection needs to be kept open (as well as for the wait features), so don’t set that number too low. Default is 100, which should be suitable for most cases. |
|
outputDirectory |
Default output directory to be used by this plugin.
The default value is |
|
profile |
Profile to which contains enricher and generators configuration. See Profiles for details. |
|
registry |
Specify globally a registry to use for pulling and pushing images. See Registry handling for details. |
|
skip |
With this parameter the execution of this plugin can be skipped completely. |
|
skipBuild |
If set, no images will be built (which also implies skip.tag). |
|
skipBuildPom |
If set the build step will be skipped for modules of type |
|
skipTag |
If set to |
|
skipMachine |
Skip using docker machine in any case |
|
sourceDirectory |
Default directory that contains the assembly descriptor(s) used by the plugin. The default value is |
|
verbose |
Boolean attribute for switching on verbose output like the build steps when doing a Docker build. Default is |
|
logDate |
The date format to use when logging messages from Docker. Default is |
|
logStdout |
Log to stdout regardless if log files are configured or not. Default is |
|
Group of configuration parameters to connect to Kubernetes/OpenShift cluster. |
||
createNewResources |
Create new Kubernetes resources. Defaults to |
|
debugSuspend |
Disables readiness probes in Kubernetes Deployment in order to start port forwarding during early phases of application startup. Defaults to |
|
deletePodsOnReplicationControllerUpdate |
Delete all the pods if we update a Replication Controller. Defaults to |
|
failOnNoKubernetesJson |
Fail if there is no kubernetes json present. Defaults to |
|
failOnValidationError |
If value is set to Default is |
|
ignoreServices |
Ignore Service resources while applying resources. This is particularly useful when in recreate mode to let you easily recreate all the ReplicationControllers and Pods but leave any service definitions alone to avoid changing the portalIP addresses and breaking existing pods using the service. Defaults to |
|
interpolateTemplateParameters |
Interpolate parameter values from This is useful when using JKube in combination with Helm. Placeholders for variables defined in template files can be used in the different resource fragments. Helm generated charts will contain these placeholders/parameters. For Defaults to |
|
jsonLogDir |
The folder in which to store any temporary JSON files or results. Defaults to |
|
kubernetesManifest |
The generated kubernetes YAML file. Defaults to |
|
kubernetesTemplate |
File or directory containing YAML files with OpenShift Template resources to be used as Helm parameters. Defaults to |
|
localDebugPort |
Default port available for debugging your application inside Kubernetes. Defaults to |
|
logFollow |
Whether to follow the logs of your application inside Kubernetes. Defaults to |
|
logContainerName |
Get logs of some specific container inside your application Deployment. Defaults to |
|
logPodName |
Get logs of some specific pod inside your application Deployment. Defaults to |
|
mergeWithDekorate |
When resource generation is delegated to Dekorate, should JKube resources be merged with Dekorate generated ones. Defaults to |
|
offline |
Whether to try detecting Kubernetes Cluster or stay offline. Defaults to |
|
pushRegistry |
The registry to use when pushing the image. See Registry Handling for more details. |
|
recreate |
Update resources by deleting them first and then creating them again. Defaults to |
|
pushRetries |
How often should a push be retried before giving up. This is useful for flaky registries, which tend to return 500 error codes from time to time. Defaults to 0. |
|
resourceEnvironment |
Environment name where resources are placed. For example, if you set this property to dev and resourceDir is the
default one, plugin will look at Defaults to |
|
resourceSourceDirectory |
Folder where to find project specific files. Defaults to |
|
resourceTargetDirectory |
The generated Kubernetes manifests target directory. Defaults to |
|
rollingUpgrades |
Use Rolling Upgrades to apply changes. |
|
servicesOnly |
Only process services so that those can be recursively created/updated first before creating/updating any pods and Replication Controllers. Defaults to |
|
skip |
With this parameter the execution of this plugin can be skipped completely. |
|
skipApply |
If set, no resource manifests will be applied to the connected Kubernetes cluster. Defaults to |
|
skipUndeploy |
If set, no applied resources will be deleted from the connected Kubernetes cluster. Defaults to |
|
skipBuild |
If set, no images will be built (which also implies skip.tag). |
|
skipResource |
If set, resource manifests won’t be generated. |
|
skipPush |
If set to true the plugin won’t push any images that have been built. Defaults to |
|
skipResourceValidation |
If value is set to Default is |
|
skipTag |
If set to true this plugin won’t push any tags. Defaults to |
|
useProjectClassPath |
Should we use the project’s compile time classpath to scan for additional enrichers/generators. Defaults to |
|
watchMode |
How to watch for image changes.
Defaults to |
|
watchInterval |
Interval in milliseconds (how often to check for changes). Defaults to |
|
watchPostExec |
A command which is executed within the container after files are copied into this container when watchMode is copy. Note that this container must be running. |
|
workDirectory |
The JKube working directory. Defaults to |
|
You can configure parameters to define how the plugin connects to the Kubernetes cluster instead of relying on default parameters.
kubernetes {
access {
username = ""
password = ""
masterUrl = ""
apiVersion = ""
}
}
Element | Description | Property |
---|---|---|
username |
Username on which to operate. |
|
password |
Password on which to operate. |
|
namespace |
Namespace on which to operate. |
|
masterUrl |
Master URL on which to operate. |
|
apiVersion |
Api version on which to operate. |
|
caCertFile |
CaCert File on which to operate. |
|
caCertData |
CaCert Data on which to operate. |
|
clientCertFile |
Client Cert File on which to operate. |
|
clientCertData |
Client Cert Data on which to operate. |
|
clientKeyFile |
Client Key File on which to operate. |
|
clientKeyData |
Client Key Data on which to operate. |
|
clientKeyAlgo |
Client Key Algorithm on which to operate. |
|
clientKeyPassphrase |
Client Key Passphrase on which to operate. |
|
currentContext |
Client Kubernetes Context that is currently in use |
|
trustStoreFile |
Trust Store File on which to operate. |
|
trustStorePassphrase |
Trust Store Passphrase on which to operate. |
|
keyStoreFile |
Key Store File on which to operate. |
|
keyStorePassphrase |
Key Store Passphrase on which to operate. |
|
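For example, to target a specific namespace and API server, the access section might look like this (a minimal sketch; both values are placeholders):

```groovy
kubernetes {
  access {
    namespace = 'development'                   // hypothetical namespace
    masterUrl = 'https://k8s.example.com:6443'  // hypothetical API server URL
  }
}
```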
The configuration of how images should be created is defined in a dedicated images
section. These are specified for each image within the images
element of the configuration, with one image
element per image to use.
The image
element can contain the following sub-elements:
Element | Description | Property |
---|---|---|
Each |
|
|
alias |
Shortcut name for an image which can be used for identifying the image within this configuration. This is used when linking images together or for specifying it with the global image configuration element. |
|
Registry to use for this image. If the |
|
|
Element which contains all the configuration aspects when doing a k8sBuild. This element can be omitted if the image is only pulled from a registry. e.g. as support for integration tests like database images. |
||
propertyResolverPrefix |
Prefix for property resolution. This is used to resolve properties in the configuration.
If not set, the default prefix is |
The build
section is mandatory and is explained in below.
When specifying the image name in the configuration with the name
field, then you can use several placeholders.
These placeholders are replaced during the execution by this plugin.
In addition, you can use regular Gradle properties. These properties are resolved by Gradle itself.
Placeholder | Description |
---|---|
%g |
The last part of the gradle group name.
The name gets sanitized, so that it can be used as username on GitHub.
Only the part after the last dot is used.
For example, given the group id |
%a |
A sanitized version of the artefact id, so that it can be used as part of a Docker image name. This means primarily, that it is converted to all lower case (as required by Docker). |
%v |
A sanitized version of the project version. Replaces |
%l |
If the pre-release part of the project version ends with If the |
%t |
If the project version ends with If the |
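To illustrate these placeholders, the following sketch derives the image name from the Gradle coordinates (the base image is a placeholder):

```groovy
kubernetes {
  images {
    image {
      // %g → last part of the group, %a → sanitized project name,
      // %v → sanitized project version
      name = '%g/%a:%v'
      build {
        from = 'eclipse-temurin:17'  // hypothetical base image
      }
    }
  }
}
```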
Here are the different modes in which images can be built:
When using this mode, the Dockerfile is created on the fly with all instructions extracted from the configuration given.
Alternatively an external Dockerfile template or Docker archive can be used. This mode is switched on by using one of these three configuration options within
contextDir specifies the Docker build context if an external Dockerfile is located outside of it. If not specified, the Dockerfile’s parent directory is used as the build context.
dockerFile specifies a specific Dockerfile path. The Docker build context directory is set to contextDir
if given. If not, the directory in which the Dockerfile is stored is used by default.
dockerArchive specifies a previously saved image archive to load directly. If a dockerArchive
is provided, no dockerFile
may be given.
All paths can be either absolute or relative paths. A relative path is looked up in $projectDir/src/main/docker
by default. You can make it easily an absolute path by using $projectDir
in your configuration.
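These options can be combined as in the following sketch, which points the build at an external Dockerfile (the directory and file names are hypothetical):

```groovy
kubernetes {
  images {
    image {
      name = 'user/demo'
      build {
        contextDir = "$projectDir/docker"  // hypothetical build-context directory
        dockerFile = 'Dockerfile.custom'   // hypothetical file, resolved against contextDir
      }
    }
  }
}
```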
However, you need to add the files on your own in the Dockerfile with an ADD
or COPY
command.
The files of the assembly are stored in a build context relative directory maven/
but can be changed by changing the assembly name with the option name
in the assembly configuration.
E.g. the files can be added with:
COPY maven/ /my/target/directory
so that the assembly files will end up in /my/target/directory
within the container.
If this directory contains a .jkube-dockerignore
(or alternatively, a .jkube-dockerexclude
file), then it is used
for excluding files for the build. If the file doesn’t exist, or it’s empty, then there are no exclusions.
Each line in this file is treated as an entry in the excludes
assembly fileSet
configuration .
Files can be referenced by using their relative path name.
Wildcards are also supported, patterns will be matched using
FileSystem#getPathMatcher glob
syntax.
It is similar to .dockerignore
when using Docker but has a slightly different syntax (hence the different name).
Example .jkube-dockerexclude
or .jkube-dockerignore
is an example which excludes all compiled Java classes.
.jkube-dockerexclude
or .jkube-dockerignore
build/classes/** (1)
1 | Exclude all compiled classes |
If this directory contains a .jkube-dockerinclude
file, then it is used for including only those files for the build.
If the file doesn’t exist or it’s empty, then everything is included.
Each line in this file is treated as an entry in the includes
assembly fileSet
configuration .
Files can be referenced by using their relative path name.
Wildcards are also supported, patterns will be matched using
FileSystem#getPathMatcher glob
syntax.
Example .jkube-dockerinclude
shows how to include only jar files that have been built into the Docker build context.
.jkube-dockerinclude
build/libs/*.jar (1)
1 | Only add jar files to your Docker build context. |
Except for the assembly configuration all other configuration options are ignored for now.
When only a single image should be built with a Dockerfile, no Groovy DSL configuration is needed at all.
All that needs to be done is to place a Dockerfile
into the top-level module directory, alongside build.gradle
.
You can still configure global aspects in the plugin configuration, but as soon as you add an image
in the Groovy DSL configuration, you also need to configure the build explicitly.
The image name is by default set from the gradle coordinates (%g/%a:%l
, see Image Name for an explanation of the parameters, which are essentially Gradle’s group, project name and project version)
This name can be set with the property jkube.image.name
in gradle.properties
.
kubernetes-gradle-plugin filters the given Dockerfile with Gradle properties, much like the maven-resources-plugin
does. Filtering is enabled by default and can be switched off with the build config filter='false'
. Properties which we want to replace are specified with the ${..}
syntax.
Replacement includes properties set in the build, command-line properties, and system properties. Unresolved properties remain untouched.
This partial replacement means that you can easily mix it with Docker build arguments and environment variable reference, but you need to be careful.
If you want to be more explicit about the property delimiter to clearly separate Docker properties and gradle properties you can redefine the delimiter.
In general, the filter
option can be specified the same way as delimiters in the resource plugin.
In particular, if this configuration contains a * then the parts left and right of the asterisk are used as delimiters.
For example, the default filter='${*}'
parses Gradle properties in the format that we know.
If you specify a single character for filter
then this delimiter is taken for both the start and the end.
E.g a filter='@'
triggers on parameters in the format @…@
.
Use something like this if you want to clearly separate Gradle properties from Docker build args.
This form of property replacement works for Dockerfile only.
For replacing other data in other files targeted for the Docker image, please use the assembly configuration with filtering to make them available in the docker build context.
The following example replaces all properties in the format @property@
within the Dockerfile.
kubernetes {
images {
image {
name = 'user/demo'
build {
filter = '@'
}
}
}
}
All build relevant configuration is contained in the build
section
of an image configuration. The following configuration options are supported:
Element | Description | Property |
---|---|---|
Specifies the assembly configuration as described in Build Assembly |
|
|
Map specifying the value of Docker build args
which should be used when building the image with an external Dockerfile which uses build arguments. The key-value syntax is the same as when defining gradle properties (or |
|
|
buildOptions |
Map specifying the build options to provide to the docker daemon when building the image. These options map to the ones listed as query parameters in the
Docker Remote API and are restricted to simple options
(e.g.: memory, shmsize). If you use the respective configuration options for build options natively supported by the build configuration (i.e. |
|
createImageOptions |
Map specifying the create image options to provide to the docker daemon when pulling or importing an image. These options map to the ones listed as query parameters in the Docker Remote API and are restricted to simple options (e.g.: fromImage, fromSrc, platform). |
|
cleanup |
Cleanup dangling (untagged) images after each build (including any containers created from them). Default is |
|
Path to a directory used for the build’s context. You can specify the |
|
|
A command to execute by default (i.e. if no command is provided when a container for this image is started). See Startup Arguments for details. |
|
|
compression |
The compression mode how the build archive is transmitted to the docker daemon ( |
|
dockerFile |
Path to a |
|
dockerArchive |
Path to a saved image archive which is then imported. See Docker archive for details. |
|
An entrypoint allows you to configure a container that will run as an executable. See Startup Arguments for details. |
|
|
The environments as described in Setting Environment Variables and Labels. |
e.g. |
|
filter |
Enable and set the delimiters for property replacements. By default, properties in the format |
|
The base image which should be used for this image. If not given, this defaults to |
|
|
buildpacksBuilderImage |
Configure BuildPack builder OCI image for BuildPack Build. This field is only applicable for |
|
Extended definition for a base image. This field holds a map of defined in
|
|
|
Specifies the health check configuration as described in Build Healthcheck |
|
|
imagePullPolicy |
Specific pull policy for the base image. This overwrites any global pull policy. See the global configuration option imagePullPolicy for the possible values and the default. |
|
Labels as described in Setting Environment Variables and Labels. |
e.g. |
|
maintainer |
The author ( |
|
noCache |
Don’t use Docker’s build cache. This can be overwritten by setting a system property |
|
cacheFrom |
A list of |
e.g. |
optimise |
if set to true then it will compress all the |
|
platforms |
List of You should use a base image that includes support for multiple platforms.
Supported only when using the jib build strategy |
|
ports |
The exposed ports which is a list of |
e.g. |
shell |
Shell to be used for the runCmds. It contains arg elements which define the executable and its params. |
|
runCmds |
Commands to be run during the build process. It contains run elements which are passed to the shell. Whitespace is trimmed from each element and empty elements are ignored. The run commands are inserted right after the assembly and after workdir into the Dockerfile. |
e.g. |
skip |
if set to true disables building of the image. This config option is best used together with a gradle property |
|
tags |
List of additional |
e.g. |
user |
User to which the Dockerfile should switch at the end (corresponds to the |
|
volumes |
List of |
e.g. |
workdir |
Directory to change to when starting the container. |
|
The assembly element within the build element defines how build artifacts and other files can be added to the Docker image. The files which are supposed to be added via assembly should be present in the project directory. It’s also possible to add files from an external source using your own custom logic (see JKube Plugin for more details).
Element | Description | Property |
---|---|---|
name |
Assembly name, which is |
|
targetDir |
Directory under which the files and artifacts contained in the assembly will be copied within the container.
The default value for this is |
|
Deprecated: Use layers instead Inlined assembly descriptor as described in Assembly - Inline below. |
||
Each of the layers that the assembly will contain as described in Assembly - Layer below. |
||
exportTargetDir |
Specification whether the |
|
excludeFinalOutputArtifact |
By default, the project’s final artifact will be included in the assembly, set this flag to true in case the artifact should be excluded from the assembly. |
|
mode |
Mode for how the assembled files should be collected:
The archive formats have the advantage that file permissions can be preserved better (since the copying is independent of the underlying file systems) |
|
permissions |
Permission of the files to add:
|
|
tarLongFileMode |
Sets the TarArchiver behaviour on file paths with more than 100 characters length. Valid values are: "warn"(default), "fail", "truncate", "gnu", "posix", "posix_warn" or "omit" |
|
user |
User and/or group under which the files should be added. The user must already exist in the base image. It has the general format If a third part is given, then the build changes to user For example, the image |
|
In the event you do not need to include any artifacts with the image, you may safely omit this element from the configuration.
Inlined assembly description with a format very similar to Maven Assembly Plugin.
assembly {
targetDir = "/deployments"
layers = [{
fileSets = [{
directory = file("${project.rootDir}/build/dependencies")
outputDirectory = "static"
}]
}]
}
The layers
element within the assembly
element can have one or more
layer
elements with a Groovy DSL structure that supports the following configuration options:
Element | Description |
---|---|
id |
Unique ID for the layer. |
files |
List of files for the layer. Each file has the following fields:
|
fileSets |
List of filesets for the layer. Each fileset has the following fields:
|
baseDirectory |
Base directory from which to resolve the Assembly’s layer files and filesets. |
As described in the section Configuration for external Dockerfiles, Docker build args can be used. In addition to the configuration within the plugin configuration, you can also use properties to specify them:
Set a system property when running gradle, e.g: docker.buildArg.http_proxy=http://proxy:8001
. This is especially useful when using predefined Docker arguments for setting proxies transparently.
Set a project property within the build.gradle
, e.g:
docker.buildArg.myBuildArg = myValue
Please note that the system property setting will always override the project property. Also note that for all properties which are not Docker predefined properties, the external Dockerfile must contain an ARG instruction.
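As a sketch (the argument name and base image are hypothetical), a build arg passed as a property like docker.buildArg.BASE_TAG=17 would require a matching declaration in the external Dockerfile:

```dockerfile
# BASE_TAG is not a Docker predefined arg, so it must be declared
# with ARG before it can be used
ARG BASE_TAG=latest
FROM eclipse-temurin:${BASE_TAG}
```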
When creating a container one or more environment variables can be set via configuration with the env
parameter
kubernetes {
images {
image {
build {
env {
JAVA_HOME = '/opt/jdk8'
CATALINA_OPTS = '-Djava.security.egd=file:/dev/./urandom'
}
}
}
}
}
It is also possible to set the environment variables from the outside of the plugin’s configuration with the parameter envPropertyFile
. If given, this property file is used to set the environment variables where the keys and values specify the environment variable. Environment variables specified in this file override any environment variables specified in the configuration.
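A sketch of such a property file (file name and values are hypothetical):

```properties
# Each key=value pair becomes an environment variable in the container,
# overriding any variable of the same name from the plugin configuration
JAVA_OPTIONS=-Xmx256m
TZ=UTC
```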
Labels can be set inline the same way as environment variables:
kubernetes {
images {
image {
build {
labels = {
version = "${project.version}"
artifactId = "${project.name}"
}
}
}
}
}
Using entryPoint and cmd it is possible to specify the entry point or cmd for a container.
The difference is that an entrypoint is the command that is always executed, with the cmd as argument. If no entryPoint is provided, it defaults to /bin/sh -c, so any cmd given is executed with a shell. The arguments given to docker run are always passed as arguments to the entrypoint, overriding any given cmd option. On the other hand, if no extra arguments are given to docker run, the default cmd is used as argument to the entrypoint.
An entry point or command can be specified in two alternative formats:
Mode | Description |
---|---|
shell |
Shell form in which the whole line is given to |
exec |
List of arguments (with inner |
Either shell or params should be specified.
kubernetes {
images {
image {
build {
entryPoint {
shell = "java -jar \$HOME/server.jar"
}
}
}
}
}
or
kubernetes {
images {
image {
build {
entryPoint {
exec = ["java", "-jar", "/opt/demo/server.jar"]
}
}
}
}
}
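The exec form above corresponds to the following Dockerfile semantics (a sketch; the base image and default argument are hypothetical):

```dockerfile
FROM eclipse-temurin:17
# The entrypoint is always executed; CMD supplies its default arguments
ENTRYPOINT ["java", "-jar", "/opt/demo/server.jar"]
# Any extra arguments passed to `docker run` replace this default
CMD ["--server.port=8080"]
```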
Startup arguments are not used in S2I builds
This section includes Groovy DSL configuration options you can use to tweak generated Kubernetes manifests.
Labels and annotations can be easily added to any resource object. This is best explained by an example.
kubernetes {
resources {
labels { (1)
all { (2)
organisation = 'unesco' (3)
}
service { (4)
database = 'mysql'
persistent = 'true'
}
replicaSet { (5)
}
pod { (6)
}
deployment { (7)
}
}
annotations { (8)
}
}
}
1 | The labels section within resources contains labels which should be applied to objects of various kinds |
2 | Within all, labels which should be applied to every object can be specified |
3 | Within each section you can specify key value pairs as properties |
4 | service labels are used to label services |
5 | replicaSet labels are for replica set and replication controller |
6 | pod holds labels for pod specifications in replication controller, replica sets and deployments |
7 | deployment is for labels on deployments (kubernetes) and deployment configs (openshift) |
8 | The subelements are also available for specifying annotations. |
Labels and annotations can be specified in free form as a map. In this map, the element name is the name of the label or annotation respectively, whereas the content is the value to set.
The following subelements are possible for labels
and annotations
:
Element | Description |
---|---|
all |
All entries specified in the |
deployment |
Labels and annotations applied to |
pod |
Labels and annotations applied pod specification as used in |
replicaSet |
Labels and annotations applied to |
service |
Labels and annotations applied to |
ingress |
Labels and annotations applied to |
serviceAccount |
Labels and annotations applied to |
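With the labels example shown earlier, the generated Service would carry the merged labels along these lines (a sketch; the service name is hypothetical and the exact metadata depends on the active enrichers):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    organisation: unesco   # from the 'all' section
    database: mysql        # from the 'service' section
    persistent: "true"
```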
In JKube terminology, a Controller resource is a Kubernetes resource which manages Pods created for your application. These can be one of the following resources:
By default, a Deployment is generated in Kubernetes mode. You can easily configure different aspects of the generated Controller resource using Groovy DSL configuration. Here is an example:
kubernetes {
resources {
controller {
env { (1)
organization = 'Eclipse Foundation'
projectname = 'jkube'
}
controllerName = 'my-deploymentname' (2)
containerPrivileged = 'true' (3)
imagePullPolicy = 'Always' (4)
replicas = '3' (5)
liveness { (6)
getUrl = 'http://:8080/q/health'
tcpPort = '8080'
initialDelaySeconds = '3'
timeoutSeconds = '3'
}
startup { (7)
periodSeconds = 30
failureThreshold = 1
getUrl = "http://:8080/actuator/health"
}
volumes = [{ (8)
name = 'scratch'
type = 'emptyDir'
medium = 'Memory'
mounts = ['/var/scratch']
}]
containerResources {
requests { (9)
cpu = '250m'
memory = '32Mi'
}
limits { (10)
cpu = '500m'
memory = '64Mi'
}
}
nodeSelector { (11)
region = 'us-west'
type = 'user-node'
}
}
}
}
1 | Environment variables added to all of your application Pods |
2 | Name of the Controller (metadata.name set in the generated Deployment, Job, ReplicaSet, etc.) |
3 | Setting Security Context of all application Pods. |
4 | Configure how images would be updated. Can be one of IfNotPresent , Always or Never . Read Kubernetes Images docs for more details. |
5 | Number of replicas of pods we want to have in our application |
6 | Define an HTTP liveness request, see Kubernetes Liveness/Readiness probes for more details. |
7 | Define an HTTP startup request, see Kubernetes Startup probes for more details. |
8 | Mounting an EmptyDir Volume to your application pods |
9 | Requests describe the minimum amount of compute resources required. See Kubernetes Resource Management Documentation for more info. |
10 | Limits describe the maximum amount of compute resources allowed. See Kubernetes Resource Management Documentation for more info. |
11 | NodeSelector to schedule your application pods on specific nodes based on labels specified in the node. See Kubernetes NodeSelector for more details. |
Here are the fields available in resources
Groovy DSL configuration that would work with k8sResource:
Element | Description |
---|---|
Configuration element for changing various aspects of generated Controller. |
|
|
ServiceAccount name which will be used by pods created by controller resources(e.g. |
|
Use old |
This configuration field is focused only on changing various elements of Controller (mainly fields specified in PodTemplateSpec). Here are available configuration fields within this object:
Element | Description |
---|---|
|
Environment variables which will be added to containers in Pod template spec. |
Configuration element for adding volume mounts to containers in Pod template spec |
|
|
Name of the controller resource(i.e. |
Configuration element for adding a liveness probe |
|
Configuration element for adding readiness probe |
|
Configuration element for adding startup probe |
|
|
Run container in privileged mode. Sets |
|
How images should be pulled (maps to ImagePullPolicy). |
Configuration element for adding InitContainers to generated Controller resource. |
|
|
Number of replicas to create |
|
Pod’s restart policy. For |
Configure Controller’s compute resource requirements |
|
|
Schedule for CronJob written in Cron syntax. |
|
Configuration element for adding nodeSelector to Pod template spec. |
|
Specify secrets for pulling images from private repos |
Element | Description |
---|---|
|
Name of InitContainer |
|
Image used for InitContainer |
|
How images should be pulled (maps to ImagePullPolicy). |
|
Command to be executed in InitContainer (maps to |
|
Configuration element for adding volume mounts to InitContainers in Pod template spec |
|
Environment variables that will be added to this InitContainer in Pod template spec. |
Element | Description |
---|---|
|
The minimum amount of compute resources required. See Kubernetes Resource Management Documentation for more info. |
|
The maximum amount of compute resources allowed. See Kubernetes Resource Management Documentation for more info. |
Probe configuration is used for configuring liveness and readiness probes for containers. Both liveness and readiness probes support the following options:
Element | Description |
---|---|
|
Initial delay in seconds before the probe is started. |
|
Timeout in seconds how long the probe might take. |
|
Command to execute for probing. |
|
Probe URL for HTTP Probe. Configures HTTP probe fields like host: "" path: /health port: 8080 scheme: HTTP Host name with empty value defaults to Pod IP. You probably want to set "Host" in httpHeaders instead. |
|
TCP port to probe. |
|
When a probe fails, Kubernetes will try failureThreshold times before giving up |
|
Minimum consecutive successes for the probe to be considered successful after having failed. |
|
Custom headers to set in the request. |
|
How often in seconds to perform the probe. Defaults to 10 seconds. Minimum value is 1. |
The volumes field contains a list of volume configurations. Different configurations are supported in order to cover the different Volume types in Kubernetes.
Here are the options supported by a single volume:
Element | Description |
---|---|
|
type of Volume |
|
name of volume to be mounted |
|
List of mount paths of this volume. |
|
Path for volume |
|
medium, applicable for Volume type |
|
repository, applicable for Volume type |
|
revision, applicable for Volume type |
|
Secret name, applicable for Volume type |
|
Server name, applicable for Volume type |
|
Whether it’s read only or not |
|
pdName, applicable for Volume type |
|
File system type for Volume |
|
partition, applicable for Volume type |
|
endpoints, applicable for Volume type |
|
Claim Reference, applicable for Volume type |
|
volume id |
|
disk name, applicable for Volume type |
|
disk uri, applicable for Volume type |
|
kind, applicable for Volume type |
|
caching mode, applicable for Volume type |
|
Host Path type |
|
Share name, applicable for Volume type |
|
User name |
|
Secret File, applicable for Volume type |
|
Secret reference, applicable for Volume type |
|
LUN(Logical Unit Number) |
|
target WWNs, applicable for Volume type |
|
data set name, applicable for Volume type |
|
list of portals, applicable for Volume type |
|
target portal, applicable for Volume type |
|
registry, applicable for Volume type |
|
volume, applicable for Volume type |
|
group, applicable for Volume type |
|
IQN, applicable for Volume type |
|
list of monitors, applicable for Volume type |
|
pool, applicable for Volume type |
|
keyring, applicable for Volume type |
|
image, applicable for Volume type |
|
gateway, applicable for Volume type |
|
system, applicable for Volume type |
|
protection domain, applicable for Volume type |
|
storage pool, applicable for Volume type |
|
volume name, applicable for Volume type |
|
ConfigMap name, applicable for Volume type |
|
List of ConfigMap items, applicable for Volume type |
|
List of items, applicable for Volume type |
Groovy DSL configuration
You can create a secret using Groovy DSL configuration in the build.gradle
file. It should contain the following fields:
key | required | description |
---|---|---|
name |
|
this will be used as name of the kubernetes secret resource |
namespace |
|
the secret resource will be applied to the specific namespace, if provided |
This is best explained by an example.
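A minimal Groovy DSL sketch, assuming the resources section exposes a secrets list analogous to serviceAccounts (the name and namespace values are hypothetical):

```groovy
kubernetes {
  resources {
    secrets = [{
      name = 'mydockerkey'      // name of the generated Secret resource
      namespace = 'default'     // optional target namespace
    }]
  }
}
```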
Yaml fragment with annotation
You can create a secret using a yaml fragment. You can reference the docker server id with an annotation
jkube.eclipse.org/dockerServerId
. The yaml fragment file should be put under
the src/main/jkube/
folder.
apiVersion: v1
kind: Secret
metadata:
name: mydockerkey
namespace: default
annotations:
jkube.eclipse.org/dockerServerId: ${docker.registry}
type: kubernetes.io/dockercfg
When the k8sResource task is run, an Ingress will be generated for each Service if the jkube.createExternalUrls property is enabled.
The generated Ingress
can be further customized by using a Groovy DSL configuration, or by providing a YAML resource fragment.
Groovy DSL Configuration
Element | Description |
---|---|
Configuration element for creating new Ingress |
|
|
Set host for Ingress or OpenShift Route |
Here is an example of configuring Ingress using Groovy DSL configuration:
jkube.createExternalUrls=true
kubernetes {
resources {
ingress {
ingressTlsConfigs = [{ (1)
hosts = ["foo.bar.com"]
secretName = "testsecret-tls"
}]
ingressRules = [{
host = "foo.bar.com" (2)
paths = [{
pathType = "Prefix" (3)
path = "/foo" (4)
serviceName = "service1" (5)
servicePort = "8080" (6)
}]
}]
}
}
}
1 | Ingress TLS Configuration to specify Secret that contains TLS private key and certificate |
2 | Host names, can be precise matches or a wildcard. See Kubernetes Ingress Hostname documentation for more details |
3 | Ingress Path Type. Can be one of ImplementationSpecific , Exact or Prefix |
4 | Ingress path corresponding to provided service.name |
5 | Service Name corresponding to path |
6 | Service Port corresponding to path |
Here are the supported options while providing ingress
in Groovy DSL configuration
Element | Description |
---|---|
IngressRule configuration |
|
Ingress TLS configuration |
Here are the supported options while providing ingressRules
in Groovy DSL configuration
Element | Description |
---|---|
|
Host name |
IngressRule path configuration |
Here are the supported options while providing paths
in Groovy DSL configuration
Element | Description |
---|---|
|
type of Path |
|
path |
|
Service name |
|
Service port |
Resource reference in Ingress backend |
Here are the supported options while providing resource
in IngressRule’s path Groovy DSL configuration
Element | Description |
---|---|
|
Resource name |
|
Resource kind |
|
Resource’s apiGroup |
Here are the supported options while providing ingressTlsConfigs in Groovy DSL configuration
Element | Description |
---|---|
|
Secret name |
|
a list of string |
Ingress Yaml fragment:
You can create Ingress
using YAML fragments too by placing the partial YAML file in the src/main/jkube
directory. The following snippet contains an Ingress
fragment example.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: tls-example-ingress
spec:
tls:
- hosts:
- https-example.foo.com
secretName: testsecret-tls
rules:
- host: https-example.foo.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: service1
port:
number: 80
You can use the resource configuration to generate a ServiceAccount or to reference an already existing ServiceAccount from your generated Deployment.
Here is an example of Groovy DSL configuration to generate a ServiceAccount:
kubernetes {
resources {
serviceAccounts = [{
name = 'my-serviceaccount' (1)
deploymentRef = 'my-deployment-name' (2)
}]
}
}
1 | Name of ServiceAccount to be created |
2 | Deployment which will be using this ServiceAccount |
If you don’t want to generate a ServiceAccount but just use an existing ServiceAccount in your Deployment, you can configure it via the serviceAccount field in the resource configuration. Here is an example:
kubernetes {
resources {
serviceAccount = 'my-existing-serviceaccount'
}
}
Service Account Resource fragment:
If you don’t want to use Groovy DSL configuration, you can provide a resource fragment for the ServiceAccount resource. Here is what it would look like:
apiVersion: v1
kind: ServiceAccount
metadata:
name: build-robot
The k8sResource task also validates the generated resource descriptors using the API specification of Kubernetes.
Element | Description | Property |
---|---|---|
skipResourceValidation |
If value is set to Default is |
|
failOnValidationError |
If value is set to Default is |
|
The usual way to define Docker images is with the plugin configuration as explained in k8sBuild. This can either be done completely within the build.gradle
or by referring to an external Dockerfile.
However, this plugin provides an additional route for defining image configurations. This is done by so-called Generators. A generator is a Java component providing an auto-detection mechanism for certain build types like a Spring Boot build or a plain Java build. As soon as a Generator detects that it is applicable, it will be called with the list of images configured in the build.gradle. Typically, a generator dynamically creates a new image configuration only if this list is empty, but a generator is also free to add new images to an existing list or even change the current image list.
The included Generators are enabled by default, but you can easily disable them or only select a certain set of generators. Each generator has a unique name.
The generator configuration is embedded in a generator
configuration section:
kubernetes {
generator { (1)
includes = ['spring-boot'] (2)
config { (3)
'spring-boot' { (4)
alias = 'ping'
}
}
}
}
1 | Start of generators' configuration. |
2 | Generators can be included and excluded. Includes have precedence, and the generators are called in the given order. |
3 | Configuration for individual generators. |
4 | The config is a map of supported config values. Each section is embedded in a tag named after the generator. The following sub-elements are supported: |
Element | Description |
---|---|
|
Contains one or more |
|
Holds one or more |
|
Configuration for all generators. Each generator support a specific set of configuration values as described in the documentation. The subelements of this section are generator names to configure. E.g. for generator |
Besides specifying the generator configuration in the plugin’s configuration, it can also be set directly with properties:
gradle k8sBuild -Pjkube.generator.java-exec.webPort=8082
The general scheme is a prefix jkube.generator.
followed by the unique generator name and then the generator specific key.
In addition to the provided default Generators described in the next section Default Generators, custom generators can easily be added. There are two ways to include generators:
You can declare the jars holding the generators as a dependency of this plugin, as shown in this example
buildscript {
repositories {
mavenLocal()
}
dependencies {
classpath('io.acme:mygenerator:1.0')
}
}
Alternatively, if your application code comes with a custom generator, you can set the global configuration option useProjectClasspath (property: jkube.useProjectClasspath) to true. In this case the project artifact and its dependencies are also searched for Generators. See Generator API for details on how to write your own generators.
All default generators examine the build information for certain aspects and generate a Docker build configuration on the fly. They can be configured to a certain degree, where the configuration is generator specific.
Generator | Name | Description |
---|---|---|
|
Generator for creating Image when user places |
|
|
Generic generator for flat classpath and fat-jar Java applications |
|
|
Spring Boot specific generator |
|
|
Generator for Thorntail v2 apps |
|
|
Generator for Vert.x applications |
|
|
Generator for WAR based applications supporting Tomcat, Jetty and Wildfly base images |
|
|
Generator for Quarkus based applications |
|
|
Generator for Open Liberty applications |
|
|
Generator for Micronaut based applications |
There are some configuration options which are shared by all generators:
Element | Description | Property |
---|---|---|
add |
When set to |
|
alias |
An alias name for referencing this image in various other parts of the configuration. This is also used in the log output. The default alias name is the name of the generator. |
|
from |
This is the base image from where to start when creating the images. By default, the generators make an opinionated decision for the base image which are described in the respective generator section. |
|
fromMode |
When using OpenShift S2I builds the base image can be either a plain docker image (mode: |
|
labels |
A comma separated list of additional labels you want to set on your image with |
|
name |
The Docker image name used when doing Docker builds. For OpenShift S2I builds it’s the name of the image stream. This
can be a pattern as described in Name Placeholders. The default is |
|
registry |
An optional Docker registry used when doing Docker builds. It has no effect for OpenShift S2I builds. |
|
tags |
A comma separated list of additional tags you want to tag your image with |
|
buildpacksBuilderImage |
Configure BuildPack builder OCI image for BuildPack Build. This field is applicable only in |
|
When used as properties they can be directly referenced with the property names above.
The Simple Dockerfile generator is responsible for creating an opinionated image configuration when the user places a Dockerfile in the project’s base directory.
This generator gets activated when these conditions are met:
Dockerfile is placed in the project’s base directory
Either the image configuration is not provided, or the provided image configuration does not have build configured.
An image built with this configuration uses the Dockerfile for the docker build and the project base directory as the docker context directory.
One of the most generic Generators is the java-exec generator.
It is responsible for starting up arbitrary Java applications.
It knows how to deal with fat-jar applications, where the application and all dependencies are included within a single jar and the MANIFEST.MF within the jar references a main class, but also with flat classpath applications, where the dependencies are separate jar files and a main class is given.
If no main class is explicitly configured, the plugin first attempts to locate a fat jar.
If the gradle build creates a JAR file with a META-INF/MANIFEST.MF containing a Main-Class entry, then this is considered to be the fat jar to use.
If there is more than one such file, the largest one is used.
If a main class is configured (see below), then the image configuration will contain the application jar plus all dependency jars.
If no main class is configured and no fat jar is detected, then this Generator tries to detect a single main class by searching for public static void main(String args[]) among the application classes. If exactly one class is found, this is considered to be the main class. If none or more than one is found, the Generator does nothing.
It will use the following base image by default which, as explained above, can be changed with the from configuration.
Docker Build | S2I Build | ImageStream | |
---|---|---|---|
Community |
|
|
|
These images always refer to the latest tag.
When a fromMode
of istag
is used to specify an ImageStreamTag
and when no from
is given, then as default the
ImageStreamTag
jkube-java
in the namespace openshift
is chosen.
By default, fromMode = "docker" is used, which uses a plain Docker image reference for the S2I builder image.
Besides the common configuration parameters described in the table common generator options, the following additional configuration options are recognized:
Element | Description | Property |
---|---|---|
targetDir |
Directory within the generated image where the detected artifacts are put. Change this only if the base image is changed, too. Defaults to |
|
jolokiaPort |
Port of the Jolokia agent exposed by the base image. Set this to 0 if you don’t want to expose the Jolokia port. Defaults to |
|
mainClass |
Main class to call. If not given first a check is performed to detect a fat-jar (see above). Next a class is looked up by scanning If no such class is found or if more than one is found, then this generator does nothing. |
|
prometheusPort |
Port of the Prometheus jmx_exporter exposed by the base image. Set this to 0 if you don’t want to expose the Prometheus port. Defaults to |
|
webPort |
Port to expose as service, which is supposed to be the port of a web application. Set this to 0 if you don’t want to expose a port. Defaults to |
|
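A sketch of overriding these options for the java-exec generator (the class name and port value are hypothetical):

```groovy
kubernetes {
  generator {
    config {
      'java-exec' {
        mainClass = 'com.example.Main'  // skips fat-jar/main-class detection
        webPort = '8080'                // port exposed as a service
      }
    }
  }
}
```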
The exposed ports are typically used later on by Enrichers to create default Kubernetes or OpenShift services.
You can add additional files to the target image within baseDir
by placing files into src/main/jkube-includes
.
These will be added with mode 0644
, while everything in src/main/jkube-includes/bin
will be added with 0755
.
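The layout above can be sketched as follows (the script name is hypothetical):

```shell
# Add an extra file to the image via the jkube-includes convention
mkdir -p src/main/jkube-includes/bin
printf '#!/bin/sh\necho hello\n' > src/main/jkube-includes/bin/hello.sh
# Files under jkube-includes/ are added with mode 0644;
# files under jkube-includes/bin/ are added with mode 0755.
ls src/main/jkube-includes/bin
```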
This generator is called spring-boot
and gets activated when it finds a plugin with id org.springframework.boot
in the build.gradle
.
This generator is based on the Java Application Generator and inherits all of its configuration values. The generated container port is read from the server.port property in application.properties, defaulting to 8080 if it is not found. It also uses the same default images as the java-exec Generator.
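For example, with the following application.properties (the value is hypothetical), the generated container port would be 9090:

```properties
# Read by the spring-boot generator to determine the container port
server.port=9090
```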
Besides the common generator options and the java-exec options, the following additional configuration is recognized:
Element | Description | Property |
---|---|---|
color |
If set, force the use of color in the Spring Boot console output. |
|
The generator adds Kubernetes liveness and readiness probes pointing to either the management or server port as read from the application.properties
.
If the management.port (for Spring Boot 1) or management.server.port (for Spring Boot 2) and management.ssl.key-store (for Spring Boot 1) or management.server.ssl.key-store (for Spring Boot 2) properties are set in application.properties, or otherwise the server.ssl.key-store property is set in application.properties, then the probes are automatically set to use https.
The generator works differently when called together with k8sWatch
.
In that case it enables support for Spring Boot Developer Tools which allows for hot reloading of the Spring Boot app.
In particular, the following steps are performed:
If a secret token is not provided within the Spring Boot application configuration application.properties
or application.yml
with the key spring.devtools.remote.secret
then a custom secret token is created and added to application.properties
Add spring-boot-devtools.jar
as BOOT-INF/lib/spring-devtools.jar
to the spring-boot fat jar.
Since during k8sWatch
the application itself within the build/
directory is modified to allow easy reloading, you must ensure that you run gradle clean
before building an artifact intended for production.
Since released versions are typically generated by a CI system which does a clean build anyway, this should only be a theoretical problem.
The Thorntail v2 generator detects a Thorntail v2 build and disables the Prometheus Java agent because of this issue.
Otherwise, this generator is identical to the java-exec generator. It supports the common generator options and the java-exec
options.
The Vert.x generator detects an application using Eclipse Vert.x. It generates the metadata to start the application as a fat jar.
Currently, this generator is enabled if:
you are using the Vert.x Gradle Plugin (https://github.com/jponge/vertx-gradle-plugin)
you are depending on io.vertx:vertx-core
and use the Shadow Jar plugin
Otherwise, this generator is identical to the java-exec generator. It supports the common generator options and the java-exec
options.
The generator automatically:
enables metrics and JMX publishing of the metrics when io.vertx:vertx-dropwizard-metrics
is in the project’s classpath / dependencies.
enables clustering when a Vert.x cluster manager is available in the project’s classpath / dependencies. This is done by appending -cluster
to the command line.
forces an IPv4 stack when vertx-infinispan
is used.
disables the async DNS resolver to fall back to the regular JVM DNS resolver.
You can pass application parameters by setting the JAVA_ARGS
env variable. You can pass system properties either using the same variable or using JAVA_OPTIONS
. For instance, create src/main/jkube/deployment.yml
with the following content to configure JAVA_ARGS
:
spec:
template:
spec:
containers:
- env:
- name: JAVA_ARGS
value: "-Dfoo=bar -cluster -instances=2"
The webapp
generator tries to detect WAR builds and selects a base servlet container image based on the configuration found in the build.gradle
:
A Tomcat base image is selected by default.
A Jetty base image is selected when one of the files WEB-INF/jetty-web.xml
or WEB-INF/jetty-logging.properties
is found.
A Wildfly base image is chosen when a Wildfly specific deployment descriptor like jboss-web.xml
is found.
The base images chosen are:
Docker Build | S2I Build | |
---|---|---|
Tomcat |
|
|
Jetty |
|
|
Wildfly |
|
In addition to the common generator options this generator can be configured with the following options:
Element | Description | Property |
---|---|---|
server |
Fix server to use in the base image. Can be either tomcat, jetty or wildfly. |
|
targetDir |
Where to put the war file into the target image. By default, it’s selected by the base image chosen but can be overwritten with this option. Defaults to |
|
user |
User and/or group under which the files should be added. The syntax of this options is described in Assembly Configuration. |
|
path |
Context path with which the application can be reached by default. Defaults to |
|
cmd |
Command to use to start the container. By default, the base image’s startup command is used. |
|
ports |
Comma separated list of ports to expose in the image and which eventually are translated later to Kubernetes services. The ports depend on the base image and are selected automatically. But they can be overridden here. |
|
env |
Environment variable to be set to the image builder environment. Should be set in the format |
|
From Tomcat 10, only JakartaEE compliant projects are supported. However, legacy JavaEE projects can automatically be migrated by deploying the war in ${CATALINA_HOME}/webapps-javaee
. By default, the webapp generator is based on a Tomcat 10+ image and will copy the war file to ${CATALINA_HOME}/webapps-javaee
.
If the project is already JakartaEE compliant, it is recommended to set the webapp directory to ${CATALINA_HOME}/webapps
. This can be done by setting the property jkube.generator.webapp.env
to TOMCAT_WEBAPPS_DIR=webapps
.
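For example, this can be set in gradle.properties:
jkube.generator.webapp.env = TOMCAT_WEBAPPS_DIR=webapps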
To keep using Tomcat 9, set the properties:
jkube.generator.webapp.from
to quay.io/jkube/jkube-tomcat9:0.0.16
jkube.generator.webapp.cmd
to /usr/local/s2i/run
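For example, both properties can be set together in gradle.properties:
jkube.generator.webapp.from = quay.io/jkube/jkube-tomcat9:0.0.16
jkube.generator.webapp.cmd = /usr/local/s2i/run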
The Quarkus
generator detects Quarkus based projects by looking at the project build.gradle
:
The base images chosen are:
Docker Build | S2I Build | |
---|---|---|
Native |
|
|
Normal Build |
|
|
The Open Liberty generator runs when the Open Liberty plugin is enabled in the gradle build.
This can be done in either of two ways, as specified in the OpenLiberty Gradle Plugin docs:
Within apply plugin:
section as liberty
Within plugins
section as io.openliberty.tools.gradle.Liberty
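For example, using the plugins section (the plugin version shown is a placeholder):
plugins {
id 'java'
id 'io.openliberty.tools.gradle.Liberty' version '3.5.1'
}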
The generator is similar to the java-exec generator. It supports the common generator options and the java-exec
options.
For Open Liberty, the default value of webPort is 9080.
The Micronaut generator (named micronaut
) detects a Micronaut project by analyzing the plugin
dependencies searching for io.micronaut.application:io.micronaut.application.gradle.plugin
.
This generator is based on the Java Application Generator and inherits all of its configuration values.
The base images chosen are the following, however, these can be overridden using jkube.generator.from
property:
Docker Build | S2I Build | |
---|---|---|
Native |
|
|
Normal Build |
|
|
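For example, the base image could be overridden in gradle.properties (the image name here is a placeholder):
jkube.generator.from = registry.example.com/my-base/java:latest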
The Helidon
generator detects Helidon based projects by looking at the project build.gradle
:
The base images chosen are the following, however, these can be overridden using jkube.generator.from
property:
Docker Build | S2I Build | |
---|---|---|
Native |
|
|
Normal Build |
|
|
It’s possible to extend Eclipse JKube’s Generator API to define your own custom Generators for your use case. Please refer to the Generator interface; you can create new Generators by implementing it. Check out the Custom Foo generator quickstart for a detailed example.
Enriching is the complementary concept to Generators. Whereas Generators are used to create and customize Docker images, Enrichers are used to create and customize Kubernetes resource objects.
There are a lot of similarities to Generators:
Each Enricher has a unique name.
Enrichers are looked up automatically from the plugin dependencies and there is a set of default enrichers delivered with this plugin.
Enrichers are configured the same way as Generators
The Generator example is a good blueprint, simply replace generator
with enricher
. The configuration is structurally identical:
Element | Description |
---|---|
|
Contains one or more |
|
Holds one or more |
|
Configuration for all enrichers. Each enricher supports a specific set of configuration values as described in its documentation. The subelements of this section are enricher names. E.g. for enricher |
This plugin comes with a set of default enrichers.
kubernetes-gradle-plugin comes with a set of enrichers which are enabled by default. There are two categories of default enrichers:
Generic Enrichers are used to add default resource object when they are missing or add common metadata extracted from the given build information.
Specific Enrichers are enrichers which are focused on a certain tech stack that they detect.
Enricher | Description |
---|---|
Add ConfigMap elements defined as Groovy DSL or as annotation. |
|
Create default controller (replication controller, replica set or deployment Kubernetes doc) if missing. |
|
Merges |
|
Enables debug mode via a property or Groovy DSL configuration |
|
Examine build dependencies for |
|
Check local |
|
Add the image name into a |
|
Create a default Ingress if missing or configured from Groovy DSL configuration |
|
Overrides ImagePullPolicy in controller resources provided |
|
Add labels/annotations to generated Kubernetes resources |
|
Set the Namespace of the generated and processed Kubernetes resources metadata and optionally create a new Namespace |
|
Add a default name to every object which misses a name. |
|
Add name of StorageClass required by PersistentVolumeClaim either in metadata or in spec. |
|
Copy over annotations from a |
|
Add a default portname for commonly known service. |
|
Add gradle coordinates as labels to all objects. |
|
Override number of replicas for any controller processed by JKube. |
|
Add revision history limit (Kubernetes doc) as a deployment spec property to the Kubernetes/OpenShift resources. |
|
Add Secret elements defined as annotation. |
|
Enforces best practice and recommended security rules for Kubernetes and OpenShift resources. |
|
Create a default service if missing and extract ports from the Docker image configuration. |
|
Add a ServiceAccount defined as Groovy DSL or mentioned in resource fragment. |
|
Add ImageStreamTag change triggers on Kubernetes resources such as StatefulSets, ReplicaSets and DaemonSets using the |
|
Fixes the permission of persistent volume mount with the help of an init container. |
|
Default generic enrichers are used for adding missing resources or adding metadata to given resource objects. The following default enrichers are available out of the box.
This enricher adds ConfigMap defined as resources
in plugin configuration and/or resolves file content from an annotation.
In Groovy DSL you can define:
kubernetes {
resources {
configMap {
name = 'myconfigmap'
entries = [{
name = 'A'
value = 'B'
}]
}
}
}
This creates a ConfigMap data with key A
and value B
.
You can also use the file
tag to refer to the content of a file.
kubernetes {
resources {
configMap {
name = 'configmap-test'
entries = [{
file = 'src/test/resources/test-application.properties'
}]
}
}
}
This creates a ConfigMap with key test-application.properties
and value the content of the src/test/resources/test-application.properties
file.
If you set the name
tag, it is used as the key instead of the filename.
Here are the supported options while providing configMap
in Groovy DSL configuration
Element | Description |
---|---|
data for ConfigMap |
|
|
Name of the ConfigMap |
entries
is a list of entry
configuration objects. Here are the supported options while providing entry
in Groovy DSL configuration
Element | Description |
---|---|
|
Entry value |
|
path to a file or directory. If it’s a single file then file contents would be read as value. If it’s a directory then each file’s content is stored as value with file name as key. |
|
Entry name |
If you are defining a custom ConfigMap
file, you can use an annotation to define a file name as key and its content as the value:
metadata:
name: ${name}
annotations:
jkube.eclipse.org/cm/application.properties: src/test/resources/test-application.properties
This creates a ConfigMap
data with key application.properties
(part defined after cm
) and value the content of src/test/resources/test-application.properties
file.
You can specify a directory instead of a file:
metadata:
name: ${name}
annotations:
jkube.eclipse.org/cm/application.properties: src/test/resources/test-dir
This creates a ConfigMap
named application.properties
(part defined after cm
) and for each file under the directory test-dir
one entry with file name as key and its content as the value; subdirectories are ignored.
This enricher is used to ensure that a controller is present. This can be either directly configured with fragments or with the Groovy DSL configuration. An explicit configuration always takes precedence over auto detection. See Kubernetes doc for more information on types of controllers.
The following configuration parameters can be used to influence the behaviour of this enricher:
Element | Description | Property |
---|---|---|
name |
Name of the Controller. Kubernetes Controller names must start with a letter. If the project name starts with a
digit, Defaults to project name. |
|
pullPolicy |
Deprecated: use Image pull policy to use for the container. One of: IfNotPresent, Always. Defaults to |
|
type |
Type of Controller to create. One of: ReplicationController, ReplicaSet, Deployment, DeploymentConfig, StatefulSet, DaemonSet, Job, CronJob. Defaults to |
|
replicaCount |
Number of replicas for the container. Defaults to |
|
schedule |
Schedule for CronJob written in Cron syntax. |
|
Image pull policy to use for the container. One of: IfNotPresent, Always. |
|
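As a sketch, a CronJob controller could be requested via the enricher configuration (the enricher name jkube-controller and these element names are assumptions based on the options above):
kubernetes {
enricher {
config {
'jkube-controller' {
type = 'CronJob'
schedule = '0 * * * *'
}
}
}
}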
Merges JAVA_OPTIONS
environment variable defined in Build configuration (image)
environment (env
) with Container
JAVA_OPTIONS
environment variable added
by other enrichers, Groovy DSL configuration or fragment.
Option | Description | Property |
---|---|---|
disable |
Disables the enricher, any Defaults to |
|
This enricher enables debug mode via a property jkube.debug.enabled
or via enabling debug mode in enricher configuration.
You can either set this property in gradle.properties
file:
jkube.debug.enabled = true
Or provide Groovy DSL configuration for enricher
kubernetes {
enricher {
config {
'jkube-debug' {
enabled = true
}
}
}
}
This would do the following things:
Add environment variable JAVA_ENABLE_DEBUG
with value set to true
in your application container
Add a container port named debug
to your existing list of container ports with value set via JAVA_DEBUG_PORT
environment variable. If not present, it defaults to 5005
.
This enricher is used for embedding Kubernetes configuration manifests (YAML) into a single package. It looks for the following files in compile-scope dependencies and adds the Kubernetes resources inside them to the final generated Kubernetes manifests:
META-INF/jkube/kubernetes.yml
META-INF/jkube/k8s-template.yml
META-INF/jkube/openshift.yml
(in case of OpenShift)
Option | Description | Property |
---|---|---|
includeTransitive |
Whether to look for kubernetes manifest files in transitive dependencies. Defaults to |
|
includePlugin |
Whether to look on the current plugin classpath too. Defaults to |
|
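As a sketch, transitive lookup could be enabled via the enricher configuration (assuming the enricher name jkube-dependency):
kubernetes {
enricher {
config {
'jkube-dependency' {
includeTransitive = 'true'
}
}
}
}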
Enricher that adds info from .git
directory as annotations. These are explained in the table below:
Annotation |
Description |
|
Current Git Branch |
|
Latest commit of current branch |
|
URL of your configured git remote |
|
Deprecated: Use Current Git Branch |
|
Deprecated: Use Latest commit of current branch |
|
Deprecated: Use URL of your configured git remote |
Option | Description | Property |
---|---|---|
gitRemote |
Configures the git remote name, whose URL you want to annotate as 'git-url'. Defaults to |
|
This enricher merges container image related fields into the Pod specification of the specified controller (e.g. Deployment
, ReplicaSet
, ReplicationController
).
The full image name is set as image
.
An image alias is set as name
. If no alias is provided, an opinionated name based on the image user and project name is used.
The pull policy imagePullPolicy
is set according to the given configuration. If no
configuration is set, the default is IfNotPresent
for release versions, and Always
for snapshot versions.
Environment variables as configured via Groovy DSL configuration.
Any already configured container in the pod spec is updated if the property is not set.
Option | Description | Property |
---|---|---|
pullPolicy |
What pull policy to use when fetching images |
|
Enricher responsible for creation of Ingress either using opinionated defaults or as per provided Groovy DSL configuration.
This enricher gets activated when jkube.createExternalUrls
is set to true
.
JKube generates Ingress only for Services which have either expose=true
or exposeUrl=true
labels set.
For more information, check out Ingress Generation section.
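For example, a Service fragment (e.g. src/main/jkube/service.yml; this is a sketch) could opt in to Ingress generation via the label:
metadata:
  labels:
    expose: "true"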
Element | Description | Property |
---|---|---|
host |
Host is the fully qualified domain name of a network host. |
|
targetApiVersion |
Whether to generate Defaults to |
|
This enricher fixes ImagePullPolicy for Kubernetes/Openshift resources whenever a -Djkube.imagePullPolicy
parameter is provided.
This enricher is responsible for adding labels and annotations to your resources. It reads labels
and annotations
fields provided in resources
and adds respective labels/annotations to Kubernetes resources.
You can also configure whether you want to add these labels/annotations to some specific resource or all resources.
You can see an example of its usage in the k8sResource Labels And Annotations section.
This enricher adds a Namespace
/Project
resource to the Kubernetes Resources list in case the namespace
configuration (jkube.enricher.jkube-namespace.namespace
) is provided.
In addition, this enricher sets the namespace (.metadata.namespace
) of the JKube generated and processed Kubernetes
resources in case they don’t already have one configured (see the force
configuration).
The following configuration parameters can be used to influence the behaviour of this enricher:
Element | Description | Property |
---|---|---|
namespace |
Namespace as string which we want to create. A new |
|
type |
Whether we want to generate a Defaults to |
|
force |
If the Defaults to false. |
|
This enricher also configures the generated Namespace in the .metadata.namespace
field of Kubernetes resources when provided via Groovy DSL configuration. Here is an example:
kubernetes {
resources {
namespace = 'mynamespace'
}
}
Enricher for adding a "name" to the metadata of the various objects we create.
Option | Description | Property |
---|---|---|
name |
Configures the |
|
This enricher copies the annotations from a Controller (Deployment/ReplicaSet/StatefulSet) metadata to the annotations of container Pod template spec’s metadata.
This enricher uses a given set of well known ports:
Port Number |
Name |
|
|
|
|
|
|
|
|
If not found, it creates container ports with names of IANA registered services.
Enricher that adds standard labels and selectors to generated resources (e.g. app
, group
, provider
, version
).
The jkube-project-label
enricher supports the following configuration options:
Option | Description | Property |
---|---|---|
useProjectLabel |
Enable this flag to turn on the generation of the old Defaults to |
|
app |
Makes it possible to define a custom Defaults to the Gradle Project |
|
provider |
Makes it possible to define a custom Defaults to |
|
group |
Makes it possible to define a custom Defaults to the Gradle Project |
|
version |
Makes it possible to define a custom Defaults to the Gradle Project |
|
The project labels which are already specified in the input fragments are not overridden by the enricher.
Enricher which adds the name of the StorageClass required by a PersistentVolumeClaim either in metadata or in spec.
Option | Description | Property |
---|---|---|
defaultStorageClass |
PersistentVolume storage class. |
|
useStorageClassAnnotation |
If enabled, storage class would be added to PersistentVolumeClaim metadata as Defaults to |
|
This enricher overrides the number of replicas for every controller (DaemonSet, Deployment, DeploymentConfig, Job, CronJob, ReplicationController, ReplicaSet, StatefulSet) generated or processed by JKube (including those from dependencies).
In order to use this enricher you need to configure the jkube.replicas
property:
gradle -Pjkube.replicas=42 k8sResource
jkube.replicas = 42
You can use this Enricher at runtime to temporarily force the number of replicas to a given value.
This enricher adds spec.revisionHistoryLimit
property to deployment spec of Kubernetes/OpenShift resources.
A Deployment’s revision history is stored in its ReplicaSets; this property specifies the number of old ReplicaSets to retain in order to allow rollback.
For more information read Kubernetes documentation.
The following configuration parameters can be used to influence the behaviour of this enricher:
Element | Description | Property |
---|---|---|
limit |
Number of revision histories to retain. Defaults to |
|
Just as with any other enricher, you can specify the required properties within the enricher’s configuration as below:
kubernetes {
enricher {
config {
'jkube-revision-history' {
limit = 8
}
}
}
}
This information will be added as a spec property in the generated manifest:
# ...
kind: Deployment
spec:
revisionHistoryLimit: 8
# ...
This enricher adds Secret defined as file content from an annotation.
If you are defining a custom Secret
file, you can use an annotation to define a file name as key and its content as the value:
metadata:
name: ${name}
annotations:
jkube.eclipse.org/secret/application.properties: src/test/resources/test-application.properties
This creates a Secret
data with the key application.properties
(part defined after secret
) and as value the content of the
src/test/resources/test-application.properties
file (base64 encoded).
This enricher enforces security best practices and recommendations for Kubernetes objects such as Deployments, ReplicaSets, Jobs, CronJobs, and so on.
The enricher is not included in the default
profile.
However, you can easily enable it by leveraging the security-hardening
profile.
These are some of the rules enforced by this enricher:
Disables the auto-mounting of the service account token.
Prevents containers from running in privileged mode.
Ensures containers do not allow privilege escalation.
Prevents containers from running as the root user.
Configures the container to run as a user with a high UID to avoid host conflict.
Ensures the container’s seccomp profile is set to RuntimeDefault.
This enricher is used to ensure that a service is present. This can be either directly configured with fragments or with the Groovy DSL configuration, but it can be also automatically inferred by looking at the ports exposed by an image configuration. An explicit configuration always takes precedence over auto detection. For enriching an existing service this enricher actually works only on a configured service which matches with the configured (or inferred) service name.
The following configuration parameters can be used to influence the behaviour of this enricher:
Element | Description | Property |
---|---|---|
name |
Service name to enrich by default. If not given here or configured elsewhere, the artifactId/project name is used. |
|
headless |
Whether a headless service without a port should be configured. A headless service has the Defaults to |
|
expose |
If set to true, a label Defaults to |
|
type |
Kubernetes / OpenShift service type to set like LoadBalancer, NodePort or ClusterIP. |
|
port |
The service port to use. By default the same port as the ports exposed in the image configuration is used, but can be changed with this parameter. See below for a detailed description of the format which can be put into this variable. |
|
multiPort |
Set this to Defaults to |
|
protocol |
Default protocol to use for the services. Must be Defaults to |
|
normalizePort |
Normalize the port numbering of the service to common and conventional port numbers. Defaults to |
|
The following port mapping takes effect when the normalizePort option is set to true.
Original Port | Normalized Port |
---|---|
8080 |
80 |
8081 |
80 |
8181 |
80 |
8180 |
80 |
8443 |
443 |
443 |
443 |
You specify the properties as for any enricher within the enricher configuration:
kubernetes {
enricher {
config {
'jkube-service' {
name = 'my-service'
type = 'NodePort'
multiPort = true
}
}
}
}
With the option port
you can influence how ports are mapped from the pod to the service.
By default, if this option is not given, the exposed ports are dictated by the ports exposed by the Docker images contained in the pods.
Remember, each configured image can be part of the pod.
However, you can also expose ports completely different from those declared in the images' metadata.
The property port
can contain a comma separated list of mappings of the following format:
<servicePort1>:<targetPort1>/<protocol>,<servicePort2>:<targetPort2>/<protocol>,....
where the targetPort
and protocol
specifications are optional. These ports are overlaid over the ports exposed by the images, in the given order.
This is best explained by some examples.
For example, if you have a pod which exposes a microservice on port 8080 and you want to expose it as a service on port 80 (so that it can be accessed with http://myservice
) you can simply use the following enricher configuration:
kubernetes {
enricher {
config {
'jkube-service' {
name = 'myservice'
port = '80:8080' (1)
}
}
}
}
1 | 80 is the service port, 8080 the port opened from the pod’s images |
If your pod declares its exposed ports (which e.g. all generators do), then you can even omit the 8080 here (i.e. port = 80
).
In this case the first exposed port will be mapped to port 80; all other exposed ports will be omitted.
By default, an automatically generated service only exposes the first port, even when more ports are exposed.
When you want to map multiple ports you need to set the config option multiPort
to true
.
In this case you can also provide multiple mappings as a comma separated list in the port
specification where each element of the list are the mapping for the first, second, … port.
A more complex (and somewhat artificial) specification could be port = '80,9779:9779/udp,443'
.
Assuming that the image exposes ports 8080
and 8778
(either directly or via generators) and we have switched on multiport mode, then the following service port mappings will be performed for the automatically generated service:
Pod port 8080 is mapped to service port 80.
Pod port 9779 is mapped to service port 9779 with protocol UDP. Note how this second entry overrides the pod exposed port 8778.
Pod port 443 is mapped to service port 443.
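The mapping described above corresponds to an enricher configuration like the following (the service name is hypothetical):
kubernetes {
enricher {
config {
'jkube-service' {
name = 'myservice'
multiPort = true
port = '80,9779:9779/udp,443'
}
}
}
}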
This example also shows the mapping rules:
Port specifications in port
always override the port metadata of the contained Docker images (i.e. the ports exposed)
You can always provide a complete mapping with port
on your own
The ports exposed by the images serve as default values which are used if not specified by this configuration option.
You can map ports which are not exposed by the images by specifying them as target ports.
Multiple ports are only mapped when multiPort mode is enabled (which is switched off by default). If multiPort mode is disabled, only the first port from the list of mapped ports calculated as above is taken.
When you set legacyPortMapping
to true, ports 8080 to 9090 are mapped to port 80 automatically if not explicitly mapped via port
. I.e. when an image exposes port 8080, with legacy mapping it is mapped to service port 80, not 8080. There is no good reason to switch this on, and this option may be removed at any time.
This enricher is also used by the resources
Groovy DSL configuration to generate a Service. Here are the fields supported in resources
which work with this enricher:
Element |
Description |
Configuration element for generating Service resource |
services
is a list of service
configuration objects. Here are the supported options while providing service
in Groovy DSL configuration
Element | Description |
---|---|
|
Service name |
|
Port to expose |
|
Whether this is a headless service. |
|
Service type |
|
Whether to normalize service port numbering. |
Ports to expose |
port Configuration
ports
is a list of port
configuration objects. Here are the supported options while providing port
in Groovy DSL configuration
Element | Description |
---|---|
|
Protocol to use. Can be either "tcp" or "udp". |
|
Container port to expose. |
|
Target port to expose. |
|
Port to expose from the port. |
|
Name of the port |
This enricher is responsible for creating ServiceAccount resource. See ServiceAccount Generation for more details.
The following configuration parameters can be used to influence the behaviour of this enricher:
Element | Description | Property |
---|---|---|
skipCreate |
Skip creating ServiceAccount objects Defaults to |
|
OpenShift resources like BuildConfig and DeploymentConfig can be automatically triggered by changes to ImageStreamTags. However, plain Kubernetes resources don’t have a way to support this kind of triggering. You can use image.openshift.io/triggers
annotation in OpenShift to request triggering. Read the OpenShift docs for more details: Triggering updates on ImageStream changes
This enricher adds ImageStreamTag change triggers on Kubernetes resources that support the image.openshift.io/triggers
annotation, such as StatefulSets, ReplicaSets and DaemonSets.
The trigger is added to all containers that apply, but can be restricted to a limited set of containers using the following configuration:
kubernetes {
enricher {
config {
'jkube-triggers-annotation' {
containers = 'container-name-1,c2'
}
}
}
}
Enricher which fixes the permissions of a persistent volume mount with the help of an init container.
Option | Description | Property |
---|---|---|
imageName |
Image name for PersistentVolume init container Defaults to |
|
permission |
PersistentVolume init container access mode Defaults to |
|
cpuLimit |
Set PersistentVolume initContainer's |
|
memoryLimit |
Set PersistentVolume initContainer's |
|
cpuRequest |
Set PersistentVolume initContainer's |
|
memoryRequest |
Set PersistentVolume initContainer's |
|
Enricher that adds Well Known Labels recommended by Kubernetes.
The jkube-well-known-labels
enricher supports the following configuration options:
Option | Description | Property |
---|---|---|
Add Kubernetes Well Known labels to generated resources. Defaults to |
|
|
enabled |
Enable this flag to turn on addition of Kubernetes Well Known labels. Defaults to |
|
name |
The name of the application ( Defaults to the Gradle Project |
|
version |
The current version of the application ( Defaults to the Gradle Project |
|
component |
The component within the architecture ( |
|
partOf |
The name of a higher level application this one is part of ( Defaults to the Gradle Project |
|
managedBy |
The tool being used to manage the operation of an application ( Defaults to |
|
The Well Known Labels which are already specified in the input fragments are not overridden by the enricher.
Specific enrichers provide resource manifest enhancement for a certain tech stack that they detect.
This enricher adds Kubernetes readiness, liveness, and startup probes for OpenLiberty based projects. Note that Kubernetes startup probes are only added in projects using MicroProfile 3.1 and later.
The application should be configured as follows to enable the enricher (i.e. either microProfile
or mpHealth
should be enabled in the Liberty server configuration file, as pointed out in the OpenLiberty Health Docs):
<featureManager>
<feature>mpHealth-4.1</feature>
</featureManager>
You can configure the different aspects of the probes.
Element | Description | Property |
---|---|---|
scheme |
Scheme to use for connecting to the host. Defaults to |
|
port |
Port number to access the container. Defaults to |
|
livenessFailureThreshold |
Configures Defaults to |
|
livenessSuccessThreshold |
Configures Defaults to |
|
livenessInitialDelay |
Configures Defaults to |
|
livenessPeriodSeconds |
Configures Defaults to |
|
livenessPath |
Path to access on the application server. Defaults to |
|
readinessFailureThreshold |
Configures Defaults to |
|
readinessSuccessThreshold |
Configures Defaults to |
|
readinessInitialDelay |
Configures Defaults to |
|
readinessPeriodSeconds |
Configures Defaults to |
|
readinessPath |
Path to access on the application server. Defaults to |
|
startupFailureThreshold |
Configures Defaults to |
|
startupSuccessThreshold |
Configures Defaults to |
|
startupInitialDelay |
Configures Defaults to |
|
startupPeriodSeconds |
Configures Defaults to |
|
startupPath |
Path to access on the application server. Defaults to |
|
This enricher adds Kubernetes readiness and liveness probes for Spring Boot. This requires the following dependency to be enabled in Spring Boot:
implementation 'org.springframework.boot:spring-boot-starter-actuator'
The enricher will try to discover the settings from the application.properties
/ application.yaml
Spring Boot configuration file.
/actuator/health
is the default endpoint for the liveness and readiness probes.
If the user has enabled the management.health.probes.enabled
property this Enricher uses the /actuator/health/liveness
as liveness and /actuator/health/readiness
as readiness probe endpoints instead.
management.health.probes.enabled=true
The port number is read from the management.port
option, and defaults to 8080.
The scheme will use HTTPS if server.ssl.key-store
option is in use, and fallback to use HTTP
otherwise.
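For instance, the following application.properties fragment (illustrative values, not project defaults) would make the generated probes target port 8081 over HTTPS, per the rules described above:

```properties
# illustrative application.properties fragment
management.port=8081                     # probes use this port instead of 8080
server.ssl.key-store=classpath:ks.p12    # presence of a key store switches the probe scheme to HTTPS
```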
The enricher will use the following settings by default:
readinessProbeInitialDelaySeconds
: 10
readinessProbePeriodSeconds
: <kubernetes-default>
livenessProbeInitialDelaySeconds
: 180
livenessProbePeriodSeconds
: <kubernetes-default>
timeoutSeconds
: <kubernetes-default>
failureThreshold
: 3
successThreshold
: 1
These values can be configured for the enricher in the kubernetes-gradle-plugin
configuration as shown below:
kubernetes {
enricher {
config {
'jkube-healthcheck-spring-boot' {
timeoutSeconds = '5'
readinessProbeInitialDelaySeconds = '30'
failureThreshold = '3'
successThreshold = '1'
}
}
}
}
This enricher adds Kubernetes readiness and liveness probes for Thorntail v2. This requires that the following fraction be enabled in Thorntail:
implementation 'io.thorntail:microprofile-health:2.7.0.Final'
The enricher will use the following settings by default:
port = 8080
scheme = HTTP
path = /health
failureThreshold = 3
successThreshold = 1
These values can be configured for the enricher in the kubernetes-gradle-plugin
configuration as shown below:
kubernetes {
enricher {
config {
'jkube-healthcheck-thorntail' {
port = '4444'
scheme = 'HTTPS'
path = 'health/myapp'
failureThreshold = '3'
successThreshold = '1'
}
}
}
}
This enricher adds Kubernetes readiness, liveness, and startup probes for Quarkus. This requires the following dependency to be added to your Quarkus project:
implementation 'io.quarkus:quarkus-smallrye-health'
The enricher will try to discover the settings from the application.properties
/ application.yaml
configuration file. JKube uses the following properties to resolve the health check URLs:
quarkus.http.root-path
: Quarkus Application root path.
quarkus.http.non-application-root-path
This property was introduced in recent versions of Quarkus (2.x) for non-application endpoints.
quarkus.smallrye-health.root-path
: The location of the all-encompassing health endpoint.
quarkus.smallrye-health.readiness-path
: The location of the readiness endpoint.
quarkus.smallrye-health.liveness-path
: The location of the liveness endpoint.
quarkus.smallrye-health.startup-path
: The location of the startup endpoint.
Note: The behavior of these properties seems to have changed since Quarkus 1.11.x (e.g. leading slashes are now taken into account for the health and liveness paths). kubernetes-gradle-plugin
also checks the Quarkus version along with the values of these properties in order to resolve the effective health endpoints.
You can read more about these flags in Quarkus Documentation.
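As a hedged illustration, a Quarkus application.properties might set these paths explicitly. The values below are examples only, not defaults claimed by this document; how they combine into effective endpoints depends on the Quarkus version, as noted above.

```properties
# illustrative values only
quarkus.http.root-path=/
quarkus.http.non-application-root-path=q
quarkus.smallrye-health.root-path=health
quarkus.smallrye-health.liveness-path=live
quarkus.smallrye-health.readiness-path=ready
```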
The enricher will use the following settings by default:
scheme
: HTTP
port
: 8080
failureThreshold
: 3
successThreshold
: 1
livenessInitialDelay
: 10
readinessInitialDelay
: 5
startupInitialDelay
: 5
livenessPath
: q/health/live
readinessPath
: q/health/ready
startupPath
: q/health/started
These values can be configured for the enricher in the kubernetes-gradle-plugin
configuration as shown below:
kubernetes {
enricher {
config {
'jkube-healthcheck-quarkus' {
livenessInitialDelay = '5'
failureThreshold = '3'
successThreshold = '1'
}
}
}
}
This enricher adds kubernetes readiness and liveness probes for Micronaut based projects.
The application should be configured as follows to enable the enricher:
endpoints:
health:
enabled: true
The enricher will try to discover the settings from the application.properties
/ application.yaml
Micronaut configuration file.
You can configure the different aspects of the probes.
Element | Description | Property |
---|---|---|
readinessProbeInitialDelaySeconds |
Number of seconds after the container has started before the readiness probe is initialized. |
|
readinessProbePeriodSeconds |
How often (in seconds) to perform the readiness probe. |
|
livenessProbeInitialDelaySeconds |
Number of seconds after the container has started before the liveness probe is initialized. |
|
livenessProbePeriodSeconds |
How often (in seconds) to perform the liveness probe. |
|
failureThreshold |
Minimum consecutive failures for the probes to be considered failed after having succeeded. Defaults to |
|
successThreshold |
Minimum consecutive successes for the probes to be considered successful after having failed. Defaults to |
|
timeoutSeconds |
Number of seconds after which the probes timeout. |
|
scheme |
Scheme to use for connecting to the host. Defaults to |
|
port |
Port number to access the container. Defaults to the one provided in the Image configuration. |
|
path |
Path to access on the HTTP server. Defaults to |
|
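As an illustration, these elements can be set in the same plugin-configuration style used by the other health-check enrichers in this document. This is only a sketch: the enricher name jkube-healthcheck-micronaut is assumed by analogy with the other health-check enrichers, and the values are illustrative.

```groovy
kubernetes {
    enricher {
        config {
            // enricher name assumed by analogy with the other health-check enrichers
            'jkube-healthcheck-micronaut' {
                readinessProbeInitialDelaySeconds = '10'  // illustrative values only
                livenessProbeInitialDelaySeconds = '30'
                failureThreshold = '3'
                successThreshold = '1'
            }
        }
    }
}
```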
This enricher adds kubernetes readiness and liveness probes with Eclipse Vert.x applications. The readiness probe lets Kubernetes detect when the application is ready, while the liveness probe allows Kubernetes to verify that the application is still alive.
This enricher allows configuring the readiness and liveness probes. The following probe types are supported: http
(issue HTTP requests), tcp
(open a socket), and exec
(execute a command).
By default, this enricher uses the same configuration for liveness and readiness probes. But specific configurations can be provided too. The configurations can be overridden using project’s properties.
The enricher is automatically executed if your project uses the io.vertx.vertx-plugin
or depends on io.vertx:vertx-core
.
However, by default, no health check will be added to your deployment unless configured explicitly.
The minimal configuration to add health checks is the following:
kubernetes {
enricher {
config {
'jkube-healthcheck-vertx' {
path = "/health"
}
}
}
}
It configures the readiness and liveness health checks using HTTP requests on the port 8080
(default port) and on the
path /health
. The defaults are:
port = 8080
(for HTTP)
scheme = HTTP
path = none (disabled)
The previous configuration can also be given using project’s properties:
vertx.health.path = /health
You can provide two different configurations for the readiness and liveness checks:
kubernetes {
enricher {
config {
'jkube-healthcheck-vertx' {
readiness {
path = '/ready'
}
liveness {
path = '/health'
}
}
}
}
}
You can also use the readiness
and liveness
chunks in user properties:
vertx.health.readiness.path = /ready
vertx.health.liveness.path = /health
Shared (generic) configuration can be set outside of the specific configuration. For instance, to use the port 8081:
kubernetes {
enricher {
config {
'jkube-healthcheck-vertx' {
port = '8081'
readiness {
path = '/ready'
}
liveness {
path = '/health'
}
}
}
}
}
Or:
vertx.health.port = 8081
vertx.health.readiness.path = /ready
vertx.health.liveness.path = /health
The configuration is structured as follows
kubernetes {
enricher {
config {
'jkube-healthcheck-vertx' {
// Generic configuration, applied to both liveness and readiness
path = '/both'
readiness = [
// Specific configuration for the readiness probe
'port-name': 'ping'
]
liveness = [
// Specific configuration for the liveness probe
'port-name': 'ready'
]
}
}
}
}
The same structure is used in project’s properties:
# Generic configuration given as vertx.health.$attribute
vertx.health.path = /both
# Specific liveness configuration given as vertx.health.liveness.$attribute
vertx.health.liveness.port-name = ping
# Specific readiness configuration given as vertx.health.readiness.$attribute
vertx.health.readiness.port-name = ready
Important: The project’s plugin configuration overrides the project’s properties. The overriding rules are: specific configuration > specific properties > generic configuration > generic properties.
You can configure the different aspects of the probes. These attributes can be configured for both the readiness and liveness probes or be specific to one.
Element | Description | Property |
---|---|---|
type |
The probe type among Defaults to |
|
initial-delay |
Number of seconds after the container has started before probes are initiated. |
|
period |
How often (in seconds) to perform the probe. |
|
timeout |
Number of seconds after which the probe times out. |
|
success-threshold |
Minimum consecutive successes for the probe to be considered successful after having failed. |
|
failure-threshold |
Minimum consecutive failures for the probe to be considered failed after having succeeded. |
|
More details about probes are available on https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/.
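The generic attributes above can also be supplied as project properties using the vertx.health.$attribute pattern described later in this section. A minimal sketch with illustrative values:

```properties
# generic probe tuning via project properties (illustrative values)
vertx.health.type = http
vertx.health.initial-delay = 3
vertx.health.period = 10
# specific overrides use the readiness/liveness prefix
vertx.health.liveness.failure-threshold = 5
```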
When using HTTP GET
requests to determine readiness or liveness, several aspects can be configured. HTTP probes are used by default. To be more specific set the type
attribute to http
.
Element | Description | Property |
---|---|---|
scheme |
Scheme to use for connecting to the host. Defaults to |
|
path |
Path to access on the HTTP server. An empty path disables the check. |
|
headers |
Custom headers to set in the request. HTTP allows repeated headers. It cannot be configured using project’s properties. An example is available below. |
|
port |
Port number to access the container. A 0 or negative number disables the check. Defaults to |
|
port-name |
Name of the port to access on the container. If neither the |
|
Here is an example of HTTP probe configuration:
kubernetes {
enricher {
config {
'jkube-healthcheck-vertx' {
liveness {
port = '8081'
path = '/ping'
scheme = 'HTTPS'
headers = [
'X-Custom-Header': 'Awesome'
]
}
readiness {
port = '-1'
}
}
}
}
}
You can also configure the probes to just open a socket on a specific port. The type
attribute must be set to tcp
.
Element | Description | Property |
---|---|---|
port |
Port number to access the container. A 0 or negative number disables the check. |
|
port-name |
Name of the port to access on the container. If neither the |
|
For example:
kubernetes {
enricher {
config {
'jkube-healthcheck-vertx' {
liveness {
type = 'tcp'
port = '8081'
}
readiness {
// Use HTTP Get probe
path = '/ping'
port = '8080'
}
}
}
}
}
You can also configure the probes to execute a command. If the command succeeds, it returns 0, and Kubernetes considers the pod to be alive and healthy. If the command returns a non-zero value, Kubernetes kills the pod and restarts it. To use a command, you must set the type
attribute to exec
:
kubernetes {
enricher {
config {
'jkube-healthcheck-vertx' {
liveness {
type = 'exec'
command = [
'cmd': ['cat', '/tmp/healthy']
]
}
readiness {
// Use HTTP Get probe
path = '/ping'
port = '8080'
}
}
}
}
}
As you can see in the snippet above, the command is passed using the command
attribute. This attribute cannot be
configured using project’s properties. An empty command disables the check.
You can disable the checks by setting:
the port
to 0 or to a negative number for http
and tcp
probes
the command
to an empty list for exec
In the first case, you can use project’s properties to disable them. For tcp
and http
probes:
vertx.health.port = -1
For http
probes, an empty or unset path
also disables the probe.
This enricher adds Kubernetes readiness, liveness, and startup probes for projects which have the io.smallrye:smallrye-health
dependency added for health management. Note that Kubernetes startup probes are only added in projects using MicroProfile 3.1 and later.
You can configure the different aspects of the probes.
Element | Description | Property |
---|---|---|
scheme |
Scheme to use for connecting to the host. Defaults to |
|
port |
Port number to access the container. Defaults to |
|
livenessFailureThreshold |
Configures Defaults to |
|
livenessSuccessThreshold |
Configures Defaults to |
|
livenessInitialDelay |
Configures Defaults to |
|
livenessPeriodSeconds |
Configures Defaults to |
|
livenessPath |
Path to access on the application server. Defaults to |
|
readinessFailureThreshold |
Configures Defaults to |
|
readinessSuccessThreshold |
Configures Defaults to |
|
readinessInitialDelay |
Configures Defaults to |
|
readinessPeriodSeconds |
Configures Defaults to |
|
readinessPath |
Path to access on the application server. Defaults to |
|
startupFailureThreshold |
Configures Defaults to |
|
startupSuccessThreshold |
Configures Defaults to |
|
startupInitialDelay |
Configures Defaults to |
|
startupPeriodSeconds |
Configures Defaults to |
|
startupPath |
Path to access on the application server. Defaults to |
|
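As with the other health-check enrichers, these elements can presumably be set in the plugin configuration. This is only a sketch: the enricher name jkube-healthcheck-smallrye is assumed by analogy and the values are illustrative.

```groovy
kubernetes {
    enricher {
        config {
            // enricher name assumed by analogy with the other health-check enrichers
            'jkube-healthcheck-smallrye' {
                startupInitialDelay = '5'       // illustrative values only
                startupFailureThreshold = '3'
                readinessPeriodSeconds = '10'
            }
        }
    }
}
```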
This enricher adds Kubernetes readiness, liveness, and startup probes for Helidon-based projects. Note that Kubernetes startup probes are only added in projects using MicroProfile 3.1 and later.
The enricher is enabled when the io.helidon.health:helidon-health
dependency is found in the project dependencies.
You can configure the different aspects of the probes.
Element | Description | Property |
---|---|---|
scheme |
Scheme to use for connecting to the host. Defaults to |
|
port |
Port number to access the container. Defaults to |
|
livenessFailureThreshold |
Configures Defaults to |
|
livenessSuccessThreshold |
Configures Defaults to |
|
livenessInitialDelay |
Configures Defaults to |
|
livenessPeriodSeconds |
Configures Defaults to |
|
livenessPath |
Path to access on the application server. Defaults to |
|
readinessFailureThreshold |
Configures Defaults to |
|
readinessSuccessThreshold |
Configures Defaults to |
|
readinessInitialDelay |
Configures Defaults to |
|
readinessPeriodSeconds |
Configures Defaults to |
|
readinessPath |
Path to access on the application server. Defaults to |
|
startupFailureThreshold |
Configures Defaults to |
|
startupSuccessThreshold |
Configures Defaults to |
|
startupInitialDelay |
Configures Defaults to |
|
startupPeriodSeconds |
Configures Defaults to |
|
startupPath |
Path to access on the application server. Defaults to |
|

How to write your own enrichers and install them.
It’s possible to extend Eclipse JKube’s Enricher API to define your own custom enrichers for your use case. Please refer to the Enricher interface; you can create new enrichers by implementing it.
Please check out Custom Istio Enricher Gradle quickstart for detailed example.
Profiles can be used to combine a set of enrichers and generators and to give this combination a referable name.
Profiles are defined in YAML. The following example shows a simple profile which uses only the Spring Boot generator and a few enrichers to add a Kubernetes Deployment and a Service:
- name: my-spring-boot-apps (1)
generator: (2)
includes:
- spring-boot
enricher: (3)
includes: (4)
# Default Deployment object
- jkube-controller
# Add a default service
- jkube-service
excludes: (5)
- jkube-icon
config: (6)
jkube-service:
# Expose service as NodePort
type: NodePort
order: 10 (7)
- name: another-profile
# ....
1 | Profile’s name |
2 | Generators to use |
3 | Enrichers to use |
4 | List of enrichers to include in that given order |
5 | List of enrichers to exclude (especially useful when extending profiles) |
6 | Configuration for generators and enrichers |
7 | An order which influences how profiles with the same name are merged |
Each profiles.yml
has a list of profiles which are defined with these elements:
Element | Description |
---|---|
name |
Profile name. |
extends |
This plugin comes with a set of predefined profiles.
These profiles can be extended by defining a custom profile that references the name of the
profile to extend in the |
generator |
List of generator definitions. See below for the format of these definitions. |
enricher |
List of enrichers definitions. See below for the format of these definitions. |
order |
The order of the profile which is used when profiles of the same name are merged. |
The definition of generators and enrichers in the profile follows the same format:
Element | Description |
---|---|
includes |
List of generators or enrichers to include. The order in the list determines the order in which the processors are applied. |
excludes |
List of generators or enrichers to exclude. These have precedence over includes and will exclude a processor even when referenced in an includes section. |
config |
Configuration for generators or enrichers. This is a map where the keys are the name of the processor to configure and the value is again a map with configuration keys and values specific to the processor. See the documentation of the respective generator or enricher for the available configuration keys. |
Profiles can be defined externally either directly as a build resource in src/main/jkube/profiles.yml
or provided as part of a plugin’s dependency where it is supposed to be included as META-INF/jkube/profiles.yml
. Multiple profiles can be included in these profiles.yml
descriptors as a list.
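For instance, such a list might look like the following (a minimal sketch using the profile format shown earlier in this section; profile and enricher names are illustrative):

```yaml
# src/main/jkube/profiles.yml — illustrative sketch
- name: first-profile
  enricher:
    includes:
      - jkube-controller
- name: second-profile
  enricher:
    includes:
      - jkube-service
```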
If a profile is used then it is looked up from various places in the following order:
From the compile and plugin classpath from META-INF/jkube/profiles-default.yml
. These files are reserved for profiles defined by this plugin
From the compile and plugin classpath from META-INF/jkube/profiles.yml
. Use this location for defining your custom profiles which you want to include via dependencies.
From the project in src/main/jkube/profiles.yml
. The directory can be tuned with the plugin option resourceDir
(property: jkube.resourceDir
)
When multiple profiles of the same name are found, then these profiles are merged. If the profiles have an order number, then the profile with higher order takes precedence when merging.
For includes of the same processors, the processor is moved to the earliest position.
e.g. consider the following two profiles with the name my-profile
name: my-profile
enricher:
includes: [ e1, e2 ]
name: my-profile
enricher:
includes: [ e3, e1 ]
order: 10
then when merged results in the following profile (when no order is given, it defaults to 0):
name: my-profile
enricher:
includes: [ e1, e2, e3 ]
order: 10
Profiles with the same order number are merged according to the lookup order described above, where the latter profile is supposed to have a higher order.
The configuration for enrichers and generators is merged, too, where higher-order profiles override configuration values with the same key of lower-order profile configurations.
Profiles can be selected by defining them in the plugin configuration, by giving a system property or by using special directories in the directory holding the resource fragments.
Here is an example how the profile can be used in a plugin configuration:
kubernetes {
profile = 'my-spring-boot-apps' // (1)
}
1 | Name which selects the profile from the profiles.yml or profiles-default.yml file. |
Alternatively, a profile can also be specified on the command line or as a project property:
gradle -Pjkube.profile=my-spring-boot-apps k8sBuild k8sApply
If a configuration for enrichers and generators is provided as part of the project plugin’s configuration then this takes precedence and overrides any of the defaults provided by the selected profile.
Profiles are also very useful when used together with resource fragments in src/main/jkube
.
By default, the resource objects defined here are enriched with the configured profile (if any).
A different profile can be selected easily by using a subdirectory within src/main/jkube
.
The name of each subdirectory is interpreted as a profile name, and all resource definition files found in it are enriched with the enrichers defined in that profile.
For example, consider the following directory layout:
.
├── src/main/jkube
├── app-rc.yml
├── app-svc.yml
└── raw
├── couchbase-rc.yml
└── couchbase-svc.yml
Here, the resource descriptors app-rc.yml
and app-svc.yml
are enhanced with the enrichers defined in the main configuration.
The two files couchbase-rc.yml
and couchbase-svc.yml
in the subdirectory raw/
are enriched with the profile raw instead.
This is a predefined profile which includes no enrichers at all, so the couchbase resource objects are not enriched and are taken over literally.
This is an easy way to fine-tune enrichment for different object sets.
This plugin comes with the following predefined profiles:
Profile | Description |
---|---|
default |
The default profile which is active if no profile is specified. It consists of a curated set of generator and enrichers. See below for the current definition. |
minimal |
This profile contains no generators and only enrichers for adding default objects (controller and services). No other enrichment is included. |
explicit |
Like default but without adding default objects like controllers and services. |
aggregate |
Includes no generators and only the jkube-dependency enricher for picking up and combining resources from the compile time dependencies. |
internal-microservice |
default profile extension that prevents services from being externally exposed. |
security-hardening |
default profile extension that enables the security-hardening enricher. |
A profile can also extend another profile to avoid repetition. This is useful to add optional enrichers/generators to a given profile or to partially exclude enrichers/generators from another.
- name: security-hardening
extends: default
enricher:
includes:
- jkube-security-hardening
For example, this profile adds the optional jkube-security-hardening
enricher to the default profile.
This plugin supports so-called JKube plugins, which have entry points that can be bound to the different JKube operation phases. JKube plugins are enabled by just declaring a dependency in the plugin declaration:
The following example is from quickstarts/gradle/plugin
The JKube plugin is defined under Gradle’s buildSrc
directory, which is automatically added to the build script classpath by Gradle.
.
├── app
├── buildSrc
├── build.gradle
└── src
└── main
├── java
│ └── org
│ └── eclipse
│ └── jkube
│ └── quickstart
│ └── plugin
│ └── SimpleJKubePlugin.java
└── resources
└── META-INF
└── jkube
└── plugin
JKube plugins are automatically loaded by JKube when you declare a dependency on a module that contains a descriptor file at
META-INF/jkube/plugin
listing class names line by line, for example:
org.eclipse.jkube.quickstart.plugin.SimpleJKubePlugin
At the moment descriptor files are looked up in these locations:
META-INF/maven/io.fabric8/dmp-plugin
(Deprecated, kept for backward compatibility)
META-INF/jkube/plugin
META-INF/jkube-plugin
During a build with k8sBuild
, those classes are loaded and certain fixed methods are called.
A JKube plugin needs to implement the org.eclipse.jkube.api.JKubePlugin
interface. At the moment, the following methods are supported:
Method | Description |
---|---|
addExtraFiles |
A method called by kubernetes-gradle-plugin with a single |
Check out quickstarts/gradle/plugin
for a fully working example.
Docker uses registries to store images. The registry is typically
specified as part of the name. I.e. if the first part (everything
before the first /
) contains a dot (.
) or colon (:
) this part is
interpreted as an address (with an optional port) of a remote
registry. This registry (or the default docker.io
if no
registry is given) is used during push and pull operations. This
plugin follows the same semantics, so if an image name is specified
with a registry part, this registry is contacted. Authentication is
explained in the next section.
There are some situations however where you want to have more
flexibility for specifying a remote registry. This might be because
you do not want to hard code a registry into build.gradle
but
provide it from the outside with an environment variable or a system
property.
This plugin supports various ways of specifying a registry:
If the image name contains a registry part, this registry is used unconditionally and cannot be overridden from the outside.
If an image name doesn’t contain a registry, then by default the
default Docker registry docker.io
is used for push and pull
operations. But this can be overwritten through various means:
If the image
configuration contains a registry
subelement
this registry is used.
Otherwise, a global configuration element registry
is
evaluated, which can also be provided as a system property via
-Djkube.docker.registry
.
Finally, an environment variable DOCKER_REGISTRY
is looked up to detect a registry.
This registry is used for pulling (i.e. for auto-pulling the base image
when doing a k8sBuild
) and for pushing with k8sPush
. However,
when these two tasks are combined on the command line, as in
gradle -Djkube.docker.registry=myregistry:5000 k8sBuild k8sPush,
the same registry is used for both operations. For more fine-grained
control, separate registries for pull and push can be specified:
In the plugin’s configuration with the parameters pullRegistry
and
pushRegistry
, respectively.
With the system properties jkube.docker.pull.registry
and
jkube.docker.push.registry
, respectively.
kubernetes {
registry = "docker.jolokia.org:443"
images {
image1 {
// Without an explicit registry
name = "jolokia/jolokia-java"
// Hence use this registry
registry = "docker.ro14nd.de"
}
image2 {
name ="postgresql"
// No registry in the name, hence use this globally
// configured docker.jolokia.org:443 as registry
}
image3 {
// Explicitly specified always wins
name = "docker.example.com:5000/another/server"
}
}
}
There is some special behaviour when using an externally provided registry as described above:
When pulling, the image pulled will be also tagged with a repository name without registry. The reasoning behind this is that this image then can be referenced also by the configuration when the registry is not specified anymore explicitly.
When pushing a local image, a tag including the registry is temporarily added and removed again after the push. This is required because Docker can only push registry-named images.
When pulling (via the autoPull
mode of k8sBuild
) or pushing images, it
might be necessary to authenticate against a Docker registry.
There are four different locations searched for credentials. In order, these are:
Providing system properties jkube.docker.username
and jkube.docker.password
from the outside.
Using a authConfig
section in the plugin configuration with username
and password
elements.
Using OpenShift configuration in ~/.config/kube
Login into a registry with docker login
(credentials in a credential helper or in ~/.docker/config.json
)
Using the username and password directly in the build.gradle
is not recommended since it is widely visible. It is, however, the easiest and
most transparent way. Using an authConfig
section is straightforward:
kubernetes {
images {
image {
name = "consol/tomcat-7.0"
}
}
authConfig {
username = "jolokia"
password = "s!cr!t"
}
}
The system property provided credentials are a good compromise when using CI servers like Jenkins. You simply provide the credentials from the outside:
gradle -Djkube.docker.username=jolokia -Djkube.docker.password=s!cr!t k8sPush
The most secure way is to rely on docker’s credential store or credential helper and read confidential information from an external credentials store, such as the native keychain of the operating system. Follow the instruction on the docker login documentation.
As a final fallback, this plugin consults $DOCKER_CONFIG/config.json
if DOCKER_CONFIG
is set, or ~/.docker/config.json
if not, and reads credentials stored directly within this
file. This unsafe behavior occurs when a registry was logged into with docker login
from the command line
using older versions of Docker (pre 1.13.0), or when Docker is not configured to use a
credential store.
The credentials lookup described above is valid for both push and pull operations. In order to narrow things down, credentials can be provided for pull or push operations alone:
In an authConfig
section a sub-section pull
and/or push
can be added. In the example below, the credentials provided are only
used for image push operations:
kubernetes {
images {
image {
name = "consol/tomcat-7.0"
}
}
authConfig {
push {
username = "jolokia"
password = "secret"
}
}
}
When the credentials are given on the command line as system
properties, then the properties jkube.docker.pull.username
/
jkube.docker.pull.password
and jkube.docker.push.username
/
jkube.docker.push.password
are used for pull and push operations,
respectively (when given). Either way, the standard lookup algorithm
as described in the previous section is used as fallback.
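For example, to supply push-only credentials from the command line (property names as described above; the username and password values are placeholders):

```shell
# push-only credentials supplied as system properties
gradle -Djkube.docker.push.username=jolokia -Djkube.docker.push.password=s3cr3t k8sPush
```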
kubernetes-gradle-plugin also provides users an option to build container images without having access to any docker daemon.
You just need to set the jkube.build.strategy
property to jib
. This delegates the build process to
JIB. It creates a tarball inside your target directory which can be loaded
into any docker daemon afterwards. You may also push the image to your specified registry with k8sPush when the feature flag is enabled.
You can find more details at Spring Boot JIB With Assembly Quickstart.
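For instance, the property can be set once in gradle.properties instead of on every invocation:

```properties
# gradle.properties — select the JIB build strategy
jkube.build.strategy = jib
```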
kubernetes-gradle-plugin provides the required features for users to leverage Cloud Native Buildpacks for building container images.
You can enable this build strategy by setting the jkube.build.strategy
property to buildpacks
.
Access to a Docker daemon is required in order to use Buildpacks, as mentioned in the Buildpack Prerequisites.
gradle k8sBuild -Djkube.build.strategy=buildpacks
kubernetes-gradle-plugin downloads the Pack CLI to the user’s $HOME/.jkube
folder and starts the
pack build
process. If the download of the Pack CLI binary fails, kubernetes-gradle-plugin looks for any locally installed Pack CLI version.
If no builder image is configured, then kubernetes-gradle-plugin uses paketobuildpacks/builder:base
as the default builder image.
If a builder image is provided in the local Pack config, kubernetes-gradle-plugin uses the builder image specified in that file.
For example, if the user has this image set in the $HOME/.pack/config.toml
file, kubernetes-gradle-plugin will use testuser/buildpacks-quarkus-builder:latest
as the Buildpacks builder image:
default-builder-image = "testuser/buildpacks-quarkus-builder:latest"
It’s also possible to configure the Buildpacks builder image using a property or the Groovy DSL configuration. You can use this property to configure the Buildpacks builder image:
jkube.generator.buildpacksBuilderImage = "testuser/buildpacks-quarkus-builder:latest"
BuildPacks integration simply involves delegation of the build process to the Pack CLI.
Kind | Filename Type |
---|---|
BuildConfig |
|
ClusterRole |
|
ConfigMap |
|
ClusterRoleBinding |
|
CronJob |
|
CustomResourceDefinition |
|
DaemonSet |
|
Deployment |
|
DeploymentConfig |
|
ImageStream |
|
ImageStreamTag |
|
Ingress |
|
Job |
|
LimitRange |
|
Namespace |
|
NetworkPolicy |
|
OAuthClient |
|
PolicyBinding |
|
PersistentVolume |
|
PersistentVolumeClaim |
|
Project |
|
ProjectRequest |
|
ReplicaSet |
|
ReplicationController |
|
ResourceQuota |
|
Role |
|
RoleBinding |
|
RoleBindingRestriction |
|
Route |
|
Secret |
|
Service |
|
ServiceAccount |
|
StatefulSet |
|
Template |
|
Pod |
|
You can add your custom Kind/Filename
mappings.
To do so, you have two approaches:
Setting an environment variable or system property called jkube.mapping
pointing to a .properties
file with pairs <kind>⇒<filename1>, <filename2>
By default if no environment variable nor system property is set, JKube looks for a file located at classpath /META-INF/jkube.kind-filename-type-mapping-default.properties
.
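Such a mapping file might look like the following sketch. The kind names and filename types here are hypothetical, and a plain properties `=` separator is assumed for the `<kind>⇒<filename1>, <filename2>` pairs described above:

```properties
# hypothetical mapping file, referenced e.g. via -Djkube.mapping=path/to/mapping.properties
Var=foo, bar
CronTab=crontab, ct
```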
By defining the Mapping in the plugin’s configuration
kubernetes {
mappings {
mapping {
kind = "Var" (1)
filenameTypes = "foo, bar" (2)
apiVersion = "api.example.com/v1" (3)
}
}
}
1. The kind name (mandatory)
2. The filename types (mandatory), a comma-separated list of filenames to map to the specified kind
3. The apiVersion (optional)
The easiest way is to add a src/main/jkube/deployment.yml
file to your project containing something like:
spec:
template:
spec:
containers:
- env:
- name: FOO
value: bar
The above will generate an environment variable $FOO
with the value bar.
For a full list of the environment variables used in the Java base images, see this list.
The simplest way is to add system properties to the JAVA_OPTIONS
environment variable.
For a full list of the environments used in java base images, see this list
e.g. add a src/main/jkube/deployment.yml
file to your project containing something like:
spec:
template:
spec:
containers:
- env:
- name: JAVA_OPTIONS
value: "-Dfoo=bar -Dxyz=abc"
The above will define the system properties foo=bar
and xyz=abc.
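Inside the container, the entries passed via JAVA_OPTIONS become ordinary JVM system properties. A minimal sketch; the properties are set programmatically here only to keep the example self-contained:

```java
public class JavaOptionsDemo {
    public static void main(String[] args) {
        // In the container these would come from JAVA_OPTIONS="-Dfoo=bar -Dxyz=abc";
        // we set them directly so the sketch runs standalone.
        System.setProperty("foo", "bar");
        System.setProperty("xyz", "abc");
        // The application reads them like any other system property.
        System.out.println(System.getProperty("foo") + "," + System.getProperty("xyz"));
    }
}
```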
First you need to create your ConfigMap
resource via a file src/main/jkube/configmap.yml
data:
application.properties: |
# spring application properties file
welcome = Hello from Kubernetes ConfigMap!!!
dummy = some value
Then mount the entry in the ConfigMap
into your Deployment
via a file src/main/jkube/deployment.yml
metadata:
annotations:
configmap.jkube.io/update-on-change: ${project.artifactId}
spec:
replicas: 1
template:
spec:
volumes:
- name: config
configMap:
name: ${project.artifactId}
items:
- key: application.properties
path: application.properties
containers:
- volumeMounts:
- name: config
mountPath: /deployments/config
Here is an example quickstart doing this.
Note that the annotation configmap.jkube.io/update-on-change
is optional; it's used if your application is not capable
of watching for changes in the /deployments/config/application.properties
file. In that case, if you are also running
the configmapcontroller, changing the ConfigMap
contents will trigger a rolling upgrade of your
application so that it uses the new values.
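From the application's point of view, the mounted ConfigMap entry is just a regular properties file. A minimal Java sketch; the keys mirror the ConfigMap above, and the file content is inlined here so the snippet runs standalone, whereas in the cluster you would open the mounted /deployments/config/application.properties instead:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Properties;

public class ConfigMapReader {
    // Parses a properties source and returns the "welcome" entry.
    static String readWelcome(Reader source) throws IOException {
        Properties props = new Properties();
        props.load(source);
        return props.getProperty("welcome");
    }

    public static void main(String[] args) throws IOException {
        // Inlined copy of the ConfigMap's application.properties entry.
        String sample = "welcome = Hello from Kubernetes ConfigMap!!!\ndummy = some value\n";
        System.out.println(readWelcome(new StringReader(sample)));
    }
}
```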
First you need to create your PersistentVolumeClaim
resource via a file src/main/jkube/foo-pvc.yml,
where foo
is the name of the PersistentVolumeClaim.
Your app might require multiple persistent volumes, in which case you will need multiple PersistentVolumeClaim
resources.
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
Then, to mount the PersistentVolumeClaim
into your Deployment,
create a file src/main/jkube/deployment.yml:
spec:
template:
spec:
volumes:
- name: foo
persistentVolumeClaim:
claimName: foo
containers:
- volumeMounts:
- mountPath: /whatnot
name: foo
The above defines the PersistentVolumeClaim
called foo,
which is then mounted into the container at /whatnot.
Ingress
generation is supported by Eclipse JKube for Service
objects of type LoadBalancer.
In order to generate an
Ingress,
you need to set the jkube.createExternalUrls
property to true
and the jkube.domain
property to the desired host
suffix, which is appended to your service name to form the host value.
You can provide these properties in the gradle.properties
file like this:
jkube.createExternalUrls=true
jkube.domain=example.com
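With these properties set, the generated Ingress routes the host <service-name>.<jkube.domain> to the service. A sketch of the result for a service named quarkus exposing port 8080, as in the earlier example; the exact structure of the generated resource may differ:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: quarkus
spec:
  rules:
  - host: quarkus.example.com   # service name + jkube.domain suffix
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: quarkus
            port:
              number: 8080
```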
When invoking k8sBuild with only Podman installed, the following error appears:
No <dockerHost> given, no DOCKER_HOST environment variable, no read/writable '/var/run/docker.sock' or '//./pipe/docker_engine' and no external provider like Docker machine configured -> [Help 1]
By default, JKube relies on the Docker REST API /var/run/docker.sock
to build Docker images. Using Podman, even with its Docker CLI emulation, won't work, as the emulation is just a CLI wrapper and does not provide the Docker REST API.
However, it is possible to start an emulated Docker REST API with the podman command:
export DOCKER_HOST="unix:/run/user/$(id -u)/podman/podman.sock"
podman system service --time=0 unix:/run/user/$(id -u)/podman/podman.sock &
The image name generated by Eclipse JKube is %g/%a:%l
by default (see [image-name]). How you configure it depends on the mode you're using in Eclipse JKube:
If you're using the zero configuration mode, which means you depend on Eclipse JKube generators to produce an opinionated image, you can configure the name using the jkube.generator.name
Gradle property.
If you're providing a Groovy DSL image configuration, the image name is picked from the name
field, as in this example:
kubernetes {
images {
image {
name = "myusername/myimagename:latest"
build {
from = "openjdk:latest"
cmd {
exec = ["java", "-jar", "${project.name}-${project.version}.jar"]
}
}
}
}
}
If you're using the Simple Dockerfile mode, you can configure the image name via the jkube.image.name
or jkube.generator.name
properties.
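To make the default %g/%a:%l pattern concrete, here is an illustrative Java sketch of how the placeholders might expand. The substitution rules assumed here (%g is the group, %a the artifact/project name, and %l resolves to latest for SNAPSHOT versions and to the version otherwise) are a simplification of JKube's actual behavior:

```java
public class ImageNameDemo {
    // Hypothetical expansion of the default image name pattern %g/%a:%l.
    static String expand(String group, String artifact, String version) {
        // Assumption: %l is "latest" for SNAPSHOT builds, the version otherwise.
        String label = version.endsWith("-SNAPSHOT") ? "latest" : version;
        return group + "/" + artifact + ":" + label;
    }

    public static void main(String[] args) {
        System.out.println(expand("myusername", "myimagename", "1.0.0-SNAPSHOT"));
    }
}
```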