© 2020 The original authors.

openshift-maven-plugin

1. Introduction

The openshift-maven-plugin brings your Java applications onto OpenShift. It provides a tight integration into Maven and benefits from the build configuration already provided. The plugin focuses on two tasks: building Docker images and creating Kubernetes resource descriptors. It can be configured very flexibly and supports multiple configuration models: a Zero-Config setup allows for a quick ramp-up with some opinionated defaults; for more advanced requirements, an XML configuration provides additional configuration options which can be added to the pom.xml; and for full control, in order to tune all facets of the creation, external resource fragments and Dockerfiles can be used.

1.1. Building Images

The oc:build goal is for creating Docker images containing the actual application. These can then be deployed later on Kubernetes or OpenShift. It is easy to include build artifacts and their dependencies into these images. This plugin uses an assembly descriptor format similar to the one used in maven-assembly-plugin to specify the content which will be added to the image. These images can then be pushed to public or private Docker registries with oc:push.

Depending on the operational mode, the actual image is built either directly against a Docker daemon or via an OpenShift Docker Build.

A special oc:watch goal allows for reacting to code changes to automatically recreate images or copy new artifacts into running containers.

1.2. Kubernetes Resources

Kubernetes resource descriptors can be created or generated from oc:resource. These files are packaged within the Maven artifacts and can be deployed to a running orchestration platform with oc:apply.

Typically you only specify a small part of the real resource descriptors which will be enriched by this plugin with various extra information taken from the pom.xml. This drastically reduces boilerplate code for common scenarios.

1.3. Configuration

As already mentioned, there are three levels of configuration:

  • Zero-Config mode makes some very opinionated decisions based on what is present in the pom.xml like what base image to use or which ports to expose. This is great for starting up things and for keeping quickstart applications small and tidy.

  • XML plugin configuration mode is similar to what docker-maven-plugin provides. This allows for type-safe configuration with IDE support, but only a subset of possible resource descriptor features is provided.

  • Kubernetes & OpenShift resource fragments are user provided YAML files that can be enriched by the plugin. This allows expert users to use a plain configuration file with all their capabilities, but also to add project specific build information and avoid boilerplate code.

The following table gives an overview of the different models:

Table 1. Configuration Models
Model Docker Images Resource Descriptors

Zero-Config

Generators are used to create Docker image configurations. Generators can detect certain aspects of the build (e.g. whether Spring Boot is used) and then choose some opinionated defaults like the base image, which ports to expose and the startup command. They can be configured, but offer only a few options.

Default Enrichers will create a default Service and Deployment (DeploymentConfig for OpenShift) when no other resource objects are provided. Depending on the image they can detect which port to expose in the service. As with Generators, Enrichers support a limited set of configuration options.

XML configuration

openshift-maven-plugin inherits the XML based configuration for building images from the docker-maven-plugin and provides the same functionality. It supports an assembly descriptor for specifying the content of the Docker image.

A subset of possible resource objects can be configured with a dedicated XML syntax. With a decent IDE you get autocompletion on most objects and inline documentation for the available configuration elements. The provided configuration can still be enhanced by Enrichers, which is useful e.g. for adding labels and annotations containing build or other information.

Resource Fragments and Dockerfiles

Similarly to the docker-maven-plugin, openshift-maven-plugin supports external Dockerfiles too, which are referenced from the plugin configuration.

Resource descriptors can be provided as external YAML files which will build a base skeleton for the applicable resource.

The "skeleton" is then post-processed by Enrichers which will complete the skeleton by adding the fields each enricher is responsible of (labels, annotations, port information, etc.). Maven properties within these files are resolved to their values.

With this model you can use every Kubernetes / OpenShift resource object with all its flexibility, but still get the benefit of added build information.

1.4. Examples

Let’s have a look at some code. The following examples will demonstrate all three configuration variants:

1.4.1. Zero-Config

This minimal but fully working example pom.xml shows how a simple Spring Boot application can be dockerized and prepared for Kubernetes. The full example can be found in the directory quickstarts/maven/zero-config.

Example
<project>
  <modelVersion>4.0.0</modelVersion>

  <groupId>org.eclipse.jkube</groupId>
  <artifactId>jkube-maven-sample-zero-config</artifactId>
  <version>1.17.0</version>
  <packaging>jar</packaging>

  <parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId> (1)
    <version>1.5.5.RELEASE</version>
  </parent>

  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId> (2)
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId> (3)
      </plugin>
      <plugin>
        <groupId>org.eclipse.jkube</groupId>
        <artifactId>openshift-maven-plugin</artifactId> (4)
        <version>1.17.0</version>
      </plugin>
    </plugins>
  </build>
</project>
1 This minimalistic Spring Boot application uses the spring-boot parent POM for setting up dependencies and plugins
2 The Spring Boot web starter dependency enables a simple embedded Tomcat for serving Spring MVC apps
3 The spring-boot-maven-plugin is responsible for repackaging the application into a fat jar, including all dependencies and the embedded Tomcat
4 The openshift-maven-plugin enables the automatic generation of a Docker image and Kubernetes / OpenShift descriptors for this Spring application.

This setup makes some opinionated decisions for you.

These choices can be influenced by configuration options as described in Spring Boot Generator.

To start the Docker image build, you simply run

mvn package oc:build

This will create the Docker image against a running Docker daemon (which must be accessible either via a Unix socket or via the URL set in DOCKER_HOST). Alternatively, when connected to an OpenShift cluster, an S2I build will be performed on OpenShift, which in the end creates an ImageStream.
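For example, assuming a remote Docker daemon (host and port are illustrative), you could point the build at it before running the goal:

export DOCKER_HOST=tcp://docker-host.example.com:2376
mvn package oc:build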

To deploy the resources to the cluster, call

mvn oc:resource oc:deploy

By default, a Service and a Deployment object pointing to the created Docker image are created. When running in OpenShift mode, a Service and a DeploymentConfig which references the ImageStream created with oc:build will be installed.

Of course you can bind all those JKube goals to execution phases as well, so that they are called along with standard lifecycle goals like install. For example, to bind the building of the Kubernetes resource files and the Docker images, add the following goals to the execution of the openshift-maven-plugin:

Example for lifecycle bindings
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>

  <!-- ... -->

  <executions>
    <execution>
      <goals>
        <goal>resource</goal>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
</plugin>

If you’d also like to automatically deploy to Kubernetes each time you do a mvn install you can add the apply goal:

Example for lifecycle bindings with automatic deploys for mvn install
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>

  <!-- ... -->

  <executions>
    <execution>
      <goals>
        <goal>resource</goal>
        <goal>build</goal>
        <goal>apply</goal>
      </goals>
    </execution>
  </executions>
</plugin>

1.4.2. XML Configuration

XML based configuration is only partially implemented and is not recommended for use right now.

Although the Zero-config mode and its generators can be tweaked with options up to a certain degree, many cases require more flexibility. For such instances, an XML-based plugin configuration can be used, in a way similar to the XML configuration used by docker-maven-plugin.

The plugin configuration can be roughly divided into the following sections:

  • Global configuration options are responsible for tuning the behaviour of plugin goals

  • <images> defines which Docker images are used and configured. This section is similar to the image configuration of the docker-maven-plugin, except that <run> and <external> sub-elements are ignored.

  • <resources> defines the resource descriptors for deploying on an OpenShift or Kubernetes cluster.

  • <generator> configures generators which are responsible for creating images. Generators are used as an alternative to a dedicated <images> section.

  • <enricher> configures various aspects of enrichers for creating or enhancing resource descriptors.

A working example can be found in the quickstarts/maven/xml-config directory. An extract of the plugin configuration is shown below:

Example for an XML configuration
<configuration>
  <namespace>test-ns</namespace>
  <images>  (1)
    <image>
      <name>xml-config-demo:1.0.0</name>
      <!-- "alias" is used to correlate to the containers in the pod spec -->
      <alias>camel-app</alias>
      <build>
        <from>fabric8/java-centos-openjdk8-jre</from>
        <assembly>
          <layers>
            <layer>
              <baseDirectory>/deployments</baseDirectory>
            </layer>
          </layers>
        </assembly>
        <env>
          <JAVA_LIB_DIR>/deployments</JAVA_LIB_DIR>
          <JAVA_MAIN_CLASS>org.apache.camel.cdi.Main</JAVA_MAIN_CLASS>
        </env>
      </build>
    </image>
  </images>

  <resources> (2)
    <labels> (3)
      <all>
        <group>quickstarts</group>
      </all>
    </labels>

    <controller>
      <replicas>2</replicas> (4)
      <controllerName>${project.artifactId}</controllerName> (5)
      <liveness> (6)
        <getUrl>http://:8080/q/health</getUrl>
        <initialDelaySeconds>3</initialDelaySeconds>
        <timeoutSeconds>3</timeoutSeconds>
      </liveness>
      <startup> (7)
        <failureThreshold>30</failureThreshold>
        <periodSeconds>10</periodSeconds>
        <getUrl>http://:8080/actuator/health</getUrl>
      </startup>
    </controller>

    <services> (8)
      <service>
        <name>camel-service</name>
        <headless>true</headless>
      </service>
    </services>

    <serviceAccounts> (9)
      <serviceAccount>
        <name>build-robot</name>
      </serviceAccount>
    </serviceAccounts>

    <annotations> (10)
      <all>
        <version>${project.version}</version>
        <artifactId>${project.artifactId}</artifactId>
      </all>
      <deployment> (11)
        <my>deployment</my>
      </deployment>
      <service>
        <property> (12)
          <name>cloud.google.com/neg</name>
          <value>{"ingress":true}</value>
        </property>
      </service>
    </annotations>

    <configMap> (13)
      <name>test</name>
      <entries>
        <entry> (14)
          <name>key1</name>
          <value>value1</value>
        </entry>
        <entry> (15)
          <name>key3</name>
          <file>${project.basedir}/src/main/resources/META-INF/resources/index.html</file>
        </entry>
      </entries>
    </configMap>

    <remotes> (16)
       <remote>https://gist.githubusercontent.com/lordofthejars/ac2823cec7831697d09444bbaa76cd50/raw/e4b43f1b6494766dfc635b5959af7730c1a58a93/deployment.yaml</remote>
    </remotes>
  </resources>
</configuration>
1 Standard XML configuration for building one single Docker image
2 Kubernetes / OpenShift resources to create
3 Labels which should be applied globally to all resource objects
4 Number of replicas desired
5 Name of controller created by plugin
6 Liveness Probe to be added in PodTemplateSpec of Controller resource
7 Startup Probe to be added in PodTemplateSpec of Controller resource
8 One or more Service definitions.
9 ServiceAccount(s) to create
10 Annotations which should be applied either to all or to specific resources
11 Annotations applied to Deployment resources only
12 Annotations with a special character, a slash in this case
13 ConfigMap to be created
14 ConfigMap data entry as a string key value pair
15 ConfigMap data entry with value as file path, file’s contents are loaded into ConfigMap as key value
16 Remote files used as resource fragments.

The XML resource configuration is based on plain Kubernetes resource objects. When targeting OpenShift, Kubernetes resource descriptors will be automatically converted to their OpenShift counterparts, e.g. a Kubernetes Deployment will be converted to an OpenShift DeploymentConfig.

1.4.3. Resource Fragments

The third configuration option is to use an external configuration in the form of YAML resource descriptors which are located in the src/main/jkube directory. Each resource gets its own file, which contains a skeleton of a resource descriptor. The plugin will pick up the resources, enrich them and then combine everything into single kubernetes.yml and openshift.yml files. Within these descriptor files you can freely use any Kubernetes feature.

Note: In order to support both OpenShift and Kubernetes simultaneously, there is currently no way to specify OpenShift-only features this way, though this might change in future releases.

Let’s have a look at an example from quickstarts/maven/external-resources. This is a plain Spring Boot application, whose images are auto-generated as in the Zero-Config case. The resource fragments are in src/main/jkube.

Example fragment "deployment.yml"
spec:
  replicas: 1
  template:
    spec:
      volumes:
        - name: config
          gitRepo:
            repository: 'https://github.com/jstrachan/sample-springboot-config.git'
            revision: 667ee4db6bc842b127825351e5c9bae5a4fb2147
            directory: .
      containers:
        - volumeMounts:
            - name: config
              mountPath: /app/config
          env:
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
      serviceAccount: ribbon

As you can see, there is no metadata section as would be expected for Kubernetes resources, because it will be automatically added by the openshift-maven-plugin. The object’s Kind, if not given, is automatically derived from the filename. In this case, the openshift-maven-plugin will create a Deployment because the file is called deployment.yml. Similar mappings between file names and resource types exist for each supported resource kind; the complete list (along with associated abbreviations) can be found in the Kind Filename Mapping section.

Sidecar containers are supported by this plugin when jkube.sidecar is enabled, so be careful whenever you supply a container name in a resource fragment. If the container specified in the resource fragment doesn’t have a name, or its name equals the default JKube-generated application container name, it is not treated as a sidecar and is merged into the main container. However, you can override the plugin’s default name for the main container via the jkube.generator.alias property.

Additionally, if you name your fragment using a name prefix followed by a dash and the mapped file name, the plugin will automatically use that name for your resource. So, for example, if you name your deployment fragment myapp-deployment.yml, the plugin will name your resource myapp. In the absence of such provided name for your resource, a name will be automatically derived from your project’s metadata (in particular, its artifactId as specified in your POM).

Also, no image is referenced in this example because the plugin fills in the image details based on the configured image you are building with (either from a generator or from a dedicated image plugin configuration, as seen before).

For building images there is also an alternative mode using external Dockerfiles, in addition to the XML based configuration. Refer to oc:build for details.

Enrichment of resource fragments can be fine-tuned by using profile sub-directories. For more details see Profiles.

Now that we have seen some examples of the various ways how this plugin can be used, the following sections will describe the plugin goals and extension points in detail.

2. Getting Started

When working with openshift-maven-plugin, you’ll probably be facing similar situations and following the same patterns other users do. These are some of the most common scenarios and configuration modes:

2.1. Red Hat Developer Sandbox

This is an example of how you can use the JKube zero configuration to build and deploy your Java application with Red Hat OpenShift Developer Sandbox.

Prerequisites

You will need the following for the scenario:

Provision your DevSandbox

The Developer Sandbox for Red Hat OpenShift is a free OpenShift cluster that gives you experience of working with a Kubernetes cluster. Once you’ve created an account and logged into the console, you can copy the login command and paste it into your terminal:

$ oc login --token=sha256~%TOKEN% --server=https://%SERVER%:6443
Logged into "https://%SERVER%:6443" as "%USERNAME%" using the token provided.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * %USERNAME%-dev
    %USERNAME%-stage

Using project "%USERNAME-dev".
Welcome! See 'oc help' to get started.

Adding openshift-maven-plugin to your project

We’ll be using quickstarts/maven/spring-boot for this demonstration. If you have your own Maven project set up, you can follow the instructions mentioned below.

You can add openshift-maven-plugin to the plugins section:

Open the pom.xml file and add the plugin in the <plugins> section.

<plugin>
    <groupId>org.eclipse.jkube</groupId>
    <artifactId>openshift-maven-plugin</artifactId>
    <version>1.17.0</version>
</plugin>

Deploying application to Red Hat OpenShift

Make sure you’ve compiled the application:

$ ./mvnw clean install

Run JKube Maven goals to deploy the application:

$ ./mvnw oc:build oc:resource oc:apply

How to disable routes

openshift-maven-plugin automatically generates a Route to expose your application. You can disable it with the jkube.openshift.generateRoute flag:

$ ./mvnw oc:resource -Djkube.openshift.generateRoute=false

2.2. Spring Boot

openshift-maven-plugin works with any Spring Boot project without any configuration. It automatically detects your project dependencies and generates an opinionated container image and Kubernetes manifests.

Adding openshift-maven-plugin to project

You would need to add openshift-maven-plugin to your project in order to use it:

Open the pom.xml file and add the plugin in the <plugins> section.

<plugin>
    <groupId>org.eclipse.jkube</groupId>
    <artifactId>openshift-maven-plugin</artifactId>
    <version>1.17.0</version>
</plugin>

Building container Image of your application

In case of OpenShift, Source to Image (S2I) builds are performed by default. ImageStream is updated after the image creation.

Run this command which would build your application’s container image and push it to OpenShift’s internal container registry:

$ ./mvnw oc:build

Generating & applying Kubernetes manifests

Just like the container image, openshift-maven-plugin can generate opinionated Kubernetes manifests. Run this command to automatically generate manifests and apply them onto the currently logged-in Kubernetes cluster.

$ ./mvnw oc:resource oc:apply

After running these goals, you can also check the Kubernetes manifests generated by openshift-maven-plugin in the target/classes/META-INF/jkube/ directory.

Clean up the applied Kubernetes resources after testing:

$ ./mvnw oc:undeploy

How to add a liveness and readiness probe?

openshift-maven-plugin automatically adds Kubernetes liveness and readiness probes to the generated Kubernetes manifests in the presence of the Spring Boot Actuator dependency.

To add actuator to your project, add the following dependency:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
</dependencies>

Once you run the oc:resource goal again, you should see liveness and readiness probes added to the generated manifests.

2.3. Vert.x

You can easily get started with using openshift-maven-plugin on an Eclipse Vert.x project without providing any explicit configuration. openshift-maven-plugin generates an opinionated container image and manifests by inspecting your project configuration.

Adding openshift-maven-plugin to project

You would need to add openshift-maven-plugin to your project in order to use it:

Open the pom.xml file and add the plugin in the <plugins> section.

<plugin>
    <groupId>org.eclipse.jkube</groupId>
    <artifactId>openshift-maven-plugin</artifactId>
    <version>1.17.0</version>
</plugin>

Building container Image of your application

In case of OpenShift, Source to Image (S2I) builds are performed by default. ImageStream is updated after the image creation.

Run this command which would build your application’s container image and push it to OpenShift’s internal container registry:

$ ./mvnw oc:build

Generating & applying Kubernetes manifests

Just like the container image, openshift-maven-plugin can generate opinionated Kubernetes manifests. Run this command to automatically generate manifests and apply them onto the currently logged-in Kubernetes cluster.

$ ./mvnw oc:resource oc:apply

After running these goals, you can also check the Kubernetes manifests generated by openshift-maven-plugin in the target/classes/META-INF/jkube/ directory.

Clean up the applied Kubernetes resources after testing:

$ ./mvnw oc:undeploy

How to set Service Port?

By default, Vert.x applications listen on port 8888, while openshift-maven-plugin's opinionated defaults use port 8080. If you want to change this, you’ll need to configure openshift-maven-plugin to generate the image with the desired port:

<configuration>
  <generator>
    <config>
      <vertx>
        <webPort>8080</webPort>
      </vertx>
    </config>
  </generator>
</configuration>

Once configured, you can go ahead and deploy the application to Kubernetes.

How to add Kubernetes readiness and liveness probes?

openshift-maven-plugin doesn’t add any Kubernetes liveness and readiness probes by default. However, it does provide a rich set of configuration options to add health checks. Read Vert.x Healthchecks section for more details.

2.4. Quarkus

You can easily get started with using openshift-maven-plugin on a Quarkus project without providing any explicit configuration. openshift-maven-plugin would generate an opinionated container image and manifests by inspecting your project configuration.

Zero Configuration

Adding openshift-maven-plugin to project

You would need to add openshift-maven-plugin to your project in order to use it:

Open the pom.xml file and add the plugin in the <plugins> section.

<plugin>
    <groupId>org.eclipse.jkube</groupId>
    <artifactId>openshift-maven-plugin</artifactId>
    <version>1.17.0</version>
</plugin>

Building container Image of your application

In case of OpenShift, Source to Image (S2I) builds are performed by default. ImageStream is updated after the image creation.

Run this command which would build your application’s container image and push it to OpenShift’s internal container registry:

$ ./mvnw oc:build

Generating & applying Kubernetes manifests

Just like the container image, openshift-maven-plugin can generate opinionated Kubernetes manifests. Run this command to automatically generate manifests and apply them onto the currently logged-in Kubernetes cluster.

$ ./mvnw oc:resource oc:apply

After running these goals, you can also check the Kubernetes manifests generated by openshift-maven-plugin in the target/classes/META-INF/jkube/ directory.

Clean up the applied Kubernetes resources after testing:

$ ./mvnw oc:undeploy

Quarkus Native Mode

While containerizing a Quarkus application in native mode, openshift-maven-plugin automatically detects that it’s a native executable artifact and selects a lighter base image for containerizing the application. No additional openshift-maven-plugin configuration is needed for native builds.

How to add Kubernetes liveness and readiness probes?

openshift-maven-plugin automatically adds Kubernetes liveness and readiness probes to the generated Kubernetes manifests in the presence of the SmallRye Health dependency.

To add SmallRye Health to your project, add the following dependency:

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-health</artifactId>
</dependency>

Once you run the oc:resource goal again, you should see liveness and readiness probes added to the generated manifests.

2.5. Dockerfile

You can build a container image and deploy to Kubernetes with openshift-maven-plugin by just providing a Dockerfile. openshift-maven-plugin builds a container image based on your Dockerfile and generates opinionated Kubernetes manifests by inspecting it.

Placing Dockerfile in project root directory

You can place the Dockerfile in the project root directory along with pom.xml.

openshift-maven-plugin detects it and automatically builds an image based on this Dockerfile. There is no need to provide any configuration apart from the Dockerfile, with the project root directory acting as the Docker context directory. The image is created with an opinionated name derived from group, artifact and version. The name can be overridden by using the jkube.image.name property. Read the Simple Dockerfile section for more details.
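For example, to override the generated name via the jkube.image.name property (the image name shown is illustrative):

mvn oc:build -Djkube.image.name=myuser/myapp:1.0.0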

Placing Dockerfile in some other directory

You can choose to place your Dockerfile at some other location. By default, the plugin assumes it to be src/main/docker, but then you’ll need to configure the Docker context directory in the plugin configuration. When not specified, the context directory is assumed to be the Dockerfile’s parent directory. You can take a look at the Docker File Provided Quickstarts for more details.
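A minimal sketch of such a configuration, assuming the contextDir and dockerFile elements of the image build configuration (the paths and image name are illustrative):

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <configuration>
    <images>
      <image>
        <name>myuser/myapp:latest</name>
        <build>
          <!-- Docker build context directory; the Dockerfile is resolved relative to it -->
          <contextDir>${project.basedir}/src/main/docker</contextDir>
          <dockerFile>Dockerfile</dockerFile>
        </build>
      </image>
    </images>
  </configuration>
</plugin>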

Controlling what gets copied to image

When using Dockerfile mode, every file and directory present in the Docker build context directory gets copied to the created Docker image. In case you want to ignore some files, or you want to include only a specific set of files, the openshift-maven-plugin provides the following options to achieve this:

Using Property placeholders in Dockerfiles

You can reference properties in your Dockerfiles using standard Maven property placeholders ${*}. For example, if you have a property in your pom.xml like this:

pom.xml
<properties>
  <fromImage>fabric8/s2i-java</fromImage>
</properties>
Dockerfile
FROM ${fromImage}:latest-java11

You can override placeholders using the filter field in the image build configuration; see Build Filtering for more details.

3. Installation

This plugin is available from Maven Central and can be connected to the pre- and post-integration phases as seen below. The configuration and available goals are described below.

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <version>1.17.0</version>

  <configuration>
     ....
     <images>
        <!-- A single image's configuration -->
        <image>
          ...
          <build>
           ....
          </build>
        </image>
        ....
     </images>
  </configuration>

  <!-- Connect oc:resource, oc:build and oc:helm to lifecycle phases -->
  <executions>
    <execution>
       <id>jkube</id>
       <goals>
         <goal>resource</goal>
         <goal>build</goal>
         <goal>helm</goal>
       </goals>
    </execution>
  </executions>
</plugin>

4. Goals Overview

This plugin supports a rich set of goals for providing a smooth Java developer experience. These goals can be categorized into multiple groups:

  • Build goals are all about creating and managing Kubernetes build artifacts like Docker images or S2I builds.

  • Development goals help not only in deploying resource descriptors to the development cluster but also in managing the lifecycle of the development cluster.

Table 2. Build Goals
Goal Description

oc:build

Build images

oc:push

Push images to a registry

oc:resource

Create Kubernetes or OpenShift resource descriptors

oc:apply

Apply resources to a running cluster

Table 3. Development Goals
Goal Description

oc:deploy

Deploy resource descriptors to a cluster after creating them and building the app. Same as oc:apply except that it runs in the background.

oc:undeploy

Undeploy and remove resource descriptors from a cluster.

oc:log

Show the logs of the running application

oc:debug

Enable remote debugging.

oc:remote-dev (preview)

Start a remote development session.

oc:watch

Watch for file changes and perform rebuilds and redeployments.

Depending on whether the OpenShift or Kubernetes operational mode is used, the workflow and the performed actions differ:

Table 4. Workflows
Use Case Kubernetes OpenShift

Build

oc:build oc:push

  • Creates an image against an exposed Docker daemon (with a docker.tar)

  • Pushes the image to a registry which is then referenced from the configuration

oc:build

  • Creates or uses a BuildConfig

  • Creates or uses an ImageStream which can be referenced by the deployment descriptors in a DeploymentConfig

  • Starts an OpenShift build with a docker.tar as input

Deploy

oc:deploy

  • Applies a Kubernetes resource descriptor to cluster

oc:deploy

  • Applies an OpenShift resource descriptor to a cluster

5. Build Goals

5.1. oc:resource

This goal generates Kubernetes resources based on your project. It can either use opinionated defaults or be based on the configuration provided in the XML config or in resource fragments in src/main/jkube. Generated resources are placed in the target/classes/META-INF/jkube/openshift directory.

This section includes XML configuration options you can use to tweak generated Kubernetes manifests.

5.1.1. Labels/Annotations

Labels and annotations can be easily added to any resource object. This is best explained by an example.

Example for label and annotations
<plugin>
  <!-- ... -->
  <configuration>
    <!-- ... -->
    <resources>
      <labels> (1)
        <all> (2)
          <property> (3)
            <name>organisation</name>
            <value>unesco</value>
          </property>
        </all>
        <service> (4)
          <property>
            <name>database</name>
            <value>mysql</value>
          </property>
          <property>
            <name>persistent</name>
            <value>true</value>
          </property>
        </service>
        <replicaSet> (5)
          <!-- ... -->
        </replicaSet>
        <pod> (6)
          <!-- ... -->
        </pod>
        <deployment> (7)
          <!-- ... -->
        </deployment>
      </labels>

      <annotations> (8)
        <deployment> (9)
          <my>deployment</my>
        </deployment>
        <service>
          <property> (10)
            <name>cloud.google.com/neg</name>
            <value>{"ingress":true}</value>
          </property>
        </service>
      </annotations>
    </resources>
  </configuration>
</plugin>
1 <labels> section within <resources> contains labels which should be applied to objects of various kinds
2 Within <all>, labels which should be applied to every object can be specified
3 Within <property> you can specify key-value pairs
4 <service> labels are used to label services
5 <replicaSet> labels are for replica sets and replication controllers
6 <pod> holds labels for pod specifications in replication controllers, replica sets and deployments
7 <deployment> is for labels on deployments (Kubernetes) and deployment configs (OpenShift)
8 The same subelements are also available for specifying annotations
9 An annotation for a deployment. The key is the tag name and the value is the tag’s content
10 Instead of a self-defined tag, the tag <property> can be used to handle complex cases, e.g. names with special characters that aren’t possible in XML tags

Labels and annotations can be specified in free form as a map. In this map, the element name is the name of the label or annotation respectively, whereas the content is the value to set.

The following subelements are possible for labels and annotations:

Table 5. Label and annotation configuration
Element Description

all

All entries specified in the all section are applied to all resource objects created. This also implies build objects like image streams and build configs which are created implicitly for an OpenShift build.

deployment

Labels and annotations applied to Deployment (for Kubernetes) and DeploymentConfig (for OpenShift) objects.

pod

Labels and annotations applied to the pod specification as used in ReplicationController, ReplicaSet, Deployment and DeploymentConfig objects.

replicaSet

Labels and annotations applied to ReplicaSet and ReplicationController objects.

service

Labels and annotations applied to Service objects.

ingress

Labels and annotations applied to Ingress objects.

serviceAccount

Labels and annotations applied to ServiceAccount objects.

route

Labels and annotations applied to Route objects.

5.1.2. Controller Generation

In JKube terminology, a Controller resource is a Kubernetes resource which manages the Pods created for your application. It can be one of the following resources: Deployment (or DeploymentConfig for OpenShift), StatefulSet, DaemonSet, ReplicaSet, ReplicationController, Job, or CronJob.

By default, a Deployment is generated in Kubernetes mode. You can easily configure different aspects of the generated Controller resource using the XML configuration. Here is an example:

Example of Controller Resource Configuration
<configuration>
    <resources>
        <controller>
            <env> (1)
              <organization>Eclipse Foundation</organization>
              <projectname>jkube</projectname>
            </env>
            <controllerName>my-deploymentname</controllerName> (2)
            <containerPrivileged>true</containerPrivileged> (3)
            <imagePullPolicy>Always</imagePullPolicy> (4)
            <replicas>3</replicas> (5)
            <liveness> (6)
                <getUrl>http://:8080/q/health</getUrl>
                <tcpPort>8080</tcpPort>
                <initialDelaySeconds>3</initialDelaySeconds>
                <timeoutSeconds>3</timeoutSeconds>
            </liveness>
            <startup> (7)
                <periodSeconds>30</periodSeconds>
                <failureThreshold>1</failureThreshold>
                <getUrl>http://:8080/actuator/health</getUrl>
            </startup>
            <volumes> (8)
              <volume>
                <name>scratch</name>
                <type>emptyDir</type>
                <medium>Memory</medium>
                <mounts>
                  <mount>/var/scratch</mount>
                </mounts>
              </volume>
            </volumes>
           <containerResources>
              <requests> (9)
                 <memory>32Mi</memory>
                 <cpu>300m</cpu>
              </requests>
              <limits> (10)
                 <memory>64Mi</memory>
                 <cpu>500m</cpu>
              </limits>
           </containerResources>
           <nodeSelector> (11)
              <region>east</region>
              <type>user-node</type>
           </nodeSelector>
        </controller>
    </resources>
</configuration>
1 Environment variables added to all of your application Pods
2 Name of the Controller (metadata.name set in the generated Deployment, Job, ReplicaSet, etc.)
3 Setting Security Context of all application Pods.
4 Configure how images would be updated. Can be one of IfNotPresent, Always or Never. Read Kubernetes Images docs for more details.
5 Number of replicas of pods we want to have in our application
6 Define an HTTP liveness request, see Kubernetes Liveness/Readiness probes for more details.
7 Define an HTTP startup request, see Kubernetes Startup probes for more details.
8 Mounting an EmptyDir Volume to your application pods
9 Requests describe the minimum amount of compute resources required. See Kubernetes Resource Management Documentation for more info.
10 Limits describe the maximum amount of compute resources allowed. See Kubernetes Resource Management Documentation for more info.
11 NodeSelector is used to select nodes where the pods should be scheduled based on the labels specified in the node. See Kubernetes NodeSelector for more details.

Here are the fields available in resources XML configuration that would work with oc:resource:

Table 6. resources fields for configuring generated controllers
Element Description

controller

Configuration element for changing various aspects of generated Controller.

serviceAccount

ServiceAccount name which will be used by pods created by controller resources (e.g. Deployment, ReplicaSet, etc.)

useLegacyJKubePrefix

Use old jkube.io/ annotation prefix instead of jkube.eclipse.org/ annotation prefix

Configuring generated Controller via XML

This configuration field is focused only on changing various elements of the Controller (mainly fields specified in PodTemplateSpec). Here are the available configuration fields within this object:

Table 7. controller fields for configuring generated controllers
Element Description

env

Environment variables which will be added to containers in Pod template spec.

volumes

Configuration element for adding volume mounts to containers in Pod template spec

controllerName

Name of the controller resource (i.e. Deployment, ReplicaSet, StatefulSet, etc.) generated

liveness

Configuration element for adding a liveness probe

readiness

Configuration element for adding readiness probe

startup

Configuration element for adding startup probe

containerPrivileged

Run container in privileged mode. Sets privileged: true in generated Controller’s PodTemplateSpec

imagePullPolicy

How images should be pulled (maps to ImagePullPolicy).

initContainers

Configuration element for adding InitContainers to generated Controller resource.

replicas

Number of replicas to create

restartPolicy

Pod’s restart policy.

For Job, this defaults to OnFailure. For others, it’s not provided (Kubernetes assumes it to be Always)

containerResources

Configure Controller’s compute resource requirements

schedule

Schedule for CronJob written in Cron syntax.

nodeSelector

Configuration element for adding nodeSelector to Pod template spec.

imagePullSecrets

Specify secrets for pulling images from private repos

InitContainer XML configuration
Table 8. initContainer fields for specifying initContainers
Element Description

name

Name of InitContainer

imageName

Image used for InitContainer

imagePullPolicy

How images should be pulled (maps to ImagePullPolicy).

cmd

Command to be executed in InitContainer (maps to .command)

volumes

Configuration element for adding volume mounts to InitContainers in Pod template spec

env

Environment variables that will be added to this InitContainer in Pod template spec.
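A hypothetical sketch of an initContainers configuration based on the fields above (the image, name and environment values are illustrative, and the exact nesting may differ):

<controller>
  <initContainers>
    <initContainer>
      <name>init-setup</name>
      <imageName>busybox:latest</imageName>
      <imagePullPolicy>IfNotPresent</imagePullPolicy>
      <!-- Environment variables added to this InitContainer -->
      <env>
        <WAIT_FOR_SERVICE>database</WAIT_FOR_SERVICE>
      </env>
    </initContainer>
  </initContainers>
</controller>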

Container Resource XML configuration
Table 9. containerResources fields for specifying compute resource requirements
Element Description

requests

The minimum amount of compute resources required. See Kubernetes Resource Management Documentation for more info.

limits

The maximum amount of compute resources allowed. See Kubernetes Resource Management Documentation for more info.

5.1.3. Probe Configuration

Probe configuration is used for configuring liveness and readiness probes for containers. Both liveness and readiness probes support the following options:

Table 10. XML Probe configuration
Element Description

initialDelaySeconds

Initial delay in seconds before the probe is started.

timeoutSeconds

Timeout in seconds for how long the probe may take.

exec

Command to execute for probing.

getUrl

Probe URL for HTTP Probe. Configures HTTP probe fields like host, scheme, path etc. by parsing the URL. For example, a getUrl = "http://:8080/health" would result in a probe generated with fields set like this:

host: ""

path: /health

port: 8080

scheme: HTTP

Host name with empty value defaults to Pod IP. You probably want to set "Host" in httpHeaders instead.

tcpPort

TCP port to probe.

failureThreshold

When a probe fails, Kubernetes will try failureThreshold times before giving up

successThreshold

Minimum consecutive successes for the probe to be considered successful after having failed.

httpHeaders

Custom headers to set in the request.

periodSeconds

How often in seconds to perform the probe. Defaults to 10 seconds. Minimum value is 1.
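As an illustration, a getUrl-based liveness configuration like the one in the earlier controller example would be expected to translate into a standard Kubernetes HTTP probe roughly like this (a sketch; the exact generated YAML may differ):

livenessProbe:
  httpGet:
    host: ""
    path: /q/health
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 3
  timeoutSeconds: 3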

5.1.4. Volume Configuration

The volumes field contains a list of volume configurations. Different configuration options are supported in order to cover the different volume types in Kubernetes.

Here are the options supported by a single volume:

Table 11. XML volume configuration
Element Description

type

type of Volume

name

name of volume to be mounted

mounts

List of mount paths of this volume.

path

Path for volume

medium

medium, applicable for Volume type emptyDir

repository

repository, applicable for Volume type gitRepo

revision

revision, applicable for Volume type gitRepo

secretName

Secret name, applicable for Volume type secret

server

Server name, applicable for Volume type nfsPath

readOnly

Whether it’s read only or not

pdName

pdName, applicable for Volume type gcePdName

fsType

File system type for Volume

partition

partition, applicable for Volume type gcePdName

endpoints

endpoints, applicable for Volume type glusterFsPath

claimRef

Claim Reference, applicable for Volume type persistentVolumeClaim

volumeId

volume id

diskName

disk name, applicable for Volume type azureDisk

diskUri

disk uri, applicable for Volume type azureDisk

kind

kind, applicable for Volume type azureDisk

cachingMode

caching mode, applicable for Volume type azureDisk

hostPathType

Host Path type

shareName

Share name, applicable for Volume type azureFile

user

User name

secretFile

Secret File, applicable for Volume type cephfs

secretRef

Secret reference, applicable for Volume type cephfs

lun

LUN (Logical Unit Number)

targetWwns

target WWNs, applicable for Volume type fc

datasetName

data set name, applicable for Volume type flocker

portals

list of portals, applicable for Volume type iscsi

targetPortal

target portal, applicable for Volume type iscsi

registry

registry, applicable for Volume type quobyte

volume

volume, applicable for Volume type quobyte

group

group, applicable for Volume type quobyte

iqn

IQN, applicable for Volume type iscsi

monitors

list of monitors, applicable for Volume type rbd

pool

pool, applicable for Volume type rbd

keyring

keyring, applicable for Volume type rbd

image

image, applicable for Volume type rbd

gateway

gateway, applicable for Volume type scaleIO

system

system, applicable for Volume type scaleIO

protectionDomain

protection domain, applicable for Volume type scaleIO

storagePool

storage pool, applicable for Volume type scaleIO

volumeName

volume name, applicable for Volume type scaleIO and storageOS

configMapName

ConfigMap name, applicable for Volume type configMap

configMapItems

List of ConfigMap items, applicable for Volume type configMap

items

List of items, applicable for Volume type downwardAPI
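As an illustration, a persistentVolumeClaim volume could be configured as follows, using the fields from the table above (the claim name and mount path are illustrative, and the claim is assumed to already exist):

<configuration>
  <resources>
    <controller>
      <volumes>
        <volume>
          <name>data</name>
          <type>persistentVolumeClaim</type>
          <claimRef>my-claim</claimRef>
          <mounts>
            <mount>/var/data</mount>
          </mounts>
        </volume>
      </volumes>
    </controller>
  </resources>
</configuration>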

5.1.5. Secrets

Once you’ve configured some Docker registry credentials in ~/.m2/settings.xml, as explained in the Authentication section, you can create Kubernetes secrets from a server declaration.

XML configuration

You can create a secret using XML configuration in the pom.xml file. It should contain the following fields:

key

required

description

dockerServerId

true

the server id which is configured in ~/.m2/settings.xml

name

true

this will be used as the name of the Kubernetes secret resource

namespace

false

the secret resource will be applied to the specific namespace, if provided

This is best explained by an example.

Example for Setting docker registry in properties
<properties>
    <docker.registry>docker.io</docker.registry>
</properties>
Example for specifying Secret Configuration to be created
<configuration>
    <resources>
        <secrets>
            <secret>
                <dockerServerId>${docker.registry}</dockerServerId>
                <name>mydockerkey</name>
            </secret>
        </secrets>
    </resources>
</configuration>

YAML fragment with annotation

You can create a secret using a YAML fragment, referencing the Docker server id with the annotation jkube.eclipse.org/dockerServerId. The YAML fragment file should be placed under the src/main/jkube/ folder.

Example
apiVersion: v1
kind: Secret
metadata:
  name: mydockerkey
  namespace: default
  annotations:
    jkube.eclipse.org/dockerServerId: ${docker.registry}
type: kubernetes.io/dockercfg

5.1.6. Ingress Generation

When the oc:resource goal is run, an Ingress will be generated for each Service if the jkube.createExternalUrls property is enabled.

The generated Ingress can be further customized by using an XML configuration, or by providing a YAML resource fragment.

XML Configuration

Table 12. Fields supported in resources
Element Description

ingress

Configuration element for creating new Ingress

routeDomain

Set host for Ingress or OpenShift Route

Here is an example of configuring Ingress using XML configuration:

Enable Ingress generation by enabling the createExternalUrls property
<properties>
     <jkube.createExternalUrls>true</jkube.createExternalUrls>
</properties>
Example for Ingress Configuration
<configuration>
    <resources>
        <ingress>
          <ingressTlsConfigs>
            <ingressTlsConfig> (1)
               <hosts>
                 <host>foo.bar.com</host>
               </hosts>
               <secretName>testsecret-tls</secretName>
            </ingressTlsConfig>
          </ingressTlsConfigs>
          <ingressRules>
            <ingressRule>
              <host>foo.bar.com</host> (2)
              <paths>
                <path>
                  <pathType>Prefix</pathType> (3)
                  <path>/foo</path>  (4)
                  <serviceName>service1</serviceName> (5)
                  <servicePort>8080</servicePort> (6)
                </path>
              </paths>
            </ingressRule>
          </ingressRules>
        </ingress>
    </resources>
</configuration>
1 Ingress TLS Configuration to specify Secret that contains TLS private key and certificate
2 Host names, can be precise matches or a wildcard. See Kubernetes Ingress Hostname documentation for more details
3 Ingress path type. Can be one of ImplementationSpecific, Exact or Prefix
4 Ingress path corresponding to the provided serviceName
5 Service Name corresponding to path
6 Service Port corresponding to path
Ingress XML Configuration

Here are the supported options while providing ingress in XML configuration

Table 13. ingress configuration
Element Description

ingressRules

IngressRule configuration

ingressTlsConfigs

Ingress TLS configuration

IngressRule XML Configuration

Here are the supported options while providing ingressRules in XML configuration

Table 14. ingressRule configuration
Element Description

host

Host name

paths

IngressRule path configuration

IngressRule Path XML Configuration

Here are the supported options while providing paths in XML configuration

Table 15. IngressRule path XML configuration
Element Description

pathType

type of Path

path

path

serviceName

Service name

servicePort

Service port

resource

Resource reference in Ingress backend

IngressRule Path Resource XML Configuration

Here are the supported options while providing resource in IngressRule’s path XML configuration

Table 16. IngressRule Path resource XML configuration
Element Description

name

Resource name

kind

Resource kind

apiGroup

Resource’s apiGroup

IngressTls XML Configuration

Here are the supported options while providing ingressTlsConfigs in the Ingress XML configuration

Table 17. IngressTls ingressTlsConfig XML configuration
Element Description

secretName

Secret name

hosts

a list of string host objects

Ingress YAML fragment:

You can create an Ingress using YAML fragments too, by placing the partial YAML file in the src/main/jkube directory. The following snippet contains an Ingress fragment example.

Ingress fragment Example
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - https-example.foo.com
    secretName: testsecret-tls
  rules:
  - host: https-example.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80

5.1.7. ServiceAccount Generation

You can use the resource configuration to generate a ServiceAccount, or to configure an already existing ServiceAccount into your generated Deployment.

Here is an example of XML configuration to generate a ServiceAccount:

Example for Creating ServiceAccount via XML
<configuration>
    <resources>
      <serviceAccounts>
        <serviceAccount>
          <name>my-serviceaccount</name> (1)
          <deploymentRef>my-deployment-name</deploymentRef> (2)
        </serviceAccount>
      </serviceAccounts>
    </resources>
</configuration>
1 Name of ServiceAccount to be created
2 Deployment which will be using this ServiceAccount

If you don’t want to generate a ServiceAccount but just want to use an existing one in your Deployment, you can configure it via the serviceAccount field in the resource configuration. Here is an example:

Example for Configuring already existing ServiceAccount into generated Deployment
<configuration>
    <resources>
      <serviceAccount>my-existing-serviceaccount</serviceAccount>
    </resources>
</configuration>

Service Account Resource fragment:

If you don’t want to use XML configuration, you can provide a resource fragment for the ServiceAccount resource. Here is how it would look:

ServiceAccount resource fragment
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot

5.1.8. Resource Validation

The resource goal also validates the generated resource descriptors using the API specification of Kubernetes.

Table 18. Validation Configuration
Element Description Property

skipResourceValidation

If value is set to true then resource validation is skipped. This may be useful if resource validation is failing for some reason but you still want to continue the deployment.

Default is false.

jkube.skipResourceValidation

failOnValidationError

If value is set to true then any validation error will block the plugin execution. A warning will be printed otherwise.

Default is false.

jkube.failOnValidationError
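For example, to skip resource validation for a single run, the property from the table above can be set on the command line:

mvn oc:resource -Djkube.skipResourceValidation=true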

5.1.9. Route Generation

When the oc:resource goal is run, an OpenShift Route descriptor (route.yml) will also be generated along with the service if an OpenShift cluster is targeted. If you do not want to generate a Route descriptor, you can set the jkube.openshift.generateRoute property to false.

Note: Routes will be automatically generated for Services with recognized web ports (80, 443, 8080, 8443, 9080, 9090, 9443).

If your service exposes any other port and you still want to generate a Route, you can do any of the following:

  • Force the route creation by setting the jkube.createExternalUrls property to true.

  • Force the route creation by using the expose: true label in the Service:

    • Add the expose: true label in a Service fragment.

    • Add the expose: true label by leveraging the JKube Service Enricher (jkube.enricher.jkube-service.expose).

Table 19. Route Generation Configuration
Element Description Property

generateRoute

If value is set to false then no Route descriptor will be generated. By default it is set to true, which will create a route.yml descriptor and also add a Route resource to openshift.yml.

jkube.openshift.generateRoute

jkube.enricher.jkube-openshift-route.generateRoute

tlsTermination

TLS termination type of the generated Route, e.g. edge, passthrough or reencrypt.

jkube.enricher.jkube-openshift-route.tlsTermination

tlsInsecureEdgeTerminationPolicy

tlsInsecureEdgeTerminationPolicy indicates the desired behavior for insecure connections to a route. While each router may make its own decisions on which ports to expose, this is normally port 80.

  • Allow - traffic is sent to the server on the insecure port (default)

  • Disable - no traffic is allowed on the insecure port.

  • Redirect - clients are redirected to the secure port.

jkube.enricher.jkube-openshift-route.tlsInsecureEdgeTerminationPolicy

Below is an example of generating a Route with "edge" termination and "Allow" insecureEdgeTerminationPolicy:

Example for generating route resource by configuring it in pom.xml
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <version>1.17.0</version>
  <configuration>
    <enricher>
      <config>
        <jkube-openshift-route>
          <generateRoute>true</generateRoute>
          <tlsInsecureEdgeTerminationPolicy>Allow</tlsInsecureEdgeTerminationPolicy>
          <tlsTermination>edge</tlsTermination>
        </jkube-openshift-route>
      </config>
    </enricher>
  </configuration>
</plugin>

Adding certificates for routes is not directly supported in the pom, but can be added via a YAML fragment.

If you do not want to generate a Route descriptor, you can also specify so in the plugin configuration in your POM as seen below.

Example for not generating route resource by configuring it in pom.xml
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <version>1.17.0</version>
  <configuration>
    <enricher>
      <config>
        <jkube-openshift-route>
          <generateRoute>false</generateRoute>
        </jkube-openshift-route>
      </config>
    </enricher>
  </configuration>
</plugin>

If you are using resource fragments, you can also configure this in your Service resource fragment (e.g. service.yml). You need to add an expose label to the metadata section of your service and set it to false.

Example for not generating route resource by configuring it in resource fragments
metadata:
  annotations:
    api.service.kubernetes.io/path: /hello
  labels:
    expose: "false"
spec:
  type: LoadBalancer

5.1.10. Supported Properties for Resource goal

Table 20. Options available with resource goal
Element Description Property

profile

Profile to use. A profile contains the enrichers and generators to use as well as their configuration. Profiles are looked up in the classpath and can be provided as yaml files.

Defaults to default.

jkube.profile

sidecar

Whether to enable sidecar behavior or not. By default pod specs are merged into main application container.

Defaults to false.

jkube.sidecar

skipHealthCheck

Whether to skip health checks addition in generated resources or not.

Defaults to false.

jkube.skipHealthCheck

workDir

The JKube working directory. Defaults to ${project.build.directory}/jkube-temp.

jkube.workDir

environment

Environment name where resources are placed. For example, if you set this property to dev and resourceDir is the default one, the plugin will look at src/main/jkube/dev. Multiple environments can also be provided in the form of comma-separated strings. Resource fragments in these directories will be combined while generating resources.

Defaults to null.

jkube.environment

useProjectClassPath

Should we use the project’s compile time classpath to scan for additional enrichers/generators.

Defaults to false.

jkube.useProjectClassPath

resourceDir

Folder where to find project specific files.

Defaults to ${basedir}/src/main/jkube.

jkube.resourceDir

targetDir

The generated Kubernetes manifests target directory.

Defaults to ${project.build.outputDirectory}/META-INF/jkube.

jkube.targetDir

resourceType

The artifact type for attaching the generated resource file to the project. Can be either 'json' or 'yaml'.

Defaults to yaml.

jkube.resourceType

mergeWithDekorate

When resource generation is delegated to Dekorate, should JKube resources be merged with Dekorate generated ones.

Defaults to false.

jkube.mergeWithDekorate

interpolateTemplateParameters

Interpolate parameter values from *template.yml fragments in the generated resource list (kubernetes.yml).

This is useful when using JKube in combination with Helm.

Placeholders for variables defined in template files can be used in the different resource fragments. Helm generated charts will contain these placeholders/parameters.

For resource goal, these placeholders are replaced in the aggregated resource list YAML file (not in the individual generated resources) if this option is enabled.

Defaults to true.

jkube.interpolateTemplateParameters

skipResource

Skip resource generation.

Defaults to false.

jkube.skip.resource

createExternalUrls

Should we create external Ingress for any LoadBalancer Services which don’t already have them.

Defaults to false.

jkube.createExternalUrls

domain

Domain added to the Service ID when creating Kubernetes Ingresses or OpenShift routes.

jkube.domain

replicas

Number of replicas for the container.

offline

Whether to try detecting Kubernetes Cluster or stay offline.

Defaults to false.

jkube.offline
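As an illustration, several of these options can be combined on the command line (the environment name dev is an assumption):

mvn oc:resource -Djkube.environment=dev -Djkube.skipHealthCheck=true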

5.2. oc:build

This task is for building container images for your application.

For the openshift mode, OpenShift specific builds will be performed. These are so-called Binary Source builds ("binary builds" for short), where the data specified with the build configuration is sent directly to OpenShift as a binary archive.

There are two kinds of binary builds supported by this plugin, which can be selected with the buildStrategy configuration option (jkube.build.strategy property).

Table 21. Build Strategies
buildStrategy Description

s2i

The Source-to-Image (S2I) build strategy uses so-called builder images for creating new application images from binary build data. The builder image to use is taken from the base image configuration specified with from in the image build configuration. See below for a list of builder images which can be used with this plugin.

docker

A Docker Build is similar to a normal Docker build except that it is done by the OpenShift cluster and not by a Docker daemon. In addition, this build pushes the generated image to the OpenShift internal registry so that it is accessible in the whole cluster.

Both build strategies update an Image Stream after the image creation.

The Build Config and Image streams can be managed by this plugin. If they do not exist, they will be automatically created by oc:build.

If they do already exist, they are reused, except when the buildRecreate configuration option (property jkube.build.recreate) is set to a value as described in Global Configuration. Also, if the provided build strategy is different from the one defined in the existing build configuration, the Build Config is edited to reflect the new type (which in turn removes all builds associated with the previous build).
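For example, to recreate both objects while switching to the Docker build strategy, an invocation along these lines could be used (an illustrative sketch based on the options described above):

mvn oc:build -Djkube.build.recreate=all -Djkube.build.strategy=docker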

The image stream created can then be referenced directly from Deployment Configuration objects created by oc:resource.

By default, image streams are created with a local lookup policy, so that they can be used also by other resources such as Deployments or StatefulSets. This behavior can be turned off by setting the jkube.s2i.imageStreamLookupPolicyLocal property to false when building the project.

In order to be able to create these OpenShift resource objects access to an OpenShift installation is required.

Regardless of which build mode is used, the images are configured in the same way.

The configuration consists of two parts:

  • a global section which defines the overall behaviour of this plugin

  • and an images section which defines how the images should be built

Many of the options below are relevant for the Kubernetes Workflow or the OpenShift Workflow with Docker builds as they influence how the Docker image is built.

For an S2I binary build, on the other hand, the most relevant section is the Assembly one because the build depends on which builder/base image is used and how it interprets the content of the uploaded docker.tar.

5.2.1. Setting Quotas for OpenShift Build

You can also limit resource usage by specifying resource limits as part of the build configuration. You can do this by providing the openshiftBuildConfig field in the resource configuration. Below is an example of how to do this:

Example of OpenShift S2I Build resource/limit Configuration
<configuration>
    <resources>
         <openshiftBuildConfig>
            <requests> (1)
              <cpu>500m</cpu> (2)
              <memory>512Mi</memory> (3)
            </requests>
            <limits> (4)
              <cpu>1000m</cpu> (5)
              <memory>1Gi</memory> (6)
            </limits>
         </openshiftBuildConfig>
    </resources>
</configuration>
1 Request field which maps to created BuildConfig’s .spec.resources.requests
2 Minimum CPU required by Build Pod
3 Minimum memory required by Build Pod
4 Limits field which maps to created BuildConfig’s .spec.resources.limits
5 Maximum CPU allowed for the Build Pod
6 Maximum memory allowed for the Build Pod

It’s also possible to provide a buildconfig.yml BuildConfig resource fragment in the src/main/jkube directory like this:

BuildConfig fragment Example(buildconfig.yml)
spec:
  resources:
    limits:
      cpu: "600m"
      memory: "512Mi"
    requests:
      cpu: "500m"
      memory: "300Mi"

5.2.2. Configuration (XML)

The following sections describe the usual configuration, which is similar to the build configuration used in the docker-maven-plugin.

In addition, a more automatic way of creating predefined build configurations is available with so-called Generators. Generators are very flexible and can be easily created. These are described in an extra section. Note that if you’re providing your own XML image configuration, it takes precedence: Generators won’t be used when you’ve already defined a custom image configuration.

Global configuration parameters specify overall behavior common for all images to build. Some of the configuration options are shared with other goals.

Table 22. Global build configuration options
Element Description Property

buildStrategy

Defines which build strategy to choose while building the container image. Possible values are docker, buildpacks and jib, of which docker is the default.

If the build is performed in an OpenShift cluster, an additional s2i option is available and selected by default.

Available strategies for OpenShift are s2i (default) and docker.

jkube.build.strategy

buildSourceDirectory

Default directory that contains the assembly descriptor(s) used by the plugin. The default value is src/main/docker. This option is only relevant for the oc:build goal.

jkube.build.source.dir

authConfig

Authentication information when pulling from or pushing to a Docker registry. See the dedicated Authentication section for details.

autoPull

Decide how to pull missing base images or images to start:

  • on : Automatically download any missing images (default)

  • off : Automatic pulling is switched off

  • always : Pull images always, even when they already exist locally

  • once : For multi-module builds, images are only checked once and pulled for the whole build.

jkube.docker.autoPull

imagePullPolicy

Specify whether images should be pulled when looking for base images while building or for images to start. This property can take the following values (case insensitive):

  • IfNotPresent : Automatically download any missing images (default)

  • Never : Automatic pulling is always switched off

  • Always : Pull images always, even when they already exist locally.

By default, a progress meter is printed out on the console, which is omitted when using maven in batch mode (option -B). A very simplified progress meter is provided when using no color output (i.e. with -Djkube.useColor=false).

jkube.docker.imagePullPolicy

certPath

Path to the SSL certificates used for communicating with the Docker daemon when SSL is enabled. These certificates are normally stored in ~/.docker/. With this configuration the path can be set explicitly. If not set, the fallback is first taken from the environment variable DOCKER_CERT_PATH and then, as a last resort, ~/.docker/. The keys in this directory are expected to have the standard names ca.pem, cert.pem and key.pem. Please refer to the Docker documentation for more information about SSL security with Docker.

jkube.docker.certPath

dockerHost

The URL of the Docker Daemon. If this configuration option is not given, then the optional <machine> configuration section is consulted. The scheme of the URL can be either given directly as http or https depending on whether plain HTTP communication is enabled or SSL should be used. Alternatively the scheme could be tcp in which case the protocol is determined via the IANA assigned port: 2375 for http and 2376 for https. Finally, Unix sockets are supported by using the scheme unix together with the filesystem path to the unix socket.

The discovery sequence used by the docker-maven-plugin to determine the URL is:

  1. Value of dockerHost (jkube.docker.host)

  2. The Docker host associated with the docker-machine named in <machine>, i.e. the DOCKER_HOST from docker-machine env. See below for more information about Docker machine support.

  3. The value of the environment variable DOCKER_HOST.

  4. unix:///var/run/docker.sock if it is a readable socket.

jkube.docker.host

filter

This configuration option can be used to temporarily restrict the operation of plugin goals. Typically, it is set via the system property jkube.image.filter when maven is called. The value can be a single image name (either its alias or full name) or a comma separated list of multiple image names. Any name which doesn’t refer to an image in the configuration will be ignored.

jkube.image.filter

machine

Docker machine configuration. See Docker Machine for possible values.

maxConnections

Number of parallel connections allowed to be opened to the Docker host. For parsing log output, a connection needs to be kept open (as well as for the wait features), so don’t set this number too low. The default is 100, which should be suitable for most cases.

jkube.docker.maxConnections

outputDirectory

Default output directory to be used by this plugin. The default value is target/docker and is only used for the goal oc:build.

jkube.build.target.dir

profile

Profile which contains the enricher and generator configuration to use. See Profiles for details.

jkube.profile

forcePull

Applicable only for OpenShift, S2I build strategy.

While creating a BuildConfig, by default, if the builder image specified in the build configuration is available locally on the node, that image will be used.

Using forcePull will override the local image and refresh it from the registry the image stream points to.

jkube.build.forcePull

openshiftPullSecret

The name of the pullSecret to be created to pull the base image when pulling from a private registry which requires authentication (OpenShift only).

The default value for pull registry will be picked from jkube.docker.pull.registry/jkube.docker.registry.

jkube.build.pullSecret

openshiftPushSecret

The name of pushSecret to be used to push the final image in case pushing from a protected registry which requires authentication.

jkube.build.pushSecret

buildOutputKind

Allows specifying to which registry the container image is pushed at the end of the build. If the output kind is ImageStreamTag, then the image will be pushed to the internal OpenShift registry. If the output is of type DockerImage, then the name of the output reference will be used as a Docker push specification. The default value is ImageStreamTag.

jkube.build.buildOutput.kind

buildRecreate

If the build is performed in an OpenShift cluster then this option decides how the OpenShift resource objects associated with the build should be treated when they already exist:

  • buildConfig or bc : Only the BuildConfig is recreated

  • imageStream or is : Only the ImageStream is recreated

  • all : Both, BuildConfig and ImageStream are recreated

  • none : Neither BuildConfig nor ImageStream is recreated

The default is none. If you provide the property without value then all is assumed, so everything gets recreated.

jkube.build.recreate

registry

Specify globally a registry to use for pulling and pushing images. See Registry handling for details.

jkube.docker.registry

skip

With this parameter the execution of this plugin can be skipped completely.

jkube.skip

skipBuild

If set, no images will be built (which also implies skip.tag) with oc:build.

jkube.skip.build

skipBuildPom

If set, the build step will be skipped for modules of type pom. If not set, projects of type pom are skipped by default when they contain no image configurations.

jkube.skip.build.pom

skipTag

If set to true this plugin won’t add any tags to images that have been built with oc:build.

jkube.skip.tag

skipMachine

Skip using docker machine in any case

jkube.docker.skip.machine

sourceDirectory

Default directory that contains the assembly descriptor(s) used by the plugin. The default value is src/main/docker. This option is only relevant for the oc:build goal.

jkube.build.source.dir

verbose

Boolean attribute for switching on verbose output like the build steps when doing a Docker build. Default is false.

jkube.docker.verbose

logDate

The date format to use when logging messages from Docker. Default is DEFAULT (HH:mm:ss.SSS)

jkube.docker.logDate

logStdout

Log to stdout regardless if log files are configured or not. Default is false.

jkube.docker.logStdout

5.2.3. Image Configuration

The configuration for how images are created is defined in a dedicated images section. Images are specified within the images element of the configuration, with one image element per image.

The image element can contain the following sub elements:

Table 23. Image Configuration
Element Description Property

name

Each image configuration has a mandatory, unique docker repository name. This can include registry and tag parts, but also placeholder parameters. See below for a detailed explanation.

jkube.container-image.name

alias

Shortcut name for an image which can be used for identifying the image within this configuration. This is used when linking images together or for specifying it with the global image configuration element.

jkube.container-image.alias

registry

Registry to use for this image. If the name already contains a registry this takes precedence. See Registry handling for more details.

jkube.container-image.registry

build

Element which contains all the configuration aspects when doing a oc:build.

This element can be omitted if the image is only pulled from a registry. e.g. as support for integration tests like database images.

propertyResolverPrefix

Prefix for property resolution. This is used to resolve properties in the configuration. If not set, the default prefix is jkube.container-image.

The build section is mandatory and is explained below.

When specifying the image name in the configuration with the name field, then you can use several placeholders. These placeholders are replaced during the execution by this plugin.

Table 24. Image Names
Placeholder Description

%g

The last part of the maven group name. The name gets sanitized, so that it can be used as username on GitHub. Only the part after the last dot is used. For example, given the group id org.eclipse.jkube, this placeholder would insert jkube.

%a

A sanitized version of the artefact id, so that it can be used as part of a Docker image name. This means primarily, that it is converted to all lower case (as required by Docker).

%v

A sanitized version of the project version. Replaces + with - in ${project.version} to comply with the Docker tag convention. (A different replacement symbol can be defined by setting the jkube.image.tag.semver_plus_substitution property.) For example, the version '1.2.3b' becomes the exact same Docker tag, '1.2.3b'. But '1.2.3+internal' becomes the 1.2.3-internal Docker tag.

%l

If the pre-release part of the project version ends with -SNAPSHOT, then this placeholder resolves to latest. Otherwise, it’s the same as %v.

If the ${project.version} contains a build metadata part (i.e. everything after the +), then the + is substituted and the rest is appended. For example, the project version 1.2.3-SNAPSHOT+internal becomes the latest-internal Docker tag.

%t

If the project version ends with -SNAPSHOT, this placeholder resolves to snapshot-<timestamp> where timestamp has the date format yyMMdd-HHmmss-SSSS (e.g. snapshot-221018-113000-0000). This feature is especially useful during development in order to avoid conflicts when images which are still in use are to be updated. You need to take care of cleaning up old images afterwards yourself, though.

If the ${project.version} contains a build metadata part (i.e. everything after the +), then the + is substituted and the rest is appended. For example, the project version 1.2.3-SNAPSHOT+internal becomes the snapshot-221018-113000-0000-internal Docker tag.

Example for <image>
<configuration>
  <!-- .... -->
  <images>
    <image> (1)
      <name>%g/jkube-build-demo:0.1</name> (2)
      <alias>service</alias> (3)
      <build>....</build> (4)
    </image>
    <image>
      ....
    </image>
  </images>
</configuration>
1 One or more <image> definitions
2 The Docker image name used when creating the image. Note that %g would be replaced by project group id.
3 An alias which can be used in other parts of the plugin to reference to this image. This alias must be unique.
4 A <build> section as described in Build Configuration

5.2.4. Build Configuration

There are different modes in which images can be built:

Inline configuration

When using this mode, the Dockerfile is created on the fly with all instructions extracted from the configuration given.

External Dockerfile or Docker archive

Alternatively, an external Dockerfile template or Docker archive can be used. This mode is switched on by using one of these three configuration options within the build configuration:

  • contextDir specifies the Docker build context if an external Dockerfile is located outside of the Docker build context. If not specified, the Dockerfile’s parent directory is used as the build context.

  • dockerFile specifies a specific Dockerfile path. The Docker build context directory is set to contextDir if given. If not, it defaults to the directory in which the Dockerfile is stored.

  • dockerArchive specifies a previously saved image archive to load directly. If a dockerArchive is provided, no dockerFile may be given.

All paths can be either absolute or relative. A relative path is looked up in ${project.basedir}/src/main/docker by default. You can easily make it an absolute path by using ${project.basedir} in your configuration.
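A minimal image configuration using an external Dockerfile might look like the following sketch (the image name and paths are illustrative assumptions):

<image>
  <name>user/demo</name>
  <build>
    <contextDir>${project.basedir}/src/main/docker</contextDir>
    <dockerFile>Dockerfile</dockerFile>
  </build>
</image>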

However, you need to add the files on your own in the Dockerfile with an ADD or COPY command. The files of the assembly are stored in a build-context-relative directory maven/ by default, but this can be changed by changing the assembly name with the option name in the assembly configuration.

E.g. the files can be added with:

COPY maven/ /my/target/directory

so that the assembly files will end up in /my/target/directory within the container.

If this directory contains a .jkube-dockerignore (or alternatively, a .jkube-dockerexclude file), then it is used for excluding files for the build. If the file doesn’t exist, or it’s empty, then there are no exclusions.

Each line in this file is treated as an entry in the excludes assembly fileSet configuration. Files can be referenced by using their relative path name. Wildcards are also supported; patterns will be matched using FileSystem#getPathMatcher glob syntax.

It is similar to .dockerignore when using Docker but has a slightly different syntax (hence the different name).

The following .jkube-dockerexclude (or .jkube-dockerignore) file is an example which excludes all compiled Java classes.

Example 1. Example .jkube-dockerexclude or .jkube-dockerignore
target/classes/**  (1)
1 Exclude all compiled classes

If this directory contains a .jkube-dockerinclude file, then it is used for including only those files for the build. If the file doesn’t exist or it’s empty, then everything is included.

Each line in this file is treated as an entry in the includes assembly fileSet configuration. Files can be referenced by using their relative path name. Wildcards are also supported; patterns will be matched using FileSystem#getPathMatcher glob syntax.

The following .jkube-dockerinclude example shows how to include only jar files that have been built into the Docker build context.

Example 2. Example .jkube-dockerinclude
target/*.jar  (1)
1 Only add jar files to your Docker build context.

Except for the assembly configuration, all other configuration options are ignored for now.

Simple Dockerfile build

When only a single image should be built with a Dockerfile, no XML configuration is needed at all. All that needs to be done is to place a Dockerfile into the top-level module directory, alongside the pom.xml. You can still configure global aspects in the plugin configuration, but as soon as you add an image in the XML configuration, you need to configure the build explicitly as well.

The image name is by default derived from the Maven coordinates (%g/%a:%l, see Image Name for an explanation of the placeholders, which are essentially the Maven group id, artifact id and project version). This name can be set with the jkube.image.name property.
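A minimal Dockerfile for this mode could look like the following sketch (the base image and jar name are assumptions; maven/ is the default directory that holds the assembly files):

FROM eclipse-temurin:17-jre
# Copy the assembly files (build artifacts) into the image
COPY maven/ /opt/app/
ENTRYPOINT ["java", "-jar", "/opt/app/my-app.jar"]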

Filtering

openshift-maven-plugin filters the given Dockerfile with Maven properties, much like the maven-resources-plugin does. Filtering is enabled by default and can be switched off with the build configuration filter='false'. Properties to be replaced are specified with the ${..} syntax. Replacement includes properties set in the build, command-line properties, and system properties. Unresolved properties remain untouched.

This partial replacement means that you can easily mix it with Docker build arguments and environment variable references, but you need to be careful. If you want to be more explicit about the property delimiter to clearly separate Docker properties from Maven properties, you can redefine the delimiter. In general, the filter option can be specified the same way as delimiters in the resource plugin. In particular, if this configuration contains a *, then the parts left and right of the asterisk are used as delimiters.

For example, the default filter='${*}' parses Maven properties in the familiar format. If you specify a single character for filter, then this delimiter is taken for both the start and the end. E.g. a filter='@' triggers on parameters in the format @…​@. Use something like this if you want to clearly separate them from Docker build args. This form of property replacement works for the Dockerfile only. For replacing other data in files targeted for the Docker image, please use the assembly configuration with filtering to make them available in the Docker build context.

Example

The following example replaces all properties in the format @property@ within the Dockerfile.

<plugin>
 <configuration>
   <images>
     <image>
       <name>user/demo</name>
       <build>
         <filter>@</filter>
       </build>
     </image>
   </images>
 </configuration>
 ...
</plugin>
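Inside the Dockerfile, Maven properties can then be referenced with the @ delimiter. A minimal sketch (the base image and the label usage are illustrative):

FROM eclipse-temurin:17-jre
# @project.version@ is replaced with the Maven project version during filtering
LABEL version="@project.version@"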

All build relevant configuration is contained in the build section of an image configuration. The following configuration options are supported:

Table 25. Build configuration (image)
Element Description Property

assembly

Specifies the assembly configuration as described in Build Assembly

jkube.container-image.assembly.xxx

args

Map specifying the value of Docker build args which should be used when building the image with an external Dockerfile which uses build arguments. The key-value syntax is the same as when defining maven properties (or labels or env). This argument is ignored when no external Dockerfile is used. Build args can also be specified as properties as described in Build Args

jkube.container-image.args

buildOptions

Map specifying the build options to provide to the docker daemon when building the image. These options map to the ones listed as query parameters in the Docker Remote API and are restricted to simple options (e.g.: memory, shmsize). If you use the respective configuration options for build options natively supported by the build configuration (i.e. noCache, cleanup=remove for buildoption forcerm=1 and args for build args) then these will override any corresponding options given here. The key-value syntax is the same as when defining environment variables or labels as described in Setting Environment Variables and Labels.

jkube.container-image.buildOptions

createImageOptions

Map specifying the create image options to provide to the docker daemon when pulling or importing an image. These options map to the ones listed as query parameters in the Docker Remote API and are restricted to simple options (e.g.: fromImage, fromSrc, platform).

jkube.container-image.createImageOptions

cleanup

Cleanup dangling (untagged) images after each build (including any containers created from them). Default is try which tries to remove the old image, but doesn’t fail the build if this is not possible because e.g. the image is still used by a running container. Use remove if you want to fail the build and none if no cleanup is requested.

jkube.container-image.cleanup

contextDir

Path to a directory used for the build’s context. You can specify the Dockerfile to use with dockerFile, which by default is the Dockerfile found in the contextDir. The Dockerfile can be also located outside of the contextDir, if provided with an absolute file path. See External Dockerfile for details.

jkube.container-image.contextDir

cmd

A command to execute by default (i.e. if no command is provided when a container for this image is started). See Startup Arguments for details.

jkube.container-image.cmd

compression

The compression mode how the build archive is transmitted to the docker daemon (oc:build) and how docker build archives are attached to this build as sources. The value can be none (default), gzip or bzip2.

dockerFile

Path to a Dockerfile which also triggers Dockerfile mode. See External Dockerfile for details.

jkube.container-image.dockerFile

dockerArchive

Path to a saved image archive which is then imported. See Docker archive for details.

jkube.container-image.dockerArchive

entryPoint

An entrypoint allows you to configure a container that will run as an executable. See Startup Arguments for details.

jkube.container-image.entrypoint

env

The environments as described in Setting Environment Variables and Labels.

jkube.container-image.env

e.g. jkube.container-image.env.FOO=bar

filter

Enable and set the delimiters for property replacements. By default, properties in the format ${..} are replaced with maven properties. You can switch off property replacement by setting this property to false. When using a single char like @ then this is used as a delimiter (e.g @…​@). See Filtering for more details.

jkube.container-image.filter

from

The base image which should be used for this image. If not given, this defaults to busybox:latest, which is suitable for a pure data image. In case of an S2I binary build, this parameter specifies the S2I builder image to use, which by default is fabric8/s2i-java:latest. See also fromExt for how to add additional properties for the base image.

jkube.container-image.from

buildpacksBuilderImage

Configure the BuildPack builder OCI image for a BuildPack build. This overrides the builder image specified in the local ~/.pack/config.toml. If not specified, this defaults to null.

This field is only applicable for the buildpacks build strategy.

jkube.container-image.buildpacksBuilderImage

fromExt

Extended definition for a base image. This field holds a map of entries defined in key = "value" format. The known keys are:

  • name : Name of the base image

  • kind : Kind of the reference to the builder image when in S2I build mode. By default it is ImageStreamTag, but it can also be ImageStream. An alternative would be DockerImage.

  • namespace : Namespace where this builder image lives.

    A provided `from` takes precedence over the name given here. This tag is useful for extensions of this plugin.

jkube.container-image.fromExt

healthCheck

Specifies the health check configuration as described in Build Healthcheck

jkube.container-image.healthcheck.xxx

imagePullPolicy

Specific pull policy for the base image. This overwrites any global pull policy. See the global configuration option imagePullPolicy for the possible values and the default.

jkube.container-image.imagePullPolicy

labels

Labels as described in Setting Environment Variables and Labels.

jkube.container-image.labels

e.g. jkube.container-image.label.foo=bar

maintainer

The author (MAINTAINER) field for the generated image

jkube.container-image.maintainer

noCache

Don’t use Docker’s build cache. This can be overwritten by setting a system property docker.noCache when running maven.

jkube.container-image.nocache

cacheFrom

A list of image elements specifying image names to use as cache sources.

jkube.container-image.cachefrom

e.g. jkube.container-image.cachefrom.1=my-cache-image:0.0.1

optimise

If set to true, all the runCmds are compressed into a single RUN directive so that only one image layer is created.

jkube.container-image.optimise

platforms

List of platform elements with the target platforms (os/architecture) for which to build the image. Enables multi-platform builds.

You should use a base image that includes support for multiple platforms, such as:

  • x86-64: linux/amd64, linux/i386

  • ARM architectures: linux/arm/v7, linux/arm64

  • PowerPC: linux/ppc64le

Supported only when using the jib build strategy

jkube.container-image.platforms

ports

The exposed ports which is a list of port elements, one for each port to expose. Whitespace is trimmed from each element and empty elements are ignored. The format can be either pure numerical ("8080") or with the protocol attached ("8080/tcp").

jkube.container-image.ports

e.g. jkube.container-image.ports.1=8080

shell

Shell to be used for the runCmds. It contains arg elements which define the executable and its params.

jkube.container-image.shell

runCmds

Commands to be run during the build process. It contains run elements which are passed to the shell. Whitespace is trimmed from each element and empty elements are ignored. The run commands are inserted right after the assembly and after workdir into the Dockerfile.

jkube.container-image.runCmds

e.g. jkube.container-image.runCmds.1=groupadd -r appUser

skip

If set to true, building of the image is disabled. This config option is best used together with a Maven property.

jkube.container-image.skip

tags

List of additional tag elements with which an image is to be tagged after the build. Whitespace is trimmed from each element and empty elements are ignored.

jkube.container-image.tags

e.g. jkube.container-image.tags.1=latest

user

User to which the Dockerfile should switch at the end (corresponds to the USER Dockerfile directive).

jkube.container-image.user

volumes

List of volume elements to create a container volume. Whitespace is trimmed from each element and empty elements are ignored.

jkube.container-image.volumes

e.g. jkube.container-image.volumes.1=/path/to/expose

workdir

Directory to change to when starting the container.

jkube.container-image.workdir

From this configuration the plugin creates an in-memory Dockerfile, copies over the assembled files and calls the Docker daemon via its remote API.

Example
<build>
  <from>java:8u40</from>
  <maintainer>john.doe@example.com</maintainer>
  <tags>
    <tag>latest</tag>
    <tag>${project.version}</tag>
  </tags>
  <ports>
    <port>8080</port>
  </ports>
  <volumes>
    <volume>/path/to/expose</volume>
  </volumes>
  <buildOptions>
    <shmsize>2147483648</shmsize>
  </buildOptions>

  <shell>
    <exec>
      <arg>/bin/sh</arg>
      <arg>-c</arg>
    </exec>
  </shell>
  <runCmds>
    <run>groupadd -r appUser</run>
    <run>useradd -r -g appUser appUser</run>
  </runCmds>

  <entryPoint>
    <!-- exec form for ENTRYPOINT -->
    <exec>
      <arg>java</arg>
      <arg>-jar</arg>
      <arg>/opt/demo/server.jar</arg>
    </exec>
  </entryPoint>

  <assembly>
    <mode>dir</mode>
    <targetDir>/opt/demo</targetDir>
  </assembly>
</build>

In order to see the individual build steps you can switch on verbose mode, either by setting the property jkube.docker.verbose or by using <verbose>true</verbose> in the Build Goal configuration.

5.2.5. Assembly

The assembly element within the build element defines how build artifacts and other files can be added to the Docker image. The files which are supposed to be added via the assembly should be present in the project directory. It’s also possible to add files from external sources using your own custom logic (see JKube Plugin for more details).

Table 26. Assembly Configuration (image : build )
Element Description Property

name

Assembly name, which is maven by default. This name is used for the archives and directories created during the build. This directory holds the files specified by the assembly. If an external Dockerfile is used then this name is also the relative directory which contains the assembly files.

jkube.container-image.assembly.name

targetDir

Directory under which the files and artifacts contained in the assembly will be copied within the container. The default value for this is ${assembly.name}, so /maven if name is not set to a different value.

jkube.container-image.assembly.targetDir

inline

Deprecated: use layers instead. Inlined assembly descriptor as described in Assembly - Inline below.

layers

Each of the layers that the assembly will contain as described in Assembly - Layer below.

exportTargetDir

Specifies whether the targetDir should be exported as a volume. This value is true by default except when the targetDir is set to the container root (/). It is also false by default when a base image is used with from, since exporting makes no sense in this case and would waste disk space unnecessarily.

jkube.container-image.assembly.exportTargetDir

excludeFinalOutputArtifact

By default, the project’s final artifact will be included in the assembly; set this flag to true in case the artifact should be excluded from the assembly.

jkube.container-image.assembly.excludeFinalOutputArtifact

mode

Mode how the assembled files should be collected:

  • dir : Files are simply copied (default),

  • tar : Transfer via tar archive

  • tgz : Transfer via compressed tar archive

  • zip : Transfer via ZIP archive

The archive formats have the advantage that file permissions can be preserved better (since the copying is independent of the underlying file systems)

jkube.container-image.assembly.mode

permissions

Permission of the files to add:

  • ignore to use the permissions as found on the files, regardless of any assembly configuration

  • keep to respect the assembly provided permissions

  • exec for setting the executable bit on all files (required for Windows when using an assembly mode dir)

  • auto to let the plugin select exec on Windows and keep on others.

keep is the default value.

jkube.container-image.assembly.permissions

tarLongFileMode

Sets the TarArchiver behaviour for file paths longer than 100 characters. Valid values are: "warn" (default), "fail", "truncate", "gnu", "posix", "posix_warn" or "omit"

jkube.container-image.assembly.tarLongFileMode

user

User and/or group under which the files should be added. The user must already exist in the base image.

It has the general format user[:group[:run-user]]. The user and group can be given either as numeric user- and group-id or as names. The group id is optional.

If a third part is given, then the build changes to user root before changing the ownerships, changes the ownerships, and then changes to user run-user, which is then used for the final command to execute. This feature might be needed if the base image has already changed the user (e.g. to 'jboss'), so that a chown from root to this user would fail.

For example, the image jboss/wildfly uses a "jboss" user under which all commands are executed. Adding files in Docker always happens under the UID root. These files can only be changed to "jboss" if the chown command is executed as root. For the following commands to be run again as "jboss" (like the final standalone.sh), the plugin switches back to user jboss (this is the "run-user") after changing the file ownership. For this example a specification of jboss:jboss:jboss would be required.

jkube.container-image.assembly.user

In the event you do not need to include any artifacts with the image, you may safely omit this element from the configuration.

5.2.6. Assembly - Inline/Layer

Inlined assembly description with a format very similar to Maven Assembly Plugin.

Partial configuration example of an inline/layer element
<assembly>
  <!-- ... -->
  <layers>
    <layer>
      <id>static-files</id>
      <fileSets>
        <fileSet>
          <directory>src/static</directory>
          <outputDirectory>static</outputDirectory>
        </fileSet>
      </fileSets>
    </layer>
  </layers>
</assembly>

The layers element within the assembly element can have one or more layer elements with an XML structure that supports the following configuration options:

Table 27. Assembly - Inline/Layer (image : build : assembly )
Element Description

id

Unique ID for the layer.

files

List of files for the layer.

Each file has the following fields:

  • source: Absolute or relative path from the project’s directory of the file to be included in the assembly.

  • outputDirectory: Output directory relative to the root directory of the assembly.

  • destName: Destination filename in the outputDirectory.

  • fileMode: Similar to a UNIX permission, sets the file mode of the file included.

fileSets

List of filesets for the layer.

Each fileset has the following fields:

  • directory: Absolute or relative location from the project’s directory.

  • outputDirectory: Output directory relative to the root directory of the assembly fileSet.

  • includes: A set of files and directories to include.

    • If none is present, then everything is included.

    • Files can be referenced by using their complete path name.

    • Wildcards are also supported, patterns will be matched using FileSystem#getPathMatcher glob syntax.

  • excludes: A set of files and directories to exclude.

    • If none is present, then there are no exclusions.

    • Wildcards are also supported, patterns will be matched using FileSystem#getPathMatcher glob syntax.

  • fileMode: Similar to a UNIX permission, sets the file mode of the files included.

  • directoryMode: Similar to a UNIX permission, sets the directory mode of the directories included.

baseDirectory

Base directory from which to resolve the Assembly’s layer files and filesets.

5.2.7. Build Args

As described in the section Configuration for external Dockerfiles, Docker build args can be used. In addition to the configuration within the plugin configuration, you can also use properties to specify them:

  • Set a system property when running maven, e.g.: docker.buildArg.http_proxy=http://proxy:8001. This is especially useful when using predefined Docker arguments for setting proxies transparently.

  • Set a project property within the pom.xml, e.g.:

Example
<docker.buildArg.myBuildArg>myValue</docker.buildArg.myBuildArg>

Please note that the system property setting will always override the project property. Also note that for all properties which are not Docker predefined properties, the external Dockerfile must contain an ARG instruction.
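Putting this together, the build arg from the example above could be set on the command line as follows, with the external Dockerfile declaring it via an ARG instruction (a hedged sketch):

mvn oc:build -Ddocker.buildArg.myBuildArg=myValue

ARG myBuildArg
RUN echo "building with ${myBuildArg}"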

5.2.8. Healthcheck

Healthchecks were introduced in Docker 1.12 and are a way to tell Docker how to test a container to check that it’s still working. With a health check you specify a command which is periodically executed and checked for its return value. If the healthcheck returns with exit code 0, the container is considered healthy; if it returns with 1, the container is not working correctly.

The healthcheck configuration can have the following options:

Table 28. Healthcheck Configuration (image : build )
Element Description Property

cmd

Command to execute, which can be given in shell or exec format as described in Startup Arguments.

jkube.container-image.healthcheck.cmd

interval

Interval for how often to run the healthcheck. The time is specified in seconds, but a time unit can be appended to change this.

jkube.container-image.healthcheck.interval

mode

Mode of the healthcheck. This can be cmd (the default), which specifies that the health check command should be executed, or none to disable a health check inherited from the base image.

jkube.container-image.healthcheck.mode

retries

How many retries should be performed before the container is to be considered unhealthy.

jkube.container-image.healthcheck.retries

startPeriod

Initialization time for containers that need time to bootstrap. Probe failure during that period will not be counted towards the maximum number of retries. However, if a health check succeeds during the start period, the container is considered started and all consecutive failures will be counted towards the maximum number of retries. Given in seconds, but another time unit can be appended.

jkube.container-image.healthcheck.startPeriod

timeout

Timeout after which the healthcheck should be stopped and considered to have failed. Given in seconds, but another time unit can be appended.

jkube.container-image.healthcheck.timeout

The following example periodically queries a URL as a healthcheck:

Example
<healthCheck>
  <!-- Check every 5 minutes -->
  <interval>5m</interval>
  <!-- Fail if no response after 3 seconds -->
  <timeout>3s</timeout>
  <!-- Allow 30 minutes for the container to start before being flagged as unhealthy -->
  <startPeriod>30m</startPeriod>
  <!-- Fail 3 times until the container is considered unhealthy -->
  <retries>3</retries>
  <!-- Command to execute in shell form -->
  <cmd>curl -f http://localhost/ || exit 1</cmd>
</healthCheck>

5.2.9. Environment and Labels

When creating a container one or more environment variables can be set via configuration with the env parameter

Example
<env>
  <JAVA_HOME>/opt/jdk8</JAVA_HOME>
  <CATALINA_OPTS>-Djava.security.egd=file:/dev/./urandom</CATALINA_OPTS>
</env>

If you put this configuration into profiles you can easily create various test variants with a single image (e.g. by switching the JDK or whatever).

It is also possible to set the environment variables from the outside of the plugin’s configuration with the parameter envPropertyFile. If given, this property file is used to set the environment variables where the keys and values specify the environment variable. Environment variables specified in this file override any environment variables specified in the configuration.
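Such a property file could look like this (the keys and values are purely illustrative):

JAVA_OPTS=-Xmx512m
TZ=UTC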

Labels can be set inline the same way as environment variables:

Example
<labels>
   <com.example.label-with-value>foo</com.example.label-with-value>
   <version>${project.version}</version>
   <artifactId>${project.artifactId}</artifactId>
</labels>

5.2.10. Startup Arguments

Using entryPoint and cmd it is possible to specify the entry point or cmd for a container.

The difference is that the entrypoint is the command that is always executed, with the cmd as its argument. If no entryPoint is provided, it defaults to /bin/sh -c, so any cmd given is executed with a shell. The arguments given to docker run are always given as arguments to the entrypoint, overriding any given cmd option. On the other hand, if no extra arguments are given to docker run, the default cmd is used as argument to the entrypoint.

See this stackoverflow question for a detailed explanation.

An entry point or command can be specified in two alternative formats:

Table 29. Entrypoint and Command Configuration
Mode Description

shell

Shell form in which the whole line is given to shell -c for interpretation.

exec

List of arguments (with inner arg elements) which will be given to the exec call directly without any shell interpretation.

Either shell or exec should be specified.

Example
<entryPoint>
   <!-- shell form  -->
   <shell>java -jar $HOME/server.jar</shell>
</entryPoint>

or

Example
<entryPoint>
   <!-- exec form  -->
   <exec>
     <arg>java</arg>
     <arg>-jar</arg>
     <arg>/opt/demo/server.jar</arg>
   </exec>
</entryPoint>

This can also be formulated more densely:

Example
<!-- shell form  -->
<entryPoint>java -jar $HOME/server.jar</entryPoint>

or

Example
<entryPoint>
  <!-- exec form  -->
  <arg>java</arg>
  <arg>-jar</arg>
  <arg>/opt/demo/server.jar</arg>
</entryPoint>

INFO: Startup arguments are not used in S2I builds.

5.3. oc:push

This goal uploads images to the registry which have a <build> configuration section. The images to push can be restricted with the global option filter (see Build Goal Configuration for details). The registry to push to is by default docker.io, but it can be specified as part of the image’s name in the Docker way. E.g. docker.test.org:5000/data:1.5 will push the image data with tag 1.5 to the registry docker.test.org at port 5000. Registry credentials (i.e. username and password) can be specified in multiple ways as described in the section Authentication.

By default a progress meter is printed out on the console, which is omitted when using Maven in batch mode (option -B). A very simplified progress meter is provided when using no color output (i.e. with -Djkube.useColor=false).

Table 30. Push options
Element Description Property

skipPush

If set to true the plugin won’t push any images that have been built.

jkube.skip.push

skipTag

If set to true this plugin won’t push any tags

jkube.skip.tag

pushRegistry

The registry to use when pushing the image. See Registry Handling for more details.

jkube.docker.push.registry

retries

How often a push should be retried before giving up. This is useful for flaky registries which tend to return 500 error codes from time to time. The default is 0, which means no retry at all.

jkube.docker.push.retries
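For example, to push to a custom registry with a few retries, the options above can be combined like this (an illustrative invocation):

mvn oc:push -Djkube.docker.push.registry=docker.test.org:5000 -Djkube.docker.push.retries=3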

5.4. oc:apply

This goal applies the resources created with oc:resource to a connected Kubernetes cluster. It’s similar to oc:deploy but does not perform the full deployment cycle of creating the resources, creating the application image and sending the resource descriptors to the cluster. This goal can be easily bound to <executions> within the plugin’s configuration and binds by default to the install lifecycle phase.

mvn oc:apply

5.4.1. Supported Properties For Apply goal

Table 31. Other options available with apply goal
Element Description Property

recreate

Should we update resources by deleting them first and then creating them again.

Defaults to false.

jkube.recreate

openshiftManifest

The generated OpenShift YAML file.

Defaults to ${basedir}/target/classes/META-INF/jkube/openshift.yml.

jkube.openshiftManifest

create

Should we create new resources.

Defaults to true.

jkube.deploy.create

rolling

Should we use rolling updates to apply changes.

Defaults to false.

jkube.rolling

failOnNoKubernetesJson

Should we fail if there is no Kubernetes JSON.

Defaults to false.

jkube.deploy.failOnNoKubernetesJson

servicesOnly

In services only mode we only process services so that those can be recursively created/updated first before creating/updating any pods and replication controllers.

Defaults to false.

jkube.deploy.servicesOnly

ignoreServices

Do we want to ignore services. This is particularly useful when in recreate mode to let you easily recreate all the ReplicationControllers and Pods but leave any service definitions alone to avoid changing the portalIP addresses and breaking existing pods using the service.

Defaults to false.

jkube.deploy.ignoreServices

processTemplatesLocally

Process templates locally in Java so that we can apply OpenShift templates on any Kubernetes environment.

Defaults to false.

jkube.deploy.processTemplatesLocally

deletePods

Should we delete all the pods if we update a Replication Controller.

Defaults to true.

jkube.deploy.deletePods

ignoreRunningOAuthClients

Do we want to ignore OAuthClients which are already running? OAuthClients are shared across namespaces, so we should not try to update or create/delete global OAuth clients.

Defaults to true.

jkube.deploy.ignoreRunningOAuthClients

jsonLogDir

The folder in which to store any temporary JSON files or results.

Defaults to ${basedir}/target/jkube/applyJson.

jkube.deploy.jsonLogDir

resourceDir

Folder where to find project specific files.

Defaults to ${basedir}/src/main/jkube.

jkube.resourceDir

environment

Environment name where resources are placed. For example, if you set this property to dev and resourceDir is the default one, jkube will look at src/main/jkube/dev. Multiple environments can also be provided in form of comma separated strings. Resource fragments in these directories will be combined while generating resources.

Defaults to null.

jkube.environment

skipApply

Skip applying the resources.

Defaults to false.

jkube.skip.apply

5.4.2. Kubernetes Access Configuration

You can configure parameters to define how the plugin connects to the Kubernetes cluster instead of relying on default parameters.

<configuration>
  <access>
    <username></username>
    <password></password>
    <masterUrl></masterUrl>
    <apiVersion></apiVersion>
  </access>
</configuration>
Element Description Property

username

Username on which to operate.

jkube.username

password

Password on which to operate.

jkube.password

namespace

Namespace on which to operate.

jkube.namespace

masterUrl

Master URL on which to operate.

jkube.masterUrl

apiVersion

Api version on which to operate.

jkube.apiVersion

caCertFile

CaCert File on which to operate.

jkube.caCertFile

caCertData

CaCert Data on which to operate.

jkube.caCertData

clientCertFile

Client Cert File on which to operate.

jkube.clientCertFile

clientCertData

Client Cert Data on which to operate.

jkube.clientCertData

clientKeyFile

Client Key File on which to operate.

jkube.clientKeyFile

clientKeyData

Client Key Data on which to operate.

jkube.clientKeyData

clientKeyAlgo

Client Key Algorithm on which to operate.

jkube.clientKeyAlgo

clientKeyPassphrase

Client Key Passphrase on which to operate.

jkube.clientKeyPassphrase

currentContext

Client Kubernetes Context that is currently in use

jkube.currentContext

trustStoreFile

Trust Store File on which to operate.

jkube.trustStoreFile

trustStorePassphrase

Trust Store Passphrase on which to operate.

jkube.trustStorePassphrase

keyStoreFile

Key Store File on which to operate.

jkube.keyStoreFile

keyStorePassphrase

Key Store Passphrase on which to operate.

jkube.keyStorePassphrase
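Instead of the XML access section, these parameters can also be passed as properties on the command line, for example (the namespace and master URL are illustrative):

mvn oc:apply -Djkube.namespace=my-namespace -Djkube.masterUrl=https://api.example.com:6443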

5.5. oc:helm

This feature allows you to create Helm charts from the Kubernetes resources Eclipse JKube generates for your project. You can then use the generated charts to leverage Helm's capabilities to install, update, or delete your app in Kubernetes.

To generate the Helm chart you need to invoke the oc:helm Maven goal on the command line:

mvn oc:resource oc:helm

The oc:resource goal is required to create the resource descriptors that are included in the Helm chart. If you have already generated the resources in a previous step, you can omit this goal.

There are multiple ways to configure the generated Helm Chart:

  • By providing a Chart.helm.yaml fragment in src/main/jkube directory.

  • Through the helm section in the openshift-maven-plugin XML configuration.

When using the fragment approach, you simply need to create a Chart.helm.yaml file in the src/main/jkube directory with the fields you want to override. JKube will take care of merging this fragment with the opinionated and configured defaults.
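Such a fragment could, for example, override the chart description and home page (a minimal sketch; the values are illustrative):

Example Chart.helm.yaml fragment
description: A custom description for the generated chart
home: https://example.com/my-project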

The XML configuration is defined in a helm section within the plugin’s configuration:

Example Helm configuration
<plugin>
  <configuration>
    <helm>
      <chart>Jenkins</chart>
      <keywords>ci,cd,server</keywords>
      <dependencies>
        <dependency>
          <name>ingress-nginx</name>
          <version>1.26.0</version>
          <repository>https://kubernetes.github.io/ingress-nginx</repository>
        </dependency>
      </dependencies>
    </helm>
  </configuration>
</plugin>

This configuration section knows the following sub-elements in order to configure your Helm chart.

Table 32. Helm configuration
Element Description Property

apiVersion

The apiVersion of Chart.yaml schema, defaults to v1.

jkube.helm.apiVersion

chart

The Chart name.

Defaults to ${project.artifactId}.

jkube.helm.chart

version

The Chart SemVer version.

Defaults to ${project.version}.

jkube.helm.version

debug

Enables verbose output for Helm operations.

Defaults to false.

jkube.helm.debug

description

The Chart single-sentence description.

Defaults to ${project.description}.

jkube.helm.description

home

The Chart URL for this project’s home page.

Defaults to ${project.url}.

jkube.helm.home

sources

The Chart list of URLs to source code for this project.

Defaults to the list of ${project.scm.url}.

maintainers

The Chart list of maintainers (name+email).

Defaults to the list of ${project.developers.name}:${project.developers.email}.

icon

The Chart URL to an SVG or PNG image to be used as an icon. If not provided, the default is extracted from the jkube.eclipse.org/iconUrl annotation of the kubernetes manifest (kubernetes.yml).

jkube.helm.icon

appVersion

The version of the application that the Chart contains.

Defaults to ${project.version}.

jkube.helm.appVersion

keywords

Comma separated list of keywords to add to the chart.

engine

The template engine to use.

additionalFiles

The list of additional files to be included in the Chart archive. Any file named README or LICENSE or values.schema.json will always be included by default.

type / types

Platform for which to generate the chart. By default this is kubernetes, but it can also be openshift for using OpenShift-specific resources in the chart. You can also add both values as a comma separated list.

Please note that there is no OpenShift support yet for charts, so this is experimental.

jkube.helm.type

sourceDir

Where to find the resource descriptors generated with oc:resource.

By default, this is ${basedir}/target/classes/META-INF/jkube, which is also the output directory used by oc:resource.

jkube.helm.sourceDir

outputDir

Where to create the Helm chart, which is ${basedir}/target/jkube/helm/${chartName}/kubernetes by default for Kubernetes and ${basedir}/target/jkube/helm/${chartName}/openshift for OpenShift.

jkube.helm.outputDir

tarballOutputDir

Where to create the Helm chart archive; this is the same as outputDir if not provided.

jkube.helm.tarballOutputDir

tarFileClassifier

A string appended to the Helm archive filename as a classifier.

Defaults to an empty string.

jkube.helm.tarFileClassifier

chartExtension

The Helm chart file extension (tgz, tar.bz, tar.bzip2, tar.bz2), default value is tar.gz if not provided.

jkube.helm.chartExtension

security

The Maven security dispatcher configuration file. If you use the default security dispatcher, you need to point this to the file containing your master password. If you followed the Maven Password Encryption guide, this is ${user.home}/.m2/settings-security.xml, so that is what this setting defaults to when you do not set it explicitly.

dependencies

The list of dependencies for this chart.

parameters

The list of parameters to interpolate the Chart templates from the provided Fragments.

These parameters can represent variables, in this case the values are used to generate the values.yaml file. The fragment placeholders will be replaced with a .Values variable.

The parameters can also represent a Golang expression

Table 33. Helm Configuration - Maintainer
Element Description

name

The maintainer’s name or organization.

email

The maintainer’s contact email address.

url

The maintainer’s URL address.

Table 34. Helm Configuration - Dependency
Element Description

name

The name of the chart dependency.

version

Semantic version or version range for the dependency.

repository

URL pointing to a chart repository.

condition

Optional reference to a boolean value that toggles the inclusion of the dependency, e.g. subchart.enabled. For more information see the Helm documentation.

alias

Optional reference to the map that will be passed as the value scope for the subchart. For more information see helm documentation.

Table 35. Helm Configuration - Parameters
Element Description

name

The name of the interpolatable parameter. It is used to replace placeholders (${name}) in the provided YAML fragments and to generate the values.yaml file.

required

Set to true if this is a required value (when used to generate values).

value

In case we are generating a .Values variable, the default value.

In case the placeholder has to be replaced by an expression, the Golang expression e.g. {{ .Chart.Name | upper }}.

5.5.1. Helm-specific fragments

In addition to the standard Kubernetes resource fragments, you can also provide fragments for Helm Chart.yaml and values.yaml files.

For the Chart.yaml file you can provide a Chart.helm.yaml fragment in the src/main/jkube directory.

For the values.yaml file you can provide a values.helm.yaml fragment in the src/main/jkube directory.

These fragments will be merged with the opinionated and configured defaults; values provided in the fragments take precedence over the generated defaults.
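For example, a minimal, illustrative Chart.helm.yaml fragment sketch (the description and keywords are made up) that overrides the generated defaults could look like:

# src/main/jkube/Chart.helm.yaml
# Fields here are merged with, and take precedence over, the generated Chart.yaml
description: Custom single-sentence description overriding the generated one
keywords:
  - quickstart
  - helm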

5.5.2. Installing the generated Helm chart

As a next step, you can install the generated chart via the oc:helm-install goal as follows:

mvn oc:helm-install

In addition, this goal creates a tar archive below ${basedir}/target which contains the chart with its templates.

To have the helm goal executed automatically, add it to the executions section of the openshift-maven-plugin section of your pom.xml.

Add helm goal
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>

  <!-- ... -->

  <executions>
    <execution>
      <goals>
        <goal>resource</goal>
        <goal>helm</goal>
        <goal>build</goal>
        <goal>deploy</goal>
      </goals>
    </execution>
  </executions>
</plugin>

5.5.3. Multi-module projects

In multi-module Maven projects, some configuration default values differ from what you may expect.

Given a project with a parent module and at least one child module, if you run the helm goal within the child module, the values for home and sources will get the submodule’s artifactId appended.

This behavior is normal, since the helm goal is executed in the scope of the submodule. The Maven variables from which JKube extracts these defaults (${project.url} and ${project.scm.url}) already contain the appended submodule’s artifactId.

In order to prevent this, there are several alternatives:

Manual configuration

Provide the configuration manually for these values:

<plugin>
  <configuration>
    <helm>
      <home>https://valid-home-with-no-appended-values.example.com</home>
      <sources>
          <source>https://github.com/valid-repo/with-no-appended-values</source>
      </sources>
    </helm>
  </configuration>
</plugin>
Manual configuration using properties in child module

Following the previous approach, if you don’t want to hardcode the values, or if you already defined them in the parent module you can proceed with the following configuration in the child module:

<properties>
    <!-- ... -->
    <helm.home>${project.parent.url}</helm.home>
    <helm.source>${project.parent.scm.url}</helm.source>
</properties>
<!-- ... -->
<plugin>
  <configuration>
    <helm>
      <home>${helm.home}</home>
      <sources>
          <source>${helm.source}</source>
      </sources>
    </helm>
  </configuration>
</plugin>
Configure inheritance in parent project for the affected elements

Configure inheritance of the project and scm elements in the parent module:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"
         child.project.url.inherit.append.path="false"
         >
<!-- ... -->
    <url>https://jkube.example.com</url>
    <scm child.scm.url.inherit.append.path="false">
        <url>https://github.com/eclipse-jkube/jkube</url>
    </scm>
<!-- ... -->
</project>

5.5.4. Uninstalling the installed Helm release

You can uninstall an installed Helm release from Kubernetes via the oc:helm-uninstall goal as follows:

mvn oc:helm-uninstall

5.6. oc:helm-push

This feature allows you to upload your Eclipse JKube-generated Helm charts to one of the supported repositories: Artifactory, Chartmuseum, Nexus, and OCI.

To publish a Helm chart you need to invoke the oc:helm-push Maven goal on the command line:

mvn oc:resource oc:helm oc:helm-push
The oc:resource and the oc:helm goals are required to create the resource descriptors which are included in the Helm chart and the Helm chart itself. If you have already built the resource and created the chart, then you can omit these goals.

The configuration is defined in a helm section within the plugin’s configuration:

Example Helm configuration
<plugin>
  <configuration>
    <helm>
      <chart>Jenkins</chart>
      <keywords>ci,cd,server</keywords>
      <stableRepository>
        <name>stable-repo-id</name>
        <url>https://stable-repo-url</url>
        <type>ARTIFACTORY</type>
      </stableRepository>
      <snapshotRepository>
        <name>snapshot-repo-id</name>
        <url>https://snapshot-repo-url</url>
        <type>ARTIFACTORY</type>
      </snapshotRepository>
    </helm>
  </configuration>
</plugin>

You can provide Helm repository authentication credentials via properties, environment variables, or the Maven settings. For the latter, you just need to add a server entry for your repository like this:

Helm Repository Authentication credentials in settings.xml
<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <servers>
    <server>
      <id>snapshot-repo-id</id>
      <username>admin</username>
      <password>secret</password>
    </server>
  </servers>

</settings>
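Alternatively, a sketch of passing the same credentials as properties on the command line, using the property names from the repository tables below:

mvn oc:helm-push \
  -Djkube.helm.snapshotRepository.username=admin \
  -Djkube.helm.snapshotRepository.password=secret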

If you have encrypted your password with a master password (as outlined in the Maven Password Encryption guide), your master password is stored encrypted in a file. By default this is ~/.m2/settings-security.xml. If you’ve chosen another location, make sure to configure the security setting:

<plugin>
  <configuration>
    <helm>
      <security>~/work/.m2/work-security-settings.xml</security>
      ...
    </helm>
  </configuration>
</plugin>

This configuration section supports the following sub-elements in order to configure your Helm chart repositories.

Table 36. Helm configuration
Element Description Property

stableRepository

The configuration of the stable helm repository (see Helm stable repository configuration).

snapshotRepository

The configuration of the snapshot helm repository (see Helm snapshot repository configuration).

Table 37. Helm stable repository configuration
Element Description Property

name

The name (id) of the server configuration. The Maven server with this ID is used to look up credentials.

jkube.helm.stableRepository.name

url

The url of the server.

jkube.helm.stableRepository.url

username

The username of the repository. Optional. If a maven server ID is specified, the username is taken from there.

jkube.helm.stableRepository.username

password

The password of the repository. Optional. If a maven server ID is specified, the password is taken from there.

jkube.helm.stableRepository.password

type

The type of the repository. One of ARTIFACTORY, NEXUS, CHARTMUSEUM, OCI

jkube.helm.stableRepository.type

Table 38. Helm snapshot repository configuration
Element Description Property

name

The name (id) of the server configuration. The Maven server with this ID is used to look up credentials.

jkube.helm.snapshotRepository.name

url

The url of the server.

jkube.helm.snapshotRepository.url

username

The username of the repository. Optional. If a maven server ID is specified, the username is taken from there.

jkube.helm.snapshotRepository.username

password

The password of the repository. Optional. If a maven server ID is specified, the password is taken from there.

jkube.helm.snapshotRepository.password

type

The type of the repository. One of ARTIFACTORY, NEXUS, CHARTMUSEUM

jkube.helm.snapshotRepository.type

To have the helm-push goal executed automatically, add it to the executions section of the openshift-maven-plugin section of your pom.xml.

Add helm-push goal
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>

  <!-- ... -->

  <executions>
    <execution>
      <goals>
        <goal>resource</goal>
        <goal>helm</goal>
        <goal>build</goal>
        <goal>deploy</goal>
        <goal>helm-push</goal>
      </goals>
    </execution>
  </executions>
</plugin>

5.7. oc:helm-lint

This feature allows you to lint your Eclipse JKube-generated Helm charts and examine them for possible issues.

It provides the same output as the helm lint command.

To lint a Helm chart you need to invoke the oc:helm-lint Maven goal on the command line:

mvn oc:resource oc:helm oc:helm-lint
The oc:resource and the oc:helm goals are required to create the resource descriptors which are included in the Helm chart and the Helm chart itself. If you have already built the resource and created the chart, then you can omit these goals.
Table 39. Helm lint configuration
Element Description Property

lintStrict

Enable strict mode, fails on lint warnings.

jkube.helm.lint.strict

lintQuiet

Enable quiet mode, only shows warnings and errors.

jkube.helm.lint.quiet

Example Helm lint configuration
<plugin>
  <configuration>
    <helm>
      <lintStrict>true</lintStrict>
      <lintQuiet>true</lintQuiet>
    </helm>
  </configuration>
</plugin>

5.8. oc:helm-dependency-update

This feature allows you to update the dependencies of your Eclipse JKube-generated Helm charts.

It provides the same output as the helm dependency update command.

To update on-disk dependencies of a Helm chart you need to invoke the oc:helm-dependency-update Maven goal on the command line:

mvn oc:resource oc:helm oc:helm-dependency-update
The oc:resource and the oc:helm goals are required to create the resource descriptors which are included in the Helm chart and the Helm chart itself. If you have already built the resource and created the chart, then you can omit these goals.
Table 40. Helm Dependency Update configuration
Element Description Property

dependencyVerify

Verify the packages against signatures.

jkube.helm.dependencyVerify

dependencySkipRefresh

Do not refresh the local repository cache.

jkube.helm.dependencySkipRefresh

Example Helm Dependency Update configuration
<plugin>
  <configuration>
    <helm>
      <dependencies>
        <dependency>
          <name>foo</name>
          <version>0.0.1</version>
          <repository>https://charts.example.com/test-chart</repository>
        </dependency>
      </dependencies>

      <debug>true</debug>
      <dependencySkipRefresh>true</dependencySkipRefresh>
      <dependencyVerify>true</dependencyVerify>
    </helm>
  </configuration>
</plugin>

5.9. oc:helm-install

This feature allows you to install your Eclipse JKube-generated Helm charts.

To install a Helm chart you need to invoke the oc:helm-install Maven goal on the command line:

mvn oc:resource oc:helm oc:helm-install
The oc:resource and the oc:helm goals are required to create the resource descriptors which are included in the Helm chart and the Helm chart itself. If you have already built the resource and created the chart, then you can omit these goals.
Table 41. Helm Install configuration
Element Description Property

releaseName

Name of Helm Release (instance of a chart running in a Kubernetes cluster).

jkube.helm.release.name

installDependencyUpdate

Update dependencies if they are missing before installing the chart.

jkube.helm.install.dependencyUpdate

installWaitReady

If set, waits until all Pods, PVCs, Services, and the minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful.

jkube.helm.install.waitReady

Example Helm Install configuration
<plugin>
  <configuration>
    <helm>
      <releaseName>test-release</releaseName>
      <installDependencyUpdate>false</installDependencyUpdate>
      <installWaitReady>false</installWaitReady>
    </helm>
  </configuration>
</plugin>

5.10. oc:helm-uninstall

This feature allows you to remove a Helm release from Kubernetes.

To uninstall a Helm release you need to invoke the oc:helm-uninstall Maven goal on the command line:

mvn oc:resource oc:helm oc:helm-install oc:helm-uninstall
The oc:resource, oc:helm and oc:helm-install goals are required to ensure that the Helm release gets installed in Kubernetes. If you already have the Helm release installed on Kubernetes, then you can omit these goals.
Table 42. Helm Uninstall configuration
Element Description Property

releaseName

Name of Helm Release (instance of a chart running in a Kubernetes cluster).

jkube.helm.release.name

Example Helm Uninstall configuration
<plugin>
  <configuration>
    <helm>
      <releaseName>test-release</releaseName>
    </helm>
  </configuration>
</plugin>

6. Development Goals

6.1. oc:deploy

This is the main goal for building your Docker image, generating the Kubernetes resources and deploying them into the cluster (provided your pom.xml is set up correctly; keep reading :)).

mvn oc:deploy

This goal is designed to run oc:build and oc:resource before the deploy if you have the goals bound in your pom.xml:

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <version>1.17.0</version>

  <!-- Connect oc:resource, oc:build and oc:helm to lifecycle phases -->
  <executions>
    <execution>
       <id>jkube</id>
       <goals>
         <goal>resource</goal>
         <goal>build</goal>
         <goal>helm</goal>
       </goals>
    </execution>
  </executions>
</plugin>

Effectively, this builds your project and then invokes the goals bound above.

By default, the resource goal generates a route.yml for a service unless you have made configuration changes. Sometimes you may want to generate route.yml but not create a Route resource on the OpenShift cluster. This can be achieved with the following configuration.

Example for not generating route resource on your cluster
<plugin>
    <groupId>org.eclipse.jkube</groupId>
    <artifactId>openshift-maven-plugin</artifactId>
    <version>1.17.0</version>
    <configuration>
        <enricher>
            <excludes>
                <exclude>jkube-expose</exclude>
            </excludes>
        </enricher>
    </configuration>
</plugin>

6.2. oc:undeploy

This goal is for deleting the Kubernetes resources that you deployed via the oc:apply or oc:deploy goals.

It iterates through all the resources generated by the oc:resource goal and deletes them from your current kubernetes cluster.

mvn oc:undeploy

6.2.1. Supported Properties For Undeploy Goal

Table 43. Options available with undeploy goal
Element Description Property

skipUndeploy

Skip Undeploying Kubernetes resources.

Defaults to false.

jkube.skip.undeploy

6.3. oc:log

This goal tails the log of the app that you deployed via the oc:deploy goal.

mvn oc:log

You can then terminate the output by hitting Ctrl+C.

If you wish to get the log of the app and then terminate immediately then try:

mvn oc:log -Djkube.log.follow=false

This lets you pipe the output into grep or some other tool:

mvn oc:log -Djkube.log.follow=false | grep Exception

If your app is running in multiple pods you can configure the pod name to log via the jkube.log.pod property, otherwise it defaults to the latest pod:

mvn oc:log -Djkube.log.pod=foo

If your pod has multiple containers you can configure the container name to log via the jkube.log.container property, otherwise it defaults to the first container:

mvn oc:log -Djkube.log.container=foo
Example XML configuration for log goal
<plugin>
  <configuration>
    <logFollow>true</logFollow>
    <logContainer>container</logContainer>
    <logPod>pod</logPod>
  </configuration>
</plugin>

6.3.1. Supported Properties for Log goal

Table 44. Options available with log goal
Element Description Property

logFollow

Whether to follow (tail) the log of your application running inside Kubernetes.

Defaults to true.

jkube.log.follow

logContainer

Get the logs of a specific container inside your application Deployment.

Defaults to null.

jkube.log.container

logPod

Get the logs of a specific pod inside your application Deployment.

Defaults to null.

jkube.log.pod

6.4. oc:debug

This goal enables debugging in your Java app and then port forwards from localhost to the latest running pod of your app so that you can easily debug your app from your Java IDE.

mvn oc:debug

Then follow the on-screen instructions.

The default debug port is 5005. If you wish to change the local port to use for debugging then pass in the jkube.debug.port parameter:

mvn oc:debug -Djkube.debug.port=8000

Then, in your IDE, start a remote debug session against localhost on this port, and you should be able to set breakpoints and step through your code.

This lets you debug your apps while they are running inside a Kubernetes cluster - for example if you wish to debug a REST endpoint while another pod is invoking it.

Debug is enabled via the JAVA_ENABLE_DEBUG environment variable being set to true. This environment variable is used for all the standard Java docker images used by Spring Boot, Quarkus, flat classpath and executable JAR projects. If you use your own custom Docker base image, you may wish to respect this environment variable too in order to enable debugging.

6.4.1. Speeding up debugging

By default the oc:debug goal has to edit your Deployment to enable debugging and then wait for a pod to start. During development you may want to debug frequently, so it helps to speed this up a bit.

If so you can enable debug mode for each build via the jkube.debug.enabled property.

e.g. you can pass this property on the command line:

mvn oc:deploy -Djkube.debug.enabled=true

Or you can add something like this to your ~/.m2/settings.xml file so that debug mode is enabled for all Maven builds on your laptop by using a profile:

<?xml version="1.0"?>
<settings>
  <profiles>
    <profile>
      <id>enable-debug</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <properties>
        <jkube.debug.enabled>true</jkube.debug.enabled>
      </properties>
    </profile>
  </profiles>
</settings>

Then whenever you run the oc:debug goal there is no need for the Maven goal to edit the Deployment and wait for a pod to restart; debugging starts immediately when you type:

mvn oc:debug

6.4.2. Debugging with suspension

The oc:debug goal allows you to attach a remote debugger to a running container, but the application is free to execute when the debugger is not attached. In some cases, you may want to have complete control over the execution, e.g. to investigate the application behavior at startup. This can be done using the jkube.debug.suspend flag:

mvn oc:debug -Djkube.debug.suspend

The suspend flag will set the JAVA_DEBUG_SUSPEND environment variable to true and JAVA_DEBUG_SESSION to a random number in your deployment. When the JAVA_DEBUG_SUSPEND environment variable is set, standard docker images will use suspend=y in the JVM startup options for debugging.

The JAVA_DEBUG_SESSION environment variable is always set to a random number (each time you run the debug goal with the suspend flag) in order to tell Kubernetes to restart the pod. The remote application will start only after a remote debugger is attached. You can use the remote debugging feature of your IDE to connect (on localhost, port 5005 by default).

The jkube.debug.suspend flag will disable readiness probes in the Kubernetes deployment in order to start port-forwarding during the early phases of application startup.

6.4.3. Supported Properties For Debug Goal

Table 45. Options available with debug goal
Element Description Property

debugPort

Default port available for debugging your application inside Kubernetes.

Defaults to 5005.

jkube.debug.port

debugSuspend

Disables readiness probes in Kubernetes Deployment in order to start port forwarding during early phases of application startup.

Defaults to false.

jkube.debug.suspend

6.5. oc:remote-dev (preview)

Eclipse JKube Remote Development allows you to run and debug code in your local machine:

  • While connected to and consuming services that are only available in your cluster

  • While exposing your locally running application to other Pods and services running on your cluster

Remote Development features
  • Expose your local application to the cluster

  • Consume cluster services locally without having to expose them to the Internet

  • Connect your local toolset to the cluster services

  • Simple configuration

  • No tools required

  • No special or super-user permissions required in the local machine

  • No special features required in the cluster (should work on any kind of Kubernetes flavor)

  • Boosts your inner-loop developer experience when combined with live-reload frameworks such as Quarkus

6.5.1. Project configuration

The remote development configuration must be provided within the remoteDevelopment configuration element for the project.

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <version>1.17.0</version>
  <configuration>
    <remoteDevelopment>
      <localServices>
        <localService>
          <serviceName>my-local-service</serviceName> (1)
          <port>8080</port>  (2)
        </localService>
      </localServices>
      <remoteServices>
        <remoteService>
          <hostname>postgresql</hostname>  (3)
          <port>5432</port>  (4)
        </remoteService>
        <remoteService>
          <hostname>rabbit-mq</hostname>
          <port>5672</port>
          <localPort>15672</localPort>  (5)
        </remoteService>
      </remoteServices>
    </remoteDevelopment>
  </configuration>
</plugin>
1 Name of the service to be exposed in the cluster; the local application will be accessible in the cluster through this hostname/service
2 Port where the local application is listening for connections
3 Name of a cluster service that will be forwarded and exposed locally
4 Port where the cluster service listens for connections (by default, the same port will be used to expose the service locally)
5 Optional port where the cluster service will be exposed locally
Starting the remote development session
$ mvn oc:remote-dev
Table 46. Options available for the remoteDevelopment configuration element
Element Description

localServices

The list of local services to expose in the cluster.

remoteServices

The list of cluster services to expose locally.

Table 47. Options available for the localServices configuration element
Element Description

serviceName

The name of the service that will be created/hijacked in the cluster.

type

The type of service to create (defaults to ClusterIP).

port

The service port, must match the port where the local application is listening for connections.

Table 48. Options available for the remoteServices configuration element
Element Description

hostname

The name of the cluster service whose port will be forwarded to the local machine.

port

The port where the cluster service is listening for connections.

localPort

(Optional) The port where the cluster service will be exposed locally. If not specified, the same port will be used.

6.6. oc:watch

This goal is used to monitor the project workspace for changes and automatically trigger a redeploy of the application running on Kubernetes. There are two kinds of watchers present at the moment:

  • Docker Image Watcher (watches Docker images)

  • Spring Boot Watcher (based on Spring Boot Devtools)

Before entering the watch mode, this goal must generate the docker image and the Kubernetes resources (optionally including some development libraries/configuration), and deploy the app on Kubernetes. Lifecycle bindings should be configured as follows to allow the generation of such resources.

Lifecycle bindings for oc:watch
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>

  <!-- ... -->

  <executions>
    <execution>
      <goals>
        <goal>resource</goal>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
</plugin>

For any application having resource and build goals bound to the lifecycle, the following command can be used to run the watch task.

mvn oc:watch

This plugin supports different watcher providers, enabled automatically if the project satisfies certain conditions.

Watcher providers can also be configured manually. The Generator example is a good blueprint; simply replace <generator> with <watcher>. The configuration is structurally identical.
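For illustration, a minimal sketch that selects only the Spring Boot watcher described below, mirroring the generator configuration structure:

<plugin>
  <configuration>
    <watcher>
      <includes>
        <include>spring-boot</include>
      </includes>
    </watcher>
  </configuration>
</plugin>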

6.6.1. Spring Boot

This watcher is enabled by default for all Spring Boot projects. It performs the following actions:

  • deploys your application with Spring Boot DevTools enabled

  • tails the log of the latest running pod for your application

  • watches the local development build of your Spring Boot based application and then triggers a reload of the application when there are changes

You need to make sure that devtools is included in the repacked archive, as shown in the following listing:

<plugin>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-maven-plugin</artifactId>
  <configuration>
    <excludeDevtools>false</excludeDevtools>
  </configuration>
</plugin>

Then you need to set a spring.devtools.remote.secret in application.properties, as shown in the following example:

spring.devtools.remote.secret=mysecret
Spring devtools automatically ignores projects named spring-boot, spring-boot-devtools, spring-boot-autoconfigure, spring-boot-actuator, and spring-boot-starter

You can try it on any spring boot application via:

mvn oc:watch

Once the goal has started the Spring Boot RemoteSpringApplication, it will watch for local development changes.

e.g. if you edit the java code of your app and then build it via something like this:

mvn package

You should see your app reload on the fly in the shell running the oc:watch goal!

There is also support for LiveReload.

6.6.2. Docker Image

This is a generic watcher that can be used in Kubernetes mode only. Once activated, it listens for changes in the project workspace in order to trigger a redeploy of the application. This enables rebuilding of images and restarting of containers in case of updates.

There are five watch modes, which can be specified in multiple ways (see the example after this list):

  • build: Automatically rebuild one or more Docker images when one of the files selected by an assembly changes. This works for all files included in assembly.

  • run: Automatically restart your application when its associated image changes.

  • copy: Copy changed files into the running container. This is the fastest way to update a container; however, the target container must support hot deployment for this to make sense. Most application servers, like Tomcat, support this.

  • both: Enables both build and run. This is the default.

  • none: Image is completely ignored for watching.
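For example, assuming you only want changed files copied into the running container, the mode can be selected via the jkube.watch.mode property (see the options table below):

mvn oc:watch -Djkube.watch.mode=copy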

The watcher can be activated e.g. by running this command in another shell:

mvn package

The watcher will detect that the binary artifact has changed and will first rebuild the docker image, then start a redeploy of the Kubernetes pod.

It uses the watch feature of the docker-maven-plugin under the hood.

6.6.3. Supported Properties for Watch goal

Table 49. Options available with watch goal
Element Description Property

kubernetesManifest

The generated kubernetes YAML file.

Defaults to ${basedir}/target/classes/META-INF/jkube/kubernetes.yml.

jkube.kubernetesManifest

watchMode

How to watch for image changes.

  • copy: Copy watched artifacts into container

  • build: Build only images

  • run: Run images

  • both: Build and run images

  • none: Neither build nor run

Defaults to both.

jkube.watch.mode

watchInterval

Interval in milliseconds (how often to check for changes).

Defaults to 5000.

jkube.watch.interval

watchPostGoal

A maven goal which should be called if a rebuild or a restart has been performed.

This goal must have the format <pluginGroupId>:<pluginArtifactId>:<goal> and the plugin must be configured in the pom.xml.

For example, a post-goal com.example:group:delete-pods will trigger the delete-pods goal of this hypothetical example.

jkube.watch.postGoal

watchPostExec

A command which is executed within the container after files are copied into this container when watchMode is copy. Note that this container must be running.

jkube.watch.postExec

7. Generators

The usual way to define Docker images is with the plugin configuration as explained in oc:build. This can either be done completely within the pom.xml or by referring to an external Dockerfile. Since openshift-maven-plugin includes docker-maven-plugin the way by which images are built is identical.

However, this plugin provides an additional route for defining image configurations: so-called Generators. A generator is a Java component providing an auto-detection mechanism for certain build types, like a Spring Boot build or a plain Java build. As soon as a Generator detects that it is applicable, it is called with the list of images configured in the pom.xml. Typically a generator dynamically creates a new image configuration only if this list is empty, but a generator is also free to add new images to an existing list or even change the current image list.

You can easily create your own generator as explained in Generator API. This section will focus on existing generators and how you can configure them.

The included Generators are enabled by default, but you can easily disable them or select only a certain set of generators. Each generator has a unique name.

The generator configuration is embedded in a generator configuration section:

Example for a generator configuration
<plugin>
  ....
  <configuration>
    ....
    <generator> (1)
      <includes> (2)
        <include>spring-boot</include>
      </includes>
      <config> (3)
        <spring-boot> (4)
          <alias>ping</alias>
        </spring-boot>
      </config>
    </generator>
  </configuration>
</plugin>
1 Start of generators' configuration.
2 Generators can be included and excluded. Includes have precedence, and the generators are called in the given order.
3 Configuration for individual generators.
4 The config is a map of supported config values. Each section is embedded in a tag named after the generator. The following sub-elements are supported:
Table 50. Generator configuration
Element Description

includes

Contains one or more include elements with generator names which should be included. If given, only the generators in this list are included, in the given order. The order is important because by default only the first matching generator kicks in. The generators from every active profile are included, too. However, the generators listed here are moved to the front of the list, so that they are called first. Use the profile raw if you want to explicitly set the complete list of generators.

excludes

Holds one or more exclude elements with generator names to exclude. If set then all detected generators are used except the ones mentioned in this section.

config

Configuration for all generators. Each generator supports a specific set of configuration values as described in its documentation. The sub-elements of this section are the generator names to configure. E.g. for the generator spring-boot, the sub-element is called spring-boot. This element then holds the specific generator configuration, like name for specifying the final image name. See above for an example. Configuration coming from profiles is merged into this config, but does not override the configuration specified here.

Beside specifying generator configuration in the plugin’s configuration it can be set directly with properties, too:

Example generator property config
mvn -Djkube.generator.java-exec.webPort=8082

The general scheme is a prefix jkube.generator. followed by the unique generator name and then the generator specific key.

In addition to the provided default Generators described in the next section Default Generators, custom generators can be easily added. There are two ways to include generators:

Plugin dependency

You can declare the JARs holding the generators as dependencies of this plugin, as shown in this example:

<plugin>
  <artifactId>openshift-maven-plugin</artifactId>
  ....
  <dependencies>
    <dependency>
      <groupId>io.acme</groupId>
      <artifactId>mygenerator</artifactId>
      <version>1.0</version>
    </dependency>
  </dependencies>
</plugin>
Compile time dependency

Alternatively, if your application code comes with a custom generator, you can set the global configuration option useProjectClasspath (property: jkube.useProjectClasspath) to true. In this case the project artifact and its dependencies are also searched for Generators. See Generator API for details on how to write your own generators.
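For example, a sketch enabling this option as a Maven property in your pom.xml:

<properties>
  <!-- Also search the project artifact and its dependencies for Generators -->
  <jkube.useProjectClasspath>true</jkube.useProjectClasspath>
</properties>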

7.1. Default Generators

All default generators examine the build information for certain aspects and generate a Docker build configuration on the fly. They can be configured to a certain degree, where the configuration is generator specific.

Table 51. Default Generators
Generator Name Description

Simple Dockerfile

dockerfile-simple

Generator for creating an image when the user places a Dockerfile in the project base directory.

Java Applications

java-exec

Generic generator for flat classpath and fat-jar Java applications

Spring Boot

spring-boot

Spring Boot specific generator

Thorntail v2

thorntail-v2

Generator for Thorntail v2 apps

Vert.x

vertx

Generator for Vert.x applications

Web applications

webapps

Generator for WAR based applications supporting Tomcat, Jetty and Wildfly base images

Quarkus

quarkus

Generator for Quarkus based applications

Open Liberty

openliberty

Generator for Open Liberty applications

Micronaut

micronaut

Generator for Micronaut based applications

Karaf

karaf

Generator for Karaf based apps

WildFly Bootable JAR

wildfly-jar

Generator for WildFly Bootable JAR applications

Helidon

helidon

Generator for Helidon based apps

There are some configuration options which are shared by all generators:

Table 52. Common generator options
Element Description Property

add

When set to true, the generator adds to an existing image configuration. By default this is disabled, so that a generator only kicks in when there are no other image configurations in the build, which are either configured directly for an oc:build or already added by a generator which has been run previously.

jkube.generator.add

alias

An alias name for referencing this image in various other parts of the configuration. This is also used in the log output. The default alias name is the name of the generator.

jkube.generator.alias

from

This is the base image from which to start when creating the images. By default, the generators make an opinionated decision for the base image, which is described in the respective generator section.

jkube.generator.from

fromMode

When using OpenShift S2I builds, the base image can be either a plain Docker image (mode: docker) or a reference to an ImageStreamTag (mode: istag). In the case of an ImageStreamTag, from has to be specified in the form namespace/image-stream:tag. The mode only takes effect when running in OpenShift mode.

jkube.generator.fromMode

labels

A comma separated list of additional labels you want to set on your image.

jkube.generator.labels

name

The Docker image name used when doing Docker builds. For OpenShift S2I builds it’s the name of the image stream. This can be a pattern as described in Name Placeholders. The default is %g/%a:%l. Note that this flag only works when you’re using the opinionated image configuration provided by generators; if generators are not applicable for your project configuration, this flag won’t work.

jkube.generator.name

registry

An optional Docker registry used when doing Docker builds. It has no effect for OpenShift S2I builds.

jkube.generator.registry

tags

A comma separated list of additional tags you want to tag your image with.

jkube.generator.tags

buildpacksBuilderImage

Configures the BuildPacks builder OCI image for a BuildPacks build. This field is only applicable with the buildpacks build strategy. Defaults to paketobuildpacks/builder:base.

jkube.generator.buildpacksBuilderImage

When used as properties they can be directly referenced with the property names above.

7.1.1. Simple Dockerfile

The Simple Dockerfile generator is responsible for creating an opinionated image configuration when the user places a Dockerfile in the project’s base directory.

This generator gets activated when these conditions are met:

  • A Dockerfile is placed in the project’s base directory

  • Either an image configuration is not provided, or the provided image configuration does not have a build configured.

An image built with this configuration will use the Dockerfile for the Docker build and the project base directory as the Docker context directory.

7.1.2. Java Applications

One of the most generic Generators is the java-exec generator. It is responsible for starting up arbitrary Java applications. It knows how to deal with fat-jar applications, where the application and all dependencies are included within a single jar and the MANIFEST.MF within the jar references a main class. It also handles flat classpath applications, where the dependencies are separate jar files and a main class is given.

If no main class is explicitly configured, the plugin first attempts to locate a fat jar. If the Maven build creates a JAR file with a META-INF/MANIFEST.MF containing a Main-Class entry, then this is considered to be the fat jar to use. If there is more than one such file, the largest one is used.

If a main class is configured (see below), then the image configuration will contain the application jar plus all dependency jars. If no main class is configured and no fat jar is detected, then this Generator tries to detect a single main class by searching for public static void main(String args[]) among the application classes. If exactly one class is found, this is considered to be the main class. If none or more than one is found, the Generator does nothing.
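If you want to skip the auto-detection, the main class can be pinned explicitly via the jkube.generator.java-exec.mainClass property from the table below; a sketch with a hypothetical class name:

mvn oc:build -Djkube.generator.java-exec.mainClass=com.example.Main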

It will use the following base image by default, but as explained above this can be changed with the from configuration.

Table 53. Java Base Images
Docker Build S2I Build ImageStream

Community

quay.io/jkube/jkube-java

quay.io/jkube/jkube-java

jkube-java

These images always refer to the latest tag.

When a fromMode of istag is used to specify an ImageStreamTag and no from is given, then the ImageStreamTag jkube-java in the namespace openshift is chosen by default. By default, fromMode = "docker", which uses a plain Docker image reference for the S2I builder image.
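For example, a sketch configuring an ImageStreamTag base image via Maven properties (the namespace/image-stream:tag reference is illustrative):

<properties>
  <jkube.generator.fromMode>istag</jkube.generator.fromMode>
  <jkube.generator.from>openshift/jkube-java:latest</jkube.generator.from>
</properties>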

Beside the common configuration parameters described in the table common generator options the following additional configuration options are recognized:

Table 54. Java Application configuration options
Element Description Property

targetDir

Directory within the generated image where to put the detected artifacts into. Change this only if the base image is changed, too.

Defaults to /deployments.

jkube.generator.java-exec.targetDir

jolokiaPort

Port of the Jolokia agent exposed by the base image. Set this to 0 if you don’t want to expose the Jolokia port.

Defaults to 8778.

jkube.generator.java-exec.jolokiaPort

mainClass

Main class to call. If not given first a check is performed to detect a fat-jar (see above).

Next a class is looked up by scanning target/classes for a single class with a main method.

If no such class is found or if more than one is found, then this generator does nothing.

jkube.generator.java-exec.mainClass

prometheusPort

Port of the Prometheus jmx_exporter exposed by the base image. Set this to 0 if you don’t want to expose the Prometheus port.

Defaults to 9779.

jkube.generator.java-exec.prometheusPort

webPort

Port to expose as service, which is supposed to be the port of a web application. Set this to 0 if you don’t want to expose a port.

Defaults to 8080.

jkube.generator.java-exec.webPort

The exposed ports are typically later on used by Enrichers to create default Kubernetes or OpenShift services.

You can add additional files to the target image within baseDir by placing files into src/main/jkube-includes. These will be added with mode 0644, while everything in src/main/jkube-includes/bin will be added with 0755.

7.1.3. Spring Boot

This generator is called spring-boot and gets activated when it finds a spring-boot-maven-plugin in the pom.xml.

This generator is based on the Java Application Generator and inherits all of its configuration values. The generated container port is read from the server.port property in application.properties, defaulting to 8080 if it is not found. It also uses the same default images as the java-exec Generator.

Beside the common generator options and the java-exec options the following additional configuration is recognized:

Table 55. Spring-Boot configuration options
Element Description Property

color

If set, force the use of color in the Spring Boot console output.

jkube.generator.spring-boot.color

The generator adds Kubernetes liveness and readiness probes pointing to either the management or server port, as read from application.properties. The probes are automatically set to use https if the management.port (Spring Boot 1) or management.server.port (Spring Boot 2) and management.ssl.key-store (Spring Boot 1) or management.server.ssl.key-store (Spring Boot 2) properties are set in application.properties, or otherwise if the server.ssl.key-store property is set.

The generator works differently when called together with oc:watch. In that case it enables support for Spring Boot Developer Tools which allows for hot reloading of the Spring Boot app. In particular, the following steps are performed:

  • If a secret token is not provided within the Spring Boot application configuration application.properties or application.yml with the key spring.devtools.remote.secret then a custom secret token is created and added to application.properties

  • Add spring-boot-devtools.jar as BOOT-INF/lib/spring-devtools.jar to the spring-boot fat jar.

Since during oc:watch the application itself within the target/ directory is modified to allow easy reloading, you must ensure that you do a mvn clean before building an artifact which is to be put into production.

Since released versions are typically generated by a CI system which does a clean build anyway, this should only be a theoretical problem.

7.1.4. Thorntail v2

The Thorntail v2 generator detects a Thorntail v2 build and disables the Prometheus Java agent because of this issue.

Otherwise, this generator is identical to the java-exec generator. It supports the common generator options and the java-exec options.

7.1.5. Vert.x

The Vert.x generator detects an application using Eclipse Vert.x. It generates the metadata to start the application as a fat jar.

Currently, this generator is enabled if:

  • vertx-core is included in the project dependencies, or

  • the vertx-maven-plugin is used in the build section

Otherwise, this generator is identical to the java-exec generator. It supports the common generator options and the java-exec options.

The generator automatically:

  • Enables metrics and JMX publishing of the metrics when io.vertx:vertx-dropwizard-metrics is in the project’s classpath / dependencies.

  • Enables clustering when a Vert.x cluster manager is available in the project’s classpath / dependencies. This is done by appending -cluster to the command line.

  • Forces the IPv4 stack when vertx-infinispan is used.

  • Disables the async DNS resolver to fall back to the regular JVM DNS resolver.

You can pass application parameters by setting the JAVA_ARGS env variable. You can pass system properties either using the same variable or using JAVA_OPTIONS. For instance, create src/main/jkube/deployment.yml with the following content to configure JAVA_ARGS:

spec:
  template:
    spec:
      containers:
        - env:
            - name: JAVA_ARGS
              value: "-Dfoo=bar -cluster -instances=2"

7.1.6. Web Applications

The webapp generator tries to detect WAR builds and selects a base servlet container image based on the configuration found in the pom.xml:

  • A Tomcat base image is selected by default, when a tomcat7-maven-plugin or tomcat8-maven-plugin is present, or when a META-INF/context.xml is found in the classes directory.

  • A Jetty base image is selected when a jetty-maven-plugin is present or one of the files WEB-INF/jetty-web.xml or WEB-INF/jetty-logging.properties is found.

  • A Wildfly base image is chosen for a given jboss-as-maven-plugin or wildfly-maven-plugin or when a Wildfly specific deployment descriptor like jboss-web.xml is found.

The base images chosen are:

Table 56. Webapp Base Images
Docker Build S2I Build

Tomcat

quay.io/jkube/jkube-tomcat

quay.io/jkube/jkube-tomcat

Jetty

quay.io/jkube/jkube-jetty9

quay.io/jkube/jkube-jetty9

Wildfly

jboss/wildfly

quay.io/wildfly/wildfly-centos7

In addition to the common generator options this generator can be configured with the following options:

Table 57. Webapp configuration options
Element Description Property

server

Fixes the server to use in the base image. Can be either tomcat, jetty or wildfly.

jkube.generator.webapp.server

targetDir

Where to put the war file into the target image. By default, it’s selected by the base image chosen but can be overwritten with this option.

Defaults to /deployments.

jkube.generator.webapp.targetDir

user

User and/or group under which the files should be added. The syntax of this option is described in Assembly Configuration.

jkube.generator.webapp.user

path

Context path with which the application can be reached by default.

Defaults to / (root context).

jkube.generator.webapp.path

cmd

Command to use to start the container. By default, the base image’s startup command is used.

jkube.generator.webapp.cmd

ports

Comma separated list of ports to expose in the image, which are eventually translated later to Kubernetes services. The ports depend on the base image and are selected automatically, but they can be overridden here.

jkube.generator.webapp.ports

env

Environment variables to be set in the image builder environment. They should be set in the format ENV_NAME=environment value. You can inject multiple env variables by adding a new line for each variable.

This may be required for a WildFly webapp S2I build to compose a WildFly server with Galleon layers. See https://docs.wildfly.org/21/Galleon_Guide.html#wildfly_foundational_galleon_layers and https://github.com/wildfly/wildfly-s2i#environment-variables-to-be-used-at-s2i-build-time/.

jkube.generator.webapp.env

JakartaEE and backward compatibility with JavaEE in Tomcat

From Tomcat 10 on, only JakartaEE compliant projects are supported. However, legacy JavaEE projects can automatically be migrated by deploying the war in ${CATALINA_HOME}/webapps-javaee. By default, the webapp generator is based on a Tomcat 10+ image and will copy the war file to ${CATALINA_HOME}/webapps-javaee.

If the project is already JakartaEE compliant, it is recommended to set the webapp directory to ${CATALINA_HOME}/webapps. This can be done by setting the property jkube.generator.webapp.env to TOMCAT_WEBAPPS_DIR=webapps, as shown below.
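For example, as a Maven property:

<properties>
  <jkube.generator.webapp.env>TOMCAT_WEBAPPS_DIR=webapps</jkube.generator.webapp.env>
</properties>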

To keep using Tomcat 9, set the following properties (a combined sketch follows this list):

  • jkube.generator.webapp.from to quay.io/jkube/jkube-tomcat9:0.0.16

  • jkube.generator.webapp.cmd to /usr/local/s2i/run

  • jkube.generator.webapp.supportsS2iBuild to true
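A combined sketch of these settings as Maven properties in the pom.xml:

<properties>
  <jkube.generator.webapp.from>quay.io/jkube/jkube-tomcat9:0.0.16</jkube.generator.webapp.from>
  <jkube.generator.webapp.cmd>/usr/local/s2i/run</jkube.generator.webapp.cmd>
  <jkube.generator.webapp.supportsS2iBuild>true</jkube.generator.webapp.supportsS2iBuild>
</properties>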

7.1.7. Quarkus

The Quarkus generator tries to detect Quarkus-based projects by looking at the project pom.xml.

The base images chosen are:

Table 58. Quarkus Base Images
Docker Build S2I Build

Native

registry.access.redhat.com/ubi9/ubi-minimal:9.3

quay.io/quarkus/ubi-quarkus-native-binary-s2i

Normal Build

quay.io/jkube/jkube-java

quay.io/jkube/jkube-java

7.1.8. Open Liberty

The Open Liberty generator runs when the Open Liberty plugin is enabled in the maven build.

The generator is similar to the java-exec generator. It supports the common generator options and the java-exec options.

For Open Liberty, the default value of webPort is 9080.

7.1.9. Micronaut Generator

The Micronaut generator (named micronaut) detects a Micronaut project by analyzing the plugin dependencies, searching for:

  • io.micronaut.build:micronaut-maven-plugin (for Micronaut 3) or,

  • io.micronaut.maven:micronaut-maven-plugin (for Micronaut 4)

This generator is based on the Java Application Generator and inherits all of its configuration values.

The base images chosen are the following; however, these can be overridden using the jkube.generator.from property:

Table 59. Micronaut Base Images
Docker Build S2I Build

Native

registry.access.redhat.com/ubi9/ubi-minimal:9.3

registry.access.redhat.com/ubi9/ubi-minimal:9.3

Normal Build

quay.io/jkube/jkube-java

quay.io/jkube/jkube-java

7.1.10. Helidon

The Helidon generator tries to detect Helidon-based projects by looking at the project pom.xml.

The base images chosen are the following; however, these can be overridden using the jkube.generator.from property:

Table 60. Helidon Base Images
Docker Build S2I Build

Native

registry.access.redhat.com/ubi9/ubi-minimal:9.3

registry.access.redhat.com/ubi9/ubi-minimal:9.3

Normal Build

quay.io/jkube/jkube-java

quay.io/jkube/jkube-java

7.1.11. Karaf

This generator, named karaf, kicks in when the build uses the karaf-maven-plugin. By default the following base images are used:

Table 61. Karaf Base Images
Docker Build S2I Build ImageStream

Community

quay.io/jkube/jkube-karaf

quay.io/jkube/jkube-karaf

jkube-karaf

When a fromMode of istag is used to specify an ImageStreamTag and no from is given, then the ImageStreamTag jkube-karaf in the namespace openshift is chosen by default.

In addition to the common generator options this generator can be configured with the following options:

Table 62. Karaf configuration options
Element Description Property

baseDir

Directory within the generated image where to put the detected artifacts into. Change this only if the base image is changed, too.

Defaults to /deployments.

jkube.generator.karaf.baseDir

webPort

Port to expose as service, which is supposed to be the port of a web application. Set this to 0 if you don’t want to expose a port.

Defaults to 8080.

jkube.generator.karaf.webPort

7.1.12. Wildfly JAR Generator

The WildFly JAR generator detects a WildFly Bootable JAR build and disables the Jolokia and Prometheus Java agents.

Otherwise this generator is identical to the java-exec generator. It supports the common generator options and the java-exec options.

Support for slim Bootable JAR

A slim Bootable JAR is a JAR that retrieves JBoss module artifacts from a local Maven cache. Such JARs are smaller and start faster. The WildFly JAR generator has built-in support to install a Maven local cache in the image.

In order to build a slim Bootable JAR, configure the wildfly-jar-maven-plugin for a slim server and Maven local cache generation:

  <plugin>
    <groupId>org.wildfly.plugins</groupId>
    <artifactId>wildfly-jar-maven-plugin</artifactId>
    <configuration>
      <plugin-options>
       <!-- Build a slim Bootable JAR -->
       <jboss-maven-dist/>
       <!-- Path to the Maven local cache that the plugin generates during build.
            It contains JBoss module artifacts required by the server. -->
       <jboss-maven-repo>target/myapp-repo</jboss-maven-repo>
      </plugin-options>
      ...
    </configuration>
    <executions>
      <execution>
        <goals>
          <goal>package</goal>
        </goals>
      </execution>
    </executions>
  </plugin>

The generator detects the path of the generated Maven local repository directory (the value of the <jboss-maven-repo> element) and copies it into the image’s /deployments/<repo directory name> directory. NB: A relative path is considered relative to the Maven project base directory.

In order for the Bootable JAR to retrieve the JBoss module artifacts, the Java option -Dmaven.repo.local=/deployments/<repo directory name> is automatically added to the launch options.

7.2. Generator API

It’s possible to extend Eclipse JKube’s Generator API to define your own custom Generators for your use case. Please refer to the Generator Interface; you can create new generators by implementing this interface. Please check out the Custom Foo generator quickstart for a detailed example.

8. Enrichers

Enriching is the complementary concept to Generators. Whereas Generators are used to create and customize Docker images, Enrichers are used to create and customize Kubernetes resource objects.

There are a lot of similarities to Generators:

  • Each Enricher has a unique name.

  • Enrichers are looked up automatically from the plugin dependencies and there is a set of default enrichers delivered with this plugin.

  • Enrichers are configured the same way as generators.

The Generator example is a good blueprint; simply replace generator with enricher. The configuration is structurally identical:

Table 63. Enricher configuration
Element Description

includes

Contains one or more include elements with enricher names which should be included. If given, only the enrichers in this list are included, in this order. The enrichers from every active profile are included, too. However, the enrichers listed here are moved to the front of the list, so that they are called first. Use the profile raw if you want to explicitly set the complete list of enrichers.

excludes

Holds one or more exclude elements with enricher names to exclude. This means all the detected enrichers are used except the ones mentioned in this section.

config

Configuration for all enrichers. Each enricher supports a specific set of configuration values as described in its documentation. The sub-elements of this section are enricher names. E.g. for the enricher jkube-service, the sub-element is called jkube-service. This element then holds the specific enricher configuration, like name for the service name. Configuration coming from profiles is merged into this config, but does not override the configuration specified here.
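As an illustrative sketch, configuring the jkube-service enricher mentioned above with a hypothetical service name:

<plugin>
  <configuration>
    <enricher>
      <config>
        <jkube-service>
          <name>my-service</name>
        </jkube-service>
      </config>
    </enricher>
  </configuration>
</plugin>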

This plugin comes with a set of default enrichers. In addition, custom enrichers can be easily added by providing an implementation of the Enricher API and adding it as a dependency to the build.

8.1. Default Enrichers

openshift-maven-plugin comes with a set of enrichers which are enabled by default. There are two categories of default enrichers:

  • Generic Enrichers are used to add default resource object when they are missing or add common metadata extracted from the given build information.

  • Specific Enrichers are enrichers which are focused on a certain tech stack that they detect.

Table 64. Default Enrichers Overview
Enricher Description

jkube-configmap-file

Add ConfigMap elements defined as XML or as annotation.

jkube-controller

Create default controller (replication controller, replica set or deployment Kubernetes doc) if missing.

jkube-container-env-java-options

Merges JAVA_OPTIONS environment variable defined in Build configuration (image) environment (env) with Container JAVA_OPTIONS environment variable added by other enrichers, XML configuration or fragment.

jkube-debug

Enables debug mode via a property or XML configuration

jkube-dependency

Examine build dependencies for kubernetes.yml/openshift.yml and add the objects found therein.

jkube-git

Check local .git directory and add build information as annotations.

jkube-image

Add the image name into a PodSpec of replication controller, replication sets and deployments, if missing.

jkube-ingress

Create a default Ingress when missing, or as configured via XML configuration.

jkube-imagepullpolicy

Overrides ImagePullPolicy in controller resources, provided the jkube.imagePullPolicy property is set.

jkube-metadata

Add labels/annotations to generated Kubernetes resources

jkube-namespace

Set the Namespace of the generated and processed Kubernetes resources metadata and optionally create a new Namespace

jkube-name

Add a default name to every object which misses a name.

jkube-openshift-autotls

Enriches declarations with auto-TLS annotations, required secrets reference, mounted volumes and PEM to keystore converter init container.

jkube-openshift-deploymentconfig

Enricher that converts an existing Deployment object to a DeploymentConfig.

jkube-openshift-imageChangeTrigger

Enricher that adds an ImageChange trigger to the DeploymentConfig.

jkube-openshift-project

Converts a Kubernetes Namespace resource to OpenShift Project.

jkube-openshift-route

Adds OpenShift Route for existing Service

jkube-persistentvolumeclaim-storageclass

Add the name of the StorageClass required by a PersistentVolumeClaim, either in metadata or in spec.

jkube-pod-annotation

Copy over annotations from a Deployment to a Pod

jkube-portname

Add a default port name for commonly known service ports.

jkube-project-label

Add maven coordinates as labels to all objects.

jkube-replicas

Override the number of replicas for any controller processed by JKube.

jkube-revision-history

Add a revision history limit (see the Kubernetes documentation) as a deployment spec property to the Kubernetes/OpenShift resources.

jkube-secret-file

Add Secret elements defined as annotation.

jkube-security-hardening

Enforces best practices and recommended security rules for Kubernetes and OpenShift resources.

jkube-service

Create a default service if missing and extract ports from the Docker image configuration.

jkube-serviceaccount

Add a ServiceAccount defined as XML or mentioned in resource fragment.

jkube-triggers-annotation

Add ImageStreamTag change triggers on Kubernetes resources such as StatefulSets, ReplicaSets and DaemonSets using the image.openshift.io/triggers annotation.

jkube-volume-permission

Fixes the permission of persistent volume mount with the help of an init container.

jkube-docker-registry-secret

Add a Secret for your Docker registry credentials.

jkube-maven-issue-mgmt

Add Maven Issue Management information as annotations to the kubernetes/openshift resources

jkube-maven-scm-enricher

Add Maven SCM information as annotations to the kubernetes/openshift resources

jkube-prometheus

Add Prometheus annotations.

jkube-well-known-labels

Add Kubernetes Recommended Well Known labels.

8.1.1. Generic Enrichers

Default generic enrichers are used for adding missing resources or adding metadata to given resource objects. The following default enrichers are available out of the box.

jkube-configmap-file

This enricher adds ConfigMap defined as resources in plugin configuration and/or resolves file content from an annotation.

As XML you can define:

pom.xml
<configuration>
  <resources>
    <configMap>
      <name>myconfigmap</name>
      <entries>
        <entry>
          <name>A</name>
          <value>B</value>
        </entry>
       </entries>
    </configMap>
  </resources>
</configuration>

This creates a ConfigMap data entry with key A and value B.

You can also use the file tag to refer to the content of a file:

<configuration>
  <resources>
    <configMap>
      <name>configmap-test</name>
      <entries>
        <entry>
          <file>src/test/resources/test-application.properties</file>
        </entry>
       </entries>
    </configMap>
  </resources>
</configuration>

This creates a ConfigMap with key test-application.properties and, as its value, the content of the src/test/resources/test-application.properties file. If you set the name tag, then it is used as the key instead of the filename.

ConfigMap XML Configuration

Here are the supported options while providing configMap in XML configuration

Table 65. XML configmap configuration configMap
Element Description

entries

data for ConfigMap

name

Name of the ConfigMap

ConfigMap Entry XML Configuration

entries is a list of entry configuration objects. Here are the supported options while providing entry in XML configuration

Table 66. XML configmap entry configuration entry
Element Description

value

Entry value

file

Path to a file or directory. If it is a single file, the file contents are read as the value. If it is a directory, each file's content is stored as a value with the file name as the key.

name

Entry name

If you are defining a custom ConfigMap file, you can use an annotation to define a file name as key and its content as the value:

metadata:
  name: ${project.artifactId}
  annotations:
    jkube.eclipse.org/cm/application.properties: src/test/resources/test-application.properties

This creates a ConfigMap data entry with key application.properties (the part after cm/) and, as its value, the content of the src/test/resources/test-application.properties file.

You can specify a directory instead of a file:

metadata:
  name: ${project.artifactId}
  annotations:
    jkube.eclipse.org/cm/application.properties: src/test/resources/test-dir

This creates a ConfigMap named application.properties (the part after cm/) with one entry for each file under the test-dir directory, using the file name as key and its content as the value; subdirectories are ignored.

jkube-controller

This enricher is used to ensure that a controller is present. This can be either directly configured with fragments or with the XML configuration. An explicit configuration always takes precedence over auto detection. See the Kubernetes documentation for more information on types of controllers.

The following configuration parameters can be used to influence the behaviour of this enricher:

Table 67. Default controller enricher
Element Description Property

name

Name of the Controller. Kubernetes Controller names must start with a letter. If the Maven artifactId starts with a digit, it will be prefixed with s.

Defaults to ${project.artifactId}.

jkube.enricher.jkube-controller.name

pullPolicy

Deprecated: use jkube.imagePullPolicy instead.

Image pull policy to use for the container. One of: IfNotPresent, Always.

Defaults to IfNotPresent.

jkube.enricher.jkube-controller.pullPolicy

type

Type of Controller to create. One of: ReplicationController, ReplicaSet, Deployment, DeploymentConfig, StatefulSet, DaemonSet, Job, CronJob.

Defaults to Deployment.

jkube.enricher.jkube-controller.type

replicaCount

Number of replicas for the container.

Defaults to 1.

jkube.enricher.jkube-controller.replicaCount

schedule

Schedule for CronJob written in Cron syntax.

jkube.enricher.jkube-controller.schedule

Image pull policy to use for the container, settable via this global property. One of: IfNotPresent, Always.

jkube.imagePullPolicy
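For example, a sketch that makes the enricher generate a CronJob controller running every five minutes (the schedule value is illustrative):

<configuration>
  <enricher>
    <config>
      <jkube-controller>
        <type>CronJob</type>
        <schedule>*/5 * * * *</schedule>
      </jkube-controller>
    </config>
  </enricher>
</configuration>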

jkube-container-env-java-options

Merges JAVA_OPTIONS environment variable defined in Build configuration (image) environment (env) with Container JAVA_OPTIONS environment variable added by other enrichers, XML configuration or fragment.

Option Description Property

disable

Disables the enricher; any JAVA_OPTIONS environment variable defined by an enricher, XML configuration or YAML fragment will override the one defined by the generator or image build configuration.

Defaults to false.

jkube.enricher.jkube-container-env-java-options.disable

jkube-debug

This enricher enables debug mode via a property jkube.debug.enabled or via enabling debug mode in enricher configuration.

You can either set this property in pom.xml file:

pom.xml
<properties>
  <jkube.debug.enabled>true</jkube.debug.enabled>
</properties>

Or provide XML configuration for enricher

pom.xml
<plugin>
    <groupId>org.eclipse.jkube</groupId>
    <artifactId>openshift-maven-plugin</artifactId>

    <!-- ... -->

    <configuration>
      <enricher>
        <config>
          <jkube-debug>
            <enabled>true</enabled>
          </jkube-debug>
        </config>
      </enricher>
    </configuration>
</plugin>

This would do the following things:

  • Add environment variable JAVA_ENABLE_DEBUG with value set to true in your application container

  • Add a container port named debug to your existing list of container ports with value set via JAVA_DEBUG_PORT environment variable. If not present, it defaults to 5005.

jkube-dependency

This enricher is used for embedding Kubernetes configuration manifests (YAML) into a single package. It looks for the following files in compile scope dependencies and adds the Kubernetes resources found inside to the final generated Kubernetes manifests:

  • META-INF/jkube/kubernetes.yml

  • META-INF/jkube/k8s-template.yml

  • META-INF/jkube/openshift.yml (in case of OpenShift)

Table 68. Configuration options
Option Description Property

includeTransitive

Whether to look for kubernetes manifest files in transitive dependencies.

Defaults to true.

jkube.enricher.jkube-dependency.includeTransitive

includePlugin

Whether to look on the current plugin classpath too.

Defaults to true.

jkube.enricher.jkube-dependency.includePlugin

jkube-git

Enricher that adds information from the local .git directory as annotations. These are explained in the table below:

Table 69. Annotations added via Git enricher

Annotation

Description

jkube.eclipse.org/git-branch

Current Git Branch

jkube.eclipse.org/git-commit

Latest commit of current branch

jkube.eclipse.org/git-url

URL of your configured git remote

jkube.io/git-branch

Deprecated: Use jkube.eclipse.org/ annotation prefix.

Current Git Branch

jkube.io/git-commit

Deprecated: Use jkube.eclipse.org/ annotation prefix.

Latest commit of current branch

jkube.io/git-url

Deprecated: Use jkube.eclipse.org/ annotation prefix.

URL of your configured git remote

app.openshift.io/vcs-ref

Current Git Branch

app.openshift.io/vcs-uri

URL of your configured git remote

Table 70. Supported Configuration options
Option Description Property

gitRemote

Configures the git remote name, whose URL you want to annotate as 'git-url'.

Defaults to origin.

jkube.remoteName
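For instance, to annotate the URL of a remote named upstream instead of the default origin, a sketch using the gitRemote option from the table above:

<configuration>
  <enricher>
    <config>
      <jkube-git>
        <gitRemote>upstream</gitRemote>
      </jkube-git>
    </config>
  </enricher>
</configuration>

The same can be set on the command line with -Djkube.remoteName=upstream.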

jkube-image

This enricher merges container image related fields into the Pod specification of the specified controller (e.g. Deployment, ReplicaSet, ReplicationController).

  • The full image name is set as image.

  • An image alias is set as name. If no alias is provided, an opinionated name based on the image user and project name is used.

  • The pull policy imagePullPolicy is set according to the given configuration. If no configuration is set, the default is IfNotPresent for release versions, and Always for snapshot versions.

  • Environment variables are set as configured via the XML configuration.

Containers already configured in the pod spec are only updated for properties that are not yet set.

Table 71. Configuration options
Option Description Property

pullPolicy

What pull policy to use when fetching images

jkube.enricher.jkube-image.pullPolicy

jkube-ingress

Enricher responsible for the creation of Ingress, either using opinionated defaults or from the provided XML configuration. This enricher is activated when jkube.createExternalUrls is set to true. JKube generates an Ingress only for Services which have either the expose=true or exposeUrl=true label set.

For more information, check out Ingress Generation section.

Table 72. Ingress enricher
Element Description Property

host

Host is the fully qualified domain name of a network host.

jkube.enricher.jkube-ingress.host

targetApiVersion

Whether to generate extensions/v1beta1 Ingress or networking.k8s.io/v1 Ingress.

Defaults to networking.k8s.io/v1.

jkube.enricher.jkube-ingress.targetApiVersion
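For example, a sketch that activates Ingress generation and sets the host via the documented properties (the host value is illustrative):

pom.xml
<properties>
  <jkube.createExternalUrls>true</jkube.createExternalUrls>
  <jkube.enricher.jkube-ingress.host>myapp.example.com</jkube.enricher.jkube-ingress.host>
</properties>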

jkube-imagepullpolicy

This enricher sets the ImagePullPolicy for Kubernetes/OpenShift resources whenever the -Djkube.imagePullPolicy parameter is provided.

jkube-metadata

This enricher is responsible for adding labels and annotations to your resources. It reads labels and annotations fields provided in resources and adds respective labels/annotations to Kubernetes resources.

You can also configure whether you want to add these labels/annotations to some specific resource or all resources.

You can see an example of its usage in the oc:resource Labels And Annotations section.

jkube-namespace

This enricher adds a Namespace/Project resource to the Kubernetes Resources list in case the namespace configuration (jkube.enricher.jkube-namespace.namespace) is provided.

In addition, this enricher sets the namespace (.metadata.namespace) of the JKube generated and processed Kubernetes resources in case they don’t already have one configured (see the force configuration).

The following configuration parameters can be used to influence the behaviour of this enricher:

Table 73. Default namespace enricher
Element Description Property

namespace

Name of the Namespace to create. A new Namespace object will be created and added to the list of Kubernetes resources generated during the enrichment phase.

jkube.enricher.jkube-namespace.namespace

type

Whether we want to generate a Namespace or an OpenShift specific Project resource. One of: Namespace, Project.

Defaults to Namespace.

jkube.enricher.jkube-namespace.type

force

Whether the .metadata.namespace field should be forced even if the resource already has one configured.

Defaults to false.

jkube.enricher.jkube-namespace.force

This enricher also sets the generated Namespace in the .metadata.namespace field of Kubernetes resources when provided via XML configuration. Here is an example:

<configuration>
    <resources>
        <namespace>mynamespace</namespace>
    </resources>
</configuration>
jkube-name

Enricher for adding a name to the metadata of the various objects we create.

Table 74. Supported Configuration options
Option Description Property

name

Configures the .metadata.name of all generated Kubernetes manifests.

jkube.enricher.jkube-name.name

jkube-openshift-autotls

Enricher which adds appropriate annotations and volumes to enable OpenShift’s automatic Service Serving Certificate Secrets. This enricher adds an init container to convert the service serving certificates from PEM (the format that OpenShift generates them in) to a JKS-format Java keystore ready for consumption in Java services.

This enricher is disabled by default. In order to use it, you must configure the openshift-maven-plugin to use this enricher:

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <version>1.17.0</version>
  <executions>
    <execution>
      <goals>
        <goal>resource</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <enricher>
      <includes>
        <include>jkube-openshift-autotls</include>
      </includes>
      <config>
        <jkube-openshift-autotls>
          <!-- ... -->
        </jkube-openshift-autotls>
      </config>
    </enricher>
  </configuration>
</plugin>

The auto-TLS enricher supports the following configuration options:

Element Description Property

tlsSecretName

The name of the secret to be used to store the generated service serving certs.

Defaults to <project.artifactId>-tls.

jkube.enricher.jkube-openshift-autotls.tlsSecretName

tlsSecretVolumeMountPoint

Where the service serving secret should be mounted to in the pod.

Defaults to /var/run/secrets/jkube.io/tls-pem.

jkube.enricher.jkube-openshift-autotls.tlsSecretVolumeMountPoint

tlsSecretVolumeName

The name of the secret volume.

Defaults to tls-pem.

jkube.enricher.jkube-openshift-autotls.tlsSecretVolumeName

jksVolumeMountPoint

Where the generated keystore volume should be mounted to in the pod.

Defaults to /var/run/secrets/jkube.io/tls-jks.

jkube.enricher.jkube-openshift-autotls.jksVolumeMountPoint

jksVolumeName

The name of the keystore volume.

Defaults to tls-jks.

jkube.enricher.jkube-openshift-autotls.jksVolumeName

pemToJKSInitContainerImage

The name of the image used as an init container to convert PEM certificate/key to Java keystore.

Defaults to jimmidyson/pemtokeystore:v0.1.0.

jkube.enricher.jkube-openshift-autotls.pemToJKSInitContainerImage

pemToJKSInitContainerName

The name of the init container to convert the PEM certificate/key to a Java keystore.

Defaults to tls-jks-converter.

jkube.enricher.jkube-openshift-autotls.pemToJKSInitContainerName

keystoreFileName

The name of the generated keystore file.

Defaults to keystore.jks.

jkube.enricher.jkube-openshift-autotls.keystoreFileName

keystorePassword

The password to use for the generated keystore.

Defaults to changeit.

jkube.enricher.jkube-openshift-autotls.keystorePassword

keystoreCertAlias

The alias in the keystore used for the imported service serving certificate.

Defaults to server.

jkube.enricher.jkube-openshift-autotls.keystoreCertAlias

jkube-openshift-deploymentconfig

This enricher converts a Kubernetes Deployment object (extensions/v1beta1 or apps/v1) to the OpenShift equivalent DeploymentConfig.

It’s applicable only for OpenShift.

Note that this won’t be enabled if you’ve set jkube.build.switchToDeployment to true or you’ve configured the DefaultControllerEnricher to generate a controller of type DeploymentConfig.

Table 75. Supported configuration options for this enricher
Property Description

jkube.openshift.deployTimeoutSeconds

The OpenShift deploy timeout in seconds.

Defaults to 3600.

jkube.build.switchToDeployment

Disable conversion of Deployment to DeploymentConfig.

Defaults to false.
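For example, to keep the plain Kubernetes Deployment instead of converting it, a sketch setting the documented property in the pom.xml:

<properties>
  <jkube.build.switchToDeployment>true</jkube.build.switchToDeployment>
</properties>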

jkube-openshift-imageChangeTrigger

This enricher is responsible for adding an ImageChange trigger to DeploymentConfig containers.

It is only applicable in case of OpenShift.

Table 76. Supported configuration options for this enricher
Property Description

jkube.openshift.enableAutomaticTrigger

Enable automatic deployment in generated ImageChange trigger.

Defaults to true.

jkube.openshift.imageChangeTriggers

Enable generation of ImageChange triggers to DeploymentConfigs.

Defaults to true.

jkube.openshift.trimImageInContainerSpec

Set the container image reference to ""; this handles the behavior of OpenShift 3.7 in which subsequent rollouts lead to ImagePullErr.

Defaults to false.

jkube.openshift.enrichAllWithImageChangeTrigger

Add ImageChange Triggers with respect to all containers specified inside DeploymentConfig.

Defaults to false.

jkube-openshift-project

Enricher that converts a Kubernetes Namespace resource into its OpenShift equivalent, i.e. a Project.

This is only applicable in case of OpenShift.

jkube-openshift-route

This enricher adds an OpenShift Route for an existing Service.

This is only applicable to OpenShift.

Table 77. Supported configuration options for this enricher
Element Description Property

generateRoute

Generate Route for corresponding Service

Defaults to true.

jkube.enricher.jkube-openshift-route.generateRoute

tlsTermination

Add TLS termination of the route

jkube.enricher.jkube-openshift-route.tlsTermination

tlsInsecureEdgeTerminationPolicy

Add Edge TLS termination of the route

jkube.enricher.jkube-openshift-route.tlsInsecureEdgeTerminationPolicy

Generate Route for the corresponding Service. Note that this flag has lower precedence than the generateRoute enricher configuration option; when both are provided, only generateRoute is considered.

jkube.openshift.generateRoute
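For example, a sketch configuring edge TLS termination for the generated Route (edge and Redirect are standard OpenShift Route values, used here for illustration):

<configuration>
  <enricher>
    <config>
      <jkube-openshift-route>
        <tlsTermination>edge</tlsTermination>
        <tlsInsecureEdgeTerminationPolicy>Redirect</tlsInsecureEdgeTerminationPolicy>
      </jkube-openshift-route>
    </config>
  </enricher>
</configuration>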

jkube-pod-annotation

This enricher copies the annotations from a Controller’s (Deployment/ReplicaSet/StatefulSet) metadata to the annotations of the Pod template spec’s metadata.

jkube-portname

This enricher uses a given set of well known ports:

Table 78. Default Port Mappings

Port Number

Name

8080

http

8443

https

8778

jolokia

9779

prometheus

If a port is not found in this table, the container port name is derived from the IANA registered service name.

jkube-project-label

Enricher that adds standard labels and selectors to generated resources (e.g. app, group, provider, version).

The jkube-project-label enricher supports the following configuration options:

Option Description Property

useProjectLabel

Enable this flag to turn on the generation of the old project label in Kubernetes resources. The project label has been replaced by the app label in newer versions of the plugin.

Defaults to false.

jkube.enricher.jkube-project-label.useProjectLabel

app

Makes it possible to define a custom app label used in the generated resource files used for deployment.

Defaults to the Maven project.artifactId property.

jkube.enricher.jkube-project-label.app

provider

Makes it possible to define a custom provider label used in the generated resource files used for deployment.

Defaults to jkube.

jkube.enricher.jkube-project-label.provider

group

Makes it possible to define a custom group label used in the generated resource files used for deployment.

Defaults to the Maven project.groupId property.

jkube.enricher.jkube-project-label.group

version

Makes it possible to define a custom version label used in the generated resource files used for deployment.

Defaults to the Maven project.version property.

jkube.enricher.jkube-project-label.version

The project labels which are already specified in the input fragments are not overridden by the enricher.
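For example, a sketch overriding the app and provider labels (the values are illustrative):

<configuration>
  <enricher>
    <config>
      <jkube-project-label>
        <app>my-app</app>
        <provider>acme</provider>
      </jkube-project-label>
    </config>
  </enricher>
</configuration>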

jkube-persistentvolumeclaim-storageclass

Enricher which adds the name of the StorageClass required by a PersistentVolumeClaim, either in metadata or in spec.

Table 79. Supported properties
Option Description Property

defaultStorageClass

PersistentVolume storage class.

jkube.enricher.jkube-volume-permission.defaultStorageClass

useStorageClassAnnotation

If enabled, the storage class is added to the PersistentVolumeClaim metadata as a volume.beta.kubernetes.io/storage-class=<storageClassName> annotation rather than in .spec.storageClassName.

Defaults to false.

jkube.enricher.jkube-volume-permission.useStorageClassAnnotation

jkube-replicas

This enricher overrides the number of replicas for every controller (DaemonSet, Deployment, DeploymentConfig, Job, CronJob, ReplicationController, ReplicaSet, StatefulSet) generated or processed by JKube (including those from dependencies).

In order to use this enricher you need to configure the jkube.replicas property, either on the command line:

mvn -Djkube.replicas=42 oc:resource

or in your pom.xml:

<project>
  <!-- ... -->
  <properties>
    <!-- ... -->
    <jkube.replicas>42</jkube.replicas>
  </properties>
</project>

You can use this Enricher at runtime to temporarily force the number of replicas to a given value.

jkube-revision-history

This enricher adds the spec.revisionHistoryLimit property to the deployment spec of Kubernetes/OpenShift resources. A Deployment’s revision history is stored in its ReplicaSets; the limit specifies the number of old ReplicaSets to retain in order to allow rollback. For more information, read the Kubernetes documentation.

The following configuration parameters can be used to influence the behaviour of this enricher:

Table 80. Default revision history enricher
Element Description Property

limit

Number of revision histories to retain.

Defaults to 2.

jkube.enricher.jkube-revision-history.limit

Just as with any other enricher, you can specify the required properties within the enricher’s configuration:

<!-- ... -->
<enricher>
    <config>
        <jkube-revision-history>
            <limit>8</limit>
        </jkube-revision-history>
    </config>
</enricher>
<!-- ... -->

This information will be added as a spec property in the generated manifest:

# ...
kind: Deployment
spec:
  revisionHistoryLimit: 8
# ...
jkube-secret-file

This enricher adds Secret defined as file content from an annotation.

If you are defining a custom Secret file, you can use an annotation to define a file name as key and its content as the value:

metadata:
  name: ${project.artifactId}
  annotations:
    jkube.eclipse.org/secret/application.properties: src/test/resources/test-application.properties

This creates a Secret data entry with the key application.properties (the part after secret/) and, as its value, the base64-encoded content of the src/test/resources/test-application.properties file.

jkube-security-hardening

This enricher enforces security best practices and recommendations for Kubernetes objects such as Deployments, ReplicaSets, Jobs, CronJobs, and so on.

The enricher is not included in the default profile. However, you can easily enable it by leveraging the security-hardening profile.

These are some of the rules enforced by this enricher:

  • Disables the auto-mounting of the service account token.

  • Prevents containers from running in privileged mode.

  • Ensures containers do not allow privilege escalation.

  • Prevents containers from running as the root user.

  • Configures the container to run as a user with a high UID to avoid host conflict.

  • Ensures the container’s seccomp profile is set to RuntimeDefault.

jkube-service

This enricher is used to ensure that a service is present. This can be either directly configured with fragments or with the XML configuration, but it can also be inferred automatically by looking at the ports exposed by an image configuration. An explicit configuration always takes precedence over auto detection. When enriching an existing service, this enricher only operates on a configured service whose name matches the configured (or inferred) service name.

The following configuration parameters can be used to influence the behaviour of this enricher:

Table 81. Default service enricher
Element Description Property

name

Service name to enrich by default. If not given here or configured elsewhere, the artifactId/project name is used.

jkube.enricher.jkube-service.name

headless

Whether a headless service without a port should be configured. A headless service has the ClusterIP set to None and will only be used if no ports are exposed by the image configuration or by the port configuration.

Defaults to false.

jkube.enricher.jkube-service.headless

expose

If set to true, a label expose with value true is added, which can be picked up by the expose-controller to expose the service to the outside by various means. See the documentation of expose-controller for more details.

Defaults to false.

jkube.enricher.jkube-service.expose

type

Kubernetes / OpenShift service type to set like LoadBalancer, NodePort or ClusterIP.

jkube.enricher.jkube-service.type

port

The service port to use. By default the port exposed in the image configuration is used, but it can be changed with this parameter. See below for a detailed description of the format this variable accepts.

jkube.enricher.jkube-service.port

multiPort

Set this to true if you want all ports to be exposed from an image configuration. Otherwise only the first port is used as a service port.

Defaults to false.

jkube.enricher.jkube-service.multiPort

protocol

Default protocol to use for the services. Must be tcp or udp.

Defaults to tcp.

jkube.enricher.jkube-service.protocol

normalizePort

Normalize the port numbering of the service to common and conventional port numbers.

Defaults to false.

jkube.enricher.jkube-service.normalizePort

The following port mappings come into effect when the normalizePort option is set to true.

Original Port Normalized Port

8080

80

8081

80

8181

80

8180

80

8443

443

443

443

You specify the properties, as for any enricher, within the enricher configuration:

Example
<configuration>
  <!-- ... -->
  <enricher>
    <config>
      <jkube-service>
        <name>my-service</name>
        <type>NodePort</type>
        <multiPort>true</multiPort>
      </jkube-service>
    </config>
  </enricher>
</configuration>
Port specification

With the option port you can influence how ports are mapped from the pod to the service. By default, if this option is not given, the exposed ports are dictated by the ports exposed by the Docker images contained in the pods. Remember, each configured image can be part of the pod. However, you can also expose completely different ports than those the image metadata declares.

The property port can contain a comma separated list of mappings of the following format:

<servicePort1>:<targetPort1>/<protocol>,<servicePort2>:<targetPort2>/<protocol>,....

where the targetPort and protocol specifications are optional. These ports are overlaid over the ports exposed by the images, in the given order.

This is best explained by some examples.

For example if you have a pod which exposes a Microservice on port 8080 and you want to expose it as a service on port 80 (so that it can be accessed with http://myservice) you can simply use the following enricher configuration:

Example
<configuration>
  <enricher>
    <config>
      <jkube-service>
        <name>myservice</name>
        <port>80:8080</port> (1)
      </jkube-service>
    </config>
  </enricher>
</configuration>
1 80 is the service port, 8080 the port opened by the pod’s images

If your pod exposes its ports (which e.g. all generators do), then you can even omit the 8080 here (i.e. <port>80</port>).

In this case the first port exposed will be mapped to port 80, all other exposed ports will be omitted.

By default, an automatically generated service only exposes the first port, even when more ports are exposed. When you want to map multiple ports you need to set the config option multiPort to true. In this case you can also provide multiple mappings as a comma separated list in the port specification, where each element of the list is the mapping for the first, second, … port.

A more complex (and somewhat artificial) specification could be <port>80,9779:9779/udp,443</port>. Assuming that the image exposes ports 8080 and 8778 (either directly or via generators) and we have switched on multiPort mode, the following service port mappings will be performed for the automatically generated service:

  • Pod port 8080 is mapped to service port 80.

  • Pod port 9779 is mapped to service port 9779 with protocol UDP. Note how this second entry overrides the pod exposed port 8778.

  • Pod port 443 is mapped to service port 443.

This example also shows the mapping rules:

  • Port specifications in port always override the port metadata of the contained Docker images (i.e. the ports exposed).

  • You can always provide a complete mapping with port on your own

  • The ports exposed by the images serve as default values which are used if not specified by this configuration option.

  • You can map ports which are not exposed by the images by specifying them as target ports.

Multiple ports are only mapped when multiPort mode is enabled (which is switched off by default). If multiPort mode is disabled, only the first port from the list of mapped ports calculated as above is taken.

When you set legacyPortMapping to true, ports 8080 to 9090 are automatically mapped to port 80 if not explicitly mapped via port, i.e. when an image exposes port 8080, the legacy mapping maps it to service port 80, not 8080. There is no good reason to switch this on, and this option may vanish at any time.

This enricher is also used by the resources XML configuration to generate Services configured via XML. Here are the fields supported in resources which work with this enricher:

Table 82. Fields supported in resources

Element

Description

services

Configuration element for generating Service resource

Service XML Configuration

services is a list of service configuration objects. Here are the supported options while providing service in XML configuration

Table 83. XML service configuration
Element Description

name

Service name

port

Port to expose

headless

Whether this is a headless service.

type

Service type

normalizePort

Whether to normalize service port numbering.

ports

Ports to expose

Service port Configuration

ports is a list of port configuration objects. Here are the supported options while providing port in XML configuration

Table 84. XML service port configuration
Element Description

protocol

Protocol to use. Can be either "tcp" or "udp".

port

Container port to expose.

targetPort

Target port to expose.

nodePort

Port to expose on each node (for NodePort services).

name

Name of the port

jkube-serviceaccount

This enricher is responsible for creating ServiceAccount resources. See ServiceAccount Generation for more details.

The following configuration parameters can be used to influence the behaviour of this enricher:

Table 85. ServiceAccountEnricher configuration options
Element Description Property

skipCreate

Skip creating ServiceAccount objects

Defaults to false.

jkube.enricher.jkube-serviceaccount.skipCreate

jkube-triggers-annotation

OpenShift resources like BuildConfig and DeploymentConfig can be automatically triggered by changes to ImageStreamTags. However, plain Kubernetes resources don’t have a way to support this kind of triggering. You can use the image.openshift.io/triggers annotation in OpenShift to request triggering. Read the OpenShift docs for more details: Triggering updates on ImageStream changes.

This enricher adds ImageStreamTag change triggers on Kubernetes resources that support the image.openshift.io/triggers annotation, such as StatefulSets, ReplicaSets and DaemonSets.

The trigger is added to all containers that apply, but can be restricted to a limited set of containers using the following configuration:

<!-- ... -->
<enricher>
    <config>
        <jkube-triggers-annotation>
            <containers>container-name-1,c2</containers>
        </jkube-triggers-annotation>
    </config>
</enricher>
<!-- ... -->
jkube-volume-permission

Enricher which fixes the permission of persistent volume mount with the help of an init container.

Table 86. Supported properties
Option Description Property

imageName

Image name for PersistentVolume init container

Defaults to quay.io/quay/busybox.

jkube.enricher.jkube-volume-permission.imageName

permission

PersistentVolume init container access mode

Defaults to 777.

jkube.enricher.jkube-volume-permission.permission

cpuLimit

Set PersistentVolume initContainer's .resources CPU limit

jkube.enricher.jkube-volume-permission.cpuLimit

memoryLimit

Set PersistentVolume initContainer's .resources memory limit

jkube.enricher.jkube-volume-permission.memoryLimit

cpuRequest

Set PersistentVolume initContainer's .resources CPU request

jkube.enricher.jkube-volume-permission.cpuRequest

memoryRequest

Set PersistentVolume initContainer's .resources memory request

jkube.enricher.jkube-volume-permission.memoryRequest
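For example, a sketch configuring a custom init container image and permission (the values are illustrative):

<configuration>
  <enricher>
    <config>
      <jkube-volume-permission>
        <imageName>docker.io/library/busybox:latest</imageName>
        <permission>755</permission>
      </jkube-volume-permission>
    </config>
  </enricher>
</configuration>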

jkube-well-known-labels

Enricher that adds Well Known Labels recommended by Kubernetes.

The jkube-well-known-labels enricher supports the following configuration options:

Option Description Property

Add Kubernetes Well Known labels to generated resources, settable via this global property.

Defaults to true.

jkube.kubernetes.well-known-labels

enabled

Enable this flag to turn on addition of Kubernetes Well Known labels.

Defaults to true.

jkube.enricher.jkube-well-known-labels.enabled

name

The name of the application (app.kubernetes.io/name).

Defaults to the Maven project.artifactId property.

jkube.enricher.jkube-well-known-labels.name

version

The current version of the application (app.kubernetes.io/version).

Defaults to the Maven project.version property.

jkube.enricher.jkube-well-known-labels.version

component

The component within the architecture (app.kubernetes.io/component).

jkube.enricher.jkube-well-known-labels.component

partOf

The name of a higher level application this one is part of (app.kubernetes.io/part-of).

Defaults to the Maven project.groupId property.

jkube.enricher.jkube-well-known-labels.partOf

managedBy

The tool being used to manage the operation of an application (app.kubernetes.io/managed-by).

Defaults to jkube.

jkube.enricher.jkube-well-known-labels.managedBy

The Well Known Labels which are already specified in the input fragments are not overridden by the enricher.
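For example, a sketch setting the component and part-of labels (the values are illustrative):

<configuration>
  <enricher>
    <config>
      <jkube-well-known-labels>
        <component>backend</component>
        <partOf>my-platform</partOf>
      </jkube-well-known-labels>
    </config>
  </enricher>
</configuration>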

jkube-docker-registry-secret

This enricher enables the oc:resource Secret generation feature. You can read more about it in the Secrets section.

jkube-maven-scm-enricher

This enricher adds additional SCM-related metadata to all objects supporting annotations. This metadata is added only if SCM information is present in the Maven pom.xml of the project.

The following annotations will be added to the objects that support annotations:

Table 87. Maven SCM Enrichers Annotation Mapping
Maven SCM Info Annotation Description

scm/connection

jkube.eclipse.org/scm-con-url

The SCM connection that will be used to connect to the project’s SCM

scm/developerConnection

jkube.eclipse.org/scm-devcon-url

The SCM Developer Connection that will be used to connect to the project’s developer SCM

scm/tag

jkube.eclipse.org/scm-tag

The SCM tag that will be used to check out the sources, such as HEAD or dev-branch.

scm/url

jkube.eclipse.org/scm-url

The SCM web URL that can be used to browse the repository in a web browser.

scm/connection

jkube.io/scm-con-url

Deprecated: Use jkube.eclipse.org/ annotation prefix.

The SCM connection that will be used to connect to the project’s SCM

scm/developerConnection

jkube.io/scm-devcon-url

Deprecated: Use jkube.eclipse.org/ annotation prefix.

The SCM Developer Connection that will be used to connect to the project’s developer SCM

scm/tag

jkube.io/scm-tag

Deprecated: Use jkube.eclipse.org/ annotation prefix.

The SCM tag that will be used to check out the sources, such as HEAD or dev-branch.

scm/url

jkube.io/scm-url

Deprecated: Use jkube.eclipse.org/ annotation prefix.

The SCM web URL that can be used to browse the repository in a web browser.

Let’s say you have a Maven pom.xml with the following SCM information:

<scm>
    <connection>scm:git:git://github.com/eclipse-jkube/kubernetes-maven-plugin.git</connection>
    <developerConnection>scm:git:git://github.com/eclipse-jkube/kubernetes-maven-plugin.git</developerConnection>
    <url>git://github.com/eclipse-jkube/kubernetes-maven-plugin.git</url>
</scm>

This information will be added as annotations in the generated manifest:

# ...
  kind: Service
  metadata:
    annotations:
      jkube.eclipse.org/scm-con-url: "scm:git:git://github.com/eclipse-jkube/kubernetes-maven-plugin.git"
      jkube.eclipse.org/scm-devcon-url: "scm:git:git://github.com/eclipse-jkube/kubernetes-maven-plugin.git"
      jkube.eclipse.org/scm-tag: "HEAD"
      jkube.eclipse.org/scm-url: "git://github.com/eclipse-jkube/kubernetes-maven-plugin.git"
# ...
jkube-maven-issue-mgmt

This enricher adds additional issue management related metadata to all objects supporting annotations. This metadata is added only if issue management information is available in the pom.xml of the Maven project.

The following annotations will be added to the objects that support annotations:

Table 88. Maven Issue Tracker Enrichers Annotation Mapping

Maven Issue Tracker Info

Annotation

Description

issueManagement/system

jkube.eclipse.org/issue-system

The issue management system, such as Bugzilla, JIRA, or GitHub.

issueManagement/url

jkube.eclipse.org/issue-tracker-url

The issue management URL, e.g. the GitHub issues URL.

issueManagement/system

jkube.io/issue-system

Deprecated: Use jkube.eclipse.org/ annotation prefix.

The issue management system, such as Bugzilla, JIRA, or GitHub.

issueManagement/url

jkube.io/issue-tracker-url

Deprecated: Use jkube.eclipse.org/ annotation prefix.

The issue management URL, e.g. the GitHub issues URL.

Let’s say you have a Maven pom.xml with the following issue management information:

<issueManagement>
   <system>GitHub</system>
   <url>https://github.com/reactiverse/vertx-maven-plugin/issues/</url>
</issueManagement>

This information will be added as annotations in the generated manifest:

# ...
  kind: Service
  metadata:
    annotations:
      jkube.eclipse.org/issue-system: "GitHub"
      jkube.eclipse.org/issue-tracker-url: "https://github.com/reactiverse/vertx-maven-plugin/issues/"
# ...
jkube-prometheus

This enricher adds Prometheus annotations like:

apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9779"
      prometheus.io/path: "/metrics"

By default, the enricher inspects the image’s build configuration and adds the annotations if port 9779 is listed. You can force the plugin to add the annotations by setting the enricher’s prometheusPort config option.
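For example, a sketch forcing the annotations with a custom metrics port (the port value is illustrative):

<configuration>
  <enricher>
    <config>
      <jkube-prometheus>
        <prometheusPort>9090</prometheusPort>
      </jkube-prometheus>
    </config>
  </enricher>
</configuration>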

8.1.2. Specific Enrichers

Specific enrichers provide resource manifest enhancement for a certain tech stack that they detect.

jkube-healthcheck-openliberty

This enricher adds Kubernetes readiness, liveness, and startup probes for OpenLiberty based projects. Note that Kubernetes startup probes are only added in projects using MicroProfile 3.1 and later.

The application should be configured as follows to enable the enricher (i.e. either microProfile or mpHealth should be enabled in the Liberty server configuration file, as pointed out in the OpenLiberty Health Docs):

server.xml
<featureManager>
    <feature>mpHealth-4.1</feature>
</featureManager>
Probe configuration

You can configure the different aspects of the probes.

Table 89. OpenLiberty HealthCheck Enricher probe configuration
Element Description Property

scheme

Scheme to use for connecting to the host.

Defaults to HTTP.

jkube.enricher.jkube-healthcheck-openliberty.scheme

port

Port number to access the container.

Defaults to 9080.

jkube.enricher.jkube-healthcheck-openliberty.port

livenessFailureThreshold

Configures failureThreshold field in .livenessProbe . Minimum consecutive failures for the probes to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-openliberty.livenessFailureThreshold

livenessSuccessThreshold

Configures successThreshold field in .livenessProbe. Minimum consecutive successes for the probes to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-openliberty.livenessSuccessThreshold

livenessInitialDelay

Configures initialDelaySeconds field in .livenessProbe. Number of seconds after the container has started before liveness or readiness probes are initiated.

Defaults to 0 seconds.

jkube.enricher.jkube-healthcheck-openliberty.livenessInitialDelay

livenessPeriodSeconds

Configures periodSeconds field in .livenessProbe. How often (in seconds) to perform the liveness probe.

Defaults to 10.

jkube.enricher.jkube-healthcheck-openliberty.livenessPeriodSeconds

livenessPath

Path to access on the application server.

Defaults to /health/live.

jkube.enricher.jkube-healthcheck-openliberty.livenessPath

readinessFailureThreshold

Configures failureThreshold field in .readinessProbe . Minimum consecutive failures for the probes to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-openliberty.readinessFailureThreshold

readinessSuccessThreshold

Configures successThreshold field in .readinessProbe. Minimum consecutive successes for the probes to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-openliberty.readinessSuccessThreshold

readinessInitialDelay

Configures initialDelaySeconds field in .readinessProbe. Number of seconds after the container has started before liveness or readiness probes are initiated.

Defaults to 0 seconds.

jkube.enricher.jkube-healthcheck-openliberty.readinessInitialDelay

readinessPeriodSeconds

Configures periodSeconds field in .readinessProbe. How often (in seconds) to perform the liveness probe.

Defaults to 10.

jkube.enricher.jkube-healthcheck-openliberty.readinessPeriodSeconds

readinessPath

Path to access on the application server.

Defaults to /health/ready.

jkube.enricher.jkube-healthcheck-openliberty.readinessPath

startupFailureThreshold

Configures failureThreshold field in .startupProbe . Minimum consecutive failures for the probes to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-openliberty.startupFailureThreshold

startupSuccessThreshold

Configures successThreshold field in .startupProbe. Minimum consecutive successes for the probes to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-openliberty.startupSuccessThreshold

startupInitialDelay

Configures initialDelaySeconds field in .startupProbe. Number of seconds after the container has started before liveness or startup probes are initiated.

Defaults to 0 seconds.

jkube.enricher.jkube-healthcheck-openliberty.startupInitialDelay

startupPeriodSeconds

Configures periodSeconds field in .startupProbe. How often (in seconds) to perform the liveness probe.

Defaults to 10.

jkube.enricher.jkube-healthcheck-openliberty.startupPeriodSeconds

startupPath

Path to access on the application server.

Defaults to /health/started.

jkube.enricher.jkube-healthcheck-openliberty.startupPath
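As with the other health check enrichers, these options can be set in the enricher configuration. A sketch overriding the scheme, port, and liveness path (the values are illustrative):

<configuration>
  <enricher>
    <config>
      <jkube-healthcheck-openliberty>
        <scheme>HTTPS</scheme>
        <port>9443</port>
        <livenessPath>/health/custom-live</livenessPath>
      </jkube-healthcheck-openliberty>
    </config>
  </enricher>
</configuration>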

jkube-healthcheck-spring-boot

This enricher adds Kubernetes readiness and liveness probes for Spring Boot. It requires that the following dependency is enabled in your Spring Boot project:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

The enricher will try to discover the settings from the application.properties / application.yaml Spring Boot configuration file.

/actuator/health is the default endpoint for the liveness and readiness probes.

If the user has enabled the management.health.probes.enabled property, this enricher uses /actuator/health/liveness as the liveness and /actuator/health/readiness as the readiness probe endpoint instead.

management.health.probes.enabled=true

The port number is read from the management.port option and defaults to 8080. The scheme will be HTTPS if the server.ssl.key-store option is in use, and falls back to HTTP otherwise.

The enricher will use the following settings by default:

  • readinessProbeInitialDelaySeconds : 10

  • readinessProbePeriodSeconds : <kubernetes-default>

  • livenessProbeInitialDelaySeconds : 180

  • livenessProbePeriodSeconds : <kubernetes-default>

  • timeoutSeconds : <kubernetes-default>

  • failureThreshold: 3

  • successThreshold: 1

These values can be configured in the enricher configuration of the openshift-maven-plugin as shown below:

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <version>1.17.0</version>
  <executions>
    <execution>
      <id>jkube</id>
      <goals>
        <goal>resource</goal>
        <goal>helm</goal>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <enricher>
      <config>
        <jkube-healthcheck-spring-boot>
          <timeoutSeconds>5</timeoutSeconds>
          <readinessProbeInitialDelaySeconds>30</readinessProbeInitialDelaySeconds>
          <failureThreshold>3</failureThreshold>
          <successThreshold>1</successThreshold>
        </jkube-healthcheck-spring-boot>
      </config>
    </enricher>
  </configuration>
</plugin>
jkube-healthcheck-thorntail-v2

This enricher adds Kubernetes readiness and liveness probes for Thorntail v2. It requires that the following fraction is enabled in Thorntail:

<dependency>
  <groupId>io.thorntail</groupId>
  <artifactId>microprofile-health</artifactId>
</dependency>

The enricher will use the following settings by default:

  • port = 8080

  • scheme = HTTP

  • path = /health

  • failureThreshold = 3

  • successThreshold = 1

These values can be configured in the enricher configuration of the openshift-maven-plugin as shown below:

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <version>1.17.0</version>
  <executions>
    <execution>
      <id>jkube</id>
      <goals>
        <goal>resource</goal>
        <goal>helm</goal>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <enricher>
      <config>
<jkube-healthcheck-thorntail-v2>
          <port>4444</port>
          <scheme>HTTPS</scheme>
          <path>health/myapp</path>
          <failureThreshold>3</failureThreshold>
          <successThreshold>1</successThreshold>
</jkube-healthcheck-thorntail-v2>
      </config>
    </enricher>
  </configuration>
</plugin>
jkube-healthcheck-quarkus

This enricher adds Kubernetes readiness, liveness, and startup probes for Quarkus. This requires the following dependency to be added to your Quarkus project:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-health</artifactId>
</dependency>

The enricher will try to discover the settings from the application.properties / application.yaml configuration file. JKube uses the following properties to resolve the health check URLs:

  • quarkus.http.root-path: Quarkus Application root path.

  • quarkus.http.non-application-root-path: This property was introduced in recent versions of Quarkus (2.x) for non-application endpoints.

  • quarkus.smallrye-health.root-path: The location of the all-encompassing health endpoint.

  • quarkus.smallrye-health.readiness-path: The location of the readiness endpoint.

  • quarkus.smallrye-health.liveness-path: The location of the liveness endpoint.

  • quarkus.smallrye-health.startup-path: The location of the startup endpoint.

Note: the behavior of these properties has changed since Quarkus 1.11.x (e.g. leading slashes in health and liveness paths are now taken into account). openshift-maven-plugin also checks the Quarkus version along with the values of these properties in order to resolve the effective health endpoints.

You can read more about these flags in Quarkus Documentation.

The enricher will use the following settings by default:

  • scheme : HTTP

  • port : 8080

  • failureThreshold : 3

  • successThreshold : 1

  • livenessInitialDelay : 10

  • readinessInitialDelay : 5

  • startupInitialDelay : 5

  • livenessPath : q/health/live

  • readinessPath : q/health/ready

  • startupPath : q/health/started

These values can be configured in the enricher configuration of the openshift-maven-plugin as shown below:

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <version>1.17.0</version>
  <configuration>
    <enricher>
      <config>
        <jkube-healthcheck-quarkus>
          <failureThreshold>3</failureThreshold>
          <successThreshold>1</successThreshold>
          <livenessInitialDelay>5</livenessInitialDelay>
        </jkube-healthcheck-quarkus>
      </config>
    </enricher>
  </configuration>
</plugin>
jkube-healthcheck-micronaut

This enricher adds Kubernetes readiness and liveness probes for Micronaut based projects.

The application should be configured as follows to enable the enricher:

endpoints:
  health:
    enabled: true

The enricher will try to discover the settings from the application.properties / application.yaml Micronaut configuration file.

Probe configuration

You can configure the different aspects of the probes.

Table 90. Micronaut HealthCheck Enricher probe configuration
Element Description Property

readinessProbeInitialDelaySeconds

Number of seconds after the container has started before the readiness probe is initialized.

jkube.enricher.jkube-healthcheck-micronaut.readinessProbeInitialDelaySeconds

readinessProbePeriodSeconds

How often (in seconds) to perform the readiness probe.

jkube.enricher.jkube-healthcheck-micronaut.readinessProbePeriodSeconds

livenessProbeInitialDelaySeconds

Number of seconds after the container has started before the liveness probe is initialized.

jkube.enricher.jkube-healthcheck-micronaut.livenessProbeInitialDelaySeconds

livenessProbePeriodSeconds

How often (in seconds) to perform the liveness probe.

jkube.enricher.jkube-healthcheck-micronaut.livenessProbePeriodSeconds

failureThreshold

Minimum consecutive failures for the probes to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-micronaut.failureThreshold

successThreshold

Minimum consecutive successes for the probes to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-micronaut.successThreshold

timeoutSeconds

Number of seconds after which the probes timeout.

jkube.enricher.jkube-healthcheck-micronaut.timeoutSeconds

scheme

Scheme to use for connecting to the host.

Defaults to HTTP.

jkube.enricher.jkube-healthcheck-micronaut.scheme

port

Port number to access the container.

Defaults to the one provided in the Image configuration.

jkube.enricher.jkube-healthcheck-micronaut.port

path

Path to access on the HTTP server.

Defaults to /health.

jkube.enricher.jkube-healthcheck-micronaut.path
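For example, a sketch overriding the health path and timeout (the values are illustrative):

<configuration>
  <enricher>
    <config>
      <jkube-healthcheck-micronaut>
        <path>/custom/health</path>
        <timeoutSeconds>5</timeoutSeconds>
      </jkube-healthcheck-micronaut>
    </config>
  </enricher>
</configuration>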

jkube-healthcheck-vertx

This enricher adds Kubernetes readiness and liveness probes for Eclipse Vert.x applications. The readiness probe lets Kubernetes detect when the application is ready, while the liveness probe allows Kubernetes to verify that the application is still alive.

This enricher allows configuring the readiness and liveness probes. The following probe types are supported: http (emit HTTP requests), tcp (open a socket), exec (execute a command).

By default, this enricher uses the same configuration for liveness and readiness probes, but specific configurations can be provided too. The configurations can be overridden using the project’s properties.

Using the jkube-healthcheck-vertx enricher

The enricher is automatically executed if your project uses the vertx-maven-plugin or depends on io.vertx:vertx-core. However, by default, no health check will be added to your deployment unless configured explicitly.

Minimal configuration

The minimal configuration to add health checks is the following:

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <version>1.17.0</version>
  <executions>
    <execution>
      <id>jkube</id>
      <goals>
        <goal>resource</goal>
        <goal>helm</goal>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <enricher>
      <config>
        <jkube-healthcheck-vertx>
            <path>/health</path>
        </jkube-healthcheck-vertx>
      </config>
    </enricher>
  </configuration>
</plugin>

It configures the readiness and liveness health checks using HTTP requests on the port 8080 (default port) and on the path /health. The defaults are:

  • port = 8080 (for HTTP)

  • scheme = HTTP

  • path = none (disabled)

The previous configuration can also be given using the project’s properties:

vertx.health.path = /health
Configuring differently the readiness and liveness health checks

You can provide two different configurations for the readiness and liveness checks:

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <version>1.17.0</version>
  <executions>
    <execution>
      <id>jkube</id>
      <goals>
        <goal>resource</goal>
        <goal>helm</goal>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <enricher>
      <config>
        <jkube-healthcheck-vertx>
            <readiness>
              <path>/ready</path>
            </readiness>
            <liveness>
              <path>/health</path>
            </liveness>
        </jkube-healthcheck-vertx>
      </config>
    </enricher>
  </configuration>
</plugin>

You can also use the readiness and liveness chunks in user properties:

vertx.health.readiness.path = /ready
vertx.health.liveness.path = /health

Shared (generic) configuration can be set outside of the specific configuration. For instance, to use the port 8081:

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <version>1.17.0</version>
  <executions>
    <execution>
      <id>jkube</id>
      <goals>
        <goal>resource</goal>
        <goal>helm</goal>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <enricher>
      <config>
        <jkube-healthcheck-vertx>
            <port>8081</port>
            <readiness>
              <path>/ready</path>
            </readiness>
            <liveness>
              <path>/health</path>
            </liveness>
        </jkube-healthcheck-vertx>
      </config>
    </enricher>
  </configuration>
</plugin>

Or:

vertx.health.port = 8081
vertx.health.readiness.path = /ready
vertx.health.liveness.path = /health
Configuration Structure

The configuration is structured as follows:

<config>
    <jkube-healthcheck-vertx>
        <!-- Generic configuration, applied to both liveness and readiness -->
        <path>/both</path>
        <liveness>
            <!-- Specific configuration for the liveness probe -->
            <port-name>ping</port-name>
        </liveness>
        <readiness>
            <!-- Specific configuration for the readiness probe -->
            <port-name>ready</port-name>
        </readiness>
    </jkube-healthcheck-vertx>
</config>

The same structure is used in project’s properties:

# Generic configuration given as vertx.health.$attribute
vertx.health.path = /both
# Specific liveness configuration given as vertx.health.liveness.$attribute
vertx.health.liveness.port-name = ping
# Specific readiness configuration given as vertx.health.readiness.$attribute
vertx.health.readiness.port-name = ready

Important: The project’s plugin configuration overrides the project’s properties. The overriding rules are: specific configuration > specific properties > generic configuration > generic properties.
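
For instance, assuming the following properties are set, the liveness probe would use /live (the specific value), while the readiness probe would fall back to /both (the generic one):

# generic property, applies where no specific value is given
vertx.health.path = /both
# specific liveness property, overrides the generic one for the liveness probe
vertx.health.liveness.path = /live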

Probe configuration

You can configure the different aspects of the probes. These attributes can be configured for both the readiness and liveness probes or be specific to one.

Table 91. Vert.x HealthCheck Enricher probe configuration
Element Description Property

type

The probe type among http (default), tcp and exec.

Defaults to http.

vertx.health.type

jkube.enricher.jkube-healthcheck-vertx.type

initial-delay

Number of seconds after the container has started before probes are initiated.

vertx.health.initial-delay

jkube.enricher.jkube-healthcheck-vertx.initial-delay

period

How often (in seconds) to perform the probe.

vertx.health.period

jkube.enricher.jkube-healthcheck-vertx.period

timeout

Number of seconds after which the probe times out.

vertx.health.timeout

jkube.enricher.jkube-healthcheck-vertx.timeout

success-threshold

Minimum consecutive successes for the probe to be considered successful after having failed.

vertx.health.success-threshold

jkube.enricher.jkube-healthcheck-vertx.success-threshold

failure-threshold

Minimum consecutive failures for the probe to be considered failed after having succeeded.

vertx.health.failure-threshold

jkube.enricher.jkube-healthcheck-vertx.failure-threshold

HTTP specific probe configuration

When using HTTP GET requests to determine readiness or liveness, several aspects can be configured. HTTP probes are used by default. To be explicit, set the type attribute to http.

Table 92. Vert.x HealthCheck Enricher HTTP probe configuration
Element Description Property

scheme

Scheme to use for connecting to the host.

Defaults to HTTP.

vertx.health.scheme

jkube.enricher.jkube-healthcheck-vertx.scheme

path

Path to access on the HTTP server. An empty path disables the check.

vertx.health.path

jkube.enricher.jkube-healthcheck-vertx.path

headers

Custom headers to set in the request. HTTP allows repeated headers. It cannot be configured using project’s properties. An example is available below.

vertx.health.headers

jkube.enricher.jkube-healthcheck-vertx.headers

port

Port number to access the container. A 0 or negative number disables the check.

Defaults to 8080.

vertx.health.port

jkube.enricher.jkube-healthcheck-vertx.port

port-name

Name of the port to access on the container. If neither the port nor the port-name is set, the check is disabled. If both are set the configuration is considered invalid.

vertx.health.port-name

jkube.enricher.jkube-healthcheck-vertx.port-name

Here is an example of HTTP probe configuration:

<config>
    <jkube-healthcheck-vertx>
        <initialDelay>3</initialDelay>
        <period>3</period>
        <liveness>
            <port>8081</port>
            <path>/ping</path>
            <scheme>HTTPS</scheme>
            <headers>
                <X-Custom-Header>Awesome</X-Custom-Header>
            </headers>
        </liveness>
        <readiness>
            <!-- disable the readiness probe -->
            <port>-1</port>
        </readiness>
    </jkube-healthcheck-vertx>
</config>
TCP specific probe configuration

You can also configure the probes to just open a socket on a specific port. The type attribute must be set to tcp.

Table 93. Vert.x HealthCheck Enricher TCP probe configuration
Element Description Property

port

Port number to access the container. A 0 or negative number disables the check.

vertx.health.port

jkube.enricher.jkube-healthcheck-vertx.port

port-name

Name of the port to access on the container. If neither the port nor the port-name is set, the check is disabled. If both are set the configuration is considered invalid.

vertx.health.port-name

jkube.enricher.jkube-healthcheck-vertx.port-name

For example:

<config>
    <jkube-healthcheck-vertx>
        <initialDelay>3</initialDelay>
        <period>3</period>
        <liveness>
            <type>tcp</type>
            <port>8081</port>
        </liveness>
        <readiness>
            <!-- use HTTP Get probe -->
            <path>/ping</path>
            <port>8080</port>
        </readiness>
    </jkube-healthcheck-vertx>
</config>
Exec probe configuration

You can also configure the probes to execute a command. If the command succeeds, returning 0, Kubernetes considers the pod to be alive and healthy. If the command returns a non-zero value, Kubernetes kills the pod and restarts it. To use a command, you must set the type attribute to exec:

<config>
    <jkube-healthcheck-vertx>
        <initialDelay>3</initialDelay>
        <period>3</period>
        <liveness>
            <type>exec</type>
            <command>
                <cmd>cat</cmd>
                <cmd>/tmp/healthy</cmd>
            </command>
        </liveness>
        <readiness>
            <!-- use HTTP Get probe -->
            <path>/ping</path>
            <port>8080</port>
        </readiness>
    </jkube-healthcheck-vertx>
</config>

As you can see in the snippet above, the command is passed using the command attribute. This attribute cannot be configured using the project’s properties. An empty command disables the check.

Disabling health checks

You can disable the checks by setting:

  • the port to 0 or to a negative number for http and tcp probes

  • the command to an empty list for exec

In the first case, you can use project’s properties to disable them:

Disables tcp and http probes
vertx.health.port = -1

For http probes, an empty or unset path also disables the probe.

jkube-healthcheck-smallrye

This enricher adds Kubernetes readiness, liveness, and startup probes for projects which have the io.smallrye:smallrye-health dependency added for health management. Note that Kubernetes startup probes are only added in projects using MicroProfile 3.1 and later.

Probe configuration

You can configure the different aspects of the probes.

Table 94. SmallRye HealthCheck Enricher probe configuration
Element Description Property

scheme

Scheme to use for connecting to the host.

Defaults to HTTP.

jkube.enricher.jkube-healthcheck-smallrye.scheme

port

Port number to access the container.

Defaults to 9080.

jkube.enricher.jkube-healthcheck-smallrye.port

livenessFailureThreshold

Configures failureThreshold field in .livenessProbe. Minimum consecutive failures for the probes to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-smallrye.livenessFailureThreshold

livenessSuccessThreshold

Configures successThreshold field in .livenessProbe. Minimum consecutive successes for the probes to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-smallrye.livenessSuccessThreshold

livenessInitialDelay

Configures initialDelaySeconds field in .livenessProbe. Number of seconds after the container has started before liveness or readiness probes are initiated.

Defaults to 0 seconds.

jkube.enricher.jkube-healthcheck-smallrye.livenessInitialDelay

livenessPeriodSeconds

Configures periodSeconds field in .livenessProbe. How often (in seconds) to perform the liveness probe.

Defaults to 10.

jkube.enricher.jkube-healthcheck-smallrye.livenessPeriodSeconds

livenessPath

Path to access on the application server.

Defaults to /health/live.

jkube.enricher.jkube-healthcheck-smallrye.livenessPath

readinessFailureThreshold

Configures failureThreshold field in .readinessProbe. Minimum consecutive failures for the probes to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-smallrye.readinessFailureThreshold

readinessSuccessThreshold

Configures successThreshold field in .readinessProbe. Minimum consecutive successes for the probes to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-smallrye.readinessSuccessThreshold

readinessInitialDelay

Configures initialDelaySeconds field in .readinessProbe. Number of seconds after the container has started before liveness or readiness probes are initiated.

Defaults to 0 seconds.

jkube.enricher.jkube-healthcheck-smallrye.readinessInitialDelay

readinessPeriodSeconds

Configures periodSeconds field in .readinessProbe. How often (in seconds) to perform the readiness probe.

Defaults to 10.

jkube.enricher.jkube-healthcheck-smallrye.readinessPeriodSeconds

readinessPath

Path to access on the application server.

Defaults to /health/ready.

jkube.enricher.jkube-healthcheck-smallrye.readinessPath

startupFailureThreshold

Configures failureThreshold field in .startupProbe. Minimum consecutive failures for the probes to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-smallrye.startupFailureThreshold

startupSuccessThreshold

Configures successThreshold field in .startupProbe. Minimum consecutive successes for the probes to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-smallrye.startupSuccessThreshold

startupInitialDelay

Configures initialDelaySeconds field in .startupProbe. Number of seconds after the container has started before liveness or startup probes are initiated.

Defaults to 0 seconds.

jkube.enricher.jkube-healthcheck-smallrye.startupInitialDelay

startupPeriodSeconds

Configures periodSeconds field in .startupProbe. How often (in seconds) to perform the startup probe.

Defaults to 10.

jkube.enricher.jkube-healthcheck-smallrye.startupPeriodSeconds

startupPath

Path to access on the application server.

Defaults to /health/started.

jkube.enricher.jkube-healthcheck-smallrye.startupPath
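
As with the other health check enrichers, these elements can also be set in the plugin’s enricher configuration. A sketch with illustrative values (not additional defaults):

<configuration>
  <enricher>
    <config>
      <jkube-healthcheck-smallrye>
        <port>9080</port>
        <livenessInitialDelay>5</livenessInitialDelay>
        <readinessPath>/health/ready</readinessPath>
      </jkube-healthcheck-smallrye>
    </config>
  </enricher>
</configuration>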

jkube-healthcheck-helidon

This enricher adds Kubernetes readiness, liveness, and startup probes for Helidon-based projects. Note that Kubernetes startup probes are only added in projects using MicroProfile 3.1 and later.

The enricher is activated when the io.helidon.health:helidon-health dependency is found in the project’s dependencies, i.e. the application should be configured as follows:

pom.xml
<dependency>
    <groupId>io.helidon.health</groupId>
    <artifactId>helidon-health</artifactId>
</dependency>
Probe configuration

You can configure the different aspects of the probes.

Table 95. Helidon HealthCheck Enricher probe configuration
Element Description Property

scheme

Scheme to use for connecting to the host.

Defaults to HTTP.

jkube.enricher.jkube-healthcheck-helidon.scheme

port

Port number to access the container.

Defaults to 8080.

jkube.enricher.jkube-healthcheck-helidon.port

livenessFailureThreshold

Configures failureThreshold field in .livenessProbe. Minimum consecutive failures for the probes to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-helidon.livenessFailureThreshold

livenessSuccessThreshold

Configures successThreshold field in .livenessProbe. Minimum consecutive successes for the probes to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-helidon.livenessSuccessThreshold

livenessInitialDelay

Configures initialDelaySeconds field in .livenessProbe. Number of seconds after the container has started before liveness or readiness probes are initiated.

Defaults to 0 seconds.

jkube.enricher.jkube-healthcheck-helidon.livenessInitialDelay

livenessPeriodSeconds

Configures periodSeconds field in .livenessProbe. How often (in seconds) to perform the liveness probe.

Defaults to 10.

jkube.enricher.jkube-healthcheck-helidon.livenessPeriodSeconds

livenessPath

Path to access on the application server.

Defaults to /health/live.

jkube.enricher.jkube-healthcheck-helidon.livenessPath

readinessFailureThreshold

Configures failureThreshold field in .readinessProbe. Minimum consecutive failures for the probes to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-helidon.readinessFailureThreshold

readinessSuccessThreshold

Configures successThreshold field in .readinessProbe. Minimum consecutive successes for the probes to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-helidon.readinessSuccessThreshold

readinessInitialDelay

Configures initialDelaySeconds field in .readinessProbe. Number of seconds after the container has started before liveness or readiness probes are initiated.

Defaults to 0 seconds.

jkube.enricher.jkube-healthcheck-helidon.readinessInitialDelay

readinessPeriodSeconds

Configures periodSeconds field in .readinessProbe. How often (in seconds) to perform the readiness probe.

Defaults to 10.

jkube.enricher.jkube-healthcheck-helidon.readinessPeriodSeconds

readinessPath

Path to access on the application server.

Defaults to /health/ready.

jkube.enricher.jkube-healthcheck-helidon.readinessPath

startupFailureThreshold

Configures failureThreshold field in .startupProbe. Minimum consecutive failures for the probes to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-helidon.startupFailureThreshold

startupSuccessThreshold

Configures successThreshold field in .startupProbe. Minimum consecutive successes for the probes to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-helidon.startupSuccessThreshold

startupInitialDelay

Configures initialDelaySeconds field in .startupProbe. Number of seconds after the container has started before liveness or startup probes are initiated.

Defaults to 0 seconds.

jkube.enricher.jkube-healthcheck-helidon.startupInitialDelay

startupPeriodSeconds

Configures periodSeconds field in .startupProbe. How often (in seconds) to perform the startup probe.

Defaults to 10.

jkube.enricher.jkube-healthcheck-helidon.startupPeriodSeconds

startupPath

Path to access on the application server.

Defaults to /health/started.

jkube.enricher.jkube-healthcheck-helidon.startupPath
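
Alternatively, a sketch using the corresponding properties (the values are illustrative):

jkube.enricher.jkube-healthcheck-helidon.port = 8080
jkube.enricher.jkube-healthcheck-helidon.livenessInitialDelay = 5
jkube.enricher.jkube-healthcheck-helidon.readinessPath = /health/ready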

jkube-healthcheck-karaf

This enricher adds Kubernetes readiness and liveness probes for Apache Karaf based projects. It requires that jkube-karaf-checks has been enabled in the Karaf startup features.

The enricher will use the following settings by default:

  • port = 8181

  • scheme = HTTP

  • failureThreshold = 3

  • successThreshold = 1

and uses the path /readiness-check for the readiness check and /health-check for the liveness check.

These options cannot be configured.

jkube-healthcheck-webapp

This enricher adds Kubernetes readiness and liveness probes for web applications. It requires that the maven-war-plugin is configured in your project.

The enricher will use the following settings by default:

  • port = 8080

  • scheme = HTTP

  • path = ``

  • initialReadinessDelay = 10

  • initialLivenessDelay = 180

If the path attribute is not set (the default), this enricher is disabled.

These values can be configured for the enricher in the openshift-maven-plugin configuration as shown below:

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <version>1.17.0</version>
  <executions>
    <execution>
      <id>jkube</id>
      <goals>
        <goal>resource</goal>
        <goal>helm</goal>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <enricher>
      <config>
        <jkube-healthcheck-webapp>
          <path>/</path>
        </jkube-healthcheck-webapp>
      </config>
    </enricher>
  </configuration>
    <!-- ... -->
</plugin>
jkube-healthcheck-wildfly-jar

This enricher adds Kubernetes readiness, liveness, and startup probes to WildFly Bootable JAR applications. The probes rely on the WildFly microprofile-health subsystem’s /health/ready, /health/live, and /health/started endpoints. When the WildFly Bootable JAR Maven plugin is configured with the <cloud> configuration item, the microprofile-health subsystem is enforced in the bootable JAR server configuration.

Note that Kubernetes startup probes are only added in projects using WildFly 25.0.0.Final or later.

This enricher looks for the presence of the <cloud> configuration item in the bootable JAR Maven plugin in order to add health check probes. If the <cloud> item has not been defined, you can still enforce the generation of readiness, liveness, and startup probes by setting enforceProbes=true.

The enricher will use the following settings by default:

  • scheme = HTTP

  • port = 9990

  • livenessPath = /health/live

  • readinessPath = /health/ready

  • startupPath = /health/started

  • livenessInitialDelay = 60

  • readinessInitialDelay = 10

  • startupInitialDelay = 10

  • failureThreshold = 3

  • successThreshold = 1

  • periodSeconds = 10

  • enforceProbes = false

Setting the port to 0 or to a negative number disables health check probes.

These values can be configured for the enricher in the openshift-maven-plugin configuration as shown below:

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <version>1.17.0</version>
  <executions>
    <execution>
      <id>jkube</id>
      <goals>
        <goal>resource</goal>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <enricher>
      <config>
        <jkube-healthcheck-wildfly-jar>
          <port>4444</port>
          <scheme>HTTPS</scheme>
          <livenessPath>/myapp/live</livenessPath>
          <failureThreshold>3</failureThreshold>
          <successThreshold>1</successThreshold>
        </jkube-healthcheck-wildfly-jar>
      </config>
    </enricher>
  </configuration>
</plugin>
jkube-service-discovery

This enricher can be used to add service annotations to facilitate automated discovery of the service by 3scale for Camel RestDSL projects. Other project types may follow at a later time. The enricher extracts the needed information from the camel-context.xml, and the restConfiguration element in particular.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
       http://camel.apache.org/schema/spring       http://camel.apache.org/schema/spring/camel-spring.xsd">
     <camelContext xmlns="http://camel.apache.org/schema/spring">
        <restConfiguration component="servlet" scheme="https"
              contextPath="myapi" apiContextPath="myapi/openapi.json"/>
...

The enricher looks for the scheme, contextPath and apiContextPath attributes, and it will add the following label and annotations:

LABEL discovery.3scale.net/discoverable. The value of the label can be set to "true" or "false". It takes part in the selector definition executed by 3scale to find all services that need discovery, and it can also act as a switch to temporarily turn off the 3scale discovery integration by setting it to "false".

ANNOTATIONS

  • discovery.3scale.net/discovery-version: the version of the 3scale discovery process.

  • discovery.3scale.net/scheme: this can be "http" or "https".

  • discovery.3scale.net/path: (optional) the contextPath of the service if it’s not at the root.

  • discovery.3scale.net/description-path: (optional) the path to the service description document (OpenAPI/Swagger). The path can either be relative or an external full URL.

The following configuration parameters can be used to override the behavior of this enricher:

Table 96. JKube service discovery enricher
Element Description Default

springDir

Path to the spring configuration directory where the camel-context.xml file lives.

/src/main/resources/spring which is used to recognize a Camel RestDSL project.

scheme

The scheme part of the URL where the service is hosted.

http

path

The path part of the URL where the service is hosted.

/

descriptionPath

The path to a location where the service description document is hosted. This path can be either a relative path if the document is self-hosted, or a full URL if the document is hosted externally.

discoveryVersion

The version of the 3scale discovery implementation.

v1

discoverable

Sets the discoverable label to either true or false. If it’s set to "false" 3scale will not try to discover this service.

true

You specify the properties as for any other enricher within the enricher configuration, as in the following example:

Example
<configuration>
  <!-- ... -->
  <enricher>
    <config>
      <jkube-service-discovery>
        <scheme>https</scheme>
        <path>/api</path>
        <descriptionPath>/api/openapi.json</descriptionPath>
      </jkube-service-discovery>
    </config>
  </enricher>
</configuration>

Alternatively, you can use a src/main/jkube/service.yml fragment to override the values. For example:

kind: Service
metadata:
  labels:
    discovery.3scale.net/discoverable : "true"
  annotations:
    discovery.3scale.net/discovery-version : "v1"
    discovery.3scale.net/scheme : "https"
    discovery.3scale.net/path : "/api"
    discovery.3scale.net/description-path : "/api/openapi.json"
  name: my-service
spec:
  type: LoadBalancer

8.2. Enricher API

How to write your own enrichers and install them.

It’s possible to extend Eclipse JKube’s Enricher API to define your own custom Enrichers for your use case. Please refer to the Enricher interface; you can create new enrichers by implementing this interface.

Please check out the Custom Istio Enricher Maven quickstart for a detailed example.

9. Profiles

Profiles can be used to combine a set of enrichers and generators and to give this combination a referable name.

Profiles are defined in YAML. The following example shows a simple profile which uses only the Spring Boot generator and a few enrichers to add a Kubernetes Deployment and a Service:

Profile Definition
- name: my-spring-boot-apps (1)
  generator: (2)
    includes:
      - spring-boot
  enricher: (3)
    includes: (4)
      # Default Deployment object
      - jkube-controller
      # Add a default service
      - jkube-service
    excludes: (5)
      - jkube-icon
    config: (6)
      jkube-service:
        # Expose service as NodePort
        type: NodePort
  order: 10 (7)
- name: another-profile
# ....
1 Profile’s name
2 Generators to use
3 Enrichers to use
4 List of enrichers to include in that given order
5 List of enrichers to exclude (especially useful when extending profiles)
6 Configuration for the enrichers
7 An order which influences how profiles with the same name are merged

Each profiles.yml has a list of profiles which are defined with these elements:

Table 97. Profile elements
Element Description

name

Profile name.

extends

This plugin comes with a set of predefined profiles. These profiles can be extended by defining a custom profile that references the name of the profile to extend in the extends field.

generator

List of generator definitions. See below for the format of these definitions.

enricher

List of enricher definitions. See below for the format of these definitions.

order

The order of the profile which is used when profiles of the same name are merged.

9.1. Generator and Enricher definitions

The definition of generators and enrichers in the profile follows the same format:

Table 98. Generator and Enricher definition
Element Description

includes

List of generators or enrichers to include. The order in the list determines the order in which the processors are applied.

excludes

List of generators or enrichers. These have precedence over includes and will exclude a processor even when it is referenced in an includes section.

config

Configuration for generators or enrichers. This is a map where the keys are the name of the processor to configure and the value is again a map with configuration keys and values specific to the processor. See the documentation of the respective generator or enricher for the available configuration keys.

9.2. Lookup order

Profiles can be defined externally either directly as a build resource in src/main/jkube/profiles.yml or provided as part of a plugin’s dependency, where it is supposed to be included as META-INF/jkube/profiles.yml. Multiple profiles can be included in these profiles.yml descriptors as a list.

If a profile is used then it is looked up from various places in the following order:

  • From the compile and plugin classpath from META-INF/jkube/profiles-default.yml. These files are reserved for profiles defined by this plugin

  • From the compile and plugin classpath from META-INF/jkube/profiles.yml. Use this location for defining your custom profiles which you want to include via dependencies.

  • From the project in src/main/jkube/profiles.yml. The directory can be tuned with the plugin option resourceDir (property: jkube.resourceDir)

When multiple profiles of the same name are found, then these profiles are merged. If the profiles have an order number, then the profile with higher order takes precedence when merging.

For includes of the same processors, the processor is moved to the earliest position. For example, consider the following two profiles with the name my-profile:

Profile A
name: my-profile
enricher:
  includes: [ e1, e2 ]
Profile B
name: my-profile
enricher:
  includes: [ e3, e1 ]
order: 10

then merging results in the following profile (when no order is given, it defaults to 0):

Profile merged
name: my-profile
enricher:
  includes: [ e1, e2, e3 ]
order: 10

Profiles with the same order number are merged according to the lookup order described above, where the latter profile is considered to have the higher order.

The configuration for enrichers and generators is merged, too: higher-order profiles override configuration values with the same key from lower-order profile configurations.

9.3. Using Profiles

Profiles can be selected by defining them in the plugin configuration, by giving a system property or by using special directories in the directory holding the resource fragments.

Profile used in plugin configuration

Here is an example of how a profile can be used in the plugin configuration:

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <configuration>
    <profile>my-spring-boot-apps</profile> (1)
    <!-- ... -->
  </configuration>
</plugin>
1 Name which selects the profile from the profiles.yml or profiles-default.yml file.
Profile as property

Alternatively, a profile can also be specified on the command line or as a project property:

mvn -Djkube.profile=my-spring-boot-apps oc:build oc:apply

If a configuration for enrichers and generators is provided as part of the project plugin’s configuration then this takes precedence and overrides any of the defaults provided by the selected profile.

Profiles for resource fragments

Profiles are also very useful when used together with resource fragments in src/main/jkube. By default, the resource objects defined here are enriched with the configured profile (if any). A different profile can be selected easily by using a subdirectory within src/main/jkube. The name of each subdirectory is interpreted as a profile name, and all resource definition files found in this subdirectory are enriched with the enrichers defined in this profile.

For example, consider the following directory layout:

.
├── src/main/jkube
  ├── app-rc.yml
  ├── app-svc.yml
  └── raw
    ├── couchbase-rc.yml
    └── couchbase-svc.yml

Here, the resource descriptors app-rc.yml and app-svc.yml are enhanced with the enrichers defined in the main configuration. The two files couchbase-rc.yml and couchbase-svc.yml in the subdirectory raw/ are enriched with the profile raw instead. This is a predefined profile which includes no enrichers at all, so the couchbase resource objects are not enriched and are taken over literally. This is an easy way to fine-tune enrichment for different object sets.

9.4. Predefined Profiles

This plugin comes with the following predefined profiles:

Table 99. Predefined Profiles
Profile Description

default

The default profile which is active if no profile is specified. It consists of a curated set of generators and enrichers. See below for the current definition.

minimal

This profile contains no generators and only enrichers for adding default objects (controller and services). No other enrichment is included.

explicit

Like default but without adding default objects like controllers and services.

aggregate

Includes no generators and only the jkube-dependency enricher for picking up and combining resources from the compile time dependencies.

internal-microservice

default profile extension that prevents services from being externally exposed.

security-hardening

default profile extension that enables the security-hardening enricher.

9.5. Extending Profiles

A profile can also extend another profile to avoid repetition. This is useful to add optional enrichers/generators to a given profile or to partially exclude enrichers/generators from another.

- name: security-hardening
  extends: default
  enricher:
    includes:
      - jkube-security-hardening

For example, this profile adds the optional jkube-security-hardening enricher to the default profile.

10. JKube Plugins

This plugin supports so-called jkube-plugins, which have entry points that can be bound to the different JKube operation phases. jkube-plugins are enabled by just declaring a dependency in the plugin declaration:

The following example is from quickstarts/maven/plugin

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>

  <dependencies>
    <dependency>
      <groupId>org.eclipse.jkube.quickstarts.maven</groupId>
      <artifactId>plugin</artifactId>
      <version>${project.version}</version>
    </dependency>
  </dependencies>
</plugin>

jkube-plugins are automatically loaded by JKube when you declare a dependency on a module that contains a descriptor file at META-INF/jkube/plugin listing class names line by line, for example:

src/main/resources/META-INF/jkube/plugin
org.eclipse.jkube.quickstart.plugin.SimpleJKubePlugin

At the moment descriptor files are looked up in these locations:

  • META-INF/maven/io.fabric8/dmp-plugin (Deprecated, kept for backward compatibility)

  • META-INF/jkube/plugin

  • META-INF/jkube-plugin

During a build with oc:build, those classes are loaded and certain fixed methods are called.

A JKube plugin needs to implement the org.eclipse.jkube.api.JKubePlugin interface. At the moment, the following methods are supported:

Method Description

addExtraFiles

A method called by openshift-maven-plugin with a single File argument. This points to a directory jkube-extra which can easily be referenced by a Dockerfile or an assembly. A jkube-plugin will typically create its own subdirectory to avoid a clash with other jkube-plugins.

Check out quickstarts/maven/plugin for a fully working example.

11. Access configuration

11.1. Docker Access

For Kubernetes builds the openshift-maven-plugin uses the Docker remote API, so the URL of your Docker daemon must be specified. The URL can be specified by the dockerHost or machine configuration, or by the DOCKER_HOST environment variable. If none of these is given, the plugin falls back to the standard Docker socket (/var/run/docker.sock, or the //./pipe/docker_engine named pipe on Windows).

The Docker remote API supports communication via SSL and authentication with certificates. The path to the certificates can be specified by the certPath or machine configuration, or by the DOCKER_CERT_PATH environment variable.
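
For example, to point the plugin at a TLS-secured remote daemon (the host and certificate path are illustrative):

export DOCKER_HOST=tcp://docker.example.com:2376
export DOCKER_CERT_PATH=$HOME/.docker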

11.2. OpenShift and Kubernetes Access

The plugin reads your kubeconfig file for your Kubernetes/OpenShift configuration. By default, the kubeconfig file is assumed to be either in ~/.kube/config or in the location given by the KUBECONFIG environment variable.
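
For example, to run the goals against a non-default kubeconfig:

export KUBECONFIG=/path/to/custom/kubeconfig
mvn oc:resource oc:apply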

12. Registry handling

Docker uses registries to store images. The registry is typically specified as part of the name. I.e. if the first part (everything before the first /) contains a dot (.) or colon (:), this part is interpreted as the address (with an optional port) of a remote registry. This registry (or the default docker.io if no registry is given) is used during push and pull operations. This plugin follows the same semantics, so if an image name is specified with a registry part, this registry is contacted. Authentication is explained in the next section.

There are some situations however where you want to have more flexibility for specifying a remote registry. This might be because you do not want to hard code a registry into pom.xml but provide it from the outside with an environment variable or a system property.

This plugin supports various ways of specifying a registry:

  • If the image name contains a registry part, this registry is used unconditionally and can not be overwritten from the outside.

  • If an image name doesn’t contain a registry, then by default the default Docker registry docker.io is used for push and pull operations. But this can be overwritten through various means:

    • If the <image> configuration contains a <registry> subelement this registry is used.

    • Otherwise, a global configuration element <registry> is evaluated, which can also be provided as a system property via -Djkube.docker.registry.

    • Finally an environment variable DOCKER_REGISTRY is looked up for detecting a registry.

This registry is used for pulling (i.e. for autopull of the base image when doing oc:build) and pushing with oc:push. However, when these two goals are combined on the command line, as in mvn -Djkube.docker.registry=myregistry:5000 package oc:build oc:push, the same registry is used for both operations. For more fine-grained control, separate registries for pull and push can be specified:

  • In the plugin’s configuration with the parameters <pullRegistry> and <pushRegistry>, respectively.

  • With the system properties jkube.docker.pull.registry and jkube.docker.push.registry, respectively.
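
For example (the registry hosts are illustrative):

mvn -Djkube.docker.pull.registry=mirror.example.com:5000 \
    -Djkube.docker.push.registry=registry.example.com:5000 \
    package oc:build oc:push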

Example
<configuration>
  <registry>docker.jolokia.org:443</registry>
  <images>
    <image>
      <!-- Without an explicit registry ... -->
      <name>jolokia/jolokia-java</name>
      <!-- ... hence use this registry -->
      <registry>docker.ro14nd.de</registry>
      <!-- ... -->
    </image>
    <image>
      <name>postgresql</name>
      <!-- No registry in the name, hence use the globally
           configured docker.jolokia.org:443 as registry -->
      <!-- ... -->
    </image>
    <image>
      <!-- Explicitly specified always wins -->
      <name>docker.example.com:5000/another/server</name>
    </image>
  </images>
</configuration>

There is some special behaviour when using an externally provided registry as described above:

  • When pulling, the pulled image will also be tagged with a repository name without the registry part. The reasoning is that the image can then also be referenced by the configuration when the registry is no longer specified explicitly.

  • When pushing a local image, a tag including the registry is temporarily added and removed after the push. This is required because Docker can only push registry-named images.

13. Authentication

When pulling (via the autoPull mode of oc:build) or pushing an image, it might be necessary to authenticate against a Docker registry.

There are five different locations searched for credentials. In order, these are:

  • Providing system properties jkube.docker.username and jkube.docker.password from the outside.

  • Using a <authConfig> section in the plugin configuration with <username> and <password> elements.

  • Using OpenShift configuration in ~/.config/kube

  • Using a <server> configuration in ~/.m2/settings.xml

  • Login into a registry with docker login (credentials in a credential helper or in ~/.docker/config.json)

Using the username and password directly in the pom.xml is not recommended, since the file is widely visible; it is the easiest and most transparent way, though. Using an <authConfig> is straightforward:

<plugin>
  <configuration>
    <image>consol/tomcat-7.0</image>
    <!-- ... -->
    <authConfig>
      <username>jolokia</username>
      <password>s!cr!t</password>
    </authConfig>
  </configuration>
</plugin>

The system property provided credentials are a good compromise when using CI servers like Jenkins. You simply provide the credentials from the outside:

Example
mvn -Djkube.docker.username=jolokia -Djkube.docker.password=s!cr!t oc:push

The most mavenish way is to add a server to the Maven settings file ~/.m2/settings.xml:

Example
<servers>
  <server>
    <id>docker.io</id>
    <username>jolokia</username>
    <password>s!cr!t</password>
  </server>
  <!-- ... -->
</servers>

The server id must specify the registry to push to or pull from, which by default is the central index docker.io (with index.docker.io and registry.hub.docker.com as fallbacks). Here you should add your docker.io account for your repositories. If you have multiple accounts for the same registry, a second user can be specified as part of the id: in the example above, if you have a second account 'jkubeio', use <id>docker.io/jkubeio</id> for this second entry, i.e. append the username with a slash to the id. The entry without a username is only used if no server entry with a username-appended id matches.

The most secure way is to rely on docker’s credential store or credential helper and read confidential information from an external credentials store, such as the native keychain of the operating system. Follow the instruction on the docker login documentation.

As a final fallback, this plugin consults $DOCKER_CONFIG/config.json if DOCKER_CONFIG is set, or ~/.docker/config.json if not, and reads credentials stored directly within this file. Credentials end up there when logging in to a registry with docker login using older versions of Docker (pre 1.13.0) or when Docker is not configured to use a credential store.

13.1. Pull vs. Push Authentication

The credentials lookup described above is valid for both push and pull operations. In order to narrow things down, credentials can be provided for pull or push operations alone:

In an <authConfig> section, a sub-section <pull> and/or <push> can be added. In the example below, the provided credentials are only used for image push operations:

Example
<plugin>
  <configuration>
    <image>consol/tomcat-7.0</image>
    <!-- ... -->
    <authConfig>
      <push>
         <username>jolokia</username>
         <password>s!cr!t</password>
      </push>
    </authConfig>
  </configuration>
</plugin>

When the credentials are given on the command line as system properties, then the properties jkube.docker.pull.username / jkube.docker.pull.password and jkube.docker.push.username / jkube.docker.push.password are used for pull and push operations, respectively (when given). Either way, the standard lookup algorithm as described in the previous section is used as fallback.

13.2. OpenShift Authentication

When working with the default registry in OpenShift, the credentials to authenticate are the OpenShift username and access token. So, a typical interaction with the OpenShift registry from the outside is:

oc login
...
mvn -Djkube.docker.registry=docker-registry.domain.com:80/default/myimage \
    -Djkube.docker.username=$(oc whoami) \
    -Djkube.docker.password=$(oc whoami -t)

(note that the image’s username part ("default" here) must correspond to an OpenShift project with the same name to which your currently connected account has access).

This can be simplified by using the system property docker.useOpenShiftAuth, in which case the plugin does the lookup itself. The equivalent of the example above is:

oc login
...
mvn -Ddocker.registry=docker-registry.domain.com:80/default/myimage \
    -Ddocker.useOpenShiftAuth

Alternatively the configuration option <useOpenShiftAuth> can be added to the <authConfig> section.

For dedicated pull and push configuration the system properties jkube.docker.pull.useOpenShiftAuth and jkube.docker.push.useOpenShiftAuth are available, as well as the configuration option <useOpenShiftAuth> in a <pull> or <push> section within the <authConfig> configuration.

If useOpenShiftAuth is enabled then the OpenShift configuration will be looked up in $KUBECONFIG or, if this environment variable is not set, in ~/.kube/config.

13.3. Password encryption

Regardless of which mode you choose, you can encrypt passwords as described in the Maven documentation. Assuming that you have set up a master password in ~/.m2/security-settings.xml, you can easily encrypt passwords:

Example
$ mvn --encrypt-password
Password:
{QJ6wvuEfacMHklqsmrtrn1/ClOLqLm8hB7yUL23KOKo=}

This password can then be used in authConfig, docker.password and/or the <server> settings configuration. However, putting an encrypted password into authConfig in the pom.xml doesn’t make much sense, since this password is encrypted with an individual master password.
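
A sketch of an encrypted password used in a <server> entry:

<servers>
  <server>
    <id>docker.io</id>
    <username>jolokia</username>
    <password>{QJ6wvuEfacMHklqsmrtrn1/ClOLqLm8hB7yUL23KOKo=}</password>
  </server>
</servers>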

13.4. Extended Authentication

Some Docker registries require additional steps to authenticate. Amazon ECR requires using an IAM access key to obtain temporary docker login credentials. The oc:push and oc:build goals automatically execute this exchange for any registry of the form <awsAccountId>.dkr.ecr.<awsRegion>.amazonaws.com, unless the skipExtendedAuth configuration (jkube.docker.skip.extendedAuth property) is set to true.

Note that for an ECR repository with the URI 123456789012.dkr.ecr.eu-west-1.amazonaws.com/example/image, the plugin’s jkube.docker.registry should be set to 123456789012.dkr.ecr.eu-west-1.amazonaws.com, and example/image is the <name> of the image.

You can use any IAM access key with the necessary permissions in any of the locations mentioned above except ~/.docker/config.json. Use the IAM access key ID as the username and the secret access key as the password. In case you’re using temporary security credentials provided by the AWS Security Token Service (AWS STS), you have to provide the security token as well. To do so, specify an <auth> element alongside username & password in the authConfig.

The plugin will attempt to read AWS credentials from some well-known locations in case there is no explicit configuration. If any such credentials are accessible, they will be used.

For a more complete, robust and reliable authentication experience, you can add the AWS SDK for Java as a dependency:
<plugins>
    <plugin>
        <groupId>org.eclipse.jkube</groupId>
        <artifactId>kubernetes-maven-plugin</artifactId>
        <dependencies>
            <dependency>
                <groupId>com.amazonaws</groupId>
                <artifactId>aws-java-sdk-core</artifactId>
                <version>1.11.707</version>
            </dependency>
        </dependencies>
    </plugin>
</plugins>

This extra dependency allows the usage of all options that the AWS default credential provider chain provides.

If the AWS SDK is found in the classpath, it takes precedence over the custom AWS credentials lookup mechanisms listed above.

14. Volume Configuration

openshift-maven-plugin supports volume configuration in XML format in pom.xml. These are the volume types which are supported:

Table 100. Supported Volume Types
Volume Type Description

hostPath

Mounts a file or directory from the host node’s filesystem into your pod

emptyDir

Containers in the Pod can all read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each Container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.

gitRepo

It mounts an empty directory and clones a git repository into it for your Pod to use.

secret

It is used to pass sensitive information, such as passwords, to Pods.

nfsPath

Allows an existing NFS (Network File System) share to be mounted into your Pod.

gcePdName

Mounts a Google Compute Engine (GCE) persistent disk (PD) into your Pod. You must create the PD using gcloud, the GCE API, or the UI before you can use it.

glusterFsPath

Allows a Glusterfs (an open source networked filesystem) volume to be mounted into your Pod.

persistentVolumeClaim

Used to mount a PersistentVolume into a Pod.

awsElasticBlockStore

Mounts an Amazon Web Services(AWS) EBS Volume into your Pod.

azureDisk

Mounts a Microsoft Azure Data Disk into a Pod

azureFile

Mounts a Microsoft Azure File Volume (SMB 2.1 and 3.0) into a Pod.

cephfs

Allows an existing CephFS volume to be mounted into your Pod. You must have your own Ceph server running with the share exported before you can use it.

fc

Allows an existing fibre channel volume to be mounted in a Pod. You must configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them.

flocker

Flocker is an open source clustered Container data volume manager. A flocker volume allows a Flocker dataset to be mounted into a Pod. You must have your own Flocker installation running before you can use it.

iscsi

Allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod.

portworxVolume

A portworxVolume is an elastic block storage layer that runs hyperconverged with Kubernetes.

quobyte

Allows an existing Quobyte volume to be mounted into your Pod. You must have your own Quobyte setup running with the volumes created.

rbd

Allows a Rados Block Device volume to be mounted into your Pod.

scaleIO

ScaleIO is a software-based storage platform that can use existing hardware to create clusters of scalable shared block networked storage. The scaleIO volume plugin allows deployed Pods to access existing ScaleIO volumes.

storageOS

A storageos volume allows an existing StorageOS volume to be mounted into your Pod. You must run the StorageOS container on each node that wants to access StorageOS volumes.

vsphereVolume

Used to mount a vSphere VMDK volume into your Pod.

downwardAPI

A downwardAPI volume is used to make downward API data available to applications. It mounts a directory and writes the requested data in plain text files.
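
As a sketch, volumes are declared within the plugin’s resources configuration; the exact set of elements depends on the volume type (the values below are illustrative for a hostPath volume):

<configuration>
  <resources>
    <volumes>
      <volume>
        <name>scratch</name>
        <type>hostPath</type>
        <path>/var/scratch</path>
      </volume>
    </volumes>
  </resources>
</configuration>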

15. Integrations

15.1. Dekorate

openshift-maven-plugin provides a Zero Configuration approach to delegate deployment manifests generation to Dekorate.

Just by adding a dependency on the Dekorate library in the pom.xml file, all manifest generation will be delegated to Dekorate:

  <dependencies>
    <!-- ... -->
    <dependency>
      <groupId>io.dekorate</groupId>
      <artifactId>option-annotations</artifactId>
      <version>${dekorate.version}</version>
    </dependency>
    <dependency>
      <groupId>io.dekorate</groupId>
      <artifactId>openshift-annotations</artifactId>
      <version>${dekorate.version}</version>
    </dependency>
    <dependency>
      <groupId>io.dekorate</groupId>
      <artifactId>kubernetes-annotations</artifactId>
      <version>${dekorate.version}</version>
    </dependency>
    <dependency>
      <groupId>io.dekorate</groupId>
      <artifactId>dekorate-spring-boot</artifactId>
      <version>${dekorate.version}</version>
    </dependency>
  </dependencies>

A full example of the integration can be found in the directory quickstarts/maven/spring-boot-dekorate.

An experimental feature is also provided to merge resources generated both by openshift-maven-plugin and Dekorate. You can activate this feature by using the following flag -Djkube.mergeWithDekorate in the command-line, or setting it up as a property (<jkube.mergeWithDekorate>true</jkube.mergeWithDekorate>).

15.2. JIB (Java Image Builder)

openshift-maven-plugin also provides an option to build container images without access to any Docker daemon. You just need to set the jkube.build.strategy property to jib, which delegates the build process to JIB. It creates a tarball inside your target directory which can be loaded into any Docker daemon afterwards. You may also push the image to your specified registry using the push goal with this build strategy enabled.
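
For example:

  mvn oc:build -Djkube.build.strategy=jib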

You can find more details at Spring Boot JIB Quickstart.

15.3. Buildpacks

openshift-maven-plugin provides the required features for users to leverage Cloud Native Buildpacks for building container images. You can enable this build strategy by setting the jkube.build.strategy property to buildpacks.

Access to a Docker daemon is required in order to use Buildpacks as mentioned in Buildpack Prerequisites.

  mvn oc:build -Djkube.build.strategy=buildpacks

openshift-maven-plugin downloads Pack CLI to the user’s $HOME/.jkube folder and starts the pack build process. If the download for the Pack CLI binary fails, openshift-maven-plugin looks for any locally installed Pack CLI version.

15.3.1. Buildpack Builder Image

  • If no builder image is configured, then openshift-maven-plugin uses paketobuildpacks/builder:base as the default builder image.

  • If a builder image is provided in the local Pack config, openshift-maven-plugin uses the builder image specified in that file. For example, if the user has this image set in the $HOME/.pack/config.toml file, openshift-maven-plugin will use testuser/buildpacks-quarkus-builder:latest as the Buildpacks builder image:

default-builder-image = "testuser/buildpacks-quarkus-builder:latest"
  • It’s also possible to configure the Buildpacks builder image using a property or XML configuration. You can use this property to configure the builder image:

jkube.generator.buildpacksBuilderImage = "testuser/buildpacks-quarkus-builder:latest"
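
A sketch of the equivalent XML generator configuration (assuming the spring-boot generator is the one in use):

<configuration>
  <generator>
    <config>
      <spring-boot>
        <buildpacksBuilderImage>testuser/buildpacks-quarkus-builder:latest</buildpacksBuilderImage>
      </spring-boot>
    </config>
  </generator>
</configuration>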

The Buildpacks integration simply delegates the build process to the pack CLI. openshift-maven-plugin has no control over how the image gets built; this is controlled by the Buildpacks builder image. The user is responsible for making sure that a valid configuration is provided for Buildpacks.

16. FAQ

16.1. General questions

16.1.1. How do I define an environment variable?

The easiest way is to add a src/main/jkube/deployment.yml file to your project containing something like:

spec:
  template:
    spec:
      containers:
        - env:
          - name: FOO
            value: bar

The above will generate an environment variable $FOO of value bar

For a full list of the environment variables used in the Java base images, see this list

16.1.2. How do I define a system property?

The simplest way is to add system properties to the JAVA_OPTIONS environment variable.

For a full list of the environment variables used in the Java base images, see this list

e.g. add a src/main/jkube/deployment.yml file to your project containing something like:

spec:
 template:
   spec:
     containers:
       - env:
         - name: JAVA_OPTIONS
           value: "-Dfoo=bar -Dxyz=abc"

The above will define the system properties foo=bar and xyz=abc

16.1.3. How do I mount a config file from a ConfigMap?

First you need to create your ConfigMap resource via a file src/main/jkube/configmap.yml

data:
  application.properties: |
    # spring application properties file
    welcome = Hello from Kubernetes ConfigMap!!!
    dummy = some value

Then mount the entry in the ConfigMap into your Deployment via a file src/main/jkube/deployment.yml

metadata:
  annotations:
    configmap.jkube.io/update-on-change: ${project.artifactId}
spec:
  replicas: 1
  template:
    spec:
      volumes:
        - name: config
          configMap:
            name: ${project.artifactId}
            items:
            - key: application.properties
              path: application.properties
      containers:
        - volumeMounts:
            - name: config
              mountPath: /deployments/config

Note that the annotation configmap.jkube.io/update-on-change is optional; it’s used if your application is not capable of watching for changes in the /deployments/config/application.properties file. In that case, if you are also running the configmapcontroller, changes to the ConfigMap will cause a rolling upgrade of your application so that it uses the new contents.

16.1.4. How do I use a Persistent Volume?

First you need to create your PersistentVolumeClaim resource via a file src/main/jkube/foo-pvc.yml, where foo is the name of the PersistentVolumeClaim. It might be that your app requires multiple persistent volumes, in which case you will need multiple PersistentVolumeClaim resources.

spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

Then to mount the PersistentVolumeClaim into your Deployment create a file src/main/jkube/deployment.yml

spec:
  template:
    spec:
      volumes:
      - name: foo
        persistentVolumeClaim:
          claimName: foo
      containers:
      - volumeMounts:
        - mountPath: /whatnot
          name: foo

Where the above defines the PersistentVolumeClaim called foo which is then mounted into the container at /whatnot

16.1.5. How do I generate Ingress for my generated Service?

Ingress generation is supported by Eclipse JKube for Service objects of type LoadBalancer. In order to generate an Ingress, you need to set the jkube.createExternalUrls property to true and the jkube.domain property to the desired host suffix; it will be appended to your service name to form the host value.

You can also provide a host for it in XML config like this:

<project>
 <!-- ... -->
  <properties>
    <!-- ... -->
     <jkube.createExternalUrls>true</jkube.createExternalUrls>
     <jkube.domain>example.com</jkube.domain>
  </properties>
</project>

You can find an example in our spring-boot quickstart in the kubernetes-with-ingress profile.

16.1.6. How do I build the image with Podman instead of Docker?

When invoking oc:build with only Podman installed, the following error appears:

No <dockerHost> given, no DOCKER_HOST environment variable, no read/writable '/var/run/docker.sock' or '//./pipe/docker_engine' and no external provider like Docker machine configured -> [Help 1]

By default, JKube relies on the Docker REST API at /var/run/docker.sock to build Docker images. Using Podman, even with the Docker CLI emulation, won’t work, as it is just a CLI wrapper and does not provide any Docker REST API. However, it is possible to start an emulated Docker REST API with the podman command:

export DOCKER_HOST="unix:/run/user/$(id -u)/podman/podman.sock"
podman system service --time=0 unix:/run/user/$(id -u)/podman/podman.sock &

16.1.7. How to configure image name generated by Eclipse JKube?

If you want to configure the image name generated by Eclipse JKube, which is %g/%a:%l by default (see [image-name]), it depends on which mode you’re using in Eclipse JKube:

  • If you’re using zero configuration mode, which means you depend on Eclipse JKube Generators to generate an opinionated image, you can configure the image name using the jkube.generator.name Maven property (see the sketch after this list).

  • If you’re providing XML image configuration, the image name is picked from the name tag, as in this example:

<image>
  <name>myusername/myimagename:latest</name> <!-- Your image name -->
  <build>
      <from>openjdk:latest</from>
      <cmd>java -jar maven/${project.artifactId}-${project.version}.jar</cmd>
  </build>
</image>
  • If you’re using Simple Dockerfile Mode, you can configure the image name via the jkube.image.name or jkube.generator.name properties.
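
A property-based sketch covering the first and last options (the image name is illustrative):

<properties>
  <jkube.generator.name>myusername/myimagename:latest</jkube.generator.name>
</properties>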

17. Kind/Filename Mapping

17.1. Default Kind/Filename Mapping

Kind Filename Type

BuildConfig

bc, buildconfig

ClusterRole

cr, crole, clusterrole

ConfigMap

cm, configmap

ClusterRoleBinding

crb, clusterrb, clusterrolebinding

CronJob

cj, cronjob

CustomResourceDefinition

crd, customerresourcedefinition

DaemonSet

ds, daemonset

Deployment

deployment

DeploymentConfig

dc, deploymentconfig

ImageStream

is, imagestream

ImageStreamTag

istag, imagestreamtag

Ingress

ingress

Job

job

LimitRange

lr, limitrange

Namespace

ns, namespace

NetworkPolicy

np, networkpolicy

OAuthClient

oauthclient

PolicyBinding

pb, policybinding

PersistentVolume

pv, persistentvolume

PersistentVolumeClaim

pvc, persistentvolumeclaim

Project

project

ProjectRequest

pr, projectrequest

ReplicaSet

rs, replicaset

ReplicationController

rc, replicationcontroller

ResourceQuota

rq, resourcequota

Role

role

RoleBinding

rb, rolebinding

RoleBindingRestriction

rbr, rolebindingrestriction

Route

route

Secret

secret

Service

svc, service

ServiceAccount

sa, serviceaccount

StatefulSet

statefulset

Template

template

Pod

pd, pod

17.2. Custom Kind/Filename Mapping

You can add your custom Kind/Filename mappings. To do it you have two approaches:

  • Setting an environment variable or system property called jkube.mapping pointing to a .properties file with pairs <kind>⇒<filename1>, <filename2> (see the sketch after the configuration example below). By default, if neither the environment variable nor the system property is set, JKube looks for a file located in the classpath at /META-INF/jkube.kind-filename-type-mapping-default.properties.

  • By defining the mapping in the plugin’s configuration:

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <configuration>
    <mappings>
      <mapping>
        <kind>Var</kind> (1)
        <filenameTypes>foo, bar</filenameTypes> (2)
        <apiVersion>api.example.com/v1</apiVersion> (3)
      </mapping>
    </mappings>
  </configuration>
</plugin>
1 The kind name (mandatory)
2 The filename types (mandatory), a comma-separated list of filenames to map to the specified kind
3 The apiVersion (optional)
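
For the properties-file approach from the first option, a minimal sketch (Var, foo and bar are hypothetical):

# maps the kind Var to the filename types foo and bar
Var = foo, bar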