Images for Kogito

Kogito

Kogito is the next-generation business automation platform focused on cloud-native development, deployment and execution.

Kogito Container Images

To execute Kogito services efficiently on the cloud, Container Images are needed so that the services can run smoothly on any Kubernetes cluster. There are a few sets of images, divided into three groups: the component images, the builder images and the runtime images.

Kogito Images Requirements

To interact with the Kogito images, you need to install a few dependencies so that the images can be built and tested.

  • Mandatory dependencies:

    • Moby Engine or Docker CE
      • Podman can be used to build the images, but at this moment CeKit does not support it, so images built with Podman cannot be tested with CeKit.
    • CeKit 4.8.0+:
      • CeKit also has its own dependencies:
        • Python packages: docker, docker-squash, odcs-client.
        • All of these can be installed with pip, including CeKit itself (see the example below).
        • If any dependency is missing, CeKit will tell you which one.
    • Bats
    • Java 17 or higher
    • Maven 3.9.3 or higher
  • Optional dependencies:
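
As a reference, here is a minimal sketch of installing CeKit and the mandatory Python packages listed above with pip (pinning versions and using a virtualenv are left to your preference):

# Install CeKit plus the Python packages it relies on (names from the list above)
pip install cekit docker docker-squash odcs-client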

Kogito Images JVM Memory Management

All Kogito Container Images contain a base module that calculates the JVM max (-Xmx) and min (-Xms) heap values based on the container memory limits. To tune this behaviour, you can use the following environment variables to tell the startup scripts which values to use:

  • JAVA_MAX_MEM_RATIO: Used when no -Xmx option is given in JAVA_OPTIONS. It calculates a default maximum heap size based on the container's memory restriction. If used in a container without any memory constraint, this option has no effect. If there is a memory constraint, -Xmx is set to the given ratio of the memory available to the container. The default is 50, which means 50% of the available memory is used as an upper boundary. You can skip this mechanism by setting the value to 0, in which case no -Xmx option is added.

  • JAVA_INITIAL_MEM_RATIO: Used when no -Xms option is given in JAVA_OPTIONS. It calculates a default initial heap size based on the maximum heap size. If used in a container without any memory constraint, this option has no effect. If there is a memory constraint, -Xms is set to the given ratio of the -Xmx value. The default is 25, which means 25% of -Xmx is used as the initial heap size. You can skip this mechanism by setting the value to 0, in which case no -Xms option is added.
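
For example, here is a hypothetical run that limits the container to 1 GiB and tunes both ratios (the image name and values below are purely illustrative):

# With a 1 GiB limit, JAVA_MAX_MEM_RATIO=60 caps the heap at ~60% of 1 GiB,
# and JAVA_INITIAL_MEM_RATIO=25 sets -Xms to ~25% of that -Xmx value.
docker run -it --rm -m 1g \
  -e JAVA_MAX_MEM_RATIO=60 \
  -e JAVA_INITIAL_MEM_RATIO=25 \
  quay.io/kiegroup/kogito-jit-runner:latest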

For a complete list of environment variables that can be used to configure the JVM, please check the dynamic resources module.

When performing Quarkus native builds, the builder container relies by default on the cgroups memory report to determine how much memory it can use. On OpenShift or Kubernetes, this can be controlled by setting a memory limit; the build process will use 80% of the total memory reported by cgroups. For backwards compatibility, the LIMIT_MEMORY environment variable is still respected, but it is recommended to leave it unset and let the memory be calculated automatically from the available memory. It remains useful in specific scenarios, such as CI tests that do not run on an OpenShift cluster.

SonataFlow Builder Image usage

Using as a builder

The main purpose of this image is to be used by the Kogito Serverless Operator as a builder image. Below you can find an example of how to use it:

FROM quay.io/kiegroup/kogito-swf-builder:latest AS builder

# Copy all files from the build context into the application resources directory
COPY * ./resources/

# Build app with given resources
RUN "${KOGITO_HOME}"/launch/build-app.sh './resources'
#=============================
# Runtime Run
CMD /usr/bin/java -jar target/quarkus-app/quarkus-run.jar
#=============================
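
A hypothetical build and run of the Dockerfile above could look like this (the tag name is illustrative):

# Build the application image from the directory containing the Dockerfile and your workflow files
docker build -t my-swf-app .
# Run it and expose the application port
docker run -it --rm -p 8080:8080 my-swf-app
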
Using for application development

If you run the image, it starts an empty Kogito Serverless Workflow application in Quarkus dev mode. This allows you to develop and run quick tests locally without having to set up Maven or Java on your machine. You can mount your local workflow files into the image so that you can test the application live.

To run the image for testing your local workflow files, run:

docker run -it --rm -p 8080:8080 -v <local_workflow_path>:/home/kogito/serverless-workflow-project/src/main/resources/workflows quay.io/kiegroup/kogito-swf-builder:latest

Replace <local_workflow_path> with the path on your local filesystem containing your workflow files. You can test with the example application.

After the image bootstrap, you can access http://localhost:8080/q/swagger-ui and test the workflow application right away!
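
As a sketch, once the application is up you can also start a workflow instance directly over HTTP. The example below assumes a workflow whose id is my_workflow (replace it with the id declared in your workflow file); the JSON body is the workflow input data:

# Hypothetical call: POST to the workflow id with the input data as the JSON body
curl -X POST http://localhost:8080/my_workflow \
  -H 'Content-Type: application/json' \
  -d '{"name": "John"}'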

Using the SonataFlow Builder Image nightly image

The nightly builder image is built and optimized with an internal nightly build of the Quarkus Platform. There are 2 environment variables that should not be changed when using it; leaving them untouched ensures that no new artifacts are downloaded and you can use the image directly.

Kogito Component Images

The Kogito Component Images are lightweight images that complement the Kogito core engine by providing extra capabilities, such as managing processes through a web UI or providing a persistence layer for Kogito applications. Today we have the following Kogito Component Images:

Kogito Data Index Component Images

The Data Index Service aims at capturing and indexing data produced by one or more Kogito runtime services. For more information please visit this link: https://docs.jboss.org/kogito/release/latest/html_single/#proc-kogito-travel-agency-enable-data-index_kogito-deploying-on-openshift. The Data Index Service depends on a PostgreSQL instance; the persistence setup can be switched by using the corresponding image:

  • Ephemeral PostgreSQL: quay.io/kiegroup/kogito-data-index-ephemeral image.yaml
  • PostgreSQL: quay.io/kiegroup/kogito-data-index-postgresql image.yaml

Basic usage with Ephemeral PostgreSQL:

$ docker run -it quay.io/kiegroup/kogito-data-index-ephemeral:latest

Basic usage with PostgreSQL:

$ docker run -it --env QUARKUS_DATASOURCE_JDBC_URL="jdbc:postgresql://localhost:5432/quarkus"  \
    --env QUARKUS_DATASOURCE_USERNAME="kogito" \
    --env QUARKUS_DATASOURCE_PASSWORD="secret" \
    quay.io/kiegroup/kogito-data-index-postgresql:latest
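
If you don't have a PostgreSQL instance at hand, a throwaway one can be started first. This is only a sketch using the official postgres image, with credentials matching the example above:

# Disposable PostgreSQL instance; user/password/database match the Data Index example
docker run -d --name kogito-postgres -p 5432:5432 \
  -e POSTGRES_USER=kogito \
  -e POSTGRES_PASSWORD=secret \
  -e POSTGRES_DB=quarkus \
  postgres:latest

Note that, with plain Docker, localhost inside the Data Index container does not point at the host, so either run both containers on the same user-defined network (adjusting the JDBC URL to the PostgreSQL container name) or use host networking.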

To enable debug output, set the SCRIPT_DEBUG environment variable while running the image:

$ docker run -it --env SCRIPT_DEBUG=true quay.io/kiegroup/kogito-data-index-postgresql:latest

You should notice a few debug messages present in the system output.

The Kogito Operator can be used to deploy the Kogito Data Index Service to your Kogito infrastructure on a Kubernetes cluster and provide its capabilities to your Kogito applications.

Kogito Jobs Service Component Images

The Kogito Jobs Service is a dedicated lightweight service responsible for scheduling jobs that aim at firing at a given time. It does not execute the job itself, but it triggers a callback that could be an HTTP request on a given endpoint specified on the job request, or any other callback that could be supported by the service. For more information please visit this link.

Today, the Jobs service contains four images:

Basic usage:

$ docker run -it quay.io/kiegroup/kogito-jobs-service-ephemeral:latest

To enable debug on the Jobs Service images, set SCRIPT_DEBUG to true, for example:

docker run -it --env SCRIPT_DEBUG=true quay.io/kiegroup/kogito-jobs-service-postgresql:latest

You should notice a few debug messages being printed in the system output.

The ephemeral image has no external dependencies such as a backend persistence provider; it uses in-memory persistence. The PostgreSQL variant of the Jobs Service, on the other hand, needs a PostgreSQL server already running.
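
Here is a minimal sketch for the PostgreSQL variant, assuming a reachable PostgreSQL server and the standard Quarkus datasource variables (the same ones used by the Data Index image above; the connection values are illustrative):

docker run -it --rm -p 8080:8080 \
  --env QUARKUS_DATASOURCE_JDBC_URL="jdbc:postgresql://localhost:5432/quarkus" \
  --env QUARKUS_DATASOURCE_USERNAME="kogito" \
  --env QUARKUS_DATASOURCE_PASSWORD="secret" \
  quay.io/kiegroup/kogito-jobs-service-postgresql:latest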

Jobs Services All-in-one

The Jobs Service All-in-one image provides the option to run any of the supported variants, which are:

  • PostgreSQL
  • Ephemeral (default if no variant is specified)

There are 3 exposed environment variables that can be used to configure its behaviour:

  • SCRIPT_DEBUG: enable debug level of the image and its operations
  • ENABLE_EVENTS: enable the events add-on
  • JOBS_SERVICE_PERSISTENCE: select which persistence variant to use
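
For example, here is a run that explicitly selects a persistence variant and enables script debugging; the variant value below assumes the names listed above, lowercased:

podman run -it --rm -p 8080:8080 \
  -e JOBS_SERVICE_PERSISTENCE=ephemeral \
  -e SCRIPT_DEBUG=true \
  quay.io/kiegroup/kogito-jobs-service-allinone:latest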

Note: As the Jobs Services are built on top of Quarkus, we can also set any configuration supported by Quarkus using either environment variables or system properties.

Using environment variables:

podman run -it -e VARIABLE_NAME=value quay.io/kiegroup/kogito-jobs-service-allinone:latest

Using system properties:

podman run -it -e JAVA_OPTIONS='-Dmy.sys.prop1=value1 -Dmy.sys.prop2=value2' \
  quay.io/kiegroup/kogito-jobs-service-allinone:latest

For convenience, there are container-compose files that can be used to start the Jobs Service with the desired persistence variant. To use them, execute the following command:

podman-compose -f contrib/jobs-service/container-compose-<variant>.yaml up

The above command will spin up the Jobs Service so you can connect your application to it.

The Kogito Operator can be used to deploy the Kogito Jobs Service to your Kogito infrastructure on a Kubernetes cluster and provide its capabilities to your Kogito applications.

Kogito JIT Runner Component Image

The Kogito JIT Runner provides a tool that allows you to submit a DMN model and evaluate it on the fly with a simple HTTP request. You can find more details on JIT here.

Basic usage:

$ docker run -it quay.io/kiegroup/kogito-jit-runner:latest

To enable debug output, set the SCRIPT_DEBUG environment variable while running the image:

docker run -it --env SCRIPT_DEBUG=true quay.io/kiegroup/kogito-jit-runner:latest

You should notice a few debug messages being printed in the system output. You can then visit localhost:8080/index.html to test the service.
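
As a sketch of such an HTTP request, the snippet below assumes a /jitdmn endpoint accepting a JSON payload with the DMN model XML and an evaluation context; the endpoint path and field names are assumptions, so check the linked JIT documentation for the authoritative contract:

# Hypothetical request: evaluate an inline DMN model with a given context
# (endpoint path and JSON field names are assumptions, not confirmed by this README)
curl -X POST http://localhost:8080/jitdmn \
  -H 'Content-Type: application/json' \
  -d '{"model": "<dmn:definitions ...> ... </dmn:definitions>", "context": {"input1": 42}}'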

To know which configurations this image accepts, please take a look at the envs section here.

Contributing to Kogito Images repository

Before proceeding please make sure you have checked the requirements.

Building Images

To build the images for local testing, there is a Makefile that does all the hard work for you. With it you can:

  • Build and test all images with only one command:

    $ make
    

    If there's no need to run the tests just set the ignore_test env to true, e.g.:

    $ make ignore_test=true
    
  • Test all images with only one command (no build triggered) by setting the ignore_build env to true, e.g.:

    $ make ignore_build=true
    
  • Build images individually; by default each image will be built and tested:

    $ make build-image image_name=kogito-data-index-ephemeral
    $ make build-image image_name=kogito-data-index-postgresql
    $ make build-image image_name=kogito-jobs-service-ephemeral
    $ make build-image image_name=kogito-jobs-service-postgresql
    $ make build-image image_name=kogito-jobs-service-allinone
    $ make build-image image_name=kogito-jit-runner
    

    We can also ignore the build or the tests while interacting with a specific image. To build only:

    $ make ignore_test=true image_name={image_name}
    
    

    Or to test only:

    $ make ignore_build=true image_name={image_name}
    
  • Build and push the images to quay.io (or a repository of your preference; for this you need to edit the Makefile accordingly):

    $ make push

    It will create 3 tags:
      - X.Y
      - X.Y.z
      - latest

    To push a single image:

    $ make push-image image_name={image_name}
  • Push staging images (release candidates, a.k.a. rcX tags). The following command will build and push RC images to quay.io:

    $ make push-staging

    To override an existing tag use:

    $ make push-staging override=-o

    It uses the push-staging.py script to handle the images.

  • Push images to a local registry for testing:

    $ make push-local-registry REGISTRY=docker-registry-default.apps.spolti.cloud NS=spolti-1

    It uses the push-local-registry.sh script to properly tag the images and push them to the desired registry.

  • You can also add cekit_option to the make command, which will be appended to the Cekit command. Default is cekit -v.

Image Modules

CeKit can use modules to better separate concerns and reuse these modules on different images. On the Kogito Images we have several CeKit modules that are used during builds. To better understand the CeKit Modules, please visit this link.

Below you can find all modules used to build the Kogito Images:

For each image, we use a specific *-image.yaml file. Please inspect the image files to learn which modules are being installed on each image:

Testing Images

There are two kinds of tests: Behave tests and Bats tests.

Behave tests

For more information about Behave tests, please refer to this link.

Running Behave tests

To run all behave tests:

make test

CeKit also allows you to run a specific test. See Writing Behave Tests.

Example:

make build-image image_name=kogito-swf-builder test_options=--wip

Or by name:

make build-image image_name=kogito-swf-builder test_options=--name <Test Scenario Name>

You can also add cekit_option to the make command, which will be appended to the Cekit command. Default is cekit -v.

Writing Behave tests

With the CeKit extension of Behave we can run practically any kind of test on the containers, even source-to-image tests. There are a few options that you can use to define what action and what kind of validations/verifications your test must do. The Behave test structure looks like:

Feature: my cool feature
    Scenario: test my cool feature - it should print Hello and World on logs
      Given/When image is built/container is ready
      Then container log should contain Hello
      And container log should contain World

One Feature can have as many Scenarios as you want, but each Scenario can have only one action, defined by the keywords Given or When. The most common options are:

  • Given s2i build {app_git_repo}
  • When container is ready
  • When container is started with env
          | variable                 | value |
          | JBPM_LOOP_LEVEL_DISABLED | true  |
    In this test, we can specify any valid environment variable or a set of them.
  • When container is started with args: Most useful when you want to pass some docker argument, e.g. a memory limit for the container.

The Then clause is used to do your validations, test something, look for a keyword in the logs, etc. If you need to validate more than one thing you can add a new line with the And keyword, like this example:

Scenario: test my cool feature - it should print Hello and World on logs
      Given/When image is built/container is ready
      Then container log should contain Hello
      And container log should contain World
      And container log should not contain World!!
      And file /opt/eap/standalone/deployments/bar.jar should not exist

The most common sentences are:

  • Then/And file {file} should exist
  • Then/And file {file} should not exist
  • Then/And s2i build log should not contain {string}
  • Then/And run {bash command} in container and check its output for {command_output}
  • Then/And container log should contain {string}
  • Then/And container log should not contain {string}

CeKit allows us to use tags, which are very useful to segregate tests. If we want to run only the tests for a given image, we annotate the specific test, or the entire feature, with the image name. For common tests that need to run against almost all images, instead of adding the same tests to every image feature we create a common feature and annotate it with the images that the test or feature should run against; an example can be found in this common test. Likewise, suppose you are working on a new feature and add tests to cover your changes but don't want to run all existing tests: this can easily be done by adding the @wip tag to the Behave test that you are creating.

All images already have test feature files. If a new image is being created, a new feature file needs to be created, and the very first line of this file must contain a tag with the image name.

For example, if we are creating a new image called quay.io/kiegroup/kogito-moon-service, we would have a feature file called kogito-moon-service.feature under the tests/features directory, and it would look like the following example:

@quay.io/kiegroup/kogito-moon-service
Feature: Kogito-moon-service feature.
    ...
    Scenarios......

For a complete list of all available sentences, please refer to the CeKit source code: https://github.com/cekit/behave-test-steps/tree/v1/steps

Bats tests

What are Bats tests? From the Bats documentation: Bats is a TAP-compliant testing framework for Bash. It provides a simple way to verify that the UNIX programs you write behave as expected.
A Bats test file is a Bash script with special syntax for defining test cases.
Under the hood, each test case is just a function with a description.

Running Bats tests

To run the bats tests, we need to specify which module and test we want to run.
As an example, let's execute the tests from the kogito-s2i-core module:

 $ bats modules/kogito-s2i-core/tests/bats/s2i-core.bats
 ✓ test manage_incremental_builds
 ✓ test assemble_runtime no binaries
 ✓ test runtime_assemble
 ✓ test runtime_assemble with binary builds
 ✓ test runtime_assemble with binary builds entire target!
 ✓ test copy_kogito_app default java build no jar file present
 ✓ test copy_kogito_app default java build jar file present
 ✓ test copy_kogito_app default quarkus java build no jar file present
 ✓ test copy_kogito_app default quarkus java build uberJar runner file present
 ✓ test copy_kogito_app default quarkus native builds file present
 ✓ build_kogito_app only checks if it will generate the project in case there's no pom.xml
 ✓ build_kogito_app only checks if it will a build will be triggered if a pom is found

16 tests, 0 failures

Writing Bats tests

The best way to start interacting with Bats tests is to take a look at its documentation and then use the existing tests as examples.

Here you can find a basic example of how our Bats tests are structured.
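
As an additional, self-contained sketch of the structure (the function under test is defined inline purely for illustration; real tests source the module scripts they verify):

#!/usr/bin/env bats

# Hypothetical function under test; real Bats files load it from a module script.
function say_hello() {
  echo "Hello"
}

@test "say_hello prints Hello" {
  run say_hello
  [ "$status" -eq 0 ]
  [ "$output" = "Hello" ]
}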

Reporting new issues

For the Kogito Images, we use the Jira issue tracker under the KOGITO project. To specify that an issue is specific to the Kogito images, there is a component called Image that should be added to any issue related to this repository.

When submitting the Pull Request with the fix for the reported issue, and for better readability, we use the following pattern:

  • Pull Requests targeting only the main branch:
    [KOGITO-XXXX] - Description of the Issue
  • If the Pull Request also needs to be part of a different branch/version and is cherry-picked from main:
    Main PR:
    [main][KOGITO-XXXX] - Description of the Issue

    0.9.x PR cherry-picked from main:
    [0.9.x][KOGITO-XXXX] - Description of the Issue