Kogito Operator

The Kogito Operator deploys Kogito Runtimes services from source and every piece of infrastructure that the services might need, such as SSO (Keycloak) and Persistence (Infinispan).

Kogito Operator requirements

  • Go v1.12 is installed.
  • The operator-sdk v0.11.0 is installed.
  • OpenShift 3.11 or 4.x is installed. (You can use CRC for local deployment.)

Kogito Operator installation

Deploying to OpenShift 4.x

The Kogito operator is a namespaced operator, so you must install it into the namespace where you want your Kogito application to run.

(Optional) You can import the Kogito image stream using the oc client manually with the following command:

$ oc apply -f https://raw.githubusercontent.com/kiegroup/kogito-cloud/master/s2i/kogito-imagestream.yaml -n openshift

This step is optional because the Kogito Operator creates the required imagestreams when it installs a new application.

Automatically in OperatorHub

The Kogito Operator is available in the OperatorHub as a community operator. To find the Operator, search by the Kogito name.

You can also verify the Operator availability in the catalog by running the following command:

$ oc describe operatorsource.operators.coreos.com/kogito-operator -n openshift-marketplace

Follow the OpenShift Web Console instructions in the Catalog -> OperatorHub section in the left menu to install it in any namespace in the cluster.

(Figure: Kogito Operator in the Catalog)

Manually in OperatorHub

If you cannot find the Kogito Operator in OperatorHub, you can install it manually by creating an entry in the OperatorHub Catalog:

$ oc create -f deploy/olm-catalog/kogito-operator/kogito-operator-operatorsource.yaml

After several minutes, the Operator appears under the Catalog -> OperatorHub section in the OpenShift Web Console. To find the Operator, search by the Kogito name. You can then install the Operator as described in the Automatically in OperatorHub section.

Locally on your system

You can also run the Kogito Operator locally if you have the requirements configured on your local system.

Deploying to OpenShift 3.11

The OperatorHub catalog is not available by default for OpenShift 3.11, so you must manually install the Kogito Operator on OpenShift 3.11.

## Kogito imagestreams should already be installed and available, for example:
$ oc apply -f https://raw.githubusercontent.com/kiegroup/kogito-cloud/master/s2i/kogito-imagestream.yaml -n openshift
$ oc new-project <project-name>
$ ./hack/3.11deploy.sh

Kogito Runtimes service deployment

Deploying a new service

Use the OLM console to subscribe to the kogito Operator Catalog Source within your namespace. After you subscribe, use the console to Create KogitoApp or create one manually as shown in the following example:

$ oc create -f deploy/crds/app.kiegroup.org_v1alpha1_kogitoapp_cr.yaml
kogitoapp.app.kiegroup.org/example-quarkus created
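
For reference, a minimal KogitoApp CR of the kind that file contains might look like the following sketch. The gitSource fields follow the KogitoApp example later in this document; contextDir is assumed to mirror the CLI's --context-dir flag, and the exact contents of the example file may differ:

apiVersion: app.kiegroup.org/v1alpha1
kind: KogitoApp
metadata:
  name: example-quarkus
spec:
  build:
    gitSource:
      uri: https://github.com/kiegroup/kogito-examples/
      contextDir: drools-quarkus-example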

Alternatively, you can use the Kogito CLI to deploy your services:

$ kogito deploy-service example-quarkus https://github.com/kiegroup/kogito-examples/ --context-dir=drools-quarkus-example

Cleaning up a Kogito service deployment

$ kogito delete-service example-quarkus

Native X JVM builds

By default, the Kogito services are built with traditional Java compilers to save time and resources. This means that the final generated artifact is a JAR file for the chosen runtime (defaults to Quarkus), with its dependencies in the image user's home directory: /home/kogito/bin/lib.

Kogito services implemented with Quarkus can be built to a native binary. This results in a very low runtime footprint (see performance examples) but requires a lot of resources during build time. For more information about AOT compilation, see GraalVM Native Image.

In Kogito Operator tests, native builds take approximately 10 minutes and the build pod can consume up to 10GB of RAM and 1.5 CPU cores. Ensure that you have these resources available when running native builds.

To deploy a service using native builds, run the deploy-service command with the --native flag:

$ kogito deploy-service example-quarkus https://github.com/kiegroup/kogito-examples/ --context-dir=drools-quarkus-example --native
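
If you create the KogitoApp custom resource manually instead, the equivalent of the --native flag is a boolean in the build section of the CR. The field name in this sketch is an assumption based on this Operator version's CRD:

spec:
  build:
    # assumed field; requests a GraalVM native build instead of a JVM build
    native: true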

Troubleshooting Kogito Runtimes service deployment

No builds are running

If you do not see any builds running or any resources created in the namespace, review the Kogito Operator log.

To view the Operator logs, first identify where the operator is deployed:

$ oc get pods

NAME                               READY   STATUS    RESTARTS   AGE
kogito-operator-6d7b6d4466-9ng8t   1/1     Running   0          26m

Use the pod name as the input in the following command:

$ oc logs -f kogito-operator-6d7b6d4466-9ng8t

Kogito Data Index Service deployment

The Kogito Operator can deploy the Data Index Service as a Custom Resource (KogitoDataIndex).

The Data Index Service depends on Kafka. Starting with version 0.6.0, the Kogito Operator deploys an Apache Kafka cluster (based on the Strimzi operator) in the same namespace.

The Data Index Service also depends on Infinispan, but starting with version 0.6.0 of the Kogito Operator, Infinispan Server is automatically deployed for you.

Deploying Infinispan

If you plan to use the Data Index Service to connect to an Infinispan Server instance deployed within the same namespace, the Kogito Operator can handle this deployment for you.

When you install the Kogito Operator from OperatorHub, the Infinispan Operator is installed in the same namespace. If you do not have access to OperatorHub or OLM in your cluster, you can manually deploy the Infinispan Operator.

After you deploy the Infinispan Operator, see Deploying Strimzi for next steps.

Deploying Strimzi

Kafka

The Strimzi Operator should be available in the OperatorHub. In the OpenShift Web Console, go to Catalog -> OperatorHub in the left menu, search for Strimzi, and install it.

You should see the Strimzi Operator in the Installed Operators tab:

(Figure: Strimzi Operator listed in the Installed Operators tab)

Next, create a Kafka cluster for the Data Index Service to connect to. Click the name of the Strimzi Operator, then click the Kafka tab and click Create Kafka. Accept the default options to create a three-node Kafka cluster. If this is a development environment, consider setting the Zookeeper and Kafka replicas to 1 to conserve resources.

After a few minutes, you should see the pods running and the services available:

$ oc get svc -l strimzi.io/cluster=my-cluster

NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
my-cluster-kafka-bootstrap    ClusterIP   172.30.228.90    <none>        9091/TCP,9092/TCP,9093/TCP   9d
my-cluster-kafka-brokers      ClusterIP   None             <none>        9091/TCP,9092/TCP,9093/TCP   9d
my-cluster-zookeeper-client   ClusterIP   172.30.241.146   <none>        2181/TCP                     9d
my-cluster-zookeeper-nodes    ClusterIP   None             <none>        2181/TCP,2888/TCP,3888/TCP   9d

The service that you will use to deploy the Data Index Service is my-cluster-kafka-bootstrap:9092.

Kafka Topics

If the Strimzi Operator is installed in the namespace, the Kogito Operator creates the following Kafka topics, which are required by the Data Index Service:

  • kogito-processinstances-events
  • kogito-usertaskinstances-events
  • kogito-processdomain-events
  • kogito-usertaskdomain-events
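
If the Strimzi CRDs are available in the cluster, you can verify that these topics were created by listing the Strimzi KafkaTopic resources:

$ oc get kafkatopics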

Now that you have the required infrastructure, you can deploy the Kogito Data Index Service.

Kogito Data Index Service installation

Retrieving Kafka internal URLs

After you configure the Kafka Operator, you need the Kafka service URI or Kafka instance name to install the Data Index Service.

Run the following command to retrieve the Kafka internal URI:

$ oc get svc -l strimzi.io/cluster=my-cluster | grep bootstrap

my-cluster-kafka-bootstrap    ClusterIP   172.30.228.90    <none>        9091/TCP,9092/TCP,9093/TCP   9d

In this case, the Kafka Cluster service is my-cluster-kafka-bootstrap:9092.

Or run the following command to retrieve the Kafka instance name:

$ oc get kafka

NAME         AGE
my-cluster   17s

In this case, the Kafka instance name is my-cluster.

Use this information to create the Kogito Data Index resource.

Installing the Kogito Data Index Service with the Kogito CLI

If you have installed the Kogito CLI, run the following command to create the Kogito Data Index resource. Replace the URLs with the URLs you retrieved for your environment:

$ kogito install data-index -p my-project --kafka-url my-cluster-kafka-bootstrap:9092

Or run the following command to create the Kogito Data Index resource with the Kafka instance name:

$ kogito install data-index -p my-project --kafka-instance my-cluster

Infinispan is deployed for you using the Infinispan Operator. Ensure that the Infinispan deployment is running in your project. If the deployment fails, the following error message appears:

Infinispan Operator is not available in the Project: my-project. Please make sure to install it before deploying Data Index without infinispan-url provided

To resolve the error, review the deployment procedure to this point to ensure that all steps have been successful.

Installing the Kogito Data Index Service with the Operator Catalog (OLM)

If you are running on OpenShift 4.x, you can use the OperatorHub user interface to create the Kogito Data Index resource. In the OpenShift Web Console, go to Installed Operators -> Kogito Operator -> Kogito Data Index. Click Create Kogito Data Index and create a new resource that uses the Infinispan and Kafka services, as shown in the following example:

apiVersion: app.kiegroup.org/v1alpha1
kind: KogitoDataIndex
metadata:
  name: kogito-data-index
spec:
  # Number of pods to be deployed
  replicas: 1
  # Image to use for this deployment
  image: "quay.io/kiegroup/kogito-data-index:latest"
  kafka:
    # Service name and port for the Kafka cluster, for example, my-kafka-cluster:9092
    externalURI: my-cluster-kafka-bootstrap:9092

Installing the Kogito Data Index Service with the oc client

To create the Kogito Data Index resource using the oc client, you can use the CR file from the previous example as a reference and create the custom resource from the command line as shown in the following example:

# Clone this repository
$ git clone https://github.com/kiegroup/kogito-cloud-operator.git
$ cd kogito-cloud-operator
# Make your changes
$ vi deploy/crds/app.kiegroup.org_v1alpha1_kogitodataindex_cr.yaml
# Deploy to the cluster
$ oc create -f deploy/crds/app.kiegroup.org_v1alpha1_kogitodataindex_cr.yaml -n my-project

You can access the GraphQL interface through the route that was created for you:

$ oc get routes -l app=kogito-data-index

NAME                HOST/PORT                                                  PATH   SERVICES            PORT   TERMINATION   WILDCARD
kogito-data-index   kogito-data-index-kogito.apps.mycluster.example.com               kogito-data-index   8080   None
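
As a quick sanity check, you can send a GraphQL query to that route. The example below assumes the standard Data Index GraphQL schema, which exposes a ProcessInstances type, and uses the host name from the previous output:

$ curl -s -X POST -H "Content-Type: application/json" \
    -d '{"query": "{ ProcessInstances { id processId state } }"}' \
    http://kogito-data-index-kogito.apps.mycluster.example.com/graphql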

Kogito CLI

The Kogito CLI tool enables you to deploy new Kogito services from source instead of relying on CRs and YAML files.

Kogito CLI requirements

  • The oc client is installed.
  • You are an authenticated OpenShift user with permissions to create resources in a given namespace.

Kogito CLI installation

For Linux

  1. Download the correct Kogito distribution for your machine.

  2. Unpack the binary: tar -xvf release.tar.gz

    You should see an executable named kogito.

  3. Move the binary to a pre-existing directory in your PATH, for example, # cp /path/to/kogito /usr/local/bin.
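
Putting the steps together, the installation looks roughly like this (replace <release-url> with the download link for your platform):

$ curl -L -o release.tar.gz <release-url>
$ tar -xvf release.tar.gz
$ sudo cp ./kogito /usr/local/bin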

For Windows

  1. Download the latest 64-bit Windows release of the Kogito distribution.

  2. Extract the zip file through a file browser.

  3. Add the extracted directory to your PATH. You can now use kogito from the command line.

Building the Kogito CLI from source

:warning: To build the Kogito CLI from source, ensure that Go is installed and available in your PATH.

Run the following command to build and install the Kogito CLI:

$ git clone https://github.com/kiegroup/kogito-cloud-operator
$ cd kogito-cloud-operator
$ make install-cli

The kogito CLI appears in your path:

$ kogito
Kogito CLI deploys your Kogito services into an OpenShift cluster

Usage:
  kogito [command]

Available Commands:
  delete-project Deletes a Kogito Project - i.e., the Kubernetes/OpenShift namespace
  delete-service Deletes a Kogito Runtime Service deployed in the namespace/project
  deploy-service Deploys a new Kogito Runtime Service into the given Project
  help           Help about any command
  install        Install all sort of infrastructure components to your Kogito project
  new-project    Creates a new Kogito Project for your Kogito services
  use-project    Sets the Kogito Project where your Kogito service will be deployed

Flags:
      --config string   config file (default is $HOME/.kogito.json)
  -h, --help            help for kogito
  -v, --verbose         verbose output
      --version         display version

Use "kogito [command] --help" for more information about a command.

Deploying a Kogito service from source with the Kogito CLI

After you complete the Kogito Operator installation, you can deploy a new Kogito service by using the Kogito CLI:

# creates a new namespace in your cluster
$ kogito new-project kogito-cli

# deploys a new Kogito Runtime Service from source
$ kogito deploy-service example-drools https://github.com/kiegroup/kogito-examples --context-dir drools-quarkus-example

If you are using OpenShift 3.11 as described in Deploying to OpenShift 3.11, use the existing namespace that you created during the manual deployment, as shown in the following example:

# Use the provisioned namespace in your OpenShift 3.11 cluster
$ kogito use-project <project-name>

# Deploys new Kogito service from source
$ kogito deploy-service example-drools https://github.com/kiegroup/kogito-examples --context-dir drools-quarkus-example

You can shorten the previous command as shown in the following example:

$ kogito deploy-service example-drools https://github.com/kiegroup/kogito-examples --context-dir drools-quarkus-example --project <project-name>

Prometheus integration with the Kogito Operator

Prometheus annotations

By default, if your Kogito Runtimes service contains the monitoring-prometheus-addon dependency, metrics for the Kogito service are enabled. For more information about Prometheus metrics in Kogito services, see Enabling metrics.

The Kogito Operator adds Prometheus annotations to the pod and service of the deployed application, as shown in the following example:

apiVersion: v1
kind: Service
metadata:
  annotations:
    org.kie.kogito/managed-by: Kogito Operator
    org.kie.kogito/operator-crd: KogitoApp
    prometheus.io/path: /metrics
    prometheus.io/port: "8080"
    prometheus.io/scheme: http
    prometheus.io/scrape: "true"
  labels:
    app: onboarding-service
    onboarding: process
  name: onboarding-service
  namespace: kogito
  ownerReferences:
  - apiVersion: app.kiegroup.org/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: KogitoApp
    name: onboarding-service
spec:
  clusterIP: 172.30.173.165
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: onboarding-service
    onboarding: process
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Prometheus Operator

The Prometheus Operator does not directly support the Prometheus annotations that the Kogito Operator adds to your Kogito services. If you are deploying the Kogito Operator on OpenShift 4.x, then you are likely using the Prometheus Operator.

Therefore, in a scenario where Prometheus is deployed and managed by the Prometheus Operator, and if metrics for the Kogito service are enabled, a new ServiceMonitor resource is deployed by the Kogito Operator to expose the metrics for Prometheus to scrape:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app: onboarding-service
  name: onboarding-service
  namespace: kogito
spec:
  endpoints:
  - path: /metrics
    targetPort: 8080
    scheme: http
  namespaceSelector:
    matchNames:
    - kogito
  selector:
    matchLabels:
      app: onboarding-service

You must manually configure your Prometheus resource that is managed by the Prometheus Operator to select the ServiceMonitor resource:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      app: onboarding-service

After you configure your Prometheus resource with the ServiceMonitor resource, you can see the endpoint being scraped by Prometheus in the Targets page of the Prometheus web console.

The metrics exposed by the Kogito service appear in the Graph view.

For more information about the Prometheus Operator, see the Prometheus Operator documentation.

Infinispan integration

To help you start and run an Infinispan Server instance in your project, the Kogito Operator has a resource called KogitoInfra to handle Infinispan deployment for you.

The KogitoInfra resource uses the Infinispan Operator to deploy new Infinispan server instances if needed.

You can freely edit and manage the Infinispan instance; the Kogito Operator does not manage or handle the Infinispan instances. For example, if you plan to scale the Infinispan cluster, you can edit the replicas field in the Infinispan CR to meet your requirements.

By default, the KogitoInfra resource creates a secret that holds the user name and password for Infinispan authentication. To view the credentials, run the following command:

$ oc get secret/kogito-infinispan-credential -o yaml

apiVersion: v1
data:
  password: VzNCcW9DeXdpMVdXdlZJZQ==
  username: ZGV2ZWxvcGVy
kind: Secret
(...)

The key values are encoded in Base64. To view the decoded password from the previous example output in your terminal, run the following command:

$ echo VzNCcW9DeXdpMVdXdlZJZQ== | base64 -d

W3BqoCywi1WWvVIe

For more information about the Infinispan Operator, see the official documentation.

Note: Sometimes OperatorHub installs the DataGrid operator instead of Infinispan when installing the Kogito Operator. If this happens, uninstall DataGrid and install Infinispan manually, because the two operators are not compatible.

Infinispan for Kogito Services

If your Kogito service depends on the persistence add-on, the Kogito Operator installs Infinispan and injects the connection properties as environment variables into the service. These variables differ depending on the runtime, as shown in the following table:

| Quarkus Runtime                          | Spring Boot Runtime              | Description                                       | Example                 |
| ---------------------------------------- | -------------------------------- | ------------------------------------------------- | ----------------------- |
| QUARKUS_INFINISPAN_CLIENT_SERVER_LIST    | INFINISPAN_REMOTE_SERVER_LIST    | Service URI from deployed Infinispan              | kogito-infinispan:11222 |
| QUARKUS_INFINISPAN_CLIENT_AUTH_USERNAME  | INFINISPAN_REMOTE_AUTH_USER_NAME | Default username generated by Infinispan Operator | developer               |
| QUARKUS_INFINISPAN_CLIENT_AUTH_PASSWORD  | INFINISPAN_REMOTE_AUTH_PASSWORD  | Random password generated by Infinispan Operator  | Z1Nz34JpuVdzMQKi        |
| QUARKUS_INFINISPAN_CLIENT_SASL_MECHANISM | INFINISPAN_REMOTE_SASL_MECHANISM | Defaults to PLAIN                                 | PLAIN                   |

Make sure that your Kogito service can read these properties at runtime. The variable names are the same as the ones used by the Infinispan clients for Quarkus and Spring Boot.

On Quarkus versions below 1.1.0 (Kogito 0.6.0), make sure that your application.properties file lists the properties as shown in the following example:

quarkus.infinispan-client.server-list=
quarkus.infinispan-client.auth-username=
quarkus.infinispan-client.auth-password=
quarkus.infinispan-client.sasl-mechanism=

These properties are replaced by the environment variables at runtime.
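
For Spring Boot services, Spring's relaxed binding maps the environment variables from the table above to the following property names, should you need to reference them explicitly in application.properties. These names are derived mechanically from the variable names and are shown here only for reference:

infinispan.remote.server-list=
infinispan.remote.auth-user-name=
infinispan.remote.auth-password=
infinispan.remote.sasl-mechanism=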

You can control the installation method for Infinispan by using the infinispan-install flag in the Kogito CLI or by editing spec.infra.installInfinispan in the KogitoApp custom resource (see the example after this list):

  • Auto - The operator tries to discover if the service needs persistence by scanning the runtime image for the org.kie/persistence/required label attribute
  • Always - Infinispan is installed in the namespace without checking if the service needs persistence or not
  • Never - Infinispan is not installed, even if the service requires persistence. Use this option only if you intend to deploy your own persistence mechanism and you know how to configure your service to access it
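
For example, to force the installation through the custom resource (a minimal sketch using the field named above):

apiVersion: app.kiegroup.org/v1alpha1
kind: KogitoApp
metadata:
  name: example-quarkus
spec:
  infra:
    installInfinispan: Always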

Infinispan for Data Index Service

For the Data Index Service, if you do not provide a service URL to connect to Infinispan, a new server is deployed via KogitoInfra.

A random password for the developer user is created and injected into the Data Index automatically. You do not need to do anything for both services to work together.

Kafka integration

As with Infinispan, the Kogito Operator can deploy a Kafka cluster for your Kogito services via the KogitoInfra custom resource.

To deploy a Kafka cluster with Zookeeper to support sending and receiving messages within a process, Kogito Operator relies on the Strimzi Operator.

You can freely edit the Kafka instance deployed by the operator to fulfill any requirement that you have. The Kafka instance is not managed by Kogito; instead, it is managed by Strimzi. That is why the Kogito Operator depends on the Strimzi Operator, which is installed when you install the Kogito Operator using OLM.

Note: Sometimes OperatorHub installs the AMQ Streams operator instead of Strimzi when installing the Kogito Operator. If this happens, uninstall AMQ Streams and install Strimzi manually, because the two operators are not compatible.

Kafka for Kogito Services

To enable Kafka installation during deployment of your service, use the following Kogito CLI command:

$ kogito deploy kogito-kafka-quickstart-quarkus https://github.com/mswiderski/kogito-quickstarts --install-kafka Always \
--build-env MAVEN_ARGS_APPEND="-pl kogito-kafka-quickstart-quarkus -am" ARTIFACT_DIR="kogito-kafka-quickstart-quarkus/target"  

Or using the custom resource (CR) yaml file:

apiVersion: app.kiegroup.org/v1alpha1
kind: KogitoApp
metadata:
  name: kogito-kafka-quickstart-quarkus
spec:
  infra:
    installKafka: Always
  build:
    env:
    - name: MAVEN_ARGS_APPEND
      value: -pl kogito-kafka-quickstart-quarkus -am
    - name: ARTIFACT_DIR
      value: kogito-kafka-quickstart-quarkus/target
    gitSource:
      uri: https://github.com/mswiderski/kogito-quickstarts

The --install-kafka Always flag in the CLI and the installKafka: Always attribute in the CR tell the operator to deploy a Kafka cluster in the namespace if no Kafka cluster owned by the Kogito Operator is found.

A variable named KAFKA_BOOTSTRAP_SERVERS is injected into the service container. For Quarkus runtimes, this works out of the box when using Kafka Client version 1.x or greater. For Spring Boot, you might need to rely on property substitution in application.properties, as shown in the following example:

spring.kafka.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS}

Also, if the container has any environment variable with the suffix _BOOTSTRAP_SERVERS, it is set to the value of the KAFKA_BOOTSTRAP_SERVERS variable as well. For example, by running:

$ kogito deploy kogito-kafka-quickstart-quarkus https://github.com/mswiderski/kogito-quickstarts --install-kafka Always \
--build-env MAVEN_ARGS_APPEND="-pl kogito-kafka-quickstart-quarkus -am" ARTIFACT_DIR="kogito-kafka-quickstart-quarkus/target" \
-e MP_MESSAGING_INCOMING_TRAVELLERS_BOOTSTRAP_SERVERS -e MP_MESSAGING_OUTGOING_PROCESSEDTRAVELLERS_BOOTSTRAP_SERVERS

The variables MP_MESSAGING_INCOMING_TRAVELLERS_BOOTSTRAP_SERVERS and MP_MESSAGING_OUTGOING_PROCESSEDTRAVELLERS_BOOTSTRAP_SERVERS will have the deployed Kafka service URL injected into them.

Note that for services with Quarkus versions below 1.1.0 (Kogito Runtimes 0.6.0), you must add these Kafka properties to the application.properties file. Otherwise, they are not replaced at runtime by the environment variables injected by the operator.

Kafka for Data Index

The Data Index Service allows you to specify the name of a Kafka instance deployed in the same namespace. You can also let KogitoInfra deploy Kafka for you via the Strimzi Operator.

Just deploy the KogitoInfra CR with the installKafka attribute set to true:

apiVersion: app.kiegroup.org/v1alpha1
kind: KogitoInfra
metadata:
  name: kogito-infra
spec:
  # let's install both since Data Index needs persistence and messaging
  installInfinispan: true
  installKafka: true

Then deploy the Data Index Service CR, as shown in the following example:

apiVersion: app.kiegroup.org/v1alpha1
kind: KogitoDataIndex
metadata:
  name: kogito-data-index
spec:
  replicas: 1
  image: "quay.io/kiegroup/kogito-data-index:0.6.0"
  kafka:
    # default kafka instance deployed by KogitoInfra
    instance: kogito-kafka
  infinispan:
    # reuse the Infinispan deployed by KogitoInfra
    useKogitoInfra: true

Or use Kogito CLI:

kogito install data-index --kafka-instance kogito-kafka

That's it! The information required to connect to Kafka will be automatically set for you by the operator to the Data Index service.

Kogito Operator development

Before you begin fixing issues or adding new features to the Kogito Operator, see Contributing to the Kogito Operator and Kogito Operator architecture.

Building the Kogito Operator

To build the Kogito Operator, use the following command:

$ make

The output of this command is a ready-to-use Kogito Operator image that you can deploy in any namespace.

Deploying to OpenShift 4.x for development purposes

To install the Kogito Operator on OpenShift 4.x for end-to-end (E2E) testing, ensure that you have access to a quay.io account to create an application repository. Follow the Operator Courier authentication instructions to obtain an account token. This token is in the format basic XXXXXXXXX and both words are required for the command.

Push the Operator bundle to your quay application repository as shown in the following example:

$ operator-courier push deploy/olm-catalog/kogito-operator/ namespace kogito-operator 0.6.0 "basic XXXXXXXXX"

If you push to another quay repository, replace namespace with your user name or the other namespace. The push command does not overwrite an existing repository, so you must delete the bundle before you can build and upload a new version. After you upload the bundle, create an Operator Source to load your operator bundle in OpenShift.

The OpenShift cluster needs access to the created application. Ensure that the application is public or that you have configured the private repository credentials in the cluster. To make the application public, go to your quay.io account, and in the Applications tab look for the kogito-operator application. Under the settings section, click make public.

## Kogito imagestreams should already be installed and available, for example:
$ oc apply -f https://raw.githubusercontent.com/kiegroup/kogito-cloud/master/s2i/kogito-imagestream.yaml -n openshift
$ oc create -f deploy/olm-catalog/kogito-operator/kogito-operator-operatorsource.yaml

Replace registryNamespace in the kogito-operator-operatorsource.yaml file with your quay namespace. The name, display name, and publisher of the Operator are the only other attributes that you can modify.

After several minutes, the Operator appears under Catalog -> OperatorHub in the OpenShift Web Console. To find the Operator, filter the provider type by Custom.

To verify the operator status, run the following command:

$ oc describe operatorsource.operators.coreos.com/kogito-operator -n openshift-marketplace

Running End-to-End (E2E) tests

With the Kogito Operator SDK

If you have an OpenShift cluster and admin privileges, you can run E2E tests with the following command:

$ make run-e2e namespace=<namespace> tag=<tag> maven_mirror=<maven_mirror_url> image=<image_tag> tests=<full|jvm|native>

where:

  • namespace (required) is a given temporary namespace where the test will run. You do not need to create the namespace because it will be created and deleted after the test runs.
  • tag (optional, default is current release) is the image tag for the Kogito image builds, for example, 0.6.0-rc1. This is helpful in situations where Kogito S2I images have not been released yet and are under a temporary tag.
  • maven_mirror (optional, default is empty) is the Maven mirror URL. This is helpful when you need to speed up the build time by referring to a closer Maven repository.
  • image (optional, default is empty) indicates whether the E2E test should be executed against a specified Kogito Operator image. If the value is empty, then the local Operator source code is used for the test execution.
  • tests (optional, default is full) indicates what types of tests should be executed. Possible values are full, jvm, and native. If you specify full or specify no parameter, then both JVM and native tests are executed.

If any errors are detected during this test, a detailed log appears in your command terminal.

To save the test output in a local file for future reference, run the following command:

make run-e2e namespace=kogito-e2e  2>&1 | tee log.out

With the Kogito CLI

You can run a smoke test using the Kogito CLI during development to make sure that at least the basic use case is covered.

On OpenShift 4.x, before you run this test, install the Kogito Operator in the namespace where the test will run. On OpenShift 3.11, the Kogito CLI installs the Kogito Operator for you.

To run an E2E test using the Kogito CLI, run the following command:

$ make run-e2e-cli namespace=<namespace> tag=<tag> native=<true|false> maven_mirror=<maven_mirror_url> skip_build=<true|false>

where:

  • namespace (required) is a given temporary namespace where the test will run.
  • tag (optional, default is current release) is the image tag for the Kogito image builds, for example, 0.6.0-rc1. This is helpful in situations where Kogito S2I images have not been released yet and are under a temporary tag.
  • native (optional, default is false) indicates whether the E2E test should use native or jvm builds. For more information, see Native X JVM builds.
  • maven_mirror (optional, default is empty) is the Maven mirror URL. This is helpful when you need to speed up the build time by referring to a closer Maven repository.
  • skip_build (optional, default is true) is set to true to skip building the CLI before running the test.
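
For example, a typical JVM run against a throwaway namespace could look like this:

$ make run-e2e-cli namespace=kogito-e2e native=false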

Running the Kogito Operator locally

To run the Kogito Operator locally, change the log level at runtime with the DEBUG environment variable, as shown in the following example:

$ make mod
$ make clean
$ DEBUG=true operator-sdk up local --namespace=<namespace>

Before submitting a pull request to the Kogito Operator repository, review the instructions for Contributing to the Kogito Operator.

You can use the following command to vet, format, lint, and test your code:

$ make test

Contributing to the Kogito Operator

For information about submitting bug fixes or proposed new features for the Kogito Operator, see Contributing to the Kogito Operator.