Kogito Operator

The Kogito Operator deploys Kogito Runtimes services from source and all infrastructure requirements for the services, such as persistence with Infinispan and messaging with Apache Kafka. Kogito provides a command-line interface (CLI) that enables you to interact with the Kogito Operator for deployment tasks.

For information about the Kogito Operator architecture and instructions for using the operator and CLI to deploy Kogito services and infrastructures, see the official Kogito Documentation page.

Contributing to the Kogito Operator

Thank you for your interest in contributing to this project!

Any kind of contribution is welcome: code, design ideas, bug reporting, or documentation (including this page).

Prerequisites

For code contributions, review the following prerequisites:

Kogito Operator environment

The Operator SDK is updated regularly and the Kogito Operator code typically adopts the most recent SDK updates as soon as possible. The Operator SDK has not reached a major version yet, so incompatibilities might occur.
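
Because of that, it is worth confirming which Operator SDK release you have installed and which one the project pins in go.mod. A minimal check, assuming the operator-sdk binary is on your PATH:

$ operator-sdk version
$ grep operator-sdk go.mod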

If you do not have a preferred IDE, use Visual Studio Code with the vscode-go plugin for Go language tools support.

To use Go modules with VS Code, see Go modules support in VS Code.

To debug Go in VS Code, see Debugging Go code using VS Code.

We check our code with golangci-lint, so it is recommended to add it to your IDE. To add golangci-lint to GoLand, see Go Linter.

To add golangci-lint to VS Code, install the Go plugin and enable the linter in the plugin settings.
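
You can also run the same linter from the command line; a minimal sketch, assuming golangci-lint is installed and on your PATH:

# run all configured linters against every package in the repository
$ golangci-lint run ./...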

Kogito Operator unit tests

For information about Operator SDK testing, see Unit testing with the Operator SDK.

In general, the unit tests that are provided with the Kogito Operator are based on that Operator SDK testing resource. You might encounter minor issues when working with OpenShift-specific APIs, such as BuildConfig and DeploymentConfig, that are not covered there. For an example test case with sample API calls, see the kogitoapp_controller_test.go test file.

Kogito Operator collaboration and pull requests

Before you start to work on a new proposed feature or on a fix for a bug, open an issue to discuss your idea or bug report with the maintainers. You can also work on a JIRA issue that has been reported. A developer might already be assigned to address the issue, but you can leave a comment in the JIRA asking if they need some help.

After you update the source with your new proposed feature or bug fix, open a pull request (PR) that meets the following requirements:

  • You have a JIRA associated with the PR.
  • Your PR has the name of the JIRA in the title, for example, [KOGITO-XXX] - Awesome feature that solves it all.
  • The PR solves only the problem described in the JIRA.
  • You have written unit tests for the particular fix or feature.
  • You ran make test before submitting the PR and everything is working as expected.
  • You tested the feature on an actual OpenShift cluster.

After you send your PR, a maintainer will review your code and might ask you to make changes and to squash your commits before we can merge.

If you have any questions, contact a Kogito Operator maintainer in the issues page.

Kogito Operator development

Before you begin fixing issues or adding new features to the Kogito Operator, review the previous instructions for contributing to the Kogito Operator repository.

Requirements

Building the Kogito Operator

To build the Kogito Operator, use the following command:

$ make

The output of this command is a ready-to-use Kogito Operator image that you can deploy in any namespace.
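
As a quick sanity check, you can list the freshly built image before deploying it. This sketch assumes docker is your container engine and that the image keeps the default name used elsewhere in this document:

$ docker images | grep kogito-cloud-operator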

Deploying to OpenShift 4.x for development purposes

To install the Kogito Operator on OpenShift 4.x for end-to-end (E2E) testing, ensure that you have access to a quay.io account to create an application repository.

Follow the steps below:

  1. Run make prepare-olm version=0.13.1. Bear in mind that if there are multiple versions listed in the deploy/olm-catalog/kogito-operator/kogito-operator.package.yaml file, every CSV must be included in the output folder. At this time, the script does not copy previous CSV versions to the output folder, so they must be copied manually (see the sketch below).
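
A minimal sketch of that manual copy, assuming a hypothetical previous version 0.12.0 and the default output folder; adjust the paths to whatever your kogito-operator.package.yaml references and make prepare-olm produced:

# copy the CSV folder of a previous release next to the freshly generated one
$ cp -r deploy/olm-catalog/kogito-operator/0.12.0 build/_output/operatorhub/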

  2. Grab Quay credentials with:

$ export QUAY_USERNAME=youruser
$ export QUAY_PASSWORD=yourpass

$ AUTH_TOKEN=$(curl -sH "Content-Type: application/json" -XPOST https://quay.io/cnr/api/v1/users/login -d '
{
    "user": {
        "username": "'"${QUAY_USERNAME}"'",
        "password": "'"${QUAY_PASSWORD}"'"
    }
}' | jq -r '.token')
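
Before continuing, verify that the login actually returned a token; if the credentials were rejected, jq prints null or an empty value:

$ echo "$AUTH_TOKEN"   # should print a long token, not "null" or an empty string
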
  3. Set the courier variables:
$ export OPERATOR_DIR=build/_output/operatorhub/
$ export QUAY_NAMESPACE=kiegroup # should be different in your environment
$ export PACKAGE_NAME=kogito-operator
$ export PACKAGE_VERSION=0.13.1
$ export TOKEN=$AUTH_TOKEN

If you push to another Quay repository, replace QUAY_NAMESPACE with your user name or the target namespace. The push command does not overwrite an existing repository, so you must delete the existing bundle before you can build and upload a new version. After you upload the bundle, create an Operator Source to load your operator bundle in OpenShift.

  4. Run operator-courier to publish the operator application to Quay:
operator-courier push "$OPERATOR_DIR" "$QUAY_NAMESPACE" "$PACKAGE_NAME" "$PACKAGE_VERSION" "$TOKEN"
  5. Check that the application was pushed successfully to Quay.io. The OpenShift cluster needs access to the created application, so ensure that the application is public or that you have configured the private repository credentials in the cluster. To make the application public, go to your quay.io account, and in the Applications tab look for the kogito-operator application. Under the Settings section, click Make Public.

  6. Publish the operator source to your OpenShift cluster:

$ oc create -f deploy/olm-catalog/kogito-operator/kogito-operator-operatorsource.yaml

Replace registryNamespace in the kogito-operator-operatorsource.yaml file with your quay namespace. The name, display name, and publisher of the Operator are the only other attributes that you can modify.
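
A one-line sketch for setting your namespace in that file, assuming GNU sed and the QUAY_NAMESPACE variable exported earlier:

$ sed -i 's/registryNamespace:.*/registryNamespace: '"${QUAY_NAMESPACE}"'/' deploy/olm-catalog/kogito-operator/kogito-operator-operatorsource.yaml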

After several minutes, the Operator appears under Catalog -> OperatorHub in the OpenShift Web Console. To find the Operator, filter the provider type by Custom.

To verify the operator status, run the following command:

$ oc describe operatorsource.operators.coreos.com/kogito-operator -n openshift-marketplace
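
If the Operator does not show up, a useful first check is whether the marketplace created a registry pod for your Operator Source (namespace taken from the command above):

$ oc get pods -n openshift-marketplace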

Running BDD Tests

REQUIREMENTS:

  • You need to be authenticated to the cluster before running the tests.
  • Native tests need a node with at least 4 GiB of memory available (build resource request).

If you have an OpenShift cluster and admin privileges, you can run BDD tests with the following command:

$ make run-tests [key=value]*

You can set the following optional keys (a combined usage example follows the list):

  • feature is a specific feature you want to run.
    If you define a relative path, it must be relative to the test folder, because the run happens there. Default is all enabled features from the test/features folder.
    Example: feature=features/operator/deploy_quarkus_service.feature

  • tags to run only scenarios with specific tags (tag filtering).
    Scenarios with the @disabled tag are always ignored.
    Expression can be:

    • “@wip”: run all scenarios with wip tag
    • “~@wip”: exclude all scenarios with wip tag
    • “@wip && ~@new”: run wip scenarios, but exclude new
    • “@wip,@undone”: run wip or undone scenarios

    A complete list of supported tags and their descriptions can be found in List of test tags

  • concurrent is the number of concurrent tests to be run.
    Default is 1.

  • timeout sets the timeout in minutes for the overall run.
    Default is 240 minutes.

  • debug to be set to true to activate debug mode.
    Default is false.

  • load_factor sets the tests load factor. Useful to account for an overloaded cluster, for example, in the calculation of timeouts.
    Default is 1.

  • local to be set to true if running the tests locally, using either a local or remote cluster. Default is false.

  • ci to be set if running tests with CI. Give CI name.

  • cr_deployment_only to be set if you don't have a built CLI. By default, applications are deployed via the CLI.

  • load_default_config sets to true if you want to directly use the default test config (from test/.default_config)

  • container_engine engine used to interact with images and local containers. Default is docker.

  • domain_suffix domain suffix used for exposed services. Ignored when running tests on OpenShift.

  • image_cache_mode Use this option to specify whether you want to use the image cache for runtime images. Available options are 'always', 'never', or 'if-available' (default).

  • http_retry_nb sets the number of retries for HTTP calls in case of failure (when the response code is not 500). Default is 3.

  • operator_image is the Operator image full name.
    Default: operator_image=quay.io/kiegroup/kogito-cloud-operator.
  • operator_tag is the Operator image tag.
    Default is the current version.
  • deploy_uri set operator deploy folder.
    Default is ./deploy.
  • cli_path set the built CLI path.
    Default is ./build/_output/bin/kogito.
  • services_image_version sets the services (jobs-service, data-index, ...) image version.
  • services_image_namespace sets the services (jobs-service, data-index, ...) image namespace.
  • services_image_registry sets the services (jobs-service, data-index, ...) image registry.
  • data_index_image_tag sets the Kogito Data Index image tag (‘services_image_version’ is ignored)
  • jobs_service_image_tag sets the Kogito Jobs Service image tag (‘services_image_version’ is ignored)
  • management_console_image_tag sets the Kogito Management Console image tag (‘services_image_version’ is ignored)
  • custom_maven_repo sets a custom Maven repository url for S2I builds, in case your artifacts are in a specific repository. See https://github.com/kiegroup/kogito-images/README.md for more information.
  • maven_mirror is the Maven mirror URL.
    This is helpful when you need to speed up the build time by referring to a closer Maven repository.
  • build_image_registry sets the build image registry.
  • build_image_namespace sets the build image namespace.
  • build_image_name_suffix sets the build image name suffix to append to usual image names.
  • build_image_version sets the build image version
  • build_s2i_image_tag sets the build S2I image full tag.
  • build_runtime_image_tag sets the build Runtime image full tag.
  • runtime_application_image_registry sets the registry for built runtime applications.
  • runtime_application_image_namespace sets the namespace for built runtime applications.
  • runtime_application_image_name_suffix sets the image name suffix to append to usual image names for built runtime applications.
  • runtime_application_image_version sets the version for built runtime applications.
  • show_scenarios sets to true to display scenarios which will be executed.
    Default is false.
  • show_steps sets to true to display scenarios and their steps which will be executed.
    Default is false.
  • dry_run sets to true to execute a dry run of the tests: CRD updates are disabled and the scenarios that would be executed are displayed.
    Default is false.
  • keep_namespace sets to true to not delete the namespace(s) after a scenario run (WARNING: this can consume cluster resources).
    Default is false.
  • disabled_crds_update sets to true to disable the update of CRDs.
    Default is false.
  • namespace_name to specify name of the namespace which will be used for scenario execution (intended for development purposes).
  • local_cluster to be set to true if running tests using a local cluster. Default is false.
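
For example, a hypothetical invocation combining several of these keys (all values are illustrative):

$ make run-tests feature=features/operator/deploy_quarkus_service.feature tags='@quarkus && ~@disabled' concurrent=2 load_factor=2 show_scenarios=true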

Logs will be shown on the Terminal.

To save the test output in a local file for future reference, run the following command:

make run-tests 2>&1 | tee log.out

Running BDD tests with current branch

$ make
$ docker tag quay.io/kiegroup/kogito-cloud-operator:0.13.1 quay.io/{USERNAME}/kogito-cloud-operator:0.13.1
$ docker push quay.io/{USERNAME}/kogito-cloud-operator:0.13.1
$ make run-tests operator_image=quay.io/{USERNAME}/kogito-cloud-operator

NOTE: Replace {USERNAME} with the user name or group you want to push to. Docker must be logged in to quay.io and able to push to that user name or group.
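
If you are not logged in yet, something like the following is usually enough (you will be prompted for your password):

$ docker login -u {USERNAME} quay.io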

Running BDD tests with a custom Kogito build image version

$ make run-tests build_image_version=<kogito_version>

Running smoke tests

The BDD tests provide some smoke tests for quick feedback on basic functionality:

$ make run-smoke-tests [key=value]*

This runs only the tests tagged with @smoke. All options from the BDD tests also apply here.
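
For instance, a hypothetical smoke run reusing some of those options could look like:

$ make run-smoke-tests concurrent=2 keep_namespace=true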

Running performance tests

The BDD tests also provide performance tests. These tests are ignored unless you specifically provide the @performance tag or run:

$ make run-performance-tests [key=value]*

This runs only the tests tagged with @performance. All options from the BDD tests also apply here.

NOTE: Performance tests should be run without concurrency.

List of test tags

Tag name           | Tag meaning
@smoke             | Smoke tests verifying basic functionality
@performance       | Performance tests
@olm               | OLM integration tests
@travelagency      | Travel agency tests
@disabled          | Disabled tests, usually with a comment describing the reason
@cli               | Tests to be executed only using the Kogito CLI
@springboot        | SpringBoot tests
@quarkus           | Quarkus tests
@dataindex         | Tests including DataIndex
@jobsservice       | Tests including Jobs service
@managementconsole | Tests including Management console
@infra             | Tests checking KogitoInfra functionality
@binary            | Tests using Kogito applications built locally and uploaded to OCP as a binary file
@native            | Tests using native build
@persistence       | Tests verifying persistence capabilities
@events            | Tests verifying eventing capabilities
@discovery         | Tests checking service discovery functionality
@usertasks         | Tests interacting with user tasks to check authentication/authorization
@resources         | Tests checking resource requests and limits
@infinispan        | Tests using the Infinispan operator
@kafka             | Tests using the Kafka operator
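
These tags can be combined through the tags key described above, for example (expression is illustrative):

$ make run-tests tags='@dataindex,@jobsservice'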

Running the Kogito Operator locally

To run the Kogito Operator locally, change the log level at runtime with the DEBUG environment variable, as shown in the following example:

$ make mod
$ make clean
$ DEBUG=true operator-sdk run local --watch-namespace=<namespace>

You can use the following command to vet, format, lint, and test your code:

$ make test