commit 7453afccb0ba18f4d83a8bfe6559f79356c6c8db
author: Ricardo Zanini <1538000+ricardozanini@users.noreply.github.com> Thu Oct 24 12:03:34 2019 -0300
committer: GitHub <noreply@github.com> Thu Oct 24 12:03:34 2019 -0300
tree: 8e5e0d05b54c6e1713f79c00e8d8cf9fbe0139d0
parent: 5eb5ac0c7481c223bf2de9c338709cac36f9925c
[KOGITO-485] Small patch for the Kogito CLI 0.5.1 release (#92)
The Kogito Operator was designed to deploy Kogito Runtimes services from source, along with every piece of infrastructure that the services might need, such as SSO (Keycloak) and Persistence (Infinispan).
First, import the Kogito image stream using the oc client:
$ oc apply -f https://raw.githubusercontent.com/kiegroup/kogito-cloud/master/s2i/kogito-imagestream.yaml -n openshift
Installation on OpenShift 4.x is straightforward, since the Kogito Operator is available in the OperatorHub as a community operator.
Follow the OpenShift Web Console instructions under Catalog, OperatorHub in the left menu to install it in any namespace in the cluster.
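If you prefer the CLI over the Web Console, the same installation can be expressed as an OLM Subscription. The sketch below is an assumption, not a verbatim file from this repository: the package name, channel, and catalog source must be checked against what your OperatorHub catalog actually lists.

```yaml
# Hypothetical OLM Subscription for installing the operator on OCP 4.x.
# The package name ("kogito-operator"), channel ("alpha"), and source
# ("community-operators") are assumptions -- verify them in your catalog.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kogito-operator
  namespace: my-namespace
spec:
  channel: alpha
  name: kogito-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
```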
You can also run the operator locally if you have the requirements configured on your local machine.
Make sure that the Kogito image stream is created in the cluster:
$ oc apply -f https://raw.githubusercontent.com/kiegroup/kogito-cloud/master/s2i/kogito-imagestream.yaml -n openshift
Then create an entry in the OperatorHub catalog:
$ oc create -f deploy/olm-catalog/kogito-cloud-operator/kogitocloud-operatorsource.yaml
It will take a few minutes for the operator to become visible under the OperatorHub section of the OpenShift console Catalog. The Operator can be easily found by filtering by the Kogito name.
Verify operator availability by running:
$ oc describe operatorsource.operators.coreos.com/kogitocloud-operator -n openshift-marketplace
Installation on OpenShift 3.11 has to be done manually since the OperatorHub catalog is not available by default:
## kogito imagestreams should already be installed/available, e.g.:
$ oc apply -f https://raw.githubusercontent.com/kiegroup/kogito-cloud/master/s2i/kogito-imagestream.yaml -n openshift
$ oc new-project <project-name>
$ ./hack/3.11deploy.sh
Use the OLM console to subscribe to the kogito Operator Catalog Source within your namespace. Once subscribed, use the console to Create KogitoApp, or create one manually as shown below.
$ oc create -f deploy/crs/app_v1alpha1_kogitoapp_cr.yaml
kogitoapp.app.kiegroup.org/example-quarkus created
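For orientation, a minimal KogitoApp resource along the lines of the file in deploy/crs is sketched below. The field names (build.gitSource.uri, build.gitSource.contextDir) are assumptions inferred from the CLI flags; the CRD files under deploy/crds are authoritative.

```yaml
# Sketch of a minimal KogitoApp custom resource (field names are
# assumptions -- consult the CRDs in deploy/crds for the real schema).
apiVersion: app.kiegroup.org/v1alpha1
kind: KogitoApp
metadata:
  name: example-quarkus
spec:
  build:
    gitSource:
      uri: https://github.com/kiegroup/kogito-examples/
      contextDir: drools-quarkus-example
```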
Alternatively, you can use the CLI to deploy your services:
$ kogito deploy-service example-quarkus https://github.com/kiegroup/kogito-examples/ --context-dir=drools-quarkus-example
By default, Kogito Services are built with traditional Java compilers to speed up build time and save resources. This means that the final generated artifact will be an uber JAR with the chosen runtime (defaults to Quarkus).
Kogito Services implemented with Quarkus can be built to a native binary. This yields a very low runtime footprint, but demands a lot of resources during build time. Read more about AOT compilation here.
In our tests, native builds take approximately 10 minutes, and the build pod can consume up to 3.5GB of RAM and 1.5 CPU cores. Make sure that you have these resources available when running native builds.
To deploy a service using native builds, run the deploy-service command with the --native flag:
$ kogito deploy-service example-quarkus https://github.com/kiegroup/kogito-examples/ --context-dir=drools-quarkus-example --native
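If you create the KogitoApp resource by hand instead of using the CLI, the native build is presumably toggled by a boolean in the build spec. The fragment below is a sketch only; the field name is an assumption mirroring the CLI's --native flag.

```yaml
# Hypothetical KogitoApp fragment enabling a native build (the "native"
# field name is assumed to mirror the CLI flag; verify against the CRD).
spec:
  build:
    native: true
    gitSource:
      uri: https://github.com/kiegroup/kogito-examples/
      contextDir: drools-quarkus-example
```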
If you don't see any builds running nor any resources created in the namespace, take a look at the Kogito Operator log.
To look at the operator logs, first identify where the operator is deployed:
$ oc get pods
NAME                                     READY   STATUS    RESTARTS   AGE
kogito-cloud-operator-6d7b6d4466-9ng8t   1/1     Running   0          26m
Use the pod name as the input of the following command:
$ oc logs -f kogito-cloud-operator-6d7b6d4466-9ng8t
To delete a deployed service, run:
$ kogito delete-service example-quarkus
The Kogito Operator is able to deploy the Data Index Service as a Custom Resource (KogitoDataIndex). Since the Data Index Service depends on Kafka and Infinispan, it's necessary to manually deploy an Apache Kafka cluster and an Infinispan Server (10.x) in the same namespace.
:information_source: It's planned for future releases that the Kogito Operator will deploy an Infinispan and a Kafka cluster when deploying the Data Index Service.
To deploy an Infinispan Server, you can leverage the oc new-app <docker image> command as follows:
$ oc new-app jboss/infinispan-server:10.0.0.Beta3
Expect output similar to this:
--> Found Docker image caaa296 (5 months old) from Docker Hub for "jboss/infinispan-server:10.0.0.Beta3"

    Infinispan Server
    -----------------
    Provides a scalable in-memory distributed database designed for fast access to large volumes of data.

    Tags: datagrid, java, jboss

    * An image stream tag will be created as "infinispan-server:10.0.0.Beta3" that will track this image
    * This image will be deployed in deployment config "infinispan-server"
    * Ports 11211/tcp, 11222/tcp, 57600/tcp, 7600/tcp, 8080/tcp, 8181/tcp, 8888/tcp, 9990/tcp will be load balanced by service "infinispan-server"
    * Other containers can access this service through the hostname "infinispan-server"

--> Creating resources ...
    imagestream.image.openshift.io "infinispan-server" created
    deploymentconfig.apps.openshift.io "infinispan-server" created
    service "infinispan-server" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/infinispan-server'
    Run 'oc status' to view your app.
OpenShift will create everything you need for Infinispan Server to work in the namespace. Make sure that the pod is running:
$ oc get pods -l app=infinispan-server
Take a look at the logs by running:
# take the pod name from the command you ran before
$ oc logs -f <pod name>
The Infinispan Server can be accessed within the namespace on port 11222:
$ oc get svc -l app=infinispan-server
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                      AGE
infinispan-server   ClusterIP   172.30.193.214   <none>        7600/TCP,8080/TCP,8181/TCP,8888/TCP,9990/TCP,11211/TCP,11222/TCP,57600/TCP   4m19s
Deploying Strimzi is much easier since it's an Operator and should be available in the OperatorHub. On the OpenShift Web Console, go to the left menu, Catalog, OperatorHub, and search for Strimzi.
Follow the on-screen instructions to install the Strimzi Operator. At the end, you should see the Strimzi Operator on the Installed Operators tab:
Next, you need to create a Kafka cluster and a Kafka Topic for the Data Index Service to connect to. Click the name of the Strimzi Operator, then the Kafka tab, and then Create Kafka. Accept the default options to create a 3-node Kafka cluster. In a development environment, consider setting the Zookeeper and Kafka replicas to 1 to save resources.
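To illustrate the replicas-to-1 suggestion, a development-sized Kafka resource might look like the sketch below. This is a minimal sketch against the Strimzi v1beta1 API, not the exact manifest the console generates; adjust listeners and storage to your needs.

```yaml
# Minimal single-node Kafka cluster for development (Strimzi v1beta1).
# Ephemeral storage: data is lost when the pods restart.
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 1
    listeners:
      plain: {}
    storage:
      type: ephemeral
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
```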
After a few minutes you should see the pods running and the services available:
$ oc get svc -l strimzi.io/cluster=my-cluster
NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
my-cluster-kafka-bootstrap    ClusterIP   172.30.228.90    <none>        9091/TCP,9092/TCP,9093/TCP   9d
my-cluster-kafka-brokers      ClusterIP   None             <none>        9091/TCP,9092/TCP,9093/TCP   9d
my-cluster-zookeeper-client   ClusterIP   172.30.241.146   <none>        2181/TCP                     9d
my-cluster-zookeeper-nodes    ClusterIP   None             <none>        2181/TCP,2888/TCP,3888/TCP   9d
The service you're interested in is my-cluster-kafka-bootstrap:9092. We will use it to deploy the Data Index Service later.
With the cluster up and running, the next step is creating the Kafka Topics required by the Data Index Service: kogito-processinstances-events and kogito-usertaskinstances-events.
In the OpenShift Web Console, go to Installed Operators, Strimzi Operator, Kafka Topic tab. From there, create a new Kafka Topic and name it kogito-processinstances-events, as in the example below:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: kogito-processinstances-events
  labels:
    strimzi.io/cluster: my-cluster
  namespace: kogito
spec:
  partitions: 10
  replicas: 3
  config:
    retention.ms: 604800000
    segment.bytes: 1073741824
Then do the same for the kogito-usertaskinstances-events topic.
To check that everything was created successfully, run the following command:
$ oc describe kafkatopic/kogito-processinstances-events
Name:         kogito-processinstances-events
Namespace:    kogito
Labels:       strimzi.io/cluster=my-cluster
Annotations:  <none>
API Version:  kafka.strimzi.io/v1beta1
Kind:         KafkaTopic
Metadata:
  Creation Timestamp:  2019-08-28T18:09:41Z
  Generation:          2
  Resource Version:    5673235
  Self Link:           /apis/kafka.strimzi.io/v1beta1/namespaces/kogito/kafkatopics/kogito-processinstances-events
  UID:                 0194989e-c9bf-11e9-8160-0615e4bfa428
Spec:
  Config:
    message.format.version:  2.3-IV1
    retention.ms:            604800000
    segment.bytes:           1073741824
  Partitions:  10
  Replicas:    1
  Topic Name:  kogito-processinstances-events
Events:  <none>
Now that you have the required infrastructure, it's safe to deploy the Kogito Data Index Service.
Having installed the Kogito Operator, create a new Kogito Data Index resource using the service URIs from Infinispan and Kafka:
$ oc get svc -l app=infinispan-server
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                      AGE
infinispan-server   ClusterIP   172.30.193.214   <none>        7600/TCP,8080/TCP,8181/TCP,8888/TCP,9990/TCP,11211/TCP,11222/TCP,57600/TCP   4m19s
In this example, the Infinispan Server service is infinispan-server:11222.
Then grab the Kafka cluster URI:
$ oc get svc -l strimzi.io/cluster=my-cluster | grep bootstrap
my-cluster-kafka-bootstrap   ClusterIP   172.30.228.90   <none>   9091/TCP,9092/TCP,9093/TCP   9d
In this case, the Kafka cluster service is my-cluster-kafka-bootstrap:9092.
Use this information to create the Kogito Data Index resource.
If you have installed the Kogito CLI, you can simply run:
$ kogito install data-index -p my-project --infinispan-url infinispan-server:11222 --kafka-url my-cluster-kafka-bootstrap:9092
If you're running on OCP 4.x, you can use the OperatorHub user interface. In the left menu, go to Installed Operators, Kogito Operator, Kogito Data Index tab. From there, click "Create Kogito Data Index" and create a new resource like the example below, using the Infinispan and Kafka services:
apiVersion: app.kiegroup.org/v1alpha1
kind: KogitoDataIndex
metadata:
  name: kogito-data-index
spec:
  # If not informed, these default values will be set for you
  # environment variables to set in the runtime container. Example: JAVAOPTS: "-Dquarkus.log.level=DEBUG"
  env: {}
  # number of pods to be deployed
  replicas: 1
  # image to use for this deploy
  image: "quay.io/kiegroup/kogito-data-index:latest"
  # Limits and requests for the Data Index pod
  memoryLimit: ""
  memoryRequest: ""
  cpuLimit: ""
  cpuRequest: ""
  # details about the kafka connection
  kafka:
    # the service name and port for the kafka cluster. Example: my-kafka-cluster:9092
    serviceURI: my-cluster-kafka-bootstrap:9092
  # details about the connected infinispan
  infinispan:
    # the service name and port of the infinispan cluster. Example: my-infinispan:11222
    serviceURI: infinispan-server:11222
You can use the CR file shown above as a reference and create the custom resource from the command line:
# clone this repo
$ git clone https://github.com/kiegroup/kogito-cloud-operator.git
$ cd kogito-cloud-operator
# make your changes
$ vi deploy/crds/app_v1alpha1_kogitodataindex_cr.yaml
# deploy to the cluster
$ oc create -f deploy/crds/app_v1alpha1_kogitodataindex_cr.yaml -n my-project
You should be able to access the GraphQL interface via the route created for you:
$ oc get routes -l app=kogito-data-index
NAME                HOST/PORT                                             PATH   SERVICES            PORT   TERMINATION   WILDCARD
kogito-data-index   kogito-data-index-kogito.apps.mycluster.example.com          kogito-data-index   8180                 None
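As a quick smoke test of the GraphQL interface, you can run a query against that route. The query below is a sketch: the ProcessInstances field and its subfields are assumptions about the Data Index schema, so adapt them to what the GraphiQL explorer actually shows.

```graphql
# Hypothetical query: list process instance ids and states
# (field names assumed; check the schema in the GraphiQL UI).
{
  ProcessInstances {
    id
    processId
    state
  }
}
```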
A CLI tool is available to make it easy to deploy new Kogito Services from source instead of relying on CR YAML files.
Requirements:
- oc client installed

Installation steps:
1. Download the correct distribution for your machine
2. Unpack the binary: tar -xvf release.tar.gz
3. You should see an executable named kogito. Move the binary to a pre-existing directory in your PATH. For example: # cp /path/to/kogito /usr/local/bin
Just download the latest 64-bit Windows release. Extract the zip file through a file browser. Add the extracted directory to your PATH. You can now use kogito from the command line.
:warning: Building the CLI from source requires that go is installed and available in your PATH.
Build and install the CLI by running:
$ git clone https://github.com/kiegroup/kogito-cloud-operator
$ cd kogito-cloud-operator
$ make install-cli
The kogito CLI should be available in your path:
$ kogito
Kogito CLI deploys your Kogito Services into an OpenShift cluster

Usage:
  kogito [command]

Available Commands:
  delete-project Deletes a Kogito Project - i.e., the Kubernetes/OpenShift namespace
  delete-service Deletes a Kogito Runtime Service deployed in the namespace/project
  deploy-service Deploys a new Kogito Runtime Service into the given Project
  help           Help about any command
  install        Install all sort of infrastructure components to your Kogito project
  new-project    Creates a new Kogito Project for your Kogito Services
  use-project    Sets the Kogito Project where your Kogito Service will be deployed

Flags:
      --config string   config file (default is $HOME/.kogito.json)
  -h, --help            help for kogito
  -v, --verbose         verbose output
      --version         display version

Use "kogito [command] --help" for more information about a command.
After installing the Kogito Operator, it's possible to deploy a new Kogito Service by using the CLI:
# creates a new namespace in your cluster
$ kogito new-project kogito-cli
# deploys a new Kogito Runtime Service from source
$ kogito deploy-service example-drools https://github.com/kiegroup/kogito-examples --context-dir drools-quarkus-example
If you are using OpenShift 3.11 as described in the previous chapter, you should use the existing namespace you created during the manual deployment, with the following CLI commands:
# use the provisioned namespace in your OpenShift 3.11 cluster
$ kogito use-project <project-name>
# deploys a new kogito service from source
$ kogito deploy-service example-drools https://github.com/kiegroup/kogito-examples --context-dir drools-quarkus-example
This can be shortened to:
$ kogito deploy-service example-drools https://github.com/kiegroup/kogito-examples --context-dir drools-quarkus-example --project <project-name>
While fixing issues or adding new features to the Kogito Operator, please consider taking a look at the Contributions and Architecture documentation.
We have a script ready for you. The output of this command is a ready-to-use Kogito Operator image to be deployed in any namespace:
$ make
To install this operator on OpenShift 4 for end-to-end testing, make sure you have access to a quay.io account to create an application repository. Follow the authentication instructions for Operator Courier to obtain an account token. This token is in the form of “basic XXXXXXXXX” and both words are required for the command.
Push the operator bundle to your quay application repository as follows:
$ operator-courier push deploy/olm-catalog/kogito-cloud-operator/ namespace kogitocloud-operator 0.5.0 "basic XXXXXXXXX"
If pushing to another quay repository, replace namespace with your username or other namespace. Notice that the push command does not overwrite an existing repository, and the bundle needs to be deleted before a new version can be built and uploaded. Once the bundle has been uploaded, create an Operator Source to load your operator bundle in OpenShift.
Note that the OpenShift cluster needs access to the created application. Make sure the application is public or that you have configured the private repository credentials in the cluster. To make the application public, go to your quay.io account, and in the Applications tab look for the kogitocloud-operator application. Under the settings section, click the make public button.
## kogito imagestreams should already be installed/available, e.g.:
$ oc apply -f https://raw.githubusercontent.com/kiegroup/kogito-cloud/master/s2i/kogito-imagestream.yaml -n openshift
$ oc create -f deploy/olm-catalog/kogito-cloud-operator/kogitocloud-operatorsource.yaml
Remember to replace registryNamespace in the kogitocloud-operatorsource.yaml with your quay namespace. The name, display name, and publisher of the operator are the only other attributes that may be modified.
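For orientation, an OperatorSource pointing at a quay.io app registry generally follows the shape sketched below. The file shipped in deploy/olm-catalog is authoritative; the endpoint and names here are assumptions for illustration.

```yaml
# Sketch of an OperatorSource for a quay.io app registry (values are
# assumptions -- use the file shipped in deploy/olm-catalog instead).
apiVersion: operators.coreos.com/v1
kind: OperatorSource
metadata:
  name: kogitocloud-operator
  namespace: openshift-marketplace
spec:
  type: appregistry
  endpoint: https://quay.io/cnr
  registryNamespace: <your-quay-namespace>
  displayName: Kogito Cloud Operator
  publisher: kiegroup
```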
It will take a few minutes for the operator to become visible under the OperatorHub section of the OpenShift console Catalog. The Operator can be easily found by filtering the provider type to Custom.
It's possible to verify the operator status by running:
$ oc describe operatorsource.operators.coreos.com/kogitocloud-operator -n openshift-marketplace
If you have an OpenShift cluster and admin privileges, you can run e2e tests with the following command:
$ make run-e2e namespace=<namespace> tag=<tag> native=<true|false> maven_mirror=<maven_mirror_url>
Where:

- namespace (required): a temporary namespace where the test will run. You don't need to create the namespace, since it will be created and deleted after running the tests
- tag (optional, default is the current release): the image tag for the Kogito image builds, for example: 0.5.0-rc1. Useful in situations where the Kogito Cloud images haven't been released yet and are under a temporary tag
- native (optional, default is false): indicates whether the e2e test should use native or JVM builds. See Native X JVM Builds
- maven_mirror (optional, default is blank): the Maven mirror URL. Useful when you need to speed up the build time by referring to a closer maven repository

In case of errors while running this test, a huge log dump will appear in your terminal. To save the test output in a local file to be analysed later, use the command below:
make run-e2e namespace=kogito-e2e 2>&1 | tee log.out
Change the log level at runtime with the DEBUG environment variable, e.g.:

$ make mod
$ make clean
$ DEBUG="true" operator-sdk up local --namespace=<namespace>
Before submitting a PR, please be sure to read the contributors guide.
It's always worth noting that you should generate, vet, format, lint, and test your code. All of this can be done with one command:
$ make test
By default, if your Kogito Runtime Service has the monitoring-prometheus-addon dependency, the Kogito Operator will add annotations to the pod and service of the deployed application, for example:
apiVersion: v1
kind: Service
metadata:
  annotations:
    org.kie.kogito/managed-by: Kogito Operator
    org.kie.kogito/operator-crd: KogitoApp
    prometheus.io/path: /metrics
    prometheus.io/port: "8080"
    prometheus.io/scheme: http
    prometheus.io/scrape: "true"
  labels:
    app: onboarding-service
    onboarding: process
  name: onboarding-service
  namespace: kogito
  ownerReferences:
  - apiVersion: app.kiegroup.org/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: KogitoApp
    name: onboarding-service
spec:
  clusterIP: 172.30.173.165
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: onboarding-service
    onboarding: process
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
But those annotations won't work for the Prometheus Operator. If you're deploying on OpenShift 4.x, chances are that you're using the Prometheus Operator.
In a scenario where Prometheus is deployed and managed by the Prometheus Operator, you should create a new ServiceMonitor resource to expose the Kogito Service for Prometheus to scrape:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    team: kogito
  name: onboarding-service
  namespace: openshift-monitoring
spec:
  endpoints:
  - path: /metrics
    port: http
  namespaceSelector:
    matchNames:
    # the namespace where the service is deployed
    - kogito
  selector:
    matchLabels:
      app: onboarding-service
Then you can see the endpoint being scraped by Prometheus in the Targets web console:
The metrics exposed by the Kogito Service can be seen on the Graph, for example:
For more information about the Prometheus Operator, check the Getting Started guide.
Please take a look at the Contributing to Kogito Operator guide.