[[contributing]]
= Contributing to Camel K
We love contributions!
The project is written in https://golang.org/[go] and contains some parts written in Java for the https://github.com/apache/camel-k-runtime/[integration runtime].
Camel K is built on top of Kubernetes through *Custom Resource Definitions*. The https://github.com/operator-framework/operator-sdk[Operator SDK] is used
to manage the lifecycle of those custom resources.
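Once Camel K is installed on a cluster, you can inspect those custom resource definitions like any other Kubernetes resource, for example (assuming `kubectl` is configured against that cluster):
[source]
----
kubectl get crds | grep camel
----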
[[requirements]]
== Requirements
To build the project, you need to meet the following requirements:
* **Go version 1.13+**: needed to compile and test the project. Refer to the https://golang.org/[Go website] for the installation.
* **Operator SDK v0.9.0+**: used to build the operator and the Docker images. Instructions in the https://github.com/operator-framework/operator-sdk[Operator SDK website] (binary downloads available in the release page).
* **GNU Make**: used to define composite build actions. This should already be installed or available as a package if you have a good OS (https://www.gnu.org/software/make/).
The Camel K Java runtime (camel-k-runtime) requires:
* **Java 11**: needed for compilation.
* **Maven**: needed for building.
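A quick sanity check for the requirements above is to print the version of each tool (a simple sketch, assuming the binaries are already on your `PATH`):
[source]
----
go version
operator-sdk version
make --version
java -version
mvn --version
----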
[[checks]]
== Running checks
Checks rely on `golangci-lint` being installed; to install it, follow the https://github.com/golangci/golangci-lint#local-installation[Local Installation] instructions.
You can run checks via `make lint`, or you can have them run automatically through a Git pre-commit hook managed by https://pre-commit.com[pre-commit]. After installing pre-commit, install the hooks by running:
[source]
----
$ pre-commit install
----
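If you prefer to run the checks manually instead of through the hook, the `make lint` target mentioned above does the same job:
[source]
----
make lint
----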
[[checking-out]]
== Checking Out the Sources
You can create a fork of this project from GitHub, then clone your fork with the `git` command line tool.
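For example, the clone step could look like this (replace `<your-username>` with your GitHub user; adding the `upstream` remote is optional but handy for keeping your fork in sync):
[source]
----
git clone https://github.com/<your-username>/camel-k.git
cd camel-k
git remote add upstream https://github.com/apache/camel-k.git
----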
[[structure]]
== Structure
This is a high-level overview of the project structure:
.Structure
[options="header"]
|=======================
| Path | Content
| https://github.com/apache/camel-k/tree/master/build[/build] | Contains the Docker and Maven build configuration.
| https://github.com/apache/camel-k/tree/master/cmd[/cmd] | Contains the entry points (the *main* functions) for the **camel-k** binary (manager) and the **kamel** client tool.
| https://github.com/apache/camel-k/tree/master/deploy[/deploy] | Contains Kubernetes resource files that are used by the **kamel** client during installation. The `/deploy/resources.go` file is kept in sync with the content of the directory (`make build-embed-resources`), so that resources can be used from within the go code.
| https://github.com/apache/camel-k/tree/master/docs[/docs] | Contains the documentation website based on https://antora.org/[Antora].
| https://github.com/apache/camel-k/tree/master/e2e[/e2e] | Includes integration tests to ensure that the software interacts correctly with Kubernetes and OpenShift.
| https://github.com/apache/camel-k/tree/master/examples[/examples] | Various examples of Camel K usage.
| https://github.com/apache/camel-k/tree/master/pkg[/pkg] | This is where the code resides. The code is divided into multiple subpackages.
| https://github.com/apache/camel-k/tree/master/script[/script] | Contains scripts used during make operations for building the project.
|=======================
[[building]]
== Building
To build the whole project, run:
[source]
----
make
----
This executes a full build of both the Java and Go code. If you need to build the components separately, you can execute:
* `make build-operator`: to build the operator binary only.
* `make build-kamel`: to build the `kamel` client tool only.
* `make build-runtime`: to build the Java-based runtime code only.
After a successful build, if you're connected to a Docker daemon, you can build the operator Docker image by running:
[source]
----
make images
----
[[push-snapshot]]
== Push runtime snapshot
For the Java bits of the runtime, you can push to the Apache Snapshot repository with:
* `make push-runtime-snapshot`: to push the Java-based runtime snapshot JARs.
In your `settings.xml`, you'll need the correct ASF credentials to push:
[source,xml]
----
<server>
  <id>apache.snapshots.https</id>
  <username>username</username>
  <password>password</password>
</server>
<server>
  <id>apache.releases.https</id>
  <username>username</username>
  <password>password</password>
</server>
----
Don't forget to run `make build-runtime` before pushing the snapshot.
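Putting it together, a typical snapshot push looks like this:
[source]
----
make build-runtime
make push-runtime-snapshot
----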
The `make images` command above produces a `camel-k` image with the name `docker.io/apache/camel-k`. Sometimes you might need to produce Camel K images that will be pushed to a custom repository, e.g. `docker.io/myrepo/camel-k`; to do that, you can pass the `imgDestination` parameter to make as shown below:
[source]
----
make imgDestination='docker.io/myrepo' images
----
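After building with a custom destination, you can push the resulting image with the regular Docker tooling (a sketch; `<tag>` stands for whatever version tag the build assigned):
[source]
----
docker push docker.io/myrepo/camel-k:<tag>
----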
[[testing]]
== Testing
Unit tests are executed automatically as part of the build. They use the standard go testing framework.
Integration tests (aimed at ensuring that the code integrates correctly with Kubernetes and OpenShift) need special care.
The **convention** used in this repo is to name unit tests `xxx_test.go`, and name integration tests `yyy_integration_test.go`.
Integration tests are all in the https://github.com/apache/camel-k/tree/master/e2e[/e2e] dir.
Since both names end with `_test.go`, both would be executed by go during the build, so you need to add a special **build tag** to mark
integration tests. An integration test should start with the following line:
[source]
----
// +build integration
----
Look into the https://github.com/apache/camel-k/tree/master/e2e[/e2e] directory for examples of integration tests.
Before running an integration test, you need to be connected to a Kubernetes/OpenShift namespace.
After you log in to your cluster, you can run the following command to execute **all** integration tests:
[source]
----
make test-integration
----
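For reference, running the integration tests essentially means invoking the standard Go test tooling with the `integration` build tag enabled; a rough sketch of what the make target wraps (an assumption, extra flags and environment may be required) is:
[source]
----
go test -tags=integration ./e2e/...
----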
[[running]]
== Running
If you want to install everything you have in your source code and see it running on Kubernetes, follow the instructions below for the cluster you are using:
=== For Red Hat CodeReady Containers (CRC)
* You need to have https://docs.docker.com/get-docker/[Docker] installed and running (or connected to a Docker daemon)
* You need to set up the Docker daemon to https://docs.docker.com/registry/insecure/[trust] CRC's insecure Docker registry, which is exposed by default through the route `default-route-openshift-image-registry.apps-crc.testing`. One way of doing that is to instruct the Docker daemon to trust the certificate:
** `oc extract secret/router-ca --keys=tls.crt -n openshift-ingress-operator`: to extract the certificate
** `sudo cp tls.crt /etc/docker/certs.d/default-route-openshift-image-registry.apps-crc.testing/ca.crt`: to copy the certificate for Docker daemon to trust
** `docker login -u kubeadmin -p $(oc whoami -t) default-route-openshift-image-registry.apps-crc.testing`: to test that the certificate is trusted
* Run `make install-crc`: to build the project and install it in the current namespace on CRC
* You can specify a different namespace with `make install-crc project=myawesomeproject`
* To uninstall Camel K, run `kamel uninstall --all --olm=false`
These commands assume you have an already running CRC instance and that you are logged in correctly.
=== For Minishift
* Run `make install-minishift` (or just `make install`): to build the project and install it in the current namespace on Minishift
* You can specify a different namespace with `make install-minishift project=myawesomeproject`
This command assumes you have an already running Minishift instance.
=== For Minikube
* Run `make install-minikube`: to build the project and install it in the current namespace on Minikube
This command assumes you have an already running Minikube instance.
=== Use
Now you can play with Camel K:
[source]
----
./kamel run examples/Sample.java
----
To add additional dependencies to your routes:
[source]
----
./kamel run -d camel:dns examples/dns.js
----
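Once an integration is running, you can list it with the client itself (assuming the standard `kamel` sub-commands are available in your build):
[source]
----
./kamel get
----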
[[debugging]]
== Debugging and Running from IDE
Sometimes it's useful to debug the code from the IDE when troubleshooting.
.**Debugging the `kamel` binary**
It should be straightforward: just execute the https://github.com/apache/camel-k/tree/master/cmd/kamel/main.go[/cmd/kamel/main.go] file from the IDE (e.g. Goland) in debug mode.
.**Debugging the operator**
It is a bit more complex (but not so much).
You are going to run the operator code **outside** OpenShift in your IDE so, first of all, you need to **stop the operator running inside**:
[source]
----
# use kubectl in plain Kubernetes
oc scale deployment/camel-k-operator --replicas 0
----
You can scale it back to 1 when you're done and you have updated the operator image.
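For example:
[source]
----
# use kubectl in plain Kubernetes
oc scale deployment/camel-k-operator --replicas 1
----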
You can set up the IDE (e.g. Goland) to execute the https://github.com/apache/camel-k/blob/master/cmd/manager/main.go[/cmd/manager/main.go] file in debug mode.
When configuring the IDE task, make sure to add all required environment variables in the *IDE task configuration screen*:
* Set the `KUBERNETES_CONFIG` environment variable to point to your Kubernetes configuration file (usually `<homedir>/.kube/config`).
* Set the `WATCH_NAMESPACE` environment variable to a Kubernetes namespace you have access to.
* Set the `OPERATOR_NAME` environment variable to `camel-k`.
After you set up the IDE task, you can run and debug the operator process.
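If you prefer the command line over an IDE, you can approximate the same setup with plain `go run` (a sketch, assuming the default kubeconfig location; replace `my-namespace` with a namespace you can access):
[source]
----
KUBERNETES_CONFIG=$HOME/.kube/config \
WATCH_NAMESPACE=my-namespace \
OPERATOR_NAME=camel-k \
go run ./cmd/manager
----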
NOTE: The operator can be fully debugged in Minishift, because it uses OpenShift S2I binary builds under the hood.
The build phase cannot be (currently) debugged in Minikube because the Kaniko builder requires that the operator and the publisher pod
share a common persistent volume.