commit ad57b40ed80b6260939901f08e629b8bde7a07d6
author: nferraro <ni.ferraro@gmail.com>, Wed Sep 19 18:02:15 2018 +0200
committer: nferraro <ni.ferraro@gmail.com>, Wed Sep 19 18:02:15 2018 +0200
tree: 4a8d2056f093a34ec241a35f039e6b03b4c81698
parent: 3e48add99c16861cfc4bffbe5831a87f5ca4707d
Release 0.0.2
Apache Camel K (a.k.a. Kamel) is a lightweight integration framework built from Apache Camel that runs natively on Kubernetes and is specifically designed for serverless and microservice architectures.

Camel K allows you to run integrations on a Kubernetes or OpenShift cluster. If you don't have a cloud instance of Kubernetes or OpenShift, you can create a development cluster following the instructions below.
There are various options for creating a development cluster:
### Minishift

You can run Camel K integrations on OpenShift using the Minishift cluster creation tool. Follow the instructions in the getting started guide for the installation.

After installing the `minishift` binary, you need to enable the `admin-user` addon:

```
minishift addons enable admin-user
```

Then you can start the cluster with:

```
minishift start
```
### Minikube

Minikube and Kubernetes are not yet supported (but support is coming soon).
To start using Camel K you need the `kamel` binary, which can be used both to configure the cluster and to run integrations. Look into the release page for the latest version of the `kamel` tool.

If you want to contribute, you can also build it from source! Refer to the contributing guide for information on how to do it.

Once you have the `kamel` binary, log into your cluster using the `oc` or `kubectl` tool and execute the following command to install Camel K:

```
kamel install
```
This will configure the cluster with the Camel K custom resource definitions and install the operator in the current namespace.

Note: Custom Resource Definitions (CRD) are cluster-wide objects, and you need admin rights to install them. Fortunately, this operation needs to be done only once per cluster. So, if the `kamel install` operation fails, you'll be asked to repeat it when logged in as admin. For Minishift, this means executing `oc login -u system:admin` and then `kamel install --cluster-setup`, but only for the first-time installation.
After the initial setup, you can run a Camel integration on the cluster by executing:

```
kamel run runtime/examples/Sample.java
```

A "Sample.java" file is included in the runtime/examples folder of this repository. You can change the content of the file and execute the command again to see the changes.

A JavaScript integration is also provided as an example; to run it:

```
kamel run runtime/examples/routes.js
```
Camel K integrations follow a lifecycle composed of several steps before getting into the `Running` state. You can check the status of all integrations by executing the following command:

```
kamel get
```
We love contributions!
The project is written in Go and contains some parts written in Java for the integration runtime. Camel K is built on top of Kubernetes through Custom Resource Definitions. The Operator SDK is used to manage the lifecycle of those custom resources.
In order to build the project, you need to comply with the requirements below.

You can create a fork of this project from GitHub, then clone your fork with the `git` command line tool.

You need to put the project in your $GOPATH (refer to the Go documentation for more information). So, make sure that the root of the GitHub repo is in the path:

```
$GOPATH/src/github.com/apache/camel-k/
```
This is a high level overview of the project structure:

- The `/deploy/resources.go` file is kept in sync with the content of the directory (`make build-embed-resources`), so that resources can be used from within the Go code.

Go dependencies in the vendor directory are not included when you clone the project. Before compiling the source code, you need to sync your local vendor directory with the project dependencies, using the following command:
```
make dep
```

The `make dep` command runs `dep ensure -v` under the hood, so make sure that `dep` is properly installed.
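A quick pre-flight check for `dep` can help before running `make dep`. The `go get` command in the message below was the common way to install `dep` at the time; it is an assumption, not something stated in this document:

```shell
# Print the dep version if it is installed, otherwise hint at an install command.
if command -v dep >/dev/null 2>&1; then
  dep version
else
  echo "dep not found; install it, e.g.: go get -u github.com/golang/dep/cmd/dep"
fi
```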
To build the whole project you now need to run:

```
make
```

This executes a full build of both the Java and Go code. If you need to build the components separately you can execute:

- `make build-operator`: to build the operator binary only.
- `make build-kamel`: to build the `kamel` client tool only.
- `make build-runtime`: to build the Java-based runtime code only.

After a successful build, if you're connected to a Docker daemon, you can build the operator Docker image by running:

```
make images
```
Unit tests are executed automatically as part of the build. They use the standard Go testing framework.

Integration tests (aimed at ensuring that the code integrates correctly with Kubernetes and OpenShift) need special care.
The convention used in this repo is to name unit tests `xxx_test.go`, and integration tests `yyy_integration_test.go`. Integration tests are all in the `/test` dir.

Since both names end with `_test.go`, both would be executed by Go during the build, so you need to add a special build tag to mark integration tests. An integration test should start with the following line:

```
// +build integration
```
Before running an integration test, you need to:

- Set the `KUBERNETES_CONFIG` environment variable to point to your Kubernetes configuration file (usually `<home-dir>/.kube/config`).
- Set the `WATCH_NAMESPACE` environment variable to a Kubernetes namespace you have access to.
- Set the `OPERATOR_NAME` environment variable to `camel-k-operator`.

When the configuration is done, you can run the following command to execute all integration tests:
```
make test-integration
```
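The environment setup above can be done in one block; the kubeconfig path and the namespace below are placeholders (assumptions) to adapt to your cluster:

```shell
# Placeholders: adjust the kubeconfig path and namespace to your environment.
export KUBERNETES_CONFIG="$HOME/.kube/config"
export WATCH_NAMESPACE="myproject"
export OPERATOR_NAME="camel-k-operator"
```

With these exported, `make test-integration` picks them up from the environment.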
If you want to install everything you have in your source code and see it running on Kubernetes, you can run one of the following commands:

- `make install-minishift` (or just `make install`): to build the project and install it in the current namespace on Minishift
- `make install-minishift project=myawesomeproject`: to build the project and install it in a custom namespace on Minishift

These commands assume you have an already running Minishift instance.
Now you can play with Camel K:

```
./kamel run runtime/examples/Sample.java
```

To add additional dependencies to your routes:

```
./kamel run -d camel:dns runtime/examples/dns.js
```
Sometimes it's useful to debug the code from the IDE when troubleshooting.

### Debugging the `kamel` binary

It should be straightforward: just execute the /cmd/kamel/kamel.go file from the IDE (e.g. GoLand) in debug mode.
### Debugging the operator

It is a bit more complex (but not so much).

You are going to run the operator code outside OpenShift in your IDE, so first of all you need to stop the operator running inside the cluster:

```
oc scale deployment/camel-k-operator --replicas 0
```

You can scale it back to 1 (`oc scale deployment/camel-k-operator --replicas 1`) when you're done and you have updated the operator image.

You can set up the IDE (e.g. GoLand) to execute the /cmd/camel-k-operator/camel_k_operator.go file in debug mode.
When configuring the IDE task, make sure to add all required environment variables in the IDE task configuration screen (such as `KUBERNETES_CONFIG`, as explained in the testing section).
If required, it is possible to completely uninstall Camel K from OpenShift or Kubernetes with the following command, using the `oc` or `kubectl` tool:

```
# use kubectl instead of oc if you're on plain Kubernetes
oc delete all,pvc,configmap,rolebindings,clusterrolebindings,secrets,sa,roles,clusterroles,crd -l 'app=camel-k'
```