Apache OpenWhisk is an open source, distributed Serverless platform that executes functions (fx) in response to events at any scale. The OpenWhisk platform supports a programming model in which developers write functional logic (called Actions), in any supported programming language, that can be dynamically scheduled and run in response to associated events (via Triggers) from external sources (Feeds) or from HTTP requests.
This repository supports deploying OpenWhisk to Kubernetes. It contains a Helm chart that can be used to deploy the core OpenWhisk platform and optionally some of its Event Providers to both single-node and multi-node Kubernetes clusters.
The same Helm chart can also be used to deploy OpenWhisk to OKD/OpenShift via a strategy of using helm template to generate yaml that is then fed to the oc cli. There are still some rough edges in this process; we would welcome community contributions to help improve the targeting of OKD/OpenShift and document the necessary steps.
Kubernetes is a container orchestration platform that automates the deployment, scaling, and management of containerized applications. Helm is a package manager for Kubernetes that simplifies the management of Kubernetes applications. You do not need to have detailed knowledge of either Kubernetes or Helm to use this project, but you may find it useful to review their basic documentation to become familiar with their key concepts and terminology.
Your first step is to create a Kubernetes cluster that is capable of supporting an OpenWhisk deployment. Although there are some technical requirements that the Kubernetes cluster must satisfy, any of the options described below is acceptable.
The simplest way to get a small Kubernetes cluster suitable for development and testing is to use one of the Docker-in-Docker approaches for running Kubernetes directly on top of Docker on your development machine. Configuring Docker with 4GB of memory and 2 virtual CPUs is sufficient for the default settings of OpenWhisk. Depending on your host operating system, we recommend the following:
You can also provision a Kubernetes cluster from a cloud provider, subject to the cluster meeting the technical requirements. You will need at least 1 worker node with 4GB of memory and 2 virtual CPUs to deploy the default configuration of OpenWhisk. You can deploy to significantly larger clusters by scaling up the replica count of the various components and labeling multiple nodes as invoker nodes. We have detailed documentation on using Kubernetes clusters from the following major cloud providers:
We would welcome contributions of documentation for Azure (AKS) and any other public cloud providers.
You will need at least 1 worker node with 4GB of memory and 2 virtual CPUs to deploy the default configuration of OpenWhisk. You can deploy to significantly larger clusters by scaling up the replica count of the various components and labeling multiple nodes as invoker nodes. For more detailed documentation, see:
If you are comfortable with building your own Kubernetes clusters and deploying services with ingresses to them, you should also be able to deploy OpenWhisk to a do-it-yourself cluster. Make sure your cluster meets the technical requirements. You will need at least 1 worker node with 4GB of memory and 2 virtual CPUs to deploy the default configuration of OpenWhisk. You can deploy to significantly larger clusters by scaling up the replica count of the various components and labeling multiple nodes as invoker nodes. There are some additional notes here.
Here is an example of setting up a Kubernetes cluster using kubeadm on Ubuntu 18.04.
Helm is a tool to simplify the deployment and management of applications on Kubernetes clusters. Helm consists of the helm command line tool that you install on your development machine and the tiller runtime that is deployed on your Kubernetes cluster.
For details on installing Helm, see these instructions.
WARNING: There is a serious regression in Helm v2.15.0 that impacts the OpenWhisk chart. You should use Helm v2.14.3.
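If you are unsure which Helm version you have installed, you can check it before proceeding. A quick sanity check (the exact output format varies between Helm 2 releases):
helm version --client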
In short, if you already have the helm cli installed on your development machine, you will need to execute these two commands and wait a few seconds for the tiller-deploy pod in the kube-system namespace to be in the Running state:
helm init
kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
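You can watch for the tiller-deploy pod to reach the Running state with a simple kubectl query (a minimal sketch; the pod name carries a generated suffix):
kubectl get pods -n kube-system | grep tiller-deploy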
If you are targeting an OKD/OpenShift cluster, you will need the helm cli on your development machine but will not run the tiller-deploy pod in the cluster as it is not allowed by OKD/OpenShift security policies.
Now that you have your Kubernetes cluster and have installed and initialized Helm, you are ready to deploy OpenWhisk.
You will use Helm to deploy OpenWhisk to your Kubernetes cluster. There are four deployment steps that are described in more detail below in the rest of this section.
1. Initial cluster setup. Label the Kubernetes worker nodes that OpenWhisk should use to run its invokers and user containers.
2. Customize the deployment. Create a mycluster.yaml that specifies key facts about your Kubernetes cluster and the OpenWhisk configuration you wish to deploy.
3. Deploy with Helm. Use Helm and mycluster.yaml to deploy OpenWhisk to your Kubernetes cluster.
4. Configure the wsk CLI. You need to tell the wsk CLI how to connect to your OpenWhisk deployment.
Indicate the Kubernetes worker nodes that should be used to execute user containers by OpenWhisk's invokers. Do this by labeling each node with
openwhisk-role=invoker. In the default configuration, which uses the KubernetesContainerFactory, the node labels are used in conjunction with Pod affinities to inform the Kubernetes scheduler how to place work so that user actions will not interfere with the OpenWhisk control plane. When using the non-default DockerContainerFactory, OpenWhisk assumes it has exclusive use of these invoker nodes and will schedule work on them directly, completely bypassing the Kubernetes scheduler. For a single node cluster, simply do
kubectl label nodes --all openwhisk-role=invoker
If you have a multi-node cluster, then for each node <INVOKER_NODE_NAME> you want to be an invoker, execute
$ kubectl label nodes <INVOKER_NODE_NAME> openwhisk-role=invoker
If you are targeting OKD/OpenShift, use the command
oc label node <INVOKER_NODE_NAME> openwhisk-role=invoker
For more precise control of the placement of the rest of OpenWhisk's pods on a multi-node cluster, you can optionally label additional non-invoker worker nodes. Use the label
openwhisk-role=core to indicate nodes which should run the OpenWhisk control plane (the controller, kafka, zookeeper, and couchdb pods). If you have dedicated Ingress nodes, label them with openwhisk-role=edge. Finally, if you want to run the OpenWhisk Event Providers on specific nodes, label those nodes with openwhisk-role=provider.
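For example, on a hypothetical three-node cluster you might dedicate one node to each role (a sketch; node1, node2, and node3 are placeholder node names):
kubectl label nodes node1 openwhisk-role=core
kubectl label nodes node2 openwhisk-role=edge
kubectl label nodes node3 openwhisk-role=invoker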
You must create a
mycluster.yaml file to record key aspects of your Kubernetes cluster that are needed to configure the deployment of OpenWhisk to your cluster. For details, see the documentation appropriate to your Kubernetes cluster:
Beyond the Kubernetes cluster specific configuration information, the
mycluster.yaml file is also used to customize your OpenWhisk deployment by enabling optional features and controlling the replication factor of the various microservices that make up the OpenWhisk implementation. See the configuration choices documentation for a discussion of the primary options.
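As a starting point, a minimal mycluster.yaml for a cluster exposed via a NodePort ingress might look like the sketch below (the host name and port are placeholders for your cluster; consult the cluster-specific documentation and helm/openwhisk/values.yaml for the authoritative key names):
whisk:
  ingress:
    type: NodePort
    apiHostName: localhost
    apiHostPort: 31001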
Deployment can be done by using the following single command:
helm install ./helm/openwhisk --namespace=openwhisk --name=owdev -f mycluster.yaml
Deploying to OKD/OpenShift uses the commands:
helm template ./helm/openwhisk --namespace=openwhisk --name=owdev -f mycluster.yaml > owdev.yaml
oc create -f owdev.yaml
We recommend generating the yaml to a file to make it easier to undeploy OpenWhisk later by simply doing
oc delete -f owdev.yaml
For simplicity, in this README, we have used
owdev as the release name and
openwhisk as the namespace into which the Chart's resources will be deployed. You can use different names, or not specify a release name at all and let Helm auto-generate one for you.
You can use the command
helm status owdev to get a summary of the various Kubernetes artifacts that make up your OpenWhisk deployment. Once the
install-packages Pod is in the
Completed state, your OpenWhisk deployment is ready to be used.
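To watch the pods come up until install-packages completes, you can use a standard kubectl watch (a simple alternative to repeatedly running helm status):
kubectl get pods -n openwhisk --watch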
Configure the OpenWhisk CLI, wsk, by setting the auth and apihost properties (if you don't already have the wsk cli, follow the instructions here to get it). Replace
whisk.ingress.apiHostName and whisk.ingress.apiHostPort with the actual values from your mycluster.yaml.
wsk property set --apihost <whisk.ingress.apiHostName>:<whisk.ingress.apiHostPort>
wsk property set --auth 23bc46b1-71f6-4ed5-8c54-816aa4f8c502:123zO3xZCLrMN6v2BKK1dXYFpXlPkccOFqm12CdAsMgRU4VrNZ9lyGVCGuMDGIwP
The docker0 network interface does not exist in the Docker for Mac/Windows host environment. Instead, exposed NodePorts are forwarded from localhost to the appropriate containers. This means that you will use localhost instead of whisk.ingress.apiHostName when configuring the wsk cli and replace whisk.ingress.apiHostPort with the actual value from your mycluster.yaml.
wsk property set --apihost localhost:<whisk.ingress.apiHostPort>
wsk property set --auth 23bc46b1-71f6-4ed5-8c54-816aa4f8c502:123zO3xZCLrMN6v2BKK1dXYFpXlPkccOFqm12CdAsMgRU4VrNZ9lyGVCGuMDGIwP
Your OpenWhisk installation should now be usable. You can test it by following these instructions to define and invoke a sample OpenWhisk action in your favorite programming language.
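For instance, a quick smoke test with a trivial JavaScript action might look like the sketch below (hello.js and the action name hello are placeholders; the -i flag suppresses certificate checking for the default self-signed certificates):
# create a minimal Node.js action file
cat > hello.js <<EOF
function main(params) {
  return { payload: "Hello, " + (params.name || "World") + "!" };
}
EOF
# register and invoke it
wsk -i action create hello hello.js
wsk -i action invoke hello --result --param name OpenWhisk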
You can also issue the command
helm test owdev to run the basic verification test suite included in the OpenWhisk Helm chart. Note that
helm test is not supported for OpenShift deployments because it requires the
tiller pod to be run in the cluster.
Note: if you installed self-signed certificates, which is the default for the OpenWhisk Helm chart, you will need to use wsk -i to suppress certificate checking. This works around cannot validate certificate errors from the wsk CLI.
If your deployment is not working, check our troubleshooting guide for ideas.
Using the defaults, your deployment is configured to provide a bare-minimum working platform for testing and exploration. For specialized workloads, you can scale up your OpenWhisk deployment by defining your deployment configuration in your
mycluster.yaml which overrides the defaults in
helm/openwhisk/values.yaml. Some important parameters to consider (for other parameters, check
helm/openwhisk/values.yaml and configurationChoices):
actionsInvokesPerminute: limits the maximum number of invocations per minute.
actionsInvokesConcurrent: limits the maximum concurrent invocations.
containerPool: total memory available per invoker. The invoker uses this memory to create containers for user actions. The concurrency limit (actions running in parallel) will depend upon the total memory configured for containerPool and the memory allocated per action (default: 256mb per container).
For more information about increasing concurrency-limit, check scaling-up your deployment.
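As an illustration, a mycluster.yaml fragment that raises the invocation limits might look like the sketch below (the numbers are arbitrary examples, and the exact key paths should be checked against helm/openwhisk/values.yaml for your chart version):
whisk:
  limits:
    actionsInvokesPerminute: 600
    actionsInvokesConcurrent: 100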
Wskadmin is the tool to perform various administrative operations against an OpenWhisk deployment.
Since wskadmin requires credentials for direct access to the database (which is not normally accessible to the outside), it is deployed in a pod inside Kubernetes that is configured with the proper parameters. You can run wskadmin with kubectl. You need to use the <namespace> and the deployment <name> that you configured with --name when deploying.
You can then invoke wskadmin with:
kubectl -n <namespace> -ti exec <name>-wskadmin -- wskadmin <parameters>
For example, if your deployment name is owdev and the namespace is openwhisk, you can list users in the guest namespace with:
$ kubectl -n openwhisk -ti exec owdev-wskadmin -- wskadmin user list guest
23bc46b1-71f6-4ed5-8c54-816aa4f8c502:123zO3xZCLrMN6v2BKK1dXYFpXlPkccOFqm12CdAsMgRU4VrNZ9lyGVCGuMDGIwP
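Other administrative tasks follow the same pattern; for example, creating a new subject might look like this (a sketch; newuser is a placeholder subject name):
kubectl -n openwhisk -ti exec owdev-wskadmin -- wskadmin user create newuser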
Check here for details about the available commands.
This section outlines how common OpenWhisk development tasks are supported when OpenWhisk is deployed on Kubernetes using Helm.
Some key differences in a Kubernetes-based deployment of OpenWhisk are that deploying the system does not generate a
whisk.properties file and that the various internal microservices (
controller, etc.) are not directly accessible from the outside of the Kubernetes cluster. Therefore, although you can run full system tests against a Kubernetes-based deployment by giving some extra command line arguments, any unit tests that assume direct access to one of the internal microservices will fail. First clone the core OpenWhisk repository locally and set
$OPENWHISK_HOME to its top-level directory. Then, the system tests can be executed in a batch-style as shown below, where WHISK_SERVER and WHISK_AUTH are replaced by the values returned by
wsk property get --apihost and
wsk property get --auth respectively.
cd $OPENWHISK_HOME
./gradlew :tests:testSystemBasic -Dwhisk.auth=$WHISK_AUTH -Dwhisk.server=https://$WHISK_SERVER -Dopenwhisk.home=`pwd`
You can also launch the system tests as JUnit test from an IDE by adding the same system properties to the JVM command line used to launch the tests:
-Dwhisk.auth=$WHISK_AUTH -Dwhisk.server=https://$WHISK_SERVER -Dopenwhisk.home=`pwd`
If you are using Kubernetes in Docker, it is straightforward to deploy local images by adding a stanza to your mycluster.yaml. For example, to use a locally built controller image, just add the stanza below to your
mycluster.yaml to override the default behavior of pulling a stable
openwhisk/controller image from Docker Hub.
controller:
  imageName: "whisk/controller"
  imageTag: "latest"
You can use the
helm upgrade command to selectively redeploy one or more OpenWhisk components. Continuing the example above, if you make additional changes to the controller source code and want to redeploy just it without redeploying the entire OpenWhisk system, you can do the following:
# Execute these commands in your openwhisk directory
./gradlew distDocker
docker tag whisk/controller whisk/controller:v2
Then, edit your
mycluster.yaml to contain:
controller:
  imageName: "whisk/controller"
  imageTag: "v2"
Redeploy with Helm by executing this command in your openwhisk-deploy-kube directory:
helm upgrade ./helm/openwhisk --namespace=openwhisk --name=owdev -f mycluster.yaml
To have a lean setup (no Kafka, Zookeeper and no Invokers as separate entities):
controller:
  lean: true
Use the following command to remove all the deployed OpenWhisk components:
helm delete owdev
Helm does keep a history of previous deployments. If you want to completely remove the deployment from helm, for example so you can reuse owdev to deploy OpenWhisk again, use the command:
helm delete owdev --purge
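You can confirm whether Helm is still tracking the release with a quick check (deleted-but-not-purged releases continue to appear here until purged):
helm list --all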
For OpenShift deployments, you cannot use Helm to remove the OpenWhisk deployment. If you saved the output from
helm template into a file, you can simply use that file as an argument to
oc delete. If you did not save the file, you can redo the
helm template command and feed the generated yaml into an
oc delete command.
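For example, regenerating the yaml and piping it straight into oc delete might look like this (a sketch using the same chart path, namespace, and release name as the deploy example above):
helm template ./helm/openwhisk --namespace=openwhisk --name=owdev -f mycluster.yaml | oc delete -f -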
If your OpenWhisk deployment is not working, check our troubleshooting guide for ideas.
Report bugs, ask questions and request features here on GitHub.
You can also join our slack channel and chat with developers. To get access to our slack channel, request an invite here.