Initial scripts to deploy OpenWhisk on Kubernetes. (#16)
* Initial scripts to deploy OpenWhisk on Kubernetes.
* Able to deploy CouchDB
* CI setup to run Kube on travis and deploy OpenWhisk
* Scripts for managing Dockerfiles and Kube environment
* Update README.md
Add additional info about the Kubernetes environment and give more explicit detail about each section.
diff --git a/.travis.yml b/.travis.yml
index 8fcada2..c56e991 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -5,6 +5,7 @@
- DOCKER_COMPOSE_VERSION: 1.8.1
matrix:
- TOOL: docker-compose
+ - TOOL: kubernetes
services:
- docker
@@ -13,4 +14,4 @@
- ${TOOL}/.travis/setup.sh
script:
- - ${TOOL}/.travis/build.sh
\ No newline at end of file
+ - ${TOOL}/.travis/build.sh
diff --git a/kubernetes/.travis/build.sh b/kubernetes/.travis/build.sh
new file mode 100755
index 0000000..b7b6e4b
--- /dev/null
+++ b/kubernetes/.travis/build.sh
@@ -0,0 +1,42 @@
+#!/bin/bash
+
+set -ex
+
+SCRIPTDIR=$(cd $(dirname "$0") && pwd)
+ROOTDIR="$SCRIPTDIR/../"
+
+cd $ROOTDIR
+
+# build openwhisk images
+# This way everything that is tested will use the latest openwhisk builds
+# TODO: need official repo
+
+
+# run scripts to deploy using the new images.
+kubectl apply -f configure/openwhisk_kube_namespace.yml
+kubectl apply -f configure/configure_whisk.yml
+
+PASSED=false
+TIMEOUT=0
+until $PASSED || [ $TIMEOUT -eq 10 ]; do
+ KUBE_DEPLOY_STATUS=$(kubectl -n openwhisk get jobs | grep configure-openwhisk | awk '{print $3}')
+  if [ "$KUBE_DEPLOY_STATUS" == "1" ]; then
+ PASSED=true
+ break
+ fi
+
+ let TIMEOUT=TIMEOUT+1
+ sleep 30
+done
+
+kubectl get jobs --all-namespaces -o wide --show-all
+kubectl get pods --all-namespaces -o wide --show-all
+
+if [ $PASSED = false ]; then
+  echo "The job to configure OpenWhisk did not finish successfully within the timeout"
+ exit 1
+fi
+
+echo "The job to configure OpenWhisk finished successfully"
+
+# push the images to an official repo
diff --git a/kubernetes/.travis/setup.sh b/kubernetes/.travis/setup.sh
new file mode 100755
index 0000000..554ea64
--- /dev/null
+++ b/kubernetes/.travis/setup.sh
@@ -0,0 +1,51 @@
+#!/bin/bash
+# This script assumes Docker is already installed
+
+TAG=v1.5.5
+
+# install etcd
+wget https://github.com/coreos/etcd/releases/download/v3.0.14/etcd-v3.0.14-linux-amd64.tar.gz
+tar xzf etcd-v3.0.14-linux-amd64.tar.gz
+sudo mv etcd-v3.0.14-linux-amd64/etcd /usr/local/bin/etcd
+rm etcd-v3.0.14-linux-amd64.tar.gz
+rm -rf etcd-v3.0.14-linux-amd64
+
+
+# download kubectl
+wget https://storage.googleapis.com/kubernetes-release/release/$TAG/bin/linux/amd64/kubectl
+chmod +x kubectl
+sudo mv kubectl /usr/local/bin/kubectl
+
+# download kubernetes
+git clone https://github.com/kubernetes/kubernetes $HOME/kubernetes
+
+pushd $HOME/kubernetes
+ git checkout $TAG
+ kubectl config set-credentials myself --username=admin --password=admin
+ kubectl config set-context local --cluster=local --user=myself
+ kubectl config set-cluster local --server=http://localhost:8080
+ kubectl config use-context local
+
+ # start kubernetes in the background
+  sudo PATH=$PATH:/home/travis/.gimme/versions/go1.7.linux.amd64/bin \
+ KUBE_ENABLE_CLUSTER_DNS=true \
+ hack/local-up-cluster.sh &
+popd
+
+# Wait until kube is up and running
+TIMEOUT=0
+TIMEOUT_COUNT=30
+until curl --output /dev/null --silent http://localhost:8080 || [ $TIMEOUT -eq $TIMEOUT_COUNT ]; do
+ echo "Kube is not up yet"
+ let TIMEOUT=TIMEOUT+1
+ sleep 20
+done
+
+if [ $TIMEOUT -eq $TIMEOUT_COUNT ]; then
+ echo "Kubernetes is not up and running"
+ exit 1
+fi
+
+echo "Kubernetes is deployed and reachable"
+
+sudo chown -R $USER:$USER $HOME/.kube
diff --git a/kubernetes/Dockerfile b/kubernetes/Dockerfile
new file mode 100644
index 0000000..22d574e
--- /dev/null
+++ b/kubernetes/Dockerfile
@@ -0,0 +1,46 @@
+FROM ubuntu:trusty
+ENV DEBIAN_FRONTEND noninteractive
+ENV UCF_FORCE_CONFFNEW YES
+RUN ucf --purge /boot/grub/menu.lst
+
+# install openwhisk
+RUN apt-get -y update && \
+ apt-get -y upgrade && \
+ apt-get install -y \
+ git \
+ curl \
+ apt-transport-https \
+ ca-certificates \
+ python-pip \
+ python-dev \
+ libffi-dev \
+ libssl-dev \
+ libxml2-dev \
+ libxslt1-dev \
+ libjpeg8-dev \
+ zlib1g-dev
+
+# clone OpenWhisk and install dependencies
+# Note that we are not running the install all script since we do not care about Docker.
+RUN git clone https://github.com/openwhisk/openwhisk && \
+ /openwhisk/tools/ubuntu-setup/misc.sh && \
+ /openwhisk/tools/ubuntu-setup/pip.sh && \
+ /openwhisk/tools/ubuntu-setup/java8.sh && \
+ /openwhisk/tools/ubuntu-setup/scala.sh && \
+ /openwhisk/tools/ubuntu-setup/ansible.sh
+
+# Change this to https://github.com/openwhisk/openwhisk-devtools when committing to master
+COPY ansible /openwhisk-devtools/kubernetes/ansible
+COPY configure /openwhisk-devtools/kubernetes/configure
+RUN mkdir /openwhisk-devtools/kubernetes/ansible/group_vars
+
+# install kube dependencies
+# The deployment assumes a kubectl version of 1.5.0+
+RUN wget https://storage.googleapis.com/kubernetes-release/release/v1.5.0/bin/linux/amd64/kubectl && \
+ chmod +x kubectl && \
+ mv kubectl /usr/local/bin/kubectl
+
+# install wsk cli
+RUN wget https://openwhisk.ng.bluemix.net/cli/go/download/linux/amd64/wsk && \
+ chmod +x wsk && \
+ mv wsk /openwhisk/bin/wsk
diff --git a/kubernetes/README.md b/kubernetes/README.md
new file mode 100644
index 0000000..551a878
--- /dev/null
+++ b/kubernetes/README.md
@@ -0,0 +1,135 @@
+# Deploying OpenWhisk on Kubernetes (work in progress)
+
+[![Build Status](https://travis-ci.org/openwhisk/openwhisk-devtools.svg?branch=master)](https://travis-ci.org/openwhisk/openwhisk-devtools)
+
+This repo can be used to deploy OpenWhisk to a Kubernetes cluster.
+To accomplish this, we have created a Kubernetes job responsible for
+deploying OpenWhisk from inside of Kubernetes. This job runs through
+the OpenWhisk Ansible playbooks with some modifications to "Kube-ify"
+specific actions. The goal of this approach is to provide a
+one-size-fits-all way of deploying OpenWhisk.
+
+Currently, the OpenWhisk deployment is a static set of
+Kube YAML files. It should be easy to use the tools from this
+repo to build your own OpenWhisk deployment job, allowing you to
+set up your own configurations if need be.
+
+The scripts and Docker images should be able to:
+
+1. Build the Docker image used for deploying OpenWhisk.
+2. Use a Kubernetes job to deploy OpenWhisk.
+
+## Kubernetes Requirements
+
+* Kubernetes needs to be version 1.5+
+* Kubernetes has [Kube-DNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) deployed
+* (Optional) Kubernetes Pods can receive public addresses.
+ This will be required if you wish to reach Nginx from outside
+ of the Kubernetes cluster's network.
+
+At this time, we are not sure of the total number of resources required
+to deploy OpenWhisk on Kubernetes. Once all of the processes are running in
+Pods, we will be able to list them.
+
+## Quick Start
+
+To deploy OpenWhisk on Kubernetes, you will need to target a Kubernetes
+environment. If you do not have one up and running, then you can look
+at the [Local Kube Development](#local-kube-development) section
+for setting one up. Once you are successfully targeted, you will need to
+create a namespace called `openwhisk`. To do this, you can just run the
+following command.
+
+```
+kubectl apply -f configure/openwhisk_kube_namespace.yml
+```
+
+From here, you should just need to run the Kubernetes job to
+set up the OpenWhisk environment.
+
+```
+kubectl apply -f configure/configure_whisk.yml
+```
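+
+You can then watch the configuration job until it completes. A usage
+sketch (it assumes `kubectl` is targeting your cluster; the pod name is
+a placeholder you look up with `get pods`):
+
+```
+# the SUCCESSFUL column should eventually read 1
+kubectl -n openwhisk get jobs
+
+# find the configuration pod and tail its logs
+kubectl -n openwhisk get pods
+kubectl -n openwhisk logs -f <configure-openwhisk-pod-name>
+```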
+
+
+## Manually Building Custom Docker Files
+#### Building the Docker File That Deploys OpenWhisk
+
+The Docker image responsible for deploying OpenWhisk can be built using the following command:
+
+```
+docker build .
+```
+
+This image must then be re-tagged and pushed to a public
+docker repo. Currently, while this project is in development,
+the docker image is built and published [here](https://hub.docker.com/r/danlavine/whisk_config/),
+until an official repo is set up. If you would like to change
+this image to one you created, then make sure to update the
+[configure_whisk.yml](./configure/configure_whisk.yml) with your image.
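+
+As a sketch, the retag-and-push flow looks like this (the Docker Hub
+user here is a placeholder):
+
+```
+docker build -t <your-dockerhub-user>/whisk_config .
+docker push <your-dockerhub-user>/whisk_config
+```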
+
+#### Whisk Processes Docker Files
+
+For Kubernetes, all of the whisk images need to be publicly
+available Docker images. For this, there is a helper script that will
+run `gradle build` for the main openwhisk repo and retag all of the
+images for a custom Docker Hub user.
+
+**Note:** This script assumes that you already have push access to
+Docker Hub, or some other registry, and are already logged in. To do this,
+you will need to run the `docker login` command.
+
+This script takes two arguments:
+1. The name of the dockerhub repo where the images will be published.
+ For example:
+
+ ```
+ docker/build.sh <danlavine>
+ ```
+
+   will retag the `whisk/invoker` docker image built by gradle and
+ publish it to `danlavine/whisk_invoker`.
+
+2. (OPTIONAL) This argument is the location of the OpenWhisk repo.
+ By default this repo is assumed to live at `$HOME/workspace/openwhisk`
+
+## Manually building Kube Files
+#### Deployments and Services
+
+The current Kube Deployment and Services files that define the OpenWhisk
+cluster can be found [here](ansible/environments/kube/files). Only one
+instance of each OpenWhisk process is created, but if you would like
+to increase that number, then this would be the place to do it. Simply edit
+the appropriate file and rebuild the
+[Docker File That Deploys OpenWhisk](#building-the-docker-file-that-deploys-openwhisk).
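+
+For example, to run more than one instance of a process, bump the
+`replicas` field of the corresponding Deployment, such as in
+[db.yml](ansible/environments/kube/files/db.yml):
+
+```
+spec:
+  replicas: 2
+```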
+
+## Development
+#### Local Kube Development
+
+There are a couple of ways to bring up Kubernetes locally; currently we
+are using [kubeadm](https://kubernetes.io/docs/getting-started-guides/kubeadm/)
+with [Calico](https://www.projectcalico.org/) for the
+[network](http://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/hosted/kubeadm/).
+By default kubeadm runs with Kube-DNS already enabled, and the instructions
+will install a Kube version greater than v1.5. With this deployment method
+everything runs on one host, and nothing special has to be
+done for network configuration when communicating with Kube Pods.
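+
+The rough flow is the following sketch; the Calico manifest URL comes
+from the install guide linked above and is elided here:
+
+```
+sudo kubeadm init
+kubectl apply -f <calico-kubeadm-manifest>.yaml
+```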
+
+#### Deploying OpenWhisk on Kubernetes
+
+When creating a new deployment, it is nice to
+run things by hand to see what is going on inside the container and
+not have it be removed as soon as it finishes or fails. For this,
+you can change the command of [configure_whisk.yml](configure/configure_whisk.yml)
+to `command: [ "tail", "-f", "/dev/null" ]`. Then just run the
+original command from inside the Pod's container.
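+
+With that change, the container section of the job spec looks like this
+(only the `command` line differs from the checked-in file):
+
+```
+containers:
+- name: configure-openwhisk
+  image: danlavine/whisk_config
+  imagePullPolicy: Always
+  command: [ "tail", "-f", "/dev/null" ]
+```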
+
+#### Cleanup
+
+As part of the development process, you might need to clean up the Kubernetes
+environment at some point. This means deleting all of the Kube deployments,
+services, and jobs, which you can do with the following script:
+
+```
+./configure/cleanup.sh
+```
diff --git a/kubernetes/ansible/couchdb.yml b/kubernetes/ansible/couchdb.yml
new file mode 100644
index 0000000..70faba9
--- /dev/null
+++ b/kubernetes/ansible/couchdb.yml
@@ -0,0 +1,6 @@
+---
+# This playbook deploys a CouchDB for OpenWhisk.
+
+- hosts: db
+ roles:
+ - couchdb
\ No newline at end of file
diff --git a/kubernetes/ansible/environments/kube/files/db-service.yml b/kubernetes/ansible/environments/kube/files/db-service.yml
new file mode 100644
index 0000000..ee1ed05
--- /dev/null
+++ b/kubernetes/ansible/environments/kube/files/db-service.yml
@@ -0,0 +1,15 @@
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: couchdb
+ namespace: openwhisk
+ labels:
+ name: couchdb
+spec:
+ selector:
+ name: couchdb
+ ports:
+ - port: 5984
+ targetPort: 5984
+ name: couchdb
diff --git a/kubernetes/ansible/environments/kube/files/db.yml b/kubernetes/ansible/environments/kube/files/db.yml
new file mode 100644
index 0000000..78de1af
--- /dev/null
+++ b/kubernetes/ansible/environments/kube/files/db.yml
@@ -0,0 +1,24 @@
+---
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+ name: couchdb
+ namespace: openwhisk
+ labels:
+ name: couchdb
+spec:
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ name: couchdb
+ spec:
+ restartPolicy: Always
+
+ containers:
+ - name: couchdb
+ imagePullPolicy: IfNotPresent
+ image: couchdb:1.6
+ ports:
+ - name: couchdb
+ containerPort: 5984
diff --git a/kubernetes/ansible/environments/kube/group_vars/all b/kubernetes/ansible/environments/kube/group_vars/all
new file mode 100644
index 0000000..63f636a
--- /dev/null
+++ b/kubernetes/ansible/environments/kube/group_vars/all
@@ -0,0 +1,18 @@
+---
+db_provider: CouchDB
+db_port: 5984
+db_protocol: http
+db_username: couch_user
+db_password: couch_password
+#db_host: whisk-db-service.default
+db_auth: "subjects"
+db_prefix: "ubuntu_kube-1-4-1_"
+
+# apigw db credentials minimum read/write
+db_apigw_username: "couch_user"
+db_apigw_password: "couch_password"
+db_apigw: "ubuntu_kube-1-4-1_gwapis"
+
+kube_pod_dir: "{{ playbook_dir }}/environments/kube/files"
+
+db_host: couchdb.openwhisk
diff --git a/kubernetes/ansible/environments/kube/hosts b/kubernetes/ansible/environments/kube/hosts
new file mode 100644
index 0000000..a3e1d30
--- /dev/null
+++ b/kubernetes/ansible/environments/kube/hosts
@@ -0,0 +1,26 @@
+; the first parameter in a host is the inventory_hostname which has to be
+; either an ip
+; or a resolvable hostname
+
+; used for local actions only
+ansible ansible_connection=local
+[edge]
+127.0.0.1 ansible_connection=local
+
+[controllers]
+127.0.0.1 ansible_connection=local
+
+[kafka]
+127.0.0.1 ansible_connection=local
+
+[consul_servers]
+127.0.0.1 ansible_connection=local
+
+[db]
+127.0.0.1 ansible_connection=local
+
+[invokers]
+127.0.0.1 ansible_connection=local
+
+[registry]
+127.0.0.1 ansible_connection=local
diff --git a/kubernetes/ansible/roles/couchdb/tasks/deploy.yml b/kubernetes/ansible/roles/couchdb/tasks/deploy.yml
new file mode 100644
index 0000000..c26c084
--- /dev/null
+++ b/kubernetes/ansible/roles/couchdb/tasks/deploy.yml
@@ -0,0 +1,37 @@
+---
+# This role will run a CouchDB server on the db group
+
+- name: check if db credentials are valid for CouchDB
+  fail: msg="The db provider in your {{ inventory_dir }}/group_vars/all is {{ db_provider }}; it has to be CouchDB, please double-check"
+ when: db_provider != "CouchDB"
+
+- name: create db pod
+ shell: "kubectl apply -f {{kube_pod_dir}}/db.yml"
+
+- name: wait until the CouchDB in this host is up and running
+ wait_for:
+ delay: 2
+ host: "{{ db_host }}"
+ port: "{{ db_port }}"
+ timeout: 60
+
+- name: create admin user
+ uri:
+ url: "{{ db_protocol }}://{{ db_host }}:{{ db_port }}/_config/admins/{{ db_username }}"
+ method: PUT
+ body: >
+ "{{ db_password }}"
+ body_format: json
+ status_code: 200
+
+- name: disable reduce limit on views
+ uri:
+ url: "{{ db_protocol }}://{{ db_host }}:{{ db_port }}/_config/query_server_config/reduce_limit"
+ method: PUT
+ body: >
+ "false"
+ body_format: json
+ status_code: 200
+ user: "{{ db_username }}"
+ password: "{{ db_password }}"
+ force_basic_auth: yes
diff --git a/kubernetes/ansible/roles/couchdb/tasks/main.yml b/kubernetes/ansible/roles/couchdb/tasks/main.yml
new file mode 100644
index 0000000..5169f94
--- /dev/null
+++ b/kubernetes/ansible/roles/couchdb/tasks/main.yml
@@ -0,0 +1,6 @@
+---
+# This role will deploy a database server. Use this role if you want to run CouchDB locally.
+# In deploy mode it will start the CouchDB container.
+
+- include: deploy.yml
+ when: mode == "deploy"
diff --git a/kubernetes/ansible/tasks/initdb.yml b/kubernetes/ansible/tasks/initdb.yml
new file mode 100644
index 0000000..20151fa
--- /dev/null
+++ b/kubernetes/ansible/tasks/initdb.yml
@@ -0,0 +1,94 @@
+---
+# This task will initialize the immortal DBs in the database account.
+# This step is usually done only once per account.
+
+- name: check if the immortal {{ db.whisk.auth }} db with {{ db_provider }} exists
+ uri:
+ url: "{{ db_protocol }}://{{ db_host }}:{{ db_port }}/{{ db.whisk.auth }}"
+ method: GET
+ status_code: 200,404
+ user: "{{ db_username }}"
+ password: "{{ db_password }}"
+ force_basic_auth: yes
+ register: dbexists
+
+# create only the missing db.whisk.auth
+- name: create immortal {{ db.whisk.auth }} db with {{ db_provider }}
+ uri:
+ url: "{{ db_protocol }}://{{ db_host }}:{{ db_port }}/{{ db.whisk.auth }}"
+ method: PUT
+ status_code: 200,201,202
+ user: "{{ db_username }}"
+ password: "{{ db_password }}"
+ force_basic_auth: yes
+ when: dbexists is defined and dbexists.status == 404
+
+# fetches the revision of previous view (to update it) if it exists
+- name: check for previous view in "auth" database
+ vars:
+ auth_index: "{{ lookup('file', '{{ openwhisk_home }}/ansible/files/auth_index.json') }}"
+ uri:
+ url: "{{ db_protocol }}://{{ db_host }}:{{ db_port }}/{{ db.whisk.auth }}/{{ auth_index['_id'] }}"
+ return_content: yes
+ method: GET
+ status_code: 200, 404
+ user: "{{ db_username }}"
+ password: "{{ db_password }}"
+ force_basic_auth: yes
+ register: previousView
+ when: dbexists is defined and dbexists.status != 404 #and mode=="updateview"
+
+- name: extract revision from previous view
+ vars:
+ previousContent: "{{ previousView['content']|from_json }}"
+ revision: "{{ previousContent['_rev'] }}"
+ auth_index: "{{ lookup('file', '{{ openwhisk_home }}/ansible/files/auth_index.json') }}"
+ set_fact:
+ previousContent: "{{ previousContent }}"
+ updateWithRevision: "{{ auth_index | combine({'_rev': revision}) }}"
+ when: previousView is defined and previousView.status != 404
+
+- name: check if a view update is required
+ set_fact:
+ updateView: "{{ updateWithRevision }}"
+ when: previousContent is defined and previousContent != updateWithRevision
+
+- name: recreate or update the index on the "auth" database
+ vars:
+ auth_index: "{{ lookup('file', '{{ openwhisk_home }}/ansible/files/auth_index.json') }}"
+ uri:
+ url: "{{ db_protocol }}://{{ db_host }}:{{ db_port }}/{{ db.whisk.auth }}"
+ method: POST
+ status_code: 200, 201
+ body_format: json
+ body: "{{ updateView | default(auth_index) }}"
+ user: "{{ db_username }}"
+ password: "{{ db_password }}"
+ force_basic_auth: yes
+ when: (dbexists is defined and dbexists.status == 404) or (updateView is defined)
+
+- name: recreate necessary "auth" keys
+ vars:
+ key: "{{ lookup('file', 'files/auth.{{ item }}') }}"
+ uri:
+ url: "{{ db_protocol }}://{{ db_host }}:{{ db_port }}/{{ db.whisk.auth }}"
+ method: POST
+ status_code: 200,201
+ body_format: json
+ body: >
+ {
+ "_id": "{{ item }}",
+ "subject": "{{ item }}",
+ "namespaces": [
+ {
+ "name": "{{ item }}",
+ "uuid": "{{ key.split(":")[0] }}",
+ "key": "{{ key.split(":")[1] }}"
+ }
+ ]
+ }
+ user: "{{ db_username }}"
+ password: "{{ db_password }}"
+ force_basic_auth: yes
+ with_items: "{{ db.authkeys }}"
+ when: dbexists is defined and dbexists.status == 404
diff --git a/kubernetes/configure/cleanup.sh b/kubernetes/configure/cleanup.sh
new file mode 100755
index 0000000..cd43672
--- /dev/null
+++ b/kubernetes/configure/cleanup.sh
@@ -0,0 +1,14 @@
+#!/usr/bin/env bash
+
+# this script is used to clean up the OpenWhisk deployment
+
+set -x
+
+# delete OpenWhisk configure job
+kubectl -n openwhisk delete job configure-openwhisk
+
+# delete deployments
+kubectl -n openwhisk delete deployment couchdb
+
+# delete services
+kubectl -n openwhisk delete service couchdb
diff --git a/kubernetes/configure/configure.sh b/kubernetes/configure/configure.sh
new file mode 100755
index 0000000..270ce65
--- /dev/null
+++ b/kubernetes/configure/configure.sh
@@ -0,0 +1,29 @@
+#!/usr/bin/env bash
+
+# this script is used to deploy OpenWhisk from a pod already running in
+# kubernetes.
+#
+# Note: This pod assumes that there is an openwhisk namespace and the pod
+# running this script has been created in that namespace.
+
+set -ex
+
+kubectl proxy -p 8001 &
+
+# Create all of the necessary services
+pushd /openwhisk-devtools/kubernetes/ansible
+ kubectl apply -f environments/kube/files/db-service.yml
+popd
+
+# Create the CouchDB deployment
+pushd /openwhisk-devtools/kubernetes/ansible
+ cp /openwhisk/ansible/group_vars/all group_vars/all
+ ansible-playbook -i environments/kube couchdb.yml
+popd
+
+## configure couch db
+pushd /openwhisk/ansible/
+ ansible-playbook -i /openwhisk-devtools/kubernetes/ansible/environments/kube initdb.yml
+ ansible-playbook -i /openwhisk-devtools/kubernetes/ansible/environments/kube wipe.yml
+popd
+
diff --git a/kubernetes/configure/configure_whisk.yml b/kubernetes/configure/configure_whisk.yml
new file mode 100644
index 0000000..65c044e
--- /dev/null
+++ b/kubernetes/configure/configure_whisk.yml
@@ -0,0 +1,21 @@
+---
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: configure-openwhisk
+ namespace: openwhisk
+ labels:
+ name: configure-openwhisk
+spec:
+ completions: 1
+ template:
+ metadata:
+ labels:
+ name: config
+ spec:
+ restartPolicy: Never
+ containers:
+ - name: configure-openwhisk
+ image: danlavine/whisk_config
+ imagePullPolicy: Always
+ command: [ "/openwhisk-devtools/kubernetes/configure/configure.sh" ]
diff --git a/kubernetes/configure/openwhisk_kube_namespace.yml b/kubernetes/configure/openwhisk_kube_namespace.yml
new file mode 100644
index 0000000..fbb5f1b
--- /dev/null
+++ b/kubernetes/configure/openwhisk_kube_namespace.yml
@@ -0,0 +1,6 @@
+kind: Namespace
+apiVersion: v1
+metadata:
+ name: openwhisk
+ labels:
+ name: openwhisk
diff --git a/kubernetes/docker/build.sh b/kubernetes/docker/build.sh
new file mode 100755
index 0000000..673b6c7
--- /dev/null
+++ b/kubernetes/docker/build.sh
@@ -0,0 +1,66 @@
+#!/usr/bin/env bash
+
+# This script can be used to build the custom docker images required
+# for Kubernetes. This involves running the entire OpenWhisk gradle
+# build process and then creating the custom images for OpenWhisk.
+
+# prerequisites:
+# * be able to run `cd <home_openwhisk> ./gradlew distDocker`
+
+set -ex
+
+
+if [ -z "$1" ]; then
+cat <<- EndOfMessage
+ First argument should be the Docker repo to which all of the built
+ OpenWhisk images will be pushed. This way, Kubernetes can pull
+ any images it needs.
+EndOfMessage
+
+exit 1
+fi
+
+OPENWHISK_DIR=""
+if [ -z "$2" ]; then
+cat <<- EndOfMessage
+ Second argument should be the location where the OpenWhisk repo lives.
+ By default the location is $HOME/workspace/openwhisk
+EndOfMessage
+
+ OPENWHISK_DIR=$HOME/workspace/openwhisk
+else
+ OPENWHISK_DIR="$2"
+fi
+
+pushd $OPENWHISK_DIR
+ ./gradlew distDocker
+popd
+
+## Retag new images for public repo
+docker tag whisk/badaction "$1"/whisk_badaction
+docker tag whisk/badproxy "$1"/whisk_badproxy
+docker tag whisk/cli "$1"/whisk_cli
+docker tag whisk/example "$1"/whisk_example
+docker tag whisk/swift3action "$1"/whisk_swift3action
+docker tag whisk/pythonaction "$1"/whisk_pythonaction
+docker tag whisk/nodejs6action "$1"/whisk_nodejs6action
+docker tag whisk/nodejsactionbase "$1"/whisk_nodejsactionbase
+docker tag whisk/javaaction "$1"/whisk_javaaction
+docker tag whisk/invoker "$1"/whisk_invoker
+docker tag whisk/controller "$1"/whisk_controller
+docker tag whisk/dockerskeleton "$1"/whisk_dockerskeleton
+docker tag whisk/scala "$1"/whisk_scala
+
+docker push "$1"/whisk_badaction
+docker push "$1"/whisk_badproxy
+docker push "$1"/whisk_cli
+docker push "$1"/whisk_example
+docker push "$1"/whisk_swift3action
+docker push "$1"/whisk_pythonaction
+docker push "$1"/whisk_nodejs6action
+docker push "$1"/whisk_nodejsactionbase
+docker push "$1"/whisk_javaaction
+docker push "$1"/whisk_invoker
+docker push "$1"/whisk_controller
+docker push "$1"/whisk_dockerskeleton
+docker push "$1"/whisk_scala