Deploy Consul, Controller, Kafka, Zookeeper, and Invoker in Kubernetes (#22)

* Able to deploy Consul to Kubernetes for OpenWhisk

* Deploy Consul and seed the database (liveness check sketched after this list).
* Restructure how the required Ansible deployment files are overridden:
  copy all of the OpenWhisk deployment files, then replace the
  Kube-specific ones.
* Update the cleanup script to include Consul.
* Add all group_vars so that the properties file can be
  generated.
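
A rough manual equivalent of the Consul liveness check used by the new
Ansible role (service name and port taken from the manifests in this
diff) is a key/value round trip against Consul's HTTP API:

    # write a throwaway key to confirm the server answers, then delete it
    curl -X PUT -d 'true' http://consul.openwhisk:8500/v1/kv/consulIsAlive
    curl -X DELETE http://consul.openwhisk:8500/v1/kv/consulIsAlive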

* Able to deploy Kafka for OpenWhisk in Kubernetes

* Clean up Consul key values for invoker hosts
* Properly seed Kafka with the correct invoker topics (see the sketch
  after this list)
* Dynamically use the correct Kafka pod names
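
Seeding the invoker topics boils down to running kafka-topics.sh inside
the kafka container, once per topic; a hand-run sketch (the pod name
here is hypothetical) looks like:

    # one topic per invoker (invoke0, invoke1, ...); 'already exists' is tolerated
    kubectl -n openwhisk exec kafka-1277694625-qg782 -c kafka -- bash -c \
      'unset JMX_PORT; kafka-topics.sh --create --topic invoke0 \
       --replication-factor 1 --partitions 1 \
       --zookeeper zookeeper.openwhisk:2181'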

* Able to deploy the OpenWhisk Controller in Kube.

* Able to deploy the Invoker on Kubernetes.

* Deploy the Invoker via a Kube StatefulSet (ordinal hostnames sketched below)
* The Invoker pulls all of the required OpenWhisk images
* Update Consul so that all of the process hostnames are correct
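
A StatefulSet gives each replica a stable ordinal hostname (invoker-0,
invoker-1, ...), which is what the container command relies on to derive
its invoker id, and what lets Ansible predict each invoker's DNS name
through the headless invoker service:

    # inside an invoker pod: invoker-0 -> 0
    hostname | cut -d'-' -f2
    # per-pod DNS name used for the readiness check
    curl http://invoker-0.invoker.openwhisk:8080/ping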

* Only deploy CouchDB if it doesn't already exist (check sketched below).
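
The guard added to configure.sh reduces to a pod probe; approximately:

    # deploy CouchDB only when no couchdb pod is already Running 1/1
    if [ -z "$(kubectl -n openwhisk get pods --show-all | grep couchdb | grep '1/1')" ]; then
      ansible-playbook -i environments/kube couchdb.yml
    fi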

* Allow more time to deploy in Travis (the build script now polls up to 20 times at 30-second intervals, about ten minutes).

* Split Zookeeper and Kafka into different Kube deployments.

* This fixes issues where Kube DNS cannot route a pod back to itself.
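
With Zookeeper in its own Deployment and Service, Kafka reaches it
through cluster DNS rather than through its own pod; the same ruok
health check used in the Kafka role doubles as a quick manual probe:

    # from any pod in the openwhisk namespace; 'imok' means healthy
    echo ruok | nc -w 3 zookeeper.openwhisk 2181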

* Add retries for obtaining pod names (retry loop sketched below)
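
The retries use Ansible's register/until/retries pattern; a plain-shell
equivalent of that loop (5 tries, 2 seconds apart) would be:

    # retry until the controller pod name shows up
    for i in 1 2 3 4 5; do
      PODS=$(kubectl -n openwhisk get pods --show-all | grep controller | awk '{print $1}')
      [ -n "$PODS" ] && break
      sleep 2
    done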

* Use the public openwhisk/invoker image

* Remove instructions from README about building custom OpenWhisk images.
diff --git a/kubernetes/.travis/build.sh b/kubernetes/.travis/build.sh
index b7b6e4b..07ab7b5 100755
--- a/kubernetes/.travis/build.sh
+++ b/kubernetes/.travis/build.sh
@@ -7,10 +7,9 @@
 
 cd $ROOTDIR
 
+# TODO: need official repo
 # build openwhisk images
 # This way everything that is tested will use the latest openwhisk builds
-# TODO: need official repo
-
 
 # run scripts to deploy using the new images.
 kubectl apply -f configure/openwhisk_kube_namespace.yml
@@ -18,13 +17,15 @@
 
 PASSED=false
 TIMEOUT=0
-until $PASSED || [ $TIMEOUT -eq 10 ]; do
+until $PASSED || [ $TIMEOUT -eq 20 ]; do
   KUBE_DEPLOY_STATUS=$(kubectl -n openwhisk get jobs | grep configure-openwhisk | awk '{print $3}')
   if [ $KUBE_DEPLOY_STATUS -eq 1 ]; then
     PASSED=true
     break
   fi
 
+  kubectl get pods --all-namespaces -o wide --show-all
+
   let TIMEOUT=TIMEOUT+1
   sleep 30
 done
@@ -32,7 +33,7 @@
 kubectl get jobs --all-namespaces -o wide --show-all
 kubectl get pods --all-namespaces -o wide --show-all
 
-if [ $PASSED = false ]; then
+if [ "$PASSED" = false ]; then
   echo "The job to configure OpenWhisk did not finish with an exit code of 1"
   exit 1
 fi
diff --git a/kubernetes/Dockerfile b/kubernetes/Dockerfile
index 22d574e..5bbbdad 100644
--- a/kubernetes/Dockerfile
+++ b/kubernetes/Dockerfile
@@ -18,7 +18,8 @@
       libxml2-dev \
       libxslt1-dev \
       libjpeg8-dev \
-      zlib1g-dev
+      zlib1g-dev \
+      vim
 
 # clone OpenWhisk and install dependencies
 # Note that we are not running the install all script since we do not care about Docker.
@@ -30,9 +31,8 @@
     /openwhisk/tools/ubuntu-setup/ansible.sh
 
 # Change this to https://github.com/openwhisk/openwhisk-devtools when committing to master
-COPY ansible /openwhisk-devtools/kubernetes/ansible
+COPY ansible-kube /openwhisk-devtools/kubernetes/ansible-kube
 COPY configure /openwhisk-devtools/kubernetes/configure
-RUN mkdir /openwhisk-devtools/kubernetes/ansible/group_vars
 
 # install kube dependencies
 # Kubernetes assumes that the version is 1.5.0+
diff --git a/kubernetes/README.md b/kubernetes/README.md
index 551a878..9c59105 100644
--- a/kubernetes/README.md
+++ b/kubernetes/README.md
@@ -19,6 +19,16 @@
 1. Build the Docker image used for deploying OpenWhisk.
 2. Uses a Kubernetes job to deploy OpenWhisk.
 
+Currently, not all of the OpenWhisk components are deployed.
+So far, it will create Kube Deployments for:
+
+* couchdb
+* consul
+* controller
+* invoker
+
+To track progress, check out this [issue](https://github.com/openwhisk/openwhisk-devtools/issues/14).
+
 ## Kubernetes Requirements
 
 * Kubernetes needs to be version 1.5+
@@ -68,31 +78,6 @@
 this image to one you created, then make sure to update the
 [configure_whisk.yml](./configure/configure_whisk.yml) with your image.
 
-#### Whisk Processes Docker Files
-
-for Kubernets, all of the whisk images need to be public
-Docker files. For this, there is a helper script that will
-run `gradle build` for the main openwhisk repo and retag all of the
-images for a custom docker hub user.
-
-**Note:** This scripts assumes that you already have push access to
-dockerhub, or some other repo and are already targeted. To do this,
-you will need to run the `docker login` command.
-
-This script has 2 arguments:
-1. The name of the dockerhub repo where the images will be published.
-   For example:
-
-   ```
-   docker/build.sh <danlavine>
-   ```
-
-   will retage the `whisk/invoker` docker image built by gradle and
-   publish it to `danlavine/whisk_invoker`.
-
-2. (OPTIONAL) This argument is the location of the OpenWhisk repo.
-   By default this repo is assumed to live at `$HOME/workspace/openwhisk`
-
 ## Manually building Kube Files
 #### Deployments and Services
 
diff --git a/kubernetes/ansible-kube/environments/kube/files/consul-service.yml b/kubernetes/ansible-kube/environments/kube/files/consul-service.yml
new file mode 100644
index 0000000..9bfceb6
--- /dev/null
+++ b/kubernetes/ansible-kube/environments/kube/files/consul-service.yml
@@ -0,0 +1,48 @@
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: consul
+  namespace: openwhisk
+  labels:
+    name: consul
+spec:
+  selector:
+    name: consul
+  ports:
+  - name: server
+    protocol: TCP
+    port: 8300
+    targetPort: 8300
+  - name: serflan-tcp
+    protocol: TCP
+    port: 8301
+    targetPort: 8301
+  - name: serflan-udp
+    protocol: UDP
+    port: 8301
+    targetPort: 8301
+  - name: serfwan-tcp
+    protocol: TCP
+    port: 8302
+    targetPort: 8302
+  - name: serfwan-udp
+    protocol: UDP
+    port: 8302
+    targetPort: 8302
+  - name: rpc
+    protocol: TCP
+    port: 8400
+    targetPort: 8400
+  - name: http
+    protocol: TCP
+    port: 8500
+    targetPort: 8500
+  - name: consuldns-tcp
+    protocol: TCP
+    port: 8600
+    targetPort: 8600
+  - name: consuldns-udp
+    protocol: UDP
+    port: 8600
+    targetPort: 8600
diff --git a/kubernetes/ansible-kube/environments/kube/files/consul.yml b/kubernetes/ansible-kube/environments/kube/files/consul.yml
new file mode 100644
index 0000000..7dfef00
--- /dev/null
+++ b/kubernetes/ansible-kube/environments/kube/files/consul.yml
@@ -0,0 +1,79 @@
+---
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: consul
+  namespace: openwhisk
+  labels:
+    name: consul
+spec:
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        name: consul
+    spec:
+      restartPolicy: Always
+      volumes:
+      - name: dockersock
+        hostPath:
+          path: /var/run/docker.sock
+      - name: consulconf
+        configMap:
+          name: consul
+      containers:
+      - name: consul
+        imagePullPolicy: IfNotPresent
+        image: consul:v0.7.0
+        ports:
+        - name: server
+          protocol: TCP
+          containerPort: 8300
+          hostPort: 8300
+        - name: serflan-tcp
+          protocol: TCP
+          containerPort: 8301
+          hostPort: 8301
+        - name: serflan-udp
+          protocol: UDP
+          containerPort: 8301
+          hostPort: 8301
+        - name: serfwan-tcp
+          protocol: TCP
+          containerPort: 8302
+          hostPort: 8302
+        - name: serfwan-udp
+          protocol: UDP
+          containerPort: 8302
+          hostPort: 8302
+        - name: rpc
+          protocol: TCP
+          containerPort: 8400
+          hostPort: 8400
+        - name: http
+          protocol: TCP
+          containerPort: 8500
+          hostPort: 8500
+        - name: consuldns-tcp
+          protocol: TCP
+          containerPort: 8600
+          hostPort: 8600
+        - name: consuldns-udp
+          protocol: UDP
+          containerPort: 8600
+          hostPort: 8600
+        volumeMounts:
+        - name: consulconf
+          mountPath: "/consul/config/config.json"
+
+      - name: registrator
+        image: gliderlabs/registrator
+        env:
+        - name: MY_POD_IP
+          valueFrom:
+            fieldRef:
+              fieldPath: status.podIP
+        args: [ "-ip", "$(MY_POD_IP)", "-resync", "2", "consul://$(MY_POD_IP):8500" ]
+        volumeMounts:
+        - name: dockersock
+          mountPath: "/tmp/docker.sock"
diff --git a/kubernetes/ansible-kube/environments/kube/files/controller-service.yml b/kubernetes/ansible-kube/environments/kube/files/controller-service.yml
new file mode 100644
index 0000000..1ef39dc
--- /dev/null
+++ b/kubernetes/ansible-kube/environments/kube/files/controller-service.yml
@@ -0,0 +1,15 @@
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: controller
+  namespace: openwhisk
+  labels:
+    name: controller
+spec:
+  selector:
+    name: controller
+  ports:
+    - port: 10001
+      targetPort: 8080
+      name: controller
diff --git a/kubernetes/ansible-kube/environments/kube/files/controller.yml b/kubernetes/ansible-kube/environments/kube/files/controller.yml
new file mode 100644
index 0000000..d53ade6
--- /dev/null
+++ b/kubernetes/ansible-kube/environments/kube/files/controller.yml
@@ -0,0 +1,41 @@
+---
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: controller
+  namespace: openwhisk
+  labels:
+    name: controller
+spec:
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        name: controller
+    spec:
+      restartPolicy: Always
+
+      containers:
+      - name: controller
+        imagePullPolicy: IfNotPresent
+        image: openwhisk/controller
+        ports:
+        - name: controller
+          containerPort: 8080
+        env:
+        - name: "COMPONENT_NAME"
+          value: "controller"
+        - name: "CONSULSERVER_HOST"
+          value: "consul.openwhisk"
+        - name: "CONSUL_HOST_PORT4"
+          value: "8500"
+        - name: "KAFKA_NUMPARTITIONS"
+          value: "2"
+        - name: "SERVICE_CHECK_HTTP"
+          value: "/ping"
+        - name: "SERVICE_CHECK_TIMEOUT"
+          value: "2s"
+        - name: "SERVICE_CHECK_INTERVAL"
+          value: "15s"
+        - name: "PORT"
+          value: "8080"
diff --git a/kubernetes/ansible/environments/kube/files/db-service.yml b/kubernetes/ansible-kube/environments/kube/files/db-service.yml
similarity index 100%
rename from kubernetes/ansible/environments/kube/files/db-service.yml
rename to kubernetes/ansible-kube/environments/kube/files/db-service.yml
diff --git a/kubernetes/ansible/environments/kube/files/db.yml b/kubernetes/ansible-kube/environments/kube/files/db.yml
similarity index 100%
rename from kubernetes/ansible/environments/kube/files/db.yml
rename to kubernetes/ansible-kube/environments/kube/files/db.yml
diff --git a/kubernetes/ansible-kube/environments/kube/files/invoker-service.yml b/kubernetes/ansible-kube/environments/kube/files/invoker-service.yml
new file mode 100644
index 0000000..9c0e093
--- /dev/null
+++ b/kubernetes/ansible-kube/environments/kube/files/invoker-service.yml
@@ -0,0 +1,16 @@
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: invoker
+  namespace: openwhisk
+  labels:
+    name: invoker
+spec:
+  selector:
+    name: invoker
+  clusterIP: None
+  ports:
+    - port: 8080
+      targetPort: 8080
+      name: invoker
diff --git a/kubernetes/ansible-kube/environments/kube/files/invoker.yml b/kubernetes/ansible-kube/environments/kube/files/invoker.yml
new file mode 100644
index 0000000..201e58e
--- /dev/null
+++ b/kubernetes/ansible-kube/environments/kube/files/invoker.yml
@@ -0,0 +1,76 @@
+---
+apiVersion: apps/v1beta1
+kind: StatefulSet
+metadata:
+  name: invoker
+  namespace: openwhisk
+  labels:
+    name: invoker
+spec:
+  replicas: 1
+  serviceName: "invoker"
+  template:
+    metadata:
+      labels:
+        name: invoker
+    spec:
+      restartPolicy: Always
+
+      volumes:
+      - name: cgroup
+        hostPath:
+          path: "/sys/fs/cgroup"
+      - name: runc
+        hostPath:
+          path: "/run/runc"
+      - name: dockerrootdir
+        hostPath:
+          path: "/var/lib/docker/containers"
+      - name: dockersock
+        hostPath:
+          path: "/var/run/docker.sock"
+      - name: apparmor
+        hostPath:
+          path: "/usr/lib/x86_64-linux-gnu/libapparmor.so.1"
+
+      containers:
+      - name: invoker
+        imagePullPolicy: IfNotPresent
+        image: openwhisk/invoker
+        command: [ "/bin/bash", "-c", "/invoker/bin/invoker `hostname | cut -d'-' -f2`" ]
+        env:
+          - name: "CONSULSERVER_HOST"
+            value: "consul.openwhisk"
+          - name: "CONSUL_HOST_PORT4"
+            value: "8500"
+          - name: "PORT"
+            value: "8080"
+          - name: "SELF_DOCKER_ENDPOINT"
+            value: "localhost"
+          - name: "SERVICE_CHECK_HTTP"
+            value: "/ping"
+          - name: "SERVICE_CHECK_TIMEOUT"
+            value: "2s"
+          - name: "SERVICE_CHECK_INTERVAL"
+            value: "15s"
+        ports:
+        - name: invoker
+          containerPort: 8080
+        volumeMounts:
+        - name: cgroup
+          mountPath: "/sys/fs/cgroup"
+        - name: runc
+          mountPath: "/run/runc"
+        - name: dockersock
+          mountPath: "/var/run/docker.sock"
+        - name: dockerrootdir
+          mountPath: "/containers"
+        - name: apparmor
+          mountPath: "/usr/lib/x86_64-linux-gnu/libapparmor.so.1"
+        lifecycle:
+          postStart:
+            exec:
+              command:
+              - "/bin/bash"
+              - "-c"
+              - "docker pull openwhisk/nodejs6action && docker pull openwhisk/dockerskeleton && docker pull openwhisk/python2action && docker pull openwhisk/python3action && docker pull openwhisk/swift3action && docker pull openwhisk/java8action"
diff --git a/kubernetes/ansible-kube/environments/kube/files/kafka-service.yml b/kubernetes/ansible-kube/environments/kube/files/kafka-service.yml
new file mode 100644
index 0000000..093ed76
--- /dev/null
+++ b/kubernetes/ansible-kube/environments/kube/files/kafka-service.yml
@@ -0,0 +1,15 @@
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: kafka
+  namespace: openwhisk
+  labels:
+    name: kafka
+spec:
+  selector:
+    name: kafka
+  ports:
+    - port: 9092
+      targetPort: 9092
+      name: kafka
diff --git a/kubernetes/ansible-kube/environments/kube/files/kafka.yml b/kubernetes/ansible-kube/environments/kube/files/kafka.yml
new file mode 100644
index 0000000..02785ec
--- /dev/null
+++ b/kubernetes/ansible-kube/environments/kube/files/kafka.yml
@@ -0,0 +1,29 @@
+---
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: kafka
+  namespace: openwhisk
+  labels:
+    name: kafka
+spec:
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        name: kafka
+    spec:
+      restartPolicy: Always
+
+      containers:
+      - name: kafka
+        image: ches/kafka:0.10.0.1
+        imagePullPolicy: IfNotPresent
+        env:
+        - name: "KAFKA_ADVERTISED_HOST_NAME"
+          value: kafka.openwhisk
+        - name: "KAFKA_PORT"
+          value: "9092"
+        ports:
+        - name: kafka
+          containerPort: 9092
diff --git a/kubernetes/ansible-kube/environments/kube/files/zookeeper-service.yml b/kubernetes/ansible-kube/environments/kube/files/zookeeper-service.yml
new file mode 100644
index 0000000..f2f5a6c
--- /dev/null
+++ b/kubernetes/ansible-kube/environments/kube/files/zookeeper-service.yml
@@ -0,0 +1,15 @@
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: zookeeper
+  namespace: openwhisk
+  labels:
+    name: zookeeper
+spec:
+  selector:
+    name: zookeeper
+  ports:
+    - port: 2181
+      targetPort: 2181
+      name: zookeeper
diff --git a/kubernetes/ansible-kube/environments/kube/files/zookeeper.yml b/kubernetes/ansible-kube/environments/kube/files/zookeeper.yml
new file mode 100644
index 0000000..219d132
--- /dev/null
+++ b/kubernetes/ansible-kube/environments/kube/files/zookeeper.yml
@@ -0,0 +1,24 @@
+---
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: zookeeper
+  namespace: openwhisk
+  labels:
+    name: zookeeper
+spec:
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        name: zookeeper
+    spec:
+      restartPolicy: Always
+
+      containers:
+      - name: zookeeper
+        image: zookeeper:3.4
+        imagePullPolicy: IfNotPresent
+        ports:
+        - name: zookeeper
+          containerPort: 2181
diff --git a/kubernetes/ansible-kube/environments/kube/group_vars/all b/kubernetes/ansible-kube/environments/kube/group_vars/all
new file mode 100644
index 0000000..9b13574
--- /dev/null
+++ b/kubernetes/ansible-kube/environments/kube/group_vars/all
@@ -0,0 +1,61 @@
+---
+# general properties
+kube_pod_dir: "{{ playbook_dir }}/environments/kube/files"
+whisk_version_name: kube
+whisk_logs_dir: /tmp/wsklogs
+
+# docker properties
+docker_dns: ""
+docker_registry: ""
+docker_image_prefix: "openwhisk"
+
+# CouchDB properties
+db_host: couchdb.openwhisk
+db_provider: CouchDB
+db_port: 5984
+db_protocol: http
+db_username: couch_user
+db_password: couch_password
+db_auth: "subjects"
+db_prefix: "ubuntu_kube-1-4-1_"
+
+# apigw db credentials minimum read/write
+db_apigw_username: "couch_user"
+db_apigw_password: "couch_password"
+db_apigw: "ubuntu_kube-1-4-1_gwapis"
+apigw_initdb: true
+
+# API GW connection configuration
+apigw_auth_user: ""
+apigw_auth_pwd: ""
+apigw_host: "http://edge.openwhisk:9000/v1"
+apigw_host_v2: "http://edge.openwhisk:9000/v2"
+
+
+# consul properties
+consul_host: consul.openwhisk
+consul_conf_dir: /tmp/consul
+
+# edge properties
+nginx_conf_dir: /tmp/nginx
+cli_nginx_dir: "/tmp/nginx/cli/go/download"
+edge_host: edge.openwhisk
+
+# controller properties
+controller_host: controller.openwhisk
+
+# kafka properties
+kafka_host: kafka.openwhisk
+zookeeper_host: zookeeper.openwhisk
+
+# invoker properties
+# The invoker_count property is overwritten by the (kubernetes/configure/configure.sh)
+# script. This way the source of truth for the number of Invoker instances is kept
+# in the Kubernetes Invoker deployment file. The configure.sh script will read the
+# Kube file and replace this one to keep everything in sync.
+invoker_count: REPLACE_INVOKER_COUNT
+invoker_port: 8080
+
+# registry
+registry_conf_dir: /tmp/registry
+registry_storage_dir: "/"
diff --git a/kubernetes/ansible/environments/kube/hosts b/kubernetes/ansible-kube/environments/kube/hosts
similarity index 100%
rename from kubernetes/ansible/environments/kube/hosts
rename to kubernetes/ansible-kube/environments/kube/hosts
diff --git a/kubernetes/ansible-kube/openwhisk.yml b/kubernetes/ansible-kube/openwhisk.yml
new file mode 100644
index 0000000..a5beaa3
--- /dev/null
+++ b/kubernetes/ansible-kube/openwhisk.yml
@@ -0,0 +1,16 @@
+---
+# This playbook deploys an OpenWhisk stack.
+# It assumes you have already set up your database with the respective db provider playbook (currently cloudant.yml or couchdb.yml)
+# It assumes that wipe.yml has been run at least once
+
+- include: consul.yml
+
+- include: kafka.yml
+
+- include: controller.yml
+
+- include: invoker.yml
+
+#- include: edge.yml
+#
+#- include: routemgmt.yml
diff --git a/kubernetes/ansible-kube/roles/consul/tasks/deploy.yml b/kubernetes/ansible-kube/roles/consul/tasks/deploy.yml
new file mode 100644
index 0000000..54c47a0
--- /dev/null
+++ b/kubernetes/ansible-kube/roles/consul/tasks/deploy.yml
@@ -0,0 +1,50 @@
+---
+# This role will install Consul Servers/Agents in all machines. After that it installs the Registrators.
+# There is a group of machines in the corresponding environment inventory called 'consul_servers' where the Consul Servers are installed
+# In this way they build up a Consul Cluster
+# Other machines that are not in the 'consul_servers' group have the Consul Agents
+# The template 'config.json.j2' will look at the environment inventory to decide to generate a config file for booting a server or an agent
+
+- name: ensure consul config directory exists
+  file:
+    path: "{{ consul_conf_dir }}"
+    state: directory
+  when: "'consul_servers' in group_names"
+
+- name: copy template from local to remote (which is really local) and fill in templates
+  template:
+    src: config.json.j2
+    dest: "{{ consul_conf_dir }}/config.json"
+  when: "'consul_servers' in group_names"
+
+- name: create configmap
+  shell: "kubectl create configmap consul --from-file={{ consul_conf_dir }}/config.json"
+
+- name: create consul deployment
+  shell: "kubectl apply -f {{kube_pod_dir}}/consul.yml"
+
+- name: wait until the Consul Server/Agent in this host is up and running
+  uri:
+    method: PUT
+    url: "http://{{ consul_host }}:{{ consul.port.http }}/v1/kv/consulIsAlive"
+    body: 'true'
+  register: result
+  until: result.status == 200
+  retries: 12
+  delay: 5
+  when: "'consul_servers' in group_names"
+
+- name: delete is alive token from Consul Server/Agent
+  uri:
+    method: DELETE
+    url: "http://{{ consul_host }}:{{ consul.port.http }}/v1/kv/consulIsAlive"
+  register: result
+  until: result.status == 200
+  retries: 10
+  delay: 1
+  when: "'consul_servers' in group_names"
+
+- name: notify handler to fill in Consul KV store with parameters in whisk.properties
+  command: "true"
+  notify: fill consul kv
+  when: "'consul_servers' in group_names"
diff --git a/kubernetes/ansible-kube/roles/consul/templates/config.json.j2 b/kubernetes/ansible-kube/roles/consul/templates/config.json.j2
new file mode 100644
index 0000000..e64cb5f
--- /dev/null
+++ b/kubernetes/ansible-kube/roles/consul/templates/config.json.j2
@@ -0,0 +1,14 @@
+{# this template is used to generate a config.json for booting a consul server #}
+{
+    "server": true,
+    "data_dir": "/consul/data",
+    "ui": true,
+    "log_level": "WARN",
+    "client_addr": "0.0.0.0",
+    "advertise_addr": "{{ consul_host }}",
+    "ports": {
+        "dns": 8600
+    },
+    "bootstrap": true,
+    "disable_update_check": true
+}
diff --git a/kubernetes/ansible-kube/roles/controller/tasks/deploy.yml b/kubernetes/ansible-kube/roles/controller/tasks/deploy.yml
new file mode 100644
index 0000000..aaba253
--- /dev/null
+++ b/kubernetes/ansible-kube/roles/controller/tasks/deploy.yml
@@ -0,0 +1,25 @@
+---
+# This role will install Controller in group 'controllers' in the environment inventory
+
+- name: create controller deployment
+  shell: "kubectl apply -f {{kube_pod_dir}}/controller.yml"
+
+- name: get controller pods
+  shell: "kubectl -n openwhisk get pods --show-all | grep controller | awk '{print $1}'"
+  register: pods
+  until: pods.stdout != ""
+  retries: 5
+  delay: 2
+
+- name: set controller pods
+  set_fact:
+    controller_pods: "{{ pods.stdout_lines }}"
+
+- name: wait until the Controller in this host is up and running
+  shell: "kubectl -n openwhisk exec {{ item[0] }} -- bash -c 'curl -I http://0.0.0.0:8080/ping'"
+  register: result
+  until: (result.rc == 0) and (result.stdout.find("200 OK") != -1)
+  retries: 12
+  delay: 5
+  with_items:
+    - ["{{ controller_pods }}"]
diff --git a/kubernetes/ansible/roles/couchdb/tasks/deploy.yml b/kubernetes/ansible-kube/roles/couchdb/tasks/deploy.yml
similarity index 100%
rename from kubernetes/ansible/roles/couchdb/tasks/deploy.yml
rename to kubernetes/ansible-kube/roles/couchdb/tasks/deploy.yml
diff --git a/kubernetes/ansible/roles/couchdb/tasks/main.yml b/kubernetes/ansible-kube/roles/couchdb/tasks/main.yml
similarity index 100%
rename from kubernetes/ansible/roles/couchdb/tasks/main.yml
rename to kubernetes/ansible-kube/roles/couchdb/tasks/main.yml
diff --git a/kubernetes/ansible-kube/roles/invoker/tasks/deploy.yml b/kubernetes/ansible-kube/roles/invoker/tasks/deploy.yml
new file mode 100644
index 0000000..f943f24
--- /dev/null
+++ b/kubernetes/ansible-kube/roles/invoker/tasks/deploy.yml
@@ -0,0 +1,15 @@
+---
+# This role installs invokers.
+- name: create invoker deployment
+  shell: "kubectl apply -f {{kube_pod_dir}}/invoker.yml"
+
+# The invoker image has a long pull timeout since it needs to pull
+# all of the Docker images whisk depends on.
+- name: wait until Invoker is up and running
+  uri:
+    url: "http://invoker-{{ item }}.invoker.openwhisk:{{ invoker_port }}/ping"
+  register: result
+  until: result.status == 200
+  retries: 20
+  delay: 20
+  with_sequence: start=0 count={{ invoker_count }}
diff --git a/kubernetes/ansible-kube/roles/kafka/tasks/deploy.yml b/kubernetes/ansible-kube/roles/kafka/tasks/deploy.yml
new file mode 100644
index 0000000..1b15158
--- /dev/null
+++ b/kubernetes/ansible-kube/roles/kafka/tasks/deploy.yml
@@ -0,0 +1,72 @@
+---
+# This role will install Kafka with Zookeeper in group 'kafka' in the environment inventory
+- name: create zookeeper deployment
+  shell: "kubectl apply -f {{kube_pod_dir}}/zookeeper.yml"
+
+- name: create kafka deployment
+  shell: "kubectl apply -f {{kube_pod_dir}}/kafka.yml"
+
+- name: get zookeeper pods
+  shell: "kubectl -n openwhisk get pods --show-all | grep zookeeper | awk '{print $1}'"
+  register: zookeeperPods
+  until: zookeeperPods.stdout != ""
+  retries: 5
+  delay: 2
+
+- name: set zookeeper pods
+  set_fact:
+    zookeeper_pods: "{{ zookeeperPods.stdout_lines }}"
+
+- name: get kafka pods
+  shell: "kubectl -n openwhisk get pods --show-all | grep kafka | awk '{print $1}'"
+  register: kafkaPods
+  until: kafkaPods.stdout != ""
+  retries: 5
+  delay: 2
+
+- name: set kafka pods
+  set_fact:
+    kafka_pods: "{{ kafkaPods.stdout_lines }}"
+
+- name: wait until the Zookeeper in this host is up and running
+  shell: "kubectl -n openwhisk exec {{ item[0] }} -c zookeeper -- bash -c 'echo ruok | nc -w 3 0.0.0.0:{{ zookeeper.port }}'"
+  register: result
+  until: (result.rc == 0) and (result.stdout == 'imok')
+  retries: 36
+  delay: 5
+  with_nested:
+    - ["{{ zookeeper_pods }}"]
+
+- name: wait until the kafka server started up
+  shell: "kubectl -n openwhisk logs {{ item[0] }} -c kafka"
+  register: result
+  until: ('[Kafka Server 0], started' in result.stdout)
+  retries: 10
+  delay: 5
+  with_nested:
+    - ["{{ kafka_pods }}"]
+
+- name: create the active-ack and health topic
+  shell: "kubectl exec {{ item[0] }} -c kafka -- bash -c 'unset JMX_PORT; kafka-topics.sh --create --topic {{ item[1] }} --replication-factor 1 --partitions 1 --zookeeper {{ zookeeper_host }}:{{ zookeeper.port }}'"
+  register: command_result
+  failed_when: "not ('Created topic' in command_result.stdout or 'already exists' in command_result.stdout)"
+  with_nested:
+  - "{{ kafka_pods }}"
+  - [ 'command', 'health' ]
+
+- name: define invoker list
+  set_fact:
+    invoker_list: []
+
+- name: create the invoker list
+  set_fact:
+    invoker_list: "{{invoker_list}} + [{{item}}]"
+  with_sequence: start=0 count={{ invoker_count }}
+
+- name: create the invoker topics
+  shell: " kubectl exec {{ item[0] }} -c kafka -- bash -c 'unset JMX_PORT; kafka-topics.sh --create --topic invoke{{ item[1] }} --replication-factor 1 --partitions 1 --zookeeper {{ zookeeper_host }}:{{ zookeeper.port }}'"
+  register: command_result
+  failed_when: "not ('Created topic' in command_result.stdout or 'already exists' in command_result.stdout)"
+  with_nested:
+  - "{{ kafka_pods }}"
+  - "{{ invoker_list }}"
diff --git a/kubernetes/ansible/tasks/initdb.yml b/kubernetes/ansible-kube/tasks/initdb.yml
similarity index 100%
rename from kubernetes/ansible/tasks/initdb.yml
rename to kubernetes/ansible-kube/tasks/initdb.yml
diff --git a/kubernetes/ansible-kube/tasks/writeWhiskProperties.yml b/kubernetes/ansible-kube/tasks/writeWhiskProperties.yml
new file mode 100644
index 0000000..97ec01f
--- /dev/null
+++ b/kubernetes/ansible-kube/tasks/writeWhiskProperties.yml
@@ -0,0 +1,17 @@
+---
+# This task will write whisk.properties to the openwhisk_home.
+# Currently whisk.properties is still needed for consul and tests.
+
+- name: define invoker domains
+  set_fact:
+    invoker_hosts: []
+
+- name: update the invoker domains
+  set_fact:
+    invoker_hosts: "{{invoker_hosts}} + ['invoker-{{item}}.invoker.openwhisk']"
+  with_sequence: start=0 count={{ invoker_count }}
+
+- name: write whisk.properties template to openwhisk_home
+  template:
+    src: whisk.properties.j2
+    dest: "{{ openwhisk_home }}/whisk.properties"
diff --git a/kubernetes/ansible-kube/templates/whisk.properties.j2 b/kubernetes/ansible-kube/templates/whisk.properties.j2
new file mode 100644
index 0000000..c983fd7
--- /dev/null
+++ b/kubernetes/ansible-kube/templates/whisk.properties.j2
@@ -0,0 +1,134 @@
+openwhisk.home={{ openwhisk_home }}
+
+python.27=python
+use.cli.download=false
+nginx.conf.dir={{ nginx_conf_dir }}
+testing.auth={{ openwhisk_home }}/ansible/files/auth.guest
+vcap.services.file=
+
+whisk.logs.dir={{ whisk_logs_dir }}
+whisk.version.name={{ whisk_version_name }}
+whisk.version.date={{ whisk.version.date }}
+whisk.version.buildno={{ docker_image_tag }}
+whisk.ssl.cert={{ openwhisk_home }}/ansible/roles/nginx/files/openwhisk-cert.pem
+whisk.ssl.key={{ openwhisk_home }}/ansible/roles/nginx/files/openwhisk-key.pem
+whisk.ssl.challenge=openwhisk
+
+{#
+ # the whisk.api.host.name must be a name that can resolve from inside an action container,
+ # or an ip address reachable from inside the action container.
+ #
+ # the whisk.api.localhost.name must be a name that resolves from the client; it is either the
+ # whisk_api_host_name if it is defined, an environment specific localhost name, or the default
+ # localhost name.
+ #
+ # the whisk.api.vanity.subdomain.parts indicates how many conforming parts the router is configured to
+ # match in the subdomain, which it rewrites into a namespace; each part must match ([a-zA-Z0-9]+)
+ # with parts separated by a single dash.
+ #}
+whisk.api.host.proto={{ whisk_api_host_proto | default('https') }}
+whisk.api.host.port={{ whisk_api_host_port | default('443') }}
+whisk.api.host.name={{ whisk_api_host_name | default(groups['edge'] | first) }}
+whisk.api.localhost.name={{ whisk_api_localhost_name | default(whisk_api_host_name) | default(whisk_api_localhost_name_default) }}
+whisk.api.vanity.subdomain.parts=1
+
+runtimes.manifest={{ runtimesManifest | to_json }}
+defaultLimits.actions.invokes.perMinute={{ defaultLimits.actions.invokes.perMinute }}
+defaultLimits.actions.invokes.concurrent={{ defaultLimits.actions.invokes.concurrent }}
+defaultLimits.triggers.fires.perMinute={{ defaultLimits.triggers.fires.perMinute }}
+defaultLimits.actions.invokes.concurrentInSystem={{ defaultLimits.actions.invokes.concurrentInSystem }}
+defaultLimits.actions.sequence.maxLength={{ defaultLimits.actions.sequence.maxLength }}
+
+{% if limits is defined %}
+limits.actions.invokes.perMinute={{ limits.actions.invokes.perMinute }}
+limits.actions.invokes.concurrent={{ limits.actions.invokes.concurrent }}
+limits.actions.invokes.concurrentInSystem={{ limits.actions.invokes.concurrentInSystem }}
+limits.triggers.fires.perMinute={{ limits.triggers.fires.perMinute }}
+{% endif %}
+
+# DNS host resolution
+consulserver.host={{ consul_host }}
+invoker.hosts={{ invoker_hosts | join(",") }}
+controller.host={{ controller_host }}
+kafka.host={{ kafka_host }}
+zookeeper.host={{ zookeeper_host }}
+
+edge.host={{ groups["edge"]|first }}
+loadbalancer.host={{ groups["controllers"]|first }}
+router.host={{ groups["edge"]|first }}
+
+{#
+ # replaced host entries
+ #
+ # consulserver.host={{ groups["consul_servers"]|first }}
+ # invoker.hosts={{ groups["invokers"] | join(",") }}
+ # controller.host={{ groups["controllers"]|first }}
+ # kafka.host={{ groups["kafka"]|first }}
+ # zookeeper.host={{ groups["kafka"]|first }}
+ #}
+
+edge.host.apiport=443
+zookeeper.host.port={{ zookeeper.port }}
+kafka.host.port={{ kafka.port }}
+kafkaras.host.port={{ kafka.ras.port }}
+controller.host.port={{ controller.port }}
+loadbalancer.host.port={{ controller.port }}
+consul.host.port4={{ consul.port.http }}
+consul.host.port5={{ consul.port.server }}
+invoker.hosts.baseport={{ invoker_port }}
+
+{#
+ # ports that are replaced
+ # using Kube stateful sets, we are able to get
+ # one DNS entry per IP address. Unlike the usual
+ # Kube DNS entries, we are unable to do port mappings.
+ # So we need to use the port of the invoker instance.
+ #
+ #invoker.hosts.baseport={{ invoker.port }}
+ #}
+
+invoker.container.network=bridge
+invoker.container.policy={{ invoker_container_policy_name | default()}}
+invoker.numcore={{ invoker.numcore }}
+invoker.coreshare={{ invoker.coreshare }}
+invoker.serializeDockerOp={{ invoker.serializeDockerOp }}
+invoker.serializeDockerPull={{ invoker.serializeDockerPull }}
+invoker.useRunc={{ invoker_use_runc | default(invoker.useRunc) }}
+
+consulserver.docker.endpoint={{ groups["consul_servers"]|first }}:{{ docker.port }}
+edge.docker.endpoint={{ groups["edge"]|first }}:{{ docker.port }}
+kafka.docker.endpoint={{ groups["kafka"]|first }}:{{ docker.port }}
+main.docker.endpoint={{ groups["controllers"]|first }}:{{ docker.port }}
+
+docker.registry={{ docker_registry }}
+
+# configure to use the public docker images
+docker.image.prefix=openwhisk
+#docker.image.prefix={{ docker_image_prefix }}
+
+#use.docker.registry=false
+docker.port={{ docker.port }}
+docker.timezone.mount=
+docker.image.tag={{ docker_image_tag }}
+docker.tls.cmd=
+docker.addHost.cmd=
+docker.dns.cmd={{ docker_dns }}
+docker.restart.opts={{ docker.restart.policy }}
+
+db.provider={{ db_provider }}
+db.protocol={{ db_protocol }}
+db.host={{ db_host }}
+db.port={{ db_port }}
+db.username={{ db_username }}
+db.password={{ db_password }}
+db.prefix={{ db_prefix }}
+db.whisk.actions={{ db.whisk.actions }}
+db.whisk.activations={{ db.whisk.activations }}
+db.whisk.auths={{ db.whisk.auth }}
+
+apigw.auth.user={{apigw_auth_user}}
+apigw.auth.pwd={{apigw_auth_pwd}}
+apigw.host={{apigw_host}}
+apigw.host.v2={{apigw_host_v2}}
+
+loadbalancer.activationCountBeforeNextInvoker={{ loadbalancer_activation_count_before_next_invoker | default(10) }}
diff --git a/kubernetes/ansible/couchdb.yml b/kubernetes/ansible/couchdb.yml
deleted file mode 100644
index 70faba9..0000000
--- a/kubernetes/ansible/couchdb.yml
+++ /dev/null
@@ -1,6 +0,0 @@
----
-# This playbook deploys a CouchDB for Openwhisk.  
-
-- hosts: db
-  roles:
-  - couchdb
\ No newline at end of file
diff --git a/kubernetes/ansible/environments/kube/group_vars/all b/kubernetes/ansible/environments/kube/group_vars/all
deleted file mode 100644
index 63f636a..0000000
--- a/kubernetes/ansible/environments/kube/group_vars/all
+++ /dev/null
@@ -1,18 +0,0 @@
----
-db_provider: CouchDB
-db_port: 5984
-db_protocol: http
-db_username: couch_user
-db_password: couch_password
-#db_host: whisk-db-service.default
-db_auth: "subjects"
-db_prefix: "ubuntu_kube-1-4-1_"
-
-# apigw db credentials minimum read/write
-db_apigw_username: "couch_user"
-db_apigw_password: "couch_password"
-db_apigw: "ubuntu_kube-1-4-1_gwapis"
-
-kube_pod_dir: "{{ playbook_dir }}/environments/kube/files"
-
-db_host: couchdb.openwhisk
diff --git a/kubernetes/configure/cleanup.sh b/kubernetes/configure/cleanup.sh
index cd43672..84728f2 100755
--- a/kubernetes/configure/cleanup.sh
+++ b/kubernetes/configure/cleanup.sh
@@ -9,6 +9,19 @@
 
 # delete deployments
 kubectl -n openwhisk delete deployment couchdb
+kubectl -n openwhisk delete deployment consul
+kubectl -n openwhisk delete deployment zookeeper
+kubectl -n openwhisk delete deployment kafka
+kubectl -n openwhisk delete deployment controller
+kubectl -n openwhisk delete statefulsets invoker
+
+# delete configmaps
+kubectl -n openwhisk delete cm consul
 
 # delete services
 kubectl -n openwhisk delete service couchdb
+kubectl -n openwhisk delete service consul
+kubectl -n openwhisk delete service zookeeper
+kubectl -n openwhisk delete service kafka
+kubectl -n openwhisk delete service controller
+kubectl -n openwhisk delete service invoker
diff --git a/kubernetes/configure/configure.sh b/kubernetes/configure/configure.sh
index 270ce65..89dfa94 100755
--- a/kubernetes/configure/configure.sh
+++ b/kubernetes/configure/configure.sh
@@ -6,24 +6,54 @@
 # Note: This pod assumes that there is an openwhisk namespace and the pod
 # running this script has been created in that namespace.
 
+deployCouchDB() {
+  COUCH_DEPLOYED=$(kubectl -n openwhisk get pods --show-all | grep couchdb | grep "1/1")
+
+  if [ -z "$COUCH_DEPLOYED" ]; then
+   return 0;
+  else
+   return 1;
+  fi
+}
+
 set -ex
 
+# Currently, Consul needs to be seeded with the proper Invoker name to DNS address mapping. To account
+# for this, we need to use StatefulSets (https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/)
+# to generate the Invoker addresses in a guaranteed pattern. We can then use properties from the
+# StatefulSet yaml file for OpenWhisk deployment configuration options.
+
+INVOKER_REP_COUNT=$(cat /openwhisk-devtools/kubernetes/ansible-kube/environments/kube/files/invoker.yml | grep 'replicas:' | awk '{print $2}')
+INVOKER_COUNT=${INVOKER_REP_COUNT:-1}
+sed -ie "s/REPLACE_INVOKER_COUNT/$INVOKER_COUNT/g" /openwhisk-devtools/kubernetes/ansible-kube/environments/kube/group_vars/all
+
+# copy the ansible playbooks and tools to this repo
+cp -R /openwhisk/ansible/ /openwhisk-devtools/kubernetes/ansible
+cp -R /openwhisk/tools/ /openwhisk-devtools/kubernetes/tools
+
+# overwrite the default openwhisk ansible with the kube ones.
+cp -R /openwhisk-devtools/kubernetes/ansible-kube/. /openwhisk-devtools/kubernetes/ansible/
+
+# start kubectl in proxy mode so we can talk to the Kube Api server
 kubectl proxy -p 8001 &
 
-# Create all of the necessary services
 pushd /openwhisk-devtools/kubernetes/ansible
+  # Create all of the necessary services
   kubectl apply -f environments/kube/files/db-service.yml
-popd
+  kubectl apply -f environments/kube/files/consul-service.yml
+  kubectl apply -f environments/kube/files/zookeeper-service.yml
+  kubectl apply -f environments/kube/files/kafka-service.yml
+  kubectl apply -f environments/kube/files/controller-service.yml
+  kubectl apply -f environments/kube/files/invoker-service.yml
 
-# Create the CouchDB deployment
-pushd /openwhisk-devtools/kubernetes/ansible
-  cp /openwhisk/ansible/group_vars/all group_vars/all
-  ansible-playbook -i environments/kube couchdb.yml
-popd
+  if deployCouchDB; then
+    # Create the CouchDB deployment
+    ansible-playbook -i environments/kube couchdb.yml
+    # configure couch db
+    ansible-playbook -i environments/kube initdb.yml
+    ansible-playbook -i environments/kube wipe.yml
+  fi
 
-## configure couch db
-pushd /openwhisk/ansible/
-  ansible-playbook -i /openwhisk-devtools/kubernetes/ansible/environments/kube initdb.yml
-  ansible-playbook -i /openwhisk-devtools/kubernetes/ansible/environments/kube wipe.yml
+  # Run through the openwhisk deployment
+  ansible-playbook -i environments/kube openwhisk.yml
 popd
-
diff --git a/kubernetes/configure/configure_whisk.yml b/kubernetes/configure/configure_whisk.yml
index 65c044e..acc7262 100644
--- a/kubernetes/configure/configure_whisk.yml
+++ b/kubernetes/configure/configure_whisk.yml
@@ -16,6 +16,6 @@
       restartPolicy: Never
       containers:
       - name: configure-openwhisk
-        image: danlavine/whisk_config
+        image: danlavine/whisk_config:latest
         imagePullPolicy: Always
         command: [ "/openwhisk-devtools/kubernetes/configure/configure.sh" ]
diff --git a/kubernetes/docker/build.sh b/kubernetes/docker/build.sh
index 673b6c7..0c1340d 100755
--- a/kubernetes/docker/build.sh
+++ b/kubernetes/docker/build.sh
@@ -1,15 +1,10 @@
 #!/usr/bin/env bash
 
 # This script can be used to build the custom docker images required
-# for Kubernetes. This involves running the entire OpenWhisk gradle
-# build process and then creating the custom images for OpenWhisk.
-
-# prerequisites:
-#   * be able to run `cd <home_openwhisk> ./gradlew distDocker`
+# for deploying openwhisk on Kubernetes.
 
 set -ex
 
-
 if [ -z "$1" ]; then
 cat <<- EndOfMessage
   First argument should be location of which docker repo to push all
@@ -20,47 +15,5 @@
 exit 1
 fi
 
-OPENWHISK_DIR=""
-if [ -z "$2" ]; then
-cat <<- EndOfMessage
-  Second argument should be the location of where the OpenWhisk repo lives.
-  By default the location is $HOME/workspace/openwhisk
-EndOfMessage
-
-  OPENWHISK_DIR=$HOME/workspace/openwhisk
-else
-  OPENWHISK_DIR="$2"
-fi
-
-pushd $OPENWHISK_DIR
-  ./gradlew distDocker
-popd
-
-## Retag new images for public repo
-docker tag whisk/badaction "$1"/whisk_badaction
-docker tag whisk/badproxy "$1"/whisk_badproxy
-docker tag whisk/cli "$1"/whisk_cli
-docker tag whisk/example "$1"/whisk_example
-docker tag whisk/swift3action "$1"/whisk_swift3action
-docker tag whisk/pythonaction "$1"/whisk_pythonaction
-docker tag whisk/nodejs6action "$1"/whisk_nodejs6action
-docker tag whisk/nodejsactionbase "$1"/whisk_nodejsactionbase
-docker tag whisk/javaaction "$1"/whisk_javaaction
-docker tag whisk/invoker "$1"/whisk_invoker
-docker tag whisk/controller "$1"/whisk_controller
-docker tag whisk/dockerskeleton "$1"/whisk_dockerskeleton
-docker tag whisk/scala "$1"/whisk_scala
-
-docker push "$1"/whisk_badaction
-docker push "$1"/whisk_badproxy
-docker push "$1"/whisk_cli
-docker push "$1"/whisk_example
-docker push "$1"/whisk_swift3action
-docker push "$1"/whisk_pythonaction
-docker push "$1"/whisk_nodejs6action
-docker push "$1"/whisk_nodejsactionbase
-docker push "$1"/whisk_javaaction
-docker push "$1"/whisk_invoker
-docker push "$1"/whisk_controller
-docker push "$1"/whisk_dockerskeleton
-docker push "$1"/whisk_scala
+SOURCE="${BASH_SOURCE[0]}"
+SCRIPTDIR="$( dirname "$SOURCE" )"