remove documentation/scripts for kubeadm-dind-cluster (#549)

The kubeadm-dind-cluster project has been archived and is no longer
maintained. We migrated to the suggested replacement, kind, about a
month ago and Travis has been stable since the switch. Therefore it is
time to clean up the old scripts and user-facing documentation.

Fixes #508.
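
For reference, spinning up a local cluster with kind looks roughly like
the following (a sketch of the replacement workflow, not the exact CI
configuration):

```shell
# Create a single-node kind cluster and wait for it to be ready
kind create cluster --wait 5m

# Verify the cluster is reachable
kubectl get nodes
```
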
diff --git a/docs/k8s-dind-cluster.md b/docs/k8s-dind-cluster.md
deleted file mode 100644
index 724b309..0000000
--- a/docs/k8s-dind-cluster.md
+++ /dev/null
@@ -1,133 +0,0 @@
-<!--
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
--->
-
-
-# Deploying OpenWhisk on kubeadm-dind-cluster
-
-## Overview
-
-On Linux, you can run Kubernetes on top of Docker using the
-[kubeadm-dind-cluster](https://github.com/kubernetes-retired/kubeadm-dind-cluster)
-project.  Built on Docker-in-Docker (DIND) virtualization and
-`kubeadm`, kubeadm-dind-cluster creates a multi-node Kubernetes
-cluster that is suitable for deploying OpenWhisk for development
-and testing.  For detailed instructions on kubeadm-dind-cluster, we
-refer you to that project's [GitHub repository](https://github.com/kubernetes-retired/kubeadm-dind-cluster).
-Here we will only cover the basic operations needed to create and
-operate a default cluster with two virtual worker nodes running on a
-single host machine.
-
-NOTE: The kubeadm-dind-cluster project was recently deprecated in favor of [kind](https://kind.sigs.k8s.io/).
-We have a [work item](https://github.com/apache/openwhisk-deploy-kube/issues/508) to migrate our
-CI testing to using kind and document its setup for end users.  *Contributions Welcome!*
-
-## Initial setup
-
-There are "fixed" scripts
-[available](https://github.com/kubernetes-retired/kubeadm-dind-cluster/tree/master/fixed)
-for each major release of Kubernetes.
-Our TravisCI testing uses kubeadm-dind-cluster.sh on an Ubuntu 18.04
-host.  The `fixed` `dind-cluster` scripts for Kubernetes versions 1.13
-and 1.14 are known to work for deploying OpenWhisk.
-
-### Creating the Kubernetes Cluster
-
-First, make sure your userid is in the `docker` group on the host
-machine.  This will enable you to run the `dind-cluster.sh` script
-without requiring `sudo` to gain `root` privileges.
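-
-For example, on most Linux distributions you can add yourself to the
-group with the following command (log out and back in for it to take
-effect):
-```shell
-# Add the current user to the docker group
-sudo usermod -aG docker "$USER"
-```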
-
-To initially create your cluster, do the following:
-```shell
-# Get the script for the Kubernetes version you want
-wget https://github.com/kubernetes-retired/kubeadm-dind-cluster/releases/download/v0.3.0/dind-cluster-v1.14.sh
-
-# Make it executable
-chmod +x dind-cluster-v1.14.sh
-
-# Start the cluster. Please note you *must* set `USE_HAIRPIN` to `true`
-USE_HAIRPIN=true ./dind-cluster-v1.14.sh up
-
-# add the directory containing kubectl to your PATH
-export PATH="$HOME/.kubeadm-dind-cluster:$PATH"
-```
-
-The default configuration of `dind-cluster.sh` will create a cluster
-with three nodes: one master node and two worker nodes. We recommend
-labeling the two worker nodes for OpenWhisk so that you have one
-invoker node for running user actions and one core node for running
-the rest of the OpenWhisk system.
-```shell
-kubectl label node kube-node-1 openwhisk-role=core
-kubectl label node kube-node-2 openwhisk-role=invoker
-```
-
-### Configuring OpenWhisk
-
-
-You will be using a NodePort ingress to access OpenWhisk. Assuming
-`kubectl describe node kube-node-1 | grep InternalIP` returns 10.192.0.3
-and port 31001 is available on your host machine, a
-mycluster.yaml for a standard deployment of OpenWhisk would be:
-```yaml
-whisk:
-  ingress:
-    type: NodePort
-    apiHostName: 10.192.0.3
-    apiHostPort: 31001
-
-nginx:
-  httpsNodePort: 31001
-
-invoker:
-  containerFactory:
-    dind: true
-
-k8s:
-  persistence:
-    enabled: false
-```
-
-Note the stanza setting `invoker.containerFactory.dind` to true. This
-is needed because the logs for docker containers running on the
-virtual worker nodes are in a non-standard location, requiring special
-configuration of OpenWhisk's invoker pods. Failure to set this
-variable when running on kubeadm-dind-cluster will result in an
-OpenWhisk deployment that cannot execute user actions.
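-
-If you want to confirm this, you can ask a virtual worker node's
-Docker daemon where it stores its data; for example, using the node
-labeled above (the exact path may vary by kubeadm-dind-cluster
-release):
-```shell
-# Query the Docker root directory inside the kube-node-1 node container
-docker exec kube-node-1 docker info --format '{{.DockerRootDir}}'
-# On kubeadm-dind-cluster nodes this reports /dind/docker rather than
-# the usual /var/lib/docker
-```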
-
-For ease of deployment, you should also disable persistent volumes
-because kubeadm-dind-cluster does not configure a default
-StorageClass.
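-
-You can check whether your cluster defines a default StorageClass with:
-```shell
-# If no class is marked "(default)", dynamic provisioning of
-# persistent volumes is not available
-kubectl get storageclass
-```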
-
-## Limitations
-
-Using kubeadm-dind-cluster is only appropriate for development and
-testing purposes.  It is not recommended for production deployments of
-OpenWhisk.
-
-With persistence disabled, you cannot restart the Kubernetes cluster
-without also re-installing Helm and OpenWhisk.
-
-TLS termination will be handled by OpenWhisk's `nginx` service and
-will use self-signed certificates.  You will need to invoke `wsk` with
-the `-i` command line argument to bypass certificate checking.
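-
-For example, with the `mycluster.yaml` shown above, a minimal check
-that the deployment is reachable might look like this (the auth key
-below is a placeholder for your actual credentials):
-```shell
-# Point the CLI at the NodePort ingress and skip TLS verification
-wsk property set --apihost 10.192.0.3:31001 --auth <your-auth-key>
-wsk -i action list
-```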
-
-Unlike Kubernetes with Docker for Mac 18.06 and later, only the
-virtual master and worker nodes are visible to Docker on the host
-system. The individual pods running the OpenWhisk system are visible
-only via `kubectl`, not directly through host Docker commands.
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
index ee14953..98db816 100644
--- a/docs/troubleshooting.md
+++ b/docs/troubleshooting.md
@@ -63,8 +63,7 @@
 Kafka service didn't actually come up successfully. One reason Kafka
 can fail to fully come up is that it cannot connect to itself.  On minikube,
 fix this by saying `minikube ssh -- sudo ip link set docker0 promisc
-on`. If using kubeadm-dind-cluster, set `USE_HAIRPIN=true` in your environment
-before running 'dind-cluster.sh up`. On full scale Kubernetes clusters,
+on`. On full scale Kubernetes clusters,
 make sure that your kubelet's `hairpin-mode` is not `none`).
 
 The usual symptom of this network misconfiguration is the controller
diff --git a/helm/openwhisk/templates/_invoker-helpers.tpl b/helm/openwhisk/templates/_invoker-helpers.tpl
index 3e3abd0..fcba1f1 100644
--- a/helm/openwhisk/templates/_invoker-helpers.tpl
+++ b/helm/openwhisk/templates/_invoker-helpers.tpl
@@ -24,11 +24,7 @@
     path: "/run/runc"
 - name: dockerrootdir
   hostPath:
-    {{- if .Values.invoker.containerFactory.dind }}
-    path: "/dind/docker/containers"
-    {{- else }}
     path: "/var/lib/docker/containers"
-    {{- end }}
 - name: dockersock
   hostPath:
     path: "/var/run/docker.sock"
diff --git a/helm/openwhisk/values-metadata.yaml b/helm/openwhisk/values-metadata.yaml
index f69e741..0bbe853 100644
--- a/helm/openwhisk/values-metadata.yaml
+++ b/helm/openwhisk/values-metadata.yaml
@@ -1019,12 +1019,6 @@
       type: "string"
       required: false
   containerFactory:
-    dind:
-      __metadata:
-        label: "dind"
-        description: "If using Docker-in-Docker to run your Kubernetes cluster"
-        type: "boolean"
-        required: true
     useRunc:
       __metadata:
         label: "useRunc"
diff --git a/helm/openwhisk/values.yaml b/helm/openwhisk/values.yaml
index 01af015..fa42716 100644
--- a/helm/openwhisk/values.yaml
+++ b/helm/openwhisk/values.yaml
@@ -24,7 +24,6 @@
 # to reflect your specific Kubernetes cluster.  For details, see the appropriate
 # one of these files:
 #   docs/k8s-docker-for-mac.md
-#   docs/k8s-dind-cluster.md
 #   docs/k8s-aws.md
 #   docs/k8s-ibm-public.md
 #   docs/k8s-ibm-private.md
@@ -259,7 +258,6 @@
   jvmHeapMB: "512"
   jvmOptions: ""
   containerFactory:
-    dind: false
     useRunc: false
     impl: "kubernetes"
     enableConcurrency: false
diff --git a/tools/travis/dind-cluster-v12.patch b/tools/travis/dind-cluster-v12.patch
deleted file mode 100644
index dfb51ff..0000000
--- a/tools/travis/dind-cluster-v12.patch
+++ /dev/null
@@ -1,11 +0,0 @@
---- dind-cluster.sh	2019-03-11 17:37:22.000000000 -0400
-+++ dind-cluster.sh	2019-03-11 17:38:23.000000000 -0400
-@@ -1099,7 +1099,7 @@
-     --server="http://${host}:$(dind::apiserver-port)" \
-     --insecure-skip-tls-verify=true
-   "${kubectl}" config set-context "$context_name" --cluster="$cluster_name"
--  if [[ ${DIND_LABEL} = ${DEFAULT_DIND_LABEL} ]]; then
-+  if [[ "${DIND_LABEL}" = "${DEFAULT_DIND_LABEL}" ]]; then
-       # Single cluster mode
-       "${kubectl}" config use-context "$context_name"
-   fi
diff --git a/tools/travis/start-kubeadm-dind.sh b/tools/travis/start-kubeadm-dind.sh
deleted file mode 100755
index 194662f..0000000
--- a/tools/travis/start-kubeadm-dind.sh
+++ /dev/null
@@ -1,78 +0,0 @@
-#!/bin/bash
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-set -x
-
-# Install kubernetes-dind-cluster and boot it
-wget https://github.com/kubernetes-retired/kubeadm-dind-cluster/releases/download/v0.3.0/dind-cluster-v$TRAVIS_KUBE_VERSION.sh -O $HOME/dind-cluster.sh && chmod +x $HOME/dind-cluster.sh
-if [[ "$TRAVIS_KUBE_VERSION" == "1.12" ]]; then
-    patch $HOME/dind-cluster.sh ./tools/travis/dind-cluster-v12.patch
-fi
-USE_HAIRPIN=true $HOME/dind-cluster.sh up
-
-# Install kubectl in /usr/local/bin so subsequent scripts can find it
-sudo cp $HOME/.kubeadm-dind-cluster/kubectl-v$TRAVIS_KUBE_VERSION* /usr/local/bin/kubectl
-
-
-echo "Kubernetes cluster is deployed and reachable"
-kubectl describe nodes
-
-# Download and install misc packages and utilities
-pushd /tmp
-  # Need socat for helm to forward connections to tiller on ubuntu 16.04
-  sudo apt update
-  sudo apt install -y socat
-
-  # download and install the wsk cli
-  wget -q https://github.com/apache/openwhisk-cli/releases/download/latest/OpenWhisk_CLI-latest-linux-amd64.tgz
-  tar xzf OpenWhisk_CLI-latest-linux-amd64.tgz
-  sudo cp wsk /usr/local/bin/wsk
-
-  # Download and install helm
-  curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh && chmod +x get_helm.sh && ./get_helm.sh
-popd
-
-# Pods running in kube-system namespace should have cluster-admin role
-kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
-
-# Install tiller into the cluster
-/usr/local/bin/helm init --service-account default
-
-# Wait for tiller to be ready
-TIMEOUT=0
-TIMEOUT_COUNT=60
-until [ $TIMEOUT -eq $TIMEOUT_COUNT ]; do
-  TILLER_STATUS=$(kubectl -n kube-system get pods -o wide | grep tiller-deploy | awk '{print $3}')
-  TILLER_READY_COUNT=$(kubectl -n kube-system get pods -o wide | grep tiller-deploy | awk '{print $2}')
-  if [[ "$TILLER_STATUS" == "Running" ]] && [[ "$TILLER_READY_COUNT" == "1/1" ]]; then
-    break
-  fi
-  echo "Waiting for tiller to be ready"
-  kubectl -n kube-system get pods -o wide
-  let TIMEOUT=TIMEOUT+1
-  sleep 5
-done
-
-if [ $TIMEOUT -eq $TIMEOUT_COUNT ]; then
-  echo "Failed to install tiller"
-
-  # Dump lowlevel logs to help diagnose failure to start tiller
-  $HOME/dind-cluster.sh dump
-  kubectl -n kube-system describe pods
-  exit 1
-fi