---
id: schedulers-k8s-by-hand
title: Kubernetes by hand
sidebar_label: Kubernetes by hand
---

This document shows you how to install Heron on Kubernetes in a step-by-step, "by hand" fashion. An easier way to install Heron on Kubernetes is to use the Helm package manager. For instructions on doing so, see Heron on Kubernetes with Helm.

Heron supports deployment on Kubernetes (sometimes called k8s). Heron deployments on Kubernetes use Docker as the containerization format for Heron topologies and use the Kubernetes API for scheduling.

You can use Heron on Kubernetes in multiple environments:

  • Locally on Minikube
  • On Google Container Engine (GKE)
  • On any other Kubernetes cluster

Requirements

In order to run Heron on Kubernetes, you will need:

  • A Kubernetes cluster with at least 3 nodes (unless you're running locally on Minikube)
  • The kubectl CLI tool installed and set up to communicate with your cluster
  • The heron CLI tool

Any additional requirements will depend on where you're running Heron on Kubernetes.
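
Before going further, it's worth confirming that kubectl can actually reach your cluster:

$ kubectl get nodes

This should list at least 3 nodes (or a single node if you're running locally on Minikube).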

How Heron on Kubernetes Works

When deploying to Kubernetes, each Heron container is deployed as a Kubernetes pod inside a Docker container. If 20 containers are going to be deployed with a topology, for example, then there will be 20 pods deployed to your Kubernetes cluster for that topology.

Minikube

Minikube enables you to run a Kubernetes cluster locally on a single machine.

Requirements

To run Heron on Minikube you'll need to install Minikube in addition to the other requirements listed above.

Starting Minikube

First you'll need to start up Minikube using the minikube start command. We recommend starting Minikube with:

  • at least 7 GB of memory
  • 5 CPUs
  • 20 GB of storage

This command will accomplish precisely that:

$ minikube start \
  --memory=7168 \
  --cpus=5 \
  --disk-size=20G
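
Once Minikube is up, you can confirm that kubectl is pointed at the local cluster:

$ kubectl config current-context
minikube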

Starting components

There are a variety of Heron components that you'll need to start up separately and in order. Make sure that the necessary pods are up and in the RUNNING state before moving on to the next step. You can track the progress of the pods using this command:

$ kubectl get pods -w
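
If you'd rather block until a pod is ready than watch the list, kubectl wait can do that. The label selector below is an assumption for illustration — substitute whatever labels the deployment YAML you applied actually sets on its pods:

# Wait up to 5 minutes for all pods carrying the (hypothetical) app=zookeeper label
$ kubectl wait pod -l app=zookeeper \
  --for=condition=Ready \
  --timeout=300s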

ZooKeeper

Heron uses ZooKeeper for a variety of coordination- and configuration-related tasks. To start up ZooKeeper on Minikube:

$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/minikube/zookeeper.yaml

BookKeeper

When running Heron on Kubernetes, Apache BookKeeper is used for things like topology artifact storage. You can start up BookKeeper using this command:

$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/minikube/bookkeeper.yaml

Heron tools

The so-called “Heron tools” include the Heron UI and the Heron Tracker. To start up the Heron tools:

$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/minikube/tools.yaml

Heron API server

The Heron API server is the endpoint that the Heron CLI client uses to interact with the other components of Heron. To start up the Heron API server on Minikube:

$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/minikube/apiserver.yaml

Managing topologies

Once all of the components have been successfully started up, you need to open up a proxy port to your Minikube Kubernetes cluster using the kubectl proxy command:

$ kubectl proxy -p 8001
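
Note that kubectl proxy runs in the foreground. Either leave it running in a separate terminal or send it to the background so you can keep working in the same shell:

$ kubectl proxy -p 8001 &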

Note: All of the following Kubernetes-specific URLs are valid as of the Kubernetes 1.10.0 release.

Now, verify that the Heron API server running on Minikube is available using curl:

$ curl http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy/api/v1/version

You should get a JSON response like this:

{
  "heron.build.git.revision" : "ddbb98bbf173fb082c6fd575caaa35205abe34df",
  "heron.build.git.status" : "Clean",
  "heron.build.host" : "ci-server-01",
  "heron.build.time" : "Sat Mar 31 09:27:19 UTC 2018",
  "heron.build.timestamp" : "1522488439000",
  "heron.build.user" : "release-agent",
  "heron.build.version" : "0.17.8"
}

Success! You can now manage Heron topologies on your Minikube Kubernetes installation. To submit an example topology to the cluster:

$ heron submit kubernetes \
  --service-url=http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy \
  ~/.heron/examples/heron-api-examples.jar \
  org.apache.heron.examples.api.AckingTopology acking

You can also track the progress of the Kubernetes pods that make up the topology. When you run kubectl get pods you should see pods with names like acking-0 and acking-1.

Another option is to set the service URL for Heron using the heron config command:

$ heron config kubernetes set service_url \
  http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy

That would enable you to manage topologies without setting the --service-url flag.
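
With the service URL stored, the usual topology lifecycle commands work without any extra flags. For example, to deactivate and then remove the acking topology submitted above:

# Pause tuple processing without removing the topology
$ heron deactivate kubernetes acking

# Remove the topology from the cluster entirely
$ heron kill kubernetes acking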

Heron UI

The Heron UI is an in-browser dashboard that you can use to monitor your Heron topologies. It should already be running in Minikube.

You can access Heron UI in your browser by navigating to http://localhost:8001/api/v1/namespaces/default/services/heron-ui:8889/proxy/topologies.

Google Container Engine

You can use Google Container Engine (GKE) to run Kubernetes clusters on Google Cloud Platform.

Requirements

To run Heron on GKE, you'll need to create a Kubernetes cluster with at least three nodes. This command would create a three-node cluster in your default Google Cloud Platform zone and project:

$ gcloud container clusters create heron-gke-cluster \
  --machine-type=n1-standard-4 \
  --num-nodes=3

You can specify a non-default zone and/or project using the --zone and --project flags, respectively.
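
For example, this variant pins both (the zone and project names here are placeholders, not values this guide depends on):

$ gcloud container clusters create heron-gke-cluster \
  --machine-type=n1-standard-4 \
  --num-nodes=3 \
  --zone=us-central1-a \
  --project=my-gcp-project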

Once the cluster is up and running, enable your local kubectl to interact with the cluster by fetching your GKE cluster's credentials:

$ gcloud container clusters get-credentials heron-gke-cluster
Fetching cluster endpoint and auth data.
kubeconfig entry generated for heron-gke-cluster.

Finally, you need to create a Kubernetes secret that specifies the Cloud Platform connection credentials for your service account. First, download your service account's credentials as a JSON file, say key.json (you can list your current service accounts using the gcloud iam service-accounts list command). This command will download the credentials:

$ gcloud iam service-accounts keys create key.json \
  --iam-account=YOUR-ACCOUNT

Topology artifact storage

Heron on Google Container Engine supports two static file storage options for topology artifacts:

  • Google Cloud Storage
  • Apache BookKeeper

Google Cloud Storage setup

If you're running Heron on GKE, you can use either Google Cloud Storage or Apache BookKeeper for topology artifact storage.

If you'd like to use BookKeeper instead of Google Cloud Storage, skip to the BookKeeper section below.

To use Google Cloud Storage for artifact storage, you'll need to create a Google Cloud Storage bucket. Here's an example bucket creation command using gsutil:

$ gsutil mb gs://my-heron-bucket

Cloud Storage bucket names must be globally unique, so make sure to choose a bucket name carefully. Once you've created a bucket, you need to create a Kubernetes ConfigMap that specifies the bucket name. Here's an example:

$ kubectl create configmap heron-apiserver-config \
  --from-literal=gcs.bucket=BUCKET-NAME

Then create the secret from the key.json file that you downloaded earlier:

$ kubectl create secret generic heron-gcs-key \
  --from-file=key.json=key.json

Once you've created a bucket, a ConfigMap, and a secret, you can move on to starting up the various components of your Heron installation.
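
If you want to double-check that everything is in place before continuing, each of the resources created above should show up when listed:

$ gsutil ls
$ kubectl get configmap heron-apiserver-config
$ kubectl get secret heron-gcs-key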

Starting components

There are a variety of Heron components that you'll need to start up separately and in order. Make sure that the necessary pods are up and in the RUNNING state before moving on to the next step. You can track the progress of the pods using this command:

$ kubectl get pods -w

ZooKeeper

Heron uses ZooKeeper for a variety of coordination- and configuration-related tasks. To start up ZooKeeper on your GKE cluster:

$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/gcp/zookeeper.yaml

BookKeeper setup

If you're using Google Cloud Storage for topology artifact storage, skip to the Heron tools section below.

To start up an Apache BookKeeper cluster for Heron:

$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/gcp/bookkeeper.yaml

Heron tools

The so-called “Heron tools” include the Heron UI and the Heron Tracker. To start up the Heron tools:

$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/gcp/tools.yaml

Heron API server

The Heron API server is the endpoint that the Heron CLI client uses to interact with the other components of Heron. Heron on Google Container Engine has two separate versions of the Heron API server that you can run depending on which artifact storage system you're using (Google Cloud Storage or Apache BookKeeper).

If you're using Google Cloud Storage:

$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/gcp/gcs-apiserver.yaml

If you're using Apache BookKeeper:

$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/gcp/bookkeeper-apiserver.yaml

Managing topologies

Once all of the components have been successfully started up, you need to open up a proxy port to your GKE Kubernetes cluster using the kubectl proxy command:

$ kubectl proxy -p 8001

Note: All of the following Kubernetes-specific URLs are valid as of the Kubernetes 1.10.0 release.

Now, verify that the Heron API server running on GKE is available using curl:

$ curl http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy/api/v1/version

You should get a JSON response like this:

{
  "heron.build.git.revision" : "bf9fe93f76b895825d8852e010dffd5342e1f860",
  "heron.build.git.status" : "Clean",
  "heron.build.host" : "ci-server-01",
  "heron.build.time" : "Sun Oct  1 20:42:18 UTC 2017",
  "heron.build.timestamp" : "1506890538000",
  "heron.build.user" : "release-agent1",
  "heron.build.version" : "0.16.2"
}

Success! You can now manage Heron topologies on your GKE Kubernetes installation. To submit an example topology to the cluster:

$ heron submit kubernetes \
  --service-url=http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy \
  ~/.heron/examples/heron-api-examples.jar \
  org.apache.heron.examples.api.AckingTopology acking

You can also track the progress of the Kubernetes pods that make up the topology. When you run kubectl get pods you should see pods with names like acking-0 and acking-1.

Another option is to set the service URL for Heron using the heron config command:

$ heron config kubernetes set service_url \
  http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy

That would enable you to manage topologies without setting the --service-url flag.

Heron UI

The Heron UI is an in-browser dashboard that you can use to monitor your Heron topologies. It should already be running in your GKE cluster.

You can access Heron UI in your browser by navigating to http://localhost:8001/api/v1/namespaces/default/services/heron-ui:8889/proxy/topologies.

General Kubernetes clusters

Although Minikube and Google Container Engine provide two easy ways to get started running Heron on Kubernetes, you can also run Heron on any Kubernetes cluster. The instructions in this section are tailored to non-Minikube, non-GKE Kubernetes installations.

Requirements

To run Heron on a general Kubernetes installation, you'll need to fulfill the requirements listed at the top of this doc. Once those requirements are met, you can begin starting up the various components that comprise a Heron on Kubernetes installation.

Starting components

There are a variety of Heron components that you'll need to start up separately and in order. Make sure that the necessary pods are up and in the RUNNING state before moving on to the next step. You can track the progress of the pods using this command:

$ kubectl get pods -w

ZooKeeper

Heron uses ZooKeeper for a variety of coordination- and configuration-related tasks. To start up ZooKeeper on your Kubernetes cluster:

$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/general/zookeeper.yaml

BookKeeper

When running Heron on Kubernetes, Apache BookKeeper is used for things like topology artifact storage (unless you're running on GKE). You can start up BookKeeper using this command:

$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/general/bookkeeper.yaml

Heron tools

The so-called “Heron tools” include the Heron UI and the Heron Tracker. To start up the Heron tools:

$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/general/tools.yaml

Heron API server

The Heron API server is the endpoint that the Heron CLI client uses to interact with the other components of Heron. To start up the Heron API server on your Kubernetes cluster:

$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/general/apiserver.yaml

Managing topologies

Once all of the components have been successfully started up, you need to open up a proxy port to your Kubernetes cluster using the kubectl proxy command:

$ kubectl proxy -p 8001

Note: All of the following Kubernetes-specific URLs are valid as of the Kubernetes 1.10.0 release.

Now, verify that the Heron API server running on your cluster is available using curl:

$ curl http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy/api/v1/version

You should get a JSON response like this:

{
  "heron.build.git.revision" : "ddbb98bbf173fb082c6fd575caaa35205abe34df",
  "heron.build.git.status" : "Clean",
  "heron.build.host" : "ci-server-01",
  "heron.build.time" : "Sat Mar 31 09:27:19 UTC 2018",
  "heron.build.timestamp" : "1522488439000",
  "heron.build.user" : "release-agent",
  "heron.build.version" : "0.17.8"
}

Success! You can now manage Heron topologies on your Kubernetes installation. To submit an example topology to the cluster:

$ heron submit kubernetes \
  --service-url=http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy \
  ~/.heron/examples/heron-api-examples.jar \
  org.apache.heron.examples.api.AckingTopology acking

You can also track the progress of the Kubernetes pods that make up the topology. When you run kubectl get pods you should see pods with names like acking-0 and acking-1.

Another option is to set the service URL for Heron using the heron config command:

$ heron config kubernetes set service_url \
  http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy

That would enable you to manage topologies without setting the --service-url flag.

Heron UI

The Heron UI is an in-browser dashboard that you can use to monitor your Heron topologies. It should already be running in your cluster.

You can access Heron UI in your browser by navigating to http://localhost:8001/api/v1/namespaces/default/services/heron-ui:8889/proxy/topologies.

Heron on Kubernetes configuration

You can configure Heron on Kubernetes using a variety of YAML config files, listed in the sections below.
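
These files live in the config directory that ships with the Heron CLI (typically ~/.heron/conf, with one subdirectory per cluster, e.g. ~/.heron/conf/kubernetes). Rather than editing the stock files in place, you can point the CLI at a copy using the --config-path flag; the paths below are just an example:

# Copy the stock config tree and submit using the copy
$ cp -r ~/.heron/conf ~/heron-conf
$ heron submit kubernetes \
  --config-path ~/heron-conf \
  --service-url=http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy \
  ~/.heron/examples/heron-api-examples.jar \
  org.apache.heron.examples.api.AckingTopology acking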

client.yaml

Configuration for the heron CLI tool.

| name | description | default |
|------|-------------|---------|
| heron.package.core.uri | Location of the core Heron package | file:///vagrant/.herondata/dist/heron-core-release.tar.gz |
| heron.config.is.role.required | Whether a role is required to submit a topology | False |
| heron.config.is.env.required | Whether an environment is required to submit a topology | False |
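
Individual values can also be overridden at submit time if your heron CLI build supports the --config-property flag (check heron submit --help first — treat this flag as an assumption to verify rather than a documented guarantee):

# Hypothetical one-off override: require a role at submission time
$ heron submit kubernetes \
  --service-url=http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy \
  --config-property heron.config.is.role.required=True \
  ~/.heron/examples/heron-api-examples.jar \
  org.apache.heron.examples.api.AckingTopology acking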

heron_internals.yaml

Configuration for a wide variety of Heron components, including logging, each topology's stream manager and topology master, and more.

| name | description | default |
|------|-------------|---------|
| heron.logging.directory | The relative path to the logging directory | log-files |
| heron.logging.maximum.size.mb | The maximum log file size (in MB) | 100 |
| heron.logging.maximum.files | The maximum number of log files | 5 |
| heron.check.tmaster.location.interval.sec | The interval, in seconds, after which to check if the topology master location has been fetched or not | 120 |
| heron.logging.prune.interval.sec | The interval, in seconds, at which to prune C++ log files | 300 |
| heron.logging.flush.interval.sec | The interval, in seconds, at which to flush C++ log files | 10 |
| heron.logging.err.threshold | The threshold level at which to log errors | 3 |
| heron.metrics.export.interval.sec | The interval, in seconds, at which different components export metrics to the metrics manager | 60 |
| heron.metrics.max.exceptions.per.message.count | The maximum count of exceptions in one MetricPublisherPublishMessage protobuf message | 1024 |
| heron.streammgr.cache.drain.frequency.ms | The frequency, in milliseconds, at which to drain the tuple cache in the stream manager | 10 |
| heron.streammgr.stateful.buffer.size.mb | The size-based threshold (in MB) for buffering data tuples waiting for checkpoint markers before giving up | 100 |
| heron.streammgr.cache.drain.size.mb | The size-based threshold (in MB) for draining the tuple cache | 100 |
| heron.streammgr.xormgr.rotatingmap.nbuckets | Number of buckets in the XOR manager's rotating map, used for efficient acknowledgements | 3 |
| heron.streammgr.mempool.max.message.number | The maximum number of messages in the memory pool for each message type | 512 |
| heron.streammgr.client.reconnect.interval.sec | The reconnect interval to other stream managers (in seconds) for the stream manager client | 1 |
| heron.streammgr.client.reconnect.tmaster.interval.sec | The reconnect interval to the topology master (in seconds) for the stream manager client | 10 |
| heron.streammgr.client.reconnect.tmaster.max.attempts | The maximum number of reconnect attempts to the topology master for the stream manager client | 30 |
| heron.streammgr.network.options.maximum.packet.mb | The maximum packet size (in MB) of the stream manager's network options | 10 |
| heron.streammgr.tmaster.heartbeat.interval.sec | The interval (in seconds) at which to send heartbeats | 10 |
| heron.streammgr.connection.read.batch.size.mb | The maximum batch size (in MB) for the stream manager to read from the socket | 1 |
| heron.streammgr.connection.write.batch.size.mb | The maximum batch size (in MB) for the stream manager to write to the socket | 1 |
| heron.streammgr.network.backpressure.threshold | The number of times Heron should see a full buffer while enqueueing data before declaring the start of backpressure | 3 |
| heron.streammgr.network.backpressure.highwatermark.mb | The high-water mark (in MB) on the amount of data that can be left outstanding on a connection | 100 |
| heron.streammgr.network.backpressure.lowwatermark.mb | The low-water mark (in MB) on the amount of data that can be left outstanding on a connection | |
| heron.tmaster.metrics.collector.maximum.interval.min | The maximum interval (in minutes) for metrics to be kept in the topology master | 180 |
| heron.tmaster.establish.retry.times | The maximum number of times to retry establishing a connection with the topology master | 30 |
| heron.tmaster.establish.retry.interval.sec | The interval at which to retry establishing a connection with the topology master | 1 |
| heron.tmaster.network.master.options.maximum.packet.mb | The maximum packet size (in MB) of the topology master's network options for connecting to stream managers | 16 |
| heron.tmaster.network.controller.options.maximum.packet.mb | The maximum packet size (in MB) of the topology master's network options for connecting to the scheduler | 1 |
| heron.tmaster.network.stats.options.maximum.packet.mb | The maximum packet size (in MB) of the topology master's network options for stat queries | 1 |
| heron.tmaster.metrics.collector.purge.interval.sec | The interval (in seconds) at which the topology master purges metrics from the socket | 60 |
| heron.tmaster.metrics.collector.maximum.exception | The maximum number of exceptions to be stored in the topology metrics collector, to prevent out-of-memory errors | 256 |
| heron.tmaster.metrics.network.bindallinterfaces | Whether the metrics reporter should bind on all interfaces | False |
| heron.tmaster.stmgr.state.timeout.sec | The timeout (in seconds) for the stream manager, compared with (current time - last heartbeat time) | 60 |
| heron.metricsmgr.network.read.batch.time.ms | The maximum batch time (in milliseconds) for the metrics manager to read from the socket | 16 |
| heron.metricsmgr.network.read.batch.size.bytes | The maximum batch size (in bytes) to read from the socket | 32768 |
| heron.metricsmgr.network.write.batch.time.ms | The maximum batch time (in milliseconds) for the metrics manager to write to the socket | 32768 |
| heron.metricsmgr.network.options.socket.send.buffer.size.bytes | The maximum socket send buffer size (in bytes) | 6553600 |
| heron.metricsmgr.network.options.socket.received.buffer.size.bytes | The maximum socket receive buffer size (in bytes) for the metrics manager's network options | 8738000 |
| heron.metricsmgr.network.options.maximum.packetsize.bytes | The maximum packet size that the metrics manager can read | 1048576 |
| heron.instance.network.options.maximum.packetsize.bytes | The maximum size of packets that Heron instances can read | 10485760 |
| heron.instance.internal.bolt.read.queue.capacity | The queue capacity (number of items) in a bolt for buffered packets to read from the stream manager | 128 |
| heron.instance.internal.bolt.write.queue.capacity | The queue capacity (number of items) in a bolt for buffered packets to write to the stream manager | 128 |
| heron.instance.internal.spout.read.queue.capacity | The queue capacity (number of items) in a spout for buffered packets to read from the stream manager | 1024 |
| heron.instance.internal.spout.write.queue.capacity | The queue capacity (number of items) in a spout for buffered packets to write to the stream manager | 128 |
| heron.instance.internal.metrics.write.queue.capacity | The queue capacity (number of items) for metrics packets to write to the metrics manager | 128 |
| heron.instance.network.read.batch.time.ms | Time-based: the maximum batch time (in milliseconds) for an instance to read from the stream manager per attempt | 16 |
| heron.instance.network.read.batch.size.bytes | Size-based: the maximum batch size (in bytes) to read from the stream manager | 32768 |
| heron.instance.network.write.batch.time.ms | Time-based: the maximum batch time (in milliseconds) for an instance to write to the stream manager per attempt | 16 |
| heron.instance.network.write.batch.size.bytes | Size-based: the maximum batch size (in bytes) to write to the stream manager | 32768 |
| heron.instance.network.options.socket.send.buffer.size.bytes | The maximum socket send buffer size (in bytes) | 6553600 |
| heron.instance.network.options.socket.received.buffer.size.bytes | The maximum socket receive buffer size (in bytes) of an instance's network options | 8738000 |
| heron.instance.set.data.tuple.capacity | The maximum number of data tuples to batch in a HeronDataTupleSet protobuf | 1024 |
| heron.instance.set.data.tuple.size.bytes | The maximum size (in bytes) of data tuples to batch in a HeronDataTupleSet protobuf | 8388608 |
| heron.instance.set.control.tuple.capacity | The maximum number of control tuples to batch in a HeronControlTupleSet protobuf | 1024 |
| heron.instance.ack.batch.time.ms | The maximum time (in milliseconds) for a spout to do acknowledgement per attempt; the ack batch can also break if there are no more ack tuples to process | 128 |
| heron.instance.emit.batch.time.ms | The maximum time (in milliseconds) for a spout instance to emit tuples per attempt | 16 |
| heron.instance.emit.batch.size.bytes | The maximum batch size (in bytes) for a spout to emit tuples per attempt | 32768 |
| heron.instance.execute.batch.time.ms | The maximum time (in milliseconds) for a bolt instance to execute tuples per attempt | 16 |
| heron.instance.execute.batch.size.bytes | The maximum batch size (in bytes) for a bolt instance to execute tuples per attempt | 32768 |
| heron.instance.state.check.interval.sec | The interval at which an instance checks for state changes, for example the interval a spout uses to check whether activate/deactivate has been invoked | 5 |
| heron.instance.force.exit.timeout.ms | The time to wait before the instance exits forcibly when an uncaught exception occurs | 2000 |
| heron.instance.reconnect.streammgr.interval.sec | The interval, in seconds, at which to reconnect to the stream manager, including the request timeout for connecting | 5 |
| heron.instance.reconnect.streammgr.times | The number of attempts to reconnect to the stream manager | 60 |
| heron.instance.reconnect.metricsmgr.interval.sec | The interval, in seconds, at which to reconnect to the metrics manager, including the request timeout for connecting | 5 |
| heron.instance.reconnect.metricsmgr.times | The number of attempts to reconnect to the metrics manager | 60 |
| heron.instance.metrics.system.sample.interval.sec | The interval, in seconds, at which an instance samples its system metrics, e.g. CPU load | 10 |
| heron.instance.slave.fetch.pplan.interval.sec | The interval (in seconds) at which Heron instances fetch the physical plan from slaves | 1 |
| heron.instance.acknowledgement.nbuckets | For efficient acknowledgement | 10 |
| heron.instance.tuning.expected.bolt.read.queue.size | The expected size of the read queue in a bolt | 8 |
| heron.instance.tuning.expected.bolt.write.queue.size | The expected size of the write queue in a bolt | 8 |
| heron.instance.tuning.expected.spout.read.queue.size | The expected size of the read queue in a spout | 512 |
| heron.instance.tuning.expected.spout.write.queue.size | The expected size of the write queue in a spout | 8 |
| heron.instance.tuning.expected.metrics.write.queue.size | The expected size of the metrics write queue | 8 |
| heron.instance.tuning.current.sample.weight | | 0.8 |
| heron.instance.tuning.interval.ms | The interval (in milliseconds) at which to tune the size of the in and out data queues in an instance | 100 |

packing.yaml

| name | description | default |
|------|-------------|---------|
| heron.class.packing.algorithm | Packing algorithm for packing instances into containers | org.apache.heron.packing.roundrobin.RoundRobinPacking |

scheduler.yaml

| name | description | default |
|------|-------------|---------|
| heron.class.scheduler | Scheduler class for distributing the topology for execution | org.apache.heron.scheduler.kubernetes.KubernetesScheduler |
| heron.class.launcher | Launcher class for submitting and launching the topology | org.apache.heron.scheduler.kubernetes.KubernetesLauncher |
| heron.directory.sandbox.java.home | Location of Java; picked up from the shell environment | $JAVA_HOME |
| heron.kubernetes.scheduler.uri | The URI of the Kubernetes API | |
| heron.scheduler.is.service | Whether to invoke the IScheduler as a library directly | false |
| heron.executor.docker.image | Docker repo for the executor | heron/heron:latest |

stateful.yaml

| name | description | default |
|------|-------------|---------|
| heron.statefulstorage.classname | The type of storage to be used for state checkpointing | org.apache.heron.statefulstorage.localfs.LocalFileSystemStorage |

statemgr.yaml

| name | description | default |
|------|-------------|---------|
| heron.class.state.manager | Local state manager class for managing state in a persistent fashion | org.apache.heron.statemgr.zookeeper.curator.CuratorStateManager |
| heron.statemgr.connection.string | Local state manager connection string | |
| heron.statemgr.root.path | Path of the root address to store the state in a local file system | /heron |
| heron.statemgr.zookeeper.is.initialize.tree | Create the ZooKeeper nodes if they do not exist | True |
| heron.statemgr.zookeeper.session.timeout.ms | Timeout (in milliseconds) to wait before considering the ZooKeeper session dead | 30000 |
| heron.statemgr.zookeeper.connection.timeout.ms | Timeout (in milliseconds) to wait before considering the ZooKeeper connection dead | 30000 |
| heron.statemgr.zookeeper.retry.count | The number of times to retry the ZooKeeper connection | 10 |
| heron.statemgr.zookeeper.retry.interval.ms | Duration (in milliseconds) to wait until the next retry | 10000 |

uploader.yaml

| name | description | default |
|------|-------------|---------|
| heron.class.uploader | Uploader class for transferring the topology files (JARs, TARs, PEXes, etc.) to storage | org.apache.heron.uploader.s3.S3Uploader |
| heron.uploader.s3.bucket | S3 bucket in which topology assets will be stored (if AWS S3 is being used) | |
| heron.uploader.s3.access_key | AWS access key (if AWS S3 is being used) | |
| heron.uploader.s3.secret_key | AWS secret access key (if AWS S3 is being used) | |