This document shows you how to install Heron on Kubernetes in a step-by-step, “by hand” fashion. An easier way to install Heron on Kubernetes is to use the Helm package manager. For instructions on doing so, see Heron on Kubernetes with Helm.
Heron supports deployment on Kubernetes (sometimes called k8s). Heron deployments on Kubernetes use Docker as the containerization format for Heron topologies and use the Kubernetes API for scheduling.
You can use Heron on Kubernetes in multiple environments: locally using Minikube, on Google Container Engine (GKE), or on any other Kubernetes installation.

To run Heron on Kubernetes, you will need:

- The `kubectl` CLI tool, installed and set up to communicate with your cluster
- The `heron` CLI tool

Any additional requirements will depend on where you're running Heron on Kubernetes.

When deploying to Kubernetes, each Heron container is deployed as a Kubernetes pod inside of a Docker container. If 20 containers are going to be deployed with a topology, for example, then 20 pods will be deployed to your Kubernetes cluster for that topology.
Minikube enables you to run a Kubernetes cluster locally on a single machine.
To run Heron on Minikube you'll need to install Minikube in addition to the other requirements listed above.
First you'll need to start up Minikube using the `minikube start` command. We recommend starting Minikube with at least 7 GB of memory, 5 CPUs, and 20 GB of disk space:

```shell
$ minikube start \
  --memory=7168 \
  --cpus=5 \
  --disk-size=20G
```
There are a variety of Heron components that you'll need to start up separately and in order. Make sure that the necessary pods are up and in the `RUNNING` state before moving on to the next step. You can track the progress of the pods using this command:

```shell
$ kubectl get pods -w
```
Heron uses ZooKeeper for a variety of coordination- and configuration-related tasks. To start up ZooKeeper on Minikube:
```shell
$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/minikube/zookeeper.yaml
```
When running Heron on Kubernetes, Apache BookKeeper is used for things like topology artifact storage. You can start up BookKeeper using this command:
```shell
$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/minikube/bookkeeper.yaml
```
The so-called “Heron tools” include the Heron UI and the Heron Tracker. To start up the Heron tools:
```shell
$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/minikube/tools.yaml
```
The Heron API server is the endpoint that the Heron CLI client uses to interact with the other components of Heron. To start up the Heron API server on Minikube:
```shell
$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/minikube/apiserver.yaml
```
Once all of the components have been successfully started up, you need to open up a proxy port to your Minikube Kubernetes cluster using the `kubectl proxy` command:

```shell
$ kubectl proxy -p 8001
```
Note: All of the following Kubernetes-specific URLs are valid for the Kubernetes 1.10.0 release.
Now, verify that the Heron API server running on Minikube is available using curl:

```shell
$ curl http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy/api/v1/version
```
You should get a JSON response like this:
```json
{
  "heron.build.git.revision" : "ddbb98bbf173fb082c6fd575caaa35205abe34df",
  "heron.build.git.status" : "Clean",
  "heron.build.host" : "ci-server-01",
  "heron.build.time" : "Sat Mar 31 09:27:19 UTC 2018",
  "heron.build.timestamp" : "1522488439000",
  "heron.build.user" : "release-agent",
  "heron.build.version" : "0.17.8"
}
```
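If you want to script this health check, the response body is plain JSON and can be parsed with any JSON library. A minimal sketch in Python, with a trimmed copy of the response above inlined as sample data (in practice you would fetch it from the version endpoint):

```python
import json

# Sample response body from the version endpoint (trimmed from the output above)
response_body = """
{
  "heron.build.git.status": "Clean",
  "heron.build.version": "0.17.8"
}
"""

info = json.loads(response_body)

# A "Clean" git status and a non-empty version suggest a healthy build
assert info["heron.build.git.status"] == "Clean"
print(info["heron.build.version"])
```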
Success! You can now manage Heron topologies on your Minikube Kubernetes installation. To submit an example topology to the cluster:

```shell
$ heron submit kubernetes \
  --service-url=http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy \
  ~/.heron/examples/heron-api-examples.jar \
  org.apache.heron.examples.api.AckingTopology acking
```
You can also track the progress of the Kubernetes pods that make up the topology. When you run `kubectl get pods`, you should see pods with names like `acking-0` and `acking-1`.
Another option is to set the service URL for Heron using the `heron config` command:

```shell
$ heron config kubernetes set service_url \
  http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy
```

That would enable you to manage topologies without setting the `--service-url` flag.
The Heron UI is an in-browser dashboard that you can use to monitor your Heron topologies. It should already be running in Minikube.
You can access Heron UI in your browser by navigating to http://localhost:8001/api/v1/namespaces/default/services/heron-ui:8889/proxy/topologies.
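The long URLs above all follow the Kubernetes API server's service proxy pattern, `/api/v1/namespaces/<namespace>/services/<service>:<port>/proxy/<path>`, as exposed locally by `kubectl proxy`. A small helper like this (hypothetical, not part of Heron) can assemble them:

```python
def service_proxy_url(service, port, path="", namespace="default",
                      proxy_host="localhost", proxy_port=8001):
    """Build a Kubernetes service proxy URL as exposed by `kubectl proxy`."""
    return (f"http://{proxy_host}:{proxy_port}/api/v1/namespaces/{namespace}"
            f"/services/{service}:{port}/proxy{path}")

# The Heron UI topologies page used above:
print(service_proxy_url("heron-ui", 8889, "/topologies"))
# → http://localhost:8001/api/v1/namespaces/default/services/heron-ui:8889/proxy/topologies
```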
You can use Google Container Engine (GKE) to run Kubernetes clusters on Google Cloud Platform.
To run Heron on GKE, you'll need to create a Kubernetes cluster with at least three nodes. This command would create a three-node cluster in your default Google Cloud Platform zone and project:

```shell
$ gcloud container clusters create heron-gke-cluster \
  --machine-type=n1-standard-4 \
  --num-nodes=3
```
You can specify a non-default zone and/or project using the `--zone` and `--project` flags, respectively.
Once the cluster is up and running, enable your local `kubectl` to interact with the cluster by fetching your GKE cluster's credentials:

```shell
$ gcloud container clusters get-credentials heron-gke-cluster
Fetching cluster endpoint and auth data.
kubeconfig entry generated for heron-gke-cluster.
```
Finally, you need to create a Kubernetes secret that specifies the Cloud Platform connection credentials for your service account. First, download your Cloud Platform credentials as a JSON file, say `key.json`. This command will download a key for your service account:

```shell
$ gcloud iam service-accounts keys create key.json \
  --iam-account=YOUR-ACCOUNT
```
Heron on Google Container Engine supports two static file storage options for topology artifacts: Google Cloud Storage and Apache BookKeeper. If you'd like to use BookKeeper instead of Google Cloud Storage, skip to the BookKeeper section below.
To use Google Cloud Storage for artifact storage, you'll need to create a Google Cloud Storage bucket. Here's an example bucket creation command using `gsutil`:

```shell
$ gsutil mb gs://my-heron-bucket
```
Cloud Storage bucket names must be globally unique, so make sure to choose a bucket name carefully. Once you've created a bucket, you need to create a Kubernetes ConfigMap that specifies the bucket name. Here's an example:

```shell
$ kubectl create configmap heron-apiserver-config \
  --from-literal=gcs.bucket=BUCKET-NAME
```
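If you prefer declarative manifests over imperative `kubectl create configmap` commands, the equivalent ConfigMap could be written as YAML and applied with `kubectl apply -f`. A sketch (`my-heron-bucket` is a placeholder bucket name):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: heron-apiserver-config
data:
  gcs.bucket: my-heron-bucket
```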
You can list your current service accounts using the `gcloud iam service-accounts list` command.
Then you can create the secret like this:

```shell
$ kubectl create secret generic heron-gcs-key \
  --from-file=key.json=key.json
```
Once you've created a bucket, a `ConfigMap`, and a secret, you can move on to starting up the various components of your Heron installation.
There are a variety of Heron components that you'll need to start up separately and in order. Make sure that the necessary pods are up and in the `RUNNING` state before moving on to the next step. You can track the progress of the pods using this command:

```shell
$ kubectl get pods -w
```
Heron uses ZooKeeper for a variety of coordination- and configuration-related tasks. To start up ZooKeeper on your GKE cluster:
```shell
$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/gcp/zookeeper.yaml
```
If you're using Google Cloud Storage for topology artifact storage, skip to the Heron tools section below.
To start up an Apache BookKeeper cluster for Heron:
```shell
$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/gcp/bookkeeper.yaml
```
The so-called “Heron tools” include the Heron UI and the Heron Tracker. To start up the Heron tools:
```shell
$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/gcp/tools.yaml
```
The Heron API server is the endpoint that the Heron CLI client uses to interact with the other components of Heron. Heron on Google Container Engine has two separate versions of the Heron API server that you can run depending on which artifact storage system you're using (Google Cloud Storage or Apache BookKeeper).
If you're using Google Cloud Storage:
```shell
$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/gcp/gcs-apiserver.yaml
```
If you're using Apache BookKeeper:
```shell
$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/gcp/bookkeeper-apiserver.yaml
```
Once all of the components have been successfully started up, you need to open up a proxy port to your GKE Kubernetes cluster using the `kubectl proxy` command:

```shell
$ kubectl proxy -p 8001
```
Note: All of the following Kubernetes-specific URLs are valid for the Kubernetes 1.10.0 release.
Now, verify that the Heron API server running on GKE is available using curl:

```shell
$ curl http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy/api/v1/version
```
You should get a JSON response like this:
```json
{
  "heron.build.git.revision" : "bf9fe93f76b895825d8852e010dffd5342e1f860",
  "heron.build.git.status" : "Clean",
  "heron.build.host" : "ci-server-01",
  "heron.build.time" : "Sun Oct 1 20:42:18 UTC 2017",
  "heron.build.timestamp" : "1506890538000",
  "heron.build.user" : "release-agent1",
  "heron.build.version" : "0.16.2"
}
```
Success! You can now manage Heron topologies on your GKE Kubernetes installation. To submit an example topology to the cluster:

```shell
$ heron submit kubernetes \
  --service-url=http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy \
  ~/.heron/examples/heron-api-examples.jar \
  org.apache.heron.examples.api.AckingTopology acking
```
You can also track the progress of the Kubernetes pods that make up the topology. When you run `kubectl get pods`, you should see pods with names like `acking-0` and `acking-1`.
Another option is to set the service URL for Heron using the `heron config` command:

```shell
$ heron config kubernetes set service_url \
  http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy
```

That would enable you to manage topologies without setting the `--service-url` flag.
The Heron UI is an in-browser dashboard that you can use to monitor your Heron topologies. It should already be running in your GKE cluster.
You can access Heron UI in your browser by navigating to http://localhost:8001/api/v1/namespaces/default/services/heron-ui:8889/proxy/topologies.
Although Minikube and Google Container Engine provide two easy ways to get started running Heron on Kubernetes, you can also run Heron on any Kubernetes cluster. The instructions in this section are tailored to non-Minikube, non-GKE Kubernetes installations.
To run Heron on a general Kubernetes installation, you'll need to fulfill the requirements listed at the top of this doc. Once those requirements are met, you can begin starting up the various components that comprise a Heron on Kubernetes installation.
There are a variety of Heron components that you'll need to start up separately and in order. Make sure that the necessary pods are up and in the `RUNNING` state before moving on to the next step. You can track the progress of the pods using this command:

```shell
$ kubectl get pods -w
```
Heron uses ZooKeeper for a variety of coordination- and configuration-related tasks. To start up ZooKeeper on your Kubernetes cluster:
```shell
$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/general/zookeeper.yaml
```
When running Heron on Kubernetes, Apache BookKeeper is used for things like topology artifact storage (unless you're running on GKE). You can start up BookKeeper using this command:
```shell
$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/general/bookkeeper.yaml
```
The so-called “Heron tools” include the Heron UI and the Heron Tracker. To start up the Heron tools:
```shell
$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/general/tools.yaml
```
The Heron API server is the endpoint that the Heron CLI client uses to interact with the other components of Heron. To start up the Heron API server on your Kubernetes cluster:
```shell
$ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/general/apiserver.yaml
```
Once all of the components have been successfully started up, you need to open up a proxy port to your Kubernetes cluster using the `kubectl proxy` command:

```shell
$ kubectl proxy -p 8001
```
Note: All of the following Kubernetes-specific URLs are valid for the Kubernetes 1.10.0 release.
Now, verify that the Heron API server running on your cluster is available using curl:

```shell
$ curl http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy/api/v1/version
```
You should get a JSON response like this:
```json
{
  "heron.build.git.revision" : "ddbb98bbf173fb082c6fd575caaa35205abe34df",
  "heron.build.git.status" : "Clean",
  "heron.build.host" : "ci-server-01",
  "heron.build.time" : "Sat Mar 31 09:27:19 UTC 2018",
  "heron.build.timestamp" : "1522488439000",
  "heron.build.user" : "release-agent",
  "heron.build.version" : "0.17.8"
}
```
Success! You can now manage Heron topologies on your Kubernetes installation. To submit an example topology to the cluster:

```shell
$ heron submit kubernetes \
  --service-url=http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy \
  ~/.heron/examples/heron-api-examples.jar \
  org.apache.heron.examples.api.AckingTopology acking
```
You can also track the progress of the Kubernetes pods that make up the topology. When you run `kubectl get pods`, you should see pods with names like `acking-0` and `acking-1`.
Another option is to set the service URL for Heron using the `heron config` command:

```shell
$ heron config kubernetes set service_url \
  http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy
```

That would enable you to manage topologies without setting the `--service-url` flag.
The Heron UI is an in-browser dashboard that you can use to monitor your Heron topologies. It should already be running in your Kubernetes cluster.
You can access Heron UI in your browser by navigating to http://localhost:8001/api/v1/namespaces/default/services/heron-ui:8889/proxy.
You can configure Heron on Kubernetes using a variety of YAML config files, listed in the sections below.
Configuration for the `heron` CLI tool:

name | description | default |
---|---|---|
heron.package.core.uri | Location of the core Heron package | file:///vagrant/.herondata/dist/heron-core-release.tar.gz |
heron.config.is.role.required | Whether a role is required to submit a topology | False |
heron.config.is.env.required | Whether an environment is required to submit a topology | False |
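Put together, a client config using these keys might look like the following sketch (the values shown are just the defaults from the table above; the core package URI is illustrative):

```yaml
# Sketch of a client config using the keys above (defaults from the table)
heron.package.core.uri: file:///vagrant/.herondata/dist/heron-core-release.tar.gz
heron.config.is.role.required: False
heron.config.is.env.required: False
```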
name | description | default |
---|---|---|
heron.logging.directory | The relative path to the logging directory | log-files |
heron.logging.maximum.size.mb | The maximum log file size (in MB) | 100 |
heron.logging.maximum.files | The maximum number of log files | 5 |
heron.check.tmaster.location.interval.sec | The interval, in seconds, after which to check if the topology master location has been fetched or not | 120 |
heron.logging.prune.interval.sec | The interval, in seconds, at which to prune C++ log files | 300 |
heron.logging.flush.interval.sec | The interval, in seconds, at which to flush C++ log files | 10 |
heron.logging.err.threshold | The threshold level at which to log errors | 3 |
heron.metrics.export.interval.sec | The interval, in seconds, at which different components export metrics to the metrics manager | 60 |
heron.metrics.max.exceptions.per.message.count | The maximum count of exceptions in one MetricPublisherPublishMessage protobuf message | 1024 |
heron.streammgr.cache.drain.frequency.ms | The frequency, in milliseconds, at which to drain the tuple cache in the stream manager | 10 |
heron.streammgr.stateful.buffer.size.mb | The sized-based threshold (in MB) for buffering data tuples waiting for checkpoint markers before giving up | 100 |
heron.streammgr.cache.drain.size.mb | The sized-based threshold (in MB) for draining the tuple cache | 100 |
heron.streammgr.xormgr.rotatingmap.nbuckets | The number of buckets in the rotating map used for efficient acknowledgements | 3 |
heron.streammgr.mempool.max.message.number | The max number of messages in the memory pool for each message type | 512 |
heron.streammgr.client.reconnect.interval.sec | The reconnect interval to other stream managers (in seconds) for the stream manager client | 1 |
heron.streammgr.client.reconnect.tmaster.interval.sec | The reconnect interval to the topology master (in seconds) for the stream manager client | 10 |
heron.streammgr.client.reconnect.tmaster.max.attempts | The max reconnect attempts to tmaster for stream manager client | 30 |
heron.streammgr.network.options.maximum.packet.mb | The maximum packet size (in MB) of the stream manager's network options | 10 |
heron.streammgr.tmaster.heartbeat.interval.sec | The interval (in seconds) at which to send heartbeats | 10 |
heron.streammgr.connection.read.batch.size.mb | The maximum batch size (in MB) for the stream manager to read from socket | 1 |
heron.streammgr.connection.write.batch.size.mb | Maximum batch size (in MB) for the stream manager to write to socket | 1 |
heron.streammgr.network.backpressure.threshold | The number of times Heron should wait to see a buffer full while enqueueing data before declaring the start of backpressure | 3 |
heron.streammgr.network.backpressure.highwatermark.mb | The high-water mark on the number (in MB) that can be left outstanding on a connection | 100 |
heron.streammgr.network.backpressure.lowwatermark.mb | The low-water mark on the number (in MB) that can be left outstanding on a connection | |
heron.tmaster.metrics.collector.maximum.interval.min | The maximum interval (in minutes) for metrics to be kept in the topology master | 180 |
heron.tmaster.establish.retry.times | The maximum number of times to retry establishing connection with the topology master | 30 |
heron.tmaster.establish.retry.interval.sec | The interval (in seconds) at which to retry establishing connection with the topology master | 1 |
heron.tmaster.network.master.options.maximum.packet.mb | Maximum packet size (in MB) of topology master's network options to connect to stream managers | 16 |
heron.tmaster.network.controller.options.maximum.packet.mb | Maximum packet size (in MB) of the topology master's network options to connect to scheduler | 1 |
heron.tmaster.network.stats.options.maximum.packet.mb | Maximum packet size (in MB) of the topology master's network options for stat queries | 1 |
heron.tmaster.metrics.collector.purge.interval.sec | The interval (in seconds) at which the topology master purges metrics from socket | 60 |
heron.tmaster.metrics.collector.maximum.exception | The maximum number of exceptions to be stored in the topology metrics collector, to prevent out-of-memory errors | 256 |
heron.tmaster.metrics.network.bindallinterfaces | Whether the metrics reporter should bind on all interfaces | False |
heron.tmaster.stmgr.state.timeout.sec | The timeout (in seconds) for the stream manager, compared with (current time - last heartbeat time) | 60 |
heron.metricsmgr.network.read.batch.time.ms | The maximum batch time (in milliseconds) for the metrics manager to read from socket | 16 |
heron.metricsmgr.network.read.batch.size.bytes | The maximum batch size (in bytes) to read from socket | 32768 |
heron.metricsmgr.network.write.batch.time.ms | The maximum batch time (in milliseconds) for the metrics manager to write to socket | 16 |
heron.metricsmgr.network.options.socket.send.buffer.size.bytes | The maximum socket send buffer size (in bytes) | 6553600 |
heron.metricsmgr.network.options.socket.received.buffer.size.bytes | The maximum socket received buffer size (in bytes) for the metrics manager's network options | 8738000 |
heron.metricsmgr.network.options.maximum.packetsize.bytes | The maximum packet size that the metrics manager can read | 1048576 |
heron.instance.network.options.maximum.packetsize.bytes | The maximum size of packets that Heron instances can read | 10485760 |
heron.instance.internal.bolt.read.queue.capacity | The queue capacity (number of items) in a bolt for buffering packets read from the stream manager | 128 |
heron.instance.internal.bolt.write.queue.capacity | The queue capacity (number of items) in a bolt for buffering packets to write to the stream manager | 128 |
heron.instance.internal.spout.read.queue.capacity | The queue capacity (number of items) in a spout for buffering packets read from the stream manager | 1024 |
heron.instance.internal.spout.write.queue.capacity | The queue capacity (number of items) in a spout for buffering packets to write to the stream manager | 128 |
heron.instance.internal.metrics.write.queue.capacity | The queue capacity (number of items) for buffering metrics packets to write to the metrics manager | 128 |
heron.instance.network.read.batch.time.ms | The maximum batch time (in ms, time-based) for an instance to read from the stream manager per attempt | 16 |
heron.instance.network.read.batch.size.bytes | The maximum batch size (in bytes, size-based) to read from the stream manager | 32768 |
heron.instance.network.write.batch.time.ms | The maximum batch time (in ms, time-based) for an instance to write to the stream manager per attempt | 16 |
heron.instance.network.write.batch.size.bytes | The maximum batch size (in bytes, size-based) to write to the stream manager | 32768 |
heron.instance.network.options.socket.send.buffer.size.bytes | The maximum socket's send buffer size in bytes | 6553600 |
heron.instance.network.options.socket.received.buffer.size.bytes | The maximum socket's received buffer size in bytes of the instance's network options | 8738000 |
heron.instance.set.data.tuple.capacity | The maximum number of data tuples to batch in a HeronDataTupleSet protobuf | 1024 |
heron.instance.set.data.tuple.size.bytes | The maximum size in bytes of data tuples to batch in a HeronDataTupleSet protobuf | 8388608 |
heron.instance.set.control.tuple.capacity | The maximum number of control tuples to batch in a HeronControlTupleSet protobuf | 1024 |
heron.instance.ack.batch.time.ms | The maximum time in ms for a spout to do acknowledgement per attempt; the ack batch may also end early if there are no more ack tuples to process | 128 |
heron.instance.emit.batch.time.ms | The maximum time in ms for a spout instance to emit tuples per attempt | 16 |
heron.instance.emit.batch.size.bytes | The maximum batch size in bytes for a spout to emit tuples per attempt | 32768 |
heron.instance.execute.batch.time.ms | The maximum time in ms for a bolt instance to execute tuples per attempt | 16 |
heron.instance.execute.batch.size.bytes | The maximum batch size in bytes for a bolt instance to execute tuples per attempt | 32768 |
heron.instance.state.check.interval.sec | The time interval for an instance to check the state change, for example, the interval a spout uses to check whether activate/deactivate is invoked | 5 |
heron.instance.force.exit.timeout.ms | The time to wait before the instance exits forcibly when an uncaught exception occurs | 2000 |
heron.instance.reconnect.streammgr.interval.sec | Interval in seconds to reconnect to the stream manager, including the request timeout in connecting | 5 |
heron.instance.reconnect.streammgr.times | The maximum number of reconnect attempts to the stream manager | 60 |
heron.instance.reconnect.metricsmgr.interval.sec | Interval in seconds to reconnect to the metrics manager, including the request timeout in connecting | 5 |
heron.instance.reconnect.metricsmgr.times | The maximum number of reconnect attempts to the metrics manager | 60 |
heron.instance.metrics.system.sample.interval.sec | The interval in seconds for an instance to sample its system metrics, for instance, CPU load | 10 |
heron.instance.slave.fetch.pplan.interval.sec | The time interval (in seconds) at which Heron instances fetch the physical plan from slaves | 1 |
heron.instance.acknowledgement.nbuckets | The number of buckets used for efficient acknowledgements | 10 |
heron.instance.tuning.expected.bolt.read.queue.size | The expected size on read queue in bolt | 8 |
heron.instance.tuning.expected.bolt.write.queue.size | The expected size on write queue in bolt | 8 |
heron.instance.tuning.expected.spout.read.queue.size | The expected size on read queue in spout | 512 |
heron.instance.tuning.expected.spout.write.queue.size | The expected size on write queue in spout | 8 |
heron.instance.tuning.expected.metrics.write.queue.size | The expected size on metrics write queue | 8 |
heron.instance.tuning.current.sample.weight | | 0.8 |
heron.instance.tuning.interval.ms | Interval in ms to tune the size of in & out data queue in instance | 100 |
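Most of these defaults rarely need changing, but as a sketch, overriding a handful of them in a `heron_internals.yaml`-style file might look like this (the values shown are just the defaults from the table above):

```yaml
# Sketch: overriding a few internal knobs (values are the table defaults)
heron.logging.directory: log-files
heron.logging.maximum.size.mb: 100
heron.metrics.export.interval.sec: 60
heron.streammgr.network.backpressure.threshold: 3
```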
name | description | default |
---|---|---|
heron.class.packing.algorithm | Packing algorithm for packing instances into containers | org.apache.heron.packing.roundrobin.RoundRobinPacking |
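As a sketch, selecting the packing algorithm in a `packing.yaml`-style file (the value shown is the default from the table above):

```yaml
heron.class.packing.algorithm: org.apache.heron.packing.roundrobin.RoundRobinPacking
```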
name | description | default |
---|---|---|
heron.class.scheduler | scheduler class for distributing the topology for execution | org.apache.heron.scheduler.kubernetes.KubernetesScheduler |
heron.class.launcher | launcher class for submitting and launching the topology | org.apache.heron.scheduler.kubernetes.KubernetesLauncher |
heron.directory.sandbox.java.home | location of java - pick it up from shell environment | $JAVA_HOME |
heron.kubernetes.scheduler.uri | The URI of the Kubernetes API | |
heron.scheduler.is.service | Invoke the IScheduler as a library directly | false |
heron.executor.docker.image | docker repo for executor | heron/heron:latest |
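Put together, a `scheduler.yaml`-style sketch using these keys (the values shown are the defaults from the table above; the Kubernetes API URI has no default and is omitted):

```yaml
# Sketch of a Kubernetes scheduler config (values are the table defaults)
heron.class.scheduler: org.apache.heron.scheduler.kubernetes.KubernetesScheduler
heron.class.launcher: org.apache.heron.scheduler.kubernetes.KubernetesLauncher
heron.directory.sandbox.java.home: $JAVA_HOME
heron.scheduler.is.service: false
heron.executor.docker.image: heron/heron:latest
```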
name | description | default |
---|---|---|
heron.statefulstorage.classname | The type of storage to be used for state checkpointing | org.apache.heron.statefulstorage.localfs.LocalFileSystemStorage |
name | description | default |
---|---|---|
heron.class.state.manager | local state manager class for managing state in a persistent fashion | org.apache.heron.statemgr.zookeeper.curator.CuratorStateManager |
heron.statemgr.connection.string | local state manager connection string | |
heron.statemgr.root.path | path of the root address to store the state in a local file system | /heron |
heron.statemgr.zookeeper.is.initialize.tree | create the zookeeper nodes, if they do not exist | True |
heron.statemgr.zookeeper.session.timeout.ms | timeout in ms to wait before considering zookeeper session is dead | 30000 |
heron.statemgr.zookeeper.connection.timeout.ms | timeout in ms to wait before considering zookeeper connection is dead | 30000 |
heron.statemgr.zookeeper.retry.count | the number of times to retry the zookeeper connection before giving up | 10 |
heron.statemgr.zookeeper.retry.interval.ms | duration of time to wait until the next retry | 10000 |
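A `statemgr.yaml`-style sketch using these keys (the connection string has no default, so a placeholder ZooKeeper address is shown; other values are the defaults from the table above):

```yaml
# Sketch of a ZooKeeper state manager config
heron.class.state.manager: org.apache.heron.statemgr.zookeeper.curator.CuratorStateManager
heron.statemgr.connection.string: zookeeper:2181   # placeholder; point at your ZooKeeper
heron.statemgr.root.path: /heron
heron.statemgr.zookeeper.is.initialize.tree: True
```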
name | description | default |
---|---|---|
heron.class.uploader | uploader class for transferring the topology files (jars, tars, PEXes, etc.) to storage | org.apache.heron.uploader.s3.S3Uploader |
heron.uploader.s3.bucket | S3 bucket in which topology assets will be stored (if AWS S3 is being used) | |
heron.uploader.s3.access_key | AWS access key (if AWS S3 is being used) | |
heron.uploader.s3.secret_key | AWS secret access key (if AWS S3 is being used) | |
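An `uploader.yaml`-style sketch using these keys (the bucket and credentials are placeholders; keeping secret keys in plain config files is generally discouraged, so prefer a secrets mechanism where available):

```yaml
# Sketch of an S3 uploader config (bucket and credentials are placeholders)
heron.class.uploader: org.apache.heron.uploader.s3.S3Uploader
heron.uploader.s3.bucket: your-s3-bucket
heron.uploader.s3.access_key: YOUR-ACCESS-KEY
heron.uploader.s3.secret_key: YOUR-SECRET-KEY
```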