This documentation provides instructions on how to set up Flink fully automatically with Hadoop 1 or Hadoop 2 on top of a Google Compute Engine cluster. Google's bdutil makes this possible: it starts a cluster and deploys Flink together with Hadoop. To get started, just follow the steps below.
Please follow the instructions on how to set up the Google Cloud SDK.
At the moment, there is no bdutil release yet which includes the Flink extension. However, you can get the latest version of bdutil with Flink support from GitHub:
git clone https://github.com/GoogleCloudPlatform/bdutil.git
After you have downloaded the source, change into the newly created bdutil directory and continue with the next steps.
If you have not done so already, create a bucket for the bdutil config and staging files. A new bucket can be created with gsutil:
gsutil mb gs://<bucket_name>
To deploy Flink with bdutil, adapt at least the following variables in bdutil_env.sh.
CONFIGBUCKET="<bucket_name>"
PROJECT="<compute_engine_project_name>"
NUM_WORKERS=<number_of_workers>
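For illustration, a filled-in bdutil_env.sh fragment could look like the following; all values here are hypothetical placeholders, not defaults:

```shell
# Hypothetical example values -- substitute your own bucket name,
# Compute Engine project, and desired worker count
CONFIGBUCKET="my-flink-bucket"
PROJECT="my-gce-project"
NUM_WORKERS=4
```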
bdutil's Flink extension handles the configuration for you. You can additionally adjust configuration variables in extensions/flink/flink_env.sh. For further configuration options, please take a look at configuring Flink. After changing Flink's configuration, you will have to restart it using bin/stop-cluster and bin/start-cluster.
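As a sketch, a restart after a configuration change could look like this; the install path matches the bdutil deployment layout used elsewhere in this guide, and the exact script names are an assumption:

```shell
# Sketch: restart Flink after editing its configuration.
# /home/hadoop/flink-install is the install path used by the bdutil
# deployment; the stop/start script names are assumptions.
FLINK_HOME=${FLINK_HOME:-/home/hadoop/flink-install}

if [ -x "$FLINK_HOME/bin/stop-cluster.sh" ]; then
    "$FLINK_HOME/bin/stop-cluster.sh"
    "$FLINK_HOME/bin/start-cluster.sh"
else
    # Outside the cluster there is nothing to restart
    echo "Flink not found at $FLINK_HOME; run this inside the cluster"
fi
```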
To bring up the Flink cluster on Google Compute Engine, execute:
./bdutil -e extensions/flink/flink_env.sh deploy
To run a Flink example job, log into the cluster and start the job from the Flink installation directory:
./bdutil shell
cd /home/hadoop/flink-install/bin
./flink run ../examples/flink-java-examples-*-WordCount.jar gs://dataflow-samples/shakespeare/othello.txt gs://<bucket_name>/output
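Once the job has finished, you may want to inspect the result and eventually tear the cluster down again. A hedged sketch, assuming gsutil is installed and the bucket name placeholder is substituted:

```shell
# Placeholder: substitute your actual bucket name before running
BUCKET_NAME="<bucket_name>"

if command -v gsutil >/dev/null 2>&1; then
    # List the WordCount result files written by the example job
    gsutil ls "gs://${BUCKET_NAME}/output"
else
    echo "gsutil not installed; skipping output listing"
fi

# When you are finished, tear the cluster down again from the bdutil
# directory, using the same extension flag as for deploy:
#   ./bdutil -e extensions/flink/flink_env.sh delete
```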