PIRK-81 Update IBM Bluemix instructions, and fix typo. - closes apache/incubator-pirk#117
diff --git a/_devtools/git-hooks/post-commit b/_devtools/git-hooks/post-commit
index 46b8eeb..3786628 100755
--- a/_devtools/git-hooks/post-commit
+++ b/_devtools/git-hooks/post-commit
@@ -1,4 +1,4 @@
-#! /usr/bin/env bash
+#!/usr/bin/env bash
 
 build_jekyll_site() {
   # $1 is the expected source branch
diff --git a/cloud_instructions.md b/cloud_instructions.md
index 00340d4..47f958a 100644
--- a/cloud_instructions.md
+++ b/cloud_instructions.md
@@ -1,5 +1,5 @@
 ---
-title: Running Pirk in Cloud Environments (GCP, AWS, Azure)
+title: Running Pirk in Cloud Environments (GCP, AWS, Azure, Bluemix)
 nav: nav_commercial_cloud
 ---
 
@@ -107,7 +107,7 @@
           {
             "Classification": "yarn-site",
             "Properties": {
-              "yarn.nodemanager.aux-services": "mapreduce_shuffle",
+              "yarn.nodemanager.aux-services": "mapreduce_shuffle,spark_shuffle",
               "yarn.nodemanager.aux-services.mapreduce_shuffle.class": "org.apache.hadoop.mapred.ShuffleHandler"
             }
           }
@@ -140,4 +140,29 @@
 11. Now on the cluster, you can run the distributed tests:  
 `hadoop jar apache-pirk-0.2.0-incubating-SNAPSHOT-exe.jar org.apache.pirk.test.distributed.DistributedTestDriver -j $PWD/apache-pirk-0.2.0-incubating-SNAPSHOT-exe.jar -t 1:J`
 12. When you are done working with your cluster, terminate it:  
-`aws emr terminate-clusters --cluster-ids `**`$cid`**
\ No newline at end of file
+`aws emr terminate-clusters --cluster-ids `**`$cid`**
+
+## IBM Bluemix
+1. [Sign up](https://console.ng.bluemix.net/registration/) for a free Bluemix account.
+2. From the Bluemix [catalog](https://console.ng.bluemix.net/catalog/), open the "BigInsights for Apache Hadoop" service (found in the Data and Analytics section) and click "Create".
+3. Click "Open" to see the cluster list, and from there create a new cluster, e.g.
+    * `cluster name =` *`test-cluster`*, `user name = pirk`, `password =` *`password`*.
+    * Increase the number of data nodes to 5 (the maximum number available on the basic plan).
+    * Change the "IBM Open Platform Version" to "IOP 4.3 Technical Preview" (required for Spark 2.0).
+    * Select "Spark2" as an optional component.
+    * Click "Create".
+4. Select the test cluster from the Cluster List and take note of the SSH host name,
+e.g. `bi-hadoop-prod-4174.bi.services.us-south.bluemix.net`
+5. Now you can run the distributed tests by copying the Pirk jar file and executing it in Bluemix, e.g.
+
+        $ scp target/apache-pirk-0.2.0-incubating-SNAPSHOT-exe.jar pirk@bi-hadoop-prod-4174.bi.services.us-south.bluemix.net:
+        pirk@bi-hadoop-prod-4174.bi.services.us-south.bluemix.net's password: **********
+        apache-pirk-0.2.0-incubating-SNAPSHOT-exe.jar                                                       100%  145MB  10.3MB/s   00:14 
+
+        $ ssh pirk@bi-hadoop-prod-4174.bi.services.us-south.bluemix.net
+        pirk@bi-hadoop-prod-4174.bi.services.us-south.bluemix.net's password: **********
+
+        -bash-4.1$ hadoop jar apache-pirk-0.2.0-incubating-SNAPSHOT-exe.jar org.apache.pirk.test.distributed.DistributedTestDriver -j apache-pirk-0.2.0-incubating-SNAPSHOT-exe.jar
+6. There is no need to stop the cluster, as the basic service plan is free during the beta; however, the cluster will need to be recreated every two weeks.
+
diff --git a/for_developers.md b/for_developers.md
index c944a31..d344357 100644
--- a/for_developers.md
+++ b/for_developers.md
@@ -94,7 +94,7 @@
 
 Specific distributed test suites may be run via providing corresponding command line options. The available options are given by the following command:
 
-	hadoop jar <pirkJar> org.apache.pirk.test.distributed.DistributedTestDriver —help
+	hadoop jar <pirkJar> org.apache.pirk.test.distributed.DistributedTestDriver --help
 
 The Pirk functional tests using Spark run via the [SparkLauncher](https://spark.apache.org/docs/1.6.0/api/java/org/apache/spark/launcher/package-summary.html) invoked through the ‘hadoop jar’ command (not by directly running with ‘spark-submit’).
 To run successfully, the ‘spark.home’ property must be set correctly in the ‘pirk.properties’ file; ‘spark.home’ is the directory containing ‘bin/spark-submit’.
diff --git a/index.md b/index.md
index 69a5901..cc89628 100755
--- a/index.md
+++ b/index.md
@@ -83,7 +83,7 @@
 
 ## Community
 
-Please check out our [community]({{ site.baseurl }}/get_involved) section.
+Please check out our [community]({{ site.baseurl }}/get_involved_pirk) section.
 
 ## Roadmap
 
@@ -92,4 +92,4 @@
 ## Disclaimer
 
 Apache Pirk (incubating) is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.
- 
\ No newline at end of file
+