To compile Hadoop MapReduce next, do the following:

Step 1) Install dependencies for yarn

See http://svn.apache.org/repos/asf/hadoop/common/branches/MR-279/mapreduce/yarn/README
Make sure the protobuf library is in your library path, or set: export LD_LIBRARY_PATH=/usr/local/lib

Step 2) Checkout

svn checkout http://svn.apache.org/repos/asf/hadoop/common/branches/MR-279/

Step 3) Build common

Go to the common directory:
ant veryclean mvn-install
Step 4) Build HDFS

Go to the hdfs directory:
ant veryclean mvn-install -Dresolvers=internal

Step 5) Build yarn and mapreduce

Go to the mapreduce directory:
export MAVEN_OPTS=-Xmx512m

mvn clean install assembly:assembly
ant veryclean jar jar-test -Dresolvers=internal
In case you want to skip the tests, run:

mvn clean install assembly:assembly -DskipTests
ant veryclean jar jar-test -Dresolvers=internal

You will see a tarball at:
target/hadoop-mapreduce-1.0-SNAPSHOT-bin.tar.gz

Step 6) Untar the tarball in a clean, separate directory,
say HADOOP_YARN_INSTALL
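
Step 6 can be sketched as follows. The install directory name is a hypothetical choice, and the tarball path comes from Step 5 (the tar command is guarded so these lines are safe to paste before the build has finished):

```shell
# Hypothetical install location; pick any clean directory.
export HADOOP_YARN_INSTALL="$HOME/hadoop-yarn-install"
mkdir -p "$HADOOP_YARN_INSTALL"

# Tarball produced by Step 5 (run this from the mapreduce directory).
TARBALL=target/hadoop-mapreduce-1.0-SNAPSHOT-bin.tar.gz
if [ -f "$TARBALL" ]; then
  tar -xzf "$TARBALL" -C "$HADOOP_YARN_INSTALL"
fi
```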

To run Hadoop MapReduce next applications:

Step 7) cd $HADOOP_YARN_INSTALL

Step 8) Export the following variables:

HADOOP_MAPRED_HOME=
HADOOP_COMMON_HOME=
HADOOP_HDFS_HOME=
YARN_HOME=directory where you untarred yarn
HADOOP_CONF_DIR=
YARN_CONF_DIR=$HADOOP_CONF_DIR
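
A minimal sketch of these exports, assuming every component was untarred into the same HADOOP_YARN_INSTALL directory; every path below is an assumption for illustration, so adjust all values to your actual layout:

```shell
# All paths here are hypothetical; edit to match your setup.
export HADOOP_YARN_INSTALL="$HOME/hadoop-yarn-install"   # hypothetical install dir
export HADOOP_MAPRED_HOME="$HADOOP_YARN_INSTALL"
export HADOOP_COMMON_HOME="$HADOOP_YARN_INSTALL"
export HADOOP_HDFS_HOME="$HADOOP_YARN_INSTALL"
export YARN_HOME="$HADOOP_YARN_INSTALL"                  # directory where you untarred yarn
export HADOOP_CONF_DIR="$HADOOP_YARN_INSTALL/conf"       # hypothetical conf location
export YARN_CONF_DIR="$HADOOP_CONF_DIR"
```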

Step 9) bin/yarn-daemon.sh start resourcemanager

Step 10) bin/yarn-daemon.sh start nodemanager

Step 11) bin/yarn-daemon.sh start historyserver
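
Steps 9-11 can also be run as a single loop from $HADOOP_YARN_INSTALL; the guard simply skips the calls when bin/yarn-daemon.sh is not present, so this is a sketch rather than a replacement for the steps above:

```shell
# Start the daemons in the order given in Steps 9-11.
for daemon in resourcemanager nodemanager historyserver; do
  if [ -x bin/yarn-daemon.sh ]; then
    bin/yarn-daemon.sh start "$daemon"
  fi
  started_order="$started_order $daemon"   # records the intended start order
done
```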

Step 12) Create the following symlinks in hadoop-common/lib:

ln -s $HADOOP_YARN_INSTALL/modules/hadoop-mapreduce-client-app-1.0-SNAPSHOT.jar .
ln -s $HADOOP_YARN_INSTALL/modules/yarn-api-1.0-SNAPSHOT.jar .
ln -s $HADOOP_YARN_INSTALL/modules/hadoop-mapreduce-client-common-1.0-SNAPSHOT.jar .
ln -s $HADOOP_YARN_INSTALL/modules/yarn-common-1.0-SNAPSHOT.jar .
ln -s $HADOOP_YARN_INSTALL/modules/hadoop-mapreduce-client-core-1.0-SNAPSHOT.jar .
ln -s $HADOOP_YARN_INSTALL/modules/yarn-server-common-1.0-SNAPSHOT.jar .
ln -s $HADOOP_YARN_INSTALL/modules/hadoop-mapreduce-client-jobclient-1.0-SNAPSHOT.jar .
ln -s $HADOOP_YARN_INSTALL/lib/protobuf-java-2.4.0a.jar .

Step 13) The yarn daemons are now up! But to run mapreduce applications, which now live in user land, you need to set up the nodemanager with the following configuration in your yarn-site.xml before starting the nodemanager:

<property>
  <name>nodemanager.auxiluary.services</name>
  <value>mapreduce.shuffle</value>
</property>

<property>
  <name>nodemanager.aux.service.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

Step 14) You are all set. An example of how to run a mapreduce job:

cd $HADOOP_MAPRED_HOME
ant examples -Dresolvers=internal
$HADOOP_COMMON_HOME/bin/hadoop jar $HADOOP_MAPRED_HOME/build/hadoop-mapred-examples-0.22.0-SNAPSHOT.jar randomwriter -Dmapreduce.job.user.name=$USER -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory -Dmapreduce.randomwriter.bytespermap=10000 -Ddfs.blocksize=536870912 -Ddfs.block.size=536870912 -libjars $HADOOP_YARN_INSTALL/hadoop-mapreduce-1.0-SNAPSHOT/modules/hadoop-mapreduce-client-jobclient-1.0-SNAPSHOT.jar output

The output on the command line should look much like what you see in a JT/TT setup (Hadoop 0.20/0.21).