~~ Licensed to the Apache Software Foundation (ASF) under one or more
~~ contributor license agreements. See the NOTICE file distributed with
~~ this work for additional information regarding copyright ownership.
~~ The ASF licenses this file to You under the Apache License, Version 2.0
~~ (the "License"); you may not use this file except in compliance with
~~ the License. You may obtain a copy of the License at
~~
~~ http://www.apache.org/licenses/LICENSE-2.0
~~
~~ Unless required by applicable law or agreed to in writing, software
~~ distributed under the License is distributed on an "AS IS" BASIS,
~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~~ See the License for the specific language governing permissions and
~~ limitations under the License.
---
Hadoop Commands Guide
---
---
${maven.build.timestamp}
%{toc}
Overview
All hadoop commands are invoked by the <<<bin/hadoop>>> script. Running the
hadoop script without any arguments prints the description for all
commands.
Usage: <<<hadoop [--config confdir] [COMMAND] [GENERIC_OPTIONS] [COMMAND_OPTIONS]>>>
Hadoop has an option parsing framework that handles generic options and
then runs the requested command class.
*-----------------------+---------------+
|| COMMAND_OPTION || Description
*-----------------------+---------------+
| <<<--config confdir>>>| Overrides the default configuration directory. Default is <<<${HADOOP_HOME}/conf>>>.
*-----------------------+---------------+
| GENERIC_OPTIONS | The common set of options supported by multiple commands.
*-----------------------+---------------+
| COMMAND_OPTIONS | Various commands with their options are described in the following sections. The commands have been grouped into User Commands and Administration Commands.
*-----------------------+---------------+
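For illustration, an alternate configuration directory can be supplied
ahead of the command name; the directory path below is a placeholder, not
a standard location:

```shell
# Run a command against an alternate configuration directory.
# The path /etc/hadoop/conf.cluster1 is a placeholder for illustration.
hadoop --config /etc/hadoop/conf.cluster1 version
```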
Generic Options
The following options are supported by {{dfsadmin}}, {{fs}}, {{fsck}},
{{job}} and {{fetchdt}}. Applications should implement
{{{../../api/org/apache/hadoop/util/Tool.html}Tool}} to support
GenericOptions.
*------------------------------------------------+-----------------------------+
|| GENERIC_OPTION || Description
*------------------------------------------------+-----------------------------+
|<<<-conf \<configuration file\> >>> | Specify an application
| configuration file.
*------------------------------------------------+-----------------------------+
|<<<-D \<property\>=\<value\> >>> | Use value for given property.
*------------------------------------------------+-----------------------------+
|<<<-jt \<local\> or \<jobtracker:port\> >>> | Specify a job tracker.
| Applies only to job.
*------------------------------------------------+-----------------------------+
|<<<-files \<comma separated list of files\> >>> | Specify comma separated files
| to be copied to the map
| reduce cluster. Applies only
| to job.
*------------------------------------------------+-----------------------------+
|<<<-libjars \<comma separated list of jars\> >>>| Specify comma separated jar
| files to include in the
| classpath. Applies only to
| job.
*------------------------------------------------+-----------------------------+
|<<<-archives \<comma separated list of archives\> >>> | Specify comma separated
| archives to be unarchived on
| the compute machines. Applies
| only to job.
*------------------------------------------------+-----------------------------+
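As a sketch of how these options fit together, a hypothetical job
submission might combine several generic options ahead of the
command-specific arguments; the jar, class, file, and path names below are
all placeholders:

```shell
# Generic options are given after the main class but before the
# command-specific arguments (here, the input and output paths).
# All names in this invocation are placeholders for illustration.
hadoop jar myjob.jar MyJobDriver \
    -conf my-cluster.xml \
    -D mapred.reduce.tasks=4 \
    -files cache-data.txt \
    -libjars dep1.jar,dep2.jar \
    /user/me/input /user/me/output
```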
User Commands
Commands useful for users of a Hadoop cluster.
* <<<archive>>>
Creates a hadoop archive. More information can be found at
{{{../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/HadoopArchives.html}
Hadoop Archives Guide}}.
* <<<distcp>>>
Copies files or directories recursively. More information can be found at
{{{../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistCp.html}
Hadoop DistCp Guide}}.
* <<<fs>>>
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#dfs}<<<hdfs dfs>>>}}
instead.
* <<<fsck>>>
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#fsck}<<<hdfs fsck>>>}}
instead.
* <<<fetchdt>>>
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#fetchdt}
<<<hdfs fetchdt>>>}} instead.
* <<<jar>>>
Runs a jar file. Users can bundle their MapReduce code in a jar file and
execute it using this command.
Usage: <<<hadoop jar <jar> [mainClass] args...>>>
Streaming jobs are also run via this command; for examples, see the
Streaming examples. The word count example is likewise run with the jar
command; see the Wordcount example.
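A hypothetical invocation might look as follows; the examples jar file
name varies by release, and the HDFS paths are placeholders:

```shell
# Run the bundled WordCount example. The examples jar file name and the
# HDFS input/output paths below are placeholders for illustration.
hadoop jar hadoop-examples.jar wordcount /user/me/input /user/me/output
```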
* <<<job>>>
Deprecated. Use
{{{../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html#job}
<<<mapred job>>>}} instead.
* <<<pipes>>>
Deprecated. Use
{{{../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html#pipes}
<<<mapred pipes>>>}} instead.
* <<<queue>>>
Deprecated. Use
{{{../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html#queue}
<<<mapred queue>>>}} instead.
* <<<version>>>
Prints the version.
Usage: <<<hadoop version>>>
* <<<CLASSNAME>>>
The hadoop script can be used to invoke any class.
Usage: <<<hadoop CLASSNAME>>>
Runs the class named <<<CLASSNAME>>>.
* <<<classpath>>>
Prints the class path needed to get the Hadoop jar and the required
libraries. If called without arguments, it prints the classpath set up by
the command scripts, which is likely to contain wildcards in the classpath
entries. Additional options print the classpath after wildcard expansion or
write the classpath into the manifest of a jar file. The latter is useful in
environments where wildcards cannot be used and the expanded classpath exceeds
the maximum supported command line length.
Usage: <<<hadoop classpath [--glob|--jar <path>|-h|--help]>>>
*-----------------+-----------------------------------------------------------+
|| COMMAND_OPTION || Description
*-----------------+-----------------------------------------------------------+
| --glob | expand wildcards
*-----------------+-----------------------------------------------------------+
| --jar <path> | write classpath as manifest in jar named <path>
*-----------------+-----------------------------------------------------------+
| -h, --help | print help
*-----------------+-----------------------------------------------------------+
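For illustration, the three modes can be invoked as follows; the jar file
name is a placeholder:

```shell
# Classpath as set up by the command scripts (may contain wildcards).
hadoop classpath

# Classpath after wildcard expansion.
hadoop classpath --glob

# Write the expanded classpath into the manifest of the named jar, for
# environments where the command line would otherwise be too long.
# The jar file name classpath.jar is a placeholder for illustration.
hadoop classpath --jar classpath.jar
```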
Administration Commands
Commands useful for administrators of a Hadoop cluster.
* <<<balancer>>>
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#balancer}
<<<hdfs balancer>>>}} instead.
* <<<daemonlog>>>
Gets or sets the log level of a daemon.
Usage: <<<hadoop daemonlog -getlevel <host:port> <name> >>>
Usage: <<<hadoop daemonlog -setlevel <host:port> <name> <level> >>>
*------------------------------+-----------------------------------------------------------+
|| COMMAND_OPTION || Description
*------------------------------+-----------------------------------------------------------+
| -getlevel <host:port> <name> | Prints the log level of the daemon running at
| <host:port>. This command internally connects
| to http://<host:port>/logLevel?log=<name>
*------------------------------+-----------------------------------------------------------+
| -setlevel <host:port> <name> <level> | Sets the log level of the daemon
| running at <host:port>. This command internally
| connects to http://<host:port>/logLevel?log=<name>
*------------------------------+-----------------------------------------------------------+
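As the table notes, daemonlog wraps an HTTP GET against the daemon's
logLevel servlet, so the same query can be issued directly; the host:port
and logger name below are placeholders:

```shell
# A hypothetical daemonlog invocation (host:port and logger name are
# placeholders for illustration):
#
#   hadoop daemonlog -getlevel nn.example.com:50070 \
#       org.apache.hadoop.hdfs.server.namenode.NameNode
#
# The URL the command contacts internally can be built directly:
host_port="nn.example.com:50070"
log_name="org.apache.hadoop.hdfs.server.namenode.NameNode"
echo "http://${host_port}/logLevel?log=${log_name}"
```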
* <<<datanode>>>
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#datanode}
<<<hdfs datanode>>>}} instead.
* <<<dfsadmin>>>
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#dfsadmin}
<<<hdfs dfsadmin>>>}} instead.
* <<<namenode>>>
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#namenode}
<<<hdfs namenode>>>}} instead.
* <<<secondarynamenode>>>
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#secondarynamenode}
<<<hdfs secondarynamenode>>>}} instead.