~~ Licensed to the Apache Software Foundation (ASF) under one or more
~~ contributor license agreements. See the NOTICE file distributed with
~~ this work for additional information regarding copyright ownership.
~~ The ASF licenses this file to You under the Apache License, Version 2.0
~~ (the "License"); you may not use this file except in compliance with
~~ the License. You may obtain a copy of the License at
~~
~~     http://www.apache.org/licenses/LICENSE-2.0
~~
~~ Unless required by applicable law or agreed to in writing, software
~~ distributed under the License is distributed on an "AS IS" BASIS,
~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~~ See the License for the specific language governing permissions and
~~ limitations under the License.
Chukwa User Guide
This chapter is the detailed guide to Chukwa configuration.
Please read this chapter carefully and ensure that all requirements have
been satisfied. Failure to do so will cause you (and us) grief debugging
strange errors and/or data loss.
Chukwa uses the same configuration system as Hadoop. To configure a
deployment, edit a file of environment variables in
<etc/chukwa/chukwa-env.sh> -- this
configuration is used mostly by the launcher shell scripts getting the
cluster off the ground -- and then add configuration to an XML file to do
things like override Chukwa defaults, tell Chukwa what Filesystem to use,
or the location of the HBase configuration.
When running in distributed mode, after you make an edit to a Chukwa
configuration file, make sure you copy the content of the conf directory to
all nodes of the cluster. Chukwa will not do this for you. Use rsync.
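For example, assuming the monitored hosts are node1 and node2 (placeholder
names) and Chukwa is installed under /opt/chukwa on each of them:

---
# push the local Chukwa configuration to every node in the cluster
for host in node1 node2; do
  rsync -az etc/chukwa/ $host:/opt/chukwa/etc/chukwa/
done
---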
Pre-requisites

Chukwa should work on any POSIX platform, but GNU/Linux is the only
production platform that has been tested extensively. Chukwa has also been used
successfully on Mac OS X, which several members of the Chukwa team use for
development.
The only absolute software requirements are Java 1.6 or better,
ZooKeeper {{${zookeeperVersion}}}, HBase {{${hbaseVersion}}} and Hadoop {{${hadoopVersion}}}.
The Chukwa cluster management scripts rely on <ssh>; these scripts, however,
are not required if you have some alternate mechanism for starting and stopping
daemons.
Installing Chukwa
A minimal Chukwa deployment has four components:
* A Hadoop and HBase cluster on which Chukwa will process data (referred to as the Chukwa cluster).
* One or more agent processes that send monitoring data to HBase.
The nodes with active agent processes are referred to as the monitored
source nodes.
* Data analytics scripts, which summarize Hadoop cluster health.
* HICC, the Chukwa visualization tool.
[./images/chukwa_architecture.png] Chukwa Components
* First Steps
* Obtain a copy of Chukwa. You can find the latest release on the
{{{} Chukwa release page}}.
* Un-tar the release, via <tar xzf>.
* Make sure a copy of Chukwa is available on each node being monitored.
* We refer to the directory containing Chukwa as <CHUKWA_HOME>. It may
be helpful to set <CHUKWA_HOME> explicitly in your environment,
but Chukwa does not require that you do so.
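For example, in a Bourne-style shell (the install path below is just an
illustration):

---
# optional: make CHUKWA_HOME explicit for your shell session
export CHUKWA_HOME=/opt/chukwa
---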
* General Configuration
* Make sure that <JAVA_HOME> is set correctly and points to a Java 1.6 JRE.
It's generally best to set this in <CHUKWA_HOME/etc/chukwa/chukwa-env.sh>.

* In <CHUKWA_HOME/etc/chukwa/chukwa-env.sh>, set <CHUKWA_LOG_DIR> and
<CHUKWA_PID_DIR> to the directories where Chukwa should store its
console logs and pid files. The pid directory must not be shared between
different Chukwa instances: it should be local, not NFS-mounted.
* Optionally, set CHUKWA_IDENT_STRING. This string is
used to name Chukwa's own console log files.
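Putting this together, a minimal <chukwa-env.sh> fragment might look like the
following sketch; every path here is an assumption to adapt to your
installation:

---
# etc/chukwa/chukwa-env.sh (sketch; adjust all paths for your site)
export JAVA_HOME=/usr/lib/jvm/java         # must point at a Java runtime
export CHUKWA_LOG_DIR=/var/log/chukwa      # console logs
export CHUKWA_PID_DIR=/var/run/chukwa      # local disk only, never NFS
export CHUKWA_IDENT_STRING=demo            # names Chukwa's own log files
---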
Agents

Agents are the Chukwa processes that actually produce data. This section
describes how to configure and run them. More details are available in the
{{{./agent.html} Agent configuration guide}}.
* Configuration
First, edit <$CHUKWA_HOME/etc/chukwa/chukwa-env.sh>. In addition to
the general directions given above, you should set <HADOOP_CONF_DIR> and
<HBASE_CONF_DIR>. These should point to the configuration of the Hadoop and
HBase deployment Chukwa will use to store collected data. You will get a
version mismatch error if this is configured incorrectly.
Edit the <CHUKWA_HOME/etc/chukwa/initial_adaptors> configuration file.
This is where you tell Chukwa what log files to monitor. See
{{{./agent.html#Adaptors} the adaptor configuration guide}} for
a list of available adaptors.
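As an illustration, each line of <initial_adaptors> registers one adaptor;
the data type and file path below are made-up examples:

---
# tail a log file, tagging its chunks with the FooData data type,
# starting from offset 0
add filetailer.FileTailingAdaptor FooData /tmp/foo 0
---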
There are a number of optional settings in
<$CHUKWA_HOME/etc/chukwa/chukwa-agent-conf.xml>:

* The most important of these is the cluster/group name that identifies the
monitored source nodes. This value is stored in each Chunk of collected data;
you can therefore use it to distinguish data coming from different groups of
machines.

---
<property>
  <name>chukwaAgent.tags</name>
  <value>cluster="demo"</value>
  <description>The cluster's name for this agent</description>
</property>
---
* Another important option is <chukwaAgent.checkpoint.dir>.
This is the directory Chukwa will use for its periodic checkpoints of
running adaptors. It <<must not>> be a shared directory; use a local
directory, not an NFS mount.
* Setting the option <chukwaAgent.control.remote> will disallow remote
connections to the agent control socket.
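A sketch of how the checkpoint setting might look in
<chukwa-agent-conf.xml>; the directory value is a placeholder:

---
<property>
  <name>chukwaAgent.checkpoint.dir</name>
  <value>/var/chukwa/checkpoints</value>  <!-- local disk, never NFS -->
</property>
---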
** Use HBase For Data Storage

* Configuring the pipeline: set HBaseWriter as your writer, or add it
to the pipeline if you are chaining several writers together. A sketch
follows below.
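This sketch assumes the stock <HBaseWriter> class name and the
<chukwa.pipeline> property; verify both against your Chukwa release:

---
<property>
  <name>chukwa.pipeline</name>
  <value>org.apache.hadoop.chukwa.datacollection.writer.hbase.HBaseWriter</value>
</property>
---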
** Use HDFS For Data Storage
The one mandatory configuration parameter is <writer.hdfs.filesystem>.
This should be set to the HDFS root URL on which Chukwa will store data.
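For instance (the URL below is a placeholder for your NameNode):

---
<property>
  <name>writer.hdfs.filesystem</name>
  <value>hdfs://namenode.example.com:9000/</value>
</property>
---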
Various optional configuration options are described in
{{{./pipeline.html} the pipeline configuration guide}}.
* Starting, Stopping, And Monitoring
To run an agent process on a single node, use:
sbin/chukwa-daemon.sh start agent
Typically, agents run as daemons. The script <bin/start-agents.sh>
will ssh to each machine listed in <etc/chukwa/agents> and start an agent,
running in the background. The script <bin/stop-agents.sh>
does the reverse.
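For example, with a hypothetical two-node deployment:

---
# etc/chukwa/agents lists one monitored host per line, e.g.:
#   node1.example.com
#   node2.example.com
bin/start-agents.sh   # ssh to each listed host and launch an agent
bin/stop-agents.sh    # shut all the agents down again
---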
You can, of course, use any other daemon-management system you like.
For instance, <tools/init.d> includes init scripts for running
Chukwa agents.
To check if an agent is working properly, you can telnet to the control
port (9093 by default) and hit "enter". You will get a status message if
the agent is running normally.
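For example (9093 is the default control port):

---
telnet localhost 9093
# press Enter; a healthy agent replies with a status message
---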
Configuring Hadoop For Monitoring
One of the key goals for Chukwa is to collect logs from Hadoop clusters.
This section describes how to configure Hadoop to send its logs to Chukwa.
Note that these directions require Hadoop 0.205.0+. Earlier versions of
Hadoop do not have the hooks that Chukwa requires in order to grab
MapReduce job logs.
The Hadoop configuration files are located in <HADOOP_HOME/etc/hadoop>.
To setup Chukwa to collect logs from Hadoop, you need to change some of the
Hadoop configuration files.
* Copy the CHUKWA_HOME/etc/chukwa/hadoop-log4j.properties file to HADOOP_CONF_DIR/log4j.properties

* Copy the CHUKWA_HOME/etc/chukwa/hadoop-metrics2.properties file to HADOOP_CONF_DIR/hadoop-metrics2.properties

* Edit the HADOOP_CONF_DIR/log4j.properties file and change $CHUKWA_LOG_DIR to your actual Chukwa log directory (i.e., CHUKWA_HOME/var/log)
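Concretely, under the assumptions above:

---
# push Chukwa's Hadoop-side configuration into the Hadoop conf directory
cp $CHUKWA_HOME/etc/chukwa/hadoop-log4j.properties \
   $HADOOP_CONF_DIR/log4j.properties
cp $CHUKWA_HOME/etc/chukwa/hadoop-metrics2.properties \
   $HADOOP_CONF_DIR/hadoop-metrics2.properties
# then edit log4j.properties, replacing $CHUKWA_LOG_DIR with the real path
---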
Setup HBase Table
Chukwa is moving towards a model of using HBase to store metrics data to
allow real-time charting. This section describes how to configure HBase and
HICC to work together.
* Presently, we support HBase 0.96+. If you have older HBase jars anywhere,
they will cause linkage errors. Check for and remove them.
* Setting up tables:
hbase/bin/hbase shell < etc/chukwa/hbase.schema
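To confirm that the tables were created, you can list them from the same
shell:

---
echo "list" | hbase/bin/hbase shell   # should print the Chukwa tables
---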
HICC

* Configuration

Edit <etc/chukwa/auth.conf> and add authorized users to the list.
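As far as we understand the stock layout, <auth.conf> follows the Jetty
realm-properties format; the entry below is a placeholder only:

---
# assumed format: username: password[,role ...]
# (change the password before deploying)
admin: admin, user
---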
* Starting, Stopping, And Monitoring
The Hadoop Infrastructure Care Center (HICC) is the Chukwa web user interface.
HICC is started by invoking
sbin/chukwa-daemon.sh start hicc
Once the web container with HICC has been started, point your favorite
browser to:

http://<server>:4080/hicc/
Troubleshooting Tips
* UNIX Processes For Chukwa Data Processes
Chukwa data processors are scheduled for periodic execution rather than
running continuously, so they are not always visible in the process list.
* Emergency Shutdown Procedure
If the system is not functioning properly and you cannot find an answer in
the Administration Guide, execute the kill command. The current state of
the java process will be written to the log files. You can analyze
these files to determine the cause of the problem.
kill -3 <pid>
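For example, assuming the agent's pid file sits in <CHUKWA_PID_DIR> (the
file name below is an assumption, patterned on Hadoop-style daemon scripts):

---
# request a thread dump from a running agent
kill -3 $(cat $CHUKWA_PID_DIR/chukwa-$USER-agent.pid)
---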