Chukwa JIRA-819
12 files changed
tree: 90caa5b503bd99d55172c44ac217d9b7d4bb9e49
  bin/
  conf/
  contrib/
  lib/
  script/
  src/
  test/
  tools/
  CHANGES.txt
  DISCLAIMER.txt
  forrest.properties
  LICENSE.txt
  NOTICE.txt
  pom.xml
  README.md
README.md

# Apache Chukwa Project

Chukwa is an open source data collection system for monitoring large distributed systems. Chukwa is built on top of the Hadoop Distributed File System (HDFS) and Map/Reduce framework and inherits Hadoop’s scalability and robustness. Chukwa also includes a flexible and powerful toolkit for displaying, monitoring and analyzing results to make the best use of the collected data.

## Overview

Log processing was one of the original purposes of MapReduce. Unfortunately, using Hadoop MapReduce to monitor Hadoop can be inefficient: the batch-processing nature of Hadoop MapReduce prevents the system from providing real-time status of the cluster.

We started this journey at the beginning of 2008, and since then many Hadoop components have been built to improve the overall reliability of the system and the timeliness of monitoring. We have adopted HBase to provide lower-latency random reads, and we use in-memory updates and write-ahead logs to improve reliability for root cause analysis.

Logs are generated incrementally across many machines, but Hadoop MapReduce works best on a small number of large files. Merging the reduced output of multiple runs may require additional MapReduce jobs, which creates some overhead for data management on Hadoop.

Chukwa is a Hadoop subproject devoted to bridging the gap between log processing and the Hadoop ecosystem. Chukwa is a scalable, distributed monitoring and analysis system, particularly suited to collecting and processing logs from Hadoop and other distributed systems.

The Chukwa Documentation provides the information you need to get started using Chukwa. The Architecture and Design document provides a high-level view of Chukwa's design.

If you're trying to set up a Chukwa cluster from scratch, the User Guide describes the setup and deployment procedure.

If you want to configure the Chukwa agent process to control what's collected, you should read the Agent Guide. There is also a Pipeline Guide describing the configuration parameters for the ETL processes in the data pipeline.
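
As a minimal illustration of agent configuration (this assumes the file-tailing adaptor and the `conf/initial_adaptors` startup file covered in the Agent Guide; the data type name and log path below are hypothetical), an agent can be told at startup to tail a log file:

```
# conf/initial_adaptors -- one "add" command per line, read when the agent starts
# Format: add <adaptor class> <datatype> <adaptor-specific params> <initial offset>
add filetailer.FileTailingAdaptor SysLog /var/log/messages 0
```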

And if you want to develop Chukwa to monitor other data sources, the [Programming Guide](http://chukwa.apache.org/docs/r0.6.0/programming.html) may be handy for learning about the Chukwa programming API.

If you have more questions, you can ask on the Chukwa mailing lists.

## Building Chukwa

To build Chukwa from source, you need Apache Maven. From the top-level directory of the source tree, run:

```
mvn clean package
```

To check that everything is working, run:

```
mvn test
```

The tests should run successfully and complete in roughly fifteen minutes.
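
If you only want the packaged artifacts and would rather not wait for the test run, Maven's standard `-DskipTests` flag applies here as it does to any Maven build (this is generic Maven behavior, not a Chukwa-specific option):

```
mvn clean package -DskipTests
```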

## Running Chukwa

Users should definitely begin with the Chukwa Quick Start Guide.

If you're impatient, the following is the 30-second explanation:

The minimum you need to run Chukwa is an agent on each machine you're monitoring and a collector to write the collected data to HDFS. The basic command to start an agent is `bin/chukwa agent`.
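
As a rough sketch (this assumes the collector is launched through the same `bin/chukwa` launcher; check the script in your release for the exact subcommands, and make sure the HDFS settings in `conf/` point at your cluster):

```
# On the node that should write collected data to HDFS
# (collector subcommand assumed; verify against bin/chukwa)
bin/chukwa collector

# On each machine you want to monitor
bin/chukwa agent
```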

If you want to start a bunch of agents, you can use the `bin/start-agents.sh` script. This just uses ssh to start agents on a list of machines, given in `conf/agents`. It's exactly parallel to Hadoop's `start-dfs.sh` and `start-mapred.sh` scripts.
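
As a sketch, `conf/agents` simply lists one machine per line (the hostnames below are hypothetical):

```
# conf/agents -- one host to monitor per line
node01.example.com
node02.example.com
node03.example.com
```

Running `bin/start-agents.sh` then ssh-es to each listed host and starts an agent there, so it generally assumes the controlling node can ssh to those machines without a password prompt.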

There are stop scripts that do the exact opposite of the start commands.
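
For example, the agent start script above is paired with a stop script following the same naming pattern (name assumed here; check the `bin/` directory of your release for the exact scripts shipped):

```
# Counterpart to bin/start-agents.sh (assumed name; see bin/)
bin/stop-agents.sh
```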