eagle-external/hadoop_jmx_collector/README.md

Hadoop JMX Collector

These scripts collect Hadoop JMX metrics and send them to stdout or Kafka. Tested with Python 2.7.

How to use it

  1. Edit the configuration file (a JSON file). For example:

       {
        "env": {
         "site": "sandbox"
        },
        "input": {
         "component": "namenode",
         "port": "50070",
         "https": false
        },
        "filter": {
         "monitoring.group.selected": ["hadoop", "java.lang"]
        },
        "output": {
        }
       }
    
  2. Run the scripts

    For general use:

    python hadoop_jmx_kafka.py > 1.txt
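
    Putting the two steps together, the sketch below (illustrative only; the scripts' actual code in metric_collector.py may differ) shows how a config like the sample drives the collection: the "input" section determines the daemon's /jmx URL, and Hadoop daemons serve their metrics as a JSON list of beans at that endpoint.

    ```python
    import json

    # Illustrative sketch; not the collector's exact logic.
    # Parse a config shaped like the sample above.
    config = json.loads("""
    {
      "env": {"site": "sandbox"},
      "input": {"component": "namenode", "port": "50070", "https": false},
      "filter": {"monitoring.group.selected": ["hadoop", "java.lang"]},
      "output": {}
    }
    """)

    # The "input" section identifies the daemon's JMX endpoint: Hadoop
    # daemons expose metrics as JSON at http(s)://<host>:<port>/jmx.
    scheme = "https" if config["input"]["https"] else "http"
    jmx_url = "%s://localhost:%s/jmx" % (scheme, config["input"]["port"])

    # A trimmed example of what the endpoint returns (values are made up):
    beans = json.loads("""
    {
      "beans": [
        {"name": "Hadoop:service=NameNode,name=FSNamesystem",
         "CapacityTotal": 1000, "CapacityUsed": 250},
        {"name": "java.lang:type=Memory"}
      ]
    }
    """)["beans"]

    print(jmx_url)  # http://localhost:50070/jmx
    print([b["name"] for b in beans])
    ```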

Edit eagle-collector.conf

  • input

    “port” defines the Hadoop service port, e.g. 50070 for the namenode or 60010 for the HBase master.

  • filter

    “monitoring.group.selected” filters the beans to collect: only beans belonging to the listed groups are kept.
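
    As a sketch of the idea (the exact matching logic in metric_collector.py may differ), a bean can be kept or dropped by comparing the domain part of its MBean name — the text before the colon — against the selected groups:

    ```python
    # Illustrative group filter; the scripts' real matching may differ.
    def bean_selected(bean_name, selected_groups):
        # MBean names look like "Hadoop:service=NameNode,name=FSNamesystem"
        # or "java.lang:type=Memory"; the domain is the part before ":".
        domain = bean_name.split(":", 1)[0].lower()
        return any(domain.startswith(g.lower()) for g in selected_groups)

    selected = ["hadoop", "java.lang"]
    print(bean_selected("Hadoop:service=NameNode,name=FSNamesystem", selected))  # True
    print(bean_selected("com.sun.management:type=HotSpotDiagnostic", selected))  # False
    ```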

  • output

    If it is left empty, output goes to stdout by default.

      "output": {}
    

    It also supports Kafka as its output.

      "output": {
        "kafka": {
          "topic": "test_topic",
          "brokerList": [ "sandbox.hortonworks.com:6667"]
        }
      }
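
    With Kafka configured, each collected metric is sent to the given topic as a JSON message. The payload below is an assumption for illustration, not the collector's exact schema:

    ```python
    import json

    # Illustrative message payload; the field names here are assumptions,
    # not the collector's exact schema.
    metric = {
        "site": "sandbox",
        "component": "namenode",
        "host": "sandbox.hortonworks.com",
        "metric": "hadoop.namenode.fsnamesystem.capacityused",
        "timestamp": 1400000000000,
        "value": 250
    }
    message = json.dumps(metric, sort_keys=True)
    # A Kafka producer client would then send `message` to "test_topic"
    # on the brokers listed in "brokerList".
    print(message)
    ```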