Griffin docker images are pre-built on Docker Hub; you can pull them to try griffin in docker.

First, increase the `vm.max_map_count` of your machine, which is required by elasticsearch:

```
sysctl -w vm.max_map_count=262144
```
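Note that `sysctl -w` applies only until the next reboot. To persist the setting (assuming a standard Linux setup that reads `/etc/sysctl.conf` at boot), you can additionally append it there:

```shell
# persist vm.max_map_count across reboots (run as root)
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
sysctl -p   # reload settings from /etc/sysctl.conf
```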
Pull the images:

```
docker pull bhlx3lyx7/griffin_spark2:0.2.0
docker pull bhlx3lyx7/elasticsearch
docker pull bhlx3lyx7/kafka
docker pull zookeeper:3.5
```

Or, if you are in China, pull the images faster through mirror acceleration:

```
docker pull registry.docker-cn.com/bhlx3lyx7/griffin_spark2:0.2.0
docker pull registry.docker-cn.com/bhlx3lyx7/elasticsearch
docker pull registry.docker-cn.com/bhlx3lyx7/kafka
docker pull registry.docker-cn.com/zookeeper:3.5
```

These docker images make up the griffin environment:
- `bhlx3lyx7/griffin_spark2`: contains MySQL, Hadoop, Hive, Spark, Livy, the griffin service, griffin measure, and some prepared demo data. It works as a single-node Spark cluster, providing the Spark engine and the griffin service.
- `bhlx3lyx7/elasticsearch`: based on the official elasticsearch image, with some configuration added to enable CORS requests, providing the elasticsearch service for metrics persistence.
- `bhlx3lyx7/kafka`: contains Kafka 0.8 and some demo streaming data, providing the streaming data source in streaming mode.
- `zookeeper:3.5`: the official zookeeper image, providing the zookeeper service in streaming mode.

Start the cluster in batch mode:

```
docker-compose -f docker-compose-batch.yml up -d
```
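For orientation, a `docker-compose-batch.yml` along these lines wires the two batch-mode images together. The internal ports and container name below are assumptions for illustration; use the compose file shipped with the griffin docker setup rather than this sketch:

```yaml
# illustrative sketch only, not the shipped docker-compose-batch.yml
version: "2"
services:
  griffin:
    image: bhlx3lyx7/griffin_spark2:0.2.0
    container_name: griffin        # matches `docker exec -it griffin bash`
    ports:
      - "38080:8080"               # assumption: griffin service HTTP port
  es:
    image: bhlx3lyx7/elasticsearch
    ports:
      - "39200:9200"               # assumption: elasticsearch HTTP port
```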
Modify the `BASE_PATH` value to `<your local IP address>:38080`, then try the following requests:

1. `Basic -> Get griffin version`, to make sure the griffin service has started up.
2. `Measures -> Add measure`, to create a measure in griffin.
3. `jobs -> Add job`, to schedule a job to execute the measure. In the example, the schedule interval is 5 minutes.

The metrics are persisted to elasticsearch, and you can query them like this:

```
curl -XGET '<your local IP address>:39200/griffin/accuracy/_search?pretty&filter_path=hits.hits._source' \
  -d '{"query":{"match_all":{}}, "sort": [{"tmst": {"order": "asc"}}]}'
```
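The query above returns one document per measurement, trimmed to `hits.hits._source` by the `filter_path` parameter. A minimal sketch of post-processing such a response in Python follows; the metric field names `total` and `matched` in the sample are assumptions for illustration, so check the fields of your own documents:

```python
import json

# Sample response in the shape produced by filter_path=hits.hits._source;
# the PIDs of the metric fields ("total", "matched") are assumed, not taken
# from a real griffin deployment.
raw = '''
{"hits": {"hits": [
  {"_source": {"tmst": 1509599811123, "total": 125000, "matched": 124911}},
  {"_source": {"tmst": 1509600111123, "total": 125000, "matched": 125000}}
]}}
'''

for hit in json.loads(raw)["hits"]["hits"]:
    source = hit["_source"]
    ratio = source["matched"] / source["total"] * 100
    # prints e.g. "tmst=1509599811123 matched=99.93%"
    print(f"tmst={source['tmst']} matched={ratio:.2f}%")
```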
In streaming mode, start the cluster with the streaming compose file:

```
docker-compose -f docker-compose-streaming.yml up -d
```

Enter the griffin container:

```
docker exec -it griffin bash
```

Switch to the measure directory and execute the streaming accuracy script:

```
cd ~/measure
./streaming-accu.sh
```

You can trace the log in streaming-accu.log:
```
tail -f streaming-accu.log
```
If you want to switch to another streaming job, first kill the current griffin-measure process:

```
kill -9 `ps -ef | awk '/griffin-measure/{print $2}'`
```

Then clear the checkpoint directory and the other directories related to the last streaming job:

```
./clear.sh
```

Execute the streaming-prof script, to run the streaming profiling measurement:

```
./streaming-prof.sh
```

You can trace the log in streaming-prof.log:

```
tail -f streaming-prof.log
```
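One caveat on the `kill` one-liner above: since the `awk` process's own command line contains `griffin-measure`, it may show up in the `ps -ef` snapshot too, making `kill` complain about a PID that has already exited. Adding a guard avoids that; the sample `ps` output below is made up for illustration:

```shell
# made-up ps -ef output: one real griffin-measure job plus the awk
# process from the pipeline itself
sample='root  4321  100  0 10:00 ?  00:01 java -cp griffin-measure.jar
root  4400  100  0 10:05 ?  00:00 awk /griffin-measure/{print $2}'

# the !/awk/ guard keeps the awk process itself out of the match,
# so only the real job PID (field 2) is printed
echo "$sample" | awk '/griffin-measure/ && !/awk/ {print $2}'
# prints: 4321
```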