Griffin docker images are pre-built and published on Docker Hub; you can pull them to try griffin in docker.
Install docker and docker compose.
Increase `vm.max_map_count` of your local machine (Linux) so that elasticsearch can start:

```
sysctl -w vm.max_map_count=262144
```
For macOS, please make enough memory available for docker (for example, set more than 4 GB in Docker -> Preferences -> Advanced), or decrease the memory for the es instance (for example, set `-Xms512m -Xmx512m` in `jvm.options`).
For other platforms, please refer to the elastic.co documentation on the max_map_count kernel setting.
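Before starting the containers, you can check whether the kernel setting is already high enough; a minimal sketch, using the 262144 threshold from the command above (non-Linux platforms do not expose this knob under `/proc`):

```shell
#!/bin/sh
# Check whether vm.max_map_count is high enough for elasticsearch.
# 262144 is the value recommended above; raise it with sysctl if needed.
required=262144
verdict="unknown"
if [ -r /proc/sys/vm/max_map_count ]; then
  current=$(cat /proc/sys/vm/max_map_count)
  if [ "$current" -ge "$required" ]; then
    verdict="ok"
  else
    verdict="too low, run: sysctl -w vm.max_map_count=$required"
  fi
  echo "vm.max_map_count=$current ($verdict)"
else
  # No /proc entry: likely macOS or another non-Linux platform.
  echo "vm.max_map_count not readable on this platform"
fi
```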
Pull the pre-built griffin docker images, if you can access the docker repository easily (i.e. NOT in China):

```
docker pull apachegriffin/griffin_spark2:0.3.0
docker pull apachegriffin/elasticsearch
docker pull apachegriffin/kafka
docker pull zookeeper:3.5
```
For Chinese users, you can pull the images from the following mirrors.
```
docker pull registry.docker-cn.com/apachegriffin/griffin_spark2:0.3.0
docker pull registry.docker-cn.com/apachegriffin/elasticsearch
docker pull registry.docker-cn.com/apachegriffin/kafka
docker pull registry.docker-cn.com/zookeeper:3.5
```
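Whichever registry you use, the four pulls can be scripted in one loop; a sketch where `MIRROR` is a hypothetical environment variable for an optional registry prefix (empty for Docker Hub, `registry.docker-cn.com/` for the mirror):

```shell
#!/bin/sh
# Pull every image the demo needs. MIRROR optionally prefixes a registry
# mirror (e.g. MIRROR=registry.docker-cn.com/); tags match the lists above.
MIRROR="${MIRROR:-}"
for img in apachegriffin/griffin_spark2:0.3.0 \
           apachegriffin/elasticsearch \
           apachegriffin/kafka \
           zookeeper:3.5; do
  echo "pulling ${MIRROR}${img}"
  docker pull "${MIRROR}${img}" || echo "pull failed (is docker running?)"
done
```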
The docker images provide the griffin runtime environment:

- `apachegriffin/griffin_spark2`: contains mysql, hadoop, hive, spark, livy, griffin service, griffin measure, and some prepared demo data. It works as a single-node spark cluster, providing the spark engine and the griffin service.
- `apachegriffin/elasticsearch`: based on the official elasticsearch image, with some configurations added to enable CORS requests, providing the elasticsearch service for metrics persistence.
- `apachegriffin/kafka`: contains kafka 0.8 and some demo streaming data, providing the streaming data source in streaming mode.
- `zookeeper:3.5`: the official zookeeper image, providing the zookeeper service in streaming mode.

For batch mode, start up the docker containers:

```
docker-compose -f docker-compose-batch.yml up -d
```
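Once the containers are up, it can take a while before the griffin service answers. A small wait-loop sketch, assuming the service is reachable on the mapped port 38080 and that `/api/v1/version` is the endpoint behind "Get griffin version" (both assumptions; `GRIFFIN_HOST` is a hypothetical variable, adjust to your setup):

```shell
#!/bin/sh
# Poll the griffin service until it answers, before calling other apis.
# Port 38080 and the /api/v1/version endpoint are assumed here.
host="${GRIFFIN_HOST:-localhost}"
status="down"
for attempt in 1 2 3 4 5; do
  if curl -sf "http://${host}:38080/api/v1/version" >/dev/null 2>&1; then
    status="up"
    break
  fi
  sleep 1
done
echo "griffin service is ${status}"
```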
In postman, modify the environment `BASE_PATH` value to `<your local IP address>:38080`, then:

- Try the api `Basic -> Get griffin version`, to make sure the griffin service has started up.
- Try the api `Measures -> Add measure`, to create a measure in griffin.
- Try the api `jobs -> Add job`, to schedule a job to execute the measure. In this example, the schedule interval is 5 minutes.

After the job has run, you can query the persisted metrics from elasticsearch:

```
curl -XGET '<your local IP address>:39200/griffin/accuracy/_search?pretty&filter_path=hits.hits._source' -d '{"query":{"match_all":{}}, "sort": [{"tmst": {"order": "asc"}}]}'
```
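Since the job fires every 5 minutes, the result set should grow over time. A sketch that wraps the query above in a reusable script; `ES_HOST` is a hypothetical variable defaulting to localhost, and 39200 is the mapped elasticsearch port:

```shell
#!/bin/sh
# Fetch the accuracy metrics griffin persisted into elasticsearch,
# oldest first (sorted by the metric timestamp field "tmst").
es="${ES_HOST:-localhost}:39200"
query='{"query":{"match_all":{}}, "sort": [{"tmst": {"order": "asc"}}]}'
resp=$(curl -sf -XGET "http://${es}/griffin/accuracy/_search?pretty&filter_path=hits.hits._source" \
  -d "$query" || echo "elasticsearch not reachable at ${es}")
echo "$resp"
```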
For streaming mode, start up the docker containers:

```
docker-compose -f docker-compose-streaming.yml up -d
```
Enter the griffin docker container:

```
docker exec -it griffin bash
```

Switch to the measure directory and execute the streaming accuracy script:

```
cd ~/measure
./streaming-accu.sh
```

You can trace the log in `streaming-accu.log`:

```
tail -f streaming-accu.log
```
To switch to another streaming measurement, first kill the current streaming job process:

```
kill -9 `ps -ef | awk '/griffin-measure/{print $2}'`
```

Then clear the checkpoint directory and other related directories of the last streaming job:

```
./clear.sh
```

Execute the streaming-prof script, to run the streaming profiling measurement:

```
./streaming-prof.sh
```

You can trace the log in `streaming-prof.log`:

```
tail -f streaming-prof.log
```
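The kill step above can be wrapped so it does not fail when no measure process is running; a sketch using the same ps/awk match, where the extra `!/awk/` guard keeps the pipeline's own awk process out of the result:

```shell
#!/bin/sh
# Stop any running griffin-measure streaming job before starting another.
# The !/awk/ guard excludes this pipeline's own awk process from the match.
pids=$(ps -ef | awk '/griffin-measure/ && !/awk/ {print $2}')
if [ -n "$pids" ]; then
  echo "stopping griffin-measure pid(s): $pids"
  kill -9 $pids
else
  echo "no griffin-measure process found"
fi
```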