In this quickstart, we will download Druid and set it up on a single machine. After completing this initial setup, the cluster will be ready to load data.
Before beginning the quickstart, it is helpful to read the general Druid overview and the ingestion overview, as the tutorials will refer to concepts discussed on those pages.
You will need:
Download the 0.14.2-incubating release.
Extract Druid by running the following commands in your terminal:
```
tar -xzf apache-druid-0.14.2-incubating-bin.tar.gz
cd apache-druid-0.14.2-incubating
```
In the package, you should find:
- DISCLAIMER, LICENSE, and NOTICE files
- bin/* - scripts useful for this quickstart
- conf/* - template configurations for a clustered setup
- extensions/* - core Druid extensions
- hadoop-dependencies/* - Druid Hadoop dependencies
- lib/* - libraries and dependencies for core Druid
- quickstart/* - configuration files, sample data, and other files for the quickstart tutorials

Druid has a dependency on Apache ZooKeeper for distributed coordination. You'll need to download and run ZooKeeper.
In the package root, run the following commands:
```
curl https://archive.apache.org/dist/zookeeper/zookeeper-3.4.11/zookeeper-3.4.11.tar.gz -o zookeeper-3.4.11.tar.gz
tar -xzf zookeeper-3.4.11.tar.gz
mv zookeeper-3.4.11 zk
```
The startup scripts for the tutorial will expect the contents of the ZooKeeper tarball to be located at zk under the apache-druid-0.14.2-incubating package root.
From the apache-druid-0.14.2-incubating package root, run the following command:
```
bin/supervise -c quickstart/tutorial/conf/tutorial-cluster.conf
```
This will bring up instances of Zookeeper and the Druid services, all running on the local machine, e.g.:
```
bin/supervise -c quickstart/tutorial/conf/tutorial-cluster.conf
[Wed Feb 27 12:46:13 2019] Running command[zk], logging to[/apache-druid-0.14.2-incubating/var/sv/zk.log]: bin/run-zk quickstart/tutorial/conf
[Wed Feb 27 12:46:13 2019] Running command[coordinator], logging to[/apache-druid-0.14.2-incubating/var/sv/coordinator.log]: bin/run-druid coordinator quickstart/tutorial/conf
[Wed Feb 27 12:46:13 2019] Running command[broker], logging to[/apache-druid-0.14.2-incubating/var/sv/broker.log]: bin/run-druid broker quickstart/tutorial/conf
[Wed Feb 27 12:46:13 2019] Running command[router], logging to[/apache-druid-0.14.2-incubating/var/sv/router.log]: bin/run-druid router quickstart/tutorial/conf
[Wed Feb 27 12:46:13 2019] Running command[historical], logging to[/apache-druid-0.14.2-incubating/var/sv/historical.log]: bin/run-druid historical quickstart/tutorial/conf
[Wed Feb 27 12:46:13 2019] Running command[overlord], logging to[/apache-druid-0.14.2-incubating/var/sv/overlord.log]: bin/run-druid overlord quickstart/tutorial/conf
[Wed Feb 27 12:46:13 2019] Running command[middleManager], logging to[/apache-druid-0.14.2-incubating/var/sv/middleManager.log]: bin/run-druid middleManager quickstart/tutorial/conf
```
All persistent state, such as the cluster metadata store and segments for the services, will be kept in the var directory under the apache-druid-0.14.2-incubating package root. Logs for the services are located at var/sv.
Later on, if you'd like to stop the services, press CTRL-C to exit the bin/supervise script, which will terminate the Druid processes.
If you want a clean start after stopping the services, delete the var directory and run the bin/supervise script again.
Once every service has started, you are ready to load data.
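Before loading data, it can be useful to confirm that the services are actually answering HTTP requests. Druid services expose a /status endpoint; a minimal polling helper is sketched below. To keep the snippet self-contained and runnable anywhere, it polls a throwaway stand-in server rather than a live cluster; against a real tutorial cluster you would point it at a service URL such as http://localhost:8081/status for the coordinator (that port is an assumption based on Druid's default configuration).

```python
import json
import threading
import time
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def wait_for_service(url, timeout=60.0, interval=1.0):
    """Poll `url` until it returns HTTP 200 or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # service not up yet; keep polling
        time.sleep(interval)
    return False

# Stand-in for a Druid service's /status endpoint, so this sketch runs
# without a cluster. With a real cluster, skip this and call
# wait_for_service("http://localhost:8081/status") or similar instead.
class _Status(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"version": "0.14.2-incubating"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), _Status)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

ready = wait_for_service(f"http://127.0.0.1:{port}/status", timeout=10, interval=0.1)
print("ready:", ready)
server.shutdown()
```

The same helper can be looped over each service's status URL to wait for the whole cluster before starting an ingestion task.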
If you completed Tutorial: Loading stream data from Kafka and wish to reset the cluster state, you should additionally clear out any Kafka state.
Shut down the Kafka broker with CTRL-C before stopping ZooKeeper and the Druid services, and then delete the Kafka log directory at /tmp/kafka-logs:
```
rm -rf /tmp/kafka-logs
```
For the following data loading tutorials, we have included a sample data file containing Wikipedia page edit events that occurred on 2015-09-12.
This sample data is located at quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz from the Druid package root. The page edit events are stored as JSON objects in a text file.
The sample data contains the columns shown in the example event below:
```
{
  "timestamp": "2015-09-12T20:03:45.018Z",
  "channel": "#en.wikipedia",
  "namespace": "Main",
  "page": "Spider-Man's powers and equipment",
  "user": "foobar",
  "comment": "/* Artificial web-shooters */",
  "cityName": "New York",
  "regionName": "New York",
  "regionIsoCode": "NY",
  "countryName": "United States",
  "countryIsoCode": "US",
  "isAnonymous": false,
  "isNew": false,
  "isMinor": false,
  "isRobot": false,
  "isUnpatrolled": false,
  "added": 99,
  "delta": 99,
  "deleted": 0
}
```
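The sample file is gzipped newline-delimited JSON: one event object per line. A minimal Python sketch of reading that layout is shown below; to stay self-contained it writes a single event like the one above to a temporary file rather than assuming the Druid package is present.

```python
import gzip
import json
import os
import tempfile

# One event matching the example above (valid JSON has no trailing comma).
event = {
    "timestamp": "2015-09-12T20:03:45.018Z",
    "channel": "#en.wikipedia",
    "namespace": "Main",
    "page": "Spider-Man's powers and equipment",
    "user": "foobar",
    "comment": "/* Artificial web-shooters */",
    "cityName": "New York",
    "regionName": "New York",
    "regionIsoCode": "NY",
    "countryName": "United States",
    "countryIsoCode": "US",
    "isAnonymous": False,
    "isNew": False,
    "isMinor": False,
    "isRobot": False,
    "isUnpatrolled": False,
    "added": 99,
    "delta": 99,
    "deleted": 0,
}

# Recreate the gzipped JSON-lines layout of
# quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz in a temp file.
path = os.path.join(tempfile.mkdtemp(), "sampled.json.gz")
with gzip.open(path, "wt", encoding="utf-8") as f:
    f.write(json.dumps(event) + "\n")

# Read it back: each line is one page-edit event.
events = []
with gzip.open(path, "rt", encoding="utf-8") as f:
    for line in f:
        events.append(json.loads(line))

print(len(events), "event(s); first page edited:", events[0]["page"])
```

Pointing the reader loop at the actual sample file instead of the temp file is a quick way to inspect the data before writing an ingestion spec.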
The following tutorials demonstrate various methods of loading data into Druid, including both batch and streaming use cases.
This tutorial demonstrates how to perform a batch file load, using Druid's native batch ingestion.
This tutorial demonstrates how to load streaming data from a Kafka topic.
This tutorial demonstrates how to perform a batch file load, using a remote Hadoop cluster.
This tutorial demonstrates how to load streaming data by pushing events to Druid using the Tranquility service.
This tutorial demonstrates how to write a new ingestion spec and use it to load data.