<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Apache Druid">
<meta name="keywords" content="druid,kafka,database,analytics,streaming,real-time,real time,apache,open source">
<meta name="author" content="Apache Software Foundation">
<title>Druid | Clustering</title>
<link rel="alternate" type="application/atom+xml" href="/feed">
<link rel="shortcut icon" href="/img/favicon.png">
<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css" integrity="sha384-fnmOCqbTlWIlj8LyTjo7mOUStjsKC4pOpQbqyi7RrhN7udi9RwhKkMHpvLbHG9Sr" crossorigin="anonymous">
<link href='//fonts.googleapis.com/css?family=Open+Sans+Condensed:300,700,300italic|Open+Sans:300italic,400italic,600italic,400,300,600,700' rel='stylesheet' type='text/css'>
<link rel="stylesheet" href="/css/bootstrap-pure.css?v=1.1">
<link rel="stylesheet" href="/css/base.css?v=1.1">
<link rel="stylesheet" href="/css/header.css?v=1.1">
<link rel="stylesheet" href="/css/footer.css?v=1.1">
<link rel="stylesheet" href="/css/syntax.css?v=1.1">
<link rel="stylesheet" href="/css/docs.css?v=1.1">
<script>
(function() {
var cx = '000162378814775985090:molvbm0vggm';
var gcse = document.createElement('script');
gcse.type = 'text/javascript';
gcse.async = true;
gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') +
'//cse.google.com/cse.js?cx=' + cx;
var s = document.getElementsByTagName('script')[0];
s.parentNode.insertBefore(gcse, s);
})();
</script>
</head>
<body>
<!-- Start page_header include -->
<script src="//ajax.googleapis.com/ajax/libs/jquery/2.2.4/jquery.min.js"></script>
<div class="top-navigator">
<div class="container">
<div class="left-cont">
<a class="logo" href="/"><span class="druid-logo"></span></a>
</div>
<div class="right-cont">
<ul class="links">
<li class=""><a href="/technology">Technology</a></li>
<li class=""><a href="/use-cases">Use Cases</a></li>
<li class=""><a href="/druid-powered">Powered By</a></li>
<li class=""><a href="/docs/latest/design/">Docs</a></li>
<li class=""><a href="/community/">Community</a></li>
<li class="header-dropdown">
<a>Apache</a>
<div class="header-dropdown-menu">
<a href="https://www.apache.org/" target="_blank">Foundation</a>
<a href="https://www.apache.org/events/current-event" target="_blank">Events</a>
<a href="https://www.apache.org/licenses/" target="_blank">License</a>
<a href="https://www.apache.org/foundation/thanks.html" target="_blank">Thanks</a>
<a href="https://www.apache.org/security/" target="_blank">Security</a>
<a href="https://www.apache.org/foundation/sponsorship.html" target="_blank">Sponsorship</a>
</div>
</li>
<li class=" button-link"><a href="/downloads.html">Download</a></li>
</ul>
</div>
</div>
<div class="action-button menu-icon">
<span class="fa fa-bars"></span> MENU
</div>
<div class="action-button menu-icon-close">
<span class="fa fa-times"></span> MENU
</div>
</div>
<script type="text/javascript">
var $menu = $('.right-cont');
var $menuIcon = $('.menu-icon');
var $menuIconClose = $('.menu-icon-close');
function showMenu() {
$menu.fadeIn(100);
$menuIcon.fadeOut(100);
$menuIconClose.fadeIn(100);
}
$menuIcon.click(showMenu);
function hideMenu() {
$menu.fadeOut(100);
$menuIconClose.fadeOut(100);
$menuIcon.fadeIn(100);
}
$menuIconClose.click(hideMenu);
$(window).resize(function() {
if ($(window).width() >= 840) {
$menu.fadeIn(100);
$menuIcon.fadeOut(100);
$menuIconClose.fadeOut(100);
}
else {
$menu.fadeOut(100);
$menuIcon.fadeIn(100);
$menuIconClose.fadeOut(100);
}
});
</script>
<!-- Stop page_header include -->
<div class="container doc-container">
<p> Looking for the <a href="/docs/0.22.1/">latest stable documentation</a>?</p>
<div class="row">
<div class="col-md-9 doc-content">
<p>
<a class="btn btn-default btn-xs visible-xs-inline-block visible-sm-inline-block" href="#toc">Table of Contents</a>
</p>
<!--
~ Licensed to the Apache Software Foundation (ASF) under one
~ or more contributor license agreements. See the NOTICE file
~ distributed with this work for additional information
~ regarding copyright ownership. The ASF licenses this file
~ to you under the Apache License, Version 2.0 (the
~ "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing,
~ software distributed under the License is distributed on an
~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
~ KIND, either express or implied. See the License for the
~ specific language governing permissions and limitations
~ under the License.
-->
<h1 id="clustering">Clustering</h1>
<p>Apache Druid (incubating) is designed to be deployed as a scalable, fault-tolerant cluster.</p>
<p>In this document, we&#39;ll set up a simple cluster and discuss how it can be further configured to meet
your needs. </p>
<p>This simple cluster will feature:</p>
<ul>
<li>A single Master server to host the Coordinator and Overlord processes</li>
<li>Scalable, fault-tolerant Data servers running Historical and MiddleManager processes</li>
<li>Query servers, hosting Druid Broker processes</li>
</ul>
<p>In production, we recommend deploying multiple Master servers with Coordinator and Overlord processes in a fault-tolerant configuration as well.</p>
<h2 id="select-hardware">Select hardware</h2>
<h3 id="master-server">Master Server</h3>
<p>The Coordinator and Overlord processes can be co-located on a single server that is responsible for handling the metadata and coordination needs of your cluster.
The equivalent of an AWS <a href="https://aws.amazon.com/ec2/instance-types/#M3">m3.xlarge</a> is sufficient for most clusters. This
hardware offers:</p>
<ul>
<li>4 vCPUs</li>
<li>15 GB RAM</li>
<li>80 GB SSD storage</li>
</ul>
<h3 id="data-server">Data Server</h3>
<p>Historicals and MiddleManagers can be colocated on a single server to handle the actual data in your cluster. These servers benefit greatly from CPU, RAM,
and SSDs. The equivalent of an AWS <a href="https://aws.amazon.com/ec2/instance-types/#r3">r3.2xlarge</a> is a
good starting point. This hardware offers:</p>
<ul>
<li>8 vCPUs</li>
<li>61 GB RAM</li>
<li>160 GB SSD storage</li>
</ul>
<h3 id="query-server">Query Server</h3>
<p>Druid Brokers accept queries and farm them out to the rest of the cluster. They also optionally maintain an
in-memory query cache. These servers benefit greatly from CPU and RAM, and can also be deployed on
the equivalent of an AWS <a href="https://aws.amazon.com/ec2/instance-types/#r3">r3.2xlarge</a>. This hardware
offers:</p>
<ul>
<li>8 vCPUs</li>
<li>61 GB RAM</li>
<li>160 GB SSD storage</li>
</ul>
<p>You can consider co-locating any open source UIs or query libraries on the same server that the Broker is running on.</p>
<p>Very large clusters should consider selecting larger servers.</p>
<h2 id="select-os">Select OS</h2>
<p>We recommend running your favorite Linux distribution. You will also need:</p>
<ul>
<li>Java 8</li>
</ul>
<p>Your OS package manager should be able to help with installing Java. If your Ubuntu-based OS
does not have a recent enough version of Java, WebUpd8 offers <a href="http://www.webupd8.org/2012/09/install-oracle-java-8-in-ubuntu-via-ppa.html">packages for those
OSes</a>.</p>
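<p>Before proceeding, you can verify that a suitable Java runtime is installed on each server:</p>
<div class="highlight"><pre><code class="language-bash">java -version
</code></pre></div>
<p>The reported version should be 1.8.x.</p>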
<h2 id="download-the-distribution">Download the distribution</h2>
<p>First, download and unpack the release archive. It&#39;s best to do this on a single machine at first,
since you will be editing the configurations and then copying the modified distribution out to all
of your servers.</p>
<p><a href="https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/0.14.1-incubating/apache-druid-0.14.1-incubating-bin.tar.gz">Download</a>
the 0.14.1-incubating release.</p>
<p>Extract Druid by running the following commands in your terminal:</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span></span>tar -xzf apache-druid-0.14.1-incubating-bin.tar.gz
<span class="nb">cd</span> apache-druid-0.14.1-incubating
</code></pre></div>
<p>In the package, you should find:</p>
<ul>
<li><code>DISCLAIMER</code>, <code>LICENSE</code>, and <code>NOTICE</code> files</li>
<li><code>bin/*</code> - scripts related to the <a href="quickstart.html">single-machine quickstart</a></li>
<li><code>conf/*</code> - template configurations for a clustered setup</li>
<li><code>extensions/*</code> - core Druid extensions</li>
<li><code>hadoop-dependencies/*</code> - Druid Hadoop dependencies</li>
<li><code>lib/*</code> - libraries and dependencies for core Druid</li>
<li><code>quickstart/*</code> - files related to the <a href="quickstart.html">single-machine quickstart</a></li>
</ul>
<p>We&#39;ll be editing the files in <code>conf/</code> in order to get things running.</p>
<h2 id="configure-deep-storage">Configure deep storage</h2>
<p>Druid relies on a distributed filesystem or large object (blob) store for data storage. The most
commonly used deep storage implementations are S3 (popular for those on AWS) and HDFS (popular if
you already have a Hadoop deployment).</p>
<h3 id="s3">S3</h3>
<p>In <code>conf/druid/_common/common.runtime.properties</code>,</p>
<ul>
<li><p>Set <code>druid.extensions.loadList=[&quot;druid-s3-extensions&quot;]</code>.</p></li>
<li><p>Comment out the configurations for local storage under &quot;Deep Storage&quot; and &quot;Indexing service logs&quot;.</p></li>
<li><p>Uncomment and configure appropriate values in the &quot;For S3&quot; sections of &quot;Deep Storage&quot; and
&quot;Indexing service logs&quot;.</p></li>
</ul>
<p>After this, you should have made the following changes:</p>
<div class="highlight"><pre><code class="language-text" data-lang="text"><span></span>druid.extensions.loadList=[&quot;druid-s3-extensions&quot;]
#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments
druid.storage.type=s3
druid.storage.bucket=your-bucket
druid.storage.baseKey=druid/segments
druid.s3.accessKey=...
druid.s3.secretKey=...
#druid.indexer.logs.type=file
#druid.indexer.logs.directory=var/druid/indexing-logs
druid.indexer.logs.type=s3
druid.indexer.logs.s3Bucket=your-bucket
druid.indexer.logs.s3Prefix=druid/indexing-logs
</code></pre></div>
<h3 id="hdfs">HDFS</h3>
<p>In <code>conf/druid/_common/common.runtime.properties</code>,</p>
<ul>
<li><p>Set <code>druid.extensions.loadList=[&quot;druid-hdfs-storage&quot;]</code>.</p></li>
<li><p>Comment out the configurations for local storage under &quot;Deep Storage&quot; and &quot;Indexing service logs&quot;.</p></li>
<li><p>Uncomment and configure appropriate values in the &quot;For HDFS&quot; sections of &quot;Deep Storage&quot; and
&quot;Indexing service logs&quot;.</p></li>
</ul>
<p>After this, you should have made the following changes:</p>
<div class="highlight"><pre><code class="language-text" data-lang="text"><span></span>druid.extensions.loadList=[&quot;druid-hdfs-storage&quot;]
#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments
druid.storage.type=hdfs
druid.storage.storageDirectory=/druid/segments
#druid.indexer.logs.type=file
#druid.indexer.logs.directory=var/druid/indexing-logs
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=/druid/indexing-logs
</code></pre></div>
<p>Also,</p>
<ul>
<li>Place your Hadoop configuration XMLs (core-site.xml, hdfs-site.xml, yarn-site.xml,
mapred-site.xml) on the classpath of your Druid processes. You can do this by copying them into
<code>conf/druid/_common/</code>.</li>
</ul>
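<p>For example, if your Hadoop configuration XMLs live in <code>/etc/hadoop/conf</code> (a common location, but this varies by deployment), you could copy them like this:</p>
<div class="highlight"><pre><code class="language-bash">cp /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml \
   /etc/hadoop/conf/yarn-site.xml /etc/hadoop/conf/mapred-site.xml \
   conf/druid/_common/
</code></pre></div>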
<h2 id="configure-tranquility-server-optional">Configure Tranquility Server (optional)</h2>
<p>Data streams can be sent to Druid through a simple HTTP API powered by Tranquility
Server. If you will be using this functionality, then at this point you should <a href="../ingestion/stream-ingestion.html#server">configure
Tranquility Server</a>.</p>
<h2 id="configure-tranquility-kafka-optional">Configure Tranquility Kafka (optional)</h2>
<p>Druid can consume streams from Kafka through Tranquility Kafka. If you will be
using this functionality, then at this point you should
<a href="../ingestion/stream-ingestion.html#kafka">configure Tranquility Kafka</a>.</p>
<h2 id="configure-for-connecting-to-hadoop-optional">Configure for connecting to Hadoop (optional)</h2>
<p>If you will be loading data from a Hadoop cluster, then at this point you should configure Druid to be aware
of your cluster:</p>
<ul>
<li><p>Update <code>druid.indexer.task.hadoopWorkingPath</code> in <code>conf/druid/middleManager/runtime.properties</code> to
a path on HDFS that you&#39;d like to use for temporary files required during the indexing process.
<code>druid.indexer.task.hadoopWorkingPath=/tmp/druid-indexing</code> is a common choice.</p></li>
<li><p>Place your Hadoop configuration XMLs (core-site.xml, hdfs-site.xml, yarn-site.xml,
mapred-site.xml) on the classpath of your Druid processes. You can do this by copying them into
<code>conf/druid/_common/core-site.xml</code>, <code>conf/druid/_common/hdfs-site.xml</code>, and so on.</p></li>
</ul>
<p>Note that you don&#39;t need to use HDFS deep storage in order to load data from Hadoop. For example, if
your cluster is running on Amazon Web Services, we recommend using S3 for deep storage even if you
are loading data using Hadoop or Elastic MapReduce.</p>
<p>For more info, please see <a href="../ingestion/batch-ingestion.html">batch ingestion</a>.</p>
<h2 id="configure-addresses-for-druid-coordination">Configure addresses for Druid coordination</h2>
<p>In this simple cluster, you will deploy a single Master server containing the following:</p>
<ul>
<li>A single Druid Coordinator process</li>
<li>A single Druid Overlord process</li>
<li>A single ZooKeeper instance</li>
<li>An embedded Derby metadata store</li>
</ul>
<p>The processes on the cluster need to be configured with the addresses of this ZK instance and the metadata store.</p>
<p>In <code>conf/druid/_common/common.runtime.properties</code>, replace
&quot;zk.service.host&quot; with a <a href="https://zookeeper.apache.org/doc/current/zookeeperProgrammers.html">connection string</a>
containing a comma-separated list of host:port pairs, each corresponding to a ZooKeeper server
(e.g. &quot;127.0.0.1:4545&quot; or &quot;127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002&quot;):</p>
<ul>
<li><code>druid.zk.service.host</code></li>
</ul>
<p>In <code>conf/druid/_common/common.runtime.properties</code>, replace
&quot;metadata.storage.*&quot; with the address of the machine that you will use as your metadata store:</p>
<ul>
<li><code>druid.metadata.storage.connector.connectURI</code></li>
<li><code>druid.metadata.storage.connector.host</code></li>
</ul>
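<p>For illustration, if your Master server had the hypothetical hostname <code>master.example.com</code> and you kept the default embedded Derby metadata store, the relevant lines in <code>conf/druid/_common/common.runtime.properties</code> would look something like:</p>
<div class="highlight"><pre><code class="language-text">druid.zk.service.host=master.example.com
druid.metadata.storage.type=derby
druid.metadata.storage.connector.connectURI=jdbc:derby://master.example.com:1527/var/druid/metadata.db;create=true
druid.metadata.storage.connector.host=master.example.com
druid.metadata.storage.connector.port=1527
</code></pre></div>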
<div class="note caution">
In production, we recommend running 2 Master servers, each running a Druid Coordinator process
and a Druid Overlord process. We also recommend running a ZooKeeper cluster on its own dedicated hardware,
as well as replicated <a href = "../dependencies/metadata-storage.html">metadata storage</a>
such as MySQL or PostgreSQL, on its own dedicated hardware.
</div>
<h2 id="tune-processes-on-the-data-server">Tune processes on the Data Server</h2>
<p>Druid Historicals and MiddleManagers can be co-located on the same hardware. Both Druid processes benefit greatly from
being tuned to the hardware they run on. If you are running Tranquility Server or Tranquility Kafka, you can also co-locate Tranquility with these two Druid processes.
If you are using <a href="https://aws.amazon.com/ec2/instance-types/#r3">r3.2xlarge</a>
EC2 instances, or similar hardware, the configuration in the distribution is a
reasonable starting point.</p>
<p>If you are using different hardware, we recommend adjusting configurations for your specific
hardware. The most commonly adjusted configurations are:</p>
<ul>
<li><code>-Xmx</code> and <code>-Xms</code></li>
<li><code>druid.server.http.numThreads</code></li>
<li><code>druid.processing.buffer.sizeBytes</code></li>
<li><code>druid.processing.numThreads</code></li>
<li><code>druid.query.groupBy.maxIntermediateRows</code></li>
<li><code>druid.query.groupBy.maxResults</code></li>
<li><code>druid.server.maxSize</code> and <code>druid.segmentCache.locations</code> on Historical processes</li>
<li><code>druid.worker.capacity</code> on MiddleManagers</li>
</ul>
<div class="note info">
Keep -XX:MaxDirectMemorySize >= numThreads*sizeBytes, otherwise Druid will fail to start up.
</div>
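<p>As a concrete illustration of this rule, using made-up values: with <code>druid.processing.numThreads=7</code> and <code>druid.processing.buffer.sizeBytes=536870912</code> (512 MB), the process needs at least 7 * 512 MB = 3584 MB of direct memory, so the corresponding <code>jvm.config</code> would need a line such as:</p>
<div class="highlight"><pre><code class="language-text">-XX:MaxDirectMemorySize=4096m
</code></pre></div>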
<p>Please see the Druid <a href="../configuration/index.html">configuration documentation</a> for a full description of all
possible configuration options.</p>
<h2 id="tune-druid-brokers-on-the-query-server">Tune Druid Brokers on the Query Server</h2>
<p>Druid Brokers also benefit greatly from being tuned to the hardware they
run on. If you are using <a href="https://aws.amazon.com/ec2/instance-types/#r3">r3.2xlarge</a> EC2 instances,
or similar hardware, the configuration in the distribution is a reasonable starting point.</p>
<p>If you are using different hardware, we recommend adjusting configurations for your specific
hardware. The most commonly adjusted configurations are:</p>
<ul>
<li><code>-Xmx</code> and <code>-Xms</code></li>
<li><code>druid.server.http.numThreads</code></li>
<li><code>druid.cache.sizeInBytes</code></li>
<li><code>druid.processing.buffer.sizeBytes</code></li>
<li><code>druid.processing.numThreads</code></li>
<li><code>druid.query.groupBy.maxIntermediateRows</code></li>
<li><code>druid.query.groupBy.maxResults</code></li>
</ul>
<div class="note caution">
Keep -XX:MaxDirectMemorySize >= numThreads*sizeBytes, otherwise Druid will fail to start up.
</div>
<p>Please see the Druid <a href="../configuration/index.html">configuration documentation</a> for a full description of all
possible configuration options.</p>
<h2 id="open-ports-if-using-a-firewall">Open ports (if using a firewall)</h2>
<p>If you&#39;re using a firewall or some other system that only allows traffic on specific ports, allow
inbound connections on the following:</p>
<h3 id="master-server">Master Server</h3>
<ul>
<li>1527 (Derby metadata store; not needed if you are using a separate metadata store like MySQL or PostgreSQL)</li>
<li>2181 (ZooKeeper; not needed if you are using a separate ZooKeeper cluster)</li>
<li>8081 (Coordinator)</li>
<li>8090 (Overlord)</li>
</ul>
<h3 id="data-server">Data Server</h3>
<ul>
<li>8083 (Historical)</li>
<li>8091, 8100&ndash;8199 (Druid Middle Manager; you may need higher than port 8199 if you have a very high <code>druid.worker.capacity</code>)</li>
</ul>
<h3 id="query-server">Query Server</h3>
<ul>
<li>8082 (Broker)</li>
<li>8088 (Router, if used)</li>
</ul>
<h3 id="other">Other</h3>
<ul>
<li>8200 (Tranquility Server, if used)</li>
<li>8084 (Standalone Realtime, if used, deprecated)</li>
</ul>
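<p>How you open these ports depends on your firewall tooling. As one sketch, on a Master server running <code>firewalld</code> (adjust for iptables, security groups, or whatever your environment uses), you might open the Coordinator and Overlord ports like this:</p>
<div class="highlight"><pre><code class="language-bash">firewall-cmd --permanent --add-port=8081/tcp
firewall-cmd --permanent --add-port=8090/tcp
firewall-cmd --reload
</code></pre></div>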
<div class="note caution">
In production, we recommend deploying ZooKeeper and your metadata store on their own dedicated hardware,
rather than on the Master server.
</div>
<h2 id="start-master-server">Start Master Server</h2>
<p>Copy the Druid distribution and your edited configurations to your Master server. </p>
<p>If you have been editing the configurations on your local machine, you can use <em>rsync</em> to copy them:</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span></span>rsync -az apache-druid-0.14.1-incubating/ COORDINATION_SERVER:apache-druid-0.14.1-incubating/
</code></pre></div>
<p>Log on to your coordination server and install ZooKeeper:</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span></span>curl http://www.gtlib.gatech.edu/pub/apache/zookeeper/zookeeper-3.4.11/zookeeper-3.4.11.tar.gz -o zookeeper-3.4.11.tar.gz
tar -xzf zookeeper-3.4.11.tar.gz
<span class="nb">cd</span> zookeeper-3.4.11
cp conf/zoo_sample.cfg conf/zoo.cfg
./bin/zkServer.sh start
</code></pre></div>
<div class="note caution">
In production, we also recommend running a ZooKeeper cluster on its own dedicated hardware.
</div>
<p>On your coordination server, <em>cd</em> into the distribution and start up the coordination services (you should do this in different windows or pipe the log to a file):</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span></span>java <span class="sb">`</span>cat conf/druid/coordinator/jvm.config <span class="p">|</span> xargs<span class="sb">`</span> -cp conf/druid/_common:conf/druid/coordinator:lib/* org.apache.druid.cli.Main server coordinator
java <span class="sb">`</span>cat conf/druid/overlord/jvm.config <span class="p">|</span> xargs<span class="sb">`</span> -cp conf/druid/_common:conf/druid/overlord:lib/* org.apache.druid.cli.Main server overlord
</code></pre></div>
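<p>If you would rather keep a single terminal, a common pattern (not specific to Druid) is to background each process and redirect its console output to a file, for example:</p>
<div class="highlight"><pre><code class="language-bash">mkdir -p log
nohup java `cat conf/druid/coordinator/jvm.config | xargs` -cp conf/druid/_common:conf/druid/coordinator:lib/* org.apache.druid.cli.Main server coordinator &gt; log/coordinator-console.log 2&gt;&amp;1 &amp;
nohup java `cat conf/druid/overlord/jvm.config | xargs` -cp conf/druid/_common:conf/druid/overlord:lib/* org.apache.druid.cli.Main server overlord &gt; log/overlord-console.log 2&gt;&amp;1 &amp;
</code></pre></div>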
<p>You should see a log message printed out for each service that starts up. You can view detailed logs
for any service by looking in the <code>var/log/druid</code> directory using another terminal.</p>
<h2 id="start-data-server">Start Data Server</h2>
<p>Copy the Druid distribution and your edited configurations to your Data servers set aside for the Druid Historicals and MiddleManagers.</p>
<p>On each one, <em>cd</em> into the distribution and run this command to start the Data server processes:</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span></span>java <span class="sb">`</span>cat conf/druid/historical/jvm.config <span class="p">|</span> xargs<span class="sb">`</span> -cp conf/druid/_common:conf/druid/historical:lib/* org.apache.druid.cli.Main server historical
java <span class="sb">`</span>cat conf/druid/middleManager/jvm.config <span class="p">|</span> xargs<span class="sb">`</span> -cp conf/druid/_common:conf/druid/middleManager:lib/* org.apache.druid.cli.Main server middleManager
</code></pre></div>
<p>You can add more Data servers with Druid Historicals and MiddleManagers as needed.</p>
<div class="note info">
For clusters with complex resource allocation needs, you can break apart Historicals and MiddleManagers and scale the components individually.
This also allows you to take advantage of Druid's built-in MiddleManager
autoscaling facility.
</div>
<p>If you are doing push-based stream ingestion with Kafka or over HTTP, you can also start Tranquility Server on the same
hardware that holds the MiddleManagers and Historicals. Even at large scale, MiddleManagers and Tranquility Server
can be co-located. If you are running Tranquility (not Tranquility Server) with a stream processor, you can co-locate
Tranquility with the stream processor instead and do not need Tranquility Server.</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span></span>curl -O http://static.druid.io/tranquility/releases/tranquility-distribution-0.8.0.tgz
tar -xzf tranquility-distribution-0.8.0.tgz
<span class="nb">cd</span> tranquility-distribution-0.8.0
bin/tranquility &lt;server or kafka&gt; -configFile &lt;path_to_druid_distro&gt;/conf/tranquility/&lt;server or kafka&gt;.json
</code></pre></div>
<h2 id="start-query-server">Start Query Server</h2>
<p>Copy the Druid distribution and your edited configurations to your Query servers set aside for the Druid Brokers.</p>
<p>On each Query server, <em>cd</em> into the distribution and run this command to start the Broker process (you may want to pipe the output to a log file):</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span></span>java <span class="sb">`</span>cat conf/druid/broker/jvm.config <span class="p">|</span> xargs<span class="sb">`</span> -cp conf/druid/_common:conf/druid/broker:lib/* org.apache.druid.cli.Main server broker
</code></pre></div>
<p>You can add more Query servers as needed based on query load.</p>
<h2 id="loading-data">Loading data</h2>
<p>Congratulations, you now have a Druid cluster! The next step is to learn about recommended ways to load data into
Druid based on your use case. Read more about <a href="../ingestion/index.html">loading data</a>.</p>
</div>
<div class="col-md-3">
<div class="searchbox">
<gcse:searchbox-only></gcse:searchbox-only>
</div>
<div id="toc" class="nav toc hidden-print">
</div>
</div>
</div>
</div>
<!-- Start page_footer include -->
<footer class="druid-footer">
<div class="container">
<div class="text-center">
<p>
<a href="/technology">Technology</a>&ensp;·&ensp;
<a href="/use-cases">Use Cases</a>&ensp;·&ensp;
<a href="/druid-powered">Powered by Druid</a>&ensp;·&ensp;
<a href="/docs/latest/">Docs</a>&ensp;·&ensp;
<a href="/community/">Community</a>&ensp;·&ensp;
<a href="/downloads.html">Download</a>&ensp;·&ensp;
<a href="/faq">FAQ</a>
</p>
</div>
<div class="text-center">
<a title="Join the user group" href="https://groups.google.com/forum/#!forum/druid-user" target="_blank"><span class="fa fa-comments"></span></a>&ensp;·&ensp;
<a title="Follow Druid" href="https://twitter.com/druidio" target="_blank"><span class="fab fa-twitter"></span></a>&ensp;·&ensp;
<a title="GitHub" href="https://github.com/apache/druid" target="_blank"><span class="fab fa-github"></span></a>
</div>
<div class="text-center license">
Copyright © 2020 <a href="https://www.apache.org/" target="_blank">Apache Software Foundation</a>.<br>
Except where otherwise noted, licensed under <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA 4.0</a>.<br>
Apache Druid, Druid, and the Druid logo are either registered trademarks or trademarks of The Apache Software Foundation in the United States and other countries.
</div>
</div>
</footer>
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-131010415-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-131010415-1');
</script>
<script>
function trackDownload(type, url) {
ga('send', 'event', 'download', type, url);
}
</script>
<script src="//code.jquery.com/jquery.min.js"></script>
<script src="//maxcdn.bootstrapcdn.com/bootstrap/3.2.0/js/bootstrap.min.js"></script>
<script src="/assets/js/druid.js"></script>
<!-- stop page_footer include -->
<script>
$(function() {
$(".toc").load("/docs/0.14.1-incubating/toc.html");
// There is no way to tell when .gsc-input will be async loaded into the page so just try to set a placeholder until it works
var tries = 0;
var timer = setInterval(function() {
tries++;
if (tries > 300) clearInterval(timer);
var searchInput = $('input.gsc-input');
if (searchInput.length) {
searchInput.attr('placeholder', 'Search');
clearInterval(timer);
}
}, 100);
});
</script>
</body>
</html>