| <!DOCTYPE html> |
| <html lang="en"> |
| <head> |
| <meta charset="UTF-8" /> |
| <meta name="viewport" content="width=device-width, initial-scale=1.0"> |
| <meta name="description" content="Apache Druid"> |
| <meta name="keywords" content="druid,kafka,database,analytics,streaming,real-time,real time,apache,open source"> |
| <meta name="author" content="Apache Software Foundation"> |
| |
| <title>Druid | Apache Druid (incubating) Design</title> |
| |
| <link rel="alternate" type="application/atom+xml" href="/feed"> |
| <link rel="shortcut icon" href="/img/favicon.png"> |
| |
| <link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css" integrity="sha384-fnmOCqbTlWIlj8LyTjo7mOUStjsKC4pOpQbqyi7RrhN7udi9RwhKkMHpvLbHG9Sr" crossorigin="anonymous"> |
| |
| <link href='//fonts.googleapis.com/css?family=Open+Sans+Condensed:300,700,300italic|Open+Sans:300italic,400italic,600italic,400,300,600,700' rel='stylesheet' type='text/css'> |
| |
| <link rel="stylesheet" href="/css/bootstrap-pure.css?v=1.1"> |
| <link rel="stylesheet" href="/css/base.css?v=1.1"> |
| <link rel="stylesheet" href="/css/header.css?v=1.1"> |
| <link rel="stylesheet" href="/css/footer.css?v=1.1"> |
| <link rel="stylesheet" href="/css/syntax.css?v=1.1"> |
| <link rel="stylesheet" href="/css/docs.css?v=1.1"> |
| |
| <script> |
| (function() { |
| var cx = '000162378814775985090:molvbm0vggm'; |
| var gcse = document.createElement('script'); |
| gcse.type = 'text/javascript'; |
| gcse.async = true; |
| gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') + |
| '//cse.google.com/cse.js?cx=' + cx; |
| var s = document.getElementsByTagName('script')[0]; |
| s.parentNode.insertBefore(gcse, s); |
| })(); |
| </script> |
| |
| |
| </head> |
| |
| <body> |
| <!-- Start page_header include --> |
| <script src="//ajax.googleapis.com/ajax/libs/jquery/2.2.4/jquery.min.js"></script> |
| |
| <div class="top-navigator"> |
| <div class="container"> |
| <div class="left-cont"> |
| <a class="logo" href="/"><span class="druid-logo"></span></a> |
| </div> |
| <div class="right-cont"> |
| <ul class="links"> |
| <li class=""><a href="/technology">Technology</a></li> |
| <li class=""><a href="/use-cases">Use Cases</a></li> |
| <li class=""><a href="/druid-powered">Powered By</a></li> |
| <li class=""><a href="/docs/latest/design/">Docs</a></li> |
| <li class=""><a href="/community/">Community</a></li> |
| <li class="header-dropdown"> |
| <a>Apache</a> |
| <div class="header-dropdown-menu"> |
| <a href="https://www.apache.org/" target="_blank">Foundation</a> |
| <a href="https://www.apache.org/events/current-event" target="_blank">Events</a> |
| <a href="https://www.apache.org/licenses/" target="_blank">License</a> |
| <a href="https://www.apache.org/foundation/thanks.html" target="_blank">Thanks</a> |
| <a href="https://www.apache.org/security/" target="_blank">Security</a> |
| <a href="https://www.apache.org/foundation/sponsorship.html" target="_blank">Sponsorship</a> |
| </div> |
| </li> |
| <li class=" button-link"><a href="/downloads.html">Download</a></li> |
| </ul> |
| </div> |
| </div> |
| <div class="action-button menu-icon"> |
| <span class="fa fa-bars"></span> MENU |
| </div> |
| <div class="action-button menu-icon-close"> |
| <span class="fa fa-times"></span> MENU |
| </div> |
| </div> |
| |
| <script type="text/javascript"> |
| var $menu = $('.right-cont'); |
| var $menuIcon = $('.menu-icon'); |
| var $menuIconClose = $('.menu-icon-close'); |
| |
| function showMenu() { |
| $menu.fadeIn(100); |
| $menuIcon.fadeOut(100); |
| $menuIconClose.fadeIn(100); |
| } |
| |
| $menuIcon.click(showMenu); |
| |
| function hideMenu() { |
| $menu.fadeOut(100); |
| $menuIconClose.fadeOut(100); |
| $menuIcon.fadeIn(100); |
| } |
| |
| $menuIconClose.click(hideMenu); |
| |
| $(window).resize(function() { |
| if ($(window).width() >= 840) { |
| $menu.fadeIn(100); |
| $menuIcon.fadeOut(100); |
| $menuIconClose.fadeOut(100); |
| } |
| else { |
| $menu.fadeOut(100); |
| $menuIcon.fadeIn(100); |
| $menuIconClose.fadeOut(100); |
| } |
| }); |
| </script> |
| |
| <!-- Stop page_header include --> |
| |
| |
| <div class="container doc-container"> |
| |
| |
| |
| |
| <p> Looking for the <a href="/docs/0.19.0/">latest stable documentation</a>?</p> |
| |
| |
| <div class="row"> |
| <div class="col-md-9 doc-content"> |
| <p> |
| <a class="btn btn-default btn-xs visible-xs-inline-block visible-sm-inline-block" href="#toc">Table of Contents</a> |
| </p> |
| <!-- |
| ~ Licensed to the Apache Software Foundation (ASF) under one |
| ~ or more contributor license agreements. See the NOTICE file |
| ~ distributed with this work for additional information |
| ~ regarding copyright ownership. The ASF licenses this file |
| ~ to you under the Apache License, Version 2.0 (the |
| ~ "License"); you may not use this file except in compliance |
| ~ with the License. You may obtain a copy of the License at |
| ~ |
| ~ http://www.apache.org/licenses/LICENSE-2.0 |
| ~ |
| ~ Unless required by applicable law or agreed to in writing, |
| ~ software distributed under the License is distributed on an |
| ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY |
| ~ KIND, either express or implied. See the License for the |
| ~ specific language governing permissions and limitations |
| ~ under the License. |
| --> |
| |
| <h1 id="what-is-druid">What is Druid?<a id="what-is-druid"></a></h1> |
| |
| <p>Apache Druid (incubating) is a data store designed for high-performance slice-and-dice analytics |
| ("<a href="http://en.wikipedia.org/wiki/Online_analytical_processing">OLAP</a>"-style) on large data sets. Druid is most often |
| used as a data store for powering GUI analytical applications, or as a backend for highly-concurrent APIs that need |
| fast aggregations. Common application areas for Druid include:</p> |
| |
| <ul> |
| <li>Clickstream analytics</li> |
| <li>Network flow analytics</li> |
| <li>Server metrics storage</li> |
| <li>Application performance metrics</li> |
| <li>Digital marketing analytics</li> |
| <li>Business intelligence / OLAP</li> |
| </ul> |
| |
| <p>Druid's key features are:</p> |
| |
| <ol> |
<li><strong>Columnar storage format.</strong> Druid uses column-oriented storage, meaning it only needs to load the exact columns
needed for a particular query. This gives a huge speed boost to queries that hit only a few columns. In addition, each
column is stored in a layout optimized for its particular data type, which supports fast scans and aggregations.</li>
| <li><strong>Scalable distributed system.</strong> Druid is typically deployed in clusters of tens to hundreds of servers, and can |
| offer ingest rates of millions of records/sec, retention of trillions of records, and query latencies of sub-second to a |
| few seconds.</li> |
| <li><strong>Massively parallel processing.</strong> Druid can process a query in parallel across the entire cluster.</li> |
<li><strong>Real-time or batch ingestion.</strong> Druid can ingest data either in real time (ingested data is immediately available for
querying) or in batches.</li>
| <li><strong>Self-healing, self-balancing, easy to operate.</strong> As an operator, to scale the cluster out or in, simply add or |
| remove servers and the cluster will rebalance itself automatically, in the background, without any downtime. If any |
| Druid servers fail, the system will automatically route around the damage until those servers can be replaced. Druid |
| is designed to run 24/7 with no need for planned downtimes for any reason, including configuration changes and software |
| updates.</li> |
| <li><strong>Cloud-native, fault-tolerant architecture that won't lose data.</strong> Once Druid has ingested your data, a copy is |
| stored safely in <a href="#deep-storage">deep storage</a> (typically cloud storage, HDFS, or a shared filesystem). Your data can be |
| recovered from deep storage even if every single Druid server fails. For more limited failures affecting just a few |
| Druid servers, replication ensures that queries are still possible while the system recovers.</li> |
| <li><strong>Indexes for quick filtering.</strong> Druid uses <a href="https://arxiv.org/pdf/1004.0403">CONCISE</a> or |
| <a href="https://roaringbitmap.org/">Roaring</a> compressed bitmap indexes to create indexes that power fast filtering and |
| searching across multiple columns.</li> |
| <li><strong>Approximate algorithms.</strong> Druid includes algorithms for approximate count-distinct, approximate ranking, and |
| computation of approximate histograms and quantiles. These algorithms offer bounded memory usage and are often |
| substantially faster than exact computations. For situations where accuracy is more important than speed, Druid also |
| offers exact count-distinct and exact ranking.</li> |
<li><strong>Automatic summarization at ingest time.</strong> Druid optionally supports data summarization at ingestion time. This
summarization partially pre-aggregates your data, and can lead to big cost savings and performance boosts.</li>
| </ol> |
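<p>To make the ingest-time summarization concrete, here is a minimal Python sketch (not Druid code; the row layout and metric names are invented for illustration) of how pre-aggregation collapses raw events into fewer summarized rows:</p>

```python
from collections import defaultdict

def rollup(rows, granularity_seconds=3600):
    """Pre-aggregate raw events: rows that share the same truncated
    timestamp and dimension values collapse into a single summary row."""
    summary = defaultdict(lambda: {"count": 0, "bytes": 0})
    for row in rows:
        bucket = row["ts"] - row["ts"] % granularity_seconds  # truncate time
        key = (bucket, row["domain"])
        summary[key]["count"] += 1
        summary[key]["bytes"] += row["bytes"]
    return {k: dict(v) for k, v in summary.items()}

raw = [
    {"ts": 1000, "domain": "a.com", "bytes": 10},
    {"ts": 2000, "domain": "a.com", "bytes": 20},  # same hour, same domain
    {"ts": 1500, "domain": "b.com", "bytes": 5},
]
summarized = rollup(raw)
# Three raw rows roll up into two summary rows.
```

<p>Queries over the summarized rows return the same aggregates as over the raw events at a fraction of the storage and scan cost, which is why roll-up pays off when individual events are rarely needed.</p>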
| |
| <h1 id="when-should-i-use-druid">When should I use Druid?<a id="when-to-use-druid"></a></h1> |
| |
| <p>Druid is likely a good choice if your use case fits a few of the following descriptors:</p> |
| |
| <ul> |
| <li>Insert rates are very high, but updates are less common.</li> |
| <li>Most of your queries are aggregation and reporting queries ("group by" queries). You may also have searching and |
| scanning queries.</li> |
| <li>You are targeting query latencies of 100ms to a few seconds.</li> |
| <li>Your data has a time component (Druid includes optimizations and design choices specifically related to time).</li> |
| <li>You may have more than one table, but each query hits just one big distributed table. Queries may potentially hit more |
| than one smaller "lookup" table.</li> |
| <li>You have high cardinality data columns (e.g. URLs, user IDs) and need fast counting and ranking over them.</li> |
| <li>You want to load data from Kafka, HDFS, flat files, or object storage like Amazon S3.</li> |
| </ul> |
| |
| <p>Situations where you would likely <em>not</em> want to use Druid include:</p> |
| |
| <ul> |
| <li>You need low-latency updates of <em>existing</em> records using a primary key. Druid supports streaming inserts, but not streaming updates (updates are done using |
| background batch jobs).</li> |
| <li>You are building an offline reporting system where query latency is not very important.</li> |
| <li>You want to do "big" joins (joining one big fact table to another big fact table).</li> |
| </ul> |
| |
| <h1 id="architecture">Architecture</h1> |
| |
| <p>Druid has a multi-process, distributed architecture that is designed to be cloud-friendly and easy to operate. Each |
| Druid process type can be configured and scaled independently, giving you maximum flexibility over your cluster. This |
| design also provides enhanced fault tolerance: an outage of one component will not immediately affect other components.</p> |
| |
| <h2 id="processes-and-servers">Processes and Servers</h2> |
| |
| <p>Druid has several process types, briefly described below:</p> |
| |
| <ul> |
| <li><a href="../design/coordinator.html"><strong>Coordinator</strong></a> processes manage data availability on the cluster.</li> |
| <li><a href="../design/overlord.html"><strong>Overlord</strong></a> processes control the assignment of data ingestion workloads.</li> |
| <li><a href="../design/broker.html"><strong>Broker</strong></a> processes handle queries from external clients.</li> |
| <li><a href="../development/router.html"><strong>Router</strong></a> processes are optional processes that can route requests to Brokers, Coordinators, and Overlords.</li> |
| <li><a href="../design/historical.html"><strong>Historical</strong></a> processes store queryable data.</li> |
| <li><a href="../design/middlemanager.html"><strong>MiddleManager</strong></a> processes are responsible for ingesting data.</li> |
| </ul> |
| |
| <p>Druid processes can be deployed any way you like, but for ease of deployment we suggest organizing them into three server types: Master, Query, and Data.</p> |
| |
| <ul> |
| <li><strong>Master</strong>: Runs Coordinator and Overlord processes, manages data availability and ingestion.</li> |
| <li><strong>Query</strong>: Runs Broker and optional Router processes, handles queries from external clients.</li> |
| <li><strong>Data</strong>: Runs Historical and MiddleManager processes, executes ingestion workloads and stores all queryable data.</li> |
| </ul> |
| |
<p>For more details on process and server organization, please see <a href="../design/processes.html">Druid Processes and Servers</a>.</p>
| |
| <h3 id="external-dependencies">External dependencies</h3> |
| |
<p>In addition to its built-in process types, Druid also has three external dependencies. These are intended to
leverage existing infrastructure, where present.</p>
| |
| <h4 id="deep-storage">Deep storage</h4> |
| |
<p>Shared file storage accessible by every Druid server. This is typically a distributed object store like S3, a
distributed filesystem like HDFS, or a network-mounted filesystem. Druid uses this to store any data that has been
ingested into the system.</p>
| |
| <p>Druid uses deep storage only as a backup of your data and as a way to transfer data in the background between |
| Druid processes. To respond to queries, Historical processes do not read from deep storage, but instead read pre-fetched |
| segments from their local disks before any queries are served. This means that Druid never needs to access deep storage |
| during a query, helping it offer the best query latencies possible. It also means that you must have enough disk space |
| both in deep storage and across your Historical processes for the data you plan to load.</p> |
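<p>The sizing note above is simple arithmetic. A hedged sketch (the replication factor and data size are example values, not Druid defaults):</p>

```python
def historical_disk_needed(deep_storage_bytes, replication_factor):
    """Historicals pre-fetch every replica of every loaded segment onto
    local disk, so total Historical capacity across the cluster must
    cover data size multiplied by the replication factor."""
    return deep_storage_bytes * replication_factor

# Example: 10 TiB of segments, each segment served by 2 Historicals.
needed_bytes = historical_disk_needed(10 * 1024**4, 2)
```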
| |
| <p>For more details, please see <a href="../dependencies/deep-storage.html">Deep storage dependency</a>.</p> |
| |
| <h4 id="metadata-storage">Metadata storage</h4> |
| |
| <p>The metadata storage holds various shared system metadata such as segment availability information and task information. This is typically going to be a traditional RDBMS |
| like PostgreSQL or MySQL. </p> |
| |
| <p>For more details, please see <a href="../dependencies/metadata-storage.html">Metadata storage dependency</a></p> |
| |
| <h4 id="zookeeper">Zookeeper</h4> |
| |
| <p>Used for internal service discovery, coordination, and leader election.</p> |
| |
| <p>For more details, please see <a href="../dependencies/zookeeper.html">Zookeeper dependency</a>.</p> |
| |
| <p>The idea behind this architecture is to make a Druid cluster simple to operate in production at scale. For example, the |
| separation of deep storage and the metadata store from the rest of the cluster means that Druid processes are radically |
| fault tolerant: even if every single Druid server fails, you can still relaunch your cluster from data stored in deep |
| storage and the metadata store.</p> |
| |
| <h3 id="architecture-diagram">Architecture diagram</h3> |
| |
| <p>The following diagram shows how queries and data flow through this architecture, using the suggested Master/Query/Data server organization:</p> |
| |
| <p><img src="../../img/druid-architecture.png" width="800"/></p> |
| |
| <h1 id="datasources-and-segments">Datasources and segments</h1> |
| |
| <p>Druid data is stored in "datasources", which are similar to tables in a traditional RDBMS. Each datasource is |
| partitioned by time and, optionally, further partitioned by other attributes. Each time range is called a "chunk" (for |
| example, a single day, if your datasource is partitioned by day). Within a chunk, data is partitioned into one or more |
| "segments". Each segment is a single file, typically comprising up to a few million rows of data. Since segments are |
| organized into time chunks, it's sometimes helpful to think of segments as living on a timeline like the following:</p> |
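<p>As an illustration of the partitioning scheme (not Druid's internal API), mapping an event timestamp to its day-granularity chunk looks like this:</p>

```python
from datetime import datetime, timedelta, timezone

def day_chunk(ts):
    """Return the [start, end) day interval that contains the timestamp."""
    start = datetime(ts.year, ts.month, ts.day, tzinfo=timezone.utc)
    return (start, start + timedelta(days=1))

event = datetime(2020, 1, 15, 13, 45, tzinfo=timezone.utc)
chunk = day_chunk(event)
# chunk spans 2020-01-15T00:00Z (inclusive) to 2020-01-16T00:00Z (exclusive)
```

<p>Every event that maps to the same chunk interval lands in the same set of segments, which is what makes time-based pruning possible at query time.</p>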
| |
| <p><img src="../../img/druid-timeline.png" width="800" /></p> |
| |
<p>A datasource may have anywhere from just a few segments up to hundreds of thousands, or even millions, of segments. Each
segment starts its life on a MiddleManager and, at that point, is mutable and uncommitted. The segment
building process includes the following steps, designed to produce a data file that is compact and supports fast
queries:</p>
| |
| <ul> |
| <li>Conversion to columnar format</li> |
| <li>Indexing with bitmap indexes</li> |
| <li>Compression using various algorithms |
| |
| <ul> |
| <li>Dictionary encoding with id storage minimization for String columns</li> |
| <li>Bitmap compression for bitmap indexes</li> |
| <li>Type-aware compression for all columns</li> |
| </ul></li> |
| </ul> |
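<p>A toy Python sketch of the first two ideas, dictionary encoding and bitmap indexing for a String column (illustrative only; Druid's real encodings are more compact and use compressed bitmaps):</p>

```python
def encode_column(values):
    """Dictionary-encode a string column and build one bitmap per value."""
    dictionary = sorted(set(values))                # value -> small integer id
    ids = {v: i for i, v in enumerate(dictionary)}
    encoded = [ids[v] for v in values]              # column stored as integers
    bitmaps = {v: [1 if x == v else 0 for x in values] for v in dictionary}
    return dictionary, encoded, bitmaps

dictionary, encoded, bitmaps = encode_column(
    ["sf", "ny", "sf", "la", "ny", "sf"])
# Filtering "city = 'sf'" only scans the precomputed bitmap for that value.
sf_rows = [i for i, bit in enumerate(bitmaps["sf"]) if bit]
```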
| |
| <p>Periodically, segments are committed and published. At this point, they are written to <a href="#deep-storage">deep storage</a>, |
| become immutable, and move from MiddleManagers to the Historical processes (see <a href="#architecture">Architecture</a> above |
| for details). An entry about the segment is also written to the <a href="#metadata-storage">metadata store</a>. This entry is a |
| self-describing bit of metadata about the segment, including things like the schema of the segment, its size, and its |
| location on deep storage. These entries are what the Coordinator uses to know what data <em>should</em> be available on the |
| cluster.</p> |
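<p>A hypothetical sketch of the kind of self-describing entry written to the metadata store (the field names here are invented for illustration; the real schema is described in the metadata storage documentation):</p>

```python
segment_entry = {
    "datasource": "clickstream",
    "interval": "2020-01-15T00:00:00Z/2020-01-16T00:00:00Z",
    "size_bytes": 52_428_800,
    "schema": {"dimensions": ["domain", "country"],
               "metrics": ["count", "bytes_sum"]},
    "deep_storage_location": "s3://example-bucket/clickstream/2020-01-15.zip",
}
# The Coordinator reads entries like this to decide which segments
# *should* be available on the cluster, without touching the data itself.
```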
| |
| <h1 id="query-processing">Query processing</h1> |
| |
<p>Queries first enter the Broker, which identifies the segments that may contain data pertaining to the query.
The list of segments is always pruned by time, and may also be pruned by other attributes depending on how your
datasource is partitioned. The Broker then identifies which Historicals and MiddleManagers are serving those segments
and sends a rewritten subquery to each of those processes. The Historical and MiddleManager processes take in the
subqueries, process them, and return results. The Broker merges the results into the final answer, which it returns
to the original caller.</p>
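<p>The time pruning described above boils down to an interval-overlap check across the segment timeline. A minimal sketch, with segments and the query expressed as half-open intervals (hypothetical structures, not the Broker's actual ones):</p>

```python
def prune_segments(segments, query_start, query_end):
    """Keep only segments whose [start, end) interval overlaps the query."""
    return [s for s in segments
            if s["start"] < query_end and s["end"] > query_start]

segments = [
    {"id": "day1", "start": 0, "end": 24},
    {"id": "day2", "start": 24, "end": 48},
    {"id": "day3", "start": 48, "end": 72},
]
# A query over hours [30, 60) only needs day2 and day3.
hit = prune_segments(segments, 30, 60)
```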
| |
| <p>Broker pruning is an important way that Druid limits the amount of data that must be scanned for each query, but it is |
| not the only way. For filters at a more granular level than what the Broker can use for pruning, indexing structures |
| inside each segment allow Druid to figure out which (if any) rows match the filter set before looking at any row of |
| data. Once Druid knows which rows match a particular query, it only accesses the specific columns it needs for that |
| query. Within those columns, Druid can skip from row to row, avoiding reading data that doesn't match the query filter.</p> |
| |
| <p>So Druid uses three different techniques to maximize query performance:</p> |
| |
| <ul> |
| <li>Pruning which segments are accessed for each query.</li> |
| <li>Within each segment, using indexes to identify which rows must be accessed.</li> |
| <li>Within each segment, only reading the specific rows and columns that are relevant to a particular query.</li> |
| </ul> |
| |
| </div> |
| <div class="col-md-3"> |
| <div class="searchbox"> |
| <gcse:searchbox-only></gcse:searchbox-only> |
| </div> |
| <div id="toc" class="nav toc hidden-print"> |
| </div> |
| </div> |
| </div> |
| </div> |
| |
| <!-- Start page_footer include --> |
| <footer class="druid-footer"> |
| <div class="container"> |
| <div class="text-center"> |
| <p> |
| <a href="/technology">Technology</a> ·  |
| <a href="/use-cases">Use Cases</a> ·  |
| <a href="/druid-powered">Powered by Druid</a> ·  |
| <a href="/docs/latest/">Docs</a> ·  |
| <a href="/community/">Community</a> ·  |
| <a href="/downloads.html">Download</a> ·  |
| <a href="/faq">FAQ</a> |
| </p> |
| </div> |
| <div class="text-center"> |
| <a title="Join the user group" href="https://groups.google.com/forum/#!forum/druid-user" target="_blank"><span class="fa fa-comments"></span></a> ·  |
| <a title="Follow Druid" href="https://twitter.com/druidio" target="_blank"><span class="fab fa-twitter"></span></a> ·  |
| <a title="GitHub" href="https://github.com/apache/druid" target="_blank"><span class="fab fa-github"></span></a> |
| </div> |
| <div class="text-center license"> |
| Copyright © 2020 <a href="https://www.apache.org/" target="_blank">Apache Software Foundation</a>.<br> |
| Except where otherwise noted, licensed under <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA 4.0</a>.<br> |
| Apache Druid, Druid, and the Druid logo are either registered trademarks or trademarks of The Apache Software Foundation in the United States and other countries. |
| </div> |
| </div> |
| </footer> |
| |
| <script async src="https://www.googletagmanager.com/gtag/js?id=UA-131010415-1"></script> |
| <script> |
| window.dataLayer = window.dataLayer || []; |
| function gtag(){dataLayer.push(arguments);} |
| gtag('js', new Date()); |
| gtag('config', 'UA-131010415-1'); |
| </script> |
<script>
  function trackDownload(type, url) {
    gtag('event', 'download', { 'event_category': type, 'event_label': url });
  }
</script>
| <script src="//code.jquery.com/jquery.min.js"></script> |
| <script src="//maxcdn.bootstrapcdn.com/bootstrap/3.2.0/js/bootstrap.min.js"></script> |
| <script src="/assets/js/druid.js"></script> |
| <!-- stop page_footer include --> |
| |
| |
| <script> |
| $(function() { |
| $(".toc").load("/docs/0.14.0-incubating/toc.html"); |
| |
| // There is no way to tell when .gsc-input will be async loaded into the page so just try to set a placeholder until it works |
| var tries = 0; |
| var timer = setInterval(function() { |
| tries++; |
| if (tries > 300) clearInterval(timer); |
| var searchInput = $('input.gsc-input'); |
| if (searchInput.length) { |
| searchInput.attr('placeholder', 'Search'); |
| clearInterval(timer); |
| } |
| }, 100); |
| }); |
| </script> |
| </body> |
| </html> |