[{"title":"API reference","type":0,"sectionRef":"#","url":"/docs/27.0.0/api-reference/","content":"","keywords":""},{"title":"HTTP APIs","type":1,"pageTitle":"API reference","url":"/docs/27.0.0/api-reference/#http-apis","content":"Druid SQL queries to submit SQL queries using the Druid SQL API.SQL-based ingestion to submit SQL-based batch ingestion requests.JSON querying to submit JSON-based native queries.Tasks to manage data ingestion operations.Supervisors to manage supervisors for data ingestion lifecycle and data processing.Retention rules to define and manage data retention rules across datasources.Data management to manage data segments.Automatic compaction to optimize segment sizes after ingestion.Lookups to manage and modify key-value datasources.Service status to monitor components within the Druid cluster. Dynamic configuration to configure the behavior of the Coordinator and Overlord processes.Legacy metadata to retrieve datasource metadata. "},{"title":"Java APIs","type":1,"pageTitle":"API reference","url":"/docs/27.0.0/api-reference/#java-apis","content":"SQL JDBC driver to connect to Druid and make Druid SQL queries using the Avatica JDBC driver. "},{"title":"Dynamic configuration API","type":0,"sectionRef":"#","url":"/docs/27.0.0/api-reference/dynamic-configuration-api","content":"","keywords":""},{"title":"Coordinator dynamic configuration","type":1,"pageTitle":"Dynamic configuration API","url":"/docs/27.0.0/api-reference/dynamic-configuration-api#coordinator-dynamic-configuration","content":"See Coordinator Dynamic Configuration for details. Note that all interval URL parameters are ISO 8601 strings delimited by a _ instead of a /as in 2016-06-27_2016-06-28. GET /druid/coordinator/v1/config Retrieves current coordinator dynamic configuration. GET /druid/coordinator/v1/config/history?interval={interval}&count={count} Retrieves history of changes to overlord dynamic configuration. Accepts interval and count query string parameters to filter by interval and limit the number of results respectively. POST /druid/coordinator/v1/config Update overlord dynamic worker configuration. "},{"title":"Overlord dynamic configuration","type":1,"pageTitle":"Dynamic configuration API","url":"/docs/27.0.0/api-reference/dynamic-configuration-api#overlord-dynamic-configuration","content":"See Overlord Dynamic Configuration for details. Note that all interval URL parameters are ISO 8601 strings delimited by a _ instead of a /as in 2016-06-27_2016-06-28. GET /druid/indexer/v1/worker Retrieves current overlord dynamic configuration. GET /druid/indexer/v1/worker/history?interval={interval}&count={count} Retrieves history of changes to overlord dynamic configuration. Accepts interval and count query string parameters to filter by interval and limit the number of results respectively. GET /druid/indexer/v1/workers Retrieves a list of all the worker nodes in the cluster along with its metadata. GET /druid/indexer/v1/scaling Retrieves overlord scaling events if auto-scaling runners are in use. POST /druid/indexer/v1/worker Update overlord dynamic worker configuration. 
"},{"title":"Data management API","type":0,"sectionRef":"#","url":"/docs/27.0.0/api-reference/data-management-api","content":"","keywords":""},{"title":"Note for Coordinator's POST and DELETE APIs","type":1,"pageTitle":"Data management API","url":"/docs/27.0.0/api-reference/data-management-api#note-for-coordinators-post-and-delete-apis","content":"While segments may be enabled by issuing POST requests for the datasources, the Coordinator may again disable segments if they match any configured drop rules. Even if segments are enabled by these APIs, you must configure a load rule to load them onto Historical processes. If an indexing or kill task runs at the same time these APIs are invoked, the behavior is undefined. Some segments might be killed and others might be enabled. It's also possible that all segments might be disabled, but the indexing task can still read data from those segments and succeed. info Avoid using indexing or kill tasks and these APIs at the same time for the same datasource and time chunk. POST /druid/coordinator/v1/datasources/{dataSourceName} Marks as used all segments belonging to a datasource. Returns a JSON object of the form{"numChangedSegments": <number>} with the number of segments in the database whose state has been changed (that is, the segments were marked as used) as the result of this API call. POST /druid/coordinator/v1/datasources/{dataSourceName}/segments/{segmentId} Marks as used a segment of a datasource. Returns a JSON object of the form {"segmentStateChanged": <boolean>} with the boolean indicating if the state of the segment has been changed (that is, the segment was marked as used) as the result of this API call. POST /druid/coordinator/v1/datasources/{dataSourceName}/markUsed POST /druid/coordinator/v1/datasources/{dataSourceName}/markUnused Marks segments (un)used for a datasource by interval or set of segment Ids. When marking used only segments that are not overshadowed will be updated. The request payload contains the interval or set of segment IDs to be marked unused. Either interval or segment IDs should be provided, if both or none are provided in the payload, the API would throw an error (400 BAD REQUEST). Interval specifies the start and end times as IS0 8601 strings. interval=(start/end) where start and end both are inclusive and only the segments completely contained within the specified interval will be disabled, partially overlapping segments will not be affected. JSON Request Payload: Key\tDescription\tExampleinterval\tThe interval for which to mark segments unused\t"2015-09-12T03:00:00.000Z/2015-09-12T05:00:00.000Z" segmentIds\tSet of segment IDs to be marked unused\t["segmentId1", "segmentId2"] DELETE /druid/coordinator/v1/datasources/{dataSourceName} Marks as unused all segments belonging to a datasource. Returns a JSON object of the form{"numChangedSegments": <number>} with the number of segments in the database whose state has been changed (that is, the segments were marked as unused) as the result of this API call. DELETE /druid/coordinator/v1/datasources/{dataSourceName}/intervals/{interval}@Deprecated. /druid/coordinator/v1/datasources/{dataSourceName}?kill=true&interval={myInterval} Runs a Kill task for a given interval and datasource. DELETE /druid/coordinator/v1/datasources/{dataSourceName}/segments/{segmentId} Marks as unused a segment of a datasource. 
Returns a JSON object of the form {"segmentStateChanged": <boolean>} with the boolean indicating if the state of the segment has been changed (that is, the segment was marked as unused) as the result of this API call. "},{"title":"Automatic compaction API","type":0,"sectionRef":"#","url":"/docs/27.0.0/api-reference/automatic-compaction-api","content":"","keywords":""},{"title":"Automatic compaction status","type":1,"pageTitle":"Automatic compaction API","url":"/docs/27.0.0/api-reference/automatic-compaction-api#automatic-compaction-status","content":"GET /druid/coordinator/v1/compaction/progress?dataSource={dataSource} Returns the total size of segments awaiting compaction for the given dataSource. The specified dataSource must have automatic compaction enabled. GET /druid/coordinator/v1/compaction/status Returns the status and statistics from the latest auto-compaction run for all dataSources that have auto-compaction enabled. The response payload includes a list of latestStatus objects. Each latestStatus represents the status for a dataSource (which has/had auto-compaction enabled). The latestStatus object has the following keys: dataSource: name of the datasource for this status information. scheduleStatus: auto-compaction scheduling status; possible values are NOT_ENABLED and RUNNING. Returns RUNNING if the dataSource has an active auto-compaction config submitted; otherwise, returns NOT_ENABLED. bytesAwaitingCompaction: total bytes of this datasource waiting to be compacted by the auto-compaction (only considers intervals/segments that are eligible for auto-compaction). bytesCompacted: total bytes of this datasource that are already compacted with the spec set in the auto-compaction config. bytesSkipped: total bytes of this datasource that are skipped (not eligible for auto-compaction) by the auto-compaction. segmentCountAwaitingCompaction: total number of segments of this datasource waiting to be compacted by the auto-compaction (only considers intervals/segments that are eligible for auto-compaction). segmentCountCompacted: total number of segments of this datasource that are already compacted with the spec set in the auto-compaction config. segmentCountSkipped: total number of segments of this datasource that are skipped (not eligible for auto-compaction) by the auto-compaction. intervalCountAwaitingCompaction: total number of intervals of this datasource waiting to be compacted by the auto-compaction (only considers intervals/segments that are eligible for auto-compaction). intervalCountCompacted: total number of intervals of this datasource that are already compacted with the spec set in the auto-compaction config. intervalCountSkipped: total number of intervals of this datasource that are skipped (not eligible for auto-compaction) by the auto-compaction. GET /druid/coordinator/v1/compaction/status?dataSource={dataSource} Similar to the API /druid/coordinator/v1/compaction/status above but filters the response to only return information for the given dataSource. The dataSource must have auto-compaction enabled. "},{"title":"Automatic compaction configuration","type":1,"pageTitle":"Automatic compaction API","url":"/docs/27.0.0/api-reference/automatic-compaction-api#automatic-compaction-configuration","content":"GET /druid/coordinator/v1/config/compaction Returns all automatic compaction configs. GET /druid/coordinator/v1/config/compaction/{dataSource} Returns the automatic compaction config for a dataSource. 
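For illustration only, here is a minimal sketch of polling the compaction status and per-datasource compaction config endpoints described above. It assumes a Coordinator at localhost:8081 and a datasource named wikipedia; both values are hypothetical and should be replaced for your cluster.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CompactionStatusExample {
  public static void main(String[] args) throws Exception {
    String coordinator = "http://localhost:8081"; // assumed Coordinator address
    String dataSource = "wikipedia";              // assumed datasource name
    HttpClient client = HttpClient.newHttpClient();

    // Auto-compaction status (bytesAwaitingCompaction, segmentCountCompacted, and so on).
    HttpRequest status = HttpRequest.newBuilder(
        URI.create(coordinator + "/druid/coordinator/v1/compaction/status?dataSource=" + dataSource))
        .GET().build();
    System.out.println(client.send(status, HttpResponse.BodyHandlers.ofString()).body());

    // The automatic compaction config currently submitted for the datasource.
    HttpRequest config = HttpRequest.newBuilder(
        URI.create(coordinator + "/druid/coordinator/v1/config/compaction/" + dataSource))
        .GET().build();
    System.out.println(client.send(config, HttpResponse.BodyHandlers.ofString()).body());
  }
}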
GET /druid/coordinator/v1/config/compaction/{dataSource}/history?interval={interval}&count={count} Returns the history of the automatic compaction config for a dataSource. Optionally accepts interval and count query string parameters to filter by interval and limit the number of results respectively. If the dataSource does not exist or there is no compaction history for the dataSource, an empty list is returned. The response contains a list of objects with the following keys: globalConfig: A JSON object containing the automatic compaction config that applies to the entire cluster. compactionConfig: A JSON object containing the automatic compaction config for the datasource. auditInfo: A JSON object that contains information about the change made, such as the author, comment, and IP address. auditTime: The date and time when the change was made. POST /druid/coordinator/v1/config/compaction/taskslots?ratio={someRatio}&max={someMaxSlots} Updates the capacity for compaction tasks. ratio and max are used to limit the maximum number of compaction tasks: ratio is the ratio of compaction task slots to total task slots, and max is the maximum number of task slots for compaction tasks. The actual maximum number of compaction tasks is min(max, ratio * total task slots). Note that ratio and max are optional and can be omitted. If they are omitted, the default values (0.1 and unbounded) are used. POST /druid/coordinator/v1/config/compaction Creates or updates the automatic compaction config for a dataSource. See Automatic compaction dynamic configuration for configuration details. DELETE /druid/coordinator/v1/config/compaction/{dataSource} Removes the automatic compaction config for a dataSource. "},{"title":"Retention rules API","type":0,"sectionRef":"#","url":"/docs/27.0.0/api-reference/retention-rules-api","content":"","keywords":""},{"title":"Retention rules","type":1,"pageTitle":"Retention rules API","url":"/docs/27.0.0/api-reference/retention-rules-api#retention-rules","content":"Note that all interval URL parameters are ISO 8601 strings delimited by a _ instead of a / as in 2016-06-27_2016-06-28. GET /druid/coordinator/v1/rules Returns all rules as JSON objects for all datasources in the cluster, including the default datasource. GET /druid/coordinator/v1/rules/{dataSourceName} Returns all rules for a specified datasource. GET /druid/coordinator/v1/rules/{dataSourceName}?full Returns all rules for a specified datasource and includes the default datasource rules. GET /druid/coordinator/v1/rules/history?interval=<interval> Returns the audit history of rules for all datasources. The default value of interval can be specified by setting druid.audit.manager.auditHistoryMillis (1 week if not configured) in Coordinator runtime.properties. GET /druid/coordinator/v1/rules/history?count=<n> Returns the last n entries of the audit history of rules for all datasources. GET /druid/coordinator/v1/rules/{dataSourceName}/history?interval=<interval> Returns the audit history of rules for a specified datasource. The default value of interval can be specified by setting druid.audit.manager.auditHistoryMillis (1 week if not configured) in Coordinator runtime.properties. GET /druid/coordinator/v1/rules/{dataSourceName}/history?count=<n> Returns the last n entries of the audit history of rules for a specified datasource. POST /druid/coordinator/v1/rules/{dataSourceName} POST a list of rules in JSON form to update the rules for a datasource. Optional header parameters for auditing the config change can also be specified. 
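As an illustrative sketch only, the program below posts a rule set with the audit headers listed in the table that follows. It assumes a Coordinator at localhost:8081, a datasource named wikipedia, and a single loadForever rule with two replicas in the default tier; all of these values are hypothetical placeholders, not values mandated by the docs.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UpdateRetentionRulesExample {
  public static void main(String[] args) throws Exception {
    String coordinator = "http://localhost:8081"; // assumed Coordinator address
    String dataSource = "wikipedia";              // assumed datasource name

    // A single load-forever rule with two replicas in the default tier (illustrative only).
    String rules = "[{\"type\": \"loadForever\", \"tieredReplicants\": {\"_default_tier\": 2}}]";

    HttpRequest request = HttpRequest.newBuilder(
        URI.create(coordinator + "/druid/coordinator/v1/rules/" + dataSource))
        .header("Content-Type", "application/json")
        .header("X-Druid-Author", "jane.doe")             // optional audit header
        .header("X-Druid-Comment", "retain two replicas") // optional audit header
        .POST(HttpRequest.BodyPublishers.ofString(rules))
        .build();

    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode());
  }
}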
Header Param Name\tDescription\tDefault X-Druid-Author\tAuthor making the config change\t"" X-Druid-Comment\tComment describing the change being done\t"" "},{"title":"Legacy metadata API","type":0,"sectionRef":"#","url":"/docs/27.0.0/api-reference/legacy-metadata-api","content":"","keywords":""},{"title":"Segment loading","type":1,"pageTitle":"Legacy metadata API","url":"/docs/27.0.0/api-reference/legacy-metadata-api#segment-loading","content":"GET /druid/coordinator/v1/loadstatus Returns the percentage of segments actually loaded in the cluster versus segments that should be loaded in the cluster. GET /druid/coordinator/v1/loadstatus?simple Returns the number of segments left to load until segments that should be loaded in the cluster are available for queries. This does not include segment replication counts. GET /druid/coordinator/v1/loadstatus?full Returns the number of segments left to load in each tier until segments that should be loaded in the cluster are all available. This includes segment replication counts. GET /druid/coordinator/v1/loadstatus?full&computeUsingClusterView Returns the number of segments not yet loaded for each tier until all segments that should be loaded in the cluster are available. The result includes segment replication counts. It also factors in the number of available nodes that are of a service type that can load the segment when computing the number of segments remaining to load. A segment is considered fully loaded when: Druid has replicated it the number of times configured in the corresponding load rule, or the number of replicas for the segment in each tier where it is configured to be replicated equals the available nodes of a service type that are currently allowed to load the segment in the tier. GET /druid/coordinator/v1/loadqueue Returns the IDs of segments to load and drop for each Historical process. GET /druid/coordinator/v1/loadqueue?simple Returns the number of segments to load and drop, as well as the total segment load and drop size in bytes for each Historical process. GET /druid/coordinator/v1/loadqueue?full Returns the serialized JSON of segments to load and drop for each Historical process. "},{"title":"Segment loading by datasource","type":1,"pageTitle":"Legacy metadata API","url":"/docs/27.0.0/api-reference/legacy-metadata-api#segment-loading-by-datasource","content":"Note that all interval query parameters are ISO 8601 strings—for example, 2016-06-27/2016-06-28. Also note that these APIs only guarantee that the segments are available at the time of the call. Segments can still become missing afterward because of Historical process failures or other reasons. GET /druid/coordinator/v1/datasources/{dataSourceName}/loadstatus?forceMetadataRefresh={boolean}&interval={myInterval} Returns the percentage of segments actually loaded in the cluster versus segments that should be loaded in the cluster for the given datasource over the given interval (or last 2 weeks if interval is not given). forceMetadataRefresh is required to be set. Setting forceMetadataRefresh to true will force the coordinator to poll latest segment metadata from the metadata store (Note: forceMetadataRefresh=true refreshes Coordinator's metadata cache of all datasources. This can be a heavy operation in terms of the load on the metadata store but can be necessary to make sure that we verify all the latest segments' load status) Setting forceMetadataRefresh to false will use the metadata cached on the coordinator from the last force/periodic refresh. 
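For illustration only, a sketch of calling this per-datasource loadstatus endpoint. It assumes a Coordinator at localhost:8081 and a datasource named wikipedia (both hypothetical), uses forceMetadataRefresh=false to avoid the heavier metadata poll, and URL-encodes the interval query parameter.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class DatasourceLoadStatusExample {
  public static void main(String[] args) throws Exception {
    String coordinator = "http://localhost:8081"; // assumed Coordinator address
    String dataSource = "wikipedia";              // assumed datasource name
    String interval = URLEncoder.encode("2016-06-27/2016-06-28", StandardCharsets.UTF_8);

    // Use the Coordinator's cached metadata (forceMetadataRefresh=false) to keep the call cheap.
    HttpRequest request = HttpRequest.newBuilder(
        URI.create(coordinator + "/druid/coordinator/v1/datasources/" + dataSource
            + "/loadstatus?forceMetadataRefresh=false&interval=" + interval))
        .GET().build();

    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode() + " " + response.body());
  }
}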
If no used segments are found for the given inputs, this API returns 204 No Content. GET /druid/coordinator/v1/datasources/{dataSourceName}/loadstatus?simple&forceMetadataRefresh={boolean}&interval={myInterval} Returns the number of segments left to load until segments that should be loaded in the cluster are available for the given datasource over the given interval (or last 2 weeks if interval is not given). This does not include segment replication counts. forceMetadataRefresh is required to be set. Setting forceMetadataRefresh to true will force the coordinator to poll latest segment metadata from the metadata store (Note: forceMetadataRefresh=true refreshes Coordinator's metadata cache of all datasources. This can be a heavy operation in terms of the load on the metadata store but can be necessary to make sure that we verify all the latest segments' load status) Setting forceMetadataRefresh to false will use the metadata cached on the coordinator from the last force/periodic refresh. If no used segments are found for the given inputs, this API returns 204 No Content. GET /druid/coordinator/v1/datasources/{dataSourceName}/loadstatus?full&forceMetadataRefresh={boolean}&interval={myInterval} Returns the number of segments left to load in each tier until segments that should be loaded in the cluster are all available for the given datasource over the given interval (or last 2 weeks if interval is not given). This includes segment replication counts. forceMetadataRefresh is required to be set. Setting forceMetadataRefresh to true will force the coordinator to poll latest segment metadata from the metadata store (Note: forceMetadataRefresh=true refreshes Coordinator's metadata cache of all datasources. This can be a heavy operation in terms of the load on the metadata store but can be necessary to make sure that we verify all the latest segments' load status) Setting forceMetadataRefresh to false will use the metadata cached on the coordinator from the last force/periodic refresh. You can pass the optional query parameter computeUsingClusterView to factor in the available cluster services when calculating the segments left to load. See Coordinator Segment Loading for details. If no used segments are found for the given inputs, this API returns 204 No Content. "},{"title":"Metadata store information","type":1,"pageTitle":"Legacy metadata API","url":"/docs/27.0.0/api-reference/legacy-metadata-api#metadata-store-information","content":"info Note: Much of this information is available in a simpler, easier-to-use form through the Druid SQL sys.segments table. GET /druid/coordinator/v1/metadata/segments Returns a list of all segments for each datasource enabled in the cluster. GET /druid/coordinator/v1/metadata/segments?datasources={dataSourceName1}&datasources={dataSourceName2} Returns a list of all segments for one or more specific datasources enabled in the cluster. GET /druid/coordinator/v1/metadata/segments?includeOvershadowedStatus Returns a list of all segments for each datasource with the full segment metadata and an extra field overshadowed. GET /druid/coordinator/v1/metadata/segments?includeOvershadowedStatus&datasources={dataSourceName1}&datasources={dataSourceName2} Returns a list of all segments for one or more specific datasources with the full segment metadata and an extra field overshadowed. GET /druid/coordinator/v1/metadata/datasources Returns a list of the names of datasources with at least one used segment in the cluster, retrieved from the metadata database. 
Users should call this API to get the eventual state that the system will be in. GET /druid/coordinator/v1/metadata/datasources?includeUnused Returns a list of the names of datasources, regardless of whether there are used segments belonging to those datasources in the cluster or not. GET /druid/coordinator/v1/metadata/datasources?includeDisabled Returns a list of the names of datasources, regardless of whether the datasource is disabled or not. GET /druid/coordinator/v1/metadata/datasources?full Returns a list of all datasources with at least one used segment in the cluster. Returns all metadata about those datasources as stored in the metadata store. GET /druid/coordinator/v1/metadata/datasources/{dataSourceName} Returns full metadata for a datasource as stored in the metadata store. GET /druid/coordinator/v1/metadata/datasources/{dataSourceName}/segments Returns a list of all segments for a datasource as stored in the metadata store. GET /druid/coordinator/v1/metadata/datasources/{dataSourceName}/segments?full Returns a list of all segments for a datasource with the full segment metadata as stored in the metadata store. GET /druid/coordinator/v1/metadata/datasources/{dataSourceName}/segments/{segmentId} Returns full segment metadata for a specific segment as stored in the metadata store, if the segment is used. If the segment is unused, or is unknown, a 404 response is returned. GET /druid/coordinator/v1/metadata/datasources/{dataSourceName}/segments Returns a list of all segments, overlapping with any of the given intervals, for a datasource as stored in the metadata store. The request body is an array of ISO 8601 interval strings like [interval1, interval2,...]—for example, ["2012-01-01T00:00:00.000/2012-01-03T00:00:00.000", "2012-01-05T00:00:00.000/2012-01-07T00:00:00.000"]. GET /druid/coordinator/v1/metadata/datasources/{dataSourceName}/segments?full Returns a list of all segments, overlapping with any of the given intervals, for a datasource with the full segment metadata as stored in the metadata store. The request body is an array of ISO 8601 interval strings like [interval1, interval2,...]—for example, ["2012-01-01T00:00:00.000/2012-01-03T00:00:00.000", "2012-01-05T00:00:00.000/2012-01-07T00:00:00.000"]. "},{"title":"Datasources","type":1,"pageTitle":"Legacy metadata API","url":"/docs/27.0.0/api-reference/legacy-metadata-api#datasources","content":"Note that all interval URL parameters are ISO 8601 strings delimited by a _ instead of a /—for example, 2016-06-27_2016-06-28. GET /druid/coordinator/v1/datasources Returns a list of datasource names found in the cluster as seen by the coordinator. This view is updated every druid.coordinator.period. GET /druid/coordinator/v1/datasources?simple Returns a list of JSON objects containing the name and properties of datasources found in the cluster. Properties include segment count, total segment byte size, replicated total segment byte size, minTime, and maxTime. GET /druid/coordinator/v1/datasources?full Returns a list of datasource names found in the cluster with all metadata about those datasources. GET /druid/coordinator/v1/datasources/{dataSourceName} Returns a JSON object containing the name and properties of a datasource. Properties include segment count, total segment byte size, replicated total segment byte size, minTime, and maxTime. GET /druid/coordinator/v1/datasources/{dataSourceName}?full Returns full metadata for a datasource. GET /druid/coordinator/v1/datasources/{dataSourceName}/intervals Returns a set of segment intervals. 
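As a quick illustration only, the sketch below lists a datasource's segment intervals using the endpoint just described; the ?simple and ?full variants that follow return progressively more detail. It assumes a Coordinator at localhost:8081 and a datasource named wikipedia, both hypothetical.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListDatasourceIntervalsExample {
  public static void main(String[] args) throws Exception {
    String coordinator = "http://localhost:8081"; // assumed Coordinator address
    String dataSource = "wikipedia";              // assumed datasource name

    HttpRequest request = HttpRequest.newBuilder(
        URI.create(coordinator + "/druid/coordinator/v1/datasources/" + dataSource + "/intervals"))
        .GET().build();

    // Prints the set of segment intervals for the datasource, serialized as JSON.
    System.out.println(HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString()).body());
  }
}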
GET /druid/coordinator/v1/datasources/{dataSourceName}/intervals?simple Returns a map of an interval to a JSON object containing the total byte size of segments and number of segments for that interval. GET /druid/coordinator/v1/datasources/{dataSourceName}/intervals?full Returns a map of an interval to a map of segment metadata to a set of server names that contain the segment for that interval. GET /druid/coordinator/v1/datasources/{dataSourceName}/intervals/{interval} Returns a set of segment ids for an interval. GET /druid/coordinator/v1/datasources/{dataSourceName}/intervals/{interval}?simple Returns a map of segment intervals contained within the specified interval to a JSON object containing the total byte size of segments and number of segments for an interval. GET /druid/coordinator/v1/datasources/{dataSourceName}/intervals/{interval}?full Returns a map of segment intervals contained within the specified interval to a map of segment metadata to a set of server names that contain the segment for an interval. GET /druid/coordinator/v1/datasources/{dataSourceName}/intervals/{interval}/serverview Returns a map of segment intervals contained within the specified interval to information about the servers that contain the segment for an interval. GET /druid/coordinator/v1/datasources/{dataSourceName}/segments Returns a list of all segments for a datasource in the cluster. GET /druid/coordinator/v1/datasources/{dataSourceName}/segments?full Returns a list of all segments for a datasource in the cluster with the full segment metadata. GET /druid/coordinator/v1/datasources/{dataSourceName}/segments/{segmentId} Returns full segment metadata for a specific segment in the cluster. GET /druid/coordinator/v1/datasources/{dataSourceName}/tiers Returns the tiers that a datasource exists in. "},{"title":"Intervals","type":1,"pageTitle":"Legacy metadata API","url":"/docs/27.0.0/api-reference/legacy-metadata-api#intervals","content":"Note that all interval URL parameters are ISO 8601 strings delimited by a _ instead of a / as in 2016-06-27_2016-06-28. GET /druid/coordinator/v1/intervals Returns all intervals for all datasources with total size and count. GET /druid/coordinator/v1/intervals/{interval} Returns aggregated total size and count for all intervals that intersect the given ISO interval. GET /druid/coordinator/v1/intervals/{interval}?simple Returns total size and count for each interval within the given ISO interval. GET /druid/coordinator/v1/intervals/{interval}?full Returns total size and count for each datasource for each interval within the given ISO interval. "},{"title":"Server information","type":1,"pageTitle":"Legacy metadata API","url":"/docs/27.0.0/api-reference/legacy-metadata-api#server-information","content":"GET /druid/coordinator/v1/servers Returns a list of server URLs using the format {hostname}:{port}. Note that processes running different service types will appear multiple times with different ports. GET /druid/coordinator/v1/servers?simple Returns a list of server data objects in which each object has the following keys: host: host URL in the form {hostname}:{port}. type: process type (indexer-executor, historical). currSize: storage size currently used. maxSize: maximum storage size. priority. tier. "},{"title":"Query server","type":1,"pageTitle":"Legacy metadata API","url":"/docs/27.0.0/api-reference/legacy-metadata-api#query-server","content":"This section documents the API endpoints for the processes that reside on Query servers (Brokers) in the suggested three-server configuration. 
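As an aside, and for illustration only, the sketch below reads the Coordinator server list described above (GET /druid/coordinator/v1/servers?simple) before moving on to the Broker endpoints; it assumes a Coordinator at localhost:8081.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListServersExample {
  public static void main(String[] args) throws Exception {
    String coordinator = "http://localhost:8081"; // assumed Coordinator address

    // ?simple returns one object per server with host, type, currSize, maxSize, priority, and tier.
    HttpRequest request = HttpRequest.newBuilder(
        URI.create(coordinator + "/druid/coordinator/v1/servers?simple"))
        .GET().build();

    System.out.println(HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString()).body());
  }
}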
"},{"title":"Broker","type":1,"pageTitle":"Legacy metadata API","url":"/docs/27.0.0/api-reference/legacy-metadata-api#broker","content":"Datasource information Note that all interval URL parameters are ISO 8601 strings delimited by a _ instead of a /as in 2016-06-27_2016-06-28. info Note: Much of this information is available in a simpler, easier-to-use form through the Druid SQLINFORMATION_SCHEMA.TABLES,INFORMATION_SCHEMA.COLUMNS, andsys.segments tables. GET /druid/v2/datasources Returns a list of queryable datasources. GET /druid/v2/datasources/{dataSourceName} Returns the dimensions and metrics of the datasource. Optionally, you can provide request parameter "full" to get list of served intervals with dimensions and metrics being served for those intervals. You can also provide request param "interval" explicitly to refer to a particular interval. If no interval is specified, a default interval spanning a configurable period before the current time will be used. The default duration of this interval is specified in ISO 8601 duration format via: druid.query.segmentMetadata.defaultHistory GET /druid/v2/datasources/{dataSourceName}/dimensions info This API is deprecated and will be removed in future releases. Please use SegmentMetadataQuery instead which provides more comprehensive information and supports all dataSource types including streaming dataSources. It's also encouraged to use INFORMATION_SCHEMA tablesif you're using SQL. Returns the dimensions of the datasource. GET /druid/v2/datasources/{dataSourceName}/metrics info This API is deprecated and will be removed in future releases. Please use SegmentMetadataQuery instead which provides more comprehensive information and supports all dataSource types including streaming dataSources. It's also encouraged to use INFORMATION_SCHEMA tablesif you're using SQL. Returns the metrics of the datasource. GET /druid/v2/datasources/{dataSourceName}/candidates?intervals={comma-separated-intervals}&numCandidates={numCandidates} Returns segment information lists including server locations for the given datasource and intervals. If "numCandidates" is not specified, it will return all servers for each interval. "},{"title":"Apache Druid vs Elasticsearch","type":0,"sectionRef":"#","url":"/docs/27.0.0/comparisons/druid-vs-elasticsearch","content":"Apache Druid vs Elasticsearch We are not experts on search systems, if anything is incorrect about our portrayal, please let us know on the mailing list or via some other means. Elasticsearch is a search system based on Apache Lucene. It provides full text search for schema-free documents and provides access to raw event level data. Elasticsearch is increasingly adding more support for analytics and aggregations.Some members of the community have pointed out the resource requirements for data ingestion and aggregation in Elasticsearch is much higher than those of Druid. Elasticsearch also does not support data summarization/roll-up at ingestion time, which can compact the data that needs to be stored up to 100x with real-world data sets. This leads to Elasticsearch having greater storage requirements. Druid focuses on OLAP work flows. Druid is optimized for high performance (fast aggregation and ingestion) at low cost, and supports a wide range of analytic operations. Druid has some basic search support for structured event data, but does not support full text search. Druid also does not support completely unstructured data. 
Measures must be defined in a Druid schema such that summarization/roll-up can be done.","keywords":""},{"title":"Apache Druid vs. Key/Value Stores (HBase/Cassandra/OpenTSDB)","type":0,"sectionRef":"#","url":"/docs/27.0.0/comparisons/druid-vs-key-value","content":"Apache Druid vs. Key/Value Stores (HBase/Cassandra/OpenTSDB) Druid is highly optimized for scans and aggregations; it supports arbitrarily deep drill-downs into data sets. This same functionality is supported in key/value stores in two ways: pre-computing all permutations of possible user queries, or range scans on event data. When pre-computing results, the key is the exact parameters of the query, and the value is the result of the query. The queries return extremely quickly, but at the cost of flexibility, as ad-hoc exploratory queries are not possible with pre-computing every possible query permutation. Pre-computing all permutations of all ad-hoc queries leads to result sets that grow exponentially with the number of columns of a data set, and pre-computing queries for complex real-world data sets can require hours of pre-processing time. The other approach to using key/value stores for aggregations is to use the dimensions of an event as the key and the event measures as the value. Aggregations are done by issuing range scans on this data. Timeseries-specific databases such as OpenTSDB use this approach. One of the limitations here is that the key/value storage model does not have indexes for any kind of filtering other than prefix ranges, which can be used to filter a query down to a metric and time range, but cannot resolve complex predicates to narrow the exact data to scan. When the number of rows to scan gets large, this limitation can greatly reduce performance. It is also harder to achieve good locality with key/value stores because most don’t support pushing down aggregates to the storage layer. For arbitrary exploration of data (flexible data filtering), Druid's custom column format enables ad-hoc queries without pre-computation. The format also enables fast scans on columns, which is important for good aggregation performance.","keywords":""},{"title":"Apache Druid vs Kudu","type":0,"sectionRef":"#","url":"/docs/27.0.0/comparisons/druid-vs-kudu","content":"Apache Druid vs Kudu Kudu's storage format enables single row updates, whereas updates to existing Druid segments require recreating the segment, so theoretically the process for updating old values should be higher latency in Druid. However, Kudu's requirements for maintaining extra head space to store updates, as well as organizing data by ID instead of time, have the potential to introduce some extra latency and access of data that is not needed to answer a query at query time. Druid summarizes/rolls up data at ingestion time, which in practice significantly reduces the raw data that needs to be stored (up to 40 times on average) and significantly increases the performance of scanning raw data. Druid segments also contain bitmap indexes for fast filtering, which Kudu does not currently support. Druid's segment architecture is heavily geared towards fast aggregates and filters, and for OLAP workflows. Appends are very fast in Druid, whereas updates of older data are higher latency. This is by design, as the data Druid is good for is typically event data that does not need to be updated too frequently. Kudu supports arbitrary primary keys with uniqueness constraints, and efficient lookup by ranges of those keys. 
Kudu chooses not to include an execution engine, but supports sufficient operations to allow node-local processing by execution engines. This means that Kudu can support multiple frameworks on the same data (e.g., MR, Spark, and SQL). Druid includes its own query layer that allows it to push down aggregations and computations directly to data processes for faster query processing.","keywords":""},{"title":"SQL JDBC driver API","type":0,"sectionRef":"#","url":"/docs/27.0.0/api-reference/sql-jdbc","content":"","keywords":""},{"title":"Connection stickiness","type":1,"pageTitle":"SQL JDBC driver API","url":"/docs/27.0.0/api-reference/sql-jdbc#connection-stickiness","content":"Druid's JDBC server does not share connection state between Brokers. This means that if you're using JDBC and have multiple Druid Brokers, you should either connect to a specific Broker or use a load balancer with sticky sessions enabled. The Druid Router process provides connection stickiness when balancing JDBC requests, and can be used to achieve the necessary stickiness even with a normal non-sticky load balancer. Please see the Router documentation for more details. Note that the non-JDBC JSON over HTTP API is stateless and does not require stickiness. "},{"title":"Dynamic parameters","type":1,"pageTitle":"SQL JDBC driver API","url":"/docs/27.0.0/api-reference/sql-jdbc#dynamic-parameters","content":"You can use parameterized queries in JDBC code, as in this example: PreparedStatement statement = connection.prepareStatement("SELECT COUNT(*) AS cnt FROM druid.foo WHERE dim1 = ? OR dim1 = ?"); statement.setString(1, "abc"); statement.setString(2, "def"); final ResultSet resultSet = statement.executeQuery(); "},{"title":"Examples","type":1,"pageTitle":"SQL JDBC driver API","url":"/docs/27.0.0/api-reference/sql-jdbc#examples","content":"The following section contains two complete samples that use the JDBC connector: Get the metadata for a datasource shows you how to query the INFORMATION_SCHEMA to get metadata like column names. Query data runs a select query against the datasource. You can try out these examples after verifying that you meet the prerequisites. For more information about the connection options, see Client Reference. "},{"title":"Prerequisites","type":1,"pageTitle":"SQL JDBC driver API","url":"/docs/27.0.0/api-reference/sql-jdbc#prerequisites","content":"Make sure you meet the following requirements before trying these examples: A supported Java version. The Avatica JDBC driver; you can add the JAR to your CLASSPATH directly or manage it externally, such as through Maven and a pom.xml file. An available Druid instance. You can use the micro-quickstart configuration described in Quickstart (local). The examples assume that you are using the quickstart, so no authentication or authorization is expected unless explicitly mentioned. The example wikipedia datasource from the quickstart is loaded on your Druid instance. If you have a different datasource loaded, you can still try these examples. You'll have to update the table name and column names to match your datasource. "},{"title":"Get the metadata for a datasource","type":1,"pageTitle":"SQL JDBC driver API","url":"/docs/27.0.0/api-reference/sql-jdbc#get-the-metadata-for-a-datasource","content":"Metadata, such as column names, is available either through the INFORMATION_SCHEMA table or through connection.getMetaData(). 
The following example uses the INFORMATION_SCHEMA table to retrieve and print the list of column names for the wikipedia datasource that you loaded during a previous tutorial. import java.sql.*; import java.util.Properties; public class JdbcListColumns { public static void main(String[] args) { // Connect to /druid/v2/sql/avatica/ on your Router. // You can connect to a Broker but must configure connection stickiness if you do. String url = "jdbc:avatica:remote:url=http://localhost:8888/druid/v2/sql/avatica/"; String query = "SELECT COLUMN_NAME,* FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'wikipedia' and TABLE_SCHEMA='druid'"; // Set any connection context parameters you need here // Or leave empty for default behavior. Properties connectionProperties = new Properties(); try (Connection connection = DriverManager.getConnection(url, connectionProperties)) { try ( final Statement statement = connection.createStatement(); final ResultSet rs = statement.executeQuery(query) ) { while (rs.next()) { String columnName = rs.getString("COLUMN_NAME"); System.out.println(columnName); } } } catch (SQLException e) { throw new RuntimeException(e); } } } "},{"title":"Query data","type":1,"pageTitle":"SQL JDBC driver API","url":"/docs/27.0.0/api-reference/sql-jdbc#query-data","content":"Now that you know what columns are available, you can start querying the data. The following example queries the datasource named wikipedia for the timestamps and comments from Japan. It also sets the query context parameter sqlTimeZone. Optionally, you can also parameterize queries by using dynamic parameters. import java.sql.*; import java.util.Properties; public class JdbcCountryAndTime { public static void main(String[] args) { // Connect to /druid/v2/sql/avatica/ on your Router. // You can connect to a Broker but must configure connection stickiness if you do. String url = "jdbc:avatica:remote:url=http://localhost:8888/druid/v2/sql/avatica/"; //The query you want to run. String query = "SELECT __time, isRobot, countryName, comment FROM wikipedia WHERE countryName='Japan'"; // Set any connection context parameters you need here // Or leave empty for default behavior. Properties connectionProperties = new Properties(); connectionProperties.setProperty("sqlTimeZone", "America/Los_Angeles"); try (Connection connection = DriverManager.getConnection(url, connectionProperties)) { try ( final Statement statement = connection.createStatement(); final ResultSet rs = statement.executeQuery(query) ) { while (rs.next()) { Timestamp timeStamp = rs.getTimestamp("__time"); String comment = rs.getString("comment"); System.out.println(timeStamp); System.out.println(comment); } } } catch (SQLException e) { throw new RuntimeException(e); } } } "},{"title":"Lookups API","type":0,"sectionRef":"#","url":"/docs/27.0.0/api-reference/lookups-api","content":"","keywords":""},{"title":"Configure lookups","type":1,"pageTitle":"Lookups API","url":"/docs/27.0.0/api-reference/lookups-api#configure-lookups","content":""},{"title":"Bulk update","type":1,"pageTitle":"Lookups API","url":"/docs/27.0.0/api-reference/lookups-api#bulk-update","content":"Lookups can be updated in bulk by posting a JSON object to /druid/coordinator/v1/lookups/config. 
The format of the JSON object is as follows: { "<tierName>": { "<lookupName>": { "version": "<version>", "lookupExtractorFactory": { "type": "<someExtractorFactoryType>", "<someExtractorField>": "<someExtractorValue>" } } } } Note that "version" is an arbitrary string assigned by the user; when updating an existing lookup, the user must specify a lexicographically higher version. For example, a config might look something like: { "__default": { "country_code": { "version": "v0", "lookupExtractorFactory": { "type": "map", "map": { "77483": "United States" } } }, "site_id": { "version": "v0", "lookupExtractorFactory": { "type": "cachedNamespace", "extractionNamespace": { "type": "jdbc", "connectorConfig": { "createTables": true, "connectURI": "jdbc:mysql:\\/\\/localhost:3306\\/druid", "user": "druid", "password": "diurd" }, "table": "lookupTable", "keyColumn": "country_id", "valueColumn": "country_name", "tsColumn": "timeColumn" }, "firstCacheTimeout": 120000, "injective": true } }, "site_id_customer1": { "version": "v0", "lookupExtractorFactory": { "type": "map", "map": { "847632": "Internal Use Only" } } }, "site_id_customer2": { "version": "v0", "lookupExtractorFactory": { "type": "map", "map": { "AHF77": "Home" } } } }, "realtime_customer1": { "country_code": { "version": "v0", "lookupExtractorFactory": { "type": "map", "map": { "77483": "United States" } } }, "site_id_customer1": { "version": "v0", "lookupExtractorFactory": { "type": "map", "map": { "847632": "Internal Use Only" } } } }, "realtime_customer2": { "country_code": { "version": "v0", "lookupExtractorFactory": { "type": "map", "map": { "77483": "United States" } } }, "site_id_customer2": { "version": "v0", "lookupExtractorFactory": { "type": "map", "map": { "AHF77": "Home" } } } } } All entries in the map will UPDATE existing entries. No entries will be deleted. "},{"title":"Update lookup","type":1,"pageTitle":"Lookups API","url":"/docs/27.0.0/api-reference/lookups-api#update-lookup","content":"A POST to a particular lookup extractor factory via /druid/coordinator/v1/lookups/config/{tier}/{id} creates or updates that specific extractor factory. For example, a POST to /druid/coordinator/v1/lookups/config/realtime_customer1/site_id_customer1 might contain the following: { "version": "v1", "lookupExtractorFactory": { "type": "map", "map": { "847632": "Internal Use Only" } } } This will replace the site_id_customer1 lookup in the realtime_customer1 tier with the definition above. Assign a unique version identifier each time you update a lookup extractor factory. Otherwise, the call will fail. "},{"title":"Get all lookups","type":1,"pageTitle":"Lookups API","url":"/docs/27.0.0/api-reference/lookups-api#get-all-lookups","content":"A GET to /druid/coordinator/v1/lookups/config/all will return all known lookup specs for all tiers. "},{"title":"Get lookup","type":1,"pageTitle":"Lookups API","url":"/docs/27.0.0/api-reference/lookups-api#get-lookup","content":"A GET to a particular lookup extractor factory is accomplished via /druid/coordinator/v1/lookups/config/{tier}/{id} Using the prior example, a GET to /druid/coordinator/v1/lookups/config/realtime_customer2/site_id_customer2 should return { "version": "v1", "lookupExtractorFactory": { "type": "map", "map": { "AHF77": "Home" } } } "},{"title":"Delete lookup","type":1,"pageTitle":"Lookups API","url":"/docs/27.0.0/api-reference/lookups-api#delete-lookup","content":"A DELETE to /druid/coordinator/v1/lookups/config/{tier}/{id} will remove that lookup from the cluster. 
If it was the last lookup in the tier, then the tier is deleted as well. "},{"title":"Delete tier","type":1,"pageTitle":"Lookups API","url":"/docs/27.0.0/api-reference/lookups-api#delete-tier","content":"A DELETE to /druid/coordinator/v1/lookups/config/{tier} will remove that tier from the cluster. "},{"title":"List tier names","type":1,"pageTitle":"Lookups API","url":"/docs/27.0.0/api-reference/lookups-api#list-tier-names","content":"A GET to /druid/coordinator/v1/lookups/config will return a list of known tier names in the dynamic configuration. To discover a list of tiers currently active in the cluster in addition to ones known in the dynamic configuration, the parameter discover=true can be added as in /druid/coordinator/v1/lookups/config?discover=true. "},{"title":"List lookup names","type":1,"pageTitle":"Lookups API","url":"/docs/27.0.0/api-reference/lookups-api#list-lookup-names","content":"A GET to /druid/coordinator/v1/lookups/config/{tier} will return a list of known lookup names for that tier. These endpoints can be used to get the propagation status of configured lookups to processes that use lookups, such as Historicals. "},{"title":"Lookup status","type":1,"pageTitle":"Lookups API","url":"/docs/27.0.0/api-reference/lookups-api#lookup-status","content":""},{"title":"List load status of all lookups","type":1,"pageTitle":"Lookups API","url":"/docs/27.0.0/api-reference/lookups-api#list-load-status-of-all-lookups","content":"GET /druid/coordinator/v1/lookups/status with optional query parameter detailed. "},{"title":"List load status of lookups in a tier","type":1,"pageTitle":"Lookups API","url":"/docs/27.0.0/api-reference/lookups-api#list-load-status-of-lookups-in-a-tier","content":"GET /druid/coordinator/v1/lookups/status/{tier} with optional query parameter detailed. "},{"title":"List load status of single lookup","type":1,"pageTitle":"Lookups API","url":"/docs/27.0.0/api-reference/lookups-api#list-load-status-of-single-lookup","content":"GET /druid/coordinator/v1/lookups/status/{tier}/{lookup} with optional query parameter detailed. "},{"title":"List lookup state of all processes","type":1,"pageTitle":"Lookups API","url":"/docs/27.0.0/api-reference/lookups-api#list-lookup-state-of-all-processes","content":"GET /druid/coordinator/v1/lookups/nodeStatus with the optional query parameter discover to discover tiers advertised by other Druid nodes; by default, all configured lookup tiers are returned. The default response will also include the lookups which are loaded, being loaded, or being dropped on each node, for each tier, including the complete lookup spec. Add the optional query parameter detailed=false to only include the 'version' of the lookup instead of the complete spec. "},{"title":"List lookup state of processes in a tier","type":1,"pageTitle":"Lookups API","url":"/docs/27.0.0/api-reference/lookups-api#list-lookup-state-of-processes-in-a-tier","content":"GET /druid/coordinator/v1/lookups/nodeStatus/{tier} "},{"title":"List lookup state of single process","type":1,"pageTitle":"Lookups API","url":"/docs/27.0.0/api-reference/lookups-api#list-lookup-state-of-single-process","content":"GET /druid/coordinator/v1/lookups/nodeStatus/{tier}/{host:port} "},{"title":"Internal API","type":1,"pageTitle":"Lookups API","url":"/docs/27.0.0/api-reference/lookups-api#internal-api","content":"The Peon, Router, Broker, and Historical processes all have the ability to consume lookup configuration. 
There is an internal API these processes use to list/load/drop their lookups starting at /druid/listen/v1/lookups. These follow the same convention for return values as the cluster-wide dynamic configuration. The following endpoints can be used for debugging purposes but not otherwise. "},{"title":"Get lookups","type":1,"pageTitle":"Lookups API","url":"/docs/27.0.0/api-reference/lookups-api#get-lookups","content":"A GET to the process at /druid/listen/v1/lookups will return a JSON map of all the lookups currently active on the process. The return value will be a JSON map of the lookups to their extractor factories. { "site_id_customer2": { "version": "v1", "lookupExtractorFactory": { "type": "map", "map": { "AHF77": "Home" } } } } "},{"title":"Get lookup","type":1,"pageTitle":"Lookups API","url":"/docs/27.0.0/api-reference/lookups-api#get-lookup-1","content":"A GET to the process at /druid/listen/v1/lookups/some_lookup_name will return the LookupExtractorFactory for the lookup identified by some_lookup_name. The return value will be the JSON representation of the factory. { "version": "v1", "lookupExtractorFactory": { "type": "map", "map": { "AHF77": "Home" } } } "},{"title":"Apache Druid vs Spark","type":0,"sectionRef":"#","url":"/docs/27.0.0/comparisons/druid-vs-spark","content":"Apache Druid vs Spark Druid and Spark are complementary solutions as Druid can be used to accelerate OLAP queries in Spark. Spark is a general cluster computing framework initially designed around the concept of Resilient Distributed Datasets (RDDs). RDDs enable data reuse by persisting intermediate results in memory and enable Spark to provide fast computations for iterative algorithms. This is especially beneficial for certain workflows such as machine learning, where the same operation may be applied over and over again until some result is converged upon. The generality of Spark makes it very suitable as an engine to process (clean or transform) data. Although Spark provides the ability to query data through Spark SQL, much like Hadoop, the query latencies are not specifically targeted to be interactive (sub-second). Druid focuses on extremely low latency queries and is ideal for powering applications used by thousands of users, where each query must return fast enough that users can interactively explore the data. Druid fully indexes all data, and can act as a middle layer between Spark and your application. One typical setup seen in production is to process data in Spark, and load the processed data into Druid for faster access. For more information about using Druid and Spark together, including benchmarks of the two systems, please see: https://www.linkedin.com/pulse/combining-druid-spark-interactive-flexible-analytics-scale-butani","keywords":""},{"title":"Apache Druid vs Redshift","type":0,"sectionRef":"#","url":"/docs/27.0.0/comparisons/druid-vs-redshift","content":"","keywords":""},{"title":"How does Druid compare to Redshift?","type":1,"pageTitle":"Apache Druid vs Redshift","url":"/docs/27.0.0/comparisons/druid-vs-redshift#how-does-druid-compare-to-redshift","content":"In terms of drawing a differentiation, Redshift started out as ParAccel (Actian), which Amazon is licensing and has since heavily modified. 
Aside from potential performance differences, there are some functional differences: "},{"title":"Real-time data ingestion","type":1,"pageTitle":"Apache Druid vs Redshift","url":"/docs/27.0.0/comparisons/druid-vs-redshift#real-time-data-ingestion","content":"Because Druid is optimized to provide insight against massive quantities of streaming data, it is able to load and aggregate data in real time. Generally, traditional data warehouses, including column stores, work only with batch ingestion and are not optimal for regularly streaming data in. "},{"title":"Druid is a read oriented analytical data store","type":1,"pageTitle":"Apache Druid vs Redshift","url":"/docs/27.0.0/comparisons/druid-vs-redshift#druid-is-a-read-oriented-analytical-data-store","content":"Druid’s write semantics are not as fluid, and it does not support full joins (we support large-table-to-small-table joins). Redshift provides full SQL support including joins and insert/update statements. "},{"title":"Data distribution model","type":1,"pageTitle":"Apache Druid vs Redshift","url":"/docs/27.0.0/comparisons/druid-vs-redshift#data-distribution-model","content":"Druid’s data distribution is segment-based and leverages a highly available "deep" storage such as S3 or HDFS. Scaling up (or down) does not require massive copy actions or downtime; in fact, losing any number of Historical processes does not result in data loss because new Historical processes can always be brought up by reading data from "deep" storage. To contrast, ParAccel’s data distribution model is hash-based. Expanding the cluster requires re-hashing the data across the nodes, making it difficult to perform without taking downtime. Amazon’s Redshift works around this issue with a multi-step process: set the cluster into read-only mode, copy data from the cluster to a new cluster that exists in parallel, then redirect traffic to the new cluster. "},{"title":"Replication strategy","type":1,"pageTitle":"Apache Druid vs Redshift","url":"/docs/27.0.0/comparisons/druid-vs-redshift#replication-strategy","content":"Druid employs segment-level data distribution, meaning that more processes can be added and rebalanced without having to perform a staged swap. The replication strategy also makes all replicas available for querying. Replication is done automatically and without any impact to performance. ParAccel’s hash-based distribution generally means that replication is conducted via hot spares. This puts a numerical limit on the number of nodes you can lose without losing data, and this replication strategy often does not allow the hot spare to help share query load. "},{"title":"Indexing strategy","type":1,"pageTitle":"Apache Druid vs Redshift","url":"/docs/27.0.0/comparisons/druid-vs-redshift#indexing-strategy","content":"Along with column-oriented structures, Druid uses indexing structures to speed up query execution when a filter is provided. Indexing structures do increase storage overhead (and make it more difficult to allow for mutation), but they also significantly speed up queries. ParAccel does not appear to employ indexing strategies. "},{"title":"Apache Druid vs SQL-on-Hadoop","type":0,"sectionRef":"#","url":"/docs/27.0.0/comparisons/druid-vs-sql-on-hadoop","content":"","keywords":""},{"title":"Queries","type":1,"pageTitle":"Apache Druid vs SQL-on-Hadoop","url":"/docs/27.0.0/comparisons/druid-vs-sql-on-hadoop#queries","content":"Druid segments store data in a custom column format. 
Segments are scanned directly as part of queries and each Druid server calculates a set of results that are eventually merged at the Broker level. This means the data that is transferred between servers is queries and results, and all computation is done internally as part of the Druid servers. Most SQL-on-Hadoop engines are responsible for query planning and execution for underlying storage layers and storage formats. They are processes that stay on even if there is no query running (eliminating the JVM startup costs from Hadoop MapReduce). Some (Impala/Presto) SQL-on-Hadoop engines have daemon processes that can be run where the data is stored, virtually eliminating network transfer costs. There is still some latency overhead (e.g. serialization/deserialization time) associated with pulling data from the underlying storage layer into the computation layer. We are unaware of exactly how much of a performance impact this makes. "},{"title":"Data Ingestion","type":1,"pageTitle":"Apache Druid vs SQL-on-Hadoop","url":"/docs/27.0.0/comparisons/druid-vs-sql-on-hadoop#data-ingestion","content":"Druid is built to allow for real-time ingestion of data. You can ingest data and query it immediately upon ingestion; the latency for an event to be reflected in the data is dominated by how long it takes to deliver the event to Druid. SQL-on-Hadoop engines, being based on data in HDFS or some other backing store, are limited in their data ingestion rates by the rate at which that backing store can make data available. Generally, the backing store is the biggest bottleneck for how quickly data can become available. "},{"title":"Query Flexibility","type":1,"pageTitle":"Apache Druid vs SQL-on-Hadoop","url":"/docs/27.0.0/comparisons/druid-vs-sql-on-hadoop#query-flexibility","content":"Druid's query language is fairly low level and maps to how Druid operates internally. Although Druid can be combined with a high level query planner to support most SQL queries and analytic SQL queries (minus joins among large tables), base Druid is less flexible than SQL-on-Hadoop solutions for generic processing. SQL-on-Hadoop engines support SQL-style queries with full joins. "},{"title":"Druid vs Parquet","type":1,"pageTitle":"Apache Druid vs SQL-on-Hadoop","url":"/docs/27.0.0/comparisons/druid-vs-sql-on-hadoop#druid-vs-parquet","content":"Parquet is a column storage format that is designed to work with SQL-on-Hadoop engines. Parquet doesn't have a query execution engine, and instead relies on external sources to pull data out of it. Druid's storage format is highly optimized for linear scans. Although Druid has support for nested data, Parquet's storage format is much more hierarchical, and is more designed for binary chunking. In theory, this should lead to faster scans in Druid. "},{"title":"Extensions","type":0,"sectionRef":"#","url":"/docs/27.0.0/configuration/extensions","content":"","keywords":""},{"title":"Core extensions","type":1,"pageTitle":"Extensions","url":"/docs/27.0.0/configuration/extensions#core-extensions","content":"Core extensions are maintained by Druid committers. 
Name\tDescription\tDocsdruid-avro-extensions\tSupport for data in Apache Avro data format.\tlink druid-azure-extensions\tMicrosoft Azure deep storage.\tlink druid-basic-security\tSupport for Basic HTTP authentication and role-based access control.\tlink druid-bloom-filter\tSupport for providing Bloom filters in druid queries.\tlink druid-datasketches\tSupport for approximate counts and set operations with Apache DataSketches.\tlink druid-google-extensions\tGoogle Cloud Storage deep storage.\tlink druid-hdfs-storage\tHDFS deep storage.\tlink druid-histogram\tApproximate histograms and quantiles aggregator. Deprecated, please use the DataSketches quantiles aggregator from the druid-datasketches extension instead.\tlink druid-kafka-extraction-namespace\tApache Kafka-based namespaced lookup. Requires namespace lookup extension.\tlink druid-kafka-indexing-service\tSupervised exactly-once Apache Kafka ingestion for the indexing service.\tlink druid-kinesis-indexing-service\tSupervised exactly-once Kinesis ingestion for the indexing service.\tlink druid-kerberos\tKerberos authentication for druid processes.\tlink druid-lookups-cached-global\tA module for lookups providing a jvm-global eager caching for lookups. It provides JDBC and URI implementations for fetching lookup data.\tlink druid-lookups-cached-single\tPer lookup caching module to support the use cases where a lookup need to be isolated from the global pool of lookups\tlink druid-multi-stage-query\tSupport for the multi-stage query architecture for Apache Druid and the multi-stage query task engine.\tlink druid-orc-extensions\tSupport for data in Apache ORC data format.\tlink druid-parquet-extensions\tSupport for data in Apache Parquet data format. Requires druid-avro-extensions to be loaded.\tlink druid-protobuf-extensions\tSupport for data in Protobuf data format.\tlink druid-ranger-security\tSupport for access control through Apache Ranger.\tlink druid-s3-extensions\tInterfacing with data in AWS S3, and using S3 as deep storage.\tlink druid-ec2-extensions\tInterfacing with AWS EC2 for autoscaling middle managers\tUNDOCUMENTED druid-aws-rds-extensions\tSupport for AWS token based access to AWS RDS DB Cluster.\tlink druid-stats\tStatistics related module including variance and standard deviation.\tlink mysql-metadata-storage\tMySQL metadata store.\tlink postgresql-metadata-storage\tPostgreSQL metadata store.\tlink simple-client-sslcontext\tSimple SSLContext provider module to be used by Druid's internal HttpClient when talking to other Druid processes over HTTPS.\tlink druid-pac4j\tOpenID Connect authentication for druid processes.\tlink druid-kubernetes-extensions\tDruid cluster deployment on Kubernetes without Zookeeper.\tlink "},{"title":"Community extensions","type":1,"pageTitle":"Extensions","url":"/docs/27.0.0/configuration/extensions#community-extensions","content":"info Community extensions are not maintained by Druid committers, although we accept patches from community members using these extensions. They may not have been as extensively tested as the core extensions. A number of community members have contributed their own extensions to Druid that are not packaged with the default Druid tarball. If you'd like to take on maintenance for a community extension, please post on dev@druid.apache.org to let us know! All of these community extensions can be downloaded using pull-deps while specifying a -c coordinate option to pull org.apache.druid.extensions.contrib:{EXTENSION_NAME}:{DRUID_VERSION}. 
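For example, here is a minimal sketch of that pull-deps invocation (the full command form is shown under Loading community extensions later on this page) for the druid-redis-cache extension listed in the table below, assuming Druid 27.0.0 as the version:
# Download the druid-redis-cache community extension into the local extensions directory
java \
  -cp "lib/*" \
  -Ddruid.extensions.directory="extensions" \
  -Ddruid.extensions.hadoopDependenciesDir="hadoop-dependencies" \
  org.apache.druid.cli.Main tools pull-deps \
  --no-default-hadoop \
  -c "org.apache.druid.extensions.contrib:druid-redis-cache:27.0.0"
After the download completes, add "druid-redis-cache" to druid.extensions.loadList in common.runtime.properties, as described under Loading community extensions.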
Name\tDescription\tDocsaliyun-oss-extensions\tAliyun OSS deep storage\tlink ambari-metrics-emitter\tAmbari Metrics Emitter\tlink druid-cassandra-storage\tApache Cassandra deep storage.\tlink druid-cloudfiles-extensions\tRackspace Cloudfiles deep storage and firehose.\tlink druid-compressed-bigdecimal\tCompressed Big Decimal Type\tlink druid-distinctcount\tDistinctCount aggregator\tlink druid-redis-cache\tA cache implementation for Druid based on Redis.\tlink druid-time-min-max\tMin/Max aggregator for timestamp.\tlink sqlserver-metadata-storage\tMicrosoft SQLServer deep storage.\tlink graphite-emitter\tGraphite metrics emitter\tlink statsd-emitter\tStatsD metrics emitter\tlink kafka-emitter\tKafka metrics emitter\tlink druid-thrift-extensions\tSupport thrift ingestion\tlink druid-opentsdb-emitter\tOpenTSDB metrics emitter\tlink materialized-view-selection, materialized-view-maintenance\tMaterialized View\tlink druid-moving-average-query\tSupport for Moving Average and other Aggregate Window Functions in Druid queries.\tlink druid-influxdb-emitter\tInfluxDB metrics emitter\tlink druid-momentsketch\tSupport for approximate quantile queries using the momentsketch library\tlink druid-tdigestsketch\tSupport for approximate sketch aggregators based on T-Digest\tlink gce-extensions\tGCE Extensions\tlink prometheus-emitter\tExposes Druid metrics for Prometheus server collection (https://prometheus.io/)\tlink kubernetes-overlord-extensions\tSupport for launching tasks in k8s without Middle Managers\tlink "},{"title":"Promoting community extensions to core extensions","type":1,"pageTitle":"Extensions","url":"/docs/27.0.0/configuration/extensions#promoting-community-extensions-to-core-extensions","content":"Please post on dev@druid.apache.org if you'd like an extension to be promoted to core. If we see a community extension actively supported by the community, we can promote it to core based on community feedback. For information how to create your own extension, please see here. "},{"title":"Loading extensions","type":1,"pageTitle":"Extensions","url":"/docs/27.0.0/configuration/extensions#loading-extensions","content":""},{"title":"Loading core extensions","type":1,"pageTitle":"Extensions","url":"/docs/27.0.0/configuration/extensions#loading-core-extensions","content":"Apache Druid bundles all core extensions out of the box. See the list of extensions for your options. You can load bundled extensions by adding their names to your common.runtime.propertiesdruid.extensions.loadList property. For example, to load the postgresql-metadata-storage and druid-hdfs-storage extensions, use the configuration: druid.extensions.loadList=["postgresql-metadata-storage", "druid-hdfs-storage"] These extensions are located in the extensions directory of the distribution. info Druid bundles two sets of configurations: one for the quickstart and one for a clustered configuration. Make sure you are updating the correctcommon.runtime.properties for your setup. info Because of licensing, the mysql-metadata-storage extension does not include the required MySQL JDBC driver. For instructions on how to install this library, see the MySQL extension page. "},{"title":"Loading community extensions","type":1,"pageTitle":"Extensions","url":"/docs/27.0.0/configuration/extensions#loading-community-extensions","content":"You can also load community and third-party extensions not already bundled with Druid. To do this, first download the extension and then install it into your extensions directory. 
You can download extensions from their distributors directly, or if they are available from Maven, the included pull-deps can download them for you. To use pull-deps, specify the full Maven coordinate of the extension in the form groupId:artifactId:version. For example, for the (hypothetical) extension com.example:druid-example-extension:1.0.0, run: java \\ -cp "lib/*" \\ -Ddruid.extensions.directory="extensions" \\ -Ddruid.extensions.hadoopDependenciesDir="hadoop-dependencies" \\ org.apache.druid.cli.Main tools pull-deps \\ --no-default-hadoop \\ -c "com.example:druid-example-extension:1.0.0" You only have to install the extension once. Then, add "druid-example-extension" todruid.extensions.loadList in common.runtime.properties to instruct Druid to load the extension. info Please make sure all the Extensions related configuration properties listed here are set correctly. info The Maven groupId for almost every community extension is org.apache.druid.extensions.contrib. The artifactId is the name of the extension, and the version is the latest Druid stable version. "},{"title":"Loading extensions from the classpath","type":1,"pageTitle":"Extensions","url":"/docs/27.0.0/configuration/extensions#loading-extensions-from-the-classpath","content":"If you add your extension jar to the classpath at runtime, Druid will also load it into the system. This mechanism is relatively easy to reason about, but it also means that you have to ensure that all dependency jars on the classpath are compatible. That is, Druid makes no provisions while using this method to maintain class loader isolation so you must make sure that the jars on your classpath are mutually compatible. "},{"title":"Human-readable Byte Configuration Reference","type":0,"sectionRef":"#","url":"/docs/27.0.0/configuration/human-readable-byte","content":"","keywords":""},{"title":"A number in bytes","type":1,"pageTitle":"Human-readable Byte Configuration Reference","url":"/docs/27.0.0/configuration/human-readable-byte#a-number-in-bytes","content":"Given that cache size is 3G, there's a configuration as below # 3G bytes = 3_000_000_000 bytes druid.cache.sizeInBytes=3000000000 "},{"title":"A number with a unit suffix","type":1,"pageTitle":"Human-readable Byte Configuration Reference","url":"/docs/27.0.0/configuration/human-readable-byte#a-number-with-a-unit-suffix","content":"When you have to put a large number for some configuration as above, it is easy to make a mistake such as extra or missing 0s. Druid supports a better way, a number with a unit suffix. Given a disk of 1T, the configuration can be druid.segmentCache.locations=[{"path":"/segment-cache-00","maxSize":"1t"},{"path":"/segment-cache-01","maxSize":"1200g"}] Note: in above example, both 1t and 1T are acceptable since it's case-insensitive. Also, only integers are valid as the number part. For example, you can't replace 1200g with 1.2t. "},{"title":"Supported Units","type":1,"pageTitle":"Human-readable Byte Configuration Reference","url":"/docs/27.0.0/configuration/human-readable-byte#supported-units","content":"In the world of computer, a unit like K is ambiguous. It means 1000 or 1024 in different contexts, for more information please see Here. 
To make it clear, the base of each unit is defined in Druid as below: Unit\tDescription\tBaseK\tKilo Decimal Byte\t1_000 M\tMega Decimal Byte\t1_000_000 G\tGiga Decimal Byte\t1_000_000_000 T\tTera Decimal Byte\t1_000_000_000_000 P\tPeta Decimal Byte\t1_000_000_000_000_000 Ki\tKilo Binary Byte\t1024 Mi\tMega Binary Byte\t1024 * 1024 Gi\tGiga Binary Byte\t1024 * 1024 * 1024 Ti\tTera Binary Byte\t1024 * 1024 * 1024 * 1024 Pi\tPeta Binary Byte\t1024 * 1024 * 1024 * 1024 * 1024 KiB\tKilo Binary Byte\t1024 MiB\tMega Binary Byte\t1024 * 1024 GiB\tGiga Binary Byte\t1024 * 1024 * 1024 TiB\tTera Binary Byte\t1024 * 1024 * 1024 * 1024 PiB\tPeta Binary Byte\t1024 * 1024 * 1024 * 1024 * 1024 Units are case-insensitive. k, kib, ki, KiB, Ki, kiB are all acceptable. Here are some examples: # 1G bytes = 1_000_000_000 bytes druid.cache.sizeInBytes=1g # 256MiB = 256 * 1024 * 1024 bytes druid.cache.sizeInBytes=256MiB # 256Mi = 256MiB = 256 * 1024 * 1024 bytes druid.cache.sizeInBytes=256Mi "},{"title":"Data management","type":0,"sectionRef":"#","url":"/docs/27.0.0/data-management/","content":"Data management Apache Druid stores data partitioned by time chunk in immutable files called segments. Data management operations involving replacing or deleting these segments include: Updates to existing data. Deletion of existing data. Schema changes for new and existing data. Compaction and automatic compaction, which reindex existing data to optimize storage footprint and performance.","keywords":""},{"title":"Logging","type":0,"sectionRef":"#","url":"/docs/27.0.0/configuration/logging","content":"","keywords":""},{"title":"Log directory","type":1,"pageTitle":"Logging","url":"/docs/27.0.0/configuration/logging#log-directory","content":"The included log4j2.xml configuration for Druid and ZooKeeper writes logs to the log directory at the root of the distribution. If you want to change the log directory, set the environment variable DRUID_LOG_DIR to the right directory before you start Druid. "},{"title":"All-in-one start commands","type":1,"pageTitle":"Logging","url":"/docs/27.0.0/configuration/logging#all-in-one-start-commands","content":"If you use one of the all-in-one start commands, such as bin/start-micro-quickstart, the default configuration for each service has two kinds of log files. Log4j2 writes the main log file and rotates it periodically. For example, log/historical.log. The secondary log file contains anything that is written by the component directly to standard output or standard error without going through log4j2. For example, log/historical.stdout.log. This consists mainly of messages from the Java runtime itself. This file is not rotated, but it is generally small due to the low volume of messages. If necessary, you can truncate it using the Linux command truncate --size 0 log/historical.stdout.log. "},{"title":"Set the logs to asynchronously write","type":1,"pageTitle":"Logging","url":"/docs/27.0.0/configuration/logging#set-the-logs-to-asynchronously-write","content":"If your logs are really chatty, you can set them to write asynchronously.
The following example shows a log4j2.xml that configures some of the more chatty classes to write asynchronously: <?xml version="1.0" encoding="UTF-8" ?> <Configuration status="WARN"> <Appenders> <Console name="Console" target="SYSTEM_OUT"> <PatternLayout pattern="%d{ISO8601} %p [%t] %c -%notEmpty{ [%markerSimpleName]} %m%n"/> </Console> </Appenders> <Loggers> <!-- AsyncLogger instead of Logger --> <AsyncLogger name="org.apache.druid.curator.inventory.CuratorInventoryManager" level="debug" additivity="false"> <AppenderRef ref="Console"/> </AsyncLogger> <AsyncLogger name="org.apache.druid.client.BatchServerInventoryView" level="debug" additivity="false"> <AppenderRef ref="Console"/> </AsyncLogger> <!-- Make extra sure nobody adds logs in a bad way that can hurt performance --> <AsyncLogger name="org.apache.druid.client.ServerInventoryView" level="debug" additivity="false"> <AppenderRef ref="Console"/> </AsyncLogger> <AsyncLogger name="org.apache.druid.java.util.http.client.pool.ChannelResourceFactory" level="info" additivity="false"> <AppenderRef ref="Console"/> </AsyncLogger> <Root level="info"> <AppenderRef ref="Console"/> </Root> </Loggers> </Configuration> "},{"title":"Data deletion","type":0,"sectionRef":"#","url":"/docs/27.0.0/data-management/delete","content":"","keywords":""},{"title":"By time range, manually","type":1,"pageTitle":"Data deletion","url":"/docs/27.0.0/data-management/delete#by-time-range-manually","content":"Apache Druid stores data partitioned by time chunk and supports deleting data for time chunks by dropping segments. This is a fast, metadata-only operation. Deletion by time range happens in two steps: Segments to be deleted must first be marked as "unused". This can happen when a segment is dropped by a drop rule or when you manually mark a segment unused through the Coordinator API or web console. This is a soft delete: the data is not available for querying, but the segment files remain in deep storage, and the segment records remain in the metadata store. Once a segment is marked "unused", you can use a kill task to permanently delete the segment file from deep storage and remove its record from the metadata store. This is a hard delete: the data is unrecoverable unless you have a backup. For documentation on disabling segments using the Coordinator API, see the Legacy metadata API reference. A data deletion tutorial is available at Tutorial: Deleting data. "},{"title":"By time range, automatically","type":1,"pageTitle":"Data deletion","url":"/docs/27.0.0/data-management/delete#by-time-range-automatically","content":"Druid supports load and drop rules, which are used to define intervals of time where data should be preserved, and intervals where data should be discarded. Data that falls under a drop rule is marked unused, in the same manner as if you manually mark that time range unused. This is a fast, metadata-only operation. Data that is dropped in this way is marked unused, but remains in deep storage. To permanently delete it, use a kill task. "},{"title":"Specific records","type":1,"pageTitle":"Data deletion","url":"/docs/27.0.0/data-management/delete#specific-records","content":"Druid supports deleting specific records using reindexing with a filter. The filter specifies which data remains after reindexing, so it must be the inverse of the data you want to delete. Because segments must be rewritten to delete data in this way, it can be a time-consuming operation.
For example, to delete records where userName is 'bob' with native batch indexing, use a transformSpec with filter {"type": "not", "field": {"type": "selector", "dimension": "userName", "value": "bob"}}. To delete the same records using SQL, use REPLACE with WHERE userName <> 'bob'. To reindex using native batch, use the druid input source. If needed, transformSpec can be used to filter or modify data during the reindexing job. To reindex with SQL, use REPLACE <table> OVERWRITE with SELECT ... FROM <table>. (Druid does not have UPDATE or ALTER TABLE statements.) Any SQL SELECT query can be used to filter, modify, or enrich the data during the reindexing job. Data that is deleted in this way is marked unused, but remains in deep storage. To permanently delete it, use a kill task. "},{"title":"Entire table","type":1,"pageTitle":"Data deletion","url":"/docs/27.0.0/data-management/delete#entire-table","content":"Deleting an entire table works the same way as deleting part of a table by time range. First, mark all segments unused using the Coordinator API or web console. Then, optionally, delete it permanently using a kill task. "},{"title":"Permanently (kill task)","type":1,"pageTitle":"Data deletion","url":"/docs/27.0.0/data-management/delete#permanently-kill-task","content":"Data that has been overwritten or soft-deleted still remains as segments that have been marked unused. You can use a kill task to permanently delete this data. The available grammar is: { "type": "kill", "id": <task_id>, "dataSource": <task_datasource>, "interval" : <all_unused_segments_in_this_interval_will_die!>, "context": <task context> } WARNING: The kill task permanently removes all information about the affected segments from the metadata store and deep storage. This operation cannot be undone. "},{"title":"Compaction","type":0,"sectionRef":"#","url":"/docs/27.0.0/data-management/compaction","content":"","keywords":""},{"title":"Compaction strategies","type":1,"pageTitle":"Compaction","url":"/docs/27.0.0/data-management/compaction#compaction-strategies","content":"There are several cases in which to consider compaction for segment optimization: With streaming ingestion, data can arrive out of chronological order, creating many small segments. If you append data using appendToExisting for native batch ingestion, you can create suboptimal segments. When you use index_parallel for parallel batch indexing, the parallel ingestion tasks can create many small segments. When a misconfigured ingestion task creates oversized segments. By default, compaction does not modify the underlying data of the segments. However, there are cases when you may want to modify data during compaction to improve query performance: If, after ingestion, you realize that data for the time interval is sparse, you can use compaction to increase the segment granularity. If you don't need fine-grained granularity for older data, you can use compaction to change older segments to a coarser query granularity. For example, from minute to hour or hour to day. This reduces the storage space required for older data. You can change the dimension order to improve sorting and reduce segment size. You can remove unused columns in compaction or implement an aggregation metric for older data. You can change segment rollup from dynamic partitioning with best-effort rollup to hash or range partitioning with perfect rollup. For more information on rollup, see perfect vs best-effort rollup. Compaction does not improve performance in all situations.
For example, if you rewrite your data with each ingestion task, you don't need to use compaction. See Segment optimization for additional guidance to determine if compaction will help in your environment. "},{"title":"Types of compaction","type":1,"pageTitle":"Compaction","url":"/docs/27.0.0/data-management/compaction#types-of-compaction","content":"You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using its segment search policy, the Coordinator periodically identifies segments for compaction starting from newest to oldest. When the Coordinator discovers segments that have not been compacted or segments that were compacted with a different or changed spec, it submits compaction tasks for the time interval covering those segments. Automatic compaction works in most use cases and should be your first option. To learn more, see Automatic compaction. In cases where you require more control over compaction, you can manually submit compaction tasks. For example: Automatic compaction is running into the limit of task slots available to it, so tasks are waiting for previous automatic compaction tasks to complete. Manual compaction can use all available task slots, therefore you can complete compaction more quickly by submitting more concurrent tasks for more intervals.You want to force compaction for a specific time range or you want to compact data out of chronological order. See Setting up a manual compaction task for more about manual compaction tasks. "},{"title":"Data handling with compaction","type":1,"pageTitle":"Compaction","url":"/docs/27.0.0/data-management/compaction#data-handling-with-compaction","content":"During compaction, Druid overwrites the original set of segments with the compacted set. Druid also locks the segments for the time interval being compacted to ensure data consistency. By default, compaction tasks do not modify the underlying data. You can configure the compaction task to change the query granularity or add or remove dimensions in the compaction task. This means that the only changes to query results should be the result of intentional, not automatic, changes. You can set dropExisting in ioConfig to "true" in the compaction task to configure Druid to replace all existing segments fully contained by the interval. See the suggestion for reindexing with finer granularity under Implementation considerations for an example. info WARNING: dropExisting in ioConfig is a beta feature. If an ingestion task needs to write data to a segment for a time interval locked for compaction, by default the ingestion task supersedes the compaction task and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the skipOffsetFromLatest key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. Another option is to set the compaction task to higher priority than the ingestion task. For more information, see Avoid conflicts with ingestion. "},{"title":"Segment granularity handling","type":1,"pageTitle":"Compaction","url":"/docs/27.0.0/data-management/compaction#segment-granularity-handling","content":"Unless you modify the segment granularity in granularitySpec, Druid attempts to retain the granularity for the compacted segments. 
When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each to retain the segment granularity in the compacted segment. If segments have different segment granularities before compaction but there is some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment. For example, consider two overlapping segments: segment "A" for the interval 01/01/2021-01/02/2021 with day granularity and segment "B" for the interval 01/01/2021-02/01/2021. Druid attempts to combine and compact the overlapped segments. In this example, the earliest start time for the two segments is 01/01/2021 and the latest end time of the two segments is 02/01/2021. Druid compacts the segments together even though they have different segment granularities. Druid uses MONTH segment granularity for the newly compacted segment even though segment A's original segment granularity was DAY. "},{"title":"Query granularity handling","type":1,"pageTitle":"Compaction","url":"/docs/27.0.0/data-management/compaction#query-granularity-handling","content":"Unless you modify the query granularity in the granularitySpec, Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example, if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity. info In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted segments to the default granularity of NONE regardless of the query granularity of the original segments. If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then Druid overshadows the original segment with the coarser granularity. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals will cause you to permanently lose the finer granularity data. "},{"title":"Dimension handling","type":1,"pageTitle":"Compaction","url":"/docs/27.0.0/data-management/compaction#dimension-handling","content":"Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same datasource. See Segments with different schemas. If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. The dimensions of recent segments precede those of older segments in terms of data types and ordering because more recent segments are more likely to have the preferred order and data types. If you want to control dimension ordering or ensure specific values for dimension types, you can configure a custom dimensionsSpec in the compaction task spec. "},{"title":"Rollup","type":1,"pageTitle":"Compaction","url":"/docs/27.0.0/data-management/compaction#rollup","content":"Druid only rolls up the output segment when rollup is set for all input segments. See Roll-up for more details. You can check whether your segments are rolled up by using Segment Metadata Queries.
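The following is a minimal sketch of such a segment metadata query, assuming a datasource named wikipedia, an example interval, and the ROUTER_IP:ROUTER_PORT placeholders used elsewhere in these docs; each segment entry in the response includes a rollup field indicating whether that segment was rolled up:
# Check whether segments in a given interval are rolled up
curl -X POST "http://ROUTER_IP:ROUTER_PORT/druid/v2/" \
  -H "Content-Type: application/json" \
  -d '{
        "queryType": "segmentMetadata",
        "dataSource": "wikipedia",
        "intervals": ["2020-01-01/2021-01-01"],
        "analysisTypes": ["rollup"]
      }'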
"},{"title":"Setting up manual compaction","type":1,"pageTitle":"Compaction","url":"/docs/27.0.0/data-management/compaction#setting-up-manual-compaction","content":"To perform a manual compaction, you submit a compaction task. Compaction tasks merge all segments for the defined interval according to the following syntax: { "type": "compact", "id": <task_id>, "dataSource": <task_datasource>, "ioConfig": <IO config>, "dimensionsSpec": <custom dimensionsSpec>, "transformSpec": <custom transformSpec>, "metricsSpec": <custom metricsSpec>, "tuningConfig": <parallel indexing task tuningConfig>, "granularitySpec": <compaction task granularitySpec>, "context": <task context> } Field\tDescription\tRequiredtype\tTask type. Set the value to compact.\tYes id\tTask ID\tNo dataSource\tData source name to compact\tYes ioConfig\tI/O configuration for compaction task. See Compaction I/O configuration for details.\tYes dimensionsSpec\tWhen set, the compaction task uses the specified dimensionsSpec rather than generating one from existing segments. See Compaction dimensionsSpec for details.\tNo transformSpec\tWhen set, the compaction task uses the specified transformSpec rather than using null. See Compaction transformSpec for details.\tNo metricsSpec\tWhen set, the compaction task uses the specified metricsSpec rather than generating one from existing segments.\tNo segmentGranularity\tDeprecated. Use granularitySpec.\tNo tuningConfig\tTuning configuration for parallel indexing. awaitSegmentAvailabilityTimeoutMillis value is not supported for compaction tasks. Leave this parameter at the default value, 0.\tNo granularitySpec\tWhen set, the compaction task uses the specified granularitySpec rather than generating one from existing segments. See Compaction granularitySpec for details.\tNo context\tTask context\tNo info Note: Use granularitySpec over segmentGranularity and only set one of these values. If you specify different values for these in the same compaction spec, the task fails. To control the number of result segments per time chunk, you can set maxRowsPerSegment or numShards. info You can run multiple compaction tasks in parallel. For example, if you want to compact the data for a year, you are not limited to running a single task for the entire year. You can run 12 compaction tasks with month-long intervals. A compaction task internally generates an index or index_parallel task spec for performing compaction work with some fixed parameters. For example, its inputSource is always the druid input source, and dimensionsSpec and metricsSpec include all dimensions and metrics of the input segments by default. Compaction tasks typically fetch all relevant segments prior to launching any subtasks, unless the following properties are all set to non-null values. It is strongly recommended to set them to non-null values to maximize performance and minimize disk usage of the compact task: granularitySpec, with non-null values for each of segmentGranularity, queryGranularity, and rollupdimensionsSpecmetricsSpec Compaction tasks exit without doing anything and issue a failure status code in either of the following cases: If the interval you specify has no data segments loaded.If the interval you specify is empty. Note that the metadata between input segments and the resulting compacted segments may differ if the metadata among the input segments differs as well. If all input segments have the same metadata, however, the resulting output segment will have the same metadata as all input segments. 
"},{"title":"Example compaction task","type":1,"pageTitle":"Compaction","url":"/docs/27.0.0/data-management/compaction#example-compaction-task","content":"The following JSON illustrates a compaction task to compact all segments within the interval 2020-01-01/2021-01-01 and create new segments: { "type": "compact", "dataSource": "wikipedia", "ioConfig": { "type": "compact", "inputSpec": { "type": "interval", "interval": "2020-01-01/2021-01-01" } }, "granularitySpec": { "segmentGranularity": "day", "queryGranularity": "hour" } } granularitySpec is an optional field. If you don't specify granularitySpec, Druid retains the original segment and query granularities when compaction is complete. "},{"title":"Compaction I/O configuration","type":1,"pageTitle":"Compaction","url":"/docs/27.0.0/data-management/compaction#compaction-io-configuration","content":"The compaction ioConfig requires specifying inputSpec as follows: Field\tDescription\tDefault\tRequiredtype\tTask type. Set the value to compact.\tnone\tYes inputSpec\tSpecification of the target interval or segments.\tnone\tYes dropExisting\tIf true, the task replaces all existing segments fully contained by either of the following: - the interval in the interval type inputSpec. - the umbrella interval of the segments in the segment type inputSpec. If compaction fails, Druid does not change any of the existing segments. WARNING: dropExisting in ioConfig is a beta feature.\tfalse\tNo allowNonAlignedInterval\tIf true, the task allows an explicit segmentGranularity that is not aligned with the provided interval or segments. This parameter is only used if segmentGranularity is explicitly provided. This parameter is provided for backwards compatibility. In most scenarios it should not be set, as it can lead to data being accidentally overshadowed. This parameter may be removed in a future release.\tfalse\tNo The compaction task has two kinds of inputSpec: Interval inputSpec Field\tDescription\tRequiredtype\tTask type. Set the value to interval.\tYes interval\tInterval to compact.\tYes Segments inputSpec Field\tDescription\tRequiredtype\tTask type. Set the value to segments.\tYes segments\tA list of segment IDs.\tYes "},{"title":"Compaction dimensions spec","type":1,"pageTitle":"Compaction","url":"/docs/27.0.0/data-management/compaction#compaction-dimensions-spec","content":"Field\tDescription\tRequireddimensions\tA list of dimension names or objects. Cannot have the same column in both dimensions and dimensionExclusions. Defaults to null, which preserves the original dimensions.\tNo dimensionExclusions\tThe names of dimensions to exclude from compaction. Only names are supported here, not objects. This list is only used if the dimensions list is null or empty; otherwise it is ignored. Defaults to [].\tNo "},{"title":"Compaction transform spec","type":1,"pageTitle":"Compaction","url":"/docs/27.0.0/data-management/compaction#compaction-transform-spec","content":"Field\tDescription\tRequiredfilter\tThe filter conditionally filters input rows during compaction. Only rows that pass the filter will be included in the compacted segments. Any of Druid's standard query filters can be used. Defaults to 'null', which will not filter any row.\tNo "},{"title":"Compaction granularity spec","type":1,"pageTitle":"Compaction","url":"/docs/27.0.0/data-management/compaction#compaction-granularity-spec","content":"Field\tDescription\tRequiredsegmentGranularity\tTime chunking period for the segment granularity. 
Defaults to 'null', which preserves the original segment granularity. Accepts all Query granularity values.\tNo queryGranularity\tThe resolution of timestamp storage within each segment. Defaults to 'null', which preserves the original query granularity. Accepts all Query granularity values.\tNo rollup\tEnables compaction-time rollup. To preserve the original setting, keep the default value. To enable compaction-time rollup, set the value to true. Once the data is rolled up, you can no longer recover individual records.\tNo "},{"title":"Learn more","type":1,"pageTitle":"Compaction","url":"/docs/27.0.0/data-management/compaction#learn-more","content":"See the following topics for more information: Segment optimization for guidance to determine if compaction will help in your case.Automatic compaction for how to enable and configure automatic compaction. "},{"title":"Schema changes","type":0,"sectionRef":"#","url":"/docs/27.0.0/data-management/schema-changes","content":"","keywords":""},{"title":"For new data","type":1,"pageTitle":"Schema changes","url":"/docs/27.0.0/data-management/schema-changes#for-new-data","content":"Apache Druid allows you to provide a new schema for new data without the need to update the schema of any existing data. It is sufficient to update your supervisor spec, if using streaming ingestion, or to provide the new schema the next time you do a batch ingestion. This is made possible by the fact that each segment, at the time it is created, stores a copy of its own schema. Druid reconciles all of these individual segment schemas automatically at query time. "},{"title":"For existing data","type":1,"pageTitle":"Schema changes","url":"/docs/27.0.0/data-management/schema-changes#for-existing-data","content":"Schema changes are sometimes necessary for existing data. For example, you may want to change the type of a column in previously-ingested data, or drop a column entirely. Druid handles this using reindexing, the same method it uses to handle updates of existing data. Reindexing involves rewriting all affected segments and can be a time-consuming operation. "},{"title":"Service status API","type":0,"sectionRef":"#","url":"/docs/27.0.0/api-reference/service-status-api","content":"","keywords":""},{"title":"SQL-based ingestion API","type":0,"sectionRef":"#","url":"/docs/27.0.0/api-reference/sql-ingestion-api","content":"","keywords":""},{"title":"Submit a query","type":1,"pageTitle":"SQL-based ingestion API","url":"/docs/27.0.0/api-reference/sql-ingestion-api#submit-a-query","content":"You submit queries to the MSQ task engine using the POST /druid/v2/sql/task/ endpoint. Request The SQL task endpoint accepts SQL requests in the JSON-over-HTTP form using thequery, context, and parameters fields, but ignoring the resultFormat, header, typesHeader, andsqlTypesHeader fields. This endpoint accepts INSERT and REPLACE statements. As an experimental feature, this endpoint also accepts SELECT queries. SELECT query results are collected from workers by the controller, and written into the task report as an array of arrays. The behavior and result format of plain SELECT queries (without INSERT or REPLACE) is subject to change. 
HTTPcurlPython POST /druid/v2/sql/task { "query": "INSERT INTO wikipedia\\nSELECT\\n TIME_PARSE(\\"timestamp\\") AS __time,\\n *\\nFROM TABLE(\\n EXTERN(\\n '{\\"type\\": \\"http\\", \\"uris\\": [\\"https://druid.apache.org/data/wikipedia.json.gz\\"]}',\\n '{\\"type\\": \\"json\\"}',\\n '[{\\"name\\": \\"added\\", \\"type\\": \\"long\\"}, {\\"name\\": \\"channel\\", \\"type\\": \\"string\\"}, {\\"name\\": \\"cityName\\", \\"type\\": \\"string\\"}, {\\"name\\": \\"comment\\", \\"type\\": \\"string\\"}, {\\"name\\": \\"commentLength\\", \\"type\\": \\"long\\"}, {\\"name\\": \\"countryIsoCode\\", \\"type\\": \\"string\\"}, {\\"name\\": \\"countryName\\", \\"type\\": \\"string\\"}, {\\"name\\": \\"deleted\\", \\"type\\": \\"long\\"}, {\\"name\\": \\"delta\\", \\"type\\": \\"long\\"}, {\\"name\\": \\"deltaBucket\\", \\"type\\": \\"string\\"}, {\\"name\\": \\"diffUrl\\", \\"type\\": \\"string\\"}, {\\"name\\": \\"flags\\", \\"type\\": \\"string\\"}, {\\"name\\": \\"isAnonymous\\", \\"type\\": \\"string\\"}, {\\"name\\": \\"isMinor\\", \\"type\\": \\"string\\"}, {\\"name\\": \\"isNew\\", \\"type\\": \\"string\\"}, {\\"name\\": \\"isRobot\\", \\"type\\": \\"string\\"}, {\\"name\\": \\"isUnpatrolled\\", \\"type\\": \\"string\\"}, {\\"name\\": \\"metroCode\\", \\"type\\": \\"string\\"}, {\\"name\\": \\"namespace\\", \\"type\\": \\"string\\"}, {\\"name\\": \\"page\\", \\"type\\": \\"string\\"}, {\\"name\\": \\"regionIsoCode\\", \\"type\\": \\"string\\"}, {\\"name\\": \\"regionName\\", \\"type\\": \\"string\\"}, {\\"name\\": \\"timestamp\\", \\"type\\": \\"string\\"}, {\\"name\\": \\"user\\", \\"type\\": \\"string\\"}]'\\n )\\n)\\nPARTITIONED BY DAY", "context": { "maxNumTasks": 3 } } Response { "taskId": "query-f795a235-4dc7-4fef-abac-3ae3f9686b79", "state": "RUNNING", } Response fields Field\tDescriptiontaskId\tController task ID. You can use Druid's standard Tasks API to interact with this controller task. state\tInitial state for the query, which is "RUNNING". "},{"title":"Get the status for a query task","type":1,"pageTitle":"SQL-based ingestion API","url":"/docs/27.0.0/api-reference/sql-ingestion-api#get-the-status-for-a-query-task","content":"You can retrieve status of a query to see if it is still running, completed successfully, failed, or got canceled. Request HTTPcurlPython GET /druid/indexer/v1/task/<taskId>/status Response { "task": "query-3dc0c45d-34d7-4b15-86c9-cdb2d3ebfc4e", "status": { "id": "query-3dc0c45d-34d7-4b15-86c9-cdb2d3ebfc4e", "groupId": "query-3dc0c45d-34d7-4b15-86c9-cdb2d3ebfc4e", "type": "query_controller", "createdTime": "2022-09-14T22:12:00.183Z", "queueInsertionTime": "1970-01-01T00:00:00.000Z", "statusCode": "RUNNING", "status": "RUNNING", "runnerStatusCode": "RUNNING", "duration": -1, "location": { "host": "localhost", "port": 8100, "tlsPort": -1 }, "dataSource": "kttm_simple", "errorMsg": null } } "},{"title":"Get the report for a query task","type":1,"pageTitle":"SQL-based ingestion API","url":"/docs/27.0.0/api-reference/sql-ingestion-api#get-the-report-for-a-query-task","content":"A report provides detailed information about a query task, including things like the stages, warnings, and errors. Keep the following in mind when using the task API to view reports: The task report for an entire job is associated with the query_controller task. 
The query_worker tasks do not have their own reports; their information is incorporated into the controller report.The task report API may report 404 Not Found temporarily while the task is in the process of starting up.As an experimental feature, the MSQ task engine supports running SELECT queries. SELECT query results are written into the multiStageQuery.payload.results.results task report key as an array of arrays. The behavior and result format of plain SELECT queries (without INSERT or REPLACE) is subject to change.multiStageQuery.payload.results.resultsTruncated denote whether the results of the report have been truncated to prevent the reports from blowing up For an explanation of the fields in a report, see Report response fields. Request HTTPcurlPython GET /druid/indexer/v1/task/<taskId>/reports Response The response shows an example report for a query. Show the response { "multiStageQuery": { "type": "multiStageQuery", "taskId": "query-3dc0c45d-34d7-4b15-86c9-cdb2d3ebfc4e", "payload": { "status": { "status": "SUCCESS", "startTime": "2022-09-14T22:12:09.266Z", "durationMs": 28227, "pendingTasks": 0, "runningTasks": 2 }, "stages": [ { "stageNumber": 0, "definition": { "id": "71ecb11e-09d7-42f8-9225-1662c8e7e121_0", "input": [ { "type": "external", "inputSource": { "type": "http", "uris": [ "https://static.imply.io/example-data/kttm-v2/kttm-v2-2019-08-25.json.gz" ], "httpAuthenticationUsername": null, "httpAuthenticationPassword": null }, "inputFormat": { "type": "json", "flattenSpec": null, "featureSpec": {}, "keepNullColumns": false }, "signature": [ { "name": "timestamp", "type": "STRING" }, { "name": "agent_category", "type": "STRING" }, { "name": "agent_type", "type": "STRING" } ] } ], "processor": { "type": "scan", "query": { "queryType": "scan", "dataSource": { "type": "inputNumber", "inputNumber": 0 }, "intervals": { "type": "intervals", "intervals": [ "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z" ] }, "resultFormat": "compactedList", "columns": [ "agent_category", "agent_type", "timestamp" ], "legacy": false, "context": { "finalize": false, "finalizeAggregations": false, "groupByEnableMultiValueUnnesting": false, "scanSignature": "[{\\"name\\":\\"agent_category\\",\\"type\\":\\"STRING\\"},{\\"name\\":\\"agent_type\\",\\"type\\":\\"STRING\\"},{\\"name\\":\\"timestamp\\",\\"type\\":\\"STRING\\"}]", "sqlInsertSegmentGranularity": "{\\"type\\":\\"all\\"}", "sqlQueryId": "3dc0c45d-34d7-4b15-86c9-cdb2d3ebfc4e", "sqlReplaceTimeChunks": "all" }, "granularity": { "type": "all" } } }, "signature": [ { "name": "__boost", "type": "LONG" }, { "name": "agent_category", "type": "STRING" }, { "name": "agent_type", "type": "STRING" }, { "name": "timestamp", "type": "STRING" } ], "shuffleSpec": { "type": "targetSize", "clusterBy": { "columns": [ { "columnName": "__boost" } ] }, "targetSize": 3000000 }, "maxWorkerCount": 1, "shuffleCheckHasMultipleValues": true }, "phase": "FINISHED", "workerCount": 1, "partitionCount": 1, "startTime": "2022-09-14T22:12:11.663Z", "duration": 19965, "sort": true }, { "stageNumber": 1, "definition": { "id": "71ecb11e-09d7-42f8-9225-1662c8e7e121_1", "input": [ { "type": "stage", "stage": 0 } ], "processor": { "type": "segmentGenerator", "dataSchema": { "dataSource": "kttm_simple", "timestampSpec": { "column": "__time", "format": "millis", "missingValue": null }, "dimensionsSpec": { "dimensions": [ { "type": "string", "name": "timestamp", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "string", "name": 
"agent_category", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "string", "name": "agent_type", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true } ], "dimensionExclusions": [ "__time" ], "includeAllDimensions": false }, "metricsSpec": [], "granularitySpec": { "type": "arbitrary", "queryGranularity": { "type": "none" }, "rollup": false, "intervals": [ "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z" ] }, "transformSpec": { "filter": null, "transforms": [] } }, "columnMappings": [ { "queryColumn": "timestamp", "outputColumn": "timestamp" }, { "queryColumn": "agent_category", "outputColumn": "agent_category" }, { "queryColumn": "agent_type", "outputColumn": "agent_type" } ], "tuningConfig": { "maxNumWorkers": 1, "maxRowsInMemory": 100000, "rowsPerSegment": 3000000 } }, "signature": [], "maxWorkerCount": 1 }, "phase": "FINISHED", "workerCount": 1, "partitionCount": 1, "startTime": "2022-09-14T22:12:31.602Z", "duration": 5891 } ], "counters": { "0": { "0": { "input0": { "type": "channel", "rows": [ 465346 ], "files": [ 1 ], "totalFiles": [ 1 ] }, "output": { "type": "channel", "rows": [ 465346 ], "bytes": [ 43694447 ], "frames": [ 7 ] }, "shuffle": { "type": "channel", "rows": [ 465346 ], "bytes": [ 41835307 ], "frames": [ 73 ] }, "sortProgress": { "type": "sortProgress", "totalMergingLevels": 3, "levelToTotalBatches": { "0": 1, "1": 1, "2": 1 }, "levelToMergedBatches": { "0": 1, "1": 1, "2": 1 }, "totalMergersForUltimateLevel": 1, "progressDigest": 1 } } }, "1": { "0": { "input0": { "type": "channel", "rows": [ 465346 ], "bytes": [ 41835307 ], "frames": [ 73 ] }, "segmentGenerationProgress": { "type": "segmentGenerationProgress", "rowsProcessed": 465346, "rowsPersisted": 465346, "rowsMerged": 465346 } } } } } } } The following table describes the response fields when you retrieve a report for a MSQ task engine using the /druid/indexer/v1/task/<taskId>/reports endpoint: Field\tDescriptionmultiStageQuery.taskId\tController task ID. multiStageQuery.payload.status\tQuery status container. multiStageQuery.payload.status.status\tRUNNING, SUCCESS, or FAILED. multiStageQuery.payload.status.startTime\tStart time of the query in ISO format. Only present if the query has started running. multiStageQuery.payload.status.durationMs\tMilliseconds elapsed after the query has started running. -1 denotes that the query hasn't started running yet. multiStageQuery.payload.status.pendingTasks\tNumber of tasks that are not fully started. -1 denotes that the number is currently unknown. multiStageQuery.payload.status.runningTasks\tNumber of currently running tasks. Should be at least 1 since the controller is included. multiStageQuery.payload.status.errorReport\tError object. Only present if there was an error. multiStageQuery.payload.status.errorReport.taskId\tThe task that reported the error, if known. May be a controller task or a worker task. multiStageQuery.payload.status.errorReport.host\tThe hostname and port of the task that reported the error, if known. multiStageQuery.payload.status.errorReport.stageNumber\tThe stage number that reported the error, if it happened during execution of a specific stage. multiStageQuery.payload.status.errorReport.error\tError object. Contains errorCode at a minimum, and may contain other fields as described in the error code table. Always present if there is an error. multiStageQuery.payload.status.errorReport.error.errorCode\tOne of the error codes from the error code table. Always present if there is an error. 
multiStageQuery.payload.status.errorReport.error.errorMessage\tUser-friendly error message. Not always present, even if there is an error. multiStageQuery.payload.status.errorReport.exceptionStackTrace\tJava stack trace in string form, if the error was due to a server-side exception. multiStageQuery.payload.stages\tArray of query stages. multiStageQuery.payload.stages[].stageNumber\tEach stage has a number that differentiates it from other stages. multiStageQuery.payload.stages[].phase\tEither NEW, READING_INPUT, POST_READING, RESULTS_COMPLETE, or FAILED. Only present if the stage has started. multiStageQuery.payload.stages[].workerCount\tNumber of parallel tasks that this stage is running on. Only present if the stage has started. multiStageQuery.payload.stages[].partitionCount\tNumber of output partitions generated by this stage. Only present if the stage has started and has computed its number of output partitions. multiStageQuery.payload.stages[].startTime\tStart time of this stage. Only present if the stage has started. multiStageQuery.payload.stages[].duration\tThe number of milliseconds that the stage has been running. Only present if the stage has started. multiStageQuery.payload.stages[].sort\tA boolean that is set to true if the stage does a sort as part of its execution. multiStageQuery.payload.stages[].definition\tThe object defining what the stage does. multiStageQuery.payload.stages[].definition.id\tThe unique identifier of the stage. multiStageQuery.payload.stages[].definition.input\tArray of inputs that the stage has. multiStageQuery.payload.stages[].definition.broadcast\tArray of input indexes that get broadcasted. Only present if there are inputs that get broadcasted. multiStageQuery.payload.stages[].definition.processor\tAn object defining the processor logic. multiStageQuery.payload.stages[].definition.signature\tThe output signature of the stage. "},{"title":"Cancel a query task","type":1,"pageTitle":"SQL-based ingestion API","url":"/docs/27.0.0/api-reference/sql-ingestion-api#cancel-a-query-task","content":"Request HTTPcurlPython POST /druid/indexer/v1/task/<taskId>/shutdown Response { "task": "query-655efe33-781a-4c50-ae84-c2911b42d63c" } "},{"title":"Common","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#common","content":"All services support the following endpoints. You can use each endpoint with the ports for each type of service. The following table contains port addresses for a local configuration: Service\tPort addressCoordinator\t8081 Overlord\t8081 Router\t8888 Broker\t8082 Historical\t8083 MiddleManager\t8091 "},{"title":"Get service information","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#get-service-information","content":"Retrieves the Druid version, loaded extensions, memory used, total memory, and other useful information about the individual service. Modify the host and port for the endpoint to match the service to query. Refer to the default service ports for the port numbers. 
URL GET /status Responses 200 SUCCESS Successfully retrieved service information Sample request cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/status" Sample response Click to show sample response { "version": "26.0.0", "modules": [ { "name": "org.apache.druid.common.aws.AWSModule", "artifact": "druid-aws-common", "version": "26.0.0" }, { "name": "org.apache.druid.common.gcp.GcpModule", "artifact": "druid-gcp-common", "version": "26.0.0" }, { "name": "org.apache.druid.storage.hdfs.HdfsStorageDruidModule", "artifact": "druid-hdfs-storage", "version": "26.0.0" }, { "name": "org.apache.druid.indexing.kafka.KafkaIndexTaskModule", "artifact": "druid-kafka-indexing-service", "version": "26.0.0" }, { "name": "org.apache.druid.query.aggregation.datasketches.theta.SketchModule", "artifact": "druid-datasketches", "version": "26.0.0" }, { "name": "org.apache.druid.query.aggregation.datasketches.theta.oldapi.OldApiSketchModule", "artifact": "druid-datasketches", "version": "26.0.0" }, { "name": "org.apache.druid.query.aggregation.datasketches.quantiles.DoublesSketchModule", "artifact": "druid-datasketches", "version": "26.0.0" }, { "name": "org.apache.druid.query.aggregation.datasketches.tuple.ArrayOfDoublesSketchModule", "artifact": "druid-datasketches", "version": "26.0.0" }, { "name": "org.apache.druid.query.aggregation.datasketches.hll.HllSketchModule", "artifact": "druid-datasketches", "version": "26.0.0" }, { "name": "org.apache.druid.query.aggregation.datasketches.kll.KllSketchModule", "artifact": "druid-datasketches", "version": "26.0.0" }, { "name": "org.apache.druid.msq.guice.MSQExternalDataSourceModule", "artifact": "druid-multi-stage-query", "version": "26.0.0" }, { "name": "org.apache.druid.msq.guice.MSQIndexingModule", "artifact": "druid-multi-stage-query", "version": "26.0.0" }, { "name": "org.apache.druid.msq.guice.MSQDurableStorageModule", "artifact": "druid-multi-stage-query", "version": "26.0.0" }, { "name": "org.apache.druid.msq.guice.MSQServiceClientModule", "artifact": "druid-multi-stage-query", "version": "26.0.0" }, { "name": "org.apache.druid.msq.guice.MSQSqlModule", "artifact": "druid-multi-stage-query", "version": "26.0.0" }, { "name": "org.apache.druid.msq.guice.SqlTaskModule", "artifact": "druid-multi-stage-query", "version": "26.0.0" } ], "memory": { "maxMemory": 268435456, "totalMemory": 268435456, "freeMemory": 139060688, "usedMemory": 129374768, "directMemory": 134217728 } } "},{"title":"Get service health","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#get-service-health","content":"Retrieves the online status of the individual Druid service. It is a simple health check to determine if the service is running and accessible. If online, it will always return a boolean true value, indicating that the service can receive API calls. This endpoint is suitable for automated health checks. Modify the host and port for the endpoint to match the service to query. Refer to the default service ports for the port numbers. Additional checks for readiness should use the Historical segment readiness and Broker query readiness endpoints. 
URL GET /status/health Responses 200 SUCCESS Successfully retrieved service health Sample request cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/status/health" Sample response Click to show sample response true "},{"title":"Get configuration properties","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#get-configuration-properties","content":"Retrieves the current configuration properties of the individual service queried. Modify the host and port for the endpoint to match the service to query. Refer to the default service ports for the port numbers. URL GET /status/properties Responses 200 SUCCESS Successfully retrieved service configuration properties Sample request cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/status/properties" Sample response Click to show sample response { "gopherProxySet": "false", "awt.toolkit": "sun.lwawt.macosx.LWCToolkit", "druid.monitoring.monitors": "[\\"org.apache.druid.java.util.metrics.JvmMonitor\\"]", "java.specification.version": "11", "sun.cpu.isalist": "", "druid.plaintextPort": "8888", "sun.jnu.encoding": "UTF-8", "druid.indexing.doubleStorage": "double", "druid.metadata.storage.connector.port": "1527", "java.class.path": "/Users/genericUserPath", "log4j.shutdownHookEnabled": "true", "java.vm.vendor": "Homebrew", "sun.arch.data.model": "64", "druid.extensions.loadList": "[\\"druid-hdfs-storage\\", \\"druid-kafka-indexing-service\\", \\"druid-datasketches\\", \\"druid-multi-stage-query\\"]", "java.vendor.url": "https://github.com/Homebrew/homebrew-core/issues", "druid.router.coordinatorServiceName": "druid/coordinator", "user.timezone": "UTC", "druid.global.http.eagerInitialization": "false", "os.name": "Mac OS X", "java.vm.specification.version": "11", "sun.java.launcher": "SUN_STANDARD", "user.country": "US", "sun.boot.library.path": "/opt/homebrew/Cellar/openjdk@11/11.0.19/libexec/openjdk.jdk/Contents/Home/lib", "sun.java.command": "org.apache.druid.cli.Main server router", "http.nonProxyHosts": "local|*.local|169.254/16|*.169.254/16", "jdk.debug": "release", "druid.metadata.storage.connector.host": "localhost", "sun.cpu.endian": "little", "druid.zk.paths.base": "/druid", "user.home": "/Users/genericUser", "user.language": "en", "java.specification.vendor": "Oracle Corporation", "java.version.date": "2023-04-18", "java.home": "/opt/homebrew/Cellar/openjdk@11/11.0.19/libexec/openjdk.jdk/Contents/Home", "druid.service": "druid/router", "druid.selectors.coordinator.serviceName": "druid/coordinator", "druid.metadata.storage.connector.connectURI": "jdbc:derby://localhost:1527/var/druid/metadata.db;create=true", "file.separator": "/", "druid.selectors.indexing.serviceName": "druid/overlord", "java.vm.compressedOopsMode": "Zero based", "druid.metadata.storage.type": "derby", "line.separator": "\\n", "druid.log.path": "/Users/genericUserPath", "java.vm.specification.vendor": "Oracle Corporation", "java.specification.name": "Java Platform API Specification", "druid.indexer.logs.directory": "var/druid/indexing-logs", "java.awt.graphicsenv": "sun.awt.CGraphicsEnvironment", "druid.router.defaultBrokerServiceName": "druid/broker", "druid.storage.storageDirectory": "var/druid/segments", "sun.management.compiler": "HotSpot 64-Bit Tiered Compilers", "ftp.nonProxyHosts": "local|*.local|169.254/16|*.169.254/16", "java.runtime.version": "11.0.19+0", "user.name": "genericUser", "druid.indexer.logs.type": "file", "druid.host": "localhost", "log4j2.is.webapp": "false", "path.separator": ":", "os.version": "12.6.5", 
"druid.lookup.enableLookupSyncOnStartup": "false", "java.runtime.name": "OpenJDK Runtime Environment", "druid.zk.service.host": "localhost", "file.encoding": "UTF-8", "druid.sql.planner.useGroupingSetForExactDistinct": "true", "druid.router.managementProxy.enabled": "true", "java.vm.name": "OpenJDK 64-Bit Server VM", "java.vendor.version": "Homebrew", "druid.startup.logging.logProperties": "true", "java.vendor.url.bug": "https://github.com/Homebrew/homebrew-core/issues", "log4j.shutdownCallbackRegistry": "org.apache.druid.common.config.Log4jShutdown", "java.io.tmpdir": "var/tmp", "druid.sql.enable": "true", "druid.emitter.logging.logLevel": "info", "java.version": "11.0.19", "user.dir": "/Users/genericUser/Downloads/apache-druid-26.0.0", "os.arch": "aarch64", "java.vm.specification.name": "Java Virtual Machine Specification", "druid.node.type": "router", "java.awt.printerjob": "sun.lwawt.macosx.CPrinterJob", "sun.os.patch.level": "unknown", "java.util.logging.manager": "org.apache.logging.log4j.jul.LogManager", "java.library.path": "/Users/genericUserPath", "java.vendor": "Homebrew", "java.vm.info": "mixed mode", "java.vm.version": "11.0.19+0", "druid.emitter": "noop", "sun.io.unicode.encoding": "UnicodeBig", "druid.storage.type": "local", "druid.expressions.useStrictBooleans": "true", "java.class.version": "55.0", "socksNonProxyHosts": "local|*.local|169.254/16|*.169.254/16", "druid.server.hiddenProperties": "[\\"druid.s3.accessKey\\",\\"druid.s3.secretKey\\",\\"druid.metadata.storage.connector.password\\", \\"password\\", \\"key\\", \\"token\\", \\"pwd\\"]" } "},{"title":"Get node discovery status and cluster integration confirmation","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#get-node-discovery-status-and-cluster-integration-confirmation","content":"Retrieves a JSON map of the form {"selfDiscovered": true/false}, indicating whether the node has received a confirmation from the central node discovery mechanism (currently ZooKeeper) of the Druid cluster that the node has been added to the cluster. Only consider a Druid node "healthy" or "ready" in automated deployment/container management systems when this endpoint returns {"selfDiscovered": true}. Nodes experiencing network issues may become isolated and are not healthy. For nodes that use Zookeeper segment discovery, a response of {"selfDiscovered": true} indicates that the node's Zookeeper client has started receiving data from the Zookeeper cluster, enabling timely discovery of segments and other nodes. URL GET /status/selfDiscovered/status Responses 200 SUCCESS Node was successfully added to the cluster Sample request cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/status/selfDiscovered/status" Sample response Click to show sample response { "selfDiscovered": true } "},{"title":"Get node self-discovery status","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#get-node-self-discovery-status","content":"Returns an HTTP status code to indicate node discovery within the Druid cluster. This endpoint is similar to the status/selfDiscovered/status endpoint, but relies on HTTP status codes alone. Use this endpoint for monitoring checks that are unable to examine the response body. For example, AWS load balancer health checks. 
URL GET /status/selfDiscovered Responses 200 SUCCESS503 SERVICE UNAVAILABLE Successfully retrieved node status Sample request cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/status/selfDiscovered" Sample response A successful response to this endpoint results in an empty response body. "},{"title":"Coordinator","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#coordinator","content":""},{"title":"Get Coordinator leader address","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#get-coordinator-leader-address","content":"Retrieves the address of the current leader Coordinator of the cluster. If any request is sent to a non-leader Coordinator, the request is automatically redirected to the leader Coordinator. URL GET /druid/coordinator/v1/leader Responses 200 SUCCESS Successfully retrieved leader Coordinator address Sample request cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/leader" Sample response Click to show sample response http://localhost:8081 "},{"title":"Get Coordinator leader status","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#get-coordinator-leader-status","content":"Retrieves a JSON object with a leader key. Returns true if this server is the current leader Coordinator of the cluster. To get the individual address of the leader Coordinator node, see the leader endpoint. Use this endpoint as a load balancer status check when you only want the active leader to be considered in-service at the load balancer. URL GET /druid/coordinator/v1/isLeader Responses 200 SUCCESS404 NOT FOUND Current server is the leader Sample request cURLHTTP curl "http://COORDINATOR_IP:COORDINATOR_PORT/druid/coordinator/v1/isLeader" Sample response Click to show sample response { "leader": true } "},{"title":"Overlord","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#overlord","content":""},{"title":"Get Overlord leader address","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#get-overlord-leader-address","content":"Retrieves the address of the current leader Overlord of the cluster. In a cluster of multiple Overlords, only one Overlord assumes the leading role, while the remaining Overlords remain on standby. URL GET /druid/indexer/v1/leader Responses 200 SUCCESS Successfully retrieved leader Overlord address Sample request cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/leader" Sample response Click to show sample response http://localhost:8081 "},{"title":"Get Overlord leader status","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#get-overlord-leader-status","content":"Retrieves a JSON object with a leader property. The value can be true or false, indicating if this server is the current leader Overlord of the cluster. To get the individual address of the leader Overlord node, see the leader endpoint. Use this endpoint as a load balancer status check when you only want the active leader to be considered in-service at the load balancer. 
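As a sketch of such a check (hedged; placeholders and flags are illustrative), the 404 returned by non-leaders can be treated as out-of-service so that only the leader stays in rotation: # Exit code 0 on the current leader (HTTP 200); nonzero on standby Overlords (HTTP 404) curl --fail --silent "http://OVERLORD_IP:OVERLORD_PORT/druid/indexer/v1/isLeader"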
URL GET /druid/indexer/v1/isLeader Responses 200 SUCCESS404 NOT FOUND Current server is the leader Sample request cURLHTTP curl "http://OVERLORD_IP:OVERLORD_PORT/druid/indexer/v1/isLeader" Sample response Click to show sample response { "leader": true } "},{"title":"MiddleManager","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#middlemanager","content":""},{"title":"Get MiddleManager state status","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#get-middlemanager-state-status","content":"Retrieves the enabled state of the MiddleManager. Returns a JSON object keyed by the combined druid.host and druid.port with a boolean true or false state as the value. URL GET /druid/worker/v1/enabled Responses 200 SUCCESS Successfully retrieved MiddleManager state Sample request cURLHTTP curl "http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT/druid/worker/v1/enabled" Sample response Click to show sample response { "localhost:8091": true } "},{"title":"Get active tasks","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#get-active-tasks","content":"Retrieves a list of active tasks being run on the MiddleManager. Returns a JSON list of task ID strings. Note that for normal usage, you should use the /druid/indexer/v1/tasks Tasks API endpoint or one of the task state specific variants instead. URL GET /druid/worker/v1/tasks Responses 200 SUCCESS Successfully retrieved active tasks Sample request cURLHTTP curl "http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT/druid/worker/v1/tasks" Sample response Click to show sample response [ "index_parallel_wikipedia_mgchefio_2023-06-13T22:18:05.360Z" ] "},{"title":"Get task log","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#get-task-log","content":"Retrieves the task log output stream by task ID. For normal usage, you should use the /druid/indexer/v1/task/{taskId}/log Tasks API endpoint instead. URL GET /druid/worker/v1/task/:taskId/log "},{"title":"Shut down running task","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#shut-down-running-task","content":"Shuts down a running task by ID. For normal usage, you should use the /druid/indexer/v1/task/:taskId/shutdown Tasks API endpoint instead. URL POST /druid/worker/v1/task/:taskId/shutdown Responses 200 SUCCESS Successfully shut down a task Sample request The following example shuts down a task with the specified ID index_kafka_wikiticker_f7011f8ffba384b_fpeclode. cURLHTTP curl "http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT/druid/worker/v1/task/index_kafka_wikiticker_f7011f8ffba384b_fpeclode/shutdown" Sample response Click to show sample response { "task":"index_kafka_wikiticker_f7011f8ffba384b_fpeclode" } "},{"title":"Disable MiddleManager","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#disable-middlemanager","content":"Disables a MiddleManager, causing it to stop accepting new tasks while it completes all existing tasks. Returns a JSON object keyed by the combined druid.host and druid.port. 
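A hedged sketch of combining the disable endpoint with the active tasks endpoint to drain a MiddleManager before maintenance (the polling interval and placeholders are assumptions): # Stop accepting new tasks on this MiddleManager curl --request POST "http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT/druid/worker/v1/disable" # Poll until the active task list is empty, then restart or decommission the process while [ "$(curl --silent "http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT/druid/worker/v1/tasks")" != "[]" ]; do sleep 30; done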
URL POST /druid/worker/v1/disable Responses 200 SUCCESS Successfully disabled MiddleManager Sample request cURLHTTP curl "http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT/druid/worker/v1/disable" Sample response Click to show sample response { "localhost:8091":"disabled" } "},{"title":"Enable MiddleManager","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#enable-middlemanager","content":"Enables a MiddleManager, allowing it to accept new tasks again if it was previously disabled. Returns a JSON object keyed by the combined druid.host and druid.port. URL POST /druid/worker/v1/enable Responses 200 SUCCESS Successfully enabled MiddleManager Sample request cURLHTTP curl "http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT/druid/worker/v1/enable" Sample response Click to show sample response { "localhost:8091":"enabled" } "},{"title":"Historical","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#historical","content":""},{"title":"Get segment load status","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#get-segment-load-status","content":"Retrieves a JSON object of the form {"cacheInitialized":value}, where value is either true or false indicating if all segments in the local cache have been loaded. Use this endpoint to know when a Historical service is ready to serve queries after a restart. URL GET /druid/historical/v1/loadstatus Responses 200 SUCCESS Successfully retrieved status Sample request cURLHTTP curl "http://HISTORICAL_IP:HISTORICAL_PORT/druid/historical/v1/loadstatus" Sample response Click to show sample response { "cacheInitialized": true } "},{"title":"Get segment readiness","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#get-segment-readiness","content":"Retrieves a status code to indicate if all segments in the local cache have been loaded. Similar to /druid/historical/v1/loadstatus, but instead of returning JSON with a flag, it returns status codes. URL GET /druid/historical/v1/readiness Responses 200 SUCCESS503 SERVICE UNAVAILABLE Segments in local cache successfully loaded Sample request cURLHTTP curl "http://HISTORICAL_IP:HISTORICAL_PORT/druid/historical/v1/readiness" Sample response A successful response to this endpoint results in an empty response body. "},{"title":"Load Status","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#load-status","content":""},{"title":"Get Broker query load status","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#get-broker-query-load-status","content":"Retrieves a flag indicating if the Broker knows about all segments in the cluster. Use this endpoint to know when a Broker service is ready to accept queries after a restart. URL GET /druid/broker/v1/loadstatus Responses 200 SUCCESS Segments successfully loaded Sample request cURLHTTP curl "http://BROKER_IP:BROKER_PORT/druid/broker/v1/loadstatus" Sample response Click to show sample response { "inventoryInitialized": true } "},{"title":"Get Broker query readiness","type":1,"pageTitle":"Service status API","url":"/docs/27.0.0/api-reference/service-status-api#get-broker-query-readiness","content":"Retrieves a status code to indicate Broker readiness. Readiness signifies the Broker knows about all segments in the cluster and is ready to accept queries after a restart. 
Similar to /druid/broker/v1/loadstatus, but instead of returning a JSON, it returns status codes. URL GET /druid/broker/v1/readiness Responses 200 SUCCESS503 SERVICE UNAVAILABLE Segments successfully loaded Sample request cURLHTTP curl "http://BROKER_IP:BROKER_PORT/druid/broker/v1/readiness" Sample response A successful response to this endpoint results in an empty response body. "},{"title":"Introduction to Apache Druid","type":0,"sectionRef":"#","url":"/docs/27.0.0/design/","content":"","keywords":""},{"title":"Key features of Druid","type":1,"pageTitle":"Introduction to Apache Druid","url":"/docs/27.0.0/design/#key-features-of-druid","content":"Druid's core architecture combines ideas from data warehouses, timeseries databases, and logsearch systems. Some of Druid's key features are: Columnar storage format. Druid uses column-oriented storage. This means it only loads the exact columns needed for a particular query. This greatly improves speed for queries that retrieve only a few columns. Additionally, to support fast scans and aggregations, Druid optimizes column storage for each column according to its data type.Scalable distributed system. Typical Druid deployments span clusters ranging from tens to hundreds of servers. Druid can ingest data at the rate of millions of records per second while retaining trillions of records and maintaining query latencies ranging from the sub-second to a few seconds.Massively parallel processing. Druid can process each query in parallel across the entire cluster.Realtime or batch ingestion. Druid can ingest data either real-time or in batches. Ingested data is immediately available for querying.Self-healing, self-balancing, easy to operate. As an operator, you add servers to scale out or remove servers to scale down. The Druid cluster re-balances itself automatically in the background without any downtime. If a Druid server fails, the system automatically routes data around the damage until the server can be replaced. Druid is designed to run continuously without planned downtime for any reason. This is true for configuration changes and software updates.Cloud-native, fault-tolerant architecture that won't lose data. After ingestion, Druid safely stores a copy of your data in deep storage. Deep storage is typically cloud storage, HDFS, or a shared filesystem. You can recover your data from deep storage even in the unlikely case that all Druid servers fail. For a limited failure that affects only a few Druid servers, replication ensures that queries are still possible during system recoveries.Indexes for quick filtering. Druid uses Roaring orCONCISE compressed bitmap indexes to create indexes to enable fast filtering and searching across multiple columns.Time-based partitioning. Druid first partitions data by time. You can optionally implement additional partitioning based upon other fields. Time-based queries only access the partitions that match the time range of the query which leads to significant performance improvements.Approximate algorithms. Druid includes algorithms for approximate count-distinct, approximate ranking, and computation of approximate histograms and quantiles. These algorithms offer bounded memory usage and are often substantially faster than exact computations. For situations where accuracy is more important than speed, Druid also offers exact count-distinct and exact ranking.Automatic summarization at ingest time. Druid optionally supports data summarization at ingestion time. 
This summarization partially pre-aggregates your data, potentially leading to significant cost savings and performance boosts. "},{"title":"When to use Druid","type":1,"pageTitle":"Introduction to Apache Druid","url":"/docs/27.0.0/design/#when-to-use-druid","content":"Druid is used by many companies of various sizes for many different use cases. For more information seePowered by Apache Druid. Druid is likely a good choice if your use case matches a few of the following: Insert rates are very high, but updates are less common.Most of your queries are aggregation and reporting queries. For example "group by" queries. You may also have searching and scanning queries.You are targeting query latencies of 100ms to a few seconds.Your data has a time component. Druid includes optimizations and design choices specifically related to time.You may have more than one table, but each query hits just one big distributed table. Queries may potentially hit more than one smaller "lookup" table.You have high cardinality data columns, e.g. URLs, user IDs, and need fast counting and ranking over them.You want to load data from Kafka, HDFS, flat files, or object storage like Amazon S3. Situations where you would likely not want to use Druid include: You need low-latency updates of existing records using a primary key. Druid supports streaming inserts, but not streaming updates. You can perform updates using background batch jobs.You are building an offline reporting system where query latency is not very important.You want to do "big" joins, meaning joining one big fact table to another big fact table, and you are okay with these queries taking a long time to complete. "},{"title":"Learn more","type":1,"pageTitle":"Introduction to Apache Druid","url":"/docs/27.0.0/design/#learn-more","content":"Try the Druid Quickstart.Learn more about Druid components in Design.Read about new features and other details of Druid Releases. "},{"title":"Data updates","type":0,"sectionRef":"#","url":"/docs/27.0.0/data-management/update","content":"","keywords":""},{"title":"Overwrite","type":1,"pageTitle":"Data updates","url":"/docs/27.0.0/data-management/update#overwrite","content":"Apache Druid stores data partitioned by time chunk and supports overwriting existing data using time ranges. Data outside the replacement time range is not touched. Overwriting of existing data is done using the same mechanisms as batch ingestion. For example: Native batch with appendToExisting: false, and intervals set to a specific time range, overwrites data for that time range.SQL REPLACE <table> OVERWRITE [ALL | WHERE ...] overwrites data for the entire table or for a specified time range. In both cases, Druid's atomic update mechanism ensures that queries will flip seamlessly from the old data to the new data on a time-chunk-by-time-chunk basis. Ingestion and overwriting cannot run concurrently for the same time range of the same datasource. While an overwrite job is ongoing for a particular time range of a datasource, new ingestions for that time range are queued up. Ingestions for other time ranges proceed as normal. Read-only queries also proceed as normal, using the pre-existing version of the data. info Druid does not support single-record updates by primary key. "},{"title":"Reindex","type":1,"pageTitle":"Data updates","url":"/docs/27.0.0/data-management/update#reindex","content":"Reindexing is an overwrite of existing data where the source of new data is the existing data itself. 
It is used to perform schema changes, repartition data, filter out unwanted data, enrich existing data, and so on. This behaves just like any other overwrite with regard to atomic updates and locking. With native batch, use the druid input source. If needed,transformSpec can be used to filter or modify data during the reindexing job. With SQL, use REPLACE <table> OVERWRITE with SELECT ... FROM <table>. (Druid does not have UPDATE or ALTER TABLE statements.) Any SQL SELECT query can be used to filter, modify, or enrich the data during the reindexing job. "},{"title":"Rolled-up datasources","type":1,"pageTitle":"Data updates","url":"/docs/27.0.0/data-management/update#rolled-up-datasources","content":"Rolled-up datasources can be effectively updated using appends, without rewrites. When you append a row that has an identical set of dimensions to an existing row, queries that use aggregation operators automatically combine those two rows together at query time. Compaction or automatic compaction can be used to physically combine these matching rows together later on, by rewriting segments in the background. "},{"title":"Lookups","type":1,"pageTitle":"Data updates","url":"/docs/27.0.0/data-management/update#lookups","content":"If you have a dimension where values need to be updated frequently, try first using lookups. A classic use case of lookups is when you have an ID dimension stored in a Druid segment, and want to map the ID dimension to a human-readable string that may need to be updated periodically. "},{"title":"Automatic compaction","type":0,"sectionRef":"#","url":"/docs/27.0.0/data-management/automatic-compaction","content":"","keywords":""},{"title":"Enable automatic compaction","type":1,"pageTitle":"Automatic compaction","url":"/docs/27.0.0/data-management/automatic-compaction#enable-automatic-compaction","content":"You can enable automatic compaction for a datasource using the web console or programmatically via an API. This process differs for manual compaction tasks, which can be submitted from the Tasks view of the web console or the Tasks API. "},{"title":"Web console","type":1,"pageTitle":"Automatic compaction","url":"/docs/27.0.0/data-management/automatic-compaction#web-console","content":"Use the web console to enable automatic compaction for a datasource as follows. Click Datasources in the top-level navigation.In the Compaction column, click the edit icon for the datasource to compact.In the Compaction config dialog, configure the auto-compaction settings. The dialog offers a form view as well as a JSON view. Editing the form updates the JSON specification, and editing the JSON updates the form field, if present. Form fields not present in the JSON indicate default values. You may add additional properties to the JSON for auto-compaction settings not displayed in the form. See Configure automatic compaction for supported settings for auto-compaction.Click Submit.Refresh the Datasources view. The Compaction column for the datasource changes from “Not enabled” to “Awaiting first run.” The following screenshot shows the compaction config dialog for a datasource with auto-compaction enabled. To disable auto-compaction for a datasource, click Delete from the Compaction config dialog. Druid does not retain your auto-compaction configuration. 
"},{"title":"Compaction configuration API","type":1,"pageTitle":"Automatic compaction","url":"/docs/27.0.0/data-management/automatic-compaction#compaction-configuration-api","content":"Use the Automatic compaction API to configure automatic compaction. To enable auto-compaction for a datasource, create a JSON object with the desired auto-compaction settings. See Configure automatic compaction for the syntax of an auto-compaction spec. Send the JSON object as a payload in a POST request to /druid/coordinator/v1/config/compaction. The following example configures auto-compaction for the wikipedia datasource: curl --location --request POST 'http://localhost:8081/druid/coordinator/v1/config/compaction' \\ --header 'Content-Type: application/json' \\ --data-raw '{ "dataSource": "wikipedia", "granularitySpec": { "segmentGranularity": "DAY" } }' To disable auto-compaction for a datasource, send a DELETE request to /druid/coordinator/v1/config/compaction/{dataSource}. Replace {dataSource} with the name of the datasource for which to disable auto-compaction. For example: curl --location --request DELETE 'http://localhost:8081/druid/coordinator/v1/config/compaction/wikipedia' "},{"title":"Configure automatic compaction","type":1,"pageTitle":"Automatic compaction","url":"/docs/27.0.0/data-management/automatic-compaction#configure-automatic-compaction","content":"You can configure automatic compaction dynamically without restarting Druid. The automatic compaction system uses the following syntax: { "dataSource": <task_datasource>, "ioConfig": <IO config>, "dimensionsSpec": <custom dimensionsSpec>, "transformSpec": <custom transformSpec>, "metricsSpec": <custom metricsSpec>, "tuningConfig": <parallel indexing task tuningConfig>, "granularitySpec": <compaction task granularitySpec>, "skipOffsetFromLatest": <time period to avoid compaction>, "taskPriority": <compaction task priority>, "taskContext": <task context> } Most fields in the auto-compaction configuration correlate to a typical Druid ingestion spec. The following properties only apply to auto-compaction: skipOffsetFromLatesttaskPrioritytaskContext Since the automatic compaction system provides a management layer on top of manual compaction tasks, the auto-compaction configuration does not include task-specific properties found in a typical Druid ingestion spec. The following properties are automatically set by the Coordinator: type: Set to compact.id: Generated using the task type, datasource name, interval, and timestamp. The task ID is prefixed with coordinator-issued.context: Set according to the user-provided taskContext. Compaction tasks typically fetch all relevant segments prior to launching any subtasks,unless the following properties are all set to non-null values. It is strongly recommended to set them to non-null values to maximize performance and minimize disk usage of the compact tasks launched by auto-compaction: granularitySpec, with non-null values for each of segmentGranularity, queryGranularity, and rollupdimensionsSpecmetricsSpec For more details on each of the specs in an auto-compaction configuration, see Automatic compaction dynamic configuration. "},{"title":"Avoid conflicts with ingestion","type":1,"pageTitle":"Automatic compaction","url":"/docs/27.0.0/data-management/automatic-compaction#avoid-conflicts-with-ingestion","content":"Compaction tasks may be interrupted when they interfere with ingestion. 
For example, this occurs when an ingestion task needs to write data to a segment for a time interval locked for compaction. If there are continuous failures that prevent compaction from making progress, consider one of the following strategies: Set skipOffsetFromLatest to reduce the chance of conflicts between ingestion and compaction. See more details in this section below.Increase the priority value of compaction tasks relative to ingestion tasks. Only recommended for advanced users. This approach can cause ingestion jobs to fail or lag. To change the priority of compaction tasks, set taskPriority to the desired priority value in the auto-compaction configuration. For details on the priority values of different task types, see Lock priority. The Coordinator compacts segments from newest to oldest. In the auto-compaction configuration, you can set a time period, relative to the end time of the most recent segment, for segments that should not be compacted. Assign this value to skipOffsetFromLatest. Note that this offset is not relative to the current time but to the latest segment time. For example, if you want to skip over segments from five days prior to the end time of the most recent segment, assign "skipOffsetFromLatest": "P5D". To set skipOffsetFromLatest, consider how frequently you expect the stream to receive late arriving data. If your stream only occasionally receives late arriving data, the auto-compaction system robustly compacts your data even though data is ingested outside the skipOffsetFromLatest window. For most realtime streaming ingestion use cases, it is reasonable to set skipOffsetFromLatest to a few hours or a day. "},{"title":"Set frequency of compaction runs","type":1,"pageTitle":"Automatic compaction","url":"/docs/27.0.0/data-management/automatic-compaction#set-frequency-of-compaction-runs","content":"If you want the Coordinator to check for compaction more frequently than its indexing period, create a separate group to handle compaction duties. Set the time period of the duty group in the coordinator/runtime.properties file. The following example shows how to create a duty group named compaction and set the auto-compaction period to 1 minute: druid.coordinator.dutyGroups=["compaction"] druid.coordinator.compaction.duties=["compactSegments"] druid.coordinator.compaction.period=PT60S "},{"title":"View automatic compaction statistics","type":1,"pageTitle":"Automatic compaction","url":"/docs/27.0.0/data-management/automatic-compaction#view-automatic-compaction-statistics","content":"After the Coordinator has initiated auto-compaction, you can view compaction statistics for the datasource, including the number of bytes, segments, and intervals already compacted and those awaiting compaction. The Coordinator also reports the total bytes, segments, and intervals not eligible for compaction in accordance with its segment search policy. In the web console, the Datasources view displays auto-compaction statistics. The Tasks view shows the task information for compaction tasks that were triggered by the automatic compaction system. To get statistics by API, send a GET request to /druid/coordinator/v1/compaction/status. To filter the results to a particular datasource, pass the datasource name as a query parameter to the request—for example, /druid/coordinator/v1/compaction/status?dataSource=wikipedia. 
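For example, the following request (assuming the Coordinator at localhost:8081, as in the earlier examples) retrieves auto-compaction statistics for the wikipedia datasource: curl "http://localhost:8081/druid/coordinator/v1/compaction/status?dataSource=wikipedia"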
"},{"title":"Examples","type":1,"pageTitle":"Automatic compaction","url":"/docs/27.0.0/data-management/automatic-compaction#examples","content":"The following examples demonstrate potential use cases in which auto-compaction may improve your Druid performance. See more details in Compaction strategies. The examples in this section do not change the underlying data. "},{"title":"Change segment granularity","type":1,"pageTitle":"Automatic compaction","url":"/docs/27.0.0/data-management/automatic-compaction#change-segment-granularity","content":"You have a stream set up to ingest data with HOUR segment granularity into the wikistream datasource. You notice that your Druid segments are smaller than the recommended segment size of 5 million rows per segment. You wish to automatically compact segments to DAY granularity while leaving the latest week of data not compacted because your stream consistently receives data within that time period. The following auto-compaction configuration compacts existing HOUR segments into DAY segments while leaving the latest week of data not compacted: { "dataSource": "wikistream", "granularitySpec": { "segmentGranularity": "DAY" }, "skipOffsetFromLatest": "P1W", } info Auto-compaction skips datasources containing ALL granularity segments when the target granularity is different. "},{"title":"Update partitioning scheme","type":1,"pageTitle":"Automatic compaction","url":"/docs/27.0.0/data-management/automatic-compaction#update-partitioning-scheme","content":"For your wikipedia datasource, you want to optimize segment access when regularly ingesting data without compromising compute time when querying the data. Your ingestion spec for batch append uses dynamic partitioning to optimize for write-time operations, while your stream ingestion partitioning is configured by the stream service. You want to implement auto-compaction to reorganize the data with a suitable read-time partitioning using multi-dimension range partitioning. Based on the dimensions frequently accessed in queries, you wish to partition on the following dimensions: channel, countryName, namespace. The following auto-compaction configuration compacts updates the wikipedia segments to use multi-dimension range partitioning: { "dataSource": "wikipedia", "tuningConfig": { "partitionsSpec": { "type": "range", "partitionDimensions": [ "channel", "countryName", "namespace" ], "targetRowsPerSegment": 5000000 } } } "},{"title":"Learn more","type":1,"pageTitle":"Automatic compaction","url":"/docs/27.0.0/data-management/automatic-compaction#learn-more","content":"See the following topics for more information: Compaction for an overview of compaction and how to set up manual compaction in Druid.Segment optimization for guidance on evaluating and optimizing Druid segment size.Coordinator process for details on how the Coordinator plans compaction tasks. "},{"title":"Broker","type":0,"sectionRef":"#","url":"/docs/27.0.0/design/broker","content":"","keywords":""},{"title":"Configuration","type":1,"pageTitle":"Broker","url":"/docs/27.0.0/design/broker#configuration","content":"For Apache Druid Broker Process Configuration, see Broker Configuration. For basic tuning guidance for the Broker process, see Basic cluster tuning. "},{"title":"HTTP endpoints","type":1,"pageTitle":"Broker","url":"/docs/27.0.0/design/broker#http-endpoints","content":"For a list of API endpoints supported by the Broker, see Broker API. 
"},{"title":"Overview","type":1,"pageTitle":"Broker","url":"/docs/27.0.0/design/broker#overview","content":"The Broker is the process to route queries to if you want to run a distributed cluster. It understands the metadata published to ZooKeeper about what segments exist on what processes and routes queries such that they hit the right processes. This process also merges the result sets from all of the individual processes together. On start up, Historical processes announce themselves and the segments they are serving in Zookeeper. "},{"title":"Running","type":1,"pageTitle":"Broker","url":"/docs/27.0.0/design/broker#running","content":"org.apache.druid.cli.Main server broker "},{"title":"Forwarding queries","type":1,"pageTitle":"Broker","url":"/docs/27.0.0/design/broker#forwarding-queries","content":"Most Druid queries contain an interval object that indicates a span of time for which data is requested. Likewise, Druid Segments are partitioned to contain data for some interval of time and segments are distributed across a cluster. Consider a simple datasource with 7 segments where each segment contains data for a given day of the week. Any query issued to the datasource for more than one day of data will hit more than one segment. These segments will likely be distributed across multiple processes, and hence, the query will likely hit multiple processes. To determine which processes to forward queries to, the Broker process first builds a view of the world from information in Zookeeper. Zookeeper maintains information about Historical and streaming ingestion Peon processes and the segments they are serving. For every datasource in Zookeeper, the Broker process builds a timeline of segments and the processes that serve them. When queries are received for a specific datasource and interval, the Broker process performs a lookup into the timeline associated with the query datasource for the query interval and retrieves the processes that contain data for the query. The Broker process then forwards down the query to the selected processes. "},{"title":"Caching","type":1,"pageTitle":"Broker","url":"/docs/27.0.0/design/broker#caching","content":"Broker processes employ a cache with an LRU cache invalidation strategy. The Broker cache stores per-segment results. The cache can be local to each Broker process or shared across multiple processes using an external distributed cache such as memcached. Each time a broker process receives a query, it first maps the query to a set of segments. A subset of these segment results may already exist in the cache and the results can be directly pulled from the cache. For any segment results that do not exist in the cache, the broker process will forward the query to the Historical processes. Once the Historical processes return their results, the Broker will store those results in the cache. Real-time segments are never cached and hence requests for real-time data will always be forwarded to real-time processes. Real-time data is perpetually changing and caching the results would be unreliable. "},{"title":"Coordinator Process","type":0,"sectionRef":"#","url":"/docs/27.0.0/design/coordinator","content":"","keywords":""},{"title":"Configuration","type":1,"pageTitle":"Coordinator Process","url":"/docs/27.0.0/design/coordinator#configuration","content":"For Apache Druid Coordinator Process Configuration, see Coordinator Configuration. For basic tuning guidance for the Coordinator process, see Basic cluster tuning. 
"},{"title":"HTTP endpoints","type":1,"pageTitle":"Coordinator Process","url":"/docs/27.0.0/design/coordinator#http-endpoints","content":"For a list of API endpoints supported by the Coordinator, see Service status API reference. "},{"title":"Overview","type":1,"pageTitle":"Coordinator Process","url":"/docs/27.0.0/design/coordinator#overview","content":"The Druid Coordinator process is primarily responsible for segment management and distribution. More specifically, the Druid Coordinator process communicates to Historical processes to load or drop segments based on configurations. The Druid Coordinator is responsible for loading new segments, dropping outdated segments, ensuring that segments are "replicated" (that is, loaded on multiple different Historical nodes) proper (configured) number of times, and moving ("balancing") segments between Historical nodes to keep the latter evenly loaded. The Druid Coordinator runs its duties periodically and the time between each run is a configurable parameter. On each run, the Coordinator assesses the current state of the cluster before deciding on the appropriate actions to take. Similar to the Broker and Historical processes, the Druid Coordinator maintains a connection to a Zookeeper cluster for current cluster information. The Coordinator also maintains a connection to a database containing information about "used" segments (that is, the segments that should be loaded in the cluster) and the loading rules. Before any unassigned segments are serviced by Historical processes, the Historical processes for each tier are first sorted in terms of capacity, with least capacity servers having the highest priority. Unassigned segments are always assigned to the processes with least capacity to maintain a level of balance between processes. The Coordinator does not directly communicate with a historical process when assigning it a new segment; instead the Coordinator creates some temporary information about the new segment under load queue path of the historical process. Once this request is seen, the historical process will load the segment and begin servicing it. "},{"title":"Running","type":1,"pageTitle":"Coordinator Process","url":"/docs/27.0.0/design/coordinator#running","content":"org.apache.druid.cli.Main server coordinator "},{"title":"Rules","type":1,"pageTitle":"Coordinator Process","url":"/docs/27.0.0/design/coordinator#rules","content":"Segments can be automatically loaded and dropped from the cluster based on a set of rules. For more information on rules, see Rule Configuration. "},{"title":"Cleaning up segments","type":1,"pageTitle":"Coordinator Process","url":"/docs/27.0.0/design/coordinator#cleaning-up-segments","content":"On each run, the Druid Coordinator compares the set of used segments in the database with the segments served by some Historical nodes in the cluster. Coordinator sends requests to Historical nodes to unload unused segments or segments that are removed from the database. Segments that are overshadowed (their versions are too old and their data has been replaced by newer segments) are marked as unused. During the next Coordinator's run, they will be unloaded from Historical nodes in the cluster. 
"},{"title":"Segment availability","type":1,"pageTitle":"Coordinator Process","url":"/docs/27.0.0/design/coordinator#segment-availability","content":"If a Historical process restarts or becomes unavailable for any reason, the Druid Coordinator will notice a process has gone missing and treat all segments served by that process as being dropped. Given a sufficient period of time, the segments may be reassigned to other Historical processes in the cluster. However, each segment that is dropped is not immediately forgotten. Instead, there is a transitional data structure that stores all dropped segments with an associated lifetime. The lifetime represents a period of time in which the Coordinator will not reassign a dropped segment. Hence, if a historical process becomes unavailable and available again within a short period of time, the historical process will start up and serve segments from its cache without any those segments being reassigned across the cluster. "},{"title":"Balancing segment load","type":1,"pageTitle":"Coordinator Process","url":"/docs/27.0.0/design/coordinator#balancing-segment-load","content":"To ensure an even distribution of segments across Historical processes in the cluster, the Coordinator process will find the total size of all segments being served by every Historical process each time the Coordinator runs. For every Historical process tier in the cluster, the Coordinator process will determine the Historical process with the highest utilization and the Historical process with the lowest utilization. The percent difference in utilization between the two processes is computed, and if the result exceeds a certain threshold, a number of segments will be moved from the highest utilized process to the lowest utilized process. There is a configurable limit on the number of segments that can be moved from one process to another each time the Coordinator runs. Segments to be moved are selected at random and only moved if the resulting utilization calculation indicates the percentage difference between the highest and lowest servers has decreased. "},{"title":"Automatic compaction","type":1,"pageTitle":"Coordinator Process","url":"/docs/27.0.0/design/coordinator#automatic-compaction","content":"The Druid Coordinator manages the automatic compaction system. Each run, the Coordinator compacts segments by merging small segments or splitting a large one. This is useful when the size of your segments is not optimized which may degrade query performance. See Segment size optimization for details. The Coordinator first finds the segments to compact based on the segment search policy. Once some segments are found, it issues a compaction task to compact those segments. The maximum number of running compaction tasks is min(sum of worker capacity * slotRatio, maxSlots). Note that even if min(sum of worker capacity * slotRatio, maxSlots) = 0, at least one compaction task is always submitted if the compaction is enabled for a dataSource. See Automatic compaction configuration API and Automatic compaction configuration to enable and configure automatic compaction. Compaction tasks might fail due to the following reasons: If the input segments of a compaction task are removed or overshadowed before it starts, that compaction task fails immediately.If a task of a higher priority acquires a time chunk lock for an interval overlapping with the interval of a compaction task, the compaction task fails. 
Once a compaction task fails, the Coordinator simply checks the segments in the interval of the failed task again, and issues another compaction task in the next run. Note that Compacting Segments Coordinator Duty is automatically enabled and run as part of the Indexing Service Duties group. However, Compacting Segments Coordinator Duty can be configured to run in isolation as a separate Coordinator duty group. This allows changing the period of Compacting Segments Coordinator Duty without impacting the period of other Indexing Service Duties. This can be done by setting the following properties. For more details, see custom pluggable Coordinator Duty. druid.coordinator.dutyGroups=[<SOME_GROUP_NAME>] druid.coordinator.<SOME_GROUP_NAME>.duties=["compactSegments"] druid.coordinator.<SOME_GROUP_NAME>.period=<PERIOD_TO_RUN_COMPACTING_SEGMENTS_DUTY> "},{"title":"Segment search policy in automatic compaction","type":1,"pageTitle":"Coordinator Process","url":"/docs/27.0.0/design/coordinator#segment-search-policy-in-automatic-compaction","content":"At every Coordinator run, this policy looks up time chunks from newest to oldest and checks whether the segments in those time chunks need compaction. A set of segments needs compaction if all conditions below are satisfied: 1) Total size of segments in the time chunk is smaller than or equal to the configured inputSegmentSizeBytes. 2) Segments have never been compacted yet or compaction spec has been updated since the last compaction: maxTotalRows or indexSpec. Here are some details with an example. Suppose we have two dataSources (foo, bar) as seen below: foo foo_2017-11-01T00:00:00.000Z_2017-12-01T00:00:00.000Z_VERSIONfoo_2017-11-01T00:00:00.000Z_2017-12-01T00:00:00.000Z_VERSION_1foo_2017-09-01T00:00:00.000Z_2017-10-01T00:00:00.000Z_VERSION bar bar_2017-10-01T00:00:00.000Z_2017-11-01T00:00:00.000Z_VERSIONbar_2017-10-01T00:00:00.000Z_2017-11-01T00:00:00.000Z_VERSION_1 Assuming that each segment is 10 MB and haven't been compacted yet, this policy first returns two segments offoo_2017-11-01T00:00:00.000Z_2017-12-01T00:00:00.000Z_VERSION and foo_2017-11-01T00:00:00.000Z_2017-12-01T00:00:00.000Z_VERSION_1 to compact together because2017-11-01T00:00:00.000Z/2017-12-01T00:00:00.000Z is the most recent time chunk. If the Coordinator has enough task slots for compaction, this policy will continue searching for the next segments and returnbar_2017-10-01T00:00:00.000Z_2017-11-01T00:00:00.000Z_VERSION and bar_2017-10-01T00:00:00.000Z_2017-11-01T00:00:00.000Z_VERSION_1. Finally, foo_2017-09-01T00:00:00.000Z_2017-10-01T00:00:00.000Z_VERSION will be picked up even though there is only one segment in the time chunk of 2017-09-01T00:00:00.000Z/2017-10-01T00:00:00.000Z. The search start point can be changed by setting skipOffsetFromLatest. If this is set, this policy will ignore the segments falling into the time chunk of (the end time of the most recent segment - skipOffsetFromLatest). This is to avoid conflicts between compaction tasks and realtime tasks. Note that realtime tasks have a higher priority than compaction tasks by default. Realtime tasks will revoke the locks of compaction tasks if their intervals overlap, resulting in the termination of the compaction task. For more information, see Avoid conflicts with ingestion. info This policy currently cannot handle the situation when there are a lot of small segments which have the same interval, and their total size exceeds inputSegmentSizeBytes. If it finds such segments, it simply skips them. 
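A hedged configuration sketch relating these knobs (the values are illustrative, not defaults): inputSegmentSizeBytes caps the total size of segments considered per time chunk, and skipOffsetFromLatest moves the search start point away from the most recent data: { "dataSource": "foo", "inputSegmentSizeBytes": 419430400, "skipOffsetFromLatest": "P1D", "granularitySpec": { "segmentGranularity": "DAY" } }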
"},{"title":"FAQ","type":1,"pageTitle":"Coordinator Process","url":"/docs/27.0.0/design/coordinator#faq","content":"Do clients ever contact the Coordinator process? The Coordinator is not involved in a query. Historical processes never directly contact the Coordinator process. The Druid Coordinator tells the Historical processes to load/drop data via Zookeeper, but the Historical processes are completely unaware of the Coordinator. Brokers also never contact the Coordinator. Brokers base their understanding of the data topology on metadata exposed by the Historical processes via ZK and are completely unaware of the Coordinator. Does it matter if the Coordinator process starts up before or after other processes? No. If the Druid Coordinator is not started up, no new segments will be loaded in the cluster and outdated segments will not be dropped. However, the Coordinator process can be started up at any time, and after a configurable delay, will start running Coordinator tasks. This also means that if you have a working cluster and all of your Coordinators die, the cluster will continue to function, it just won’t experience any changes to its data topology. "},{"title":"Deep storage","type":0,"sectionRef":"#","url":"/docs/27.0.0/design/deep-storage","content":"","keywords":""},{"title":"Deep storage options","type":1,"pageTitle":"Deep storage","url":"/docs/27.0.0/design/deep-storage#deep-storage-options","content":"Druid supports multiple options for deep storage, including blob storage from major cloud providers. Select the one that fits your environment. "},{"title":"Local","type":1,"pageTitle":"Deep storage","url":"/docs/27.0.0/design/deep-storage#local","content":"Local storage is intended for use in the following situations: You have just one server.Or, you have multiple servers, and they all have access to a shared filesystem (for example: NFS). In multi-server production clusters, rather than local storage with a shared filesystem, it is instead recommended to use cloud-based deep storage (Amazon S3, Google Cloud Storage, or Azure Blob Storage), S3-compatible storage (like Minio), or HDFS. These options are generally more convenient, more scalable, and more robust than setting up a shared filesystem. The following configurations in common.runtime.properties apply to local storage: Property\tPossible Values\tDescription\tDefaultdruid.storage.type\tlocal Must be set. druid.storage.storageDirectory\tany local directory\tDirectory for storing segments. Must be different from druid.segmentCache.locations and druid.segmentCache.infoDir.\t/tmp/druid/localStorage druid.storage.zip\ttrue, false\tWhether segments in druid.storage.storageDirectory are written as directories (false) or zip files (true).\tfalse For example: druid.storage.type=local druid.storage.storageDirectory=/tmp/druid/localStorage The druid.storage.storageDirectory must be set to a different path than druid.segmentCache.locations ordruid.segmentCache.infoDir. "},{"title":"Amazon S3 or S3-compatible","type":1,"pageTitle":"Deep storage","url":"/docs/27.0.0/design/deep-storage#amazon-s3-or-s3-compatible","content":"See druid-s3-extensions. "},{"title":"Google Cloud Storage","type":1,"pageTitle":"Deep storage","url":"/docs/27.0.0/design/deep-storage#google-cloud-storage","content":"See druid-google-extensions. "},{"title":"Azure Blob Storage","type":1,"pageTitle":"Deep storage","url":"/docs/27.0.0/design/deep-storage#azure-blob-storage","content":"See druid-azure-extensions. 
"},{"title":"HDFS","type":1,"pageTitle":"Deep storage","url":"/docs/27.0.0/design/deep-storage#hdfs","content":"See druid-hdfs-storage extension documentation. "},{"title":"Additional options","type":1,"pageTitle":"Deep storage","url":"/docs/27.0.0/design/deep-storage#additional-options","content":"For additional deep storage options, please see our extensions list. "},{"title":"Querying from deep storage","type":1,"pageTitle":"Deep storage","url":"/docs/27.0.0/design/deep-storage#querying-from-deep-storage","content":"Although not as performant as querying segments stored on disk for Historical processes, you can query from deep storage to access segments that you may not need frequently or with the extreme low latency Druid queries traditionally provide. You trade some performance for a total lower storage cost because you can access more of your data without the need to increase the number or capacity of your Historical processes. For information about how to run queries, see Query from deep storage. "},{"title":"Historical Process","type":0,"sectionRef":"#","url":"/docs/27.0.0/design/historical","content":"","keywords":""},{"title":"Configuration","type":1,"pageTitle":"Historical Process","url":"/docs/27.0.0/design/historical#configuration","content":"For Apache Druid Historical Process Configuration, see Historical Configuration. For basic tuning guidance for the Historical process, see Basic cluster tuning. "},{"title":"HTTP endpoints","type":1,"pageTitle":"Historical Process","url":"/docs/27.0.0/design/historical#http-endpoints","content":"For a list of API endpoints supported by the Historical, please see the Service status API reference. "},{"title":"Running","type":1,"pageTitle":"Historical Process","url":"/docs/27.0.0/design/historical#running","content":"org.apache.druid.cli.Main server historical "},{"title":"Loading and serving segments","type":1,"pageTitle":"Historical Process","url":"/docs/27.0.0/design/historical#loading-and-serving-segments","content":"Each Historical process copies or "pulls" segment files from Deep Storage to local disk in an area called the segment cache. Set the druid.segmentCache.locations to configure the size and location of the segment cache on each Historical process. See Historical general configuration. See the Tuning Guide for more information. The Coordinator controls the assignment of segments to Historicals and the balance of segments between Historicals. Historical processes do not communicate directly with each other, nor do they communicate directly with the Coordinator. Instead, the Coordinator creates ephemeral entries in Zookeeper in a load queue path. Each Historical process maintains a connection to Zookeeper, watching those paths for segment information. For more information about how the Coordinator assigns segments to Historical processes, see Coordinator. When a Historical process detects a new entry in the Zookeeper load queue, it checks its own segment cache. If no information about the segment exists there, the Historical process first retrieves metadata from Zookeeper about the segment, including where the segment is located in Deep Storage and how it needs to decompress and process it. For more information about segment metadata and Druid segments in general, see Segments. After a Historical process pulls down and processes a segment from Deep Storage, Druid advertises the segment as being available for queries from the Broker. This announcement by the Historical is made via Zookeeper, in a served segments path. 
For more information about how the Broker determines what data is available for queries, please see Broker. To make data from the segment cache available for querying as soon as possible, Historical services search the local segment cache upon startup and advertise the segments found there. "},{"title":"Loading and serving segments from cache","type":1,"pageTitle":"Historical Process","url":"/docs/27.0.0/design/historical#loading-and-serving-segments-from-cache","content":"The segment cache uses memory mapping. The cache consumes memory from the underlying operating system so Historicals can hold parts of segment files in memory to increase query performance at the data level. The in-memory segment cache is affected by the size of the Historical JVM, heap / direct memory buffers, and other processes on the operating system itself. At query time, if the required part of a segment file is available in the memory mapped cache or "page cache", the Historical re-uses it and reads it directly from memory. If it is not in the memory-mapped cache, the Historical reads that part of the segment from disk. In this case, there is potential for new data to flush other segment data from memory. This means that if free operating system memory is close to druid.server.maxSize, the more likely that segment data will be available in memory and reduce query times. Conversely, the lower the free operating system memory, the more likely a Historical is to read segments from disk. Note that this memory-mapped segment cache is in addition to other query-level caches. "},{"title":"Querying segments","type":1,"pageTitle":"Historical Process","url":"/docs/27.0.0/design/historical#querying-segments","content":"Please see Querying for more information on querying Historical processes. A Historical can be configured to log and report metrics for every query it services. "},{"title":"Indexing Service","type":0,"sectionRef":"#","url":"/docs/27.0.0/design/indexing-service","content":"","keywords":""},{"title":"Overlord","type":1,"pageTitle":"Indexing Service","url":"/docs/27.0.0/design/indexing-service#overlord","content":"See Overlord. "},{"title":"Middle Managers","type":1,"pageTitle":"Indexing Service","url":"/docs/27.0.0/design/indexing-service#middle-managers","content":"See Middle Manager. "},{"title":"Peons","type":1,"pageTitle":"Indexing Service","url":"/docs/27.0.0/design/indexing-service#peons","content":"See Peon. "},{"title":"Tasks","type":1,"pageTitle":"Indexing Service","url":"/docs/27.0.0/design/indexing-service#tasks","content":"See Tasks. "},{"title":"MiddleManager Process","type":0,"sectionRef":"#","url":"/docs/27.0.0/design/middlemanager","content":"","keywords":""},{"title":"Configuration","type":1,"pageTitle":"MiddleManager Process","url":"/docs/27.0.0/design/middlemanager#configuration","content":"For Apache Druid MiddleManager Process Configuration, see Indexing Service Configuration. For basic tuning guidance for the MiddleManager process, see Basic cluster tuning. "},{"title":"HTTP endpoints","type":1,"pageTitle":"MiddleManager Process","url":"/docs/27.0.0/design/middlemanager#http-endpoints","content":"For a list of API endpoints supported by the MiddleManager, please see the Service status API reference. "},{"title":"Overview","type":1,"pageTitle":"MiddleManager Process","url":"/docs/27.0.0/design/middlemanager#overview","content":"The MiddleManager process is a worker process that executes submitted tasks. Middle Managers forward tasks to Peons that run in separate JVMs. 
The reason we have separate JVMs for tasks is for resource and log isolation. Each Peon is capable of running only one task at a time, however, a MiddleManager may have multiple Peons. "},{"title":"Running","type":1,"pageTitle":"MiddleManager Process","url":"/docs/27.0.0/design/middlemanager#running","content":"org.apache.druid.cli.Main server middleManager "},{"title":"Design","type":0,"sectionRef":"#","url":"/docs/27.0.0/design/architecture","content":"","keywords":""},{"title":"Druid architecture","type":1,"pageTitle":"Design","url":"/docs/27.0.0/design/architecture#druid-architecture","content":"The following diagram shows the services that make up the Druid architecture, how they are typically organized into servers, and how queries and data flow through this architecture. The following sections describe the components of this architecture. "},{"title":"Druid services","type":1,"pageTitle":"Design","url":"/docs/27.0.0/design/architecture#druid-services","content":"Druid has several types of services: Coordinator service manages data availability on the cluster.Overlord service controls the assignment of data ingestion workloads.Broker handles queries from external clients.Router services are optional; they route requests to Brokers, Coordinators, and Overlords.Historical services store queryable data.MiddleManager services ingest data. You can view services in the Services tab in the web console: "},{"title":"Druid servers","type":1,"pageTitle":"Design","url":"/docs/27.0.0/design/architecture#druid-servers","content":"Druid services can be deployed any way you like, but for ease of deployment we suggest organizing them into three server types: Master, Query, and Data. Master: Runs Coordinator and Overlord processes, manages data availability and ingestion.Query: Runs Broker and optional Router processes, handles queries from external clients.Data: Runs Historical and MiddleManager processes, executes ingestion workloads and stores all queryable data. For more details on process and server organization, please see Druid Processes and Servers. "},{"title":"External dependencies","type":1,"pageTitle":"Design","url":"/docs/27.0.0/design/architecture#external-dependencies","content":"In addition to its built-in process types, Druid also has three external dependencies. These are intended to be able to leverage existing infrastructure, where present. "},{"title":"Deep storage","type":1,"pageTitle":"Design","url":"/docs/27.0.0/design/architecture#deep-storage","content":"Druid uses deep storage to store any data that has been ingested into the system. Deep storage is shared file storage accessible by every Druid server. In a clustered deployment, this is typically a distributed object store like S3 or HDFS, or a network mounted filesystem. In a single-server deployment, this is typically local disk. Druid uses deep storage for the following purposes: To store all the data you ingest. Segments that get loaded onto Historical processes for low latency queries are also kept in deep storage for backup purposes. Additionally, segments that are only in deep storage can be used for queries from deep storage.As a way to transfer data in the background between Druid processes. Druid stores data in files called segments. Historical processes cache data segments on local disk and serve queries from that cache as well as from an in-memory cache. Segments on disk for Historical processes provide the low latency querying performance Druid is known for. You can also query directly from deep storage. 
When you query segments that exist only in deep storage, you trade some performance for the ability to query more of your data without necessarily having to scale your Historical processes. When determining sizing for your storage, keep the following in mind: Deep storage needs to be able to hold all the data that you ingest into Druid.On-disk storage for Historical processes needs to be able to accommodate the data you want to load onto them to run queries. The data on Historical processes should be data that you access frequently and need to run low latency queries on. Deep storage is an important part of Druid's elastic, fault-tolerant design. Druid bootstraps from deep storage even if every single data server is lost and re-provisioned. For more details, please see the Deep storage page. "},{"title":"Metadata storage","type":1,"pageTitle":"Design","url":"/docs/27.0.0/design/architecture#metadata-storage","content":"The metadata storage holds various shared system metadata such as segment usage information and task information. In a clustered deployment, this is typically a traditional RDBMS like PostgreSQL or MySQL. In a single-server deployment, it is typically a locally-stored Apache Derby database. For more details, please see the Metadata storage page. "},{"title":"ZooKeeper","type":1,"pageTitle":"Design","url":"/docs/27.0.0/design/architecture#zookeeper","content":"Used for internal service discovery, coordination, and leader election. For more details, please see the ZooKeeper page. "},{"title":"Storage design","type":1,"pageTitle":"Design","url":"/docs/27.0.0/design/architecture#storage-design","content":""},{"title":"Datasources and segments","type":1,"pageTitle":"Design","url":"/docs/27.0.0/design/architecture#datasources-and-segments","content":"Druid data is stored in datasources, which are similar to tables in a traditional RDBMS. Each datasource is partitioned by time and, optionally, further partitioned by other attributes. Each time range is called a chunk (for example, a single day, if your datasource is partitioned by day). Within a chunk, data is partitioned into one or more segments. Each segment is a single file, typically comprising up to a few million rows of data. Since segments are organized into time chunks, it's sometimes helpful to think of segments as living on a timeline like the following: A datasource may have anywhere from just a few segments, up to hundreds of thousands and even millions of segments. Each segment is created by a MiddleManager as mutable and uncommitted. Data is queryable as soon as it is added to an uncommitted segment. The segment building process accelerates later queries by producing a data file that is compact and indexed: conversion to columnar format; indexing with bitmap indexes; and compression (dictionary encoding with id storage minimization for String columns, bitmap compression for bitmap indexes, and type-aware compression for all columns). Periodically, segments are committed and published to deep storage, become immutable, and move from MiddleManagers to the Historical processes. An entry about the segment is also written to the metadata store. This entry is a self-describing bit of metadata about the segment, including things like the schema of the segment, its size, and its location on deep storage. These entries tell the Coordinator what data is available on the cluster. For details on the segment file format, please see segment files. For details on modeling your data in Druid, see schema design. 
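To make the datasource, time chunk, and segment layout concrete, the following hedged example lists a datasource's segments through the sys.segments system table described later in this document. The wikipedia datasource name and the ROUTER host are placeholders, following the SQL API examples later in this document:

```
$ cat segments-query.json
{"query":"SELECT \"start\", \"end\", version, partition_num, size FROM sys.segments WHERE datasource = 'wikipedia' ORDER BY \"start\", partition_num"}
$ curl -XPOST -H'Content-Type: application/json' http://ROUTER:8888/druid/v2/sql/ -d @segments-query.json
```

Each returned row is one segment file; rows that share the same start and end belong to the same time chunk, and partition_num distinguishes segments within that chunk.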
"},{"title":"Indexing and handoff","type":1,"pageTitle":"Design","url":"/docs/27.0.0/design/architecture#indexing-and-handoff","content":"Indexing is the mechanism by which new segments are created, and handoff is the mechanism by which they are published and begin being served by Historical processes. On the indexing side: An indexing task starts running and building a new segment. It must determine the identifier of the segment before it starts building it. For a task that is appending (like a Kafka task, or an index task in append mode) this is done by calling an "allocate" API on the Overlord to potentially add a new partition to an existing set of segments. For a task that is overwriting (like a Hadoop task, or an index task not in append mode) this is done by locking an interval and creating a new version number and new set of segments.If the indexing task is a realtime task (like a Kafka task) then the segment is immediately queryable at this point. It's available, but unpublished.When the indexing task has finished reading data for the segment, it pushes it to deep storage and then publishes it by writing a record into the metadata store.If the indexing task is a realtime task, then to ensure data is continuously available for queries, it waits for a Historical process to load the segment. If the indexing task is not a realtime task, it exits immediately. On the Coordinator / Historical side: The Coordinator polls the metadata store periodically (by default, every 1 minute) for newly published segments.When the Coordinator finds a segment that is published and used, but unavailable, it chooses a Historical process to load that segment and instructs that Historical to do so.The Historical loads the segment and begins serving it.At this point, if the indexing task was waiting for handoff, it will exit. "},{"title":"Segment identifiers","type":1,"pageTitle":"Design","url":"/docs/27.0.0/design/architecture#segment-identifiers","content":"Segments all have a four-part identifier with the following components: Datasource name.Time interval (for the time chunk containing the segment; this corresponds to the segmentGranularity specified at ingestion time).Version number (generally an ISO8601 timestamp corresponding to when the segment set was first started).Partition number (an integer, unique within a datasource+interval+version; may not necessarily be contiguous). For example, this is the identifier for a segment in datasource clarity-cloud0, time chunk2018-05-21T16:00:00.000Z/2018-05-21T17:00:00.000Z, version 2018-05-21T15:56:09.909Z, and partition number 1: clarity-cloud0_2018-05-21T16:00:00.000Z_2018-05-21T17:00:00.000Z_2018-05-21T15:56:09.909Z_1 Segments with partition number 0 (the first partition in a chunk) omit the partition number, like the following example, which is a segment in the same time chunk as the previous one, but with partition number 0 instead of 1: clarity-cloud0_2018-05-21T16:00:00.000Z_2018-05-21T17:00:00.000Z_2018-05-21T15:56:09.909Z "},{"title":"Segment versioning","type":1,"pageTitle":"Design","url":"/docs/27.0.0/design/architecture#segment-versioning","content":"You may be wondering what the "version number" described in the previous section is for. Or, you might not be, in which case good for you and you can skip this section! The version number provides a form of multi-version concurrency control (MVCC) to support batch-mode overwriting. If all you ever do is append data, then there will be just a single version for each time chunk. 
But when you overwrite data, Druid will seamlessly switch from querying the old version to instead query the new, updated versions. Specifically, a new set of segments is created with the same datasource, same time interval, but a higher version number. This is a signal to the rest of the Druid system that the older version should be removed from the cluster, and the new version should replace it. The switch appears to happen instantaneously to a user, because Druid handles this by first loading the new data (but not allowing it to be queried), and then, as soon as the new data is all loaded, switching all new queries to use those new segments. Then it drops the old segments a few minutes later. "},{"title":"Segment lifecycle","type":1,"pageTitle":"Design","url":"/docs/27.0.0/design/architecture#segment-lifecycle","content":"Each segment has a lifecycle that involves the following three major areas: Metadata store: Segment metadata (a small JSON payload generally no more than a few KB) is stored in themetadata store once a segment is done being constructed. The act of inserting a record for a segment into the metadata store is called publishing. These metadata records have a boolean flag named used, which controls whether the segment is intended to be queryable or not. Segments created by realtime tasks will be available before they are published, since they are only published when the segment is complete and will not accept any additional rows of data.Deep storage: Segment data files are pushed to deep storage once a segment is done being constructed. This happens immediately before publishing metadata to the metadata store.Availability for querying: Segments are available for querying on some Druid data server, like a realtime task, directly from deep storage, or a Historical process. You can inspect the state of currently active segments using the Druid SQLsys.segments table. It includes the following flags: is_published: True if segment metadata has been published to the metadata store and used is true.is_available: True if the segment is currently available for querying, either on a realtime task or Historical process.is_realtime: True if the segment is only available on realtime tasks. For datasources that use realtime ingestion, this will generally start off true and then become false as the segment is published and handed off.is_overshadowed: True if the segment is published (with used set to true) and is fully overshadowed by some other published segments. Generally this is a transient state, and segments in this state will soon have their used flag automatically set to false. "},{"title":"Availability and consistency","type":1,"pageTitle":"Design","url":"/docs/27.0.0/design/architecture#availability-and-consistency","content":"Druid has an architectural separation between ingestion and querying, as described above inIndexing and handoff. This means that when understanding Druid's availability and consistency properties, we must look at each function separately. On the ingestion side, Druid's primary ingestion methods are all pull-based and offer transactional guarantees. This means that you are guaranteed that ingestion using these methods will publish in an all-or-nothing manner: Supervised "seekable-stream" ingestion methods like Kafka andKinesis. With these methods, Druid commits stream offsets to itsmetadata store alongside segment metadata, in the same transaction. Note that ingestion of data that has not yet been published can be rolled back if ingestion tasks fail. 
In this case, partially-ingested data is discarded, and Druid will resume ingestion from the last committed set of stream offsets. This ensures exactly-once publishing behavior.Hadoop-based batch ingestion. Each task publishes all segment metadata in a single transaction.Native batch ingestion. In parallel mode, the supervisor task publishes all segment metadata in a single transaction after the subtasks are finished. In simple (single-task) mode, the single task publishes all segment metadata in a single transaction after it is complete. Additionally, some ingestion methods offer an idempotency guarantee. This means that repeated executions of the same ingestion will not cause duplicate data to be ingested: Supervised "seekable-stream" ingestion methods like Kafka andKinesis are idempotent due to the fact that stream offsets and segment metadata are stored together and updated in lock-step.Hadoop-based batch ingestion is idempotent unless one of your input sources is the same Druid datasource that you are ingesting into. In this case, running the same task twice is non-idempotent, because you are adding to existing data instead of overwriting it.Native batch ingestion is idempotent unlessappendToExisting is true, or one of your input sources is the same Druid datasource that you are ingesting into. In either of these two cases, running the same task twice is non-idempotent, because you are adding to existing data instead of overwriting it. On the query side, the Druid Broker is responsible for ensuring that a consistent set of segments is involved in a given query. It selects the appropriate set of segment versions to use when the query starts based on what is currently available. This is supported by atomic replacement, a feature that ensures that from a user's perspective, queries flip instantaneously from an older version of data to a newer set of data, with no consistency or performance impact. (See segment versioning above.) This is used for Hadoop-based batch ingestion, native batch ingestion when appendToExisting is false, and compaction. Note that atomic replacement happens for each time chunk individually. If a batch ingestion task or compaction involves multiple time chunks, then each time chunk will undergo atomic replacement soon after the task finishes, but the replacements will not all happen simultaneously. Typically, atomic replacement in Druid is based on a core set concept that works in conjunction with segment versions. When a time chunk is overwritten, a new core set of segments is created with a higher version number. The core set must all be available before the Broker will use them instead of the older set. There can also only be one core set per version per time chunk. Druid will also only use a single version at a time per time chunk. Together, these properties provide Druid's atomic replacement guarantees. Druid also supports an experimental segment locking mode that is activated by settingforceTimeChunkLock to false in the context of an ingestion task. In this case, Druid creates an atomic update group using the existing version for the time chunk, instead of creating a new core set with a new version number. There can be multiple atomic update groups with the same version number per time chunk. Each one replaces a specific set of earlier segments in the same time chunk and with the same version number. Druid will query the latest one that is fully available. 
This is a more powerful version of the core set concept, because it enables atomically replacing a subset of data for a time chunk, as well as doing atomic replacement and appending simultaneously. If segments become unavailable due to multiple Historicals going offline simultaneously (beyond your replication factor), then Druid queries will include only the segments that are still available. In the background, Druid will reload these unavailable segments on other Historicals as quickly as possible, at which point they will be included in queries again. "},{"title":"Query processing","type":1,"pageTitle":"Design","url":"/docs/27.0.0/design/architecture#query-processing","content":"Queries are distributed across the Druid cluster, and managed by a Broker. Queries first enter the Broker, which identifies the segments with data that may pertain to that query. The list of segments is always pruned by time, and may also be pruned by other attributes depending on how your datasource is partitioned. The Broker will then identify which Historicals andMiddleManagers are serving those segments and distributes a rewritten subquery to each of those processes. The Historical/MiddleManager processes execute each subquery and return results to the Broker. The Broker merges the partial results to get the final answer, which it returns to the original caller. Time and attribute pruning is an important way that Druid limits the amount of data that must be scanned for each query, but it is not the only way. For filters at a more granular level than what the Broker can use for pruning,indexing structuresinside each segment allow Historicals to figure out which (if any) rows match the filter set before looking at any row of data. Once a Historical knows which rows match a particular query, it only accesses the specific rows and columns it needs for that query. So Druid uses three different techniques to maximize query performance: Pruning the set of segments accessed for a query.Within each segment, using indexes to identify which rows must be accessed.Within each segment, only reading the specific rows and columns that are relevant to a particular query. For more details about how Druid executes queries, refer to the Query executiondocumentation. "},{"title":"Indexer Process","type":0,"sectionRef":"#","url":"/docs/27.0.0/design/indexer","content":"","keywords":""},{"title":"Configuration","type":1,"pageTitle":"Indexer Process","url":"/docs/27.0.0/design/indexer#configuration","content":"For Apache Druid Indexer Process Configuration, see Indexer Configuration. "},{"title":"HTTP endpoints","type":1,"pageTitle":"Indexer Process","url":"/docs/27.0.0/design/indexer#http-endpoints","content":"The Indexer process shares the same HTTP endpoints as the MiddleManager. "},{"title":"Running","type":1,"pageTitle":"Indexer Process","url":"/docs/27.0.0/design/indexer#running","content":"org.apache.druid.cli.Main server indexer "},{"title":"Task resource sharing","type":1,"pageTitle":"Indexer Process","url":"/docs/27.0.0/design/indexer#task-resource-sharing","content":"The following resources are shared across all tasks running inside an Indexer process. Query resources The query processing threads and buffers are shared across all tasks. The Indexer will serve queries from a single endpoint shared by all tasks. If query caching is enabled, the query cache is also shared across all tasks. Server HTTP threads The Indexer maintains two equally sized pools of HTTP threads. 
One pool is exclusively used for task control messages between the Overlord and the Indexer ("chat handler threads"). The other pool is used for handling all other HTTP requests. The size of the pools is configured by the druid.server.http.numThreads configuration (e.g., if this is set to 10, there will be 10 chat handler threads and 10 non-chat handler threads). In addition to these two pools, two separate threads are allocated for lookup handling. If lookups are not used, these threads will not be used. Memory sharing The Indexer uses the druid.worker.globalIngestionHeapLimitBytes configuration to impose a global heap limit across all of the tasks it is running. This global limit is evenly divided across the number of task slots configured by druid.worker.capacity. To apply the per-task heap limit, the Indexer will override maxBytesInMemory in task tuning configs (i.e., ignoring the default value or any user-configured value). maxRowsInMemory will also be overridden to an essentially unlimited value: the Indexer does not support row limits. By default, druid.worker.globalIngestionHeapLimitBytes is set to 1/6th of the available JVM heap. This default is chosen to align with the default value of maxBytesInMemory in task tuning configs when using the MiddleManager/Peon system, which is also 1/6th of the JVM heap. The peak usage for rows held in heap memory relates to the interaction between the maxBytesInMemory and maxPendingPersists properties in the task tuning configs. When the amount of row data held in-heap by a task reaches the limit specified by maxBytesInMemory, a task will persist the in-heap row data. After the persist has been started, the task can again ingest up to maxBytesInMemory bytes worth of row data while the persist is running. This means that the peak in-heap usage for row data can be up to approximately maxBytesInMemory * (2 + maxPendingPersists). The default value of maxPendingPersists is 0, which allows for one persist to run concurrently with ingestion work. The remaining portion of the heap is reserved for query processing, segment persist/merge operations, and miscellaneous heap usage. Concurrent segment persist/merge limits To help reduce peak memory usage, the Indexer imposes a limit on the number of concurrent segment persist/merge operations across all running tasks. By default, the number of concurrent persist/merge operations is limited to (druid.worker.capacity / 2), rounded down. This limit can be configured with the druid.worker.numConcurrentMerges property. "},{"title":"Current limitations","type":1,"pageTitle":"Indexer Process","url":"/docs/27.0.0/design/indexer#current-limitations","content":"Separate task logs are not currently supported when using the Indexer; all task log messages will instead be logged in the Indexer process log. The Indexer currently imposes an identical memory limit on each task. In later releases, the per-task memory limit will be removed and only the global limit will apply. The limit on concurrent merges will also be removed. In later releases, per-task memory usage will be dynamically managed. Please see https://github.com/apache/druid/issues/7900 for details on future enhancements to the Indexer. "},{"title":"Overlord Process","type":0,"sectionRef":"#","url":"/docs/27.0.0/design/overlord","content":"","keywords":""},{"title":"Configuration","type":1,"pageTitle":"Overlord Process","url":"/docs/27.0.0/design/overlord#configuration","content":"For Apache Druid Overlord Process Configuration, see Overlord Configuration. 
For basic tuning guidance for the Overlord process, see Basic cluster tuning. "},{"title":"HTTP endpoints","type":1,"pageTitle":"Overlord Process","url":"/docs/27.0.0/design/overlord#http-endpoints","content":"For a list of API endpoints supported by the Overlord, please see the Service status API reference. "},{"title":"Overview","type":1,"pageTitle":"Overlord Process","url":"/docs/27.0.0/design/overlord#overview","content":"The Overlord process is responsible for accepting tasks, coordinating task distribution, creating locks around tasks, and returning statuses to callers. Overlord can be configured to run in one of two modes - local or remote (local being default). In local mode Overlord is also responsible for creating Peons for executing tasks. When running the Overlord in local mode, all MiddleManager and Peon configurations must be provided as well. Local mode is typically used for simple workflows. In remote mode, the Overlord and MiddleManager are run in separate processes and you can run each on a different server. This mode is recommended if you intend to use the indexing service as the single endpoint for all Druid indexing. "},{"title":"Blacklisted workers","type":1,"pageTitle":"Overlord Process","url":"/docs/27.0.0/design/overlord#blacklisted-workers","content":"If a MiddleManager has task failures above a threshold, the Overlord will blacklist these MiddleManagers. No more than 20% of the MiddleManagers can be blacklisted. Blacklisted MiddleManagers will be periodically whitelisted. The following variables can be used to set the threshold and blacklist timeouts. druid.indexer.runner.maxRetriesBeforeBlacklist druid.indexer.runner.workerBlackListBackoffTime druid.indexer.runner.workerBlackListCleanupPeriod druid.indexer.runner.maxPercentageBlacklistWorkers "},{"title":"Autoscaling","type":1,"pageTitle":"Overlord Process","url":"/docs/27.0.0/design/overlord#autoscaling","content":"The Autoscaling mechanisms currently in place are tightly coupled with our deployment infrastructure but the framework should be in place for other implementations. We are highly open to new implementations or extensions of the existing mechanisms. In our own deployments, MiddleManager processes are Amazon AWS EC2 nodes and they are provisioned to register themselves in a galaxy environment. If autoscaling is enabled, new MiddleManagers may be added when a task has been in pending state for too long. MiddleManagers may be terminated if they have not run any tasks for a period of time. "},{"title":"Metadata storage","type":0,"sectionRef":"#","url":"/docs/27.0.0/design/metadata-storage","content":"","keywords":""},{"title":"Available metadata stores","type":1,"pageTitle":"Metadata storage","url":"/docs/27.0.0/design/metadata-storage#available-metadata-stores","content":"Druid supports Derby, MySQL, and PostgreSQL for storing metadata. "},{"title":"Derby","type":1,"pageTitle":"Metadata storage","url":"/docs/27.0.0/design/metadata-storage#derby","content":"info For production clusters, consider using MySQL or PostgreSQL instead of Derby. Configure metadata storage with Derby by setting the following properties in your Druid configuration. druid.metadata.storage.type=derby druid.metadata.storage.connector.connectURI=jdbc:derby://localhost:1527//opt/var/druid_state/derby;create=true "},{"title":"MySQL","type":1,"pageTitle":"Metadata storage","url":"/docs/27.0.0/design/metadata-storage#mysql","content":"See mysql-metadata-storage extension documentation. 
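For comparison with the Derby example above, a MySQL-backed metadata store is typically configured with properties along the following lines. The host, database name, and credentials are illustrative, and the mysql-metadata-storage extension must be loaded:

```
# common.runtime.properties -- illustrative values
druid.extensions.loadList=["mysql-metadata-storage"]
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://db.example.com:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd
```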
"},{"title":"PostgreSQL","type":1,"pageTitle":"Metadata storage","url":"/docs/27.0.0/design/metadata-storage#postgresql","content":"See postgresql-metadata-storage. "},{"title":"Adding custom DBCP properties","type":1,"pageTitle":"Metadata storage","url":"/docs/27.0.0/design/metadata-storage#adding-custom-dbcp-properties","content":"You can add custom properties to customize the database connection pool (DBCP) for connecting to the metadata store. Define these properties with a druid.metadata.storage.connector.dbcp. prefix. For example: druid.metadata.storage.connector.dbcp.maxConnLifetimeMillis=1200000 druid.metadata.storage.connector.dbcp.defaultQueryTimeout=30000 Certain properties cannot be set through druid.metadata.storage.connector.dbcp. and must be set with the prefix druid.metadata.storage.connector.: usernamepasswordconnectURIvalidationQuerytestOnBorrow See BasicDataSource Configuration for a full list of configurable properties. "},{"title":"Metadata storage tables","type":1,"pageTitle":"Metadata storage","url":"/docs/27.0.0/design/metadata-storage#metadata-storage-tables","content":"This section describes the various tables in metadata storage. "},{"title":"Segments table","type":1,"pageTitle":"Metadata storage","url":"/docs/27.0.0/design/metadata-storage#segments-table","content":"This is dictated by the druid.metadata.storage.tables.segments property. This table stores metadata about the segments that should be available in the system. (This set of segments is called "used segments" elsewhere in the documentation and throughout the project.) The table is polled by theCoordinator to determine the set of segments that should be available for querying in the system. The table has two main functional columns, the other columns are for indexing purposes. Value 1 in the used column means that the segment should be "used" by the cluster (i.e., it should be loaded and available for requests). Value 0 means that the segment should not be loaded into the cluster. We do this as a means of unloading segments from the cluster without actually removing their metadata (which allows for simpler rolling back if that is ever an issue). The payload column stores a JSON blob that has all of the metadata for the segment. Some of the data in the payload column intentionally duplicates data from other columns in the segments table. As an example, the payload column may take the following form: { "dataSource":"wikipedia", "interval":"2012-05-23T00:00:00.000Z/2012-05-24T00:00:00.000Z", "version":"2012-05-24T00:10:00.046Z", "loadSpec":{ "type":"s3_zip", "bucket":"bucket_for_segment", "key":"path/to/segment/on/s3" }, "dimensions":"comma-delimited-list-of-dimension-names", "metrics":"comma-delimited-list-of-metric-names", "shardSpec":{"type":"none"}, "binaryVersion":9, "size":size_of_segment, "identifier":"wikipedia_2012-05-23T00:00:00.000Z_2012-05-24T00:00:00.000Z_2012-05-23T00:10:00.046Z" } "},{"title":"Rule table","type":1,"pageTitle":"Metadata storage","url":"/docs/27.0.0/design/metadata-storage#rule-table","content":"The rule table stores the various rules about where segments should land. These rules are used by the Coordinatorwhen making segment (re-)allocation decisions about the cluster. "},{"title":"Config table","type":1,"pageTitle":"Metadata storage","url":"/docs/27.0.0/design/metadata-storage#config-table","content":"The config table stores runtime configuration objects. 
We do not have many of these yet and we are not sure if we will keep this mechanism going forward, but it is the beginnings of a method of changing some configuration parameters across the cluster at runtime. "},{"title":"Task-related tables","type":1,"pageTitle":"Metadata storage","url":"/docs/27.0.0/design/metadata-storage#task-related-tables","content":"Task-related tables are created and used by the Overlord and MiddleManager when managing tasks. "},{"title":"Audit table","type":1,"pageTitle":"Metadata storage","url":"/docs/27.0.0/design/metadata-storage#audit-table","content":"The audit table stores the audit history for configuration changes such as rule changes done by Coordinator and other config changes. "},{"title":"Metadata storage access","type":1,"pageTitle":"Metadata storage","url":"/docs/27.0.0/design/metadata-storage#metadata-storage-access","content":"Only the following processes access the metadata storage: Indexing service processes (if any)Realtime processes (if any)Coordinator processes Thus you need to give permissions (e.g., in AWS security groups) for only these machines to access the metadata storage. "},{"title":"Learn more","type":1,"pageTitle":"Metadata storage","url":"/docs/27.0.0/design/metadata-storage#learn-more","content":"See the following topics for more information: Metadata storage configurationAutomated cleanup for metadata records "},{"title":"Druid SQL API","type":0,"sectionRef":"#","url":"/docs/27.0.0/api-reference/sql-api","content":"","keywords":""},{"title":"Submit a query","type":1,"pageTitle":"Druid SQL API","url":"/docs/27.0.0/api-reference/sql-api#submit-a-query","content":"To use the SQL API to make Druid SQL queries, send your query to the Router using the POST method: POST https://ROUTER:8888/druid/v2/sql/ Submit your query as the value of a "query" field in the JSON object within the request payload. For example: {"query" : "SELECT COUNT(*) FROM data_source WHERE foo = 'bar'"} "},{"title":"Request body","type":1,"pageTitle":"Druid SQL API","url":"/docs/27.0.0/api-reference/sql-api#request-body","content":"Property\tDescription\tDefaultquery\tSQL query string.\tnone (required) resultFormat\tFormat of query results. See Responses for details.\t"object" header\tWhether or not to include a header row for the query result. See Responses for details.\tfalse typesHeader\tWhether or not to include type information in the header. Can only be set when header is also true. See Responses for details.\tfalse sqlTypesHeader\tWhether or not to include SQL type information in the header. Can only be set when header is also true. See Responses for details.\tfalse context\tJSON object containing SQL query context parameters.\t{} (empty) parameters\tList of query parameters for parameterized queries. Each parameter in the list should be a JSON object like {"type": "VARCHAR", "value": "foo"}. 
The type should be a SQL type; see Data types for a list of supported SQL types.\t[] (empty) You can use curl to send SQL queries from the command-line: $ cat query.json {"query":"SELECT COUNT(*) AS TheCount FROM data_source"} $ curl -XPOST -H'Content-Type: application/json' http://ROUTER:8888/druid/v2/sql/ -d @query.json [{"TheCount":24433}] There are a variety of SQL query context parameters you can provide by adding a "context" map, like: { "query" : "SELECT COUNT(*) FROM data_source WHERE foo = 'bar' AND __time > TIMESTAMP '2000-01-01 00:00:00'", "context" : { "sqlTimeZone" : "America/Los_Angeles" } } Parameterized SQL queries are also supported: { "query" : "SELECT COUNT(*) FROM data_source WHERE foo = ? AND __time > ?", "parameters": [ { "type": "VARCHAR", "value": "bar"}, { "type": "TIMESTAMP", "value": "2000-01-01 00:00:00" } ] } Metadata is available over HTTP POST by querying metadata tables. "},{"title":"Responses","type":1,"pageTitle":"Druid SQL API","url":"/docs/27.0.0/api-reference/sql-api#responses","content":"Result formats Druid SQL's HTTP POST API supports a variety of result formats. You can specify these by adding a resultFormat parameter, like: { "query" : "SELECT COUNT(*) FROM data_source WHERE foo = 'bar' AND __time > TIMESTAMP '2000-01-01 00:00:00'", "resultFormat" : "array" } To request a header with information about column names, set header to true in your request. When you set header to true, you can optionally include typesHeader and sqlTypesHeader as well, which gives you information about Druid runtime and SQL types respectively. You can request all these headers with a request like: { "query" : "SELECT COUNT(*) FROM data_source WHERE foo = 'bar' AND __time > TIMESTAMP '2000-01-01 00:00:00'", "resultFormat" : "array", "header" : true, "typesHeader" : true, "sqlTypesHeader" : true } The following table shows supported result formats: Format\tDescription\tHeader description\tContent-Typeobject\tThe default, a JSON array of JSON objects. Each object's field names match the columns returned by the SQL query, and are provided in the same order as the SQL query.\tIf header is true, the first row is an object where the fields are column names. Each field's value is either null (if typesHeader and sqlTypesHeader are false) or an object that contains the Druid type as type (if typesHeader is true) and the SQL type as sqlType (if sqlTypesHeader is true).\tapplication/json array\tJSON array of JSON arrays. Each inner array has elements matching the columns returned by the SQL query, in order.\tIf header is true, the first row is an array of column names. If typesHeader is true, the next row is an array of Druid types. If sqlTypesHeader is true, the next row is an array of SQL types.\tapplication/json objectLines\tLike object, but the JSON objects are separated by newlines instead of being wrapped in a JSON array. This can make it easier to parse the entire response set as a stream, if you do not have ready access to a streaming JSON parser. To make it possible to detect a truncated response, this format includes a trailer of one blank line.\tSame as object.\ttext/plain arrayLines\tLike array, but the JSON arrays are separated by newlines instead of being wrapped in a JSON array. This can make it easier to parse the entire response set as a stream, if you do not have ready access to a streaming JSON parser. 
To make it possible to detect a truncated response, this format includes a trailer of one blank line.\tSame as array, except the rows are separated by newlines.\ttext/plain csv\tComma-separated values, with one row per line. Individual field values may be escaped by being surrounded in double quotes. If double quotes appear in a field value, they will be escaped by replacing them with double-double-quotes like ""this"". To make it possible to detect a truncated response, this format includes a trailer of one blank line.\tSame as array, except the lists are in CSV format.\ttext/csv If typesHeader is set to true, Druid type information is included in the response. Complex types, like sketches, will be reported as COMPLEX<typeName> if a particular complex type name is known for that field, or as COMPLEX if the particular type name is unknown or mixed. If sqlTypesHeader is set to true,SQL type information is included in the response. It is possible to set both typesHeader andsqlTypesHeader at once. Both parameters require that header is also set. To aid in building clients that are compatible with older Druid versions, Druid returns the HTTP headerX-Druid-SQL-Header-Included: yes if header was set to true and if the version of Druid the client is connected to understands the typesHeader and sqlTypesHeader parameters. This HTTP response header is present irrespective of whether typesHeader or sqlTypesHeader are set or not. Druid returns the SQL query identifier in the X-Druid-SQL-Query-Id HTTP header. This query id will be assigned the value of sqlQueryId from the query context parametersif specified, else Druid will generate a SQL query id for you. Errors Errors that occur before the response body is sent will be reported in JSON, with an HTTP 500 status code, in the same format as native Druid query errors. If an error occurs while the response body is being sent, at that point it is too late to change the HTTP status code or report a JSON error, so the response will simply end midstream and an error will be logged by the Druid server that was handling your request. As a caller, it is important that you properly handle response truncation. This is easy for the object and arrayformats, since truncated responses will be invalid JSON. For the line-oriented formats, you should check the trailer they all include: one blank line at the end of the result set. If you detect a truncated response, either through a JSON parsing error or through a missing trailing newline, you should assume the response was not fully delivered due to an error. "},{"title":"Cancel a query","type":1,"pageTitle":"Druid SQL API","url":"/docs/27.0.0/api-reference/sql-api#cancel-a-query","content":"You can use the HTTP DELETE method to cancel a SQL query on either the Router or the Broker. When you cancel a query, Druid handles the cancellation in a best-effort manner. It marks the query canceled immediately and aborts the query execution as soon as possible. However, your query may run for a short time after your cancellation request. Druid SQL's HTTP DELETE method uses the following syntax: DELETE https://ROUTER:8888/druid/v2/sql/{sqlQueryId} The DELETE method requires the sqlQueryId path parameter. To predict the query id you must set it in the query context. Druid does not enforce unique sqlQueryId in the query context. If you issue a cancel request for a sqlQueryId active in more than one query context, Druid cancels all requests that use the query id. 
For example if you issue the following query: curl --request POST 'https://ROUTER:8888/druid/v2/sql' \\ --header 'Content-Type: application/json' \\ --data-raw '{"query" : "SELECT sleep(CASE WHEN sum_added > 0 THEN 1 ELSE 0 END) FROM wikiticker WHERE sum_added > 0 LIMIT 15", "context" : {"sqlQueryId" : "myQuery01"}}' You can cancel the query using the query id myQuery01 as follows: curl --request DELETE 'https://ROUTER:8888/druid/v2/sql/myQuery01' \\ Cancellation requests require READ permission on all resources used in the SQL query. Druid returns an HTTP 202 response for successful deletion requests. Druid returns an HTTP 404 response in the following cases: sqlQueryId is incorrect.The query completes before your cancellation request is processed. Druid returns an HTTP 403 response for authorization failure. "},{"title":"Query from deep storage","type":1,"pageTitle":"Druid SQL API","url":"/docs/27.0.0/api-reference/sql-api#query-from-deep-storage","content":"Query from deep storage is an experimental feature. You can use the sql/statements endpoint to query segments that exist only in deep storage and are not loaded onto your Historical processes as determined by your load rules. Note that at least one segment of a datasource must be available on a Historical process so that the Broker can plan your query. A quick way to check if this is true is whether or not a datasource is visible in the Druid console. For more information, see Query from deep storage. "},{"title":"Submit a query","type":1,"pageTitle":"Druid SQL API","url":"/docs/27.0.0/api-reference/sql-api#submit-a-query-1","content":"Submit a query for data stored in deep storage. Any data ingested into Druid is placed into deep storage. The query is contained in the "query" field in the JSON object within the request payload. Note that at least part of a datasource must be available on a Historical process so that Druid can plan your query and only the user who submits a query can see the results. URL POST /druid/v2/sql/statements Request body Generally, the sql and sql/statements endpoints support the same response body fields with minor differences. For general information about the available fields, see Submit a query to the sql endpoint. Keep the following in mind when submitting queries to the sql/statements endpoint: There are additional context parameters for sql/statements: executionMode determines how query results are fetched. Druid currently only supports ASYNC. You must manually retrieve your results after the query completes.selectDestination determines where final results get written. By default, results are written to task reports. Set this parameter to durableStorage to instruct Druid to write the results from SELECT queries to durable storage, which allows you to fetch larger result sets. Note that this requires you to have durable storage for MSQ enabled. The only supported value for resultFormat is JSON LINES. 
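For example, a request body that combines both of these context parameters might look like the following sketch; the datasource name is illustrative, and selectDestination assumes durable storage for MSQ is enabled:

```
{
  "query": "SELECT * FROM wikipedia WHERE user = 'BlueMoon2662'",
  "context": {
    "executionMode": "ASYNC",
    "selectDestination": "durableStorage"
  }
}
```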
Responses 200 SUCCESS400 BAD REQUEST Successfully queried from deep storage Sample request cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/v2/sql/statements" \\ --header 'Content-Type: application/json' \\ --data '{ "query": "SELECT * FROM wikipedia WHERE user='\\''BlueMoon2662'\\''", "context": { "executionMode":"ASYNC" } }' Sample response Click to show sample response { "queryId": "query-b82a7049-b94f-41f2-a230-7fef94768745", "state": "ACCEPTED", "createdAt": "2023-07-26T21:16:25.324Z", "schema": [ { "name": "__time", "type": "TIMESTAMP", "nativeType": "LONG" }, { "name": "channel", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "cityName", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "comment", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "countryIsoCode", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "countryName", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "isAnonymous", "type": "BIGINT", "nativeType": "LONG" }, { "name": "isMinor", "type": "BIGINT", "nativeType": "LONG" }, { "name": "isNew", "type": "BIGINT", "nativeType": "LONG" }, { "name": "isRobot", "type": "BIGINT", "nativeType": "LONG" }, { "name": "isUnpatrolled", "type": "BIGINT", "nativeType": "LONG" }, { "name": "metroCode", "type": "BIGINT", "nativeType": "LONG" }, { "name": "namespace", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "page", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "regionIsoCode", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "regionName", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "user", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "delta", "type": "BIGINT", "nativeType": "LONG" }, { "name": "added", "type": "BIGINT", "nativeType": "LONG" }, { "name": "deleted", "type": "BIGINT", "nativeType": "LONG" } ], "durationMs": -1 } "},{"title":"Get query status","type":1,"pageTitle":"Druid SQL API","url":"/docs/27.0.0/api-reference/sql-api#get-query-status","content":"Retrieves information about the query associated with the given query ID. The response matches the response from the POST API if the query is accepted or running and the execution mode is ASYNC. In addition to the fields that this endpoint shares with POST /sql/statements, a completed query's status includes the following: A result object that summarizes information about your results, such as the total number of rows and sample records.A pages object that includes the following information for each page of results: numRows: the number of rows in that page of results.sizeInBytes: the size of the page.id: the page number that you can use to reference a specific page when you get query results. URL GET /druid/v2/sql/statements/:queryId Responses 200 SUCCESS400 BAD REQUEST Successfully retrieved query status Sample request The following example retrieves the status of a query with specified ID query-9b93f6f7-ab0e-48f5-986a-3520f84f0804. 
cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/v2/sql/statements/query-9b93f6f7-ab0e-48f5-986a-3520f84f0804" Sample response Click to show sample response { "queryId": "query-9b93f6f7-ab0e-48f5-986a-3520f84f0804", "state": "SUCCESS", "createdAt": "2023-07-26T22:57:46.620Z", "schema": [ { "name": "__time", "type": "TIMESTAMP", "nativeType": "LONG" }, { "name": "channel", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "cityName", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "comment", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "countryIsoCode", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "countryName", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "isAnonymous", "type": "BIGINT", "nativeType": "LONG" }, { "name": "isMinor", "type": "BIGINT", "nativeType": "LONG" }, { "name": "isNew", "type": "BIGINT", "nativeType": "LONG" }, { "name": "isRobot", "type": "BIGINT", "nativeType": "LONG" }, { "name": "isUnpatrolled", "type": "BIGINT", "nativeType": "LONG" }, { "name": "metroCode", "type": "BIGINT", "nativeType": "LONG" }, { "name": "namespace", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "page", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "regionIsoCode", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "regionName", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "user", "type": "VARCHAR", "nativeType": "STRING" }, { "name": "delta", "type": "BIGINT", "nativeType": "LONG" }, { "name": "added", "type": "BIGINT", "nativeType": "LONG" }, { "name": "deleted", "type": "BIGINT", "nativeType": "LONG" } ], "durationMs": 25591, "result": { "numTotalRows": 1, "totalSizeInBytes": 375, "dataSource": "__query_select", "sampleRecords": [ [ 1442018873259, "#ja.wikipedia", "", "/* 対戦通算成績と得失点 */", "", "", 0, 1, 0, 0, 0, 0, "Main", "アルビレックス新潟の年度別成績一覧", "", "", "BlueMoon2662", 14, 14, 0 ] ], "pages": [ { "id": 0, "numRows": 1, "sizeInBytes": 375 } ] } } "},{"title":"Get query results","type":1,"pageTitle":"Druid SQL API","url":"/docs/27.0.0/api-reference/sql-api#get-query-results","content":"Retrieves results for completed queries. Results are separated into pages, so you can use the optional page parameter to refine the results you get. Druid returns information about the composition of each page and its page number (id). For information about pages, see Get query status. If a page number isn't passed, all results are returned sequentially in the same response. If you have large result sets, you may encounter timeouts based on the value configured for druid.router.http.readTimeout. When getting query results, keep the following in mind: JSON Lines is the only supported result format.Getting the query results for an ingestion query returns an empty response. URL GET /druid/v2/sql/statements/:queryId/results Query parameters page Int (optional)Refine paginated results Responses 200 SUCCESS400 BAD REQUEST404 NOT FOUND500 SERVER ERROR Successfully retrieved query results Sample request The following example retrieves the status of a query with specified ID query-f3bca219-173d-44d4-bdc7-5002e910352f. 
cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/v2/sql/statements/query-f3bca219-173d-44d4-bdc7-5002e910352f/results" Sample response Click to show sample response [ { "__time": 1442018818771, "channel": "#en.wikipedia", "cityName": "", "comment": "added project", "countryIsoCode": "", "countryName": "", "isAnonymous": 0, "isMinor": 0, "isNew": 0, "isRobot": 0, "isUnpatrolled": 0, "metroCode": 0, "namespace": "Talk", "page": "Talk:Oswald Tilghman", "regionIsoCode": "", "regionName": "", "user": "GELongstreet", "delta": 36, "added": 36, "deleted": 0 }, { "__time": 1442018820496, "channel": "#ca.wikipedia", "cityName": "", "comment": "Robot inserta {{Commonscat}} que enllaça amb [[commons:category:Rallicula]]", "countryIsoCode": "", "countryName": "", "isAnonymous": 0, "isMinor": 1, "isNew": 0, "isRobot": 1, "isUnpatrolled": 0, "metroCode": 0, "namespace": "Main", "page": "Rallicula", "regionIsoCode": "", "regionName": "", "user": "PereBot", "delta": 17, "added": 17, "deleted": 0 }, { "__time": 1442018825474, "channel": "#en.wikipedia", "cityName": "Auburn", "comment": "/* Status of peremptory norms under international law */ fixed spelling of 'Wimbledon'", "countryIsoCode": "AU", "countryName": "Australia", "isAnonymous": 1, "isMinor": 0, "isNew": 0, "isRobot": 0, "isUnpatrolled": 0, "metroCode": 0, "namespace": "Main", "page": "Peremptory norm", "regionIsoCode": "NSW", "regionName": "New South Wales", "user": "60.225.66.142", "delta": 0, "added": 0, "deleted": 0 }, { "__time": 1442018828770, "channel": "#vi.wikipedia", "cityName": "", "comment": "fix Lỗi CS1: ngày tháng", "countryIsoCode": "", "countryName": "", "isAnonymous": 0, "isMinor": 1, "isNew": 0, "isRobot": 1, "isUnpatrolled": 0, "metroCode": 0, "namespace": "Main", "page": "Apamea abruzzorum", "regionIsoCode": "", "regionName": "", "user": "Cheers!-bot", "delta": 18, "added": 18, "deleted": 0 }, { "__time": 1442018831862, "channel": "#vi.wikipedia", "cityName": "", "comment": "clean up using [[Project:AWB|AWB]]", "countryIsoCode": "", "countryName": "", "isAnonymous": 0, "isMinor": 0, "isNew": 0, "isRobot": 1, "isUnpatrolled": 0, "metroCode": 0, "namespace": "Main", "page": "Atractus flammigerus", "regionIsoCode": "", "regionName": "", "user": "ThitxongkhoiAWB", "delta": 18, "added": 18, "deleted": 0 }, { "__time": 1442018833987, "channel": "#vi.wikipedia", "cityName": "", "comment": "clean up using [[Project:AWB|AWB]]", "countryIsoCode": "", "countryName": "", "isAnonymous": 0, "isMinor": 0, "isNew": 0, "isRobot": 1, "isUnpatrolled": 0, "metroCode": 0, "namespace": "Main", "page": "Agama mossambica", "regionIsoCode": "", "regionName": "", "user": "ThitxongkhoiAWB", "delta": 18, "added": 18, "deleted": 0 }, { "__time": 1442018837009, "channel": "#ca.wikipedia", "cityName": "", "comment": "/* Imperi Austrohongarès */", "countryIsoCode": "", "countryName": "", "isAnonymous": 0, "isMinor": 0, "isNew": 0, "isRobot": 0, "isUnpatrolled": 0, "metroCode": 0, "namespace": "Main", "page": "Campanya dels Balcans (1914-1918)", "regionIsoCode": "", "regionName": "", "user": "Jaumellecha", "delta": -20, "added": 0, "deleted": 20 }, { "__time": 1442018839591, "channel": "#en.wikipedia", "cityName": "", "comment": "adding comment on notability and possible COI", "countryIsoCode": "", "countryName": "", "isAnonymous": 0, "isMinor": 0, "isNew": 1, "isRobot": 0, "isUnpatrolled": 1, "metroCode": 0, "namespace": "Talk", "page": "Talk:Dani Ploeger", "regionIsoCode": "", "regionName": "", "user": "New Media Theorist", "delta": 345, "added": 345, 
"deleted": 0 }, { "__time": 1442018841578, "channel": "#en.wikipedia", "cityName": "", "comment": "Copying assessment table to wiki", "countryIsoCode": "", "countryName": "", "isAnonymous": 0, "isMinor": 0, "isNew": 0, "isRobot": 1, "isUnpatrolled": 0, "metroCode": 0, "namespace": "User", "page": "User:WP 1.0 bot/Tables/Project/Pubs", "regionIsoCode": "", "regionName": "", "user": "WP 1.0 bot", "delta": 121, "added": 121, "deleted": 0 }, { "__time": 1442018845821, "channel": "#vi.wikipedia", "cityName": "", "comment": "clean up using [[Project:AWB|AWB]]", "countryIsoCode": "", "countryName": "", "isAnonymous": 0, "isMinor": 0, "isNew": 0, "isRobot": 1, "isUnpatrolled": 0, "metroCode": 0, "namespace": "Main", "page": "Agama persimilis", "regionIsoCode": "", "regionName": "", "user": "ThitxongkhoiAWB", "delta": 18, "added": 18, "deleted": 0 } ] "},{"title":"Cancel a query","type":1,"pageTitle":"Druid SQL API","url":"/docs/27.0.0/api-reference/sql-api#cancel-a-query-1","content":"Cancels a running or accepted query. URL DELETE /druid/v2/sql/statements/:queryId Responses 200 OK202 ACCEPTED404 SERVER ERROR A no op operation since the query is not in a state to be cancelled Sample request The following example cancels a query with specified ID query-945c9633-2fa2-49ab-80ae-8221c38c024da. cURLHTTP curl --request DELETE "http://ROUTER_IP:ROUTER_PORT/druid/v2/sql/statements/query-945c9633-2fa2-49ab-80ae-8221c38c024da" Sample response A successful request returns a 202 ACCEPTED response and an empty response. "},{"title":"Processes and servers","type":0,"sectionRef":"#","url":"/docs/27.0.0/design/processes","content":"","keywords":""},{"title":"Process types","type":1,"pageTitle":"Processes and servers","url":"/docs/27.0.0/design/processes#process-types","content":"Druid has several process types: CoordinatorOverlordBrokerHistoricalMiddleManager and PeonsIndexer (Optional)Router (Optional) "},{"title":"Server types","type":1,"pageTitle":"Processes and servers","url":"/docs/27.0.0/design/processes#server-types","content":"Druid processes can be deployed any way you like, but for ease of deployment we suggest organizing them into three server types: MasterQueryData This section describes the Druid processes and the suggested Master/Query/Data server organization, as shown in the architecture diagram above. "},{"title":"Master server","type":1,"pageTitle":"Processes and servers","url":"/docs/27.0.0/design/processes#master-server","content":"A Master server manages data ingestion and availability: it is responsible for starting new ingestion jobs and coordinating availability of data on the "Data servers" described below. Within a Master server, functionality is split between two processes, the Coordinator and Overlord. Coordinator process Coordinator processes watch over the Historical processes on the Data servers. They are responsible for assigning segments to specific servers, and for ensuring segments are well-balanced across Historicals. Overlord process Overlord processes watch over the MiddleManager processes on the Data servers and are the controllers of data ingestion into Druid. They are responsible for assigning ingestion tasks to MiddleManagers and for coordinating segment publishing. 
"},{"title":"Query server","type":1,"pageTitle":"Processes and servers","url":"/docs/27.0.0/design/processes#query-server","content":"A Query server provides the endpoints that users and client applications interact with, routing queries to Data servers or other Query servers (and optionally proxied Master server requests as well). Within a Query server, functionality is split between two processes, the Broker and Router. Broker process Broker processes receive queries from external clients and forward those queries to Data servers. When Brokers receive results from those subqueries, they merge those results and return them to the caller. End users typically query Brokers rather than querying Historicals or MiddleManagers processes on Data servers directly. Router process (optional) Router processes are optional processes that provide a unified API gateway in front of Druid Brokers, Overlords, and Coordinators. They are optional since you can also simply contact the Druid Brokers, Overlords, and Coordinators directly. The Router also runs the web console, a management UI for datasources, segments, tasks, data processes (Historicals and MiddleManagers), and coordinator dynamic configuration. The user can also run SQL and native Druid queries within the console. "},{"title":"Data server","type":1,"pageTitle":"Processes and servers","url":"/docs/27.0.0/design/processes#data-server","content":"A Data server executes ingestion jobs and stores queryable data. Within a Data server, functionality is split between two processes, the Historical and MiddleManager. "},{"title":"Historical process","type":1,"pageTitle":"Processes and servers","url":"/docs/27.0.0/design/processes#historical-process","content":"Historical processes are the workhorses that handle storage and querying on "historical" data (including any streaming data that has been in the system long enough to be committed). Historical processes download segments from deep storage and respond to queries about these segments. They don't accept writes. "},{"title":"Middle Manager process","type":1,"pageTitle":"Processes and servers","url":"/docs/27.0.0/design/processes#middle-manager-process","content":"MiddleManager processes handle ingestion of new data into the cluster. They are responsible for reading from external data sources and publishing new Druid segments. Peon processes Peon processes are task execution engines spawned by MiddleManagers. Each Peon runs a separate JVM and is responsible for executing a single task. Peons always run on the same host as the MiddleManager that spawned them. "},{"title":"Indexer process (optional)","type":1,"pageTitle":"Processes and servers","url":"/docs/27.0.0/design/processes#indexer-process-optional","content":"Indexer processes are an alternative to MiddleManagers and Peons. Instead of forking separate JVM processes per-task, the Indexer runs tasks as individual threads within a single JVM process. The Indexer is designed to be easier to configure and deploy compared to the MiddleManager + Peon system and to better enable resource sharing across tasks. The Indexer is a newer feature and is currently designatedexperimental due to the fact that its memory management system is still under development. It will continue to mature in future versions of Druid. Typically, you would deploy either MiddleManagers or Indexers, but not both. 
"},{"title":"Pros and cons of colocation","type":1,"pageTitle":"Processes and servers","url":"/docs/27.0.0/design/processes#pros-and-cons-of-colocation","content":"Druid processes can be colocated based on the Master/Data/Query server organization as described above. This organization generally results in better utilization of hardware resources for most clusters. For very large scale clusters, however, it can be desirable to split the Druid processes such that they run on individual servers to avoid resource contention. This section describes guidelines and configuration parameters related to process colocation. "},{"title":"Coordinators and Overlords","type":1,"pageTitle":"Processes and servers","url":"/docs/27.0.0/design/processes#coordinators-and-overlords","content":"The workload on the Coordinator process tends to increase with the number of segments in the cluster. The Overlord's workload also increases based on the number of segments in the cluster, but to a lesser degree than the Coordinator. In clusters with very high segment counts, it can make sense to separate the Coordinator and Overlord processes to provide more resources for the Coordinator's segment balancing workload. Unified Process The Coordinator and Overlord processes can be run as a single combined process by setting the druid.coordinator.asOverlord.enabled property. Please see Coordinator Configuration: Operation for details. "},{"title":"Historicals and MiddleManagers","type":1,"pageTitle":"Processes and servers","url":"/docs/27.0.0/design/processes#historicals-and-middlemanagers","content":"With higher levels of ingestion or query load, it can make sense to deploy the Historical and MiddleManager processes on separate hosts to to avoid CPU and memory contention. The Historical also benefits from having free memory for memory mapped segments, which can be another reason to deploy the Historical and MiddleManager processes separately. "},{"title":"Router Process","type":0,"sectionRef":"#","url":"/docs/27.0.0/design/router","content":"","keywords":""},{"title":"Configuration","type":1,"pageTitle":"Router Process","url":"/docs/27.0.0/design/router#configuration","content":"For Apache Druid Router Process Configuration, see Router Configuration. For basic tuning guidance for the Router process, see Basic cluster tuning. "},{"title":"HTTP endpoints","type":1,"pageTitle":"Router Process","url":"/docs/27.0.0/design/router#http-endpoints","content":"For a list of API endpoints supported by the Router, see Legacy metadata API reference. "},{"title":"Running","type":1,"pageTitle":"Router Process","url":"/docs/27.0.0/design/router#running","content":"org.apache.druid.cli.Main server router "},{"title":"Router as management proxy","type":1,"pageTitle":"Router Process","url":"/docs/27.0.0/design/router#router-as-management-proxy","content":"The Router can be configured to forward requests to the active Coordinator or Overlord process. This may be useful for setting up a highly available cluster in situations where the HTTP redirect mechanism of the inactive -> active Coordinator/Overlord does not function correctly (servers are behind a load balancer, the hostname used in the redirect is only resolvable internally, etc.). Enabling the management proxy To enable this functionality, set the following in the Router's runtime.properties: druid.router.managementProxy.enabled=true Management proxy routing The management proxy supports implicit and explicit routes. 
Implicit routes are those where the destination can be determined from the original request path based on Druid API path conventions. For the Coordinator the convention is /druid/coordinator/* and for the Overlord the convention is /druid/indexer/*. These are convenient because they mean that using the management proxy does not require modifying the API request other than issuing the request to the Router instead of the Coordinator or Overlord. Most Druid API requests can be routed implicitly. Explicit routes are those where the request to the Router contains a path prefix indicating which process the request should be routed to. For the Coordinator this prefix is /proxy/coordinator and for the Overlord it is /proxy/overlord. This is required for API calls with an ambiguous destination. For example, the /status API is present on all Druid processes, so explicit routing needs to be used to indicate the proxy destination. This is summarized in the table below: Request Route\tDestination\tRewritten Route\tExample /druid/coordinator/*\tCoordinator\t/druid/coordinator/*\trouter:8888/druid/coordinator/v1/datasources -> coordinator:8081/druid/coordinator/v1/datasources /druid/indexer/*\tOverlord\t/druid/indexer/*\trouter:8888/druid/indexer/v1/task -> overlord:8090/druid/indexer/v1/task /proxy/coordinator/*\tCoordinator\t/*\trouter:8888/proxy/coordinator/status -> coordinator:8081/status /proxy/overlord/*\tOverlord\t/*\trouter:8888/proxy/overlord/druid/indexer/v1/isLeader -> overlord:8090/druid/indexer/v1/isLeader "},{"title":"Router strategies","type":1,"pageTitle":"Router Process","url":"/docs/27.0.0/design/router#router-strategies","content":"The Router has a configurable list of strategies for how it selects which Brokers to route queries to. The order of the strategies matters because as soon as a strategy condition is matched, a Broker is selected. timeBoundary { "type":"timeBoundary" } Including this strategy means all timeBoundary queries are always routed to the highest priority Broker. priority { "type":"priority", "minPriority":0, "maxPriority":1 } Queries with a priority set to less than minPriority are routed to the lowest priority Broker. Queries with priority set to greater than maxPriority are routed to the highest priority Broker. By default, minPriority is 0 and maxPriority is 1. Using these default values, if a query with priority 0 (the default query priority is 0) is sent, the query skips the priority selection logic. manual This strategy reads the parameter brokerService from the query context and routes the query to that broker service. If no valid brokerService is specified in the query context, the field defaultManualBrokerService is used to determine the target broker service, provided its value is valid and non-null. A value is considered valid if it is present in druid.router.tierToBrokerMap. This strategy can route both native and SQL queries (when enabled). Example: A strategy that routes queries to the Broker "druid:broker-hot" if no valid brokerService is found in the query context. { "type": "manual", "defaultManualBrokerService": "druid:broker-hot" } JavaScript Allows defining arbitrary routing rules using a JavaScript function. The function is passed the configuration and the query to be executed, and returns the tier it should be routed to, or null for the default tier. Example: a function that sends queries containing more than three aggregators to the lowest priority Broker. 
{ "type" : "javascript", "function" : "function (config, query) { if (query.getAggregatorSpecs && query.getAggregatorSpecs().size() >= 3) { var size = config.getTierToBrokerMap().values().size(); if (size > 0) { return config.getTierToBrokerMap().values().toArray()[size-1] } else { return config.getDefaultBrokerServiceName() } } else { return null } }" } info JavaScript-based functionality is disabled by default. Please refer to the Druid JavaScript programming guide for guidelines about using Druid's JavaScript functionality, including instructions on how to enable it. "},{"title":"Routing of SQL queries using strategies","type":1,"pageTitle":"Router Process","url":"/docs/27.0.0/design/router#routing-of-sql-queries-using-strategies","content":"To enable routing of SQL queries using strategies, set druid.router.sql.enable to true. The broker service for a given SQL query is resolved using only the provided Router strategies. If not resolved using any of the strategies, the Router uses the defaultBrokerServiceName. This behavior is slightly different from native queries where the Router first tries to resolve the broker service using strategies, then load rules and finally using the defaultBrokerServiceNameif still not resolved. When druid.router.sql.enable is set to false (default value), the Router uses thedefaultBrokerServiceName. Setting druid.router.sql.enable does not affect either Avatica JDBC requests or native queries. Druid always routes native queries using the strategies and load rules as documented. Druid always routes Avatica JDBC requests based on connection ID. "},{"title":"Avatica query balancing","type":1,"pageTitle":"Router Process","url":"/docs/27.0.0/design/router#avatica-query-balancing","content":"All Avatica JDBC requests with a given connection ID must be routed to the same Broker, since Druid Brokers do not share connection state with each other. To accomplish this, Druid provides two built-in balancers that use rendezvous hashing and consistent hashing of a request's connection ID respectively to assign requests to Brokers. Note that when multiple Routers are used, all Routers should have identical balancer configuration to ensure that they make the same routing decisions. Rendezvous hash balancer This balancer uses Rendezvous Hashing on an Avatica request's connection ID to assign the request to a Broker. To use this balancer, specify the following property: druid.router.avatica.balancer.type=rendezvousHash If no druid.router.avatica.balancer property is set, the Router will also default to using the Rendezvous Hash Balancer. Consistent hash balancer This balancer uses Consistent Hashing on an Avatica request's connection ID to assign the request to a Broker. To use this balancer, specify the following property: druid.router.avatica.balancer.type=consistentHash This is a non-default implementation that is provided for experimentation purposes. The consistent hasher has longer setup times on initialization and when the set of Brokers changes, but has a faster Broker assignment time than the rendezvous hasher when tested with 5 Brokers. Benchmarks for both implementations have been provided in ConsistentHasherBenchmark and RendezvousHasherBenchmark. The consistent hasher also requires locking, while the rendezvous hasher does not. 
"},{"title":"Example production configuration","type":1,"pageTitle":"Router Process","url":"/docs/27.0.0/design/router#example-production-configuration","content":"In this example, we have two tiers in our production cluster: hot and _default_tier. Queries for the hot tier are routed through the broker-hot set of Brokers, and queries for the _default_tier are routed through the broker-cold set of Brokers. If any exceptions or network problems occur, queries are routed to the broker-cold set of brokers. In our example, we are running with a c3.2xlarge EC2 instance. We assume a common.runtime.properties already exists. JVM settings: -server -Xmx13g -Xms13g -XX:NewSize=256m -XX:MaxNewSize=256m -XX:+UseConcMarkSweepGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseLargePages -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/mnt/galaxy/deploy/current/ -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/mnt/tmp -Dcom.sun.management.jmxremote.port=17071 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false Runtime.properties: druid.host=#{IP_ADDR}:8080 druid.plaintextPort=8080 druid.service=druid/router druid.router.defaultBrokerServiceName=druid:broker-cold druid.router.coordinatorServiceName=druid:coordinator druid.router.tierToBrokerMap={"hot":"druid:broker-hot","_default_tier":"druid:broker-cold"} druid.router.http.numConnections=50 druid.router.http.readTimeout=PT5M # Number of threads used by the Router proxy http client druid.router.http.numMaxThreads=100 druid.server.http.numThreads=100 "},{"title":"Peons","type":0,"sectionRef":"#","url":"/docs/27.0.0/design/peons","content":"","keywords":""},{"title":"Configuration","type":1,"pageTitle":"Peons","url":"/docs/27.0.0/design/peons#configuration","content":"For Apache Druid Peon Configuration, see Peon Query Configuration and Additional Peon Configuration. For basic tuning guidance for MiddleManager tasks, see Basic cluster tuning. "},{"title":"HTTP endpoints","type":1,"pageTitle":"Peons","url":"/docs/27.0.0/design/peons#http-endpoints","content":"Peons run a single task in a single JVM. MiddleManager is responsible for creating Peons for running tasks. Peons should rarely (if ever for testing purposes) be run on their own. "},{"title":"Running","type":1,"pageTitle":"Peons","url":"/docs/27.0.0/design/peons#running","content":"The Peon should very rarely ever be run independent of the MiddleManager unless for development purposes. org.apache.druid.cli.Main internal peon <task_file> <status_file> The task file contains the task JSON object. The status file indicates where the task status will be output. "},{"title":"Experimental features","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/experimental","content":"Experimental features Features often start out in "experimental" status that indicates they are still evolving. This can mean any of the following things: The feature's API may change even in minor releases or patch releases.The feature may have known "missing" pieces that will be added later.The feature may or may not have received full battle-testing in production environments. All experimental features are optional. Note that not all of these points apply to every experimental feature. Some have been battle-tested in terms of implementation, but are still marked experimental due to an evolving API. 
Please check the documentation for each feature for full details.","keywords":""},{"title":"JSON querying API","type":0,"sectionRef":"#","url":"/docs/27.0.0/api-reference/json-querying-api","content":"","keywords":""},{"title":"Submit a query","type":1,"pageTitle":"JSON querying API","url":"/docs/27.0.0/api-reference/json-querying-api#submit-a-query","content":"Submits a JSON-based native query. The body of the request is the native query itself. Druid supports different types of queries for different use cases. All queries require the following properties: queryType: A string representing the type of query. Druid supports the following native query types: timeseries, topN, groupBy, timeBoundary, segmentMetadata, datasourceMetadata, scan, and search.dataSource: A string or object defining the source of data to query. The most common value is the name of the datasource to query. For more information, see Datasources. For additional properties based on your query type or use case, see available native queries. "},{"title":"URL","type":1,"pageTitle":"JSON querying API","url":"/docs/27.0.0/api-reference/json-querying-api#url","content":"POST /druid/v2/ "},{"title":"Query parameters","type":1,"pageTitle":"JSON querying API","url":"/docs/27.0.0/api-reference/json-querying-api#query-parameters","content":"pretty (optional) Druid returns the response in a pretty-printed format using indentation and line breaks. "},{"title":"Responses","type":1,"pageTitle":"JSON querying API","url":"/docs/27.0.0/api-reference/json-querying-api#responses","content":"200 SUCCESS400 BAD REQUEST Successfully submitted query "},{"title":"Example query: topN","type":1,"pageTitle":"JSON querying API","url":"/docs/27.0.0/api-reference/json-querying-api#example-query-topn","content":"The following example shows a topN query. The query analyzes the social_media datasource to return the top five users from the username dimension with the highest number of views from the views metric. cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/v2?pretty=null" \\ --header 'Content-Type: application/json' \\ --data '{ "queryType": "topN", "dataSource": "social_media", "dimension": "username", "threshold": 5, "metric": "views", "granularity": "all", "aggregations": [ { "type": "longSum", "name": "views", "fieldName": "views" } ], "intervals": [ "2022-01-01T00:00:00.000/2024-01-01T00:00:00.000" ] }' Example response: topN Click to show sample response [ { "timestamp": "2023-07-03T18:49:54.848Z", "result": [ { "views": 11591218026, "username": "gus" }, { "views": 11578638578, "username": "miette" }, { "views": 11561618880, "username": "leon" }, { "views": 11552609824, "username": "mia" }, { "views": 11551537517, "username": "milton" } ] } ] "},{"title":"Example query: groupBy","type":1,"pageTitle":"JSON querying API","url":"/docs/27.0.0/api-reference/json-querying-api#example-query-groupby","content":"The following example submits a JSON query of the groupBy type to retrieve the username with the highest upvotes-to-posts ratio from the social_media datasource. In this query: The upvoteSum aggregation calculates the sum of the upvotes for each user.The postCount aggregation calculates the sum of posts for each user.The upvoteToPostRatio is a post-aggregation that divides upvoteSum by postCount to calculate the ratio.The result is sorted based on the upvoteToPostRatio in descending order. 
cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/v2" \\ --header 'Content-Type: application/json' \\ --data '{ "queryType": "groupBy", "dataSource": "social_media", "dimensions": ["username"], "granularity": "all", "aggregations": [ { "type": "doubleSum", "name": "upvoteSum", "fieldName": "upvotes" }, { "type": "count", "name": "postCount", "fieldName": "post_title" } ], "postAggregations": [ { "type": "arithmetic", "name": "upvoteToPostRatio", "fn": "/", "fields": [ { "type": "fieldAccess", "name": "upvoteSum", "fieldName": "upvoteSum" }, { "type": "fieldAccess", "name": "postCount", "fieldName": "postCount" } ] } ], "intervals": ["2022-01-01T00:00:00.000/2024-01-01T00:00:00.000"], "limitSpec": { "type": "default", "limit": 1, "columns": [ { "dimension": "upvoteToPostRatio", "direction": "descending" } ] } }' Example response: groupBy Click to show sample response [ { "version": "v1", "timestamp": "2022-01-01T00:00:00.000Z", "event": { "upvoteSum": 8.0419541E7, "upvoteToPostRatio": 69.53014661762697, "postCount": 1156614, "username": "miette" } } ] "},{"title":"Get segment information for query","type":1,"pageTitle":"JSON querying API","url":"/docs/27.0.0/api-reference/json-querying-api#get-segment-information-for-query","content":"Retrieves an array that contains objects with segment information, including the server locations associated with the query provided in the request body. "},{"title":"URL","type":1,"pageTitle":"JSON querying API","url":"/docs/27.0.0/api-reference/json-querying-api#url-1","content":"POST /druid/v2/candidates/ "},{"title":"Query parameters","type":1,"pageTitle":"JSON querying API","url":"/docs/27.0.0/api-reference/json-querying-api#query-parameters-1","content":"pretty (optional) Druid returns the response in a pretty-printed format using indentation and line breaks. 
"},{"title":"Responses","type":1,"pageTitle":"JSON querying API","url":"/docs/27.0.0/api-reference/json-querying-api#responses-1","content":"200 SUCCESS400 BAD REQUEST Successfully retrieved segment information "},{"title":"Sample request","type":1,"pageTitle":"JSON querying API","url":"/docs/27.0.0/api-reference/json-querying-api#sample-request","content":"cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/v2/candidates" \\ --header 'Content-Type: application/json' \\ --data '{ "queryType": "topN", "dataSource": "social_media", "dimension": "username", "threshold": 5, "metric": "views", "granularity": "all", "aggregations": [ { "type": "longSum", "name": "views", "fieldName": "views" } ], "intervals": [ "2022-01-01T00:00:00.000/2024-01-01T00:00:00.000" ] }' "},{"title":"Sample response","type":1,"pageTitle":"JSON querying API","url":"/docs/27.0.0/api-reference/json-querying-api#sample-response","content":"Click to show sample response [ { "interval": "2023-07-03T18:00:00.000Z/2023-07-03T19:00:00.000Z", "version": "2023-07-03T18:51:18.905Z", "partitionNumber": 0, "size": 21563693, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-03T19:00:00.000Z/2023-07-03T20:00:00.000Z", "version": "2023-07-03T19:00:00.657Z", "partitionNumber": 0, "size": 6057236, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-05T21:00:00.000Z/2023-07-05T22:00:00.000Z", "version": "2023-07-05T21:09:58.102Z", "partitionNumber": 0, "size": 223926186, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-05T21:00:00.000Z/2023-07-05T22:00:00.000Z", "version": "2023-07-05T21:09:58.102Z", "partitionNumber": 1, "size": 20244827, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-05T22:00:00.000Z/2023-07-05T23:00:00.000Z", "version": "2023-07-05T22:00:00.524Z", "partitionNumber": 0, "size": 104628051, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-05T22:00:00.000Z/2023-07-05T23:00:00.000Z", "version": "2023-07-05T22:00:00.524Z", "partitionNumber": 1, "size": 1603995, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-05T23:00:00.000Z/2023-07-06T00:00:00.000Z", "version": "2023-07-05T23:21:55.242Z", "partitionNumber": 0, "size": 181506843, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T00:00:00.000Z/2023-07-06T01:00:00.000Z", "version": "2023-07-06T00:02:08.498Z", "partitionNumber": 0, "size": 9170974, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": 
"historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T00:00:00.000Z/2023-07-06T01:00:00.000Z", "version": "2023-07-06T00:02:08.498Z", "partitionNumber": 1, "size": 23969632, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T01:00:00.000Z/2023-07-06T02:00:00.000Z", "version": "2023-07-06T01:13:53.982Z", "partitionNumber": 0, "size": 599895, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T01:00:00.000Z/2023-07-06T02:00:00.000Z", "version": "2023-07-06T01:13:53.982Z", "partitionNumber": 1, "size": 1627041, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T02:00:00.000Z/2023-07-06T03:00:00.000Z", "version": "2023-07-06T02:55:50.701Z", "partitionNumber": 0, "size": 629753, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T02:00:00.000Z/2023-07-06T03:00:00.000Z", "version": "2023-07-06T02:55:50.701Z", "partitionNumber": 1, "size": 1342360, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T04:00:00.000Z/2023-07-06T05:00:00.000Z", "version": "2023-07-06T04:02:36.562Z", "partitionNumber": 0, "size": 2131434, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T05:00:00.000Z/2023-07-06T06:00:00.000Z", "version": "2023-07-06T05:23:27.856Z", "partitionNumber": 0, "size": 797161, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T05:00:00.000Z/2023-07-06T06:00:00.000Z", "version": "2023-07-06T05:23:27.856Z", "partitionNumber": 1, "size": 1176858, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T06:00:00.000Z/2023-07-06T07:00:00.000Z", "version": "2023-07-06T06:46:34.638Z", "partitionNumber": 0, "size": 2148760, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T07:00:00.000Z/2023-07-06T08:00:00.000Z", "version": "2023-07-06T07:38:28.050Z", "partitionNumber": 0, "size": 2040748, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T08:00:00.000Z/2023-07-06T09:00:00.000Z", "version": "2023-07-06T08:27:31.407Z", "partitionNumber": 0, "size": 678723, "locations": [ { "name": "localhost:8083", 
"host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T08:00:00.000Z/2023-07-06T09:00:00.000Z", "version": "2023-07-06T08:27:31.407Z", "partitionNumber": 1, "size": 1437866, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T10:00:00.000Z/2023-07-06T11:00:00.000Z", "version": "2023-07-06T10:02:42.079Z", "partitionNumber": 0, "size": 1671296, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T11:00:00.000Z/2023-07-06T12:00:00.000Z", "version": "2023-07-06T11:27:23.902Z", "partitionNumber": 0, "size": 574893, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T11:00:00.000Z/2023-07-06T12:00:00.000Z", "version": "2023-07-06T11:27:23.902Z", "partitionNumber": 1, "size": 1427384, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T12:00:00.000Z/2023-07-06T13:00:00.000Z", "version": "2023-07-06T12:52:00.846Z", "partitionNumber": 0, "size": 2115172, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T14:00:00.000Z/2023-07-06T15:00:00.000Z", "version": "2023-07-06T14:32:33.926Z", "partitionNumber": 0, "size": 589108, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T14:00:00.000Z/2023-07-06T15:00:00.000Z", "version": "2023-07-06T14:32:33.926Z", "partitionNumber": 1, "size": 1392649, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T15:00:00.000Z/2023-07-06T16:00:00.000Z", "version": "2023-07-06T15:53:25.467Z", "partitionNumber": 0, "size": 2037851, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T16:00:00.000Z/2023-07-06T17:00:00.000Z", "version": "2023-07-06T16:02:26.568Z", "partitionNumber": 0, "size": 230400650, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T16:00:00.000Z/2023-07-06T17:00:00.000Z", "version": "2023-07-06T16:02:26.568Z", "partitionNumber": 1, "size": 38209056, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] }, { "interval": "2023-07-06T17:00:00.000Z/2023-07-06T18:00:00.000Z", "version": 
"2023-07-06T17:00:02.391Z", "partitionNumber": 0, "size": 211099463, "locations": [ { "name": "localhost:8083", "host": "localhost:8083", "hostAndTlsPort": null, "maxSize": 300000000000, "type": "historical", "tier": "_default_tier", "priority": 0 } ] } ] "},{"title":"ZooKeeper","type":0,"sectionRef":"#","url":"/docs/27.0.0/design/zookeeper","content":"","keywords":""},{"title":"Minimum ZooKeeper versions","type":1,"pageTitle":"ZooKeeper","url":"/docs/27.0.0/design/zookeeper#minimum-zookeeper-versions","content":"Apache Druid supports ZooKeeper versions 3.5.x and above. info Note: Starting with Apache Druid 0.22.0, support for ZooKeeper 3.4.x has been removed "},{"title":"ZooKeeper Operations","type":1,"pageTitle":"ZooKeeper","url":"/docs/27.0.0/design/zookeeper#zookeeper-operations","content":"The operations that happen over ZK are Coordinator leader electionSegment "publishing" protocol from HistoricalSegment load/drop protocol between Coordinator and HistoricalOverlord leader electionOverlord and MiddleManager task management "},{"title":"Coordinator Leader Election","type":1,"pageTitle":"ZooKeeper","url":"/docs/27.0.0/design/zookeeper#coordinator-leader-election","content":"We use the Curator LeaderLatch recipe to perform leader election at path ${druid.zk.paths.coordinatorPath}/_COORDINATOR "},{"title":"Segment \"publishing\" protocol from Historical and Realtime","type":1,"pageTitle":"ZooKeeper","url":"/docs/27.0.0/design/zookeeper#segment-publishing-protocol-from-historical-and-realtime","content":"The announcementsPath and servedSegmentsPath are used for this. All Historical processes publish themselves on the announcementsPath, specifically, they will create an ephemeral znode at ${druid.zk.paths.announcementsPath}/${druid.host} Which signifies that they exist. They will also subsequently create a permanent znode at ${druid.zk.paths.servedSegmentsPath}/${druid.host} And as they load up segments, they will attach ephemeral znodes that look like ${druid.zk.paths.servedSegmentsPath}/${druid.host}/_segment_identifier_ Processes like the Coordinator and Broker can then watch these paths to see which processes are currently serving which segments. "},{"title":"Segment load/drop protocol between Coordinator and Historical","type":1,"pageTitle":"ZooKeeper","url":"/docs/27.0.0/design/zookeeper#segment-loaddrop-protocol-between-coordinator-and-historical","content":"The loadQueuePath is used for this. When the Coordinator decides that a Historical process should load or drop a segment, it writes an ephemeral znode to ${druid.zk.paths.loadQueuePath}/_host_of_historical_process/_segment_identifier This znode will contain a payload that indicates to the Historical process what it should do with the given segment. When the Historical process is done with the work, it will delete the znode in order to signify to the Coordinator that it is complete. "},{"title":"Build from source","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/build","content":"Build from source You can build Apache Druid directly from source. Use the version of this page that matches the version you want to build. For building the latest code in master, follow the latest version of this pagehere: make sure it has /master/ in the URL. Prerequisites Installing Java and Maven JDK 8, 8u92+ or JDK 11. See our Java documentation for information about obtaining a JDK.Maven version 3.x Other dependencies Distribution builds require Python 3.x and the pyyaml module.Integration tests require pyyaml version 5.1 or later. 
Downloading the source git clone git@github.com:apache/druid.git cd druid Building from source The basic command to build Druid from source is: mvn clean install This will run static analysis, unit tests, compile classes, and package the projects into JARs. It will not generate the source or binary distribution tarball. In addition to the basic stages, you may also want to add the following profiles and properties: -Pdist - Distribution profile: Generates the binary distribution tarball by pulling in core extensions and dependencies and packaging the files as distribution/target/apache-druid-x.x.x-bin.tar.gz-Papache-release - Apache release profile: Generates GPG signature and checksums, and builds the source distribution tarball as distribution/target/apache-druid-x.x.x-src.tar.gz-Prat - Apache Rat profile: Runs the Apache Rat license audit tool-DskipTests - Skips unit tests (which reduces build time)-Dweb.console.skip=true - Skip front end project Putting these together, if you wish to build the source and binary distributions with signatures and checksums, audit licenses, and skip the unit tests, you would run: mvn clean install -Papache-release,dist,rat -DskipTests Potential issues Missing pyyaml You are building Druid from source following the instructions on this page but you get [ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.6.0:exec (generate-binary-license) on project distribution: Command execution failed.: Process exited with an error: 1 (Exit value: 1) -> [Help 1] Resolution: Make sure you have Python installed as well as the yaml module: pip install pyyaml On some systems, ensure you use the Python 3.x version of pip: pip3 install pyyaml ","keywords":""},{"title":"Contribute to Druid docs","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/contribute-to-docs","content":"","keywords":""},{"title":"Getting started","type":1,"pageTitle":"Contribute to Druid docs","url":"/docs/27.0.0/development/contribute-to-docs#getting-started","content":"Druid docs contributors can open an issue about documentation, or contribute a change with a pull request (PR). The open source Druid docs are located here:https://druid.apache.org/docs/latest/design/index.html If you need to update a Druid doc, locate and update the doc in the Druid repo following the instructions below. "},{"title":"Druid repo branches","type":1,"pageTitle":"Contribute to Druid docs","url":"/docs/27.0.0/development/contribute-to-docs#druid-repo-branches","content":"The Druid team works on the master branch and then branches for a release, such as 26.0.0. See CONTRIBUTING.md for instructions on contributing to Apache Druid. "},{"title":"Before you begin","type":1,"pageTitle":"Contribute to Druid docs","url":"/docs/27.0.0/development/contribute-to-docs#before-you-begin","content":"Before you can contribute to the Druid docs for the first time, you must complete the following steps: Fork the Druid repo. Your fork will be the origin remote. Clone your fork: git clone git@github.com:GITHUB_USERNAME/druid.git Replace GITHUB_USERNAME with your GitHub username. In the directory where you cloned your fork, set up apache/druid as your your remote upstream repo: git remote add upstream https://github.com/apache/druid.git Confirm that your fork shows up as the origin repo and apache/druid shows up as the upstream repo: git remote -v Verify that you have your email configured for GitHub: git config user.email If you need to set your email, see the GitHub instructions. 
Install Docusaurus so that you can build the site locally. Run either npm install or yarn install in the website directory. "},{"title":"Contributing","type":1,"pageTitle":"Contribute to Druid docs","url":"/docs/27.0.0/development/contribute-to-docs#contributing","content":"Before you contribute, make sure your local branch of master and the upstream Apache branch are up-to-date and in sync. This can help you avoid merge conflicts. Run the following commands on your fork's master branch: git fetch origin git fetch upstream Then run either one of the following commands: git rebase upstream/master # or git merge upstream/master Now you're up to date, and you can make your changes. Create your working branch: git checkout -b MY-BRANCH Provide a name for your feature branch in MY-BRANCH. 2. Find the file that you want to make changes to. All the source files for the docs are written in Markdown and located in the docs directory. The URL for the page includes the subdirectory the source file is in. For example, the SQL-based ingestion tutorial found at https://druid.apache.org/docs/latest/tutorials/tutorial-msq-extern.html is in the tutorials subdirectory. If you're adding a page, create a new Markdown file in the appropriate subdirectory. Then, copy the front matter and Apache license from an existing file. Update the title and id fields. Don't forget to add it to website/sidebars.json so that your new page shows up in the navigation. Test changes locally by building the site and navigating to your changes. In the website directory, run docusaurus-start. By default, this starts the site on localhost:3000. If port 3000 is already in use, it'll increment the port number from there. Use the following commands to run the link and spellcheckers locally: npm run spellcheck npm run link-lint This step can save you time during the review process since they'll run faster than the GitHub Action version of the checks and warn you of issues before you create a PR. Push your changes to your fork: git push --set-upstream origin MY-BRANCH Go to the Druid repo. GitHub should recognize that you have a new branch in your fork. Create a pull request from your Druid fork and branch to the master branch in the Apache Druid repo. The pull request template is extensive. You may not need all the information there, so feel free to delete unneeded sections as you fill it out. Once you create the pull request, GitHub automatically labels the issue so that reviewers can take a look. The docs go through a review process similar to the code where community members will offer feedback. Once the review process is complete and your changes are merged, they'll be available on the live site when the site gets republished. "},{"title":"Style guide","type":1,"pageTitle":"Contribute to Druid docs","url":"/docs/27.0.0/development/contribute-to-docs#style-guide","content":"Before publishing new content or updating an existing topic, audit your documentation using this checklist to make sure your contributions align with existing documentation. Here are some general guidelines: Use descriptive link text. If a link downloads a file, make sure to indicate this action.Use present tense where possible.Avoid negative constructions when possible. In other words, try to tell people what they should do instead of what they shouldn't.Use clear and direct language.Use descriptive headings and titles.Avoid using a present participle or gerund as the first word in a heading or title. 
A shortcut for this is to not start with a word that ends in -ing. For example, don't use "Configuring Druid." Use "Configure Druid."Use sentence case in document titles and headings.Don’t use images of text or code samples.Use SVG over PNG for images if you can.Provide alt text or an equivalent text explanation with each image.Use the appropriate text-formatting. For example, make sure code snippets and property names are in code font and UI elements are bold. Generally, you should avoid using bold or italics to emphasize certain words unless there's a good reason.Put conditional clauses before instructions. In the following example, "to drop a segment" is the conditional clause: to drop a segment, do the following.Avoid gender-specific pronouns, instead use "they."Use second person singular — "you" instead of "we."When American spelling is different from Commonwealth/"British" spelling, use the American spelling.Don’t use terms considered disrespectful. Refer to a list like Google’s Word list for guidance and alternatives.Use straight quotation marks and straight apostrophes instead of the curly versions.Introduce a list, a table, or a procedure with an introductory sentence that prepares the reader for what they're about to read. "},{"title":"Experimental features","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/experimental-features","content":"","keywords":""},{"title":"SQL-based ingestion","type":1,"pageTitle":"Experimental features","url":"/docs/27.0.0/development/experimental-features#sql-based-ingestion","content":"SQL-based ingestionSQL-based ingestion conceptsSQL-based ingestion and multi-stage query task API "},{"title":"Indexer process","type":1,"pageTitle":"Experimental features","url":"/docs/27.0.0/development/experimental-features#indexer-process","content":"Indexer processProcesses and servers "},{"title":"Kubernetes","type":1,"pageTitle":"Experimental features","url":"/docs/27.0.0/development/experimental-features#kubernetes","content":"Kubernetes "},{"title":"Segment locking","type":1,"pageTitle":"Experimental features","url":"/docs/27.0.0/development/experimental-features#segment-locking","content":"Configuration referenceTask referenceDesign "},{"title":"Front coding","type":1,"pageTitle":"Experimental features","url":"/docs/27.0.0/development/experimental-features#front-coding","content":"Ingestion spec reference "},{"title":"Other configuration properties","type":1,"pageTitle":"Experimental features","url":"/docs/27.0.0/development/experimental-features#other-configuration-properties","content":"Configuration reference CLOSED_SEGMENTS_SINKS modeExpression processing configuration druid.expressions.allowNestedArrays "},{"title":"Segments","type":0,"sectionRef":"#","url":"/docs/27.0.0/design/segments","content":"","keywords":""},{"title":"Segment file structure","type":1,"pageTitle":"Segments","url":"/docs/27.0.0/design/segments#segment-file-structure","content":"Segment files are columnar: the data for each column is laid out in separate data structures. By storing each column separately, Druid decreases query latency by scanning only those columns actually needed for a query. There are three basic column types: timestamp, dimensions, and metrics: Timestamp and metrics type columns are arrays of integer or floating point values compressed withLZ4. Once a query identifies which rows to select, it decompresses them, pulls out the relevant rows, and applies the desired aggregation operator. 
If a query doesn’t require a column, Druid skips over that column's data. Dimension columns are different because they support filter and group-by operations, so each dimension requires the following three data structures: Dictionary: Maps values (which are always treated as strings) to integer IDs, allowing compact representation of the list and bitmap values.List: The column’s values, encoded using the dictionary. Required for GroupBy and TopN queries. These operators allow queries that solely aggregate metrics based on filters to run without accessing the list of values.Bitmap: One bitmap for each distinct value in the column, to indicate which rows contain that value. Bitmaps allow for quick filtering operations because they are convenient for quickly applying AND and OR operators. Also known as inverted indexes. To get a better sense of these data structures, consider the "Page" column from the example data above, represented by the following data structures: 1: Dictionary { "Justin Bieber": 0, "Ke$ha": 1 } 2: List of column data [0, 0, 1, 1] 3: Bitmaps value="Justin Bieber": [1,1,0,0] value="Ke$ha": [0,0,1,1] Note that the bitmap is different from the dictionary and list data structures: the dictionary and list grow linearly with the size of the data, but the size of the bitmap section is the product of data size and column cardinality. That is, there is one bitmap per separate column value. Columns with the same value share the same bitmap. For each row in the list of column data, there is only a single bitmap that has a non-zero entry. This means that high cardinality columns have extremely sparse, and therefore highly compressible, bitmaps. Druid exploits this using compression algorithms that are specially suited for bitmaps, such as Roaring bitmap compression. "},{"title":"Handling null values","type":1,"pageTitle":"Segments","url":"/docs/27.0.0/design/segments#handling-null-values","content":"By default, Druid string dimension columns use the values '' and null interchangeably. Numeric and metric columns cannot represent null but use nulls to mean 0. However, Druid provides a SQL compatible null handling mode, which you can enable at the system level through druid.generic.useDefaultValueForNull. This setting, when set to false, allows Druid to create segments at ingestion time in which the following occurs: String columns can distinguish '' from null,Numeric columns can represent null valued rows instead of 0. String dimension columns contain no additional column structures in SQL compatible null handling mode. Instead, they reserve an additional dictionary entry for the null value. Numeric columns are stored in the segment with an additional bitmap in which the set bits indicate null-valued rows. In addition to slightly increased segment sizes, SQL compatible null handling can incur a performance cost at query time, due to the need to check the null bitmap. This performance cost only occurs for columns that actually contain null values. "},{"title":"Segments with different schemas","type":1,"pageTitle":"Segments","url":"/docs/27.0.0/design/segments#segments-with-different-schemas","content":"Druid segments for the same datasource may have different schemas. If a string column (dimension) exists in one segment but not another, queries that involve both segments still work. In default mode, queries for the segment without the dimension behave as if the dimension contains only blank values. 
In SQL-compatible mode, queries for the segment without the dimension behave as if the dimension contains only null values. Similarly, if one segment has a numeric column (metric) but another does not, queries on the segment without the metric generally operate as expected. Aggregations over the missing metric operate as if the metric doesn't exist. "},{"title":"Column format","type":1,"pageTitle":"Segments","url":"/docs/27.0.0/design/segments#column-format","content":"Each column is stored as two parts: A Jackson-serialized ColumnDescriptor.The binary data for the column. A ColumnDescriptor is Jackson-serialized instance of the internal Druid ColumnDescriptor class . It allows the use of Jackson's polymorphic deserialization to add new and interesting methods of serialization with minimal impact to the code. It consists of some metadata about the column (for example: type, whether it's multi-value) and a list of serialization/deserialization logic that can deserialize the rest of the binary. "},{"title":"Multi-value columns","type":1,"pageTitle":"Segments","url":"/docs/27.0.0/design/segments#multi-value-columns","content":"A multi-value column allows a single row to contain multiple strings for a column. You can think of it as an array of strings. If a datasource uses multi-value columns, then the data structures within the segment files look a bit different. Let's imagine that in the example above, the second row is tagged with both the Ke$ha and Justin Bieber topics, as follows: 1: Dictionary { "Justin Bieber": 0, "Ke$ha": 1 } 2: List of column data [0, [0,1], <--Row value in a multi-value column can contain an array of values 1, 1] 3: Bitmaps value="Justin Bieber": [1,1,0,0] value="Ke$ha": [0,1,1,1] ^ | | Multi-value column contains multiple non-zero entries Note the changes to the second row in the list of column data and the Ke$habitmap. If a row has more than one value for a column, its entry in the list is an array of values. Additionally, a row with n values in the list has n non-zero valued entries in bitmaps. "},{"title":"Compression","type":1,"pageTitle":"Segments","url":"/docs/27.0.0/design/segments#compression","content":"Druid uses LZ4 by default to compress blocks of values for string, long, float, and double columns. Druid uses Roaring to compress bitmaps for string columns and numeric null values. We recommend that you use these defaults unless you've experimented with your data and query patterns suggest that non-default options will perform better in your specific case. Druid also supports Concise bitmap compression. For string column bitmaps, the differences between using Roaring and Concise are most pronounced for high cardinality columns. In this case, Roaring is substantially faster on filters that match many values, but in some cases Concise can have a lower footprint due to the overhead of the Roaring format (but is still slower when many values are matched). You configure compression at the segment level, not for individual columns. See IndexSpec for more details. "},{"title":"Segment identification","type":1,"pageTitle":"Segments","url":"/docs/27.0.0/design/segments#segment-identification","content":"Segment identifiers typically contain the segment datasource, interval start time (in ISO 8601 format), interval end time (in ISO 8601 format), and version information. 
If data is additionally sharded beyond a time range, the segment identifier also contains a partition number: datasource_intervalStart_intervalEnd_version_partitionNum "},{"title":"Segment ID examples","type":1,"pageTitle":"Segments","url":"/docs/27.0.0/design/segments#segment-id-examples","content":"The increasing partition numbers in the following segments indicate that multiple segments exist for the same interval: foo_2015-01-01/2015-01-02_v1_0 foo_2015-01-01/2015-01-02_v1_1 foo_2015-01-01/2015-01-02_v1_2 If you reindex the data with a new schema, Druid allocates a new version ID to the newly created segments: foo_2015-01-01/2015-01-02_v2_0 foo_2015-01-01/2015-01-02_v2_1 foo_2015-01-01/2015-01-02_v2_2 "},{"title":"Sharding","type":1,"pageTitle":"Segments","url":"/docs/27.0.0/design/segments#sharding","content":"Multiple segments can exist for a single time interval and datasource. These segments form a block for an interval. Depending on the type of shardSpec used to shard the data, Druid queries may only complete if a block is complete. For example, if a block consists of the following three segments: sampleData_2011-01-01T02:00:00:00Z_2011-01-01T03:00:00:00Z_v1_0 sampleData_2011-01-01T02:00:00:00Z_2011-01-01T03:00:00:00Z_v1_1 sampleData_2011-01-01T02:00:00:00Z_2011-01-01T03:00:00:00Z_v1_2 All three segments must load before a query for the interval 2011-01-01T02:00:00:00Z_2011-01-01T03:00:00:00Z can complete. Linear shard specs are an exception to this rule. Linear shard specs do not enforce "completeness" so queries can complete even if shards are not completely loaded. For example, if a real-time ingestion creates three segments that were sharded with linear shard spec, and only two of the segments are loaded, queries return results for those two segments. "},{"title":"Segment components","type":1,"pageTitle":"Segments","url":"/docs/27.0.0/design/segments#segment-components","content":"A segment contains several files: version.bin 4 bytes representing the current segment version as an integer. For example, for v9 segments the version is 0x0, 0x0, 0x0, 0x9. meta.smoosh A file containing metadata (filenames and offsets) about the contents of the other smoosh files. XXXXX.smoosh Smoosh (.smoosh) files contain concatenated binary data. This file consolidation reduces the number of file descriptors that must be open when accessing data. The files are 2 GB or less in size to remain within the limit of a memory-mapped ByteBuffer in Java. Smoosh files contain the following: Individual files for each column in the data, including one for the __time column that refers to the timestamp of the segment. An index.drd file that contains additional segment metadata. In the codebase, segments have an internal format version. The current segment format version is v9. "},{"title":"Implications of updating segments","type":1,"pageTitle":"Segments","url":"/docs/27.0.0/design/segments#implications-of-updating-segments","content":"Druid uses versioning to manage updates to create a form of multi-version concurrency control (MVCC). These MVCC versions are distinct from the segment format version discussed above. Note that updates that span multiple segment intervals are only atomic within each interval. They are not atomic across the entire update. 
For example, if you have the following segments: foo_2015-01-01/2015-01-02_v1_0 foo_2015-01-02/2015-01-03_v1_1 foo_2015-01-03/2015-01-04_v1_2 v2 segments are loaded into the cluster as soon as they are built and replace v1 segments for the period of time the segments overlap. Before v2 segments are completely loaded, the cluster may contain a mixture of v1 and v2 segments. foo_2015-01-01/2015-01-02_v1_0 foo_2015-01-02/2015-01-03_v2_1 foo_2015-01-03/2015-01-04_v1_2 In this case, queries may hit a mixture of v1 and v2 segments. "},{"title":"Ambari Metrics Emitter","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/ambari-metrics-emitter","content":"","keywords":""},{"title":"Introduction","type":1,"pageTitle":"Ambari Metrics Emitter","url":"/docs/27.0.0/development/extensions-contrib/ambari-metrics-emitter#introduction","content":"This extension emits Druid metrics to an ambari-metrics carbon server. Events are sent after being pickled (i.e., batched). The size of the batch is configurable. "},{"title":"Configuration","type":1,"pageTitle":"Ambari Metrics Emitter","url":"/docs/27.0.0/development/extensions-contrib/ambari-metrics-emitter#configuration","content":"All the configuration parameters for the ambari-metrics emitter are under druid.emitter.ambari-metrics. property\tdescription\trequired?\tdefaultdruid.emitter.ambari-metrics.hostname\tThe hostname of the ambari-metrics server.\tyes\tnone druid.emitter.ambari-metrics.port\tThe port of the ambari-metrics server.\tyes\tnone druid.emitter.ambari-metrics.protocol\tThe protocol used to send metrics to the ambari metrics collector. One of http/https\tno\thttp druid.emitter.ambari-metrics.trustStorePath\tPath to trustStore to be used for https\tno\tnone druid.emitter.ambari-metrics.trustStoreType\ttrustStore type to be used for https\tno\tnone druid.emitter.ambari-metrics.trustStorePassword\ttrustStore password to be used for https\tno\tnone druid.emitter.ambari-metrics.batchSize\tNumber of events to send as one batch.\tno\t100 druid.emitter.ambari-metrics.eventConverter\tFilter and converter of druid events to ambari-metrics timeline events (please see next section).\tyes\tnone druid.emitter.ambari-metrics.flushPeriod\tQueue flushing period in milliseconds.\tno\t1 minute druid.emitter.ambari-metrics.maxQueueSize\tMaximum size of the queue used to buffer events.\tno\tMAX_INT druid.emitter.ambari-metrics.alertEmitters\tList of emitters where alerts will be forwarded to.\tno\tempty list (no forwarding) druid.emitter.ambari-metrics.emitWaitTime\tWait time in milliseconds to try to send the event; if this time is exceeded, the emitter drops the event.\tno\t0 druid.emitter.ambari-metrics.waitForEventTime\tWaiting time in milliseconds, if necessary, for an event to become available.\tno\t1000 (1 sec) "},{"title":"Druid to Ambari Metrics Timeline Event Converter","type":1,"pageTitle":"Ambari Metrics Emitter","url":"/docs/27.0.0/development/extensions-contrib/ambari-metrics-emitter#druid-to-ambari-metrics-timeline-event-converter","content":"The Ambari Metrics Timeline Event Converter defines a mapping from a druid metric name plus dimensions to a timeline event metricName. The ambari-metrics metric path is organized using the following schema:<namespacePrefix>.[<druid service name>].[<druid hostname>].<druid metrics dimensions>.<druid metrics name>Properly naming the metrics is critical to avoid conflicts, confusing data, and potentially wrong interpretations later on. 
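As a rough illustration only, the following sketch shows how such a path could be assembled from the schema components. The build_metric_path helper is hypothetical and not part of the emitter; the actual converter is implemented in Java inside the extension.

```python
# Illustrative sketch: join the schema components above with dots,
# ordering dimension values by dimension name.
def build_metric_path(namespace_prefix, service, hostname, dimensions, metric):
    dimension_values = [dimensions[name] for name in sorted(dimensions)]
    parts = [namespace_prefix, service, hostname, *dimension_values, metric]
    return ".".join(parts)

print(build_metric_path(
    "druid",                 # namespacePrefix
    "historical",            # druid service name
    "hist-host1:8080",       # druid hostname
    {"dataSource": "MyDataSourceName", "type": "GroupBy"},  # metric dimensions
    "query/time",            # druid metric name
))
# -> druid.historical.hist-host1:8080.MyDataSourceName.GroupBy.query/time
```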
Example: druid.historical.hist-host1:8080.MyDataSourceName.GroupBy.query/time: druid -> namespace prefix; historical -> service name; hist-host1:8080 -> druid hostname; MyDataSourceName -> dimension value; GroupBy -> dimension value; query/time -> metric name. We have two different implementations of the event converter: Send-All converter The first implementation, called all, will send all the druid service metrics events. The path will be in the form <namespacePrefix>.[<druid service name>].[<druid hostname>].<dimensions values ordered by dimension's name>.<metric> The user has control of <namespacePrefix>.[<druid service name>].[<druid hostname>]. druid.emitter.ambari-metrics.eventConverter={"type":"all", "namespacePrefix": "druid.test", "appName":"druid"} White-list based converter The second implementation, called whiteList, will send only the white-listed metrics and dimensions. As with the all converter, the user has control of <namespacePrefix>.[<druid service name>].[<druid hostname>]. The white-list based converter comes with a default white list map located under resources in ./src/main/resources/defaultWhiteListMap.json. However, the user can override the default white list map by supplying a property called mapPath. This property is a String containing the path of the file containing the white list map JSON object. For example, the following converter will read the map from the file /pathPrefix/fileName.json. druid.emitter.ambari-metrics.eventConverter={"type":"whiteList", "namespacePrefix": "druid.test", "ignoreHostname":true, "appName":"druid", "mapPath":"/pathPrefix/fileName.json"} Druid emits a huge number of metrics, so we highly recommend using the whiteList converter. "},{"title":"Apache Cassandra","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/cassandra","content":"Apache Cassandra To use this Apache Druid extension, include druid-cassandra-storage in the extensions load list. Apache Cassandra can also be leveraged for deep storage. This requires some additional Druid configuration as well as setting up the necessary schema within a Cassandra keyspace.","keywords":""},{"title":"Rackspace Cloud Files","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/cloudfiles","content":"","keywords":""},{"title":"Deep Storage","type":1,"pageTitle":"Rackspace Cloud Files","url":"/docs/27.0.0/development/extensions-contrib/cloudfiles#deep-storage","content":"Rackspace Cloud Files is another option for deep storage. This requires some additional Druid configuration. Property\tPossible Values\tDescription\tDefaultdruid.storage.type\tcloudfiles Must be set. druid.storage.region Rackspace Cloud Files region.\tMust be set. druid.storage.container Rackspace Cloud Files container name.\tMust be set. druid.storage.basePath Rackspace Cloud Files base path to use in the container.\tMust be set. druid.storage.operationMaxRetries Number of tries before canceling a Rackspace operation.\t10 druid.cloudfiles.userName Rackspace Cloud username\tMust be set. druid.cloudfiles.apiKey Rackspace Cloud API key.\tMust be set. druid.cloudfiles.provider\trackspace-cloudfiles-us,rackspace-cloudfiles-uk\tName of the provider depending on the region.\tMust be set. 
druid.cloudfiles.useServiceNet\ttrue,false\tWhether to use the internal service net.\ttrue "},{"title":"Firehose","type":1,"pageTitle":"Rackspace Cloud Files","url":"/docs/27.0.0/development/extensions-contrib/cloudfiles#firehose","content":" StaticCloudFilesFirehose This firehose ingests events, similar to the StaticAzureBlobStoreFirehose, but from Rackspace's Cloud Files. Data is newline delimited, with one JSON object per line and parsed as per the InputRowParser configuration. The storage account is shared with the one used for Rackspace's Cloud Files deep storage functionality, but blobs can be in a different region and container. As with the Azure blobstore, it is assumed to be gzipped if the extension ends in .gz This firehose is splittable and can be used by native parallel index tasks. Since each split represents an object in this firehose, each worker task of index_parallel will read an object. Sample spec: "firehose" : { "type" : "static-cloudfiles", "blobs": [ { "region": "DFW" "container": "container", "path": "/path/to/your/file.json" }, { "region": "ORD" "container": "anothercontainer", "path": "/another/path.json" } ] } This firehose provides caching and prefetching features. In IndexTask, a firehose can be read twice if intervals or shardSpecs are not specified, and, in this case, caching can be useful. Prefetching is preferred when direct scan of objects is slow. property\tdescription\tdefault\trequired?type\tThis should be static-cloudfiles.\tN/A\tyes blobs\tJSON array of Cloud Files blobs.\tN/A\tyes maxCacheCapacityBytes\tMaximum size of the cache space in bytes. 0 means disabling cache.\t1073741824\tno maxCacheCapacityBytes\tMaximum size of the cache space in bytes. 0 means disabling cache. Cached files are not removed until the ingestion task completes.\t1073741824\tno maxFetchCapacityBytes\tMaximum size of the fetch space in bytes. 0 means disabling prefetch. Prefetched files are removed immediately once they are read.\t1073741824\tno fetchTimeout\tTimeout for fetching a Cloud Files object.\t60000\tno maxFetchRetry\tMaximum retry for fetching a Cloud Files object.\t3\tno Cloud Files Blobs: property\tdescription\tdefault\trequired?container\tName of the Cloud Files container\tN/A\tyes path\tThe path where data is located.\tN/A\tyes "},{"title":"Graphite Emitter","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/graphite","content":"","keywords":""},{"title":"Introduction","type":1,"pageTitle":"Graphite Emitter","url":"/docs/27.0.0/development/extensions-contrib/graphite#introduction","content":"This extension emits druid metrics to a graphite carbon server. Metrics can be sent by using plaintext or pickle protocol. The pickle protocol is more efficient and supports sending batches of metrics (plaintext protocol send only one metric) in one request; batch size is configurable. "},{"title":"Configuration","type":1,"pageTitle":"Graphite Emitter","url":"/docs/27.0.0/development/extensions-contrib/graphite#configuration","content":"All the configuration parameters for graphite emitter are under druid.emitter.graphite. 
property\tdescription\trequired?\tdefaultdruid.emitter.graphite.hostname\tThe hostname of the graphite server.\tyes\tnone druid.emitter.graphite.port\tThe port of the graphite server.\tyes\tnone druid.emitter.graphite.batchSize\tNumber of events to send as one batch (only for pickle protocol)\tno\t100 druid.emitter.graphite.protocol\tGraphite protocol; available protocols: pickle, plaintext.\tno\tpickle druid.emitter.graphite.eventConverter\tFilter and converter of druid events to graphite event (please see next section).\tyes\tnone druid.emitter.graphite.flushPeriod\tQueue flushing period in milliseconds.\tno\t1 minute druid.emitter.graphite.maxQueueSize\tMaximum size of the queue used to buffer events.\tno\tMAX_INT druid.emitter.graphite.alertEmitters\tList of emitters where alerts will be forwarded to. This is a JSON list of emitter names, e.g. ["logging", "http"]\tno\tempty list (no forwarding) druid.emitter.graphite.requestLogEmitters\tList of emitters where request logs (i.e., query logging events sent to emitters when druid.request.logging.type is set to emitter) will be forwarded to. This is a JSON list of emitter names, e.g. ["logging", "http"]\tno\tempty list (no forwarding) druid.emitter.graphite.emitWaitTime\twait time in milliseconds to try to send the event otherwise emitter will throwing event.\tno\t0 druid.emitter.graphite.waitForEventTime\twaiting time in milliseconds if necessary for an event to become available.\tno\t1000 (1 sec) "},{"title":"Supported event types","type":1,"pageTitle":"Graphite Emitter","url":"/docs/27.0.0/development/extensions-contrib/graphite#supported-event-types","content":"The graphite emitter only emits service metric events to graphite (See Druid Metrics for a list of metrics). Alerts and request logs are not sent to graphite. These event types are not well represented in Graphite, which is more suited for timeseries views on numeric metrics, vs. storing non-numeric log events. Instead, alerts and request logs are optionally forwarded to other emitter implementations, specified by druid.emitter.graphite.alertEmitters and druid.emitter.graphite.requestLogEmitters respectively. "},{"title":"Druid to Graphite Event Converter","type":1,"pageTitle":"Graphite Emitter","url":"/docs/27.0.0/development/extensions-contrib/graphite#druid-to-graphite-event-converter","content":"Graphite Event Converter defines a mapping between druid metrics name plus dimensions to a Graphite metric path. Graphite metric path is organized using the following schema:<namespacePrefix>.[<druid service name>].[<druid hostname>].<druid metrics dimensions>.<druid metrics name>Properly naming the metrics is critical to avoid conflicts, confusing data and potentially wrong interpretation later on. Example druid.historical.hist-host1_yahoo_com:8080.MyDataSourceName.GroupBy.query/time: druid -> namespace prefixhistorical -> service namehist-host1.yahoo.com:8080 -> druid hostnameMyDataSourceName -> dimension valueGroupBy -> dimension valuequery/time -> metric name We have two different implementation of event converter: Send-All converter The first implementation called all, will send all the druid service metrics events. The path will be in the form <namespacePrefix>.[<druid service name>].[<druid hostname>].<dimensions values ordered by dimension's name>.<metric>User has control of <namespacePrefix>.[<druid service name>].[<druid hostname>]. 
You can omit the hostname by setting ignoreHostname=truedruid.SERVICE_NAME.dataSourceName.queryType.query/time You can omit the service name by setting ignoreServiceName=truedruid.HOSTNAME.dataSourceName.queryType.query/time Elements in metric name by default are separated by "/", so graphite will create all metrics on one level. If you want to have metrics in the tree structure, you have to set replaceSlashWithDot=trueOriginal: druid.HOSTNAME.dataSourceName.queryType.query/timeChanged: druid.HOSTNAME.dataSourceName.queryType.query.time druid.emitter.graphite.eventConverter={"type":"all", "namespacePrefix": "druid.test", "ignoreHostname":true, "ignoreServiceName":true} White-list based converter The second implementation called whiteList, will send only the white listed metrics and dimensions. Same as for the all converter user has control of <namespacePrefix>.[<druid service name>].[<druid hostname>].White-list based converter comes with the following default white list map located under resources in ./src/main/resources/defaultWhiteListMap.json Although user can override the default white list map by supplying a property called mapPath. This property is a String containing the path for the file containing white list map JSON object. For example the following converter will read the map from the file /pathPrefix/fileName.json. druid.emitter.graphite.eventConverter={"type":"whiteList", "namespacePrefix": "druid.test", "ignoreHostname":true, "ignoreServiceName":true, "mapPath":"/pathPrefix/fileName.json"} Druid emits a huge number of metrics we highly recommend to use the whiteList converter "},{"title":"GCE Extensions","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/gce-extensions","content":"","keywords":""},{"title":"Overlord Dynamic Configuration","type":1,"pageTitle":"GCE Extensions","url":"/docs/27.0.0/development/extensions-contrib/gce-extensions#overlord-dynamic-configuration","content":"The Overlord can dynamically change worker behavior. The JSON object can be submitted to the Overlord via a POST request at: http://<OVERLORD_IP>:<port>/druid/indexer/v1/worker Optional Header Parameters for auditing the config change can also be specified. Header Param Name\tDescription\tDefaultX-Druid-Author\tauthor making the config change\t"" X-Druid-Comment\tcomment describing the change being done\t"" A sample worker config spec is shown below: { "autoScaler": { "envConfig" : { "numInstances" : 1, "projectId" : "super-project", "zoneName" : "us-central-1", "managedInstanceGroupName" : "druid-middlemanagers" }, "maxNumWorkers" : 4, "minNumWorkers" : 2, "type" : "gce" } } The configuration of the autoscaler is quite simple and it is made of two levels only. The external level specifies the type—always gce in this case— and two numeric values, the maxNumWorkers and minNumWorkers used to define the boundaries in between which the number of instances must be at any time. The internal level is the envConfig and it is used to specify The numInstances used to specify how many workers will be spawned at each request to provision more workers. This is safe to be left to 1The projectId used to specify the name of the project in which the MIG residesThe zoneName used to identify in which zone of the worlds the MIG isThe managedInstanceGroupName used to specify the MIG containing the instances created or removed Please refer to the Overlord Dynamic Configuration section in the main documentationfor parameters other than the ones specified here, such as selectStrategy etc. 
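As a sketch of how the worker config shown above can be submitted, assuming the Python requests library is available; OVERLORD_IP and the port (8090 here) are placeholders for your own Overlord address:

```python
# Sketch only: POST the sample GCE worker config to the Overlord,
# including the optional audit headers described above.
import json
import requests

worker_config = {
    "autoScaler": {
        "envConfig": {
            "numInstances": 1,
            "projectId": "super-project",
            "zoneName": "us-central-1",
            "managedInstanceGroupName": "druid-middlemanagers",
        },
        "maxNumWorkers": 4,
        "minNumWorkers": 2,
        "type": "gce",
    }
}

response = requests.post(
    "http://OVERLORD_IP:8090/druid/indexer/v1/worker",
    headers={
        "Content-Type": "application/json",
        "X-Druid-Author": "ops-team",               # optional audit metadata
        "X-Druid-Comment": "enable GCE autoscaling", # optional audit metadata
    },
    data=json.dumps(worker_config),
)
response.raise_for_status()
```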
"},{"title":"Known limitations","type":1,"pageTitle":"GCE Extensions","url":"/docs/27.0.0/development/extensions-contrib/gce-extensions#known-limitations","content":"The module internally uses the ListManagedInstancescall from the API and, while the documentation of the API states that the call can be paged through using thepageToken argument, the responses to such call do not provide any nextPageToken to set such parameter. This means that the extension can operate safely with a maximum of 500 MiddleManagers instances at any time (the maximum number of instances to be returned for each call). "},{"title":"InfluxDB Line Protocol Parser","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/influx","content":"","keywords":""},{"title":"Line Protocol","type":1,"pageTitle":"InfluxDB Line Protocol Parser","url":"/docs/27.0.0/development/extensions-contrib/influx#line-protocol","content":"A typical line looks like this: cpu,application=dbhost=prdb123,region=us-east-1 usage_idle=99.24,usage_user=0.55 1520722030000000000 which contains four parts: measurement: A string indicating the name of the measurement represented (e.g. cpu, network, web_requests)tags: zero or more key-value pairs (i.e. dimensions)measurements: one or more key-value pairs; values can be numeric, boolean, or stringtimestamp: nanoseconds since Unix epoch (the parser truncates it to milliseconds) The parser extracts these fields into a map, giving the measurement the key measurement and the timestamp the key _ts. The tag and measurement keys are copied verbatim, so users should take care to avoid name collisions. It is up to the ingestion spec to decide which fields should be treated as dimensions and which should be treated as metrics (typically tags correspond to dimensions and measurements correspond to metrics). The parser is configured like so: "parser": { "type": "string", "parseSpec": { "format": "influx", "timestampSpec": { "column": "__ts", "format": "millis" }, "dimensionsSpec": { "dimensionExclusions": [ "__ts" ] }, "whitelistMeasurements": [ "cpu" ] } The whitelistMeasurements field is an optional list of strings. If present, measurements that do not match one of the strings in the list will be ignored. 
"},{"title":"DistinctCount Aggregator","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/distinctcount","content":"","keywords":""},{"title":"Timeseries query","type":1,"pageTitle":"DistinctCount Aggregator","url":"/docs/27.0.0/development/extensions-contrib/distinctcount#timeseries-query","content":"{ "queryType": "timeseries", "dataSource": "sample_datasource", "granularity": "day", "aggregations": [ { "type": "distinctCount", "name": "uv", "fieldName": "visitor_id" } ], "intervals": [ "2016-03-01T00:00:00.000/2013-03-20T00:00:00.000" ] } "},{"title":"TopN query","type":1,"pageTitle":"DistinctCount Aggregator","url":"/docs/27.0.0/development/extensions-contrib/distinctcount#topn-query","content":"{ "queryType": "topN", "dataSource": "sample_datasource", "dimension": "sample_dim", "threshold": 5, "metric": "uv", "granularity": "all", "aggregations": [ { "type": "distinctCount", "name": "uv", "fieldName": "visitor_id" } ], "intervals": [ "2016-03-06T00:00:00/2016-03-06T23:59:59" ] } "},{"title":"GroupBy query","type":1,"pageTitle":"DistinctCount Aggregator","url":"/docs/27.0.0/development/extensions-contrib/distinctcount#groupby-query","content":"{ "queryType": "groupBy", "dataSource": "sample_datasource", "dimensions": ["sample_dim"], "granularity": "all", "aggregations": [ { "type": "distinctCount", "name": "uv", "fieldName": "visitor_id" } ], "intervals": [ "2016-03-06T00:00:00/2016-03-06T23:59:59" ] } "},{"title":"InfluxDB Emitter","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/influxdb-emitter","content":"","keywords":""},{"title":"Introduction","type":1,"pageTitle":"InfluxDB Emitter","url":"/docs/27.0.0/development/extensions-contrib/influxdb-emitter#introduction","content":"This extension emits druid metrics to InfluxDB over HTTP. Currently this emitter only emits service metric events to InfluxDB (See Druid metrics for a list of metrics). When a metric event is fired it is added to a queue of events. After a configurable amount of time, the events on the queue are transformed to InfluxDB's line protocol and POSTed to the InfluxDB HTTP API. The entire queue is flushed at this point. The queue is also flushed as the emitter is shutdown. Note that authentication and authorization must be enabled on the InfluxDB server. "},{"title":"Configuration","type":1,"pageTitle":"InfluxDB Emitter","url":"/docs/27.0.0/development/extensions-contrib/influxdb-emitter#configuration","content":"All the configuration parameters for the influxdb emitter are under druid.emitter.influxdb. Property\tDescription\tRequired?\tDefaultdruid.emitter.influxdb.hostname\tThe hostname of the InfluxDB server.\tYes\tN/A druid.emitter.influxdb.port\tThe port of the InfluxDB server.\tNo\t8086 druid.emitter.influxdb.protocol\tThe protocol used to send metrics to InfluxDB. 
One of http/https\tNo\thttp druid.emitter.influxdb.trustStorePath\tThe path to the trustStore to be used for https\tNo\tnone druid.emitter.influxdb.trustStoreType\tThe trustStore type to be used for https\tNo\tjks druid.emitter.influxdb.trustStorePassword\tThe trustStore password to be used for https\tNo\tnone druid.emitter.influxdb.databaseName\tThe name of the database in InfluxDB.\tYes\tN/A druid.emitter.influxdb.maxQueueSize\tThe size of the queue that holds events.\tNo\tInteger.MAX_VALUE(=2^31-1) druid.emitter.influxdb.flushPeriod\tHow often (in milliseconds) the events queue is parsed into Line Protocol and POSTed to InfluxDB.\tNo\t60000 druid.emitter.influxdb.flushDelay\tHow long (in milliseconds) the scheduled method will wait until it first runs.\tNo\t60000 druid.emitter.influxdb.influxdbUserName\tThe username for authenticating with the InfluxDB database.\tYes\tN/A druid.emitter.influxdb.influxdbPassword\tThe password of the database authorized user\tYes\tN/A druid.emitter.influxdb.dimensionWhitelist\tA whitelist of metric dimensions to include as tags\tNo\t["dataSource","type","numMetrics","numDimensions","threshold","dimension","taskType","taskStatus","tier"] "},{"title":"InfluxDB Line Protocol","type":1,"pageTitle":"InfluxDB Emitter","url":"/docs/27.0.0/development/extensions-contrib/influxdb-emitter#influxdb-line-protocol","content":"An example of how this emitter parses a Druid metric event into InfluxDB's line protocol is given here: The syntax of the line protocol is: <measurement>[,<tag_key>=<tag_value>[,<tag_key>=<tag_value>]] <field_key>=<field_value>[,<field_key>=<field_value>] [<timestamp>] where timestamp is in nanoseconds since epoch. A typical service metric event as recorded by Druid's logging emitter is: Event [{"feed":"metrics","timestamp":"2017-10-31T09:09:06.857Z","service":"druid/historical","host":"historical001:8083","version":"0.11.0-SNAPSHOT","metric":"query/cache/total/hits","value":34787256}]. This event is parsed into line protocol according to these rules: The measurement becomes druid_query since query is the first part of the metric. The tags are service=druid/historical, hostname=historical001, metric=druid_cache_total. (The metric tag is the middle part of the druid metric, separated with _ and preceded by druid_. Another example would be if an event has metric=query/time then there is no middle part and hence no metric tag.) The field is druid_hits since this is the last part of the metric. This gives the following String which can be POSTed to InfluxDB: "druid_query,service=druid/historical,hostname=historical001,metric=druid_cache_total druid_hits=34787256 1509440946857000000" The InfluxDB emitter has a white list of dimensions which will be added as a tag to the line protocol string if the metric has a dimension from the white list. The value of the dimension is sanitized such that every occurrence of a dot or whitespace is replaced with a _. "},{"title":"Kafka Emitter","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/kafka-emitter","content":"","keywords":""},{"title":"Introduction","type":1,"pageTitle":"Kafka Emitter","url":"/docs/27.0.0/development/extensions-contrib/kafka-emitter#introduction","content":"This extension emits Druid metrics to Apache Kafka directly in JSON format. Kafka has a rich ecosystem and a readily available consumer API, so if you already use Kafka, it's easy to integrate various tools or UIs to monitor the status of your Druid cluster with this extension. 
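Because the emitted events are plain JSON, any Kafka client can consume them. The sketch below is only illustrative: it assumes the kafka-python package and a metrics topic named druid-metric (matching the example configuration that follows).

```python
# Minimal sketch: consume Druid metric events emitted to Kafka as JSON.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "druid-metric",                                   # assumed topic name
    bootstrap_servers=["hostname1:9092", "hostname2:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for record in consumer:
    event = record.value                              # one metric event per message
    print(event.get("service"), event.get("metric"), event.get("value"))
```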
"},{"title":"Configuration","type":1,"pageTitle":"Kafka Emitter","url":"/docs/27.0.0/development/extensions-contrib/kafka-emitter#configuration","content":"All the configuration parameters for the Kafka emitter are under druid.emitter.kafka. Property\tDescription\tRequired\tDefaultdruid.emitter.kafka.bootstrap.servers\tComma-separated Kafka broker. ([hostname:port],[hostname:port]...)\tyes\tnone druid.emitter.kafka.event.types\tComma-separated event types. Supported types are alerts, metrics, requests, and segment_metadata.\tno\t["metrics", "alerts"] druid.emitter.kafka.metric.topic\tKafka topic name for emitter's target to emit service metrics. If event.types contains metrics, this field cannot be empty.\tno\tnone druid.emitter.kafka.alert.topic\tKafka topic name for emitter's target to emit alerts. If event.types contains alerts, this field cannot empty.\tno\tnone druid.emitter.kafka.request.topic\tKafka topic name for emitter's target to emit request logs. If event.types contains requests, this field cannot be empty.\tno\tnone druid.emitter.kafka.segmentMetadata.topic\tKafka topic name for emitter's target to emit segment metadata. If event.types contains segment_metadata, this field cannot be empty.\tno\tnone druid.emitter.kafka.producer.config\tJSON configuration to set additional properties to Kafka producer.\tno\tnone druid.emitter.kafka.clusterName\tOptional value to specify the name of your Druid cluster. It can help make groups in your monitoring environment.\tno\tnone "},{"title":"Example","type":1,"pageTitle":"Kafka Emitter","url":"/docs/27.0.0/development/extensions-contrib/kafka-emitter#example","content":"druid.emitter.kafka.bootstrap.servers=hostname1:9092,hostname2:9092 druid.emitter.kafka.event.types=["metrics", alerts", "requests", "segment_metadata"] druid.emitter.kafka.metric.topic=druid-metric druid.emitter.kafka.alert.topic=druid-alert druid.emitter.kafka.request.topic=druid-request-logs druid.emitter.kafka.segmentMetadata.topic=druid-segment-metadata druid.emitter.kafka.producer.config={"max.block.ms":10000} "},{"title":"Compressed Big Decimal","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/compressed-big-decimal","content":"","keywords":""},{"title":"Overview","type":1,"pageTitle":"Compressed Big Decimal","url":"/docs/27.0.0/development/extensions-contrib/compressed-big-decimal#overview","content":"Compressed Big Decimal is an extension which provides support for Mutable big decimal value that can be used to accumulate values without losing precision or reallocating memory. This type helps in absolute precision arithmetic on large numbers in applications, where greater level of accuracy is required, such as financial applications, currency based transactions. This helps avoid rounding issues where in potentially large amount of money can be lost. Accumulation requires that the two numbers have the same scale, but does not require that they are of the same size. If the value being accumulated has a larger underlying array than this value (the result), then the higher order bits are dropped, similar to what happens when adding a long to an int and storing the result in an int. A compressed big decimal that holds its data with an embedded array. Compressed big decimal is an absolute number based complex type based on big decimal in Java. This supports all the functionalities supported by Java Big Decimal. Java Big Decimal is not mutable in order to avoid big garbage collection issues. 
Compressed big decimal is needed to mutate the value in the accumulator. Main enhancements provided by this extension: Functionality: Mutating Big decimal type with greater precision Accuracy: Provides greater level of accuracy in decimal arithmetic "},{"title":"Operations","type":1,"pageTitle":"Compressed Big Decimal","url":"/docs/27.0.0/development/extensions-contrib/compressed-big-decimal#operations","content":"To use this extension, make sure to load compressed-big-decimal to your config file. "},{"title":"Configuration","type":1,"pageTitle":"Compressed Big Decimal","url":"/docs/27.0.0/development/extensions-contrib/compressed-big-decimal#configuration","content":"There are currently no configuration properties specific to Compressed Big Decimal "},{"title":"Limitations","type":1,"pageTitle":"Compressed Big Decimal","url":"/docs/27.0.0/development/extensions-contrib/compressed-big-decimal#limitations","content":"Compressed Big Decimal does not provide correct result when the value being accumulated has a larger underlying array than this value (the result), then the higher order bits are dropped, similar to what happens when adding a long to an int and storing the result in an int. "},{"title":"Ingestion Spec:","type":1,"pageTitle":"Compressed Big Decimal","url":"/docs/27.0.0/development/extensions-contrib/compressed-big-decimal#ingestion-spec","content":"Most properties in the Ingest spec derived from Ingestion Spec / Data Formats property\tdescription\trequired?metricsSpec\tMetrics Specification, In metrics specification while specifying metrics details such as name, type should be specified as compressedBigDecimal\tYes "},{"title":"Query spec:","type":1,"pageTitle":"Compressed Big Decimal","url":"/docs/27.0.0/development/extensions-contrib/compressed-big-decimal#query-spec","content":"Most properties in the query spec derived from groupBy query / timeseries, see documentation for these query types. property\tdescription\trequired?queryType\tThis String should always be either "groupBy" OR "timeseries"; this is the first thing Druid looks at to figure out how to interpret the query.\tyes dataSource\tA String or Object defining the data source to query, very similar to a table in a relational database. See DataSource for more information.\tyes dimensions\tA JSON list of DimensionSpec (Notice that property is optional)\tno limitSpec\tSee LimitSpec\tno having\tSee Having\tno granularity\tA period granularity; See Period Granularities\tyes filter\tSee Filters\tno aggregations\tAggregations forms the input to Averagers; See Aggregations. The Aggregations must specify type, scale and size as follows for compressedBigDecimal Type "aggregations": [{"type": "compressedBigDecimal","name": "..","fieldName": "..","scale": [Numeric],"size": [Numeric]}. Please refer query example in Examples section.\tYes postAggregations\tSupports only aggregations as input; See Post Aggregations\tno intervals\tA JSON Object representing ISO-8601 Intervals. 
This defines the time ranges to run the query over.\tyes context\tAn additional JSON Object which can be used to specify certain flags.\tno "},{"title":"Examples","type":1,"pageTitle":"Compressed Big Decimal","url":"/docs/27.0.0/development/extensions-contrib/compressed-big-decimal#examples","content":"Consider the data as Date\tItem\tSaleAmount 20201208,ItemA,0.0 20201208,ItemB,10.000000000 20201208,ItemA,-1.000000000 20201208,ItemC,9999999999.000000000 20201208,ItemB,5000000000.000000005 20201208,ItemA,2.0 20201208,ItemD,0.0 IngestionSpec syntax: { "type": "index_parallel", "spec": { "dataSchema": { "dataSource": "invoices", "timestampSpec": { "column": "timestamp", "format": "yyyyMMdd" }, "dimensionsSpec": { "dimensions": [{ "type": "string", "name": "itemName" }] }, "metricsSpec": [{ "name": "saleAmount", "type": *"compressedBigDecimal"*, "fieldName": "saleAmount" }], "transformSpec": { "filter": null, "transforms": [] }, "granularitySpec": { "type": "uniform", "rollup": false, "segmentGranularity": "DAY", "queryGranularity": "none", "intervals": ["2020-12-08/2020-12-09"] } }, "ioConfig": { "type": "index_parallel", "inputSource": { "type": "local", "baseDir": "/home/user/sales/data/staging/invoice-data", "filter": "invoice-001.20201208.txt" }, "inputFormat": { "type": "tsv", "delimiter": ",", "skipHeaderRows": 0, "columns": [ "timestamp", "itemName", "saleAmount" ] } }, "tuningConfig": { "type": "index_parallel" } } } "},{"title":"Group By Query example","type":1,"pageTitle":"Compressed Big Decimal","url":"/docs/27.0.0/development/extensions-contrib/compressed-big-decimal#group-by-query--example","content":"Calculating sales groupBy all. Query syntax: { "queryType": "groupBy", "dataSource": "invoices", "granularity": "ALL", "dimensions": [ ], "aggregations": [ { "type": "compressedBigDecimal", "name": "saleAmount", "fieldName": "saleAmount", "scale": 9, "size": 3 } ], "intervals": [ "2020-01-08T00:00:00.000Z/P1D" ] } Result: [ { "version" : "v1", "timestamp" : "2020-12-08T00:00:00.000Z", "event" : { "revenue" : 15000000010.000000005 } } ] Had you used doubleSum instead of compressedBigDecimal the result would be [ { "timestamp" : "2020-12-08T00:00:00.000Z", "result" : { "revenue" : 1.500000001E10 } } ] As shown above the precision is lost and could lead to loss in money. "},{"title":"TimeSeries Query Example","type":1,"pageTitle":"Compressed Big Decimal","url":"/docs/27.0.0/development/extensions-contrib/compressed-big-decimal#timeseries-query-example","content":"Query syntax: { "queryType": "timeseries", "dataSource": "invoices", "granularity": "ALL", "aggregations": [ { "type": "compressedBigDecimal", "name": "revenue", "fieldName": "revenue", "scale": 9, "size": 3 } ], "filter": { "type": "not", "field": { "type": "selector", "dimension": "itemName", "value": "ItemD" } }, "intervals": [ "2020-12-08T00:00:00.000Z/P1D" ] } Result: [ { "timestamp" : "2020-12-08T00:00:00.000Z", "result" : { "revenue" : 15000000010.000000005 } } ] "},{"title":"Materialized View","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/materialized-view","content":"","keywords":""},{"title":"Materialized-view-maintenance","type":1,"pageTitle":"Materialized View","url":"/docs/27.0.0/development/extensions-contrib/materialized-view#materialized-view-maintenance","content":"In materialized-view-maintenance, dataSources user ingested are called "base-dataSource". 
For each base-dataSource, we can submit derivativeDataSource supervisors to create and maintain other dataSources which we called "derived-dataSource". The dimensions and metrics of derived-dataSources are the subset of base-dataSource's. The derivativeDataSource supervisor is used to keep the timeline of derived-dataSource consistent with base-dataSource. Each derivativeDataSource supervisor is responsible for one derived-dataSource. A sample derivativeDataSource supervisor spec is shown below: { "type": "derivativeDataSource", "baseDataSource": "wikiticker", "dimensionsSpec": { "dimensions": [ "isUnpatrolled", "metroCode", "namespace", "page", "regionIsoCode", "regionName", "user" ] }, "metricsSpec": [ { "name": "count", "type": "count" }, { "name": "added", "type": "longSum", "fieldName": "added" } ], "tuningConfig": { "type": "hadoop" } } Supervisor Configuration Field\tDescription\tRequiredType\tThe supervisor type. This should always be derivativeDataSource.\tyes baseDataSource\tThe name of base dataSource. This dataSource data should be already stored inside Druid, and the dataSource will be used as input data.\tyes dimensionsSpec\tSpecifies the dimensions of the data. These dimensions must be the subset of baseDataSource's dimensions.\tyes metricsSpec\tA list of aggregators. These metrics must be the subset of baseDataSource's metrics. See aggregations.\tyes tuningConfig\tTuningConfig must be HadoopTuningConfig. See Hadoop tuning config.\tyes dataSource\tThe name of this derived dataSource.\tno(default=baseDataSource-hashCode of supervisor) hadoopDependencyCoordinates\tA JSON array of Hadoop dependency coordinates that Druid will use, this property will override the default Hadoop coordinates. Once specified, Druid will look for those Hadoop dependencies from the location specified by druid.extensions.hadoopDependenciesDir\tno classpathPrefix\tClasspath that will be prepended for the Peon process.\tno context\tSee below.\tno Context Field\tDescription\tRequiredmaxTaskCount\tThe max number of tasks the supervisor can submit simultaneously.\tno(default=1) "},{"title":"Materialized-view-selection","type":1,"pageTitle":"Materialized View","url":"/docs/27.0.0/development/extensions-contrib/materialized-view#materialized-view-selection","content":"In materialized-view-selection, we implement a new query type view. When we request a view query, Druid will try its best to optimize the query based on query dataSource and intervals. A sample view query spec is shown below: { "queryType": "view", "query": { "queryType": "groupBy", "dataSource": "wikiticker", "granularity": "all", "dimensions": [ "user" ], "limitSpec": { "type": "default", "limit": 1, "columns": [ { "dimension": "added", "direction": "descending", "dimensionOrder": "numeric" } ] }, "aggregations": [ { "type": "longSum", "name": "added", "fieldName": "added" } ], "intervals": [ "2015-09-12/2015-09-13" ] } } There are 2 parts in a view query: Field\tDescription\tRequiredqueryType\tThe query type. This should always be view\tyes query\tThe real query of this view query. The real query must be groupBy, topN, or timeseries type.\tyes Note that Materialized View is currently designated as experimental. Please make sure the time of all processes are the same and increase monotonically. Otherwise, some unexpected errors may happen on query results. 
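As a sketch of how a view query might be issued, assuming the extension is loaded on the Broker and that view queries are accepted at the standard native query endpoint (/druid/v2); BROKER_IP and the port are placeholders, and the query is a simplified version of the sample above:

```python
# Sketch only: POST a simplified view query wrapping a groupBy to a Broker.
import requests

view_query = {
    "queryType": "view",
    "query": {
        "queryType": "groupBy",
        "dataSource": "wikiticker",
        "granularity": "all",
        "dimensions": ["user"],
        "aggregations": [
            {"type": "longSum", "name": "added", "fieldName": "added"}
        ],
        "intervals": ["2015-09-12/2015-09-13"],
    },
}

response = requests.post("http://BROKER_IP:8082/druid/v2", json=view_query)
response.raise_for_status()
print(response.json())
```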
"},{"title":"Aliyun OSS","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/aliyun-oss","content":"","keywords":""},{"title":"Installation","type":1,"pageTitle":"Aliyun OSS","url":"/docs/27.0.0/development/extensions-contrib/aliyun-oss#installation","content":"Use the pull-deps tool shipped with Druid to install the aliyun-oss-extensions extension, as described here on middle manager and historical nodes. java -classpath "{YOUR_DRUID_DIR}/lib/*" org.apache.druid.cli.Main tools pull-deps -c org.apache.druid.extensions.contrib:aliyun-oss-extensions:{YOUR_DRUID_VERSION} "},{"title":"Enabling","type":1,"pageTitle":"Aliyun OSS","url":"/docs/27.0.0/development/extensions-contrib/aliyun-oss#enabling","content":"After installation, add this aliyun-oss-extensions extension to druid.extensions.loadList in common.runtime.properties and then restart middle manager and historical nodes. "},{"title":"Configuration","type":1,"pageTitle":"Aliyun OSS","url":"/docs/27.0.0/development/extensions-contrib/aliyun-oss#configuration","content":"First add the following OSS configurations to common.runtime.properties Property\tDescription\tRequireddruid.oss.accessKey\tThe AccessKey ID of the account to be used to access the OSS bucket\tyes druid.oss.secretKey\tThe AccessKey Secret of the account to be used to access the OSS bucket\tyes druid.oss.endpoint\tThe endpoint URL of your OSS storage. If your Druid cluster is also hosted in the same region on Alibaba Cloud as the region of your OSS bucket, it's recommended to use the internal network endpoint url, so that any inbound and outbound traffic to the OSS bucket is free of charge.\tyes To use OSS as deep storage, add the following configurations: Property\tDescription\tRequireddruid.storage.type\tGlobal deep storage provider. Must be set to oss to make use of this extension.\tyes druid.storage.oss.bucket\tStorage bucket name.\tyes druid.storage.oss.prefix\tFolder where segments will be published to. druid/segments is recommended.\tNo If OSS is used as deep storage for segment files, it's also recommended saving index logs in the OSS too. To do this, add following configurations: Property\tDescription\tRequireddruid.indexer.logs.type\tGlobal deep storage provider. Must be set to oss to make use of this extension.\tyes druid.indexer.logs.oss.bucket\tThe bucket used to keep logs. It could be the same as druid.storage.oss.bucket\tyes druid.indexer.logs.oss.prefix\tFolder where log files will be published to. druid/logs is recommended.\tno "},{"title":"Reading data from OSS","type":1,"pageTitle":"Aliyun OSS","url":"/docs/27.0.0/development/extensions-contrib/aliyun-oss#reading-data-from-oss","content":"Currently, Web Console does not support ingestion from OSS, but it could be done by submitting an ingestion task with OSS's input source configuration. Below shows the configurations of OSS's input source. "},{"title":"OSS Input Source","type":1,"pageTitle":"Aliyun OSS","url":"/docs/27.0.0/development/extensions-contrib/aliyun-oss#oss-input-source","content":"property\tdescription\tRequiredtype\tThis should be oss.\tyes uris\tJSON array of URIs where OSS objects to be ingested are located. For example, oss://{your_bucket}/{source_file_path}\turis or prefixes or objects must be set prefixes\tJSON array of URI prefixes for the locations of OSS objects to be ingested. 
Empty objects starting with one of the given prefixes will be skipped.\turis or prefixes or objects must be set objects\tJSON array of OSS Objects to be ingested.\turis or prefixes or objects must be set properties\tProperties Object for overriding the default OSS configuration. See below for more information.\tno (defaults will be used if not given) OSS Object Property\tDescription\tDefault\tRequiredbucket\tName of the OSS bucket\tNone\tyes path\tThe path where data is located.\tNone\tyes Properties Object Property\tDescription\tDefault\tRequiredaccessKey\tThe Password Provider or plain text string of this OSS InputSource's access key\tNone\tyes secretKey\tThe Password Provider or plain text string of this OSS InputSource's secret key\tNone\tyes endpoint\tThe endpoint of this OSS InputSource\tNone\tno "},{"title":"Reading from a file","type":1,"pageTitle":"Aliyun OSS","url":"/docs/27.0.0/development/extensions-contrib/aliyun-oss#reading-from-a-file","content":"Say that the file rollup-data.json, which can be found under Druid's quickstart/tutorial directory, has been uploaded to a folder druid in your OSS bucket, the bucket for which your Druid is configured. In this case, the uris property of the OSS's input source can be used for reading, as shown: { "type" : "index_parallel", "spec" : { "dataSchema" : { "dataSource" : "rollup-tutorial-from-oss", "timestampSpec": { "column": "timestamp", "format": "iso" }, "dimensionsSpec" : { "dimensions" : [ "srcIP", "dstIP" ] }, "metricsSpec" : [ { "type" : "count", "name" : "count" }, { "type" : "longSum", "name" : "packets", "fieldName" : "packets" }, { "type" : "longSum", "name" : "bytes", "fieldName" : "bytes" } ], "granularitySpec" : { "type" : "uniform", "segmentGranularity" : "week", "queryGranularity" : "minute", "intervals" : ["2018-01-01/2018-01-03"], "rollup" : true } }, "ioConfig" : { "type" : "index_parallel", "inputSource" : { "type" : "oss", "uris" : [ "oss://{YOUR_BUCKET_NAME}/druid/rollup-data.json" ] }, "inputFormat" : { "type" : "json" }, "appendToExisting" : false }, "tuningConfig" : { "type" : "index_parallel", "maxRowsPerSegment" : 5000000, "maxRowsInMemory" : 25000 } } } By posting the above ingestion task spec to http://{YOUR_ROUTER_IP}:8888/druid/indexer/v1/task, an ingestion task will be created by the indexing service to ingest. "},{"title":"Reading files in folders","type":1,"pageTitle":"Aliyun OSS","url":"/docs/27.0.0/development/extensions-contrib/aliyun-oss#reading-files-in-folders","content":"If we want to read files in a same folder, we could use the prefixes property to specify the folder name where Druid could find input files instead of specifying file URIs one by one. ... "ioConfig" : { "type" : "index_parallel", "inputSource" : { "type" : "oss", "prefixes" : [ "oss://{YOUR_BUCKET_NAME}/2020", "oss://{YOUR_BUCKET_NAME}/2021" ] }, "inputFormat" : { "type" : "json" }, "appendToExisting" : false } ... The spec above tells the ingestion task to read all files under 2020 and 2021 folders. "},{"title":"Reading from other buckets","type":1,"pageTitle":"Aliyun OSS","url":"/docs/27.0.0/development/extensions-contrib/aliyun-oss#reading-from-other-buckets","content":"If you want to read from files in buckets which are different from the bucket Druid is configured, use objects property of OSS's InputSource for task submission as below: ... 
"ioConfig" : { "type" : "index_parallel", "inputSource" : { "type" : "oss", "objects" : [ {"bucket": "YOUR_BUCKET_NAME", "path": "druid/rollup-data.json"} ] }, "inputFormat" : { "type" : "json" }, "appendToExisting" : false } ... "},{"title":"Reading with customized accessKey","type":1,"pageTitle":"Aliyun OSS","url":"/docs/27.0.0/development/extensions-contrib/aliyun-oss#reading-with-customized-accesskey","content":"If the default druid.oss.accessKey is not able to access a bucket, properties could be used to customize these secret information as below: ... "ioConfig" : { "type" : "index_parallel", "inputSource" : { "type" : "oss", "objects" : [ {"bucket": "YOUR_BUCKET_NAME", "path": "druid/rollup-data.json"} ], "properties": { "endpoint": "YOUR_ENDPOINT_OF_BUCKET", "accessKey": "YOUR_ACCESS_KEY", "secretKey": "YOUR_SECRET_KEY" } }, "inputFormat" : { "type" : "json" }, "appendToExisting" : false } ... This properties could be applied to any of uris, objects, prefixes property above. "},{"title":"Troubleshooting","type":1,"pageTitle":"Aliyun OSS","url":"/docs/27.0.0/development/extensions-contrib/aliyun-oss#troubleshooting","content":"When using OSS as deep storage or reading from OSS, the most problems that users will encounter are related to OSS permission. Please refer to the official OSS permission troubleshooting document to find a solution. "},{"title":"MM-less Druid in K8s","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/k8s-jobs","content":"","keywords":""},{"title":"How it works","type":1,"pageTitle":"MM-less Druid in K8s","url":"/docs/27.0.0/development/extensions-contrib/k8s-jobs#how-it-works","content":"The K8s extension builds a pod spec for each task using the specified pod adapter. All jobs are natively restorable, they are decoupled from the Druid deployment, thus restarting pods or doing upgrades has no affect on tasks in flight. They will continue to run and when the overlord comes back up it will start tracking them again. "},{"title":"Configuration","type":1,"pageTitle":"MM-less Druid in K8s","url":"/docs/27.0.0/development/extensions-contrib/k8s-jobs#configuration","content":"To use this extension please make sure to includedruid-kubernetes-overlord-extensions in the extensions load list for your overlord process. The extension uses druid.indexer.runner.capacity to limit the number of k8s jobs in flight. A good initial value for this would be the sum of the total task slots of all the middle managers you were running before switching to K8s based ingestion. The K8s task runner uses one thread per Job that is created, so setting this number too large can cause memory issues on the overlord. Additionally set the variable druid.indexer.runner.namespace to the namespace in which you are running druid. Other configurations required are:druid.indexer.runner.type: k8s and druid.indexer.task.encapsulatedTask: true "},{"title":"Pod Adapters","type":1,"pageTitle":"MM-less Druid in K8s","url":"/docs/27.0.0/development/extensions-contrib/k8s-jobs#pod-adapters","content":"The logic defining how the pod template is built for your Kubernetes Job depends on which pod adapter you have specified. 
"},{"title":"Overlord Single Container Pod Adapter/Overlord Multi Container Pod Adapter","type":1,"pageTitle":"MM-less Druid in K8s","url":"/docs/27.0.0/development/extensions-contrib/k8s-jobs#overlord-single-container-pod-adapteroverlord-multi-container-pod-adapter","content":"The overlord single container pod adapter takes the podSpec of your Overlord pod and creates a kubernetes job from this podSpec. This is the default pod adapter implementation, to explicitly enable it you can specify the runtime property druid.indexer.runner.k8s.adapter.type: overlordSingleContainer The overlord multi container pod adapter takes the podSpec of your Overlord pod and creates a kubernetes job from this podSpec. It uses kubexit to manage dependency ordering between the main container that runs your druid peon and other sidecars defined in the Overlord pod spec. Thus if you have sidecars such as Splunk or Istio it will be able to handle them. To enable this pod adapter you can specify the runtime property druid.indexer.runner.k8s.adapter.type: overlordMultiContainer For the sidecar support to work for the multi container pod adapter, your entry point / command in docker must be explicitly defined your spec. You can't have something like this: Dockerfile:ENTRYPOINT: ["foo.sh"] and in your sidecar specs: name: foo args: - arg1 - arg2 That will not work, because we cannot decipher what your command is, the extension needs to know it explicitly. *Even for sidecars like Istio which are dynamically created by the service mesh, this needs to happen. Instead do the following: You can keep your Dockerfile the same but you must have a sidecar spec like so: name: foo command: foo.sh args: - arg1 - arg2 For both of these adapters, you can add optional labels to your K8s jobs / pods if you need them by using the following configuration:druid.indexer.runner.labels: '{"key":"value"}'Annotations are the same with:druid.indexer.runner.annotations: '{"key":"value"}' All other configurations you had for the middle manager tasks must be moved under the overlord with one caveat, you must specify javaOpts as an array:druid.indexer.runner.javaOptsArray, druid.indexer.runner.javaOpts is no longer supported. If you are running without a middle manager you need to also use druid.processing.intermediaryData.storage.type=deepstore "},{"title":"Custom Template Pod Adapter","type":1,"pageTitle":"MM-less Druid in K8s","url":"/docs/27.0.0/development/extensions-contrib/k8s-jobs#custom-template-pod-adapter","content":"The custom template pod adapter allows you to specify a pod template file per task type for more flexibility on how to define your pods. This adapter expects a Pod Template to be available on the overlord's file system. This pod template is used as the base of the pod spec for the Kubernetes Job. You can override things like labels, environment variables, resources, annotation, or even the base image with this template. To enable this pod adapter you can specify the runtime property druid.indexer.runner.k8s.adapter.type: customTemplateAdapter The base pod template must be specified as the runtime property druid.indexer.runner.k8s.podTemplate.base: /path/to/basePodSpec.yaml Task specific pod templates can be specified as the runtime property druid.indexer.runner.k8s.podTemplate.{taskType}: /path/to/taskSpecificPodSpec.yaml where {taskType} is the name of the task type i.e index_parallel The following is an example Pod Template that uses the regular druid docker image. 
apiVersion: "v1" kind: "PodTemplate" template: metadata: annotations: sidecar.istio.io/proxyCPU: "512m" # to handle a injected istio sidecar labels: app.kubernetes.io/name: "druid-realtime-backend" spec: affinity: {} containers: - command: - sh - -c - | /peon.sh /druid/data 1 env: - name: CUSTOM_ENV_VARIABLE value: "hello" image: apache/druid:27.0.0 name: main ports: - containerPort: 8091 name: druid-tls-port protocol: TCP - containerPort: 8100 name: druid-port protocol: TCP resources: limits: cpu: "1" memory: 2400M requests: cpu: "1" memory: 2400M volumeMounts: - mountPath: /opt/druid/conf/druid/cluster/master/coordinator-overlord # runtime props are still mounted in this location because that's where peon.sh looks for configs name: nodetype-config-volume readOnly: true - mountPath: /druid/data name: data-volume - mountPath: /druid/deepstorage name: deepstorage-volume restartPolicy: "Never" securityContext: fsGroup: 1000 runAsGroup: 1000 runAsUser: 1000 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - configMap: defaultMode: 420 name: druid-tiny-cluster-peons-config name: nodetype-config-volume - emptyDir: {} name: data-volume - emptyDir: {} name: deepstorage-volume The below runtime properties need to be passed to the Job's peon process. druid.port=8100 (what port the peon should run on) druid.peon.mode=remote druid.service=druid/peon (for metrics reporting) druid.indexer.task.baseTaskDir=/druid/data (this should match the argument to the ./peon.sh run command in the PodTemplate) druid.indexer.runner.type=k8s druid.indexer.task.encapsulatedTask=true Any runtime property or JVM config used by the peon process can also be passed. E.G. below is a example of a ConfigMap that can be used to generate the nodetype-config-volume mount in the above template. kind: ConfigMap metadata: name: druid-tiny-cluster-peons-config namespace: default apiVersion: v1 data: jvm.config: |- -server -XX:MaxDirectMemorySize=1000M -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Dlog4j.debug -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager -Djava.io.tmpdir=/druid/data -Xmx1024M -Xms1024M log4j2.xml: |- <?xml version="1.0" encoding="UTF-8" ?> <Configuration status="WARN"> <Appenders> <Console name="Console" target="SYSTEM_OUT"> <PatternLayout pattern="%d{ISO8601} %p [%t] %c - %m%n"/> </Console> </Appenders> <Loggers> <Root level="info"> <AppenderRef ref="Console"/> </Root> </Loggers> </Configuration> runtime.properties: | druid.port=8100 druid.service=druid/peon druid.server.http.numThreads=5 druid.indexer.task.baseTaskDir=/druid/data druid.indexer.runner.type=k8s druid.peon.mode=remote druid.indexer.task.encapsulatedTask=true "},{"title":"Properties","type":1,"pageTitle":"MM-less Druid in K8s","url":"/docs/27.0.0/development/extensions-contrib/k8s-jobs#properties","content":"Property\tPossible Values\tDescription\tDefault\trequireddruid.indexer.runner.debugJobs\tboolean\tClean up K8s jobs after tasks complete.\tFalse\tNo druid.indexer.runner.sidecarSupport\tboolean\tDeprecated, specify adapter type as runtime property druid.indexer.runner.k8s.adapter.type: overlordMultiContainer instead. 
If your overlord pod has sidecars, this will attempt to start the task with the same sidecars as the overlord pod.\tFalse\tNo druid.indexer.runner.primaryContainerName\tString\tIf running with sidecars, the primaryContainerName should be that of your druid container like druid-overlord.\tFirst container in podSpec list\tNo druid.indexer.runner.kubexitImage\tString\tUsed kubexit project to help shutdown sidecars when the main pod completes. Otherwise jobs with sidecars never terminate.\tkarlkfi/kubexit:latest\tNo druid.indexer.runner.disableClientProxy\tboolean\tUse this if you have a global http(s) proxy and you wish to bypass it.\tfalse\tNo druid.indexer.runner.maxTaskDuration\tDuration\tMax time a task is allowed to run for before getting killed\tPT4H\tNo druid.indexer.runner.taskCleanupDelay\tDuration\tHow long do jobs stay around before getting reaped from K8s\tP2D\tNo druid.indexer.runner.taskCleanupInterval\tDuration\tHow often to check for jobs to be reaped\tPT10M\tNo druid.indexer.runner.K8sjobLaunchTimeout\tDuration\tHow long to wait to launch a K8s task before marking it as failed, on a resource constrained cluster it may take some time.\tPT1H\tNo druid.indexer.runner.javaOptsArray\tJsonArray\tjava opts for the task.\t-Xmx1g\tNo druid.indexer.runner.labels\tJsonObject\tAdditional labels you want to add to peon pod\t{}\tNo druid.indexer.runner.annotations\tJsonObject\tAdditional annotations you want to add to peon pod\t{}\tNo druid.indexer.runner.peonMonitors\tJsonArray\tOverrides druid.monitoring.monitors. Use this property if you don't want to inherit monitors from the Overlord.\t[]\tNo druid.indexer.runner.graceTerminationPeriodSeconds\tLong\tNumber of seconds you want to wait after a sigterm for container lifecycle hooks to complete. Keep at a smaller value if you want tasks to hold locks for shorter periods.\tPT30S (K8s default)\tNo druid.indexer.runner.capacity\tInteger\tNumber of concurrent jobs that can be sent to Kubernetes.\t2147483647\tNo "},{"title":"Gotchas","type":1,"pageTitle":"MM-less Druid in K8s","url":"/docs/27.0.0/development/extensions-contrib/k8s-jobs#gotchas","content":"All Druid Pods belonging to one Druid cluster must be inside the same Kubernetes namespace. You must have a role binding for the overlord's service account that provides the needed permissions for interacting with Kubernetes. An example spec could be: kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: <druid-namespace> name: druid-k8s-task-scheduler rules: - apiGroups: ["batch"] resources: ["jobs"] verbs: ["get", "watch", "list", "delete", "create"] - apiGroups: [""] resources: ["pods", "pods/log"] verbs: ["get", "watch", "list", "delete", "create"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: druid-k8s-binding namespace: <druid-namespace> subjects: - kind: ServiceAccount name: <druid-overlord-k8s-service-account> namespace: <druid-namespace> roleRef: kind: Role name: druid-k8s-task-scheduler apiGroup: rbac.authorization.k8s.io "},{"title":"Moment Sketches for Approximate Quantiles module","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/momentsketch-quantiles","content":"","keywords":""},{"title":"Aggregator","type":1,"pageTitle":"Moment Sketches for Approximate Quantiles module","url":"/docs/27.0.0/development/extensions-contrib/momentsketch-quantiles#aggregator","content":"The result of the aggregation is a momentsketch that is the union of all sketches either built from raw data or read from the segments. 
The momentSketch aggregator operates over raw data while the momentSketchMerge aggregator should be used when aggregating precomputed sketches. { "type" : <aggregator_type>, "name" : <output_name>, "fieldName" : <input_name>, "k" : <int>, "compress" : <boolean> } property	description	required?type	Type of aggregator desired. Either "momentSketch" or "momentSketchMerge"	yes name	A String for the output (result) name of the calculation.	yes fieldName	A String for the name of the input field (can contain sketches or raw numeric values).	yes k	Parameter that determines the accuracy and size of the sketch. Higher k means higher accuracy but more space to store sketches. Usable range is generally [3,15]	no, defaults to 13. compress	Flag for whether the aggregator compresses numeric values using arcsinh. Can improve robustness to skewed and long-tailed distributions, but reduces accuracy slightly on more uniform distributions.	no, defaults to true "},{"title":"Post Aggregators","type":1,"pageTitle":"Moment Sketches for Approximate Quantiles module","url":"/docs/27.0.0/development/extensions-contrib/momentsketch-quantiles#post-aggregators","content":"Users can query for a set of quantiles using the momentSketchSolveQuantiles post-aggregator on the sketches created by the momentSketch or momentSketchMerge aggregators. { "type" : "momentSketchSolveQuantiles", "name" : <output_name>, "field" : <reference to moment sketch>, "fractions" : <array of doubles in [0,1]> } Users can also query for the min/max of a distribution: { "type" : "momentSketchMin" | "momentSketchMax", "name" : <output_name>, "field" : <reference to moment sketch> } "},{"title":"Example","type":1,"pageTitle":"Moment Sketches for Approximate Quantiles module","url":"/docs/27.0.0/development/extensions-contrib/momentsketch-quantiles#example","content":"As an example of a query with sketches pre-aggregated at ingestion time, one could set up the following aggregator at ingest: { "type": "momentSketch", "name": "sketch", "fieldName": "value", "k": 10, "compress": true } and make queries using the following aggregator + post-aggregator: { "aggregations": [{ "type": "momentSketchMerge", "name": "sketch", "fieldName": "sketch", "k": 10, "compress": true }], "postAggregations": [ { "type": "momentSketchSolveQuantiles", "name": "quantiles", "fractions": [0.1, 0.5, 0.9], "field": { "type": "fieldAccess", "fieldName": "sketch" } }, { "type": "momentSketchMin", "name": "min", "field": { "type": "fieldAccess", "fieldName": "sketch" } }] } "},{"title":"OpenTSDB Emitter","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/opentsdb-emitter","content":"","keywords":""},{"title":"Introduction","type":1,"pageTitle":"OpenTSDB Emitter","url":"/docs/27.0.0/development/extensions-contrib/opentsdb-emitter#introduction","content":"This extension emits Druid metrics to OpenTSDB over HTTP (using the Jersey client). It emits only service metric events to OpenTSDB (see Druid metrics for a list of metrics). "},{"title":"Configuration","type":1,"pageTitle":"OpenTSDB Emitter","url":"/docs/27.0.0/development/extensions-contrib/opentsdb-emitter#configuration","content":"All the configuration parameters for the OpenTSDB emitter are under druid.emitter.opentsdb. 
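For orientation, a minimal runtime.properties sketch for this emitter is shown below; the host and tuning values are placeholders, and the line druid.emitter=opentsdb assumes the emitter extension has already been loaded and selected. Each property is described in the table that follows.

# Hedged example only; adjust values for your environment.
druid.emitter=opentsdb
druid.emitter.opentsdb.host=opentsdb.example.com
druid.emitter.opentsdb.port=4242
druid.emitter.opentsdb.flushThreshold=100
druid.emitter.opentsdb.maxQueueSize=1000
druid.emitter.opentsdb.consumeDelay=10000
druid.emitter.opentsdb.namespacePrefix=druid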
property\tdescription\trequired?\tdefaultdruid.emitter.opentsdb.host\tThe host of the OpenTSDB server.\tyes\tnone druid.emitter.opentsdb.port\tThe port of the OpenTSDB server.\tyes\tnone druid.emitter.opentsdb.connectionTimeout\tJersey client connection timeout(in milliseconds).\tno\t2000 druid.emitter.opentsdb.readTimeout\tJersey client read timeout(in milliseconds).\tno\t2000 druid.emitter.opentsdb.flushThreshold\tQueue flushing threshold.(Events will be sent as one batch)\tno\t100 druid.emitter.opentsdb.maxQueueSize\tMaximum size of the queue used to buffer events.\tno\t1000 druid.emitter.opentsdb.consumeDelay\tQueue consuming delay(in milliseconds). Actually, we use ScheduledExecutorService to schedule consuming events, so this consumeDelay means the delay between the termination of one execution and the commencement of the next. If your druid processes produce metric events fast, then you should decrease this consumeDelay or increase the maxQueueSize.\tno\t10000 druid.emitter.opentsdb.metricMapPath\tJSON file defining the desired metrics and dimensions for every Druid metric\tno\t./src/main/resources/defaultMetrics.json druid.emitter.opentsdb.namespacePrefix\tOptional (string) prefix for metric names, for example the default metric name query.count with a namespacePrefix set to druid would be emitted as druid.query.count\tno\tnull "},{"title":"Druid to OpenTSDB Event Converter","type":1,"pageTitle":"OpenTSDB Emitter","url":"/docs/27.0.0/development/extensions-contrib/opentsdb-emitter#druid-to-opentsdb-event-converter","content":"The OpenTSDB emitter will send only the desired metrics and dimensions which is defined in a JSON file. If the user does not specify their own JSON file, a default file is used. All metrics are expected to be configured in the JSON file. Metrics which are not configured will be logged. Desired metrics and dimensions is organized using the following schema:<druid metric name> : [ <dimension list> ] e.g. "query/time": [ "dataSource", "type" ] For most use-cases, the default configuration is sufficient. "},{"title":"Prometheus Emitter","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/prometheus","content":"","keywords":""},{"title":"Introduction","type":1,"pageTitle":"Prometheus Emitter","url":"/docs/27.0.0/development/extensions-contrib/prometheus#introduction","content":"This extension exposes Druid metrics for collection by a Prometheus server (https://prometheus.io/). Emitter is enabled by setting druid.emitter=prometheus configs or include prometheus in the composing emitter list. "},{"title":"Configuration","type":1,"pageTitle":"Prometheus Emitter","url":"/docs/27.0.0/development/extensions-contrib/prometheus#configuration","content":"All the configuration parameters for the Prometheus emitter are under druid.emitter.prometheus. property\tdescription\trequired?\tdefaultdruid.emitter.prometheus.strategy\tThe strategy to expose prometheus metrics. Should be one of exporter and pushgateway. Default strategy exporter would expose metrics for scraping purpose. Peon tasks (short-lived jobs) should use pushgateway strategy.\tyes\texporter druid.emitter.prometheus.port\tThe port on which to expose the prometheus HTTPServer. Required if using exporter strategy.\tno\tnone druid.emitter.prometheus.namespace\tOptional metric namespace. 
Must match the regex [a-zA-Z_:][a-zA-Z0-9_:]*\tno\tdruid druid.emitter.prometheus.dimensionMapPath\tJSON file defining the Prometheus metric type, desired dimensions, help text, and conversionFactor for every Druid metric.\tno\tDefault mapping provided. See below. druid.emitter.prometheus.addHostAsLabel\tFlag to include the hostname as a prometheus label.\tno\tfalse druid.emitter.prometheus.addServiceAsLabel\tFlag to include the druid service name (e.g. druid/broker, druid/coordinator, etc.) as a prometheus label.\tno\tfalse druid.emitter.prometheus.pushGatewayAddress\tPushgateway address. Required if using pushgateway strategy.\tno\tnone druid.emitter.prometheus.flushPeriod\tEmit metrics to Pushgateway every flushPeriod seconds. Required if pushgateway strategy is used.\tno\t15 "},{"title":"Ports for colocated Druid processes","type":1,"pageTitle":"Prometheus Emitter","url":"/docs/27.0.0/development/extensions-contrib/prometheus#ports-for-colocated-druid-processes","content":"In certain instances, Druid processes may be colocated on the same host. For example, the Broker and Router may share the same server. Other colocated processes include the Historical and MiddleManager or the Coordinator and Overlord. When you have colocated processes, specify druid.emitter.prometheus.port separately for each process on each host. For example, even if the Broker and Router share the same host, the Broker runtime properties and the Router runtime properties each need to list druid.emitter.prometheus.port, and the port value for both must be different. "},{"title":"Override properties for Peon Tasks","type":1,"pageTitle":"Prometheus Emitter","url":"/docs/27.0.0/development/extensions-contrib/prometheus#override-properties-for-peon-tasks","content":"Peon tasks are created dynamically by middle managers and have dynamic host and port addresses. Since the exporter strategy allows Prometheus to read only from a fixed address, it cannot be used for peon tasks. So, these tasks need to be configured to use pushgateway strategy to push metrics from Druid to prometheus gateway. If this emitter is configured to use exporter strategy globally, some of the above configurations need to be overridden in the middle manager so that spawned peon tasks can still use the pushgateway strategy. # # Override global prometheus emitter configuration for peon tasks to use `pushgateway` strategy. # Other configurations can also be overridden by adding `druid.indexer.fork.property.` prefix to above configuration properties. # druid.indexer.fork.property.druid.emitter.prometheus.strategy=pushgateway druid.indexer.fork.property.druid.emitter.prometheus.pushGatewayAddress=http://<push-gateway-address> "},{"title":"Metric names","type":1,"pageTitle":"Prometheus Emitter","url":"/docs/27.0.0/development/extensions-contrib/prometheus#metric-names","content":"All metric names and labels are reformatted to match Prometheus standards. For names: all characters which are not alphanumeric, underscores, or colons (matching [^a-zA-Z_:][^a-zA-Z0-9_:]*) are replaced with _For labels: all characters which are not alphanumeric or underscores (matching [^a-zA-Z0-9_][^a-zA-Z0-9_]*) are replaced with _ "},{"title":"Metric mapping","type":1,"pageTitle":"Prometheus Emitter","url":"/docs/27.0.0/development/extensions-contrib/prometheus#metric-mapping","content":"Each metric to be collected by Prometheus must specify a type, one of [timer, counter, guage]. Prometheus Emitter expects this mapping to be provided as a JSON file. 
Additionally, this mapping specifies which dimensions should be included for each metric. Prometheus expects histogram timers to use Seconds as the base unit. Timers which do not use seconds as a base unit can use the conversionFactor to set the base time unit. If the user does not specify their own JSON file, a default mapping is used. All metrics are expected to be mapped. Metrics which are not mapped will not be tracked. Prometheus metric path is organized using the following schema: <druid metric name> : { "dimensions" : <dimension list>, "type" : <timer|counter|gauge>, "conversionFactor": <conversionFactor>, "help" : <help text> } For example: "query/time" : { "dimensions" : ["dataSource", "type"], "type" : "timer", "conversionFactor": 1000.0, "help": "Seconds taken to complete a query." } For metrics which are emitted from multiple services with different dimensions, the metric name is prefixed with the service name. For example: "coordinator-segment/count" : { "dimensions" : ["dataSource"], "type" : "gauge" }, "historical-segment/count" : { "dimensions" : ["dataSource", "tier", "priority"], "type" : "gauge" } For most use cases, the default mapping is sufficient. "},{"title":"Druid Redis Cache","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/redis-cache","content":"","keywords":""},{"title":"Installation","type":1,"pageTitle":"Druid Redis Cache","url":"/docs/27.0.0/development/extensions-contrib/redis-cache#installation","content":"Use pull-deps tool shipped with Druid to install this extension on broker, historical and middle manager nodes. java -classpath "druid_dir/lib/*" org.apache.druid.cli.Main tools pull-deps -c org.apache.druid.extensions.contrib:druid-redis-cache:{VERSION} "},{"title":"Enabling","type":1,"pageTitle":"Druid Redis Cache","url":"/docs/27.0.0/development/extensions-contrib/redis-cache#enabling","content":"To enable this extension after installation, include this druid-redis-cache extensionto enable cache on broker nodes, follow broker caching docs to set related propertiesto enable cache on historical nodes, follow historical caching docs to set related propertiesto enable cache on middle manager nodes, follow peon caching docs to set related propertiesset druid.cache.type to redisadd the following properties "},{"title":"Configuration","type":1,"pageTitle":"Druid Redis Cache","url":"/docs/27.0.0/development/extensions-contrib/redis-cache#configuration","content":""},{"title":"Cluster mode","type":1,"pageTitle":"Druid Redis Cache","url":"/docs/27.0.0/development/extensions-contrib/redis-cache#cluster-mode","content":"To utilize a redis cluster, following properties must be set. Note: some redis cloud service providers provide redis cluster service via a redis proxy, for these clusters, please follow the Standalone mode configuration below. Properties\tDescription\tDefault\tRequireddruid.cache.cluster.nodes\tRedis nodes in a cluster, represented in comma separated string. See example below\tNone\tyes druid.cache.cluster.maxRedirection\tMax retry count\t5\tno Example # a typical redis cluster with 6 nodes druid.cache.cluster.nodes=127.0.0.1:7001,127.0.0.1:7002,127.0.0.1:7003,127.0.0.1:7004,127.0.0.1:7005,127.0.0.1:7006 "},{"title":"Standalone mode","type":1,"pageTitle":"Druid Redis Cache","url":"/docs/27.0.0/development/extensions-contrib/redis-cache#standalone-mode","content":"To use a standalone redis, following properties must be set. 
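As with cluster mode, a short example may help; the host and port below are placeholders, and the individual properties are described in the table that follows.

# Hedged example of a standalone Redis cache configuration
druid.cache.type=redis
druid.cache.host=127.0.0.1
druid.cache.port=6379
druid.cache.database=0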
Properties\tDescription\tDefault\tRequireddruid.cache.host\tRedis server host\tNone\tyes druid.cache.port\tRedis server port\tNone\tyes druid.cache.database\tRedis database index\t0\tno Note: if both druid.cache.cluster.nodes and druid.cache.host are provided, cluster mode is preferred. "},{"title":"Shared Properties","type":1,"pageTitle":"Druid Redis Cache","url":"/docs/27.0.0/development/extensions-contrib/redis-cache#shared-properties","content":"Except for the properties above, there are some extra properties which can be customized to meet different needs. Properties\tDescription\tDefault\tRequireddruid.cache.password\tPassword to access redis server/cluster\tNone\tno druid.cache.expiration\tExpiration for cache entries\tP1D\tno druid.cache.timeout\tTimeout for connecting to Redis and reading entries from Redis\tPT2S\tno druid.cache.maxTotalConnections\tMax total connections to Redis\t8\tno druid.cache.maxIdleConnections\tMax idle connections to Redis\t8\tno druid.cache.minIdleConnections\tMin idle connections to Redis\t0\tno For druid.cache.expiration and druid.cache.timeout properties, values can be format of Period or a number in milliseconds. # Period format(recomended) # cache expires after 1 hour druid.cache.expiration=PT1H # or in number(milliseconds) format # 1 hour = 3_600_000 milliseconds druid.cache.expiration=3600000 "},{"title":"Metrics","type":1,"pageTitle":"Druid Redis Cache","url":"/docs/27.0.0/development/extensions-contrib/redis-cache#metrics","content":"In addition to the normal cache metrics, the redis cache implementation also reports the following in both total and delta Metric\tDescription\tNormal valuequery/cache/redis/*/requests\tCount of requests to redis cache\twhatever request to redis will increase request count by 1 "},{"title":"Microsoft SQLServer","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/sqlserver","content":"","keywords":""},{"title":"Setting up SQLServer","type":1,"pageTitle":"Microsoft SQLServer","url":"/docs/27.0.0/development/extensions-contrib/sqlserver#setting-up-sqlserver","content":"Install Microsoft SQLServer Create a druid database and user Create the druid user Microsoft SQL Server Management Studio - Security - Logins - New Login... Create a druid user, enter diurd when prompted for the password. Create a druid database owned by the user we just created Databases - New Database Database Name: druid, Owner: druid Add the Microsoft JDBC library to the Druid classpath To ensure the com.microsoft.sqlserver.jdbc.SQLServerDriver class is loaded you will have to add the appropriate Microsoft JDBC library (sqljdbc*.jar) to the Druid classpath.For instance, if all jar files in your "druid/lib" directory are automatically added to your Druid classpath, then manually download the Microsoft JDBC drivers from ( https://www.microsoft.com/en-ca/download/details.aspx?id=11774) and drop it into my druid/lib directory. Configure your Druid metadata storage extension: Add the following parameters to your Druid configuration, replacing <host>with the location (host name and port) of the database. 
druid.metadata.storage.type=sqlserver druid.metadata.storage.connector.connectURI=jdbc:sqlserver://<host>;databaseName=druid druid.metadata.storage.connector.user=druid druid.metadata.storage.connector.password=diurd "},{"title":"StatsD Emitter","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/statsd","content":"","keywords":""},{"title":"Introduction","type":1,"pageTitle":"StatsD Emitter","url":"/docs/27.0.0/development/extensions-contrib/statsd#introduction","content":"This extension emits druid metrics to a StatsD server. (https://github.com/etsy/statsd) (https://github.com/armon/statsite) "},{"title":"Configuration","type":1,"pageTitle":"StatsD Emitter","url":"/docs/27.0.0/development/extensions-contrib/statsd#configuration","content":"All the configuration parameters for the StatsD emitter are under druid.emitter.statsd. property\tdescription\trequired?\tdefaultdruid.emitter.statsd.hostname\tThe hostname of the StatsD server.\tyes\tnone druid.emitter.statsd.port\tThe port of the StatsD server.\tyes\tnone druid.emitter.statsd.prefix\tOptional metric name prefix.\tno\t"" druid.emitter.statsd.separator\tMetric name separator\tno\t. druid.emitter.statsd.includeHost\tFlag to include the hostname as part of the metric name.\tno\tfalse druid.emitter.statsd.dimensionMapPath\tJSON file defining the StatsD type, and desired dimensions for every Druid metric\tno\tDefault mapping provided. See below. druid.emitter.statsd.blankHolder\tThe blank character replacement as StatsD does not support path with blank character\tno\t"-" druid.emitter.statsd.dogstatsd\tFlag to enable DogStatsD support. Causes dimensions to be included as tags, not as a part of the metric name. convertRange fields will be ignored.\tno\tfalse druid.emitter.statsd.dogstatsdConstantTags\tIf druid.emitter.statsd.dogstatsd is true, the tags in the JSON list of strings will be sent with every event.\tno\t[] druid.emitter.statsd.dogstatsdServiceAsTag\tIf druid.emitter.statsd.dogstatsd and druid.emitter.statsd.dogstatsdServiceAsTag are true, druid service (e.g. druid/broker, druid/coordinator, etc) is reported as a tag (e.g. druid_service:druid/broker) instead of being included in metric name (e.g. druid.broker.query.time) and druid is used as metric prefix (e.g. druid.query.time).\tno\tfalse druid.emitter.statsd.dogstatsdEvents\tIf druid.emitter.statsd.dogstatsd and druid.emitter.statsd.dogstatsdEvents are true, Alert events are reported to DogStatsD.\tno\tfalse "},{"title":"Druid to StatsD Event Converter","type":1,"pageTitle":"StatsD Emitter","url":"/docs/27.0.0/development/extensions-contrib/statsd#druid-to-statsd-event-converter","content":"Each metric sent to StatsD must specify a type, one of [timer, counter, guage]. StatsD Emitter expects this mapping to be provided as a JSON file. Additionally, this mapping specifies which dimensions should be included for each metric. StatsD expects that metric values be integers. Druid emits some metrics with values between the range 0 and 1. To accommodate these metrics they are converted into the range 0 to 100. This conversion can be enabled by setting the optional "convertRange" field true in the JSON mapping file. If the user does not specify their own JSON file, a default mapping is used. All metrics are expected to be mapped. Metrics which are not mapped will log an error. 
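For instance, using the mapping schema described next, a metric that Druid reports in the 0 to 1 range could opt into this conversion as follows; the metric name and dimensions here are illustrative and not necessarily part of the default mapping.

"segment/usedPercent" : { "dimensions" : ["dataSource", "tier"], "type" : "gauge", "convertRange" : true }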
StatsD metric path is organized using the following schema: <druid metric name> : { "dimensions" : <dimension list>, "type" : <StatsD type>, "convertRange" : true/false} e.g. "query/time" : { "dimensions" : ["dataSource", "type"], "type" : "timer"} For metrics which are emitted from multiple services with different dimensions, the metric name is prefixed with the service name. e.g. "coordinator-segment/count" : { "dimensions" : ["dataSource"], "type" : "gauge" }, "historical-segment/count" : { "dimensions" : ["dataSource", "tier", "priority"], "type" : "gauge" } For most use-cases, the default mapping is sufficient. "},{"title":"T-Digest Quantiles Sketch module","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/tdigestsketch-quantiles","content":"","keywords":""},{"title":"Aggregator","type":1,"pageTitle":"T-Digest Quantiles Sketch module","url":"/docs/27.0.0/development/extensions-contrib/tdigestsketch-quantiles#aggregator","content":"The result of the aggregation is a T-Digest sketch that is built by ingesting numeric values from the raw data or by combining pre-generated T-Digest sketches. { "type" : "tDigestSketch", "name" : <output_name>, "fieldName" : <metric_name>, "compression": <parameter that controls size and accuracy> } Example: { "type": "tDigestSketch", "name": "sketch", "fieldName": "session_duration", "compression": 200 } { "type": "tDigestSketch", "name": "combined_sketch", "fieldName": <input-column>, "compression": 200 } property	description	required?type	This String should always be "tDigestSketch"	yes name	A String for the output (result) name of the calculation.	yes fieldName	A String for the name of the input field containing raw numeric values or pre-generated T-Digest sketches.	yes compression	Parameter that determines the accuracy and size of the sketch. Higher compression means higher accuracy but more space to store sketches.	no, defaults to 100 "},{"title":"Post Aggregators","type":1,"pageTitle":"T-Digest Quantiles Sketch module","url":"/docs/27.0.0/development/extensions-contrib/tdigestsketch-quantiles#post-aggregators","content":"Quantiles This returns an array of quantiles corresponding to a given array of fractions. { "type" : "quantilesFromTDigestSketch", "name": <output name>, "field" : <post aggregator that refers to a TDigestSketch (fieldAccess or another post aggregator)>, "fractions" : <array of fractions> } property	description	required?type	This String should always be "quantilesFromTDigestSketch"	yes name	A String for the output (result) name of the calculation.	yes field	A field reference pointing to the aggregated/combined T-Digest sketch.	yes fractions	Non-empty array of fractions between 0 and 1	yes Example: { "queryType": "groupBy", "dataSource": "test_datasource", "granularity": "ALL", "dimensions": [], "aggregations": [{ "type": "tDigestSketch", "name": "merged_sketch", "fieldName": "ingested_sketch", "compression": 200 }], "postAggregations": [{ "type": "quantilesFromTDigestSketch", "name": "quantiles", "fractions": [0, 0.5, 1], "field": { "type": "fieldAccess", "fieldName": "merged_sketch" } }], "intervals": ["2016-01-01T00:00:00.000Z/2016-01-31T00:00:00.000Z"] } Similar to quantilesFromTDigestSketch except it takes in a single fraction for computing a quantile. 
{ "type" : "quantileFromTDigestSketch", "name": <output name>, "field" : <post aggregator that refers to a TDigestSketch (fieldAccess or another post aggregator)>, "fraction" : <value> } property\tdescription\trequired?type\tThis String should always be "quantileFromTDigestSketch"\tyes name\tA String for the output (result) name of the calculation.\tyes field\tA field reference pointing to the field aggregated/combined T-Digest sketch.\tyes fraction\tDecimal value between 0 and 1\tyes "},{"title":"Thrift","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/thrift","content":"","keywords":""},{"title":"LZO Support","type":1,"pageTitle":"Thrift","url":"/docs/27.0.0/development/extensions-contrib/thrift#lzo-support","content":"If you plan to read LZO-compressed Thrift files, you will need to download version 0.4.19 of the hadoop-lzo JAR and place it in your extensions/druid-thrift-extensions directory. "},{"title":"Thrift Parser","type":1,"pageTitle":"Thrift","url":"/docs/27.0.0/development/extensions-contrib/thrift#thrift-parser","content":"Field\tType\tDescription\tRequiredtype\tString\tThis should say thrift\tyes parseSpec\tJSON Object\tSpecifies the timestamp and dimensions of the data. Should be a JSON parseSpec.\tyes thriftJar\tString\tpath of thrift jar, if not provided, it will try to find the thrift class in classpath. Thrift jar in batch ingestion should be uploaded to HDFS first and configure jobProperties with "tmpjars":"/path/to/your/thrift.jar"\tno thriftClass\tString\tclassname of thrift\tyes Batch Ingestion example - inputFormat and tmpjars should be set. This is for batch ingestion using the HadoopDruidIndexer. The inputFormat of inputSpec in ioConfig could be one of "org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat" and com.twitter.elephantbird.mapreduce.input.LzoThriftBlockInputFormat. Be careful, when LzoThriftBlockInputFormat is used, thrift class must be provided twice. { "type": "index_hadoop", "spec": { "dataSchema": { "dataSource": "book", "parser": { "type": "thrift", "jarPath": "book.jar", "thriftClass": "org.apache.druid.data.input.thrift.Book", "protocol": "compact", "parseSpec": { "format": "json", ... } }, "metricsSpec": [], "granularitySpec": {} }, "ioConfig": { "type": "hadoop", "inputSpec": { "type": "static", "inputFormat": "org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat", // "inputFormat": "com.twitter.elephantbird.mapreduce.input.LzoThriftBlockInputFormat", "paths": "/user/to/some/book.seq" } }, "tuningConfig": { "type": "hadoop", "jobProperties": { "tmpjars":"/user/h_user_profile/du00/druid/test/book.jar", // "elephantbird.class.for.MultiInputFormat" : "${YOUR_THRIFT_CLASS_NAME}" } } } } "},{"title":"Apache Avro","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/avro","content":"","keywords":""},{"title":"Load the Avro extension","type":1,"pageTitle":"Apache Avro","url":"/docs/27.0.0/development/extensions-core/avro#load-the-avro-extension","content":"To use the Avro extension, add the druid-avro-extensions to the list of loaded extensions. See Loading extensions for more information. "},{"title":"Avro types","type":1,"pageTitle":"Apache Avro","url":"/docs/27.0.0/development/extensions-core/avro#avro-types","content":"Druid supports most Avro types natively. This section describes some exceptions. "},{"title":"Unions","type":1,"pageTitle":"Apache Avro","url":"/docs/27.0.0/development/extensions-core/avro#unions","content":"Druid has two modes for supporting union types. 
The default mode treats unions as a single value regardless of the type of data populating the union. If you want to operate on individual members of a union, set extractUnionsByType on the Avro parser. This configuration expands union values into nested objects according to the following rules: Primitive types and unnamed complex types are keyed by their type name, such as int and string.Complex named types are keyed by their names, this includes record, fixed, and enum.The Avro null type is elided as its value can only ever be null. This is safe because an Avro union can only contain a single member of each unnamed type and duplicates of the same named type are not allowed. For example, only a single array is allowed, multiple records (or other named types) are allowed as long as each has a unique name. You can then access the members of the union with a flattenSpec like you would for other nested types. "},{"title":"Binary types","type":1,"pageTitle":"Apache Avro","url":"/docs/27.0.0/development/extensions-core/avro#binary-types","content":"The extension returns bytes and fixed Avro types as base64 encoded strings by default. To decode these types as UTF-8 strings, enable the binaryAsString option on the Avro parser. "},{"title":"Enums","type":1,"pageTitle":"Apache Avro","url":"/docs/27.0.0/development/extensions-core/avro#enums","content":"The extension returns enum types as string of the enum symbol. "},{"title":"Complex types","type":1,"pageTitle":"Apache Avro","url":"/docs/27.0.0/development/extensions-core/avro#complex-types","content":"You can ingest record and map types representing nested data with a flattenSpec on the parser. "},{"title":"Logical types","type":1,"pageTitle":"Apache Avro","url":"/docs/27.0.0/development/extensions-core/avro#logical-types","content":"Druid does not currently support Avro logical types. It ignores them and handles fields according to the underlying primitive type. "},{"title":"Microsoft Azure","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/azure","content":"","keywords":""},{"title":"Deep Storage","type":1,"pageTitle":"Microsoft Azure","url":"/docs/27.0.0/development/extensions-core/azure#deep-storage","content":"Microsoft Azure Storage is another option for deep storage. This requires some additional Druid configuration. Property\tDescription\tPossible Values\tDefaultdruid.storage.type\tazure Must be set. druid.azure.account Azure Storage account name.\tMust be set. druid.azure.key Azure Storage account key.\tOptional. Either set key or sharedAccessStorageToken but not both. druid.azure.sharedAccessStorageToken Azure Shared Storage access token\tOptional. Either set key or sharedAccessStorageToken but not both. druid.azure.container Azure Storage container name.\tMust be set. druid.azure.prefix\tA prefix string that will be prepended to the blob names for the segments published to Azure deep storage "" druid.azure.protocol\tthe protocol to use\thttp or https\thttps druid.azure.maxTries\tNumber of tries before canceling an Azure operation. 3 druid.azure.maxListingLength\tmaximum number of input files matching a given prefix to retrieve at a time 1024 See Azure Services for more information. 
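To make the table above concrete, a hedged runtime properties sketch follows; the account, key, and container values are placeholders, it assumes the Azure extension is already loaded, and you would set either key or sharedAccessStorageToken, not both.

druid.storage.type=azure
druid.azure.account=<your-storage-account>
druid.azure.key=<your-storage-account-key>
druid.azure.container=<your-container>
druid.azure.prefix=druid/segments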
"},{"title":"Moving Average Query","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/moving-average-query","content":"","keywords":""},{"title":"Overview","type":1,"pageTitle":"Moving Average Query","url":"/docs/27.0.0/development/extensions-contrib/moving-average-query#overview","content":"Moving Average Query is an extension which provides support for Moving Average and other Aggregate Window Functions in Druid queries. These Aggregate Window Functions consume standard Druid Aggregators and outputs additional windowed aggregates called Averagers. High level algorithm Moving Average encapsulates the groupBy query (Or timeseries in case of no dimensions) in order to rely on the maturity of these query types. It runs the query in two main phases: Runs an inner groupBy or timeseries query to compute Aggregators (i.e. daily count of events).Passes over aggregated results in Broker, in order to compute Averagers (i.e. moving 7 day average of the daily count). Main enhancements provided by this extension: Functionality: Extending druid query functionality (i.e. initial introduction of Window Functions).Performance: Improving performance of such moving aggregations by eliminating multiple segment scans. Further reading Moving Average Window Functions Analytic Functions "},{"title":"Operations","type":1,"pageTitle":"Moving Average Query","url":"/docs/27.0.0/development/extensions-contrib/moving-average-query#operations","content":""},{"title":"Installation","type":1,"pageTitle":"Moving Average Query","url":"/docs/27.0.0/development/extensions-contrib/moving-average-query#installation","content":"Use pull-deps tool shipped with Druid to install this extension on all Druid broker and router nodes. java -classpath "<your_druid_dir>/lib/*" org.apache.druid.cli.Main tools pull-deps -c org.apache.druid.extensions.contrib:druid-moving-average-query:{VERSION} "},{"title":"Enabling","type":1,"pageTitle":"Moving Average Query","url":"/docs/27.0.0/development/extensions-contrib/moving-average-query#enabling","content":"After installation, to enable this extension, just add druid-moving-average-query to druid.extensions.loadList in broker and routers' runtime.properties file and then restart broker and router nodes. For example: druid.extensions.loadList=["druid-moving-average-query"] "},{"title":"Configuration","type":1,"pageTitle":"Moving Average Query","url":"/docs/27.0.0/development/extensions-contrib/moving-average-query#configuration","content":"There are currently no configuration properties specific to Moving Average. "},{"title":"Limitations","type":1,"pageTitle":"Moving Average Query","url":"/docs/27.0.0/development/extensions-contrib/moving-average-query#limitations","content":"movingAverage is missing support for the following groupBy properties: subtotalsSpec, virtualColumns.movingAverage is missing support for the following timeseries properties: descending.movingAverage is missing support for SQL-compatible null handling (So setting druid.generic.useDefaultValueForNull in configuration will give an error). "},{"title":"Query spec","type":1,"pageTitle":"Moving Average Query","url":"/docs/27.0.0/development/extensions-contrib/moving-average-query#query-spec","content":"Most properties in the query spec derived from groupBy query / timeseries, see documentation for these query types. 
property\tdescription\trequired?queryType\tThis String should always be "movingAverage"; this is the first thing Druid looks at to figure out how to interpret the query.\tyes dataSource\tA String or Object defining the data source to query, very similar to a table in a relational database. See DataSource for more information.\tyes dimensions\tA JSON list of DimensionSpec (Notice that property is optional)\tno limitSpec\tSee LimitSpec\tno having\tSee Having\tno granularity\tA period granularity; See Period Granularities\tyes filter\tSee Filters\tno aggregations\tAggregations forms the input to Averagers; See Aggregations\tyes postAggregations\tSupports only aggregations as input; See Post Aggregations\tno intervals\tA JSON Object representing ISO-8601 Intervals. This defines the time ranges to run the query over.\tyes context\tAn additional JSON Object which can be used to specify certain flags.\tno averagers\tDefines the moving average function; See Averagers\tyes postAveragers\tSupport input of both averagers and aggregations; Syntax is identical to postAggregations (See Post Aggregations)\tno "},{"title":"Averagers","type":1,"pageTitle":"Moving Average Query","url":"/docs/27.0.0/development/extensions-contrib/moving-average-query#averagers","content":"Averagers are used to define the Moving-Average function. Averagers are not limited to an average - they can also provide other types of window functions such as MAX()/MIN(). "},{"title":"Properties","type":1,"pageTitle":"Moving Average Query","url":"/docs/27.0.0/development/extensions-contrib/moving-average-query#properties","content":"These are properties which are common to all Averagers: property\tdescription\trequired?type\tAverager type; See Averager types\tyes name\tAverager name\tyes fieldName\tInput name (An aggregation name)\tyes buckets\tNumber of lookback buckets (time periods), including current one. Must be >0\tyes cycleSize\tCycle size; Used to calculate day-of-week option; See Cycle size (Day of Week)\tno, defaults to 1 "},{"title":"Averager types:","type":1,"pageTitle":"Moving Average Query","url":"/docs/27.0.0/development/extensions-contrib/moving-average-query#averager-types","content":"Standard averagers: doubleMeandoubleMeanNoNullsdoubleSumdoubleMaxdoubleMinlongMeanlongMeanNoNullslongSumlongMaxlongMin Standard averagers These averagers offer four functions: Mean (Average)MeanNoNulls (Ignores empty buckets).SumMaxMin Ignoring nulls: Using a MeanNoNulls averager is useful when the interval starts at the dataset beginning time. In that case, the first records will ignore missing buckets and average won't be artificially low. However, this also means that empty days in a sparse dataset will also be ignored. Example of usage: { "type" : "doubleMean", "name" : <output_name>, "fieldName": <input_name> } "},{"title":"Cycle size (Day of Week)","type":1,"pageTitle":"Moving Average Query","url":"/docs/27.0.0/development/extensions-contrib/moving-average-query#cycle-size-day-of-week","content":"This optional parameter is used to calculate over a single bucket within each cycle instead of all buckets. A prime example would be weekly buckets, resulting in a Day of Week calculation. (Other examples: Month of year, Hour of day). I.e. when using these parameters: granularity: period=P1D (daily)buckets: 28cycleSize: 7 Within each output record, the averager will compute the result over the following buckets: current (#0), #7, #14, #21. Whereas without specifying cycleSize it would have computed over all 28 buckets. 
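Expressed as an averager spec, that day-of-week setup looks roughly like the following; the name and fieldName are placeholders for whatever daily aggregation feeds the averager.

{ "type" : "doubleMean", "name" : "sameWeekday4WeekMean", "fieldName" : "dailyCount", "buckets" : 28, "cycleSize" : 7 }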
"},{"title":"Examples","type":1,"pageTitle":"Moving Average Query","url":"/docs/27.0.0/development/extensions-contrib/moving-average-query#examples","content":"All examples are based on the Wikipedia dataset provided in the Druid tutorials. "},{"title":"Basic example","type":1,"pageTitle":"Moving Average Query","url":"/docs/27.0.0/development/extensions-contrib/moving-average-query#basic-example","content":"Calculating a 7-buckets moving average for Wikipedia edit deltas. Query syntax: { "queryType": "movingAverage", "dataSource": "wikipedia", "granularity": { "type": "period", "period": "PT30M" }, "intervals": [ "2015-09-12T00:00:00Z/2015-09-13T00:00:00Z" ], "aggregations": [ { "name": "delta30Min", "fieldName": "delta", "type": "longSum" } ], "averagers": [ { "name": "trailing30MinChanges", "fieldName": "delta30Min", "type": "longMean", "buckets": 7 } ] } Result: [ { "version" : "v1", "timestamp" : "2015-09-12T00:30:00.000Z", "event" : { "delta30Min" : 30490, "trailing30MinChanges" : 4355.714285714285 } }, { "version" : "v1", "timestamp" : "2015-09-12T01:00:00.000Z", "event" : { "delta30Min" : 96526, "trailing30MinChanges" : 18145.14285714286 } }, { ... ... ... }, { "version" : "v1", "timestamp" : "2015-09-12T23:00:00.000Z", "event" : { "delta30Min" : 119100, "trailing30MinChanges" : 198697.2857142857 } }, { "version" : "v1", "timestamp" : "2015-09-12T23:30:00.000Z", "event" : { "delta30Min" : 177882, "trailing30MinChanges" : 193890.0 } } "},{"title":"Post averager example","type":1,"pageTitle":"Moving Average Query","url":"/docs/27.0.0/development/extensions-contrib/moving-average-query#post-averager-example","content":"Calculating a 7-buckets moving average for Wikipedia edit deltas, plus a ratio between the current period and the moving average. 
Query syntax: { "queryType": "movingAverage", "dataSource": "wikipedia", "granularity": { "type": "period", "period": "PT30M" }, "intervals": [ "2015-09-12T22:00:00Z/2015-09-13T00:00:00Z" ], "aggregations": [ { "name": "delta30Min", "fieldName": "delta", "type": "longSum" } ], "averagers": [ { "name": "trailing30MinChanges", "fieldName": "delta30Min", "type": "longMean", "buckets": 7 } ], "postAveragers" : [ { "name": "ratioTrailing30MinChanges", "type": "arithmetic", "fn": "/", "fields": [ { "type": "fieldAccess", "fieldName": "delta30Min" }, { "type": "fieldAccess", "fieldName": "trailing30MinChanges" } ] } ] } Result: [ { "version" : "v1", "timestamp" : "2015-09-12T22:00:00.000Z", "event" : { "delta30Min" : 144269, "trailing30MinChanges" : 204088.14285714287, "ratioTrailing30MinChanges" : 0.7068955500319539 } }, { "version" : "v1", "timestamp" : "2015-09-12T22:30:00.000Z", "event" : { "delta30Min" : 242860, "trailing30MinChanges" : 214031.57142857142, "ratioTrailing30MinChanges" : 1.134692411867141 } }, { "version" : "v1", "timestamp" : "2015-09-12T23:00:00.000Z", "event" : { "delta30Min" : 119100, "trailing30MinChanges" : 198697.2857142857, "ratioTrailing30MinChanges" : 0.5994042624782422 } }, { "version" : "v1", "timestamp" : "2015-09-12T23:30:00.000Z", "event" : { "delta30Min" : 177882, "trailing30MinChanges" : 193890.0, "ratioTrailing30MinChanges" : 0.9174377224199288 } } ] "},{"title":"Cycle size example","type":1,"pageTitle":"Moving Average Query","url":"/docs/27.0.0/development/extensions-contrib/moving-average-query#cycle-size-example","content":"Calculating an average of every first 10-minutes of the last 3 hours: Query syntax: { "queryType": "movingAverage", "dataSource": "wikipedia", "granularity": { "type": "period", "period": "PT10M" }, "intervals": [ "2015-09-12T00:00:00Z/2015-09-13T00:00:00Z" ], "aggregations": [ { "name": "delta10Min", "fieldName": "delta", "type": "doubleSum" } ], "averagers": [ { "name": "trailing10MinPerHourChanges", "fieldName": "delta10Min", "type": "doubleMeanNoNulls", "buckets": 18, "cycleSize": 6 } ] } "},{"title":"Timestamp Min/Max aggregators","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-contrib/time-min-max","content":"Timestamp Min/Max aggregators To use this Apache Druid extension, include druid-time-min-max in the extensions load list. These aggregators enable more precise calculation of min and max time of given events than __time column whose granularity is sparse, the same as query granularity. To use this feature, a "timeMin" or "timeMax" aggregator must be included at indexing time. They can apply to any columns that can be converted to timestamp, which include Long, DateTime, Timestamp, and String types. For example, when a data set consists of timestamp, dimension, and metric value like followings. 2015-07-28T01:00:00.000Z A 1 2015-07-28T02:00:00.000Z A 1 2015-07-28T03:00:00.000Z A 1 2015-07-28T04:00:00.000Z B 1 2015-07-28T05:00:00.000Z A 1 2015-07-28T06:00:00.000Z B 1 2015-07-29T01:00:00.000Z C 1 2015-07-29T02:00:00.000Z C 1 2015-07-29T03:00:00.000Z A 1 2015-07-29T04:00:00.000Z A 1 At ingestion time, timeMin and timeMax aggregator can be included as other aggregators. { "type": "timeMin", "name": "tmin", "fieldName": "<field_name, typically column specified in timestamp spec>" } { "type": "timeMax", "name": "tmax", "fieldName": "<field_name, typically column specified in timestamp spec>" } name is output name of aggregator and can be any string. 
fieldName is typically column specified in timestamp spec but can be any column that can be converted to timestamp. To query for results, the same aggregators "timeMin" and "timeMax" is used. { "queryType": "groupBy", "dataSource": "timeMinMax", "granularity": "DAY", "dimensions": ["product"], "aggregations": [ { "type": "count", "name": "count" }, { "type": "timeMin", "name": "<output_name of timeMin>", "fieldName": "tmin" }, { "type": "timeMax", "name": "<output_name of timeMax>", "fieldName": "tmax" } ], "intervals": [ "2010-01-01T00:00:00.000Z/2020-01-01T00:00:00.000Z" ] } Then, result has min and max of timestamp, which is finer than query granularity. 2015-07-28T00:00:00.000Z A 4 2015-07-28T01:00:00.000Z 2015-07-28T05:00:00.000Z 2015-07-28T00:00:00.000Z B 2 2015-07-28T04:00:00.000Z 2015-07-28T06:00:00.000Z 2015-07-29T00:00:00.000Z A 2 2015-07-29T03:00:00.000Z 2015-07-29T04:00:00.000Z 2015-07-29T00:00:00.000Z C 2 2015-07-29T01:00:00.000Z 2015-07-29T02:00:00.000Z ","keywords":""},{"title":"DataSketches extension","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/datasketches-extension","content":"DataSketches extension Apache Druid aggregators based on Apache DataSketches library. Sketches are data structures implementing approximate streaming mergeable algorithms. Sketches can be ingested from the outside of Druid or built from raw data at ingestion time. Sketches can be stored in Druid segments as additive metrics. To use the datasketches aggregators, make sure you include the extension in your config file: druid.extensions.loadList=["druid-datasketches"] The following modules are available: Theta sketch - approximate distinct counting with set operations (union, intersection and set difference).Tuple sketch - extension of Theta sketch to support values associated with distinct keys (arrays of numeric values in this specialized implementation).Quantiles sketch - approximate distribution of comparable values to obtain ranks, quantiles and histograms. This is a specialized implementation for numeric values.KLL Quantiles sketch - approximate distribution of comparable values to obtain ranks, quantiles and histograms. This is a specialized implementation for numeric values. This is a more advanced algorithm compared to the classic quantiles above, sketches are more compact for the same accuracy, or more accurate for the same size.HLL sketch - approximate distinct counting using very compact HLL sketch.","keywords":""},{"title":"Tasks API","type":0,"sectionRef":"#","url":"/docs/27.0.0/api-reference/tasks-api","content":"","keywords":""},{"title":"Task information and retrieval","type":1,"pageTitle":"Tasks API","url":"/docs/27.0.0/api-reference/tasks-api#task-information-and-retrieval","content":""},{"title":"Get an array of tasks","type":1,"pageTitle":"Tasks API","url":"/docs/27.0.0/api-reference/tasks-api#get-an-array-of-tasks","content":"Retrieves an array of all tasks in the Druid cluster. Each task object includes information on its ID, status, associated datasource, and other metadata. For definitions of the response properties, see the Tasks table. URL GET /druid/indexer/v1/tasks Query parameters The endpoint supports a set of optional query parameters to filter results. Parameter\tType\tDescriptionstate\tString\tFilter list of tasks by task state, valid options are running, complete, waiting, and pending. datasource\tString\tReturn tasks filtered by Druid datasource. createdTimeInterval\tString (ISO-8601)\tReturn tasks created within the specified interval. 
Use _ as the delimiter for the interval string. Do not use /. For example, 2023-06-27_2023-06-28. max\tInteger\tMaximum number of complete tasks to return. Only applies when state is set to complete. type\tString\tFilter tasks by task type. See task documentation for more details. Responses 200 SUCCESS400 BAD REQUEST500 SERVER ERROR Successfully retrieved list of tasks Sample request The following example shows how to retrieve a list of tasks filtered with the following query parameters: State: completeDatasource: wikipedia_apiTime interval: between 2015-09-12 and 2015-09-13Max entries returned: 10Task type: query_worker cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/tasks/?state=complete&datasource=wikipedia_api&createdTimeInterval=2015-09-12_2015-09-13&max=10&type=query_worker" Sample response Click to show sample response [ { "id": "query-223549f8-b993-4483-b028-1b0d54713cad-worker0_0", "groupId": "query-223549f8-b993-4483-b028-1b0d54713cad", "type": "query_worker", "createdTime": "2023-06-22T22:11:37.012Z", "queueInsertionTime": "1970-01-01T00:00:00.000Z", "statusCode": "SUCCESS", "status": "SUCCESS", "runnerStatusCode": "NONE", "duration": 17897, "location": { "host": "localhost", "port": 8101, "tlsPort": -1 }, "dataSource": "wikipedia_api", "errorMsg": null }, { "id": "query-fa82fa40-4c8c-4777-b832-cabbee5f519f-worker0_0", "groupId": "query-fa82fa40-4c8c-4777-b832-cabbee5f519f", "type": "query_worker", "createdTime": "2023-06-20T22:51:21.302Z", "queueInsertionTime": "1970-01-01T00:00:00.000Z", "statusCode": "SUCCESS", "status": "SUCCESS", "runnerStatusCode": "NONE", "duration": 16911, "location": { "host": "localhost", "port": 8101, "tlsPort": -1 }, "dataSource": "wikipedia_api", "errorMsg": null }, { "id": "query-5419da7a-b270-492f-90e6-920ecfba766a-worker0_0", "groupId": "query-5419da7a-b270-492f-90e6-920ecfba766a", "type": "query_worker", "createdTime": "2023-06-20T22:45:53.909Z", "queueInsertionTime": "1970-01-01T00:00:00.000Z", "statusCode": "SUCCESS", "status": "SUCCESS", "runnerStatusCode": "NONE", "duration": 17030, "location": { "host": "localhost", "port": 8101, "tlsPort": -1 }, "dataSource": "wikipedia_api", "errorMsg": null } ] "},{"title":"Get an array of complete tasks","type":1,"pageTitle":"Tasks API","url":"/docs/27.0.0/api-reference/tasks-api#get-an-array-of-complete-tasks","content":"Retrieves an array of completed tasks in the Druid cluster. This is functionally equivalent to /druid/indexer/v1/tasks?state=complete. For definitions of the response properties, see the Tasks table. URL GET /druid/indexer/v1/completeTasks Query parameters The endpoint supports a set of optional query parameters to filter results. Parameter\tType\tDescriptiondatasource\tString\tReturn tasks filtered by Druid datasource. createdTimeInterval\tString (ISO-8601)\tReturn tasks created within the specified interval. The interval string should be delimited by _ instead of /. For example, 2023-06-27_2023-06-28. max\tInteger\tMaximum number of complete tasks to return. Only applies when state is set to complete. type\tString\tFilter tasks by task type. See task documentation for more details. 
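For example, the query parameters above can be combined on a single request; the datasource, interval, and limit shown here are illustrative.

curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/completeTasks?datasource=wikipedia_api&createdTimeInterval=2023-06-27_2023-06-28&max=10"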
Responses 200 SUCCESS404 NOT FOUND Successfully retrieved list of complete tasks Sample request cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/completeTasks" Sample response Click to show sample response [ { "id": "query-223549f8-b993-4483-b028-1b0d54713cad-worker0_0", "groupId": "query-223549f8-b993-4483-b028-1b0d54713cad", "type": "query_worker", "createdTime": "2023-06-22T22:11:37.012Z", "queueInsertionTime": "1970-01-01T00:00:00.000Z", "statusCode": "SUCCESS", "status": "SUCCESS", "runnerStatusCode": "NONE", "duration": 17897, "location": { "host": "localhost", "port": 8101, "tlsPort": -1 }, "dataSource": "wikipedia_api", "errorMsg": null }, { "id": "query-223549f8-b993-4483-b028-1b0d54713cad", "groupId": "query-223549f8-b993-4483-b028-1b0d54713cad", "type": "query_controller", "createdTime": "2023-06-22T22:11:28.367Z", "queueInsertionTime": "1970-01-01T00:00:00.000Z", "statusCode": "SUCCESS", "status": "SUCCESS", "runnerStatusCode": "NONE", "duration": 30317, "location": { "host": "localhost", "port": 8100, "tlsPort": -1 }, "dataSource": "wikipedia_api", "errorMsg": null } ] "},{"title":"Get an array of running tasks","type":1,"pageTitle":"Tasks API","url":"/docs/27.0.0/api-reference/tasks-api#get-an-array-of-running-tasks","content":"Retrieves an array of running task objects in the Druid cluster. It is functionally equivalent to /druid/indexer/v1/tasks?state=running. For definitions of the response properties, see the Tasks table. URL GET /druid/indexer/v1/runningTasks Query parameters The endpoint supports a set of optional query parameters to filter results. Parameter\tType\tDescriptiondatasource\tString\tReturn tasks filtered by Druid datasource. createdTimeInterval\tString (ISO-8601)\tReturn tasks created within the specified interval. The interval string should be delimited by _ instead of /. For example, 2023-06-27_2023-06-28. max\tInteger\tMaximum number of complete tasks to return. Only applies when state is set to complete. type\tString\tFilter tasks by task type. See task documentation for more details. Responses 200 SUCCESS Successfully retrieved list of running tasks Sample request cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/runningTasks" Sample response Click to show sample response [ { "id": "query-32663269-ead9-405a-8eb6-0817a952ef47", "groupId": "query-32663269-ead9-405a-8eb6-0817a952ef47", "type": "query_controller", "createdTime": "2023-06-22T22:54:43.170Z", "queueInsertionTime": "2023-06-22T22:54:43.170Z", "statusCode": "RUNNING", "status": "RUNNING", "runnerStatusCode": "RUNNING", "duration": -1, "location": { "host": "localhost", "port": 8100, "tlsPort": -1 }, "dataSource": "wikipedia_api", "errorMsg": null } ] "},{"title":"Get an array of waiting tasks","type":1,"pageTitle":"Tasks API","url":"/docs/27.0.0/api-reference/tasks-api#get-an-array-of-waiting-tasks","content":"Retrieves an array of waiting tasks in the Druid cluster. It is functionally equivalent to /druid/indexer/v1/tasks?state=waiting. For definitions of the response properties, see the Tasks table. URL GET /druid/indexer/v1/waitingTasks Query parameters The endpoint supports a set of optional query parameters to filter results. Parameter\tType\tDescriptiondatasource\tString\tReturn tasks filtered by Druid datasource. createdTimeInterval\tString (ISO-8601)\tReturn tasks created within the specified interval. The interval string should be delimited by _ instead of /. For example, 2023-06-27_2023-06-28. max\tInteger\tMaximum number of complete tasks to return. 
Only applies when state is set to complete. type\tString\tFilter tasks by task type. See task documentation for more details. Responses 200 SUCCESS Successfully retrieved list of waiting tasks Sample request cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/waitingTasks" Sample response Click to show sample response [ { "id": "index_parallel_wikipedia_auto_biahcbmf_2023-06-26T21:08:05.216Z", "groupId": "index_parallel_wikipedia_auto_biahcbmf_2023-06-26T21:08:05.216Z", "type": "index_parallel", "createdTime": "2023-06-26T21:08:05.217Z", "queueInsertionTime": "1970-01-01T00:00:00.000Z", "statusCode": "RUNNING", "status": "RUNNING", "runnerStatusCode": "WAITING", "duration": -1, "location": { "host": null, "port": -1, "tlsPort": -1 }, "dataSource": "wikipedia_auto", "errorMsg": null }, { "id": "index_parallel_wikipedia_auto_afggfiec_2023-06-26T21:08:05.546Z", "groupId": "index_parallel_wikipedia_auto_afggfiec_2023-06-26T21:08:05.546Z", "type": "index_parallel", "createdTime": "2023-06-26T21:08:05.548Z", "queueInsertionTime": "1970-01-01T00:00:00.000Z", "statusCode": "RUNNING", "status": "RUNNING", "runnerStatusCode": "WAITING", "duration": -1, "location": { "host": null, "port": -1, "tlsPort": -1 }, "dataSource": "wikipedia_auto", "errorMsg": null }, { "id": "index_parallel_wikipedia_auto_jmmddihf_2023-06-26T21:08:06.644Z", "groupId": "index_parallel_wikipedia_auto_jmmddihf_2023-06-26T21:08:06.644Z", "type": "index_parallel", "createdTime": "2023-06-26T21:08:06.671Z", "queueInsertionTime": "1970-01-01T00:00:00.000Z", "statusCode": "RUNNING", "status": "RUNNING", "runnerStatusCode": "WAITING", "duration": -1, "location": { "host": null, "port": -1, "tlsPort": -1 }, "dataSource": "wikipedia_auto", "errorMsg": null } ] "},{"title":"Get an array of pending tasks","type":1,"pageTitle":"Tasks API","url":"/docs/27.0.0/api-reference/tasks-api#get-an-array-of-pending-tasks","content":"Retrieves an array of pending tasks in the Druid cluster. It is functionally equivalent to /druid/indexer/v1/tasks?state=pending. For definitions of the response properties, see the Tasks table. URL GET /druid/indexer/v1/pendingTasks Query parameters The endpoint supports a set of optional query parameters to filter results. Parameter\tType\tDescriptiondatasource\tString\tReturn tasks filtered by Druid datasource. createdTimeInterval\tString (ISO-8601)\tReturn tasks created within the specified interval. The interval string should be delimited by _ instead of /. For example, 2023-06-27_2023-06-28. max\tInteger\tMaximum number of complete tasks to return. Only applies when state is set to complete. type\tString\tFilter tasks by task type. See task documentation for more details. 
Responses 200 SUCCESS Successfully retrieved list of pending tasks Sample request cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/pendingTasks" Sample response Click to show sample response [ { "id": "query-7b37c315-50a0-4b68-aaa8-b1ef1f060e67", "groupId": "query-7b37c315-50a0-4b68-aaa8-b1ef1f060e67", "type": "query_controller", "createdTime": "2023-06-23T19:53:06.037Z", "queueInsertionTime": "2023-06-23T19:53:06.037Z", "statusCode": "RUNNING", "status": "RUNNING", "runnerStatusCode": "PENDING", "duration": -1, "location": { "host": null, "port": -1, "tlsPort": -1 }, "dataSource": "wikipedia_api", "errorMsg": null }, { "id": "query-544f0c41-f81d-4504-b98b-f9ab8b36ef36", "groupId": "query-544f0c41-f81d-4504-b98b-f9ab8b36ef36", "type": "query_controller", "createdTime": "2023-06-23T19:53:06.616Z", "queueInsertionTime": "2023-06-23T19:53:06.616Z", "statusCode": "RUNNING", "status": "RUNNING", "runnerStatusCode": "PENDING", "duration": -1, "location": { "host": null, "port": -1, "tlsPort": -1 }, "dataSource": "wikipedia_api", "errorMsg": null } ] "},{"title":"Get task payload","type":1,"pageTitle":"Tasks API","url":"/docs/27.0.0/api-reference/tasks-api#get-task-payload","content":"Retrieves the payload of a task given the task ID. It returns a JSON object with the task ID and payload that includes task configuration details and relevant specifications associated with the execution of the task. URL GET /druid/indexer/v1/task/:taskId Responses 200 SUCCESS404 NOT FOUND Successfully retrieved payload of task Sample request The following examples shows how to retrieve the task payload of a task with the specified ID index_parallel_wikipedia_short_iajoonnd_2023-07-07T17:53:12.174Z. cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/index_parallel_wikipedia_short_iajoonnd_2023-07-07T17:53:12.174Z" Sample response Click to show sample response { "task": "index_parallel_wikipedia_short_iajoonnd_2023-07-07T17:53:12.174Z", "payload": { "type": "index_parallel", "id": "index_parallel_wikipedia_short_iajoonnd_2023-07-07T17:53:12.174Z", "groupId": "index_parallel_wikipedia_short_iajoonnd_2023-07-07T17:53:12.174Z", "resource": { "availabilityGroup": "index_parallel_wikipedia_short_iajoonnd_2023-07-07T17:53:12.174Z", "requiredCapacity": 1 }, "spec": { "dataSchema": { "dataSource": "wikipedia_short", "timestampSpec": { "column": "time", "format": "iso", "missingValue": null }, "dimensionsSpec": { "dimensions": [ { "type": "string", "name": "cityName", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "string", "name": "countryName", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "string", "name": "regionName", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true } ], "dimensionExclusions": [ "__time", "time" ], "includeAllDimensions": false, "useSchemaDiscovery": false }, "metricsSpec": [], "granularitySpec": { "type": "uniform", "segmentGranularity": "DAY", "queryGranularity": { "type": "none" }, "rollup": false, "intervals": [ "2015-09-12T00:00:00.000Z/2015-09-13T00:00:00.000Z" ] }, "transformSpec": { "filter": null, "transforms": [] } }, "ioConfig": { "type": "index_parallel", "inputSource": { "type": "local", "baseDir": "quickstart/tutorial", "filter": "wikiticker-2015-09-12-sampled.json.gz" }, "inputFormat": { "type": "json", "keepNullColumns": false, "assumeNewlineDelimited": false, "useJsonNodeReader": false }, "appendToExisting": false, "dropExisting": false }, "tuningConfig": { "type": "index_parallel", 
"maxRowsPerSegment": 5000000, "appendableIndexSpec": { "type": "onheap", "preserveExistingMetrics": false }, "maxRowsInMemory": 25000, "maxBytesInMemory": 0, "skipBytesInMemoryOverheadCheck": false, "maxTotalRows": null, "numShards": null, "splitHintSpec": null, "partitionsSpec": { "type": "dynamic", "maxRowsPerSegment": 5000000, "maxTotalRows": null }, "indexSpec": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "indexSpecForIntermediatePersists": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "maxPendingPersists": 0, "forceGuaranteedRollup": false, "reportParseExceptions": false, "pushTimeout": 0, "segmentWriteOutMediumFactory": null, "maxNumConcurrentSubTasks": 1, "maxRetry": 3, "taskStatusCheckPeriodMs": 1000, "chatHandlerTimeout": "PT10S", "chatHandlerNumRetries": 5, "maxNumSegmentsToMerge": 100, "totalNumMergeTasks": 10, "logParseExceptions": false, "maxParseExceptions": 2147483647, "maxSavedParseExceptions": 0, "maxColumnsToMerge": -1, "awaitSegmentAvailabilityTimeoutMillis": 0, "maxAllowedLockCount": -1, "partitionDimensions": [] } }, "context": { "forceTimeChunkLock": true, "useLineageBasedSegmentAllocation": true }, "dataSource": "wikipedia_short" } } "},{"title":"Get task status","type":1,"pageTitle":"Tasks API","url":"/docs/27.0.0/api-reference/tasks-api#get-task-status","content":"Retrieves the status of a task given the task ID. It returns a JSON object with the task's status code, runner status, task type, datasource, and other relevant metadata. URL GET /druid/indexer/v1/task/:taskId/status Responses 200 SUCCESS404 NOT FOUND Successfully retrieved task status Sample request The following examples shows how to retrieve the status of a task with the specified ID query-223549f8-b993-4483-b028-1b0d54713cad. cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/query-223549f8-b993-4483-b028-1b0d54713cad/status" Sample response Click to show sample response { 'task': 'query-223549f8-b993-4483-b028-1b0d54713cad', 'status': { 'id': 'query-223549f8-b993-4483-b028-1b0d54713cad', 'groupId': 'query-223549f8-b993-4483-b028-1b0d54713cad', 'type': 'query_controller', 'createdTime': '2023-06-22T22:11:28.367Z', 'queueInsertionTime': '1970-01-01T00:00:00.000Z', 'statusCode': 'RUNNING', 'status': 'RUNNING', 'runnerStatusCode': 'RUNNING', 'duration': -1, 'location': {'host': 'localhost', 'port': 8100, 'tlsPort': -1}, 'dataSource': 'wikipedia_api', 'errorMsg': None } } "},{"title":"Get task segments","type":1,"pageTitle":"Tasks API","url":"/docs/27.0.0/api-reference/tasks-api#get-task-segments","content":"info This API is deprecated and will be removed in future releases. Retrieves information about segments generated by the task given the task ID. To hit this endpoint, make sure to enable the audit log config on the Overlord with druid.indexer.auditLog.enabled = true. In addition to enabling audit logs, configure a cleanup strategy to prevent overloading the metadata store with old audit logs which may cause performance issues. To enable automated cleanup of audit logs on the Coordinator, set druid.coordinator.kill.audit.on. You may also manually export the audit logs to external storage. For more information, see Audit records. 
URL GET /druid/indexer/v1/task/:taskId/segments Responses 200 SUCCESS Successfully retrieved task segments Sample request The following example shows how to retrieve the segments of the task with the specified ID query-52a8aafe-7265-4427-89fe-dc51275cc470. cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/query-52a8aafe-7265-4427-89fe-dc51275cc470/segments" Sample response A successful request returns a 200 OK response and an array of the task segments. "},{"title":"Get task log","type":1,"pageTitle":"Tasks API","url":"/docs/27.0.0/api-reference/tasks-api#get-task-log","content":"Retrieves the event log associated with a task. It returns a list of logged events during the lifecycle of the task. The endpoint is useful for providing information about the execution of the task, including any errors or warnings raised. Task logs are automatically retrieved from the Middle Manager/Indexer or from long-term storage. For reference, see Task logs. URL GET /druid/indexer/v1/task/:taskId/log Query parameters offset (optional) Type: Int Excludes the specified number of entries from the beginning of the response. Responses 200 SUCCESS Successfully retrieved task log Sample request The following example shows how to retrieve the task log of a task with the specified ID index_kafka_social_media_0e905aa31037879_nommnaeg. cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/index_kafka_social_media_0e905aa31037879_nommnaeg/log" Sample response Click to show sample response 2023-07-03T22:11:17,891 INFO [qtp1251996697-122] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Sequence[index_kafka_social_media_0e905aa31037879_0] end offsets updated from [{0=9223372036854775807}] to [{0=230985}]. 2023-07-03T22:11:17,900 INFO [qtp1251996697-122] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Saved sequence metadata to disk: [SequenceMetadata{sequenceId=0, sequenceName='index_kafka_social_media_0e905aa31037879_0', assignments=[0], startOffsets={0=230985}, exclusiveStartPartitions=[], endOffsets={0=230985}, sentinel=false, checkpointed=true}] 2023-07-03T22:11:17,901 INFO [task-runner-0-priority-0] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Received resume command, resuming ingestion. 2023-07-03T22:11:17,901 INFO [task-runner-0-priority-0] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Finished reading partition[0], up to[230985].
2023-07-03T22:11:17,902 INFO [task-runner-0-priority-0] org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-kafka-supervisor-dcanhmig-1, groupId=kafka-supervisor-dcanhmig] Resetting generation and member id due to: consumer pro-actively leaving the group 2023-07-03T22:11:17,902 INFO [task-runner-0-priority-0] org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-kafka-supervisor-dcanhmig-1, groupId=kafka-supervisor-dcanhmig] Request joining group due to: consumer pro-actively leaving the group 2023-07-03T22:11:17,902 INFO [task-runner-0-priority-0] org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-kafka-supervisor-dcanhmig-1, groupId=kafka-supervisor-dcanhmig] Unsubscribed all topics or patterns and assigned partitions 2023-07-03T22:11:17,912 INFO [task-runner-0-priority-0] org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Persisted rows[0] and (estimated) bytes[0] 2023-07-03T22:11:17,916 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-appenderator-persist] org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Flushed in-memory data with commit metadata [AppenderatorDriverMetadata{segments={}, lastSegmentIds={}, callerMetadata={nextPartitions=SeekableStreamEndSequenceNumbers{stream='social_media', partitionSequenceNumberMap={0=230985}}}}] for segments: 2023-07-03T22:11:17,917 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-appenderator-persist] org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Persisted stats: processed rows: [0], persisted rows[0], sinks: [0], total fireHydrants (across sinks): [0], persisted fireHydrants (across sinks): [0] 2023-07-03T22:11:17,919 INFO [task-runner-0-priority-0] org.apache.druid.segment.realtime.appenderator.BaseAppenderatorDriver - Pushing [0] segments in background 2023-07-03T22:11:17,921 INFO [task-runner-0-priority-0] org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Persisted rows[0] and (estimated) bytes[0] 2023-07-03T22:11:17,924 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-appenderator-persist] org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Flushed in-memory data with commit metadata [AppenderatorDriverMetadata{segments={}, lastSegmentIds={}, callerMetadata={nextPartitions=SeekableStreamStartSequenceNumbers{stream='social_media', partitionSequenceNumberMap={0=230985}, exclusivePartitions=[]}, publishPartitions=SeekableStreamEndSequenceNumbers{stream='social_media', partitionSequenceNumberMap={0=230985}}}}] for segments: 2023-07-03T22:11:17,924 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-appenderator-persist] org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Persisted stats: processed rows: [0], persisted rows[0], sinks: [0], total fireHydrants (across sinks): [0], persisted fireHydrants (across sinks): [0] 2023-07-03T22:11:17,925 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-appenderator-merge] org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Preparing to push (stats): processed rows: [0], sinks: [0], fireHydrants (across sinks): [0] 2023-07-03T22:11:17,925 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-appenderator-merge] org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Push complete... 
2023-07-03T22:11:17,929 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-publish] org.apache.druid.indexing.seekablestream.SequenceMetadata - With empty segment set, start offsets [SeekableStreamStartSequenceNumbers{stream='social_media', partitionSequenceNumberMap={0=230985}, exclusivePartitions=[]}] and end offsets [SeekableStreamEndSequenceNumbers{stream='social_media', partitionSequenceNumberMap={0=230985}}] are the same, skipping metadata commit. 2023-07-03T22:11:17,930 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-publish] org.apache.druid.segment.realtime.appenderator.BaseAppenderatorDriver - Published [0] segments with commit metadata [{nextPartitions=SeekableStreamStartSequenceNumbers{stream='social_media', partitionSequenceNumberMap={0=230985}, exclusivePartitions=[]}, publishPartitions=SeekableStreamEndSequenceNumbers{stream='social_media', partitionSequenceNumberMap={0=230985}}}] 2023-07-03T22:11:17,930 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-publish] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Published 0 segments for sequence [index_kafka_social_media_0e905aa31037879_0] with metadata [AppenderatorDriverMetadata{segments={}, lastSegmentIds={}, callerMetadata={nextPartitions=SeekableStreamStartSequenceNumbers{stream='social_media', partitionSequenceNumberMap={0=230985}, exclusivePartitions=[]}, publishPartitions=SeekableStreamEndSequenceNumbers{stream='social_media', partitionSequenceNumberMap={0=230985}}}}]. 2023-07-03T22:11:17,931 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-publish] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Saved sequence metadata to disk: [] 2023-07-03T22:11:17,932 INFO [task-runner-0-priority-0] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Handoff complete for segments: 2023-07-03T22:11:17,932 INFO [task-runner-0-priority-0] org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-kafka-supervisor-dcanhmig-1, groupId=kafka-supervisor-dcanhmig] Resetting generation and member id due to: consumer pro-actively leaving the group 2023-07-03T22:11:17,932 INFO [task-runner-0-priority-0] org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-kafka-supervisor-dcanhmig-1, groupId=kafka-supervisor-dcanhmig] Request joining group due to: consumer pro-actively leaving the group 2023-07-03T22:11:17,933 INFO [task-runner-0-priority-0] org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 2023-07-03T22:11:17,933 INFO [task-runner-0-priority-0] org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 2023-07-03T22:11:17,933 INFO [task-runner-0-priority-0] org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 2023-07-03T22:11:17,935 INFO [task-runner-0-priority-0] org.apache.kafka.common.utils.AppInfoParser - App info kafka.consumer for consumer-kafka-supervisor-dcanhmig-1 unregistered 2023-07-03T22:11:17,936 INFO [task-runner-0-priority-0] org.apache.druid.curator.announcement.Announcer - Unannouncing [/druid/internal-discovery/PEON/localhost:8100] 2023-07-03T22:11:17,972 INFO [task-runner-0-priority-0] org.apache.druid.curator.discovery.CuratorDruidNodeAnnouncer - Unannounced self 
[{"druidNode":{"service":"druid/middleManager","host":"localhost","bindOnHost":false,"plaintextPort":8100,"port":-1,"tlsPort":-1,"enablePlaintextPort":true,"enableTlsPort":false},"nodeType":"peon","services":{"dataNodeService":{"type":"dataNodeService","tier":"_default_tier","maxSize":0,"type":"indexer-executor","serverType":"indexer-executor","priority":0},"lookupNodeService":{"type":"lookupNodeService","lookupTier":"__default"}}}]. 2023-07-03T22:11:17,972 INFO [task-runner-0-priority-0] org.apache.druid.curator.announcement.Announcer - Unannouncing [/druid/announcements/localhost:8100] 2023-07-03T22:11:17,996 INFO [task-runner-0-priority-0] org.apache.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: { "id" : "index_kafka_social_media_0e905aa31037879_nommnaeg", "status" : "SUCCESS", "duration" : 3601130, "errorMsg" : null, "location" : { "host" : null, "port" : -1, "tlsPort" : -1 } } 2023-07-03T22:11:17,998 INFO [main] org.apache.druid.java.util.common.lifecycle.Lifecycle - Stopping lifecycle [module] stage [ANNOUNCEMENTS] 2023-07-03T22:11:18,005 INFO [main] org.apache.druid.java.util.common.lifecycle.Lifecycle - Stopping lifecycle [module] stage [SERVER] 2023-07-03T22:11:18,009 INFO [main] org.eclipse.jetty.server.AbstractConnector - Stopped ServerConnector@6491006{HTTP/1.1, (http/1.1)}{0.0.0.0:8100} 2023-07-03T22:11:18,009 INFO [main] org.eclipse.jetty.server.session - node0 Stopped scavenging 2023-07-03T22:11:18,012 INFO [main] org.eclipse.jetty.server.handler.ContextHandler - Stopped o.e.j.s.ServletContextHandler@742aa00a{/,null,STOPPED} 2023-07-03T22:11:18,014 INFO [main] org.apache.druid.java.util.common.lifecycle.Lifecycle - Stopping lifecycle [module] stage [NORMAL] 2023-07-03T22:11:18,014 INFO [main] org.apache.druid.server.coordination.ZkCoordinator - Stopping ZkCoordinator for [DruidServerMetadata{name='localhost:8100', hostAndPort='localhost:8100', hostAndTlsPort='null', maxSize=0, tier='_default_tier', type=indexer-executor, priority=0}] 2023-07-03T22:11:18,014 INFO [main] org.apache.druid.server.coordination.SegmentLoadDropHandler - Stopping... 2023-07-03T22:11:18,014 INFO [main] org.apache.druid.server.coordination.SegmentLoadDropHandler - Stopped. 2023-07-03T22:11:18,014 INFO [main] org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner - Starting graceful shutdown of task[index_kafka_social_media_0e905aa31037879_nommnaeg]. 2023-07-03T22:11:18,014 INFO [main] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Stopping forcefully (status: [PUBLISHING]) 2023-07-03T22:11:18,019 INFO [LookupExtractorFactoryContainerProvider-MainThread] org.apache.druid.query.lookup.LookupReferencesManager - Lookup Management loop exited. Lookup notices are not handled anymore. 2023-07-03T22:11:18,020 INFO [main] org.apache.druid.query.lookup.LookupReferencesManager - Closed lookup [name]. 
2023-07-03T22:11:18,020 INFO [Curator-Framework-0] org.apache.curator.framework.imps.CuratorFrameworkImpl - backgroundOperationsLoop exiting 2023-07-03T22:11:18,147 INFO [main] org.apache.zookeeper.ZooKeeper - Session: 0x1000097ceaf0007 closed 2023-07-03T22:11:18,147 INFO [main-EventThread] org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x1000097ceaf0007 2023-07-03T22:11:18,151 INFO [main] org.apache.druid.java.util.common.lifecycle.Lifecycle - Stopping lifecycle [module] stage [INIT] Finished peon task "},{"title":"Get task completion report","type":1,"pageTitle":"Tasks API","url":"/docs/27.0.0/api-reference/tasks-api#get-task-completion-report","content":"Retrieves a task completion report for a task. It returns a JSON object with information about the number of rows ingested, and any parse exceptions that Druid raised. URL GET /druid/indexer/v1/task/:taskId/reports Responses 200 SUCCESS Successfully retrieved task report Sample request The following examples shows how to retrieve the completion report of a task with the specified ID query-52a8aafe-7265-4427-89fe-dc51275cc470. cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/query-52a8aafe-7265-4427-89fe-dc51275cc470/reports" Sample response Click to show sample response { "ingestionStatsAndErrors": { "type": "ingestionStatsAndErrors", "taskId": "query-52a8aafe-7265-4427-89fe-dc51275cc470", "payload": { "ingestionState": "COMPLETED", "unparseableEvents": {}, "rowStats": { "determinePartitions": { "processed": 0, "processedBytes": 0, "processedWithError": 0, "thrownAway": 0, "unparseable": 0 }, "buildSegments": { "processed": 39244, "processedBytes": 17106256, "processedWithError": 0, "thrownAway": 0, "unparseable": 0 } }, "errorMsg": null, "segmentAvailabilityConfirmed": false, "segmentAvailabilityWaitTimeMs": 0 } } } "},{"title":"Task operations","type":1,"pageTitle":"Tasks API","url":"/docs/27.0.0/api-reference/tasks-api#task-operations","content":""},{"title":"Submit a task","type":1,"pageTitle":"Tasks API","url":"/docs/27.0.0/api-reference/tasks-api#submit-a-task","content":"Submits a JSON-based ingestion spec or supervisor spec to the Overlord. It returns the task ID of the submitted task. For information on creating an ingestion spec, refer to the ingestion spec reference. Note that for most batch ingestion use cases, you should use the SQL-ingestion API instead of JSON-based batch ingestion. URL POST /druid/indexer/v1/task Responses 200 SUCCESS400 BAD REQUEST415 UNSUPPORTED MEDIA TYPE500 Server Error Successfully submitted task Sample request The following request is an example of submitting a task to create a datasource named "wikipedia auto". 
cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task" \\ --header 'Content-Type: application/json' \\ --data '{ "type" : "index_parallel", "spec" : { "dataSchema" : { "dataSource" : "wikipedia_auto", "timestampSpec": { "column": "time", "format": "iso" }, "dimensionsSpec" : { "useSchemaDiscovery": true }, "metricsSpec" : [], "granularitySpec" : { "type" : "uniform", "segmentGranularity" : "day", "queryGranularity" : "none", "intervals" : ["2015-09-12/2015-09-13"], "rollup" : false } }, "ioConfig" : { "type" : "index_parallel", "inputSource" : { "type" : "local", "baseDir" : "quickstart/tutorial/", "filter" : "wikiticker-2015-09-12-sampled.json.gz" }, "inputFormat" : { "type" : "json" }, "appendToExisting" : false }, "tuningConfig" : { "type" : "index_parallel", "maxRowsPerSegment" : 5000000, "maxRowsInMemory" : 25000 } } }' Sample response Click to show sample response { "task": "index_parallel_wikipedia_odofhkle_2023-06-23T21:07:28.226Z" } "},{"title":"Shut down a task","type":1,"pageTitle":"Tasks API","url":"/docs/27.0.0/api-reference/tasks-api#shut-down-a-task","content":"Shuts down a task if it not already complete. Returns a JSON object with the ID of the task that was shut down successfully. URL POST /druid/indexer/v1/task/:taskId/shutdown Responses 200 SUCCESS404 NOT FOUND Successfully shut down task Sample request The following request shows how to shut down a task with the ID query-52as 8aafe-7265-4427-89fe-dc51275cc470. cURLHTTP curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/query-52as 8aafe-7265-4427-89fe-dc51275cc470/shutdown" Sample response Click to show sample response { 'task': 'query-577a83dd-a14e-4380-bd01-c942b781236b' } "},{"title":"Shut down all tasks for a datasource","type":1,"pageTitle":"Tasks API","url":"/docs/27.0.0/api-reference/tasks-api#shut-down-all-tasks-for-a-datasource","content":"Shuts down all tasks for a specified datasource. If successful, it returns a JSON object with the name of the datasource whose tasks are shut down. URL POST /druid/indexer/v1/datasources/:datasource/shutdownAllTasks Responses 200 SUCCESS404 NOT FOUND Successfully shut down tasks Sample request The following request is an example of shutting down all tasks for datasource wikipedia_auto. cURLHTTP curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/datasources/wikipedia_auto/shutdownAllTasks" Sample response Click to show sample response { "dataSource": "wikipedia_api" } "},{"title":"Task management","type":1,"pageTitle":"Tasks API","url":"/docs/27.0.0/api-reference/tasks-api#task-management","content":""},{"title":"Retrieve status objects for tasks","type":1,"pageTitle":"Tasks API","url":"/docs/27.0.0/api-reference/tasks-api#retrieve-status-objects-for-tasks","content":"Retrieves list of task status objects for list of task ID strings in request body. It returns a set of JSON objects with the status, duration, location of each task, and any error messages. URL POST /druid/indexer/v1/taskStatus Responses 200 SUCCESS415 UNSUPPORTED MEDIA TYPE Successfully retrieved status objects Sample request The following request is an example of retrieving status objects for task ID index_parallel_wikipedia_auto_jndhkpbo_2023-06-26T17:23:05.308Z and index_parallel_wikipedia_auto_jbgiianh_2023-06-26T23:17:56.769Z . 
cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/taskStatus" \\ --header 'Content-Type: application/json' \\ --data '["index_parallel_wikipedia_auto_jndhkpbo_2023-06-26T17:23:05.308Z","index_parallel_wikipedia_auto_jbgiianh_2023-06-26T23:17:56.769Z"]' Sample response Click to show sample response { "index_parallel_wikipedia_auto_jbgiianh_2023-06-26T23:17:56.769Z": { "id": "index_parallel_wikipedia_auto_jbgiianh_2023-06-26T23:17:56.769Z", "status": "SUCCESS", "duration": 10630, "errorMsg": null, "location": { "host": "localhost", "port": 8100, "tlsPort": -1 } }, "index_parallel_wikipedia_auto_jndhkpbo_2023-06-26T17:23:05.308Z": { "id": "index_parallel_wikipedia_auto_jndhkpbo_2023-06-26T17:23:05.308Z", "status": "SUCCESS", "duration": 11012, "errorMsg": null, "location": { "host": "localhost", "port": 8100, "tlsPort": -1 } } } "},{"title":"Clean up pending segments for a datasource","type":1,"pageTitle":"Tasks API","url":"/docs/27.0.0/api-reference/tasks-api#clean-up-pending-segments-for-a-datasource","content":"Manually clean up pending segments table in metadata storage for datasource. It returns a JSON object response withnumDeleted for the number of rows deleted from the pending segments table. This API is used by thedruid.coordinator.kill.pendingSegments.on Coordinator settingwhich automates this operation to perform periodically. URL DELETE /druid/indexer/v1/pendingSegments/:datasource Responses 200 SUCCESS Successfully deleted pending segments Sample request The following request is an example of cleaning up pending segments for the wikipedia_api datasource. cURLHTTP curl --request DELETE "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/pendingSegments/wikipedia_api" Sample response Click to show sample response { "numDeleted": 2 } "},{"title":"DataSketches Quantiles Sketch module","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/datasketches-quantiles","content":"","keywords":""},{"title":"Aggregator","type":1,"pageTitle":"DataSketches Quantiles Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-quantiles#aggregator","content":"The result of the aggregation is a DoublesSketch that is the union of all sketches either built from raw data or read from the segments. { "type" : "quantilesDoublesSketch", "name" : <output_name>, "fieldName" : <metric_name>, "k": <parameter that controls size and accuracy> } Property\tDescription\tRequired?type\tThis string should always be "quantilesDoublesSketch"\tyes name\tString representing the output column to store sketch values.\tyes fieldName\tA string for the name of the input field (can contain sketches or raw numeric values).\tyes k\tParameter that determines the accuracy and size of the sketch. Higher k means higher accuracy but more space to store sketches. Must be a power of 2 from 2 to 32768. See accuracy information in the DataSketches documentation for details.\tno, defaults to 128 maxStreamLength\tThis parameter defines the number of items that can be presented to each sketch before it may need to move from off-heap to on-heap memory. This is relevant to query types that use off-heap memory, including TopN and GroupBy. Ideally, should be set high enough such that most sketches can stay off-heap.\tno, defaults to 1000000000 shouldFinalize\tReturn the final double type representing the estimate rather than the intermediate sketch type itself. 
In addition to controlling the finalization of this aggregator, you can control whether all aggregators are finalized with the query context parameters finalize and sqlFinalizeOuterSketches.\tno, defaults to true "},{"title":"Post aggregators","type":1,"pageTitle":"DataSketches Quantiles Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-quantiles#post-aggregators","content":""},{"title":"Quantile","type":1,"pageTitle":"DataSketches Quantiles Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-quantiles#quantile","content":"This returns an approximation to the value that would be preceded by a given fraction of a hypothetical sorted version of the input stream. { "type" : "quantilesDoublesSketchToQuantile", "name": <output name>, "field" : <post aggregator that refers to a DoublesSketch (fieldAccess or another post aggregator)>, "fraction" : <fractional position in the hypothetical sorted stream, number from 0 to 1 inclusive> } "},{"title":"Quantiles","type":1,"pageTitle":"DataSketches Quantiles Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-quantiles#quantiles","content":"This returns an array of quantiles corresponding to a given array of fractions { "type" : "quantilesDoublesSketchToQuantiles", "name": <output name>, "field" : <post aggregator that refers to a DoublesSketch (fieldAccess or another post aggregator)>, "fractions" : <array of fractional positions in the hypothetical sorted stream, number from 0 to 1 inclusive> } "},{"title":"Histogram","type":1,"pageTitle":"DataSketches Quantiles Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-quantiles#histogram","content":"This returns an approximation to the histogram given an array of split points that define the histogram bins or a number of bins (not both). An array of m unique, monotonically increasing split points divide the real number line into m+1 consecutive disjoint intervals. The definition of an interval is inclusive of the left split point and exclusive of the right split point. If the number of bins is specified instead of split points, the interval between the minimum and maximum values is divided into the given number of equally-spaced bins. { "type" : "quantilesDoublesSketchToHistogram", "name": <output name>, "field" : <post aggregator that refers to a DoublesSketch (fieldAccess or another post aggregator)>, "splitPoints" : <array of split points (optional)>, "numBins" : <number of bins (optional, defaults to 10)> } "},{"title":"Rank","type":1,"pageTitle":"DataSketches Quantiles Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-quantiles#rank","content":"This returns an approximation to the rank of a given value that is the fraction of the distribution less than that value. { "type" : "quantilesDoublesSketchToRank", "name": <output name>, "field" : <post aggregator that refers to a DoublesSketch (fieldAccess or another post aggregator)>, "value" : <value> } "},{"title":"CDF","type":1,"pageTitle":"DataSketches Quantiles Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-quantiles#cdf","content":"This returns an approximation to the Cumulative Distribution Function given an array of split points that define the edges of the bins. An array of m unique, monotonically increasing split points divide the real number line into m+1 consecutive disjoint intervals. The definition of an interval is inclusive of the left split point and exclusive of the right split point. 
The resulting array of fractions can be viewed as ranks of each split point with one additional rank that is always 1. { "type" : "quantilesDoublesSketchToCDF", "name": <output name>, "field" : <post aggregator that refers to a DoublesSketch (fieldAccess or another post aggregator)>, "splitPoints" : <array of split points> } "},{"title":"Sketch summary","type":1,"pageTitle":"DataSketches Quantiles Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-quantiles#sketch-summary","content":"This returns a summary of the sketch that can be used for debugging. This is the result of calling the toString() method. { "type" : "quantilesDoublesSketchToString", "name": <output name>, "field" : <post aggregator that refers to a DoublesSketch (fieldAccess or another post aggregator)> } "},{"title":"Approximate Histogram aggregators","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/approximate-histograms","content":"","keywords":""},{"title":"Approximate Histogram aggregator (Deprecated)","type":1,"pageTitle":"Approximate Histogram aggregators","url":"/docs/27.0.0/development/extensions-core/approximate-histograms#approximate-histogram-aggregator-deprecated","content":"info The Approximate Histogram aggregator is deprecated. Please use DataSketches Quantiles instead, which provides a superior distribution-independent algorithm with formal error guarantees. This aggregator is based on http://jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf to compute approximate histograms, with the following modifications: some tradeoffs in accuracy were made in the interest of speed (see below); the sketch maintains the exact original data as long as the number of distinct data points is fewer than the resolution (number of centroids), increasing accuracy when there are few data points, or when dealing with discrete data points. You can find some of the details in this post. Here are a few things to note before using approximate histograms: As indicated in the original paper, there are no formal error bounds on the approximation. In practice, the approximation gets worse if the distribution is skewed. The algorithm is order-dependent, so results can vary for the same query, due to variations in the order in which results are merged. In general, the algorithm only works well if the data that comes in is randomly distributed (i.e., if data points end up sorted in a column, the approximation will be horrible). We traded accuracy for aggregation speed, taking some shortcuts when adding histograms together, which can lead to pathological cases if your data is ordered in some way, or if your distribution has long tails. It should be cheaper to increase the resolution of the sketch to get the accuracy you need. That being said, those sketches can be useful to get a first order approximation when averages are not good enough. Assuming most rows in your segment store fewer data points than the resolution of the histogram, you should be able to use them for monitoring purposes and detect meaningful variations with a few hundred centroids. To get good accuracy readings on 95th percentiles with millions of rows of data, you may want to use several thousand centroids, especially with long tails, since that's where the approximation will be worse.
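As a hedged illustration of that guidance, using the aggregator form described in the next section (the column name request_time_histogram is made up), a query-time aggregator tuned for long-tailed 95th percentile readings might raise the resolution well above the default of 50: { "type" : "approxHistogramFold", "name" : "request_time_histogram", "fieldName" : "request_time_histogram", "resolution" : 5000 }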
"},{"title":"Creating approximate histogram sketches at ingestion time","type":1,"pageTitle":"Approximate Histogram aggregators","url":"/docs/27.0.0/development/extensions-core/approximate-histograms#creating-approximate-histogram-sketches-at-ingestion-time","content":"To use this feature, an "approxHistogram" or "approxHistogramFold" aggregator must be included at indexing time. The ingestion aggregator can only apply to numeric values. If you use "approxHistogram" then any input rows missing the value will be considered to have a value of 0, while with "approxHistogramFold" such rows will be ignored. To query for results, an "approxHistogramFold" aggregator must be included in the query. { "type" : "approxHistogram or approxHistogramFold (at ingestion time), approxHistogramFold (at query time)", "name" : <output_name>, "fieldName" : <metric_name>, "resolution" : <integer>, "numBuckets" : <integer>, "lowerLimit" : <float>, "upperLimit" : <float> } Property\tDescription\tDefaultresolution\tNumber of centroids (data points) to store. The higher the resolution, the more accurate results are, but the slower the computation will be.\t50 numBuckets\tNumber of output buckets for the resulting histogram. Bucket intervals are dynamic, based on the range of the underlying data. Use a post-aggregator to have finer control over the bucketing scheme\t7 lowerLimit/upperLimit\tRestrict the approximation to the given range. The values outside this range will be aggregated into two centroids. Counts of values outside this range are still maintained.\t-INF/+INF finalizeAsBase64Binary\tIf true, the finalized aggregator value will be a Base64-encoded byte array containing the serialized form of the histogram. If false, the finalized aggregator value will be a JSON representation of the histogram.\tfalse "},{"title":"Fixed Buckets Histogram","type":1,"pageTitle":"Approximate Histogram aggregators","url":"/docs/27.0.0/development/extensions-core/approximate-histograms#fixed-buckets-histogram","content":"The fixed buckets histogram aggregator builds a histogram on a numeric column, with evenly-sized buckets across a specified value range. Values outside of the range are handled based on a user-specified outlier handling mode. This histogram supports the min/max/quantiles post-aggregators but does not support the bucketing post-aggregators. "},{"title":"When to use","type":1,"pageTitle":"Approximate Histogram aggregators","url":"/docs/27.0.0/development/extensions-core/approximate-histograms#when-to-use","content":"The accuracy/usefulness of the fixed buckets histogram is extremely data-dependent; it is provided to support special use cases where the user has a great deal of prior information about the data being aggregated and knows that a fixed buckets implementation is suitable. For general histogram and quantile use cases, the DataSketches Quantiles Sketch extension is recommended. "},{"title":"Properties","type":1,"pageTitle":"Approximate Histogram aggregators","url":"/docs/27.0.0/development/extensions-core/approximate-histograms#properties","content":"Property\tDescription\tDefaulttype\tType of the aggregator. 
Must be fixedBucketsHistogram.\tNo default, must be specified name\tColumn name for the aggregator.\tNo default, must be specified fieldName\tColumn name of the input to the aggregator.\tNo default, must be specified lowerLimit\tLower limit of the histogram.\tNo default, must be specified upperLimit\tUpper limit of the histogram.\tNo default, must be specified numBuckets\tNumber of buckets for the histogram. The range [lowerLimit, upperLimit] will be divided into numBuckets intervals of equal size.\t10 outlierHandlingMode\tSpecifies how values outside of [lowerLimit, upperLimit] will be handled. Supported modes are "ignore", "overflow", and "clip". See outlier handling modes for more details.\tNo default, must be specified finalizeAsBase64Binary\tIf true, the finalized aggregator value will be a Base64-encoded byte array containing the serialized form of the histogram. If false, the finalized aggregator value will be a JSON representation of the histogram.\tfalse An example aggregator spec is shown below: { "type" : "fixedBucketsHistogram", "name" : <output_name>, "fieldName" : <metric_name>, "numBuckets" : <integer>, "lowerLimit" : <double>, "upperLimit" : <double>, "outlierHandlingMode": <mode> } "},{"title":"Outlier handling modes","type":1,"pageTitle":"Approximate Histogram aggregators","url":"/docs/27.0.0/development/extensions-core/approximate-histograms#outlier-handling-modes","content":"The outlier handling mode specifies what should be done with values outside of the histogram's range. There are three supported modes: ignore: Throw away outlier values. overflow: A count of outlier values will be tracked by the histogram, available in the lowerOutlierCount and upperOutlierCount fields. clip: Outlier values will be clipped to the lowerLimit or the upperLimit and included in the histogram. If you don't care about outliers, ignore is the cheapest option performance-wise. There is currently no difference in storage size among the modes. "},{"title":"Output fields","type":1,"pageTitle":"Approximate Histogram aggregators","url":"/docs/27.0.0/development/extensions-core/approximate-histograms#output-fields","content":"The histogram aggregator's output object has the following fields: lowerLimit: Lower limit of the histogram. upperLimit: Upper limit of the histogram. numBuckets: Number of histogram buckets. outlierHandlingMode: Outlier handling mode. count: Total number of values contained in the histogram, excluding outliers. lowerOutlierCount: Count of outlier values below lowerLimit. Only used if the outlier mode is overflow. upperOutlierCount: Count of outlier values above upperLimit. Only used if the outlier mode is overflow. missingValueCount: Count of null values seen by the histogram. max: Max value seen by the histogram. This does not include outlier values. min: Min value seen by the histogram. This does not include outlier values. histogram: An array of longs with size numBuckets, containing the bucket counts "},{"title":"Ingesting existing histograms","type":1,"pageTitle":"Approximate Histogram aggregators","url":"/docs/27.0.0/development/extensions-core/approximate-histograms#ingesting-existing-histograms","content":"It is also possible to ingest existing fixed buckets histograms. The input must be a Base64 string encoding a byte array that contains a serialized histogram object. Both "full" and "sparse" formats can be used. Please see Serialization formats below for details.
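One plausible shape for this at ingestion time, assuming the Base64-encoded histograms arrive in an input column named histogram_base64 (a hypothetical name) and that the same fixedBucketsHistogram aggregator is applied to that column in the metricsSpec: { "type" : "fixedBucketsHistogram", "name" : "request_time_histogram", "fieldName" : "histogram_base64", "numBuckets" : 10, "lowerLimit" : 0.0, "upperLimit" : 1000.0, "outlierHandlingMode" : "overflow" }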
"},{"title":"Serialization formats","type":1,"pageTitle":"Approximate Histogram aggregators","url":"/docs/27.0.0/development/extensions-core/approximate-histograms#serialization-formats","content":"Full serialization format This format includes the full histogram bucket count array in the serialization format. byte: serialization version, must be 0x01 byte: encoding mode, 0x01 for full double: lowerLimit double: upperLimit int: numBuckets byte: outlier handling mode (0x00 for `ignore`, 0x01 for `overflow`, and 0x02 for `clip`) long: count, total number of values contained in the histogram, excluding outliers long: lowerOutlierCount long: upperOutlierCount long: missingValueCount double: max double: min array of longs: bucket counts for the histogram Sparse serialization format This format represents the histogram bucket counts as (bucketNum, count) pairs. This serialization format is used when less than half of the histogram's buckets have values. byte: serialization version, must be 0x01 byte: encoding mode, 0x02 for sparse double: lowerLimit double: upperLimit int: numBuckets byte: outlier handling mode (0x00 for `ignore`, 0x01 for `overflow`, and 0x02 for `clip`) long: count, total number of values contained in the histogram, excluding outliers long: lowerOutlierCount long: upperOutlierCount long: missingValueCount double: max double: min int: number of following (bucketNum, count) pairs sequence of (int, long) pairs: int: bucket number count: bucket count "},{"title":"Combining histograms with different bucketing schemes","type":1,"pageTitle":"Approximate Histogram aggregators","url":"/docs/27.0.0/development/extensions-core/approximate-histograms#combining-histograms-with-different-bucketing-schemes","content":"It is possible to combine two histograms with different bucketing schemes (lowerLimit, upperLimit, numBuckets) together. The bucketing scheme of the "left hand" histogram will be preserved (i.e., when running a query, the bucketing schemes specified in the query's histogram aggregators will be preserved). When merging, we assume that values are evenly distributed within the buckets of the "right hand" histogram. When the right-hand histogram contains outliers (when using overflow mode), we assume that all of the outliers counted in the right-hand histogram will be outliers in the left-hand histogram as well. For performance and accuracy reasons, we recommend avoiding aggregation of histograms with different bucketing schemes if possible. "},{"title":"Null handling","type":1,"pageTitle":"Approximate Histogram aggregators","url":"/docs/27.0.0/development/extensions-core/approximate-histograms#null-handling","content":"If druid.generic.useDefaultValueForNull is false, null values will be tracked in the missingValueCount field of the histogram. If druid.generic.useDefaultValueForNull is true, null values will be added to the histogram as the default 0.0 value. "},{"title":"Histogram post-aggregators","type":1,"pageTitle":"Approximate Histogram aggregators","url":"/docs/27.0.0/development/extensions-core/approximate-histograms#histogram-post-aggregators","content":"Post-aggregators are used to transform opaque approximate histogram sketches into bucketed histogram representations, as well as to compute various distribution metrics such as quantiles, min, and max. 
"},{"title":"Equal buckets post-aggregator","type":1,"pageTitle":"Approximate Histogram aggregators","url":"/docs/27.0.0/development/extensions-core/approximate-histograms#equal-buckets-post-aggregator","content":"Computes a visual representation of the approximate histogram with a given number of equal-sized bins. Bucket intervals are based on the range of the underlying data. This aggregator is not supported for the fixed buckets histogram. { "type": "equalBuckets", "name": "<output_name>", "fieldName": "<aggregator_name>", "numBuckets": <count> } "},{"title":"Buckets post-aggregator","type":1,"pageTitle":"Approximate Histogram aggregators","url":"/docs/27.0.0/development/extensions-core/approximate-histograms#buckets-post-aggregator","content":"Computes a visual representation given an initial breakpoint, offset, and a bucket size. Bucket size determines the width of the binning interval. Offset determines the value on which those interval bins align. This aggregator is not supported for the fixed buckets histogram. { "type": "buckets", "name": "<output_name>", "fieldName": "<aggregator_name>", "bucketSize": <bucket_size>, "offset": <offset> } "},{"title":"Custom buckets post-aggregator","type":1,"pageTitle":"Approximate Histogram aggregators","url":"/docs/27.0.0/development/extensions-core/approximate-histograms#custom-buckets-post-aggregator","content":"Computes a visual representation of the approximate histogram with bins laid out according to the given breaks. This aggregator is not supported for the fixed buckets histogram. { "type" : "customBuckets", "name" : <output_name>, "fieldName" : <aggregator_name>, "breaks" : [ <value>, <value>, ... ] } "},{"title":"min post-aggregator","type":1,"pageTitle":"Approximate Histogram aggregators","url":"/docs/27.0.0/development/extensions-core/approximate-histograms#min-post-aggregator","content":"Returns the minimum value of the underlying approximate or fixed buckets histogram aggregator { "type" : "min", "name" : <output_name>, "fieldName" : <aggregator_name> } "},{"title":"max post-aggregator","type":1,"pageTitle":"Approximate Histogram aggregators","url":"/docs/27.0.0/development/extensions-core/approximate-histograms#max-post-aggregator","content":"Returns the maximum value of the underlying approximate or fixed buckets histogram aggregator { "type" : "max", "name" : <output_name>, "fieldName" : <aggregator_name> } quantile post-aggregator Computes a single quantile based on the underlying approximate or fixed buckets histogram aggregator { "type" : "quantile", "name" : <output_name>, "fieldName" : <aggregator_name>, "probability" : <quantile> } quantiles post-aggregator Computes an array of quantiles based on the underlying approximate or fixed buckets histogram aggregator { "type" : "quantiles", "name" : <output_name>, "fieldName" : <aggregator_name>, "probabilities" : [ <quantile>, <quantile>, ... ] } "},{"title":"Dropwizard metrics emitter","type":0,"sectionRef":"#","url":"/docs/27.0.0/design/extensions-contrib/dropwizard","content":"","keywords":""},{"title":"Introduction","type":1,"pageTitle":"Dropwizard metrics emitter","url":"/docs/27.0.0/design/extensions-contrib/dropwizard#introduction","content":"This extension integrates Dropwizard metrics library with druid so that dropwizard users can easily absorb druid into their monitoring ecosystem. It accumulates druid metrics as dropwizard metrics, and emits them to various sinks via dropwizard supported reporters. 
The currently supported Dropwizard metric types are counter, gauge, meter, timer, and histogram. These metrics can be emitted using either the Console or the JMX reporter. To use this emitter, set druid.emitter=dropwizard "},{"title":"Configuration","type":1,"pageTitle":"Dropwizard metrics emitter","url":"/docs/27.0.0/design/extensions-contrib/dropwizard#configuration","content":"All the configuration parameters for the Dropwizard emitter are under druid.emitter.dropwizard. property\tdescription\trequired?\tdefaultdruid.emitter.dropwizard.reporters\tList of Dropwizard reporters to be used. Here is a list of Supported Reporters\tyes\tnone druid.emitter.dropwizard.prefix\tOptional prefix to be used for metric names\tno\tnone druid.emitter.dropwizard.includeHost\tFlag to include the host and port as part of the metric name.\tno\tyes druid.emitter.dropwizard.dimensionMapPath\tPath to JSON file defining the Dropwizard metric type and desired dimensions for every Druid metric\tno\tDefault mapping provided. See below. druid.emitter.dropwizard.alertEmitters\tList of emitters where alerts will be forwarded to.\tno\tempty list (no forwarding) druid.emitter.dropwizard.maxMetricsRegistrySize\tMaximum size of metrics registry to be cached at any time.\tno\t100 Mb "},{"title":"Druid to Dropwizard Event Conversion","type":1,"pageTitle":"Dropwizard metrics emitter","url":"/docs/27.0.0/design/extensions-contrib/dropwizard#druid-to-dropwizard-event-conversion","content":"Each metric emitted using Dropwizard must specify a type, one of [timer, counter, gauge, meter, histogram]. The Dropwizard emitter expects this mapping to be provided as a JSON file. Additionally, this mapping specifies which dimensions should be included for each metric. If the user does not specify their own JSON file, a default mapping is used. All metrics are expected to be mapped. Metrics which are not mapped will be ignored. The Dropwizard metric path is organized using the following schema: <druid metric name> : { "dimensions" : <dimension list>, "type" : <Dropwizard metric type>, "timeUnit" : <For timers, timeunit in which metric is emitted>} e.g. "query/time" : { "dimensions" : ["dataSource", "type"], "type" : "timer", "timeUnit": "MILLISECONDS"}, "segment/scan/pending" : { "dimensions" : [], "type" : "gauge"} For most use-cases, the default mapping is sufficient. "},{"title":"Supported Dropwizard reporters","type":1,"pageTitle":"Dropwizard metrics emitter","url":"/docs/27.0.0/design/extensions-contrib/dropwizard#supported-dropwizard-reporters","content":"JMX Reporter Used to report Druid metrics via JMX. druid.emitter.dropwizard.reporters=[{"type":"jmx"}] Console Reporter Used to print Druid metrics to console logs.
druid.emitter.dropwizard.reporters=[{"type":"console","emitIntervalInSecs":30}"}] "},{"title":"Default Metrics Mapping","type":1,"pageTitle":"Dropwizard metrics emitter","url":"/docs/27.0.0/design/extensions-contrib/dropwizard#default-metrics-mapping","content":"Latest default metrics mapping can be found [here] (https://github.com/apache/druid/blob/master/extensions-contrib/dropwizard-emitter/src/main/resources/defaultMetricDimensions.json) { "query/time": { "dimensions": [ "dataSource", "type" ], "type": "timer", "timeUnit": "MILLISECONDS" }, "query/node/time": { "dimensions": [ "server" ], "type": "timer", "timeUnit": "MILLISECONDS" }, "query/node/ttfb": { "dimensions": [ "server" ], "type": "timer", "timeUnit": "MILLISECONDS" }, "query/node/backpressure": { "dimensions": [ "server" ], "type": "timer", "timeUnit": "MILLISECONDS" }, "query/segment/time": { "dimensions": [], "type": "timer", "timeUnit": "MILLISECONDS" }, "query/wait/time": { "dimensions": [], "type": "timer", "timeUnit": "MILLISECONDS" }, "segment/scan/pending": { "dimensions": [], "type": "gauge" }, "query/segmentAndCache/time": { "dimensions": [], "type": "timer", "timeUnit": "MILLISECONDS" }, "query/cpu/time": { "dimensions": [ "dataSource", "type" ], "type": "timer", "timeUnit": "NANOSECONDS" }, "query/cache/delta/numEntries": { "dimensions": [], "type": "counter" }, "query/cache/delta/sizeBytes": { "dimensions": [], "type": "counter" }, "query/cache/delta/hits": { "dimensions": [], "type": "counter" }, "query/cache/delta/misses": { "dimensions": [], "type": "counter" }, "query/cache/delta/evictions": { "dimensions": [], "type": "counter" }, "query/cache/delta/hitRate": { "dimensions": [], "type": "counter" }, "query/cache/delta/averageBytes": { "dimensions": [], "type": "counter" }, "query/cache/delta/timeouts": { "dimensions": [], "type": "counter" }, "query/cache/delta/errors": { "dimensions": [], "type": "counter" }, "query/cache/total/numEntries": { "dimensions": [], "type": "gauge" }, "query/cache/total/sizeBytes": { "dimensions": [], "type": "gauge" }, "query/cache/total/hits": { "dimensions": [], "type": "gauge" }, "query/cache/total/misses": { "dimensions": [], "type": "gauge" }, "query/cache/total/evictions": { "dimensions": [], "type": "gauge" }, "query/cache/total/hitRate": { "dimensions": [], "type": "gauge" }, "query/cache/total/averageBytes": { "dimensions": [], "type": "gauge" }, "query/cache/total/timeouts": { "dimensions": [], "type": "gauge" }, "query/cache/total/errors": { "dimensions": [], "type": "gauge" }, "ingest/events/thrownAway": { "dimensions": [ "dataSource" ], "type": "counter" }, "ingest/events/unparseable": { "dimensions": [ "dataSource" ], "type": "counter" }, "ingest/events/duplicate": { "dimensions": [ "dataSource" ], "type": "counter" }, "ingest/events/processed": { "dimensions": [ "dataSource" ], "type": "counter" }, "ingest/rows/output": { "dimensions": [ "dataSource" ], "type": "counter" }, "ingest/persist/counter": { "dimensions": [ "dataSource" ], "type": "counter" }, "ingest/persist/time": { "dimensions": [ "dataSource" ], "type": "timer", "timeUnit": "MILLISECONDS" }, "ingest/persist/cpu": { "dimensions": [ "dataSource" ], "type": "timer", "timeUnit": "NANOSECONDS" }, "ingest/persist/backPressure": { "dimensions": [ "dataSource" ], "type": "gauge" }, "ingest/persist/failed": { "dimensions": [ "dataSource" ], "type": "counter" }, "ingest/handoff/failed": { "dimensions": [ "dataSource" ], "type": "counter" }, "ingest/merge/time": { "dimensions": [ "dataSource" ], "type": 
"timer", "timeUnit": "MILLISECONDS" }, "ingest/merge/cpu": { "dimensions": [ "dataSource" ], "type": "timer", "timeUnit": "NANOSECONDS" }, "task/run/time": { "dimensions": [ "dataSource", "taskType" ], "type": "timer", "timeUnit": "MILLISECONDS" }, "segment/added/bytes": { "dimensions": [ "dataSource", "taskType" ], "type": "counter" }, "segment/moved/bytes": { "dimensions": [ "dataSource", "taskType" ], "type": "counter" }, "segment/nuked/bytes": { "dimensions": [ "dataSource", "taskType" ], "type": "counter" }, "segment/assigned/counter": { "dimensions": [ "tier" ], "type": "counter" }, "segment/moved/counter": { "dimensions": [ "tier" ], "type": "counter" }, "segment/dropped/counter": { "dimensions": [ "tier" ], "type": "counter" }, "segment/deleted/counter": { "dimensions": [ "tier" ], "type": "counter" }, "segment/unneeded/counter": { "dimensions": [ "tier" ], "type": "counter" }, "segment/cost/raw": { "dimensions": [ "tier" ], "type": "counter" }, "segment/cost/normalization": { "dimensions": [ "tier" ], "type": "counter" }, "segment/cost/normalized": { "dimensions": [ "tier" ], "type": "counter" }, "segment/loadQueue/size": { "dimensions": [ "server" ], "type": "gauge" }, "segment/loadQueue/failed": { "dimensions": [ "server" ], "type": "gauge" }, "segment/loadQueue/counter": { "dimensions": [ "server" ], "type": "gauge" }, "segment/dropQueue/counter": { "dimensions": [ "server" ], "type": "gauge" }, "segment/size": { "dimensions": [ "dataSource" ], "type": "gauge" }, "segment/overShadowed/counter": { "dimensions": [], "type": "gauge" }, "segment/max": { "dimensions": [], "type": "gauge" }, "segment/used": { "dimensions": [ "dataSource", "tier", "priority" ], "type": "gauge" }, "segment/usedPercent": { "dimensions": [ "dataSource", "tier", "priority" ], "type": "gauge" }, "jvm/pool/committed": { "dimensions": [ "poolKind", "poolName" ], "type": "gauge" }, "jvm/pool/init": { "dimensions": [ "poolKind", "poolName" ], "type": "gauge" }, "jvm/pool/max": { "dimensions": [ "poolKind", "poolName" ], "type": "gauge" }, "jvm/pool/used": { "dimensions": [ "poolKind", "poolName" ], "type": "gauge" }, "jvm/bufferpool/counter": { "dimensions": [ "bufferpoolName" ], "type": "gauge" }, "jvm/bufferpool/used": { "dimensions": [ "bufferpoolName" ], "type": "gauge" }, "jvm/bufferpool/capacity": { "dimensions": [ "bufferpoolName" ], "type": "gauge" }, "jvm/mem/init": { "dimensions": [ "memKind" ], "type": "gauge" }, "jvm/mem/max": { "dimensions": [ "memKind" ], "type": "gauge" }, "jvm/mem/used": { "dimensions": [ "memKind" ], "type": "gauge" }, "jvm/mem/committed": { "dimensions": [ "memKind" ], "type": "gauge" }, "jvm/gc/counter": { "dimensions": [ "gcName", "gcGen" ], "type": "counter" }, "jvm/gc/cpu": { "dimensions": [ "gcName", "gcGen" ], "type": "timer", "timeUnit": "NANOSECONDS" }, "ingest/events/buffered": { "dimensions": [ "serviceName", "bufferCapacity" ], "type": "gauge" }, "sys/swap/free": { "dimensions": [], "type": "gauge" }, "sys/swap/max": { "dimensions": [], "type": "gauge" }, "sys/swap/pageIn": { "dimensions": [], "type": "gauge" }, "sys/swap/pageOut": { "dimensions": [], "type": "gauge" }, "sys/disk/write/counter": { "dimensions": [ "fsDevName" ], "type": "counter" }, "sys/disk/read/counter": { "dimensions": [ "fsDevName" ], "type": "counter" }, "sys/disk/write/size": { "dimensions": [ "fsDevName" ], "type": "counter" }, "sys/disk/read/size": { "dimensions": [ "fsDevName" ], "type": "counter" }, "sys/net/write/size": { "dimensions": [], "type": "counter" }, "sys/net/read/size": { 
"dimensions": [], "type": "counter" }, "sys/fs/used": { "dimensions": [ "fsDevName", "fsDirName", "fsTypeName", "fsSysTypeName", "fsOptions" ], "type": "gauge" }, "sys/fs/max": { "dimensions": [ "fsDevName", "fsDirName", "fsTypeName", "fsSysTypeName", "fsOptions" ], "type": "gauge" }, "sys/mem/used": { "dimensions": [], "type": "gauge" }, "sys/mem/max": { "dimensions": [], "type": "gauge" }, "sys/storage/used": { "dimensions": [ "fsDirName" ], "type": "gauge" }, "sys/cpu": { "dimensions": [ "cpuName", "cpuTime" ], "type": "gauge" }, "coordinator-segment/counter": { "dimensions": [ "dataSource" ], "type": "gauge" }, "historical-segment/counter": { "dimensions": [ "dataSource", "tier", "priority" ], "type": "gauge" }, "jetty/numOpenConnections": { "dimensions": [], "type": "gauge" }, "jetty/threadPool/total": { "dimensions": [], "type": "gauge" }, "jetty/threadPool/idle": { "dimensions": [], "type": "gauge" }, "jetty/threadPool/busy": { "dimensions": [], "type": "gauge" }, "jetty/threadPool/isLowOnThreads": { "dimensions": [], "type": "gauge" }, "jetty/threadPool/min": { "dimensions": [], "type": "gauge" }, "jetty/threadPool/max": { "dimensions": [], "type": "gauge" }, "jetty/threadPool/queueSize": { "dimensions": [], "type": "gauge" } } "},{"title":"DataSketches HLL Sketch module","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/datasketches-hll","content":"","keywords":""},{"title":"Aggregators","type":1,"pageTitle":"DataSketches HLL Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-hll#aggregators","content":"Property\tDescription\tRequired?type\tEither HLLSketchBuild or HLLSketchMerge.\tyes name\tString representing the output column to store sketch values.\tyes fieldName\tThe name of the input field.\tyes lgK\tlog2 of K that is the number of buckets in the sketch, parameter that controls the size and the accuracy. Must be between 4 and 21 inclusively.\tno, defaults to 12 tgtHllType\tThe type of the target HLL sketch. Must be HLL_4, HLL_6 or HLL_8\tno, defaults to HLL_4 round\tRound off values to whole numbers. Only affects query-time behavior and is ignored at ingestion-time.\tno, defaults to false shouldFinalize\tReturn the final double type representing the estimate rather than the intermediate sketch type itself. In addition to controlling the finalization of this aggregator, you can control whether all aggregators are finalized with the query context parameters finalize and sqlFinalizeOuterSketches.\tno, defaults to true info The default lgK value has proven to be sufficient for most use cases; expect only very negligible improvements in accuracy with lgK values over 16 in normal circumstances. "},{"title":"HLLSketchBuild aggregator","type":1,"pageTitle":"DataSketches HLL Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-hll#hllsketchbuild-aggregator","content":"{ "type": "HLLSketchBuild", "name": <output name>, "fieldName": <metric name>, "lgK": <size and accuracy parameter>, "tgtHllType": <target HLL type>, "round": <false | true> } The HLLSketchBuild aggregator builds an HLL sketch object from the specified input column. When used during ingestion, Druid stores pre-generated HLL sketch objects in the datasource instead of the raw data from the input column. When applied at query time on an existing dimension, you can use the resulting column as an intermediate dimension by the post-aggregators. 
info It is very common to use HLLSketchBuild in combination with rollup to create a metric on high-cardinality columns. In this example, a metric called userid_hll is included in the metricsSpec. This will perform a HLL sketch on the userid field at ingestion time, allowing for highly-performant approximate COUNT DISTINCT query operations and improving roll-up ratios when userid is then left out of the dimensionsSpec. "metricsSpec": [ { "type": "HLLSketchBuild", "name": "userid_hll", "fieldName": "userid", "lgK": 12, "tgtHllType": "HLL_4" } ] "},{"title":"HLLSketchMerge aggregator","type":1,"pageTitle":"DataSketches HLL Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-hll#hllsketchmerge-aggregator","content":"{ "type": "HLLSketchMerge", "name": <output name>, "fieldName": <metric name>, "lgK": <size and accuracy parameter>, "tgtHllType": <target HLL type>, "round": <false | true> } You can use the HLLSketchMerge aggregator to ingest pre-generated sketches from an input dataset. For example, you can set up a batch processing job to generate the sketches before sending the data to Druid. You must serialize the sketches in the input dataset to Base64-encoded bytes. Then, specify HLLSketchMerge for the input column in the native ingestion metricsSpec. "},{"title":"Post aggregators","type":1,"pageTitle":"DataSketches HLL Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-hll#post-aggregators","content":""},{"title":"Estimate","type":1,"pageTitle":"DataSketches HLL Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-hll#estimate","content":"Returns the distinct count estimate as a double. { "type": "HLLSketchEstimate", "name": <output name>, "field": <post aggregator that returns an HLL Sketch>, "round": <if true, round the estimate. Default is false> } "},{"title":"Estimate with bounds","type":1,"pageTitle":"DataSketches HLL Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-hll#estimate-with-bounds","content":"Returns a distinct count estimate and error bounds from an HLL sketch. The result will be an array containing three double values: estimate, lower bound and upper bound. The bounds are provided at a given number of standard deviations (optional, defaults to 1). This must be an integer value of 1, 2 or 3 corresponding to approximately 68.3%, 95.4% and 99.7% confidence intervals. { "type": "HLLSketchEstimateWithBounds", "name": <output name>, "field": <post aggregator that returns an HLL Sketch>, "numStdDev": <number of standard deviations: 1 (default), 2 or 3> } "},{"title":"Union","type":1,"pageTitle":"DataSketches HLL Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-hll#union","content":"{ "type": "HLLSketchUnion", "name": <output name>, "fields": <array of post aggregators that return HLL sketches>, "lgK": <log2 of K for the target sketch>, "tgtHllType": <target HLL type> } "},{"title":"Sketch to string","type":1,"pageTitle":"DataSketches HLL Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-hll#sketch-to-string","content":"Human-readable sketch summary for debugging. 
{ "type": "HLLSketchToString", "name": <output name>, "field": <post aggregator that returns an HLL Sketch> } "},{"title":"Bloom Filter","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/bloom-filter","content":"","keywords":""},{"title":"Filtering queries with a Bloom Filter","type":1,"pageTitle":"Bloom Filter","url":"/docs/27.0.0/development/extensions-core/bloom-filter#filtering-queries-with-a-bloom-filter","content":""},{"title":"JSON Specification of Bloom Filter","type":1,"pageTitle":"Bloom Filter","url":"/docs/27.0.0/development/extensions-core/bloom-filter#json-specification-of-bloom-filter","content":"{ "type" : "bloom", "dimension" : <dimension_name>, "bloomKFilter" : <serialized_bytes_for_BloomKFilter>, "extractionFn" : <extraction_fn> } Property\tDescription\trequired?type\tFilter Type. Should always be bloom\tyes dimension\tThe dimension to filter over.\tyes bloomKFilter\tBase64 encoded Binary representation of org.apache.hive.common.util.BloomKFilter\tyes extractionFn\tExtraction function to apply to the dimension values\tno "},{"title":"Serialized Format for BloomKFilter","type":1,"pageTitle":"Bloom Filter","url":"/docs/27.0.0/development/extensions-core/bloom-filter#serialized-format-for-bloomkfilter","content":"Serialized BloomKFilter format: 1 byte for the number of hash functions.1 big endian int(That is how OutputStream works) for the number of longs in the bitsetbig endian longs in the BloomKFilter bitset Note: org.apache.hive.common.util.BloomKFilter provides a serialize method which can be used to serialize bloom filters to outputStream. "},{"title":"Filtering SQL Queries","type":1,"pageTitle":"Bloom Filter","url":"/docs/27.0.0/development/extensions-core/bloom-filter#filtering-sql-queries","content":"Bloom filters can be used in SQL WHERE clauses via the bloom_filter_test operator: SELECT COUNT(*) FROM druid.foo WHERE bloom_filter_test(<expr>, '<serialized_bytes_for_BloomKFilter>') "},{"title":"Expression and Virtual Column Support","type":1,"pageTitle":"Bloom Filter","url":"/docs/27.0.0/development/extensions-core/bloom-filter#expression-and-virtual-column-support","content":"The bloom filter extension also adds a bloom filter Druid expression which shares syntax with the SQL operator. bloom_filter_test(<expr>, '<serialized_bytes_for_BloomKFilter>') "},{"title":"Bloom Filter Query Aggregator","type":1,"pageTitle":"Bloom Filter","url":"/docs/27.0.0/development/extensions-core/bloom-filter#bloom-filter-query-aggregator","content":"Input for a bloomKFilter can also be created from a druid query with the bloom aggregator. Note that it is very important to set a reasonable value for the maxNumEntries parameter, which is the maximum number of distinct entries that the bloom filter can represent without increasing the false positive rate. It may be worth performing a query using one of the unique count sketches to calculate the value for this parameter in order to build a bloom filter appropriate for the query. "},{"title":"JSON Specification of Bloom Filter Aggregator","type":1,"pageTitle":"Bloom Filter","url":"/docs/27.0.0/development/extensions-core/bloom-filter#json-specification-of-bloom-filter-aggregator","content":"{ "type": "bloom", "name": <output_field_name>, "maxNumEntries": <maximum_number_of_elements_for_BloomKFilter> "field": <dimension_spec> } Property\tDescription\trequired?type\tAggregator Type. 
Should always be bloom\tyes name\tOutput field name\tyes field\tDimensionSpec to add to org.apache.hive.common.util.BloomKFilter\tyes maxNumEntries\tMaximum number of distinct values supported by org.apache.hive.common.util.BloomKFilter, default 1500\tno "},{"title":"Example","type":1,"pageTitle":"Bloom Filter","url":"/docs/27.0.0/development/extensions-core/bloom-filter#example","content":"{ "queryType": "timeseries", "dataSource": "wikiticker", "intervals": [ "2015-09-12T00:00:00.000/2015-09-13T00:00:00.000" ], "granularity": "day", "aggregations": [ { "type": "bloom", "name": "userBloom", "maxNumEntries": 100000, "field": { "type":"default", "dimension":"user", "outputType": "STRING" } } ] } response [{"timestamp":"2015-09-12T00:00:00.000Z","result":{"userBloom":"BAAAJhAAAA..."}}] These values can then be set in the filter specification described above. Ordering results by a bloom filter aggregator, for example in a TopN query, will perform a comparatively expensive linear scan of the filter itself to count the number of set bits as a means of approximating how many items have been added to the set. As such, ordering by an alternate aggregation is recommended if possible. "},{"title":"SQL Bloom Filter Aggregator","type":1,"pageTitle":"Bloom Filter","url":"/docs/27.0.0/development/extensions-core/bloom-filter#sql-bloom-filter-aggregator","content":"Bloom filters can be computed in SQL expressions with the bloom_filter aggregator: SELECT BLOOM_FILTER(<expression>, <max number of entries>) FROM druid.foo WHERE dim2 = 'abc' but requires the setting druid.sql.planner.serializeComplexValues to be set to true. Bloom filter results in a SQL response are serialized into a base64 string, which can then be used in subsequent queries as a filter. "},{"title":"Druid AWS RDS Module","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/druid-aws-rds","content":"Druid AWS RDS Module AWS RDS is a managed service to operate relation databases such as PostgreSQL, Mysql etc. These databases could be accessed using static db password mechanism or via AWS IAM temporary tokens. This module provides AWS RDS token password provider implementation to be used with mysql-metadata-store or postgresql-metadata-store when mysql/postgresql is operated using AWS RDS. { "type": "aws-rds-token", "user": "USER", "host": "HOST", "port": PORT, "region": "AWS_REGION" } Before using this password provider, please make sure that you have connected all dots for db user to connect using token. See AWS Guide. To use this extension, make sure you include it in your config file along with other extensions e.g. druid.extensions.loadList=["druid-aws-rds-extensions", "postgresql-metadata-storage", ...] ","keywords":""},{"title":"DataSketches KLL Sketch module","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/datasketches-kll","content":"","keywords":""},{"title":"Aggregator","type":1,"pageTitle":"DataSketches KLL Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-kll#aggregator","content":"The result of the aggregation is a KllFloatsSketch or KllDoublesSketch that is the union of all sketches either built from raw data or read from the segments. 
{ "type" : "KllDoublesSketch", "name" : <output_name>, "fieldName" : <metric_name>, "k": <parameter that controls size and accuracy> } Property\tDescription\tRequired?type\tEither "KllFloatsSketch" or "KllDoublesSketch"\tyes name\tA String for the output (result) name of the calculation.\tyes fieldName\tString for the name of the input field, which may contain sketches or raw numeric values.\tyes k\tParameter that determines the accuracy and size of the sketch. Higher k means higher accuracy but more space to store sketches. Must be from 8 to 65535. See KLL Sketch Accuracy and Size.\tno, defaults to 200 maxStreamLength\tThis parameter defines the number of items that can be presented to each sketch before it may need to move from off-heap to on-heap memory. This is relevant to query types that use off-heap memory, including TopN and GroupBy. Ideally, should be set high enough such that most sketches can stay off-heap.\tno, defaults to 1000000000 "},{"title":"Post aggregators","type":1,"pageTitle":"DataSketches KLL Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-kll#post-aggregators","content":""},{"title":"Quantile","type":1,"pageTitle":"DataSketches KLL Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-kll#quantile","content":"This returns an approximation to the value that would be preceded by a given fraction of a hypothetical sorted version of the input stream. { "type" : "KllDoublesSketchToQuantile", "name": <output name>, "field" : <post aggregator that refers to a KllDoublesSketch (fieldAccess or another post aggregator)>, "fraction" : <fractional position in the hypothetical sorted stream, number from 0 to 1 inclusive> } "},{"title":"Quantiles","type":1,"pageTitle":"DataSketches KLL Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-kll#quantiles","content":"This returns an array of quantiles corresponding to a given array of fractions { "type" : "KllDoublesSketchToQuantiles", "name": <output name>, "field" : <post aggregator that refers to a KllDoublesSketch (fieldAccess or another post aggregator)>, "fractions" : <array of fractional positions in the hypothetical sorted stream, number from 0 to 1 inclusive> } "},{"title":"Histogram","type":1,"pageTitle":"DataSketches KLL Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-kll#histogram","content":"This returns an approximation to the histogram given an array of split points that define the histogram bins or a number of bins (not both). An array of m unique, monotonically increasing split points divide the real number line into m+1 consecutive disjoint intervals. The definition of an interval is inclusive of the left split point and exclusive of the right split point. If the number of bins is specified instead of split points, the interval between the minimum and maximum values is divided into the given number of equally-spaced bins. { "type" : "KllDoublesSketchToHistogram", "name": <output name>, "field" : <post aggregator that refers to a KllDoublesSketch (fieldAccess or another post aggregator)>, "splitPoints" : <array of split points (optional)>, "numBins" : <number of bins (optional, defaults to 10)> } "},{"title":"Rank","type":1,"pageTitle":"DataSketches KLL Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-kll#rank","content":"This returns an approximation to the rank of a given value that is the fraction of the distribution less than that value. 
{ "type" : "KllDoublesSketchToRank", "name": <output name>, "field" : <post aggregator that refers to a KllDoublesSketch (fieldAccess or another post aggregator)>, "value" : <value> } "},{"title":"CDF","type":1,"pageTitle":"DataSketches KLL Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-kll#cdf","content":"This returns an approximation to the Cumulative Distribution Function given an array of split points that define the edges of the bins. An array of m unique, monotonically increasing split points divide the real number line into m+1 consecutive disjoint intervals. The definition of an interval is inclusive of the left split point and exclusive of the right split point. The resulting array of fractions can be viewed as ranks of each split point with one additional rank that is always 1. { "type" : "KllDoublesSketchToCDF", "name": <output name>, "field" : <post aggregator that refers to a KllDoublesSketch (fieldAccess or another post aggregator)>, "splitPoints" : <array of split points> } "},{"title":"Sketch Summary","type":1,"pageTitle":"DataSketches KLL Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-kll#sketch-summary","content":"This returns a summary of the sketch that can be used for debugging. This is the result of calling toString() method. { "type" : "KllDoublesSketchToString", "name": <output name>, "field" : <post aggregator that refers to a KllDoublesSketch (fieldAccess or another post aggregator)> } "},{"title":"Druid pac4j based Security extension","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/druid-pac4j","content":"","keywords":""},{"title":"Configuration","type":1,"pageTitle":"Druid pac4j based Security extension","url":"/docs/27.0.0/development/extensions-core/druid-pac4j#configuration","content":""},{"title":"Creating an Authenticator","type":1,"pageTitle":"Druid pac4j based Security extension","url":"/docs/27.0.0/development/extensions-core/druid-pac4j#creating-an-authenticator","content":"#Create a pac4j web user authenticator druid.auth.authenticatorChain=["pac4j"] druid.auth.authenticator.pac4j.type=pac4j #Create a JWT token authenticator druid.auth.authenticatorChain=["jwt"] druid.auth.authenticator.jwt.type=jwt "},{"title":"Properties","type":1,"pageTitle":"Druid pac4j based Security extension","url":"/docs/27.0.0/development/extensions-core/druid-pac4j#properties","content":"Property\tDescription\tDefault\trequireddruid.auth.pac4j.cookiePassphrase\tpassphrase for encrypting the cookies used to manage authentication session with browser. It can be provided as plaintext string or The Password Provider.\tnone\tYes druid.auth.pac4j.readTimeout\tSocket connect and read timeout duration used when communicating with authentication server\tPT5S\tNo druid.auth.pac4j.enableCustomSslContext\tWhether to use custom SSLContext setup via simple-client-sslcontext extension which must be added to extensions list when this property is set to true.\tfalse\tNo druid.auth.pac4j.oidc.clientID\tOAuth Client Application id.\tnone\tYes druid.auth.pac4j.oidc.clientSecret\tOAuth Client Application secret. 
It can be provided as plaintext string or The Password Provider.\tnone\tYes druid.auth.pac4j.oidc.discoveryURI\tdiscovery URI for fetching OP metadata see this.\tnone\tYes druid.auth.pac4j.oidc.oidcClaim\tclaim that will be extracted from the ID Token after validation.\tname\tNo druid.auth.pac4j.oidc.scope\tscope is used by an application during authentication to authorize access to a user's details\topenid profile email\tNo "},{"title":"Kerberos","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/druid-kerberos","content":"","keywords":""},{"title":"Configuration","type":1,"pageTitle":"Kerberos","url":"/docs/27.0.0/development/extensions-core/druid-kerberos#configuration","content":""},{"title":"Creating an Authenticator","type":1,"pageTitle":"Kerberos","url":"/docs/27.0.0/development/extensions-core/druid-kerberos#creating-an-authenticator","content":"druid.auth.authenticatorChain=["MyKerberosAuthenticator"] druid.auth.authenticator.MyKerberosAuthenticator.type=kerberos To use the Kerberos authenticator, add an authenticator with type kerberos to the authenticatorChain. The example above uses the name "MyKerberosAuthenticator" for the Authenticator. Configuration of the named authenticator is assigned through properties with the form: druid.auth.authenticator.<authenticatorName>.<authenticatorProperty> The configuration examples in the rest of this document will use "kerberos" as the name of the authenticator being configured. "},{"title":"Properties","type":1,"pageTitle":"Kerberos","url":"/docs/27.0.0/development/extensions-core/druid-kerberos#properties","content":"Property\tPossible Values\tDescription\tDefault\trequireddruid.auth.authenticator.kerberos.serverPrincipal\tHTTP/_HOST@EXAMPLE.COM\tSPNEGO service principal used by druid processes\tempty\tYes druid.auth.authenticator.kerberos.serverKeytab\t/etc/security/keytabs/spnego.service.keytab\tSPNego service keytab used by druid processes\tempty\tYes druid.auth.authenticator.kerberos.authToLocal\tRULE:[1:$1@$0](druid@EXAMPLE.COM)s/.*/druid DEFAULT\tIt allows you to set a general rule for mapping principal names to local user names. It will be used if there is not an explicit mapping for the principal name that is being translated.\tDEFAULT\tNo druid.auth.authenticator.kerberos.cookieSignatureSecret\tsecretString\tSecret used to sign authentication cookies. It is advisable to explicitly set it, if you have multiple druid nodes running on same machine with different ports as the Cookie Specification does not guarantee isolation by port.\tRandom value\tNo druid.auth.authenticator.kerberos.authorizerName\tDepends on available authorizers\tAuthorizer that requests should be directed to\tEmpty\tYes As a note, it is required that the SPNego principal in use by the druid processes must start with HTTP (This specified by RFC-4559) and must be of the form "HTTP/_HOST@REALM". The special string _HOST will be replaced automatically with the value of config druid.host "},{"title":"druid.auth.authenticator.kerberos.excludedPaths","type":1,"pageTitle":"Kerberos","url":"/docs/27.0.0/development/extensions-core/druid-kerberos#druidauthauthenticatorkerberosexcludedpaths","content":"In older releases, the Kerberos authenticator had an excludedPaths property that allowed the user to specify a list of paths where authentication checks should be skipped. 
This property has been removed from the Kerberos authenticator because the path exclusion functionality is now handled across all authenticators/authorizers by setting druid.auth.unsecuredPaths, as described in the main auth documentation. "},{"title":"Auth to Local Syntax","type":1,"pageTitle":"Kerberos","url":"/docs/27.0.0/development/extensions-core/druid-kerberos#auth-to-local-syntax","content":"druid.auth.authenticator.kerberos.authToLocal allows you to set general rules for mapping principal names to local user names. The syntax for mapping rules is RULE:\\[n:string](regexp)s/pattern/replacement/g. The integer n indicates how many components the target principal should have. If this matches, then a string will be formed from string, substituting the realm of the principal for $0 and the nth component of the principal for $n. e.g. if the principal was druid/admin then \\[2:$2$1suffix] would result in the string admindruidsuffix. If this string matches regexp, then the s//[g] substitution command will be run over the string. The optional g will cause the substitution to be global over the string, instead of replacing only the first match in the string. If required, multiple rules can be joined by a newline character and specified as a String. "},{"title":"Increasing HTTP Header size for large SPNEGO negotiate header","type":1,"pageTitle":"Kerberos","url":"/docs/27.0.0/development/extensions-core/druid-kerberos#increasing-http-header-size-for-large-spnego-negotiate-header","content":"In an Active Directory environment, the SPNEGO token in the Authorization header includes PAC (Privilege Attribute Certificate) information, which includes all security groups for the user. In some cases, when the user belongs to many security groups, the header can grow beyond what Druid can handle by default. In such cases, the maximum request header size that Druid can handle can be increased by setting druid.server.http.maxRequestHeaderSize (default 8KiB) and druid.router.http.maxRequestBufferSize (default 8KiB). "},{"title":"Configuring Kerberos Escalated Client","type":1,"pageTitle":"Kerberos","url":"/docs/27.0.0/development/extensions-core/druid-kerberos#configuring-kerberos-escalated-client","content":"Druid internal processes communicate with each other using an escalated HTTP client. 
A Kerberos enabled escalated HTTP Client can be configured by following properties - Property\tExample Values\tDescription\tDefault\trequireddruid.escalator.type\tkerberos\tType of Escalator client used for internal process communication.\tn/a\tYes druid.escalator.internalClientPrincipal\tdruid@EXAMPLE.COM\tPrincipal user name, used for internal process communication\tn/a\tYes druid.escalator.internalClientKeytab\t/etc/security/keytabs/druid.keytab\tPath to keytab file used for internal process communication\tn/a\tYes druid.escalator.authorizerName\tMyBasicAuthorizer\tAuthorizer that requests should be directed to.\tn/a\tYes "},{"title":"Accessing Druid HTTP end points when kerberos security is enabled","type":1,"pageTitle":"Kerberos","url":"/docs/27.0.0/development/extensions-core/druid-kerberos#accessing-druid-http-end-points-when-kerberos-security-is-enabled","content":"To access druid HTTP endpoints via curl user will need to first login using kinit command as follows - kinit -k -t <path_to_keytab_file> user@REALM.COM Once the login is successful verify that login is successful using klist command Now you can access druid HTTP endpoints using curl command as follows - curl --negotiate -u:anyUser -b ~/cookies.txt -c ~/cookies.txt -X POST -H'Content-Type: application/json' <HTTP_END_POINT> e.g to send a query from file query.json to the Druid Broker use this command - curl --negotiate -u:anyUser -b ~/cookies.txt -c ~/cookies.txt -X POST -H'Content-Type: application/json' http://broker-host:port/druid/v2/?pretty -d @query.json Note: Above command will authenticate the user first time using SPNego negotiate mechanism and store the authentication cookie in file. For subsequent requests the cookie will be used for authentication. "},{"title":"Accessing Coordinator or Overlord console from web browser","type":1,"pageTitle":"Kerberos","url":"/docs/27.0.0/development/extensions-core/druid-kerberos#accessing-coordinator-or-overlord-console-from-web-browser","content":"To access Coordinator/Overlord console from browser you will need to configure your browser for SPNego authentication as follows - Safari - No configurations required.Firefox - Open firefox and follow these steps - Go to about:config and search for network.negotiate-auth.trusted-uris.Double-click and add the following values: "http://druid-coordinator-hostname:ui-port" and "http://druid-overlord-hostname:port" Google Chrome - From the command line run following commands - google-chrome --auth-server-whitelist="druid-coordinator-hostname" --auth-negotiate-delegate-whitelist="druid-coordinator-hostname"google-chrome --auth-server-whitelist="druid-overlord-hostname" --auth-negotiate-delegate-whitelist="druid-overlord-hostname" Internet Explorer - Configure trusted websites to include "druid-coordinator-hostname" and "druid-overlord-hostname"Allow negotiation for the UI website. "},{"title":"Sending Queries programmatically","type":1,"pageTitle":"Kerberos","url":"/docs/27.0.0/development/extensions-core/druid-kerberos#sending-queries-programmatically","content":"Many HTTP client libraries, such as Apache Commons HttpComponents, already have support for performing SPNEGO authentication. You can use any of the available HTTP client library to communicate with druid cluster. 
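For reference, the same SPNEGO flow can be driven from Python. This is a minimal sketch, assuming the third-party requests and requests-kerberos packages and an existing Kerberos ticket obtained with kinit; the broker address, port, datasource, and query below are hypothetical placeholders rather than part of the Druid distribution.

```python
# Minimal sketch: query a Kerberos-secured Druid Broker over SPNEGO.
# Assumes `pip install requests requests-kerberos` and a valid ticket (kinit).
import requests
from requests_kerberos import HTTPKerberosAuth, OPTIONAL

BROKER = "http://broker-host:8082"  # hypothetical Broker address

query = {
    "queryType": "timeseries",
    "dataSource": "wikiticker",      # hypothetical datasource
    "granularity": "day",
    "intervals": ["2015-09-12/2015-09-13"],
    "aggregations": [{"type": "count", "name": "rows"}],
}

# The session keeps the authentication cookie returned by Druid, so only the
# first request performs the SPNEGO negotiation (mirroring the curl example).
session = requests.Session()
session.auth = HTTPKerberosAuth(mutual_authentication=OPTIONAL)

response = session.post(f"{BROKER}/druid/v2/?pretty", json=query, timeout=60)
response.raise_for_status()
print(response.json())
```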
"},{"title":"DataSketches Theta Sketch module","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/datasketches-theta","content":"","keywords":""},{"title":"Aggregator","type":1,"pageTitle":"DataSketches Theta Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-theta#aggregator","content":"{ "type" : "thetaSketch", "name" : <output_name>, "fieldName" : <metric_name>, "isInputThetaSketch": false, "size": 16384 } Property\tDescription\tRequired?type\tThis string should always be "thetaSketch"\tyes name\tString representing the output column to store sketch values.\tyes fieldName\tA string for the name of the aggregator used at ingestion time.\tyes isInputThetaSketch\tOnly set this to true at indexing time if your input data contains Theta sketch objects. This applies to cases when you use DataSketches outside of Druid, for example with Pig or Hive, to produce the data to ingest into Druid\tno, defaults to false size\tMust be a power of 2. Internally, size refers to the maximum number of entries sketch object retains. Higher size means higher accuracy but more space to store sketches. After you index with a particular size, Druid persists the sketch in segments. At query time you must use a size greater or equal to the ingested size. See the DataSketches site for details. The default is recommended for the majority of use cases.\tno, defaults to 16384 shouldFinalize\tReturn the final double type representing the estimate rather than the intermediate sketch type itself. In addition to controlling the finalization of this aggregator, you can control whether all aggregators are finalized with the query context parameters finalize and sqlFinalizeOuterSketches.\tno, defaults to true "},{"title":"Post aggregators","type":1,"pageTitle":"DataSketches Theta Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-theta#post-aggregators","content":""},{"title":"Sketch estimator","type":1,"pageTitle":"DataSketches Theta Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-theta#sketch-estimator","content":"{ "type" : "thetaSketchEstimate", "name": <output name>, "field" : <post aggregator of type fieldAccess that refers to a thetaSketch aggregator or that of type thetaSketchSetOp> } "},{"title":"Sketch operations","type":1,"pageTitle":"DataSketches Theta Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-theta#sketch-operations","content":"{ "type" : "thetaSketchSetOp", "name": <output name>, "func": <UNION|INTERSECT|NOT>, "fields" : <array of fieldAccess type post aggregators to access the thetaSketch aggregators or thetaSketchSetOp type post aggregators to allow arbitrary combination of set operations>, "size": <16384 by default, must be max of size from sketches in fields input> } "},{"title":"Sketch summary","type":1,"pageTitle":"DataSketches Theta Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-theta#sketch-summary","content":"This returns a summary of the sketch that can be used for debugging. This is the result of calling toString() method. 
{ "type" : "thetaSketchToString", "name": <output name>, "field" : <post aggregator that refers to a Theta sketch (fieldAccess or another post aggregator)> } "},{"title":"Constant Theta Sketch","type":1,"pageTitle":"DataSketches Theta Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-theta#constant-theta-sketch","content":"You can use the constant theta sketch post aggregator to add a Base64-encoded constant theta sketch value for use in other post-aggregators. For example, thetaSketchSetOp. { "type" : "thetaSketchConstant", "name": DESTINATION_COLUMN_NAME, "value" : CONSTANT_SKETCH_VALUE } "},{"title":"Example using a constant Theta Sketch","type":1,"pageTitle":"DataSketches Theta Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-theta#example-using-a-constant-theta-sketch","content":"Assume you have a datasource with a variety of a variety of users. Using filters and aggregation, you generate a theta sketch of all football fans. A third-party provider has provided a constant theta sketch of all cricket fans and you want to INTERSECT both cricket fans and football fans in a post-aggregation stage to identify users who are interested in both cricket. Then you want to use thetaSketchEstimate to calculate the number of unique users. { "type":"thetaSketchEstimate", "name":"football_cricket_users_count", "field":{ "type":"thetaSketchSetOp", "name":"football_cricket_fans_users_theta_sketch", "func":"INTERSECT", "fields":[ { "type":"fieldAccess", "fieldName":"football_fans_users_theta_sketch" }, { "type":"thetaSketchConstant", "name":"cricket_fans_users_theta_sketch", "value":"AgMDAAAazJMCAAAAAACAPzz9j7pWTMdROWGf15uY1nI=" } ] } } "},{"title":"Examples","type":1,"pageTitle":"DataSketches Theta Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-theta#examples","content":"Assuming, you have a dataset containing (timestamp, product, user_id). You want to answer questions like How many unique users visited product A? How many unique users visited both product A and product B? to answer above questions, you would index your data using following aggregator. { "type": "thetaSketch", "name": "user_id_sketch", "fieldName": "user_id" } then, sample query for, How many unique users visited product A? { "queryType": "groupBy", "dataSource": "test_datasource", "granularity": "ALL", "dimensions": [], "aggregations": [ { "type": "thetaSketch", "name": "unique_users", "fieldName": "user_id_sketch" } ], "filter": { "type": "selector", "dimension": "product", "value": "A" }, "intervals": [ "2014-10-19T00:00:00.000Z/2014-10-22T00:00:00.000Z" ] } sample query for, How many unique users visited both product A and B? 
{ "queryType": "groupBy", "dataSource": "test_datasource", "granularity": "ALL", "dimensions": [], "filter": { "type": "or", "fields": [ {"type": "selector", "dimension": "product", "value": "A"}, {"type": "selector", "dimension": "product", "value": "B"} ] }, "aggregations": [ { "type" : "filtered", "filter" : { "type" : "selector", "dimension" : "product", "value" : "A" }, "aggregator" : { "type": "thetaSketch", "name": "A_unique_users", "fieldName": "user_id_sketch" } }, { "type" : "filtered", "filter" : { "type" : "selector", "dimension" : "product", "value" : "B" }, "aggregator" : { "type": "thetaSketch", "name": "B_unique_users", "fieldName": "user_id_sketch" } } ], "postAggregations": [ { "type": "thetaSketchEstimate", "name": "final_unique_users", "field": { "type": "thetaSketchSetOp", "name": "final_unique_users_sketch", "func": "INTERSECT", "fields": [ { "type": "fieldAccess", "fieldName": "A_unique_users" }, { "type": "fieldAccess", "fieldName": "B_unique_users" } ] } } ], "intervals": [ "2014-10-19T00:00:00.000Z/2014-10-22T00:00:00.000Z" ] } "},{"title":"Retention analysis example","type":1,"pageTitle":"DataSketches Theta Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-theta#retention-analysis-example","content":"Suppose you want to answer a question like, "How many unique users performed a specific action in a particular time period and also performed another specific action in a different time period?" e.g., "How many unique users signed up in week 1, and purchased something in week 2?" Using the (timestamp, product, user_id) example dataset, data would be indexed with the following aggregator, like in the example above: { "type": "thetaSketch", "name": "user_id_sketch", "fieldName": "user_id" } The following query expresses: "Out of the unique users who visited Product A between 10/01/2014 and 10/07/2014, how many visited Product A again in the week of 10/08/2014 to 10/14/2014?" { "queryType": "groupBy", "dataSource": "test_datasource", "granularity": "ALL", "dimensions": [], "filter": { "type": "or", "fields": [ {"type": "selector", "dimension": "product", "value": "A"} ] }, "aggregations": [ { "type" : "filtered", "filter" : { "type" : "and", "fields" : [ { "type" : "selector", "dimension" : "product", "value" : "A" }, { "type" : "interval", "dimension" : "__time", "intervals" : ["2014-10-01T00:00:00.000Z/2014-10-07T00:00:00.000Z"] } ] }, "aggregator" : { "type": "thetaSketch", "name": "A_unique_users_week_1", "fieldName": "user_id_sketch" } }, { "type" : "filtered", "filter" : { "type" : "and", "fields" : [ { "type" : "selector", "dimension" : "product", "value" : "A" }, { "type" : "interval", "dimension" : "__time", "intervals" : ["2014-10-08T00:00:00.000Z/2014-10-14T00:00:00.000Z"] } ] }, "aggregator" : { "type": "thetaSketch", "name": "A_unique_users_week_2", "fieldName": "user_id_sketch" } }, ], "postAggregations": [ { "type": "thetaSketchEstimate", "name": "final_unique_users", "field": { "type": "thetaSketchSetOp", "name": "final_unique_users_sketch", "func": "INTERSECT", "fields": [ { "type": "fieldAccess", "fieldName": "A_unique_users_week_1" }, { "type": "fieldAccess", "fieldName": "A_unique_users_week_2" } ] } } ], "intervals": ["2014-10-01T00:00:00.000Z/2014-10-14T00:00:00.000Z"] } "},{"title":"Extension Examples","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/examples","content":"Extension Examples This extension was removed in Apache Druid 0.16.0. 
In prior versions, the extension provided obsolete facilities to ingest data from the Twitter 'Spritzer' data stream as well as the Wikipedia changes IRC channel.","keywords":""},{"title":"Google Cloud Storage","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/google","content":"","keywords":""},{"title":"Google Cloud Storage Extension","type":1,"pageTitle":"Google Cloud Storage","url":"/docs/27.0.0/development/extensions-core/google#google-cloud-storage-extension","content":"This extension allows you to do 2 things: Ingest data from files stored in Google Cloud Storage.Write segments to deep storage in GCS. To use this Apache Druid extension, include druid-google-extensions in the extensions load list. "},{"title":"Required Configuration","type":1,"pageTitle":"Google Cloud Storage","url":"/docs/27.0.0/development/extensions-core/google#required-configuration","content":"To configure connectivity to google cloud, run druid processes with GOOGLE_APPLICATION_CREDENTIALS=/path/to/service_account_keyfile in the environment. "},{"title":"Reading data from Google Cloud Storage","type":1,"pageTitle":"Google Cloud Storage","url":"/docs/27.0.0/development/extensions-core/google#reading-data-from-google-cloud-storage","content":"The Google Cloud Storage input source is supported by the Parallel taskto read objects directly from Google Cloud Storage. If you use the Hadoop task, you can read data from Google Cloud Storage by specifying the paths in your inputSpec. "},{"title":"Deep Storage","type":1,"pageTitle":"Google Cloud Storage","url":"/docs/27.0.0/development/extensions-core/google#deep-storage","content":"Deep storage can be written to Google Cloud Storage either via this extension or the druid-hdfs-storage extension. Configuration To configure connectivity to google cloud, run druid processes with GOOGLE_APPLICATION_CREDENTIALS=/path/to/service_account_keyfile in the environment. Property\tDescription\tPossible Values\tDefaultdruid.storage.type\tgoogle Must be set. druid.google.bucket Google Storage bucket name.\tMust be set. druid.google.prefix\tA prefix string that will be prepended to the blob names for the segments published to Google deep storage "" druid.google.maxListingLength\tmaximum number of input files matching a given prefix to retrieve at a time 1024 "},{"title":"DataSketches Tuple Sketch module","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/datasketches-tuple","content":"","keywords":""},{"title":"Aggregator","type":1,"pageTitle":"DataSketches Tuple Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-tuple#aggregator","content":"{ "type" : "arrayOfDoublesSketch", "name" : <output_name>, "fieldName" : <metric_name>, "nominalEntries": <number>, "metricColumns" : <array of strings>, "numberOfValues" : <number> } Property\tDescription\tRequired?type\tThis string should always be "arrayOfDoublesSketch"\tyes name\tString representing the output column to store sketch values.\tyes fieldName\tA string for the name of the input field.\tyes nominalEntries\tParameter that determines the accuracy and size of the sketch. Higher k means higher accuracy but more space to store sketches. Must be a power of 2. See the Theta sketch accuracy for details.\tno, defaults to 16384 metricColumns\tWhen building sketches from raw data, an array input column that contain numeric values to associate with each distinct key. 
If not provided, assumes fieldName is an arrayOfDoublesSketch	no, if not provided fieldName is assumed to be an arrayOfDoublesSketch numberOfValues	Number of values associated with each distinct key.	no, defaults to the length of metricColumns if provided and 1 otherwise You can use the arrayOfDoublesSketch aggregator to: Build a sketch from raw data. In this case, set metricColumns to an array.Build a sketch from an existing ArrayOfDoubles sketch. In this case, leave metricColumns unset and set the fieldName to an ArrayOfDoubles sketch with numberOfValues doubles. You must base64 encode ArrayOfDoubles sketches at ingestion time. "},{"title":"Example on top of raw data","type":1,"pageTitle":"DataSketches Tuple Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-tuple#example-on-top-of-raw-data","content":"Compute a theta sketch of unique users. For each user, store the added and deleted scores. The new sketch column will be called users_theta. { "type": "arrayOfDoublesSketch", "name": "users_theta", "fieldName": "user", "nominalEntries": 16384, "metricColumns": ["added", "deleted"] } "},{"title":"Example ingesting a precomputed sketch column","type":1,"pageTitle":"DataSketches Tuple Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-tuple#example-ingesting-a-precomputed-sketch-column","content":"Ingest a sketch column called user_sketches that has a base64 encoded value of two doubles in its array and store it in a column called users_theta. { "type": "arrayOfDoublesSketch", "name": "users_theta", "fieldName": "user_sketches", "nominalEntries": 16384, "numberOfValues": 2 } "},{"title":"Post aggregators","type":1,"pageTitle":"DataSketches Tuple Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-tuple#post-aggregators","content":""},{"title":"Estimate of the number of distinct keys","type":1,"pageTitle":"DataSketches Tuple Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-tuple#estimate-of-the-number-of-distinct-keys","content":"Returns a distinct count estimate from a given ArrayOfDoublesSketch. { "type" : "arrayOfDoublesSketchToEstimate", "name": <output name>, "field" : <post aggregator that refers to an ArrayOfDoublesSketch (fieldAccess or another post aggregator)> } "},{"title":"Estimate of the number of distinct keys with error bounds","type":1,"pageTitle":"DataSketches Tuple Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-tuple#estimate-of-the-number-of-distinct-keys-with-error-bounds","content":"Returns a distinct count estimate and error bounds from a given ArrayOfDoublesSketch. The result will be three double values: estimate of the number of distinct keys, lower bound and upper bound. The bounds are provided at the given number of standard deviations (optional, defaults to 1). This must be an integer value of 1, 2 or 3 corresponding to approximately 68.3%, 95.4% and 99.7% confidence intervals. { "type" : "arrayOfDoublesSketchToEstimateAndBounds", "name": <output name>, "field" : <post aggregator that refers to an ArrayOfDoublesSketch (fieldAccess or another post aggregator)>, "numStdDevs" : <number from 1 to 3> } "},{"title":"Number of retained entries","type":1,"pageTitle":"DataSketches Tuple Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-tuple#number-of-retained-entries","content":"Returns the number of retained entries from a given ArrayOfDoublesSketch. 
{ "type" : "arrayOfDoublesSketchToNumEntries", "name": <output name>, "field" : <post aggregator that refers to an ArrayOfDoublesSketch (fieldAccess or another post aggregator)> } "},{"title":"Mean values for each column","type":1,"pageTitle":"DataSketches Tuple Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-tuple#mean-values-for-each-column","content":"Returns a list of mean values from a given ArrayOfDoublesSketch. The result will be N double values, where N is the number of double values kept in the sketch per key. { "type" : "arrayOfDoublesSketchToMeans", "name": <output name>, "field" : <post aggregator that refers to a DoublesSketch (fieldAccess or another post aggregator)> } "},{"title":"Variance values for each column","type":1,"pageTitle":"DataSketches Tuple Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-tuple#variance-values-for-each-column","content":"Returns a list of variance values from a given ArrayOfDoublesSketch. The result will be N double values, where N is the number of double values kept in the sketch per key. { "type" : "arrayOfDoublesSketchToVariances", "name": <output name>, "field" : <post aggregator that refers to a DoublesSketch (fieldAccess or another post aggregator)> } "},{"title":"Quantiles sketch from a column","type":1,"pageTitle":"DataSketches Tuple Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-tuple#quantiles-sketch-from-a-column","content":"Returns a quantiles DoublesSketch constructed from a given column of values from a given ArrayOfDoublesSketch using optional parameter k that determines the accuracy and size of the quantiles sketch. See Quantiles Sketch Module The column number is 1-based and is optional (the default is 1).The parameter k is optional (the default is defined in the sketch library).The result is a quantiles sketch. { "type" : "arrayOfDoublesSketchToQuantilesSketch", "name": <output name>, "field" : <post aggregator that refers to a DoublesSketch (fieldAccess or another post aggregator)>, "column" : <number>, "k" : <parameter that determines the accuracy and size of the quantiles sketch> } "},{"title":"Set operations","type":1,"pageTitle":"DataSketches Tuple Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-tuple#set-operations","content":"Returns a result of a specified set operation on the given array of sketches. Supported operations are: union, intersection and set difference (UNION, INTERSECT, NOT). { "type" : "arrayOfDoublesSketchSetOp", "name": <output name>, "operation": <"UNION"|"INTERSECT"|"NOT">, "fields" : <array of post aggregators to access sketch aggregators or post aggregators to allow arbitrary combination of set operations>, "nominalEntries" : <parameter that determines the accuracy and size of the sketch>, "numberOfValues" : <number of values associated with each distinct key> } "},{"title":"Student's t-test","type":1,"pageTitle":"DataSketches Tuple Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-tuple#students-t-test","content":"Performs Student's t-test and returns a list of p-values given two instances of ArrayOfDoublesSketch. The result will be N double values, where N is the number of double values kept in the sketch per key. See t-test documentation. 
{ "type" : "arrayOfDoublesSketchTTest", "name": <output name>, "fields" : <array with two post aggregators to access sketch aggregators or post aggregators referring to an ArrayOfDoublesSketch>, } "},{"title":"Sketch summary","type":1,"pageTitle":"DataSketches Tuple Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-tuple#sketch-summary","content":"Returns a human-readable summary of a given ArrayOfDoublesSketch. This is a string returned by toString() method of the sketch. This can be useful for debugging. { "type" : "arrayOfDoublesSketchToString", "name": <output name>, "field" : <post aggregator that refers to an ArrayOfDoublesSketch (fieldAccess or another post aggregator)> } "},{"title":"Constant ArrayOfDoublesSketch","type":1,"pageTitle":"DataSketches Tuple Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-tuple#constant-arrayofdoublessketch","content":"This post aggregator adds a Base64-encoded constant ArrayOfDoublesSketch value that you can use in other post aggregators. { "type": "arrayOfDoublesSketchConstant", "name": DESTINATION_COLUMN_NAME, "value": CONSTANT_SKETCH_VALUE } "},{"title":"Base64 output of ArrayOfDoublesSketch","type":1,"pageTitle":"DataSketches Tuple Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-tuple#base64-output-of-arrayofdoublessketch","content":"This post aggregator outputs an ArrayOfDoublesSketch as a Base64-encoded string storing the constant tuple sketch value that you can use in other post aggregators. { "type": "arrayOfDoublesSketchToBase64String", "name": DESTINATION_COLUMN_NAME, "field": <post aggregator that refers to a ArrayOfDoublesSketch (fieldAccess or another post aggregator)> } "},{"title":"Estimated metrics values for each column of ArrayOfDoublesSketch","type":1,"pageTitle":"DataSketches Tuple Sketch module","url":"/docs/27.0.0/development/extensions-core/datasketches-tuple#estimated-metrics-values-for-each-column-of-arrayofdoublessketch","content":"For the key-value pairs in the given ArrayOfDoublesSketch, this post aggregator estimates the sum for each set of values across the keys. For example, the post aggregator returns {3.0, 8.0} for the following key-value pairs: Key_1, {1.0, 3.0} Key_2, {2.0, 5.0} The post aggregator returns N double values, where N is the number of values associated with each key. { "type": "arrayOfDoublesSketchToMetricsSumEstimate", "name": DESTINATION_COLUMN_NAME, "field": <post aggregator that refers to a ArrayOfDoublesSketch (fieldAccess or another post aggregator)> } "},{"title":"Apache Kafka Lookups","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/kafka-extraction-namespace","content":"","keywords":""},{"title":"How it Works","type":1,"pageTitle":"Apache Kafka Lookups","url":"/docs/27.0.0/development/extensions-core/kafka-extraction-namespace#how-it-works","content":"The extractor works by consuming the configured Kafka topic from the beginning, and appending every record to an internal map. The key of the Kafka record is used as they key of the map, and the payload of the record is used as the value. At query time, a lookup can be used to transform the key into the associated value. See lookups for how to configure and use lookups in a query. Keys and values are both stored as strings by the lookup extractor. The extractor remains subscribed to the topic, so new records are added to the lookup map as they appear. This allows for lookup values to be updated in near-realtime. 
If two records are added to the topic with the same key, the record with the larger offset will replace the previous record in the lookup map. A record with a null payload will be treated as a tombstone record, and the associated key will be removed from the lookup map. The extractor treats the input topic much like a KTable. As such, it is best to create your Kafka topic using a log compaction strategy, so that the most-recent version of a key is always preserved in Kafka. Without properly configuring retention and log compaction, older keys that are automatically removed from Kafka will not be available and will be lost when Druid services are restarted. "},{"title":"Example","type":1,"pageTitle":"Apache Kafka Lookups","url":"/docs/27.0.0/development/extensions-core/kafka-extraction-namespace#example","content":"Consider a country_codes topic is being consumed, and the following records are added to the topic in the following order: Offset\tKey\tPayload1\tNZ\tNu Zeelund 2\tAU\tAustralia 3\tNZ\tNew Zealand 4\tAU\tnull 5\tNZ\tAotearoa 6\tCZ\tCzechia This input topic would be consumed from the beginning, and result in a lookup namespace containing the following mappings (notice that the entry for Australia was added and then deleted): Key\tValueNZ\tAotearoa CZ\tCzechia Now when a query uses this extraction namespace, the country codes can be mapped to the full country name at query time. "},{"title":"Tombstones and Deleting Records","type":1,"pageTitle":"Apache Kafka Lookups","url":"/docs/27.0.0/development/extensions-core/kafka-extraction-namespace#tombstones-and-deleting-records","content":"The Kafka lookup extractor treats null Kafka messages as tombstones. This means that a record on the input topic with a null message payload on Kafka will remove the associated key from the lookup map, effectively deleting it. "},{"title":"Limitations","type":1,"pageTitle":"Apache Kafka Lookups","url":"/docs/27.0.0/development/extensions-core/kafka-extraction-namespace#limitations","content":"The consumer properties group.id, auto.offset.reset and enable.auto.commit cannot be set in kafkaProperties as they are set by the extension as UUID.randomUUID().toString(), earliest and false respectively. This is because the entire topic must be consumed by the Druid service from the very beginning so that a complete map of lookup values can be built. Setting any of these consumer properties will cause the extractor to not start. Currently, the Kafka lookup extractor feeds the entire Kafka topic into a local cache. If you are using on-heap caching, this can easily clobber your java heap if the Kafka stream spews a lot of unique keys. Off-heap caching should alleviate these concerns, but there is still a limit to the quantity of data that can be stored. There is currently no eviction policy. 
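In addition to the console producer shown in the next section, lookup entries can be published programmatically. This is a minimal sketch using the third-party kafka-python package (an assumption; any Kafka client works); the broker address and topic name are placeholders.

```python
# Minimal sketch: publish lookup entries, including a tombstone, to the topic.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    key_serializer=lambda k: k.encode("utf-8"),
    # Keep None as None so Kafka writes a tombstone record.
    value_serializer=lambda v: v.encode("utf-8") if v is not None else None,
)

topic = "country_codes"  # hypothetical lookup topic

producer.send(topic, key="NZ", value="New Zealand")  # add or update a mapping
producer.send(topic, key="NZ", value="Aotearoa")     # the larger offset wins
producer.send(topic, key="AU", value=None)           # tombstone removes the key
producer.flush()
```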
"},{"title":"Testing the Kafka rename functionality","type":1,"pageTitle":"Apache Kafka Lookups","url":"/docs/27.0.0/development/extensions-core/kafka-extraction-namespace#testing-the-kafka-rename-functionality","content":"To test this setup, you can send key/value pairs to a Kafka stream via the following producer console: ./bin/kafka-console-producer.sh --property parse.key=true --property key.separator="->" --broker-list localhost:9092 --topic testTopic Renames can then be published as OLD_VAL->NEW_VAL followed by newline (enter or return) "},{"title":"Cached Lookup Module","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/druid-lookups","content":"","keywords":""},{"title":"Description","type":1,"pageTitle":"Cached Lookup Module","url":"/docs/27.0.0/development/extensions-core/druid-lookups#description","content":"This Apache Druid module provides a per-lookup caching mechanism for JDBC data sources. The main goal of this cache is to speed up the access to a high latency lookup sources and to provide a caching isolation for every lookup source. Thus user can define various caching strategies or and implementation per lookup, even if the source is the same. This module can be used side to side with other lookup module like the global cached lookup module. To use this Apache Druid extension, include druid-lookups-cached-single in the extensions load list. info If using JDBC, you will need to add your database's client JAR files to the extension's directory. For Postgres, the connector JAR is already included. See the MySQL extension documentation for instructions to obtain MySQL or MariaDB connector libraries. Copy or symlink the downloaded file to extensions/druid-lookups-cached-single under the distribution root directory. "},{"title":"Architecture","type":1,"pageTitle":"Cached Lookup Module","url":"/docs/27.0.0/development/extensions-core/druid-lookups#architecture","content":"Generally speaking this module can be divided into two main component, namely, the data fetcher layer and caching layer. "},{"title":"Data Fetcher layer","type":1,"pageTitle":"Cached Lookup Module","url":"/docs/27.0.0/development/extensions-core/druid-lookups#data-fetcher-layer","content":"First part is the data fetcher layer API DataFetcher, that exposes a set of fetch methods to fetch data from the actual Lookup dimension source. For instance JdbcDataFetcher provides an implementation of DataFetcher that can be used to fetch key/value from a RDBMS via JDBC driver. If you need new type of data fetcher, all you need to do, is to implement the interface DataFetcher and load it via another druid module. "},{"title":"Caching layer","type":1,"pageTitle":"Cached Lookup Module","url":"/docs/27.0.0/development/extensions-core/druid-lookups#caching-layer","content":"This extension comes with two different caching strategies. First strategy is a poll based and the second is a load based. Poll lookup cache The poll strategy cache strategy will fetch and swap all the pair of key/values periodically from the lookup source. Hence, user should make sure that the cache can fit all the data. The current implementation provides 2 type of poll cache, the first is on-heap (uses immutable map), while the second uses MapDB based off-heap map. User can also implement a different lookup polling cache by implementing PollingCacheFactory and PollingCache interfaces. Loading lookup Loading cache strategy will load the key/value pair upon request on the key it self, the general algorithm is load key if absent. 
Once the key/value pair is loaded eviction will occur according to the cache eviction policy. This module comes with two loading lookup implementation, the first is on-heap backed by a Guava cache implementation, the second is MapDB off-heap implementation. Both implementations offer various eviction strategies. Same for Loading cache, developer can implement a new type of loading cache by implementing LookupLoadingCache interface. "},{"title":"Configuration and Operation:","type":1,"pageTitle":"Cached Lookup Module","url":"/docs/27.0.0/development/extensions-core/druid-lookups#configuration-and-operation","content":""},{"title":"Polling Lookup","type":1,"pageTitle":"Cached Lookup Module","url":"/docs/27.0.0/development/extensions-core/druid-lookups#polling-lookup","content":"Note that the current implementation of offHeapPolling and onHeapPolling will create two caches one to lookup value based on key and the other to reverse lookup the key from value Field\tType\tDescription\tRequired\tdefaultdataFetcher\tJSON object\tSpecifies the lookup data fetcher type for fetching data\tyes\tnull cacheFactory\tJSON Object\tCache factory implementation\tno\tonHeapPolling pollPeriod\tPeriod\tpolling period\tno\tnull (poll once) Example of Polling On-heap Lookup This example demonstrates a polling cache that will update its on-heap cache every 10 minutes { "type":"pollingLookup", "pollPeriod":"PT10M", "dataFetcher":{ "type":"jdbcDataFetcher", "connectorConfig":"jdbc://mysql://localhost:3306/my_data_base", "table":"lookup_table_name", "keyColumn":"key_column_name", "valueColumn": "value_column_name"}, "cacheFactory":{"type":"onHeapPolling"} } Example Polling Off-heap Lookup This example demonstrates an off-heap lookup that will be cached once and never swapped (pollPeriod == null) { "type":"pollingLookup", "dataFetcher":{ "type":"jdbcDataFetcher", "connectorConfig":"jdbc://mysql://localhost:3306/my_data_base", "table":"lookup_table_name", "keyColumn":"key_column_name", "valueColumn": "value_column_name"}, "cacheFactory":{"type":"offHeapPolling"} } "},{"title":"Loading lookup","type":1,"pageTitle":"Cached Lookup Module","url":"/docs/27.0.0/development/extensions-core/druid-lookups#loading-lookup-1","content":"Field\tType\tDescription\tRequired\tdefaultdataFetcher\tJSON object\tSpecifies the lookup data fetcher type to use in order to fetch data\tyes\tnull loadingCacheSpec\tJSON Object\tLookup cache spec implementation\tyes\tnull reverseLoadingCacheSpec\tJSON Object\tReverse lookup cache implementation\tyes\tnull Example Loading On-heap Guava Guava cache configuration spec. 
Field\tType\tDescription\tRequired\tdefaultconcurrencyLevel\tint\tAllowed concurrency among update operations\tno\t4 initialCapacity\tint\tInitial capacity size\tno\tnull maximumSize\tlong\tSpecifies the maximum number of entries the cache may contain.\tno\tnull (infinite capacity) expireAfterAccess\tlong\tSpecifies the eviction time after last read in milliseconds.\tno\tnull (No read-time-based eviction when set to null) expireAfterWrite\tlong\tSpecifies the eviction time after last write in milliseconds.\tno\tnull (No write-time-based eviction when set to null) { "type":"loadingLookup", "dataFetcher":{ "type":"jdbcDataFetcher", "connectorConfig":"jdbc://mysql://localhost:3306/my_data_base", "table":"lookup_table_name", "keyColumn":"key_column_name", "valueColumn": "value_column_name"}, "loadingCacheSpec":{"type":"guava"}, "reverseLoadingCacheSpec":{"type":"guava", "maximumSize":500000, "expireAfterAccess":100000, "expireAfterWrite":10000} } Example Loading Off-heap MapDB Off heap cache is backed by MapDB implementation. MapDB is using direct memory as memory pool, please take that into account when limiting the JVM direct memory setup. Field\tType\tDescription\tRequired\tdefaultmaxStoreSize\tdouble\tmaximal size of store in GiB, if store is larger entries will start expiring\tno\t0 maxEntriesSize\tlong\tSpecifies the maximum number of entries the cache may contain.\tno\t0 (infinite capacity) expireAfterAccess\tlong\tSpecifies the eviction time after last read in milliseconds.\tno\t0 (No read-time-based eviction when set to null) expireAfterWrite\tlong\tSpecifies the eviction time after last write in milliseconds.\tno\t0 (No write-time-based eviction when set to null) { "type":"loadingLookup", "dataFetcher":{ "type":"jdbcDataFetcher", "connectorConfig":"jdbc://mysql://localhost:3306/my_data_base", "table":"lookup_table_name", "keyColumn":"key_column_name", "valueColumn": "value_column_name"}, "loadingCacheSpec":{"type":"mapDb", "maxEntriesSize":100000}, "reverseLoadingCacheSpec":{"type":"mapDb", "maxStoreSize":5, "expireAfterAccess":100000, "expireAfterWrite":10000} } "},{"title":"JDBC Data Fetcher","type":1,"pageTitle":"Cached Lookup Module","url":"/docs/27.0.0/development/extensions-core/druid-lookups#jdbc-data-fetcher","content":"Field\tType\tDescription\tRequired\tdefaultconnectorConfig\tJSON object\tSpecifies the database connection details. You can set connectURI, user and password. You can selectively allow JDBC properties in connectURI. See JDBC connections security config for more details.\tyes table\tstring\tThe table name to read from.\tyes keyColumn\tstring\tThe column name that contains the lookup key.\tyes valueColumn\tstring\tThe column name that contains the lookup value.\tyes streamingFetchSize\tint\tFetch size used in JDBC connections.\tno\t1000 "},{"title":"Apache Kafka supervisor operations reference","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-operations","content":"","keywords":""},{"title":"Getting Supervisor Status Report","type":1,"pageTitle":"Apache Kafka supervisor operations reference","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-operations#getting-supervisor-status-report","content":"GET /druid/indexer/v1/supervisor/<supervisorId>/status returns a snapshot report of the current state of the tasks managed by the given supervisor. This includes the latest offsets as reported by Kafka, the consumer lag per partition, as well as the aggregate lag of all partitions. 
The consumer lag per partition may be reported as negative values if the supervisor has not received a recent latest offset response from Kafka. The aggregate lag value will always be >= 0. The status report also contains the supervisor's state and a list of recently thrown exceptions (reported asrecentErrors, whose max size can be controlled using the druid.supervisor.maxStoredExceptionEvents configuration). There are two fields related to the supervisor's state - state and detailedState. The state field will always be one of a small number of generic states that are applicable to any type of supervisor, while the detailedState field will contain a more descriptive, implementation-specific state that may provide more insight into the supervisor's activities than the generic state field. The list of possible state values are: [PENDING, RUNNING, SUSPENDED, STOPPING, UNHEALTHY_SUPERVISOR, UNHEALTHY_TASKS] The list of detailedState values and their corresponding state mapping is as follows: Detailed State\tCorresponding State\tDescriptionUNHEALTHY_SUPERVISOR\tUNHEALTHY_SUPERVISOR\tThe supervisor has encountered errors on the past druid.supervisor.unhealthinessThreshold iterations UNHEALTHY_TASKS\tUNHEALTHY_TASKS\tThe last druid.supervisor.taskUnhealthinessThreshold tasks have all failed UNABLE_TO_CONNECT_TO_STREAM\tUNHEALTHY_SUPERVISOR\tThe supervisor is encountering connectivity issues with Kafka and has not successfully connected in the past LOST_CONTACT_WITH_STREAM\tUNHEALTHY_SUPERVISOR\tThe supervisor is encountering connectivity issues with Kafka but has successfully connected in the past PENDING (first iteration only)\tPENDING\tThe supervisor has been initialized and hasn't started connecting to the stream CONNECTING_TO_STREAM (first iteration only)\tRUNNING\tThe supervisor is trying to connect to the stream and update partition data DISCOVERING_INITIAL_TASKS (first iteration only)\tRUNNING\tThe supervisor is discovering already-running tasks CREATING_TASKS (first iteration only)\tRUNNING\tThe supervisor is creating tasks and discovering state RUNNING\tRUNNING\tThe supervisor has started tasks and is waiting for taskDuration to elapse IDLE\tIDLE\tThe supervisor is not creating tasks since the input stream has not received any new data and all the existing data is read. SUSPENDED\tSUSPENDED\tThe supervisor has been suspended STOPPING\tSTOPPING\tThe supervisor is stopping On each iteration of the supervisor's run loop, the supervisor completes the following tasks in sequence: 1) Fetch the list of partitions from Kafka and determine the starting offset for each partition (either based on the last processed offset if continuing, or starting from the beginning or ending of the stream if this is a new topic). 2) Discover any running indexing tasks that are writing to the supervisor's datasource and adopt them if they match the supervisor's configuration, else signal them to stop. 3) Send a status request to each supervised task to update our view of the state of the tasks under our supervision. 4) Handle tasks that have exceeded taskDuration and should transition from the reading to publishing state. 5) Handle tasks that have finished publishing and signal redundant replica tasks to stop. 6) Handle tasks that have failed and clean up the supervisor's internal state. 7) Compare the list of healthy tasks to the requested taskCount and replicas configurations and create additional tasks if required in case supervisor is not idle. 
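For reference, the status report described above is retrieved with a simple GET request; as a sketch, the Overlord address and the supervisor ID my-supervisor below are placeholders, not values taken from this document: curl http://localhost:8090/druid/indexer/v1/supervisor/my-supervisor/status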
The detailedState field will show additional values (those marked with "first iteration only") the first time the supervisor executes this run loop after startup or after resuming from a suspension. This is intended to surface initialization-type issues, where the supervisor is unable to reach a stable state (perhaps because it can't connect to Kafka, it can't read from the Kafka topic, or it can't communicate with existing tasks). Once the supervisor is stable - that is, once it has completed a full execution without encountering any issues - detailedState will show a RUNNING state until it is idle, stopped, suspended, or hits a task failure threshold and transitions to an unhealthy state. "},{"title":"Getting Supervisor Ingestion Stats Report","type":1,"pageTitle":"Apache Kafka supervisor operations reference","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-operations#getting-supervisor-ingestion-stats-report","content":"GET /druid/indexer/v1/supervisor/<supervisorId>/stats returns a snapshot of the current ingestion row counters for each task being managed by the supervisor, along with moving averages for the row counters. See Task Reports: Row Stats for more information. "},{"title":"Supervisor Health Check","type":1,"pageTitle":"Apache Kafka supervisor operations reference","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-operations#supervisor-health-check","content":"GET /druid/indexer/v1/supervisor/<supervisorId>/health returns 200 OK if the supervisor is healthy and 503 Service Unavailable if it is unhealthy. Healthiness is determined by the supervisor's state (as returned by the /status endpoint) and the druid.supervisor.* Overlord configuration thresholds. "},{"title":"Updating Existing Supervisors","type":1,"pageTitle":"Apache Kafka supervisor operations reference","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-operations#updating-existing-supervisors","content":"POST /druid/indexer/v1/supervisor can be used to update an existing supervisor spec. Calling this endpoint when there is already an existing supervisor for the same dataSource will cause: The running supervisor to signal its managed tasks to stop reading and begin publishing. The running supervisor to exit. A new supervisor to be created using the configuration provided in the request body. This supervisor will retain the existing publishing tasks and will create new tasks starting at the offsets the publishing tasks ended on. Seamless schema migrations can thus be achieved by simply submitting the new schema using this endpoint. "},{"title":"Suspending and Resuming Supervisors","type":1,"pageTitle":"Apache Kafka supervisor operations reference","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-operations#suspending-and-resuming-supervisors","content":"You can suspend and resume a supervisor using POST /druid/indexer/v1/supervisor/<supervisorId>/suspend and POST /druid/indexer/v1/supervisor/<supervisorId>/resume, respectively. Note that the supervisor itself will still be operating and emitting logs and metrics; it will just ensure that no indexing tasks are running until the supervisor is resumed. 
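As a sketch of how this might look in practice (the Overlord address and the supervisor ID my-supervisor are placeholders to be replaced with your own values): curl -X POST http://localhost:8090/druid/indexer/v1/supervisor/my-supervisor/suspend and, once you are ready to continue ingestion, curl -X POST http://localhost:8090/druid/indexer/v1/supervisor/my-supervisor/resume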
"},{"title":"Resetting Supervisors","type":1,"pageTitle":"Apache Kafka supervisor operations reference","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-operations#resetting-supervisors","content":"The POST /druid/indexer/v1/supervisor/<supervisorId>/reset operation clears stored offsets, causing the supervisor to start reading offsets from either the earliest or latest offsets in Kafka (depending on the value of useEarliestOffset). After clearing stored offsets, the supervisor kills and recreates any active tasks, so that tasks begin reading from valid offsets. Use care when using this operation! Resetting the supervisor may cause Kafka messages to be skipped or read twice, resulting in missing or duplicate data. The reason for using this operation is to recover from a state in which the supervisor ceases operating due to missing offsets. The indexing service keeps track of the latest persisted Kafka offsets in order to provide exactly-once ingestion guarantees across tasks. Subsequent tasks must start reading from where the previous task completed in order for the generated segments to be accepted. If the messages at the expected starting offsets are no longer available in Kafka (typically because the message retention period has elapsed or the topic was removed and re-created) the supervisor will refuse to start and in flight tasks will fail. This operation enables you to recover from this condition. Note that the supervisor must be running for this endpoint to be available. "},{"title":"Terminating Supervisors","type":1,"pageTitle":"Apache Kafka supervisor operations reference","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-operations#terminating-supervisors","content":"The POST /druid/indexer/v1/supervisor/<supervisorId>/terminate operation terminates a supervisor and causes all associated indexing tasks managed by this supervisor to immediately stop and begin publishing their segments. This supervisor will still exist in the metadata store and its history may be retrieved with the supervisor history API, but will not be listed in the 'get supervisors' API response nor can it's configuration or status report be retrieved. The only way this supervisor can start again is by submitting a functioning supervisor spec to the create API. "},{"title":"Capacity Planning","type":1,"pageTitle":"Apache Kafka supervisor operations reference","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-operations#capacity-planning","content":"Kafka indexing tasks run on MiddleManagers and are thus limited by the resources available in the MiddleManager cluster. In particular, you should make sure that you have sufficient worker capacity (configured using thedruid.worker.capacity property) to handle the configuration in the supervisor spec. Note that worker capacity is shared across all types of indexing tasks, so you should plan your worker capacity to handle your total indexing load (e.g. batch processing, realtime tasks, merging tasks, etc.). If your workers run out of capacity, Kafka indexing tasks will queue and wait for the next available worker. This may cause queries to return partial results but will not result in data loss (assuming the tasks run before Kafka purges those offsets). A running task will normally be in one of two states: reading or publishing. A task will remain in reading state fortaskDuration, at which point it will transition to publishing state. 
A task will remain in publishing state for as long as it takes to generate segments, push segments to deep storage, and have them be loaded and served by a Historical process (or until completionTimeout elapses). The number of reading tasks is controlled by replicas and taskCount. In general, there will be replicas * taskCount reading tasks, the exception being if taskCount > {numKafkaPartitions}, in which case {numKafkaPartitions} tasks will be used instead. When taskDuration elapses, these tasks will transition to publishing state and replicas * taskCount new reading tasks will be created. Therefore, to allow for reading tasks and publishing tasks to run concurrently, there should be a minimum capacity of: workerCapacity = 2 * replicas * taskCount This value is for the ideal situation in which there is at most one set of tasks publishing while another set is reading. In some circumstances, it is possible to have multiple sets of tasks publishing simultaneously. This would happen if the time-to-publish (generate segment, push to deep storage, load on Historical) > taskDuration. This is a valid scenario (correctness-wise) but requires additional worker capacity to support. In general, it is a good idea to have taskDuration be large enough that the previous set of tasks finishes publishing before the current set begins. "},{"title":"Supervisor Persistence","type":1,"pageTitle":"Apache Kafka supervisor operations reference","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-operations#supervisor-persistence","content":"When a supervisor spec is submitted via the POST /druid/indexer/v1/supervisor endpoint, it is persisted in the configured metadata database. There can only be a single supervisor per dataSource, and submitting a second spec for the same dataSource will overwrite the previous one. When an Overlord gains leadership, either by being started or as a result of another Overlord failing, it will spawn a supervisor for each supervisor spec in the metadata database. The supervisor will then discover running Kafka indexing tasks and will attempt to adopt them if they are compatible with the supervisor's configuration. If they are not compatible because they have a different ingestion spec or partition allocation, the tasks will be killed and the supervisor will create a new set of tasks. In this way, the supervisors are persistent across Overlord restarts and fail-overs. A supervisor is stopped via the POST /druid/indexer/v1/supervisor/<supervisorId>/terminate endpoint. This places a tombstone marker in the database (to prevent the supervisor from being reloaded on a restart) and then gracefully shuts down the currently running supervisor. When a supervisor is shut down in this way, it will instruct its managed tasks to stop reading and begin publishing their segments immediately. The call to the shutdown endpoint will return after all tasks have been signaled to stop but before the tasks finish publishing their segments. "},{"title":"Schema/Configuration Changes","type":1,"pageTitle":"Apache Kafka supervisor operations reference","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-operations#schemaconfiguration-changes","content":"Schema and configuration changes are handled by submitting the new supervisor spec via the same POST /druid/indexer/v1/supervisor endpoint used to initially create the supervisor. 
The Overlord will initiate a graceful shutdown of the existing supervisor, which will cause the tasks being managed by that supervisor to stop reading and begin publishing their segments. A new supervisor will then be started, which will create a new set of tasks that will start reading from the offsets where the previous now-publishing tasks left off, but using the updated schema. In this way, configuration changes can be applied without requiring any pause in ingestion. "},{"title":"Deployment Notes on Kafka partitions and Druid segments","type":1,"pageTitle":"Apache Kafka supervisor operations reference","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-operations#deployment-notes-on-kafka-partitions-and-druid-segments","content":"Druid assigns Kafka partitions to each Kafka indexing task. A task writes the events it consumes from Kafka into a single segment for the segment granularity interval until it reaches the maxRowsPerSegment, maxTotalRows, or intermediateHandoffPeriod limit. At this point, the task creates a new partition for this segment granularity to contain subsequent events. The Kafka Indexing Task also does incremental hand-offs. Therefore, segments become available as they are ready, and you do not have to wait until the end of the task duration for all segments. When the task reaches one of maxRowsPerSegment, maxTotalRows, or intermediateHandoffPeriod, it hands off all the segments and creates a new set of segments for further events. This allows the task to run for longer durations without accumulating old segments locally on Middle Manager processes. The Kafka Indexing Service may still produce some small segments. For example, consider the following scenario: Task duration is 4 hours. Segment granularity is set to HOUR. The supervisor was started at 9:10. After 4 hours at 13:10, Druid starts a new set of tasks. The events for the interval 13:00 - 14:00 may be split across existing tasks and the new set of tasks, which could result in small segments. To merge them together into new segments of an ideal size (in the range of ~500-700 MB per segment), you can schedule re-indexing tasks, optionally with a different segment granularity. For more detail, see Segment size optimization. There is also ongoing work to support automatic segment compaction of sharded segments as well as compaction not requiring Hadoop (see here). "},{"title":"Apache Ranger Security","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/druid-ranger-security","content":"","keywords":""},{"title":"Configuration","type":1,"pageTitle":"Apache Ranger Security","url":"/docs/27.0.0/development/extensions-core/druid-ranger-security#configuration","content":"Support for Apache Ranger authorization consists of three elements: configuring the extension in Apache Druid, configuring the connection to Apache Ranger, and providing the service definition for Druid to Apache Ranger. "},{"title":"Enabling the extension","type":1,"pageTitle":"Apache Ranger Security","url":"/docs/27.0.0/development/extensions-core/druid-ranger-security#enabling-the-extension","content":"Ensure that you have a valid authenticator chain and escalator set in your common.runtime.properties. For every authenticator you wish to use the authorizer for, set druid.auth.authenticator.<authenticatorName>.authorizerName to the name you will give the authorizer, e.g. ranger. 
Then add the following and amend to your needs (in case you need to use multiple authorizers): druid.auth.authorizers=["ranger"] druid.auth.authorizer.ranger.type=ranger The following is an example that showcases using druid-basic-security for authentication and druid-ranger-security for authorization. druid.auth.authenticatorChain=["basic"] druid.auth.authenticator.basic.type=basic druid.auth.authenticator.basic.initialAdminPassword=password1 druid.auth.authenticator.basic.initialInternalClientPassword=password2 druid.auth.authenticator.basic.credentialsValidator.type=metadata druid.auth.authenticator.basic.skipOnFailure=false druid.auth.authenticator.basic.enableCacheNotifications=true druid.auth.authenticator.basic.authorizerName=ranger druid.auth.authorizers=["ranger"] druid.auth.authorizer.ranger.type=ranger # Escalator druid.escalator.type=basic druid.escalator.internalClientUsername=druid_system druid.escalator.internalClientPassword=password2 druid.escalator.authorizerName=ranger info Contrary to the documentation of druid-basic-auth Ranger does not automatically provision a highly privileged system user, you will need to do this yourself. This system user in the case of druid-basic-auth is named druid_system and for the escalator it is configurable, as shown above. Make sure to take note of these user names and configure READ access to state:STATE and to config:security in your ranger policies, otherwise system services will not work properly. Properties to configure the extension in Apache Druid Property\tDescription\tDefault\trequireddruid.auth.ranger.keytab\tDefines the keytab to be used while authenticating against Apache Ranger to obtain policies and provide auditing\tnull\tNo druid.auth.ranger.principal\tDefines the principal to be used while authenticating against Apache Ranger to obtain policies and provide auditing\tnull\tNo druid.auth.ranger.use_ugi\tDetermines if groups that the authenticated user belongs to should be obtained from Hadoop's UserGroupInformation\tnull\tNo "},{"title":"Configuring the connection to Apache Ranger","type":1,"pageTitle":"Apache Ranger Security","url":"/docs/27.0.0/development/extensions-core/druid-ranger-security#configuring-the-connection-to-apache-ranger","content":"The Apache Ranger authorization extension will read several configuration files. Discussing the contents of those files is beyond the scope of this document. Depending on your needs you will need to create them. The minimum you will need to have is a ranger-druid-security.xml file that you will need to put in the classpath (e.g. _common). For auditing, the configuration is in ranger-druid-audit.xml. "},{"title":"Adding the service definition for Apache Druid to Apache Ranger","type":1,"pageTitle":"Apache Ranger Security","url":"/docs/27.0.0/development/extensions-core/druid-ranger-security#adding-the-service-definition-for-apache-druid-to-apache-ranger","content":"At the time of writing of this document Apache Ranger (2.0) does not include an out of the box service and service definition for Druid. You can add the service definition to Apache Ranger by entering the following command: curl -u <user>:<password> -d "@ranger-servicedef-druid.json" -X POST -H "Accept: application/json" -H "Content-Type: application/json" http://localhost:6080/service/public/v2/api/servicedef/ You should get back json describing the service definition you just added. You can now go to the web interface of Apache Ranger which should now include a widget for "Druid". 
Click the plus sign and create the new service. Ensure your service name is equal to what you configured in ranger-druid-security.xml. Configuring Apache Ranger policies When installing a new Druid service in Apache Ranger for the first time, Ranger will provision the policies to allow the administrative user read/write access to all properties and data sources. You might want to limit this. Do not forget to add the correct policies for the druid_system user and the internalClientUserName of the escalator. info Loading new data sources requires write access to the datasource prior to the loading itself. So if you want to create a datasource wikipedia you are required to have an allow policy inside Apache Ranger before trying to load the spec. "},{"title":"Usage","type":1,"pageTitle":"Apache Ranger Security","url":"/docs/27.0.0/development/extensions-core/druid-ranger-security#usage","content":""},{"title":"HTTP methods","type":1,"pageTitle":"Apache Ranger Security","url":"/docs/27.0.0/development/extensions-core/druid-ranger-security#http-methods","content":"For information on what HTTP methods are supported for a particular request endpoint, please refer to the API documentation. GET requires READ permission, while POST and DELETE require WRITE permission. "},{"title":"SQL Permissions","type":1,"pageTitle":"Apache Ranger Security","url":"/docs/27.0.0/development/extensions-core/druid-ranger-security#sql-permissions","content":"Queries on Druid datasources require DATASOURCE READ permissions for the specified datasource. Queries on the INFORMATION_SCHEMA tables will return information about datasources that the caller has DATASOURCE READ access to. Other datasources will be omitted. Queries on the system schema tables require the following permissions: segments: Segments will be filtered based on DATASOURCE READ permissions.servers: The user requires STATE READ permissions.server_segments: The user requires STATE READ permissions and segments will be filtered based on DATASOURCE READ permissions.tasks: Tasks will be filtered based on DATASOURCE READ permissions. "},{"title":"Debugging","type":1,"pageTitle":"Apache Ranger Security","url":"/docs/27.0.0/development/extensions-core/druid-ranger-security#debugging","content":"If you face difficulty grasping why access is denied to certain elements, and the audit section in Apache Ranger does not give you any detail, you can enable debug logging for org.apache.druid.security.ranger. To do so add the following in your log4j2.xml: <!-- Set level="debug" to see access requests to Apache Ranger --> <Logger name="org.apache.druid.security" level="debug" additivity="false"> <Appender-ref ref="Console"/> </Logger> "},{"title":"HDFS","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/hdfs","content":"","keywords":""},{"title":"Deep Storage","type":1,"pageTitle":"HDFS","url":"/docs/27.0.0/development/extensions-core/hdfs#deep-storage","content":""},{"title":"Configuration for HDFS","type":1,"pageTitle":"HDFS","url":"/docs/27.0.0/development/extensions-core/hdfs#configuration-for-hdfs","content":"Property\tPossible Values\tDescription\tDefaultdruid.storage.type\thdfs Must be set. druid.storage.storageDirectory Directory for storing segments.\tMust be set. 
druid.hadoop.security.kerberos.principal\tdruid@EXAMPLE.COM\tPrincipal user name\tempty druid.hadoop.security.kerberos.keytab\t/etc/security/keytabs/druid.headlessUser.keytab\tPath to keytab file\tempty Besides the above settings, you also need to include all Hadoop configuration files (such as core-site.xml, hdfs-site.xml) in the Druid classpath. One way to do this is copying all those files under ${DRUID_HOME}/conf/_common. If you are using the Hadoop ingestion, set your output directory to be a location on Hadoop and it will work. If you want to eagerly authenticate against a secured hadoop/hdfs cluster you must set druid.hadoop.security.kerberos.principal and druid.hadoop.security.kerberos.keytab, this is an alternative to the cron job method that runs kinit command periodically. "},{"title":"Configuration for Cloud Storage","type":1,"pageTitle":"HDFS","url":"/docs/27.0.0/development/extensions-core/hdfs#configuration-for-cloud-storage","content":"You can also use the AWS S3 or the Google Cloud Storage as the deep storage via HDFS. Configuration for AWS S3 To use the AWS S3 as the deep storage, you need to configure druid.storage.storageDirectory properly. Property\tPossible Values\tDescription\tDefaultdruid.storage.type\thdfs Must be set. druid.storage.storageDirectory\ts3a://bucket/example/directory or s3n://bucket/example/directory\tPath to the deep storage\tMust be set. You also need to include the Hadoop AWS module, especially the hadoop-aws.jar in the Druid classpath. Run the below command to install the hadoop-aws.jar file under ${DRUID_HOME}/extensions/druid-hdfs-storage in all nodes. ${DRUID_HOME}/bin/run-java -classpath "${DRUID_HOME}/lib/*" org.apache.druid.cli.Main tools pull-deps -h "org.apache.hadoop:hadoop-aws:${HADOOP_VERSION}"; cp ${DRUID_HOME}/hadoop-dependencies/hadoop-aws/${HADOOP_VERSION}/hadoop-aws-${HADOOP_VERSION}.jar ${DRUID_HOME}/extensions/druid-hdfs-storage/ Finally, you need to add the below properties in the core-site.xml. For more configurations, see the Hadoop AWS module. <property> <name>fs.s3a.impl</name> <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value> <description>The implementation class of the S3A Filesystem</description> </property> <property> <name>fs.AbstractFileSystem.s3a.impl</name> <value>org.apache.hadoop.fs.s3a.S3A</value> <description>The implementation class of the S3A AbstractFileSystem.</description> </property> <property> <name>fs.s3a.access.key</name> <description>AWS access key ID. Omit for IAM role-based or provider-based authentication.</description> <value>your access key</value> </property> <property> <name>fs.s3a.secret.key</name> <description>AWS secret key. Omit for IAM role-based or provider-based authentication.</description> <value>your secret key</value> </property> Configuration for Google Cloud Storage To use the Google Cloud Storage as the deep storage, you need to configure druid.storage.storageDirectory properly. Property\tPossible Values\tDescription\tDefaultdruid.storage.type\thdfs Must be set. druid.storage.storageDirectory\tgs://bucket/example/directory\tPath to the deep storage\tMust be set. All services that need to access GCS need to have the GCS connector jar in their class path. Please read the install instructionsto properly set up the necessary libraries and configurations. One option is to place this jar in ${DRUID_HOME}/lib/ and ${DRUID_HOME}/extensions/druid-hdfs-storage/. Finally, you need to configure the core-site.xml file with the filesystem and authentication properties needed for GCS. 
You may want to copy the below example properties. Please follow the instructions at https://github.com/GoogleCloudPlatform/bigdata-interop/blob/master/gcs/INSTALL.md for more details. For more configuration options, see GCS core default and GCS core template. <property> <name>fs.gs.impl</name> <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem</value> <description>The FileSystem for gs: (GCS) uris.</description> </property> <property> <name>fs.AbstractFileSystem.gs.impl</name> <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS</value> <description>The AbstractFileSystem for gs: uris.</description> </property> <property> <name>google.cloud.auth.service.account.enable</name> <value>true</value> <description> Whether to use a service account for GCS authorization. Setting this property to `false` will disable use of service accounts for authentication. </description> </property> <property> <name>google.cloud.auth.service.account.json.keyfile</name> <value>/path/to/keyfile</value> <description> The JSON key file of the service account used for GCS access when google.cloud.auth.service.account.enable is true. </description> </property> Tested with Druid 0.17.0, Hadoop 2.8.5, and gcs-connector jar 2.0.0-hadoop2. "},{"title":"Reading data from HDFS or Cloud Storage","type":1,"pageTitle":"HDFS","url":"/docs/27.0.0/development/extensions-core/hdfs#reading-data-from-hdfs-or-cloud-storage","content":""},{"title":"Native batch ingestion","type":1,"pageTitle":"HDFS","url":"/docs/27.0.0/development/extensions-core/hdfs#native-batch-ingestion","content":"The HDFS input source is supported by the Parallel task to read files directly from HDFS storage. You may be able to read objects from cloud storage with the HDFS input source, but we highly recommend using a proper input source instead if possible because it is simpler to set up. For now, only the S3 input source and the Google Cloud Storage input source are supported for cloud storage types, so you may still want to use the HDFS input source to read from cloud storage other than those two. "},{"title":"Hadoop-based ingestion","type":1,"pageTitle":"HDFS","url":"/docs/27.0.0/development/extensions-core/hdfs#hadoop-based-ingestion","content":"If you use the Hadoop ingestion, you can read data from HDFS by specifying the paths in your inputSpec. See the Static inputSpec for details. "},{"title":"Kubernetes","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/kubernetes","content":"","keywords":""},{"title":"Configuration","type":1,"pageTitle":"Kubernetes","url":"/docs/27.0.0/development/extensions-core/kubernetes#configuration","content":"To use this extension, make sure to include druid-kubernetes-extensions in the extensions load list. This extension works together with HTTP-based segment and task management in Druid. Consequently, the following configurations must be set on all Druid nodes: druid.zk.service.enabled=false druid.serverview.type=http druid.coordinator.loadqueuepeon.type=http druid.indexer.runner.type=httpRemote druid.discovery.type=k8s For node discovery, each Druid process running inside a pod "announces" itself by adding a few "labels" and "annotations" in the pod spec. The Druid process needs to be aware of the pod name and namespace, which it reads from the environment variables POD_NAME and POD_NAMESPACE. These variable names can be changed; see the configuration below. But in the end, each pod needs to have its own pod name and namespace added as environment variables. Additionally, this extension has the following configuration. 
"},{"title":"Properties","type":1,"pageTitle":"Kubernetes","url":"/docs/27.0.0/development/extensions-core/kubernetes#properties","content":"Property\tPossible Values\tDescription\tDefault\trequireddruid.discovery.k8s.clusterIdentifier\tstring that matches [a-z0-9][a-z0-9-]*[a-z0-9]\tUnique identifier for this Druid cluster in Kubernetes e.g. us-west-prod-druid.\tNone\tYes druid.discovery.k8s.podNameEnvKey\tPod Env Variable\tPod Env variable whose value is that pod's name.\tPOD_NAME\tNo druid.discovery.k8s.podNamespaceEnvKey\tPod Env Variable\tPod Env variable whose value is that pod's kubernetes namespace.\tPOD_NAMESPACE\tNo druid.discovery.k8s.leaseDuration\tDuration\tLease duration used by Leader Election algorithm. Candidates wait for this time before taking over previous Leader.\tPT60S\tNo druid.discovery.k8s.renewDeadline\tDuration\tLease renewal period used by Leader.\tPT17S\tNo druid.discovery.k8s.retryPeriod\tDuration\tRetry wait used by Leader Election algorithm on failed operations.\tPT5S\tNo "},{"title":"Gotchas","type":1,"pageTitle":"Kubernetes","url":"/docs/27.0.0/development/extensions-core/kubernetes#gotchas","content":"Label/Annotation path in each pod spec MUST EXIST, which is easily satisfied if there is at least one label/annotation in the pod spec already. This limitation may be removed in future.All Druid Pods belonging to one Druid cluster must be inside same kubernetes namespace.All Druid Pods need permissions to be able to add labels to self-pod, List and Watch other Pods, create and read ConfigMap for leader election. Assuming, "default" service account is used by Druid pods, you might need to add following or something similar Kubernetes Role and Role Binding. apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: druid-cluster rules: - apiGroups: - "" resources: - pods - configmaps verbs: - '*' --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: druid-cluster subjects: - kind: ServiceAccount name: default roleRef: kind: Role name: druid-cluster apiGroup: rbac.authorization.k8s.io "},{"title":"Apache Kafka ingestion","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/kafka-ingestion","content":"","keywords":""},{"title":"Kafka support","type":1,"pageTitle":"Apache Kafka ingestion","url":"/docs/27.0.0/development/extensions-core/kafka-ingestion#kafka-support","content":"The Kafka indexing service supports transactional topics introduced in Kafka 0.11.x by default. The consumer for Kafka indexing service is incompatible with older Kafka brokers. If you are using an older version, refer to the Kafka upgrade guide. Additionally, you can set isolation.level to read_uncommitted in consumerProperties if either: You don't need Druid to consume transactional topics.You need Druid to consume older versions of Kafka. Make sure offsets are sequential, since there is no offset gap check in Druid anymore. If your Kafka cluster enables consumer-group based ACLs, you can set group.id in consumerProperties to override the default auto generated group id. "},{"title":"Load the Kafka indexing service","type":1,"pageTitle":"Apache Kafka ingestion","url":"/docs/27.0.0/development/extensions-core/kafka-ingestion#load-the-kafka-indexing-service","content":"To use the Kafka indexing service, load the druid-kafka-indexing-service extension on both the Overlord and the MiddleManagers. See Loading extensions for instructions on how to configure extensions. 
"},{"title":"Define a supervisor spec","type":1,"pageTitle":"Apache Kafka ingestion","url":"/docs/27.0.0/development/extensions-core/kafka-ingestion#define-a-supervisor-spec","content":"Similar to the ingestion spec for batch ingestion, the supervisor spec configures the data ingestion for Kafka streaming ingestion. A supervisor spec has the following sections: dataSchema to specify the Druid datasource name, primary timestamp, dimensions, metrics, transforms, and any necessary filters.ioConfig to configure Kafka connection settings and configure how Druid parses the data. Kafka-specific connection details go in the consumerProperties. The ioConfig is also where you define the input format (inputFormat) of your Kafka data. For supported formats for Kafka and information on how to configure the input format, see Data formats. tuningConfig to control various tuning parameters specific to each ingestion method. For a full description of all the fields and parameters in a Kafka supervisor spec, see the Kafka supervisor reference. The following sections contain examples to help you get started with supervisor specs. "},{"title":"JSON input format supervisor spec example","type":1,"pageTitle":"Apache Kafka ingestion","url":"/docs/27.0.0/development/extensions-core/kafka-ingestion#json-input-format-supervisor-spec-example","content":"The following example demonstrates a supervisor spec for Kafka that uses the JSON input format. In this case Druid parses the event contents in JSON format: { "type": "kafka", "spec": { "dataSchema": { "dataSource": "metrics-kafka", "timestampSpec": { "column": "timestamp", "format": "auto" }, "dimensionsSpec": { "dimensions": [], "dimensionExclusions": [ "timestamp", "value" ] }, "metricsSpec": [ { "name": "count", "type": "count" }, { "name": "value_sum", "fieldName": "value", "type": "doubleSum" }, { "name": "value_min", "fieldName": "value", "type": "doubleMin" }, { "name": "value_max", "fieldName": "value", "type": "doubleMax" } ], "granularitySpec": { "type": "uniform", "segmentGranularity": "HOUR", "queryGranularity": "NONE" } }, "ioConfig": { "topic": "metrics", "inputFormat": { "type": "json" }, "consumerProperties": { "bootstrap.servers": "localhost:9092" }, "taskCount": 1, "replicas": 1, "taskDuration": "PT1H" }, "tuningConfig": { "type": "kafka", "maxRowsPerSegment": 5000000 } } } "},{"title":"Kafka input format supervisor spec example","type":1,"pageTitle":"Apache Kafka ingestion","url":"/docs/27.0.0/development/extensions-core/kafka-ingestion#kafka-input-format-supervisor-spec-example","content":"If you want to parse the Kafka metadata fields in addition to the Kafka payload value contents, you can use the kafka input format. The kafka input format wraps around the payload parsing input format and augments the data it outputs with the Kafka event timestamp, the Kafka event headers, and the key field that itself can be parsed using any available InputFormat. For example, consider the following structure for a Kafka message that represents a fictitious wiki edit in a development environment: Kafka timestamp: 1680795276351Kafka headers: env=developmentzone=z1 Kafka key: wiki-editKafka payload value: {"channel":"#sv.wikipedia","timestamp":"2016-06-27T00:00:11.080Z","page":"Salo Toraut","delta":31,"namespace":"Main"} Using { "type": "json" } as the input format would only parse the payload value. To parse the Kafka metadata in addition to the payload, use the kafka input format. 
You would configure it as follows: valueFormat: Define how to parse the payload value. Set this to the payload parsing input format ({ "type": "json" }).timestampColumnName: Supply a custom name for the Kafka timestamp in the Druid schema to avoid conflicts with columns from the payload. The default is kafka.timestamp.headerFormat: The default value string decodes strings in UTF-8 encoding from the Kafka header. Other supported encoding formats include the following: ISO-8859-1: ISO Latin Alphabet No. 1, that is, ISO-LATIN-1.US-ASCII: Seven-bit ASCII. Also known as ISO646-US. The Basic Latin block of the Unicode character set.UTF-16: Sixteen-bit UCS Transformation Format, byte order identified by an optional byte-order mark.UTF-16BE: Sixteen-bit UCS Transformation Format, big-endian byte order.UTF-16LE: Sixteen-bit UCS Transformation Format, little-endian byte order. headerColumnPrefix: Supply a prefix to the Kafka headers to avoid any conflicts with columns from the payload. The default is kafka.header.. Considering the header from the example, Druid maps the headers to the following columns: kafka.header.env, kafka.header.zone.keyFormat: Supply an input format to parse the key. Only the first value will be used. If, as in the example, your key values are simple strings, then you can use the tsv format to parse them. { "type": "tsv", "findColumnsFromHeader": false, "columns": ["x"] } Note that for tsv,csv, and regex formats, you need to provide a columns array to make a valid input format. Only the first one is used, and its name will be ignored in favor of keyColumnName.keyColumnName: Supply the name for the Kafka key column to avoid conflicts with columns from the payload. The default is kafka.key. Putting it together, the following input format (that uses the default values for timestampColumnName, headerColumnPrefix, and keyColumnName) { "type": "kafka", "valueFormat": { "type": "json" }, "headerFormat": { "type": "string" }, "keyFormat": { "type": "tsv", "findColumnsFromHeader": false, "columns": ["x"] } } would parse the example message as follows: { "channel": "#sv.wikipedia", "timestamp": "2016-06-27T00:00:11.080Z", "page": "Salo Toraut", "delta": 31, "namespace": "Main", "kafka.timestamp": 1680795276351, "kafka.header.env": "development", "kafka.header.zone": "z1", "kafka.key": "wiki-edit" } For more information on data formats, see Data formats. Finally, add these Kafka metadata columns to the dimensionsSpec or set your dimensionsSpec to auto-detect columns. 
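For instance, if you prefer to list the columns explicitly rather than rely on auto-detection, a dimensionsSpec for the example message above might look like the following sketch; the payload dimension names and the long type chosen for the Kafka timestamp are assumptions for illustration: { "dimensions": ["channel", "page", "namespace", "kafka.header.env", "kafka.header.zone", "kafka.key", { "type": "long", "name": "kafka.timestamp" }] }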
The following supervisor spec demonstrates how to ingest the Kafka header, key, and timestamp into Druid dimensions: { "type": "kafka", "spec": { "ioConfig": { "type": "kafka", "consumerProperties": { "bootstrap.servers": "localhost:9092" }, "topic": "wiki-edits", "inputFormat": { "type": "kafka", "valueFormat": { "type": "json" }, "headerFormat": { "type": "string" }, "keyFormat": { "type": "tsv", "findColumnsFromHeader": false, "columns": ["x"] } }, "useEarliestOffset": true }, "dataSchema": { "dataSource": "wikiticker", "timestampSpec": { "column": "timestamp", "format": "posix" }, "dimensionsSpec": { "useSchemaDiscovery": true, "includeAllDimensions": true }, "granularitySpec": { "queryGranularity": "none", "rollup": false, "segmentGranularity": "day" } }, "tuningConfig": { "type": "kafka" } } } After Druid ingests the data, you can query the Kafka metadata columns as follows: SELECT "kafka.header.env", "kafka.key", "kafka.timestamp" FROM "wikiticker" This query returns: kafka.header.env	kafka.key	kafka.timestampdevelopment	wiki-edit	1680795276351 For more information, see kafka data format. "},{"title":"Submit a supervisor spec","type":1,"pageTitle":"Apache Kafka ingestion","url":"/docs/27.0.0/development/extensions-core/kafka-ingestion#submit-a-supervisor-spec","content":"Druid starts a supervisor for a dataSource when you submit a supervisor spec. You can use the data loader in the web console or you can submit a supervisor spec to the following endpoint: http://<OVERLORD_IP>:<OVERLORD_PORT>/druid/indexer/v1/supervisor For example: curl -X POST -H 'Content-Type: application/json' -d @supervisor-spec.json http://localhost:8090/druid/indexer/v1/supervisor Where the file supervisor-spec.json contains your Kafka supervisor spec file. "},{"title":"Apache Parquet Extension","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/parquet","content":"Apache Parquet Extension This Apache Druid module extends Druid Hadoop based indexing to ingest data directly from offline Apache Parquet files. Note: If using the parquet-avro parser for Apache Hadoop based indexing, druid-parquet-extensions depends on the druid-avro-extensions module, so be sure to include both. The druid-parquet-extensions provides the Parquet input format, the Parquet Hadoop parser, and the Parquet Avro Hadoop Parser with druid-avro-extensions. The Parquet input format is available for native batch ingestion and the other two parsers are for Hadoop batch ingestion. Please see the corresponding docs for details.","keywords":""},{"title":"Apache Kafka supervisor reference","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-reference","content":"","keywords":""},{"title":"KafkaSupervisorIOConfig","type":1,"pageTitle":"Apache Kafka supervisor reference","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-reference#kafkasupervisorioconfig","content":"Field	Type	Description	Requiredtopic	String	The Kafka topic to read from. Must be a specific topic. Topic patterns are not supported.	yes inputFormat	Object	inputFormat to define input data parsing. See Specifying data format for details about specifying the input format.	yes consumerProperties	Map<String, Object>	A map of properties to pass to the Kafka consumer. See More on consumer properties.	yes pollTimeout	Long	The length of time to wait for the Kafka consumer to poll records, in milliseconds	no (default == 100) replicas	Integer	The number of replica sets. 
"1" means a single set of tasks without replication. Druid always assigns replica tasks to different workers to provide resiliency against worker failure.\tno (default == 1) taskCount\tInteger\tThe maximum number of reading tasks in a replica set. The maximum number of reading tasks equals taskCount * replicas. Therefore, the total number of tasks, reading + publishing, is greater than this count. See Capacity Planning for more details. When taskCount > {numKafkaPartitions}, the actual number of reading tasks is less than the taskCount value.\tno (default == 1) taskDuration\tISO8601 Period\tThe length of time before tasks stop reading and begin publishing segments.\tno (default == PT1H) startDelay\tISO8601 Period\tThe period to wait before the supervisor starts managing tasks.\tno (default == PT5S) period\tISO8601 Period\tFrequency at which the supervisor executes its management logic. The supervisor also runs in response to certain events. For example, task success, task failure, and tasks reaching their taskDuration. The period value specifies the maximum time between iterations.\tno (default == PT30S) useEarliestOffset\tBoolean\tIf a supervisor manages a dataSource for the first time, it obtains a set of starting offsets from Kafka. This flag determines whether it retrieves the earliest or latest offsets in Kafka. Under normal circumstances, subsequent tasks will start from where the previous segments ended. Therefore Druid only uses useEarliestOffset on first run.\tno (default == false) completionTimeout\tISO8601 Period\tThe length of time to wait before declaring a publishing task as failed and terminating it. If the value is too low, your tasks may never publish. The publishing clock for a task begins roughly after taskDuration elapses.\tno (default == PT30M) lateMessageRejectionStartDateTime\tISO8601 DateTime\tConfigure tasks to reject messages with timestamps earlier than this date time; for example if this is set to 2016-01-01T11:00Z and the supervisor creates a task at 2016-01-01T12:00Z, Druid drops messages with timestamps earlier than 2016-01-01T11:00Z. This can prevent concurrency issues if your data stream has late messages and you have multiple pipelines that need to operate on the same segments (e.g. a realtime and a nightly batch ingestion pipeline).\tno (default == none) lateMessageRejectionPeriod\tISO8601 Period\tConfigure tasks to reject messages with timestamps earlier than this period before the task was created; for example if this is set to PT1H and the supervisor creates a task at 2016-01-01T12:00Z, messages with timestamps earlier than 2016-01-01T11:00Z will be dropped. This may help prevent concurrency issues if your data stream has late messages and you have multiple pipelines that need to operate on the same segments (e.g. a realtime and a nightly batch ingestion pipeline). Please note that only one of lateMessageRejectionPeriod or lateMessageRejectionStartDateTime can be specified.\tno (default == none) earlyMessageRejectionPeriod\tISO8601 Period\tConfigure tasks to reject messages with timestamps later than this period after the task reached its taskDuration; for example if this is set to PT1H, the taskDuration is set to PT1H and the supervisor creates a task at 2016-01-01T12:00Z, messages with timestamps later than 2016-01-01T14:00Z will be dropped. Note: Tasks sometimes run past their task duration, for example, in cases of supervisor failover. 
Setting earlyMessageRejectionPeriod too low may cause messages to be dropped unexpectedly whenever a task runs past its originally configured task duration.\tno (default == none) autoScalerConfig\tObject\tDefines auto scaling behavior for Kafka ingest tasks. See Tasks Autoscaler Properties.\tno (default == null) idleConfig\tObject\tDefines how and when Kafka Supervisor can become idle. See Idle Supervisor Configuration for more details.\tno (default == null) "},{"title":"Task Autoscaler Properties","type":1,"pageTitle":"Apache Kafka supervisor reference","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-reference#task-autoscaler-properties","content":"Property\tDescription\tRequiredenableTaskAutoScaler\tEnable or disable autoscaling. false or blank disables the autoScaler even when autoScalerConfig is not null\tno (default == false) taskCountMax\tMaximum number of ingestion tasks. Set taskCountMax >= taskCountMin. If taskCountMax > {numKafkaPartitions}, Druid only scales reading tasks up to the {numKafkaPartitions}. In this case taskCountMax is ignored.\tyes taskCountMin\tMinimum number of ingestion tasks. When you enable autoscaler, Druid ignores the value of taskCount in IOConfig and starts with the taskCountMin number of tasks.\tyes minTriggerScaleActionFrequencyMillis\tMinimum time interval between two scale actions.\tno (default == 600000) autoScalerStrategy\tThe algorithm of autoScaler. Only supports lagBased. See Lag Based AutoScaler Strategy Related Properties for details.\tno (default == lagBased) "},{"title":"Lag Based AutoScaler Strategy Related Properties","type":1,"pageTitle":"Apache Kafka supervisor reference","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-reference#lag-based-autoscaler-strategy-related-properties","content":"Property\tDescription\tRequiredlagCollectionIntervalMillis\tPeriod of lag points collection.\tno (default == 30000) lagCollectionRangeMillis\tThe total time window of lag collection. Use with lagCollectionIntervalMillis,it means that in the recent lagCollectionRangeMillis, collect lag metric points every lagCollectionIntervalMillis.\tno (default == 600000) scaleOutThreshold\tThe threshold of scale out action\tno (default == 6000000) triggerScaleOutFractionThreshold\tIf triggerScaleOutFractionThreshold percent of lag points are higher than scaleOutThreshold, then do scale out action.\tno (default == 0.3) scaleInThreshold\tThe Threshold of scale in action\tno (default == 1000000) triggerScaleInFractionThreshold\tIf triggerScaleInFractionThreshold percent of lag points are lower than scaleOutThreshold, then do scale in action.\tno (default == 0.9) scaleActionStartDelayMillis\tNumber of milliseconds after supervisor starts when first check scale logic.\tno (default == 300000) scaleActionPeriodMillis\tThe frequency of checking whether to do scale action in millis\tno (default == 60000) scaleInStep\tHow many tasks to reduce at a time\tno (default == 1) scaleOutStep\tHow many tasks to add at a time\tno (default == 2) "},{"title":"Idle Supervisor Configuration","type":1,"pageTitle":"Apache Kafka supervisor reference","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-reference#idle-supervisor-configuration","content":"info Note that Idle state transitioning is currently designated as experimental. 
Property\tDescription\tRequiredenabled\tIf true, Kafka supervisor will become idle if there is no data on input stream/topic for some time.\tno (default == false) inactiveAfterMillis\tSupervisor is marked as idle if all existing data has been read from input topic and no new data has been published for inactiveAfterMillis milliseconds.\tno (default == 600_000) info When the supervisor enters the idle state, no new tasks will be launched subsequent to the completion of the currently executing tasks. This strategy may lead to reduced costs for cluster operators while using topics that get sporadic data. The following example demonstrates supervisor spec with lagBased autoScaler and idle config enabled: { "type": "kafka", "spec": { "dataSchema": { ... }, "ioConfig": { "topic": "metrics", "inputFormat": { "type": "json" }, "consumerProperties": { "bootstrap.servers": "localhost:9092" }, "autoScalerConfig": { "enableTaskAutoScaler": true, "taskCountMax": 6, "taskCountMin": 2, "minTriggerScaleActionFrequencyMillis": 600000, "autoScalerStrategy": "lagBased", "lagCollectionIntervalMillis": 30000, "lagCollectionRangeMillis": 600000, "scaleOutThreshold": 6000000, "triggerScaleOutFractionThreshold": 0.3, "scaleInThreshold": 1000000, "triggerScaleInFractionThreshold": 0.9, "scaleActionStartDelayMillis": 300000, "scaleActionPeriodMillis": 60000, "scaleInStep": 1, "scaleOutStep": 2 }, "taskCount":1, "replicas":1, "taskDuration":"PT1H", "idleConfig": { "enabled": true, "inactiveAfterMillis": 600000 } }, "tuningConfig":{ ... } } } "},{"title":"More on consumerProperties","type":1,"pageTitle":"Apache Kafka supervisor reference","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-reference#more-on-consumerproperties","content":"Consumer properties must contain a property bootstrap.servers with a list of Kafka brokers in the form: <BROKER_1>:<PORT_1>,<BROKER_2>:<PORT_2>,.... By default, isolation.level is set to read_committed. If you use older versions of Kafka servers without transactions support or don't want Druid to consume only committed transactions, set isolation.level to read_uncommitted. In some cases, you may need to fetch consumer properties at runtime. For example, when bootstrap.servers is not known upfront, or is not static. To enable SSL connections, you must provide passwords for keystore, truststore and key secretly. You can provide configurations at runtime with a dynamic config provider implementation like the environment variable config provider that comes with Druid. For more information, see DynamicConfigProvider. For example, if you are using SASL and SSL with Kafka, set the following environment variables for the Druid user on the machines running the Overlord and the Peon services: export KAFKA_JAAS_CONFIG="org.apache.kafka.common.security.plain.PlainLoginModule required username='admin_user' password='admin_password';" export SSL_KEY_PASSWORD=mysecretkeypassword export SSL_KEYSTORE_PASSWORD=mysecretkeystorepassword export SSL_TRUSTSTORE_PASSWORD=mysecrettruststorepassword "druid.dynamic.config.provider": { "type": "environment", "variables": { "sasl.jaas.config": "KAFKA_JAAS_CONFIG", "ssl.key.password": "SSL_KEY_PASSWORD", "ssl.keystore.password": "SSL_KEYSTORE_PASSWORD", "ssl.truststore.password": "SSL_TRUSTSTORE_PASSWORD" } } } Verify that you've changed the values for all configurations to match your own environment. You can use the environment variable config provider syntax in the Consumer properties field on the Connect tab in the Load Data UI in the web console. 
When connecting to Kafka, Druid replaces the environment variables with their corresponding values. Note: You can provide SSL connections with Password Provider interface to define the keystore, truststore, and key, but this feature is deprecated. "},{"title":"Specifying data format","type":1,"pageTitle":"Apache Kafka supervisor reference","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-reference#specifying-data-format","content":"Kafka indexing service supports both inputFormat and parser to specify the data format. Use the inputFormat to specify the data format for Kafka indexing service unless you need a format only supported by the legacy parser. Supported inputFormats include: csvtsvjsonkafkaavro_streamavro_ocfprotobuf For more information, see Data formats. You can also read thrift formats using parser. "},{"title":"KafkaSupervisorTuningConfig","type":1,"pageTitle":"Apache Kafka supervisor reference","url":"/docs/27.0.0/development/extensions-core/kafka-supervisor-reference#kafkasupervisortuningconfig","content":"The tuningConfig is optional and default parameters will be used if no tuningConfig is specified. Field\tType\tDescription\tRequiredtype\tString\tThe indexing task type, this should always be kafka.\tyes maxRowsInMemory\tInteger\tThe number of rows to aggregate before persisting. This number is the post-aggregation rows, so it is not equivalent to the number of input events, but the number of aggregated rows that those events result in. This is used to manage the required JVM heap size. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists). Normally user does not need to set this, but depending on the nature of data, if rows are short in terms of bytes, user may not want to store a million rows in memory and this value should be set.\tno (default == 1000000) maxBytesInMemory\tLong\tThe number of bytes to aggregate in heap memory before persisting. This is based on a rough estimate of memory usage and not actual usage. Normally this is computed internally and user does not need to set it. The maximum heap memory usage for indexing is maxBytesInMemory * (2 + maxPendingPersists).\tno (default == One-sixth of max JVM memory) maxRowsPerSegment\tInteger\tThe number of rows to aggregate into a segment; this number is post-aggregation rows. Handoff will happen either if maxRowsPerSegment or maxTotalRows is hit or every intermediateHandoffPeriod, whichever happens earlier.\tno (default == 5000000) maxTotalRows\tLong\tThe number of rows to aggregate across all segments; this number is post-aggregation rows. Handoff will happen either if maxRowsPerSegment or maxTotalRows is hit or every intermediateHandoffPeriod, whichever happens earlier.\tno (default == 20000000) intermediatePersistPeriod\tISO8601 Period\tThe period that determines the rate at which intermediate persists occur.\tno (default == PT10M) maxPendingPersists\tInteger\tMaximum number of persists that can be pending but not started. If this limit would be exceeded by a new intermediate persist, ingestion will block until the currently-running persist finishes. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists).\tno (default == 0, meaning one persist can be running concurrently with ingestion, and none can be queued up) indexSpec\tObject\tTune how data is indexed. 
See IndexSpec for more information.\tno indexSpecForIntermediatePersists Defines segment storage format options to be used at indexing time for intermediate persisted temporary segments. This can be used to disable dimension/metric compression on intermediate segments to reduce memory required for final merging. However, disabling compression on intermediate segments might increase page cache use while they are used before getting merged into final segment published, see IndexSpec for possible values.\tno (default = same as indexSpec) reportParseExceptions\tBoolean\tDEPRECATED. If true, exceptions encountered during parsing will be thrown and will halt ingestion; if false, unparseable rows and fields will be skipped. Setting reportParseExceptions to true will override existing configurations for maxParseExceptions and maxSavedParseExceptions, setting maxParseExceptions to 0 and limiting maxSavedParseExceptions to no more than 1.\tno (default == false) handoffConditionTimeout\tLong\tMilliseconds to wait for segment handoff. It must be >= 0, where 0 means to wait forever.\tno (default == 0) resetOffsetAutomatically\tBoolean\tControls behavior when Druid needs to read Kafka messages that are no longer available (i.e. when OffsetOutOfRangeException is encountered). If false, the exception will bubble up, which will cause your tasks to fail and ingestion to halt. If this occurs, manual intervention is required to correct the situation; potentially using the Reset Supervisor API. This mode is useful for production, since it will make you aware of issues with ingestion. If true, Druid will automatically reset to the earlier or latest offset available in Kafka, based on the value of the useEarliestOffset property (earliest if true, latest if false). Note that this can lead to data being DROPPED (if useEarliestOffset is false) or DUPLICATED (if useEarliestOffset is true) without your knowledge. Messages will be logged indicating that a reset has occurred, but ingestion will continue. This mode is useful for non-production situations, since it will make Druid attempt to recover from problems automatically, even if they lead to quiet dropping or duplicating of data. This feature behaves similarly to the Kafka auto.offset.reset consumer property.\tno (default == false) workerThreads\tInteger\tThe number of threads that the supervisor uses to handle requests/responses for worker tasks, along with any other internal asynchronous operation.\tno (default == min(10, taskCount)) chatAsync\tBoolean\tIf true, use asynchronous communication with indexing tasks, and ignore the chatThreads parameter. If false, use synchronous communication in a thread pool of size chatThreads.\tno (default == true) chatThreads\tInteger\tThe number of threads that will be used for communicating with indexing tasks. Ignored if chatAsync is true (the default).\tno (default == min(10, taskCount * replicas)) chatRetries\tInteger\tThe number of times HTTP requests to indexing tasks will be retried before considering tasks unresponsive.\tno (default == 8) httpTimeout\tISO8601 Period\tHow long to wait for a HTTP response from an indexing task.\tno (default == PT10S) shutdownTimeout\tISO8601 Period\tHow long to wait for the supervisor to attempt a graceful shutdown of tasks before exiting.\tno (default == PT80S) offsetFetchPeriod\tISO8601 Period\tHow often the supervisor queries Kafka and the indexing tasks to fetch current offsets and calculate lag. 
If the user-specified value is below the minimum value (PT5S), the supervisor ignores the value and uses the minimum value instead.\tno (default == PT30S, min == PT5S) segmentWriteOutMediumFactory\tObject\tSegment write-out medium to use when creating segments. See below for more information.\tno (not specified by default, the value from druid.peon.defaultSegmentWriteOutMediumFactory.type is used) intermediateHandoffPeriod\tISO8601 Period\tHow often the tasks should hand off segments. Handoff will happen either if maxRowsPerSegment or maxTotalRows is hit or every intermediateHandoffPeriod, whichever happens earlier.\tno (default == P2147483647D) logParseExceptions\tBoolean\tIf true, log an error message when a parsing exception occurs, containing information about the row where the error occurred.\tno, default == false maxParseExceptions\tInteger\tThe maximum number of parse exceptions that can occur before the task halts ingestion and fails. Overridden if reportParseExceptions is set.\tno, unlimited default maxSavedParseExceptions\tInteger\tWhen a parse exception occurs, Druid can keep track of the most recent parse exceptions. maxSavedParseExceptions limits how many exception instances will be saved. These saved exceptions will be made available after the task finishes in the task completion report. Overridden if reportParseExceptions is set.\tno, default == 0 IndexSpec Field\tType\tDescription\tRequiredbitmap\tObject\tCompression format for bitmap indexes. Should be a JSON object. See Bitmap types below for options.\tno (defaults to Roaring) dimensionCompression\tString\tCompression format for dimension columns. Choose from LZ4, LZF, ZSTD or uncompressed.\tno (default == LZ4) metricCompression\tString\tCompression format for primitive type metric columns. Choose from LZ4, LZF, ZSTD, uncompressed or none.\tno (default == LZ4) longEncoding\tString\tEncoding format for metric and dimension columns with type long. Choose from auto or longs. auto encodes the values using offset or lookup table depending on column cardinality, and store them with variable size. longs stores the value as is with 8 bytes each.\tno (default == longs) Bitmap types For Roaring bitmaps: Field\tType\tDescription\tRequiredtype\tString\tMust be roaring.\tyes For Concise bitmaps: Field\tType\tDescription\tRequiredtype\tString\tMust be concise.\tyes SegmentWriteOutMediumFactory Field\tType\tDescription\tRequiredtype\tString\tSee Additional Peon Configuration: SegmentWriteOutMediumFactory for explanation and available options.\tyes "},{"title":"MySQL Metadata Store","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/mysql","content":"","keywords":""},{"title":"Installing the MySQL connector library","type":1,"pageTitle":"MySQL Metadata Store","url":"/docs/27.0.0/development/extensions-core/mysql#installing-the-mysql-connector-library","content":"This extension can use Oracle's MySQL JDBC driver which is not included in the Druid distribution. You must install it separately. There are a few ways to obtain this library: It can be downloaded from the MySQL site at: https://dev.mysql.com/downloads/connector/j/It can be fetched from Maven Central at: https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.49/mysql-connector-java-5.1.49.jarIt may be available through your package manager, e.g. as libmysql-java on APT for a Debian-based OS This fetches the MySQL connector JAR file with a name like mysql-connector-java-5.1.49.jar. 
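For example, one way to fetch the Maven Central artifact linked above (a sketch; substitute whichever connector version your environment requires):
curl -O https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.49/mysql-connector-java-5.1.49.jar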
Copy or symlink this file inside the folder extensions/mysql-metadata-storage under the distribution root directory. "},{"title":"Alternative: Installing the MariaDB connector library","type":1,"pageTitle":"MySQL Metadata Store","url":"/docs/27.0.0/development/extensions-core/mysql#alternative-installing-the-mariadb-connector-library","content":"This extension also supports using the MariaDB connector jar, though it is also not included in the Druid distribution, so you must install it separately. Download from the MariaDB site: https://mariadb.com/downloads/connectorDownload from Maven Central: https://repo1.maven.org/maven2/org/mariadb/jdbc/mariadb-java-client/2.7.3/mariadb-java-client-2.7.3.jar This fetches the MariaDB connector JAR file with a name like maria-java-client-2.7.3.jar. Copy or symlink this file to extensions/mysql-metadata-storage under the distribution root directory. To configure the mysql-metadata-storage extension to use the MariaDB connector library instead of MySQL, set druid.metadata.mysql.driver.driverClassName=org.mariadb.jdbc.Driver. Depending on the MariaDB client library version, the connector supports both jdbc:mysql: and jdbc:mariadb: connection URIs. However, the parameters to configure the connection vary between implementations, so be sure to check the documentation for details. "},{"title":"Setting up MySQL","type":1,"pageTitle":"MySQL Metadata Store","url":"/docs/27.0.0/development/extensions-core/mysql#setting-up-mysql","content":"Install MySQL Use your favorite package manager to install mysql, e.g.: on Ubuntu/Debian using apt apt-get install mysql-server on OS X, using Homebrew brew install mysql Alternatively, download and follow installation instructions for MySQL Community Server here:http://dev.mysql.com/downloads/mysql/. This extension also supports using MariaDB server, https://mariadb.org/download/, substituting for MariaDB in the following instructions where appropriate. Create a druid database and user Connect to MySQL from the machine where it is installed. mysql -u root Paste the following snippet into the mysql prompt: -- create a druid database, make sure to use utf8mb4 as encoding CREATE DATABASE druid DEFAULT CHARACTER SET utf8mb4; -- create a druid user CREATE USER 'druid'@'localhost' IDENTIFIED BY 'diurd'; -- grant the user all the permissions on the database we just created GRANT ALL PRIVILEGES ON druid.* TO 'druid'@'localhost'; Configure your Druid metadata storage extension: Add the following parameters to your Druid configuration, replacing <host>with the location (host name and port) of the database. druid.extensions.loadList=["mysql-metadata-storage"] druid.metadata.storage.type=mysql druid.metadata.storage.connector.connectURI=jdbc:mysql://<host>/druid druid.metadata.storage.connector.user=druid druid.metadata.storage.connector.password=diurd If using the MariaDB connector library, set druid.metadata.mysql.driver.driverClassName=org.mariadb.jdbc.Driver. "},{"title":"Encrypting MySQL connections","type":1,"pageTitle":"MySQL Metadata Store","url":"/docs/27.0.0/development/extensions-core/mysql#encrypting-mysql-connections","content":"This extension provides support for encrypting MySQL connections. To get more information about encrypting MySQL connections using TLS/SSL in general, please refer to this guide. 
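As a rough sketch, the SSL-related properties documented in the Configuration section below might be combined like this (the truststore path and password are placeholders):
druid.metadata.mysql.ssl.useSSL=true
druid.metadata.mysql.ssl.verifyServerCertificate=true
druid.metadata.mysql.ssl.trustCertificateKeyStoreUrl=file:///path/to/truststore.jks
druid.metadata.mysql.ssl.trustCertificateKeyStoreType=JKS
druid.metadata.mysql.ssl.trustCertificateKeyStorePassword=changeit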
"},{"title":"Configuration","type":1,"pageTitle":"MySQL Metadata Store","url":"/docs/27.0.0/development/extensions-core/mysql#configuration","content":"Property\tDescription\tDefault\tRequireddruid.metadata.mysql.ssl.useSSL\tEnable SSL\tfalse\tno druid.metadata.mysql.ssl.clientCertificateKeyStoreUrl\tThe file path URL to the client certificate key store.\tnone\tno druid.metadata.mysql.ssl.clientCertificateKeyStoreType\tThe type of the key store where the client certificate is stored.\tnone\tno druid.metadata.mysql.ssl.clientCertificateKeyStorePassword\tThe Password Provider or String password for the client key store.\tnone\tno druid.metadata.mysql.ssl.verifyServerCertificate\tEnables server certificate verification.\tfalse\tno druid.metadata.mysql.ssl.trustCertificateKeyStoreUrl\tThe file path to the trusted root certificate key store.\tDefault trust store provided by MySQL\tyes if verifyServerCertificate is set to true and a custom trust store is used druid.metadata.mysql.ssl.trustCertificateKeyStoreType\tThe type of the key store where trusted root certificates are stored.\tJKS\tyes if verifyServerCertificate is set to true and keystore type is not JKS druid.metadata.mysql.ssl.trustCertificateKeyStorePassword\tThe Password Provider or String password for the trust store.\tnone\tyes if verifyServerCertificate is set to true and password is not null druid.metadata.mysql.ssl.enabledSSLCipherSuites\tOverrides the existing cipher suites with these cipher suites.\tnone\tno druid.metadata.mysql.ssl.enabledTLSProtocols\tOverrides the TLS protocols with these protocols.\tnone\tno "},{"title":"MySQL InputSource","type":1,"pageTitle":"MySQL Metadata Store","url":"/docs/27.0.0/development/extensions-core/mysql#mysql-inputsource","content":"{ "type": "index_parallel", "spec": { "dataSchema": { "dataSource": "some_datasource", "dimensionsSpec": { "dimensionExclusions": [], "dimensions": [ "dim1", "dim2", "dim3" ] }, "timestampSpec": { "format": "auto", "column": "ts" }, "metricsSpec": [], "granularitySpec": { "type": "uniform", "segmentGranularity": "DAY", "queryGranularity": { "type": "none" }, "rollup": false, "intervals": null }, "transformSpec": { "filter": null, "transforms": [] } }, "ioConfig": { "type": "index_parallel", "inputSource": { "type": "sql", "database": { "type": "mysql", "connectorConfig": { "connectURI": "jdbc:mysql://some-rds-host.us-west-1.rds.amazonaws.com:3306/druid", "user": "admin", "password": "secret" } }, "sqls": [ "SELECT * FROM some_table" ] }, "inputFormat": { "type": "json" } }, "tuningConfig": { "type": "index_parallel" } } } "},{"title":"Configuration reference","type":0,"sectionRef":"#","url":"/docs/27.0.0/configuration/","content":"","keywords":""},{"title":"Recommended Configuration File Organization","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#recommended-configuration-file-organization","content":"A recommended way of organizing Druid configuration files can be seen in the conf directory in the Druid package root, shown below: $ ls -R conf druid conf/druid: _common broker coordinator historical middleManager overlord conf/druid/_common: common.runtime.properties log4j2.xml conf/druid/broker: jvm.config runtime.properties conf/druid/coordinator: jvm.config runtime.properties conf/druid/historical: jvm.config runtime.properties conf/druid/middleManager: jvm.config runtime.properties conf/druid/overlord: jvm.config runtime.properties Each directory has a runtime.properties file containing configuration properties for the 
specific Druid process corresponding to the directory (e.g., historical). The jvm.config files contain JVM flags such as heap sizing properties for each service. Common properties shared by all services are placed in _common/common.runtime.properties. "},{"title":"Configuration Interpolation","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#configuration-interpolation","content":"Configuration values can be interpolated from System Properties, Environment Variables, or local files. Below is an example of how this can be used: druid.metadata.storage.type=${env:METADATA_STORAGE_TYPE} druid.processing.tmpDir=${sys:java.io.tmpdir} druid.segmentCache.locations=${file:UTF-8:/config/segment-cache-def.json} Interpolation is also recursive so you can do: druid.segmentCache.locations=${file:UTF-8:${env:SEGMENT_DEF_LOCATION}} If the property is not set an exception will be thrown on startup, but a default can be provided if desired. Setting a default value will not work with file interpolation as an exception will be thrown if the file does not exist. druid.metadata.storage.type=${env:METADATA_STORAGE_TYPE:-mysql} druid.processing.tmpDir=${sys:java.io.tmpdir:-/tmp} If you need to set a variable that is wrapped by ${...} but do not want it to be interpolated you can escape it by adding another $. For example: config.name=$${value} "},{"title":"Common Configurations","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#common-configurations","content":"The properties under this section are common configurations that should be shared across all Druid services in a cluster. "},{"title":"JVM Configuration Best Practices","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#jvm-configuration-best-practices","content":"There are four JVM parameters that we set on all of our processes: -Duser.timezone=UTC: This sets the default timezone of the JVM to UTC. We always set this and do not test with other default timezones, so local timezones might work, but they also might uncover weird and interesting bugs. To issue queries in a non-UTC timezone, see query granularities -Dfile.encoding=UTF-8 This is similar to timezone, we test assuming UTF-8. Local encodings might work, but they also might result in weird and interesting bugs. -Djava.io.tmpdir=<a path> Various parts of Druid use temporary files to interact with the file system. These files can become quite large. This means that systems that have small /tmp directories can cause problems for Druid. Therefore, set the JVM tmp directory to a location with ample space. Also consider the following when configuring the JVM tmp directory: The temp directory should not be volatile tmpfs.This directory should also have good read and write speed.Avoid NFS mount.The org.apache.druid.java.util.metrics.SysMonitor requires execute privileges on files in java.io.tmpdir. If you are using the system monitor, do not set java.io.tmpdir to noexec. -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager This allows log4j2 to handle logs for non-log4j2 components (like jetty) which use standard java logging. "},{"title":"Extensions","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#extensions","content":"Many of Druid's external dependencies can be plugged in as modules. 
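As an illustration, a hypothetical druid.extensions.loadList entry in common.runtime.properties might look like the following (the extension names are examples drawn from elsewhere in these docs; list only extensions you have actually installed):
druid.extensions.loadList=["mysql-metadata-storage", "druid-hdfs-storage", "simple-client-sslcontext"]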
Extensions can be provided using the following configs: Property\tDescription\tDefaultdruid.extensions.directory\tThe root extension directory where user can put extensions related files. Druid will load extensions stored under this directory.\textensions (This is a relative path to Druid's working directory) druid.extensions.hadoopDependenciesDir\tThe root hadoop dependencies directory where user can put hadoop related dependencies files. Druid will load the dependencies based on the hadoop coordinate specified in the hadoop index task.\thadoop-dependencies (This is a relative path to Druid's working directory druid.extensions.loadList\tA JSON array of extensions to load from extension directories by Druid. If it is not specified, its value will be null and Druid will load all the extensions under druid.extensions.directory. If its value is empty list [], then no extensions will be loaded at all. It is also allowed to specify absolute path of other custom extensions not stored in the common extensions directory.\tnull druid.extensions.searchCurrentClassloader\tThis is a boolean flag that determines if Druid will search the main classloader for extensions. It defaults to true but can be turned off if you have reason to not automatically add all modules on the classpath.\ttrue druid.extensions.useExtensionClassloaderFirst\tThis is a boolean flag that determines if Druid extensions should prefer loading classes from their own jars rather than jars bundled with Druid. If false, extensions must be compatible with classes provided by any jars bundled with Druid. If true, extensions may depend on conflicting versions.\tfalse druid.extensions.hadoopContainerDruidClasspath\tHadoop Indexing launches hadoop jobs and this configuration provides way to explicitly set the user classpath for the hadoop job. By default this is computed automatically by druid based on the druid process classpath and set of extensions. However, sometimes you might want to be explicit to resolve dependency conflicts between druid and hadoop.\tnull druid.extensions.addExtensionsToHadoopContainer\tOnly applicable if druid.extensions.hadoopContainerDruidClasspath is provided. If set to true, then extensions specified in the loadList are added to hadoop container classpath. Note that when druid.extensions.hadoopContainerDruidClasspath is not provided then extensions are always added to hadoop container classpath.\tfalse "},{"title":"Modules","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#modules","content":"Property\tDescription\tDefaultdruid.modules.excludeList\tA JSON array of canonical class names (e.g., "org.apache.druid.somepackage.SomeModule") of module classes which shouldn't be loaded, even if they are found in extensions specified by druid.extensions.loadList, or in the list of core modules specified to be loaded on a particular Druid process type. Useful when some useful extension contains some module, which shouldn't be loaded on some Druid process type because some dependencies of that module couldn't be satisfied.\t[] "},{"title":"ZooKeeper","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#zookeeper","content":"We recommend just setting the base ZK path and the ZK service host, but all ZK paths that Druid uses can be overwritten to absolute paths. Property\tDescription\tDefaultdruid.zk.paths.base\tBase ZooKeeper path.\t/druid druid.zk.service.host\tThe ZooKeeper hosts to connect to. 
This is a REQUIRED property and therefore a host address must be supplied.\tnone druid.zk.service.user\tThe username to authenticate with ZooKeeper. This is an optional property.\tnone druid.zk.service.pwd\tThe Password Provider or the string password to authenticate with ZooKeeper. This is an optional property.\tnone druid.zk.service.authScheme\tdigest is the only authentication scheme supported.\tdigest ZooKeeper Behavior Property\tDescription\tDefaultdruid.zk.service.sessionTimeoutMs\tZooKeeper session timeout, in milliseconds.\t30000 druid.zk.service.connectionTimeoutMs\tZooKeeper connection timeout, in milliseconds.\t15000 druid.zk.service.compress\tBoolean flag for whether or not created Znodes should be compressed.\ttrue druid.zk.service.acl\tBoolean flag for whether or not to enable ACL security for ZooKeeper. If ACL is enabled, zNode creators will have all permissions.\tfalse Path Configuration Druid interacts with ZK through a set of standard path configurations. We recommend just setting the base ZK path, but all ZK paths that Druid uses can be overwritten to absolute paths. Property\tDescription\tDefaultdruid.zk.paths.base\tBase ZooKeeper path.\t/druid druid.zk.paths.propertiesPath\tZooKeeper properties path.\t${druid.zk.paths.base}/properties druid.zk.paths.announcementsPath\tDruid process announcement path.\t${druid.zk.paths.base}/announcements druid.zk.paths.liveSegmentsPath\tCurrent path for where Druid processes announce their segments.\t${druid.zk.paths.base}/segments druid.zk.paths.loadQueuePath\tEntries here cause Historical processes to load and drop segments.\t${druid.zk.paths.base}/loadQueue druid.zk.paths.coordinatorPath\tUsed by the Coordinator for leader election.\t${druid.zk.paths.base}/coordinator druid.zk.paths.servedSegmentsPath\tDeprecated. Legacy path for where Druid processes announce their segments.\t${druid.zk.paths.base}/servedSegments The indexing service also uses its own set of paths. These configs can be included in the common configuration. Property\tDescription\tDefaultdruid.zk.paths.indexer.base\tBase ZooKeeper path for\t${druid.zk.paths.base}/indexer druid.zk.paths.indexer.announcementsPath\tMiddle managers announce themselves here.\t${druid.zk.paths.indexer.base}/announcements druid.zk.paths.indexer.tasksPath\tUsed to assign tasks to MiddleManagers.\t${druid.zk.paths.indexer.base}/tasks druid.zk.paths.indexer.statusPath\tParent path for announcement of task statuses.\t${druid.zk.paths.indexer.base}/status If druid.zk.paths.base and druid.zk.paths.indexer.base are both set, and none of the other druid.zk.paths.* or druid.zk.paths.indexer.* values are set, then the other properties will be evaluated relative to their respective base. For example, if druid.zk.paths.base is set to /druid1 and druid.zk.paths.indexer.base is set to /druid2 then druid.zk.paths.announcementsPath will default to /druid1/announcements while druid.zk.paths.indexer.announcementsPath will default to /druid2/announcements. The following path is used for service discovery. It is not affected by druid.zk.paths.base and must be specified separately. 
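For instance, the ZooKeeper-related entries in common.runtime.properties might look like this sketch (the host names are placeholders; the discovery path shown matches its documented default):
druid.zk.service.host=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
druid.zk.paths.base=/druid
druid.discovery.curator.path=/druid/discovery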
Property\tDescription\tDefaultdruid.discovery.curator.path\tServices announce themselves under this ZooKeeper path.\t/druid/discovery "},{"title":"TLS","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#tls","content":"General Configuration Property\tDescription\tDefaultdruid.enablePlaintextPort\tEnable/Disable HTTP connector.\ttrue druid.enableTlsPort\tEnable/Disable HTTPS connector.\tfalse Although not recommended but both HTTP and HTTPS connectors can be enabled at a time and respective ports are configurable using druid.plaintextPortand druid.tlsPort properties on each process. Please see Configuration section of individual processes to check the valid and default values for these ports. Jetty Server TLS Configuration Druid uses Jetty as an embedded web server. To learn more about TLS/SSL, certificates, and related concepts in Jetty, including explanations of the configuration settings below, see "Configuring SSL/TLS KeyStores" in the Jetty Operations Guide. For information about TLS/SSL support in Java in general, see the Java Secure Socket Extension (JSSE) Reference Guide. The Java Cryptography Architecture Standard Algorithm Name Documentation for JDK 8 lists all possible values for the following properties, among others provided by the Java implementation. Property\tDescription\tDefault\tRequireddruid.server.https.keyStorePath\tThe file path or URL of the TLS/SSL Key store.\tnone\tyes druid.server.https.keyStoreType\tThe type of the key store.\tnone\tyes druid.server.https.certAlias\tAlias of TLS/SSL certificate for the connector.\tnone\tyes druid.server.https.keyStorePassword\tThe Password Provider or String password for the Key Store.\tnone\tyes Following table contains non-mandatory advanced configuration options, use caution. Property\tDescription\tDefault\tRequireddruid.server.https.keyManagerFactoryAlgorithm\tAlgorithm to use for creating KeyManager, more details here.\tjavax.net.ssl.KeyManagerFactory.getDefaultAlgorithm()\tno druid.server.https.keyManagerPassword\tThe Password Provider or String password for the Key Manager.\tnone\tno druid.server.https.includeCipherSuites\tList of cipher suite names to include. You can either use the exact cipher suite name or a regular expression.\tJetty's default include cipher list\tno druid.server.https.excludeCipherSuites\tList of cipher suite names to exclude. You can either use the exact cipher suite name or a regular expression.\tJetty's default exclude cipher list\tno druid.server.https.includeProtocols\tList of exact protocols names to include.\tJetty's default include protocol list\tno druid.server.https.excludeProtocols\tList of exact protocols names to exclude.\tJetty's default exclude protocol list\tno Internal Client TLS Configuration (requires simple-client-sslcontext extension) These properties apply to the SSLContext that will be provided to the internal HTTP client that Druid services use to communicate with each other. These properties require the simple-client-sslcontext extension to be loaded. Without it, Druid services will be unable to communicate with each other when TLS is enabled. 
Property\tDescription\tDefault\tRequireddruid.client.https.protocol\tSSL protocol to use.\tTLSv1.2\tno druid.client.https.trustStoreType\tThe type of the key store where trusted root certificates are stored.\tjava.security.KeyStore.getDefaultType()\tno druid.client.https.trustStorePath\tThe file path or URL of the TLS/SSL Key store where trusted root certificates are stored.\tnone\tyes druid.client.https.trustStoreAlgorithm\tAlgorithm to be used by TrustManager to validate certificate chains\tjavax.net.ssl.TrustManagerFactory.getDefaultAlgorithm()\tno druid.client.https.trustStorePassword\tThe Password Provider or String password for the Trust Store.\tnone\tyes This document lists all the possible values for the above mentioned configs among others provided by Java implementation. "},{"title":"Authentication and Authorization","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#authentication-and-authorization","content":"Property\tType\tDescription\tDefault\tRequireddruid.auth.authenticatorChain\tJSON List of Strings\tList of Authenticator type names\t["allowAll"]\tno druid.escalator.type\tString\tType of the Escalator that should be used for internal Druid communications. This Escalator must use an authentication scheme that is supported by an Authenticator in druid.auth.authenticatorChain.\t"noop"\tno druid.auth.authorizers\tJSON List of Strings\tList of Authorizer type names\t["allowAll"]\tno druid.auth.unsecuredPaths\tList of Strings\tList of paths for which security checks will not be performed. All requests to these paths will be allowed.\t[]\tno druid.auth.allowUnauthenticatedHttpOptions\tBoolean\tIf true, skip authentication checks for HTTP OPTIONS requests. This is needed for certain use cases, such as supporting CORS pre-flight requests. Note that disabling authentication checks for OPTIONS requests will allow unauthenticated users to determine what Druid endpoints are valid (by checking if the OPTIONS request returns a 200 instead of 404), so enabling this option may reveal information about server configuration, including information about what extensions are loaded (if those extensions add endpoints).\tfalse\tno For more information, please see Authentication and Authorization. For configuration options for specific auth extensions, please refer to the extension documentation. "},{"title":"Startup Logging","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#startup-logging","content":"All processes can log debugging information on startup. Property\tDescription\tDefaultdruid.startup.logging.logProperties\tLog all properties on startup (from common.runtime.properties, runtime.properties, and the JVM command line).\tfalse druid.startup.logging.maskProperties\tMasks sensitive properties (passwords, for example) containing theses words.\t["password"] Note that some sensitive information may be logged if these settings are enabled. "},{"title":"Request Logging","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#request-logging","content":"All processes that can serve queries can also log the query requests they see. Broker processes can additionally log the SQL requests (both from HTTP and JDBC) they see. For an example of setting up request logging, see Request logging. Property\tDescription\tDefaultdruid.request.logging.type\tHow to log every query request. 
Choices: noop, file, emitter, slf4j, filtered, composing, switching\tnoop (request logging disabled by default) Note that you can enable sending all the HTTP requests to log by setting org.apache.druid.jetty.RequestLog to the DEBUG level. See Logging for more information. File request logging The file request logger stores daily request logs on disk. Property\tDescription\tDefaultdruid.request.logging.dir\tHistorical, Realtime and Broker processes maintain request logs of all of the requests they get (interaction is via POST, so normal request logs don’t generally capture information about the actual query), this specifies the directory to store the request logs in\tnone druid.request.logging.filePattern\tJoda datetime format for each file\t"yyyy-MM-dd'.log'" druid.request.logging.durationToRetain\tPeriod to retain the request logs on disk. The period should be at least longer than P1D.\tnone The format of request logs is TSV, one line per requests, with five fields: timestamp, remote_addr, native_query, query_context, sql_query. For native JSON request, the sql_query field is empty. Example 2019-01-14T10:00:00.000Z 127.0.0.1 {"queryType":"topN","dataSource":{"type":"table","name":"wikiticker"},"virtualColumns":[],"dimension":{"type":"LegacyDimensionSpec","dimension":"page","outputName":"page","outputType":"STRING"},"metric":{"type":"LegacyTopNMetricSpec","metric":"count"},"threshold":10,"intervals":{"type":"LegacySegmentSpec","intervals":["2015-09-12T00:00:00.000Z/2015-09-13T00:00:00.000Z"]},"filter":null,"granularity":{"type":"all"},"aggregations":[{"type":"count","name":"count"}],"postAggregations":[],"context":{"queryId":"74c2d540-d700-4ebd-b4a9-3d02397976aa"},"descending":false} {"query/time":100,"query/bytes":800,"success":true,"identity":"user1"} For SQL query request, the native_query field is empty. Example 2019-01-14T10:00:00.000Z 127.0.0.1 {"sqlQuery/time":100, "sqlQuery/planningTimeMs":10, "sqlQuery/bytes":600, "success":true, "identity":"user1"} {"query":"SELECT page, COUNT(*) AS Edits FROM wikiticker WHERE TIME_IN_INTERVAL(\\"__time\\", '2015-09-12/2015-09-13') GROUP BY page ORDER BY Edits DESC LIMIT 10","context":{"sqlQueryId":"c9d035a0-5ffd-4a79-a865-3ffdadbb5fdd","nativeQueryIds":"[490978e4-f5c7-4cf6-b174-346e63cf8863]"}} Emitter request logging The emitter request logger emits every request to the external location specified in the emitter configuration. Property\tDescription\tDefaultdruid.request.logging.feed\tFeed name for requests.\tnone SLF4J request logging The slf4j request logger logs every request using SLF4J. It serializes native queries into JSON in the log message regardless of the SLF4J format specification. Requests are logged under the class org.apache.druid.server.log.LoggingRequestLogger. Property\tDescription\tDefaultdruid.request.logging.setMDC\tIf you want to set MDC entries within the log entry, set this value to true. Your logging system must be configured to support MDC in order to format this data.\tfalse druid.request.logging.setContextMDC\tSet to "true" to add the Druid query context to the MDC entries. 
Only applies when setMDC is true.\tfalse For a native query, the following MDC fields are populated when setMDC is true: MDC field\tDescriptionqueryId\tThe query ID sqlQueryId\tThe SQL query ID if this query is part of a SQL request dataSource\tThe datasource the query was against queryType\tThe type of the query hasFilters\tIf the query has any filters remoteAddr\tThe remote address of the requesting client duration\tThe duration of the query interval resultOrdering\tThe ordering of results descending\tIf the query is a descending query Filtered request logging The filtered request logger filters requests based on the query type or how long a query takes to complete. For native queries, the logger only logs requests when the query/time metric exceeds the threshold provided in queryTimeThresholdMs. For SQL queries, it only logs requests when the sqlQuery/time metric exceeds threshold provided in sqlQueryTimeThresholdMs. See Metrics for more details on query metrics. Requests that meet the threshold are logged using the request logger type set in druid.request.logging.delegate.type. Property\tDescription\tDefaultdruid.request.logging.queryTimeThresholdMs\tThreshold value for the query/time metric in milliseconds.\t0, i.e., no filtering druid.request.logging.sqlQueryTimeThresholdMs\tThreshold value for the sqlQuery/time metric in milliseconds.\t0, i.e., no filtering druid.request.logging.mutedQueryTypes\tQuery requests of these types are not logged. Query types are defined as string objects corresponding to the "queryType" value for the specified query in the Druid's native JSON query API. Misspelled query types will be ignored. Example to ignore scan and timeBoundary queries: ["scan", "timeBoundary"]\t[] druid.request.logging.delegate.type\tType of delegate request logger to log requests.\tnone Composing request logging The composing request logger emits request logs to multiple request loggers. Property\tDescription\tDefaultdruid.request.logging.loggerProviders\tList of request loggers for emitting request logs.\tnone Switching request logging The switching request logger routes native query request logs to one request logger and SQL query request logs to another request logger. Property\tDescription\tDefaultdruid.request.logging.nativeQueryLogger\tRequest logger for emitting native query request logs.\tnone druid.request.logging.sqlQueryLogger\tRequest logger for emitting SQL query request logs.\tnone "},{"title":"Audit Logging","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#audit-logging","content":"Coordinator and Overlord log changes to lookups, segment load/drop rules, dynamic configuration changes for auditing Property\tDescription\tDefaultdruid.audit.manager.auditHistoryMillis\tDefault duration for querying audit history.\t1 week druid.audit.manager.includePayloadAsDimensionInMetric\tBoolean flag on whether to add payload column in service metric.\tfalse druid.audit.manager.maxPayloadSizeBytes\tThe maximum size of audit payload to store in Druid's metadata store audit table. If the size of audit payload exceeds this value, the audit log would be stored with a message indicating that the payload was omitted instead. Setting maxPayloadSizeBytes to -1 (default value) disables this check, meaning Druid will always store audit payload regardless of it's size. Setting to any negative number other than -1 is invalid. 
Human-readable format is supported, see here.\t-1 druid.audit.manager.skipNullField\tIf true, the audit payload stored in metadata store will exclude any field with null value.\tfalse "},{"title":"Enabling Metrics","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#enabling-metrics","content":"You can configure Druid processes to emit metrics regularly from a number of monitors via emitters. Property\tDescription\tDefaultdruid.monitoring.emissionPeriod\tFrequency that Druid emits metrics.\tPT1M druid.monitoring.monitors\tSets list of Druid monitors used by a process.\tnone (no monitors) druid.emitter\tSetting this value initializes one of the emitter modules.\tnoop (metric emission disabled by default) Metrics monitors Metric monitoring is an essential part of Druid operations. The following monitors are available: Name\tDescriptionorg.apache.druid.client.cache.CacheMonitor\tEmits metrics (to logs) about the segment results cache for Historical and Broker processes. Reports typical cache statistics include hits, misses, rates, and size (bytes and number of entries), as well as timeouts and and errors. org.apache.druid.java.util.metrics.SysMonitor\tReports on various system activities and statuses using the SIGAR library. Requires execute privileges on files in java.io.tmpdir. Do not set java.io.tmpdir to noexec when using SysMonitor. org.apache.druid.java.util.metrics.JvmMonitor\tReports various JVM-related statistics. org.apache.druid.java.util.metrics.JvmCpuMonitor\tReports statistics of CPU consumption by the JVM. org.apache.druid.java.util.metrics.CpuAcctDeltaMonitor\tReports consumed CPU as per the cpuacct cgroup. org.apache.druid.java.util.metrics.JvmThreadsMonitor\tReports Thread statistics in the JVM, like numbers of total, daemon, started, died threads. org.apache.druid.java.util.metrics.CgroupCpuMonitor\tReports CPU shares and quotas as per the cpu cgroup. org.apache.druid.java.util.metrics.CgroupCpuSetMonitor\tReports CPU core/HT and memory node allocations as per the cpuset cgroup. org.apache.druid.java.util.metrics.CgroupMemoryMonitor\tReports memory statistic as per the memory cgroup. org.apache.druid.server.metrics.EventReceiverFirehoseMonitor\tReports how many events have been queued in the EventReceiverFirehose. org.apache.druid.server.metrics.HistoricalMetricsMonitor\tReports statistics on Historical processes. Available only on Historical processes. org.apache.druid.server.metrics.SegmentStatsMonitor\tEXPERIMENTAL Reports statistics about segments on Historical processes. Available only on Historical processes. Not to be used when lazy loading is configured. org.apache.druid.server.metrics.QueryCountStatsMonitor\tReports how many queries have been successful/failed/interrupted. org.apache.druid.server.emitter.HttpEmittingMonitor\tReports internal metrics of http or parametrized emitter (see below). Must not be used with another emitter type. See the description of the metrics here: https://github.com/apache/druid/pull/4973. org.apache.druid.server.metrics.TaskCountStatsMonitor\tReports how many ingestion tasks are currently running/pending/waiting and also the number of successful/failed tasks per emission period. org.apache.druid.server.metrics.TaskSlotCountStatsMonitor\tReports metrics about task slot usage per emission period. 
org.apache.druid.server.metrics.WorkerTaskCountStatsMonitor\tReports how many ingestion tasks are currently running/pending/waiting, the number of successful/failed tasks, and metrics about task slot usage for the reporting worker, per emission period. Only supported by middleManager node types. org.apache.druid.server.metrics.ServiceStatusMonitor\tReports a heartbeat for the service. For example, you might configure monitors on all processes for system and JVM information within common.runtime.properties as follows: druid.monitoring.monitors=["org.apache.druid.java.util.metrics.SysMonitor","org.apache.druid.java.util.metrics.JvmMonitor"] You can override cluster-wide configuration by amending the runtime.properties of individual processes. Metrics emitters There are several emitters available: noop (default) disables metric emission. logging emits logs using Log4j2. http sends POST requests of JSON events. parametrized operates like the http emitter but fine-tunes the recipient URL based on the event feed. composing initializes multiple emitter modules. graphite emits metrics to a Graphite Carbon service. switching initializes and emits to multiple emitter modules based on the event feed. Logging Emitter Module To use this emitter module, set druid.emitter=logging. The logging emitter uses a Log4j2 logger named druid.emitter.logging.loggerClass to emit events. Each event is logged as a single JSON object with a Marker as the feed of the event. Users may wish to edit the log4j config to route these logs to different sources based on the feed of the event. Property\tDescription\tDefaultdruid.emitter.logging.loggerClass\tThe class used for logging.\torg.apache.druid.java.util.emitter.core.LoggingEmitter druid.emitter.logging.logLevel\tChoices: debug, info, warn, error. The log level at which messages are logged.\tinfo HTTP Emitter Module Property\tDescription\tDefaultdruid.emitter.http.flushMillis\tHow often the internal message buffer is flushed (data is sent).\t60000 druid.emitter.http.flushCount\tHow many messages the internal message buffer can hold before flushing (sending).\t500 druid.emitter.http.basicAuthentication\tPassword Provider for providing login and password for authentication, in "login:password" form, e.g., druid.emitter.http.basicAuthentication=admin:adminpassword uses the Default Password Provider, which allows plain text passwords.\tnot specified = no authentication druid.emitter.http.flushTimeOut\tThe timeout after which an event should be sent to the endpoint, even if internal buffers are not filled, in milliseconds.\tnot specified = no timeout druid.emitter.http.batchingStrategy\tThe strategy of how the batch is formatted. "ARRAY" means [event1,event2], "NEWLINES" means event1\\nevent2, ONLY_EVENTS means event1event2.\tARRAY druid.emitter.http.maxBatchSize\tThe maximum batch size, in bytes.\tthe minimum of (10% of JVM heap size divided by 2) or (5242880, i.e., 5 MiB) druid.emitter.http.batchQueueSizeLimit\tThe maximum number of batches in the emitter queue, if there are problems with emitting.\tthe maximum of (2) or (10% of the JVM heap size divided by 5MiB) druid.emitter.http.minHttpTimeoutMillis\tIf the rate at which batches fill would impose a timeout smaller than this value, the batch is not sent to the endpoint at all, because the send would likely fail. Configure this based on the emitter/successfulSending/minTimeMs metric. Reasonable values are 10ms..100ms.\t0 druid.emitter.http.recipientBaseUrl\tThe base URL to emit messages to. 
Druid will POST JSON to be consumed at the HTTP endpoint specified by this property.\tnone, required config HTTP Emitter Module TLS Overrides By default, when sending events to a TLS-enabled receiver, the HTTP Emitter uses an SSLContext obtained from the process described at Druid's internal communication over TLS, i.e., the same SSLContext that would be used for internal communications between Druid processes. In some use cases it may be desirable to have the HTTP Emitter use its own separate truststore configuration. For example, there may be organizational policies that prevent the TLS-enabled metrics receiver's certificate from being added to the same truststore used by Druid's internal HTTP client. The following properties allow the HTTP Emitter to use its own truststore configuration when building its SSLContext. Property\tDescription\tDefaultdruid.emitter.http.ssl.useDefaultJavaContext\tIf set to true, the HttpEmitter will use SSLContext.getDefault(), the default Java SSLContext, and all other properties below are ignored.\tfalse druid.emitter.http.ssl.trustStorePath\tThe file path or URL of the TLS/SSL Key store where trusted root certificates are stored. If this is unspecified, the HTTP Emitter will use the same SSLContext as Druid's internal HTTP client, as described in the beginning of this section, and all other properties below are ignored.\tnull druid.emitter.http.ssl.trustStoreType\tThe type of the key store where trusted root certificates are stored.\tjava.security.KeyStore.getDefaultType() druid.emitter.http.ssl.trustStoreAlgorithm\tAlgorithm to be used by TrustManager to validate certificate chains\tjavax.net.ssl.TrustManagerFactory.getDefaultAlgorithm() druid.emitter.http.ssl.trustStorePassword\tThe Password Provider or String password for the Trust Store.\tnone druid.emitter.http.ssl.protocol\tTLS protocol to use.\t"TLSv1.2" Parametrized HTTP Emitter Module The parametrized emitter takes the same configs as the http emitter using the prefix druid.emitter.parametrized.httpEmitting.. For example: druid.emitter.parametrized.httpEmitting.flushMillisdruid.emitter.parametrized.httpEmitting.flushCountdruid.emitter.parametrized.httpEmitting.ssl.trustStorePath Do not specify recipientBaseUrl with the parametrized emitter. Instead use recipientBaseUrlPattern described in the table below. Property\tDescription\tDefaultdruid.emitter.parametrized.recipientBaseUrlPattern\tThe URL pattern to send an event to, based on the event's feed. E.g., http://foo.bar/{feed}, that will send event to http://foo.bar/metrics if the event's feed is "metrics".\tnone, required config Composing Emitter Module Property\tDescription\tDefaultdruid.emitter.composing.emitters\tList of emitter modules to load, e.g., ["logging","http"].\t[] Graphite Emitter To use graphite as emitter set druid.emitter=graphite. For configuration details, see Graphite emitter for the Graphite emitter Druid extension. Switching Emitter To use switching as emitter set druid.emitter=switching. 
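For example, a sketch that routes metrics events to the http emitter and uses logging for everything else, mirroring the property reference that follows:
druid.emitter=switching
druid.emitter.switching.emitters={"metrics":["http"]}
druid.emitter.switching.defaultEmitters=["logging"]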
Property\tDescription\tDefaultdruid.emitter.switching.emitters\tJSON map of feed to list of emitter modules that will be used for the mapped feed, e.g., {"metrics":["http"], "alerts":["logging"]}\t{} druid.emitter.switching.defaultEmitters\tJSON list of emitter modules to load that will be used if there is no emitter specifically designated for that event's feed, e.g., ["logging","http"].\t[] "},{"title":"Metadata storage","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#metadata-storage","content":"These properties specify the JDBC connection and other configuration around the metadata storage. The only processes that connect to the metadata storage with these properties are the Coordinator and Overlord. Property\tDescription\tDefaultdruid.metadata.storage.type\tThe type of metadata storage to use. Choose from "mysql", "postgresql", or "derby".\tderby druid.metadata.storage.connector.connectURI\tThe JDBC URI for the database to connect to\tnone druid.metadata.storage.connector.user\tThe username to connect with.\tnone druid.metadata.storage.connector.password\tThe Password Provider or String password used to connect with.\tnone druid.metadata.storage.connector.createTables\tIf Druid requires a table and it doesn't exist, create it?\ttrue druid.metadata.storage.tables.base\tThe base name for tables.\tdruid druid.metadata.storage.tables.dataSource\tThe table to use to look for dataSources which created by Kafka Indexing Service.\tdruid_dataSource druid.metadata.storage.tables.pendingSegments\tThe table to use to look for pending segments.\tdruid_pendingSegments druid.metadata.storage.tables.segments\tThe table to use to look for segments.\tdruid_segments druid.metadata.storage.tables.rules\tThe table to use to look for segment load/drop rules.\tdruid_rules druid.metadata.storage.tables.config\tThe table to use to look for configs.\tdruid_config druid.metadata.storage.tables.tasks\tUsed by the indexing service to store tasks.\tdruid_tasks druid.metadata.storage.tables.taskLog\tUsed by the indexing service to store task logs.\tdruid_tasklogs druid.metadata.storage.tables.taskLock\tUsed by the indexing service to store task locks.\tdruid_tasklocks druid.metadata.storage.tables.supervisors\tUsed by the indexing service to store supervisor configurations.\tdruid_supervisors druid.metadata.storage.tables.audit\tThe table to use for audit history of configuration changes, e.g., Coordinator rules.\tdruid_audit "},{"title":"Deep storage","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#deep-storage","content":"The configurations concern how to push and pull Segments from deep storage. Property\tDescription\tDefaultdruid.storage.type\tChoices:local, noop, s3, hdfs, c*. The type of deep storage to use.\tlocal Local Deep Storage Local deep storage uses the local filesystem. Property\tDescription\tDefaultdruid.storage.storageDirectory\tDirectory on disk to use as deep storage.\t/tmp/druid/localStorage Noop Deep Storage This deep storage doesn't do anything. There are no configs. S3 Deep Storage This deep storage is used to interface with Amazon's S3. Note that the druid-s3-extensions extension must be loaded. The below table shows some important configurations for S3. See S3 Deep Storage for full configurations. Property\tDescription\tDefaultdruid.storage.bucket\tS3 bucket name.\tnone druid.storage.baseKey\tS3 object key prefix for storage.\tnone druid.storage.disableAcl\tBoolean flag for ACL. 
If this is set to false, the full control would be granted to the bucket owner. This may require to set additional permissions. See S3 permissions settings.\tfalse druid.storage.archiveBucket\tS3 bucket name for archiving when running the archive task.\tnone druid.storage.archiveBaseKey\tS3 object key prefix for archiving.\tnone druid.storage.sse.type\tServer-side encryption type. Should be one of s3, kms, and custom. See the below Server-side encryption section for more details.\tNone druid.storage.sse.kms.keyId\tAWS KMS key ID. This is used only when druid.storage.sse.type is kms and can be empty to use the default key ID.\tNone druid.storage.sse.custom.base64EncodedKey\tBase64-encoded key. Should be specified if druid.storage.sse.type is custom.\tNone druid.storage.useS3aSchema\tIf true, use the "s3a" filesystem when using Hadoop-based ingestion. If false, the "s3n" filesystem will be used. Only affects Hadoop-based ingestion.\tfalse HDFS Deep Storage This deep storage is used to interface with HDFS. Note that the druid-hdfs-storage extension must be loaded. Property\tDescription\tDefaultdruid.storage.storageDirectory\tHDFS directory to use as deep storage.\tnone Cassandra Deep Storage This deep storage is used to interface with Cassandra. Note that the druid-cassandra-storage extension must be loaded. Property\tDescription\tDefaultdruid.storage.host\tCassandra host.\tnone druid.storage.keyspace\tCassandra key space.\tnone "},{"title":"Ingestion Security Configuration","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#ingestion-security-configuration","content":"HDFS input source You can set the following property to specify permissible protocols for the HDFS input source. Property\tPossible Values\tDescription\tDefaultdruid.ingestion.hdfs.allowedProtocols\tList of protocols\tAllowed protocols for the HDFS input source and HDFS firehose.\t["hdfs"] HTTP input source You can set the following property to specify permissible protocols for the HTTP input source. Property\tPossible Values\tDescription\tDefaultdruid.ingestion.http.allowedProtocols\tList of protocols\tAllowed protocols for the HTTP input source and HTTP firehose.\t["http", "https"] "},{"title":"External Data Access Security Configuration","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#external-data-access-security-configuration","content":"JDBC Connections to External Databases You can use the following properties to specify permissible JDBC options for: SQL input sourceglobally cached JDBC lookupsJDBC Data Fetcher for per-lookup caching. These properties do not apply to metadata storage connections. Property\tPossible Values\tDescription\tDefaultdruid.access.jdbc.enforceAllowedProperties\tBoolean\tWhen true, Druid applies druid.access.jdbc.allowedProperties to JDBC connections starting with jdbc:postgresql:, jdbc:mysql:, or jdbc:mariadb:. When false, Druid allows any kind of JDBC connections without JDBC property validation. This config is for backward compatibility especially during upgrades since enforcing allow list can break existing ingestion jobs or lookups based on JDBC. This config is deprecated and will be removed in a future release.\ttrue druid.access.jdbc.allowedProperties\tList of JDBC properties\tDefines a list of allowed JDBC properties. Druid always enforces the list for all JDBC connections starting with jdbc:postgresql:, jdbc:mysql:, and jdbc:mariadb: if druid.access.jdbc.enforceAllowedProperties is set to true. 
This option is tested against MySQL connector 5.1.49, MariaDB connector 2.7.4, and PostgreSQL connector 42.2.14. Other connector versions might not work.\t["useSSL", "requireSSL", "ssl", "sslmode"] druid.access.jdbc.allowUnknownJdbcUrlFormat\tBoolean\tWhen false, Druid only accepts JDBC connections starting with jdbc:postgresql: or jdbc:mysql:. When true, Druid allows JDBC connections to any kind of database, but only enforces druid.access.jdbc.allowedProperties for PostgreSQL and MySQL/MariaDB.\ttrue "},{"title":"Task Logging","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#task-logging","content":"You can use the druid.indexer configuration to set a long-term storage location for task log files, and to set a retention policy. For more information about ingestion tasks and the process of generating logs, see the task reference. Log Long-term Storage Property\tDescription\tDefaultdruid.indexer.logs.type\tWhere to store task logs. noop, s3, azure, google, hdfs, file\tfile File Task Logs Store task logs in the local filesystem. Property\tDescription\tDefaultdruid.indexer.logs.directory\tLocal filesystem path.\tlog S3 Task Logs Store task logs in S3. Note that the druid-s3-extensions extension must be loaded. Property\tDescription\tDefaultdruid.indexer.logs.s3Bucket\tS3 bucket name.\tnone druid.indexer.logs.s3Prefix\tS3 key prefix.\tnone druid.indexer.logs.disableAcl\tBoolean flag for ACL. If this is set to false, full control is granted to the bucket owner. If the task logs bucket is the same as the deep storage (S3) bucket, then the value of this property will need to be set to true if druid.storage.disableAcl has been set to true.\tfalse Azure Blob Store Task Logs Store task logs in Azure Blob Store. Note: The druid-azure-extensions extension must be loaded, and this uses the same storage account as the deep storage module for azure. Property\tDescription\tDefaultdruid.indexer.logs.container\tThe Azure Blob Store container to write logs to.\tnone druid.indexer.logs.prefix\tThe path to prepend to logs.\tnone Google Cloud Storage Task Logs Store task logs in Google Cloud Storage. Note: The druid-google-extensions extension must be loaded, and this uses the same storage settings as the deep storage module for google. Property\tDescription\tDefaultdruid.indexer.logs.bucket\tThe Google Cloud Storage bucket to write logs to.\tnone druid.indexer.logs.prefix\tThe path to prepend to logs.\tnone HDFS Task Logs Store task logs in HDFS. Note that the druid-hdfs-storage extension must be loaded. Property\tDescription\tDefaultdruid.indexer.logs.directory\tThe directory to store logs.\tnone Log Retention Policy Property\tDescription\tDefaultdruid.indexer.logs.kill.enabled\tBoolean value for whether to enable deletion of old task logs. If set to true, the Overlord periodically submits kill tasks based on the druid.indexer.logs.kill.delay specified, which delete task logs from the log directory as well as entries in the tasks and tasklogs tables in metadata storage, except for tasks created in the last druid.indexer.logs.kill.durationToRetain period.\tfalse druid.indexer.logs.kill.durationToRetain\tRequired if kill is enabled. Duration, in milliseconds, for which task logs and entries in task-related metadata storage tables are retained; only entries created within the last durationToRetain milliseconds are kept.\tNone druid.indexer.logs.kill.initialDelay\tOptional. Number of milliseconds after Overlord start when the first auto kill is run.\trandom value less than 300000 (5 mins) druid.indexer.logs.kill.delay\tOptional. 
Number of milliseconds of delay between successive executions of the auto kill run.\t21600000 (6 hours) "},{"title":"API error response","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#api-error-response","content":"You can configure Druid API error responses to hide internal information like the Druid class name, stack trace, thread name, servlet name, code, line/column number, host, or IP address. Property\tDescription\tDefaultdruid.server.http.showDetailedJettyErrors\tWhen set to true, any error from the Jetty layer / Jetty filter includes the following fields in the JSON response: servlet, message, url, status, and cause, if it exists. When set to false, the JSON response only includes message, url, and status. The field values remain unchanged.\ttrue druid.server.http.errorResponseTransform.strategy\tError response transform strategy. The strategy controls how Druid transforms error responses from Druid services. When unset or set to none, Druid leaves error responses unchanged.\tnone Error response transform strategy You can use an error response transform strategy to transform error responses from within Druid services to hide internal information. When you specify an error response transform strategy other than none, Druid transforms the error responses from Druid services as follows: For any query API that fails in the Router service, Druid sets the fields errorClass and host to null. Druid applies the transformation strategy to the errorMessage field. For any SQL query API that fails, for example POST /druid/v2/sql/..., Druid sets the fields errorClass and host to null. Druid applies the transformation strategy to the errorMessage field. For any JDBC-related exceptions, Druid turns all checked exceptions into QueryInterruptedException; otherwise, Druid attempts to keep the exception as the same type. For example, if the original exception isn't owned by Druid, it becomes QueryInterruptedException. Druid applies the transformation strategy to the errorMessage field. No error response transform strategy In this mode, Druid leaves error responses from underlying services unchanged and returns the unchanged errors to the API client. This is the default Druid error response mode. To explicitly enable this strategy, set druid.server.http.errorResponseTransform.strategy to "none". Allowed regular expression error response transform strategy In this mode, Druid validates the error responses from underlying services against a list of regular expressions. Only error messages that match a configured regular expression are returned. To enable this strategy, set druid.server.http.errorResponseTransform.strategy to allowedRegex. Property\tDescription\tDefaultdruid.server.http.errorResponseTransform.allowedRegex\tThe list of regular expressions Druid uses to validate error messages. If the error message matches any of the regular expressions, then Druid includes it in the response unchanged. 
If the error message does not match any of the regular expressions, Druid replaces the error message with null or with a default message depending on the type of underlying Exception.\t[] For example, consider the following error response: {"error":"Plan validation failed","errorMessage":"org.apache.calcite.runtime.CalciteContextException: From line 1, column 15 to line 1, column 38: Object 'nonexistent-datasource' not found","errorClass":"org.apache.calcite.tools.ValidationException","host":null} If druid.server.http.errorResponseTransform.allowedRegex is set to [], Druid transforms the query error response to the following: {"error":"Plan validation failed","errorMessage":null,"errorClass":null,"host":null} On the other hand, if druid.server.http.errorResponseTransform.allowedRegex is set to [".*CalciteContextException.*"] then Druid transforms the query error response to the following: {"error":"Plan validation failed","errorMessage":"org.apache.calcite.runtime.CalciteContextException: From line 1, column 15 to line 1, column 38: Object 'nonexistent-datasource' not found","errorClass":null,"host":null} "},{"title":"Overlord Discovery","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#overlord-discovery","content":"This config is used to find the Overlord using Curator service discovery. Only required if you are actually running an Overlord. Property\tDescription\tDefaultdruid.selectors.indexing.serviceName\tThe druid.service name of the Overlord process. To start the Overlord with a different name, set it with this property.\tdruid/overlord "},{"title":"Coordinator Discovery","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#coordinator-discovery","content":"This config is used to find the Coordinator using Curator service discovery. This config is used by the realtime indexing processes to get information about the segments loaded in the cluster. Property\tDescription\tDefaultdruid.selectors.coordinator.serviceName\tThe druid.service name of the Coordinator process. To start the Coordinator with a different name, set it with this property.\tdruid/coordinator "},{"title":"Announcing Segments","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#announcing-segments","content":"You can configure how to announce and unannounce Znodes in ZooKeeper (using Curator). For normal operations you do not need to override any of these configs. Batch Data Segment Announcer In current Druid, multiple data segments may be announced under the same Znode. Property\tDescription\tDefaultdruid.announcer.segmentsPerNode\tEach Znode contains info for up to this many segments.\t50 druid.announcer.maxBytesPerNode\tMax byte size for Znode.\t524288 druid.announcer.skipDimensionsAndMetrics\tSkip Dimensions and Metrics list from segment announcements. NOTE: Enabling this will also remove the dimensions and metrics list from Coordinator and Broker endpoints.\tfalse druid.announcer.skipLoadSpec\tSkip segment LoadSpec from segment announcements. NOTE: Enabling this will also remove the loadspec from Coordinator and Broker endpoints.\tfalse If you want to turn off the batch data segment announcer, you can add a property to skip announcing segments. You do not want to enable this config if you have any services using batch for druid.serverview.type Property\tDescription\tDefaultdruid.announcer.skipSegmentAnnouncementOnZk\tSkip announcing segments to zookeeper. 
Note that the batch server view will not work if this is set to true.\tfalse "},{"title":"JavaScript","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#javascript","content":"Druid supports dynamic runtime extension through JavaScript functions. This functionality can be configured through the following properties. Property\tDescription\tDefaultdruid.javascript.enabled\tSet to "true" to enable JavaScript functionality. This affects the JavaScript parser, filter, extractionFn, aggregator, post-aggregator, router strategy, and worker selection strategy.\tfalse info JavaScript-based functionality is disabled by default. Please refer to the Druid JavaScript programming guide for guidelines about using Druid's JavaScript functionality, including instructions on how to enable it. "},{"title":"Double Column storage","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#double-column-storage","content":"Prior to version 0.13.0, Druid's storage layer used a 32-bit float representation to store columns created by the doubleSum, doubleMin, and doubleMax aggregators at indexing time. Starting from version 0.13.0, the default is 64-bit floats for double columns. Using a 64-bit representation for double columns avoids precision loss at the cost of doubling the storage size of such columns. To keep the old format, set the system-wide property druid.indexing.doubleStorage=float. You can also use floatSum, floatMin, and floatMax to use the 32-bit float representation. Support for 64-bit floating point columns was released in Druid 0.11.0, so if you use this feature then older versions of Druid will not be able to read your data segments. Property\tDescription\tDefaultdruid.indexing.doubleStorage\tSet to "float" to use the 32-bit float representation for double columns.\tdouble "},{"title":"SQL compatible null handling","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#sql-compatible-null-handling","content":"Prior to version 0.13.0, Druid string columns treated '' and null values as interchangeable, and numeric columns were unable to represent null values, coercing null to 0. Druid 0.13.0 introduced a mode which enables SQL compatible null handling, allowing string columns to distinguish empty strings from nulls, and numeric columns to contain null rows. Property\tDescription\tDefaultdruid.generic.useDefaultValueForNull\tWhen set to true, null values will be stored as '' for string columns and 0 for numeric columns. Set to false to store and query data in SQL compatible mode.\ttrue druid.generic.ignoreNullsForStringCardinality\tWhen set to true, null values will be ignored for the built-in cardinality aggregator over string columns. Set to false to include null values while estimating cardinality of only string columns using the built-in cardinality aggregator. This setting takes effect only when druid.generic.useDefaultValueForNull is set to true and is ignored in SQL compatibility mode. Additionally, empty strings (equivalent to null) are not counted when this is set to true.\tfalse This mode does have a storage size and query performance cost; see the segment documentation for more details. "},{"title":"HTTP Client","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#http-client","content":"All Druid components can communicate with each other over HTTP. Property\tDescription\tDefaultdruid.global.http.numConnections\tSize of connection pool per destination URL. 
If more HTTP requests than this number need to speak to the same URL at once, they will queue up.\t20 druid.global.http.eagerInitialization\tIndicates whether HTTP connections should be eagerly initialized. If set to true, numConnections connections are created upon initialization.\ttrue druid.global.http.compressionCodec\tCompression codec used to communicate with other services. May be "gzip" or "identity".\tgzip druid.global.http.readTimeout\tThe timeout for data reads.\tPT15M druid.global.http.unusedConnectionTimeout\tThe timeout for idle connections in the connection pool. A connection in the pool will be closed after this timeout and a new one will be established. This timeout should be less than druid.global.http.readTimeout; set it to approximately 90% of druid.global.http.readTimeout.\tPT4M druid.global.http.numMaxThreads\tMaximum number of I/O worker threads.\tmax(10, ((number of cores * 17) / 16 + 2) + 30) "},{"title":"Common endpoints Configuration","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#common-endpoints-configuration","content":"This section contains the configuration options for endpoints that are supported by all processes. Property\tDescription\tDefaultdruid.server.hiddenProperties\tIf a property name, or a substring of a property name (case insensitive), is in this list, the /status/properties endpoint does not show that property in its responses.\t["druid.s3.accessKey","druid.s3.secretKey","druid.metadata.storage.connector.password", "password", "key", "token", "pwd"] "},{"title":"Master Server","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#master-server","content":"This section contains the configuration options for the processes that reside on Master servers (Coordinators and Overlords) in the suggested three-server configuration. "},{"title":"Coordinator","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#coordinator","content":"For general Coordinator Process information, see here. Static Configuration These Coordinator static configurations can be defined in the coordinator/runtime.properties file. Coordinator Process Config Property\tDescription\tDefaultdruid.host\tThe host for the current process. This is used to advertise the current process's location as reachable from other processes and should generally be specified such that http://${druid.host}/ could actually talk to this process.\tInetAddress.getLocalHost().getCanonicalHostName() druid.bindOnHost\tIndicates whether the process's internal Jetty server binds on druid.host. The default is false, which means binding to all interfaces.\tfalse druid.plaintextPort\tThis is the port to actually listen on; unless port mapping is used, this will be the same port as is on druid.host.\t8081 druid.tlsPort\tTLS port for the HTTPS connector; if druid.enableTlsPort is set, this config is used. If druid.host contains a port, that port is ignored. This should be a non-negative Integer.\t8281 druid.service\tThe name of the service. This is used as a dimension when emitting metrics and alerts to differentiate between the various services.\tdruid/coordinator Coordinator Operation Property\tDescription\tDefaultdruid.coordinator.period\tThe run period for the Coordinator. The Coordinator operates by maintaining the current state of the world in memory and periodically looking at the set of "used" segments and segments being served to make decisions about whether any changes need to be made to the data topology. 
This property sets the delay between each of these runs.\tPT60S druid.coordinator.period.indexingPeriod\tHow often to send compact/merge/conversion tasks to the indexing service. It is recommended that this be longer than druid.manager.segments.pollDuration.\tPT1800S (30 mins) druid.coordinator.startDelay\tThe Coordinator operates on the assumption that it has an up-to-date view of the state of the world when it runs. However, the current ZK interaction code is written in a way that doesn't allow the Coordinator to know for a fact that it's done loading the current state of the world. This delay is a hack to give it enough time to believe that it has all the data.\tPT300S druid.coordinator.load.timeout\tThe timeout duration for when the Coordinator assigns a segment to a Historical process.\tPT15M druid.coordinator.kill.pendingSegments.on\tBoolean flag for whether or not the Coordinator cleans up old entries in the pendingSegments table of the metadata store. If set to true, the Coordinator checks the created time of the most recently completed task. If it doesn't exist, it finds the created time of the earliest running/pending/waiting tasks. Once the created time is found, then for all dataSources not in the killPendingSegmentsSkipList (see Dynamic configuration), the Coordinator asks the Overlord to clean up the entries 1 day or more older than the found created time in the pendingSegments table. This is done periodically based on the druid.coordinator.period.indexingPeriod specified.\ttrue druid.coordinator.kill.on\tBoolean flag for whether or not the Coordinator should submit kill tasks for unused segments, that is, permanently delete them from the metadata store and deep storage. If set to true, then for all whitelisted dataSources (or optionally all), the Coordinator submits tasks periodically based on the period specified. A whitelist can be set via the dynamic configuration killDataSourceWhitelist described later. When druid.coordinator.kill.on is true, segments are eligible for permanent deletion once their data intervals are older than druid.coordinator.kill.durationToRetain relative to the current time. If a segment's data interval is older than this threshold at the time it is marked unused, it is eligible for permanent deletion immediately after being marked unused.\tfalse druid.coordinator.kill.period\tHow often to send kill tasks to the indexing service. Value must be greater than druid.coordinator.period.indexingPeriod. Only applies if kill is turned on.\tP1D (1 Day) druid.coordinator.kill.durationToRetain\tOnly applies if you set druid.coordinator.kill.on to true. This value is ignored if druid.coordinator.kill.ignoreDurationToRetain is true. Valid configurations must be an ISO 8601 period. Druid will not kill unused segments whose interval end date is beyond now - durationToRetain. durationToRetain can be a negative ISO 8601 period, which would result in now - durationToRetain being in the future. Note that the durationToRetain parameter applies to the segment interval, not the time that the segment was last marked unused. For example, if durationToRetain is set to P90D, then a segment for a time chunk 90 days in the past is eligible for permanent deletion immediately after being marked unused.\tP90D druid.coordinator.kill.ignoreDurationToRetain\tA way to override druid.coordinator.kill.durationToRetain and tell the coordinator that you do not care about the end date of unused segment intervals when it comes to killing them. 
If true, the coordinator considers all unused segments as eligible to be killed.\tfalse druid.coordinator.kill.maxSegments\tThe number of unused segments to kill per kill task. This number must be greater than 0. This only applies when druid.coordinator.kill.on=true.\t100 druid.coordinator.balancer.strategy\tSpecifies the type of balancing strategy the Coordinator uses to distribute segments among the Historicals. cachingCost is logically equivalent to cost but is more CPU-efficient on large clusters. diskNormalized weights the costs according to the servers' disk usage ratios - there are known issues with this strategy distributing segments unevenly across the cluster. random distributes segments among services randomly.\tcost druid.coordinator.balancer.cachingCost.awaitInitialization\tWhether to wait for segment view initialization before creating the cachingCost balancing strategy. This property is enabled only when druid.coordinator.balancer.strategy is cachingCost. If set to 'true', the Coordinator will not start to assign segments until the segment view is initialized. If set to 'false', the Coordinator falls back to the cost balancing strategy only if the segment view is not initialized yet. Note that initialization may take a long time, since building the cachingCost balancing strategy is computationally expensive.\tfalse druid.coordinator.loadqueuepeon.repeatDelay\tThe start and repeat delay for the loadqueuepeon, which manages the load and drop of segments.\tPT0.050S (50 ms) druid.coordinator.asOverlord.enabled\tBoolean value for whether this Coordinator process should act like an Overlord as well. This configuration allows users to simplify a Druid cluster by not having to deploy any standalone Overlord processes. If set to true, the Overlord console is available at http://coordinator-host:port/console.html; be sure to also set druid.coordinator.asOverlord.overlordService, described next.\tfalse druid.coordinator.asOverlord.overlordService\tRequired if druid.coordinator.asOverlord.enabled is true. This must be the same value as druid.service on standalone Overlord processes and druid.selectors.indexing.serviceName on Middle Managers.\tNULL Metadata Management Property\tDescription\tRequired\tDefaultdruid.coordinator.period.metadataStoreManagementPeriod\tHow often to run metadata management tasks in ISO 8601 duration format.\tNo\tPT1H druid.coordinator.kill.supervisor.on\tBoolean value for whether to enable automatic deletion of terminated supervisors. If set to true, Coordinator will periodically remove terminated supervisors from the supervisor table in metadata storage.\tNo\tTrue druid.coordinator.kill.supervisor.period\tHow often to automatically delete terminated supervisors, in ISO 8601 duration format. Value must be equal to or greater than druid.coordinator.period.metadataStoreManagementPeriod. Only applies if druid.coordinator.kill.supervisor.on is set to "True".\tNo\tP1D druid.coordinator.kill.supervisor.durationToRetain\tDuration for which terminated supervisors are retained from their created time, in ISO 8601 duration format. Only applies if druid.coordinator.kill.supervisor.on is set to "True".\tYes if druid.coordinator.kill.supervisor.on is set to "True".\tP90D druid.coordinator.kill.audit.on\tBoolean value for whether to enable automatic deletion of audit logs. 
If set to true, Coordinator will periodically remove audit logs from the audit table entries in metadata storage.\tNo\tTrue druid.coordinator.kill.audit.period\tHow often to do automatic deletion of audit logs in ISO 8601 duration format. Value must be equal to or greater than druid.coordinator.period.metadataStoreManagementPeriod. Only applies if druid.coordinator.kill.audit.on is set to "True".\tNo\tP1D druid.coordinator.kill.audit.durationToRetain\tDuration of audit logs to be retained from created time in ISO 8601 duration format. Only applies if druid.coordinator.kill.audit.on is set to "True".\tYes if druid.coordinator.kill.audit.on is set to "True".\tP90D druid.coordinator.kill.compaction.on\tBoolean value for whether to enable automatic deletion of compaction configurations. If set to true, Coordinator will periodically remove compaction configuration of inactive datasource (datasource with no used and unused segments) from the config table in metadata storage.\tNo\tFalse druid.coordinator.kill.compaction.period\tHow often to do automatic deletion of compaction configurations in ISO 8601 duration format. Value must be equal to or greater than druid.coordinator.period.metadataStoreManagementPeriod. Only applies if druid.coordinator.kill.compaction.on is set to "True".\tNo\tP1D druid.coordinator.kill.rule.on\tBoolean value for whether to enable automatic deletion of rules. If set to true, Coordinator will periodically remove rules of inactive datasource (datasource with no used and unused segments) from the rule table in metadata storage.\tNo\tTrue druid.coordinator.kill.rule.period\tHow often to do automatic deletion of rules in ISO 8601 duration format. Value must be equal to or greater than druid.coordinator.period.metadataStoreManagementPeriod. Only applies if druid.coordinator.kill.rule.on is set to "True".\tNo\tP1D druid.coordinator.kill.rule.durationToRetain\tDuration of rules to be retained from created time in ISO 8601 duration format. Only applies if druid.coordinator.kill.rule.on is set to "True".\tYes if druid.coordinator.kill.rule.on is set to "True".\tP90D druid.coordinator.kill.datasource.on\tBoolean value for whether to enable automatic deletion of datasource metadata (Note: datasource metadata only exists for datasource created from supervisor). If set to true, Coordinator will periodically remove datasource metadata of terminated supervisor from the datasource table in metadata storage.\tNo\tTrue druid.coordinator.kill.datasource.period\tHow often to do automatic deletion of datasource metadata in ISO 8601 duration format. Value must be equal to or greater than druid.coordinator.period.metadataStoreManagementPeriod. Only applies if druid.coordinator.kill.datasource.on is set to "True".\tNo\tP1D druid.coordinator.kill.datasource.durationToRetain\tDuration of datasource metadata to be retained from created time in ISO 8601 duration format. Only applies if druid.coordinator.kill.datasource.on is set to "True".\tYes if druid.coordinator.kill.datasource.on is set to "True".\tP90D Segment Management Property\tPossible Values\tDescription\tDefaultdruid.serverview.type\tbatch or http\tSegment discovery method to use. "http" enables discovering segments using HTTP instead of ZooKeeper.\thttp druid.coordinator.loadqueuepeon.type\tcurator or http\tImplementation to use to assign segment loads and drops to historicals. 
Curator-based implementation is now deprecated, so you should transition to using HTTP-based segment assignments.\thttp druid.coordinator.segment.awaitInitializationOnStart\ttrue or false\tWhether the Coordinator will wait for its view of segments to fully initialize before starting up. If set to 'true', the Coordinator's HTTP server will not start up, and the Coordinator will not announce itself as available, until the server view is initialized.\ttrue Additional config when "http" loadqueuepeon is used Property\tDescription\tDefaultdruid.coordinator.loadqueuepeon.http.batchSize\tNumber of segment load/drop requests to batch in one HTTP request. Note that it must be smaller than druid.segmentCache.numLoadingThreads config on Historical process.\t1 Metadata Retrieval Property\tDescription\tDefaultdruid.manager.config.pollDuration\tHow often the manager polls the config table for updates.\tPT1M druid.manager.segments.pollDuration\tThe duration between polls the Coordinator does for updates to the set of active segments. Generally defines the amount of lag time it can take for the Coordinator to notice new segments.\tPT1M druid.manager.rules.pollDuration\tThe duration between polls the Coordinator does for updates to the set of active rules. Generally defines the amount of lag time it can take for the Coordinator to notice rules.\tPT1M druid.manager.rules.defaultRule\tThe default rule for the cluster\t_default druid.manager.rules.alertThreshold\tThe duration after a failed poll upon which an alert should be emitted.\tPT10M Dynamic Configuration The Coordinator has dynamic configurations to tune certain behavior on the fly, without requiring a service restart. It is recommended that you use the web console to configure these parameters. However, if you need to do it via HTTP, the JSON object can be submitted to the Coordinator via a POST request at: http://<COORDINATOR_IP>:<PORT>/druid/coordinator/v1/config Optional Header Parameters for auditing the config change can also be specified. Header Param Name\tDescription\tDefaultX-Druid-Author\tauthor making the config change\t"" X-Druid-Comment\tcomment describing the change being done\t"" A sample Coordinator dynamic config JSON object is shown below: { "millisToWaitBeforeDeleting": 900000, "mergeBytesLimit": 100000000, "mergeSegmentsLimit" : 1000, "maxSegmentsToMove": 5, "replicantLifetime": 15, "replicationThrottleLimit": 10, "killDataSourceWhitelist": ["wikipedia", "testDatasource"], "decommissioningNodes": ["localhost:8182", "localhost:8282"], "decommissioningMaxPercentOfMaxSegmentsToMove": 70, "pauseCoordination": false, "replicateAfterLoadTimeout": false, "maxNonPrimaryReplicantsToLoad": 2147483647 } Issuing a GET request at the same URL will return the spec that is currently in place. A description of the config setup spec is shown below. 
Property\tDescription\tDefaultmillisToWaitBeforeDeleting\tHow long does the Coordinator need to be a leader before it can start marking overshadowed segments as unused in metadata storage.\t900000 (15 mins) mergeBytesLimit\tThe maximum total uncompressed size in bytes of segments to merge.\t524288000L mergeSegmentsLimit\tThe maximum number of segments that can be in a single append task.\t100 smartSegmentLoading\tEnables "smart" segment loading mode which dynamically computes the optimal values of several properties that maximize Coordinator performance.\ttrue maxSegmentsToMove\tThe maximum number of segments that can be moved at any given time.\t100 replicantLifetime\tThe maximum number of Coordinator runs for which a segment can wait in the load queue of a Historical before Druid raises an alert.\t15 replicationThrottleLimit\tThe maximum number of segment replicas that can be assigned to a historical tier in a single Coordinator run. This property prevents historicals from becoming overwhelmed when loading extra replicas of segments that are already available in the cluster.\t500 balancerComputeThreads\tThread pool size for computing moving cost of segments during segment balancing. Consider increasing this if you have a lot of segments and moving segments begins to stall.\t1 killDataSourceWhitelist\tList of specific data sources for which kill tasks are sent if property druid.coordinator.kill.on is true. This can be a list of comma-separated data source names or a JSON array.\tnone killPendingSegmentsSkipList\tList of data sources for which pendingSegments are NOT cleaned up if property druid.coordinator.kill.pendingSegments.on is true. This can be a list of comma-separated data sources or a JSON array.\tnone maxSegmentsInNodeLoadingQueue\tThe maximum number of segments allowed in the load queue of any given server. Use this parameter to load segments faster if, for example, the cluster contains slow-loading nodes or if there are too many segments to be replicated to a particular node (when faster loading is preferred to better segments distribution). The optimal value depends on the loading speed of segments, acceptable replication time and number of nodes.\t500 useRoundRobinSegmentAssignment\tBoolean flag for whether segments should be assigned to historicals in a round robin fashion. When disabled, segment assignment is done using the chosen balancer strategy. When enabled, this can speed up segment assignments leaving balancing to move the segments to their optimal locations (based on the balancer strategy) lazily.\ttrue decommissioningNodes\tList of historical servers to 'decommission'. Coordinator will not assign new segments to 'decommissioning' servers, and segments will be moved away from them to be placed on non-decommissioning servers at the maximum rate specified by decommissioningMaxPercentOfMaxSegmentsToMove.\tnone decommissioningMaxPercentOfMaxSegmentsToMove\tUpper limit of segments the Coordinator can move from decommissioning servers to active non-decommissioning servers during a single run. This value is relative to the total maximum number of segments that can be moved at any given time based upon the value of maxSegmentsToMove. If decommissioningMaxPercentOfMaxSegmentsToMove is 0, the Coordinator does not move segments to decommissioning servers, effectively putting them in a type of "maintenance" mode. In this case, decommissioning servers do not participate in balancing or assignment by load rules. 
The Coordinator still considers segments on decommissioning servers as candidates to replicate on active servers. Decommissioning can stall if there are no available active servers to move the segments to. You can use the maximum percent of decommissioning segment movements to prioritize balancing or to decrease commissioning time to prevent active servers from being overloaded. The value must be between 0 and 100.\t70 pauseCoordination\tBoolean flag for whether or not the coordinator should execute its various duties of coordinating the cluster. Setting this to true essentially pauses all coordination work while allowing the API to remain up. Duties that are paused include all classes that implement the CoordinatorDuty Interface. Such duties include: Segment balancing, Segment compaction, Submitting kill tasks for unused segments (if enabled), Logging of used segments in the cluster, Marking of newly unused or overshadowed segments, Matching and execution of load/drop rules for used segments, Unloading segments that are no longer marked as used from Historical servers. An example of when an admin may want to pause coordination would be if they are doing deep storage maintenance on HDFS Name Nodes with downtime and don't want the coordinator to be directing Historical Nodes to hit the Name Node with API requests until maintenance is done and the deep store is declared healthy for use again.\tfalse replicateAfterLoadTimeout\tBoolean flag for whether or not additional replication is needed for segments that have failed to load due to the expiry of druid.coordinator.load.timeout. If this is set to true, the coordinator will attempt to replicate the failed segment on a different historical server. This helps improve the segment availability if there are a few slow historicals in the cluster. However, the slow historical may still load the segment later and the coordinator may issue drop requests if the segment is over-replicated.\tfalse maxNonPrimaryReplicantsToLoad\tThe maximum number of replicas that can be assigned across all tiers in a single Coordinator run. This parameter serves the same purpose as replicationThrottleLimit except this limit applies at the cluster-level instead of per tier. The default value does not apply a limit to the number of replicas assigned per coordination cycle. If you want to use a non-default value for this property, you may want to start with ~20% of the number of segments found on the historical server with the most segments. Use the Druid metric, coordinator/time with the filter duty=org.apache.druid.server.coordinator.duty.RunRules to see how different values of this property impact your Coordinator execution time.\tInteger.MAX_VALUE (no limit) Smart segment loading The smartSegmentLoading mode simplifies Coordinator configuration for segment loading and balancing. In this mode, the Coordinator does not require the user to provide values of the following parameters and computes them automatically instead. info If you enable smartSegmentLoading mode and provide values for the following properties, Druid ignores your values. The Coordinator computes them automatically. The computed values are based on the current state of the cluster and are designed to optimize Coordinator performance. Property\tComputed value\tDescriptionuseRoundRobinSegmentAssignment\ttrue\tSpeeds up segment assignment. maxSegmentsInNodeLoadingQueue\t0\tRemoves the limit on load queue size. 
replicationThrottleLimit\t2% of used segments, minimum value 100\tPrevents aggressive replication when a historical disappears only intermittently. replicantLifetime\t60\tAllows segments to wait about an hour (assuming a Coordinator period of 1 minute) in the load queue before an alert is raised. In smartSegmentLoading mode, load queues are not limited by size. Segments might therefore be assigned to a load queue even if the corresponding server is slow to load them. maxNonPrimaryReplicantsToLoad\tInteger.MAX_VALUE (no limit)\tThis throttling is already handled by replicationThrottleLimit. maxSegmentsToMove\t2% of used segments, minimum value 100, maximum value 1000\tEnsures that some segments are always moving in the cluster to keep it well balanced. The maximum value keeps the Coordinator run times bounded. decommissioningMaxPercentOfMaxSegmentsToMove\t100\tPrioritizes the move of segments from decommissioning servers so that they can be terminated quickly. When smartSegmentLoading is disabled, Druid uses the configured values of these properties. Disable smartSegmentLoading only if you want to explicitly set the values of any of the above properties. Audit history To view the audit history of Coordinator dynamic config, issue a GET request to the URL - http://<COORDINATOR_IP>:<PORT>/druid/coordinator/v1/config/history?interval=<interval> The default value of interval can be specified by setting druid.audit.manager.auditHistoryMillis (1 week if not configured) in Coordinator runtime.properties. To view the last n entries of the audit history of Coordinator dynamic config, issue a GET request to the URL - http://<COORDINATOR_IP>:<PORT>/druid/coordinator/v1/config/history?count=<n> Lookups Dynamic Configuration These configuration options control Coordinator lookup management. See dynamic configuration for lookups configurations that affect lookup propagation. Property\tDescription\tDefaultdruid.manager.lookups.hostDeleteTimeout\tHow long to wait for a DELETE request to a particular process before considering the DELETE a failure.\tPT1S druid.manager.lookups.hostUpdateTimeout\tHow long to wait for a POST request to a particular process before considering the POST a failure.\tPT10S druid.manager.lookups.deleteAllTimeout\tHow long to wait for all DELETE requests to finish before considering the delete attempt a failure.\tPT10S druid.manager.lookups.updateAllTimeout\tHow long to wait for all POST requests to finish before considering the attempt a failure.\tPT60S druid.manager.lookups.threadPoolSize\tHow many processes can be managed concurrently (concurrent POST and DELETE requests). Requests beyond this limit will wait in a queue until a slot becomes available.\t10 druid.manager.lookups.period\tNumber of milliseconds between checks for configuration changes.\t120000 (2 minutes) Automatic compaction dynamic configuration You can set or update automatic compaction properties dynamically using the Automatic compaction API without restarting Coordinators. For details about segment compaction, see Segment size optimization. You can configure automatic compaction through the following properties: Property\tDescription\tRequireddataSource\tdataSource name to be compacted.\tyes taskPriority\tPriority of compaction task.\tno (default = 25) inputSegmentSizeBytes\tMaximum number of total segment bytes processed per compaction task. 
Since a time chunk must be processed in its entirety, if the segments for a particular time chunk have a total size in bytes greater than this parameter, compaction will not run for that time chunk.\tno (default = 100,000,000,000,000 i.e. 100TB) skipOffsetFromLatest\tThe offset for searching segments to be compacted in ISO 8601 duration format. Strongly recommended to set for realtime dataSources. See Data handling with compaction.\tno (default = "P1D") tuningConfig\tTuning config for compaction tasks. See below Automatic compaction tuningConfig.\tno taskContext\tTask context for compaction tasks.\tno granularitySpec\tCustom granularitySpec. See Automatic compaction granularitySpec.\tNo dimensionsSpec\tCustom dimensionsSpec. See Automatic compaction dimensionsSpec.\tNo transformSpec\tCustom transformSpec. See Automatic compaction transformSpec.\tNo metricsSpec\tCustom metricsSpec. The compaction task preserves any existing metrics regardless of whether metricsSpec is specified. If metricsSpec is specified, Druid does not reapply any aggregators matching the metric names specified in metricsSpec to rows that already have the associated metrics. For rows that do not already have the metric specified in metricsSpec, Druid applies the metric aggregator on the source column, then proceeds to combine the metrics across segments as usual. If metricsSpec is not specified, Druid automatically discovers the metrics in the existing segments and combines existing metrics with the same metric name across segments. Aggregators for metrics with the same name are assumed to be compatible for combining across segments, otherwise the compaction task may fail.\tNo ioConfig\tIO config for compaction tasks. See Automatic compaction ioConfig.\tno Automatic compaction config example: { "dataSource": "wikiticker", "granularitySpec" : { "segmentGranularity" : "none" } } Compaction tasks fail when higher priority tasks cause Druid to revoke their locks. By default, realtime tasks like ingestion have a higher priority than compaction tasks. Frequent conflicts between compaction tasks and realtime tasks can cause the Coordinator's automatic compaction to hang. You may see this issue with streaming ingestion from Kafka and Kinesis, which ingest late-arriving data. To mitigate this problem, set skipOffsetFromLatest to a value large enough so that arriving data tends to fall outside the offset value from the current time. This way you can avoid conflicts between compaction tasks and realtime ingestion tasks. For example, if you want to skip over segments from thirty days prior to the end time of the most recent segment, assign "skipOffsetFromLatest": "P30D". For more information, see Avoid conflicts with ingestion. Automatic compaction tuningConfig Auto-compaction supports a subset of the tuningConfig for Parallel task. The below is a list of the supported configurations for auto-compaction. Property\tDescription\tRequiredtype\tThe task type, this should always be index_parallel.\tyes maxRowsInMemory\tUsed in determining when intermediate persists to disk should occur. Normally user does not need to set this, but depending on the nature of data, if rows are short in terms of bytes, user may not want to store a million rows in memory and this value should be set.\tno (default = 1000000) maxBytesInMemory\tUsed in determining when intermediate persists to disk should occur. Normally this is computed internally and user does not need to set it. 
This value represents the number of bytes to aggregate in heap memory before persisting. This is based on a rough estimate of memory usage and not actual usage. The maximum heap memory usage for indexing is maxBytesInMemory * (2 + maxPendingPersists).\tno (default = 1/6 of max JVM memory) splitHintSpec\tUsed to give a hint to control the amount of data that each first phase task reads. This hint could be ignored depending on the implementation of the input source. See Split hint spec for more details.\tno (default = size-based split hint spec) partitionsSpec\tDefines how to partition data in each time chunk, see PartitionsSpec.\tno (default = dynamic) indexSpec\tDefines segment storage format options to be used at indexing time, see IndexSpec.\tno indexSpecForIntermediatePersists\tDefines segment storage format options to be used at indexing time for intermediate persisted temporary segments. This can be used to disable dimension/metric compression on intermediate segments to reduce the memory required for final merging. However, disabling compression on intermediate segments might increase page cache use while they are in use before getting merged into the final published segment. See IndexSpec for possible values.\tno maxPendingPersists\tMaximum number of persists that can be pending but not started. If this limit would be exceeded by a new intermediate persist, ingestion will block until the currently-running persist finishes. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists).\tno (default = 0, meaning one persist can be running concurrently with ingestion, and none can be queued up) pushTimeout\tMilliseconds to wait for pushing segments. It must be >= 0, where 0 means to wait forever.\tno (default = 0) segmentWriteOutMediumFactory\tSegment write-out medium to use when creating segments. See SegmentWriteOutMediumFactory.\tno (default = the value of druid.peon.defaultSegmentWriteOutMediumFactory.type) maxNumConcurrentSubTasks\tMaximum number of worker tasks which can be run in parallel at the same time. The supervisor task spawns worker tasks up to maxNumConcurrentSubTasks regardless of the currently available task slots. If this value is set to 1, the supervisor task processes data ingestion on its own instead of spawning worker tasks. If this value is set too large, too many worker tasks can be created, which might block other ingestion. Check Capacity Planning for more details.\tno (default = 1) maxRetry\tMaximum number of retries on task failures.\tno (default = 3) maxNumSegmentsToMerge\tMax limit for the number of segments that a single task can merge at the same time in the second phase. Used only with hashed or single_dim partitionsSpec.\tno (default = 100) totalNumMergeTasks\tTotal number of tasks to merge segments in the merge phase when partitionsSpec is set to hashed or single_dim.\tno (default = 10) taskStatusCheckPeriodMs\tPolling period in milliseconds to check running task statuses.\tno (default = 1000) chatHandlerTimeout\tTimeout for reporting the pushed segments in worker tasks.\tno (default = PT10S) chatHandlerNumRetries\tRetries for reporting the pushed segments in worker tasks.\tno (default = 5) Automatic compaction granularitySpec Field\tDescription\tRequiredsegmentGranularity\tTime chunking period for the segment granularity. Defaults to 'null', which preserves the original segment granularity. Accepts all Query granularity values.\tNo queryGranularity\tThe resolution of timestamp storage within each segment. 
Defaults to 'null', which preserves the original query granularity. Accepts all Query granularity values.\tNo rollup\tWhether to enable ingestion-time rollup or not. Defaults to 'null', which preserves the original setting. Note that once data is rolled up, individual records can no longer be recovered.\tNo Automatic compaction dimensionsSpec Field\tDescription\tRequireddimensions\tA list of dimension names or objects. Defaults to 'null', which preserves the original dimensions. Note that setting this will cause segments manually compacted with dimensionExclusions to be compacted again.\tNo Automatic compaction transformSpec Field\tDescription\tRequiredfilter\tThe filter conditionally filters input rows during compaction. Only rows that pass the filter will be included in the compacted segments. Any of Druid's standard query filters can be used. Defaults to 'null', which will not filter any row.\tNo Automatic compaction ioConfig Auto-compaction supports a subset of the ioConfig for Parallel task. Below is a list of the supported configurations for auto-compaction. Property\tDescription\tDefault\tRequireddropExisting\tIf true, the compaction task replaces all existing segments fully contained by the umbrella interval of the compacted segments when the task publishes new segments and tombstones. If compaction fails, Druid does not publish any segments or tombstones. WARNING: this functionality is still in beta. Note that changing this config does not cause intervals to be compacted again.\tfalse\tno "},{"title":"Overlord","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#overlord","content":"For general Overlord Process information, see here. Overlord Static Configuration These Overlord static configurations can be defined in the overlord/runtime.properties file. Overlord Process Configs Property\tDescription\tDefaultdruid.host\tThe host for the current process. This is used to advertise the current process's location as reachable from other processes and should generally be specified such that http://${druid.host}/ could actually talk to this process.\tInetAddress.getLocalHost().getCanonicalHostName() druid.bindOnHost\tIndicates whether the process's internal Jetty server binds on druid.host. The default is false, which means binding to all interfaces.\tfalse druid.plaintextPort\tThis is the port to actually listen on; unless port mapping is used, this will be the same port as is on druid.host.\t8090 druid.tlsPort\tTLS port for the HTTPS connector; if druid.enableTlsPort is set, this config is used. If druid.host contains a port, that port is ignored. This should be a non-negative Integer.\t8290 druid.service\tThe name of the service. This is used as a dimension when emitting metrics and alerts to differentiate between the various services.\tdruid/overlord Overlord Operations Property\tDescription\tDefaultdruid.indexer.runner.type\tIndicates whether tasks should be run locally using "local" or in a distributed environment using "remote". The recommended option is "httpRemote", which is similar to "remote" but uses HTTP to interact with Middle Managers instead of ZooKeeper.\thttpRemote druid.indexer.storage.type\tChoices are "local" or "metadata". Indicates whether incoming tasks should be stored locally (in heap) or in metadata storage. 
"local" is mainly for internal testing while "metadata" is recommended in production because storing incoming tasks in metadata storage allows for tasks to be resumed if the Overlord should fail.\tlocal druid.indexer.storage.recentlyFinishedThreshold\tDuration of time to store task results. Default is 24 hours. If you have hundreds of tasks running in a day, consider increasing this threshold.\tPT24H druid.indexer.tasklock.forceTimeChunkLock\tSetting this to false is still experimental If set, all tasks are enforced to use time chunk lock. If not set, each task automatically chooses a lock type to use. This configuration can be overwritten by setting forceTimeChunkLock in the task context. See Task Locking & Priority for more details about locking in tasks.\ttrue druid.indexer.tasklock.batchSegmentAllocation\tIf set to true, Druid performs segment allocate actions in batches to improve throughput and reduce the average task/action/run/time. See batching segmentAllocate actions for details.\ttrue druid.indexer.tasklock.batchAllocationWaitTime\tNumber of milliseconds after Druid adds the first segment allocate action to a batch, until it executes the batch. Allows the batch to add more requests and improve the average segment allocation run time. This configuration takes effect only if batchSegmentAllocation is enabled.\t500 druid.indexer.task.default.context\tDefault task context that is applied to all tasks submitted to the Overlord. Any default in this config does not override neither the context values the user provides nor druid.indexer.tasklock.forceTimeChunkLock.\tempty context druid.indexer.queue.maxSize\tMaximum number of active tasks at one time.\tInteger.MAX_VALUE druid.indexer.queue.startDelay\tSleep this long before starting Overlord queue management. This can be useful to give a cluster time to re-orient itself after e.g. a widespread network issue.\tPT1M druid.indexer.queue.restartDelay\tSleep this long when Overlord queue management throws an exception before trying again.\tPT30S druid.indexer.queue.storageSyncRate\tSync Overlord state this often with an underlying task persistence mechanism.\tPT1M The following configs only apply if the Overlord is running in remote mode. For a description of local vs. remote mode, see Overlord Process. Property\tDescription\tDefaultdruid.indexer.runner.taskAssignmentTimeout\tHow long to wait after a task as been assigned to a MiddleManager before throwing an error.\tPT5M druid.indexer.runner.minWorkerVersion\tThe minimum MiddleManager version to send tasks to. The version number is a string. This affects the expected behavior during certain operations like comparison against druid.worker.version. Specifically, the version comparison follows dictionary order. Use ISO8601 date format for the version to accommodate date comparisons.\t"0" druid.indexer.runner.parallelIndexTaskSlotRatio\tThe ratio of task slots available for parallel indexing supervisor tasks per worker. The specified value must be in the range [0, 1].\t1 druid.indexer.runner.compressZnodes\tIndicates whether or not the Overlord should expect MiddleManagers to compress Znodes.\ttrue druid.indexer.runner.maxZnodeBytes\tThe maximum size Znode in bytes that can be created in ZooKeeper, should be in the range of [10KiB, 2GiB). 
Human-readable format is supported.\t512 KiB druid.indexer.runner.taskCleanupTimeout\tHow long to wait before failing a task after a MiddleManager is disconnected from ZooKeeper.\tPT15M druid.indexer.runner.taskShutdownLinkTimeout\tHow long to wait on a shutdown request to a MiddleManager before timing out.\tPT1M druid.indexer.runner.pendingTasksRunnerNumThreads\tNumber of threads used to allocate pending tasks to workers; must be at least 1.\t1 druid.indexer.runner.maxRetriesBeforeBlacklist\tNumber of consecutive times the MiddleManager can fail tasks before the worker is blacklisted; must be at least 1.\t5 druid.indexer.runner.workerBlackListBackoffTime\tHow long to wait before a task is whitelisted again. This value should be greater than the value set for taskBlackListCleanupPeriod.\tPT15M druid.indexer.runner.workerBlackListCleanupPeriod\tA duration after which the cleanup thread will start up to clean blacklisted workers.\tPT5M druid.indexer.runner.maxPercentageBlacklistWorkers\tThe maximum percentage of workers to blacklist; this must be between 0 and 100.\t20 There are additional configs for autoscaling (if it is enabled): Property\tDescription\tDefaultdruid.indexer.autoscale.strategy\tChoices are "noop", "ec2" or "gce". Sets the strategy to run when autoscaling is required.\tnoop druid.indexer.autoscale.doAutoscale\tIf set to "true", autoscaling will be enabled.\tfalse druid.indexer.autoscale.provisionPeriod\tHow often to check whether or not new MiddleManagers should be added.\tPT1M druid.indexer.autoscale.terminatePeriod\tHow often to check when MiddleManagers should be removed.\tPT5M druid.indexer.autoscale.originTime\tThe starting reference timestamp that the terminate period increments upon.\t2012-01-01T00:55:00.000Z druid.indexer.autoscale.workerIdleTimeout\tHow long a worker can be idle (not running a task) before it can be considered for termination.\tPT90M druid.indexer.autoscale.maxScalingDuration\tHow long the Overlord will wait around for a MiddleManager to show up before giving up.\tPT15M druid.indexer.autoscale.numEventsToTrack\tThe number of autoscaling related events (node creation and termination) to track.\t10 druid.indexer.autoscale.pendingTaskTimeout\tHow long a task can be in "pending" state before the Overlord tries to scale up.\tPT30S druid.indexer.autoscale.workerVersion\tIf set, the auto scaler only creates nodes of the specified version during autoscaling. Overrides dynamic configuration.\tnull druid.indexer.autoscale.workerPort\tThe port that MiddleManagers will run on.\t8080 druid.indexer.autoscale.workerCapacityHint\tAn estimation of the number of task slots available for each worker launched by the auto scaler when there are no workers running. The auto scaler uses the worker capacity hint to launch workers with an adequate capacity to handle pending tasks. When unset or set to a value less than or equal to 0, the auto scaler scales workers equal to the value for minNumWorkers in autoScaler config instead. The auto scaler assumes that each worker, either a middleManager or indexer, has the same number of task slots. Therefore, when all your workers have the same capacity (homogeneous capacity), set the value for autoscale.workerCapacityHint equal to druid.worker.capacity. If your workers have different capacities (heterogeneous capacity), set the value to the average of druid.worker.capacity across the workers. For example, if two workers have druid.worker.capacity=10, and one has druid.worker.capacity=4, set autoscale.workerCapacityHint=8. 
Only applies to pendingTaskBased provisioning strategy.\t-1 Supervisors Property\tDescription\tDefaultdruid.supervisor.healthinessThreshold\tThe number of successful runs before an unhealthy supervisor is again considered healthy.\t3 druid.supervisor.unhealthinessThreshold\tThe number of failed runs before the supervisor is considered unhealthy.\t3 druid.supervisor.taskHealthinessThreshold\tThe number of consecutive task successes before an unhealthy supervisor is again considered healthy.\t3 druid.supervisor.taskUnhealthinessThreshold\tThe number of consecutive task failures before the supervisor is considered unhealthy.\t3 druid.supervisor.storeStackTrace\tWhether full stack traces of supervisor exceptions should be stored and returned by the supervisor /status endpoint.\tfalse druid.supervisor.maxStoredExceptionEvents\tThe maximum number of exception events that can be returned through the supervisor /status endpoint.\tmax(healthinessThreshold, unhealthinessThreshold) druid.supervisor.idleConfig.enabled\tIf true, supervisor can become idle if there is no data on input stream/topic for some time.\tfalse druid.supervisor.idleConfig.inactiveAfterMillis\tSupervisor is marked as idle if all existing data has been read from input topic and no new data has been published for inactiveAfterMillis milliseconds.\t600_000 The druid.supervisor.idleConfig.* specified in the runtime properties of the overlord defines the default behavior for the entire cluster. See Idle Configuration in Kafka Supervisor IOConfig to override it for an individual supervisor. Overlord dynamic configuration The Overlord can be dynamically configured to specify how tasks are assigned to workers. The JSON object can be submitted to the Overlord via a POST request at: http://<OVERLORD_IP>:<port>/druid/indexer/v1/worker Optional header parameters for auditing the config change can also be specified. Header Param Name\tDescription\tDefaultX-Druid-Author\tauthor making the config change\t"" X-Druid-Comment\tcomment describing the change being done\t"" An example Overlord dynamic config is shown below: { "selectStrategy": { "type": "fillCapacity", "affinityConfig": { "affinity": { "datasource1": ["host1:port", "host2:port"], "datasource2": ["host3:port"] } } }, "autoScaler": { "type": "ec2", "minNumWorkers": 2, "maxNumWorkers": 12, "envConfig": { "availabilityZone": "us-east-1a", "nodeData": { "amiId": "${AMI}", "instanceType": "c3.8xlarge", "minInstances": 1, "maxInstances": 1, "securityGroupIds": ["${IDs}"], "keyName": "${KEY_NAME}" }, "userData": { "impl": "string", "data": "${SCRIPT_COMMAND}", "versionReplacementString": ":VERSION:", "version": null } } } } Issuing a GET request to the same URL returns the current Overlord dynamic config. Property\tDescription\tDefaultselectStrategy\tDescribes how to assign tasks to MiddleManagers. The type can be equalDistribution, equalDistributionWithCategorySpec, fillCapacity, fillCapacityWithCategorySpec, and javascript.\t{"type":"equalDistribution"} autoScaler\tOnly used if autoscaling is enabled. See below.\tnull To view the audit history of worker config issue a GET request to the URL - http://<OVERLORD_IP>:<port>/druid/indexer/v1/worker/history?interval=<interval> The default value of interval can be specified by setting druid.audit.manager.auditHistoryMillis (1 week if not configured) in Overlord runtime.properties. 
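As a rough sketch of how a config change and its audit trail fit together (the file name and interval value below are purely illustrative), you might submit the dynamic config with the optional audit headers and then read back the history for a given interval, passing the interval as an ISO 8601 pair delimited by an underscore:
curl -X POST -H 'Content-Type: application/json' -H 'X-Druid-Author: jane' -H 'X-Druid-Comment: switch worker select strategy' -d @overlord-dynamic-config.json http://<OVERLORD_IP>:<port>/druid/indexer/v1/worker
curl 'http://<OVERLORD_IP>:<port>/druid/indexer/v1/worker/history?interval=2023-01-01_2023-01-08'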
To view the last n entries of the audit history of worker config, issue a GET request to the following URL: http://<OVERLORD_IP>:<port>/druid/indexer/v1/worker/history?count=<n> Worker select strategy The select strategy controls how Druid assigns tasks to workers (MiddleManagers). At a high level, the select strategy determines the list of eligible workers for a given task using either an affinityConfig or a categorySpec. Then, Druid assigns the task by either trying to distribute load equally (equalDistribution) or to fill as few workers as possible to capacity (fillCapacity). There are four options for select strategies: equalDistribution, equalDistributionWithCategorySpec, fillCapacity, and fillCapacityWithCategorySpec. A javascript option is also available but should only be used for prototyping new strategies. If an affinityConfig is provided (as part of the fillCapacity and equalDistribution strategies) for a given task, the list of workers eligible to be assigned is determined as follows: a non-affinity worker, if no affinity is specified for that datasource (any worker not listed in the affinityConfig is considered a non-affinity worker); a non-affinity worker, if preferred workers are not available and the affinity is weak, i.e. strong: false; a preferred worker listed in the affinityConfig for this datasource, if it has available capacity; no worker, if preferred workers are not available and the affinity is strong, i.e. strong: true. In this case, the task remains in "pending" state. The chosen provisioning strategy (e.g. pendingTaskBased) may then use the total number of pending tasks to determine if a new node should be provisioned. Note that every worker listed in the affinityConfig will only be used for the assigned datasources and no others. If a categorySpec is provided (as part of the fillCapacityWithCategorySpec and equalDistributionWithCategorySpec strategies), then a task of a given datasource may be assigned to: any worker, if no category config is given for the task type; any worker, if a category config is given for the task type but no category is given for the datasource and there's no default category; a preferred worker (based on the category config and the category for the datasource), if available; any worker, if the category config and category are given but no preferred worker is available and the category config is weak; not assigned at all, if preferred workers are not available and the category config is strong. In both cases, Druid determines the list of eligible workers and selects one depending on their load with the goal of either distributing the load equally or filling as few workers as possible. If you are using auto-scaling, use the fillCapacity select strategy since auto-scaled nodes cannot be assigned a category, and you want the work to be concentrated on the fewest number of workers to allow the empty ones to scale down. equalDistribution Tasks are assigned to the MiddleManager with the most free slots at the time the task begins running. This evenly distributes work across your MiddleManagers. Property\tDescription\tDefaulttype\tequalDistribution\trequired; must be equalDistribution affinityConfig\tAffinityConfig object\tnull (no affinity) equalDistributionWithCategorySpec This strategy is a variant of equalDistribution, which supports a workerCategorySpec field rather than an affinityConfig. By specifying workerCategorySpec, you can assign tasks to run on different categories of MiddleManagers based on the type and dataSource of the task. This strategy doesn't work with AutoScaler since the behavior is undefined.
Property\tDescription\tDefaulttype\tequalDistributionWithCategorySpec\trequired; must be equalDistributionWithCategorySpec workerCategorySpec\tWorkerCategorySpec object\tnull (no worker category spec) Example: tasks of type "index_kafka" default to running on MiddleManagers of category c1, except for tasks that write to datasource "ds1," which run on MiddleManagers of category c2. { "selectStrategy": { "type": "equalDistributionWithCategorySpec", "workerCategorySpec": { "strong": false, "categoryMap": { "index_kafka": { "defaultCategory": "c1", "categoryAffinity": { "ds1": "c2" } } } } } } fillCapacity Tasks are assigned to the worker with the most currently-running tasks. This is useful when you are auto-scaling MiddleManagers since it tends to pack some full and leave others empty. The empty ones can be safely terminated. Note that if druid.indexer.runner.pendingTasksRunnerNumThreads is set to N > 1, then this strategy will fill N MiddleManagers up to capacity simultaneously, rather than a single MiddleManager. Property\tDescription\tDefaulttype\tfillCapacity\trequired; must be fillCapacity affinityConfig\tAffinityConfig object\tnull (no affinity) fillCapacityWithCategorySpec This strategy is a variant of fillCapacity, which supports workerCategorySpec instead of an affinityConfig. The usage is the same as for the equalDistributionWithCategorySpec strategy. This strategy doesn't work with AutoScaler since the behavior is undefined. Property\tDescription\tDefaulttype\tfillCapacityWithCategorySpec\trequired; must be fillCapacityWithCategorySpec workerCategorySpec\tWorkerCategorySpec object\tnull (no worker category spec) javascript Allows defining arbitrary logic for selecting workers to run a task using a JavaScript function. The function is passed the remoteTaskRunnerConfig, a map of workerId to available workers, and the task to be executed, and returns the workerId on which the task should be run, or null if the task cannot be run. It can be used for rapid development of missing features where the worker selection logic needs to be changed or tuned often. If the selection logic is quite complex and cannot be easily tested in a JavaScript environment, it's better to write a Druid extension module that extends the current worker selection strategies written in Java. Property\tDescription\tDefaulttype\tjavascript\trequired; must be javascript function\tString representing JavaScript function\t Example: a function that sends batch_index_task to workers 10.0.0.1 and 10.0.0.2 and all other tasks to other available workers.
{ "type":"javascript", "function":"function (config, zkWorkers, task) {\\nvar batch_workers = new java.util.ArrayList();\\nbatch_workers.add(\\"middleManager1_hostname:8091\\");\\nbatch_workers.add(\\"middleManager2_hostname:8091\\");\\nworkers = zkWorkers.keySet().toArray();\\nvar sortedWorkers = new Array()\\n;for(var i = 0; i < workers.length; i++){\\n sortedWorkers[i] = workers[i];\\n}\\nArray.prototype.sort.call(sortedWorkers,function(a, b){return zkWorkers.get(b).getCurrCapacityUsed() - zkWorkers.get(a).getCurrCapacityUsed();});\\nvar minWorkerVer = config.getMinWorkerVersion();\\nfor (var i = 0; i < sortedWorkers.length; i++) {\\n var worker = sortedWorkers[i];\\n var zkWorker = zkWorkers.get(worker);\\n if(zkWorker.canRunTask(task) && zkWorker.isValidVersion(minWorkerVer)){\\n if(task.getType() == 'index_hadoop' && batch_workers.contains(worker)){\\n return worker;\\n } else {\\n if(task.getType() != 'index_hadoop' && !batch_workers.contains(worker)){\\n return worker;\\n }\\n }\\n }\\n}\\nreturn null;\\n}" } info JavaScript-based functionality is disabled by default. Please refer to the Druid JavaScript programming guide for guidelines about using Druid's JavaScript functionality, including instructions on how to enable it. affinityConfig Use the affinityConfig field to pass affinity configuration to the equalDistribution and fillCapacity strategies. If not provided, the default is to have no affinity. Property\tDescription\tDefaultaffinity\tJSON object mapping a datasource String name to a list of indexing service MiddleManager host:port values. Druid doesn't perform DNS resolution, so the 'host' value must match what is configured on the MiddleManager and what the MiddleManager announces itself as (examine the Overlord logs to see what your MiddleManager announces itself as).\t{} strong\tWhen true tasks for a datasource must be assigned to affinity-mapped MiddleManagers. Tasks remain queued until a slot becomes available. When false, Druid may assign tasks for a datasource to other MiddleManagers when affinity-mapped MiddleManagers are unavailable to run queued tasks.\tfalse workerCategorySpec WorkerCategorySpec can be provided to the equalDistributionWithCategorySpec and fillCapacityWithCategorySpec strategies using the workerCategorySpecfield. If not provided, the default is to not use it at all. Property\tDescription\tDefaultcategoryMap\tA JSON map object mapping a task type String name to a CategoryConfig object, by which you can specify category config for different task type.\t{} strong\tWith weak workerCategorySpec (the default), tasks for a dataSource may be assigned to other MiddleManagers if the MiddleManagers specified in categoryMap are not able to run all pending tasks in the queue for that dataSource. With strong workerCategorySpec, tasks for a dataSource will only ever be assigned to their specified MiddleManagers, and will wait in the pending queue if necessary.\tfalse CategoryConfig Property\tDescription\tDefaultdefaultCategory\tSpecify default category for a task type.\tnull categoryAffinity\tA JSON map object mapping a datasource String name to a category String name of the MiddleManager. If category isn't specified for a datasource, then using the defaultCategory. If no specified category and the defaultCategory is also null, then tasks can run on any available MiddleManagers.\tnull Autoscaler Amazon's EC2 together with Google's GCE are currently the only supported autoscalers. 
EC2's autoscaler properties are: Property\tDescription\tDefaulttype\tec2\t0 minNumWorkers\tThe minimum number of workers that can be in the cluster at any given time.\t0 maxNumWorkers\tThe maximum number of workers that can be in the cluster at any given time.\t0 envConfig.availabilityZone\tWhat Amazon availability zone to run in.\tnone envConfig.nodeData\tA JSON object that describes how to launch new nodes.\tnone; required envConfig.userData\tA JSON object that describes how to configure new nodes. If you have set druid.indexer.autoscale.workerVersion, this must have a versionReplacementString. Otherwise, a versionReplacementString is not necessary.\tnone; optional For GCE's properties, please refer to the gce-extensions. "},{"title":"Data Server","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#data-server","content":"This section contains the configuration options for the processes that reside on Data servers (MiddleManagers/Peons and Historicals) in the suggested three-server configuration. Configuration options for the Indexer process are also provided here. "},{"title":"MiddleManager and Peons","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#middlemanager-and-peons","content":"These MiddleManager and Peon configurations can be defined in the middleManager/runtime.properties file. MiddleManager Process Config Property\tDescription\tDefaultdruid.host\tThe host for the current process. This is used to advertise the current processes location as reachable from another process and should generally be specified such that http://${druid.host}/ could actually talk to this process\tInetAddress.getLocalHost().getCanonicalHostName() druid.bindOnHost\tIndicating whether the process's internal jetty server bind on druid.host. Default is false, which means binding to all interfaces.\tfalse druid.plaintextPort\tThis is the port to actually listen on; unless port mapping is used, this will be the same port as is on druid.host\t8091 druid.tlsPort\tTLS port for HTTPS connector, if druid.enableTlsPort is set then this config will be used. If druid.host contains port then that port will be ignored. This should be a non-negative Integer.\t8291 druid.service\tThe name of the service. This is used as a dimension when emitting metrics and alerts to differentiate between the various services\tdruid/middlemanager MiddleManager Configuration Middle managers pass their configurations down to their child peons. The MiddleManager requires the following configs: Property\tDescription\tDefaultdruid.indexer.runner.allowedPrefixes\tWhitelist of prefixes for configs that can be passed down to child peons.\t"com.metamx", "druid", "org.apache.druid", "user.timezone", "file.encoding", "java.io.tmpdir", "hadoop" druid.indexer.runner.compressZnodes\tIndicates whether or not the MiddleManagers should compress Znodes.\ttrue druid.indexer.runner.classpath\tJava classpath for the peon.\tSystem.getProperty("java.class.path") druid.indexer.runner.javaCommand\tCommand required to execute java.\tjava druid.indexer.runner.javaOpts\tDEPRECATED A string of -X Java options to pass to the peon's JVM. Quotable parameters or parameters with spaces are encouraged to use javaOptsArray\t"" druid.indexer.runner.javaOptsArray\tA JSON array of strings to be passed in as options to the peon's JVM. 
This is additive to druid.indexer.runner.javaOpts and is recommended for properly handling arguments which contain quotes or spaces like ["-XX:OnOutOfMemoryError=kill -9 %p"]\t[] druid.indexer.runner.maxZnodeBytes\tThe maximum size Znode in bytes that can be created in ZooKeeper, should be in the range of [10KiB, 2GiB). Human-readable format is supported.\t512KiB druid.indexer.runner.startPort\tStarting port used for peon processes, should be greater than 1023 and less than 65536.\t8100 druid.indexer.runner.endPort\tEnding port used for peon processes, should be greater than or equal to druid.indexer.runner.startPort and less than 65536.\t65535 druid.indexer.runner.ports\tA JSON array of integers to specify ports that used for peon processes. If provided and non-empty, ports for peon processes will be chosen from these ports. And druid.indexer.runner.startPort/druid.indexer.runner.endPort will be completely ignored.\t[] druid.worker.ip\tThe IP of the worker.\tlocalhost druid.worker.version\tVersion identifier for the MiddleManager. The version number is a string. This affects the expected behavior during certain operations like comparison against druid.indexer.runner.minWorkerVersion. Specifically, the version comparison follows dictionary order. Use ISO8601 date format for the version to accommodate date comparisons.\t0 druid.worker.capacity\tMaximum number of tasks the MiddleManager can accept.\tNumber of CPUs on the machine - 1 druid.worker.baseTaskDirs\tList of base temporary working directories, one of which is assigned per task in a round-robin fashion. This property can be used to allow usage of multiple disks for indexing. This property is recommended in place of and takes precedence over ${druid.indexer.task.baseTaskDir}. If this configuration is not set, ${druid.indexer.task.baseTaskDir} is used. Example: druid.worker.baseTaskDirs=[\\"PATH1\\",\\"PATH2\\",...].\tnull druid.worker.baseTaskDirSize\tThe total amount of bytes that can be used by tasks on any single task dir. This value is treated symmetrically across all directories, that is, if this is 500 GB and there are 3 baseTaskDirs, then each of those task directories is assumed to allow for 500 GB to be used and a total of 1.5 TB will potentially be available across all tasks. The actual amount of memory assigned to each task is discussed in Configuring task storage sizes\tLong.MAX_VALUE druid.worker.category\tA string to name the category that the MiddleManager node belongs to.\t_default_worker_category Peon Processing Processing properties set on the MiddleManager will be passed through to Peons. Property\tDescription\tDefaultdruid.processing.buffer.sizeBytes\tThis specifies a buffer size (less than 2GiB) for the storage of intermediate results. The computation engine in both the Historical and Realtime processes will use a scratch buffer of this size to do all of their intermediate computations off-heap. Larger values allow for more aggregations in a single pass over the data while smaller values can require more passes depending on the query that is being executed. Human-readable format is supported.\tauto (max 1 GiB) druid.processing.buffer.poolCacheMaxCount\tProcessing buffer pool caches the buffers for later use. This is the maximum count that the cache will grow to. 
Note that pool can create more buffers than it can cache if necessary.\tInteger.MAX_VALUE druid.processing.formatString\tRealtime and Historical processes use this format string to name their processing threads.\tprocessing-%s druid.processing.numMergeBuffers\tThe number of direct memory buffers available for merging query results. The buffers are sized by druid.processing.buffer.sizeBytes. This property is effectively a concurrency limit for queries that require merging buffers. If you are using any queries that require merge buffers (currently, just groupBy v2) then you should have at least two of these.\tmax(2, druid.processing.numThreads / 4) druid.processing.numThreads\tThe number of processing threads to have available for parallel processing of segments. Our rule of thumb is num_cores - 1, which means that even under heavy load there will still be one core available to do background tasks like talking with ZooKeeper and pulling down segments. If only one core is available, this property defaults to the value 1.\tNumber of cores - 1 (or 1) druid.processing.fifo\tEnables the processing queue to treat tasks of equal priority in a FIFO manner.\ttrue druid.processing.tmpDir\tPath where temporary files created while processing a query should be stored. If specified, this configuration takes priority over the default java.io.tmpdir path.\tpath represented by java.io.tmpdir druid.processing.intermediaryData.storage.type\tStorage type for intermediary segments of data shuffle between native parallel index tasks. Set to local to store segment files in the local storage of the MiddleManager or Indexer. Set to deepstore to use configured deep storage for better fault tolerance during rolling updates. When the storage type is deepstore, Druid stores the data in the shuffle-data directory under the configured deep storage path. Druid does not support automated cleanup for the shuffle-data directory. You can set up cloud storage lifecycle rules for automated cleanup of data at the shuffle-data prefix location.\tlocal The amount of direct memory needed by Druid is at leastdruid.processing.buffer.sizeBytes * (druid.processing.numMergeBuffers + druid.processing.numThreads + 1). You can ensure at least this amount of direct memory is available by providing -XX:MaxDirectMemorySize=<VALUE> indruid.indexer.runner.javaOptsArray as documented above. Peon query configuration See general query configuration. Peon Caching You can optionally configure caching to be enabled on the peons by setting caching configs here. Property\tPossible Values\tDescription\tDefaultdruid.realtime.cache.useCache\ttrue, false\tEnable the cache on the realtime.\tfalse druid.realtime.cache.populateCache\ttrue, false\tPopulate the cache on the realtime.\tfalse druid.realtime.cache.unCacheable\tAll druid query types\tAll query types to not cache.\t[] druid.realtime.cache.maxEntrySize\tpositive integer\tMaximum cache entry size in bytes.\t1_000_000 See cache configuration for how to configure cache settings. Additional Peon Configuration Although peons inherit the configurations of their parent MiddleManagers, explicit child peon configs in MiddleManager can be set by prefixing them with: druid.indexer.fork.property Additional peon configs include: Property\tDescription\tDefaultdruid.peon.mode\tChoices are "local" and "remote". 
Setting this to local means you intend to run the peon as a standalone process (not recommended).\tremote druid.indexer.task.baseDir\tBase temporary working directory.\tSystem.getProperty("java.io.tmpdir") druid.indexer.task.baseTaskDir\tBase temporary working directory for tasks.\t${druid.indexer.task.baseDir}/persistent/task druid.indexer.task.batchProcessingMode\tBatch ingestion tasks have three operating modes to control construction and tracking for intermediary segments: OPEN_SEGMENTS, CLOSED_SEGMENTS, and CLOSED_SEGMENTS_SINKS. OPEN_SEGMENTS uses the streaming ingestion code path and performs an mmap on intermediary segments to build a timeline to make these segments available to realtime queries. Batch ingestion doesn't require intermediary segments, so the default mode, CLOSED_SEGMENTS, eliminates the mmap of intermediary segments. CLOSED_SEGMENTS mode still tracks the entire set of segments in heap. The CLOSED_SEGMENTS_SINKS mode is the most aggressive configuration and should have the smallest memory footprint. It eliminates in-memory tracking and mmap of intermediary segments produced during segment creation. CLOSED_SEGMENTS_SINKS mode isn't as well tested as the other modes so is currently considered experimental. You can use OPEN_SEGMENTS mode if problems occur with the two newer modes.\tCLOSED_SEGMENTS druid.indexer.task.defaultHadoopCoordinates\tHadoop version to use with HadoopIndexTasks that do not request a particular version.\torg.apache.hadoop:hadoop-client:2.8.5 druid.indexer.task.defaultRowFlushBoundary\tHighest row count before persisting to disk. Used for index-generating tasks.\t75000 druid.indexer.task.directoryLockTimeout\tWait this long for zombie peons to exit before giving up on their replacements.\tPT10M druid.indexer.task.gracefulShutdownTimeout\tWait this long on MiddleManager restart for restorable tasks to gracefully exit.\tPT5M druid.indexer.task.hadoopWorkingPath\tTemporary working directory for Hadoop tasks.\t/tmp/druid-indexing druid.indexer.task.restoreTasksOnRestart\tIf true, MiddleManagers will attempt to stop tasks gracefully on shutdown and restore them on restart.\tfalse druid.indexer.task.ignoreTimestampSpecForDruidInputSource\tIf true, tasks using the Druid input source will ignore the provided timestampSpec, and will use the __time column of the input datasource. This option is provided for compatibility with ingestion specs written before Druid 0.22.0.\tfalse druid.indexer.task.storeEmptyColumns\tBoolean value for whether or not to store empty columns during ingestion. When set to true, Druid stores every column specified in the dimensionsSpec. If you use string-based schemaless ingestion and don't specify any dimensions to ingest, you must also set includeAllDimensions for Druid to store empty columns. If you set storeEmptyColumns to false, Druid SQL queries referencing empty columns will fail. If you intend to leave storeEmptyColumns disabled, you should either ingest placeholder data for empty columns or else not query on empty columns. You can overwrite this configuration by setting storeEmptyColumns in the task context.\ttrue druid.indexer.task.tmpStorageBytesPerTask\tMaximum number of bytes per task to be used to store temporary files on disk. This config is generally intended for internal usage. Attempts to set it are very likely to be overwritten by the TaskRunner that executes the task, so be sure of what you expect to happen before directly adjusting this configuration parameter.
The config is documented here primarily to provide an understanding of what it means if/when someone sees that it has been set. A value of -1 disables this limit.\t-1 druid.indexer.server.maxChatRequests\tMaximum number of concurrent requests served by a task's chat handler. Set to 0 to disable limiting.\t0 If the peon is running in remote mode, there must be an Overlord up and running. Peons in remote mode can set the following configurations: Property\tDescription\tDefaultdruid.peon.taskActionClient.retry.minWait\tThe minimum retry time to communicate with Overlord.\tPT5S druid.peon.taskActionClient.retry.maxWait\tThe maximum retry time to communicate with Overlord.\tPT1M druid.peon.taskActionClient.retry.maxRetryCount\tThe maximum number of retries to communicate with Overlord.\t60 SegmentWriteOutMediumFactory When new segments are created, Druid temporarily stores some preprocessed data in some buffers. Currently, three types of medium exist for those buffers: temporary files, off-heap memory, and on-heap memory. Temporary files (tmpFile) are stored under the task working directory (see the druid.worker.baseTaskDirs configuration above) and thus share its mounting properties; e.g. they could be backed by HDD, SSD, or memory (tmpfs). This type of medium may do unnecessary disk I/O and requires some disk space to be available. The off-heap memory medium (offHeapMemory) creates buffers in off-heap memory of the JVM process that is running a task. This type of medium is preferred, but it may require allowing the JVM to have more off-heap memory by changing the -XX:MaxDirectMemorySize configuration. It is not yet well understood how the required off-heap memory size relates to the size of the segments being created, but it certainly doesn't make sense to add more off-heap memory than the configured maximum heap size (-Xmx) for the same JVM. The on-heap memory medium (onHeapMemory) creates buffers using the allocated heap memory of the JVM process running a task. Using on-heap memory introduces garbage collection overhead and so is not recommended in most cases. This type of medium is most helpful for tasks run on external clusters where it may be difficult to allocate and work with direct memory effectively. For most types of tasks, SegmentWriteOutMediumFactory can be configured per task (see the Tasks page, "TuningConfig" section), but if it's not specified for a task, or it's not supported for a particular task type, then the value from the configuration below is used: Property\tDescription\tDefaultdruid.peon.defaultSegmentWriteOutMediumFactory.type\ttmpFile, offHeapMemory, or onHeapMemory, see explanation above\ttmpFile "},{"title":"Indexer","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#indexer","content":"Indexer Process Configuration Property\tDescription\tDefaultdruid.host\tThe host for the current process. This is used to advertise the current processes location as reachable from another process and should generally be specified such that http://${druid.host}/ could actually talk to this process\tInetAddress.getLocalHost().getCanonicalHostName() druid.bindOnHost\tIndicating whether the process's internal jetty server bind on druid.host. Default is false, which means binding to all interfaces.\tfalse druid.plaintextPort\tThis is the port to actually listen on; unless port mapping is used, this will be the same port as is on druid.host\t8091 druid.tlsPort\tTLS port for HTTPS connector, if druid.enableTlsPort is set then this config will be used.
If druid.host contains port then that port will be ignored. This should be a non-negative Integer.\t8283 druid.service\tThe name of the service. This is used as a dimension when emitting metrics and alerts to differentiate between the various services\tdruid/indexer Indexer General Configuration Property\tDescription\tDefaultdruid.worker.version\tVersion identifier for the Indexer.\t0 druid.worker.capacity\tMaximum number of tasks the Indexer can accept.\tNumber of available processors - 1 druid.worker.baseTaskDirs\tList of base temporary working directories, one of which is assigned per task in a round-robin fashion. This property can be used to allow usage of multiple disks for indexing. This property is recommended in place of and takes precedence over ${druid.indexer.task.baseTaskDir}. If this configuration is not set, ${druid.indexer.task.baseTaskDir} is used. Example: druid.worker.baseTaskDirs=[\\"PATH1\\",\\"PATH2\\",...].\tnull druid.worker.baseTaskDirSize\tThe total amount of bytes that can be used by tasks on any single task dir. This value is treated symmetrically across all directories, that is, if this is 500 GB and there are 3 baseTaskDirs, then each of those task directories is assumed to allow for 500 GB to be used and a total of 1.5 TB will potentially be available across all tasks. The actual amount of memory assigned to each task is discussed in Configuring task storage sizes\tLong.MAX_VALUE druid.worker.globalIngestionHeapLimitBytes\tTotal amount of heap available for ingestion processing. This is applied by automatically setting the maxBytesInMemory property on tasks.\t60% of configured JVM heap druid.worker.numConcurrentMerges\tMaximum number of segment persist or merge operations that can run concurrently across all tasks.\tdruid.worker.capacity / 2, rounded down druid.indexer.task.baseDir\tBase temporary working directory.\tSystem.getProperty("java.io.tmpdir") druid.indexer.task.baseTaskDir\tBase temporary working directory for tasks.\t${druid.indexer.task.baseDir}/persistent/tasks druid.indexer.task.defaultHadoopCoordinates\tHadoop version to use with HadoopIndexTasks that do not request a particular version.\torg.apache.hadoop:hadoop-client:2.8.5 druid.indexer.task.gracefulShutdownTimeout\tWait this long on Indexer restart for restorable tasks to gracefully exit.\tPT5M druid.indexer.task.hadoopWorkingPath\tTemporary working directory for Hadoop tasks.\t/tmp/druid-indexing druid.indexer.task.restoreTasksOnRestart\tIf true, the Indexer will attempt to stop tasks gracefully on shutdown and restore them on restart.\tfalse druid.indexer.task.ignoreTimestampSpecForDruidInputSource\tIf true, tasks using the Druid input source will ignore the provided timestampSpec, and will use the __time column of the input datasource. This option is provided for compatibility with ingestion specs written before Druid 0.22.0.\tfalse druid.indexer.task.storeEmptyColumns\tBoolean value for whether or not to store empty columns during ingestion. When set to true, Druid stores every column specified in the dimensionsSpec. If you set storeEmptyColumns to false, Druid SQL queries referencing empty columns will fail. If you intend to leave storeEmptyColumns disabled, you should either ingest placeholder data for empty columns or else not query on empty columns. 
You can overwrite this configuration by setting storeEmptyColumns in the task context.\ttrue druid.peon.taskActionClient.retry.maxWait\tThe maximum retry time to communicate with Overlord.\tPT1M druid.peon.taskActionClient.retry.maxRetryCount\tThe maximum number of retries to communicate with Overlord.\t60 Indexer Concurrent Requests Druid uses Jetty to serve HTTP requests. Property\tDescription\tDefaultdruid.server.http.numThreads\tNumber of threads for HTTP requests. Please see the Indexer Server HTTP threads documentation for more details on how the Indexer uses this configuration.\tmax(10, (Number of cores * 17) / 16 + 2) + 30 druid.server.http.queueSize\tSize of the worker queue used by Jetty server to temporarily store incoming client connections. If this value is set and a request is rejected by jetty because queue is full then client would observe request failure with TCP connection being closed immediately with a completely empty response from server.\tUnbounded druid.server.http.maxIdleTime\tThe Jetty max idle time for a connection.\tPT5M druid.server.http.enableRequestLimit\tIf enabled, no requests would be queued in jetty queue and "HTTP 429 Too Many Requests" error response would be sent.\tfalse druid.server.http.defaultQueryTimeout\tQuery timeout in millis, beyond which unfinished queries will be cancelled\t300000 druid.server.http.gracefulShutdownTimeout\tThe maximum amount of time Jetty waits after receiving shutdown signal. After this timeout the threads will be forcefully shutdown. This allows any queries that are executing to complete(Only values greater than zero are valid).\tPT30S druid.server.http.unannouncePropagationDelay\tHow long to wait for ZooKeeper unannouncements to propagate before shutting down Jetty. This is a minimum and druid.server.http.gracefulShutdownTimeout does not start counting down until after this period elapses.\tPT0S (do not wait) druid.server.http.maxQueryTimeout\tMaximum allowed value (in milliseconds) for timeout parameter. See query-context to know more about timeout. Query is rejected if the query context timeout is greater than this value.\tLong.MAX_VALUE druid.server.http.maxRequestHeaderSize\tMaximum size of a request header in bytes. Larger headers consume more memory and can make a server more vulnerable to denial of service attacks.\t8 * 1024 druid.server.http.enableForwardedRequestCustomizer\tIf enabled, adds Jetty ForwardedRequestCustomizer which reads X-Forwarded-* request headers to manipulate servlet request object when Druid is used behind a proxy.\tfalse druid.server.http.allowedHttpMethods\tList of HTTP methods that should be allowed in addition to the ones required by Druid APIs. Druid APIs require GET, PUT, POST, and DELETE, which are always allowed. This option is not useful unless you have installed an extension that needs these additional HTTP methods or that adds functionality related to CORS. None of Druid's bundled extensions require these methods.\t[] druid.server.http.contentSecurityPolicy\tContent-Security-Policy header value to set on each non-POST response. Setting this property to an empty string, or omitting it, both result in the default frame-ancestors: none being set.\tframe-ancestors 'none' Indexer Processing Resources Property\tDescription\tDefaultdruid.processing.buffer.sizeBytes\tThis specifies a buffer size (less than 2GiB) for the storage of intermediate results. The computation engine in the Indexer processes will use a scratch buffer of this size to do all of their intermediate computations off-heap. 
Larger values allow for more aggregations in a single pass over the data while smaller values can require more passes depending on the query that is being executed. Human-readable format is supported.\tauto (max 1GiB) druid.processing.buffer.poolCacheMaxCount\tprocessing buffer pool caches the buffers for later use, this is the maximum count cache will grow to. note that pool can create more buffers than it can cache if necessary.\tInteger.MAX_VALUE druid.processing.formatString\tIndexer processes use this format string to name their processing threads.\tprocessing-%s druid.processing.numMergeBuffers\tThe number of direct memory buffers available for merging query results. The buffers are sized by druid.processing.buffer.sizeBytes. This property is effectively a concurrency limit for queries that require merging buffers. If you are using any queries that require merge buffers (currently, just groupBy v2) then you should have at least two of these.\tmax(2, druid.processing.numThreads / 4) druid.processing.numThreads\tThe number of processing threads to have available for parallel processing of segments. Our rule of thumb is num_cores - 1, which means that even under heavy load there will still be one core available to do background tasks like talking with ZooKeeper and pulling down segments. If only one core is available, this property defaults to the value 1.\tNumber of cores - 1 (or 1) druid.processing.fifo\tIf the processing queue should treat tasks of equal priority in a FIFO manner\ttrue druid.processing.tmpDir\tPath where temporary files created while processing a query should be stored. If specified, this configuration takes priority over the default java.io.tmpdir path.\tpath represented by java.io.tmpdir The amount of direct memory needed by Druid is at leastdruid.processing.buffer.sizeBytes * (druid.processing.numMergeBuffers + druid.processing.numThreads + 1). You can ensure at least this amount of direct memory is available by providing -XX:MaxDirectMemorySize=<VALUE> at the command line. Query Configurations See general query configuration. Indexer Caching You can optionally configure caching to be enabled on the Indexer by setting caching configs here. Property\tPossible Values\tDescription\tDefaultdruid.realtime.cache.useCache\ttrue, false\tEnable the cache on the realtime.\tfalse druid.realtime.cache.populateCache\ttrue, false\tPopulate the cache on the realtime.\tfalse druid.realtime.cache.unCacheable\tAll druid query types\tAll query types to not cache.\t[] druid.realtime.cache.maxEntrySize\tpositive integer\tMaximum cache entry size in bytes.\t1_000_000 See cache configuration for how to configure cache settings. Note that only local caches such as the local-type cache and caffeine cache are supported. If a remote cache such as memcached is used, it will be ignored. "},{"title":"Historical","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#historical","content":"For general Historical Process information, see here. These Historical configurations can be defined in the historical/runtime.properties file. Historical Process Configuration Property\tDescription\tDefaultdruid.host\tThe host for the current process. This is used to advertise the current processes location as reachable from another process and should generally be specified such that http://${druid.host}/ could actually talk to this process\tInetAddress.getLocalHost().getCanonicalHostName() druid.bindOnHost\tIndicating whether the process's internal jetty server bind on druid.host. 
Default is false, which means binding to all interfaces.\tfalse druid.plaintextPort\tThis is the port to actually listen on; unless port mapping is used, this will be the same port as is on druid.host\t8083 druid.tlsPort\tTLS port for HTTPS connector, if druid.enableTlsPort is set then this config will be used. If druid.host contains port then that port will be ignored. This should be a non-negative Integer.\t8283 druid.service\tThe name of the service. This is used as a dimension when emitting metrics and alerts to differentiate between the various services\tdruid/historical Historical General Configuration Property\tDescription\tDefaultdruid.server.maxSize\tThe maximum number of bytes-worth of segments that the process wants assigned to it. The Coordinator process will attempt to assign segments to a Historical process only if this property is greater than the total size of segments served by it. Since this property defines the upper limit on the total segment size that can be assigned to a Historical, it is defaulted to the sum of all maxSize values specified within druid.segmentCache.locations property. Human-readable format is supported, see here.\tSum of maxSize values defined within druid.segmentCache.locations druid.server.tier\tA string to name the distribution tier that the storage process belongs to. Many of the rules Coordinator processes use to manage segments can be keyed on tiers.\t_default_tier druid.server.priority\tIn a tiered architecture, the priority of the tier, thus allowing control over which processes are queried. Higher numbers mean higher priority. The default (no priority) works for architecture with no cross replication (tiers that have no data-storage overlap). Data centers typically have equal priority.\t0 Storing Segments Property\tDescription\tDefaultdruid.segmentCache.locations\tSegments assigned to a Historical process are first stored on the local file system (in a disk cache) and then served by the Historical process. These locations define where that local cache resides. This value cannot be NULL or EMPTY. Here is an example druid.segmentCache.locations=[{"path": "/mnt/druidSegments", "maxSize": "10k", "freeSpacePercent": 1.0}]. "freeSpacePercent" is optional, if provided then enforces that much of free disk partition space while storing segments. But, it depends on File.getTotalSpace() and File.getFreeSpace() methods, so enable if only if they work for your File System.\tnone druid.segmentCache.locationSelector.strategy\tThe strategy used to select a location from the configured druid.segmentCache.locations for segment distribution. Possible values are leastBytesUsed, roundRobin, random, or mostAvailableSize.\tleastBytesUsed druid.segmentCache.deleteOnRemove\tDelete segment files from cache once a process is no longer serving a segment.\ttrue druid.segmentCache.dropSegmentDelayMillis\tHow long a process delays before completely dropping segment.\t30000 (30 seconds) druid.segmentCache.infoDir\tHistorical processes keep track of the segments they are serving so that when the process is restarted they can reload the same segments without waiting for the Coordinator to reassign. This path defines where this metadata is kept. Directory will be created if needed.\t${first_location}/info_dir druid.segmentCache.announceIntervalMillis\tHow frequently to announce segments while segments are loading from cache. 
Set this value to zero to wait for all segments to be loaded before announcing.\t5000 (5 seconds) druid.segmentCache.numLoadingThreads\tHow many segments to drop or load concurrently from deep storage. Note that the work of loading segments involves downloading segments from deep storage, decompressing them and loading them to a memory mapped location, so the work is not entirely I/O bound. Depending on CPU and network load, one could possibly increase this config to a higher value.\tmax(1,Number of cores / 6) druid.segmentCache.numBootstrapThreads\tHow many segments to load concurrently during historical startup.\tdruid.segmentCache.numLoadingThreads druid.segmentCache.lazyLoadOnStart\tWhether or not to load segment column metadata lazily during historical startup. When set to true, Historical startup time will be dramatically improved by deferring segment loading until the first time that segment takes part in a query, which will incur this cost instead.\tfalse druid.coordinator.loadqueuepeon.curator.numCallbackThreads\tNumber of threads for executing callback actions associated with loading or dropping of segments. You might want to increase this number if you notice that clusters are lagging behind with respect to balancing segments across Historical nodes.\t2 druid.segmentCache.numThreadsToLoadSegmentsIntoPageCacheOnDownload\tNumber of threads to asynchronously read segment index files into a null output stream on each new segment download after the historical process finishes bootstrapping. Recommended to set to 1 or 2, or leave unspecified to disable. See also druid.segmentCache.numThreadsToLoadSegmentsIntoPageCacheOnBootstrap.\t0 druid.segmentCache.numThreadsToLoadSegmentsIntoPageCacheOnBootstrap\tNumber of threads to asynchronously read segment index files into a null output stream during historical process bootstrap. This thread pool is terminated after the historical process finishes bootstrapping. Recommended to set to half of available cores. If left unspecified, druid.segmentCache.numThreadsToLoadSegmentsIntoPageCacheOnDownload will be used. If both configs are unspecified, this feature is disabled. Preemptively loading segments into the page cache helps make query latency more consistent: when a segment is later queried, it is already in the page cache and only a minor page fault needs to be triggered instead of a more costly major page fault. Note that loading segments into the page cache is a blind load of segment index files, and the operating system may evict existing segments from the page cache when the total segment size on local disk is larger than the page cache usable in RAM, which roughly equals the total available RAM in the host minus the Druid process memory (both heap and direct memory allocated) minus the memory used by other non-Druid processes on the host. It is therefore the user's responsibility to ensure the host has enough RAM to hold all the segments and avoid random evictions, in order to fully leverage this feature.\tdruid.segmentCache.numThreadsToLoadSegmentsIntoPageCacheOnDownload In druid.segmentCache.locations, freeSpacePercent was added because the maxSize setting is only a theoretical limit and assumes that much space will always be available for storing segments. If a Druid bug leaves unaccounted segment files on disk, or some other process writes data to the disk, this check can fail segment loading early, before the disk fills up completely, leaving the host otherwise usable.
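As an illustrative sketch only (the path, sizes, and thread counts here are hypothetical and should be tuned for your hardware), the segment cache settings discussed above might appear together in historical/runtime.properties like this:
druid.segmentCache.locations=[{"path":"/mnt/druid/segment-cache","maxSize":"300g","freeSpacePercent":5.0}]
druid.segmentCache.numLoadingThreads=4
druid.segmentCache.numThreadsToLoadSegmentsIntoPageCacheOnBootstrap=8
druid.segmentCache.numThreadsToLoadSegmentsIntoPageCacheOnDownload=2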
In druid.segmentCache.locationSelector.strategy, one of leastBytesUsed, roundRobin, random, or mostAvailableSize could be specified to represent the strategy to distribute segments across multiple segment cache locations. Strategy\tDescriptionleastBytesUsed\tselects a location which has least bytes used in absolute terms. roundRobin\tselects a location in a round robin fashion oblivious to the bytes used or the capacity. random\tselects a segment cache location randomly each time among the available storage locations. mostAvailableSize\tselects a segment cache location that has most free space among the available storage locations. Note that if druid.segmentCache.numLoadingThreads > 1, multiple threads can download different segments at the same time. In this case, with the leastBytesUsed strategy or mostAvailableSize strategy, historicals may select a sub-optimal storage location because each decision is based on a snapshot of the storage location status of when a segment is requested to download. Historical query configs Concurrent Requests Druid uses Jetty to serve HTTP requests. Property\tDescription\tDefaultdruid.server.http.numThreads\tNumber of threads for HTTP requests.\tmax(10, (Number of cores * 17) / 16 + 2) + 30 druid.server.http.queueSize\tSize of the worker queue used by Jetty server to temporarily store incoming client connections. If this value is set and a request is rejected by jetty because queue is full then client would observe request failure with TCP connection being closed immediately with a completely empty response from server.\tUnbounded druid.server.http.maxIdleTime\tThe Jetty max idle time for a connection.\tPT5M druid.server.http.enableRequestLimit\tIf enabled, no requests would be queued in jetty queue and "HTTP 429 Too Many Requests" error response would be sent.\tfalse druid.server.http.defaultQueryTimeout\tQuery timeout in millis, beyond which unfinished queries will be cancelled\t300000 druid.server.http.gracefulShutdownTimeout\tThe maximum amount of time Jetty waits after receiving shutdown signal. After this timeout the threads will be forcefully shutdown. This allows any queries that are executing to complete(Only values greater than zero are valid).\tPT30S druid.server.http.unannouncePropagationDelay\tHow long to wait for ZooKeeper unannouncements to propagate before shutting down Jetty. This is a minimum and druid.server.http.gracefulShutdownTimeout does not start counting down until after this period elapses.\tPT0S (do not wait) druid.server.http.maxQueryTimeout\tMaximum allowed value (in milliseconds) for timeout parameter. See query-context to know more about timeout. Query is rejected if the query context timeout is greater than this value.\tLong.MAX_VALUE druid.server.http.maxRequestHeaderSize\tMaximum size of a request header in bytes. Larger headers consume more memory and can make a server more vulnerable to denial of service attacks.\t8 * 1024 druid.server.http.contentSecurityPolicy\tContent-Security-Policy header value to set on each non-POST response. Setting this property to an empty string, or omitting it, both result in the default frame-ancestors: none being set.\tframe-ancestors 'none' Processing Property\tDescription\tDefaultdruid.processing.buffer.sizeBytes\tThis specifies a buffer size (less than 2GiB), for the storage of intermediate results. The computation engine in both the Historical and Realtime processes will use a scratch buffer of this size to do all of their intermediate computations off-heap. 
Larger values allow for more aggregations in a single pass over the data while smaller values can require more passes depending on the query that is being executed. Human-readable format is supported.\tauto (max 1GiB) druid.processing.buffer.poolCacheMaxCount\tprocessing buffer pool caches the buffers for later use, this is the maximum count cache will grow to. note that pool can create more buffers than it can cache if necessary.\tInteger.MAX_VALUE druid.processing.formatString\tRealtime and Historical processes use this format string to name their processing threads.\tprocessing-%s druid.processing.numMergeBuffers\tThe number of direct memory buffers available for merging query results. The buffers are sized by druid.processing.buffer.sizeBytes. This property is effectively a concurrency limit for queries that require merging buffers. If you are using any queries that require merge buffers (currently, just groupBy v2) then you should have at least two of these.\tmax(2, druid.processing.numThreads / 4) druid.processing.numThreads\tThe number of processing threads to have available for parallel processing of segments. Our rule of thumb is num_cores - 1, which means that even under heavy load there will still be one core available to do background tasks like talking with ZooKeeper and pulling down segments. If only one core is available, this property defaults to the value 1.\tNumber of cores - 1 (or 1) druid.processing.fifo\tIf the processing queue should treat tasks of equal priority in a FIFO manner\ttrue druid.processing.tmpDir\tPath where temporary files created while processing a query should be stored. If specified, this configuration takes priority over the default java.io.tmpdir path.\tpath represented by java.io.tmpdir The amount of direct memory needed by Druid is at leastdruid.processing.buffer.sizeBytes * (druid.processing.numMergeBuffers + druid.processing.numThreads + 1). You can ensure at least this amount of direct memory is available by providing -XX:MaxDirectMemorySize=<VALUE> at the command line. Historical query configuration See general query configuration. Historical Caching You can optionally only configure caching to be enabled on the Historical by setting caching configs here. Property\tPossible Values\tDescription\tDefaultdruid.historical.cache.useCache\ttrue, false\tEnable the cache on the Historical.\tfalse druid.historical.cache.populateCache\ttrue, false\tPopulate the cache on the Historical.\tfalse druid.historical.cache.unCacheable\tAll druid query types\tAll query types to not cache.\t[] druid.historical.cache.maxEntrySize\tpositive integer\tMaximum cache entry size in bytes.\t1_000_000 See cache configuration for how to configure cache settings. "},{"title":"Query Server","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#query-server","content":"This section contains the configuration options for the processes that reside on Query servers (Brokers) in the suggested three-server configuration. Configuration options for the experimental Router process are also provided here. "},{"title":"Broker","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#broker","content":"For general Broker process information, see here. These Broker configurations can be defined in the broker/runtime.properties file. Broker Process Configs Property\tDescription\tDefaultdruid.host\tThe host for the current process. 
This is used to advertise the current processes location as reachable from another process and should generally be specified such that http://${druid.host}/ could actually talk to this process\tInetAddress.getLocalHost().getCanonicalHostName() druid.bindOnHost\tIndicating whether the process's internal jetty server bind on druid.host. Default is false, which means binding to all interfaces.\tfalse druid.plaintextPort\tThis is the port to actually listen on; unless port mapping is used, this will be the same port as is on druid.host\t8082 druid.tlsPort\tTLS port for HTTPS connector, if druid.enableTlsPort is set then this config will be used. If druid.host contains port then that port will be ignored. This should be a non-negative Integer.\t8282 druid.service\tThe name of the service. This is used as a dimension when emitting metrics and alerts to differentiate between the various services\tdruid/broker Query configuration Query routing Property\tPossible Values\tDescription\tDefaultdruid.broker.balancer.type\trandom, connectionCount\tDetermines how the broker balances connections to Historical processes. random chooses randomly; connectionCount picks the process with the fewest number of active connections.\trandom druid.broker.select.tier\thighestPriority, lowestPriority, custom\tIf segments are cross-replicated across tiers in a cluster, you can tell the broker to prefer to select segments in a tier with a certain priority.\thighestPriority druid.broker.select.tier.custom.priorities\tAn array of integer priorities. E.g., [-1, 0, 1, 2]\tSelect servers in tiers with a custom priority list.\tThe config only has an effect if druid.broker.select.tier is set to custom. If druid.broker.select.tier is set to custom but this config is not specified, the effect is the same as druid.broker.select.tier set to highestPriority. Any of the integers in this config can be ignored if there are no corresponding tiers with such priorities. Tiers with priorities explicitly specified in this config always have higher priority than those that are not, and tiers not specified fall back to the highestPriority strategy among themselves. Query prioritization and laning Laning strategies allow you to control capacity utilization for heterogeneous query workloads. With laning, the broker examines and classifies a query for the purpose of assigning it to a 'lane'. Lanes have capacity limits, enforced by the broker, that can be used to ensure sufficient resources are available for other lanes or for interactive queries (with no lane), or to limit overall throughput for queries within the lane. Requests in excess of the capacity are discarded with an HTTP 429 status code. Property\tDescription\tDefaultdruid.query.scheduler.numThreads\tMaximum number of concurrently-running queries. When this parameter is set lower than druid.server.http.numThreads, query requests beyond the limit are denied with HTTP 429 instead of waiting in the Jetty request queue. This has the effect of reserving the leftover Jetty threads for non-query requests.
When this parameter is set equal to or higher than druid.server.http.numThreads, it has no effect.\tUnbounded druid.query.scheduler.laning.strategy\tQuery laning strategy to use to assign queries to a lane in order to control capacities for certain classes of queries.\tnone druid.query.scheduler.prioritization.strategy\tQuery prioritization strategy to automatically assign priorities.\tmanual Prioritization strategies Manual prioritization strategy With this configuration, queries are never assigned a priority automatically, but will preserve a priority manually set on the query context with the priority key. This mode can be explicitly set by setting druid.query.scheduler.prioritization.strategy to manual. Threshold prioritization strategy This prioritization strategy lowers the priority of queries that cross any of a configurable set of thresholds, such as how far in the past the data is, how large of an interval a query covers, or the number of segments taking part in a query. This strategy can be enabled by setting druid.query.scheduler.prioritization.strategy to threshold. Property\tDescription\tDefaultdruid.query.scheduler.prioritization.periodThreshold\tISO duration threshold for how old data can be queried before automatically adjusting query priority.\tNone druid.query.scheduler.prioritization.durationThreshold\tISO duration threshold for maximum duration a queries interval can span before the priority is automatically adjusted.\tNone druid.query.scheduler.prioritization.segmentCountThreshold\tNumber threshold for maximum number of segments that can take part in a query before its priority is automatically adjusted.\tNone druid.query.scheduler.prioritization.adjustment\tAmount to reduce the priority of queries which cross any threshold.\tNone Laning strategies No laning strategy In this mode, queries are never assigned a lane, and the concurrent query count will only be limited by druid.server.http.numThreads or druid.query.scheduler.numThreads, if set. This is the default Druid query scheduler operating mode. Enable this strategy explicitly by setting druid.query.scheduler.laning.strategy to none. 'High/Low' laning strategy This laning strategy splits queries with a priority below zero into a low query lane, automatically. Queries with priority of zero (the default) or above are considered 'interactive'. The limit on low queries can be set to some desired percentage of the total capacity (or HTTP thread pool size), reserving capacity for interactive queries. Queries in the low lane are not guaranteed their capacity, which may be consumed by interactive queries, but may use up to this limit if total capacity is available. If the low lane is specified in the query context lane parameter, this will override the computed lane. This strategy can be enabled by setting druid.query.scheduler.laning.strategy=hilo. Property\tDescription\tDefaultdruid.query.scheduler.laning.maxLowPercent\tMaximum percent of the smaller number of druid.server.http.numThreads or druid.query.scheduler.numThreads, defining the number of HTTP threads that can be used by queries with a priority lower than 0. Value must be an integer in the range 1 to 100, and will be rounded up\tNo default, must be set if using this mode Guardrails for materialization of subqueries Druid stores the subquery rows in temporary tables that live in the Java heap. It is a good practice to avoid large subqueries in Druid. 
Therefore there are guardrails that are built in Druid to prevent the queries from generating subquery results which can exhaust the heap space. They can be set on a cluster level or modified per query level as desired. Note the following guardrails that can be set by the cluster admin to limit the subquery results: druid.server.http.maxSubqueryRows in broker's config to set a default for the entire cluster or maxSubqueryRows in the query context to set an upper limit on the number of rows a subquery can generatedruid.server.http.maxSubqueryBytes in broker's config to set a default for the entire cluster or maxSubqueryBytes in the query context to set an upper limit on the number of bytes a subquery can generate Note that limiting the subquery by bytes is a newer feature therefore it is experimental as it materializes the results differently. If you choose to modify or set any of the above limits, you must also think about the heap size of all Brokers, Historicals, and task Peons that process data for the subqueries to accommodate the subquery results. There is no formula to calculate the correct value. Trial and error is the best approach. 'Manual' laning strategy This laning strategy is best suited for cases where one or more external applications which query Druid are capable of manually deciding what lane a given query should belong to. Configured with a map of lane names to percent or exact max capacities, queries with a matching lane parameter in the query context will be subjected to those limits. Property\tDescription\tDefaultdruid.query.scheduler.laning.lanes.{name}\tMaximum percent or exact limit of queries that can concurrently run in the defined lanes. Any number of lanes may be defined like this. The lane names 'total' and 'default' are reserved for internal use.\tNo default, must define at least one lane with a limit above 0. If druid.query.scheduler.laning.isLimitPercent is set to true, values must be integers in the range of 1 to 100. druid.query.scheduler.laning.isLimitPercent\tIf set to true, the values set for druid.query.scheduler.laning.lanes will be treated as a percent of the smaller number of druid.server.http.numThreads or druid.query.scheduler.numThreads. Note that in this mode, these lane values across lanes are not required to add up to, and can exceed, 100%.\tfalse Server Configuration Druid uses Jetty to serve HTTP requests. Each query being processed consumes a single thread from druid.server.http.numThreads, so consider defining druid.query.scheduler.numThreads to a lower value in order to reserve HTTP threads for responding to health checks, lookup loading, and other non-query, (in most cases) comparatively very short-lived, HTTP requests. Property\tDescription\tDefaultdruid.server.http.numThreads\tNumber of threads for HTTP requests.\tmax(10, (Number of cores * 17) / 16 + 2) + 30 druid.server.http.queueSize\tSize of the worker queue used by Jetty server to temporarily store incoming client connections. 
If this value is set and a request is rejected by jetty because queue is full then client would observe request failure with TCP connection being closed immediately with a completely empty response from server.\tUnbounded druid.server.http.maxIdleTime\tThe Jetty max idle time for a connection.\tPT5M druid.server.http.enableRequestLimit\tIf enabled, no requests would be queued in jetty queue and "HTTP 429 Too Many Requests" error response would be sent.\tfalse druid.server.http.defaultQueryTimeout\tQuery timeout in millis, beyond which unfinished queries will be cancelled\t300000 druid.server.http.maxScatterGatherBytes\tMaximum number of bytes gathered from data processes such as Historicals and realtime processes to execute a query. Queries that exceed this limit will fail. This is an advance configuration that allows to protect in case Broker is under heavy load and not utilizing the data gathered in memory fast enough and leading to OOMs. This limit can be further reduced at query time using maxScatterGatherBytes in the context. Note that having large limit is not necessarily bad if broker is never under heavy concurrent load in which case data gathered is processed quickly and freeing up the memory used. Human-readable format is supported, see here.\tLong.MAX_VALUE druid.server.http.maxSubqueryRows\tMaximum number of rows from all subqueries per query. Druid stores the subquery rows in temporary tables that live in the Java heap. druid.server.http.maxSubqueryRows is a guardrail to prevent the system from exhausting available heap. When a subquery exceeds the row limit, Druid throws a resource limit exceeded exception: "Subquery generated results beyond maximum." It is a good practice to avoid large subqueries in Druid. However, if you choose to raise the subquery row limit, you must also increase the heap size of all Brokers, Historicals, and task Peons that process data for the subqueries to accommodate the subquery results. There is no formula to calculate the correct value. Trial and error is the best approach.\t100000 druid.server.http.maxSubqueryBytes\tMaximum number of bytes from all subqueries per query. Since the results are stored on the Java heap, druid.server.http.maxSubqueryBytes is a guardrail like druid.server.http.maxSubqueryRows to prevent the heap space from exhausting. When a subquery exceeds the byte limit, Druid throws a resource limit exceeded exception. A negative value for the guardrail indicates that Druid won't guardrail by memory. Check the docs for druid.server.http.maxSubqueryRows to see how to set the optimal value for a cluster. This is an experimental feature for now as this materializes the results in a different format.\t-1 druid.server.http.gracefulShutdownTimeout\tThe maximum amount of time Jetty waits after receiving shutdown signal. After this timeout the threads will be forcefully shutdown. This allows any queries that are executing to complete(Only values greater than zero are valid).\tPT30S druid.server.http.unannouncePropagationDelay\tHow long to wait for ZooKeeper unannouncements to propagate before shutting down Jetty. This is a minimum and druid.server.http.gracefulShutdownTimeout does not start counting down until after this period elapses.\tPT0S (do not wait) druid.server.http.maxQueryTimeout\tMaximum allowed value (in milliseconds) for timeout parameter. See query-context to know more about timeout. 
Query is rejected if the query context timeout is greater than this value.\tLong.MAX_VALUE druid.server.http.maxRequestHeaderSize\tMaximum size of a request header in bytes. Larger headers consume more memory and can make a server more vulnerable to denial of service attacks.\t8 * 1024 druid.server.http.contentSecurityPolicy\tContent-Security-Policy header value to set on each non-POST response. Setting this property to an empty string, or omitting it, both result in the default frame-ancestors: none being set.\tframe-ancestors 'none' druid.server.http.enableHSTS\tIf set to true, druid services will add strict transport security header Strict-Transport-Security: max-age=63072000; includeSubDomains to all HTTP responses\tfalse Client Configuration Druid Brokers use an HTTP client to communicate with data servers (Historical servers and real-time tasks). This client has the following configuration options. Property\tDescription\tDefaultdruid.broker.http.numConnections\tSize of connection pool for the Broker to connect to Historical and real-time processes. If there are more queries than this number that all need to speak to the same process, then they will queue up.\t20 druid.broker.http.eagerInitialization\tIndicates that http connections from Broker to Historical and Real-time processes should be eagerly initialized. If set to true, numConnections connections are created upon initialization\ttrue druid.broker.http.compressionCodec\tCompression codec the Broker uses to communicate with Historical and real-time processes. May be "gzip" or "identity".\tgzip druid.broker.http.readTimeout\tThe timeout for data reads from Historical servers and real-time tasks.\tPT15M druid.broker.http.unusedConnectionTimeout\tThe timeout for idle connections in connection pool. The connection in the pool will be closed after this timeout and a new one will be established. This timeout should be less than druid.broker.http.readTimeout. Set this timeout = ~90% of druid.broker.http.readTimeout\tPT4M druid.broker.http.maxQueuedBytes\tMaximum number of bytes queued per query before exerting backpressure on channels to the data servers. Similar to druid.server.http.maxScatterGatherBytes, except unlike that configuration, this one will trigger backpressure rather than query failure. Zero means disabled. Can be overridden by the "maxQueuedBytes" query context parameter. Human-readable format is supported, see here.\t25MB or 2% of maximum Broker heap size, whichever is greater druid.broker.http.numMaxThreads\t`Maximum number of I/O worker threads\tmax(10, ((number of cores * 17) / 16 + 2) + 30)` Retry Policy Druid broker can optionally retry queries internally for transient errors. Property\tDescription\tDefaultdruid.broker.retryPolicy.numTries\tNumber of tries.\t1 Processing The broker uses processing configs for nested groupBy queries. Property\tDescription\tDefaultdruid.processing.buffer.sizeBytes\tThis specifies a buffer size (less than 2GiB) for the storage of intermediate results. The computation engine in both the Historical and Realtime processes will use a scratch buffer of this size to do all of their intermediate computations off-heap. Larger values allow for more aggregations in a single pass over the data while smaller values can require more passes depending on the query that is being executed. Human-readable format is supported.\tauto (max 1GiB) druid.processing.buffer.poolCacheInitialCount\tinitializes the number of buffers allocated on the intermediate results pool. 
Note that pool can create more buffers if necessary.\t0 druid.processing.buffer.poolCacheMaxCount\tprocessing buffer pool caches the buffers for later use, this is the maximum count cache will grow to. note that pool can create more buffers than it can cache if necessary.\tInteger.MAX_VALUE druid.processing.numMergeBuffers\tThe number of direct memory buffers available for merging query results. The buffers are sized by druid.processing.buffer.sizeBytes. This property is effectively a concurrency limit for queries that require merging buffers. If you are using any queries that require merge buffers (currently, just groupBy v2) then you should have at least two of these.\tmax(2, druid.processing.numThreads / 4) druid.processing.fifo\tIf the processing queue should treat tasks of equal priority in a FIFO manner\ttrue druid.processing.tmpDir\tPath where temporary files created while processing a query should be stored. If specified, this configuration takes priority over the default java.io.tmpdir path.\tpath represented by java.io.tmpdir druid.processing.merge.useParallelMergePool\tEnable automatic parallel merging for Brokers on a dedicated async ForkJoinPool. If false, instead merges will be done serially on the HTTP thread pool.\ttrue druid.processing.merge.pool.parallelism\tSize of ForkJoinPool. Note that the default configuration assumes that the value returned by Runtime.getRuntime().availableProcessors() represents 2 hyper-threads per physical core, and multiplies this value by 0.75 in attempt to size 1.5 times the number of physical cores.\tRuntime.getRuntime().availableProcessors() * 0.75 (rounded up) druid.processing.merge.pool.defaultMaxQueryParallelism\tDefault maximum number of parallel merge tasks per query. Note that the default configuration assumes that the value returned by Runtime.getRuntime().availableProcessors() represents 2 hyper-threads per physical core, and multiplies this value by 0.5 in attempt to size to the number of physical cores.\tRuntime.getRuntime().availableProcessors() * 0.5 (rounded up) druid.processing.merge.pool.awaitShutdownMillis\tTime to wait for merge ForkJoinPool tasks to complete before ungracefully stopping on process shutdown in milliseconds.\t60_000 druid.processing.merge.task.targetRunTimeMillis\tIdeal run-time of each ForkJoinPool merge task, before forking off a new task to continue merging sequences.\t100 druid.processing.merge.task.initialYieldNumRows\tNumber of rows to yield per ForkJoinPool merge task, before forking off a new task to continue merging sequences.\t16384 druid.processing.merge.task.smallBatchNumRows\tSize of result batches to operate on in ForkJoinPool merge tasks.\t4096 The amount of direct memory needed by Druid is at leastdruid.processing.buffer.sizeBytes * (druid.processing.numMergeBuffers + 1). You can ensure at least this amount of direct memory is available by providing -XX:MaxDirectMemorySize=<VALUE> at the command line. Broker query configuration See general query configuration. Broker Generated Query Configuration Supplementation The Broker generates queries internally. This configuration section describes how an operator can augment the configuration of these queries. As of now the only supported augmentation is overriding the default query context. This allows an operator the flexibility to adjust it as they see fit. A common use of this configuration is to override the query priority of the cluster generated queries in order to avoid running as a default priority of 0. 
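As a brief, illustrative sketch only (the JSON-style map syntax and the priority value of 5 are assumptions for the example, not a recommendation), an operator could add the following to the Broker runtime.properties to run internally generated queries at a non-default priority:
# Override the default query context of broker-generated queries
druid.broker.internal.query.config.context={"priority": 5}
The value is a key:value map of query context, so in principle any context key described in this reference can be supplied the same way.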
Property\tDescription\tDefaultdruid.broker.internal.query.config.context\tA string formatted key:value map of a query context to add to internally generated broker queries.\tnull SQL The Druid SQL server is configured through the following properties on the Broker. Property\tDescription\tDefaultdruid.sql.enable\tWhether to enable SQL at all, including background metadata fetching. If false, this overrides all other SQL-related properties and disables SQL metadata, serving, and planning completely.\ttrue druid.sql.avatica.enable\tWhether to enable JDBC querying at /druid/v2/sql/avatica/.\ttrue druid.sql.avatica.maxConnections\tMaximum number of open connections for the Avatica server. These are not HTTP connections, but are logical client connections that may span multiple HTTP connections.\t25 druid.sql.avatica.maxRowsPerFrame\tMaximum acceptable value for the JDBC client Statement.setFetchSize method. This setting determines the maximum number of rows that Druid will populate in a single 'fetch' for a JDBC ResultSet. Set this property to -1 to enforce no row limit on the server-side and potentially return the entire set of rows on the initial statement execution. If the JDBC client calls Statement.setFetchSize with a value other than -1, Druid uses the lesser value of the client-provided limit and maxRowsPerFrame. If maxRowsPerFrame is smaller than minRowsPerFrame, then the ResultSet size will be fixed. To handle queries that produce results with a large number of rows, you can increase value of druid.sql.avatica.maxRowsPerFrame to reduce the number of fetches required to completely transfer the result set.\t5,000 druid.sql.avatica.minRowsPerFrame\tMinimum acceptable value for the JDBC client Statement.setFetchSize method. The value for this property must greater than 0. If the JDBC client calls Statement.setFetchSize with a lesser value, Druid uses minRowsPerFrame instead. If maxRowsPerFrame is less than minRowsPerFrame, Druid uses the minimum value of the two. For handling queries which produce results with a large number of rows, you can increase this value to reduce the number of fetches required to completely transfer the result set.\t100 druid.sql.avatica.maxStatementsPerConnection\tMaximum number of simultaneous open statements per Avatica client connection.\t4 druid.sql.avatica.connectionIdleTimeout\tAvatica client connection idle timeout.\tPT5M druid.sql.avatica.fetchTimeoutMs\tAvatica fetch timeout, in milliseconds. When a request for the next batch of data takes longer than this time, Druid returns an empty result set, causing the client to poll again. This avoids HTTP timeouts for long-running queries. The default of 5 sec. is good for most cases.\t5000 druid.sql.http.enable\tWhether to enable JSON over HTTP querying at /druid/v2/sql/.\ttrue druid.sql.planner.maxTopNLimit\tMaximum threshold for a TopN query. Higher limits will be planned as GroupBy queries instead.\t100000 druid.sql.planner.metadataRefreshPeriod\tThrottle for metadata refreshes.\tPT1M druid.sql.planner.metadataColumnTypeMergePolicy\tDefines how column types will be chosen when faced with differences between segments when computing the SQL schema. Options are specified as a JSON object, with valid choices of leastRestrictive or latestInterval. For leastRestrictive, Druid will automatically widen the type computed for the schema to a type which data across all segments can be converted into, however planned schema migrations can only take effect once all segments have been re-ingested to the new schema. 
With latestInterval, the column type in most recent time chunks defines the type for the schema.\tleastRestrictive druid.sql.planner.useApproximateCountDistinct\tWhether to use an approximate cardinality algorithm for COUNT(DISTINCT foo).\ttrue druid.sql.planner.useGroupingSetForExactDistinct\tOnly relevant when useApproximateCountDistinct is disabled. If set to true, exact distinct queries are re-written using grouping sets. Otherwise, exact distinct queries are re-written using joins. This should be set to true for group by query with multiple exact distinct aggregations. This flag can be overridden per query.\tfalse druid.sql.planner.useApproximateTopN\tWhether to use approximate TopN queries when a SQL query could be expressed as such. If false, exact GroupBy queries will be used instead.\ttrue druid.sql.planner.requireTimeCondition\tWhether to require SQL to have filter conditions on time column so that all generated native queries will have user specified intervals. If true, all queries without filter condition on time column will fail\tfalse druid.sql.planner.sqlTimeZone\tSets the default time zone for the server, which will affect how time functions and timestamp literals behave. Should be a time zone name like "America/Los_Angeles" or offset like "-08:00".\tUTC druid.sql.planner.metadataSegmentCacheEnable\tWhether to keep a cache of published segments in broker. If true, broker polls coordinator in background to get segments from metadata store and maintains a local cache. If false, coordinator's REST API will be invoked when broker needs published segments info.\tfalse druid.sql.planner.metadataSegmentPollPeriod\tHow often to poll coordinator for published segments list if druid.sql.planner.metadataSegmentCacheEnable is set to true. Poll period is in milliseconds.\t60000 druid.sql.planner.authorizeSystemTablesDirectly\tIf true, Druid authorizes queries against any of the system schema tables (sys in SQL) as SYSTEM_TABLE resources which require READ access, in addition to permissions based content filtering.\tfalse druid.sql.planner.useNativeQueryExplain\tIf true, EXPLAIN PLAN FOR will return the explain plan as a JSON representation of equivalent native query(s), else it will return the original version of explain plan generated by Calcite. It can be overridden per query with useNativeQueryExplain context key.\ttrue druid.sql.planner.maxNumericInFilters\tMax limit for the amount of numeric values that can be compared for a string type dimension when the entire SQL WHERE clause of a query translates to an OR of Bound filter. By default, Druid does not restrict the amount of numeric Bound Filters on String columns, although this situation may block other queries from running. Set this property to a smaller value to prevent Druid from running queries that have prohibitively long segment processing times. The optimal limit requires some trial and error; we recommend starting with 100. Users who submit a query that exceeds the limit of maxNumericInFilters should instead rewrite their queries to use strings in the WHERE clause instead of numbers. For example, WHERE someString IN (‘123’, ‘456’). If this value is disabled, maxNumericInFilters set through query context is ignored.\t-1 (disabled) druid.sql.approxCountDistinct.function\tImplementation to use for the APPROX_COUNT_DISTINCT function. Without extensions loaded, the only valid value is APPROX_COUNT_DISTINCT_BUILTIN (a HyperLogLog, or HLL, based implementation). 
If the DataSketches extension is loaded, this can also be APPROX_COUNT_DISTINCT_DS_HLL (alternative HLL implementation) or APPROX_COUNT_DISTINCT_DS_THETA. Theta sketches use significantly more memory than HLL sketches, so you should prefer one of the two HLL implementations.\tAPPROX_COUNT_DISTINCT_BUILTIN info Previous versions of Druid had properties named druid.sql.planner.maxQueryCount and druid.sql.planner.maxSemiJoinRowsInMemory. These properties are no longer available. Since Druid 0.18.0, you can use druid.server.http.maxSubqueryRows to control the maximum number of rows permitted across all subqueries. Broker Caching You can optionally only configure caching to be enabled on the Broker by setting caching configs here. Property\tPossible Values\tDescription\tDefaultdruid.broker.cache.useCache\ttrue, false\tEnable the cache on the Broker.\tfalse druid.broker.cache.populateCache\ttrue, false\tPopulate the cache on the Broker.\tfalse druid.broker.cache.useResultLevelCache\ttrue, false\tEnable result level caching on the Broker.\tfalse druid.broker.cache.populateResultLevelCache\ttrue, false\tPopulate the result level cache on the Broker.\tfalse druid.broker.cache.resultLevelCacheLimit\tpositive integer\tMaximum size of query response that can be cached.\tInteger.MAX_VALUE druid.broker.cache.unCacheable\tAll druid query types\tAll query types to not cache.\t[] druid.broker.cache.cacheBulkMergeLimit\tpositive integer or 0\tQueries with more segments than this number will not attempt to fetch from cache at the broker level, leaving potential caching fetches (and cache result merging) to the Historicals\tInteger.MAX_VALUE druid.broker.cache.maxEntrySize\tpositive integer\tMaximum cache entry size in bytes.\t1_000_000 See cache configuration for how to configure cache settings. info Note: Even if cache is enabled, for groupBy v2 queries, both of non-result level cache and result level cache do not work on Brokers. See Differences between v1 and v2 and Query caching for more information. Segment Discovery Property\tPossible Values\tDescription\tDefaultdruid.serverview.type\tbatch or http\tSegment discovery method to use. "http" enables discovering segments using HTTP instead of ZooKeeper.\thttp druid.broker.segment.watchedTiers\tList of strings\tThe Broker watches segment announcements from processes that serve segments to build a cache to relate each process to the segments it serves. This configuration allows the Broker to only consider segments being served from a list of tiers. By default, Broker considers all tiers. This can be used to partition your dataSources in specific Historical tiers and configure brokers in partitions so that they are only queryable for specific dataSources. This config is mutually exclusive from druid.broker.segment.ignoredTiers and at most one of these can be configured on a Broker.\tnone druid.broker.segment.ignoredTiers\tList of strings\tThe Broker watches segment announcements from processes that serve segments to build a cache to relate each process to the segments it serves. This configuration allows the Broker to ignore the segments being served from a list of tiers. By default, Broker considers all tiers. 
This config is mutually exclusive from druid.broker.segment.watchedTiers and at most one of these can be configured on a Broker.\tnone druid.broker.segment.watchedDataSources\tList of strings\tBroker watches the segment announcements from processes serving segments to build cache of which process is serving which segments, this configuration allows to only consider segments being served from a whitelist of dataSources. By default, Broker would consider all datasources. This can be used to configure brokers in partitions so that they are only queryable for specific dataSources.\tnone druid.broker.segment.watchRealtimeTasks\tBoolean\tThe Broker watches segment announcements from processes that serve segments to build a cache to relate each process to the segments it serves. When watchRealtimeTasks is true, the Broker watches for segment announcements from both Historicals and realtime processes. To configure a broker to exclude segments served by realtime processes, set watchRealtimeTasks to false.\ttrue druid.broker.segment.awaitInitializationOnStart\tBoolean\tWhether the Broker will wait for its view of segments to fully initialize before starting up. If set to 'true', the Broker's HTTP server will not start up, and the Broker will not announce itself as available, until the server view is initialized. See also druid.sql.planner.awaitInitializationOnStart, a related setting.\ttrue "},{"title":"Cache Configuration","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#cache-configuration","content":"This section describes caching configuration that is common to Broker, Historical, and MiddleManager/Peon processes. Caching could optionally be enabled on the Broker, Historical, and MiddleManager/Peon processes. SeeBroker, Historical, and Peon configuration options for how to enable it for different processes. Druid uses a local in-memory cache by default, unless a different type of cache is specified. Use the druid.cache.type configuration to set a different kind of cache. Cache settings are set globally, so the same configuration can be re-used for both Broker and Historical processes, when defined in the common properties file. "},{"title":"Cache Type","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#cache-type","content":"Property\tPossible Values\tDescription\tDefaultdruid.cache.type\tlocal, memcached, hybrid, caffeine\tThe type of cache to use for queries. See below of the configuration options for each cache type\tcaffeine Local Cache info DEPRECATED: Use caffeine (default as of v0.12.0) instead The local cache is deprecated in favor of the Caffeine cache, and may be removed in a future version of Druid. The Caffeine cache affords significantly better performance and control over eviction behavior compared to local cache, and is recommended in any situation where you are using JRE 8u60 or higher. A simple in-memory LRU cache. Local cache resides in JVM heap memory, so if you enable it, make sure you increase heap size accordingly. Property\tDescription\tDefaultdruid.cache.sizeInBytes\tMaximum cache size in bytes. Zero disables caching.\t0 druid.cache.initialSize\tInitial size of the hashtable backing the cache.\t500000 druid.cache.logEvictionCount\tIf non-zero, log cache eviction every logEvictionCount items.\t0 Caffeine Cache A highly performant local cache implementation for Druid based on Caffeine. Requires a JRE8u60 or higher if using COMMON_FJP. 
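For example, a minimal caching setup that enables the default Caffeine cache on a Historical could look like the following in runtime.properties; the 512 MiB size and the choice of process are illustrative assumptions, not tuning advice:
# Use the default Caffeine cache with a 512 MiB on-heap limit
druid.cache.type=caffeine
druid.cache.sizeInBytes=536870912
# Enable segment-level caching on the Historical
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
Because the Caffeine cache lives on the JVM heap, leave corresponding headroom in the process heap size when choosing druid.cache.sizeInBytes.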
Configuration Below are the configuration options known to this module: runtime.properties\tDescription\tDefaultdruid.cache.type\tSet this to caffeine or leave out parameter\tcaffeine druid.cache.sizeInBytes\tThe maximum size of the cache in bytes on heap. It can be configured as described in here.\tmin(1GiB, Runtime.maxMemory / 10) druid.cache.expireAfter\tThe time (in ms) after an access for which a cache entry may be expired\tNone (no time limit) druid.cache.cacheExecutorFactory\tThe executor factory to use for Caffeine maintenance. One of COMMON_FJP, SINGLE_THREAD, or SAME_THREAD\tForkJoinPool common pool (COMMON_FJP) druid.cache.evictOnClose\tIf a close of a namespace (ex: removing a segment from a process) should cause an eager eviction of associated cache values\tfalse druid.cache.cacheExecutorFactory Here are the possible values for druid.cache.cacheExecutorFactory, which controls how maintenance tasks are run COMMON_FJP (default) use the common ForkJoinPool. Should use with JRE 8u60 or higher. Older versions of the JRE may have worse performance than newer JRE versions.SINGLE_THREAD Use a single-threaded executor.SAME_THREAD Cache maintenance is done eagerly. Metrics In addition to the normal cache metrics, the caffeine cache implementation also reports the following in both total and delta Metric\tDescription\tNormal valuequery/cache/caffeine/*/requests\tCount of hits or misses\thit + miss query/cache/caffeine/*/loadTime\tLength of time caffeine spends loading new values (unused feature)\t0 query/cache/caffeine/*/evictionBytes\tSize in bytes that have been evicted from the cache\tVaries, should tune cache sizeInBytes so that sizeInBytes/evictionBytes is approximately the rate of cache churn you desire Memcached Uses memcached as cache backend. This allows all processes to share the same cache. Property\tDescription\tDefaultdruid.cache.expiration\tMemcached expiration time.\t2592000 (30 days) druid.cache.timeout\tMaximum time in milliseconds to wait for a response from Memcached.\t500 druid.cache.hosts\tComma separated list of Memcached hosts <host:port>.\tnone druid.cache.maxObjectSize\tMaximum object size in bytes for a Memcached object.\t52428800 (50 MiB) druid.cache.memcachedPrefix\tKey prefix for all keys in Memcached.\tdruid druid.cache.numConnections\tNumber of memcached connections to use.\t1 druid.cache.protocol\tMemcached communication protocol. Can be binary or text.\tbinary druid.cache.locator\tMemcached locator. Can be consistent or array_mod.\tconsistent Hybrid Uses a combination of any two caches as a two-level L1 / L2 cache. This may be used to combine a local in-memory cache with a remote memcached cache. Cache requests will first check L1 cache before checking L2. If there is an L1 miss and L2 hit, it will also populate L1. Property\tDescription\tDefaultdruid.cache.l1.type\ttype of cache to use for L1 cache. See druid.cache.type configuration for valid types.\tcaffeine druid.cache.l2.type\ttype of cache to use for L2 cache. See druid.cache.type configuration for valid types.\tcaffeine druid.cache.l1.*\tAny property valid for the given type of L1 cache can be set using this prefix. For instance, if you are using a caffeine L1 cache, specify druid.cache.l1.sizeInBytes to set its size.\tdefaults are the same as for the given cache type. druid.cache.l2.*\tPrefix for L2 cache settings, see description for L1.\tdefaults are the same as for the given cache type. druid.cache.useL2\tA boolean indicating whether to query L2 cache, if it's a miss in L1. 
It makes sense to configure this to false on Historical processes, if L2 is a remote cache like memcached, and this cache is also used on brokers, because in this case if a query reaches a Historical it means that a broker didn't find the corresponding results in the same remote cache, so a query to the remote cache from the Historical is guaranteed to be a miss.\ttrue druid.cache.populateL2\tA boolean indicating whether to put results into L2 cache.\ttrue "},{"title":"General query configuration","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#general-query-configuration","content":"This section describes configurations that control behavior of Druid's query types, applicable to Broker, Historical, and MiddleManager processes. "},{"title":"Overriding default query context values","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#overriding-default-query-context-values","content":"Any query context general parameter default value can be overridden by setting a runtime property in the format druid.query.default.context.{query_context_key}. The druid.query.default.context.{query_context_key} runtime property prefix applies to all current and future query context keys, in the same way as a query context parameter passed with the query. Note that the runtime property value is overridden if a value for the same key is explicitly specified in the query context. The precedence chain for query context values is as follows: hard-coded default value in Druid code <- runtime property not prefixed with druid.query.default.context <- runtime property prefixed with druid.query.default.context <- context parameter in the query. Note that not every query context key has a runtime property not prefixed with druid.query.default.context that can override the hard-coded default value. For example, maxQueuedBytes has druid.broker.http.maxQueuedBytes, but joinFilterRewriteMaxSize does not. Hence, the only way to override the hard-coded default value of joinFilterRewriteMaxSize is with the runtime property druid.query.default.context.joinFilterRewriteMaxSize. To further elaborate on the previous example: If neither druid.broker.http.maxQueuedBytes nor druid.query.default.context.maxQueuedBytes is set and the query does not have maxQueuedBytes in the context, then the hard-coded value in Druid code is used. If the runtime properties contain only druid.broker.http.maxQueuedBytes=x and the query does not have maxQueuedBytes in the context, then the value of the property, x, is used. However, if the query does have maxQueuedBytes in the context, then that value is used instead. If the runtime properties contain only druid.query.default.context.maxQueuedBytes=y, or contain both druid.broker.http.maxQueuedBytes=x and druid.query.default.context.maxQueuedBytes=y, then the value of druid.query.default.context.maxQueuedBytes, y, is used (given that the query does not have maxQueuedBytes in the context). If the query does have maxQueuedBytes in the context, then that value is used instead. 
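To make the precedence rules above concrete, the following runtime.properties sketch is purely illustrative; the numeric values are arbitrary placeholders:
# Cluster-wide defaults for query context keys
druid.query.default.context.joinFilterRewriteMaxSize=15000
druid.query.default.context.maxQueuedBytes=25000000
# Non-prefixed property for the same key; the prefixed property above takes precedence
druid.broker.http.maxQueuedBytes=50000000
With this configuration, a query that does not set maxQueuedBytes in its context runs with 25000000, while a query that sets maxQueuedBytes explicitly in its context uses its own value.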
"},{"title":"TopN query config","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#topn-query-config","content":"Property\tDescription\tDefaultdruid.query.topN.minTopNThreshold\tSee TopN Aliasing for details.\t1000 "},{"title":"Search query config","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#search-query-config","content":"Property\tDescription\tDefaultdruid.query.search.maxSearchLimit\tMaximum number of search results to return.\t1000 druid.query.search.searchStrategy\tDefault search query strategy.\tuseIndexes "},{"title":"SegmentMetadata query config","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#segmentmetadata-query-config","content":"Property\tDescription\tDefaultdruid.query.segmentMetadata.defaultHistory\tWhen no interval is specified in the query, use a default interval of defaultHistory before the end time of the most recent segment, specified in ISO8601 format. This property also controls the duration of the default interval used by GET /druid/v2/datasources/{dataSourceName} interactions for retrieving datasource dimensions/metrics.\tP1W druid.query.segmentMetadata.defaultAnalysisTypes\tThis can be used to set the Default Analysis Types for all segment metadata queries, this can be overridden when making the query\t["cardinality", "interval", "minmax"] "},{"title":"GroupBy query config","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#groupby-query-config","content":"This section describes the configurations for groupBy queries. You can set the runtime properties in the runtime.properties file on Broker, Historical, and MiddleManager processes. You can set the query context parameters through the query context. Configurations for groupBy v2 Supported runtime properties: Property\tDescription\tDefaultdruid.query.groupBy.maxSelectorDictionarySize\tMaximum amount of heap space (approximately) to use for per-segment string dictionaries. See groupBy memory tuning and resource limits for details.\t100000000 druid.query.groupBy.maxMergingDictionarySize\tMaximum amount of heap space (approximately) to use for per-query string dictionaries. When the dictionary exceeds this size, a spill to disk will be triggered. See groupBy memory tuning and resource limits for details.\t100000000 druid.query.groupBy.maxOnDiskStorage\tMaximum amount of disk space to use, per-query, for spilling result sets to disk when either the merging buffer or the dictionary fills up. Queries that exceed this limit will fail. Set to zero to disable disk spilling.\t0 (disabled) druid.query.groupBy.defaultOnDiskStorage\tDefault amount of disk space to use, per-query, for spilling the result sets to disk when either the merging buffer or the dictionary fills up. Set to zero to disable disk spilling for queries which don't override maxOnDiskStorage in their context.\tdruid.query.groupBy.maxOnDiskStorage Supported query contexts: Key\tDescriptionmaxSelectorDictionarySize\tCan be used to lower the value of druid.query.groupBy.maxMergingDictionarySize for this query. maxMergingDictionarySize\tCan be used to lower the value of druid.query.groupBy.maxMergingDictionarySize for this query. maxOnDiskStorage\tCan be used to set maxOnDiskStorage to a value between 0 and druid.query.groupBy.maxOnDiskStorage for this query. If this query context override exceeds druid.query.groupBy.maxOnDiskStorage, the query will use druid.query.groupBy.maxOnDiskStorage. 
Omitting this from the query context will cause the query to use druid.query.groupBy.defaultOnDiskStorage for maxOnDiskStorage "},{"title":"Advanced configurations","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#advanced-configurations","content":"Common configurations for all groupBy strategies Supported runtime properties: Property\tDescription\tDefaultdruid.query.groupBy.defaultStrategy\tDefault groupBy query strategy.\tv2 druid.query.groupBy.singleThreaded\tMerge results using a single thread.\tfalse Supported query contexts: Key\tDescriptiongroupByStrategy\tOverrides the value of druid.query.groupBy.defaultStrategy for this query. groupByIsSingleThreaded\tOverrides the value of druid.query.groupBy.singleThreaded for this query. GroupBy v2 configurations Supported runtime properties: Property\tDescription\tDefaultdruid.query.groupBy.bufferGrouperInitialBuckets\tInitial number of buckets in the off-heap hash table used for grouping results. Set to 0 to use a reasonable default (1024).\t0 druid.query.groupBy.bufferGrouperMaxLoadFactor\tMaximum load factor of the off-heap hash table used for grouping results. When the load factor exceeds this size, the table will be grown or spilled to disk. Set to 0 to use a reasonable default (0.7).\t0 druid.query.groupBy.forceHashAggregation\tForce to use hash-based aggregation.\tfalse druid.query.groupBy.intermediateCombineDegree\tNumber of intermediate processes combined together in the combining tree. Higher degrees will need less threads which might be helpful to improve the query performance by reducing the overhead of too many threads if the server has sufficiently powerful CPU cores.\t8 druid.query.groupBy.numParallelCombineThreads\tHint for the number of parallel combining threads. This should be larger than 1 to turn on the parallel combining feature. The actual number of threads used for parallel combining is min(druid.query.groupBy.numParallelCombineThreads, druid.processing.numThreads).\t1 (disabled) Supported query contexts: Key\tDescription\tDefaultbufferGrouperInitialBuckets\tOverrides the value of druid.query.groupBy.bufferGrouperInitialBuckets for this query.\tNone bufferGrouperMaxLoadFactor\tOverrides the value of druid.query.groupBy.bufferGrouperMaxLoadFactor for this query.\tNone forceHashAggregation\tOverrides the value of druid.query.groupBy.forceHashAggregation\tNone intermediateCombineDegree\tOverrides the value of druid.query.groupBy.intermediateCombineDegree\tNone numParallelCombineThreads\tOverrides the value of druid.query.groupBy.numParallelCombineThreads\tNone sortByDimsFirst\tSort the results first by dimension values and then by timestamp.\tfalse forceLimitPushDown\tWhen all fields in the orderby are part of the grouping key, the broker will push limit application down to the Historical processes. When the sorting order uses fields that are not in the grouping key, applying this optimization can result in approximate results with unknown accuracy, so this optimization is disabled by default in that case. Enabling this context flag turns on limit push down for limit/orderbys that contain non-grouping key columns.\tfalse GroupBy v1 configurations Supported runtime properties: Property\tDescription\tDefaultdruid.query.groupBy.maxIntermediateRows\tMaximum number of intermediate rows for the per-segment grouping engine. This is a tuning parameter that does not impose a hard limit; rather, it potentially shifts merging work from the per-segment engine to the overall merging index. 
Queries that exceed this limit will not fail.\t50000 druid.query.groupBy.maxResults\tMaximum number of results. Queries that exceed this limit will fail.\t500000 Supported query contexts: Key\tDescription\tDefaultmaxIntermediateRows\tIgnored by groupBy v2. Can be used to lower the value of druid.query.groupBy.maxIntermediateRows for a groupBy v1 query.\tNone maxResults\tIgnored by groupBy v2. Can be used to lower the value of druid.query.groupBy.maxResults for a groupBy v1 query.\tNone useOffheap\tIgnored by groupBy v2, and no longer supported for groupBy v1. Enabling this option with groupBy v1 will result in an error. For off-heap aggregation, switch to groupBy v2, which always operates off-heap.\tfalse Expression processing configurations Key\tDescription\tDefaultdruid.expressions.useStrictBooleans\tControls the behavior of Druid boolean operators and functions, if set to true all boolean values will be either a 1 or 0. See expression documentation\tfalse druid.expressions.allowNestedArrays\tIf enabled, Druid array expressions can create nested arrays.\tfalse "},{"title":"Router","type":1,"pageTitle":"Configuration reference","url":"/docs/27.0.0/configuration/#router","content":"Router Process Configs Property\tDescription\tDefaultdruid.host\tThe host for the current process. This is used to advertise the current processes location as reachable from another process and should generally be specified such that http://${druid.host}/ could actually talk to this process\tInetAddress.getLocalHost().getCanonicalHostName() druid.bindOnHost\tIndicating whether the process's internal jetty server bind on druid.host. Default is false, which means binding to all interfaces.\tfalse druid.plaintextPort\tThis is the port to actually listen on; unless port mapping is used, this will be the same port as is on druid.host\t8888 druid.tlsPort\tTLS port for HTTPS connector, if druid.enableTlsPort is set then this config will be used. If druid.host contains port then that port will be ignored. This should be a non-negative Integer.\t9088 druid.service\tThe name of the service. This is used as a dimension when emitting metrics and alerts to differentiate between the various services\tdruid/router Runtime Configuration Property\tDescription\tDefaultdruid.router.defaultBrokerServiceName\tThe default Broker to connect to in case service discovery fails.\tdruid/broker druid.router.tierToBrokerMap\tQueries for a certain tier of data are routed to their appropriate Broker. This value should be an ordered JSON map of tiers to Broker names. The priority of Brokers is based on the ordering.\t{"_default_tier": "<defaultBrokerServiceName>"} druid.router.defaultRule\tThe default rule for all datasources.\t"_default" druid.router.pollPeriod\tHow often to poll for new rules.\tPT1M druid.router.sql.enable\tEnable routing of SQL queries using strategies. Whentrue, the Router uses the strategies defined in druid.router.strategies to determine the broker service for a given SQL query. When false, the Router uses the defaultBrokerServiceName.\tfalse druid.router.strategies\tPlease see Router Strategies for details.\t[{"type":"timeBoundary"},{"type":"priority"}] druid.router.avatica.balancer.type\tClass to use for balancing Avatica queries across Brokers. Please see Avatica Query Balancing.\trendezvousHash druid.router.managementProxy.enabled\tEnables the Router's management proxy functionality.\tfalse druid.router.http.numConnections\tSize of connection pool for the Router to connect to Broker processes. 
If there are more queries than this number that all need to speak to the same process, then they will queue up.\t20 druid.router.http.eagerInitialization\tIndicates that http connections from Router to Broker should be eagerly initialized. If set to true, numConnections connections are created upon initialization\ttrue druid.router.http.readTimeout\tThe timeout for data reads from Broker processes.\tPT15M druid.router.http.numMaxThreads\tMaximum number of worker threads to handle HTTP requests and responses\tmax(10, ((number of cores * 17) / 16 + 2) + 30) druid.router.http.numRequestsQueued\tMaximum number of requests that may be queued to a destination\t1024 druid.router.http.requestBuffersize\tSize of the content buffer for receiving requests. These buffers are only used for active connections that have requests with bodies that will not fit within the header buffer\t8 * 1024 "},{"title":"Basic Security","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/druid-basic-security","content":"","keywords":""},{"title":"Configuration","type":1,"pageTitle":"Basic Security","url":"/docs/27.0.0/development/extensions-core/druid-basic-security#configuration","content":"The examples in the section use the following names for the Authenticators and Authorizers: MyBasicMetadataAuthenticatorMyBasicLDAPAuthenticatorMyBasicMetadataAuthorizerMyBasicLDAPAuthorizer These properties are not tied to specific Authenticator or Authorizer instances. To set the value for the configuration properties, add them to the common runtime properties file. "},{"title":"General properties","type":1,"pageTitle":"Basic Security","url":"/docs/27.0.0/development/extensions-core/druid-basic-security#general-properties","content":"druid.auth.basic.common.pollingPeriod Defines in milliseconds how often processes should poll the Coordinator for the current Druid metadata store authenticator/authorizer state. Required: No Default: 60000 druid.auth.basic.common.maxRandomDelay Defines in milliseconds the amount of random delay to add to the pollingPeriod, to spread polling requests across time. Required: No Default: 6000 druid.auth.basic.common.maxSyncRetries Determines how many times a service will retry if the authentication/authorization Druid metadata store state sync with the Coordinator fails. Required: No Default: 10 druid.auth.basic.common.cacheDirectory If defined, snapshots of the basic Authenticator and Authorizer Druid metadata store caches will be stored on disk in this directory. If this property is defined, when a service is starting, it will attempt to initialize its caches from these on-disk snapshots, if the service is unable to initialize its state by communicating with the Coordinator. Required: No Default: null "},{"title":"Authenticator","type":1,"pageTitle":"Basic Security","url":"/docs/27.0.0/development/extensions-core/druid-basic-security#authenticator","content":"To use the Basic authenticator, add an authenticator with type basic to the authenticatorChain. The default credentials validator (credentialsValidator) is metadata. To use the LDAP validator, define a credentials validator with a type of 'ldap'. 
Use the following syntax to configure a named authenticator: druid.auth.authenticator.<authenticatorName>.<authenticatorProperty> Example configuration of an authenticator that uses the Druid metadata store to look up and validate credentials: # Druid basic security druid.auth.authenticatorChain=["MyBasicMetadataAuthenticator"] druid.auth.authenticator.MyBasicMetadataAuthenticator.type=basic # Default password for 'admin' user, should be changed for production. druid.auth.authenticator.MyBasicMetadataAuthenticator.initialAdminPassword=password1 # Default password for internal 'druid_system' user, should be changed for production. druid.auth.authenticator.MyBasicMetadataAuthenticator.initialInternalClientPassword=password2 # Uses the metadata store for storing users, you can use authentication API to create new users and grant permissions druid.auth.authenticator.MyBasicMetadataAuthenticator.credentialsValidator.type=metadata # If true and the request credential doesn't exists in this credentials store, the request will proceed to next Authenticator in the chain. druid.auth.authenticator.MyBasicMetadataAuthenticator.skipOnFailure=false druid.auth.authenticator.MyBasicMetadataAuthenticator.authorizerName=MyBasicMetadataAuthorizer The remaining examples of authenticator configuration use either MyBasicMetadataAuthenticator or MyBasicLDAPAuthenticator as the authenticator name. Properties for Druid metadata store user authentication druid.auth.authenticator.MyBasicMetadataAuthenticator.initialAdminPassword Initial Password Provider for the automatically created default admin user. If no password is specified, the default admin user will not be created. If the default admin user already exists, setting this property will not affect its password. Required: No Default: null druid.auth.authenticator.MyBasicMetadataAuthenticator.initialInternalClientPassword Initial Password Provider for the default internal system user, used for internal process communication. If no password is specified, the default internal system user will not be created. If the default internal system user already exists, setting this property will not affect its password. Required: No Default: null druid.auth.authenticator.MyBasicMetadataAuthenticator.enableCacheNotifications If true, the Coordinator will notify Druid processes whenever a configuration change to this Authenticator occurs, allowing them to immediately update their state without waiting for polling. Required: No Default: True druid.auth.authenticator.MyBasicMetadataAuthenticator.cacheNotificationTimeout The timeout in milliseconds for the cache notifications. Required: No Default: 5000 druid.auth.authenticator.MyBasicMetadataAuthenticator.credentialIterations Number of iterations to use for password hashing. See Credential iterations and API performance Required: No Default: 10000 druid.auth.authenticator.MyBasicMetadataAuthenticator.credentialsValidator.type The type of credentials store (metadata) to validate requests credentials. Required: No Default: metadata druid.auth.authenticator.MyBasicMetadataAuthenticator.skipOnFailure If true and the request credential doesn't exists or isn't fully configured in the credentials store, the request will proceed to next Authenticator in the chain. Required: No Default: false druid.auth.authenticator.MyBasicMetadataAuthenticator.authorizerName Authorizer that requests should be directed to. 
Required: Yes Default: N/A Credential iterations and API performance As noted above, credentialIterations determines the number of iterations used to hash a password. A higher number increases security, but costs more in terms of CPU utilization. This cost affects API performance, including query times. The default setting of 10000 is intentionally high to prevent attackers from using brute force to guess passwords. You can decrease the number of iterations to speed up API response times, but it may expose your system to dictionary attacks. Therefore, only reduce the number of iterations if your environment fits one of the following conditions: All passwords are long and random which make them as safe as a randomly-generated token.You have secured network access to Druid so that no attacker can execute a dictionary attack against it. If Druid uses the default credentials validator (i.e., credentialsValidator.type=metadata), changing the credentialIterations value affects the number of hashing iterations only for users created after the change or for users who subsequently update their passwords via the /druid-ext/basic-security/authentication/db/basic/users/{userName}/credentials endpoint. If Druid uses the ldap validator, the change applies to any user at next log in (as well as to new users or users who update their passwords). Properties for LDAP user authentication druid.auth.authenticator.MyBasicLDAPAuthenticator.initialAdminPassword Initial Password Provider for the automatically created default admin user. If no password is specified, the default admin user will not be created. If the default admin user already exists, setting this property will not affect its password. Required: No Default: null druid.auth.authenticator.MyBasicLDAPAuthenticator.initialInternalClientPassword Initial Password Provider for the default internal system user, used for internal process communication. If no password is specified, the default internal system user will not be created. If the default internal system user already exists, setting this property will not affect its password. Required: No Default: null druid.auth.authenticator.MyBasicLDAPAuthenticator.enableCacheNotifications If true, the Coordinator will notify Druid processes whenever a configuration change to this Authenticator occurs, allowing them to immediately update their state without waiting for polling. Required: No Default: true druid.auth.authenticator.MyBasicLDAPAuthenticator.cacheNotificationTimeout The timeout in milliseconds for the cache notifications. Required: No Default: 5000 druid.auth.authenticator.MyBasicLDAPAuthenticator.credentialIterations Number of iterations to use for password hashing. Required: No Default: 10000 druid.auth.authenticator.MyBasicLDAPAuthenticator.credentialsValidator.type The type of credentials store (ldap) to validate requests credentials. Required: No Default: metadata druid.auth.authenticator.MyBasicLDAPAuthenticator.credentialsValidator.url URL of the LDAP server. Required: Yes Default: null druid.auth.authenticator.MyBasicLDAPAuthenticator.credentialsValidator.bindUser LDAP bind user username. Required: Yes Default: null druid.auth.authenticator.MyBasicLDAPAuthenticator.credentialsValidator.bindPassword Password Provider LDAP bind user password. Required: Yes Default: null druid.auth.authenticator.MyBasicLDAPAuthenticator.credentialsValidator.baseDn The point from where the LDAP server will search for users. 
Required: Yes Default: null druid.auth.authenticator.MyBasicLDAPAuthenticator.credentialsValidator.userSearch The filter/expression to use for the search. For example, (&(sAMAccountName=%s)(objectClass=user)) Required: Yes Default: null druid.auth.authenticator.MyBasicLDAPAuthenticator.credentialsValidator.userAttribute The attribute ID identifying the attribute that will be returned as part of the search. For example, sAMAccountName. Required: Yes Default: null druid.auth.authenticator.MyBasicLDAPAuthenticator.credentialsValidator.credentialVerifyDuration The duration in seconds for which valid credentials remain verifiable within the cache when not requested. Required: No Default: 600 druid.auth.authenticator.MyBasicLDAPAuthenticator.credentialsValidator.credentialMaxDuration The maximum duration in seconds that valid credentials can reside in the cache, regardless of how often they are requested. Required: No Default: 3600 druid.auth.authenticator.MyBasicLDAPAuthenticator.credentialsValidator.credentialCacheSize The valid credentials cache size. The cache uses an LRU policy. Required: No Default: 100 druid.auth.authenticator.MyBasicLDAPAuthenticator.skipOnFailure If true and the request credential doesn't exist or isn't fully configured in the credentials store, the request will proceed to the next Authenticator in the chain. Required: No Default: false druid.auth.authenticator.MyBasicLDAPAuthenticator.authorizerName Authorizer that requests should be directed to. Required: Yes Default: N/A "},{"title":"Escalator","type":1,"pageTitle":"Basic Security","url":"/docs/27.0.0/development/extensions-core/druid-basic-security#escalator","content":"The Escalator determines the authentication scheme to use for internal Druid cluster communications, for example, when a Broker service communicates with a Historical service during query processing. Example configuration: # Escalator druid.escalator.type=basic druid.escalator.internalClientUsername=druid_system druid.escalator.internalClientPassword=password2 druid.escalator.authorizerName=MyBasicMetadataAuthorizer Properties druid.escalator.internalClientUsername The escalator will use this username for requests made as the internal system user. Required: Yes Default: N/A druid.escalator.internalClientPassword The escalator will use this Password Provider for requests made as the internal system user. Required: Yes Default: N/A druid.escalator.authorizerName Authorizer that requests should be directed to. Required: Yes Default: N/A "},{"title":"Authorizer","type":1,"pageTitle":"Basic Security","url":"/docs/27.0.0/development/extensions-core/druid-basic-security#authorizer","content":"To use the Basic authorizer, add an authorizer with type basic to the authorizers list. Use the following syntax to configure a named authorizer: druid.auth.authorizer.<authorizerName>.<authorizerProperty> Example configuration: # Authorizer druid.auth.authorizers=["MyBasicMetadataAuthorizer"] druid.auth.authorizer.MyBasicMetadataAuthorizer.type=basic The examples in the rest of this article use MyBasicMetadataAuthorizer or MyBasicLDAPAuthorizer as the authorizer name. Properties for Druid metadata store user authorization druid.auth.authorizer.MyBasicMetadataAuthorizer.enableCacheNotifications If true, the Coordinator will notify Druid processes whenever a configuration change to this Authorizer occurs, allowing them to immediately update their state without waiting for polling. 
Required: No Default: true druid.auth.authorizer.MyBasicMetadataAuthorizer.cacheNotificationTimeout The timeout in milliseconds for the cache notifications. Required: No Default: 5000 druid.auth.authorizer.MyBasicMetadataAuthorizer.initialAdminUser The initial admin user, assigned the role defined in the initialAdminRole property if specified; otherwise the default admin role will be assigned. Required: No Default: admin druid.auth.authorizer.MyBasicMetadataAuthorizer.initialAdminRole The initial admin role to create if it doesn't already exist. Required: No Default: admin druid.auth.authorizer.MyBasicMetadataAuthorizer.roleProvider.type The type of role provider used to authorize request credentials. Required: No Default: metadata Properties for LDAP user authorization druid.auth.authorizer.MyBasicLDAPAuthorizer.enableCacheNotifications If true, the Coordinator will notify Druid processes whenever a configuration change to this Authorizer occurs, allowing them to immediately update their state without waiting for polling. Required: No Default: true druid.auth.authorizer.MyBasicLDAPAuthorizer.cacheNotificationTimeout The timeout in milliseconds for the cache notifications. Required: No Default: 5000 druid.auth.authorizer.MyBasicLDAPAuthorizer.initialAdminUser The initial admin user, assigned the role defined in the initialAdminRole property if specified; otherwise the default admin role will be assigned. Required: No Default: admin druid.auth.authorizer.MyBasicLDAPAuthorizer.initialAdminRole The initial admin role to create if it doesn't already exist. Required: No Default: admin druid.auth.authorizer.MyBasicLDAPAuthorizer.initialAdminGroupMapping The initial admin group mapping, assigned the role defined in the initialAdminRole property if specified; otherwise the default admin role will be assigned. The name of this initial admin group mapping will be set to adminGroupMapping. Required: No Default: null druid.auth.authorizer.MyBasicLDAPAuthorizer.roleProvider.type The type of role provider (ldap) used to authorize request credentials. Required: No Default: metadata druid.auth.authorizer.MyBasicLDAPAuthorizer.roleProvider.groupFilters Array of LDAP group filters used to filter out the allowed set of groups returned from the LDAP search. Filters can begin or end with a wildcard character (*) to limit or filter the set of groups available to the LDAP Authorizer. Required: No Default: null Properties for LDAPS Use the following properties to configure Druid authentication with LDAP over TLS (LDAPS). See Configure LDAP authentication for more information. druid.auth.basic.ssl.protocol SSL protocol to use. The TLS version is 1.2. Required: Yes Default: tls druid.auth.basic.ssl.trustStorePath Path to the trust store file. Required: Yes Default: N/A druid.auth.basic.ssl.trustStorePassword Password to access the trust store file. Required: Yes Default: N/A druid.auth.basic.ssl.trustStoreType Format of the trust store file. For Java the format is jks. Required: No Default: jks druid.auth.basic.ssl.trustStoreAlgorithm Algorithm used by the trust manager to validate certificate chains. Required: No Default: N/A druid.auth.basic.ssl.trustStorePassword Password details that enable access to the truststore. 
Required: No Default: N/A Example LDAPS configuration: druid.auth.basic.ssl.protocol=tls druid.auth.basic.ssl.trustStorePath=/usr/local/druid-path/certs/truststore.jks druid.auth.basic.ssl.trustStorePassword=xxxxx druid.auth.basic.ssl.trustStoreType=jks druid.auth.basic.ssl.trustStoreAlgorithm=PKIX You can configure druid.auth.basic.ssl.trustStorePassword to be a plain text password or you can set the password as an environment variable. See Password providers for more information. "},{"title":"Usage","type":1,"pageTitle":"Basic Security","url":"/docs/27.0.0/development/extensions-core/druid-basic-security#usage","content":""},{"title":"Coordinator Security API","type":1,"pageTitle":"Basic Security","url":"/docs/27.0.0/development/extensions-core/druid-basic-security#coordinator-security-api","content":"To use these APIs, a user needs read/write permissions for the CONFIG resource type with name "security". Authentication API Root path: /druid-ext/basic-security/authentication Each API endpoint includes {authenticatorName}, specifying which Authenticator instance is being configured. User/Credential Management GET(/druid-ext/basic-security/authentication/db/{authenticatorName}/users) Return a list of all user names. GET(/druid-ext/basic-security/authentication/db/{authenticatorName}/users/{userName}) Return the name and credentials information of the user with name {userName} POST(/druid-ext/basic-security/authentication/db/{authenticatorName}/users/{userName}) Create a new user with name {userName} DELETE(/druid-ext/basic-security/authentication/db/{authenticatorName}/users/{userName}) Delete the user with name {userName} POST(/druid-ext/basic-security/authentication/db/{authenticatorName}/users/{userName}/credentials) Assign a password used for HTTP basic authentication for {userName} Content: JSON password request object Example request body: { "password": "helloworld" } Cache Load Status GET(/druid-ext/basic-security/authentication/loadStatus) Return the current load status of the local caches of the authentication Druid metadata store. Authorization API Root path: /druid-ext/basic-security/authorization Each API endpoint includes {authorizerName}, specifying which Authorizer instance is being configured. User Creation/Deletion GET(/druid-ext/basic-security/authorization/db/{authorizerName}/users) Return a list of all user names. GET(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName}) Return the name and role information of the user with name {userName} Example output: { "name": "druid2", "roles": [ "druidRole" ] } This API supports the following flags: ?full: The response will also include the full information for each role currently assigned to the user. Example output: { "name": "druid2", "roles": [ { "name": "druidRole", "permissions": [ { "resourceAction": { "resource": { "name": "A", "type": "DATASOURCE" }, "action": "READ" }, "resourceNamePattern": "A" }, { "resourceAction": { "resource": { "name": "C", "type": "CONFIG" }, "action": "WRITE" }, "resourceNamePattern": "C" } ] } ] } The output format of this API when ?full is specified is deprecated and in later versions will be switched to the output format used when both ?full and ?simplifyPermissions flag is set. The resourceNamePattern is a compiled version of the resource name regex. 
It is redundant and complicates the use of this API for clients such as frontends that edit the authorization configuration, as the permission format in this output does not match the format used for adding permissions to a role. ?full?simplifyPermissions: When both ?full and ?simplifyPermissions are set, the permissions in the output will contain only a list of resourceAction objects, without the extraneous resourceNamePattern field. { "name": "druid2", "roles": [ { "name": "druidRole", "users": null, "permissions": [ { "resource": { "name": "A", "type": "DATASOURCE" }, "action": "READ" }, { "resource": { "name": "C", "type": "CONFIG" }, "action": "WRITE" } ] } ] } POST(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName}) Create a new user with name {userName} DELETE(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName}) Delete the user with name {userName} Group mapping Creation/Deletion GET(/druid-ext/basic-security/authorization/db/{authorizerName}/groupMappings) Return a list of all group mappings. GET(/druid-ext/basic-security/authorization/db/{authorizerName}/groupMappings/{groupMappingName}) Return the group mapping and role information of the group mapping with name {groupMappingName} POST(/druid-ext/basic-security/authorization/db/{authorizerName}/groupMappings/{groupMappingName}) Create a new group mapping with name {groupMappingName} Content: JSON group mapping object Example request body: { "name": "user", "groupPattern": "CN=aaa,OU=aaa,OU=Groupings,DC=corp,DC=company,DC=com", "roles": [ "user" ] } DELETE(/druid-ext/basic-security/authorization/db/{authorizerName}/groupMappings/{groupMappingName}) Delete the group mapping with name {groupMappingName} Role Creation/Deletion GET(/druid-ext/basic-security/authorization/db/{authorizerName}/roles) Return a list of all role names. GET(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName}) Return name and permissions for the role named {roleName}. Example output: { "name": "druidRole2", "permissions": [ { "resourceAction": { "resource": { "name": "E", "type": "DATASOURCE" }, "action": "WRITE" }, "resourceNamePattern": "E" } ] } The default output format of this API is deprecated and in later versions will be switched to the output format used when the ?simplifyPermissions flag is set. The resourceNamePattern is a compiled version of the resource name regex. It is redundant and complicates the use of this API for clients such as frontends that edit the authorization configuration, as the permission format in this output does not match the format used for adding permissions to a role. This API supports the following flags: ?full: The output will contain an extra users list, containing the users that currently have this role. {"users":["druid"]} ?simplifyPermissions: The permissions in the output will contain only a list of resourceAction objects, without the extraneous resourceNamePattern field. The users field will be null when ?full is not specified. Example output: { "name": "druidRole2", "users": null, "permissions": [ { "resource": { "name": "E", "type": "DATASOURCE" }, "action": "WRITE" } ] } POST(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName}) Create a new role with name {roleName}. Content: username string DELETE(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName}) Delete the role with name {roleName}. 
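To make the request shapes concrete, the following is a minimal sketch of creating a user and a role through the Authorization API with curl, assuming a Coordinator reachable at http://COORDINATOR_IP:8081, an authorizer named MyBasicMetadataAuthorizer, and the example admin/password1 credentials from the authenticator configuration above; the host, user name, and role name are placeholders to adapt to your deployment.
# Create a user named 'druid2' in the authorizer's user store (placeholder names).
curl -u admin:password1 -X POST "http://COORDINATOR_IP:8081/druid-ext/basic-security/authorization/db/MyBasicMetadataAuthorizer/users/druid2"
# Create a role named 'druidRole' that can later be granted permissions and assigned to users.
curl -u admin:password1 -X POST "http://COORDINATOR_IP:8081/druid-ext/basic-security/authorization/db/MyBasicMetadataAuthorizer/roles/druidRole"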
Role Assignment POST(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName}/roles/{roleName}) Assign role {roleName} to user {userName}. DELETE(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName}/roles/{roleName}) Unassign role {roleName} from user {userName}. POST(/druid-ext/basic-security/authorization/db/{authorizerName}/groupMappings/{groupMappingName}/roles/{roleName}) Assign role {roleName} to group mapping {groupMappingName}. DELETE(/druid-ext/basic-security/authorization/db/{authorizerName}/groupMappings/{groupMappingName}/roles/{roleName}) Unassign role {roleName} from group mapping {groupMappingName}. Permissions POST(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName}/permissions) Set the permissions of {roleName}. This replaces the previous set of permissions on the role. Content: List of JSON Resource-Action objects, e.g.: [ { "resource": { "name": "wiki.*", "type": "DATASOURCE" }, "action": "READ" }, { "resource": { "name": "wikiticker", "type": "DATASOURCE" }, "action": "WRITE" } ] The "name" field for resources in the permission definitions is a regex used to match resource names during authorization checks. Please see Defining permissions for more details. Cache Load Status GET(/druid-ext/basic-security/authorization/loadStatus) Return the current load status of the local caches of the authorization Druid metadata store. "},{"title":"ORC Extension","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/orc","content":"","keywords":""},{"title":"ORC extension","type":1,"pageTitle":"ORC Extension","url":"/docs/27.0.0/development/extensions-core/orc#orc-extension","content":"This Apache Druid extension enables Druid to ingest and understand the Apache ORC data format. The extension provides the ORC input format and the ORC Hadoop parser for native batch ingestion and Hadoop batch ingestion, respectively. Please see the corresponding docs for details. To use this extension, make sure to include druid-orc-extensions in the extensions load list. "},{"title":"Migration from 'contrib' extension","type":1,"pageTitle":"ORC Extension","url":"/docs/27.0.0/development/extensions-core/orc#migration-from-contrib-extension","content":"This extension, first available in version 0.15.0, replaces the previous 'contrib' extension which was available until 0.14.0-incubating. While this extension can index any data the 'contrib' extension could, the JSON spec for the ingestion task is incompatible and will need to be modified to work with the newer 'core' extension. To migrate to 0.15.0+: In the inputSpec of ioConfig, inputFormat must be changed from "org.apache.hadoop.hive.ql.io.orc.OrcNewInputFormat" to "org.apache.orc.mapreduce.OrcInputFormat". The 'contrib' extension supported a typeString property, which provided the schema of the ORC file and was essentially required to have the types correct, but notably not the column names, which facilitated column renaming. In the 'core' extension, column renaming can be achieved with flattenSpec. For example, "typeString":"struct<time:string,name:string>" with the actual schema struct<_col0:string,_col1:string>, to preserve the Druid schema, would need to be replaced with: "flattenSpec": { "fields": [ { "type": "path", "name": "time", "expr": "$._col0" }, { "type": "path", "name": "name", "expr": "$._col1" } ] ... } The 'contrib' extension supported a mapFieldNameFormat property, which provided a way to specify a dimension to flatten OrcMap columns with primitive types. 
This functionality has also been replaced with flattenSpec. For example: "mapFieldNameFormat": "<PARENT>_<CHILD>" for a dimension nestedData_dim1, to preserve the Druid schema, could be replaced with "flattenSpec": { "fields": [ { "type": "path", "name": "nestedData_dim1", "expr": "$.nestedData.dim1" } ] ... } "},{"title":"Amazon Kinesis ingestion","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion","content":"","keywords":""},{"title":"Setup","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#setup","content":"To use the Kinesis indexing service, you must first load the druid-kinesis-indexing-service core extension on both the Overlord and the Middle Manager. See Loading extensions for more information. Review the Kinesis known issues before deploying the druid-kinesis-indexing-service extension to production. "},{"title":"Supervisor spec","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#supervisor-spec","content":"The following table outlines the high-level configuration options for the Kinesis supervisor object. See Supervisor API for more information. Property\tType\tDescription\tRequired type\tString\tThe supervisor type; this should always be kinesis.\tYes spec\tObject\tThe container object for the supervisor configuration.\tYes ioConfig\tObject\tThe I/O configuration object for configuring Kinesis connection and I/O-related settings for the supervisor and indexing task.\tYes dataSchema\tObject\tThe schema used by the Kinesis indexing task during ingestion. See dataSchema for more information.\tYes tuningConfig\tObject\tThe tuning configuration object for configuring performance-related settings for the supervisor and indexing tasks.\tNo Druid starts a new supervisor when you define a supervisor spec. To create a supervisor, send a POST request to the /druid/indexer/v1/supervisor endpoint. Once created, the supervisor persists in the configured metadata database. There can only be a single supervisor per datasource, and submitting a second spec for the same datasource overwrites the previous one. When an Overlord gains leadership, either by being started or as a result of another Overlord failing, it spawns a supervisor for each supervisor spec in the metadata database. The supervisor then discovers running Kinesis indexing tasks and attempts to adopt them if they are compatible with the supervisor's configuration. If they are not compatible because they have a different ingestion spec or shard allocation, the tasks are killed and the supervisor creates a new set of tasks. In this way, the supervisors persist across Overlord restarts and failovers. The following example shows how to submit a supervisor spec for a stream with the name KinesisStream. In this example, http://SERVICE_IP:SERVICE_PORT is a placeholder for the server address of the deployment and the service port. 
curl -X POST "http://SERVICE_IP:SERVICE_PORT/druid/indexer/v1/supervisor" \\ -H "Content-Type: application/json" \\ -d '{ "type": "kinesis", "spec": { "ioConfig": { "type": "kinesis", "stream": "KinesisStream", "inputFormat": { "type": "json" }, "useEarliestSequenceNumber": true }, "tuningConfig": { "type": "kinesis" }, "dataSchema": { "dataSource": "KinesisStream", "timestampSpec": { "column": "timestamp", "format": "iso" }, "dimensionsSpec": { "dimensions": [ "isRobot", "channel", "flags", "isUnpatrolled", "page", "diffUrl", { "type": "long", "name": "added" }, "comment", { "type": "long", "name": "commentLength" }, "isNew", "isMinor", { "type": "long", "name": "delta" }, "isAnonymous", "user", { "type": "long", "name": "deltaBucket" }, { "type": "long", "name": "deleted" }, "namespace", "cityName", "countryName", "regionIsoCode", "metroCode", "countryIsoCode", "regionName" ] }, "granularitySpec": { "queryGranularity": "none", "rollup": false, "segmentGranularity": "hour" } } } }' POST /druid/indexer/v1/supervisor HTTP/1.1 Host: http://SERVICE_IP:SERVICE_PORT Content-Type: application/json { "type": "kinesis", "spec": { "ioConfig": { "type": "kinesis", "stream": "KinesisStream", "inputFormat": { "type": "json" }, "useEarliestSequenceNumber": true }, "tuningConfig": { "type": "kinesis" }, "dataSchema": { "dataSource": "KinesisStream", "timestampSpec": { "column": "timestamp", "format": "iso" }, "dimensionsSpec": { "dimensions": [ "isRobot", "channel", "flags", "isUnpatrolled", "page", "diffUrl", { "type": "long", "name": "added" }, "comment", { "type": "long", "name": "commentLength" }, "isNew", "isMinor", { "type": "long", "name": "delta" }, "isAnonymous", "user", { "type": "long", "name": "deltaBucket" }, { "type": "long", "name": "deleted" }, "namespace", "cityName", "countryName", "regionIsoCode", "metroCode", "countryIsoCode", "regionName" ] }, "granularitySpec": { "queryGranularity": "none", "rollup": false, "segmentGranularity": "hour" } } } } "},{"title":"Supervisor I/O configuration","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#supervisor-io-configuration","content":"The following table outlines the configuration options for ioConfig: Property\tType\tDescription\tRequired\tDefaultstream\tString\tThe Kinesis stream to read.\tYes inputFormat\tObject\tThe input format to specify how to parse input data. See Specify data format for more information.\tYes endpoint\tString\tThe AWS Kinesis stream endpoint for a region. You can find a list of endpoints in the AWS service endpoints document.\tNo\tkinesis.us-east-1.amazonaws.com replicas\tInteger\tThe number of replica sets, where 1 is a single set of tasks (no replication). Druid always assigns replicate tasks to different workers to provide resiliency against process failure.\tNo\t1 taskCount\tInteger\tThe maximum number of reading tasks in a replica set. Multiply taskCount and replicas to measure the maximum number of reading tasks. The total number of tasks (reading and publishing) is higher than the maximum number of reading tasks. See Capacity planning for more details. 
When taskCount > {numKinesisShards}, the actual number of reading tasks is less than the taskCount value.\tNo\t1 taskDuration\tISO 8601 period\tThe length of time before tasks stop reading and begin publishing their segments.\tNo\tPT1H startDelay\tISO 8601 period\tThe period to wait before the supervisor starts managing tasks.\tNo\tPT5S period\tISO 8601 period\tDetermines how often the supervisor executes its management logic. Note that the supervisor also runs in response to certain events, such as tasks succeeding, failing, and reaching their task duration, so this value specifies the maximum time between iterations.\tNo\tPT30S useEarliestSequenceNumber\tBoolean\tIf a supervisor is managing a datasource for the first time, it obtains a set of starting sequence numbers from Kinesis. This flag determines whether a supervisor retrieves the earliest or latest sequence numbers in Kinesis. Under normal circumstances, subsequent tasks start from where the previous segments ended so this flag is only used on the first run.\tNo\tfalse completionTimeout\tISO 8601 period\tThe length of time to wait before Druid declares a publishing task has failed and terminates it. If this is set too low, your tasks may never publish. The publishing clock for a task begins roughly after taskDuration elapses.\tNo\tPT6H lateMessageRejectionPeriod\tISO 8601 period\tConfigure tasks to reject messages with timestamps earlier than this period before the task is created. For example, if lateMessageRejectionPeriod is set to PT1H and the supervisor creates a task at 2016-01-01T12:00Z, messages with timestamps earlier than 2016-01-01T11:00Z are dropped. This may help prevent concurrency issues if your data stream has late messages and you have multiple pipelines that need to operate on the same segments, such as a streaming and a nightly batch ingestion pipeline.\tNo earlyMessageRejectionPeriod\tISO 8601 period\tConfigure tasks to reject messages with timestamps later than this period after the task reached its taskDuration. For example, if earlyMessageRejectionPeriod is set to PT1H, the taskDuration is set to PT1H and the supervisor creates a task at 2016-01-01T12:00Z. Messages with timestamps later than 2016-01-01T14:00Z are dropped. Note: Tasks sometimes run past their task duration, for example, in cases of supervisor failover. Setting earlyMessageRejectionPeriod too low may cause messages to be dropped unexpectedly whenever a task runs past its originally configured task duration.\tNo recordsPerFetch\tInteger\tThe number of records to request per call to fetch records from Kinesis.\tNo\tSee Determine fetch settings for defaults. fetchDelayMillis\tInteger\tTime in milliseconds to wait between subsequent calls to fetch records from Kinesis. See Determine fetch settings.\tNo\t0 awsAssumedRoleArn\tString\tThe AWS assumed role to use for additional permissions.\tNo awsExternalId\tString\tThe AWS external ID to use for additional permissions.\tNo deaggregate\tBoolean\tWhether to use the deaggregate function of the Kinesis Client Library (KCL).\tNo autoScalerConfig\tObject\tDefines autoscaling behavior for Kinesis ingest tasks. 
See Task autoscaler properties for more information.\tNo\tnull "},{"title":"Task autoscaler properties","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#task-autoscaler-properties","content":"The following table outlines the configuration options for autoScalerConfig: Property\tDescription\tRequired\tDefaultenableTaskAutoScaler\tEnables the auto scaler. If not specified, Druid disables the auto scaler even when autoScalerConfig is not null.\tNo\tfalse taskCountMax\tMaximum number of Kinesis ingestion tasks. Must be greater than or equal to taskCountMin. If greater than {numKinesisShards}, Druid sets the maximum number of reading tasks to {numKinesisShards} and ignores taskCountMax.\tYes taskCountMin\tMinimum number of Kinesis ingestion tasks. When you enable the auto scaler, Druid ignores the value of taskCount in IOConfig and uses taskCountMin for the initial number of tasks to launch.\tYes minTriggerScaleActionFrequencyMillis\tMinimum time interval between two scale actions.\tNo\t600000 autoScalerStrategy\tThe algorithm of autoScaler. Druid only supports the lagBased strategy. See Lag based autoscaler strategy related properties for more information.\tNo\tDefaults to lagBased. "},{"title":"Lag based autoscaler strategy related properties","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#lag-based-autoscaler-strategy-related-properties","content":"Unlike the Kafka indexing service, Kinesis reports lag metrics measured in time difference in milliseconds between the current sequence number and latest sequence number, rather than message count. The following table outlines the configuration options for autoScalerStrategy: Property\tDescription\tRequired\tDefaultlagCollectionIntervalMillis\tThe time period during which Druid collects lag metric points.\tNo\t30000 lagCollectionRangeMillis\tThe total time window of lag collection. Use with lagCollectionIntervalMillis to specify the intervals at which to collect lag metric points.\tNo\t600000 scaleOutThreshold\tThe threshold of scale out action.\tNo\t6000000 triggerScaleOutFractionThreshold\tEnables scale out action if triggerScaleOutFractionThreshold percent of lag points is higher than scaleOutThreshold.\tNo\t0.3 scaleInThreshold\tThe threshold of scale in action.\tNo\t1000000 triggerScaleInFractionThreshold\tEnables scale in action if triggerScaleInFractionThreshold percent of lag points is lower than scaleOutThreshold.\tNo\t0.9 scaleActionStartDelayMillis\tThe number of milliseconds to delay after the supervisor starts before the first scale logic check.\tNo\t300000 scaleActionPeriodMillis\tThe frequency in milliseconds to check if a scale action is triggered.\tNo\t60000 scaleInStep\tThe number of tasks to reduce at once when scaling down.\tNo\t1 scaleOutStep\tThe number of tasks to add at once when scaling out.\tNo\t2 The following example shows a supervisor spec with lagBased auto scaler enabled. 
Click to view the example { "type": "kinesis", "dataSchema": { "dataSource": "metrics-kinesis", "timestampSpec": { "column": "timestamp", "format": "auto" }, "dimensionsSpec": { "dimensions": [], "dimensionExclusions": [ "timestamp", "value" ] }, "metricsSpec": [ { "name": "count", "type": "count" }, { "name": "value_sum", "fieldName": "value", "type": "doubleSum" }, { "name": "value_min", "fieldName": "value", "type": "doubleMin" }, { "name": "value_max", "fieldName": "value", "type": "doubleMax" } ], "granularitySpec": { "type": "uniform", "segmentGranularity": "HOUR", "queryGranularity": "NONE" } }, "ioConfig": { "stream": "metrics", "autoScalerConfig": { "enableTaskAutoScaler": true, "taskCountMax": 6, "taskCountMin": 2, "minTriggerScaleActionFrequencyMillis": 600000, "autoScalerStrategy": "lagBased", "lagCollectionIntervalMillis": 30000, "lagCollectionRangeMillis": 600000, "scaleOutThreshold": 600000, "triggerScaleOutFractionThreshold": 0.3, "scaleInThreshold": 100000, "triggerScaleInFractionThreshold": 0.9, "scaleActionStartDelayMillis": 300000, "scaleActionPeriodMillis": 60000, "scaleInStep": 1, "scaleOutStep": 2 }, "inputFormat": { "type": "json" }, "endpoint": "kinesis.us-east-1.amazonaws.com", "taskCount": 1, "replicas": 1, "taskDuration": "PT1H" }, "tuningConfig": { "type": "kinesis", "maxRowsPerSegment": 5000000 } } "},{"title":"Specify data format","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#specify-data-format","content":"The Kinesis indexing service supports both inputFormat and parser to specify the data format. Use the inputFormat to specify the data format for the Kinesis indexing service unless you need a format only supported by the legacy parser. Supported values for inputFormat include: csvdelimitedjsonavro_streamavro_ocfprotobuf For more information, see Data formats. You can also read thrift formats using parser. "},{"title":"Supervisor tuning configuration","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#supervisor-tuning-configuration","content":"The tuningConfig object is optional. If you don't specify the tuningConfig object, Druid uses the default configuration settings. The following table outlines the configuration options for tuningConfig: Property\tType\tDescription\tRequired\tDefaulttype\tString\tThe indexing task type. This should always be kinesis.\tYes maxRowsInMemory\tInteger\tThe number of rows to aggregate before persisting. This number represents the post-aggregation rows. It is not equivalent to the number of input events, but the resulting number of aggregated rows. Druid uses maxRowsInMemory to manage the required JVM heap size. The maximum heap memory usage for indexing scales is maxRowsInMemory * (2 + maxPendingPersists).\tNo\t100000 maxBytesInMemory\tLong\tThe number of bytes to aggregate in heap memory before persisting. This is based on a rough estimate of memory usage and not actual usage. Normally, this is computed internally. The maximum heap memory usage for indexing is maxBytesInMemory * (2 + maxPendingPersists).\tNo\tOne-sixth of max JVM memory skipBytesInMemoryOverheadCheck\tBoolean\tThe calculation of maxBytesInMemory takes into account overhead objects created during ingestion and each intermediate persist. 
To exclude the bytes of these overhead objects from the maxBytesInMemory check, set skipBytesInMemoryOverheadCheck to true.\tNo\tfalse maxRowsPerSegment\tInteger\tThe number of rows to aggregate into a segment; this number represents the post-aggregation rows. Handoff occurs when maxRowsPerSegment or maxTotalRows is reached or every intermediateHandoffPeriod, whichever happens first.\tNo\t5000000 maxTotalRows\tLong\tThe number of rows to aggregate across all segments; this number represents the post-aggregation rows. Handoff occurs when maxRowsPerSegment or maxTotalRows is reached or every intermediateHandoffPeriod, whichever happens first.\tNo\tunlimited intermediateHandoffPeriod\tISO 8601 period\tThe period that determines how often tasks hand off segments. Handoff occurs if maxRowsPerSegment or maxTotalRows is reached or every intermediateHandoffPeriod, whichever happens first.\tNo\tP2147483647D intermediatePersistPeriod\tISO 8601 period\tThe period that determines the rate at which intermediate persists occur.\tNo\tPT10M maxPendingPersists\tInteger\tMaximum number of persists that can be pending but not started. If a new intermediate persist exceeds this limit, Druid blocks ingestion until the currently running persist finishes. One persist can be running concurrently with ingestion, and none can be queued up. The maximum heap memory usage for indexing scales is maxRowsInMemory * (2 + maxPendingPersists).\tNo\t0 indexSpec\tObject\tDefines how Druid indexes the data. See IndexSpec for more information.\tNo indexSpecForIntermediatePersists\tObject\tDefines segment storage format options to use at indexing time for intermediate persisted temporary segments. You can use indexSpecForIntermediatePersists to disable dimension/metric compression on intermediate segments to reduce memory required for final merging. However, disabling compression on intermediate segments might increase page cache use while they are used before getting merged into final segment published. See IndexSpec for possible values.\tNo\tSame as indexSpec reportParseExceptions\tBoolean\tIf true, Druid throws exceptions encountered during parsing causing ingestion to halt. If false, Druid skips unparseable rows and fields.\tNo\tfalse handoffConditionTimeout\tLong\tNumber of milliseconds to wait for segment handoff. Set to a value >= 0, where 0 means to wait indefinitely.\tNo\t0 resetOffsetAutomatically\tBoolean\tControls behavior when Druid needs to read Kinesis messages that are no longer available. If false, the exception bubbles up causing tasks to fail and ingestion to halt. If this occurs, manual intervention is required to correct the situation, potentially using the Reset Supervisor API. This mode is useful for production, since it highlights issues with ingestion. If true, Druid automatically resets to the earliest or latest sequence number available in Kinesis, based on the value of the useEarliestSequenceNumber property (earliest if true, latest if false). Note that this can lead to dropping data (if useEarliestSequenceNumber is false) or duplicating data (if useEarliestSequenceNumber is true) without your knowledge. Druid logs messages indicating that a reset has occurred without interrupting ingestion. 
This mode is useful for non-production situations since it enables Druid to recover from problems automatically, even if they lead to quiet dropping or duplicating of data.\tNo\tfalse skipSequenceNumberAvailabilityCheck\tBoolean\tWhether to enable checking if the current sequence number is still available in a particular Kinesis shard. If false, the indexing task attempts to reset the current sequence number, depending on the value of resetOffsetAutomatically.\tNo\tfalse workerThreads\tInteger\tThe number of threads that the supervisor uses to handle requests/responses for worker tasks, along with any other internal asynchronous operation.\tNo\tmin(10, taskCount) chatAsync\tBoolean\tIf true, the supervisor uses asynchronous communication with indexing tasks and ignores the chatThreads parameter. If false, the supervisor uses synchronous communication in a thread pool of size chatThreads.\tNo\ttrue chatThreads\tInteger\tThe number of threads Druid uses to communicate with indexing tasks. Druid ignores this setting if chatAsync is true.\tNo\tmin(10, taskCount * replicas) chatRetries\tInteger\tThe number of times Druid retries HTTP requests to indexing tasks before considering tasks unresponsive.\tNo\t8 httpTimeout\tISO 8601 period\tThe period of time to wait for a HTTP response from an indexing task.\tNo\tPT10S shutdownTimeout\tISO 8601 period\tThe period of time to wait for the supervisor to attempt a graceful shutdown of tasks before exiting.\tNo\tPT80S recordBufferSize\tInteger\tThe size of the buffer (number of events) Druid uses between the Kinesis fetch threads and the main ingestion thread.\tNo\tSee Determine fetch settings for defaults. recordBufferOfferTimeout\tInteger\tThe number of milliseconds to wait for space to become available in the buffer before timing out.\tNo\t5000 recordBufferFullWait\tInteger\tThe number of milliseconds to wait for the buffer to drain before Druid attempts to fetch records from Kinesis again.\tNo\t5000 fetchThreads\tInteger\tThe size of the pool of threads fetching data from Kinesis. There is no benefit in having more threads than Kinesis shards.\tNo\tprocs * 2, where procs is the number of processors available to the task. segmentWriteOutMediumFactory\tObject\tThe segment write-out medium to use when creating segments See Additional Peon configuration: SegmentWriteOutMediumFactory for explanation and available options.\tNo\tIf not specified, Druid uses the value from druid.peon.defaultSegmentWriteOutMediumFactory.type. logParseExceptions\tBoolean\tIf true, Druid logs an error message when a parsing exception occurs, containing information about the row where the error occurred.\tNo\tfalse maxParseExceptions\tInteger\tThe maximum number of parse exceptions that can occur before the task halts ingestion and fails. Overridden if reportParseExceptions is set.\tNo\tunlimited maxSavedParseExceptions\tInteger\tWhen a parse exception occurs, Druid keeps track of the most recent parse exceptions. maxSavedParseExceptions limits the number of saved exception instances. These saved exceptions are available after the task finishes in the task completion report. Overridden if reportParseExceptions is set.\tNo\t0 maxRecordsPerPoll\tInteger\tThe maximum number of records to be fetched from buffer per poll. The actual maximum will be Max(maxRecordsPerPoll, Max(bufferSize, 1)).\tNo\tSee Determine fetch settings for defaults. repartitionTransitionDuration\tISO 8601 period\tWhen shards are split or merged, the supervisor recomputes shard to task group mappings. 
The supervisor also signals any running tasks created under the old mappings to stop early at current time + repartitionTransitionDuration. Stopping the tasks early allows Druid to begin reading from the new shards more quickly. The repartition transition wait time controlled by this property gives the stream additional time to write records to the new shards after the split or merge, which helps avoid issues with empty shard handling.\tNo\tPT2M offsetFetchPeriod\tISO 8601 period\tDetermines how often the supervisor queries Kinesis and the indexing tasks to fetch current offsets and calculate lag. If the user-specified value is below the minimum value of PT5S, the supervisor ignores the value and uses the minimum value instead.\tNo\tPT30S useListShards\tBoolean\tIndicates if listShards API of AWS Kinesis SDK can be used to prevent LimitExceededException during ingestion. You must set the necessary IAM permissions.\tNo\tfalse "},{"title":"IndexSpec","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#indexspec","content":"The following table outlines the configuration options for indexSpec: Property\tType\tDescription\tRequired\tDefaultbitmap\tObject\tCompression format for bitmap indexes. Druid supports roaring and concise bitmap types.\tNo\tRoaring dimensionCompression\tString\tCompression format for dimension columns. Choose from LZ4, LZF, or uncompressed.\tNo\tLZ4 metricCompression\tString\tCompression format for primitive type metric columns. Choose from LZ4, LZF, uncompressed, or none.\tNo\tLZ4 longEncoding\tString\tEncoding format for metric and dimension columns with type long. Choose from auto or longs. auto encodes the values using sequence number or lookup table depending on column cardinality and stores them with variable sizes. longs stores the value as is with 8 bytes each.\tNo\tlongs "},{"title":"Operations","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#operations","content":"This section describes how to use the Supervisor API with the Kinesis indexing service. "},{"title":"AWS authentication","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#aws-authentication","content":"To authenticate with AWS, you must provide your AWS access key and AWS secret key using runtime.properties, for example: druid.kinesis.accessKey=AKIAWxxxxxxxxxx4NCKS druid.kinesis.secretKey=Jbytxxxxxxxxxxx2+555 Druid uses the AWS access key and AWS secret key to authenticate Kinesis API requests. If not provided, the service looks for credentials set in environment variables, via Web Identity Token, in the default profile configuration file, and from the EC2 instance profile provider (in this order). To ingest data from Kinesis, ensure that the policy attached to your IAM role contains the necessary permissions. The required permissions depend on the value of useListShards. If the useListShards flag is set to true, you need following permissions: ListStreams to list your data streams.Get* required for GetShardIterator.GetRecords to get data records from a data stream's shard.ListShards to get the shards for a stream of interest. 
The following is an example policy: [ { "Effect": "Allow", "Action": ["kinesis:List*"], "Resource": ["*"] }, { "Effect": "Allow", "Action": ["kinesis:Get*"], "Resource": [<ARN for shards to be ingested>] } ] If the useListShards flag is set to false, you need following permissions: ListStreams to list your data streams.Get* required for GetShardIterator.GetRecords to get data records from a data stream's shard.DescribeStream to describe the specified data stream. The following is an example policy: [ { "Effect": "Allow", "Action": ["kinesis:ListStreams"], "Resource": ["*"] }, { "Effect": "Allow", "Action": ["kinesis:DescribeStreams"], "Resource": ["*"] }, { "Effect": "Allow", "Action": ["kinesis:Get*"], "Resource": [<ARN for shards to be ingested>] } ] "},{"title":"Get supervisor status report","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#get-supervisor-status-report","content":"To retrieve the current status report for a single supervisor, send a GET request to the /druid/indexer/v1/supervisor/:supervisorId/status endpoint. The report contains the state of the supervisor tasks, the latest sequence numbers, and an array of recently thrown exceptions reported as recentErrors. You can control the maximum size of the exceptions using the druid.supervisor.maxStoredExceptionEvents configuration. The two properties related to the supervisor's state are state and detailedState. The state property contains a small number of generic states that apply to any type of supervisor, while the detailedState property contains a more descriptive, implementation-specific state that may provide more insight into the supervisor's activities. Possible state values are PENDING, RUNNING, SUSPENDED, STOPPING, UNHEALTHY_SUPERVISOR, and UNHEALTHY_TASKS. The following table lists detailedState values and their corresponding state mapping: Detailed state\tCorresponding state\tDescriptionUNHEALTHY_SUPERVISOR\tUNHEALTHY_SUPERVISOR\tThe supervisor encountered errors on previous druid.supervisor.unhealthinessThreshold iterations. UNHEALTHY_TASKS\tUNHEALTHY_TASKS\tThe last druid.supervisor.taskUnhealthinessThreshold tasks all failed. UNABLE_TO_CONNECT_TO_STREAM\tUNHEALTHY_SUPERVISOR\tThe supervisor is encountering connectivity issues with Kinesis and has not successfully connected in the past. LOST_CONTACT_WITH_STREAM\tUNHEALTHY_SUPERVISOR\tThe supervisor is encountering connectivity issues with Kinesis but has successfully connected in the past. PENDING (first iteration only)\tPENDING\tThe supervisor has been initialized but hasn't started connecting to the stream. CONNECTING_TO_STREAM (first iteration only)\tRUNNING\tThe supervisor is trying to connect to the stream and update partition data. DISCOVERING_INITIAL_TASKS (first iteration only)\tRUNNING\tThe supervisor is discovering already-running tasks. CREATING_TASKS (first iteration only)\tRUNNING\tThe supervisor is creating tasks and discovering state. RUNNING\tRUNNING\tThe supervisor has started tasks and is waiting for taskDuration to elapse. SUSPENDED\tSUSPENDED\tThe supervisor is suspended. STOPPING\tSTOPPING\tThe supervisor is stopping. 
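As a concrete illustration, here is a minimal sketch of fetching the status report and extracting the state fields with curl, assuming the placeholder address http://SERVICE_IP:SERVICE_PORT used elsewhere in these examples, a supervisor ID of KinesisStream, that the report nests these fields under a payload object, and that jq is available; adjust the names for your deployment.
# Retrieve the full status report for the supervisor (placeholder host and supervisor ID).
curl -X GET "http://SERVICE_IP:SERVICE_PORT/druid/indexer/v1/supervisor/KinesisStream/status"
# Pull out only the generic state, the detailed state, and any recent errors.
curl -s "http://SERVICE_IP:SERVICE_PORT/druid/indexer/v1/supervisor/KinesisStream/status" | jq '.payload | {state, detailedState, recentErrors}'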
On each iteration of the supervisor's run loop, the supervisor completes the following tasks in sequence: Fetch the list of shards from Kinesis and determine the starting sequence number for each shard (either based on the last processed sequence number if continuing, or starting from the beginning or ending of the stream if this is a new stream). Discover any running indexing tasks that are writing to the supervisor's datasource and adopt them if they match the supervisor's configuration, else signal them to stop. Send a status request to each supervised task to update the view of the state of the tasks under supervision. Handle tasks that have exceeded taskDuration and should transition from the reading to publishing state. Handle tasks that have finished publishing and signal redundant replica tasks to stop. Handle tasks that have failed and clean up the supervisor's internal state. Compare the list of healthy tasks to the requested taskCount and replicas configurations and create additional tasks if required. The detailedState property shows additional values (marked with "first iteration only" in the preceding table) the first time the supervisor executes this run loop after startup or after resuming from a suspension. This is intended to surface initialization-type issues, where the supervisor is unable to reach a stable state. For example, if the supervisor cannot connect to Kinesis, if it's unable to read from the stream, or cannot communicate with existing tasks. Once the supervisor is stable, that is, once it has completed a full execution without encountering any issues, detailedState will show a RUNNING state until it is stopped, suspended, or hits a failure threshold and transitions to an unhealthy state. "},{"title":"Update existing supervisors","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#update-existing-supervisors","content":"To update an existing supervisor spec, send a POST request to the /druid/indexer/v1/supervisor endpoint. When you call this endpoint on an existing supervisor for the same datasource, the running supervisor signals its tasks to stop reading and begin publishing their segments, then exits. Druid then uses the provided configuration from the request body to create a new supervisor with a new set of tasks that start reading from the sequence numbers where the previous now-publishing tasks left off, but using the updated schema. In this way, configuration changes can be applied without requiring any pause in ingestion. You can achieve seamless schema migrations by submitting the new schema using the /druid/indexer/v1/supervisor endpoint. "},{"title":"Suspend and resume a supervisor","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#suspend-and-resume-a-supervisor","content":"To suspend a supervisor, send a POST request to the /druid/indexer/v1/supervisor/:supervisorId/suspend endpoint. Suspending a supervisor does not prevent it from operating and emitting logs and metrics. It ensures that no indexing tasks are running until the supervisor resumes. To resume a supervisor, send a POST request to the /druid/indexer/v1/supervisor/:supervisorId/resume endpoint. 
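For reference, a minimal sketch of the suspend and resume calls with curl, assuming the same placeholder address http://SERVICE_IP:SERVICE_PORT and a supervisor ID of KinesisStream:
# Suspend the supervisor; running tasks stop and no new indexing tasks are created until it resumes.
curl -X POST "http://SERVICE_IP:SERVICE_PORT/druid/indexer/v1/supervisor/KinesisStream/suspend"
# Resume the suspended supervisor so that it begins managing indexing tasks again.
curl -X POST "http://SERVICE_IP:SERVICE_PORT/druid/indexer/v1/supervisor/KinesisStream/resume"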
"},{"title":"Reset a supervisor","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#reset-a-supervisor","content":"The supervisor must be running for this endpoint to be available To reset a supervisor, send a POST request to the /druid/indexer/v1/supervisor/:supervisorId/reset endpoint. This endpoint clears stored sequence numbers, prompting the supervisor to start reading from either the earliest or the latest sequence numbers in Kinesis (depending on the value of useEarliestSequenceNumber). After clearing stored sequence numbers, the supervisor kills and recreates active tasks, so that tasks begin reading from valid sequence numbers. This endpoint is useful when you need to recover from a stopped state due to missing sequence numbers in Kinesis. Use this endpoint with caution as it may result in skipped messages, leading to data loss or duplicate data. The indexing service keeps track of the latest persisted sequence number to provide exactly-once ingestion guarantees across tasks. Subsequent tasks must start reading from where the previous task completed for the generated segments to be accepted. If the messages at the expected starting sequence numbers are no longer available in Kinesis (typically because the message retention period has elapsed or the topic was removed and re-created) the supervisor will refuse to start and in-flight tasks will fail. This endpoint enables you to recover from this condition. "},{"title":"Terminate a supervisor","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#terminate-a-supervisor","content":"To terminate a supervisor and its associated indexing tasks, send a POST request to the /druid/indexer/v1/supervisor/:supervisorId/terminate endpoint. This places a tombstone marker in the database to prevent the supervisor from being reloaded on a restart and then gracefully shuts down the currently running supervisor. The tasks stop reading and begin publishing their segments immediately. The call returns after all tasks have been signaled to stop but before the tasks finish publishing their segments. The terminated supervisor continues exists in the metadata store and its history can be retrieved. The only way to restart a terminated supervisor is by submitting a functioning supervisor spec to /druid/indexer/v1/supervisor. "},{"title":"Capacity planning","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#capacity-planning","content":"Kinesis indexing tasks run on Middle Managers and are limited by the resources available in the Middle Manager cluster. In particular, you should make sure that you have sufficient worker capacity, configured using thedruid.worker.capacity property, to handle the configuration in the supervisor spec. Note that worker capacity is shared across all types of indexing tasks, so you should plan your worker capacity to handle your total indexing load, such as batch processing, streaming tasks, and merging tasks. If your workers run out of capacity, Kinesis indexing tasks queue and wait for the next available worker. This may cause queries to return partial results but will not result in data loss, assuming the tasks run before Kinesis purges those sequence numbers. A running task can be in one of two states: reading or publishing. 
A task remains in reading state for the period defined in taskDuration, at which point it transitions to publishing state. A task remains in publishing state for as long as it takes to generate segments, push segments to deep storage, and have them loaded and served by a Historical process or until completionTimeout elapses. The number of reading tasks is controlled by replicas and taskCount. In general, there are replicas * taskCount reading tasks. An exception occurs if taskCount > {numKinesisShards}, in which case Druid uses {numKinesisShards} tasks. When taskDuration elapses, these tasks transition to publishing state and replicas * taskCount new reading tasks are created. To allow for reading tasks and publishing tasks to run concurrently, there should be a minimum capacity of: workerCapacity = 2 * replicas * taskCount This value is for the ideal situation in which there is at most one set of tasks publishing while another set is reading. In some circumstances, it is possible to have multiple sets of tasks publishing simultaneously. This would happen if the time-to-publish (generate segment, push to deep storage, load on Historical) is greater than taskDuration. This is a valid and correct scenario but requires additional worker capacity to support. In general, it is a good idea to have taskDuration be large enough that the previous set of tasks finishes publishing before the current set begins. "},{"title":"Shards and segment handoff","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#shards-and-segment-handoff","content":"Each Kinesis indexing task writes the events it consumes from Kinesis shards into a single segment for the segment granularity interval until it reaches one of the following limits: maxRowsPerSegment, maxTotalRows, or intermediateHandoffPeriod. At this point, the task creates a new shard for this segment granularity to contain subsequent events. The Kinesis indexing task also performs incremental hand-offs so that the segments created by the task are not held up until the task duration is over. When the task reaches one of the maxRowsPerSegment, maxTotalRows, or intermediateHandoffPeriod limits, it hands off all the segments and creates a new set of segments for further events. This allows the task to run for longer durations without accumulating old segments locally on Middle Manager processes. The Kinesis indexing service may still produce some small segments. For example, consider the following scenario: Task duration is 4 hoursSegment granularity is set to an HOURThe supervisor was started at 9:10 After 4 hours at 13:10, Druid starts a new set of tasks. The events for the interval 13:00 - 14:00 may be split across existing tasks and the new set of tasks which could result in small segments. To merge them together into new segments of an ideal size (in the range of ~500-700 MB per segment), you can schedule re-indexing tasks, optionally with a different segment granularity. For more detail, see Segment size optimization. "},{"title":"Determine fetch settings","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#determine-fetch-settings","content":"Kinesis indexing tasks fetch records using fetchThreads threads. If fetchThreads is higher than the number of Kinesis shards, the excess threads are unused. Each fetch thread fetches up to recordsPerFetch records at once from a Kinesis shard, with a delay between fetches of fetchDelayMillis. 
The records fetched by each thread are pushed into a shared queue of size recordBufferSize. The main runner thread for each task polls up to maxRecordsPerPoll records from the queue at once. When using Kinesis Producer Library's aggregation feature, that is when deaggregate is set, each of these parameters refers to aggregated records rather than individual records. The default values for these parameters are: fetchThreads: Twice the number of processors available to the task. The number of processors available to the task is the total number of processors on the server, divided by druid.worker.capacity (the number of task slots on that particular server).fetchDelayMillis: 0 (no delay between fetches).recordsPerFetch: 100 MB or an estimated 5% of available heap, whichever is smaller, divided by fetchThreads. For estimation purposes, Druid uses a figure of 10 KB for regular records and 1 MB for aggregated records.recordBufferSize: 100 MB or an estimated 10% of available heap, whichever is smaller. For estimation purposes, Druid uses a figure of 10 KB for regular records and 1 MB for aggregated records.maxRecordsPerPoll: 100 for regular records, 1 for aggregated records. Kinesis places the following restrictions on calls to fetch records: Each data record can be up to 1 MB in size.Each shard can support up to 5 transactions per second for reads.Each shard can read up to 2 MB per second.The maximum size of data that GetRecords can return is 10 MB. If the above limits are exceeded, Kinesis throws ProvisionedThroughputExceededException errors. If this happens, Druid Kinesis tasks pause by fetchDelayMillis or 3 seconds, whichever is larger, and then attempt the call again. In most cases, the default settings for fetch parameters are sufficient to achieve good performance without excessive memory usage. However, in some cases, you may need to adjust these parameters to control fetch rate and memory usage more finely. Optimal values depend on the average size of a record and the number of consumers you have reading from a given shard, which will be replicas unless you have other consumers also reading from this Kinesis stream. "},{"title":"Deaggregation","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#deaggregation","content":"The Kinesis indexing service supports de-aggregation of multiple rows packed into a single record by the Kinesis Producer Library's aggregate method for more efficient data transfer. To enable this feature, set deaggregate to true in your ioConfig when submitting a supervisor spec. "},{"title":"Resharding","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#resharding","content":"When changing the shard count for a Kinesis stream, there is a window of time around the resharding operation with early shutdown of Kinesis ingestion tasks and possible task failures. The early shutdowns and task failures are expected. They occur because the supervisor updates the shard to task group mappings as shards are closed and fully read. This ensures that tasks are not running with an assignment of closed shards that have been fully read and balances distribution of active shards across tasks. 
This window with early task shutdowns and possible task failures concludes when: All closed shards have been fully read and the Kinesis ingestion tasks have published the data from those shards, committing the "closed" state to metadata storage.Any remaining tasks that had inactive shards in the assignment have been shut down. These tasks would have been created before the closed shards were completely drained. "},{"title":"Kinesis known issues","type":1,"pageTitle":"Amazon Kinesis ingestion","url":"/docs/27.0.0/development/extensions-core/kinesis-ingestion#kinesis-known-issues","content":"Before you deploy the Kinesis extension to production, consider the following known issues: Avoid implementing more than one Kinesis supervisor that reads from the same Kinesis stream for ingestion. Kinesis has a per-shard read throughput limit and having multiple supervisors on the same stream can reduce available read throughput for an individual supervisor's tasks. Multiple supervisors ingesting to the same Druid datasource can also cause increased contention for locks on the datasource.The only way to change the stream reset policy is to submit a new ingestion spec and set up a new supervisor.If ingestion tasks get stuck, the supervisor does not automatically recover. You should monitor ingestion tasks and investigate if your ingestion falls behind.A Kinesis supervisor can sometimes compare the checkpoint offset to retention window of the stream to see if it has fallen behind. These checks fetch the earliest sequence number for Kinesis which can result in IteratorAgeMilliseconds becoming very high in AWS CloudWatch. "},{"title":"PostgreSQL Metadata Store","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/postgresql","content":"","keywords":""},{"title":"Setting up PostgreSQL","type":1,"pageTitle":"PostgreSQL Metadata Store","url":"/docs/27.0.0/development/extensions-core/postgresql#setting-up-postgresql","content":"Install PostgreSQL Use your favorite package manager to install PostgreSQL, e.g.: on Ubuntu/Debian using apt apt-get install postgresqlon OS X, using Homebrew brew install postgresql Create a druid database and user On the machine where PostgreSQL is installed, using an account with proper postgresql permissions: Create a druid user, enter diurd when prompted for the password. createuser druid -P Create a druid database owned by the user we just created createdb druid -O druid Note: On Ubuntu / Debian you may have to prefix the createuser andcreatedb commands with sudo -u postgres in order to gain proper permissions. Configure your Druid metadata storage extension: Add the following parameters to your Druid configuration, replacing <host>with the location (host name and port) of the database. druid.extensions.loadList=["postgresql-metadata-storage"] druid.metadata.storage.type=postgresql druid.metadata.storage.connector.connectURI=jdbc:postgresql://<host>/druid druid.metadata.storage.connector.user=druid druid.metadata.storage.connector.password=diurd "},{"title":"Configuration","type":1,"pageTitle":"PostgreSQL Metadata Store","url":"/docs/27.0.0/development/extensions-core/postgresql#configuration","content":"In most cases, the configuration options map directly to the postgres JDBC connection options. 
Property\tDescription\tDefault\tRequireddruid.metadata.postgres.ssl.useSSL\tEnables SSL\tfalse\tno druid.metadata.postgres.ssl.sslPassword\tThe Password Provider or String password for the client's key.\tnone\tno druid.metadata.postgres.ssl.sslFactory\tThe class name to use as the SSLSocketFactory\tnone\tno druid.metadata.postgres.ssl.sslFactoryArg\tAn optional argument passed to the sslFactory's constructor\tnone\tno druid.metadata.postgres.ssl.sslMode\tThe sslMode. Possible values are "disable", "require", "verify-ca", "verify-full", "allow" and "prefer"\tnone\tno druid.metadata.postgres.ssl.sslCert\tThe full path to the certificate file.\tnone\tno druid.metadata.postgres.ssl.sslKey\tThe full path to the key file.\tnone\tno druid.metadata.postgres.ssl.sslRootCert\tThe full path to the root certificate.\tnone\tno druid.metadata.postgres.ssl.sslHostNameVerifier\tThe classname of the hostname verifier.\tnone\tno druid.metadata.postgres.ssl.sslPasswordCallback\tThe classname of the SSL password provider.\tnone\tno druid.metadata.postgres.dbTableSchema\tdruid meta table schema\tpublic\tno "},{"title":"PostgreSQL Firehose","type":1,"pageTitle":"PostgreSQL Metadata Store","url":"/docs/27.0.0/development/extensions-core/postgresql#postgresql-firehose","content":"The PostgreSQL extension provides an implementation of an SQL input source which can be used to ingest data into Druid from a PostgreSQL database. { "type": "index_parallel", "spec": { "dataSchema": { "dataSource": "some_datasource", "dimensionsSpec": { "dimensionExclusions": [], "dimensions": [ "dim1", "dim2", "dim3" ] }, "timestampSpec": { "format": "auto", "column": "ts" }, "metricsSpec": [], "granularitySpec": { "type": "uniform", "segmentGranularity": "DAY", "queryGranularity": { "type": "none" }, "rollup": false, "intervals": null }, "transformSpec": { "filter": null, "transforms": [] } }, "ioConfig": { "type": "index_parallel", "inputSource": { "type": "sql", "database": { "type": "postgresql", "connectorConfig": { "connectURI": "jdbc:postgresql://some-rds-host.us-west-1.rds.amazonaws.com:5432/druid", "user": "admin", "password": "secret" } }, "sqls": [ "SELECT * FROM some_table" ] }, "inputFormat": { "type": "json" } }, "tuningConfig": { "type": "index_parallel" } } } "},{"title":"Protobuf","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/protobuf","content":"","keywords":""},{"title":"Example: Load Protobuf messages from Kafka","type":1,"pageTitle":"Protobuf","url":"/docs/27.0.0/development/extensions-core/protobuf#example-load-protobuf-messages-from-kafka","content":"This example demonstrates how to load Protobuf messages from Kafka. Please read the Load from Kafka tutorial first, and see Kafka Indexing Service documentation for more details. The files used in this example are found at ./examples/quickstart/protobuf in your Druid directory. For this example: Kafka broker host is localhost:9092Kafka topic is metrics_pbDatasource name is metrics-protobuf Here is a JSON example of the 'metrics' data schema used in the example. { "unit": "milliseconds", "http_method": "GET", "value": 44, "timestamp": "2017-04-06T02:36:22Z", "http_code": "200", "page": "/", "metricType": "request/latency", "server": "www1.example.com" } "},{"title":"Proto file","type":1,"pageTitle":"Protobuf","url":"/docs/27.0.0/development/extensions-core/protobuf#proto-file","content":"The corresponding proto file for our 'metrics' dataset looks like this. You can use Protobuf inputFormat with a proto file or Confluent Schema Registry. 
syntax = "proto3"; message Metrics { string unit = 1; string http_method = 2; int32 value = 3; string timestamp = 4; string http_code = 5; string page = 6; string metricType = 7; string server = 8; } "},{"title":"When using a descriptor file","type":1,"pageTitle":"Protobuf","url":"/docs/27.0.0/development/extensions-core/protobuf#when-using-a-descriptor-file","content":"Next, we use the protoc Protobuf compiler to generate the descriptor file and save it as metrics.desc. The descriptor file must be either in the classpath or reachable by URL. In this example the descriptor file was saved at /tmp/metrics.desc, however this file is also available in the example files. From your Druid install directory: protoc -o /tmp/metrics.desc ./quickstart/protobuf/metrics.proto "},{"title":"When using Schema Registry","type":1,"pageTitle":"Protobuf","url":"/docs/27.0.0/development/extensions-core/protobuf#when-using-schema-registry","content":"Make sure your Schema Registry version is later than 5.5. Next, we can post a schema to add it to the registry: POST /subjects/test/versions HTTP/1.1 Host: schemaregistry.example1.com Accept: application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json { "schemaType": "PROTOBUF", "schema": "syntax = \\"proto3\\";\\nmessage Metrics {\\n string unit = 1;\\n string http_method = 2;\\n int32 value = 3;\\n string timestamp = 4;\\n string http_code = 5;\\n string page = 6;\\n string metricType = 7;\\n string server = 8;\\n}\\n" } This feature uses Confluent's Protobuf provider which is not included in the Druid distribution and must be installed separately. You can fetch it and its dependencies from the Confluent repository and Maven Central at: https://packages.confluent.io/maven/io/confluent/kafka-protobuf-provider/6.0.1/kafka-protobuf-provider-6.0.1.jarhttps://repo1.maven.org/maven2/org/jetbrains/kotlin/kotlin-stdlib/1.4.0/kotlin-stdlib-1.4.0.jarhttps://repo1.maven.org/maven2/com/squareup/wire/wire-schema/3.2.2/wire-schema-3.2.2.jar Copy or symlink those files inside the folder extensions/protobuf-extensions under the distribution root directory. "},{"title":"Create Kafka Supervisor","type":1,"pageTitle":"Protobuf","url":"/docs/27.0.0/development/extensions-core/protobuf#create-kafka-supervisor","content":"Below is the complete Supervisor spec JSON to be submitted to the Overlord. Make sure these keys are properly configured for successful ingestion. 
"},{"title":"When using a descriptor file","type":1,"pageTitle":"Protobuf","url":"/docs/27.0.0/development/extensions-core/protobuf#when-using-a-descriptor-file-1","content":"Important supervisor properties protoBytesDecoder.descriptor for the descriptor file URLprotoBytesDecoder.protoMessageType from the proto definitionprotoBytesDecoder.type set to file, indicate use descriptor file to decode Protobuf fileinputFormat should have type set to protobuf { "type": "kafka", "spec": { "dataSchema": { "dataSource": "metrics-protobuf", "timestampSpec": { "column": "timestamp", "format": "auto" }, "dimensionsSpec": { "dimensions": [ "unit", "http_method", "http_code", "page", "metricType", "server" ], "dimensionExclusions": [ "timestamp", "value" ] }, "metricsSpec": [ { "name": "count", "type": "count" }, { "name": "value_sum", "fieldName": "value", "type": "doubleSum" }, { "name": "value_min", "fieldName": "value", "type": "doubleMin" }, { "name": "value_max", "fieldName": "value", "type": "doubleMax" } ], "granularitySpec": { "type": "uniform", "segmentGranularity": "HOUR", "queryGranularity": "NONE" } }, "tuningConfig": { "type": "kafka", "maxRowsPerSegment": 5000000 }, "ioConfig": { "topic": "metrics_pb", "consumerProperties": { "bootstrap.servers": "localhost:9092" }, "inputFormat": { "type": "protobuf", "protoBytesDecoder": { "type": "file", "descriptor": "file:///tmp/metrics.desc", "protoMessageType": "Metrics" }, "flattenSpec": { "useFieldDiscovery": true }, "binaryAsString": false }, "taskCount": 1, "replicas": 1, "taskDuration": "PT1H", "type": "kafka" } } } To adopt to old version. You can use old parser style, which also works. { "parser": { "type": "protobuf", "descriptor": "file:///tmp/metrics.desc", "protoMessageType": "Metrics" } } "},{"title":"When using Schema Registry","type":1,"pageTitle":"Protobuf","url":"/docs/27.0.0/development/extensions-core/protobuf#when-using-schema-registry-1","content":"Important supervisor properties protoBytesDecoder.url for the schema registry URL with single instance.protoBytesDecoder.urls for the schema registry URLs with multi instances.protoBytesDecoder.capacity capacity for schema registry cached schemas.protoBytesDecoder.config to send additional configurations, configured for Schema Registry.protoBytesDecoder.headers to send headers to the Schema Registry.protoBytesDecoder.type set to schema_registry, indicate use schema registry to decode Protobuf file.parser should have type set to protobuf, but note that the format of the parseSpec must be json. { "parser": { "type": "protobuf", "protoBytesDecoder": { "urls": ["http://schemaregistry.example1.com:8081","http://schemaregistry.example2.com:8081"], "type": "schema_registry", "capacity": 100, "config" : { "basic.auth.credentials.source": "USER_INFO", "basic.auth.user.info": "fred:letmein", "schema.registry.ssl.truststore.location": "/some/secrets/kafka.client.truststore.jks", "schema.registry.ssl.truststore.password": "<password>", "schema.registry.ssl.keystore.location": "/some/secrets/kafka.client.keystore.jks", "schema.registry.ssl.keystore.password": "<password>", "schema.registry.ssl.key.password": "<password>", ... }, "headers": { "traceID" : "b29c5de2-0db4-490b-b421", "timeStamp" : "1577191871865", ... 
} } } } "},{"title":"Adding Protobuf messages to Kafka","type":1,"pageTitle":"Protobuf","url":"/docs/27.0.0/development/extensions-core/protobuf#adding-protobuf-messages-to-kafka","content":"If necessary, from your Kafka installation directory run the following command to create the Kafka topic ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic metrics_pb This example script requires protobuf and kafka-python modules. With the topic in place, messages can be inserted running the following command from your Druid installation directory ./bin/generate-example-metrics | python /quickstart/protobuf/pb_publisher.py You can confirm that data has been inserted to your Kafka topic using the following command from your Kafka installation directory ./bin/kafka-console-consumer --zookeeper localhost --topic metrics_pb which should print messages like this millisecondsGETR"2017-04-06T03:23:56Z*2002/list:request/latencyBwww1.example.com If your supervisor created in the previous step is running, the indexing tasks should begin producing the messages and the data will soon be available for querying in Druid. "},{"title":"Generating the example files","type":1,"pageTitle":"Protobuf","url":"/docs/27.0.0/development/extensions-core/protobuf#generating-the-example-files","content":"The files provided in the example quickstart can be generated in the following manner starting with only metrics.proto. "},{"title":"metrics.desc","type":1,"pageTitle":"Protobuf","url":"/docs/27.0.0/development/extensions-core/protobuf#metricsdesc","content":"The descriptor file is generated using protoc Protobuf compiler. Given a .proto file, a .desc file can be generated like so. protoc -o metrics.desc metrics.proto "},{"title":"metrics_pb2.py","type":1,"pageTitle":"Protobuf","url":"/docs/27.0.0/development/extensions-core/protobuf#metrics_pb2py","content":"metrics_pb2.py is also generated with protoc protoc -o metrics.desc metrics.proto --python_out=. "},{"title":"pb_publisher.py","type":1,"pageTitle":"Protobuf","url":"/docs/27.0.0/development/extensions-core/protobuf#pb_publisherpy","content":"After metrics_pb2.py is generated, another script can be constructed to parse JSON data, convert it to Protobuf, and produce to a Kafka topic #!/usr/bin/env python import sys import json from kafka import KafkaProducer from metrics_pb2 import Metrics producer = KafkaProducer(bootstrap_servers='localhost:9092') topic = 'metrics_pb' for row in iter(sys.stdin): d = json.loads(row) metrics = Metrics() for k, v in d.items(): setattr(metrics, k, v) pb = metrics.SerializeToString() producer.send(topic, pb) producer.flush() "},{"title":"Globally Cached Lookups","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/lookups-cached-global","content":"","keywords":""},{"title":"Configuration","type":1,"pageTitle":"Globally Cached Lookups","url":"/docs/27.0.0/development/extensions-core/lookups-cached-global#configuration","content":"info Static configuration is no longer supported. Lookups can be configured throughdynamic configuration. Globally cached lookups are appropriate for lookups which are not possible to pass at query time due to their size, or are not desired to be passed at query time because the data is to reside in and be handled by the Druid servers, and are small enough to reasonably populate in-memory. This usually means tens to tens of thousands of entries per lookup. 
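Because static configuration is no longer supported, the cachedNamespace specs shown below are registered through the Coordinator's lookup API (the dynamic configuration mentioned above). The following is a minimal sketch rather than an official example: it assumes a Coordinator at localhost:8081, reuses the realtime_customer2 tier and country_code lookup name from the example configuration later in this section, and uses the third-party Python requests library for brevity.

```python
# Sketch: register a cachedNamespace lookup via the Coordinator lookup API.
# The Coordinator address, tier, lookup name, and URI below are illustrative.
import requests

COORDINATOR = "http://localhost:8081"  # assumed Coordinator host:port

payload = {
    "realtime_customer2": {              # lookup tier
        "country_code": {                # lookup name
            "version": "v0",
            "lookupExtractorFactory": {
                "type": "cachedNamespace",
                "extractionNamespace": {
                    "type": "uri",
                    "uri": "file:/tmp/prefix/renames-0003.gz",
                    "namespaceParseSpec": {"format": "csv", "columns": ["key", "value"]},
                    "pollPeriod": "PT5M"
                },
                "firstCacheTimeout": 0
            }
        }
    }
}

resp = requests.post(f"{COORDINATOR}/druid/coordinator/v1/lookups/config", json=payload)
resp.raise_for_status()
```

The cachedNamespace payload itself follows the forms described next, and the processes that serve queries must still load the druid-lookups-cached-global extension as noted below.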
Globally cached lookups all draw from the same cache pool, allowing each process to have a fixed cache pool that can be used by cached lookups. Globally cached lookups can be specified as part of the cluster-wide config for lookups as a type of cachedNamespace { "type": "cachedNamespace", "extractionNamespace": { "type": "uri", "uri": "file:/tmp/prefix/", "namespaceParseSpec": { "format": "csv", "columns": ["key", "value"] }, "pollPeriod": "PT5M" }, "firstCacheTimeout": 0 } { "type": "cachedNamespace", "extractionNamespace": { "type": "jdbc", "connectorConfig": { "connectURI": "jdbc:mysql://localhost:3306/druid", "user": "druid", "password": "diurd" }, "table": "lookupTable", "keyColumn": "mykeyColumn", "valueColumn": "myValueColumn", "filter" : "myFilterSQL (Where clause statement e.g LOOKUPTYPE=1)", "tsColumn": "timeColumn" }, "firstCacheTimeout": 120000, "injective":true } The parameters are as follows: Property\tDescription\tRequired\tDefaultextractionNamespace\tSpecifies how to populate the local cache. See below\tYes\t- firstCacheTimeout\tHow long to wait (in ms) for the first run of the cache to populate. 0 indicates to not wait\tNo\t0 (do not wait) injective\tIf the underlying map is injective (keys and values are unique) then optimizations can occur internally by setting this to true\tNo\tfalse If firstCacheTimeout is set to a non-zero value, it should be less than druid.manager.lookups.hostUpdateTimeout. If firstCacheTimeout is NOT set, then management is essentially asynchronous and does not know if a lookup succeeded or failed in starting. In such a case, logs from the processes using lookups should be monitored for repeated failures. Proper functionality of globally cached lookups requires the following extension to be loaded on the Broker, Peon, and Historical processes: druid-lookups-cached-global "},{"title":"Example configuration","type":1,"pageTitle":"Globally Cached Lookups","url":"/docs/27.0.0/development/extensions-core/lookups-cached-global#example-configuration","content":"In a simple case where only one tier exists (realtime_customer2) with one cachedNamespace lookup called country_code, the resulting configuration JSON looks similar to the following: { "realtime_customer2": { "country_code": { "version": "v0", "lookupExtractorFactory": { "type": "cachedNamespace", "extractionNamespace": { "type": "jdbc", "connectorConfig": { "connectURI": "jdbc:mysql://localhost:3306/druid", "user": "druid", "password": "diurd" }, "table": "lookupValues", "keyColumn": "value_id", "valueColumn": "value_text", "filter": "value_type='country'", "tsColumn": "timeColumn" }, "firstCacheTimeout": 120000, "injective": true } } } } Where the Coordinator endpoint /druid/coordinator/v1/lookups/realtime_customer2/country_code should return { "version": "v0", "lookupExtractorFactory": { "type": "cachedNamespace", "extractionNamespace": { "type": "jdbc", "connectorConfig": { "connectURI": "jdbc:mysql://localhost:3306/druid", "user": "druid", "password": "diurd" }, "table": "lookupValues", "keyColumn": "value_id", "valueColumn": "value_text", "filter": "value_type='country'", "tsColumn": "timeColumn" }, "firstCacheTimeout": 120000, "injective": true } } "},{"title":"Cache Settings","type":1,"pageTitle":"Globally Cached Lookups","url":"/docs/27.0.0/development/extensions-core/lookups-cached-global#cache-settings","content":"Lookups are cached locally on Historical processes. 
The following are settings used by the processes which service queries when setting namespaces (Broker, Peon, Historical). Property\tDescription\tDefaultdruid.lookup.namespace.cache.type\tSpecifies the type of caching to be used by the namespaces. May be one of [offHeap, onHeap]. offHeap uses a temporary file for off-heap storage of the namespace (memory mapped files). onHeap stores all cache on the heap in standard java map types.\tonHeap druid.lookup.namespace.numExtractionThreads\tThe number of threads in the thread pool dedicated for lookup extraction and updates. This number may need to be scaled up, if you have a lot of lookups and they take a long time to extract, to avoid timeouts.\t2 druid.lookup.namespace.numBufferedEntries\tIf using off-heap caching, the number of records to be stored in an on-heap buffer.\t100,000 The cache is populated in different ways depending on the settings below. In general, most namespaces employ a pollPeriod at the end of which time they poll the remote resource of interest for updates. onHeap uses ConcurrentMaps in the java heap, and thus affects garbage collection and heap sizing. offHeap uses an on-heap buffer and MapDB using memory-mapped files in the java temporary directory. So if the total number of entries in the cachedNamespace is in excess of the buffer's configured capacity, the extra will be kept in memory as page cache, and paged in and out by general OS tunings. It's highly recommended that druid.lookup.namespace.numBufferedEntries is set when using offHeap; the value should be chosen from the range between 10% and 50% of the number of entries in the lookup. "},{"title":"Supported lookups","type":1,"pageTitle":"Globally Cached Lookups","url":"/docs/27.0.0/development/extensions-core/lookups-cached-global#supported-lookups","content":"For additional lookups, please see our extensions list. "},{"title":"URI lookup","type":1,"pageTitle":"Globally Cached Lookups","url":"/docs/27.0.0/development/extensions-core/lookups-cached-global#uri-lookup","content":"The remapping values for each globally cached lookup can be specified by a JSON object as per the following examples: { "type":"uri", "uri": "s3://bucket/some/key/prefix/renames-0003.gz", "namespaceParseSpec":{ "format":"csv", "columns": ["key", "value"] }, "pollPeriod":"PT5M" } { "type":"uri", "uriPrefix": "s3://bucket/some/key/prefix/", "fileRegex":"renames-[0-9]*\\\\.gz", "namespaceParseSpec":{ "format":"csv", "columns": ["key", "value"] }, "pollPeriod":"PT5M", "maxHeapPercentage": 10 } Property\tDescription\tRequired\tDefaultpollPeriod\tPeriod between polling for updates\tNo\t0 (only once) uri\tURI for the file of interest, specified as a file, hdfs, s3 or gs path\tNo\tUse uriPrefix uriPrefix\tA URI that specifies a directory (or other searchable resource) in which to search for files\tNo\tUse uri fileRegex\tOptional regex for matching the file name under uriPrefix. Only used if uriPrefix is used\tNo\t".*" namespaceParseSpec\tHow to interpret the data at the URI\tYes maxHeapPercentage\tThe maximum percentage of heap size that the lookup should consume. If the lookup grows beyond this size, warning messages will be logged in the respective service logs.\tNo\t10% of JVM heap size One of either uri or uriPrefix must be specified, as either a local file system (file://), HDFS (hdfs://), S3 (s3://) or GCS (gs://) location. HTTP location is not currently supported. 
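To make the uri/uriPrefix examples above concrete, the sketch below writes a gzipped key/value CSV of the shape the csv namespaceParseSpec expects. The /tmp/prefix directory and renames-0003.gz file name are illustrative assumptions that mirror the examples on this page, and the foo/bar style mappings are the sample renames used in the parseSpec examples that follow.

```python
# Sketch: produce a gzipped two-column CSV that a uri or uriPrefix
# cachedNamespace lookup could poll. Paths and file names are assumptions.
import csv
import gzip
from pathlib import Path

renames = {"foo": "bar", "baz": "bat", "buck": "truck"}  # sample mappings

out_dir = Path("/tmp/prefix")
out_dir.mkdir(parents=True, exist_ok=True)

with gzip.open(out_dir / "renames-0003.gz", "wt", newline="") as f:
    writer = csv.writer(f)
    for key, value in renames.items():
        writer.writerow([key, value])  # first column = key, second = value
```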
The pollPeriod value specifies the period in ISO 8601 format between checks for replacement data for the lookup. If the source of the lookup is capable of providing a timestamp, the lookup will only be updated if it has changed since the prior tick of pollPeriod. A value of 0, an absent parameter, or null all mean populate once and do not attempt to look for new data later. Whenever a poll occurs, the updating system will look for a file with the most recent timestamp and assume that it is the one with the most recent data set, replacing the local cache of the lookup data. The namespaceParseSpec can be one of a number of values. Each of the examples below would rename foo to bar, baz to bat, and buck to truck. All parseSpec types assume each input is delimited by a new line. See below for the types of parseSpec supported. Only ONE file which matches the search will be used. For most implementations, the discriminator for choosing among the URIs is whichever one reports the most recent timestamp for its modification time. csv lookupParseSpec Parameter\tDescription\tRequired\tDefaultcolumns\tThe list of columns in the csv file\tno if hasHeaderRow is set\tnull keyColumn\tThe name of the column containing the key\tno\tThe first column valueColumn\tThe name of the column containing the value\tno\tThe second column hasHeaderRow\tA flag to indicate that column information can be extracted from the input files' header row\tno\tfalse skipHeaderRows\tNumber of header rows to be skipped\tno\t0 If both skipHeaderRows and hasHeaderRow options are set, skipHeaderRows is first applied. For example, if you set skipHeaderRows to 2 and hasHeaderRow to true, Druid will skip the first two lines and then extract column information from the third line. example input bar,something,foo bat,something2,baz truck,something3,buck example namespaceParseSpec "namespaceParseSpec": { "format": "csv", "columns": ["value","somethingElse","key"], "keyColumn": "key", "valueColumn": "value" } tsv lookupParseSpec Parameter\tDescription\tRequired\tDefaultcolumns\tThe list of columns in the tsv file\tyes\tnull keyColumn\tThe name of the column containing the key\tno\tThe first column valueColumn\tThe name of the column containing the value\tno\tThe second column delimiter\tThe delimiter in the file\tno\ttab (\\t) listDelimiter\tThe list delimiter in the file\tno\t(\\u0001) hasHeaderRow\tA flag to indicate that column information can be extracted from the input files' header row\tno\tfalse skipHeaderRows\tNumber of header rows to be skipped\tno\t0 If both skipHeaderRows and hasHeaderRow options are set, skipHeaderRows is first applied. For example, if you set skipHeaderRows to 2 and hasHeaderRow to true, Druid will skip the first two lines and then extract column information from the third line. 
example input bar|something,1|foo bat|something,2|baz truck|something,3|buck example namespaceParseSpec "namespaceParseSpec": { "format": "tsv", "columns": ["value","somethingElse","key"], "keyColumn": "key", "valueColumn": "value", "delimiter": "|" } customJson lookupParseSpec Parameter\tDescription\tRequired\tDefaultkeyFieldName\tThe field name of the key\tyes\tnull valueFieldName\tThe field name of the value\tyes\tnull example input {"key": "foo", "value": "bar", "somethingElse" : "something"} {"key": "baz", "value": "bat", "somethingElse" : "something"} {"key": "buck", "somethingElse": "something", "value": "truck"} example namespaceParseSpec "namespaceParseSpec": { "format": "customJson", "keyFieldName": "key", "valueFieldName": "value" } With customJson parsing, if the value field for a particular row is missing or null then that line will be skipped, and will not be included in the lookup. simpleJson lookupParseSpec The simpleJson lookupParseSpec does not take any parameters. It is simply a line delimited JSON file where the field is the key, and the field's value is the value. example input {"foo": "bar"} {"baz": "bat"} {"buck": "truck"} example namespaceParseSpec "namespaceParseSpec":{ "format": "simpleJson" } "},{"title":"JDBC lookup","type":1,"pageTitle":"Globally Cached Lookups","url":"/docs/27.0.0/development/extensions-core/lookups-cached-global#jdbc-lookup","content":"The JDBC lookups will poll a database to populate its local cache. If the tsColumn is set it must be able to accept comparisons in the format '2015-01-01 00:00:00'. For example, the following must be valid SQL for the table SELECT * FROM some_lookup_table WHERE timestamp_column > '2015-01-01 00:00:00'. If tsColumn is set, the caching service will attempt to only poll values that were written after the last sync. If tsColumn is not set, the entire table is pulled every time. Parameter\tDescription\tRequired\tDefaultconnectorConfig\tThe connector config to use. You can set connectURI, user and password. You can selectively allow JDBC properties in connectURI. See JDBC connections security config for more details.\tYes table\tThe table which contains the key value pairs\tYes keyColumn\tThe column in table which contains the keys\tYes valueColumn\tThe column in table which contains the values\tYes filter\tThe filter to use when selecting lookups, this is used to create a where clause on lookup population\tNo\tNo Filter tsColumn\tThe column in table which contains when the key was updated\tNo\tNot used pollPeriod\tHow often to poll the DB\tNo\t0 (only once) maxHeapPercentage\tThe maximum percentage of heap size that the lookup should consume. If the lookup grows beyond this size, warning messages will be logged in the respective service logs.\tNo\t10% of JVM heap size { "type":"jdbc", "connectorConfig":{ "connectURI":"jdbc:mysql://localhost:3306/druid", "user":"druid", "password":"diurd" }, "table":"some_lookup_table", "keyColumn":"the_old_dim_value", "valueColumn":"the_new_dim_value", "tsColumn":"timestamp_column", "pollPeriod":600000, "maxHeapPercentage": 10 } info If using JDBC, you will need to add your database's client JAR files to the extension's directory. For Postgres, the connector JAR is already included. See the MySQL extension documentation for instructions to obtain MySQL or MariaDB connector libraries. The connector JAR should reside in the classpath of Druid's main class loader. To add the connector JAR to the classpath, you can copy the downloaded file to lib/ under the distribution root directory. 
Alternatively, create a symbolic link to the connector in the lib directory. "},{"title":"Introspection","type":1,"pageTitle":"Globally Cached Lookups","url":"/docs/27.0.0/development/extensions-core/lookups-cached-global#introspection","content":"Globally cached lookups have introspection points at /keys and /values which return a complete set of the keys and values (respectively) in the lookup. Introspection to / returns the entire map. Introspection to /version returns the version indicator for the lookup. "},{"title":"S3-compatible","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/s3","content":"","keywords":""},{"title":"S3 extension","type":1,"pageTitle":"S3-compatible","url":"/docs/27.0.0/development/extensions-core/s3#s3-extension","content":"This extension allows you to do 2 things: Ingest data from files stored in S3.Write segments to deep storage in S3. To use this Apache Druid extension, include druid-s3-extensions in the extensions load list. "},{"title":"Reading data from S3","type":1,"pageTitle":"S3-compatible","url":"/docs/27.0.0/development/extensions-core/s3#reading-data-from-s3","content":"Use a native batch Parallel task with an S3 input source to read objects directly from S3. Alternatively, use a Hadoop task, and specify S3 paths in your inputSpec. To read objects from S3, you must supply connection information in configuration. "},{"title":"Deep Storage","type":1,"pageTitle":"S3-compatible","url":"/docs/27.0.0/development/extensions-core/s3#deep-storage","content":"S3-compatible deep storage means either AWS S3 or a compatible service like Google Storage which exposes the same API as S3. S3 deep storage needs to be explicitly enabled by setting druid.storage.type=s3. Only after setting the storage type to S3 will any of the settings below take effect. To use S3 for Deep Storage, you must supply connection information in configuration and set additional configuration, specific for Deep Storage. Deep storage specific configuration Property\tDescription\tDefaultdruid.storage.bucket\tBucket to store in.\tMust be set. druid.storage.baseKey\tA prefix string that will be prepended to the object names for the segments published to S3 deep storage\tMust be set. druid.storage.type\tGlobal deep storage provider. Must be set to s3 to make use of this extension.\tMust be set (likely s3). druid.storage.archiveBucket\tS3 bucket name for archiving when running the archive task.\tnone druid.storage.archiveBaseKey\tS3 object key prefix for archiving.\tnone druid.storage.disableAcl\tBoolean flag for how object permissions are handled. To use ACLs, set this property to false. To use Object Ownership, set it to true. The permission requirements for ACLs and Object Ownership are different. For more information, see S3 permissions settings.\tfalse druid.storage.useS3aSchema\tIf true, use the "s3a" filesystem when using Hadoop-based ingestion. If false, the "s3n" filesystem will be used. Only affects Hadoop-based ingestion.\tfalse "},{"title":"Configuration","type":1,"pageTitle":"S3-compatible","url":"/docs/27.0.0/development/extensions-core/s3#configuration","content":""},{"title":"S3 authentication methods","type":1,"pageTitle":"S3-compatible","url":"/docs/27.0.0/development/extensions-core/s3#s3-authentication-methods","content":"You can provide credentials to connect to S3 in a number of ways, whether for deep storage or as an ingestion source. The configuration options are listed in order of precedence. 
For example, if you would like to use profile information given in ~/.aws/credentials, do not set druid.s3.accessKey and druid.s3.secretKey in your Druid config file because they would take precedence. order\ttype\tdetails1\tDruid config file\tBased on your runtime.properties if it contains values druid.s3.accessKey and druid.s3.secretKey 2\tCustom properties file\tBased on custom properties file where you can supply sessionToken, accessKey and secretKey values. This file is provided to Druid through druid.s3.fileSessionCredentials properties 3\tEnvironment variables\tBased on environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY 4\tJava system properties\tBased on JVM properties aws.accessKeyId and aws.secretKey 5\tProfile information\tBased on credentials you may have on your druid instance (generally in ~/.aws/credentials) 6\tECS container credentials\tBased on environment variables available on AWS ECS (AWS_CONTAINER_CREDENTIALS_RELATIVE_URI or AWS_CONTAINER_CREDENTIALS_FULL_URI) as described in the EC2ContainerCredentialsProviderWrapper documentation 7\tInstance profile information\tBased on the instance profile you may have attached to your druid instance For more information, refer to the Amazon Developer Guide. Alternatively, you can bypass this chain by specifying an access key and secret key using a Properties Object inside your ingestion specification. Use the property druid.startup.logging.maskProperties to mask credentials information in Druid logs. For example, ["password", "secretKey", "awsSecretAccessKey"]. "},{"title":"S3 permissions settings","type":1,"pageTitle":"S3-compatible","url":"/docs/27.0.0/development/extensions-core/s3#s3-permissions-settings","content":"To manage the permissions for objects in an S3 bucket, you can use either ACLs or Object Ownership. The permissions required for each method are different. By default, Druid uses ACLs. With ACLs, any object that Druid puts into the bucket inherits the ACL settings from the bucket. You can switch from using ACLs to Object Ownership by setting druid.storage.disableAcl to true. The bucket owner owns any object that gets created, so you need to use S3's bucket policies to manage permissions. Note that this setting only affects Druid's behavior. Changing S3 to use Object Ownership requires additional configuration. For more information, see the AWS documentation on Controlling ownership of objects and disabling ACLs for your bucket. ACL permissions If you're using ACLs, Druid needs the following permissions: s3:GetObjects3:PutObjects3:DeleteObjects3:GetBucketAcls3:PutObjectAcl Object Ownership permissions If you're using Object Ownership, Druid needs the following permissions: s3:GetObjects3:PutObjects3:DeleteObject "},{"title":"AWS region","type":1,"pageTitle":"S3-compatible","url":"/docs/27.0.0/development/extensions-core/s3#aws-region","content":"The AWS SDK requires that a target region be specified. You can set these by using the JVM system property aws.region or by setting an environment variable AWS_REGION. For example, to set the region to 'us-east-1' through system properties: Add -Daws.region=us-east-1 to the jvm.config file for all Druid services.Add -Daws.region=us-east-1 to druid.indexer.runner.javaOpts in Middle Manager configuration so that the property will be passed to Peon (worker) processes. 
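When credentials are defined in more than one of the places listed earlier, the precedence order decides which source wins. The sketch below is purely illustrative and is not Druid code: the dictionaries stand in for runtime.properties values and -D JVM flags, and only the property and environment variable names come from the table above.

```python
# Illustrative walk through the documented S3 credential precedence order.
import os
from pathlib import Path

def first_credential_source(runtime_props: dict, jvm_props: dict) -> str:
    if runtime_props.get("druid.s3.accessKey") and runtime_props.get("druid.s3.secretKey"):
        return "1. Druid config file"
    if runtime_props.get("druid.s3.fileSessionCredentials"):
        return "2. Custom properties file"
    if os.getenv("AWS_ACCESS_KEY_ID") and os.getenv("AWS_SECRET_ACCESS_KEY"):
        return "3. Environment variables"
    if jvm_props.get("aws.accessKeyId") and jvm_props.get("aws.secretKey"):
        return "4. Java system properties"
    if (Path.home() / ".aws" / "credentials").exists():
        return "5. Profile information"
    return "6/7. ECS container credentials or instance profile (resolved by the AWS SDK)"

# Example: only a session-credentials file is configured in runtime.properties,
# so source 1 is skipped and source 2 wins regardless of later sources.
print(first_credential_source({"druid.s3.fileSessionCredentials": "/path/to/s3.properties"}, {}))
```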
"},{"title":"Connecting to S3 configuration","type":1,"pageTitle":"S3-compatible","url":"/docs/27.0.0/development/extensions-core/s3#connecting-to-s3-configuration","content":"Property\tDescription\tDefaultdruid.s3.accessKey\tS3 access key. See S3 authentication methods for more details\tCan be omitted according to authentication methods chosen. druid.s3.secretKey\tS3 secret key. See S3 authentication methods for more details\tCan be omitted according to authentication methods chosen. druid.s3.fileSessionCredentials\tPath to properties file containing sessionToken, accessKey and secretKey value. One key/value pair per line (format key=value). See S3 authentication methods for more details\tCan be omitted according to authentication methods chosen. druid.s3.protocol\tCommunication protocol type to use when sending requests to AWS. http or https can be used. This configuration would be ignored if druid.s3.endpoint.url is filled with a URL with a different protocol.\thttps druid.s3.disableChunkedEncoding\tDisables chunked encoding. See AWS document for details.\tfalse druid.s3.enablePathStyleAccess\tEnables path style access. See AWS document for details.\tfalse druid.s3.forceGlobalBucketAccessEnabled\tEnables global bucket access. See AWS document for details.\tfalse druid.s3.endpoint.url\tService endpoint either with or without the protocol.\tNone druid.s3.endpoint.signingRegion\tRegion to use for SigV4 signing of requests (e.g. us-west-1).\tNone druid.s3.proxy.host\tProxy host to connect through.\tNone druid.s3.proxy.port\tPort on the proxy host to connect through.\tNone druid.s3.proxy.username\tUser name to use when connecting through a proxy.\tNone druid.s3.proxy.password\tPassword to use when connecting through a proxy.\tNone druid.storage.sse.type\tServer-side encryption type. Should be one of s3, kms, and custom. See the below Server-side encryption section for more details.\tNone druid.storage.sse.kms.keyId\tAWS KMS key ID. This is used only when druid.storage.sse.type is kms and can be empty to use the default key ID.\tNone druid.storage.sse.custom.base64EncodedKey\tBase64-encoded key. Should be specified if druid.storage.sse.type is custom.\tNone "},{"title":"Server-side encryption","type":1,"pageTitle":"S3-compatible","url":"/docs/27.0.0/development/extensions-core/s3#server-side-encryption","content":"You can enable server-side encryption by settingdruid.storage.sse.type to a supported type of server-side encryption. The current supported types are: s3: Server-side encryption with S3-managed encryption keyskms: Server-side encryption with AWS KMS–Managed Keyscustom: Server-side encryption with Customer-Provided Encryption Keys "},{"title":"Simple SSLContext Provider Module","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/simple-client-sslcontext","content":"Simple SSLContext Provider Module This Apache Druid module contains a simple implementation of SSLContextthat will be injected to be used with HttpClient that Druid processes use internally to communicate with each other. To learn more about Java's SSL support, please refer to this guide. 
Property\tDescription\tDefault\tRequireddruid.client.https.protocol\tSSL protocol to use.\tTLSv1.2\tno druid.client.https.trustStoreType\tThe type of the key store where trusted root certificates are stored.\tjava.security.KeyStore.getDefaultType()\tno druid.client.https.trustStorePath\tThe file path or URL of the TLS/SSL Key store where trusted root certificates are stored.\tnone\tyes druid.client.https.trustStoreAlgorithm\tAlgorithm to be used by TrustManager to validate certificate chains\tjavax.net.ssl.TrustManagerFactory.getDefaultAlgorithm()\tno druid.client.https.trustStorePassword\tThe Password Provider or String password for the Trust Store.\tnone\tyes The following table contains optional parameters for supporting client certificate authentication: Property\tDescription\tDefault\tRequireddruid.client.https.keyStorePath\tThe file path or URL of the TLS/SSL Key store containing the client certificate that Druid will use when communicating with other Druid services. If this is null, the other properties in this table are ignored.\tnone\tyes druid.client.https.keyStoreType\tThe type of the key store.\tnone\tyes druid.client.https.certAlias\tAlias of TLS client certificate in the keystore.\tnone\tyes druid.client.https.keyStorePassword\tThe Password Provider or String password for the Key Store.\tnone\tno druid.client.https.keyManagerFactoryAlgorithm\tAlgorithm to use for creating KeyManager, more details here.\tjavax.net.ssl.KeyManagerFactory.getDefaultAlgorithm()\tno druid.client.https.keyManagerPassword\tThe Password Provider or String password for the Key Manager.\tnone\tno druid.client.https.validateHostnames\tValidate the hostname of the server. This should not be disabled unless you are using custom TLS certificate checks and know that standard hostname validation is not needed.\ttrue\tno This document lists all the possible values for the above mentioned configs among others provided by Java implementation.","keywords":""},{"title":"Stats aggregator","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/stats","content":"","keywords":""},{"title":"Variance aggregator","type":1,"pageTitle":"Stats aggregator","url":"/docs/27.0.0/development/extensions-core/stats#variance-aggregator","content":"Algorithm of the aggregator is the same with that of apache hive. This is the description in GenericUDAFVariance in hive. Evaluate the variance using the algorithm described by Chan, Golub, and LeVeque in "Algorithms for computing the sample variance: analysis and recommendations" The American Statistician, 37 (1983) pp. 242--247. variance = variance1 + variance2 + n/(m(m+n)) pow(((m/n)*t1 - t2),2) where: variance is sum(x-avg^2) (this is actually n times the variance) and is updated at every step. n is the count of elements in chunk1 m is the count of elements in chunk2 t1 is the sum of elements in chunk1t2 is the sum of elements in chunk2 This algorithm was proven to be numerically stable by J.L. Barlow in "Error analysis of a pairwise summation algorithm to compute sample variance" Numer. Math, 58 (1991) pp. 583--590 info As with all aggregators, the order of operations across segments is non-deterministic. This means that if this aggregator operates with an input type of "float" or "double", the result of the aggregation may not be precisely the same across multiple runs of the query. To produce consistent results, round the variance to a fixed number of decimal places so that the results are precisely the same across query runs. 
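The pairwise merge formula above is easier to verify with concrete numbers. The following is an illustrative Python translation of the merge step (a sketch, not Druid's or Hive's implementation), using the same n/m/t1/t2 naming as the description, where the "variance" being carried is n times the variance, that is sum((x - avg)^2).

```python
# Sketch: merge two (sum of squared deviations, count, sum) chunk summaries
# using the formula quoted above, then sanity-check against a direct pass.

def merge_chunks(var1, n, t1, var2, m, t2):
    """variance = variance1 + variance2 + n/(m(m+n)) * ((m/n)*t1 - t2)^2"""
    merged = var1 + var2 + n / (m * (m + n)) * ((m / n) * t1 - t2) ** 2
    return merged, n + m, t1 + t2

def summarize(xs):
    n, t = len(xs), sum(xs)
    avg = t / n
    return sum((x - avg) ** 2 for x in xs), n, t

chunk1, chunk2 = [1.0, 2.0, 3.0], [10.0, 20.0, 30.0, 40.0]
merged, count, total = merge_chunks(*summarize(chunk1), *summarize(chunk2))

direct, _, _ = summarize(chunk1 + chunk2)
assert abs(merged - direct) < 1e-6   # merging matches a single pass over all rows

print(merged / (count - 1))  # variance_sample
print(merged / count)        # variance_pop
```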
"},{"title":"Pre-aggregating variance at ingestion time","type":1,"pageTitle":"Stats aggregator","url":"/docs/27.0.0/development/extensions-core/stats#pre-aggregating-variance-at-ingestion-time","content":"To use this feature, an "variance" aggregator must be included at indexing time. The ingestion aggregator can only apply to numeric values. If you use "variance" then any input rows missing the value will be considered to have a value of 0. User can specify expected input type as one of "float", "double", "long", "variance" for ingestion, which is by default "float". { "type" : "variance", "name" : <output_name>, "fieldName" : <metric_name>, "inputType" : <input_type>, "estimator" : <string> } To query for results, "variance" aggregator with "variance" input type or simply a "varianceFold" aggregator must be included in the query. { "type" : "varianceFold", "name" : <output_name>, "fieldName" : <metric_name>, "estimator" : <string> } Property\tDescription\tDefaultestimator\tSet "population" to get variance_pop rather than variance_sample, which is default.\tnull "},{"title":"Standard deviation post-aggregator","type":1,"pageTitle":"Stats aggregator","url":"/docs/27.0.0/development/extensions-core/stats#standard-deviation-post-aggregator","content":"To acquire standard deviation from variance, user can use "stddev" post aggregator. { "type": "stddev", "name": "<output_name>", "fieldName": "<aggregator_name>", "estimator": <string> } "},{"title":"Query examples:","type":1,"pageTitle":"Stats aggregator","url":"/docs/27.0.0/development/extensions-core/stats#query-examples","content":""},{"title":"Timeseries query","type":1,"pageTitle":"Stats aggregator","url":"/docs/27.0.0/development/extensions-core/stats#timeseries-query","content":"{ "queryType": "timeseries", "dataSource": "testing", "granularity": "day", "aggregations": [ { "type": "variance", "name": "index_var", "fieldName": "index_var" } ], "intervals": [ "2016-03-01T00:00:00.000/2013-03-20T00:00:00.000" ] } "},{"title":"TopN query","type":1,"pageTitle":"Stats aggregator","url":"/docs/27.0.0/development/extensions-core/stats#topn-query","content":"{ "queryType": "topN", "dataSource": "testing", "dimensions": ["alias"], "threshold": 5, "granularity": "all", "aggregations": [ { "type": "variance", "name": "index_var", "fieldName": "index" } ], "postAggregations": [ { "type": "stddev", "name": "index_stddev", "fieldName": "index_var" } ], "intervals": [ "2016-03-06T00:00:00/2016-03-06T23:59:59" ] } "},{"title":"GroupBy query","type":1,"pageTitle":"Stats aggregator","url":"/docs/27.0.0/development/extensions-core/stats#groupby-query","content":"{ "queryType": "groupBy", "dataSource": "testing", "dimensions": ["alias"], "granularity": "all", "aggregations": [ { "type": "variance", "name": "index_var", "fieldName": "index" } ], "postAggregations": [ { "type": "stddev", "name": "index_stddev", "fieldName": "index_var" } ], "intervals": [ "2016-03-06T00:00:00/2016-03-06T23:59:59" ] } "},{"title":"Test Stats Aggregators","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/extensions-core/test-stats","content":"","keywords":""},{"title":"Z-Score for two sample ztests post aggregator","type":1,"pageTitle":"Test Stats Aggregators","url":"/docs/27.0.0/development/extensions-core/test-stats#z-score-for-two-sample-ztests-post-aggregator","content":"Please refer to https://www.isixsigma.com/tools-templates/hypothesis-testing/making-sense-two-proportions-test/ and http://www.ucs.louisiana.edu/~jcb0773/Berry_statbook/Berry_statbook_chpt6.pdf for 
more details. z = (p1 - p2) / S.E. (assuming the null hypothesis is true) Please see below for p1 and p2. Please note S.E. stands for standard error, where S.E. = sqrt( p1(1 - p1)/n1 + p2(1 - p2)/n2 ) (p1 – p2) is the observed difference between two sample proportions. "},{"title":"zscore2sample post aggregator","type":1,"pageTitle":"Test Stats Aggregators","url":"/docs/27.0.0/development/extensions-core/test-stats#zscore2sample-post-aggregator","content":"zscore2sample: calculate the z-score using a two-sample z-test while converting binary variables (e.g. success or not) to continuous variables (e.g. conversion rate). { "type": "zscore2sample", "name": "<output_name>", "successCount1": <post_aggregator> success count of sample 1, "sample1Size": <post_aggregator> sample 1 size, "successCount2": <post_aggregator> success count of sample 2, "sample2Size" : <post_aggregator> sample 2 size } Please note the post aggregator converts binary variables to continuous variables for two population proportions. Specifically p1 = (successCount1) / (sample size 1) p2 = (successCount2) / (sample size 2) "},{"title":"pvalue2tailedZtest post aggregator","type":1,"pageTitle":"Test Stats Aggregators","url":"/docs/27.0.0/development/extensions-core/test-stats#pvalue2tailedztest-post-aggregator","content":"pvalue2tailedZtest: calculate the p-value of a two-sided z-test from a z-score. pvalue2tailedZtest(zscore) - the input is a z-score which can be calculated using the zscore2sample post aggregator { "type": "pvalue2tailedZtest", "name": "<output_name>", "zScore": <zscore post_aggregator> } "},{"title":"Example Usage","type":1,"pageTitle":"Test Stats Aggregators","url":"/docs/27.0.0/development/extensions-core/test-stats#example-usage","content":"In this example, we use the zscore2sample post aggregator to calculate the z-score, and then feed the z-score to the pvalue2tailedZtest post aggregator to calculate the p-value. A JSON query example can be as follows: { ... "postAggregations" : { "type" : "pvalue2tailedZtest", "name" : "pvalue", "zScore" : { "type" : "zscore2sample", "name" : "zscore", "successCount1" : { "type" : "constant", "name" : "successCountFromPopulation1Sample", "value" : 300 }, "sample1Size" : { "type" : "constant", "name" : "sampleSizeOfPopulation1", "value" : 500 }, "successCount2": { "type" : "constant", "name" : "successCountFromPopulation2Sample", "value" : 450 }, "sample2Size" : { "type" : "constant", "name" : "sampleSizeOfPopulation2", "value" : 600 } } } } "},{"title":"JavaScript programming guide","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/javascript","content":"","keywords":""},{"title":"Examples","type":1,"pageTitle":"JavaScript programming guide","url":"/docs/27.0.0/development/javascript#examples","content":"JavaScript can be used to extend Druid in a variety of ways: aggregators, extraction functions, filters, post-aggregators, input parsers, the Router strategy, and the worker select strategy. JavaScript can be injected dynamically at runtime, making it convenient to rapidly prototype new functionality without needing to write and deploy Druid extensions. Druid uses the Mozilla Rhino engine at optimization level 9 to compile and execute JavaScript. "},{"title":"Security","type":1,"pageTitle":"JavaScript programming guide","url":"/docs/27.0.0/development/javascript#security","content":"Druid does not execute JavaScript functions in a sandbox, so they have full access to the machine. This means JavaScript functions allow users to execute arbitrary code inside the Druid process. For this reason, JavaScript is disabled by default. 
However, in dev/staging environments or secured production environments, you can enable it by setting the configuration property druid.javascript.enabled = true. "},{"title":"Global variables","type":1,"pageTitle":"JavaScript programming guide","url":"/docs/27.0.0/development/javascript#global-variables","content":"Avoid using global variables. Druid may share the global scope between multiple threads, which can lead to unpredictable results if global variables are used. "},{"title":"Performance","type":1,"pageTitle":"JavaScript programming guide","url":"/docs/27.0.0/development/javascript#performance","content":"Simple JavaScript functions typically have a slight performance penalty compared to native code. More complex JavaScript functions can have steeper performance penalties. Druid compiles JavaScript functions once on each data process per query. You may need to pay special attention to garbage collection when making heavy use of JavaScript functions, especially garbage collection of the compiled classes themselves. Be sure to use a garbage collector configuration that supports timely collection of unused classes (this is generally easier on JDK8 with the Metaspace than it is on JDK7). "},{"title":"JavaScript vs. Native Extensions","type":1,"pageTitle":"JavaScript programming guide","url":"/docs/27.0.0/development/javascript#javascript-vs-native-extensions","content":"Generally we recommend using JavaScript when security is not an issue, and when speed of development is more important than performance or memory use. If security is an issue, or if performance and memory use are of the utmost importance, we recommend developing a native Druid extension. In addition, native Druid extensions are more flexible than JavaScript functions. There are some kinds of extensions (like sketches) that must be written as native Druid extensions due to their need for custom data formats. "},{"title":"Developing on Apache Druid","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/overview","content":"","keywords":""},{"title":"Storage format","type":1,"pageTitle":"Developing on Apache Druid","url":"/docs/27.0.0/development/overview#storage-format","content":"Data in Druid is stored in a custom column format known as a segment. Segments are composed of different types of columns. Column.java and the classes that extend it are a great place to look into the storage format. "},{"title":"Segment creation","type":1,"pageTitle":"Developing on Apache Druid","url":"/docs/27.0.0/development/overview#segment-creation","content":"Raw data is ingested in IncrementalIndex.java, and segments are created in IndexMerger.java. "},{"title":"Storage engine","type":1,"pageTitle":"Developing on Apache Druid","url":"/docs/27.0.0/development/overview#storage-engine","content":"Druid segments are memory mapped in IndexIO.java to be exposed for querying. "},{"title":"Query engine","type":1,"pageTitle":"Developing on Apache Druid","url":"/docs/27.0.0/development/overview#query-engine","content":"Most of the logic related to Druid queries can be found in the Query* classes. Druid leverages query runners to run queries. Query runners often embed other query runners, and each query runner adds a layer of logic. A good starting point for tracing the query logic is QueryResource.java. "},{"title":"Coordination","type":1,"pageTitle":"Developing on Apache Druid","url":"/docs/27.0.0/development/overview#coordination","content":"Most of the coordination logic for Historical processes is on the Druid Coordinator. 
The starting point here is DruidCoordinator.java. Most of the coordination logic for (real-time) ingestion is in the Druid indexing service. The starting point here is OverlordResource.java. "},{"title":"Real-time Ingestion","type":1,"pageTitle":"Developing on Apache Druid","url":"/docs/27.0.0/development/overview#real-time-ingestion","content":"Druid loads data through FirehoseFactory.java classes. Firehoses often wrap other firehoses, where, similar to the design of the query runners, each firehose adds a layer of logic, and the persist and hand-off logic is in RealtimePlumber.java. "},{"title":"Hadoop-based Batch Ingestion","type":1,"pageTitle":"Developing on Apache Druid","url":"/docs/27.0.0/development/overview#hadoop-based-batch-ingestion","content":"The two main Hadoop indexing classes are HadoopDruidDetermineConfigurationJob.java for the job to determine how many Druid segments to create, and HadoopDruidIndexerJob.java, which creates Druid segments. At some point in the future, we may move the Hadoop ingestion code out of core Druid. "},{"title":"Internal UIs","type":1,"pageTitle":"Developing on Apache Druid","url":"/docs/27.0.0/development/overview#internal-uis","content":"Druid currently has two internal UIs. One is for the Coordinator and one is for the Overlord. At some point in the future, we will likely move the internal UI code out of core Druid. "},{"title":"Client libraries","type":1,"pageTitle":"Developing on Apache Druid","url":"/docs/27.0.0/development/overview#client-libraries","content":"We welcome contributions for new client libraries to interact with Druid. See theCommunity and third-party libraries page for links to existing client libraries. "},{"title":"Versioning","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/versioning","content":"","keywords":""},{"title":"Versioning Strategy","type":1,"pageTitle":"Versioning","url":"/docs/27.0.0/development/versioning#versioning-strategy","content":"We generally follow semantic versioning. The general idea is "Major" version (leftmost): backwards incompatible, no guarantees exist about APIs between the versions"Minor" version (middle number): you can move forward from a smaller number to a larger number, but moving backwards might be incompatible."bug-fix" version ("patch" or the rightmost): Interchangeable. The higher the number, the more things are fixed (hopefully), but the programming interfaces are completely compatible and you should be able to just drop in a new jar and have it work. Note that this is defined in terms of programming API, not in terms of functionality. It is possible that a brand new awesome way of doing something is introduced in a "bug-fix" release version if it doesn’t add to the public API or change it. One exception for right now, while we are still in major version 0, we are considering the APIs to be in beta and are conflating "major" and "minor" so a minor version increase could be backwards incompatible for as long as we are at major version 0. These will be communicated via email on the group. For external deployments, we recommend running the stable release tag. Releases are considered stable after we have deployed them into our production environment and they have operated bug-free for some time. "},{"title":"Tagging strategy","type":1,"pageTitle":"Versioning","url":"/docs/27.0.0/development/versioning#tagging-strategy","content":"Tags of the codebase are equivalent to release candidates. 
We tag the code every time we want to take it through our release process, which includes some QA cycles and deployments. So, it is not safe to assume that a tag is a stable release, it is a solidification of the code as it goes through our production QA cycle and deployment. Tags will never change, but we often go through a number of iterations of tags before actually getting a stable release onto production. So, it is recommended that if you are not aware of what is on a tag, to stick to the stable releases listed on the Release page. "},{"title":"Ingestion overview","type":0,"sectionRef":"#","url":"/docs/27.0.0/ingestion/","content":"","keywords":""},{"title":"Ingestion methods","type":1,"pageTitle":"Ingestion overview","url":"/docs/27.0.0/ingestion/#ingestion-methods","content":"The tables below list Druid's most common data ingestion methods, along with comparisons to help you choose the best one for your situation. Each ingestion method supports its own set of source systems to pull from. For details about how each method works, as well as configuration properties specific to that method, check out its documentation page. "},{"title":"Streaming","type":1,"pageTitle":"Ingestion overview","url":"/docs/27.0.0/ingestion/#streaming","content":"There are two available options for streaming ingestion. Streaming ingestion is controlled by a continuously-running supervisor. Method\tKafka\tKinesisSupervisor type\tkafka\tkinesis How it works\tDruid reads directly from Apache Kafka.\tDruid reads directly from Amazon Kinesis. Can ingest late data?\tYes.\tYes. Exactly-once guarantees?\tYes.\tYes. "},{"title":"Batch","type":1,"pageTitle":"Ingestion overview","url":"/docs/27.0.0/ingestion/#batch","content":"There are three available options for batch ingestion. Batch ingestion jobs are associated with a controller task that runs for the duration of the job. Method\tNative batch\tSQL\tHadoop-basedController task type\tindex_parallel\tquery_controller\tindex_hadoop How you submit it\tSend an index_parallel spec to the Tasks API.\tSend an INSERT or REPLACE statement to the SQL task API.\tSend an index_hadoop spec to the Tasks API. Parallelism\tUsing subtasks, if maxNumConcurrentSubTasks is greater than 1.\tUsing query_worker subtasks.\tUsing YARN. Fault tolerance\tWorkers automatically relaunched upon failure. Controller task failure leads to job failure.\tController or worker task failure leads to job failure.\tYARN containers automatically relaunched upon failure. Controller task failure leads to job failure. Can append?\tYes.\tYes (INSERT).\tNo. Can overwrite?\tYes.\tYes (REPLACE).\tYes. External dependencies\tNone.\tNone.\tHadoop cluster. Input sources\tAny inputSource.\tAny inputSource (using EXTERN) or Druid datasource (using FROM).\tAny Hadoop FileSystem or Druid datasource. Input formats\tAny inputFormat.\tAny inputFormat.\tAny Hadoop InputFormat. Secondary partitioning options\tDynamic, hash-based, and range-based partitioning methods are available. See partitionsSpec for details.\tRange partitioning (CLUSTERED BY).\tHash-based or range-based partitioning via partitionsSpec. Rollup modes\tPerfect if forceGuaranteedRollup = true in the tuningConfig.\tAlways perfect.\tAlways perfect. 
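To make the batch comparison above concrete, the sketch below submits a native batch (index_parallel) spec over HTTP and prints the Overlord's response. This is a minimal illustration only: the Router address and port, the local spec file name, and the SubmitNativeBatchTask class are assumptions for this example, and you should verify the task endpoint against the Tasks API reference for your version.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class SubmitNativeBatchTask {
  public static void main(String[] args) throws Exception {
    // Read an index_parallel ingestion spec from disk; the file name is a placeholder.
    String spec = Files.readString(Path.of("index_parallel_spec.json"));

    // POST the spec to the Overlord's task endpoint; ROUTER_IP:8888 is an assumed Router address.
    HttpRequest request = HttpRequest.newBuilder(
            URI.create("http://ROUTER_IP:8888/druid/indexer/v1/task"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(spec))
        .build();

    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());

    // On success the response body contains the task ID, which can then be polled
    // through the task status API mentioned in the FAQ below.
    System.out.println(response.statusCode() + " " + response.body());
  }
}
```

The same pattern applies to SQL-based ingestion, except the payload is an INSERT or REPLACE statement sent to the SQL task API rather than a JSON spec.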
"},{"title":"Ingestion troubleshooting FAQ","type":0,"sectionRef":"#","url":"/docs/27.0.0/ingestion/faq","content":"","keywords":""},{"title":"Batch Ingestion","type":1,"pageTitle":"Ingestion troubleshooting FAQ","url":"/docs/27.0.0/ingestion/faq#batch-ingestion","content":"If you are trying to batch load historical data but no events are being loaded, make sure the interval of your ingestion spec actually encapsulates the interval of your data. Events outside this interval are dropped. "},{"title":"Druid ingested my events but they are not in my query results","type":1,"pageTitle":"Ingestion troubleshooting FAQ","url":"/docs/27.0.0/ingestion/faq#druid-ingested-my-events-but-they-are-not-in-my-query-results","content":"If the number of ingested events seem correct, make sure your query is correctly formed. If you included a count aggregator in your ingestion spec, you will need to query for the results of this aggregate with a longSum aggregator. Issuing a query with a count aggregator will count the number of Druid rows, which includes roll-up. "},{"title":"Where do my Druid segments end up after ingestion?","type":1,"pageTitle":"Ingestion troubleshooting FAQ","url":"/docs/27.0.0/ingestion/faq#where-do-my-druid-segments-end-up-after-ingestion","content":"Depending on what druid.storage.type is set to, Druid will upload segments to some Deep Storage. Local disk is used as the default deep storage. "},{"title":"My stream ingest is not handing segments off","type":1,"pageTitle":"Ingestion troubleshooting FAQ","url":"/docs/27.0.0/ingestion/faq#my-stream-ingest-is-not-handing-segments-off","content":"First, make sure there are no exceptions in the logs of the ingestion process. Also make sure that druid.storage.type is set to a deep storage that isn't local if you are running a distributed cluster. Other common reasons that hand-off fails are as follows: 1) Druid is unable to write to the metadata storage. Make sure your configurations are correct. 2) Historical processes are out of capacity and cannot download any more segments. You'll see exceptions in the Coordinator logs if this occurs and the web console will show the Historicals are near capacity. 3) Segments are corrupt and cannot be downloaded. You'll see exceptions in your Historical processes if this occurs. 4) Deep storage is improperly configured. Make sure that your segment actually exists in deep storage and that the Coordinator logs have no errors. "},{"title":"How do I get HDFS to work?","type":1,"pageTitle":"Ingestion troubleshooting FAQ","url":"/docs/27.0.0/ingestion/faq#how-do-i-get-hdfs-to-work","content":"Make sure to include the druid-hdfs-storage and all the hadoop configuration, dependencies (that can be obtained by running command hadoop classpath on a machine where hadoop has been setup) in the classpath. And, provide necessary HDFS settings as described in deep storage . "},{"title":"How do I know when I can make query to Druid after submitting batch ingestion task?","type":1,"pageTitle":"Ingestion troubleshooting FAQ","url":"/docs/27.0.0/ingestion/faq#how-do-i-know-when-i-can-make-query-to-druid-after-submitting-batch-ingestion-task","content":"You can verify if segments created by a recent ingestion task are loaded onto historicals and available for querying using the following workflow. 
1. Submit your ingestion task. 2. Repeatedly poll the Overlord's tasks API (/druid/indexer/v1/task/{taskId}/status) until your task is shown to be successfully completed. 3. Poll the Segment Loading by Datasource API (/druid/coordinator/v1/datasources/{dataSourceName}/loadstatus) with forceMetadataRefresh=true and interval=<INTERVAL_OF_INGESTED_DATA> once. (Note: forceMetadataRefresh=true refreshes the Coordinator's metadata cache of all datasources. This can be a heavy operation in terms of the load on the metadata store but is necessary to make sure that we verify all the latest segments' load status.) If there are segments not yet loaded, continue to step 4; otherwise you can now query the data. 4. Repeatedly poll the Segment Loading by Datasource API (/druid/coordinator/v1/datasources/{dataSourceName}/loadstatus) with forceMetadataRefresh=false and interval=<INTERVAL_OF_INGESTED_DATA>. Continue polling until all segments are loaded. Once all segments are loaded you can now query the data. Note that this workflow only guarantees that the segments are available at the time of the Segment Loading by Datasource API call. Segments can still become missing afterward because of Historical process failures or any other reasons. "},{"title":"I don't see my Druid segments on my Historical processes","type":1,"pageTitle":"Ingestion troubleshooting FAQ","url":"/docs/27.0.0/ingestion/faq#i-dont-see-my-druid-segments-on-my-historical-processes","content":"You can check the web console to make sure that your segments have actually loaded on Historical processes. If your segments are not present, check the Coordinator logs for messages about capacity or replication errors. One reason that segments are not downloaded is that Historical processes have maxSizes that are too small, making them incapable of downloading more data. You can change that with (for example): -Ddruid.segmentCache.locations=[{"path":"/tmp/druid/storageLocation","maxSize":"500000000000"}] "},{"title":"My queries are returning empty results","type":1,"pageTitle":"Ingestion troubleshooting FAQ","url":"/docs/27.0.0/ingestion/faq#my-queries-are-returning-empty-results","content":"You can use a segment metadata query for the dimensions and metrics that have been created for your datasource. Make sure that the name of each aggregator you use in your query matches one of these metrics. Also make sure that the query interval you specify matches a valid time range where data exists. "},{"title":"Real-time ingestion seems to be stuck","type":1,"pageTitle":"Ingestion troubleshooting FAQ","url":"/docs/27.0.0/ingestion/faq#real-time-ingestion-seems-to-be-stuck","content":"There are a few ways this can occur. Druid will throttle ingestion to prevent out-of-memory problems if the intermediate persists are taking too long or if hand-off is taking too long. If your process logs indicate certain columns are taking a very long time to build (for example, if your segment granularity is hourly, but creating a single column takes 30 minutes), you should re-evaluate your configuration or scale up your real-time ingestion. "},{"title":"Creating extensions","type":0,"sectionRef":"#","url":"/docs/27.0.0/development/modules","content":"","keywords":""},{"title":"Writing your own extensions","type":1,"pageTitle":"Creating extensions","url":"/docs/27.0.0/development/modules#writing-your-own-extensions","content":"Druid's extensions leverage Guice in order to add things at runtime. 
Basically, Guice is a framework for Dependency Injection, but we use it to hold the expected object graph of the Druid process. Extensions can make any changes they want/need to the object graph by adding Guice bindings. While the extensions actually give you the capability to change almost anything however you want, in general, we expect people to want to extend one of the things listed below. This means that we honor our versioning strategy for changes that affect the interfaces called out on this page, but other interfaces are deemed "internal" and can be changed in an incompatible manner even between patch releases. Add a new deep storage implementation by extending the org.apache.druid.segment.loading.DataSegment* and org.apache.druid.tasklogs.TaskLog* classes. Add a new input source by extending org.apache.druid.data.input.InputSource. Add a new input entity by extending org.apache.druid.data.input.InputEntity. Add a new input source reader if necessary by extending org.apache.druid.data.input.InputSourceReader. You can use org.apache.druid.data.input.impl.InputEntityIteratingReader in most cases. Add a new input format by extending org.apache.druid.data.input.InputFormat. Add a new input entity reader by extending org.apache.druid.data.input.TextReader for text formats or org.apache.druid.data.input.IntermediateRowParsingReader for binary formats. Add Aggregators by extending org.apache.druid.query.aggregation.AggregatorFactory, org.apache.druid.query.aggregation.Aggregator, and org.apache.druid.query.aggregation.BufferAggregator. Add PostAggregators by extending org.apache.druid.query.aggregation.PostAggregator. Add ExtractionFns by extending org.apache.druid.query.extraction.ExtractionFn. Add Complex metrics by extending org.apache.druid.segment.serde.ComplexMetricSerde. Add new Query types by extending org.apache.druid.query.QueryRunnerFactory, org.apache.druid.query.QueryToolChest, and org.apache.druid.query.Query. Add new Jersey resources by calling Jerseys.addResource(binder, clazz). Add new Jetty filters by extending org.apache.druid.server.initialization.jetty.ServletFilterHolder. Add new secret providers by extending org.apache.druid.metadata.PasswordProvider. Add new dynamic configuration providers by extending org.apache.druid.metadata.DynamicConfigProvider. Add a new ingest transform by implementing the org.apache.druid.segment.transform.Transform interface from the druid-processing package. Bundle your extension with all the other Druid extensions. Extensions are added to the system via an implementation of org.apache.druid.initialization.DruidModule. "},{"title":"Creating a Druid Module","type":1,"pageTitle":"Creating extensions","url":"/docs/27.0.0/development/modules#creating-a-druid-module","content":"The DruidModule class has two methods: a configure(Binder) method and a getJacksonModules() method. The configure(Binder) method is the same method that a normal Guice module would have. The getJacksonModules() method provides a list of Jackson modules that are used to help initialize the Jackson ObjectMapper instances used by Druid. This is how you add extensions that are instantiated via Jackson (like AggregatorFactory and InputSource objects) to Druid. "},{"title":"Registering your Druid Module","type":1,"pageTitle":"Creating extensions","url":"/docs/27.0.0/development/modules#registering-your-druid-module","content":"Once you have your DruidModule created, you will need to package an extra file in the META-INF/services directory of your jar. 
This is easiest to accomplish with a maven project by creating files in the src/main/resources directory. There are examples of this in the Druid code under the cassandra-storage, hdfs-storage, and s3-extensions modules, for example. The file that should exist in your jar is META-INF/services/org.apache.druid.initialization.DruidModule It should be a text file with a new-line delimited list of package-qualified classes that implement DruidModule like org.apache.druid.storage.cassandra.CassandraDruidModule If your jar has this file, then when it is added to the classpath or as an extension, Druid will notice the file and will instantiate instances of the Module. Your Module should have a default constructor, but if you need access to runtime configuration properties, it can have a method with @Inject on it to get a Properties object injected into it from Guice. "},{"title":"Adding a new deep storage implementation","type":1,"pageTitle":"Creating extensions","url":"/docs/27.0.0/development/modules#adding-a-new-deep-storage-implementation","content":"Check the azure-storage, google-storage, cassandra-storage, hdfs-storage and s3-extensions modules for examples of how to do this. The basic idea behind the extension is that you need to add bindings for your DataSegmentPusher and DataSegmentPuller objects. The way to add them is something like (taken from HdfsStorageDruidModule) Binders.dataSegmentPullerBinder(binder) .addBinding("hdfs") .to(HdfsDataSegmentPuller.class).in(LazySingleton.class); Binders.dataSegmentPusherBinder(binder) .addBinding("hdfs") .to(HdfsDataSegmentPusher.class).in(LazySingleton.class); Binders.dataSegment*Binder() is a call provided by the druid-core jar which sets up a Guice multibind "MapBinder". If that doesn't make sense, don't worry about it, just think of it as a magical incantation. addBinding("hdfs") for the Puller binder creates a new handler for loadSpec objects of type "hdfs". For the Pusher binder it creates a new type value that you can specify for the druid.storage.type parameter. to(...).in(...); is normal Guice stuff. In addition to DataSegmentPusher and DataSegmentPuller, you can also bind: DataSegmentKiller: Removes segments, used as part of the Kill Task to delete unused segments, i.e. to perform garbage collection of segments that are either superseded by newer versions or that have been dropped from the cluster. DataSegmentMover: Allows migrating segments from one place to another; currently this is only used as part of the MoveTask to move unused segments to a different S3 bucket or prefix, typically to reduce storage costs of unused data (e.g. move to Glacier or cheaper storage). DataSegmentArchiver: Just a wrapper around Mover, but comes with a pre-configured target bucket/path, so it doesn't have to be specified at runtime as part of the ArchiveTask. "},{"title":"Validating your deep storage implementation","type":1,"pageTitle":"Creating extensions","url":"/docs/27.0.0/development/modules#validating-your-deep-storage-implementation","content":"WARNING! This is not a formal procedure, but a collection of hints to validate whether your new deep storage implementation is able to push, pull, and kill segments. It's recommended to use batch ingestion tasks to validate your implementation. The segment will be automatically handed off to a Historical node after ~20 seconds. In this way, you can validate both push (at the realtime process) and pull (at the Historical process) of segments. 
DataSegmentPusher Wherever your data storage (cloud storage service, distributed file system, etc.) is, you should be able to see one new file: index.zip (partitionNum_index.zip for HDFS data storage) after your ingestion task ends. DataSegmentPuller About 20 seconds after your ingestion task ends, you should be able to see your Historical process trying to load the new segment. The following example was retrieved from a Historical process configured to use Azure for deep storage: 2015-04-14T02:42:33,450 INFO [ZkCoordinator-0] org.apache.druid.server.coordination.ZkCoordinator - New request[LOAD: dde_2015-01-02T00:00:00.000Z_2015-01-03T00:00:00.000Z_2015-04-14T02:41:09.484Z] with zNode[/druid/dev/loadQueue/192.168.33.104:8081/dde_2015-01-02T00:00:00.000Z_2015-01-03T00:00:00.000Z_2015-04-14T02:41:09.484Z]. 2015-04-14T02:42:33,451 INFO [ZkCoordinator-0] org.apache.druid.server.coordination.ZkCoordinator - Loading segment dde_2015-01-02T00:00:00.000Z_2015-01-03T00:00:00.000Z_2015-04-14T02:41:09.484Z 2015-04-14T02:42:33,463 INFO [ZkCoordinator-0] org.apache.druid.guice.JsonConfigurator - Loaded class[class org.apache.druid.storage.azure.AzureAccountConfig] from props[druid.azure.] as [org.apache.druid.storage.azure.AzureAccountConfig@759c9ad9] 2015-04-14T02:49:08,275 INFO [ZkCoordinator-0] org.apache.druid.utils.CompressionUtils - Unzipping file[/opt/druid/tmp/compressionUtilZipCache1263964429587449785.zip] to [/opt/druid/zk_druid/dde/2015-01-02T00:00:00.000Z_2015-01-03T00:00:00.000Z/2015-04-14T02:41:09.484Z/0] 2015-04-14T02:49:08,276 INFO [ZkCoordinator-0] org.apache.druid.storage.azure.AzureDataSegmentPuller - Loaded 1196 bytes from [dde/2015-01-02T00:00:00.000Z_2015-01-03T00:00:00.000Z/2015-04-14T02:41:09.484Z/0/index.zip] to [/opt/druid/zk_druid/dde/2015-01-02T00:00:00.000Z_2015-01-03T00:00:00.000Z/2015-04-14T02:41:09.484Z/0] 2015-04-14T02:49:08,277 WARN [ZkCoordinator-0] org.apache.druid.segment.loading.SegmentLocalCacheManager - Segment [dde_2015-01-02T00:00:00.000Z_2015-01-03T00:00:00.000Z_2015-04-14T02:41:09.484Z] is different than expected size. Expected [0] found [1196] 2015-04-14T02:49:08,282 INFO [ZkCoordinator-0] org.apache.druid.server.coordination.BatchDataSegmentAnnouncer - Announcing segment[dde_2015-01-02T00:00:00.000Z_2015-01-03T00:00:00.000Z_2015-04-14T02:41:09.484Z] at path[/druid/dev/segments/192.168.33.104:8081/192.168.33.104:8081_historical__default_tier_2015-04-14T02:49:08.282Z_7bb87230ebf940188511dd4a53ffd7351] 2015-04-14T02:49:08,292 INFO [ZkCoordinator-0] org.apache.druid.server.coordination.ZkCoordinator - Completed request [LOAD: dde_2015-01-02T00:00:00.000Z_2015-01-03T00:00:00.000Z_2015-04-14T02:41:09.484Z] DataSegmentKiller The easiest way of testing segment killing is to mark a segment as not used and then start a kill task in the web console. To mark a segment as not used, you need to connect to your metadata storage and update the used column to false on the segment table rows. To start a segment killing task, you need to access the web console and then issue a kill task for the appropriate datasource. After the kill task ends, the index.zip (partitionNum_index.zip for HDFS data storage) file should be deleted from the data storage. 
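Putting the binding and registration pieces above together, here is a minimal sketch of a DruidModule that wires a new deep storage type in the same way as the HdfsStorageDruidModule excerpt. ExampleDataSegmentPusher, ExampleDataSegmentPuller, and the "example" type name are hypothetical placeholders for your own implementations, and the Binders and LazySingleton import paths are assumptions based on druid-core; adjust them to your Druid version.

```java
import com.fasterxml.jackson.databind.Module;
import com.google.common.collect.ImmutableList;
import com.google.inject.Binder;
import org.apache.druid.guice.Binders;
import org.apache.druid.guice.LazySingleton;
import org.apache.druid.initialization.DruidModule;

import java.util.List;

public class ExampleStorageDruidModule implements DruidModule
{
  @Override
  public List<? extends Module> getJacksonModules()
  {
    // Nothing in this sketch is deserialized via Jackson, so no extra modules are needed.
    return ImmutableList.of();
  }

  @Override
  public void configure(Binder binder)
  {
    // "example" becomes a valid loadSpec type for the puller and a valid
    // druid.storage.type value for the pusher, as described above.
    Binders.dataSegmentPullerBinder(binder)
           .addBinding("example")
           .to(ExampleDataSegmentPuller.class)
           .in(LazySingleton.class);

    Binders.dataSegmentPusherBinder(binder)
           .addBinding("example")
           .to(ExampleDataSegmentPusher.class)
           .in(LazySingleton.class);
  }
}
```

Remember to list this class in META-INF/services/org.apache.druid.initialization.DruidModule so that Druid discovers it, as described in the registration section above.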
"},{"title":"Adding support for a new input source","type":1,"pageTitle":"Creating extensions","url":"/docs/27.0.0/development/modules#adding-support-for-a-new-input-source","content":"Adding support for a new input source requires to implement three interfaces, i.e., InputSource, InputEntity, and InputSourceReader.InputSource is to define where the input data is stored. InputEntity is to define how data can be read in parallel in native parallel indexing.InputSourceReader defines how to read your new input source and you can simply use the provided InputEntityIteratingReader in most cases. There is an example of this in the druid-s3-extensions module with the S3InputSource and S3Entity. Adding an InputSource is done almost entirely through the Jackson Modules instead of Guice. Specifically, note the implementation @Override public List<? extends Module> getJacksonModules() { return ImmutableList.of( new SimpleModule().registerSubtypes(new NamedType(S3InputSource.class, "s3")) ); } This is registering the InputSource with Jackson's polymorphic serialization/deserialization layer. More concretely, having this will mean that if you specify a "inputSource": { "type": "s3", ... } in your IO config, then the system will load this InputSource for your InputSource implementation. Note that inside of Druid, we have made the @JacksonInject annotation for Jackson deserialized objects actually use the base Guice injector to resolve the object to be injected. So, if your InputSource needs access to some object, you can add a @JacksonInject annotation on a setter and it will get set on instantiation. "},{"title":"Adding support for a new data format","type":1,"pageTitle":"Creating extensions","url":"/docs/27.0.0/development/modules#adding-support-for-a-new-data-format","content":"Adding support for a new data format requires implementing two interfaces, i.e., InputFormat and InputEntityReader.InputFormat is to define how your data is formatted. InputEntityReader is to define how to parse your data and convert into Druid InputRow. There is an example in the druid-orc-extensions module with the OrcInputFormat and OrcReader. Adding an InputFormat is very similar to adding an InputSource. They operate purely through Jackson and thus should just be additions to the Jackson modules returned by your DruidModule. "},{"title":"Adding Aggregators","type":1,"pageTitle":"Creating extensions","url":"/docs/27.0.0/development/modules#adding-aggregators","content":"Adding AggregatorFactory objects is very similar to InputSource objects. They operate purely through Jackson and thus should just be additions to the Jackson modules returned by your DruidModule. "},{"title":"Adding Complex Metrics","type":1,"pageTitle":"Creating extensions","url":"/docs/27.0.0/development/modules#adding-complex-metrics","content":"Adding ComplexMetrics is a little ugly in the current version. The method of getting at complex metrics is through registration with the ComplexMetrics.registerSerde() method. There is no special Guice stuff to get this working, just in your configure(Binder) method register the serialization/deserialization. "},{"title":"Adding new Query types","type":1,"pageTitle":"Creating extensions","url":"/docs/27.0.0/development/modules#adding-new-query-types","content":"Adding a new Query type requires the implementation of three interfaces. 
org.apache.druid.query.Query, org.apache.druid.query.QueryToolChest, and org.apache.druid.query.QueryRunnerFactory. Registering these uses the same general strategy as a deep storage mechanism does. You do something like DruidBinders.queryToolChestBinder(binder) .addBinding(SegmentMetadataQuery.class) .to(SegmentMetadataQueryQueryToolChest.class); DruidBinders.queryRunnerFactoryBinder(binder) .addBinding(SegmentMetadataQuery.class) .to(SegmentMetadataQueryRunnerFactory.class); The first one binds the SegmentMetadataQueryQueryToolChest for usage when a SegmentMetadataQuery is used. The second one does the same thing but for the QueryRunnerFactory instead. "},{"title":"Adding new Jersey resources","type":1,"pageTitle":"Creating extensions","url":"/docs/27.0.0/development/modules#adding-new-jersey-resources","content":"Adding new Jersey resources to a module requires calling the following code to bind the resource in the module: Jerseys.addResource(binder, NewResource.class); "},{"title":"Adding a new Password Provider implementation","type":1,"pageTitle":"Creating extensions","url":"/docs/27.0.0/development/modules#adding-a-new-password-provider-implementation","content":"You will need to implement the org.apache.druid.metadata.PasswordProvider interface. For every place where Druid uses PasswordProvider, a new instance of the implementation will be created; thus, make sure all the necessary information required for fetching each password is supplied during object instantiation. In your implementation of org.apache.druid.initialization.DruidModule, getJacksonModules should look something like this - return ImmutableList.of( new SimpleModule("SomePasswordProviderModule") .registerSubtypes( new NamedType(SomePasswordProvider.class, "some") ) ); where SomePasswordProvider is the implementation of the PasswordProvider interface. You can look at org.apache.druid.metadata.EnvironmentVariablePasswordProvider for an example. "},{"title":"Adding a new DynamicConfigProvider implementation","type":1,"pageTitle":"Creating extensions","url":"/docs/27.0.0/development/modules#adding-a-new-dynamicconfigprovider-implementation","content":"You will need to implement the org.apache.druid.metadata.DynamicConfigProvider interface. For every place where Druid uses DynamicConfigProvider, a new instance of the implementation will be created; thus, make sure all the necessary information required for fetching all information is supplied during object instantiation. In your implementation of org.apache.druid.initialization.DruidModule, getJacksonModules should look something like this - return ImmutableList.of( new SimpleModule("SomeDynamicConfigProviderModule") .registerSubtypes( new NamedType(SomeDynamicConfigProvider.class, "some") ) ); where SomeDynamicConfigProvider is the implementation of the DynamicConfigProvider interface. You can look at org.apache.druid.metadata.MapStringDynamicConfigProvider for an example. "},{"title":"Adding a Transform Extension","type":1,"pageTitle":"Creating extensions","url":"/docs/27.0.0/development/modules#adding-a-transform-extension","content":"To create a transform extension, implement the org.apache.druid.segment.transform.Transform interface. You'll need to install the druid-processing package to import org.apache.druid.segment.transform. 
import com.fasterxml.jackson.annotation.JsonCreator; import com.fasterxml.jackson.annotation.JsonProperty; import org.apache.druid.data.input.Row; import org.apache.druid.segment.transform.RowFunction; import org.apache.druid.segment.transform.Transform; public class MyTransform implements Transform { private final String name; @JsonCreator public MyTransform( @JsonProperty("name") final String name ) { this.name = name; } @JsonProperty @Override public String getName() { return name; } @Override public RowFunction getRowFunction() { return new MyRowFunction(); } static class MyRowFunction implements RowFunction { @Override public Object eval(Row row) { return "transformed-value"; } } } Then register your transform as a Jackson module. import com.fasterxml.jackson.databind.Module; import com.fasterxml.jackson.databind.jsontype.NamedType; import com.fasterxml.jackson.databind.module.SimpleModule; import com.google.inject.Binder; import com.google.common.collect.ImmutableList; import org.apache.druid.initialization.DruidModule; import java.util.List; public class MyTransformModule implements DruidModule { @Override public List<? extends Module> getJacksonModules() { return ImmutableList.of( new SimpleModule("MyTransformModule").registerSubtypes( new NamedType(MyTransform.class, "my-transform") ) ); } @Override public void configure(Binder binder) { } } "},{"title":"Adding your own custom pluggable Coordinator Duty","type":1,"pageTitle":"Creating extensions","url":"/docs/27.0.0/development/modules#adding-your-own-custom-pluggable-coordinator-duty","content":"The Coordinator periodically runs jobs, so-called CoordinatorDuties, which include loading new segments, segment balancing, etc. Druid users can add custom pluggable coordinator duties, which are not part of Core Druid, without modifying any Core Druid classes. Users can do this by writing their own custom coordinator duty implementing the interface CoordinatorCustomDuty and setting the JsonTypeName. Next, users will need to register their custom coordinator duty as a subtype in their Module's DruidModule#getJacksonModules(). Once these steps are done, users will be able to load their custom coordinator duty using the following properties (a rough end-to-end sketch appears at the end of this page): druid.coordinator.dutyGroups=[<GROUP_NAME_1>, <GROUP_NAME_2>, ...] druid.coordinator.<GROUP_NAME_1>.duties=[<DUTY_NAME_MATCHING_JSON_TYPE_NAME_1>, <DUTY_NAME_MATCHING_JSON_TYPE_NAME_2>, ...] druid.coordinator.<GROUP_NAME_1>.period=<GROUP_NAME_1_RUN_PERIOD> druid.coordinator.<GROUP_NAME_1>.duty.<DUTY_NAME_MATCHING_JSON_TYPE_NAME_1>.<SOME_CONFIG_1_KEY>=<SOME_CONFIG_1_VALUE> druid.coordinator.<GROUP_NAME_1>.duty.<DUTY_NAME_MATCHING_JSON_TYPE_NAME_1>.<SOME_CONFIG_2_KEY>=<SOME_CONFIG_2_VALUE> In the new system for pluggable Coordinator duties, similar to what the Coordinator already does today, the duties can be grouped together. The duties will be grouped into multiple groups as per the elements in the list druid.coordinator.dutyGroups. All duties in the same group will have the same run period configured by druid.coordinator.<GROUP_NAME>.period. Currently, there is a single thread running the duties sequentially for each group. For example, see KillSupervisorsCustomDuty for a custom coordinator duty implementation and the custom-coordinator-duties integration test group which loads KillSupervisorsCustomDuty using the configs set in integration-tests/docker/environment-configs/test-groups/custom-coordinator-duties. This config file adds the configs below to enable a custom coordinator duty. 
druid.coordinator.dutyGroups=["cleanupMetadata"] druid.coordinator.cleanupMetadata.duties=["killSupervisors"] druid.coordinator.cleanupMetadata.duty.killSupervisors.retainDuration=PT0M druid.coordinator.cleanupMetadata.period=PT10S These configurations create a custom coordinator duty group called cleanupMetadata which runs a custom coordinator duty called killSupervisors every 10 seconds. The custom coordinator duty killSupervisors also has a config called retainDuration which is set to 0 minute. "},{"title":"Routing data through a HTTP proxy for your extension","type":1,"pageTitle":"Creating extensions","url":"/docs/27.0.0/development/modules#routing-data-through-a-http-proxy-for-your-extension","content":"You can add the ability for the HttpClient of your extension to connect through an HTTP proxy. To support proxy connection for your extension's HTTP client: Add HttpClientProxyConfig as a @JsonProperty to the HTTP config class of your extension. In the extension's module class, add HttpProxyConfig config to HttpClientConfig. For example, where config variable is the extension's HTTP config from step 1: final HttpClientConfig.Builder builder = HttpClientConfig .builder() .withNumConnections(1) .withReadTimeout(config.getReadTimeout().toStandardDuration()) .withHttpProxyConfig(config.getProxyConfig()); "},{"title":"Bundle your extension with all the other Druid extensions","type":1,"pageTitle":"Creating extensions","url":"/docs/27.0.0/development/modules#bundle-your-extension-with-all-the-other-druid-extensions","content":"When you do mvn install, Druid extensions will be packaged within the Druid tarball and extensions directory, which are both underneath distribution/target/. If you want your extension to be included, you can add your extension's maven coordinate as an argument atdistribution/pom.xml During mvn install, maven will install your extension to the local maven repository, and then call pull-deps to pull your extension from there. In the end, you should see your extension underneath distribution/target/extensions and within Druid tarball. "},{"title":"Managing dependencies","type":1,"pageTitle":"Creating extensions","url":"/docs/27.0.0/development/modules#managing-dependencies","content":"Managing library collisions can be daunting for extensions which draw in commonly used libraries. Here is a list of group IDs for libraries that are suggested to be specified with a provided scope to prevent collision with versions used in druid: "org.apache.druid", "com.metamx.druid", "asm", "org.ow2.asm", "org.jboss.netty", "com.google.guava", "com.google.code.findbugs", "com.google.protobuf", "com.esotericsoftware.minlog", "log4j", "org.slf4j", "commons-logging", "org.eclipse.jetty", "org.mortbay.jetty", "com.sun.jersey", "com.sun.jersey.contribs", "common-beanutils", "commons-codec", "commons-lang", "commons-cli", "commons-io", "javax.activation", "org.apache.httpcomponents", "org.apache.zookeeper", "org.codehaus.jackson", "com.fasterxml.jackson", "com.fasterxml.jackson.core", "com.fasterxml.jackson.dataformat", "com.fasterxml.jackson.datatype", "org.roaringbitmap", "net.java.dev.jets3t" See the documentation in org.apache.druid.cli.PullDependencies for more information. 
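Before moving on, here is the rough end-to-end sketch of the custom coordinator duty wiring referenced earlier on this page. LogSomethingCustomDuty and its message property are entirely hypothetical, and the run method signature is an assumption based on the built-in coordinator duties; consult CoordinatorCustomDuty and KillSupervisorsCustomDuty in your Druid version for the exact contract.

```java
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonTypeName;
import org.apache.druid.server.coordinator.DruidCoordinatorRuntimeParams;
import org.apache.druid.server.coordinator.duty.CoordinatorCustomDuty;

// The JSON type name must match the duty name used in druid.coordinator.<GROUP>.duties.
@JsonTypeName("logSomething")
public class LogSomethingCustomDuty implements CoordinatorCustomDuty
{
  private final String message;

  @JsonCreator
  public LogSomethingCustomDuty(@JsonProperty("message") String message)
  {
    // Populated from druid.coordinator.<GROUP>.duty.logSomething.message
    this.message = message;
  }

  @Override
  public DruidCoordinatorRuntimeParams run(DruidCoordinatorRuntimeParams params)
  {
    // Assumed hook: do the periodic work here and pass the runtime params through.
    System.out.println("Custom coordinator duty ran: " + message);
    return params;
  }
}
```

Registering the duty is then the usual registerSubtypes call in your DruidModule's getJacksonModules(), after which the druid.coordinator.dutyGroups properties shown above schedule it.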
"},{"title":"JSON-based batch simple task indexing","type":0,"sectionRef":"#","url":"/docs/27.0.0/ingestion/native-batch-simple-task","content":"","keywords":""},{"title":"Simple task example","type":1,"pageTitle":"JSON-based batch simple task indexing","url":"/docs/27.0.0/ingestion/native-batch-simple-task#simple-task-example","content":"A sample task is shown below: { "type" : "index", "spec" : { "dataSchema" : { "dataSource" : "wikipedia", "timestampSpec" : { "column" : "timestamp", "format" : "auto" }, "dimensionsSpec" : { "dimensions": ["country", "page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","region","city"], "dimensionExclusions" : [] }, "metricsSpec" : [ { "type" : "count", "name" : "count" }, { "type" : "doubleSum", "name" : "added", "fieldName" : "added" }, { "type" : "doubleSum", "name" : "deleted", "fieldName" : "deleted" }, { "type" : "doubleSum", "name" : "delta", "fieldName" : "delta" } ], "granularitySpec" : { "type" : "uniform", "segmentGranularity" : "DAY", "queryGranularity" : "NONE", "intervals" : [ "2013-08-31/2013-09-01" ] } }, "ioConfig" : { "type" : "index", "inputSource" : { "type" : "local", "baseDir" : "examples/indexing/", "filter" : "wikipedia_data.json" }, "inputFormat": { "type": "json" } }, "tuningConfig" : { "type" : "index", "partitionsSpec": { "type": "single_dim", "partitionDimension": "country", "targetRowsPerSegment": 5000000 } } } } "},{"title":"Simple task configuration","type":1,"pageTitle":"JSON-based batch simple task indexing","url":"/docs/27.0.0/ingestion/native-batch-simple-task#simple-task-configuration","content":"property\tdescription\trequired?type\tThe task type, this should always be index.\tyes id\tThe task ID. If this is not explicitly specified, Druid generates the task ID using task type, data source name, interval, and date-time stamp.\tno spec\tThe ingestion spec including the data schema, IO config, and tuning config.\tyes context\tContext to specify various task configuration parameters. See Task context parameters for more details.\tno "},{"title":"dataSchema","type":1,"pageTitle":"JSON-based batch simple task indexing","url":"/docs/27.0.0/ingestion/native-batch-simple-task#dataschema","content":"This field is required. See the dataSchema section of the ingestion docs for details. If you do not specify intervals explicitly in your dataSchema's granularitySpec, the Local Index Task will do an extra pass over the data to determine the range to lock when it starts up. If you specify intervals explicitly, any rows outside the specified intervals will be thrown away. We recommend setting intervals explicitly if you know the time range of the data because it allows the task to skip the extra pass, and so that you don't accidentally replace data outside that range if there's some stray data with unexpected timestamps. "},{"title":"ioConfig","type":1,"pageTitle":"JSON-based batch simple task indexing","url":"/docs/27.0.0/ingestion/native-batch-simple-task#ioconfig","content":"property\tdescription\tdefault\trequired?type\tThe task type, this should always be "index".\tnone\tyes inputFormat\tinputFormat to specify how to parse input data.\tnone\tyes appendToExisting\tCreates segments as additional shards of the latest version, effectively appending to the segment set instead of replacing it. This means that you can append new segments to any datasource regardless of its original partitioning scheme. You must use the dynamic partitioning type for the appended segments. 
If you specify a different partitioning type, the task fails with an error.	false	no dropExisting	If this setting is false then ingestion proceeds as usual. Set this to true and appendToExisting to false to enforce true "replace" functionality as described next. If true and appendToExisting is false and the granularitySpec contains at least one interval, then the ingestion task will create regular segments for time chunk intervals with input data and tombstones for all other time chunks with no data. The task will publish the data segments and the tombstone segments together when it publishes new segments. The net effect of the data segments and the tombstones is to completely adhere to "replace" semantics where the input data contained in the granularitySpec intervals replaces all existing data in the intervals even for time chunks that would be empty in the case that no input data was associated with them. In the extreme case when the input data set that falls in the granularitySpec intervals is empty, all existing data in the interval will be replaced with an empty data set (i.e. with nothing -- all existing data will be covered by tombstones). If ingestion fails, no segments or tombstones will be published. The following two combinations are not supported and will make the ingestion fail with an error: dropExisting is true and interval is not specified in granularitySpec, or appendToExisting is true and dropExisting is true. WARNING: this functionality is still in beta and even though we are not aware of any bugs, use with caution.	false	no "},{"title":"tuningConfig","type":1,"pageTitle":"JSON-based batch simple task indexing","url":"/docs/27.0.0/ingestion/native-batch-simple-task#tuningconfig","content":"The tuningConfig is optional and default parameters will be used if no tuningConfig is specified. See below for more details. property	description	default	required?type	The task type, this should always be "index".	none	yes maxRowsInMemory	Used in determining when intermediate persists to disk should occur. Normally the user does not need to set this, but depending on the nature of data, if rows are short in terms of bytes, the user may not want to store a million rows in memory and this value should be set.	1000000	no maxBytesInMemory	Used in determining when intermediate persists to disk should occur. Normally this is computed internally and the user does not need to set it. This value represents the number of bytes to aggregate in heap memory before persisting. This is based on a rough estimate of memory usage and not actual usage. The maximum heap memory usage for indexing is maxBytesInMemory * (2 + maxPendingPersists). Note that maxBytesInMemory also includes heap usage of artifacts created from intermediary persists. This means that after every persist, the amount of maxBytesInMemory left until the next persist decreases, and the task will fail when the sum of bytes of all intermediary persisted artifacts exceeds maxBytesInMemory.	1/6 of max JVM memory	no maxTotalRows	Deprecated. Use partitionsSpec instead. Total number of rows in segments waiting to be pushed. Used in determining when intermediate pushing should occur.	20000000	no numShards	Deprecated. Use partitionsSpec instead. Directly specify the number of shards to create. If this is specified and intervals is specified in the granularitySpec, the index task can skip the determine intervals/partitions pass through the data.	null	no partitionDimensions	Deprecated. Use partitionsSpec instead. 
The dimensions to partition on. Leave blank to select all dimensions. Only used with forceGuaranteedRollup = true, will be ignored otherwise.\tnull\tno partitionsSpec\tDefines how to partition data in each timeChunk, see PartitionsSpec\tdynamic if forceGuaranteedRollup = false, hashed if forceGuaranteedRollup = true\tno indexSpec\tDefines segment storage format options to be used at indexing time, see IndexSpec\tnull\tno indexSpecForIntermediatePersists\tDefines segment storage format options to be used at indexing time for intermediate persisted temporary segments. This can be used to disable dimension/metric compression on intermediate segments to reduce memory required for final merging. However, disabling compression on intermediate segments might increase page cache use while they are used before getting merged into final segment published, see IndexSpec for possible values.\tsame as indexSpec\tno maxPendingPersists\tMaximum number of persists that can be pending but not started. If this limit would be exceeded by a new intermediate persist, ingestion will block until the currently-running persist finishes. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists).\t0 (meaning one persist can be running concurrently with ingestion, and none can be queued up)\tno forceGuaranteedRollup\tForces guaranteeing the perfect rollup. The perfect rollup optimizes the total size of generated segments and querying time while indexing time will be increased. If this is set to true, the index task will read the entire input data twice: one for finding the optimal number of partitions per time chunk and one for generating segments. Note that the result segments would be hash-partitioned. This flag cannot be used with appendToExisting of IOConfig. For more details, see the below Segment pushing modes section.\tfalse\tno reportParseExceptions\tDEPRECATED. If true, exceptions encountered during parsing will be thrown and will halt ingestion; if false, unparseable rows and fields will be skipped. Setting reportParseExceptions to true will override existing configurations for maxParseExceptions and maxSavedParseExceptions, setting maxParseExceptions to 0 and limiting maxSavedParseExceptions to no more than 1.\tfalse\tno pushTimeout\tMilliseconds to wait for pushing segments. It must be >= 0, where 0 means to wait forever.\t0\tno segmentWriteOutMediumFactory\tSegment write-out medium to use when creating segments. See SegmentWriteOutMediumFactory.\tNot specified, the value from druid.peon.defaultSegmentWriteOutMediumFactory.type is used\tno logParseExceptions\tIf true, log an error message when a parsing exception occurs, containing information about the row where the error occurred.\tfalse\tno maxParseExceptions\tThe maximum number of parse exceptions that can occur before the task halts ingestion and fails. Overridden if reportParseExceptions is set.\tunlimited\tno maxSavedParseExceptions\tWhen a parse exception occurs, Druid can keep track of the most recent parse exceptions. "maxSavedParseExceptions" limits how many exception instances will be saved. These saved exceptions will be made available after the task finishes in the task completion report. Overridden if reportParseExceptions is set.\t0\tno "},{"title":"partitionsSpec","type":1,"pageTitle":"JSON-based batch simple task indexing","url":"/docs/27.0.0/ingestion/native-batch-simple-task#partitionsspec","content":"PartitionsSpec is to describe the secondary partitioning method. 
You should use a different partitionsSpec depending on the rollup mode you want. For perfect rollup, you should use hashed. property	description	default	required?type	This should always be hashed	none	yes maxRowsPerSegment	Used in sharding. Determines how many rows are in each segment.	5000000	no numShards	Directly specify the number of shards to create. If this is specified and intervals is specified in the granularitySpec, the index task can skip the determine intervals/partitions pass through the data. numShards cannot be specified if maxRowsPerSegment is set.	null	no partitionDimensions	The dimensions to partition on. Leave blank to select all dimensions.	null	no partitionFunction	A function to compute hash of partition dimensions. See Hash partition function	murmur3_32_abs	no For best-effort rollup, you should use dynamic. property	description	default	required?type	This should always be dynamic	none	yes maxRowsPerSegment	Used in sharding. Determines how many rows are in each segment.	5000000	no maxTotalRows	Total number of rows in segments waiting to be pushed.	20000000	no "},{"title":"Segment pushing modes","type":1,"pageTitle":"JSON-based batch simple task indexing","url":"/docs/27.0.0/ingestion/native-batch-simple-task#segment-pushing-modes","content":"While ingesting data using the simple task indexing, Druid creates segments from the input data and pushes them. For segment pushing, the simple task index supports the following segment pushing modes based upon your type of rollup: Bulk pushing mode: Used for perfect rollup. Druid pushes every segment at the very end of the index task. Until then, Druid stores created segments in memory and local storage of the service running the index task. This mode can cause problems if you have limited storage capacity, and is not recommended for use in production. To enable bulk pushing mode, set forceGuaranteedRollup in your TuningConfig. You cannot use bulk pushing with appendToExisting in your IOConfig. Incremental pushing mode: Used for best-effort rollup. Druid pushes segments incrementally during the course of the indexing task. The index task collects data and stores created segments in the memory and disks of the services running the task until the total number of collected rows exceeds maxTotalRows. At that point the index task immediately pushes all segments created up until that moment, cleans up pushed segments, and continues to ingest the remaining data. "},{"title":"Supervisor API","type":0,"sectionRef":"#","url":"/docs/27.0.0/api-reference/supervisor-api","content":"","keywords":""},{"title":"Supervisor information","type":1,"pageTitle":"Supervisor API","url":"/docs/27.0.0/api-reference/supervisor-api#supervisor-information","content":"The following table lists the properties of a supervisor object: Property	Type	Descriptionid	String	Unique identifier. state	String	Generic state of the supervisor. Available states: UNHEALTHY_SUPERVISOR, UNHEALTHY_TASKS, PENDING, RUNNING, SUSPENDED, STOPPING. See Apache Kafka operations for details. detailedState	String	Detailed state of the supervisor. This property contains a more descriptive, implementation-specific state that may provide more insight into the supervisor's activities than the state property. See Apache Kafka ingestion and Amazon Kinesis ingestion for supervisor-specific states. healthy	Boolean	Supervisor health indicator. spec	Object	Container object for the supervisor configuration. 
suspended\tBoolean\tIndicates whether the supervisor is in a suspended state. "},{"title":"Get an array of active supervisor IDs","type":1,"pageTitle":"Supervisor API","url":"/docs/27.0.0/api-reference/supervisor-api#get-an-array-of-active-supervisor-ids","content":"Returns an array of strings representing the names of active supervisors. If there are no active supervisors, it returns an empty array. URL GET /druid/indexer/v1/supervisor Responses 200 SUCCESS Successfully retrieved array of active supervisor IDs Sample request cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor" Sample response Click to show sample response [ "wikipedia_stream", "social_media" ] "},{"title":"Get an array of active supervisor objects","type":1,"pageTitle":"Supervisor API","url":"/docs/27.0.0/api-reference/supervisor-api#get-an-array-of-active-supervisor-objects","content":"Retrieves an array of active supervisor objects. If there are no active supervisors, it returns an empty array. For reference on the supervisor object properties, see the preceding table. URL GET /druid/indexer/v1/supervisor?full Responses 200 SUCCESS Successfully retrieved supervisor objects Sample request cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor?full=null" Sample response Click to show sample response [ { "id": "wikipedia_stream", "state": "RUNNING", "detailedState": "CONNECTING_TO_STREAM", "healthy": true, "spec": { "type": "kafka", "spec": { "dataSchema": { "dataSource": "wikipedia_stream", "timestampSpec": { "column": "__time", "format": "iso", "missingValue": null }, "dimensionsSpec": { "dimensions": [ { "type": "string", "name": "username", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "string", "name": "post_title", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "long", "name": "views", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "upvotes", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "comments", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "string", "name": "edited", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true } ], "dimensionExclusions": [ "__time" ], "includeAllDimensions": false, "useSchemaDiscovery": false }, "metricsSpec": [], "granularitySpec": { "type": "uniform", "segmentGranularity": "HOUR", "queryGranularity": { "type": "none" }, "rollup": false, "intervals": [] }, "transformSpec": { "filter": null, "transforms": [] } }, "ioConfig": { "topic": "social_media", "inputFormat": { "type": "json", "keepNullColumns": false, "assumeNewlineDelimited": false, "useJsonNodeReader": false }, "replicas": 1, "taskCount": 1, "taskDuration": "PT3600S", "consumerProperties": { "bootstrap.servers": "localhost:9042" }, "autoScalerConfig": null, "pollTimeout": 100, "startDelay": "PT5S", "period": "PT30S", "useEarliestOffset": true, "completionTimeout": "PT1800S", "lateMessageRejectionPeriod": null, "earlyMessageRejectionPeriod": null, "lateMessageRejectionStartDateTime": null, "configOverrides": null, "idleConfig": null, "stream": "social_media", "useEarliestSequenceNumber": true }, "tuningConfig": { "type": "kafka", "appendableIndexSpec": { "type": "onheap", "preserveExistingMetrics": false }, "maxRowsInMemory": 150000, "maxBytesInMemory": 0, "skipBytesInMemoryOverheadCheck": false, "maxRowsPerSegment": 5000000, "maxTotalRows": null, "intermediatePersistPeriod": "PT10M", 
"maxPendingPersists": 0, "indexSpec": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "indexSpecForIntermediatePersists": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "reportParseExceptions": false, "handoffConditionTimeout": 0, "resetOffsetAutomatically": false, "segmentWriteOutMediumFactory": null, "workerThreads": null, "chatThreads": null, "chatRetries": 8, "httpTimeout": "PT10S", "shutdownTimeout": "PT80S", "offsetFetchPeriod": "PT30S", "intermediateHandoffPeriod": "P2147483647D", "logParseExceptions": false, "maxParseExceptions": 2147483647, "maxSavedParseExceptions": 0, "skipSequenceNumberAvailabilityCheck": false, "repartitionTransitionDuration": "PT120S" } }, "dataSchema": { "dataSource": "wikipedia_stream", "timestampSpec": { "column": "__time", "format": "iso", "missingValue": null }, "dimensionsSpec": { "dimensions": [ { "type": "string", "name": "username", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "string", "name": "post_title", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "long", "name": "views", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "upvotes", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "comments", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "string", "name": "edited", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true } ], "dimensionExclusions": [ "__time" ], "includeAllDimensions": false, "useSchemaDiscovery": false }, "metricsSpec": [], "granularitySpec": { "type": "uniform", "segmentGranularity": "HOUR", "queryGranularity": { "type": "none" }, "rollup": false, "intervals": [] }, "transformSpec": { "filter": null, "transforms": [] } }, "tuningConfig": { "type": "kafka", "appendableIndexSpec": { "type": "onheap", "preserveExistingMetrics": false }, "maxRowsInMemory": 150000, "maxBytesInMemory": 0, "skipBytesInMemoryOverheadCheck": false, "maxRowsPerSegment": 5000000, "maxTotalRows": null, "intermediatePersistPeriod": "PT10M", "maxPendingPersists": 0, "indexSpec": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "indexSpecForIntermediatePersists": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "reportParseExceptions": false, "handoffConditionTimeout": 0, "resetOffsetAutomatically": false, "segmentWriteOutMediumFactory": null, "workerThreads": null, "chatThreads": null, "chatRetries": 8, "httpTimeout": "PT10S", "shutdownTimeout": "PT80S", "offsetFetchPeriod": "PT30S", "intermediateHandoffPeriod": "P2147483647D", "logParseExceptions": false, "maxParseExceptions": 2147483647, "maxSavedParseExceptions": 0, "skipSequenceNumberAvailabilityCheck": false, "repartitionTransitionDuration": "PT120S" }, "ioConfig": { "topic": "social_media", "inputFormat": { "type": "json", "keepNullColumns": false, "assumeNewlineDelimited": false, "useJsonNodeReader": false }, "replicas": 1, "taskCount": 1, "taskDuration": "PT3600S", "consumerProperties": { "bootstrap.servers": "localhost:9042" }, 
"autoScalerConfig": null, "pollTimeout": 100, "startDelay": "PT5S", "period": "PT30S", "useEarliestOffset": true, "completionTimeout": "PT1800S", "lateMessageRejectionPeriod": null, "earlyMessageRejectionPeriod": null, "lateMessageRejectionStartDateTime": null, "configOverrides": null, "idleConfig": null, "stream": "social_media", "useEarliestSequenceNumber": true }, "context": null, "suspended": false }, "suspended": false }, { "id": "social_media", "state": "RUNNING", "detailedState": "RUNNING", "healthy": true, "spec": { "type": "kafka", "spec": { "dataSchema": { "dataSource": "social_media", "timestampSpec": { "column": "__time", "format": "iso", "missingValue": null }, "dimensionsSpec": { "dimensions": [ { "type": "string", "name": "username", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "string", "name": "post_title", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "long", "name": "views", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "upvotes", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "comments", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "string", "name": "edited", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true } ], "dimensionExclusions": [ "__time" ], "includeAllDimensions": false, "useSchemaDiscovery": false }, "metricsSpec": [], "granularitySpec": { "type": "uniform", "segmentGranularity": "HOUR", "queryGranularity": { "type": "none" }, "rollup": false, "intervals": [] }, "transformSpec": { "filter": null, "transforms": [] } }, "ioConfig": { "topic": "social_media", "inputFormat": { "type": "json", "keepNullColumns": false, "assumeNewlineDelimited": false, "useJsonNodeReader": false }, "replicas": 1, "taskCount": 1, "taskDuration": "PT3600S", "consumerProperties": { "bootstrap.servers": "localhost:9094" }, "autoScalerConfig": null, "pollTimeout": 100, "startDelay": "PT5S", "period": "PT30S", "useEarliestOffset": true, "completionTimeout": "PT1800S", "lateMessageRejectionPeriod": null, "earlyMessageRejectionPeriod": null, "lateMessageRejectionStartDateTime": null, "configOverrides": null, "idleConfig": null, "stream": "social_media", "useEarliestSequenceNumber": true }, "tuningConfig": { "type": "kafka", "appendableIndexSpec": { "type": "onheap", "preserveExistingMetrics": false }, "maxRowsInMemory": 150000, "maxBytesInMemory": 0, "skipBytesInMemoryOverheadCheck": false, "maxRowsPerSegment": 5000000, "maxTotalRows": null, "intermediatePersistPeriod": "PT10M", "maxPendingPersists": 0, "indexSpec": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "indexSpecForIntermediatePersists": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "reportParseExceptions": false, "handoffConditionTimeout": 0, "resetOffsetAutomatically": false, "segmentWriteOutMediumFactory": null, "workerThreads": null, "chatThreads": null, "chatRetries": 8, "httpTimeout": "PT10S", "shutdownTimeout": "PT80S", "offsetFetchPeriod": "PT30S", "intermediateHandoffPeriod": "P2147483647D", "logParseExceptions": false, "maxParseExceptions": 2147483647, "maxSavedParseExceptions": 0, "skipSequenceNumberAvailabilityCheck": false, "repartitionTransitionDuration": "PT120S" } }, 
"dataSchema": { "dataSource": "social_media", "timestampSpec": { "column": "__time", "format": "iso", "missingValue": null }, "dimensionsSpec": { "dimensions": [ { "type": "string", "name": "username", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "string", "name": "post_title", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "long", "name": "views", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "upvotes", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "comments", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "string", "name": "edited", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true } ], "dimensionExclusions": [ "__time" ], "includeAllDimensions": false, "useSchemaDiscovery": false }, "metricsSpec": [], "granularitySpec": { "type": "uniform", "segmentGranularity": "HOUR", "queryGranularity": { "type": "none" }, "rollup": false, "intervals": [] }, "transformSpec": { "filter": null, "transforms": [] } }, "tuningConfig": { "type": "kafka", "appendableIndexSpec": { "type": "onheap", "preserveExistingMetrics": false }, "maxRowsInMemory": 150000, "maxBytesInMemory": 0, "skipBytesInMemoryOverheadCheck": false, "maxRowsPerSegment": 5000000, "maxTotalRows": null, "intermediatePersistPeriod": "PT10M", "maxPendingPersists": 0, "indexSpec": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "indexSpecForIntermediatePersists": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "reportParseExceptions": false, "handoffConditionTimeout": 0, "resetOffsetAutomatically": false, "segmentWriteOutMediumFactory": null, "workerThreads": null, "chatThreads": null, "chatRetries": 8, "httpTimeout": "PT10S", "shutdownTimeout": "PT80S", "offsetFetchPeriod": "PT30S", "intermediateHandoffPeriod": "P2147483647D", "logParseExceptions": false, "maxParseExceptions": 2147483647, "maxSavedParseExceptions": 0, "skipSequenceNumberAvailabilityCheck": false, "repartitionTransitionDuration": "PT120S" }, "ioConfig": { "topic": "social_media", "inputFormat": { "type": "json", "keepNullColumns": false, "assumeNewlineDelimited": false, "useJsonNodeReader": false }, "replicas": 1, "taskCount": 1, "taskDuration": "PT3600S", "consumerProperties": { "bootstrap.servers": "localhost:9094" }, "autoScalerConfig": null, "pollTimeout": 100, "startDelay": "PT5S", "period": "PT30S", "useEarliestOffset": true, "completionTimeout": "PT1800S", "lateMessageRejectionPeriod": null, "earlyMessageRejectionPeriod": null, "lateMessageRejectionStartDateTime": null, "configOverrides": null, "idleConfig": null, "stream": "social_media", "useEarliestSequenceNumber": true }, "context": null, "suspended": false }, "suspended": false } ] "},{"title":"Get an array of supervisor states","type":1,"pageTitle":"Supervisor API","url":"/docs/27.0.0/api-reference/supervisor-api#get-an-array-of-supervisor-states","content":"Retrieves an array of objects representing active supervisors and their current state. If there are no active supervisors, it returns an empty array. For reference on the supervisor object properties, see the preceding table. 
URL GET /druid/indexer/v1/supervisor?state=true Responses 200 SUCCESS Successfully retrieved supervisor state objects Sample request cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor?state=true" Sample response Click to show sample response [ { "id": "wikipedia_stream", "state": "UNHEALTHY_SUPERVISOR", "detailedState": "UNABLE_TO_CONNECT_TO_STREAM", "healthy": false, "suspended": false }, { "id": "social_media", "state": "RUNNING", "detailedState": "RUNNING", "healthy": true, "suspended": false } ] "},{"title":"Get supervisor specification","type":1,"pageTitle":"Supervisor API","url":"/docs/27.0.0/api-reference/supervisor-api#get-supervisor-specification","content":"Retrieves the specification for a single supervisor. The returned specification includes the dataSchema, ioConfig, and tuningConfig objects. URL GET /druid/indexer/v1/supervisor/:supervisorId Responses 200 SUCCESS404 NOT FOUND Successfully retrieved supervisor spec Sample request The following example shows how to retrieve the specification of a supervisor with the name wikipedia_stream. cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor/wikipedia_stream" Sample response Click to show sample response { "type": "kafka", "spec": { "dataSchema": { "dataSource": "social_media", "timestampSpec": { "column": "__time", "format": "iso", "missingValue": null }, "dimensionsSpec": { "dimensions": [ { "type": "string", "name": "username", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "string", "name": "post_title", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "long", "name": "views", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "upvotes", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "comments", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "string", "name": "edited", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true } ], "dimensionExclusions": [ "__time" ], "includeAllDimensions": false, "useSchemaDiscovery": false }, "metricsSpec": [], "granularitySpec": { "type": "uniform", "segmentGranularity": "HOUR", "queryGranularity": { "type": "none" }, "rollup": false, "intervals": [] }, "transformSpec": { "filter": null, "transforms": [] } }, "ioConfig": { "topic": "social_media", "inputFormat": { "type": "json", "keepNullColumns": false, "assumeNewlineDelimited": false, "useJsonNodeReader": false }, "replicas": 1, "taskCount": 1, "taskDuration": "PT3600S", "consumerProperties": { "bootstrap.servers": "localhost:9094" }, "autoScalerConfig": null, "pollTimeout": 100, "startDelay": "PT5S", "period": "PT30S", "useEarliestOffset": true, "completionTimeout": "PT1800S", "lateMessageRejectionPeriod": null, "earlyMessageRejectionPeriod": null, "lateMessageRejectionStartDateTime": null, "configOverrides": null, "idleConfig": null, "stream": "social_media", "useEarliestSequenceNumber": true }, "tuningConfig": { "type": "kafka", "appendableIndexSpec": { "type": "onheap", "preserveExistingMetrics": false }, "maxRowsInMemory": 150000, "maxBytesInMemory": 0, "skipBytesInMemoryOverheadCheck": false, "maxRowsPerSegment": 5000000, "maxTotalRows": null, "intermediatePersistPeriod": "PT10M", "maxPendingPersists": 0, "indexSpec": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, 
"indexSpecForIntermediatePersists": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "reportParseExceptions": false, "handoffConditionTimeout": 0, "resetOffsetAutomatically": false, "segmentWriteOutMediumFactory": null, "workerThreads": null, "chatThreads": null, "chatRetries": 8, "httpTimeout": "PT10S", "shutdownTimeout": "PT80S", "offsetFetchPeriod": "PT30S", "intermediateHandoffPeriod": "P2147483647D", "logParseExceptions": false, "maxParseExceptions": 2147483647, "maxSavedParseExceptions": 0, "skipSequenceNumberAvailabilityCheck": false, "repartitionTransitionDuration": "PT120S" } }, "dataSchema": { "dataSource": "social_media", "timestampSpec": { "column": "__time", "format": "iso", "missingValue": null }, "dimensionsSpec": { "dimensions": [ { "type": "string", "name": "username", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "string", "name": "post_title", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "long", "name": "views", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "upvotes", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "comments", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "string", "name": "edited", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true } ], "dimensionExclusions": [ "__time" ], "includeAllDimensions": false, "useSchemaDiscovery": false }, "metricsSpec": [], "granularitySpec": { "type": "uniform", "segmentGranularity": "HOUR", "queryGranularity": { "type": "none" }, "rollup": false, "intervals": [] }, "transformSpec": { "filter": null, "transforms": [] } }, "tuningConfig": { "type": "kafka", "appendableIndexSpec": { "type": "onheap", "preserveExistingMetrics": false }, "maxRowsInMemory": 150000, "maxBytesInMemory": 0, "skipBytesInMemoryOverheadCheck": false, "maxRowsPerSegment": 5000000, "maxTotalRows": null, "intermediatePersistPeriod": "PT10M", "maxPendingPersists": 0, "indexSpec": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "indexSpecForIntermediatePersists": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "reportParseExceptions": false, "handoffConditionTimeout": 0, "resetOffsetAutomatically": false, "segmentWriteOutMediumFactory": null, "workerThreads": null, "chatThreads": null, "chatRetries": 8, "httpTimeout": "PT10S", "shutdownTimeout": "PT80S", "offsetFetchPeriod": "PT30S", "intermediateHandoffPeriod": "P2147483647D", "logParseExceptions": false, "maxParseExceptions": 2147483647, "maxSavedParseExceptions": 0, "skipSequenceNumberAvailabilityCheck": false, "repartitionTransitionDuration": "PT120S" }, "ioConfig": { "topic": "social_media", "inputFormat": { "type": "json", "keepNullColumns": false, "assumeNewlineDelimited": false, "useJsonNodeReader": false }, "replicas": 1, "taskCount": 1, "taskDuration": "PT3600S", "consumerProperties": { "bootstrap.servers": "localhost:9094" }, "autoScalerConfig": null, "pollTimeout": 100, "startDelay": "PT5S", "period": "PT30S", "useEarliestOffset": true, "completionTimeout": "PT1800S", "lateMessageRejectionPeriod": null, "earlyMessageRejectionPeriod": null, 
"lateMessageRejectionStartDateTime": null, "configOverrides": null, "idleConfig": null, "stream": "social_media", "useEarliestSequenceNumber": true }, "context": null, "suspended": false } "},{"title":"Get supervisor status","type":1,"pageTitle":"Supervisor API","url":"/docs/27.0.0/api-reference/supervisor-api#get-supervisor-status","content":"Retrieves the current status report for a single supervisor. The report contains the state of the supervisor tasks and an array of recently thrown exceptions. For additional information about the status report, see the topic for each streaming ingestion methods: Amazon KinesisApache Kafka URL GET /druid/indexer/v1/supervisor/:supervisorId/status Responses 200 SUCCESS404 NOT FOUND Successfully retrieved supervisor status Sample request The following example shows how to retrieve the status of a supervisor with the name social_media. cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor/social_media/status" Sample response Click to show sample response { "id": "social_media", "generationTime": "2023-07-05T23:24:43.934Z", "payload": { "dataSource": "social_media", "stream": "social_media", "partitions": 1, "replicas": 1, "durationSeconds": 3600, "activeTasks": [ { "id": "index_kafka_social_media_ab72ae4127c591c_flcbhdlh", "startingOffsets": { "0": 3176381 }, "startTime": "2023-07-05T23:21:39.321Z", "remainingSeconds": 3415, "type": "ACTIVE", "currentOffsets": { "0": 3296632 }, "lag": { "0": 3 } } ], "publishingTasks": [], "latestOffsets": { "0": 3296635 }, "minimumLag": { "0": 3 }, "aggregateLag": 3, "offsetsLastUpdated": "2023-07-05T23:24:30.212Z", "suspended": false, "healthy": true, "state": "RUNNING", "detailedState": "RUNNING", "recentErrors": [] } } "},{"title":"Audit history","type":1,"pageTitle":"Supervisor API","url":"/docs/27.0.0/api-reference/supervisor-api#audit-history","content":"An audit history provides a comprehensive log of events, including supervisor configuration, creation, suspension, and modification history. "},{"title":"Get audit history for all supervisors","type":1,"pageTitle":"Supervisor API","url":"/docs/27.0.0/api-reference/supervisor-api#get-audit-history-for-all-supervisors","content":"Retrieve an audit history of specs for all supervisors. 
URL GET /druid/indexer/v1/supervisor/history Responses 200 SUCCESS Successfully retrieved audit history Sample request cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor/history" Sample response Click to show sample response { "social_media": [ { "spec": { "type": "kafka", "spec": { "dataSchema": { "dataSource": "social_media", "timestampSpec": { "column": "__time", "format": "iso", "missingValue": null }, "dimensionsSpec": { "dimensions": [ { "type": "string", "name": "username", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "string", "name": "post_title", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "long", "name": "views", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "upvotes", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "comments", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "string", "name": "edited", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true } ], "dimensionExclusions": [ "__time" ], "includeAllDimensions": false, "useSchemaDiscovery": false }, "metricsSpec": [], "granularitySpec": { "type": "uniform", "segmentGranularity": "HOUR", "queryGranularity": { "type": "none" }, "rollup": false, "intervals": [] }, "transformSpec": { "filter": null, "transforms": [] } }, "ioConfig": { "topic": "social_media", "inputFormat": { "type": "json", "keepNullColumns": false, "assumeNewlineDelimited": false, "useJsonNodeReader": false }, "replicas": 1, "taskCount": 1, "taskDuration": "PT3600S", "consumerProperties": { "bootstrap.servers": "localhost:9094" }, "autoScalerConfig": null, "pollTimeout": 100, "startDelay": "PT5S", "period": "PT30S", "useEarliestOffset": true, "completionTimeout": "PT1800S", "lateMessageRejectionPeriod": null, "earlyMessageRejectionPeriod": null, "lateMessageRejectionStartDateTime": null, "configOverrides": null, "idleConfig": null, "stream": "social_media", "useEarliestSequenceNumber": true }, "tuningConfig": { "type": "kafka", "appendableIndexSpec": { "type": "onheap", "preserveExistingMetrics": false }, "maxRowsInMemory": 150000, "maxBytesInMemory": 0, "skipBytesInMemoryOverheadCheck": false, "maxRowsPerSegment": 5000000, "maxTotalRows": null, "intermediatePersistPeriod": "PT10M", "maxPendingPersists": 0, "indexSpec": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "indexSpecForIntermediatePersists": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "reportParseExceptions": false, "handoffConditionTimeout": 0, "resetOffsetAutomatically": false, "segmentWriteOutMediumFactory": null, "workerThreads": null, "chatThreads": null, "chatRetries": 8, "httpTimeout": "PT10S", "shutdownTimeout": "PT80S", "offsetFetchPeriod": "PT30S", "intermediateHandoffPeriod": "P2147483647D", "logParseExceptions": false, "maxParseExceptions": 2147483647, "maxSavedParseExceptions": 0, "skipSequenceNumberAvailabilityCheck": false, "repartitionTransitionDuration": "PT120S" } }, "dataSchema": { "dataSource": "social_media", "timestampSpec": { "column": "__time", "format": "iso", "missingValue": null }, "dimensionsSpec": { "dimensions": [ { "type": "string", "name": "username", "multiValueHandling": "SORTED_ARRAY", 
"createBitmapIndex": true }, { "type": "string", "name": "post_title", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "long", "name": "views", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "upvotes", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "comments", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "string", "name": "edited", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true } ], "dimensionExclusions": [ "__time" ], "includeAllDimensions": false, "useSchemaDiscovery": false }, "metricsSpec": [], "granularitySpec": { "type": "uniform", "segmentGranularity": "HOUR", "queryGranularity": { "type": "none" }, "rollup": false, "intervals": [] }, "transformSpec": { "filter": null, "transforms": [] } }, "tuningConfig": { "type": "kafka", "appendableIndexSpec": { "type": "onheap", "preserveExistingMetrics": false }, "maxRowsInMemory": 150000, "maxBytesInMemory": 0, "skipBytesInMemoryOverheadCheck": false, "maxRowsPerSegment": 5000000, "maxTotalRows": null, "intermediatePersistPeriod": "PT10M", "maxPendingPersists": 0, "indexSpec": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "indexSpecForIntermediatePersists": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "reportParseExceptions": false, "handoffConditionTimeout": 0, "resetOffsetAutomatically": false, "segmentWriteOutMediumFactory": null, "workerThreads": null, "chatThreads": null, "chatRetries": 8, "httpTimeout": "PT10S", "shutdownTimeout": "PT80S", "offsetFetchPeriod": "PT30S", "intermediateHandoffPeriod": "P2147483647D", "logParseExceptions": false, "maxParseExceptions": 2147483647, "maxSavedParseExceptions": 0, "skipSequenceNumberAvailabilityCheck": false, "repartitionTransitionDuration": "PT120S" }, "ioConfig": { "topic": "social_media", "inputFormat": { "type": "json", "keepNullColumns": false, "assumeNewlineDelimited": false, "useJsonNodeReader": false }, "replicas": 1, "taskCount": 1, "taskDuration": "PT3600S", "consumerProperties": { "bootstrap.servers": "localhost:9094" }, "autoScalerConfig": null, "pollTimeout": 100, "startDelay": "PT5S", "period": "PT30S", "useEarliestOffset": true, "completionTimeout": "PT1800S", "lateMessageRejectionPeriod": null, "earlyMessageRejectionPeriod": null, "lateMessageRejectionStartDateTime": null, "configOverrides": null, "idleConfig": null, "stream": "social_media", "useEarliestSequenceNumber": true }, "context": null, "suspended": false }, "version": "2023-07-03T18:51:02.970Z" } ] } "},{"title":"Get audit history for a specific supervisor","type":1,"pageTitle":"Supervisor API","url":"/docs/27.0.0/api-reference/supervisor-api#get-audit-history-for-a-specific-supervisor","content":"Retrieves an audit history of specs for a single supervisor. URL GET /druid/indexer/v1/supervisor/:supervisorId/history Responses 200 SUCCESS404 NOT FOUND Successfully retrieved supervisor audit history Sample request The following example shows how to retrieve the audit history of a supervisor with the name wikipedia_stream. 
cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor/wikipedia_stream/history" Sample response Click to show sample response [ { "spec": { "type": "kafka", "spec": { "dataSchema": { "dataSource": "wikipedia_stream", "timestampSpec": { "column": "__time", "format": "iso", "missingValue": null }, "dimensionsSpec": { "dimensions": [ { "type": "string", "name": "username", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "string", "name": "post_title", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "long", "name": "views", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "upvotes", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "comments", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "string", "name": "edited", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true } ], "dimensionExclusions": [ "__time" ], "includeAllDimensions": false, "useSchemaDiscovery": false }, "metricsSpec": [], "granularitySpec": { "type": "uniform", "segmentGranularity": "HOUR", "queryGranularity": { "type": "none" }, "rollup": false, "intervals": [] }, "transformSpec": { "filter": null, "transforms": [] } }, "ioConfig": { "topic": "social_media", "inputFormat": { "type": "json", "keepNullColumns": false, "assumeNewlineDelimited": false, "useJsonNodeReader": false }, "replicas": 1, "taskCount": 1, "taskDuration": "PT3600S", "consumerProperties": { "bootstrap.servers": "localhost:9042" }, "autoScalerConfig": null, "pollTimeout": 100, "startDelay": "PT5S", "period": "PT30S", "useEarliestOffset": true, "completionTimeout": "PT1800S", "lateMessageRejectionPeriod": null, "earlyMessageRejectionPeriod": null, "lateMessageRejectionStartDateTime": null, "configOverrides": null, "idleConfig": null, "stream": "social_media", "useEarliestSequenceNumber": true }, "tuningConfig": { "type": "kafka", "appendableIndexSpec": { "type": "onheap", "preserveExistingMetrics": false }, "maxRowsInMemory": 150000, "maxBytesInMemory": 0, "skipBytesInMemoryOverheadCheck": false, "maxRowsPerSegment": 5000000, "maxTotalRows": null, "intermediatePersistPeriod": "PT10M", "maxPendingPersists": 0, "indexSpec": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "indexSpecForIntermediatePersists": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "reportParseExceptions": false, "handoffConditionTimeout": 0, "resetOffsetAutomatically": false, "segmentWriteOutMediumFactory": null, "workerThreads": null, "chatThreads": null, "chatRetries": 8, "httpTimeout": "PT10S", "shutdownTimeout": "PT80S", "offsetFetchPeriod": "PT30S", "intermediateHandoffPeriod": "P2147483647D", "logParseExceptions": false, "maxParseExceptions": 2147483647, "maxSavedParseExceptions": 0, "skipSequenceNumberAvailabilityCheck": false, "repartitionTransitionDuration": "PT120S" } }, "dataSchema": { "dataSource": "wikipedia_stream", "timestampSpec": { "column": "__time", "format": "iso", "missingValue": null }, "dimensionsSpec": { "dimensions": [ { "type": "string", "name": "username", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "string", "name": "post_title", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": 
true }, { "type": "long", "name": "views", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "upvotes", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "comments", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "string", "name": "edited", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true } ], "dimensionExclusions": [ "__time" ], "includeAllDimensions": false, "useSchemaDiscovery": false }, "metricsSpec": [], "granularitySpec": { "type": "uniform", "segmentGranularity": "HOUR", "queryGranularity": { "type": "none" }, "rollup": false, "intervals": [] }, "transformSpec": { "filter": null, "transforms": [] } }, "tuningConfig": { "type": "kafka", "appendableIndexSpec": { "type": "onheap", "preserveExistingMetrics": false }, "maxRowsInMemory": 150000, "maxBytesInMemory": 0, "skipBytesInMemoryOverheadCheck": false, "maxRowsPerSegment": 5000000, "maxTotalRows": null, "intermediatePersistPeriod": "PT10M", "maxPendingPersists": 0, "indexSpec": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "indexSpecForIntermediatePersists": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "reportParseExceptions": false, "handoffConditionTimeout": 0, "resetOffsetAutomatically": false, "segmentWriteOutMediumFactory": null, "workerThreads": null, "chatThreads": null, "chatRetries": 8, "httpTimeout": "PT10S", "shutdownTimeout": "PT80S", "offsetFetchPeriod": "PT30S", "intermediateHandoffPeriod": "P2147483647D", "logParseExceptions": false, "maxParseExceptions": 2147483647, "maxSavedParseExceptions": 0, "skipSequenceNumberAvailabilityCheck": false, "repartitionTransitionDuration": "PT120S" }, "ioConfig": { "topic": "social_media", "inputFormat": { "type": "json", "keepNullColumns": false, "assumeNewlineDelimited": false, "useJsonNodeReader": false }, "replicas": 1, "taskCount": 1, "taskDuration": "PT3600S", "consumerProperties": { "bootstrap.servers": "localhost:9042" }, "autoScalerConfig": null, "pollTimeout": 100, "startDelay": "PT5S", "period": "PT30S", "useEarliestOffset": true, "completionTimeout": "PT1800S", "lateMessageRejectionPeriod": null, "earlyMessageRejectionPeriod": null, "lateMessageRejectionStartDateTime": null, "configOverrides": null, "idleConfig": null, "stream": "social_media", "useEarliestSequenceNumber": true }, "context": null, "suspended": false }, "version": "2023-07-05T20:59:16.872Z" } ] "},{"title":"Manage supervisors","type":1,"pageTitle":"Supervisor API","url":"/docs/27.0.0/api-reference/supervisor-api#manage-supervisors","content":""},{"title":"Create or update a supervisor","type":1,"pageTitle":"Supervisor API","url":"/docs/27.0.0/api-reference/supervisor-api#create-or-update-a-supervisor","content":"Creates a new supervisor or updates an existing one for the same datasource with a new schema and configuration. You can define a supervisor spec for Apache Kafka or Amazon Kinesis streaming ingestion methods. Once created, the supervisor persists in the metadata database. When you call this endpoint on an existing supervisor for the same datasource, the running supervisor signals its tasks to stop reading and begin publishing, exiting itself. 
Druid then uses the provided configuration from the request body to create a new supervisor. Druid submits a new schema while retaining existing publishing tasks and starts new tasks at the previous task offsets. URL POST /druid/indexer/v1/supervisor Responses 200 SUCCESS415 UNSUPPORTED MEDIA TYPE Successfully created a new supervisor or updated an existing supervisor Sample request The following example uses JSON input format to create a supervisor spec for Kafka with a social_media datasource and social_media topic. cURLHTTP curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor" \\ --header 'Content-Type: application/json' \\ --data '{ "type": "kafka", "spec": { "ioConfig": { "type": "kafka", "consumerProperties": { "bootstrap.servers": "localhost:9094" }, "topic": "social_media", "inputFormat": { "type": "json" }, "useEarliestOffset": true }, "tuningConfig": { "type": "kafka" }, "dataSchema": { "dataSource": "social_media", "timestampSpec": { "column": "__time", "format": "iso" }, "dimensionsSpec": { "dimensions": [ "username", "post_title", { "type": "long", "name": "views" }, { "type": "long", "name": "upvotes" }, { "type": "long", "name": "comments" }, "edited" ] }, "granularitySpec": { "queryGranularity": "none", "rollup": false, "segmentGranularity": "hour" } } } }' Sample response Click to show sample response { "id": "social_media" } "},{"title":"Suspend a running supervisor","type":1,"pageTitle":"Supervisor API","url":"/docs/27.0.0/api-reference/supervisor-api#suspend-a-running-supervisor","content":"Suspends a single running supervisor. Returns the updated supervisor spec, where the suspended property is set to true. The suspended supervisor continues to emit logs and metrics. URL POST /druid/indexer/v1/supervisor/:supervisorId/suspend Responses 200 SUCCESS400 BAD REQUEST404 NOT FOUND Successfully shut down supervisor Sample request The following example shows how to suspend a running supervisor with the name social_media. 
cURLHTTP curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor/social_media/suspend" Sample response Click to show sample response { "type": "kafka", "spec": { "dataSchema": { "dataSource": "social_media", "timestampSpec": { "column": "__time", "format": "iso", "missingValue": null }, "dimensionsSpec": { "dimensions": [ { "type": "string", "name": "username", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "string", "name": "post_title", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "long", "name": "views", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "upvotes", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "comments", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "string", "name": "edited", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true } ], "dimensionExclusions": [ "__time" ], "includeAllDimensions": false, "useSchemaDiscovery": false }, "metricsSpec": [], "granularitySpec": { "type": "uniform", "segmentGranularity": "HOUR", "queryGranularity": { "type": "none" }, "rollup": false, "intervals": [] }, "transformSpec": { "filter": null, "transforms": [] } }, "ioConfig": { "topic": "social_media", "inputFormat": { "type": "json", "keepNullColumns": false, "assumeNewlineDelimited": false, "useJsonNodeReader": false }, "replicas": 1, "taskCount": 1, "taskDuration": "PT3600S", "consumerProperties": { "bootstrap.servers": "localhost:9094" }, "autoScalerConfig": null, "pollTimeout": 100, "startDelay": "PT5S", "period": "PT30S", "useEarliestOffset": true, "completionTimeout": "PT1800S", "lateMessageRejectionPeriod": null, "earlyMessageRejectionPeriod": null, "lateMessageRejectionStartDateTime": null, "configOverrides": null, "idleConfig": null, "stream": "social_media", "useEarliestSequenceNumber": true }, "tuningConfig": { "type": "kafka", "appendableIndexSpec": { "type": "onheap", "preserveExistingMetrics": false }, "maxRowsInMemory": 150000, "maxBytesInMemory": 0, "skipBytesInMemoryOverheadCheck": false, "maxRowsPerSegment": 5000000, "maxTotalRows": null, "intermediatePersistPeriod": "PT10M", "maxPendingPersists": 0, "indexSpec": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "indexSpecForIntermediatePersists": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "reportParseExceptions": false, "handoffConditionTimeout": 0, "resetOffsetAutomatically": false, "segmentWriteOutMediumFactory": null, "workerThreads": null, "chatThreads": null, "chatRetries": 8, "httpTimeout": "PT10S", "shutdownTimeout": "PT80S", "offsetFetchPeriod": "PT30S", "intermediateHandoffPeriod": "P2147483647D", "logParseExceptions": false, "maxParseExceptions": 2147483647, "maxSavedParseExceptions": 0, "skipSequenceNumberAvailabilityCheck": false, "repartitionTransitionDuration": "PT120S" } }, "dataSchema": { "dataSource": "social_media", "timestampSpec": { "column": "__time", "format": "iso", "missingValue": null }, "dimensionsSpec": { "dimensions": [ { "type": "string", "name": "username", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "string", "name": "post_title", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { 
"type": "long", "name": "views", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "upvotes", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "comments", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "string", "name": "edited", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true } ], "dimensionExclusions": [ "__time" ], "includeAllDimensions": false, "useSchemaDiscovery": false }, "metricsSpec": [], "granularitySpec": { "type": "uniform", "segmentGranularity": "HOUR", "queryGranularity": { "type": "none" }, "rollup": false, "intervals": [] }, "transformSpec": { "filter": null, "transforms": [] } }, "tuningConfig": { "type": "kafka", "appendableIndexSpec": { "type": "onheap", "preserveExistingMetrics": false }, "maxRowsInMemory": 150000, "maxBytesInMemory": 0, "skipBytesInMemoryOverheadCheck": false, "maxRowsPerSegment": 5000000, "maxTotalRows": null, "intermediatePersistPeriod": "PT10M", "maxPendingPersists": 0, "indexSpec": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "indexSpecForIntermediatePersists": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "reportParseExceptions": false, "handoffConditionTimeout": 0, "resetOffsetAutomatically": false, "segmentWriteOutMediumFactory": null, "workerThreads": null, "chatThreads": null, "chatRetries": 8, "httpTimeout": "PT10S", "shutdownTimeout": "PT80S", "offsetFetchPeriod": "PT30S", "intermediateHandoffPeriod": "P2147483647D", "logParseExceptions": false, "maxParseExceptions": 2147483647, "maxSavedParseExceptions": 0, "skipSequenceNumberAvailabilityCheck": false, "repartitionTransitionDuration": "PT120S" }, "ioConfig": { "topic": "social_media", "inputFormat": { "type": "json", "keepNullColumns": false, "assumeNewlineDelimited": false, "useJsonNodeReader": false }, "replicas": 1, "taskCount": 1, "taskDuration": "PT3600S", "consumerProperties": { "bootstrap.servers": "localhost:9094" }, "autoScalerConfig": null, "pollTimeout": 100, "startDelay": "PT5S", "period": "PT30S", "useEarliestOffset": true, "completionTimeout": "PT1800S", "lateMessageRejectionPeriod": null, "earlyMessageRejectionPeriod": null, "lateMessageRejectionStartDateTime": null, "configOverrides": null, "idleConfig": null, "stream": "social_media", "useEarliestSequenceNumber": true }, "context": null, "suspended": true } "},{"title":"Suspend all supervisors","type":1,"pageTitle":"Supervisor API","url":"/docs/27.0.0/api-reference/supervisor-api#suspend-all-supervisors","content":"Suspends all supervisors. Note that this endpoint returns an HTTP 200 Success code message even if there are no supervisors or running supervisors to suspend. URL POST /druid/indexer/v1/supervisor/suspendAll Responses 200 SUCCESS Successfully suspended all supervisors Sample request cURLHTTP curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor/suspendAll" Sample response Click to show sample response { "status": "success" } "},{"title":"Resume a supervisor","type":1,"pageTitle":"Supervisor API","url":"/docs/27.0.0/api-reference/supervisor-api#resume-a-supervisor","content":"Resumes indexing tasks for a supervisor. Returns an updated supervisor spec with the suspended property set to false. 
URL POST /druid/indexer/v1/supervisor/:supervisorId/resume Responses 200 SUCCESS400 BAD REQUEST404 NOT FOUND Successfully resumed supervisor Sample request The following example resumes a previously suspended supervisor with name social_media. cURLHTTP curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor/social_media/resume" Sample response Click to show sample response { "type": "kafka", "spec": { "dataSchema": { "dataSource": "social_media", "timestampSpec": { "column": "__time", "format": "iso", "missingValue": null }, "dimensionsSpec": { "dimensions": [ { "type": "string", "name": "username", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "string", "name": "post_title", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "long", "name": "views", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "upvotes", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "comments", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "string", "name": "edited", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true } ], "dimensionExclusions": [ "__time" ], "includeAllDimensions": false, "useSchemaDiscovery": false }, "metricsSpec": [], "granularitySpec": { "type": "uniform", "segmentGranularity": "HOUR", "queryGranularity": { "type": "none" }, "rollup": false, "intervals": [] }, "transformSpec": { "filter": null, "transforms": [] } }, "ioConfig": { "topic": "social_media", "inputFormat": { "type": "json", "keepNullColumns": false, "assumeNewlineDelimited": false, "useJsonNodeReader": false }, "replicas": 1, "taskCount": 1, "taskDuration": "PT3600S", "consumerProperties": { "bootstrap.servers": "localhost:9094" }, "autoScalerConfig": null, "pollTimeout": 100, "startDelay": "PT5S", "period": "PT30S", "useEarliestOffset": true, "completionTimeout": "PT1800S", "lateMessageRejectionPeriod": null, "earlyMessageRejectionPeriod": null, "lateMessageRejectionStartDateTime": null, "configOverrides": null, "idleConfig": null, "stream": "social_media", "useEarliestSequenceNumber": true }, "tuningConfig": { "type": "kafka", "appendableIndexSpec": { "type": "onheap", "preserveExistingMetrics": false }, "maxRowsInMemory": 150000, "maxBytesInMemory": 0, "skipBytesInMemoryOverheadCheck": false, "maxRowsPerSegment": 5000000, "maxTotalRows": null, "intermediatePersistPeriod": "PT10M", "maxPendingPersists": 0, "indexSpec": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "indexSpecForIntermediatePersists": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "reportParseExceptions": false, "handoffConditionTimeout": 0, "resetOffsetAutomatically": false, "segmentWriteOutMediumFactory": null, "workerThreads": null, "chatThreads": null, "chatRetries": 8, "httpTimeout": "PT10S", "shutdownTimeout": "PT80S", "offsetFetchPeriod": "PT30S", "intermediateHandoffPeriod": "P2147483647D", "logParseExceptions": false, "maxParseExceptions": 2147483647, "maxSavedParseExceptions": 0, "skipSequenceNumberAvailabilityCheck": false, "repartitionTransitionDuration": "PT120S" } }, "dataSchema": { "dataSource": "social_media", "timestampSpec": { "column": "__time", "format": "iso", "missingValue": null }, 
"dimensionsSpec": { "dimensions": [ { "type": "string", "name": "username", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "string", "name": "post_title", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true }, { "type": "long", "name": "views", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "upvotes", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "long", "name": "comments", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": false }, { "type": "string", "name": "edited", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true } ], "dimensionExclusions": [ "__time" ], "includeAllDimensions": false, "useSchemaDiscovery": false }, "metricsSpec": [], "granularitySpec": { "type": "uniform", "segmentGranularity": "HOUR", "queryGranularity": { "type": "none" }, "rollup": false, "intervals": [] }, "transformSpec": { "filter": null, "transforms": [] } }, "tuningConfig": { "type": "kafka", "appendableIndexSpec": { "type": "onheap", "preserveExistingMetrics": false }, "maxRowsInMemory": 150000, "maxBytesInMemory": 0, "skipBytesInMemoryOverheadCheck": false, "maxRowsPerSegment": 5000000, "maxTotalRows": null, "intermediatePersistPeriod": "PT10M", "maxPendingPersists": 0, "indexSpec": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "indexSpecForIntermediatePersists": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "stringDictionaryEncoding": { "type": "utf8" }, "metricCompression": "lz4", "longEncoding": "longs" }, "reportParseExceptions": false, "handoffConditionTimeout": 0, "resetOffsetAutomatically": false, "segmentWriteOutMediumFactory": null, "workerThreads": null, "chatThreads": null, "chatRetries": 8, "httpTimeout": "PT10S", "shutdownTimeout": "PT80S", "offsetFetchPeriod": "PT30S", "intermediateHandoffPeriod": "P2147483647D", "logParseExceptions": false, "maxParseExceptions": 2147483647, "maxSavedParseExceptions": 0, "skipSequenceNumberAvailabilityCheck": false, "repartitionTransitionDuration": "PT120S" }, "ioConfig": { "topic": "social_media", "inputFormat": { "type": "json", "keepNullColumns": false, "assumeNewlineDelimited": false, "useJsonNodeReader": false }, "replicas": 1, "taskCount": 1, "taskDuration": "PT3600S", "consumerProperties": { "bootstrap.servers": "localhost:9094" }, "autoScalerConfig": null, "pollTimeout": 100, "startDelay": "PT5S", "period": "PT30S", "useEarliestOffset": true, "completionTimeout": "PT1800S", "lateMessageRejectionPeriod": null, "earlyMessageRejectionPeriod": null, "lateMessageRejectionStartDateTime": null, "configOverrides": null, "idleConfig": null, "stream": "social_media", "useEarliestSequenceNumber": true }, "context": null, "suspended": false } "},{"title":"Resume all supervisors","type":1,"pageTitle":"Supervisor API","url":"/docs/27.0.0/api-reference/supervisor-api#resume-all-supervisors","content":"Resumes all supervisors. Note that this endpoint returns an HTTP 200 Success code even if there are no supervisors or suspended supervisors to resume. 
URL POST /druid/indexer/v1/supervisor/resumeAll Responses 200 SUCCESS Successfully resumed all supervisors Sample request cURLHTTP curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor/resumeAll" Sample response Click to show sample response { "status": "success" } "},{"title":"Reset a supervisor","type":1,"pageTitle":"Supervisor API","url":"/docs/27.0.0/api-reference/supervisor-api#reset-a-supervisor","content":"Resets the specified supervisor. This endpoint clears stored offsets in Kafka or sequence numbers in Kinesis, prompting the supervisor to resume data reading. The supervisor will start from the earliest or latest available position, depending on the platform (offsets in Kafka or sequence numbers in Kinesis). It kills and recreates active tasks to read from valid positions. Use this endpoint to recover from a stopped state due to missing offsets in Kafka or sequence numbers in Kinesis. Use this endpoint with caution as it may result in skipped messages and lead to data loss or duplicate data. URL POST /druid/indexer/v1/supervisor/:supervisorId/reset Responses 200 SUCCESS404 NOT FOUND Successfully reset supervisor Sample request The following example shows how to reset a supervisor with the name social_media. cURLHTTP curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor/social_media/reset" Sample response Click to show sample response { "id": "social_media" } "},{"title":"Terminate a supervisor","type":1,"pageTitle":"Supervisor API","url":"/docs/27.0.0/api-reference/supervisor-api#terminate-a-supervisor","content":"Terminates a supervisor and its associated indexing tasks, triggering the publishing of their segments. When terminated, a tombstone marker is placed in the database to prevent reloading on restart. The terminated supervisor still exists in the metadata store and its history can be retrieved. URL POST /druid/indexer/v1/supervisor/:supervisorId/terminate Responses 200 SUCCESS404 NOT FOUND Successfully terminated a supervisor Sample request cURLHTTP curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor/social_media/terminate" Sample response Click to show sample response { "id": "social_media" } "},{"title":"Terminate all supervisors","type":1,"pageTitle":"Supervisor API","url":"/docs/27.0.0/api-reference/supervisor-api#terminate-all-supervisors","content":"Terminates all supervisors. Terminated supervisors still exist in the metadata store and their history can be retrieved. Note that this endpoint returns an HTTP 200 Success code even if there are no supervisors or running supervisors to terminate. URL POST /druid/indexer/v1/supervisor/terminateAll Responses 200 SUCCESS Successfully terminated all supervisors Sample request cURLHTTP curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor/terminateAll" Sample response Click to show sample response { "status": "success" } "},{"title":"Shut down a supervisor","type":1,"pageTitle":"Supervisor API","url":"/docs/27.0.0/api-reference/supervisor-api#shut-down-a-supervisor","content":"Shuts down a supervisor. This endpoint is deprecated and will be removed in future releases. Use the equivalent terminate endpoint instead. 
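A shutdown request takes the same form as a terminate request; for example (shown for completeness only, prefer the terminate endpoint in new automation):
curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor/social_media/shutdown"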
URL POST /druid/indexer/v1/supervisor/:supervisorId/shutdown "},{"title":"Partitioning","type":0,"sectionRef":"#","url":"/docs/27.0.0/ingestion/partitioning","content":"","keywords":""},{"title":"Time chunk partitioning","type":1,"pageTitle":"Partitioning","url":"/docs/27.0.0/ingestion/partitioning#time-chunk-partitioning","content":"Druid always partitions datasources by time into time chunks. Each time chunk contains one or more segments. This partitioning happens for all ingestion methods based on the segmentGranularity parameter in the dataSchema object of your ingestion spec. Partitioning by time is important for two reasons: Queries that filter by __time (SQL) or intervals (native) are able to use time partitioning to prune the set of segments to consider. Certain data management operations, such as overwriting and compacting existing data, acquire exclusive write locks on time partitions. Each segment file is wholly contained within a time partition. Too-fine-grained partitioning may cause a large number of small segments, which leads to poor performance. The most common choices to balance these considerations are hour and day. For streaming ingestion, hour is especially common, because it allows compaction to follow ingestion with less of a time delay. "},{"title":"Secondary partitioning","type":1,"pageTitle":"Partitioning","url":"/docs/27.0.0/ingestion/partitioning#secondary-partitioning","content":"Druid can further partition segments within a particular time chunk, using options that vary based on the ingestion type you have chosen. In general, secondary partitioning on a particular dimension improves locality. This means that rows with the same value for that dimension are stored together, decreasing access time. To achieve the best performance and smallest overall footprint, partition your data on a "natural" dimension that you often use as a filter, when possible. Such partitioning often improves compression and query performance. For example, some cases have yielded threefold storage size decreases. "},{"title":"Partitioning and sorting","type":1,"pageTitle":"Partitioning","url":"/docs/27.0.0/ingestion/partitioning#partitioning-and-sorting","content":"Partitioning and sorting work well together. If you do have a "natural" partitioning dimension, consider placing it first in the dimensions list of your dimensionsSpec. This way Druid sorts rows within each segment by that column. This sorting configuration frequently improves compression more than using partitioning alone. Note that Druid always sorts rows within a segment by timestamp first, even before the first dimension listed in your dimensionsSpec. This sorting can limit the benefit of dimension sorting. To work around this limitation if necessary, set your queryGranularity equal to segmentGranularity in your granularitySpec. Druid will set all timestamps within the segment to the same value, letting you identify a secondary timestamp as the "real" timestamp. "},{"title":"How to configure partitioning","type":1,"pageTitle":"Partitioning","url":"/docs/27.0.0/ingestion/partitioning#how-to-configure-partitioning","content":"Not all ingestion methods support an explicit partitioning configuration, and not all have equivalent levels of flexibility. If you are doing initial ingestion through a less-flexible method like Kafka, you can use reindexing or compaction to repartition your data after initial ingestion.
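For example, a compaction task that repartitions one month of an existing datasource by a frequently filtered dimension might look like the following sketch. The datasource name, interval, dimension, and row target are placeholders, and additional tuning fields (for example maxNumConcurrentSubTasks or forceGuaranteedRollup) may be required depending on your setup; see the compaction and partitionsSpec documentation for the authoritative field list:
{
  "type": "compact",
  "dataSource": "wikipedia",
  "ioConfig": {
    "type": "compact",
    "inputSpec": { "type": "interval", "interval": "2023-01-01/2023-02-01" }
  },
  "tuningConfig": {
    "type": "index_parallel",
    "partitionsSpec": {
      "type": "range",
      "partitionDimensions": ["countryName"],
      "targetRowsPerSegment": 5000000
    }
  }
}
Choosing a partition dimension that you frequently filter on follows the "natural" dimension guidance in the secondary partitioning section above.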
This is a powerful technique you can use to optimally partition any data older than a certain time threshold while you continuously add new data from a stream. The following table shows how each ingestion method handles partitioning: Method\tHow it worksNative batch\tConfigured using partitionsSpec inside the tuningConfig. SQL\tConfigured using PARTITIONED BY and CLUSTERED BY. Hadoop\tConfigured using partitionsSpec inside the tuningConfig. Kafka indexing service\tKafka topic partitioning defines how Druid partitions the datasource. You can also reindex or compact to repartition after initial ingestion. Kinesis indexing service\tKinesis stream sharding defines how Druid partitions the datasource. You can also reindex or compact to repartition after initial ingestion. "},{"title":"Learn more","type":1,"pageTitle":"Partitioning","url":"/docs/27.0.0/ingestion/partitioning#learn-more","content":"See the following topics for more information: partitionsSpec for more detail on partitioning with Native Batch ingestion.Reindexing and Compaction for information on how to repartition existing data in Druid. "},{"title":"Hadoop-based ingestion","type":0,"sectionRef":"#","url":"/docs/27.0.0/ingestion/hadoop","content":"","keywords":""},{"title":"Tutorial","type":1,"pageTitle":"Hadoop-based ingestion","url":"/docs/27.0.0/ingestion/hadoop#tutorial","content":"This page contains reference documentation for Hadoop-based ingestion. For a walk-through instead, check out the Loading from Apache Hadoop tutorial. "},{"title":"Task syntax","type":1,"pageTitle":"Hadoop-based ingestion","url":"/docs/27.0.0/ingestion/hadoop#task-syntax","content":"A sample task is shown below: { "type" : "index_hadoop", "spec" : { "dataSchema" : { "dataSource" : "wikipedia", "parser" : { "type" : "hadoopyString", "parseSpec" : { "format" : "json", "timestampSpec" : { "column" : "timestamp", "format" : "auto" }, "dimensionsSpec" : { "dimensions": ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"], "dimensionExclusions" : [], "spatialDimensions" : [] } } }, "metricsSpec" : [ { "type" : "count", "name" : "count" }, { "type" : "doubleSum", "name" : "added", "fieldName" : "added" }, { "type" : "doubleSum", "name" : "deleted", "fieldName" : "deleted" }, { "type" : "doubleSum", "name" : "delta", "fieldName" : "delta" } ], "granularitySpec" : { "type" : "uniform", "segmentGranularity" : "DAY", "queryGranularity" : "NONE", "intervals" : [ "2013-08-31/2013-09-01" ] } }, "ioConfig" : { "type" : "hadoop", "inputSpec" : { "type" : "static", "paths" : "/MyDirectory/example/wikipedia_data.json" } }, "tuningConfig" : { "type": "hadoop" } }, "hadoopDependencyCoordinates": <my_hadoop_version> } property\tdescription\trequired?type\tThe task type, this should always be "index_hadoop".\tyes spec\tA Hadoop Index Spec. See Ingestion\tyes hadoopDependencyCoordinates\tA JSON array of Hadoop dependency coordinates that Druid will use, this property will override the default Hadoop coordinates. Once specified, Druid will look for those Hadoop dependencies from the location specified by druid.extensions.hadoopDependenciesDir\tno classpathPrefix\tClasspath that will be prepended for the Peon process.\tno Also note that Druid automatically computes the classpath for Hadoop job containers that run in the Hadoop cluster. But in case of conflicts between Hadoop and Druid's dependencies, you can manually specify the classpath by setting druid.extensions.hadoopContainerDruidClasspath property. 
See the extensions config in base druid configuration. "},{"title":"dataSchema","type":1,"pageTitle":"Hadoop-based ingestion","url":"/docs/27.0.0/ingestion/hadoop#dataschema","content":"This field is required. See the dataSchema section of the main ingestion page for details on what it should contain. "},{"title":"ioConfig","type":1,"pageTitle":"Hadoop-based ingestion","url":"/docs/27.0.0/ingestion/hadoop#ioconfig","content":"This field is required. Field\tType\tDescription\tRequiredtype\tString\tThis should always be 'hadoop'.\tyes inputSpec\tObject\tA specification of where to pull the data in from. See below.\tyes segmentOutputPath\tString\tThe path to dump segments into.\tOnly used by the Command-line Hadoop indexer. This field must be null otherwise. metadataUpdateSpec\tObject\tA specification of how to update the metadata for the druid cluster these segments belong to.\tOnly used by the Command-line Hadoop indexer. This field must be null otherwise. "},{"title":"inputSpec","type":1,"pageTitle":"Hadoop-based ingestion","url":"/docs/27.0.0/ingestion/hadoop#inputspec","content":"There are multiple types of inputSpecs: static A type of inputSpec where a static path to the data files is provided. Field\tType\tDescription\tRequiredinputFormat\tString\tSpecifies the Hadoop InputFormat class to use. e.g. org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat\tno paths\tString\tComma-separated input paths to the raw data. Druid ingests data only from the configured paths. It does not search recursively for data in subdirectories.\tyes For example, using the static input paths: "paths" : "hdfs://path/to/data/is/here/data.gz,hdfs://path/to/data/is/here/moredata.gz,hdfs://path/to/data/is/here/evenmoredata.gz" You can also read from cloud storage such as AWS S3 or Google Cloud Storage. To do so, you need to install the necessary library under Druid's classpath in all MiddleManager or Indexer processes. For S3, you can run the below command to install the Hadoop AWS module. java -classpath "${DRUID_HOME}lib/*" org.apache.druid.cli.Main tools pull-deps -h "org.apache.hadoop:hadoop-aws:${HADOOP_VERSION}"; cp ${DRUID_HOME}/hadoop-dependencies/hadoop-aws/${HADOOP_VERSION}/hadoop-aws-${HADOOP_VERSION}.jar ${DRUID_HOME}/extensions/druid-hdfs-storage/ Once you install the Hadoop AWS module in all MiddleManager and Indexer processes, you can put your S3 paths in the inputSpec with the below job properties. For more configurations, see the Hadoop AWS module. "paths" : "s3a://billy-bucket/the/data/is/here/data.gz,s3a://billy-bucket/the/data/is/here/moredata.gz,s3a://billy-bucket/the/data/is/here/evenmoredata.gz" "jobProperties" : { "fs.s3a.impl" : "org.apache.hadoop.fs.s3a.S3AFileSystem", "fs.AbstractFileSystem.s3a.impl" : "org.apache.hadoop.fs.s3a.S3A", "fs.s3a.access.key" : "YOUR_ACCESS_KEY", "fs.s3a.secret.key" : "YOUR_SECRET_KEY" } For Google Cloud Storage, you need to install GCS connector jarunder ${DRUID_HOME}/hadoop-dependencies in all MiddleManager or Indexer processes. Once you install the GCS Connector jar in all MiddleManager and Indexer processes, you can put your Google Cloud Storage paths in the inputSpec with the below job properties. For more configurations, see the instructions to configure Hadoop,GCS core defaultand GCS core template. 
"paths" : "gs://billy-bucket/the/data/is/here/data.gz,gs://billy-bucket/the/data/is/here/moredata.gz,gs://billy-bucket/the/data/is/here/evenmoredata.gz" "jobProperties" : { "fs.gs.impl" : "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem", "fs.AbstractFileSystem.gs.impl" : "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS" } granularity A type of inputSpec that expects data to be organized in directories according to datetime using the path format: y=XXXX/m=XX/d=XX/H=XX/M=XX/S=XX (where date is represented by lowercase and time is represented by uppercase). Field\tType\tDescription\tRequireddataGranularity\tString\tSpecifies the granularity to expect the data at, e.g. hour means to expect directories y=XXXX/m=XX/d=XX/H=XX.\tyes inputFormat\tString\tSpecifies the Hadoop InputFormat class to use. e.g. org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat\tno inputPath\tString\tBase path to append the datetime path to.\tyes filePattern\tString\tPattern that files should match to be included.\tyes pathFormat\tString\tJoda datetime format for each directory. Default value is "'y'=yyyy/'m'=MM/'d'=dd/'H'=HH", or see Joda documentation\tno For example, if the sample config were run with the interval 2012-06-01/2012-06-02, it would expect data at the paths: s3n://billy-bucket/the/data/is/here/y=2012/m=06/d=01/H=00 s3n://billy-bucket/the/data/is/here/y=2012/m=06/d=01/H=01 ... s3n://billy-bucket/the/data/is/here/y=2012/m=06/d=01/H=23 dataSource This is a type of inputSpec that reads data already stored inside Druid. This is used to allow "re-indexing" data and for "delta-ingestion" described later in multi type inputSpec. Field\tType\tDescription\tRequiredtype\tString.\tThis should always be 'dataSource'.\tyes ingestionSpec\tJSON object.\tSpecification of Druid segments to be loaded. See below.\tyes maxSplitSize\tNumber\tEnables combining multiple segments into single Hadoop InputSplit according to size of segments. With -1, druid calculates max split size based on user specified number of map task(mapred.map.tasks or mapreduce.job.maps). By default, one split is made for one segment. maxSplitSize is specified in bytes.\tno useNewAggs\tBoolean\tIf "false", then list of aggregators in "metricsSpec" of hadoop indexing task must be same as that used in original indexing task while ingesting raw data. Default value is "false". This field can be set to "true" when "inputSpec" type is "dataSource" and not "multi" to enable arbitrary aggregators while reindexing. See below for "multi" type support for delta-ingestion.\tno Here is what goes inside ingestionSpec: Field\tType\tDescription\tRequireddataSource\tString\tDruid dataSource name from which you are loading the data.\tyes intervals\tList\tA list of strings representing ISO-8601 Intervals.\tyes segments\tList\tList of segments from which to read data from, by default it is obtained automatically. You can obtain list of segments to put here by making a POST query to Coordinator at url /druid/coordinator/v1/metadata/datasources/segments?full with list of intervals specified in the request payload, e.g. ["2012-01-01T00:00:00.000/2012-01-03T00:00:00.000", "2012-01-05T00:00:00.000/2012-01-07T00:00:00.000"]. 
You may want to provide this list manually in order to ensure that the segments read are exactly the same as they were at the time of task submission; the task will fail if the list provided by the user does not match the state of the database when the task actually runs.\tno filter\tJSON\tSee Filters\tno dimensions\tArray of String\tName of dimension columns to load. By default, the list will be constructed from parseSpec. If parseSpec does not have an explicit list of dimensions then all the dimension columns present in stored data will be read.\tno metrics\tArray of String\tName of metric columns to load. By default, the list will be constructed from the "name" of all the configured aggregators.\tno ignoreWhenNoSegments\tboolean\tWhether to ignore this ingestionSpec if no segments were found. The default behavior is to throw an error when no segments are found.\tno For example "ioConfig" : { "type" : "hadoop", "inputSpec" : { "type" : "dataSource", "ingestionSpec" : { "dataSource": "wikipedia", "intervals": ["2014-10-20T00:00:00Z/P2W"] } }, ... } multi This is a composing inputSpec to combine other inputSpecs. This inputSpec is used for delta ingestion. You can also use a multi inputSpec to combine data from multiple dataSources. However, each particular dataSource can only be specified one time. Note that "useNewAggs" must be left at its default value of false to support delta-ingestion. Field\tType\tDescription\tRequiredchildren\tArray of JSON objects\tList of JSON objects containing other inputSpecs.\tyes For example: "ioConfig" : { "type" : "hadoop", "inputSpec" : { "type" : "multi", "children": [ { "type" : "dataSource", "ingestionSpec" : { "dataSource": "wikipedia", "intervals": ["2012-01-01T00:00:00.000/2012-01-03T00:00:00.000", "2012-01-05T00:00:00.000/2012-01-07T00:00:00.000"], "segments": [ { "dataSource": "test1", "interval": "2012-01-01T00:00:00.000/2012-01-03T00:00:00.000", "version": "v2", "loadSpec": { "type": "local", "path": "/tmp/index1.zip" }, "dimensions": "host", "metrics": "visited_sum,unique_hosts", "shardSpec": { "type": "none" }, "binaryVersion": 9, "size": 2, "identifier": "test1_2000-01-01T00:00:00.000Z_3000-01-01T00:00:00.000Z_v2" } ] } }, { "type" : "static", "paths": "/path/to/more/wikipedia/data/" } ] }, ... } It is STRONGLY RECOMMENDED to provide the list of segments in the dataSource inputSpec explicitly so that your delta ingestion task is idempotent. You can obtain that list of segments by making the following call to the Coordinator. POST /druid/coordinator/v1/metadata/datasources/{dataSourceName}/segments?full Request Body: [interval1, interval2,...] for example ["2012-01-01T00:00:00.000/2012-01-03T00:00:00.000", "2012-01-05T00:00:00.000/2012-01-07T00:00:00.000"] "},{"title":"tuningConfig","type":1,"pageTitle":"Hadoop-based ingestion","url":"/docs/27.0.0/ingestion/hadoop#tuningconfig","content":"The tuningConfig is optional and default parameters will be used if no tuningConfig is specified. Field\tType\tDescription\tRequiredworkingPath\tString\tThe working path to use for intermediate results (results between Hadoop jobs).\tOnly used by the Command-line Hadoop indexer. The default is '/tmp/druid-indexing'. This field must be null otherwise. version\tString\tThe version of created segments. Ignored for HadoopIndexTask unless useExplicitVersion is set to true\tno (default == datetime that indexing starts at) partitionsSpec\tObject\tA specification of how to partition each time bucket into segments. Absence of this property means no partitioning will occur. 
See partitionsSpec below.\tno (default == 'hashed') maxRowsInMemory\tInteger\tThe number of rows to aggregate before persisting. Note that this is the number of post-aggregation rows which may not be equal to the number of input events due to roll-up. This is used to manage the required JVM heap size. Normally you do not need to set this, but depending on the nature of the data, if rows are short in terms of bytes, you may not want to store a million rows in memory and should set this value accordingly.\tno (default == 1000000) maxBytesInMemory\tLong\tThe number of bytes to aggregate in heap memory before persisting. Normally this is computed internally and you do not need to set it. This is based on a rough estimate of memory usage and not actual usage. The maximum heap memory usage for indexing is maxBytesInMemory * (2 + maxPendingPersists). Note that maxBytesInMemory also includes heap usage of artifacts created from intermediary persists. This means that after every persist, the amount of maxBytesInMemory available until the next persist decreases, and the task will fail when the sum of bytes of all intermediary persisted artifacts exceeds maxBytesInMemory.\tno (default == One-sixth of max JVM memory) leaveIntermediate\tBoolean\tLeave behind intermediate files (for debugging) in the workingPath when a job completes, whether it passes or fails.\tno (default == false) cleanupOnFailure\tBoolean\tClean up intermediate files when a job fails (unless leaveIntermediate is on).\tno (default == true) overwriteFiles\tBoolean\tOverride existing files found during indexing.\tno (default == false) ignoreInvalidRows\tBoolean\tDEPRECATED. Ignore rows found to have problems. If false, any exception encountered during parsing will be thrown and will halt ingestion; if true, unparseable rows and fields will be skipped. If maxParseExceptions is defined, this property is ignored.\tno (default == false) combineText\tBoolean\tUse CombineTextInputFormat to combine multiple files into a file split. This can speed up Hadoop jobs when processing a large number of small files.\tno (default == false) useCombiner\tBoolean\tUse the Hadoop combiner to merge rows at the mapper if possible.\tno (default == false) jobProperties\tObject\tA map of properties to add to the Hadoop job configuration, see below for details.\tno (default == null) indexSpec\tObject\tTune how data is indexed. See indexSpec on the main ingestion page for more information.\tno indexSpecForIntermediatePersists\tObject\tDefines segment storage format options to be used at indexing time for intermediate persisted temporary segments. This can be used to disable dimension/metric compression on intermediate segments to reduce the memory required for final merging. However, disabling compression on intermediate segments might increase page cache use while they are in use before getting merged into the final published segment. See indexSpec for possible values.\tno (default = same as indexSpec) numBackgroundPersistThreads\tInteger\tThe number of new background threads to use for incremental persists. Using this feature causes a notable increase in memory pressure and CPU usage but will make the job finish more quickly. If changing from the default of 0 (use current thread for persists), we recommend setting it to 1.\tno (default == 0) forceExtendableShardSpecs\tBoolean\tForces use of extendable shardSpecs. Hash-based partitioning always uses an extendable shardSpec. For single-dimension partitioning, this option should be set to true to use an extendable shardSpec. 
For partitioning, please check Partitioning specification. This option can be useful when you need to append more data to existing dataSource.\tno (default = false) useExplicitVersion\tBoolean\tForces HadoopIndexTask to use version.\tno (default = false) logParseExceptions\tBoolean\tIf true, log an error message when a parsing exception occurs, containing information about the row where the error occurred.\tno(default = false) maxParseExceptions\tInteger\tThe maximum number of parse exceptions that can occur before the task halts ingestion and fails. Overrides ignoreInvalidRows if maxParseExceptions is defined.\tno(default = unlimited) useYarnRMJobStatusFallback\tBoolean\tIf the Hadoop jobs created by the indexing task are unable to retrieve their completion status from the JobHistory server, and this parameter is true, the indexing task will try to fetch the application status from http://<yarn-rm-address>/ws/v1/cluster/apps/<application-id>, where <yarn-rm-address> is the value of yarn.resourcemanager.webapp.address in your Hadoop configuration. This flag is intended as a fallback for cases where an indexing task's jobs succeed, but the JobHistory server is unavailable, causing the indexing task to fail because it cannot determine the job statuses.\tno (default = true) awaitSegmentAvailabilityTimeoutMillis\tLong\tMilliseconds to wait for the newly indexed segments to become available for query after ingestion completes. If <= 0, no wait will occur. If > 0, the task will wait for the Coordinator to indicate that the new segments are available for querying. If the timeout expires, the task will exit as successful, but the segments were not confirmed to have become available for query.\tno (default = 0) "},{"title":"jobProperties","type":1,"pageTitle":"Hadoop-based ingestion","url":"/docs/27.0.0/ingestion/hadoop#jobproperties","content":" "tuningConfig" : { "type": "hadoop", "jobProperties": { "<hadoop-property-a>": "<value-a>", "<hadoop-property-b>": "<value-b>" } } Hadoop's MapReduce documentation lists the possible configuration parameters. With some Hadoop distributions, it may be necessary to set mapreduce.job.classpath or mapreduce.job.user.classpath.firstto avoid class loading issues. See the working with different Hadoop versions documentationfor more details. "},{"title":"partitionsSpec","type":1,"pageTitle":"Hadoop-based ingestion","url":"/docs/27.0.0/ingestion/hadoop#partitionsspec","content":"Segments are always partitioned based on timestamp (according to the granularitySpec) and may be further partitioned in some other way depending on partition type. Druid supports two types of partitioning strategies: hashed (based on the hash of all dimensions in each row), and single_dim (based on ranges of a single dimension). Hashed partitioning is recommended in most cases, as it will improve indexing performance and create more uniformly sized data segments relative to single-dimension partitioning. "},{"title":"Hash-based partitioning","type":1,"pageTitle":"Hadoop-based ingestion","url":"/docs/27.0.0/ingestion/hadoop#hash-based-partitioning","content":" "partitionsSpec": { "type": "hashed", "targetRowsPerSegment": 5000000 } Hashed partitioning works by first selecting a number of segments, and then partitioning rows across those segments according to the hash of all dimensions in each row. The number of segments is determined automatically based on the cardinality of the input set and a target partition size. 
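As an illustrative sketch only (the dimension names here are hypothetical), a hashed partitionsSpec can instead fix the shard count directly using the numShards and partitionDimensions options described in the configuration table below: "partitionsSpec": { "type": "hashed", "numShards": 8, "partitionDimensions": ["host", "country"] }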
The configuration options are: Field\tDescription\tRequiredtype\tType of partitionSpec to be used.\t"hashed" targetRowsPerSegment\tTarget number of rows to include in a partition, should be a number that targets segments of 500MB~1GB. Defaults to 5000000 if numShards is not set.\teither this or numShards targetPartitionSize\tDeprecated. Renamed to targetRowsPerSegment. Target number of rows to include in a partition, should be a number that targets segments of 500MB~1GB.\teither this or numShards maxRowsPerSegment\tDeprecated. Renamed to targetRowsPerSegment. Target number of rows to include in a partition, should be a number that targets segments of 500MB~1GB.\teither this or numShards numShards\tSpecify the number of partitions directly, instead of a target partition size. Ingestion will run faster, since it can skip the step necessary to select a number of partitions automatically.\teither this or targetRowsPerSegment partitionDimensions\tThe dimensions to partition on. Leave blank to select all dimensions. Only used with numShards, will be ignored when targetRowsPerSegment is set.\tno partitionFunction\tA function to compute hash of partition dimensions. See Hash partition function\tmurmur3_32_abs Hash partition function In hash partitioning, the partition function is used to compute hash of partition dimensions. The partition dimension values are first serialized into a byte array as a whole, and then the partition function is applied to compute hash of the byte array. Druid currently supports only one partition function. name\tdescriptionmurmur3_32_abs\tApplies an absolute value function to the result of murmur3_32. "},{"title":"Single-dimension range partitioning","type":1,"pageTitle":"Hadoop-based ingestion","url":"/docs/27.0.0/ingestion/hadoop#single-dimension-range-partitioning","content":" "partitionsSpec": { "type": "single_dim", "targetRowsPerSegment": 5000000 } Single-dimension range partitioning works by first selecting a dimension to partition on, and then separating that dimension into contiguous ranges. Each segment will contain all rows with values of that dimension in that range. For example, your segments may be partitioned on the dimension "host" using the ranges "a.example.com" to "f.example.com" and "f.example.com" to "z.example.com". By default, the dimension to use is determined automatically, although you can override it with a specific dimension. The configuration options are: Field\tDescription\tRequiredtype\tType of partitionSpec to be used.\t"single_dim" targetRowsPerSegment\tTarget number of rows to include in a partition, should be a number that targets segments of 500MB~1GB.\tyes targetPartitionSize\tDeprecated. Renamed to targetRowsPerSegment. Target number of rows to include in a partition, should be a number that targets segments of 500MB~1GB.\tno maxRowsPerSegment\tMaximum number of rows to include in a partition. Defaults to 50% larger than the targetRowsPerSegment.\tno maxPartitionSize\tDeprecated. Use maxRowsPerSegment instead. Maximum number of rows to include in a partition. Defaults to 50% larger than the targetPartitionSize.\tno partitionDimension\tThe dimension to partition on. Leave blank to select a dimension automatically.\tno assumeGrouped\tAssume that input data has already been grouped on time and dimensions. 
Ingestion will run faster, but may choose sub-optimal partitions if this assumption is violated.\tno "},{"title":"Remote Hadoop clusters","type":1,"pageTitle":"Hadoop-based ingestion","url":"/docs/27.0.0/ingestion/hadoop#remote-hadoop-clusters","content":"If you have a remote Hadoop cluster, make sure to include the folder holding your configuration *.xml files in your Druid _common configuration folder. If you are having dependency problems with your version of Hadoop and the version compiled with Druid, please see these docs. "},{"title":"Elastic MapReduce","type":1,"pageTitle":"Hadoop-based ingestion","url":"/docs/27.0.0/ingestion/hadoop#elastic-mapreduce","content":"If your cluster is running on Amazon Web Services, you can use Elastic MapReduce (EMR) to index data from S3. To do this: Create a persistent, long-running cluster. When creating your cluster, enter the following configuration. If you're using the wizard, this should be in advanced mode under "Edit software settings": classification=yarn-site,properties=[mapreduce.reduce.memory.mb=6144,mapreduce.reduce.java.opts=-server -Xms2g -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps,mapreduce.map.memory.mb=758,mapreduce.map.java.opts=-server -Xms512m -Xmx512m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps,mapreduce.task.timeout=1800000] Follow the instructions under Configure for connecting to Hadoop using the XML files from /etc/hadoop/conf on your EMR master. "},{"title":"Kerberized Hadoop clusters","type":1,"pageTitle":"Hadoop-based ingestion","url":"/docs/27.0.0/ingestion/hadoop#kerberized-hadoop-clusters","content":"By default, Druid can use the existing TGT Kerberos ticket available in the local Kerberos key cache. However, the TGT ticket has a limited life cycle, so you need to run the kinit command periodically to keep it valid. To avoid an extra external cron job that calls kinit periodically, you can provide the principal name and keytab location, and Druid will do the authentication transparently at startup and job launching time. Property\tPossible Values\tDescription\tDefaultdruid.hadoop.security.kerberos.principal\tdruid@EXAMPLE.COM\tPrincipal user name\tempty druid.hadoop.security.kerberos.keytab\t/etc/security/keytabs/druid.headlessUser.keytab\tPath to keytab file\tempty "},{"title":"Loading from S3 with EMR","type":1,"pageTitle":"Hadoop-based ingestion","url":"/docs/27.0.0/ingestion/hadoop#loading-from-s3-with-emr","content":"In the jobProperties field in the tuningConfig section of your Hadoop indexing task, add: "jobProperties" : { "fs.s3.awsAccessKeyId" : "YOUR_ACCESS_KEY", "fs.s3.awsSecretAccessKey" : "YOUR_SECRET_KEY", "fs.s3.impl" : "org.apache.hadoop.fs.s3native.NativeS3FileSystem", "fs.s3n.awsAccessKeyId" : "YOUR_ACCESS_KEY", "fs.s3n.awsSecretAccessKey" : "YOUR_SECRET_KEY", "fs.s3n.impl" : "org.apache.hadoop.fs.s3native.NativeS3FileSystem", "io.compression.codecs" : "org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec" } Note that this method uses Hadoop's built-in S3 filesystem rather than Amazon's EMRFS, and is not compatible with Amazon-specific features such as S3 encryption and consistent views. If you need to use these features, you will need to make the Amazon EMR Hadoop JARs available to Druid through one of the mechanisms described in the Using other Hadoop distributions section. 
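For the Kerberized Hadoop clusters section above, a minimal sketch of how those two properties could appear in your Druid runtime properties (the principal and keytab values are just the placeholder examples from the table):
druid.hadoop.security.kerberos.principal=druid@EXAMPLE.COM
druid.hadoop.security.kerberos.keytab=/etc/security/keytabs/druid.headlessUser.keytab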
"},{"title":"Using other Hadoop distributions","type":1,"pageTitle":"Hadoop-based ingestion","url":"/docs/27.0.0/ingestion/hadoop#using-other-hadoop-distributions","content":"Druid works out of the box with many Hadoop distributions. If you are having dependency conflicts between Druid and your version of Hadoop, you can try searching for a solution in the Druid user groups, or reading the Druid Different Hadoop Versions documentation. "},{"title":"Command line (non-task) version","type":1,"pageTitle":"Hadoop-based ingestion","url":"/docs/27.0.0/ingestion/hadoop#command-line-non-task-version","content":"To run: java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/*:<hadoop_config_dir> org.apache.druid.cli.Main index hadoop <spec_file> "},{"title":"Options","type":1,"pageTitle":"Hadoop-based ingestion","url":"/docs/27.0.0/ingestion/hadoop#options","content":""--coordinate" - provide a version of Apache Hadoop to use. This property will override the default Hadoop coordinates. Once specified, Apache Druid will look for those Hadoop dependencies from the location specified by druid.extensions.hadoopDependenciesDir."--no-default-hadoop" - don't pull down the default hadoop version "},{"title":"Spec file","type":1,"pageTitle":"Hadoop-based ingestion","url":"/docs/27.0.0/ingestion/hadoop#spec-file","content":"The spec file needs to contain a JSON object where the contents are the same as the "spec" field in the Hadoop index task. See Hadoop Batch Ingestion for details on the spec format. In addition, a metadataUpdateSpec and segmentOutputPath field needs to be added to the ioConfig: "ioConfig" : { ... "metadataUpdateSpec" : { "type":"mysql", "connectURI" : "jdbc:mysql://localhost:3306/druid", "password" : "diurd", "segmentTable" : "druid_segments", "user" : "druid" }, "segmentOutputPath" : "/MyDirectory/data/index/output" }, and a workingPath field needs to be added to the tuningConfig: "tuningConfig" : { ... "workingPath": "/tmp", ... } Metadata Update Job Spec This is a specification of the properties that tell the job how to update metadata such that the Druid cluster will see the output segments and load them. Field\tType\tDescription\tRequiredtype\tString\t"metadata" is the only value available.\tyes connectURI\tString\tA valid JDBC url to metadata storage.\tyes user\tString\tUsername for db.\tyes password\tString\tpassword for db.\tyes segmentTable\tString\tTable to use in DB.\tyes These properties should parrot what you have configured for your Coordinator. segmentOutputPath Config Field\tType\tDescription\tRequiredsegmentOutputPath\tString\tthe path to dump segments into.\tyes workingPath Config Field\tType\tDescription\tRequiredworkingPath\tString\tthe working path to use for intermediate results (results between Hadoop jobs).\tno (default == '/tmp/druid-indexing') Please note that the command line Hadoop indexer doesn't have the locking capabilities of the indexing service, so if you choose to use it, you have to take caution to not override segments created by real-time processing (if you that a real-time pipeline set up). "},{"title":"Ingestion spec reference","type":0,"sectionRef":"#","url":"/docs/27.0.0/ingestion/ingestion-spec","content":"","keywords":""},{"title":"dataSchema","type":1,"pageTitle":"Ingestion spec reference","url":"/docs/27.0.0/ingestion/ingestion-spec#dataschema","content":"info The dataSchema spec has been changed in 0.17.0. The new spec is supported by all ingestion methods except for Hadoop ingestion. 
See the Legacy dataSchema spec for the old spec. The dataSchema is a holder for the following components: datasource nameprimary timestampdimensionsmetricstransforms and filters (if needed). An example dataSchema is: "dataSchema": { "dataSource": "wikipedia", "timestampSpec": { "column": "timestamp", "format": "auto" }, "dimensionsSpec": { "dimensions": [ "page", "language", { "type": "long", "name": "userId" } ] }, "metricsSpec": [ { "type": "count", "name": "count" }, { "type": "doubleSum", "name": "bytes_added_sum", "fieldName": "bytes_added" }, { "type": "doubleSum", "name": "bytes_deleted_sum", "fieldName": "bytes_deleted" } ], "granularitySpec": { "segmentGranularity": "day", "queryGranularity": "none", "intervals": [ "2013-08-31/2013-09-01" ] } } "},{"title":"dataSource","type":1,"pageTitle":"Ingestion spec reference","url":"/docs/27.0.0/ingestion/ingestion-spec#datasource","content":"The dataSource is located in dataSchema → dataSource and is simply the name of thedatasource that data will be written to. An exampledataSource is: "dataSource": "my-first-datasource" "},{"title":"timestampSpec","type":1,"pageTitle":"Ingestion spec reference","url":"/docs/27.0.0/ingestion/ingestion-spec#timestampspec","content":"The timestampSpec is located in dataSchema → timestampSpec and is responsible for configuring the primary timestamp. An example timestampSpec is: "timestampSpec": { "column": "timestamp", "format": "auto" } info Conceptually, after input data records are read, Druid applies ingestion spec components in a particular order: first flattenSpec (if any), then timestampSpec, then transformSpec, and finally dimensionsSpec and metricsSpec. Keep this in mind when writing your ingestion spec. A timestampSpec can have the following components: Field\tDescription\tDefaultcolumn\tInput row field to read the primary timestamp from. Regardless of the name of this input field, the primary timestamp will always be stored as a column named __time in your Druid datasource.\ttimestamp format\tTimestamp format. Options are: iso: ISO8601 with 'T' separator, like "2000-01-01T01:02:03.456"posix: seconds since epochmillis: milliseconds since epochmicro: microseconds since epochnano: nanoseconds since epochauto: automatically detects ISO (either 'T' or space separator) or millis formatany Joda DateTimeFormat string auto missingValue\tTimestamp to use for input records that have a null or missing timestamp column. Should be in ISO8601 format, like "2000-01-01T01:02:03.456", even if you have specified something else for format. Since Druid requires a primary timestamp, this setting can be useful for ingesting datasets that do not have any per-record timestamps at all.\tnone You can use the timestamp in a expression as __time because Druid parses the timestampSpec before applying transforms. You can also set the expression name to __time to replace the value of the timestamp. Treat __time as a millisecond timestamp: the number of milliseconds since Jan 1, 1970 at midnight UTC. "},{"title":"dimensionsSpec","type":1,"pageTitle":"Ingestion spec reference","url":"/docs/27.0.0/ingestion/ingestion-spec#dimensionsspec","content":"The dimensionsSpec is located in dataSchema → dimensionsSpec and is responsible for configuring dimensions. You can either manually specify the dimensions or take advantage of schema auto-discovery where you allow Druid to infer all or some of the schema for your data. This means that you don't have to explicitly specify your dimensions and their type. 
To use schema auto-discovery, set useSchemaDiscovery to true. Alternatively, you can use the string-based schemaless ingestion where any discovered dimensions are treated as strings. To do so, leave useSchemaDiscovery set to false (default). Then, set the dimensions list to empty or set the includeAllDimensions property to true. The following dimensionsSpec example uses schema auto-discovery ("useSchemaDiscovery": true) in conjunction with explicitly defined dimensions to have Druid infer some of the schema for the data: "dimensionsSpec" : { "dimensions": [ "page", "language", { "type": "long", "name": "userId" } ], "dimensionExclusions" : [], "spatialDimensions" : [], "useSchemaDiscovery": true } info Conceptually, after input data records are read, Druid applies ingestion spec components in a particular order: first flattenSpec (if any), then timestampSpec, then transformSpec, and finally dimensionsSpec and metricsSpec. Keep this in mind when writing your ingestion spec. A dimensionsSpec can have the following components: Field\tDescription\tDefaultdimensions\tA list of dimension names or objects. You cannot include the same column in both dimensions and dimensionExclusions. If dimensions and spatialDimensions are both null or empty arrays, Druid treats all columns other than timestamp or metrics that do not appear in dimensionExclusions as String-typed dimension columns. See inclusions and exclusions for details. As a best practice, put the most frequently filtered dimensions at the beginning of the dimensions list. In this case, it would also be good to consider partitioning by those same dimensions.\t[] dimensionExclusions\tThe names of dimensions to exclude from ingestion. Only names are supported here, not objects. This list is only used if the dimensions and spatialDimensions lists are both null or empty arrays; otherwise it is ignored. See inclusions and exclusions below for details.\t[] spatialDimensions\tAn array of spatial dimensions.\t[] includeAllDimensions\tNote that this field only applies to string-based schema discovery where Druid ingests dimensions it discovers as strings. This is different from schema auto-discovery where Druid infers the type for data. You can set includeAllDimensions to true to ingest both explicit dimensions in the dimensions field and other dimensions that the ingestion task discovers from input data. In this case, the explicit dimensions will appear first in the order that you specify them, and the dimensions dynamically discovered will come after. This flag can be useful especially with auto schema discovery using flattenSpec. If this is not set and the dimensions field is not empty, Druid will ingest only explicit dimensions. If this is not set and the dimensions field is empty, all discovered dimensions will be ingested.\tfalse useSchemaDiscovery\tConfigure Druid to use schema auto-discovery to discover some or all of the dimensions and types for your data. For any dimensions that aren't a uniform type, Druid ingests them as JSON. You can use this for native batch or streaming ingestion.\tfalse Dimension objects Each dimension in the dimensions list can either be a name or an object. Providing a name is equivalent to providing a string type dimension object with the given name, e.g. "page" is equivalent to {"name": "page", "type": "string"}. Dimension objects can have the following components: Field\tDescription\tDefaulttype\tEither auto, string, long, float, double, or json. 
For the auto type, Druid determines the most appropriate type for the dimension and assigns one of the following: STRING, ARRAY<STRING>, LONG, ARRAY<LONG>, DOUBLE, ARRAY<DOUBLE>, or COMPLEX<json> columns, all sharing a common 'nested' format. When Druid infers the schema with schema auto-discovery, the type is auto.\tstring name\tThe name of the dimension. This will be used as the field name to read from input records, as well as the column name stored in generated segments. Note that you can use a transformSpec if you want to rename columns during ingestion time.\tnone (required) createBitmapIndex\tFor string typed dimensions, whether or not bitmap indexes should be created for the column in generated segments. Creating a bitmap index requires more storage, but speeds up certain kinds of filtering (especially equality and prefix filtering). Only supported for string typed dimensions.\ttrue multiValueHandling\tFor string typed dimensions, specifies the type of handling for multi-value fields. Possible values are array (ingest string arrays as-is), sorted_array (sort string arrays during ingestion), and sorted_set (sort and de-duplicate string arrays during ingestion). This parameter is ignored for types other than string.\tsorted_array Inclusions and exclusions Druid will interpret a dimensionsSpec in two possible ways: normal or schemaless. Normal interpretation occurs when either dimensions or spatialDimensions is non-empty. In this case, the combination of the two lists will be taken as the set of dimensions to be ingested, and the list of dimensionExclusions will be ignored. info The following description of schemaless refers to string-based schemaless where Druid treats dimensions it discovers as strings. We recommend you use schema auto-discovery instead where Druid infers the type for the dimension. For more information, see dimensionsSpec. Schemaless interpretation occurs when both dimensions and spatialDimensions are empty or null. In this case, the set of dimensions is determined in the following way: First, start from the set of all root-level fields from the input record, as determined by the inputFormat. "Root-level" includes all fields at the top level of a data structure, but does not included fields nested within maps or lists. To extract these, you must use a flattenSpec. All fields of non-nested data formats, such as CSV and delimited text, are considered root-level.If a flattenSpec is being used, the set of root-level fields includes any fields generated by the flattenSpec. The useFieldDiscovery parameter determines whether the original root-level fields will be retained or discarded.Any field listed in dimensionExclusions is excluded.The field listed as column in the timestampSpec is excluded.Any field used as an input to an aggregator from the metricsSpec is excluded.Any field with the same name as an aggregator from the metricsSpec is excluded.All other fields are ingested as string typed dimensions with the default settings. Additionally, if you have empty columns that you want to include in the string-based schemaless ingestion, you'll need to include the context parameter storeEmptyColumns and set it to true. info Note: Fields generated by a transformSpec are not currently considered candidates for schemaless dimension interpretation. 
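As an illustrative sketch (the column names are hypothetical), a dimensionsSpec that combines typed dimension objects with the per-dimension options described above might look like: "dimensionsSpec": { "dimensions": [ { "type": "string", "name": "userAgent", "createBitmapIndex": false }, { "type": "string", "name": "tags", "multiValueHandling": "sorted_set" }, { "type": "long", "name": "userId" } ] }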
"},{"title":"metricsSpec","type":1,"pageTitle":"Ingestion spec reference","url":"/docs/27.0.0/ingestion/ingestion-spec#metricsspec","content":"The metricsSpec is located in dataSchema → metricsSpec and is a list of aggregatorsto apply at ingestion time. This is most useful when rollup is enabled, since it's how you configure ingestion-time aggregation. An example metricsSpec is: "metricsSpec": [ { "type": "count", "name": "count" }, { "type": "doubleSum", "name": "bytes_added_sum", "fieldName": "bytes_added" }, { "type": "doubleSum", "name": "bytes_deleted_sum", "fieldName": "bytes_deleted" } ] info Generally, when rollup is disabled, you should have an empty metricsSpec (because without rollup, Druid does not do any ingestion-time aggregation, so there is little reason to include an ingestion-time aggregator). However, in some cases, it can still make sense to define metrics: for example, if you want to create a complex column as a way of pre-computing part of an approximate aggregation, this can only be done by defining a metric in a metricsSpec. "},{"title":"granularitySpec","type":1,"pageTitle":"Ingestion spec reference","url":"/docs/27.0.0/ingestion/ingestion-spec#granularityspec","content":"The granularitySpec is located in dataSchema → granularitySpec and is responsible for configuring the following operations: Partitioning a datasource into time chunks (via segmentGranularity).Truncating the timestamp, if desired (via queryGranularity).Specifying which time chunks of segments should be created, for batch ingestion (via intervals).Specifying whether ingestion-time rollup should be used or not (via rollup). Other than rollup, these operations are all based on the primary timestamp. An example granularitySpec is: "granularitySpec": { "segmentGranularity": "day", "queryGranularity": "none", "intervals": [ "2013-08-31/2013-09-01" ], "rollup": true } A granularitySpec can have the following components: Field\tDescription\tDefaulttype\tuniform\tuniform segmentGranularity\tTime chunking granularity for this datasource. Multiple segments can be created per time chunk. For example, when set to day, the events of the same day fall into the same time chunk which can be optionally further partitioned into multiple segments based on other configurations and input size. Any granularity can be provided here. Note that all segments in the same time chunk should have the same segment granularity. Avoid WEEK granularity for data partitioning because weeks don't align neatly with months and years, making it difficult to change partitioning by coarser granularity. Instead, opt for other partitioning options such as DAY or MONTH, which offer more flexibility.\tday queryGranularity\tThe resolution of timestamp storage within each segment. This must be equal to, or finer, than segmentGranularity. This will be the finest granularity that you can query at and still receive sensible results, but note that you can still query at anything coarser than this granularity. E.g., a value of minute will mean that records will be stored at minutely granularity, and can be sensibly queried at any multiple of minutes (including minutely, 5-minutely, hourly, etc). Any granularity can be provided here. Use none to store timestamps as-is, without any truncation. Note that rollup will be applied if it is set even when the queryGranularity is set to none.\tnone rollup\tWhether to use ingestion-time rollup or not. Note that rollup is still effective even when queryGranularity is set to none. 
Your data will be rolled up if they have the exactly same timestamp.\ttrue intervals\tA list of intervals defining time chunks for segments. Specify interval values using ISO8601 format. For example, ["2021-12-06T21:27:10+00:00/2021-12-07T00:00:00+00:00"]. If you omit the time, the time defaults to "00:00:00". Druid breaks the list up and rounds off the list values based on the segmentGranularity. If null or not provided, batch ingestion tasks generally determine which time chunks to output based on the timestamps found in the input data. If specified, batch ingestion tasks may be able to skip a determining-partitions phase, which can result in faster ingestion. Batch ingestion tasks may also be able to request all their locks up-front instead of one by one. Batch ingestion tasks throw away any records with timestamps outside of the specified intervals. Ignored for any form of streaming ingestion.\tnull "},{"title":"transformSpec","type":1,"pageTitle":"Ingestion spec reference","url":"/docs/27.0.0/ingestion/ingestion-spec#transformspec","content":"The transformSpec is located in dataSchema → transformSpec and is responsible for transforming and filtering records during ingestion time. It is optional. An example transformSpec is: "transformSpec": { "transforms": [ { "type": "expression", "name": "countryUpper", "expression": "upper(country)" } ], "filter": { "type": "selector", "dimension": "country", "value": "San Serriffe" } } info Conceptually, after input data records are read, Druid applies ingestion spec components in a particular order: first flattenSpec (if any), then timestampSpec, then transformSpec, and finally dimensionsSpec and metricsSpec. Keep this in mind when writing your ingestion spec. Transforms The transforms list allows you to specify a set of expressions to evaluate on top of input data. Each transform has a "name" which can be referred to by your dimensionsSpec, metricsSpec, etc. If a transform has the same name as a field in an input row, then it will shadow the original field. Transforms that shadow fields may still refer to the fields they shadow. This can be used to transform a field "in-place". Transforms do have some limitations. They can only refer to fields present in the actual input rows; in particular, they cannot refer to other transforms. And they cannot remove fields, only add them. However, they can shadow a field with another field containing all nulls, which will act similarly to removing the field. Druid currently includes one kind of built-in transform, the expression transform. It has the following syntax: { "type": "expression", "name": "<output name>", "expression": "<expr>" } The expression is a Druid query expression. info Conceptually, after input data records are read, Druid applies ingestion spec components in a particular order: first flattenSpec (if any), then timestampSpec, then transformSpec, and finally dimensionsSpec and metricsSpec. Keep this in mind when writing your ingestion spec. Filter The filter conditionally filters input rows during ingestion. Only rows that pass the filter will be ingested. Any of Druid's standard query filters can be used. Note that within atransformSpec, the transforms are applied before the filter, so the filter can refer to a transform. "},{"title":"Legacy dataSchema spec","type":1,"pageTitle":"Ingestion spec reference","url":"/docs/27.0.0/ingestion/ingestion-spec#legacy-dataschema-spec","content":"info The dataSchema spec has been changed in 0.17.0. 
The new spec is supported by all ingestion methods except for Hadoop ingestion. See dataSchema for the new spec. The legacy dataSchema spec has the following two additional components beyond the ones listed in the dataSchema section above: input row parser, flattening of nested data (if needed) parser (Deprecated) In the legacy dataSchema, the parser is located in dataSchema → parser and is responsible for configuring a wide variety of items related to parsing input records. The parser is deprecated and it is highly recommended to use inputFormat instead. For details about inputFormat and supported parser types, see the "Data formats" page. For details about major components of the parseSpec, refer to their subsections: timestampSpec, responsible for configuring the primary timestamp.dimensionsSpec, responsible for configuring dimensions.flattenSpec, responsible for flattening nested data formats. An example parser is: "parser": { "type": "string", "parseSpec": { "format": "json", "flattenSpec": { "useFieldDiscovery": true, "fields": [ { "type": "path", "name": "userId", "expr": "$.user.id" } ] }, "timestampSpec": { "column": "timestamp", "format": "auto" }, "dimensionsSpec": { "dimensions": [ "page", "language", { "type": "long", "name": "userId" } ] } } } flattenSpec In the legacy dataSchema, the flattenSpec is located in dataSchema → parser → parseSpec → flattenSpec and is responsible for bridging the gap between potentially nested input data (such as JSON, Avro, etc.) and Druid's flat data model. See Flatten spec for more details. "},{"title":"ioConfig","type":1,"pageTitle":"Ingestion spec reference","url":"/docs/27.0.0/ingestion/ingestion-spec#ioconfig","content":"The ioConfig influences how data is read from a source system, such as Apache Kafka, Amazon S3, a mounted filesystem, or any other supported source system. The inputFormat property applies to all ingestion methods except for Hadoop ingestion. Hadoop ingestion still uses the parser in the legacy dataSchema. The rest of ioConfig is specific to each individual ingestion method. An example ioConfig to read JSON data is: "ioConfig": { "type": "<ingestion-method-specific type code>", "inputFormat": { "type": "json" }, ... } For more details, see the documentation provided by each ingestion method. "},{"title":"tuningConfig","type":1,"pageTitle":"Ingestion spec reference","url":"/docs/27.0.0/ingestion/ingestion-spec#tuningconfig","content":"Tuning properties are specified in a tuningConfig, which goes at the top level of an ingestion spec. Some properties apply to all ingestion methods, but most are specific to each individual ingestion method. An example tuningConfig that sets all of the shared, common properties to their defaults is: "tuningConfig": { "type": "<ingestion-method-specific type code>", "maxRowsInMemory": 1000000, "maxBytesInMemory": <one-sixth of JVM memory>, "indexSpec": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "metricCompression": "lz4", "longEncoding": "longs" }, <other ingestion-method-specific properties> } Field\tDescription\tDefaulttype\tEach ingestion method has its own tuning type code. You must specify the type code that matches your ingestion method. Common options are index, hadoop, kafka, and kinesis. maxRowsInMemory\tThe maximum number of records to store in memory before persisting to disk. Note that this is the number of rows post-rollup, and so it may not be equal to the number of input records. 
Ingested records will be persisted to disk when either maxRowsInMemory or maxBytesInMemory are reached (whichever happens first).\t1000000 maxBytesInMemory\tThe maximum aggregate size of records, in bytes, to store in the JVM heap before persisting. This is based on a rough estimate of memory usage. Ingested records will be persisted to disk when either maxRowsInMemory or maxBytesInMemory are reached (whichever happens first). maxBytesInMemory also includes heap usage of artifacts created from intermediary persists. This means that after every persist, the amount of maxBytesInMemory until the next persist will decrease. If the sum of bytes of all intermediary persisted artifacts exceeds maxBytesInMemory the task fails. Setting maxBytesInMemory to -1 disables this check, meaning Druid will rely entirely on maxRowsInMemory to control memory usage. Setting it to zero means the default value will be used (one-sixth of JVM heap size). Note that the estimate of memory usage is designed to be an overestimate, and can be especially high when using complex ingest-time aggregators, including sketches. If this causes your indexing workloads to persist to disk too often, you can set maxBytesInMemory to -1 and rely on maxRowsInMemory instead.\tOne-sixth of max JVM heap size skipBytesInMemoryOverheadCheck\tThe calculation of maxBytesInMemory takes into account overhead objects created during ingestion and each intermediate persist. Setting this to true can exclude the bytes of these overhead objects from maxBytesInMemory check.\tfalse indexSpec\tDefines segment storage format options to use at indexing time.\tSee indexSpec for more information. indexSpecForIntermediatePersists\tDefines segment storage format options to use at indexing time for intermediate persisted temporary segments.\tSee indexSpec for more information. Other properties\tEach ingestion method has its own list of additional tuning properties. See the documentation for each method for a full list: Kafka indexing service, Kinesis indexing service, Native batch, and Hadoop-based.\t "},{"title":"indexSpec","type":1,"pageTitle":"Ingestion spec reference","url":"/docs/27.0.0/ingestion/ingestion-spec#indexspec","content":"The indexSpec object can include the following properties: Field\tDescription\tDefaultbitmap\tCompression format for bitmap indexes. Should be a JSON object with type set to roaring or concise.\t{"type": "roaring"} dimensionCompression\tCompression format for dimension columns. Options are lz4, lzf, zstd, or uncompressed.\tlz4 stringDictionaryEncoding\tEncoding format for STRING value dictionaries used by STRING and COMPLEX<json> columns. Example to enable front coding: {"type":"frontCoded", "bucketSize": 4} bucketSize is the number of values to place in a bucket to perform delta encoding. Must be a power of 2, maximum is 128. Defaults to 4. formatVersion can specify older versions for backwards compatibility during rolling upgrades, valid options are 0 and 1. Defaults to 0 for backwards compatibility. See Front coding for more information.\t{"type":"utf8"} metricCompression\tCompression format for primitive type metric columns. Options are lz4, lzf, zstd, uncompressed, or none (which is more efficient than uncompressed, but not supported by older versions of Druid).\tlz4 longEncoding\tEncoding format for long-typed columns. Applies regardless of whether they are dimensions or metrics. Options are auto or longs. 
auto encodes the values using an offset or lookup table depending on column cardinality, and stores them with variable sizes. longs stores the value as-is with 8 bytes each.\tlongs jsonCompression\tCompression format to use for nested column raw data. Options are lz4, lzf, zstd, or uncompressed.\tlz4 Front coding Front coding is an experimental feature starting in version 25.0. Front coding is an incremental encoding strategy that Druid can use to store STRING and COMPLEX<json> columns. It allows Druid to create smaller UTF-8 encoded segments with very little performance cost. You can enable front coding with all types of ingestion. For information on defining an indexSpec in a query context, see SQL-based ingestion reference. info Front coding was originally introduced in Druid 25.0, and an improved 'version 1' was introduced in Druid 26.0, with typically faster read speed and smaller storage size. The current recommendation is to enable it in a staging environment and fully test your use case before using in production. By default, segments created with front coding enabled in Druid 26.0 are backwards compatible with Druid 25.0, but those created with Druid 26.0 or 25.0 are not compatible with Druid versions older than 25.0. If you are using front coding in Druid 25.0 and upgrading to Druid 26.0, the formatVersion defaults to 0 to keep writing out the older format and enable seamless downgrades to Druid 25.0; it is recommended to change it to 1 later, once you have determined that a rollback is not necessary. Beyond these properties, each ingestion method has its own specific tuning properties. See the documentation for each ingestion method for details. "},{"title":"Input sources","type":0,"sectionRef":"#","url":"/docs/27.0.0/ingestion/input-sources","content":"","keywords":""},{"title":"S3 input source","type":1,"pageTitle":"Input sources","url":"/docs/27.0.0/ingestion/input-sources#s3-input-source","content":"info You need to include the druid-s3-extensions as an extension to use the S3 input source. The S3 input source reads objects directly from S3. You can specify either: a list of S3 URI strings, or a list of S3 location prefixes that attempts to list the contents and ingest all objects contained within the locations. The S3 input source is splittable. Therefore, you can use it with the Parallel task. Each worker task of index_parallel reads one or multiple objects. Sample specs: ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "s3", "objectGlob": "**.json", "uris": ["s3://foo/bar/file.json", "s3://bar/foo/file2.json"] }, "inputFormat": { "type": "json" }, ... }, ... ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "s3", "objectGlob": "**.parquet", "prefixes": ["s3://foo/bar/", "s3://bar/foo/"] }, "inputFormat": { "type": "json" }, ... }, ... ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "s3", "objectGlob": "**.json", "objects": [ { "bucket": "foo", "path": "bar/file1.json"}, { "bucket": "bar", "path": "foo/file2.json"} ] }, "inputFormat": { "type": "json" }, ... }, ... ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "s3", "objectGlob": "**.json", "uris": ["s3://foo/bar/file.json", "s3://bar/foo/file2.json"], "properties": { "accessKeyId": "KLJ78979SDFdS2", "secretAccessKey": "KLS89s98sKJHKJKJH8721lljkd" } }, "inputFormat": { "type": "json" }, ... }, ... ... 
"ioConfig": { "type": "index_parallel", "inputSource": { "type": "s3", "objectGlob": "**.json", "uris": ["s3://foo/bar/file.json", "s3://bar/foo/file2.json"], "properties": { "accessKeyId": "KLJ78979SDFdS2", "secretAccessKey": "KLS89s98sKJHKJKJH8721lljkd", "assumeRoleArn": "arn:aws:iam::2981002874992:role/role-s3" } }, "inputFormat": { "type": "json" }, ... }, ... ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "s3", "uris": ["s3://foo/bar/file.json", "s3://bar/foo/file2.json"], "endpointConfig": { "url" : "s3-store.aws.com", "signingRegion" : "us-west-2" }, "clientConfig": { "protocol" : "http", "disableChunkedEncoding" : true, "enablePathStyleAccess" : true, "forceGlobalBucketAccessEnabled" : false }, "proxyConfig": { "host" : "proxy-s3.aws.com", "port" : 8888, "username" : "admin", "password" : "admin" }, "properties": { "accessKeyId": "KLJ78979SDFdS2", "secretAccessKey": "KLS89s98sKJHKJKJH8721lljkd", "assumeRoleArn": "arn:aws:iam::2981002874992:role/role-s3" } }, "inputFormat": { "type": "json" }, ... }, ... Property\tDescription\tDefault\tRequiredtype\tSet the value to s3.\tNone\tyes uris\tJSON array of URIs where S3 objects to be ingested are located.\tNone\turis or prefixes or objects must be set prefixes\tJSON array of URI prefixes for the locations of S3 objects to be ingested. Empty objects starting with one of the given prefixes will be skipped.\tNone\turis or prefixes or objects must be set objects\tJSON array of S3 Objects to be ingested.\tNone\turis or prefixes or objects must be set objectGlob\tA glob for the object part of the S3 URI. In the URI s3://foo/bar/file.json, the glob is applied to bar/file.json. The glob must match the entire object part, not just the filename. For example, the glob *.json does not match s3://foo/bar/file.json, because the object part is bar/file.json, and the* does not match the slash. To match all objects ending in .json, use **.json instead. For more information, refer to the documentation for FileSystem#getPathMatcher.\tNone\tno endpointConfig\tConfig for overriding the default S3 endpoint and signing region. This would allow ingesting data from a different S3 store. Please see s3 config for more information.\tNone\tNo (defaults will be used if not given) clientConfig\tS3 client properties for the overridden s3 endpoint. This is used in conjunction with endPointConfig. Please see s3 config for more information.\tNone\tNo (defaults will be used if not given) proxyConfig\tProperties for specifying proxy information for the overridden s3 endpoint. This is used in conjunction with clientConfig. Please see s3 config for more information.\tNone\tNo (defaults will be used if not given) properties\tProperties Object for overriding the default S3 configuration. See below for more information.\tNone\tNo (defaults will be used if not given) Note that the S3 input source will skip all empty objects only when prefixes is specified. S3 Object: Property\tDescription\tDefault\tRequiredbucket\tName of the S3 bucket\tNone\tyes path\tThe path where data is located.\tNone\tyes Properties Object: Property\tDescription\tDefault\tRequiredaccessKeyId\tThe Password Provider or plain text string of this S3 input source access key\tNone\tyes if secretAccessKey is given secretAccessKey\tThe Password Provider or plain text string of this S3 input source secret key\tNone\tyes if accessKeyId is given assumeRoleArn\tAWS ARN of the role to assume see. 
assumeRoleArn can be used either with the ingestion spec AWS credentials or with the default S3 credentials\tNone\tno assumeRoleExternalId\tA unique identifier that might be required when you assume a role in another account see\tNone\tno info Note: If accessKeyId and secretAccessKey are not given, the default S3 credentials provider chain is used. "},{"title":"Google Cloud Storage input source","type":1,"pageTitle":"Input sources","url":"/docs/27.0.0/ingestion/input-sources#google-cloud-storage-input-source","content":"info You need to include the druid-google-extensions as an extension to use the Google Cloud Storage input source. The Google Cloud Storage input source is to support reading objects directly from Google Cloud Storage. Objects can be specified as list of Google Cloud Storage URI strings. The Google Cloud Storage input source is splittable and can be used by the Parallel task, where each worker task of index_parallel will read one or multiple objects. Sample specs: ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "google", "objectGlob": "**.json", "uris": ["gs://foo/bar/file.json", "gs://bar/foo/file2.json"] }, "inputFormat": { "type": "json" }, ... }, ... ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "google", "objectGlob": "**.parquet", "prefixes": ["gs://foo/bar/", "gs://bar/foo/"] }, "inputFormat": { "type": "json" }, ... }, ... ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "google", "objectGlob": "**.json", "objects": [ { "bucket": "foo", "path": "bar/file1.json"}, { "bucket": "bar", "path": "foo/file2.json"} ] }, "inputFormat": { "type": "json" }, ... }, ... Property\tDescription\tDefault\tRequiredtype\tSet the value to google.\tNone\tyes uris\tJSON array of URIs where Google Cloud Storage objects to be ingested are located.\tNone\turis or prefixes or objects must be set prefixes\tJSON array of URI prefixes for the locations of Google Cloud Storage objects to be ingested. Empty objects starting with one of the given prefixes will be skipped.\tNone\turis or prefixes or objects must be set objects\tJSON array of Google Cloud Storage objects to be ingested.\tNone\turis or prefixes or objects must be set objectGlob\tA glob for the object part of the S3 URI. In the URI s3://foo/bar/file.json, the glob is applied to bar/file.json. The glob must match the entire object part, not just the filename. For example, the glob *.json does not match s3://foo/bar/file.json, because the object part is bar/file.json, and the* does not match the slash. To match all objects ending in .json, use **.json instead. For more information, refer to the documentation for FileSystem#getPathMatcher.\tNone\tno Note that the Google Cloud Storage input source will skip all empty objects only when prefixes is specified. Google Cloud Storage object: Property\tDescription\tDefault\tRequiredbucket\tName of the Google Cloud Storage bucket\tNone\tyes path\tThe path where data is located.\tNone\tyes "},{"title":"Azure input source","type":1,"pageTitle":"Input sources","url":"/docs/27.0.0/ingestion/input-sources#azure-input-source","content":"info You need to include the druid-azure-extensions as an extension to use the Azure input source. The Azure input source reads objects directly from Azure Blob store or Azure Data Lake sources. You can specify objects as a list of file URI strings or prefixes. You can split the Azure input source for use with Parallel task indexing and each worker task reads one chunk of the split data. Sample specs: ... 
"ioConfig": { "type": "index_parallel", "inputSource": { "type": "azure", "objectGlob": "**.json", "uris": ["azure://container/prefix1/file.json", "azure://container/prefix2/file2.json"] }, "inputFormat": { "type": "json" }, ... }, ... ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "azure", "objectGlob": "**.parquet", "prefixes": ["azure://container/prefix1/", "azure://container/prefix2/"] }, "inputFormat": { "type": "json" }, ... }, ... ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "azure", "objectGlob": "**.json", "objects": [ { "bucket": "container", "path": "prefix1/file1.json"}, { "bucket": "container", "path": "prefix2/file2.json"} ] }, "inputFormat": { "type": "json" }, ... }, ... Property\tDescription\tDefault\tRequiredtype\tSet the value to azure.\tNone\tyes uris\tJSON array of URIs where the Azure objects to be ingested are located, in the form azure://<container>/<path-to-file>\tNone\turis or prefixes or objects must be set prefixes\tJSON array of URI prefixes for the locations of Azure objects to ingest, in the form azure://<container>/<prefix>. Empty objects starting with one of the given prefixes are skipped.\tNone\turis or prefixes or objects must be set objects\tJSON array of Azure objects to ingest.\tNone\turis or prefixes or objects must be set objectGlob\tA glob for the object part of the S3 URI. In the URI s3://foo/bar/file.json, the glob is applied to bar/file.json. The glob must match the entire object part, not just the filename. For example, the glob *.json does not match s3://foo/bar/file.json, because the object part is bar/file.json, and the* does not match the slash. To match all objects ending in .json, use **.json instead. For more information, refer to the documentation for FileSystem#getPathMatcher.\tNone\tno Note that the Azure input source skips all empty objects only when prefixes is specified. The objects property is: Property\tDescription\tDefault\tRequiredbucket\tName of the Azure Blob Storage or Azure Data Lake container\tNone\tyes path\tThe path where data is located.\tNone\tyes "},{"title":"HDFS input source","type":1,"pageTitle":"Input sources","url":"/docs/27.0.0/ingestion/input-sources#hdfs-input-source","content":"info You need to include the druid-hdfs-storage as an extension to use the HDFS input source. The HDFS input source is to support reading files directly from HDFS storage. File paths can be specified as an HDFS URI string or a list of HDFS URI strings. The HDFS input source is splittable and can be used by the Parallel task, where each worker task of index_parallel will read one or multiple files. Sample specs: ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "hdfs", "paths": "hdfs://namenode_host/foo/bar/", "hdfs://namenode_host/bar/foo" }, "inputFormat": { "type": "json" }, ... }, ... ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "hdfs", "paths": "hdfs://namenode_host/foo/bar/", "hdfs://namenode_host/bar/foo" }, "inputFormat": { "type": "json" }, ... }, ... ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "hdfs", "paths": "hdfs://namenode_host/foo/bar/file.json", "hdfs://namenode_host/bar/foo/file2.json" }, "inputFormat": { "type": "json" }, ... }, ... ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "hdfs", "paths": ["hdfs://namenode_host/foo/bar/file.json", "hdfs://namenode_host/bar/foo/file2.json"] }, "inputFormat": { "type": "json" }, ... }, ... 
Property\tDescription\tDefault\tRequiredtype\tSet the value to hdfs.\tNone\tyes paths\tHDFS paths. Can be either a JSON array or comma-separated string of paths. Wildcards like * are supported in these paths. Empty files located under one of the given paths will be skipped.\tNone\tyes You can also ingest from other storage using the HDFS input source if the HDFS client supports that storage. However, if you want to ingest from cloud storage, consider using the service-specific input source for your data storage. If you want to use a non-hdfs protocol with the HDFS input source, include the protocol in druid.ingestion.hdfs.allowedProtocols. See HDFS input source security configuration for more details. "},{"title":"HTTP input source","type":1,"pageTitle":"Input sources","url":"/docs/27.0.0/ingestion/input-sources#http-input-source","content":"The HTTP input source is to support reading files directly from remote sites via HTTP. info Security notes: Ingestion tasks run under the operating system account that runs the Druid processes, for example the Indexer, Middle Manager, and Peon. This means any user who can submit an ingestion task can specify an input source referring to any location that the Druid process can access. For example, using http input source, users may have access to internal network servers. The http input source is not limited to the HTTP or HTTPS protocols. It uses the Java URI class that supports HTTP, HTTPS, FTP, file, and jar protocols by default. For more information about security best practices, see Security overview. The HTTP input source is splittable and can be used by the Parallel task, where each worker task of index_parallel will read only one file. This input source does not support Split Hint Spec. Sample specs: ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "http", "uris": ["http://example.com/uri1", "http://example2.com/uri2"] }, "inputFormat": { "type": "json" }, ... }, ... Example with authentication fields using the DefaultPassword provider (this requires the password to be in the ingestion spec): ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "http", "uris": ["http://example.com/uri1", "http://example2.com/uri2"], "httpAuthenticationUsername": "username", "httpAuthenticationPassword": "password123" }, "inputFormat": { "type": "json" }, ... }, ... You can also use the other existing Druid PasswordProviders. Here is an example using the EnvironmentVariablePasswordProvider: ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "http", "uris": ["http://example.com/uri1", "http://example2.com/uri2"], "httpAuthenticationUsername": "username", "httpAuthenticationPassword": { "type": "environment", "variable": "HTTP_INPUT_SOURCE_PW" } }, "inputFormat": { "type": "json" }, ... }, ... } Property\tDescription\tDefault\tRequiredtype\tSet the value to http.\tNone\tyes uris\tURIs of the input files. See below for the protocols allowed for URIs.\tNone\tyes httpAuthenticationUsername\tUsername to use for authentication with specified URIs. Can be optionally used if the URIs specified in the spec require a Basic Authentication Header.\tNone\tno httpAuthenticationPassword\tPasswordProvider to use with specified URIs. Can be optionally used if the URIs specified in the spec require a Basic Authentication Header.\tNone\tno You can only use protocols listed in the druid.ingestion.http.allowedProtocols property as HTTP input sources. The http and https protocols are allowed by default. 
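For example, to let the HTTP input source fetch over FTP as well (one of the protocols the underlying Java URI class supports), an operator could extend the allow list in the common runtime properties. This is only a sketch and assumes your security policy actually permits the extra protocol: druid.ingestion.http.allowedProtocols=["http", "https", "ftp"]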
See HTTP input source security configuration for more details. "},{"title":"Inline input source","type":1,"pageTitle":"Input sources","url":"/docs/27.0.0/ingestion/input-sources#inline-input-source","content":"The Inline input source can be used to read the data inlined in its own spec. It can be used for demos or for quickly testing out parsing and schema. Sample spec: ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "inline", "data": "0,values,formatted\\n1,as,CSV" }, "inputFormat": { "type": "csv" }, ... }, ... Property\tDescription\tRequiredtype\tSet the value to inline.\tyes data\tInlined data to ingest.\tyes "},{"title":"Local input source","type":1,"pageTitle":"Input sources","url":"/docs/27.0.0/ingestion/input-sources#local-input-source","content":"The Local input source supports reading files directly from local storage, and is mainly intended for proof-of-concept testing. The Local input source is splittable and can be used by the Parallel task, where each worker task of index_parallel will read one or multiple files. Sample spec: ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "local", "filter" : "*.csv", "baseDir": "/data/directory", "files": ["/bar/foo", "/foo/bar"] }, "inputFormat": { "type": "csv" }, ... }, ... Property\tDescription\tRequiredtype\tSet the value to local.\tyes filter\tA wildcard filter for files. See here for more information. Files matching the filter criteria are considered for ingestion. Files not matching the filter criteria are ignored.\tyes if baseDir is specified baseDir\tDirectory to search recursively for files to be ingested. Empty files under the baseDir will be skipped.\tAt least one of baseDir or files should be specified files\tFile paths to ingest. Some files can be ignored to avoid ingesting duplicate files if they are located under the specified baseDir. Empty files will be skipped.\tAt least one of baseDir or files should be specified "},{"title":"Druid input source","type":1,"pageTitle":"Input sources","url":"/docs/27.0.0/ingestion/input-sources#druid-input-source","content":"The Druid input source supports reading data directly from existing Druid segments, potentially using a new schema and changing the name, dimensions, metrics, rollup, etc. of the segment. The Druid input source is splittable and can be used by the Parallel task. This input source has a fixed input format for reading from Druid segments; no inputFormat field needs to be specified in the ingestion spec when using this input source. Property\tDescription\tRequiredtype\tSet the value to druid.\tyes dataSource\tA String defining the Druid datasource to fetch rows from\tyes interval\tA String representing an ISO-8601 interval, which defines the time range to fetch the data over.\tyes filter\tSee Filters. Only rows that match the filter, if specified, will be returned.\tno The Druid input source can be used for a variety of purposes, including: Creating new datasources that are rolled-up copies of existing datasources.Changing the partitioning or sorting of a datasource to improve performance.Updating or removing rows using a transformSpec. When using the Druid input source, the timestamp column shows up as a numeric field named __time set to the number of milliseconds since the epoch (January 1, 1970 00:00:00 UTC). It is common to use this in the timestampSpec, if you want the output timestamp to be equivalent to the input timestamp. In this case, set the timestamp column to __time and the format to auto or millis. 
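As a minimal sketch of that advice, a pass-through timestampSpec for reindexing reads the existing __time values as milliseconds: "timestampSpec": { "column": "__time", "format": "millis" } The reindexing example below uses this same pattern.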
It is OK for the input and output datasources to be the same. In this case, newly generated data will overwrite the previous data for the intervals specified in the granularitySpec. Generally, if you are going to do this, it is a good idea to test out your reindexing by writing to a separate datasource before overwriting your main one. Alternatively, if your goals can be satisfied by compaction, consider that instead as a simpler approach. An example task spec is shown below. It reads from a hypothetical raw datasource wikipedia_raw and creates a new rolled-up datasource wikipedia_rollup by grouping on hour, "countryName", and "page". { "type": "index_parallel", "spec": { "dataSchema": { "dataSource": "wikipedia_rollup", "timestampSpec": { "column": "__time", "format": "millis" }, "dimensionsSpec": { "dimensions": [ "countryName", "page" ] }, "metricsSpec": [ { "type": "count", "name": "cnt" } ], "granularitySpec": { "type": "uniform", "queryGranularity": "HOUR", "segmentGranularity": "DAY", "intervals": ["2016-06-27/P1D"], "rollup": true } }, "ioConfig": { "type": "index_parallel", "inputSource": { "type": "druid", "dataSource": "wikipedia_raw", "interval": "2016-06-27/P1D" } }, "tuningConfig": { "type": "index_parallel", "partitionsSpec": { "type": "hashed" }, "forceGuaranteedRollup": true, "maxNumConcurrentSubTasks": 1 } } } info Note: Older versions (0.19 and earlier) did not respect the timestampSpec when using the Druid input source. If you have ingestion specs that rely on this and cannot rewrite them, setdruid.indexer.task.ignoreTimestampSpecForDruidInputSourceto true to enable a compatibility mode where the timestampSpec is ignored. "},{"title":"SQL input source","type":1,"pageTitle":"Input sources","url":"/docs/27.0.0/ingestion/input-sources#sql-input-source","content":"The SQL input source is used to read data directly from RDBMS. The SQL input source is splittable and can be used by the Parallel task, where each worker task will read from one SQL query from the list of queries. This input source does not support Split Hint Spec. Since this input source has a fixed input format for reading events, no inputFormat field needs to be specified in the ingestion spec when using this input source. Please refer to the Recommended practices section below before using this input source. Property\tDescription\tRequiredtype\tSet the value to sql.\tYes database\tSpecifies the database connection details. The database type corresponds to the extension that supplies the connectorConfig support. The specified extension must be loaded into Druid: mysql-metadata-storage for mysql postgresql-metadata-storage extension for postgresql. You can selectively allow JDBC properties in connectURI. See JDBC connections security config for more details.\tYes foldCase\tToggle case folding of database column names. This may be enabled in cases where the database returns case insensitive column names in query results.\tNo sqls\tList of SQL queries where each SQL query would retrieve the data to be indexed.\tYes The following is an example of an SQL input source spec: ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "sql", "database": { "type": "mysql", "connectorConfig": { "connectURI": "jdbc:mysql://host:port/schema", "user": "user", "password": "password" } }, "sqls": ["SELECT * FROM table1 WHERE timestamp BETWEEN '2013-01-01 00:00:00' AND '2013-01-01 11:59:59'", "SELECT * FROM table2 WHERE timestamp BETWEEN '2013-01-01 00:00:00' AND '2013-01-01 11:59:59'"] } }, ... 
The spec above will read all events from two separate SQLs for the interval 2013-01-01/2013-01-02. Each of the SQL queries will be run in its own sub-task and thus for the above example, there would be two sub-tasks. Recommended practices Compared to the other native batch input sources, SQL input source behaves differently in terms of reading the input data. Therefore, consider the following points before using this input source in a production environment: During indexing, each sub-task would execute one of the SQL queries and the results are stored locally on disk. The sub-tasks then proceed to read the data from these local input files and generate segments. Presently, there isn’t any restriction on the size of the generated files and this would require the MiddleManagers or Indexers to have sufficient disk capacity based on the volume of data being indexed. Filtering the SQL queries based on the intervals specified in the granularitySpec can avoid unwanted data being retrieved and stored locally by the indexing sub-tasks. For example, if the intervals specified in the granularitySpec is ["2013-01-01/2013-01-02"] and the SQL query is SELECT * FROM table1, SqlInputSource will read all the data for table1 based on the query, even though only data between the intervals specified will be indexed into Druid. Pagination may be used on the SQL queries to ensure that each query pulls a similar amount of data, thereby improving the efficiency of the sub-tasks. Similar to file-based input formats, any updates to existing data will replace the data in segments specific to the intervals specified in the granularitySpec. "},{"title":"Combining input source","type":1,"pageTitle":"Input sources","url":"/docs/27.0.0/ingestion/input-sources#combining-input-source","content":"The Combining input source lets you read data from multiple input sources. It identifies the splits from delegate input sources and uses a worker task to process each split. Use the Combining input source only if all the delegates are splittable and can be used by the Parallel task. Similar to other input sources, the Combining input source supports a single inputFormat. Delegate input sources that require an inputFormat must have the same format for input data. If you include the Druid input source, the timestamp column is stored in the __time field. To correctly combine the data from the Druid input source with another source, ensure that other delegate input sources also store the timestamp column in __time. Property\tDescription\tRequiredtype\tSet the value to combining.\tYes delegates\tList of splittable input sources to read data from.\tYes The following is an example of a Combining input source spec: ... "ioConfig": { "type": "index_parallel", "inputSource": { "type": "combining", "delegates" : [ { "type": "local", "filter" : "*.csv", "baseDir": "/data/directory", "files": ["/bar/foo", "/foo/bar"] }, { "type": "druid", "dataSource": "wikipedia", "interval": "2013-01-01/2013-01-02" } ] }, "inputFormat": { "type": "csv" }, ... }, ... The secondary partitioning method determines the requisite number of concurrent worker tasks that run in parallel to complete ingestion with the Combining input source. Set this value in maxNumConcurrentSubTasks in tuningConfig based on the secondary partitioning method: range or single_dim partitioning: greater than or equal to 1hashed or dynamic partitioning: greater than or equal to 2 For more information on the maxNumConcurrentSubTasks field, see Implementation considerations. 
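For example, pairing the Combining input source spec above with dynamic secondary partitioning, the matching tuningConfig could look like the following sketch; the value 2 simply satisfies the minimum stated above and should be tuned to your cluster capacity: "tuningConfig": { "type": "index_parallel", "partitionsSpec": { "type": "dynamic" }, "maxNumConcurrentSubTasks": 2 }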
"},{"title":"Data rollup","type":0,"sectionRef":"#","url":"/docs/27.0.0/ingestion/rollup","content":"","keywords":""},{"title":"Maximizing rollup ratio","type":1,"pageTitle":"Data rollup","url":"/docs/27.0.0/ingestion/rollup#maximizing-rollup-ratio","content":"To measure the rollup ratio of a datasource, compare the number of rows in Druid (COUNT) with the number of ingested events. For example, run a Druid SQL query where "num_rows" refers to a count-type metric generated at ingestion time as follows: SELECT SUM("num_rows") / (COUNT(*) * 1.0) FROM datasource The higher the result, the greater the benefit you gain from rollup. See Counting the number of ingested events for more details about how counting works with rollup is enabled. Tips for maximizing rollup: Design your schema with fewer dimensions and lower cardinality dimensions to yield better rollup ratios.Use sketches to avoid storing high cardinality dimensions, which decrease rollup ratios.Adjust your queryGranularity at ingestion time to increase the chances that multiple rows in Druid having matching timestamps. For example, use five minute query granularity (PT5M) instead of one minute (PT1M).You can optionally load the same data into more than one Druid datasource. For example: Create a "full" datasource that has rollup disabled, or enabled, but with a minimal rollup ratio.Create a second "abbreviated" datasource with fewer dimensions and a higher rollup ratio. When queries only involve dimensions in the "abbreviated" set, use the second datasource to reduce query times. Often, this method only requires a small increase in storage footprint because abbreviated datasources tend to be substantially smaller. If you use a best-effort rollup ingestion configuration that does not guarantee perfect rollup, try one of the following: Switch to a guaranteed perfect rollup option.Reindex or compact your data in the background after initial ingestion. "},{"title":"Perfect rollup vs best-effort rollup","type":1,"pageTitle":"Data rollup","url":"/docs/27.0.0/ingestion/rollup#perfect-rollup-vs-best-effort-rollup","content":"Depending on the ingestion method, Druid has the following rollup options: Guaranteed perfect rollup: Druid perfectly aggregates input data at ingestion time.Best-effort rollup: Druid may not perfectly aggregate input data. Therefore, multiple segments might contain rows with the same timestamp and dimension values. In general, ingestion methods that offer best-effort rollup do this for one of the following reasons: The ingestion method parallelizes ingestion without a shuffling step required for perfect rollup.The ingestion method uses incremental publishing which means it finalizes and publishes segments before all data for a time chunk has been received. In both of these cases, records that could theoretically be rolled up may end up in different segments. All types of streaming ingestion run in this mode. Ingestion methods that guarantee perfect rollup use an additional preprocessing step to determine intervals and partitioning before data ingestion. This preprocessing step scans the entire input dataset. While this step increases the time required for ingestion, it provides information necessary for perfect rollup. The following table shows how each method handles rollup: Method\tHow it worksNative batch\tindex_parallel and index type may be either perfect or best-effort, based on configuration. SQL-based batch\tAlways perfect. Hadoop\tAlways perfect. Kafka indexing service\tAlways best-effort. 
Kinesis indexing service\tAlways best-effort. "},{"title":"Learn more","type":1,"pageTitle":"Data rollup","url":"/docs/27.0.0/ingestion/rollup#learn-more","content":"See the following topic for more information: Rollup tutorial for an example of how to configure rollup, and of how the feature modifies your data. "},{"title":"JSON-based batch","type":0,"sectionRef":"#","url":"/docs/27.0.0/ingestion/native-batch","content":"","keywords":""},{"title":"Submit an indexing task","type":1,"pageTitle":"JSON-based batch","url":"/docs/27.0.0/ingestion/native-batch#submit-an-indexing-task","content":"To run either kind of native batch indexing task, you can: Use the Load Data UI in the web console to define and submit an ingestion spec.Define an ingestion spec in JSON based upon the examples and reference topics for batch indexing. Then POST the ingestion spec to the Tasks API endpoint, /druid/indexer/v1/task, on the Overlord service. Alternatively you can use the indexing script included with Druid at bin/post-index-task. "},{"title":"Parallel task indexing","type":1,"pageTitle":"JSON-based batch","url":"/docs/27.0.0/ingestion/native-batch#parallel-task-indexing","content":"The parallel task type index_parallel is a task for multi-threaded batch indexing. Parallel task indexing only relies on Druid resources. It does not depend on other external systems like Hadoop. The index_parallel task is a supervisor task that orchestrates the whole indexing process. The supervisor task splits the input data and creates worker tasks to process the individual portions of data. Druid issues the worker tasks to the Overlord. The Overlord schedules and runs the workers on MiddleManagers or Indexers. After a worker task successfully processes the assigned input portion, it reports the resulting segment list to the supervisor task. The supervisor task periodically checks the status of worker tasks. If a task fails, the supervisor retries the task until the number of retries reaches the configured limit. If all worker tasks succeed, it publishes the reported segments at once and finalizes ingestion. The detailed behavior of the parallel task is different depending on the partitionsSpec. See partitionsSpec for more details. Parallel tasks require: a splittable inputSource in the ioConfig. For a list of supported splittable input formats, see Splittable input sources.maxNumConcurrentSubTasks set to a value greater than 1 in the tuningConfig. Otherwise tasks run sequentially; the index_parallel task reads each input file one by one and creates segments by itself. "},{"title":"Supported compression formats","type":1,"pageTitle":"JSON-based batch","url":"/docs/27.0.0/ingestion/native-batch#supported-compression-formats","content":"Native batch ingestion supports the following compression formats: bz2, gz, xz, zip, sz (Snappy), and zst (ZSTD). "},{"title":"Implementation considerations","type":1,"pageTitle":"JSON-based batch","url":"/docs/27.0.0/ingestion/native-batch#implementation-considerations","content":"This section covers implementation details to consider when you implement parallel task ingestion. Volume control for worker tasks You can control the amount of input data each worker task processes using different configurations depending on the phase in parallel ingestion. 
See partitionsSpec for details about how partitioning affects data volume for tasks.For the tasks that read data from the inputSource, you can set the Split hint spec in the tuningConfig.For the tasks that merge shuffled segments, you can set the totalNumMergeTasks in the tuningConfig. Number of running tasks The maxNumConcurrentSubTasks in the tuningConfig determines the number of concurrent worker tasks that run in parallel. The supervisor task checks the number of currently running worker tasks and creates more if it's smaller than maxNumConcurrentSubTasks regardless of the number of available task slots. This may affect the performance of other ingestion tasks. See the Capacity planning section for more details. Replacing or appending data By default, batch ingestion replaces all data in the intervals in your granularitySpec for any segment that it writes to. If you want to add to the segment instead, set the appendToExisting flag in the ioConfig. Batch ingestion only replaces data in segments where it actively adds data. If there are segments in the intervals for your granularitySpec that do not have data from a task, they remain unchanged. If any existing segments partially overlap with the intervals in the granularitySpec, the portion of those segments outside the interval for the new spec remains visible. Fully replacing existing segments using tombstones You can set the dropExisting flag in the ioConfig to true if you want the ingestion task to replace all existing segments that start and end within the intervals for your granularitySpec. This applies whether or not the new data covers all existing segments. dropExisting only applies when appendToExisting is false and the granularitySpec contains an interval. WARNING: this functionality is still in beta. The following examples demonstrate when to set the dropExisting property to true in the ioConfig: Consider an existing segment with an interval of 2020-01-01 to 2021-01-01 and YEAR segmentGranularity. You want to overwrite the whole interval of 2020-01-01 to 2021-01-01 with new data using the finer segmentGranularity of MONTH. If the replacement data does not have a record within every month from 2020-01-01 to 2021-01-01, Druid cannot drop the original YEAR segment even though its interval covers all the replacement data. Set dropExisting to true in this case to replace the original segment at YEAR segmentGranularity since you no longer need it. Imagine you want to re-ingest or overwrite a datasource and the new data does not contain some time intervals that exist in the datasource. For example, a datasource contains the following data at MONTH segmentGranularity: January: 1 record February: 10 records March: 10 records You want to re-ingest and overwrite with new data as follows: January: 0 records February: 10 records March: 9 records Unless you set dropExisting to true, the result after ingestion with overwrite using the same MONTH segmentGranularity would be: January: 1 record February: 10 records March: 9 records This may not be what you expect, since the new data has 0 records for January. Set dropExisting to true to replace the unneeded January segment with a tombstone. 
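As a sketch of the January example above, the re-ingestion task's ioConfig would combine dropExisting set to true with appendToExisting set to false, while the granularitySpec (not shown) lists the interval being overwritten; the inputSource here is only a placeholder: "ioConfig": { "type": "index_parallel", "inputSource": { ... }, "inputFormat": { "type": "json" }, "appendToExisting": false, "dropExisting": true }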
"},{"title":"Parallel indexing example","type":1,"pageTitle":"JSON-based batch","url":"/docs/27.0.0/ingestion/native-batch#parallel-indexing-example","content":"The following example illustrates the configuration for a parallel indexing task: { "type": "index_parallel", "spec": { "dataSchema": { "dataSource": "wikipedia_parallel_index_test", "timestampSpec": { "column": "timestamp" }, "dimensionsSpec": { "dimensions": [ "country", "page", "language", "user", "unpatrolled", "newPage", "robot", "anonymous", "namespace", "continent", "region", "city" ] }, "metricsSpec": [ { "type": "count", "name": "count" }, { "type": "doubleSum", "name": "added", "fieldName": "added" }, { "type": "doubleSum", "name": "deleted", "fieldName": "deleted" }, { "type": "doubleSum", "name": "delta", "fieldName": "delta" } ], "granularitySpec": { "segmentGranularity": "DAY", "queryGranularity": "second", "intervals": [ "2013-08-31/2013-09-02" ] } }, "ioConfig": { "type": "index_parallel", "inputSource": { "type": "local", "baseDir": "examples/indexing/", "filter": "wikipedia_index_data*" }, "inputFormat": { "type": "json" } }, "tuningConfig": { "type": "index_parallel", "partitionsSpec": { "type": "single_dim", "partitionDimension": "country", "targetRowsPerSegment": 5000000 }, "maxNumConcurrentSubTasks": 2 } } } "},{"title":"Parallel indexing configuration","type":1,"pageTitle":"JSON-based batch","url":"/docs/27.0.0/ingestion/native-batch#parallel-indexing-configuration","content":"The following table defines the primary sections of the input spec: |property|description|required?| |--------|-----------|---------| |type|The task type. For parallel task indexing, set the value to index_parallel.|yes| |id|The task ID. If omitted, Druid generates the task ID using the task type, data source name, interval, and date-time stamp. |no| |spec|The ingestion spec that defines the data schema, IO config, and tuning config.|yes| |context|Context to specify various task configuration parameters. See Task context parameters for more details.|no| "},{"title":"dataSchema","type":1,"pageTitle":"JSON-based batch","url":"/docs/27.0.0/ingestion/native-batch#dataschema","content":"This field is required. In general, it defines the way that Druid will store your data: the primary timestamp column, the dimensions, metrics, and any transformations. For an overview, see Ingestion Spec DataSchema. When defining the granularitySpec for index parallel, consider the defining intervals explicitly if you know the time range of the data. This way locking failure happens faster and Druid won't accidentally replace data outside the interval range some rows contain unexpected timestamps. The reasoning is as follows: If you explicitly define intervals, batch ingestion locks all intervals specified when it starts up. Problems with locking become evident quickly when multiple ingestion or indexing tasks try to obtain a lock on the same interval. For example, if a Kafka ingestion task tries to obtain a lock on a locked interval causing the ingestion task fail. Furthermore, if there are rows outside the specified intervals, Druid drops them, avoiding conflict with unexpected intervals.If you do not define intervals, batch ingestion locks each interval when the interval is discovered. In this case if the task overlaps with a higher-priority task, issues with conflicting locks occur later in the ingestion process. Also if the source data includes rows with unexpected timestamps, they may caused unexpected locking of intervals. 
"},{"title":"ioConfig","type":1,"pageTitle":"JSON-based batch","url":"/docs/27.0.0/ingestion/native-batch#ioconfig","content":"property\tdescription\tdefault\trequired?type\tThe task type. Set to the value to index_parallel.\tnone\tyes inputFormat\tinputFormat to specify how to parse input data.\tnone\tyes appendToExisting\tCreates segments as additional shards of the latest version, effectively appending to the segment set instead of replacing it. This means that you can append new segments to any datasource regardless of its original partitioning scheme. You must use the dynamic partitioning type for the appended segments. If you specify a different partitioning type, the task fails with an error.\tfalse\tno dropExisting\tIf true and appendToExisting is false and the granularitySpec contains aninterval, then the ingestion task replaces all existing segments fully contained by the specified interval when the task publishes new segments. If ingestion fails, Druid does not change any existing segment. In the case of misconfiguration where either appendToExisting is true or interval is not specified in granularitySpec, Druid does not replace any segments even if dropExisting is true. WARNING: this functionality is still in beta.\tfalse\tno "},{"title":"tuningConfig","type":1,"pageTitle":"JSON-based batch","url":"/docs/27.0.0/ingestion/native-batch#tuningconfig","content":"The tuningConfig is optional and default parameters will be used if no tuningConfig is specified. See below for more details. property\tdescription\tdefault\trequired?type\tThe task type. Set the value toindex_parallel.\tnone\tyes maxRowsInMemory\tDetermines when Druid should perform intermediate persists to disk. Normally you do not need to set this. Depending on the nature of your data, if rows are short in terms of bytes. For example, you may not want to store a million rows in memory. In this case, set this value.\t1000000\tno maxBytesInMemory\tUse to determine when Druid should perform intermediate persists to disk. Normally Druid computes this internally and you do not need to set it. This value represents number of bytes to aggregate in heap memory before persisting. This is based on a rough estimate of memory usage and not actual usage. The maximum heap memory usage for indexing is maxBytesInMemory * (2 + maxPendingPersists). Note that maxBytesInMemory also includes heap usage of artifacts created from intermediary persists. This means that after every persist, the amount of maxBytesInMemory until next persist will decrease. Tasks fail when the sum of bytes of all intermediary persisted artifacts exceeds maxBytesInMemory.\t1/6 of max JVM memory\tno maxColumnsToMerge\tLimit of the number of segments to merge in a single phase when merging segments for publishing. This limit affects the total number of columns present in a set of segments to merge. If the limit is exceeded, segment merging occurs in multiple phases. Druid merges at least 2 segments per phase, regardless of this setting.\t-1 (unlimited)\tno maxTotalRows\tDeprecated. Use partitionsSpec instead. Total number of rows in segments waiting to be pushed. Used to determine when intermediate pushing should occur.\t20000000\tno numShards\tDeprecated. Use partitionsSpec instead. Directly specify the number of shards to create when using a hashed partitionsSpec. 
If this is specified and intervals is specified in the granularitySpec, the index task can skip the determine intervals/partitions pass through the data.\tnull\tno splitHintSpec\tHint to control the amount of data that each first phase task reads. Druid may ignore the hint depending on the implementation of the input source. See Split hint spec for more details.\tsize-based split hint spec\tno partitionsSpec\tDefines how to partition data in each timeChunk, see PartitionsSpec\tdynamic if forceGuaranteedRollup = false, hashed or single_dim if forceGuaranteedRollup = true\tno indexSpec\tDefines segment storage format options to be used at indexing time, see IndexSpec\tnull\tno indexSpecForIntermediatePersists\tDefines segment storage format options to use at indexing time for intermediate persisted temporary segments. You can use this configuration to disable dimension/metric compression on intermediate segments to reduce memory required for final merging. However, if you disable compression on intermediate segments, page cache use may increase while intermediate segments are in use before Druid merges them into the final published segments. See IndexSpec for possible values.\tsame as indexSpec\tno maxPendingPersists\tMaximum number of pending persists that have not yet started. If a new intermediate persist would exceed this limit, ingestion blocks until the currently-running persist finishes. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists).\t0 (meaning one persist can be running concurrently with ingestion, and none can be queued up)\tno forceGuaranteedRollup\tForces perfect rollup. The perfect rollup optimizes the total size of generated segments and querying time but increases indexing time. If true, specify intervals in the granularitySpec and use either hashed, single_dim, or range for the partitionsSpec. You cannot use this flag in conjunction with appendToExisting of IOConfig. For more details, see Segment pushing modes.\tfalse\tno reportParseExceptions\tIf true, Druid throws exceptions encountered during parsing and halts ingestion. If false, Druid skips unparseable rows and fields.\tfalse\tno pushTimeout\tMilliseconds to wait to push segments. Must be >= 0, where 0 means to wait forever.\t0\tno segmentWriteOutMediumFactory\tSegment write-out medium to use when creating segments. See SegmentWriteOutMediumFactory.\tIf not specified, uses the value from druid.peon.defaultSegmentWriteOutMediumFactory.type\tno maxNumConcurrentSubTasks\tMaximum number of worker tasks that can run in parallel at the same time. The supervisor task spawns worker tasks up to maxNumConcurrentSubTasks regardless of the currently available task slots. If this value is 1, the supervisor task processes data ingestion on its own instead of spawning worker tasks. If this value is set too large, the supervisor may create too many worker tasks, which can block other ingestion tasks. See Capacity planning for more details.\t1\tno maxRetry\tMaximum number of retries on task failures.\t3\tno maxNumSegmentsToMerge\tMax limit for the number of segments that a single task can merge at the same time in the second phase. 
Used only when forceGuaranteedRollup is true.\t100\tno totalNumMergeTasks\tTotal number of tasks that merge segments in the merge phase when partitionsSpec is set to hashed or single_dim.\t10\tno taskStatusCheckPeriodMs\tPolling period in milliseconds to check running task statuses.\t1000\tno chatHandlerTimeout\tTimeout for reporting the pushed segments in worker tasks.\tPT10S\tno chatHandlerNumRetries\tRetries for reporting the pushed segments in worker tasks.\t5\tno awaitSegmentAvailabilityTimeoutMillis\tLong\tMilliseconds to wait for the newly indexed segments to become available for query after ingestion completes. If <= 0, no wait occurs. If > 0, the task waits for the Coordinator to indicate that the new segments are available for querying. If the timeout expires, the task exits as successful, but the segments are not confirmed as available for query.\tno (default = 0) "},{"title":"Split Hint Spec","type":1,"pageTitle":"JSON-based batch","url":"/docs/27.0.0/ingestion/native-batch#split-hint-spec","content":"The split hint spec is used to help the supervisor task divide input sources. Each worker task processes a single input division. You can control the amount of data each worker task reads during the first phase. Size-based Split Hint Spec The size-based split hint spec affects all splittable input sources except for the HTTP input source and SQL input source. property\tdescription\tdefault\trequired?type\tSet the value to maxSize.\tnone\tyes maxSplitSize\tMaximum number of bytes of input files to process in a single subtask. If a single file is larger than the limit, Druid processes the file alone in a single subtask. Druid does not split files across tasks. One subtask will not process more files than maxNumFiles even when their total size is smaller than maxSplitSize. Human-readable format is supported.\t1GiB\tno maxNumFiles\tMaximum number of input files to process in a single subtask. This limit avoids task failures when the ingestion spec is too long. There are two known limits on the max size of a serialized ingestion spec: the max ZNode size in ZooKeeper (jute.maxbuffer) and the max packet size in MySQL (max_allowed_packet). These limits can cause ingestion tasks to fail if the serialized ingestion spec size hits one of them. One subtask will not process more data than maxSplitSize even when the total number of files is smaller than maxNumFiles.\t1000\tno Segments Split Hint Spec The segments split hint spec is used only for DruidInputSource. property\tdescription\tdefault\trequired?type\tSet the value to segments.\tnone\tyes maxInputSegmentBytesPerTask\tMaximum number of bytes of input segments to process in a single subtask. If a single segment is larger than this number, Druid processes the segment alone in a single subtask. Druid never splits input segments across tasks. A single subtask will not process more segments than maxNumSegments even when their total size is smaller than maxInputSegmentBytesPerTask. Human-readable format is supported.\t1GiB\tno maxNumSegments\tMaximum number of input segments to process in a single subtask. This limit avoids failures due to the ingestion spec being too long. There are two known limits on the max size of a serialized ingestion spec: the max ZNode size in ZooKeeper (jute.maxbuffer) and the max packet size in MySQL (max_allowed_packet). These limits can make ingestion tasks fail when the serialized ingestion spec size hits one of them. 
A single subtask will not process more data than maxInputSegmentBytesPerTask even when the total number of segments is smaller than maxNumSegments.\t1000\tno "},{"title":"partitionsSpec","type":1,"pageTitle":"JSON-based batch","url":"/docs/27.0.0/ingestion/native-batch#partitionsspec","content":"The primary partition for Druid is time. You can define a secondary partitioning method in the partitions spec. Use the partitionsSpec type that applies for your rollup method. For perfect rollup, you can use: hashed partitioning based on the hash value of specified dimensions for each rowsingle_dim based on ranges of values for a single dimensionrange based on ranges of values of multiple dimensions. For best-effort rollup, use dynamic. For an overview, see Partitioning. The partitionsSpec types have different characteristics. PartitionsSpec\tIngestion speed\tPartitioning method\tSupported rollup mode\tSecondary partition pruning at query timedynamic\tFastest\tDynamic partitioning based on the number of rows in a segment.\tBest-effort rollup\tN/A hashed\tModerate\tMultiple dimension hash-based partitioning may reduce both your datasource size and query latency by improving data locality. See Partitioning for more details.\tPerfect rollup\tThe broker can use the partition information to prune segments early to speed up queries. Since the broker knows how to hash partitionDimensions values to locate a segment, given a query including a filter on all the partitionDimensions, the broker can pick up only the segments holding the rows satisfying the filter on partitionDimensions for query processing. Note that partitionDimensions must be set at ingestion time to enable secondary partition pruning at query time. single_dim\tSlower\tSingle dimension range partitioning may reduce your datasource size and query latency by improving data locality. See Partitioning for more details.\tPerfect rollup\tThe broker can use the partition information to prune segments early to speed up queries. Since the broker knows the range of partitionDimension values in each segment, given a query including a filter on the partitionDimension, the broker can pick up only the segments holding the rows satisfying the filter on partitionDimension for query processing. range\tSlowest\tMultiple dimension range partitioning may reduce your datasource size and query latency by improving data locality. See Partitioning for more details.\tPerfect rollup\tThe broker can use the partition information to prune segments early to speed up queries. Since the broker knows the range of partitionDimensions values within each segment, given a query including a filter on the first of the partitionDimensions, the broker can pick up only the segments holding the rows satisfying the filter on the first partition dimension for query processing. Dynamic partitioning property\tdescription\tdefault\trequired?type\tSet the value to dynamic.\tnone\tyes maxRowsPerSegment\tUsed in sharding. Determines how many rows are in each segment.\t5000000\tno maxTotalRows\tTotal number of rows across all segments waiting for being pushed. Used in determining when intermediate segment push should occur.\t20000000\tno With the Dynamic partitioning, the parallel index task runs in a single phase: it spawns multiple worker tasks (type single_phase_sub_task), each of which creates segments. 
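For instance, a dynamic partitionsSpec tuned toward smaller segments might look like the following sketch; the row counts here are hypothetical and should be adjusted to your data: "partitionsSpec": { "type": "dynamic", "maxRowsPerSegment": 3000000, "maxTotalRows": 20000000 }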
How the worker task creates segments: Whenever the number of rows in the current segment exceedsmaxRowsPerSegment.When the total number of rows in all segments across all time chunks reaches to maxTotalRows. At this point the task pushes all segments created so far to the deep storage and creates new ones. Hash-based partitioning property\tdescription\tdefault\trequired?type\tSet the value to hashed.\tnone\tyes numShards\tDirectly specify the number of shards to create. If this is specified and intervals is specified in the granularitySpec, the index task can skip the determine intervals/partitions pass through the data. This property and targetRowsPerSegment cannot both be set.\tnone\tno targetRowsPerSegment\tA target row count for each partition. If numShards is left unspecified, the Parallel task will determine a partition count automatically such that each partition has a row count close to the target, assuming evenly distributed keys in the input data. A target per-segment row count of 5 million is used if both numShards and targetRowsPerSegment are null.\tnull (or 5,000,000 if both numShards and targetRowsPerSegment are null)\tno partitionDimensions\tThe dimensions to partition on. Leave blank to select all dimensions.\tnull\tno partitionFunction\tA function to compute hash of partition dimensions. See Hash partition function\tmurmur3_32_abs\tno The Parallel task with hash-based partitioning is similar to MapReduce. The task runs in up to 3 phases: partial dimension cardinality, partial segment generation and partial segment merge. The partial dimension cardinality phase is an optional phase that only runs if numShards is not specified. The Parallel task splits the input data and assigns them to worker tasks based on the split hint spec. Each worker task (type partial_dimension_cardinality) gathers estimates of partitioning dimensions cardinality for each time chunk. The Parallel task will aggregate these estimates from the worker tasks and determine the highest cardinality across all of the time chunks in the input data, dividing this cardinality by targetRowsPerSegment to automatically determine numShards.In the partial segment generation phase, just like the Map phase in MapReduce, the Parallel task splits the input data based on the split hint spec and assigns each split to a worker task. Each worker task (type partial_index_generate) reads the assigned split, and partitions rows by the time chunk from segmentGranularity (primary partition key) in the granularitySpecand then by the hash value of partitionDimensions (secondary partition key) in the partitionsSpec. The partitioned data is stored in local storage of the middleManager or the indexer.The partial segment merge phase is similar to the Reduce phase in MapReduce. The Parallel task spawns a new set of worker tasks (type partial_index_generic_merge) to merge the partitioned data created in the previous phase. Here, the partitioned data is shuffled based on the time chunk and the hash value of partitionDimensions to be merged; each worker task reads the data falling in the same time chunk and the same hash value from multiple MiddleManager/Indexer processes and merges them to create the final segments. Finally, they push the final segments to the deep storage at once. Hash partition function In hash partitioning, the partition function is used to compute hash of partition dimensions. 
The partition dimension values are first serialized into a byte array as a whole, and then the partition function is applied to compute hash of the byte array. Druid currently supports only one partition function. name\tdescriptionmurmur3_32_abs\tApplies an absolute value function to the result of murmur3_32. Single-dimension range partitioning info Single dimension range partitioning is not supported in the sequential mode of the index_parallel task type. Range partitioning has several benefits related to storage footprint and query performance. The Parallel task will use one subtask when you set maxNumConcurrentSubTasks to 1. When you use this technique to partition your data, segment sizes may be unequally distributed if the data in your partitionDimension is also unequally distributed. Therefore, to avoid imbalance in data layout, review the distribution of values in your source data before deciding on a partitioning strategy. Range partitioning is not possible on multi-value dimensions. If the providedpartitionDimension is multi-value, your ingestion job will report an error. property\tdescription\tdefault\trequired?type\tSet the value to single_dim.\tnone\tyes partitionDimension\tThe dimension to partition on. Only rows with a single dimension value are allowed.\tnone\tyes targetRowsPerSegment\tTarget number of rows to include in a partition, should be a number that targets segments of 500MB~1GB.\tnone\teither this or maxRowsPerSegment maxRowsPerSegment\tSoft max for the number of rows to include in a partition.\tnone\teither this or targetRowsPerSegment assumeGrouped\tAssume that input data has already been grouped on time and dimensions. Ingestion will run faster, but may choose sub-optimal partitions if this assumption is violated.\tfalse\tno With single-dim partitioning, the Parallel task runs in 3 phases, i.e., partial dimension distribution, partial segment generation, and partial segment merge. The first phase is to collect some statistics to find the best partitioning and the other 2 phases are to create partial segments and to merge them, respectively, as in hash-based partitioning. In the partial dimension distribution phase, the Parallel task splits the input data and assigns them to worker tasks based on the split hint spec. Each worker task (type partial_dimension_distribution) reads the assigned split and builds a histogram for partitionDimension. The Parallel task collects those histograms from worker tasks and finds the best range partitioning based on partitionDimension to evenly distribute rows across partitions. Note that either targetRowsPerSegmentor maxRowsPerSegment will be used to find the best partitioning.In the partial segment generation phase, the Parallel task spawns new worker tasks (type partial_range_index_generate) to create partitioned data. Each worker task reads a split created as in the previous phase, partitions rows by the time chunk from the segmentGranularity (primary partition key) in the granularitySpecand then by the range partitioning found in the previous phase. The partitioned data is stored in local storage of the middleManager or the indexer.In the partial segment merge phase, the parallel index task spawns a new set of worker tasks (type partial_index_generic_merge) to merge the partitioned data created in the previous phase. 
Here, the partitioned data is shuffled based on the time chunk and the value of partitionDimension; each worker task reads the segments falling in the same partition of the same range from multiple MiddleManager/Indexer processes and merges them to create the final segments. Finally, they push the final segments to the deep storage. info Because the task with single-dimension range partitioning makes two passes over the input in partial dimension distribution and partial segment generation phases, the task may fail if the input changes in between the two passes. Multi-dimension range partitioning info Multi-dimension range partitioning is not supported in the sequential mode of the index_parallel task type. Range partitioning has several benefits related to storage footprint and query performance. Multi-dimension range partitioning improves over single-dimension range partitioning by allowing Druid to distribute segment sizes more evenly, and to prune on more dimensions. Range partitioning is not possible on multi-value dimensions. If one of the providedpartitionDimensions is multi-value, your ingestion job will report an error. property\tdescription\tdefault\trequired?type\tSet the value to range.\tnone\tyes partitionDimensions\tAn array of dimensions to partition on. Order the dimensions from most frequently queried to least frequently queried. For best results, limit your number of dimensions to between three and five dimensions.\tnone\tyes targetRowsPerSegment\tTarget number of rows to include in a partition, should be a number that targets segments of 500MB~1GB.\tnone\teither this or maxRowsPerSegment maxRowsPerSegment\tSoft max for the number of rows to include in a partition.\tnone\teither this or targetRowsPerSegment assumeGrouped\tAssume that input data has already been grouped on time and dimensions. Ingestion will run faster, but may choose sub-optimal partitions if this assumption is violated.\tfalse\tno Benefits of range partitioning Range partitioning, either single_dim or range, has several benefits: Lower storage footprint due to combining similar data into the same segments, which improves compressibility.Better query performance due to Broker-level segment pruning, which removes segments from consideration when they cannot possibly contain data matching the query filter. For Broker-level segment pruning to be effective, you must include partition dimensions in the WHERE clause. Each partition dimension can participate in pruning if the prior partition dimensions (those to its left) are also participating, and if the query uses filters that support pruning. Filters that support pruning include: Equality on string literals, like x = 'foo' and x IN ('foo', 'bar') where x is a string.Comparison between string columns and string literals, like x < 'foo' or other comparisons involving <, >, <=, or >=. For example, if you configure the following range partitioning during ingestion: "partitionsSpec": { "type": "range", "partitionDimensions": ["countryName", "cityName"], "targetRowsPerSegment": 5000000 } Then, filters like WHERE countryName = 'United States' or WHERE countryName = 'United States' AND cityName = 'New York'can make use of pruning. However, WHERE cityName = 'New York' cannot make use of pruning, because countryName is not involved. The clause WHERE cityName LIKE 'New%' cannot make use of pruning either, because LIKE filters do not support pruning. 
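Putting that example in context, a tuningConfig built around the range spec above could look like the following sketch; it assumes forceGuaranteedRollup is enabled, since range partitioning is one of the perfect rollup options, and uses a modest maxNumConcurrentSubTasks that you should tune to your cluster: "tuningConfig": { "type": "index_parallel", "forceGuaranteedRollup": true, "partitionsSpec": { "type": "range", "partitionDimensions": ["countryName", "cityName"], "targetRowsPerSegment": 5000000 }, "maxNumConcurrentSubTasks": 2 }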
"},{"title":"HTTP status endpoints","type":1,"pageTitle":"JSON-based batch","url":"/docs/27.0.0/ingestion/native-batch#http-status-endpoints","content":"The supervisor task provides some HTTP endpoints to get running status. http://{PEON_IP}:{PEON_PORT}/druid/worker/v1/chat/{SUPERVISOR_TASK_ID}/mode Returns 'parallel' if the indexing task is running in parallel. Otherwise, it returns 'sequential'. http://{PEON_IP}:{PEON_PORT}/druid/worker/v1/chat/{SUPERVISOR_TASK_ID}/phase Returns the name of the current phase if the task running in the parallel mode. http://{PEON_IP}:{PEON_PORT}/druid/worker/v1/chat/{SUPERVISOR_TASK_ID}/progress Returns the estimated progress of the current phase if the supervisor task is running in the parallel mode. An example of the result is { "running":10, "succeeded":0, "failed":0, "complete":0, "total":10, "estimatedExpectedSucceeded":10 } http://{PEON_IP}:{PEON_PORT}/druid/worker/v1/chat/{SUPERVISOR_TASK_ID}/subtasks/running Returns the task IDs of running worker tasks, or an empty list if the supervisor task is running in the sequential mode. http://{PEON_IP}:{PEON_PORT}/druid/worker/v1/chat/{SUPERVISOR_TASK_ID}/subtaskspecs Returns all worker task specs, or an empty list if the supervisor task is running in the sequential mode. http://{PEON_IP}:{PEON_PORT}/druid/worker/v1/chat/{SUPERVISOR_TASK_ID}/subtaskspecs/running Returns running worker task specs, or an empty list if the supervisor task is running in the sequential mode. http://{PEON_IP}:{PEON_PORT}/druid/worker/v1/chat/{SUPERVISOR_TASK_ID}/subtaskspecs/complete Returns complete worker task specs, or an empty list if the supervisor task is running in the sequential mode. http://{PEON_IP}:{PEON_PORT}/druid/worker/v1/chat/{SUPERVISOR_TASK_ID}/subtaskspec/{SUB_TASK_SPEC_ID} Returns the worker task spec of the given id, or HTTP 404 Not Found error if the supervisor task is running in the sequential mode. http://{PEON_IP}:{PEON_PORT}/druid/worker/v1/chat/{SUPERVISOR_TASK_ID}/subtaskspec/{SUB_TASK_SPEC_ID}/state Returns the state of the worker task spec of the given id, or HTTP 404 Not Found error if the supervisor task is running in the sequential mode. The returned result contains the worker task spec, a current task status if exists, and task attempt history. 
An example of the result is { "spec": { "id": "index_parallel_lineitem_2018-04-20T22:12:43.610Z_2", "groupId": "index_parallel_lineitem_2018-04-20T22:12:43.610Z", "supervisorTaskId": "index_parallel_lineitem_2018-04-20T22:12:43.610Z", "context": null, "inputSplit": { "split": "/path/to/data/lineitem.tbl.5" }, "ingestionSpec": { "dataSchema": { "dataSource": "lineitem", "timestampSpec": { "column": "l_shipdate", "format": "yyyy-MM-dd" }, "dimensionsSpec": { "dimensions": [ "l_orderkey", "l_partkey", "l_suppkey", "l_linenumber", "l_returnflag", "l_linestatus", "l_shipdate", "l_commitdate", "l_receiptdate", "l_shipinstruct", "l_shipmode", "l_comment" ] }, "metricsSpec": [ { "type": "count", "name": "count" }, { "type": "longSum", "name": "l_quantity", "fieldName": "l_quantity", "expression": null }, { "type": "doubleSum", "name": "l_extendedprice", "fieldName": "l_extendedprice", "expression": null }, { "type": "doubleSum", "name": "l_discount", "fieldName": "l_discount", "expression": null }, { "type": "doubleSum", "name": "l_tax", "fieldName": "l_tax", "expression": null } ], "granularitySpec": { "type": "uniform", "segmentGranularity": "YEAR", "queryGranularity": { "type": "none" }, "rollup": true, "intervals": [ "1980-01-01T00:00:00.000Z/2020-01-01T00:00:00.000Z" ] }, "transformSpec": { "filter": null, "transforms": [] } }, "ioConfig": { "type": "index_parallel", "inputSource": { "type": "local", "baseDir": "/path/to/data/", "filter": "lineitem.tbl.5" }, "inputFormat": { "type": "tsv", "delimiter": "|", "columns": [ "l_orderkey", "l_partkey", "l_suppkey", "l_linenumber", "l_quantity", "l_extendedprice", "l_discount", "l_tax", "l_returnflag", "l_linestatus", "l_shipdate", "l_commitdate", "l_receiptdate", "l_shipinstruct", "l_shipmode", "l_comment" ] }, "appendToExisting": false, "dropExisting": false }, "tuningConfig": { "type": "index_parallel", "partitionsSpec": { "type": "dynamic" }, "maxRowsInMemory": 1000000, "maxTotalRows": 20000000, "numShards": null, "indexSpec": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "metricCompression": "lz4", "longEncoding": "longs" }, "indexSpecForIntermediatePersists": { "bitmap": { "type": "roaring" }, "dimensionCompression": "lz4", "metricCompression": "lz4", "longEncoding": "longs" }, "maxPendingPersists": 0, "reportParseExceptions": false, "pushTimeout": 0, "segmentWriteOutMediumFactory": null, "maxNumConcurrentSubTasks": 4, "maxRetry": 3, "taskStatusCheckPeriodMs": 1000, "chatHandlerTimeout": "PT10S", "chatHandlerNumRetries": 5, "logParseExceptions": false, "maxParseExceptions": 2147483647, "maxSavedParseExceptions": 0, "forceGuaranteedRollup": false } } }, "currentStatus": { "id": "index_sub_lineitem_2018-04-20T22:16:29.922Z", "type": "index_sub", "createdTime": "2018-04-20T22:16:29.925Z", "queueInsertionTime": "2018-04-20T22:16:29.929Z", "statusCode": "RUNNING", "duration": -1, "location": { "host": null, "port": -1, "tlsPort": -1 }, "dataSource": "lineitem", "errorMsg": null }, "taskHistory": [] } http://{PEON_IP}:{PEON_PORT}/druid/worker/v1/chat/{SUPERVISOR_TASK_ID}/subtaskspec/{SUB_TASK_SPEC_ID}/history Returns the task attempt history of the worker task spec of the given id, or HTTP 404 Not Found error if the supervisor task is running in the sequential mode. "},{"title":"Segment pushing modes","type":1,"pageTitle":"JSON-based batch","url":"/docs/27.0.0/ingestion/native-batch#segment-pushing-modes","content":"While ingesting data using the parallel task indexing, Druid creates segments from the input data and pushes them. 
For segment pushing, the parallel task index supports the following segment pushing modes based upon your type of rollup: Bulk pushing mode: Used for perfect rollup. Druid pushes every segment at the very end of the index task. Until then, Druid stores created segments in memory and local storage of the service running the index task. To enable bulk pushing mode, set forceGuaranteedRollup to true in your tuning config. You cannot use bulk pushing with appendToExisting in your IOConfig. Incremental pushing mode: Used for best-effort rollup. Druid pushes segments incrementally during the course of the indexing task. The index task collects data and stores created segments in the memory and disks of the services running the task until the total number of collected rows exceeds maxTotalRows. At that point the index task immediately pushes all segments created up until that moment, cleans up pushed segments, and continues to ingest the remaining data. "},{"title":"Capacity planning","type":1,"pageTitle":"JSON-based batch","url":"/docs/27.0.0/ingestion/native-batch#capacity-planning","content":"The supervisor task can create up to maxNumConcurrentSubTasks worker tasks no matter how many task slots are currently available. As a result, the total number of tasks that can run at the same time is (maxNumConcurrentSubTasks + 1) (including the supervisor task). Please note that this can be even larger than the total number of task slots (sum of the capacity of all workers). If maxNumConcurrentSubTasks is larger than n (available task slots), then maxNumConcurrentSubTasks tasks are created by the supervisor task, but only n tasks would be started. Others will wait in the pending state until any running task is finished. If you use the Parallel Index Task together with stream ingestion, we recommend limiting the max capacity for batch ingestion so that stream ingestion is not blocked by batch ingestion. Suppose you have t Parallel Index Tasks to run at the same time, but want to limit the max number of tasks for batch ingestion to b. Then, (sum of maxNumConcurrentSubTasks of all Parallel Index Tasks + t (for supervisor tasks)) must be smaller than b. For example, if you run two Parallel Index Tasks (t = 2), each with maxNumConcurrentSubTasks set to 4, batch ingestion can occupy up to 4 + 4 + 2 = 10 task slots, so b must be larger than 10. If you have some tasks of a higher priority than others, you may set their maxNumConcurrentSubTasks to a higher value than lower priority tasks. This may help the higher priority tasks to finish earlier than lower priority tasks by assigning more task slots to them. "},{"title":"Splittable input sources","type":1,"pageTitle":"JSON-based batch","url":"/docs/27.0.0/ingestion/native-batch#splittable-input-sources","content":"Use the inputSource object to define the location where your index can read data. Only the native parallel task and simple task support the input source. For details on available input sources see: S3 input source (s3) reads data from AWS S3 storage.Google Cloud Storage input source (gs) reads data from Google Cloud Storage.Azure input source (azure) reads data from Azure Blob Storage and Azure Data Lake.HDFS input source (hdfs) reads data from HDFS storage.HTTP input Source (http) reads data from HTTP servers.Inline input Source reads data you paste into the web console.Local input Source (local) reads data from local storage.Druid input Source (druid) reads data from a Druid datasource.SQL input Source (sql) reads data from an RDBMS source. For information on how to combine input sources, see Combining input source. 
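As a hedged illustration of one of these options, an ioConfig for the parallel task that reads from the HTTP input source might look like the following sketch (the URI and file name are hypothetical): "ioConfig": { "type": "index_parallel", "inputSource": { "type": "http", "uris": ["https://example.com/data/events.json.gz"] }, "inputFormat": { "type": "json" }, "appendToExisting": false } The same overall shape applies to the other splittable input sources; only the inputSource object changes. 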
"},{"title":"segmentWriteOutMediumFactory","type":1,"pageTitle":"JSON-based batch","url":"/docs/27.0.0/ingestion/native-batch#segmentwriteoutmediumfactory","content":"Field\tType\tDescription\tRequiredtype\tString\tSee Additional Peon Configuration: SegmentWriteOutMediumFactory for explanation and available options.\tyes "},{"title":"JSON-based batch ingestion with firehose (Deprecated)","type":0,"sectionRef":"#","url":"/docs/27.0.0/ingestion/native-batch-firehose","content":"","keywords":""},{"title":"StaticS3Firehose","type":1,"pageTitle":"JSON-based batch ingestion with firehose (Deprecated)","url":"/docs/27.0.0/ingestion/native-batch-firehose#statics3firehose","content":"You need to include the druid-s3-extensions as an extension to use the StaticS3Firehose. This firehose ingests events from a predefined list of S3 objects. This firehose is splittable and can be used by the Parallel task. Since each split represents an object in this firehose, each worker task of index_parallel will read an object. Sample spec: "firehose" : { "type" : "static-s3", "uris": ["s3://foo/bar/file.gz", "s3://bar/foo/file2.gz"] } This firehose provides caching and prefetching features. In the Simple task, a firehose can be read twice if intervals or shardSpecs are not specified, and, in this case, caching can be useful. Prefetching is preferred when direct scan of objects is slow. Note that prefetching or caching isn't that useful in the Parallel task. property\tdescription\tdefault\trequired?type\tThis should be static-s3.\tNone\tyes uris\tJSON array of URIs where s3 files to be ingested are located.\tNone\turis or prefixes must be set prefixes\tJSON array of URI prefixes for the locations of s3 files to be ingested.\tNone\turis or prefixes must be set maxCacheCapacityBytes\tMaximum size of the cache space in bytes. 0 means disabling cache. Cached files are not removed until the ingestion task completes.\t1073741824\tno maxFetchCapacityBytes\tMaximum size of the fetch space in bytes. 0 means disabling prefetch. Prefetched files are removed immediately once they are read.\t1073741824\tno prefetchTriggerBytes\tThreshold to trigger prefetching s3 objects.\tmaxFetchCapacityBytes / 2\tno fetchTimeout\tTimeout for fetching an s3 object.\t60000\tno maxFetchRetry\tMaximum retry for fetching an s3 object.\t3\tno "},{"title":"StaticGoogleBlobStoreFirehose","type":1,"pageTitle":"JSON-based batch ingestion with firehose (Deprecated)","url":"/docs/27.0.0/ingestion/native-batch-firehose#staticgoogleblobstorefirehose","content":"You need to include the druid-google-extensions as an extension to use the StaticGoogleBlobStoreFirehose. This firehose ingests events, similar to the StaticS3Firehose, but from an Google Cloud Store. As with the S3 blobstore, it is assumed to be gzipped if the extension ends in .gz This firehose is splittable and can be used by the Parallel task. Since each split represents an object in this firehose, each worker task of index_parallel will read an object. Sample spec: "firehose" : { "type" : "static-google-blobstore", "blobs": [ { "bucket": "foo", "path": "/path/to/your/file.json" }, { "bucket": "bar", "path": "/another/path.json" } ] } This firehose provides caching and prefetching features. In the Simple task, a firehose can be read twice if intervals or shardSpecs are not specified, and, in this case, caching can be useful. Prefetching is preferred when direct scan of objects is slow. Note that prefetching or caching isn't that useful in the Parallel task. 
property\tdescription\tdefault\trequired?type\tThis should be static-google-blobstore.\tNone\tyes blobs\tJSON array of Google Blobs.\tNone\tyes maxCacheCapacityBytes\tMaximum size of the cache space in bytes. 0 means disabling cache. Cached files are not removed until the ingestion task completes.\t1073741824\tno maxFetchCapacityBytes\tMaximum size of the fetch space in bytes. 0 means disabling prefetch. Prefetched files are removed immediately once they are read.\t1073741824\tno prefetchTriggerBytes\tThreshold to trigger prefetching Google Blobs.\tmaxFetchCapacityBytes / 2\tno fetchTimeout\tTimeout for fetching a Google Blob.\t60000\tno maxFetchRetry\tMaximum retry for fetching a Google Blob.\t3\tno Google Blobs: property\tdescription\tdefault\trequired?bucket\tName of the Google Cloud bucket\tNone\tyes path\tThe path where data is located.\tNone\tyes "},{"title":"HDFSFirehose","type":1,"pageTitle":"JSON-based batch ingestion with firehose (Deprecated)","url":"/docs/27.0.0/ingestion/native-batch-firehose#hdfsfirehose","content":"You need to include the druid-hdfs-storage as an extension to use the HDFSFirehose. This firehose ingests events from a predefined list of files from the HDFS storage. This firehose is splittable and can be used by the Parallel task. Since each split represents an HDFS file, each worker task of index_parallel will read files. Sample spec: "firehose" : { "type" : "hdfs", "paths": "/foo/bar,/foo/baz" } This firehose provides caching and prefetching features. During native batch indexing, a firehose can be read twice ifintervals are not specified, and, in this case, caching can be useful. Prefetching is preferred when direct scanning of files is slow. Note that prefetching or caching isn't that useful in the Parallel task. Property\tDescription\tDefaulttype\tThis should be hdfs.\tnone (required) paths\tHDFS paths. Can be either a JSON array or comma-separated string of paths. Wildcards like * are supported in these paths.\tnone (required) maxCacheCapacityBytes\tMaximum size of the cache space in bytes. 0 means disabling cache. Cached files are not removed until the ingestion task completes.\t1073741824 maxFetchCapacityBytes\tMaximum size of the fetch space in bytes. 0 means disabling prefetch. Prefetched files are removed immediately once they are read.\t1073741824 prefetchTriggerBytes\tThreshold to trigger prefetching files.\tmaxFetchCapacityBytes / 2 fetchTimeout\tTimeout for fetching each file.\t60000 maxFetchRetry\tMaximum number of retries for fetching each file.\t3 You can also ingest from other storage using the HDFS firehose if the HDFS client supports that storage. However, if you want to ingest from cloud storage, consider using the service-specific input source for your data storage. If you want to use a non-hdfs protocol with the HDFS firehose, you need to include the protocol you want in druid.ingestion.hdfs.allowedProtocols. See HDFS firehose security configuration for more details. "},{"title":"LocalFirehose","type":1,"pageTitle":"JSON-based batch ingestion with firehose (Deprecated)","url":"/docs/27.0.0/ingestion/native-batch-firehose#localfirehose","content":"This Firehose can be used to read the data from files on local disk, and is mainly intended for proof-of-concept testing, and works with string typed parsers. This Firehose is splittable and can be used by native parallel index tasks. Since each split represents a file in this Firehose, each worker task of index_parallel will read a file. 
A sample local Firehose spec is shown below: { "type": "local", "filter" : "*.csv", "baseDir": "/data/directory" } property\tdescription\trequired?type\tThis should be "local".\tyes filter\tA wildcard filter for files. See here for more information.\tyes baseDir\tdirectory to search recursively for files to be ingested.\tyes "},{"title":"HttpFirehose","type":1,"pageTitle":"JSON-based batch ingestion with firehose (Deprecated)","url":"/docs/27.0.0/ingestion/native-batch-firehose#httpfirehose","content":"This Firehose can be used to read the data from remote sites via HTTP, and works with string typed parsers. This Firehose is splittable and can be used by native parallel index tasks. Since each split represents a file in this Firehose, each worker task of index_parallel will read a file. A sample HTTP Firehose spec is shown below: { "type": "http", "uris": ["http://example.com/uri1", "http://example2.com/uri2"] } You can only use protocols listed in the druid.ingestion.http.allowedProtocols property as HTTP firehose input sources. The http and https protocols are allowed by default. See HTTP firehose security configuration for more details. The below configurations can be optionally used if the URIs specified in the spec require a Basic Authentication Header. Omitting these fields from your spec will result in HTTP requests with no Basic Authentication Header. property\tdescription\tdefaulthttpAuthenticationUsername\tUsername to use for authentication with specified URIs\tNone httpAuthenticationPassword\tPasswordProvider to use with specified URIs\tNone Example with authentication fields using the DefaultPassword provider (this requires the password to be in the ingestion spec): { "type": "http", "uris": ["http://example.com/uri1", "http://example2.com/uri2"], "httpAuthenticationUsername": "username", "httpAuthenticationPassword": "password123" } You can also use the other existing Druid PasswordProviders. Here is an example using the EnvironmentVariablePasswordProvider: { "type": "http", "uris": ["http://example.com/uri1", "http://example2.com/uri2"], "httpAuthenticationUsername": "username", "httpAuthenticationPassword": { "type": "environment", "variable": "HTTP_FIREHOSE_PW" } } The below configurations can optionally be used for tuning the Firehose performance. Note that prefetching or caching isn't that useful in the Parallel task. property\tdescription\tdefaultmaxCacheCapacityBytes\tMaximum size of the cache space in bytes. 0 means disabling cache. Cached files are not removed until the ingestion task completes.\t1073741824 maxFetchCapacityBytes\tMaximum size of the fetch space in bytes. 0 means disabling prefetch. Prefetched files are removed immediately once they are read.\t1073741824 prefetchTriggerBytes\tThreshold to trigger prefetching HTTP objects.\tmaxFetchCapacityBytes / 2 fetchTimeout\tTimeout for fetching an HTTP object.\t60000 maxFetchRetry\tMaximum retries for fetching an HTTP object.\t3 "},{"title":"IngestSegmentFirehose","type":1,"pageTitle":"JSON-based batch ingestion with firehose (Deprecated)","url":"/docs/27.0.0/ingestion/native-batch-firehose#ingestsegmentfirehose","content":"This Firehose can be used to read the data from existing druid segments, potentially using a new schema and changing the name, dimensions, metrics, rollup, etc. of the segment. This Firehose is splittable and can be used by native parallel index tasks. This firehose will accept any type of parser, but will only utilize the list of dimensions and the timestamp specification. 
A sample ingest Firehose spec is shown below: { "type": "ingestSegment", "dataSource": "wikipedia", "interval": "2013-01-01/2013-01-02" } property\tdescription\trequired?type\tThis should be "ingestSegment".\tyes dataSource\tA String defining the data source to fetch rows from, very similar to a table in a relational database\tyes interval\tA String representing the ISO-8601 interval. This defines the time range to fetch the data over.\tyes dimensions\tThe list of dimensions to select. If left empty, no dimensions are returned. If left null or not defined, all dimensions are returned.\tno metrics\tThe list of metrics to select. If left empty, no metrics are returned. If left null or not defined, all metrics are selected.\tno filter\tSee Filters\tno maxInputSegmentBytesPerTask\tDeprecated. Use Segments Split Hint Spec instead. When used with the native parallel index task, the maximum number of bytes of input segments to process in a single task. If a single segment is larger than this number, it will be processed by itself in a single task (input segments are never split across tasks). Defaults to 150MB.\tno "},{"title":"SqlFirehose","type":1,"pageTitle":"JSON-based batch ingestion with firehose (Deprecated)","url":"/docs/27.0.0/ingestion/native-batch-firehose#sqlfirehose","content":"This Firehose can be used to ingest events residing in an RDBMS. The database connection information is provided as part of the ingestion spec. For each query, the results are fetched locally and indexed. If there are multiple queries from which data needs to be indexed, queries are prefetched in the background, up to maxFetchCapacityBytes bytes. This Firehose is splittable and can be used by native parallel index tasks. This firehose will accept any type of parser, but will only utilize the list of dimensions and the timestamp specification. See the extension documentation for more detailed ingestion examples. Requires one of the following extensions: MySQL Metadata Store.PostgreSQL Metadata Store. { "type": "sql", "database": { "type": "mysql", "connectorConfig": { "connectURI": "jdbc:mysql://host:port/schema", "user": "user", "password": "password" } }, "sqls": ["SELECT * FROM table1", "SELECT * FROM table2"] } property\tdescription\tdefault\trequired?type\tThis should be "sql". Yes database\tSpecifies the database connection details. The database type corresponds to the extension that supplies the connectorConfig support. The specified extension must be loaded into Druid: mysql-metadata-storage for mysql postgresql-metadata-storage extension for postgresql. You can selectively allow JDBC properties in connectURI. See JDBC connections security config for more details. Yes maxCacheCapacityBytes\tMaximum size of the cache space in bytes. 0 means disabling cache. Cached files are not removed until the ingestion task completes.\t1073741824\tNo maxFetchCapacityBytes\tMaximum size of the fetch space in bytes. 0 means disabling prefetch. Prefetched files are removed immediately once they are read.\t1073741824\tNo prefetchTriggerBytes\tThreshold to trigger prefetching SQL result objects.\tmaxFetchCapacityBytes / 2\tNo fetchTimeout\tTimeout for fetching the result set.\t60000\tNo foldCase\tToggle case folding of database column names. This may be enabled in cases where the database returns case insensitive column names in query results.\tfalse\tNo sqls\tList of SQL queries where each SQL query would retrieve the data to be indexed. 
Yes "},{"title":"Database","type":1,"pageTitle":"JSON-based batch ingestion with firehose (Deprecated)","url":"/docs/27.0.0/ingestion/native-batch-firehose#database","content":"property\tdescription\tdefault\trequired?type\tThe type of database to query. Valid values are mysql and postgresql_ Yes connectorConfig\tSpecify the database connection properties via connectURI, user and password Yes "},{"title":"InlineFirehose","type":1,"pageTitle":"JSON-based batch ingestion with firehose (Deprecated)","url":"/docs/27.0.0/ingestion/native-batch-firehose#inlinefirehose","content":"This Firehose can be used to read the data inlined in its own spec. It can be used for demos or for quickly testing out parsing and schema, and works with string typed parsers. A sample inline Firehose spec is shown below: { "type": "inline", "data": "0,values,formatted\\n1,as,CSV" } property\tdescription\trequired?type\tThis should be "inline".\tyes data\tInlined data to ingest.\tyes "},{"title":"CombiningFirehose","type":1,"pageTitle":"JSON-based batch ingestion with firehose (Deprecated)","url":"/docs/27.0.0/ingestion/native-batch-firehose#combiningfirehose","content":"This Firehose can be used to combine and merge data from a list of different Firehoses. { "type": "combining", "delegates": [ { firehose1 }, { firehose2 }, ... ] } property\tdescription\trequired?type\tThis should be "combining"\tyes delegates\tList of Firehoses to combine data from\tyes "},{"title":"Schema design tips","type":0,"sectionRef":"#","url":"/docs/27.0.0/ingestion/schema-design","content":"","keywords":""},{"title":"Druid's data model","type":1,"pageTitle":"Schema design tips","url":"/docs/27.0.0/ingestion/schema-design#druids-data-model","content":"For general information, check out the documentation on Druid schema model on the main ingestion overview page. The rest of this page discusses tips for users coming from other kinds of systems, as well as general tips and common practices. Druid data is stored in datasources, which are similar to tables in a traditional RDBMS.Druid datasources can be ingested with or without rollup. With rollup enabled, Druid partially aggregates your data during ingestion, potentially reducing its row count, decreasing storage footprint, and improving query performance. With rollup disabled, Druid stores one row for each row in your input data, without any pre-aggregation.Every row in Druid must have a timestamp. Data is always partitioned by time, and every query has a time filter. Query results can also be broken down by time buckets like minutes, hours, days, and so on.All columns in Druid datasources, other than the timestamp column, are either dimensions or metrics. This follows the standard naming convention of OLAP data.Typical production datasources have tens to hundreds of columns.Dimension columns are stored as-is, so they can be filtered on, grouped by, or aggregated at query time. They are always single Strings, arrays of Strings, single Longs, single Doubles or single Floats.Metric columns are stored pre-aggregated, so they can only be aggregated at query time (not filtered or grouped by). They are often stored as numbers (integers or floats) but can also be stored as complex objects like HyperLogLog sketches or approximate quantile sketches. Metrics can be configured at ingestion time even when rollup is disabled, but are most useful when rollup is enabled. 
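To make the split between dimensions and metrics concrete, a minimal sketch of the relevant spec sections might look like this (column names are hypothetical): "dimensionsSpec": { "dimensions": [ "country", "page" ] }, "metricsSpec": [ { "type": "count", "name": "count" }, { "type": "doubleSum", "name": "bytesAdded", "fieldName": "bytesAdded" } ] Here country and page are stored as-is and can be filtered or grouped on, while bytesAdded is stored in pre-aggregated form. 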
"},{"title":"If you're coming from a","type":1,"pageTitle":"Schema design tips","url":"/docs/27.0.0/ingestion/schema-design#if-youre-coming-from-a","content":""},{"title":"Relational model","type":1,"pageTitle":"Schema design tips","url":"/docs/27.0.0/ingestion/schema-design#relational-model","content":"(Like Hive or PostgreSQL.) Druid datasources are generally equivalent to tables in a relational database. Druid lookupscan act similarly to data-warehouse-style dimension tables, but as you'll see below, denormalization is often recommended if you can get away with it. Common practice for relational data modeling involves normalization: the idea of splitting up data into multiple tables such that data redundancy is reduced or eliminated. For example, in a "sales" table, best-practices relational modeling calls for a "product id" column that is a foreign key into a separate "products" table, which in turn has "product id", "product name", and "product category" columns. This prevents the product name and category from needing to be repeated on different rows in the "sales" table that refer to the same product. In Druid, on the other hand, it is common to use totally flat datasources that do not require joins at query time. In the example of the "sales" table, in Druid it would be typical to store "productid", "product_name", and "product_category" as dimensions directly in a Druid "sales" datasource, without using a separate "products" table. Totally flat schemas substantially increase performance, since the need for joins is eliminated at query time. As an an added speed boost, this also allows Druid's query layer to operate directly on compressed dictionary-encoded data. Perhaps counter-intuitively, this does _not substantially increase storage footprint relative to normalized schemas, since Druid uses dictionary encoding to effectively store just a single integer per row for string columns. If necessary, Druid datasources can be partially normalized through the use of lookups, which are the rough equivalent of dimension tables in a relational database. At query time, you would use Druid's SQLLOOKUP function, or native lookup extraction functions, instead of using the JOIN keyword like you would in a relational database. Since lookup tables impose an increase in memory footprint and incur more computational overhead at query time, it is only recommended to do this if you need the ability to update a lookup table and have the changes reflected immediately for already-ingested rows in your main table. Tips for modeling relational data in Druid: Druid datasources do not have primary or unique keys, so skip those.Denormalize if possible. If you need to be able to update dimension / lookup tables periodically and have those changes reflected in already-ingested data, consider partial normalization with lookups.If you need to join two large distributed tables with each other, you must do this before loading the data into Druid. Druid does not support query-time joins of two datasources. Lookups do not help here, since a full copy of each lookup table is stored on each Druid server, so they are not a good choice for large tables.Consider whether you want to enable rollup for pre-aggregation, or whether you want to disable rollup and load your existing data as-is. Rollup in Druid is similar to creating a summary table in a relational model. 
"},{"title":"Time series model","type":1,"pageTitle":"Schema design tips","url":"/docs/27.0.0/ingestion/schema-design#time-series-model","content":"(Like OpenTSDB or InfluxDB.) Similar to time series databases, Druid's data model requires a timestamp. Druid is not a timeseries database, but it is a natural choice for storing timeseries data. Its flexible data model allows it to store both timeseries and non-timeseries data, even in the same datasource. To achieve best-case compression and query performance in Druid for timeseries data, it is important to partition and sort by metric name, like timeseries databases often do. See Partitioning and sorting for more details. Tips for modeling timeseries data in Druid: Druid does not think of data points as being part of a "time series". Instead, Druid treats each point separately for ingestion and aggregation.Create a dimension that indicates the name of the series that a data point belongs to. This dimension is often called "metric" or "name". Do not get the dimension named "metric" confused with the concept of Druid metrics. Place this first in the list of dimensions in your "dimensionsSpec" for best performance (this helps because it improves locality; see partitioning and sorting below for details).Create other dimensions for attributes attached to your data points. These are often called "tags" in timeseries database systems.Create metrics corresponding to the types of aggregations that you want to be able to query. Typically this includes "sum", "min", and "max" (in one of the long, float, or double flavors). If you want the ability to compute percentiles or quantiles, use Druid's approximate aggregators.Consider enabling rollup, which will allow Druid to potentially combine multiple points into one row in your Druid datasource. This can be useful if you want to store data at a different time granularity than it is naturally emitted. It is also useful if you want to combine timeseries and non-timeseries data in the same datasource.If you don't know ahead of time what columns you'll want to ingest, use an empty dimensions list to triggerautomatic detection of dimension columns. "},{"title":"Log aggregation model","type":1,"pageTitle":"Schema design tips","url":"/docs/27.0.0/ingestion/schema-design#log-aggregation-model","content":"(Like Elasticsearch or Splunk.) Similar to log aggregation systems, Druid offers inverted indexes for fast searching and filtering. Druid's search capabilities are generally less developed than these systems, and its analytical capabilities are generally more developed. The main data modeling differences between Druid and these systems are that when ingesting data into Druid, you must be more explicit. Druid columns have types specific upfront. Tips for modeling log data in Druid: If you don't know ahead of time what columns to ingest, you can have Druid perform schema auto-discovery.If you have nested data, you can ingest it using the nested columns feature or flatten it using a flattenSpec.Consider enabling rollup if you have mainly analytical use cases for your log data. This will mean you lose the ability to retrieve individual events from Druid, but you potentially gain substantial compression and query performance boosts. 
"},{"title":"General tips and best practices","type":1,"pageTitle":"Schema design tips","url":"/docs/27.0.0/ingestion/schema-design#general-tips-and-best-practices","content":""},{"title":"Rollup","type":1,"pageTitle":"Schema design tips","url":"/docs/27.0.0/ingestion/schema-design#rollup","content":"Druid can roll up data as it is ingested to minimize the amount of raw data that needs to be stored. This is a form of summarization or pre-aggregation. For more details, see the Rollup section of the ingestion documentation. "},{"title":"Partitioning and sorting","type":1,"pageTitle":"Schema design tips","url":"/docs/27.0.0/ingestion/schema-design#partitioning-and-sorting","content":"Optimally partitioning and sorting your data can have substantial impact on footprint and performance. For more details, see the Partitioning section of the ingestion documentation. "},{"title":"Sketches for high cardinality columns","type":1,"pageTitle":"Schema design tips","url":"/docs/27.0.0/ingestion/schema-design#sketches-for-high-cardinality-columns","content":"When dealing with high cardinality columns like user IDs or other unique IDs, consider using sketches for approximate analysis rather than operating on the actual values. When you ingest data using a sketch, Druid does not store the original raw data, but instead stores a "sketch" of it that it can feed into a later computation at query time. Popular use cases for sketches include count-distinct and quantile computation. Each sketch is designed for just one particular kind of computation. In general using sketches serves two main purposes: improving rollup, and reducing memory footprint at query time. Sketches improve rollup ratios because they allow you to collapse multiple distinct values into the same sketch. For example, if you have two rows that are identical except for a user ID (perhaps two users did the same action at the same time), storing them in a count-distinct sketch instead of as-is means you can store the data in one row instead of two. You won't be able to retrieve the user IDs or compute exact distinct counts, but you'll still be able to compute approximate distinct counts, and you'll reduce your storage footprint. Sketches reduce memory footprint at query time because they limit the amount of data that needs to be shuffled between servers. For example, in a quantile computation, instead of needing to send all data points to a central location so they can be sorted and the quantile can be computed, Druid instead only needs to send a sketch of the points. This can reduce data transfer needs to mere kilobytes. For details about the sketches available in Druid, see theapproximate aggregators page. If you prefer videos, take a look at Not exactly!, a conference talk about sketches in Druid. "},{"title":"String vs numeric dimensions","type":1,"pageTitle":"Schema design tips","url":"/docs/27.0.0/ingestion/schema-design#string-vs-numeric-dimensions","content":"If the user wishes to ingest a column as a numeric-typed dimension (Long, Double or Float), it is necessary to specify the type of the column in the dimensions section of the dimensionsSpec. If the type is omitted, Druid will ingest a column as the default String type. There are performance tradeoffs between string and numeric columns. Numeric columns are generally faster to group on than string columns. But unlike string columns, numeric columns don't have indexes, so they can be slower to filter on. You may want to experiment to find the optimal choice for your use case. 
For details about how to configure numeric dimensions, see the dimensionsSpec documentation. "},{"title":"Secondary timestamps","type":1,"pageTitle":"Schema design tips","url":"/docs/27.0.0/ingestion/schema-design#secondary-timestamps","content":"Druid schemas must always include a primary timestamp. The primary timestamp is used for partitioning and sorting your data, so it should be the timestamp that you will most often filter on. Druid is able to rapidly identify and retrieve data corresponding to time ranges of the primary timestamp column. If your data has more than one timestamp, you can ingest the others as secondary timestamps. The best way to do this is to ingest them as long-typed dimensions in milliseconds format. If necessary, you can get them into this format using a transformSpec and expressions like timestamp_parse, which returns millisecond timestamps. At query time, you can query secondary timestamps with SQL time functions like MILLIS_TO_TIMESTAMP, TIME_FLOOR, and others. If you're using native Druid queries, you can use expressions. "},{"title":"Nested dimensions","type":1,"pageTitle":"Schema design tips","url":"/docs/27.0.0/ingestion/schema-design#nested-dimensions","content":"You can ingest and store nested data in a Druid column as a COMPLEX<json> data type. See Nested columns for more information. If you want to ingest nested data in a format unsupported by the nested columns feature, you must use the flattenSpec object to flatten it. For example, if you have data of the following form: { "foo": { "bar": 3 } } then before indexing it, you should transform it to: { "foo_bar": 3 } See the flattenSpec documentation for more details. "},{"title":"Counting the number of ingested events","type":1,"pageTitle":"Schema design tips","url":"/docs/27.0.0/ingestion/schema-design#counting-the-number-of-ingested-events","content":"When rollup is enabled, count aggregators at query time do not actually tell you the number of rows that have been ingested. They tell you the number of rows in the Druid datasource, which may be smaller than the number of rows ingested. In this case, a count aggregator at ingestion time can be used to count the number of events. However, it is important to note that when you query for this metric, you should use a longSum aggregator. A count aggregator at query time will return the number of Druid rows for the time interval, which can be used to determine what the roll-up ratio was. To clarify with an example, if your ingestion spec contains: "metricsSpec": [ { "type": "count", "name": "count" } ] You should query for the number of ingested rows with: "aggregations": [ { "type": "longSum", "name": "numIngestedEvents", "fieldName": "count" } ] "},{"title":"Schema auto-discovery for dimensions","type":1,"pageTitle":"Schema design tips","url":"/docs/27.0.0/ingestion/schema-design#schema-auto-discovery-for-dimensions","content":"Druid can infer the schema for your data in one of two ways: Type-aware schema discovery (experimental) where Druid infers the schema and type for your data. Type-aware schema discovery is an experimental feature currently available for native batch and streaming ingestion.String-based schema discovery where all the discovered columns are typed as either native string or multi-value string columns. Type-aware schema discovery info Note that using type-aware schema discovery can impact downstream BI tools depending on how they handle ARRAY typed columns. 
You can have Druid infer the schema and types for your data partially or fully by setting dimensionsSpec.useSchemaDiscovery to true and defining some or no dimensions in the dimensions list. When performing type-aware schema discovery, Druid can discover all of the columns of your input data (that aren't in the exclusion list). Druid automatically chooses the most appropriate native Druid type among STRING, LONG, DOUBLE, ARRAY<STRING>, ARRAY<LONG>, ARRAY<DOUBLE>, or COMPLEX<json> for nested data. For input formats with native boolean types, Druid ingests these values as strings if druid.expressions.useStrictBooleans is set to false (the default), or longs if set to true (for more SQL compatible behavior). Array typed columns can be queried using the array functions or UNNEST. Nested columns can be queried with the JSON functions. We also highly recommend setting druid.generic.useDefaultValueForNull=false when using these columns since it also enables out of the box ARRAY type filtering. If not set to false, setting sqlUseBoundsAndSelectors to false on the SQL query context can enable ARRAY filtering instead. Mixed type columns are stored in the least restrictive type that can represent all values in the column. For example: Mixed numeric columns are DOUBLE. If there are any strings present, then the column is a STRING. If there are arrays, then the column becomes an array with the least restrictive element type. Any nested data or arrays of nested data become COMPLEX<json> nested columns. If you're already using string-based schema discovery and want to migrate, see Migrating to type-aware schema discovery. String-based schema discovery If you do not set dimensionsSpec.useSchemaDiscovery to true, Druid can still use the string-based schema discovery for ingestion if any of the following conditions are met: The dimension list is empty, or you set includeAllDimensions to true. Druid coerces primitives and arrays of primitive types into the native Druid string type. Nested data structures and arrays of nested data structures are ignored and not ingested. Migrating to type-aware schema discovery If you previously used string-based schema discovery and want to migrate to type-aware schema discovery, do the following: Update any queries that use multi-value dimensions (MVDs) to use UNNEST in conjunction with other functions so that no MVD behavior is being relied upon. Type-aware schema discovery generates ARRAY typed columns instead of MVDs, so queries that use any MVD features will fail. Be aware of mixed typed inputs and test how type-aware schema discovery handles them. Druid attempts to cast them as the least restrictive type. If you notice issues with numeric types, you may need to explicitly cast them. Generally, Druid handles the coercion for you. Update your dimension exclusion list and add any nested columns if you want to continue to exclude them. String-based schema discovery automatically ignores nested columns, but type-aware schema discovery will ingest them. "},{"title":"Including the same column as a dimension and a metric","type":1,"pageTitle":"Schema design tips","url":"/docs/27.0.0/ingestion/schema-design#including-the-same-column-as-a-dimension-and-a-metric","content":"One workflow with unique IDs is to be able to filter on a particular ID, while still being able to do fast unique counts on the ID column. If you are not using schema-less dimensions, this use case is supported by setting the name of the metric to something different than the dimension. 
If you are using schema-less dimensions, the best practice here is to include the same column twice, once as a dimension, and once as a hyperUnique metric. This may involve some work at ETL time. As an example, for schema-less dimensions, repeat the same column: { "device_id_dim": 123, "device_id_met": 123 } and in your metricsSpec, include: { "type": "hyperUnique", "name": "devices", "fieldName": "device_id_met" } device_id_dim should automatically get picked up as a dimension. "},{"title":"Druid schema model","type":0,"sectionRef":"#","url":"/docs/27.0.0/ingestion/schema-model","content":"","keywords":""},{"title":"Primary timestamp","type":1,"pageTitle":"Druid schema model","url":"/docs/27.0.0/ingestion/schema-model#primary-timestamp","content":"Druid schemas must always include a primary timestamp. Druid uses the primary timestamp to partition and sort your data. Druid uses the primary timestamp to rapidly identify and retrieve data within the time range of queries. Druid also uses the primary timestamp column for time-based data management operations such as dropping time chunks, overwriting time chunks, and time-based retention rules. Druid parses the primary timestamp based on the timestampSpec configuration at ingestion time. Regardless of the source field for the primary timestamp, Druid always stores the timestamp in the __time column in your Druid datasource. You can control other important operations that are based on the primary timestamp in the granularitySpec. If you have more than one timestamp column, you can store the others as secondary timestamps. "},{"title":"Dimensions","type":1,"pageTitle":"Druid schema model","url":"/docs/27.0.0/ingestion/schema-model#dimensions","content":"Dimensions are columns that Druid stores "as-is". You can use dimensions for any purpose. For example, you can group, filter, or apply aggregators to dimensions at query time when necessary. If you disable rollup, then Druid treats the set of dimensions like a set of columns to ingest. The dimensions behave exactly as you would expect from any database that does not support a rollup feature. At ingestion time, you configure dimensions in the dimensionsSpec. "},{"title":"Metrics","type":1,"pageTitle":"Druid schema model","url":"/docs/27.0.0/ingestion/schema-model#metrics","content":"Metrics are columns that Druid stores in an aggregated form. Metrics are most useful when you enable rollup. If you specify a metric, you can apply an aggregation function to each row during ingestion. This enables rollup: a form of aggregation that collapses dimensions while aggregating the values in the metrics; that is, it collapses rows but retains their summary information. Rollup combines multiple rows with the same timestamp value and dimension values. For example, the rollup tutorial demonstrates using rollup to collapse netflow data to a single row per (minute, srcIP, dstIP) tuple, while retaining aggregate information about total packet and byte counts. Druid can compute some aggregators, especially approximate ones, more quickly at query time if they are partially computed at ingestion time, including data that has not been rolled up. At ingestion time, you configure metrics in the metricsSpec. 
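As a hedged sketch of how rollup and metrics fit together, a spec that rolls data up to minute granularity and aggregates a hypothetical bytes column might include: "granularitySpec": { "type": "uniform", "segmentGranularity": "DAY", "queryGranularity": "MINUTE", "rollup": true }, "metricsSpec": [ { "type": "count", "name": "count" }, { "type": "longSum", "name": "bytes", "fieldName": "bytes" } ] With this configuration, input rows that share the same minute-truncated timestamp and dimension values are combined into one stored row. 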
These processes would periodically build segments for the data they had collected over some span of time and then set up hand-off to Historical servers. These processes could be invoked by org.apache.druid.cli.Main server realtime This model of stream pull ingestion was deprecated for a number of both operational and architectural reasons, and removed completely in Druid 0.16.0. Operationally, realtime nodes were difficult to configure, deploy, and scale because each node required a unique configuration. The design of the stream pull ingestion system for realtime nodes also suffered from limitations that made it impossible to achieve exactly-once ingestion. The extensions druid-kafka-eight, druid-kafka-eight-simpleConsumer, druid-rabbitmq, and druid-rocketmq were also removed at this time, since they were built to operate on the realtime nodes. Please consider using the Kafka Indexing Service or Kinesis Indexing Service for stream pull ingestion instead.","keywords":""},{"title":"Task reference","type":0,"sectionRef":"#","url":"/docs/27.0.0/ingestion/tasks","content":"","keywords":""},{"title":"Task API","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#task-api","content":"Task APIs are available in two main places: The Overlord process offers HTTP APIs to submit tasks, cancel tasks, check their status, review logs and reports, and more. Refer to the Tasks API reference for a full list. Druid SQL includes a sys.tasks table that provides information about currently running tasks. This table is read-only, and has a limited (but useful!) subset of the full information available through the Overlord APIs. "},{"title":"Task reports","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#task-reports","content":"A report containing information about the number of rows ingested and any parse exceptions that occurred is available for both completed tasks and running tasks. The reporting feature is supported by native batch tasks, the Hadoop batch task, and Kafka and Kinesis ingestion tasks. "},{"title":"Completion report","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#completion-report","content":"After a task completes, if it supports reports, its report can be retrieved at: http://<OVERLORD-HOST>:<OVERLORD-PORT>/druid/indexer/v1/task/<task-id>/reports An example output is shown below: { "ingestionStatsAndErrors": { "taskId": "compact_twitter_2018-09-24T18:24:23.920Z", "payload": { "ingestionState": "COMPLETED", "unparseableEvents": {}, "rowStats": { "determinePartitions": { "processed": 0, "processedBytes": 0, "processedWithError": 0, "thrownAway": 0, "unparseable": 0 }, "buildSegments": { "processed": 5390324, "processedBytes": 5109573212, "processedWithError": 0, "thrownAway": 0, "unparseable": 0 } }, "segmentAvailabilityConfirmed": false, "segmentAvailabilityWaitTimeMs": 0, "errorMsg": null }, "type": "ingestionStatsAndErrors" } } Segment Availability Fields For some task types, the indexing task can wait for the newly ingested segments to become available for queries after ingestion completes. The below fields inform the end user regarding the duration and result of the availability wait. For batch ingestion task types, refer to tuningConfig docs to see if the task supports an availability waiting period. Field\tDescriptionsegmentAvailabilityConfirmed\tWhether all segments generated by this ingestion task had been confirmed as available for queries in the cluster before the task completed. 
segmentAvailabilityWaitTimeMs\tMilliseconds waited by the ingestion task for the newly ingested segments to become available for query after ingestion completed. "},{"title":"Live report","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#live-report","content":"When a task is running, a live report containing the ingestion state, unparseable events, and moving averages of the number of events processed over 1 min, 5 min, and 15 min time windows can be retrieved at: http://<OVERLORD-HOST>:<OVERLORD-PORT>/druid/indexer/v1/task/<task-id>/reports An example output is shown below: { "ingestionStatsAndErrors": { "taskId": "compact_twitter_2018-09-24T18:24:23.920Z", "payload": { "ingestionState": "RUNNING", "unparseableEvents": {}, "rowStats": { "movingAverages": { "buildSegments": { "5m": { "processed": 3.392158326408501, "processedBytes": 627.5492903856, "unparseable": 0, "thrownAway": 0, "processedWithError": 0 }, "15m": { "processed": 1.736165476881023, "processedBytes": 321.1906130223, "unparseable": 0, "thrownAway": 0, "processedWithError": 0 }, "1m": { "processed": 4.206417693750045, "processedBytes": 778.1872733438, "unparseable": 0, "thrownAway": 0, "processedWithError": 0 } } }, "totals": { "buildSegments": { "processed": 1994, "processedBytes": 3425110, "processedWithError": 0, "thrownAway": 0, "unparseable": 0 } } }, "errorMsg": null }, "type": "ingestionStatsAndErrors" } } A description of the fields: The ingestionStatsAndErrors report provides information about row counts and errors. The ingestionState shows what step of ingestion the task reached. Possible states include: NOT_STARTED: The task has not begun reading any rows. DETERMINE_PARTITIONS: The task is processing rows to determine partitioning. BUILD_SEGMENTS: The task is processing rows to construct segments. COMPLETED: The task has finished its work. Only batch tasks have the DETERMINE_PARTITIONS phase. Realtime tasks such as those created by the Kafka Indexing Service do not have a DETERMINE_PARTITIONS phase. unparseableEvents contains lists of exception messages that were caused by unparseable inputs. This can help with identifying problematic input rows. There will be one list each for the DETERMINE_PARTITIONS and BUILD_SEGMENTS phases. Note that the Hadoop batch task does not support saving of unparseable events. The rowStats map contains information about row counts. There is one entry for each ingestion phase. The definitions of the different row counts are shown below: processed: Number of rows successfully ingested without parsing errors. processedBytes: Total number of uncompressed bytes processed by the task. This reports the total byte size of all rows, i.e. even those that are included in processedWithError, unparseable or thrownAway. processedWithError: Number of rows that were ingested, but contained a parsing error within one or more columns. This typically occurs where input rows have a parseable structure but invalid types for columns, such as passing in a non-numeric String value for a numeric column. thrownAway: Number of rows skipped. This includes rows with timestamps that were outside of the ingestion task's defined time interval and rows that were filtered out with a transformSpec, but doesn't include the rows skipped by explicit user configurations. For example, the rows skipped by skipHeaderRows or hasHeaderRow in the CSV format are not counted. unparseable: Number of rows that could not be parsed at all and were discarded. 
This tracks input rows without a parseable structure, such as passing in non-JSON data when using a JSON parser. The errorMsg field shows a message describing the error that caused a task to fail. It will be null if the task was successful. "},{"title":"Live reports","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#live-reports","content":""},{"title":"Row stats","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#row-stats","content":"The native batch task, the Hadoop batch task, and Kafka and Kinesis ingestion tasks support retrieval of row stats while the task is running. The live report can be accessed with a GET to the following URL on a Peon running a task: http://<middlemanager-host>:<worker-port>/druid/worker/v1/chat/<task-id>/rowStats An example report is shown below. The movingAverages section contains 1 minute, 5 minute, and 15 minute moving averages of increases to the four row counters, which have the same definitions as those in the completion report. The totals section shows the current totals. { "movingAverages": { "buildSegments": { "5m": { "processed": 3.392158326408501, "processedBytes": 627.5492903856, "unparseable": 0, "thrownAway": 0, "processedWithError": 0 }, "15m": { "processed": 1.736165476881023, "processedBytes": 321.1906130223, "unparseable": 0, "thrownAway": 0, "processedWithError": 0 }, "1m": { "processed": 4.206417693750045, "processedBytes": 778.1872733438, "unparseable": 0, "thrownAway": 0, "processedWithError": 0 } } }, "totals": { "buildSegments": { "processed": 1994, "processedBytes": 3425110, "processedWithError": 0, "thrownAway": 0, "unparseable": 0 } } } For the Kafka Indexing Service, a GET to the following Overlord API will retrieve live row stat reports from each task being managed by the supervisor and provide a combined report. http://<OVERLORD-HOST>:<OVERLORD-PORT>/druid/indexer/v1/supervisor/<supervisor-id>/stats "},{"title":"Unparseable events","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#unparseable-events","content":"Lists of recently-encountered unparseable events can be retrieved from a running task with a GET to the following Peon API: http://<middlemanager-host>:<worker-port>/druid/worker/v1/chat/<task-id>/unparseableEvents Note that this functionality is not supported by all task types. Currently, it is only supported by the non-parallel native batch task (type index) and the tasks created by the Kafka and Kinesis indexing services. "},{"title":"Task lock system","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#task-lock-system","content":"This section explains the task locking system in Druid. Druid's locking system and versioning system are tightly coupled with each other to guarantee the correctness of ingested data. "},{"title":"\"Overshadowing\" between segments","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#overshadowing-between-segments","content":"You can run a task to overwrite existing data. The segments created by an overwriting task overshadow existing segments. Note that the overshadow relation holds only for the same time chunk and the same data source. These overshadowed segments are not considered in query processing to filter out stale data. Each segment has a major version and a minor version. The major version is represented as a timestamp in the format of "yyyy-MM-dd'T'hh:mm:ss" while the minor version is an integer number. 
These major and minor versions are used to determine the overshadow relation between segments as seen below. A segment s1 overshadows another s2 if s1 has a higher major version than s2, or s1 has the same major version and a higher minor version than s2. Here are some examples. A segment of the major version of 2019-01-01T00:00:00.000Z and the minor version of 0 overshadows another of the major version of 2018-01-01T00:00:00.000Z and the minor version of 1. A segment of the major version of 2019-01-01T00:00:00.000Z and the minor version of 1 overshadows another of the major version of 2019-01-01T00:00:00.000Z and the minor version of 0. "},{"title":"Locking","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#locking","content":"If you are running two or more Druid tasks which generate segments for the same data source and the same time chunk, the generated segments could potentially overshadow each other, which could lead to incorrect query results. To avoid this problem, tasks will attempt to get locks prior to creating any segment in Druid. There are two types of locks: time chunk lock and segment lock. When the time chunk lock is used, a task locks the entire time chunk of a data source where generated segments will be written. For example, suppose we have a task ingesting data into the time chunk of 2019-01-01T00:00:00.000Z/2019-01-02T00:00:00.000Z of the wikipedia data source. With the time chunk locking, this task will lock the entire time chunk of 2019-01-01T00:00:00.000Z/2019-01-02T00:00:00.000Z of the wikipedia data source before it creates any segments. As long as it holds the lock, any other tasks will be unable to create segments for the same time chunk of the same data source. The segments created with the time chunk locking have a higher major version than existing segments. Their minor version is always 0. When the segment lock is used, a task locks individual segments instead of the entire time chunk. As a result, two or more tasks can create segments for the same time chunk of the same data source simultaneously if they are reading different segments. For example, a Kafka indexing task and a compaction task can always write segments into the same time chunk of the same data source simultaneously. This is because a Kafka indexing task always appends new segments, while a compaction task always overwrites existing segments. The segments created with the segment locking have the same major version and a higher minor version. info The segment locking is still experimental. It could have unknown bugs which potentially lead to incorrect query results. To enable segment locking, you may need to set forceTimeChunkLock to false in the task context. Once forceTimeChunkLock is unset, the task will choose a proper lock type to use automatically. Please note that segment lock is not always available. The most common use case where time chunk lock is enforced is when an overwriting task changes the segment granularity. Also, segment locking is supported only by native indexing tasks and Kafka/Kinesis indexing tasks. Hadoop indexing tasks don't support it. forceTimeChunkLock in the task context is only applied to individual tasks. If you want to unset it for all tasks, you would want to set druid.indexer.tasklock.forceTimeChunkLock to false in the overlord configuration. Lock requests can conflict with each other if two or more tasks try to get locks for the overlapped time chunks of the same data source. 
Note that the lock conflict can happen between different lock types. The behavior on lock conflicts depends on the task priority. If all tasks of conflicting lock requests have the same priority, then the task that requested first will get the lock. Other tasks will wait for that task to release the lock. If a lower-priority task requests a lock later than a higher-priority task, it also waits for the higher-priority task to release the lock. If a higher-priority task requests a lock later than a lower-priority task, it preempts the lower-priority task. The lock of the lower-prioritized task will be revoked and the higher-prioritized task will acquire a new lock. This lock preemption can happen at any time while a task is running except when it is publishing segments in a critical section. Its locks become preemptible again once publishing segments is finished. Note that locks are shared by the tasks of the same groupId. For example, Kafka indexing tasks of the same supervisor have the same groupId and share all locks with each other. "},{"title":"Lock priority","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#lock-priority","content":"Each task type has a different default lock priority. The below table shows the default priorities of different task types. The higher the number, the higher the priority. task type\tdefault priorityRealtime index task\t75 Batch index tasks, including native batch, SQL, and Hadoop-based\t50 Merge/Append/Compaction task\t25 Other tasks\t0 You can override the task priority by setting your priority in the task context as below. "context" : { "priority" : 100 } "},{"title":"Task actions","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#task-actions","content":"Task actions are overlord actions performed by tasks during their lifecycle. Some typical task actions are: lockAcquire: acquires a time-chunk lock on an interval for the task. lockRelease: releases a lock acquired by the task on an interval. segmentTransactionalInsert: publishes new segments created by a task and optionally overwrites and/or drops existing segments in a single transaction. segmentAllocate: allocates pending segments to a task to write rows. "},{"title":"Batching segmentAllocate actions","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#batching-segmentallocate-actions","content":"In a cluster with several concurrent tasks, segmentAllocate actions on the overlord can take a long time to finish, causing spikes in the task/action/run/time. This can result in ingestion lag building up while a task waits for a segment to be allocated. The root cause of such spikes is likely to be one or more of the following: several concurrent tasks trying to allocate segments for the same datasource and interval; a large number of metadata calls made to the segments and pending segments tables; concurrency limitations while acquiring a task lock required for allocating a segment. Since the contention typically arises from tasks allocating segments for the same datasource and interval, you can improve the run times by batching the actions together. To enable batched segment allocation on the overlord, set druid.indexer.tasklock.batchSegmentAllocation to true. See overlord configuration for more details. 
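Pulling the lock-related settings together, a hedged sketch of a task context that opts into segment locking and raises the task priority (the values are illustrative) could look like: "context": { "forceTimeChunkLock": false, "priority": 75 } Setting forceTimeChunkLock to false only allows the task to choose segment locking where it is supported; the context parameters below describe these and other options in more detail. 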
"},{"title":"Context parameters","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#context-parameters","content":"The task context is used for various individual task configuration. Specify task context configurations in the context field of the ingestion spec. When configuring automatic compaction, set the task context configurations in taskContext rather than in context. The settings get passed into the context field of the compaction tasks issued to MiddleManagers. The following parameters apply to all task types. Property\tDescription\tDefaultforceTimeChunkLock\tSetting this to false is still experimental. Force to use time chunk lock. When true, this parameter overrides the overlord runtime property druid.indexer.tasklock.forceTimeChunkLock configuration for the overlord. If neither this parameter nor the runtime property is true, each task automatically chooses a lock type to use. See Locking for more details.\ttrue priority\tTask priority\tDepends on the task type. See Priority for more details. storeCompactionState\tEnables the task to store the compaction state of created segments in the metadata store. When true, the segments created by the task fill lastCompactionState in the segment metadata. This parameter is set automatically on compaction tasks.\ttrue for compaction tasks, false for other task types storeEmptyColumns\tEnables the task to store empty columns during ingestion. When true, Druid stores every column specified in the dimensionsSpec. When false, Druid SQL queries referencing empty columns will fail. If you intend to leave storeEmptyColumns disabled, you should either ingest dummy data for empty columns or else not query on empty columns. When set in the task context, storeEmptyColumns overrides the system property druid.indexer.task.storeEmptyColumns.\ttrue taskLockTimeout\tTask lock timeout in milliseconds. For more details, see Locking. When a task acquires a lock, it sends a request via HTTP and awaits until it receives a response containing the lock acquisition result. As a result, an HTTP timeout error can occur if taskLockTimeout is greater than druid.server.http.maxIdleTime of Overlords.\t300000 useLineageBasedSegmentAllocation\tEnables the new lineage-based segment allocation protocol for the native Parallel task with dynamic partitioning. This option should be off during the replacing rolling upgrade from one of the Druid versions between 0.19 and 0.21 to Druid 0.22 or higher. Once the upgrade is done, it must be set to true to ensure data correctness.\tfalse in 0.21 or earlier, true in 0.22 or later "},{"title":"Task logs","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#task-logs","content":"Logs are created by ingestion tasks as they run. You can configure Druid to push these into a repository for long-term storage after they complete. Once the task has been submitted to the Overlord it remains WAITING for locks to be acquired. Worker slot allocation is then PENDING until the task can actually start executing. The task then starts creating logs in a local directory of the middle manager (or indexer) in a log directory for the specific taskId at [druid.worker.baseTaskDirs] (../configuration/index.md#middlemanager-configuration). When the task completes - whether it succeeds or fails - the middle manager (or indexer) will push the task log file into the location specified in druid.indexer.logs. Task logs on the Druid web console are retrieved via an API on the Overlord. 
It automatically detects where the log file is, either in the middleManager / indexer or in long-term storage, and passes it back. If you don't see the log file in long-term storage, it means either: the middleManager / indexer failed to push the log file to deep storage or the task did not complete. You can check the middleManager / indexer logs locally to see if there was a push failure. If there was not, check the Overlord's own process logs to see why the task failed before it started. info If you are running the indexing service in remote mode, the task logs must be stored in S3, Azure Blob Store, Google Cloud Storage, or HDFS. You can configure retention periods for logs in milliseconds by setting druid.indexer.logs.kill properties in configuration. The Overlord will then automatically manage task logs in log directories along with entries in task-related metadata storage tables. info Automatic log file deletion typically works based on the log file's 'modified' timestamp in the back-end store. Large clock skews between Druid processes and the long-term store might result in unintended behavior. "},{"title":"Configuring task storage sizes","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#configuring-task-storage-sizes","content":"Tasks sometimes need local disk for temporary storage while they are active. For example, realtime ingestion tasks need space to accept broadcast segments for broadcast joins, and Multi-Stage Query jobs need space for intermediate data sets. Task storage sizes are configured through a combination of three properties: druid.worker.capacity - i.e. the "number of task slots" druid.worker.baseTaskDirs - i.e. the list of directories to use for task storage. druid.worker.baseTaskDirSize - i.e. the amount of storage to use on each storage location While it might seem like one task could use multiple directories, only one directory from the list of base directories is used for any given task; each task is given a single directory for scratch space. The actual amount of disk storage assigned to any given task is computed by determining the largest size that enables all task slots to be given an equivalent amount of disk storage. For example, with 5 slots, 2 directories (A and B) and a size of 300 GB, 3 slots would be assigned to directory A, 2 slots to directory B, and each slot would be allowed 100 GB. "},{"title":"All task types","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#all-task-types","content":""},{"title":"index_parallel","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#index_parallel","content":"See Native batch ingestion (parallel task). "},{"title":"index_hadoop","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#index_hadoop","content":"See Hadoop-based ingestion. "},{"title":"index_kafka","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#index_kafka","content":"Submitted automatically, on your behalf, by a Kafka-based ingestion supervisor. "},{"title":"index_kinesis","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#index_kinesis","content":"Submitted automatically, on your behalf, by a Kinesis-based ingestion supervisor. "},{"title":"compact","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#compact","content":"Compaction tasks merge all segments of the given interval. See the documentation on compaction for details. 
"},{"title":"kill","type":1,"pageTitle":"Task reference","url":"/docs/27.0.0/ingestion/tasks#kill","content":"Kill tasks delete all metadata about certain segments and removes them from deep storage. See the documentation on deleting data for details. "},{"title":"Tranquility","type":0,"sectionRef":"#","url":"/docs/27.0.0/ingestion/tranquility","content":"Tranquility Tranquility is a separately distributed package for pushing streams to Druid in real-time. Tranquility has not been built against a version of Druid later than Druid 0.9.2 release. It may still work with the latest Druid servers, but not all features and functionality will be available due to limitations of older Druid APIs on the Tranquility side. For new projects that require streaming ingestion, we recommend using Druid's native support forApache Kafka orAmazon Kinesis. For more details, check out the Tranquility GitHub page.","keywords":""},{"title":"Papers","type":0,"sectionRef":"#","url":"/docs/27.0.0/misc/papers-and-talks","content":"","keywords":""},{"title":"Papers","type":1,"pageTitle":"Papers","url":"/docs/27.0.0/misc/papers-and-talks#papers","content":"Druid: A Real-time Analytical Data Store - Discusses the Druid architecture in detail. The RADStack: Open Source Lambda Architecture for Interactive Analytics - Discusses how Druid supports real-time and batch workflows. "},{"title":"Presentations","type":1,"pageTitle":"Papers","url":"/docs/27.0.0/misc/papers-and-talks#presentations","content":"Introduction to Druid - Discusses the motivations behind Druid and the architecture of the system. Druid: Interactive Queries Meet Real-Time Data - Discusses how real-time ingestion in Druid works and use cases at Netflix. Not Exactly! Fast Queries via Approximation Algorithms - Discusses how approximate algorithms work in Druid. Real-time Analytics with Open Source Technologies - Discusses Lambda architectures with Druid. Stories from the Trenches - The Challenges of Building an Analytics Stack - Discusses features that were added to scale Druid. Building Interactive Applications at Scale - Discusses building applications on top of Druid. "},{"title":"SQL-based ingestion","type":0,"sectionRef":"#","url":"/docs/27.0.0/multi-stage-query/","content":"","keywords":""},{"title":"Vocabulary","type":1,"pageTitle":"SQL-based ingestion","url":"/docs/27.0.0/multi-stage-query/#vocabulary","content":"Controller: An indexing service task of type query_controller that manages the execution of a query. There is one controller task per query. Worker: Indexing service tasks of type query_worker that execute a query. There can be multiple worker tasks per query. Internally, the tasks process items in parallel using their processing pools (up to druid.processing.numThreads of execution parallelism within a worker task). Stage: A stage of query execution that is parallelized across worker tasks. Workers exchange data with each other between stages. Partition: A slice of data output by worker tasks. In INSERT or REPLACE queries, the partitions of the final stage become Druid segments. Shuffle: Workers exchange data between themselves on a per-partition basis in a process called shuffling. During a shuffle, each output partition is sorted by a clustering key. "},{"title":"Load the extension","type":1,"pageTitle":"SQL-based ingestion","url":"/docs/27.0.0/multi-stage-query/#load-the-extension","content":"To add the extension to an existing cluster, add druid-multi-stage-query to druid.extensions.loadlist in yourcommon.runtime.properties file. 
For more information about how to load an extension, see Loading extensions. To use EXTERN, you need READ permission on the resource named "EXTERNAL" of the resource type "EXTERNAL". If you encounter a 403 error when trying to use EXTERN, verify that you have the correct permissions. The same is true of any of the input-source-specific table functions such as S3 or LOCALFILES. "},{"title":"Next steps","type":1,"pageTitle":"SQL-based ingestion","url":"/docs/27.0.0/multi-stage-query/#next-steps","content":"Read about key concepts to learn more about how SQL-based ingestion and multi-stage queries work. Check out the examples to see SQL-based ingestion in action. Explore the Query view to get started in the web console. "},{"title":"SQL-based ingestion concepts","type":0,"sectionRef":"#","url":"/docs/27.0.0/multi-stage-query/concepts","content":"","keywords":""},{"title":"Multi-stage query task engine","type":1,"pageTitle":"SQL-based ingestion concepts","url":"/docs/27.0.0/multi-stage-query/concepts#multi-stage-query-task-engine","content":"The druid-multi-stage-query extension adds a multi-stage query (MSQ) task engine that executes SQL statements as batch tasks in the indexing service, which execute on Middle Managers. INSERT and REPLACE tasks publish segments just like all other forms of batch ingestion. Each query occupies at least two task slots while running: one controller task, and at least one worker task. As an experimental feature, the MSQ task engine also supports running SELECT queries as batch tasks. The behavior and result format of plain SELECT (without INSERT or REPLACE) is subject to change. You can execute SQL statements using the MSQ task engine through the Query view in the web console or through the /druid/v2/sql/task API. For more details on how SQL queries are executed using the MSQ task engine, see multi-stage query tasks. "},{"title":"SQL extensions","type":1,"pageTitle":"SQL-based ingestion concepts","url":"/docs/27.0.0/multi-stage-query/concepts#sql-extensions","content":"To support ingestion, additional SQL functionality is available through the MSQ task engine. "},{"title":"Read external data with EXTERN","type":1,"pageTitle":"SQL-based ingestion concepts","url":"/docs/27.0.0/multi-stage-query/concepts#read-external-data-with-extern","content":"Query tasks can access external data through the EXTERN function, using any native batch input source and input format. EXTERN can read multiple files in parallel across different worker tasks. However, EXTERN does not split individual files across multiple worker tasks. If you have a small number of very large input files, you can increase query parallelism by splitting up your input files. For more information about the syntax, see EXTERN. See also the set of SQL-friendly input-source-specific table functions which may be more convenient than EXTERN. "},{"title":"Load data with INSERT","type":1,"pageTitle":"SQL-based ingestion concepts","url":"/docs/27.0.0/multi-stage-query/concepts#load-data-with-insert","content":"INSERT statements can create a new datasource or append to an existing datasource. In Druid SQL, unlike standard SQL, there is no syntactical difference between creating a table and appending data to a table. Druid does not include a CREATE TABLE statement. Nearly all SELECT capabilities are available for INSERT ... SELECT queries. Certain exceptions are listed on the Known issues page. INSERT statements acquire a shared lock on the target datasource. 
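As a hedged sketch of such a statement (the datasource name, input URI, and columns are illustrative rather than taken from this documentation): INSERT INTO "wikipedia_new" SELECT TIME_PARSE("timestamp") AS __time, page, added FROM TABLE(EXTERN('{"type": "http", "uris": ["https://example.com/wikipedia.json.gz"]}', '{"type": "json"}', '[{"name": "timestamp", "type": "string"}, {"name": "page", "type": "string"}, {"name": "added", "type": "long"}]')) PARTITIONED BY DAY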
Multiple INSERT statements can run at the same time, for the same datasource, if your cluster has enough task slots. Like all other forms of batch ingestion, each INSERT statement generates new segments and publishes them at the end of its run. For this reason, it is best suited to loading data in larger batches. Do not useINSERT statements to load data in a sequence of microbatches; for that, use streaming ingestion instead. When deciding whether to use REPLACE or INSERT, keep in mind that segments generated with REPLACE can be pruned with dimension-based pruning but those generated with INSERT cannot. For more information about the requirements for dimension-based pruning, see Clustering. For more information about the syntax, see INSERT. "},{"title":"Overwrite data with REPLACE","type":1,"pageTitle":"SQL-based ingestion concepts","url":"/docs/27.0.0/multi-stage-query/concepts#overwrite-data-with-replace","content":"REPLACE statements can create a new datasource or overwrite data in an existing datasource. In Druid SQL, unlike standard SQL, there is no syntactical difference between creating a table and overwriting data in a table. Druid does not include a CREATE TABLE statement. REPLACE uses an OVERWRITE clause to determine which data to overwrite. You can overwrite an entire table, or a specific time range of a table. When you overwrite a specific time range, that time range must align with the granularity specified in the PARTITIONED BY clause. REPLACE statements acquire an exclusive write lock to the target time range of the target datasource. No other ingestion or compaction operations may proceed for that time range while the task is running. However, ingestion and compaction operations may proceed for other time ranges. Nearly all SELECT capabilities are available for REPLACE ... SELECT queries. Certain exceptions are listed on the Known issues page. For more information about the syntax, see REPLACE. When deciding whether to use REPLACE or INSERT, keep in mind that segments generated with REPLACE can be pruned with dimension-based pruning but those generated with INSERT cannot. For more information about the requirements for dimension-based pruning, see Clustering. "},{"title":"Primary timestamp","type":1,"pageTitle":"SQL-based ingestion concepts","url":"/docs/27.0.0/multi-stage-query/concepts#primary-timestamp","content":"Druid tables always include a primary timestamp named __time. It is common to set a primary timestamp by using date and time functions; for example: TIME_FORMAT("timestamp", 'yyyy-MM-dd HH:mm:ss') AS __time. The __time column is used for partitioning by time. If you use PARTITIONED BY ALL orPARTITIONED BY ALL TIME, partitioning by time is disabled. In these cases, you do not need to include a __timecolumn in your INSERT statement. However, Druid still creates a __time column in your Druid table and sets all timestamps to 1970-01-01 00:00:00. For more information, see Primary timestamp. "},{"title":"Partitioning by time","type":1,"pageTitle":"SQL-based ingestion concepts","url":"/docs/27.0.0/multi-stage-query/concepts#partitioning-by-time","content":"INSERT and REPLACE statements require the PARTITIONED BY clause, which determines how time-based partitioning is done. In Druid, data is split into one or more segments per time chunk, defined by the PARTITIONED BY granularity. 
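To make the OVERWRITE alignment rule and the PARTITIONED BY clause concrete, here is a hedged sketch of a REPLACE that rewrites a single day (the datasource name and dates are hypothetical): REPLACE INTO "wikipedia" OVERWRITE WHERE __time >= TIMESTAMP '2023-01-01' AND __time < TIMESTAMP '2023-01-02' SELECT * FROM "wikipedia" WHERE __time >= TIMESTAMP '2023-01-01' AND __time < TIMESTAMP '2023-01-02' PARTITIONED BY DAY In this sketch, the overwritten time range is one whole day, which aligns with the DAY granularity in the PARTITIONED BY clause.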
Partitioning by time is important for three reasons: Queries that filter by __time (SQL) or intervals (native) are able to use time partitioning to prune the set of segments to consider. Certain data management operations, such as overwriting and compacting existing data, acquire exclusive write locks on time partitions. Finer-grained partitioning allows finer-grained exclusive write locks. Each segment file is wholly contained within a time partition. Too-fine-grained partitioning may cause a large number of small segments, which leads to poor performance. PARTITIONED BY HOUR and PARTITIONED BY DAY are the most common choices to balance these considerations. PARTITIONED BY ALL is suitable if your dataset does not have a primary timestamp. For more information about the syntax, see PARTITIONED BY. "},{"title":"Clustering","type":1,"pageTitle":"SQL-based ingestion concepts","url":"/docs/27.0.0/multi-stage-query/concepts#clustering","content":"Within each time chunk defined by time partitioning, data can be further split by the optional CLUSTERED BY clause. For example, suppose you ingest 100 million rows per hour using PARTITIONED BY HOUR and CLUSTERED BY hostName. The ingestion task will generate segments of roughly 3 million rows (the default value of rowsPerSegment) with lexicographic ranges of hostNames grouped into segments. Clustering is important for two reasons: Lower storage footprint due to improved locality, and therefore improved compressibility. Better query performance due to dimension-based segment pruning, which removes segments from consideration when they cannot possibly contain data matching a query's filter. This speeds up filters like x = 'foo' and x IN ('foo', 'bar'). To activate dimension-based pruning, these requirements must be met: Segments were generated by a REPLACE statement, not an INSERT statement. All CLUSTERED BY columns are single-valued string columns. If these requirements are not met, Druid still clusters data during ingestion but will not be able to perform dimension-based segment pruning at query time. You can tell if dimension-based segment pruning is possible by using the sys.segments table to inspect the shard_spec for the segments generated by an ingestion query. If they are of type range or single, then dimension-based segment pruning is possible. Otherwise, it is not. The shard spec type is also available in the Segments view under the Partitioning column. For more information about syntax, see CLUSTERED BY. "},{"title":"Rollup","type":1,"pageTitle":"SQL-based ingestion concepts","url":"/docs/27.0.0/multi-stage-query/concepts#rollup","content":"Rollup is a technique that pre-aggregates data during ingestion to reduce the amount of data stored. Intermediate aggregations are stored in the generated segments, and further aggregation is done at query time. This reduces storage footprint and improves performance, often dramatically. To perform ingestion with rollup: Use GROUP BY. The columns in the GROUP BY clause become dimensions, and aggregation functions become metrics. Set finalizeAggregations: false in your context. This causes aggregation functions to write their internal state to the generated segments, instead of the finalized end result, and enables further aggregation at query time. Wrap all multi-value strings in MV_TO_ARRAY(...) and set groupByEnableMultiValueUnnesting: false in your context. This ensures that multi-value strings are left alone and remain lists, instead of being automatically unnested by the GROUP BY operator. 
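A hedged sketch of a rollup ingestion that follows these steps (the datasource names and columns are illustrative, and finalizeAggregations and groupByEnableMultiValueUnnesting are set in the query context rather than in the SQL text): INSERT INTO "pageviews_hourly" SELECT TIME_FLOOR(__time, 'PT1H') AS __time, page, COUNT(*) AS view_count, SUM(added) AS added_sum FROM "pageviews" GROUP BY 1, 2 PARTITIONED BY DAY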
When you do all of these things, Druid understands that you intend to do an ingestion with rollup, and it writes rollup-related metadata into the generated segments. Other applications can then use segmentMetadata queries to retrieve rollup-related information. If you see the error "Encountered multi-value dimension x that cannot be processed with groupByEnableMultiValueUnnesting set to false", then wrap that column in MV_TO_ARRAY(x) AS x. The following aggregation functions are supported for rollup at ingestion time: COUNT (but switch to SUM at query time), SUM, MIN, MAX, EARLIEST (string only), LATEST (string only), APPROX_COUNT_DISTINCT, APPROX_COUNT_DISTINCT_BUILTIN, APPROX_COUNT_DISTINCT_DS_HLL, APPROX_COUNT_DISTINCT_DS_THETA, and DS_QUANTILES_SKETCH (but switch to APPROX_QUANTILE_DS at query time). Do not use AVG; instead, use SUM and COUNT at ingest time and compute the quotient at query time. For an example, see INSERT with rollup example. "},{"title":"Multi-stage query tasks","type":1,"pageTitle":"SQL-based ingestion concepts","url":"/docs/27.0.0/multi-stage-query/concepts#multi-stage-query-tasks","content":""},{"title":"Execution flow","type":1,"pageTitle":"SQL-based ingestion concepts","url":"/docs/27.0.0/multi-stage-query/concepts#execution-flow","content":"When you execute a SQL statement using the task endpoint /druid/v2/sql/task, the following happens: The Broker plans your SQL query into a native query, as usual. The Broker wraps the native query into a task of type query_controller and submits it to the indexing service. The Broker returns the task ID to you and exits. The controller task launches some number of worker tasks determined by the maxNumTasks and taskAssignment context parameters. You can set these settings individually for each query. Worker tasks of type query_worker execute the query. If the query is a SELECT query, the worker tasks send the results back to the controller task, which writes them into its task report. If the query is an INSERT or REPLACE query, the worker tasks generate and publish new Druid segments to the provided datasource. "},{"title":"Parallelism","type":1,"pageTitle":"SQL-based ingestion concepts","url":"/docs/27.0.0/multi-stage-query/concepts#parallelism","content":"The maxNumTasks query parameter determines the maximum number of tasks your query will use, including the one query_controller task. Generally, queries perform better with more workers. The lowest possible value of maxNumTasks is two (one worker and one controller). Do not set this higher than the number of free slots available in your cluster; doing so will result in a TaskStartTimeout error. When reading external data, EXTERN can read multiple files in parallel across different worker tasks. However, EXTERN does not split individual files across multiple worker tasks. If you have a small number of very large input files, you can increase query parallelism by splitting up your input files. The druid.worker.capacity server property on each Middle Manager determines the maximum number of worker tasks that can run on each server at once. Worker tasks run single-threaded, which also determines the maximum number of processors on the server that can contribute towards multi-stage queries. 
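For example, a query submitted through the Query view or the /druid/v2/sql/task API might carry the following query context (an illustrative sketch; maxNumTasks and taskAssignment are the parameters named above, and the values shown are assumptions for a small cluster): "context": { "maxNumTasks": 5, "taskAssignment": "auto" }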
"},{"title":"Memory usage","type":1,"pageTitle":"SQL-based ingestion concepts","url":"/docs/27.0.0/multi-stage-query/concepts#memory-usage","content":"Increasing the amount of available memory can improve performance in certain cases: Segment generation becomes more efficient when data doesn't spill to disk as often.Sorting stage output data becomes more efficient since available memory affects the number of required sorting passes. Worker tasks use both JVM heap memory and off-heap ("direct") memory. On Peons launched by Middle Managers, the bulk of the JVM heap (75%, less any space used bylookups) is split up into two bundles of equal size: one processor bundle and one worker bundle. Each one comprises 37.5% of the available JVM heap, less any space used by lookups. Depending on the type of query, controller and worker tasks may use sketches for determining partition boundaries. The heap footprint of these sketches is capped at 10% of available memory, or 300 MB, whichever is lower. The processor memory bundle is used for query processing and segment generation. Each processor bundle must also provides space to buffer I/O between stages. Specifically, each downstream stage requires 1 MB of buffer space for each upstream worker. For example, if you have 100 workers running in stage 0, and stage 1 reads from stage 0, then each worker in stage 1 requires 1M * 100 = 100 MB of memory for frame buffers. The worker memory bundle is used for sorting stage output data prior to shuffle. Workers can sort more data than fits in memory; in this case, they will switch to using disk. Worker tasks also use off-heap ("direct") memory. Set the amount of direct memory available (-XX:MaxDirectMemorySize) to at least (druid.processing.numThreads + 1) * druid.processing.buffer.sizeBytes. Increasing the amount of direct memory available beyond the minimum does not speed up processing. "},{"title":"Disk usage","type":1,"pageTitle":"SQL-based ingestion concepts","url":"/docs/27.0.0/multi-stage-query/concepts#disk-usage","content":"Worker tasks use local disk for four purposes: Temporary copies of input data. Each temporary file is deleted before the next one is read. You only need enough temporary disk space to store one input file at a time per task.Temporary data related to segment generation. You only need enough temporary disk space to store one segments' worth of data at a time per task. This is generally less than 2 GB per task.External sort of data prior to shuffle. Requires enough space to store a compressed copy of the entire output dataset for a task.Storing stage output data during a shuffle. Requires enough space to store a compressed copy of the entire output dataset for a task. Workers use the task working directory, given bydruid.indexer.task.baseDir, for these items. It is important that this directory has enough space available for these purposes. "},{"title":"SQL-based ingestion known issues","type":0,"sectionRef":"#","url":"/docs/27.0.0/multi-stage-query/known-issues","content":"","keywords":""},{"title":"Multi-stage query task runtime","type":1,"pageTitle":"SQL-based ingestion known issues","url":"/docs/27.0.0/multi-stage-query/known-issues#multi-stage-query-task-runtime","content":"Fault tolerance is partially implemented. Workers get relaunched when they are killed unexpectedly. The controller does not get relaunched if it is killed unexpectedly. Worker task stage outputs are stored in the working directory given by druid.indexer.task.baseDir. 
Stages that generate a large amount of output data may exhaust all available disk space. In this case, the query fails with an UnknownError with a message including "No space left on device". "},{"title":"SELECT Statement","type":1,"pageTitle":"SQL-based ingestion known issues","url":"/docs/27.0.0/multi-stage-query/known-issues#select-statement","content":"SELECT from a Druid datasource does not include unpublished real-time data. GROUPING SETS and UNION ALL are not implemented. Queries using these features return a QueryNotSupported error. For some COUNT DISTINCT queries, you'll encounter a QueryNotSupported error that includes Must not have 'subtotalsSpec' as one of its causes. This is caused by the planner attempting to use GROUPING SETS, which are not implemented. The numeric varieties of the EARLIEST and LATEST aggregators do not work properly. Attempting to use the numeric varieties of these aggregators leads to an error like java.lang.ClassCastException: class java.lang.Double cannot be cast to class org.apache.druid.collections.SerializablePair. The string varieties, however, do work properly. "},{"title":"INSERT and REPLACE Statements","type":1,"pageTitle":"SQL-based ingestion known issues","url":"/docs/27.0.0/multi-stage-query/known-issues#insert-and-replace-statements","content":"INSERT and REPLACE statements with column lists, like INSERT INTO tbl (a, b, c) SELECT ..., are not implemented. INSERT ... SELECT and REPLACE ... SELECT insert columns from the SELECT statement based on column name. This differs from SQL standard behavior, where columns are inserted based on position. INSERT and REPLACE do not support all options available in ingestion specs, including the createBitmapIndex and multiValueHandling dimension properties, and the indexSpec tuningConfig property. "},{"title":"EXTERN Function","type":1,"pageTitle":"SQL-based ingestion known issues","url":"/docs/27.0.0/multi-stage-query/known-issues#extern-function","content":"The schemaless dimensions feature is not available. All columns and their types must be specified explicitly using the signature parameter of the EXTERN function. EXTERN with input sources that match large numbers of files may exhaust available memory on the controller task. EXTERN refers to external files. Use FROM to access Druid input sources. "},{"title":"SQL-based ingestion security","type":0,"sectionRef":"#","url":"/docs/27.0.0/multi-stage-query/security","content":"","keywords":""},{"title":"S3","type":1,"pageTitle":"SQL-based ingestion security","url":"/docs/27.0.0/multi-stage-query/security#s3","content":"The MSQ task engine can use S3 to store intermediate files when running queries. This can increase its reliability but requires certain permissions in S3. These permissions are required if you configure durable storage. Permissions for pushing and fetching intermediate stage results to and from S3: s3:GetObject s3:PutObject s3:AbortMultipartUpload Permissions for removing intermediate stage results: s3:DeleteObject "},{"title":"Alerts","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/alerts","content":"Alerts Druid generates alerts when it gets into unexpected situations. Alerts are emitted as JSON objects to a runtime log file or over HTTP (to a service such as Apache Kafka). Alert emission is disabled by default. All Druid alerts share a common set of fields: timestamp - the time the alert was created service - the service name that emitted the alert host - the host name that emitted the alert severity - severity of the alert e.g. 
anomaly, component-failure, service-failure etc.description - a description of the alertdata - if there was an exception then a JSON object with fields exceptionType, exceptionMessage and exceptionStackTrace","keywords":""},{"title":"Authentication and Authorization","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/auth","content":"","keywords":""},{"title":"Enabling Authentication/AuthorizationLoadingLookupTest","type":1,"pageTitle":"Authentication and Authorization","url":"/docs/27.0.0/operations/auth#enabling-authenticationauthorizationloadinglookuptest","content":""},{"title":"Authenticator chain","type":1,"pageTitle":"Authentication and Authorization","url":"/docs/27.0.0/operations/auth#authenticator-chain","content":"Authentication decisions are handled by a chain of Authenticator instances. A request will be checked by Authenticators in the sequence defined by the druid.auth.authenticatorChain. Authenticator implementations are provided by extensions. For example, the following authenticator chain definition enables the Kerberos and HTTP Basic authenticators, from the druid-kerberos and druid-basic-security core extensions, respectively: druid.auth.authenticatorChain=["kerberos", "basic"] A request will pass through all Authenticators in the chain, until one of the Authenticators successfully authenticates the request or sends an HTTP error response. Authenticators later in the chain will be skipped after the first successful authentication or if the request is terminated with an error response. If no Authenticator in the chain successfully authenticated a request or sent an HTTP error response, an HTTP error response will be sent at the end of the chain. Druid includes two built-in Authenticators, one of which is used for the default unsecured configuration. "},{"title":"AllowAll authenticator","type":1,"pageTitle":"Authentication and Authorization","url":"/docs/27.0.0/operations/auth#allowall-authenticator","content":"This built-in Authenticator authenticates all requests, and always directs them to an Authorizer named "allowAll". It is not intended to be used for anything other than the default unsecured configuration. "},{"title":"Anonymous authenticator","type":1,"pageTitle":"Authentication and Authorization","url":"/docs/27.0.0/operations/auth#anonymous-authenticator","content":"This built-in Authenticator authenticates all requests, and directs them to an Authorizer specified in the configuration by the user. It is intended to be used for adding a default level of access so the Anonymous Authenticator should be added to the end of the authenticator chain. A request that reaches the Anonymous Authenticator at the end of the chain will succeed or fail depending on how the Authorizer linked to the Anonymous Authenticator is configured. Property\tDescription\tDefault\tRequireddruid.auth.authenticator.<authenticatorName>.authorizerName\tAuthorizer that requests should be directed to.\tN/A\tYes druid.auth.authenticator.<authenticatorName>.identity\tThe identity of the requester.\tdefaultUser\tNo To use the Anonymous Authenticator, add an authenticator with type anonymous to the authenticatorChain. For example, the following enables the Anonymous Authenticator with the druid-basic-security extension: druid.auth.authenticatorChain=["basic", "anonymous"] druid.auth.authenticator.anonymous.type=anonymous druid.auth.authenticator.anonymous.identity=defaultUser druid.auth.authenticator.anonymous.authorizerName=myBasicAuthorizer # ... 
usual configs for basic authentication would go here ... "},{"title":"Trusted domain Authenticator","type":1,"pageTitle":"Authentication and Authorization","url":"/docs/27.0.0/operations/auth#trusted-domain-authenticator","content":"This built-in Trusted Domain Authenticator authenticates requests originating from the configured trusted domain, and directs them to an Authorizer specified in the configuration by the user. It is intended to be used for adding a default level of trust and allow access for hosts within same domain. Property\tDescription\tDefault\tRequireddruid.auth.authenticator.<authenticatorName>.name\tauthenticator name.\tN/A\tYes druid.auth.authenticator.<authenticatorName>.domain\tTrusted Domain from which requests should be authenticated. If authentication is allowed for connections from only a given host, fully qualified hostname of that host needs to be specified.\tN/A\tYes druid.auth.authenticator.<authenticatorName>.useForwardedHeaders\tClients connecting to druid could pass through many layers of proxy. Some proxies also append its own IP address to 'X-Forwarded-For' header before passing on the request to another proxy. Some proxies also connect on behalf of client. If this config is set to true and if 'X-Forwarded-For' is present, trusted domain authenticator will use left most host name from X-Forwarded-For header. Note: It is possible to spoof X-Forwarded-For headers in HTTP requests, enable this with caution.\tfalse\tNo druid.auth.authenticator.<authenticatorName>.authorizerName\tAuthorizer that requests should be directed to.\tN/A\tYes druid.auth.authenticator.<authenticatorName>.identity\tThe identity of the requester.\tdefaultUser\tNo To use the Trusted Domain Authenticator, add an authenticator with type trustedDomain to the authenticatorChain. For example, the following enables the Trusted Domain Authenticator : druid.auth.authenticatorChain=["trustedDomain"] druid.auth.authenticator.trustedDomain.type=trustedDomain druid.auth.authenticator.trustedDomain.domain=trustedhost.mycompany.com druid.auth.authenticator.trustedDomain.identity=defaultUser druid.auth.authenticator.trustedDomain.authorizerName=myBasicAuthorizer druid.auth.authenticator.trustedDomain.name=myTrustedAutenticator # ... usual configs for druid would go here ... "},{"title":"Escalator","type":1,"pageTitle":"Authentication and Authorization","url":"/docs/27.0.0/operations/auth#escalator","content":"The druid.escalator.type property determines what authentication scheme should be used for internal Druid cluster communications (such as when a Broker process communicates with Historical processes for query processing). The Escalator chosen for this property must use an authentication scheme that is supported by an Authenticator in druid.auth.authenticatorChain. Authenticator extension implementers must also provide a corresponding Escalator implementation if they intend to use a particular authentication scheme for internal Druid communications. "},{"title":"Noop escalator","type":1,"pageTitle":"Authentication and Authorization","url":"/docs/27.0.0/operations/auth#noop-escalator","content":"This built-in default Escalator is intended for use only with the default AllowAll Authenticator and Authorizer. "},{"title":"Authorizers","type":1,"pageTitle":"Authentication and Authorization","url":"/docs/27.0.0/operations/auth#authorizers","content":"Authorization decisions are handled by an Authorizer. The druid.auth.authorizers property determines what Authorizer implementations will be active. 
There are two built-in Authorizers, "default" and "noop". Other implementations are provided by extensions. For example, the following authorizers definition enables the "basic" implementation from druid-basic-security: druid.auth.authorizers=["basic"] Only a single Authorizer will authorize any given request. Druid includes one built in authorizer: "},{"title":"AllowAll authorizer","type":1,"pageTitle":"Authentication and Authorization","url":"/docs/27.0.0/operations/auth#allowall-authorizer","content":"The Authorizer with type name "allowAll" accepts all requests. "},{"title":"Default Unsecured Configuration","type":1,"pageTitle":"Authentication and Authorization","url":"/docs/27.0.0/operations/auth#default-unsecured-configuration","content":"When druid.auth.authenticatorChain is left empty or unspecified, Druid will create an authenticator chain with a single AllowAll Authenticator named "allowAll". When druid.auth.authorizers is left empty or unspecified, Druid will create a single AllowAll Authorizer named "allowAll". The default value of druid.escalator.type is "noop" to match the default unsecured Authenticator/Authorizer configurations. "},{"title":"Authenticator to Authorizer Routing","type":1,"pageTitle":"Authentication and Authorization","url":"/docs/27.0.0/operations/auth#authenticator-to-authorizer-routing","content":"When an Authenticator successfully authenticates a request, it must attach a AuthenticationResult to the request, containing an information about the identity of the requester, as well as the name of the Authorizer that should authorize the authenticated request. An Authenticator implementation should provide some means through configuration to allow users to select what Authorizer(s) the Authenticator should route requests to. "},{"title":"Internal system user","type":1,"pageTitle":"Authentication and Authorization","url":"/docs/27.0.0/operations/auth#internal-system-user","content":"Internal requests between Druid processes (non-user initiated communications) need to have authentication credentials attached. These requests should be run as an "internal system user", an identity that represents the Druid cluster itself, with full access permissions. The details of how the internal system user is defined is left to extension implementations. "},{"title":"Authorizer Internal System User Handling","type":1,"pageTitle":"Authentication and Authorization","url":"/docs/27.0.0/operations/auth#authorizer-internal-system-user-handling","content":"Authorizers implementations must recognize and authorize an identity for the "internal system user", with full access permissions. "},{"title":"Authenticator and Escalator Internal System User Handling","type":1,"pageTitle":"Authentication and Authorization","url":"/docs/27.0.0/operations/auth#authenticator-and-escalator-internal-system-user-handling","content":"An Authenticator implementation that is intended to support internal Druid communications must recognize credentials for the "internal system user", as provided by a corresponding Escalator implementation. An Escalator must implement three methods related to the internal system user: public HttpClient createEscalatedClient(HttpClient baseClient); public org.eclipse.jetty.client.HttpClient createEscalatedJettyClient(org.eclipse.jetty.client.HttpClient baseClient); public AuthenticationResult createEscalatedAuthenticationResult(); createEscalatedClient returns an wrapped HttpClient that attaches the credentials of the "internal system user" to requests. 
createEscalatedJettyClient is similar to createEscalatedClient, except that it operates on a Jetty HttpClient. createEscalatedAuthenticationResult returns an AuthenticationResult containing the identity of the "internal system user". "},{"title":"Reserved Name Configuration Property","type":1,"pageTitle":"Authentication and Authorization","url":"/docs/27.0.0/operations/auth#reserved-name-configuration-property","content":"For extension implementers, please note that the following configuration properties are reserved for the names of Authenticators and Authorizers: druid.auth.authenticator.<authenticator-name>.name=<authenticator-name> druid.auth.authorizer.<authorizer-name>.name=<authorizer-name> These properties provide the authenticator and authorizer names to the implementations as @JsonProperty parameters, potentially useful when multiple authenticators or authorizers of the same type are configured. "},{"title":"Deep storage migration","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/deep-storage-migration","content":"","keywords":""},{"title":"Shut down cluster services","type":1,"pageTitle":"Deep storage migration","url":"/docs/27.0.0/operations/deep-storage-migration#shut-down-cluster-services","content":"To ensure a clean migration, shut down the non-coordinator services to ensure that metadata state will not change as you do the migration. When migrating from Derby, the coordinator processes will still need to be up initially, as they host the Derby database. "},{"title":"Copy segments from old deep storage to new deep storage.","type":1,"pageTitle":"Deep storage migration","url":"/docs/27.0.0/operations/deep-storage-migration#copy-segments-from-old-deep-storage-to-new-deep-storage","content":"Before migrating, you will need to copy your old segments to the new deep storage. For information on what path structure to use in the new deep storage, please see deep storage migration options. "},{"title":"Export segments with rewritten load specs","type":1,"pageTitle":"Deep storage migration","url":"/docs/27.0.0/operations/deep-storage-migration#export-segments-with-rewritten-load-specs","content":"Druid provides an Export Metadata Tool for exporting metadata from Derby into CSV files which can then be reimported. By setting deep storage migration options, the export-metadata tool will export CSV files where the segment load specs have been rewritten to load from your new deep storage location. Run the export-metadata tool on your existing cluster, using the migration options appropriate for your new deep storage location, and save the CSV files it generates. After a successful export, you can shut down the coordinator. "},{"title":"Import metadata","type":1,"pageTitle":"Deep storage migration","url":"/docs/27.0.0/operations/deep-storage-migration#import-metadata","content":"After generating the CSV exports with the modified segment data, you can reimport the contents of the Druid segments table from the generated CSVs. Please refer to import commands for examples. Only the druid_segments table needs to be imported. "},{"title":"Restart cluster","type":1,"pageTitle":"Deep storage migration","url":"/docs/27.0.0/operations/deep-storage-migration#restart-cluster","content":"After importing the segment table successfully, you can now restart your cluster. 
"},{"title":"Source input formats","type":0,"sectionRef":"#","url":"/docs/27.0.0/ingestion/data-formats","content":"","keywords":""},{"title":"Formatting data","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#formatting-data","content":"The following samples show data formats that are natively supported in Druid: JSON {"timestamp": "2013-08-31T01:02:33Z", "page": "Gypsy Danger", "language" : "en", "user" : "nuclear", "unpatrolled" : "true", "newPage" : "true", "robot": "false", "anonymous": "false", "namespace":"article", "continent":"North America", "country":"United States", "region":"Bay Area", "city":"San Francisco", "added": 57, "deleted": 200, "delta": -143} {"timestamp": "2013-08-31T03:32:45Z", "page": "Striker Eureka", "language" : "en", "user" : "speed", "unpatrolled" : "false", "newPage" : "true", "robot": "true", "anonymous": "false", "namespace":"wikipedia", "continent":"Australia", "country":"Australia", "region":"Cantebury", "city":"Syndey", "added": 459, "deleted": 129, "delta": 330} {"timestamp": "2013-08-31T07:11:21Z", "page": "Cherno Alpha", "language" : "ru", "user" : "masterYi", "unpatrolled" : "false", "newPage" : "true", "robot": "true", "anonymous": "false", "namespace":"article", "continent":"Asia", "country":"Russia", "region":"Oblast", "city":"Moscow", "added": 123, "deleted": 12, "delta": 111} {"timestamp": "2013-08-31T11:58:39Z", "page": "Crimson Typhoon", "language" : "zh", "user" : "triplets", "unpatrolled" : "true", "newPage" : "false", "robot": "true", "anonymous": "false", "namespace":"wikipedia", "continent":"Asia", "country":"China", "region":"Shanxi", "city":"Taiyuan", "added": 905, "deleted": 5, "delta": 900} {"timestamp": "2013-08-31T12:41:27Z", "page": "Coyote Tango", "language" : "ja", "user" : "cancer", "unpatrolled" : "true", "newPage" : "false", "robot": "true", "anonymous": "false", "namespace":"wikipedia", "continent":"Asia", "country":"Japan", "region":"Kanto", "city":"Tokyo", "added": 1, "deleted": 10, "delta": -9} CSV 2013-08-31T01:02:33Z,"Gypsy Danger","en","nuclear","true","true","false","false","article","North America","United States","Bay Area","San Francisco",57,200,-143 2013-08-31T03:32:45Z,"Striker Eureka","en","speed","false","true","true","false","wikipedia","Australia","Australia","Cantebury","Syndey",459,129,330 2013-08-31T07:11:21Z,"Cherno Alpha","ru","masterYi","false","true","true","false","article","Asia","Russia","Oblast","Moscow",123,12,111 2013-08-31T11:58:39Z,"Crimson Typhoon","zh","triplets","true","false","true","false","wikipedia","Asia","China","Shanxi","Taiyuan",905,5,900 2013-08-31T12:41:27Z,"Coyote Tango","ja","cancer","true","false","true","false","wikipedia","Asia","Japan","Kanto","Tokyo",1,10,-9 TSV (Delimited) 2013-08-31T01:02:33Z "Gypsy Danger" "en" "nuclear" "true" "true" "false" "false" "article" "North America" "United States" "Bay Area" "San Francisco" 57 200 -143 2013-08-31T03:32:45Z "Striker Eureka" "en" "speed" "false" "true" "true" "false" "wikipedia" "Australia" "Australia" "Cantebury" "Syndey" 459 129 330 2013-08-31T07:11:21Z "Cherno Alpha" "ru" "masterYi" "false" "true" "true" "false" "article" "Asia" "Russia" "Oblast" "Moscow" 123 12 111 2013-08-31T11:58:39Z "Crimson Typhoon" "zh" "triplets" "true" "false" "true" "false" "wikipedia" "Asia" "China" "Shanxi" "Taiyuan" 905 5 900 2013-08-31T12:41:27Z "Coyote Tango" "ja" "cancer" "true" "false" "true" "false" "wikipedia" "Asia" "Japan" "Kanto" "Tokyo" 1 10 -9 Note that the CSV and TSV data do not contain column 
heads. This becomes important when you specify the data for ingesting. Besides text formats, Druid also supports binary formats such as Orc and Parquet formats. "},{"title":"Custom formats","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#custom-formats","content":"Druid supports custom data formats and can use the Regex parser or the JavaScript parsers to parse these formats. Using any of these parsers for parsing data is less efficient than writing a native Java parser or using an external stream processor. We welcome contributions of new parsers. "},{"title":"Input format","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#input-format","content":"You can use the inputFormat field to specify the data format for your input data. info inputFormat doesn't support all data formats or ingestion methods supported by Druid. Especially if you want to use the Hadoop ingestion, you still need to use the Parser. If your data is formatted in some format not listed in this section, please consider using the Parser instead. All forms of Druid ingestion require some form of schema object. The format of the data to be ingested is specified using the inputFormat entry in your ioConfig. "},{"title":"JSON","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#json","content":"Configure the JSON inputFormat to load JSON data as follows: Field\tType\tDescription\tRequiredtype\tString\tSet value to json.\tyes flattenSpec\tJSON Object\tSpecifies flattening configuration for nested JSON data. See flattenSpec for more info.\tno featureSpec\tJSON Object\tJSON parser features supported by Jackson, a JSON processor for Java. The features control parsing of the input JSON data. To enable a feature, map the feature name to a Boolean value of "true". For example: "featureSpec": {"ALLOW_SINGLE_QUOTES": true, "ALLOW_UNQUOTED_FIELD_NAMES": true}\tno The following properties are specialized properties that only apply when the JSON inputFormat is used in streaming ingestion, and they are related to how parsing exceptions are handled. In streaming ingestion, multi-line JSON events can be ingested (i.e. where a single JSON event spans multiple lines). However, if a parsing exception occurs, all JSON events that are present in the same streaming record will be discarded. Field\tType\tDescription\tRequiredassumeNewlineDelimited\tBoolean\tIf the input is known to be newline delimited JSON (each individual JSON event is contained in a single line, separated by newlines), setting this option to true allows for more flexible parsing exception handling. Only the lines with invalid JSON syntax will be discarded, while lines containing valid JSON events will still be ingested.\tno (Default false) useJsonNodeReader\tBoolean\tWhen ingesting multi-line JSON events, enabling this option will enable the use of a JSON parser which will retain any valid JSON events encountered within a streaming record prior to when a parsing exception occurred.\tno (Default false) For example: "ioConfig": { "inputFormat": { "type": "json" }, ... } "},{"title":"CSV","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#csv","content":"Configure the CSV inputFormat to load CSV data as follows: Field\tType\tDescription\tRequiredtype\tString\tSet value to csv.\tyes listDelimiter\tString\tA custom delimiter for multi-value dimensions.\tno (default = ctrl+A) columns\tJSON array\tSpecifies the columns of the data. 
The columns should be in the same order with the columns of your data.\tyes if findColumnsFromHeader is false or missing findColumnsFromHeader\tBoolean\tIf this is set, the task will find the column names from the header row. Note that skipHeaderRows will be applied before finding column names from the header. For example, if you set skipHeaderRows to 2 and findColumnsFromHeader to true, the task will skip the first two lines and then extract column information from the third line. columns will be ignored if this is set to true.\tno (default = false if columns is set; otherwise null) skipHeaderRows\tInteger\tIf this is set, the task will skip the first skipHeaderRows rows.\tno (default = 0) For example: "ioConfig": { "inputFormat": { "type": "csv", "columns" : ["timestamp","page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city","added","deleted","delta"] }, ... } "},{"title":"TSV (Delimited)","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#tsv-delimited","content":"Configure the TSV inputFormat to load TSV data as follows: Field\tType\tDescription\tRequiredtype\tString\tSet value to tsv.\tyes delimiter\tString\tA custom delimiter for data values.\tno (default = \\t) listDelimiter\tString\tA custom delimiter for multi-value dimensions.\tno (default = ctrl+A) columns\tJSON array\tSpecifies the columns of the data. The columns should be in the same order with the columns of your data.\tyes if findColumnsFromHeader is false or missing findColumnsFromHeader\tBoolean\tIf this is set, the task will find the column names from the header row. Note that skipHeaderRows will be applied before finding column names from the header. For example, if you set skipHeaderRows to 2 and findColumnsFromHeader to true, the task will skip the first two lines and then extract column information from the third line. columns will be ignored if this is set to true.\tno (default = false if columns is set; otherwise null) skipHeaderRows\tInteger\tIf this is set, the task will skip the first skipHeaderRows rows.\tno (default = 0) Be sure to change the delimiter to the appropriate delimiter for your data. Like CSV, you must specify the columns and which subset of the columns you want indexed. For example: "ioConfig": { "inputFormat": { "type": "tsv", "columns" : ["timestamp","page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city","added","deleted","delta"], "delimiter":"|" }, ... } "},{"title":"ORC","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#orc","content":"To use the ORC input format, load the Druid Orc extension ( druid-orc-extensions). info To upgrade from versions earlier than 0.15.0 to 0.15.0 or new, read Migration from 'contrib' extension. Configure the ORC inputFormat to load ORC data as follows: Field\tType\tDescription\tRequiredtype\tString\tSet value to orc.\tyes flattenSpec\tJSON Object\tSpecifies flattening configuration for nested ORC data. Only 'path' expressions are supported ('jq' and 'tree' are unavailable). 
See flattenSpec for more info.\tno binaryAsString\tBoolean\tSpecifies if the binary orc column which is not logically marked as a string should be treated as a UTF-8 encoded string.\tno (default = false) For example: "ioConfig": { "inputFormat": { "type": "orc", "flattenSpec": { "useFieldDiscovery": true, "fields": [ { "type": "path", "name": "nested", "expr": "$.path.to.nested" } ] }, "binaryAsString": false }, ... } "},{"title":"Parquet","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#parquet","content":"To use the Parquet input format load the Druid Parquet extension (druid-parquet-extensions). Configure the Parquet inputFormat to load Parquet data as follows: Field\tType\tDescription\tRequiredtype\tString\tSet value to parquet.\tyes flattenSpec\tJSON Object\tDefine a flattenSpec to extract nested values from a Parquet file. Only 'path' expressions are supported ('jq' and 'tree' are unavailable).\tno (default will auto-discover 'root' level properties) binaryAsString\tBoolean\tSpecifies if the bytes parquet column which is not logically marked as a string or enum type should be treated as a UTF-8 encoded string.\tno (default = false) For example: "ioConfig": { "inputFormat": { "type": "parquet", "flattenSpec": { "useFieldDiscovery": true, "fields": [ { "type": "path", "name": "nested", "expr": "$.path.to.nested" } ] }, "binaryAsString": false }, ... } "},{"title":"Avro Stream","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#avro-stream","content":"To use the Avro Stream input format load the Druid Avro extension (druid-avro-extensions). For more information on how Druid handles Avro types, see Avro Types section for Configure the Avro inputFormat to load Avro data as follows: Field\tType\tDescription\tRequiredtype\tString\tSet value to avro_stream.\tyes flattenSpec\tJSON Object\tDefine a flattenSpec to extract nested values from a Avro record. Only 'path' expressions are supported ('jq' is unavailable).\tno (default will auto-discover 'root' level properties) avroBytesDecoder\tJSON Object\tSpecifies how to decode bytes to Avro record.\tyes binaryAsString\tBoolean\tSpecifies if the bytes Avro column which is not logically marked as a string or enum type should be treated as a UTF-8 encoded string.\tno (default = false) For example: "ioConfig": { "inputFormat": { "type": "avro_stream", "avroBytesDecoder": { "type": "schema_inline", "schema": { //your schema goes here, for example "namespace": "org.apache.druid.data", "name": "User", "type": "record", "fields": [ { "name": "FullName", "type": "string" }, { "name": "Country", "type": "string" } ] } }, "flattenSpec": { "useFieldDiscovery": true, "fields": [ { "type": "path", "name": "someRecord_subInt", "expr": "$.someRecord.subInt" } ] }, "binaryAsString": false }, ... } Avro Bytes Decoder If type is not included, the avroBytesDecoder defaults to schema_repo. Inline Schema Based Avro Bytes Decoder info The "schema_inline" decoder reads Avro records using a fixed schema and does not support schema migration. If you may need to migrate schemas in the future, consider one of the other decoders, all of which use a message header that allows the parser to identify the proper Avro schema for reading records. This decoder can be used if all the input events can be read using the same schema. In this case, specify the schema in the input task JSON itself, as described below. ... 
"avroBytesDecoder": { "type": "schema_inline", "schema": { //your schema goes here, for example "namespace": "org.apache.druid.data", "name": "User", "type": "record", "fields": [ { "name": "FullName", "type": "string" }, { "name": "Country", "type": "string" } ] } } ... Multiple Inline Schemas Based Avro Bytes Decoder Use this decoder if different input events can have different read schemas. In this case, specify the schema in the input task JSON itself, as described below. ... "avroBytesDecoder": { "type": "multiple_schemas_inline", "schemas": { //your id -> schema map goes here, for example "1": { "namespace": "org.apache.druid.data", "name": "User", "type": "record", "fields": [ { "name": "FullName", "type": "string" }, { "name": "Country", "type": "string" } ] }, "2": { "namespace": "org.apache.druid.otherdata", "name": "UserIdentity", "type": "record", "fields": [ { "name": "Name", "type": "string" }, { "name": "Location", "type": "string" } ] }, ... ... } } ... Note that it is essentially a map of integer schema ID to avro schema object. This parser assumes that record has following format. first 1 byte is version and must always be 1. next 4 bytes are integer schema ID serialized using big-endian byte order. remaining bytes contain serialized avro message. SchemaRepo Based Avro Bytes Decoder This Avro bytes decoder first extracts subject and id from the input message bytes, and then uses them to look up the Avro schema used to decode the Avro record from bytes. For details, see the schema repo. You need an HTTP service like schema repo to hold the Avro schema. For information on registering a schema on the message producer side, see org.apache.druid.data.input.AvroStreamInputRowParserTest#testParse(). Field\tType\tDescription\tRequiredtype\tString\tSet value to schema_repo.\tno subjectAndIdConverter\tJSON Object\tSpecifies how to extract the subject and id from message bytes.\tyes schemaRepository\tJSON Object\tSpecifies how to look up the Avro schema from subject and id.\tyes Avro-1124 Subject And Id Converter This section describes the format of the subjectAndIdConverter object for the schema_repo Avro bytes decoder. Field\tType\tDescription\tRequiredtype\tString\tSet value to avro_1124.\tno topic\tString\tSpecifies the topic of your Kafka stream.\tyes Avro-1124 Schema Repository This section describes the format of the schemaRepository object for the schema_repo Avro bytes decoder. Field\tType\tDescription\tRequiredtype\tString\tSet value to avro_1124_rest_client.\tno url\tString\tSpecifies the endpoint URL of your Avro-1124 schema repository.\tyes Confluent Schema Registry-based Avro Bytes Decoder This Avro bytes decoder first extracts a unique id from input message bytes, and then uses it to look up the schema in the Schema Registry used to decode the Avro record from bytes. For details, see the Schema Registry documentation and repository. Field\tType\tDescription\tRequiredtype\tString\tSet value to schema_registry.\tno url\tString\tSpecifies the URL endpoint of the Schema Registry.\tyes capacity\tInteger\tSpecifies the max size of the cache (default = Integer.MAX_VALUE).\tno urls\tArray<String>\tSpecifies the URL endpoints of the multiple Schema Registry instances.\tyes (if url is not provided) config\tJson\tTo send additional configurations, configured for Schema Registry. This can be supplied via a DynamicConfigProvider\tno headers\tJson\tTo send headers to the Schema Registry. 
This can be supplied via a DynamicConfigProvider.\tno For a single Schema Registry instance, use the url field; for multiple instances, use urls. Single Instance: ... "avroBytesDecoder" : { "type" : "schema_registry", "url" : <schema-registry-url> } ... Multiple Instances: ... "avroBytesDecoder" : { "type" : "schema_registry", "urls" : [<schema-registry-url-1>, <schema-registry-url-2>, ...], "config" : { "basic.auth.credentials.source": "USER_INFO", "basic.auth.user.info": "fred:letmein", "schema.registry.ssl.truststore.location": "/some/secrets/kafka.client.truststore.jks", "schema.registry.ssl.truststore.password": "<password>", "schema.registry.ssl.keystore.location": "/some/secrets/kafka.client.keystore.jks", "schema.registry.ssl.keystore.password": "<password>", "schema.registry.ssl.key.password": "<password>", ... }, "headers": { "traceID" : "b29c5de2-0db4-490b-b421", "timeStamp" : "1577191871865", "druid.dynamic.config.provider":{ "type":"mapString", "config":{ "registry.header.prop.1":"value.1", "registry.header.prop.2":"value.2" } } ... } } ... Parse exceptions The following errors encountered while reading records are considered parse exceptions, which can be limited and logged with ingestion task configurations such as maxParseExceptions and maxSavedParseExceptions: Failure to retrieve a schema due to misconfiguration or corrupt records (invalid schema IDs). Failure to decode an Avro message. "},{"title":"Avro OCF","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#avro-ocf","content":"To use the Avro OCF input format, load the Druid Avro extension (druid-avro-extensions). See the Avro Types section for how Avro types are handled in Druid. Configure the Avro OCF inputFormat to load Avro OCF data as follows: Field\tType\tDescription\tRequiredtype\tString\tSet value to avro_ocf.\tyes flattenSpec\tJSON Object\tDefine a flattenSpec to extract nested values from Avro records. Only 'path' expressions are supported ('jq' and 'tree' are unavailable).\tno (default will auto-discover 'root' level properties) schema\tJSON Object\tDefine a reader schema to be used when parsing Avro records. This is useful when parsing multiple versions of Avro OCF file data.\tno (default will decode using the writer schema contained in the OCF file) binaryAsString\tBoolean\tSpecifies if the bytes Avro column which is not logically marked as a string or enum type should be treated as a UTF-8 encoded string.\tno (default = false) For example: "ioConfig": { "inputFormat": { "type": "avro_ocf", "flattenSpec": { "useFieldDiscovery": true, "fields": [ { "type": "path", "name": "someRecord_subInt", "expr": "$.someRecord.subInt" } ] }, "schema": { "namespace": "org.apache.druid.data.input", "name": "SomeDatum", "type": "record", "fields" : [ { "name": "timestamp", "type": "long" }, { "name": "eventType", "type": "string" }, { "name": "id", "type": "long" }, { "name": "someRecord", "type": { "type": "record", "name": "MySubRecord", "fields": [ { "name": "subInt", "type": "int"}, { "name": "subLong", "type": "long"} ] }}] }, "binaryAsString": false }, ... } "},{"title":"Protobuf","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#protobuf","content":"info You need to include the druid-protobuf-extensions as an extension to use the Protobuf input format. 
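For example, one way to load the extension is to add it to druid.extensions.loadList in your common.runtime.properties (shown here with only this extension; your cluster's list will likely include others): druid.extensions.loadList=["druid-protobuf-extensions"] 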
Configure the Protobuf inputFormat to load Protobuf data as follows: Field\tType\tDescription\tRequiredtype\tString\tSet value to protobuf.\tyes flattenSpec\tJSON Object\tDefine a flattenSpec to extract nested values from a Protobuf record. Note that only 'path' expressions are supported ('jq' and 'tree' are unavailable).\tno (default will auto-discover 'root' level properties) protoBytesDecoder\tJSON Object\tSpecifies how to decode bytes to a Protobuf record.\tyes For example: "ioConfig": { "inputFormat": { "type": "protobuf", "protoBytesDecoder": { "type": "file", "descriptor": "file:///tmp/metrics.desc", "protoMessageType": "Metrics" }, "flattenSpec": { "useFieldDiscovery": true, "fields": [ { "type": "path", "name": "someRecord_subInt", "expr": "$.someRecord.subInt" } ] } }, ... } "},{"title":"Kafka","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#kafka","content":"kafka is a special input format that wraps a regular input format (which goes in valueFormat) and allows you to parse the Kafka metadata (timestamp, headers, and key) that is part of Kafka messages. It should only be used when ingesting from Apache Kafka. Configure the Kafka inputFormat as follows: Field\tType\tDescription\tRequiredtype\tString\tSet value to kafka.\tyes valueFormat\tInputFormat\tAny InputFormat to parse the Kafka value payload. For details about specifying the input format, see Specifying data format.\tyes timestampColumnName\tString\tName of the column for the kafka record's timestamp.\tno (default = "kafka.timestamp") headerColumnPrefix\tString\tCustom prefix for all the header columns.\tno (default = "kafka.header.") headerFormat\tObject\theaderFormat specifies how to parse the Kafka headers. Supports String types. Because Kafka header values are bytes, the parser decodes them as UTF-8 encoded strings. To change this behavior, implement your own parser based on the encoding style. Change the 'encoding' type in KafkaStringHeaderFormat to match your custom implementation.\tno keyFormat\tInputFormat\tAny input format to parse the Kafka key. It only processes the first entry of the inputFormat field. For details, see Specifying data format.\tno keyColumnName\tString\tName of the column for the kafka record's key.\tno (default = "kafka.key") The Kafka input format augments the payload with information from the Kafka timestamp, headers, and key. If there are conflicts between column names in the payload and those created from the metadata, the payload takes precedence. This ensures that upgrading a Kafka ingestion to use the Kafka input format (by taking its existing input format and setting it as the valueFormat) can be done without losing any of the payload data. Here is a minimal example that only augments the parsed payload with the Kafka timestamp column: "ioConfig": { "inputFormat": { "type": "kafka", "valueFormat": { "type": "json" } }, ... } Here is a complete example: "ioConfig": { "inputFormat": { "type": "kafka", "valueFormat": { "type": "json" }, "timestampColumnName": "kafka.timestamp", "headerFormat": { "type": "string", "encoding": "UTF-8" }, "headerColumnPrefix": "kafka.header.", "keyFormat": { "type": "tsv", "findColumnsFromHeader": false, "columns": ["x"] }, "keyColumnName": "kafka.key" }, ... 
} If you want to use kafka.timestamp as Druid's primary timestamp (__time), specify it as the value for column in the timestampSpec: "timestampSpec": { "column": "kafka.timestamp", "format": "millis" } Similarly, if you want to use a timestamp extracted from the Kafka header: "timestampSpec": { "column": "kafka.header.myTimestampHeader", "format": "millis" } "},{"title":"FlattenSpec","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#flattenspec","content":"You can use the flattenSpec object to flatten nested data, as an alternative to the Druid nested columns feature, and for nested input formats unsupported by the feature. It is an object within the inputFormat object. See Nested columns for information on ingesting and storing nested data in an Apache Druid column as a COMPLEX<json> data type. Configure your flattenSpec as follows: Field\tDescription\tDefaultuseFieldDiscovery\tIf true, interpret all root-level fields as available fields for usage by timestampSpec, transformSpec, dimensionsSpec, and metricsSpec. If false, only explicitly specified fields (see fields) will be available for use.\ttrue fields\tSpecifies the fields of interest and how they are accessed. See Field flattening specifications for more detail.\t[] For example: "flattenSpec": { "useFieldDiscovery": true, "fields": [ { "name": "baz", "type": "root" }, { "name": "foo_bar", "type": "path", "expr": "$.foo.bar" }, { "name": "foo_other_bar", "type": "tree", "nodes": ["foo", "other", "bar"] }, { "name": "first_food", "type": "jq", "expr": ".thing.food[1]" } ] } After Druid reads the input data records, it applies the flattenSpec before applying any other specs such as timestampSpec, transformSpec, dimensionsSpec, or metricsSpec. This makes it possible to extract timestamps from flattened data, for example, and to refer to flattened data in transformations, in your dimension list, and when generating metrics. Flattening is only supported for data formats that support nesting, including avro, json, orc, and parquet. "},{"title":"Field flattening specifications","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#field-flattening-specifications","content":"Each entry in the fields list can have the following components: Field\tDescription\tDefaulttype\tOptions are as follows: root, referring to a field at the root level of the record. Only really useful if useFieldDiscovery is false.path, referring to a field using JsonPath notation. Supported by most data formats that offer nesting, including avro, json, orc, and parquet.jq, referring to a field using jackson-jq notation. Only supported for the json format.tree, referring to a nested field from the root level of the record. Useful and more efficient than path or jq if a simple hierarchical fetch is required. Only supported for the json format. none (required) name\tName of the field after flattening. This name can be referred to by the timestampSpec, transformSpec, dimensionsSpec, and metricsSpec.\tnone (required) expr\tExpression for accessing the field while flattening. For type path, this should be JsonPath. For type jq, this should be jackson-jq notation. For other types, this parameter is ignored.\tnone (required for types path and jq) nodes\tFor tree only. Multiple-expression field for accessing the field while flattening, representing the hierarchy of field names to read. 
For other types, this parameter must not be provided.\tnone (required for type tree) "},{"title":"Notes on flattening","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#notes-on-flattening","content":"For convenience, when defining a root-level field, it is possible to define only the field name, as a string, instead of a JSON object. For example, {"name": "baz", "type": "root"} is equivalent to "baz". Enabling useFieldDiscovery will only automatically detect "simple" fields at the root level that correspond to data types that Druid supports. This includes strings, numbers, and lists of strings or numbers. Other types will not be automatically detected, and must be specified explicitly in the fields list. Duplicate field names are not allowed. An exception will be thrown. If useFieldDiscovery is enabled, any discovered field with the same name as one already defined in the fields list will be skipped, rather than added twice. JSONPath evaluator is useful for testing path-type expressions. jackson-jq supports a subset of the full jq syntax. Please refer to the jackson-jq documentation for details. JsonPath supports a bunch of functions, but not all of these functions are supported by Druid now. Following matrix shows the current supported JsonPath functions and corresponding data formats. Please also note the output data type of these functions. Function\tDescription\tOutput type\tjson\torc\tavro\tparquetmin()\tProvides the min value of an array of numbers\tDouble\t✓\t✓\t✓\t✓ max()\tProvides the max value of an array of numbers\tDouble\t✓\t✓\t✓\t✓ avg()\tProvides the average value of an array of numbers\tDouble\t✓\t✓\t✓\t✓ stddev()\tProvides the standard deviation value of an array of numbers\tDouble\t✓\t✓\t✓\t✓ length()\tProvides the length of an array\tInteger\t✓\t✓\t✓\t✓ sum()\tProvides the sum value of an array of numbers\tDouble\t✓\t✓\t✓\t✓ concat(X)\tProvides a concatenated version of the path output with a new item\tlike input\t✓\t✗\t✗\t✗ append(X)\tadd an item to the json path output array\tlike input\t✓\t✗\t✗\t✗ keys()\tProvides the property keys (An alternative for terminal tilde ~)\tSet<E>\t✗\t✗\t✗\t✗ "},{"title":"Parser","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#parser","content":"info The Parser is deprecated for native batch tasks, Kafka indexing service, and Kinesis indexing service. Consider using the input format instead for these types of ingestion. This section lists all default and core extension parsers. For community extension parsers, please see our community extensions list. "},{"title":"String Parser","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#string-parser","content":"string typed parsers operate on text based inputs that can be split into individual records by newlines. Each line can be further parsed using parseSpec. Field\tType\tDescription\tRequiredtype\tString\tSet value to string for most cases. Otherwise use hadoopyString for Hadoop indexing.\tyes parseSpec\tJSON Object\tSpecifies the format, timestamp, and dimensions of the data.\tyes "},{"title":"Avro Hadoop Parser","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#avro-hadoop-parser","content":"info You need to include the druid-avro-extensions as an extension to use the Avro Hadoop Parser. info See the Avro Types section for how Avro types are handled in Druid This parser is for Hadoop batch ingestion. 
The inputFormat of inputSpec in ioConfig must be set to "org.apache.druid.data.input.avro.AvroValueInputFormat". You may want to set Avro reader's schema in jobProperties in tuningConfig, e.g.: "avro.schema.input.value.path": "/path/to/your/schema.avsc" or"avro.schema.input.value": "your_schema_JSON_object". If the Avro reader's schema is not set, the schema in Avro object container file will be used. See Avro specification for more information. Field\tType\tDescription\tRequiredtype\tString\tSet value to avro_hadoop.\tyes parseSpec\tJSON Object\tSpecifies the timestamp and dimensions of the data. Should be an "avro" parseSpec.\tyes fromPigAvroStorage\tBoolean\tSpecifies whether the data file is stored using AvroStorage.\tno(default == false) An Avro parseSpec can contain a flattenSpec using either the "root" or "path" field types, which can be used to read nested Avro records. The "jq" and "tree" field type is not currently supported for Avro. For example, using Avro Hadoop parser with custom reader's schema file: { "type" : "index_hadoop", "spec" : { "dataSchema" : { "dataSource" : "", "parser" : { "type" : "avro_hadoop", "parseSpec" : { "format": "avro", "timestampSpec": <standard timestampSpec>, "dimensionsSpec": <standard dimensionsSpec>, "flattenSpec": <optional> } } }, "ioConfig" : { "type" : "hadoop", "inputSpec" : { "type" : "static", "inputFormat": "org.apache.druid.data.input.avro.AvroValueInputFormat", "paths" : "" } }, "tuningConfig" : { "jobProperties" : { "avro.schema.input.value.path" : "/path/to/my/schema.avsc" } } } } "},{"title":"ORC Hadoop Parser","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#orc-hadoop-parser","content":"info You need to include the druid-orc-extensions as an extension to use the ORC Hadoop Parser. info If you are considering upgrading from earlier than 0.15.0 to 0.15.0 or a higher version, please read Migration from 'contrib' extension carefully. This parser is for Hadoop batch ingestion. The inputFormat of inputSpec in ioConfig must be set to "org.apache.orc.mapreduce.OrcInputFormat". Field\tType\tDescription\tRequiredtype\tString\tSet value to orc.\tyes parseSpec\tJSON Object\tSpecifies the timestamp and dimensions of the data (timeAndDims and orc format) and a flattenSpec (orc format).\tyes The parser supports two parseSpec formats: orc and timeAndDims. orc supports auto field discovery and flattening, if specified with a flattenSpec. If no flattenSpec is specified, useFieldDiscovery will be enabled by default. Specifying a dimensionSpec is optional if useFieldDiscovery is enabled: if a dimensionSpec is supplied, the list of dimensions it defines will be the set of ingested dimensions, if missing the discovered fields will make up the list. timeAndDims parse spec must specify which fields will be extracted as dimensions through the dimensionSpec. All column types are supported, with the exception of union types. Columns oflist type, if filled with primitives, may be used as a multi-value dimension, or specific elements can be extracted withflattenSpec expressions. Likewise, primitive fields may be extracted from map and struct types in the same manner. Auto field discovery will automatically create a string dimension for every (non-timestamp) primitive or list of primitives, as well as any flatten expressions defined in the flattenSpec. 
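As an illustrative sketch only (the column names listDim and timestamp are hypothetical and mirror the examples that follow), an orc parseSpec fragment that ingests a list-of-primitives column directly as a multi-value dimension while also extracting one of its elements might look like: "parseSpec": { "format": "orc", "flattenSpec": { "useFieldDiscovery": true, "fields": [ { "type": "path", "name": "listDimFirstItem", "expr": "$.listDim[1]" } ] }, "timestampSpec": { "column": "timestamp", "format": "millis" }, "dimensionsSpec": { "dimensions": [ "listDim", "listDimFirstItem" ] } } 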
Hadoop job properties Like most Hadoop jobs, the best outcomes will add "mapreduce.job.user.classpath.first": "true" or"mapreduce.job.classloader": "true" to the jobProperties section of tuningConfig. Note that it is likely if using"mapreduce.job.classloader": "true" that you will need to set mapreduce.job.classloader.system.classes to include-org.apache.hadoop.hive. to instruct Hadoop to load org.apache.hadoop.hive classes from the application jars instead of system jars, e.g. ... "mapreduce.job.classloader": "true", "mapreduce.job.classloader.system.classes" : "java., javax.accessibility., javax.activation., javax.activity., javax.annotation., javax.annotation.processing., javax.crypto., javax.imageio., javax.jws., javax.lang.model., -javax.management.j2ee., javax.management., javax.naming., javax.net., javax.print., javax.rmi., javax.script., -javax.security.auth.message., javax.security.auth., javax.security.cert., javax.security.sasl., javax.sound., javax.sql., javax.swing., javax.tools., javax.transaction., -javax.xml.registry., -javax.xml.rpc., javax.xml., org.w3c.dom., org.xml.sax., org.apache.commons.logging., org.apache.log4j., -org.apache.hadoop.hbase., -org.apache.hadoop.hive., org.apache.hadoop., core-default.xml, hdfs-default.xml, mapred-default.xml, yarn-default.xml", ... This is due to the hive-storage-api dependency of theorc-mapreduce library, which provides some classes under the org.apache.hadoop.hive package. If instead using the setting "mapreduce.job.user.classpath.first": "true", then this will not be an issue. Examples orc parser, orc parseSpec, auto field discovery, flatten expressions { "type": "index_hadoop", "spec": { "ioConfig": { "type": "hadoop", "inputSpec": { "type": "static", "inputFormat": "org.apache.orc.mapreduce.OrcInputFormat", "paths": "path/to/file.orc" }, ... }, "dataSchema": { "dataSource": "example", "parser": { "type": "orc", "parseSpec": { "format": "orc", "flattenSpec": { "useFieldDiscovery": true, "fields": [ { "type": "path", "name": "nestedDim", "expr": "$.nestedData.dim1" }, { "type": "path", "name": "listDimFirstItem", "expr": "$.listDim[1]" } ] }, "timestampSpec": { "column": "timestamp", "format": "millis" } } }, ... }, "tuningConfig": <hadoop-tuning-config> } } } orc parser, orc parseSpec, field discovery with no flattenSpec or dimensionSpec { "type": "index_hadoop", "spec": { "ioConfig": { "type": "hadoop", "inputSpec": { "type": "static", "inputFormat": "org.apache.orc.mapreduce.OrcInputFormat", "paths": "path/to/file.orc" }, ... }, "dataSchema": { "dataSource": "example", "parser": { "type": "orc", "parseSpec": { "format": "orc", "timestampSpec": { "column": "timestamp", "format": "millis" } } }, ... }, "tuningConfig": <hadoop-tuning-config> } } } orc parser, orc parseSpec, no autodiscovery { "type": "index_hadoop", "spec": { "ioConfig": { "type": "hadoop", "inputSpec": { "type": "static", "inputFormat": "org.apache.orc.mapreduce.OrcInputFormat", "paths": "path/to/file.orc" }, ... }, "dataSchema": { "dataSource": "example", "parser": { "type": "orc", "parseSpec": { "format": "orc", "flattenSpec": { "useFieldDiscovery": false, "fields": [ { "type": "path", "name": "nestedDim", "expr": "$.nestedData.dim1" }, { "type": "path", "name": "listDimFirstItem", "expr": "$.listDim[1]" } ] }, "timestampSpec": { "column": "timestamp", "format": "millis" }, "dimensionsSpec": { "dimensions": [ "dim1", "dim3", "nestedDim", "listDimFirstItem" ], "dimensionExclusions": [], "spatialDimensions": [] } } }, ... 
}, "tuningConfig": <hadoop-tuning-config> } } } orc parser, timeAndDims parseSpec { "type": "index_hadoop", "spec": { "ioConfig": { "type": "hadoop", "inputSpec": { "type": "static", "inputFormat": "org.apache.orc.mapreduce.OrcInputFormat", "paths": "path/to/file.orc" }, ... }, "dataSchema": { "dataSource": "example", "parser": { "type": "orc", "parseSpec": { "format": "timeAndDims", "timestampSpec": { "column": "timestamp", "format": "auto" }, "dimensionsSpec": { "dimensions": [ "dim1", "dim2", "dim3", "listDim" ], "dimensionExclusions": [], "spatialDimensions": [] } } }, ... }, "tuningConfig": <hadoop-tuning-config> } } "},{"title":"Parquet Hadoop Parser","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#parquet-hadoop-parser","content":"info You need to include the druid-parquet-extensions as an extension to use the Parquet Hadoop Parser. The Parquet Hadoop parser is for Hadoop batch ingestion and parses Parquet files directly. The inputFormat of inputSpec in ioConfig must be set to org.apache.druid.data.input.parquet.DruidParquetInputFormat. The Parquet Hadoop Parser supports auto field discovery and flattening if provided with aflattenSpec with the parquet parseSpec. Parquet nested list and maplogical types should operate correctly with JSON path expressions for all supported types. Field\tType\tDescription\tRequiredtype\tString\tSet value to parquet.\tyes parseSpec\tJSON Object\tSpecifies the timestamp and dimensions of the data, and optionally, a flatten spec. Valid parseSpec formats are timeAndDims and parquet.\tyes binaryAsString\tBoolean\tSpecifies if the bytes parquet column which is not logically marked as a string or enum type should be treated as a UTF-8 encoded string.\tno(default = false) When the time dimension is a DateType column, a format should not be supplied. When the format is UTF8 (String), either auto or a explicitly definedformat is required. Parquet Hadoop Parser vs Parquet Avro Hadoop Parser Both parsers read from Parquet files, but slightly differently. The main differences are: The Parquet Hadoop Parser uses a simple conversion while the Parquet Avro Hadoop Parser converts Parquet data into avro records first with the parquet-avro library and then parses avro data using the druid-avro-extensions module to ingest into Druid.The Parquet Hadoop Parser sets a hadoop job propertyparquet.avro.add-list-element-records to false (which normally defaults to true), in order to 'unwrap' primitive list elements into multi-value dimensions.The Parquet Hadoop Parser supports int96 Parquet values, while the Parquet Avro Hadoop Parser does not. There may also be some subtle differences in the behavior of JSON path expression evaluation of flattenSpec. Based on those differences, we suggest using the Parquet Hadoop Parser over the Parquet Avro Hadoop Parser to allow ingesting data beyond the schema constraints of Avro conversion. However, the Parquet Avro Hadoop Parser was the original basis for supporting the Parquet format, and as such it is a bit more mature. Examples parquet parser, parquet parseSpec { "type": "index_hadoop", "spec": { "ioConfig": { "type": "hadoop", "inputSpec": { "type": "static", "inputFormat": "org.apache.druid.data.input.parquet.DruidParquetInputFormat", "paths": "path/to/file.parquet" }, ... 
}, "dataSchema": { "dataSource": "example", "parser": { "type": "parquet", "parseSpec": { "format": "parquet", "flattenSpec": { "useFieldDiscovery": true, "fields": [ { "type": "path", "name": "nestedDim", "expr": "$.nestedData.dim1" }, { "type": "path", "name": "listDimFirstItem", "expr": "$.listDim[1]" } ] }, "timestampSpec": { "column": "timestamp", "format": "auto" }, "dimensionsSpec": { "dimensions": [], "dimensionExclusions": [], "spatialDimensions": [] } } }, ... }, "tuningConfig": <hadoop-tuning-config> } } } parquet parser, timeAndDims parseSpec { "type": "index_hadoop", "spec": { "ioConfig": { "type": "hadoop", "inputSpec": { "type": "static", "inputFormat": "org.apache.druid.data.input.parquet.DruidParquetInputFormat", "paths": "path/to/file.parquet" }, ... }, "dataSchema": { "dataSource": "example", "parser": { "type": "parquet", "parseSpec": { "format": "timeAndDims", "timestampSpec": { "column": "timestamp", "format": "auto" }, "dimensionsSpec": { "dimensions": [ "dim1", "dim2", "dim3", "listDim" ], "dimensionExclusions": [], "spatialDimensions": [] } } }, ... }, "tuningConfig": <hadoop-tuning-config> } } "},{"title":"Parquet Avro Hadoop Parser","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#parquet-avro-hadoop-parser","content":"info Consider using the Parquet Hadoop Parser over this parser to ingest Parquet files. See Parquet Hadoop Parser vs Parquet Avro Hadoop Parserfor the differences between those parsers. info You need to include both the druid-parquet-extensions[druid-avro-extensions] as extensions to use the Parquet Avro Hadoop Parser. The Parquet Avro Hadoop Parser is for Hadoop batch ingestion. This parser first converts the Parquet data into Avro records, and then parses them to ingest into Druid. The inputFormat of inputSpec in ioConfig must be set to org.apache.druid.data.input.parquet.DruidParquetAvroInputFormat. The Parquet Avro Hadoop Parser supports auto field discovery and flattening if provided with aflattenSpec with the avro parseSpec. Parquet nested list and maplogical types should operate correctly with JSON path expressions for all supported types. This parser sets a hadoop job propertyparquet.avro.add-list-element-records to false (which normally defaults to true), in order to 'unwrap' primitive list elements into multi-value dimensions. Note that the int96 Parquet value type is not supported with this parser. Field\tType\tDescription\tRequiredtype\tString\tSet value to parquet-avro.\tyes parseSpec\tJSON Object\tSpecifies the timestamp and dimensions of the data, and optionally, a flatten spec. Should be avro.\tyes binaryAsString\tBoolean\tSpecifies if the bytes parquet column which is not logically marked as a string or enum type should be treated as a UTF-8 encoded string.\tno(default = false) When the time dimension is a DateType column, a format should not be supplied. When the format is UTF8 (String), either auto or an explicitly defined format is required. Example { "type": "index_hadoop", "spec": { "ioConfig": { "type": "hadoop", "inputSpec": { "type": "static", "inputFormat": "org.apache.druid.data.input.parquet.DruidParquetAvroInputFormat", "paths": "path/to/file.parquet" }, ... 
}, "dataSchema": { "dataSource": "example", "parser": { "type": "parquet-avro", "parseSpec": { "format": "avro", "flattenSpec": { "useFieldDiscovery": true, "fields": [ { "type": "path", "name": "nestedDim", "expr": "$.nestedData.dim1" }, { "type": "path", "name": "listDimFirstItem", "expr": "$.listDim[1]" } ] }, "timestampSpec": { "column": "timestamp", "format": "auto" }, "dimensionsSpec": { "dimensions": [], "dimensionExclusions": [], "spatialDimensions": [] } } }, ... }, "tuningConfig": <hadoop-tuning-config> } } } "},{"title":"Avro Stream Parser","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#avro-stream-parser","content":"info You need to include the druid-avro-extensions as an extension to use the Avro Stream Parser. info See the Avro Types section for how Avro types are handled in Druid This parser is for stream ingestion and reads Avro data from a stream directly. Field\tType\tDescription\tRequiredtype\tString\tSet value to avro_stream.\tno avroBytesDecoder\tJSON Object\tSpecifies [avroBytesDecoder](#Avro Bytes Decoder) to decode bytes to Avro record.\tyes parseSpec\tJSON Object\tSpecifies the timestamp and dimensions of the data. Should be an "avro" parseSpec.\tyes An Avro parseSpec can contain a flattenSpec using either the "root" or "path" field types, which can be used to read nested Avro records. The "jq" and "tree" field type is not currently supported for Avro. For example, using Avro stream parser with schema repo Avro bytes decoder: "parser" : { "type" : "avro_stream", "avroBytesDecoder" : { "type" : "schema_repo", "subjectAndIdConverter" : { "type" : "avro_1124", "topic" : "${YOUR_TOPIC}" }, "schemaRepository" : { "type" : "avro_1124_rest_client", "url" : "${YOUR_SCHEMA_REPO_END_POINT}", } }, "parseSpec" : { "format": "avro", "timestampSpec": <standard timestampSpec>, "dimensionsSpec": <standard dimensionsSpec>, "flattenSpec": <optional> } } "},{"title":"Protobuf Parser","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#protobuf-parser","content":"info You need to include the druid-protobuf-extensions as an extension to use the Protobuf Parser. This parser is for stream ingestion and reads Protocol buffer data from a stream directly. Field\tType\tDescription\tRequiredtype\tString\tSet value to protobuf.\tyes protoBytesDecoder\tJSON Object\tSpecifies how to decode bytes to Protobuf record.\tyes parseSpec\tJSON Object\tSpecifies the timestamp and dimensions of the data. The format must be JSON. See JSON ParseSpec for more configuration options. Note that timeAndDims parseSpec is no longer supported.\tyes Sample spec: "parser": { "type": "protobuf", "protoBytesDecoder": { "type": "file", "descriptor": "file:///tmp/metrics.desc", "protoMessageType": "Metrics" }, "parseSpec": { "format": "json", "timestampSpec": { "column": "timestamp", "format": "auto" }, "dimensionsSpec": { "dimensions": [ "unit", "http_method", "http_code", "page", "metricType", "server" ], "dimensionExclusions": [ "timestamp", "value" ] } } } See the extension description for more details and examples. Protobuf Bytes Decoder If type is not included, the protoBytesDecoder defaults to schema_registry. File-based Protobuf Bytes Decoder This Protobuf bytes decoder first read a descriptor file, and then parse it to get schema used to decode the Protobuf record from bytes. 
Field\tType\tDescription\tRequiredtype\tString\tSet value to file.\tyes descriptor\tString\tProtobuf descriptor file name in the classpath or URL.\tyes protoMessageType\tString\tProtobuf message type in the descriptor. Both short name and fully qualified name are accepted. The parser uses the first message type found in the descriptor if not specified.\tno Sample spec: "protoBytesDecoder": { "type": "file", "descriptor": "file:///tmp/metrics.desc", "protoMessageType": "Metrics" } Inline Descriptor Protobuf Bytes Decoder This Protobuf bytes decoder lets you provide the contents of a Protobuf descriptor file inline, encoded as a Base64 string. The decoder parses the descriptor to get the schema used to decode the Protobuf record from bytes. Field\tType\tDescription\tRequiredtype\tString\tSet value to inline.\tyes descriptorString\tString\tA compiled Protobuf descriptor, encoded as a Base64 string.\tyes protoMessageType\tString\tProtobuf message type in the descriptor. Both short name and fully qualified name are accepted. The parser uses the first message type found in the descriptor if not specified.\tno Sample spec: "protoBytesDecoder": { "type": "inline", "descriptorString": <Contents of a Protobuf descriptor file encoded as Base64 string>, "protoMessageType": "Metrics" } Confluent Schema Registry-based Protobuf Bytes Decoder This Protobuf bytes decoder first extracts a unique id from the input message bytes, and then uses it to look up the schema in the Schema Registry used to decode the Protobuf record from bytes. For details, see the Schema Registry documentation and repository. Field\tType\tDescription\tRequiredtype\tString\tSet value to schema_registry.\tyes url\tString\tSpecifies the URL endpoint of the Schema Registry.\tyes capacity\tInteger\tSpecifies the max size of the cache (default = Integer.MAX_VALUE).\tno urls\tArray<String>\tSpecifies the URL endpoints of the multiple Schema Registry instances.\tyes (if url is not provided) config\tJson\tAdditional configurations to send to the Schema Registry. This can be supplied via a DynamicConfigProvider.\tno headers\tJson\tHeaders to send to the Schema Registry. This can be supplied via a DynamicConfigProvider.\tno For a single Schema Registry instance, use the url field; for multiple instances, use urls. Single Instance: ... "protoBytesDecoder": { "url": <schema-registry-url>, "type": "schema_registry" } ... Multiple Instances: ... "protoBytesDecoder": { "urls": [<schema-registry-url-1>, <schema-registry-url-2>, ...], "type": "schema_registry", "capacity": 100, "config" : { "basic.auth.credentials.source": "USER_INFO", "basic.auth.user.info": "fred:letmein", "schema.registry.ssl.truststore.location": "/some/secrets/kafka.client.truststore.jks", "schema.registry.ssl.truststore.password": "<password>", "schema.registry.ssl.keystore.location": "/some/secrets/kafka.client.keystore.jks", "schema.registry.ssl.keystore.password": "<password>", "schema.registry.ssl.key.password": "<password>", ... }, "headers": { "traceID" : "b29c5de2-0db4-490b-b421", "timeStamp" : "1577191871865", "druid.dynamic.config.provider":{ "type":"mapString", "config":{ "registry.header.prop.1":"value.1", "registry.header.prop.2":"value.2" } } ... } } ... "},{"title":"ParseSpec","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#parsespec","content":"info The Parser is deprecated for native batch tasks, Kafka indexing service, and Kinesis indexing service. Consider using the input format instead for these types of ingestion. 
ParseSpecs serve two purposes: The String Parser uses them to determine the format (i.e., JSON, CSV, TSV) of incoming rows. All Parsers use them to determine the timestamp and dimensions of incoming rows. If format is not included, the parseSpec defaults to tsv. "},{"title":"JSON ParseSpec","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#json-parsespec","content":"Use this with the String Parser to load JSON. Field\tType\tDescription\tRequiredformat\tString\tjson\tno timestampSpec\tJSON Object\tSpecifies the column and format of the timestamp.\tyes dimensionsSpec\tJSON Object\tSpecifies the dimensions of the data.\tyes flattenSpec\tJSON Object\tSpecifies flattening configuration for nested JSON data. See flattenSpec for more info.\tno Sample spec: "parseSpec": { "format" : "json", "timestampSpec" : { "column" : "timestamp" }, "dimensionsSpec" : { "dimensions" : ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"] } } "},{"title":"JSON Lowercase ParseSpec","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#json-lowercase-parsespec","content":"info The jsonLowercase parser is deprecated and may be removed in a future version of Druid. This is a special variation of the JSON ParseSpec that lower cases all the column names in the incoming JSON data. This parseSpec is required if you are updating to Druid 0.7.x from Druid 0.6.x, are directly ingesting JSON with mixed case column names, do not have any ETL in place to lower case those column names, and would like to make queries that include the data you created using 0.6.x and 0.7.x. Field\tType\tDescription\tRequiredformat\tString\tjsonLowercase\tyes timestampSpec\tJSON Object\tSpecifies the column and format of the timestamp.\tyes dimensionsSpec\tJSON Object\tSpecifies the dimensions of the data.\tyes "},{"title":"CSV ParseSpec","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#csv-parsespec","content":"Use this with the String Parser to load CSV. Strings are parsed using the com.opencsv library. Field\tType\tDescription\tRequiredformat\tString\tcsv\tyes timestampSpec\tJSON Object\tSpecifies the column and format of the timestamp.\tyes dimensionsSpec\tJSON Object\tSpecifies the dimensions of the data.\tyes listDelimiter\tString\tA custom delimiter for multi-value dimensions.\tno (default = ctrl+A) columns\tJSON array\tSpecifies the columns of the data.\tyes Sample spec: "parseSpec": { "format" : "csv", "timestampSpec" : { "column" : "timestamp" }, "columns" : ["timestamp","page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city","added","deleted","delta"], "dimensionsSpec" : { "dimensions" : ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"] } } CSV Index Tasks If your input files contain a header, the columns field is optional and you don't need to set it. Instead, you can set the hasHeaderRow field to true, which makes Druid automatically extract the column information from the header. Otherwise, you must set the columns field and ensure that it matches the columns of your input data, in the same order. Also, you can skip some header rows by setting skipHeaderRows in your parseSpec. If both skipHeaderRows and hasHeaderRow options are set, skipHeaderRows is applied first. 
For example, if you set skipHeaderRows to 2 and hasHeaderRow to true, Druid will skip the first two lines and then extract column information from the third line. Note that hasHeaderRow and skipHeaderRows are effective only for non-Hadoop batch index tasks. Other types of index tasks will fail with an exception. Other CSV Ingestion Tasks The columns field must be included and and ensure that the order of the fields matches the columns of your input data in the same order. "},{"title":"TSV / Delimited ParseSpec","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#tsv--delimited-parsespec","content":"Use this with the String Parser to load any delimited text that does not require special escaping. By default, the delimiter is a tab, so this will load TSV. Field\tType\tDescription\tRequiredformat\tString\ttsv\tyes timestampSpec\tJSON Object\tSpecifies the column and format of the timestamp.\tyes dimensionsSpec\tJSON Object\tSpecifies the dimensions of the data.\tyes delimiter\tString\tA custom delimiter for data values.\tno (default = \\t) listDelimiter\tString\tA custom delimiter for multi-value dimensions.\tno (default = ctrl+A) columns\tJSON String array\tSpecifies the columns of the data.\tyes Sample spec: "parseSpec": { "format" : "tsv", "timestampSpec" : { "column" : "timestamp" }, "columns" : ["timestamp","page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city","added","deleted","delta"], "delimiter":"|", "dimensionsSpec" : { "dimensions" : ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"] } } Be sure to change the delimiter to the appropriate delimiter for your data. Like CSV, you must specify the columns and which subset of the columns you want indexed. TSV (Delimited) Index Tasks If your input files contain a header, the columns field is optional and doesn't need to be set. Instead, you can set the hasHeaderRow field to true, which makes Druid automatically extract the column information from the header. Otherwise, you must set the columns field and ensure that field must match the columns of your input data in the same order. Also, you can skip some header rows by setting skipHeaderRows in your parseSpec. If both skipHeaderRows and hasHeaderRow options are set,skipHeaderRows is first applied. For example, if you set skipHeaderRows to 2 and hasHeaderRow to true, Druid will skip the first two lines and then extract column information from the third line. Note that hasHeaderRow and skipHeaderRows are effective only for non-Hadoop batch index tasks. Other types of index tasks will fail with an exception. Other TSV (Delimited) Ingestion Tasks The columns field must be included and and ensure that the order of the fields matches the columns of your input data in the same order. "},{"title":"Regex ParseSpec","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#regex-parsespec","content":""parseSpec":{ "format" : "regex", "timestampSpec" : { "column" : "timestamp" }, "dimensionsSpec" : { "dimensions" : [<your_list_of_dimensions>] }, "columns" : [<your_columns_here>], "pattern" : <regex pattern for partitioning data> } The columns field must match the columns of your regex matching groups in the same order. If columns are not provided, default columns names ("column_1", "column2", ... "column_n") will be assigned. Ensure that your column names include all your dimensions. 
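For illustration only, assuming a hypothetical space-delimited log line of the form "<ISO-8601 timestamp> <host> <status>", a concrete regex parseSpec might look like: "parseSpec": { "format": "regex", "pattern": "^(\\S+) (\\S+) (\\d+)$", "columns": ["timestamp", "host", "status"], "timestampSpec": { "column": "timestamp", "format": "iso" }, "dimensionsSpec": { "dimensions": ["host", "status"] } } Each capturing group in pattern is assigned, in order, to the corresponding entry in columns. 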
"},{"title":"JavaScript ParseSpec","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#javascript-parsespec","content":""parseSpec":{ "format" : "javascript", "timestampSpec" : { "column" : "timestamp" }, "dimensionsSpec" : { "dimensions" : ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"] }, "function" : "function(str) { var parts = str.split(\\"-\\"); return { one: parts[0], two: parts[1] } }" } Note with the JavaScript parser that data must be fully parsed and returned as a {key:value} format in the JS logic. This means any flattening or parsing multi-dimensional values must be done here. info JavaScript-based functionality is disabled by default. Please refer to the Druid JavaScript programming guide for guidelines about using Druid's JavaScript functionality, including instructions on how to enable it. "},{"title":"TimeAndDims ParseSpec","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#timeanddims-parsespec","content":"Use this with non-String Parsers to provide them with timestamp and dimensions information. Non-String Parsers handle all formatting decisions on their own, without using the ParseSpec. Field\tType\tDescription\tRequiredformat\tString\ttimeAndDims\tyes timestampSpec\tJSON Object\tSpecifies the column and format of the timestamp.\tyes dimensionsSpec\tJSON Object\tSpecifies the dimensions of the data.\tyes "},{"title":"Orc ParseSpec","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#orc-parsespec","content":"Use this with the Hadoop ORC Parser to load ORC files. Field\tType\tDescription\tRequiredformat\tString\torc\tno timestampSpec\tJSON Object\tSpecifies the column and format of the timestamp.\tyes dimensionsSpec\tJSON Object\tSpecifies the dimensions of the data.\tyes flattenSpec\tJSON Object\tSpecifies flattening configuration for nested JSON data. See flattenSpec for more info.\tno "},{"title":"Parquet ParseSpec","type":1,"pageTitle":"Source input formats","url":"/docs/27.0.0/ingestion/data-formats#parquet-parsespec","content":"Use this with the Hadoop Parquet Parser to load Parquet files. Field\tType\tDescription\tRequiredformat\tString\tparquet\tno timestampSpec\tJSON Object\tSpecifies the column and format of the timestamp.\tyes dimensionsSpec\tJSON Object\tSpecifies the dimensions of the data.\tyes flattenSpec\tJSON Object\tSpecifies flattening configuration for nested JSON data. See flattenSpec for more info.\tno "},{"title":"SQL-based ingestion reference","type":0,"sectionRef":"#","url":"/docs/27.0.0/multi-stage-query/reference","content":"","keywords":""},{"title":"SQL reference","type":1,"pageTitle":"SQL-based ingestion reference","url":"/docs/27.0.0/multi-stage-query/reference#sql-reference","content":"This topic is a reference guide for the multi-stage query architecture in Apache Druid. For examples of real-world usage, refer to the Examples page. INSERT and REPLACE load data into a Druid datasource from either an external input source, or from another datasource. When loading from an external datasource, you typically must provide the kind of input source, the data format, and the schema (signature) of the input file. Druid provides table functions to allow you to specify the external file. There are two kinds. EXTERN works with the JSON-serialized specs for the three items, using the same JSON you would use in native ingest. 
A set of other, input-source-specific functions use SQL syntax to specify the format and the input schema. There is one function for each input source. The input-source-specific functions allow you to use SQL query parameters to specify the set of files (or URIs), making it easy to reuse the same SQL statement for each ingest: just specify the set of files to use each time. "},{"title":"EXTERN Function","type":1,"pageTitle":"SQL-based ingestion reference","url":"/docs/27.0.0/multi-stage-query/reference#extern-function","content":"Use the EXTERN function to read external data. The function has two variations. Function variation 1, with the input schema expressed as JSON: SELECT <column> FROM TABLE( EXTERN( '<Druid input source>', '<Druid input format>', '<row signature>' ) ) EXTERN consists of the following parts: Any Druid input source as a JSON-encoded string.Any Druid input format as a JSON-encoded string.A row signature, as a JSON-encoded array of column descriptors. Each column descriptor must have aname and a type. The type can be string, long, double, or float. This row signature is used to map the external data into the SQL layer. Variation 2, with the input schema expressed in SQL using an EXTEND clause. (See the next section for more detail on EXTEND). This format also uses named arguments to make the SQL a bit easier to read: SELECT <column> FROM TABLE( EXTERN( inputSource => '<Druid input source>', inputFormat => '<Druid input format>' )) (<columns>) The input source and format are as above. The columns are expressed as in a SQL CREATE TABLE. Example: (timestamp VARCHAR, metricType VARCHAR, value BIGINT). The optional EXTEND keyword can precede the column list: EXTEND (timestamp VARCHAR...). For more information, see Read external data with EXTERN. "},{"title":"INSERT","type":1,"pageTitle":"SQL-based ingestion reference","url":"/docs/27.0.0/multi-stage-query/reference#insert","content":"Use the INSERT statement to insert data. Unlike standard SQL, INSERT loads data into the target table according to column name, not positionally. If necessary, use AS in your SELECT column list to assign the correct names. Do not rely on their positions within the SELECT clause. Statement format: INSERT INTO <table name> < SELECT query > PARTITIONED BY <time frame> [ CLUSTERED BY <column list> ] INSERT consists of the following parts: Optional context parameters.An INSERT INTO <dataSource> clause at the start of your query, such as INSERT INTO your-table.A clause for the data you want to insert, such as SELECT ... FROM .... You can use EXTERNto reference external tables using FROM TABLE(EXTERN(...)).A PARTITIONED BY clause, such as PARTITIONED BY DAY.An optional CLUSTERED BY clause. For more information, see Load data with INSERT. "},{"title":"REPLACE","type":1,"pageTitle":"SQL-based ingestion reference","url":"/docs/27.0.0/multi-stage-query/reference#replace","content":"You can use the REPLACE function to replace all or some of the data. Unlike standard SQL, REPLACE loads data into the target table according to column name, not positionally. If necessary, use AS in your SELECT column list to assign the correct names. Do not rely on their positions within the SELECT clause. 
REPLACE all data Function format to replace all data: REPLACE INTO <target table> OVERWRITE ALL < SELECT query > PARTITIONED BY <time granularity> [ CLUSTERED BY <column list> ] REPLACE specific time ranges Function format to replace specific time ranges: REPLACE INTO <target table> OVERWRITE WHERE __time >= TIMESTAMP '<lower bound>' AND __time < TIMESTAMP '<upper bound>' < SELECT query > PARTITIONED BY <time granularity> [ CLUSTERED BY <column list> ] REPLACE consists of the following parts: Optional context parameters. A REPLACE INTO <dataSource> clause at the start of your query, such as REPLACE INTO "your-table". An OVERWRITE clause after the datasource, either OVERWRITE ALL or OVERWRITE WHERE: OVERWRITE ALL replaces the entire existing datasource with the results of the query. OVERWRITE WHERE drops the time segments that match the condition you set. Conditions are based on the __time column and use the format __time [< > = <= >=] TIMESTAMP. Use them with AND, OR, and NOT between them, inclusive of the timestamps specified. No other expressions or functions are valid in OVERWRITE. A clause for the actual data you want to use for the replacement. A PARTITIONED BY clause, such as PARTITIONED BY DAY. An optional CLUSTERED BY clause. For more information, see Overwrite data with REPLACE. "},{"title":"PARTITIONED BY","type":1,"pageTitle":"SQL-based ingestion reference","url":"/docs/27.0.0/multi-stage-query/reference#partitioned-by","content":"The PARTITIONED BY <time granularity> clause is required for INSERT and REPLACE. See Partitioning for details. The following granularity arguments are accepted: Time unit keywords: HOUR, DAY, MONTH, or YEAR. Equivalent to FLOOR(__time TO TimeUnit). Time units as ISO 8601 period strings: 'PT1H', 'P1D', etc. (Druid 26.0 and later.) TIME_FLOOR(__time, 'granularity_string'), where granularity_string is one of the ISO 8601 periods listed below. The first argument must be __time. FLOOR(__time TO TimeUnit), where TimeUnit is any unit supported by the FLOOR function. The first argument must be __time. ALL or ALL TIME, which effectively disables time partitioning by placing all data in a single time chunk. To use LIMIT or OFFSET at the outer level of your INSERT or REPLACE query, you must set PARTITIONED BY to ALL or ALL TIME. Earlier versions required the TIME_FLOOR notation to specify a granularity other than the keywords. In the current version, the string constant provides a simpler equivalent solution. The following ISO 8601 periods are supported for TIME_FLOOR and the string constant: PT1S, PT1M, PT5M, PT10M, PT15M, PT30M, PT1H, PT6H, P1D, P1W*, P1M, P3M, P1Y. For more information about partitioning, see Partitioning. *Avoid partitioning by week, P1W, because weeks don't align neatly with months and years, making it difficult to partition by coarser granularities later. "},{"title":"CLUSTERED BY","type":1,"pageTitle":"SQL-based ingestion reference","url":"/docs/27.0.0/multi-stage-query/reference#clustered-by","content":"The CLUSTERED BY <column list> clause is optional for INSERT and REPLACE. It accepts a list of column names or expressions. For more information about clustering, see Clustering. "},{"title":"Context parameters","type":1,"pageTitle":"SQL-based ingestion reference","url":"/docs/27.0.0/multi-stage-query/reference#context-parameters","content":"In addition to the Druid SQL context parameters, the multi-stage query task engine accepts certain context parameters that are specific to it. Use context parameters alongside your queries to customize the behavior of the query. 
If you're using the API, include the context parameters in the query context when you submit a query: { "query": "SELECT 1 + 1", "context": { "<key>": "<value>", "maxNumTasks": 3 } } If you're using the web console, you can specify the context parameters through various UI options. The following table lists the context parameters for the MSQ task engine: Parameter\tDescription\tDefault valuemaxNumTasks\tSELECT, INSERT, REPLACE The maximum total number of tasks to launch, including the controller task. The lowest possible value for this setting is 2: one controller and one worker. All tasks must be able to launch simultaneously. If they cannot, the query returns a TaskStartTimeout error code after approximately 10 minutes. May also be provided as numTasks. If both are present, maxNumTasks takes priority.\t2 taskAssignment\tSELECT, INSERT, REPLACE Determines how many tasks to use. Possible values include: max: Uses as many tasks as possible, up to maxNumTasks.auto: When file sizes can be determined through directory listing (for example: local files, S3, GCS, HDFS) uses as few tasks as possible without exceeding 512 MiB or 10,000 files per task, unless exceeding these limits is necessary to stay within maxNumTasks. When calculating the size of files, the weighted size is used, which considers the file format and compression format used if any. When file sizes cannot be determined through directory listing (for example: http), behaves the same as max. max finalizeAggregations\tSELECT, INSERT, REPLACE Determines the type of aggregation to return. If true, Druid finalizes the results of complex aggregations that directly appear in query results. If false, Druid returns the aggregation's intermediate type rather than finalized type. This parameter is useful during ingestion, where it enables storing sketches directly in Druid tables. For more information about aggregations, see SQL aggregation functions.\ttrue sqlJoinAlgorithm\tSELECT, INSERT, REPLACE Algorithm to use for JOIN. Use broadcast (the default) for broadcast hash join or sortMerge for sort-merge join. Affects all JOIN operations in the query. This is a hint to the MSQ engine and the actual joins in the query may proceed in a different way than specified. See Joins for more details.\tbroadcast rowsInMemory\tINSERT or REPLACE Maximum number of rows to store in memory at once before flushing to disk during the segment generation process. Ignored for non-INSERT queries. In most cases, use the default value. You may need to override the default if you run into one of the known issues around memory usage.\t100,000 segmentSortOrder\tINSERT or REPLACE Normally, Druid sorts rows in individual segments using __time first, followed by the CLUSTERED BY clause. When you set segmentSortOrder, Druid sorts rows in segments using this column list first, followed by the CLUSTERED BY order. You provide the column list as comma-separated values or as a JSON array in string form. If your query includes __time, then this list must begin with __time. For example, consider an INSERT query that uses CLUSTERED BY country and has segmentSortOrder set to __time,city. Within each time chunk, Druid assigns rows to segments based on country, and then within each of those segments, Druid sorts those rows by __time first, then city, then country.\tempty list maxParseExceptions\tSELECT, INSERT, REPLACE Maximum number of parse exceptions that are ignored while executing the query before it stops with TooManyWarningsFault. 
To ignore all the parse exceptions, set the value to -1.\t0 rowsPerSegment\tINSERT or REPLACE The number of rows per segment to target. The actual number of rows per segment may be somewhat higher or lower than this number. In most cases, use the default. For general information about sizing rows per segment, see Segment Size Optimization.\t3,000,000 indexSpec\tINSERT or REPLACE An indexSpec to use when generating segments. May be a JSON string or object. See Front coding for details on configuring an indexSpec with front coding.\tSee indexSpec. durableShuffleStorage\tSELECT, INSERT, REPLACE Whether to use durable storage for shuffle mesh. To use this feature, configure the durable storage at the server level using druid.msq.intermediate.storage.enable=true. If these properties are not configured, any query with the context variable durableShuffleStorage=true fails with a configuration error. false faultTolerance\tSELECT, INSERT, REPLACE Whether to turn on fault tolerance mode or not. Failed workers are retried based on Limits. Cannot be used when durableShuffleStorage is explicitly set to false.\tfalse selectDestination\tSELECT Controls where the final result of the select query is written. Use taskReport (the default) to write select results to the task report. This is not scalable, since the task report size grows very large for large result sets. Use durableStorage to write results to a durable storage location. For large result sets, it's recommended to use durableStorage. To configure durable storage, see this section.\ttaskReport "},{"title":"Joins","type":1,"pageTitle":"SQL-based ingestion reference","url":"/docs/27.0.0/multi-stage-query/reference#joins","content":"Joins in multi-stage queries use one of two algorithms based on what you set the context parameter sqlJoinAlgorithm to: broadcast (default) or sortMerge. If you omit this context parameter, the MSQ task engine uses broadcast since it's the default join algorithm. The context parameter applies to the entire SQL statement, so you can't mix different join algorithms in the same query. sqlJoinAlgorithm is a hint to the planner to execute the join in the specified manner. The planner can decide to ignore the hint if it deduces that the specified algorithm can be detrimental to the performance of the join beforehand. This intelligence is currently very limited, so the specified sqlJoinAlgorithm is respected in most cases; therefore, set it appropriately. Review the advantages and drawbacks of the broadcast and sort-merge joins to determine which one to use before running your query. "},{"title":"Broadcast","type":1,"pageTitle":"SQL-based ingestion reference","url":"/docs/27.0.0/multi-stage-query/reference#broadcast","content":"The default join algorithm for multi-stage queries is a broadcast hash join, which is similar to how joins are executed with native queries. To use broadcast joins, either omit the sqlJoinAlgorithm or set it to broadcast. For a broadcast join, any adjacent joins are flattened into a structure with a "base" input (the bottom-leftmost one) and other leaf inputs (the rest). Next, any subqueries that are inputs to the join (either base or other leaves) are planned into independent stages. Then, the non-base leaf inputs are all connected as broadcast inputs to the "base" stage. Together, all of these non-base leaf inputs must not exceed the limit on broadcast table footprint. There is no limit on the size of the base (leftmost) input. Only LEFT JOIN, INNER JOIN, and CROSS JOIN are supported with broadcast. 
Join conditions, if present, must be equalities. It is not necessary to include a join condition; for example, CROSS JOIN and comma join do not require join conditions. The following example has a single join chain where orders is the base input while products and customers are non-base leaf inputs. The broadcast inputs (products and customers) must fall under the limit on broadcast table footprint, but the base orders input can be unlimited in size. The query reads products and customers and then broadcasts both to the stage that reads orders. That stage loads the broadcast inputs (products and customers) in memory and walks through orders row by row. The results are aggregated and written to the table orders_enriched. REPLACE INTO orders_enriched OVERWRITE ALL SELECT orders.__time, products.name AS product_name, customers.name AS customer_name, SUM(orders.amount) AS amount FROM orders LEFT JOIN products ON orders.product_id = products.id LEFT JOIN customers ON orders.customer_id = customers.id GROUP BY 1, 2, 3 PARTITIONED BY HOUR CLUSTERED BY product_name "},{"title":"Sort-merge","type":1,"pageTitle":"SQL-based ingestion reference","url":"/docs/27.0.0/multi-stage-query/reference#sort-merge","content":"You can use the sort-merge join algorithm to make queries more scalable at the cost of performance. If your goal is performance, consider broadcast joins. There are various scenarios where broadcast join would return a BroadcastTablesTooLarge error, but a sort-merge join would succeed. To use the sort-merge join algorithm, set the context parameter sqlJoinAlgorithm to sortMerge. In a sort-merge join, each pairwise join is planned into its own stage with two inputs. The two inputs are partitioned and sorted using a hash partitioning on the same key. When using the sort-merge algorithm, keep the following in mind: There is no limit on the overall size of either input, so sort-merge is a good choice for performing a join of two large inputs or for performing a self-join of a large input with itself. There is a limit on the amount of data associated with each individual key. If both sides of the join exceed this limit, the query returns a TooManyRowsWithSameKey error. If only one side exceeds the limit, the query does not return this error. Join conditions are optional but must be equalities if they are present. For example, CROSS JOIN and comma join do not require join conditions. All join types are supported with sortMerge: LEFT, RIGHT, INNER, FULL, and CROSS. The following example runs using a single sort-merge join stage that receives eventstream (partitioned on user_id) and users (partitioned on id) as inputs. There is no limit on the size of either input. REPLACE INTO eventstream_enriched OVERWRITE ALL SELECT eventstream.__time, eventstream.user_id, eventstream.event_type, eventstream.event_details, users.signup_date AS user_signup_date FROM eventstream LEFT JOIN users ON eventstream.user_id = users.id PARTITIONED BY HOUR CLUSTERED BY user_id The context parameter that sets sqlJoinAlgorithm to sortMerge is not shown in the above example. "},{"title":"Durable storage","type":1,"pageTitle":"SQL-based ingestion reference","url":"/docs/27.0.0/multi-stage-query/reference#durable-storage","content":"SQL-based ingestion supports using durable storage to store intermediate files temporarily. Enabling it can improve reliability. For more information, see Durable storage. 
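For example, assuming durable storage is already configured at the server level as described in the next section, an individual query can opt in by adding "durableShuffleStorage": true to its query context.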
"},{"title":"Durable storage configurations","type":1,"pageTitle":"SQL-based ingestion reference","url":"/docs/27.0.0/multi-stage-query/reference#durable-storage-configurations","content":"The following common service properties control how durable storage behaves: Parameter\tDefault\tDescriptiondruid.msq.intermediate.storage.enable\ttrue\tRequired. Whether to enable durable storage for the cluster. For more information about enabling durable storage, see Durable storage. druid.msq.intermediate.storage.type\ts3 for Amazon S3\tRequired. The type of storage to use. s3 is the only supported storage type. druid.msq.intermediate.storage.bucket\tn/a\tThe S3 bucket to store intermediate files. druid.msq.intermediate.storage.prefix\tn/a\tS3 prefix to store intermediate stage results. Provide a unique value for the prefix. Don't share the same prefix between clusters. If the location includes other files or directories, then they will get cleaned up as well. druid.msq.intermediate.storage.tempDir\tn/a\tRequired. Directory path on the local disk to temporarily store intermediate stage results. druid.msq.intermediate.storage.maxRetry\t10\tOptional. Defines the max number times to attempt S3 API calls to avoid failures due to transient errors. druid.msq.intermediate.storage.chunkSize\t100MiB\tOptional. Defines the size of each chunk to temporarily store in druid.msq.intermediate.storage.tempDir. The chunk size must be between 5 MiB and 5 GiB. A large chunk size reduces the API calls made to the durable storage, however it requires more disk space to store the temporary chunks. Druid uses a default of 100MiB if the value is not provided. In addition to the common service properties, there are certain properties that you configure on the Overlord specifically to clean up intermediate files: Parameter\tDefault\tDescriptiondruid.msq.intermediate.storage.cleaner.enabled\tfalse\tOptional. Whether durable storage cleaner should be enabled for the cluster. druid.msq.intermediate.storage.cleaner.delaySeconds\t86400\tOptional. The delay (in seconds) after the last run post which the durable storage cleaner would clean the outputs. "},{"title":"Limits","type":1,"pageTitle":"SQL-based ingestion reference","url":"/docs/27.0.0/multi-stage-query/reference#limits","content":"Knowing the limits for the MSQ task engine can help you troubleshoot any errors that you encounter. Many of the errors occur as a result of reaching a limit. The following table lists query limits: Limit\tValue\tError if exceededSize of an individual row written to a frame. Row size when written to a frame may differ from the original row size.\t1 MB\tRowTooLarge Number of segment-granular time chunks encountered during ingestion.\t5,000\tTooManyBuckets Number of input files/segments per worker.\t10,000\tTooManyInputFiles Number of output partitions for any one stage. Number of segments generated during ingestion.\t25,000\tTooManyPartitions Number of output columns for any one stage.\t2,000\tTooManyColumns Number of cluster by columns that can appear in a stage\t1,500\tTooManyClusteredByColumns Number of workers for any one stage.\tHard limit is 1,000. Memory-dependent soft limit may be lower.\tTooManyWorkers Maximum memory occupied by broadcasted tables.\t30% of each processor memory bundle.\tBroadcastTablesTooLarge Maximum memory occupied by buffered data during sort-merge join. Only relevant when sqlJoinAlgorithm is sortMerge.\t10 MB\tTooManyRowsWithSameKey Maximum relaunch attempts per worker. Initial run is not a relaunch. 
The worker will be spawned 1 + workerRelaunchLimit times before the job fails.\t2\tTooManyAttemptsForWorker Maximum relaunch attempts for a job across all workers.\t100\tTooManyAttemptsForJob "},{"title":"Error codes","type":1,"pageTitle":"SQL-based ingestion reference","url":"/docs/27.0.0/multi-stage-query/reference#error-codes","content":"The following table describes error codes you may encounter in the multiStageQuery.payload.status.errorReport.error.errorCode field: Code\tMeaning\tAdditional fieldsBroadcastTablesTooLarge\tThe size of the broadcast tables used in the right-hand side of the join exceeded the memory reserved for them in a worker task. Try increasing the peon memory or reducing the size of the broadcast tables.\tmaxBroadcastTablesSize: Memory reserved for the broadcast tables, measured in bytes. Canceled\tThe query was canceled. Common reasons for cancellation: User-initiated shutdown of the controller task via the /druid/indexer/v1/task/{taskId}/shutdown API.Restart or failure of the server process that was running the controller task. CannotParseExternalData\tA worker task could not parse data from an external datasource.\terrorMessage: More details on why parsing failed. ColumnNameRestricted\tThe query uses a restricted column name.\tcolumnName: The restricted column name. ColumnTypeNotSupported\tThe column type is not supported. This can be because: Support for writing or reading from a particular column type is not supported.The query attempted to use a column type that is not supported by the frame format. This occurs with ARRAY types, which are not yet implemented for frames. columnName: The column name with an unsupported type. columnType: The unknown column type. InsertCannotAllocateSegment\tThe controller task could not allocate a new segment ID due to conflict with existing segments or pending segments. Common reasons for such conflicts: Attempting to mix different granularities in the same intervals of the same datasource.Prior ingestions that used non-extendable shard specs. Use REPLACE to overwrite the existing data, or, if the error contains allocatedInterval, rerun the INSERT job with the mentioned granularity to append to the existing data. Note that it might not always be possible to append to the existing data using INSERT; this can only be done if allocatedInterval is present.\tdataSource interval: The interval for the attempted new segment allocation. allocatedInterval: The incorrect interval allocated by the overlord. It can be null. InsertCannotBeEmpty\tAn INSERT or REPLACE query did not generate any output rows in a situation where output rows are required for success. This can happen for INSERT or REPLACE queries with PARTITIONED BY set to something other than ALL or ALL TIME.\tdataSource InsertLockPreempted\tAn INSERT or REPLACE query was canceled by a higher-priority ingestion job, such as a real-time ingestion task. InsertTimeNull\tAn INSERT or REPLACE query encountered a null timestamp in the __time field. This can happen due to using an expression like TIME_PARSE(timestamp) AS __time with a timestamp that cannot be parsed. (TIME_PARSE returns null when it cannot parse a timestamp.) In this case, try parsing your timestamps using a different function or pattern. Or, if your timestamps may genuinely be null, consider using COALESCE to provide a default value. One option is CURRENT_TIMESTAMP, which represents the start time of the job. 
InsertTimeOutOfBounds\tA REPLACE query generated a timestamp outside the bounds of the TIMESTAMP parameter for your OVERWRITE WHERE clause. To avoid this error, verify that the interval you specified is valid.\tinterval: time chunk interval corresponding to the out-of-bounds timestamp InvalidNullByte\tA string column included a null byte. Null bytes in strings are not permitted.\tsource: The source that included the null byte rowNumber: The row number (1-indexed) that included the null byte column: The column that included the null byte value: Actual string containing the null byte position: Position (1-indexed) of occurrence of null byte QueryNotSupported\tQueryKit could not translate the provided native query to a multi-stage query. This can happen if the query uses features that aren't supported, like GROUPING SETS. QueryRuntimeError\tMSQ uses the native query engine to run the leaf stages. This error indicates that the error occurred in the native query runtime. Since this is a generic error, the user needs to look at logs for the error message and stack trace to figure out the next course of action. If the user is stuck, consider raising a GitHub issue for assistance.\tbaseErrorMessage: Error message from the native query runtime. RowTooLarge\tThe query tried to process a row that was too large to write to a single frame. See the Limits table for specific limits on frame size. Note that the effective maximum row size is smaller than the maximum frame size due to alignment considerations during frame writing.\tmaxFrameSize: The limit on the frame size. TaskStartTimeout\tUnable to launch pendingTasks worker tasks out of a total of totalTasks worker tasks within timeout seconds of the last successful worker launch. There may be insufficient available slots to start all the worker tasks simultaneously. Try splitting up your query into smaller chunks using a smaller value of maxNumTasks. Another option is to increase capacity.\tpendingTasks: Number of tasks not yet started. totalTasks: The number of tasks attempted to launch. timeout: Timeout, in milliseconds, that was exceeded. TooManyAttemptsForJob\tTotal relaunch attempt count across all workers exceeded max relaunch attempt limit. See the Limits table for the specific limit.\tmaxRelaunchCount: Max number of relaunches across all the workers defined in the Limits section. currentRelaunchCount: current relaunch counter for the job across all workers. taskId: Latest task id which failed rootErrorMessage: Error message of the latest failed task. TooManyAttemptsForWorker\tWorker exceeded maximum relaunch attempt count as defined in the Limits section.\tmaxPerWorkerRelaunchCount: Max number of relaunches allowed per worker as defined in the Limits section. workerNumber: the worker number for which the task failed taskId: Latest task id which failed rootErrorMessage: Error message of the latest failed task. TooManyBuckets\tExceeded the maximum number of partition buckets for a stage (5,000 partition buckets). Partition buckets are created for each PARTITIONED BY time chunk for INSERT and REPLACE queries. The most common reason for this error is that your PARTITIONED BY is too narrow relative to your data.\tmaxBuckets: The limit on partition buckets. TooManyInputFiles\tExceeded the maximum number of input files or segments per worker (10,000 files or segments). 
If you encounter this limit, consider adding more workers, or breaking up your query into smaller queries that process fewer files or segments per query.\tnumInputFiles: The total number of input files/segments for the stage. maxInputFiles: The maximum number of input files/segments per worker per stage. minNumWorker: The minimum number of workers required for a successful run. TooManyPartitions\tExceeded the maximum number of partitions for a stage (25,000 partitions). This can occur with INSERT or REPLACE statements that generate large numbers of segments, since each segment is associated with a partition. If you encounter this limit, consider breaking up your INSERT or REPLACE statement into smaller statements that process less data per statement.\tmaxPartitions: The limit on partitions which was exceeded TooManyClusteredByColumns\tExceeded the maximum number of clustering columns for a stage (1,500 columns). This can occur with CLUSTERED BY, ORDER BY, or GROUP BY with a large number of columns.\tnumColumns: The number of columns requested. maxColumns: The limit on columns which was exceeded.stage: The stage number exceeding the limit TooManyRowsWithSameKey\tThe number of rows for a given key exceeded the maximum number of buffered bytes on both sides of a join. See the Limits table for the specific limit. Only occurs when join is executed via the sort-merge join algorithm.\tkey: The key that had a large number of rows. numBytes: Number of bytes buffered, which may include other keys. maxBytes: Maximum number of bytes buffered. TooManyColumns\tExceeded the maximum number of columns for a stage (2,000 columns).\tnumColumns: The number of columns requested. maxColumns: The limit on columns which was exceeded. TooManyWarnings\tExceeded the maximum allowed number of warnings of a particular type.\trootErrorCode: The error code corresponding to the exception that exceeded the required limit. maxWarnings: Maximum number of warnings that are allowed for the corresponding rootErrorCode. TooManyWorkers\tExceeded the maximum number of simultaneously-running workers. See the Limits table for more details.\tworkers: The number of simultaneously running workers that exceeded a hard or soft limit. This may be larger than the number of workers in any one stage if multiple stages are running simultaneously. maxWorkers: The hard or soft limit on workers that was exceeded. If this is lower than the hard limit (1,000 workers), then you can increase the limit by adding more memory to each task. NotEnoughMemory\tInsufficient memory to launch a stage.\tsuggestedServerMemory: Suggested number of bytes of memory to allocate to a given process. serverMemory: The number of bytes of memory available to a single process. usableMemory: The number of usable bytes of memory for a single process. serverWorkers: The number of workers running in a single process. serverThreads: The number of threads in a single process. NotEnoughTemporaryStorage\tInsufficient temporary storage configured to launch a stage. This limit is set by the property druid.indexer.task.tmpStorageBytesPerTask. This property should be increased to the minimum suggested limit to resolve this.\tsuggestedMinimumStorage: Suggested number of bytes of temporary storage space to allocate to a given process. configuredTemporaryStorage: The number of bytes of storage currently configured. WorkerFailed\tA worker task failed unexpectedly.\terrorMsg workerTaskId: The ID of the worker task. 
WorkerRpcFailed\tA remote procedure call to a worker task failed and could not recover.\tworkerTaskId: The ID of the worker task UnknownError\tAll other errors.\tmessage InsertCannotOrderByDescending\tDeprecated. An INSERT query contained a CLUSTERED BY expression in descending order. Druid's segment generation code only supports ascending order. The query returns a ValidationException instead of the fault.\tcolumnName "},{"title":"dump-segment tool","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/dump-segment","content":"","keywords":""},{"title":"Output format","type":1,"pageTitle":"dump-segment tool","url":"/docs/27.0.0/operations/dump-segment#output-format","content":"Data dumps By default, or with --dump rows, this tool dumps rows of the segment as newline-separated JSON objects, with one object per line, using the default serialization for each column. Normally all columns are included, but if you like, you can limit the dump to specific columns with --column name. For example, one line might look like this when pretty-printed: { "__time": 1442018818771, "added": 36, "channel": "#en.wikipedia", "cityName": null, "comment": "added project", "count": 1, "countryIsoCode": null, "countryName": null, "deleted": 0, "delta": 36, "isAnonymous": "false", "isMinor": "false", "isNew": "false", "isRobot": "false", "isUnpatrolled": "false", "iuser": "00001553", "metroCode": null, "namespace": "Talk", "page": "Talk:Oswald Tilghman", "regionIsoCode": null, "regionName": null, "user": "GELongstreet" } Metadata dumps With --dump metadata, this tool dumps metadata instead of rows. Metadata dumps generated by this tool are in the same format as returned by the SegmentMetadata query. Bitmap dumps With --dump bitmaps, this tool will dump bitmap indexes instead of rows. Bitmap dumps generated by this tool include dictionary-encoded string columns only. The output contains a field "bitmapSerdeFactory" describing the type of bitmaps used in the segment, and a field "bitmaps" containing the bitmaps for each value of each column. These are base64 encoded by default, but you can also dump them as lists of row numbers with --decompress-bitmaps. Normally all columns are included, but if you like, you can limit the dump to specific columns with --column name. Sample output: { "bitmapSerdeFactory": { "type": "roaring" }, "bitmaps": { "isRobot": { "false": "//aExfu+Nv3X...", "true": "gAl7OoRByQ..." } } } Nested column dumps With --dump nested, this tool can be used to examine Druid nested columns. Using nested always requires exactly one --column name argument, and takes an optional argument to specify a specific nested field in JSONPath syntax, --nested-path $.path.to.field. If --nested-path is not specified, the output will contain the list of nested fields and their types, the global value dictionaries, and the list of null rows. 
Sample output: { "nest": { "fields": [ { "path": "$.x", "types": [ "LONG" ] }, { "path": "$.y", "types": [ "DOUBLE" ] }, { "path": "$.z", "types": [ "STRING" ] } ], "dictionaries": { "strings": [ { "globalId": 0, "value": null }, { "globalId": 1, "value": "a" }, { "globalId": 2, "value": "b" } ], "longs": [ { "globalId": 3, "value": 100 }, { "globalId": 4, "value": 200 }, { "globalId": 5, "value": 400 } ], "doubles": [ { "globalId": 6, "value": 1.1 }, { "globalId": 7, "value": 2.2 }, { "globalId": 8, "value": 3.3 } ], "nullRows": [] } } } If --nested-path is specified, the output will instead contain the types of the nested field, the local value dictionary, including the 'global' dictionary id and value, the uncompressed bitmap index for each value (list of row numbers which contain the value), and a dump of the column itself, which contains the row number, raw JSON form of the nested column itself, the local dictionary id of the field for that row, and the value for the field for the row. Sample output: { "bitmapSerdeFactory": { "type": "roaring" }, "nest": { "$.x": { "types": [ "LONG" ], "dictionary": [ { "localId": 0, "globalId": 0, "value": null, "rows": [ 4 ] }, { "localId": 1, "globalId": 3, "value": "100", "rows": [ 3 ] }, { "localId": 2, "globalId": 4, "value": "200", "rows": [ 0, 2 ] }, { "localId": 3, "globalId": 5, "value": "400", "rows": [ 1 ] } ], "column": [ { "row": 0, "raw": { "x": 200, "y": 2.2 }, "fieldId": 2, "fieldValue": "200" }, { "row": 1, "raw": { "x": 400, "y": 1.1, "z": "a" }, "fieldId": 3, "fieldValue": "400" }, { "row": 2, "raw": { "x": 200, "z": "b" }, "fieldId": 2, "fieldValue": "200" }, { "row": 3, "raw": { "x": 100, "y": 1.1, "z": "a" }, "fieldId": 1, "fieldValue": "100" }, { "row": 4, "raw": { "y": 3.3, "z": "b" }, "fieldId": 0, "fieldValue": null } ] } } } "},{"title":"Command line arguments","type":1,"pageTitle":"dump-segment tool","url":"/docs/27.0.0/operations/dump-segment#command-line-arguments","content":"argument\tdescription\trequired?--directory file\tDirectory containing segment data. This could be generated by unzipping an "index.zip" from deep storage.\tyes --output file\tFile to write to, or omit to write to stdout.\tyes --dump TYPE\tDump either 'rows' (default), 'metadata', 'bitmaps', or 'nested' for examining nested columns.\tno --column columnName\tColumn to include. Specify multiple times for multiple columns, or omit to include all columns.\tno --filter json\tJSON-encoded query filter. Omit to include all rows. Only used if dumping rows.\tno --time-iso8601\tFormat __time column in ISO8601 format rather than long. Only used if dumping rows.\tno --decompress-bitmaps\tDump bitmaps as arrays rather than base64-encoded compressed bitmaps. Only used if dumping bitmaps.\tno --nested-path\tSpecify a specific nested column field using JSONPath syntax. 
Only used if dumping a nested column.\tno "},{"title":"Durable storage for the multi-stage query engine","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/durable-storage","content":"","keywords":""},{"title":"Enable durable storage","type":1,"pageTitle":"Durable storage for the multi-stage query engine","url":"/docs/27.0.0/operations/durable-storage#enable-durable-storage","content":"To enable durable storage, you need to set the following common service properties: druid.msq.intermediate.storage.enable=true druid.msq.intermediate.storage.type=s3 druid.msq.intermediate.storage.bucket=YOUR_BUCKET druid.msq.intermediate.storage.prefix=YOUR_PREFIX druid.msq.intermediate.storage.tempDir=/path/to/your/temp/dir For detailed information about the settings related to durable storage, see Durable storage configurations. "},{"title":"Use durable storage for SQL-based ingestion queries","type":1,"pageTitle":"Durable storage for the multi-stage query engine","url":"/docs/27.0.0/operations/durable-storage#use-durable-storage-for-sql-based-ingestion-queries","content":"When you run a query, include the context parameter durableShuffleStorage and set it to true. For queries where you want to use fault tolerance for workers, set faultTolerance to true, which automatically sets durableShuffleStorage to true. "},{"title":"Use durable storage for queries from deep storage","type":1,"pageTitle":"Durable storage for the multi-stage query engine","url":"/docs/27.0.0/operations/durable-storage#use-durable-storage-for-queries-from-deep-storage","content":"Depending on the size of the results you're expecting, you might need to save the final results for queries from deep storage to durable storage. By default, Druid saves the final results for queries from deep storage to task reports. Generally, this is acceptable for smaller result sets but may lead to timeouts for larger result sets. When you run a query, include the context parameter selectDestination and set it to DURABLESTORAGE: "context":{ ... "selectDestination": "DURABLESTORAGE" } You can also write intermediate results to durable storage (durableShuffleStorage) for better reliability. The location where workers write intermediate results is different from the location where final results get stored. This means that durable storage for results can be enabled even if you don't write intermediate results to durable storage. If you write the results for queries from deep storage to durable storage, the results are cleaned up when the task is removed from the metadata store. "},{"title":"Durable storage clean up","type":1,"pageTitle":"Durable storage for the multi-stage query engine","url":"/docs/27.0.0/operations/durable-storage#durable-storage-clean-up","content":"To prevent durable storage from getting filled up with temporary files in case the tasks fail to clean them up, a periodic cleaner can be scheduled to clean up the directories for which there is no controller task running. It uses the storage connector to operate on the durable storage. The durable storage location should only be used to store the output of the cluster's MSQ tasks. If the location contains other files or directories, then they will get cleaned up as well. Use druid.msq.intermediate.storage.cleaner.enabled and druid.msq.intermediate.storage.cleaner.delaySeconds to configure the cleaner. For more information, see Durable storage configurations. 
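For example, the following Overlord properties enable the cleaner; the values shown are illustrative, with the delay matching the default of 86400 seconds: druid.msq.intermediate.storage.cleaner.enabled=true druid.msq.intermediate.storage.cleaner.delaySeconds=86400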
Note that if you choose to write query results to durable storage,the results are cleaned up when the task is removed from the metadata store. "},{"title":"Dynamic Config Providers","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/dynamic-config-provider","content":"","keywords":""},{"title":"Environment variable dynamic config provider","type":1,"pageTitle":"Dynamic Config Providers","url":"/docs/27.0.0/operations/dynamic-config-provider#environment-variable-dynamic-config-provider","content":"You can use the environment variable dynamic config provider (EnvironmentVariableDynamicConfigProvider) to store passwords or other sensitive information using system environment variables instead of plain text configuration. The environment variable dynamic config provider uses the following syntax: druid.dynamic.config.provider={"type": "environment","variables":{"secret1": "SECRET1_VAR","secret2": "SECRET2_VAR"}} Field\tType\tDescription\tRequiredtype\tString\tdynamic config provider type\tYes: environment variables\tMap\tenvironment variables that store the configuration information\tYes When using the environment variable config provider, consider the following: If you manually specify a configuration key-value pair and use the dynamic config provider for the same key, Druid uses the value from the dynamic config provider.For use in a supervisor spec, environment variables must be available to the system user that runs the Overlord service and that runs the Peon service. The following example shows how to configure environment variables to store the SSL key and truststore passwords for Kafka. On the Overlord and Peon machines, set the following environment variables for the system user that runs the Druid services: export SSL_KEY_PASSWORD=mysecretkeypassword export SSL_KEYSTORE_PASSWORD=mysecretkeystorepassword export SSL_TRUSTSTORE_PASSWORD=mysecrettruststorepassword When you define the consumer properties in the supervisor spec, use the dynamic config provider to refer to the environment variables: ... "consumerProperties": { "bootstrap.servers": "localhost:9092", "ssl.keystore.location": "/opt/kafka/config/kafka01.keystore.jks", "ssl.truststore.location": "/opt/kafka/config/kafka.truststore.jks", "druid.dynamic.config.provider": { "type": "environment", "variables": { "ssl.key.password": "SSL_KEY_PASSWORD", "ssl.keystore.password": "SSL_KEYSTORE_PASSWORD", "ssl.truststore.password": "SSL_TRUSTSTORE_PASSWORD" } } }, ... When connecting to Kafka, Druid replaces the environment variables with their corresponding values. "},{"title":"Configure LDAP authentication","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/auth-ldap","content":"","keywords":""},{"title":"Prerequisites","type":1,"pageTitle":"Configure LDAP authentication","url":"/docs/27.0.0/operations/auth-ldap#prerequisites","content":"Before you start to configure LDAP for Druid, test your LDAP connection and perform a sample search. "},{"title":"Check your LDAP connection","type":1,"pageTitle":"Configure LDAP authentication","url":"/docs/27.0.0/operations/auth-ldap#check-your-ldap-connection","content":"Test your LDAP connection to verify it works with user credentials. Later in the process you configure Druid for LDAP authentication with this user as the bindUser. The following example command tests the connection for the user myuser@example.com. Insert your LDAP server IP address. Modify the port number of your LDAP instance if it listens on a port other than 389. 
ldapwhoami -vv -H ldap://ip_address:389 -D "myuser@example.com" -W Enter the password for the user when prompted and verify that the command succeeded. If it failed, check the following: Make sure you're using the correct port for your LDAP instance.Check if a network firewall is preventing connections to the LDAP port.Review your LDAP implementation details to see whether you need to specifically allow LDAP clients at the LDAP server. If so, add the Druid Coordinator server to the allow list. "},{"title":"Test your LDAP search","type":1,"pageTitle":"Configure LDAP authentication","url":"/docs/27.0.0/operations/auth-ldap#test-your-ldap-search","content":"Once your LDAP connection is working, search for a user. For example, the following command searches for the user myuser in an Active Directory system. The sAMAccountName attribute is specific to Active Directory and contains the authenticated user identity: ldapsearch -x -W -H ldap://ip_address:389 -D "cn=admin,dc=example,dc=com" -b "dc=example,dc=com" "(sAMAccountName=myuser)" + The memberOf attribute in the results shows the groups the user belongs to. For example, the following response shows that the user is a member of the mygroup group: memberOf: cn=mygroup,ou=groups,dc=example,dc=com You use this information to map the LDAP group to Druid roles in a later step. info Druid uses the memberOf attribute to determine a group's membership using LDAP. If your LDAP server implementation doesn't include this attribute, you must complete some additional steps when you map LDAP groups to Druid roles. "},{"title":"Configure Druid for LDAP authentication","type":1,"pageTitle":"Configure LDAP authentication","url":"/docs/27.0.0/operations/auth-ldap#configure-druid-for-ldap-authentication","content":"To configure Druid to use LDAP authentication, follow these steps. See Configuration reference for the location of the configuration files. Create a user in your LDAP system that you'll use both for internal communication with Druid and as the LDAP initial admin user. See Security overview for more information. In the example below, the LDAP user is internal@example.com. Enable the druid-basic-security extension in the common.runtime.properties file. In the common.runtime.properties file, add the following lines for LDAP properties and substitute the values for your own. See Druid basic security for details about these properties. 
druid.auth.authenticatorChain=["ldap"] druid.auth.authenticator.ldap.type=basic druid.auth.authenticator.ldap.enableCacheNotifications=true druid.auth.authenticator.ldap.credentialsValidator.type=ldap druid.auth.authenticator.ldap.credentialsValidator.url=ldap://ip_address:port druid.auth.authenticator.ldap.credentialsValidator.bindUser=administrator@example.com druid.auth.authenticator.ldap.credentialsValidator.bindPassword=adminpassword druid.auth.authenticator.ldap.credentialsValidator.baseDn=dc=example,dc=com druid.auth.authenticator.ldap.credentialsValidator.userSearch=(&(sAMAccountName=%s)(objectClass=user)) druid.auth.authenticator.ldap.credentialsValidator.userAttribute=sAMAccountName druid.auth.authenticator.ldap.authorizerName=ldapauth druid.escalator.type=basic druid.escalator.internalClientUsername=internal@example.com druid.escalator.internalClientPassword=internaluserpassword druid.escalator.authorizerName=ldapauth druid.auth.authorizers=["ldapauth"] druid.auth.authorizer.ldapauth.type=basic druid.auth.authorizer.ldapauth.initialAdminUser=internal@example.com druid.auth.authorizer.ldapauth.initialAdminRole=admin druid.auth.authorizer.ldapauth.roleProvider.type=ldap Note the following: bindUser: A user for connecting to LDAP. This should be the same user you used to test your LDAP search.userSearch: Your LDAP search syntax.userAttribute: The user search attribute.internal@example.com is the LDAP user you created in step 1. In the example it serves as both the internal client user and the initial admin user. info In the above example, the Druid escalator and LDAP initial admin user are set to the same user - internal@example.com. If the escalator is set to a different user, you must follow steps 4 and 5 to create the group mapping and allocate initial roles before the rest of the cluster can function. Save your group mapping to a JSON file. An example file groupmap.json looks like this: { "name": "mygroupmap", "groupPattern": "CN=mygroup,CN=Users,DC=example,DC=com", "roles": [ "readRole" ] } In the example, the LDAP group mygroup maps to Druid role readRole and the name of the mapping is mygroupmap. Use the Druid API to create the group mapping and allocate initial roles according to your JSON file. The following example uses curl to create the mapping defined in groupmap.json for the LDAP group mygroup: curl -i -v -H "Content-Type: application/json" -u internal -X POST -d @groupmap.json http://localhost:8081/druid-ext/basic-security/authorization/db/ldapauth/groupMappings/mygroupmap Check that the group mapping was created successfully. The following example request lists all group mappings: curl -i -v -H "Content-Type: application/json" -u internal -X GET http://localhost:8081/druid-ext/basic-security/authorization/db/ldapauth/groupMappings "},{"title":"Map LDAP groups to Druid roles","type":1,"pageTitle":"Configure LDAP authentication","url":"/docs/27.0.0/operations/auth-ldap#map-ldap-groups-to-druid-roles","content":"Once you've completed the initial setup and mapping, you can map more LDAP groups to Druid roles. Members of an LDAP group get access to the permissions of the corresponding Druid role. "},{"title":"Create a Druid role","type":1,"pageTitle":"Configure LDAP authentication","url":"/docs/27.0.0/operations/auth-ldap#create-a-druid-role","content":"To create a Druid role, you can submit a POST request to the Coordinator process using the Druid REST API or you can use the Druid console. The examples below use localhost as the Coordinator host and 8081 as the port. 
Amend these properties according to the details of your deployment. Example request to create a role named readRole: curl -i -v -H "Content-Type: application/json" -u internal -X POST http://localhost:8081/druid-ext/basic-security/authorization/db/ldapauth/roles/readRole Check that Druid created the role successfully. The following example request lists all roles: curl -i -v -H "Content-Type: application/json" -u internal -X GET http://localhost:8081/druid-ext/basic-security/authorization/db/ldapauth/roles "},{"title":"Add permissions to the Druid role","type":1,"pageTitle":"Configure LDAP authentication","url":"/docs/27.0.0/operations/auth-ldap#add-permissions-to-the-druid-role","content":"Once you have a Druid role you can add permissions to it. The following example adds read-only access to a wikipedia data source. Given the following JSON in a file named perm.json: [ { "resource": { "name": "wikipedia", "type": "DATASOURCE" }, "action": "READ" }, { "resource": { "name": ".*", "type": "STATE" }, "action": "READ" }, { "resource": {"name": ".*", "type": "CONFIG"}, "action": "READ"} ] The following request associates the permissions in the JSON file with the readRole role: curl -i -v -H "Content-Type: application/json" -u internal -X POST -d@perm.json http://localhost:8081/druid-ext/basic-security/authorization/db/ldapauth/roles/readRole/permissions Druid users need the STATE and CONFIG permissions to view the data source in the Druid console. If you only want to assign querying permissions you can apply just the READ permission with the first line in the perm.json file. You can also provide the data source name in the form of a regular expression. For example, to give access to all data sources starting with wiki, you would specify the data source name as { "name": "wiki.*" } . "},{"title":"Create the group mapping","type":1,"pageTitle":"Configure LDAP authentication","url":"/docs/27.0.0/operations/auth-ldap#create-the-group-mapping","content":"You can now map an LDAP group to the Druid role. The following example request creates a mapping with name mygroupmap. It assumes that a group named mygroup exists in the directory. { "name": "mygroupmap", "groupPattern": "CN=mygroup,CN=Users,DC=example,DC=com", "roles": [ "readRole" ] } The following example request configures the mapping—the role mapping is in the file groupmap.json. See Configure Druid for LDAP authentication for the contents of an example file. 
curl -i -v -H "Content-Type: application/json" -u internal -X POST -d @groupmap.json http://localhost:8081/druid-ext/basic-security/authorization/db/ldapauth/groupMappings/mygroupmap To check whether the group mapping was created successfully, the following request lists all group mappings: curl -i -v -H "Content-Type: application/json" -u internal -X GET http://localhost:8081/druid-ext/basic-security/authorization/db/ldapauth/groupMappings The following example request returns the details of the mygroupmap group: curl -i -v -H "Content-Type: application/json" -u internal -X GET http://localhost:8081/druid-ext/basic-security/authorization/db/ldapauth/groupMappings/mygroupmap The following example request adds the role queryRole to the mygroupmap mapping: curl -i -v -H "Content-Type: application/json" -u internal -X POST http://localhost:8081/druid-ext/basic-security/authorization/db/ldapauth/groupMappings/mygroupmap/roles/queryRole "},{"title":"Add an LDAP user to Druid and assign a role","type":1,"pageTitle":"Configure LDAP authentication","url":"/docs/27.0.0/operations/auth-ldap#add-an-ldap-user-to-druid-and-assign-a-role","content":"You only need to complete this step if: Your LDAP user doesn't belong to any of your LDAP groups, or you want to configure a user with additional Druid roles that are not mapped to the LDAP groups that the user belongs to. Example request to add the LDAP user myuser to Druid: curl -i -v -H "Content-Type: application/json" -u internal -X POST http://localhost:8081/druid-ext/basic-security/authorization/db/ldapauth/users/myuser Example request to assign the myuser user to the queryRole role: curl -i -v -H "Content-Type: application/json" -u internal -X POST http://localhost:8081/druid-ext/basic-security/authorization/db/ldapauth/users/myuser/roles/queryRole "},{"title":"Enable LDAP over TLS (LDAPS)","type":1,"pageTitle":"Configure LDAP authentication","url":"/docs/27.0.0/operations/auth-ldap#enable-ldap-over-tls-ldaps","content":"Once you've configured LDAP authentication in Druid, you can optionally make LDAP traffic confidential and secure by using Transport Layer Security (TLS)—previously Secure Sockets Layer (SSL)—technology. Configuring LDAPS establishes trust between Druid and the LDAP server. "},{"title":"Prerequisites","type":1,"pageTitle":"Configure LDAP authentication","url":"/docs/27.0.0/operations/auth-ldap#prerequisites-1","content":"Before you start to set up LDAPS in Druid, you must configure Druid for LDAP authentication. You also need: A certificate issued by a public certificate authority (CA) or a self-signed certificate by an internal CA.The root certificate for the CA that signed the certificate for the LDAP server. If you're using a common public CA, the certificate may already be in the Java truststore. Otherwise, you need to import the certificate for the CA. "},{"title":"Configure Druid for LDAPS","type":1,"pageTitle":"Configure LDAP authentication","url":"/docs/27.0.0/operations/auth-ldap#configure-druid-for-ldaps","content":"Complete the following steps to set up LDAPS for Druid. See Configuration reference for the location of the configuration files. Import the CA certificate for your LDAP server or a self-signed certificate into the truststore location saved as druid.client.https.trustStorePath in your common.runtime.properties file. 
keytool -import -trustcacerts -keystore path/to/cacerts -storepass truststorepassword -alias aliasName -file path/to/certificate.cer Replace path/to/cacerts with the path to your truststore, truststorepassword with your truststore password, aliasName with an alias name for the keystore, and path/to/certificate.cer with the location and name of your certificate. For example: keytool -import -trustcacerts -keystore /Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/security/cacerts -storepass mypassword -alias myAlias -file /etc/ssl/certs/my-certificate.cer If the root certificate for the CA isn't already in the Java truststore, import it: keytool -importcert -keystore path/to/cacerts -storepass truststorepassword -alias aliasName -file path/to/certificate.cer Replace path/to/cacerts with the path to your truststore, truststorepassword with your truststore password, aliasName with an alias name for the keystore, and path/to/certificate.cer with the location and name of your certificate. For example: keytool -importcert -keystore /Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/security/cacerts -storepass mypassword -alias myAlias -file /etc/ssl/certs/my-certificate.cer In your common.runtime.properties file, add the following lines to the LDAP configuration section, substituting your own truststore path and password: druid.auth.basic.ssl.trustStorePath=/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/security/cacerts druid.auth.basic.ssl.protocol=TLS druid.auth.basic.ssl.trustStorePassword=xxxxxx See Druid basic security for details about these properties. You can optionally configure additional LDAPS properties in the common.runtime.properties file. See Druid basic security for more information. Restart Druid. "},{"title":"Troubleshooting tips","type":1,"pageTitle":"Configure LDAP authentication","url":"/docs/27.0.0/operations/auth-ldap#troubleshooting-tips","content":"The following are some ideas to help you troubleshoot issues with LDAP and LDAPS. "},{"title":"Check the coordinator logs","type":1,"pageTitle":"Configure LDAP authentication","url":"/docs/27.0.0/operations/auth-ldap#check-the-coordinator-logs","content":"If your LDAP connection isn't working, check the coordinator logs. See Logging for details. "},{"title":"Check the Druid escalator configuration","type":1,"pageTitle":"Configure LDAP authentication","url":"/docs/27.0.0/operations/auth-ldap#check-the-druid-escalator-configuration","content":"If the coordinator is working but the rest of the cluster isn't, check the escalator configuration. See the Configuration reference for details. You can also check other service logs to see why the services are unable to fetch authorization details from the coordinator. "},{"title":"Check your LDAP server response time","type":1,"pageTitle":"Configure LDAP authentication","url":"/docs/27.0.0/operations/auth-ldap#check-your-ldap-server-response-time","content":"If a user can log in to the Druid console but the landing page shows a 401 error, check your LDAP server response time. In a large organization with a high number of LDAP users, LDAP may be slow to respond, and this can result in a connection timeout. 
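One rough way to gauge the response time is to time a search against the directory from a Druid host, reusing the hypothetical connection details from the earlier examples: time ldapsearch -x -W -H ldap://ip_address:389 -D "cn=admin,dc=example,dc=com" -b "dc=example,dc=com" "(sAMAccountName=myuser)" Keep in mind that the -W flag prompts for a password, so subtract the time spent entering it when reading the result.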
"},{"title":"Basic cluster tuning","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/basic-cluster-tuning","content":"","keywords":""},{"title":"Process-specific guidelines","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#process-specific-guidelines","content":""},{"title":"Historical","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#historical","content":"Heap sizing The biggest contributions to heap usage on Historicals are: Partial unmerged query results from segmentsThe stored maps for lookups. A general rule-of-thumb for sizing the Historical heap is (0.5GiB * number of CPU cores), with an upper limit of ~24GiB. This rule-of-thumb scales using the number of CPU cores as a convenient proxy for hardware size and level of concurrency (note: this formula is not a hard rule for sizing Historical heaps). Having a heap that is too large can result in excessively long GC collection pauses, the ~24GiB upper limit is imposed to avoid this. If caching is enabled on Historicals, the cache is stored on heap, sized by druid.cache.sizeInBytes. Running out of heap on the Historicals can indicate misconfiguration or usage patterns that are overloading the cluster. Lookups If you are using lookups, calculate the total size of the lookup maps being loaded. Druid performs an atomic swap when updating lookup maps (both the old map and the new map will exist in heap during the swap), so the maximum potential heap usage from lookup maps will be (2 * total size of all loaded lookups). Be sure to add (2 * total size of all loaded lookups) to your heap size in addition to the (0.5GiB * number of CPU cores) guideline. Processing Threads and Buffers Please see the General Guidelines for Processing Threads and Buffers section for an overview of processing thread/buffer configuration. On Historicals: druid.processing.numThreads should generally be set to (number of cores - 1): a smaller value can result in CPU underutilization, while going over the number of cores can result in unnecessary CPU contention.druid.processing.buffer.sizeBytes can be set to 500MiB.druid.processing.numMergeBuffers, a 1:4 ratio of merge buffers to processing threads is a reasonable choice for general use. Direct Memory Sizing The processing and merge buffers described above are direct memory buffers. When a historical processes a query, it must open a set of segments for reading. This also requires some direct memory space, described in segment decompression buffers. A formula for estimating direct memory usage follows: (druid.processing.numThreads + druid.processing.numMergeBuffers + 1) * druid.processing.buffer.sizeBytes The + 1 factor is a fuzzy estimate meant to account for the segment decompression buffers. Connection pool sizing Please see the General Connection Pool Guidelines section for an overview of connection pool configuration. For Historicals, druid.server.http.numThreads should be set to a value slightly higher than the sum of druid.broker.http.numConnections across all the Brokers in the cluster. Tuning the cluster so that each Historical can accept 50 queries and 10 non-queries is a reasonable starting point. Segment Cache Size For better query performance, do not allocate segment data to a Historical in excess of the system free memory. When free system memory is greater than or equal to druid.segmentCache.locations, the more segment data the Historical can be held in the memory-mapped segment cache. 
Druid uses the druid.segmentCache.locations to calculate the total segment data size assigned to a Historical. For some rarer use cases, you can override this behavior with druid.server.maxSize property. Number of Historicals The number of Historicals needed in a cluster depends on how much data the cluster has. For good performance, you will want enough Historicals such that each Historical has a good (free system memory / total size of all druid.segmentCache.locations) ratio, as described in the segment cache size section above. Having a smaller number of big servers is generally better than having a large number of small servers, as long as you have enough fault tolerance for your use case. SSD storage We recommend using SSDs for storage on the Historicals, as they handle segment data stored on disk. Total memory usage To estimate total memory usage of the Historical under these guidelines: Heap: (0.5GiB * number of CPU cores) + (2 * total size of lookup maps) + druid.cache.sizeInBytesDirect Memory: (druid.processing.numThreads + druid.processing.numMergeBuffers + 1) * druid.processing.buffer.sizeBytes The Historical will use any available free system memory (i.e., memory not used by the Historical JVM and heap/direct memory buffers or other processes on the system) for memory-mapping of segments on disk. For better query performance, you will want to ensure a good (free system memory / total size of all druid.segmentCache.locations) ratio so that a greater proportion of segments can be kept in memory. Segment sizes matter Be sure to check out segment size optimization to help tune your Historical processes for maximum performance. "},{"title":"Broker","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#broker","content":"Heap sizing The biggest contributions to heap usage on Brokers are: Partial unmerged query results from Historicals and TasksThe segment timeline: this consists of location information (which Historical/Task is serving a segment) for all currently available segments.Cached segment metadata: this consists of metadata, such as per-segment schemas, for all currently available segments. The Broker heap requirements scale based on the number of segments in the cluster, and the total data size of the segments. The heap size will vary based on data size and usage patterns, but 4GiB to 8GiB is a good starting point for a small or medium cluster (~15 servers or less). For a rough estimate of memory requirements on the high end, very large clusters with a node count on the order of ~100 nodes may need Broker heaps of 30GiB-60GiB. If caching is enabled on the Broker, the cache is stored on heap, sized by druid.cache.sizeInBytes. Direct memory sizing On the Broker, the amount of direct memory needed depends on how many merge buffers (used for merging GroupBys) are configured. The Broker does not generally need processing threads or processing buffers, as query results are merged on-heap in the HTTP connection threads instead. druid.processing.buffer.sizeBytes can be set to 500MiB.druid.processing.numMergeBuffers: set this to the same value as on Historicals or a bit higher Connection pool sizing Please see the General Connection Pool Guidelines section for an overview of connection pool configuration. On the Brokers, please ensure that the sum of druid.broker.http.numConnections across all the Brokers is slightly lower than the value of druid.server.http.numThreads on your Historicals and Tasks. 
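As a worked example with hypothetical values: if you run 2 Brokers, each with druid.broker.http.numConnections=25, the sum is 50, so setting druid.server.http.numThreads to roughly 60 on each Historical and Task leaves headroom for the 50 queries and 10 non-queries mentioned above.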
druid.server.http.numThreads on the Broker should be set to a value slightly higher than druid.broker.http.numConnections on the same Broker. Tuning the cluster so that each Historical can accept 50 queries and 10 non-queries, adjusting the Brokers accordingly, is a reasonable starting point. Broker backpressure When retrieving query results from Historical processes or Tasks, the Broker can optionally specify a maximum buffer size for queued, unread data, and exert backpressure on the channel to the Historical or Tasks when the limit is reached (causing writes to the channel to block on the Historical/Task side until the Broker is able to drain some data from the channel). This buffer size is controlled by the druid.broker.http.maxQueuedBytes setting. The limit is divided across the number of Historicals/Tasks that a query would hit: suppose you have druid.broker.http.maxQueuedBytes set to 5MiB, and the Broker receives a query that needs to be fanned out to 2 Historicals. Each per-Historical channel would get a 2.5MiB buffer in this case. You can generally set this to a value of approximately 2MiB * number of Historicals. As your cluster scales up with more Historicals and Tasks, consider increasing this buffer size and increasing the Broker heap accordingly. If the buffer is too small, this can lead to inefficient queries due to the buffer filling up rapidly and stalling the channel. If the buffer is too large, this puts more memory pressure on the Broker due to more queued result data in the HTTP channels. Number of brokers A 1:15 ratio of Brokers to Historicals is a reasonable starting point (this is not a hard rule). If you need Broker HA, you can deploy 2 initially and then use the 1:15 ratio guideline for additional Brokers. Total memory usage To estimate total memory usage of the Broker under these guidelines: Heap: allocated heap sizeDirect Memory: (druid.processing.numMergeBuffers + 1) * druid.processing.buffer.sizeBytes "},{"title":"MiddleManager","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#middlemanager","content":"The MiddleManager is a lightweight task controller/manager that launches Task processes, which perform ingestion work. MiddleManager heap sizing The MiddleManager itself does not require many resources; you can generally set the heap to ~128MiB. SSD storage We recommend using SSDs for storage on the MiddleManagers, as the Tasks launched by MiddleManagers handle segment data stored on disk. Task Count The number of tasks a MiddleManager can launch is controlled by the druid.worker.capacity setting. The number of workers needed in your cluster depends on how many concurrent ingestion tasks you need to run for your use cases. The number of workers that can be launched on a given machine depends on the size of resources allocated per worker and available system resources. You can allocate more MiddleManager machines to your cluster to add task capacity. Task configurations The following section describes configuration for Tasks launched by the MiddleManager. The Tasks can be queried and perform ingestion workloads, so they require more resources than the MM. Task heap sizing A 1GiB heap is usually enough for Tasks. Lookups If you are using lookups, calculate the total size of the lookup maps being loaded. 
Druid performs an atomic swap when updating lookup maps (both the old map and the new map will exist in heap during the swap), so the maximum potential heap usage from lookup maps will be (2 * total size of all loaded lookups). Be sure to add (2 * total size of all loaded lookups) to your Task heap size if you are using lookups. Task processing threads and buffers For Tasks, 1 or 2 processing threads are often enough, as the Tasks tend to hold much less queryable data than Historical processes. druid.indexer.fork.property.druid.processing.numThreads: set this to 1 or 2druid.indexer.fork.property.druid.processing.numMergeBuffers: set this to 2druid.indexer.fork.property.druid.processing.buffer.sizeBytes: can be set to 100MiB Direct memory sizing The processing and merge buffers described above are direct memory buffers. When a Task processes a query, it must open a set of segments for reading. This also requires some direct memory space, described in segment decompression buffers. An ingestion Task also needs to merge partial ingestion results, which requires direct memory space, described in segment merging. A formula for estimating direct memory usage follows: (druid.processing.numThreads + druid.processing.numMergeBuffers + 1) * druid.processing.buffer.sizeBytes The + 1 factor is a fuzzy estimate meant to account for the segment decompression buffers and dictionary merging buffers. Connection pool sizing Please see the General Connection Pool Guidelines section for an overview of connection pool configuration. For Tasks, druid.server.http.numThreads should be set to a value slightly higher than the sum of druid.broker.http.numConnections across all the Brokers in the cluster. Tuning the cluster so that each Task can accept 50 queries and 10 non-queries is a reasonable starting point. Total memory usage To estimate total memory usage of a Task under these guidelines: Heap: 1GiB + (2 * total size of lookup maps)Direct Memory: (druid.processing.numThreads + druid.processing.numMergeBuffers + 1) * druid.processing.buffer.sizeBytes The total memory usage of the MiddleManager + Tasks: MM heap size + druid.worker.capacity * (single task memory usage) Configuration guidelines for specific ingestion types Kafka/Kinesis ingestion If you use the Kafka Indexing Service or Kinesis Indexing Service, the number of tasks required will depend on the number of partitions and your taskCount/replica settings. On top of those requirements, allocating more task slots in your cluster is a good idea, so that you have free task slots available for other tasks, such as compaction tasks. Hadoop ingestion If you are only using Hadoop-based batch ingestion with no other ingestion types, you can lower the amount of resources allocated per Task. Batch ingestion tasks do not need to answer queries, and the bulk of the ingestion workload will be executed on the Hadoop cluster, so the Tasks do not require much resources. Parallel native ingestion If you are using parallel native batch ingestion, allocating more available task slots is a good idea and will allow greater ingestion concurrency. "},{"title":"Coordinator","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#coordinator","content":"The main performance-related setting on the Coordinator is the heap size. The heap requirements of the Coordinator scale with the number of servers, segments, and tasks in the cluster. 
You can set the Coordinator heap to the same size as your Broker heap, or slightly smaller: both services have to process cluster-wide state and answer API requests about this state. Dynamic Configuration percentOfSegmentsToConsiderPerMove The default value is 100. This means that the Coordinator will consider all segments when it is looking for a segment to move. The Coordinator makes a weighted choice, with segments on servers with the least available capacity being the most likely to be moved. This weighted selection strategy means that the segments on the servers that have the most available capacity are the least likely to be chosen. As the number of segments in the cluster increases, the probability of choosing the Nth segment to move decreases, where N is the last segment considered for moving. An admin can use this config to skip consideration of that Nth segment. Instead of skipping a precise number of segments, this config skips a percentage of the segments in the cluster. For example, with the value set to 25, only the first 25% of segments will be considered as candidates to be moved. This 25% of segments will come from the servers that have the least available capacity. In this example, each time the Coordinator looks for a segment to move, it will consider 75% fewer segments than it did when the configuration was 100. On clusters with hundreds of thousands of segments, this can add up to meaningful coordination time savings. General recommendations for this configuration: If you are not worried about the amount of time it takes your Coordinator to complete a full coordination cycle, you likely do not need to modify this config. If you are frustrated with how long the Coordinator takes to run a full coordination cycle, and you have set the Coordinator dynamic config maxSegmentsToMove to a value above 0 (the default is 5), setting this config to a non-default value can help shorten coordination time. The recommended starting point value is 66. It represents a meaningful decrease in the percentage of segments considered while also not being too aggressive (you will consider one-third fewer segments per move operation with this value). The impact that modifying this config will have on your coordination time will be a function of how low you set the config value, the value of maxSegmentsToMove, and the total number of segments in your cluster. If your cluster has a relatively small number of segments, or you choose to move few segments per coordination cycle, there may not be much savings to be had here. "},{"title":"Overlord","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#overlord","content":"The main performance-related setting on the Overlord is the heap size. The heap requirements of the Overlord scale primarily with the number of running Tasks. The Overlord tends to require fewer resources than the Coordinator or Broker. You can generally set the Overlord heap to a value that's 25-50% of your Coordinator heap. "},{"title":"Router","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#router","content":"The Router has light resource requirements, as it proxies requests to Brokers without performing much computational work itself. You can assign it a 256MiB heap as a starting point, growing it if needed. 
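To make the relative heap guidance for these services concrete, here is a hedged illustration with assumed values (not recommendations): if your Broker heap is 8GiB, you might run the Coordinator with -Xms8g -Xmx8g (the same as the Broker, or slightly smaller), the Overlord with -Xmx2g to -Xmx4g (25-50% of the Coordinator heap), and the Router with -Xmx256m, growing any of these if monitoring shows memory pressure. 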
"},{"title":"Guidelines for processing threads and buffers","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#guidelines-for-processing-threads-and-buffers","content":""},{"title":"Processing threads","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#processing-threads","content":"The druid.processing.numThreads configuration controls the size of the processing thread pool used for computing query results. The size of this pool limits how many queries can be concurrently processed. "},{"title":"Processing buffers","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#processing-buffers","content":"druid.processing.buffer.sizeBytes is a closely related property that controls the size of the off-heap buffers allocated to the processing threads. One buffer is allocated for each processing thread. A size between 500MiB and 1GiB is a reasonable choice for general use. The TopN and GroupBy queries use these buffers to store intermediate computed results. As the buffer size increases, more data can be processed in a single pass. "},{"title":"GroupBy merging buffers","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#groupby-merging-buffers","content":"If you plan to issue GroupBy V2 queries, druid.processing.numMergeBuffers is an important configuration property. GroupBy V2 queries use an additional pool of off-heap buffers for merging query results. These buffers have the same size as the processing buffers described above, set by the druid.processing.buffer.sizeBytes property. Non-nested GroupBy V2 queries require 1 merge buffer per query, while a nested GroupBy V2 query requires 2 merge buffers (regardless of the depth of nesting). The number of merge buffers determines the number of GroupBy V2 queries that can be processed concurrently. "},{"title":"Connection pool guidelines","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#connection-pool-guidelines","content":"Each Druid process has a configuration property for the number of HTTP connection handling threads, druid.server.http.numThreads. The number of HTTP server threads limits how many concurrent HTTP API requests a given process can handle. "},{"title":"Sizing the connection pool for queries","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#sizing-the-connection-pool-for-queries","content":"The Broker has a setting druid.broker.http.numConnections that controls how many outgoing connections it can make to a given Historical or Task process. These connections are used to send queries to the Historicals or Tasks, with one connection per query; the value of druid.broker.http.numConnections is effectively a limit on the number of concurrent queries that a given broker can process. Suppose we have a cluster with 3 Brokers and druid.broker.http.numConnections is set to 10. This means that each Broker in the cluster will open up to 10 connections to each individual Historical or Task (for a total of 30 incoming query connections per Historical/Task). On the Historical/Task side, this means that druid.server.http.numThreads must be set to a value at least as high as the sum of druid.broker.http.numConnections across all the Brokers in the cluster. 
In practice, you will want to allocate additional server threads for non-query API requests such as status checks; adding 10 threads for those is a good general guideline. Using the example with 3 Brokers in the cluster and druid.broker.http.numConnections set to 10, a value of 40 would be appropriate for druid.server.http.numThreads on Historicals and Tasks. As a starting point, allowing for 50 concurrent queries (requests that read segment data from datasources) + 10 non-query requests (other requests like status checks) on Historicals and Tasks is reasonable (i.e., set druid.server.http.numThreads to 60 there), while sizing druid.broker.http.numConnections based on the number of Brokers in the cluster to fit within the 50 query connection limit per Historical/Task. If the connection pool across Brokers and Historicals/Tasks is too small, the cluster will be underutilized as there are too few concurrent query slots. If the connection pool is too large, you may get out-of-memory errors due to excessive concurrent load, and increased resource contention. The connection pool sizing matters most when you require QoS-type guarantees and use query priorities; otherwise, these settings can be more loosely configured. If your cluster usage patterns are heavily biased towards a high number of small concurrent queries (where each query takes less than ~15ms), enlarging the connection pool can be a good idea. The 50/10 general guideline here is a rough starting point, since different queries impose different amounts of load on the system. To size the connection pool more exactly for your cluster, you would need to know the execution times for your queries and ensure that the rate of incoming queries does not exceed your "drain" rate. "},{"title":"Per-segment direct memory buffers","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#per-segment-direct-memory-buffers","content":""},{"title":"Segment decompression","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#segment-decompression","content":"When opening a segment for reading during segment merging or query processing, Druid allocates a 64KiB off-heap decompression buffer for each column being read. Thus, there is additional direct memory overhead of (64KiB * number of columns read per segment * number of segments read) when reading segments. "},{"title":"Segment merging","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#segment-merging","content":"In addition to the segment decompression overhead described above, when a set of segments is merged during ingestion, a direct buffer is allocated for every String-typed column, for every segment in the set to be merged. The size of these buffers is equal to the cardinality of the String column within its segment, times 4 bytes (the buffers store integers). For example, if two segments are being merged, the first segment having a single String column with cardinality 1000, and the second segment having a String column with cardinality 500, the merge step would allocate (1000 + 500) * 4 = 6000 bytes of direct memory. These buffers are used for merging the value dictionaries of the String column across segments. These "dictionary merging buffers" are independent of the "merge buffers" configured by druid.processing.numMergeBuffers. 
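For a rough sense of scale of the per-column decompression overhead described under Segment decompression above, using assumed numbers: a query that reads 20 columns from each of 100 segments needs about 64KiB * 20 * 100 = 128,000KiB, or about 125MiB, of direct memory for decompression buffers, in addition to the processing and merge buffers. 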
"},{"title":"General recommendations","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#general-recommendations","content":""},{"title":"JVM tuning","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#jvm-tuning","content":"Garbage Collection We recommend using the G1GC garbage collector: -XX:+UseG1GC Enabling process termination on out-of-memory errors is useful as well, since the process generally will not recover from such a state, and it's better to restart the process: -XX:+ExitOnOutOfMemoryError Other generally useful JVM flags -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.io.tmpdir=<should not be volatile tmpfs and also has good read and write speed. Strongly recommended to avoid using NFS mount> -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager -Dorg.jboss.logging.provider=slf4j -Dnet.spy.log.LoggerImpl=net.spy.memcached.compat.log.SLF4JLogger -Dlog4j.shutdownCallbackRegistry=org.apache.druid.common.config.Log4jShutdown -Dlog4j.shutdownHookEnabled=true -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -Xloggc:/var/logs/druid/historical.gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=50 -XX:GCLogFileSize=10m -XX:+ExitOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/logs/druid/historical.hprof -XX:MaxDirectMemorySize=1g info Please note that the flag settings above represent sample, general guidelines only. Be careful to use values appropriate for your specific scenario and be sure to test any changes in staging environments. ExitOnOutOfMemoryError flag is only supported starting JDK 8u92 . For older versions, -XX:OnOutOfMemoryError='kill -9 %p' can be used. MaxDirectMemorySize restricts JVM from allocating more than specified limit, by setting it to unlimited JVM restriction is lifted and OS level memory limits would still be effective. It's still important to make sure that Druid is not configured to allocate more off-heap memory than your machine has available. Important settings here include druid.processing.numThreads, druid.processing.numMergeBuffers, and druid.processing.buffer.sizeBytes. Additionally, for large JVM heaps, here are a few Garbage Collection efficiency guidelines that have been known to help in some cases. Mount /tmp on tmpfs. See The Four Month Bug: JVM statistics cause garbage collection pauses.On Disk-IO intensive processes (e.g., Historical and MiddleManager), GC and Druid logs should be written to a different disk than where data is written.Disable Transparent Huge Pages.Try disabling biased locking by using -XX:-UseBiasedLocking JVM flag. See Logging Stop-the-world Pauses in JVM. "},{"title":"Use UTC timezone","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#use-utc-timezone","content":"We recommend using UTC timezone for all your events and across your hosts, not just for Druid, but for all data infrastructure. This can greatly mitigate potential query problems with inconsistent timezones. To query in a non-UTC timezone see query granularities "},{"title":"System configuration","type":1,"pageTitle":"Basic cluster tuning","url":"/docs/27.0.0/operations/basic-cluster-tuning#system-configuration","content":"SSDs SSDs are highly recommended for Historical, MiddleManager, and Indexer processes if you are not running a cluster that is entirely in memory. 
SSDs can greatly mitigate the time required to page data in and out of memory. JBOD vs RAID Historical processes store a large number of segments on disk and support specifying multiple paths for storing them. Typically, hosts have multiple disks configured with RAID, which makes them look like a single disk to the OS. RAID can add overhead, especially if it is software-based rather than backed by a hardware controller, so Historicals might get better disk throughput with JBOD. Swap space We recommend not using swap space for Historical, MiddleManager, and Indexer processes: with their large number of memory-mapped segment files, swapping can lead to poor and unpredictable performance. Linux limits For Historical, MiddleManager, and Indexer processes (and for really large clusters, Broker processes), you might need to adjust some Linux system limits to account for a large number of open files, a large number of network connections, or a large number of memory-mapped files. ulimit The limit on the number of open files can be set permanently by editing /etc/security/limits.conf. This value should be substantially greater than the number of segment files that will exist on the server. max_map_count Historical processes, and to a lesser extent MiddleManager and Indexer processes, memory-map segment files, so depending on the number of segments per server, /proc/sys/vm/max_map_count might also need to be adjusted. Depending on the variant of Linux, this might be done via sysctl by placing a file in /etc/sysctl.d/ that sets vm.max_map_count. "},{"title":"Automated cleanup for metadata records","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/clean-metadata-store","content":"","keywords":""},{"title":"Automated cleanup strategies","type":1,"pageTitle":"Automated cleanup for metadata records","url":"/docs/27.0.0/operations/clean-metadata-store#automated-cleanup-strategies","content":"There are several cases when you should consider automated cleanup of the metadata related to deleted datasources: If you know you have many high-churn datasources, for example, you have scripts that create and delete supervisors regularly. If you have issues with the hard disk for your metadata database filling up. If you run into performance issues with the metadata database. For example, API calls are very slow or fail to execute. If you have compliance requirements to keep audit records and you enable automated cleanup for audit records, use alternative methods to preserve audit metadata, for example, by periodically exporting audit metadata records to external storage. "},{"title":"Configure automated metadata cleanup","type":1,"pageTitle":"Automated cleanup for metadata records","url":"/docs/27.0.0/operations/clean-metadata-store#configure-automated-metadata-cleanup","content":"You can configure cleanup for each entity separately, as described in this section. Define the properties in the coordinator/runtime.properties file. The cleanup of one entity may depend on the cleanup of another entity as follows: You have to configure a kill task for segment records before you can configure automated cleanup for rules or compaction configuration. You have to schedule the metadata management tasks to run at the same or higher frequency as your most frequent cleanup job. For example, if your most frequent cleanup job is every hour, set the metadata store management period to one hour or less: druid.coordinator.period.metadataStoreManagementPeriod=P1H. For details on configuration properties, see Metadata management. 
If you want to skip the details, check out the example for configuring automated metadata cleanup. "},{"title":"Segment records and segments in deep storage (kill task)","type":1,"pageTitle":"Automated cleanup for metadata records","url":"/docs/27.0.0/operations/clean-metadata-store#segment-records-and-segments-in-deep-storage-kill-task","content":"info The kill task is the only configuration in this topic that affects actual data in deep storage and not simply metadata or logs. Segment records and segments in deep storage become eligible for deletion when both of the following conditions hold: When they meet the eligibility requirement of kill task datasource configuration according to killDataSourceWhitelist set in the Coordinator dynamic configuration. See Dynamic configuration.When the durationToRetain time has passed since their creation. Kill tasks use the following configuration: druid.coordinator.kill.on: When true, enables the Coordinator to submit a kill task for unused segments, which deletes them completely from metadata store and from deep storage. Only applies to the specified datasources in the dynamic configuration parameter killDataSourceWhitelist. If killDataSourceWhitelist is not set or empty, then kill tasks can be submitted for all datasources.druid.coordinator.kill.period: Defines the frequency in ISO 8601 format for the cleanup job to check for and delete eligible segments. Defaults to P1D. Must be greater than druid.coordinator.period.indexingPeriod. druid.coordinator.kill.durationToRetain: Defines the retention period in ISO 8601 format after creation that segments become eligible for deletion.druid.coordinator.kill.maxSegments: Defines the maximum number of segments to delete per kill task. "},{"title":"Audit records","type":1,"pageTitle":"Automated cleanup for metadata records","url":"/docs/27.0.0/operations/clean-metadata-store#audit-records","content":"All audit records become eligible for deletion when the durationToRetain time has passed since their creation. Audit cleanup uses the following configuration: druid.coordinator.kill.audit.on: When true, enables cleanup for audit records.druid.coordinator.kill.audit.period: Defines the frequency in ISO 8601 format for the cleanup job to check for and delete eligible audit records. Defaults to P1D.druid.coordinator.kill.audit.durationToRetain: Defines the retention period in ISO 8601 format after creation that audit records become eligible for deletion. "},{"title":"Supervisor records","type":1,"pageTitle":"Automated cleanup for metadata records","url":"/docs/27.0.0/operations/clean-metadata-store#supervisor-records","content":"Supervisor records become eligible for deletion when the supervisor is terminated and the durationToRetain time has passed since their creation. Supervisor cleanup uses the following configuration: druid.coordinator.kill.supervisor.on: When true, enables cleanup for supervisor records.druid.coordinator.kill.supervisor.period: Defines the frequency in ISO 8601 format for the cleanup job to check for and delete eligible supervisor records. Defaults to P1D.druid.coordinator.kill.supervisor.durationToRetain: Defines the retention period in ISO 8601 format after creation that supervisor records become eligible for deletion. 
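For instance, a minimal hedged snippet for supervisor record cleanup with assumed values (a fuller example configuration appears later in this topic): druid.coordinator.kill.supervisor.on=true druid.coordinator.kill.supervisor.period=P1D druid.coordinator.kill.supervisor.durationToRetain=P7D 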
"},{"title":"Rules records","type":1,"pageTitle":"Automated cleanup for metadata records","url":"/docs/27.0.0/operations/clean-metadata-store#rules-records","content":"Rule records become eligible for deletion when all segments for the datasource have been killed by the kill task and the durationToRetain time has passed since their creation. Automated cleanup for rules requires a kill task. Rule cleanup uses the following configuration: druid.coordinator.kill.rule.on: When true, enables cleanup for rules records.druid.coordinator.kill.rule.period: Defines the frequency in ISO 8601 format for the cleanup job to check for and delete eligible rules records. Defaults to P1D.druid.coordinator.kill.rule.durationToRetain: Defines the retention period in ISO 8601 format after creation that rules records become eligible for deletion. "},{"title":"Compaction configuration records","type":1,"pageTitle":"Automated cleanup for metadata records","url":"/docs/27.0.0/operations/clean-metadata-store#compaction-configuration-records","content":"Druid retains all compaction configuration records by default, which should be suitable for most use cases. If you create and delete short-lived datasources with high frequency, and you set auto compaction configuration on those datasources, then consider turning on automated cleanup of compaction configuration records. info With automated cleanup of compaction configuration records, if you create a compaction configuration for some datasource before the datasource exists, for example if initial ingestion is still ongoing, Druid may remove the compaction configuration. To prevent the configuration from being prematurely removed, wait for the datasource to be created before applying the compaction configuration to the datasource. Unlike other metadata records, compaction configuration records do not have a retention period set by durationToRetain. Druid deletes compaction configuration records at every cleanup cycle for inactive datasources, which do not have segments either used or unused. Compaction configuration records in the druid_config table become eligible for deletion after all segments for the datasource have been killed by the kill task. Automated cleanup for compaction configuration requires a kill task. Compaction configuration cleanup uses the following configuration: druid.coordinator.kill.compaction.on: When true, enables cleanup for compaction configuration records.druid.coordinator.kill.compaction.period: Defines the frequency in ISO 8601 format for the cleanup job to check for and delete eligible compaction configuration records. Defaults to P1D. info If you already have an extremely large compaction configuration, you may not be able to delete compaction configuration due to size limits with the audit log. In this case you can set druid.audit.manager.maxPayloadSizeBytes and druid.audit.manager.skipNullField to avoid the auditing issue. See Audit logging. "},{"title":"Datasource records created by supervisors","type":1,"pageTitle":"Automated cleanup for metadata records","url":"/docs/27.0.0/operations/clean-metadata-store#datasource-records-created-by-supervisors","content":"Datasource records created by supervisors become eligible for deletion when the supervisor is terminated or does not exist in the druid_supervisors table and the durationToRetain time has passed since their creation. 
Datasource cleanup uses the following configuration: druid.coordinator.kill.datasource.on: When true, enables cleanup datasources created by supervisors.druid.coordinator.kill.datasource.period: Defines the frequency in ISO 8601 format for the cleanup job to check for and delete eligible datasource records. Defaults to P1D.druid.coordinator.kill.datasource.durationToRetain: Defines the retention period in ISO 8601 format after creation that datasource records become eligible for deletion. "},{"title":"Indexer task logs","type":1,"pageTitle":"Automated cleanup for metadata records","url":"/docs/27.0.0/operations/clean-metadata-store#indexer-task-logs","content":"You can configure the Overlord to periodically delete indexer task logs and associated metadata. During cleanup, the Overlord removes the following: Indexer task logs from deep storage.Indexer task log metadata from the tasks and tasklogs tables in metadata storage (named druid_tasks and druid_tasklogs by default). Druid no longer uses the tasklogs table, and the table is always empty. To configure cleanup of task logs by the Overlord, set the following properties in the overlord/runtime.properties file. Indexer task log cleanup on the Overlord uses the following configuration: druid.indexer.logs.kill.enabled: When true, enables cleanup of task logs.druid.indexer.logs.kill.durationToRetain: Defines the length of time in milliseconds to retain task logs.druid.indexer.logs.kill.initialDelay: Defines the length of time in milliseconds after the Overlord starts before it executes its first job to kill task logs.druid.indexer.logs.kill.delay: The length of time in milliseconds between jobs to kill task logs. For more detail, see Task logging. "},{"title":"Disable automated metadata cleanup","type":1,"pageTitle":"Automated cleanup for metadata records","url":"/docs/27.0.0/operations/clean-metadata-store#disable-automated-metadata-cleanup","content":"Druid automatically cleans up metadata records, excluding compaction configuration records and indexer task logs. To disable automated metadata cleanup, set the following properties in the coordinator/runtime.properties file: # Keep unused segments druid.coordinator.kill.on=false # Keep audit records druid.coordinator.kill.audit.on=false # Keep supervisor records druid.coordinator.kill.supervisor.on=false # Keep rules records druid.coordinator.kill.rule.on=false # Keep datasource records created by supervisors druid.coordinator.kill.datasource.on=false ## Example configuration for automated metadata cleanup Consider a scenario where you have scripts to create and delete hundreds of datasources and related entities a day. You do not want to fill your metadata store with leftover records. The datasources and related entities tend to persist for only one or two days. Therefore, you want to run a cleanup job that identifies and removes leftover records that are at least four days old. The exception is for audit logs, which you need to retain for 30 days: ... # Schedule the metadata management store task for every hour: druid.coordinator.period.metadataStoreManagementPeriod=P1H # Set a kill task to poll every day to delete Segment records and segments # in deep storage > 4 days old. When druid.coordinator.kill.on is set to true, # you can set killDataSourceWhitelist in the dynamic configuration to limit # the datasources that can be killed. # Required also for automated cleanup of rules and compaction configuration. 
druid.coordinator.kill.on=true druid.coordinator.kill.period=P1D druid.coordinator.kill.durationToRetain=P4D druid.coordinator.kill.maxSegments=1000 # Poll every day to delete audit records > 30 days old druid.coordinator.kill.audit.on=true druid.coordinator.kill.audit.period=P1D druid.coordinator.kill.audit.durationToRetain=P30D # Poll every day to delete supervisor records > 4 days old druid.coordinator.kill.supervisor.on=true druid.coordinator.kill.supervisor.period=P1D druid.coordinator.kill.supervisor.durationToRetain=P4D # Poll every day to delete rules records > 4 days old druid.coordinator.kill.rule.on=true druid.coordinator.kill.rule.period=P1D druid.coordinator.kill.rule.durationToRetain=P4D # Poll every day to delete compaction configuration records druid.coordinator.kill.compaction.on=true druid.coordinator.kill.compaction.period=P1D # Poll every day to delete datasource records created by supervisors > 4 days old druid.coordinator.kill.datasource.on=true druid.coordinator.kill.datasource.period=P1D druid.coordinator.kill.datasource.durationToRetain=P4D ... "},{"title":"Learn more","type":1,"pageTitle":"Automated cleanup for metadata records","url":"/docs/27.0.0/operations/clean-metadata-store#learn-more","content":"See the following topics for more information: Metadata management for metadata store configuration reference.Metadata storage for an overview of the metadata storage database. "},{"title":"SQL-based ingestion query examples","type":0,"sectionRef":"#","url":"/docs/27.0.0/multi-stage-query/examples","content":"","keywords":""},{"title":"INSERT with no rollup","type":1,"pageTitle":"SQL-based ingestion query examples","url":"/docs/27.0.0/multi-stage-query/examples#insert-with-no-rollup","content":"This example inserts data into a table named w000 without performing any data rollup: Show the query INSERT INTO w000 SELECT TIME_PARSE("timestamp") AS __time, isRobot, channel, flags, isUnpatrolled, page, diffUrl, added, comment, commentLength, isNew, isMinor, delta, isAnonymous, user, deltaBucket, deleted, namespace, cityName, countryName, regionIsoCode, metroCode, countryIsoCode, regionName FROM TABLE( EXTERN( '{"type":"http","uris":["https://druid.apache.org/data/wikipedia.json.gz"]}', '{"type":"json"}', '[{"name":"isRobot","type":"string"},{"name":"channel","type":"string"},{"name":"timestamp","type":"string"},{"name":"flags","type":"string"},{"name":"isUnpatrolled","type":"string"},{"name":"page","type":"string"},{"name":"diffUrl","type":"string"},{"name":"added","type":"long"},{"name":"comment","type":"string"},{"name":"commentLength","type":"long"},{"name":"isNew","type":"string"},{"name":"isMinor","type":"string"},{"name":"delta","type":"long"},{"name":"isAnonymous","type":"string"},{"name":"user","type":"string"},{"name":"deltaBucket","type":"long"},{"name":"deleted","type":"long"},{"name":"namespace","type":"string"},{"name":"cityName","type":"string"},{"name":"countryName","type":"string"},{"name":"regionIsoCode","type":"string"},{"name":"metroCode","type":"long"},{"name":"countryIsoCode","type":"string"},{"name":"regionName","type":"string"}]' ) ) PARTITIONED BY HOUR CLUSTERED BY channel "},{"title":"INSERT with rollup","type":1,"pageTitle":"SQL-based ingestion query examples","url":"/docs/27.0.0/multi-stage-query/examples#insert-with-rollup","content":"This example inserts data into a table named kttm_data and performs data rollup. This example implements the recommendations described in Rollup. 
Show the query INSERT INTO "kttm_rollup" WITH kttm_data AS ( SELECT * FROM TABLE( EXTERN( '{"type":"http","uris":["https://static.imply.io/example-data/kttm-v2/kttm-v2-2019-08-25.json.gz"]}', '{"type":"json"}', '[{"name":"timestamp","type":"string"},{"name":"agent_category","type":"string"},{"name":"agent_type","type":"string"},{"name":"browser","type":"string"},{"name":"browser_version","type":"string"},{"name":"city","type":"string"},{"name":"continent","type":"string"},{"name":"country","type":"string"},{"name":"version","type":"string"},{"name":"event_type","type":"string"},{"name":"event_subtype","type":"string"},{"name":"loaded_image","type":"string"},{"name":"adblock_list","type":"string"},{"name":"forwarded_for","type":"string"},{"name":"language","type":"string"},{"name":"number","type":"long"},{"name":"os","type":"string"},{"name":"path","type":"string"},{"name":"platform","type":"string"},{"name":"referrer","type":"string"},{"name":"referrer_host","type":"string"},{"name":"region","type":"string"},{"name":"remote_address","type":"string"},{"name":"screen","type":"string"},{"name":"session","type":"string"},{"name":"session_length","type":"long"},{"name":"timezone","type":"string"},{"name":"timezone_offset","type":"long"},{"name":"window","type":"string"}]' ) )) SELECT FLOOR(TIME_PARSE("timestamp") TO MINUTE) AS __time, session, agent_category, agent_type, browser, browser_version, MV_TO_ARRAY("language") AS "language", -- Multi-value string dimension os, city, country, forwarded_for AS ip_address, COUNT(*) AS "cnt", SUM(session_length) AS session_length, APPROX_COUNT_DISTINCT_DS_HLL(event_type) AS unique_event_types FROM kttm_data WHERE os = 'iOS' GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 PARTITIONED BY HOUR CLUSTERED BY browser, session "},{"title":"INSERT for reindexing an existing datasource","type":1,"pageTitle":"SQL-based ingestion query examples","url":"/docs/27.0.0/multi-stage-query/examples#insert-for-reindexing-an-existing-datasource","content":"This example aggregates data from a table named w000 and inserts the result into w002. 
Show the query INSERT INTO w002 SELECT FLOOR(__time TO MINUTE) AS __time, channel, countryIsoCode, countryName, regionIsoCode, regionName, page, COUNT(*) AS cnt, SUM(added) AS sum_added, SUM(deleted) AS sum_deleted FROM w000 GROUP BY 1, 2, 3, 4, 5, 6, 7 PARTITIONED BY HOUR CLUSTERED BY page "},{"title":"INSERT with JOIN","type":1,"pageTitle":"SQL-based ingestion query examples","url":"/docs/27.0.0/multi-stage-query/examples#insert-with-join","content":"This example inserts data into a table named w003 and joins data from two sources: Show the query INSERT INTO w003 WITH wikidata AS (SELECT * FROM TABLE( EXTERN( '{"type":"http","uris":["https://druid.apache.org/data/wikipedia.json.gz"]}', '{"type":"json"}', '[{"name":"isRobot","type":"string"},{"name":"channel","type":"string"},{"name":"timestamp","type":"string"},{"name":"flags","type":"string"},{"name":"isUnpatrolled","type":"string"},{"name":"page","type":"string"},{"name":"diffUrl","type":"string"},{"name":"added","type":"long"},{"name":"comment","type":"string"},{"name":"commentLength","type":"long"},{"name":"isNew","type":"string"},{"name":"isMinor","type":"string"},{"name":"delta","type":"long"},{"name":"isAnonymous","type":"string"},{"name":"user","type":"string"},{"name":"deltaBucket","type":"long"},{"name":"deleted","type":"long"},{"name":"namespace","type":"string"},{"name":"cityName","type":"string"},{"name":"countryName","type":"string"},{"name":"regionIsoCode","type":"string"},{"name":"metroCode","type":"long"},{"name":"countryIsoCode","type":"string"},{"name":"regionName","type":"string"}]' ) )), countries AS (SELECT * FROM TABLE( EXTERN( '{"type":"http","uris":["https://static.imply.io/example-data/lookup/countries.tsv"]}', '{"type":"tsv","findColumnsFromHeader":true}', '[{"name":"Country","type":"string"},{"name":"Capital","type":"string"},{"name":"ISO3","type":"string"},{"name":"ISO2","type":"string"}]' ) )) SELECT TIME_PARSE("timestamp") AS __time, isRobot, channel, flags, isUnpatrolled, page, diffUrl, added, comment, commentLength, isNew, isMinor, delta, isAnonymous, user, deltaBucket, deleted, namespace, cityName, countryName, regionIsoCode, metroCode, countryIsoCode, countries.Capital AS countryCapital, regionName FROM wikidata LEFT JOIN countries ON wikidata.countryIsoCode = countries.ISO2 PARTITIONED BY HOUR "},{"title":"REPLACE an entire datasource","type":1,"pageTitle":"SQL-based ingestion query examples","url":"/docs/27.0.0/multi-stage-query/examples#replace-an-entire-datasource","content":"This example replaces the entire datasource used in the table w007 with the new query data while dropping the old data: Show the query REPLACE INTO w007 OVERWRITE ALL SELECT TIME_PARSE("timestamp") AS __time, isRobot, channel, flags, isUnpatrolled, page, diffUrl, added, comment, commentLength, isNew, isMinor, delta, isAnonymous, user, deltaBucket, deleted, namespace, cityName, countryName, regionIsoCode, metroCode, countryIsoCode, regionName FROM TABLE( EXTERN( '{"type":"http","uris":["https://druid.apache.org/data/wikipedia.json.gz"]}', '{"type":"json"}', 
'[{"name":"isRobot","type":"string"},{"name":"channel","type":"string"},{"name":"timestamp","type":"string"},{"name":"flags","type":"string"},{"name":"isUnpatrolled","type":"string"},{"name":"page","type":"string"},{"name":"diffUrl","type":"string"},{"name":"added","type":"long"},{"name":"comment","type":"string"},{"name":"commentLength","type":"long"},{"name":"isNew","type":"string"},{"name":"isMinor","type":"string"},{"name":"delta","type":"long"},{"name":"isAnonymous","type":"string"},{"name":"user","type":"string"},{"name":"deltaBucket","type":"long"},{"name":"deleted","type":"long"},{"name":"namespace","type":"string"},{"name":"cityName","type":"string"},{"name":"countryName","type":"string"},{"name":"regionIsoCode","type":"string"},{"name":"metroCode","type":"long"},{"name":"countryIsoCode","type":"string"},{"name":"regionName","type":"string"}]' ) ) PARTITIONED BY HOUR CLUSTERED BY channel "},{"title":"REPLACE for replacing a specific time segment","type":1,"pageTitle":"SQL-based ingestion query examples","url":"/docs/27.0.0/multi-stage-query/examples#replace-for-replacing-a-specific-time-segment","content":"This example replaces certain segments in a datasource with the new query data while dropping old segments: Show the query REPLACE INTO w007 OVERWRITE WHERE __time >= TIMESTAMP '2019-08-25 02:00:00' AND __time < TIMESTAMP '2019-08-25 03:00:00' SELECT FLOOR(__time TO MINUTE) AS __time, channel, countryIsoCode, countryName, regionIsoCode, regionName, page FROM w007 WHERE __time >= TIMESTAMP '2019-08-25 02:00:00' AND __time < TIMESTAMP '2019-08-25 03:00:00' AND countryName = "Canada" PARTITIONED BY HOUR CLUSTERED BY page "},{"title":"REPLACE for reindexing an existing datasource into itself","type":1,"pageTitle":"SQL-based ingestion query examples","url":"/docs/27.0.0/multi-stage-query/examples#replace-for-reindexing-an-existing-datasource-into-itself","content":"Show the query REPLACE INTO w000 OVERWRITE ALL SELECT FLOOR(__time TO MINUTE) AS __time, channel, countryIsoCode, countryName, regionIsoCode, regionName, page, COUNT(*) AS cnt, SUM(added) AS sum_added, SUM(deleted) AS sum_deleted FROM w000 GROUP BY 1, 2, 3, 4, 5, 6, 7 PARTITIONED BY HOUR CLUSTERED BY page "},{"title":"SELECT with EXTERN and JOIN","type":1,"pageTitle":"SQL-based ingestion query examples","url":"/docs/27.0.0/multi-stage-query/examples#select-with-extern-and-join","content":"Show the query WITH flights AS ( SELECT * FROM TABLE( EXTERN( '{"type":"http","uris":["https://static.imply.io/example-data/flight_on_time/flights/On_Time_Reporting_Carrier_On_Time_Performance_(1987_present)_2005_11.csv.zip"]}', '{"type":"csv","findColumnsFromHeader":true}', 
'[{"name":"depaturetime","type":"string"},{"name":"arrivalime","type":"string"},{"name":"Year","type":"long"},{"name":"Quarter","type":"long"},{"name":"Month","type":"long"},{"name":"DayofMonth","type":"long"},{"name":"DayOfWeek","type":"long"},{"name":"FlightDate","type":"string"},{"name":"Reporting_Airline","type":"string"},{"name":"DOT_ID_Reporting_Airline","type":"long"},{"name":"IATA_CODE_Reporting_Airline","type":"string"},{"name":"Tail_Number","type":"string"},{"name":"Flight_Number_Reporting_Airline","type":"long"},{"name":"OriginAirportID","type":"long"},{"name":"OriginAirportSeqID","type":"long"},{"name":"OriginCityMarketID","type":"long"},{"name":"Origin","type":"string"},{"name":"OriginCityName","type":"string"},{"name":"OriginState","type":"string"},{"name":"OriginStateFips","type":"long"},{"name":"OriginStateName","type":"string"},{"name":"OriginWac","type":"long"},{"name":"DestAirportID","type":"long"},{"name":"DestAirportSeqID","type":"long"},{"name":"DestCityMarketID","type":"long"},{"name":"Dest","type":"string"},{"name":"DestCityName","type":"string"},{"name":"DestState","type":"string"},{"name":"DestStateFips","type":"long"},{"name":"DestStateName","type":"string"},{"name":"DestWac","type":"long"},{"name":"CRSDepTime","type":"long"},{"name":"DepTime","type":"long"},{"name":"DepDelay","type":"long"},{"name":"DepDelayMinutes","type":"long"},{"name":"DepDel15","type":"long"},{"name":"DepartureDelayGroups","type":"long"},{"name":"DepTimeBlk","type":"string"},{"name":"TaxiOut","type":"long"},{"name":"WheelsOff","type":"long"},{"name":"WheelsOn","type":"long"},{"name":"TaxiIn","type":"long"},{"name":"CRSArrTime","type":"long"},{"name":"ArrTime","type":"long"},{"name":"ArrDelay","type":"long"},{"name":"ArrDelayMinutes","type":"long"},{"name":"ArrDel15","type":"long"},{"name":"ArrivalDelayGroups","type":"long"},{"name":"ArrTimeBlk","type":"string"},{"name":"Cancelled","type":"long"},{"name":"CancellationCode","type":"string"},{"name":"Diverted","type":"long"},{"name":"CRSElapsedTime","type":"long"},{"name":"ActualElapsedTime","type":"long"},{"name":"AirTime","type":"long"},{"name":"Flights","type":"long"},{"name":"Distance","type":"long"},{"name":"DistanceGroup","type":"long"},{"name":"CarrierDelay","type":"long"},{"name":"WeatherDelay","type":"long"},{"name":"NASDelay","type":"long"},{"name":"SecurityDelay","type":"long"},{"name":"LateAircraftDelay","type":"long"},{"name":"FirstDepTime","type":"string"},{"name":"TotalAddGTime","type":"string"},{"name":"LongestAddGTime","type":"string"},{"name":"DivAirportLandings","type":"string"},{"name":"DivReachedDest","type":"string"},{"name":"DivActualElapsedTime","type":"string"},{"name":"DivArrDelay","type":"string"},{"name":"DivDistance","type":"string"},{"name":"Div1Airport","type":"string"},{"name":"Div1AirportID","type":"string"},{"name":"Div1AirportSeqID","type":"string"},{"name":"Div1WheelsOn","type":"string"},{"name":"Div1TotalGTime","type":"string"},{"name":"Div1LongestGTime","type":"string"},{"name":"Div1WheelsOff","type":"string"},{"name":"Div1TailNum","type":"string"},{"name":"Div2Airport","type":"string"},{"name":"Div2AirportID","type":"string"},{"name":"Div2AirportSeqID","type":"string"},{"name":"Div2WheelsOn","type":"string"},{"name":"Div2TotalGTime","type":"string"},{"name":"Div2LongestGTime","type":"string"},{"name":"Div2WheelsOff","type":"string"},{"name":"Div2TailNum","type":"string"},{"name":"Div3Airport","type":"string"},{"name":"Div3AirportID","type":"string"},{"name":"Div3AirportSeqID","type":"string"},{"name":"Div3
WheelsOn","type":"string"},{"name":"Div3TotalGTime","type":"string"},{"name":"Div3LongestGTime","type":"string"},{"name":"Div3WheelsOff","type":"string"},{"name":"Div3TailNum","type":"string"},{"name":"Div4Airport","type":"string"},{"name":"Div4AirportID","type":"string"},{"name":"Div4AirportSeqID","type":"string"},{"name":"Div4WheelsOn","type":"string"},{"name":"Div4TotalGTime","type":"string"},{"name":"Div4LongestGTime","type":"string"},{"name":"Div4WheelsOff","type":"string"},{"name":"Div4TailNum","type":"string"},{"name":"Div5Airport","type":"string"},{"name":"Div5AirportID","type":"string"},{"name":"Div5AirportSeqID","type":"string"},{"name":"Div5WheelsOn","type":"string"},{"name":"Div5TotalGTime","type":"string"},{"name":"Div5LongestGTime","type":"string"},{"name":"Div5WheelsOff","type":"string"},{"name":"Div5TailNum","type":"string"},{"name":"Unnamed: 109","type":"string"}]' ) )), L_AIRPORT AS ( SELECT * FROM TABLE( EXTERN( '{"type":"http","uris":["https://static.imply.io/example-data/flight_on_time/dimensions/L_AIRPORT.csv"]}', '{"type":"csv","findColumnsFromHeader":true}', '[{"name":"Code","type":"string"},{"name":"Description","type":"string"}]' ) )), L_AIRPORT_ID AS ( SELECT * FROM TABLE( EXTERN( '{"type":"http","uris":["https://static.imply.io/example-data/flight_on_time/dimensions/L_AIRPORT_ID.csv"]}', '{"type":"csv","findColumnsFromHeader":true}', '[{"name":"Code","type":"long"},{"name":"Description","type":"string"}]' ) )), L_AIRLINE_ID AS ( SELECT * FROM TABLE( EXTERN( '{"type":"http","uris":["https://static.imply.io/example-data/flight_on_time/dimensions/L_AIRLINE_ID.csv"]}', '{"type":"csv","findColumnsFromHeader":true}', '[{"name":"Code","type":"long"},{"name":"Description","type":"string"}]' ) )), L_CITY_MARKET_ID AS ( SELECT * FROM TABLE( EXTERN( '{"type":"http","uris":["https://static.imply.io/example-data/flight_on_time/dimensions/L_CITY_MARKET_ID.csv"]}', '{"type":"csv","findColumnsFromHeader":true}', '[{"name":"Code","type":"long"},{"name":"Description","type":"string"}]' ) )), L_CANCELLATION AS ( SELECT * FROM TABLE( EXTERN( '{"type":"http","uris":["https://static.imply.io/example-data/flight_on_time/dimensions/L_CANCELLATION.csv"]}', '{"type":"csv","findColumnsFromHeader":true}', '[{"name":"Code","type":"string"},{"name":"Description","type":"string"}]' ) )), L_STATE_FIPS AS ( SELECT * FROM TABLE( EXTERN( '{"type":"http","uris":["https://static.imply.io/example-data/flight_on_time/dimensions/L_STATE_FIPS.csv"]}', '{"type":"csv","findColumnsFromHeader":true}', '[{"name":"Code","type":"long"},{"name":"Description","type":"string"}]' ) )) SELECT depaturetime, arrivalime, -- "Year", -- Quarter, -- "Month", -- DayofMonth, -- DayOfWeek, -- FlightDate, Reporting_Airline, DOT_ID_Reporting_Airline, DOTAirlineLookup.Description AS DOT_Reporting_Airline, IATA_CODE_Reporting_Airline, Tail_Number, Flight_Number_Reporting_Airline, OriginAirportID, OriginAirportIDLookup.Description AS OriginAirport, OriginAirportSeqID, OriginCityMarketID, OriginCityMarketIDLookup.Description AS OriginCityMarket, Origin, OriginAirportLookup.Description AS OriginDescription, OriginCityName, OriginState, OriginStateFips, OriginStateFipsLookup.Description AS OriginStateFipsDescription, OriginStateName, OriginWac, DestAirportID, DestAirportIDLookup.Description AS DestAirport, DestAirportSeqID, DestCityMarketID, DestCityMarketIDLookup.Description AS DestCityMarket, Dest, DestAirportLookup.Description AS DestDescription, DestCityName, DestState, DestStateFips, DestStateFipsLookup.Description AS 
DestStateFipsDescription, DestStateName, DestWac, CRSDepTime, DepTime, DepDelay, DepDelayMinutes, DepDel15, DepartureDelayGroups, DepTimeBlk, TaxiOut, WheelsOff, WheelsOn, TaxiIn, CRSArrTime, ArrTime, ArrDelay, ArrDelayMinutes, ArrDel15, ArrivalDelayGroups, ArrTimeBlk, Cancelled, CancellationCode, CancellationCodeLookup.Description AS CancellationReason, Diverted, CRSElapsedTime, ActualElapsedTime, AirTime, Flights, Distance, DistanceGroup, CarrierDelay, WeatherDelay, NASDelay, SecurityDelay, LateAircraftDelay, FirstDepTime, TotalAddGTime, LongestAddGTime FROM "flights" LEFT JOIN L_AIRLINE_ID AS DOTAirlineLookup ON DOT_ID_Reporting_Airline = DOTAirlineLookup.Code LEFT JOIN L_AIRPORT AS OriginAirportLookup ON Origin = OriginAirportLookup.Code LEFT JOIN L_AIRPORT AS DestAirportLookup ON Dest = DestAirportLookup.Code LEFT JOIN L_AIRPORT_ID AS OriginAirportIDLookup ON OriginAirportID = OriginAirportIDLookup.Code LEFT JOIN L_AIRPORT_ID AS DestAirportIDLookup ON DestAirportID = DestAirportIDLookup.Code LEFT JOIN L_CITY_MARKET_ID AS OriginCityMarketIDLookup ON OriginCityMarketID = OriginCityMarketIDLookup.Code LEFT JOIN L_CITY_MARKET_ID AS DestCityMarketIDLookup ON DestCityMarketID = DestCityMarketIDLookup.Code LEFT JOIN L_STATE_FIPS AS OriginStateFipsLookup ON OriginStateFips = OriginStateFipsLookup.Code LEFT JOIN L_STATE_FIPS AS DestStateFipsLookup ON DestStateFips = DestStateFipsLookup.Code LEFT JOIN L_CANCELLATION AS CancellationCodeLookup ON CancellationCode = CancellationCodeLookup.Code LIMIT 1000 "},{"title":"Export Metadata Tool","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/export-metadata","content":"","keywords":""},{"title":"export-metadata Options","type":1,"pageTitle":"Export Metadata Tool","url":"/docs/27.0.0/operations/export-metadata#export-metadata-options","content":"The export-metadata tool provides the following options: "},{"title":"Connection Properties","type":1,"pageTitle":"Export Metadata Tool","url":"/docs/27.0.0/operations/export-metadata#connection-properties","content":"--connectURI: The URI of the Derby database, e.g. jdbc:derby://localhost:1527/var/druid/metadata.db;create=true--user: Username--password: Password--base: corresponds to the value of druid.metadata.storage.tables.base in the configuration, druid by default. "},{"title":"Output Path","type":1,"pageTitle":"Export Metadata Tool","url":"/docs/27.0.0/operations/export-metadata#output-path","content":"--output-path, -o: The output directory of the tool. CSV files for the Druid segments, rules, config, datasource, and supervisors tables will be written to this directory. "},{"title":"Export Format Options","type":1,"pageTitle":"Export Metadata Tool","url":"/docs/27.0.0/operations/export-metadata#export-format-options","content":"--use-hex-blobs, -x: If set, export BLOB payload columns as hexadecimal strings. This needs to be set if importing back into Derby. Default is false.--booleans-as-strings, -t: If set, write boolean values as "true" or "false" instead of "1" and "0". This needs to be set if importing back into Derby. Default is false. "},{"title":"Deep Storage Migration","type":1,"pageTitle":"Export Metadata Tool","url":"/docs/27.0.0/operations/export-metadata#deep-storage-migration","content":"Migration to S3 Deep Storage By setting the options below, the tool will rewrite the segment load specs to point to a new S3 deep storage location. This helps users migrate segments stored in local deep storage to S3. 
--s3bucket, -b: The S3 bucket that will hold the migrated segments--s3baseKey, -k: The base S3 key where the migrated segments will be stored When copying the local deep storage segments to S3, the rewrite performed by this tool requires that the directory structure of the segments be unchanged. For example, if the cluster had the following local deep storage configuration: druid.storage.type=local druid.storage.storageDirectory=/druid/segments If the target S3 bucket was migration, with a base key of example, the contents of s3://migration/example/ must be identical to that of /druid/segments on the old local filesystem. Migration to HDFS Deep Storage By setting the options below, the tool will rewrite the segment load specs to point to a new HDFS deep storage location. This helps users migrate segments stored in local deep storage to HDFS. --hadoopStorageDirectory, -h: The HDFS path that will hold the migrated segments When copying the local deep storage segments to HDFS, the rewrite performed by this tool requires that the directory structure of the segments be unchanged, with the exception of directory names containing colons (:). For example, if the cluster had the following local deep storage configuration: druid.storage.type=local druid.storage.storageDirectory=/druid/segments If the target hadoopStorageDirectory was /migration/example, the contents of hdfs:///migration/example/ must be identical to that of /druid/segments on the old local filesystem. Additionally, the segments paths in local deep storage contain colons(:) in their names, e.g.: wikipedia/2016-06-27T02:00:00.000Z_2016-06-27T03:00:00.000Z/2019-05-03T21:57:15.950Z/1/index.zip HDFS cannot store files containing colons, and this tool expects the colons to be replaced with underscores (_) in HDFS. In this example, the wikipedia segment above under /druid/segments in local deep storage would need to be migrated to HDFS under hdfs:///migration/example/ with the following path: wikipedia/2016-06-27T02_00_00.000Z_2016-06-27T03_00_00.000Z/2019-05-03T21_57_15.950Z/1/index.zip Migration to New Local Deep Storage Path By setting the options below, the tool will rewrite the segment load specs to point to a new local deep storage location. This helps users migrate segments stored in local deep storage to a new path (e.g., a new NFS mount). --newLocalPath, -n: The new path on the local filesystem that will hold the migrated segments When copying the local deep storage segments to a new path, the rewrite performed by this tool requires that the directory structure of the segments be unchanged. For example, if the cluster had the following local deep storage configuration: druid.storage.type=local druid.storage.storageDirectory=/druid/segments If the new path was /migration/example, the contents of /migration/example/ must be identical to that of /druid/segments on the local filesystem. 
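As an illustration, a hedged sketch of a local-path migration run that combines the --newLocalPath option above with the invocation shown in the next section (the connect URI and paths are taken from the examples in this document and are assumptions for your environment): java -classpath "lib/*" -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml -Ddruid.extensions.directory="extensions" -Ddruid.extensions.loadList=[] org.apache.druid.cli.Main tools export-metadata --connectURI "jdbc:derby://localhost:1527/var/druid/metadata.db;" -o /tmp/csv --newLocalPath /migration/example The resulting <table-name>.csv files would then contain segment load specs rewritten to point at /migration/example. 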
"},{"title":"Running the tool","type":1,"pageTitle":"Export Metadata Tool","url":"/docs/27.0.0/operations/export-metadata#running-the-tool","content":"To use the tool, you can run the following from the root of the Druid package: cd ${DRUID_ROOT} mkdir -p /tmp/csv java -classpath "lib/*" -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml -Ddruid.extensions.directory="extensions" -Ddruid.extensions.loadList=[] org.apache.druid.cli.Main tools export-metadata --connectURI "jdbc:derby://localhost:1527/var/druid/metadata.db;" -o /tmp/csv In the example command above: lib is the Druid lib directoryextensions is the Druid extensions directory/tmp/csv is the output directory. Please make sure that this directory exists. "},{"title":"Importing Metadata","type":1,"pageTitle":"Export Metadata Tool","url":"/docs/27.0.0/operations/export-metadata#importing-metadata","content":"After running the tool, the output directory will contain <table-name>_raw.csv and <table-name>.csv files. The <table-name>_raw.csv files are intermediate files used by the tool, containing the table data as exported by Derby without modification. The <table-name>.csv files are used for import into another database such as MySQL and PostgreSQL and have any configured deep storage location rewrites applied. Example import commands for Derby, MySQL, and PostgreSQL are shown below. These example import commands expect /tmp/csv and its contents to be accessible from the server. For other options, such as importing from the client filesystem, please refer to the database's documentation. "},{"title":"Derby","type":1,"pageTitle":"Export Metadata Tool","url":"/docs/27.0.0/operations/export-metadata#derby","content":"CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE (null,'DRUID_SEGMENTS','/tmp/csv/druid_segments.csv',',','"',null,0); CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE (null,'DRUID_RULES','/tmp/csv/druid_rules.csv',',','"',null,0); CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE (null,'DRUID_CONFIG','/tmp/csv/druid_config.csv',',','"',null,0); CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE (null,'DRUID_DATASOURCE','/tmp/csv/druid_dataSource.csv',',','"',null,0); CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE (null,'DRUID_SUPERVISORS','/tmp/csv/druid_supervisors.csv',',','"',null,0); "},{"title":"MySQL","type":1,"pageTitle":"Export Metadata Tool","url":"/docs/27.0.0/operations/export-metadata#mysql","content":"LOAD DATA INFILE '/tmp/csv/druid_segments.csv' INTO TABLE druid_segments FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\\"' (id,dataSource,created_date,start,end,partitioned,version,used,payload); SHOW WARNINGS; LOAD DATA INFILE '/tmp/csv/druid_rules.csv' INTO TABLE druid_rules FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\\"' (id,dataSource,version,payload); SHOW WARNINGS; LOAD DATA INFILE '/tmp/csv/druid_config.csv' INTO TABLE druid_config FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\\"' (name,payload); SHOW WARNINGS; LOAD DATA INFILE '/tmp/csv/druid_dataSource.csv' INTO TABLE druid_dataSource FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\\"' (dataSource,created_date,commit_metadata_payload,commit_metadata_sha1); SHOW WARNINGS; LOAD DATA INFILE '/tmp/csv/druid_supervisors.csv' INTO TABLE druid_supervisors FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\\"' (id,spec_id,created_date,payload); SHOW WARNINGS; "},{"title":"PostgreSQL","type":1,"pageTitle":"Export Metadata Tool","url":"/docs/27.0.0/operations/export-metadata#postgresql","content":"COPY druid_segments(id,dataSource,created_date,start,"end",partitioned,version,used,payload) FROM 
'/tmp/csv/druid_segments.csv' DELIMITER ',' CSV; COPY druid_rules(id,dataSource,version,payload) FROM '/tmp/csv/druid_rules.csv' DELIMITER ',' CSV; COPY druid_config(name,payload) FROM '/tmp/csv/druid_config.csv' DELIMITER ',' CSV; COPY druid_dataSource(dataSource,created_date,commit_metadata_payload,commit_metadata_sha1) FROM '/tmp/csv/druid_dataSource.csv' DELIMITER ',' CSV; COPY druid_supervisors(id,spec_id,created_date,payload) FROM '/tmp/csv/druid_supervisors.csv' DELIMITER ',' CSV; "},{"title":"Getting started with Apache Druid","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/getting-started","content":"","keywords":""},{"title":"Overview","type":1,"pageTitle":"Getting started with Apache Druid","url":"/docs/27.0.0/operations/getting-started#overview","content":"If you are new to Druid, we recommend reading the Design Overview and the Ingestion Overview first for a basic understanding of Druid. "},{"title":"Single-server Quickstart and Tutorials","type":1,"pageTitle":"Getting started with Apache Druid","url":"/docs/27.0.0/operations/getting-started#single-server-quickstart-and-tutorials","content":"To get started with running Druid, the simplest and quickest way is to try the single-server quickstart and tutorials. "},{"title":"Deploying a Druid cluster","type":1,"pageTitle":"Getting started with Apache Druid","url":"/docs/27.0.0/operations/getting-started#deploying-a-druid-cluster","content":"If you wish to jump straight to deploying Druid as a cluster, or if you have an existing single-server deployment that you wish to migrate to a clustered deployment, please see the Clustered Deployment Guide. "},{"title":"Operating Druid","type":1,"pageTitle":"Getting started with Apache Druid","url":"/docs/27.0.0/operations/getting-started#operating-druid","content":"The configuration reference describes all of Druid's configuration properties. The API reference describes the APIs available on each Druid process. The basic cluster tuning guide is an introductory guide for tuning your Druid cluster. "},{"title":"Need help with Druid?","type":1,"pageTitle":"Getting started with Apache Druid","url":"/docs/27.0.0/operations/getting-started#need-help-with-druid","content":"If you have questions about using Druid, please reach out to the Druid user mailing list or other community channels! "},{"title":"High availability","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/high-availability","content":"High availability To set up a highly available environment, configure Apache ZooKeeper, the metadata store, the Coordinator, the Overlord, and the Brokers for high availability. For highly-available ZooKeeper, you will need a cluster of 3 or 5 ZooKeeper nodes. We recommend either installing ZooKeeper on its own hardware, or running 3 or 5 Master servers (where overlords or coordinators are running) and configuring ZooKeeper on them appropriately. See the ZooKeeper admin guide for more details.For highly-available metadata storage, we recommend MySQL or PostgreSQL with replication and failover enabled. See MySQL Enterprise High Availability and PostgreSQL's High Availability, Load Balancing, and Replication for more information.For highly-available Apache Druid Coordinators and Overlords, we recommend running multiple servers. If they are all configured to use the same ZooKeeper cluster and metadata storage, then they will automatically failover between each other as necessary. 
Only one will be active at a time, but inactive servers will redirect to the currently active server.Druid Brokers can be scaled out and all running servers will be active and queryable. We recommend placing them behind a load balancer.","keywords":""},{"title":"HTTP compression","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/http-compression","content":"HTTP compression Apache Druid supports HTTP request decompression and response compression. To use this, set the HTTP request headers Content-Encoding:gzip and Accept-Encoding:gzip. Property\tDescription\tDefaultdruid.server.http.compressionLevel\tThe compression level. Value should be between [-1,9], -1 for default level, 0 for no compression.\t-1 (default compression level) druid.server.http.inflateBufferSize\tThe buffer size used by gzip decoder. Set to 0 to disable request decompression.\t4096","keywords":""},{"title":"insert-segment-to-db tool","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/insert-segment-to-db","content":"insert-segment-to-db tool In older versions of Apache Druid, insert-segment-to-db was a tool that could scan deep storage and insert data from there into Druid metadata storage. It was intended to be used to update the segment table in the metadata storage after manually migrating segments from one place to another, or even to recover lost metadata storage by telling it where the segments are stored. In Druid 0.14.x and earlier, Druid wrote segment metadata to two places: the metadata store's druid_segments table, anddescriptor.json files in deep storage. This practice was stopped in Druid 0.15.0 as part ofconsolidated metadata management, for the following reasons: If any segments are manually dropped or re-enabled by cluster operators, this information is not reflected in deep storage. Restoring metadata from deep storage would undo any such drops or re-enables.Ingestion methods that allocate segments optimistically (such as native Kafka or Kinesis stream ingestion, or native batch ingestion in 'append' mode) can write segments to deep storage that are not meant to actually be used by the Druid cluster. There is no way, while purely looking at deep storage, to differentiate the segments that made it into the metadata store originally (and therefore should be used) from the segments that did not (and thereforeshould not be used).Nothing in Druid other than the insert-segment-to-db tool read the descriptor.json files. After this change, Druid stopped writing descriptor.json files to deep storage, and now only writes segment metadata to the metadata store. This meant the insert-segment-to-db tool was no longer useful, so it was removed in Druid 0.15.0. It is highly recommended that you take regular backups of your metadata store, since it is difficult to recover Druid clusters properly without it.","keywords":""},{"title":"Java runtime","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/java","content":"","keywords":""},{"title":"Selecting a Java runtime","type":1,"pageTitle":"Java runtime","url":"/docs/27.0.0/operations/java#selecting-a-java-runtime","content":"Druid fully supports Java 8u92+, Java 11, and Java 17. The project team recommends Java 17. The project team recommends using an OpenJDK-based Java distribution. There are many free and actively-supported distributions available, includingAmazon Corretto,Azul Zulu, andEclipse Temurin. The project team does not recommend any specific distribution over any other. 
Druid relies on the environment variables JAVA_HOME or DRUID_JAVA_HOME to find Java on the machine. You can setDRUID_JAVA_HOME if there is more than one instance of Java. To verify Java requirements for your environment, run thebin/verify-java script. "},{"title":"Garbage collection","type":1,"pageTitle":"Java runtime","url":"/docs/27.0.0/operations/java#garbage-collection","content":"In general, the project team recommends using the G1 collector with default settings. This is the default collector in Java 11 and 17. To enable G1 on Java 8, use -XX:+UseG1GC. There is no harm in explicitly specifying this on Java 11 or 17 as well. Garbage collector selection and tuning is a form of sport in the Java community. There may be situations where adjusting garbage collection configuration improves or worsens performance. The project team's guidance is that most people do not need to stray away from G1 with default settings. "},{"title":"Strong encapsulation","type":1,"pageTitle":"Java runtime","url":"/docs/27.0.0/operations/java#strong-encapsulation","content":"Java 9 and beyond (including Java 11 and 17) include the capability forstrong encapsulation of internal JDK APIs. Druid uses certain internal JDK APIs, which must be added to --add-exports and --add-opens on the Java command line. On Java 11, if these parameters are not included, you will see warnings like the following: WARNING: An illegal reflective access operation has occurred WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release On Java 17, if these parameters are not included, you will see errors on startup like the following: Exception in thread "main" java.lang.ExceptionInInitializerError Druid's out-of-box configuration adds these parameters transparently when you use the bundled bin/start-druid or similar commands. In this case, there is nothing special you need to do to run successfully on Java 11 or 17. However, if you have customized your Druid service launching system, you will need to ensure the required Java parameters are added. There are many ways of doing this. Choose the one that works best for you. The simplest approach: use Druid's bundled bin/start-druid script to launch Druid. If you launch Druid using bin/supervise -c <config>, ensure your config file uses bin/run-druid. This script uses bin/run-java internally, and automatically adds the proper flags. If you launch Druid using a java command, replace java with bin/run-java. Druid's bundledbin/run-java script automatically adds the proper flags. If you launch Druid without using its bundled scripts, ensure the following parameters are added to your Java command line: --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED \\ --add-exports=java.base/jdk.internal.ref=ALL-UNNAMED \\ --add-opens=java.base/java.nio=ALL-UNNAMED \\ --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \\ --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED \\ --add-opens=java.base/java.io=ALL-UNNAMED \\ --add-opens=java.base/java.lang=ALL-UNNAMED \\ --add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED "},{"title":"kubernetes","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/kubernetes","content":"kubernetes Apache Druid distribution is also available as Docker image from Docker Hub . For example, you can obtain latest release using the command below. $ docker pull apache/druid druid-operator can be used to manage a Druid cluster on Kubernetes . 
Druid clusters deployed on Kubernetes can function without Zookeeper using druid–kubernetes-extensions .","keywords":""},{"title":"Metadata Migration","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/metadata-migration","content":"","keywords":""},{"title":"Shut down cluster services","type":1,"pageTitle":"Metadata Migration","url":"/docs/27.0.0/operations/metadata-migration#shut-down-cluster-services","content":"To ensure a clean migration, shut down the non-coordinator services to ensure that metadata state will not change as you do the migration. When migrating from Derby, the coordinator processes will still need to be up initially, as they host the Derby database. "},{"title":"Exporting metadata","type":1,"pageTitle":"Metadata Migration","url":"/docs/27.0.0/operations/metadata-migration#exporting-metadata","content":"Druid provides an Export Metadata Tool for exporting metadata from Derby into CSV files which can then be imported into your new metadata store. The tool also provides options for rewriting the deep storage locations of segments; this is useful for deep storage migration. Run the export-metadata tool on your existing cluster, and save the CSV files it generates. After a successful export, you can shut down the coordinator. "},{"title":"Initializing the new metadata store","type":1,"pageTitle":"Metadata Migration","url":"/docs/27.0.0/operations/metadata-migration#initializing-the-new-metadata-store","content":""},{"title":"Create database","type":1,"pageTitle":"Metadata Migration","url":"/docs/27.0.0/operations/metadata-migration#create-database","content":"Before importing the existing cluster metadata, you will need to set up the new metadata store. The MySQL extension and PostgreSQL extension docs have instructions for initial database setup. "},{"title":"Update configuration","type":1,"pageTitle":"Metadata Migration","url":"/docs/27.0.0/operations/metadata-migration#update-configuration","content":"Update your Druid runtime properties with the new metadata configuration. "},{"title":"Create Druid tables","type":1,"pageTitle":"Metadata Migration","url":"/docs/27.0.0/operations/metadata-migration#create-druid-tables","content":"Druid provides a metadata-init tool for creating Druid's metadata tables. After initializing the Druid database, you can run the commands shown below from the root of the Druid package to initialize the tables. In the example commands below: lib is the Druid lib directoryextensions is the Druid extensions directorybase corresponds to the value of druid.metadata.storage.tables.base in the configuration, druid by default.The --connectURI parameter corresponds to the value of druid.metadata.storage.connector.connectURI.The --user parameter corresponds to the value of druid.metadata.storage.connector.user.The --password parameter corresponds to the value of druid.metadata.storage.connector.password. 
MySQL cd ${DRUID_ROOT} java -classpath "lib/*" -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml -Ddruid.extensions.directory="extensions" -Ddruid.extensions.loadList="[\\"mysql-metadata-storage\\"]" -Ddruid.metadata.storage.type=mysql -Ddruid.node.type=metadata-init org.apache.druid.cli.Main tools metadata-init --connectURI="<mysql-uri>" --user <user> --password <pass> --base druid PostgreSQL cd ${DRUID_ROOT} java -classpath "lib/*" -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml -Ddruid.extensions.directory="extensions" -Ddruid.extensions.loadList="[\\"postgresql-metadata-storage\\"]" -Ddruid.metadata.storage.type=postgresql -Ddruid.node.type=metadata-init org.apache.druid.cli.Main tools metadata-init --connectURI="<postgresql-uri>" --user <user> --password <pass> --base druid "},{"title":"Import metadata","type":1,"pageTitle":"Metadata Migration","url":"/docs/27.0.0/operations/metadata-migration#import-metadata","content":"After initializing the tables, please refer to the import commands for your target database. "},{"title":"Restart cluster","type":1,"pageTitle":"Metadata Migration","url":"/docs/27.0.0/operations/metadata-migration#restart-cluster","content":"After importing the metadata successfully, you can now restart your cluster. "},{"title":"Migrate from firehose to input source ingestion (legacy)","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/migrate-from-firehose","content":"","keywords":""},{"title":"Migrate from firehose ingestion to an input source","type":1,"pageTitle":"Migrate from firehose to input source ingestion (legacy)","url":"/docs/27.0.0/operations/migrate-from-firehose#migrate-from-firehose-ingestion-to-an-input-source","content":"To migrate from firehose ingestion, you can use the Druid console to update your ingestion spec, or you can update it manually. "},{"title":"Use the Druid console","type":1,"pageTitle":"Migrate from firehose to input source ingestion (legacy)","url":"/docs/27.0.0/operations/migrate-from-firehose#use-the-druid-console","content":"To update your ingestion spec using the Druid console, open the console and copy your spec into the Edit spec stage of the data loader. Druid converts the spec into one with a defined input source. For example, it converts the example firehose ingestion spec below into the example ingestion spec after migration. If you're unable to use the console or you have problems with the console method, the alternative is to update your ingestion spec manually. "},{"title":"Update your ingestion spec manually","type":1,"pageTitle":"Migrate from firehose to input source ingestion (legacy)","url":"/docs/27.0.0/operations/migrate-from-firehose#update-your-ingestion-spec-manually","content":"To update your ingestion spec manually, copy your existing spec into a new file. Refer to Native batch ingestion with firehose (Deprecated) for a description of firehose properties. Edit the new file as follows: In the ioConfig component, replace the firehose definition with an inputSource definition for your chosen input source. See Native batch input sources for details.Move the timeStampSpec definition from parser.parseSpec to the dataSchema component.Move the dimensionsSpec definition from parser.parseSpec to the dataSchema component.Move the format definition from parser.parseSpec to an inputFormat definition in ioConfig.Delete the parser definition.Save the file. 
You can check the format of your new ingestion file against the migrated example below.Test the new ingestion spec with a temporary data source.Once you've successfully ingested sample data with the new spec, stop firehose ingestion and switch to the new spec. When the transition is complete, you can upgrade Druid to the latest version. See the Druid release notes for upgrade instructions. "},{"title":"Example firehose ingestion spec","type":1,"pageTitle":"Migrate from firehose to input source ingestion (legacy)","url":"/docs/27.0.0/operations/migrate-from-firehose#example-firehose-ingestion-spec","content":"An example firehose ingestion spec is as follows: { "type" : "index", "spec" : { "dataSchema" : { "dataSource" : "wikipedia", "metricsSpec" : [ { "type" : "count", "name" : "count" }, { "type" : "doubleSum", "name" : "added", "fieldName" : "added" }, { "type" : "doubleSum", "name" : "deleted", "fieldName" : "deleted" }, { "type" : "doubleSum", "name" : "delta", "fieldName" : "delta" } ], "granularitySpec" : { "type" : "uniform", "segmentGranularity" : "DAY", "queryGranularity" : "NONE", "intervals" : [ "2013-08-31/2013-09-01" ] }, "parser": { "type": "string", "parseSpec": { "format": "json", "timestampSpec" : { "column" : "timestamp", "format" : "auto" }, "dimensionsSpec" : { "dimensions": ["country", "page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","region","city"], "dimensionExclusions" : [] } } } }, "ioConfig" : { "type" : "index", "firehose" : { "type" : "local", "baseDir" : "examples/indexing/", "filter" : "wikipedia_data.json" } }, "tuningConfig" : { "type" : "index", "partitionsSpec": { "type": "single_dim", "partitionDimension": "country", "targetRowsPerSegment": 5000000 } } } } "},{"title":"Example ingestion spec after migration","type":1,"pageTitle":"Migrate from firehose to input source ingestion (legacy)","url":"/docs/27.0.0/operations/migrate-from-firehose#example-ingestion-spec-after-migration","content":"The following example illustrates the result of migrating the example firehose ingestion spec to a spec with an input source: { "type" : "index", "spec" : { "dataSchema" : { "dataSource" : "wikipedia", "timestampSpec" : { "column" : "timestamp", "format" : "auto" }, "dimensionsSpec" : { "dimensions": ["country", "page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","region","city"], "dimensionExclusions" : [] }, "metricsSpec" : [ { "type" : "count", "name" : "count" }, { "type" : "doubleSum", "name" : "added", "fieldName" : "added" }, { "type" : "doubleSum", "name" : "deleted", "fieldName" : "deleted" }, { "type" : "doubleSum", "name" : "delta", "fieldName" : "delta" } ], "granularitySpec" : { "type" : "uniform", "segmentGranularity" : "DAY", "queryGranularity" : "NONE", "intervals" : [ "2013-08-31/2013-09-01" ] } }, "ioConfig" : { "type" : "index", "inputSource" : { "type" : "local", "baseDir" : "examples/indexing/", "filter" : "wikipedia_data.json" }, "inputFormat": { "type": "json" } }, "tuningConfig" : { "type" : "index", "partitionsSpec": { "type": "single_dim", "partitionDimension": "country", "targetRowsPerSegment": 5000000 } } } } "},{"title":"Learn more","type":1,"pageTitle":"Migrate from firehose to input source ingestion (legacy)","url":"/docs/27.0.0/operations/migrate-from-firehose#learn-more","content":"For more information, see the following pages: Ingestion: Overview of the Druid ingestion process.Native batch ingestion: Description of the supported native batch indexing 
tasks.Ingestion spec reference: Description of the components and properties in the ingestion spec. "},{"title":"Configure Druid for mixed workloads","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/mixed-workloads","content":"","keywords":""},{"title":"Query laning","type":1,"pageTitle":"Configure Druid for mixed workloads","url":"/docs/27.0.0/operations/mixed-workloads#query-laning","content":"When you need to run many concurrent queries having heterogeneous workloads, start with query laning to optimize your query performance. Query laning restricts resource usage for less urgent queries to ensure dedicated resources for high priority queries. Query lanes are analogous to carpool and normal lanes on the freeway. With query laning, Druid sets apart prioritized lanes from other general lanes. Druid restricts low priority queries to the general lanes and allows high priority queries to run wherever possible, whether in a VIP or general lane. In Druid, query lanes reserve resources for Broker HTTP threads. Each Druid query requires one Broker thread. The number of threads on a Broker is defined by the druid.server.http.numThreads parameter. Broker threads may be occupied by tasks other than queries, such as health checks. You can use query laning to limit the number of HTTP threads designated for resource-intensive queries, leaving other threads available for short-running queries and other tasks. "},{"title":"General properties","type":1,"pageTitle":"Configure Druid for mixed workloads","url":"/docs/27.0.0/operations/mixed-workloads#general-properties","content":"Set the following query laning properties in the broker/runtime.properties file. druid.query.scheduler.laning.strategy – The strategy used to assign queries to lanes. You can use the built-in “high/low” laning strategy, or define your own laning strategy manually.druid.query.scheduler.numThreads – The total number of queries that can be served per Broker. We recommend setting this value to 1-2 less than druid.server.http.numThreads. info The query scheduler by default does not limit the number of queries that a Broker can serve. Setting this property to a bounded number limits the thread count. If the allocated threads are all occupied, any incoming query, including interactive queries, will be rejected with an HTTP 429 status code. "},{"title":"Lane-specific properties","type":1,"pageTitle":"Configure Druid for mixed workloads","url":"/docs/27.0.0/operations/mixed-workloads#lane-specific-properties","content":"If you use the high/low laning strategy, set the following: druid.query.scheduler.laning.maxLowPercent – The maximum percent of query threads to handle low priority queries. The remaining query threads are dedicated to high priority queries. Consider also defining a prioritization strategy for the Broker to label queries as high or low priority. Otherwise, manually set the priority for incoming queries on the query context. If you use a manual laning strategy, set the following: druid.query.scheduler.laning.lanes.{name} – The limit for how many queries can run in the name lane. Define as many named lanes as needed.druid.query.scheduler.laning.isLimitPercent – Whether to treat the lane limit as an exact number or a percent of the minimum of druid.server.http.numThreads or druid.query.scheduler.numThreads. With manual laning, incoming queries can be labeled with the desired lane in the lane parameter of the query context. See Query prioritization and laning for additional details on query laning configuration. 
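For reference, a minimal sketch of a manual laning configuration in broker/runtime.properties; the lane name and limit are illustrative, and the strategy value manual is an assumption to verify against your Druid version:
druid.query.scheduler.laning.strategy=manual
druid.query.scheduler.laning.lanes.reporting=10
druid.query.scheduler.laning.isLimitPercent=false
A query would then opt into this lane by setting "lane": "reporting" in its query context. 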
"},{"title":"Example","type":1,"pageTitle":"Configure Druid for mixed workloads","url":"/docs/27.0.0/operations/mixed-workloads#example","content":"Example config for query laning with the high/low laning strategy: # Laning strategy druid.query.scheduler.laning.strategy=hilo druid.query.scheduler.laning.maxLowPercent=20 # Limit the number of HTTP threads for query processing # This value should be less than druid.server.http.numThreads druid.query.scheduler.numThreads=40 "},{"title":"Service tiering","type":1,"pageTitle":"Configure Druid for mixed workloads","url":"/docs/27.0.0/operations/mixed-workloads#service-tiering","content":"In service tiering, you define separate groups of Historicals and Brokers to manage queries based on the segments and resource requirements of the query. You can limit the resources that are set aside for certain types of queries. Many heavy queries involving complex subqueries or large result sets can hog resources away from high priority, interactive queries. Minimize the impact of these heavy queries by limiting them to a separate Broker tier. When all Brokers set aside for heavy queries are occupied, subsequent heavy queries must wait until the designated resources become available. A prolonged wait results in the later queries failing with a timeout error. Note that you can separate Historical processes into tiers without having separate Broker tiers. Historical-only tiering is not sufficient to meet the demands of mixed workloads on a Druid cluster. However, it is useful when you query certain segments more frequently than others, such as often analyzing the most recent data. Historical tiering assigns data from specific time intervals to specific tiers in order to support higher concurrency on hot data. The examples below demonstrate two tiers—hot and cold—for both the Historicals and Brokers. The Brokers will serve short-running, light queries before long-running, heavy queries. Light queries will be routed to the hot tiers, and heavy queries will be routed to the cold tiers. "},{"title":"Historical tiering","type":1,"pageTitle":"Configure Druid for mixed workloads","url":"/docs/27.0.0/operations/mixed-workloads#historical-tiering","content":"This section describes how to configure segment loading and how to assign Historical services into tiers. Configure segment loading The Coordinator service assigns segments to different tiers of Historicals using load rules. Define a load rule to indicate how segment replicas should be assigned to different Historical tiers. For example, you may store segments of more recent data on more powerful hardware for better performance. There are several types of load rules: forever, interval, and period. Select the load rule that matches your use case for each Historical, whether you want all segments to be loaded, segments within a certain time interval, or segments within a certain time period. Interval and period load rules must be accompanied by corresponding drop rules. In the load rule, define tiers in the tieredReplicants property. Provide descriptive names for your tiers, and specify how many replicas each tier should have. You can designate a higher number of replicas for the hot tier to increase the concurrency for processing queries. The following example shows a period load rule with two Historical tiers, named “hot” and “_default_tier”. For the most recent month of data, Druid loads three replicas in the hot tier and one replica in the default cold tier. 
Incoming queries that rely on this month of data can use the single replica in the cold Historical tier or any of the three replicas in the hot Historical tier. { "type" : "loadByPeriod", "period" : "P1M", "includeFuture" : true, "tieredReplicants": { "hot": 3, "_default_tier" : 1 } } See Load rules for more information on segment load rules. Visit Tutorial: Configuring data retention for an example of setting retention rules from the Druid web console. Assign Historicals to tiers To assign a Historical to a tier, add a label for the tier name and set the priority value in the historical/runtime.properties for the Historical. Example Historical in the hot tier: druid.server.tier=hot druid.server.priority=1 Example Historical in the cold tier: druid.server.tier=_default_tier druid.server.priority=0 See Historical general configuration for more details on these properties. "},{"title":"Broker tiering","type":1,"pageTitle":"Configure Druid for mixed workloads","url":"/docs/27.0.0/operations/mixed-workloads#broker-tiering","content":"You must set up Historical tiering before you can use Broker tiering. To set up Broker tiering, assign Brokers to tiers, and configure query routing by the Router. Assign Brokers to tiers For each of the Brokers, define the Broker group in the broker/runtime.properties files. Example config for a Broker in the hot tier: druid.service=druid:broker-hot Example config for a Broker in the cold tier: druid.service=druid:broker-cold Also in the broker/runtime.properties files, instruct the Broker to select Historicals by priority so that the Broker will select Historicals in the hot tier before Historicals in the cold tier. Example Broker config to prioritize hot tier Historicals: druid.broker.select.tier=highestPriority See Broker configuration for more details on these properties. Configure query routing Direct the Router to route queries appropriately by setting the default Broker tier and the map of Historical tier to Broker tier in the router/runtime.properties file. Example Router config to map hot/cold tier Brokers to hot/cold tier Historicals, respectively: druid.router.defaultBrokerServiceName=druid:broker-cold druid.router.tierToBrokerMap={"hot":"druid:broker-hot","_default_tier":"druid:broker-cold"} If you plan to run Druid SQL queries, also enable routing of SQL queries by setting the following: druid.router.sql.enable=true See Router process for an example production configuration. "},{"title":"Learn more","type":1,"pageTitle":"Configure Druid for mixed workloads","url":"/docs/27.0.0/operations/mixed-workloads#learn-more","content":"See Multitenancy considerations for applying query concurrency to multitenant workloads. "},{"title":"Working with different versions of Apache Hadoop","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/other-hadoop","content":"","keywords":""},{"title":"Tip #1: Place Hadoop XMLs on Druid classpath","type":1,"pageTitle":"Working with different versions of Apache Hadoop","url":"/docs/27.0.0/operations/other-hadoop#tip-1-place-hadoop-xmls-on-druid-classpath","content":"Place your Hadoop configuration XMLs (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml) on the classpath of your Druid processes. You can do this by copying them into conf/druid/_common/core-site.xml,conf/druid/_common/hdfs-site.xml, and so on. This allows Druid to find your Hadoop cluster and properly submit jobs. 
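For example, a minimal sketch of this copy step, assuming your Hadoop configuration lives under /etc/hadoop/conf and your Druid common configuration directory is conf/druid/_common (adjust both paths for your deployment):
cp /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml /etc/hadoop/conf/yarn-site.xml /etc/hadoop/conf/mapred-site.xml conf/druid/_common/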
"},{"title":"Tip #2: Classloader modification on Hadoop (Map/Reduce jobs only)","type":1,"pageTitle":"Working with different versions of Apache Hadoop","url":"/docs/27.0.0/operations/other-hadoop#tip-2-classloader-modification-on-hadoop-mapreduce-jobs-only","content":"Druid uses a number of libraries that are also likely present on your Hadoop cluster, and if these libraries conflict, your Map/Reduce jobs can fail. This problem can be avoided by enabling classloader isolation using the Hadoop job property mapreduce.job.classloader = true. This instructs Hadoop to use a separate classloader for Druid dependencies and for Hadoop's own dependencies. If your version of Hadoop does not support this functionality, you can also try setting the propertymapreduce.job.user.classpath.first = true. This instructs Hadoop to prefer loading Druid's version of a library when there is a conflict. Generally, you should only set one of these parameters, not both. These properties can be set in either one of the following ways: Using the task definition, e.g. add "mapreduce.job.classloader": "true" to the jobProperties of the tuningConfig of your indexing task (see the Hadoop batch ingestion documentation).Using system properties, e.g. on the MiddleManager set druid.indexer.runner.javaOpts=... -Dhadoop.mapreduce.job.classloader=true in Middle Manager configuration. "},{"title":"Overriding specific classes","type":1,"pageTitle":"Working with different versions of Apache Hadoop","url":"/docs/27.0.0/operations/other-hadoop#overriding-specific-classes","content":"When mapreduce.job.classloader = true, it is also possible to specifically define which classes should be loaded from the hadoop system classpath and which should be loaded from job-supplied JARs. This is controlled by defining class inclusion/exclusion patterns in the mapreduce.job.classloader.system.classes property in the jobProperties of tuningConfig. For example, some community members have reported version incompatibility errors with the Validator class: Error: java.lang.ClassNotFoundException: javax.validation.Validator The following jobProperties excludes javax.validation. classes from being loaded from the system classpath, while including those from java.,javax.,org.apache.commons.logging.,org.apache.log4j.,org.apache.hadoop.. "jobProperties": { "mapreduce.job.classloader": "true", "mapreduce.job.classloader.system.classes": "-javax.validation.,java.,javax.,org.apache.commons.logging.,org.apache.log4j.,org.apache.hadoop." } mapred-default.xml documentation contains more information about this property. "},{"title":"Tip #3: Use specific versions of Hadoop libraries","type":1,"pageTitle":"Working with different versions of Apache Hadoop","url":"/docs/27.0.0/operations/other-hadoop#tip-3-use-specific-versions-of-hadoop-libraries","content":"Druid loads Hadoop client libraries from two different locations. Each set of libraries is loaded in an isolated classloader. HDFS deep storage uses jars from extensions/druid-hdfs-storage/ to read and write Druid data on HDFS.Batch ingestion uses jars from hadoop-dependencies/ to submit Map/Reduce jobs (location customizable via thedruid.extensions.hadoopDependenciesDir runtime property; see Configuration). hadoop-client:2.8.5 is the default version of the Hadoop client bundled with Druid for both purposes. This works with many Hadoop distributions (the version does not necessarily need to match), but if you run into issues, you can instead have Druid load libraries that exactly match your distribution. 
To do this, either copy the jars from your Hadoop cluster, or use the pull-deps tool to download the jars from a Maven repository. "},{"title":"Preferred: Load using Druid's standard mechanism","type":1,"pageTitle":"Working with different versions of Apache Hadoop","url":"/docs/27.0.0/operations/other-hadoop#preferred-load-using-druids-standard-mechanism","content":"If you have issues with HDFS deep storage, you can switch your Hadoop client libraries by recompiling the druid-hdfs-storage extension using an alternate version of the Hadoop client libraries. You can do this by editing the main Druid pom.xml and rebuilding the distribution by running mvn package. If you have issues with Map/Reduce jobs, you can switch your Hadoop client libraries without rebuilding Druid. You can do this by adding a new set of libraries to the hadoop-dependencies/ directory (or another directory specified by druid.extensions.hadoopDependenciesDir) and then using hadoopDependencyCoordinates in theHadoop Index Task to specify the Hadoop dependencies you want Druid to load. Example: Suppose you specify druid.extensions.hadoopDependenciesDir=/usr/local/druid_tarball/hadoop-dependencies, and you have downloadedhadoop-client 2.3.0 and 2.4.0, either by copying them from your Hadoop cluster or by using pull-deps to download the jars from a Maven repository. Then underneath hadoop-dependencies, your jars should look like this: hadoop-dependencies/ └── hadoop-client ├── 2.3.0 │ ├── activation-1.1.jar │ ├── avro-1.7.4.jar │ ├── commons-beanutils-1.7.0.jar │ ├── commons-beanutils-core-1.8.0.jar │ ├── commons-cli-1.2.jar │ ├── commons-codec-1.4.jar ..... lots of jars └── 2.4.0 ├── activation-1.1.jar ├── avro-1.7.4.jar ├── commons-beanutils-1.7.0.jar ├── commons-beanutils-core-1.8.0.jar ├── commons-cli-1.2.jar ├── commons-codec-1.4.jar ..... lots of jars As you can see, under hadoop-client, there are two sub-directories, each denotes a version of hadoop-client. Next, use hadoopDependencyCoordinates in Hadoop Index Task to specify the Hadoop dependencies you want Druid to load. For example, in your Hadoop Index Task spec file, you can write: "hadoopDependencyCoordinates": ["org.apache.hadoop:hadoop-client:2.4.0"] This instructs Druid to load hadoop-client 2.4.0 when processing the task. What happens behind the scene is that Druid first looks for a folder called hadoop-client underneath druid.extensions.hadoopDependenciesDir, then looks for a folder called 2.4.0underneath hadoop-client, and upon successfully locating these folders, hadoop-client 2.4.0 is loaded. "},{"title":"Alternative: Append your Hadoop jars to the Druid classpath","type":1,"pageTitle":"Working with different versions of Apache Hadoop","url":"/docs/27.0.0/operations/other-hadoop#alternative-append-your-hadoop-jars-to-the-druid-classpath","content":"You can also load Hadoop client libraries in Druid's main classloader, rather than an isolated classloader. This mechanism is relatively easy to reason about, but it also means that you have to ensure that all dependency jars on the classpath are compatible. That is, Druid makes no provisions while using this method to maintain class loader isolation so you must make sure that the jars on your classpath are mutually compatible. Set druid.indexer.task.defaultHadoopCoordinates=[]. By setting this to an empty list, Druid will not load any other Hadoop dependencies except the ones specified in the classpath.Append your Hadoop jars to Druid's classpath. Druid will load them into the system. 
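A minimal sketch of this alternative, assuming your Hadoop client jars live under /opt/hadoop-client/lib and that you launch services with a plain java command (both the path and the launch command below are illustrative, not the only way to do this):
# common.runtime.properties: don't load the bundled Hadoop dependencies
druid.indexer.task.defaultHadoopCoordinates=[]
# launch each service with the Hadoop jars appended to the classpath, for example:
java -classpath "conf/druid/cluster/_common:lib/*:/opt/hadoop-client/lib/*" org.apache.druid.cli.Main server middleManager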
"},{"title":"Notes on specific Hadoop distributions","type":1,"pageTitle":"Working with different versions of Apache Hadoop","url":"/docs/27.0.0/operations/other-hadoop#notes-on-specific-hadoop-distributions","content":"If the tips above do not solve any issues you are having with HDFS deep storage or Hadoop batch indexing, you may have luck with one of the following suggestions contributed by the Druid community. "},{"title":"CDH","type":1,"pageTitle":"Working with different versions of Apache Hadoop","url":"/docs/27.0.0/operations/other-hadoop#cdh","content":"Members of the community have reported dependency conflicts between the version of Jackson used in CDH and Druid when running a Mapreduce job like: java.lang.VerifyError: class com.fasterxml.jackson.datatype.guava.deser.HostAndPortDeserializer overrides final method deserialize.(Lcom/fasterxml/jackson/core/JsonParser;Lcom/fasterxml/jackson/databind/DeserializationContext;)Ljava/lang/Object; Preferred workaround First, try the tip under "Classloader modification on Hadoop" above. More recent versions of CDH have been reported to work with the classloader isolation option (mapreduce.job.classloader = true). Alternate workaround - 1 You can try editing Druid's pom.xml dependencies to match the version of Jackson in your Hadoop version and recompile Druid. For more about building Druid, please see Building Druid. Alternate workaround - 2 Another workaround solution is to build a custom fat jar of Druid using sbt, which manually excludes all the conflicting Jackson dependencies, and then put this fat jar in the classpath of the command that starts Overlord indexing service. To do this, please follow the following steps. (1) Download and install sbt. (2) Make a new directory named 'druid_build'. (3) Cd to 'druid_build' and create the build.sbt file with the content here. You can always add more building targets or remove the ones you don't need. (4) In the same directory create a new directory named 'project'. (5) Put the druid source code into 'druid_build/project'. (6) Create a file 'druid_build/project/assembly.sbt' with content as follows. addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.13.0") (7) In the 'druid_build' directory, run 'sbt assembly'. (8) In the 'druid_build/target/scala-2.10' folder, you will find the fat jar you just build. (9) Make sure the jars you've uploaded has been completely removed. The HDFS directory is by default '/tmp/druid-indexing/classpath'. (10) Include the fat jar in the classpath when you start the indexing service. Make sure you've removed 'lib/*' from your classpath because now the fat jar includes all you need. Alternate workaround - 3 If sbt is not your choice, you can also use maven-shade-plugin to make a fat jar: relocation all Jackson packages will resolve it too. In this way, druid will not be affected by Jackson library embedded in hadoop. 
Please follow the steps below: (1) Add all the extensions you need to services/pom.xml, like <dependency> <groupId>org.apache.druid.extensions</groupId> <artifactId>druid-avro-extensions</artifactId> <version>${project.parent.version}</version> </dependency> <dependency> <groupId>org.apache.druid.extensions</groupId> <artifactId>druid-parquet-extensions</artifactId> <version>${project.parent.version}</version> </dependency> <dependency> <groupId>org.apache.druid.extensions</groupId> <artifactId>druid-hdfs-storage</artifactId> <version>${project.parent.version}</version> </dependency> <dependency> <groupId>org.apache.druid.extensions</groupId> <artifactId>mysql-metadata-storage</artifactId> <version>${project.parent.version}</version> </dependency> (2) Shade Jackson packages and assemble a fat jar. <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-shade-plugin</artifactId> <executions> <execution> <phase>package</phase> <goals> <goal>shade</goal> </goals> <configuration> <outputFile> ${project.build.directory}/${project.artifactId}-${project.version}-selfcontained.jar </outputFile> <relocations> <relocation> <pattern>com.fasterxml.jackson</pattern> <shadedPattern>shade.com.fasterxml.jackson</shadedPattern> </relocation> </relocations> <artifactSet> <includes> <include>*:*</include> </includes> </artifactSet> <filters> <filter> <artifact>*:*</artifact> <excludes> <exclude>META-INF/*.SF</exclude> <exclude>META-INF/*.DSA</exclude> <exclude>META-INF/*.RSA</exclude> </excludes> </filter> </filters> <transformers> <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/> </transformers> </configuration> </execution> </executions> </plugin> Copy out services/target/xxxxx-selfcontained.jar after running mvn install in the project root for further usage. (3) Run the hadoop indexer as below (posting an indexing task is not possible now). lib is not needed anymore. As the hadoop indexer is a standalone tool, you don't have to replace the jars of your running services: java -Xmx32m \\ -Dfile.encoding=UTF-8 -Duser.timezone=UTC \\ -classpath config/hadoop:config/overlord:config/_common:$SELF_CONTAINED_JAR:$HADOOP_DISTRIBUTION/etc/hadoop \\ -Djava.security.krb5.conf=$KRB5 \\ org.apache.druid.cli.Main index hadoop \\ $config_path "},{"title":"Password providers","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/password-provider","content":"Password providers Passwords help secure Apache Druid systems such as the metadata store, the keystore that contains server certificates, and so on. These passwords have corresponding runtime properties associated with them, for example druid.metadata.storage.connector.password corresponds to the metadata store password. By default, users can directly set the passwords in plaintext for runtime properties. For example, druid.metadata.storage.connector.password=pwd sets the password to be used by Druid to connect to the metadata store to pwd. Alternatively, users can set passwords as environment variables. Environment variable passwords allow users to avoid exposing passwords in the runtime.properties file. You can set an environment variable password as in the following example: druid.metadata.storage.connector.password={ "type": "environment", "variable": "METADATA_STORAGE_PASSWORD" } The values are described below. 
Field\tType\tDescription\tRequiredtype\tString\tpassword provider type\tYes: environment variable\tString\tenvironment variable to read password from\tYes Another option that provides even greater control is to securely fetch passwords at runtime using a custom extension of the PasswordProvider interface that is registered at Druid process startup. For more information, see Adding a new Password Provider implementation. To use this implementation, simply set the relevant password runtime property in the same way as shown for the environment variable password: druid.metadata.storage.connector.password={ "type": "<registered_password_provider_name>", "<jackson_property>": "<value>", ... } ","keywords":""},{"title":"pull-deps tool","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/pull-deps","content":"pull-deps tool pull-deps is an Apache Druid tool that can pull down dependencies to the local repository and lay dependencies out into the extension directory as needed. pull-deps has several command line options, as follows: -c or --coordinate (Can be specified multiple times) Extension coordinate to pull down, followed by a maven coordinate, e.g. org.apache.druid.extensions:mysql-metadata-storage -h or --hadoop-coordinate (Can be specified multiple times) Apache Hadoop dependency to pull down, followed by a maven coordinate, e.g. org.apache.hadoop:hadoop-client:2.4.0 --no-default-hadoop Don't pull down the default hadoop coordinate, i.e., org.apache.hadoop:hadoop-client:2.3.0. If the -h option is supplied, then the default hadoop coordinate will not be downloaded. --clean Remove existing extension and hadoop dependencies directories before pulling down dependencies. -l or --localRepository A local repository that Maven will use to put downloaded files. Then pull-deps will lay these files out into the extensions directory as needed. -r or --remoteRepository Add a remote repository. Unless --no-default-remote-repositories is provided, these will be used after https://repo1.maven.org/maven2/. --no-default-remote-repositories Don't use the default remote repository, https://repo1.maven.org/maven2/. Only use the repositories provided directly via --remoteRepository. -d or --defaultVersion Version to use for an extension coordinate that doesn't have version information. For example, if the extension coordinate is org.apache.druid.extensions:mysql-metadata-storage, and the default version is 27.0.0, then this coordinate will be treated as org.apache.druid.extensions:mysql-metadata-storage:27.0.0 --use-proxy Use an http/https proxy to send requests to the remote repository servers. --proxy-host and --proxy-port must be set explicitly if this option is enabled. --proxy-type Set the proxy type. It should be either http or https; the default value is https. --proxy-host Set the proxy host. e.g. proxy.com. --proxy-port Set the proxy port number. e.g. 8080. --proxy-username Set a username to connect to the proxy; this option is only required if the proxy server uses authentication. --proxy-password Set a password to connect to the proxy; this option is only required if the proxy server uses authentication. To run pull-deps, you should 1) Specify druid.extensions.directory and druid.extensions.hadoopDependenciesDir; these two properties tell pull-deps where to put extensions. If you don't specify them, default values will be used, see Configuration. 2) Tell pull-deps what to download using the -c or -h option, each followed by a maven coordinate. 
Example: Suppose you want to download mysql-metadata-storage and hadoop-client(both 2.3.0 and 2.4.0) with a specific version, you can run pull-deps command with -c org.apache.druid.extensions:mysql-metadata-storage:27.0.0, -h org.apache.hadoop:hadoop-client:2.3.0 and -h org.apache.hadoop:hadoop-client:2.4.0, an example command would be: java -classpath "/my/druid/lib/*" org.apache.druid.cli.Main tools pull-deps --clean -c org.apache.druid.extensions:mysql-metadata-storage:27.0.0 -h org.apache.hadoop:hadoop-client:2.3.0 -h org.apache.hadoop:hadoop-client:2.4.0 Because --clean is supplied, this command will first remove the directories specified at druid.extensions.directory and druid.extensions.hadoopDependenciesDir, then recreate them and start downloading the extensions there. After finishing downloading, if you go to the extension directories you specified, you will see tree extensions extensions └── mysql-metadata-storage └── mysql-metadata-storage-27.0.0.jar tree hadoop-dependencies hadoop-dependencies/ └── hadoop-client ├── 2.3.0 │ ├── activation-1.1.jar │ ├── avro-1.7.4.jar │ ├── commons-beanutils-1.7.0.jar │ ├── commons-beanutils-core-1.8.0.jar │ ├── commons-cli-1.2.jar │ ├── commons-codec-1.4.jar ..... lots of jars └── 2.4.0 ├── activation-1.1.jar ├── avro-1.7.4.jar ├── commons-beanutils-1.7.0.jar ├── commons-beanutils-core-1.8.0.jar ├── commons-cli-1.2.jar ├── commons-codec-1.4.jar ..... lots of jars Note that if you specify --defaultVersion, you don't have to put version information in the coordinate. For example, if you want mysql-metadata-storage to use version 27.0.0, you can change the command above to java -classpath "/my/druid/lib/*" org.apache.druid.cli.Main tools pull-deps --defaultVersion 27.0.0 --clean -c org.apache.druid.extensions:mysql-metadata-storage -h org.apache.hadoop:hadoop-client:2.3.0 -h org.apache.hadoop:hadoop-client:2.4.0 info Please note to use the pull-deps tool you must know the Maven groupId, artifactId, and version of your extension. For Druid community extensions listed here, the groupId is "org.apache.druid.extensions.contrib" and the artifactId is the name of the extension.","keywords":""},{"title":"Request logging","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/request-logging","content":"","keywords":""},{"title":"Configure request logging","type":1,"pageTitle":"Request logging","url":"/docs/27.0.0/operations/request-logging#configure-request-logging","content":"To enable request logging, determine the type of request logger to use, then set the configurations specific to the request logger type. The following request logger types are available: noop: Disables request logging, the default behavior.file: Stores logs to disk.emitter: Logs request to an external location, which is configured through an emitter.slf4j: Logs queries via the SLF4J Java logging API.filtered: Filters requests by query type or execution time before logging the filtered queries by the delegated request logger.composing: Logs all requests to multiple request loggers.switching: Logs native queries and SQL queries to separate request loggers. Define the type of request logger in druid.request.logging.type. See the Request logging configuration for properties to set for each type of request logger. Specify these properties in the common.runtime.properties file. You must restart Druid for the changes to take effect. Druid stores the results in the Broker logs, unless the request logging type is emitter. 
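For example, a minimal sketch of file-based request logging in common.runtime.properties (the log directory is illustrative; confirm the exact properties against the Request logging configuration reference for your version):
druid.request.logging.type=file
druid.request.logging.dir=/var/log/druid/requests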
If you use emitter request logging, you must also configure metrics emission. "},{"title":"Configure metrics emission","type":1,"pageTitle":"Request logging","url":"/docs/27.0.0/operations/request-logging#configure-metrics-emission","content":"Druid includes various emitters to send metrics and alerts. To emit query metrics, set druid.request.logging.feed=emitter, and define the emitter type in the druid.emitter property. You can use any of the following emitters in Druid: noop: Disables metric emission, the default behavior.logging: Emits metrics to Log4j 2. See Logging to configure Log4j 2 for use with Druid.http: Sends HTTP POST requests containing the metrics in JSON format to a user-defined endpoint.parametrized: Operates like the http emitter but fine-tunes the recipient URL based on the event feed.composing: Emits metrics to multiple emitter types.graphite: Emits metrics to a Graphite Carbon service. Specify these properties in the common.runtime.properties file. See the Metrics emitters configuration for properties to set for each type of metrics emitter. You must restart Druid for the changes to take effect. "},{"title":"Example","type":1,"pageTitle":"Request logging","url":"/docs/27.0.0/operations/request-logging#example","content":"The following configuration shows how to enable request logging and post query metrics to the endpoint http://example.com:8080/path. # Enable request logging and configure the emitter request logger druid.request.logging.type=emitter druid.request.logging.feed=myRequestLogFeed # Enable metrics emission and tell Druid where to emit messages druid.emitter=http druid.emitter.http.recipientBaseUrl=http://example.com:8080/path # Authenticate to the base URL, if needed druid.emitter.http.basicAuthentication=username:password The following shows an example log emitter output: [ { "feed": "metrics", "timestamp": "2022-01-06T20:32:06.628Z", "service": "druid/broker", "host": "localhost:8082", "version": "2022.01.0-iap-SNAPSHOT", "metric": "sqlQuery/bytes", "value": 9351, "dataSource": "[wikipedia]", "id": "56e8317b-31cc-443d-b109-47f51b21d4c3", "nativeQueryIds": "[2b9cbced-11fc-4d78-a58c-c42863dff3c8]", "remoteAddress": "127.0.0.1", "success": "true" }, { "feed": "myRequestLogFeed", "timestamp": "2022-01-06T20:32:06.585Z", "remoteAddr": "127.0.0.1", "service": "druid/broker", "sqlQueryContext": { "useApproximateCountDistinct": false, "sqlQueryId": "56e8317b-31cc-443d-b109-47f51b21d4c3", "useApproximateTopN": false, "useCache": false, "sqlOuterLimit": 101, "populateCache": false, "nativeQueryIds": "[2b9cbced-11fc-4d78-a58c-c42863dff3c8]" }, "queryStats": { "sqlQuery/time": 43, "sqlQuery/planningTimeMs": 5, "sqlQuery/bytes": 9351, "success": true, "context": { "useApproximateCountDistinct": false, "sqlQueryId": "56e8317b-31cc-443d-b109-47f51b21d4c3", "useApproximateTopN": false, "useCache": false, "sqlOuterLimit": 101, "populateCache": false, "nativeQueryIds": "[2b9cbced-11fc-4d78-a58c-c42863dff3c8]" }, "identity": "allowAll" }, "query": null, "host": "localhost:8082", "sql": "SELECT * FROM wikipedia WHERE cityName = 'Buenos Aires'" }, { "feed": "myRequestLogFeed", "timestamp": "2022-01-06T20:32:07.652Z", "remoteAddr": "", "service": "druid/broker", "sqlQueryContext": {}, "queryStats": { "query/time": 16, "query/bytes": -1, "success": true, "identity": "allowAll" }, "query": { "queryType": "scan", "dataSource": { "type": "table", "name": "wikipedia" }, "intervals": { "type": "intervals", "intervals": [ 
"-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z" ] }, "virtualColumns": [ { "type": "expression", "name": "v0", "expression": "'Buenos Aires'", "outputType": "STRING" } ], "resultFormat": "compactedList", "batchSize": 20480, "limit": 101, "filter": { "type": "selector", "dimension": "cityName", "value": "Buenos Aires", "extractionFn": null }, "columns": [ "__time", "added", "channel", "comment", "commentLength", "countryIsoCode", "countryName", "deleted", "delta", "deltaBucket", "diffUrl", "flags", "isAnonymous", "isMinor", "isNew", "isRobot", "isUnpatrolled", "metroCode", "namespace", "page", "regionIsoCode", "regionName", "user", "v0" ], "legacy": false, "context": { "populateCache": false, "queryId": "62e3d373-6e50-41b4-873b-1e56347c2950", "sqlOuterLimit": 101, "sqlQueryId": "cbb3d519-aee9-4566-8920-dbbeab6269f5", "useApproximateCountDistinct": false, "useApproximateTopN": false, "useCache": false }, "descending": false, "granularity": { "type": "all" } }, "host": "localhost:8082", "sql": null }, ... ] "},{"title":"Learn more","type":1,"pageTitle":"Request logging","url":"/docs/27.0.0/operations/request-logging#learn-more","content":"See the following topics for more information. Query metricsRequest logging configurationMetrics emitters configuration "},{"title":"reset-cluster tool","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/reset-cluster","content":"reset-cluster tool The reset-cluster tool can be used to completely wipe out Apache Druid cluster state stored on Metadata and Deep storage. This is intended to be used in dev/test environments where you typically want to reset the cluster before running the test suite.reset-cluster automatically figures out necessary information from Druid cluster configuration. So the java classpath used in the command must have all the necessary druid configuration files. It can be run in one of the following ways. java -classpath "/my/druid/lib/*" -Ddruid.extensions.loadList="[]" org.apache.druid.cli.Main \\ tools reset-cluster \\ [--metadataStore] \\ [--segmentFiles] \\ [--taskLogs] \\ [--hadoopWorkingPath] or java -classpath "/my/druid/lib/*" -Ddruid.extensions.loadList="[]" org.apache.druid.cli.Main \\ tools reset-cluster \\ --all Usage documentation can be printed by running following command. $ java -classpath "/my/druid/lib/*" -Ddruid.extensions.loadList="[]" org.apache.druid.cli.Main help tools reset-cluster NAME druid tools reset-cluster - Cleanup all persisted state from metadata and deep storage. SYNOPSIS druid tools reset-cluster [--all] [--hadoopWorkingPath] [--metadataStore] [--segmentFiles] [--taskLogs] OPTIONS --all delete all state stored in metadata and deep storage --hadoopWorkingPath delete hadoopWorkingPath --metadataStore delete all records in metadata storage --segmentFiles delete all segment files from deep storage --taskLogs delete all tasklogs ","keywords":""},{"title":"Rolling updates","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/rolling-updates","content":"","keywords":""},{"title":"Historical","type":1,"pageTitle":"Rolling updates","url":"/docs/27.0.0/operations/rolling-updates#historical","content":"Historical processes can be updated one at a time. Each Historical process has a startup time to memory map all the segments it was serving before the update. The startup time typically takes a few seconds to a few minutes, depending on the hardware of the host. 
As long as each Historical process is updated with a sufficient delay (greater than the time required to start a single process), you can perform a rolling update of the entire Historical cluster. "},{"title":"Overlord","type":1,"pageTitle":"Rolling updates","url":"/docs/27.0.0/operations/rolling-updates#overlord","content":"Overlord processes can be updated one at a time in a rolling fashion. "},{"title":"Middle Managers/Indexers","type":1,"pageTitle":"Rolling updates","url":"/docs/27.0.0/operations/rolling-updates#middle-managersindexers","content":"Middle Managers or Indexer nodes run both batch and real-time indexing tasks. Generally, you want to update Middle Managers in such a way that real-time indexing tasks do not fail. There are three strategies for doing this. "},{"title":"Rolling restart (restore-based)","type":1,"pageTitle":"Rolling updates","url":"/docs/27.0.0/operations/rolling-updates#rolling-restart-restore-based","content":"Middle Managers can be updated one at a time in a rolling fashion when you set druid.indexer.task.restoreTasksOnRestart=true. In this case, indexing tasks that support restoring will restore their state on Middle Manager restart and will not fail. Currently, only realtime tasks support restoring, so non-realtime indexing tasks will fail and need to be resubmitted. "},{"title":"Rolling restart (graceful-termination-based)","type":1,"pageTitle":"Rolling updates","url":"/docs/27.0.0/operations/rolling-updates#rolling-restart-graceful-termination-based","content":"Middle Managers can be gracefully terminated using the "disable" API. This works for all task types, even tasks that are not restorable. To prepare a Middle Manager for update, send a POST request to <MiddleManager_IP:PORT>/druid/worker/v1/disable. The Overlord will then stop sending tasks to this Middle Manager. Tasks that have already started will run to completion. You can check the current state using <MiddleManager_IP:PORT>/druid/worker/v1/enabled. To view all existing tasks, send a GET request to <MiddleManager_IP:PORT>/druid/worker/v1/tasks. When this list is empty, you can safely update the Middle Manager. After the Middle Manager starts back up, it is automatically enabled again. You can also manually enable Middle Managers by POSTing to <MiddleManager_IP:PORT>/druid/worker/v1/enable. "},{"title":"Autoscaling-based replacement","type":1,"pageTitle":"Rolling updates","url":"/docs/27.0.0/operations/rolling-updates#autoscaling-based-replacement","content":"If autoscaling is enabled on your Overlord, then Overlord processes can launch new Middle Manager processes en masse and then gracefully terminate old ones as their tasks finish. This process is configured by setting druid.indexer.runner.minWorkerVersion=#{VERSION}. Each time you update your Overlord process, increase the VERSION value to trigger a mass launch of new Middle Managers. The config druid.indexer.autoscale.workerVersion=#{VERSION} must also be set. "},{"title":"Standalone Real-time","type":1,"pageTitle":"Rolling updates","url":"/docs/27.0.0/operations/rolling-updates#standalone-real-time","content":"Standalone real-time processes can be updated one at a time in a rolling fashion. "},{"title":"Broker","type":1,"pageTitle":"Rolling updates","url":"/docs/27.0.0/operations/rolling-updates#broker","content":"Broker processes can be updated one at a time in a rolling fashion. There needs to be some delay between updating each process, as Brokers must load the entire state of the cluster before they return valid results. 
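For the graceful-termination strategy described above, a minimal drain-and-verify sequence might look like the following sketch. The <MiddleManager_IP:PORT> placeholder follows the same convention as the endpoints above; substitute the host and configured port of the worker you are updating. # Stop the Overlord from assigning new tasks to this Middle Manager curl -X POST 'http://<MiddleManager_IP:PORT>/druid/worker/v1/disable' # Confirm the worker now reports itself as disabled curl 'http://<MiddleManager_IP:PORT>/druid/worker/v1/enabled' # Poll until the task list is empty, then update and restart the Middle Manager curl 'http://<MiddleManager_IP:PORT>/druid/worker/v1/tasks' 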
"},{"title":"Coordinator","type":1,"pageTitle":"Rolling updates","url":"/docs/27.0.0/operations/rolling-updates#coordinator","content":"Coordinator processes can be updated one at a time in a rolling fashion. "},{"title":"Using rules to drop and retain data","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/rule-configuration","content":"","keywords":""},{"title":"Set retention rules","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#set-retention-rules","content":"You can use the Druid web console or the Service status API reference to create and manage retention rules. "},{"title":"Use the web console","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#use-the-web-console","content":"To set retention rules in the Druid web console: On the console home page, click Datasources.Click the name of your datasource to open the data window.Select Actions > Edit retention rules.Click +New rule.Select a rule type and set properties for the rule.Click Next and enter a description for the rule.Click Save to save and apply the rule to the datasource. "},{"title":"Use the Coordinator API","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#use-the-coordinator-api","content":"To set one or more default retention rules for all datasources, send a POST request containing a JSON object for each rule to /druid/coordinator/v1/rules/_default. The following example request sets a default forever broadcast rule for all datasources: curl --location --request POST 'http://localhost:8888/druid/coordinator/v1/rules/_default' \\ --header 'Content-Type: application/json' \\ --data-raw '[{ "type": "broadcastForever" }]' To set one or more retention rules for a specific datasource, send a POST request containing a JSON object for each rule to /druid/coordinator/v1/rules/{datasourceName}. The following example request sets a period drop rule and a period broadcast rule for the wikipedia datasource: curl --location --request POST 'http://localhost:8888/druid/coordinator/v1/rules/wikipedia' \\ --header 'Content-Type: application/json' \\ --data-raw '[{ "type": "dropByPeriod", "period": "P1M", "includeFuture": true }, { "type": "broadcastByPeriod", "period": "P1M", "includeFuture": true }]' To retrieve all rules for all datasources, send a GET request to /druid/coordinator/v1/rules—for example: curl --location --request GET 'http://localhost:8888/druid/coordinator/v1/rules' "},{"title":"Rule structure","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#rule-structure","content":"The rules API accepts an array of rules as JSON objects. The JSON object you send in the API request for each rule is specific to the rules types outlined below. info You must pass the entire array of rules, in your desired order, with each API request. Each POST request to the rules API overwrites the existing rules for the specified datasource. The order of rules is very important. The Coordinator reads rules in the order in which they appear in the rules list. For example, in the following screenshot the Coordinator evaluates data against rule 1, then rule 2, then rule 3: The Coordinator cycles through all used segments and matches each segment with the first rule that applies. Each segment can only match a single rule. 
In the web console you can use the up and down arrows on the right side of the interface to change the order of the rules. "},{"title":"Load rules","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#load-rules","content":"Load rules define how Druid assigns segments to Historical process tiers, and how many replicas of a segment exist in each tier. If you have a single tier, Druid automatically names the tier _default. If you define an additional tier, you must define a load rule to specify which segments to load on that tier. Until you define a load rule, your new tier remains empty. All load rules can have these properties: Property\tDescription\tRequired\tDefault valuetieredReplicants\tMap from tier names to the respective number of segment replicas to be loaded on those tiers. The number of replicas for each tier must be either 0 or a positive integer.\tNo\tWhen useDefaultTierForNull is true, the default value is {"_default_tier": 2} i.e. 2 replicas to be loaded on the _default_tier. When useDefaultTierForNull is false, the default value is {} i.e. no replicas to be loaded on any tier. useDefaultTierForNull\tDetermines the default value of tieredReplicants if it is not specified or set to null.\tNo\ttrue Specific types of load rules discussed below may have other properties too. Load rules are also how you take advantage of the resource savings that query the data from deep storage provides. One way to configure data so that certain segments are not loaded onto Historical tiers but are available to query from deep storage is to set tieredReplicants to an empty array and useDefaultTierForNull to false for those segments, either by interval or by period. "},{"title":"Forever load rule","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#forever-load-rule","content":"The forever load rule assigns all datasource segments to specified tiers. It is the default rule Druid applies to datasources. Forever load rules have type loadForever. The following example places one replica of each segment on a custom tier named hot, and another single replica on the default tier. { "type": "loadForever", "tieredReplicants": { "hot": 1, "_default_tier": 1 } } Set the following property: tieredReplicants: a map of tier names to the number of segment replicas for that tier.useDefaultTierForNull: This parameter determines the default value of tieredReplicants and only has an effect if the field is not present. The default value of useDefaultTierForNull is true. "},{"title":"Period load rule","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#period-load-rule","content":"You can use a period load rule to assign segment data in a specific period to a tier. Druid compares a segment's interval to the period you specify in the rule and loads the matching data. Period load rules have type loadByPeriod. The following example places one replica of data in a one-month period on a custom tier named hot, and another single replica on the default tier. { "type": "loadByPeriod", "period": "P1M", "includeFuture": true, "tieredReplicants": { "hot": 1, "_default_tier": 1 } } Set the following properties: period: a JSON object representing ISO 8601 periods. The period is from some time in the past to the present, or into the future if includeFuture is set to true. 
includeFuture: a boolean flag to instruct Druid to match a segment if: the segment interval overlaps the rule interval, orthe segment interval starts any time after the rule interval starts. You can use this property to load segments with future start and end dates, where "future" is relative to the time when the Coordinator evaluates data against the rule. Defaults to true. tieredReplicants: a map of tier names to the number of segment replicas for that tier. useDefaultTierForNull: This parameter determines the default value of tieredReplicants and only has an effect if the field is not present. The default value of useDefaultTierForNull is true. "},{"title":"Interval load rule","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#interval-load-rule","content":"You can use an interval rule to assign a specific range of data to a tier. For example, analysts may typically work with the complete data set for all of last week and not so much with the data for the current week. Interval load rules have type loadByInterval. The following example places one replica of data matching the specified interval on a custom tier named hot, and another single replica on the default tier. { "type": "loadByInterval", "interval": "2012-01-01/2013-01-01", "tieredReplicants": { "hot": 1, "_default_tier": 1 } } Set the following properties: interval: the load interval specified as an ISO 8601 range encoded as a string.tieredReplicants: a map of tier names to the number of segment replicas for that tier. useDefaultTierForNull: This parameter determines the default value of tieredReplicants and only has an effect if the field is not present. The default value of useDefaultTierForNull is true. "},{"title":"Drop rules","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#drop-rules","content":"Drop rules define when Druid drops segments from the cluster. Druid keeps dropped data in deep storage. Note that if you enable automatic cleanup of unused segments, or you run a kill task, Druid deletes the data from deep storage. See Data deletion for more information on deleting data. If you want to use a load rule to retain only data from a defined period of time, you must also define a drop rule. If you don't define a drop rule, Druid retains data that doesn't lie within your defined period according to the default rule, loadForever. "},{"title":"Forever drop rule","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#forever-drop-rule","content":"The forever drop rule drops all segment data from the cluster. If you configure a set of rules with a forever drop rule as the last rule, Druid drops any segment data that remains after it evaluates the higher priority rules. Forever drop rules have type dropForever: { "type": "dropForever" } "},{"title":"Period drop rule","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#period-drop-rule","content":"Druid compares a segment's interval to the period you specify in the rule and drops the matching data. The rule matches if the period contains the segment interval. This rule always drops recent data. Period drop rules have type dropByPeriod and the following JSON structure: { "type": "dropByPeriod", "period": "P1M", "includeFuture": true } Set the following properties: period: a JSON object representing ISO 8601 periods. 
The period is from some time in the past to the future or to the current time, depending on the includeFuture flag. includeFuture: a boolean flag to instruct Druid to match a segment if one of the following conditions apply: the segment interval overlaps the rule intervalthe segment interval starts any time after the rule interval starts You can use this property to drop segments with future start and end dates, where "future" is relative to the time when the Coordinator evaluates data against the rule. Defaults to true. "},{"title":"Period drop before rule","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#period-drop-before-rule","content":"Druid compares a segment's interval to the period you specify in the rule and drops the matching data. The rule matches if the segment interval is before the specified period. If you only want to retain recent data, you can use this rule to drop old data before a specified period, and add a loadForever rule to retain the data that follows it. Note that the rule combination dropBeforeByPeriod + loadForever is equivalent to loadByPeriod(includeFuture = true) + dropForever. Period drop rules have type dropBeforeByPeriod and the following JSON structure: { "type": "dropBeforeByPeriod", "period": "P1M" } Set the following property: period: a JSON object representing ISO 8601 periods. "},{"title":"Interval drop rule","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#interval-drop-rule","content":"You can use a drop interval rule to prevent Druid from loading a specified range of data onto any tier. The range is typically your oldest data. The dropped data resides in deep storage and can still be queried from deep storage. Interval drop rules have type dropByInterval and the following JSON structure: { "type": "dropByInterval", "interval": "2012-01-01/2013-01-01" } Set the following property: interval: the drop interval specified as an ISO 8601 range encoded as a string. "},{"title":"Broadcast rules","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#broadcast-rules","content":"Druid extensions use broadcast rules to load segment data onto all brokers in the cluster. Apply broadcast rules in a test environment, not in production. "},{"title":"Forever broadcast rule","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#forever-broadcast-rule","content":"The forever broadcast rule loads all segment data in your datasources onto all brokers in the cluster. Forever broadcast rules have type broadcastForever: { "type": "broadcastForever" } "},{"title":"Period broadcast rule","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#period-broadcast-rule","content":"Druid compares a segment's interval to the period you specify in the rule and loads the matching data onto the brokers in the cluster. Period broadcast rules have type broadcastByPeriod and the following JSON structure: { "type": "broadcastByPeriod", "period": "P1M", "includeFuture": true } Set the following properties: period: a JSON object representing ISO 8601 periods. The period is from some time in the past to the future or to the current time, depending on the includeFuture flag. 
includeFuture: a boolean flag to instruct Druid to match a segment if one of the following conditions apply: the segment interval overlaps the rule intervalthe segment interval starts any time after the rule interval starts. You can use this property to broadcast segments with future start and end dates, where "future" is relative to the time when the Coordinator evaluates data against the rule. Defaults to true. "},{"title":"Interval broadcast rule","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#interval-broadcast-rule","content":"An interval broadcast rule loads a specific range of data onto the brokers in the cluster. Interval broadcast rules have type broadcastByInterval and the following JSON structure: { "type": "broadcastByInterval", "interval": "2012-01-01/2013-01-01" } Set the following property: interval: the broadcast interval specified as an ISO 8601 range encoded as a string. "},{"title":"Permanently delete data","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#permanently-delete-data","content":"Druid can fully drop data from the cluster, wipe the metadata store entry, and remove the data from deep storage for any segments marked unused. Note that Druid always marks segments dropped from the cluster by rules as unused. You can submit a kill task to the Overlord to do this. "},{"title":"Reload dropped data","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#reload-dropped-data","content":"You can't use a single rule to reload data Druid has dropped from a cluster. To reload dropped data: Set your retention period—for example, change the retention period from one month to two months.Use the web console or the API to mark all segments belonging to the datasource as used. "},{"title":"Learn more","type":1,"pageTitle":"Using rules to drop and retain data","url":"/docs/27.0.0/operations/rule-configuration#learn-more","content":"For more information about using retention rules in Druid, see the following topics: Tutorial: Configuring data retentionConfigure Druid for mixed workloadsRouter process "},{"title":"Metrics","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/metrics","content":"","keywords":""},{"title":"Query metrics","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#query-metrics","content":""},{"title":"Router","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#router","content":"Metric\tDescription\tDimensions\tNormal Valuequery/time\tMilliseconds taken to complete a query.\tNative Query: dataSource, type, interval, hasFilters, duration, context, remoteAddress, id.\t< 1s "},{"title":"Broker","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#broker","content":"Metric\tDescription\tDimensions\tNormal Valuequery/time\tMilliseconds taken to complete a query. Common: dataSource, type, interval, hasFilters, duration, context, remoteAddress, id. Aggregation Queries: numMetrics, numComplexMetrics. GroupBy: numDimensions. TopN: threshold, dimension. < 1s query/bytes\tThe total number of bytes returned to the requesting client in the query response from the broker. Other services report the total bytes for their portion of the query. Common: dataSource, type, interval, hasFilters, duration, context, remoteAddress, id. Aggregation Queries: numMetrics, numComplexMetrics. GroupBy: numDimensions. TopN: threshold, dimension. 
query/node/time\tMilliseconds taken to query individual historical/realtime processes.\tid, status, server\t< 1s query/node/bytes\tNumber of bytes returned from querying individual historical/realtime processes.\tid, status, server query/node/ttfb\tTime to first byte. Milliseconds elapsed until Broker starts receiving the response from individual historical/realtime processes.\tid, status, server\t< 1s query/node/backpressure\tMilliseconds that the channel to this process has spent suspended due to backpressure.\tid, status, server. query/count\tNumber of total queries.\tThis metric is only available if the QueryCountStatsMonitor module is included. query/success/count\tNumber of queries successfully processed.\tThis metric is only available if the QueryCountStatsMonitor module is included. query/failed/count\tNumber of failed queries.\tThis metric is only available if the QueryCountStatsMonitor module is included. query/interrupted/count\tNumber of queries interrupted due to cancellation.\tThis metric is only available if the QueryCountStatsMonitor module is included. query/timeout/count\tNumber of timed out queries.\tThis metric is only available if the QueryCountStatsMonitor module is included. query/segments/count\tThis metric is not enabled by default. See the QueryMetrics Interface for reference regarding enabling this metric. Number of segments that will be touched by the query. In the broker, it makes a plan to distribute the query to realtime tasks and historicals based on a snapshot of segment distribution state. If there are some segments moved after this snapshot is created, certain historicals and realtime tasks can report those segments as missing to the broker. The broker will resend the query to the new servers that serve those segments after move. In this case, those segments can be counted more than once in this metric. Varies query/priority\tAssigned lane and priority, only if Laning strategy is enabled. Refer to Laning strategies\tlane, dataSource, type\t0 sqlQuery/time\tMilliseconds taken to complete a SQL query.\tid, nativeQueryIds, dataSource, remoteAddress, success, engine\t< 1s sqlQuery/planningTimeMs\tMilliseconds taken to plan a SQL to native query.\tid, nativeQueryIds, dataSource, remoteAddress, success, engine sqlQuery/bytes\tNumber of bytes returned in the SQL query response.\tid, nativeQueryIds, dataSource, remoteAddress, success, engine serverview/init/time\tTime taken to initialize the broker server view. Useful to detect if brokers are taking too long to start. Depends on the number of segments. metadatacache/init/time\tTime taken to initialize the broker segment metadata cache. Useful to detect if brokers are taking too long to start Depends on the number of segments. metadatacache/refresh/count\tNumber of segments to refresh in broker segment metadata cache.\tdataSource metadatacache/refresh/time\tTime taken to refresh segments in broker segment metadata cache.\tdataSource serverview/sync/healthy\tSync status of the Broker with a segment-loading server such as a Historical or Peon. Emitted only when HTTP-based server view is enabled. This metric can be used in conjunction with serverview/sync/unstableTime to debug slow startup of Brokers.\tserver, tier\t1 for fully synced servers, 0 otherwise serverview/sync/unstableTime\tTime in milliseconds for which the Broker has been failing to sync with a segment-loading server. Emitted only when HTTP-based server view is enabled.\tserver, tier\tNot emitted for synced servers. 
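Many of the counts in this table are emitted only when the corresponding monitor module is loaded. As a sketch, assuming the standard monitor class name, enabling query counts on the Broker would mean adding something like the following to its runtime properties; verify the class path and emission period against your Druid version. druid.monitoring.monitors=["org.apache.druid.server.metrics.QueryCountStatsMonitor"] # Metrics are flushed on each emission period, PT1M by default druid.monitoring.emissionPeriod=PT1M 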
"},{"title":"Historical","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#historical","content":"Metric\tDescription\tDimensions\tNormal Valuequery/time\tMilliseconds taken to complete a query. Common: dataSource, type, interval, hasFilters, duration, context, remoteAddress, id. Aggregation Queries: numMetrics, numComplexMetrics. GroupBy: numDimensions. TopN: threshold, dimension. < 1s query/segment/time\tMilliseconds taken to query individual segment. Includes time to page in the segment from disk.\tid, status, segment, vectorized.\tseveral hundred milliseconds query/wait/time\tMilliseconds spent waiting for a segment to be scanned.\tid, segment\t< several hundred milliseconds segment/scan/pending\tNumber of segments in queue waiting to be scanned. Close to 0 query/segmentAndCache/time\tMilliseconds taken to query individual segment or hit the cache (if it is enabled on the Historical process).\tid, segment\tseveral hundred milliseconds query/cpu/time\tMicroseconds of CPU time taken to complete a query. Common: dataSource, type, interval, hasFilters, duration, context, remoteAddress, id. Aggregation Queries: numMetrics, numComplexMetrics. GroupBy: numDimensions. TopN: threshold, dimension. Varies query/count\tTotal number of queries.\tThis metric is only available if the QueryCountStatsMonitor module is included. query/success/count\tNumber of queries successfully processed.\tThis metric is only available if the QueryCountStatsMonitor module is included. query/failed/count\tNumber of failed queries.\tThis metric is only available if the QueryCountStatsMonitor module is included. query/interrupted/count\tNumber of queries interrupted due to cancellation.\tThis metric is only available if the QueryCountStatsMonitor module is included. query/timeout/count\tNumber of timed out queries.\tThis metric is only available if the QueryCountStatsMonitor module is included.\t "},{"title":"Real-time","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#real-time","content":"Metric\tDescription\tDimensions\tNormal Valuequery/time\tMilliseconds taken to complete a query. Common: dataSource, type, interval, hasFilters, duration, context, remoteAddress, id. Aggregation Queries: numMetrics, numComplexMetrics. GroupBy: numDimensions. TopN: threshold, dimension. < 1s query/wait/time\tMilliseconds spent waiting for a segment to be scanned.\tid, segment\tseveral hundred milliseconds segment/scan/pending\tNumber of segments in queue waiting to be scanned. Close to 0 query/cpu/time\tMicroseconds of CPU time taken to complete a query. Common: dataSource, type, interval, hasFilters, duration, context, remoteAddress, id. Aggregation Queries: numMetrics, numComplexMetrics. GroupBy: numDimensions. TopN: threshold, dimension. Varies query/count\tNumber of total queries.\tThis metric is only available if the QueryCountStatsMonitor module is included. query/success/count\tNumber of queries successfully processed.\tThis metric is only available if the QueryCountStatsMonitor module is included. query/failed/count\tNumber of failed queries.\tThis metric is only available if the QueryCountStatsMonitor module is included. query/interrupted/count\tNumber of queries interrupted due to cancellation.\tThis metric is only available if the QueryCountStatsMonitor module is included. 
query/timeout/count\tNumber of timed out queries.\tThis metric is only available if the QueryCountStatsMonitor module is included.\t "},{"title":"Jetty","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#jetty","content":"Metric\tDescription\tNormal Valuejetty/numOpenConnections\tNumber of open jetty connections.\tNot much higher than number of jetty threads. jetty/threadPool/total\tNumber of total workable threads allocated.\tThe number should equal to threadPoolNumIdleThreads + threadPoolNumBusyThreads. jetty/threadPool/idle\tNumber of idle threads.\tLess than or equal to threadPoolNumTotalThreads. Non zero number means there is less work to do than configured capacity. jetty/threadPool/busy\tNumber of busy threads that has work to do from the worker queue.\tLess than or equal to threadPoolNumTotalThreads. jetty/threadPool/isLowOnThreads\tA rough indicator of whether number of total workable threads allocated is enough to handle the works in the work queue.\t0 jetty/threadPool/min\tNumber of minimum threads allocatable.\tdruid.server.http.numThreads plus a small fixed number of threads allocated for Jetty acceptors and selectors. jetty/threadPool/max\tNumber of maximum threads allocatable.\tdruid.server.http.numThreads plus a small fixed number of threads allocated for Jetty acceptors and selectors. jetty/threadPool/queueSize\tSize of the worker queue.\tNot much higher than druid.server.http.queueSize. "},{"title":"Cache","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#cache","content":"Metric\tDescription\tDimensions\tNormal Valuequery/cache/delta/*\tCache metrics since the last emission. N/A query/cache/total/*\tTotal cache metrics. N/A */numEntries\tNumber of cache entries. Varies */sizeBytes\tSize in bytes of cache entries. Varies */hits\tNumber of cache hits. Varies */misses\tNumber of cache misses. Varies */evictions\tNumber of cache evictions. Varies */hitRate\tCache hit rate. ~40% */averageByte\tAverage cache entry byte size. Varies */timeouts\tNumber of cache timeouts. 0 */errors\tNumber of cache errors. 0 */put/ok\tNumber of new cache entries successfully cached. Varies, but more than zero */put/error\tNumber of new cache entries that could not be cached due to errors. Varies, but more than zero */put/oversized\tNumber of potential new cache entries that were skipped due to being too large (based on druid.{broker,historical,realtime}.cache.maxEntrySize properties). Varies Memcached only metrics Memcached client metrics are reported as per the following. These metrics come directly from the client as opposed to from the cache retrieval layer. Metric\tDescription\tDimensions\tNormal Valuequery/cache/memcached/total\tCache metrics unique to memcached (only if druid.cache.type=memcached) as their actual values.\tVariable\tN/A query/cache/memcached/delta\tCache metrics unique to memcached (only if druid.cache.type=memcached) as their delta from the prior event emission.\tVariable\tN/A "},{"title":"SQL Metrics","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#sql-metrics","content":"If SQL is enabled, the Broker will emit the following metrics for SQL. 
Metric\tDescription\tDimensions\tNormal ValuesqlQuery/time\tMilliseconds taken to complete a SQL.\tid, nativeQueryIds, dataSource, remoteAddress, success\t< 1s sqlQuery/planningTimeMs\tMilliseconds taken to plan a SQL to native query.\tid, nativeQueryIds, dataSource, remoteAddress, success sqlQuery/bytes\tnumber of bytes returned in SQL response.\tid, nativeQueryIds, dataSource, remoteAddress, success\t "},{"title":"Ingestion metrics","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#ingestion-metrics","content":""},{"title":"General native ingestion metrics","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#general-native-ingestion-metrics","content":"Metric\tDescription\tDimensions\tNormal Valueingest/count\tCount of 1 every time an ingestion job runs (includes compaction jobs). Aggregate using dimensions.\tdataSource, taskId, taskType, groupId, taskIngestionMode, tags\tAlways 1. ingest/segments/count\tCount of final segments created by job (includes tombstones).\tdataSource, taskId, taskType, groupId, taskIngestionMode, tags\tAt least 1. ingest/tombstones/count\tCount of tombstones created by job.\tdataSource, taskId, taskType, groupId, taskIngestionMode, tags\tZero or more for replace. Always zero for non-replace tasks (always zero for legacy replace, see below). The taskIngestionMode dimension includes the following modes: APPEND: a native ingestion job appending to existing segments REPLACE_LEGACY: the original replace before tombstonesREPLACE: a native ingestion job replacing existing segments using tombstones The mode is decided using the values of the isAppendToExisting and isDropExisting flags in the task's IOConfig as follows: isAppendToExisting\tisDropExisting\tmodetrue\tfalse\tAPPEND true\ttrue Invalid combination, exception thrown. false\tfalse\tREPLACE_LEGACY (this is the default for native batch ingestion). false\ttrue\tREPLACE The tags dimension is reported only for metrics emitted from ingestion tasks whose ingest spec specifies the tagsfield in the context field of the ingestion spec. tags is expected to be a map of string to object. "},{"title":"Ingestion metrics for Kafka","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#ingestion-metrics-for-kafka","content":"These metrics apply to the Kafka indexing service. Metric\tDescription\tDimensions\tNormal Valueingest/kafka/lag\tTotal lag between the offsets consumed by the Kafka indexing tasks and latest offsets in Kafka brokers across all partitions. Minimum emission period for this metric is a minute.\tdataSource, stream, tags\tGreater than 0, should not be a very high number. ingest/kafka/maxLag\tMax lag between the offsets consumed by the Kafka indexing tasks and latest offsets in Kafka brokers across all partitions. Minimum emission period for this metric is a minute.\tdataSource, stream, tags\tGreater than 0, should not be a very high number. ingest/kafka/avgLag\tAverage lag between the offsets consumed by the Kafka indexing tasks and latest offsets in Kafka brokers across all partitions. Minimum emission period for this metric is a minute.\tdataSource, stream, tags\tGreater than 0, should not be a very high number. ingest/kafka/partitionLag\tPartition-wise lag between the offsets consumed by the Kafka indexing tasks and latest offsets in Kafka brokers. Minimum emission period for this metric is a minute.\tdataSource, stream, partition, tags\tGreater than 0, should not be a very high number. 
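The tags dimension reported by the ingestion metrics above is read from the context field of the ingestion spec. A minimal sketch of that fragment follows; the keys inside tags are hypothetical labels, since tags is simply a map of string to object that you define. "context": { "tags": { "team": "ingest", "pipeline": "wikipedia-hourly" } } 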
"},{"title":"Ingestion metrics for Kinesis","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#ingestion-metrics-for-kinesis","content":"These metrics apply to the Kinesis indexing service. Metric\tDescription\tDimensions\tNormal Valueingest/kinesis/lag/time\tTotal lag time in milliseconds between the current message sequence number consumed by the Kinesis indexing tasks and latest sequence number in Kinesis across all shards. Minimum emission period for this metric is a minute.\tdataSource, stream, tags\tGreater than 0, up to max Kinesis retention period in milliseconds. ingest/kinesis/maxLag/time\tMax lag time in milliseconds between the current message sequence number consumed by the Kinesis indexing tasks and latest sequence number in Kinesis across all shards. Minimum emission period for this metric is a minute.\tdataSource, stream, tags\tGreater than 0, up to max Kinesis retention period in milliseconds. ingest/kinesis/avgLag/time\tAverage lag time in milliseconds between the current message sequence number consumed by the Kinesis indexing tasks and latest sequence number in Kinesis across all shards. Minimum emission period for this metric is a minute.\tdataSource, stream, tags\tGreater than 0, up to max Kinesis retention period in milliseconds. ingest/kinesis/partitionLag/time\tPartition-wise lag time in milliseconds between the current message sequence number consumed by the Kinesis indexing tasks and latest sequence number in Kinesis. Minimum emission period for this metric is a minute.\tdataSource, stream, partition, tags\tGreater than 0, up to max Kinesis retention period in milliseconds. "},{"title":"Other ingestion metrics","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#other-ingestion-metrics","content":"Streaming ingestion tasks and certain types of batch ingestion emit the following metrics. These metrics are deltas for each emission period. Metric\tDescription\tDimensions\tNormal Valueingest/events/thrownAway\tNumber of events rejected because they are either null, or filtered by the transform spec, or outside the windowPeriod.\tdataSource, taskId, taskType, groupId, tags\t0 ingest/events/unparseable\tNumber of events rejected because the events are unparseable.\tdataSource, taskId, taskType, groupId, tags\t0 ingest/events/duplicate\tNumber of events rejected because the events are duplicated.\tdataSource, taskId, taskType, groupId, tags\t0 ingest/events/processed\tNumber of events successfully processed per emission period.\tdataSource, taskId, taskType, groupId, tags\tEqual to the number of events per emission period. ingest/rows/output\tNumber of Druid rows persisted.\tdataSource, taskId, taskType, groupId\tYour number of events with rollup. ingest/persists/count\tNumber of times persist occurred.\tdataSource, taskId, taskType, groupId, tags\tDepends on configuration. ingest/persists/time\tMilliseconds spent doing intermediate persist.\tdataSource, taskId, taskType, groupId, tags\tDepends on configuration. Generally a few minutes at most. ingest/persists/cpu\tCpu time in Nanoseconds spent on doing intermediate persist.\tdataSource, taskId, taskType, groupId, tags\tDepends on configuration. Generally a few minutes at most. 
ingest/persists/backPressure\tMilliseconds spent creating persist tasks and blocking waiting for them to finish.\tdataSource, taskId, taskType, groupId, tags\t0 or very low ingest/persists/failed\tNumber of persists that failed.\tdataSource, taskId, taskType, groupId, tags\t0 ingest/handoff/failed\tNumber of handoffs that failed.\tdataSource, taskId, taskType, groupId,tags\t0 ingest/merge/time\tMilliseconds spent merging intermediate segments.\tdataSource, taskId, taskType, groupId, tags\tDepends on configuration. Generally a few minutes at most. ingest/merge/cpu\tCpu time in Nanoseconds spent on merging intermediate segments.\tdataSource, taskId, taskType, groupId, tags\tDepends on configuration. Generally a few minutes at most. ingest/handoff/count\tNumber of handoffs that happened.\tdataSource, taskId, taskType, groupId, tags\tVaries. Generally greater than 0 once every segment granular period if cluster operating normally. ingest/sink/count\tNumber of sinks not handoffed.\tdataSource, taskId, taskType, groupId, tags\t1~3 ingest/events/messageGap\tTime gap in milliseconds between the latest ingested event timestamp and the current system timestamp of metrics emission. If the value is increasing but lag is low, Druid may not be receiving new data. This metric is reset as new tasks spawn up.\tdataSource, taskId, taskType, groupId, tags\tGreater than 0, depends on the time carried in event. ingest/notices/queueSize\tNumber of pending notices to be processed by the coordinator.\tdataSource, tags\tTypically 0 and occasionally in lower single digits. Should not be a very high number. ingest/notices/time\tMilliseconds taken to process a notice by the supervisor.\tdataSource, tags\t< 1s ingest/pause/time\tMilliseconds spent by a task in a paused state without ingesting.\tdataSource, taskId, tags\t< 10 seconds ingest/handoff/time\tTotal number of milliseconds taken to handoff a set of segments.\tdataSource, taskId, taskType, groupId, tags\tDepends on coordinator cycle time. Note: If the JVM does not support CPU time measurement for the current thread, ingest/merge/cpu and ingest/persists/cpu will be 0. "},{"title":"Indexing service","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#indexing-service","content":"Metric\tDescription\tDimensions\tNormal Valuetask/run/time\tMilliseconds taken to run a task.\tdataSource, taskId, taskType, groupId, taskStatus, tags\tVaries task/pending/time\tMilliseconds taken for a task to wait for running.\tdataSource, taskId, taskType, groupId, tags\tVaries task/action/log/time\tMilliseconds taken to log a task action to the audit log.\tdataSource, taskId, taskType, groupId, taskActionType, tags\t< 1000 (subsecond) task/action/run/time\tMilliseconds taken to execute a task action.\tdataSource, taskId, taskType, groupId, taskActionType, tags\tVaries from subsecond to a few seconds, based on action type. task/action/success/count\tNumber of task actions that were executed successfully during the emission period. Currently only being emitted for batched segmentAllocate actions.\tdataSource, taskId, taskType, groupId, taskActionType, tags\tVaries task/action/failed/count\tNumber of task actions that failed during the emission period. Currently only being emitted for batched segmentAllocate actions.\tdataSource, taskId, taskType, groupId, taskActionType, tags\tVaries task/action/batch/queueTime\tMilliseconds spent by a batch of task actions in queue. 
Currently only being emitted for batched segmentAllocate actions.\tdataSource, taskActionType, interval\tVaries based on the batchAllocationWaitTime and number of batches in queue. task/action/batch/runTime\tMilliseconds taken to execute a batch of task actions. Currently only being emitted for batched segmentAllocate actions.\tdataSource, taskActionType, interval\tVaries from subsecond to a few seconds, based on action type and batch size. task/action/batch/size\tNumber of task actions in a batch that was executed during the emission period. Currently only being emitted for batched segmentAllocate actions.\tdataSource, taskActionType, interval\tVaries based on number of concurrent task actions. task/action/batch/attempts\tNumber of execution attempts for a single batch of task actions. Currently only being emitted for batched segmentAllocate actions.\tdataSource, taskActionType, interval\t1 if there are no failures or retries. task/segmentAvailability/wait/time\tThe amount of milliseconds a batch indexing task waited for newly created segments to become available for querying.\tdataSource, taskType, groupId, taskId, segmentAvailabilityConfirmed, tags\tVaries segment/added/bytes\tSize in bytes of new segments created.\tdataSource, taskId, taskType, groupId, interval, tags\tVaries segment/moved/bytes\tSize in bytes of segments moved/archived via the Move Task.\tdataSource, taskId, taskType, groupId, interval, tags\tVaries segment/nuked/bytes\tSize in bytes of segments deleted via the Kill Task.\tdataSource, taskId, taskType, groupId, interval, tags\tVaries task/success/count\tNumber of successful tasks per emission period. This metric is only available if the TaskCountStatsMonitor module is included.\tdataSource\tVaries task/failed/count\tNumber of failed tasks per emission period. This metric is only available if the TaskCountStatsMonitor module is included.\tdataSource\tVaries task/running/count\tNumber of current running tasks. This metric is only available if the TaskCountStatsMonitor module is included.\tdataSource\tVaries task/pending/count\tNumber of current pending tasks. This metric is only available if the TaskCountStatsMonitor module is included.\tdataSource\tVaries task/waiting/count\tNumber of current waiting tasks. This metric is only available if the TaskCountStatsMonitor module is included.\tdataSource\tVaries taskSlot/total/count\tNumber of total task slots per emission period. This metric is only available if the TaskSlotCountStatsMonitor module is included.\tcategory\tVaries taskSlot/idle/count\tNumber of idle task slots per emission period. This metric is only available if the TaskSlotCountStatsMonitor module is included.\tcategory\tVaries taskSlot/used/count\tNumber of busy task slots per emission period. This metric is only available if the TaskSlotCountStatsMonitor module is included.\tcategory\tVaries taskSlot/lazy/count\tNumber of total task slots in lazy marked MiddleManagers and Indexers per emission period. This metric is only available if the TaskSlotCountStatsMonitor module is included.\tcategory\tVaries taskSlot/blacklisted/count\tNumber of total task slots in blacklisted MiddleManagers and Indexers per emission period. This metric is only available if the TaskSlotCountStatsMonitor module is included.\tcategory\tVaries worker/task/failed/count\tNumber of failed tasks run on the reporting worker per emission period. 
This metric is only available if the WorkerTaskCountStatsMonitor module is included, and is only supported for middleManager nodes.\tcategory, workerVersion\tVaries worker/task/success/count\tNumber of successful tasks run on the reporting worker per emission period. This metric is only available if the WorkerTaskCountStatsMonitor module is included, and is only supported for middleManager nodes.\tcategory,workerVersion\tVaries worker/taskSlot/idle/count\tNumber of idle task slots on the reporting worker per emission period. This metric is only available if the WorkerTaskCountStatsMonitor module is included, and is only supported for middleManager nodes.\tcategory, workerVersion\tVaries worker/taskSlot/total/count\tNumber of total task slots on the reporting worker per emission period. This metric is only available if the WorkerTaskCountStatsMonitor module is included.\tcategory, workerVersion\tVaries worker/taskSlot/used/count\tNumber of busy task slots on the reporting worker per emission period. This metric is only available if the WorkerTaskCountStatsMonitor module is included.\tcategory, workerVersion\tVaries "},{"title":"Shuffle metrics (Native parallel task)","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#shuffle-metrics-native-parallel-task","content":"The shuffle metrics can be enabled by adding org.apache.druid.indexing.worker.shuffle.ShuffleMonitor in druid.monitoring.monitorsSee Enabling Metrics for more details. Metric\tDescription\tDimensions\tNormal Valueingest/shuffle/bytes\tNumber of bytes shuffled per emission period.\tsupervisorTaskId\tVaries ingest/shuffle/requests\tNumber of shuffle requests per emission period.\tsupervisorTaskId\tVaries "},{"title":"Coordination","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#coordination","content":"These metrics are for the Druid Coordinator and are reset each time the Coordinator runs the coordination logic. Metric\tDescription\tDimensions\tNormal Valuesegment/assigned/count\tNumber of segments assigned to be loaded in the cluster.\tdataSource, tier\tVaries segment/moved/count\tNumber of segments moved in the cluster.\tdataSource, tier\tVaries segment/dropped/count\tNumber of segments chosen to be dropped from the cluster due to being over-replicated.\tdataSource, tier\tVaries segment/deleted/count\tNumber of segments marked as unused due to drop rules.\tdataSource\tVaries segment/unneeded/count\tNumber of segments dropped due to being marked as unused.\tdataSource, tier\tVaries segment/assignSkipped/count\tNumber of segments that could not be assigned to any server for loading. This can occur due to replication throttling, no available disk space, or a full load queue.\tdataSource, tier, description\tVaries segment/moveSkipped/count\tNumber of segments that were chosen for balancing but could not be moved. 
This can occur when segments are already optimally placed.\tdataSource, tier, description\tVaries segment/dropSkipped/count\tNumber of segments that could not be dropped from any server.\tdataSource, tier, description\tVaries segment/loadQueue/size\tSize in bytes of segments to load.\tserver\tVaries segment/loadQueue/count\tNumber of segments to load.\tserver\tVaries segment/dropQueue/count\tNumber of segments to drop.\tserver\tVaries segment/loadQueue/assigned\tNumber of segments assigned for load or drop to the load queue of a server.\tdataSource, server\tVaries segment/loadQueue/success\tNumber of segment assignments that completed successfully.\tdataSource, server\tVaries segment/loadQueue/failed\tNumber of segment assignments that failed to complete.\tdataSource, server\t0 segment/loadQueue/cancelled\tNumber of segment assignments that were canceled before completion.\tdataSource, server\tVaries segment/size\tTotal size of used segments in a data source. Emitted only for data sources to which at least one used segment belongs.\tdataSource\tVaries segment/count\tNumber of used segments belonging to a data source. Emitted only for data sources to which at least one used segment belongs.\tdataSource\t< max segment/overShadowed/count\tNumber of segments marked as unused due to being overshadowed. Varies segment/unavailable/count\tNumber of segments (not including replicas) left to load until segments that should be loaded in the cluster are available for queries.\tdataSource\t0 segment/underReplicated/count\tNumber of segments (including replicas) left to load until segments that should be loaded in the cluster are available for queries.\ttier, dataSource\t0 tier/historical/count\tNumber of available historical nodes in each tier.\ttier\tVaries tier/replication/factor\tConfigured maximum replication factor in each tier.\ttier\tVaries tier/required/capacity\tTotal capacity in bytes required in each tier.\ttier\tVaries tier/total/capacity\tTotal capacity in bytes available in each tier.\ttier\tVaries compact/task/count\tNumber of tasks issued in the auto compaction run. Varies compactTask/maxSlot/count\tMax number of task slots that can be used for auto compaction tasks in the auto compaction run. Varies compactTask/availableSlot/count\tNumber of available task slots that can be used for auto compaction tasks in the auto compaction run. This is the max number of task slots minus any currently running compaction tasks. 
Varies segment/waitCompact/bytes\tTotal bytes of this datasource waiting to be compacted by the auto compaction (only consider intervals/segments that are eligible for auto compaction).\tdataSource\tVaries segment/waitCompact/count\tTotal number of segments of this datasource waiting to be compacted by the auto compaction (only consider intervals/segments that are eligible for auto compaction).\tdataSource\tVaries interval/waitCompact/count\tTotal number of intervals of this datasource waiting to be compacted by the auto compaction (only consider intervals/segments that are eligible for auto compaction).\tdataSource\tVaries segment/compacted/bytes\tTotal bytes of this datasource that are already compacted with the spec set in the auto compaction config.\tdataSource\tVaries segment/compacted/count\tTotal number of segments of this datasource that are already compacted with the spec set in the auto compaction config.\tdataSource\tVaries interval/compacted/count\tTotal number of intervals of this datasource that are already compacted with the spec set in the auto compaction config.\tdataSource\tVaries segment/skipCompact/bytes\tTotal bytes of this datasource that are skipped (not eligible for auto compaction) by the auto compaction.\tdataSource\tVaries segment/skipCompact/count\tTotal number of segments of this datasource that are skipped (not eligible for auto compaction) by the auto compaction.\tdataSource\tVaries interval/skipCompact/count\tTotal number of intervals of this datasource that are skipped (not eligible for auto compaction) by the auto compaction.\tdataSource\tVaries coordinator/time\tApproximate Coordinator duty runtime in milliseconds.\tduty\tVaries coordinator/global/time\tApproximate runtime of a full coordination cycle in milliseconds. The dutyGroup dimension indicates what type of coordination this run was. For example: Historical Management or Indexing.\tdutyGroup\tVaries metadata/kill/supervisor/count\tTotal number of terminated supervisors that were automatically deleted from metadata store per each Coordinator kill supervisor duty run. This metric can help adjust druid.coordinator.kill.supervisor.durationToRetain configuration based on whether more or less terminated supervisors need to be deleted per cycle. This metric is only emitted when druid.coordinator.kill.supervisor.on is set to true. Varies metadata/kill/audit/count\tTotal number of audit logs that were automatically deleted from metadata store per each Coordinator kill audit duty run. This metric can help adjust druid.coordinator.kill.audit.durationToRetain configuration based on whether more or less audit logs need to be deleted per cycle. This metric is emitted only when druid.coordinator.kill.audit.on is set to true. Varies metadata/kill/compaction/count\tTotal number of compaction configurations that were automatically deleted from metadata store per each Coordinator kill compaction configuration duty run. This metric is only emitted when druid.coordinator.kill.compaction.on is set to true. Varies metadata/kill/rule/count\tTotal number of rules that were automatically deleted from metadata store per each Coordinator kill rule duty run. This metric can help adjust druid.coordinator.kill.rule.durationToRetain configuration based on whether more or less rules need to be deleted per cycle. This metric is only emitted when druid.coordinator.kill.rule.on is set to true. 
Varies metadata/kill/datasource/count\tTotal number of datasource metadata that were automatically deleted from metadata store per each Coordinator kill datasource duty run. Note that datasource metadata only exists for datasource created from supervisor. This metric can help adjust druid.coordinator.kill.datasource.durationToRetain configuration based on whether more or less datasource metadata need to be deleted per cycle. This metric is only emitted when druid.coordinator.kill.datasource.on is set to true. Varies serverview/init/time\tTime taken to initialize the coordinator server view. Depends on the number of segments. serverview/sync/healthy\tSync status of the Coordinator with a segment-loading server such as a Historical or Peon. Emitted only when HTTP-based server view is enabled. You can use this metric in conjunction with serverview/sync/unstableTime to debug slow startup of the Coordinator.\tserver, tier\t1 for fully synced servers, 0 otherwise serverview/sync/unstableTime\tTime in milliseconds for which the Coordinator has been failing to sync with a segment-loading server. Emitted only when HTTP-based server view is enabled.\tserver, tier\tNot emitted for synced servers. "},{"title":"General Health","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#general-health","content":""},{"title":"Service Health","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#service-health","content":"Metric\tDescription\tDimensions\tNormal Valueservice/heartbeat\tMetric indicating the service is up. ServiceStatusMonitor must be enabled.\tleader on the Overlord and Coordinator.\t1 "},{"title":"Historical","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#historical-1","content":"Metric\tDescription\tDimensions\tNormal Valuesegment/max\tMaximum byte limit available for segments. Varies. segment/used\tBytes used for served segments.\tdataSource, tier, priority\t< max segment/usedPercent\tPercentage of space used by served segments.\tdataSource, tier, priority\t< 100% segment/count\tNumber of served segments.\tdataSource, tier, priority\tVaries segment/pendingDelete\tOn-disk size in bytes of segments that are waiting to be cleared out. Varies segment/rowCount/avg\tThe average number of rows per segment on a historical. SegmentStatsMonitor must be enabled.\tdataSource, tier, priority\tVaries. See segment optimization for guidance on optimal segment sizes. segment/rowCount/range/count\tThe number of segments in a bucket. SegmentStatsMonitor must be enabled.\tdataSource, tier, priority, range\tVaries "},{"title":"JVM","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#jvm","content":"These metrics are only available if the JVMMonitor module is included. 
Metric\tDescription\tDimensions\tNormal Valuejvm/pool/committed\tCommitted pool\tpoolKind, poolName\tClose to max pool jvm/pool/init\tInitial pool\tpoolKind, poolName\tVaries jvm/pool/max\tMax pool\tpoolKind, poolName\tVaries jvm/pool/used\tPool used\tpoolKind, poolName\t< max pool jvm/bufferpool/count\tBufferpool count\tbufferpoolName\tVaries jvm/bufferpool/used\tBufferpool used\tbufferpoolName\tClose to capacity jvm/bufferpool/capacity\tBufferpool capacity\tbufferpoolName\tVaries jvm/mem/init\tInitial memory\tmemKind\tVaries jvm/mem/max\tMax memory\tmemKind\tVaries jvm/mem/used\tUsed memory\tmemKind\t< max memory jvm/mem/committed\tCommitted memory\tmemKind\tClose to max memory jvm/gc/count\tGarbage collection count\tgcName (cms/g1/parallel/etc.), gcGen (old/young)\tVaries jvm/gc/cpu\tCount of CPU time in Nanoseconds spent on garbage collection. Note: jvm/gc/cpu represents the total time over multiple GC cycles; divide by jvm/gc/count to get the mean GC time per cycle.\tgcName, gcGen\tSum of jvm/gc/cpu should be within 10-30% of sum of jvm/cpu/total, depending on the GC algorithm used (reported by JvmCpuMonitor). "},{"title":"EventReceiverFirehose","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#eventreceiverfirehose","content":"The following metric is only available if the EventReceiverFirehoseMonitor module is included. Metric\tDescription\tDimensions\tNormal Valueingest/events/buffered\tNumber of events queued in the EventReceiverFirehose buffer.\tserviceName, dataSource, taskId, taskType, bufferCapacity\tEqual to current number of events in the buffer queue. ingest/bytes/received\tNumber of bytes received by the EventReceiverFirehose.\tserviceName, dataSource, taskId, taskType\tVaries "},{"title":"Sys","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#sys","content":"These metrics are only available if the SysMonitor module is included. Metric\tDescription\tDimensions\tNormal Valuesys/swap/free\tFree swap Varies sys/swap/max\tMax swap Varies sys/swap/pageIn\tPaged in swap Varies sys/swap/pageOut\tPaged out swap Varies sys/disk/write/count\tWrites to disk\tfsDevName, fsDirName, fsTypeName, fsSysTypeName, fsOptions\tVaries sys/disk/read/count\tReads from disk\tfsDevName, fsDirName, fsTypeName, fsSysTypeName, fsOptions\tVaries sys/disk/write/size\tBytes written to disk. One indicator of the amount of paging occurring for segments.\tfsDevName,fsDirName,fsTypeName, fsSysTypeName, fsOptions\tVaries sys/disk/read/size\tBytes read from disk. One indicator of the amount of paging occurring for segments.\tfsDevName,fsDirName, fsTypeName, fsSysTypeName, fsOptions\tVaries sys/net/write/size\tBytes written to the network\tnetName, netAddress, netHwaddr\tVaries sys/net/read/size\tBytes read from the network\tnetName, netAddress, netHwaddr\tVaries sys/fs/used\tFilesystem bytes used\tfsDevName, fsDirName, fsTypeName, fsSysTypeName, fsOptions\t< max sys/fs/max\tFilesystem bytes max\tfsDevName, fsDirName, fsTypeName, fsSysTypeName, fsOptions\tVaries sys/mem/used\tMemory used < max sys/mem/max\tMemory max Varies sys/storage/used\tDisk space used\tfsDirName\tVaries sys/cpu\tCPU used\tcpuName, cpuTime\tVaries "},{"title":"Cgroup","type":1,"pageTitle":"Metrics","url":"/docs/27.0.0/operations/metrics#cgroup","content":"These metrics are available on operating systems with the cgroup kernel feature. All the values are derived by reading from /sys/fs/cgroup. 
Metric\tDescription\tDimensions\tNormal Valuecgroup/cpu/shares\tRelative value of CPU time available to this process. Read from cpu.shares. Varies cgroup/cpu/cores_quota\tNumber of cores available to this process. Derived from cpu.cfs_quota_us/cpu.cfs_period_us. Varies. A value of -1 indicates there is no explicit quota set. cgroup/memory/*\tMemory stats for this process (e.g. cache, total_swap, etc.). Each stat produces a separate metric. Read from memory.stat. Varies cgroup/memory_numa/*/pages\tMemory stats, per NUMA node, for this process (e.g. total, unevictable, etc.). Each stat produces a separate metric. Read from memory.num_stat.\tnumaZone\tVaries cgroup/cpuset/cpu_count\tTotal number of CPUs available to the process. Derived from cpuset.cpus. Varies cgroup/cpuset/effective_cpu_count\tTotal number of active CPUs available to the process. Derived from cpuset.effective_cpus. Varies cgroup/cpuset/mems_count\tTotal number of memory nodes available to the process. Derived from cpuset.mems. Varies cgroup/cpuset/effective_mems_count\tTotal number of active memory nodes available to the process. Derived from cpuset.effective_mems. Varies "},{"title":"Security overview","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/security-overview","content":"","keywords":""},{"title":"Best practices","type":1,"pageTitle":"Security overview","url":"/docs/27.0.0/operations/security-overview#best-practices","content":"The following recommendations apply to the Druid cluster setup: Run Druid as an unprivileged Unix user. Do not run Druid as the root user. caution Druid administrators have the same OS permissions as the Unix user account running Druid. See Authentication and authorization model. If the Druid process is running under the OS root user account, then Druid administrators can read or write all files that the root account has access to, including sensitive files such as /etc/passwd. Enable authentication to the Druid cluster for production environments and other environments that can be accessed by untrusted networks.Enable authorization and do not expose the web console without authorization enabled. If authorization is not enabled, any user that has access to the web console has the same privileges as the operating system user that runs the web console process.Grant users the minimum permissions necessary to perform their functions. For instance, do not allow users who only need to query data to write to data sources or view state.Do not provide plain-text passwords for production systems in configuration specs. For example, sensitive properties should not be in the consumerProperties field of KafkaSupervisorIngestionSpec. See Environment variable dynamic config provider for more information.Disable JavaScript, as noted in the Security section of the JavaScript guide. The following recommendations apply to the network where Druid runs: Enable TLS to encrypt communication within the cluster.Use an API gateway to: Restrict access from untrusted networksCreate an allow list of specific APIs that your users need to accessImplement account lockout and throttling features. When possible, use firewall and other network layer filtering to only expose Druid services and ports specifically required for your use case. For example, only expose Broker ports to downstream applications that execute queries. You can limit access to a specific IP address or IP range to further tighten and enhance security. 
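Following the earlier recommendation to keep plain-text passwords out of ingestion specs, a hedged sketch of the environment variable dynamic config provider inside a Kafka supervisor's consumerProperties might look like the following (the bootstrap server and environment variable name are illustrative):
"consumerProperties": {
  "bootstrap.servers": "kafka-broker:9092",
  "druid.dynamic.config.provider": {
    "type": "environment",
    "variables": {
      "sasl.jaas.config": "KAFKA_JAAS_CONFIG"
    }
  }
}
With this in place, Druid resolves the sensitive value from the named environment variable at runtime instead of storing it in the spec.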
The following recommendation applies to Druid's authorization and authentication model: Only grant WRITE permissions to any DATASOURCE to trusted users. Druid's trust model assumes those users have the same privileges as the operating system user that runs the web console process. Additionally, users with WRITE permissions can make changes to datasources and they have access to both task and supervisor update (POST) APIs which may affect ingestion.Only grant STATE READ, STATE WRITE, CONFIG WRITE, and DATASOURCE WRITE permissions to highly-trusted users. These permissions allow users to access resources on behalf of the Druid server process regardless of the datasource.If your Druid client application allows less-trusted users to control the input source or firehose of an ingestion task, validate the URLs from the users. It is possible to point unchecked URLs to other locations and resources within your network or local file system. "},{"title":"Enable TLS","type":1,"pageTitle":"Security overview","url":"/docs/27.0.0/operations/security-overview#enable-tls","content":"Enabling TLS encrypts the traffic between external clients and the Druid cluster and traffic between services within the cluster. "},{"title":"Generating keys","type":1,"pageTitle":"Security overview","url":"/docs/27.0.0/operations/security-overview#generating-keys","content":"Before you enable TLS in Druid, generate the KeyStore and truststore. When one Druid process, e.g. Broker, contacts another Druid process , e.g. Historical, the first service is a client for the second service, considered the server. The client uses a trustStore that contains certificates trusted by the client. For example, the Broker. The server uses a KeyStore that contains private keys and certificate chain used to securely identify itself. The following example demonstrates how to use Java keytool to generate the KeyStore for the server and then create a trustStore to trust the key for the client: Generate the KeyStore with the Java keytool command: keytool -keystore keystore.jks -alias druid -genkey -keyalg RSA Export a public certificate: keytool -export -alias druid -keystore keystore.jks -rfc -file public.cert Create the trustStore: keytool -import -file public.cert -alias druid -keystore truststore.jks Druid uses Jetty as its embedded web server. See Configuring SSL/TLS KeyStores from the Jetty documentation. caution Do not use self-signed certificates for production environments. Instead, rely on your current public key infrastructure to generate and distribute trusted keys. "},{"title":"Update Druid TLS configurations","type":1,"pageTitle":"Security overview","url":"/docs/27.0.0/operations/security-overview#update-druid-tls-configurations","content":"Edit common.runtime.properties for all Druid services on all nodes. Add or update the following TLS options. Restart the cluster when you are finished. 
# Turn on TLS globally druid.enableTlsPort=true # Disable non-TLS communications druid.enablePlaintextPort=false # For Druid processes acting as a client # Load simple-client-sslcontext to enable client-side TLS # Add the following to the extension load list druid.extensions.loadList=[......., "simple-client-sslcontext"] # Set up client-side TLS druid.client.https.protocol=TLSv1.2 druid.client.https.trustStoreType=jks druid.client.https.trustStorePath=truststore.jks # replace with correct trustStore file druid.client.https.trustStorePassword=secret123 # replace with your own password # Set up server-side TLS druid.server.https.keyStoreType=jks druid.server.https.keyStorePath=my-keystore.jks # replace with correct keyStore file druid.server.https.keyStorePassword=secret123 # replace with your own password druid.server.https.certAlias=druid For more information, see TLS support and Simple SSLContext Provider Module. "},{"title":"Authentication and authorization","type":1,"pageTitle":"Security overview","url":"/docs/27.0.0/operations/security-overview#authentication-and-authorization","content":"You can configure authentication and authorization to control access to the Druid APIs. Then configure users, roles, and permissions, as described in the following sections. Make the configuration changes in the common.runtime.properties file on all Druid servers in the cluster. Within Druid's operating context, authenticators control the way user identities are verified. Authorizers employ user roles to relate authenticated users to the datasources they are permitted to access. You can set the finest-grained permissions on a per-datasource basis. The following graphic depicts the course of a request through the authentication process: "},{"title":"Enable an authenticator","type":1,"pageTitle":"Security overview","url":"/docs/27.0.0/operations/security-overview#enable-an-authenticator","content":"To authenticate requests in Druid, you configure an Authenticator. Authenticator extensions exist for HTTP basic authentication, LDAP, and Kerberos. The following takes you through sample configuration steps for enabling basic auth: Add the druid-basic-security extension to druid.extensions.loadList in common.runtime.properties. For the quickstart installation, for example, the properties file is at conf/druid/cluster/_common: druid.extensions.loadList=["druid-basic-security", "druid-histogram", "druid-datasketches", "druid-kafka-indexing-service"] Configure the basic Authenticator, Authorizer, and Escalator settings in the same common.runtime.properties file. The Escalator defines how Druid processes authenticate with one another. An example configuration: # Druid basic security druid.auth.authenticatorChain=["MyBasicMetadataAuthenticator"] druid.auth.authenticator.MyBasicMetadataAuthenticator.type=basic # Default password for 'admin' user, should be changed for production. druid.auth.authenticator.MyBasicMetadataAuthenticator.initialAdminPassword=password1 # Default password for internal 'druid_system' user, should be changed for production. druid.auth.authenticator.MyBasicMetadataAuthenticator.initialInternalClientPassword=password2 # Uses the metadata store for storing users. # You can use the authentication API to create new users and grant permissions druid.auth.authenticator.MyBasicMetadataAuthenticator.credentialsValidator.type=metadata # If true and if the request credential doesn't exist in this credentials store, # the request will proceed to the next Authenticator in the chain. 
druid.auth.authenticator.MyBasicMetadataAuthenticator.skipOnFailure=false druid.auth.authenticator.MyBasicMetadataAuthenticator.authorizerName=MyBasicMetadataAuthorizer # Escalator druid.escalator.type=basic druid.escalator.internalClientUsername=druid_system druid.escalator.internalClientPassword=password2 druid.escalator.authorizerName=MyBasicMetadataAuthorizer druid.auth.authorizers=["MyBasicMetadataAuthorizer"] druid.auth.authorizer.MyBasicMetadataAuthorizer.type=basic Restart the cluster. See the following topics for more information: Authentication and Authorization for more information about the Authenticator, Escalator, and Authorizer.Basic Security for more information about the extension used in the examples above.Kerberos for Kerberos authentication.User authentication and authorization for details about permissions.SQL permissions for permissions on SQL system tables.The druidapi Python library, provided as part of the Druid tutorials, to set up users and roles for learning how security works. "},{"title":"Enable authorizers","type":1,"pageTitle":"Security overview","url":"/docs/27.0.0/operations/security-overview#enable-authorizers","content":"After enabling the basic auth extension, you can add users, roles, and permissions via the Druid Coordinator user endpoint. Note that you cannot assign permissions directly to individual users. They must be assigned through roles. The following diagram depicts the authorization model, and the relationship between users, roles, permissions, and resources. The following steps walk through a sample setup procedure: info The default Coordinator API port is 8081 for non-TLS connections and 8281 for secured connections. Create a user by issuing a POST request to druid-ext/basic-security/authentication/db/MyBasicMetadataAuthenticator/users/<USERNAME>. Replace <USERNAME> with the new username you are trying to create. For example: curl -u admin:password1 -XPOST https://my-coordinator-ip:8281/druid-ext/basic-security/authentication/db/MyBasicMetadataAuthenticator/users/myname info If you have TLS enabled, be sure to adjust the curl command accordingly. For example, if your Druid servers use self-signed certificates, you may choose to include the insecure curl option to forgo certificate checking for the curl command. Add a credential for the user by issuing a POST request to druid-ext/basic-security/authentication/db/MyBasicMetadataAuthenticator/users/<USERNAME>/credentials. For example: curl -u admin:password1 -H'Content-Type: application/json' -XPOST https://my-coordinator-ip:8281/druid-ext/basic-security/authentication/db/MyBasicMetadataAuthenticator/users/myname/credentials --data-raw '{"password": "my_password"}' For each authenticator user you create, create a corresponding authorizer user by issuing a POST request to druid-ext/basic-security/authorization/db/MyBasicMetadataAuthorizer/users/<USERNAME>. For example: curl -u admin:password1 -XPOST https://my-coordinator-ip:8281/druid-ext/basic-security/authorization/db/MyBasicMetadataAuthorizer/users/myname Create authorizer roles to control permissions by issuing a POST request to druid-ext/basic-security/authorization/db/MyBasicMetadataAuthorizer/roles/<ROLENAME>. For example: curl -u admin:password1 -XPOST https://my-coordinator-ip:8281/druid-ext/basic-security/authorization/db/MyBasicMetadataAuthorizer/roles/myrole Assign roles to users by issuing a POST request to druid-ext/basic-security/authorization/db/MyBasicMetadataAuthorizer/users/<USERNAME>/roles/<ROLENAME>. 
For example: curl -u admin:password1 -XPOST https://my-coordinator-ip:8281/druid-ext/basic-security/authorization/db/MyBasicMetadataAuthorizer/users/myname/roles/myrole | jq Finally, attach permissions to the roles to control how they can interact with Druid at druid-ext/basic-security/authorization/db/MyBasicMetadataAuthorizer/roles/<ROLENAME>/permissions. For example: curl -u admin:password1 -H'Content-Type: application/json' -XPOST --data-binary @perms.json https://my-coordinator-ip:8281/druid-ext/basic-security/authorization/db/MyBasicMetadataAuthorizer/roles/myrole/permissions The payload of perms.json should be in the following form: [ { "resource": { "type": "DATASOURCE", "name": "<PATTERN>" }, "action": "READ" }, { "resource": { "type": "STATE", "name": "STATE" }, "action": "READ" } ] info Note: Druid treats the resource name as a regular expression (regex). You can use a specific datasource name or regex to grant permissions for multiple datasources at a time. "},{"title":"Configuring an LDAP authenticator","type":1,"pageTitle":"Security overview","url":"/docs/27.0.0/operations/security-overview#configuring-an-ldap-authenticator","content":"As an alternative to using the basic metadata authenticator, you can use LDAP to authenticate users. See Configure LDAP authentication for information on configuring Druid for LDAP and LDAPS. "},{"title":"Druid security trust model","type":1,"pageTitle":"Security overview","url":"/docs/27.0.0/operations/security-overview#druid-security-trust-model","content":"Within Druid's trust model, users can have different authorization levels: Users with resource write permissions are allowed to do anything that the Druid process can do.Authenticated read-only users can execute queries against resources to which they have permissions.An authenticated user without any permissions is allowed to execute queries that don't require access to a resource. Additionally, Druid operates according to the following principles: From the innermost layer: Druid processes have the same access to the local files granted to the specified system user running the process.The Druid ingestion system can create new processes to execute tasks. Those tasks inherit the user of their parent process. This means that any user authorized to submit an ingestion task can use the ingestion task permissions to read or write any local files or external resources that the Druid process has access to. info Note: Only grant DATASOURCE WRITE to trusted users because they can act as the Druid process. Within the cluster: Druid assumes it operates on an isolated, protected network where no reachable IP within the network is under adversary control. When you implement Druid, take care to set up firewalls and other security measures to secure both inbound and outbound connections. Druid assumes network traffic within the cluster is encrypted, including API calls and data transfers. The default encryption implementation uses TLS.Druid assumes auxiliary services such as the metadata store and ZooKeeper nodes are not under adversary control. Cluster to deep storage: Druid does not make assumptions about the security of deep storage. It follows the system's native security policies to authenticate and authorize with deep storage.Druid does not encrypt files for deep storage. Instead, it relies on the storage system's native encryption capabilities to ensure compatibility with encryption schemes across all storage types. 
Cluster to client: Druid authenticates with the client based on the configured authenticator.Druid only performs actions when an authorizer grants permission. The default configuration is allowAll authorizer. "},{"title":"Reporting security issues","type":1,"pageTitle":"Security overview","url":"/docs/27.0.0/operations/security-overview#reporting-security-issues","content":"The Apache Druid team takes security very seriously. If you find a potential security issue in Druid, such as a way to bypass the security mechanisms described earlier, please report this problem to security@apache.org. This is a private mailing list. Please send one plain text email for each vulnerability you are reporting. "},{"title":"Vulnerability handling","type":1,"pageTitle":"Security overview","url":"/docs/27.0.0/operations/security-overview#vulnerability-handling","content":"The following list summarizes the vulnerability handling process: The reporter reports the vulnerability privately to security@apache.orgThe reporter receives a response that the Druid team has received the report and will investigate the issue.The Druid project security team works privately with the reporter to resolve the vulnerability.The Druid team delivers the fix by creating a new release of the package that the vulnerability affects.The Druid team publicly announces the vulnerability and describes how to apply the fix. Committers should read a more detailed description of the process. Reporters of security vulnerabilities may also find it useful. "},{"title":"User authentication and authorization","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/security-user-auth","content":"","keywords":""},{"title":"Authentication and authorization model","type":1,"pageTitle":"User authentication and authorization","url":"/docs/27.0.0/operations/security-user-auth#authentication-and-authorization-model","content":"At the center of the Druid user authentication and authorization model are resources and actions. A resource is something that authenticated users are trying to access or modify. An action is something that users are trying to do. "},{"title":"Resource types","type":1,"pageTitle":"User authentication and authorization","url":"/docs/27.0.0/operations/security-user-auth#resource-types","content":"Druid uses the following resource types: DATASOURCE – Each Druid table (i.e., tables in the druid schema in SQL) is a resource.CONFIG – Configuration resources exposed by the cluster components. EXTERNAL – External data read through the EXTERN function in SQL.STATE – Cluster-wide state resources.SYSTEM_TABLE – when the Broker property druid.sql.planner.authorizeSystemTablesDirectly is true, then Druid uses this resource type to authorize the system tables in the sys schema in SQL. For specific resources associated with the resource types, see Defining permissions and the corresponding endpoint descriptions in API reference. "},{"title":"Actions","type":1,"pageTitle":"User authentication and authorization","url":"/docs/27.0.0/operations/security-user-auth#actions","content":"Users perform one of the following actions on resources: READ – Used for read-only operations.WRITE – Used for operations that are not read-only. WRITE permission on a resource does not include READ permission. If a user requires both READ and WRITE permissions on a resource, you must grant them both explicitly. 
For instance, a user with only DATASOURCE READ permission might have access to an API or a system schema record that a user with DATASOURCE WRITE permission would not have access to. "},{"title":"User types","type":1,"pageTitle":"User authentication and authorization","url":"/docs/27.0.0/operations/security-user-auth#user-types","content":"In practice, most deployments will only need to define two classes of users: Administrators, who have WRITE action permissions on all resource types. These users will add datasources and administer the system. Data users, who only need READ access to DATASOURCE. These users should access Query APIs only through an API gateway. Other APIs and permissions include functionality that should be limited to server admins. It is important to note that WRITE access to DATASOURCE grants a user broad access. For instance, such users will have access to the Druid file system, S3 buckets, and credentials, among other things. As such, the ability to add and manage datasources should be allocated selectively to administrators. "},{"title":"Default user accounts","type":1,"pageTitle":"User authentication and authorization","url":"/docs/27.0.0/operations/security-user-auth#default-user-accounts","content":""},{"title":"Authenticator","type":1,"pageTitle":"User authentication and authorization","url":"/docs/27.0.0/operations/security-user-auth#authenticator","content":"If druid.auth.authenticator.<authenticator-name>.initialAdminPassword is set, a default admin user named "admin" will be created, with the specified initial password. If this configuration is omitted, the "admin" user will not be created. If druid.auth.authenticator.<authenticator-name>.initialInternalClientPassword is set, a default internal system user named "druid_system" will be created, with the specified initial password. If this configuration is omitted, the "druid_system" user will not be created. "},{"title":"Authorizer","type":1,"pageTitle":"User authentication and authorization","url":"/docs/27.0.0/operations/security-user-auth#authorizer","content":"Each Authorizer will always have a default "admin" and "druid_system" user with full privileges. "},{"title":"Defining permissions","type":1,"pageTitle":"User authentication and authorization","url":"/docs/27.0.0/operations/security-user-auth#defining-permissions","content":"You define permissions that you then grant to user groups. Permissions are defined by resource type, action, and resource name. This section describes the resource names available for each resource type. "},{"title":"DATASOURCE","type":1,"pageTitle":"User authentication and authorization","url":"/docs/27.0.0/operations/security-user-auth#datasource","content":"Resource names for this type are datasource names. Specifying a datasource permission allows the administrator to grant users access to specific datasources. "},{"title":"CONFIG","type":1,"pageTitle":"User authentication and authorization","url":"/docs/27.0.0/operations/security-user-auth#config","content":"There are two possible resource names for the "CONFIG" resource type, "CONFIG" and "security". Granting a user access to CONFIG resources allows them to access the following endpoints. 
"CONFIG" resource name covers the following endpoints: Endpoint\tProcess Type/druid/coordinator/v1/config\tcoordinator /druid/indexer/v1/worker\toverlord /druid/indexer/v1/worker/history\toverlord /druid/worker/v1/disable\tmiddleManager /druid/worker/v1/enable\tmiddleManager "security" resource name covers the following endpoint: Endpoint\tProcess Type/druid-ext/basic-security/authentication\tcoordinator /druid-ext/basic-security/authorization\tcoordinator "},{"title":"EXTERNAL","type":1,"pageTitle":"User authentication and authorization","url":"/docs/27.0.0/operations/security-user-auth#external","content":"The EXTERNAL resource type only accepts the resource name "EXTERNAL". Granting a user access to EXTERNAL resources allows them to run queries that include the EXTERN function in SQL to read external data. "},{"title":"STATE","type":1,"pageTitle":"User authentication and authorization","url":"/docs/27.0.0/operations/security-user-auth#state","content":"There is only one possible resource name for the "STATE" config resource type, "STATE". Granting a user access to STATE resources allows them to access the following endpoints. "STATE" resource name covers the following endpoints: Endpoint\tProcess Type/druid/coordinator/v1\tcoordinator /druid/coordinator/v1/rules\tcoordinator /druid/coordinator/v1/rules/history\tcoordinator /druid/coordinator/v1/servers\tcoordinator /druid/coordinator/v1/tiers\tcoordinator /druid/broker/v1\tbroker /druid/v2/candidates\tbroker /druid/indexer/v1/leader\toverlord /druid/indexer/v1/isLeader\toverlord /druid/indexer/v1/action\toverlord /druid/indexer/v1/workers\toverlord /druid/indexer/v1/scaling\toverlord /druid/worker/v1/enabled\tmiddleManager /druid/worker/v1/tasks\tmiddleManager /druid/worker/v1/task/{taskid}/shutdown\tmiddleManager /druid/worker/v1/task/{taskid}/log\tmiddleManager /druid/historical/v1\thistorical /druid-internal/v1/segments/\thistorical /druid-internal/v1/segments/\tpeon /druid-internal/v1/segments/\trealtime /status\tall process types "},{"title":"SYSTEM_TABLE","type":1,"pageTitle":"User authentication and authorization","url":"/docs/27.0.0/operations/security-user-auth#system_table","content":"Resource names for this type are system schema table names in the sys schema in SQL, for example sys.segments and sys.server_segments. Druid only enforces authorization for SYSTEM_TABLE resources when the Broker property druid.sql.planner.authorizeSystemTablesDirectly is true. "},{"title":"HTTP methods","type":1,"pageTitle":"User authentication and authorization","url":"/docs/27.0.0/operations/security-user-auth#http-methods","content":"For information on what HTTP methods are supported on a particular request endpoint, refer to API reference. GET requests require READ permissions, while POST and DELETE requests require WRITE permissions. "},{"title":"SQL permissions","type":1,"pageTitle":"User authentication and authorization","url":"/docs/27.0.0/operations/security-user-auth#sql-permissions","content":"Queries on Druid datasources require DATASOURCE READ permissions for the specified datasource. Queries to access external data through the EXTERN function require EXTERNAL READ permissions. Queries on INFORMATION_SCHEMA tables return information about datasources that the caller has DATASOURCE READ access to. Other datasources are omitted. 
Queries on the system schema tables require the following permissions: segments: Druid filters segments according to DATASOURCE READ permissions.servers: The user requires STATE READ permissions.server_segments: The user requires STATE READ permissions. Druid filters segments according to DATASOURCE READ permissions.tasks: Druid filters tasks according to DATASOURCE READ permissions.supervisors: Druid filters supervisors according to DATASOURCE READ permissions. When the Broker property druid.sql.planner.authorizeSystemTablesDirectly is true, users also require SYSTEM_TABLE authorization on a system schema table to query it. "},{"title":"Configuration propagation","type":1,"pageTitle":"User authentication and authorization","url":"/docs/27.0.0/operations/security-user-auth#configuration-propagation","content":"To prevent excessive load on the Coordinator, the Authenticator and Authorizer user/role Druid metadata store state is cached on each Druid process. Each process will periodically poll the Coordinator for the latest Druid metadata store state, controlled by the druid.auth.basic.common.pollingPeriod and druid.auth.basic.common.maxRandomDelay properties. When a configuration update occurs, the Coordinator can optionally notify each process with the updated Druid metadata store state. This behavior is controlled by the enableCacheNotifications and cacheNotificationTimeout properties on Authenticators and Authorizers. Note that because of the caching, changes made to the user/role Druid metadata store may not be immediately reflected at each Druid process. "},{"title":"Segment size optimization","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/segment-optimization","content":"","keywords":""},{"title":"Learn more","type":1,"pageTitle":"Segment size optimization","url":"/docs/27.0.0/operations/segment-optimization#learn-more","content":"For an overview of compaction and how to submit a manual compaction task, see Compaction.To learn how to enable and configure automatic compaction, see Automatic compaction. "},{"title":"Single server deployment","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/single-server","content":"","keywords":""},{"title":"Single server reference configurations (deprecated)","type":1,"pageTitle":"Single server deployment","url":"/docs/27.0.0/operations/single-server#single-server-reference-configurations-deprecated","content":"Druid includes a set of reference configurations and launch scripts for single-machine deployments. These start scripts are deprecated in favor of the bin/start-druid script documented above. These configuration bundles are located in conf/druid/single-server/. Configuration\tSizing\tLaunch command\tConfiguration directorynano-quickstart\t1 CPU, 4GiB RAM\tbin/start-nano-quickstart\tconf/druid/single-server/nano-quickstart micro-quickstart\t4 CPU, 16GiB RAM\tbin/start-micro-quickstart\tconf/druid/single-server/micro-quickstart small\t8 CPU, 64GiB RAM (~i3.2xlarge)\tbin/start-small\tconf/druid/single-server/small medium\t16 CPU, 128GiB RAM (~i3.4xlarge)\tbin/start-medium\tconf/druid/single-server/medium large\t32 CPU, 256GiB RAM (~i3.8xlarge)\tbin/start-large\tconf/druid/single-server/large xlarge\t64 CPU, 512GiB RAM (~i3.16xlarge)\tbin/start-xlarge\tconf/druid/single-server/xlarge The micro-quickstart is sized for small machines like laptops and is intended for quick evaluation use-cases. The nano-quickstart is an even smaller configuration, targeting a machine with 1 CPU and 4GiB memory. 
It is meant for limited evaluations in resource constrained environments, such as small Docker containers. The other configurations are intended for general use single-machine deployments. They are sized for hardware roughly based on Amazon's i3 series of EC2 instances. The startup scripts for these example configurations run a single ZK instance along with the Druid services. You can choose to deploy ZK separately as well. "},{"title":"TLS support","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/tls-support","content":"","keywords":""},{"title":"General configuration","type":1,"pageTitle":"TLS support","url":"/docs/27.0.0/operations/tls-support#general-configuration","content":"Property\tDescription\tDefaultdruid.enablePlaintextPort\tEnable/Disable HTTP connector.\ttrue druid.enableTlsPort\tEnable/Disable HTTPS connector.\tfalse Although not recommended, the HTTP and HTTPS connectors can both be enabled at a time. The respective ports are configurable using druid.plaintextPortand druid.tlsPort properties on each process. Please see Configuration section of individual processes to check the valid and default values for these ports. "},{"title":"Jetty server configuration","type":1,"pageTitle":"TLS support","url":"/docs/27.0.0/operations/tls-support#jetty-server-configuration","content":"Apache Druid uses Jetty as its embedded web server. To get familiar with TLS/SSL, along with related concepts like keys and certificates, read Configuring Secure Protocols in the Jetty documentation. To get more in-depth knowledge of TLS/SSL support in Java in general, refer to the Java Secure Socket Extension (JSSE) Reference Guide. The Class SslContextFactoryreference doc can help in understanding TLS/SSL configurations listed below. Finally, Java Cryptography Architecture Standard Algorithm Name Documentation for JDK 8 lists all possible values for the configs below, among others provided by Java implementation. Property\tDescription\tDefault\tRequireddruid.server.https.keyStorePath\tThe file path or URL of the TLS/SSL Key store.\tnone\tyes druid.server.https.keyStoreType\tThe type of the key store.\tnone\tyes druid.server.https.certAlias\tAlias of TLS/SSL certificate for the connector.\tnone\tyes druid.server.https.keyStorePassword\tThe Password Provider or String password for the Key Store.\tnone\tyes druid.server.https.reloadSslContext\tShould Druid server detect Key Store file change and reload.\tfalse\tno druid.server.https.reloadSslContextSeconds\tHow frequently should Druid server scan for Key Store file change.\t60\tyes The following table contains configuration options related to client certificate authentication. Property\tDescription\tDefault\tRequireddruid.server.https.requireClientCertificate\tIf set to true, clients must identify themselves by providing a TLS certificate, without which connections will fail.\tfalse\tno druid.server.https.requestClientCertificate\tIf set to true, clients may optionally identify themselves by providing a TLS certificate. Connections will not fail if TLS certificate is not provided. This property is ignored if requireClientCertificate is set to true. If requireClientCertificate and requestClientCertificate are false, the rest of the options in this table are ignored.\tfalse\tno druid.server.https.trustStoreType\tThe type of the trust store containing certificates used to validate client certificates. 
Not needed if requireClientCertificate and requestClientCertificate are false.\tjava.security.KeyStore.getDefaultType()\tno druid.server.https.trustStorePath\tThe file path or URL of the trust store containing certificates used to validate client certificates. Not needed if requireClientCertificate and requestClientCertificate are false.\tnone\tyes, only if requireClientCertificate is true druid.server.https.trustStoreAlgorithm\tAlgorithm to be used by TrustManager to validate client certificate chains. Not needed if requireClientCertificate and requestClientCertificate are false.\tjavax.net.ssl.TrustManagerFactory.getDefaultAlgorithm()\tno druid.server.https.trustStorePassword\tThe password provider or String password for the Trust Store. Not needed if requireClientCertificate and requestClientCertificate are false.\tnone\tno druid.server.https.validateHostnames\tIf set to true, check that the client's hostname matches the CN/subjectAltNames in the client certificate. Not used if requireClientCertificate and requestClientCertificate are false.\ttrue\tno druid.server.https.crlPath\tSpecifies a path to a file containing static Certificate Revocation Lists, used to check if a client certificate has been revoked. Not used if requireClientCertificate and requestClientCertificate are false.\tnull\tno The following table contains non-mandatory advanced configuration options; use them with caution. Property\tDescription\tDefault\tRequireddruid.server.https.keyManagerFactoryAlgorithm\tAlgorithm to use for creating KeyManager, more details here.\tjavax.net.ssl.KeyManagerFactory.getDefaultAlgorithm()\tno druid.server.https.keyManagerPassword\tThe Password Provider or String password for the Key Manager.\tnone\tno druid.server.https.includeCipherSuites\tList of cipher suite names to include. You can either use the exact cipher suite name or a regular expression.\tJetty's default include cipher list\tno druid.server.https.excludeCipherSuites\tList of cipher suite names to exclude. You can either use the exact cipher suite name or a regular expression.\tJetty's default exclude cipher list\tno druid.server.https.includeProtocols\tList of exact protocol names to include.\tJetty's default include protocol list\tno druid.server.https.excludeProtocols\tList of exact protocol names to exclude.\tJetty's default exclude protocol list\tno "},{"title":"Internal communication over TLS","type":1,"pageTitle":"TLS support","url":"/docs/27.0.0/operations/tls-support#internal-communication-over-tls","content":"Whenever possible, Druid processes use HTTPS to talk to each other. To enable this communication, Druid's HttpClient must be configured with a proper SSLContext that can validate the server certificates; otherwise communication will fail. Since there are various ways to configure an SSLContext, by default Druid looks for an SSLContext Guice binding while creating the HttpClient. This binding can be provided by writing a Druid extension which supplies an instance of SSLContext. Druid ships with a simple extension (simple-client-sslcontext) that should be sufficient for most simple cases; see the documentation on loading extensions for how to include it. If this extension does not satisfy your requirements, follow the extension implementation guide to create your own extension. When the Druid Coordinator/Overlord has both HTTP and HTTPS enabled and a client sends a request to a non-leader process, the client is always redirected to the HTTPS endpoint on the leader process. 
So, Clients should be first upgraded to be able to handle redirect to HTTPS. Then Druid Overlord/Coordinator should be upgraded and configured to run both HTTP and HTTPS ports. Then Client configuration should be changed to refer to Druid Coordinator/Overlord via the HTTPS endpoint and then HTTP port on Druid Coordinator/Overlord should be disabled. "},{"title":"Custom certificate checks","type":1,"pageTitle":"TLS support","url":"/docs/27.0.0/operations/tls-support#custom-certificate-checks","content":"Druid supports custom certificate check extensions. Please refer to the org.apache.druid.server.security.TLSCertificateChecker interface for details on the methods to be implemented. To use a custom TLS certificate checker, specify the following property: Property\tDescription\tDefault\tRequireddruid.tls.certificateChecker\tType name of custom TLS certificate checker, provided by extensions. Please refer to extension documentation for the type name that should be specified.\t"default"\tno The default checker delegates to the standard trust manager and performs no additional actions or checks. If using a non-default certificate checker, please refer to the extension documentation for additional configuration properties needed. "},{"title":"Content for build.sbt","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/use_sbt_to_build_fat_jar","content":"Content for build.sbt libraryDependencies ++= Seq( "com.amazonaws" % "aws-java-sdk" % "1.9.23" exclude("common-logging", "common-logging"), "org.joda" % "joda-convert" % "1.7", "joda-time" % "joda-time" % "2.7", "org.apache.druid" % "druid" % "0.8.1" excludeAll ( ExclusionRule("org.ow2.asm"), ExclusionRule("com.fasterxml.jackson.core"), ExclusionRule("com.fasterxml.jackson.datatype"), ExclusionRule("com.fasterxml.jackson.dataformat"), ExclusionRule("com.fasterxml.jackson.jaxrs"), ExclusionRule("com.fasterxml.jackson.module") ), "org.apache.druid" % "druid-services" % "0.8.1" excludeAll ( ExclusionRule("org.ow2.asm"), ExclusionRule("com.fasterxml.jackson.core"), ExclusionRule("com.fasterxml.jackson.datatype"), ExclusionRule("com.fasterxml.jackson.dataformat"), ExclusionRule("com.fasterxml.jackson.jaxrs"), ExclusionRule("com.fasterxml.jackson.module") ), "org.apache.druid" % "druid-indexing-service" % "0.8.1" excludeAll ( ExclusionRule("org.ow2.asm"), ExclusionRule("com.fasterxml.jackson.core"), ExclusionRule("com.fasterxml.jackson.datatype"), ExclusionRule("com.fasterxml.jackson.dataformat"), ExclusionRule("com.fasterxml.jackson.jaxrs"), ExclusionRule("com.fasterxml.jackson.module") ), "org.apache.druid" % "druid-indexing-hadoop" % "0.8.1" excludeAll ( ExclusionRule("org.ow2.asm"), ExclusionRule("com.fasterxml.jackson.core"), ExclusionRule("com.fasterxml.jackson.datatype"), ExclusionRule("com.fasterxml.jackson.dataformat"), ExclusionRule("com.fasterxml.jackson.jaxrs"), ExclusionRule("com.fasterxml.jackson.module") ), "org.apache.druid.extensions" % "mysql-metadata-storage" % "0.8.1" excludeAll ( ExclusionRule("org.ow2.asm"), ExclusionRule("com.fasterxml.jackson.core"), ExclusionRule("com.fasterxml.jackson.datatype"), ExclusionRule("com.fasterxml.jackson.dataformat"), ExclusionRule("com.fasterxml.jackson.jaxrs"), ExclusionRule("com.fasterxml.jackson.module") ), "org.apache.druid.extensions" % "druid-s3-extensions" % "0.8.1" excludeAll ( ExclusionRule("org.ow2.asm"), ExclusionRule("com.fasterxml.jackson.core"), ExclusionRule("com.fasterxml.jackson.datatype"), ExclusionRule("com.fasterxml.jackson.dataformat"), 
ExclusionRule("com.fasterxml.jackson.jaxrs"), ExclusionRule("com.fasterxml.jackson.module") ), "org.apache.druid.extensions" % "druid-histogram" % "0.8.1" excludeAll ( ExclusionRule("org.ow2.asm"), ExclusionRule("com.fasterxml.jackson.core"), ExclusionRule("com.fasterxml.jackson.datatype"), ExclusionRule("com.fasterxml.jackson.dataformat"), ExclusionRule("com.fasterxml.jackson.jaxrs"), ExclusionRule("com.fasterxml.jackson.module") ), "org.apache.druid.extensions" % "druid-hdfs-storage" % "0.8.1" excludeAll ( ExclusionRule("org.ow2.asm"), ExclusionRule("com.fasterxml.jackson.core"), ExclusionRule("com.fasterxml.jackson.datatype"), ExclusionRule("com.fasterxml.jackson.dataformat"), ExclusionRule("com.fasterxml.jackson.jaxrs"), ExclusionRule("com.fasterxml.jackson.module") ), "com.fasterxml.jackson.core" % "jackson-annotations" % "2.3.0", "com.fasterxml.jackson.core" % "jackson-core" % "2.3.0", "com.fasterxml.jackson.core" % "jackson-databind" % "2.3.0", "com.fasterxml.jackson.datatype" % "jackson-datatype-guava" % "2.3.0", "com.fasterxml.jackson.datatype" % "jackson-datatype-joda" % "2.3.0", "com.fasterxml.jackson.jaxrs" % "jackson-jaxrs-base" % "2.3.0", "com.fasterxml.jackson.jaxrs" % "jackson-jaxrs-json-provider" % "2.3.0", "com.fasterxml.jackson.jaxrs" % "jackson-jaxrs-smile-provider" % "2.3.0", "com.fasterxml.jackson.module" % "jackson-module-jaxb-annotations" % "2.3.0", "com.sun.jersey" % "jersey-servlet" % "1.17.1", "mysql" % "mysql-connector-java" % "5.1.34", "org.scalatest" %% "scalatest" % "2.2.3" % "test", "org.mockito" % "mockito-core" % "1.10.19" % "test" ) assemblyMergeStrategy in assembly := { case path if path contains "pom." => MergeStrategy.first case path if path contains "javax.inject.Named" => MergeStrategy.first case path if path contains "mime.types" => MergeStrategy.first case path if path contains "org/apache/commons/logging/impl/SimpleLog.class" => MergeStrategy.first case path if path contains "org/apache/commons/logging/impl/SimpleLog$1.class" => MergeStrategy.first case path if path contains "org/apache/commons/logging/impl/NoOpLog.class" => MergeStrategy.first case path if path contains "org/apache/commons/logging/LogFactory.class" => MergeStrategy.first case path if path contains "org/apache/commons/logging/LogConfigurationException.class" => MergeStrategy.first case path if path contains "org/apache/commons/logging/Log.class" => MergeStrategy.first case path if path contains "META-INF/jersey-module-version" => MergeStrategy.first case path if path contains ".properties" => MergeStrategy.first case path if path contains ".class" => MergeStrategy.first case x => val oldStrategy = (assemblyMergeStrategy in assembly).value oldStrategy(x) } ","keywords":""},{"title":"Web console","type":0,"sectionRef":"#","url":"/docs/27.0.0/operations/web-console","content":"","keywords":""},{"title":"Home","type":1,"pageTitle":"Web console","url":"/docs/27.0.0/operations/web-console#home","content":"The Home view provides a high-level overview of the cluster. Each card is clickable and links to the appropriate view. The Home view displays the following cards: Status. Click this card for information on the Druid version and any extensions loaded on the cluster.DatasourcesSegmentsSupervisorsTasksServicesLookups You can access the data loader and lookups view from the top-level navigation of the Home view. 
"},{"title":"Query","type":1,"pageTitle":"Web console","url":"/docs/27.0.0/operations/web-console#query","content":"SQL-based ingestion and the multi-stage query task engine use the Query view, which provides you with a UI to edit and use SQL queries. You should see this UI automatically in Druid 24.0 and later since the multi-stage query extension is loaded by default. The following screenshot shows a populated enhanced Query view along with a description of its parts: The multi-stage, tab-enabled, Query view is where you can issue queries and see results. All other views are unchanged from the non-enhanced version. You can still access the original Query view by navigating to #query in the URL. You can tell that you're looking at the updated Query view by the presence of the tabs (3).The druid panel shows the available schemas, datasources, and columns.Query tabs allow you to manage and run several queries at once. Click the plus icon to open a new tab. To manipulate existing tabs, click the tab name.The tab bar contains some helpful tools including the Connect external data button that samples external data and creates an initial query with the appropriate EXTERN definition that you can then edit as needed.The Recent query tasks panel lets you see currently running and previous queries from all users in the cluster. It is equivalent to the Task view in the Ingestion view with the filter of type='query_controller'.You can click on each query entry to attach to that query in a new tab.You can download an archive of all the pertinent details about the query that you can share.The Run button runs the query.The Preview button appears when you enter an INSERT/REPLACE query. It runs the query inline without the INSERT/REPLACE clause and with an added LIMIT to give you a preview of the data that would be ingested if you click Run. The added LIMIT makes the query run faster but provides incomplete results.The engine selector lets you choose which engine (API endpoint) to send a query to. By default, it automatically picks which endpoint to use based on an analysis of the query, but you can select a specific engine explicitly. You can also configure the engine specific context parameters from this menu.The Max tasks picker appears when you have the sql-msq-task engine selected. It lets you configure the degree of parallelism.The More menu (...) contains the following helpful tools: Explain SQL query shows you the logical plan returned by EXPLAIN PLAN FOR for a SQL query.Query history shows you previously executed queries.Convert ingestion spec to SQL lets you convert a native batch ingestion spec to an equivalent SQL query.Attach tab from task ID lets you create a new tab from the task ID of a query executed on this cluster.Open query detail archive lets you open a detail archive generated on any cluster by (7). The query timer indicates how long the query has been running for.The (cancel) link cancels the currently running query.The main progress bar shows the overall progress of the query. The progress is computed from the various counters in the live reports (16).The Current stage progress bar shows the progress for the currently running query stage. If several stages are executing concurrently, it conservatively shows the information for the earliest executing stage.The live query reports show detailed information of all the stages (past, present, and future). The live reports are shown while the query is running. You can hide the report if you want. 
After queries finish, you can access them by clicking on the query time indicator or from the Recent query tasks panel (6).You can expand each stage of the live query report by clicking on the triangle to show per worker and per partition statistics. "},{"title":"Data loader","type":1,"pageTitle":"Web console","url":"/docs/27.0.0/operations/web-console#data-loader","content":"You can use the data loader to build an ingestion spec with a step-by-step wizard. After selecting the location of your data, follow the series of steps displaying incremental previews of the data as it is ingested. After filling in the required details on every step you can navigate to the next step by clicking Next. You can also freely navigate between the steps from the top navigation. Navigating with the top navigation leaves the underlying spec unmodified while clicking Next attempts to fill in the subsequent steps with appropriate defaults. "},{"title":"Datasources","type":1,"pageTitle":"Web console","url":"/docs/27.0.0/operations/web-console#datasources","content":"The Datasources view shows all the datasources currently loaded on the cluster, as well as their sizes and availability. From the Datasources view, you can edit the retention rules, configure automatic compaction, and drop data in a datasource. A datasource is partitioned into one or more segments organized by time chunks. To display a timeline of segments, toggle the option for Show segment timeline. Like any view that is powered by a Druid SQL query, you can click View SQL query for table from the ellipsis menu to run the underlying SQL query directly. You can view and edit retention rules to determine the general availability of a datasource. "},{"title":"Segments","type":1,"pageTitle":"Web console","url":"/docs/27.0.0/operations/web-console#segments","content":"The Segments view shows all the segments in the cluster. Each segment has a detail view that provides more information. The Segment ID is also conveniently broken down into Datasource, Start, End, Version, and Partition columns for ease of filtering and sorting. "},{"title":"Supervisors","type":1,"pageTitle":"Web console","url":"/docs/27.0.0/operations/web-console#supervisors","content":"From this view, you can check the status of existing supervisors as well as suspend, resume, and reset them. The supervisor oversees the state of the indexing tasks to coordinate handoffs, manage failures, and ensure that the scalability and replication requirements are maintained. Submit a supervisor spec manually by clicking the ellipsis icon and selecting Submit JSON supervisor. Click the magnifying glass icon for any supervisor to see detailed reports of its progress. "},{"title":"Tasks","type":1,"pageTitle":"Web console","url":"/docs/27.0.0/operations/web-console#tasks","content":"The tasks table allows you to see the currently running and recently completed tasks. To navigate your tasks more easily, you can group them by their Type, Datasource, or Status. Submit a task manually by clicking the ellipsis icon and selecting Submit JSON task. Click the magnifying glass icon for any task to see more detail about it. "},{"title":"Services","type":1,"pageTitle":"Web console","url":"/docs/27.0.0/operations/web-console#services","content":"The Services view lets you see the current status of the nodes making up your cluster. You can group the nodes by Type or by Tier to get meaningful summary statistics. 
"},{"title":"Lookups","type":1,"pageTitle":"Web console","url":"/docs/27.0.0/operations/web-console#lookups","content":"Access the Lookups view from the Lookups card in the home view or by clicking the ellipsis icon in the top-level navigation. Here you can create and edit query time lookups. "},{"title":"Native queries","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/","content":"","keywords":""},{"title":"Available queries","type":1,"pageTitle":"Native queries","url":"/docs/27.0.0/querying/#available-queries","content":"Druid has numerous query types for various use cases. Queries are composed of various JSON properties and Druid has different types of queries for different use cases. The documentation for the various query types describe all the JSON properties that can be set. "},{"title":"Aggregation queries","type":1,"pageTitle":"Native queries","url":"/docs/27.0.0/querying/#aggregation-queries","content":"TimeseriesTopNGroupBy "},{"title":"Metadata queries","type":1,"pageTitle":"Native queries","url":"/docs/27.0.0/querying/#metadata-queries","content":"TimeBoundarySegmentMetadataDatasourceMetadata "},{"title":"Other queries","type":1,"pageTitle":"Native queries","url":"/docs/27.0.0/querying/#other-queries","content":"ScanSearch "},{"title":"Which query type should I use?","type":1,"pageTitle":"Native queries","url":"/docs/27.0.0/querying/#which-query-type-should-i-use","content":"For aggregation queries, if more than one would satisfy your needs, we generally recommend using Timeseries or TopN whenever possible, as they are specifically optimized for their use cases. If neither is a good fit, you should use the GroupBy query, which is the most flexible. "},{"title":"Query cancellation","type":1,"pageTitle":"Native queries","url":"/docs/27.0.0/querying/#query-cancellation","content":"Queries can be cancelled explicitly using their unique identifier. If the query identifier is set at the time of query, or is otherwise known, the following endpoint can be used on the Broker or Router to cancel the query. DELETE /druid/v2/{queryId} For example, if the query ID is abc123, the query can be cancelled as follows: curl -X DELETE "http://host:port/druid/v2/abc123" "},{"title":"Query errors","type":1,"pageTitle":"Native queries","url":"/docs/27.0.0/querying/#query-errors","content":""},{"title":"Authentication and authorization failures","type":1,"pageTitle":"Native queries","url":"/docs/27.0.0/querying/#authentication-and-authorization-failures","content":"For secured Druid clusters, query requests respond with an HTTP 401 response code in case of an authentication failure. For authorization failures, an HTTP 403 response code is returned. "},{"title":"Query execution failures","type":1,"pageTitle":"Native queries","url":"/docs/27.0.0/querying/#query-execution-failures","content":"If a query fails, Druid returns a response with an HTTP response code and a JSON object with the following structure: { "error" : "Query timeout", "errorMessage" : "Timeout waiting for task.", "errorClass" : "java.util.concurrent.TimeoutException", "host" : "druid1.example.com:8083" } The fields in the response are: field\tdescriptionerror\tA well-defined error code (see below). errorMessage\tA free-form message with more information about the error. May be null. errorClass\tThe class of the exception that caused this error. May be null. host\tThe host on which this error occurred. May be null. 
Possible Druid error codes for the error field include: Error code\tHTTP response code\tdescriptionSQL parse failed\t400\tOnly for SQL queries. The SQL query failed to parse. Plan validation failed\t400\tOnly for SQL queries. The SQL query failed to validate. Resource limit exceeded\t400\tThe query exceeded a configured resource limit (e.g. groupBy maxResults). Query capacity exceeded\t429\tThe query failed to execute because of the lack of resources available at the time when the query was submitted. The resources could be any runtime resources such as query scheduler lane capacity, merge buffers, and so on. The error message should have more details about the failure. Unsupported operation\t501\tThe query attempted to perform an unsupported operation. This may occur when using undocumented features or when using an incompletely implemented extension. Query timeout\t504\tThe query timed out. Query interrupted\t500\tThe query was interrupted, possibly due to JVM shutdown. Query cancelled\t500\tThe query was cancelled through the query cancellation API. Truncated response context\t500\tAn intermediate response context for the query exceeded the built-in limit of 7KiB. The response context is an internal data structure that Druid servers use to share out-of-band information when sending query results to each other. It is serialized in an HTTP header with a maximum length of 7KiB. This error occurs when an intermediate response context sent from a data server (like a Historical) to the Broker exceeds this limit. The response context is used for a variety of purposes, but the one most likely to generate a large context is sharing details about segments that move during a query. That means this error can potentially indicate that a very large number of segments moved in between the time a Broker issued a query and the time it was processed on Historicals. This should rarely, if ever, occur during normal operation. Unknown exception\t500\tSome other exception occurred. Check errorMessage and errorClass for details, although keep in mind that the contents of those fields are free-form and may change from release to release. "},{"title":"Query caching","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/caching","content":"","keywords":""},{"title":"Cache types","type":1,"pageTitle":"Query caching","url":"/docs/27.0.0/querying/caching#cache-types","content":"Druid supports two types of query caching: Per-segment caching stores partial query results for a specific segment. It is enabled by default.Whole-query caching stores final query results. Druid invalidates any cached results the moment the underlying data changes, to avoid returning stale results. This is especially important for table datasources that have highly-variable underlying data segments, including real-time data segments. info Druid can store cache data on the local JVM heap or in an external distributed key/value store (e.g. memcached). The default is a local cache based upon Caffeine. The default maximum cache storage size is the smaller of 1 GiB and ten percent of the JVM's maximum runtime memory, with no cache expiration. See Cache configuration for information on how to configure cache storage. When using Caffeine, the cache is inside the JVM heap and is directly measurable. Heap usage will grow up to the maximum configured size, and then the least recently used segment results will be evicted and replaced with newer results. 
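As a point of reference, a minimal sketch of enabling the default Caffeine cache and segment-level cache use on a Historical might look like the following (the size value is illustrative; see Cache configuration for the full set of properties):
druid.cache.type=caffeine
druid.cache.sizeInBytes=268435456
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true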
"},{"title":"Per-segment caching","type":1,"pageTitle":"Query caching","url":"/docs/27.0.0/querying/caching#per-segment-caching","content":"The primary form of caching in Druid is a per-segment results cache. This cache stores partial query results on a per-segment basis and is enabled on Historical services by default. The per-segment results cache allows Druid to maintain a low-eviction-rate cache for segments that do not change, especially important for those segments that historical processes pull into their local segment cache from deep storage. Real-time segments, on the other hand, continue to have results computed at query time. Druid may potentially merge per-segment cached results with the results of later queries that use a similar basic shape with similar filters, aggregations, etc. For example, if the query is identical except that it covers a different time period. Per-segment caching is controlled by the parameters useCache and populateCache. Use per-segment caching with real-time data. For example, your queries request data actively arriving from Kafka alongside intervals in segments that are loaded on Historicals. Druid can merge cached results from Historical segments with real-time results from the stream. Whole-query caching, on the other hand, is not helpful in this scenario because new data from real-time ingestion will continually invalidate the entire cached result. "},{"title":"Whole-query caching","type":1,"pageTitle":"Query caching","url":"/docs/27.0.0/querying/caching#whole-query-caching","content":"With whole-query caching, Druid caches the entire results of individual queries, meaning the Broker no longer needs to merge per-segment results from data processes. Use whole-query caching on the Broker to increase query efficiency when there is little risk of ingestion invalidating the cache at a segment level. This applies particularly, for example, when not using real-time ingestion. Perhaps your queries tend to use batch-ingested data, in which case per-segment caching would be less efficient since the underlying segments hardly ever change, yet Druid would continue to acquire per-segment results for each query. "},{"title":"Where to enable caching","type":1,"pageTitle":"Query caching","url":"/docs/27.0.0/querying/caching#where-to-enable-caching","content":"Per-segment cache is available as follows: On Historicals, the default. Enable segment-level cache population on Historicals for larger production clusters to prevent Brokers from having to merge all query results. When you enable cache population on Historicals instead of Brokers, the Historicals merge their own local results and put less strain on the Brokers. On ingestion tasks in the Peon or Indexer service. Larger production clusters should enable segment-level cache population on task services only to prevent Brokers from having to merge all query results. When you enable cache population on task execution services instead of Brokers, the task execution services to merge their own local results and put less strain on the Brokers. Task executor services only support caches that store data locally. For example the caffeine cache. This restriction exists because the cache stores results at the level of intermediate partial segments generated by the ingestion tasks. These intermediate partial segments may not be identical across task replicas. Therefore task executor services ignore remote cache types such as memcached. On Brokers for small production clusters with less than five servers. 
Avoid using per-segment cache at the Broker for large production clusters. When the Broker cache is enabled (druid.broker.cache.populateCache is true) and populateCache is not false in the query context, individual Historicals will not merge individual segment-level results, and instead pass these back to the lead Broker. The Broker must then carry out a large merge from all segments on its own. Whole-query cache is available exclusively on Brokers. "},{"title":"Performance considerations for caching","type":1,"pageTitle":"Query caching","url":"/docs/27.0.0/querying/caching#performance-considerations-for-caching","content":"Caching enables increased concurrency on the same system, therefore leading to noticeable performance improvements for queries on Druid clusters handling throughput for concurrent, mixed workloads. If you are looking to improve response time for a single query or page load, you should ignore caching. In general, response time for a single task should meet performance objectives even when the cache is cold. During query processing, the per-segment cache intercepts the query and sends the results directly to the Broker. This way the query bypasses the data server processing threads. For queries requiring minimal processing in the Broker, cached queries are very quick. If work done on the Broker causes a query bottleneck, enabling caching results in little noticeable query improvement. The largest performance gains from segment caching tend to apply to topN and time series queries. For groupBy queries, if the bottleneck is in the merging phase on the broker, the impact is less. The same applies to queries with or without joins. "},{"title":"Scenarios where caching does not increase query performance","type":1,"pageTitle":"Query caching","url":"/docs/27.0.0/querying/caching#scenarios-where-caching-does-not-increase-query-performance","content":"Caching does not solve all types of query performance issues. For each cache type there are scenarios where caching is likely to be of little benefit. Per-segment caching doesn't work for the following: queries containing a sub-query in them. However the output of sub-queries may be cached. See Query execution for more details on sub-queries execution.queries with joins do not support any caching on the broker.GroupBy v2 queries do not support any caching on broker.queries with bySegment set in the query context are not cached on the broker. Whole-query caching doesn't work for the following: queries that involve an inline datasource or a lookup datasource.GroupBy v2 queries.queries with joins.queries with a union datasource. "},{"title":"Learn more","type":1,"pageTitle":"Query caching","url":"/docs/27.0.0/querying/caching#learn-more","content":"See the following topics for more information: Using query caching to learn how to configure and use caching.Druid Design to learn about Druid processes. Segments to learn how Druid stores data.Query execution to learn how Druid services process query statements. "},{"title":"DatasourceMetadata queries","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/datasourcemetadataquery","content":"DatasourceMetadata queries info Apache Druid supports two query languages: Druid SQL and native queries. This document describes a query type that is only available in the native language. Data Source Metadata queries return metadata information for a dataSource. These queries return information about: The timestamp of latest ingested event for the dataSource. 
This is the ingested event without any consideration of rollup. The grammar for these queries is: { "queryType" : "dataSourceMetadata", "dataSource": "sample_datasource" } There are 2 main parts to a Data Source Metadata query: property\tdescription\trequired?queryType\tThis String should always be "dataSourceMetadata"; this is the first thing Apache Druid looks at to figure out how to interpret the query\tyes dataSource\tA String or Object defining the data source to query, very similar to a table in a relational database. See DataSource for more information.\tyes context\tSee Context\tno The format of the result is: [ { "timestamp" : "2013-05-09T18:24:00.000Z", "result" : { "maxIngestedEventTime" : "2013-05-09T18:24:09.007Z" } } ] ","keywords":""},{"title":"Spatial filters","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/geo","content":"","keywords":""},{"title":"Spatial indexing","type":1,"pageTitle":"Spatial filters","url":"/docs/27.0.0/querying/geo#spatial-indexing","content":"Spatial indexing refers to ingesting data of a spatial data type, such as geometry or geography, into Druid. Spatial dimensions are string columns that contain coordinates separated by a comma. In the ingestion spec, you configure spatial dimensions in the dimensionsSpec object of the dataSchema component. You can provide spatial dimensions in any of the data formats supported by Druid. The following example shows an ingestion spec with a spatial dimension named coordinates, which is constructed from the input fields x and y: { "type": "hadoop", "dataSchema": { "dataSource": "DatasourceName", "parser": { "type": "string", "parseSpec": { "format": "json", "timestampSpec": { "column": "timestamp", "format": "auto" }, "dimensionsSpec": { "dimensions": [ { "type": "double", "name": "x" }, { "type": "double", "name": "y" } ], "spatialDimensions": [ { "dimName": "coordinates", "dims": [ "x", "y" ] } ] } } } } } Each spatial dimension object in the spatialDimensions array is defined by the following fields: Property\tDescription\tRequireddimName\tThe name of a spatial dimension. You can construct a spatial dimension from other dimensions or it may already exist as part of an event. If a spatial dimension already exists, it must be an array of coordinate values.\tyes dims\tThe list of dimension names that comprise the spatial dimension.\tno For information on how to use the ingestion spec to configure ingestion, see Ingestion spec reference. For general information on loading data in Druid, see Ingestion. "},{"title":"Spatial filters","type":1,"pageTitle":"Spatial filters","url":"/docs/27.0.0/querying/geo#spatial-filters","content":"A filter is a JSON object indicating which rows of data should be included in the computation for a query. You can filter on spatial structures, such as rectangles and polygons, using the spatial filter. Spatial filters have the following structure: "filter": { "type": "spatial", "dimension": <name_of_spatial_dimension>, "bound": <bound_type> } The following example shows a spatial filter with a rectangular bound type: "filter" : { "type": "spatial", "dimension": "spatialDim", "bound": { "type": "rectangular", "minCoords": [10.0, 20.0], "maxCoords": [30.0, 40.0] } The order of the dimension coordinates in the spatial filter must be equal to the order of the dimension coordinates in the spatialDimensions array. 
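To tie the pieces together, here is a hedged sketch of a complete native query that applies a rectangular spatial filter to the coordinates dimension from the ingestion example above; the interval, granularity, and coordinate values are illustrative only:
{
  "queryType": "timeseries",
  "dataSource": "DatasourceName",
  "intervals": ["2021-01-01/2021-02-01"],
  "granularity": "day",
  "aggregations": [{ "type": "count", "name": "count" }],
  "filter": {
    "type": "spatial",
    "dimension": "coordinates",
    "bound": {
      "type": "rectangular",
      "minCoords": [10.0, 20.0],
      "maxCoords": [30.0, 40.0]
    }
  }
}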
"},{"title":"Bound types","type":1,"pageTitle":"Spatial filters","url":"/docs/27.0.0/querying/geo#bound-types","content":"The bound property of the spatial filter object lets you filter on ranges of dimension values. You can define rectangular, radius, and polygon filter bounds. Rectangular The rectangular bound has the following elements: Property\tDescription\tRequiredminCoords\tThe list of minimum dimension coordinates in the form [x, y]\tyes maxCoords\tThe list of maximum dimension coordinates in the form [x, y]\tyes Radius The radius bound has the following elements: Property\tDescription\tRequiredcoords\tOrigin coordinates in the form [x, y]\tyes radius\tThe float radius value\tyes Polygon The polygon bound has the following elements: Property\tDescription\tRequiredabscissa\tHorizontal coordinates for the corners of the polygon\tyes ordinate\tVertical coordinates for the corners of the polygon\tyes "},{"title":"Aggregations","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/aggregations","content":"","keywords":""},{"title":"Exact aggregations","type":1,"pageTitle":"Aggregations","url":"/docs/27.0.0/querying/aggregations#exact-aggregations","content":""},{"title":"Count aggregator","type":1,"pageTitle":"Aggregations","url":"/docs/27.0.0/querying/aggregations#count-aggregator","content":"count computes the count of Druid rows that match the filters. Property\tDescription\tRequiredtype\tMust be "count".\tYes name\tOutput name of the aggregator\tYes Example: { "type" : "count", "name" : "count" } The count aggregator counts the number of Druid rows, which does not always reflect the number of raw events ingested. This is because Druid can be configured to roll up data at ingestion time. To count the number of ingested rows of data, include a count aggregator at ingestion time and a longSum aggregator at query time. "},{"title":"Sum aggregators","type":1,"pageTitle":"Aggregations","url":"/docs/27.0.0/querying/aggregations#sum-aggregators","content":"Property\tDescription\tRequiredtype\tMust be "longSum", "doubleSum", or "floatSum".\tYes name\tOutput name for the summed value.\tYes fieldName\tName of the input column to sum over.\tNo. You must specify fieldName or expression. expression\tYou can specify an inline expression as an alternative to fieldName.\tNo. You must specify fieldName or expression. longSum aggregator Computes the sum of values as a 64-bit, signed integer. Example: { "type" : "longSum", "name" : "sumLong", "fieldName" : "aLong" } doubleSum aggregator Computes and stores the sum of values as a 64-bit floating point value. Similar to longSum. Example: { "type" : "doubleSum", "name" : "sumDouble", "fieldName" : "aDouble" } floatSum aggregator Computes and stores the sum of values as a 32-bit floating point value. Similar to longSum and doubleSum. Example: { "type" : "floatSum", "name" : "sumFloat", "fieldName" : "aFloat" } "},{"title":"Min and max aggregators","type":1,"pageTitle":"Aggregations","url":"/docs/27.0.0/querying/aggregations#min-and-max-aggregators","content":"Property\tDescription\tRequiredtype\tMust be "doubleMin", "doubleMax", "floatMin", "floatMax", "longMin", or "longMax".\tYes name\tOutput name for the min or max value.\tYes fieldName\tName of the input column to compute the minimum or maximum value over.\tNo. You must specify fieldName or expression. expression\tYou can specify an inline expression as an alternative to fieldName.\tNo. You must specify fieldName or expression. 
doubleMin aggregator doubleMin computes the minimum of all input values and null if druid.generic.useDefaultValueForNull is false or Double.POSITIVE_INFINITY if true. Example: { "type" : "doubleMin", "name" : "minDouble", "fieldName" : "aDouble" } doubleMax aggregator doubleMax computes the maximum of all input values and null if druid.generic.useDefaultValueForNull is false or Double.NEGATIVE_INFINITY if true. Example: { "type" : "doubleMax", "name" : "maxDouble", "fieldName" : "aDouble" } floatMin aggregator floatMin computes the minimum of all input values and null if druid.generic.useDefaultValueForNull is false or Float.POSITIVE_INFINITY if true. Example: { "type" : "floatMin", "name" : "minFloat", "fieldName" : "aFloat" } floatMax aggregator floatMax computes the maximum of all input values and null if druid.generic.useDefaultValueForNull is false or Float.NEGATIVE_INFINITY if true. Example: { "type" : "floatMax", "name" : "maxFloat", "fieldName" : "aFloat" } longMin aggregator longMin computes the minimum of all input values and null if druid.generic.useDefaultValueForNull is false or Long.MAX_VALUE if true. Example: { "type" : "longMin", "name" : "minLong", "fieldName" : "aLong" } longMax aggregator longMax computes the maximum of all metric values and null if druid.generic.useDefaultValueForNull is false or Long.MIN_VALUE if true. Example: { "type" : "longMax", "name" : "maxLong", "fieldName" : "aLong" } "},{"title":"doubleMean aggregator","type":1,"pageTitle":"Aggregations","url":"/docs/27.0.0/querying/aggregations#doublemean-aggregator","content":"Computes and returns the arithmetic mean of a column's values as a 64-bit floating point value. Property\tDescription\tRequiredtype\tMust be "doubleMean".\tYes name\tOutput name for the mean value.\tYes fieldName\tName of the input column to compute the arithmetic mean value over.\tYes Example: { "type" : "doubleMean", "name" : "aMean", "fieldName" : "aDouble" } doubleMean is a query time aggregator only. It is not available for indexing. To accomplish mean aggregation on ingestion, refer to the Quantiles aggregator from the DataSketches extension. "},{"title":"First and last aggregators","type":1,"pageTitle":"Aggregations","url":"/docs/27.0.0/querying/aggregations#first-and-last-aggregators","content":"The first and last aggregators determine the metric values that respectively correspond to the earliest and latest values of a time column. Do not use first and last aggregators for the double, float, and long types in an ingestion spec. They are only supported for queries. The string-typed aggregators, stringFirst and stringLast, are supported for both ingestion and querying. Queries with first or last aggregators on a segment created with rollup return the rolled up value, not the first or last value from the raw ingested data. Numeric first and last aggregators Property\tDescription\tRequiredtype\tMust be "doubleFirst", "doubleLast", "floatFirst", "floatLast", "longFirst", "longLast".\tYes name\tOutput name for the first or last value.\tYes fieldName\tName of the input column to compute the first or last value over.\tYes timeColumn\tName of the input column to use for time values. Must be a LONG typed column.\tNo. Defaults to __time. doubleFirst aggregator doubleFirst computes the input value with the minimum value for time column or 0 in default mode, or null in SQL-compatible mode if no row exists. 
Example: { "type" : "doubleFirst", "name" : "firstDouble", "fieldName" : "aDouble" } doubleLast aggregator doubleLast computes the input value with the maximum value for time column or 0 in default mode, or null in SQL-compatible mode if no row exists. Example: { "type" : "doubleLast", "name" : "lastDouble", "fieldName" : "aDouble", "timeColumn" : "longTime" } floatFirst aggregator floatFirst computes the input value with the minimum value for time column or 0 in default mode, or null in SQL-compatible mode if no row exists. Example: { "type" : "floatFirst", "name" : "firstFloat", "fieldName" : "aFloat" } floatLast aggregator floatLast computes the metric value with the maximum value for time column or 0 in default mode, or null in SQL-compatible mode if no row exists. Example: { "type" : "floatLast", "name" : "lastFloat", "fieldName" : "aFloat" } longFirst aggregator longFirst computes the metric value with the minimum value for time column or 0 in default mode, or null in SQL-compatible mode if no row exists. Example: { "type" : "longFirst", "name" : "firstLong", "fieldName" : "aLong" } longLast aggregator longLast computes the metric value with the maximum value for time column or 0 in default mode, or null in SQL-compatible mode if no row exists. Example: { "type" : "longLast", "name" : "lastLong", "fieldName" : "aLong", "timeColumn" : "longTime" } String first and last aggregators Property\tDescription\tRequiredtype\tMust be "stringFirst", "stringLast".\tYes name\tOutput name for the first or last value.\tYes fieldName\tName of the input column to compute the first or last value over.\tYes timeColumn\tName of the input column to use for time values. Must be a LONG typed column.\tNo. Defaults to __time. maxStringBytes\tMaximum size of string values to accumulate when computing the first or last value per group. Values longer than this will be truncated.\tNo. Defaults to 1024. stringFirst aggregator stringFirst computes the metric value with the minimum value for time column or null if no row exists. Example: { "type" : "stringFirst", "name" : "firstString", "fieldName" : "aString", "maxStringBytes" : 2048, "timeColumn" : "longTime" } stringLast aggregator stringLast computes the metric value with the maximum value for time column or null if no row exists. Example: { "type" : "stringLast", "name" : "lastString", "fieldName" : "aString" } "},{"title":"ANY aggregators","type":1,"pageTitle":"Aggregations","url":"/docs/27.0.0/querying/aggregations#any-aggregators","content":"(Double/Float/Long/String) ANY aggregator cannot be used in ingestion spec, and should only be specified as part of queries. Returns any value including null. This aggregator can simplify and optimize the performance by returning the first encountered value (including null) Numeric any aggregators Property\tDescription\tRequiredtype\tMust be "doubleAny", "floatAny", or "longAny".\tYes name\tOutput name for the value.\tYes fieldName\tName of the input column to compute the value over.\tYes doubleAny aggregator doubleAny returns any double metric value. Example: { "type" : "doubleAny", "name" : "anyDouble", "fieldName" : "aDouble" } floatAny aggregator floatAny returns any float metric value. Example: { "type" : "floatAny", "name" : "anyFloat", "fieldName" : "aFloat" } longAny aggregator longAny returns any long metric value. Example: { "type" : "longAny", "name" : "anyLong", "fieldName" : "aLong" } stringAny aggregator stringAny returns any string value present in the input. 
Property\tDescription\tRequiredtype\tMust be "stringAny".\tYes name\tOutput name for the value.\tYes fieldName\tName of the input column to compute the value over.\tYes maxStringBytes\tMaximum size of string values to accumulate when computing the first or last value per group. Values longer than this will be truncated.\tNo. Defaults to 1024. Example: { "type" : "stringAny", "name" : "anyString", "fieldName" : "aString", "maxStringBytes" : 2048 } info JavaScript-based functionality is disabled by default. Please refer to the Druid JavaScript programming guide for guidelines about using Druid's JavaScript functionality, including instructions on how to enable it. "},{"title":"Approximate aggregations","type":1,"pageTitle":"Aggregations","url":"/docs/27.0.0/querying/aggregations#approximate-aggregations","content":""},{"title":"Count distinct","type":1,"pageTitle":"Aggregations","url":"/docs/27.0.0/querying/aggregations#count-distinct","content":"Apache DataSketches Theta Sketch The DataSketches Theta Sketch extension-provided aggregator gives distinct count estimates with support for set union, intersection, and difference post-aggregators, using Theta sketches from the Apache DataSketches library. Apache DataSketches HLL Sketch The DataSketches HLL Sketch extension-provided aggregator gives distinct count estimates using the HyperLogLog algorithm. Compared to the Theta sketch, the HLL sketch does not support set operations and has slightly slower update and merge speed, but requires significantly less space. Cardinality, hyperUnique info For new use cases, we recommend evaluating DataSketches Theta Sketch or DataSketches HLL Sketch instead. The DataSketches aggregators are generally able to offer more flexibility and better accuracy than the classic Druid cardinality and hyperUnique aggregators. The Cardinality and HyperUnique aggregators are older aggregator implementations available by default in Druid that also provide distinct count estimates using the HyperLogLog algorithm. The newer DataSketches Theta and HLL extension-provided aggregators described above have superior accuracy and performance and are recommended instead. The DataSketches team has published a comparison study between Druid's original HLL algorithm and the DataSketches HLL algorithm. Based on the demonstrated advantages of the DataSketches implementation, we are recommending using them in preference to Druid's original HLL-based aggregators. However, to ensure backwards compatibility, we will continue to support the classic aggregators. Please note that hyperUnique aggregators are not mutually compatible with Datasketches HLL or Theta sketches. Multi-column handling Note the DataSketches Theta and HLL aggregators currently only support single-column inputs. If you were previously using the Cardinality aggregator with multiple-column inputs, equivalent operations using Theta or HLL sketches are described below: Multi-column byValue Cardinality can be replaced with a union of Theta sketches on the individual input columnsMulti-column byRow Cardinality can be replaced with a Theta or HLL sketch on a single virtual column that combines the individual input columns. 
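As a point of reference, a minimal Theta sketch aggregation looks like the sketch below; the column name user_id is a hypothetical placeholder, and the druid-datasketches extension must be loaded for the aggregator to be available:
{ "type": "thetaSketch", "name": "distinct_users", "fieldName": "user_id" }
To approximate a multi-column byRow cardinality, point the same aggregator at a virtual column that combines the individual input columns, as described above.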
"},{"title":"Histograms and quantiles","type":1,"pageTitle":"Aggregations","url":"/docs/27.0.0/querying/aggregations#histograms-and-quantiles","content":"DataSketches Quantiles Sketch The DataSketches Quantiles Sketch extension-provided aggregator provides quantile estimates and histogram approximations using the numeric quantiles DoublesSketch from the datasketches library. We recommend this aggregator in general for quantiles/histogram use cases, as it provides formal error bounds and has distribution-independent accuracy. Moments Sketch (Experimental) The Moments Sketch extension-provided aggregator is an experimental aggregator that provides quantile estimates using the Moments Sketch. The Moments Sketch aggregator is provided as an experimental option. It is optimized for merging speed and it can have higher aggregation performance compared to the DataSketches quantiles aggregator. However, the accuracy of the Moments Sketch is distribution-dependent, so users will need to empirically verify that the aggregator is suitable for their input data. As a general guideline for experimentation, the Moments Sketch paper points out that this algorithm works better on inputs with high entropy. In particular, the algorithm is not a good fit when the input data consists of a small number of clustered discrete values. Fixed Buckets Histogram Druid also provides a simple histogram implementation that uses a fixed range and fixed number of buckets with support for quantile estimation, backed by an array of bucket count values. The fixed buckets histogram can perform well when the distribution of the input data allows a small number of buckets to be used. We do not recommend the fixed buckets histogram for general use, as its usefulness is extremely data dependent. However, it is made available for users that have already identified use cases where a fixed buckets histogram is suitable. Approximate Histogram (deprecated) info The Approximate Histogram aggregator is deprecated. There are a number of other quantile estimation algorithms that offer better performance, accuracy, and memory footprint. We recommend using DataSketches Quantiles instead. The Approximate Histogram extension-provided aggregator also provides quantile estimates and histogram approximations, based on http://jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf. The algorithm used by this deprecated aggregator is highly distribution-dependent and its output is subject to serious distortions when the input does not fit within the algorithm's limitations. A study published by the DataSketches team demonstrates some of the known failure modes of this algorithm: The algorithm's quantile calculations can fail to provide results for a large range of rank values (all ranks less than 0.89 in the example used in the study), returning all zeroes instead.The algorithm can completely fail to record spikes in the tail ends of the distributionIn general, the histogram produced by the algorithm can deviate significantly from the true histogram, with no bounds on the errors. It is not possible to determine a priori how well this aggregator will behave for a given input stream, nor does the aggregator provide any indication that serious distortions are present in the output. For these reasons, we have deprecated this aggregator and recommend using the DataSketches Quantiles aggregator instead for new and existing use cases, although we will continue to support Approximate Histogram for backwards compatibility. 
"},{"title":"Expression aggregations","type":1,"pageTitle":"Aggregations","url":"/docs/27.0.0/querying/aggregations#expression-aggregations","content":""},{"title":"Expression aggregator","type":1,"pageTitle":"Aggregations","url":"/docs/27.0.0/querying/aggregations#expression-aggregator","content":"Aggregator applicable only at query time. Aggregates results using Druid expressions functions to facilitate building custom functions. Property\tDescription\tRequiredtype\tMust be "expression".\tYes name\tThe aggregator output name.\tYes fields\tThe list of aggregator input columns.\tYes accumulatorIdentifier\tThe variable which identifies the accumulator value in the fold and combine expressions.\tNo. Default __acc. fold\tThe expression to accumulate values from fields. The result of the expression is stored in accumulatorIdentifier and available to the next computation.\tYes combine\tThe expression to combine the results of various fold expressions of each segment when merging results. The input is available to the expression as a variable identified by the name.\tNo. Default to fold expression if the expression has a single input in fields. compare\tThe comparator expression which can only refer to two input variables, o1 and o2, where o1 and o2 are the output of fold or combine expressions, and must adhere to the Java comparator contract. If not set, the aggregator will try to fall back to an output type appropriate comparator.\tNo finalize\tThe finalize expression which can only refer to a single input variable, o. This expression is used to perform any final transformation of the output of the fold or combine expressions. If not set, then the value is not transformed.\tNo initialValue\tThe initial value of the accumulator for the fold (and combine, if InitialCombineValue is null) expression.\tYes initialCombineValue\tThe initial value of the accumulator for the combine expression.\tNo. Default initialValue. isNullUnlessAggregated\tIndicates that the default output value should be null if the aggregator does not process any rows. If true, the value is null, if false, the result of running the expressions with initial values is used instead.\tNo. Defaults to the value of druid.generic.useDefaultValueForNull. shouldAggregateNullInputs\tIndicates if the fold expression should operate on any null input values.\tNo. Defaults to true. shouldCombineAggregateNullInputs\tIndicates if the combine expression should operate on any null input values.\tNo. Defaults to the value of shouldAggregateNullInputs. maxSizeBytes\tMaximum size in bytes that variably sized aggregator output types such as strings and arrays are allowed to grow to before the aggregation fails.\tNo. Default is 8192 bytes. Example: a "count" aggregator The initial value is 0. fold adds 1 for each row processed. { "type": "expression", "name": "expression_count", "fields": [], "initialValue": "0", "fold": "__acc + 1", "combine": "__acc + expression_count" } Example: a "sum" aggregator The initial value is 0. fold adds the numeric value column_a for each row processed. { "type": "expression", "name": "expression_sum", "fields": ["column_a"], "initialValue": "0", "fold": "__acc + column_a" } Example: a "distinct array element" aggregator, sorted by array_length The initial value is an empty array. fold adds the elements of column_a to the accumulator using set semantics, combine merges the sets, and compare orders the values by array_length. 
{ "type": "expression", "name": "expression_array_agg_distinct", "fields": ["column_a"], "initialValue": "[]", "fold": "array_set_add(__acc, column_a)", "combine": "array_set_add_all(__acc, expression_array_agg_distinct)", "compare": "if(array_length(o1) > array_length(o2), 1, if (array_length(o1) == array_length(o2), 0, -1))" } Example: an "approximate count" aggregator using the built-in hyper-unique Similar to the cardinality aggregator, the default value is an empty hyper-unique sketch, fold adds the value of column_a to the sketch, combine merges the sketches, and finalize gets the estimated count from the accumulated sketch. { "type": "expression", "name": "expression_cardinality", "fields": ["column_a"], "initialValue": "hyper_unique()", "fold": "hyper_unique_add(column_a, __acc)", "combine": "hyper_unique_add(expression_cardinality, __acc)", "finalize": "hyper_unique_estimate(o)" } "},{"title":"JavaScript aggregator","type":1,"pageTitle":"Aggregations","url":"/docs/27.0.0/querying/aggregations#javascript-aggregator","content":"Computes an arbitrary JavaScript function over a set of columns (both metrics and dimensions are allowed). Your JavaScript functions are expected to return floating-point values. Property\tDescription\tRequiredtype\tMust be "javascript".\tYes name\tThe aggregator output name.\tYes fieldNames\tThe list of aggregator input columns.\tYes fnAggregate\tJavaScript function that updates partial aggregate based on the current row values, and returns the updated partial aggregate.\tYes fnCombine\tJavaScript function to combine partial aggregates and return the combined result.\tYes fnReset\tJavaScript function that returns the 'initial' value.\tYes Example { "type": "javascript", "name": "sum(log(x)*y) + 10", "fieldNames": ["x", "y"], "fnAggregate" : "function(current, a, b) { return current + (Math.log(a) * b); }", "fnCombine" : "function(partialA, partialB) { return partialA + partialB; }", "fnReset" : "function() { return 10; }" } JavaScript functionality is disabled by default. Refer to the Druid JavaScript programming guide for guidelines about using Druid's JavaScript functionality, including instructions on how to enable it. "},{"title":"Miscellaneous aggregations","type":1,"pageTitle":"Aggregations","url":"/docs/27.0.0/querying/aggregations#miscellaneous-aggregations","content":""},{"title":"Filtered aggregator","type":1,"pageTitle":"Aggregations","url":"/docs/27.0.0/querying/aggregations#filtered-aggregator","content":"A filtered aggregator wraps any given aggregator, but only aggregates the values for which the given dimension filter matches. This makes it possible to compute the results of a filtered and an unfiltered aggregation simultaneously, without having to issue multiple queries, and use both results as part of post-aggregations. If only the filtered results are required, consider putting the filter on the query itself. This will be much faster since it does not require scanning all the data. 
Property\tDescription\tRequiredtype\tMust be "filtered".\tYes name\tThe aggregator output name.\tNo aggregator\tInline aggregator specification.\tYes filter\tInline filter specification.\tYes Example: { "type": "filtered", "name": "filteredSumLong", "filter": { "type" : "selector", "dimension" : "someColumn", "value" : "abcdef" }, "aggregator": { "type": "longSum", "name": "sumLong", "fieldName": "aLong" } } "},{"title":"Grouping aggregator","type":1,"pageTitle":"Aggregations","url":"/docs/27.0.0/querying/aggregations#grouping-aggregator","content":"A grouping aggregator can only be used as part of GroupBy queries which have a subtotal spec. It returns a number for each output row that lets you infer whether a particular dimension is included in the sub-grouping used for that row. You can pass a non-empty list of dimensions to this aggregator which must be a subset of dimensions that you are grouping on. Property\tDescription\tRequiredtype\tMust be "grouping".\tYes name\tThe aggregator output name.\tYes groupings\tThe list of columns to use in the grouping set.\tYes For example, the following aggregator has ["dim1", "dim2"] as input dimensions: { "type" : "grouping", "name" : "someGrouping", "groupings" : ["dim1", "dim2"] } and used in a grouping query with [["dim1", "dim2"], ["dim1"], ["dim2"], []] as subtotals, the possible output of the aggregator is: subtotal used in query\tOutput\t(bits representation)["dim1", "dim2"]\t0\t(00) ["dim1"]\t1\t(01) ["dim2"]\t2\t(10) []\t3\t(11) As the example illustrates, you can think of the output number as an unsigned n bit number where n is the number of dimensions passed to the aggregator. Druid sets the bit at position X for the number to 0 if the sub-grouping includes a dimension at position X in the aggregator input. Otherwise, Druid sets this bit to 1. "},{"title":"Cardinality/HyperUnique aggregators","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/hll-old","content":"","keywords":""},{"title":"Cardinality aggregator","type":1,"pageTitle":"Cardinality/HyperUnique aggregators","url":"/docs/27.0.0/querying/hll-old#cardinality-aggregator","content":"Computes the cardinality of a set of Apache Druid dimensions, using HyperLogLog to estimate the cardinality. Please note that this aggregator will be much slower than indexing a column with the hyperUnique aggregator. This aggregator also runs over a dimension column, which means the string dimension cannot be removed from the dataset to improve rollup. In general, we strongly recommend using the hyperUnique aggregator instead of the cardinality aggregator if you do not care about the individual values of a dimension. { "type": "cardinality", "name": "<output_name>", "fields": [ <dimension1>, <dimension2>, ... ], "byRow": <false | true> # (optional, defaults to false), "round": <false | true> # (optional, defaults to false) } Each individual element of the "fields" list can be a String or DimensionSpec. A String dimension in the fields list is equivalent to a DefaultDimensionSpec (no transformations). The HyperLogLog algorithm generates decimal estimates with some error. "round" can be set to true to round off estimated values to whole numbers. Note that even with rounding, the cardinality is still an estimate. The "round" field only affects query-time behavior, and is ignored at ingestion-time. 
"},{"title":"Cardinality by value","type":1,"pageTitle":"Cardinality/HyperUnique aggregators","url":"/docs/27.0.0/querying/hll-old#cardinality-by-value","content":"When setting byRow to false (the default) it computes the cardinality of the set composed of the union of all dimension values for all the given dimensions. For a single dimension, this is equivalent to SELECT COUNT(DISTINCT(dimension)) FROM <datasource> For multiple dimensions, this is equivalent to something akin to SELECT COUNT(DISTINCT(value)) FROM ( SELECT dim_1 as value FROM <datasource> UNION SELECT dim_2 as value FROM <datasource> UNION SELECT dim_3 as value FROM <datasource> ) "},{"title":"Cardinality by row","type":1,"pageTitle":"Cardinality/HyperUnique aggregators","url":"/docs/27.0.0/querying/hll-old#cardinality-by-row","content":"When setting byRow to true it computes the cardinality by row, i.e. the cardinality of distinct dimension combinations. This is equivalent to something akin to SELECT COUNT(*) FROM ( SELECT DIM1, DIM2, DIM3 FROM <datasource> GROUP BY DIM1, DIM2, DIM3 ) Example Determine the number of distinct countries people are living in or have come from. { "type": "cardinality", "name": "distinct_countries", "fields": [ "country_of_origin", "country_of_residence" ] } Determine the number of distinct people (i.e. combinations of first and last name). { "type": "cardinality", "name": "distinct_people", "fields": [ "first_name", "last_name" ], "byRow" : true } Determine the number of distinct starting characters of last names { "type": "cardinality", "name": "distinct_last_name_first_char", "fields": [ { "type" : "extraction", "dimension" : "last_name", "outputName" : "last_name_first_char", "extractionFn" : { "type" : "substring", "index" : 0, "length" : 1 } } ], "byRow" : true } "},{"title":"HyperUnique aggregator","type":1,"pageTitle":"Cardinality/HyperUnique aggregators","url":"/docs/27.0.0/querying/hll-old#hyperunique-aggregator","content":"Uses HyperLogLog to compute the estimated cardinality of a dimension that has been aggregated as a "hyperUnique" metric at indexing time. { "type" : "hyperUnique", "name" : <output_name>, "fieldName" : <metric_name>, "isInputHyperUnique" : false, "round" : false } "isInputHyperUnique" can be set to true to index precomputed HLL (Base64 encoded output from druid-hll is expected). The "isInputHyperUnique" field only affects ingestion-time behavior, and is ignored at query-time. The HyperLogLog algorithm generates decimal estimates with some error. "round" can be set to true to round off estimated values to whole numbers. Note that even with rounding, the cardinality is still an estimate. The "round" field only affects query-time behavior, and is ignored at ingestion-time. "},{"title":"Datasources","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/datasource","content":"","keywords":""},{"title":"Datasource type","type":1,"pageTitle":"Datasources","url":"/docs/27.0.0/querying/datasource#datasource-type","content":""},{"title":"table","type":1,"pageTitle":"Datasources","url":"/docs/27.0.0/querying/datasource#table","content":"SQLNative SELECT column1, column2 FROM "druid"."dataSourceName" The table datasource is the most common type. This is the kind of datasource you get when you performdata ingestion. They are split up into segments, distributed around the cluster, and queried in parallel. In Druid SQL, table datasources reside in the druid schema. 
This is the default schema, so table datasources can be referenced as either druid.dataSourceName or simply dataSourceName. In native queries, table datasources can be referenced using their names as strings (as in the example above), or by using JSON objects of the form: "dataSource": { "type": "table", "name": "dataSourceName" } To see a list of all table datasources, use the SQL querySELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'druid'. "},{"title":"lookup","type":1,"pageTitle":"Datasources","url":"/docs/27.0.0/querying/datasource#lookup","content":"SQLNative SELECT k, v FROM lookup.countries Lookup datasources correspond to Druid's key-value lookup objects. In Druid SQL, they reside in the lookup schema. They are preloaded in memory on all servers, so they can be accessed rapidly. They can be joined onto regular tables using the join operator. Lookup datasources are key-value oriented and always have exactly two columns: k (the key) and v (the value), and both are always strings. To see a list of all lookup datasources, use the SQL querySELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'lookup'. info Performance tip: Lookups can be joined with a base table either using an explicit join, or by using the SQL LOOKUP function. However, the join operator must evaluate the condition on each row, whereas theLOOKUP function can defer evaluation until after an aggregation phase. This means that the LOOKUP function is usually faster than joining to a lookup datasource. Refer to the Query execution page for more details on how queries are executed when you use table datasources. "},{"title":"union","type":1,"pageTitle":"Datasources","url":"/docs/27.0.0/querying/datasource#union","content":"SQLNative SELECT column1, column2 FROM ( SELECT column1, column2 FROM table1 UNION ALL SELECT column1, column2 FROM table2 UNION ALL SELECT column1, column2 FROM table3 ) Unions allow you to treat two or more tables as a single datasource. In SQL, this is done with the UNION ALL operator applied directly to tables, called a "table-level union". In native queries, this is done with a "union" datasource. With SQL table-level unions the same columns must be selected from each table in the same order, and those columns must either have the same types, or types that can be implicitly cast to each other (such as different numeric types). For this reason, it is more robust to write your queries to select specific columns. With the native union datasource, the tables being unioned do not need to have identical schemas. If they do not fully match up, then columns that exist in one table but not another will be treated as if they contained all null values in the tables where they do not exist. In either case, features like expressions, column aliasing, JOIN, GROUP BY, ORDER BY, and so on cannot be used with table unions. Refer to the Query execution page for more details on how queries are executed when you use union datasources. "},{"title":"inline","type":1,"pageTitle":"Datasources","url":"/docs/27.0.0/querying/datasource#inline","content":"Native { "queryType": "scan", "dataSource": { "type": "inline", "columnNames": ["country", "city"], "rows": [ ["United States", "San Francisco"], ["Canada", "Calgary"] ] }, "columns": ["country", "city"], "intervals": ["0000/3000"] } Inline datasources allow you to query a small amount of data that is embedded in the query itself. They are useful when you want to write a query on a small amount of data without loading it first. 
They are also useful as inputs into a join. Druid also uses them internally to handle subqueries that need to be inlined on the Broker. See the query datasource documentation for more details. There are two fields in an inline datasource: an array of columnNames and an array of rows. Each row is an array that must be exactly as long as the list of columnNames. The first element in each row corresponds to the first column in columnNames, and so on. Inline datasources are not available in Druid SQL. Refer to the Query execution page for more details on how queries are executed when you use inline datasources. "},{"title":"query","type":1,"pageTitle":"Datasources","url":"/docs/27.0.0/querying/datasource#query","content":"SQLNative -- Uses a subquery to count hits per page, then takes the average. SELECT AVG(cnt) AS average_hits_per_page FROM (SELECT page, COUNT(*) AS hits FROM site_traffic GROUP BY page) Query datasources allow you to issue subqueries. In native queries, they can appear anywhere that accepts a dataSource (except underneath a union). In SQL, they can appear in the following places, always surrounded by parentheses: The FROM clause: FROM (<subquery>). As inputs to a JOIN: <table-or-subquery-1> t1 INNER JOIN <table-or-subquery-2> t2 ON t1.<col1> = t2.<col2>. In the WHERE clause: WHERE <column> { IN | NOT IN } (<subquery>). These are translated to joins by the SQL planner. info Performance tip: In most cases, subquery results are fully buffered in memory on the Broker and then further processing occurs on the Broker itself. This means that subqueries with large result sets can cause performance bottlenecks or run into memory usage limits on the Broker. See the Query execution page for more details on how subqueries are executed and what limits will apply. "},{"title":"join","type":1,"pageTitle":"Datasources","url":"/docs/27.0.0/querying/datasource#join","content":"SQLNative -- Joins "sales" with "countries" (using "store" as the join key) to get sales by country. SELECT store_to_country.v AS country, SUM(sales.revenue) AS country_revenue FROM sales INNER JOIN lookup.store_to_country ON sales.store = store_to_country.k GROUP BY store_to_country.v Join datasources allow you to do a SQL-style join of two datasources. Stacking joins on top of each other allows you to join arbitrarily many datasources. In Druid 27.0.0, joins in native queries are implemented with a broadcast hash-join algorithm. This means that all datasources other than the leftmost "base" datasource must fit in memory. It also means that the join condition must be an equality. This feature is intended mainly to allow joining regular Druid tables with lookup, inline, and query datasources. Refer to the Query execution page for more details on how queries are executed when you use join datasources. Joins in SQL SQL joins take the form: <o1> [ INNER | LEFT [OUTER] ] JOIN <o2> ON <condition> The condition must involve only equalities, but functions are okay, and there can be multiple equalities ANDed together. Conditions like t1.x = t2.x, or LOWER(t1.x) = t2.x, or t1.x = t2.x AND t1.y = t2.y can all be handled. Conditions like t1.x <> t2.x cannot currently be handled. Note that Druid SQL is less rigid than what native join datasources can handle. In cases where a SQL query does something that is not allowed as-is with a native join datasource, Druid SQL will generate a subquery. This can have a substantial effect on performance and scalability, so it is something to watch out for. 
Some examples of when the SQL layer will generate subqueries include: Joining a regular Druid table to itself, or to another regular Druid table. The native join datasource can accept a table on the left-hand side, but not the right, so a subquery is needed. Join conditions where the expressions on either side are of different types. Join conditions where the right-hand expression is not a direct column access. For more information about how Druid translates SQL to native queries, refer to theDruid SQL documentation. Joins in native queries Native join datasources have the following properties. All are required. Field\tDescriptionleft\tLeft-hand datasource. Must be of type table, join, lookup, query, or inline. Placing another join as the left datasource allows you to join arbitrarily many datasources. right\tRight-hand datasource. Must be of type lookup, query, or inline. Note that this is more rigid than what Druid SQL requires. rightPrefix\tString prefix that will be applied to all columns from the right-hand datasource, to prevent them from colliding with columns from the left-hand datasource. Can be any string, so long as it is nonempty and is not be a prefix of the string __time. Any columns from the left-hand side that start with your rightPrefix will be shadowed. It is up to you to provide a prefix that will not shadow any important columns from the left side. condition\tExpression that must be an equality where one side is an expression of the left-hand side, and the other side is a simple column reference to the right-hand side. Note that this is more rigid than what Druid SQL requires: here, the right-hand reference must be a simple column reference; in SQL it can be an expression. joinType\tINNER or LEFT. Join performance Joins are a feature that can significantly affect performance of your queries. Some performance tips and notes: Joins are especially useful with lookup datasources, but in most cases, theLOOKUP function performs better than a join. Consider using the LOOKUP function if it is appropriate for your use case.When using joins in Druid SQL, keep in mind that it can generate subqueries that you did not explicitly include in your queries. Refer to the Druid SQL documentation for more details about when this happens and how to detect it.One common reason for implicit subquery generation is if the types of the two halves of an equality do not match. For example, since lookup keys are always strings, the condition druid.d JOIN lookup.l ON d.field = l.field will perform best if d.field is a string.As of Druid 27.0.0, the join operator must evaluate the condition for each row. In the future, we expect to implement both early and deferred condition evaluation, which we expect to improve performance considerably for common use cases.Currently, Druid does not support pushing down predicates (condition and filter) past a Join (i.e. into Join's children). Druid only supports pushing predicates into the join if they originated from above the join. Hence, the location of predicates and filters in your Druid SQL is very important. Also, as a result of this, comma joins should be avoided. Future work for joins Joins are an area of active development in Druid. The following features are missing today but may appear in future versions: Reordering of join operations to get the most performant plan.Preloaded dimension tables that are wider than lookups (i.e. supporting more than a single key and single value).RIGHT OUTER and FULL OUTER joins in the native query engine. 
Currently, they are partially implemented. Queries run but results are not always correct.Performance-related optimizations as mentioned in the previous section.Join conditions on a column containing a multi-value dimension. "},{"title":"unnest","type":1,"pageTitle":"Datasources","url":"/docs/27.0.0/querying/datasource#unnest","content":"info The unnest datasource is experimental. Its API and behavior are subject to change in future releases. It is not recommended to use this feature in production at this time. Use the unnest datasource to unnest a column with multiple values in an array. For example, you have a source column that looks like this: Nested[a, b] [c, d] [e, [f,g]] When you use the unnest datasource, the unnested column looks like this: Unnesteda b c d e [f, g] When unnesting data, keep the following in mind: The total number of rows will grow to accommodate the new rows that the unnested data occupy.You can unnest the values in more than one column in a single unnest datasource, but this can lead to a very large number of new rows depending on your dataset. The unnest datasource uses the following syntax: "dataSource": { "type": "unnest", "base": { "type": "table", "name": "nested_data" }, "virtualColumn": { "type": "expression", "name": "output_column", "expression": "\\"column_reference\\"" }, "unnestFilter": "optional_filter" } dataSource.type: Set this to unnest.dataSource.base: Defines the datasource you want to unnest. dataSource.base.type: The type of datasource you want to unnest, such as a table. dataSource.virtualColumn: Virtual column that references the nested values. The output name of this column is reused as the name of the column that contains unnested values. You can replace the source column with the unnested column by specifying the source column's name or a new column by specifying a different name. Outputting it to a new column can help you verify that you get the results that you expect but isn't required.unnestFilter: A filter only on the output column. You can omit this or set it to null if there are no filters. To learn more about how to use the unnest datasource, see the unnest tutorial. "},{"title":"Joins","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/joins","content":"Joins Apache Druid has two features related to joining of data: Join operators. These are available using a join datasource in native queries, or using the JOIN operator in Druid SQL. Refer to thejoin datasource documentation for information about how joins work in Druid native queries, or the multi-stage query join documentation for information about how joins work in multi-stage query tasks.Query-time lookups, simple key-to-value mappings. These are preloaded on all servers that are involved in queries and can be accessed with or without an explicit join operator. Refer to the lookupsdocumentation for more details. Whenever possible, for best performance it is good to avoid joins at query time. Often this can be accomplished by joining data before it is loaded into Druid. However, there are situations where joins or lookups are the best solution available despite the performance overhead, including: The fact-to-dimension (star and snowflake schema) case: you need to change dimension values after initial ingestion, and aren't able to reingest to do this. 
In this case, you can use lookups for your dimension tables.Your workload requires joins or filters on subqueries.","keywords":""},{"title":"Query dimensions","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/dimensionspecs","content":"","keywords":""},{"title":"DimensionSpec","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#dimensionspec","content":"A DimensionSpec defines how to transform dimension values prior to aggregation. "},{"title":"Default DimensionSpec","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#default-dimensionspec","content":"Returns dimension values as is and optionally renames the dimension. { "type" : "default", "dimension" : <dimension>, "outputName": <output_name>, "outputType": <"STRING"|"LONG"|"FLOAT"> } When specifying a DimensionSpec on a numeric column, you should include the type of the column in the outputType field. The outputType defaults to STRING when not specified. See Output Types for more details. "},{"title":"Extraction DimensionSpec","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#extraction-dimensionspec","content":"Returns dimension values transformed using the given extraction function. { "type" : "extraction", "dimension" : <dimension>, "outputName" : <output_name>, "outputType": <"STRING"|"LONG"|"FLOAT">, "extractionFn" : <extraction_function> } You can specify an outputType in an ExtractionDimensionSpec to apply type conversion to results before merging. The outputType defaults to STRING when not specified. Please refer to the Output Types section for more details. "},{"title":"Filtered DimensionSpecs","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#filtered-dimensionspecs","content":"A filtered DimensionSpec is only useful for multi-value dimensions. Say you have a row in Apache Druid that has a multi-value dimension with values ["v1", "v2", "v3"] and you send a groupBy/topN query grouping by that dimension with a query filter for a value of "v1". In the response you will get 3 rows containing "v1", "v2" and "v3". This behavior might be unintuitive for some use cases. This happens because Druid uses the "query filter" internally on bitmaps to match the row to include in query result processing. With multi-value dimensions, "query filter" behaves like a contains check, which matches the row with dimension value ["v1", "v2", "v3"]. See the section on "Multi-value columns" in segment for more details. Then the groupBy/topN processing pipeline "explodes" all multi-value dimensions resulting 3 rows for "v1", "v2" and "v3" each. In addition to "query filter", which efficiently selects the rows to be processed, you can use the filtered dimension spec to filter for specific values within the values of a multi-value dimension. These dimension specs take a delegate DimensionSpec and a filtering criteria. From the "exploded" rows, only rows matching the given filtering criteria are returned in the query result. The following filtered dimension spec defines the values to include or exclude as per the isWhitelist attribute value. { "type" : "listFiltered", "delegate" : <dimensionSpec>, "values": <array of strings>, "isWhitelist": <optional attribute for true/false, default is true> } The following filtered dimension spec retains only the values matching a regex. You should use the listFiltered function for inclusion and exclusion use cases because it is faster. 
{ "type" : "regexFiltered", "delegate" : <dimensionSpec>, "pattern": <java regex pattern> } The following filtered dimension spec retains only the values starting with the same prefix. { "type" : "prefixFiltered", "delegate" : <dimensionSpec>, "prefix": <prefix string> } For more details and examples, see multi-value dimensions. "},{"title":"Lookup DimensionSpecs","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#lookup-dimensionspecs","content":"You can use lookup dimension specs to define a lookup implementation as a dimension spec directly. Generally, there are two kinds of lookup implementations. The first kind is passed at the query time like map implementation. { "type":"lookup", "dimension":"dimensionName", "outputName":"dimensionOutputName", "replaceMissingValueWith":"missing_value", "retainMissingValue":false, "lookup":{"type": "map", "map":{"key":"value"}, "isOneToOne":false} } A property of retainMissingValue and replaceMissingValueWith can be specified at query time to hint how to handle missing values. Setting replaceMissingValueWith to "" has the same effect as setting it to null or omitting the property. Setting retainMissingValue to true will use the dimension's original value if it is not found in the lookup. The default values are replaceMissingValueWith = null and retainMissingValue = false which causes missing values to be treated as missing. It is illegal to set retainMissingValue = true and also specify a replaceMissingValueWith. A property optimize can be supplied to allow optimization of lookup based extraction filter (by default optimize = true). The second kind where it is not possible to pass at query time due to their size, will be based on an external lookup table or resource that is already registered via configuration file or/and Coordinator. { "type":"lookup", "dimension":"dimensionName", "outputName":"dimensionOutputName", "name":"lookupName" } "},{"title":"Output Types","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#output-types","content":"The dimension specs provide an option to specify the output type of a column's values. This is necessary as it is possible for a column with given name to have different value types in different segments; results will be converted to the type specified by outputType before merging. Note that not all use cases for DimensionSpec currently support outputType, the table below shows which use cases support this option: Query Type\tSupported?GroupBy (v1)\tno GroupBy (v2)\tyes TopN\tyes Search\tno Select\tno Cardinality Aggregator\tno "},{"title":"Extraction Functions","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#extraction-functions","content":"Extraction functions define the transformation applied to each dimension value. Transformations can be applied to both regular (string) dimensions, as well as the special __time dimension, which represents the current time bucket according to the query aggregation granularity. Note: for functions taking string values (such as regular expressions),__time dimension values will be formatted in ISO-8601 formatbefore getting passed to the extraction function. "},{"title":"Regular Expression Extraction Function","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#regular-expression-extraction-function","content":"Returns the first matching group for the given regular expression. If there is no match, it returns the dimension value as is. 
{ "type" : "regex", "expr" : <regular_expression>, "index" : <group to extract, default 1> "replaceMissingValue" : true, "replaceMissingValueWith" : "foobar" } For example, using "expr" : "(\\\\w\\\\w\\\\w).*" will transform'Monday', 'Tuesday', 'Wednesday' into 'Mon', 'Tue', 'Wed'. If "index" is set, it will control which group from the match to extract. Index zero extracts the string matching the entire pattern. If the replaceMissingValue property is true, the extraction function will transform dimension values that do not match the regex pattern to a user-specified String. Default value is false. The replaceMissingValueWith property sets the String that unmatched dimension values will be replaced with, if replaceMissingValue is true. If replaceMissingValueWith is not specified, unmatched dimension values will be replaced with nulls. For example, if expr is "(a\\w+)" in the example JSON above, a regex that matches words starting with the letter a, the extraction function will convert a dimension value like banana to foobar. "},{"title":"Partial Extraction Function","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#partial-extraction-function","content":"Returns the dimension value unchanged if the regular expression matches, otherwise returns null. { "type" : "partial", "expr" : <regular_expression> } "},{"title":"Search query extraction function","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#search-query-extraction-function","content":"Returns the dimension value unchanged if the given SearchQuerySpecmatches, otherwise returns null. { "type" : "searchQuery", "query" : <search_query_spec> } "},{"title":"Substring Extraction Function","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#substring-extraction-function","content":"Returns a substring of the dimension value starting from the supplied index and of the desired length. Both index and length are measured in the number of Unicode code units present in the string as if it were encoded in UTF-16. Note that some Unicode characters may be represented by two code units. This is the same behavior as the Java String class's "substring" method. If the desired length exceeds the length of the dimension value, the remainder of the string starting at index will be returned. If index is greater than the length of the dimension value, null will be returned. { "type" : "substring", "index" : 1, "length" : 4 } The length may be omitted for substring to return the remainder of the dimension value starting from index, or null if index greater than the length of the dimension value. { "type" : "substring", "index" : 3 } "},{"title":"Strlen Extraction Function","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#strlen-extraction-function","content":"Returns the length of dimension values, as measured in the number of Unicode code units present in the string as if it were encoded in UTF-16. Note that some Unicode characters may be represented by two code units. This is the same behavior as the Java String class's "length" method. null strings are considered as having zero length. { "type" : "strlen" } "},{"title":"Time Format Extraction Function","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#time-format-extraction-function","content":"Returns the dimension value formatted according to the given format string, time zone, and locale. 
For __time dimension values, this formats the time value bucketed by the aggregation granularity. For a regular dimension, it assumes the string is formatted in ISO-8601 date and time format. format : date time format for the resulting dimension value, in Joda Time DateTimeFormat, or null to use the default ISO8601 format.locale : locale (language and country) to use, given as an IETF BCP 47 language tag, e.g. en-US, en-GB, fr-FR, fr-CA, etc.timeZone : time zone to use in IANA tz database format, e.g. Europe/Berlin (this can possibly be different from the aggregation time zone).granularity : granularity to apply before formatting, or omit to not apply any granularity.asMillis : boolean value, set to true to treat input strings as millis rather than ISO8601 strings. Additionally, if format is null or not specified, output will be in millis rather than ISO8601. { "type" : "timeFormat", "format" : <output_format> (optional), "timeZone" : <time_zone> (optional, default UTC), "locale" : <locale> (optional, default current locale), "granularity" : <granularity> (optional, default none), "asMillis" : <true or false> (optional) } For example, the following dimension spec returns the day of the week for Montréal in French: { "type" : "extraction", "dimension" : "__time", "outputName" : "dayOfWeek", "extractionFn" : { "type" : "timeFormat", "format" : "EEEE", "timeZone" : "America/Montreal", "locale" : "fr" } } "},{"title":"Time Parsing Extraction Function","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#time-parsing-extraction-function","content":"Parses dimension values as timestamps using the given input format, and returns them formatted using the given output format. Note, if you are working with the __time dimension, you should consider using the time extraction function instead, which works on time values directly as opposed to string values. If "joda" is true, time formats are described in the Joda DateTimeFormat documentation. If "joda" is false (or unspecified) then formats are described in the SimpleDateFormat documentation. In general, we recommend setting "joda" to true since Joda format strings are more common in Druid APIs and since Joda handles certain edge cases (like weeks and weekyears near the start and end of calendar years) in a more ISO8601 compliant way. If a value cannot be parsed using the provided timeFormat, it will be returned as-is. { "type" : "time", "timeFormat" : <input_format>, "resultFormat" : <output_format>, "joda" : <true, false> } "},{"title":"JavaScript Extraction Function","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#javascript-extraction-function","content":"Returns the dimension value, as transformed by the given JavaScript function. For regular dimensions, the input value is passed as a string. For the __time dimension, the input value is passed as a number representing the number of milliseconds since January 1, 1970 UTC. Example for a regular dimension { "type" : "javascript", "function" : "function(str) { return str.substr(0, 3); }" } { "type" : "javascript", "function" : "function(str) { return str + '!!!'; }", "injective" : true } A property of injective specifies if the JavaScript function preserves uniqueness. The default value is false, meaning uniqueness is not preserved. Example for the __time dimension: { "type" : "javascript", "function" : "function(t) { return 'Second ' + Math.floor((t % 60000) / 1000); }" } info JavaScript-based functionality is disabled by default. 
Please refer to the Druid JavaScript programming guide for guidelines about using Druid's JavaScript functionality, including instructions on how to enable it. "},{"title":"Registered lookup extraction function","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#registered-lookup-extraction-function","content":"Lookups are a concept in Druid where dimension values are (optionally) replaced with new values. For more documentation on using lookups, please see Lookups. The "registeredLookup" extraction function lets you refer to a lookup that has been registered in the cluster-wide configuration. An example: { "type":"registeredLookup", "lookup":"some_lookup_name", "retainMissingValue":true } The retainMissingValue and replaceMissingValueWith properties can be specified at query time to hint how to handle missing values. Setting replaceMissingValueWith to "" has the same effect as setting it to null or omitting the property. Setting retainMissingValue to true will use the dimension's original value if it is not found in the lookup. The default values are replaceMissingValueWith = null and retainMissingValue = false, which causes missing values to be treated as missing. It is illegal to set retainMissingValue = true and also specify a replaceMissingValueWith. An injective property can override the lookup's own sense of whether or not it is injective. If left unspecified, Druid will use the registered cluster-wide lookup configuration. An optimize property can be supplied to allow optimization of lookup-based extraction filters (by default optimize = true). The optimization layer will run on the Broker and it will rewrite the extraction filter as an OR clause of selector filters. For instance, the following filter { "filter": { "type": "selector", "dimension": "product", "value": "bar_1", "extractionFn": { "type": "registeredLookup", "optimize": true, "lookup": "some_lookup_name" } } } will be rewritten as the following simpler query, assuming a lookup that maps "product_1" and "product_3" to the value "bar_1": { "filter":{ "type":"or", "fields":[ { "filter":{ "type":"selector", "dimension":"product", "value":"product_1" } }, { "filter":{ "type":"selector", "dimension":"product", "value":"product_3" } } ] } } A null dimension value can be mapped to a specific value by specifying the empty string as the key in your lookup file. This allows distinguishing between a null dimension and a lookup resulting in a null. For example, specifying {"":"bar","bat":"baz"} with dimension values [null, "foo", "bat"] and replacing missing values with "oof" will yield results of ["bar", "oof", "baz"]. Omitting the empty string key causes a null dimension value to be treated as missing and replaced with the missing-value replacement. For example, specifying {"bat":"baz"} with dimension values [null, "foo", "bat"] and replacing missing values with "oof" will yield results of ["oof", "oof", "baz"]. "},{"title":"Inline lookup extraction function","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#inline-lookup-extraction-function","content":"Lookups are a concept in Druid where dimension values are (optionally) replaced with new values. For more documentation on using lookups, please see Lookups. The "lookup" extraction function lets you specify an inline lookup map without registering one in the cluster-wide configuration. 
Examples: { "type":"lookup", "lookup":{ "type":"map", "map":{"foo":"bar", "baz":"bat"} }, "retainMissingValue":true, "injective":true } { "type":"lookup", "lookup":{ "type":"map", "map":{"foo":"bar", "baz":"bat"} }, "retainMissingValue":false, "injective":false, "replaceMissingValueWith":"MISSING" } The inline lookup should be of type map. The properties retainMissingValue, replaceMissingValueWith, injective, and optimize behave similarly to the registered lookup extraction function. "},{"title":"Cascade Extraction Function","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#cascade-extraction-function","content":"Provides chained execution of extraction functions. The extractionFns property contains an array of extraction functions, which are executed in array index order. An example of chaining the regular expression, JavaScript, and substring extraction functions is as follows. { "type" : "cascade", "extractionFns": [ { "type" : "regex", "expr" : "/([^/]+)/", "replaceMissingValue": false, "replaceMissingValueWith": null }, { "type" : "javascript", "function" : "function(str) { return \\"the \\".concat(str) }" }, { "type" : "substring", "index" : 0, "length" : 7 } ] } It transforms dimension values with the specified extraction functions in the order listed. For example, '/druid/prod/historical' is transformed to 'the dru' because the regular expression extraction function first transforms it to 'druid', then the JavaScript extraction function transforms it to 'the druid', and lastly the substring extraction function transforms it to 'the dru'. "},{"title":"String Format Extraction Function","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#string-format-extraction-function","content":"Returns the dimension value formatted according to the given format string. { "type" : "stringFormat", "format" : <sprintf_expression>, "nullHandling" : <optional attribute for handling null value> } For example, if you want to concatenate "[" and "]" before and after the actual dimension value, specify "[%s]" as the format string. "nullHandling" can be one of nullString, emptyString, or returnNull. With the "[%s]" format, these configurations result in [null], [], and null, respectively. The default is nullString. "},{"title":"Upper and Lower extraction functions.","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#upper-and-lower-extraction-functions","content":"Returns the dimension values as all upper case or lower case. Optionally, you can specify the language to use in order to perform the upper or lower transformation { "type" : "upper", "locale":"fr" } or without setting "locale" (in this case, the default locale of the Java Virtual Machine instance is used) { "type" : "lower" } "},{"title":"Bucket Extraction Function","type":1,"pageTitle":"Query dimensions","url":"/docs/27.0.0/querying/dimensionspecs#bucket-extraction-function","content":"The bucket extraction function buckets numerical values by converting each value in a range of the given size to the same base value. Non-numeric values are converted to null. size : the size of the buckets (optional, default 1)offset : the offset for the buckets (optional, default 0) The following extraction function creates buckets of 5 starting from 2. In this case, values in the range of [2, 7) will be converted to 2, values in [7, 12) will be converted to 7, etc. 
{ "type" : "bucket", "size" : 5, "offset" : 2 } "},{"title":"Sorting and limiting (groupBy)","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/limitspec","content":"","keywords":""},{"title":"DefaultLimitSpec","type":1,"pageTitle":"Sorting and limiting (groupBy)","url":"/docs/27.0.0/querying/limitspec#defaultlimitspec","content":"The default limit spec takes a limit and the list of columns to do an orderBy operation over. The grammar is: { "type" : "default", "limit" : <optional integer>, "offset" : <optional integer>, "columns" : [<optional list of OrderByColumnSpec>], } The "limit" parameter is the maximum number of rows to return. The "offset" parameter tells Druid to skip this many rows when returning results. If both "limit" and "offset" are provided, then "offset" will be applied first, followed by "limit". For example, a spec with limit 100 and offset 10 will return 100 rows starting from row number 10. Internally, the query is executed by extending the limit by the offset and then discarding a number of rows equal to the offset. This means that raising the offset will increase resource usage by an amount similar to increasing the limit. Together, "limit" and "offset" can be used to implement pagination. However, note that if the underlying datasource is modified in between page fetches in ways that affect overall query results, then the different pages will not necessarily align with each other. OrderByColumnSpec OrderByColumnSpecs indicate how to do order by operations. Each order-by condition can be a jsonString or a map of the following form: { "dimension" : "<Any dimension or metric name>", "direction" : <"ascending"|"descending">, "dimensionOrder" : <"lexicographic"(default)|"alphanumeric"|"strlen"|"numeric"> } If only the dimension is provided (as a JSON string), the default order-by is ascending with lexicographic sorting. See Sorting Orders for more information on the sorting orders specified by "dimensionOrder". "},{"title":"Query granularities","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/granularities","content":"","keywords":""},{"title":"Simple Granularities","type":1,"pageTitle":"Query granularities","url":"/docs/27.0.0/querying/granularities#simple-granularities","content":"Simple granularities are specified as a string and bucket timestamps by their UTC time (e.g., days start at 00:00 UTC). Druid supports the following granularity strings: allnonesecondminutefive_minuteten_minutefifteen_minutethirty_minutehoursix_houreight_hourdayweek*monthquarter year The minimum and maximum granularities are none and all, described as follows: all buckets everything into a single bucket.none does not mean zero bucketing. It buckets data to millisecond granularity—the granularity of the internal index. You can think of none as equivalent to millisecond. info Do not use none in a timeseries query; Druid fills empty interior time buckets with zeroes, meaning the output will contain results for every single millisecond in the requested interval. *Avoid using the week granularity for partitioning at ingestion time, because weeks don't align neatly with months and years, making it difficult to partition by coarser granularities later. 
Example: Suppose you have data below stored in Apache Druid with millisecond ingestion granularity, {"timestamp": "2013-08-31T01:02:33Z", "page": "AAA", "language" : "en"} {"timestamp": "2013-09-01T01:02:33Z", "page": "BBB", "language" : "en"} {"timestamp": "2013-09-02T23:32:45Z", "page": "CCC", "language" : "en"} {"timestamp": "2013-09-03T03:32:45Z", "page": "DDD", "language" : "en"} After submitting a groupBy query with hour granularity, { "queryType":"groupBy", "dataSource":"my_dataSource", "granularity":"hour", "dimensions":[ "language" ], "aggregations":[ { "type":"count", "name":"count" } ], "intervals":[ "2000-01-01T00:00Z/3000-01-01T00:00Z" ] } you will get [ { "version" : "v1", "timestamp" : "2013-08-31T01:00:00.000Z", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-09-01T01:00:00.000Z", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-09-02T23:00:00.000Z", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-09-03T03:00:00.000Z", "event" : { "count" : 1, "language" : "en" } } ] Note that all the empty buckets are discarded. If you change the granularity to day, you will get [ { "version" : "v1", "timestamp" : "2013-08-31T00:00:00.000Z", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-09-01T00:00:00.000Z", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-09-02T00:00:00.000Z", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-09-03T00:00:00.000Z", "event" : { "count" : 1, "language" : "en" } } ] If you change the granularity to none, you will get the same results as setting it to the ingestion granularity. [ { "version" : "v1", "timestamp" : "2013-08-31T01:02:33.000Z", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-09-01T01:02:33.000Z", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-09-02T23:32:45.000Z", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-09-03T03:32:45.000Z", "event" : { "count" : 1, "language" : "en" } } ] Having a query time granularity that is smaller than the queryGranularity parameter set atingestion time is unreasonable because information about that smaller granularity is not present in the indexed data. So, if the query time granularity is smaller than the ingestion time query granularity, Druid produces results that are equivalent to having set granularity to queryGranularity. If you change the granularity to all, you will get everything aggregated in 1 bucket, [ { "version" : "v1", "timestamp" : "2000-01-01T00:00:00.000Z", "event" : { "count" : 4, "language" : "en" } } ] "},{"title":"Duration Granularities","type":1,"pageTitle":"Query granularities","url":"/docs/27.0.0/querying/granularities#duration-granularities","content":"Duration granularities are specified as an exact duration in milliseconds and timestamps are returned as UTC. Duration granularity values are in millis. They also support specifying an optional origin, which defines where to start counting time buckets from (defaults to 1970-01-01T00:00:00Z). {"type": "duration", "duration": 7200000} This chunks up every 2 hours. {"type": "duration", "duration": 3600000, "origin": "2012-01-01T00:30:00Z"} This chunks up every hour on the half-hour. 
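The duration-with-origin arithmetic can be illustrated with a short Python sketch (not Druid code; the helper names are illustrative): a bucket starts at the origin plus a whole number of durations, so a one-hour duration with a half-hour origin yields buckets aligned on the half-hour.

```python
from datetime import datetime, timezone

def duration_bucket_start(timestamp_ms: int, duration_ms: int, origin_ms: int = 0) -> int:
    """Start of the duration bucket containing timestamp_ms, counted from origin_ms."""
    offset = (timestamp_ms - origin_ms) % duration_ms
    return timestamp_ms - offset

def to_ms(iso: str) -> int:
    return int(datetime.fromisoformat(iso).timestamp() * 1000)

# One-hour buckets counted from 2012-01-01T00:30:00Z start on the half-hour.
origin = to_ms("2012-01-01T00:30:00+00:00")
ts = to_ms("2013-08-31T01:02:33+00:00")
start = duration_bucket_start(ts, 3_600_000, origin)
print(datetime.fromtimestamp(start / 1000, tz=timezone.utc).isoformat())
# 2013-08-31T00:30:00+00:00
```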
Example: Reusing the data in the previous example, after submitting a groupBy query with 24 hours duration, { "queryType":"groupBy", "dataSource":"my_dataSource", "granularity":{"type": "duration", "duration": "86400000"}, "dimensions":[ "language" ], "aggregations":[ { "type":"count", "name":"count" } ], "intervals":[ "2000-01-01T00:00Z/3000-01-01T00:00Z" ] } you will get [ { "version" : "v1", "timestamp" : "2013-08-31T00:00:00.000Z", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-09-01T00:00:00.000Z", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-09-02T00:00:00.000Z", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-09-03T00:00:00.000Z", "event" : { "count" : 1, "language" : "en" } } ] if you set the origin for the granularity to 2012-01-01T00:30:00Z, "granularity":{"type": "duration", "duration": "86400000", "origin":"2012-01-01T00:30:00Z"} you will get [ { "version" : "v1", "timestamp" : "2013-08-31T00:30:00.000Z", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-09-01T00:30:00.000Z", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-09-02T00:30:00.000Z", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-09-03T00:30:00.000Z", "event" : { "count" : 1, "language" : "en" } } ] Note that the timestamp for each bucket starts at the 30th minute. "},{"title":"Period Granularities","type":1,"pageTitle":"Query granularities","url":"/docs/27.0.0/querying/granularities#period-granularities","content":"Period granularities are specified as arbitrary period combinations of years, months, weeks, hours, minutes and seconds (e.g. P2W, P3M, PT1H30M, PT0.750S) in ISO8601 format. They support specifying a time zone which determines where period boundaries start as well as the timezone of the returned timestamps. By default, years start on the first of January, months start on the first of the month and weeks start on Mondays unless an origin is specified. Time zone is optional (defaults to UTC). Origin is optional (defaults to 1970-01-01T00:00:00 in the given time zone). {"type": "period", "period": "P2D", "timeZone": "America/Los_Angeles"} This will bucket by two-day chunks in the Pacific timezone. {"type": "period", "period": "P3M", "timeZone": "America/Los_Angeles", "origin": "2012-02-01T00:00:00-08:00"} This will bucket by 3-month chunks in the Pacific timezone where the three-month quarters are defined as starting from February. Example Reusing the data in the previous example, if you submit a groupBy query with 1 day period in Pacific timezone, { "queryType":"groupBy", "dataSource":"my_dataSource", "granularity":{"type": "period", "period": "P1D", "timeZone": "America/Los_Angeles"}, "dimensions":[ "language" ], "aggregations":[ { "type":"count", "name":"count" } ], "intervals":[ "1999-12-31T16:00:00.000-08:00/2999-12-31T16:00:00.000-08:00" ] } you will get [ { "version" : "v1", "timestamp" : "2013-08-30T00:00:00.000-07:00", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-08-31T00:00:00.000-07:00", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-09-02T00:00:00.000-07:00", "event" : { "count" : 2, "language" : "en" } } ] Note that the timestamp for each bucket has been converted to Pacific time. 
Row {"timestamp": "2013-09-02T23:32:45Z", "page": "CCC", "language" : "en"} and{"timestamp": "2013-09-03T03:32:45Z", "page": "DDD", "language" : "en"} are put in the same bucket because they are in the same day in Pacific time. Also note that the intervals in groupBy query will not be converted to the timezone specified, the timezone specified in granularity is only applied on the query results. If you set the origin for the granularity to 1970-01-01T20:30:00-08:00, "granularity":{"type": "period", "period": "P1D", "timeZone": "America/Los_Angeles", "origin": "1970-01-01T20:30:00-08:00"} you will get [ { "version" : "v1", "timestamp" : "2013-08-29T20:30:00.000-07:00", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-08-30T20:30:00.000-07:00", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-09-01T20:30:00.000-07:00", "event" : { "count" : 1, "language" : "en" } }, { "version" : "v1", "timestamp" : "2013-09-02T20:30:00.000-07:00", "event" : { "count" : 1, "language" : "en" } } ] Note that the origin you specified has nothing to do with the timezone, it only serves as a starting point for locating the very first granularity bucket. In this case, Row {"timestamp": "2013-09-02T23:32:45Z", "page": "CCC", "language" : "en"} and {"timestamp": "2013-09-03T03:32:45Z", "page": "DDD", "language" : "en"}are not in the same bucket. Supported Time Zones Timezone support is provided by the Joda Time library, which uses the standard IANA time zones. See the Joda Time supported timezones. "},{"title":"GroupBy queries","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/groupbyquery","content":"","keywords":""},{"title":"Behavior on multi-value dimensions","type":1,"pageTitle":"GroupBy queries","url":"/docs/27.0.0/querying/groupbyquery#behavior-on-multi-value-dimensions","content":"groupBy queries can group on multi-value dimensions. When grouping on a multi-value dimension, all values from matching rows will be used to generate one group per value. It's possible for a query to return more groups than there are rows. For example, a groupBy on the dimension tags with filter "t1" AND "t3" would match only row1, and generate a result with three groups: t1, t2, and t3. If you only need to include values that match your filter, you can use a filtered dimensionSpec. This can also improve performance. See Multi-value dimensions for more details. "},{"title":"More on subtotalsSpec","type":1,"pageTitle":"GroupBy queries","url":"/docs/27.0.0/querying/groupbyquery#more-on-subtotalsspec","content":"The subtotals feature allows computation of multiple sub-groupings in a single query. To use this feature, add a "subtotalsSpec" to your query as a list of subgroup dimension sets. It should contain the outputName from dimensions in your dimensions attribute, in the same order as they appear in the dimensions attribute (although, of course, you may skip some). For example, consider a groupBy query like this one: { "type": "groupBy", ... ... "dimensions": [ { "type" : "default", "dimension" : "d1col", "outputName": "D1" }, { "type" : "extraction", "dimension" : "d2col", "outputName" : "D2", "extractionFn" : extraction_func }, { "type":"lookup", "dimension":"d3col", "outputName":"D3", "name":"my_lookup" } ], ... ... "subtotalsSpec":[ ["D1", "D2", D3"], ["D1", "D3"], ["D3"]], .. 
} The result of the subtotalsSpec would be equivalent to concatenating the result of three groupBy queries, with the "dimensions" field being ["D1", "D2", "D3"], ["D1", "D3"] and ["D3"], given the DimensionSpec shown above. The response for the query above would look something like: [ { "version" : "v1", "timestamp" : "t1", "event" : { "D1": "..", "D2": "..", "D3": ".." } }, { "version" : "v1", "timestamp" : "t2", "event" : { "D1": "..", "D2": "..", "D3": ".." } }, ... ... { "version" : "v1", "timestamp" : "t1", "event" : { "D1": "..", "D2": null, "D3": ".." } }, { "version" : "v1", "timestamp" : "t2", "event" : { "D1": "..", "D2": null, "D3": ".." } }, ... ... { "version" : "v1", "timestamp" : "t1", "event" : { "D1": null, "D2": null, "D3": ".." } }, { "version" : "v1", "timestamp" : "t2", "event" : { "D1": null, "D2": null, "D3": ".." } }, ... ] info Notice that dimensions that are not included in an individual subtotalsSpec grouping are returned with a null value. This response format represents a behavior change as of Apache Druid 0.18.0. In release 0.17.0 and earlier, such dimensions were entirely excluded from the result. If you were relying on this old behavior to determine whether a particular dimension was not part of a subtotal grouping, you can now use the Grouping aggregator instead. "},{"title":"Implementation details","type":1,"pageTitle":"GroupBy queries","url":"/docs/27.0.0/querying/groupbyquery#implementation-details","content":""},{"title":"Strategies","type":1,"pageTitle":"GroupBy queries","url":"/docs/27.0.0/querying/groupbyquery#strategies","content":"GroupBy queries can be executed using two different strategies. The default strategy for a cluster is determined by the "druid.query.groupBy.defaultStrategy" runtime property on the Broker. This can be overridden using "groupByStrategy" in the query context. If neither the context field nor the property is set, the "v2" strategy will be used. "v2", the default, is designed to offer better performance and memory management. This strategy generates per-segment results using a fully off-heap map. Data processes merge the per-segment results using a fully off-heap concurrent facts map combined with an on-heap string dictionary. This may optionally involve spilling to disk. Data processes return sorted results to the Broker, which merges result streams using an N-way merge. The Broker materializes the results if necessary (e.g. if the query sorts on columns other than its dimensions). Otherwise, it streams results back as they are merged. "v1", a legacy engine, generates per-segment results on data processes (Historical, realtime, MiddleManager) using a map which is partially on-heap (dimension keys and the map itself) and partially off-heap (the aggregated values). Data processes then merge the per-segment results using Druid's indexing mechanism. This merging is multi-threaded by default, but can optionally be single-threaded. The Broker merges the final result set using Druid's indexing mechanism again. The Broker merging is always single-threaded. Because the Broker merges results using the indexing mechanism, it must materialize the full result set before returning any results. On both the data processes and the Broker, the merging index is fully on-heap by default, but it can optionally store aggregated values off-heap. 
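As a concrete illustration of the per-query override mentioned above, the following Python sketch (using the requests library; the Broker address is a placeholder, and the datasource and column names simply mirror the earlier granularity examples) posts a groupBy query whose context pins groupByStrategy for that query only:

```python
import requests

# Placeholder Broker address; adjust for your cluster.
BROKER_URL = "http://localhost:8082/druid/v2/"

query = {
    "queryType": "groupBy",
    "dataSource": "my_dataSource",
    "granularity": "day",
    "dimensions": ["language"],
    "aggregations": [{"type": "count", "name": "count"}],
    "intervals": ["2013-08-31/2013-09-04"],
    # Per-query override of druid.query.groupBy.defaultStrategy.
    "context": {"groupByStrategy": "v1"},
}

response = requests.post(BROKER_URL, json=query, timeout=60)
response.raise_for_status()
for row in response.json():
    print(row["timestamp"], row["event"])
```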
"},{"title":"Differences between v1 and v2","type":1,"pageTitle":"GroupBy queries","url":"/docs/27.0.0/querying/groupbyquery#differences-between-v1-and-v2","content":"Query API and results are compatible between the two engines; however, there are some differences from a cluster configuration perspective: groupBy v1 controls resource usage using a row-based limit (maxResults) whereas groupBy v2 uses bytes-based limits. In addition, groupBy v1 merges results on-heap, whereas groupBy v2 merges results off-heap. These factors mean that memory tuning and resource limits behave differently between v1 and v2. In particular, due to this, some queries that can complete successfully in one engine may exceed resource limits and fail with the other engine. See the "Memory tuning and resource limits" section for more details.groupBy v1 imposes no limit on the number of concurrently running queries, whereas groupBy v2 controls memory usage by using a finite-sized merge buffer pool. By default, the number of merge buffers is 1/4 the number of processing threads. You can adjust this as necessary to balance concurrency and memory usage.groupBy v1 supports caching on either the Broker or Historical processes, whereas groupBy v2 only supports caching on Historical processes.groupBy v2 supports both array-based aggregation and hash-based aggregation. The array-based aggregation is used only when the grouping key is a single indexed string column. In array-based aggregation, the dictionary-encoded value is used as the index, so the aggregated values in the array can be accessed directly without finding buckets based on hashing. "},{"title":"Memory tuning and resource limits","type":1,"pageTitle":"GroupBy queries","url":"/docs/27.0.0/querying/groupbyquery#memory-tuning-and-resource-limits","content":"When using groupBy v2, four parameters control resource usage and limits: druid.processing.buffer.sizeBytes: size of the off-heap hash table used for aggregation, per query, in bytes. At most druid.processing.numMergeBuffers of these will be created at once, which also serves as an upper limit on the number of concurrently running groupBy queries.druid.query.groupBy.maxSelectorDictionarySize: size of the on-heap segment-level dictionary used when grouping on string or array-valued expressions that do not have pre-existing dictionaries. There is at most one dictionary per processing thread; therefore there are up to druid.processing.numThreads of these. Note that the size is based on a rough estimate of the dictionary footprint.druid.query.groupBy.maxMergingDictionarySize: size of the on-heap query-level dictionary used when grouping on any string expression. There is at most one dictionary per concurrently-running query; therefore there are up todruid.server.http.numThreads of these. Note that the size is based on a rough estimate of the dictionary footprint.druid.query.groupBy.maxOnDiskStorage: amount of space on disk used for aggregation, per query, in bytes. By default, this is 0, which means aggregation will not use disk. If maxOnDiskStorage is 0 (the default) then a query that exceeds either the on-heap dictionary limit, or the off-heap aggregation table limit, will fail with a "Resource limit exceeded" error describing the limit that was exceeded. If maxOnDiskStorage is greater than 0, queries that exceed the in-memory limits will start using disk for aggregation. In this case, when either the on-heap dictionary or off-heap hash table fills up, partially aggregated records will be sorted and flushed to disk. 
Then, both in-memory structures will be cleared out for further aggregation. Queries that then go on to exceed maxOnDiskStorage will fail with a "Resource limit exceeded" error indicating that they ran out of disk space. With groupBy v2, cluster operators should make sure that the off-heap hash tables and on-heap merging dictionaries will not exceed available memory for the maximum possible concurrent query load (given bydruid.processing.numMergeBuffers). See the basic cluster tuning guidefor more details about direct memory usage, organized by Druid process type. Brokers do not need merge buffers for basic groupBy queries. Queries with subqueries (using a query dataSource) require one merge buffer if there is a single subquery, or two merge buffers if there is more than one layer of nested subqueries. Queries with subtotals need one merge buffer. These can stack on top of each other: a groupBy query with multiple layers of nested subqueries, and that also uses subtotals, will need three merge buffers. Historicals and ingestion tasks need one merge buffer for each groupBy query, unless parallel combination is enabled, in which case they need two merge buffers per query. When using groupBy v1, all aggregation is done on-heap, and resource limits are done through the parameterdruid.query.groupBy.maxResults. This is a cap on the maximum number of results in a result set. Queries that exceed this limit will fail with a "Resource limit exceeded" error indicating they exceeded their row limit. Cluster operators should make sure that the on-heap aggregations will not exceed available JVM heap space for the expected concurrent query load. "},{"title":"Performance tuning for groupBy v2","type":1,"pageTitle":"GroupBy queries","url":"/docs/27.0.0/querying/groupbyquery#performance-tuning-for-groupby-v2","content":"Limit pushdown optimization Druid pushes down the limit spec in groupBy queries to the segments on Historicals wherever possible to early prune unnecessary intermediate results and minimize the amount of data transferred to Brokers. By default, this technique is applied only when all fields in the orderBy spec is a subset of the grouping keys. This is because the limitPushDown doesn't guarantee the exact results if the orderBy spec includes any fields that are not in the grouping keys. However, you can enable this technique even in such cases if you can sacrifice some accuracy for fast query processing like in topN queries. See forceLimitPushDown in advanced groupBy v2 configurations. Optimizing hash table The groupBy v2 engine uses an open addressing hash table for aggregation. The hash table is initialized with a given initial bucket number and gradually grows on buffer full. On hash collisions, the linear probing technique is used. The default number of initial buckets is 1024 and the default max load factor of the hash table is 0.7. If you can see too many collisions in the hash table, you can adjust these numbers. See bufferGrouperInitialBuckets and bufferGrouperMaxLoadFactor in Advanced groupBy v2 configurations. Parallel combine Once a Historical finishes aggregation using the hash table, it sorts the aggregated results and merges them before sending to the Broker for N-way merge aggregation in the broker. By default, Historicals use all their available processing threads (configured by druid.processing.numThreads) for aggregation, but use a single thread for sorting and merging aggregates which is an http thread to send data to Brokers. 
This is to prevent some heavy groupBy queries from blocking other queries. In Druid, the processing threads are shared between all submitted queries and they are not interruptible. This means that if a heavy query takes all available processing threads, all other queries might be blocked until the heavy query finishes. Because groupBy queries usually take longer than timeseries or topN queries, they should release processing threads as soon as possible. However, you might care about the performance of some really heavy groupBy queries. Usually, the performance bottleneck of heavy groupBy queries is merging sorted aggregates. In such cases, you can use processing threads for it as well. This is called parallel combine. To enable parallel combine, see numParallelCombineThreads in Advanced groupBy v2 configurations. Note that parallel combine can be enabled only when data is actually spilled (see Memory tuning and resource limits). Once parallel combine is enabled, the groupBy v2 engine can create a combining tree for merging sorted aggregates. Each intermediate node of the tree is a thread merging aggregates from the child nodes. The leaf node threads read and merge aggregates from hash tables including spilled ones. Usually, leaf nodes are slower than intermediate nodes because they need to read data from disk. As a result, fewer threads are used for intermediate nodes by default. You can change the degree of intermediate nodes. See intermediateCombineDegree in Advanced groupBy v2 configurations. Please note that each Historical needs two merge buffers to process a groupBy v2 query with parallel combine: one for computing intermediate aggregates from each segment and another for combining intermediate aggregates in parallel. "},{"title":"Alternatives","type":1,"pageTitle":"GroupBy queries","url":"/docs/27.0.0/querying/groupbyquery#alternatives","content":"There are some situations where other query types may be a better choice than groupBy. For queries with no "dimensions" (i.e. grouping by time only), the Timeseries query will generally be faster than groupBy. The major differences are that it is implemented in a fully streaming manner (taking advantage of the fact that segments are already sorted on time) and does not need to use a hash table for merging. For queries with a single "dimensions" element (i.e. grouping by one string dimension), the TopN query will sometimes be faster than groupBy. This is especially true if you are ordering by a metric and find approximate results acceptable. "},{"title":"Nested groupBys","type":1,"pageTitle":"GroupBy queries","url":"/docs/27.0.0/querying/groupbyquery#nested-groupbys","content":"Nested groupBys (dataSource of type "query") are performed differently for "v1" and "v2". The Broker first runs the inner groupBy query in the usual way. The "v1" strategy then materializes the inner query's results on-heap with Druid's indexing mechanism, and runs the outer query on these materialized results. The "v2" strategy runs the outer query on the inner query's results stream with an off-heap fact map and an on-heap string dictionary that can spill to disk. Both strategies perform the outer query on the Broker in a single-threaded fashion. "},{"title":"Configurations","type":1,"pageTitle":"GroupBy queries","url":"/docs/27.0.0/querying/groupbyquery#configurations","content":"This section describes the configurations for groupBy queries. You can set the runtime properties in the runtime.properties file on Broker, Historical, and MiddleManager processes. 
You can set the query context parameters through the query context. Configurations for groupBy v2 Supported runtime properties: Property\tDescription\tDefaultdruid.query.groupBy.maxSelectorDictionarySize\tMaximum amount of heap space (approximately) to use for per-segment string dictionaries. If set to 0 (automatic), each query's dictionary can use 10% of the Java heap divided by druid.processing.numMergeBuffers, or 1GB, whichever is smaller. See Memory tuning and resource limits for details on changing this property.\t0 (automatic) druid.query.groupBy.maxMergingDictionarySize\tMaximum amount of heap space (approximately) to use for per-query string dictionaries. When the dictionary exceeds this size, a spill to disk will be triggered. If set to 0 (automatic), each query's dictionary uses 30% of the Java heap divided by druid.processing.numMergeBuffers, or 1GB, whichever is smaller. See Memory tuning and resource limits for details on changing this property.\t0 (automatic) druid.query.groupBy.maxOnDiskStorage\tMaximum amount of disk space to use, per-query, for spilling result sets to disk when either the merging buffer or the dictionary fills up. Queries that exceed this limit will fail. Set to zero to disable disk spilling.\t0 (disabled) Supported query contexts: Key\tDescriptionmaxOnDiskStorage\tCan be used to lower the value of druid.query.groupBy.maxOnDiskStorage for this query. "},{"title":"Advanced configurations","type":1,"pageTitle":"GroupBy queries","url":"/docs/27.0.0/querying/groupbyquery#advanced-configurations","content":"Common configurations for all groupBy strategies Supported runtime properties: Property\tDescription\tDefaultdruid.query.groupBy.defaultStrategy\tDefault groupBy query strategy.\tv2 druid.query.groupBy.singleThreaded\tMerge results using a single thread.\tfalse druid.query.groupBy.intermediateResultAsMapCompat\tWhether Brokers are able to understand map-based result rows. Setting this to true adds some overhead to all groupBy queries. It is required for compatibility with data servers running versions older than 0.16.0, which introduced array-based result rows.\tfalse Supported query contexts: Key\tDescriptiongroupByStrategy\tOverrides the value of druid.query.groupBy.defaultStrategy for this query. groupByIsSingleThreaded\tOverrides the value of druid.query.groupBy.singleThreaded for this query. GroupBy v2 configurations Supported runtime properties: Property\tDescription\tDefaultdruid.query.groupBy.bufferGrouperInitialBuckets\tInitial number of buckets in the off-heap hash table used for grouping results. Set to 0 to use a reasonable default (1024).\t0 druid.query.groupBy.bufferGrouperMaxLoadFactor\tMaximum load factor of the off-heap hash table used for grouping results. When the load factor exceeds this size, the table will be grown or spilled to disk. Set to 0 to use a reasonable default (0.7).\t0 druid.query.groupBy.forceHashAggregation\tForce to use hash-based aggregation.\tfalse druid.query.groupBy.intermediateCombineDegree\tNumber of intermediate nodes combined together in the combining tree. Higher degrees will need less threads which might be helpful to improve the query performance by reducing the overhead of too many threads if the server has sufficiently powerful cpu cores.\t8 druid.query.groupBy.numParallelCombineThreads\tHint for the number of parallel combining threads. This should be larger than 1 to turn on the parallel combining feature. 
The actual number of threads used for parallel combining is min(druid.query.groupBy.numParallelCombineThreads, druid.processing.numThreads).\t1 (disabled) druid.query.groupBy.applyLimitPushDownToSegment\tIf Broker pushes limit down to queryable data server (historicals, peons) then limit results during segment scan. If typically there are a large number of segments taking part in a query on a data server, this setting may counterintuitively reduce performance if enabled.\tfalse (disabled) Supported query contexts: Key\tDescription\tDefaultbufferGrouperInitialBuckets\tOverrides the value of druid.query.groupBy.bufferGrouperInitialBuckets for this query.\tNone bufferGrouperMaxLoadFactor\tOverrides the value of druid.query.groupBy.bufferGrouperMaxLoadFactor for this query.\tNone forceHashAggregation\tOverrides the value of druid.query.groupBy.forceHashAggregation\tNone intermediateCombineDegree\tOverrides the value of druid.query.groupBy.intermediateCombineDegree\tNone numParallelCombineThreads\tOverrides the value of druid.query.groupBy.numParallelCombineThreads\tNone mergeThreadLocal\tWhether merge buffers should always be split into thread-local buffers. Setting this to true reduces thread contention, but uses memory less efficiently. This tradeoff is beneficial when memory is plentiful.\tfalse sortByDimsFirst\tSort the results first by dimension values and then by timestamp.\tfalse forceLimitPushDown\tWhen all fields in the orderby are part of the grouping key, the Broker will push limit application down to the Historical processes. When the sorting order uses fields that are not in the grouping key, applying this optimization can result in approximate results with unknown accuracy, so this optimization is disabled by default in that case. Enabling this context flag turns on limit push down for limit/orderbys that contain non-grouping key columns.\tfalse applyLimitPushDownToSegment\tIf Broker pushes limit down to queryable nodes (historicals, peons) then limit results during segment scan. This context value can be used to override druid.query.groupBy.applyLimitPushDownToSegment.\ttrue groupByEnableMultiValueUnnesting\tSafety flag to enable/disable the implicit unnesting on multi value column's as part of the grouping key. 'true' indicates multi-value grouping keys are unnested. 'false' returns an error if a multi value column is found as part of the grouping key.\ttrue GroupBy v1 configurations Supported runtime properties: Property\tDescription\tDefaultdruid.query.groupBy.maxIntermediateRows\tMaximum number of intermediate rows for the per-segment grouping engine. This is a tuning parameter that does not impose a hard limit; rather, it potentially shifts merging work from the per-segment engine to the overall merging index. Queries that exceed this limit will not fail.\t50000 druid.query.groupBy.maxResults\tMaximum number of results. Queries that exceed this limit will fail.\t500000 Supported query contexts: Key\tDescription\tDefaultmaxIntermediateRows\tIgnored by groupBy v2. Can be used to lower the value of druid.query.groupBy.maxIntermediateRows for a groupBy v1 query.\tNone maxResults\tIgnored by groupBy v2. Can be used to lower the value of druid.query.groupBy.maxResults for a groupBy v1 query.\tNone useOffheap\tIgnored by groupBy v2, and no longer supported for groupBy v1. Enabling this option with groupBy v1 will result in an error. 
For off-heap aggregation, switch to groupBy v2, which always operates off-heap.\tfalse Array based result rows Internally, Druid always uses an array-based representation of groupBy result rows, but by default this is translated into a map-based result format at the Broker. To reduce the overhead of this translation, results may also be returned from the Broker directly in the array-based format if resultAsArray is set to true on the query context. Each row is positional, and has the following fields, in order: Timestamp (optional; only if granularity != ALL)Dimensions (in order)Aggregators (in order)Post-aggregators (optional; in order, if present) This schema is not available on the response, so it must be computed from the issued query in order to properly read the results. "},{"title":"Having filters (groupBy)","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/having","content":"","keywords":""},{"title":"Query filters","type":1,"pageTitle":"Having filters (groupBy)","url":"/docs/27.0.0/querying/having#query-filters","content":"Query filter HavingSpecs allow all Druid query filters to be used in the Having part of the query. The grammar for a query filter HavingSpec is: { "queryType": "groupBy", "dataSource": "sample_datasource", ... "having": { "type" : "filter", "filter" : <any Druid query filter> } } For example, to use a selector filter: { "queryType": "groupBy", "dataSource": "sample_datasource", ... "having": { "type" : "filter", "filter" : { "type": "selector", "dimension" : "<dimension>", "value" : "<dimension_value>" } } } You can use "filter" HavingSpecs to filter on the timestamp of result rows by applying a filter to the "__time" column. "},{"title":"Numeric filters","type":1,"pageTitle":"Having filters (groupBy)","url":"/docs/27.0.0/querying/having#numeric-filters","content":"The simplest having clause is a numeric filter. Numeric filters can be used as the base filters for more complex boolean expressions of filters. Here's an example of a having-clause numeric filter: { "queryType": "groupBy", "dataSource": "sample_datasource", ... "having": { "type": "greaterThan", "aggregation": "<aggregate_metric>", "value": <numeric_value> } } Equal To The equalTo filter will match rows with a specific aggregate value. The grammar for an equalTo filter is as follows: { "queryType": "groupBy", "dataSource": "sample_datasource", ... "having": { "type": "equalTo", "aggregation": "<aggregate_metric>", "value": <numeric_value> } } This is the equivalent of HAVING <aggregate> = <value>. Greater Than The greaterThan filter will match rows with aggregate values greater than the given value. The grammar for a greaterThan filter is as follows: { "queryType": "groupBy", "dataSource": "sample_datasource", ... "having": { "type": "greaterThan", "aggregation": "<aggregate_metric>", "value": <numeric_value> } } This is the equivalent of HAVING <aggregate> > <value>. Less Than The lessThan filter will match rows with aggregate values less than the specified value. The grammar for a lessThan filter is as follows: { "queryType": "groupBy", "dataSource": "sample_datasource", ... "having": { "type": "lessThan", "aggregation": "<aggregate_metric>", "value": <numeric_value> } } This is the equivalent of HAVING <aggregate> < <value>. "},{"title":"Dimension Selector Filter","type":1,"pageTitle":"Having filters (groupBy)","url":"/docs/27.0.0/querying/having#dimension-selector-filter","content":"dimSelector The dimSelector filter will match rows with dimension values equal to the specified value. 
The grammar for a dimSelector filter is as follows: { "queryType": "groupBy", "dataSource": "sample_datasource", ... "having": { "type": "dimSelector", "dimension": "<dimension>", "value": <dimension_value> } } "},{"title":"Logical expression filters","type":1,"pageTitle":"Having filters (groupBy)","url":"/docs/27.0.0/querying/having#logical-expression-filters","content":"AND The grammar for an AND filter is as follows: { "queryType": "groupBy", "dataSource": "sample_datasource", ... "having": { "type": "and", "havingSpecs": [ { "type": "greaterThan", "aggregation": "<aggregate_metric>", "value": <numeric_value> }, { "type": "lessThan", "aggregation": "<aggregate_metric>", "value": <numeric_value> } ] } } OR The grammar for an OR filter is as follows: { "queryType": "groupBy", "dataSource": "sample_datasource", ... "having": { "type": "or", "havingSpecs": [ { "type": "greaterThan", "aggregation": "<aggregate_metric>", "value": <numeric_value> }, { "type": "equalTo", "aggregation": "<aggregate_metric>", "value": <numeric_value> } ] } } NOT The grammar for a NOT filter is as follows: { "queryType": "groupBy", "dataSource": "sample_datasource", ... "having": { "type": "not", "havingSpec": { "type": "equalTo", "aggregation": "<aggregate_metric>", "value": <numeric_value> } } } "},{"title":"Lookups","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/lookups","content":"","keywords":""},{"title":"Query Syntax","type":1,"pageTitle":"Lookups","url":"/docs/27.0.0/querying/lookups#query-syntax","content":"In Druid SQL, lookups can be queried using the LOOKUP function, for example: SELECT LOOKUP(store, 'store_to_country') AS country, SUM(revenue) FROM sales GROUP BY 1 They can also be queried using the JOIN operator: SELECT store_to_country.v AS country, SUM(sales.revenue) AS country_revenue FROM sales INNER JOIN lookup.store_to_country ON sales.store = store_to_country.k GROUP BY 1 In native queries, lookups can be queried with dimension specs or extraction functions. "},{"title":"Query Execution","type":1,"pageTitle":"Lookups","url":"/docs/27.0.0/querying/lookups#query-execution","content":"When executing an aggregation query involving lookup functions (like the SQL LOOKUP function), Druid can decide to apply them while scanning and aggregating rows, or to apply them after aggregation is complete. It is more efficient to apply lookups after aggregation is complete, so Druid will do this if it can. Druid decides this by checking if the lookup is marked as "injective" or not. In general, you should set this property for any lookup that is naturally one-to-one, to allow Druid to run your queries as fast as possible. Injective lookups should include all possible keys that may show up in your dataset, and should also map all keys to unique values. This matters because non-injective lookups may map different keys to the same value, which must be accounted for during aggregation, lest query results contain two result values that should have been aggregated into one. This lookup is injective (assuming it contains all possible keys from your data): 1 -> Foo 2 -> Bar 3 -> Billy But this one is not, since both "2" and "3" map to the same value: 1 -> Foo 2 -> Bar 3 -> Bar To tell Druid that your lookup is injective, you must specify "injective" : true in the lookup configuration. Druid will not detect this automatically. info Currently, the injective lookup optimization is not triggered when lookups are inputs to a join datasource. 
It is only used when lookup functions are used directly, without the join operator. "},{"title":"Dynamic Configuration","type":1,"pageTitle":"Lookups","url":"/docs/27.0.0/querying/lookups#dynamic-configuration","content":"The following documents the behavior of the cluster-wide config which is accessible through the Coordinator. The configuration is propagated through the concept of "tier" of servers. A "tier" is defined as a group of services which should receive a set of lookups. For example, you might have all Historicals be part of __default, and Peons be part of individual tiers for the datasources they are tasked with. The tiers for lookups are completely independent of Historical tiers. These configs are accessed using JSON through the following URI template http://<COORDINATOR_IP>:<PORT>/druid/coordinator/v1/lookups/config/{tier}/{id} All URIs below are assumed to have http://<COORDINATOR_IP>:<PORT> prepended. If you have NEVER configured lookups before, you MUST post an empty json object {} to /druid/coordinator/v1/lookups/config to initialize the configuration. These endpoints will return one of the following results: 404 if the resource is not found400 if there is a problem in the formatting of the request202 if the request was accepted asynchronously (POST and DELETE)200 if the request succeeded (GET only) "},{"title":"Configuration propagation behavior","type":1,"pageTitle":"Lookups","url":"/docs/27.0.0/querying/lookups#configuration-propagation-behavior","content":"The configuration is propagated to the query serving processes (Broker / Router / Peon / Historical) by the Coordinator. The query serving processes have an internal API for managing lookups on the process and those are used by the Coordinator. The Coordinator periodically checks if any of the processes need to load/drop lookups and updates them appropriately. Please note that only 2 simultaneous lookup configuration propagation requests can be concurrently handled by a single query serving process. This limit is applied to prevent lookup handling from consuming too many server HTTP connections. "},{"title":"API","type":1,"pageTitle":"Lookups","url":"/docs/27.0.0/querying/lookups#api","content":"See Lookups API for reference on configuring lookups and lookup status. "},{"title":"Configuration","type":1,"pageTitle":"Lookups","url":"/docs/27.0.0/querying/lookups#configuration","content":"See Lookups Dynamic Configuration for Coordinator configuration. To configure a Broker / Router / Historical / Peon to announce itself as part of a lookup tier, use following properties. Property\tDescription\tDefaultdruid.lookup.lookupTier\tThe tier for lookups for this process. This is independent of other tiers.\t__default druid.lookup.lookupTierIsDatasource\tFor some things like indexing service tasks, the datasource is passed in the runtime properties of a task. This option fetches the tierName from the same value as the datasource for the task. It is suggested to only use this as Peon options for the indexing service, if at all. 
If true, druid.lookup.lookupTier MUST NOT be specified\t"false" To configure the behavior of the dynamic configuration manager, use the following properties on the Coordinator: Property\tDescription\tDefaultdruid.manager.lookups.hostTimeout\tTimeout (in ms) PER HOST for processing request\t2000(2 seconds) druid.manager.lookups.allHostTimeout\tTimeout (in ms) to finish lookup management on all the processes.\t900000(15 mins) druid.manager.lookups.period\tHow long to pause between management cycles\t120000(2 mins) druid.manager.lookups.threadPoolSize\tNumber of service processes that can be managed concurrently\t10 "},{"title":"Saving configuration across restarts","type":1,"pageTitle":"Lookups","url":"/docs/27.0.0/querying/lookups#saving-configuration-across-restarts","content":"It is possible to save the configuration across restarts such that a process will not have to wait for Coordinator action to re-populate its lookups. To do this the following property is set: Property\tDescription\tDefaultdruid.lookup.snapshotWorkingDir\tWorking path used to store snapshot of current lookup configuration, leaving this property null will disable snapshot/bootstrap utility\tnull druid.lookup.enableLookupSyncOnStartup\tEnable the lookup synchronization process with Coordinator on startup. The queryable processes will fetch and load the lookups from the Coordinator instead of waiting for the Coordinator to load the lookups for them. Users may opt to disable this option if there are no lookups configured in the cluster.\ttrue druid.lookup.numLookupLoadingThreads\tNumber of threads for loading the lookups in parallel on startup. This thread pool is destroyed once startup is done. It is not kept during the lifetime of the JVM\tAvailable Processors / 2 druid.lookup.coordinatorFetchRetries\tHow many times to retry to fetch the lookup bean list from Coordinator, during the sync on startup.\t3 druid.lookup.lookupStartRetries\tHow many times to retry to start each lookup, either during the sync on startup, or during the runtime.\t3 druid.lookup.coordinatorRetryDelay\tHow long to delay (in millis) between retries to fetch lookup list from the Coordinator during the sync on startup.\t60_000 "},{"title":"Introspect a Lookup","type":1,"pageTitle":"Lookups","url":"/docs/27.0.0/querying/lookups#introspect-a-lookup","content":"The Broker provides an API for lookup introspection if the lookup type implements a LookupIntrospectHandler. A GET request to /druid/v1/lookups/introspect/{lookupId} will return the map of complete values. ex: GET /druid/v1/lookups/introspect/nato-phonetic { "A": "Alfa", "B": "Bravo", "C": "Charlie", ... "Y": "Yankee", "Z": "Zulu", "-": "Dash" } The list of keys can be retrieved via GET to /druid/v1/lookups/introspect/{lookupId}/keys" ex: GET /druid/v1/lookups/introspect/nato-phonetic/keys [ "A", "B", "C", ... "Y", "Z", "-" ] A GET request to /druid/v1/lookups/introspect/{lookupId}/values" will return the list of values. ex: GET /druid/v1/lookups/introspect/nato-phonetic/values [ "Alfa", "Bravo", "Charlie", ... "Yankee", "Zulu", "Dash" ] "},{"title":"Expressions","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/math-expr","content":"","keywords":""},{"title":"General functions","type":1,"pageTitle":"Expressions","url":"/docs/27.0.0/querying/math-expr#general-functions","content":"name\tdescriptioncast\tcast(expr,LONG or DOUBLE or STRING or ARRAY<LONG>, or ARRAY<DOUBLE> or ARRAY<STRING>) returns expr with specified type. exception can be thrown. 
Scalar types may be cast to array types and will take the form of a single element list (null will still be null). if	if(predicate,then,else) returns 'then' if 'predicate' evaluates to a positive number, otherwise it returns 'else' nvl	nvl(expr,expr-for-null) returns 'expr-for-null' if 'expr' is null (or empty string for string type) like	like(expr, pattern[, escape]) is equivalent to SQL expr LIKE pattern case_searched	case_searched(expr1, result1, [[expr2, result2, ...], else-result]) is similar to CASE WHEN expr1 THEN result1 [ELSE else_result] END in SQL case_simple	case_simple(expr, value1, result1, [[value2, result2, ...], else-result]) is similar to CASE expr WHEN value THEN result [ELSE else_result] END in SQL isnull	isnull(expr) returns 1 if the value is null, else 0 notnull	notnull(expr) returns 1 if the value is not null, else 0 bloom_filter_test	bloom_filter_test(expr, filter) tests the value of 'expr' against 'filter', a bloom filter serialized as a base64 string. See bloom filter extension documentation for additional details. "},{"title":"String functions","type":1,"pageTitle":"Expressions","url":"/docs/27.0.0/querying/math-expr#string-functions","content":"name	descriptionconcat	concat(expr, expr...) concatenates a list of strings format	format(pattern[, args...]) returns a string formatted in the manner of Java's String.format. like	like(expr, pattern[, escape]) is equivalent to SQL expr LIKE pattern lookup	lookup(expr, lookup-name) looks up expr in a registered query-time lookup parse_long	parse_long(string[, radix]) parses a string as a long with the given radix, or 10 (decimal) if a radix is not provided. regexp_extract	regexp_extract(expr, pattern[, index]) applies a regular expression pattern and extracts a capture group index, or null if there is no match. If index is unspecified or zero, returns the substring that matched the pattern. The pattern may match anywhere inside expr; if you want to match the entire string instead, use the ^ and $ markers at the start and end of your pattern. regexp_like	regexp_like(expr, pattern) returns whether expr matches regular expression pattern. The pattern may match anywhere inside expr; if you want to match the entire string instead, use the ^ and $ markers at the start and end of your pattern. regexp_replace	regexp_replace(expr, pattern, replacement) replaces all instances of a regular expression pattern with a given replacement string. The pattern may match anywhere inside expr; if you want to match the entire string instead, use the ^ and $ markers at the start and end of your pattern. contains_string	contains_string(expr, string) returns whether expr contains string as a substring. This method is case-sensitive. icontains_string	icontains_string(expr, string) returns whether expr contains string as a substring. This method is case-insensitive. replace	replace(expr, pattern, replacement) replaces pattern with replacement substring	substring(expr, index, length) behaves like java.lang.String's substring right	right(expr, length) returns the rightmost length characters from a string left	left(expr, length) returns the leftmost length characters from a string strlen	strlen(expr) returns length of a string in UTF-16 code units strpos	strpos(haystack, needle[, fromIndex]) returns the position of the needle within the haystack, with indexes starting from 0. The search will begin at fromIndex, or 0 if fromIndex is not specified. If the needle is not found then the function returns -1. 
trim\ttrim(expr[, chars]) remove leading and trailing characters from expr if they are present in chars. chars defaults to ' ' (space) if not provided. ltrim\tltrim(expr[, chars]) remove leading characters from expr if they are present in chars. chars defaults to ' ' (space) if not provided. rtrim\trtrim(expr[, chars]) remove trailing characters from expr if they are present in chars. chars defaults to ' ' (space) if not provided. lower\tlower(expr) converts a string to lowercase upper\tupper(expr) converts a string to uppercase reverse\treverse(expr) reverses a string repeat\trepeat(expr, N) repeats a string N times lpad\tlpad(expr, length, chars) returns a string of length from expr left-padded with chars. If length is shorter than the length of expr, the result is expr which is truncated to length. The result will be null if either expr or chars is null. If chars is an empty string, no padding is added, however expr may be trimmed if necessary. rpad\trpad(expr, length, chars) returns a string of length from expr right-padded with chars. If length is shorter than the length of expr, the result is expr which is truncated to length. The result will be null if either expr or chars is null. If chars is an empty string, no padding is added, however expr may be trimmed if necessary. "},{"title":"Time functions","type":1,"pageTitle":"Expressions","url":"/docs/27.0.0/querying/math-expr#time-functions","content":"name\tdescriptiontimestamp\ttimestamp(expr[,format-string]) parses string expr into date then returns milliseconds from java epoch. without 'format-string' it's regarded as ISO datetime format unix_timestamp\tsame with 'timestamp' function but returns seconds instead timestamp_ceil\ttimestamp_ceil(expr, period, [origin, [timezone]]) rounds up a timestamp, returning it as a new timestamp. Period can be any ISO8601 period, like P3M (quarters) or PT12H (half-days). The time zone, if provided, should be a time zone name like "America/Los_Angeles" or offset like "-08:00". timestamp_floor\ttimestamp_floor(expr, period, [origin, [timezone]]) rounds down a timestamp, returning it as a new timestamp. Period can be any ISO8601 period, like P3M (quarters) or PT12H (half-days). The time zone, if provided, should be a time zone name like "America/Los_Angeles" or offset like "-08:00". timestamp_shift\ttimestamp_shift(expr, period, step, [timezone]) shifts a timestamp by a period (step times), returning it as a new timestamp. Period can be any ISO8601 period. Step may be negative. The time zone, if provided, should be a time zone name like "America/Los_Angeles" or offset like "-08:00". timestamp_extract\ttimestamp_extract(expr, unit, [timezone]) extracts a time part from expr, returning it as a number. Unit can be EPOCH (number of seconds since 1970-01-01 00:00:00 UTC), SECOND, MINUTE, HOUR, DAY (day of month), DOW (day of week), DOY (day of year), WEEK (week of week year), MONTH (1 through 12), QUARTER (1 through 4), or YEAR. The time zone, if provided, should be a time zone name like "America/Los_Angeles" or offset like "-08:00" timestamp_parse\ttimestamp_parse(string expr, [pattern, [timezone]]) parses a string into a timestamp using a given Joda DateTimeFormat pattern. If the pattern is not provided, this parses time strings in either ISO8601 or SQL format. The time zone, if provided, should be a time zone name like "America/Los_Angeles" or offset like "-08:00", and will be used as the time zone for strings that do not include a time zone offset. Pattern and time zone must be literals. 
Strings that cannot be parsed as timestamps will be returned as nulls. timestamp_format\ttimestamp_format(expr, [pattern, [timezone]]) formats a timestamp as a string with a given Joda DateTimeFormat pattern, or ISO8601 if the pattern is not provided. The time zone, if provided, should be a time zone name like "America/Los_Angeles" or offset like "-08:00". Pattern and time zone must be literals. "},{"title":"Math functions","type":1,"pageTitle":"Expressions","url":"/docs/27.0.0/querying/math-expr#math-functions","content":"See javadoc of java.lang.Math for detailed explanation for each function. name\tdescriptionabs\tabs(x) returns the absolute value of x acos\tacos(x) returns the arc cosine of x asin\tasin(x) returns the arc sine of x atan\tatan(x) returns the arc tangent of x bitwiseAnd\tbitwiseAnd(x,y) returns the result of x & y. Double values will be implicitly cast to longs, use bitwiseConvertDoubleToLongBits to perform bitwise operations directly with doubles bitwiseComplement\tbitwiseComplement(x) returns the result of ~x. Double values will be implicitly cast to longs, use bitwiseConvertDoubleToLongBits to perform bitwise operations directly with doubles bitwiseConvertDoubleToLongBits\tbitwiseConvertDoubleToLongBits(x) converts the bits of an IEEE 754 floating-point double value to a long. If the input is not a double, it is implicitly cast to a double prior to conversion bitwiseConvertLongBitsToDouble\tbitwiseConvertLongBitsToDouble(x) converts a long to the IEEE 754 floating-point double specified by the bits stored in the long. If the input is not a long, it is implicitly cast to a long prior to conversion bitwiseOr\tbitwiseOr(x,y) returns the result of x [PIPE] y. Double values will be implicitly cast to longs, use bitwiseConvertDoubleToLongBits to perform bitwise operations directly with doubles bitwiseShiftLeft\tbitwiseShiftLeft(x,y) returns the result of x << y. Double values will be implicitly cast to longs, use bitwiseConvertDoubleToLongBits to perform bitwise operations directly with doubles bitwiseShiftRight\tbitwiseShiftRight(x,y) returns the result of x >> y. Double values will be implicitly cast to longs, use bitwiseConvertDoubleToLongBits to perform bitwise operations directly with doubles bitwiseXor\tbitwiseXor(x,y) returns the result of x ^ y. 
Double values will be implicitly cast to longs, use bitwiseConvertDoubleToLongBits to perform bitwise operations directly with doubles atan2	atan2(y, x) returns the angle theta from the conversion of rectangular coordinates (x, y) to polar coordinates (r, theta) cbrt	cbrt(x) returns the cube root of x ceil	ceil(x) returns the smallest (closest to negative infinity) double value that is greater than or equal to x and is equal to a mathematical integer copysign	copysign(x, y) returns the first floating-point argument with the sign of the second floating-point argument cos	cos(x) returns the trigonometric cosine of x cosh	cosh(x) returns the hyperbolic cosine of x cot	cot(x) returns the trigonometric cotangent of an angle x div	div(x,y) is integer division of x by y exp	exp(x) returns Euler's number raised to the power of x expm1	expm1(x) returns e^x-1 floor	floor(x) returns the largest (closest to positive infinity) double value that is less than or equal to x and is equal to a mathematical integer getExponent	getExponent(x) returns the unbiased exponent used in the representation of x hypot	hypot(x, y) returns sqrt(x^2+y^2) without intermediate overflow or underflow log	log(x) returns the natural logarithm of x log10	log10(x) returns the base 10 logarithm of x log1p	log1p(x) returns the natural logarithm of x + 1 max	max(x, y) returns the greater of two values min	min(x, y) returns the smaller of two values nextafter	nextafter(x, y) returns the floating-point number adjacent to x in the direction of y nextUp	nextUp(x) returns the floating-point value adjacent to x in the direction of positive infinity pi	pi returns the constant value of π pow	pow(x, y) returns the value of x raised to the power of y remainder	remainder(x, y) returns the remainder operation on two arguments as prescribed by the IEEE 754 standard rint	rint(x) returns the value that is closest in value to x and is equal to a mathematical integer round	round(x, y) returns the value of x rounded to y decimal places. While x can be an integer or floating-point number, y must be an integer. The type of the return value is specified by that of x. y defaults to 0 if omitted. When y is negative, x is rounded on the left side of the y decimal points. If x is NaN, 0 is returned. If x is infinity, x will be converted to the nearest finite double. safe_divide	safe_divide(x,y) returns the division of x by y if y is not equal to 0. 
If y is 0, it returns 0, or null if druid.generic.useDefaultValueForNull=false scalb	scalb(d, sf) returns d * 2^sf rounded as if performed by a single correctly rounded floating-point multiply to a member of the double value set signum	signum(x) returns the signum function of the argument x sin	sin(x) returns the trigonometric sine of an angle x sinh	sinh(x) returns the hyperbolic sine of x sqrt	sqrt(x) returns the correctly rounded positive square root of x tan	tan(x) returns the trigonometric tangent of an angle x tanh	tanh(x) returns the hyperbolic tangent of x todegrees	todegrees(x) converts an angle measured in radians to an approximately equivalent angle measured in degrees toradians	toradians(x) converts an angle measured in degrees to an approximately equivalent angle measured in radians ulp	ulp(x) returns the size of an ulp of the argument x "},{"title":"Array functions","type":1,"pageTitle":"Expressions","url":"/docs/27.0.0/querying/math-expr#array-functions","content":"function	descriptionarray(expr1,expr ...)	constructs an array from the expression arguments, using the type of the first argument as the output array type array_length(arr)	returns length of array expression array_offset(arr,long)	returns the array element at the 0 based index supplied, or null for an out of range index array_ordinal(arr,long)	returns the array element at the 1 based index supplied, or null for an out of range index array_contains(arr,expr)	returns 1 if the array contains the element specified by expr, or contains all elements specified by expr if expr is an array, else 0 array_overlap(arr1,arr2)	returns 1 if arr1 and arr2 have any elements in common, else 0 array_offset_of(arr,expr)	returns the 0 based index of the first occurrence of expr in the array, or -1 or null if druid.generic.useDefaultValueForNull=false if no matching elements exist in the array. array_ordinal_of(arr,expr)	returns the 1 based index of the first occurrence of expr in the array, or -1 or null if druid.generic.useDefaultValueForNull=false if no matching elements exist in the array. array_prepend(expr,arr)	adds expr to arr at the beginning, the resulting array type determined by the type of the array array_append(arr,expr)	appends expr to arr, the resulting array type determined by the type of the first array array_concat(arr1,arr2)	concatenates 2 arrays, the resulting array type determined by the type of the first array array_set_add(arr,expr)	adds expr to arr and converts the array to a new array composed of the unique set of elements. The resulting array type determined by the type of the array array_set_add_all(arr1,arr2)	combines the unique set of elements of 2 arrays, the resulting array type determined by the type of the first array array_slice(arr,start,end)	returns the subarray of arr from the 0 based index start(inclusive) to end(exclusive), or null, if start is less than 0, greater than length of arr or greater than end array_to_string(arr,str)	joins all elements of arr by the delimiter specified by str string_to_array(str1,str2)	splits str1 into an array on the delimiter specified by str2 "},{"title":"Apply functions","type":1,"pageTitle":"Expressions","url":"/docs/27.0.0/querying/math-expr#apply-functions","content":"Apply functions allow for special 'lambda' expressions to be defined and applied to array inputs to enable free-form transformations. 
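For example (a minimal sketch built only from the function descriptions in the table that follows; the literal arrays are illustrative), the expression map((x) -> x * 2, array(1, 2, 3)) produces a new array [2, 4, 6], while fold((x, acc) -> x + acc, array(1, 2, 3), 0) accumulates the elements into the single value 6.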
function\tdescriptionmap(lambda,arr)\tapplies a transform specified by a single argument lambda expression to all elements of arr, returning a new array cartesian_map(lambda,arr1,arr2,...)\tapplies a transform specified by a multi argument lambda expression to all elements of the Cartesian product of all input arrays, returning a new array; the number of lambda arguments and array inputs must be the same filter(lambda,arr)\tfilters arr by a single argument lambda, returning a new array with all matching elements, or null if no elements match fold(lambda,arr,acc)\tfolds a 2 argument lambda across arr using acc as the initial input value. The first argument of the lambda is the array element and the second the accumulator, returning a single accumulated value. cartesian_fold(lambda,arr1,arr2,...,acc)\tfolds a multi argument lambda across the Cartesian product of all input arrays using acc as the initial input value. The first arguments of the lambda are the array elements of each array and the last is the accumulator, returning a single accumulated value. any(lambda,arr)\treturns 1 if any element in the array matches the lambda expression, else 0 all(lambda,arr)\treturns 1 if all elements in the array matches the lambda expression, else 0 "},{"title":"Lambda expressions syntax","type":1,"pageTitle":"Expressions","url":"/docs/27.0.0/querying/math-expr#lambda-expressions-syntax","content":"Lambda expressions are a sort of function definition, where new identifiers can be defined and passed as input to the expression body (identifier1 ...) -> expr e.g. (x, y) -> x + y The identifier arguments of a lambda expression correspond to the elements of the array it is being applied to. For example: map((x) -> x + 1, some_multi_value_column) will map each element of some_multi_value_column to the identifier x so that the lambda expression body can be evaluated for each x. The scoping rules are that lambda arguments will override identifiers which are defined externally from the lambda expression body. Using the same example: map((x) -> x + 1, x) in this case, the x when evaluating x + 1 is the lambda argument, thus an element of the multi-valued column x, rather than the column x itself. "},{"title":"JSON functions","type":1,"pageTitle":"Expressions","url":"/docs/27.0.0/querying/math-expr#json-functions","content":"JSON functions provide facilities to extract, transform, and create COMPLEX<json> values. function\tdescriptionjson_value(expr, path[, type])\tExtract a Druid literal (STRING, LONG, DOUBLE) value from expr using JSONPath syntax of path. The optional type argument can be set to 'LONG','DOUBLE' or 'STRING' to cast values to that type. json_query(expr, path)\tExtract a COMPLEX<json> value from expr using JSONPath syntax of path json_object(expr1, expr2[, expr3, expr4 ...])\tConstruct a COMPLEX<json> with alternating 'key' and 'value' arguments parse_json(expr)\tDeserialize a JSON STRING into a COMPLEX<json>. If the input is not a STRING or it is invalid JSON, this function will result in an error. try_parse_json(expr)\tDeserialize a JSON STRING into a COMPLEX<json>. If the input is not a STRING or it is invalid JSON, this function will result in a NULL value. 
to_json_string(expr)\tConvert expr into a JSON STRING value json_keys(expr, path)\tGet array of field names from expr at the specified JSONPath path, or null if the data does not exist or have any fields json_paths(expr)\tGet array of all JSONPath paths available from expr "},{"title":"JSONPath syntax","type":1,"pageTitle":"Expressions","url":"/docs/27.0.0/querying/math-expr#jsonpath-syntax","content":"Druid supports a small, simplified subset of the JSONPath syntax operators, primarily limited to extracting individual values from nested data structures. Operator\tDescription$\tRoot element. All JSONPath expressions start with this operator. .<name>\tChild element in dot notation. ['<name>']\tChild element in bracket notation. [<number>]\tArray index. See SQL JSON documentation for examples and Nested columns for more information on ingesting and storing nested data. "},{"title":"Reduction functions","type":1,"pageTitle":"Expressions","url":"/docs/27.0.0/querying/math-expr#reduction-functions","content":"Reduction functions operate on zero or more expressions and return a single expression. If no expressions are passed as arguments, then the result is NULL. The expressions must all be convertible to a common data type, which will be the type of the result: If all arguments are NULL, the result is NULL. Otherwise, NULL arguments are ignored.If the arguments comprise a mix of numbers and strings, the arguments are interpreted as strings.If all arguments are integer numbers, the arguments are interpreted as longs.If all arguments are numbers and at least one argument is a double, the arguments are interpreted as doubles. function\tdescriptiongreatest([expr1, ...])\tEvaluates zero or more expressions and returns the maximum value based on comparisons as described above. least([expr1, ...])\tEvaluates zero or more expressions and returns the minimum value based on comparisons as described above. "},{"title":"IP address functions","type":1,"pageTitle":"Expressions","url":"/docs/27.0.0/querying/math-expr#ip-address-functions","content":"For the IPv4 address functions, the address argument accepts either an IPv4 dotted-decimal string (e.g., "192.168.0.1") or an IP address represented as a long (e.g., 3232235521). Format the subnet argument as an IPv4 address subnet in CIDR notation (e.g., "192.168.0.0/16"). function\tdescriptionipv4_match(address, subnet)\tReturns 1 if the address belongs to the subnet literal, else 0. If address is not a valid IPv4 address, then 0 is returned. This function is more efficient if address is a long instead of a string. ipv4_parse(address)\tParses address into an IPv4 address stored as a long. Returns address if it is already a valid IPv4 integer address. Returns null if address cannot be represented as an IPv4 address. ipv4_stringify(address)\tConverts address into an IPv4 address dotted-decimal string. Returns address if it is already a valid IPv4 dotted-decimal string. Returns null if address cannot be represented as an IPv4 address. "},{"title":"Other functions","type":1,"pageTitle":"Expressions","url":"/docs/27.0.0/querying/math-expr#other-functions","content":"function\tdescriptionhuman_readable_binary_byte_format(value[, precision])\tFormat a number in human-readable IEC format. precision must be in the range of [0,3] (default: 2). 
For example: human_readable_binary_byte_format(1048576) returns 1.00 MiBhuman_readable_binary_byte_format(1048576, 3) returns 1.000 MiB human_readable_decimal_byte_format(value[, precision])\tFormat a number in human-readable SI format. precision must be in the range of [0,3] (default: 2). For example: human_readable_decimal_byte_format(1000000) returns 1.00 MBhuman_readable_decimal_byte_format(1000000, 3) returns 1.000 MB human_readable_decimal_format(value[, precision])\tFormat a number in human-readable SI format. precision must be in the range of [0,3] (default: 2). For example:human_readable_decimal_format(1000000) returns 1.00 Mhuman_readable_decimal_format(1000000, 3) returns 1.000 M "},{"title":"Vectorization support","type":1,"pageTitle":"Expressions","url":"/docs/27.0.0/querying/math-expr#vectorization-support","content":"A number of expressions support 'vectorized' query engines Supported features: constants and identifiers are supported for any column typecast is supported for numeric and string typesmath operators: +,-,*,/,%,^ are supported for numeric typeslogical operators: !, &&, ||, are supported for string and numeric types (if druid.expressions.useStrictBooleans=true)comparison operators: =, !=, >, >=, <, <= are supported for string and numeric typesmath functions: abs, acos, asin, atan, cbrt, ceil, cos, cosh, cot, exp, expm1, floor, getExponent, log, log10, log1p, nextUp, rint, signum, sin, sinh, sqrt, tan, tanh, toDegrees, toRadians, ulp, atan2, copySign, div, hypot, max, min, nextAfter, pow, remainder, scalb are supported for numeric typestime functions: timestamp_floor (with constant granularity argument) is supported for numeric typesboolean functions: isnull, notnull are supported for string and numeric typesconditional functions: nvl is supported for string and numeric typesstring functions: the concatenation operator (+) and concat function are supported for string and numeric typesother: parse_long is supported for numeric and string types "},{"title":"Logical operator modes","type":1,"pageTitle":"Expressions","url":"/docs/27.0.0/querying/math-expr#logical-operator-modes","content":"Prior to the 0.23 release of Apache Druid, boolean function expressions have inconsistent handling of true and false values, and the logical 'and' and 'or' operators behave in a manner that is incompatible with SQL, even if SQL compatible null handling mode (druid.generic.useDefaultValueForNull=false) is enabled. Logical operators also pass through their input values similar to many scripting languages, and treat null as false, which can result in some rather strange behavior. Other boolean operations, such as comparisons and equality, retain their input types (e.g. DOUBLE comparison would produce 1.0 for true and 0.0 for false), while many other boolean functions strictly produce LONG typed values of 1 for true and 0 for false. After 0.23, while the inconsistent legacy behavior is still the default, it can be optionally be changed by setting druid.expressions.useStrictBooleans=true, so that these operations will allow correctly treating null values as "unknown" for SQL compatible behavior, and all boolean output functions will output 'homogeneous' LONG typed boolean values of 1 for true and 0 for false. 
Additionally, For the "or" operator: true || null, null || true, -> 1false || null, null || false, null || null-> null For the "and" operator: true && null, null && true, null && null -> nullfalse && null, null && false -> 0 Druid currently still retains implicit conversion of LONG, DOUBLE, and STRING types into boolean values in both modes: LONG or DOUBLE - any value greater than 0 is considered true, else falseSTRING - the value 'true' (case insensitive) is considered true, everything else is false. Legacy behavior: 100 && 11 -> 110.7 || 0.3 -> 0.3100 && 0 -> 0'troo' && 'true' -> 'troo''troo' || 'true' -> 'true' SQL compatible behavior: 100 && 11 -> 10.7 || 0.3 -> 1100 && 0 -> 0'troo' && 'true' -> 0'troo' || 'true' -> 1 "},{"title":"Multi-value dimensions","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/multi-value-dimensions","content":"","keywords":""},{"title":"Overview","type":1,"pageTitle":"Multi-value dimensions","url":"/docs/27.0.0/querying/multi-value-dimensions#overview","content":"At ingestion time, Druid can detect multi-value dimensions and configure the dimensionsSpec accordingly. It detects JSON arrays or CSV/TSV fields as multi-value dimensions. For TSV or CSV data, you can specify the multi-value delimiters using the listDelimiter field in the parseSpec. JSON data must be formatted as a JSON array to be ingested as a multi-value dimension. JSON data does not require parseSpec configuration. The following shows an example multi-value dimension named tags in a dimensionsSpec: "dimensions": [ { "type": "string", "name": "tags", "multiValueHandling": "SORTED_ARRAY", "createBitmapIndex": true } ], By default, Druid sorts values in multi-value dimensions. This behavior is controlled by the SORTED_ARRAY value of the multiValueHandling field. Alternatively, you can specify multi-value handling as: SORTED_SET: results in the removal of duplicate valuesARRAY: retains the original order of the values See Dimension Objects for information on configuring multi-value handling. "},{"title":"Querying multi-value dimensions","type":1,"pageTitle":"Multi-value dimensions","url":"/docs/27.0.0/querying/multi-value-dimensions#querying-multi-value-dimensions","content":"The following sections describe filtering and grouping behavior based on the following example data, which includes a multi-value dimension, tags. {"timestamp": "2011-01-12T00:00:00.000Z", "tags": ["t1","t2","t3"]} #row1 {"timestamp": "2011-01-13T00:00:00.000Z", "tags": ["t3","t4","t5"]} #row2 {"timestamp": "2011-01-14T00:00:00.000Z", "tags": ["t5","t6","t7"]} #row3 {"timestamp": "2011-01-14T00:00:00.000Z", "tags": []} #row4 info Be sure to remove the comments before trying out the sample data. "},{"title":"Filtering","type":1,"pageTitle":"Multi-value dimensions","url":"/docs/27.0.0/querying/multi-value-dimensions#filtering","content":"All query types, as well as filtered aggregators, can filter on multi-value dimensions. 
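For example, a filtered aggregator that counts only rows whose tags dimension contains "t3" in the sample data above might look like the following (a sketch; the output name is arbitrary, and the filtered-aggregator shape is assumed from the aggregations documentation rather than from this page): { "type": "filtered", "filter": { "type": "selector", "dimension": "tags", "value": "t3" }, "aggregator": { "type": "count", "name": "t3_count" } }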
Filters follow these rules on multi-value dimensions: Value filters (like "selector", "bound", and "in") match a row if any of the values of a multi-value dimension match the filter.The Column Comparison filter will match a row if the dimensions have any overlap.Value filters that match null or "" (empty string) will match empty cells in a multi-value dimension.Logical expression filters behave the same way they do on single-value dimensions: "and" matches a row if all underlying filters match that row; "or" matches a row if any underlying filters match that row; "not" matches a row if the underlying filter does not match the row. The following example illustrates these rules. This query applies an "or" filter to match row1 and row2 of the dataset above, but not row3: { "type": "or", "fields": [ { "type": "selector", "dimension": "tags", "value": "t1" }, { "type": "selector", "dimension": "tags", "value": "t3" } ] } This "and" filter would match only row1 of the dataset above: { "type": "and", "fields": [ { "type": "selector", "dimension": "tags", "value": "t1" }, { "type": "selector", "dimension": "tags", "value": "t3" } ] } This "selector" filter would match row4 of the dataset above: { "type": "selector", "dimension": "tags", "value": null } "},{"title":"Grouping","type":1,"pageTitle":"Multi-value dimensions","url":"/docs/27.0.0/querying/multi-value-dimensions#grouping","content":"topN and groupBy queries can group on multi-value dimensions. When grouping on a multi-value dimension, all values from matching rows will be used to generate one group per value. This behaves similarly to an implicit SQL UNNESToperation. This means it's possible for a query to return more groups than there are rows. For example, a topN on the dimension tags with filter "t1" AND "t3" would match only row1, and generate a result with three groups:t1, t2, and t3. If you only need to include values that match your filter, you can use afiltered dimensionSpec. This can also improve performance. "},{"title":"Example: GroupBy query with no filtering","type":1,"pageTitle":"Multi-value dimensions","url":"/docs/27.0.0/querying/multi-value-dimensions#example-groupby-query-with-no-filtering","content":"See GroupBy querying for details. { "queryType": "groupBy", "dataSource": "test", "intervals": [ "1970-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z" ], "granularity": { "type": "all" }, "dimensions": [ { "type": "default", "dimension": "tags", "outputName": "tags" } ], "aggregations": [ { "type": "count", "name": "count" } ] } This query returns the following result: [ { "timestamp": "1970-01-01T00:00:00.000Z", "event": { "count": 1, "tags": "t1" } }, { "timestamp": "1970-01-01T00:00:00.000Z", "event": { "count": 1, "tags": "t2" } }, { "timestamp": "1970-01-01T00:00:00.000Z", "event": { "count": 2, "tags": "t3" } }, { "timestamp": "1970-01-01T00:00:00.000Z", "event": { "count": 1, "tags": "t4" } }, { "timestamp": "1970-01-01T00:00:00.000Z", "event": { "count": 2, "tags": "t5" } }, { "timestamp": "1970-01-01T00:00:00.000Z", "event": { "count": 1, "tags": "t6" } }, { "timestamp": "1970-01-01T00:00:00.000Z", "event": { "count": 1, "tags": "t7" } } ] Notice that original rows are "exploded" into multiple rows and merged. "},{"title":"Example: GroupBy query with a selector query filter","type":1,"pageTitle":"Multi-value dimensions","url":"/docs/27.0.0/querying/multi-value-dimensions#example-groupby-query-with-a-selector-query-filter","content":"See query filters for details of selector query filter. 
{ "queryType": "groupBy", "dataSource": "test", "intervals": [ "1970-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z" ], "filter": { "type": "selector", "dimension": "tags", "value": "t3" }, "granularity": { "type": "all" }, "dimensions": [ { "type": "default", "dimension": "tags", "outputName": "tags" } ], "aggregations": [ { "type": "count", "name": "count" } ] } This query returns the following result: [ { "timestamp": "1970-01-01T00:00:00.000Z", "event": { "count": 1, "tags": "t1" } }, { "timestamp": "1970-01-01T00:00:00.000Z", "event": { "count": 1, "tags": "t2" } }, { "timestamp": "1970-01-01T00:00:00.000Z", "event": { "count": 2, "tags": "t3" } }, { "timestamp": "1970-01-01T00:00:00.000Z", "event": { "count": 1, "tags": "t4" } }, { "timestamp": "1970-01-01T00:00:00.000Z", "event": { "count": 1, "tags": "t5" } } ] You might be surprised to see "t1", "t2", "t4" and "t5" included in the results. This is because the query filter is applied on the row before explosion. For multi-value dimensions, a selector filter for "t3" would match row1 and row2, after which exploding is done. For multi-value dimensions, a query filter matches a row if any individual value inside the multiple values matches the query filter. "},{"title":"Example: GroupBy query with selector query and dimension filters","type":1,"pageTitle":"Multi-value dimensions","url":"/docs/27.0.0/querying/multi-value-dimensions#example-groupby-query-with-selector-query-and-dimension-filters","content":"To solve the problem above and to get only rows for "t3", use a "filtered dimension spec", as in the query below. See filtered dimensionSpecs in dimensionSpecs for details. { "queryType": "groupBy", "dataSource": "test", "intervals": [ "1970-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z" ], "filter": { "type": "selector", "dimension": "tags", "value": "t3" }, "granularity": { "type": "all" }, "dimensions": [ { "type": "listFiltered", "delegate": { "type": "default", "dimension": "tags", "outputName": "tags" }, "values": ["t3"] } ], "aggregations": [ { "type": "count", "name": "count" } ] } This query returns the following result: [ { "timestamp": "1970-01-01T00:00:00.000Z", "event": { "count": 2, "tags": "t3" } } ] Note that, for groupBy queries, you could get similar result with a having spec but using a filtereddimensionSpec is much more efficient because that gets applied at the lowest level in the query processing pipeline. Having specs are applied at the outermost level of groupBy query processing. "},{"title":"Disable GroupBy on multi-value columns","type":1,"pageTitle":"Multi-value dimensions","url":"/docs/27.0.0/querying/multi-value-dimensions#disable-groupby-on-multi-value-columns","content":"You can disable the implicit unnesting behavior for groupBy by setting groupByEnableMultiValueUnnesting: false in your query context. In this mode, the groupBy engine will return an error instead of completing the query. This is a safety feature for situations where you believe that all dimensions are singly-valued and want the engine to reject any multi-valued dimensions that were inadvertently included. "},{"title":"Multitenancy considerations","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/multitenancy","content":"","keywords":""},{"title":"Shared datasources or datasource-per-tenant?","type":1,"pageTitle":"Multitenancy considerations","url":"/docs/27.0.0/querying/multitenancy#shared-datasources-or-datasource-per-tenant","content":"A datasource is the Druid equivalent of a database table. 
Multitenant workloads can either use a separate datasource for each tenant, or can share one or more datasources between tenants using a "tenant_id" dimension. When deciding which path to go down, consider that each path has pros and cons. Pros of datasources per tenant: Each datasource can have its own schema, its own backfills, its own partitioning rules, and its own data loading and expiration rules.Queries can be faster since there will be fewer segments to examine for a typical tenant's query.You get the most flexibility. Pros of shared datasources: Each datasource requires its own JVMs for realtime indexing.Each datasource requires its own YARN resources for Hadoop batch jobs.Each datasource requires its own segment files on disk.For these reasons it can be wasteful to have a very large number of small datasources. One compromise is to use more than one datasource, but a smaller number than tenants. For example, you could have some tenants with partitioning rules A and some with partitioning rules B; you could use two datasources and split your tenants between them. "},{"title":"Partitioning shared datasources","type":1,"pageTitle":"Multitenancy considerations","url":"/docs/27.0.0/querying/multitenancy#partitioning-shared-datasources","content":"If your multitenant cluster uses shared datasources, most of your queries will likely filter on a "tenant_id" dimension. These sorts of queries perform best when data is well-partitioned by tenant. There are a few ways to accomplish this. With batch indexing, you can use single-dimension partitioningto partition your data by tenant_id. Druid always partitions by time first, but the secondary partition within each time bucket will be on tenant_id. With realtime indexing, you'd do this by tweaking the stream you send to Druid. For example, if you're using Kafka then you can have your Kafka producer partition your topic by a hash of tenant_id. "},{"title":"Customizing data distribution","type":1,"pageTitle":"Multitenancy considerations","url":"/docs/27.0.0/querying/multitenancy#customizing-data-distribution","content":"Druid additionally supports multitenancy by providing configurable means of distributing data. Druid's Historical processes can be configured into tiers, and rulescan be set that determines which segments go into which tiers. One use case of this is that recent data tends to be accessed more frequently than older data. Tiering enables more recent segments to be hosted on more powerful hardware for better performance. A second copy of recent segments can be replicated on cheaper hardware (a different tier), and older segments can also be stored on this tier. "},{"title":"Supporting high query concurrency","type":1,"pageTitle":"Multitenancy considerations","url":"/docs/27.0.0/querying/multitenancy#supporting-high-query-concurrency","content":"Druid uses a segment as its fundamental unit of computation. Processes scan segments in parallel and a given process can scan druid.processing.numThreads concurrently. You can add more cores to a cluster to process more data in parallel and increase performance. Size your Druid segments such that any computation over any given segment should complete in at most 500ms. Use the the query/segment/time metric to monitor computation times. Druid internally stores requests to scan segments in a priority queue. 
If a given query requires scanning more segments than the total number of available processors in a cluster, and many similarly expensive queries are concurrently running, we don't want any query to be starved out. Druid's internal processing logic will scan a set of segments from one query and release resources as soon as the scans complete. This allows for a second set of segments from another query to be scanned. By keeping segment computation time very small, we ensure that resources are constantly being yielded, and segments pertaining to different queries are all being processed. Druid queries can optionally set a priority flag in the query context. Queries known to be slow (download or reporting style queries) can be de-prioritized and more interactive queries can have higher priority. Broker processes can also be dedicated to a given tier. For example, one set of Broker processes can be dedicated to fast interactive queries, and a second set of Broker processes can be dedicated to slower reporting queries. Druid also provides a Routerprocess that can route queries to different Brokers based on various query parameters (datasource, interval, etc.). "},{"title":"Query filters","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/filters","content":"","keywords":""},{"title":"Selector filter","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#selector-filter","content":"The simplest filter is a selector filter. The selector filter matches a specific dimension with a specific value. Selector filters can be used as the base filters for more complex Boolean expressions of filters. Property\tDescription\tRequiredtype\tMust be "selector".\tYes dimension\tInput column or virtual column name to filter.\tYes value\tString value to match.\tNo. If not specified the filter matches NULL values. extractionFn\tExtraction function to apply to dimension prior to value matching. See filtering with extraction functions for details.\tNo The selector filter can only match against STRING (single and multi-valued), LONG, FLOAT, DOUBLE types. Use the newer null and equality filters to match against ARRAY or COMPLEX types. When the selector filter matches against numeric inputs, the string value will be best-effort coerced into a numeric value. "},{"title":"Example: equivalent of WHERE someColumn = 'hello'","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-of-where-somecolumn--hello","content":"{ "type": "selector", "dimension": "someColumn", "value": "hello" } "},{"title":"Example: equivalent of WHERE someColumn IS NULL","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-of-where-somecolumn-is-null","content":"{ "type": "selector", "dimension": "someColumn", "value": null } "},{"title":"Equality Filter","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#equality-filter","content":"The equality filter is a replacement for the selector filter with the ability to match against any type of column. The equality filter is designed to have more SQL compatible behavior than the selector filter and so can not match null values. To match null values use the null filter. Druid's SQL planner uses the equality filter by default instead of selector filter whenever druid.generic.useDefaultValueForNull=false, or if sqlUseBoundAndSelectors is set to false on the SQL query context. 
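For example, a Druid SQL API request can request the newer filter types explicitly through its query context (a minimal sketch; the query itself is illustrative and reuses the sales datasource from the lookup examples): { "query": "SELECT store, COUNT(*) FROM sales GROUP BY store", "context": { "sqlUseBoundAndSelectors": false } }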
Property\tDescription\tRequiredtype\tMust be "equality".\tYes column\tInput column or virtual column name to filter.\tYes matchValueType\tString specifying the type of value to match. For example STRING, LONG, DOUBLE, FLOAT, ARRAY<STRING>, ARRAY<LONG>, or any other Druid type. The matchValueType determines how Druid interprets the matchValue to assist in converting to the type of the matched column.\tYes matchValue\tValue to match, must not be null.\tYes "},{"title":"Example: equivalent of WHERE someColumn = 'hello'","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-of-where-somecolumn--hello-1","content":"{ "type": "equality", "column": "someColumn", "matchValueType": "STRING", "matchValue": "hello" } "},{"title":"Example: equivalent of WHERE someNumericColumn = 1.23","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-of-where-somenumericcolumn--123","content":"{ "type": "equality", "column": "someNumericColumn", "matchValueType": "DOUBLE", "matchValue": 1.23 } "},{"title":"Example: equivalent of WHERE someArrayColumn = ARRAY[1, 2, 3]","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-of-where-somearraycolumn--array1-2-3","content":"{ "type": "equality", "column": "someArrayColumn", "matchValueType": "ARRAY<LONG>", "matchValue": [1, 2, 3] } "},{"title":"Null Filter","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#null-filter","content":"The null filter is a partial replacement for the selector filter. It is dedicated to matching NULL values. Druid's SQL planner uses the null filter by default instead of selector filter whenever druid.generic.useDefaultValueForNull=false, or if sqlUseBoundAndSelectors is set to false on the SQL query context. Property\tDescription\tRequiredtype\tMust be "null".\tYes column\tInput column or virtual column name to filter.\tYes "},{"title":"Example: equivalent of WHERE someColumn IS NULL","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-of-where-somecolumn-is-null-1","content":"{ "type": "null", "column": "someColumn" } "},{"title":"Column comparison filter","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#column-comparison-filter","content":"The column comparison filter is similar to the selector filter, but compares dimensions to each other. For example: Property\tDescription\tRequiredtype\tMust be "selector".\tYes dimensions\tList of DimensionSpec to compare.\tYes dimensions is list of DimensionSpecs, making it possible to apply an extraction function if needed. Note that the column comparison filter converts all values to strings prior to comparison. This allows differently-typed input columns to match without a cast operation. 
"},{"title":"Example: equivalent of WHERE someColumn = someLongColumn","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-of-where-somecolumn--somelongcolumn","content":"{ "type": "columnComparison", "dimensions": [ "someColumn", { "type" : "default", "dimension" : someLongColumn, "outputType": "LONG" } ] } "},{"title":"Logical expression filters","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#logical-expression-filters","content":""},{"title":"AND","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#and","content":"Property\tDescription\tRequiredtype\tMust be "and".\tYes fields\tList of filter JSON objects, such as any other filter defined on this page or provided by extensions.\tYes Example: equivalent of WHERE someColumn = 'a' AND otherColumn = 1234 AND anotherColumn IS NULL { "type": "and", "fields": [ { "type": "equality", "column": "someColumn", "matchValue": "a", "matchValueType": "STRING" }, { "type": "equality", "column": "otherColumn", "matchValue": 1234, "matchValueType": "LONG" }, { "type": "null", "column": "anotherColumn" } ] } "},{"title":"OR","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#or","content":"Property\tDescription\tRequiredtype\tMust be "or".\tYes fields\tList of filter JSON objects, such as any other filter defined on this page or provided by extensions.\tYes Example: equivalent of WHERE someColumn = 'a' OR otherColumn = 1234 OR anotherColumn IS NULL { "type": "or", "fields": [ { "type": "equality", "column": "someColumn", "matchValue": "a", "matchValueType": "STRING" }, { "type": "equality", "column": "otherColumn", "matchValue": 1234, "matchValueType": "LONG" }, { "type": "null", "column": "anotherColumn" } ] } "},{"title":"NOT","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#not","content":"Property\tDescription\tRequiredtype\tMust be "not".\tYes field\tFilter JSON objects, such as any other filter defined on this page or provided by extensions.\tYes Example: equivalent of WHERE someColumn IS NOT NULL { "type": "not", "field": { "type": "null", "column": "someColumn" }} "},{"title":"In filter","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#in-filter","content":"The in filter can match input rows against a set of values, where a match occurs if the value is contained in the set. Property\tDescription\tRequiredtype\tMust be "in".\tYes dimension\tInput column or virtual column name to filter.\tYes values\tList of string value to match.\tYes extractionFn\tExtraction function to apply to dimension prior to value matching. See filtering with extraction functions for details.\tNo If an empty values array is passed to the "in" filter, it will simply return an empty result. If the values array contains null, the "in" filter matches null values. This differs from the SQL IN filter, which does not match NULL values. "},{"title":"Example: equivalent of WHERE outlaw IN ('Good', 'Bad', 'Ugly')","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-of-where-outlaw-in-good-bad-ugly","content":"{ "type": "in", "dimension": "outlaw", "values": ["Good", "Bad", "Ugly"] } "},{"title":"Bound filter","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#bound-filter","content":"Bound filters can be used to filter on ranges of dimension values. 
It can be used for comparison filtering like greater than, less than, greater than or equal to, less than or equal to, and "between" (if both "lower" and "upper" are set). Property	Description	Requiredtype	Must be "bound".	Yes dimension	Input column or virtual column name to filter.	Yes lower	The lower bound string match value for the filter.	No upper	The upper bound string match value for the filter.	No lowerStrict	Boolean indicating whether to perform strict comparison on the lower bound (">" instead of ">=").	No, default: false upperStrict	Boolean indicating whether to perform strict comparison on the upper bound ("<" instead of "<=").	No, default: false ordering	String that specifies the sorting order to use when comparing values against the bound. Can be one of the following values: "lexicographic", "alphanumeric", "numeric", "strlen", "version". See Sorting Orders for more details.	No, default: "lexicographic" extractionFn	Extraction function to apply to dimension prior to value matching. See filtering with extraction functions for details.	No When the bound filter matches against numeric inputs, the string lower and upper bound values are best-effort coerced into a numeric value when using the "numeric" mode of ordering. The bound filter can only match against STRING (single and multi-valued), LONG, FLOAT, DOUBLE types. Use the newer range filter to match against ARRAY or COMPLEX types. Note that the bound filter matches null values if you don't specify a lower bound. Use the range filter if you need SQL-compatible behavior. "},{"title":"Example: equivalent to WHERE 21 <= age <= 31","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-to-where-21--age--31","content":"{ "type": "bound", "dimension": "age", "lower": "21", "upper": "31" , "ordering": "numeric" } "},{"title":"Example: equivalent to WHERE 'foo' <= name <= 'hoo', using the default lexicographic sorting order","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-to-where-foo--name--hoo-using-the-default-lexicographic-sorting-order","content":"{ "type": "bound", "dimension": "name", "lower": "foo", "upper": "hoo" } "},{"title":"Example: equivalent to WHERE 21 < age < 31","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-to-where-21--age--31-1","content":"{ "type": "bound", "dimension": "age", "lower": "21", "lowerStrict": true, "upper": "31" , "upperStrict": true, "ordering": "numeric" } "},{"title":"Example: equivalent to WHERE age < 31","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-to-where-age--31","content":"{ "type": "bound", "dimension": "age", "upper": "31" , "upperStrict": true, "ordering": "numeric" } "},{"title":"Example: equivalent to WHERE age >= 18","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-to-where-age--18","content":"{ "type": "bound", "dimension": "age", "lower": "18" , "ordering": "numeric" } "},{"title":"Range filter","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#range-filter","content":"The range filter is a replacement for the bound filter. It compares against any type of column and is designed to have more SQL-compliant behavior than the bound filter. It won't match null values, even if you don't specify a lower bound. 
Druid's SQL planner uses the range filter by default instead of the bound filter whenever druid.generic.useDefaultValueForNull=false, or if sqlUseBoundAndSelectors is set to false on the SQL query context. Property\tDescription\tRequiredtype\tMust be "range".\tYes column\tInput column or virtual column name to filter.\tYes matchValueType\tString specifying the type of bounds to match. For example STRING, LONG, DOUBLE, FLOAT, ARRAY<STRING>, ARRAY<LONG>, or any other Druid type. The matchValueType determines how Druid interprets the matchValue to assist in converting to the type of the matched column and also defines the type of comparison used when matching values.\tYes lower\tLower bound value to match.\tNo. At least one of lower or upper must not be null. upper\tUpper bound value to match.\tNo. At least one of lower or upper must not be null. lowerOpen\tBoolean indicating if lower bound is open in the interval of values defined by the range (">" instead of ">=").\tNo upperOpen\tBoolean indicating if upper bound is open on the interval of values defined by the range ("<" instead of "<=").\tNo "},{"title":"Example: equivalent to WHERE 21 <= age <= 31","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-to-where-21--age--31-2","content":"{ "type": "range", "column": "age", "matchValueType": "LONG", "lower": 21, "upper": 31 } "},{"title":"Example: equivalent to WHERE 'foo' <= name <= 'hoo', using STRING comparison","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-to-where-foo--name--hoo-using-string-comparison","content":"{ "type": "range", "column": "name", "matchValueType": "STRING", "lower": "foo", "upper": "hoo" } "},{"title":"Example: equivalent to WHERE 21 < age < 31","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-to-where-21--age--31-3","content":"{ "type": "range", "column": "age", "matchValueType": "LONG", "lower": "21", "lowerOpen": true, "upper": "31" , "upperOpen": true } "},{"title":"Example: equivalent to WHERE age < 31","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-to-where-age--31-1","content":"{ "type": "range", "column": "age", "matchValueType": "LONG", "upper": "31" , "upperOpen": true } "},{"title":"Example: equivalent to WHERE age >= 18","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-to-where-age--18-1","content":"{ "type": "range", "column": "age", "matchValueType": "LONG", "lower": 18 } "},{"title":"Example: equivalent to WHERE ARRAY['a','b','c'] < arrayColumn < ARRAY['d','e','f'], using ARRAY comparison","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-to-where-arrayabc--arraycolumn--arraydef-using-array-comparison","content":"{ "type": "range", "column": "name", "matchValueType": "ARRAY<STRING>", "lower": ["a","b","c"], "lowerOpen": true, "upper": ["d","e","f"], "upperOpen": true } "},{"title":"Like filter","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#like-filter","content":"Like filters can be used for basic wildcard searches. They are equivalent to the SQL LIKE operator. Special characters supported are "%" (matches any number of characters) and "_" (matches any one character). 
Property\tDescription\tRequiredtype\tMust be "like".\tYes dimension\tInput column or virtual column name to filter.\tYes pattern\tString LIKE pattern, such as "foo%" or "___bar".\tYes escape\tA string escape character that can be used to escape special characters.\tNo extractionFn\tExtraction function to apply to dimension prior to value matching. See filtering with extraction functions for details.\tNo Like filters support the use of extraction functions; see Filtering with Extraction Functions for details. "},{"title":"Example: equivalent of WHERE last_name LIKE \"D%\" (last_name starts with \"D\")","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-equivalent-of-where-last_name-like-d-last_name-starts-with-d","content":"{ "type": "like", "dimension": "last_name", "pattern": "D%" } "},{"title":"Regular expression filter","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#regular-expression-filter","content":"The regular expression filter is similar to the selector filter, but it uses regular expressions. It matches the specified dimension with the given pattern. Property\tDescription\tRequiredtype\tMust be "regex".\tYes dimension\tInput column or virtual column name to filter.\tYes pattern\tString pattern to match - any standard Java regular expression.\tYes extractionFn\tExtraction function to apply to dimension prior to value matching. See filtering with extraction functions for details.\tNo Note that it is often more efficient to use a like filter instead of a regex filter for simple prefix matching. "},{"title":"Example: matches values that start with \"50.\"","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-matches-values-that-start-with-50","content":"{ "type": "regex", "dimension": "someColumn", "pattern": "^50.*" } "},{"title":"Interval filter","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#interval-filter","content":"The Interval filter enables range filtering on columns that contain long millisecond values, with the boundaries specified as ISO 8601 time intervals. It is suitable for the __time column, long metric columns, and dimensions with values that can be parsed as long milliseconds. This filter converts the ISO 8601 intervals to long millisecond start/end ranges and translates to an OR of Bound filters on those millisecond ranges, with numeric comparison. The Bound filters will have left-closed and right-open matching (i.e., start <= time < end). Property\tDescription\tRequiredtype\tMust be "interval".\tYes dimension\tInput column or virtual column name to filter.\tYes intervals\tA JSON array containing ISO-8601 interval strings that defines the time ranges to filter on.\tYes extractionFn\tExtraction function to apply to dimension prior to value matching. See filtering with extraction functions for details.\tNo The interval filter supports the use of extraction functions; see Filtering with Extraction Functions for details. If an extraction function is used with this filter, the extraction function should output values that are parseable as long milliseconds. The following example filters on the time ranges of October 1-7, 2014 and November 15-16, 2014. 
{ "type" : "interval", "dimension" : "__time", "intervals" : [ "2014-10-01T00:00:00.000Z/2014-10-07T00:00:00.000Z", "2014-11-15T00:00:00.000Z/2014-11-16T00:00:00.000Z" ] } The filter above is equivalent to the following OR of Bound filters: { "type": "or", "fields": [ { "type": "bound", "dimension": "__time", "lower": "1412121600000", "lowerStrict": false, "upper": "1412640000000" , "upperStrict": true, "ordering": "numeric" }, { "type": "bound", "dimension": "__time", "lower": "1416009600000", "lowerStrict": false, "upper": "1416096000000" , "upperStrict": true, "ordering": "numeric" } ] } "},{"title":"True filter","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#true-filter","content":"A filter which matches all values. You can use it to temporarily disable other filters without removing them. { "type" : "true" } "},{"title":"False filter","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#false-filter","content":"A filter that matches no values. You can use it to force a query to match no values. {"type": "false" } "},{"title":"Search filter","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#search-filter","content":"You can use search filters to filter on partial string matches. { "filter": { "type": "search", "dimension": "product", "query": { "type": "insensitive_contains", "value": "foo" } } } Property\tDescription\tRequiredtype\tMust be "search".\tYes dimension\tInput column or virtual column name to filter.\tYes query\tA JSON object for the type of search. See search query spec for more information.\tYes extractionFn\tExtraction function to apply to dimension prior to value matching. See filtering with extraction functions for details.\tNo "},{"title":"Search query spec","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#search-query-spec","content":"Contains Property\tDescription\tRequiredtype\tMust be "contains".\tYes value\tA String value to search.\tYes caseSensitive\tWhether the string comparison is case-sensitive or not.\tNo, default is false (insensitive) Insensitive contains Property\tDescription\tRequiredtype\tMust be "insensitive_contains".\tYes value\tA String value to search.\tYes Note that an "insensitive_contains" search is equivalent to a "contains" search with "caseSensitive": false (or not provided). Fragment Property\tDescription\tRequiredtype\tMust be "fragment".\tYes values\tA JSON array of string values to search.\tYes caseSensitive\tWhether the string comparison is case-sensitive or not.\tNo, default is false (insensitive) "},{"title":"Expression filter","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#expression-filter","content":"The expression filter allows for the implementation of arbitrary conditions, leveraging the Druid expression system. This filter allows for complete flexibility, but it might be less performant than a combination of the other filters on this page because it can't always use the same optimizations available to other filters. Property\tDescription\tRequiredtype\tMust be "expression"\tYes expression\tExpression string to evaluate into true or false. 
See the Druid expression system for more details.\tYes "},{"title":"Example: expression based matching","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-expression-based-matching","content":"{ "type" : "expression" , "expression" : "((product_type == 42) && (!is_deleted))" } "},{"title":"JavaScript filter","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#javascript-filter","content":"The JavaScript filter matches a dimension against the specified JavaScript function predicate. The filter matches values for which the function returns true. Property\tDescription\tRequiredtype\tMust be "javascript"\tYes dimension\tInput column or virtual column name to filter.\tYes function\tJavaScript function which accepts the dimension value as a single argument, and returns either true or false.\tYes extractionFn\tExtraction function to apply to dimension prior to value matching. See filtering with extraction functions for details.\tNo "},{"title":"Example: matching any dimension values for the dimension name between 'bar' and 'foo'","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-matching-any-dimension-values-for-the-dimension-name-between-bar-and-foo","content":"{ "type" : "javascript", "dimension" : "name", "function" : "function(x) { return(x >= 'bar' && x <= 'foo') }" } info JavaScript-based functionality is disabled by default. Refer to the Druid JavaScript programming guide for guidelines about using Druid's JavaScript functionality, including instructions on how to enable it. "},{"title":"Extraction filter","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#extraction-filter","content":"info The extraction filter is now deprecated. Use the selector filter with an extraction function instead. The extraction filter matches a dimension using a specific extraction function. The following filter matches the values for which the extraction function has a transformation entry input_key=output_value where output_value is equal to the filter value and input_key is present as a dimension. Property\tDescription\tRequiredtype\tMust be "extraction"\tYes dimension\tInput column or virtual column name to filter.\tYes value\tString value to match.\tNo. If not specified, the filter will match NULL values. extractionFn\tExtraction function to apply to dimension prior to value matching. See filtering with extraction functions for details.\tNo "},{"title":"Example: matching dimension values in [product_1, product_3, product_5] for the column product","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-matching-dimension-values-in-product_1-product_3-product_5-for-the-column-product","content":"{ "filter": { "type": "extraction", "dimension": "product", "value": "bar_1", "extractionFn": { "type": "lookup", "lookup": { "type": "map", "map": { "product_1": "bar_1", "product_5": "bar_1", "product_3": "bar_1" } } } } } "},{"title":"Filtering with extraction functions","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#filtering-with-extraction-functions","content":"All filters except the "spatial" filter support extraction functions. An extraction function is defined by setting the "extractionFn" field on a filter. See Extraction function for more details on extraction functions. If specified, the extraction function will be used to transform input values before the filter is applied. The example below shows a selector filter combined with an extraction function. 
This filter will transform input values according to the values defined in the lookup map; transformed values will then be matched with the string "bar_1". "},{"title":"Example: matches dimension values in [product_1, product_3, product_5] for the column product","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#example-matches-dimension-values-in-product_1-product_3-product_5-for-the-column-product","content":"{ "filter": { "type": "selector", "dimension": "product", "value": "bar_1", "extractionFn": { "type": "lookup", "lookup": { "type": "map", "map": { "product_1": "bar_1", "product_5": "bar_1", "product_3": "bar_1" } } } } } "},{"title":"Column types","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#column-types","content":"Druid supports filtering on timestamp, string, long, and float columns. Note that only string columns and columns produced with the 'auto' ingestion spec (also used by type-aware schema discovery) have bitmap indexes. Queries that filter on other column types must scan those columns. "},{"title":"Filtering on multi-value string columns","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#filtering-on-multi-value-string-columns","content":"All filters return true if any one of the dimension values satisfies the filter. Example: multi-value match behavior Given a multi-value STRING row with values ['a', 'b', 'c'], a filter such as { "type": "equality", "column": "someMultiValueColumn", "matchValueType": "STRING", "matchValue": "b" } will successfully match the entire row. This can sometimes produce unintuitive behavior when coupled with the implicit UNNEST functionality of Druid GroupBy and TopN queries. Additionally, contradictory filters that are perfectly legal in native queries may be defined, even though they cannot be expressed in SQL. Example: SQL "contradiction" This query is impossible to express as-is in SQL, since it is a contradiction that the SQL planner will optimize to false and match nothing. Given a multi-value STRING row with values ['a', 'b', 'c'], and a filter such as { "type": "and", "fields": [ { "type": "equality", "column": "someMultiValueColumn", "matchValueType": "STRING", "matchValue": "a" }, { "type": "equality", "column": "someMultiValueColumn", "matchValueType": "STRING", "matchValue": "b" } ] } will successfully match the entire row, but not match a row with value ['a', 'c']. To express this filter in SQL, use SQL multi-value string functions such as MV_CONTAINS, which can be optimized by the planner to the same native filters. "},{"title":"Filtering on numeric columns","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#filtering-on-numeric-columns","content":"Some filters, such as the equality and range filters, accept numeric match values directly since they include a secondary matchValueType parameter. When filtering on numeric columns using string-based filters such as the selector, in, and bound filters, you can write filter match values as if they were strings. In most cases, your filter will be converted into a numeric predicate and will be applied to the numeric column values directly. In some cases (such as the "regex" filter) the numeric column values will be converted to strings during the scan. 
Example: filtering on a specific value, myFloatColumn = 10.1 { "type": "equality", "column": "myFloatColumn", "matchValueType": "FLOAT", "matchValue": 10.1 } or with a selector filter: { "type": "selector", "dimension": "myFloatColumn", "value": "10.1" } Example: filtering on a range of values, 10 <= myFloatColumn < 20 { "type": "range", "column": "myFloatColumn", "matchValueType": "FLOAT", "lower": 10, "lowerOpen": false, "upper": 20, "upperOpen": true } or with a bound filter: { "type": "bound", "dimension": "myFloatColumn", "ordering": "numeric", "lower": "10", "lowerStrict": false, "upper": "20", "upperStrict": true } "},{"title":"Filtering on the timestamp column","type":1,"pageTitle":"Query filters","url":"/docs/27.0.0/querying/filters#filtering-on-the-timestamp-column","content":"Query filters can also be applied to the timestamp column. The timestamp column has long millisecond values. To refer to the timestamp column, use the string __time as the dimension name. Like numeric dimensions, timestamp filters should be specified as if the timestamp values were strings. If you want to interpret the timestamp with a specific format, timezone, or locale, the Time Format Extraction Function is useful. Example: filtering on a long timestamp value { "type": "equality", "column": "__time", "matchValueType": "LONG", "matchValue": 124457387532 } or with a selector filter: { "type": "selector", "dimension": "__time", "value": "124457387532" } Example: filtering on day of week using an extraction function { "type": "selector", "dimension": "__time", "value": "Friday", "extractionFn": { "type": "timeFormat", "format": "EEEE", "timeZone": "America/New_York", "locale": "en" } } Example: filtering on a set of ISO 8601 intervals { "type" : "interval", "dimension" : "__time", "intervals" : [ "2014-10-01T00:00:00.000Z/2014-10-07T00:00:00.000Z", "2014-11-15T00:00:00.000Z/2014-11-16T00:00:00.000Z" ] } "},{"title":"Post-aggregations","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/post-aggregations","content":"","keywords":""},{"title":"Arithmetic post-aggregator","type":1,"pageTitle":"Post-aggregations","url":"/docs/27.0.0/querying/post-aggregations#arithmetic-post-aggregator","content":"The arithmetic post-aggregator applies the provided function to the given fields from left to right. The fields can be aggregators or other post aggregators. Supported functions are +, -, *, /, pow and quotient. Note: / division always returns 0 if dividing by 0, regardless of the numerator. quotient division behaves like regular floating point division. Arithmetic post-aggregators always use floating point arithmetic. Arithmetic post-aggregators may also specify an ordering, which defines the order of resulting values when sorting results (this can be useful for topN queries for instance): If no ordering (or null) is specified, the default floating point ordering is used. numericFirst ordering always returns finite values first, followed by NaN, with infinite values last. The grammar for an arithmetic post aggregation is: postAggregation : { "type" : "arithmetic", "name" : <output_name>, "fn" : <arithmetic_function>, "fields": [<post_aggregator>, <post_aggregator>, ...], "ordering" : <null (default), or "numericFirst"> } "},{"title":"Field accessor post-aggregators","type":1,"pageTitle":"Post-aggregations","url":"/docs/27.0.0/querying/post-aggregations#field-accessor-post-aggregators","content":"These post-aggregators return the value produced by the specified aggregator. 
fieldName refers to the output name of the aggregator given in the aggregations portion of the query. For complex aggregators, like "cardinality" and "hyperUnique", the type of the post-aggregator determines what the post-aggregator will return. Use type "fieldAccess" to return the raw aggregation object, or use type "finalizingFieldAccess" to return a finalized value, such as an estimated cardinality. { "type" : "fieldAccess", "name": <output_name>, "fieldName" : <aggregator_name> } or { "type" : "finalizingFieldAccess", "name": <output_name>, "fieldName" : <aggregator_name> } "},{"title":"Constant post-aggregator","type":1,"pageTitle":"Post-aggregations","url":"/docs/27.0.0/querying/post-aggregations#constant-post-aggregator","content":"The constant post-aggregator always returns the specified value. { "type" : "constant", "name" : <output_name>, "value" : <numerical_value> } "},{"title":"Expression post-aggregator","type":1,"pageTitle":"Post-aggregations","url":"/docs/27.0.0/querying/post-aggregations#expression-post-aggregator","content":"The expression post-aggregator is defined using a Druid expression. { "type": "expression", "name": <output_name>, "expression": <post-aggregation expression>, "ordering" : <null (default), or "numericFirst"> } "},{"title":"Greatest / Least post-aggregators","type":1,"pageTitle":"Post-aggregations","url":"/docs/27.0.0/querying/post-aggregations#greatest--least-post-aggregators","content":"doubleGreatest and longGreatest compute the maximum of all fields and Double.NEGATIVE_INFINITY. doubleLeast and longLeast compute the minimum of all fields and Double.POSITIVE_INFINITY. The difference between the doubleMax aggregator and the doubleGreatest post-aggregator is that doubleMax returns the highest value of all rows for one specific column while doubleGreatest returns the highest value of multiple columns in one row. These are similar to the SQL MAX and GREATEST functions. Example: { "type" : "doubleGreatest", "name" : <output_name>, "fields": [<post_aggregator>, <post_aggregator>, ...] } "},{"title":"JavaScript post-aggregator","type":1,"pageTitle":"Post-aggregations","url":"/docs/27.0.0/querying/post-aggregations#javascript-post-aggregator","content":"Applies the provided JavaScript function to the given fields. Fields are passed as arguments to the JavaScript function in the given order. postAggregation : { "type": "javascript", "name": <output_name>, "fieldNames" : [<aggregator_name>, <aggregator_name>, ...], "function": <javascript function> } Example JavaScript aggregator: { "type": "javascript", "name": "absPercent", "fieldNames": ["delta", "total"], "function": "function(delta, total) { return 100 * Math.abs(delta) / total; }" } info JavaScript-based functionality is disabled by default. Please refer to the Druid JavaScript programming guide for guidelines about using Druid's JavaScript functionality, including instructions on how to enable it. "},{"title":"HyperUnique Cardinality post-aggregator","type":1,"pageTitle":"Post-aggregations","url":"/docs/27.0.0/querying/post-aggregations#hyperunique-cardinality-post-aggregator","content":"The hyperUniqueCardinality post aggregator is used to wrap a hyperUnique object such that it can be used in post aggregations. 
{ "type" : "hyperUniqueCardinality", "name": <output name>, "fieldName" : <the name field value of the hyperUnique aggregator> } It can be used in a sample calculation as follows: "aggregations" : [ {"type" : "count", "name" : "rows"}, {"type" : "hyperUnique", "name" : "unique_users", "fieldName" : "uniques"} ], "postAggregations" : [{ "type" : "arithmetic", "name" : "average_users_per_row", "fn" : "/", "fields" : [ { "type" : "hyperUniqueCardinality", "fieldName" : "unique_users" }, { "type" : "fieldAccess", "name" : "rows", "fieldName" : "rows" } ] }] This post-aggregator will inherit the rounding behavior of the aggregator it references. Note that this inheritance is only effective if you directly reference an aggregator. Going through another post-aggregator, for example, will cause the user-specified rounding behavior to get lost and default to "no rounding". "},{"title":"Example Usage","type":1,"pageTitle":"Post-aggregations","url":"/docs/27.0.0/querying/post-aggregations#example-usage","content":"In this example, let’s calculate a simple percentage using post aggregators. Let’s imagine our data set has a metric called "total". The format of the query JSON is as follows: { ... "aggregations" : [ { "type" : "count", "name" : "rows" }, { "type" : "doubleSum", "name" : "tot", "fieldName" : "total" } ], "postAggregations" : [{ "type" : "arithmetic", "name" : "average", "fn" : "/", "fields" : [ { "type" : "fieldAccess", "name" : "tot", "fieldName" : "tot" }, { "type" : "fieldAccess", "name" : "rows", "fieldName" : "rows" } ] }] ... } { ... "aggregations" : [ { "type" : "doubleSum", "name" : "tot", "fieldName" : "total" }, { "type" : "doubleSum", "name" : "part", "fieldName" : "part" } ], "postAggregations" : [{ "type" : "arithmetic", "name" : "part_percentage", "fn" : "*", "fields" : [ { "type" : "arithmetic", "name" : "ratio", "fn" : "/", "fields" : [ { "type" : "fieldAccess", "name" : "part", "fieldName" : "part" }, { "type" : "fieldAccess", "name" : "tot", "fieldName" : "tot" } ] }, { "type" : "constant", "name": "const", "value" : 100 } ] }] ... } The same could be computed using an expression post-aggregator: { ... "aggregations" : [ { "type" : "doubleSum", "name" : "tot", "fieldName" : "total" }, { "type" : "doubleSum", "name" : "part", "fieldName" : "part" } ], "postAggregations" : [{ "type" : "expression", "name" : "part_percentage", "expression" : "100 * (part / tot)" }] ... } "},{"title":"Query context","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/query-context","content":"","keywords":""},{"title":"General parameters","type":1,"pageTitle":"Query context","url":"/docs/27.0.0/querying/query-context#general-parameters","content":"Unless otherwise noted, the following parameters apply to all query types. Parameter\tDefault\tDescriptiontimeout\tdruid.server.http.defaultQueryTimeout\tQuery timeout in millis, beyond which unfinished queries will be cancelled. 0 timeout means no timeout (up to the server-side maximum query timeout, druid.server.http.maxQueryTimeout). To set the default timeout and maximum timeout, see Broker configuration priority\tThe default priority is one of the following: the value of priority in the query context, if set; the value of the runtime property druid.query.default.context.priority, if set and not null; 0 if the priority is not set in the query context or runtime properties.\tQuery priority. Queries with higher priority get precedence for computational resources. lane\tnull\tQuery lane, used to control usage limits on classes of queries. 
See Broker configuration for more details. queryId\tauto-generated\tUnique identifier given to this query. If a query ID is set or known, this can be used to cancel the query brokerService\tnull\tBroker service to which this query should be routed. This parameter is honored only by a broker selector strategy of type manual. See Router strategies for more details. useCache\ttrue\tFlag indicating whether to leverage the query cache for this query. When set to false, it disables reading from the query cache for this query. When set to true, Apache Druid uses druid.broker.cache.useCache or druid.historical.cache.useCache to determine whether or not to read from the query cache populateCache\ttrue\tFlag indicating whether to save the results of the query to the query cache. Primarily used for debugging. When set to false, it disables saving the results of this query to the query cache. When set to true, Druid uses druid.broker.cache.populateCache or druid.historical.cache.populateCache to determine whether or not to save the results of this query to the query cache useResultLevelCache\ttrue\tFlag indicating whether to leverage the result level cache for this query. When set to false, it disables reading from the query cache for this query. When set to true, Druid uses druid.broker.cache.useResultLevelCache to determine whether or not to read from the result-level query cache populateResultLevelCache\ttrue\tFlag indicating whether to save the results of the query to the result level cache. Primarily used for debugging. When set to false, it disables saving the results of this query to the query cache. When set to true, Druid uses druid.broker.cache.populateResultLevelCache to determine whether or not to save the results of this query to the result-level query cache bySegment\tfalse\tNative queries only. Return "by segment" results. Primarily used for debugging, setting it to true returns results associated with the data segment they came from finalize\tN/A\tFlag indicating whether to "finalize" aggregation results. Primarily used for debugging. For instance, the hyperUnique aggregator returns the full HyperLogLog sketch instead of the estimated cardinality when this flag is set to false maxScatterGatherBytes\tdruid.server.http.maxScatterGatherBytes\tMaximum number of bytes gathered from data processes such as Historicals and realtime processes to execute a query. This parameter can be used to further reduce maxScatterGatherBytes limit at query time. See Broker configuration for more details. maxQueuedBytes\tdruid.broker.http.maxQueuedBytes\tMaximum number of bytes queued per query before exerting backpressure on the channel to the data server. Similar to maxScatterGatherBytes, except unlike that configuration, this one will trigger backpressure rather than query failure. Zero means disabled. serializeDateTimeAsLong\tfalse\tIf true, DateTime is serialized as long in the result returned by Broker and the data transportation between Broker and compute process serializeDateTimeAsLongInner\tfalse\tIf true, DateTime is serialized as long in the data transportation between Broker and compute process enableParallelMerge\ttrue\tEnable parallel result merging on the Broker. Note that druid.processing.merge.useParallelMergePool must be enabled for this setting to be set to true. See Broker configuration for more details. parallelMergeParallelism\tdruid.processing.merge.pool.parallelism\tMaximum number of parallel threads to use for parallel result merging on the Broker. 
See Broker configuration for more details. parallelMergeInitialYieldRows\tdruid.processing.merge.task.initialYieldNumRows\tNumber of rows to yield per ForkJoinPool merge task for parallel result merging on the Broker, before forking off a new task to continue merging sequences. See Broker configuration for more details. parallelMergeSmallBatchRows\tdruid.processing.merge.task.smallBatchNumRows\tSize of result batches to operate on in ForkJoinPool merge tasks for parallel result merging on the Broker. See Broker configuration for more details. useFilterCNF\tfalse\tIf true, Druid will attempt to convert the query filter to Conjunctive Normal Form (CNF). During query processing, columns can be pre-filtered by intersecting the bitmap indexes of all values that match the eligible filters, often greatly reducing the raw number of rows which need to be scanned. But this effect only happens for the top level filter, or individual clauses of a top level 'and' filter. As such, filters in CNF potentially have a higher chance to utilize a large number of bitmap indexes on string columns during pre-filtering. However, this setting should be used with great caution, as it can sometimes have a negative effect on performance, and in some cases, the act of computing CNF of a filter can be expensive. We recommend hand tuning your filters to produce an optimal form if possible, or at least verifying through experimentation that using this parameter actually improves your query performance with no ill-effects. secondaryPartitionPruning\ttrue\tEnable secondary partition pruning on the Broker. The Broker will always prune unnecessary segments from the input scan based on a filter on time intervals, but if the data is further partitioned with hash or range partitioning, this option will enable additional pruning based on a filter on secondary partition dimensions. enableJoinLeftTableScanDirect\tfalse\tThis flag applies to queries which have joins. For joins where the left child is a simple scan with a filter, by default, Druid will run the scan as a query and then join the results to the right child on the Broker. Setting this flag to true overrides that behavior and Druid will attempt to push the join to data servers instead. Please note that the flag could be applicable to queries even if there is no explicit join, since queries can be internally translated into a join by the SQL planner. debug\tfalse\tFlag indicating whether to enable debugging outputs for the query. When set to false, no additional logs will be produced (logs produced will be entirely dependent on your logging level). When set to true, the following additional logs will be produced: - Log the stack trace of the exception (if any) produced by the query setProcessingThreadNames\ttrue\tWhether processing thread names will be set to queryType_dataSource_intervals while processing a query. This aids in interpreting thread dumps, and is on by default. Query overhead can be reduced slightly by setting this to false. This has a tiny effect in most scenarios, but can be meaningful in high-QPS, low-per-segment-processing-time scenarios. maxNumericInFilters\t-1\tMaximum number of numeric values that can be compared for a string type dimension when the entire SQL WHERE clause of a query translates only to an OR of Bound filters. By default, Druid does not restrict the number of numeric Bound filters on string columns, although this situation may block other queries from running. 
Set this parameter to a smaller value to prevent Druid from running queries that have prohibitively long segment processing times. The optimal limit requires some trial and error; we recommend starting with 100. Users who submit a query that exceeds the limit of maxNumericInFilters should rewrite their queries to use strings in the WHERE clause instead of numbers. For example, WHERE someString IN ('123', '456'). This value cannot exceed the set system configuration druid.sql.planner.maxNumericInFilters. This value is ignored if druid.sql.planner.maxNumericInFilters is not set explicitly. inSubQueryThreshold\t2147483647\tThreshold for minimum number of values in an IN clause to convert the query to a JOIN operation on an inlined table rather than a predicate. A threshold of 0 forces usage of an inline table in all cases; a threshold of [Integer.MAX_VALUE] forces usage of OR in all cases. "},{"title":"Druid SQL parameters","type":1,"pageTitle":"Query context","url":"/docs/27.0.0/querying/query-context#druid-sql-parameters","content":"See SQL query context for query context parameters specific to Druid SQL queries. "},{"title":"Parameters by query type","type":1,"pageTitle":"Query context","url":"/docs/27.0.0/querying/query-context#parameters-by-query-type","content":"Some query types offer context parameters specific to that query type. "},{"title":"TopN","type":1,"pageTitle":"Query context","url":"/docs/27.0.0/querying/query-context#topn","content":"Parameter\tDefault\tDescriptionminTopNThreshold\t1000\tThe top minTopNThreshold local results from each segment are returned for merging to determine the global topN. "},{"title":"Timeseries","type":1,"pageTitle":"Query context","url":"/docs/27.0.0/querying/query-context#timeseries","content":"Parameter\tDefault\tDescriptionskipEmptyBuckets\tfalse\tDisable timeseries zero-filling behavior, so only buckets with results will be returned. "},{"title":"Join filter","type":1,"pageTitle":"Query context","url":"/docs/27.0.0/querying/query-context#join-filter","content":"Parameter\tDefault\tDescriptionenableJoinFilterPushDown\ttrue\tControls whether a join query will attempt filter push down, which reduces the number of rows that have to be compared in a join operation. enableJoinFilterRewrite\ttrue\tControls whether filter clauses that reference non-base table columns will be rewritten into filters on base table columns. enableJoinFilterRewriteValueColumnFilters\tfalse\tControls whether Druid rewrites non-base table filters on non-key columns in the non-base table. Requires a scan of the non-base table. enableRewriteJoinToFilter\ttrue\tControls whether a join can be pushed partially or fully to the base table as a filter at runtime. joinFilterRewriteMaxSize\t10000\tThe maximum size of the correlated value set used for filter rewrites. Set this limit to prevent excessive memory use. "},{"title":"GroupBy","type":1,"pageTitle":"Query context","url":"/docs/27.0.0/querying/query-context#groupby","content":"See the list of GroupBy query context parameters available on the groupBy query page. "},{"title":"Vectorization parameters","type":1,"pageTitle":"Query context","url":"/docs/27.0.0/querying/query-context#vectorization-parameters","content":"The GroupBy and Timeseries query types can run in vectorized mode, which speeds up query execution by processing batches of rows at a time. Not all queries can be vectorized. 
In particular, vectorization currently has the following requirements: All query-level filters must either be able to run on bitmap indexes or must offer vectorized row-matchers. These include selector, bound, in, like, regex, search, and, or, and not. All filters in filtered aggregators must offer vectorized row-matchers. All aggregators must offer vectorized implementations. These include count, doubleSum, floatSum, longSum, longMin, longMax, doubleMin, doubleMax, floatMin, floatMax, longAny, doubleAny, floatAny, stringAny, hyperUnique, filtered, approxHistogram, approxHistogramFold, and fixedBucketsHistogram (with numerical input). All virtual columns must offer vectorized implementations. Currently, for expression virtual columns, support for vectorization is decided on a per-expression basis, depending on the type of input and the functions used by the expression. See the currently supported list in the expression documentation. For GroupBy: all dimension specs must be "default" (no extraction functions or filtered dimension specs). For GroupBy: no multi-value dimensions. For Timeseries: no "descending" order. Only immutable segments (not real-time). Only table datasources (not joins, subqueries, lookups, or inline datasources). Other query types (like TopN, Scan, Select, and Search) ignore the vectorize parameter, and will execute without vectorization. These query types will ignore the vectorize parameter even if it is set to "force". Parameter\tDefault\tDescriptionvectorize\ttrue\tEnables or disables vectorized query execution. Possible values are false (disabled), true (enabled if possible, disabled otherwise, on a per-segment basis), and force (enabled, and groupBy or timeseries queries that cannot be vectorized will fail). The "force" setting is meant to aid in testing, and is not generally useful in production (since real-time segments can never be processed with vectorized execution, any queries on real-time data will fail). This will override druid.query.default.context.vectorize if it's set. vectorSize\t512\tSets the row batching size for a particular query. This will override druid.query.default.context.vectorSize if it's set. vectorizeVirtualColumns\ttrue\tEnables or disables vectorized query processing of queries with virtual columns, layered on top of vectorize (vectorize must also be set to true for a query to utilize vectorization). Possible values are false (disabled), true (enabled if possible, disabled otherwise, on a per-segment basis), and force (enabled, and groupBy or timeseries queries with virtual columns that cannot be vectorized will fail). The "force" setting is meant to aid in testing, and is not generally useful in production. This will override druid.query.default.context.vectorizeVirtualColumns if it's set. "},{"title":"Query from deep storage","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/query-deep-storage","content":"","keywords":""},{"title":"Keep segments in deep storage only","type":1,"pageTitle":"Query from deep storage","url":"/docs/27.0.0/querying/query-deep-storage#keep-segments-in-deep-storage-only","content":"Any data you ingest into Druid is already stored in deep storage, so you don't need to perform any additional configuration from that perspective. However, to take advantage of the cost savings that querying from deep storage provides, make sure not all your segments get loaded onto Historical processes. To do this, configure load rules to manage which segments are only in deep storage and which get loaded onto Historical processes. 
The easiest way to do this is to explicitly configure the segments that don't get loaded onto Historical processes. Set tieredReplicants to an empty object and useDefaultTierForNull to false. For example, if you configure the following rule for a datasource: [ { "interval": "2016-06-27T00:00:00.000Z/2016-06-27T02:59:00.000Z", "tieredReplicants": {}, "useDefaultTierForNull": false, "type": "loadByInterval" } ] Any segment that falls within the specified interval exists only in deep storage. Segments that aren't in this interval use the default cluster load rules or any other load rules you configure. To configure the load rules through the Druid console, go to Datasources > ... in the Actions column > Edit retention rules. Then, paste the provided JSON into the JSON tab. You can verify that a segment is not loaded on any Historical tiers by querying the Druid metadata table: SELECT "segment_id", "replication_factor" FROM sys."segments" WHERE "replication_factor" = 0 AND "datasource" = 'YOUR_DATASOURCE' Segments with a replication_factor of 0 are not assigned to any Historical tiers. Queries against these segments are run directly against the segment in deep storage. You can also confirm this through the Druid console. On the Segments page, see the Replication factor column. Keep the following in mind when working with load rules to control what exists only in deep storage: At least one of the segments in a datasource must be loaded onto a Historical process so that Druid can plan the query. The segment on the Historical process can be any segment from the datasource. It does not need to be a specific segment. One way to verify that a datasource has at least one segment on a Historical process is if it's visible in the Druid console. The actual number of replicas may differ from the replication factor temporarily as Druid processes your load rules. "},{"title":"Run a query from deep storage","type":1,"pageTitle":"Query from deep storage","url":"/docs/27.0.0/querying/query-deep-storage#run-a-query-from-deep-storage","content":""},{"title":"Submit a query","type":1,"pageTitle":"Query from deep storage","url":"/docs/27.0.0/querying/query-deep-storage#submit-a-query","content":"You can query data from deep storage by submitting a query to the API using POST /sql/statements or the Druid console. Druid uses the multi-stage query (MSQ) task engine to perform the query. To run a query from deep storage, send your query to the Router using the POST method: POST https://ROUTER:8888/druid/v2/sql/statements Submitting a query from deep storage uses the same syntax as any other Druid SQL query, where the query is contained in the "query" field in the JSON object within the request payload. For example: {"query" : "SELECT COUNT(*) FROM data_source WHERE foo = 'bar'"} Generally, the request body fields are the same between the sql and sql/statements endpoints. There are additional context parameters for sql/statements specifically: executionMode (required) determines how query results are fetched. Set this to ASYNC. selectDestination (optional), when set to durableStorage, instructs Druid to write the results from SELECT queries to durable storage. Note that this requires you to have durable storage for MSQ enabled. 
The following sample query includes the two additional context parameters that querying from deep storage supports: curl --location 'http://localhost:8888/druid/v2/sql/statements' \\ --header 'Content-Type: application/json' \\ --data '{ "query":"SELECT * FROM \\"YOUR_DATASOURCE\\" where \\"__time\\" >TIMESTAMP'\\''2017-09-01'\\'' and \\"__time\\" <= TIMESTAMP'\\''2017-09-02'\\''", "context":{ "executionMode":"ASYNC", "selectDestination": "durableStorage" } }' The response for submitting a query includes the query ID along with basic information, such as when you submitted the query and the schema of the results: { "queryId": "query-ALPHANUMERIC-STRING", "state": "ACCEPTED", "createdAt": CREATION_TIMESTAMP, "schema": [ { "name": COLUMN_NAME, "type": COLUMN_TYPE, "nativeType": COLUMN_TYPE }, ... ], "durationMs": DURATION_IN_MS } "},{"title":"Get query status","type":1,"pageTitle":"Query from deep storage","url":"/docs/27.0.0/querying/query-deep-storage#get-query-status","content":"You can check the status of a query with the following API call: GET https://ROUTER:8888/druid/v2/sql/statements/QUERYID The call returns the status of the query, such as ACCEPTED or RUNNING. Before you attempt to get results, make sure the state is SUCCESS. When you check the status on a successful query, it includes useful information about your query results, including a sample record and information about how the results are organized by pages. The information for each page includes the following: numRows: the number of rows in that page of results. sizeInBytes: the size of the page. id: the indexed page number that you can use to reference a specific page when you get query results. You can use page as a parameter to refine the results you retrieve. The following snippet shows the structure of the result object: { ... "result": { "numTotalRows": INTEGER, "totalSizeInBytes": INTEGER, "dataSource": "__query_select", "sampleRecords": [ [ RECORD_1, RECORD_2, ... ] ], "pages": [ { "numRows": INTEGER, "sizeInBytes": INTEGER, "id": INTEGER_PAGE_NUMBER } ... ] } } "},{"title":"Get query results","type":1,"pageTitle":"Query from deep storage","url":"/docs/27.0.0/querying/query-deep-storage#get-query-results","content":"Only the user who submitted a query can retrieve the results for the query. Use the following endpoint to retrieve results: GET https://ROUTER:8888/druid/v2/sql/statements/QUERYID/results?page=PAGENUMBER&size=RESULT_SIZE&timeout=TIMEOUT_MS Results are returned in JSON format. You can use the optional page, size, and timeout parameters to refine your results. You can retrieve the page information for your results by fetching the status of the completed query. When you try to get results for a query from deep storage, you may receive an error that states the query is still running. Wait until the query completes before you try again. 
"},{"title":"Further reading","type":1,"pageTitle":"Query from deep storage","url":"/docs/27.0.0/querying/query-deep-storage#further-reading","content":"Query from deep storage tutorialQuery from deep storage API reference "},{"title":"Query execution","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/query-execution","content":"","keywords":""},{"title":"Datasource type","type":1,"pageTitle":"Query execution","url":"/docs/27.0.0/querying/query-execution#datasource-type","content":""},{"title":"table","type":1,"pageTitle":"Query execution","url":"/docs/27.0.0/querying/query-execution#table","content":"Queries that operate directly on table datasources are executed using a scatter-gather approach led by the Broker process. The process looks like this: The Broker identifies which segments are relevant to the query based on the "intervals"parameter. Segments are always partitioned by time, so any segment whose interval overlaps the query interval is potentially relevant. The Broker may additionally further prune the segment list based on the "filter", if the input data was partitioned by range using the single_dim partitionsSpec, and if the filter matches the dimension used for partitioning. The Broker, having pruned the list of segments for the query, forwards the query to data servers (like Historicals and tasks running on MiddleManagers) that are currently serving those segments. For all query types except Scan, data servers process each segment in parallel and generate partial results for each segment. The specific processing that is done depends on the query type. These partial results may be cached if query caching is enabled. For Scan queries, segments are processed in order by a single thread, and results are not cached. The Broker receives partial results from each data server, merges them into the final result set, and returns them to the caller. For Timeseries and Scan queries, and for GroupBy queries where there is no sorting, the Broker is able to do this in a streaming fashion. Otherwise, the Broker fully computes the result set before returning anything. "},{"title":"lookup","type":1,"pageTitle":"Query execution","url":"/docs/27.0.0/querying/query-execution#lookup","content":"Queries that operate directly on lookup datasources (without a join) are executed on the Broker that received the query, using its local copy of the lookup. All registered lookup tables are preloaded in-memory on the Broker. The query runs single-threaded. Execution of queries that use lookups as right-hand inputs to a join are executed in a way that depends on their "base" (bottom-leftmost) datasource, as described in the join section below. "},{"title":"union","type":1,"pageTitle":"Query execution","url":"/docs/27.0.0/querying/query-execution#union","content":"Queries that operate directly on union datasources are split up on the Broker into a separate query for each table that is part of the union. Each of these queries runs separately, and the Broker merges their results together. "},{"title":"inline","type":1,"pageTitle":"Query execution","url":"/docs/27.0.0/querying/query-execution#inline","content":"Queries that operate directly on inline datasources are executed on the Broker that received the query. The query runs single-threaded. Execution of queries that use inline datasources as right-hand inputs to a join are executed in a way that depends on their "base" (bottom-leftmost) datasource, as described in the join section below. 
"},{"title":"query","type":1,"pageTitle":"Query execution","url":"/docs/27.0.0/querying/query-execution#query","content":"Query datasources are subqueries. Each subquery is executed as if it was its own query and the results are brought back to the Broker. Then, the Broker continues on with the rest of the query as if the subquery was replaced with an inline datasource. In most cases, Druid buffers subquery results in memory on the Broker before the rest of the query proceeds. Therefore, subqueries execute sequentially. The total number of rows buffered across all subqueries of a given query cannot exceed the druid.server.http.maxSubqueryRows which defaults to 100000 rows, or thedruid.server.http.maxSubqueryBytes if set. Otherwise, Druid throws a resource limit exceeded exception. There is one exception: if the outer query and all subqueries are the groupBy type, then subquery results can be processed in a streaming fashion and the druid.server.http.maxSubqueryRows and druid.server.http.maxSubqueryByteslimits do not apply. "},{"title":"join","type":1,"pageTitle":"Query execution","url":"/docs/27.0.0/querying/query-execution#join","content":"Join datasources are handled using a broadcast hash-join approach. The Broker executes any subqueries that are inputs the join, as described in the query section, and replaces them with inline datasources. The Broker flattens a join tree, if present, into a "base" datasource (the bottom-leftmost one) and other leaf datasources (the rest). Query execution proceeds using the same structure that the base datasource would use on its own. If the base datasource is a table, segments are pruned based on "intervals" as usual, and the query is executed on the cluster by forwarding it to all relevant data servers in parallel. If the base datasource is a lookup orinline datasource (including an inline datasource that was the result of inlining a subquery), the query is executed on the Broker itself. The base query cannot be a union, because unions are not currently supported as inputs to a join. Before beginning to process the base datasource, the server(s) that will execute the query first inspect all the non-base leaf datasources to determine if a new hash table needs to be built for the upcoming hash join. Currently, lookups do not require new hash tables to be built (because they are preloaded), but inline datasources do. Query execution proceeds again using the same structure that the base datasource would use on its own, with one addition: while processing the base datasource, Druid servers will use the hash tables built from the other join inputs to produce the join result row-by-row, and query engines will operate on the joined rows rather than the base rows. 
"},{"title":"Scan queries","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/scan-query","content":"","keywords":""},{"title":"Example results","type":1,"pageTitle":"Scan queries","url":"/docs/27.0.0/querying/scan-query#example-results","content":"The format of the result when resultFormat equals list: [{ "segmentId" : "wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9", "columns" : [ "timestamp", "robot", "namespace", "anonymous", "unpatrolled", "page", "language", "newpage", "user", "count", "added", "delta", "variation", "deleted" ], "events" : [ { "timestamp" : "2013-01-01T00:00:00.000Z", "robot" : "1", "namespace" : "article", "anonymous" : "0", "unpatrolled" : "0", "page" : "11._korpus_(NOVJ)", "language" : "sl", "newpage" : "0", "user" : "EmausBot", "count" : 1.0, "added" : 39.0, "delta" : 39.0, "variation" : 39.0, "deleted" : 0.0 }, { "timestamp" : "2013-01-01T00:00:00.000Z", "robot" : "0", "namespace" : "article", "anonymous" : "0", "unpatrolled" : "0", "page" : "112_U.S._580", "language" : "en", "newpage" : "1", "user" : "MZMcBride", "count" : 1.0, "added" : 70.0, "delta" : 70.0, "variation" : 70.0, "deleted" : 0.0 }, { "timestamp" : "2013-01-01T00:00:00.000Z", "robot" : "0", "namespace" : "article", "anonymous" : "0", "unpatrolled" : "0", "page" : "113_U.S._243", "language" : "en", "newpage" : "1", "user" : "MZMcBride", "count" : 1.0, "added" : 77.0, "delta" : 77.0, "variation" : 77.0, "deleted" : 0.0 } ] } ] The format of the result when resultFormat equals compactedList: [{ "segmentId" : "wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9", "columns" : [ "timestamp", "robot", "namespace", "anonymous", "unpatrolled", "page", "language", "newpage", "user", "count", "added", "delta", "variation", "deleted" ], "events" : [ ["2013-01-01T00:00:00.000Z", "1", "article", "0", "0", "11._korpus_(NOVJ)", "sl", "0", "EmausBot", 1.0, 39.0, 39.0, 39.0, 0.0], ["2013-01-01T00:00:00.000Z", "0", "article", "0", "0", "112_U.S._580", "en", "1", "MZMcBride", 1.0, 70.0, 70.0, 70.0, 0.0], ["2013-01-01T00:00:00.000Z", "0", "article", "0", "0", "113_U.S._243", "en", "1", "MZMcBride", 1.0, 77.0, 77.0, 77.0, 0.0] ] } ] "},{"title":"Time ordering","type":1,"pageTitle":"Scan queries","url":"/docs/27.0.0/querying/scan-query#time-ordering","content":"The Scan query currently supports ordering based on timestamp for non-legacy queries. Note that using time ordering will yield results that do not indicate which segment rows are from (segmentId will show up as null). Furthermore, time ordering is only supported where the result set limit is less than druid.query.scan.maxRowsQueuedForOrderingrows or all segments scanned have fewer than druid.query.scan.maxSegmentPartitionsOrderedInMemory partitions. Also, time ordering is not supported for queries issued directly to historicals unless a list of segments is specified. The reasoning behind these limitations is that the implementation of time ordering uses two strategies that can consume too much heap memory if left unbounded. These strategies (listed below) are chosen on a per-Historical basis depending on query result set limit and the number of segments being scanned. Priority Queue: Each segment on a Historical is opened sequentially. Every row is added to a bounded priority queue which is ordered by timestamp. For every row above the result set limit, the row with the earliest (if descending) or latest (if ascending) timestamp will be dequeued. 
After every row has been processed, the sorted contents of the priority queue are streamed back to the Broker(s) in batches. Attempting to load too many rows into memory runs the risk of Historical nodes running out of memory. The druid.query.scan.maxRowsQueuedForOrdering property protects against this by limiting the number of rows in the query result set when time ordering is used. N-Way Merge: For each segment, each partition is opened in parallel. Since each partition's rows are already time-ordered, an n-way merge can be performed on the results from each partition. This approach doesn't persist the entire result set in memory (like the Priority Queue) as it streams back batches as they are returned from the merge function. However, attempting to query too many partitions could also result in high memory usage due to the need to open decompression and decoding buffers for each. The druid.query.scan.maxSegmentPartitionsOrderedInMemory limit protects against this by capping the number of partitions opened at any time when time ordering is used. Both druid.query.scan.maxRowsQueuedForOrdering and druid.query.scan.maxSegmentPartitionsOrderedInMemory are configurable and can be tuned based on hardware specs and number of dimensions being queried. These config properties can also be overridden using the maxRowsQueuedForOrdering and maxSegmentPartitionsOrderedInMemory properties in the query context (see the Query Context Properties section). "},{"title":"Legacy mode","type":1,"pageTitle":"Scan queries","url":"/docs/27.0.0/querying/scan-query#legacy-mode","content":"The Scan query supports a legacy mode designed for protocol compatibility with the former scan-query contrib extension. In legacy mode you can expect the following behavior changes: The __time column is returned as "timestamp" rather than "__time". This will take precedence over any other column you may have that is named "timestamp". The __time column is included in the list of columns even if you do not specifically ask for it. Timestamps are returned as ISO 8601 time strings rather than integers (milliseconds since 1970-01-01 00:00:00 UTC). Legacy mode can be triggered either by passing "legacy" : true in your query JSON, or by setting druid.query.scan.legacy = true on your Druid processes. If you were previously using the scan-query contrib extension, the best way to migrate is to activate legacy mode during a rolling upgrade, then switch it off after the upgrade is complete. "},{"title":"Configuration Properties","type":1,"pageTitle":"Scan queries","url":"/docs/27.0.0/querying/scan-query#configuration-properties","content":"Configuration properties: property\tdescription\tvalues\tdefaultdruid.query.scan.maxRowsQueuedForOrdering\tThe maximum number of rows returned when time ordering is used\tAn integer in [1, 2147483647]\t100000 druid.query.scan.maxSegmentPartitionsOrderedInMemory\tThe maximum number of segments scanned per Historical when time ordering is used\tAn integer in [1, 2147483647]\t50 druid.query.scan.legacy\tWhether legacy mode should be turned on for Scan queries\ttrue or false\tfalse "},{"title":"Query context properties","type":1,"pageTitle":"Scan queries","url":"/docs/27.0.0/querying/scan-query#query-context-properties","content":"property\tdescription\tvalues\tdefaultmaxRowsQueuedForOrdering\tThe maximum number of rows returned when time ordering is used. 
Overrides the identically named config.\tAn integer in [1, 2147483647]\tdruid.query.scan.maxRowsQueuedForOrdering maxSegmentPartitionsOrderedInMemory\tThe maximum number of segments scanned per historical when time ordering is used. Overrides the identically named config.\tAn integer in [1, 2147483647]\tdruid.query.scan.maxSegmentPartitionsOrderedInMemory Sample query context JSON object: { "maxRowsQueuedForOrdering": 100001, "maxSegmentPartitionsOrderedInMemory": 100 } "},{"title":"Search queries","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/searchquery","content":"","keywords":""},{"title":"Implementation details","type":1,"pageTitle":"Search queries","url":"/docs/27.0.0/querying/searchquery#implementation-details","content":"Strategies Search queries can be executed using two different strategies. The default strategy is determined by the "druid.query.search.searchStrategy" runtime property on the Broker. This can be overridden using "searchStrategy" in the query context. If neither the context field nor the property is set, the "useIndexes" strategy will be used. "useIndexes" strategy, the default, first categorizes search dimensions into two groups according to their support for bitmap indexes. It then applies an index-only execution plan to the group of dimensions that support bitmap indexes and a cursor-based execution plan to the rest. The index-only plan uses only indexes for search query processing. For each dimension, it reads the bitmap index for each dimension value, evaluates the search predicate, and finally checks the time interval and filter predicates. For the cursor-based execution plan, please refer to the "cursorOnly" strategy. The index-only plan performs poorly for search dimensions of high cardinality, that is, dimensions whose values are mostly unique. "cursorOnly" strategy generates a cursor-based execution plan. This plan creates a cursor which reads a row from a queryableIndexSegment, and then evaluates search predicates. If some filters support bitmap indexes, the cursor can read only the rows which satisfy those filters, thereby saving I/O cost. However, it might be slow with filters of low selectivity. "auto" strategy uses a cost-based planner for choosing an optimal search strategy. It estimates the cost of index-only and cursor-based execution plans, and chooses the optimal one. Currently, it is not enabled by default due to the overhead of cost estimation. "},{"title":"Server configuration","type":1,"pageTitle":"Search queries","url":"/docs/27.0.0/querying/searchquery#server-configuration","content":"The following runtime properties apply: Property\tDescription\tDefaultdruid.query.search.searchStrategy\tDefault search query strategy.\tuseIndexes "},{"title":"Query context","type":1,"pageTitle":"Search queries","url":"/docs/27.0.0/querying/searchquery#query-context","content":"The following query context parameters apply: Property\tDescriptionsearchStrategy\tOverrides the value of druid.query.search.searchStrategy for this query. "},{"title":"SearchQuerySpec","type":1,"pageTitle":"Search queries","url":"/docs/27.0.0/querying/searchquery#searchqueryspec","content":""},{"title":"insensitive_contains","type":1,"pageTitle":"Search queries","url":"/docs/27.0.0/querying/searchquery#insensitive_contains","content":"If any part of a dimension value contains the value specified in this search query spec, regardless of case, a "match" occurs. 
The grammar is: { "type" : "insensitive_contains", "value" : "some_value" } "},{"title":"fragment","type":1,"pageTitle":"Search queries","url":"/docs/27.0.0/querying/searchquery#fragment","content":"If any part of a dimension value contains all of the values specified in this search query spec, regardless of case by default, a "match" occurs. The grammar is: { "type" : "fragment", "case_sensitive" : false, "values" : ["fragment1", "fragment2"] } "},{"title":"contains","type":1,"pageTitle":"Search queries","url":"/docs/27.0.0/querying/searchquery#contains","content":"If any part of a dimension value contains the value specified in this search query spec, a "match" occurs. The grammar is: { "type" : "contains", "case_sensitive" : true, "value" : "some_value" } "},{"title":"regex","type":1,"pageTitle":"Search queries","url":"/docs/27.0.0/querying/searchquery#regex","content":"If any part of a dimension value contains the pattern specified in this search query spec, a "match" occurs. The grammar is: { "type" : "regex", "pattern" : "some_pattern" } "},{"title":"SegmentMetadata queries","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/segmentmetadataquery","content":"","keywords":""},{"title":"intervals","type":1,"pageTitle":"SegmentMetadata queries","url":"/docs/27.0.0/querying/segmentmetadataquery#intervals","content":"If an interval is not specified, the query will use a default interval that spans a configurable period before the end time of the most recent segment. The length of this default time period is set in the Broker configuration via: druid.query.segmentMetadata.defaultHistory "},{"title":"toInclude","type":1,"pageTitle":"SegmentMetadata queries","url":"/docs/27.0.0/querying/segmentmetadataquery#toinclude","content":"There are 3 types of toInclude objects. "},{"title":"All","type":1,"pageTitle":"SegmentMetadata queries","url":"/docs/27.0.0/querying/segmentmetadataquery#all","content":"The grammar is as follows: "toInclude": { "type": "all"} "},{"title":"None","type":1,"pageTitle":"SegmentMetadata queries","url":"/docs/27.0.0/querying/segmentmetadataquery#none","content":"The grammar is as follows: "toInclude": { "type": "none"} "},{"title":"List","type":1,"pageTitle":"SegmentMetadata queries","url":"/docs/27.0.0/querying/segmentmetadataquery#list","content":"The grammar is as follows: "toInclude": { "type": "list", "columns": [<string list of column names>]} "},{"title":"analysisTypes","type":1,"pageTitle":"SegmentMetadata queries","url":"/docs/27.0.0/querying/segmentmetadataquery#analysistypes","content":"This is a list of properties that determines the amount of information returned about the columns, i.e. analyses to be performed on the columns. By default, the "cardinality", "interval", and "minmax" types will be used. If a property is not needed, omitting it from this list will result in a more efficient query. The default analysis types can be set in the Broker configuration via:druid.query.segmentMetadata.defaultAnalysisTypes Types of column analyses are described below: "},{"title":"cardinality","type":1,"pageTitle":"SegmentMetadata queries","url":"/docs/27.0.0/querying/segmentmetadataquery#cardinality","content":"cardinality is the number of unique values present in string columns. It is null for other column types. Druid examines the size of string column dictionaries to compute the cardinality value. There is one dictionary per column per segment. If merge is off (false), this reports the cardinality of each column of each segment individually. 
If merge is on (true), this reports the highest cardinality encountered for a particular column across all relevant segments. "},{"title":"minmax","type":1,"pageTitle":"SegmentMetadata queries","url":"/docs/27.0.0/querying/segmentmetadataquery#minmax","content":"Estimated min/max values for each column. Only reported for string columns. "},{"title":"size","type":1,"pageTitle":"SegmentMetadata queries","url":"/docs/27.0.0/querying/segmentmetadataquery#size","content":"size is the estimated total byte size as if the data were stored in text format. This is not the actual storage size of the column in Druid. If you want the actual storage size in bytes of a segment, look elsewhere. Some pointers: To get the storage size in bytes of an entire segment, check the size field in the sys.segments table. This is the size of the memory-mappable content. To get the storage size in bytes of a particular column in a particular segment, unpack the segment and look at the meta.smoosh file inside the archive. The difference between the third and fourth columns is the size in bytes. Currently, there is no API for retrieving this information. "},{"title":"interval","type":1,"pageTitle":"SegmentMetadata queries","url":"/docs/27.0.0/querying/segmentmetadataquery#interval","content":"intervals in the result will contain the list of intervals associated with the queried segments. "},{"title":"timestampSpec","type":1,"pageTitle":"SegmentMetadata queries","url":"/docs/27.0.0/querying/segmentmetadataquery#timestampspec","content":"timestampSpec in the result will contain the timestampSpec of data stored in segments. This can be null if the timestampSpec of segments was unknown or unmergeable (if merging is enabled). "},{"title":"queryGranularity","type":1,"pageTitle":"SegmentMetadata queries","url":"/docs/27.0.0/querying/segmentmetadataquery#querygranularity","content":"queryGranularity in the result will contain the query granularity of data stored in segments. This can be null if the query granularity of segments was unknown or unmergeable (if merging is enabled). "},{"title":"aggregators","type":1,"pageTitle":"SegmentMetadata queries","url":"/docs/27.0.0/querying/segmentmetadataquery#aggregators","content":"aggregators in the result will contain the list of aggregators usable for querying metric columns. This may be null if the aggregators are unknown or unmergeable (if merging is enabled). Merging can be strict or lenient. See lenientAggregatorMerge below for details. The form of the result is a map of column name to aggregator. "},{"title":"rollup","type":1,"pageTitle":"SegmentMetadata queries","url":"/docs/27.0.0/querying/segmentmetadataquery#rollup","content":"rollup in the result is true/false/null. When merging is enabled, if some segments have rollup and others do not, the result is null. "},{"title":"lenientAggregatorMerge","type":1,"pageTitle":"SegmentMetadata queries","url":"/docs/27.0.0/querying/segmentmetadataquery#lenientaggregatormerge","content":"Conflicts between aggregator metadata across segments can occur if some segments have unknown aggregators, or if two segments use incompatible aggregators for the same column (e.g. longSum changed to doubleSum). Aggregators can be merged strictly (the default) or leniently. With strict merging, if there are any segments with unknown aggregators, or any conflicts of any kind, the merged aggregators list will be null. With lenient merging, segments with unknown aggregators will be ignored, and conflicts between aggregators will only null out the aggregator for that particular column. 
In particular, with lenient merging, it is possible for an individual column's aggregator to be null. This will not occur with strict merging. "},{"title":"Select queries","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/select-query","content":"Select queries Older versions of Apache Druid included a Select query type. Since Druid 0.17.0, it has been removed and replaced by the Scan query, which offers improved memory usage and performance. This solves issues that users had with Select queries causing Druid to run out of memory or slow down.","keywords":""},{"title":"Nested columns","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/nested-columns","content":"","keywords":""},{"title":"Example nested data","type":1,"pageTitle":"Nested columns","url":"/docs/27.0.0/querying/nested-columns#example-nested-data","content":"The examples in this topic use the JSON data in nested_example_data.json. The file contains a simple facsimile of an order tracking and shipping table. When pretty-printed, a sample row in nested_example_data looks like this: { "time":"2022-6-14T10:32:08Z", "product":"Keyboard", "department":"Computers", "shipTo":{ "firstName": "Sandra", "lastName": "Beatty", "address": { "street": "293 Grant Well", "city": "Loischester", "state": "FL", "country": "TV", "postalCode": "88845-0066" }, "phoneNumbers": [ {"type":"primary","number":"1-788-771-7028 x8627" }, {"type":"secondary","number":"1-460-496-4884 x887"} ] }, "details"{"color":"plum","price":"40.00"} } "},{"title":"Native batch ingestion","type":1,"pageTitle":"Nested columns","url":"/docs/27.0.0/querying/nested-columns#native-batch-ingestion","content":"For native batch ingestion, you can use the SQL JSON functions to extract nested data as an alternative to using the flattenSpec input format. To configure a dimension as a nested data type, specify the json type for the dimension in the dimensions list in the dimensionsSpec property of your ingestion spec. For example, the following ingestion spec instructs Druid to ingest shipTo and details as JSON-type nested dimensions: { "type": "index_parallel", "spec": { "ioConfig": { "type": "index_parallel", "inputSource": { "type": "http", "uris": [ "https://static.imply.io/data/nested_example_data.json" ] }, "inputFormat": { "type": "json" } }, "dataSchema": { "granularitySpec": { "segmentGranularity": "day", "queryGranularity": "none", "rollup": false }, "dataSource": "nested_data_example", "timestampSpec": { "column": "time", "format": "auto" }, "dimensionsSpec": { "dimensions": [ "product", "department", { "type": "json", "name": "shipTo" }, { "type": "json", "name": "details" } ] }, "transformSpec": {} }, "tuningConfig": { "type": "index_parallel", "partitionsSpec": { "type": "dynamic" } } } } "},{"title":"Transform data during batch ingestion","type":1,"pageTitle":"Nested columns","url":"/docs/27.0.0/querying/nested-columns#transform-data-during-batch-ingestion","content":"You can use the SQL JSON functions to transform nested data and reference the transformed data in your ingestion spec. To do this, define the output name and expression in the transforms list in the transformSpec object of your ingestion spec. For example, the following ingestion spec extracts firstName, lastName and address from shipTo and creates a composite JSON object containing product, details and department. 
{ "type": "index_parallel", "spec": { "ioConfig": { "type": "index_parallel", "inputSource": { "type": "http", "uris": [ "https://static.imply.io/data/nested_example_data.json" ] }, "inputFormat": { "type": "json" } }, "dataSchema": { "granularitySpec": { "segmentGranularity": "day", "queryGranularity": "none", "rollup": false }, "dataSource": "nested_data_transform_example", "timestampSpec": { "column": "time", "format": "auto" }, "dimensionsSpec": { "dimensions": [ "firstName", "lastName", { "type": "json", "name": "address" }, { "type": "json", "name": "productDetails" } ] }, "transformSpec": { "transforms":[ { "type":"expression", "name":"firstName", "expression":"json_value(shipTo, '$.firstName')"}, { "type":"expression", "name":"lastName", "expression":"json_value(shipTo, '$.lastName')"}, { "type":"expression", "name":"address", "expression":"json_query(shipTo, '$.address')"}, { "type":"expression", "name":"productDetails", "expression":"json_object('product', product, 'details', details, 'department', department)"} ] } }, "tuningConfig": { "type": "index_parallel", "partitionsSpec": { "type": "dynamic" } } } } "},{"title":"SQL-based ingestion","type":1,"pageTitle":"Nested columns","url":"/docs/27.0.0/querying/nested-columns#sql-based-ingestion","content":"To ingest nested data using SQL-based ingestion, specify COMPLEX<json> as the value for type when you define the row signature—shipTo and details in the following example ingestion spec: REPLACE INTO msq_nested_data_example OVERWRITE ALL SELECT TIME_PARSE("time") as __time, product, department, shipTo, details FROM ( SELECT * FROM TABLE( EXTERN( '{"type":"http","uris":["https://static.imply.io/data/nested_example_data.json"]}', '{"type":"json"}', '[{"name":"time","type":"string"},{"name":"product","type":"string"},{"name":"department","type":"string"},{"name":"shipTo","type":"COMPLEX<json>"},{"name":"details","type":"COMPLEX<json>"}]' ) ) ) PARTITIONED BY ALL "},{"title":"Streaming ingestion","type":1,"pageTitle":"Nested columns","url":"/docs/27.0.0/querying/nested-columns#streaming-ingestion","content":"You can ingest nested data into Druid using the streaming method—for example, from a Kafka topic. When you define your supervisor spec, include a dimension with type json for each nested column. For example, the following supervisor spec from the Kafka ingestion tutorial contains dimensions for the nested columns event, agent, and geo_ip in datasource kttm-kafka. { "type": "kafka", "spec": { "ioConfig": { "type": "kafka", "consumerProperties": { "bootstrap.servers": "localhost:9092" }, "topic": "kttm", "inputFormat": { "type": "json" }, "useEarliestOffset": true }, "tuningConfig": { "type": "kafka" }, "dataSchema": { "dataSource": "kttm-kafka", "timestampSpec": { "column": "timestamp", "format": "iso" }, "dimensionsSpec": { "dimensions": [ "session", "number", "client_ip", "language", "adblock_list", "app_version", "path", "loaded_image", "referrer", "referrer_host", "server_ip", "screen", "window", { "type": "long", "name": "session_length" }, "timezone", "timezone_offset", { "type": "json", "name": "event" }, { "type": "json", "name": "agent" }, { "type": "json", "name": "geo_ip" } ] }, "granularitySpec": { "queryGranularity": "none", "rollup": false, "segmentGranularity": "day" } } } } The Kafka tutorial guides you through the steps to load sample nested data into a Kafka topic, then ingest the data into Druid. 
"},{"title":"Transform data during SQL-based ingestion","type":1,"pageTitle":"Nested columns","url":"/docs/27.0.0/querying/nested-columns#transform-data-during-sql-based-ingestion","content":"You can use the SQL JSON functions to transform nested data in your ingestion query. For example, the following ingestion query is the SQL-based version of the previous batch example—it extracts firstName, lastName, and address from shipTo and creates a composite JSON object containing product, details, and department. REPLACE INTO msq_nested_data_transform_example OVERWRITE ALL SELECT TIME_PARSE("time") as __time, JSON_VALUE(shipTo, '$.firstName') as firstName, JSON_VALUE(shipTo, '$.lastName') as lastName, JSON_QUERY(shipTo, '$.address') as address, JSON_OBJECT('product':product,'details':details, 'department':department) as productDetails FROM ( SELECT * FROM TABLE( EXTERN( '{"type":"http","uris":["https://static.imply.io/data/nested_example_data.json"]}', '{"type":"json"}', '[{"name":"time","type":"string"},{"name":"product","type":"string"},{"name":"department","type":"string"},{"name":"shipTo","type":"COMPLEX<json>"},{"name":"details","type":"COMPLEX<json>"}]' ) ) ) PARTITIONED BY ALL "},{"title":"Ingest a JSON string as COMPLEX<json>","type":1,"pageTitle":"Nested columns","url":"/docs/27.0.0/querying/nested-columns#ingest-a-json-string-as-complexjson","content":"If your source data contains serialized JSON strings, you can ingest the data as COMPLEX<JSON> as follows: During native batch ingestion, call the parse_json function in a transform object in the transformSpec.During SQL-based ingestion, use the PARSE_JSON keyword within your SELECT statement to transform the string values to JSON.If you are concerned that your data may not contain valid JSON, you can use try_parse_json for native batch or TRY_PARSE_JSON for SQL-based ingestion. For cases where the column does not contain valid JSON, Druid inserts a null value. If you are using a text input format like tsv, you need to use this method to ingest data into a COMPLEX<json> column. For example, consider the following deserialized row of the sample data set: {"time": "2022-06-13T10:10:35Z", "product": "Bike", "department":"Sports", "shipTo":"{\\"firstName\\": \\"Henry\\",\\"lastName\\": \\"Wuckert\\",\\"address\\": {\\"street\\": \\"5643 Jan Walk\\",\\"city\\": \\"Lake Bridget\\",\\"state\\": \\"HI\\",\\"country\\":\\"ME\\",\\"postalCode\\": \\"70204-2939\\"},\\"phoneNumbers\\": [{\\"type\\":\\"primary\\",\\"number\\":\\"593.475.0449 x86733\\" },{\\"type\\":\\"secondary\\",\\"number\\":\\"638-372-1210\\"}]}", "details":"{\\"color\\":\\"ivory\\", \\"price\\":955.00}"} The following examples demonstrate how to ingest the shipTo and details columns both as string type and as COMPLEX<json> in the shipTo_parsed and details_parsed columns. 
SQLNative batch REPLACE INTO deserialized_example OVERWRITE ALL WITH source AS (SELECT * FROM TABLE( EXTERN( '{"type":"inline","data":"{\\"time\\": \\"2022-06-13T10:10:35Z\\", \\"product\\": \\"Bike\\", \\"department\\":\\"Sports\\", \\"shipTo\\":\\"{\\\\\\"firstName\\\\\\": \\\\\\"Henry\\\\\\",\\\\\\"lastName\\\\\\": \\\\\\"Wuckert\\\\\\",\\\\\\"address\\\\\\": {\\\\\\"street\\\\\\": \\\\\\"5643 Jan Walk\\\\\\",\\\\\\"city\\\\\\": \\\\\\"Lake Bridget\\\\\\",\\\\\\"state\\\\\\": \\\\\\"HI\\\\\\",\\\\\\"country\\\\\\":\\\\\\"ME\\\\\\",\\\\\\"postalCode\\\\\\": \\\\\\"70204-2939\\\\\\"},\\\\\\"phoneNumbers\\\\\\": [{\\\\\\"type\\\\\\":\\\\\\"primary\\\\\\",\\\\\\"number\\\\\\":\\\\\\"593.475.0449 x86733\\\\\\" },{\\\\\\"type\\\\\\":\\\\\\"secondary\\\\\\",\\\\\\"number\\\\\\":\\\\\\"638-372-1210\\\\\\"}]}\\", \\"details\\":\\"{\\\\\\"color\\\\\\":\\\\\\"ivory\\\\\\", \\\\\\"price\\\\\\":955.00}\\"}\\n"}', '{"type":"json"}', '[{"name":"time","type":"string"},{"name":"product","type":"string"},{"name":"department","type":"string"},{"name":"shipTo","type":"string"},{"name":"details","type":"string"}]' ) )) SELECT TIME_PARSE("time") AS __time, "product", "department", "shipTo", "details", PARSE_JSON("shipTo") as "shipTo_parsed", PARSE_JSON("details") as "details_parsed" FROM source PARTITIONED BY DAY "},{"title":"Querying nested columns","type":1,"pageTitle":"Nested columns","url":"/docs/27.0.0/querying/nested-columns#querying-nested-columns","content":"Once ingested, Druid stores the JSON-typed columns as native JSON objects and presents them as COMPLEX<json>. See the Nested columns functions reference for information on the functions in the examples below. Druid supports a small, simplified subset of the JSONPath syntax operators, primarily limited to extracting individual values from nested data structures. See the SQL JSON functions page for details. "},{"title":"Displaying data types","type":1,"pageTitle":"Nested columns","url":"/docs/27.0.0/querying/nested-columns#displaying-data-types","content":"The following example illustrates how you can display the data types for your columns. Note that details and shipTo display as COMPLEX<json>. Example query: Display data types SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'nested_data_example' Example query results: [["TABLE_NAME","COLUMN_NAME","DATA_TYPE"],["STRING","STRING","STRING"],["VARCHAR","VARCHAR","VARCHAR"],["nested_data_example","__time","TIMESTAMP"],["nested_data_example","department","VARCHAR"],["nested_data_example","details","COMPLEX<json>"],["nested_data_example","product","VARCHAR"],["nested_data_example","shipTo","COMPLEX<json>"]] "},{"title":"Retrieving JSON data","type":1,"pageTitle":"Nested columns","url":"/docs/27.0.0/querying/nested-columns#retrieving-json-data","content":"You can retrieve JSON data directly from a table. Druid returns the results as a JSON object, so you can't use grouping, aggregation, or filtering operators. 
Example query: Retrieve JSON data The following example query extracts all data from nested_data_example: SELECT * FROM nested_data_example Example query results: [["__time","department","details","product","shipTo"],["LONG","STRING","COMPLEX<json>","STRING","COMPLEX<json>"],["TIMESTAMP","VARCHAR","OTHER","VARCHAR","OTHER"],["2022-06-13T07:52:29.000Z","Sports","{\\"color\\":\\"sky blue\\",\\"price\\":542.0}","Bike","{\\"firstName\\":\\"Russ\\",\\"lastName\\":\\"Cole\\",\\"address\\":{\\"street\\":\\"77173 Rusty Station\\",\\"city\\":\\"South Yeseniabury\\",\\"state\\":\\"WA\\",\\"country\\":\\"BL\\",\\"postalCode\\":\\"01893\\"},\\"phoneNumbers\\":[{\\"type\\":\\"primary\\",\\"number\\":\\"891-374-6188 x74568\\"},{\\"type\\":\\"secondary\\",\\"number\\":\\"1-248-998-4426 x33037\\"}]}"],["2022-06-13T10:10:35.000Z","Sports","{\\"color\\":\\"ivory\\",\\"price\\":955.0}","Bike","{\\"firstName\\":\\"Henry\\",\\"lastName\\":\\"Wuckert\\",\\"address\\":{\\"street\\":\\"5643 Jan Walk\\",\\"city\\":\\"Lake Bridget\\",\\"state\\":\\"HI\\",\\"country\\":\\"ME\\",\\"postalCode\\":\\"70204-2939\\"},\\"phoneNumbers\\":[{\\"type\\":\\"primary\\",\\"number\\":\\"593.475.0449 x86733\\"},{\\"type\\":\\"secondary\\",\\"number\\":\\"638-372-1210\\"}]}"],["2022-06-13T13:57:38.000Z","Grocery","{\\"price\\":8.0}","Sausages","{\\"firstName\\":\\"Forrest\\",\\"lastName\\":\\"Brekke\\",\\"address\\":{\\"street\\":\\"41548 Collier Divide\\",\\"city\\":\\"Wintheiserborough\\",\\"state\\":\\"WA\\",\\"country\\":\\"AD\\",\\"postalCode\\":\\"27577-6784\\"},\\"phoneNumbers\\":[{\\"type\\":\\"primary\\",\\"number\\":\\"(904) 890-0696 x581\\"},{\\"type\\":\\"secondary\\",\\"number\\":\\"676.895.6759\\"}]}"],["2022-06-13T21:37:06.000Z","Computers","{\\"color\\":\\"olive\\",\\"price\\":90.0}","Mouse","{\\"firstName\\":\\"Rickey\\",\\"lastName\\":\\"Rempel\\",\\"address\\":{\\"street\\":\\"6232 Green Glens\\",\\"city\\":\\"New Fermin\\",\\"state\\":\\"HI\\",\\"country\\":\\"CW\\",\\"postalCode\\":\\"98912-1195\\"},\\"phoneNumbers\\":[{\\"type\\":\\"primary\\",\\"number\\":\\"(689) 766-4272 x60778\\"},{\\"type\\":\\"secondary\\",\\"number\\":\\"375.662.4737 x24707\\"}]}"],["2022-06-14T10:32:08.000Z","Computers","{\\"color\\":\\"plum\\",\\"price\\":40.0}","Keyboard","{\\"firstName\\":\\"Sandra\\",\\"lastName\\":\\"Beatty\\",\\"address\\":{\\"street\\":\\"293 Grant Well\\",\\"city\\":\\"Loischester\\",\\"state\\":\\"FL\\",\\"country\\":\\"TV\\",\\"postalCode\\":\\"88845-0066\\"},\\"phoneNumbers\\":[{\\"type\\":\\"primary\\",\\"number\\":\\"1-788-771-7028 x8627\\"},{\\"type\\":\\"secondary\\",\\"number\\":\\"1-460-496-4884 x887\\"}]}"]] "},{"title":"Extracting nested data elements","type":1,"pageTitle":"Nested columns","url":"/docs/27.0.0/querying/nested-columns#extracting-nested-data-elements","content":"The JSON_VALUE function is specially optimized to provide native Druid level performance when processing nested literal values, as if they were flattened, traditional, Druid column types. It does this by reading from the specialized nested columns and indexes that are built and stored in JSON objects when Druid creates segments. Some operations using JSON_VALUE run faster than those using native Druid columns. For example, filtering numeric types uses the indexes built for nested numeric columns, which are not available for Druid DOUBLE, FLOAT, or LONG columns. JSON_VALUE only returns literal types. Any paths that reference JSON objects or array types return null. 
info To achieve the best possible performance, use the JSON_VALUE function whenever you query JSON objects. Example query: Extract nested data elements The following example query illustrates how to use JSON_VALUE to extract specified elements from a COMPLEX<json> object. Note that the returned values default to type VARCHAR. SELECT product, department, JSON_VALUE(shipTo, '$.address.country') as country, JSON_VALUE(shipTo, '$.phoneNumbers[0].number') as primaryPhone, JSON_VALUE(details, '$.price') as price FROM nested_data_example Example query results: [["product","department","country","primaryPhone","price"],["STRING","STRING","STRING","STRING","STRING"],["VARCHAR","VARCHAR","VARCHAR","VARCHAR","VARCHAR"],["Bike","Sports","BL","891-374-6188 x74568","542.0"],["Bike","Sports","ME","593.475.0449 x86733","955.0"],["Sausages","Grocery","AD","(904) 890-0696 x581","8.0"],["Mouse","Computers","CW","(689) 766-4272 x60778","90.0"],["Keyboard","Computers","TV","1-788-771-7028 x8627","40.0"]] "},{"title":"Extracting nested data elements as a suggested type","type":1,"pageTitle":"Nested columns","url":"/docs/27.0.0/querying/nested-columns#extracting-nested-data-elements-as-a-suggested-type","content":"You can use the RETURNING keyword to provide type hints to the JSON_VALUE function. This way the SQL planner produces the correct native Druid query, leading to expected results. This keyword allows you to specify a SQL type for the path value. Example query: Extract nested data elements as suggested types The following example query illustrates how to use JSON_VALUE and the RETURNING keyword to extract an element of nested data and return it as specified types. SELECT product, department, JSON_VALUE(shipTo, '$.address.country') as country, JSON_VALUE(details, '$.price' RETURNING BIGINT) as price_int, JSON_VALUE(details, '$.price' RETURNING DECIMAL) as price_decimal, JSON_VALUE(details, '$.price' RETURNING VARCHAR) as price_varchar FROM nested_data_example Query results: [["product","department","country","price_int","price_decimal","price_varchar"],["STRING","STRING","STRING","LONG","DOUBLE","STRING"],["VARCHAR","VARCHAR","VARCHAR","BIGINT","DECIMAL","VARCHAR"],["Bike","Sports","BL",542,542.0,"542.0"],["Bike","Sports","ME",955,955.0,"955.0"],["Sausages","Grocery","AD",8,8.0,"8.0"],["Mouse","Computers","CW",90,90.0,"90.0"],["Keyboard","Computers","TV",40,40.0,"40.0"]] "},{"title":"Grouping, aggregating, and filtering","type":1,"pageTitle":"Nested columns","url":"/docs/27.0.0/querying/nested-columns#grouping-aggregating-and-filtering","content":"You can use JSON_VALUE expressions in any context where you can use traditional Druid columns, such as grouping, aggregation, and filtering. Example query: Grouping and filtering The following example query illustrates how to use SUM, WHERE, GROUP BY, and ORDER BY operators with JSON_VALUE. 
SELECT product, JSON_VALUE(shipTo, '$.address.country'), SUM(JSON_VALUE(details, '$.price' RETURNING BIGINT)) FROM nested_data_example WHERE JSON_VALUE(shipTo, '$.address.country') in ('BL', 'CW') GROUP BY 1,2 ORDER BY 3 DESC Example query results: [["product","EXPR$1","EXPR$2"],["STRING","STRING","LONG"],["VARCHAR","VARCHAR","BIGINT"],["Bike","BL",542],["Mouse","CW",90]] "},{"title":"Transforming JSON object data","type":1,"pageTitle":"Nested columns","url":"/docs/27.0.0/querying/nested-columns#transforming-json-object-data","content":"In addition to JSON_VALUE, Druid offers a number of operators that focus on transforming JSON object data: JSON_QUERYJSON_OBJECTPARSE_JSONTO_JSON_STRING These functions are primarily intended for use with SQL-based ingestion to transform data during insert operations, but they also work in traditional Druid SQL queries. Because most of these functions output JSON objects, they have the same limitations when used in traditional Druid queries as interacting with the JSON objects directly. Example query: Return results in a JSON object You can use the JSON_QUERY function to extract a partial structure from any JSON input and return results in a JSON object. Unlike JSON_VALUE it can extract objects and arrays. The following example query illustrates the differences in output between JSON_VALUE and JSON_QUERY. The two output columns for JSON_VALUE contain null values only because JSON_VALUE only returns literal types. SELECT JSON_VALUE(shipTo, '$.address'), JSON_QUERY(shipTo, '$.address'), JSON_VALUE(shipTo, '$.phoneNumbers'), JSON_QUERY(shipTo, '$.phoneNumbers') FROM nested_data_example Example query results: [["EXPR$0","EXPR$1","EXPR$2","EXPR$3"],["STRING","COMPLEX<json>","STRING","COMPLEX<json>"],["VARCHAR","OTHER","VARCHAR","OTHER"],["","{\\"street\\":\\"77173 Rusty Station\\",\\"city\\":\\"South Yeseniabury\\",\\"state\\":\\"WA\\",\\"country\\":\\"BL\\",\\"postalCode\\":\\"01893\\"}","","[{\\"type\\":\\"primary\\",\\"number\\":\\"891-374-6188 x74568\\"},{\\"type\\":\\"secondary\\",\\"number\\":\\"1-248-998-4426 x33037\\"}]"],["","{\\"street\\":\\"5643 Jan Walk\\",\\"city\\":\\"Lake Bridget\\",\\"state\\":\\"HI\\",\\"country\\":\\"ME\\",\\"postalCode\\":\\"70204-2939\\"}","","[{\\"type\\":\\"primary\\",\\"number\\":\\"593.475.0449 x86733\\"},{\\"type\\":\\"secondary\\",\\"number\\":\\"638-372-1210\\"}]"],["","{\\"street\\":\\"41548 Collier Divide\\",\\"city\\":\\"Wintheiserborough\\",\\"state\\":\\"WA\\",\\"country\\":\\"AD\\",\\"postalCode\\":\\"27577-6784\\"}","","[{\\"type\\":\\"primary\\",\\"number\\":\\"(904) 890-0696 x581\\"},{\\"type\\":\\"secondary\\",\\"number\\":\\"676.895.6759\\"}]"],["","{\\"street\\":\\"6232 Green Glens\\",\\"city\\":\\"New Fermin\\",\\"state\\":\\"HI\\",\\"country\\":\\"CW\\",\\"postalCode\\":\\"98912-1195\\"}","","[{\\"type\\":\\"primary\\",\\"number\\":\\"(689) 766-4272 x60778\\"},{\\"type\\":\\"secondary\\",\\"number\\":\\"375.662.4737 x24707\\"}]"],["","{\\"street\\":\\"293 Grant Well\\",\\"city\\":\\"Loischester\\",\\"state\\":\\"FL\\",\\"country\\":\\"TV\\",\\"postalCode\\":\\"88845-0066\\"}","","[{\\"type\\":\\"primary\\",\\"number\\":\\"1-788-771-7028 x8627\\"},{\\"type\\":\\"secondary\\",\\"number\\":\\"1-460-496-4884 x887\\"}]"]] Example query: Combine multiple JSON inputs into a single JSON object value The following query illustrates how to use JSON_OBJECT to combine nested data elements into a new object. 
SELECT JSON_OBJECT(KEY 'shipTo' VALUE JSON_QUERY(shipTo, '$'), KEY 'details' VALUE JSON_QUERY(details, '$')) as combinedJson FROM nested_data_example Example query results: [["combinedJson"],["COMPLEX<json>"],["OTHER"],["{\\"details\\":{\\"color\\":\\"sky blue\\",\\"price\\":542.0},\\"shipTo\\":{\\"firstName\\":\\"Russ\\",\\"lastName\\":\\"Cole\\",\\"address\\":{\\"street\\":\\"77173 Rusty Station\\",\\"city\\":\\"South Yeseniabury\\",\\"state\\":\\"WA\\",\\"country\\":\\"BL\\",\\"postalCode\\":\\"01893\\"},\\"phoneNumbers\\":[{\\"type\\":\\"primary\\",\\"number\\":\\"891-374-6188 x74568\\"},{\\"type\\":\\"secondary\\",\\"number\\":\\"1-248-998-4426 x33037\\"}]}}"],["{\\"details\\":{\\"color\\":\\"ivory\\",\\"price\\":955.0},\\"shipTo\\":{\\"firstName\\":\\"Henry\\",\\"lastName\\":\\"Wuckert\\",\\"address\\":{\\"street\\":\\"5643 Jan Walk\\",\\"city\\":\\"Lake Bridget\\",\\"state\\":\\"HI\\",\\"country\\":\\"ME\\",\\"postalCode\\":\\"70204-2939\\"},\\"phoneNumbers\\":[{\\"type\\":\\"primary\\",\\"number\\":\\"593.475.0449 x86733\\"},{\\"type\\":\\"secondary\\",\\"number\\":\\"638-372-1210\\"}]}}"],["{\\"details\\":{\\"price\\":8.0},\\"shipTo\\":{\\"firstName\\":\\"Forrest\\",\\"lastName\\":\\"Brekke\\",\\"address\\":{\\"street\\":\\"41548 Collier Divide\\",\\"city\\":\\"Wintheiserborough\\",\\"state\\":\\"WA\\",\\"country\\":\\"AD\\",\\"postalCode\\":\\"27577-6784\\"},\\"phoneNumbers\\":[{\\"type\\":\\"primary\\",\\"number\\":\\"(904) 890-0696 x581\\"},{\\"type\\":\\"secondary\\",\\"number\\":\\"676.895.6759\\"}]}}"],["{\\"details\\":{\\"color\\":\\"olive\\",\\"price\\":90.0},\\"shipTo\\":{\\"firstName\\":\\"Rickey\\",\\"lastName\\":\\"Rempel\\",\\"address\\":{\\"street\\":\\"6232 Green Glens\\",\\"city\\":\\"New Fermin\\",\\"state\\":\\"HI\\",\\"country\\":\\"CW\\",\\"postalCode\\":\\"98912-1195\\"},\\"phoneNumbers\\":[{\\"type\\":\\"primary\\",\\"number\\":\\"(689) 766-4272 x60778\\"},{\\"type\\":\\"secondary\\",\\"number\\":\\"375.662.4737 x24707\\"}]}}"],["{\\"details\\":{\\"color\\":\\"plum\\",\\"price\\":40.0},\\"shipTo\\":{\\"firstName\\":\\"Sandra\\",\\"lastName\\":\\"Beatty\\",\\"address\\":{\\"street\\":\\"293 Grant Well\\",\\"city\\":\\"Loischester\\",\\"state\\":\\"FL\\",\\"country\\":\\"TV\\",\\"postalCode\\":\\"88845-0066\\"},\\"phoneNumbers\\":[{\\"type\\":\\"primary\\",\\"number\\":\\"1-788-771-7028 x8627\\"},{\\"type\\":\\"secondary\\",\\"number\\":\\"1-460-496-4884 x887\\"}]}}"]] "},{"title":"Using other transform functions","type":1,"pageTitle":"Nested columns","url":"/docs/27.0.0/querying/nested-columns#using-other-transform-functions","content":"Druid provides the following additional transform functions: PARSE_JSON: Deserializes a string value into a JSON object.TO_JSON_STRING: Performs the operation of TO_JSON and then serializes the value into a string. Example query: Parse and deserialize data The following query illustrates how to use the transform functions to parse and deserialize data. SELECT PARSE_JSON('{"x":"y"}'), TO_JSON_STRING('{"x":"y"}'), TO_JSON_STRING(PARSE_JSON('{"x":"y"}')) Example query results: [["EXPR$0","EXPR$2","EXPR$3"],["COMPLEX<json>","STRING","STRING"],["OTHER","VARCHAR","VARCHAR"],["{\\"x\\":\\"y\\"}","\\"{\\\\\\"x\\\\\\":\\\\\\"y\\\\\\"}\\"","{\\"x\\":\\"y\\"}"]] "},{"title":"Using helper operators","type":1,"pageTitle":"Nested columns","url":"/docs/27.0.0/querying/nested-columns#using-helper-operators","content":"The JSON_KEYS and JSON_PATHS functions are helper operators that you can use to examine JSON object schema. 
Use them to plan your queries, for example to work out which paths to use in JSON_VALUE. Example query: Examine JSON object schema The following query illustrates how to use the helper operators to examine a nested data object. SELECT ARRAY_CONCAT_AGG(DISTINCT JSON_KEYS(shipTo, '$.')), ARRAY_CONCAT_AGG(DISTINCT JSON_KEYS(shipTo, '$.address')), ARRAY_CONCAT_AGG(DISTINCT JSON_PATHS(shipTo)) FROM nested_data_example Example query results: [["EXPR$0","EXPR$1","EXPR$2","EXPR$3"],["COMPLEX<json>","COMPLEX<json>","STRING","STRING"],["OTHER","OTHER","VARCHAR","VARCHAR"],["{\\"x\\":\\"y\\"}","\\"{\\\\\\"x\\\\\\":\\\\\\"y\\\\\\"}\\"","\\"{\\\\\\"x\\\\\\":\\\\\\"y\\\\\\"}\\"","{\\"x\\":\\"y\\"}"]] "},{"title":"Known issues","type":1,"pageTitle":"Nested columns","url":"/docs/27.0.0/querying/nested-columns#known-issues","content":"Before you start using the nested columns feature, consider the following known issues: Directly using COMPLEX<json> columns and expressions is not well integrated into the Druid query engine. It can result in errors or undefined behavior when grouping and filtering, and when you use COMPLEX<json> objects as inputs to aggregators. As a workaround, consider using TO_JSON_STRING to coerce the values to strings before you perform these operations.Directly using array-typed outputs from JSON_KEYS and JSON_PATHS is moderately supported by the Druid query engine. You can group on these outputs, and there are a number of array expressions that can operate on these values, such as ARRAY_CONCAT_AGG. However, some operations are not well defined for use outside array-specific functions, such as filtering using = or IS NULL.Input validation for JSON SQL operators is currently incomplete, which sometimes results in undefined behavior or unhelpful error messages.Ingesting data with a very complex nested structure is potentially an expensive operation and may require you to tune ingestion tasks and/or cluster parameters to account for increased memory usage or overall task run time. When you tune your ingestion configuration, treat each nested literal field inside an object as a flattened top-level Druid column. "},{"title":"Further reading","type":1,"pageTitle":"Nested columns","url":"/docs/27.0.0/querying/nested-columns#further-reading","content":"For more information, see the following pages: Nested columns functions reference for details of the functions used in the examples on this page.Multi-stage query architecture overview for information on how to set up and use this feature.Ingestion spec reference for information on native ingestion and transformSpec.Data formats for information on flattenSpec. "},{"title":"String comparators","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/sorting-orders","content":"","keywords":""},{"title":"Lexicographic","type":1,"pageTitle":"String comparators","url":"/docs/27.0.0/querying/sorting-orders#lexicographic","content":"Sorts values by converting Strings to their UTF-8 byte array representations and comparing lexicographically, byte-by-byte. "},{"title":"Alphanumeric","type":1,"pageTitle":"String comparators","url":"/docs/27.0.0/querying/sorting-orders#alphanumeric","content":"Suitable for strings with both numeric and non-numeric content, e.g.: "file12 sorts after file2" See https://github.com/amjjd/java-alphanum for more details on how this ordering sorts values. This ordering is not suitable for numbers with decimal points or negative numbers. 
For example, "1.3" precedes "1.15" in this ordering because "15" has more significant digits than "3".Negative numbers are sorted after positive numbers (because numeric characters precede the "-" in the negative numbers). "},{"title":"Numeric","type":1,"pageTitle":"String comparators","url":"/docs/27.0.0/querying/sorting-orders#numeric","content":"Sorts values as numbers, supports integers and floating point values. Negative values are supported. This sorting order will try to parse all string values as numbers. Unparseable values are treated as nulls, and nulls precede numbers. When comparing two unparseable values (e.g., "hello" and "world"), this ordering will sort by comparing the unparsed strings lexicographically. "},{"title":"Strlen","type":1,"pageTitle":"String comparators","url":"/docs/27.0.0/querying/sorting-orders#strlen","content":"Sorts values by their string lengths. When there is a tie, this comparator falls back to using the String compareTo method. "},{"title":"Version","type":1,"pageTitle":"String comparators","url":"/docs/27.0.0/querying/sorting-orders#version","content":"Sorts values as versions, e.g.: "10.0 sorts after 9.0", "1.0.0-SNAPSHOT sorts after 1.0.0". See https://maven.apache.org/ref/3.6.0/maven-artifact/apidocs/org/apache/maven/artifact/versioning/ComparableVersion.html for more details on how this ordering sorts values. "},{"title":"Druid SQL overview","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/sql","content":"","keywords":""},{"title":"Syntax","type":1,"pageTitle":"Druid SQL overview","url":"/docs/27.0.0/querying/sql#syntax","content":"Druid SQL supports SELECT queries with the following structure: [ EXPLAIN PLAN FOR ] [ WITH tableName [ ( column1, column2, ... ) ] AS ( query ) ] SELECT [ ALL | DISTINCT ] { * | exprs } FROM { <table> | (<subquery>) | <o1> [ INNER | LEFT ] JOIN <o2> ON condition } [, UNNEST(source_expression) as table_alias_name(column_alias_name) ] [ WHERE expr ] [ GROUP BY [ exprs | GROUPING SETS ( (exprs), ... ) | ROLLUP (exprs) | CUBE (exprs) ] ] [ HAVING expr ] [ ORDER BY expr [ ASC | DESC ], expr [ ASC | DESC ], ... ] [ LIMIT limit ] [ OFFSET offset ] [ UNION ALL <another query> ] "},{"title":"FROM","type":1,"pageTitle":"Druid SQL overview","url":"/docs/27.0.0/querying/sql#from","content":"The FROM clause can refer to any of the following: Table datasources from the druid schema. This is the default schema, so Druid table datasources can be referenced as either druid.dataSourceName or simply dataSourceName.Lookups from the lookup schema, for example lookup.countries. Note that lookups can also be queried using the LOOKUP function.Subqueries.Joins between anything in this list, except between native datasources (table, lookup, query) and system tables. The join condition must be an equality between expressions from the left- and right-hand side of the join.Metadata tables from the INFORMATION_SCHEMA or sys schemas. Unlike the other options for the FROM clause, metadata tables are not considered datasources. They exist only in the SQL layer. For more information about table, lookup, query, and join datasources, refer to the Datasourcesdocumentation. "},{"title":"UNNEST","type":1,"pageTitle":"Druid SQL overview","url":"/docs/27.0.0/querying/sql#unnest","content":"info The UNNEST SQL function is experimental. Its API and behavior are subject to change in future releases. It is not recommended to use this feature in production at this time. The UNNEST clause unnests array values. 
It's the SQL equivalent to the unnest datasource. The source for UNNEST can be an array or an input that's been transformed into an array, such as with helper functions like MV_TO_ARRAY or ARRAY. The following is the general syntax for UNNEST, specifically a query that returns the column that gets unnested: SELECT column_alias_name FROM datasource, UNNEST(source_expression1) AS table_alias_name1(column_alias_name1), UNNEST(source_expression2) AS table_alias_name2(column_alias_name2), ... The datasource for UNNEST can be any Druid datasource, such as the following: A table, such as FROM a_table.A subset of a table based on a query, a filter, or a JOIN. For example, FROM (SELECT columnA,columnB,columnC from a_table). The source_expression for the UNNEST function must be an array and can come from any expression. If the dimension you are unnesting is a multi-value dimension, you have to specify MV_TO_ARRAY(dimension) to convert it to an implicit ARRAY type. You can also specify any expression that has an SQL array datatype. For example, you can call UNNEST on the following: ARRAY[dim1,dim2] if you want to make an array out of two dimensions. ARRAY_CONCAT(dim1,dim2) if you want to concatenate two multi-value dimensions. The AS table_alias_name(column_alias_name) clause is not required but is highly recommended. Use it to specify the output, which can be an existing column or a new one. Replace table_alias_name and column_alias_name with a table and column name you want to alias the unnested results to. If you don't provide this, Druid uses a nondescriptive name, such as EXPR$0. Keep the following things in mind when writing your query: You must include the context parameter "enableUnnest": true.You can unnest multiple source expressions in a single query.Notice the comma between the datasource and the UNNEST function. This is needed in most cases of the UNNEST function. Specifically, it is not needed when you're unnesting an inline array since the array itself is the datasource.If you view the native explanation of a SQL UNNEST, you'll notice that Druid uses j0.unnest as a virtual column to perform the unnest. An underscore is added for each unnest, so you may notice virtual columns named _j0.unnest or __j0.unnest.UNNEST preserves the ordering of the source array that is being unnested. For examples, see the Unnest arrays tutorial. The UNNEST function has the following limitations: The function does not remove any duplicates or nulls in an array. Nulls will be treated as any other value in an array. If there are multiple nulls within the array, a record corresponding to each of the nulls gets created.Arrays inside complex JSON types are not supported.You cannot perform an UNNEST at ingestion time, including SQL-based ingestion using the MSQ task engine. "},{"title":"WHERE","type":1,"pageTitle":"Druid SQL overview","url":"/docs/27.0.0/querying/sql#where","content":"The WHERE clause refers to columns in the FROM table, and will be translated to native filters. The WHERE clause can also reference a subquery, like WHERE col1 IN (SELECT foo FROM ...). Queries like this are executed as a join on the subquery, described in the Query translation section. Strings and numbers can be compared in the WHERE clause of a SQL query through implicit type conversion. For example, you can evaluate WHERE stringDim = 1 for a string-typed dimension named stringDim. 
However, for optimal performance, you should explicitly cast the reference number as a string when comparing against a string dimension: WHERE stringDim = '1' Similarly, if you compare a string-typed dimension with reference to an array of numbers, cast the numbers to strings: WHERE stringDim IN ('1', '2', '3') Note that explicit type casting does not lead to significant performance improvement when the comparison involves numeric dimensions, since numeric dimensions are not indexed. "},{"title":"GROUP BY","type":1,"pageTitle":"Druid SQL overview","url":"/docs/27.0.0/querying/sql#group-by","content":"The GROUP BY clause refers to columns in the FROM table. Using GROUP BY, DISTINCT, or any aggregation functions will trigger an aggregation query using one of Druid's three native aggregation query types. GROUP BY can refer to an expression or a select clause ordinal position (like GROUP BY 2 to group by the second selected column). The GROUP BY clause can also refer to multiple grouping sets in three ways. The most flexible is GROUP BY GROUPING SETS, for example GROUP BY GROUPING SETS ( (country, city), () ). This example is equivalent to a GROUP BY country, city followed by GROUP BY () (a grand total). With GROUPING SETS, the underlying data is only scanned one time, leading to better efficiency. Second, GROUP BY ROLLUP computes a grouping set for each level of the grouping expressions. For example, GROUP BY ROLLUP (country, city) is equivalent to GROUP BY GROUPING SETS ( (country, city), (country), () ) and will produce grouped rows for each country / city pair, along with subtotals for each country, along with a grand total. Finally, GROUP BY CUBE computes a grouping set for each combination of grouping expressions. For example, GROUP BY CUBE (country, city) is equivalent to GROUP BY GROUPING SETS ( (country, city), (country), (city), () ). Grouping columns that do not apply to a particular row will contain NULL. For example, when computing GROUP BY GROUPING SETS ( (country, city), () ), the grand total row corresponding to () will have NULL for the "country" and "city" columns. A column may also be NULL if it was NULL in the data itself. To differentiate such rows, you can use GROUPING aggregation. When using GROUP BY GROUPING SETS, GROUP BY ROLLUP, or GROUP BY CUBE, be aware that results may not be generated in the order that you specify your grouping sets in the query. If you need results to be generated in a particular order, use the ORDER BY clause. "},{"title":"HAVING","type":1,"pageTitle":"Druid SQL overview","url":"/docs/27.0.0/querying/sql#having","content":"The HAVING clause refers to columns that are present after execution of GROUP BY. It can be used to filter on either grouping expressions or aggregated values. It can only be used together with GROUP BY. "},{"title":"ORDER BY","type":1,"pageTitle":"Druid SQL overview","url":"/docs/27.0.0/querying/sql#order-by","content":"The ORDER BY clause refers to columns that are present after execution of GROUP BY. It can be used to order the results based on either grouping expressions or aggregated values. ORDER BY can refer to an expression or a select clause ordinal position (like ORDER BY 2 to order by the second selected column). For non-aggregation queries, ORDER BY can only order by the __time column. For aggregation queries, ORDER BY can order by any column. "},{"title":"LIMIT","type":1,"pageTitle":"Druid SQL overview","url":"/docs/27.0.0/querying/sql#limit","content":"The LIMIT clause limits the number of rows returned. 
In some situations Druid will push down this limit to data servers, which boosts performance. Limits are always pushed down for queries that run with the native Scan or TopN query types. With the native GroupBy query type, it is pushed down when ordering on a column that you are grouping by. If you notice that adding a limit doesn't change performance very much, then it's possible that Druid wasn't able to push down the limit for your query. "},{"title":"OFFSET","type":1,"pageTitle":"Druid SQL overview","url":"/docs/27.0.0/querying/sql#offset","content":"The OFFSET clause skips a certain number of rows when returning results. If both LIMIT and OFFSET are provided, then OFFSET will be applied first, followed by LIMIT. For example, using LIMIT 100 OFFSET 10 will return 100 rows, starting from row number 10. Together, LIMIT and OFFSET can be used to implement pagination. However, note that if the underlying datasource is modified between page fetches, then the different pages will not necessarily align with each other. There are two important factors that can affect the performance of queries that use OFFSET: Skipped rows still need to be generated internally and then discarded, meaning that raising offsets to high values can cause queries to use additional resources.OFFSET is only supported by the Scan and GroupBy native query types. Therefore, a query with OFFSET will use one of those two types, even if it might otherwise have run as a Timeseries or TopN. Switching query engines in this way can affect performance. "},{"title":"UNION ALL","type":1,"pageTitle":"Druid SQL overview","url":"/docs/27.0.0/querying/sql#union-all","content":"The UNION ALL operator fuses multiple queries together. Druid SQL supports the UNION ALL operator in two situations: top-level and table-level, as described below. Queries that use UNION ALL in any other way will fail. "},{"title":"Top-level","type":1,"pageTitle":"Druid SQL overview","url":"/docs/27.0.0/querying/sql#top-level","content":"In top-level queries, you can use UNION ALL at the very top outer layer of the query - not in a subquery, and not in the FROM clause. The underlying queries run sequentially. Druid concatenates their results so that they appear one after the other. For example: SELECT COUNT(*) FROM tbl WHERE my_column = 'value1' UNION ALL SELECT COUNT(*) FROM tbl WHERE my_column = 'value2' info With top-level queries, you can't apply GROUP BY, ORDER BY, or any other operator to the results of a UNION ALL. "},{"title":"Table-level","type":1,"pageTitle":"Druid SQL overview","url":"/docs/27.0.0/querying/sql#table-level","content":"In table-level queries, you must use UNION ALL in a subquery in the FROM clause, and create the lower-level subqueries that are inputs to the UNION ALL operator as simple table SELECTs. You can't use features like expressions, column aliasing, JOIN, GROUP BY, or ORDER BY in table-level queries. The query runs natively using a union datasource. At table-level queries, you must select the same columns from each table in the same order, and those columns must either have the same types, or types that can be implicitly cast to each other (such as different numeric types). For this reason, it is generally more robust to write your queries to select specific columns. If you use SELECT *, you must modify your queries if a new column is added to one table but not to the others. 
For example: SELECT col1, COUNT(*) FROM ( SELECT col1, col2, col3 FROM tbl1 UNION ALL SELECT col1, col2, col3 FROM tbl2 ) GROUP BY col1 With table-level UNION ALL, the rows from the unioned tables are not guaranteed to process in any particular order. They may process in an interleaved fashion. If you need a particular result ordering, use ORDER BY on the outer query. "},{"title":"EXPLAIN PLAN","type":1,"pageTitle":"Druid SQL overview","url":"/docs/27.0.0/querying/sql#explain-plan","content":"Add "EXPLAIN PLAN FOR" to the beginning of any query to get information about how it will be translated. In this case, the query will not actually be executed. Refer to the Query translationdocumentation for more information on the output of EXPLAIN PLAN. info For the legacy plan, be careful when interpreting EXPLAIN PLAN output, and use request logging if in doubt. Request logs show the exact native query that will be run. Alternatively, to see the native query plan, set useNativeQueryExplain to true in the query context. "},{"title":"Identifiers and literals","type":1,"pageTitle":"Druid SQL overview","url":"/docs/27.0.0/querying/sql#identifiers-and-literals","content":"Identifiers like datasource and column names can optionally be quoted using double quotes. To escape a double quote inside an identifier, use another double quote, like "My ""very own"" identifier". All identifiers are case-sensitive and no implicit case conversions are performed. Literal strings should be quoted with single quotes, like 'foo'. Literal strings with Unicode escapes can be written like U&'fo\\00F6', where character codes in hex are prefixed by a backslash. Literal numbers can be written in forms like 100 (denoting an integer), 100.0 (denoting a floating point value), or 1.0e5 (scientific notation). Literal timestamps can be written like TIMESTAMP '2000-01-01 00:00:00'. Literal intervals, used for time arithmetic, can be written like INTERVAL '1' HOUR, INTERVAL '1 02:03' DAY TO MINUTE, INTERVAL '1-2' YEAR TO MONTH, and so on. "},{"title":"Dynamic parameters","type":1,"pageTitle":"Druid SQL overview","url":"/docs/27.0.0/querying/sql#dynamic-parameters","content":"Druid SQL supports dynamic parameters using question mark (?) syntax, where parameters are bound to ? placeholders at execution time. To use dynamic parameters, replace any literal in the query with a ? character and provide a corresponding parameter value when you execute the query. Parameters are bound to the placeholders in the order in which they are passed. Parameters are supported in both the HTTP POST and JDBC APIs. In certain cases, using dynamic parameters in expressions can cause type inference issues which cause your query to fail, for example: SELECT * FROM druid.foo WHERE dim1 like CONCAT('%', ?, '%') To solve this issue, explicitly provide the type of the dynamic parameter using the CAST keyword. Consider the fix for the preceding example: SELECT * FROM druid.foo WHERE dim1 like CONCAT('%', CAST (? AS VARCHAR), '%') "},{"title":"SQL aggregation functions","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/sql-aggregations","content":"","keywords":""},{"title":"Sketch functions","type":1,"pageTitle":"SQL aggregation functions","url":"/docs/27.0.0/querying/sql-aggregations#sketch-functions","content":"These functions create sketch objects that you can use to perform fast, approximate analyses. For advice on choosing approximate aggregation functions, check out our approximate aggregations documentation. 
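For example, the following is a minimal sketch of an approximate distinct count using one of the HLL functions listed below; it assumes a hypothetical datasource named my_table with a user_id column and requires the DataSketches extension to be loaded: SELECT APPROX_COUNT_DISTINCT_DS_HLL(user_id) AS approx_unique_users FROM my_table 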
To operate on sketch objects, also see the DataSketches post aggregator functions. "},{"title":"HLL sketch functions","type":1,"pageTitle":"SQL aggregation functions","url":"/docs/27.0.0/querying/sql-aggregations#hll-sketch-functions","content":"Load the DataSketches extension to use the following functions. Function\tNotes\tDefaultAPPROX_COUNT_DISTINCT_DS_HLL(expr, [lgK, tgtHllType])\tCounts distinct values of expr, which can be a regular column or an HLL sketch column. Results are always approximate, regardless of the value of useApproximateCountDistinct. The lgK and tgtHllType parameters here are, like the equivalents in the aggregator, described in the HLL sketch documentation. See also COUNT(DISTINCT expr).\t0 DS_HLL(expr, [lgK, tgtHllType])\tCreates an HLL sketch on the values of expr, which can be a regular column or a column containing HLL sketches. The lgK and tgtHllType parameters are described in the HLL sketch documentation.\t'0' (STRING) "},{"title":"Theta sketch functions","type":1,"pageTitle":"SQL aggregation functions","url":"/docs/27.0.0/querying/sql-aggregations#theta-sketch-functions","content":"Load the DataSketches extension to use the following functions. Function\tNotes\tDefaultAPPROX_COUNT_DISTINCT_DS_THETA(expr, [size])\tCounts distinct values of expr, which can be a regular column or a Theta sketch column. Results are always approximate, regardless of the value of useApproximateCountDistinct. The size parameter is described in the Theta sketch documentation. See also COUNT(DISTINCT expr).\t0 DS_THETA(expr, [size])\tCreates a Theta sketch on the values of expr, which can be a regular column or a column containing Theta sketches. The size parameter is described in the Theta sketch documentation.\t'0.0' (STRING) "},{"title":"Quantiles sketch functions","type":1,"pageTitle":"SQL aggregation functions","url":"/docs/27.0.0/querying/sql-aggregations#quantiles-sketch-functions","content":"Load the DataSketches extension to use the following functions. Function\tNotes\tDefaultAPPROX_QUANTILE_DS(expr, probability, [k])\tComputes approximate quantiles on numeric or Quantiles sketch expressions. The probability value should be between 0 and 1, exclusive. The k parameter is described in the Quantiles sketch documentation. See the known issue with this function.\tNaN DS_QUANTILES_SKETCH(expr, [k])\tCreates a Quantiles sketch on the values of expr, which can be a regular column or a column containing quantiles sketches. The k parameter is described in the Quantiles sketch documentation. See the known issue with this function.\t'0' (STRING) "},{"title":"Tuple sketch functions","type":1,"pageTitle":"SQL aggregation functions","url":"/docs/27.0.0/querying/sql-aggregations#tuple-sketch-functions","content":"Load the DataSketches extension to use the following functions. Function\tNotes\tDefaultDS_TUPLE_DOUBLES(expr, [nominalEntries])\tCreates a Tuple sketch on the values of expr which is a column containing Tuple sketches which contain an array of double values as their Summary Objects. The nominalEntries override parameter is optional and described in the Tuple sketch documentation. DS_TUPLE_DOUBLES(dimensionColumnExpr, metricColumnExpr, ..., [nominalEntries])\tCreates a Tuple sketch which contains an array of double values as its Summary Object based on the dimension value of dimensionColumnExpr and the numeric metric values contained in one or more metricColumnExpr columns. 
If the last value of the array is a numeric literal, Druid assumes that the value is an override parameter for nominal entries.\t "},{"title":"T-Digest sketch functions","type":1,"pageTitle":"SQL aggregation functions","url":"/docs/27.0.0/querying/sql-aggregations#t-digest-sketch-functions","content":"Load the T-Digest extension to use the following functions. See the T-Digest extension for additional details and for more information on these functions. Function\tNotes\tDefaultTDIGEST_QUANTILE(expr, quantileFraction, [compression])\tBuilds a T-Digest sketch on values produced by expr and returns the value for the quantile. Compression parameter (default value 100) determines the accuracy and size of the sketch. Higher compression means higher accuracy but more space to store sketches.\tDouble.NaN TDIGEST_GENERATE_SKETCH(expr, [compression])\tBuilds a T-Digest sketch on values produced by expr. Compression parameter (default value 100) determines the accuracy and size of the sketch. Higher compression means higher accuracy but more space to store sketches.\tEmpty base64 encoded T-Digest sketch STRING "},{"title":"SQL ARRAY functions","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/sql-array-functions","content":"SQL ARRAY functions info Apache Druid supports two query languages: Druid SQL and native queries. This document describes the SQL language. This page describes the operations you can perform on arrays using Druid SQL. See ARRAY data type documentation for additional details. All array references in the array function documentation can refer to multi-value string columns or ARRAY literals. These functions are largely identical to the multi-value string functions, but use ARRAY types and behavior. Multi-value string VARCHAR columns can be converted to VARCHAR ARRAY to use with these functions using MV_TO_ARRAY, and ARRAY types can be converted to multi-value string VARCHAR with ARRAY_TO_MV. The following table describes array functions. To learn more about array aggregation functions, see SQL aggregation functions. Function\tDescriptionARRAY[expr1, expr2, ...]\tConstructs a SQL ARRAY literal from the expression arguments, using the type of the first argument as the output array type. ARRAY_LENGTH(arr)\tReturns length of the array expression. ARRAY_OFFSET(arr, long)\tReturns the array element at the 0-based index supplied, or null for an out of range index. ARRAY_ORDINAL(arr, long)\tReturns the array element at the 1-based index supplied, or null for an out of range index. ARRAY_CONTAINS(arr, expr)\tIf expr is a scalar type, returns 1 if arr contains expr. If expr is an array, returns 1 if arr contains all elements of expr. Otherwise returns 0. ARRAY_OVERLAP(arr1, arr2)\tReturns 1 if arr1 and arr2 have any elements in common, else 0. ARRAY_OFFSET_OF(arr, expr)\tReturns the 0-based index of the first occurrence of expr in the array. If no matching elements exist in the array, returns -1 or null if druid.generic.useDefaultValueForNull=false. ARRAY_ORDINAL_OF(arr, expr)\tReturns the 1-based index of the first occurrence of expr in the array. If no matching elements exist in the array, returns -1 or null if druid.generic.useDefaultValueForNull=false. ARRAY_PREPEND(expr, arr)\tPrepends expr to arr at the beginning, the resulting array type determined by the type of arr. ARRAY_APPEND(arr1, expr)\tAppends expr to arr, the resulting array type determined by the type of arr1. ARRAY_CONCAT(arr1, arr2)\tConcatenates arr2 to arr1. The resulting array type is determined by the type of arr1. 
ARRAY_SLICE(arr, start, end)\tReturns the subarray of arr from the 0-based index start (inclusive) to end (exclusive). Returns null, if start is less than 0, greater than length of arr, or greater than end. ARRAY_TO_STRING(arr, str)\tJoins all elements of arr by the delimiter specified by str. STRING_TO_ARRAY(str1, str2)\tSplits str1 into an array on the delimiter specified by str2, which is a regular expression. ARRAY_TO_MV(arr)\tConverts an ARRAY of any type into a multi-value string VARCHAR.","keywords":""},{"title":"SQL data types","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/sql-data-types","content":"","keywords":""},{"title":"Standard types","type":1,"pageTitle":"SQL data types","url":"/docs/27.0.0/querying/sql-data-types#standard-types","content":"Druid natively supports the following basic column types: LONG: 64-bit signed int. FLOAT: 32-bit float. DOUBLE: 64-bit float. STRING: UTF-8 encoded strings and string arrays. COMPLEX: non-standard data types, such as nested JSON, hyperUnique and approxHistogram, and DataSketches. ARRAY: arrays composed of any of these types. Druid treats timestamps (including the __time column) as LONG, with the value being the number of milliseconds since 1970-01-01 00:00:00 UTC, not counting leap seconds. Therefore, timestamps in Druid do not carry any timezone information. They only carry information about the exact moment in time they represent. See Time functions for more information about timestamp handling. The following table describes how Druid maps SQL types onto native types when running queries: SQL type\tDruid runtime type\tDefault value*\tNotesCHAR\tSTRING\t'' VARCHAR\tSTRING\t''\tDruid STRING columns are reported as VARCHAR. Can include multi-value strings as well. DECIMAL\tDOUBLE\t0.0\tDECIMAL uses floating point, not fixed point math FLOAT\tFLOAT\t0.0\tDruid FLOAT columns are reported as FLOAT REAL\tDOUBLE\t0.0 DOUBLE\tDOUBLE\t0.0\tDruid DOUBLE columns are reported as DOUBLE BOOLEAN\tLONG\tfalse TINYINT\tLONG\t0 SMALLINT\tLONG\t0 INTEGER\tLONG\t0 BIGINT\tLONG\t0\tDruid LONG columns (except __time) are reported as BIGINT TIMESTAMP\tLONG\t0, meaning 1970-01-01 00:00:00 UTC\tDruid's __time column is reported as TIMESTAMP. Casts between string and timestamp types assume standard SQL formatting, such as 2000-01-02 03:04:05, not ISO 8601 formatting. For handling other formats, use one of the time functions. DATE\tLONG\t0, meaning 1970-01-01\tCasting TIMESTAMP to DATE rounds down the timestamp to the nearest day. Casts between string and date types assume standard SQL formatting—for example, 2000-01-02. For handling other formats, use one of the time functions. ARRAY\tARRAY\tNULL\tDruid native array types work as SQL arrays, and multi-value strings can be converted to arrays. See Arrays for more information. OTHER\tCOMPLEX\tnone\tMay represent various Druid column types such as hyperUnique, approxHistogram, etc. * Default value applies if `druid.generic.useDefaultValueForNull = true` (the default mode). Otherwise, the default value is `NULL` for all types. Casts between two SQL types with the same Druid runtime type have no effect other than the exceptions noted in the table. Casts between two SQL types that have different Druid runtime types generate a runtime cast in Druid. If a value cannot be cast to the target type, as in CAST('foo' AS BIGINT), Druid either substitutes a default value (when druid.generic.useDefaultValueForNull = true, the default mode), or substitutes NULL (when druid.generic.useDefaultValueForNull = false). 
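As a minimal sketch of that cast behavior, using only a literal so no datasource is required:
SELECT CAST('foo' AS BIGINT) AS cast_result -- 'foo' cannot be parsed as a number
This returns 0 when druid.generic.useDefaultValueForNull = true and NULL when it is false.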
NULL values cast to non-nullable types are also substituted with a default value. For example, if druid.generic.useDefaultValueForNull = true, a null VARCHAR cast to BIGINT is converted to a zero. "},{"title":"Multi-value strings","type":1,"pageTitle":"SQL data types","url":"/docs/27.0.0/querying/sql-data-types#multi-value-strings","content":"Druid's native type system allows strings to have multiple values. These multi-value string dimensions are reported in SQL as type VARCHAR and can be syntactically used like any other VARCHAR. Regular string functions that refer to multi-value string dimensions are applied to all values for each row individually. You can treat multi-value string dimensions as arrays using special multi-value string functions, which perform powerful array-aware operations, but retain their VARCHAR type and behavior. Grouping by multi-value dimensions observes the native Druid multi-value aggregation behavior, which is similar to an implicit SQL UNNEST. See Grouping for more information. info Because the SQL planner treats multi-value dimensions as VARCHAR, there are some inconsistencies between how they are handled in Druid SQL and in native queries. For instance, expressions involving multi-value dimensions may be incorrectly optimized by the Druid SQL planner. For example, multi_val_dim = 'a' AND multi_val_dim = 'b' is optimized to false, even though it is possible for a single row to have both 'a' and 'b' as values for multi_val_dim. The SQL behavior of multi-value dimensions may change in a future release to more closely align with their behavior in native queries, but the multi-value string functions should be able to provide nearly all possible native functionality. "},{"title":"Arrays","type":1,"pageTitle":"SQL data types","url":"/docs/27.0.0/querying/sql-data-types#arrays","content":"Druid supports ARRAY types constructed at query time. ARRAY types behave as standard SQL arrays, where results are grouped by matching entire arrays. This is in contrast to the implicit UNNEST that occurs when grouping on multi-value dimensions directly or when used with multi-value functions. You can convert multi-value dimensions to standard SQL arrays explicitly with MV_TO_ARRAY or implicitly using array functions. You can also use the array functions to construct arrays from multiple columns. You can use schema auto-discovery to detect and ingest arrays as ARRAY typed columns. "},{"title":"Multi-value strings behavior","type":1,"pageTitle":"SQL data types","url":"/docs/27.0.0/querying/sql-data-types#multi-value-strings-behavior","content":"The behavior of Druid multi-value string dimensions varies depending on the context of their usage. When used with standard VARCHAR functions which expect a single input value per row, such as CONCAT, Druid will map the function across all values in the row. If the row is null or empty, the function receives NULL as its input. When used with the explicit multi-value string functions, Druid processes the row values as if they were ARRAY typed. Any operations which produce null and empty rows are distinguished as separate values (unlike implicit mapping behavior). These multi-value string functions, typically denoted with an MV_ prefix, retain their VARCHAR type after the computation is complete. Note that Druid multi-value columns do not distinguish between empty and null rows. 
An empty row will never appear natively as input to a multi-valued function, but any multi-value function which manipulates the array form of the value may produce an empty array, which is handled separately while processing. info Do not mix the usage of multi-value functions and normal scalar functions within the same expression, as the planner will be unable to determine how to properly process the value given its ambiguous usage. A multi-value string must be treated consistently within an expression. When converted to ARRAY or used with array functions, multi-value strings behave as standard SQL arrays and can no longer be manipulated with non-array functions. Druid serializes multi-value VARCHAR results as a JSON string of the array, if grouping was not applied on the value. If the value was grouped, due to the implicit UNNEST behavior, all results will always be standard single value VARCHAR. ARRAY typed results will be serialized into stringified JSON arrays if the context parameter sqlStringifyArrays is set, otherwise they remain in their array format. "},{"title":"NULL values","type":1,"pageTitle":"SQL data types","url":"/docs/27.0.0/querying/sql-data-types#null-values","content":"The druid.generic.useDefaultValueForNull runtime property controls Druid's NULL handling mode. For the most SQL compliant behavior, set this to false. When druid.generic.useDefaultValueForNull = true (the default mode), Druid treats NULLs and empty strings interchangeably, rather than according to the SQL standard. In this mode Druid SQL only has partial support for NULLs. For example, the expressions col IS NULL and col = '' are equivalent, and both evaluate to true if col contains an empty string. Similarly, the expression COALESCE(col1, col2) returns col2 if col1 is an empty string. While the COUNT(*) aggregator counts all rows, the COUNT(expr) aggregator counts the number of rows where expr is neither null nor the empty string. Numeric columns in this mode are not nullable; any null or missing values are treated as zeroes. When druid.generic.useDefaultValueForNull = false, NULLs are treated more closely to the SQL standard. In this mode, numeric NULL is permitted, and NULLs and empty strings are no longer treated as interchangeable. This property affects both storage and querying, and must be set on all Druid service types to be available at both ingestion time and query time. There is some overhead associated with the ability to handle NULLs; see the segment internals documentation for more details. "},{"title":"Boolean logic","type":1,"pageTitle":"SQL data types","url":"/docs/27.0.0/querying/sql-data-types#boolean-logic","content":"The druid.expressions.useStrictBooleans runtime property controls Druid's boolean logic mode. For the most SQL compliant behavior, set this to true. When druid.expressions.useStrictBooleans = false (the default mode), Druid uses two-valued logic. When druid.expressions.useStrictBooleans = true, Druid uses three-valued logic for expression evaluation, such as expression virtual columns or expression filters. However, even in this mode, Druid uses two-valued logic for filter types other than expression. "},{"title":"Nested columns","type":1,"pageTitle":"SQL data types","url":"/docs/27.0.0/querying/sql-data-types#nested-columns","content":"Druid supports storing nested data structures in segments using the native COMPLEX<json> type. See Nested columns for more information. 
You can interact with nested data using JSON functions, which can extract nested values, parse from string, serialize to string, and create new COMPLEX<json> structures. COMPLEX types have limited functionality outside the specialized functions that use them, so their behavior is undefined when: Grouping on complex values. Filtering directly on complex values, such as WHERE json is NULL. Used as inputs to aggregators without specialized handling for a specific complex type. In many cases, functions are provided to translate COMPLEX value types to STRING, which serves as a workaround solution until COMPLEX type functionality can be improved. "},{"title":"SQL JSON functions","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/sql-json-functions","content":"","keywords":""},{"title":"JSONPath syntax","type":1,"pageTitle":"SQL JSON functions","url":"/docs/27.0.0/querying/sql-json-functions#jsonpath-syntax","content":"Druid supports a subset of the JSONPath syntax operators, primarily limited to extracting individual values from nested data structures. Operator\tDescription$\tRoot element. All JSONPath expressions start with this operator. .<name>\tChild element in dot notation. ['<name>']\tChild element in bracket notation. [<number>]\tArray index. Consider the following example input JSON: {"x":1, "y":[1, 2, 3]} To return the entire JSON object: $ -> {"x":1, "y":[1, 2, 3]} To return the value of the key "x": $.x -> 1 For a key that contains an array, to return the entire array: $['y'] -> [1, 2, 3] For a key that contains an array, to return an item in the array: $.y[1] -> 2 "},{"title":"SQL metadata tables","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/sql-metadata-tables","content":"","keywords":""},{"title":"INFORMATION SCHEMA","type":1,"pageTitle":"SQL metadata tables","url":"/docs/27.0.0/querying/sql-metadata-tables#information-schema","content":"You can access table and column metadata through JDBC using connection.getMetaData(), or through the INFORMATION_SCHEMA tables described below. For example, to retrieve metadata for the Druid datasource "foo", use the query: SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE "TABLE_SCHEMA" = 'druid' AND "TABLE_NAME" = 'foo' info Note: INFORMATION_SCHEMA tables do not currently support Druid-specific functions like TIME_PARSE and APPROX_QUANTILE_DS. Only standard SQL functions can be used. "},{"title":"SCHEMATA table","type":1,"pageTitle":"SQL metadata tables","url":"/docs/27.0.0/querying/sql-metadata-tables#schemata-table","content":"INFORMATION_SCHEMA.SCHEMATA provides a list of all known schemas, which include druid for standard Druid Table datasources, lookup for Lookups, sys for the virtual System metadata tables, and INFORMATION_SCHEMA for these virtual tables. Tables are allowed to have the same name across different schemas, so the schema may be included in an SQL statement to distinguish them, e.g. lookup.table vs druid.table. Column\tType\tNotesCATALOG_NAME\tVARCHAR\tAlways set as druid SCHEMA_NAME\tVARCHAR\tdruid, lookup, sys, or INFORMATION_SCHEMA SCHEMA_OWNER\tVARCHAR\tUnused DEFAULT_CHARACTER_SET_CATALOG\tVARCHAR\tUnused DEFAULT_CHARACTER_SET_SCHEMA\tVARCHAR\tUnused DEFAULT_CHARACTER_SET_NAME\tVARCHAR\tUnused SQL_PATH\tVARCHAR\tUnused "},{"title":"TABLES table","type":1,"pageTitle":"SQL metadata tables","url":"/docs/27.0.0/querying/sql-metadata-tables#tables-table","content":"INFORMATION_SCHEMA.TABLES provides a list of all known tables and schemas. 
Column\tType\tNotesTABLE_CATALOG\tVARCHAR\tAlways set as druid TABLE_SCHEMA\tVARCHAR\tThe 'schema' which the table falls under, see SCHEMATA table for details TABLE_NAME\tVARCHAR\tTable name. For the druid schema, this is the dataSource. TABLE_TYPE\tVARCHAR\t"TABLE" or "SYSTEM_TABLE" IS_JOINABLE\tVARCHAR\tIf a table is directly joinable when on the right-hand side of a JOIN statement, without performing a subquery, this value will be set to YES, otherwise NO. Lookups are always joinable because they are globally distributed among Druid query processing nodes, but Druid datasources are not, and will use a less efficient subquery join. IS_BROADCAST\tVARCHAR\tIf a table is 'broadcast' and distributed among all Druid query processing nodes, this value will be set to YES, such as lookups and Druid datasources which have a 'broadcast' load rule, else NO. "},{"title":"COLUMNS table","type":1,"pageTitle":"SQL metadata tables","url":"/docs/27.0.0/querying/sql-metadata-tables#columns-table","content":"INFORMATION_SCHEMA.COLUMNS provides a list of all known columns across all tables and schemas. Column\tType\tNotesTABLE_CATALOG\tVARCHAR\tAlways set as druid TABLE_SCHEMA\tVARCHAR\tThe 'schema' which the table column falls under, see SCHEMATA table for details TABLE_NAME\tVARCHAR\tThe 'table' which the column belongs to, see TABLES table for details COLUMN_NAME\tVARCHAR\tThe column name ORDINAL_POSITION\tBIGINT\tThe order in which the column is stored in a table COLUMN_DEFAULT\tVARCHAR\tUnused IS_NULLABLE\tVARCHAR DATA_TYPE\tVARCHAR CHARACTER_MAXIMUM_LENGTH\tBIGINT\tUnused CHARACTER_OCTET_LENGTH\tBIGINT\tUnused NUMERIC_PRECISION\tBIGINT NUMERIC_PRECISION_RADIX\tBIGINT NUMERIC_SCALE\tBIGINT DATETIME_PRECISION\tBIGINT CHARACTER_SET_NAME\tVARCHAR COLLATION_NAME\tVARCHAR JDBC_TYPE\tBIGINT\tType code from java.sql.Types (Druid extension) For example, this query returns data type information for columns in the foo table: SELECT "ORDINAL_POSITION", "COLUMN_NAME", "IS_NULLABLE", "DATA_TYPE", "JDBC_TYPE" FROM INFORMATION_SCHEMA.COLUMNS WHERE "TABLE_NAME" = 'foo' "},{"title":"ROUTINES table","type":1,"pageTitle":"SQL metadata tables","url":"/docs/27.0.0/querying/sql-metadata-tables#routines-table","content":"INFORMATION_SCHEMA.ROUTINES provides a list of all known functions. Column\tType\tNotesROUTINE_CATALOG\tVARCHAR\tThe catalog that contains the routine. Always set as druid ROUTINE_SCHEMA\tVARCHAR\tThe schema that contains the routine. Always set as INFORMATION_SCHEMA ROUTINE_NAME\tVARCHAR\tThe routine name ROUTINE_TYPE\tVARCHAR\tThe routine type. Always set as FUNCTION IS_AGGREGATOR\tVARCHAR\tIf a routine is an aggregator function, then the value will be set to YES, else NO SIGNATURES\tVARCHAR\tOne or more routine signatures For example, this query returns information about all the aggregator functions: SELECT "ROUTINE_CATALOG", "ROUTINE_SCHEMA", "ROUTINE_NAME", "ROUTINE_TYPE", "IS_AGGREGATOR", "SIGNATURES" FROM "INFORMATION_SCHEMA"."ROUTINES" WHERE "IS_AGGREGATOR" = 'YES' "},{"title":"SYSTEM SCHEMA","type":1,"pageTitle":"SQL metadata tables","url":"/docs/27.0.0/querying/sql-metadata-tables#system-schema","content":"The "sys" schema provides visibility into Druid segments, servers and tasks. info Note: "sys" tables do not currently support Druid-specific functions like TIME_PARSE and APPROX_QUANTILE_DS. Only standard SQL functions can be used. 
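For example, the following minimal sketch uses only standard SQL to summarize discovered servers by type (the server_type column is described in the SERVERS table section below):
SELECT server_type, COUNT(*) AS num_servers -- number of discovered services per type
FROM sys.servers
GROUP BY server_type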
"},{"title":"SEGMENTS table","type":1,"pageTitle":"SQL metadata tables","url":"/docs/27.0.0/querying/sql-metadata-tables#segments-table","content":"Segments table provides details on all Druid segments, whether they are published yet or not. Column\tType\tNotessegment_id\tVARCHAR\tUnique segment identifier datasource\tVARCHAR\tName of datasource start\tVARCHAR\tInterval start time (in ISO 8601 format) end\tVARCHAR\tInterval end time (in ISO 8601 format) size\tBIGINT\tSize of segment in bytes version\tVARCHAR\tVersion string (generally an ISO8601 timestamp corresponding to when the segment set was first started). Higher version means the more recently created segment. Version comparing is based on string comparison. partition_num\tBIGINT\tPartition number (an integer, unique within a datasource+interval+version; may not necessarily be contiguous) num_replicas\tBIGINT\tNumber of replicas of this segment currently being served num_rows\tBIGINT\tNumber of rows in this segment, or zero if the number of rows is not known. This row count is gathered by the Broker in the background. It will be zero if the Broker has not gathered a row count for this segment yet. For segments ingested from streams, the reported row count may lag behind the result of a count(*) query because the cached num_rows on the Broker may be out of date. This will settle shortly after new rows stop being written to that particular segment. is_active\tBIGINT\tTrue for segments that represent the latest state of a datasource. Equivalent to (is_published = 1 AND is_overshadowed = 0) OR is_realtime = 1. In steady state, when no ingestion or data management operations are happening, is_active will be equivalent to is_available. However, they may differ from each other when ingestion or data management operations have executed recently. In these cases, Druid will load and unload segments appropriately to bring actual availability in line with the expected state given by is_active. is_published\tBIGINT\tBoolean represented as long type where 1 = true, 0 = false. 1 if this segment has been published to the metadata store and is marked as used. See the segment lifecycle documentation for more details. is_available\tBIGINT\tBoolean represented as long type where 1 = true, 0 = false. 1 if this segment is currently being served by any data serving process, like a Historical or a realtime ingestion task. See the segment lifecycle documentation for more details. is_realtime\tBIGINT\tBoolean represented as long type where 1 = true, 0 = false. 1 if this segment is only served by realtime tasks, and 0 if any Historical process is serving this segment. is_overshadowed\tBIGINT\tBoolean represented as long type where 1 = true, 0 = false. 1 if this segment is published and is fully overshadowed by some other published segments. Currently, is_overshadowed is always 0 for unpublished segments, although this may change in the future. You can filter for segments that "should be published" by filtering for is_published = 1 AND is_overshadowed = 0. Segments can briefly be both published and overshadowed if they were recently replaced, but have not been unpublished yet. See the segment lifecycle documentation for more details. shard_spec\tVARCHAR\tJSON-serialized form of the segment ShardSpec dimensions\tVARCHAR\tJSON-serialized form of the segment dimensions metrics\tVARCHAR\tJSON-serialized form of the segment metrics last_compaction_state\tVARCHAR\tJSON-serialized form of the compaction task's config (compaction task which created this segment). 
May be null if segment was not created by compaction task. replication_factor\tBIGINT\tTotal number of replicas of the segment that are required to be loaded across all historical tiers, based on the load rule that currently applies to this segment. If this value is 0, the segment is not assigned to any historical and will not be loaded. This value is -1 if load rules for the segment have not been evaluated yet. For example, to retrieve all currently active segments for datasource "wikipedia", use the query: SELECT * FROM sys.segments WHERE datasource = 'wikipedia' AND is_active = 1 Another example retrieves total_size, avg_size, avg_num_rows, and num_segments per datasource: SELECT datasource, SUM("size") AS total_size, CASE WHEN SUM("size") = 0 THEN 0 ELSE SUM("size") / (COUNT(*) FILTER(WHERE "size" > 0)) END AS avg_size, CASE WHEN SUM(num_rows) = 0 THEN 0 ELSE SUM("num_rows") / (COUNT(*) FILTER(WHERE num_rows > 0)) END AS avg_num_rows, COUNT(*) AS num_segments FROM sys.segments WHERE is_active = 1 GROUP BY 1 ORDER BY 2 DESC This query goes a step further and shows the overall profile of available, non-realtime segments across buckets of 1 million rows each for the foo datasource: SELECT ABS("num_rows" / 1000000) as "bucket", COUNT(*) as segments, SUM("size") / 1048576 as totalSizeMiB, MIN("size") / 1048576 as minSizeMiB, AVG("size") / 1048576 as averageSizeMiB, MAX("size") / 1048576 as maxSizeMiB, SUM("num_rows") as totalRows, MIN("num_rows") as minRows, AVG("num_rows") as averageRows, MAX("num_rows") as maxRows, (AVG("size") / AVG("num_rows")) as avgRowSizeB FROM sys.segments WHERE is_available = 1 AND is_realtime = 0 AND "datasource" = 'foo' GROUP BY 1 ORDER BY 1 To retrieve segments that were compacted (by any compaction): SELECT * FROM sys.segments WHERE is_active = 1 AND last_compaction_state IS NOT NULL or to retrieve segments that were compacted only by a particular compaction spec (such as that of the auto compaction): SELECT * FROM sys.segments WHERE is_active = 1 AND last_compaction_state = 'CompactionState{partitionsSpec=DynamicPartitionsSpec{maxRowsPerSegment=5000000, maxTotalRows=9223372036854775807}, indexSpec={bitmap={type=roaring}, dimensionCompression=lz4, metricCompression=lz4, longEncoding=longs, segmentLoader=null}}' "},{"title":"SERVERS table","type":1,"pageTitle":"SQL metadata tables","url":"/docs/27.0.0/querying/sql-metadata-tables#servers-table","content":"Servers table lists all discovered servers in the cluster. Column\tType\tNotesserver\tVARCHAR\tServer name in the form host:port host\tVARCHAR\tHostname of the server plaintext_port\tBIGINT\tUnsecured port of the server, or -1 if plaintext traffic is disabled tls_port\tBIGINT\tTLS port of the server, or -1 if TLS is disabled server_type\tVARCHAR\tType of Druid service. Possible values include: COORDINATOR, OVERLORD, BROKER, ROUTER, HISTORICAL, MIDDLE_MANAGER or PEON. tier\tVARCHAR\tDistribution tier see druid.server.tier. Only valid for HISTORICAL type, for other types it's null current_size\tBIGINT\tCurrent size of segments in bytes on this server. Only valid for HISTORICAL type, for other types it's 0 max_size\tBIGINT\tMax size in bytes this server recommends to assign to segments see druid.server.maxSize. 
Only valid for HISTORICAL type, for other types it's 0 is_leader\tBIGINT\t1 if the server is currently the 'leader' (for services which have the concept of leadership), otherwise 0 if the server is not the leader, or the default long value (0 or null depending on druid.generic.useDefaultValueForNull) if the server type does not have the concept of leadership start_time\tSTRING\tTimestamp in ISO8601 format when the server was announced in the cluster To retrieve information about all servers, use the query: SELECT * FROM sys.servers; "},{"title":"SERVER_SEGMENTS table","type":1,"pageTitle":"SQL metadata tables","url":"/docs/27.0.0/querying/sql-metadata-tables#server_segments-table","content":"SERVER_SEGMENTS is used to join the servers table with the segments table. Column\tType\tNotesserver\tVARCHAR\tServer name in format host:port (Primary key of servers table) segment_id\tVARCHAR\tSegment identifier (Primary key of segments table) A JOIN between "servers" and "segments" can be used to query the number of segments for a specific datasource, grouped by server. Example query: SELECT count(segments.segment_id) as num_segments from sys.segments as segments INNER JOIN sys.server_segments as server_segments ON segments.segment_id = server_segments.segment_id INNER JOIN sys.servers as servers ON servers.server = server_segments.server WHERE segments.datasource = 'wikipedia' GROUP BY servers.server; "},{"title":"TASKS table","type":1,"pageTitle":"SQL metadata tables","url":"/docs/27.0.0/querying/sql-metadata-tables#tasks-table","content":"The tasks table provides information about active and recently-completed indexing tasks. For more information, check out the documentation for ingestion tasks. Column\tType\tNotestask_id\tVARCHAR\tUnique task identifier group_id\tVARCHAR\tTask group ID for this task, the value depends on the task type. For example, for native index tasks, it's the same as task_id, for sub tasks, this value is the parent task's ID type\tVARCHAR\tTask type, for example this value is "index" for indexing tasks. See tasks-overview datasource\tVARCHAR\tDatasource name being indexed created_time\tVARCHAR\tTimestamp in ISO8601 format corresponding to when the ingestion task was created. Note that this value is populated for completed and waiting tasks. For running and pending tasks this value is set to 1970-01-01T00:00:00Z queue_insertion_time\tVARCHAR\tTimestamp in ISO8601 format corresponding to when this task was added to the queue on the Overlord status\tVARCHAR\tStatus of a task can be RUNNING, FAILED, SUCCESS runner_status\tVARCHAR\tRunner status of a completed task would be NONE, for in-progress tasks this can be RUNNING, WAITING, PENDING duration\tBIGINT\tTime it took to finish the task in milliseconds, this value is present only for completed tasks location\tVARCHAR\tServer name where this task is running in the format host:port, this information is present only for RUNNING tasks host\tVARCHAR\tHostname of the server where task is running plaintext_port\tBIGINT\tUnsecured port of the server, or -1 if plaintext traffic is disabled tls_port\tBIGINT\tTLS port of the server, or -1 if TLS is disabled error_msg\tVARCHAR\tDetailed error message in case of FAILED tasks For example, to retrieve task information filtered by status, use the query SELECT * FROM sys.tasks WHERE status='FAILED'; "},{"title":"SUPERVISORS table","type":1,"pageTitle":"SQL metadata tables","url":"/docs/27.0.0/querying/sql-metadata-tables#supervisors-table","content":"The supervisors table provides information about supervisors. 
Column\tType\tNotessupervisor_id\tVARCHAR\tSupervisor task identifier state\tVARCHAR\tBasic state of the supervisor. Available states: UNHEALTHY_SUPERVISOR, UNHEALTHY_TASKS, PENDING, RUNNING, SUSPENDED, STOPPING. Check Kafka Docs for details. detailed_state\tVARCHAR\tSupervisor specific state. (See documentation of the specific supervisor for details, e.g. Kafka or Kinesis) healthy\tBIGINT\tBoolean represented as long type where 1 = true, 0 = false. 1 indicates a healthy supervisor type\tVARCHAR\tType of supervisor, e.g. kafka, kinesis or materialized_view source\tVARCHAR\tSource of the supervisor, e.g. Kafka topic or Kinesis stream suspended\tBIGINT\tBoolean represented as long type where 1 = true, 0 = false. 1 indicates supervisor is in suspended state spec\tVARCHAR\tJSON-serialized supervisor spec For example, to retrieve supervisor tasks information filtered by health status, use the query SELECT * FROM sys.supervisors WHERE healthy=0; "},{"title":"SQL multi-value string functions","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/sql-multivalue-string-functions","content":"SQL multi-value string functions info Apache Druid supports two query languages: Druid SQL and native queries. This document describes the SQL language. Druid supports string dimensions containing multiple values. This page describes the operations you can perform on multi-value string dimensions using Druid SQL. See SQL multi-value strings and native Multi-value dimensions for more information. All array references in the multi-value string function documentation can refer to multi-value string columns orARRAY types. These functions are largely identical to the array functions, but useVARCHAR types and behavior. Multi-value strings can also be converted to ARRAY types using MV_TO_ARRAY, andARRAY into multi-value strings via ARRAY_TO_MV. For additional details about ARRAY types, seeARRAY data type documentation. Function\tDescriptionMV_FILTER_ONLY(expr, arr)\tFilters multi-value expr to include only values contained in array arr. MV_FILTER_NONE(expr, arr)\tFilters multi-value expr to include no values contained in array arr. MV_LENGTH(arr)\tReturns length of the array expression. MV_OFFSET(arr, long)\tReturns the array element at the 0-based index supplied, or null for an out of range index. MV_ORDINAL(arr, long)\tReturns the array element at the 1-based index supplied, or null for an out of range index. MV_CONTAINS(arr, expr)\tIf expr is a scalar type, returns 1 if arr contains expr. If expr is an array, returns 1 if arr contains all elements of expr. Otherwise returns 0. MV_OVERLAP(arr1, arr2)\tReturns 1 if arr1 and arr2 have any elements in common, else 0. MV_OFFSET_OF(arr, expr)\tReturns the 0-based index of the first occurrence of expr in the array. If no matching elements exist in the array, returns -1 or null if druid.generic.useDefaultValueForNull=false. MV_ORDINAL_OF(arr, expr)\tReturns the 1-based index of the first occurrence of expr in the array. If no matching elements exist in the array, returns -1 or null if druid.generic.useDefaultValueForNull=false. MV_PREPEND(expr, arr)\tAdds expr to arr at the beginning, the resulting array type determined by the type of the array. MV_APPEND(arr1, expr)\tAppends expr to arr, the resulting array type determined by the type of the first array. MV_CONCAT(arr1, arr2)\tConcatenates arr2 to arr1. The resulting array type is determined by the type of arr1. 
MV_SLICE(arr, start, end)\tReturns the subarray of arr from the 0-based index start(inclusive) to end(exclusive), or null, if start is less than 0, greater than length of arr or greater than end. MV_TO_STRING(arr, str)\tJoins all elements of arr by the delimiter specified by str. STRING_TO_MV(str1, str2)\tSplits str1 into an array on the delimiter specified by str2, which is a regular expression. MV_TO_ARRAY(str)\tConverts a multi-value string from a VARCHAR to a VARCHAR ARRAY.","keywords":""},{"title":"Druid SQL Operators","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/sql-operators","content":"","keywords":""},{"title":"Arithmetic operators","type":1,"pageTitle":"Druid SQL Operators","url":"/docs/27.0.0/querying/sql-operators#arithmetic-operators","content":"Operator\tDescriptionx + y\tAdd x - y\tSubtract x * y\tMultiply x / y\tDivide "},{"title":"Datetime arithmetic operators","type":1,"pageTitle":"Druid SQL Operators","url":"/docs/27.0.0/querying/sql-operators#datetime-arithmetic-operators","content":"For the datetime arithmetic operators, interval_expr can include interval literals like INTERVAL '2' HOUR. This operator treats days as uniformly 86400 seconds long, and does not take into account daylight savings time. To account for daylight savings time, use the TIME_SHIFT function. Also see TIMESTAMPADD for datetime arithmetic. Operator\tDescriptiontimestamp_expr + interval_expr\tAdd an amount of time to a timestamp. timestamp_expr - interval_expr\tSubtract an amount of time from a timestamp. "},{"title":"Concatenation operator","type":1,"pageTitle":"Druid SQL Operators","url":"/docs/27.0.0/querying/sql-operators#concatenation-operator","content":"Also see the CONCAT function. Operator\tDescriptionx || y\tConcatenate strings x and y. "},{"title":"Comparison operators","type":1,"pageTitle":"Druid SQL Operators","url":"/docs/27.0.0/querying/sql-operators#comparison-operators","content":"Operator\tDescriptionx = y\tEqual to x <> y\tNot equal to x > y\tGreater than x >= y\tGreater than or equal to x < y\tLess than x <= y\tLess than or equal to "},{"title":"Logical operators","type":1,"pageTitle":"Druid SQL Operators","url":"/docs/27.0.0/querying/sql-operators#logical-operators","content":"Operator\tDescriptionx AND y\tBoolean AND x OR y\tBoolean OR NOT x\tBoolean NOT x IS NULL\tTrue if x is NULL or empty string x IS NOT NULL\tTrue if x is neither NULL nor empty string x IS TRUE\tTrue if x is true x IS NOT TRUE\tTrue if x is not true x IS FALSE\tTrue if x is false x IS NOT FALSE\tTrue if x is not false x BETWEEN y AND z\tEquivalent to x >= y AND x <= z x NOT BETWEEN y AND z\tEquivalent to x < y OR x > z x LIKE pattern [ESCAPE esc]\tTrue if x matches a SQL LIKE pattern (with an optional escape) x NOT LIKE pattern [ESCAPE esc]\tTrue if x does not match a SQL LIKE pattern (with an optional escape) x IN (values)\tTrue if x is one of the listed values x NOT IN (values)\tTrue if x is not one of the listed values x IN (subquery)\tTrue if x is returned by the subquery. This will be translated into a join; see Query translation for details. x NOT IN (subquery)\tTrue if x is not returned by the subquery. This will be translated into a join; see Query translation for details. 
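As a brief sketch combining several of these operators (tbl, my_column, and my_number are hypothetical names, not part of this reference):
SELECT COUNT(*)
FROM tbl
WHERE my_column LIKE 'value%' -- SQL LIKE pattern match
  AND my_number BETWEEN 1 AND 10 -- equivalent to my_number >= 1 AND my_number <= 10
  AND my_column NOT IN ('value3', 'value4')
  AND my_column IS NOT NULL
A subquery form such as my_column IN (SELECT other_column FROM tbl2) would be translated into a join, as noted above; other_column and tbl2 are likewise hypothetical.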
"},{"title":"SQL query context","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/sql-query-context","content":"","keywords":""},{"title":"SQL query context parameters","type":1,"pageTitle":"SQL query context","url":"/docs/27.0.0/querying/sql-query-context#sql-query-context-parameters","content":"Configure Druid SQL query planning using the parameters in the table below. Parameter\tDescription\tDefault valuesqlQueryId\tUnique identifier given to this SQL query. For HTTP client, it will be returned in X-Druid-SQL-Query-Id header. To specify a unique identifier for SQL query, use sqlQueryId instead of queryId. Setting queryId for a SQL request has no effect. All native queries underlying SQL use an auto-generated queryId.\tauto-generated sqlTimeZone\tSets the time zone for this connection, which will affect how time functions and timestamp literals behave. Should be a time zone name like "America/Los_Angeles" or offset like "-08:00".\tdruid.sql.planner.sqlTimeZone on the Broker (default: UTC) sqlStringifyArrays\tWhen set to true, result columns which return array values will be serialized into a JSON string in the response instead of as an array\ttrue, except for JDBC connections, where it is always false useApproximateCountDistinct\tWhether to use an approximate cardinality algorithm for COUNT(DISTINCT foo).\tdruid.sql.planner.useApproximateCountDistinct on the Broker (default: true) useGroupingSetForExactDistinct\tWhether to use grouping sets to execute queries with multiple exact distinct aggregations.\tdruid.sql.planner.useGroupingSetForExactDistinct on the Broker (default: false) useApproximateTopN\tWhether to use approximate TopN queries when a SQL query could be expressed as such. If false, exact GroupBy queries will be used instead.\tdruid.sql.planner.useApproximateTopN on the Broker (default: true) enableTimeBoundaryPlanning\tIf true, SQL queries will get converted to TimeBoundary queries wherever possible. TimeBoundary queries are very efficient for min-max calculation on __time column in a datasource\tdruid.query.default.context.enableTimeBoundaryPlanning on the Broker (default: false) useNativeQueryExplain\tIf true, EXPLAIN PLAN FOR will return the explain plan as a JSON representation of equivalent native query(s), else it will return the original version of explain plan generated by Calcite. This property is provided for backwards compatibility. It is not recommended to use this parameter unless you were depending on the older behavior.\tdruid.sql.planner.useNativeQueryExplain on the Broker (default: true) sqlFinalizeOuterSketches\tIf false (default behavior in Druid 25.0.0 and later), DS_HLL, DS_THETA, and DS_QUANTILES_SKETCH return sketches in query results, as documented. If true (default behavior in Druid 24.0.1 and earlier), sketches from these functions are finalized when they appear in query results. This property is provided for backwards compatibility with behavior in Druid 24.0.1 and earlier. It is not recommended to use this parameter unless you were depending on the older behavior. Instead, use a function that does not return a sketch, such as APPROX_COUNT_DISTINCT_DS_HLL, APPROX_COUNT_DISTINCT_DS_THETA, APPROX_QUANTILE_DS, DS_THETA_ESTIMATE, or DS_GET_QUANTILE.\tdruid.query.default.context.sqlFinalizeOuterSketches on the Broker (default: false) sqlUseBoundAndSelectors\tIf false (default behavior if druid.generic.useDefaultValueForNull=false in Druid 27.0.0 and later), the SQL planner will use equality, null, and range filters instead of selector and bounds. 
This value must be set to false for correct behavior for filtering ARRAY typed values.\tDefaults to same value as druid.generic.useDefaultValueForNull "},{"title":"Setting the query context","type":1,"pageTitle":"SQL query context","url":"/docs/27.0.0/querying/sql-query-context#setting-the-query-context","content":"The query context parameters can be specified as a "context" object in the JSON API or as a JDBC connection properties object. See examples for each option below. "},{"title":"Example using JSON API","type":1,"pageTitle":"SQL query context","url":"/docs/27.0.0/querying/sql-query-context#example-using-json-api","content":"{ "query" : "SELECT COUNT(*) FROM data_source WHERE foo = 'bar' AND __time > TIMESTAMP '2000-01-01 00:00:00'", "context" : { "sqlTimeZone" : "America/Los_Angeles" } } "},{"title":"Example using JDBC","type":1,"pageTitle":"SQL query context","url":"/docs/27.0.0/querying/sql-query-context#example-using-jdbc","content":"String url = "jdbc:avatica:remote:url=http://localhost:8082/druid/v2/sql/avatica/"; // Set any query context parameters you need here. Properties connectionProperties = new Properties(); connectionProperties.setProperty("sqlTimeZone", "America/Los_Angeles"); connectionProperties.setProperty("useCache", "false"); try (Connection connection = DriverManager.getConnection(url, connectionProperties)) { // create and execute statements, process result sets, etc } "},{"title":"TimeBoundary queries","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/timeboundaryquery","content":"TimeBoundary queries info Apache Druid supports two query languages: Druid SQL and native queries. This document describes a query type that is only available in the native language. Time boundary queries return the earliest and latest data points of a data set. The grammar is: { "queryType" : "timeBoundary", "dataSource": "sample_datasource", "bound" : < "maxTime" | "minTime" > # optional, defaults to returning both timestamps if not set "filter" : { "type": "and", "fields": [<filter>, <filter>, ...] } # optional } There are 3 main parts to a time boundary query: property\tdescription\trequired?queryType\tThis String should always be "timeBoundary"; this is the first thing Apache Druid looks at to figure out how to interpret the query\tyes dataSource\tA String or Object defining the data source to query, very similar to a table in a relational database. See DataSource for more information.\tyes bound\tOptional, set to maxTime or minTime to return only the latest or earliest timestamp. Default to returning both if not set\tno filter\tSee Filters\tno context\tSee Context\tno The format of the result is: [ { "timestamp" : "2013-05-09T18:24:00.000Z", "result" : { "minTime" : "2013-05-09T18:24:00.000Z", "maxTime" : "2013-05-09T18:37:00.000Z" } } ] ","keywords":""},{"title":"All Druid SQL functions","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/sql-functions","content":"","keywords":""},{"title":"ABS","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#abs","content":"ABS(<NUMERIC>) Function type: Scalar, numeric Calculates the absolute value of a numeric expression. "},{"title":"ACOS","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#acos","content":"ACOS(<NUMERIC>) Function type: Scalar, numeric Calculates the arc cosine of a numeric expression. 
"},{"title":"ANY_VALUE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#any_value","content":"ANY_VALUE(<NUMERIC>) ANY_VALUE(<BOOLEAN>) ANY_VALUE(<CHARACTER>, <NUMERIC>) Function type: Aggregation Returns any value of the specified expression. "},{"title":"APPROX_COUNT_DISTINCT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#approx_count_distinct","content":"APPROX_COUNT_DISTINCT(expr) Function type: Aggregation Counts distinct values of a regular column or a prebuilt sketch column. APPROX_COUNT_DISTINCT_BUILTIN(expr) Function type: Aggregation Counts distinct values of a string, numeric, or hyperUnique column using Druid's built-in cardinality or hyperUnique aggregators. "},{"title":"APPROX_COUNT_DISTINCT_DS_HLL","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#approx_count_distinct_ds_hll","content":"APPROX_COUNT_DISTINCT_DS_HLL(expr, [<NUMERIC>, <CHARACTER>]) Function type: Aggregation Counts distinct values of an HLL sketch column or a regular column. "},{"title":"APPROX_COUNT_DISTINCT_DS_THETA","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#approx_count_distinct_ds_theta","content":"APPROX_COUNT_DISTINCT_DS_THETA(expr, [<NUMERIC>]) Function type: Aggregation Counts distinct values of a Theta sketch column or a regular column. "},{"title":"APPROX_QUANTILE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#approx_quantile","content":"APPROX_QUANTILE(expr, <NUMERIC>, [<NUMERIC>]) Function type: Aggregation Deprecated in favor of APPROX_QUANTILE_DS. "},{"title":"APPROX_QUANTILE_DS","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#approx_quantile_ds","content":"APPROX_QUANTILE_DS(expr, <NUMERIC>, [<NUMERIC>]) Function type: Aggregation Computes approximate quantiles on a Quantiles sketch column or a regular numeric column. "},{"title":"APPROX_QUANTILE_FIXED_BUCKETS","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#approx_quantile_fixed_buckets","content":"APPROX_QUANTILE_FIXED_BUCKETS(expr, <NUMERIC>, <NUMERIC>, <NUMERIC>, <NUMERIC>, [<CHARACTER>]) Function type: Aggregation Computes approximate quantiles on fixed buckets histogram column or a regular numeric column. "},{"title":"ARRAY[]","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#array","content":"ARRAY[expr1, expr2, ...] Function type: Multi-value string Constructs a SQL ARRAY literal from the expression arguments. The arguments must be of the same type. "},{"title":"ARRAY_AGG","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#array_agg","content":"ARRAY_AGG([DISTINCT] expr, [<NUMERIC>]) Function type: Aggregation Returns an array of all values of the specified expression. "},{"title":"ARRAY_APPEND","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#array_append","content":"ARRAY_APPEND(arr1, expr) Function type: Array Appends expr to arr, the resulting array type determined by the type of arr1. "},{"title":"ARRAY_CONCAT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#array_concat","content":"ARRAY_CONCAT(arr1, arr2) Function type: Array Concatenates arr2 to arr1. 
The resulting array type is determined by the type of arr1. "},{"title":"ARRAY_CONCAT_AGG","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#array_concat_agg","content":"ARRAY_CONCAT_AGG([DISTINCT] expr, [<NUMERIC>]) Function type: Aggregation Concatenates array inputs into a single array. "},{"title":"ARRAY_CONTAINS","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#array_contains","content":"ARRAY_CONTAINS(arr, expr) Function type: Array If expr is a scalar type, returns 1 if arr contains expr. If expr is an array, returns 1 if arr contains all elements of expr. Otherwise returns 0. "},{"title":"ARRAY_LENGTH","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#array_length","content":"ARRAY_LENGTH(arr) Function type: Array Returns length of the array expression. "},{"title":"ARRAY_OFFSET","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#array_offset","content":"ARRAY_OFFSET(arr, long) Function type: Array Returns the array element at the 0-based index supplied, or null for an out of range index. "},{"title":"ARRAY_OFFSET_OF","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#array_offset_of","content":"ARRAY_OFFSET_OF(arr, expr) Function type: Array Returns the 0-based index of the first occurrence of expr in the array. If no matching elements exist in the array, returns -1 or null if druid.generic.useDefaultValueForNull=false. "},{"title":"ARRAY_ORDINAL","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#array_ordinal","content":"Function type: Array ARRAY_ORDINAL(arr, long) Returns the array element at the 1-based index supplied, or null for an out of range index. "},{"title":"ARRAY_ORDINAL_OF","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#array_ordinal_of","content":"ARRAY_ORDINAL_OF(arr, expr) Function type: Array Returns the 1-based index of the first occurrence of expr in the array. If no matching elements exist in the array, returns -1 or null if druid.generic.useDefaultValueForNull=false. "},{"title":"ARRAY_OVERLAP","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#array_overlap","content":"ARRAY_OVERLAP(arr1, arr2) Function type: Array Returns 1 if arr1 and arr2 have any elements in common, else 0. "},{"title":"ARRAY_PREPEND","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#array_prepend","content":"ARRAY_PREPEND(expr, arr) Function type: Array Prepends expr to arr at the beginning, the resulting array type determined by the type of arr. "},{"title":"ARRAY_SLICE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#array_slice","content":"ARRAY_SLICE(arr, start, end) Function type: Array Returns the subarray of arr from the 0-based index start (inclusive) to end (exclusive). Returns null, if start is less than 0, greater than length of arr, or greater than end. "},{"title":"ARRAY_TO_MV","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#array_to_mv","content":"ARRAY_TO_MV(arr) Function type: Array Converts an ARRAY of any type into a multi-value string VARCHAR. 
"},{"title":"ARRAY_TO_STRING","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#array_to_string","content":"ARRAY_TO_STRING(arr, str) Function type: Array Joins all elements of arr by the delimiter specified by str. "},{"title":"ASIN","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#asin","content":"ASIN(<NUMERIC>) Function type: Scalar, numeric Calculates the arc sine of a numeric expression. "},{"title":"ATAN","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#atan","content":"ATAN(<NUMERIC>) Function type: Scalar, numeric Calculates the arc tangent of a numeric expression. "},{"title":"ATAN2","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#atan2","content":"ATAN2(<NUMERIC>, <NUMERIC>) Function type: Scalar, numeric Calculates the arc tangent of the two arguments. "},{"title":"AVG","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#avg","content":"AVG(<NUMERIC>) Function type: Aggregation Calculates the average of a set of values. "},{"title":"BIT_AND","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#bit_and","content":"BIT_AND(expr) Function type: Aggregation Performs a bitwise AND operation on all input values. "},{"title":"BIT_OR","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#bit_or","content":"BIT_OR(expr) Function type: Aggregation Performs a bitwise OR operation on all input values. "},{"title":"BIT_XOR","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#bit_xor","content":"BIT_XOR(expr) Function type: Aggregation Performs a bitwise XOR operation on all input values. "},{"title":"BITWISE_AND","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#bitwise_and","content":"BITWISE_AND(expr1, expr2) Function type: Scalar, numeric Returns the bitwise AND between the two expressions, that is, expr1 & expr2. "},{"title":"BITWISE_COMPLEMENT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#bitwise_complement","content":"BITWISE_COMPLEMENT(expr) Function type: Scalar, numeric Returns the bitwise NOT for the expression, that is, ~expr. "},{"title":"BITWISE_CONVERT_DOUBLE_TO_LONG_BITS","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#bitwise_convert_double_to_long_bits","content":"BITWISE_CONVERT_DOUBLE_TO_LONG_BITS(expr) Function type: Scalar, numeric Converts the bits of an IEEE 754 floating-point double value to a long. "},{"title":"BITWISE_CONVERT_LONG_BITS_TO_DOUBLE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#bitwise_convert_long_bits_to_double","content":"BITWISE_CONVERT_LONG_BITS_TO_DOUBLE(expr) Function type: Scalar, numeric Converts a long to the IEEE 754 floating-point double specified by the bits stored in the long. "},{"title":"BITWISE_OR","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#bitwise_or","content":"BITWISE_OR(expr1, expr2) Function type: Scalar, numeric Returns the bitwise OR between the two expressions, that is, expr1 | expr2. 
"},{"title":"BITWISE_SHIFT_LEFT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#bitwise_shift_left","content":"BITWISE_SHIFT_LEFT(expr1, expr2) Function type: Scalar, numeric Returns a bitwise left shift of expr1, that is, expr1 << expr2. "},{"title":"BITWISE_SHIFT_RIGHT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#bitwise_shift_right","content":"BITWISE_SHIFT_RIGHT(expr1, expr2) Function type: Scalar, numeric Returns a bitwise right shift of expr1, that is, expr1 >> expr2. "},{"title":"BITWISE_XOR","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#bitwise_xor","content":"BITWISE_XOR(expr1, expr2) Function type: Scalar, numeric Returns the bitwise exclusive OR between the two expressions, that is, expr1 ^ expr2. "},{"title":"BLOOM_FILTER","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#bloom_filter","content":"BLOOM_FILTER(expr, <NUMERIC>) Function type: Aggregation Computes a Bloom filter from values produced by the specified expression. "},{"title":"BLOOM_FILTER_TEST","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#bloom_filter_test","content":"BLOOM_FILTER_TEST(expr, <STRING>) Function type: Scalar, other Returns true if the expression is contained in a Base64-serialized Bloom filter. "},{"title":"BTRIM","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#btrim","content":"BTRIM(<CHARACTER>, [<CHARACTER>]) Function type: Scalar, string Trims characters from both the leading and trailing ends of an expression. "},{"title":"CASE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#case","content":"CASE expr WHEN value1 THEN result1 \\[ WHEN value2 THEN result2 ... \\] \\[ ELSE resultN \\] END Function type: Scalar, other Returns a result based on a given condition. "},{"title":"CAST","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#cast","content":"CAST(value AS TYPE) Function type: Scalar, other Converts a value into the specified data type. "},{"title":"CEIL (date and time)","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ceil-date-and-time","content":"CEIL(<TIMESTAMP> TO <TIME_UNIT>) Function type: Scalar, date and time Rounds up a timestamp by a given time unit. "},{"title":"CEIL (numeric)","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ceil-numeric","content":"CEIL(<NUMERIC>) Function type: Scalar, numeric Calculates the smallest integer value greater than or equal to the numeric expression. "},{"title":"CHAR_LENGTH","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#char_length","content":"CHAR_LENGTH(expr) Function type: Scalar, string Alias for LENGTH. "},{"title":"CHARACTER_LENGTH","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#character_length","content":"CHARACTER_LENGTH(expr) Function type: Scalar, string Alias for LENGTH. "},{"title":"COALESCE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#coalesce","content":"COALESCE(expr, expr, ...) Function type: Scalar, other Returns the first non-null value. 
"},{"title":"CONCAT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#concat","content":"CONCAT(expr, expr...) Function type: Scalar, string Concatenates a list of expressions. "},{"title":"CONTAINS_STRING","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#contains_string","content":"CONTAINS_STRING(<CHARACTER>, <CHARACTER>) Function type: Scalar, string Finds whether a string is in a given expression, case-sensitive. "},{"title":"COS","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#cos","content":"COS(<NUMERIC>) Function type: Scalar, numeric Calculates the trigonometric cosine of an angle expressed in radians. "},{"title":"COT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#cot","content":"COT(<NUMERIC>) Function type: Scalar, numeric Calculates the trigonometric cotangent of an angle expressed in radians. "},{"title":"COUNT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#count","content":"COUNT([DISTINCT] expr) COUNT(*) Function type: Aggregation Counts the number of rows. "},{"title":"CURRENT_DATE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#current_date","content":"CURRENT_DATE Function type: Scalar, date and time Returns the current date in the connection's time zone. "},{"title":"CURRENT_TIMESTAMP","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#current_timestamp","content":"CURRENT_TIMESTAMP Function type: Scalar, date and time Returns the current timestamp in the connection's time zone. "},{"title":"DATE_TRUNC","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#date_trunc","content":"DATE_TRUNC(<CHARACTER>, <TIMESTAMP>) Function type: Scalar, date and time Rounds down a timestamp by a given time unit. "},{"title":"DEGREES","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#degrees","content":"DEGREES(<NUMERIC>) Function type: Scalar, numeric Converts an angle from radians to degrees. "},{"title":"DIV","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#div","content":"DIV(x, y) Function type: Scalar, numeric Returns the result of integer division of x by y. "},{"title":"DS_CDF","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ds_cdf","content":"DS_CDF(expr, splitPoint0, splitPoint1, ...) Function type: Scalar, sketch Returns a string representing an approximation to the Cumulative Distribution Function given the specified bin definition. "},{"title":"DS_GET_QUANTILE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ds_get_quantile","content":"DS_GET_QUANTILE(expr, fraction) Function type: Scalar, sketch Returns the quantile estimate corresponding to fraction from a quantiles sketch. "},{"title":"DS_GET_QUANTILES","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ds_get_quantiles","content":"DS_GET_QUANTILES(expr, fraction0, fraction1, ...) Function type: Scalar, sketch Returns a string representing an array of quantile estimates corresponding to a list of fractions from a quantiles sketch. 
"},{"title":"DS_HISTOGRAM","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ds_histogram","content":"DS_HISTOGRAM(expr, splitPoint0, splitPoint1, ...) Function type: Scalar, sketch Returns a string representing an approximation to the histogram given the specified bin definition. "},{"title":"DS_HLL","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ds_hll","content":"DS_HLL(expr, [lgK, tgtHllType]) Function type: Aggregation Creates an HLL sketch on a column containing HLL sketches or a regular column. "},{"title":"DS_QUANTILE_SUMMARY","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ds_quantile_summary","content":"DS_QUANTILE_SUMMARY(expr) Function type: Scalar, sketch Returns a string summary of a quantiles sketch. "},{"title":"DS_QUANTILES_SKETCH","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ds_quantiles_sketch","content":"DS_QUANTILES_SKETCH(expr, [k]) Function type: Aggregation Creates a Quantiles sketch on a column containing Quantiles sketches or a regular column. "},{"title":"DS_RANK","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ds_rank","content":"DS_RANK(expr, value) Function type: Scalar, sketch Returns an approximate rank between 0 and 1 of a given value, in which the rank signifies the fraction of the distribution less than the given value. "},{"title":"DS_THETA","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ds_theta","content":"DS_THETA(expr, [size]) Function type: Aggregation Creates a Theta sketch on a column containing Theta sketches or a regular column. "},{"title":"DS_TUPLE_DOUBLES","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ds_tuple_doubles","content":"DS_TUPLE_DOUBLES(expr, [nominalEntries]) DS_TUPLE_DOUBLES(dimensionColumnExpr, metricColumnExpr, ..., [nominalEntries]) Function type: Aggregation Creates a Tuple sketch which contains an array of double values as the Summary Object. If the last value of the array is a numeric literal, Druid assumes that the value is an override parameter for nominal entries. "},{"title":"DS_TUPLE_DOUBLES_INTERSECT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ds_tuple_doubles_intersect","content":"DS_TUPLE_DOUBLES_INTERSECT(expr, ..., [nominalEntries]) Function type: Scalar, sketch Returns an intersection of Tuple sketches which each contain an array of double values as their Summary Objects. The values contained in the Summary Objects are summed when combined. If the last value of the array is a numeric literal, Druid assumes that the value is an override parameter for nominal entries. "},{"title":"DS_TUPLE_DOUBLES_METRICS_SUM_ESTIMATE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ds_tuple_doubles_metrics_sum_estimate","content":"DS_TUPLE_DOUBLES_METRICS_SUM_ESTIMATE(expr) Function type: Scalar, sketch Computes approximate sums of the values contained within a Tuple sketch which contains an array of double values as the Summary Object. 
"},{"title":"DS_TUPLE_DOUBLES_NOT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ds_tuple_doubles_not","content":"DS_TUPLE_DOUBLES_NOT(expr, ..., [nominalEntries]) Function type: Scalar, sketch Returns a set difference of Tuple sketches which each contain an array of double values as their Summary Objects. The values contained in the Summary Object are preserved as is. If the last value of the array is a numeric literal, Druid assumes that the value is an override parameter for nominal entries. "},{"title":"DS_TUPLE_DOUBLES_UNION","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ds_tuple_doubles_union","content":"DS_TUPLE_DOUBLES_UNION(expr, ..., [nominalEntries]) Function type: Scalar, sketch Returns a union of Tuple sketches which each contain an array of double values as their Summary Objects. The values contained in the Summary Objects are summed when combined. If the last value of the array is a numeric literal, Druid assumes that the value is an override parameter for nominal entries. "},{"title":"EARLIEST","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#earliest","content":"EARLIEST(expr) EARLIEST(expr, maxBytesPerString) Function type: Aggregation Returns the value of a numeric or string expression corresponding to the earliest __time value. "},{"title":"EARLIEST_BY","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#earliest_by","content":"EARLIEST_BY(expr, timestampExpr) EARLIEST_BY(expr, timestampExpr, maxBytesPerString) Function type: Aggregation Returns the value of a numeric or string expression corresponding to the earliest time value from timestampExpr. "},{"title":"EXP","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#exp","content":"EXP(<NUMERIC>) Function type: Scalar, numeric Calculates e raised to the power of the numeric expression. "},{"title":"EXTRACT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#extract","content":"EXTRACT(<TIME_UNIT> FROM <TIMESTAMP>) Function type: Scalar, date and time Extracts the value of some unit of the timestamp, optionally from a certain time zone, and returns the number. "},{"title":"FLOOR (date and time)","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#floor-date-and-time","content":"FLOOR(<TIMESTAMP> TO <TIME_UNIT>) Function type: Scalar, date and time Rounds down a timestamp by a given time unit. "},{"title":"FLOOR (numeric)","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#floor-numeric","content":"FLOOR(<NUMERIC>) Function type: Scalar, numeric Calculates the largest integer value less than or equal to the numeric expression. "},{"title":"GREATEST","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#greatest","content":"GREATEST([expr1, ...]) Function type: Scalar, reduction Returns the maximum value from the provided arguments. "},{"title":"GROUPING","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#grouping","content":"GROUPING(expr, expr...) Function type: Aggregation Returns a number for each output row of a groupBy query, indicating whether the specified dimension is included for that row. 
"},{"title":"HLL_SKETCH_ESTIMATE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#hll_sketch_estimate","content":"HLL_SKETCH_ESTIMATE(expr, [round]) Function type: Scalar, sketch Returns the distinct count estimate from an HLL sketch. "},{"title":"HLL_SKETCH_ESTIMATE_WITH_ERROR_BOUNDS","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#hll_sketch_estimate_with_error_bounds","content":"HLL_SKETCH_ESTIMATE_WITH_ERROR_BOUNDS(expr, [numStdDev]) Function type: Scalar, sketch Returns the distinct count estimate and error bounds from an HLL sketch. "},{"title":"HLL_SKETCH_TO_STRING","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#hll_sketch_to_string","content":"HLL_SKETCH_TO_STRING(expr) Function type: Scalar, sketch Returns a human-readable string representation of an HLL sketch. "},{"title":"HLL_SKETCH_UNION","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#hll_sketch_union","content":"HLL_SKETCH_UNION([lgK, tgtHllType], expr0, expr1, ...) Function type: Scalar, sketch Returns a union of HLL sketches. "},{"title":"HUMAN_READABLE_BINARY_BYTE_FORMAT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#human_readable_binary_byte_format","content":"HUMAN_READABLE_BINARY_BYTE_FORMAT(value[, precision]) Function type: Scalar, numeric Converts an integer byte size into human-readable IEC format. "},{"title":"HUMAN_READABLE_DECIMAL_BYTE_FORMAT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#human_readable_decimal_byte_format","content":"HUMAN_READABLE_DECIMAL_BYTE_FORMAT(value[, precision]) Function type: Scalar, numeric Converts a byte size into human-readable SI format. "},{"title":"HUMAN_READABLE_DECIMAL_FORMAT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#human_readable_decimal_format","content":"HUMAN_READABLE_DECIMAL_FORMAT(value[, precision]) Function type: Scalar, numeric Converts a byte size into human-readable SI format with single-character units. "},{"title":"ICONTAINS_STRING","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#icontains_string","content":"ICONTAINS_STRING(<expr>, str) Function type: Scalar, string Finds whether a string is in a given expression, case-insensitive. "},{"title":"IPV4_MATCH","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ipv4_match","content":"IPV4_MATCH(address, subnet) Function type: Scalar, IP address Returns true if the address belongs to the subnet literal, else false. "},{"title":"IPV4_PARSE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ipv4_parse","content":"IPV4_PARSE(address) Function type: Scalar, IP address Parses address into an IPv4 address stored as an integer. "},{"title":"IPV4_STRINGIFY","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ipv4_stringify","content":"IPV4_STRINGIFY(address) Function type: Scalar, IP address Converts address into an IPv4 address in dot-decimal notation. "},{"title":"JSON_KEYS","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#json_keys","content":"Function type: JSON JSON_KEYS(expr, path) Returns an array of field names from expr at the specified path. 
"},{"title":"JSON_OBJECT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#json_object","content":"Function type: JSON JSON_OBJECT(KEY expr1 VALUE expr2[, KEY expr3 VALUE expr4, ...]) Constructs a new COMPLEX<json> object. The KEY expressions must evaluate to string types. The VALUE expressions can be composed of any input type, including other COMPLEX<json> values. JSON_OBJECT can accept colon-separated key-value pairs. The following syntax is equivalent: JSON_OBJECT(expr1:expr2[, expr3:expr4, ...]). "},{"title":"JSON_PATHS","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#json_paths","content":"Function type: JSON JSON_PATHS(expr) Returns an array of all paths which refer to literal values in expr in JSONPath format. "},{"title":"JSON_QUERY","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#json_query","content":"Function type: JSON JSON_QUERY(expr, path) Extracts a COMPLEX<json> value from expr, at the specified path. "},{"title":"JSON_VALUE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#json_value","content":"Function type: JSON JSON_VALUE(expr, path [RETURNING sqlType]) Extracts a literal value from expr at the specified path. If you specify RETURNING and an SQL type name (such as VARCHAR, BIGINT, DOUBLE, etc) the function plans the query using the suggested type. Otherwise, it attempts to infer the type based on the context. If it can't infer the type, it defaults to VARCHAR. "},{"title":"LATEST","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#latest","content":"LATEST(expr) LATEST(expr, maxBytesPerString) Function type: Aggregation Returns the value of a numeric or string expression corresponding to the latest __time value. "},{"title":"LATEST_BY","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#latest_by","content":"LATEST_BY(expr, timestampExpr) LATEST_BY(expr, timestampExpr, maxBytesPerString) Function type: Aggregation Returns the value of a numeric or string expression corresponding to the latest time value from timestampExpr. "},{"title":"LEAST","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#least","content":"LEAST([expr1, ...]) Function type: Scalar, reduction Returns the minimum value from the provided arguments. "},{"title":"LEFT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#left","content":"LEFT(expr, [length]) Function type: Scalar, string Returns the leftmost number of characters from an expression. "},{"title":"LENGTH","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#length","content":"LENGTH(expr) Function type: Scalar, string Returns the length of the expression in UTF-16 encoding. "},{"title":"LN","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ln","content":"LN(expr) Function type: Scalar, numeric Calculates the natural logarithm of the numeric expression. "},{"title":"LOG10","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#log10","content":"LOG10(expr) Function type: Scalar, numeric Calculates the base-10 of the numeric expression. 
"},{"title":"LOOKUP","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#lookup","content":"LOOKUP(<CHARACTER>, <CHARACTER>) Function type: Scalar, string Looks up the expression in a registered query-time lookup table. "},{"title":"LOWER","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#lower","content":"LOWER(expr) Function type: Scalar, string Returns the expression in lowercase. "},{"title":"LPAD","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#lpad","content":"LPAD(<CHARACTER>, <INTEGER>, [<CHARACTER>]) Function type: Scalar, string Returns the leftmost number of characters from an expression, optionally padded with the given characters. "},{"title":"LTRIM","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#ltrim","content":"LTRIM(<CHARACTER>, [<CHARACTER>]) Function type: Scalar, string Trims characters from the leading end of an expression. "},{"title":"MAX","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#max","content":"MAX(expr) Function type: Aggregation Returns the maximum value of a set of values. "},{"title":"MILLIS_TO_TIMESTAMP","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#millis_to_timestamp","content":"MILLIS_TO_TIMESTAMP(millis_expr) Function type: Scalar, date and time Converts a number of milliseconds since epoch into a timestamp. "},{"title":"MIN","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#min","content":"MIN(expr) Function type: Aggregation Returns the minimum value of a set of values. "},{"title":"MOD","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#mod","content":"MOD(x, y) Function type: Scalar, numeric Calculates x modulo y, or the remainder of x divided by y. "},{"title":"MV_APPEND","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#mv_append","content":"MV_APPEND(arr1, expr) Function type: Multi-value string Adds the expression to the end of the array. "},{"title":"MV_CONCAT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#mv_concat","content":"MV_CONCAT(arr1, arr2) Function type: Multi-value string Concatenates two arrays. "},{"title":"MV_CONTAINS","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#mv_contains","content":"MV_CONTAINS(arr, expr) Function type: Multi-value string Returns true if the expression is in the array, false otherwise. "},{"title":"MV_FILTER_NONE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#mv_filter_none","content":"MV_FILTER_NONE(expr, arr) Function type: Multi-value string Filters a multi-value expression to include no values contained in the array. "},{"title":"MV_FILTER_ONLY","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#mv_filter_only","content":"MV_FILTER_ONLY(expr, arr) Function type: Multi-value string Filters a multi-value expression to include only values contained in the array. "},{"title":"MV_LENGTH","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#mv_length","content":"MV_LENGTH(arr) Function type: Multi-value string Returns the length of an array expression. 
"},{"title":"MV_OFFSET","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#mv_offset","content":"MV_OFFSET(arr, long) Function type: Multi-value string Returns the array element at the given zero-based index. "},{"title":"MV_OFFSET_OF","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#mv_offset_of","content":"MV_OFFSET_OF(arr, expr) Function type: Multi-value string Returns the zero-based index of the first occurrence of a given expression in the array. "},{"title":"MV_ORDINAL","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#mv_ordinal","content":"MV_ORDINAL(arr, long) Function type: Multi-value string Returns the array element at the given one-based index. "},{"title":"MV_ORDINAL_OF","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#mv_ordinal_of","content":"MV_ORDINAL_OF(arr, expr) Function type: Multi-value string Returns the one-based index of the first occurrence of a given expression. "},{"title":"MV_OVERLAP","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#mv_overlap","content":"MV_OVERLAP(arr1, arr2) Function type: Multi-value string Returns true if the two arrays have any elements in common, false otherwise. "},{"title":"MV_PREPEND","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#mv_prepend","content":"MV_PREPEND(expr, arr) Function type: Multi-value string Adds the expression to the beginning of the array. "},{"title":"MV_SLICE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#mv_slice","content":"MV_SLICE(arr, start, end) Function type: Multi-value string Returns a slice of the array from the zero-based start and end indexes. "},{"title":"MV_TO_STRING","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#mv_to_string","content":"MV_TO_STRING(arr, str) Function type: Multi-value string Joins all elements of the array together by the given delimiter. "},{"title":"NULLIF","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#nullif","content":"NULLIF(value1, value2) Function type: Scalar, other Returns NULL if two values are equal, else returns the first value. "},{"title":"NVL","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#nvl","content":"NVL(e1, e2) Function type: Scalar, other Returns e2 if e1 is null, else returns e1. "},{"title":"PARSE_JSON","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#parse_json","content":"Function type: JSON PARSE_JSON(expr) Parses expr into a COMPLEX<json> object. This operator deserializes JSON values when processing them, translating stringified JSON into a nested structure. If the input is not a VARCHAR or it is invalid JSON, this function will result in an error. "},{"title":"PARSE_LONG","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#parse_long","content":"PARSE_LONG(<CHARACTER>, [<INTEGER>]) Function type: Scalar, string Converts a string into a BIGINT with the given base or into a DECIMAL data type if the base is not specified. 
"},{"title":"POSITION","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#position","content":"POSITION(<CHARACTER> IN <CHARACTER> [FROM <INTEGER>]) Function type: Scalar, string Returns the one-based index position of a substring within an expression, optionally starting from a given one-based index. "},{"title":"POWER","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#power","content":"POWER(expr, power) Function type: Scalar, numeric Calculates a numerical expression raised to the specified power. "},{"title":"RADIANS","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#radians","content":"RADIANS(expr) Function type: Scalar, numeric Converts an angle from degrees to radians. "},{"title":"REGEXP_EXTRACT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#regexp_extract","content":"REGEXP_EXTRACT(<CHARACTER>, <CHARACTER>, [<INTEGER>]) Function type: Scalar, string Applies a regular expression to the string expression and returns the _n_th match. "},{"title":"REGEXP_LIKE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#regexp_like","content":"REGEXP_LIKE(<CHARACTER>, <CHARACTER>) Function type: Scalar, string Returns true or false signifying whether the regular expression finds a match in the string expression. "},{"title":"REGEXP_REPLACE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#regexp_replace","content":"REGEXP_REPLACE(<CHARACTER>, <CHARACTER>, <CHARACTER>) Function type: Scalar, string Replaces all occurrences of a regular expression in a string expression with a replacement string. The replacement string may refer to capture groups using $1, $2, etc. "},{"title":"REPEAT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#repeat","content":"REPEAT(<CHARACTER>, [<INTEGER>]) Function type: Scalar, string Repeats the string expression an integer number of times. "},{"title":"REPLACE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#replace","content":"REPLACE(expr, pattern, replacement) Function type: Scalar, string Replaces a pattern with another string in the given expression. "},{"title":"REVERSE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#reverse","content":"REVERSE(expr) Function type: Scalar, string Reverses the given expression. "},{"title":"RIGHT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#right","content":"RIGHT(expr, [length]) Function type: Scalar, string Returns the rightmost number of characters from an expression. "},{"title":"ROUND","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#round","content":"ROUND(expr[, digits]) Function type: Scalar, numeric Calculates the rounded value for a numerical expression. "},{"title":"RPAD","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#rpad","content":"RPAD(<CHARACTER>, <INTEGER>, [<CHARACTER>]) Function type: Scalar, string Returns the rightmost number of characters from an expression, optionally padded with the given characters. 
"},{"title":"RTRIM","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#rtrim","content":"RTRIM(<CHARACTER>, [<CHARACTER>]) Function type: Scalar, string Trims characters from the trailing end of an expression. "},{"title":"SAFE_DIVIDE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#safe_divide","content":"SAFE_DIVIDE(x, y) Function type: Scalar, numeric Returns x divided by y, guarded on division by 0. "},{"title":"SIN","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#sin","content":"SIN(expr) Function type: Scalar, numeric Calculates the trigonometric sine of an angle expressed in radians. "},{"title":"SQRT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#sqrt","content":"SQRT(expr) Function type: Scalar, numeric Calculates the square root of a numeric expression. "},{"title":"STDDEV","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#stddev","content":"STDDEV(expr) Function type: Aggregation Alias for STDDEV_SAMP. "},{"title":"STDDEV_POP","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#stddev_pop","content":"STDDEV_POP(expr) Function type: Aggregation Calculates the population standard deviation of a set of values. "},{"title":"STDDEV_SAMP","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#stddev_samp","content":"STDDEV_SAMP(expr) Function type: Aggregation Calculates the sample standard deviation of a set of values. "},{"title":"STRING_AGG","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#string_agg","content":"STRING_AGG(expr, separator, [size]) Function type: Aggregation Collects all values of an expression into a single string. "},{"title":"STRING_TO_ARRAY","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#string_to_array","content":"STRING_TO_ARRAY(str1, str2) Function type: Array Splits str1 into an array on the delimiter specified by str2, which is a regular expression. "},{"title":"STRING_FORMAT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#string_format","content":"STRING_FORMAT(pattern[, args...]) Function type: Scalar, string Returns a string formatted in accordance to Java's String.format method. "},{"title":"STRING_TO_MV","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#string_to_mv","content":"STRING_TO_MV(str1, str2) Function type: Multi-value string Converts a string into an array, split by the given delimiter. "},{"title":"STRLEN","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#strlen","content":"STRLEN(expr) Function type: Scalar, string Alias for LENGTH. "},{"title":"STRPOS","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#strpos","content":"STRPOS(<CHARACTER>, <CHARACTER>) Function type: Scalar, string Returns the one-based index position of a substring within an expression. "},{"title":"SUBSTR","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#substr","content":"SUBSTR(<CHARACTER>, <INTEGER>, [<INTEGER>]) Function type: Scalar, string Alias for SUBSTRING. 
"},{"title":"SUBSTRING","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#substring","content":"SUBSTRING(<CHARACTER>, <INTEGER>, [<INTEGER>]) Function type: Scalar, string Returns a substring of the expression starting at a given one-based index. "},{"title":"SUM","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#sum","content":"SUM(expr) Function type: Aggregation Calculates the sum of a set of values. "},{"title":"TAN","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#tan","content":"TAN(expr) Function type: Scalar, numeric Calculates the trigonometric tangent of an angle expressed in radians. "},{"title":"TDIGEST_GENERATE_SKETCH","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#tdigest_generate_sketch","content":"TDIGEST_GENERATE_SKETCH(expr, [compression]) Function type: Aggregation Generates a T-digest sketch from values of the specified expression. "},{"title":"TDIGEST_QUANTILE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#tdigest_quantile","content":"TDIGEST_QUANTILE(expr, quantileFraction, [compression]) Function type: Aggregation Returns the quantile for the specified fraction from a T-Digest sketch constructed from values of the expression. "},{"title":"TEXTCAT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#textcat","content":"TEXTCAT(<CHARACTER>, <CHARACTER>) Function type: Scalar, string Concatenates two string expressions. "},{"title":"THETA_SKETCH_ESTIMATE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#theta_sketch_estimate","content":"THETA_SKETCH_ESTIMATE(expr) Function type: Scalar, sketch Returns the distinct count estimate from a Theta sketch. "},{"title":"THETA_SKETCH_ESTIMATE_WITH_ERROR_BOUNDS","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#theta_sketch_estimate_with_error_bounds","content":"THETA_SKETCH_ESTIMATE_WITH_ERROR_BOUNDS(expr, errorBoundsStdDev) Function type: Scalar, sketch Returns the distinct count estimate and error bounds from a Theta sketch. "},{"title":"THETA_SKETCH_INTERSECT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#theta_sketch_intersect","content":"THETA_SKETCH_INTERSECT([size], expr0, expr1, ...) Function type: Scalar, sketch Returns an intersection of Theta sketches. "},{"title":"THETA_SKETCH_NOT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#theta_sketch_not","content":"THETA_SKETCH_NOT([size], expr0, expr1, ...) Function type: Scalar, sketch Returns a set difference of Theta sketches. "},{"title":"THETA_SKETCH_UNION","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#theta_sketch_union","content":"THETA_SKETCH_UNION([size], expr0, expr1, ...) Function type: Scalar, sketch Returns a union of Theta sketches. "},{"title":"TIME_CEIL","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#time_ceil","content":"TIME_CEIL(<TIMESTAMP>, <period>, [<origin>, [<timezone>]]) Function type: Scalar, date and time Rounds up a timestamp by a given time period, optionally from some reference time or timezone. 
"},{"title":"TIME_EXTRACT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#time_extract","content":"TIME_EXTRACT(<TIMESTAMP>, [<unit>, [<timezone>]]) Function type: Scalar, date and time Extracts the value of some unit of the timestamp and returns the number. "},{"title":"TIME_FLOOR","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#time_floor","content":"TIME_FLOOR(<TIMESTAMP>, <period>, [<origin>, [<timezone>]]) Function type: Scalar, date and time Rounds down a timestamp by a given time period, optionally from some reference time or timezone. "},{"title":"TIME_FORMAT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#time_format","content":"TIME_FORMAT(<TIMESTAMP>, [<pattern>, [<timezone>]]) Function type: Scalar, date and time Formats a timestamp as a string. "},{"title":"TIME_IN_INTERVAL","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#time_in_interval","content":"TIME_IN_INTERVAL(<TIMESTAMP>, <CHARACTER>) Function type: Scalar, date and time Returns whether a timestamp is contained within a particular interval, formatted as a string. "},{"title":"TIME_PARSE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#time_parse","content":"TIME_PARSE(<string_expr>, [<pattern>, [<timezone>]]) Function type: Scalar, date and time Parses a string into a timestamp. "},{"title":"TIME_SHIFT","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#time_shift","content":"TIME_SHIFT(<TIMESTAMP>, <period>, <step>, [<timezone>]) Function type: Scalar, date and time Shifts a timestamp forwards or backwards by a given number of time units. "},{"title":"TIMESTAMP_TO_MILLIS","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#timestamp_to_millis","content":"TIMESTAMP_TO_MILLIS(<TIMESTAMP>) Function type: Scalar, date and time Returns the number of milliseconds since epoch for the given timestamp. "},{"title":"TIMESTAMPADD","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#timestampadd","content":"TIMESTAMPADD(<unit>, <count>, <TIMESTAMP>) Function type: Scalar, date and time Adds a certain amount of time to a given timestamp. "},{"title":"TIMESTAMPDIFF","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#timestampdiff","content":"TIMESTAMPDIFF(<unit>, <TIMESTAMP>, <TIMESTAMP>) Function type: Scalar, date and time Takes the difference between two timestamps, returning the results in the given units. "},{"title":"TO_JSON_STRING","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#to_json_string","content":"Function type: JSON TO_JSON_STRING(expr) Serializes expr into a JSON string. "},{"title":"TRIM","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#trim","content":"TRIM([BOTH|LEADING|TRAILING] [<chars> FROM] expr) Function type: Scalar, string Trims the leading or trailing characters of an expression. "},{"title":"TRUNC","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#trunc","content":"TRUNC(expr[, digits]) Function type: Scalar, numeric Alias for TRUNCATE. 
"},{"title":"TRUNCATE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#truncate","content":"TRUNCATE(expr[, digits]) Function type: Scalar, numeric Truncates a numerical expression to a specific number of decimal digits. "},{"title":"TRY_PARSE_JSON","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#try_parse_json","content":"Function type: JSON TRY_PARSE_JSON(expr) Parses expr into a COMPLEX<json> object. This operator deserializes JSON values when processing them, translating stringified JSON into a nested structure. If the input is not a VARCHAR or it is invalid JSON, this function will result in a NULL value. "},{"title":"UNNEST","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#unnest","content":"UNNEST(source_expression) as table_alias_name(column_alias_name) Unnests a source expression that includes arrays into a target column with an aliased name. For more information, see UNNEST. "},{"title":"UPPER","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#upper","content":"UPPER(expr) Function type: Scalar, string Returns the expression in uppercase. "},{"title":"VAR_POP","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#var_pop","content":"VAR_POP(expr) Function type: Aggregation Calculates the population variance of a set of values. "},{"title":"VAR_SAMP","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#var_samp","content":"VAR_SAMP(expr) Function type: Aggregation Calculates the sample variance of a set of values. "},{"title":"VARIANCE","type":1,"pageTitle":"All Druid SQL functions","url":"/docs/27.0.0/querying/sql-functions#variance","content":"VARIANCE(expr) Function type: Aggregation Alias for VAR_SAMP. "},{"title":"Sorting (topN)","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/topnmetricspec","content":"","keywords":""},{"title":"Numeric TopNMetricSpec","type":1,"pageTitle":"Sorting (topN)","url":"/docs/27.0.0/querying/topnmetricspec#numeric-topnmetricspec","content":"The simplest metric specification is a String value indicating the metric to sort topN results by. They are included in a topN query with: "metric": "<metric_name>" The metric field can also be given as a JSON object. The grammar for dimension values sorted by numeric value is shown below: "metric": { "type": "numeric", "metric": "<metric_name>" } property\tdescription\trequired?type\tthis indicates a numeric sort\tyes metric\tthe actual metric field in which results will be sorted by\tyes "},{"title":"Dimension TopNMetricSpec","type":1,"pageTitle":"Sorting (topN)","url":"/docs/27.0.0/querying/topnmetricspec#dimension-topnmetricspec","content":"This metric specification sorts TopN results by dimension value, using one of the sorting orders described here: Sorting Orders property\ttype\tdescription\trequired?type\tString\tthis indicates a sort a dimension's values\tyes, must be 'dimension' ordering\tString\tSpecifies the sorting order. Can be one of the following values: "lexicographic", "alphanumeric", "numeric", "strlen". See Sorting Orders for more details.\tno, default: "lexicographic" previousStop\tString\tthe starting point of the sort. For example, if a previousStop value is 'b', all values before 'b' are discarded. This field can be used to paginate through all the dimension values.\tno The following metricSpec uses lexicographic sorting. 
"metric": { "type": "dimension", "ordering": "lexicographic", "previousStop": "<previousStop_value>" } Note that in earlier versions of Druid, the functionality provided by the DimensionTopNMetricSpec was handled by two separate spec types, Lexicographic and Alphanumeric (when only two sorting orders were supported). These spec types have been deprecated but are still usable. "},{"title":"Inverted TopNMetricSpec","type":1,"pageTitle":"Sorting (topN)","url":"/docs/27.0.0/querying/topnmetricspec#inverted-topnmetricspec","content":"Sort dimension values in inverted order, i.e inverts the order of the delegate metric spec. It can be used to sort the values in ascending order. "metric": { "type": "inverted", "metric": <delegate_top_n_metric_spec> } property\tdescription\trequired?type\tthis indicates an inverted sort\tyes metric\tthe delegate metric spec.\tyes "},{"title":"TopN queries","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/topnquery","content":"","keywords":""},{"title":"Behavior on multi-value dimensions","type":1,"pageTitle":"TopN queries","url":"/docs/27.0.0/querying/topnquery#behavior-on-multi-value-dimensions","content":"topN queries can group on multi-value dimensions. When grouping on a multi-value dimension, all values from matching rows will be used to generate one group per value. It's possible for a query to return more groups than there are rows. For example, a topN on the dimension tags with filter "t1" AND "t3" would match only row1, and generate a result with three groups: t1, t2, and t3. If you only need to include values that match your filter, you can use a filtered dimensionSpec. This can also improve performance. See Multi-value dimensions for more details. "},{"title":"Aliasing","type":1,"pageTitle":"TopN queries","url":"/docs/27.0.0/querying/topnquery#aliasing","content":"The current TopN algorithm is an approximate algorithm. The top 1000 local results from each segment are returned for merging to determine the global topN. As such, the topN algorithm is approximate in both rank and results. Approximate results ONLY APPLY WHEN THERE ARE MORE THAN 1000 DIM VALUES. A topN over a dimension with fewer than 1000 unique dimension values can be considered accurate in rank and accurate in aggregates. The threshold can be modified from its default 1000 via the server parameter druid.query.topN.minTopNThreshold, which needs a restart of the servers to take effect, or via minTopNThreshold in the query context, which takes effect per query. If you are wanting the top 100 of a high cardinality, uniformly distributed dimension ordered by some low-cardinality, uniformly distributed dimension, you are potentially going to get aggregates back that are missing data. To put it another way, the best use cases for topN are when you can have confidence that the overall results are uniformly in the top. For example, if a particular site ID is in the top 10 for some metric for every hour of every day, then it will probably be accurate in the topN over multiple days. But if a site is barely in the top 1000 for any given hour, but over the whole query granularity is in the top 500 (example: a site which gets highly uniform traffic co-mingling in the dataset with sites with highly periodic data), then a top500 query may not have that particular site at the exact rank, and may not be accurate for that particular site's aggregates. Before continuing in this section, please consider if you really need exact results. Getting exact results is a very resource intensive process. 
For the vast majority of "useful" data results, an approximate topN algorithm supplies plenty of accuracy. Users wishing to get an exact rank and exact aggregates topN over a dimension with greater than 1000 unique values should issue a groupBy query and sort the results themselves. This is very computationally expensive for high-cardinality dimensions. Users who can tolerate approximate rank topN over a dimension with greater than 1000 unique values, but require exact aggregates can issue two queries. One to get the approximate topN dimension values, and another topN with dimension selection filters which only use the topN results of the first. "},{"title":"Example First query","type":1,"pageTitle":"TopN queries","url":"/docs/27.0.0/querying/topnquery#example-first-query","content":"{ "aggregations": [ { "fieldName": "L_QUANTITY_longSum", "name": "L_QUANTITY_", "type": "longSum" } ], "dataSource": "tpch_year", "dimension":"l_orderkey", "granularity": "all", "intervals": [ "1900-01-09T00:00:00.000Z/2992-01-10T00:00:00.000Z" ], "metric": "L_QUANTITY_", "queryType": "topN", "threshold": 2 } "},{"title":"Example second query","type":1,"pageTitle":"TopN queries","url":"/docs/27.0.0/querying/topnquery#example-second-query","content":"{ "aggregations": [ { "fieldName": "L_TAX_doubleSum", "name": "L_TAX_", "type": "doubleSum" }, { "fieldName": "L_DISCOUNT_doubleSum", "name": "L_DISCOUNT_", "type": "doubleSum" }, { "fieldName": "L_EXTENDEDPRICE_doubleSum", "name": "L_EXTENDEDPRICE_", "type": "doubleSum" }, { "fieldName": "L_QUANTITY_longSum", "name": "L_QUANTITY_", "type": "longSum" }, { "name": "count", "type": "count" } ], "dataSource": "tpch_year", "dimension":"l_orderkey", "filter": { "fields": [ { "dimension": "l_orderkey", "type": "selector", "value": "103136" }, { "dimension": "l_orderkey", "type": "selector", "value": "1648672" } ], "type": "or" }, "granularity": "all", "intervals": [ "1900-01-09T00:00:00.000Z/2992-01-10T00:00:00.000Z" ], "metric": "L_QUANTITY_", "queryType": "topN", "threshold": 2 } "},{"title":"Troubleshooting query execution in Druid","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/troubleshooting","content":"","keywords":""},{"title":"Query fails due to internal communication timeout","type":1,"pageTitle":"Troubleshooting query execution in Druid","url":"/docs/27.0.0/querying/troubleshooting#query-fails-due-to-internal-communication-timeout","content":"In Druid's query processing, when the Broker sends a query to the data servers, the data servers process the query and push their intermediate results back to the Broker. Because calls from the Broker to the data servers are synchronous, the Jetty server can time out in data servers in certain cases: The data servers don't push any results to the Broker before the maximum idle time.The data servers started to push data but paused for longer than the maximum idle time such as due to Broker backpressure. When such timeout occurs, the server interrupts the connection between the Broker and data servers which causes the query to fail with a channel disconnection error. For example, { "error": { "error": "Unknown exception", "errorMessage": "Query[6eee73a6-a95f-4bdc-821d-981e99e39242] url[https://localhost:8283/druid/v2/] failed with exception msg [Channel disconnected] (through reference chain: org.apache.druid.query.scan.ScanResultValue[\\"segmentId\\"])", "errorClass": "com.fasterxml.jackson.databind.JsonMappingException", "host": "localhost:8283" } } Channel disconnection occurs for various reasons. 
To verify that the error is due to web server timeout, search for the query ID in the Historical logs. The query ID in the example above is 6eee73a6-a95f-4bdc-821d-981e99e39242. The "host" field in the error message above indicates the IP address of the Historical in question. In the Historical logs, you will see a raised exception indicating Idle timeout expired: 2021-09-14T19:52:27,685 ERROR [qtp475526834-85[scan_[test_large_table]_6eee73a6-a95f-4bdc-821d-981e99e39242]] org.apache.druid.server.QueryResource - Unable to send query response. (java.io.IOException: java.util.concurrent.TimeoutException: Idle timeout expired: 300000/300000 ms) 2021-09-14T19:52:27,685 ERROR [qtp475526834-85] org.apache.druid.server.QueryLifecycle - Exception while processing queryId [6eee73a6-a95f-4bdc-821d-981e99e39242] (java.io.IOException: java.util.concurrent.TimeoutException: Idle timeout expired: 300000/300000 ms) 2021-09-14T19:52:27,686 WARN [qtp475526834-85] org.eclipse.jetty.server.HttpChannel - handleException /druid/v2/ java.io.IOException: java.util.concurrent.TimeoutException: Idle timeout expired: 300000/300000 ms To mitigate query failure due to web server timeout: Increase the max idle time for the web server. Set the max idle time in the druid.server.http.maxIdleTime property in the historical/runtime.properties file. You must restart the Druid cluster for this change to take effect. See Configuration reference for more information on configuring the server. If the timeout occurs because the data servers have not pushed any results to the Broker, consider optimizing data server performance. Significant slowdown in the data servers may be a result of spilling too much data to disk in groupBy v2 queries, large IN filters in the query, or an under scaled cluster. Analyze your Druid query metrics to determine the bottleneck.If the timeout is caused by Broker backpressure, consider optimizing Broker performance. Check whether the connection is fast enough between the Broker and deep storage. "},{"title":"Using query caching","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/using-caching","content":"","keywords":""},{"title":"Enabling query caching on Historicals","type":1,"pageTitle":"Using query caching","url":"/docs/27.0.0/querying/using-caching#enabling-query-caching-on-historicals","content":"Historicals only support segment-level caching, which is enabled by default. To control caching on the Historical, set the useCache and populateCache runtime properties. For example, to set the Historical to both use and populate the segment cache for queries: druid.historical.cache.useCache=true druid.historical.cache.populateCache=true See Historical caching for a description of all available Historical cache configurations. "},{"title":"Enabling query caching on task executor services","type":1,"pageTitle":"Using query caching","url":"/docs/27.0.0/querying/using-caching#enabling-query-caching-on-task-executor-services","content":"Task executor services, the Peon or the Indexer, only support segment-level caching. To control caching on a task executor service, set the useCache and populateCache runtime properties. For example, to set the Peon to both use and populate the segment cache for queries: druid.realtime.cache.useCache=true druid.realtime.cache.populateCache=true See Peon caching and Indexer caching for a description of all available task executor service caching options. 
"},{"title":"Enabling query caching on Brokers","type":1,"pageTitle":"Using query caching","url":"/docs/27.0.0/querying/using-caching#enabling-query-caching-on-brokers","content":"Brokers support both segment-level and whole-query result level caching. To control segment caching on the Broker, set the useCache and populateCacheruntime properties. For example, to set the Broker to use and populate the segment cache for queries: druid.broker.cache.useCache=true druid.broker.cache.populateCache=true To control whole-query caching on the Broker, set the useResultLevelCache and populateResultLevelCache runtime properties. For example, to set the Broker to use and populate the whole-query cache for queries: druid.broker.cache.useResultLevelCache=true druid.broker.cache.populateResultLevelCache=true See Broker caching for a description of all available Broker cache configurations. "},{"title":"Enabling caching in the query context","type":1,"pageTitle":"Using query caching","url":"/docs/27.0.0/querying/using-caching#enabling-caching-in-the-query-context","content":"As long as the service is set to populate the cache, you can set cache options for individual queries in the query context. For example, you can POST a Druid SQL request to the HTTP POST API and include the context as a JSON object: { "query" : "SELECT COUNT(*) FROM data_source WHERE foo = 'bar' AND __time > TIMESTAMP '2020-01-01 00:00:00'", "context" : { "useCache" : "true", "populateCache" : "false" } } In this example the user has set populateCache to false to avoid filling the result cache with results for segments that are over a year old. For more information, see Druid SQL client APIs. "},{"title":"Learn more","type":1,"pageTitle":"Using query caching","url":"/docs/27.0.0/querying/using-caching#learn-more","content":"See the following topics for more information: Query caching for an overview of caching.Query context for more details and usage for the query context.Cache configuration for information about different cache types and additional configuration options. "},{"title":"Virtual columns","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/virtual-columns","content":"","keywords":""},{"title":"Virtual column types","type":1,"pageTitle":"Virtual columns","url":"/docs/27.0.0/querying/virtual-columns#virtual-column-types","content":""},{"title":"Expression virtual column","type":1,"pageTitle":"Virtual columns","url":"/docs/27.0.0/querying/virtual-columns#expression-virtual-column","content":"Expression virtual columns use Druid's native expression system to allow defining query time transforms of inputs from one or more columns. The expression virtual column has the following syntax: { "type": "expression", "name": <name of the virtual column>, "expression": <row expression>, "outputType": <output value type of expression> } property\tdescription\trequired?type\tMust be "expression" to indicate that this is an expression virtual column.\tyes name\tThe name of the virtual column.\tyes expression\tAn expression that takes a row as input and outputs a value for the virtual column.\tyes outputType\tThe expression's output will be coerced to this type. 
Can be LONG, FLOAT, DOUBLE, STRING, ARRAY types, or COMPLEX types.\tno, default is FLOAT "},{"title":"Nested field virtual column","type":1,"pageTitle":"Virtual columns","url":"/docs/27.0.0/querying/virtual-columns#nested-field-virtual-column","content":"The nested field virtual column is an optimized virtual column that can provide direct access into various paths of a COMPLEX<json> column, including using their indexes. This virtual column is used for the SQL operators JSON_VALUE (if processFromRaw is set to false) or JSON_QUERY(if processFromRaw is true), and accepts 'JSONPath' or 'jq' syntax string representations of paths, or a parsed list of "path parts" in order to determine what should be selected from the column. You can define a nested field virtual column with any of the following equivalent syntaxes. The examples all produce the same output value, with each example showing a different way to specify how to access the nested value. The first is using JSONPath syntax path, the second with a jq path, and the third uses pathParts. { "type": "nested-field", "columnName": "shipTo", "outputName": "v0", "expectedType": "STRING", "path": "$.phoneNumbers[1].number" } { "type": "nested-field", "columnName": "shipTo", "outputName": "v1", "expectedType": "STRING", "path": ".phoneNumbers[1].number", "useJqSyntax": true } { "type": "nested-field", "columnName": "shipTo", "outputName": "v2", "expectedType": "STRING", "pathParts": [ { "type": "field", "field": "phoneNumbers" }, { "type": "arrayElement", "index": 1 }, { "type": "field", "field": "number" } ] } property\tdescription\trequired?type\tMust be "nested-field" to indicate that this is a nested field virtual column.\tyes columnName\tThe name of the COMPLEX<json> input column.\tyes outputName\tThe name of the virtual column.\tyes expectedType\tThe native Druid output type of the column, Druid will coerce output to this type if it does not match the underlying data. This can be STRING, LONG, FLOAT, DOUBLE, or COMPLEX<json>. Extracting ARRAY types is not yet supported.\tno, default STRING pathParts\tThe parsed path parts used to locate the nested values. path will be translated into pathParts internally. One of path or pathParts must be set\tno, if path is defined processFromRaw\tIf set to true, the virtual column will process the "raw" JSON data to extract values rather than using an optimized "literal" value selector. This option allows extracting non-literal values (such as nested JSON objects or arrays) as a COMPLEX<json> at the cost of much slower performance.\tno, default false path\t'JSONPath' (or 'jq') syntax path. One of path or pathParts must be set.\tno, if pathParts is defined useJqSyntax\tIf true, parse path using 'jq' syntax instead of 'JSONPath'.\tno, default is false Nested path part Specify pathParts as an array of objects that describe each component of the path to traverse. Each object can take the following properties: property\tdescription\trequired?type\tMust be 'field' or 'arrayElement'. Use field when accessing a specific field in a nested structure. Use arrayElement when accessing a specific integer position of an array (zero based).\tyes field\tThe name of the 'field' in a 'field' type path part\tyes, if type is 'field' index\tThe array element index if type is arrayElement\tyes, if type is 'arrayElement' See Nested columns for more information on ingesting and storing nested data. 
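As an illustrative sketch (not part of the original reference): because the nested-field virtual column backs the SQL JSON_VALUE operator, the same extraction shown above can be written in Druid SQL. The query below assumes a hypothetical datasource named shipments that contains the COMPLEX<json> column shipTo used in the examples:

-- Hypothetical datasource "shipments"; JSON_VALUE plans to the nested-field
-- virtual column described above (processFromRaw = false).
SELECT
  JSON_VALUE(shipTo, '$.phoneNumbers[1].number') AS phone_number,
  COUNT(*) AS cnt
FROM shipments
GROUP BY 1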
"},{"title":"List filtered virtual column","type":1,"pageTitle":"Virtual columns","url":"/docs/27.0.0/querying/virtual-columns#list-filtered-virtual-column","content":"This virtual column provides an alternative way to use'list filtered' dimension spec as a virtual column. It has optimized access to the underlying column value indexes that can provide a small performance improvement in some cases. { "type": "mv-filtered", "name": "filteredDim3", "delegate": "dim3", "values": ["hello", "world"], "isAllowList": true } property\tdescription\trequired?type\tMust be "mv-filtered" to indicate that this is a list filtered virtual column.\tyes name\tThe output name of the virtual column\tyes delegate\tThe name of the multi-value STRING input column to filter\tyes values\tSet of STRING values to allow or deny\tyes isAllowList\tIf true, the output of the virtual column will be limited to the set specified by values, else it will provide all values except those specified.\tNo, default true "},{"title":"SQL scalar functions","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/sql-scalar","content":"","keywords":""},{"title":"Numeric functions","type":1,"pageTitle":"SQL scalar functions","url":"/docs/27.0.0/querying/sql-scalar#numeric-functions","content":"For mathematical operations, Druid SQL will use integer math if all operands involved in an expression are integers. Otherwise, Druid will switch to floating point math. You can force this to happen by casting one of your operands to FLOAT. At runtime, Druid will widen 32-bit floats to 64-bit for most expressions. Function\tNotesPI\tConstant Pi. ABS(expr)\tAbsolute value. CEIL(expr)\tCeiling. EXP(expr)\te to the power of expr. FLOOR(expr)\tFloor. LN(expr)\tLogarithm (base e). LOG10(expr)\tLogarithm (base 10). POWER(expr, power)\texpr raised to the power of power. SQRT(expr)\tSquare root. TRUNCATE(expr, [digits])\tTruncate expr to a specific number of decimal digits. If digits is negative, then this truncates that many places to the left of the decimal point. Digits defaults to zero if not specified. TRUNC(expr, [digits])\tAlias for TRUNCATE. ROUND(expr, [digits])\tROUND(x, y) would return the value of the x rounded to the y decimal places. While x can be an integer or floating-point number, y must be an integer. The type of the return value is specified by that of x. y defaults to 0 if omitted. When y is negative, x is rounded on the left side of the y decimal points. If expr evaluates to either NaN, expr will be converted to 0. If expr is infinity, expr will be converted to the nearest finite double. MOD(x, y)\tModulo (remainder of x divided by y). SIN(expr)\tTrigonometric sine of an angle expr. COS(expr)\tTrigonometric cosine of an angle expr. TAN(expr)\tTrigonometric tangent of an angle expr. COT(expr)\tTrigonometric cotangent of an angle expr. ASIN(expr)\tArc sine of expr. ACOS(expr)\tArc cosine of expr. ATAN(expr)\tArc tangent of expr. ATAN2(y, x)\tAngle theta from the conversion of rectangular coordinates (x, y) to polar * coordinates (r, theta). DEGREES(expr)\tConverts an angle measured in radians to an approximately equivalent angle measured in degrees. RADIANS(expr)\tConverts an angle measured in degrees to an approximately equivalent angle measured in radians. BITWISE_AND(expr1, expr2)\tReturns the result of expr1 & expr2. Double values will be implicitly cast to longs, use BITWISE_CONVERT_DOUBLE_TO_LONG_BITS to perform bitwise operations directly with doubles. BITWISE_COMPLEMENT(expr)\tReturns the result of ~expr. 
Double values will be implicitly cast to longs, use BITWISE_CONVERT_DOUBLE_TO_LONG_BITS to perform bitwise operations directly with doubles. BITWISE_CONVERT_DOUBLE_TO_LONG_BITS(expr)\tConverts the bits of an IEEE 754 floating-point double value to a long. If the input is not a double, it is implicitly cast to a double prior to conversion. BITWISE_CONVERT_LONG_BITS_TO_DOUBLE(expr)\tConverts a long to the IEEE 754 floating-point double specified by the bits stored in the long. If the input is not a long, it is implicitly cast to a long prior to conversion. BITWISE_OR(expr1, expr2)\tReturns the result of expr1 [PIPE] expr2. Double values will be implicitly cast to longs, use BITWISE_CONVERT_DOUBLE_TO_LONG_BITS to perform bitwise operations directly with doubles. BITWISE_SHIFT_LEFT(expr1, expr2)\tReturns the result of expr1 << expr2. Double values will be implicitly cast to longs, use BITWISE_CONVERT_DOUBLE_TO_LONG_BITS to perform bitwise operations directly with doubles. BITWISE_SHIFT_RIGHT(expr1, expr2)\tReturns the result of expr1 >> expr2. Double values will be implicitly cast to longs, use BITWISE_CONVERT_DOUBLE_TO_LONG_BITS to perform bitwise operations directly with doubles. BITWISE_XOR(expr1, expr2)\tReturns the result of expr1 ^ expr2. Double values will be implicitly cast to longs, use BITWISE_CONVERT_DOUBLE_TO_LONG_BITS to perform bitwise operations directly with doubles. DIV(x, y)\tReturns the result of integer division of x by y. HUMAN_READABLE_BINARY_BYTE_FORMAT(value, [precision])\tFormat a number in human-readable IEC format. For example, HUMAN_READABLE_BINARY_BYTE_FORMAT(1048576) returns 1.00 MiB. precision must be in the range of [0, 3] (default: 2). HUMAN_READABLE_DECIMAL_BYTE_FORMAT(value, [precision])\tFormat a number in human-readable SI format. For example, HUMAN_READABLE_DECIMAL_BYTE_FORMAT(1048576) returns 1.04 MB. precision must be in the range of [0, 3] (default: 2). HUMAN_READABLE_DECIMAL_FORMAT(value, [precision])\tFormat a number in human-readable SI format. For example, HUMAN_READABLE_DECIMAL_FORMAT(1048576) returns 1.04 M. precision must be in the range of [0, 3] (default: 2). SAFE_DIVIDE(x, y)\tReturns the division of x by y guarded on division by 0. In case y is 0 it returns 0, or null if druid.generic.useDefaultValueForNull=false "},{"title":"String functions","type":1,"pageTitle":"SQL scalar functions","url":"/docs/27.0.0/querying/sql-scalar#string-functions","content":"String functions accept strings, and return a type appropriate to the function. Function\tNotesCONCAT(expr, expr...)\tConcats a list of expressions. Also see the concatenation operator. TEXTCAT(expr, expr)\tTwo argument version of CONCAT. STRING_FORMAT(pattern, [args...])\tReturns a string formatted in the manner of Java's String.format. LENGTH(expr)\tLength of expr in UTF-16 code units. CHAR_LENGTH(expr)\tAlias for LENGTH. CHARACTER_LENGTH(expr)\tAlias for LENGTH. STRLEN(expr)\tAlias for LENGTH. LOOKUP(expr, lookupName)\tLook up expr in a registered query-time lookup table. Note that lookups can also be queried directly using the lookup schema. LOWER(expr)\tReturns expr in all lowercase. UPPER(expr)\tReturns expr in all uppercase. PARSE_LONG(string, [radix])\tParses a string into a long (BIGINT) with the given radix, or 10 (decimal) if a radix is not provided. POSITION(needle IN haystack [FROM fromIndex])\tReturns the index of needle within haystack, with indexes starting from 1. The search will begin at fromIndex, or 1 if fromIndex is not specified. 
If needle is not found, returns 0. REGEXP_EXTRACT(expr, pattern, [index])\tApply regular expression pattern to expr and extract a capture group, or NULL if there is no match. If index is unspecified or zero, returns the first substring that matched the pattern. The pattern may match anywhere inside expr; if you want to match the entire string instead, use the ^ and $ markers at the start and end of your pattern. Note: when druid.generic.useDefaultValueForNull = true, it is not possible to differentiate an empty-string match from a non-match (both will return NULL). REGEXP_LIKE(expr, pattern)\tReturns whether expr matches regular expression pattern. The pattern may match anywhere inside expr; if you want to match the entire string instead, use the ^ and $ markers at the start and end of your pattern. Similar to LIKE, but uses regexps instead of LIKE patterns. Especially useful in WHERE clauses. REGEXP_REPLACE(expr, pattern, replacement)\tReplaces all occurrences of regular expression pattern within expr with replacement. The replacement string may refer to capture groups using $1, $2, etc. The pattern may match anywhere inside expr; if you want to match the entire string instead, use the ^ and $ markers at the start and end of your pattern. CONTAINS_STRING(expr, str)\tReturns true if the str is a substring of expr. ICONTAINS_STRING(expr, str)\tReturns true if the str is a substring of expr. The match is case-insensitive. REPLACE(expr, pattern, replacement)\tReplaces pattern with replacement in expr, and returns the result. STRPOS(haystack, needle)\tReturns the index of needle within haystack, with indexes starting from 1. If needle is not found, returns 0. SUBSTRING(expr, index, [length])\tReturns a substring of expr starting at index, with a max length, both measured in UTF-16 code units. RIGHT(expr, [length])\tReturns the rightmost length characters from expr. LEFT(expr, [length])\tReturns the leftmost length characters from expr. SUBSTR(expr, index, [length])\tAlias for SUBSTRING. TRIM([BOTH |LEADING| TRAILING] [chars FROM] expr)\tReturns expr with characters removed from the leading, trailing, or both ends of "expr" if they are in "chars". If "chars" is not provided, it defaults to " " (a space). If the directional argument is not provided, it defaults to "BOTH". BTRIM(expr, [chars])\tAlternate form of TRIM(BOTH chars FROM expr). LTRIM(expr, [chars])\tAlternate form of TRIM(LEADING chars FROM expr). RTRIM(expr, [chars])\tAlternate form of TRIM(TRAILING chars FROM expr). REVERSE(expr)\tReverses expr. REPEAT(expr, [N])\tRepeats expr N times LPAD(expr, length, [chars])\tReturns a string of length from expr left-padded with chars. If length is shorter than the length of expr, the result is expr which is truncated to length. The result will be null if either expr or chars is null. If chars is an empty string, no padding is added, however expr may be trimmed if necessary. RPAD(expr, length, [chars])\tReturns a string of length from expr right-padded with chars. If length is shorter than the length of expr, the result is expr which is truncated to length. The result will be null if either expr or chars is null. If chars is an empty string, no padding is added, however expr may be trimmed if necessary. 
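As an illustrative sketch (not part of the original reference), the following Druid SQL combines several of the string functions above. It assumes the wikipedia datasource from the quickstart, which has channel and page string columns:

-- Hypothetical query against the quickstart's wikipedia table.
SELECT
  UPPER(channel)                              AS channel_upper,   -- UPPER
  SUBSTRING(page, 1, 10)                      AS page_prefix,     -- SUBSTRING
  REGEXP_EXTRACT(channel, '#(.*)', 1)         AS channel_name,    -- capture group 1
  LPAD(CAST(LENGTH(page) AS VARCHAR), 5, '0') AS padded_length    -- LENGTH + LPAD
FROM wikipedia
LIMIT 5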
"},{"title":"Date and time functions","type":1,"pageTitle":"SQL scalar functions","url":"/docs/27.0.0/querying/sql-scalar#date-and-time-functions","content":"Time functions can be used with: Druid's primary timestamp column, __time;Numeric values representing milliseconds since the epoch, through the MILLIS_TO_TIMESTAMP function; andString timestamps, through the TIME_PARSE function. By default, time operations use the UTC time zone. You can change the time zone by setting the connection context parameter sqlTimeZone to the name of another time zone, like America/Los_Angeles, or to an offset like-08:00. If you need to mix multiple time zones in the same query, or if you need to use a time zone other than the connection time zone, some functions also accept time zones as parameters. These parameters always take precedence over the connection time zone. Literal timestamps in the connection time zone can be written using TIMESTAMP '2000-01-01 00:00:00' syntax. The simplest way to write literal timestamps in other time zones is to use TIME_PARSE, likeTIME_PARSE('2000-02-01 00:00:00', NULL, 'America/Los_Angeles'). The best ways to filter based on time are by using ISO8601 intervals, likeTIME_IN_INTERVAL(__time, '2000-01-01/2000-02-01'), or by using literal timestamps with the >= and < operators, like__time >= TIMESTAMP '2000-01-01 00:00:00' AND __time < TIMESTAMP '2000-02-01 00:00:00'. Druid supports the standard SQL BETWEEN operator, but we recommend avoiding it for time filters. BETWEEN is inclusive of its upper bound, which makes it awkward to write time filters correctly. For example, the equivalent ofTIME_IN_INTERVAL(__time, '2000-01-01/2000-02-01') is__time BETWEEN TIMESTAMP '2000-01-01 00:00:00' AND TIMESTAMP '2000-01-31 23:59:59.999'. Druid processes timestamps internally as longs (64-bit integers) representing milliseconds since the epoch. Therefore, time functions perform best when used with the primary timestamp column, or with timestamps stored in long columns as milliseconds and accessed with MILLIS_TO_TIMESTAMP. Other timestamp representations, include string timestamps and POSIX timestamps (seconds since the epoch) require query-time conversion to Druid's internal form, which adds additional overhead. Function\tNotesCURRENT_TIMESTAMP\tCurrent timestamp in the connection's time zone. CURRENT_DATE\tCurrent date in the connection's time zone. DATE_TRUNC(unit, timestamp_expr)\tRounds down a timestamp, returning it as a new timestamp. Unit can be 'milliseconds', 'second', 'minute', 'hour', 'day', 'week', 'month', 'quarter', 'year', 'decade', 'century', or 'millennium'. TIME_CEIL(timestamp_expr, period, [origin, [timezone]])\tRounds up a timestamp, returning it as a new timestamp. Period can be any ISO8601 period, like P3M (quarters) or PT12H (half-days). Specify origin as a timestamp to set the reference time for rounding. For example, TIME_CEIL(__time, 'PT1H', TIMESTAMP '2016-06-27 00:30:00') measures an hourly period from 00:30-01:30 instead of 00:00-01:00. See Period granularities for details on the default starting boundaries. The time zone, if provided, should be a time zone name like "America/Los_Angeles" or offset like "-08:00". This function is similar to CEIL but is more flexible. TIME_FLOOR(timestamp_expr, period, [origin, [timezone]])\tRounds down a timestamp, returning it as a new timestamp. Period can be any ISO8601 period, like P3M (quarters) or PT12H (half-days). Specify origin as a timestamp to set the reference time for rounding. 
For example, TIME_FLOOR(__time, 'PT1H', TIMESTAMP '2016-06-27 00:30:00') measures an hourly period from 00:30-01:30 instead of 00:00-01:00. See Period granularities for details on the default starting boundaries. The time zone, if provided, should be a time zone name like "America/Los_Angeles" or offset like "-08:00". This function is similar to FLOOR but is more flexible. TIME_SHIFT(timestamp_expr, period, step, [timezone])\tShifts a timestamp by a period (step times), returning it as a new timestamp. Period can be any ISO8601 period. Step may be negative. The time zone, if provided, should be a time zone name like "America/Los_Angeles" or offset like "-08:00". TIME_EXTRACT(timestamp_expr, [unit, [timezone]])\tExtracts a time part from expr, returning it as a number. Unit can be EPOCH, SECOND, MINUTE, HOUR, DAY (day of month), DOW (day of week), DOY (day of year), WEEK (week of week year), MONTH (1 through 12), QUARTER (1 through 4), or YEAR. The time zone, if provided, should be a time zone name like "America/Los_Angeles" or offset like "-08:00". This function is similar to EXTRACT but is more flexible. Unit and time zone must be literals, and must be provided quoted, like TIME_EXTRACT(__time, 'HOUR') or TIME_EXTRACT(__time, 'HOUR', 'America/Los_Angeles'). TIME_PARSE(string_expr, [pattern, [timezone]])\tParses a string into a timestamp using a given Joda DateTimeFormat pattern, or ISO8601 (e.g. 2000-01-02T03:04:05Z) if the pattern is not provided. The time zone, if provided, should be a time zone name like "America/Los_Angeles" or offset like "-08:00", and will be used as the time zone for strings that do not include a time zone offset. Pattern and time zone must be literals. Strings that cannot be parsed as timestamps will be returned as NULL. TIME_FORMAT(timestamp_expr, [pattern, [timezone]])\tFormats a timestamp as a string with a given Joda DateTimeFormat pattern, or ISO8601 (e.g. 2000-01-02T03:04:05Z) if the pattern is not provided. The time zone, if provided, should be a time zone name like "America/Los_Angeles" or offset like "-08:00". Pattern and time zone must be literals. TIME_IN_INTERVAL(timestamp_expr, interval)\tReturns whether a timestamp is contained within a particular interval. The interval must be a literal string containing any ISO8601 interval, such as '2001-01-01/P1D' or '2001-01-01T01:00:00/2001-01-02T01:00:00'. The start instant of the interval is inclusive and the end instant is exclusive. MILLIS_TO_TIMESTAMP(millis_expr)\tConverts a number of milliseconds since the epoch (1970-01-01 00:00:00 UTC) into a timestamp. TIMESTAMP_TO_MILLIS(timestamp_expr)\tConverts a timestamp into a number of milliseconds since the epoch. EXTRACT(unit FROM timestamp_expr)\tExtracts a time part from expr, returning it as a number. Unit can be EPOCH, MICROSECOND, MILLISECOND, SECOND, MINUTE, HOUR, DAY (day of month), DOW (day of week), ISODOW (ISO day of week), DOY (day of year), WEEK (week of year), MONTH, QUARTER, YEAR, ISOYEAR, DECADE, CENTURY or MILLENNIUM. Units must be provided unquoted, like EXTRACT(HOUR FROM __time). FLOOR(timestamp_expr TO unit)\tRounds down a timestamp, returning it as a new timestamp. Unit can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, or YEAR. CEIL(timestamp_expr TO unit)\tRounds up a timestamp, returning it as a new timestamp. Unit can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, or YEAR. TIMESTAMPADD(unit, count, timestamp)\tEquivalent to timestamp + count * INTERVAL '1' UNIT. 
TIMESTAMPDIFF(unit, timestamp1, timestamp2)\tReturns the (signed) number of unit between timestamp1 and timestamp2. Unit can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, or YEAR. "},{"title":"Reduction functions","type":1,"pageTitle":"SQL scalar functions","url":"/docs/27.0.0/querying/sql-scalar#reduction-functions","content":"Reduction functions operate on zero or more expressions and return a single expression. If no expressions are passed as arguments, then the result is NULL. The expressions must all be convertible to a common data type, which will be the type of the result: If all argument are NULL, the result is NULL. Otherwise, NULL arguments are ignored.If the arguments comprise a mix of numbers and strings, the arguments are interpreted as strings.If all arguments are integer numbers, the arguments are interpreted as longs.If all arguments are numbers and at least one argument is a double, the arguments are interpreted as doubles. Function\tNotesGREATEST([expr1, ...])\tEvaluates zero or more expressions and returns the maximum value based on comparisons as described above. LEAST([expr1, ...])\tEvaluates zero or more expressions and returns the minimum value based on comparisons as described above. "},{"title":"IP address functions","type":1,"pageTitle":"SQL scalar functions","url":"/docs/27.0.0/querying/sql-scalar#ip-address-functions","content":"For the IPv4 address functions, the address argument can either be an IPv4 dotted-decimal string (e.g., "192.168.0.1") or an IP address represented as an integer (e.g., 3232235521). The subnetargument should be a string formatted as an IPv4 address subnet in CIDR notation (e.g., "192.168.0.0/16"). Function\tNotesIPV4_MATCH(address, subnet)\tReturns true if the address belongs to the subnet literal, else false. If address is not a valid IPv4 address, then false is returned. This function is more efficient if address is an integer instead of a string. IPV4_PARSE(address)\tParses address into an IPv4 address stored as an integer . If address is an integer that is a valid IPv4 address, then it is passed through. Returns null if address cannot be represented as an IPv4 address. IPV4_STRINGIFY(address)\tConverts address into an IPv4 address dotted-decimal string. If address is a string that is a valid IPv4 address, then it is passed through. Returns null if address cannot be represented as an IPv4 address. "},{"title":"Sketch functions","type":1,"pageTitle":"SQL scalar functions","url":"/docs/27.0.0/querying/sql-scalar#sketch-functions","content":"These functions operate on expressions or columns that return sketch objects. To create sketch objects, see the DataSketches aggregators. "},{"title":"HLL sketch functions","type":1,"pageTitle":"SQL scalar functions","url":"/docs/27.0.0/querying/sql-scalar#hll-sketch-functions","content":"The following functions operate on DataSketches HLL sketches. The DataSketches extension must be loaded to use the following functions. Function\tNotesHLL_SKETCH_ESTIMATE(expr, [round])\tReturns the distinct count estimate from an HLL sketch. expr must return an HLL sketch. The optional round boolean parameter will round the estimate if set to true, with a default of false. HLL_SKETCH_ESTIMATE_WITH_ERROR_BOUNDS(expr, [numStdDev])\tReturns the distinct count estimate and error bounds from an HLL sketch. expr must return an HLL sketch. An optional numStdDev argument can be provided. 
HLL_SKETCH_UNION([lgK, tgtHllType], expr0, expr1, ...)\tReturns a union of HLL sketches, where each input expression must return an HLL sketch. The lgK and tgtHllType can be optionally specified as the first parameter; if provided, both optional parameters must be specified. HLL_SKETCH_TO_STRING(expr)\tReturns a human-readable string representation of an HLL sketch for debugging. expr must return an HLL sketch. "},{"title":"Theta sketch functions","type":1,"pageTitle":"SQL scalar functions","url":"/docs/27.0.0/querying/sql-scalar#theta-sketch-functions","content":"The following functions operate on theta sketches. The DataSketches extension must be loaded to use the following functions. Function\tNotesTHETA_SKETCH_ESTIMATE(expr)\tReturns the distinct count estimate from a theta sketch. expr must return a theta sketch. THETA_SKETCH_ESTIMATE_WITH_ERROR_BOUNDS(expr, errorBoundsStdDev)\tReturns the distinct count estimate and error bounds from a theta sketch. expr must return a theta sketch. THETA_SKETCH_UNION([size], expr0, expr1, ...)\tReturns a union of theta sketches, where each input expression must return a theta sketch. The size can be optionally specified as the first parameter. THETA_SKETCH_INTERSECT([size], expr0, expr1, ...)\tReturns an intersection of theta sketches, where each input expression must return a theta sketch. The size can be optionally specified as the first parameter. THETA_SKETCH_NOT([size], expr0, expr1, ...)\tReturns a set difference of theta sketches, where each input expression must return a theta sketch. The size can be optionally specified as the first parameter. "},{"title":"Quantiles sketch functions","type":1,"pageTitle":"SQL scalar functions","url":"/docs/27.0.0/querying/sql-scalar#quantiles-sketch-functions","content":"The following functions operate on quantiles sketches. The DataSketches extension must be loaded to use the following functions. Function\tNotesDS_GET_QUANTILE(expr, fraction)\tReturns the quantile estimate corresponding to fraction from a quantiles sketch. expr must return a quantiles sketch. DS_GET_QUANTILES(expr, fraction0, fraction1, ...)\tReturns a string representing an array of quantile estimates corresponding to a list of fractions from a quantiles sketch. expr must return a quantiles sketch. DS_HISTOGRAM(expr, splitPoint0, splitPoint1, ...)\tReturns a string representing an approximation to the histogram given a list of split points that define the histogram bins from a quantiles sketch. expr must return a quantiles sketch. DS_CDF(expr, splitPoint0, splitPoint1, ...)\tReturns a string representing approximation to the Cumulative Distribution Function given a list of split points that define the edges of the bins from a quantiles sketch. expr must return a quantiles sketch. DS_RANK(expr, value)\tReturns an approximation to the rank of a given value that is the fraction of the distribution less than that value from a quantiles sketch. expr must return a quantiles sketch. DS_QUANTILE_SUMMARY(expr)\tReturns a string summary of a quantiles sketch, useful for debugging. expr must return a quantiles sketch. "},{"title":"Tuple sketch functions","type":1,"pageTitle":"SQL scalar functions","url":"/docs/27.0.0/querying/sql-scalar#tuple-sketch-functions","content":"The following functions operate on tuple sketches. The DataSketches extension must be loaded to use the following functions. 
Function\tNotes\tDefaultDS_TUPLE_DOUBLES_METRICS_SUM_ESTIMATE(expr)\tComputes approximate sums of the values contained within a Tuple sketch column which contains an array of double values as its Summary Object. DS_TUPLE_DOUBLES_INTERSECT(expr, ..., [nominalEntries])\tReturns an intersection of tuple sketches, where each input expression must return a tuple sketch which contains an array of double values as its Summary Object. The values contained in the Summary Objects are summed when combined. If the last value of the array is a numeric literal, Druid assumes that the value is an override parameter for nominal entries. DS_TUPLE_DOUBLES_NOT(expr, ..., [nominalEntries])\tReturns a set difference of tuple sketches, where each input expression must return a tuple sketch which contains an array of double values as its Summary Object. The values contained in the Summary Object are preserved as is. If the last value of the array is a numeric literal, Druid assumes that the value is an override parameter for nominal entries. DS_TUPLE_DOUBLES_UNION(expr, ..., [nominalEntries])\tReturns a union of tuple sketches, where each input expression must return a tuple sketch which contains an array of double values as its Summary Object. The values contained in the Summary Objects are summed when combined. If the last value of the array is a numeric literal, Druid assumes that the value is an override parameter for nominal entries.\t "},{"title":"Other scalar functions","type":1,"pageTitle":"SQL scalar functions","url":"/docs/27.0.0/querying/sql-scalar#other-scalar-functions","content":"Function\tNotesCAST(value AS TYPE)\tCast value to another type. See Data types for details about how Druid SQL handles CAST. CASE expr WHEN value1 THEN result1 \\[ WHEN value2 THEN result2 ... \\] \\[ ELSE resultN \\] END\tSimple CASE. CASE WHEN boolean_expr1 THEN result1 \\[ WHEN boolean_expr2 THEN result2 ... \\] \\[ ELSE resultN \\] END\tSearched CASE. NULLIF(value1, value2)\tReturns NULL if value1 and value2 match, else returns value1. COALESCE(value1, value2, ...)\tReturns the first value that is neither NULL nor empty string. NVL(value1, value2)\tReturns value1 if value1 is not null, otherwise value2. BLOOM_FILTER_TEST(expr, serialized-filter)\tReturns true if the value of expr is contained in the Base64-serialized Bloom filter. See the Bloom filter extension documentation for additional details. See the BLOOM_FILTER function for computing Bloom filters. "},{"title":"Quickstart (local)","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/","content":"","keywords":""},{"title":"Prerequisites","type":1,"pageTitle":"Quickstart (local)","url":"/docs/27.0.0/tutorials/#prerequisites","content":"You can follow these steps on a relatively modest machine, such as a workstation or virtual server with 6 GiB of RAM. The software requirements for the installation machine are: Linux, Mac OS X, or other Unix-like OS. (Windows is not supported)Java 8u92+, 11, or 17Python 3 (preferred) or Python 2Perl 5 Java must be available. Either it is on your path, or set one of the JAVA_HOME or DRUID_JAVA_HOME environment variables. You can run apache-druid-27.0.0/bin/verify-java to verify Java requirements for your environment. Before installing a production Druid instance, be sure to review the security overview. In general, avoid running Druid as root user. Consider creating a dedicated user account for running Druid. 
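Before installing, a quick way to confirm the Java requirement is a short shell check (a sketch; the JDK path shown is machine-specific and only an assumption):

# Point Druid at a supported JDK if it is not already on your PATH.
export JAVA_HOME=/path/to/jdk17          # or set DRUID_JAVA_HOME instead
java -version                            # expect 8u92+, 11, or 17
apache-druid-27.0.0/bin/verify-java      # the bundled check mentioned above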
"},{"title":"Install Druid","type":1,"pageTitle":"Quickstart (local)","url":"/docs/27.0.0/tutorials/#install-druid","content":"Download the 27.0.0 release from Apache Druid. In your terminal, extract the file and change directories to the distribution directory: tar -xzf apache-druid-27.0.0-bin.tar.gz cd apache-druid-27.0.0 The distribution directory contains LICENSE and NOTICE files and subdirectories for executable files, configuration files, sample data and more. "},{"title":"Start up Druid services","type":1,"pageTitle":"Quickstart (local)","url":"/docs/27.0.0/tutorials/#start-up-druid-services","content":"Start up Druid services using the automatic single-machine configuration. This configuration includes default settings that are appropriate for this tutorial, such as loading the druid-multi-stage-query extension by default so that you can use the MSQ task engine. You can view the default settings in the configuration files located in conf/druid/auto. From the apache-druid-27.0.0 package root, run the following command: ./bin/start-druid This launches instances of ZooKeeper and the Druid services. For example: $ ./bin/start-druid [Tue Nov 29 16:31:06 2022] Starting Apache Druid. [Tue Nov 29 16:31:06 2022] Open http://localhost:8888/ in your browser to access the web console. [Tue Nov 29 16:31:06 2022] Or, if you have enabled TLS, use https on port 9088. [Tue Nov 29 16:31:06 2022] Starting services with log directory [/apache-druid-27.0.0/log]. [Tue Nov 29 16:31:06 2022] Running command[zk]: bin/run-zk conf [Tue Nov 29 16:31:06 2022] Running command[broker]: bin/run-druid broker /apache-druid-27.0.0/conf/druid/single-server/quickstart '-Xms1187m -Xmx1187m -XX:MaxDirectMemorySize=791m' [Tue Nov 29 16:31:06 2022] Running command[router]: bin/run-druid router /apache-druid-27.0.0/conf/druid/single-server/quickstart '-Xms128m -Xmx128m' [Tue Nov 29 16:31:06 2022] Running command[coordinator-overlord]: bin/run-druid coordinator-overlord /apache-druid-27.0.0/conf/druid/single-server/quickstart '-Xms1290m -Xmx1290m' [Tue Nov 29 16:31:06 2022] Running command[historical]: bin/run-druid historical /apache-druid-27.0.0/conf/druid/single-server/quickstart '-Xms1376m -Xmx1376m -XX:MaxDirectMemorySize=2064m' [Tue Nov 29 16:31:06 2022] Running command[middleManager]: bin/run-druid middleManager /apache-druid-27.0.0/conf/druid/single-server/quickstart '-Xms64m -Xmx64m' '-Ddruid.worker.capacity=2 -Ddruid.indexer.runner.javaOptsArray=["-server","-Duser.timezone=UTC","-Dfile.encoding=UTF-8","-XX:+ExitOnOutOfMemoryError","-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager","-Xms256m","-Xmx256m","-XX:MaxDirectMemorySize=256m"]' Druid may use up to 80% of the total available system memory. To explicitly set the total memory available to Druid, pass a value for the memory parameter. For example, ./bin/start-druid -m 16g. Druid stores all persistent state data, such as the cluster metadata store and data segments, in apache-druid-27.0.0/var. Each service writes to a log file under apache-druid-27.0.0/log. At any time, you can revert Druid to its original, post-installation state by deleting the entire var directory. You may want to do this, for example, between Druid tutorials or after experimentation, to start with a fresh instance. To stop Druid at any time, use CTRL+C in the terminal. This exits the bin/start-druid script and terminates all Druid processes. 
"},{"title":"Open the web console","type":1,"pageTitle":"Quickstart (local)","url":"/docs/27.0.0/tutorials/#open-the-web-console","content":"After starting the Druid services, open the web console at http://localhost:8888. It may take a few seconds for all Druid services to finish starting, including the Druid router, which serves the console. If you attempt to open the web console before startup is complete, you may see errors in the browser. Wait a few moments and try again. In this quickstart, you use the the web console to perform ingestion. The MSQ task engine specifically uses the Query view to edit and run SQL queries. For a complete walkthrough of the Query view as it relates to the multi-stage query architecture and the MSQ task engine, see UI walkthrough. "},{"title":"Load data","type":1,"pageTitle":"Quickstart (local)","url":"/docs/27.0.0/tutorials/#load-data","content":"The Druid distribution bundles the wikiticker-2015-09-12-sampled.json.gz sample dataset that you can use for testing. The sample dataset is located in the quickstart/tutorial/ folder, accessible from the Druid root directory, and represents Wikipedia page edits for a given day. Follow these steps to load the sample Wikipedia dataset: In the Query view, click Connect external data. Select the Local disk tile and enter the following values: Base directory: quickstart/tutorial/ File filter: wikiticker-2015-09-12-sampled.json.gz Entering the base directory and wildcard file filter separately, as afforded by the UI, allows you to specify multiple files for ingestion at once. Click Connect data. On the Parse page, you can examine the raw data and perform the following optional actions before loading data into Druid: Expand a row to see the corresponding source data.Customize how the data is handled by selecting from the Input format options.Adjust the primary timestamp column for the data. Druid requires data to have a primary timestamp column (internally stored in a column called __time). If your dataset doesn't have a timestamp, Druid uses the default value of 1970-01-01 00:00:00. Click Done. You're returned to the Query view that displays the newly generated query. The query inserts the sample data into the table named wikiticker-2015-09-12-sampled. 
Show the query REPLACE INTO "wikiticker-2015-09-12-sampled" OVERWRITE ALL WITH input_data AS (SELECT * FROM TABLE( EXTERN( '{"type":"local","baseDir":"quickstart/tutorial/","filter":"wikiticker-2015-09-12-sampled.json.gz"}', '{"type":"json"}', '[{"name":"time","type":"string"},{"name":"channel","type":"string"},{"name":"cityName","type":"string"},{"name":"comment","type":"string"},{"name":"countryIsoCode","type":"string"},{"name":"countryName","type":"string"},{"name":"isAnonymous","type":"string"},{"name":"isMinor","type":"string"},{"name":"isNew","type":"string"},{"name":"isRobot","type":"string"},{"name":"isUnpatrolled","type":"string"},{"name":"metroCode","type":"long"},{"name":"namespace","type":"string"},{"name":"page","type":"string"},{"name":"regionIsoCode","type":"string"},{"name":"regionName","type":"string"},{"name":"user","type":"string"},{"name":"delta","type":"long"},{"name":"added","type":"long"},{"name":"deleted","type":"long"}]' ) )) SELECT TIME_PARSE("time") AS __time, channel, cityName, comment, countryIsoCode, countryName, isAnonymous, isMinor, isNew, isRobot, isUnpatrolled, metroCode, namespace, page, regionIsoCode, regionName, user, delta, added, deleted FROM input_data PARTITIONED BY DAY Optionally, click Preview to see the general shape of the data before you ingest it. Edit the first line of the query and change the default destination datasource name from wikiticker-2015-09-12-sampled to wikipedia. Click Run to execute the query. The task may take a minute or two to complete. When done, the task displays its duration and the number of rows inserted into the table. The view is set to automatically refresh, so you don't need to refresh the browser to see the status change. A successful task means that Druid data servers have picked up one or more segments. "},{"title":"Query data","type":1,"pageTitle":"Quickstart (local)","url":"/docs/27.0.0/tutorials/#query-data","content":"Once the ingestion job is complete, you can query the data. In the Query view, run the following query to produce a list of top channels: SELECT channel, COUNT(*) FROM "wikipedia" GROUP BY channel ORDER BY COUNT(*) DESC Congratulations! You've gone from downloading Druid to querying data with the MSQ task engine in just one quickstart. "},{"title":"Next steps","type":1,"pageTitle":"Quickstart (local)","url":"/docs/27.0.0/tutorials/#next-steps","content":"See the following topics for more information: Druid SQL overview or the Query tutorial to learn about how to query the data you just ingested.Ingestion overview to explore options for ingesting more data.Tutorial: Load files using SQL to learn how to generate a SQL query that loads external data into a Druid datasource.Tutorial: Load data with native batch ingestion to load and query data with Druid's native batch ingestion feature.Tutorial: Load stream data from Apache Kafka to load streaming data from a Kafka topic.Extensions for details on Druid extensions. Remember that after stopping Druid services, you can start clean next time by deleting the var directory from the Druid root directory and running the bin/start-druid script again. You may want to do this before using other data ingestion tutorials, since they use the same Wikipedia datasource. 
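Before moving on, here is one more illustrative query sketch (not part of the original tutorial) against the wikipedia table you just created, using columns from the ingestion query above; the channel value is an assumption based on the sample data:

SELECT
  TIME_FLOOR(__time, 'PT1H') AS "hour",     -- bucket edits by hour
  COUNT(*)                   AS edits,
  SUM(added)                 AS chars_added
FROM wikipedia
WHERE channel = '#en.wikipedia'
GROUP BY 1
ORDER BY 1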
"},{"title":"Timeseries queries","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/timeseriesquery","content":"","keywords":""},{"title":"Grand totals","type":1,"pageTitle":"Timeseries queries","url":"/docs/27.0.0/querying/timeseriesquery#grand-totals","content":"Druid can include an extra "grand totals" row as the last row of a timeseries result set. To enable this, add"grandTotal" : true to your query context. For example: { "queryType": "timeseries", "dataSource": "sample_datasource", "intervals": [ "2012-01-01T00:00:00.000/2012-01-03T00:00:00.000" ], "granularity": "day", "aggregations": [ { "type": "longSum", "name": "sample_name1", "fieldName": "sample_fieldName1" }, { "type": "doubleSum", "name": "sample_name2", "fieldName": "sample_fieldName2" } ], "context": { "grandTotal": true } } The grand totals row will appear as the last row in the result array, and will have no timestamp. It will be the last row even if the query is run in "descending" mode. Post-aggregations in the grand totals row will be computed based upon the grand total aggregations. "},{"title":"Zero-filling","type":1,"pageTitle":"Timeseries queries","url":"/docs/27.0.0/querying/timeseriesquery#zero-filling","content":"Timeseries queries normally fill empty interior time buckets with zeroes. For example, if you issue a "day" granularity timeseries query for the interval 2012-01-01/2012-01-04, and no data exists for 2012-01-02, you will receive: [ { "timestamp": "2012-01-01T00:00:00.000Z", "result": { "sample_name1": <some_value> } }, { "timestamp": "2012-01-02T00:00:00.000Z", "result": { "sample_name1": 0 } }, { "timestamp": "2012-01-03T00:00:00.000Z", "result": { "sample_name1": <some_value> } } ] Time buckets that lie completely outside the data interval are not zero-filled. You can disable all zero-filling with the context flag "skipEmptyBuckets". In this mode, the data point for 2012-01-02 would be omitted from the results. A query with this context flag set would look like: { "queryType": "timeseries", "dataSource": "sample_datasource", "granularity": "day", "aggregations": [ { "type": "longSum", "name": "sample_name1", "fieldName": "sample_fieldName1" } ], "intervals": [ "2012-01-01T00:00:00.000/2012-01-04T00:00:00.000" ], "context" : { "skipEmptyBuckets": "true" } } "},{"title":"Clustered deployment","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/cluster","content":"","keywords":""},{"title":"Select hardware","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#select-hardware","content":""},{"title":"Fresh Deployment","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#fresh-deployment","content":"If you do not have an existing Druid cluster, and wish to start running Druid in a clustered deployment, this guide provides an example clustered deployment with pre-made configurations. Master server The Coordinator and Overlord processes are responsible for handling the metadata and coordination needs of your cluster. They can be colocated together on the same server. In this example, we will be deploying the equivalent of one AWS m5.2xlarge instance. This hardware offers: 8 vCPUs32 GiB RAM Example Master server configurations that have been sized for this hardware can be found under conf/druid/cluster/master. Data server Historicals and MiddleManagers can be colocated on the same server to handle the actual data in your cluster. These servers benefit greatly from CPU, RAM, and SSDs. 
In this example, we will be deploying the equivalent of two AWS i3.4xlarge instances. This hardware offers: 16 vCPUs122 GiB RAM2 * 1.9TB SSD storage Example Data server configurations that have been sized for this hardware can be found under conf/druid/cluster/data. Query server Druid Brokers accept queries and farm them out to the rest of the cluster. They also optionally maintain an in-memory query cache. These servers benefit greatly from CPU and RAM. In this example, we will be deploying the equivalent of one AWS m5.2xlarge instance. This hardware offers: 8 vCPUs32 GiB RAM You can consider co-locating any open source UIs or query libraries on the same server that the Broker is running on. Example Query server configurations that have been sized for this hardware can be found under conf/druid/cluster/query. Other Hardware Sizes The example cluster above is chosen as a single example out of many possible ways to size a Druid cluster. You can choose smaller/larger hardware or less/more servers for your specific needs and constraints. If your use case has complex scaling requirements, you can also choose to not co-locate Druid processes (e.g., standalone Historical servers). The information in the basic cluster tuning guide can help with your decision-making process and with sizing your configurations. "},{"title":"Migrating from a single-server deployment","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#migrating-from-a-single-server-deployment","content":"If you have an existing single-server deployment, such as the ones from the single-server deployment examples, and you wish to migrate to a clustered deployment of similar scale, the following section contains guidelines for choosing equivalent hardware using the Master/Data/Query server organization. Master server The main considerations for the Master server are available CPUs and RAM for the Coordinator and Overlord heaps. Sum up the allocated heap sizes for your Coordinator and Overlord from the single-server deployment, and choose Master server hardware with enough RAM for the combined heaps, with some extra RAM for other processes on the machine. For CPU cores, you can choose hardware with approximately 1/4th of the cores of the single-server deployment. Data server When choosing Data server hardware for the cluster, the main considerations are available CPUs and RAM, and using SSD storage if feasible. In a clustered deployment, having multiple Data servers is a good idea for fault-tolerance purposes. When choosing the Data server hardware, you can choose a split factor N, divide the original CPU/RAM of the single-server deployment by N, and deploy N Data servers of reduced size in the new cluster. Instructions for adjusting the Historical/MiddleManager configs for the split are described in a later section in this guide. Query server The main considerations for the Query server are available CPUs and RAM for the Broker heap + direct memory, and Router heap. Sum up the allocated memory sizes for your Broker and Router from the single-server deployment, and choose Query server hardware with enough RAM to cover the Broker/Router, with some extra RAM for other processes on the machine. For CPU cores, you can choose hardware with approximately 1/4th of the cores of the single-server deployment. The basic cluster tuning guide has information on how to calculate Broker/Router memory usage. 
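To make that concrete, a hypothetical worked example (illustrative numbers only, not a recommendation): suppose the single-server deployment ran the Broker with a 4 GiB heap plus 8 GiB of direct memory and the Router with a 1 GiB heap on a 32-core machine.

Broker heap 4 GiB + Broker direct memory 8 GiB + Router heap 1 GiB = 13 GiB
  -> choose a Query server with at least ~16 GiB RAM to leave headroom for other processes
32 cores / 4 ≈ 8 cores
  -> an 8 vCPU Query server is roughly equivalent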
"},{"title":"Select OS","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#select-os","content":"We recommend running your favorite Linux distribution. You will also need Java 8u92+, 11, or 17Python 2 or Python 3 info If needed, you can specify where to find Java using the environment variablesDRUID_JAVA_HOME or JAVA_HOME. For more details run the bin/verify-java script. For information about installing Java, see the documentation for your OS package manager. If your Ubuntu-based OS does not have a recent enough version of Java, WebUpd8 offers packages for those OSes. "},{"title":"Download the distribution","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#download-the-distribution","content":"First, download and unpack the release archive. It's best to do this on a single machine at first, since you will be editing the configurations and then copying the modified distribution out to all of your servers. Downloadthe 27.0.0 release. Extract Druid by running the following commands in your terminal: tar -xzf apache-druid-27.0.0-bin.tar.gz cd apache-druid-27.0.0 In the package, you should find: LICENSE and NOTICE filesbin/* - scripts related to the single-machine quickstartconf/druid/cluster/* - template configurations for a clustered setupextensions/* - core Druid extensionshadoop-dependencies/* - Druid Hadoop dependencieslib/* - libraries and dependencies for core Druidquickstart/* - files related to the single-machine quickstart We'll be editing the files in conf/druid/cluster/ in order to get things running. "},{"title":"Migrating from Single-Server Deployments","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#migrating-from-single-server-deployments","content":"In the following sections we will be editing the configs under conf/druid/cluster. If you have an existing single-server deployment, please copy your existing configs to conf/druid/cluster to preserve any config changes you have made. "},{"title":"Configure metadata storage and deep storage","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#configure-metadata-storage-and-deep-storage","content":""},{"title":"Migrating from Single-Server Deployments","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#migrating-from-single-server-deployments-1","content":"If you have an existing single-server deployment and you wish to preserve your data across the migration, please follow the instructions at metadata migration and deep storage migration before updating your metadata/deep storage configs. These guides are targeted at single-server deployments that use the Derby metadata store and local deep storage. If you are already using a non-Derby metadata store in your single-server cluster, you can reuse the existing metadata store for the new cluster. These guides also provide information on migrating segments from local deep storage. A clustered deployment requires distributed deep storage like S3 or HDFS. If your single-server deployment was already using distributed deep storage, you can reuse the existing deep storage for the new cluster. 
"},{"title":"Metadata storage","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#metadata-storage","content":"In conf/druid/cluster/_common/common.runtime.properties, replace "metadata.storage.*" with the address of the machine that you will use as your metadata store: druid.metadata.storage.connector.connectURIdruid.metadata.storage.connector.host In a production deployment, we recommend running a dedicated metadata store such as MySQL or PostgreSQL with replication, deployed separately from the Druid servers. The MySQL extension and PostgreSQL extension docs have instructions for extension configuration and initial database setup. "},{"title":"Deep storage","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#deep-storage","content":"Druid relies on a distributed filesystem or large object (blob) store for data storage. The most commonly used deep storage implementations are S3 (popular for those on AWS) and HDFS (popular if you already have a Hadoop deployment). S3 In conf/druid/cluster/_common/common.runtime.properties, Add "druid-s3-extensions" to druid.extensions.loadList. Comment out the configurations for local storage under "Deep Storage" and "Indexing service logs". Uncomment and configure appropriate values in the "For S3" sections of "Deep Storage" and "Indexing service logs". After this, you should have made the following changes: druid.extensions.loadList=["druid-s3-extensions"] #druid.storage.type=local #druid.storage.storageDirectory=var/druid/segments druid.storage.type=s3 druid.storage.bucket=your-bucket druid.storage.baseKey=druid/segments druid.s3.accessKey=... druid.s3.secretKey=... #druid.indexer.logs.type=file #druid.indexer.logs.directory=var/druid/indexing-logs druid.indexer.logs.type=s3 druid.indexer.logs.s3Bucket=your-bucket druid.indexer.logs.s3Prefix=druid/indexing-logs Please see the S3 extension documentation for more info. HDFS In conf/druid/cluster/_common/common.runtime.properties, Add "druid-hdfs-storage" to druid.extensions.loadList. Comment out the configurations for local storage under "Deep Storage" and "Indexing service logs". Uncomment and configure appropriate values in the "For HDFS" sections of "Deep Storage" and "Indexing service logs". After this, you should have made the following changes: druid.extensions.loadList=["druid-hdfs-storage"] #druid.storage.type=local #druid.storage.storageDirectory=var/druid/segments druid.storage.type=hdfs druid.storage.storageDirectory=/druid/segments #druid.indexer.logs.type=file #druid.indexer.logs.directory=var/druid/indexing-logs druid.indexer.logs.type=hdfs druid.indexer.logs.directory=/druid/indexing-logs Also, Place your Hadoop configuration XMLs (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml) on the classpath of your Druid processes. You can do this by copying them intoconf/druid/cluster/_common/. Please see the HDFS extension documentation for more info. 
"},{"title":"Configure for connecting to Hadoop (optional)","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#configure-for-connecting-to-hadoop-optional","content":"If you will be loading data from a Hadoop cluster, then at this point you should configure Druid to be aware of your cluster: Update druid.indexer.task.hadoopWorkingPath in conf/druid/cluster/middleManager/runtime.properties to a path on HDFS that you'd like to use for temporary files required during the indexing process.druid.indexer.task.hadoopWorkingPath=/tmp/druid-indexing is a common choice. Place your Hadoop configuration XMLs (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml) on the classpath of your Druid processes. You can do this by copying them intoconf/druid/cluster/_common/core-site.xml, conf/druid/cluster/_common/hdfs-site.xml, and so on. Note that you don't need to use HDFS deep storage in order to load data from Hadoop. For example, if your cluster is running on Amazon Web Services, we recommend using S3 for deep storage even if you are loading data using Hadoop or Elastic MapReduce. For more info, please see the Hadoop-based ingestion page. "},{"title":"Configure Zookeeper connection","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#configure-zookeeper-connection","content":"In a production cluster, we recommend using a dedicated ZK cluster in a quorum, deployed separately from the Druid servers. In conf/druid/cluster/_common/common.runtime.properties, setdruid.zk.service.host to a connection stringcontaining a comma separated list of host:port pairs, each corresponding to a ZooKeeper server in your ZK quorum. (e.g. "127.0.0.1:4545" or "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002") You can also choose to run ZK on the Master servers instead of having a dedicated ZK cluster. If doing so, we recommend deploying 3 Master servers so that you have a ZK quorum. "},{"title":"Configuration Tuning","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#configuration-tuning","content":""},{"title":"Migrating from a Single-Server Deployment","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#migrating-from-a-single-server-deployment-1","content":"Master If you are using an example configuration from single-server deployment examples, these examples combine the Coordinator and Overlord processes into one combined process. The example configs under conf/druid/cluster/master/coordinator-overlord also combine the Coordinator and Overlord processes. You can copy your existing coordinator-overlord configs from the single-server deployment to conf/druid/cluster/master/coordinator-overlord. Data Suppose we are migrating from a single-server deployment that had 32 CPU and 256GiB RAM. In the old deployment, the following configurations for Historicals and MiddleManagers were applied: Historical (Single-server) druid.processing.buffer.sizeBytes=500MiB druid.processing.numMergeBuffers=8 druid.processing.numThreads=31 MiddleManager (Single-server) druid.worker.capacity=8 druid.indexer.fork.property.druid.processing.numMergeBuffers=2 druid.indexer.fork.property.druid.processing.buffer.sizeBytes=100MiB druid.indexer.fork.property.druid.processing.numThreads=1 In the clustered deployment, we can choose a split factor (2 in this example), and deploy 2 Data servers with 16CPU and 128GiB RAM each. 
The areas to scale are the following: Historical druid.processing.numThreads: Set to (num_cores - 1) based on the new hardwaredruid.processing.numMergeBuffers: Divide the old value from the single-server deployment by the split factordruid.processing.buffer.sizeBytes: Keep this unchanged MiddleManager: druid.worker.capacity: Divide the old value from the single-server deployment by the split factordruid.indexer.fork.property.druid.processing.numMergeBuffers: Keep this unchangeddruid.indexer.fork.property.druid.processing.buffer.sizeBytes: Keep this unchangeddruid.indexer.fork.property.druid.processing.numThreads: Keep this unchanged The resulting configs after the split: New Historical (on 2 Data servers) druid.processing.buffer.sizeBytes=500MiB druid.processing.numMergeBuffers=4 druid.processing.numThreads=15 New MiddleManager (on 2 Data servers) druid.worker.capacity=4 druid.indexer.fork.property.druid.processing.numMergeBuffers=2 druid.indexer.fork.property.druid.processing.buffer.sizeBytes=100MiB druid.indexer.fork.property.druid.processing.numThreads=1 Query You can copy your existing Broker and Router configs to the directories under conf/druid/cluster/query, no modifications are needed, as long as the new hardware is sized accordingly. "},{"title":"Fresh deployment","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#fresh-deployment-1","content":"If you are using the example cluster described above: 1 Master server (m5.2xlarge)2 Data servers (i3.4xlarge)1 Query server (m5.2xlarge) The configurations under conf/druid/cluster have already been sized for this hardware and you do not need to make further modifications for general use cases. If you have chosen different hardware, the basic cluster tuning guide can help you size your configurations. "},{"title":"Open ports (if using a firewall)","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#open-ports-if-using-a-firewall","content":"If you're using a firewall or some other system that only allows traffic on specific ports, allow inbound connections on the following: "},{"title":"Master Server","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#master-server-2","content":"1527 (Derby metadata store; not needed if you are using a separate metadata store like MySQL or PostgreSQL)2181 (ZooKeeper; not needed if you are using a separate ZooKeeper cluster)8081 (Coordinator)8090 (Overlord) "},{"title":"Data Server","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#data-server-2","content":"8083 (Historical)8091, 8100–8199 (Druid Middle Manager; you may need higher than port 8199 if you have a very high druid.worker.capacity) "},{"title":"Query Server","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#query-server-2","content":"8082 (Broker)8088 (Router, if used) info In production, we recommend deploying ZooKeeper and your metadata store on their own dedicated hardware, rather than on the Master server. "},{"title":"Start Master Server","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#start-master-server","content":"Copy the Druid distribution and your edited configurations to your Master server. 
If you have been editing the configurations on your local machine, you can use rsync to copy them: rsync -az apache-druid-27.0.0/ MASTER_SERVER:apache-druid-27.0.0/ "},{"title":"No Zookeeper on Master","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#no-zookeeper-on-master","content":"From the distribution root, run the following command to start the Master server: bin/start-cluster-master-no-zk-server "},{"title":"With Zookeeper on Master","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#with-zookeeper-on-master","content":"If you plan to run ZK on Master servers, first update conf/zoo.cfg to reflect how you plan to run ZK. Then, you can start the Master server processes together with ZK using: bin/start-cluster-master-with-zk-server info In production, we also recommend running a ZooKeeper cluster on its own dedicated hardware. "},{"title":"Start Data Server","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#start-data-server","content":"Copy the Druid distribution and your edited configurations to your Data servers. From the distribution root, run the following command to start the Data server: bin/start-cluster-data-server You can add more Data servers as needed. info For clusters with complex resource allocation needs, you can break apart Historicals and MiddleManagers and scale the components individually. This also allows you take advantage of Druid's built-in MiddleManager autoscaling facility. "},{"title":"Start Query Server","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#start-query-server","content":"Copy the Druid distribution and your edited configurations to your Query servers. From the distribution root, run the following command to start the Query server: bin/start-cluster-query-server You can add more Query servers as needed based on query load. If you increase the number of Query servers, be sure to adjust the connection pools on your Historicals and Tasks as described in the basic cluster tuning guide. "},{"title":"Loading data","type":1,"pageTitle":"Clustered deployment","url":"/docs/27.0.0/tutorials/cluster#loading-data","content":"Congratulations, you now have a Druid cluster! The next step is to learn about recommended ways to load data into Druid based on your use case. Read more about loading data. "},{"title":"Run with Docker","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/docker","content":"","keywords":""},{"title":"Prerequisites","type":1,"pageTitle":"Run with Docker","url":"/docs/27.0.0/tutorials/docker#prerequisites","content":"Docker "},{"title":"Docker memory requirements","type":1,"pageTitle":"Run with Docker","url":"/docs/27.0.0/tutorials/docker#docker-memory-requirements","content":"The default docker-compose.yml launches eight containers: Zookeeper, PostgreSQL, and six Druid containers based upon the micro quickstart configuration. Each Druid service is configured to use up to 7 GiB of memory (6 GiB direct memory and 1 GiB heap). However, the quickstart will not use all the available memory. For this setup, Docker needs at least 6 GiB of memory available for the Druid cluster. For Docker Desktop on Mac OS, adjust the memory settings in the Docker Desktop preferences. If you experience a crash with a 137 error code you likely don't have enough memory allocated to Docker. You can modify the value of DRUID_SINGLE_NODE_CONF in the Docker environment to use different single-server mode. 
For example to use the nano quickstart: DRUID_SINGLE_NODE_CONF=nano-quickstart. "},{"title":"Getting started","type":1,"pageTitle":"Run with Docker","url":"/docs/27.0.0/tutorials/docker#getting-started","content":"Create a directory to hold the Druid Docker files. The Druid source code contains an example docker-compose.yml which pulls an image from Docker Hub and is suited to be used as an example environment and to experiment with Docker based Druid configuration and deployments. Download this file to the directory created above. "},{"title":"Compose file","type":1,"pageTitle":"Run with Docker","url":"/docs/27.0.0/tutorials/docker#compose-file","content":"The example docker-compose.yml will create a container for each Druid service, as well as ZooKeeper and a PostgreSQL container as the metadata store. It will also create a named volume druid_shared as deep storage to keep and share segments and task logs among Druid services. The volume is mounted as opt/shared in the container. "},{"title":"Environment file","type":1,"pageTitle":"Run with Docker","url":"/docs/27.0.0/tutorials/docker#environment-file","content":"The Druid docker-compose.yml example uses an environment file to specify the complete Druid configuration, including the environment variables described in Configuration. This file is named environment by default, and must be in the same directory as the docker-compose.yml file. Download the example environment file to the directory created above. The options in this file work well for trying Druid and for using the tutorial. The single-file approach is inadequate for a production system. Instead we suggest using either DRUID_COMMON_CONFIG and DRUID_CONFIG_${service} or specially tailored, service-specific environment files. "},{"title":"Configuration","type":1,"pageTitle":"Run with Docker","url":"/docs/27.0.0/tutorials/docker#configuration","content":"Configuration of the Druid Docker container is done via environment variables set within the container. Docker Compose passes the values from the environment file into the container. The variables may additionally specify paths to the standard Druid configuration files which must be available within the container. The default values are fine for the Quickstart. Production systems will want to modify the defaults. Basic configuration: DRUID_MAXDIRECTMEMORYSIZE -- set Java max direct memory size. Default is 6 GiB.DRUID_XMX -- set Java Xmx, the maximum heap size. Default is 1 GB. Production configuration: DRUID_CONFIG_COMMON -- full path to a file for Druid common propertiesDRUID_CONFIG_${service} -- full path to a file for Druid service propertiesJAVA_OPTS -- set Java options Logging configuration: DRUID_LOG4J -- set the entire log4j.xml configuration file verbatim. (Example)DRUID_LOG_LEVEL -- override the default Log4j log levelDRUID_SERVICE_LOG4J -- set the entire log4j.xml configuration file verbatim specific to a service.DRUID_SERVICE_LOG_LEVEL -- override the default Log4j log level in the service specific log4j. Advanced memory configuration: DRUID_XMS -- set Java Xms, the initial heap size. Default is 1 GB.DRUID_MAXNEWSIZE -- set Java max new sizeDRUID_NEWSIZE -- set Java new size In addition to the special environment variables, the script which launches Druid in the container will use any environment variable starting with the druid_ prefix as command-line configuration. 
For example, an environment variable druid_metadata_storage_type=postgresql is translated into the following option in the Java launch command for the Druid process in the container: -Ddruid.metadata.storage.type=postgresql Note that Druid uses port 8888 for the console. This port is also used by Jupyter and other tools. To avoid conflicts, you can change the port in the ports section of the docker-compose.yml file. For example, to expose the console on port 9999 of the host: container_name: router ... ports: - "9999:8888" "},{"title":"Launching the cluster","type":1,"pageTitle":"Run with Docker","url":"/docs/27.0.0/tutorials/docker#launching-the-cluster","content":"cd into the directory that contains the configuration files. This is the directory you created above, or the distribution/docker/ in your Druid installation directory if you installed Druid locally. Run docker-compose up to launch the cluster with a shell attached, or docker-compose up -d to run the cluster in the background. Once the cluster has started, you can navigate to the web console at http://localhost:8888. The Druid router process serves the UI. It takes a few seconds for all the Druid processes to fully start up. If you open the console immediately after starting the services, you may see some errors that you can safely ignore. "},{"title":"Using the cluster","type":1,"pageTitle":"Run with Docker","url":"/docs/27.0.0/tutorials/docker#using-the-cluster","content":"From here you can follow along with the Quickstart. For production use, refine your docker-compose.yml file to add any additional external service dependencies as necessary. You can explore the Druid containers using Docker to start a shell: docker exec -ti <id> sh Where <id> is the container id found with docker ps. Druid is installed in /opt/druid. The script which consumes the environment variables mentioned above, and which launches Druid, is located at /druid.sh. Run docker-compose down to shut down the cluster. Your data is persisted as a set of Docker volumes and will be available when you restart your Druid cluster. 
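To make the shutdown and restart behavior above concrete, here is a minimal command-line sketch. It assumes the example docker-compose.yml and the named volumes described in this tutorial; the -v flag is standard Docker Compose behavior for removing named volumes and is not something the tutorial itself requires.

```bash
# Stop the cluster but keep the named volumes; segments, task logs, and metadata
# survive, so the data reappears the next time you run docker-compose up.
docker-compose down

# List the volumes that Docker Compose created, such as the druid_shared deep
# storage volume (the name may be prefixed with the Compose project name).
docker volume ls

# Optional full reset (an assumption, not part of the tutorial): also delete the
# named volumes so the next startup begins with an empty cluster.
docker-compose down -v
```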
"},{"title":"Load a file","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-batch","content":"","keywords":""},{"title":"Loading data with a spec (via console)","type":1,"pageTitle":"Load a file","url":"/docs/27.0.0/tutorials/tutorial-batch#loading-data-with-a-spec-via-console","content":"The Druid package includes the following sample native batch ingestion task spec at quickstart/tutorial/wikipedia-index.json, shown here for convenience, which has been configured to read the quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz input file: { "type" : "index_parallel", "spec" : { "dataSchema" : { "dataSource" : "wikipedia", "dimensionsSpec" : { "dimensions" : [ "channel", "cityName", "comment", "countryIsoCode", "countryName", "isAnonymous", "isMinor", "isNew", "isRobot", "isUnpatrolled", "metroCode", "namespace", "page", "regionIsoCode", "regionName", "user", { "name": "added", "type": "long" }, { "name": "deleted", "type": "long" }, { "name": "delta", "type": "long" } ] }, "timestampSpec": { "column": "time", "format": "iso" }, "metricsSpec" : [], "granularitySpec" : { "type" : "uniform", "segmentGranularity" : "day", "queryGranularity" : "none", "intervals" : ["2015-09-12/2015-09-13"], "rollup" : false } }, "ioConfig" : { "type" : "index_parallel", "inputSource" : { "type" : "local", "baseDir" : "quickstart/tutorial/", "filter" : "wikiticker-2015-09-12-sampled.json.gz" }, "inputFormat" : { "type": "json" }, "appendToExisting" : false }, "tuningConfig" : { "type" : "index_parallel", "partitionsSpec": { "type": "dynamic" }, "maxRowsInMemory" : 25000 } } } This spec creates a datasource named "wikipedia". From the Ingestion view, click the ellipses next to Tasks and choose Submit JSON task. This brings up the spec submission dialog where you can paste the spec above. Once the spec is submitted, wait a few moments for the data to load, after which you can query it. "},{"title":"Loading data with a spec (via command line)","type":1,"pageTitle":"Load a file","url":"/docs/27.0.0/tutorials/tutorial-batch#loading-data-with-a-spec-via-command-line","content":"For convenience, the Druid package includes a batch ingestion helper script at bin/post-index-task. This script will POST an ingestion task to the Druid Overlord and poll Druid until the data is available for querying. Run the following command from Druid package root: bin/post-index-task --file quickstart/tutorial/wikipedia-index.json --url http://localhost:8081 You should see output like the following: Beginning indexing data for wikipedia Task started: index_wikipedia_2018-07-27T06:37:44.323Z Task log: http://localhost:8081/druid/indexer/v1/task/index_wikipedia_2018-07-27T06:37:44.323Z/log Task status: http://localhost:8081/druid/indexer/v1/task/index_wikipedia_2018-07-27T06:37:44.323Z/status Task index_wikipedia_2018-07-27T06:37:44.323Z still running... Task index_wikipedia_2018-07-27T06:37:44.323Z still running... Task finished with status: SUCCESS Completed indexing data for wikipedia. Now loading indexed data onto the cluster... wikipedia loading complete! You may now query your data Once the spec is submitted, you can follow the same instructions as above to wait for the data to load and then query it. "},{"title":"Loading data without the script","type":1,"pageTitle":"Load a file","url":"/docs/27.0.0/tutorials/tutorial-batch#loading-data-without-the-script","content":"Let's briefly discuss how we would've submitted the ingestion task without using the script. You do not need to run these commands. 
To submit the task, POST it to Druid in a new terminal window from the apache-druid-27.0.0 directory: curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/tutorial/wikipedia-index.json http://localhost:8081/druid/indexer/v1/task Which will print the ID of the task if the submission was successful: {"task":"index_wikipedia_2018-06-09T21:30:32.802Z"} You can monitor the status of this task from the console as outlined above. "},{"title":"Querying your data","type":1,"pageTitle":"Load a file","url":"/docs/27.0.0/tutorials/tutorial-batch#querying-your-data","content":"Once the data is loaded, please follow the query tutorial to run some example queries on the newly loaded data. "},{"title":"Cleanup","type":1,"pageTitle":"Load a file","url":"/docs/27.0.0/tutorials/tutorial-batch#cleanup","content":"If you wish to go through any of the other ingestion tutorials, you will need to shut down the cluster and reset the cluster state by removing the contents of the var directory under the druid package, as the other tutorials will write to the same "wikipedia" datasource. "},{"title":"Further reading","type":1,"pageTitle":"Load a file","url":"/docs/27.0.0/tutorials/tutorial-batch#further-reading","content":"For more information on loading batch data, please see the native batch ingestion documentation. "},{"title":"Load batch data using Apache Hadoop","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-batch-hadoop","content":"","keywords":""},{"title":"Install Docker","type":1,"pageTitle":"Load batch data using Apache Hadoop","url":"/docs/27.0.0/tutorials/tutorial-batch-hadoop#install-docker","content":"This tutorial requires Docker to be installed on the tutorial machine. Once the Docker install is complete, please proceed to the next steps in the tutorial. "},{"title":"Build the Hadoop docker image","type":1,"pageTitle":"Load batch data using Apache Hadoop","url":"/docs/27.0.0/tutorials/tutorial-batch-hadoop#build-the-hadoop-docker-image","content":"For this tutorial, we've provided a Dockerfile for a Hadoop 2.8.5 cluster, which we'll use to run the batch indexing task. This Dockerfile and related files are located at quickstart/tutorial/hadoop/docker. From the apache-druid-27.0.0 package root, run the following commands to build a Docker image named "druid-hadoop-demo" with version tag "2.8.5": cd quickstart/tutorial/hadoop/docker docker build -t druid-hadoop-demo:2.8.5 . This will start building the Hadoop image. Once the image build is done, you should see the message Successfully tagged druid-hadoop-demo:2.8.5 printed to the console. "},{"title":"Setup the Hadoop docker cluster","type":1,"pageTitle":"Load batch data using Apache Hadoop","url":"/docs/27.0.0/tutorials/tutorial-batch-hadoop#setup-the-hadoop-docker-cluster","content":""},{"title":"Create temporary shared directory","type":1,"pageTitle":"Load batch data using Apache Hadoop","url":"/docs/27.0.0/tutorials/tutorial-batch-hadoop#create-temporary-shared-directory","content":"We'll need a shared folder between the host and the Hadoop container for transferring some files. 
Let's create some folders under /tmp, we will use these later when starting the Hadoop container: mkdir -p /tmp/shared mkdir -p /tmp/shared/hadoop_xml "},{"title":"Configure /etc/hosts","type":1,"pageTitle":"Load batch data using Apache Hadoop","url":"/docs/27.0.0/tutorials/tutorial-batch-hadoop#configure-etchosts","content":"On the host machine, add the following entry to /etc/hosts: 127.0.0.1 druid-hadoop-demo "},{"title":"Start the Hadoop container","type":1,"pageTitle":"Load batch data using Apache Hadoop","url":"/docs/27.0.0/tutorials/tutorial-batch-hadoop#start-the-hadoop-container","content":"Once the /tmp/shared folder has been created and the etc/hosts entry has been added, run the following command to start the Hadoop container. docker run -it -h druid-hadoop-demo --name druid-hadoop-demo -p 2049:2049 -p 2122:2122 -p 8020:8020 -p 8021:8021 -p 8030:8030 -p 8031:8031 -p 8032:8032 -p 8033:8033 -p 8040:8040 -p 8042:8042 -p 8088:8088 -p 8443:8443 -p 9000:9000 -p 10020:10020 -p 19888:19888 -p 34455:34455 -p 49707:49707 -p 50010:50010 -p 50020:50020 -p 50030:50030 -p 50060:50060 -p 50070:50070 -p 50075:50075 -p 50090:50090 -p 51111:51111 -v /tmp/shared:/shared druid-hadoop-demo:2.8.5 /etc/bootstrap.sh -bash Once the container is started, your terminal will attach to a bash shell running inside the container: Starting sshd: [ OK ] 18/07/26 17:27:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable Starting namenodes on [druid-hadoop-demo] druid-hadoop-demo: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-druid-hadoop-demo.out localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-druid-hadoop-demo.out Starting secondary namenodes [0.0.0.0] 0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-druid-hadoop-demo.out 18/07/26 17:27:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable starting yarn daemons starting resourcemanager, logging to /usr/local/hadoop/logs/yarn--resourcemanager-druid-hadoop-demo.out localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-druid-hadoop-demo.out starting historyserver, logging to /usr/local/hadoop/logs/mapred--historyserver-druid-hadoop-demo.out bash-4.1# The Unable to load native-hadoop library for your platform... using builtin-java classes where applicable warning messages can be safely ignored. Accessing the Hadoop container shell To open another shell to the Hadoop container, run the following command: docker exec -it druid-hadoop-demo bash "},{"title":"Copy input data to the Hadoop container","type":1,"pageTitle":"Load batch data using Apache Hadoop","url":"/docs/27.0.0/tutorials/tutorial-batch-hadoop#copy-input-data-to-the-hadoop-container","content":"From the apache-druid-27.0.0 package root on the host, copy the quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz sample data to the shared folder: cp quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz /tmp/shared/wikiticker-2015-09-12-sampled.json.gz "},{"title":"Setup HDFS directories","type":1,"pageTitle":"Load batch data using Apache Hadoop","url":"/docs/27.0.0/tutorials/tutorial-batch-hadoop#setup-hdfs-directories","content":"In the Hadoop container's shell, run the following commands to setup the HDFS directories needed by this tutorial and copy the input data to HDFS. 
cd /usr/local/hadoop/bin ./hdfs dfs -mkdir /druid ./hdfs dfs -mkdir /druid/segments ./hdfs dfs -mkdir /quickstart ./hdfs dfs -chmod 777 /druid ./hdfs dfs -chmod 777 /druid/segments ./hdfs dfs -chmod 777 /quickstart ./hdfs dfs -chmod -R 777 /tmp ./hdfs dfs -chmod -R 777 /user ./hdfs dfs -put /shared/wikiticker-2015-09-12-sampled.json.gz /quickstart/wikiticker-2015-09-12-sampled.json.gz If you encounter namenode errors when running this command, the Hadoop container may not be finished initializing. When this occurs, wait a couple of minutes and retry the commands. "},{"title":"Configure Druid to use Hadoop","type":1,"pageTitle":"Load batch data using Apache Hadoop","url":"/docs/27.0.0/tutorials/tutorial-batch-hadoop#configure-druid-to-use-hadoop","content":"Some additional steps are needed to configure the Druid cluster for Hadoop batch indexing. "},{"title":"Copy Hadoop configuration to Druid classpath","type":1,"pageTitle":"Load batch data using Apache Hadoop","url":"/docs/27.0.0/tutorials/tutorial-batch-hadoop#copy-hadoop-configuration-to-druid-classpath","content":"From the Hadoop container's shell, run the following command to copy the Hadoop .xml configuration files to the shared folder: cp /usr/local/hadoop/etc/hadoop/*.xml /shared/hadoop_xml From the host machine, run the following, where {PATH_TO_DRUID} is replaced by the path to the Druid package. mkdir -p {PATH_TO_DRUID}/conf/druid/single-server/micro-quickstart/_common/hadoop-xml cp /tmp/shared/hadoop_xml/*.xml {PATH_TO_DRUID}/conf/druid/single-server/micro-quickstart/_common/hadoop-xml/ "},{"title":"Update Druid segment and log storage","type":1,"pageTitle":"Load batch data using Apache Hadoop","url":"/docs/27.0.0/tutorials/tutorial-batch-hadoop#update-druid-segment-and-log-storage","content":"In your favorite text editor, open conf/druid/auto/_common/common.runtime.properties, and make the following edits: Disable local deep storage and enable HDFS deep storage # # Deep storage # # For local disk (only viable in a cluster if this is a network mount): #druid.storage.type=local #druid.storage.storageDirectory=var/druid/segments # For HDFS: druid.storage.type=hdfs druid.storage.storageDirectory=/druid/segments Disable local log storage and enable HDFS log storage # # Indexing service logs # # For local disk (only viable in a cluster if this is a network mount): #druid.indexer.logs.type=file #druid.indexer.logs.directory=var/druid/indexing-logs # For HDFS: druid.indexer.logs.type=hdfs druid.indexer.logs.directory=/druid/indexing-logs "},{"title":"Restart Druid cluster","type":1,"pageTitle":"Load batch data using Apache Hadoop","url":"/docs/27.0.0/tutorials/tutorial-batch-hadoop#restart-druid-cluster","content":"Once the Hadoop .xml files have been copied to the Druid cluster and the segment/log storage configuration has been updated to use HDFS, the Druid cluster needs to be restarted for the new configurations to take effect. If the cluster is still running, CTRL-C to terminate the bin/start-druid script, and re-run it to bring the Druid services back up. "},{"title":"Load batch data","type":1,"pageTitle":"Load batch data using Apache Hadoop","url":"/docs/27.0.0/tutorials/tutorial-batch-hadoop#load-batch-data","content":"We've included a sample of Wikipedia edits from September 12, 2015 to get you started. To load this data into Druid, you can submit an ingestion task pointing to the file. We've included a task that loads the wikiticker-2015-09-12-sampled.json.gz file included in the archive. 
Let's submit the wikipedia-index-hadoop.json task: bin/post-index-task --file quickstart/tutorial/wikipedia-index-hadoop.json --url http://localhost:8081 "},{"title":"Querying your data","type":1,"pageTitle":"Load batch data using Apache Hadoop","url":"/docs/27.0.0/tutorials/tutorial-batch-hadoop#querying-your-data","content":"After the data load is complete, please follow the query tutorial to run some example queries on the newly loaded data. "},{"title":"Cleanup","type":1,"pageTitle":"Load batch data using Apache Hadoop","url":"/docs/27.0.0/tutorials/tutorial-batch-hadoop#cleanup","content":"This tutorial is only meant to be used together with the query tutorial. If you wish to go through any of the other tutorials, you will need to: Shut down the cluster and reset the cluster state by removing the contents of the var directory under the druid package.Revert the deep storage and task storage config back to local types in conf/druid/auto/_common/common.runtime.propertiesRestart the cluster This is necessary because the other ingestion tutorials will write to the same "wikipedia" datasource, and later tutorials expect the cluster to use local deep storage. Example reverted config: # # Deep storage # # For local disk (only viable in a cluster if this is a network mount): druid.storage.type=local druid.storage.storageDirectory=var/druid/segments # For HDFS: #druid.storage.type=hdfs #druid.storage.storageDirectory=/druid/segments # # Indexing service logs # # For local disk (only viable in a cluster if this is a network mount): druid.indexer.logs.type=file druid.indexer.logs.directory=var/druid/indexing-logs # For HDFS: #druid.indexer.logs.type=hdfs #druid.indexer.logs.directory=/druid/indexing-logs "},{"title":"Further reading","type":1,"pageTitle":"Load batch data using Apache Hadoop","url":"/docs/27.0.0/tutorials/tutorial-batch-hadoop#further-reading","content":"For more information on loading batch data with Hadoop, please see the Hadoop batch ingestion documentation. "},{"title":"Load data with native batch ingestion","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-batch-native","content":"","keywords":""},{"title":"Prerequisites","type":1,"pageTitle":"Load data with native batch ingestion","url":"/docs/27.0.0/tutorials/tutorial-batch-native#prerequisites","content":"Install Druid, start up Druid services, and open the web console as described in the Druid quickstart. "},{"title":"Load data","type":1,"pageTitle":"Load data with native batch ingestion","url":"/docs/27.0.0/tutorials/tutorial-batch-native#load-data","content":"Ingestion specs define the schema of the data Druid reads and stores. You can write ingestion specs by hand or using the data loader, as we'll do here to perform batch file loading with Druid's native batch ingestion. The Druid distribution bundles sample data we can use. The sample data located in quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gzin the Druid root directory represents Wikipedia page edits for a given day. Click Load data from the web console header (). Select the Local disk tile and then click Connect data. Enter the following values: Base directory: quickstart/tutorial/ File filter: wikiticker-2015-09-12-sampled.json.gz Entering the base directory and wildcard file filter separately, as afforded by the UI, allows you to specify multiple files for ingestion at once. Click Apply. The data loader displays the raw data, giving you a chance to verify that the data appears as expected. 
Notice that your position in the sequence of steps to load data, Connect in our case, appears at the top of the console, as shown below. You can click other steps to move forward or backward in the sequence at any time. Click Next: Parse data. The data loader tries to determine the parser appropriate for the data format automatically. In this case it identifies the data format as json, as shown in the Input format field at the bottom right. Feel free to select other Input format options to get a sense of their configuration settings and how Druid parses other types of data. With the JSON parser selected, click Next: Parse time. The Parse time settings are where you view and adjust the primary timestamp column for the data. Druid requires data to have a primary timestamp column (internally stored in a column called __time). If you do not have a timestamp in your data, select Constant value. In our example, the data loader determines that the time column is the only candidate that can be used as the primary time column. Click Next: Transform, Next: Filter, and then Next: Configure schema, skipping a few steps. You do not need to adjust transformation or filtering settings, as applying ingestion-time transforms and filters is out of scope for this tutorial. The Configure schema settings are where you configure what dimensions and metrics are ingested. The outcome of this configuration represents exactly how the data will appear in Druid after ingestion. Since our dataset is very small, you can turn off rollup by unsetting the Rollup switch and confirming the change when prompted. Click Next: Partition to configure how the data will be split into segments. In this case, choose DAY as the Segment granularity. Since this is a small dataset, we can have just a single segment, which is what selecting DAY as the segment granularity gives us. Click Next: Tune and Next: Publish. The Publish settings are where you specify the datasource name in Druid. Let's change the default name from wikiticker-2015-09-12-sampled to wikipedia. Click Next: Edit spec to review the ingestion spec we've constructed with the data loader. Feel free to go back and change settings from previous steps to see how doing so updates the spec. Similarly, you can edit the spec directly and see it reflected in the previous steps. For other ways to load ingestion specs in Druid, see Tutorial: Loading a file. Once you are satisfied with the spec, click Submit. The new task for our wikipedia datasource now appears in the Ingestion view. The task may take a minute or two to complete. When done, the task status should be "SUCCESS", with the duration of the task indicated. Note that the view is set to automatically refresh, so you do not need to refresh the browser to see the status change. A successful task means that one or more segments have been built and are now picked up by our data servers. "},{"title":"Query the data","type":1,"pageTitle":"Load data with native batch ingestion","url":"/docs/27.0.0/tutorials/tutorial-batch-native#query-the-data","content":"You can now see the data as a datasource in the console and try out a query, as follows: Click Datasources from the console header. If the wikipedia datasource doesn't appear, wait a few moments for the segment to finish loading. A datasource is queryable once it is shown to be "Fully available" in the Availability column. 
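If you prefer to confirm availability from the command line instead of the console, the Coordinator exposes a load status endpoint. The following sketch is an optional addition, not part of the original tutorial, and assumes the quickstart ports used throughout these tutorials (Coordinator on 8081).

```bash
# Report what percentage of each datasource's used segments is loaded on
# Historicals; a value of 100.0 for "wikipedia" corresponds to the
# "Fully available" state shown in the console.
curl -s http://localhost:8081/druid/coordinator/v1/loadstatus
```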
When the datasource is available, open the Actions menu () for that datasource and choose Query with SQL. info Notice the other actions you can perform for a datasource, including configuring retention rules, compaction, and more. Run the prepopulated query, SELECT * FROM "wikipedia" to see the results. "},{"title":"Compact segments","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-compaction","content":"","keywords":""},{"title":"Prerequisites","type":1,"pageTitle":"Compact segments","url":"/docs/27.0.0/tutorials/tutorial-compaction#prerequisites","content":"This tutorial assumes you have already downloaded Apache Druid as described in the single-machine quickstart and have it running on your local machine. If you haven't already, you should finish the following tutorials first: Tutorial: Loading a fileTutorial: Querying data "},{"title":"Load the initial data","type":1,"pageTitle":"Compact segments","url":"/docs/27.0.0/tutorials/tutorial-compaction#load-the-initial-data","content":"This tutorial uses the Wikipedia edits sample data included with the Druid distribution. To load the initial data, you use an ingestion spec that loads batch data with segment granularity of HOUR and creates between one and three segments per hour. You can review the ingestion spec at quickstart/tutorial/compaction-init-index.json. Submit the spec as follows to create a datasource called compaction-tutorial: bin/post-index-task --file quickstart/tutorial/compaction-init-index.json --url http://localhost:8081 info maxRowsPerSegment in the tutorial ingestion spec is set to 1000 to generate multiple segments per hour for demonstration purposes. Do not use this spec in production. After the ingestion completes, navigate to http://localhost:8888/unified-console.html#datasources in a browser to see the new datasource in the web console. In the Availability column for the compaction-tutorial datasource, click the link for 51 segments to view segments information for the datasource. The datasource comprises 51 segments, between one and three segments per hour from the input data: Run a COUNT query on the datasource to verify there are 39,244 rows: dsql> select count(*) from "compaction-tutorial"; ┌────────┐ │ EXPR$0 │ ├────────┤ │ 39244 │ └────────┘ Retrieved 1 row in 1.38s. "},{"title":"Compact the data","type":1,"pageTitle":"Compact segments","url":"/docs/27.0.0/tutorials/tutorial-compaction#compact-the-data","content":"Now you compact these 51 small segments and retain the segment granularity of HOUR. The Druid distribution includes a compaction task spec for this tutorial datasource at quickstart/tutorial/compaction-keep-granularity.json: { "type": "compact", "dataSource": "compaction-tutorial", "interval": "2015-09-12/2015-09-13", "tuningConfig" : { "type" : "index_parallel", "partitionsSpec": { "type": "dynamic" }, "maxRowsInMemory" : 25000 } } This compacts all segments for the interval 2015-09-12/2015-09-13 in the compaction-tutorial datasource. The parameters in the tuningConfig control the maximum number of rows present in each compacted segment and thus affect the number of segments in the compacted set. This datasource only has 39,244 rows. 39,244 is below the default limit of 5,000,000 maxRowsPerSegment for dynamic partitioning. Therefore, Druid only creates one compacted segment per hour. Submit the compaction task now: bin/post-index-task --file quickstart/tutorial/compaction-keep-granularity.json --url http://localhost:8081 After the task finishes, refresh the segments view. 
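While you wait, you can also watch the segment count from the command line. This sketch is an optional addition rather than part of the original tutorial; it reuses the Coordinator segments endpoint that the deletion tutorial uses later on and assumes jq is installed.

```bash
# Count the segments currently marked used for the compaction-tutorial datasource.
# Shortly after compaction you may see the old and new segment sets together;
# once the Coordinator marks the originals unused, this settles at 24, one per hour.
curl -s http://localhost:8081/druid/coordinator/v1/datasources/compaction-tutorial/segments \
  | jq 'length'
```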
Over time the Coordinator marks the original 51 segments as unused and subsequently removes them to leave only the new compacted segments. By default, the Coordinator does not mark segments as unused until the Coordinator has been running for at least 15 minutes. During that time, you may see 75 total segments consisting of the old segment set and the new compacted set: The new compacted segments have a more recent version than the original segments. Even though the web console displays both sets of segments, queries only read from the new compacted segments. Run a COUNT query on compaction-tutorial again to verify the number of rows remains 39,244: dsql> select count(*) from "compaction-tutorial"; ┌────────┐ │ EXPR$0 │ ├────────┤ │ 39244 │ └────────┘ Retrieved 1 row in 1.30s. After the Coordinator has been running for at least 15 minutes, the segments view only shows the new 24 segments, one for each hour: "},{"title":"Compact the data with new segment granularity","type":1,"pageTitle":"Compact segments","url":"/docs/27.0.0/tutorials/tutorial-compaction#compact-the-data-with-new-segment-granularity","content":"You can also change the segment granularity in a compaction task to produce compacted segments with a different granularity from that of the input segments. The Druid distribution includes a compaction task spec to create DAY granularity segments at quickstart/tutorial/compaction-day-granularity.json: { "type": "compact", "dataSource": "compaction-tutorial", "interval": "2015-09-12/2015-09-13", "tuningConfig" : { "type" : "index_parallel", "partitionsSpec": { "type": "dynamic" }, "maxRowsInMemory" : 25000, "forceExtendableShardSpecs" : true }, "granularitySpec" : { "segmentGranularity" : "DAY", "queryGranularity" : "none" } } Note that segmentGranularity is set to DAY in this compaction task spec. Submit this task now: bin/post-index-task --file quickstart/tutorial/compaction-day-granularity.json --url http://localhost:8081 It takes some time before the Coordinator marks the old input segments as unused, so you may see an intermediate state with 25 total segments. Eventually, only one DAY granularity segment remains: "},{"title":"Learn more","type":1,"pageTitle":"Compact segments","url":"/docs/27.0.0/tutorials/tutorial-compaction#learn-more","content":"This tutorial demonstrated how to use a compaction task spec to manually compact segments and how to optionally change the segment granularity for segments. For more details, see Compaction.To learn about the benefits of compaction, see Segment optimization. "},{"title":"Tutorial: Deleting data","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-delete-data","content":"","keywords":""},{"title":"Load initial data","type":1,"pageTitle":"Tutorial: Deleting data","url":"/docs/27.0.0/tutorials/tutorial-delete-data#load-initial-data","content":"In this tutorial, we will use the Wikipedia edits data, with an indexing spec that creates hourly segments. This spec is located at quickstart/tutorial/deletion-index.json, and it creates a datasource called deletion-tutorial. Let's load this initial data: bin/post-index-task --file quickstart/tutorial/deletion-index.json --url http://localhost:8081 When the load finishes, open http://localhost:8888/unified-console.html#datasources in a browser. 
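Before moving on to the deletion steps, you can optionally verify the load from the command line rather than the browser. The sketch below is not part of the original tutorial; it uses the Druid SQL API through the Router (port 8888 in the quickstart) and quotes the datasource name because it contains a hyphen.

```bash
# Count the rows that the deletion-index.json task just loaded into the
# deletion-tutorial datasource, using the Druid SQL API.
curl -s -X POST -H 'Content-Type:application/json' \
  -d '{"query": "SELECT COUNT(*) AS num_rows FROM \"deletion-tutorial\""}' \
  http://localhost:8888/druid/v2/sql
```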
"},{"title":"How to permanently delete data","type":1,"pageTitle":"Tutorial: Deleting data","url":"/docs/27.0.0/tutorials/tutorial-delete-data#how-to-permanently-delete-data","content":"Permanent deletion of a Druid segment has two steps: The segment must first be marked as "unused". This occurs when a user manually disables a segment through the Coordinator API.After segments have been marked as "unused", a Kill Task will delete any "unused" segments from Druid's metadata store as well as deep storage. Let's drop some segments now, by using the coordinator API to drop data by interval and segmentIds. "},{"title":"Disable segments by interval","type":1,"pageTitle":"Tutorial: Deleting data","url":"/docs/27.0.0/tutorials/tutorial-delete-data#disable-segments-by-interval","content":"Let's disable segments in a specified interval. This will mark all segments in the interval as "unused", but not remove them from deep storage. Let's disable segments in interval 2015-09-12T18:00:00.000Z/2015-09-12T20:00:00.000Z i.e. between hour 18 and 20. curl -X 'POST' -H 'Content-Type:application/json' -d '{ "interval" : "2015-09-12T18:00:00.000Z/2015-09-12T20:00:00.000Z" }' http://localhost:8081/druid/coordinator/v1/datasources/deletion-tutorial/markUnused When the request completes, the Segments view of the web console no longer displays the segments for hours 18 and 19. Note that the hour 18 and 19 segments are still present in deep storage: $ ls -l1 var/druid/segments/deletion-tutorial/ 2015-09-12T00:00:00.000Z_2015-09-12T01:00:00.000Z 2015-09-12T01:00:00.000Z_2015-09-12T02:00:00.000Z 2015-09-12T02:00:00.000Z_2015-09-12T03:00:00.000Z 2015-09-12T03:00:00.000Z_2015-09-12T04:00:00.000Z 2015-09-12T04:00:00.000Z_2015-09-12T05:00:00.000Z 2015-09-12T05:00:00.000Z_2015-09-12T06:00:00.000Z 2015-09-12T06:00:00.000Z_2015-09-12T07:00:00.000Z 2015-09-12T07:00:00.000Z_2015-09-12T08:00:00.000Z 2015-09-12T08:00:00.000Z_2015-09-12T09:00:00.000Z 2015-09-12T09:00:00.000Z_2015-09-12T10:00:00.000Z 2015-09-12T10:00:00.000Z_2015-09-12T11:00:00.000Z 2015-09-12T11:00:00.000Z_2015-09-12T12:00:00.000Z 2015-09-12T12:00:00.000Z_2015-09-12T13:00:00.000Z 2015-09-12T13:00:00.000Z_2015-09-12T14:00:00.000Z 2015-09-12T14:00:00.000Z_2015-09-12T15:00:00.000Z 2015-09-12T15:00:00.000Z_2015-09-12T16:00:00.000Z 2015-09-12T16:00:00.000Z_2015-09-12T17:00:00.000Z 2015-09-12T17:00:00.000Z_2015-09-12T18:00:00.000Z 2015-09-12T18:00:00.000Z_2015-09-12T19:00:00.000Z 2015-09-12T19:00:00.000Z_2015-09-12T20:00:00.000Z 2015-09-12T20:00:00.000Z_2015-09-12T21:00:00.000Z 2015-09-12T21:00:00.000Z_2015-09-12T22:00:00.000Z 2015-09-12T22:00:00.000Z_2015-09-12T23:00:00.000Z 2015-09-12T23:00:00.000Z_2015-09-13T00:00:00.000Z "},{"title":"Disable segments by segment IDs","type":1,"pageTitle":"Tutorial: Deleting data","url":"/docs/27.0.0/tutorials/tutorial-delete-data#disable-segments-by-segment-ids","content":"Let's disable some segments by their segmentID. This will again mark the segments as "unused", but not remove them from deep storage. You can see the full segmentID for a segment using the web console. In the segments view, click one of the segment rows to open the segment metadata dialog: The identifier field in the metadata dialog shows the full segment ID. For example, the hour 23 segment has segment ID deletion-tutorial_2015-09-12T23:00:00.000Z_2015-09-13T00:00:00.000Z_2023-05-16T00:04:12.091Z. Disable the last two segments, hour 22 and 23 segments, by sending a POST request to the Coordinator with the corresponding segment IDs. 
The following command queries the Coordinator for segment IDs and uses jq to parse and extract the IDs of the last two segments. The segment IDs are stored in an environment variable named unusedSegmentIds. unusedSegmentIds=$(curl -X 'GET' -H 'Content-Type:application/json' http://localhost:8081/druid/coordinator/v1/datasources/deletion-tutorial/segments | jq '.[-2:]') The following request marks the segments unused: curl -X 'POST' -H 'Content-Type:application/json' -d "{\\"segmentIds\\": $unusedSegmentIds}" http://localhost:8081/druid/coordinator/v1/datasources/deletion-tutorial/markUnused When the request completes, the Segments view of the web console no longer displays the segments for hours 22 and 23. Note that the hour 22 and 23 segments are still in deep storage: $ ls -l1 var/druid/segments/deletion-tutorial/ 2015-09-12T00:00:00.000Z_2015-09-12T01:00:00.000Z 2015-09-12T01:00:00.000Z_2015-09-12T02:00:00.000Z 2015-09-12T02:00:00.000Z_2015-09-12T03:00:00.000Z 2015-09-12T03:00:00.000Z_2015-09-12T04:00:00.000Z 2015-09-12T04:00:00.000Z_2015-09-12T05:00:00.000Z 2015-09-12T05:00:00.000Z_2015-09-12T06:00:00.000Z 2015-09-12T06:00:00.000Z_2015-09-12T07:00:00.000Z 2015-09-12T07:00:00.000Z_2015-09-12T08:00:00.000Z 2015-09-12T08:00:00.000Z_2015-09-12T09:00:00.000Z 2015-09-12T09:00:00.000Z_2015-09-12T10:00:00.000Z 2015-09-12T10:00:00.000Z_2015-09-12T11:00:00.000Z 2015-09-12T11:00:00.000Z_2015-09-12T12:00:00.000Z 2015-09-12T12:00:00.000Z_2015-09-12T13:00:00.000Z 2015-09-12T13:00:00.000Z_2015-09-12T14:00:00.000Z 2015-09-12T14:00:00.000Z_2015-09-12T15:00:00.000Z 2015-09-12T15:00:00.000Z_2015-09-12T16:00:00.000Z 2015-09-12T16:00:00.000Z_2015-09-12T17:00:00.000Z 2015-09-12T17:00:00.000Z_2015-09-12T18:00:00.000Z 2015-09-12T18:00:00.000Z_2015-09-12T19:00:00.000Z 2015-09-12T19:00:00.000Z_2015-09-12T20:00:00.000Z 2015-09-12T20:00:00.000Z_2015-09-12T21:00:00.000Z 2015-09-12T21:00:00.000Z_2015-09-12T22:00:00.000Z 2015-09-12T22:00:00.000Z_2015-09-12T23:00:00.000Z 2015-09-12T23:00:00.000Z_2015-09-13T00:00:00.000Z "},{"title":"Run a kill task","type":1,"pageTitle":"Tutorial: Deleting data","url":"/docs/27.0.0/tutorials/tutorial-delete-data#run-a-kill-task","content":"Now that we have disabled some segments, we can submit a Kill Task, which will delete the disabled segments from metadata and deep storage. A Kill Task spec has been provided at quickstart/tutorial/deletion-kill.json. Submit this task to the Overlord with the following command: curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/tutorial/deletion-kill.json http://localhost:8081/druid/indexer/v1/task When the task finishes, note that Druid deleted the disabled segments from deep storage. 
$ ls -l1 var/druid/segments/deletion-tutorial/ 2015-09-12T00:00:00.000Z_2015-09-12T01:00:00.000Z 2015-09-12T01:00:00.000Z_2015-09-12T02:00:00.000Z 2015-09-12T02:00:00.000Z_2015-09-12T03:00:00.000Z 2015-09-12T03:00:00.000Z_2015-09-12T04:00:00.000Z 2015-09-12T04:00:00.000Z_2015-09-12T05:00:00.000Z 2015-09-12T05:00:00.000Z_2015-09-12T06:00:00.000Z 2015-09-12T06:00:00.000Z_2015-09-12T07:00:00.000Z 2015-09-12T07:00:00.000Z_2015-09-12T08:00:00.000Z 2015-09-12T08:00:00.000Z_2015-09-12T09:00:00.000Z 2015-09-12T09:00:00.000Z_2015-09-12T10:00:00.000Z 2015-09-12T10:00:00.000Z_2015-09-12T11:00:00.000Z 2015-09-12T11:00:00.000Z_2015-09-12T12:00:00.000Z 2015-09-12T12:00:00.000Z_2015-09-12T13:00:00.000Z 2015-09-12T13:00:00.000Z_2015-09-12T14:00:00.000Z 2015-09-12T14:00:00.000Z_2015-09-12T15:00:00.000Z 2015-09-12T15:00:00.000Z_2015-09-12T16:00:00.000Z 2015-09-12T16:00:00.000Z_2015-09-12T17:00:00.000Z 2015-09-12T17:00:00.000Z_2015-09-12T18:00:00.000Z 2015-09-12T20:00:00.000Z_2015-09-12T21:00:00.000Z 2015-09-12T21:00:00.000Z_2015-09-12T22:00:00.000Z "},{"title":"SQL query translation","type":0,"sectionRef":"#","url":"/docs/27.0.0/querying/sql-translation","content":"","keywords":""},{"title":"Best practices","type":1,"pageTitle":"SQL query translation","url":"/docs/27.0.0/querying/sql-translation#best-practices","content":"Consider the following non-exhaustive list of best practices when looking into performance implications of translating Druid SQL queries to native queries. If you wrote a filter on the primary time column __time, make sure it is being correctly translated to an"intervals" filter, as described in the Time filters section below. If not, you may need to change the way you write the filter. Try to avoid subqueries underneath joins: they affect both performance and scalability. This includes implicit subqueries generated by conditions on mismatched types, and implicit subqueries generated by conditions that use expressions to refer to the right-hand side. Currently, Druid does not support pushing down predicates (condition and filter) past a Join (i.e. into Join's children). Druid only supports pushing predicates into the join if they originated from above the join. Hence, the location of predicates and filters in your Druid SQL is very important. Also, as a result of this, comma joins should be avoided. Read through the Query execution page to understand how various types of native queries will be executed. Be careful when interpreting EXPLAIN PLAN output, and use request logging if in doubt. Request logs will show the exact native query that was run. See the next section for more details. If you encounter a query that could be planned better, feel free toraise an issue on GitHub. A reproducible test case is always appreciated. "},{"title":"Interpreting EXPLAIN PLAN output","type":1,"pageTitle":"SQL query translation","url":"/docs/27.0.0/querying/sql-translation#interpreting-explain-plan-output","content":"The EXPLAIN PLAN functionality can help you understand how a given SQL query will be translated to native. 
EXPLAIN PLAN statements return: a PLAN column that contains a JSON array of native queries that Druid will runa RESOURCES column that describes the resources used in the queryan ATTRIBUTES column that describes the attributes of the query, including: statementType: the SQL statement typetargetDataSource: the target datasource in an INSERT or REPLACE statementpartitionedBy: the time-based partitioning granularity in an INSERT or REPLACE statementclusteredBy: the clustering columns in an INSERT or REPLACE statementreplaceTimeChunks: the time chunks in a REPLACE statement Example 1: EXPLAIN PLAN for a SELECT query on the wikipedia datasource: Show the query EXPLAIN PLAN FOR SELECT channel, COUNT(*) FROM wikipedia WHERE channel IN (SELECT page FROM wikipedia GROUP BY page ORDER BY COUNT(*) DESC LIMIT 10) GROUP BY channel The above EXPLAIN PLAN query returns the following result: Show the result [ [ { "query": { "queryType": "topN", "dataSource": { "type": "join", "left": { "type": "table", "name": "wikipedia" }, "right": { "type": "query", "query": { "queryType": "groupBy", "dataSource": { "type": "table", "name": "wikipedia" }, "intervals": { "type": "intervals", "intervals": [ "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z" ] }, "granularity": { "type": "all" }, "dimensions": [ { "type": "default", "dimension": "page", "outputName": "d0", "outputType": "STRING" } ], "aggregations": [ { "type": "count", "name": "a0" } ], "limitSpec": { "type": "default", "columns": [ { "dimension": "a0", "direction": "descending", "dimensionOrder": { "type": "numeric" } } ], "limit": 10 }, "context": { "sqlOuterLimit": 101, "sqlQueryId": "ee616a36-c30c-4eae-af00-245127956e42", "useApproximateCountDistinct": false, "useApproximateTopN": false } } }, "rightPrefix": "j0.", "condition": "(\\"channel\\" == \\"j0.d0\\")", "joinType": "INNER" }, "dimension": { "type": "default", "dimension": "channel", "outputName": "d0", "outputType": "STRING" }, "metric": { "type": "dimension", "ordering": { "type": "lexicographic" } }, "threshold": 101, "intervals": { "type": "intervals", "intervals": [ "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z" ] }, "granularity": { "type": "all" }, "aggregations": [ { "type": "count", "name": "a0" } ], "context": { "sqlOuterLimit": 101, "sqlQueryId": "ee616a36-c30c-4eae-af00-245127956e42", "useApproximateCountDistinct": false, "useApproximateTopN": false } }, "signature": [ { "name": "d0", "type": "STRING" }, { "name": "a0", "type": "LONG" } ], "columnMappings": [ { "queryColumn": "d0", "outputColumn": "channel" }, { "queryColumn": "a0", "outputColumn": "EXPR$1" } ] } ], [ { "name": "wikipedia", "type": "DATASOURCE" } ], { "statementType": "SELECT" } ] Example 2: EXPLAIN PLAN for an INSERT query that inserts data into the wikipedia datasource: Show the query EXPLAIN PLAN FOR INSERT INTO wikipedia2 SELECT TIME_PARSE("timestamp") AS __time, namespace, cityName, countryName, regionIsoCode, metroCode, countryIsoCode, regionName FROM TABLE( EXTERN( '{"type":"http","uris":["https://druid.apache.org/data/wikipedia.json.gz"]}', '{"type":"json"}', '[{"name":"timestamp","type":"string"},{"name":"namespace","type":"string"},{"name":"cityName","type":"string"},{"name":"countryName","type":"string"},{"name":"regionIsoCode","type":"string"},{"name":"metroCode","type":"long"},{"name":"countryIsoCode","type":"string"},{"name":"regionName","type":"string"}]' ) ) PARTITIONED BY ALL The above EXPLAIN PLAN returns the following result: Show the result [ [ { "query": { 
"queryType": "scan", "dataSource": { "type": "external", "inputSource": { "type": "http", "uris": [ "https://druid.apache.org/data/wikipedia.json.gz" ] }, "inputFormat": { "type": "json", "keepNullColumns": false, "assumeNewlineDelimited": false, "useJsonNodeReader": false }, "signature": [ { "name": "timestamp", "type": "STRING" }, { "name": "namespace", "type": "STRING" }, { "name": "cityName", "type": "STRING" }, { "name": "countryName", "type": "STRING" }, { "name": "regionIsoCode", "type": "STRING" }, { "name": "metroCode", "type": "LONG" }, { "name": "countryIsoCode", "type": "STRING" }, { "name": "regionName", "type": "STRING" } ] }, "intervals": { "type": "intervals", "intervals": [ "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z" ] }, "virtualColumns": [ { "type": "expression", "name": "v0", "expression": "timestamp_parse(\\"timestamp\\",null,'UTC')", "outputType": "LONG" } ], "resultFormat": "compactedList", "columns": [ "cityName", "countryIsoCode", "countryName", "metroCode", "namespace", "regionIsoCode", "regionName", "v0" ], "legacy": false, "context": { "finalizeAggregations": false, "forceExpressionVirtualColumns": true, "groupByEnableMultiValueUnnesting": false, "maxNumTasks": 5, "multiStageQuery": true, "queryId": "42e3de2b-daaf-40f9-a0e7-2c6184529ea3", "scanSignature": "[{\\"name\\":\\"cityName\\",\\"type\\":\\"STRING\\"},{\\"name\\":\\"countryIsoCode\\",\\"type\\":\\"STRING\\"},{\\"name\\":\\"countryName\\",\\"type\\":\\"STRING\\"},{\\"name\\":\\"metroCode\\",\\"type\\":\\"LONG\\"},{\\"name\\":\\"namespace\\",\\"type\\":\\"STRING\\"},{\\"name\\":\\"regionIsoCode\\",\\"type\\":\\"STRING\\"},{\\"name\\":\\"regionName\\",\\"type\\":\\"STRING\\"},{\\"name\\":\\"v0\\",\\"type\\":\\"LONG\\"}]", "sqlInsertSegmentGranularity": "{\\"type\\":\\"all\\"}", "sqlQueryId": "42e3de2b-daaf-40f9-a0e7-2c6184529ea3", "useNativeQueryExplain": true }, "granularity": { "type": "all" } }, "signature": [ { "name": "v0", "type": "LONG" }, { "name": "namespace", "type": "STRING" }, { "name": "cityName", "type": "STRING" }, { "name": "countryName", "type": "STRING" }, { "name": "regionIsoCode", "type": "STRING" }, { "name": "metroCode", "type": "LONG" }, { "name": "countryIsoCode", "type": "STRING" }, { "name": "regionName", "type": "STRING" } ], "columnMappings": [ { "queryColumn": "v0", "outputColumn": "__time" }, { "queryColumn": "namespace", "outputColumn": "namespace" }, { "queryColumn": "cityName", "outputColumn": "cityName" }, { "queryColumn": "countryName", "outputColumn": "countryName" }, { "queryColumn": "regionIsoCode", "outputColumn": "regionIsoCode" }, { "queryColumn": "metroCode", "outputColumn": "metroCode" }, { "queryColumn": "countryIsoCode", "outputColumn": "countryIsoCode" }, { "queryColumn": "regionName", "outputColumn": "regionName" } ] } ], [ { "name": "EXTERNAL", "type": "EXTERNAL" }, { "name": "wikipedia", "type": "DATASOURCE" } ], { "statementType": "INSERT", "targetDataSource": "wikipedia", "partitionedBy": { "type": "all" } } ] Example 3: EXPLAIN PLAN for a REPLACE query that replaces all the data in the wikipedia datasource with a DAYtime partitioning, and cityName and countryName as the clustering columns: Show the query EXPLAIN PLAN FOR REPLACE INTO wikipedia OVERWRITE ALL SELECT TIME_PARSE("timestamp") AS __time, namespace, cityName, countryName, regionIsoCode, metroCode, countryIsoCode, regionName FROM TABLE( EXTERN( '{"type":"http","uris":["https://druid.apache.org/data/wikipedia.json.gz"]}', '{"type":"json"}', 
'[{"name":"timestamp","type":"string"},{"name":"namespace","type":"string"},{"name":"cityName","type":"string"},{"name":"countryName","type":"string"},{"name":"regionIsoCode","type":"string"},{"name":"metroCode","type":"long"},{"name":"countryIsoCode","type":"string"},{"name":"regionName","type":"string"}]' ) ) PARTITIONED BY DAY CLUSTERED BY cityName, countryName The above EXPLAIN PLAN query returns the following result: Show the result [ [ { "query": { "queryType": "scan", "dataSource": { "type": "external", "inputSource": { "type": "http", "uris": [ "https://druid.apache.org/data/wikipedia.json.gz" ] }, "inputFormat": { "type": "json", "keepNullColumns": false, "assumeNewlineDelimited": false, "useJsonNodeReader": false }, "signature": [ { "name": "timestamp", "type": "STRING" }, { "name": "namespace", "type": "STRING" }, { "name": "cityName", "type": "STRING" }, { "name": "countryName", "type": "STRING" }, { "name": "regionIsoCode", "type": "STRING" }, { "name": "metroCode", "type": "LONG" }, { "name": "countryIsoCode", "type": "STRING" }, { "name": "regionName", "type": "STRING" } ] }, "intervals": { "type": "intervals", "intervals": [ "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z" ] }, "virtualColumns": [ { "type": "expression", "name": "v0", "expression": "timestamp_parse(\\"timestamp\\",null,'UTC')", "outputType": "LONG" } ], "resultFormat": "compactedList", "columns": [ "cityName", "countryIsoCode", "countryName", "metroCode", "namespace", "regionIsoCode", "regionName", "v0" ], "legacy": false, "context": { "finalizeAggregations": false, "groupByEnableMultiValueUnnesting": false, "maxNumTasks": 5, "queryId": "d88e0823-76d4-40d9-a1a7-695c8577b79f", "scanSignature": "[{\\"name\\":\\"cityName\\",\\"type\\":\\"STRING\\"},{\\"name\\":\\"countryIsoCode\\",\\"type\\":\\"STRING\\"},{\\"name\\":\\"countryName\\",\\"type\\":\\"STRING\\"},{\\"name\\":\\"metroCode\\",\\"type\\":\\"LONG\\"},{\\"name\\":\\"namespace\\",\\"type\\":\\"STRING\\"},{\\"name\\":\\"regionIsoCode\\",\\"type\\":\\"STRING\\"},{\\"name\\":\\"regionName\\",\\"type\\":\\"STRING\\"},{\\"name\\":\\"v0\\",\\"type\\":\\"LONG\\"}]", "sqlInsertSegmentGranularity": "\\"DAY\\"", "sqlQueryId": "d88e0823-76d4-40d9-a1a7-695c8577b79f", "sqlReplaceTimeChunks": "all" }, "granularity": { "type": "all" } }, "signature": [ { "name": "v0", "type": "LONG" }, { "name": "namespace", "type": "STRING" }, { "name": "cityName", "type": "STRING" }, { "name": "countryName", "type": "STRING" }, { "name": "regionIsoCode", "type": "STRING" }, { "name": "metroCode", "type": "LONG" }, { "name": "countryIsoCode", "type": "STRING" }, { "name": "regionName", "type": "STRING" } ], "columnMappings": [ { "queryColumn": "v0", "outputColumn": "__time" }, { "queryColumn": "namespace", "outputColumn": "namespace" }, { "queryColumn": "cityName", "outputColumn": "cityName" }, { "queryColumn": "countryName", "outputColumn": "countryName" }, { "queryColumn": "regionIsoCode", "outputColumn": "regionIsoCode" }, { "queryColumn": "metroCode", "outputColumn": "metroCode" }, { "queryColumn": "countryIsoCode", "outputColumn": "countryIsoCode" }, { "queryColumn": "regionName", "outputColumn": "regionName" } ] } ], [ { "name": "EXTERNAL", "type": "EXTERNAL" }, { "name": "wikipedia", "type": "DATASOURCE" } ], { "statementType": "REPLACE", "targetDataSource": "wikipedia", "partitionedBy": "DAY", "clusteredBy": ["cityName","countryName"], "replaceTimeChunks": "all" } ] In this case the JOIN operator gets translated to a join datasource. 
See the Join translation section for more details about how this works. We can see this for ourselves using Druid's request logging feature. After enabling logging and running this query, we can see that it actually runs as the following native query. { "queryType": "groupBy", "dataSource": { "type": "join", "left": "wikipedia", "right": { "type": "query", "query": { "queryType": "topN", "dataSource": "wikipedia", "dimension": {"type": "default", "dimension": "page", "outputName": "d0"}, "metric": {"type": "numeric", "metric": "a0"}, "threshold": 10, "intervals": "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z", "granularity": "all", "aggregations": [ { "type": "count", "name": "a0"} ] } }, "rightPrefix": "j0.", "condition": "(\\"page\\" == \\"j0.d0\\")", "joinType": "INNER" }, "intervals": "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z", "granularity": "all", "dimensions": [ {"type": "default", "dimension": "channel", "outputName": "d0"} ], "aggregations": [ { "type": "count", "name": "a0"} ] } "},{"title":"Query types","type":1,"pageTitle":"SQL query translation","url":"/docs/27.0.0/querying/sql-translation#query-types","content":"Druid SQL uses four different native query types. Scan is used for queries that do not aggregate—no GROUP BY, no DISTINCT. Timeseries is used for queries that GROUP BY FLOOR(__time TO unit) or TIME_FLOOR(__time, period), have no other grouping expressions, no HAVING clause, no nesting, and either no ORDER BY, or an ORDER BY that orders by same expression as present in GROUP BY. It also uses Timeseries for "grand total" queries that have aggregation functions but no GROUP BY. This query type takes advantage of the fact that Druid segments are sorted by time. TopN is used by default for queries that group by a single expression, do have ORDER BY and LIMIT clauses, do not have HAVING clauses, and are not nested. However, the TopN query type will deliver approximate ranking and results in some cases; if you want to avoid this, set "useApproximateTopN" to "false". TopN results are always computed in memory. See the TopN documentation for more details. GroupBy is used for all other aggregations, including any nested aggregation queries. Druid's GroupBy is a traditional aggregation engine: it delivers exact results and rankings and supports a wide variety of features. GroupBy aggregates in memory if it can, but it may spill to disk if it doesn't have enough memory to complete your query. Results are streamed back from data processes through the Broker if you ORDER BY the same expressions in your GROUP BY clause, or if you don't have an ORDER BY at all. If your query has an ORDER BY referencing expressions that don't appear in the GROUP BY clause (like aggregation functions) then the Broker will materialize a list of results in memory, up to a max of your LIMIT, if any. See the GroupBy documentation for details about tuning performance and memory use. "},{"title":"Time filters","type":1,"pageTitle":"SQL query translation","url":"/docs/27.0.0/querying/sql-translation#time-filters","content":"For all native query types, filters on the __time column will be translated into top-level query "intervals" whenever possible, which allows Druid to use its global time index to quickly prune the set of data that must be scanned. 
Consider this (non-exhaustive) list of time filters that will be recognized and translated to "intervals": __time >= TIMESTAMP '2000-01-01 00:00:00' (comparison to absolute time)__time >= CURRENT_TIMESTAMP - INTERVAL '8' HOUR (comparison to relative time)FLOOR(__time TO DAY) = TIMESTAMP '2000-01-01 00:00:00' (specific day) Refer to the Interpreting EXPLAIN PLAN output section for details on confirming that time filters are being translated as you expect. "},{"title":"Joins","type":1,"pageTitle":"SQL query translation","url":"/docs/27.0.0/querying/sql-translation#joins","content":"SQL join operators are translated to native join datasources as follows: Joins that the native layer can handle directly are translated literally, to a join datasourcewhose left, right, and condition are faithful translations of the original SQL. This includes any SQL join where the right-hand side is a lookup or subquery, and where the condition is an equality where one side is an expression based on the left-hand table, the other side is a simple column reference to the right-hand table, and both sides of the equality are the same data type. If a join cannot be handled directly by a native join datasource as written, Druid SQL will insert subqueries to make it runnable. For example, foo INNER JOIN bar ON foo.abc = LOWER(bar.def) cannot be directly translated, because there is an expression on the right-hand side instead of a simple column access. A subquery will be inserted that effectively transforms this clause tofoo INNER JOIN (SELECT LOWER(def) AS def FROM bar) t ON foo.abc = t.def. Druid SQL does not currently reorder joins to optimize queries. Refer to the Interpreting EXPLAIN PLAN output section for details on confirming that joins are being translated as you expect. Refer to the Query execution page for information about how joins are executed. "},{"title":"Subqueries","type":1,"pageTitle":"SQL query translation","url":"/docs/27.0.0/querying/sql-translation#subqueries","content":"Subqueries in SQL are generally translated to native query datasources. Refer to theQuery execution page for information about how subqueries are executed. info Note: Subqueries in the WHERE clause, like WHERE col1 IN (SELECT foo FROM ...) are translated to inner joins. "},{"title":"Approximations","type":1,"pageTitle":"SQL query translation","url":"/docs/27.0.0/querying/sql-translation#approximations","content":"Druid SQL will use approximate algorithms in some situations: The COUNT(DISTINCT col) aggregation functions by default uses a variant ofHyperLogLog, a fast approximate distinct counting algorithm. Druid SQL will switch to exact distinct counts if you set "useApproximateCountDistinct" to "false", either through query context or through Broker configuration. GROUP BY queries over a single column with ORDER BY and LIMIT may be executed using the TopN engine, which uses an approximate algorithm. Druid SQL will switch to an exact grouping algorithm if you set "useApproximateTopN" to "false", either through query context or through Broker configuration. Aggregation functions that are labeled as using sketches or approximations, such as APPROX_COUNT_DISTINCT, are always approximate, regardless of configuration. A known issue with approximate functions based on data sketches The APPROX_QUANTILE_DS and DS_QUANTILES_SKETCH functions can fail with an IllegalStateException if one of the sketches for the query hits maxStreamLength: the maximum number of items to store in each sketch. See GitHub issue 11544 for more details. 
To work around the issue, increase the value of the maximum stream length with the approxQuantileDsMaxStreamLength parameter in the query context. Since it is set to 1,000,000,000 by default, you don't need to override it in most cases. See accuracy information in the DataSketches documentation for how many bytes are required per stream length. This query context parameter is a temporary solution to avoid the known issue. It may be removed in a future release after the bug is fixed. "},{"title":"Unsupported features","type":1,"pageTitle":"SQL query translation","url":"/docs/27.0.0/querying/sql-translation#unsupported-features","content":"Druid does not support all SQL features. In particular, the following features are not supported. JOIN between native datasources (table, lookup, subquery) and system tables.JOIN conditions that are not an equality between expressions from the left- and right-hand sides.JOIN conditions containing a constant value inside the condition.JOIN conditions on a column which contains a multi-value dimension.OVER clauses, and analytic functions such as LAG and LEAD.ORDER BY for a non-aggregating query, except for ORDER BY __time or ORDER BY __time DESC, which are supported. This restriction only applies to non-aggregating queries; you can ORDER BY any column in an aggregating query.DDL and DML.Using Druid-specific functions like TIME_PARSE and APPROX_QUANTILE_DS on system tables. Additionally, some Druid native query features are not supported by the SQL language. Some unsupported Druid features include: Inline datasources.Spatial filters.Multi-value dimensions are only partially implemented in Druid SQL. There are known inconsistencies between their behavior in SQL queries and in native queries due to how they are currently treated by the SQL planner. "},{"title":"Use the JDBC driver to query Druid","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-jdbc","content":"Use the JDBC driver to query Druid Redirecting you to the JDBC driver API... Click here if you are not redirected.","keywords":""},{"title":"Docker for Jupyter Notebook tutorials","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-jupyter-docker","content":"","keywords":""},{"title":"Prerequisites","type":1,"pageTitle":"Docker for Jupyter Notebook tutorials","url":"/docs/27.0.0/tutorials/tutorial-jupyter-docker#prerequisites","content":"Jupyter in Docker requires that you have Docker and Docker Compose. We recommend installing these through Docker Desktop. For ARM-based devices, see Tutorial setup for ARM-based devices. "},{"title":"Launch the Docker containers","type":1,"pageTitle":"Docker for Jupyter Notebook tutorials","url":"/docs/27.0.0/tutorials/tutorial-jupyter-docker#launch-the-docker-containers","content":"You run Docker Compose to launch Jupyter and optionally Druid or Kafka. Docker Compose references the configuration in docker-compose.yaml. Running Druid in Docker also requires the environment file, which sets the configuration properties for the Druid services. To get started, download both docker-compose.yaml and environment from tutorial-jupyter-docker.zip. Alternatively, you can clone the Apache Druid repo and access the files in druid/examples/quickstart/jupyter-notebooks/docker-jupyter. 
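Stepping back to the SQL translation and approximation settings covered earlier on this page: all of them are ordinary query context parameters, so they can be passed in the body of a Druid SQL HTTP request. The following is a minimal sketch, not something prescribed by these docs, assuming a quickstart Router on localhost:8888, the `requests` package, and the wikipedia datasource used in the tutorials; the specific query and the raised stream-length value are illustrative only.

```python
# Minimal sketch: pass the context flags discussed above when submitting Druid SQL
# over HTTP. Assumes a quickstart Router on localhost:8888, the `requests` package,
# and a datasource named "wikipedia"; adjust names and values to your setup.
import requests

SQL_ENDPOINT = "http://localhost:8888/druid/v2/sql"

payload = {
    "query": """
        SELECT page, COUNT(DISTINCT "user") AS editors
        FROM wikipedia
        GROUP BY page
        ORDER BY editors DESC
        LIMIT 10
    """,
    "context": {
        # Force exact grouping instead of the approximate TopN engine.
        "useApproximateTopN": False,
        # Force exact distinct counts instead of HyperLogLog.
        "useApproximateCountDistinct": False,
        # Raise the per-sketch item limit for APPROX_QUANTILE_DS / DS_QUANTILES_SKETCH
        # queries that hit maxStreamLength (only needed above the 1,000,000,000 default).
        "approxQuantileDsMaxStreamLength": 2000000000,
    },
}

rows = requests.post(SQL_ENDPOINT, json=payload).json()
for row in rows:
    print(row)
```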
"},{"title":"Start only the Jupyter container","type":1,"pageTitle":"Docker for Jupyter Notebook tutorials","url":"/docs/27.0.0/tutorials/tutorial-jupyter-docker#start-only-the-jupyter-container","content":"If you already have Druid running locally or on another machine, you can run the Docker containers for Jupyter only. In the same directory as docker-compose.yaml, start the application: docker compose --profile jupyter up -d The Docker Compose file assigns 8889 for the Jupyter port. You can override the port number by setting the JUPYTER_PORT environment variable before starting the Docker application. If Druid is running local to the same machine as Jupyter, open the tutorial and set the host variable to host.docker.internal before starting. For example: host = "host.docker.internal" "},{"title":"Start Jupyter and Druid","type":1,"pageTitle":"Docker for Jupyter Notebook tutorials","url":"/docs/27.0.0/tutorials/tutorial-jupyter-docker#start-jupyter-and-druid","content":"Running Druid in Docker requires the environment file as well as an environment variable named DRUID_VERSION, which determines the version of Druid to use. The Druid version references the Docker tag to pull from theApache Druid Docker Hub. In the same directory as docker-compose.yaml and environment, start the application: DRUID_VERSION=27.0.0 docker compose --profile druid-jupyter up -d "},{"title":"Start Jupyter, Druid, and Kafka","type":1,"pageTitle":"Docker for Jupyter Notebook tutorials","url":"/docs/27.0.0/tutorials/tutorial-jupyter-docker#start-jupyter-druid-and-kafka","content":"Running Druid in Docker requires the environment file as well as the DRUID_VERSION environment variable. In the same directory as docker-compose.yaml and environment, start the application: DRUID_VERSION=27.0.0 docker compose --profile all-services up -d "},{"title":"Start Kafka and Jupyter","type":1,"pageTitle":"Docker for Jupyter Notebook tutorials","url":"/docs/27.0.0/tutorials/tutorial-jupyter-docker#start-kafka-and-jupyter","content":"If you already have Druid running externally, such as an existing cluster or a dedicated infrastructure for Druid, you can run the Docker containers for Kafka and Jupyter only. In the same directory as docker-compose.yaml and environment, start the application: DRUID_VERSION=27.0.0 docker compose --profile kafka-jupyter up -d If you have an external Druid instance running on a different machine than the one hosting the Docker Compose environment, change the host variable in the notebook tutorial to the hostname or address of the machine where Druid is running. If Druid is running local to the same machine as Jupyter, open the tutorial and set the host variable to host.docker.internal before starting. For example: host = "host.docker.internal" To enable Druid to ingest data from Kafka within the Docker Compose environment, update the bootstrap.servers property in the Kafka ingestion spec to localhost:9094 before ingesting. For reference, see more on consumer properties. 
"},{"title":"Update image from Docker Hub","type":1,"pageTitle":"Docker for Jupyter Notebook tutorials","url":"/docs/27.0.0/tutorials/tutorial-jupyter-docker#update-image-from-docker-hub","content":"If you already have a local cache of the Jupyter image, you can update the image before running the application using the following command: docker compose pull jupyter "},{"title":"Use locally built image","type":1,"pageTitle":"Docker for Jupyter Notebook tutorials","url":"/docs/27.0.0/tutorials/tutorial-jupyter-docker#use-locally-built-image","content":"The default Docker Compose file pulls the custom Jupyter Notebook image from a third party Docker Hub. If you prefer to build the image locally from the official source, do the following: Clone the Apache Druid repository.Navigate to examples/quickstart/jupyter-notebooks/docker-jupyter.Start the services using -f docker-compose-local.yaml in the docker compose command. For example: DRUID_VERSION=27.0.0 docker compose --profile all-services -f docker-compose-local.yaml up -d "},{"title":"Access Jupyter-based tutorials","type":1,"pageTitle":"Docker for Jupyter Notebook tutorials","url":"/docs/27.0.0/tutorials/tutorial-jupyter-docker#access-jupyter-based-tutorials","content":"The following steps show you how to access the Jupyter notebook tutorials from the Docker container. At startup, Docker creates and mounts a volume to persist data from the container to your local machine. This way you can save your work completed within the Docker container. Navigate to the notebooks at http://localhost:8889. info If you set JUPYTER_PORT to another port number, replace 8889 with the value of the Jupyter port. Select a tutorial. If you don't plan to save your changes, you can use the notebook directly as is. Otherwise, continue to the next step. Optional: To save a local copy of your tutorial work, select File > Save as... from the navigation menu. Then enter work/<notebook name>.ipynb. If the notebook still displays as read only, you may need to refresh the page in your browser. Access the saved files in the notebooks folder in your local working directory. "},{"title":"View the Druid web console","type":1,"pageTitle":"Docker for Jupyter Notebook tutorials","url":"/docs/27.0.0/tutorials/tutorial-jupyter-docker#view-the-druid-web-console","content":"To access the Druid web console in Docker, go to http://localhost:8888/unified-console.html. Use the web console to view datasources and ingestion tasks that you create in the tutorials. "},{"title":"Stop Docker containers","type":1,"pageTitle":"Docker for Jupyter Notebook tutorials","url":"/docs/27.0.0/tutorials/tutorial-jupyter-docker#stop-docker-containers","content":"Shut down the Docker application using the following command: docker compose down -v "},{"title":"Tutorial setup without using Docker","type":1,"pageTitle":"Docker for Jupyter Notebook tutorials","url":"/docs/27.0.0/tutorials/tutorial-jupyter-docker#tutorial-setup-without-using-docker","content":"To use the Jupyter Notebook-based tutorials without using Docker, do the following: Clone the Apache Druid repo, or download the tutorialsas well as the Python client for Druid. Install the prerequisite Python packages with the following commands: # Install requests pip install requests # Install JupyterLab pip install jupyterlab # Install Jupyter Notebook pip install notebook Individual notebooks may list additional packages you need to install to complete the tutorial. 
In your Druid source repo, install druidapi with the following commands: cd examples/quickstart/jupyter-notebooks/druidapi pip install . Start Jupyter, in the same directory as the tutorials, using either JupyterLab or Jupyter Notebook: # Start JupyterLab on port 3001 jupyter lab --port 3001 # Start Jupyter Notebook on port 3001 jupyter notebook --port 3001 Start Druid. You can use the Quickstart (local) instance. The tutorials assume that you are using the quickstart, so no authentication or authorization is expected unless explicitly mentioned. If you contribute to Druid, and work with Druid integration tests, you can use a test cluster. Assume you have an environment variable, DRUID_DEV, which identifies your Druid source repo. cd $DRUID_DEV ./it.sh build ./it.sh image ./it.sh up <category> Replace <category> with one of the available integration test categories. See the integration test README.md for details. You should now be able to access and complete the tutorials. "},{"title":"Tutorial setup for ARM-based devices","type":1,"pageTitle":"Docker for Jupyter Notebook tutorials","url":"/docs/27.0.0/tutorials/tutorial-jupyter-docker#tutorial-setup-for-arm-based-devices","content":"For ARM-based devices, follow this setup to start Druid externally, while keeping Kafka and Jupyter within the Docker Compose environment: Start Druid using the start-druid script. You can follow Quickstart (local) instructions. The tutorials assume that you are using the quickstart, so no authentication or authorization is expected unless explicitly mentioned. Start either Jupyter only or Jupyter and Kafka using the following commands in the same directory as docker-compose.yaml and environment: # Start only Jupyter docker compose --profile jupyter up -d # Start Kafka and Jupyter DRUID_VERSION=27.0.0 docker compose --profile kafka-jupyter up -d If Druid is running local to the same machine as Jupyter, open the tutorial and set the host variable to host.docker.internal before starting. For example: host = "host.docker.internal" If using Kafka to handle the data stream that will be ingested into Druid and Druid is running local to the same machine, update the consumer property bootstrap.servers to localhost:9094. "},{"title":"Learn more","type":1,"pageTitle":"Docker for Jupyter Notebook tutorials","url":"/docs/27.0.0/tutorials/tutorial-jupyter-docker#learn-more","content":"See the following topics for more information: Jupyter Notebook tutorials for the available Jupyter Notebook-based tutorials for DruidTutorial: Run with Docker for running Druid from a Docker container "},{"title":"Jupyter Notebook tutorials","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-jupyter-index","content":"","keywords":""},{"title":"Prerequisites","type":1,"pageTitle":"Jupyter Notebook tutorials","url":"/docs/27.0.0/tutorials/tutorial-jupyter-index#prerequisites","content":"The simplest way to get started is to use Docker. In this case, you only need to set up Docker Desktop. For more information, see Docker for Jupyter Notebook tutorials. Otherwise, you can install the prerequisites on your own. Here's what you need: An available Druid instance.Python 3.7 or laterJupyterLab (recommended) or Jupyter Notebook running on a non-default port. By default, Druid and Jupyter both try to use port 8888, so start Jupyter on a different port.The requests Python packageThe druidapi Python package For setup instructions, see Tutorial setup without using Docker. 
Individual tutorials may require additional Python packages, such as for visualization or streaming ingestion. "},{"title":"Python API for Druid","type":1,"pageTitle":"Jupyter Notebook tutorials","url":"/docs/27.0.0/tutorials/tutorial-jupyter-index#python-api-for-druid","content":"The druidapi Python package is a REST API for Druid. One of the notebooks shows how to use the Druid REST API. The others focus on other topics and use a simple set of Python wrappers around the underlying REST API. The wrappers reside in the druidapi package within the notebooks directory. While the package can be used in any Python program, the key purpose, at present, is to support these notebooks. SeeIntroduction to the Druid Python APIfor an overview of the Python API. The druidapi package is already installed in the custom Jupyter Docker container for Druid tutorials. "},{"title":"Tutorials","type":1,"pageTitle":"Jupyter Notebook tutorials","url":"/docs/27.0.0/tutorials/tutorial-jupyter-index#tutorials","content":"The notebooks are located in the apache/druid repo. You can either clone the repo or download the notebooks you want individually. The links that follow are the raw GitHub URLs, so you can use them to download the notebook directly, such as with wget, or manually through your web browser. Note that if you save the file from your web browser, make sure to remove the .txt extension. Introduction to the Druid REST API walks you through some of the basics related to the Druid REST API and several endpoints.Introduction to the Druid Python API walks you through some of the basics related to the Druid API using the Python wrapper API.Learn the basics of Druid SQL introduces you to the unique aspects of Druid SQL with the primary focus on the SELECT statement.Ingest and query data from Apache Kafka walks you through ingesting an event stream from Kafka. "},{"title":"Load streaming data from Apache Kafka","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-kafka","content":"","keywords":""},{"title":"Prerequisites","type":1,"pageTitle":"Load streaming data from Apache Kafka","url":"/docs/27.0.0/tutorials/tutorial-kafka#prerequisites","content":"Before you follow the steps in this tutorial, download Druid as described in the quickstart using the automatic single-machine configuration and have it running on your local machine. You don't need to have loaded any data. "},{"title":"Download and start Kafka","type":1,"pageTitle":"Load streaming data from Apache Kafka","url":"/docs/27.0.0/tutorials/tutorial-kafka#download-and-start-kafka","content":"Apache Kafka is a high-throughput message bus that works well with Druid. For this tutorial, use Kafka 2.7.0. To download Kafka, run the following commands in your terminal: curl -O https://archive.apache.org/dist/kafka/2.7.0/kafka_2.13-2.7.0.tgz tar -xzf kafka_2.13-2.7.0.tgz cd kafka_2.13-2.7.0 If you're already running Kafka on the machine you're using for this tutorial, delete or rename the kafka-logs directory in /tmp. info Druid and Kafka both rely on Apache ZooKeeper to coordinate and manage services. Because Druid is already running, Kafka attaches to the Druid ZooKeeper instance when it starts up. In a production environment where you're running Druid and Kafka on different machines, start the Kafka ZooKeeper before you start the Kafka broker. 
In the Kafka root directory, run this command to start a Kafka broker: ./bin/kafka-server-start.sh config/server.properties In a new terminal window, navigate to the Kafka root directory and run the following command to create a Kafka topic called kttm: ./bin/kafka-topics.sh --create --topic kttm --bootstrap-server localhost:9092 Kafka returns a message when it successfully adds the topic: Created topic kttm. "},{"title":"Load data into Kafka","type":1,"pageTitle":"Load streaming data from Apache Kafka","url":"/docs/27.0.0/tutorials/tutorial-kafka#load-data-into-kafka","content":"In this section, you download sample data to the tutorial's directory and send the data to your Kafka topic. In your Kafka root directory, create a directory for the sample data: mkdir sample-data Download the sample data to your new directory and extract it: cd sample-data curl -O https://static.imply.io/example-data/kttm-nested-v2/kttm-nested-v2-2019-08-25.json.gz In your Kafka root directory, run the following commands to post sample events to the kttm Kafka topic: export KAFKA_OPTS="-Dfile.encoding=UTF-8" gzcat ./sample-data/kttm-nested-v2-2019-08-25.json.gz | ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic kttm "},{"title":"Load data into Druid","type":1,"pageTitle":"Load streaming data from Apache Kafka","url":"/docs/27.0.0/tutorials/tutorial-kafka#load-data-into-druid","content":"Now that you have data in your Kafka topic, you can use Druid's Kafka indexing service to ingest the data into Druid. To do this, you can use the Druid console data loader or you can submit a supervisor spec. Follow the steps below to try each method. "},{"title":"Load data with the console data loader","type":1,"pageTitle":"Load streaming data from Apache Kafka","url":"/docs/27.0.0/tutorials/tutorial-kafka#load-data-with-the-console-data-loader","content":"The Druid console data loader presents you with several screens to configure each section of the supervisor spec, then creates an ingestion task to ingest the Kafka data. To use the console data loader: Navigate to localhost:8888 and click Load data > Streaming. Click Apache Kafka and then Connect data. Enter localhost:9092 as the bootstrap server and kttm as the topic, then click Apply and make sure you see data similar to the following: Click Next: Parse data. The data loader automatically tries to determine the correct parser for the data. For the sample data, it selects input format json. You can play around with the different options to get a preview of how Druid parses your data. With the json input format selected, click Next: Parse time. You may need to click Apply first. Druid's architecture requires that you specify a primary timestamp column. Druid stores the timestamp in the __time column in your Druid datasource. In a production environment, if you don't have a timestamp in your data, you can select Parse timestamp from: None to use a placeholder value. For the sample data, the data loader selects the timestamp column in the raw data as the primary time column. Click Next: ... three times to go past the Transform and Filter steps to Configure schema. You don't need to enter anything in these two steps because applying transforms and filters is out of scope for this tutorial. In the Configure schema step, you can select data types for the columns and configure dimensions and metrics to ingest into Druid. The console does most of this for you, but you need to create JSON-type dimensions for the three nested columns in the data. 
Click Add dimension and enter the following information. You can only add one dimension at a time. Name: event, Type: jsonName: agent, Type: jsonName: geo_ip, Type: json After you create the dimensions, you can scroll to the right in the preview window to see the nested columns: Click Next: Partition to configure how Druid partitions the data into segments. Select day as the Segment granularity. Since this is a small dataset, you don't need to make any further adjustments. Click Next: Tune to fine tune how Druid ingests data. In Input tuning, set Use earliest offset to True—this is very important because you want to consume the data from the start of the stream. There are no other changes to make here, so click Next: Publish. Name the datasource kttm-kafka and click Next: Edit spec to review your spec. The console presents the spec you've constructed. You can click the buttons above the spec to make changes in previous steps and see how the changes update the spec. You can also edit the spec directly and see it reflected in the previous steps. Click Submit to create an ingestion task. Druid displays the task view with the focus on the newly created supervisor. The task view auto-refreshes, so wait until the supervisor launches a task. The status changes from Pending to Running as Druid starts to ingest data. Navigate to the Datasources view from the header. When the kttm-kafka datasource appears here, you can query it. See Query your data for details. info If the datasource doesn't appear after a minute you might not have set the supervisor to read data from the start of the stream—the Use earliest offset setting in the Tune step. Go to the Ingestion page and terminate the supervisor using the Actions(...) menu. Load the sample data again and apply the correct setting when you get to the Tune step. "},{"title":"Submit a supervisor spec","type":1,"pageTitle":"Load streaming data from Apache Kafka","url":"/docs/27.0.0/tutorials/tutorial-kafka#submit-a-supervisor-spec","content":"As an alternative to using the data loader, you can submit a supervisor spec to Druid. You can do this in the console or using the Druid API. Use the console To submit a supervisor spec using the Druid console: Click Ingestion in the console, then click the ellipses next to the refresh button and select Submit JSON supervisor. Paste this spec into the JSON window and click Submit. { "type": "kafka", "spec": { "ioConfig": { "type": "kafka", "consumerProperties": { "bootstrap.servers": "localhost:9092" }, "topic": "kttm", "inputFormat": { "type": "json" }, "useEarliestOffset": true }, "tuningConfig": { "type": "kafka" }, "dataSchema": { "dataSource": "kttm-kafka-supervisor-console", "timestampSpec": { "column": "timestamp", "format": "iso" }, "dimensionsSpec": { "dimensions": [ "session", "number", "client_ip", "language", "adblock_list", "app_version", "path", "loaded_image", "referrer", "referrer_host", "server_ip", "screen", "window", { "type": "long", "name": "session_length" }, "timezone", "timezone_offset", { "type": "json", "name": "event" }, { "type": "json", "name": "agent" }, { "type": "json", "name": "geo_ip" } ] }, "granularitySpec": { "queryGranularity": "none", "rollup": false, "segmentGranularity": "day" } } } } This starts the supervisor—the supervisor spawns tasks that start listening for incoming data. Click Tasks on the console home page to monitor the status of the job. This spec writes the data in the kttm topic to a datasource named kttm-kafka-supervisor-console. 
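Whether you submit the spec from the console or through the API described next, you can also check on the supervisor from code instead of the web console. The following is a hedged sketch, assuming the Overlord API at localhost:8081 as used elsewhere in this tutorial and that the supervisor ID matches the datasource name above; the exact response layout can vary by version.

```python
# Sketch: poll the Kafka supervisor from Python instead of the task view.
# Assumes the Overlord API at localhost:8081 (as in the curl example in the
# next section) and a supervisor ID equal to the datasource name used above.
import requests

OVERLORD = "http://localhost:8081"
supervisor_id = "kttm-kafka-supervisor-console"

# List all supervisors, then fetch the status of the one created above.
print(requests.get(f"{OVERLORD}/druid/indexer/v1/supervisor").json())

status = requests.get(
    f"{OVERLORD}/druid/indexer/v1/supervisor/{supervisor_id}/status"
).json()

# The state is nested under "payload" in current versions; e.g. RUNNING once
# the supervisor's tasks are reading from Kafka.
print(status["payload"]["state"])
```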
Use the API You can also use the Druid API to submit a supervisor spec. Run the following command to download the sample spec: curl -O https://druid.apache.org/docs/latest/assets/files/kttm-kafka-supervisor.json Run the following command to submit the spec in the kttm-kafka-supervisor.json file: curl -XPOST -H 'Content-Type: application/json' -d @kttm-kafka-supervisor.json http://localhost:8081/druid/indexer/v1/supervisor After Druid successfully creates the supervisor, you get a response containing the supervisor ID: {"id":"kttm-kafka-supervisor-api"}. Click Tasks on the console home page to monitor the status of the job. This spec writes the data in the kttm topic to a datasource named kttm-kafka-supervisor-api. "},{"title":"Query your data","type":1,"pageTitle":"Load streaming data from Apache Kafka","url":"/docs/27.0.0/tutorials/tutorial-kafka#query-your-data","content":"After data is sent to the Kafka stream, it is immediately available for querying in Druid. Click Query in the Druid console to run SQL queries against the datasource. Since this tutorial ingests a small dataset, you can run the query SELECT * FROM "kttm-kafka" to return all of the data in the dataset you created. Check out the Querying data tutorial to run some example queries on the newly loaded data. "},{"title":"Further reading","type":1,"pageTitle":"Load streaming data from Apache Kafka","url":"/docs/27.0.0/tutorials/tutorial-kafka#further-reading","content":"For more information, see the following topics: Apache Kafka ingestion for more information on loading data from Kafka streams.Apache Kafka supervisor reference for Kafka supervisor configuration information.Apache Kafka supervisor operations reference for information on running and maintaining Kafka supervisors for Druid. "},{"title":"Configure Apache Druid to use Kerberized Apache Hadoop as deep storage","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-kerberos-hadoop","content":"","keywords":""},{"title":"Hadoop Setup","type":1,"pageTitle":"Configure Apache Druid to use Kerberized Apache Hadoop as deep storage","url":"/docs/27.0.0/tutorials/tutorial-kerberos-hadoop#hadoop-setup","content":"The following configuration files need to be copied over to the Druid conf folders: For HDFS as a deep storage, hdfs-site.xml, core-site.xml For ingestion, mapred-site.xml, yarn-site.xml "},{"title":"HDFS Folders and permissions","type":1,"pageTitle":"Configure Apache Druid to use Kerberized Apache Hadoop as deep storage","url":"/docs/27.0.0/tutorials/tutorial-kerberos-hadoop#hdfs-folders-and-permissions","content":"Choose any folder name for the druid deep storage, for example 'druid'. Create the folder in hdfs under the required parent folder. For example, hdfs dfs -mkdir /druid OR hdfs dfs -mkdir /apps/druid Give the druid processes appropriate permissions to access this folder. This would ensure that druid is able to create necessary folders like data and indexing_log in HDFS. For example, if druid processes run as user 'root', then `hdfs dfs -chown root:root /apps/druid` OR `hdfs dfs -chmod 777 /apps/druid` Druid creates necessary sub-folders to store data and index under this newly created folder. "},{"title":"Druid Setup","type":1,"pageTitle":"Configure Apache Druid to use Kerberized Apache Hadoop as deep storage","url":"/docs/27.0.0/tutorials/tutorial-kerberos-hadoop#druid-setup","content":"Edit common.runtime.properties at conf/druid/_common/common.runtime.properties to include the HDFS properties. 
Folders used for the location are same as the ones used for example above. "},{"title":"common.runtime.properties","type":1,"pageTitle":"Configure Apache Druid to use Kerberized Apache Hadoop as deep storage","url":"/docs/27.0.0/tutorials/tutorial-kerberos-hadoop#commonruntimeproperties","content":"# Deep storage # # For HDFS: druid.storage.type=hdfs druid.storage.storageDirectory=/druid/segments # OR # druid.storage.storageDirectory=/apps/druid/segments # # Indexing service logs # # For HDFS: druid.indexer.logs.type=hdfs druid.indexer.logs.directory=/druid/indexing-logs # OR # druid.storage.storageDirectory=/apps/druid/indexing-logs Note: Comment out Local storage and S3 Storage parameters in the file Also include hdfs-storage core extension to conf/druid/_common/common.runtime.properties # # Extensions # druid.extensions.directory=dist/druid/extensions druid.extensions.hadoopDependenciesDir=dist/druid/hadoop-dependencies druid.extensions.loadList=["mysql-metadata-storage", "druid-hdfs-storage", "druid-kerberos"] "},{"title":"Hadoop Jars","type":1,"pageTitle":"Configure Apache Druid to use Kerberized Apache Hadoop as deep storage","url":"/docs/27.0.0/tutorials/tutorial-kerberos-hadoop#hadoop-jars","content":"Ensure that Druid has necessary jars to support the Hadoop version. Find the hadoop version using command, hadoop version In case there is other software used with hadoop, like WanDisco, ensure that the necessary libraries are availableadd the requisite extensions to druid.extensions.loadlist in conf/druid/_common/common.runtime.properties "},{"title":"Kerberos setup","type":1,"pageTitle":"Configure Apache Druid to use Kerberized Apache Hadoop as deep storage","url":"/docs/27.0.0/tutorials/tutorial-kerberos-hadoop#kerberos-setup","content":"Create a headless keytab which would have access to the druid data and index. Edit conf/druid/_common/common.runtime.properties and add the following properties: druid.hadoop.security.kerberos.principal druid.hadoop.security.kerberos.keytab For example druid.hadoop.security.kerberos.principal=hdfs-test@EXAMPLE.IO druid.hadoop.security.kerberos.keytab=/etc/security/keytabs/hdfs.headless.keytab "},{"title":"Restart Druid Services","type":1,"pageTitle":"Configure Apache Druid to use Kerberized Apache Hadoop as deep storage","url":"/docs/27.0.0/tutorials/tutorial-kerberos-hadoop#restart-druid-services","content":"With the above changes, restart Druid. This would ensure that Druid works with Kerberized Hadoop "},{"title":"Convert an ingestion spec for SQL-based ingestion","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-msq-convert-spec","content":"Convert an ingestion spec for SQL-based ingestion info This page describes SQL-based batch ingestion using the druid-multi-stage-queryextension, new in Druid 24.0. Refer to the ingestion methods table to determine which ingestion method is right for you. If you're already ingesting data with native batch ingestion, you can use the web console to convert the ingestion spec to a SQL query that the multi-stage query task engine can use to ingest data. This tutorial demonstrates how to convert the ingestion spec to a query task in the web console. To convert the ingestion spec to a query task, do the following: In the Query view of the web console, navigate to the menu bar that includes Run. Click the ellipsis icon and select Convert ingestion spec to SQL. In the Ingestion spec to covert window, insert your ingestion spec. 
You can use your own spec or the sample ingestion spec provided in the tutorial. The sample spec uses data hosted at https://druid.apache.org/data/wikipedia.json.gz and loads it into a table named wikipedia: Show the spec { "type": "index_parallel", "spec": { "ioConfig": { "type": "index_parallel", "inputSource": { "type": "http", "uris": [ "https://druid.apache.org/data/wikipedia.json.gz" ] }, "inputFormat": { "type": "json" } }, "tuningConfig": { "type": "index_parallel", "partitionsSpec": { "type": "dynamic" } }, "dataSchema": { "dataSource": "wikipedia", "timestampSpec": { "column": "timestamp", "format": "iso" }, "dimensionsSpec": { "dimensions": [ "isRobot", "channel", "flags", "isUnpatrolled", "page", "diffUrl", { "type": "long", "name": "added" }, "comment", { "type": "long", "name": "commentLength" }, "isNew", "isMinor", { "type": "long", "name": "delta" }, "isAnonymous", "user", { "type": "long", "name": "deltaBucket" }, { "type": "long", "name": "deleted" }, "namespace", "cityName", "countryName", "regionIsoCode", "metroCode", "countryIsoCode", "regionName" ] }, "granularitySpec": { "queryGranularity": "none", "rollup": false, "segmentGranularity": "day" } } } } Click Submit to submit the spec. The web console uses the JSON-based ingestion spec to generate a SQL query that you can use instead. This is what the query looks like for the sample ingestion spec: Show the query -- This SQL query was auto generated from an ingestion spec REPLACE INTO wikipedia OVERWRITE ALL WITH source AS (SELECT * FROM TABLE( EXTERN( '{"type":"http","uris":["https://druid.apache.org/data/wikipedia.json.gz"]}', '{"type":"json"}', '[{"name":"timestamp","type":"string"},{"name":"isRobot","type":"string"},{"name":"channel","type":"string"},{"name":"flags","type":"string"},{"name":"isUnpatrolled","type":"string"},{"name":"page","type":"string"},{"name":"diffUrl","type":"string"},{"name":"added","type":"long"},{"name":"comment","type":"string"},{"name":"commentLength","type":"long"},{"name":"isNew","type":"string"},{"name":"isMinor","type":"string"},{"name":"delta","type":"long"},{"name":"isAnonymous","type":"string"},{"name":"user","type":"string"},{"name":"deltaBucket","type":"long"},{"name":"deleted","type":"long"},{"name":"namespace","type":"string"},{"name":"cityName","type":"string"},{"name":"countryName","type":"string"},{"name":"regionIsoCode","type":"string"},{"name":"metroCode","type":"string"},{"name":"countryIsoCode","type":"string"},{"name":"regionName","type":"string"}]' ) )) SELECT TIME_PARSE("timestamp") AS __time, "isRobot", "channel", "flags", "isUnpatrolled", "page", "diffUrl", "added", "comment", "commentLength", "isNew", "isMinor", "delta", "isAnonymous", "user", "deltaBucket", "deleted", "namespace", "cityName", "countryName", "regionIsoCode", "metroCode", "countryIsoCode", "regionName" FROM source PARTITIONED BY DAY Review the generated SQL query to make sure it matches your requirements and does what you expect. Click Run to start the ingestion.","keywords":""},{"title":"Load files with SQL-based ingestion","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-msq-extern","content":"","keywords":""},{"title":"Query the data","type":1,"pageTitle":"Load files with SQL-based ingestion","url":"/docs/27.0.0/tutorials/tutorial-msq-extern#query-the-data","content":"You can query the wikipedia table after the ingestion completes. 
For example, you can analyze the data in the table to produce a list of top channels: SELECT channel, COUNT(*) FROM "wikipedia" GROUP BY channel ORDER BY COUNT(*) DESC With the EXTERN function, you could run the same query on the external data directly without ingesting it first: Show the query SELECT channel, COUNT(*) FROM TABLE( EXTERN( '{"type": "http", "uris": ["https://druid.apache.org/data/wikipedia.json.gz"]}', '{"type": "json"}', '[{"name": "added", "type": "long"}, {"name": "channel", "type": "string"}, {"name": "cityName", "type": "string"}, {"name": "comment", "type": "string"}, {"name": "commentLength", "type": "long"}, {"name": "countryIsoCode", "type": "string"}, {"name": "countryName", "type": "string"}, {"name": "deleted", "type": "long"}, {"name": "delta", "type": "long"}, {"name": "deltaBucket", "type": "string"}, {"name": "diffUrl", "type": "string"}, {"name": "flags", "type": "string"}, {"name": "isAnonymous", "type": "string"}, {"name": "isMinor", "type": "string"}, {"name": "isNew", "type": "string"}, {"name": "isRobot", "type": "string"}, {"name": "isUnpatrolled", "type": "string"}, {"name": "metroCode", "type": "string"}, {"name": "namespace", "type": "string"}, {"name": "page", "type": "string"}, {"name": "regionIsoCode", "type": "string"}, {"name": "regionName", "type": "string"}, {"name": "timestamp", "type": "string"}, {"name": "user", "type": "string"}]' ) ) GROUP BY channel ORDER BY COUNT(*) DESC "},{"title":"Further reading","type":1,"pageTitle":"Load files with SQL-based ingestion","url":"/docs/27.0.0/tutorials/tutorial-msq-extern#further-reading","content":"See the following topics to learn more: SQL-based ingestion overview to further explore SQL-based ingestion.SQL-based ingestion reference for reference on context parameters, functions, and error codes. "},{"title":"Query data","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-query","content":"","keywords":""},{"title":"Query SQL from the web console","type":1,"pageTitle":"Query data","url":"/docs/27.0.0/tutorials/tutorial-query#query-sql-from-the-web-console","content":"The web console includes a view that makes it easier to build and test queries, and view their results. Start up the Druid cluster, if it's not already running, and open the web console in your web browser. Click Query from the header to open the Query view: You can always write queries directly in the edit pane, but the Query view also provides facilities to help you construct SQL queries, which we will use to generate a starter query. Expand the wikipedia datasource tree in the left pane. We'll create a query for the page dimension. Click page and then Show:page from the menu: A SELECT query appears in the query edit pane and immediately runs. However, in this case, the query returns no data, since by default the query filters for data from the last day, while our data is considerably older than that. Let's remove the filter. Click Run to run the query. You should now see two columns of data, a page name and the count: Notice that the results are limited in the console to about a hundred, by default, due to the Smart query limitfeature. This helps users avoid inadvertently running queries that return an excessive amount of data, possibly overwhelming their system. Let's edit the query directly and take a look at a few more query building features in the editor. Click in the query edit pane and make the following changes: Add a line after the first column, "page" and Start typing the name of a new column, "countryName". 
Notice that the autocomplete menu suggests column names, functions, keywords, and more. Choose "countryName" and add the new column to the GROUP BY clause as well, either by name or by reference to its position, 2. For readability, replace Count column name with Edits, since the COUNT() function actually returns the number of edits for the page. Make the same column name change in the ORDER BY clause as well. The `COUNT()` function is one of many functions available for use in Druid SQL queries. You can mouse over a function name in the autocomplete menu to see a brief description of a function. Also, you can find more information in the Druid documentation; for example, the `COUNT()` function is documented in [Aggregation functions](/docs/27.0.0/querying/sql-aggregations). The query should now be: SELECT "page", "countryName", COUNT(*) AS "Edits" FROM "wikipedia" GROUP BY 1, 2 ORDER BY "Edits" DESC When you run the query again, notice that we're getting the new dimension,countryName, but for most of the rows, its value is null. Let's show only rows with a countryName value. Click the countryName dimension in the left pane and choose the first filtering option. It's not exactly what we want, but we'll edit it by hand. The new WHERE clause should appear in your query. Modify the WHERE clause to exclude results that do not have a value for countryName: WHERE "countryName" IS NOT NULL Run the query again. You should now see the top edits by country: Under the covers, every Druid SQL query is translated into a query in the JSON-based Druid native query format before it runs on data nodes. You can view the native query for this query by clicking ... and Explain SQL Query. While you can use Druid SQL for most purposes, familiarity with native query is useful for composing complex queries and for troubleshooting performance issues. For more information, see Native queries. info Another way to view the explain plan is by adding EXPLAIN PLAN FOR to the front of your query, as follows: EXPLAIN PLAN FOR SELECT "page", "countryName", COUNT(*) AS "Edits" FROM "wikipedia" WHERE "countryName" IS NOT NULL GROUP BY 1, 2 ORDER BY "Edits" DESC This is particularly useful when running queries from the command line or over HTTP. Finally, click ... and Edit context to see how you can add additional parameters controlling the execution of the query execution. In the field, enter query context options as JSON key-value pairs, as described in Context flags. That's it! We've built a simple query using some of the query builder features built into the web console. The following sections provide a few more example queries you can try. See Query SQL over HTTP for an example of how to use the Druid SQL HTTP API. 
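Because EXPLAIN PLAN FOR is itself a SQL statement, you can also fetch the plan over that HTTP API. The following is a minimal sketch, assuming the quickstart Router on localhost:8888 and the `requests` package; the PLAN column parsing reflects recent Druid versions, which return the native query as a JSON string, and may differ in older ones.

```python
# Sketch: retrieve an explain plan through the SQL HTTP API instead of the console.
# Assumes the quickstart Router on localhost:8888 and the `requests` package.
import json

import requests

sql = """
EXPLAIN PLAN FOR
SELECT "page", "countryName", COUNT(*) AS "Edits"
FROM "wikipedia"
WHERE "countryName" IS NOT NULL
GROUP BY 1, 2
ORDER BY "Edits" DESC
"""

rows = requests.post("http://localhost:8888/druid/v2/sql", json={"query": sql}).json()

# Recent versions return the native query plan as a JSON string in the PLAN column;
# older versions may return a textual description instead.
plan = rows[0]["PLAN"]
try:
    print(json.dumps(json.loads(plan), indent=2))
except ValueError:
    print(plan)
```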
"},{"title":"More Druid SQL examples","type":1,"pageTitle":"Query data","url":"/docs/27.0.0/tutorials/tutorial-query#more-druid-sql-examples","content":"Try the following queries to learn a few more Druid SQL tricks: "},{"title":"Query over time","type":1,"pageTitle":"Query data","url":"/docs/27.0.0/tutorials/tutorial-query#query-over-time","content":"SELECT FLOOR(__time to HOUR) AS HourTime, SUM(deleted) AS LinesDeleted FROM wikipedia WHERE TIME_IN_INTERVAL("__time", '2016-06-27/2016-06-28') GROUP BY 1 "},{"title":"General group by","type":1,"pageTitle":"Query data","url":"/docs/27.0.0/tutorials/tutorial-query#general-group-by","content":"SELECT channel, page, SUM(added) FROM wikipedia WHERE TIME_IN_INTERVAL("__time", '2016-06-27/2016-06-28') GROUP BY channel, page ORDER BY SUM(added) DESC "},{"title":"Query SQL over HTTP","type":1,"pageTitle":"Query data","url":"/docs/27.0.0/tutorials/tutorial-query#query-sql-over-http","content":"You can submit native queries directly to the Druid Broker over HTTP. The request body should be a JSON object, with the value for the key query containing text of the query: { "query": "SELECT page, COUNT(*) AS Edits FROM wikipedia WHERE TIME_IN_INTERVAL(\\"__time\\", '2016-06-27/2016-06-28') GROUP BY page ORDER BY Edits DESC LIMIT 10" } The tutorial package includes an example file that contains the SQL query shown above at quickstart/tutorial/wikipedia-top-pages-sql.json. Let's submit that query to the Druid Broker: curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/tutorial/wikipedia-top-pages-sql.json http://localhost:8888/druid/v2/sql The following results should be returned: [ { "page": "Copa América Centenario", "Edits": 29 }, { "page": "User:Cyde/List of candidates for speedy deletion/Subpage", "Edits": 16 }, { "page": "Wikipedia:Administrators' noticeboard/Incidents", "Edits": 16 }, { "page": "2016 Wimbledon Championships – Men's Singles", "Edits": 15 }, { "page": "Wikipedia:Administrator intervention against vandalism", "Edits": 15 }, { "page": "Wikipedia:Vandalismusmeldung", "Edits": 15 }, { "page": "The Winds of Winter (Game of Thrones)", "Edits": 12 }, { "page": "ولاية الجزائر", "Edits": 12 }, { "page": "Copa América", "Edits": 10 }, { "page": "Lionel Messi", "Edits": 10 } ] "},{"title":"Further reading","type":1,"pageTitle":"Query data","url":"/docs/27.0.0/tutorials/tutorial-query#further-reading","content":"See the Druid SQL documentation for more information on using Druid SQL queries. See the Queries documentation for more information on Druid native queries. "},{"title":"Configure data retention","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-retention","content":"","keywords":""},{"title":"Load the example data","type":1,"pageTitle":"Configure data retention","url":"/docs/27.0.0/tutorials/tutorial-retention#load-the-example-data","content":"For this tutorial, we'll be using the Wikipedia edits sample data, with an ingestion task spec that will create a separate segment for each hour in the input data. The ingestion spec can be found at quickstart/tutorial/retention-index.json. Let's submit that spec, which will create a datasource called retention-tutorial: bin/post-index-task --file quickstart/tutorial/retention-index.json --url http://localhost:8081 After the ingestion completes, go to http://localhost:8888/unified-console.html#datasources in a browser to access the web console's datasource view. 
This view shows the available datasources and a summary of the retention rules for each datasource: Currently there are no rules set for the retention-tutorial datasource. Note that there are default rules for the cluster: load forever with 2 replicas in _default_tier. This means that all data will be loaded regardless of timestamp, and each segment will be replicated to two Historical processes in the default tier. In this tutorial, we will ignore the tiering and redundancy concepts for now. Let's view the segments for the retention-tutorial datasource by clicking the "24 Segments" link next to "Fully Available". The segments view (http://localhost:8888/unified-console.html#segments) provides information about what segments a datasource contains. The page shows that there are 24 segments, each one containing data for a specific hour of 2015-09-12: "},{"title":"Set retention rules","type":1,"pageTitle":"Configure data retention","url":"/docs/27.0.0/tutorials/tutorial-retention#set-retention-rules","content":"Suppose we want to drop data for the first 12 hours of 2015-09-12 and keep data for the later 12 hours of 2015-09-12. Go to the datasources view and click the blue pencil icon next to Cluster default: loadForever for the retention-tutorial datasource. A rule configuration window will appear: Now click the + New rule button twice. In the upper rule box, select Load and by interval, and then enter 2015-09-12T12:00:00.000Z/2015-09-13T00:00:00.000Z in field next to by interval. Replicas can remain at 2 in the _default_tier. In the lower rule box, select Drop and forever. The rules should look like this: Now click Next. The rule configuration process will ask for a user name and comment, for change logging purposes. You can enter tutorial for both. Now click Save. You can see the new rules in the datasources view: Give the cluster a few minutes to apply the rule change, and go to the segments view in the web console. The segments for the first 12 hours of 2015-09-12 are now gone: The resulting retention rule chain is the following: loadByInterval 2015-09-12T12/2015-09-13 (12 hours) dropForever loadForever (default rule) The rule chain is evaluated from top to bottom, with the default rule chain always added at the bottom. The tutorial rule chain we just created loads data if it is within the specified 12 hour interval. If data is not within the 12 hour interval, the rule chain evaluates dropForever next, which will drop any data. The dropForever terminates the rule chain, effectively overriding the default loadForever rule, which will never be reached in this rule chain. Note that in this tutorial we defined a load rule on a specific interval. If instead you want to retain data based on how old it is (e.g., retain data that ranges from 3 months in the past to the present time), you would define a Period load rule instead. "},{"title":"Further reading","type":1,"pageTitle":"Configure data retention","url":"/docs/27.0.0/tutorials/tutorial-retention#further-reading","content":"Load rules "},{"title":"Get to know Query view","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-sql-query-view","content":"","keywords":""},{"title":"Prerequisites","type":1,"pageTitle":"Get to know Query view","url":"/docs/27.0.0/tutorials/tutorial-sql-query-view#prerequisites","content":"Before you follow the steps in this tutorial, download Druid as described in the quickstart and have it running on your local machine. You don't need to have loaded any data. 
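Going back to the retention tutorial above: the same rule chain can also be applied through the Coordinator API instead of the console. The following is a hedged sketch, assuming the Coordinator at localhost:8081 and the `requests` package; the rule JSON mirrors the Load rules documentation, and the audit headers stand in for the user name and comment entered in the console dialog.

```python
# Sketch: set the retention-tutorial rule chain through the Coordinator API.
# Assumes a local Coordinator on localhost:8081; rule layout follows Load rules docs.
import requests

rules = [
    {
        # Load 2015-09-12T12:00Z through 2015-09-13T00:00Z with 2 replicas in the default tier.
        "type": "loadByInterval",
        "interval": "2015-09-12T12:00:00.000Z/2015-09-13T00:00:00.000Z",
        "tieredReplicants": {"_default_tier": 2},
    },
    # Drop everything the first rule did not match.
    {"type": "dropForever"},
]

response = requests.post(
    "http://localhost:8081/druid/coordinator/v1/rules/retention-tutorial",
    json=rules,
    # Equivalent to the user name and comment requested by the console.
    headers={"X-Druid-Author": "tutorial", "X-Druid-Comment": "tutorial"},
)
print(response.status_code)  # 200 when the rules are accepted
```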
"},{"title":"Run a demo query to ingest data","type":1,"pageTitle":"Get to know Query view","url":"/docs/27.0.0/tutorials/tutorial-sql-query-view#run-a-demo-query-to-ingest-data","content":"Druid includes demo queries that each demonstrate a different Druid feature—for example transforming data during ingestion and sorting ingested data. Each query has detailed comments to help you learn more. In this section you load the demo queries and run a SQL task to ingest sample data into a table datasource. Navigate to the Druid console at http://localhost:8888 and click Query. Click the ellipsis at the bottom of the query window and select Load demo queries. Note that loading the demo queries replaces all of your current query tabs. The demo queries load in several tabs: Click the Demo 1 tab. This query ingests sample data into a datasource called kttm_simple. Click the Demo 1 tab heading again and note the options—you can rename, copy, and duplicate tabs. Click Run to ingest the data. When ingestion is complete, Druid displays the time it took to complete the insert query, and the new datasource kttm_simple displays in the left pane. "},{"title":"View and filter query results","type":1,"pageTitle":"Get to know Query view","url":"/docs/27.0.0/tutorials/tutorial-sql-query-view#view-and-filter-query-results","content":"In this section you run some queries against the new datasource and perform some operations on the query results. Click + to the right of the existing tabs to open a new query tab. Click the name of the datasource kttm_simple in the left pane to display some automatically generated queries: Click SELECT * FROM kttm_simple and run the query. In the query results pane, click Chrome anywhere it appears in the browser column then click Filter on: browser = 'Chrome' to filter the results. "},{"title":"Run aggregate queries","type":1,"pageTitle":"Get to know Query view","url":"/docs/27.0.0/tutorials/tutorial-sql-query-view#run-aggregate-queries","content":"Aggregate functions allow you to perform a calculation on a set of values and return a single value. In this section you run some queries using aggregate functions and perform some operations on the results, using shortcut features designed to help you build your query. Open a new query tab. Click kttm_simple in the left pane to display the generated queries. Click SELECT COUNT(*) AS "Count" FROM kttm_simple and run the query. After you run a query that contains an aggregate function, additional Query view options become available. Click the arrow to the left of the kttm_simple datasource to display the columns, then click the country column. Several options appear to apply country-based filters and aggregate functions to the query: Click Aggregate > COUNT(DISTINCT "country") to add this clause to the query. The query now appears as follows: SELECT COUNT(*) AS "Count", COUNT(DISTINCT "country") AS "dist_country" FROM "kttm_simple" GROUP BY () Note that you can use column names such as dist_country in this example as shortcuts when building your query. Run the updated query: Click Engine: auto (sql-native) to display the engine options—native for native (JSON-based) queries, sql-native for Druid SQL queries, and sql-msq-task for SQL-based ingestion. Select auto to let Druid select the most efficient engine based on your query input. From the engine menu you can also edit the query context and turn off some query defaults. Deselect Use approximate COUNT(DISTINCT) and rerun the query. 
The country count in the results decreases because the computation has become more exact. See SQL aggregation functions for more information. Query view can provide information about a function, in case you aren't sure exactly what it does. Delete the contents of the query line COUNT(DISTINCT country) AS dist_country and type COUNT(DISTINCT) to replace it. A help dialog for the function displays: Click outside the help window to close it. You can perform actions on calculated columns in the results pane. Click the results column heading dist_country COUNT(DISTINCT "country") to see the available options: Select Edit column and change the Output name to Distinct countries. "},{"title":"Generate an explain plan","type":1,"pageTitle":"Get to know Query view","url":"/docs/27.0.0/tutorials/tutorial-sql-query-view#generate-an-explain-plan","content":"In this section you generate an explain plan for a query. An explain plan shows the full query details and all of the operations Druid performs to execute it. Druid optimizes queries of certain types—see SQL query translation for information on how to interpret an explain plan and use the details to improve query performance. Open a new query tab. Click kttm_simple in the left pane to display the generated queries. Click SELECT * FROM kttm_simple and run the query. Click the ellipsis at the bottom of the query window and select Explain SQL query. The query plan opens in a new window: Click Open in new tab. You can review the query details and modify it as required. Change the limit from 1001 to 2001: "Limit": 2001, and run the query to confirm that the updated query returns 2,001 results. "},{"title":"Try out a few more features","type":1,"pageTitle":"Get to know Query view","url":"/docs/27.0.0/tutorials/tutorial-sql-query-view#try-out-a-few-more-features","content":"In this section you try out a few more useful Query view features. "},{"title":"Use calculator mode","type":1,"pageTitle":"Get to know Query view","url":"/docs/27.0.0/tutorials/tutorial-sql-query-view#use-calculator-mode","content":"Queries without a FROM clause run in calculator mode—this can be useful to help you understand how functions work. See the Druid SQL functions reference for more information. Open a new query tab and enter the following: SELECT SQRT(49) Run the query to produce the result 7. "},{"title":"Download query results","type":1,"pageTitle":"Get to know Query view","url":"/docs/27.0.0/tutorials/tutorial-sql-query-view#download-query-results","content":"You can download query results in CSV, TSV, or newline-delimited JSON format. Open a new query tab and run a query, for example: SELECT DISTINCT platform FROM kttm_simple Above the results pane, click the down arrow and select Download results as… CSV. "},{"title":"View query history","type":1,"pageTitle":"Get to know Query view","url":"/docs/27.0.0/tutorials/tutorial-sql-query-view#view-query-history","content":"In any query tab, click the ellipsis at the bottom of the query window and select Query history. You can click the links on the left to view queries run at a particular date and time, and open a previously run query in a new query tab. 
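The download option described above has a programmatic counterpart: the SQL API accepts a resultFormat field in the request body. Here is a small sketch, assuming the quickstart Router on localhost:8888, the `requests` package, and the kttm_simple datasource created earlier in this tutorial; the output file name is just an example.

```python
# Sketch: fetch query results as CSV over the SQL API, mirroring the
# "Download results as... CSV" option in Query view. Assumes the quickstart
# Router on localhost:8888 and the kttm_simple datasource created above.
import requests

payload = {
    "query": "SELECT DISTINCT platform FROM kttm_simple",
    "resultFormat": "csv",
    "header": True,  # include a header row with column names
}

response = requests.post("http://localhost:8888/druid/v2/sql", json=payload)

with open("platforms.csv", "w", encoding="utf-8") as f:
    f.write(response.text)

print(response.text)
```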
"},{"title":"Further reading","type":1,"pageTitle":"Get to know Query view","url":"/docs/27.0.0/tutorials/tutorial-sql-query-view#further-reading","content":"For more information on ingestion and querying data, see the following topics: Quickstart for information on getting started with Druid.Tutorial: Querying data for example queries to run on Druid data.Ingestion for an overview of ingestion and the ingestion methods available in Druid.SQL-based ingestion for an overview of SQL-based ingestion.SQL-based ingestion query examples for examples of SQL-based ingestion for various use cases. "},{"title":"Aggregate data with rollup","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-rollup","content":"","keywords":""},{"title":"Example data","type":1,"pageTitle":"Aggregate data with rollup","url":"/docs/27.0.0/tutorials/tutorial-rollup#example-data","content":"For this tutorial, we'll use a small sample of network flow event data, representing packet and byte counts for traffic from a source to a destination IP address that occurred within a particular second. {"timestamp":"2018-01-01T01:01:35Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2","packets":20,"bytes":9024} {"timestamp":"2018-01-01T01:01:51Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2","packets":255,"bytes":21133} {"timestamp":"2018-01-01T01:01:59Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2","packets":11,"bytes":5780} {"timestamp":"2018-01-01T01:02:14Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2","packets":38,"bytes":6289} {"timestamp":"2018-01-01T01:02:29Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2","packets":377,"bytes":359971} {"timestamp":"2018-01-01T01:03:29Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2","packets":49,"bytes":10204} {"timestamp":"2018-01-02T21:33:14Z","srcIP":"7.7.7.7", "dstIP":"8.8.8.8","packets":38,"bytes":6289} {"timestamp":"2018-01-02T21:33:45Z","srcIP":"7.7.7.7", "dstIP":"8.8.8.8","packets":123,"bytes":93999} {"timestamp":"2018-01-02T21:35:45Z","srcIP":"7.7.7.7", "dstIP":"8.8.8.8","packets":12,"bytes":2818} A file containing this sample input data is located at quickstart/tutorial/rollup-data.json. We'll ingest this data using the following ingestion task spec, located at quickstart/tutorial/rollup-index.json. { "type" : "index_parallel", "spec" : { "dataSchema" : { "dataSource" : "rollup-tutorial", "dimensionsSpec" : { "dimensions" : [ "srcIP", "dstIP" ] }, "timestampSpec": { "column": "timestamp", "format": "iso" }, "metricsSpec" : [ { "type" : "count", "name" : "count" }, { "type" : "longSum", "name" : "packets", "fieldName" : "packets" }, { "type" : "longSum", "name" : "bytes", "fieldName" : "bytes" } ], "granularitySpec" : { "type" : "uniform", "segmentGranularity" : "week", "queryGranularity" : "minute", "intervals" : ["2018-01-01/2018-01-03"], "rollup" : true } }, "ioConfig" : { "type" : "index_parallel", "inputSource" : { "type" : "local", "baseDir" : "quickstart/tutorial", "filter" : "rollup-data.json" }, "inputFormat" : { "type" : "json" }, "appendToExisting" : false }, "tuningConfig" : { "type" : "index_parallel", "partitionsSpec": { "type": "dynamic" }, "maxRowsInMemory" : 25000 } } } Rollup has been enabled by setting "rollup" : true in the granularitySpec. Note that we have srcIP and dstIP defined as dimensions, a longSum metric is defined for the packets and bytes columns, and the queryGranularity has been defined as minute. We will see how these definitions are used after we load this data. 
"},{"title":"Load the example data","type":1,"pageTitle":"Aggregate data with rollup","url":"/docs/27.0.0/tutorials/tutorial-rollup#load-the-example-data","content":"From the apache-druid-27.0.0 package root, run the following command: bin/post-index-task --file quickstart/tutorial/rollup-index.json --url http://localhost:8081 After the script completes, we will query the data. "},{"title":"Query the example data","type":1,"pageTitle":"Aggregate data with rollup","url":"/docs/27.0.0/tutorials/tutorial-rollup#query-the-example-data","content":"Let's run bin/dsql and issue a select * from "rollup-tutorial"; query to see what data was ingested. $ bin/dsql Welcome to dsql, the command-line client for Druid SQL. Type "\\h" for help. dsql> select * from "rollup-tutorial"; ┌──────────────────────────┬────────┬───────┬─────────┬─────────┬─────────┐ │ __time │ bytes │ count │ dstIP │ packets │ srcIP │ ├──────────────────────────┼────────┼───────┼─────────┼─────────┼─────────┤ │ 2018-01-01T01:01:00.000Z │ 35937 │ 3 │ 2.2.2.2 │ 286 │ 1.1.1.1 │ │ 2018-01-01T01:02:00.000Z │ 366260 │ 2 │ 2.2.2.2 │ 415 │ 1.1.1.1 │ │ 2018-01-01T01:03:00.000Z │ 10204 │ 1 │ 2.2.2.2 │ 49 │ 1.1.1.1 │ │ 2018-01-02T21:33:00.000Z │ 100288 │ 2 │ 8.8.8.8 │ 161 │ 7.7.7.7 │ │ 2018-01-02T21:35:00.000Z │ 2818 │ 1 │ 8.8.8.8 │ 12 │ 7.7.7.7 │ └──────────────────────────┴────────┴───────┴─────────┴─────────┴─────────┘ Retrieved 5 rows in 1.18s. dsql> Let's look at the three events in the original input data that occurred during 2018-01-01T01:01: {"timestamp":"2018-01-01T01:01:35Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2","packets":20,"bytes":9024} {"timestamp":"2018-01-01T01:01:51Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2","packets":255,"bytes":21133} {"timestamp":"2018-01-01T01:01:59Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2","packets":11,"bytes":5780} These three rows have been "rolled up" into the following row: ┌──────────────────────────┬────────┬───────┬─────────┬─────────┬─────────┐ │ __time │ bytes │ count │ dstIP │ packets │ srcIP │ ├──────────────────────────┼────────┼───────┼─────────┼─────────┼─────────┤ │ 2018-01-01T01:01:00.000Z │ 35937 │ 3 │ 2.2.2.2 │ 286 │ 1.1.1.1 │ └──────────────────────────┴────────┴───────┴─────────┴─────────┴─────────┘ The input rows have been grouped by the timestamp and dimension columns {timestamp, srcIP, dstIP} with sum aggregations on the metric columns packets and bytes. Before the grouping occurs, the timestamps of the original input data are bucketed/floored by minute, due to the "queryGranularity":"minute" setting in the ingestion spec. 
Likewise, these two events that occurred during 2018-01-01T01:02 have been rolled up: {"timestamp":"2018-01-01T01:02:14Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2","packets":38,"bytes":6289} {"timestamp":"2018-01-01T01:02:29Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2","packets":377,"bytes":359971} ┌──────────────────────────┬────────┬───────┬─────────┬─────────┬─────────┐ │ __time │ bytes │ count │ dstIP │ packets │ srcIP │ ├──────────────────────────┼────────┼───────┼─────────┼─────────┼─────────┤ │ 2018-01-01T01:02:00.000Z │ 366260 │ 2 │ 2.2.2.2 │ 415 │ 1.1.1.1 │ └──────────────────────────┴────────┴───────┴─────────┴─────────┴─────────┘ For the last event recording traffic between 1.1.1.1 and 2.2.2.2, no rollup took place, because this was the only event that occurred during 2018-01-01T01:03: {"timestamp":"2018-01-01T01:03:29Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2","packets":49,"bytes":10204} ┌──────────────────────────┬────────┬───────┬─────────┬─────────┬─────────┐ │ __time │ bytes │ count │ dstIP │ packets │ srcIP │ ├──────────────────────────┼────────┼───────┼─────────┼─────────┼─────────┤ │ 2018-01-01T01:03:00.000Z │ 10204 │ 1 │ 2.2.2.2 │ 49 │ 1.1.1.1 │ └──────────────────────────┴────────┴───────┴─────────┴─────────┴─────────┘ Note that the count metric shows how many rows in the original input data contributed to the final "rolled up" row. "},{"title":"Transform input data","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-transform-spec","content":"","keywords":""},{"title":"Sample data","type":1,"pageTitle":"Transform input data","url":"/docs/27.0.0/tutorials/tutorial-transform-spec#sample-data","content":"We've included sample data for this tutorial at quickstart/tutorial/transform-data.json, reproduced here for convenience: {"timestamp":"2018-01-01T07:01:35Z","animal":"octopus", "location":1, "number":100} {"timestamp":"2018-01-01T05:01:35Z","animal":"mongoose", "location":2,"number":200} {"timestamp":"2018-01-01T06:01:35Z","animal":"snake", "location":3, "number":300} {"timestamp":"2018-01-01T01:01:35Z","animal":"lion", "location":4, "number":300} "},{"title":"Load data with transform specs","type":1,"pageTitle":"Transform input data","url":"/docs/27.0.0/tutorials/tutorial-transform-spec#load-data-with-transform-specs","content":"We will ingest the sample data using the following spec, which demonstrates the use of transform specs: { "type" : "index_parallel", "spec" : { "dataSchema" : { "dataSource" : "transform-tutorial", "timestampSpec": { "column": "timestamp", "format": "iso" }, "dimensionsSpec" : { "dimensions" : [ "animal", { "name": "location", "type": "long" } ] }, "metricsSpec" : [ { "type" : "count", "name" : "count" }, { "type" : "longSum", "name" : "number", "fieldName" : "number" }, { "type" : "longSum", "name" : "triple-number", "fieldName" : "triple-number" } ], "granularitySpec" : { "type" : "uniform", "segmentGranularity" : "week", "queryGranularity" : "minute", "intervals" : ["2018-01-01/2018-01-03"], "rollup" : true }, "transformSpec": { "transforms": [ { "type": "expression", "name": "animal", "expression": "concat('super-', animal)" }, { "type": "expression", "name": "triple-number", "expression": "number * 3" } ], "filter": { "type":"or", "fields": [ { "type": "selector", "dimension": "animal", "value": "super-mongoose" }, { "type": "selector", "dimension": "triple-number", "value": "300" }, { "type": "selector", "dimension": "location", "value": "3" } ] } } }, "ioConfig" : { "type" : "index_parallel", "inputSource" : { "type" : "local", "baseDir" 
: "quickstart/tutorial", "filter" : "transform-data.json" }, "inputFormat" : { "type" :"json" }, "appendToExisting" : false }, "tuningConfig" : { "type" : "index_parallel", "partitionsSpec": { "type": "dynamic" }, "maxRowsInMemory" : 25000 } } } In the transform spec, we have two expression transforms: super-animal: prepends "super-" to the values in the animal column. This will override the animal column with the transformed version, since the transform's name is animal.triple-number: multiplies the number column by 3. This will create a new triple-number column. Note that we are ingesting both the original and the transformed column. Additionally, we have an OR filter with three clauses: super-animal values that match "super-mongoose"triple-number values that match 300location values that match 3 This filter selects the first 3 rows, and it will exclude the final "lion" row in the input data. Note that the filter is applied after the transformation. Let's submit this task now, which has been included at quickstart/tutorial/transform-index.json: bin/post-index-task --file quickstart/tutorial/transform-index.json --url http://localhost:8081 "},{"title":"Query the transformed data","type":1,"pageTitle":"Transform input data","url":"/docs/27.0.0/tutorials/tutorial-transform-spec#query-the-transformed-data","content":"Let's run bin/dsql and issue a select * from "transform-tutorial"; query to see what was ingested: dsql> select * from "transform-tutorial"; ┌──────────────────────────┬────────────────┬───────┬──────────┬────────┬───────────────┐ │ __time │ animal │ count │ location │ number │ triple-number │ ├──────────────────────────┼────────────────┼───────┼──────────┼────────┼───────────────┤ │ 2018-01-01T05:01:00.000Z │ super-mongoose │ 1 │ 2 │ 200 │ 600 │ │ 2018-01-01T06:01:00.000Z │ super-snake │ 1 │ 3 │ 300 │ 900 │ │ 2018-01-01T07:01:00.000Z │ super-octopus │ 1 │ 1 │ 100 │ 300 │ └──────────────────────────┴────────────────┴───────┴──────────┴────────┴───────────────┘ Retrieved 3 rows in 0.03s. The "lion" row has been discarded, the animal column has been transformed, and we have both the original and transformed number column. "},{"title":"Update existing data","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-update-data","content":"","keywords":""},{"title":"Prerequisites","type":1,"pageTitle":"Update existing data","url":"/docs/27.0.0/tutorials/tutorial-update-data#prerequisites","content":"Before starting this tutorial, download and run Apache Druid on your local machine as described in the single-machine quickstart. You should also be familiar with the material in the following tutorials: Load a fileQuery dataRollup "},{"title":"Load initial data","type":1,"pageTitle":"Update existing data","url":"/docs/27.0.0/tutorials/tutorial-update-data#load-initial-data","content":"Load an initial data set to which you will overwrite and append data. The ingestion spec is located at quickstart/tutorial/updates-init-index.json. This spec creates a datasource called updates-tutorial and ingests data from quickstart/tutorial/updates-data.json. 
Submit the ingestion task: bin/post-index-task --file quickstart/tutorial/updates-init-index.json --url http://localhost:8081 Start the SQL command-line client: bin/dsql Run the following SQL query to retrieve data from updates-tutorial: dsql> SELECT * FROM "updates-tutorial"; ┌──────────────────────────┬──────────┬───────┬────────┐ │ __time │ animal │ count │ number │ ├──────────────────────────┼──────────┼───────┼────────┤ │ 2018-01-01T01:01:00.000Z │ tiger │ 1 │ 100 │ │ 2018-01-01T03:01:00.000Z │ aardvark │ 1 │ 42 │ │ 2018-01-01T03:01:00.000Z │ giraffe │ 1 │ 14124 │ └──────────────────────────┴──────────┴───────┴────────┘ Retrieved 3 rows in 1.42s. The datasource contains three rows of data with an animal dimension and a number metric. "},{"title":"Overwrite data","type":1,"pageTitle":"Update existing data","url":"/docs/27.0.0/tutorials/tutorial-update-data#overwrite-data","content":"To overwrite the data, submit another task for the same interval but with different input data. The quickstart/tutorial/updates-overwrite-index.json spec performs an overwrite on the updates-tutorial datasource. In the overwrite ingestion spec, notice the following: The intervals field remains the same: "intervals" : ["2018-01-01/2018-01-03"]New data is loaded from the local file, quickstart/tutorial/updates-data2.jsonappendToExisting is set to false, indicating an overwrite task Submit the ingestion task to overwrite the data: bin/post-index-task --file quickstart/tutorial/updates-overwrite-index.json --url http://localhost:8081 When Druid finishes loading the new segment from this overwrite task, run the SELECT query again. In the new results, the tiger row now has the value lion, the aardvark row has a different number, and the giraffe row has been replaced with a bear row. dsql> SELECT * FROM "updates-tutorial"; ┌──────────────────────────┬──────────┬───────┬────────┐ │ __time │ animal │ count │ number │ ├──────────────────────────┼──────────┼───────┼────────┤ │ 2018-01-01T01:01:00.000Z │ lion │ 1 │ 100 │ │ 2018-01-01T03:01:00.000Z │ aardvark │ 1 │ 9999 │ │ 2018-01-01T04:01:00.000Z │ bear │ 1 │ 111 │ └──────────────────────────┴──────────┴───────┴────────┘ Retrieved 3 rows in 0.02s. "},{"title":"Combine existing data with new data and overwrite","type":1,"pageTitle":"Update existing data","url":"/docs/27.0.0/tutorials/tutorial-update-data#combine-existing-data-with-new-data-and-overwrite","content":"Now append new data to the updates-tutorial datasource from quickstart/tutorial/updates-data3.json using the ingestion spec quickstart/tutorial/updates-append-index.json. The spec directs Druid to read from the existing updates-tutorial datasource as well as the quickstart/tutorial/updates-data3.json file. The task combines data from the two input sources, then overwrites the original data with the new combined data. Submit that task: bin/post-index-task --file quickstart/tutorial/updates-append-index.json --url http://localhost:8081 When Druid finishes loading the new segment from this overwrite task, it adds the new rows to the datasource. Run the SELECT query again. 
Druid automatically rolls up the data at ingestion time, aggregating the data in the lion row: dsql> SELECT * FROM "updates-tutorial"; ┌──────────────────────────┬──────────┬───────┬────────┐ │ __time │ animal │ count │ number │ ├──────────────────────────┼──────────┼───────┼────────┤ │ 2018-01-01T01:01:00.000Z │ lion │ 2 │ 400 │ │ 2018-01-01T03:01:00.000Z │ aardvark │ 1 │ 9999 │ │ 2018-01-01T04:01:00.000Z │ bear │ 1 │ 111 │ │ 2018-01-01T05:01:00.000Z │ mongoose │ 1 │ 737 │ │ 2018-01-01T06:01:00.000Z │ snake │ 1 │ 1234 │ │ 2018-01-01T07:01:00.000Z │ octopus │ 1 │ 115 │ └──────────────────────────┴──────────┴───────┴────────┘ Retrieved 6 rows in 0.02s. "},{"title":"Append data","type":1,"pageTitle":"Update existing data","url":"/docs/27.0.0/tutorials/tutorial-update-data#append-data","content":"Now you append data to the datasource without changing the existing data. Use the ingestion spec located at quickstart/tutorial/updates-append-index2.json. The spec directs Druid to ingest data from quickstart/tutorial/updates-data4.json and append it to the updates-tutorial datasource. The property appendToExisting is set to true in this spec. Submit the task: bin/post-index-task --file quickstart/tutorial/updates-append-index2.json --url http://localhost:8081 Druid adds two additional rows after octopus. When the task completes, query the data again to see them. Druid doesn't roll up the new bear row with the existing bear row because it stored the new data in a separate segment. dsql> SELECT * FROM "updates-tutorial"; ┌──────────────────────────┬──────────┬───────┬────────┐ │ __time │ animal │ count │ number │ ├──────────────────────────┼──────────┼───────┼────────┤ │ 2018-01-01T01:01:00.000Z │ lion │ 2 │ 400 │ │ 2018-01-01T03:01:00.000Z │ aardvark │ 1 │ 9999 │ │ 2018-01-01T04:01:00.000Z │ bear │ 1 │ 111 │ │ 2018-01-01T05:01:00.000Z │ mongoose │ 1 │ 737 │ │ 2018-01-01T06:01:00.000Z │ snake │ 1 │ 1234 │ │ 2018-01-01T07:01:00.000Z │ octopus │ 1 │ 115 │ │ 2018-01-01T04:01:00.000Z │ bear │ 1 │ 222 │ │ 2018-01-01T09:01:00.000Z │ falcon │ 1 │ 1241 │ └──────────────────────────┴──────────┴───────┴────────┘ Retrieved 8 rows in 0.02s. Run the following groupBy query to see that the bear rows group together at query time: dsql> SELECT __time, animal, SUM("count"), SUM("number") FROM "updates-tutorial" GROUP BY __time, animal; ┌──────────────────────────┬──────────┬────────┬────────┐ │ __time │ animal │ EXPR$2 │ EXPR$3 │ ├──────────────────────────┼──────────┼────────┼────────┤ │ 2018-01-01T01:01:00.000Z │ lion │ 2 │ 400 │ │ 2018-01-01T03:01:00.000Z │ aardvark │ 1 │ 9999 │ │ 2018-01-01T04:01:00.000Z │ bear │ 2 │ 333 │ │ 2018-01-01T05:01:00.000Z │ mongoose │ 1 │ 737 │ │ 2018-01-01T06:01:00.000Z │ snake │ 1 │ 1234 │ │ 2018-01-01T07:01:00.000Z │ octopus │ 1 │ 115 │ │ 2018-01-01T09:01:00.000Z │ falcon │ 1 │ 1241 │ └──────────────────────────┴──────────┴────────┴────────┘ Retrieved 7 rows in 0.23s. "},{"title":"Tutorial: Query from deep storage","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-query-deep-storage","content":"","keywords":""},{"title":"Load example data","type":1,"pageTitle":"Tutorial: Query from deep storage","url":"/docs/27.0.0/tutorials/tutorial-query-deep-storage#load-example-data","content":"Use the Load data wizard or the following SQL query to ingest the wikipedia sample datasource bundled with Druid. If you use the wizard, make sure you change the partitioning to be by hour. 
Partitioning by hour provides more segment granularity, so you can selectively load segments onto Historicals or keep them in deep storage. Show the query REPLACE INTO "wikipedia" OVERWRITE ALL WITH "ext" AS (SELECT * FROM TABLE( EXTERN( '{"type":"http","uris":["https://druid.apache.org/data/wikipedia.json.gz"]}', '{"type":"json"}' ) ) EXTEND ("isRobot" VARCHAR, "channel" VARCHAR, "timestamp" VARCHAR, "flags" VARCHAR, "isUnpatrolled" VARCHAR, "page" VARCHAR, "diffUrl" VARCHAR, "added" BIGINT, "comment" VARCHAR, "commentLength" BIGINT, "isNew" VARCHAR, "isMinor" VARCHAR, "delta" BIGINT, "isAnonymous" VARCHAR, "user" VARCHAR, "deltaBucket" BIGINT, "deleted" BIGINT, "namespace" VARCHAR, "cityName" VARCHAR, "countryName" VARCHAR, "regionIsoCode" VARCHAR, "metroCode" BIGINT, "countryIsoCode" VARCHAR, "regionName" VARCHAR)) SELECT TIME_PARSE("timestamp") AS "__time", "isRobot", "channel", "flags", "isUnpatrolled", "page", "diffUrl", "added", "comment", "commentLength", "isNew", "isMinor", "delta", "isAnonymous", "user", "deltaBucket", "deleted", "namespace", "cityName", "countryName", "regionIsoCode", "metroCode", "countryIsoCode", "regionName" FROM "ext" PARTITIONED BY HOUR "},{"title":"Configure a load rule","type":1,"pageTitle":"Tutorial: Query from deep storage","url":"/docs/27.0.0/tutorials/tutorial-query-deep-storage#configure-a-load-rule","content":"The load rule configures Druid to keep any segments that fall within the following interval only in deep storage: 2016-06-27T00:00:00.000Z/2016-06-27T02:59:00.000Z The JSON form of the rule is as follows: [ { "interval": "2016-06-27T00:00:00.000Z/2016-06-27T02:59:00.000Z", "tieredReplicants": {}, "useDefaultTierForNull": false, "type": "loadByInterval" } ] The rest of the segments use the default load rules for the cluster. For the quickstart, that means all the other segments get loaded onto Historical processes. You can configure the load rules through the API or the Druid console. To configure the load rules through the Druid console, go to Datasources > ... in the Actions column > Edit retention rules. Then, paste the provided JSON into the JSON tab: "},{"title":"Verify the replication factor","type":1,"pageTitle":"Tutorial: Query from deep storage","url":"/docs/27.0.0/tutorials/tutorial-query-deep-storage#verify-the-replication-factor","content":"Segments that are only available from deep storage have a replication_factor of 0 in the Druid system table. You can verify that your load rule worked as intended using the following query: SELECT "segment_id", "replication_factor", "num_replicas" FROM sys."segments" WHERE datasource = 'wikipedia' You can also verify it through the Druid console by checking the Replication factor column in the Segments view. Note that the number of replicas and replication factor may differ temporarily as Druid processes your retention rules. 
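To list just the segments that now live only in deep storage, you can narrow the same system table query; a minimal sketch using the columns already referenced above:
SELECT "segment_id", "num_replicas"
FROM sys.segments
WHERE datasource = 'wikipedia' AND "replication_factor" = 0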
"},{"title":"Query from deep storage","type":1,"pageTitle":"Tutorial: Query from deep storage","url":"/docs/27.0.0/tutorials/tutorial-query-deep-storage#query-from-deep-storage","content":"Now that there are segments that are only available from deep storage, run the following query: SELECT page FROM wikipedia WHERE __time < TIMESTAMP'2016-06-27 00:10:00' LIMIT 10 With the context parameter: "executionMode": "ASYNC" For example, run the following curl command: curl --location 'http://localhost:8888/druid/v2/sql/statements' \\ --header 'Content-Type: application/json' \\ --data '{ "query":"SELECT page FROM wikipedia WHERE __time < TIMESTAMP'\\''2016-06-27 00:10:00'\\'' LIMIT 10", "context":{ "executionMode":"ASYNC" } }' This query looks for records with timestamps that precede 00:10:00. Based on the load rule you configured earlier, this data is only available from deep storage. When you submit the query from deep storage through the API, you get the following response: Show the response { "queryId": "query-6888b6f6-e597-456c-9004-222b05b97051", "state": "ACCEPTED", "createdAt": "2023-07-28T21:59:02.334Z", "schema": [ { "name": "page", "type": "VARCHAR", "nativeType": "STRING" } ], "durationMs": -1 } Make sure you note the queryID. You'll need it to interact with the query. Compare this to if you were to submit the query to Druid SQL's regular endpoint, POST /sql: curl --location 'http://localhost:8888/druid/v2/sql/' \\ --header 'Content-Type: application/json' \\ --data '{ "query":"SELECT page FROM wikipedia WHERE __time < TIMESTAMP'\\''2016-06-27 00:10:00'\\'' LIMIT 10", "context":{ "executionMode":"ASYNC" } }' The response you get back is an empty response cause there are no records on the Historicals that match the query. "},{"title":"Get query status","type":1,"pageTitle":"Tutorial: Query from deep storage","url":"/docs/27.0.0/tutorials/tutorial-query-deep-storage#get-query-status","content":"Replace :queryId with the ID for your query and run the following curl command to get your query status: curl --location --request GET 'http://localhost:8888/druid/v2/sql/statements/:queryId' \\ --header 'Content-Type: application/json' \\ "},{"title":"Response for a running query","type":1,"pageTitle":"Tutorial: Query from deep storage","url":"/docs/27.0.0/tutorials/tutorial-query-deep-storage#response-for-a-running-query","content":"The response for a running query is the same as the response from when you submitted the query except the state is RUNNING instead of ACCEPTED. "},{"title":"Response for a completed query","type":1,"pageTitle":"Tutorial: Query from deep storage","url":"/docs/27.0.0/tutorials/tutorial-query-deep-storage#response-for-a-completed-query","content":"A successful query also returns a pages object that includes the page numbers (id), rows per page (numRows), and the size of the page (sizeInBytes). You can pass the page number as a parameter when you get results to refine the results you get. Note that sampleRecords has been truncated for brevity. Show the response { "queryId": "query-6888b6f6-e597-456c-9004-222b05b97051", "state": "SUCCESS", "createdAt": "2023-07-28T21:59:02.334Z", "schema": [ { "name": "page", "type": "VARCHAR", "nativeType": "STRING" } ], "durationMs": 87351, "result": { "numTotalRows": 152, "totalSizeInBytes": 9036, "dataSource": "__query_select", "sampleRecords": [ [ "Salo Toraut" ], [ "利用者:ワーナー成増/放送ウーマン賞" ], [ "Bailando 2015" ], ... ... ... 
], "pages": [ { "id": 0, "numRows": 152, "sizeInBytes": 9036 } ] } } "},{"title":"Get query results","type":1,"pageTitle":"Tutorial: Query from deep storage","url":"/docs/27.0.0/tutorials/tutorial-query-deep-storage#get-query-results","content":"Replace :queryId with the ID for your query and run the following curl command to get your query results: curl --location 'http://ROUTER:PORT/druid/v2/sql/statements/:queryId' Note that the response has been truncated for brevity. Show the response [ { "page": "Salo Toraut" }, { "page": "利用者:ワーナー成増/放送ウーマン賞" }, { "page": "Bailando 2015" }, ... ... ... ] "},{"title":"Further reading","type":1,"pageTitle":"Tutorial: Query from deep storage","url":"/docs/27.0.0/tutorials/tutorial-query-deep-storage#further-reading","content":"Query from deep storageQuery from deep storage API reference "},{"title":"Approximations with Theta sketches","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-sketches-theta","content":"","keywords":""},{"title":"The problem with counts and set operations on large data sets","type":1,"pageTitle":"Approximations with Theta sketches","url":"/docs/27.0.0/tutorials/tutorial-sketches-theta#the-problem-with-counts-and-set-operations-on-large-data-sets","content":"Imagine you are interested in the number of visitors that watched episodes of a TV show. Let's say you found that at a given day, 1000 unique visitors watched the first episode, and 800 visitors watched the second episode. You may want to explore further trends, for example: How many visitors watched both episodes?How many visitors are there that watched at least one of the episodes?How many visitors watched episode 1 but not episode 2? There is no way to answer these questions by just looking at the aggregated numbers. You would have to go back to the detail data and scan every single row. If the data volume is high enough, this may take a very long time, meaning that an interactive data exploration is not possible. An additional nuisance is that unique counts don't work well with rollups. For this example, it would be great if you could have just one row of data per 15 minute interval1, show, and episode. After all, you are not interested in the individual user IDs, just the unique counts. Is there a way to avoid crunching the detail data every single time, and maybe even enable rollup? Enter Theta sketches. "},{"title":"Use Theta sketches for fast approximation with set operations","type":1,"pageTitle":"Approximations with Theta sketches","url":"/docs/27.0.0/tutorials/tutorial-sketches-theta#use-theta-sketches-for-fast-approximation-with-set-operations","content":"Use Theta sketches to obtain a fast approximate estimate for the distinct count of values used to build the sketches. Theta sketches are a probabilistic data structure to enable approximate analysis of big data with known error distributions. Druid's implementation relies on the Apache DataSketches library. The following properties describe Theta sketches: Similar to other sketches, Theta sketches are mergeable. This means you can work with rolled up data and merge the sketches over various time intervals. Thus, you can take advantage of Druid's rollup feature.Specific to sketches supported in Druid, Theta sketches support set operations. Given two Theta sketches over subsets of data, you can compute the union, intersection, or set difference of the two subsets. This enables you to answer questions like the number of visitors that watched a specific combination of episodes from the example. 
In this tutorial, you will learn how to do the following: Create Theta sketches from your input data at ingestion time.Execute distinct count and set operation queries on the Theta sketches to explore the questions presented earlier. "},{"title":"Prerequisites","type":1,"pageTitle":"Approximations with Theta sketches","url":"/docs/27.0.0/tutorials/tutorial-sketches-theta#prerequisites","content":"For this tutorial, you should have already downloaded Druid as described in the single-machine quickstart and have it running on your local machine. It will also be helpful to have finished Tutorial: Loading a file and Tutorial: Querying data. This tutorial works with the following data: date: a timestamp. In this case it's just dates but as mentioned earlier, a finer granularity makes sense in real life.uid: a user IDshow: name of a TV showepisode: episode identifier date,uid,show,episode 2022-05-19,alice,Game of Thrones,S1E1 2022-05-19,alice,Game of Thrones,S1E2 2022-05-19,alice,Game of Thrones,S1E1 2022-05-19,bob,Bridgerton,S1E1 2022-05-20,alice,Game of Thrones,S1E1 2022-05-20,carol,Bridgerton,S1E2 2022-05-20,dan,Bridgerton,S1E1 2022-05-21,alice,Game of Thrones,S1E1 2022-05-21,carol,Bridgerton,S1E1 2022-05-21,erin,Game of Thrones,S1E1 2022-05-21,alice,Bridgerton,S1E1 2022-05-22,bob,Game of Thrones,S1E1 2022-05-22,bob,Bridgerton,S1E1 2022-05-22,carol,Bridgerton,S1E2 2022-05-22,bob,Bridgerton,S1E1 2022-05-22,erin,Game of Thrones,S1E1 2022-05-22,erin,Bridgerton,S1E2 2022-05-23,erin,Game of Thrones,S1E1 2022-05-23,alice,Game of Thrones,S1E1 "},{"title":"Ingest data using Theta sketches","type":1,"pageTitle":"Approximations with Theta sketches","url":"/docs/27.0.0/tutorials/tutorial-sketches-theta#ingest-data-using-theta-sketches","content":"Navigate to the Load data wizard in the web console.Select Paste data as the data source and paste the given data: Leave the source type as inline and click Apply and Next: Parse data.Parse the data as CSV, with included headers: Accept the default values in the Parse time, Transform, and Filter stages.In the Configure schema stage, enable rollup and confirm your choice in the dialog. Then set the query granularity to day. Add the Theta sketch during this stage. Select Add metric.Define the new metric as a Theta sketch with the following details: Name: theta_uidType: thetaSketchField name: uidSize: Accept the default value, 16384.Is input theta sketch: Accept the default value, False. Click Apply to add the new metric to the data model. You are not interested in individual user ID's, only the unique counts. Right now, uid is still in the data model. To remove it, click on the uid column in the data model and delete it using the trashcan icon on the right: For the remaining stages of the Load data wizard, set the following options: Partition: Set Segment granularity to day.Tune: Leave the default options.Publish: Set the datasource name to ts_tutorial. 
On the Edit spec page, your final input spec should match the following: { "type": "index_parallel", "spec": { "ioConfig": { "type": "index_parallel", "inputSource": { "type": "inline", "data": "date,uid,show,episode\\n2022-05-19,alice,Game of Thrones,S1E1\\n2022-05-19,alice,Game of Thrones,S1E2\\n2022-05-19,alice,Game of Thrones,S1E1\\n2022-05-19,bob,Bridgerton,S1E1\\n2022-05-20,alice,Game of Thrones,S1E1\\n2022-05-20,carol,Bridgerton,S1E2\\n2022-05-20,dan,Bridgerton,S1E1\\n2022-05-21,alice,Game of Thrones,S1E1\\n2022-05-21,carol,Bridgerton,S1E1\\n2022-05-21,erin,Game of Thrones,S1E1\\n2022-05-21,alice,Bridgerton,S1E1\\n2022-05-22,bob,Game of Thrones,S1E1\\n2022-05-22,bob,Bridgerton,S1E1\\n2022-05-22,carol,Bridgerton,S1E2\\n2022-05-22,bob,Bridgerton,S1E1\\n2022-05-22,erin,Game of Thrones,S1E1\\n2022-05-22,erin,Bridgerton,S1E2\\n2022-05-23,erin,Game of Thrones,S1E1\\n2022-05-23,alice,Game of Thrones,S1E1" }, "inputFormat": { "type": "csv", "findColumnsFromHeader": true } }, "tuningConfig": { "type": "index_parallel", "partitionsSpec": { "type": "hashed" }, "forceGuaranteedRollup": true }, "dataSchema": { "dataSource": "ts_tutorial", "timestampSpec": { "column": "date", "format": "auto" }, "dimensionsSpec": { "dimensions": [ "show", "episode" ] }, "granularitySpec": { "queryGranularity": "day", "rollup": true, "segmentGranularity": "day" }, "metricsSpec": [ { "name": "count", "type": "count" }, { "type": "thetaSketch", "name": "theta_uid", "fieldName": "uid" } ] } } } Notice the theta_uid object in the metricsSpec list, that defines the thetaSketch aggregator on the uid column during ingestion. Click Submit to start the ingestion. "},{"title":"Query the Theta sketch column","type":1,"pageTitle":"Approximations with Theta sketches","url":"/docs/27.0.0/tutorials/tutorial-sketches-theta#query-the-theta-sketch-column","content":"Calculating a unique count estimate from a Theta sketch column involves the following steps: Merge the Theta sketches in the column by means of the DS_THETA aggregator function in Druid SQL.Retrieve the estimate from the merged sketch with the THETA_SKETCH_ESTIMATE function. Between steps 1 and 2, you can apply set functions as demonstrated later in Set operations. "},{"title":"Basic counting","type":1,"pageTitle":"Approximations with Theta sketches","url":"/docs/27.0.0/tutorials/tutorial-sketches-theta#basic-counting","content":"Let's first see what the data looks like in Druid. Run the following SQL statement in the query editor: SELECT * FROM ts_tutorial The Theta sketch column theta_uid appears as a Base64-encoded string; behind it is a bitmap. 
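Before grouping by show and episode, you can estimate the total number of distinct users across the whole table; a minimal sketch that combines the two functions described above:
SELECT THETA_SKETCH_ESTIMATE(DS_THETA(theta_uid)) AS unique_users
FROM ts_tutorial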
The following query to compute the distinct counts of user IDs uses APPROX_COUNT_DISTINCT_DS_THETA and groups by the other dimensions: SELECT __time, "show", "episode", APPROX_COUNT_DISTINCT_DS_THETA(theta_uid) AS users FROM ts_tutorial GROUP BY 1, 2, 3 In the preceding query, APPROX_COUNT_DISTINCT_DS_THETA is equivalent to calling DS_THETA and THETA_SKETCH_ESTIMATE as follows: SELECT __time, "show", "episode", THETA_SKETCH_ESTIMATE(DS_THETA(theta_uid)) AS users FROM ts_tutorial GROUP BY 1, 2, 3 That is, APPROX_COUNT_DISTINCT_DS_THETA applies the following: DS_THETA: Creates a new Theta sketch from the column of Theta sketchesTHETA_SKETCH_ESTIMATE: Calculates the distinct count estimate from the output of DS_THETA "},{"title":"Filtered metrics","type":1,"pageTitle":"Approximations with Theta sketches","url":"/docs/27.0.0/tutorials/tutorial-sketches-theta#filtered-metrics","content":"Druid has the capability to use filtered metrics. This means you can attach a FILTER (WHERE ...) clause to an aggregation in the SELECT part of the query. info In the case of Theta sketches, the filter clause has to be inserted between the aggregator and the estimator. As an example, query the total unique users that watched Bridgerton: SELECT THETA_SKETCH_ESTIMATE( DS_THETA(theta_uid) FILTER(WHERE "show" = 'Bridgerton') ) AS users FROM ts_tutorial "},{"title":"Set operations","type":1,"pageTitle":"Approximations with Theta sketches","url":"/docs/27.0.0/tutorials/tutorial-sketches-theta#set-operations","content":"You can use this filtering capability in the aggregator, together with set operations, to answer the questions from the introduction. How many users watched both episodes of Bridgerton? Use THETA_SKETCH_INTERSECT to compute the unique count of the intersection of two (or more) segments: SELECT THETA_SKETCH_ESTIMATE( THETA_SKETCH_INTERSECT( DS_THETA(theta_uid) FILTER(WHERE "show" = 'Bridgerton' AND "episode" = 'S1E1'), DS_THETA(theta_uid) FILTER(WHERE "show" = 'Bridgerton' AND "episode" = 'S1E2') ) ) AS users FROM ts_tutorial Again, the set function is spliced in between the aggregator and the estimator. Likewise, use THETA_SKETCH_UNION to find the number of visitors that watched any of the episodes: SELECT THETA_SKETCH_ESTIMATE( THETA_SKETCH_UNION( DS_THETA(theta_uid) FILTER(WHERE "show" = 'Bridgerton' AND "episode" = 'S1E1'), DS_THETA(theta_uid) FILTER(WHERE "show" = 'Bridgerton' AND "episode" = 'S1E2') ) ) AS users FROM ts_tutorial Finally, there is THETA_SKETCH_NOT, which computes the set difference of two or more segments. The result describes how many visitors watched episode 1 of Bridgerton but not episode 2. SELECT THETA_SKETCH_ESTIMATE( THETA_SKETCH_NOT( DS_THETA(theta_uid) FILTER(WHERE "show" = 'Bridgerton' AND "episode" = 'S1E1'), DS_THETA(theta_uid) FILTER(WHERE "show" = 'Bridgerton' AND "episode" = 'S1E2') ) ) AS users FROM ts_tutorial "},{"title":"Conclusions","type":1,"pageTitle":"Approximations with Theta sketches","url":"/docs/27.0.0/tutorials/tutorial-sketches-theta#conclusions","content":"Counting distinct things for large data sets can be done with Theta sketches in Apache Druid.This allows us to use rollup and discard the individual values, just retaining statistical approximations in the sketches.With Theta sketch set operations, affinity analysis is easier, for example, to answer questions such as which segments correlate or overlap by how much. 
"},{"title":"Further reading","type":1,"pageTitle":"Approximations with Theta sketches","url":"/docs/27.0.0/tutorials/tutorial-sketches-theta#further-reading","content":"See the following topics for more information: Theta sketch for reference on ingestion and native queries on Theta sketches in Druid.Theta sketch scalar functions and Theta sketch aggregation functions for Theta sketch functions in Druid SQL queries.Sketches for high cardinality columns for Druid schema design involving sketches.DataSketches extension for more information about the DataSketches extension in Druid as well as other available sketches.The accuracy of queries using Theta sketches is governed by the size k of the Theta sketch and by the operations you perform. See more details in the Apache DataSketches documentation. "},{"title":"Acknowledgments","type":1,"pageTitle":"Approximations with Theta sketches","url":"/docs/27.0.0/tutorials/tutorial-sketches-theta#acknowledgments","content":"This tutorial is adapted from a blog post by community member Hellmar Becker. Why 15 minutes and not just 1 hour? Intervals of 15 minutes work better with international timezones because those are not always aligned by hour. India, for instance, is 30 minutes off, and Nepal is even 45 minutes off. With 15 minute aggregates, you can get hourly sums for any of those timezones, too!↩ "},{"title":"Write an ingestion spec","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-ingestion-spec","content":"","keywords":""},{"title":"Example data","type":1,"pageTitle":"Write an ingestion spec","url":"/docs/27.0.0/tutorials/tutorial-ingestion-spec#example-data","content":"Suppose we have the following network flow data: srcIP: IP address of sendersrcPort: Port of senderdstIP: IP address of receiverdstPort: Port of receiverprotocol: IP protocol numberpackets: number of packets transmittedbytes: number of bytes transmittedcost: the cost of sending the traffic {"ts":"2018-01-01T01:01:35Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2", "srcPort":2000, "dstPort":3000, "protocol": 6, "packets":10, "bytes":1000, "cost": 1.4} {"ts":"2018-01-01T01:01:51Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2", "srcPort":2000, "dstPort":3000, "protocol": 6, "packets":20, "bytes":2000, "cost": 3.1} {"ts":"2018-01-01T01:01:59Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2", "srcPort":2000, "dstPort":3000, "protocol": 6, "packets":30, "bytes":3000, "cost": 0.4} {"ts":"2018-01-01T01:02:14Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2", "srcPort":5000, "dstPort":7000, "protocol": 6, "packets":40, "bytes":4000, "cost": 7.9} {"ts":"2018-01-01T01:02:29Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2", "srcPort":5000, "dstPort":7000, "protocol": 6, "packets":50, "bytes":5000, "cost": 10.2} {"ts":"2018-01-01T01:03:29Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2", "srcPort":5000, "dstPort":7000, "protocol": 6, "packets":60, "bytes":6000, "cost": 4.3} {"ts":"2018-01-01T02:33:14Z","srcIP":"7.7.7.7", "dstIP":"8.8.8.8", "srcPort":4000, "dstPort":5000, "protocol": 17, "packets":100, "bytes":10000, "cost": 22.4} {"ts":"2018-01-01T02:33:45Z","srcIP":"7.7.7.7", "dstIP":"8.8.8.8", "srcPort":4000, "dstPort":5000, "protocol": 17, "packets":200, "bytes":20000, "cost": 34.5} {"ts":"2018-01-01T02:35:45Z","srcIP":"7.7.7.7", "dstIP":"8.8.8.8", "srcPort":4000, "dstPort":5000, "protocol": 17, "packets":300, "bytes":30000, "cost": 46.3} Save the JSON contents above into a file called ingestion-tutorial-data.json in quickstart/. Let's walk through the process of defining an ingestion spec that can load this data. 
For this tutorial, we will be using the native batch indexing task. When using other task types, some aspects of the ingestion spec will differ, and this tutorial will point out such areas. "},{"title":"Defining the schema","type":1,"pageTitle":"Write an ingestion spec","url":"/docs/27.0.0/tutorials/tutorial-ingestion-spec#defining-the-schema","content":"The core element of a Druid ingestion spec is the dataSchema. The dataSchema defines how to parse input data into a set of columns that will be stored in Druid. Let's start with an empty dataSchema and add fields to it as we progress through the tutorial. Create a new file called ingestion-tutorial-index.json in quickstart/ with the following contents: "dataSchema" : {} We will be making successive edits to this ingestion spec as we progress through the tutorial. "},{"title":"Datasource name","type":1,"pageTitle":"Write an ingestion spec","url":"/docs/27.0.0/tutorials/tutorial-ingestion-spec#datasource-name","content":"The datasource name is specified by the dataSource parameter in the dataSchema. "dataSchema" : { "dataSource" : "ingestion-tutorial", } Let's call the tutorial datasource ingestion-tutorial. "},{"title":"Time column","type":1,"pageTitle":"Write an ingestion spec","url":"/docs/27.0.0/tutorials/tutorial-ingestion-spec#time-column","content":"The dataSchema needs to know how to extract the main timestamp field from the input data. The timestamp column in our input data is named "ts", containing ISO 8601 timestamps, so let's add a timestampSpec with that information to the dataSchema: "dataSchema" : { "dataSource" : "ingestion-tutorial", "timestampSpec" : { "format" : "iso", "column" : "ts" } } "},{"title":"Column types","type":1,"pageTitle":"Write an ingestion spec","url":"/docs/27.0.0/tutorials/tutorial-ingestion-spec#column-types","content":"Now that we've defined the time column, let's look at definitions for other columns. Druid supports the following column types: String, Long, Float, Double. We will see how these are used in the following sections. Before we move on to how we define our other non-time columns, let's discuss rollup first. "},{"title":"Rollup","type":1,"pageTitle":"Write an ingestion spec","url":"/docs/27.0.0/tutorials/tutorial-ingestion-spec#rollup","content":"When ingesting data, we must consider whether we wish to use rollup or not. If rollup is enabled, we will need to separate the input columns into two categories, "dimensions" and "metrics". "Dimensions" are the grouping columns for rollup, while "metrics" are the columns that will be aggregated. If rollup is disabled, then all columns are treated as "dimensions" and no pre-aggregation occurs. For this tutorial, let's enable rollup. This is specified with a granularitySpec on the dataSchema. "dataSchema" : { "dataSource" : "ingestion-tutorial", "timestampSpec" : { "format" : "iso", "column" : "ts" }, "granularitySpec" : { "rollup" : true } } Choosing dimensions and metrics For this example dataset, the following is a sensible split for "dimensions" and "metrics": Dimensions: srcIP, srcPort, dstIP, dstPort, protocolMetrics: packets, bytes, cost The dimensions here are a group of properties that identify a unidirectional flow of IP traffic, while the metrics represent facts about the IP traffic flow specified by a dimension grouping. Let's look at how to define these dimensions and metrics within the ingestion spec. Dimensions Dimensions are specified with a dimensionsSpec inside the dataSchema. 
"dataSchema" : { "dataSource" : "ingestion-tutorial", "timestampSpec" : { "format" : "iso", "column" : "ts" }, "dimensionsSpec" : { "dimensions": [ "srcIP", { "name" : "srcPort", "type" : "long" }, { "name" : "dstIP", "type" : "string" }, { "name" : "dstPort", "type" : "long" }, { "name" : "protocol", "type" : "string" } ] }, "granularitySpec" : { "rollup" : true } } Each dimension has a name and a type, where type can be "long", "float", "double", or "string". Note that srcIP is a "string" dimension; for string dimensions, it is enough to specify just a dimension name, since "string" is the default dimension type. Also note that protocol is a numeric value in the input data, but we are ingesting it as a "string" column; Druid will coerce the input longs to strings during ingestion. Strings vs. Numerics Should a numeric input be ingested as a numeric dimension or as a string dimension? Numeric dimensions have the following pros/cons relative to String dimensions: Pros: Numeric representation can result in smaller column sizes on disk and lower processing overhead when reading values from the columnCons: Numeric dimensions do not have indices, so filtering on them will often be slower than filtering on an equivalent String dimension (which has bitmap indices) Metrics Metrics are specified with a metricsSpec inside the dataSchema: "dataSchema" : { "dataSource" : "ingestion-tutorial", "timestampSpec" : { "format" : "iso", "column" : "ts" }, "dimensionsSpec" : { "dimensions": [ "srcIP", { "name" : "srcPort", "type" : "long" }, { "name" : "dstIP", "type" : "string" }, { "name" : "dstPort", "type" : "long" }, { "name" : "protocol", "type" : "string" } ] }, "metricsSpec" : [ { "type" : "count", "name" : "count" }, { "type" : "longSum", "name" : "packets", "fieldName" : "packets" }, { "type" : "longSum", "name" : "bytes", "fieldName" : "bytes" }, { "type" : "doubleSum", "name" : "cost", "fieldName" : "cost" } ], "granularitySpec" : { "rollup" : true } } When defining a metric, it is necessary to specify what type of aggregation should be performed on that column during rollup. Here we have defined long sum aggregations on the two long metric columns, packets and bytes, and a double sum aggregation for the cost column. Note that the metricsSpec is on a different nesting level than dimensionSpec or parseSpec; it belongs on the same nesting level as parser within the dataSchema. Note that we have also defined a count aggregator. The count aggregator will track how many rows in the original input data contributed to a "rolled up" row in the final ingested data. "},{"title":"No rollup","type":1,"pageTitle":"Write an ingestion spec","url":"/docs/27.0.0/tutorials/tutorial-ingestion-spec#no-rollup","content":"If we were not using rollup, all columns would be specified in the dimensionsSpec, e.g.: "dimensionsSpec" : { "dimensions": [ "srcIP", { "name" : "srcPort", "type" : "long" }, { "name" : "dstIP", "type" : "string" }, { "name" : "dstPort", "type" : "long" }, { "name" : "protocol", "type" : "string" }, { "name" : "packets", "type" : "long" }, { "name" : "bytes", "type" : "long" }, { "name" : "srcPort", "type" : "double" } ] }, "},{"title":"Define granularities","type":1,"pageTitle":"Write an ingestion spec","url":"/docs/27.0.0/tutorials/tutorial-ingestion-spec#define-granularities","content":"At this point, we are done defining the parser and metricsSpec within the dataSchema and we are almost done writing the ingestion spec. 
There are some additional properties we need to set in the granularitySpec: Type of granularitySpec: the uniform granularity spec defines segments with uniform interval sizes. For example, all segments cover an hour's worth of data.The segment granularity: what size of time interval should a single segment contain data for? e.g., DAY, WEEKThe bucketing granularity of the timestamps in the time column (referred to as queryGranularity) Segment granularity Segment granularity is configured by the segmentGranularity property in the granularitySpec. For this tutorial, we'll create hourly segments: "dataSchema" : { "dataSource" : "ingestion-tutorial", "timestampSpec" : { "format" : "iso", "column" : "ts" }, "dimensionsSpec" : { "dimensions": [ "srcIP", { "name" : "srcPort", "type" : "long" }, { "name" : "dstIP", "type" : "string" }, { "name" : "dstPort", "type" : "long" }, { "name" : "protocol", "type" : "string" } ] }, "metricsSpec" : [ { "type" : "count", "name" : "count" }, { "type" : "longSum", "name" : "packets", "fieldName" : "packets" }, { "type" : "longSum", "name" : "bytes", "fieldName" : "bytes" }, { "type" : "doubleSum", "name" : "cost", "fieldName" : "cost" } ], "granularitySpec" : { "type" : "uniform", "segmentGranularity" : "HOUR", "rollup" : true } } Our input data has events from two separate hours, so this task will generate two segments. Query granularity The query granularity is configured by the queryGranularity property in the granularitySpec. For this tutorial, let's use minute granularity: "dataSchema" : { "dataSource" : "ingestion-tutorial", "timestampSpec" : { "format" : "iso", "column" : "ts" }, "dimensionsSpec" : { "dimensions": [ "srcIP", { "name" : "srcPort", "type" : "long" }, { "name" : "dstIP", "type" : "string" }, { "name" : "dstPort", "type" : "long" }, { "name" : "protocol", "type" : "string" } ] }, "metricsSpec" : [ { "type" : "count", "name" : "count" }, { "type" : "longSum", "name" : "packets", "fieldName" : "packets" }, { "type" : "longSum", "name" : "bytes", "fieldName" : "bytes" }, { "type" : "doubleSum", "name" : "cost", "fieldName" : "cost" } ], "granularitySpec" : { "type" : "uniform", "segmentGranularity" : "HOUR", "queryGranularity" : "MINUTE", "rollup" : true } } To see the effect of the query granularity, let's look at this row from the raw input data: {"ts":"2018-01-01T01:03:29Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2", "srcPort":5000, "dstPort":7000, "protocol": 6, "packets":60, "bytes":6000, "cost": 4.3} When this row is ingested with minute queryGranularity, Druid will floor the row's timestamp to minute buckets: {"ts":"2018-01-01T01:03:00Z","srcIP":"1.1.1.1", "dstIP":"2.2.2.2", "srcPort":5000, "dstPort":7000, "protocol": 6, "packets":60, "bytes":6000, "cost": 4.3} Define an interval (batch only) For batch tasks, it is necessary to define a time interval. Input rows with timestamps outside of the time interval will not be ingested. 
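Returning briefly to the minute flooring above, you can check the bucketing with a calculator-style query (no FROM clause); TIME_PARSE and TIME_FLOOR here are standard Druid SQL functions used only to illustrate the behavior, not part of the ingestion spec itself:
SELECT TIME_FLOOR(TIME_PARSE('2018-01-01T01:03:29Z'), 'PT1M') AS "floored"
The result is 2018-01-01T01:03:00.000Z, matching the floored row shown earlier.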
The interval is also specified in the granularitySpec: "dataSchema" : { "dataSource" : "ingestion-tutorial", "timestampSpec" : { "format" : "iso", "column" : "ts" }, "dimensionsSpec" : { "dimensions": [ "srcIP", { "name" : "srcPort", "type" : "long" }, { "name" : "dstIP", "type" : "string" }, { "name" : "dstPort", "type" : "long" }, { "name" : "protocol", "type" : "string" } ] }, "metricsSpec" : [ { "type" : "count", "name" : "count" }, { "type" : "longSum", "name" : "packets", "fieldName" : "packets" }, { "type" : "longSum", "name" : "bytes", "fieldName" : "bytes" }, { "type" : "doubleSum", "name" : "cost", "fieldName" : "cost" } ], "granularitySpec" : { "type" : "uniform", "segmentGranularity" : "HOUR", "queryGranularity" : "MINUTE", "intervals" : ["2018-01-01/2018-01-02"], "rollup" : true } } "},{"title":"Define the task type","type":1,"pageTitle":"Write an ingestion spec","url":"/docs/27.0.0/tutorials/tutorial-ingestion-spec#define-the-task-type","content":"We've now finished defining our dataSchema. The remaining steps are to place the dataSchema we created into an ingestion task spec, and specify the input source. The dataSchema is shared across all task types, but each task type has its own specification format. For this tutorial, we will use the native batch ingestion task: { "type" : "index_parallel", "spec" : { "dataSchema" : { "dataSource" : "ingestion-tutorial", "timestampSpec" : { "format" : "iso", "column" : "ts" }, "dimensionsSpec" : { "dimensions": [ "srcIP", { "name" : "srcPort", "type" : "long" }, { "name" : "dstIP", "type" : "string" }, { "name" : "dstPort", "type" : "long" }, { "name" : "protocol", "type" : "string" } ] }, "metricsSpec" : [ { "type" : "count", "name" : "count" }, { "type" : "longSum", "name" : "packets", "fieldName" : "packets" }, { "type" : "longSum", "name" : "bytes", "fieldName" : "bytes" }, { "type" : "doubleSum", "name" : "cost", "fieldName" : "cost" } ], "granularitySpec" : { "type" : "uniform", "segmentGranularity" : "HOUR", "queryGranularity" : "MINUTE", "intervals" : ["2018-01-01/2018-01-02"], "rollup" : true } } } } "},{"title":"Define the input source","type":1,"pageTitle":"Write an ingestion spec","url":"/docs/27.0.0/tutorials/tutorial-ingestion-spec#define-the-input-source","content":"Now let's define our input source, which is specified in an ioConfig object. Each task type has its own type of ioConfig. To read input data, we need to specify an inputSource. 
The example netflow data we saved earlier needs to be read from a local file, which is configured below: "ioConfig" : { "type" : "index_parallel", "inputSource" : { "type" : "local", "baseDir" : "quickstart/", "filter" : "ingestion-tutorial-data.json" } } "},{"title":"Define the format of the data","type":1,"pageTitle":"Write an ingestion spec","url":"/docs/27.0.0/tutorials/tutorial-ingestion-spec#define-the-format-of-the-data","content":"Since our input data is represented as JSON strings, we'll use an inputFormat of type json: "ioConfig" : { "type" : "index_parallel", "inputSource" : { "type" : "local", "baseDir" : "quickstart/", "filter" : "ingestion-tutorial-data.json" }, "inputFormat" : { "type" : "json" } } { "type" : "index_parallel", "spec" : { "dataSchema" : { "dataSource" : "ingestion-tutorial", "timestampSpec" : { "format" : "iso", "column" : "ts" }, "dimensionsSpec" : { "dimensions": [ "srcIP", { "name" : "srcPort", "type" : "long" }, { "name" : "dstIP", "type" : "string" }, { "name" : "dstPort", "type" : "long" }, { "name" : "protocol", "type" : "string" } ] }, "metricsSpec" : [ { "type" : "count", "name" : "count" }, { "type" : "longSum", "name" : "packets", "fieldName" : "packets" }, { "type" : "longSum", "name" : "bytes", "fieldName" : "bytes" }, { "type" : "doubleSum", "name" : "cost", "fieldName" : "cost" } ], "granularitySpec" : { "type" : "uniform", "segmentGranularity" : "HOUR", "queryGranularity" : "MINUTE", "intervals" : ["2018-01-01/2018-01-02"], "rollup" : true } }, "ioConfig" : { "type" : "index_parallel", "inputSource" : { "type" : "local", "baseDir" : "quickstart/", "filter" : "ingestion-tutorial-data.json" }, "inputFormat" : { "type" : "json" } } } } "},{"title":"Additional tuning","type":1,"pageTitle":"Write an ingestion spec","url":"/docs/27.0.0/tutorials/tutorial-ingestion-spec#additional-tuning","content":"Each ingestion task has a tuningConfig section that allows users to tune various ingestion parameters. As an example, let's add a tuningConfig that sets a target segment size for the native batch ingestion task: "tuningConfig" : { "type" : "index_parallel", "partitionsSpec": { "type": "dynamic", "maxRowsPerSegment" : 5000000 } } Note that each ingestion task has its own type of tuningConfig. 
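One way to see the effect of these tuning settings after ingestion is to inspect the resulting segments in the sys.segments system table; a minimal sketch, assuming the ingestion-tutorial datasource defined in this tutorial:
SELECT "segment_id", "num_rows", "size"
FROM sys.segments
WHERE datasource = 'ingestion-tutorial'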
"},{"title":"Final spec","type":1,"pageTitle":"Write an ingestion spec","url":"/docs/27.0.0/tutorials/tutorial-ingestion-spec#final-spec","content":"We've finished defining the ingestion spec, it should now look like the following: { "type" : "index_parallel", "spec" : { "dataSchema" : { "dataSource" : "ingestion-tutorial", "timestampSpec" : { "format" : "iso", "column" : "ts" }, "dimensionsSpec" : { "dimensions": [ "srcIP", { "name" : "srcPort", "type" : "long" }, { "name" : "dstIP", "type" : "string" }, { "name" : "dstPort", "type" : "long" }, { "name" : "protocol", "type" : "string" } ] }, "metricsSpec" : [ { "type" : "count", "name" : "count" }, { "type" : "longSum", "name" : "packets", "fieldName" : "packets" }, { "type" : "longSum", "name" : "bytes", "fieldName" : "bytes" }, { "type" : "doubleSum", "name" : "cost", "fieldName" : "cost" } ], "granularitySpec" : { "type" : "uniform", "segmentGranularity" : "HOUR", "queryGranularity" : "MINUTE", "intervals" : ["2018-01-01/2018-01-02"], "rollup" : true } }, "ioConfig" : { "type" : "index_parallel", "inputSource" : { "type" : "local", "baseDir" : "quickstart/", "filter" : "ingestion-tutorial-data.json" }, "inputFormat" : { "type" : "json" } }, "tuningConfig" : { "type" : "index_parallel", "partitionsSpec": { "type": "dynamic", "maxRowsPerSegment" : 5000000 } } } } "},{"title":"Submit the task and query the data","type":1,"pageTitle":"Write an ingestion spec","url":"/docs/27.0.0/tutorials/tutorial-ingestion-spec#submit-the-task-and-query-the-data","content":"From the apache-druid-27.0.0 package root, run the following command: bin/post-index-task --file quickstart/ingestion-tutorial-index.json --url http://localhost:8081 After the script completes, we will query the data. Let's run bin/dsql and issue a select * from "ingestion-tutorial"; query to see what data was ingested. $ bin/dsql Welcome to dsql, the command-line client for Druid SQL. Type "\\h" for help. dsql> select * from "ingestion-tutorial"; ┌──────────────────────────┬───────┬──────┬───────┬─────────┬─────────┬─────────┬──────────┬─────────┬─────────┐ │ __time │ bytes │ cost │ count │ dstIP │ dstPort │ packets │ protocol │ srcIP │ srcPort │ ├──────────────────────────┼───────┼──────┼───────┼─────────┼─────────┼─────────┼──────────┼─────────┼─────────┤ │ 2018-01-01T01:01:00.000Z │ 6000 │ 4.9 │ 3 │ 2.2.2.2 │ 3000 │ 60 │ 6 │ 1.1.1.1 │ 2000 │ │ 2018-01-01T01:02:00.000Z │ 9000 │ 18.1 │ 2 │ 2.2.2.2 │ 7000 │ 90 │ 6 │ 1.1.1.1 │ 5000 │ │ 2018-01-01T01:03:00.000Z │ 6000 │ 4.3 │ 1 │ 2.2.2.2 │ 7000 │ 60 │ 6 │ 1.1.1.1 │ 5000 │ │ 2018-01-01T02:33:00.000Z │ 30000 │ 56.9 │ 2 │ 8.8.8.8 │ 5000 │ 300 │ 17 │ 7.7.7.7 │ 4000 │ │ 2018-01-01T02:35:00.000Z │ 30000 │ 46.3 │ 1 │ 8.8.8.8 │ 5000 │ 300 │ 17 │ 7.7.7.7 │ 4000 │ └──────────────────────────┴───────┴──────┴───────┴─────────┴─────────┴─────────┴──────────┴─────────┴─────────┘ Retrieved 5 rows in 0.12s. dsql> "},{"title":"Unnest arrays within a column","type":0,"sectionRef":"#","url":"/docs/27.0.0/tutorials/tutorial-unnest-arrays","content":"","keywords":""},{"title":"Prerequisites","type":1,"pageTitle":"Unnest arrays within a column","url":"/docs/27.0.0/tutorials/tutorial-unnest-arrays#prerequisites","content":"You need a Druid cluster, such as the quickstart. The cluster does not need any existing datasources. You'll load a basic one as part of this tutorial. 
"},{"title":"Load data with nested values","type":1,"pageTitle":"Unnest arrays within a column","url":"/docs/27.0.0/tutorials/tutorial-unnest-arrays#load-data-with-nested-values","content":"The data you're ingesting contains a handful of rows that resemble the following: t:2000-01-01, m1:1.0, m2:1.0, dim1:, dim2:[a], dim3:[a,b], dim4:[x,y], dim5:[a,b] The focus of this tutorial is on the nested array of values in dim3. You can load this data by running a query for SQL-based ingestion or submitting a JSON-based ingestion spec. The example loads data into a table named nested_data: SQL-based ingestionIngestion spec REPLACE INTO nested_data OVERWRITE ALL SELECT TIME_PARSE("t") as __time, dim1, dim2, dim3, dim4, dim5, m1, m2 FROM TABLE( EXTERN( '{"type":"inline","data":"{\\"t\\":\\"2000-01-01\\",\\"m1\\":\\"1.0\\",\\"m2\\":\\"1.0\\",\\"dim1\\":\\"\\",\\"dim2\\":[\\"a\\"],\\"dim3\\":[\\"a\\",\\"b\\"],\\"dim4\\":[\\"x\\",\\"y\\"],\\"dim5\\":[\\"a\\",\\"b\\"]},\\n{\\"t\\":\\"2000-01-02\\",\\"m1\\":\\"2.0\\",\\"m2\\":\\"2.0\\",\\"dim1\\":\\"10.1\\",\\"dim2\\":[],\\"dim3\\":[\\"c\\",\\"d\\"],\\"dim4\\":[\\"e\\",\\"f\\"],\\"dim5\\":[\\"a\\",\\"b\\",\\"c\\",\\"d\\"]},\\n{\\"t\\":\\"2001-01-03\\",\\"m1\\":\\"6.0\\",\\"m2\\":\\"6.0\\",\\"dim1\\":\\"abc\\",\\"dim2\\":[\\"a\\"],\\"dim3\\":[\\"k\\",\\"l\\"]},\\n{\\"t\\":\\"2001-01-01\\",\\"m1\\":\\"4.0\\",\\"m2\\":\\"4.0\\",\\"dim1\\":\\"1\\",\\"dim2\\":[\\"a\\"],\\"dim3\\":[\\"g\\",\\"h\\"]},\\n{\\"t\\":\\"2001-01-02\\",\\"m1\\":\\"5.0\\",\\"m2\\":\\"5.0\\",\\"dim1\\":\\"def\\",\\"dim2\\":[\\"abc\\"],\\"dim3\\":[\\"i\\",\\"j\\"]},\\n{\\"t\\":\\"2001-01-03\\",\\"m1\\":\\"6.0\\",\\"m2\\":\\"6.0\\",\\"dim1\\":\\"abc\\",\\"dim2\\":[\\"a\\"],\\"dim3\\":[\\"k\\",\\"l\\"]},\\n{\\"t\\":\\"2001-01-02\\",\\"m1\\":\\"5.0\\",\\"m2\\":\\"5.0\\",\\"dim1\\":\\"def\\",\\"dim2\\":[\\"abc\\"],\\"dim3\\":[\\"m\\",\\"n\\"]}"}', '{"type":"json"}', '[{"name":"t","type":"string"},{"name":"dim1","type":"string"},{"name":"dim2","type":"string"},{"name":"dim3","type":"string"},{"name":"dim4","type":"string"},{"name":"dim5","type":"string"},{"name":"m1","type":"float"},{"name":"m2","type":"double"}]' ) ) PARTITIONED BY YEAR "},{"title":"View the data","type":1,"pageTitle":"Unnest arrays within a column","url":"/docs/27.0.0/tutorials/tutorial-unnest-arrays#view-the-data","content":"Now that the data is loaded, run the following query: SELECT * FROM nested_data In the results, notice that the column named dim3 has nested values like ["a","b"]. The example queries that follow unnest dim3 and run queries against the unnested records. Depending on the type of queries you write, see either Unnest using SQL queries or Unnest using native queries. "},{"title":"Unnest using SQL queries","type":1,"pageTitle":"Unnest arrays within a column","url":"/docs/27.0.0/tutorials/tutorial-unnest-arrays#unnest-using-sql-queries","content":"The following is the general syntax for UNNEST: SELECT column_alias_name FROM datasource, UNNEST(source_expression) AS table_alias_name(column_alias_name) In addition, you must supply the following context parameter: "enableUnnest": "true" For more information about the syntax, see UNNEST. "},{"title":"Unnest a single source expression in a datasource","type":1,"pageTitle":"Unnest arrays within a column","url":"/docs/27.0.0/tutorials/tutorial-unnest-arrays#unnest-a-single-source-expression-in-a-datasource","content":"The following query returns a column called d3 from the table nested_data. 
d3 contains the unnested values from the source column dim3: SELECT d3 FROM "nested_data", UNNEST(MV_TO_ARRAY(dim3)) AS example_table(d3) Notice the MV_TO_ARRAY helper function, which converts the multi-value records in dim3 to arrays. It is required since dim3 is a multi-value string dimension. If the column you are unnesting is not a string dimension, then you do not need to use the MV_TO_ARRAY helper function. "},{"title":"Unnest a virtual column","type":1,"pageTitle":"Unnest arrays within a column","url":"/docs/27.0.0/tutorials/tutorial-unnest-arrays#unnest-a-virtual-column","content":"You can unnest into a virtual column (multiple columns treated as one). The following query returns the two source columns and a third virtual column containing the unnested data: SELECT dim4,dim5,d45 FROM nested_data, UNNEST(ARRAY[dim4,dim5]) AS example_table(d45) The virtual column d45 is built from the two source columns. Notice how the total number of rows has grown. The table nested_data had only seven rows originally. Another way to unnest a virtual column is to concatenate the source columns with ARRAY_CONCAT: SELECT dim4,dim5,d45 FROM nested_data, UNNEST(ARRAY_CONCAT(dim4,dim5)) AS example_table(d45) Decide which method to use based on your goals. "},{"title":"Unnest multiple source expressions","type":1,"pageTitle":"Unnest arrays within a column","url":"/docs/27.0.0/tutorials/tutorial-unnest-arrays#unnest-multiple-source-expressions","content":"You can include multiple UNNEST clauses in a single query. Each UNNEST clause needs the following: UNNEST(source_expression) AS table_alias_name(column_alias_name) The table_alias_name and column_alias_name for each UNNEST clause should be unique. The example query returns the following from the nested_data datasource: the source columns dim3, dim4, and dim5; an unnested version of dim3 aliased to d3; and an unnested virtual column composed of dim4 and dim5 aliased to d45. SELECT dim3,dim4,dim5,d3,d45 FROM "nested_data", UNNEST(MV_TO_ARRAY("dim3")) AS foo1(d3), UNNEST(ARRAY[dim4,dim5]) AS foo2(d45) "},{"title":"Unnest a column from a subset of a table","type":1,"pageTitle":"Unnest arrays within a column","url":"/docs/27.0.0/tutorials/tutorial-unnest-arrays#unnest-a-column-from-a-subset-of-a-table","content":"The following query uses only three columns from the nested_data table as the datasource. From that subset, it unnests the column dim3 into d3 and returns d3. SELECT d3 FROM (SELECT dim1, dim2, dim3 FROM "nested_data"), UNNEST(MV_TO_ARRAY(dim3)) AS example_table(d3) "},{"title":"Unnest with a filter","type":1,"pageTitle":"Unnest arrays within a column","url":"/docs/27.0.0/tutorials/tutorial-unnest-arrays#unnest-with-a-filter","content":"You can specify which rows to unnest by including a filter in your query. The following query filters the source expression based on dim2, unnests the records in dim3 into d3, and returns the records for the unnested d3 that have a dim2 record that matches the filter: SELECT d3 FROM (SELECT * FROM nested_data WHERE dim2 IN ('abc')), UNNEST(MV_TO_ARRAY(dim3)) AS example_table(d3) You can also filter the results of an UNNEST clause. The following example unnests the inline array [1,2,3] but only returns the rows that match the filter: SELECT * FROM UNNEST(ARRAY[1,2,3]) AS example_table(d1) WHERE d1 IN ('1','2') This means that you can run a query like the following, where Druid only returns rows that meet both of these conditions: the unnested value of dim3 (aliased to d3) matches IN ('b', 'd'), and the value of m1 is less than 2.
SELECT * FROM nested_data, UNNEST(MV_TO_ARRAY("dim3")) AS foo(d3) WHERE d3 IN ('b', 'd') AND m1 < 2 The query only returns a single row since only one row meets the conditions. You can see the results change if you modify the filter. "},{"title":"Unnest and then GROUP BY","type":1,"pageTitle":"Unnest arrays within a column","url":"/docs/27.0.0/tutorials/tutorial-unnest-arrays#unnest-and-then-group-by","content":"The following query unnests dim3 and then performs a GROUP BY on the output d3. SELECT d3 FROM nested_data, UNNEST(MV_TO_ARRAY(dim3)) AS example_table(d3) GROUP BY d3 You can further transform your results by including clauses like ORDER BY d3 DESC or LIMIT. "},{"title":"Unnest using native queries","type":1,"pageTitle":"Unnest arrays within a column","url":"/docs/27.0.0/tutorials/tutorial-unnest-arrays#unnest-using-native-queries","content":"The following section shows examples of how you can use the unnest datasource in queries. They all use the nested_data table you created earlier in the tutorial. You can use a single unnest datasource to unnest multiple columns. Be careful when doing this, though, because it can lead to a very large number of new rows. "},{"title":"Scan query","type":1,"pageTitle":"Unnest arrays within a column","url":"/docs/27.0.0/tutorials/tutorial-unnest-arrays#scan-query","content":"The following native Scan query returns the rows of the datasource and unnests the values in the dim3 column by using the unnest datasource type: { "queryType": "scan", "dataSource": { "type": "unnest", "base": { "type": "table", "name": "nested_data" }, "virtualColumn": { "type": "expression", "name": "unnest-dim3", "expression": "\\"dim3\\"" } }, "intervals": { "type": "intervals", "intervals": [ "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z" ] }, "limit": 100, "columns": [ "__time", "dim1", "dim2", "dim3", "m1", "m2", "unnest-dim3" ], "legacy": false, "granularity": { "type": "all" }, "context": { "debug": true, "useCache": false } } In the results, notice that there are more rows than before and an additional column named unnest-dim3. The values of unnest-dim3 are the same as the dim3 column except the nested values are no longer nested and are each a separate record. You can also apply filters. For example, you can add the following to the Scan query to filter results to only rows that have the values "a" or "abc" in "dim2": "filter": { "type": "in", "dimension": "dim2", "values": [ "a", "abc" ] }, "},{"title":"groupBy query","type":1,"pageTitle":"Unnest arrays within a column","url":"/docs/27.0.0/tutorials/tutorial-unnest-arrays#groupby-query","content":"The following query returns an unnested version of the column dim3 as the column unnest-dim3, sorted in descending order. { "queryType": "groupBy", "dataSource": { "type": "unnest", "base": "nested_data", "virtualColumn": { "type": "expression", "name": "unnest-dim3", "expression": "\\"dim3\\"" } }, "intervals": ["-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"], "granularity": "all", "dimensions": [ "unnest-dim3" ], "limitSpec": { "type": "default", "columns": [ { "dimension": "unnest-dim3", "direction": "descending" } ], "limit": 1001 }, "context": { "debug": true } } "},{"title":"topN query","type":1,"pageTitle":"Unnest arrays within a column","url":"/docs/27.0.0/tutorials/tutorial-unnest-arrays#topn-query","content":"The example topN query unnests dim3 into the column unnest-dim3. The query uses the unnested column as the dimension for the topN query.
The results are output to a column named topN-unnest-d3 and are sorted numerically in ascending order based on the column a0, an aggregate value representing the minimum of m1. { "queryType": "topN", "dataSource": { "type": "unnest", "base": { "type": "table", "name": "nested_data" }, "virtualColumn": { "type": "expression", "name": "unnest-dim3", "expression": "\\"dim3\\"" } }, "dimension": { "type": "default", "dimension": "unnest-dim3", "outputName": "topN-unnest-d3", "outputType": "STRING" }, "metric": { "type": "inverted", "metric": { "type": "numeric", "metric": "a0" } }, "threshold": 3, "intervals": { "type": "intervals", "intervals": [ "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z" ] }, "granularity": { "type": "all" }, "aggregations": [ { "type": "floatMin", "name": "a0", "fieldName": "m1" } ], "context": { "debug": true } } "},{"title":"Unnest with a JOIN query","type":1,"pageTitle":"Unnest arrays within a column","url":"/docs/27.0.0/tutorials/tutorial-unnest-arrays#unnest-with-a-join-query","content":"This query joins the nested_data table with itself and outputs the unnested data into a new column called unnest-dim3. { "queryType": "scan", "dataSource": { "type": "unnest", "base": { "type": "join", "left": { "type": "table", "name": "nested_data" }, "right": { "type": "query", "query": { "queryType": "scan", "dataSource": { "type": "table", "name": "nested_data" }, "intervals": { "type": "intervals", "intervals": [ "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z" ] }, "virtualColumns": [ { "type": "expression", "name": "v0", "expression": "\\"m2\\"", "outputType": "FLOAT" } ], "resultFormat": "compactedList", "columns": [ "__time", "dim1", "dim2", "dim3", "m1", "m2", "v0" ], "legacy": false, "context": { "sqlOuterLimit": 1001, "useNativeQueryExplain": true }, "granularity": { "type": "all" } } }, "rightPrefix": "j0.", "condition": "(\\"m1\\" == \\"j0.v0\\")", "joinType": "INNER" }, "virtualColumn": { "type": "expression", "name": "unnest-dim3", "expression": "\\"dim3\\"" } }, "intervals": { "type": "intervals", "intervals": [ "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z" ] }, "resultFormat": "compactedList", "limit": 1001, "columns": [ "__time", "dim1", "dim2", "dim3", "j0.__time", "j0.dim1", "j0.dim2", "j0.dim3", "j0.m1", "j0.m2", "m1", "m2", "unnest-dim3" ], "legacy": false, "context": { "sqlOuterLimit": 1001, "useNativeQueryExplain": true }, "granularity": { "type": "all" } } "},{"title":"Unnest a virtual column","type":1,"pageTitle":"Unnest arrays within a column","url":"/docs/27.0.0/tutorials/tutorial-unnest-arrays#unnest-a-virtual-column-1","content":"The unnest datasource supports unnesting virtual columns, which are queryable composite columns that can draw data from multiple source columns. The following query returns the columns dim45 and m1. The dim45 column is the unnested version of a virtual column that contains an array of the dim4 and dim5 columns.
Show the query { "queryType": "scan", "dataSource":{ "type": "unnest", "base": { "type": "table", "name": "nested_data" }, "virtualColumn": { "type": "expression", "name": "dim45", "expression": "array_concat(\\"dim4\\",\\"dim5\\")", "outputType": "ARRAY<STRING>" }, } "intervals": { "type": "intervals", "intervals": [ "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z" ] }, "resultFormat": "compactedList", "limit": 1001, "columns": [ "dim45", "m1" ], "legacy": false, "granularity": { "type": "all" }, "context": { "debug": true, "useCache": false } } "},{"title":"Unnest a column and a virtual column","type":1,"pageTitle":"Unnest arrays within a column","url":"/docs/27.0.0/tutorials/tutorial-unnest-arrays#unnest-a-column-and-a-virtual-column","content":"The following Scan query unnests the column dim3 into d3 and a virtual column composed of dim4 and dim5 into the column d45. It then returns those source columns and their unnested variants. Show the query { "queryType": "scan", "dataSource": { "type": "unnest", "base": { "type": "unnest", "base": { "type": "table", "name": "nested_data" }, "virtualColumn": { "type": "expression", "name": "d3", "expression": "\\"dim3\\"", "outputType": "STRING" }, }, "virtualColumn": { "type": "expression", "name": "d45", "expression": "array(\\"dim4\\",\\"dim5\\")", "outputType": "ARRAY<STRING>" }, }, "intervals": { "type": "intervals", "intervals": [ "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z" ] }, "resultFormat": "compactedList", "limit": 1001, "columns": [ "dim3", "d3", "dim4", "dim5", "d45" ], "legacy": false, "context": { "enableUnnest": "true", "queryId": "2618b9ce-6c0d-414e-b88d-16fb59b9c481", "sqlOuterLimit": 1001, "sqlQueryId": "2618b9ce-6c0d-414e-b88d-16fb59b9c481", "useNativeQueryExplain": true }, "granularity": { "type": "all" } } "},{"title":"Learn more","type":1,"pageTitle":"Unnest arrays within a column","url":"/docs/27.0.0/tutorials/tutorial-unnest-arrays#learn-more","content":"For more information, see the following: UNNEST SQL functionunnest in Datasources "}] |