Druid can roll up data at ingestion time to reduce the amount of raw data to store on disk. Rollup is a form of summarization or pre-aggregation. Rolling up data can dramatically reduce the size of data to be stored and reduce row counts by potentially orders of magnitude. As a trade-off for the efficiency of rollup, you lose the ability to query individual events.
At ingestion time, you control rollup with the `rollup` setting in the `granularitySpec`. Rollup is enabled by default. This means Druid combines into a single row any rows that have identical dimension values and timestamp values after `queryGranularity`-based truncation.
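For example, a `granularitySpec` that enables rollup might look like the following sketch. The field names are standard `granularitySpec` fields; the granularity values are illustrative and depend on your data:

```json
{
  "granularitySpec": {
    "type": "uniform",
    "segmentGranularity": "day",
    "queryGranularity": "minute",
    "rollup": true
  }
}
```

With this spec, Druid truncates each timestamp to the minute before comparing rows, so rows that match on all dimension values within the same minute collapse into one.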
When you disable rollup, Druid loads each row as-is without doing any form of pre-aggregation. This mode is similar to databases that do not support a rollup feature. Set `rollup` to `false` if you want Druid to store each record as-is, without any rollup summarization.
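To illustrate, consider three hypothetical network flow events with identical dimension values and timestamps in the same minute:

| timestamp | srcIP | dstIP | bytes |
|---|---|---|---|
| 2024-01-01T01:01:11Z | 1.1.1.1 | 2.2.2.2 | 100 |
| 2024-01-01T01:01:35Z | 1.1.1.1 | 2.2.2.2 | 200 |
| 2024-01-01T01:01:59Z | 1.1.1.1 | 2.2.2.2 | 300 |

With rollup enabled, minute-level `queryGranularity`, a `count` metric named `num_rows`, and a `longSum` over `bytes`, Druid stores a single row:

| __time | srcIP | dstIP | num_rows | bytes |
|---|---|---|---|---|
| 2024-01-01T01:01:00Z | 1.1.1.1 | 2.2.2.2 | 3 | 600 |

With rollup disabled, Druid stores all three rows unchanged.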
Use rollup when creating a table datasource if both:

- You want optimal performance or you have strict space constraints.
- You don't need raw values from high-cardinality dimensions.

Conversely, disable rollup if either:

- You need results for individual rows.
- You need to execute `GROUP BY` or `WHERE` queries on any column.

If you have conflicting needs for different use cases, you can create multiple tables with a different rollup configuration on each table.
To measure the rollup ratio of a datasource, compare the number of rows stored in Druid (`COUNT`) with the number of ingested events. For example, run the following Druid SQL query, where `num_rows` is a `count`-type metric generated at ingestion time:
```sql
SELECT SUM("num_rows") / (COUNT(*) * 1.0) FROM datasource
```
The higher the result, the greater the benefit you gain from rollup. See Counting the number of ingested events for more details about how counting works when rollup is enabled.
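For example, if 1,000,000 ingested events roll up into 10,000 stored rows, the query returns 100, meaning each Druid row summarizes 100 events on average. A result close to 1 indicates that rollup is providing little benefit for that datasource.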
Tips for maximizing rollup:

- Adjust your `queryGranularity` at ingestion time to increase the chances that multiple rows in Druid have matching timestamps. For example, use five-minute query granularity (`PT5M`) instead of one minute (`PT1M`).
Depending on the ingestion method, Druid has the following rollup options:

- Perfect rollup: Druid perfectly aggregates input data at ingestion time.
- Best-effort rollup: Druid may not perfectly aggregate input data, so multiple segments might contain rows with the same timestamp and dimension values.
In general, ingestion methods that offer best-effort rollup do this for one of the following reasons:

- The ingestion method parallelizes ingestion without the shuffling step that perfect rollup would require.
- The ingestion method uses incremental publishing, meaning it finalizes and publishes segments before it has received all of the data for a time chunk.

In both cases, records that could theoretically be rolled up together may end up in different segments.
Ingestion methods that guarantee perfect rollup use an additional preprocessing step to determine intervals and partitioning before data ingestion. This preprocessing step scans the entire input dataset. While this step increases the time required for ingestion, it provides information necessary for perfect rollup.
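With native batch ingestion, for example, perfect rollup is a matter of configuration. A minimal sketch of an `index_parallel` `tuningConfig` follows; the `numShards` value is illustrative:

```json
{
  "tuningConfig": {
    "type": "index_parallel",
    "forceGuaranteedRollup": true,
    "partitionsSpec": {
      "type": "hashed",
      "numShards": 4
    }
  }
}
```

Setting `forceGuaranteedRollup` to `true` requires a partitioning scheme, such as `hashed`, that supports the preprocessing and shuffle described above.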
The following table shows how each method handles rollup:
| Method | How it works |
|---|---|
| Native batch | `index_parallel` and `index` task types may be either perfect or best-effort, based on configuration. |
| SQL-based batch | Always perfect. |
| Hadoop | Always perfect. |
| Kafka indexing service | Always best-effort. |
| Kinesis indexing service | Always best-effort. |
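In SQL-based batch ingestion, for instance, rollup is expressed through `GROUP BY`: grouping on the truncated timestamp and the dimensions pre-aggregates rows at ingestion time. The following sketch is illustrative; the datasource, input, and column names are hypothetical:

```sql
-- Hypothetical SQL-based ingestion query; the GROUP BY produces the rollup.
INSERT INTO "rollup_example"
SELECT
  TIME_FLOOR(TIME_PARSE("timestamp"), 'PT5M') AS "__time", -- truncate to 5-minute buckets
  "srcIP",
  "dstIP",
  COUNT(*) AS "num_rows",  -- ingested events per rolled-up row
  SUM("bytes") AS "bytes"  -- pre-aggregated metric
FROM TABLE(
  EXTERN(
    '{"type": "inline", "data": "{\"timestamp\": \"2024-01-01T01:01:11Z\", \"srcIP\": \"1.1.1.1\", \"dstIP\": \"2.2.2.2\", \"bytes\": 100}"}',
    '{"type": "json"}',
    '[{"name": "timestamp", "type": "string"}, {"name": "srcIP", "type": "string"}, {"name": "dstIP", "type": "string"}, {"name": "bytes", "type": "long"}]'
  )
)
GROUP BY 1, 2, 3
PARTITIONED BY DAY
```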
See the following topic for more information: