Apache Druid stores data partitioned by time chunk and supports deleting data for time chunks by dropping segments. This is a fast, metadata-only operation.
Deletion by time range happens in two steps:

1. Mark segments unused. This is a soft delete: the data is no longer available for querying, but the segment file remains in deep storage and the segment record remains in the metadata store.
2. Run a `kill` task to permanently delete the segment file from deep storage and remove its record from the metadata store. This is a hard delete: the data is unrecoverable unless you have a backup.

For documentation on disabling segments using the Coordinator API, see the Legacy metadata API reference.
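For example, a minimal sketch of step 1 via the Coordinator API, assuming a hypothetical datasource named `wikipedia` and a Coordinator at `localhost:8081` (consult the API reference above for authoritative endpoint details):

```bash
# Soft delete: mark all segments in the given interval as unused.
# The datasource name and interval are placeholders.
curl -X POST "http://localhost:8081/druid/coordinator/v1/datasources/wikipedia/markUnused" \
  -H "Content-Type: application/json" \
  -d '{"interval": "2015-09-12/2015-09-13"}'
```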
A data deletion tutorial is available at Tutorial: Deleting data.
Druid supports load and drop rules, which are used to define intervals of time where data should be preserved, and intervals where data should be discarded. Data that falls under a drop rule is marked unused, in the same manner as if you manually mark that time range unused. This is a fast, metadata-only operation.
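For illustration, a minimal sketch of a rule set that keeps the most recent three months of data and drops everything older. The period, tier name, and replicant count here are assumptions; rules are evaluated from top to bottom:

```json
[
  {
    "type": "loadByPeriod",
    "period": "P3M",
    "includeFuture": true,
    "tieredReplicants": { "_default_tier": 2 }
  },
  { "type": "dropForever" }
]
```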
Data that is dropped in this way is marked unused, but remains in deep storage. To permanently delete it, use a `kill` task.
Druid supports deleting specific records using reindexing with a filter. The filter specifies which data remains after reindexing, so it must be the inverse of the data you want to delete. Because segments must be rewritten to delete data in this way, it can be a time-consuming operation.
For example, to delete records where `userName` is `'bob'` with native batch indexing, use a `transformSpec` with filter `{"type": "not", "field": {"type": "selector", "dimension": "userName", "value": "bob"}}`.
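As a sketch, the relevant part of the ingestion spec would carry this filter inside the `dataSchema` as follows; the datasource name is hypothetical and the rest of the spec is omitted:

```json
{
  "dataSchema": {
    "dataSource": "my_datasource",
    "transformSpec": {
      "filter": {
        "type": "not",
        "field": { "type": "selector", "dimension": "userName", "value": "bob" }
      }
    }
  }
}
```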
To delete the same records using SQL, use `REPLACE` with `WHERE userName <> 'bob'`.
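For example, a sketch of the equivalent SQL statement, assuming a hypothetical table named `my_table` that is repartitioned by day:

```sql
REPLACE INTO "my_table" OVERWRITE ALL
SELECT *
FROM "my_table"
WHERE "userName" <> 'bob'
PARTITIONED BY DAY
```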
To reindex using native batch, use the `druid` input source. If needed, a `transformSpec` can be used to filter or modify data during the reindexing job. To reindex with SQL, use `REPLACE <table> OVERWRITE` with `SELECT ... FROM <table>`. (Druid does not have `UPDATE` or `ALTER TABLE` statements.) Any SQL `SELECT` query can be used to filter, modify, or enrich the data during the reindexing job.
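A minimal sketch of the `ioConfig` for such a native reindexing job, reading back from the same datasource via the `druid` input source (the datasource name and interval are hypothetical):

```json
{
  "ioConfig": {
    "type": "index_parallel",
    "inputSource": {
      "type": "druid",
      "dataSource": "my_datasource",
      "interval": "2020-01-01/2021-01-01"
    }
  }
}
```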
Data that is deleted in this way is marked unused, but remains in deep storage. To permanently delete it, use a `kill` task.
Deleting an entire table works the same way as deleting part of a table by time range. First, mark all segments unused using the Coordinator API or web console. Then, optionally, delete it permanently using a `kill` task.
Data that has been overwritten or soft-deleted still remains as segments that have been marked unused. You can use a `kill` task to permanently delete this data.
The available grammar is:
{ "type": "kill", "id": <task_id>, "dataSource": <task_datasource>, "interval" : <all_unused_segments_in_this_interval_will_die!>, "versions" : <optional_list_of_segment_versions_to_delete_in_this_interval>, "context": <task_context>, "batchSize": <optional_batch_size>, "limit": <optional_maximum_number_of_segments_to_delete>, "maxUsedStatusLastUpdatedTime": <optional_maximum_timestamp_when_segments_were_marked_as_unused> }
Some of the parameters used in the task payload are further explained below:
| Parameter | Default | Explanation |
|---|---|---|
| `versions` | null (all versions) | List of segment versions within the specified interval for the kill task to delete. The default behavior is to delete all unused segment versions in the specified interval. |
| `batchSize` | 100 | Maximum number of segments deleted in one kill batch. Some operations on the Overlord may get stuck while a kill task is in progress due to concurrency constraints (such as in `TaskLockbox`). A kill task therefore splits the list of unused segments to be deleted into smaller batches to intermittently yield Overlord resources to other task operations. |
| `limit` | null (no limit) | Maximum number of segments for the kill task to delete. |
| `maxUsedStatusLastUpdatedTime` | null (no cutoff) | Maximum timestamp used as a cutoff to include unused segments. The kill task only considers segments that lie in the specified interval and were marked as unused no later than this time. The default behavior is to kill all unused segments in the interval regardless of when they were marked as unused. |
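Putting the grammar and parameters together, a sketch of a concrete kill task payload; the ID, datasource, interval, and timestamp are hypothetical:

```json
{
  "type": "kill",
  "id": "kill_my_datasource_2020",
  "dataSource": "my_datasource",
  "interval": "2020-01-01/2021-01-01",
  "batchSize": 100,
  "limit": 500,
  "maxUsedStatusLastUpdatedTime": "2023-01-01T00:00:00.000Z"
}
```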
WARNING: The `kill` task permanently removes all information about the affected segments from the metadata store and deep storage. This operation cannot be undone.
Instead of submitting `kill` tasks manually to permanently delete data for a given interval, you can enable auto-kill of unused segments on the Coordinator. The Coordinator runs a duty periodically to identify intervals containing unused segments that are eligible for kill. It then launches a `kill` task for each of these intervals.
Refer to Data management on the Coordinator to configure auto-kill of unused segments on the Coordinator.
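As a rough sketch, the Coordinator-side configuration typically looks like the following runtime properties (property names per the Coordinator configuration reference; defaults and availability vary by Druid version):

```properties
# Enable the Coordinator duty that submits kill tasks for unused segments.
druid.coordinator.kill.on=true
# How often the kill duty runs.
druid.coordinator.kill.period=P1D
# Only segments that have been unused for longer than this are eligible.
druid.coordinator.kill.durationToRetain=P90D
# Maximum number of segments to kill per duty run.
druid.coordinator.kill.maxSegments=100
```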
:::info
This is an experimental feature that runs kill tasks in an “embedded” mode on the Overlord itself. These embedded tasks offer several advantages over auto-kill performed by the Coordinator.
:::
Refer to Auto-kill unused segments on the Overlord to configure auto-kill of unused segments on the Overlord. See Auto-kill metrics for the metrics emitted by embedded kill tasks.