When you delete some entities from Apache Druid, records related to the entity may remain in the metadata store, including:

- segment records
- audit records
- supervisor records
- rule records
- compaction configuration records
- datasource records
- indexer task logs
If you have a high datasource churn rate, meaning you frequently create and delete many short-lived datasources or other related entities like compaction configuration or rules, the leftover records can start to fill your metadata store and cause performance issues.
To maintain metadata store performance in this case, you can configure Apache Druid to automatically remove records associated with deleted entities from the metadata store.
There are several cases when you should consider automated cleanup of the metadata related to deleted datasources, such as the high-churn scenario described above.
If you have compliance requirements to keep audit records and you enable automated cleanup for audit records, use alternative methods to preserve audit metadata, for example, by periodically exporting audit metadata records to external storage.
By default, automatic cleanup for metadata is disabled. See Metadata storage for the default configuration settings after you enable the feature.
You can configure cleanup on a per-entity basis with the following constraints:

- You must configure a kill task for segments before automated cleanup can remove rule records or compaction configuration records.
- Schedule the metadata store management period to run at the same or higher frequency than your most frequent cleanup job. For example:

```
druid.coordinator.period.metadataStoreManagementPeriod=PT1H
```

For details on configuration properties, see Metadata management.
Segment records and the segments themselves become eligible for deletion when both of the following conditions hold:

- They meet the eligibility requirement of the kill task datasource configuration according to `killDataSourceWhitelist` and `killAllDataSources` set in the Coordinator dynamic configuration. See Dynamic configuration.
- The `durationToRetain` time has passed since their creation.

Kill tasks use the following configuration:
- `druid.coordinator.kill.on`: When `true`, enables the Coordinator to submit a kill task for unused segments, which deletes them completely from the metadata store and from deep storage. Only applies to the datasources specified in the dynamic configuration: allowed datasources (`killDataSourceWhitelist`) or all datasources (`killAllDataSources`).
- `druid.coordinator.kill.period`: Defines the frequency in ISO 8601 format for the cleanup job to check for and delete eligible segments. Defaults to `P1D`. Must be greater than `druid.coordinator.period.indexingPeriod`.
- `druid.coordinator.kill.durationToRetain`: Defines the retention period in ISO 8601 format after creation that segments become eligible for deletion.
- `druid.coordinator.kill.maxSegments`: Defines the maximum number of segments to delete per kill task.

The kill task is the only configuration in this topic that affects actual data in deep storage, not simply metadata or logs.
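The `killDataSourceWhitelist` and `killAllDataSources` settings live in the Coordinator dynamic configuration rather than in the runtime properties. A minimal sketch of the relevant fields, assuming you submit them as part of the full dynamic configuration via `POST /druid/coordinator/v1/config` (the datasource names here are hypothetical):

```json
{
  "killDataSourceWhitelist": ["wikipedia_dev", "metrics_test"],
  "killAllDataSources": false
}
```

Use one mode or the other: either list the datasources whose unused segments the kill task may delete, or set `killAllDataSources` to `true` and leave the whitelist empty. Depending on your Druid version, fields omitted from the POST body may reset to their defaults, so review Dynamic configuration before applying a partial payload like this.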
All audit records become eligible for deletion when the `durationToRetain` time has passed since their creation.
Audit cleanup uses the following configuration:
- `druid.coordinator.kill.audit.on`: When `true`, enables cleanup for audit records.
- `druid.coordinator.kill.audit.period`: Defines the frequency in ISO 8601 format for the cleanup job to check for and delete eligible audit records. Defaults to `P1D`.
- `druid.coordinator.kill.audit.durationToRetain`: Defines the retention period in ISO 8601 format after creation that audit records become eligible for deletion.

Supervisor records become eligible for deletion when the supervisor is terminated and the `durationToRetain` time has passed since their creation.
Supervisor cleanup uses the following configuration:
- `druid.coordinator.kill.supervisor.on`: When `true`, enables cleanup for supervisor records.
- `druid.coordinator.kill.supervisor.period`: Defines the frequency in ISO 8601 format for the cleanup job to check for and delete eligible supervisor records. Defaults to `P1D`.
- `druid.coordinator.kill.supervisor.durationToRetain`: Defines the retention period in ISO 8601 format after creation that supervisor records become eligible for deletion.

Rule records become eligible for deletion when all segments for the datasource have been killed by the kill task and the `durationToRetain` time has passed since their creation. Automated cleanup for rules requires a kill task.
Rule cleanup uses the following configuration:
- `druid.coordinator.kill.rule.on`: When `true`, enables cleanup for rule records.
- `druid.coordinator.kill.rule.period`: Defines the frequency in ISO 8601 format for the cleanup job to check for and delete eligible rule records. Defaults to `P1D`.
- `druid.coordinator.kill.rule.durationToRetain`: Defines the retention period in ISO 8601 format after creation that rule records become eligible for deletion.

Compaction configuration records in the `druid_config` table become eligible for deletion after all segments for the datasource have been killed by the kill task. Automated cleanup for compaction configuration requires a kill task.
Compaction configuration cleanup uses the following configuration:
- `druid.coordinator.kill.compaction.on`: When `true`, enables cleanup for compaction configuration records.
- `druid.coordinator.kill.compaction.period`: Defines the frequency in ISO 8601 format for the cleanup job to check for and delete eligible compaction configuration records. Defaults to `P1D`.

If you already have an extremely large compaction configuration, you may not be able to delete it due to size limits on the audit log. In that case, set `druid.audit.manager.maxPayloadSizeBytes` and `druid.audit.manager.skipNullField` to avoid the auditing issue. See Audit logging.
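As a sketch of that workaround, with illustrative values you should tune to your own audit payload sizes, the runtime properties might include:

```
# Drop null fields from audited payloads to shrink them (assumed value)
druid.audit.manager.skipNullField=true
# Cap the stored audit payload at 10 MiB (assumed value; see Audit logging
# for the exact behavior when a payload exceeds this limit)
druid.audit.manager.maxPayloadSizeBytes=10485760
```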
Datasource records created by supervisors become eligible for deletion when the supervisor is terminated or does not exist in the `druid_supervisors` table and the `durationToRetain` time has passed since their creation.
Datasource cleanup uses the following configuration:
- `druid.coordinator.kill.datasource.on`: When `true`, enables cleanup of datasource records created by supervisors.
- `druid.coordinator.kill.datasource.period`: Defines the frequency in ISO 8601 format for the cleanup job to check for and delete eligible datasource records. Defaults to `P1D`.
- `druid.coordinator.kill.datasource.durationToRetain`: Defines the retention period in ISO 8601 format after creation that datasource records become eligible for deletion.

You can configure the Overlord to delete indexer task log metadata and the indexer task logs from local disk or from cloud storage.
Indexer task log cleanup on the Overlord uses the following configuration:
- `druid.indexer.logs.kill.enabled`: When `true`, enables cleanup of task logs.
- `druid.indexer.logs.kill.durationToRetain`: Defines the length of time in milliseconds to retain task logs.
- `druid.indexer.logs.kill.initialDelay`: Defines the length of time in milliseconds after the Overlord starts before it executes its first job to kill task logs.
- `druid.indexer.logs.kill.delay`: Defines the length of time in milliseconds between jobs to kill task logs.

For more detail, see Task logging.
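Unlike the Coordinator cleanup properties above, these Overlord settings take milliseconds rather than ISO 8601 durations. A sketch with illustrative values, retaining task logs for 7 days and cleaning up every 6 hours:

```
druid.indexer.logs.kill.enabled=true
# 7 days = 7 * 24 * 60 * 60 * 1000 ms
druid.indexer.logs.kill.durationToRetain=604800000
# Wait 5 minutes after Overlord startup before the first cleanup run
druid.indexer.logs.kill.initialDelay=300000
# Run a cleanup job every 6 hours
druid.indexer.logs.kill.delay=21600000
```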
Consider a scenario where you have scripts to create and delete hundreds of datasources and related entities a day. You do not want to fill your metadata store with leftover records. The datasources and related entities tend to persist for only one or two days. Therefore, you want to run a cleanup job that identifies and removes leftover records that are at least four days old. The exception is for audit logs, which you need to retain for 30 days:
```
...
# Schedule the metadata management store task for every hour:
druid.coordinator.period.metadataStoreManagementPeriod=PT1H

# Set a kill task to poll every day to delete segment records and segments
# in deep storage > 4 days old. When druid.coordinator.kill.on is set to true,
# you must set either killAllDataSources or killDataSourceWhitelist in the dynamic
# configuration. For this example, assume killAllDataSources is set to true.
# Also required for automated cleanup of rules and compaction configuration.
druid.coordinator.kill.on=true
druid.coordinator.kill.period=P1D
druid.coordinator.kill.durationToRetain=P4D
druid.coordinator.kill.maxSegments=1000

# Poll every day to delete audit records > 30 days old
druid.coordinator.kill.audit.on=true
druid.coordinator.kill.audit.period=P1D
druid.coordinator.kill.audit.durationToRetain=P30D

# Poll every day to delete supervisor records > 4 days old
druid.coordinator.kill.supervisor.on=true
druid.coordinator.kill.supervisor.period=P1D
druid.coordinator.kill.supervisor.durationToRetain=P4D

# Poll every day to delete rule records > 4 days old
druid.coordinator.kill.rule.on=true
druid.coordinator.kill.rule.period=P1D
druid.coordinator.kill.rule.durationToRetain=P4D

# Poll every day to delete compaction configuration records
druid.coordinator.kill.compaction.on=true
druid.coordinator.kill.compaction.period=P1D

# Poll every day to delete datasource records created by supervisors > 4 days old
druid.coordinator.kill.datasource.on=true
druid.coordinator.kill.datasource.period=P1D
druid.coordinator.kill.datasource.durationToRetain=P4D
...
```
For more information, see the topics referenced above: Metadata management, Metadata storage, Dynamic configuration, Audit logging, and Task logging.