// Jest Snapshot v1, https://goo.gl/fbAQLP
exports[`coordinator dynamic config matches snapshot 1`] = `
<SnitchDialog
className="coordinator-dynamic-config-dialog"
onClose={[Function]}
onSave={[Function]}
saveDisabled={false}
title="Coordinator dynamic config"
>
<p>
Edit the coordinator dynamic configuration on the fly. For more information, please refer to the
<Memo(ExternalLink)
href="https://druid.apache.org/docs/0.20.0/configuration/index.html#dynamic-configuration"
>
documentation
</Memo(ExternalLink)>
.
</p>
<Memo(FormJsonSelector)
onChange={[Function]}
tab="form"
/>
<AutoForm
fields={
Array [
Object {
"defaultValue": 5,
"info": <React.Fragment>
The maximum number of segments that can be moved at any given time.
</React.Fragment>,
"name": "maxSegmentsToMove",
"type": "number",
},
Object {
"defaultValue": 1,
"info": <React.Fragment>
Thread pool size for computing the moving cost of segments during segment balancing. Consider increasing this if you have a lot of segments and segment moves start to get stuck.
</React.Fragment>,
"name": "balancerComputeThreads",
"type": "number",
},
Object {
"defaultValue": false,
"info": <React.Fragment>
Boolean flag for whether or not we should emit balancing stats. This is an expensive operation.
</React.Fragment>,
"name": "emitBalancingStats",
"type": "boolean",
},
Object {
"defaultValue": false,
"info": <React.Fragment>
Send kill tasks for ALL dataSources if property
<Unknown>
druid.coordinator.kill.on
</Unknown>
is true. If this is set to true then
<Unknown>
killDataSourceWhitelist
</Unknown>
must not be specified or must be an empty list.
</React.Fragment>,
"name": "killAllDataSources",
"type": "boolean",
},
Object {
"emptyValue": Array [],
"info": <React.Fragment>
List of dataSources for which kill tasks are sent if property
<Unknown>
druid.coordinator.kill.on
</Unknown>
is true. This can be a list of comma-separated dataSources or a JSON array.
</React.Fragment>,
"name": "killDataSourceWhitelist",
"type": "string-array",
},
Object {
"emptyValue": Array [],
"info": <React.Fragment>
List of dataSources for which pendingSegments are NOT cleaned up if property
<Unknown>
druid.coordinator.kill.pendingSegments.on
</Unknown>
is true. This can be a list of comma-separated dataSources or a JSON array.
</React.Fragment>,
"name": "killPendingSegmentsSkipList",
"type": "string-array",
},
Object {
"defaultValue": 0,
"info": <React.Fragment>
The maximum number of segments that can be queued for loading to any given server. This parameter can be used to speed up the segment loading process, especially if there are "slow" nodes in the cluster (with low loading speed) or if too many segments are scheduled to be replicated to some particular node (faster loading may be preferred over better segment distribution). The desired value depends on segment loading speed, acceptable replication time, and the number of nodes. A value of 1000 could be a starting point for a rather big cluster. The default value is 0 (the loading queue is unbounded).
</React.Fragment>,
"name": "maxSegmentsInNodeLoadingQueue",
"type": "number",
},
Object {
"defaultValue": 524288000,
"info": <React.Fragment>
The maximum total uncompressed size in bytes of segments to merge.
</React.Fragment>,
"name": "mergeBytesLimit",
"type": "size-bytes",
},
Object {
"defaultValue": 100,
"info": <React.Fragment>
The maximum number of segments that can be in a single append task.
</React.Fragment>,
"name": "mergeSegmentsLimit",
"type": "number",
},
Object {
"defaultValue": 900000,
"info": <React.Fragment>
How long the Coordinator needs to be active before it can start removing (marking unused) segments in metadata storage.
</React.Fragment>,
"name": "millisToWaitBeforeDeleting",
"type": "number",
},
Object {
"defaultValue": 15,
"info": <React.Fragment>
The maximum number of Coordinator runs for a segment to be replicated before we start alerting.
</React.Fragment>,
"name": "replicantLifetime",
"type": "number",
},
Object {
"defaultValue": 10,
"info": <React.Fragment>
The maximum number of segments that can be replicated at one time.
</React.Fragment>,
"name": "replicationThrottleLimit",
"type": "number",
},
Object {
"emptyValue": Array [],
"info": <React.Fragment>
List of historical services to 'decommission'. The Coordinator will not assign new segments to 'decommissioning' services, and segments will be moved away from them to be placed on non-decommissioning services at the maximum rate specified by
<Unknown>
decommissioningMaxPercentOfMaxSegmentsToMove
</Unknown>
.
</React.Fragment>,
"name": "decommissioningNodes",
"type": "string-array",
},
Object {
"defaultValue": 70,
"info": <React.Fragment>
The maximum number of segments that may be moved away from 'decommissioning' services to non-decommissioning (that is, active) services during one Coordinator run. This value is relative to the total maximum number of segment movements allowed during one run, which is determined by
<Unknown>
maxSegmentsToMove
</Unknown>
. If
<Unknown>
decommissioningMaxPercentOfMaxSegmentsToMove
</Unknown>
is 0, segments will neither be moved from nor to 'decommissioning' services, effectively putting them in a sort of "maintenance" mode in which they do not participate in balancing or assignment by load rules. Decommissioning can also become stalled if there are no active services available to place the segments on. By tuning the maximum percentage of segment movements devoted to decommissioning, an operator can prevent active services from becoming overloaded by prioritizing balancing, or instead decrease decommissioning time. The value should be between 0 and 100.
</React.Fragment>,
"name": "decommissioningMaxPercentOfMaxSegmentsToMove",
"type": "number",
},
Object {
"defaultValue": 100,
"info": <React.Fragment>
The percentage of the total number of segments in the cluster that are considered every time a segment needs to be selected for a move. Druid orders servers by available capacity ascending (the least available capacity first) and then iterates over the servers. For each server, Druid iterates over the segments on the server, considering them for moving. The default config of 100% means that every segment on every server is a candidate to be moved. This should make sense for most small to medium-sized clusters. However, an admin may find it preferable to lower this value if they don't think it is worthwhile to consider every single segment in the cluster each time Druid looks for a segment to move.
</React.Fragment>,
"name": "percentOfSegmentsToConsiderPerMove",
"type": "number",
},
Object {
"defaultValue": false,
"info": <React.Fragment>
Boolean flag for whether or not the coordinator should execute its various duties of coordinating the cluster. Setting this to true essentially pauses all coordination work while allowing the API to remain up.
</React.Fragment>,
"name": "pauseCoordination",
"type": "boolean",
},
Object {
"defaultValue": false,
"info": <React.Fragment>
Boolean flag for whether or not additional replication is needed for segments that have failed to load due to the expiry of the coordinator load timeout. If this is set to true, the coordinator will attempt to replicate the failed segment on a different historical server.
</React.Fragment>,
"name": "replicateAfterLoadTimeout",
"type": "boolean",
},
Object {
"defaultValue": 2147483647,
"info": <React.Fragment>
The maximum number of non-primary replicants to load in a single Coordinator cycle. Once this limit is hit, only primary replicants will be loaded for the remainder of the cycle. Tuning this value lower can help reduce the delay in loading primary segments when the cluster has a very large number of non-primary replicants to load (such as when a single historical drops out of the cluster leaving many under-replicated segments).
</React.Fragment>,
"name": "maxNonPrimaryReplicantsToLoad",
"type": "number",
},
]
}
model={Object {}}
onChange={[Function]}
/>
</SnitchDialog>
`;
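
// The sketch below is illustrative and not part of the recorded snapshot: it shows
// how a config object matching the AutoForm fields above might be submitted once
// the dialog's onSave fires. Druid exposes the coordinator dynamic configuration at
// POST /druid/coordinator/v1/config; `saveDynamicConfig` and the example field
// values are assumptions for illustration, not the console's actual save handler.
//
//   async function saveDynamicConfig(config: Record<string, unknown>): Promise<void> {
//     // POST the edited dynamic config as JSON; the coordinator applies it on the fly.
//     const resp = await fetch('/druid/coordinator/v1/config', {
//       method: 'POST',
//       headers: { 'Content-Type': 'application/json' },
//       body: JSON.stringify(config),
//     });
//     if (!resp.ok) throw new Error(`Failed to save dynamic config: ${resp.status}`);
//   }
//
//   // Example: raise balancing throughput while keeping other fields at their defaults.
//   // saveDynamicConfig({ maxSegmentsToMove: 15, replicationThrottleLimit: 10 });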