<h1 id="__docusaurus" class="postHeaderTitle">SQL-based ingestion reference</h1>
<!--
~ Licensed to the Apache Software Foundation (ASF) under one
~ or more contributor license agreements. See the NOTICE file
~ distributed with this work for additional information
~ regarding copyright ownership. The ASF licenses this file
~ to you under the Apache License, Version 2.0 (the
~ "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing,
~ software distributed under the License is distributed on an
~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
~ KIND, either express or implied. See the License for the
~ specific language governing permissions and limitations
~ under the License.
-->
<blockquote>
<p>This page describes SQL-based batch ingestion using the <a href="/docs/latest/multi-stage-query/index.html"><code>druid-multi-stage-query</code></a>
extension, new in Druid 24.0. Refer to the <a href="/docs/latest/ingestion/index.html#batch">ingestion methods</a> table to determine which
ingestion method is right for you.</p>
</blockquote>
<h2><a class="anchor" aria-hidden="true" id="sql-reference"></a><a href="#sql-reference" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>SQL reference</h2>
<p>This topic is a reference guide for the multi-stage query architecture in Apache Druid. For examples of real-world
usage, refer to the <a href="/docs/latest/multi-stage-query/examples.html">Examples</a> page.</p>
<p><code>INSERT</code> and <code>REPLACE</code> load data into a Druid datasource from either an external input source or from another
datasource. When loading from an external input source, you typically must provide the kind of input source,
the data format, and the schema (signature) of the input data. Druid provides <em>table functions</em> to let you
specify the external data. There are two kinds: <code>EXTERN</code>, which accepts the JSON-serialized specs for all three
items, using the same JSON you would use in native batch ingestion; and a set of input-source-specific functions, one per
input source, that use SQL syntax to specify the format and the input schema. The
input-source-specific functions let you use SQL query parameters to specify the set of files (or URIs),
making it easy to reuse the same SQL statement for each ingestion: just supply the set of files to use each time.</p>
<h3><a class="anchor" aria-hidden="true" id="extern-function"></a><a href="#extern-function" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a><code>EXTERN</code> Function</h3>
<p>Use the <code>EXTERN</code> function to read external data. The function has two variations.</p>
<p>Function variation 1, with the input schema expressed as JSON:</p>
<pre><code class="hljs css language-sql"><span class="hljs-keyword">SELECT</span>
&lt;<span class="hljs-keyword">column</span>&gt;
<span class="hljs-keyword">FROM</span> <span class="hljs-keyword">TABLE</span>(
EXTERN(
<span class="hljs-string">'&lt;Druid input source&gt;'</span>,
<span class="hljs-string">'&lt;Druid input format&gt;'</span>,
<span class="hljs-string">'&lt;row signature&gt;'</span>
)
)
</code></pre>
<p><code>EXTERN</code> consists of the following parts:</p>
<ol>
<li>Any <a href="/docs/latest/ingestion/native-batch-input-sources.html">Druid input source</a> as a JSON-encoded string.</li>
<li>Any <a href="/docs/latest/ingestion/data-formats.html">Druid input format</a> as a JSON-encoded string.</li>
<li>A row signature, as a JSON-encoded array of column descriptors. Each column descriptor must have a
<code>name</code> and a <code>type</code>. The type can be <code>string</code>, <code>long</code>, <code>double</code>, or <code>float</code>. This row signature is
used to map the external data into the SQL layer.</li>
</ol>
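<p>For example, a query along these lines reads JSON data over HTTP and exposes three columns to SQL. The URI, column names, and specs are purely illustrative:</p>
<pre><code class="hljs css language-sql">-- Hypothetical input URI and column names; substitute your own.
SELECT *
FROM TABLE(
  EXTERN(
    '{"type": "http", "uris": ["https://example.com/data.json.gz"]}',
    '{"type": "json"}',
    '[{"name": "timestamp", "type": "string"}, {"name": "page", "type": "string"}, {"name": "added", "type": "long"}]'
  )
)
LIMIT 10
</code></pre>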
<p>Variation 2, with the input schema expressed in SQL using an <code>EXTEND</code> clause (see below for more detail on
<code>EXTEND</code>). This format also uses named arguments to make the SQL a bit easier to read:</p>
<pre><code class="hljs css language-sql"><span class="hljs-keyword">SELECT</span>
&lt;<span class="hljs-keyword">column</span>&gt;
<span class="hljs-keyword">FROM</span> <span class="hljs-keyword">TABLE</span>(
EXTERN(
inputSource =&gt; <span class="hljs-string">'&lt;Druid input source&gt;'</span>,
inputFormat =&gt; <span class="hljs-string">'&lt;Druid input format&gt;'</span>
)) (&lt;<span class="hljs-keyword">columns</span>&gt;)
</code></pre>
<p>The input source and format are as above. The columns are expressed as in a SQL <code>CREATE TABLE</code>.
Example: <code>(timestamp VARCHAR, metricType VARCHAR, value BIGINT)</code>. The optional <code>EXTEND</code> keyword
can precede the column list: <code>EXTEND (timestamp VARCHAR...)</code>.</p>
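<p>As an illustration, the same hypothetical HTTP input as above can be described with a SQL column list instead of a JSON row signature:</p>
<pre><code class="hljs css language-sql">-- Hypothetical input URI and column names; substitute your own.
SELECT *
FROM TABLE(
  EXTERN(
    inputSource =&gt; '{"type": "http", "uris": ["https://example.com/data.json.gz"]}',
    inputFormat =&gt; '{"type": "json"}'
  )
) EXTEND ("timestamp" VARCHAR, page VARCHAR, added BIGINT)
LIMIT 10
</code></pre>
<p>Quoting <code>"timestamp"</code> avoids any clash with the SQL keyword of the same name.</p>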
<p>For more information, see <a href="/docs/latest/multi-stage-query/concepts.html#extern">Read external data with EXTERN</a>.</p>
<h3><a class="anchor" aria-hidden="true" id="insert"></a><a href="#insert" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a><code>INSERT</code></h3>
<p>Use the <code>INSERT</code> statement to insert data.</p>
<p>Unlike standard SQL, <code>INSERT</code> loads data into the target table according to column name, not positionally. If necessary,
use <code>AS</code> in your <code>SELECT</code> column list to assign the correct names. Do not rely on their positions within the SELECT
clause.</p>
<p>Statement format:</p>
<pre><code class="hljs css language-sql"><span class="hljs-keyword">INSERT</span> <span class="hljs-keyword">INTO</span> &lt;<span class="hljs-keyword">table</span> <span class="hljs-keyword">name</span>&gt;
&lt; <span class="hljs-keyword">SELECT</span> <span class="hljs-keyword">query</span> &gt;
PARTITIONED <span class="hljs-keyword">BY</span> &lt;<span class="hljs-built_in">time</span> frame&gt;
[ CLUSTERED <span class="hljs-keyword">BY</span> &lt;<span class="hljs-keyword">column</span> <span class="hljs-keyword">list</span>&gt; ]
</code></pre>
<p>INSERT consists of the following parts:</p>
<ol>
<li>Optional <a href="/docs/latest/multi-stage-query/reference.html#context-parameters">context parameters</a>.</li>
<li>An <code>INSERT INTO &lt;dataSource&gt;</code> clause at the start of your query, such as <code>INSERT INTO &quot;your-table&quot;</code>.</li>
<li>A clause for the data you want to insert, such as <code>SELECT ... FROM ...</code>. You can use <a href="#extern-function"><code>EXTERN</code></a>
to reference external tables using <code>FROM TABLE(EXTERN(...))</code>.</li>
<li>A <a href="#partitioned-by">PARTITIONED BY</a> clause, such as <code>PARTITIONED BY DAY</code>.</li>
<li>An optional <a href="#clustered-by">CLUSTERED BY</a> clause.</li>
</ol>
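<p>Putting the pieces together, an ingestion sketch might look like the following. The target table, input URI, and columns are placeholders rather than a prescribed layout; note the use of <code>AS</code> to map the parsed timestamp to the <code>__time</code> column:</p>
<pre><code class="hljs css language-sql">-- Hypothetical table, input URI, and column names; substitute your own.
INSERT INTO "example_table"
SELECT
  TIME_PARSE("timestamp") AS __time,
  page,
  added
FROM TABLE(
  EXTERN(
    '{"type": "http", "uris": ["https://example.com/data.json.gz"]}',
    '{"type": "json"}',
    '[{"name": "timestamp", "type": "string"}, {"name": "page", "type": "string"}, {"name": "added", "type": "long"}]'
  )
)
PARTITIONED BY DAY
CLUSTERED BY page
</code></pre>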
<p>For more information, see <a href="/docs/latest/multi-stage-query/concepts.html#insert">Load data with INSERT</a>.</p>
<h3><a class="anchor" aria-hidden="true" id="replace"></a><a href="#replace" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a><code>REPLACE</code></h3>
<p>Use the <code>REPLACE</code> statement to replace all or some of the data in a datasource.</p>
<p>Unlike standard SQL, <code>REPLACE</code> loads data into the target table according to column name, not positionally. If necessary,
use <code>AS</code> in your <code>SELECT</code> column list to assign the correct names. Do not rely on their positions within the SELECT
clause.</p>
<h4><a class="anchor" aria-hidden="true" id="replace-all-data"></a><a href="#replace-all-data" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a><code>REPLACE</code> all data</h4>
<p>Statement format to replace all data:</p>
<pre><code class="hljs css language-sql"><span class="hljs-keyword">REPLACE</span> <span class="hljs-keyword">INTO</span> &lt;target <span class="hljs-keyword">table</span>&gt;
OVERWRITE <span class="hljs-keyword">ALL</span>
&lt; <span class="hljs-keyword">SELECT</span> <span class="hljs-keyword">query</span> &gt;
PARTITIONED <span class="hljs-keyword">BY</span> &lt;<span class="hljs-built_in">time</span> granularity&gt;
[ CLUSTERED <span class="hljs-keyword">BY</span> &lt;<span class="hljs-keyword">column</span> <span class="hljs-keyword">list</span>&gt; ]
</code></pre>
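<p>For instance, the following sketch rewrites a hypothetical datasource in full from its own existing data; the table and column names are illustrative:</p>
<pre><code class="hljs css language-sql">-- Hypothetical table and column names; substitute your own.
REPLACE INTO "example_table"
OVERWRITE ALL
SELECT __time, page, added
FROM "example_table"
PARTITIONED BY MONTH
</code></pre>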
<h4><a class="anchor" aria-hidden="true" id="replace-specific-time-ranges"></a><a href="#replace-specific-time-ranges" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a><code>REPLACE</code> specific time ranges</h4>
<p>Statement format to replace specific time ranges:</p>
<pre><code class="hljs css language-sql"><span class="hljs-keyword">REPLACE</span> <span class="hljs-keyword">INTO</span> &lt;target <span class="hljs-keyword">table</span>&gt;
OVERWRITE <span class="hljs-keyword">WHERE</span> __time &gt;= <span class="hljs-built_in">TIMESTAMP</span> <span class="hljs-string">'&lt;lower bound&gt;'</span> <span class="hljs-keyword">AND</span> __time &lt; <span class="hljs-built_in">TIMESTAMP</span> <span class="hljs-string">'&lt;upper bound&gt;'</span>
&lt; <span class="hljs-keyword">SELECT</span> <span class="hljs-keyword">query</span> &gt;
PARTITIONED <span class="hljs-keyword">BY</span> &lt;<span class="hljs-built_in">time</span> granularity&gt;
[ CLUSTERED <span class="hljs-keyword">BY</span> &lt;<span class="hljs-keyword">column</span> <span class="hljs-keyword">list</span>&gt; ]
</code></pre>
<p><code>REPLACE</code> consists of the following parts:</p>
<ol>
<li>Optional <a href="/docs/latest/multi-stage-query/reference.html#context-parameters">context parameters</a>.</li>
<li>A <code>REPLACE INTO &lt;dataSource&gt;</code> clause at the start of your query, such as <code>REPLACE INTO &quot;your-table&quot;</code>.</li>
<li>An OVERWRITE clause after the datasource, either OVERWRITE ALL or OVERWRITE WHERE:
<ul>
<li>OVERWRITE ALL replaces the entire existing datasource with the results of the query.</li>
<li>OVERWRITE WHERE drops the time segments that match the condition you set. Conditions are based on the <code>__time</code>
column and use the format <code>__time [&lt; &gt; = &lt;= &gt;=] TIMESTAMP</code>. You can combine conditions with AND, OR, and NOT;
the time range is inclusive of the timestamps you specify. No other expressions or functions are valid in OVERWRITE.</li>
</ul></li>
<li>A clause for the actual data you want to use for the replacement.</li>
<li>A <a href="#partitioned-by">PARTITIONED BY</a> clause, such as <code>PARTITIONED BY DAY</code>.</li>
<li>An optional <a href="#clustered-by">CLUSTERED BY</a> clause.</li>
</ol>
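<p>For example, the following sketch rewrites a single month of a hypothetical datasource while filtering out some rows; the names and time bounds are illustrative:</p>
<pre><code class="hljs css language-sql">-- Hypothetical table, column names, and time bounds; substitute your own.
REPLACE INTO "example_table"
OVERWRITE WHERE __time &gt;= TIMESTAMP '2023-01-01' AND __time &lt; TIMESTAMP '2023-02-01'
SELECT __time, page, added
FROM "example_table"
WHERE __time &gt;= TIMESTAMP '2023-01-01' AND __time &lt; TIMESTAMP '2023-02-01'
  AND added &gt; 0
PARTITIONED BY DAY
</code></pre>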
<p>For more information, see <a href="/docs/latest/multi-stage-query/concepts.html#replace">Overwrite data with REPLACE</a>.</p>
<h3><a class="anchor" aria-hidden="true" id="partitioned-by"></a><a href="#partitioned-by" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a><code>PARTITIONED BY</code></h3>
<p>The <code>PARTITIONED BY &lt;time granularity&gt;</code> clause is required for <a href="#insert">INSERT</a> and <a href="#replace">REPLACE</a>. See
<a href="/docs/latest/multi-stage-query/concepts.html#partitioning">Partitioning</a> for details.</p>
<p>The following granularity arguments are accepted:</p>
<ul>
<li>Time unit keywords: <code>HOUR</code>, <code>DAY</code>, <code>MONTH</code>, or <code>YEAR</code>. Equivalent to <code>FLOOR(__time TO TimeUnit)</code>.</li>
<li>Time units as ISO 8601 period strings: <code>'PT1H'</code>, <code>'P1D'</code>, and so on. (Druid 26.0 and later.)</li>
<li><code>TIME_FLOOR(__time, 'granularity_string')</code>, where granularity_string is one of the ISO 8601 periods listed below. The
first argument must be <code>__time</code>.</li>
<li><code>FLOOR(__time TO TimeUnit)</code>, where <code>TimeUnit</code> is any unit supported by the <a href="/docs/latest/querying/sql-scalar.html#date-and-time-functions">FLOOR function</a>. The first argument must be <code>__time</code>.</li>
<li><code>ALL</code> or <code>ALL TIME</code>, which effectively disables time partitioning by placing all data in a single time chunk. To use
LIMIT or OFFSET at the outer level of your <code>INSERT</code> or <code>REPLACE</code> query, you must set <code>PARTITIONED BY</code> to <code>ALL</code> or <code>ALL TIME</code>.</li>
</ul>
<p>Earlier versions of Druid required the <code>TIME_FLOOR</code> notation to specify a granularity other than the time unit keywords.
In the current version, the ISO 8601 period string is a simpler equivalent.</p>
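<p>For example, the following clauses all request hourly time chunks. Only one <code>PARTITIONED BY</code> clause may appear per statement, so the alternatives are shown as comments; the table names are illustrative:</p>
<pre><code class="hljs css language-sql">-- Hypothetical table names; substitute your own.
INSERT INTO "example_table"
SELECT __time, page, added
FROM "example_source"
PARTITIONED BY HOUR
-- PARTITIONED BY 'PT1H'                      (Druid 26.0 and later)
-- PARTITIONED BY TIME_FLOOR(__time, 'PT1H')
-- PARTITIONED BY FLOOR(__time TO HOUR)
</code></pre>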
<p>The following ISO 8601 periods are supported for <code>TIME_FLOOR</code> and the string constant:</p>
<ul>
<li>PT1S</li>
<li>PT1M</li>
<li>PT5M</li>
<li>PT10M</li>
<li>PT15M</li>
<li>PT30M</li>
<li>PT1H</li>
<li>PT6H</li>
<li>P1D</li>
<li>P1W</li>
<li>P1M</li>
<li>P3M</li>
<li>P1Y</li>
</ul>
<p>For more information about partitioning, see <a href="/docs/latest/multi-stage-query/concepts.html#partitioning">Partitioning</a>.</p>
<h3><a class="anchor" aria-hidden="true" id="clustered-by"></a><a href="#clustered-by" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a><code>CLUSTERED BY</code></h3>
<p>The <code>CLUSTERED BY &lt;column list&gt;</code> clause is optional for <a href="#insert">INSERT</a> and <a href="#replace">REPLACE</a>. It accepts a list of
column names or expressions.</p>
<p>For more information about clustering, see <a href="/docs/latest/multi-stage-query/concepts.html#clustering">Clustering</a>.</p>
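<p>As a brief illustration (table and column names are hypothetical), clustering on two columns within monthly time partitions looks like this:</p>
<pre><code class="hljs css language-sql">-- Hypothetical table and column names; substitute your own.
REPLACE INTO "example_table"
OVERWRITE ALL
SELECT __time, country, city, added
FROM "example_table"
PARTITIONED BY MONTH
CLUSTERED BY country, city
</code></pre>
<p>Rows within each time chunk are then grouped by <code>country</code> first, then <code>city</code>, which can improve locality and compression when queries filter on those columns.</p>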
<p><a name="context"></a></p>
<h2><a class="anchor" aria-hidden="true" id="context-parameters"></a><a href="#context-parameters" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Context parameters</h2>
<p>In addition to the Druid SQL <a href="/docs/latest/querying/sql-query-context.html">context parameters</a>, the multi-stage query task engine accepts certain context parameters that are specific to it.</p>
<p>Use context parameters to customize the behavior of a query. If you're using the API, include the context parameters in the query context when you submit the query:</p>
<pre><code class="hljs css language-json">{
<span class="hljs-attr">"query"</span>: <span class="hljs-string">"SELECT 1 + 1"</span>,
<span class="hljs-attr">"context"</span>: {
<span class="hljs-attr">"&lt;key&gt;"</span>: <span class="hljs-string">"&lt;value&gt;"</span>,
<span class="hljs-attr">"maxNumTasks"</span>: <span class="hljs-number">3</span>
}
}
</code></pre>
<p>If you're using the web console, you can specify the context parameters through various UI options.</p>
<p>The following table lists the context parameters for the MSQ task engine:</p>
<table>
<thead>
<tr><th>Parameter</th><th>Description</th><th>Default value</th></tr>
</thead>
<tbody>
<tr><td><code>maxNumTasks</code></td><td>SELECT, INSERT, REPLACE<br /><br />The maximum total number of tasks to launch, including the controller task. The lowest possible value for this setting is 2: one controller and one worker. All tasks must be able to launch simultaneously. If they cannot, the query returns a <code>TaskStartTimeout</code> error code after approximately 10 minutes.<br /><br />May also be provided as <code>numTasks</code>. If both are present, <code>maxNumTasks</code> takes priority.</td><td>2</td></tr>
<tr><td><code>taskAssignment</code></td><td>SELECT, INSERT, REPLACE<br /><br />Determines how many tasks to use. Possible values include: <ul><li><code>max</code>: Uses as many tasks as possible, up to <code>maxNumTasks</code>.</li><li><code>auto</code>: When file sizes can be determined through directory listing (for example: local files, S3, GCS, HDFS) uses as few tasks as possible without exceeding 10 GiB or 10,000 files per task, unless exceeding these limits is necessary to stay within <code>maxNumTasks</code>. When file sizes cannot be determined through directory listing (for example: http), behaves the same as <code>max</code>.</li></ul></td><td><code>max</code></td></tr>
<tr><td><code>finalizeAggregations</code></td><td>SELECT, INSERT, REPLACE<br /><br />Determines the type of aggregation to return. If true, Druid finalizes the results of complex aggregations that directly appear in query results. If false, Druid returns the aggregation's intermediate type rather than the finalized type. This parameter is useful during ingestion, where it enables storing sketches directly in Druid tables. For more information about aggregations, see <a href="/docs/latest/querying/sql-aggregations.html">SQL aggregation functions</a>.</td><td>true</td></tr>
<tr><td><code>sqlJoinAlgorithm</code></td><td>SELECT, INSERT, REPLACE<br /><br />Algorithm to use for JOIN. Use <code>broadcast</code> (the default) for broadcast hash join or <code>sortMerge</code> for sort-merge join. Affects all JOIN operations in the query. See <a href="#joins">Joins</a> for more details.</td><td><code>broadcast</code></td></tr>
<tr><td><code>rowsInMemory</code></td><td>INSERT or REPLACE<br /><br />Maximum number of rows to store in memory at once before flushing to disk during the segment generation process. Ignored for non-INSERT queries. In most cases, use the default value. You may need to override the default if you run into one of the <a href="/docs/latest/multi-stage-query/known-issues.html">known issues</a> around memory usage.</td><td>100,000</td></tr>
<tr><td><code>segmentSortOrder</code></td><td>INSERT or REPLACE<br /><br />Normally, Druid sorts rows in individual segments using <code>__time</code> first, followed by the <a href="#clustered-by">CLUSTERED BY</a> clause. When you set <code>segmentSortOrder</code>, Druid sorts rows in segments using this column list first, followed by the CLUSTERED BY order.<br /><br />You provide the column list as comma-separated values or as a JSON array in string form. If your query includes <code>__time</code>, then this list must begin with <code>__time</code>. For example, consider an INSERT query that uses <code>CLUSTERED BY country</code> and has <code>segmentSortOrder</code> set to <code>__time,city</code>. Within each time chunk, Druid assigns rows to segments based on <code>country</code>, and then within each of those segments, Druid sorts those rows by <code>__time</code> first, then <code>city</code>, then <code>country</code>.</td><td>empty list</td></tr>
<tr><td><code>maxParseExceptions</code></td><td>SELECT, INSERT, REPLACE<br /><br />Maximum number of parse exceptions that are ignored while executing the query before it stops with <code>TooManyWarningsFault</code>. To ignore all the parse exceptions, set the value to -1.</td><td>0</td></tr>
<tr><td><code>rowsPerSegment</code></td><td>INSERT or REPLACE<br /><br />The number of rows per segment to target. The actual number of rows per segment may be somewhat higher or lower than this number. In most cases, use the default. For general information about sizing rows per segment, see <a href="/docs/latest/operations/segment-optimization.html">Segment Size Optimization</a>.</td><td>3,000,000</td></tr>
<tr><td><code>indexSpec</code></td><td>INSERT or REPLACE<br /><br />An <a href="/docs/latest/ingestion/ingestion-spec.html#indexspec"><code>indexSpec</code></a> to use when generating segments. May be a JSON string or object. See <a href="/docs/latest/ingestion/ingestion-spec.html#front-coding">Front coding</a> for details on configuring an <code>indexSpec</code> with front coding.</td><td>See <a href="/docs/latest/ingestion/ingestion-spec.html#indexspec"><code>indexSpec</code></a>.</td></tr>
<tr><td><code>durableShuffleStorage</code></td><td>SELECT, INSERT, REPLACE<br /><br />Whether to use durable storage for the shuffle mesh. To use this feature, configure durable storage at the server level using <code>druid.msq.intermediate.storage.enable=true</code>. If durable storage is not configured, any query with the context parameter <code>durableShuffleStorage=true</code> fails with a configuration error.</td><td><code>false</code></td></tr>
<tr><td><code>faultTolerance</code></td><td>SELECT, INSERT, REPLACE<br /><br /> Whether to turn on fault tolerance mode or not. Failed workers are retried based on <a href="#limits">Limits</a>. Cannot be used when <code>durableShuffleStorage</code> is explicitly set to false.</td><td><code>false</code></td></tr>
</tbody>
</table>
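<p>For example, a hypothetical SQL task submission might combine several of these parameters in its query context. The query string is abbreviated and the values shown are illustrative, not recommendations:</p>
<pre><code class="hljs css language-json">{
  "query": "INSERT INTO ... SELECT ... PARTITIONED BY DAY",
  "context": {
    "maxNumTasks": 5,
    "finalizeAggregations": false,
    "segmentSortOrder": "__time,city",
    "rowsPerSegment": 5000000,
    "maxParseExceptions": 100
  }
}
</code></pre>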
<h2><a class="anchor" aria-hidden="true" id="joins"></a><a href="#joins" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Joins</h2>
<p>Joins in multi-stage queries use one of two algorithms, based on the <a href="#context-parameters">context parameter</a>
<code>sqlJoinAlgorithm</code>. This context parameter applies to the entire SQL statement, so it is not possible to mix different
join algorithms in the same query.</p>
<h3><a class="anchor" aria-hidden="true" id="broadcast"></a><a href="#broadcast" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Broadcast</h3>
<p>Set <code>sqlJoinAlgorithm</code> to <code>broadcast</code>.</p>
<p>The default join algorithm for multi-stage queries is a broadcast hash join, which is similar to how
<a href="/docs/latest/querying/query-execution.html#join">joins are executed with native queries</a>. First, any adjacent joins are flattened
into a structure with a &quot;base&quot; input (the bottom-leftmost one) and other leaf inputs (the rest). Next, any subqueries
that are inputs to the join (either the base or the other leaves) are planned into independent stages. Then, the non-base leaf
inputs are all connected as broadcast inputs to the &quot;base&quot; stage.</p>
<p>Together, all of these non-base leaf inputs must not exceed the <a href="#limits">limit on broadcast table footprint</a>. There
is no limit on the size of the base (leftmost) input.</p>
<p>Only LEFT JOIN, INNER JOIN, and CROSS JOIN are supported with <code>broadcast</code>.</p>
<p>Join conditions, if present, must be equalities. It is not necessary to include a join condition; for example,
<code>CROSS JOIN</code> and comma join do not require join conditions.</p>
<p>As an example, the following statement has a single join chain where <code>orders</code> is the base input, and <code>products</code> and
<code>customers</code> are non-base leaf inputs. The query will first read <code>products</code> and <code>customers</code>, then broadcast both to
the stage that reads <code>orders</code>. That stage loads the broadcast inputs (<code>products</code> and <code>customers</code>) in memory, and walks
through <code>orders</code> row by row. The results are then aggregated and written to the table <code>orders_enriched</code>. The broadcast
inputs (<code>products</code> and <code>customers</code>) must fall under the limit on broadcast table footprint, but the base <code>orders</code> input
can be unlimited in size.</p>
<pre><code class="hljs"><span class="hljs-keyword">REPLACE</span> <span class="hljs-keyword">INTO</span> orders_enriched
OVERWRITE <span class="hljs-keyword">ALL</span>
<span class="hljs-keyword">SELECT</span>
orders.__time,
products.name <span class="hljs-keyword">AS</span> product_name,
customers.name <span class="hljs-keyword">AS</span> customer_name,
<span class="hljs-keyword">SUM</span>(orders.amount) <span class="hljs-keyword">AS</span> amount
<span class="hljs-keyword">FROM</span> orders
<span class="hljs-keyword">LEFT</span> <span class="hljs-keyword">JOIN</span> products <span class="hljs-keyword">ON</span> orders.product_id = products.id
<span class="hljs-keyword">LEFT</span> <span class="hljs-keyword">JOIN</span> customers <span class="hljs-keyword">ON</span> orders.customer_id = customers.id
<span class="hljs-keyword">GROUP</span> <span class="hljs-keyword">BY</span> <span class="hljs-number">1</span>, <span class="hljs-number">2</span>
PARTITIONED <span class="hljs-keyword">BY</span> <span class="hljs-keyword">HOUR</span>
CLUSTERED <span class="hljs-keyword">BY</span> product_name
</code></pre>
<h3><a class="anchor" aria-hidden="true" id="sort-merge"></a><a href="#sort-merge" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Sort-merge</h3>
<p>Set <code>sqlJoinAlgorithm</code> to <code>sortMerge</code>.</p>
<p>Multi-stage queries can use a sort-merge join algorithm. With this algorithm, each pairwise join is planned into its own
stage with two inputs. The two inputs are partitioned and sorted using a hash partitioning on the same key. This
approach is generally less performant, but more scalable, than <code>broadcast</code>. There are various scenarios where broadcast
join would return a <a href="#errors"><code>BroadcastTablesTooLarge</code></a> error, but a sort-merge join would succeed.</p>
<p>There is no limit on the overall size of either input, so sort-merge is a good choice for performing a join of two large
inputs, or for performing a self-join of a large input with itself.</p>
<p>There is a limit on the amount of data associated with each individual key. If <em>both</em> sides of the join exceed this
limit, the query returns a <a href="#errors"><code>TooManyRowsWithSameKey</code></a> error. If only one side exceeds the limit, the query
does not return this error.</p>
<p>Join conditions, if present, must be equalities. It is not necessary to include a join condition; for example,
<code>CROSS JOIN</code> and comma join do not require join conditions.</p>
<p>All join types are supported with <code>sortMerge</code>: LEFT, RIGHT, INNER, FULL, and CROSS.</p>
<p>As an example, the following statement runs using a single sort-merge join stage that receives <code>eventstream</code>
(partitioned on <code>user_id</code>) and <code>users</code> (partitioned on <code>id</code>) as inputs. There is no limit on the size of either input.</p>
<pre><code class="hljs">REPLACE INTO eventstream_enriched
OVERWRITE ALL
SELECT
eventstream.__time,
eventstream.user_id,
eventstream.event_type,
eventstream.event_details,
users.signup_date AS user_signup_date
<span class="hljs-keyword">FROM</span> eventstream
LEFT JOIN<span class="hljs-built_in"> users </span>ON eventstream.user_id = users.id
PARTITIONED BY HOUR
CLUSTERED BY<span class="hljs-built_in"> user
</span></code></pre>
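<p>If you submit this statement through the API, set <code>sqlJoinAlgorithm</code> in the query context. A minimal sketch, with the query string abbreviated:</p>
<pre><code class="hljs css language-json">{
  "query": "REPLACE INTO eventstream_enriched OVERWRITE ALL SELECT ... PARTITIONED BY HOUR CLUSTERED BY user_id",
  "context": {
    "sqlJoinAlgorithm": "sortMerge"
  }
}
</code></pre>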
<h2><a class="anchor" aria-hidden="true" id="durable-storage"></a><a href="#durable-storage" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Durable Storage</h2>
<p>Using durable storage with your SQL-based ingestions can improve their reliability by writing intermediate files to a storage location temporarily.</p>
<p>To prevent durable storage from filling up with temporary files when tasks fail to clean them up, you can schedule a periodic cleaner that removes directories that no longer correspond to a running controller task. The cleaner uses the storage connector to operate on the durable storage location. Use the durable storage location only to store output for the cluster's MSQ tasks; if the location contains other files or directories, they will be cleaned up as well.</p>
<p>Enabling durable storage also enables the use of local disk to store temporary files, such as the intermediate files produced
by the super sorter. MSQ tasks respect the limit on local storage bytes per task set by <code>druid.indexer.task.tmpStorageBytesPerTask</code>.
If the configured limit is too low, a <code>NotEnoughTemporaryStorageFault</code> may be thrown.</p>
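<p>For example, a hypothetical service configuration that allows each task to use up to 20 GiB of local temporary storage (the value is illustrative):</p>
<pre><code class="hljs">druid.indexer.task.tmpStorageBytesPerTask=21474836480
</code></pre>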
<h3><a class="anchor" aria-hidden="true" id="enable-durable-storage"></a><a href="#enable-durable-storage" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Enable durable storage</h3>
<p>To enable durable storage, you need to set the following common service properties:</p>
<pre><code class="hljs">druid<span class="hljs-selector-class">.msq</span><span class="hljs-selector-class">.intermediate</span><span class="hljs-selector-class">.storage</span>.enable=true
druid<span class="hljs-selector-class">.msq</span><span class="hljs-selector-class">.intermediate</span><span class="hljs-selector-class">.storage</span>.type=s3
druid<span class="hljs-selector-class">.msq</span><span class="hljs-selector-class">.intermediate</span><span class="hljs-selector-class">.storage</span>.bucket=YOUR_BUCKET
druid<span class="hljs-selector-class">.msq</span><span class="hljs-selector-class">.intermediate</span><span class="hljs-selector-class">.storage</span>.prefix=YOUR_PREFIX
druid<span class="hljs-selector-class">.msq</span><span class="hljs-selector-class">.intermediate</span><span class="hljs-selector-class">.storage</span>.tempDir=/path/to/your/temp/dir
</code></pre>
<p>For detailed information about the settings related to durable storage, see <a href="#durable-storage-configurations">Durable storage configurations</a>.</p>
<h3><a class="anchor" aria-hidden="true" id="use-durable-storage-for-queries"></a><a href="#use-durable-storage-for-queries" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Use durable storage for queries</h3>
<p>When you run a query, include the context parameter <code>durableShuffleStorage</code> and set it to <code>true</code>.</p>
<p>For queries where you want to use fault tolerance for workers, set <code>faultTolerance</code> to <code>true</code>, which automatically sets <code>durableShuffleStorage</code> to <code>true</code>.</p>
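<p>For example, the query context for a fault-tolerant ingestion might look like the following (the query string is abbreviated):</p>
<pre><code class="hljs css language-json">{
  "query": "INSERT INTO ... SELECT ... PARTITIONED BY DAY",
  "context": {
    "faultTolerance": true,
    "durableShuffleStorage": true
  }
}
</code></pre>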
<h2><a class="anchor" aria-hidden="true" id="durable-storage-configurations"></a><a href="#durable-storage-configurations" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Durable storage configurations</h2>
<p>The following common service properties control how durable storage behaves:</p>
<table>
<thead>
<tr><th>Parameter</th><th>Default</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td><code>druid.msq.intermediate.storage.bucket</code></td><td>n/a</td><td>The bucket in S3 where you want to store intermediate files.</td></tr>
<tr><td><code>druid.msq.intermediate.storage.chunkSize</code></td><td>100MiB</td><td>Optional. Defines the size of each chunk to temporarily store in <code>druid.msq.intermediate.storage.tempDir</code>. The chunk size must be between 5 MiB and 5 GiB. A larger chunk size reduces the number of API calls made to durable storage but requires more disk space to store the temporary chunks.</td></tr>
<tr><td><code>druid.msq.intermediate.storage.enable</code></td><td>true</td><td>Required. Whether to enable durable storage for the cluster.</td></tr>
<tr><td><code>druid.msq.intermediate.storage.maxRetry</code></td><td>10</td><td>Optional. Defines the maximum number of times to attempt S3 API calls in order to avoid failures due to transient errors.</td></tr>
<tr><td><code>druid.msq.intermediate.storage.prefix</code></td><td>n/a</td><td>S3 prefix to store intermediate stage results. Provide a unique value for the prefix. Don't share the same prefix between clusters. If the location includes other files or directories, then they will get cleaned up as well.</td></tr>
<tr><td><code>druid.msq.intermediate.storage.tempDir</code></td><td>n/a</td><td>Required. Directory path on the local disk to temporarily store intermediate stage results.</td></tr>
<tr><td><code>druid.msq.intermediate.storage.type</code></td><td><code>s3</code> if your deep storage is S3</td><td>Required. The type of storage to use. Set this to either <code>local</code> or <code>s3</code>.</td></tr>
</tbody>
</table>
<p>In addition to the common service properties, there are certain properties that you configure on the Overlord specifically to clean up intermediate files:</p>
<table>
<thead>
<tr><th>Parameter</th><th>Default</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td><code>druid.msq.intermediate.storage.cleaner.enabled</code></td><td>false</td><td>Optional. Whether durable storage cleaner should be enabled for the cluster.</td></tr>
<tr><td><code>druid.msq.intermediate.storage.cleaner.delaySeconds</code></td><td>86400</td><td>Optional. The delay, in seconds, after the last run before the durable storage cleaner cleans up outputs again.</td></tr>
</tbody>
</table>
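<p>For example, a hypothetical Overlord configuration that enables the cleaner and shortens the delay between cleanups to six hours (the delay value is illustrative):</p>
<pre><code class="hljs">druid.msq.intermediate.storage.cleaner.enabled=true
druid.msq.intermediate.storage.cleaner.delaySeconds=21600
</code></pre>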
<h2><a class="anchor" aria-hidden="true" id="limits"></a><a href="#limits" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Limits</h2>
<p>Knowing the limits for the MSQ task engine can help you troubleshoot any <a href="#error-codes">errors</a> that you encounter. Many of the errors occur as a result of reaching a limit.</p>
<p>The following table lists query limits:</p>
<table>
<thead>
<tr><th>Limit</th><th>Value</th><th>Error if exceeded</th></tr>
</thead>
<tbody>
<tr><td>Size of an individual row written to a frame. Row size when written to a frame may differ from the original row size.</td><td>1 MB</td><td><a href="#error_RowTooLarge"><code>RowTooLarge</code></a></td></tr>
<tr><td>Number of segment-granular time chunks encountered during ingestion.</td><td>5,000</td><td><a href="#error_TooManyBuckets"><code>TooManyBuckets</code></a></td></tr>
<tr><td>Number of input files/segments per worker.</td><td>10,000</td><td><a href="#error_TooManyInputFiles"><code>TooManyInputFiles</code></a></td></tr>
<tr><td>Number of output partitions for any one stage. Number of segments generated during ingestion.</td><td>25,000</td><td><a href="#error_TooManyPartitions"><code>TooManyPartitions</code></a></td></tr>
<tr><td>Number of output columns for any one stage.</td><td>2,000</td><td><a href="#error_TooManyColumns"><code>TooManyColumns</code></a></td></tr>
<tr><td>Number of cluster by columns for any one stage.</td><td>1,500</td><td><a href="#error_TooManyClusteredByColumns"><code>TooManyClusteredByColumns</code></a></td></tr>
<tr><td>Number of workers for any one stage.</td><td>Hard limit is 1,000. Memory-dependent soft limit may be lower.</td><td><a href="#error_TooManyWorkers"><code>TooManyWorkers</code></a></td></tr>
<tr><td>Maximum memory occupied by broadcasted tables.</td><td>30% of each <a href="/docs/latest/multi-stage-query/concepts.html#memory-usage">processor memory bundle</a>.</td><td><a href="#error_BroadcastTablesTooLarge"><code>BroadcastTablesTooLarge</code></a></td></tr>
<tr><td>Maximum memory occupied by buffered data during sort-merge join. Only relevant when <code>sqlJoinAlgorithm</code> is <code>sortMerge</code>.</td><td>10 MB</td><td><code>TooManyRowsWithSameKey</code></td></tr>
<tr><td>Maximum relaunch attempts per worker. Initial run is not a relaunch. The worker will be spawned 1 + <code>workerRelaunchLimit</code> times before the job fails.</td><td>2</td><td><code>TooManyAttemptsForWorker</code></td></tr>
<tr><td>Maximum relaunch attempts for a job across all workers.</td><td>100</td><td><code>TooManyAttemptsForJob</code></td></tr>
</tbody>
</table>
<p><a name="errors"></a></p>
<h2><a class="anchor" aria-hidden="true" id="error-codes"></a><a href="#error-codes" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Error codes</h2>
<p>The following table describes error codes you may encounter in the <code>multiStageQuery.payload.status.errorReport.error.errorCode</code> field:</p>
<table>
<thead>
<tr><th>Code</th><th>Meaning</th><th>Additional fields</th></tr>
</thead>
<tbody>
<tr><td><a name="error_BroadcastTablesTooLarge"><code>BroadcastTablesTooLarge</code></a></td><td>The size of the broadcast tables used in the right hand side of the join exceeded the memory reserved for them in a worker task.<br /><br />Try increasing the peon memory or reducing the size of the broadcast tables.</td><td><code>maxBroadcastTablesSize</code>: Memory reserved for the broadcast tables, measured in bytes.</td></tr>
<tr><td><a name="error_Canceled"><code>Canceled</code></a></td><td>The query was canceled. Common reasons for cancellation:<br /><br /><ul><li>User-initiated shutdown of the controller task via the <code>/druid/indexer/v1/task/{taskId}/shutdown</code> API.</li><li>Restart or failure of the server process that was running the controller task.</li></ul></td><td></td></tr>
<tr><td><a name="error_CannotParseExternalData"><code>CannotParseExternalData</code></a></td><td>A worker task could not parse data from an external datasource.</td><td><code>errorMessage</code>: More details on why parsing failed.</td></tr>
<tr><td><a name="error_ColumnNameRestricted"><code>ColumnNameRestricted</code></a></td><td>The query uses a restricted column name.</td><td><code>columnName</code>: The restricted column name.</td></tr>
<tr><td><a name="error_ColumnTypeNotSupported"><code>ColumnTypeNotSupported</code></a></td><td>The column type is not supported. This can be because:<br /> <br /><ul><li>Support for writing or reading from a particular column type is not supported.</li><li>The query attempted to use a column type that is not supported by the frame format. This occurs with ARRAY types, which are not yet implemented for frames.</li></ul></td><td><code>columnName</code>: The column name with an unsupported type.<br /> <br /><code>columnType</code>: The unknown column type.</td></tr>
<tr><td><a name="error_InsertCannotAllocateSegment"><code>InsertCannotAllocateSegment</code></a></td><td>The controller task could not allocate a new segment ID due to conflict with existing segments or pending segments. Common reasons for such conflicts:<br /> <br /><ul><li>Attempting to mix different granularities in the same intervals of the same datasource.</li><li>Prior ingestions that used non-extendable shard specs.</li></ul></td><td><code>dataSource</code><br /> <br /><code>interval</code>: The interval for the attempted new segment allocation.</td></tr>
<tr><td><a name="error_InsertCannotBeEmpty"><code>InsertCannotBeEmpty</code></a></td><td>An INSERT or REPLACE query did not generate any output rows in a situation where output rows are required for success. This can happen for INSERT or REPLACE queries with <code>PARTITIONED BY</code> set to something other than <code>ALL</code> or <code>ALL TIME</code>.</td><td><code>dataSource</code></td></tr>
<tr><td><a name="error_InsertCannotOrderByDescending"><code>InsertCannotOrderByDescending</code></a></td><td>An INSERT query contained a <code>CLUSTERED BY</code> expression in descending order. Druid's segment generation code only supports ascending order.</td><td><code>columnName</code></td></tr>
<tr><td><a name="error_InsertLockPreempted"><code>InsertLockPreempted</code></a></td><td>An INSERT or REPLACE query was canceled by a higher-priority ingestion job, such as a real-time ingestion task.</td><td></td></tr>
<tr><td><a name="error_InsertTimeNull"><code>InsertTimeNull</code></a></td><td>An INSERT or REPLACE query encountered a null timestamp in the <code>__time</code> field.<br /><br />This can happen due to using an expression like <code>TIME_PARSE(timestamp) AS __time</code> with a timestamp that cannot be parsed. (<a href="/docs/latest/querying/sql-scalar.html#date-and-time-functions"><code>TIME_PARSE</code></a> returns null when it cannot parse a timestamp.) In this case, try parsing your timestamps using a different function or pattern. Or, if your timestamps may genuinely be null, consider using <a href="/docs/latest/querying/sql-scalar.html#other-scalar-functions"><code>COALESCE</code></a> to provide a default value. One option is <a href="/docs/latest/querying/sql-scalar.html#date-and-time-functions"><code>CURRENT_TIMESTAMP</code></a>, which represents the start time of the job.<br /><br />This error code only occurs when <a href="/docs/latest/configuration/index.html#sql-compatible-null-handling"><code>druid.generic.useDefaultValueForNull = false</code></a>. When this property is <code>true</code> (the default setting), null timestamps are instead replaced by <code>1970-01-01 00:00:00</code>.</td></tr>
<tr><td><a name="error_InsertTimeOutOfBounds"><code>InsertTimeOutOfBounds</code></a></td><td>A REPLACE query generated a timestamp outside the bounds of the TIMESTAMP parameter for your OVERWRITE WHERE clause.<br /> <br />To avoid this error, verify that the you specified is valid.</td><td><code>interval</code>: time chunk interval corresponding to the out-of-bounds timestamp</td></tr>
<tr><td><a name="error_InvalidNullByte"><code>InvalidNullByte</code></a></td><td>A string column included a null byte. Null bytes in strings are not permitted.</td><td><code>column</code>: The column that included the null byte</td></tr>
<tr><td><a name="error_QueryNotSupported"><code>QueryNotSupported</code></a></td><td>QueryKit could not translate the provided native query to a multi-stage query.<br /> <br />This can happen if the query uses features that aren't supported, like GROUPING SETS.</td><td></td></tr>
<tr><td><a name="error_QueryRuntimeError"><code>QueryRuntimeError</code></a></td><td>MSQ uses the native query engine to run the leaf stages. This error tells MSQ that error is in native query runtime.<br /> <br /> Since this is a generic error, the user needs to look at logs for the error message and stack trace to figure out the next course of action. If the user is stuck, consider raising a <code>github</code> issue for assistance.</td><td><code>baseErrorMessage</code> error message from the native query runtime.</td></tr>
<tr><td><a name="error_RowTooLarge"><code>RowTooLarge</code></a></td><td>The query tried to process a row that was too large to write to a single frame. See the <a href="#limits">Limits</a> table for specific limits on frame size. Note that the effective maximum row size is smaller than the maximum frame size due to alignment considerations during frame writing.</td><td><code>maxFrameSize</code>: The limit on the frame size.</td></tr>
<tr><td><a name="error_TaskStartTimeout"><code>TaskStartTimeout</code></a></td><td>Unable to launch <code>numTasks</code> tasks within <code>timeout</code> milliseconds.<br /><br />There may be insufficient available slots to start all the worker tasks simultaneously. Try splitting up your query into smaller chunks using a smaller value of <a href="#context-parameters"><code>maxNumTasks</code></a>. Another option is to increase capacity.</td><td><code>numTasks</code>: The number of tasks attempted to launch.<br /><br /><code>timeout</code>: Timeout, in milliseconds, that was exceeded.</td></tr>
<tr><td><a name="error_TooManyAttemptsForJob"><code>TooManyAttemptsForJob</code></a></td><td>Total relaunch attempt count across all workers exceeded max relaunch attempt limit. See the <a href="#limits">Limits</a> table for the specific limit.</td><td><code>maxRelaunchCount</code>: Max number of relaunches across all the workers defined in the <a href="#limits">Limits</a> section. <br /><br /> <code>currentRelaunchCount</code>: current relaunch counter for the job across all workers. <br /><br /> <code>taskId</code>: Latest task id which failed <br /> <br /> <code>rootErrorMessage</code>: Error message of the latest failed task.</td></tr>
<tr><td><a name="error_TooManyAttemptsForWorker"><code>TooManyAttemptsForWorker</code></a></td><td>Worker exceeded maximum relaunch attempt count as defined in the <a href="#limits">Limits</a> section.</td><td><code>maxPerWorkerRelaunchCount</code>: Max number of relaunches allowed per worker as defined in the <a href="#limits">Limits</a> section. <br /><br /> <code>workerNumber</code>: the worker number for which the task failed <br /><br /> <code>taskId</code>: Latest task id which failed <br /> <br /> <code>rootErrorMessage</code>: Error message of the latest failed task.</td></tr>
<tr><td><a name="error_TooManyBuckets"><code>TooManyBuckets</code></a></td><td>Exceeded the maximum number of partition buckets for a stage (5,000 partition buckets).<br />&lt; br /&gt;Partition buckets are created for each <a href="#partitioned-by"><code>PARTITIONED BY</code></a> time chunk for INSERT and REPLACE queries. The most common reason for this error is that your <code>PARTITIONED BY</code> is too narrow relative to your data.</td><td><code>maxBuckets</code>: The limit on partition buckets.</td></tr>
<tr><td><a name="error_TooManyInputFiles"><code>TooManyInputFiles</code></a></td><td>Exceeded the maximum number of input files or segments per worker (10,000 files or segments).<br /><br />If you encounter this limit, consider adding more workers, or breaking up your query into smaller queries that process fewer files or segments per query.</td><td><code>numInputFiles</code>: The total number of input files/segments for the stage.<br /><br /><code>maxInputFiles</code>: The maximum number of input files/segments per worker per stage.<br /><br /><code>minNumWorker</code>: The minimum number of workers required for a successful run.</td></tr>
<tr><td><a name="error_TooManyPartitions"><code>TooManyPartitions</code></a></td><td>Exceeded the maximum number of partitions for a stage (25,000 partitions).<br /><br />This can occur with INSERT or REPLACE statements that generate large numbers of segments, since each segment is associated with a partition. If you encounter this limit, consider breaking up your INSERT or REPLACE statement into smaller statements that process less data per statement.</td><td><code>maxPartitions</code>: The limit on partitions which was exceeded</td></tr>
<tr><td><a name="error_TooManyClusteredByColumns"><code>TooManyClusteredByColumns</code></a></td><td>Exceeded the maximum number of clustering columns for a stage (1,500 columns).<br /><br />This can occur with <code>CLUSTERED BY</code>, <code>ORDER BY</code>, or <code>GROUP BY</code> with a large number of columns.</td><td><code>numColumns</code>: The number of columns requested.<br /><br /><code>maxColumns</code>: The limit on columns which was exceeded.<code>stage</code>: The stage number exceeding the limit<br /><br /></td></tr>
<tr><td><a name="error_TooManyRowsWithSameKey"><code>TooManyRowsWithSameKey</code></a></td><td>The number of rows for a given key exceeded the maximum number of buffered bytes on both sides of a join. See the <a href="#limits">Limits</a> table for the specific limit. Only occurs when <code>sqlJoinAlgorithm</code> is <code>sortMerge</code>.</td><td><code>key</code>: The key that had a large number of rows.<br /><br /><code>numBytes</code>: Number of bytes buffered, which may include other keys.<br /><br /><code>maxBytes</code>: Maximum number of bytes buffered.</td></tr>
<tr><td><a name="error_TooManyColumns"><code>TooManyColumns</code></a></td><td>Exceeded the maximum number of columns for a stage (2,000 columns).</td><td><code>numColumns</code>: The number of columns requested.<br /><br /><code>maxColumns</code>: The limit on columns which was exceeded.</td></tr>
<tr><td><a name="error_TooManyWarnings"><code>TooManyWarnings</code></a></td><td>Exceeded the maximum allowed number of warnings of a particular type.</td><td><code>rootErrorCode</code>: The error code corresponding to the exception that exceeded the required limit. <br /><br /><code>maxWarnings</code>: Maximum number of warnings that are allowed for the corresponding <code>rootErrorCode</code>.</td></tr>
<tr><td><a name="error_TooManyWorkers"><code>TooManyWorkers</code></a></td><td>Exceeded the maximum number of simultaneously-running workers. See the <a href="#limits">Limits</a> table for more details.</td><td><code>workers</code>: The number of simultaneously running workers that exceeded a hard or soft limit. This may be larger than the number of workers in any one stage if multiple stages are running simultaneously. <br /><br /><code>maxWorkers</code>: The hard or soft limit on workers that was exceeded. If this is lower than the hard limit (1,000 workers), then you can increase the limit by adding more memory to each task.</td></tr>
<tr><td><a name="error_NotEnoughMemory"><code>NotEnoughMemory</code></a></td><td>Insufficient memory to launch a stage.</td><td><code>suggestedServerMemory</code>: Suggested number of bytes of memory to allocate to a given process. <br /><br /><code>serverMemory</code>: The number of bytes of memory available to a single process.<br /><br /><code>usableMemory</code>: The number of usable bytes of memory for a single process.<br /><br /><code>serverWorkers</code>: The number of workers running in a single process.<br /><br /><code>serverThreads</code>: The number of threads in a single process.</td></tr>
<tr><td><a name="error_NotEnoughTemporaryStorage"><code>NotEnoughTemporaryStorage</code></a></td><td>Insufficient temporary storage configured to launch a stage. This limit is set by the property <code>druid.indexer.task.tmpStorageBytesPerTask</code>. This property should be increased to the minimum suggested limit to resolve this.</td><td><code>suggestedMinimumStorage</code>: Suggested number of bytes of temporary storage space to allocate to a given process. <br /><br /><code>configuredTemporaryStorage</code>: The number of bytes of storage currently configured.</td></tr>
<tr><td><a name="error_WorkerFailed"><code>WorkerFailed</code></a></td><td>A worker task failed unexpectedly.</td><td><code>errorMsg</code><br /><br /><code>workerTaskId</code>: The ID of the worker task.</td></tr>
<tr><td><a name="error_WorkerRpcFailed"><code>WorkerRpcFailed</code></a></td><td>A remote procedure call to a worker task failed and could not recover.</td><td><code>workerTaskId</code>: the id of the worker task</td></tr>
<tr><td><a name="error_UnknownError"><code>UnknownError</code></a></td><td>All other errors.</td><td><code>message</code></td></tr>
</tbody>
</table>
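<p>For illustration, the following hypothetical, abbreviated task report shows where the error code and its additional fields appear. Only the field path documented above is authoritative; the surrounding structure and values are illustrative:</p>
<pre><code class="hljs css language-json">{
  "multiStageQuery": {
    "payload": {
      "status": {
        "errorReport": {
          "error": {
            "errorCode": "TooManyInputFiles",
            "numInputFiles": 12000,
            "maxInputFiles": 10000,
            "minNumWorker": 2
          }
        }
      }
    }
  }
}
</code></pre>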
</span></div></article></div></div></div></body></html>