feat: Bounded Stream Supervisor (#19372)

* Initial implementation for BoundedStreamConfig
* Implement isOffsetAtOrBeyond for Rabbit and Kinesis
* Unit test coverage
* Fix BoundedStreamConfigTest
* Remove unused import
* Remove unneeded tests
* Unit test fix
* Fix import and add coverage for RabbitStreamSupervisor
* Test coverage for validateBoundedStreamConfig
* Re-initialize partition group and reset state after reset
* Handle edge case where startOffset equals endOffset
* Compare Kinesis sequence numbers using BigInteger
* Remove stale test case
* Remove redundant validation of boundedStreamConfig
* Throw DruidException with ADMIN persona for BoundedStreamConfig
* Clean up unused Logger
* Javadoc and comment cleanup for isBoundedWorkComplete
* Add embedded test for bounded ingestion
* Add boundedStreamConfig to SeekableStreamDataSourceMetadata for metadata validation
* Revert pendingCompletionGroups check
* Unit test fix
* Embedded test for metadata mismatch
* Remove unused var
* Unit test fix
* Add boundedStreamConfig documentation
* Fix spellcheck
* Increase code coverage
* Increase coverage for BoundedStreamConfig
* Remove unnecessary test
* Simplify completion check in createNewTasks
* Remove unused function
* Unit test bounded supervisor completion
* Improve coverage on RabbitStreamSupervisor
* Unit test coverage
* Unit test for IllegalArgumentException for KafkaSupervisor
* Check if end offsets are exclusive for bounded work completion
* Increase branch coverage
* Increase branch coverage
* Unit test coverage
* Fix import
* Remove use of deprecated function
* Revert to deprecated function since not initialized in mock object
* Fix merge conflict
* Detect metadata mismatch when committed offset > bounded config end
* Clean up redundant tests in BoundedStreamConfigTest and use EqualsVerifier
* Compare Kinesis sequence numbers using built-in comparison
* Clean up docs based on review comments
* Early return before convert for hasTaskGroupReachedBoundedEnd
* Resolve merge conflicts
* Fix KinesisSupervisorTest
* Update KinesisSupervisorTest.java
* Cover case where start > end
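The "Compare Kinesis sequence numbers using BigInteger" commits above reflect the fact that Kinesis sequence numbers are opaque decimal strings that can exceed the range of a `long`, so neither string comparison nor `Long.parseLong` is safe. A minimal sketch of that idea (the helper name `isOffsetAtOrBeyond` follows the commit messages; the actual Druid signature and surrounding classes may differ):

```java
import java.math.BigInteger;

public class KinesisSequenceNumbers
{
  /**
   * Sketch of the comparison described in the commit log above, not the
   * actual Druid implementation. Kinesis sequence numbers are decimal
   * strings that can overflow a long, so compare them as
   * arbitrary-precision integers.
   */
  public static boolean isOffsetAtOrBeyond(String recordSequence, String endSequence)
  {
    return new BigInteger(recordSequence).compareTo(new BigInteger(endSequence)) >= 0;
  }
}
```

Plain string comparison would incorrectly order `"9"` after `"10"`, which is why an arbitrary-precision numeric comparison (or an equivalent built-in comparator, as a later commit switches to) is needed when deciding whether a task group has reached the bounded end offset.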
Druid is a high-performance, real-time analytics database. Druid's main value add is to reduce time to insight and action.
Druid is designed for workflows where fast queries and ingest really matter. Druid excels at powering UIs, running operational (ad-hoc) queries, or handling high concurrency. Consider Druid as an open source alternative to data warehouses for a variety of use cases. The design documentation explains the key concepts.
You can get started with Druid with our local or Docker quickstart. For Kubernetes deployments, the druid-operator is maintained in a separate repository.
Druid provides a rich set of APIs (via HTTP and JDBC) for loading, managing, and querying your data. You can also interact with Druid via the built-in web console (shown below).
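As an illustration of the HTTP API, the sketch below builds (but does not send) a query against Druid's SQL-over-HTTP endpoint, `/druid/v2/sql`. The host and port used here (a Router on `localhost:8888`) are assumptions for illustration; substitute your own cluster's address:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

public class DruidSqlRequest
{
  /**
   * Builds a POST request for Druid's SQL-over-HTTP API. The
   * localhost:8888 Router address is an assumption for illustration.
   */
  public static HttpRequest build(String sql)
  {
    // Minimal JSON body: {"query": "<sql>"}; escape embedded quotes.
    String body = "{\"query\": \"" + sql.replace("\"", "\\\"") + "\"}";
    return HttpRequest.newBuilder(URI.create("http://localhost:8888/druid/v2/sql"))
                      .header("Content-Type", "application/json")
                      .POST(HttpRequest.BodyPublishers.ofString(body, StandardCharsets.UTF_8))
                      .build();
  }
}
```

Send the request with `java.net.http.HttpClient` against a running cluster, or use the JDBC (Avatica) driver if you prefer a standard database interface.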
Load streaming and batch data using a point-and-click wizard to guide you through ingestion setup. Monitor one-off tasks and ingestion supervisors.
Manage your cluster with ease. Get a view of your datasources, segments, ingestion tasks, and services from one convenient location. All powered by SQL systems tables, allowing you to see the underlying query for each view.
Use the built-in query workbench to prototype DruidSQL and native queries or connect one of the many tools that help you make the most out of Druid.
See the latest documentation for the current official release. If you need information on a previous release, you can browse previous releases' documentation.
Make documentation and tutorial updates in /docs using Markdown or extended Markdown (MDX). Then, open a pull request.
To build the site locally, you need Node 18 or higher. Install Docusaurus 3 with `npm install` (or `yarn install`) in the website directory, then run `npm start` (or `yarn start`) to launch a local build of the docs.
If you're looking to update non-doc pages like Use Cases, those files are in the druid-website-src repo.
For more information, see the README in the ./website directory.
Visit the official project community page to read about getting involved in contributing to Apache Druid, and how we help one another use and operate Druid.
* Ask usage questions on the druid-user mailing list on Google Groups, and have more technical conversations in #troubleshooting on Slack.
* Development discussions happen on the druid-dev mailing list (dev@druid.apache.org). Subscribe by emailing dev-subscribe@druid.apache.org. For live conversations, join the #dev channel on Slack.
* Check out the official community page for details of how to join the community Slack channels.
Find articles written by community members and a calendar of upcoming events on the project site - contribute your own events and articles by submitting a PR in the apache/druid-website-src repository.
Please note that JDK 21 or JDK 25 is required to build Druid.
See the latest build guide for instructions on building Apache Druid from source.
Please follow the community guidelines for contributing.
For instructions on setting up IntelliJ, see dev/intellij-setup.md.