generic block compressed complex columns (#16863)

changes:
* Adds new `CompressedComplexColumn`, `CompressedComplexColumnSerializer`, and `CompressedComplexColumnSupplier`, based on the `CompressedVariableSizedBlobColumn` used by JSON columns
* Adds `IndexSpec.complexMetricCompression`, which can be used to specify compression for the generic compressed complex column. Defaults to uncompressed, because compressed columns are not backwards compatible.
* Adds a new overload of `ComplexMetricSerde.getSerializer` that accepts an `IndexSpec` argument when creating a serializer. The old signature is now `@Deprecated` and has a default implementation that returns `null`; if an implementation overrides it to return a non-null serializer, the default implementation of the new overload will use it. Otherwise, the new overload uses a `CompressedComplexColumnSerializer` when `IndexSpec.complexMetricCompression` is set to something other than null/none/uncompressed, and a `LargeColumnSupportedComplexColumnSerializer` otherwise (see the first sketch after this list).
* Consolidates the duplicated generic implementations of `ComplexMetricSerde.getSerializer` and `ComplexMetricSerde.deserializeColumn` into default implementations on `ComplexMetricSerde`, instead of copies scattered all over the place. The default implementation of `deserializeColumn` checks whether the first byte indicates that the new compression was used, and otherwise falls back to the `GenericIndexed` based supplier (see the second sketch after this list).
* Complex columns with custom serializers/deserializers are unaffected and may continue to use specialized compression or any other scheme; the new machinery only provides generic implementations built around `ObjectStrategy`.
* Adds `ObjectStrategy.readRetainsBufferReference` so `CompressedComplexColumn` only copies on read when required (see the third sketch after this list)
* Adds a `copyValueOnRead` flag down to `CompressedBlockReader` to avoid a buffer duplicate when the value needs to be copied anyway
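
To make the serializer selection concrete, here is a minimal sketch of the new default dispatch, assuming the shape described above. The deprecated two-argument `getSerializer`, `getObjectStrategy`, and `LargeColumnSupportedComplexColumnSerializer.create` are pre-existing Druid APIs; `CompressedComplexColumnSerializer.create` and the `getComplexMetricCompression` accessor are illustrative names for the new pieces, not copied from the patch.

```java
import javax.annotation.Nullable;
import org.apache.druid.segment.GenericColumnSerializer;
import org.apache.druid.segment.IndexSpec;
import org.apache.druid.segment.data.CompressionStrategy;
import org.apache.druid.segment.data.ObjectStrategy;
import org.apache.druid.segment.serde.LargeColumnSupportedComplexColumnSerializer;
import org.apache.druid.segment.writeout.SegmentWriteOutMedium;

public abstract class ComplexMetricSerde
{
  public abstract ObjectStrategy getObjectStrategy();

  // Old signature: now deprecated, no longer abstract, and returns null by default.
  @Deprecated
  @Nullable
  public GenericColumnSerializer getSerializer(SegmentWriteOutMedium medium, String column)
  {
    return null;
  }

  // New signature: a legacy override wins; otherwise the IndexSpec picks the writer.
  public GenericColumnSerializer getSerializer(SegmentWriteOutMedium medium, String column, IndexSpec indexSpec)
  {
    final GenericColumnSerializer legacy = getSerializer(medium, column);
    if (legacy != null) {
      return legacy; // existing ComplexMetricSerde implementations keep working unchanged
    }
    final CompressionStrategy compression = indexSpec.getComplexMetricCompression(); // assumed accessor name
    if (compression != null
        && compression != CompressionStrategy.NONE
        && compression != CompressionStrategy.UNCOMPRESSED) {
      // new block-compressed writer, modeled on CompressedVariableSizedBlobColumn;
      // CompressedComplexColumnSerializer is the class added by this PR (signature assumed)
      return CompressedComplexColumnSerializer.create(medium, column, getObjectStrategy(), compression);
    }
    // backwards-compatible GenericIndexed-based writer
    return LargeColumnSupportedComplexColumnSerializer.create(medium, column, getObjectStrategy());
  }
}
```

At ingestion time this would presumably be driven by the `indexSpec` in the tuning config, with `complexMetricCompression` set to e.g. `lz4` (assuming the JSON property mirrors the field name).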
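The consolidated read path can be sketched the same way, continuing the `ComplexMetricSerde` sketch above. `ColumnBuilder`, `ComplexColumnPartSupplier`, and `GenericIndexed.read` are pre-existing; the `IS_COMPRESSED` marker byte and `CompressedComplexColumnSupplier.read` are hypothetical names for the new format's pieces.

```java
// Sketch of the default deserializeColumn on ComplexMetricSerde (types from
// org.apache.druid.segment.column / .data / .serde).
public void deserializeColumn(java.nio.ByteBuffer buffer, ColumnBuilder builder)
{
  // Peek at the first byte without consuming it; the new format is assumed to
  // write a marker byte that the legacy GenericIndexed layout never produces.
  final byte versionByte = buffer.get(buffer.position());
  if (versionByte == CompressedComplexColumnSerializer.IS_COMPRESSED) { // hypothetical constant
    // new block-compressed layout
    builder.setComplexColumnSupplier(
        CompressedComplexColumnSupplier.read(buffer, builder, getTypeName(), getObjectStrategy()) // hypothetical reader
    );
  } else {
    // legacy layout: a GenericIndexed of serialized objects
    builder.setComplexColumnSupplier(
        new ComplexColumnPartSupplier(getTypeName(), GenericIndexed.read(buffer, getObjectStrategy()))
    );
  }
}
```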
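Lastly, the copy-avoidance hook from the final two bullets might look like the following. `ObjectStrategy` really is an interface extending `Comparator`, and `fromByteBuffer` is its existing read method; the default value chosen here is an assumption about the general shape.

```java
import java.nio.ByteBuffer;
import java.util.Comparator;

public interface ObjectStrategy<T> extends Comparator<T>
{
  // Existing read method: deserialize a value from [position, position + numBytes).
  T fromByteBuffer(ByteBuffer buffer, int numBytes);

  // New hook (sketch): reports whether values returned by fromByteBuffer keep a
  // reference to the underlying buffer. The conservative default assumes they do,
  // so CompressedComplexColumn must hand the strategy a copy before the shared
  // decompression buffer is reused for the next block.
  default boolean readRetainsBufferReference()
  {
    return true;
  }
}
```

Conversely, when the value is going to be copied anyway, the `copyValueOnRead` flag passed down to `CompressedBlockReader` lets it skip producing a defensive buffer duplicate first, saving an allocation per read.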
README.md



Apache Druid

Druid is a high performance real-time analytics database. Druid's main value add is to reduce time to insight and action.

Druid is designed for workflows where fast queries and ingest really matter. Druid excels at powering UIs, running operational (ad-hoc) queries, or handling high concurrency. Consider Druid as an open source alternative to data warehouses for a variety of use cases. The design documentation explains the key concepts.

Getting started

You can get started with Druid using our local or Docker quickstart.

Druid provides a rich set of APIs (via HTTP and JDBC) for loading, managing, and querying your data. You can also interact with Druid via the built-in web console (shown below).

Load data

[screenshot: data loader with Kafka]

Load streaming and batch data using a point-and-click wizard to guide you through ingestion setup. Monitor one-off tasks and ingestion supervisors.

Manage the cluster

[screenshot: management view]

Manage your cluster with ease. Get a view of your datasources, segments, ingestion tasks, and services from one convenient location. All powered by SQL systems tables, allowing you to see the underlying query for each view.

Issue queries

[screenshot: query view]

Use the built-in query workbench to prototype DruidSQL and native queries or connect one of the many tools that help you make the most out of Druid.

Documentation

See the latest documentation for the current official release. If you need information on a previous release, you can browse the documentation for previous releases.

Make documentation and tutorial updates in /docs using Markdown or extended Markdown (MDX). Then, open a pull request.

To build the site locally, you need Node 16.14 or higher. Install Docusaurus 2 by running `npm install` or `yarn install` in the website directory, then run `npm start` or `yarn start` to launch a local build of the docs.

If you're looking to update non-doc pages like Use Cases, those files are in the druid-website-src repo.

Community

Visit the official project community page to read about getting involved in contributing to Apache Druid, and how we help one another use and operate Druid.

  • Druid users can find help in the druid-user mailing list on Google Groups, and have more technical conversations in #troubleshooting on Slack.
  • Druid development discussions take place in the druid-dev mailing list (dev@druid.apache.org). Subscribe by emailing dev-subscribe@druid.apache.org. For live conversations, join the #dev channel on Slack.

Check out the official community page for details of how to join the community Slack channels.

Find articles written by community members and a calendar of upcoming events on the project site. Contribute your own events and articles by submitting a PR in the apache/druid-website-src repository.

Building from source

Please note that JDK 8 or JDK 11 is required to build Druid.

See the latest build guide for instructions on building Apache Druid from source.

Contributing

Please follow the community guidelines for contributing.

For instructions on setting up IntelliJ, see dev/intellij-setup.md.

License

Apache License, Version 2.0