Improve code flow in the First/Last vector aggregators and unify the numeric aggregators with the String implementations (#16230)

This PR fixes the first and last vector aggregators and improves their readability. The following changes are introduced:

    Folding is broken in the vectorized versions: the row time is considered before checking whether the object is folded (see the sketch after this list).

    If the numeric aggregator is passed an object of some other type (such as a String) for some other reason, it treats the object as folded even though it should not. Such objects should instead be converted to the expected type and aggregated properly.

    The aggregators must use generics properly. This minimizes the ClassCastException issues that can occur with mixed segment types. The String first/last aggregators are unified with the numeric versions as well.

    The aggregators must aggregate null values (https://github.com/apache/druid/blob/master/processing/src/main/java/org/apache/druid/query/aggregation/first/StringFirstLastUtils.java#L55-L56). Only pairs with time == null should be ignored, not pairs with value == null.

    Time nullity is ignored when trying to vectorize the data.

    The String versions are initialized with DateTimes.MIN, which is equal to Long.MIN / 2. This can cause incorrect results if the user supplies a custom time column. NOTE: this is still present because fixing it would require a larger refactor across all of the versions.

    Because the code flow has changed (for example, the direction of the for loops), users might see results that differ from what they expect. However, this only changes which result is returned, not the contract set by the first/last aggregators: if multiple values have the same timestamp, any of them can get picked.

    If the column is non-existent, users might expect the timestamp to change from DateTimes.MAX to Long.MAX, because the code incorrectly used DateTimes.MAX to initialize the aggregator; with a custom timestamp column, however, this might not be the case. A SQL query might be prohibited from using an arbitrary Long, since it requires a cast to the timestamp function that can fail, but AFAICT native queries don't have such limitations.
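To make the corrected flow concrete, here is a minimal sketch of the aggregation logic described above. Every name in it (FirstAggregatorSketch, TimedPair, coerce) is hypothetical and heavily simplified; the real vector aggregators operate on column selectors and byte-buffer offsets rather than a single in-memory pair.

```java
import javax.annotation.Nullable;

/**
 * Hypothetical, simplified sketch of the corrected "first" aggregation flow.
 * The generic parameter T is what unifies the String and numeric variants.
 */
abstract class FirstAggregatorSketch<T>
{
  /** Stand-in for Druid's folded (timestamp, value) pair. */
  static final class TimedPair<V>
  {
    @Nullable
    final Long time;
    @Nullable
    final V value;

    TimedPair(@Nullable Long time, @Nullable V value)
    {
      this.time = time;
      this.value = value;
    }
  }

  // Initialized with Long.MAX_VALUE rather than a DateTime-based maximum,
  // since a custom time column may hold values outside the DateTime range.
  private long firstTime = Long.MAX_VALUE;
  @Nullable
  private T firstValue = null;

  void aggregate(@Nullable Long rowTime, @Nullable Object input)
  {
    // Check for a folded object *before* comparing times: a folded pair
    // carries its own timestamp, which replaces the row's __time value.
    if (input instanceof TimedPair) {
      @SuppressWarnings("unchecked")
      final TimedPair<T> pair = (TimedPair<T>) input;
      // Skip only pairs whose *time* is null; a null value at a valid
      // time is still a legitimate candidate for "first".
      if (pair.time != null && pair.time < firstTime) {
        firstTime = pair.time;
        firstValue = pair.value;
      }
      return;
    }

    // A non-folded input of an unexpected type (e.g. a String reaching a
    // numeric aggregator) is coerced to T instead of being misread as folded.
    if (rowTime != null && rowTime < firstTime) {
      firstTime = rowTime;
      firstValue = coerce(input);
    }
  }

  /** Hypothetical coercion hook; the numeric and String variants differ only here. */
  @Nullable
  abstract T coerce(@Nullable Object input);
}
```

The "last" variants would follow the same shape with the comparison direction reversed and Long.MIN_VALUE as the initial time.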
README.md



Apache Druid

Druid is a high performance real-time analytics database. Druid's main value add is to reduce time to insight and action.

Druid is designed for workflows where fast queries and ingest really matter. Druid excels at powering UIs, running operational (ad-hoc) queries, or handling high concurrency. Consider Druid as an open source alternative to data warehouses for a variety of use cases. The design documentation explains the key concepts.

Getting started

You can get started with Druid with our local or Docker quickstart.

Druid provides a rich set of APIs (via HTTP and JDBC) for loading, managing, and querying your data. You can also interact with Druid via the built-in web console (shown below).
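For example, SQL queries can be issued over HTTP by POSTing JSON to the Router's /druid/v2/sql endpoint. A minimal sketch, assuming a quickstart cluster with the Router listening on localhost:8888 and the tutorial's wikipedia datasource loaded:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DruidSqlQuery
{
  public static void main(String[] args) throws Exception
  {
    // JSON body for Druid's SQL API; the datasource name is illustrative.
    String body = "{\"query\": \"SELECT COUNT(*) AS cnt FROM wikipedia\"}";

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8888/druid/v2/sql"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build();

    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.body()); // JSON array of result rows
  }
}
```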

Load data

data loader Kafka

Load streaming and batch data using a point-and-click wizard to guide you through ingestion setup. Monitor one-off tasks and ingestion supervisors.

Manage the cluster

management

Manage your cluster with ease. Get a view of your datasources, segments, ingestion tasks, and services from one convenient location. All powered by SQL systems tables, allowing you to see the underlying query for each view.
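Those system tables can also be queried directly over the same SQL API. A sketch along the lines of the example above, listing segment counts per datasource from sys.segments (again assuming a local quickstart cluster):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DruidSysTablesQuery
{
  public static void main(String[] args) throws Exception
  {
    // sys.segments is one of the SQL system tables that back the console views.
    String body = "{\"query\": \"SELECT datasource, COUNT(*) AS num_segments"
        + " FROM sys.segments GROUP BY datasource\"}";

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8888/druid/v2/sql"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build();

    System.out.println(HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString()).body());
  }
}
```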

Issue queries

query view combo

Use the built-in query workbench to prototype DruidSQL and native queries or connect one of the many tools that help you make the most out of Druid.

Documentation

See the latest documentation for the current official release. If you need information on a previous release, you can browse previous releases' documentation.

Make documentation and tutorials updates in /docs using Markdown or extended Markdown (MDX). Then, open a pull request.

To build the site locally, you need Node 16.14 or higher and to install Docusaurus 2 with npm|yarn install in the website directory. Then you can run npm|yarn start to launch a local build of the docs.

If you're looking to update non-doc pages like Use Cases, those files are in the druid-website-src repo.

Community

Visit the official project community page to read about getting involved in contributing to Apache Druid, and how we help one another use and operate Druid.

  • Druid users can find help in the druid-user mailing list on Google Groups, and have more technical conversations in #troubleshooting on Slack.
  • Druid development discussions take place in the druid-dev mailing list (dev@druid.apache.org). Subscribe by emailing dev-subscribe@druid.apache.org. For live conversations, join the #dev channel on Slack.

Check out the official community page for details of how to join the community Slack channels.

Find articles written by community members and a calendar of upcoming events on the project site. Contribute your own events and articles by submitting a PR in the apache/druid-website-src repository.

Building from source

Please note that JDK 8 or JDK 11 is required to build Druid.

See the latest build guide for instructions on building Apache Druid from source.

Contributing

Please follow the community guidelines for contributing.

For instructions on setting up IntelliJ, see dev/intellij-setup.md.

License

Apache License, Version 2.0