To use this Apache Druid extension, include `druid-bloom-filter` in the extensions load list.
This extension adds the ability to both construct Bloom filters from query results and filter query results by testing against a Bloom filter. A Bloom filter is a probabilistic data structure for performing a set membership check. A Bloom filter is a good candidate to use with Druid for cases where an explicit filter is impossible, e.g. filtering a query against a set of millions of values.
Following are some characteristics of Bloom filters:

- Because of the probabilistic nature of Bloom filters, false positive results are possible: an element that was never inserted into the filter during construction may still cause `test()` to say `true`.
- False negatives are not possible: if an element is present, `test()` will never say `false`.

This extension is currently based on `org.apache.hive.common.util.BloomKFilter` from `hive-storage-api`. Internally, this implementation uses Murmur3 as the hash algorithm.
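The characteristics above can be illustrated with a minimal toy sketch. This is *not* the `BloomKFilter` implementation the extension uses (that one uses Murmur3 hashing); the class name, sizes, and hash derivation here are purely illustrative:

```java
import java.util.BitSet;

// Toy Bloom filter for illustration only; the Druid extension itself uses
// org.apache.hive.common.util.BloomKFilter with Murmur3 hashing.
public class ToyBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int numHashes;

    public ToyBloomFilter(int size, int numHashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.numHashes = numHashes;
    }

    // Derive the i-th bit position from two hashes of the value
    // (the classic "double hashing" trick, for illustration).
    private int position(String value, int i) {
        int h1 = value.hashCode();
        int h2 = Integer.rotateLeft(h1, 16) ^ 0x9e3779b9;
        return Math.floorMod(h1 + i * h2, size);
    }

    public void add(String value) {
        for (int i = 0; i < numHashes; i++) {
            bits.set(position(value, i));
        }
    }

    // May return true for values never added (false positive), but will
    // never return false for a value that was added (no false negatives).
    public boolean test(String value) {
        for (int i = 0; i < numHashes; i++) {
            if (!bits.get(position(value, i))) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        ToyBloomFilter filter = new ToyBloomFilter(1024, 3);
        filter.add("value 1");
        filter.add("value 2");
        System.out.println(filter.test("value 1")); // true: no false negatives
        System.out.println(filter.test("value 2")); // true
    }
}
```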
To construct a BloomKFilter externally with Java to use as a filter in a Druid query:

```java
import java.io.ByteArrayOutputStream;

import org.apache.commons.codec.binary.Base64;
import org.apache.hive.common.util.BloomKFilter;

BloomKFilter bloomFilter = new BloomKFilter(1500);
bloomFilter.addString("value 1");
bloomFilter.addString("value 2");
bloomFilter.addString("value 3");
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
BloomKFilter.serialize(byteArrayOutputStream, bloomFilter);
String base64Serialized = Base64.encodeBase64String(byteArrayOutputStream.toByteArray());
```
This string can then be used in the native or SQL Druid query.
```json
{
  "type" : "bloom",
  "dimension" : <dimension_name>,
  "bloomKFilter" : <serialized_bytes_for_BloomKFilter>,
  "extractionFn" : <extraction_fn>
}
```
| Property | Description | Required? |
|---|---|---|
| `type` | Filter type. Should always be `bloom`. | yes |
| `dimension` | The dimension to filter over. | yes |
| `bloomKFilter` | Base64-encoded binary representation of `org.apache.hive.common.util.BloomKFilter`. | yes |
| `extractionFn` | Extraction function to apply to the dimension values. | no |
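For instance, a filled-in bloom filter might look like the following. The dimension name and the Base64 value here are placeholders, not a real serialized filter:

```json
{
  "type" : "bloom",
  "dimension" : "user",
  "bloomKFilter" : "BAAAJhAAAA..."
}
```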
Serialized BloomKFilter format:

- 1 byte for the number of hash functions.
- 1 big-endian int for the number of longs in the bitset.
- Big-endian longs in the BloomKFilter bitset.

Note: `org.apache.hive.common.util.BloomKFilter` provides a `serialize` method which can be used to serialize Bloom filters to an `OutputStream`.
Bloom filters can be used in SQL `WHERE` clauses via the `bloom_filter_test` operator:

```sql
SELECT COUNT(*) FROM druid.foo WHERE bloom_filter_test(<expr>, '<serialized_bytes_for_BloomKFilter>')
```
The bloom filter extension also adds a Bloom filter Druid expression which shares syntax with the SQL operator:

```
bloom_filter_test(<expr>, '<serialized_bytes_for_BloomKFilter>')
```
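As one hypothetical use, the expression could appear inside a native expression filter; the column name and Base64 value below are placeholders, not a real serialized filter:

```json
{
  "type" : "expression",
  "expression" : "bloom_filter_test(user, 'BAAAJhAAAA...')"
}
```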
Input for a `bloomKFilter` can also be created from a Druid query with the `bloom` aggregator. Note that it is very important to set a reasonable value for the `maxNumEntries` parameter, which is the maximum number of distinct entries that the Bloom filter can represent without increasing the false positive rate. It may be worth performing a query using one of the unique count sketches to calculate a value for this parameter, in order to build a Bloom filter appropriate for the query.
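For example, one way to estimate a suitable `maxNumEntries` is to first measure the approximate distinct count of the dimension with Druid's built-in `APPROX_COUNT_DISTINCT` aggregator; the datasource and column here are illustrative:

```sql
SELECT APPROX_COUNT_DISTINCT(user) FROM wikiticker
```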
```json
{
  "type" : "bloom",
  "name" : <output_field_name>,
  "maxNumEntries" : <maximum_number_of_elements_for_BloomKFilter>,
  "field" : <dimension_spec>
}
```
| Property | Description | Required? |
|---|---|---|
| `type` | Aggregator type. Should always be `bloom`. | yes |
| `name` | Output field name. | yes |
| `field` | `DimensionSpec` to add to `org.apache.hive.common.util.BloomKFilter`. | yes |
| `maxNumEntries` | Maximum number of distinct values supported by `org.apache.hive.common.util.BloomKFilter`, default `1500`. | no |
Example:

```json
{
  "queryType": "timeseries",
  "dataSource": "wikiticker",
  "intervals": [ "2015-09-12T00:00:00.000/2015-09-13T00:00:00.000" ],
  "granularity": "day",
  "aggregations": [
    {
      "type": "bloom",
      "name": "userBloom",
      "maxNumEntries": 100000,
      "field": {
        "type": "default",
        "dimension": "user",
        "outputType": "STRING"
      }
    }
  ]
}
```

Response:

```json
[ { "timestamp": "2015-09-12T00:00:00.000Z", "result": { "userBloom": "BAAAJhAAAA..." } } ]
```
These values can then be set in the filter specification described above.
Ordering results by a bloom filter aggregator, for example in a TopN query, will perform a comparatively expensive linear scan of the filter itself to count the number of set bits as a means of approximating how many items have been added to the set. As such, ordering by an alternate aggregation is recommended if possible.
Bloom filters can be computed in SQL expressions with the `bloom_filter` aggregator:

```sql
SELECT BLOOM_FILTER(<expression>, <max number of entries>) FROM druid.foo WHERE dim2 = 'abc'
```

This requires the setting `druid.sql.planner.serializeComplexValues` to be set to `true`. Bloom filter results in a SQL response are serialized into a Base64 string, which can then be used in subsequent queries as a filter.
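For reference, a minimal sketch of enabling that setting in a common runtime properties file (the exact file location varies by deployment):

```properties
# Enable serialization of complex values (such as Bloom filters) in SQL results.
druid.sql.planner.serializeComplexValues=true
```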