<!DOCTYPE html>
<!--
| Generated by Apache Maven Doxia Site Renderer 1.8.1 from src/site/markdown/aql/filters.md at 2019-03-07
| Rendered using Apache Maven Fluido Skin 1.7
-->
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<meta name="Date-Revision-yyyymmdd" content="20190307" />
<meta http-equiv="Content-Language" content="en" />
<title>AsterixDB &#x2013; Filter-Based LSM Index Acceleration</title>
<link rel="stylesheet" href="../css/apache-maven-fluido-1.7.min.css" />
<link rel="stylesheet" href="../css/site.css" />
<link rel="stylesheet" href="../css/print.css" media="print" />
<script type="text/javascript" src="../js/apache-maven-fluido-1.7.min.js"></script>
</head>
<body class="topBarDisabled">
<div class="container-fluid">
<div id="banner">
<div class="pull-left"><a href=".././" id="bannerLeft"><img src="../images/asterixlogo.png" alt="AsterixDB"/></a></div>
<div class="pull-right"></div>
<div class="clear"><hr/></div>
</div>
<div id="breadcrumbs">
<ul class="breadcrumb">
<li id="publishDate">Last Published: 2019-03-07</li>
<li id="projectVersion" class="pull-right">Version: 0.9.4</li>
<li class="pull-right"><a href="../index.html" title="Documentation Home">Documentation Home</a></li>
</ul>
</div>
<div class="row-fluid">
<div id="leftColumn" class="span2">
<div class="well sidebar-nav">
<ul class="nav nav-list">
<li class="nav-header">Get Started - Installation</li>
<li><a href="../ncservice.html" title="Option 1: using NCService"><span class="none"></span>Option 1: using NCService</a></li>
<li><a href="../ansible.html" title="Option 2: using Ansible"><span class="none"></span>Option 2: using Ansible</a></li>
<li><a href="../aws.html" title="Option 3: using Amazon Web Services"><span class="none"></span>Option 3: using Amazon Web Services</a></li>
<li class="nav-header">AsterixDB Primer</li>
<li><a href="../sqlpp/primer-sqlpp.html" title="Option 1: using SQL++"><span class="none"></span>Option 1: using SQL++</a></li>
<li><a href="../aql/primer.html" title="Option 2: using AQL"><span class="none"></span>Option 2: using AQL</a></li>
<li class="nav-header">Data Model</li>
<li><a href="../datamodel.html" title="The Asterix Data Model"><span class="none"></span>The Asterix Data Model</a></li>
<li class="nav-header">Queries - SQL++</li>
<li><a href="../sqlpp/manual.html" title="The SQL++ Query Language"><span class="none"></span>The SQL++ Query Language</a></li>
<li><a href="../sqlpp/builtins.html" title="Builtin Functions"><span class="none"></span>Builtin Functions</a></li>
<li class="nav-header">Queries - AQL</li>
<li><a href="../aql/manual.html" title="The Asterix Query Language (AQL)"><span class="none"></span>The Asterix Query Language (AQL)</a></li>
<li><a href="../aql/builtins.html" title="Builtin Functions"><span class="none"></span>Builtin Functions</a></li>
<li class="nav-header">API/SDK</li>
<li><a href="../api.html" title="HTTP API"><span class="none"></span>HTTP API</a></li>
<li><a href="../csv.html" title="CSV Output"><span class="none"></span>CSV Output</a></li>
<li class="nav-header">Advanced Features</li>
<li><a href="../aql/fulltext.html" title="Support of Full-text Queries"><span class="none"></span>Support of Full-text Queries</a></li>
<li><a href="../aql/externaldata.html" title="Accessing External Data"><span class="none"></span>Accessing External Data</a></li>
<li><a href="../feeds/tutorial.html" title="Support for Data Ingestion"><span class="none"></span>Support for Data Ingestion</a></li>
<li><a href="../udf.html" title="User Defined Functions"><span class="none"></span>User Defined Functions</a></li>
<li class="active"><a href="#"><span class="none"></span>Filter-Based LSM Index Acceleration</a></li>
<li><a href="../aql/similarity.html" title="Support of Similarity Queries"><span class="none"></span>Support of Similarity Queries</a></li>
</ul>
<hr />
<div id="poweredBy">
<div class="clear"></div>
<div class="clear"></div>
<div class="clear"></div>
<div class="clear"></div>
<a href=".././" title="AsterixDB" class="builtBy"><img class="builtBy" alt="AsterixDB" src="../images/asterixlogo.png" /></a>
</div>
</div>
</div>
<div id="bodyColumn" class="span10" >
<!--
! Licensed to the Apache Software Foundation (ASF) under one
! or more contributor license agreements. See the NOTICE file
! distributed with this work for additional information
! regarding copyright ownership. The ASF licenses this file
! to you under the Apache License, Version 2.0 (the
! "License"); you may not use this file except in compliance
! with the License. You may obtain a copy of the License at
!
! http://www.apache.org/licenses/LICENSE-2.0
!
! Unless required by applicable law or agreed to in writing,
! software distributed under the License is distributed on an
! "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
! KIND, either express or implied. See the License for the
! specific language governing permissions and limitations
! under the License.
!-->
<h1>Filter-Based LSM Index Acceleration</h1>
<div class="section">
<h2><a name="Table_of_Contents"></a><a name="toc" id="toc">Table of Contents</a></h2>
<ul>
<li><a href="#Motivation">Motivation</a></li>
<li><a href="#FiltersInAsterixDB">Filters in AsterixDB</a></li>
<li><a href="#FiltersAndMergePolicies">Filters and Merge Policies</a></li>
</ul></div>
<div class="section">
<h2><a name="Motivation_.5BBack_to_TOC.5D"></a><a name="Motivation" id="Motivation">Motivation</a> <font size="4"><a href="#toc">[Back to TOC]</a></font></h2>
<p>Traditional relational databases usually employ conventional index structures such as B+ trees due to their low read latency. However, such traditional index structures use in-place writes to perform updates, resulting in costly random writes to disk. Today&#x2019;s emerging applications often involve insert-intensive workloads for which the cost of random writes prohibits efficient data ingestion. Consequently, popular NoSQL systems such as Cassandra, HBase, LevelDB, BigTable, etc. have adopted Log-Structured Merge (LSM) trees as their storage structure. LSM-trees avoid the cost of random writes by batching updates into a component of the index that resides in main memory &#x2013; an <i>in-memory component</i>. When the space occupancy of the in-memory component exceeds a specified threshold, its entries are <i>flushed</i> to disk, forming a new component &#x2013; a <i>disk component</i>. As disk components accumulate on disk, they are periodically merged subject to a <i>merge policy</i> that decides when and what to merge. The benefit of LSM-trees comes at the cost of possibly sacrificing some read efficiency, but previous studies have shown that these inefficiencies can be mostly mitigated.</p>
<p>AsterixDB has also embraced LSM-trees, not just by using them as primary indexes, but also by applying the same LSM-ification technique to all of its secondary index structures. In particular, AsterixDB adopted a generic framework for converting a class of indexes (including conventional B+ trees, R-trees, and inverted indexes) into LSM-based secondary indexes, allowing higher data ingestion rates. In fact, for certain index structures, our results have shown that the LSM-based version of an index can be made to significantly outperform its conventional counterpart for <i>both</i> ingestion and query speed (an example of such an index being the R-tree for spatial data).</p>
<p>Since an LSM-based index naturally partitions data into multiple disk components, it is possible, when answering certain queries, to exploit this partitioning to access only some components and safely filter out the rest, thus reducing query times. For instance, referring to our <a href="primer.html#ADM:_Modeling_Semistructed_Data_in_AsterixDB">TinySocial</a> example, suppose a user always retrieves tweets from the <tt>TweetMessages</tt> dataset based on the <tt>send-time</tt> field (e.g., tweets posted in the last 24 hours). Since there is no secondary index on the <tt>send-time</tt> field, the only available option for AsterixDB would be to scan the whole <tt>TweetMessages</tt> dataset and then apply the predicate as a post-processing step. However, if the disk components of the primary index were tagged with the minimum and maximum timestamp values of the objects they contain, we could use that information to access the primary index directly and prune the components that cannot match the query predicate. We could thus save substantial cost by avoiding a scan of the whole dataset and accessing only the relevant components. We call such tagging information associated with components <i>filters</i>. (Note that even if there were a secondary index on the <tt>send-time</tt> field, using filters could still save substantial cost by avoiding the secondary-index lookup followed by a primary-index probe for every fetched entry.) Moreover, when a query contains multiple predicates (e.g., spatial and temporal predicates), the same filtering technique can be applied to any secondary LSM index (e.g., an LSM R-tree) to obtain similar pruning power.</p></div>
<div class="section">
<h2><a name="Filters_in_AsterixDB_.5BBack_to_TOC.5D"></a><a name="FiltersInAsterixDB" id="FiltersInAsterixDB">Filters in AsterixDB</a> <font size="4"><a href="#toc">[Back to TOC]</a></font></h2>
<p>We have added support for LSM-based filters to all of AsterixDB&#x2019;s index types. To enable the use of filters, the user must specify the filter&#x2019;s key when creating a dataset, as shown below:</p>
<div class="section">
<div class="section">
<h4><a name="Creating_a_Dataset_with_a_Filter"></a>Creating a Dataset with a Filter</h4>
<div>
<div>
<pre class="source"> create dataset Tweets(TweetType) primary key tweetid with filter on send-time;
</pre></div></div>
<p>Filters can be created on any totally ordered datatype (i.e., any field that can be indexed using a B+-tree), such as integers, doubles, floats, UUIDs, datetimes, etc.</p>
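<p>For reference, below is a minimal sketch of a dataverse and datatype that the statement above could be based on. The field names are illustrative (the TinySocial example from the primer defines a richer <tt>TweetMessageType</tt>); the only requirement is that the filter field, here <tt>send-time</tt>, has a totally ordered type such as <tt>datetime</tt>:</p>
<div>
<div>
<pre class="source"> /* Illustrative sketch: a dataverse, a tweet type with a datetime
    field, and a filtered dataset keyed on that field. */
 create dataverse TinySocial if not exists;
 use dataverse TinySocial;

 create type TweetType as closed {
   tweetid: string,
   message-text: string,
   send-time: datetime
 };

 create dataset Tweets(TweetType) primary key tweetid with filter on send-time;
</pre></div></div>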
<p>When a dataset with a filter is created, the name of the filter&#x2019;s key field is persisted in the <tt>Metadata.Dataset</tt> dataset (the metadata dataset that stores the details of each dataset in an AsterixDB instance) so that DML operations against the dataset can recognize the existence of filters and can update or utilize them accordingly. Creating a dataset with a filter in AsterixDB implies that the primary index and all secondary indexes of that dataset will maintain filters on their disk components. Once a filtered dataset is created, the user can use the dataset normally (just like any other dataset); AsterixDB will automatically maintain the filters and will leverage them to answer queries more efficiently whenever possible (i.e., whenever a query has predicates on the filter&#x2019;s key).</p>
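<p>For example, a range query on the filter&#x2019;s key, such as the sketch below (which assumes the illustrative <tt>Tweets</tt> dataset and <tt>datetime</tt>-typed <tt>send-time</tt> field defined above), allows AsterixDB to skip every disk component whose minimum/maximum <tt>send-time</tt> values do not overlap the query interval:</p>
<div>
<div>
<pre class="source"> use dataverse TinySocial;

 /* Retrieve tweets posted during a one-day window; disk components whose
    [min, max] send-time range lies outside this window can be pruned. */
 for $t in dataset Tweets
 where $t.send-time &gt;= datetime("2015-08-01T00:00:00")
   and $t.send-time &lt; datetime("2015-08-02T00:00:00")
 return $t;
</pre></div></div></div></div></div>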
<div class="section">
<h2><a name="Filters_and_Merge_Policies_.5BBack_to_TOC.5D"></a><a name="FiltersAndMergePolicies" id="FiltersAndMergePolicies">Filters and Merge Policies</a> <font size="4"><a href="#toc">[Back to TOC]</a></font></h2>
<p>The AsterixDB default merge policy, the prefix merge policy, relies on component sizes and the number of components to decide which components to merge. This merge policy has proven to provide excellent performance for both ingestion and queries. However, when evaluating our filtering solution with the prefix policy, we observed a behavior that can reduce filter effectiveness. In particular, we noticed that under the prefix merge policy, the disk components of a secondary index tend to be constantly merged into a single component.</p>
<p>This happens because the prefix policy relies on a single size parameter for all of the indexes of a dataset. That parameter is typically chosen based on the sizes of the disk components of the primary index, which tend to be much larger than the disk components of the secondary indexes. As a result, when applied to secondary indexes, the prefix merge policy behaves similarly to the constant merge policy (i.e., relatively poorly) in the sense that the secondary indexes are constantly merged into a single disk component. Consequently, the effectiveness of filters on secondary indexes was greatly reduced under the prefix merge policy, although they remained effective when probing the primary index.</p>
<p>Based on this behavior, we developed an improved version of the prefix policy called the correlated-prefix policy. The basic idea of this policy is that it delegates the decision to merge the disk components of all the indexes in a dataset to the primary index: when the policy decides that the primary index needs to be merged (using the same decision criteria as the prefix policy), it issues successive merge requests to the I/O scheduler on behalf of all the other indexes associated with the same dataset. The end result is that, under the correlated-prefix merge policy, secondary indexes always have the same number of disk components as their primary index. This has improved query performance, since disk components of secondary indexes now have a much better chance of being pruned.</p>
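<p>The correlated-prefix policy can be requested when a dataset is created. The sketch below is illustrative only: the compaction-policy clause, policy name, and parameter names shown here should be verified against the <tt>create dataset</tt> DDL reference for your AsterixDB version.</p>
<div>
<div>
<pre class="source"> /* Illustrative sketch only: request the correlated-prefix policy for a
    filtered dataset. Verify the clause, policy name, and parameter names
    against the DDL reference for your AsterixDB version. */
 create dataset Tweets(TweetType) primary key tweetid
 using compaction policy correlated-prefix
 (("max-mergable-component-size"="1073741824"), ("max-tolerance-component-count"="5"))
 with filter on send-time;
</pre></div></div></div>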
</div>
</div>
</div>
<hr/>
<footer>
<div class="container-fluid">
<div class="row-fluid">
<div class="row-fluid">Apache AsterixDB, AsterixDB, Apache, the Apache
feather logo, and the Apache AsterixDB project logo are either
registered trademarks or trademarks of The Apache Software
Foundation in the United States and other countries.
All other marks mentioned may be trademarks or registered
trademarks of their respective owners.
</div>
</div>
</div>
</footer>
</body>
</html>