<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link href='images/favicon.ico' rel='shortcut icon' type='image/x-icon'>
<!-- The above 3 meta tags *must* come first in the head; any other head content must come *after* these tags -->
<title>CarbonData</title>
<style>
</style>
<!-- Bootstrap -->
<link rel="stylesheet" href="css/bootstrap.min.css">
<link href="css/style.css" rel="stylesheet">
<!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries -->
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/html5shiv/3.7.3/html5shiv.min.js"></script>
<script src="https://oss.maxcdn.scom/respond/1.4.2/respond.min.js"></script>
<![endif]-->
<script src="js/jquery.min.js"></script>
<script src="js/bootstrap.min.js"></script>
<script defer src="https://use.fontawesome.com/releases/v5.0.8/js/all.js"></script>
</head>
<body>
<header>
<nav class="navbar navbar-default navbar-custom cd-navbar-wrapper">
<div class="container">
<div class="navbar-header">
<button aria-controls="navbar" aria-expanded="false" data-target="#navbar" data-toggle="collapse"
class="navbar-toggle collapsed" type="button">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a href="index.html" class="logo">
<img src="images/CarbonDataLogo.png" alt="CarbonData logo" title="CarbocnData logo"/>
</a>
</div>
<div class="navbar-collapse collapse cd_navcontnt" id="navbar">
<ul class="nav navbar-nav navbar-right navlist-custom">
<li><a href="index.html" class="hidden-xs"><i class="fa fa-home" aria-hidden="true"></i> </a>
</li>
<li><a href="index.html" class="hidden-lg hidden-md hidden-sm">Home</a></li>
<li class="dropdown">
<a href="#" class="dropdown-toggle " data-toggle="dropdown" role="button" aria-haspopup="true"
aria-expanded="false"> Download <span class="caret"></span></a>
<ul class="dropdown-menu">
<li>
<a href="https://dist.apache.org/repos/dist/release/carbondata/2.2.0/"
target="_blank">Apache CarbonData 2.2.0</a></li>
<li>
<a href="https://dist.apache.org/repos/dist/release/carbondata/2.1.1/"
target="_blank">Apache CarbonData 2.1.1</a></li>
<li>
<a href="https://dist.apache.org/repos/dist/release/carbondata/2.1.0/"
target="_blank">Apache CarbonData 2.1.0</a></li>
<li>
<a href="https://dist.apache.org/repos/dist/release/carbondata/2.0.1/"
target="_blank">Apache CarbonData 2.0.1</a></li>
<li>
<a href="https://dist.apache.org/repos/dist/release/carbondata/2.0.0/"
target="_blank">Apache CarbonData 2.0.0</a></li>
<li>
<a href="https://dist.apache.org/repos/dist/release/carbondata/1.6.1/"
target="_blank">Apache CarbonData 1.6.1</a></li>
<li>
<a href="https://dist.apache.org/repos/dist/release/carbondata/1.6.0/"
target="_blank">Apache CarbonData 1.6.0</a></li>
<li>
<a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.4/"
target="_blank">Apache CarbonData 1.5.4</a></li>
<li>
<a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.3/"
target="_blank">Apache CarbonData 1.5.3</a></li>
<li>
<a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.2/"
target="_blank">Apache CarbonData 1.5.2</a></li>
<li>
<a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
target="_blank">Apache CarbonData 1.5.1</a></li>
<li>
<a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
target="_blank">Release Archive</a></li>
</ul>
</li>
<li><a href="documentation.html" class="active">Documentation</a></li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true"
aria-expanded="false">Community <span class="caret"></span></a>
<ul class="dropdown-menu">
<li>
<a href="https://github.com/apache/carbondata/blob/master/docs/how-to-contribute-to-apache-carbondata.md"
target="_blank">Contributing to CarbonData</a></li>
<li>
<a href="https://github.com/apache/carbondata/blob/master/docs/release-guide.md"
target="_blank">Release Guide</a></li>
<li>
<a href="https://cwiki.apache.org/confluence/display/CARBONDATA/PMC+and+Committers+member+list"
target="_blank">Project PMC and Committers</a></li>
<li>
<a href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=66850609"
target="_blank">CarbonData Meetups</a></li>
<li><a href="security.html">Apache CarbonData Security</a></li>
<li><a href="https://issues.apache.org/jira/browse/CARBONDATA" target="_blank">Apache
Jira</a></li>
<li><a href="videogallery.html">CarbonData Videos </a></li>
</ul>
</li>
<li class="dropdown">
<a href="http://www.apache.org/" class="apache_link hidden-xs dropdown-toggle"
data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="false">Apache</a>
<ul class="dropdown-menu">
<li><a href="http://www.apache.org/" target="_blank">Apache Homepage</a></li>
<li><a href="http://www.apache.org/licenses/" target="_blank">License</a></li>
<li><a href="http://www.apache.org/foundation/sponsorship.html"
target="_blank">Sponsorship</a></li>
<li><a href="http://www.apache.org/foundation/thanks.html" target="_blank">Thanks</a></li>
</ul>
</li>
<li class="dropdown">
<a href="http://www.apache.org/" class="hidden-lg hidden-md hidden-sm dropdown-toggle"
data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="false">Apache</a>
<ul class="dropdown-menu">
<li><a href="http://www.apache.org/" target="_blank">Apache Homepage</a></li>
<li><a href="http://www.apache.org/licenses/" target="_blank">License</a></li>
<li><a href="http://www.apache.org/foundation/sponsorship.html"
target="_blank">Sponsorship</a></li>
<li><a href="http://www.apache.org/foundation/thanks.html" target="_blank">Thanks</a></li>
</ul>
</li>
<li>
<a href="#" id="search-icon"><i class="fa fa-search" aria-hidden="true"></i></a>
</li>
</ul>
</div><!--/.nav-collapse -->
<div id="search-box">
<form method="get" action="http://www.google.com/search" target="_blank">
<div class="search-block">
<table border="0" cellpadding="0" width="100%">
<tr>
<td style="width:80%">
<input type="text" name="q" size=" 5" maxlength="255" value=""
class="search-input" placeholder="Search...." required/>
</td>
<td style="width:20%">
<input type="submit" value="Search"/></td>
</tr>
<tr>
<td align="left" style="font-size:75%" colspan="2">
<input type="checkbox" name="sitesearch" value="carbondata.apache.org" checked/>
<span style=" position: relative; top: -3px;"> Only search for CarbonData</span>
</td>
</tr>
</table>
</div>
</form>
</div>
</div>
</nav>
</header> <!-- end Header part -->
<div class="fixed-padding"></div> <!-- top padding with fixde header -->
<section><!-- Dashboard nav -->
<div class="container-fluid q">
<div class="col-sm-12 col-md-12 maindashboard">
<div class="verticalnavbar">
<nav class="b-sticky-nav">
<div class="nav-scroller">
<div class="nav__inner">
<a class="b-nav__intro nav__item" href="./introduction.html">introduction</a>
<a class="b-nav__quickstart nav__item" href="./quick-start-guide.html">quick start</a>
<a class="b-nav__uses nav__item" href="./usecases.html">use cases</a>
<div class="nav__item nav__item__with__subs">
<a class="b-nav__docs nav__item nav__sub__anchor" href="./language-manual.html">Language Reference</a>
<a class="nav__item nav__sub__item" href="./ddl-of-carbondata.html">DDL</a>
<a class="nav__item nav__sub__item" href="./dml-of-carbondata.html">DML</a>
<a class="nav__item nav__sub__item" href="./streaming-guide.html">Streaming</a>
<a class="nav__item nav__sub__item" href="./configuration-parameters.html">Configuration</a>
<a class="nav__item nav__sub__item" href="./index-developer-guide.html">Indexes</a>
<a class="nav__item nav__sub__item" href="./supported-data-types-in-carbondata.html">Data Types</a>
</div>
<div class="nav__item nav__item__with__subs">
<a class="b-nav__datamap nav__item nav__sub__anchor" href="./index-management.html">Index Managament</a>
<a class="nav__item nav__sub__item" href="./bloomfilter-index-guide.html">Bloom Filter</a>
<a class="nav__item nav__sub__item" href="./lucene-index-guide.html">Lucene</a>
<a class="nav__item nav__sub__item" href="./secondary-index-guide.html">Secondary Index</a>
<a class="nav__item nav__sub__item" href="../spatial-index-guide.html">Spatial Index</a>
<a class="nav__item nav__sub__item" href="../mv-guide.html">MV</a>
</div>
<div class="nav__item nav__item__with__subs">
<a class="b-nav__api nav__item nav__sub__anchor" href="./sdk-guide.html">API</a>
<a class="nav__item nav__sub__item" href="./sdk-guide.html">Java SDK</a>
<a class="nav__item nav__sub__item" href="./csdk-guide.html">C++ SDK</a>
</div>
<a class="b-nav__perf nav__item" href="./performance-tuning.html">Performance Tuning</a>
<a class="b-nav__s3 nav__item" href="./s3-guide.html">S3 Storage</a>
<a class="b-nav__indexserver nav__item" href="./index-server.html">Index Server</a>
<a class="b-nav__prestodb nav__item" href="./prestodb-guide.html">PrestoDB Integration</a>
<a class="b-nav__prestosql nav__item" href="./prestosql-guide.html">PrestoSQL Integration</a>
<a class="b-nav__flink nav__item" href="./flink-integration-guide.html">Flink Integration</a>
<a class="b-nav__scd nav__item" href="./scd-and-cdc-guide.html">SCD & CDC</a>
<a class="b-nav__faq nav__item" href="./faq.html">FAQ</a>
<a class="b-nav__contri nav__item" href="./how-to-contribute-to-apache-carbondata.html">Contribute</a>
<a class="b-nav__security nav__item" href="./security.html">Security</a>
<a class="b-nav__release nav__item" href="./release-guide.html">Release Guide</a>
</div>
</div>
<div class="navindicator">
<div class="b-nav__intro navindicator__item"></div>
<div class="b-nav__quickstart navindicator__item"></div>
<div class="b-nav__uses navindicator__item"></div>
<div class="b-nav__docs navindicator__item"></div>
<div class="b-nav__datamap navindicator__item"></div>
<div class="b-nav__api navindicator__item"></div>
<div class="b-nav__perf navindicator__item"></div>
<div class="b-nav__s3 navindicator__item"></div>
<div class="b-nav__indexserver navindicator__item"></div>
<div class="b-nav__prestodb navindicator__item"></div>
<div class="b-nav__prestosql navindicator__item"></div>
<div class="b-nav__flink navindicator__item"></div>
<div class="b-nav__scd navindicator__item"></div>
<div class="b-nav__faq navindicator__item"></div>
<div class="b-nav__contri navindicator__item"></div>
<div class="b-nav__security navindicator__item"></div>
</div>
</nav>
</div>
<div class="mdcontent">
<section>
<div style="padding:10px 15px;">
<div id="viewpage" name="viewpage">
<div class="row">
<div class="col-sm-12 col-md-12">
<div>
<h1>
<a id="configuring-carbondata" class="anchor" href="#configuring-carbondata" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Configuring CarbonData</h1>
<p>This guide explains the configurations that can be used to tune CarbonData to achieve better performance. Most of the properties that control the internal settings have reasonable default values. They are listed below along with their default values and an explanation. An example of how these properties are typically supplied follows the list below.</p>
<ul>
<li><a href="#system-configuration">System Configuration</a></li>
<li><a href="#data-loading-configuration">Data Loading Configuration</a></li>
<li><a href="#compaction-configuration">Compaction Configuration</a></li>
<li><a href="#query-configuration">Query Configuration</a></li>
<li><a href="#data-mutation-configuration">Data Mutation Configuration</a></li>
<li><a href="#dynamic-configuration-in-carbondata-using-set-reset">Dynamic Configuration In CarbonData Using SET-RESET</a></li>
</ul>
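<p>Most of these properties are read from the <code>carbon.properties</code> file (the exact conf directory depends on the deployment; placing the file under the Spark conf directory is a common choice, but verify for your setup). The snippet below is only an illustrative sketch: the property names are taken from the tables in this guide, while the values and paths are assumptions to be adapted to your cluster.</p>
<pre><code># carbon.properties -- illustrative values only
# Location where records not conforming to the table schema are isolated
carbon.badRecords.location=/opt/carbondata/badrecords
# Timestamp format expected in the incoming data
carbon.timestamp.format=yyyy-MM-dd HH:mm:ss
# File-based locking on HDFS, useful when several drivers operate on the same tables
carbon.lock.type=HDFSLOCK
</code></pre>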
<h2>
<a id="system-configuration" class="anchor" href="#system-configuration" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>System Configuration</h2>
<p>This section provides the details of all the configurations required for the CarbonData System.</p>
<table>
<thead>
<tr>
<th>Property</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>carbon.ddl.base.hdfs.url</td>
<td>(none)</td>
<td>To simplify and shorten the path to be specified in DDL/DML commands, this property can be used to configure an HDFS relative base path. The path configured in carbon.ddl.base.hdfs.url is appended to the HDFS path configured in fs.defaultFS of core-site.xml. If this path is configured, the user need not pass the complete path while loading data. For example: if the absolute path of the csv file is hdfs://10.18.101.155:54310/data/cnbc/2016/xyz.csv, the path "hdfs://10.18.101.155:54310" will come from the property fs.defaultFS and the user can configure /data/cnbc/ as carbon.ddl.base.hdfs.url. While loading data, the user can then specify the csv path as /2016/xyz.csv.</td>
</tr>
<tr>
<td>carbon.badRecords.location</td>
<td>(none)</td>
<td>CarbonData can detect the records not conforming to defined table schema and isolate them as bad records. This property is used to specify where to store such bad records.</td>
</tr>
<tr>
<td>carbon.streaming.auto.handoff.enabled</td>
<td>true</td>
<td>CarbonData supports storing of streaming data. To have high throughput for streaming, the data is written in row format which is highly optimized for write, but performs poorly for query. When this property is true and the streaming data size reaches <em><strong>carbon.streaming.segment.max.size</strong></em>, CarbonData will automatically convert the data to columnar format and optimize it for faster querying. <strong>NOTE:</strong> It is not recommended to keep the default value which is true.</td>
</tr>
<tr>
<td>carbon.streaming.segment.max.size</td>
<td>1024000000</td>
<td>CarbonData writes streaming data in row format which is optimized for high write throughput. This property defines the maximum size of data to be held in row format, beyond which it will be converted to columnar format in order to support high performance query, provided <em><strong>carbon.streaming.auto.handoff.enabled</strong></em> is true. <strong>NOTE:</strong> Setting a higher value will impact streaming ingestion. The value has to be configured in bytes.</td>
</tr>
<tr>
<td>carbon.segment.lock.files.preserve.hours</td>
<td>48</td>
<td>In order to support parallel data loading onto the same table, CarbonData sequences (locks) operations at the granularity of segments. Operations affecting the segment (like IUD, alter) are blocked from running in parallel. This property value indicates the number of hours the segment lock files will be preserved after data load. These lock files will be deleted with the clean command after the configured number of hours.</td>
</tr>
<tr>
<td>carbon.timestamp.format</td>
<td>yyyy-MM-dd HH:mm:ss</td>
<td>CarbonData can understand data of timestamp type and process it in a special manner. The format of timestamp data in the input may differ from the format CarbonData understands by default. This configuration allows users to specify the format of timestamps in their data.</td>
</tr>
<tr>
<td>carbon.lock.type</td>
<td>LOCALLOCK</td>
<td>This configuration specifies the type of lock to be acquired during concurrent operations on a table. The following lock implementations are available: - LOCALLOCK: Lock is created on the local file system as a file. This lock is useful when only one spark driver (thrift server) runs on a machine and no other CarbonData spark application is launched concurrently. - HDFSLOCK: Lock is created on the HDFS file system as a file. This lock is useful when multiple CarbonData spark applications are launched, no ZooKeeper is running on the cluster, and HDFS supports file based locking.</td>
</tr>
<tr>
<td>carbon.lock.path</td>
<td>TABLEPATH</td>
<td>This configuration specifies the path where lock files have to be created. It is recommended to configure the zookeeper lock type or configure an HDFS lock path (in this property) in case of the S3 file system, as locking is not feasible on S3.</td>
</tr>
<tr>
<td>enable.offheap.sort</td>
<td>true</td>
<td>Whether carbondata will use offheap or onheap memory. By default, the value is true and carbondata will use the property value from <em>carbon.unsafe.working.memory.in.mb</em> or <em>carbon.unsafe.driver.working.memory.in.mb</em> as the amount of memory; if it is false, carbondata will use the minimum value between the configured amount of unsafe memory and the 60% of JVM Heap Memory as the amount of memory.</td>
</tr>
<tr>
<td>carbon.unsafe.working.memory.in.mb</td>
<td>512</td>
<td>CarbonData supports storing data in off-heap memory for certain operations during data loading and query. This helps to avoid the Java GC and thereby improve the overall performance. The minimum value recommended is 512MB. Any value below this is reset to the default value of 512MB. <strong>NOTE:</strong> The formulas below explain how to arrive at the off-heap size required; a worked example is given after this table. Memory required for data loading per executor: (<em>carbon.number.of.cores.while.loading</em>) * (Number of tables to load in parallel) * (<em>offheap.sort.chunk.size.inmb</em> + <em>carbon.blockletgroup.size.in.mb</em> + <em>carbon.blockletgroup.size.in.mb</em>/3.5). Memory required for query per executor: (<em>carbon.blockletgroup.size.in.mb</em> + <em>carbon.blockletgroup.size.in.mb</em> * 3.5) * spark.executor.cores</td>
</tr>
<tr>
<td>carbon.unsafe.driver.working.memory.in.mb</td>
<td>(none)</td>
<td>CarbonData supports storing data in unsafe on-heap memory in driver for certain operations like insert into, query for loading index cache. The Minimum value recommended is 512MB. If this configuration is not set, carbondata will use the value of <code>carbon.unsafe.working.memory.in.mb</code>.</td>
</tr>
<tr>
<td>carbon.update.sync.folder</td>
<td>/tmp/carbondata</td>
<td>CarbonData maintains last modification time entries in modifiedTime.mdt to determine the schema changes and reload only when necessary. This configuration specifies the path where the file needs to be written.</td>
</tr>
<tr>
<td>carbon.invisible.segments.preserve.count</td>
<td>200</td>
<td>CarbonData maintains each data load entry in the tablestatus file. The entries from this file are not deleted for those segments that are compacted or dropped, but are made invisible. If the number of data loads is very high, the size and number of entries in the tablestatus file can become too large, causing unnecessary reading of all data. This configuration specifies the number of segment entries to be maintained after they are compacted or dropped. Beyond this, the entries are moved to a separate history tablestatus file. <strong>NOTE:</strong> The entries in the tablestatus file help to identify the operations performed on a CarbonData table and are also used for checkpointing during various data manipulation operations. This is similar to an AUDIT file maintaining all the operations and their status. Hence the entries are never deleted but moved to a separate history file.</td>
</tr>
<tr>
<td>carbon.lock.retries</td>
<td>3</td>
<td>CarbonData ensures consistency of operations by blocking certain operations from running in parallel. In order to block the operations from running in parallel, a lock is obtained on the table. This configuration specifies the maximum number of retries to obtain the lock for any operation other than load. <strong>NOTE:</strong> Data manipulation operations like compaction, UPDATE, DELETE or LOADING, UPDATE, DELETE are not allowed to run in parallel. However, data loading can happen in parallel with compaction.</td>
</tr>
<tr>
<td>carbon.lock.retry.timeout.sec</td>
<td>5</td>
<td>Specifies the interval between the retries to obtain the lock for any operation other than load. <strong>NOTE:</strong> Refer to <em><strong>carbon.lock.retries</strong></em> for understanding why CarbonData uses locks for operations.</td>
</tr>
<tr>
<td>carbon.fs.custom.file.provider</td>
<td>None</td>
<td>Used to configure a custom CarbonFile implementation (via FileTypeInterface) so that CarbonData can work with a custom FileSystem.</td>
</tr>
<tr>
<td>carbon.timeseries.first.day.of.week</td>
<td>SUNDAY</td>
<td>This parameter configures which day is considered the first day of the week, since the first day of the week differs across regions.</td>
</tr>
<tr>
<td>carbon.enable.tablestatus.backup</td>
<td>false</td>
<td>In a cloud object store scenario, overwriting the table status file is not an atomic operation since it uses the rename API. Thus, it is possible that the table status is corrupted if the process crashes while overwriting the table status file. To protect against file corruption, the user can enable this property.</td>
</tr>
</tbody>
</table>
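<p>As a worked example of the off-heap sizing formulas given for <em>carbon.unsafe.working.memory.in.mb</em> above, assume a hypothetical executor with 4 cores, one table loaded at a time, the default blocklet group size of 64 MB, and an <em>offheap.sort.chunk.size.inmb</em> of 64 MB (an assumed value; check the default for your version). The arithmetic below is a sketch, not a prescription.</p>
<pre><code># Worked sizing sketch (assumed values: 4 executor cores, 1 parallel table load,
# offheap.sort.chunk.size.inmb = 64, carbon.blockletgroup.size.in.mb = 64)
# Loading memory per executor = 2 * 1 * (64 + 64 + 64/3.5)  ~ 293 MB
# Query memory per executor   = (64 + 64 * 3.5) * 4         = 1152 MB
carbon.unsafe.working.memory.in.mb=1280
</code></pre>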
<h2>
<a id="data-loading-configuration" class="anchor" href="#data-loading-configuration" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Data Loading Configuration</h2>
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>carbon.concurrent.lock.retries</td>
<td>100</td>
<td>CarbonData supports concurrent data loading onto the same table. To ensure the loading status is correctly updated into the system, locks are used to sequence the status update step. This configuration specifies the maximum number of retries to obtain the lock for updating the load status. <strong>NOTE:</strong> This value is set high because the more concurrent loads happen, the higher the chance that a lock attempt fails. Adjust this value according to the number of concurrent loads to be supported by the system.</td>
</tr>
<tr>
<td>carbon.concurrent.lock.retry.timeout.sec</td>
<td>1</td>
<td>Specifies the interval between the retries to obtain the lock for concurrent operations. <strong>NOTE:</strong> Refer to <em><strong>carbon.concurrent.lock.retries</strong></em> for understanding why CarbonData uses locks during data loading operations.</td>
</tr>
<tr>
<td>carbon.csv.read.buffersize.byte</td>
<td>1048576</td>
<td>CarbonData uses Hadoop InputFormat to read the csv files. This configuration value is used to pass the buffer size as input for the Hadoop MR job when reading the csv files. This value is configured in bytes. <strong>NOTE:</strong> Refer to the <em><strong>org.apache.hadoop.mapreduce.InputFormat</strong></em> documentation for additional information.</td>
</tr>
<tr>
<td>carbon.loading.prefetch</td>
<td>false</td>
<td>CarbonData uses univocity parser to read csv files. This configuration is used to inform the parser whether it can prefetch the data from csv files to speed up the reading. <strong>NOTE:</strong> Enabling prefetch improves the data loading performance, but needs higher memory to keep more records which are read ahead from disk.</td>
</tr>
<tr>
<td>carbon.skip.empty.line</td>
<td>false</td>
<td>The csv files given to CarbonData for loading can contain empty lines. Based on the business scenario, such empty lines might have to be ignored or treated as NULL values for all columns. This configuration defines that behavior. <strong>NOTE:</strong> In order to consider NULL values for non string columns and continue with the data load, <em><strong>carbon.bad.records.action</strong></em> needs to be set to <strong>FORCE</strong>; otherwise the data load will fail because bad records are encountered.</td>
</tr>
<tr>
<td>carbon.number.of.cores.while.loading</td>
<td>2</td>
<td>Number of cores to be used while loading data. This also determines the number of threads to be used to read the input files (csv) in parallel. <strong>NOTE:</strong> This configured value is used in every data loading step to parallelize the operations. Configuring a higher value can lead to increased early thread pre-emption by the OS and thereby reduce the overall performance.</td>
</tr>
<tr>
<td>enable.unsafe.sort</td>
<td>true</td>
<td>CarbonData supports unsafe operations of Java to avoid GC overhead for certain operations. This configuration enables the use of unsafe functions in CarbonData. <strong>NOTE:</strong> For operations like data loading, which generate many short lived Java objects, Java GC can be a bottleneck. Using unsafe can overcome the GC overhead and improve the overall performance.</td>
</tr>
<tr>
<td>enable.offheap.sort</td>
<td>true</td>
<td>CarbonData supports storing data in off-heap memory for certain operations during data loading and query. This helps to avoid the Java GC and thereby improve the overall performance. This configuration enables using off-heap memory for sorting of data during data loading. <strong>NOTE:</strong> The <em><strong>enable.unsafe.sort</strong></em> configuration needs to be set to true in order to use off-heap memory.</td>
</tr>
<tr>
<td>carbon.load.sort.scope</td>
<td>NO_SORT [If sort columns are not specified while creating table] and LOCAL_SORT [If sort columns are specified]</td>
<td>CarbonData supports various sorting options to balance load and query performance. LOCAL_SORT: All the data given to an executor in a single load is fully sorted and written to carbondata files. Data loading performance is reduced a little as the entire data needs to be sorted in the executor. GLOBAL_SORT: The entire data in the data load is fully sorted and written to carbondata files. Data loading performance would be reduced as the entire data needs to be sorted, but query performance increases significantly due to far fewer false positives, and concurrency is also improved. <strong>NOTE:</strong> This property is taken into account only when SORT COLUMNS are specified explicitly while creating the table; otherwise it is always NO_SORT.</td>
</tr>
<tr>
<td>carbon.global.sort.rdd.storage.level</td>
<td>MEMORY_ONLY</td>
<td>Storage level to persist the dataset of the RDD/dataframe when loading data with 'sort_scope'='global_sort'. If the executors have less memory, set this parameter to 'MEMORY_AND_DISK_SER' or another storage level suited to the environment. <a href="http://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-persistence" rel="nofollow">See detail</a>.</td>
</tr>
<tr>
<td>carbon.load.global.sort.partitions</td>
<td>0</td>
<td>The number of partitions to use when shuffling data for global sort. Default value 0 means to use same number of map tasks as reduce tasks. <strong>NOTE:</strong> In general, it is recommended to have 2-3 tasks per CPU core in your cluster.</td>
</tr>
<tr>
<td>carbon.sort.size</td>
<td>100000</td>
<td>Number of records to hold in memory to sort and write intermediate sort temp files. <strong>NOTE:</strong> Memory required for data loading will increase if you increase this value, as each thread will cache this amount of records. The number of threads is configured by <em>carbon.number.of.cores.while.loading</em>.</td>
</tr>
<tr>
<td>carbon.options.bad.records.logger.enable</td>
<td>false</td>
<td>CarbonData can identify the records that do not conform to the schema and isolate them as bad records. Enabling this configuration will make CarbonData log such bad records. <strong>NOTE:</strong> If the input data contains many bad records, logging them will slow down the overall data loading throughput. The data load operation status depends on the configuration in <em><strong>carbon.bad.records.action</strong></em>.</td>
</tr>
<tr>
<td>carbon.bad.records.action</td>
<td>FAIL</td>
<td>CarbonData in addition to identifying the bad records, can take certain actions on such data. This configuration can have four types of actions for bad records namely FORCE, REDIRECT, IGNORE and FAIL. If set to FORCE then it auto-corrects the data by storing the bad records as NULL. If set to REDIRECT then bad records are written to the raw CSV instead of being loaded. If set to IGNORE then bad records are neither loaded nor written to the raw CSV. If set to FAIL then data loading fails if any bad records are found.</td>
</tr>
<tr>
<td>carbon.options.is.empty.data.bad.record</td>
<td>false</td>
<td>Based on the business scenarios, empty("" or '' or ,,) data can be valid or invalid. This configuration controls how empty data should be treated by CarbonData. If false, then empty ("" or '' or ,,) data will not be considered as bad record and vice versa.</td>
</tr>
<tr>
<td>carbon.options.bad.record.path</td>
<td>(none)</td>
<td>Specifies the HDFS path where bad records are to be stored. By default the value is Null. This path must be configured by the user if <em><strong>carbon.options.bad.records.logger.enable</strong></em> is <strong>true</strong> or <em><strong>carbon.bad.records.action</strong></em> is <strong>REDIRECT</strong>.</td>
</tr>
<tr>
<td>carbon.blockletgroup.size.in.mb</td>
<td>64</td>
<td>Please refer to <a href="./file-structure-of-carbondata.html#carbondata-file-format">file-structure-of-carbondata</a> to understand the storage format of CarbonData. The data are read as a group of blocklets which are called blocklet groups. This parameter specifies the size of each blocklet group. A higher value results in better sequential IO access. The minimum value is 16MB; any value less than 16MB will reset to the default value (64MB). <strong>NOTE:</strong> Configuring a higher value might lead to poor performance as an entire blocklet group will have to be read into memory before processing. For filter queries with limit, it is <strong>not advisable</strong> to have a bigger blocklet size. For aggregation queries which need to return a larger number of rows, a bigger blocklet size is advisable.</td>
</tr>
<tr>
<td>carbon.sort.file.write.buffer.size</td>
<td>16384</td>
<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. This configuration determines the buffer size to be used for reading and writing such files. <strong>NOTE:</strong> This configuration is useful to tune IO and derive optimal performance. Based on the OS and underlying harddisk type, these values can significantly affect the overall performance. It is ideal to tune the buffer size equivalent to the IO buffer size of the OS. Recommended range is between 10240 and 10485760 bytes.</td>
</tr>
<tr>
<td>carbon.sort.intermediate.files.limit</td>
<td>20</td>
<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. Before writing the target carbondata file, the records in these intermediate files need to be merged to reduce the number of intermediate files. This configuration determines the minimum number of intermediate files after which merge sort is applied on them to sort the data. <strong>NOTE:</strong> Intermediate merging happens on a separate thread in the background. The number of threads used is determined by <em><strong>carbon.merge.sort.reader.thread</strong></em>. Configuring a low value will cause more time to be spent in merging these intermediate merged files, which can cause more IO. Configuring a high value would leave idle threads unused for intermediate sort merges. The recommended range is between 2 and 50.</td>
</tr>
<tr>
<td>carbon.merge.sort.reader.thread</td>
<td>3</td>
<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. When the intermediate files reach <em><strong>carbon.sort.intermediate.files.limit</strong></em>, the files will be merged in another thread pool. This value controls the size of that pool. Each thread will read the intermediate files, do a merge sort and finally write the records to another file. <strong>NOTE:</strong> Refer to <em><strong>carbon.sort.intermediate.files.limit</strong></em> for a description of the operation. Configuring a smaller number of threads can cause merging to slow down the overall loading process, whereas configuring a larger number of threads can cause thread contention with threads in other data loading steps. Hence configure a fraction of <em><strong>carbon.number.of.cores.while.loading</strong></em>.</td>
</tr>
<tr>
<td>carbon.merge.sort.prefetch</td>
<td>true</td>
<td>CarbonData writes every <em><strong>carbon.sort.size</strong></em> number of records to intermediate temp files during data loading to ensure memory footprint is within limits. These intermediate temp files will have to be sorted using merge sort before writing into CarbonData format. This configuration enables pre fetching of data from these temp files in order to optimize IO and speed up data loading process.</td>
</tr>
<tr>
<td>carbon.prefetch.buffersize</td>
<td>1000</td>
<td>When the configuration <em><strong>carbon.merge.sort.prefetch</strong></em> is set to true, the number of records that can be prefetched needs to be set. This configuration is used to specify the number of records to be prefetched. <strong>NOTE:</strong> Configuring more records to be prefetched increases the memory footprint as more records will have to be kept in memory.</td>
</tr>
<tr>
<td>carbon.sort.storage.inmemory.size.inmb</td>
<td>512</td>
<td>CarbonData writes every <em><strong>carbon.sort.size</strong></em> number of records to intermediate temp files during data loading to ensure memory footprint is within limits. When <em><strong>enable.unsafe.sort</strong></em> configuration is enabled, instead of using <em><strong>carbon.sort.size</strong></em> which is based on rows count, size occupied in memory is used to determine when to flush data pages to intermediate temp files. This configuration determines the memory to be used for storing data pages in memory. <strong>NOTE:</strong> Configuring a higher value ensures more data is maintained in memory and hence increases data loading performance due to reduced or no IO. Based on the memory availability in the nodes of the cluster, configure the values accordingly.</td>
</tr>
<tr>
<td>carbon.load.sortmemory.spill.percentage</td>
<td>0</td>
<td>During data loading, some data pages are kept in memory up to the memory configured in <em><strong>carbon.sort.storage.inmemory.size.inmb</strong></em>, beyond which they are spilled to disk as intermediate temporary sort files. This configuration determines after what percentage data needs to be spilled to disk. <strong>NOTE:</strong> Without this configuration, when the data pages occupy up to the configured memory, new data pages are dumped to disk while old pages are still maintained in memory.</td>
</tr>
<tr>
<td>carbon.enable.calculate.size</td>
<td>true</td>
<td>
<strong>For Load Operation</strong>: Enabling this property will let carbondata calculate the size of the carbon data file (.carbondata) and the carbon index file (.carbonindex) for each load and update the table status file. <strong>For Describe Formatted</strong>: Enabling this property will let carbondata calculate the total size of the carbon data files and the carbon index files for each table and display it in the describe formatted command. <strong>NOTE:</strong> This is useful to determine the overall size of the carbondata table and also to get an idea of how the table is growing in order to take backup strategy decisions.</td>
</tr>
<tr>
<td>carbon.cutOffTimestamp</td>
<td>(none)</td>
<td>CarbonData has the capability to generate dictionary values for timestamp columns from the data itself, without the need to store the computed dictionary values. This configuration sets the start date for calculating the timestamp. Java counts the number of milliseconds from the start of "1970-01-01 00:00:00". This property is used to customize the start position, for example "2000-01-01 00:00:00". <strong>NOTE:</strong> The date must be in the form <em><strong>carbon.timestamp.format</strong></em>. CarbonData supports storing data for up to 68 years. For example, if the cut-off time is 1970-01-01 05:30:00, then data up to 2038-01-01 05:30:00 will be supported by CarbonData.</td>
</tr>
<tr>
<td>carbon.timegranularity</td>
<td>SECOND</td>
<td>The configuration is used to specify the data granularity level such as DAY, HOUR, MINUTE, or SECOND. This helps to store more than 68 years of data into CarbonData.</td>
</tr>
<tr>
<td>carbon.use.local.dir</td>
<td>true</td>
<td>CarbonData, during data loading, writes files to local temp directories before copying the files to HDFS. This configuration is used to specify whether CarbonData can write locally to the tmp directory of the container or to the YARN application directory.</td>
</tr>
<tr>
<td>carbon.sort.temp.compressor</td>
<td>SNAPPY</td>
<td>CarbonData writes every <em><strong>carbon.sort.size</strong></em> number of records to intermediate temp files during data loading to ensure the memory footprint is within limits. These temporary files can be compressed when written in order to save storage space. This configuration specifies the name of the compressor to be used to compress the intermediate sort temp files during the sort procedure in data loading. The valid values are 'SNAPPY', 'GZIP', 'BZIP2', 'LZ4', 'ZSTD' and empty. By default, empty means that CarbonData will not compress the sort temp files. <strong>NOTE:</strong> A compressor is useful if you encounter a disk bottleneck. Since the data needs to be compressed and decompressed, it involves additional CPU cycles, but this is compensated by the higher IO throughput due to less data being written to or read from the disks.</td>
</tr>
<tr>
<td>carbon.load.skewedDataOptimization.enabled</td>
<td>false</td>
<td>During data loading, CarbonData divides the number of blocks equally so as to ensure all executors process the same number of blocks. This mechanism satisfies most scenarios and ensures maximum parallel processing for optimal data loading performance. In some business scenarios, the size of blocks may vary significantly, and hence some executors would have to do more work if they get blocks containing more data. This configuration enables a size based block allocation strategy for data loading. When loading, carbondata will use a file size based block allocation strategy for task distribution. It will make sure that all the executors process the same size of data. <strong>NOTE:</strong> This configuration is useful if the size of your input data files varies widely, say 1MB to 1GB. For this configuration to work effectively, knowing the data pattern and size is important and necessary.</td>
</tr>
<tr>
<td>enable.data.loading.statistics</td>
<td>false</td>
<td>CarbonData has extensive logging which is useful for debugging issues related to performance or hard to locate issues. This configuration, when set to <em><strong>true</strong></em>, logs additional data loading statistics information to more accurately locate the issues being debugged. <strong>NOTE:</strong> Enabling this will log more debug information to the log files, thereby increasing the log file size significantly in a short span of time. It is advised to configure the log file size and the retention of log files appropriately in the log4j properties. Also, extensive logging increases IO operations and hence the overall data loading performance might be reduced. Therefore it is recommended to enable this configuration only for the duration of debugging.</td>
</tr>
<tr>
<td>carbon.dictionary.chunk.size</td>
<td>10000</td>
<td>CarbonData generates dictionary keys and writes them to a separate dictionary file during data loading. To optimize the IO, this configuration determines the number of dictionary keys to be persisted to the dictionary file at a time. <strong>NOTE:</strong> Writing to the file also serves as a commit point for the dictionary generated. Keeping more values in memory increases the potential data loss during a system or application failure. It is advised to alter this configuration judiciously.</td>
</tr>
<tr>
<td>carbon.load.directWriteToStorePath.enabled</td>
<td>false</td>
<td>During data load, all the carbondata files are written to local disk and finally copied to the target store location in HDFS/S3. Enabling this parameter will make carbondata files be written directly onto the target HDFS/S3 location, bypassing the local disk. <strong>NOTE:</strong> Writing directly to HDFS/S3 saves local disk IO (once for writing the files and again for copying to HDFS/S3), thereby improving the performance. But the drawback is that when data loading fails or the application crashes, unwanted carbondata files will remain in the target HDFS/S3 location until they are cleared during the next data load or by running the <em>CLEAN FILES</em> DDL command.</td>
</tr>
<tr>
<td>carbon.options.serialization.null.format</td>
<td>\N</td>
<td>Based on the business scenarios, some columns might need to be loaded with null values. As null value cannot be written in csv files, some special characters might be adopted to specify null values. This configuration can be used to specify the null values format in the data being loaded.</td>
</tr>
<tr>
<td>carbon.column.compressor</td>
<td>snappy</td>
<td>CarbonData will compress the column values using the compressor specified by this configuration. Currently CarbonData supports 'snappy', 'zstd' and 'gzip' compressors.</td>
</tr>
<tr>
<td>carbon.minmax.allowed.byte.count</td>
<td>200</td>
<td>CarbonData will write the min and max values for string/varchar type columns using the byte count specified by this configuration. The maximum value is 1000 bytes (500 characters) and the minimum value is 10 bytes (5 characters). <strong>NOTE:</strong> This property is useful for reducing the store size, thereby improving the query performance, but can lead to query degradation if the value is not configured properly.</td>
</tr>
<tr>
<td>carbon.merge.index.failure.throw.exception</td>
<td>true</td>
<td>It is used to configure whether or not merge index failure should result in data load failure also.</td>
</tr>
<tr>
<td>carbon.binary.decoder</td>
<td>None</td>
<td>Supports a configurable decoder for loading binary data. Two decoders are supported: base64 and hex.</td>
</tr>
<tr>
<td>carbon.local.dictionary.size.threshold.inmb</td>
<td>4</td>
<td>Size-based threshold for the local dictionary in MB; the maximum allowed size is 16 MB.</td>
</tr>
<tr>
<td>carbon.enable.bad.record.handling.for.insert</td>
<td>false</td>
<td>Whether to enable the bad record handling and converter step during "insert into" operations. It is disabled by default.</td>
</tr>
</tbody>
</table>
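<p>The following <code>carbon.properties</code> fragment pulls together a few of the data loading knobs described in the table above. It is a hedged sketch: the property names and their meanings come from this table, but the chosen values are assumptions for a mid-sized executor and should be tuned for your workload.</p>
<pre><code># carbon.properties -- illustrative data loading tuning
# Parallelism used in every data loading step (also the number of CSV reader threads)
carbon.number.of.cores.while.loading=4
# Records held in memory before writing intermediate sort temp files
carbon.sort.size=100000
# Compress intermediate sort temp files when disk IO is the bottleneck
carbon.sort.temp.compressor=SNAPPY
# Log bad records and redirect them instead of failing the load
carbon.options.bad.records.logger.enable=true
carbon.bad.records.action=REDIRECT
carbon.options.bad.record.path=/opt/carbondata/badrecords
</code></pre>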
<h2>
<a id="compaction-configuration" class="anchor" href="#compaction-configuration" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Compaction Configuration</h2>
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>carbon.number.of.cores.while.compacting</td>
<td>2</td>
<td>Number of cores to be used while compacting data. This also determines the number of threads to be used to read carbondata files in parallel.</td>
</tr>
<tr>
<td>carbon.compaction.level.threshold</td>
<td>4, 3</td>
<td>Each CarbonData load will create one segment; if every load is small in size, it will generate many small files over a period of time, impacting query performance. This configuration is for minor compaction, which decides how many segments are to be merged. The configuration is of the form (x,y). Compaction will be triggered for every x segments and form a single level 1 compacted segment. When the number of compacted level 1 segments reaches y, compaction will be triggered again to merge them to form a single level 2 segment. For example: if it is set to 2, 3 then minor compaction will be triggered for every 2 segments. 3 is the number of level 1 compacted segments which are further compacted into a new segment. <strong>NOTE:</strong> When <em><strong>carbon.enable.auto.load.merge</strong></em> is <strong>true</strong>, configuring higher values causes the overall data loading time to increase, as compaction will be triggered after data loading is complete but the status is not returned till compaction is complete. But compacting a larger number of segments can increase query performance. Hence optimal values need to be configured based on the business scenario. Valid values are between 0 and 100.</td>
</tr>
<tr>
<td>carbon.major.compaction.size</td>
<td>1024</td>
<td>To improve query performance, all the segments can be merged and compacted into a single segment up to the configured size. This major compaction size can be configured using this parameter. Segments whose sum of sizes is below this threshold will be merged. This value is expressed in MB.</td>
</tr>
<tr>
<td>carbon.horizontal.compaction.enable</td>
<td>true</td>
<td>CarbonData supports DELETE/UPDATE functionality by creating delta data files for existing carbondata files. These delta files grow as more DELETE/UPDATE operations are performed. Compaction of these delta files is termed horizontal compaction. This configuration is used to turn ON/OFF horizontal compaction. After every DELETE and UPDATE statement, horizontal compaction may occur in case the delta (DELETE/UPDATE) files exceed the specified threshold. <strong>NOTE:</strong> Having many delta files reduces query performance, as the scan has to happen on all these files before the final state of data can be decided. Hence it is advisable to keep horizontal compaction enabled and to configure reasonable values for <em><strong>carbon.horizontal.DELETE.compaction.threshold</strong></em>.
</td>
</tr>
<tr>
<td>carbon.horizontal.delete.compaction.threshold</td>
<td>1</td>
<td>This configuration specifies the threshold limit on number of DELETE delta files within a block of a segment. In case the number of delta files goes beyond the threshold, the DELETE delta files for the particular block of the segment becomes eligible for horizontal compaction and are compacted into single DELETE delta file. Values range between 1 to 10000.</td>
</tr>
<tr>
<td>carbon.update.segment.parallelism</td>
<td>1</td>
<td>CarbonData processes UPDATE operations by grouping records belonging to a segment into a single executor task. When the amount of data to be updated is large, this behavior causes problems like restarting of the executor due to low memory and data-spill related errors. This property specifies the parallelism for each segment during update. <strong>NOTE:</strong> It is recommended to set this value to a multiple of the number of executors for balance. Values range between 1 and 1000.</td>
</tr>
<tr>
<td>carbon.numberof.preserve.segments</td>
<td>0</td>
<td>If the user wants to preserve some number of segments from being compacted, then this configuration can be set. Example: carbon.numberof.preserve.segments = 2 means the 2 latest segments will always be excluded from compaction. No segments are preserved by default. <strong>NOTE:</strong> This configuration is useful when there is a chance that the input data is wrong due to environment issues. Preserving some of the latest segments from being compacted can help to easily delete the wrongly loaded segments. Once compacted, it becomes more difficult to determine the exact data to be deleted (except when data is incrementing according to time).</td>
</tr>
<tr>
<td>carbon.allowed.compaction.days</td>
<td>0</td>
<td>This configuration is used to control the number of recent segments that need to be compacted, ignoring the older ones. This configuration is in days. For example: if the configuration is 2, then only the segments which were loaded within the past 2 days will get merged. Segments which were loaded earlier than 2 days will not be merged. This configuration is disabled by default. <strong>NOTE:</strong> This configuration is useful when a bulk of history data is loaded into carbondata and queries on this data are less frequent. In such cases, including these segments in compaction will increase resource consumption and the overall compaction time.</td>
</tr>
<tr>
<td>carbon.enable.auto.load.merge</td>
<td>false</td>
<td>Compaction can be automatically triggered once a data load completes. This ensures that the segments are merged in time and thus query times do not increase as segments accumulate. This configuration enables compaction to be done along with data loading. <strong>NOTE:</strong> Compaction will be triggered once the data load completes, but the data load status waits until the compaction is completed. Hence it might look like data loading time has increased, but that's not the case. Moreover, failure of compaction will not affect the data loading status. If the data load has completed successfully, the status is updated and the segments are committed. However, a failure during data loading will not trigger compaction and the error is returned immediately.</td>
</tr>
<tr>
<td>carbon.enable.page.level.reader.in.compaction</td>
<td>false</td>
<td>Enabling the page level reader for compaction reduces the memory usage while compacting a larger number of segments. It allows reading page by page instead of reading the whole blocklet into memory. <strong>NOTE:</strong> Please refer to <a href="./file-structure-of-carbondata.html#carbondata-file-format">file-structure-of-carbondata</a> to understand the storage format of CarbonData and the concept of pages.</td>
</tr>
<tr>
<td>carbon.concurrent.compaction</td>
<td>true</td>
<td>Compaction of different tables can be executed concurrently. This configuration determines whether to compact all qualifying tables in parallel or not. <strong>NOTE:</strong> Compacting concurrently is a resource demanding operation and needs more resources, thereby affecting query performance as well. This configuration is <strong>deprecated</strong> and might be removed in future releases.</td>
</tr>
<tr>
<td>carbon.compaction.prefetch.enable</td>
<td>false</td>
<td>The compaction operation is similar to Query + data load, wherein data from qualifying segments is queried and data loading is performed to generate a new single segment. This configuration determines whether to prefetch data from segments and feed it for data loading. <strong>NOTE:</strong> This configuration is disabled by default as it needs extra resources for querying extra data. Based on the memory availability on the cluster, the user can enable it to improve compaction performance.</td>
</tr>
<tr>
<td>carbon.merge.index.in.segment</td>
<td>true</td>
<td>Each CarbonData file has a companion CarbonIndex file which maintains the metadata about the data. These CarbonIndex files are read and loaded into the driver and are used subsequently for pruning of data during queries. These CarbonIndex files are very small in size (a few KB) and numerous. Reading many small files from HDFS is not efficient and leads to slow IO performance. Hence the CarbonIndex files belonging to a segment can be combined into a single file and read once, thereby increasing the IO throughput. This configuration enables merging all the CarbonIndex files into a single MergeIndex file upon data loading completion. <strong>NOTE:</strong> Reading a single big file is more efficient in HDFS and the IO throughput is very high. Due to this, the time needed to load the index files into memory when a query is received for the first time on that table is significantly reduced, thereby significantly reducing the delay in serving the first query.</td>
</tr>
<tr>
<td>carbon.enable.range.compaction</td>
<td>true</td>
<td>Configures whether range-based compaction is to be used for a RANGE_COLUMN. If true, the data remains organized in ranges after compaction.</td>
</tr>
<tr>
<td>carbon.si.segment.merge</td>
<td>false</td>
<td>Setting this to true degrades the LOAD performance. The number of small files can increase for SI segments (this can happen because the number of columns will be less and we store the position id and reference columns); the user can either set this to true, which will merge the data files for upcoming loads, or run the SI refresh command which does this job for all segments. (REFRESH INDEX &lt;index_table&gt;)</td>
</tr>
</tbody>
</table>
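<p>Likewise, a minimal compaction tuning sketch based on the parameters in the table above; the values shown are assumptions rather than recommendations and should be validated against your load pattern.</p>
<pre><code># carbon.properties -- illustrative compaction tuning
# Trigger minor compaction automatically after each load completes
carbon.enable.auto.load.merge=true
# Merge every 4 segments into a level 1 segment, and every 3 level 1 segments into level 2
carbon.compaction.level.threshold=4,3
# Size threshold (in MB) for segments considered by major compaction
carbon.major.compaction.size=1024
# Always keep the 2 most recent segments out of compaction
carbon.numberof.preserve.segments=2
</code></pre>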
<h2>
<a id="query-configuration" class="anchor" href="#query-configuration" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Query Configuration</h2>
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>carbon.max.driver.lru.cache.size</td>
<td>-1</td>
<td>Maximum memory <strong>(in MB)</strong> up to which the driver process can cache the data (BTree and dictionary values). Beyond this, the least recently used data will be removed from the cache before loading a new set of values. The default value of -1 means there is no memory limit for caching. Only integer values greater than 0 are accepted. <strong>NOTE:</strong> The minimum number of entries that need to be removed from the cache in order to load the new set of data is determined and unloaded; i.e., for example, if 3 cache entries qualify for pre-emption, out of these, the entries that free up more cache memory are removed prior to others. Please refer to the <a href="./faq.html#how-to-check-lru-cache-memory-footprint">FAQs</a> for checking the LRU cache memory footprint.</td>
</tr>
<tr>
<td>carbon.max.executor.lru.cache.size</td>
<td>-1</td>
<td>Maximum memory <strong>(in MB)</strong> upto which the executor process can cache the data (BTree and reverse dictionary values). Default value of -1 means there is no memory limit for caching. Only integer values greater than 0 are accepted. <strong>NOTE:</strong> If this parameter is not configured, then the value of <em><strong>carbon.max.driver.lru.cache.size</strong></em> will be used.</td>
</tr>
<tr>
<td>max.query.execution.time</td>
<td>60</td>
<td>Maximum time allowed for one query to be executed. The value is in minutes.</td>
</tr>
<tr>
<td>carbon.enableMinMax</td>
<td>true</td>
<td>CarbonData maintains metadata which enables pruning unnecessary files from being scanned as per the query conditions. To achieve pruning, the min and max values of each column are maintained. Based on the filter condition in the query, certain data can be skipped from scanning by matching the filter value against the min/max values of the column(s) present in that carbondata file. This pruning enhances query performance significantly.</td>
</tr>
<tr>
<td>carbon.dynamical.location.scheduler.timeout</td>
<td>5</td>
<td>CarbonData has its own scheduling algorithm to suggest to Spark how many tasks need to be launched and how much work each task needs to do in a Spark cluster for any query on CarbonData. To determine the number of tasks that can be scheduled, knowing the count of active executors is necessary. When dynamic allocation is enabled on a YARN based spark cluster, executor processes are shut down if no request is received for a particular amount of time. The executors are brought up when a request is received again. This configuration specifies the maximum time (unit in seconds) the carbon scheduler can wait for an executor to become active. The minimum value is 5 sec and the maximum value is 15 sec. <strong>NOTE:</strong> Waiting for a longer time leads to slow query response times. Moreover, it might be possible that YARN is not able to start the executors and waiting is not beneficial.</td>
</tr>
<tr>
<td>carbon.scheduler.min.registered.resources.ratio</td>
<td>0.8</td>
<td>Specifies the minimum resource (executor) ratio needed for starting the block distribution. The default value is 0.8, which indicates that 80% of the requested resource is allocated for starting block distribution. The minimum value is 0.1 and the maximum value is 1.0.</td>
</tr>
<tr>
<td>carbon.detail.batch.size</td>
<td>100</td>
<td>The buffer size to store records returned from the block scan. In limit scenarios this parameter is very important. For example, if your query limit is 1000 but this value is set to 3000, then 3000 records are fetched from the scan while spark will only take 1000 rows, so the remaining 2000 are useless. In one finance test case, after setting it to 100, the performance in the limit 1000 scenario increased about 2 times compared to setting this value to 12000.<br><br> <strong>NOTE:</strong> The minimum batch size allowed is 100 and the maximum batch size allowed by this property is 1000.</td>
</tr>
<tr>
<td>carbon.enable.vector.reader</td>
<td>true</td>
<td>Spark added vector processing to optimize CPU cache misses and thereby increase query performance. This configuration enables fetching data as a columnar batch of size 4*1024 rows instead of fetching data row by row and providing it to spark, so that select query performance improves.</td>
</tr>
<tr>
<td>carbon.task.distribution</td>
<td>block</td>
<td>CarbonData has its own scheduling algorithm to suggest to Spark how many tasks need to be launched and how much work each task needs to do in a Spark cluster for any query on CarbonData. Each of these task distribution suggestions has its own advantages and disadvantages. Based on the customer use case, an appropriate task distribution can be configured. <strong>block</strong>: Setting this value will launch one task per block. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. <strong>custom</strong>: Setting this value will group the blocks and distribute them uniformly to the available resources in the cluster. This enhances the query performance but is not suggested in case of concurrent queries and queries having big shuffling scenarios. <strong>blocklet</strong>: Setting this value will launch one task per blocklet. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. <strong>merge_small_files</strong>: Setting this value will merge all the small carbondata files up to a bigger size configured by <em><strong>spark.sql.files.maxPartitionBytes</strong></em> (128 MB is the default value; it is configurable) during querying. The small carbondata files are combined into a map task to reduce the number of read tasks. This enhances the performance.</td>
</tr>
<tr>
<td>carbon.custom.block.distribution</td>
<td>false</td>
<td>CarbonData has its own scheduling algorithm that suggests to Spark how many tasks need to be launched and how much work each task needs to do for any query on CarbonData in a Spark cluster. When this configuration is true, CarbonData distributes the available blocks to be scanned among the available number of cores. For example, if there are 10 blocks to be scanned and only 3 tasks can run (only 3 executor cores are available in the cluster), CarbonData combines the blocks as 4, 3, 3 and gives them to the 3 tasks. <strong>NOTE:</strong> When this configuration is false, each block/blocklet is given to a task as per the <em><strong>carbon.task.distribution</strong></em> configuration.</td>
</tr>
<tr>
<td>enable.query.statistics</td>
<td>false</td>
<td>CarbonData has extensive logging which is useful for debugging performance problems or hard-to-locate issues. Setting this configuration to <em><strong>true</strong></em> logs additional query statistics that help locate the issue being debugged more accurately. <strong>NOTE:</strong> Enabling this logs more debug information, increasing the log file size significantly in a short span of time. It is advised to configure the log file size and retention parameters in the log4j properties appropriately. Extensive logging also adds IO overhead, so overall query performance might be reduced. It is therefore recommended to enable this configuration only for the duration of debugging.</td>
</tr>
<tr>
<td>enable.unsafe.in.query.processing</td>
<td>false</td>
<td>CarbonData supports Java unsafe operations to avoid GC overhead for certain operations. This configuration enables the use of unsafe functions in CarbonData while scanning the data during query.</td>
</tr>
<tr>
<td>carbon.max.driver.threads.for.block.pruning</td>
<td>4</td>
<td>Number of threads used for driver-side block pruning when there are more than 100k carbon files. This configuration can be used to set the number of threads between 1 and 4.</td>
</tr>
<tr>
<td>carbon.heap.memory.pooling.threshold.bytes</td>
<td>1048576</td>
<td>CarbonData supports Java unsafe operations to avoid GC overhead for certain operations. With unsafe, memory can be allocated on the Java heap or off heap. This configuration controls the allocation mechanism on the Java heap: heap allocations of a size greater than or equal to this value go through the pooling mechanism. If the value is set to -1, pooling is not used. The default value is 1048576 (1 MB, the same as Spark). The value is specified in bytes.</td>
</tr>
<tr>
<td>carbon.push.rowfilters.for.vector</td>
<td>false</td>
<td>When enabled, complete row filters are handled by carbon in the vector flow. When disabled, carbon does only page-level pruning and Spark does the row-level filtering for vectors; carbon also applies scan optimizations that avoid multiple data copies when this parameter is set to false. There is no change in the flow for non-vector based queries.</td>
</tr>
<tr>
<td>carbon.query.prefetch.enable</td>
<td>true</td>
<td>By default this property is true, so queries prefetch the next blocklet asynchronously in another thread while the current blocklet is processed in the main thread. This helps reduce CPU idle time. Setting this property to false disables the prefetch feature for queries.</td>
</tr>
<tr>
<td>carbon.query.stage.input.enable</td>
<td>false</td>
<td>Stage input files are data files written by external applications (such as Flink) that have not yet been loaded into the carbon table. Enabling this configuration makes queries include these files, so queries see the latest data. However, since these files are not indexed, queries may be slower because a full scan is required for them.</td>
</tr>
<tr>
<td>carbon.driver.pruning.multi.thread.enable.files.count</td>
<td>100000</td>
<td>Enables multi-threaded driver pruning when the total number of segment files for a query exceeds the configured value.</td>
</tr>
<tr>
<td>carbon.load.all.segment.indexes.to.cache</td>
<td>true</td>
<td>Setting this configuration to false prunes and loads only the matched segment indexes into the cache, using segment metadata such as column ids and their min/max values, which decreases driver memory usage.</td>
</tr>
<tr>
<td>carbon.secondary.index.creation.threads</td>
<td>1</td>
<td>Specifies the number of threads used to process segments concurrently during secondary index creation. This property helps in fine-tuning the system when there are many segments in a table. The valid range is 1 to 50.</td>
</tr>
<tr>
<td>carbon.si.lookup.partialstring</td>
<td>true</td>
<td>When true, secondary index lookup covers 'starts with', 'ends with', and 'contains' filters. When false, only 'starts with' filters use the secondary index.</td>
</tr>
</tbody>
</table>
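<p>As an illustrative sketch (not prescriptive defaults), the query-tuning properties above are set in the <em><strong>carbon.properties</strong></em> file like any other CarbonData property; the values below are sample values and should be tuned for the specific cluster and workload:</p>
<pre><code># Sample query-tuning entries in carbon.properties (illustrative values only)
carbon.detail.batch.size=100
carbon.enable.vector.reader=true
carbon.task.distribution=block
carbon.max.driver.threads.for.block.pruning=4
carbon.query.prefetch.enable=true
</code></pre>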
<h2>
<a id="data-mutation-configuration" class="anchor" href="#data-mutation-configuration" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Data Mutation Configuration</h2>
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>carbon.update.persist.enable</td>
<td>true</td>
<td>Configuration to enable persisting the RDD/dataframe dataset used during the UPDATE operation. Enabling this reduces the execution time of UPDATE operations.</td>
</tr>
<tr>
<td>carbon.update.storage.level</td>
<td>MEMORY_AND_DISK</td>
<td>Storage level used to persist the RDD/dataframe dataset. Applicable when <em><strong>carbon.update.persist.enable</strong></em> is <strong>true</strong>. If the executors have less memory, set this parameter to 'MEMORY_AND_DISK_SER' or another storage level suited to the environment. <a href="http://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-persistence" rel="nofollow">See detail</a>.</td>
</tr>
<tr>
<td>carbon.update.check.unique.value</td>
<td>true</td>
<td>By default this property is true, so UPDATE validates the key-value mapping. This validation may slightly degrade the performance of update queries. If the user knows that the key-value mapping is correct, this validation can be disabled by setting the property to false for better update performance.</td>
</tr>
</tbody>
</table>
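<p>For example, a minimal sketch of the data mutation settings above in <em><strong>carbon.properties</strong></em>, assuming executors with limited memory (values are illustrative, not recommendations):</p>
<pre><code># Illustrative data mutation entries in carbon.properties
carbon.update.persist.enable=true
carbon.update.storage.level=MEMORY_AND_DISK_SER
carbon.update.check.unique.value=true
</code></pre>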
<h2>
<a id="dynamic-configuration-in-carbondata-using-set-reset" class="anchor" href="#dynamic-configuration-in-carbondata-using-set-reset" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Dynamic Configuration In CarbonData Using SET-RESET</h2>
<p><strong>SET/RESET</strong> commands are used to add, update, display, or reset the carbondata properties dynamically without restarting the driver.</p>
<p><strong>Syntax</strong></p>
<ul>
<li>
<strong>Add or Update :</strong> This command adds or updates the value of parameter_name.</li>
</ul>
<pre><code>SET parameter_name=parameter_value
</code></pre>
<ul>
<li>Display Property Value: This command displays the value of the specified parameter_name.</li>
</ul>
<pre><code>SET parameter_name
</code></pre>
<ul>
<li>Display Session Parameters: This command displays all the supported session parameters.</li>
</ul>
<pre><code>SET
</code></pre>
<ul>
<li>Display Session Parameters along with usage details: This command displays all the supported session parameters along with their usage details.</li>
</ul>
<pre><code>SET -v
</code></pre>
<ul>
<li>Reset: This command clears all the session parameters.</li>
</ul>
<pre><code>RESET
</code></pre>
<p><strong>Parameter Description:</strong></p>
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>parameter_name</td>
<td>Name of the property whose value needs to be dynamically added, updated, or displayed.</td>
</tr>
<tr>
<td>parameter_value</td>
<td>New value of the parameter_name to be set.</td>
</tr>
</tbody>
</table>
<p align="center">Dynamically Configurable Properties of CarbonData</p>
<table>
<thead>
<tr>
<th>Properties</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>carbon.options.bad.records.logger.enable</td>
<td>To enable or disable the bad record logger. CarbonData can identify records that do not conform to the schema and isolate them as bad records. Enabling this configuration makes CarbonData log such bad records. <strong>NOTE:</strong> If the input data contains many bad records, logging them slows down the overall data loading throughput. The data load operation status depends on the configuration in <em><strong>carbon.bad.records.action</strong></em>.</td>
</tr>
<tr>
<td>carbon.options.bad.records.action</td>
<td>This property has four types of bad record actions: FORCE, REDIRECT, IGNORE and FAIL. If set to FORCE then it auto-corrects the data by storing the bad records as NULL. If set to REDIRECT then bad records are written to the raw CSV instead of being loaded. If set to IGNORE then bad records are neither loaded nor written to the raw CSV. If set to FAIL then data loading fails if any bad records are found.</td>
</tr>
<tr>
<td>carbon.options.is.empty.data.bad.record</td>
<td>If false, empty values ("" or '' or ,,) are not treated as bad records; if true, they are.</td>
</tr>
<tr>
<td>carbon.options.bad.record.path</td>
<td>Specifies the HDFS path where bad records need to be stored.</td>
</tr>
<tr>
<td>carbon.custom.block.distribution</td>
<td>Specifies whether to use the Spark or Carbon block distribution feature. <strong>NOTE:</strong> Refer to <a href="#query-configuration">Query Configuration</a>#carbon.custom.block.distribution for more details on CarbonData scheduler.</td>
</tr>
<tr>
<td>enable.unsafe.sort</td>
<td>Specifies whether to use unsafe sort during data loading. Unsafe sort reduces the garbage collection during data load operation, resulting in better performance.</td>
</tr>
<tr>
<td>carbon.options.date.format</td>
<td>Specifies the date format of the date columns in the data being loaded.</td>
</tr>
<tr>
<td>carbon.options.timestamp.format</td>
<td>Specifies the timestamp format of the timestamp columns in the data being loaded.</td>
</tr>
<tr>
<td>carbon.options.sort.scope</td>
<td>Specifies the sort scope for the current data load. This sort parameter is at the table level. <strong>NOTE:</strong> Refer to <a href="#data-loading-configuration">Data Loading Configuration</a>#carbon.sort.scope for detailed information.</td>
</tr>
<tr>
<td>carbon.table.load.sort.scope.&lt;db_name&gt;.&lt;table_name&gt;</td>
<td>Overrides the SORT_SCOPE provided in CREATE TABLE.</td>
</tr>
<tr>
<td>carbon.options.global.sort.partitions</td>
<td>Specifies the number of partitions to be used during global sort.</td>
</tr>
<tr>
<td>carbon.options.serialization.null.format</td>
<td>Default Null value representation in the data being loaded. <strong>NOTE:</strong> Refer to <a href="#data-loading-configuration">Data Loading Configuration</a>#carbon.options.serialization.null.format for detailed information.</td>
</tr>
<tr>
<td>carbon.number.of.cores.while.loading</td>
<td>Specifies number of cores to be used while loading data. <strong>NOTE:</strong> Refer to <a href="#data-loading-configuration">Data Loading Configuration</a>#carbon.number.of.cores.while.loading for detailed information.</td>
</tr>
<tr>
<td>carbon.number.of.cores.while.compacting</td>
<td>Specifies number of cores to be used while compacting data. <strong>NOTE:</strong> Refer to <a href="#data-loading-configuration">Data Loading Configuration</a>#carbon.number.of.cores.while.compacting for detailed information.</td>
</tr>
<tr>
<td>enable.offheap.sort</td>
<td>To enable off-heap memory usage. <strong>NOTE:</strong> Refer to <a href="#data-loading-configuration">Data Loading Configuration</a>#enable.offheap.sort for detailed information.</td>
</tr>
<tr>
<td>carbon.blockletgroup.size.in.mb</td>
<td>Specifies the size of each blocklet group. <strong>NOTE:</strong> Refer to <a href="#data-loading-configuration">Data Loading Configuration</a>#carbon.blockletgroup.size.in.mb for detailed information.</td>
</tr>
<tr>
<td>carbon.enable.auto.load.merge</td>
<td>To enable compaction along with data loading. <strong>NOTE:</strong> Refer to <a href="#compaction-configuration">Compaction Configuration</a>#carbon.enable.auto.load.merge for detailed information.</td>
</tr>
<tr>
<td>carbon.major.compaction.size</td>
<td>To configure major compaction size. <strong>NOTE:</strong> Refer to <a href="#compaction-configuration">Compaction Configuration</a>#carbon.major.compaction.size for detailed information.</td>
</tr>
<tr>
<td>carbon.compaction.level.threshold</td>
<td>To configure compaction threshold. <strong>NOTE:</strong> Refer to <a href="#compaction-configuration">Compaction Configuration</a>#carbon.compaction.level.threshold for detailed information.</td>
</tr>
<tr>
<td>carbon.enable.vector.reader</td>
<td>To enable fetching data as columnar batch of size 4*1024 rows instead of fetching a row at a time. <strong>NOTE:</strong> Refer to <a href="#query-configuration">Query Configuration</a>#carbon.enable.vector.reader for detailed information.</td>
</tr>
<tr>
<td>enable.unsafe.in.query.processing</td>
<td>To enable use of unsafe functions while scanning the data during query. <strong>NOTE:</strong> Refer to <a href="#query-configuration">Query Configuration</a>#enable.unsafe.in.query.processing for detailed information.</td>
</tr>
<tr>
<td>carbon.push.rowfilters.for.vector</td>
<td>To enable complete row filters handling by carbon in case of vector. <strong>NOTE:</strong> Refer to <a href="#query-configuration">Query Configuration</a>#carbon.push.rowfilters.for.vector for detailed information.</td>
</tr>
<tr>
<td>carbon.query.stage.input.enable</td>
<td>To make queries include staged input files. <strong>NOTE:</strong> Refer to <a href="#query-configuration">Query Configuration</a>#carbon.query.stage.input.enable for detailed information.</td>
</tr>
<tr>
<td>carbon.input.segments.&lt;db_name&gt;.&lt;table_name&gt;</td>
<td>To specify the segment ids to query from the table. Segment ids are separated by commas.</td>
</tr>
<tr>
<td>carbon.index.visible.&lt;db_name&gt;.&lt;table_name&gt;.&lt;index_name&gt;</td>
<td>To specify that queries on <em><strong>db_name.table_name</strong></em> should not use the index <em><strong>index_name</strong></em>.</td>
</tr>
<tr>
<td>carbon.load.indexes.parallel.&lt;db_name&gt;.&lt;table_name&gt;</td>
<td>To enable parallel index loading for a table. When db_name.table_name is not specified, i.e., when <em><strong>carbon.load.indexes.parallel.</strong></em> is set, it applies to all tables in the session.</td>
</tr>
<tr>
<td>carbon.enable.index.server</td>
<td>To use index server for caching and pruning. This property can be used for a session or for a particular table with <em><strong>carbon.enable.index.server.&lt;db_name&gt;.&lt;table_name&gt;</strong></em>.</td>
</tr>
</tbody>
</table>
<p><strong>Examples:</strong></p>
<ul>
<li>Add or Update:</li>
</ul>
<pre><code>SET enable.unsafe.sort =true
</code></pre>
<ul>
<li>Display Property Value:</li>
</ul>
<pre><code>SET enable.unsafe.sort
</code></pre>
<ul>
<li>Reset:</li>
</ul>
<pre><code>RESET
</code></pre>
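<ul>
<li>Table-scoped property (here <em>default.sales</em> is a placeholder database.table name used only for illustration; substitute your own database and table):</li>
</ul>
<pre><code>SET carbon.input.segments.default.sales=1,3,5
</code></pre>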
<p><strong>System Response:</strong></p>
<ul>
<li>
<p>Success will be recorded in the driver log.</p>
</li>
<li>
<p>Failure will be displayed in the UI.</p>
</li>
</ul>
<script>
$(function() {
// Show selected style on nav item
$('.b-nav__docs').addClass('selected');
// Display docs subnav items
if (!$('.b-nav__docs').parent().hasClass('nav__item__with__subs--expanded')) {
$('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
}
});
</script></div>
</div>
</div>
</div>
<div class="doc-footer">
<a href="#top" class="scroll-top">Top</a>
</div>
</div>
</section>
</div>
</div>
</div>
</section><!-- End systemblock part -->
<script src="js/custom.js"></script>
</body>
</html>