<!DOCTYPE html><html><head><meta charset="utf-8"><title>Untitled Document.md</title><style>
p img{ max-width: 100% }
</style></head><body id="preview">
<!--
    Licensed to the Apache Software Foundation (ASF) under one
    or more contributor license agreements. See the NOTICE file
    distributed with this work for additional information
    regarding copyright ownership. The ASF licenses this file
    to you under the Apache License, Version 2.0 (the
    "License"); you may not use this file except in compliance
    with the License. You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing,
    software distributed under the License is distributed on an
    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    KIND, either express or implied. See the License for the
    specific language governing permissions and limitations
    under the License.
-->
<p><img src="../docs/images/format/CarbonData_logo.png?raw=true" alt="CarbonData_Logo"></p>
<p>This tutorial provides a detailed overview of:</p>
<ul>
<li>CarbonData</li>
<li>Working and File Format</li>
<li>Features</li>
<li>Supported Data Types</li>
<li>Compatibility</li>
<li>Packaging and Interfaces</li>
</ul>
<h2><a id="Introduction_30"></a>Introduction</h2>
<p>CarbonData is a fully indexed, columnar, Hadoop-native data store for processing heavy analytical workloads and detailed queries on big data. CarbonData enables faster interactive queries by using advanced columnar storage, indexing, compression, and encoding techniques to improve computing efficiency, which in turn helps speed up queries by an order of magnitude over petabytes of data.</p>
<p>In customer benchmarks, CarbonData has proven able to manage petabytes of data running on extraordinarily low-cost hardware and to answer queries around 10 times faster than current open source solutions (column-oriented SQL-on-Hadoop data stores).</p>
<p>Some of the salient features of CarbonData are:</p>
<ul>
<li>Low latency for various types of data access patterns such as sequential, random, and OLAP.</li>
<li>Allows fast queries on fast data.</li>
<li>Ensures space efficiency.</li>
<li>A general format available across the Hadoop ecosystem.</li>
</ul>
<h2><a id="CarbonData_File_Structure_42"></a>CarbonData File Structure</h2>
<p>A CarbonData file contains groups of data called blocklets, along with all required information such as the schema, offsets, and indices, stored in a file footer, co-located in HDFS.</p>
<p>The file footer can be read once to build the indices in memory, which can then be utilized to optimize scans and processing for all subsequent queries.</p>
<p>Each blocklet in the file is further divided into chunks of data called Data Chunks. Each Data Chunk is organized in either columnar or row format and stores the data of either a single column or a set of columns. All blocklets in one file contain the same number and type of Data Chunks.</p>
<p><img src="../docs/images/format/carbon_data_file_structure_new.png?raw=true" alt="Carbon File Structure"></p>
<p>Each Data Chunk contains multiple groups of data called Pages. There are three types of pages:</p>
<ul>
<li>Data Page: Contains the encoded data of a column/group of columns.</li>
<li>Row ID Page (optional): Contains the row id mappings used when the Data Page is stored as an inverted index.</li>
<li>RLE Page (optional): Contains additional metadata used when the Data Page is RLE coded.</li>
</ul>
<p><img src="../docs/images/format/carbon_data_format_new.png?raw=true" alt="Carbon File Format"></p>
<h2><a id="Features_59"></a>Features</h2>
<p>The CarbonData file format is a columnar store in HDFS. It has many of the features a modern columnar format has, such as being splittable and supporting compression schemes and complex data types. In addition, CarbonData has the following unique features:</p>
<ul>
<li>Stores data along with index: this can significantly accelerate query performance and reduce I/O scans and CPU usage when the query contains filters. The CarbonData index consists of multiple levels of indices; a processing framework can leverage this index to reduce the number of tasks it needs to schedule and process, and it can also skip scan at a finer-grained unit (called a blocklet) during task-side scanning instead of scanning the whole file (see the sketch after this list).</li>
<li>Operable encoded data: by supporting efficient compression and global encoding schemes, CarbonData can query compressed/encoded data directly; the data is converted only just before the results are returned to the user, i.e. it is "late materialized".</li>
<li>Column group: allows multiple columns to form a column group that is stored in row format. This reduces the row reconstruction cost at query time.</li>
<li>Support for various use cases with one single data format: for example interactive OLAP-style queries, sequential access (big scans), and random access (narrow scans).</li>
</ul>
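<p>As an illustration of the first point, consider a filter query like the one below (a hypothetical table and columns, using the Spark SQL interface shown later in this document). When the query contains such a filter, CarbonData can use its index to skip blocklets that cannot contain matching rows instead of scanning the whole file.</p>
<pre><code>// Hypothetical table &quot;sales&quot; with a filter on column &quot;country&quot;.
// With a filter like this, the index lets the task-side scan skip blocklets
// that cannot contain matching rows, reducing I/O and CPU work.
val filtered = sqlContext.sql(
  &quot;select country, sum(amount) from sales where country = 'FR' group by country&quot;)
filtered.show()
</code></pre>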
<h2><a id="Data_Types_68"></a>Data Types</h2>
<p>The following types are supported:</p>
<ul>
<li>
<p>Numeric Types</p>
<ul>
<li>SMALLINT</li>
<li>INT/INTEGER</li>
<li>BIGINT</li>
<li>DOUBLE</li>
<li>DECIMAL</li>
</ul>
</li>
<li>
<p>Date/Time Types</p>
<ul>
<li>TIMESTAMP</li>
</ul>
</li>
<li>
<p>String Types</p>
<ul>
<li>STRING</li>
</ul>
</li>
<li>
<p>Complex Types</p>
<ul>
<li>arrays: ARRAY&lt;data_type&gt;</li>
<li>structs: STRUCT&lt;col_name : data_type [COMMENT col_comment], …&gt;</li>
</ul>
</li>
</ul>
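<p>As a quick illustration, the sketch below defines a Spark DataFrame schema that uses only the types listed above. It assumes the Spark data source API shown later in this document; the column names are made up for the example.</p>
<pre><code>import org.apache.spark.sql.types._

// Illustrative schema built only from CarbonData-supported types.
val addressType = StructType(Seq(
  StructField(&quot;city&quot;, StringType),
  StructField(&quot;zip&quot;, IntegerType)))

val schema = StructType(Seq(
  StructField(&quot;id&quot;, IntegerType),              // INT
  StructField(&quot;flag&quot;, ShortType),              // SMALLINT
  StructField(&quot;clicks&quot;, LongType),             // BIGINT
  StructField(&quot;score&quot;, DoubleType),            // DOUBLE
  StructField(&quot;price&quot;, DecimalType(10, 2)),    // DECIMAL
  StructField(&quot;event_time&quot;, TimestampType),    // TIMESTAMP
  StructField(&quot;name&quot;, StringType),             // STRING
  StructField(&quot;tags&quot;, ArrayType(StringType)),  // ARRAY of STRING
  StructField(&quot;address&quot;, addressType)))        // STRUCT with city and zip
</code></pre>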
<h2><a id="Compatibility_89"></a>Compatibility</h2>
<h2><a id="Packaging_and_Interfaces_92"></a>Packaging and Interfaces</h2>
<h3><a id="Packaging_94"></a>Packaging</h3>
<p>Carbon provides the following JAR packages:</p>
<p><img src="https://cloud.githubusercontent.com/assets/6500698/14255195/831c6e90-fac5-11e5-87ab-3b16d84918fb.png" alt="carbon modules2"></p>
<ul>
<li>
<p><strong>carbon-store.jar or carbondata-assembly.jar:</strong> This is the main JAR for the carbon project; its target users are both end users and developers.<br>
- For MapReduce application users, this jar provides an API to read and write carbon files through CarbonInput/OutputFormat in the carbon-hadoop module.<br>
- For developers, this jar can be used to integrate carbon with processing engines like Spark and Hive, by leveraging the API in the carbon-processing module.</p>
</li>
<li>
<p><strong>carbon-spark.jar (currently part of the assembly jar):</strong> Provides support for Spark users, who can manipulate carbon data files by using the native Spark DataFrame/SQL interface. Apart from this, in order to leverage carbon’s built-in lifecycle management functions, higher-level concepts like the Managed Carbon Table, Database, and corresponding DDL are introduced.</p>
</li>
<li>
<p><strong>carbon-hive.jar (not yet provided):</strong> Similar to carbon-spark, this jar provides integration between carbon and Hive.</p>
</li>
</ul>
<h3><a id="Interfaces_107"></a>Interfaces</h3>
<h4><a id="API_109"></a>API</h4>
<p>Carbon can be used in the following scenarios:</p>
<ol>
<li>
<p>For MapReduce application users<br>
This user API is provided by carbon-hadoop. In this scenario, users can process carbon files in their MapReduce applications by choosing CarbonInput/OutputFormat, and they are responsible for using it correctly. Currently only CarbonInputFormat is provided; an OutputFormat will be provided soon. A minimal sketch follows.</p>
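<p>The following Scala sketch shows how a Hadoop job could be wired up to read carbon files through CarbonInputFormat. The package name and type parameter of CarbonInputFormat are assumptions here and may differ in the actual carbon-hadoop module; the mapper/reducer logic is the user's own.</p>
<pre><code>import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat

// Assumed import path; the real package in carbon-hadoop may differ.
import org.apache.carbondata.hadoop.CarbonInputFormat

val conf = new Configuration()
val job = Job.getInstance(conf, &quot;carbon-mr-read&quot;)

// Tell the job to read its input splits through CarbonInputFormat,
// then point it at a directory containing carbon files.
job.setInputFormatClass(classOf[CarbonInputFormat[_]])
FileInputFormat.addInputPath(job, new Path(&quot;/path/to/carbon/files&quot;))

// Mapper/Reducer classes and output settings are omitted; the user's
// own map/reduce logic consumes the rows produced by CarbonInputFormat.
</code></pre>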
</li>
<li>
<p>For Spark users<br>
This user API is provided by Spark itself. There are two levels of APIs:</p>
<ul>
<li>
<p><strong>Carbon File</strong></p>
<p>Similar to Parquet, JSON, or other data sources in Spark, carbon can be used with the data source API. For example (please refer to DataFrameAPIExample for more detail):</p>
<pre><code>import org.apache.spark.sql.SaveMode

// User can create a DataFrame from any data source or transformation.
val df = ...

// Write data
// User can write a DataFrame to a carbon file
df.write
  .format(&quot;carbondata&quot;)
  .option(&quot;tableName&quot;, &quot;carbontable&quot;)
  .mode(SaveMode.Overwrite)
  .save()

// Read carbon data back through the data source API
val carbonDf = carbonContext.read
  .format(&quot;carbondata&quot;)
  .option(&quot;tableName&quot;, &quot;carbontable&quot;)
  .load(&quot;/path&quot;)

// User can then use the DataFrame for analysis
carbonDf.count
SVMWithSGD.train(carbonDf, numIterations)

// User can also register the DataFrame with a table name, and use SQL for analysis
carbonDf.registerTempTable(&quot;t1&quot;)  // register a temporary table in the SparkSQL catalog
carbonDf.registerHiveTable(&quot;t2&quot;)  // or, use an implicit function to register to the Hive metastore
sqlContext.sql(&quot;select count(*) from t1&quot;).show
</code></pre>
</li>
<li>
<p><strong>Managed Carbon Table</strong></p>
<p>Carbon has built-in support for high-level concepts like Table and Database, and supports full data lifecycle management. Instead of dealing with just files, users can use carbon-specific DDL to manipulate data at the Table and Database level. Please refer to <a href="https://github.com/HuaweiBigData/carbondata/wiki/Language-Manual:-DDL">DDL</a> and <a href="https://github.com/HuaweiBigData/carbondata/wiki/Language-Manual:-DML">DML</a>.</p>
<pre><code>// Use SQL to manage table and query data
create database db1;
use database db1;
show databases;
create table tbl1 using org.apache.carbondata.spark;
load data into table tbl1 path 'some_files';
select count(*) from tbl1;
</code></pre>
</li>
</ul>
</li>
<li>
<p>For developers who want to integrate carbon into processing engines like Spark, Hive, or Flink, use the APIs provided by carbon-hadoop and carbon-processing:</p>
<ul>
<li>
<p><strong>Query</strong>: integrate carbon-hadoop with an engine-specific API, such as the Spark data source API.</p>
</li>
<li>
<p><strong>Data life cycle management</strong>: carbon provides utility functions in carbon-processing to manage the data life cycle, such as data loading, compaction, retention, and schema evolution. Developers can implement DDLs of their choice and leverage these utility functions to perform data life cycle management.</p>
</li>
</ul>
</li>
</ol>
</body></html>