This tutorial introduces all commands and data operations on CarbonData.
This command can be used to create a CarbonData table by specifying the list of fields along with the table properties. You can also specify the location where the table needs to be stored.
CREATE TABLE [IF NOT EXISTS] [db_name.]table_name[(col_name data_type , ...)] STORED AS carbondata [TBLPROPERTIES (property_name=property_value, ...)] [LOCATION 'path']
NOTE: CarbonData also supports “STORED AS carbondata” and “USING carbondata”. Find example code at CarbonSessionExample in the CarbonData repo.
The following are the guidelines for TBLPROPERTIES; CarbonData's additional table options can be set via carbon.properties.
Dictionary Encoding Configuration
Dictionary encoding is turned off for all columns by default from version 1.3 onwards. You can use this command to include or exclude columns for dictionary encoding. Suggested use cases: do dictionary encoding for low cardinality columns; it might help to improve data compression ratio and performance.
TBLPROPERTIES ('DICTIONARY_INCLUDE'='column1, column2')
NOTE: Dictionary Include/Exclude for complex child columns is not supported.
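For instance, a minimal sketch (table and column names are hypothetical) that enables dictionary encoding for a low cardinality city column:

```
CREATE TABLE IF NOT EXISTS sales (
  orderId INT,
  storeCity STRING,
  amount INT)
STORED BY 'carbondata'
TBLPROPERTIES ('DICTIONARY_INCLUDE'='storeCity')
```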
Inverted Index Configuration
By default the inverted index is enabled; it might help to improve compression ratio and query speed, especially for low cardinality columns. Suggested use cases: for high cardinality columns, you can disable the inverted index to improve data loading performance.
TBLPROPERTIES ('NO_INVERTED_INDEX'='column1, column3')
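For example, a sketch (hypothetical names) that disables the inverted index on a high cardinality identifier column to speed up loading:

```
CREATE TABLE IF NOT EXISTS events (
  eventId STRING,
  eventType STRING,
  eventTime TIMESTAMP)
STORED BY 'carbondata'
TBLPROPERTIES ('NO_INVERTED_INDEX'='eventId')
```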
Sort Columns Configuration
This property is for users to specify which columns belong to the MDK(Multi-Dimensions-Key) index.
TBLPROPERTIES ('SORT_COLUMNS'='column1, column3') OR TBLPROPERTIES ('SORT_COLUMNS'='')
NOTE: Sort_Columns for Complex datatype columns is not supported.
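As an illustration (names are hypothetical), placing the most frequently filtered columns at the front of SORT_COLUMNS puts them first in the MDK index:

```
CREATE TABLE IF NOT EXISTS orders (
  orderId INT,
  customerName STRING,
  orderDate TIMESTAMP,
  amount INT)
STORED BY 'carbondata'
TBLPROPERTIES ('SORT_COLUMNS'='customerName,orderDate')
```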
Sort Scope Configuration
This property is for users to specify the scope of the sort during data load. The supported sort scopes are NO_SORT, BATCH_SORT, LOCAL_SORT (the default), and GLOBAL_SORT.
Example:
CREATE TABLE IF NOT EXISTS productSchema.productSalesTable ( productNumber INT, productName STRING, storeCity STRING, storeProvince STRING, productCategory STRING, productBatch STRING, saleQuantity INT, revenue INT) STORED BY 'carbondata' TBLPROPERTIES ('SORT_COLUMNS'='productName,storeCity', 'SORT_SCOPE'='NO_SORT')
NOTE: CarbonData also supports “using carbondata”. Find example code at SparkSessionExample in the CarbonData repo.
Table Block Size Configuration
This property sets the block size of this table; the default value is 1024 MB, and the supported range is 1 MB to 2048 MB.
TBLPROPERTIES ('TABLE_BLOCKSIZE'='512')
NOTE: Both 512 and 512M are accepted.
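For instance, a minimal sketch (hypothetical table) setting a 512 MB block size at create time:

```
CREATE TABLE IF NOT EXISTS clickstream (
  userId INT,
  url STRING,
  visitCount INT)
STORED BY 'carbondata'
TBLPROPERTIES ('TABLE_BLOCKSIZE'='512')
```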
Table Compaction Configuration
These properties are table-level compaction configurations. If not specified, the system-level configurations in carbon.properties will be used. The following are the 5 configurations:
TBLPROPERTIES ('MAJOR_COMPACTION_SIZE'='2048', 'AUTO_LOAD_MERGE'='true', 'COMPACTION_LEVEL_THRESHOLD'='5,6', 'COMPACTION_PRESERVE_SEGMENTS'='10', 'ALLOWED_COMPACTION_DAYS'='5')
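Shown in context, a sketch (hypothetical table) that sets all five compaction properties at create time:

```
CREATE TABLE IF NOT EXISTS telemetry (
  deviceId STRING,
  metric STRING,
  value INT)
STORED BY 'carbondata'
TBLPROPERTIES ('MAJOR_COMPACTION_SIZE'='2048',
               'AUTO_LOAD_MERGE'='true',
               'COMPACTION_LEVEL_THRESHOLD'='5,6',
               'COMPACTION_PRESERVE_SEGMENTS'='10',
               'ALLOWED_COMPACTION_DAYS'='5')
```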
Streaming
CarbonData supports streaming ingestion for real-time data. You can create a 'streaming' table using the following table properties.
TBLPROPERTIES ('streaming'='true')
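For example, a minimal sketch (hypothetical table) creating a streaming table:

```
CREATE TABLE IF NOT EXISTS sensor_stream (
  sensorId STRING,
  reading DOUBLE,
  eventTime TIMESTAMP)
STORED BY 'carbondata'
TBLPROPERTIES ('streaming'='true')
```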
Local Dictionary Configuration
Columns for which a dictionary is not generated need more storage space and in turn more IO. Also, since more data has to be read during query, query performance suffers. Generating a dictionary per blocklet for such columns helps in saving storage space and improves query performance, as CarbonData is optimized for handling dictionary encoded columns more effectively. Generating a dictionary internally per blocklet is termed as local dictionary. Please refer to File Structure of CarbonData for understanding the file structure of CarbonData and the meaning of terms like blocklet.
Local Dictionary helps in:
NOTE:
Following Data Types are Supported for Local Dictionary:
Following Data Types are not Supported for Local Dictionary:
In case of multi-level complex dataType columns, primitive string/varchar/char columns are considered for local dictionary generation.
Local dictionary will have to be enabled explicitly during create table or by enabling the system property ‘carbon.local.dictionary.enable’. By default, Local Dictionary will be disabled for the carbondata table.
Local Dictionary can be configured using the following properties during create table command:
Properties | Default value | Description |
---|---|---|
LOCAL_DICTIONARY_ENABLE | false | Whether to enable local dictionary generation. NOTE: If this property is defined, it will override the value configured at system level by ‘carbon.local.dictionary.enable’ |
LOCAL_DICTIONARY_THRESHOLD | 10000 | The maximum cardinality of a column up to which carbondata can try to generate a local dictionary (maximum - 100000) |
LOCAL_DICTIONARY_INCLUDE | string/varchar/char columns | Columns for which Local Dictionary has to be generated. NOTE: Those string/varchar/char columns which are added into the DICTIONARY_INCLUDE option will not be considered for local dictionary generation. |
LOCAL_DICTIONARY_EXCLUDE | none | Columns for which Local Dictionary need not be generated. |
Fallback behavior:
NOTE: When fallback is triggered, the data loading performance will decrease as encoded data will be discarded and the actual data is written to the temporary sort files.
Points to be noted:
Reduce Block size:
The number of blocks generated is smaller when Local Dictionary is used, since the compression ratio is higher. This may reduce the number of tasks launched during query, degrading query performance if the number of pruned blocks is less than the number of parallel tasks that can run. So it is recommended to configure a smaller block size, which in turn generates a larger number of blocks.
All the page-level data for a blocklet needs to be maintained in memory until all the pages encoded for the local dictionary are processed, in order to handle fallback. Hence the memory required for a local dictionary based table is higher, and this memory increase is proportional to the number of columns.
CREATE TABLE carbontable( column1 string, column2 string, column3 LONG ) STORED BY 'carbondata' TBLPROPERTIES('LOCAL_DICTIONARY_ENABLE'='true','LOCAL_DICTIONARY_THRESHOLD'='1000', 'LOCAL_DICTIONARY_INCLUDE'='column1','LOCAL_DICTIONARY_EXCLUDE'='column2')
NOTE:
Caching Min/Max Value for Required Columns
By default, CarbonData caches min and max values of all the columns in the schema. As the load increases, the memory required to hold the min and max values increases considerably. This feature enables you to configure min and max values only for the required columns, resulting in optimized memory usage.
Following are the valid values for COLUMN_META_CACHE:
COLUMN_META_CACHE=''
COLUMN_META_CACHE='col1'
COLUMN_META_CACHE='col1,col2,col3,…'
Columns to be cached can be specified either while creating the table or after the table has been created. During the create table operation, specify the columns to be cached in table properties.
Syntax:
CREATE TABLE [dbName].tableName (col1 String, col2 String, col3 int,…) STORED BY 'carbondata' TBLPROPERTIES ('COLUMN_META_CACHE'='col1,col2,…')
Example:
CREATE TABLE employee (name String, city String, id int) STORED BY 'carbondata' TBLPROPERTIES ('COLUMN_META_CACHE'='name')
After creation of the table, or on already created tables, use the ALTER TABLE command to configure the columns to be cached.
Syntax:
ALTER TABLE [dbName].tableName SET TBLPROPERTIES ('COLUMN_META_CACHE'='col1,col2,…')
Example:
ALTER TABLE employee SET TBLPROPERTIES ('COLUMN_META_CACHE'='city')
Caching at Block or Blocklet Level
This feature allows you to maintain the cache at Block level, resulting in optimized memory usage. The memory consumption is high if Blocklet-level caching is maintained, as a Block can have multiple Blocklets.
Following are the valid values for CACHE_LEVEL:
Configuration for caching in driver at Block level (default value).
CACHE_LEVEL='BLOCK'
Configuration for caching in driver at Blocklet level.
CACHE_LEVEL='BLOCKLET'
Cache level can be specified either while creating the table or after the table has been created. During the create table operation, specify the cache level in table properties.
Syntax:
CREATE TABLE [dbName].tableName (col1 String, col2 String, col3 int,…) STORED BY 'carbondata' TBLPROPERTIES ('CACHE_LEVEL'='Blocklet')
Example:
CREATE TABLE employee (name String, city String, id int) STORED BY 'carbondata' TBLPROPERTIES ('CACHE_LEVEL'='Blocklet')
After creation of the table, or on already created tables, use the ALTER TABLE command to configure the cache level.
Syntax:
ALTER TABLE [dbName].tableName SET TBLPROPERTIES ('CACHE_LEVEL'='Blocklet')
Example:
ALTER TABLE employee SET TBLPROPERTIES ('CACHE_LEVEL'='Blocklet')
- **Support Flat folder same as Hive/Parquet**

  This feature allows all carbondata and index files to be kept directly under the table path. Currently all carbondata/carbonindex files are written under the tablepath/Fact/Part0/Segment_NUM folder, which is not the same as the hive/parquet folder structure. With this feature enabled, all files are written directly under the table path and no segment folder structure is maintained. This is useful for interoperability between execution engines; plugging in other execution engines like hive or presto becomes easier. The following table property enables this feature; the default value is false.

  ```
  'flat_folder'='true'
  ```

  Example:

  ```
  CREATE TABLE employee (name String, city String, id int) STORED BY 'carbondata' TBLPROPERTIES ('flat_folder'='true')
  ```

- **String longer than 32000 characters**

  In common scenarios, the length of a string is less than 32000 characters, so carbondata stores the length of the content using a Short to reduce memory and space consumption. To support strings longer than 32000 characters, carbondata introduces a table property called `LONG_STRING_COLUMNS`. For these columns, carbondata internally stores the length of the content using an Integer. You can specify the columns as 'long string columns' using the below tblProperties:

  ```
  // specify col1, col2 as long string columns
  TBLPROPERTIES ('LONG_STRING_COLUMNS'='col1,col2')
  ```

  Besides, you can also use this property through DataFrame:

  ```
  df.format("carbondata")
    .option("tableName", "carbonTable")
    .option("long_string_columns", "col1, col2")
    .save()
  ```

  If you are using Carbon-SDK, you can specify the data type of a long string column as `varchar`. You can refer to SDKwriterTestCase for an example.

  **NOTE:** The LONG_STRING_COLUMNS can only be string/char/varchar columns and cannot be dictionary_include/sort_columns/complex columns.
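For example, a minimal sketch (table and column names are hypothetical) marking a page-content column as a long string column at create time:

```
CREATE TABLE IF NOT EXISTS web_pages (
  url STRING,
  title STRING,
  content STRING)
STORED BY 'carbondata'
TBLPROPERTIES ('LONG_STRING_COLUMNS'='content')
```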
This function allows the user to create a Carbon table from any Parquet/Hive/Carbon table. It is beneficial when the user wants to create a Carbon table from any other Parquet/Hive table and use the Carbon query engine to query it, achieving better query results in cases where Carbon is faster than other file formats. This feature can also be used for backing up data.
CREATE TABLE [IF NOT EXISTS] [db_name.]table_name STORED BY 'carbondata' [TBLPROPERTIES (key1=val1, key2=val2, ...)] AS select_statement;
carbon.sql("CREATE TABLE source_table( id INT, name STRING, city STRING, age INT) STORED AS parquet") carbon.sql("INSERT INTO source_table SELECT 1,'bob','shenzhen',27") carbon.sql("INSERT INTO source_table SELECT 2,'david','shenzhen',31") carbon.sql("CREATE TABLE target_table STORED BY 'carbondata' AS SELECT city,avg(age) FROM source_table GROUP BY city") carbon.sql("SELECT * FROM target_table").show // results: // +--------+--------+ // | city|avg(age)| // +--------+--------+ // |shenzhen| 29.0| // +--------+--------+
This function allows the user to create an external table by specifying its location.
CREATE EXTERNAL TABLE [IF NOT EXISTS] [db_name.]table_name STORED BY 'carbondata' LOCATION '$FilesPath'
The managed table data location provided will have both FACT and Metadata folders. This data can be generated by creating a normal carbon table and using its path as $FilesPath in the above syntax.
Example:
sql("CREATE TABLE origin(key INT, value STRING) STORED BY 'carbondata'") sql("INSERT INTO origin select 100,'spark'") sql("INSERT INTO origin select 200,'hive'") // creates a table in $storeLocation/origin sql(s""" |CREATE EXTERNAL TABLE source |STORED BY 'carbondata' |LOCATION '$storeLocation/origin' """.stripMargin) checkAnswer(sql("SELECT count(*) from source"), sql("SELECT count(*) from origin"))
A Non-Transactional table data location will have only carbondata and carbonindex files; there will not be a metadata folder (table status and schema). Our SDK module currently supports writing data in this format.
Example:
sql( s"""CREATE EXTERNAL TABLE sdkOutputTable STORED BY 'carbondata' LOCATION |'$writerPath' """.stripMargin)
Here the writer path will have carbondata and index files. This can be the SDK output. Refer to the SDK Writer Guide.
Note:
This function creates a new database. By default the database is created in the Carbon store location, but you can also specify a custom location.
CREATE DATABASE [IF NOT EXISTS] database_name [LOCATION path];
CREATE DATABASE carbon LOCATION "hdfs://name_cluster/dir1/carbonstore";
This command can be used to list all the tables in the current database or all the tables of a specific database.
SHOW TABLES [IN db_Name]
Example:
SHOW TABLES OR SHOW TABLES IN defaultdb
The following section introduces the commands to modify the physical or logical state of the existing table(s).
RENAME TABLE
This command is used to rename the existing table.
ALTER TABLE [db_name.]table_name RENAME TO new_table_name
Examples:
ALTER TABLE carbon RENAME TO carbonTable OR ALTER TABLE test_db.carbon RENAME TO test_db.carbonTable
ADD COLUMNS
This command is used to add a new column to the existing table.
ALTER TABLE [db_name.]table_name ADD COLUMNS (col_name data_type,...) TBLPROPERTIES('DICTIONARY_INCLUDE'='col_name,...', 'DEFAULT.VALUE.COLUMN_NAME'='default_value')
Examples:
ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING)
ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING) TBLPROPERTIES('DICTIONARY_INCLUDE'='a1')
ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING) TBLPROPERTIES('DEFAULT.VALUE.a1'='10')
NOTE: Add Complex datatype columns is not supported.
Users can specify which columns to include and exclude for local dictionary generation after adding new columns. These will be appended with the already existing local dictionary include and exclude columns of main table respectively.
ALTER TABLE carbon ADD COLUMNS (a1 STRING, b1 STRING) TBLPROPERTIES('LOCAL_DICTIONARY_INCLUDE'='a1','LOCAL_DICTIONARY_EXCLUDE'='b1')
DROP COLUMNS
This command is used to delete the existing column(s) in a table.
ALTER TABLE [db_name.]table_name DROP COLUMNS (col_name, ...)
Examples:
ALTER TABLE carbon DROP COLUMNS (b1) OR ALTER TABLE test_db.carbon DROP COLUMNS (b1)
ALTER TABLE carbon DROP COLUMNS (c1,d1)
NOTE: Drop Complex child column is not supported.
CHANGE DATA TYPE
This command is used to change the data type from INT to BIGINT or decimal precision from lower to higher. Change of decimal data type from lower precision to higher precision will only be supported for cases where there is no data loss.
ALTER TABLE [db_name.]table_name CHANGE col_name col_name changed_column_type
Valid Scenarios
Example 1: Changing data type of column a1 from INT to BIGINT.
ALTER TABLE test_db.carbon CHANGE a1 a1 BIGINT
Example 2: Changing decimal precision of column a1 from 10 to 18.
ALTER TABLE test_db.carbon CHANGE a1 a1 DECIMAL(18,2)
MERGE INDEX
This command is used to merge all the CarbonData index files (.carbonindex) inside a segment to a single CarbonData index merge file (.carbonindexmerge). This enhances the first query performance.
ALTER TABLE [db_name.]table_name COMPACT 'SEGMENT_INDEX'

Examples:

ALTER TABLE test_db.carbon COMPACT 'SEGMENT_INDEX'

NOTE: Merge index is not supported on streaming tables.
SET and UNSET for Local Dictionary Properties
When the SET command is used, all the newly set properties will override the corresponding old properties, if they exist.
Example to SET Local Dictionary Properties:
ALTER TABLE tablename SET TBLPROPERTIES('LOCAL_DICTIONARY_ENABLE'='false','LOCAL_DICTIONARY_THRESHOLD'='1000','LOCAL_DICTIONARY_INCLUDE'='column1','LOCAL_DICTIONARY_EXCLUDE'='column2')
When Local Dictionary properties are unset, corresponding default values will be used for these properties.
Example to UNSET Local Dictionary Properties:
ALTER TABLE tablename UNSET TBLPROPERTIES('LOCAL_DICTIONARY_ENABLE','LOCAL_DICTIONARY_THRESHOLD','LOCAL_DICTIONARY_INCLUDE','LOCAL_DICTIONARY_EXCLUDE')
NOTE: For old tables, local dictionary is disabled by default. If users want a local dictionary for these tables, they can enable/disable it for new data at their discretion. This can be achieved by using the ALTER TABLE SET command.
This command is used to delete an existing table.
DROP TABLE [IF EXISTS] [db_name.]table_name
Example:
DROP TABLE IF EXISTS productSchema.productSalesTable
This command is used to register a Carbon table to the Hive metastore catalog from existing Carbon table data.
REFRESH TABLE $db_NAME.$table_NAME
Example:
REFRESH TABLE dbcarbon.productSalesTable
NOTE:
You can provide more information about a table by using a table comment. Similarly, you can provide more information about a particular column using a column comment. You can see the column comments of an existing table using the DESCRIBE FORMATTED command.
CREATE TABLE [IF NOT EXISTS] [db_name.]table_name[(col_name data_type [COMMENT col_comment], ...)] [COMMENT table_comment] STORED BY 'carbondata' [TBLPROPERTIES (property_name=property_value, ...)]
Example:
CREATE TABLE IF NOT EXISTS productSchema.productSalesTable ( productNumber Int COMMENT 'unique serial number for product') COMMENT "This is table comment" STORED BY 'carbondata' TBLPROPERTIES ('DICTIONARY_INCLUDE'='productNumber')
You can also SET and UNSET table comment using ALTER command.
Example to SET table comment:
ALTER TABLE carbon SET TBLPROPERTIES ('comment'='this table comment is modified');
Example to UNSET table comment:
ALTER TABLE carbon UNSET TBLPROPERTIES ('comment');
This command is used to load CSV files into CarbonData. OPTIONS is not mandatory for the data loading process. Inside OPTIONS, the user can provide options like DELIMITER, QUOTECHAR, FILEHEADER, ESCAPECHAR, and MULTILINE as per requirement.
LOAD DATA [LOCAL] INPATH 'folder_path' INTO TABLE [db_name.]table_name OPTIONS(property_name=property_value, ...)
You can use the following options to load data:
DELIMITER: Delimiters can be provided in the load command.
OPTIONS('DELIMITER'=',')
QUOTECHAR: Quote Characters can be provided in the load command.
OPTIONS('QUOTECHAR'='"')
COMMENTCHAR: Comment characters can be provided in the load command if the user wants to comment out lines.
OPTIONS('COMMENTCHAR'='#')
HEADER: When you load a CSV file without a file header and the file header is the same as the table schema, add 'HEADER'='false' to the load data SQL, as the user then need not provide the file header. By default the value is 'true'. false: CSV file is without file header. true: CSV file is with file header.
OPTIONS('HEADER'='false')
NOTE: If the HEADER option exist and is set to ‘true’, then the FILEHEADER option is not required.
FILEHEADER: Headers can be provided in the LOAD DATA command if headers are missing in the source files.
OPTIONS('FILEHEADER'='column1,column2')
MULTILINE: CSV with new line character in quotes.
OPTIONS('MULTILINE'='true')
ESCAPECHAR: An escape character can be provided if the user wants strict validation of escape characters in CSV files.
OPTIONS('ESCAPECHAR'='\')
SKIP_EMPTY_LINE: This option will ignore the empty line in the CSV file during the data load.
OPTIONS('SKIP_EMPTY_LINE'='TRUE/FALSE')
COMPLEX_DELIMITER_LEVEL_1: Splits the complex type data column in a row (e.g., a$b$c --> Array = {a,b,c}).
OPTIONS('COMPLEX_DELIMITER_LEVEL_1'='$')
COMPLEX_DELIMITER_LEVEL_2: Splits the complex type nested data column in a row. Applies the level_1 delimiter and applies level_2 based on the complex data type (e.g., a:b$c:d --> Array<Array> = {{a,b},{c,d}}).
OPTIONS('COMPLEX_DELIMITER_LEVEL_2'=':')
ALL_DICTIONARY_PATH: All dictionary files path.
OPTIONS('ALL_DICTIONARY_PATH'='/opt/alldictionary/data.dictionary')
COLUMNDICT: Dictionary file path for specified column.
OPTIONS('COLUMNDICT'='column1:dictionaryFilePath1,column2:dictionaryFilePath2')
NOTE: ALL_DICTIONARY_PATH and COLUMNDICT can't be used together.
DATEFORMAT/TIMESTAMPFORMAT: Date and Timestamp format for specified column.
OPTIONS('DATEFORMAT' = 'yyyy-MM-dd','TIMESTAMPFORMAT'='yyyy-MM-dd HH:mm:ss')
NOTE: Date formats are specified by date pattern strings. The date pattern letters in CarbonData are the same as in Java. Refer to SimpleDateFormat.
SORT COLUMN BOUNDS: Range bounds for sort columns.
Suppose the table is created with 'SORT_COLUMNS'='name,id'; the value range for name is aaa~zzz and the value range for id is 0~1000. Then during data loading, we can specify the following option to enhance data loading performance.
OPTIONS('SORT_COLUMN_BOUNDS'='f,250;l,500;r,750')
Each bound is separated by ';' and each field value within a bound is separated by ','. In the example above, we provide 3 bounds to distribute records to 4 partitions. The values 'f','l','r' can evenly distribute the records. Inside carbondata, for each record we compare the value of its sort columns with the bounds and decide which partition the record will be forwarded to.
NOTE:
SINGLE_PASS: Single Pass Loading enables single job to finish data loading with dictionary generation on the fly. It enhances performance in the scenarios where the subsequent data loading after initial load involves fewer incremental updates on the dictionary.
This option specifies whether to use single pass for loading data or not. By default this option is set to FALSE.
OPTIONS('SINGLE_PASS'='TRUE')
NOTE:
Example:
LOAD DATA local inpath '/opt/rawdata/data.csv' INTO table carbontable
options('DELIMITER'=',', 'QUOTECHAR'='"', 'COMMENTCHAR'='#',
'HEADER'='false',
'FILEHEADER'='empno,empname,designation,doj,workgroupcategory,workgroupcategoryname,deptno,deptname,projectcode,projectjoindate,projectenddate,attendance,utilization,salary',
'MULTILINE'='true', 'ESCAPECHAR'='\', 'COMPLEX_DELIMITER_LEVEL_1'='$',
'COMPLEX_DELIMITER_LEVEL_2'=':',
'ALL_DICTIONARY_PATH'='/opt/alldictionary/data.dictionary',
'SINGLE_PASS'='TRUE')
BAD RECORDS HANDLING: Methods of handling bad records are as follows:
OPTIONS('BAD_RECORDS_LOGGER_ENABLE'='true', 'BAD_RECORD_PATH'='hdfs://hacluster/tmp/carbon', 'BAD_RECORDS_ACTION'='REDIRECT', 'IS_EMPTY_DATA_BAD_RECORD'='false')
NOTE:
Bad Records Path:
This property is used to specify the location where bad records would be written.
TBLPROPERTIES('BAD_RECORDS_PATH'='/opt/badrecords')
Example:
LOAD DATA INPATH 'filepath.csv' INTO TABLE tablename OPTIONS('BAD_RECORDS_LOGGER_ENABLE'='true','BAD_RECORD_PATH'='hdfs://hacluster/tmp/carbon', 'BAD_RECORDS_ACTION'='REDIRECT','IS_EMPTY_DATA_BAD_RECORD'='false')
GLOBAL_SORT_PARTITIONS: If the SORT_SCOPE is defined as GLOBAL_SORT, the user can specify the number of partitions to use while shuffling data for sort.
OPTIONS('GLOBAL_SORT_PARTITIONS'='2')
NOTE:
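For instance, a minimal sketch (path and table name are illustrative; the table is assumed to have been created with 'SORT_SCOPE'='GLOBAL_SORT'):

```
LOAD DATA INPATH '/opt/rawdata/data.csv' INTO TABLE carbontable
OPTIONS('GLOBAL_SORT_PARTITIONS'='2')
```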
This command inserts data into a CarbonData table. It is defined as a combination of two queries, Insert and Select. It inserts records from a source table into a target CarbonData table; the source table can be a Hive table, a Parquet table, or a CarbonData table itself. It also supports aggregating the records of a table by performing a Select query on the source table and loading the resulting records into a CarbonData table.
INSERT INTO TABLE <CARBONDATA TABLE> SELECT * FROM sourceTableName [ WHERE { <filter_condition> } ]
You can also omit the `table` keyword and write your query as:
INSERT INTO <CARBONDATA TABLE> SELECT * FROM sourceTableName [ WHERE { <filter_condition> } ]
Overwrite insert data:
INSERT OVERWRITE TABLE <CARBONDATA TABLE> SELECT * FROM sourceTableName [ WHERE { <filter_condition> } ]
NOTE:
Examples
INSERT INTO table1 SELECT item1, sum(item2 + 1000) as result FROM table2 group by item1
INSERT INTO table1 SELECT item1, item2, item3 FROM table2 where item2='xyz'
INSERT OVERWRITE TABLE table1 SELECT * FROM TABLE2
This command allows you to update a CarbonData table based on column expressions and optional filter conditions.
UPDATE <table_name> SET (column_name1, column_name2, ... column_name n) = (column1_expression , column2_expression, ... column n_expression ) [ WHERE { <filter_condition> } ]
Alternatively, the following command can also be used for updating the CarbonData table:
UPDATE <table_name> SET (column_name1, column_name2) =(select sourceColumn1, sourceColumn2 from sourceTable [ WHERE { <filter_condition> } ] ) [ WHERE { <filter_condition> } ]
NOTE: The update command fails if multiple input rows in source table are matched with single row in destination table.
Examples:
UPDATE t3 SET (t3_salary) = (t3_salary + 9) WHERE t3_name = 'aaa1'
UPDATE t3 SET (t3_date, t3_country) = ('2017-11-18', 'india') WHERE t3_salary < 15003
UPDATE t3 SET (t3_country, t3_name) = (SELECT t5_country, t5_name FROM t5 WHERE t5_id = 5) WHERE t3_id < 5
UPDATE t3 SET (t3_date, t3_serialname, t3_salary) = (SELECT '2099-09-09', t5_serialname, '9999' FROM t5 WHERE t5_id = 5) WHERE t3_id < 5
UPDATE t3 SET (t3_country, t3_salary) = (SELECT t5_country, t5_salary FROM t5 FULL JOIN t3 u WHERE u.t3_id = t5_id and t5_id=6) WHERE t3_id >6
NOTE: Update Complex datatype columns is not supported.
This command allows us to delete records from CarbonData table.
DELETE FROM table_name [WHERE expression]
Examples:
DELETE FROM carbontable WHERE column1 = 'china'
DELETE FROM carbontable WHERE column1 IN ('china', 'USA')
DELETE FROM carbontable WHERE column1 IN (SELECT column11 FROM sourceTable2)
DELETE FROM carbontable WHERE column1 IN (SELECT column11 FROM sourceTable2 WHERE column1 = 'USA')
Compaction improves the query performance significantly.
There are several types of compaction.
ALTER TABLE [db_name.]table_name COMPACT 'MINOR/MAJOR/CUSTOM'
In Minor compaction, the user can specify the number of loads to be merged. Minor compaction triggers for every data load if the parameter carbon.enable.auto.load.merge is set to true. If any segments are available to be merged, compaction will run in parallel with the data load. There are 2 levels in minor compaction:
ALTER TABLE table_name COMPACT 'MINOR'
In Major compaction, multiple segments can be merged into one large segment. The user specifies the compaction size up to which segments can be merged; Major compaction is usually done during off-peak time. Configure the property carbon.major.compaction.size with an appropriate value in MB.
This command merges the specified number of segments into one segment:
ALTER TABLE table_name COMPACT 'MAJOR'
In Custom compaction, user can directly specify segment ids to be merged into one large segment. All specified segment ids should exist and be valid, otherwise compaction will fail. Custom compaction is usually done during the off-peak time.
ALTER TABLE table_name COMPACT 'CUSTOM' WHERE SEGMENT.ID IN (2,3,4)
NOTE: Compaction is unsupported for tables containing Complex columns.
Clean the segments which are compacted:
CLEAN FILES FOR TABLE carbon_table
Partitioning is similar to Spark and Hive partitioning; the user can use any column to build a partition:
This command allows you to create table with partition.
CREATE TABLE [IF NOT EXISTS] [db_name.]table_name [(col_name data_type , ...)] [COMMENT table_comment] [PARTITIONED BY (col_name data_type , ...)] [STORED BY file_format] [TBLPROPERTIES (property_name=property_value, ...)]
Example:
CREATE TABLE IF NOT EXISTS productSchema.productSalesTable ( productNumber INT, productName STRING, storeCity STRING, storeProvince STRING, saleQuantity INT, revenue INT) PARTITIONED BY (productCategory STRING, productBatch STRING) STORED BY 'carbondata'
NOTE: Hive partition is not supported on complex datatype columns.
This command allows you to load data using static partition.
LOAD DATA [LOCAL] INPATH 'folder_path' INTO TABLE [db_name.]table_name PARTITION (partition_spec) OPTIONS(property_name=property_value, ...)
INSERT INTO TABLE [db_name.]table_name PARTITION (partition_spec) <SELECT STATEMENT>
Example:
LOAD DATA LOCAL INPATH '${env:HOME}/staticinput.csv' INTO TABLE locationTable PARTITION (country = 'US', state = 'CA')
INSERT INTO TABLE locationTable PARTITION (country = 'US', state = 'AL') SELECT <columns list excluding partition columns> FROM another_user
This command allows you to load data using dynamic partition. If partition spec is not specified, then the partition is considered as dynamic.
Example:
LOAD DATA LOCAL INPATH '${env:HOME}/staticinput.csv' INTO TABLE locationTable
INSERT INTO TABLE locationTable SELECT <columns list excluding partition columns> FROM another_user
This command gets the Hive partition information of the table
SHOW PARTITIONS [db_name.]table_name
This command drops the specified Hive partition only.
ALTER TABLE table_name DROP [IF EXISTS] PARTITION (part_spec, ...)
Example:
ALTER TABLE locationTable DROP PARTITION (country = 'US');
This command allows you to insert or load overwrite on a specific partition.
INSERT OVERWRITE TABLE table_name PARTITION (column = 'partition_name') select_statement
Example:
INSERT OVERWRITE TABLE partitioned_user PARTITION (country = 'US') SELECT * FROM another_user au WHERE au.country = 'US';
CarbonData also supports three partition types (Hash, Range, List). Similar to other systems' partition features, CarbonData's partition feature can be used to improve query performance by filtering on the partition column.
This command allows us to create hash partition.
CREATE TABLE [IF NOT EXISTS] [db_name.]table_name [(col_name data_type , ...)] PARTITIONED BY (partition_col_name data_type) STORED BY 'carbondata' [TBLPROPERTIES ('PARTITION_TYPE'='HASH', 'NUM_PARTITIONS'='N' ...)]
NOTE: N is the number of hash partitions
Example:
CREATE TABLE IF NOT EXISTS hash_partition_table( col_A STRING, col_B INT, col_C LONG, col_D DECIMAL(10,2), col_F TIMESTAMP ) PARTITIONED BY (col_E LONG) STORED BY 'carbondata' TBLPROPERTIES('PARTITION_TYPE'='HASH','NUM_PARTITIONS'='9')
This command allows us to create range partition.
CREATE TABLE [IF NOT EXISTS] [db_name.]table_name [(col_name data_type , ...)] PARTITIONED BY (partition_col_name data_type) STORED BY 'carbondata' [TBLPROPERTIES ('PARTITION_TYPE'='RANGE', 'RANGE_INFO'='2014-01-01, 2015-01-01, 2016-01-01, ...')]
NOTE:
Example:
CREATE TABLE IF NOT EXISTS range_partition_table( col_A STRING, col_B INT, col_C LONG, col_D DECIMAL(10,2), col_E LONG ) PARTITIONED BY (col_F TIMESTAMP) STORED BY 'carbondata' TBLPROPERTIES('PARTITION_TYPE'='RANGE', 'RANGE_INFO'='2015-01-01, 2016-01-01, 2017-01-01, 2017-02-01')
This command allows us to create list partition.
CREATE TABLE [IF NOT EXISTS] [db_name.]table_name [(col_name data_type , ...)] PARTITIONED BY (partition_col_name data_type) STORED BY 'carbondata' [TBLPROPERTIES ('PARTITION_TYPE'='LIST', 'LIST_INFO'='A, B, C, ...')]
NOTE: List partition supports list info in one level group.
Example:
CREATE TABLE IF NOT EXISTS list_partition_table( col_B INT, col_C LONG, col_D DECIMAL(10,2), col_E LONG, col_F TIMESTAMP ) PARTITIONED BY (col_A STRING) STORED BY 'carbondata' TBLPROPERTIES('PARTITION_TYPE'='LIST', 'LIST_INFO'='aaaa, bbbb, (cccc, dddd), eeee')
The following command is executed to get the partition information of the table
SHOW PARTITIONS [db_name.]table_name
Add a new partition:
ALTER TABLE [db_name].table_name ADD PARTITION('new_partition')
Split a partition:
ALTER TABLE [db_name].table_name SPLIT PARTITION(partition_id) INTO('new_partition1', 'new_partition2'...)
Drop only the partition definition, but keep the data:
ALTER TABLE [db_name].table_name DROP PARTITION(partition_id)
Drop both the partition definition and the data:
ALTER TABLE [db_name].table_name DROP PARTITION(partition_id) WITH DATA
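As a sketch against the list_partition_table defined earlier (the partition ids below are illustrative; look up the real ids with SHOW PARTITIONS first):

```
ALTER TABLE list_partition_table ADD PARTITION('ffff')
ALTER TABLE list_partition_table SPLIT PARTITION(3) INTO('cccc', 'dddd')
ALTER TABLE list_partition_table DROP PARTITION(2)
ALTER TABLE list_partition_table DROP PARTITION(2) WITH DATA
```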
NOTE:
SegmentDir/0_batchno0-0-1502703086921.carbonindex
SegmentDir/part-0-0_batchno0-0-1502703086921.carbondata
(The partition id is the '0' immediately before '_batchno' in the index file name and the '0' immediately after 'part-' in the data file name.)
Here are some useful tips to improve query performance of the CarbonData partition table:
The bucketing feature can be used to distribute/organize the table/partition data into multiple files, such that similar records are present in the same file. While creating a table, the user needs to specify the columns to be used for bucketing and the number of buckets. For the selection of a bucket, the hash value of the columns is used.
CREATE TABLE [IF NOT EXISTS] [db_name.]table_name [(col_name data_type, ...)] STORED BY 'carbondata' TBLPROPERTIES('BUCKETNUMBER'='noOfBuckets', 'BUCKETCOLUMNS'='columnname')
NOTE:
Example:
CREATE TABLE IF NOT EXISTS productSchema.productSalesTable ( productNumber INT, saleQuantity INT, productName STRING, storeCity STRING, storeProvince STRING, productCategory STRING, productBatch STRING, revenue INT) STORED BY 'carbondata' TBLPROPERTIES ('BUCKETNUMBER'='4', 'BUCKETCOLUMNS'='productName')
This command is used to list the segments of CarbonData table.
SHOW [HISTORY] SEGMENTS FOR TABLE [db_name.]table_name LIMIT number_of_segments
Example: Show visible segments
SHOW SEGMENTS FOR TABLE CarbonDatabase.CarbonTable LIMIT 4
Show all segments, including invisible segments
SHOW HISTORY SEGMENTS FOR TABLE CarbonDatabase.CarbonTable LIMIT 4
This command is used to delete segment by using the segment ID. Each segment has a unique segment ID associated with it. Using this segment ID, you can remove the segment.
The following command will get the segmentID.
SHOW SEGMENTS FOR TABLE [db_name.]table_name LIMIT number_of_segments
After you retrieve the segment ID of the segment that you want to delete, execute the following command to delete the selected segment.
DELETE FROM TABLE [db_name.]table_name WHERE SEGMENT.ID IN (segment_id1, segment_id2, ...)
Example:
DELETE FROM TABLE CarbonDatabase.CarbonTable WHERE SEGMENT.ID IN (0)
DELETE FROM TABLE CarbonDatabase.CarbonTable WHERE SEGMENT.ID IN (0,5,8)
This command allows deleting CarbonData segment(s) from the store based on the date provided by the user in the DML command. Segments created before the given date will be removed from the store.
DELETE FROM TABLE [db_name.]table_name WHERE SEGMENT.STARTTIME BEFORE DATE_VALUE
Example:
DELETE FROM TABLE CarbonDatabase.CarbonTable WHERE SEGMENT.STARTTIME BEFORE '2017-06-01 12:05:06'
This command is used to read data from specified segments during CarbonScan.
Get the Segment ID:
SHOW SEGMENTS FOR TABLE [db_name.]table_name LIMIT number_of_segments
Set the segment IDs for table
SET carbon.input.segments.<database_name>.<table_name> = <list of segment IDs>
NOTE: carbon.input.segments: Specifies the segment IDs to be queried. This property allows you to query specified segments of the specified table. The CarbonScan will read data from specified segments only.
If the user wants to query with segments reading in multi-threading mode, then CarbonSession.threadSet can be used instead of the SET query.
CarbonSession.threadSet ("carbon.input.segments.<database_name>.<table_name>","<list of segment IDs>");
Reset the segment IDs
SET carbon.input.segments.<database_name>.<table_name> = *;
If the user wants to query with segments reading in multi-threading mode, then CarbonSession.threadSet can be used instead of the SET query.
CarbonSession.threadSet ("carbon.input.segments.<database_name>.<table_name>","*");
Examples:
SHOW SEGMENTS FOR carbontable1;
SET carbon.input.segments.db.carbontable1 = 1,3,9;
CarbonSession.threadSet ("carbon.input.segments.db.carbontable_Multi_Thread","1,3");
def main(args: Array[String]) {
  Future {
    CarbonSession.threadSet("carbon.input.segments.db.carbontable_Multi_Thread", "1")
    spark.sql("select count(empno) from db.carbontable_Multi_Thread").show()
  }
}