Currently, Doris supports multiple import methods such as Broker Load, Routine Load, and Stream Load, but data can only be deleted through the DELETE statement. Each execution of DELETE generates a new data version, so frequent deletion seriously affects query performance. Moreover, DELETE is implemented by generating an empty rowset that records the deletion conditions; every read must filter against these conditions, so performance degrades further as the number of conditions grows. Compared with other systems, Greenplum's implementation is closer to a traditional database product, while Snowflake implements this through the MERGE syntax.
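As an illustration of the cost, each statement like the following produces a new data version whose delete condition must be filtered on every subsequent read (the table and condition here are hypothetical):

```sql
DELETE FROM table1 WHERE siteid = 1;
```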
For scenarios such as importing CDC data, inserts and deletes are generally interspersed in the data stream. Our current import methods are not sufficient here: even if we could separate the inserts from the deletes, the import problem would be solved but the deletion problem would remain. The batch delete function is designed to meet these needs; it supports three merge types for data import: APPEND, where all rows are appended to the table as in a normal load; DELETE, where all rows with the same keys as the imported rows are deleted; and MERGE, where each row is appended or deleted according to the DELETE ON condition.
Batch delete is implemented by adding a hidden column, __DORIS_DELETE_SIGN__. Because batch deletion is only done on the unique model, we only need to add a hidden column whose type is bool and whose aggregation function is REPLACE. In be, the various aggregation and write processes handle this column the same as a normal column. There are two possible read schemes:
First scheme: when fe encounters an extension such as *, it removes __DORIS_DELETE_SIGN__ from the column list and adds the condition __DORIS_DELETE_SIGN__ != true by default. Second scheme: when be reads, it adds the column for judgment and uses the condition to determine whether each row is deleted.
When importing, fe sets the value of the hidden column to the value of the DELETE ON expression during parsing; the other aggregation behaviors are the same as those of a REPLACE aggregation column.
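Conceptually, for `DELETE ON label_c3=true` the hidden column is filled as if the load performed the following projection (a sketch only, using the column names from the Broker Load example below; fe does this during plan generation, not as literal SQL):

```sql
SELECT
    tmp_c2 AS id,
    tmp_c1 AS name,
    -- rows matching the DELETE ON condition get the delete sign set
    CASE WHEN label_c3 = true THEN 1 ELSE 0 END AS __DORIS_DELETE_SIGN__
FROM source_data;  -- source_data is a stand-in for the rows being loaded
```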
When reading, the condition __DORIS_DELETE_SIGN__ != true is added to every olapScanNode of a table with the hidden column; be does not perceive this process and executes normally.
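For example, a query against the `test` table shown later would conceptually be handled as follows (a sketch of the fe rewrite, not syntax the user writes):

```sql
-- What the user submits:
SELECT * FROM test;

-- What fe effectively plans: __DORIS_DELETE_SIGN__ is stripped from the
-- expansion of * and the delete filter is added to the scan.
SELECT name, gender, age FROM test WHERE __DORIS_DELETE_SIGN__ != true;
```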
In Cumulative Compaction, hidden columns are treated as normal columns, and the compaction logic remains unchanged.
In Base Compaction, the rows marked for deletion are dropped, reducing the space occupied by the data.
There are two ways of enabling batch delete support:
By adding enable_batch_delete_by_default=true to the fe configuration file: after restarting fe, all newly created tables support batch deletion. This option defaults to false.
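That is, add the following line to fe.conf and restart fe:

```
enable_batch_delete_by_default=true
```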
For existing tables, or tables created without the above fe configuration and therefore lacking bulk delete support, the following statement enables it: ALTER TABLE tablename ENABLE FEATURE "BATCH_DELETE".
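For example (the database and table names here are illustrative):

```sql
ALTER TABLE example_db.table1 ENABLE FEATURE "BATCH_DELETE";
```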
To determine whether a table supports batch delete, set a session variable to display hidden columns: SET show_hidden_columns=true, then run DESC tablename. If a __DORIS_DELETE_SIGN__ column appears in the output, batch delete is supported; if not, it is not.
The import syntax mainly adds a column mapping that specifies the delete-marker column; a corresponding column must also be added to the imported data. The syntax for the various import methods is as follows:
Stream Load: the delete-marker column is set by adding a field to the columns field in the header, for example: -H "columns: k1, k2, label_c3" -H "merge_type: [MERGE|APPEND|DELETE]" -H "delete: label_c3=1".
Broker Load: the delete-marker column is set in the DELETE ON clause of the load statement. The syntax is as follows:
```sql
LOAD LABEL db1.label1
(
    [MERGE|APPEND|DELETE] DATA INFILE("hdfs://abc.com:8888/user/palo/test/ml/file1")
    INTO TABLE tbl1
    COLUMNS TERMINATED BY ","
    (tmp_c1, tmp_c2, label_c3)
    SET
    (
        id=tmp_c2,
        name=tmp_c1
    )
    [DELETE ON label_c3=true]
)
WITH BROKER 'broker'
(
    "username"="user",
    "password"="pass"
)
PROPERTIES
(
    "timeout" = "3600"
);
```
Routine Load: a mapping is added to the columns field; the mapping method is the same as above. The syntax is as follows:
```sql
CREATE ROUTINE LOAD example_db.test1 ON example_tbl
[WITH MERGE|APPEND|DELETE]
COLUMNS(k1, k2, k3, v1, v2, label),
WHERE k1 > 100 and k2 like "%doris%"
[DELETE ON label=true]
PROPERTIES
(
    "desired_concurrent_number"="3",
    "max_batch_interval" = "20",
    "max_batch_rows" = "300000",
    "max_batch_size" = "209715200",
    "strict_mode" = "false"
)
FROM KAFKA
(
    "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
    "kafka_topic" = "my_topic",
    "kafka_partitions" = "0,1,2,3",
    "kafka_offsets" = "101,0,0,200"
);
```
Note: the MERGE type needs to be used together with the load sequence; for the specific syntax, please refer to the sequence column related documents. The DELETE ON condition can only be used with MERGE.

Example: check whether batch delete support is enabled:

```sql
mysql> SET show_hidden_columns=true;
Query OK, 0 rows affected (0.00 sec)

mysql> DESC test;
+-----------------------+--------------+------+-------+---------+---------+
| Field                 | Type         | Null | Key   | Default | Extra   |
+-----------------------+--------------+------+-------+---------+---------+
| name                  | VARCHAR(100) | No   | true  | NULL    |         |
| gender                | VARCHAR(10)  | Yes  | false | NULL    | REPLACE |
| age                   | INT          | Yes  | false | NULL    | REPLACE |
| __DORIS_DELETE_SIGN__ | TINYINT      | No   | false | 0       | REPLACE |
+-----------------------+--------------+------+-------+---------+---------+
4 rows in set (0.00 sec)
```
Import data normally:

```bash
curl --location-trusted -u root: -H "column_separator:," -H "columns: siteid, citycode, username, pv" -H "merge_type: APPEND" -T ~/table1_data http://127.0.0.1:8130/api/test/table1/_stream_load
```
The APPEND condition can be omitted, which has the same effect as the following statement:
```bash
curl --location-trusted -u root: -H "column_separator:," -H "columns: siteid, citycode, username, pv" -T ~/table1_data http://127.0.0.1:8130/api/test/table1/_stream_load
```
Delete all data with the same keys as the imported data:

```bash
curl --location-trusted -u root: -H "column_separator:," -H "columns: siteid, citycode, username, pv" -H "merge_type: DELETE" -T ~/table1_data http://127.0.0.1:8130/api/test/table1/_stream_load
```
Before load:
```
+--------+----------+----------+------+
| siteid | citycode | username | pv   |
+--------+----------+----------+------+
|      3 |        2 | tom      |    2 |
|      4 |        3 | bush     |    3 |
|      5 |        3 | helen    |    3 |
+--------+----------+----------+------+
```
Load data:
```
3,2,tom,0
```
After load:
```
+--------+----------+----------+------+
| siteid | citycode | username | pv   |
+--------+----------+----------+------+
|      4 |        3 | bush     |    3 |
|      5 |        3 | helen    |    3 |
+--------+----------+----------+------+
```
Use MERGE to import the data, deleting the rows where siteid=1:

```bash
curl --location-trusted -u root: -H "column_separator:," -H "columns: siteid, citycode, username, pv" -H "merge_type: MERGE" -H "delete: siteid=1" -T ~/table1_data http://127.0.0.1:8130/api/test/table1/_stream_load
```
Before load:
```
+--------+----------+----------+------+
| siteid | citycode | username | pv   |
+--------+----------+----------+------+
|      4 |        3 | bush     |    3 |
|      5 |        3 | helen    |    3 |
|      1 |        1 | jim      |    2 |
+--------+----------+----------+------+
```
Load data:
```
2,1,grace,2
3,2,tom,2
1,1,jim,2
```
After load:
```
+--------+----------+----------+------+
| siteid | citycode | username | pv   |
+--------+----------+----------+------+
|      4 |        3 | bush     |    3 |
|      2 |        1 | grace    |    2 |
|      3 |        2 | tom      |    2 |
|      5 |        3 | helen    |    3 |
+--------+----------+----------+------+
```