import ChangeLog from '../changelog/connector-druid.md';
# Druid

> Druid sink connector

## Description

Write data to Druid.
## Data Type Mapping

| SeaTunnel Data Type | Druid Data Type |
|---------------------|-----------------|
| TINYINT             | LONG            |
| SMALLINT            | LONG            |
| INT                 | LONG            |
| BIGINT              | LONG            |
| FLOAT               | FLOAT           |
| DOUBLE              | DOUBLE          |
| DECIMAL             | DOUBLE          |
| STRING              | STRING          |
| BOOLEAN             | STRING          |
| TIMESTAMP           | STRING          |
## Sink Options

| Name           | Type   | Required | Default Value |
|----------------|--------|----------|---------------|
| coordinatorUrl | string | yes      | -             |
| datasource     | string | yes      | -             |
| batchSize      | int    | no       | 10000         |
| common-options | -      | no       | -             |
### coordinatorUrl [string]

The coordinator host and port of Druid, for example: "myHost:8888".

### datasource [string]

The name of the datasource you want to write to, for example: "seatunnel".

### batchSize [int]

The maximum number of rows flushed to Druid per batch. The default value is 10000.

### common-options

Sink plugin common parameters, please refer to Sink Common Options for details.
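Putting the options above together, a sink block that overrides the default batch size might look like the following sketch (the host, datasource name, and batch size are illustrative values, not defaults mandated by the connector):

```hocon
sink {
  Druid {
    coordinatorUrl = "testHost:8888"
    datasource = "seatunnel"
    # flush after at most 5000 buffered rows instead of the 10000 default
    batchSize = 5000
  }
}
```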
## Example

Simple example:

```hocon
sink {
  Druid {
    coordinatorUrl = "testHost:8888"
    datasource = "seatunnel"
  }
}
```
Use placeholders to get upstream table metadata:

```hocon
sink {
  Druid {
    coordinatorUrl = "testHost:8888"
    datasource = "${table_name}_test"
  }
}
```
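For context, the placeholder form can also be shown inside a complete job configuration. The `env` settings and the `FakeSource` schema below are illustrative assumptions for the sketch, not part of this connector's reference:

```hocon
env {
  job.mode = "BATCH"
}

source {
  FakeSource {
    # generates test rows; the upstream table name feeds ${table_name} downstream
    schema = {
      fields {
        name = "string"
        age = "int"
      }
    }
  }
}

sink {
  Druid {
    coordinatorUrl = "testHost:8888"
    # resolved at runtime to "<upstream table name>_test"
    datasource = "${table_name}_test"
  }
}
```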