import ChangeLog from '../changelog/connector-jdbc.md';
# JDBC Redshift Sink Connector

## Supported Engines

- Spark
- Flink
- SeaTunnel Zeta
Exactly-once semantics is guaranteed through XA transactions, so exactly-once is only supported for databases that support XA transactions. You can set `is_exactly_once=true` to enable it.
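A minimal sketch of an exactly-once sink configuration. Note that `is_exactly_once` also requires `xa_data_source_class_name`; the class name shown here is illustrative and must match the XA `DataSource` class actually shipped with your JDBC driver version:

```hocon
sink {
  jdbc {
    url = "jdbc:redshift://localhost:5439/mydatabase"
    driver = "com.amazon.redshift.jdbc.Driver"
    username = "myUser"
    password = "myPassword"
    query = "insert into sink_table(name, age) values(?, ?)"

    # Enable exactly-once semantics (XA transactions)
    is_exactly_once = true
    # Illustrative value: use the XA DataSource class provided by your driver
    xa_data_source_class_name = "com.amazon.redshift.xa.RedshiftXADataSource"
  }
}
```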
Writes data through JDBC. Supports batch mode and streaming mode, concurrent writing, and exactly-once semantics (guaranteed by XA transactions).
| Datasource | Supported Versions | Driver | Url | Maven |
|---|---|---|---|---|
| Redshift | Different dependency versions have different driver classes. | com.amazon.redshift.jdbc.Driver | jdbc:redshift://localhost:5439/database | Download |
- You need to ensure that the JDBC driver jar has been placed in the `${SEATUNNEL_HOME}/plugins/` directory.
- You need to ensure that the JDBC driver jar has been placed in the `${SEATUNNEL_HOME}/lib/` directory.
| SeaTunnel Data Type | Redshift Data Type |
|---|---|
| BOOLEAN | BOOLEAN |
| TINYINT, SMALLINT | SMALLINT |
| INT | INTEGER |
| BIGINT | BIGINT |
| FLOAT | REAL |
| DOUBLE | DOUBLE PRECISION |
| DECIMAL | NUMERIC |
| STRING (<= 65535) | CHARACTER VARYING |
| STRING (> 65535) | SUPER |
| BYTES | BINARY VARYING |
| TIME | TIME |
| TIMESTAMP | TIMESTAMP |
| MAP, ARRAY, ROW | SUPER |
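As an example of the mapping above, a source producing the schema below would be written to Redshift as `BIGINT`, `CHARACTER VARYING`, `DOUBLE PRECISION`, and `SUPER` columns respectively (the `FakeSource` schema here is illustrative):

```hocon
source {
  FakeSource {
    schema = {
      fields {
        id = bigint            # -> BIGINT
        name = string          # -> CHARACTER VARYING (length <= 65535)
        score = double         # -> DOUBLE PRECISION
        tags = "array<string>" # -> SUPER
      }
    }
  }
}
```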
```hocon
sink {
  jdbc {
    url = "jdbc:redshift://localhost:5439/mydatabase"
    driver = "com.amazon.redshift.jdbc.Driver"
    username = "myUser"
    password = "myPassword"
    generate_sink_sql = true
    schema = "public"
    table = "sink_table"
  }
}
```
CDC change data is also supported. In this case, you need to configure `database`, `table`, and `primary_keys`.
```hocon
sink {
  jdbc {
    url = "jdbc:redshift://localhost:5439/mydatabase"
    driver = "com.amazon.redshift.jdbc.Driver"
    username = "myUser"
    password = "mypassword"
    generate_sink_sql = true
    schema = "public"
    table = "sink_table"
    # config update/delete primary keys
    primary_keys = ["id", "name"]
  }
}
```