import ChangeLog from '../changelog/connector-jdbc.md';
# JDBC Redshift Sink Connector

## Supported Engines

> Spark<br/>
> Flink<br/>
> SeaTunnel Zeta<br/>
## Key Features

Exactly-once semantics are guaranteed with XA transactions. Therefore, exactly-once is supported only for databases that support XA transactions. You can enable it by setting `is_exactly_once = true`.

## Description

Write data through JDBC. Supports batch mode and streaming mode, supports concurrent writing, and supports exactly-once semantics (guaranteed by XA transactions).
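As a sketch, a sink with exactly-once enabled might look like the following. `is_exactly_once` is the option named above; `xa_data_source_class_name` is the JDBC sink's option for the driver's XA data source, and the Redshift class name shown here is an assumption to verify against the class actually shipped in your driver jar.

```hocon
sink {
  jdbc {
    url = "jdbc:redshift://localhost:5439/mydatabase"
    driver = "com.amazon.redshift.jdbc.Driver"
    username = "myUser"
    password = "myPassword"
    generate_sink_sql = true
    schema = "public"
    table = "sink_table"
    # Enable exactly-once via XA transactions.
    is_exactly_once = true
    # XA data source class of the driver; this class name is an assumption --
    # check the class provided by your Redshift JDBC driver version.
    xa_data_source_class_name = "com.amazon.redshift.xa.RedshiftXADataSource"
  }
}
```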
## Supported DataSource Info

| Datasource | Supported Versions | Driver | Url | Maven |
|------------|--------------------|--------|-----|-------|
| Redshift | Different dependency versions use different driver classes. | com.amazon.redshift.jdbc.Driver | jdbc:redshift://localhost:5439/database | Download |
## Database Dependency

- If you use Spark or Flink, you need to ensure that the JDBC driver jar package has been placed in the `${SEATUNNEL_HOME}/plugins/` directory.
- If you use SeaTunnel Zeta, you need to ensure that the JDBC driver jar package has been placed in the `${SEATUNNEL_HOME}/lib/` directory.
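A minimal sketch of staging the driver jar for SeaTunnel Zeta is shown below. The `SEATUNNEL_HOME` default and the jar filename/version are assumptions; adjust both to your installation, and replace the `touch` stand-in with the jar you actually downloaded from Maven.

```shell
# Sketch: place the Redshift JDBC driver jar where SeaTunnel Zeta loads it.
# SEATUNNEL_HOME and the jar version are assumptions; adjust for your install.
SEATUNNEL_HOME="${SEATUNNEL_HOME:-./seatunnel}"
mkdir -p "$SEATUNNEL_HOME/lib"
# Stand-in for the real jar downloaded from Maven Central
# (coordinates com.amazon.redshift:redshift-jdbc42).
touch redshift-jdbc42-2.1.0.30.jar
cp redshift-jdbc42-2.1.0.30.jar "$SEATUNNEL_HOME/lib/"
```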
## Data Type Mapping

| SeaTunnel Data Type | Redshift Data Type |
|---------------------|--------------------|
| BOOLEAN | BOOLEAN |
| TINYINT<br/>SMALLINT | SMALLINT |
| INT | INTEGER |
| BIGINT | BIGINT |
| FLOAT | REAL |
| DOUBLE | DOUBLE PRECISION |
| DECIMAL | NUMERIC |
| STRING (<= 65535) | CHARACTER VARYING |
| STRING (> 65535) | SUPER |
| BYTES | BINARY VARYING |
| TIME | TIME |
| TIMESTAMP | TIMESTAMP |
| MAP<br/>ARRAY<br/>ROW | SUPER |
## Task Example

### Simple

```hocon
sink {
  jdbc {
    url = "jdbc:redshift://localhost:5439/mydatabase"
    driver = "com.amazon.redshift.jdbc.Driver"
    username = "myUser"
    password = "myPassword"
    generate_sink_sql = true
    schema = "public"
    table = "sink_table"
  }
}
```
### CDC Event

We also support writing CDC change data. In this case, you need to configure the database, table, and primary keys.
```hocon
sink {
  jdbc {
    url = "jdbc:redshift://localhost:5439/mydatabase"
    driver = "com.amazon.redshift.jdbc.Driver"
    username = "myUser"
    password = "mypassword"
    generate_sink_sql = true
    schema = "public"
    table = "sink_table"
    # config update/delete primary keys
    primary_keys = ["id", "name"]
  }
}
```