# Redshift

> JDBC Redshift sink Connector
## Support Those Engines
Spark
Flink
SeaTunnel Zeta
## Key Features
Uses XA transactions to ensure exactly-once semantics, so exactly-once is only supported for databases that support XA transactions. You can set `is_exactly_once=true` to enable it.
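A minimal sketch of enabling exactly-once on the sink. Note that the `xa_data_source_class_name` value below is a placeholder, not confirmed by this document; you must supply the XA-capable data source class that your driver version actually provides:

```hocon
sink {
  jdbc {
    url = "jdbc:redshift://localhost:5439/mydatabase"
    driver = "com.amazon.redshift.jdbc.Driver"
    user = "myUser"
    password = "myPassword"
    generate_sink_sql = true
    schema = "public"
    table = "sink_table"
    # Enable exactly-once semantics via XA transactions
    is_exactly_once = true
    # Placeholder class name: replace with the XA data source class
    # shipped by your JDBC driver version
    xa_data_source_class_name = "com.example.XADataSource"
  }
}
```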
## Description
Writes data through JDBC. Supports both batch mode and streaming mode, concurrent writing, and exactly-once semantics (guaranteed by XA transactions).
## Supported DataSource List
| Datasource | Supported Versions | Driver | URL | Maven |
|---|---|---|---|---|
| Redshift | Different dependency versions have different driver classes. | com.amazon.redshift.jdbc.Driver | jdbc:redshift://localhost:5439/database | Download |
## Database Dependency
### For Spark/Flink Engine
- You need to ensure that the JDBC driver jar package has been placed in the directory `${SEATUNNEL_HOME}/plugins/`.
### For SeaTunnel Zeta Engine
- You need to ensure that the JDBC driver jar package has been placed in the directory `${SEATUNNEL_HOME}/lib/`.
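The two placement rules above can be sketched as follows. The jar filename here is a stand-in (the real name depends on the driver version you download), and the script uses a temporary directory as `SEATUNNEL_HOME` purely for illustration:

```shell
# Sketch: place the Redshift JDBC driver jar where each engine looks for it.
# "redshift-jdbc-driver.jar" is a stand-in name for the jar you downloaded.
SEATUNNEL_HOME="${SEATUNNEL_HOME:-$(mktemp -d)}"   # demo only: temp dir as SeaTunnel home
touch redshift-jdbc-driver.jar                     # stand-in for the real driver jar
mkdir -p "${SEATUNNEL_HOME}/plugins" "${SEATUNNEL_HOME}/lib"
cp redshift-jdbc-driver.jar "${SEATUNNEL_HOME}/plugins/"   # Spark/Flink engines
cp redshift-jdbc-driver.jar "${SEATUNNEL_HOME}/lib/"       # SeaTunnel Zeta engine
```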
## Data Type Mapping
| SeaTunnel Data Type | Redshift Data Type |
|---|---|
| BOOLEAN | BOOLEAN |
| TINYINT / SMALLINT | SMALLINT |
| INT | INTEGER |
| BIGINT | BIGINT |
| FLOAT | REAL |
| DOUBLE | DOUBLE PRECISION |
| DECIMAL | NUMERIC |
| STRING (length <= 65535) | CHARACTER VARYING |
| STRING (length > 65535) | SUPER |
| BYTES | BINARY VARYING |
| TIME | TIME |
| TIMESTAMP | TIMESTAMP |
| MAP / ARRAY / ROW | SUPER |
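As an illustration of the mapping above, a sink table on the Redshift side could be declared roughly as follows. Table and column names are made up for the example; lengths and precision are placeholders you would choose yourself:

```sql
CREATE TABLE public.sink_table (
    flag        BOOLEAN,                  -- from BOOLEAN
    small_val   SMALLINT,                 -- from TINYINT or SMALLINT
    id          INTEGER,                  -- from INT
    big_val     BIGINT,                   -- from BIGINT
    ratio       REAL,                     -- from FLOAT
    precise     DOUBLE PRECISION,         -- from DOUBLE
    amount      NUMERIC(10, 2),           -- from DECIMAL
    name        CHARACTER VARYING(256),   -- from STRING (length <= 65535)
    long_text   SUPER,                    -- from STRING (> 65535), MAP, ARRAY, ROW
    payload     BINARY VARYING(1024),     -- from BYTES
    event_time  TIME,                     -- from TIME
    created_at  TIMESTAMP                 -- from TIMESTAMP
);
```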
## Task Example
### Simple
```hocon
sink {
  jdbc {
    url = "jdbc:redshift://localhost:5439/mydatabase"
    driver = "com.amazon.redshift.jdbc.Driver"
    user = "myUser"
    password = "myPassword"
    generate_sink_sql = true
    schema = "public"
    table = "sink_table"
  }
}
```
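If you would rather not have the sink generate SQL from `schema` and `table`, the JDBC sink also accepts a hand-written statement via `query`. A sketch, with illustrative column names not taken from this document:

```hocon
sink {
  jdbc {
    url = "jdbc:redshift://localhost:5439/mydatabase"
    driver = "com.amazon.redshift.jdbc.Driver"
    user = "myUser"
    password = "myPassword"
    # Hand-written insert statement instead of generate_sink_sql;
    # the columns here are illustrative placeholders
    query = "insert into public.sink_table(name, age) values(?, ?)"
  }
}
```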
### CDC (Change Data Capture) Event

CDC change data is also supported. In this case, you need to configure `database`, `table` and `primary_keys`.
```hocon
sink {
  jdbc {
    url = "jdbc:redshift://localhost:5439/mydatabase"
    driver = "com.amazon.redshift.jdbc.Driver"
    user = "myUser"
    password = "mypassword"
    generate_sink_sql = true
    schema = "public"
    table = "sink_table"
    # config update/delete primary keys
    primary_keys = ["id", "name"]
  }
}
```