# Redis

> Redis source connector

## Description

Used to read data from Redis.

## Key features

## Options

| name                | type   | required              | default value |
|---------------------|--------|-----------------------|---------------|
| host                | string | yes when mode=single  | -             |
| port                | int    | no                    | 6379          |
| keys                | string | yes                   | -             |
| batch_size          | int    | yes                   | 10            |
| data_type           | string | yes                   | -             |
| user                | string | no                    | -             |
| auth                | string | no                    | -             |
| db_num              | int    | no                    | 0             |
| mode                | string | no                    | single        |
| hash_key_parse_mode | string | no                    | all           |
| nodes               | list   | yes when mode=cluster | -             |
| schema              | config | yes when format=json  | -             |
| format              | string | no                    | json          |
| common-options      |        | no                    | -             |

### host [string]

Redis host.

### port [int]

Redis port.

### hash_key_parse_mode [string]

Hash key parse mode. Supported values are `all` and `kv`; it tells the connector how to parse the value of a hash key.

When set to `all`, the connector treats the whole value of the hash key as one row and parses it with the schema config. When set to `kv`, the connector treats each key-value pair inside the hash key as a separate row and parses it with the schema config:

For example, if the value of the hash key is as follows:

```json
{
  "001": {
    "name": "tyrantlucifer",
    "age": 26
  },
  "002": {
    "name": "Zongwen",
    "age": 26
  }
}
```

If `hash_key_parse_mode` is `all` and the schema config is as shown below, the connector will generate the following data:

```hocon
schema {
  fields {
    001 {
      name = string
      age = int
    }
    002 {
      name = string
      age = int
    }
  }
}
```

| 001                             | 002                       |
|---------------------------------|---------------------------|
| Row(name=tyrantlucifer, age=26) | Row(name=Zongwen, age=26) |

If `hash_key_parse_mode` is `kv` and the schema config is as shown below, the connector will generate the following data:

```hocon
schema {
  fields {
    hash_key = string
    name = string
    age = int
  }
}
```

| hash_key | name          | age |
|----------|---------------|-----|
| 001      | tyrantlucifer | 26  |
| 002      | Zongwen       | 26  |

Each key-value pair inside the hash key is treated as a separate row and sent downstream.

Tip: the connector uses the first field of the schema config as the field name that holds the key of each key-value pair.
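
A minimal source sketch for the `kv` case above; the host, port, and key pattern are illustrative placeholders, not values from this page:

```hocon
source {
  Redis {
    host = "localhost"        # placeholder host
    port = 6379
    keys = "user_hash*"       # placeholder pattern matching hash-type keys
    data_type = hash
    hash_key_parse_mode = kv  # each key-value pair inside the hash becomes one row
    format = json
    schema {
      fields {
        hash_key = string     # first field receives the key of each key-value pair
        name = string
        age = int
      }
    }
  }
}
```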

### keys [string]

Keys pattern.

### batch_size [int]

Indicates the number of keys to attempt to return per iteration. Default is 10.

Tip: the Redis source connector supports fuzzy key matching; the user needs to ensure that all matched keys are of the same type.
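
A sketch of a fuzzy key scan with an explicit batch size; the host and key pattern are illustrative placeholders:

```hocon
source {
  Redis {
    host = "localhost"   # placeholder host
    port = 6379
    keys = "order:*"     # fuzzy pattern; all matched keys must be the same Redis type
    batch_size = 100     # number of keys fetched per iteration
    data_type = key
    format = text
  }
}
```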

### data_type [string]

Redis data types. Supported values: `key`, `hash`, `list`, `set`, `zset`. A short config sketch for reading a list follows the descriptions below.

- key

  The value of each key will be sent downstream as a single row of data. For example, if the value of a key is `SeaTunnel test message`, the data received downstream is `SeaTunnel test message`, and only one message will be received.

- hash

  The hash key-value pairs will be formatted as JSON and sent downstream as a single row of data. For example, if the value of the hash is `name:tyrantlucifer age:26`, the data received downstream is `{"name":"tyrantlucifer", "age":"26"}`, and only one message will be received.

- list

  Each element in the list will be sent downstream as a single row of data. For example, if the value of the list is `[tyrantlucifer, CalvinKirs]`, the data received downstream is `tyrantlucifer` and `CalvinKirs`, and only two messages will be received.

- set

  Each element in the set will be sent downstream as a single row of data. For example, if the value of the set is `[tyrantlucifer, CalvinKirs]`, the data received downstream is `tyrantlucifer` and `CalvinKirs`, and only two messages will be received.

- zset

  Each element in the sorted set will be sent downstream as a single row of data. For example, if the value of the sorted set is `[tyrantlucifer, CalvinKirs]`, the data received downstream is `tyrantlucifer` and `CalvinKirs`, and only two messages will be received.
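
As referenced above, a minimal sketch reading a Redis list as plain text rows; the host and key pattern are illustrative placeholders:

```hocon
source {
  Redis {
    host = "localhost"       # placeholder host
    port = 6379
    keys = "message_list*"   # placeholder pattern matching list-type keys
    data_type = list         # every list element is emitted as one row
    format = text
  }
}
```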

### user [string]

Redis authentication user. You need it when you connect to an encrypted cluster.

### auth [string]

Redis authentication password. You need it when you connect to an encrypted cluster.

### db_num [int]

Redis database index ID. The connector connects to db 0 by default.
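
A sketch combining authentication with a non-default database index; the credentials and db number are illustrative placeholders:

```hocon
source {
  Redis {
    host = "localhost"    # placeholder host
    port = 6379
    user = "default"      # illustrative user
    auth = "mypassword"   # illustrative password
    db_num = 3            # read from database 3 instead of the default db 0
    keys = "key_test*"
    data_type = key
    format = text
  }
}
```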

### mode [string]

Redis mode, `single` or `cluster`. Default is `single`.

### nodes [list]

Redis node information, used in cluster mode. It must be in the following format:

```
["host1:port1", "host2:port2"]
```

### format [string]

The format of upstream data. Currently only `json` and `text` are supported. Default is `json`.

When you set format to `json`, you should also set the `schema` option. For example:

If the upstream data is the following:

{"code":  200, "data":  "get success", "success":  true}

You should set the schema as follows:

```hocon
schema {
  fields {
    code = int
    data = string
    success = boolean
  }
}
```

The connector will generate data as follows:

| code | data        | success |
|------|-------------|---------|
| 200  | get success | true    |
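
Put together, a source sketch that would parse the JSON sample above; the host and key pattern are illustrative placeholders:

```hocon
source {
  Redis {
    host = "localhost"      # placeholder host
    port = 6379
    keys = "result_json*"   # placeholder pattern for keys holding the JSON above
    data_type = key
    format = json
    schema {
      fields {
        code = int
        data = string
        success = boolean
      }
    }
  }
}
```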

When you set format to `text`, the connector does nothing to the upstream data. For example:

If the upstream data is the following:

```json
{"code": 200, "data": "get success", "success": true}
```

The connector will generate data as follows:

| content                                                |
|--------------------------------------------------------|
| {"code": 200, "data": "get success", "success": true}  |

### schema [config]

#### fields [config]

The schema fields of Redis data.

### common options

Source plugin common parameters, please refer to Source Common Options for details.

## Example

Simple:

```hocon
Redis {
  host = localhost
  port = 6379
  keys = "key_test*"
  data_type = key
  format = text
}
```

```hocon
Redis {
  host = localhost
  port = 6379
  keys = "key_test*"
  data_type = key
  format = json
  schema {
    fields {
      name = string
      age = int
    }
  }
}
```

Read string type keys and append them to a list:

```hocon
source {
  Redis {
    host = "redis-e2e"
    port = 6379
    auth = "U2VhVHVubmVs"
    keys = "string_test*"
    data_type = string
    batch_size = 33
  }
}

sink {
  Redis {
    host = "redis-e2e"
    port = 6379
    auth = "U2VhVHVubmVs"
    key = "string_test_list"
    data_type = list
    batch_size = 33
  }
}
```

## Changelog

| Change | Commit | Version |
|--------|--------|---------|
| [hotfix][redis] fix npe cause by null host parameter (#8881) | https://github.com/apache/seatunnel/commit/7bd586516 | 2.3.10 |
| [Improve][Redis] Optimized Redis connection params (#8841) | https://github.com/apache/seatunnel/commit/e56f06cdf | 2.3.10 |
| [Improve] restruct connector common options (#8634) | https://github.com/apache/seatunnel/commit/f3499a6ee | 2.3.10 |
| [improve] update Redis connector config option (#8631) | https://github.com/apache/seatunnel/commit/f1c313eea | 2.3.10 |
| [Feature][Redis] Flush data when the time reaches checkpoint.interval and update test case (#8308) | https://github.com/apache/seatunnel/commit/e15757bcd | 2.3.9 |
| Revert "[Feature][Redis] Flush data when the time reaches checkpoint interval" and "[Feature][CDC] Add 'schema-changes.enabled' options" (#8278) | https://github.com/apache/seatunnel/commit/fcb293828 | 2.3.9 |
| [Feature][Redis] Flush data when the time reaches checkpoint.interval (#8198) | https://github.com/apache/seatunnel/commit/2e24941e6 | 2.3.9 |
| [Hotfix] Fix redis sink NPE (#8171) | https://github.com/apache/seatunnel/commit/6b9074e76 | 2.3.9 |
| [Improve][dist]add shade check rule (#8136) | https://github.com/apache/seatunnel/commit/51ef80001 | 2.3.9 |
| [Feature][Connector-Redis] Redis connector support delete data (#7994) | https://github.com/apache/seatunnel/commit/02a35c397 | 2.3.9 |
| [Improve][Connector-V2] Redis support custom key and value (#7888) | https://github.com/apache/seatunnel/commit/ef2c3c728 | 2.3.9 |
| [Feature][Restapi] Allow metrics information to be associated to logical plan nodes (#7786) | https://github.com/apache/seatunnel/commit/6b7c53d03 | 2.3.9 |
| [improve][Redis]Redis scan command supports versions 5, 6, 7 (#7666) | https://github.com/apache/seatunnel/commit/6e70cbe33 | 2.3.8 |
| [Improve][Connector] Add multi-table sink option check (#7360) | https://github.com/apache/seatunnel/commit/2489f6446 | 2.3.7 |
| [Feature][Core] Support using upstream table placeholders in sink options and auto replacement (#7131) | https://github.com/apache/seatunnel/commit/c4ca74122 | 2.3.6 |
| [Improve][Redis] Redis reader use scan cammnd instead of keys, single mode reader/writer support batch (#7087) | https://github.com/apache/seatunnel/commit/be37f05c0 | 2.3.6 |
| [Feature][Kafka] Support multi-table source read (#5992) | https://github.com/apache/seatunnel/commit/60104602d | 2.3.6 |
| [Improve][Connector-V2]Support multi-table sink feature for redis (#6314) | https://github.com/apache/seatunnel/commit/fed89ae3f | 2.3.5 |
| [Feature][Core] Upgrade flink source translation (#5100) | https://github.com/apache/seatunnel/commit/5aabb14a9 | 2.3.4 |
| [Feature][Connector-V2] Support TableSourceFactory/TableSinkFactory on redis (#5901) | https://github.com/apache/seatunnel/commit/e84dcb8c1 | 2.3.4 |
| [Improve][Common] Introduce new error define rule (#5793) | https://github.com/apache/seatunnel/commit/9d1b2582b | 2.3.4 |
| [Improve] Remove use SeaTunnelSink::getConsumedType method and mark it as deprecated (#5755) | https://github.com/apache/seatunnel/commit/8de740810 | 2.3.4 |
| [Improve][Connector-v2][Redis] Redis support select db (#5570) | https://github.com/apache/seatunnel/commit/77fbbbd0e | 2.3.4 |
| Support config column/primaryKey/constraintKey in schema (#5564) | https://github.com/apache/seatunnel/commit/eac76b4e5 | 2.3.4 |
| [Feature][Connector-v2][RedisSink]Support redis to set expiration time. (#4975) | https://github.com/apache/seatunnel/commit/b5321ff1d | 2.3.3 |
| Merge branch 'dev' into merge/cdc | https://github.com/apache/seatunnel/commit/4324ee191 | 2.3.1 |
| [Improve][Project] Code format with spotless plugin. | https://github.com/apache/seatunnel/commit/423b58303 | 2.3.1 |
| [improve][api] Refactoring schema parse (#4157) | https://github.com/apache/seatunnel/commit/b2f573a13 | 2.3.1 |
| [Improve][build] Give the maven module a human readable name (#4114) | https://github.com/apache/seatunnel/commit/d7cd60105 | 2.3.1 |
| [Improve][Project] Code format with spotless plugin. (#4101) | https://github.com/apache/seatunnel/commit/a2ab16656 | 2.3.1 |
| [Feature][Connector] add get source method to all source connector (#3846) | https://github.com/apache/seatunnel/commit/417178fb8 | 2.3.1 |
| [Hotfix][OptionRule] Fix option rule about all connectors (#3592) | https://github.com/apache/seatunnel/commit/226dc6a11 | 2.3.0 |
| [Improve][Connector-V2][Redis] Unified exception for redis source & sink exception (#3517) | https://github.com/apache/seatunnel/commit/205f78258 | 2.3.0 |
| options in conditional need add to required or optional options (#3501) | https://github.com/apache/seatunnel/commit/51d5bcba1 | 2.3.0 |
| [feature][api] add option validation for the ReadonlyConfig (#3417) | https://github.com/apache/seatunnel/commit/4f824fea3 | 2.3.0 |
| [Feature][Redis Connector V2] Add Redis Connector Option Rules & Improve Redis Connector doc (#3320) | https://github.com/apache/seatunnel/commit/1c10aacb3 | 2.3.0 |
| [Connector-V2][ElasticSearch] Add ElasticSearch Source/Sink Factory (#3325) | https://github.com/apache/seatunnel/commit/38254e3f2 | 2.3.0 |
| [Improve][Connector-V2][Redis] Support redis cluster connection & user authentication (#3188) | https://github.com/apache/seatunnel/commit/c7275a49c | 2.3.0 |
| [DEV][Api] Replace SeaTunnelContext with JobContext and remove singleton pattern (#2706) | https://github.com/apache/seatunnel/commit/cbf82f755 | 2.2.0-beta |
| [Feature][Connector-V2] Add redis sink connector (#2647) | https://github.com/apache/seatunnel/commit/71a9e4b01 | 2.2.0-beta |
| [#2606]Dependency management split (#2630) | https://github.com/apache/seatunnel/commit/fc047be69 | 2.2.0-beta |
| [Feature][Connector-V2] Add redis source connector (#2569) | https://github.com/apache/seatunnel/commit/405f7d6f9 | 2.2.0-beta |