S3File
S3 file source connector
Description
Read data from the AWS S3 file system.
If you use Spark/Flink, in order to use this connector you must ensure your Spark/Flink cluster is already integrated with Hadoop. The tested Hadoop version is 2.x.
If you use SeaTunnel Engine, the Hadoop jar is integrated automatically when you download and install SeaTunnel Engine. You can check the jar packages under ${SEATUNNEL_HOME}/lib to confirm this.
To use this connector you need to put hadoop-aws-3.1.4.jar and aws-java-sdk-bundle-1.11.271.jar into the ${SEATUNNEL_HOME}/lib directory.
Key features
Read all the data in a split in one pollNext call. The splits that have been read are saved in the snapshot.
- column projection
- parallelism
- support user-defined split
- file format type
  - text
  - csv
  - parquet
  - orc
  - json
Options
name | type | required | default value |
---|---|---|---|
path | string | yes | - |
file_format_type | string | yes | - |
bucket | string | yes | - |
fs.s3a.endpoint | string | yes | - |
fs.s3a.aws.credentials.provider | string | yes | com.amazonaws.auth.InstanceProfileCredentialsProvider |
read_columns | list | no | - |
access_key | string | no | - |
secret_key | string | no | - |
hadoop_s3_properties | map | no | - |
delimiter | string | no | \001 |
parse_partition_from_path | boolean | no | true |
date_format | string | no | yyyy-MM-dd |
datetime_format | string | no | yyyy-MM-dd HH:mm:ss |
time_format | string | no | HH:mm:ss |
skip_header_row_number | long | no | 0 |
schema | config | no | - |
common-options | | no | - |
path [string]
The source file path.
fs.s3a.endpoint [string]
The endpoint of the S3 service, for example: s3.cn-north-1.amazonaws.com.cn.
fs.s3a.aws.credentials.provider [string]
The way to authenticate s3a. Currently only org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider and com.amazonaws.auth.InstanceProfileCredentialsProvider are supported.
For more information about the credential provider, you can see the Hadoop AWS documentation.
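A minimal sketch of the two supported choices (the key values are placeholders):

```hocon
# Option 1: authenticate with an explicit key pair
fs.s3a.aws.credentials.provider = "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider"
access_key = "your-access-key"
secret_key = "your-secret-key"

# Option 2: authenticate via the instance profile of the machine running the job
fs.s3a.aws.credentials.provider = "com.amazonaws.auth.InstanceProfileCredentialsProvider"
```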
delimiter [string]
Field delimiter, used to tell the connector how to slice fields when reading text files.
Default: \001, the same as Hive's default delimiter.
parse_partition_from_path [boolean]
Controls whether the partition keys and values are parsed from the file path.
For example, if you read a file from the path s3n://hadoop-cluster/tmp/seatunnel/parquet/name=tyrantlucifer/age=26, these two fields will be added to every record read from the file:

name | age |
---|---|
tyrantlucifer | 26 |

Tips: Do not define partition fields in the schema option.
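A minimal sketch that turns partition parsing off (all values here are placeholders):

```hocon
S3File {
  path = "/tmp/seatunnel/parquet"
  bucket = "s3a://hadoop-cluster"
  fs.s3a.endpoint = "s3.cn-north-1.amazonaws.com.cn"
  fs.s3a.aws.credentials.provider = "com.amazonaws.auth.InstanceProfileCredentialsProvider"
  file_format_type = "parquet"
  # name and age will no longer be parsed from .../name=tyrantlucifer/age=26
  parse_partition_from_path = false
}
```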
date_format [string]
Date type format, used to tell the connector how to convert a string to a date. The following formats are supported:

- yyyy-MM-dd
- yyyy.MM.dd
- yyyy/MM/dd

Default: yyyy-MM-dd
datetime_format [string]
Datetime type format, used to tell the connector how to convert a string to a datetime. The following formats are supported:

- yyyy-MM-dd HH:mm:ss
- yyyy.MM.dd HH:mm:ss
- yyyy/MM/dd HH:mm:ss
- yyyyMMddHHmmss

Default: yyyy-MM-dd HH:mm:ss
time_format [string]
Time type format, used to tell the connector how to convert a string to a time. The following formats are supported:

- HH:mm:ss
- HH:mm:ss.SSS

Default: HH:mm:ss
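A sketch combining the three format options on a text source (formats chosen from the supported lists above; everything else is a placeholder):

```hocon
S3File {
  path = "/seatunnel/text"
  bucket = "s3a://seatunnel-test"
  fs.s3a.endpoint = "s3.cn-north-1.amazonaws.com.cn"
  fs.s3a.aws.credentials.provider = "com.amazonaws.auth.InstanceProfileCredentialsProvider"
  file_format_type = "text"
  # How string values are converted into date/datetime/time typed columns
  date_format = "yyyy.MM.dd"
  datetime_format = "yyyy.MM.dd HH:mm:ss"
  time_format = "HH:mm:ss.SSS"
}
```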
skip_header_row_number [long]
Skip the first few lines, but only for the txt and csv file types.
For example, set it as follows:

```hocon
skip_header_row_number = 2
```

Then SeaTunnel will skip the first 2 lines from the source files.
file_format_type [string]
File type. The following file types are supported:

- text
- csv
- parquet
- orc
- json

If you assign the file type as json, you should also assign the schema option to tell the connector how to parse the data into the rows you want.
For example, if the upstream data is the following:

```json
{"code": 200, "data": "get success", "success": true}
```

You can also save multiple pieces of data in one file and split them by newline:

```json
{"code": 200, "data": "get success", "success": true}
{"code": 300, "data": "get failed", "success": false}
```

You should assign the schema as follows:
```hocon
schema {
  fields {
    code = int
    data = string
    success = boolean
  }
}
```
The connector will generate data as follows:
code | data | success |
---|---|---|
200 | get success | true |
If you assign the file type as parquet or orc, the schema option is not required; the connector can find the schema of the upstream data automatically.
If you assign the file type as text or csv, you can choose whether or not to specify the schema information.
For example, if the upstream data is the following:

```text
tyrantlucifer#26#male
```

If you do not assign a data schema, the connector will treat the upstream data as follows:

content |
---|
tyrantlucifer#26#male |

If you assign a data schema, you should also assign the delimiter option (except for the csv file type). You should assign schema and delimiter as follows:
delimiter = "#"
schema {
fields {
name = string
age = int
gender = string
}
}
The connector will generate data as follows:
name | age | gender |
---|---|---|
tyrantlucifer | 26 | male |
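Putting it together, a minimal sketch of a text source with an explicit schema (bucket and endpoint values are placeholders):

```hocon
S3File {
  path = "/seatunnel/text"
  bucket = "s3a://seatunnel-test"
  fs.s3a.endpoint = "s3.cn-north-1.amazonaws.com.cn"
  fs.s3a.aws.credentials.provider = "com.amazonaws.auth.InstanceProfileCredentialsProvider"
  file_format_type = "text"
  delimiter = "#"
  schema {
    fields {
      name = string
      age = int
      gender = string
    }
  }
}
```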
bucket [string]
The bucket address of the S3 file system, for example: s3n://seatunnel-test. If you use the s3a protocol, this parameter should be s3a://seatunnel-test.
access_key [string]
The access key of the S3 file system. If this parameter is not set, please confirm that the credential provider chain can authenticate correctly; you can check this in the hadoop-aws documentation.
secret_key [string]
The access secret of the S3 file system. If this parameter is not set, please confirm that the credential provider chain can authenticate correctly; you can check this in the hadoop-aws documentation.
hadoop_s3_properties [map]
If you need to add other options, you can add them here and refer to the hadoop-aws documentation:

```hocon
hadoop_s3_properties {
  "xxx" = "xxx"
}
```
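For instance, a sketch that passes a standard hadoop-aws client setting through this map (the value is illustrative):

```hocon
hadoop_s3_properties {
  # Maximum number of simultaneous connections the S3A client may open
  "fs.s3a.connection.maximum" = "100"
}
```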
schema [config]
fields [Config]
The schema of upstream data.
read_columns [list]
The read column list of the data source; users can use it to implement field projection.
The file types that support column projection are shown below:

- text
- json
- csv
- orc
- parquet

Tips: If you want to use this feature when reading text, json, or csv files, the schema option must be configured. See the sketch below.
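A minimal sketch of field projection on a text file, reading only name and age (all values are illustrative):

```hocon
S3File {
  path = "/seatunnel/text"
  bucket = "s3a://seatunnel-test"
  fs.s3a.endpoint = "s3.cn-north-1.amazonaws.com.cn"
  fs.s3a.aws.credentials.provider = "com.amazonaws.auth.InstanceProfileCredentialsProvider"
  file_format_type = "text"
  delimiter = "#"
  schema {
    fields {
      name = string
      age = int
      gender = string
    }
  }
  # Only name and age end up in the output rows
  read_columns = ["name", "age"]
}
```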
common options
Source plugin common parameters, please refer to Source Common Options for details.
Example
```hocon
S3File {
  path = "/seatunnel/text"
  fs.s3a.endpoint = "s3.cn-north-1.amazonaws.com.cn"
  fs.s3a.aws.credentials.provider = "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider"
  access_key = "xxxxxxxxxxxxxxxxx"
  secret_key = "xxxxxxxxxxxxxxxxx"
  bucket = "s3a://seatunnel-test"
  file_format_type = "orc"
}
```
```hocon
S3File {
  path = "/seatunnel/json"
  bucket = "s3a://seatunnel-test"
  fs.s3a.endpoint = "s3.cn-north-1.amazonaws.com.cn"
  fs.s3a.aws.credentials.provider = "com.amazonaws.auth.InstanceProfileCredentialsProvider"
  file_format_type = "json"
  schema {
    fields {
      id = int
      name = string
    }
  }
}
```
Changelog
2.3.0-beta 2022-10-20
- Add S3File Source Connector