Version: 2.3.10

FtpFile

FTP file source connector

Supported Engines

Spark
Flink
SeaTunnel Zeta

Key features

Description

Read data from an FTP file server.

tip

If you use Spark/Flink, you must ensure your Spark/Flink cluster has already integrated Hadoop before using this connector. The tested Hadoop version is 2.x.

If you use SeaTunnel Engine, the Hadoop jar is integrated automatically when you download and install SeaTunnel Engine. You can check the jar packages under ${SEATUNNEL_HOME}/lib to confirm this.

Options

| name | type | required | default value |
|------|------|----------|---------------|
| host | string | yes | - |
| port | int | yes | - |
| user | string | yes | - |
| password | string | yes | - |
| path | string | yes | - |
| file_format_type | string | yes | - |
| connection_mode | string | no | active_local |
| delimiter/field_delimiter | string | no | \001 |
| read_columns | list | no | - |
| parse_partition_from_path | boolean | no | true |
| date_format | string | no | yyyy-MM-dd |
| datetime_format | string | no | yyyy-MM-dd HH:mm:ss |
| time_format | string | no | HH:mm:ss |
| skip_header_row_number | long | no | 0 |
| schema | config | no | - |
| sheet_name | string | no | - |
| xml_row_tag | string | no | - |
| xml_use_attr_format | boolean | no | - |
| file_filter_pattern | string | no | - |
| filename_extension | string | no | - |
| compress_codec | string | no | none |
| archive_compress_codec | string | no | none |
| encoding | string | no | UTF-8 |
| null_format | string | no | - |
| common-options | | no | - |

host [string]

The target FTP host is required.

port [int]

The target FTP port is required.

user [string]

The target FTP user name is required.

password [string]

The target FTP password is required.

path [string]

The source file path.

file_filter_pattern [string]

Filter pattern, which is used for filtering files.

The pattern follows standard regular expressions. For details, please refer to https://en.wikipedia.org/wiki/Regular_expression. Some examples follow.

File Structure Example:

/data/seatunnel/20241001/report.txt
/data/seatunnel/20241007/abch202410.csv
/data/seatunnel/20241002/abcg202410.csv
/data/seatunnel/20241005/old_data.csv
/data/seatunnel/20241012/logo.png

Matching Rules Example:

Example 1: Match all .txt files. Regular Expression:

/data/seatunnel/20241001/.*\.txt

The result of this example matching is:

/data/seatunnel/20241001/report.txt

Example 2: Match all files starting with abc. Regular Expression:

/data/seatunnel/202410\d*/abc.*

The result of this example matching is:

/data/seatunnel/20241007/abch202410.csv
/data/seatunnel/20241002/abcg202410.csv

Example 3: Match all files starting with abc whose fourth character is either h or g. Regular Expression:

/data/seatunnel/20241007/abc[h,g].*

The result of this example matching is:

/data/seatunnel/20241007/abch202410.csv

Example 4: Match third-level folders starting with 202410 and files ending with .csv. Regular Expression:

/data/seatunnel/202410\d*/.*\.csv

The result of this example matching is:

/data/seatunnel/20241007/abch202410.csv
/data/seatunnel/20241002/abcg202410.csv
/data/seatunnel/20241005/old_data.csv

filename_extension [string]

Filter by filename extension, which is used for filtering files with a specific extension. Example: csv .txt json .xml.
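
For example, a minimal sketch (the extension value is illustrative) that restricts reading to csv files:

// only read files whose names end with this extension
filename_extension = ".csv"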

file_format_type [string]

File type. The following file types are supported:

text csv parquet orc json excel xml binary

If you assign the file type to json, you should also assign the schema option to tell the connector how to parse the data into the rows you want.

For example:

upstream data is the following:


{"code": 200, "data": "get success", "success": true}

you should assign schema as the following:


schema {
  fields {
    code = int
    data = string
    success = boolean
  }
}

The connector will generate data as the following:

| code | data | success |
|------|------|---------|
| 200 | get success | true |

If you assign the file type to text or csv, you can choose whether or not to specify the schema information.

For example, upstream data is the following:


tyrantlucifer#26#male

If you do not assign the data schema, the connector will treat the upstream data as the following:

| content |
|---------|
| tyrantlucifer#26#male |

If you assign the data schema, you should also assign the field_delimiter option, except for the csv file type.

You should assign the schema and delimiter as the following:


field_delimiter = "#"
schema {
  fields {
    name = string
    age = int
    gender = string
  }
}

The connector will generate data as the following:

| name | age | gender |
|------|-----|--------|
| tyrantlucifer | 26 | male |

If you assign the file type to binary, SeaTunnel can synchronize files in any format, such as compressed packages, pictures, etc. In short, any file can be synchronized to the target place. Note that the source and the sink must both use the binary format for file synchronization. You can find the specific usage in the example below.

connection_mode [string]

The target FTP connection mode. The default is active mode. The following modes are supported:

active_local passive_local
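
For example, a minimal sketch that switches to passive mode, which is typically needed when the client runs behind a firewall or NAT:

// use passive transfer mode instead of the default active_local
connection_mode = "passive_local"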

delimiter/field_delimiter [string]

The delimiter parameter will be deprecated after version 2.3.5; please use field_delimiter instead.

Only needs to be configured when file_format is text.

Field delimiter, used to tell the connector how to slice and dice fields.

Default \001, the same as Hive's default delimiter.

parse_partition_from_path [boolean]

Control whether to parse the partition keys and values from the file path.

For example, if you read a file from the path ftp://hadoop-cluster/tmp/seatunnel/parquet/name=tyrantlucifer/age=26, every record read from the file will have these two fields added:

| name | age |
|------|-----|
| tyrantlucifer | 26 |

Tip: Do not define partition fields in the schema option.

date_format [string]

Date type format, used to tell the connector how to convert a string to a date. The following formats are supported:

yyyy-MM-dd yyyy.MM.dd yyyy/MM/dd

default yyyy-MM-dd

datetime_format [string]

Datetime type format, used to tell the connector how to convert a string to a datetime. The following formats are supported:

yyyy-MM-dd HH:mm:ss yyyy.MM.dd HH:mm:ss yyyy/MM/dd HH:mm:ss yyyyMMddHHmmss

default yyyy-MM-dd HH:mm:ss

time_format [string]

Time type format, used to tell the connector how to convert a string to a time. The following formats are supported:

HH:mm:ss HH:mm:ss.SSS

default HH:mm:ss
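
A minimal sketch combining the three format options (the values shown are the documented defaults):

date_format = "yyyy-MM-dd"
datetime_format = "yyyy-MM-dd HH:mm:ss"
time_format = "HH:mm:ss"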

skip_header_row_number [long]

Skip the first few lines, but only for the txt and csv file types.

For example, set like following:

skip_header_row_number = 2

then SeaTunnel will skip the first 2 lines of the source files.

schema [config]

Only needs to be configured when file_format_type is text, json, excel, xml or csv (or any other format from which the schema cannot be read from the metadata).

The schema information of upstream data.

read_columns [list]

The read column list of the data source; users can use it to implement field projection.
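
For example, a minimal sketch that projects only two fields (the field names are illustrative and must match names declared in the schema option):

// only the listed columns are emitted downstream
read_columns = ["name", "age"]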

sheet_name [string]

The sheet of the workbook to read. Only used when file_format_type is excel.

xml_row_tag [string]

Only needs to be configured when file_format is xml.

Specifies the tag name of the data rows within the XML file.

xml_use_attr_format [boolean]

Only needs to be configured when file_format is xml.

Specifies whether to process data using the tag attribute format.
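
For example, a minimal sketch for an XML file whose rows look like <row name="tyrantlucifer" age="26"/> (the tag name and attribute layout are illustrative):

// each <row> element is one record; field values are read from tag attributes
xml_row_tag = "row"
xml_use_attr_format = true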

compress_codec [string]

The compress codec of files. The supported codecs per file format are shown below, with a short config sketch after the list:

  • txt: lzo none
  • json: lzo none
  • csv: lzo none
  • orc/parquet:
    automatically recognizes the compression type, no additional settings required.
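
For example, a minimal sketch for reading lzo-compressed text files (illustrative values):

file_format_type = "text"
compress_codec = "lzo"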

archive_compress_codec [string]

The compress codec of archive files. The supported combinations are shown as follows:

| archive_compress_codec | file_format | archive_compress_suffix |
|------------------------|-------------|-------------------------|
| ZIP | txt,json,excel,xml | .zip |
| TAR | txt,json,excel,xml | .tar |
| TAR_GZ | txt,json,excel,xml | .tar.gz |
| GZ | txt,json,excel,xml | .gz |
| NONE | all | .* |

Note: a gz-compressed excel file needs to compress the original file or specify the file suffix, such as e2e.xls -> e2e_test.xls.gz.
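
For example, a minimal sketch for reading zipped json files (illustrative values; the codec is written in lowercase here on the assumption that codec names are case-insensitive):

file_format_type = "json"
archive_compress_codec = "zip"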

encoding [string]

Only used when file_format_type is json, text, csv or xml. The encoding of the file to read. This param will be parsed by Charset.forName(encoding).

null_format [string]

Only used when file_format_type is text. null_format defines which strings can be represented as null.

e.g: \N
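
A minimal sketch combining the two options above (illustrative values; the doubled backslash is HOCON string escaping for a literal \N):

encoding = "UTF-8"
null_format = "\\N"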

common options

Source plugin common parameters, please refer to Source Common Options for details.

Example


FtpFile {
  path = "/tmp/seatunnel/sink/text"
  host = "192.168.31.48"
  port = 21
  user = tyrantlucifer
  password = tianchao
  file_format_type = "text"
  schema = {
    fields {
      name = string
      age = int
    }
  }
  field_delimiter = "#"
}

Multiple Table


FtpFile {
  tables_configs = [
    {
      schema {
        table = "student"
      }
      path = "/tmp/seatunnel/sink/text"
      host = "192.168.31.48"
      port = 21
      user = tyrantlucifer
      password = tianchao
      file_format_type = "parquet"
    },
    {
      schema {
        table = "teacher"
      }
      path = "/tmp/seatunnel/sink/text"
      host = "192.168.31.48"
      port = 21
      user = tyrantlucifer
      password = tianchao
      file_format_type = "parquet"
    }
  ]
}


FtpFile {
  tables_configs = [
    {
      schema {
        fields {
          name = string
          age = int
        }
      }
      path = "/apps/hive/demo/student"
      file_format_type = "json"
    },
    {
      schema {
        fields {
          name = string
          age = int
        }
      }
      path = "/apps/hive/demo/teacher"
      file_format_type = "json"
    }
  ]
}

Transfer Binary File


env {
  parallelism = 1
  job.mode = "BATCH"
}

source {
  FtpFile {
    host = "192.168.31.48"
    port = 21
    user = tyrantlucifer
    password = tianchao
    path = "/seatunnel/read/binary/"
    file_format_type = "binary"
  }
}

sink {
  // you can transfer the file to s3/hdfs/oss etc.
  FtpFile {
    host = "192.168.31.48"
    port = 21
    user = tyrantlucifer
    password = tianchao
    path = "/seatunnel/read/binary2/"
    file_format_type = "binary"
  }
}

Filter File

env {
  parallelism = 1
  job.mode = "BATCH"
}

source {
  FtpFile {
    host = "192.168.31.48"
    port = 21
    user = tyrantlucifer
    password = tianchao
    path = "/seatunnel/read/binary/"
    file_format_type = "binary"
    // file example abcD2024.csv
    file_filter_pattern = "abc[DX]*.*"
  }
}

sink {
  Console {
  }
}

Changelog

Change Log
| Change | Commit | Version |
|--------|--------|---------|
| Revert " [improve] update localfile connector config" (#9018) | https://github.com/apache/seatunnel/commit/cdc79e13a | 2.3.10 |
| [improve] update localfile connector config (#8765) | https://github.com/apache/seatunnel/commit/def369a85 | 2.3.10 |
| [Improve][Connector-V2] Ensure that the FTP connector behaves reliably during directory operation (#8959) | https://github.com/apache/seatunnel/commit/b5f0b43fc | 2.3.10 |
| [Feature][Connector-V2] Add filename_extension parameter for read/write file (#8769) | https://github.com/apache/seatunnel/commit/78b23c0ef | 2.3.10 |
| [Improve] restruct connector common options (#8634) | https://github.com/apache/seatunnel/commit/f3499a6ee | 2.3.10 |
| [Feature][Connector-V2] Support create emtpy file when no data (#8543) | https://github.com/apache/seatunnel/commit/275db7891 | 2.3.10 |
| [Feature][Connector-V2] Support single file mode in file sink (#8518) | https://github.com/apache/seatunnel/commit/e893deed5 | 2.3.10 |
| [Improve][Connector-V2] Add some debug log when create dir in (S)FTP (#8286) | https://github.com/apache/seatunnel/commit/8687bb8e9 | 2.3.9 |
| [Feature][File] Support config null format for text file read (#8109) | https://github.com/apache/seatunnel/commit/2dbf02df4 | 2.3.9 |
| [Fix][Connector-V2][FTP] Fix FTP connector connection_mode is not effective (#7865) | https://github.com/apache/seatunnel/commit/26c528a5e | 2.3.9 |
| [Feature][Restapi] Allow metrics information to be associated to logical plan nodes (#7786) | https://github.com/apache/seatunnel/commit/6b7c53d03 | 2.3.9 |
| [Feature][Connector-V2]Ftp file source support multiple table (#7795) | https://github.com/apache/seatunnel/commit/22fe27a3d | 2.3.9 |
| [Improve][Connector-V2] Support read archive compress file (#7633) | https://github.com/apache/seatunnel/commit/3f98cd8a1 | 2.3.8 |
| [Feature][Connector-V2] Ftp file sink suport multiple table and save mode (#7665) | https://github.com/apache/seatunnel/commit/4f812e12a | 2.3.8 |
| [Improve][Files] Support write fixed/timestamp as int96 of parquet (#6971) | https://github.com/apache/seatunnel/commit/1a48a9c49 | 2.3.6 |
| [Feature][Connector-V2] Supports the transfer of any file (#6826) | https://github.com/apache/seatunnel/commit/c1401787b | 2.3.6 |
| Add support for XML file type to various file connectors such as SFTP, FTP, LocalFile, HdfsFile, and more. (#6327) | https://github.com/apache/seatunnel/commit/ec533ecd9 | 2.3.5 |
| [Feature][Connectors-v2-file-ftp] FTP source/sink add ftp connection mode (#6077) (#6099) | https://github.com/apache/seatunnel/commit/f6bcc4d59 | 2.3.4 |
| [Refactor][File Connector] Put Multiple Table File API to File Base Module (#6033) | https://github.com/apache/seatunnel/commit/c324d663b | 2.3.4 |
| Support using multiple hadoop account (#5903) | https://github.com/apache/seatunnel/commit/d69d88d1a | 2.3.4 |
| [Improve][Common] Introduce new error define rule (#5793) | https://github.com/apache/seatunnel/commit/9d1b2582b | 2.3.4 |
| [Improve][connector-file] unifiy option between file source/sink and update document (#5680) | https://github.com/apache/seatunnel/commit/8d87cf8fc | 2.3.4 |
| [Feature] Support LZO compress on File Read (#5083) | https://github.com/apache/seatunnel/commit/a4a190109 | 2.3.4 |
| [Feature][Connector-V2][File] Support read empty directory (#5591) | https://github.com/apache/seatunnel/commit/1f58f224a | 2.3.4 |
| Support config column/primaryKey/constraintKey in schema (#5564) | https://github.com/apache/seatunnel/commit/eac76b4e5 | 2.3.4 |
| [Feature][File Connector]optionrule FILE_FORMAT_TYPE is text/csv ,add parameter BaseSinkConfig.ENABLE_HEADER_WRITE: #5566 (#5567) | https://github.com/apache/seatunnel/commit/0e02db768 | 2.3.4 |
| [Feature][Connector V2][File] Add config of 'file_filter_pattern', which used for filtering files. (#5153) | https://github.com/apache/seatunnel/commit/a3c13e59e | 2.3.3 |
| [Feature][ConnectorV2]add file excel sink and source (#4164) | https://github.com/apache/seatunnel/commit/e3b97ae5d | 2.3.2 |
| Change file type to file_format_type in file source/sink (#4249) | https://github.com/apache/seatunnel/commit/973a2fae3 | 2.3.1 |
| Merge branch 'dev' into merge/cdc | https://github.com/apache/seatunnel/commit/4324ee191 | 2.3.1 |
| [Improve][Project] Code format with spotless plugin. | https://github.com/apache/seatunnel/commit/423b58303 | 2.3.1 |
| [improve][api] Refactoring schema parse (#4157) | https://github.com/apache/seatunnel/commit/b2f573a13 | 2.3.1 |
| [Improve][build] Give the maven module a human readable name (#4114) | https://github.com/apache/seatunnel/commit/d7cd60105 | 2.3.1 |
| [Improve][Project] Code format with spotless plugin. (#4101) | https://github.com/apache/seatunnel/commit/a2ab16656 | 2.3.1 |
| [Feature][Connector-V2][File] Support compress (#3899) | https://github.com/apache/seatunnel/commit/55602f6b1 | 2.3.1 |
| [Feature][Connector] add get source method to all source connector (#3846) | https://github.com/apache/seatunnel/commit/417178fb8 | 2.3.1 |
| [Improve][Connector-V2][File] Improve file connector option rule and document (#3812) | https://github.com/apache/seatunnel/commit/bd7607766 | 2.3.1 |
| [Feature][Shade] Add seatunnel hadoop3 uber (#3755) | https://github.com/apache/seatunnel/commit/5a024bdf8 | 2.3.0 |
| [Hotfix][OptionRule] Fix option rule about all connectors (#3592) | https://github.com/apache/seatunnel/commit/226dc6a11 | 2.3.0 |
| [Improve][Connector-V2][File] Unified excetion for file source & sink connectors (#3525) | https://github.com/apache/seatunnel/commit/031e8e263 | 2.3.0 |
| [Feature][Connector-V2][File] Add option and factory for file connectors (#3375) | https://github.com/apache/seatunnel/commit/db286e863 | 2.3.0 |
| [Improve][Connector-V2][File] Improve code structure (#3238) | https://github.com/apache/seatunnel/commit/dd5c35388 | 2.3.0 |
| [Connector-V2][ElasticSearch] Add ElasticSearch Source/Sink Factory (#3325) | https://github.com/apache/seatunnel/commit/38254e3f2 | 2.3.0 |
| [Core][Improve] Fix some sonar check error (#3240) | https://github.com/apache/seatunnel/commit/8664bb53a | 2.3.0 |
| [Improve][Connector-V2][File] Support parse field from file path (#2985) | https://github.com/apache/seatunnel/commit/0bc12085c | 2.3.0-beta |
| [Improve][connector][file] Support user-defined schema for reading text file (#2976) | https://github.com/apache/seatunnel/commit/1c05ee0d7 | 2.3.0-beta |
| [Improve][Connector] Improve write parquet (#2943) | https://github.com/apache/seatunnel/commit/8fd966394 | 2.3.0-beta |
| [Fix][Connector-V2] Fix HiveSource Connector read orc table error (#2845) | https://github.com/apache/seatunnel/commit/61720306e | 2.2.0-beta |
| [Improve][Connector-V2] Improve read parquet (#2841) | https://github.com/apache/seatunnel/commit/e19bc82f9 | 2.2.0-beta |
| [Imporve][Connector-V2] Refactor ftp sink & Add ftp file source (#2774) | https://github.com/apache/seatunnel/commit/4aacbcdd1 | 2.2.0-beta |
| [Feature][File connector] Support ftp file sink (#2483) | https://github.com/apache/seatunnel/commit/a87e5de80 | 2.2.0-beta |