Version: 2.3.2

Apache Iceberg

Apache Iceberg source connector

Description

Source connector for Apache Iceberg. It supports both batch and stream modes.

Key features

Options

name                      type     required  default value
catalog_name              string   yes       -
catalog_type              string   yes       -
uri                       string   no        -
warehouse                 string   yes       -
namespace                 string   yes       -
table                     string   yes       -
schema                    config   no        -
case_sensitive            boolean  no        false
start_snapshot_timestamp  long     no        -
start_snapshot_id         long     no        -
end_snapshot_id           long     no        -
use_snapshot_id           long     no        -
use_snapshot_timestamp    long     no        -
stream_scan_strategy      enum     no        FROM_LATEST_SNAPSHOT
common-options            -        no        -

catalog_name [string]

User-specified catalog name.

catalog_type [string]

The optional values are:

  • hive: The Hive metastore catalog.
  • hadoop: The Hadoop catalog.

uri [string]

The Hive metastore's thrift URI.

warehouse [string]

The location where metadata files and data files are stored.

namespace [string]

The Iceberg database name in the backend catalog.

table [string]

The Iceberg table name in the backend catalog.

case_sensitive [boolean]

If data columns were selected via schema [config], controls whether the match against the schema is case sensitive.

schema [config]

fields [Config]

Use projection to select the data columns and their order.

e.g.

schema {
  fields {
    f2 = "boolean"
    f1 = "bigint"
    f3 = "int"
    f4 = "bigint"
  }
}

start_snapshot_id [long]

Instructs this scan to look for changes starting from a particular snapshot (exclusive).

start_snapshot_timestamp [long]

Instructs this scan to look for changes starting from the most recent snapshot for the table as of the given timestamp, in milliseconds since the Unix epoch.

end_snapshot_id [long]

Instructs this scan to look for changes up to a particular snapshot (inclusive).
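For illustration, a bounded incremental read between two snapshots could be sketched as below. The snapshot IDs shown are placeholders; substitute real snapshot IDs from your table's history.

```
source {
  Iceberg {
    catalog_name = "seatunnel"
    catalog_type = "hadoop"
    warehouse = "hdfs://your_cluster/tmp/seatunnel/iceberg/"
    namespace = "your_iceberg_database"
    table = "your_iceberg_table"
    # read changes after this snapshot (exclusive) ...
    start_snapshot_id = 5781947118336215154
    # ... up to and including this snapshot
    end_snapshot_id = 6805965982898954472
  }
}
```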

use_snapshot_id [long]

Instructs this scan to use the given snapshot ID.

use_snapshot_timestamp [long]

Instructs this scan to use the most recent snapshot as of the given timestamp, in milliseconds since the Unix epoch.
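To read one fixed snapshot rather than a range of changes, either of these options can be set. An illustrative sketch (the snapshot ID and timestamp are placeholder values):

```
source {
  Iceberg {
    catalog_name = "seatunnel"
    catalog_type = "hadoop"
    warehouse = "hdfs://your_cluster/tmp/seatunnel/iceberg/"
    namespace = "your_iceberg_database"
    table = "your_iceberg_table"
    # pin the scan to one specific snapshot ...
    use_snapshot_id = 5781947118336215154
    # ... or, alternatively, use the latest snapshot as of a point in time:
    # use_snapshot_timestamp = 1683045600000
  }
}
```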

stream_scan_strategy [enum]

Starting strategy for stream mode execution. Defaults to FROM_LATEST_SNAPSHOT if no value is specified. The optional values are:

  • TABLE_SCAN_THEN_INCREMENTAL: Do a regular table scan then switch to the incremental mode.
  • FROM_LATEST_SNAPSHOT: Start incremental mode from the latest snapshot inclusive.
  • FROM_EARLIEST_SNAPSHOT: Start incremental mode from the earliest snapshot inclusive.
  • FROM_SNAPSHOT_ID: Start incremental mode from a snapshot with a specific id inclusive.
  • FROM_SNAPSHOT_TIMESTAMP: Start incremental mode from a snapshot with a specific timestamp inclusive.
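As an illustrative sketch, a stream job that starts incremental reading from a specific snapshot might look like the following; it assumes that FROM_SNAPSHOT_ID is paired with start_snapshot_id to identify the starting point (the snapshot ID is a placeholder):

```
source {
  Iceberg {
    catalog_name = "seatunnel"
    catalog_type = "hadoop"
    warehouse = "hdfs://your_cluster/tmp/seatunnel/iceberg/"
    namespace = "your_iceberg_database"
    table = "your_iceberg_table"
    # start incremental mode from this snapshot (inclusive)
    stream_scan_strategy = "FROM_SNAPSHOT_ID"
    start_snapshot_id = 5781947118336215154
  }
}
```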

common options

Source plugin common parameters; please refer to Source Common Options for details.

Example

simple

source {
  Iceberg {
    catalog_name = "seatunnel"
    catalog_type = "hadoop"
    warehouse = "hdfs://your_cluster/tmp/seatunnel/iceberg/"
    namespace = "your_iceberg_database"
    table = "your_iceberg_table"
  }
}

Or

source {
  Iceberg {
    catalog_name = "seatunnel"
    catalog_type = "hive"
    uri = "thrift://localhost:9083"
    warehouse = "hdfs://your_cluster/tmp/seatunnel/iceberg/"
    namespace = "your_iceberg_database"
    table = "your_iceberg_table"
  }
}

column projection

source {
  Iceberg {
    catalog_name = "seatunnel"
    catalog_type = "hadoop"
    warehouse = "hdfs://your_cluster/tmp/seatunnel/iceberg/"
    namespace = "your_iceberg_database"
    table = "your_iceberg_table"

    schema {
      fields {
        f2 = "boolean"
        f1 = "bigint"
        f3 = "int"
        f4 = "bigint"
      }
    }
  }
}
tip

To be compatible with different versions of Hadoop and Hive, the hive-exec and flink-shaded-hadoop-2 dependencies have provided scope in the project pom file. If you use the Flink engine, you may therefore need to add the following jar packages to the <FLINK_HOME>/lib directory. If you use the Spark engine integrated with Hadoop, you do not need to add them.

flink-shaded-hadoop-x-xxx.jar
hive-exec-xxx.jar
libfb303-xxx.jar

Some versions of the hive-exec package do not include libfb303-xxx.jar, so you may also need to import that jar package manually.

Changelog

2.2.0-beta 2022-09-26

  • Add Iceberg Source Connector

next version

  • [Feature] Support Hadoop3.x (3046)
  • [improve][api] Refactoring schema parse (4157)