Version: 2.3.1

Hive

Hive source connector

Description

Read data from Hive.

tip

To use this connector, make sure your Spark/Flink cluster is already integrated with Hive. The tested Hive version is 2.3.9.

If you use the SeaTunnel Engine, you need to put seatunnel-hadoop3-3.1.4-uber.jar and hive-exec-2.3.9.jar in the $SEATUNNEL_HOME/lib/ directory.

Key features

Reads all the data in a split in a single pollNext call. The splits that have been read are saved in the snapshot.

Options

| name                 | type   | required | default value |
|----------------------|--------|----------|---------------|
| table_name           | string | yes      | -             |
| metastore_uri        | string | yes      | -             |
| kerberos_principal   | string | no       | -             |
| kerberos_keytab_path | string | no       | -             |
| hdfs_site_path       | string | no       | -             |
| read_partitions      | list   | no       | -             |
| read_columns         | list   | no       | -             |
| common-options       |        | no       | -             |

table_name [string]

The target Hive table name, e.g. db1.table1.

metastore_uri [string]

The URI of the Hive metastore, e.g. thrift://namenode001:9083.

hdfs_site_path [string]

The path of hdfs-site.xml, used to load the HA configuration of the NameNodes.
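A minimal sketch of using this option; the hdfs-site.xml path below is an assumption, adjust it to your cluster layout:

Hive {
  table_name = "default.seatunnel_orc"
  metastore_uri = "thrift://namenode001:9083"
  # hypothetical location of the client-side HDFS config
  hdfs_site_path = "/etc/hadoop/conf/hdfs-site.xml"
}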

read_partitions [list]

The target partitions that the user wants to read from the Hive table. If this parameter is not set, all the data in the table is read.

Tip: Every partition in the partitions list must have the same directory depth. For example, if a Hive table has two partition keys, par1 and par2, then setting read_partitions = [par1=xxx, par1=yyy/par2=zzz] is illegal because the two entries have different depths.
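As a sketch, assuming a hypothetical table default.web_logs partitioned by dt and region, a legal read_partitions value lists entries of equal depth:

Hive {
  table_name = "default.web_logs"   # hypothetical partitioned table
  metastore_uri = "thrift://namenode001:9083"
  # both entries share the same directory depth: dt/region
  read_partitions = ["dt=2023-01-01/region=us", "dt=2023-01-01/region=eu"]
}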

kerberos_principal [string]

The principal for Kerberos authentication.

kerberos_keytab_path [string]

The keytab file path for Kerberos authentication.
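A sketch combining the two Kerberos options; the principal and keytab values below are placeholders, replace them with your own:

Hive {
  table_name = "default.seatunnel_orc"
  metastore_uri = "thrift://namenode001:9083"
  # hypothetical Kerberos credentials
  kerberos_principal = "hive/namenode001@EXAMPLE.COM"
  kerberos_keytab_path = "/etc/security/keytabs/hive.service.keytab"
}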

read_columns [list]

The list of columns to read from the data source; users can use it to implement field projection.
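For example, assuming the source table contains columns named id and name (illustrative names, not from this document), the following reads only those two fields:

Hive {
  table_name = "default.seatunnel_orc"
  metastore_uri = "thrift://namenode001:9083"
  # project only two columns; column names are hypothetical
  read_columns = ["id", "name"]
}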

common options

Source plugin common parameters; please refer to Source Common Options for details.

Example


Hive {
  table_name = "default.seatunnel_orc"
  metastore_uri = "thrift://namenode001:9083"
}

Changelog

2.2.0-beta 2022-09-26

  • Add Hive Source Connector

Next version

  • [Improve] Support kerberos authentication (3840)
  • Support user-defined partitions (3842)