# AmazonSqs
> Amazon SQS sink connector
## Supported Engines

> Spark<br/>
> Flink<br/>
> SeaTunnel Zeta<br/>
## Description

Write data to Amazon SQS.
## Key Features
## Parameters and Options
| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| url | String | Yes | - | The queue URL of Amazon SQS to write data to. |
| region | String | No | - | The AWS region of the SQS service. |
| format | String | No | json | Data format. The default format is json; optional formats are text, canal_json and debezium_json. If you use the json or text format, the default field delimiter is `,`; to customize the delimiter, add the `field_delimiter` option. If you use the canal format, please refer to [canal-json](../formats/canal-json.md) for details. If you use the debezium format, please refer to [debezium-json](../formats/debezium-json.md) for details. A configuration sketch using these options follows this table. |
| format_error_handle_way | String | No | fail | How to handle rows with data format errors. The default value is fail; optional values are fail and skip. With fail, a data format error throws an exception and stops the job. With skip, rows with format errors are skipped. |
| field_delimiter | String | No | , | Customize the field delimiter for the data format. |
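
For orientation, here is a minimal sink block that exercises the options above; the queue URL, region, and delimiter values are placeholders rather than values shipped with the connector.

```hocon
sink {
  AmazonSqs {
    # Placeholder queue URL that records are written to
    url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"
    # AWS region hosting the queue
    region = "us-east-1"
    # Emit each row as delimited text instead of the default json
    format = text
    field_delimiter = "|"
    # Skip rows that cannot be serialized instead of failing the job
    format_error_handle_way = skip
  }
}
```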
## Task Example
```hocon
source {
  FakeSource {
    schema = {
      fields {
        c_map = "map<string, string>"
        c_array = "array<int>"
        c_string = string
        c_boolean = boolean
        c_tinyint = tinyint
        c_smallint = smallint
        c_int = int
        c_bigint = bigint
        c_float = float
        c_double = double
        c_bytes = bytes
        c_date = date
        c_decimal = "decimal(38, 18)"
        c_timestamp = timestamp
        c_row = {
          c_map = "map<string, string>"
          c_array = "array<int>"
          c_string = string
          c_boolean = boolean
          c_tinyint = tinyint
          c_smallint = smallint
          c_int = int
          c_bigint = bigint
          c_float = float
          c_double = double
          c_bytes = bytes
          c_date = date
          c_decimal = "decimal(38, 18)"
          c_timestamp = timestamp
        }
      }
    }
    plugin_output = "fake"
  }
}
sink {
  AmazonSqs {
    url = "http://127.0.0.1:8000"
    region = "us-east-1"
    queue = "queueName"
    format = text
    field_delimiter = "|"  
  }
}
```
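
The example above shows only the `source` and `sink` blocks. A runnable SeaTunnel job config normally also carries an `env` block; the sketch below uses the standard `job.mode` and `parallelism` settings with illustrative values.

```hocon
env {
  # Illustrative values: run as a bounded batch job on a single parallel task
  job.mode = "BATCH"
  parallelism = 1
}
```

With the SeaTunnel Zeta engine, for example, the finished config file can then be submitted via `bin/seatunnel.sh --config <config-file>`.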
## Changelog
| Change | Commit | Version | 
|---|---|---|
| [Improve][Core] Unify the aws-sdk-v2 version to 2.31.30 (#9698) | https://github.com/apache/seatunnel/commit/41c251cc8a | 2.3.12 | 
| [Improve] restruct connector common options (#8634) | https://github.com/apache/seatunnel/commit/f3499a6eeb | 2.3.10 | 
| [improve] amazon sqs connector update (#8602) | https://github.com/apache/seatunnel/commit/c747e02a98 | 2.3.10 | 
| [Feature][Restapi] Allow metrics information to be associated to logical plan nodes (#7786) | https://github.com/apache/seatunnel/commit/6b7c53d03c | 2.3.9 | 
| [Feature][Kafka] Support multi-table source read (#5992) | https://github.com/apache/seatunnel/commit/60104602d1 | 2.3.6 | 
| [Improve][Common] Introduce new error define rule (#5793) | https://github.com/apache/seatunnel/commit/9d1b2582b2 | 2.3.4 | 
| [Improve] Remove use `SeaTunnelSink::getConsumedType` method and mark it as deprecated (#5755) | https://github.com/apache/seatunnel/commit/8de7408100 | 2.3.4 | 
| [Improve] Remove all useless `prepare`, `getProducedType` method (#5741) | https://github.com/apache/seatunnel/commit/ed94fffbb9 | 2.3.4 | 
| [Improve][Connector-V2] Change `amazonsqs` to `AmazonSqs` as connector identifier (#5742) | https://github.com/apache/seatunnel/commit/245705d0f7 | 2.3.4 | 
| [Feature][Connector-V2] Add connector amazonsqs (#5367) | https://github.com/apache/seatunnel/commit/7f75a8eafd | 2.3.4 |