import ChangeLog from '../changelog/connector-http.md';

# Http

> Http sink connector

## Support Those Engines

> Spark<br/>
> Flink<br/>
> SeaTunnel Zeta<br/>

## Key Features

## Description

Used to launch web hooks using data.

> For example, if the data from upstream is [`age: 12, name: tyrantlucifer`], the body content is the following: `{"age": 12, "name": "tyrantlucifer"}`

**Tips: The Http sink only supports POST JSON webhooks, and the data from the source will be treated as the body content of the webhook.**
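The row-to-body mapping can be sketched in Python (an illustration only, using the values from the example above; the connector itself is implemented in Java):

```python
import json

# An upstream row as field-name -> value pairs (values from the example above)
row = {"age": 12, "name": "tyrantlucifer"}

# The sink serializes the row to a JSON object and sends it as the POST body
body = json.dumps(row)
print(body)  # {"age": 12, "name": "tyrantlucifer"}
```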

## Supported DataSource Info

In order to use the Http connector, the following dependencies are required. They can be downloaded via install-plugin.sh or from the Maven central repository.

| Datasource | Supported Versions | Dependency |
|------------|--------------------|------------|
| Http       | universal          | Download   |

## Sink Options

| Name                        | Type    | Required | Default | Description                                                                                       |
|-----------------------------|---------|----------|---------|---------------------------------------------------------------------------------------------------|
| url                         | String  | Yes      | -       | Http request url                                                                                  |
| headers                     | Map     | No       | -       | Http headers                                                                                      |
| retry                       | Int     | No       | -       | The max retry times if the http request returns an IOException                                    |
| retry_backoff_multiplier_ms | Int     | No       | 100     | The retry-backoff multiplier (millis) if the http request failed                                  |
| retry_backoff_max_ms        | Int     | No       | 10000   | The maximum retry-backoff time (millis) if the http request failed                                |
| connect_timeout_ms          | Int     | No       | 12000   | Connection timeout setting, default 12s.                                                          |
| socket_timeout_ms           | Int     | No       | 60000   | Socket timeout setting, default 60s.                                                              |
| array_mode                  | Boolean | No       | false   | Send data as a JSON array when true, or as a single JSON object when false (default)              |
| batch_size                  | Int     | No       | 1       | The batch size of records to send in one HTTP request. Only works when array_mode is true.        |
| request_interval_ms         | Int     | No       | 0       | The interval in milliseconds between two HTTP requests, to avoid sending requests too frequently. |
| common-options              |         | No       | -       | Sink plugin common parameters, please refer to Sink Common Options for details                    |
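One way to read the retry options, sketched under the assumption of exponential backoff (the multiplier doubling per attempt, capped at `retry_backoff_max_ms`); this is an illustration, not the connector's actual Java implementation:

```python
def backoff_ms(attempt, multiplier_ms=100, max_ms=10000):
    # Hypothetical schedule: multiplier_ms * 2^attempt, capped at max_ms
    return min(multiplier_ms * (2 ** attempt), max_ms)

# Delays for the default retry_backoff_multiplier_ms / retry_backoff_max_ms
print([backoff_ms(a) for a in range(8)])
# [100, 200, 400, 800, 1600, 3200, 6400, 10000]
```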

## Example

simple:

```hocon
Http {
    url = "http://localhost/test/webhook"
    headers {
        token = "9e32e859ef044462a257e1fc76730066"
    }
}
```

With Batch Processing:

```hocon
Http {
    url = "http://localhost/test/webhook"
    headers {
        token = "9e32e859ef044462a257e1fc76730066"
        Content-Type = "application/json"
    }
    array_mode = true
    batch_size = 50
    request_interval_ms = 500
}
```
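The effect of `array_mode` and `batch_size` can be sketched as follows (helper names are hypothetical; `request_interval_ms` would simply add a pause between sends):

```python
import json

def to_batches(rows, batch_size):
    # With array_mode = true, rows are grouped into chunks of at most batch_size
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]

rows = [{"id": n} for n in range(5)]
# Each batch becomes one HTTP request whose body is a JSON array
bodies = [json.dumps(batch) for batch in to_batches(rows, 2)]
print(len(bodies))  # 3 requests for 5 rows with batch_size = 2
```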

## Multiple table

### example1

```hocon
env {
  parallelism = 1
  job.mode = "STREAMING"
  checkpoint.interval = 5000
}

source {
  Mysql-CDC {
    url = "jdbc:mysql://127.0.0.1:3306/seatunnel"
    username = "root"
    password = "******"

    table-names = ["seatunnel.role","seatunnel.user","galileo.Bucket"]
  }
}

transform {
}

sink {
  Http {
    ...
    url = "http://localhost/test/${database_name}_test/${table_name}_test"
  }
}
```
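The `${database_name}`/`${table_name}` placeholders in the url are filled per upstream table; a minimal sketch of that substitution (helper name hypothetical):

```python
def render_url(template, database_name, table_name):
    # Substitute the per-table placeholders into the configured url
    return (template
            .replace("${database_name}", database_name)
            .replace("${table_name}", table_name))

url = render_url("http://localhost/test/${database_name}_test/${table_name}_test",
                 "seatunnel", "role")
print(url)  # http://localhost/test/seatunnel_test/role_test
```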

### example2

```hocon
env {
  parallelism = 1
  job.mode = "BATCH"
}

source {
  Jdbc {
    driver = oracle.jdbc.driver.OracleDriver
    url = "jdbc:oracle:thin:@localhost:1521/XE"
    user = testUser
    password = testPassword

    table_list = [
      {
        table_path = "TESTSCHEMA.TABLE_1"
      },
      {
        table_path = "TESTSCHEMA.TABLE_2"
      }
    ]
  }
}

transform {
}

sink {
  Http {
    ...
    url = "http://localhost/test/${schema_name}_test/${table_name}_test"
  }
}
```

## Changelog

<ChangeLog />