import ChangeLog from '../changelog/connector-socket.md';
# Socket

> Socket source connector
## Support Those Engines

> Spark<br/>
> Flink<br/>
> SeaTunnel Zeta<br/>
## Description

Used to read data from Socket.
## Data Type Mapping

Socket data does not carry a specific type list; you indicate which SeaTunnel data type each field should be converted to by specifying a schema in the config (see the sketch after the table below).
| SeaTunnel Data type |
|---|
| STRING |
| SHORT |
| INT |
| BIGINT |
| BOOLEAN |
| DOUBLE |
| DECIMAL |
| FLOAT |
| DATE |
| TIME |
| TIMESTAMP |
| BYTES |
| ARRAY |
| MAP |
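As an illustration of the point above, the following is a minimal sketch of what a schema block in the source config could look like. The field names (`name`, `age`, `ts`) are made up for the example, and whether the Socket source honors a `schema` option should be verified against the SeaTunnel release you use:

```hocon
# Hypothetical schema block: declares the SeaTunnel data type for each incoming field
# (field names are illustrative only)
source {
  Socket {
    host = "localhost"
    port = 9999
    schema {
      fields {
        name = string
        age = int
        ts = timestamp
      }
    }
  }
}
```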
## Source Options

| Name           | Type    | Required | Default | Description                                                                          |
|----------------|---------|----------|---------|--------------------------------------------------------------------------------------|
| host           | String  | Yes      | _       | Socket server host                                                                   |
| port           | Integer | Yes      | _       | Socket server port                                                                   |
| common-options |         | No       | -       | Source plugin common parameters, please refer to Source Common Options for details. |
## How to Create a Socket Data Synchronization Job

The following example demonstrates how to create a data synchronization job that reads data from Socket and prints it on the local client:
```hocon
# Set the basic configuration of the task to be performed
env {
  parallelism = 1
  job.mode = "BATCH"
}

# Create a source to connect to socket
source {
  Socket {
    host = "localhost"
    port = 9999
  }
}

# Console printing of the read socket data
sink {
  Console {
    parallelism = 1
  }
}
```
* Start a port listening:

```shell
nc -l 9999
```
* Start a SeaTunnel task.
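A minimal sketch of submitting the job with the SeaTunnel Zeta launcher in local mode; the config file path used here is an assumed name for the example above, and the exact script and flags may differ by engine and version:

```shell
# Submit the example job with the Zeta engine in local mode
# (config path is hypothetical; point it at where you saved the example config)
./bin/seatunnel.sh --config ./config/socket_to_console.conf -m local
```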
* Send test data to the Socket source:
```shell
~ nc -l 9999
test
hello
flink
spark
```
* The Console sink prints the data read from the socket:

```shell
[test]
[hello]
[flink]
[spark]
```