import ChangeLog from '../changelog/connector-sls.md';
# Sls sink connector

> Supported engines: Spark, Flink, SeaTunnel Zeta

Sink connector for Aliyun SLS.

Write data to the Aliyun SLS log service.
To use the SLS connector, the following dependencies are required. They can be downloaded via install-plugin.sh or from the Maven central repository.
| Datasource | Supported Versions | Maven    |
|------------|--------------------|----------|
| Sls        | Universal          | Download |
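For Maven-based builds, the connector can be declared as a regular dependency. The coordinates below follow the naming convention of other SeaTunnel connector modules (`org.apache.seatunnel` / `connector-sls`) and are an assumption; check the Maven central repository entry linked above for the exact artifact and version.

```xml
<!-- Assumed coordinates, based on the SeaTunnel connector naming convention;
     verify the artifactId and pick a concrete version from Maven Central. -->
<dependency>
    <groupId>org.apache.seatunnel</groupId>
    <artifactId>connector-sls</artifactId>
    <version>${seatunnel.version}</version>
</dependency>
```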
| Name              | Type   | Required | Default          | Description                     |
|-------------------|--------|----------|------------------|---------------------------------|
| project           | String | Yes      | -                | Aliyun SLS project              |
| logstore          | String | Yes      | -                | Aliyun SLS logstore             |
| endpoint          | String | Yes      | -                | Aliyun SLS access endpoint      |
| access_key_id     | String | Yes      | -                | Aliyun AccessKey ID             |
| access_key_secret | String | Yes      | -                | Aliyun AccessKey secret         |
| source            | String | No       | SeaTunnel-Source | Data source marker in SLS       |
| topic             | String | No       | SeaTunnel-Topic  | Data topic marker in SLS        |
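The two optional options, `source` and `topic`, tag each written log entry so it can be filtered inside SLS. A minimal sketch of a sink block that overrides both defaults (the endpoint, project, and logstore values here are placeholders, not real resources):

```hocon
sink {
  Sls {
    endpoint = "cn-hangzhou-intranet.log.aliyuncs.com"  # placeholder endpoint
    project = "project1"                                # placeholder project
    logstore = "logstore1"                              # placeholder logstore
    access_key_id = "xxxx"
    access_key_secret = "xxxx"
    # Optional markers; defaults are SeaTunnel-Source / SeaTunnel-Topic
    source = "my-app"
    topic = "orders"
  }
}
```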
This example writes data to logstore1 in SLS. If you have not yet installed and deployed SeaTunnel, follow the instructions in Install SeaTunnel to install and deploy it. Then run this job as described in [Quick Start SeaTunnel Engine](../../Start-v2/locale/Quick-Start SeaTunnel Engine.md).
Create a RAM user and grant it permissions. Make sure the RAM user has sufficient permissions to read and manage data; see: RAM custom authorization examples.
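As one possible starting point, a RAM policy granting write access to a single logstore might look like the sketch below. The action and resource names follow the Aliyun SLS RAM policy format, but the project and logstore names are placeholders; consult the RAM custom authorization examples referenced above for an authoritative policy.

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["log:PostLogStoreLogs"],
      "Resource": ["acs:log:*:*:project/project1/logstore/logstore1"]
    }
  ]
}
```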
```hocon
# Defining the runtime environment
env {
  parallelism = 2
  job.mode = "STREAMING"
  checkpoint.interval = 30000
}

source {
  FakeSource {
    row.num = 10
    map.size = 10
    array.size = 10
    bytes.length = 10
    string.length = 10
    schema = {
      fields = {
        id = "int"
        name = "string"
        description = "string"
        weight = "string"
      }
    }
  }
}

sink {
  Sls {
    endpoint = "cn-hangzhou-intranet.log.aliyuncs.com"
    project = "project1"
    logstore = "logstore1"
    access_key_id = "xxxxxxxxxxxxxxxxxxxxxxxx"
    access_key_secret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  }
}
```