SeaTunnel was formerly named Waterdrop, and was renamed SeaTunnel on October 12, 2021.
SeaTunnel is an easy-to-use, high-performance, distributed data integration platform that supports real-time synchronization of massive amounts of data. It can synchronize tens of billions of records per day stably and efficiently, and is used in production by nearly 100 companies.
SeaTunnel aims to solve the problems commonly encountered when synchronizing massive data:
Source[Data Source Input] -> Transform[Data Processing] -> Sink[Result Output]
The data processing pipeline is composed of multiple filters to meet a variety of data processing needs. If you are accustomed to SQL, you can also construct a data processing pipeline directly with SQL, which is simple and efficient. The list of filters supported by SeaTunnel is still being expanded. Furthermore, you can develop your own data processing plug-ins, because the whole system is easy to extend.
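As a sketch of this pipeline model, a SeaTunnel job is described by a configuration file with `env`, `source`, `transform`, and `sink` blocks. The plugin names and options below (`Fake`, `sql`, `Console`) are illustrative; check the documentation of your SeaTunnel version for the exact plugin names and parameters.

```hocon
env {
  # Engine-level job settings (illustrative values)
  spark.app.name = "SeaTunnel-example"
  spark.executor.instances = 2
}

source {
  # Generate fake rows and register them as a temporary table
  Fake {
    result_table_name = "my_source"
  }
}

transform {
  # Process the data with plain SQL against the registered table
  sql {
    sql = "select * from my_source"
  }
}

sink {
  # Print the result to stdout
  Console {}
}
```

Each block maps directly onto the Source -> Transform -> Sink flow above, which is what makes a pipeline easy to assemble from existing plug-ins.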
| Spark Connector Plugins | Database Type | Source | Sink |
|-------------------------|---------------|--------|------|
| Batch                   | Fake          | doc    |      |
|                         | ElasticSearch | doc    | doc  |
|                         | File          | doc    | doc  |
|                         | Hive          | doc    | doc  |
|                         | Hudi          | doc    | doc  |
|                         | Jdbc          | doc    | doc  |
|                         | MongoDB       | doc    | doc  |
|                         | neo4j         |        | doc  |
|                         | Phoenix       | doc    | doc  |
|                         | Redis         | doc    | doc  |
|                         | Tidb          | doc    | doc  |
|                         | Clickhouse    |        | doc  |
|                         | Doris         |        | doc  |
|                         | Email         |        | doc  |
|                         | Hbase         | doc    | doc  |
|                         | Kafka         |        | doc  |
|                         | Console       |        | doc  |
|                         | Kudu          | doc    | doc  |
|                         | Redis         | doc    | doc  |
| Stream                  | FakeStream    | doc    |      |
|                         | KafkaStream   | doc    |      |
|                         | SocketStream  | doc    |      |
| Flink Connector Plugins | Source | Sink |
|-------------------------|--------|------|
| Druid                   | doc    | doc  |
| Fake                    | doc    |      |
| File                    | doc    | doc  |
| InfluxDb                | doc    | doc  |
| Jdbc                    | doc    | doc  |
| Kafka                   | doc    | doc  |
| Socket                  | doc    |      |
| Console                 |        | doc  |
| Doris                   |        | doc  |
| ElasticSearch           |        | doc  |
| Transform Plugins | Spark | Flink |
|-------------------|-------|-------|
| Add               |       |       |
| CheckSum          |       |       |
| Convert           |       |       |
| Date              |       |       |
| Drop              |       |       |
| Grok              |       |       |
| Json              |       | doc   |
| Kv                |       |       |
| Lowercase         |       |       |
| Remove            |       |       |
| Rename            |       |       |
| Repartition       |       |       |
| Replace           |       |       |
| Sample            |       |       |
| Split             | doc   | doc   |
| Sql               | doc   | doc   |
| Table             |       |       |
| Truncate          |       |       |
| Uppercase         |       |       |
| Uuid              |       |       |
Java Runtime Environment, Java >= 8
If you want to run SeaTunnel in a cluster environment, any common Spark (or Flink) cluster deployment is usable.
If the data volume is small, or you only need functional verification, you can also start SeaTunnel in local mode without a cluster environment, because SeaTunnel supports standalone operation. Note: SeaTunnel 2.0 supports running on both Spark and Flink.
Download the ready-to-run software package here: https://github.com/apache/incubator-seatunnel/releases
Spark https://seatunnel.apache.org/docs/spark/quick-start
Flink https://seatunnel.apache.org/docs/flink/quick-start
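As a sketch of local-mode startup, assuming the release package layout of SeaTunnel 2.x, a job can be launched on the Spark engine roughly as follows (the config file name is illustrative; script and flag names may differ between versions, so consult the quick-start guides above):

```shell
# From the unpacked SeaTunnel distribution directory:
./bin/start-seatunnel-spark.sh \
  --master 'local[4]' \
  --deploy-mode client \
  --config ./config/spark.batch.conf
```

`--master 'local[4]'` runs the job locally with four threads, which is why no cluster environment is needed for functional verification; pointing `--master` at a real Spark master instead submits the same job to a cluster.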
Detailed documentation on SeaTunnel: https://seatunnel.apache.org/docs/introduction
Weibo uses an internally customized version of SeaTunnel, together with its sub-project Guardian, to monitor hundreds of SeaTunnel-on-Yarn real-time streaming computing tasks.
Sina Data Operation Analysis Platform uses SeaTunnel to perform real-time and offline analysis of operations data for Sina News, CDN, and other services, and writes the results into ClickHouse.
Sogou Qiqian System uses SeaTunnel as an ETL tool to help build a real-time data warehouse.
Qutoutiao Data Center uses SeaTunnel to support MySQL-to-Hive offline ETL tasks and real-time Hive-to-ClickHouse backfill, covering most offline and real-time task needs well.
Yixia Technology, Yizhibo Data Platform
Yonghui Superstores Founders' Alliance-Yonghui Yunchuang Technology, Member E-commerce Data Analysis Platform
SeaTunnel provides real-time streaming and offline SQL computing of e-commerce user behavior data for Yonghui Life, a new retail brand of Yonghui Yunchuang Technology.
Shuidichou adopts SeaTunnel to do real-time streaming and regular offline batch processing on Yarn, processing an average of 3-4 TB of data daily, and then writing the data to ClickHouse.
Various logs from business services are collected into Apache Kafka; part of this data is consumed and extracted through SeaTunnel and then stored into ClickHouse.
For more use cases, please refer to: https://seatunnel.apache.org/blog
This project adheres to the Contributor Covenant code of conduct. By participating, you are expected to uphold this code. Please follow the REPORTING GUIDELINES to report unacceptable behavior.
Thanks to all developers!
To subscribe to the developer mailing list, send an email to dev-subscribe@seatunnel.apache.org and follow the instructions in the reply.

Various companies and organizations use SeaTunnel for research, production, and commercial products. Visit our website to find the user page.