This project is under active development and does not yet have a stable release. You are welcome to evaluate it.
Apache Spark is a stable, mature project that has been under development for many years. It is one of the best frameworks for scaling out to process petabyte-scale datasets. Over time, however, the Spark community has had to address performance challenges requiring various optimizations. As a key optimization in Spark 2.0, Whole-Stage Code Generation was introduced to replace the Volcano model, achieving roughly a 2x speedup. Since then, most optimizations have been made at the query-plan level, and the performance of individual operators has nearly stopped improving.
On the other hand, native SQL engines such as ClickHouse, Arrow, and Velox have been under development for several years. With features like native execution, columnar data formats, and vectorized data processing, these native engines can outperform Spark's JVM-based SQL engine. However, they only support single-node execution.
"Gluten" is Latin for "glue". The main goal of the Gluten project is to glue native engines to Spark SQL, so that we benefit from both the high scalability of the Spark SQL framework and the high performance of native engines.
The basic design principle is to reuse Spark's whole control flow and as much JVM code as possible, while offloading compute-intensive data processing to the native side. At a high level, Gluten does the following:
Gluten's target user is anyone who wants to fundamentally accelerate Spark SQL. As a plugin to Spark, Gluten doesn't require any changes to the DataFrame API or SQL queries; users only need to supply the correct configuration. See Gluten configuration properties here.
See the links below for more related information.
The overview chart is shown below. Substrait provides a well-defined, cross-language specification for data compute operations (see more details here). The Spark physical plan is transformed into a Substrait plan, which is then passed to the native side through a JNI call. On the native side, a chain of native operators is built and offloaded to the native engine. Gluten returns a ColumnarBatch to Spark, using the Spark Columnar API (available since Spark 3.0) at execution time. Gluten uses the Apache Arrow data format as its basic data format, so the data returned to the Spark JVM is an ArrowColumnarBatch.
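The flow above can be sketched in pseudocode. This is an illustrative sketch only; the names below do not correspond to actual Gluten classes or JNI entry points.

```
// JVM side: Spark plans the query as usual
physicalPlan  = spark.planQuery(sql)

// Gluten rewrites the physical plan into a Substrait plan
substraitPlan = toSubstrait(physicalPlan)

// The Substrait plan crosses the JNI boundary; the native side
// builds the corresponding operator chain on the chosen backend
nativePipeline = jni.buildPipeline(substraitPlan)

// The native engine executes and hands batches back to the JVM
// in Arrow format, wrapped as ArrowColumnarBatch for Spark
while (batch = jni.nextBatch(nativePipeline)) {
    emit ArrowColumnarBatch(batch)
}
```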
There are several key components in Gluten:
Here is a basic configuration to enable Gluten in Spark.
```shell
export GLUTEN_JAR=/PATH/TO/GLUTEN_JAR
spark-shell \
  --master yarn --deploy-mode client \
  --conf spark.plugins=org.apache.gluten.GlutenPlugin \
  --conf spark.memory.offHeap.enabled=true \
  --conf spark.memory.offHeap.size=20g \
  --conf spark.driver.extraClassPath=${GLUTEN_JAR} \
  --conf spark.executor.extraClassPath=${GLUTEN_JAR} \
  --conf spark.shuffle.manager=org.apache.spark.shuffle.sort.ColumnarShuffleManager \
  ...
```
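Equivalently, for a standard Spark deployment, the same settings can be placed in `conf/spark-defaults.conf` so every session picks them up. The jar path and off-heap size below are placeholders; adjust them to your environment.

```properties
# conf/spark-defaults.conf — same settings as the spark-shell example above
spark.plugins                  org.apache.gluten.GlutenPlugin
spark.memory.offHeap.enabled   true
spark.memory.offHeap.size      20g
spark.driver.extraClassPath    /PATH/TO/GLUTEN_JAR
spark.executor.extraClassPath  /PATH/TO/GLUTEN_JAR
spark.shuffle.manager          org.apache.spark.shuffle.sort.ColumnarShuffleManager
```

Note that `spark.memory.offHeap.size` must be enabled and sized appropriately, since the native engine allocates its working memory off-heap.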
There are two ways to acquire the Gluten jar for the above configuration.
Please download a tar package here, then extract the Gluten jar from it. The package has been verified on CentOS 7, CentOS 8, Ubuntu 20.04, and Ubuntu 22.04.
For Velox backend, please refer to Velox.md and build-guide.md.
For the ClickHouse backend, please refer to ClickHouse.md. The ClickHouse backend is developed by Kyligence; please visit https://github.com/Kyligence/ClickHouse for more information.
The Gluten jar will be generated under /PATH/TO/GLUTEN/package/target/ after the build.
Welcome to contribute to Gluten project! See CONTRIBUTING.md about how to make contributions.
Gluten became an Apache incubator project in March 2024. Here are several ways to contact us:
You are welcome to report any issue or start any discussion related to Gluten on GitHub. Please search the GitHub issue list before creating a new one to avoid duplicates.
For any technical discussion, please send email to dev@gluten.apache.org. Historical discussions are available in the archives. Please click here to subscribe to the mailing list.
Please click here to get an invitation to the ASF Slack workspace, where you can find the "incubator-gluten" channel.
The ASF Slack login entry: https://the-asf.slack.com/.
For developers and users in the PRC, please contact weitingchen at apache.org or zhangzc at apache.org to be invited to the WeChat group.
We use Decision Support Benchmark1 (TPC-H like), a query set modified from the TPC-H benchmark, to evaluate Gluten's performance. Velox testing uses the Parquet file format and ClickHouse testing uses the MergeTree file format, with vanilla Spark on Parquet as the baseline. See Decision Support Benchmark1.
Test environment: a single node with 2TB of data; Spark 3.3.2 for both baseline and Gluten. The Decision Support Benchmark1 results (tested in June 2023) show an overall speedup of 2.71x, and up to 14.53x on a single query, with the Gluten Velox backend.
Test environment: an 8-node AWS cluster with 1TB of data; Spark 3.1.1 for both baseline and Gluten. The Decision Support Benchmark1 results show an average speedup of 2.12x, and up to 3.48x, with the Gluten ClickHouse backend.
The Qualification Tool is a utility to analyze Spark event log files and assess the compatibility and performance of SQL workloads with Gluten. This tool helps users understand how their workloads can benefit from Gluten.
To use the Qualification Tool, follow the instructions in its README.
```shell
java -jar target/qualification-tool-1.3.0-SNAPSHOT-jar-with-dependencies.jar -f /path/to/eventlog
```
For detailed usage instructions and advanced options, see the Qualification Tool README.
Gluten is licensed under the Apache 2.0 license.
Gluten was initiated by Intel and Kyligence in 2022. Several other companies actively participate in its development, including BIGO, Meituan, Alibaba Cloud, NetEase, Baidu, Microsoft, IBM, and Google.