Firestorm is a Remote Shuffle Service that enables Apache Spark applications to store shuffle data on remote servers.
Firestorm consists of a coordinator cluster, a shuffle server cluster and, if necessary, remote storage (e.g., HDFS).
The coordinator collects the status of the shuffle servers and assigns shuffle servers to jobs.
Shuffle servers receive shuffle data, merge it and write it to storage.
Depending on the situation, Firestorm supports Memory & Local, Memory & Remote Storage (e.g., HDFS), Local only, and Remote Storage only.
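The supported modes follow a simple naming convention: each underscore-separated token in a storage type is one storage tier. A minimal sketch of that reading (the helper name is ours, not part of Firestorm):

```python
def storage_tiers(storage_type):
    """Split a storage type such as 'MEMORY_LOCALFILE_HDFS' into its tiers,
    following the naming convention of the documented modes.
    Illustrative helper only, not actual Firestorm code."""
    return storage_type.split("_")

# e.g. MEMORY_LOCALFILE -> in-memory buffering plus a local-disk tier
print(storage_tiers("MEMORY_LOCALFILE"))
```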
The Spark driver asks the coordinator for the shuffle servers to use in the shuffle process.
Spark tasks write shuffle data to the shuffle servers in several steps.
Depending on the storage type, Spark tasks read shuffle data from the shuffle servers, from remote storage, or from both.
Shuffle data is stored as an index file and a data file. The data file contains all blocks for a specific partition, and the index file contains the metadata for every block.
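The index/data split can be sketched as follows. The exact binary layout of Firestorm's index records is not specified here, so the fixed-width record below (offset, length, blockId as 8-byte big-endian longs) is an illustrative assumption, not the real on-disk format:

```python
import struct
from io import BytesIO

# Hypothetical index record layout: (offset, length, blockId), three
# 8-byte big-endian longs. Firestorm's real format may differ; this only
# illustrates resolving partition blocks in the data file via the index.
RECORD = struct.Struct(">qqq")

def read_blocks(index_bytes, data_bytes):
    """Return {blockId: payload} by resolving each index record
    against the data file contents."""
    blocks = {}
    data = BytesIO(data_bytes)
    for i in range(0, len(index_bytes), RECORD.size):
        offset, length, block_id = RECORD.unpack_from(index_bytes, i)
        data.seek(offset)
        blocks[block_id] = data.read(length)
    return blocks
```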
Currently supports Spark 2.3.x, Spark 2.4.x, Spark 3.0.x, Spark 3.1.x and Spark 3.2.x.
Note: to support dynamic allocation, the patch (included in the client-spark/patch folder) should be applied to Spark.
Firestorm is built using Apache Maven. To build it, run:
mvn -DskipTests clean package
To package Firestorm, run:
./build_distribution.sh
rss-xxx.tgz will be generated for deployment
JAVA_HOME=<java_home> HADOOP_HOME=<hadoop home> XMX_SIZE="16g"
rss.rpc.server.port 19999
rss.jetty.http.port 19998
rss.coordinator.server.heartbeat.timeout 30000
rss.coordinator.app.expired 60000
rss.coordinator.shuffle.nodes.max 5
rss.coordinator.exclude.nodes.file.path RSS_HOME/conf/exclude_nodes
bash RSS_HOME/bin/start-coordinator.sh
JAVA_HOME=<java_home> HADOOP_HOME=<hadoop home> XMX_SIZE="80g"
rss.rpc.server.port 19999
rss.jetty.http.port 19998
rss.rpc.executor.size 2000
rss.storage.type MEMORY_LOCALFILE
rss.coordinator.quorum <coordinatorIp1>:19999,<coordinatorIp2>:19999
rss.storage.basePath /data1/rssdata,/data2/rssdata....
rss.server.flush.thread.alive 5
rss.server.flush.threadPool.size 10
rss.server.buffer.capacity 40g
rss.server.read.buffer.capacity 20g
rss.server.heartbeat.timeout 60000
rss.server.heartbeat.interval 10000
rss.rpc.message.max.size 1073741824
rss.server.preAllocation.expired 120000
rss.server.commit.timeout 600000
rss.server.app.expired.withoutHeartbeat 120000
bash RSS_HOME/bin/start-shuffle-server.sh
Add the client jar to the Spark classpath, e.g., SPARK_HOME/jars/
The jar for Spark2 is located in <RSS_HOME>/jars/client/spark2/rss-client-XXXXX-shaded.jar
The jar for Spark3 is located in <RSS_HOME>/jars/client/spark3/rss-client-XXXXX-shaded.jar
Update the Spark conf to enable Firestorm; the following example is for local storage only:
spark.shuffle.manager org.apache.spark.shuffle.RssShuffleManager
spark.rss.coordinator.quorum <coordinatorIp1>:19999,<coordinatorIp2>:19999
spark.rss.storage.type MEMORY_LOCALFILE
To support Spark dynamic allocation with Firestorm, the Spark code should be updated. Two patches, for spark-2.4.6 and spark-3.1.2, are provided in the spark-patches folder for reference.
After applying the patch and rebuilding Spark, add the following configuration to the Spark conf to enable dynamic allocation:
spark.shuffle.service.enabled false
spark.dynamicAllocation.enabled true
The important configuration options are listed below.
Property Name | Default | Description |
---|---|---|
rss.coordinator.server.heartbeat.timeout | 30000 | Timeout (ms) if no heartbeat is received from a shuffle server |
rss.coordinator.assignment.strategy | BASIC | Strategy for assigning shuffle servers; only BASIC is supported |
rss.coordinator.app.expired | 60000 | Application expired time (ms); the heartbeat interval should be less than this |
rss.coordinator.shuffle.nodes.max | 9 | The max number of shuffle servers used for an assignment |
rss.coordinator.exclude.nodes.file.path | - | The path of the configuration file that lists excluded nodes |
rss.coordinator.exclude.nodes.check.interval.ms | 60000 | Update interval (ms) for the excluded nodes file |
rss.rpc.server.port | - | RPC port for the coordinator |
rss.jetty.http.port | - | HTTP port for the coordinator |
Property Name | Default | Description |
---|---|---|
rss.coordinator.quorum | - | Coordinator quorum |
rss.rpc.server.port | - | RPC port for the shuffle server |
rss.jetty.http.port | - | HTTP port for the shuffle server |
rss.server.buffer.capacity | - | Max memory of the buffer manager for the shuffle server |
rss.server.memory.shuffle.highWaterMark.percentage | 75.0 | Threshold for spilling data to storage, as a percentage of rss.server.buffer.capacity |
rss.server.memory.shuffle.lowWaterMark.percentage | 25.0 | Threshold for keeping data in memory, as a percentage of rss.server.buffer.capacity |
rss.server.read.buffer.capacity | - | Max size of the buffer for reading data |
rss.server.heartbeat.interval | 10000 | Heartbeat interval to the coordinator (ms) |
rss.server.flush.threadPool.size | 10 | Thread pool size for flushing data to files |
rss.server.commit.timeout | 600000 | Timeout (ms) when committing shuffle data |
rss.storage.type | - | Supports MEMORY_LOCALFILE, MEMORY_HDFS, MEMORY_LOCALFILE_HDFS |
rss.server.flush.cold.storage.threshold.size | 64M | The threshold of data size for choosing between LOCALFILE and HDFS when MEMORY_LOCALFILE_HDFS is used |
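The watermark and cold-storage settings in the table above interact roughly as follows: spilling starts once buffered shuffle data passes the high watermark, memory is flushed back down toward the low watermark, and under MEMORY_LOCALFILE_HDFS a flush event is routed to HDFS or local file by the threshold size. A minimal sketch under those assumed semantics (function names are ours, not Firestorm's):

```python
# Values taken from the defaults documented above.
HIGH_WATERMARK = 0.75              # highWaterMark.percentage / 100
LOW_WATERMARK = 0.25               # lowWaterMark.percentage / 100
COLD_THRESHOLD = 64 * 1024 * 1024  # flush.cold.storage.threshold.size (64M)

def should_spill(used_bytes, capacity_bytes):
    # Start flushing once buffered data exceeds the high watermark.
    return used_bytes > capacity_bytes * HIGH_WATERMARK

def spill_target(capacity_bytes):
    # Flush until usage falls back to the low watermark.
    return capacity_bytes * LOW_WATERMARK

def choose_storage(event_bytes):
    # Assumed routing under MEMORY_LOCALFILE_HDFS: large flush events
    # go to HDFS (cold storage), small ones to local disk.
    return "HDFS" if event_bytes >= COLD_THRESHOLD else "LOCALFILE"
```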
Property Name | Default | Description |
---|---|---|
spark.rss.writer.buffer.size | 3m | Buffer size for a single partition's data |
spark.rss.writer.buffer.spill.size | 128m | Buffer size limit for total partition data |
spark.rss.coordinator.quorum | - | Coordinator quorum |
spark.rss.storage.type | - | Supports MEMORY_LOCAL, MEMORY_HDFS, LOCALFILE, HDFS, LOCALFILE_HDFS |
spark.rss.client.send.size.limit | 16m | The max data size sent to the shuffle server |
spark.rss.client.read.buffer.size | 32m | The max data size read from storage |
spark.rss.client.send.threadPool.size | 10 | The thread pool size for sending shuffle data to shuffle servers |
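On the client side, the buffer options above roughly form a pipeline: per-partition buffers fill up to the writer buffer size, spill once the total passes the spill size, and spilled data is sent in requests no larger than the send size limit. A minimal sketch of the chunking step (the helper name is ours, not Firestorm's):

```python
SEND_SIZE_LIMIT = 16 * 1024 * 1024  # spark.rss.client.send.size.limit (16m)

def split_into_requests(payload, limit=SEND_SIZE_LIMIT):
    """Split a spilled buffer into chunks no larger than the send size
    limit, illustrating why an oversized payload becomes several RPCs."""
    return [payload[i:i + limit] for i in range(0, len(payload), limit)]
```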
Firestorm is under the Apache License Version 2.0. See the LICENSE file for details.
For more information about contributing issues or pull requests, see the Firestorm Contributing Guide.