[#1576] feat(doc): server deploy guide without hadoop-home env (#1577)
### What changes were proposed in this pull request?
Provide a Uniffle server deployment guide for machines without a Hadoop environment.
### Why are the changes needed?
Building on #1379 and #1370, we can set up the Uniffle shuffle server without a Hadoop environment.
This will simplify the quick start process.
Fix: #1576
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Existing tests
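As context for the doc change in this patch, a minimal sketch of the two server config files for the no-`HADOOP_HOME` case. The property names follow the existing `docs/server_guide.md` examples; the ports, paths, and coordinator address are illustrative assumptions, not values from this PR:

```
# conf/rss-env.sh -- HADOOP_HOME deliberately omitted for the MEMORY_LOCALFILE case
JAVA_HOME=<java home>
XMX_SIZE="80g"

# conf/server.conf
rss.rpc.server.port 19999
rss.jetty.http.port 19998
rss.storage.type MEMORY_LOCALFILE
rss.storage.basePath /data/rssdata
rss.coordinator.quorum <coordinator ip>:19999
```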
diff --git a/client-spark/common/src/test/java/org/apache/uniffle/shuffle/manager/RssShuffleManagerBaseTest.java b/client-spark/common/src/test/java/org/apache/uniffle/shuffle/manager/RssShuffleManagerBaseTest.java
index da45c82..fffb7af 100644
--- a/client-spark/common/src/test/java/org/apache/uniffle/shuffle/manager/RssShuffleManagerBaseTest.java
+++ b/client-spark/common/src/test/java/org/apache/uniffle/shuffle/manager/RssShuffleManagerBaseTest.java
@@ -63,6 +63,8 @@
@Test
public void testGetDefaultRemoteStorageInfo() {
SparkConf sparkConf = new SparkConf();
+ sparkConf.set(
+ "spark." + RssClientConf.RSS_CLIENT_REMOTE_STORAGE_USE_LOCAL_CONF_ENABLED.key(), "false");
RemoteStorageInfo remoteStorageInfo =
RssShuffleManagerBase.getDefaultRemoteStorageInfo(sparkConf);
assertTrue(remoteStorageInfo.getConfItems().isEmpty());
diff --git a/docs/server_guide.md b/docs/server_guide.md
index c5b0d75..9d65859 100644
--- a/docs/server_guide.md
+++ b/docs/server_guide.md
@@ -32,6 +32,11 @@
HADOOP_HOME=<hadoop home>
XMX_SIZE="80g"
```
+
+ For the following cases, you don't need to specify `HADOOP_HOME`, which simplifies server deployment:
+ 1. using a storage type without HDFS, such as `MEMORY_LOCALFILE`
+ 2. using HDFS with a distribution that bundles the Hadoop jars, built like this: `./build_distribution.sh --hadoop-profile 'hadoop3.2' -Phadoop-dependencies-included`. In this case, you also need to explicitly set `spark.rss.client.remote.storage.useLocalConfAsDefault=true`
+
3. update RSS_HOME/conf/server.conf, eg,
```
rss.rpc.server.port 19999
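For the HDFS-with-bundled-jars case, the client side then needs the flag mentioned in the doc change. A hedged sketch of a `spark-defaults.conf` fragment, assuming the standard Spark property-file syntax:

```
# enable using the server's local Hadoop conf as the default remote storage conf
spark.rss.client.remote.storage.useLocalConfAsDefault true
```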