tag       cc77eb2194b1ee230c8f22821d0abbb53d13b82e (0.10.1)
tagger    sivabalan <n.siva.b@gmail.com>  Thu Jan 13 09:05:05 2022 -0500
commit    ab3f9fde98c05a75703022de15dacd87414deb7c
author    sivabalan narayanan <n.siva.b@gmail.com>  Thu Jan 13 07:46:40 2022 -0500
committer sivabalan narayanan <n.siva.b@gmail.com>  Thu Jan 13 07:46:40 2022 -0500
tree      3f8c8187a89df7768f2c59009a58544c744d35b9
parent    91253ef05d2af82df2d530c847cd1440956b95e8

Bumping release candidate number 1 for 0.10.1
Apache Hudi (pronounced Hoodie) stands for Hadoop Upserts Deletes and Incrementals. Hudi manages the storage of large analytical datasets on DFS (cloud stores, HDFS, or any Hadoop FileSystem-compatible storage).
Hudi supports three types of queries:

* **Snapshot Query** - Provides snapshot queries on real-time data, using a combination of columnar and row-based storage (e.g. Parquet + Avro).
* **Incremental Query** - Provides a change stream with records inserted or updated after a point in time.
* **Read Optimized Query** - Provides excellent snapshot query performance via purely columnar storage (e.g. Parquet).
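As an illustration of how these query types are selected at read time, the Spark DataSource exposes the `hoodie.datasource.query.type` option; the values below are a sketch and should be checked against the configuration reference for your Hudi version:

```properties
# Spark DataSource read option selecting the Hudi query type:
hoodie.datasource.query.type=snapshot        # latest committed view of the table (default)
hoodie.datasource.query.type=incremental     # records changed since a given commit time
hoodie.datasource.query.type=read_optimized  # columnar base files only (Merge-On-Read tables)
```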
Learn more about Hudi at https://hudi.apache.org
Prerequisites for building Apache Hudi:

* Unix-like system (like Linux, Mac OS X)
* Java 8
* Git
* Maven
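Before building, it can help to sanity-check that the required tools are on `PATH` (a minimal sketch, not part of the official build instructions):

```shell
# Check that each build tool is available on PATH
for tool in git java mvn; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```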
```bash
# Checkout code and build
git clone https://github.com/apache/hudi.git && cd hudi
mvn clean package -DskipTests

# Start command
spark-2.4.4-bin-hadoop2.7/bin/spark-shell \
  --jars `ls packaging/hudi-spark-bundle/target/hudi-spark-bundle_2.11-*.*.*-SNAPSHOT.jar` \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
```
To build the Javadoc for all Java and Scala classes:
```bash
# Javadoc generated under target/site/apidocs
mvn clean javadoc:aggregate -Pjavadocs
```
The default Scala version supported is 2.11. To build for Scala 2.12, use the `scala-2.12` profile:

```bash
mvn clean package -DskipTests -Dscala-2.12
```
The default Spark version supported is 2.4.4. To build for different Spark 3 versions, use the corresponding profile:

```bash
# Build against Spark 3.1.2 (the default build shipped with the public Spark 3 bundle)
mvn clean package -DskipTests -Dspark3

# Build against Spark 3.0.3
mvn clean package -DskipTests -Dspark3.0.x
```
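As a rough guide to what each profile produces, the bundle jar name reflects the chosen Scala/Spark combination. The artifact paths below are best-effort assumptions from the module layout, so verify them against your actual build output:

```
packaging/hudi-spark-bundle/target/hudi-spark-bundle_2.11-<version>.jar    # default (Spark 2.4.x, Scala 2.11)
packaging/hudi-spark-bundle/target/hudi-spark-bundle_2.12-<version>.jar    # -Dscala-2.12
packaging/hudi-spark3-bundle/target/hudi-spark3-bundle_2.12-<version>.jar  # -Dspark3 (Spark 3 builds use Scala 2.12)
```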
The default hudi jar bundles the spark-avro module. To build without the spark-avro module, use the `spark-shade-unbundle-avro` profile:

```bash
# Checkout code and build
git clone https://github.com/apache/hudi.git && cd hudi
mvn clean package -DskipTests -Pspark-shade-unbundle-avro

# Start command
spark-2.4.4-bin-hadoop2.7/bin/spark-shell \
  --packages org.apache.spark:spark-avro_2.11:2.4.4 \
  --jars `ls packaging/hudi-spark-bundle/target/hudi-spark-bundle_2.11-*.*.*-SNAPSHOT.jar` \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
```
Unit tests can be run with the `unit-tests` Maven profile:

```bash
mvn -Punit-tests test
```
Functional tests, which are tagged with `@Tag("functional")`, can be run with the `functional-tests` Maven profile:

```bash
mvn -Pfunctional-tests test
```
To run tests with Spark event logging enabled, define the Spark event log directory. This allows visualizing the test DAG and stages using the Spark History Server UI.

```bash
mvn -Punit-tests test -DSPARK_EVLOG_DIR=/path/for/spark/event/log
```
Please visit https://hudi.apache.org/docs/quick-start-guide.html to quickly explore Hudi's capabilities using `spark-shell`.