commit 8729c449eace1852231c2e6e52e62d0692e09fc9
author:    Dezhi Cai <caidezhi655@foxmail.com>  Thu Feb 18 13:46:13 2021 +0800
committer: GitHub <noreply@github.com>  Wed Feb 17 21:46:13 2021 -0800
tree:      69c1ce82fd7e33976e11ec3ab3246a30a9c24308
parent:    41202da7788193da77f1ae4b784127bb93eaae2c

    Removing spring repos from pom (#2481) (#2550)

    - These are being deprecated
    - Causes build issues when .m2 does not have this cached already

    Co-authored-by: vinoth chandar <vinothchandar@users.noreply.github.com>
Apache Hudi (Incubating) (pronounced "hoodie") stands for Hadoop Upserts Deletes and Incrementals. Hudi manages the storage of large analytical datasets on DFS (cloud stores, HDFS, or any Hadoop FileSystem compatible storage).
Hudi supports three types of queries:

* **Snapshot Queries** - Queries see the latest snapshot of the table, merging columnar base files with any pending row-based log files.
* **Incremental Queries** - Queries see only records written to the table since a given commit/compaction, providing a change stream.
* **Read Optimized Queries** - Queries see the latest snapshot of the table as of the most recent compaction, using purely columnar storage for fast scans.
Learn more about Hudi at https://hudi.apache.org
Prerequisites for building Apache Hudi:

* Unix-like system (like Linux, Mac OS X)
* Java 8
* Maven
```
# Checkout code and build
git clone https://github.com/apache/incubator-hudi.git && cd incubator-hudi
mvn clean package -DskipTests -DskipITs
```
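For faster iteration during development, Maven can build a single module together with the modules it depends on, rather than the whole project. A minimal sketch, assuming `hudi-common` as an illustrative module name (check the `<modules>` list in the root `pom.xml` for the actual names):

```shell
# Build only the named module (-pl) and the modules it depends on (-am),
# skipping unit and integration tests for speed.
# "hudi-common" is an example; substitute the module you are working on.
mvn clean package -DskipTests -DskipITs -pl hudi-common -am
```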
To build the Javadoc for all Java and Scala classes:
```
# Javadoc generated under target/site/apidocs
mvn clean javadoc:aggregate -Pjavadocs
```
The default Scala version supported is 2.11. To build for Scala 2.12, use the scala-2.12 profile:

```
mvn clean package -DskipTests -DskipITs -Dscala-2.12
```
Please visit https://hudi.apache.org/docs/quick-start-guide.html to quickly explore Hudi's capabilities using spark-shell.
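As a sketch of what that quick start looks like, spark-shell can be launched with the Hudi Spark bundle pulled in via `--packages`. The coordinates and versions below are assumptions for illustration; use the ones from the quick-start guide that match your Spark, Scala, and Hudi versions:

```shell
# Launch spark-shell with the Hudi bundle on the classpath.
# Artifact versions here (0.7.0, Spark 2.4.4, Scala 2.11) are examples only.
spark-shell \
  --packages org.apache.hudi:hudi-spark-bundle_2.11:0.7.0,org.apache.spark:spark-avro_2.11:2.4.4 \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
```

Kryo serialization is required because Hudi's internal record payloads are not Java-serializable by default.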