commit    6dcd0a3524fe7be0bbbd3e673ed7e1d4b035e0cb
author    Balaji Varadarajan <varadarb@uber.com>  Tue Jun 02 01:49:37 2020 -0700
committer Sivabalan Narayanan <sivabala@uber.com> Sun Jun 07 13:32:00 2020 -0400
tree      93f1c42fe98502e1777bb11c394de9dfc39882f1
parent    949941197df728d8bf811acd2a813332d2719e4b
[HUDI-988] Fix unit test flakiness: ensure all instantiations of HoodieWriteClient are closed properly. Fix bug in TestRollbacks. Make Hudi CLI unit tests skip rendering strings.
Apache Hudi (pronounced Hoodie) stands for Hadoop Upserts Deletes and Incrementals. Hudi manages the storage of large analytical datasets on DFS (cloud stores, HDFS, or any Hadoop FileSystem compatible storage).
Hudi supports three types of queries:

* **Snapshot Queries**: query the latest snapshot of the table, merging base and delta files on the fly.
* **Incremental Queries**: query only the records that changed since a given commit.
* **Read Optimized Queries**: query only the latest compacted columnar base files, trading data freshness for query performance.
Learn more about Hudi at https://hudi.apache.org
Prerequisites for building Apache Hudi:

* Unix-like system (like Linux, Mac OS X)
* Java 8
* Git
* Maven
```bash
# Checkout code and build
git clone https://github.com/apache/hudi.git && cd hudi
mvn clean package -DskipTests -DskipITs
```
To build the Javadoc for all Java and Scala classes:
```bash
# Javadoc generated under target/site/apidocs
mvn clean javadoc:aggregate -Pjavadocs
```
The default Scala version supported is 2.11. To build for Scala 2.12, build using the `scala-2.12` profile:

```bash
mvn clean package -DskipTests -DskipITs -Dscala-2.12
```
Please visit https://hudi.apache.org/docs/quick-start-guide.html to quickly explore Hudi's capabilities using spark-shell.
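As a rough illustration of what the quick-start covers, the sketch below upserts a small DataFrame as a Hudi table from spark-shell and reads it back with a snapshot query. The table name, base path, sample data, and bundle version here are made up for illustration; consult the quick-start guide for the options that match your Spark and Scala build.

```scala
// Launch spark-shell with the Hudi Spark bundle, e.g. (version illustrative):
//   spark-shell --packages org.apache.hudi:hudi-spark-bundle_2.11:0.5.3 \
//     --conf "spark.serializer=org.apache.spark.serializer.KryoSerializer"

import org.apache.spark.sql.SaveMode
import spark.implicits._

val tableName = "hudi_trips"                 // hypothetical table name
val basePath  = s"file:///tmp/$tableName"    // hypothetical base path

// Sample data; record key, precombine, and partition path fields are
// required Hudi write options.
val df = Seq(
  ("id-1", "rider-A", 27.70, "sf"),
  ("id-2", "rider-B", 33.90, "nyc")
).toDF("uuid", "rider", "fare", "city")

df.write.format("hudi").
  option("hoodie.datasource.write.recordkey.field", "uuid").
  option("hoodie.datasource.write.precombine.field", "fare").
  option("hoodie.datasource.write.partitionpath.field", "city").
  option("hoodie.table.name", tableName).
  mode(SaveMode.Overwrite).
  save(basePath)

// Snapshot query: read the latest state of the table back
spark.read.format("hudi").load(basePath + "/*").show()
```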