tag       9b9bb8b639489c4eb072cb34d5ec55ed1387804e
tagger    Sivabalan Narayanan <sivabala@uber.com>  Tue Jun 02 22:15:30 2020 -0400
object    5fcc461647e197e805836c6aea24e9df8c09cf0f
0.5.3
commit    5fcc461647e197e805836c6aea24e9df8c09cf0f
author    Sivabalan Narayanan <sivabala@uber.com>  Tue Jun 02 12:00:46 2020 -0400
committer Sivabalan Narayanan <sivabala@uber.com>  Tue Jun 02 12:00:46 2020 -0400
tree      bda9b60aad1283c52070321a3bb07b8f1b6fe778
parent    949941197df728d8bf811acd2a813332d2719e4b
Bumping release candidate number 1
Apache Hudi (pronounced Hoodie) stands for Hadoop Upserts Deletes and Incrementals. Hudi manages the storage of large analytical datasets on DFS (cloud stores, HDFS, or any Hadoop FileSystem compatible storage).
Hudi supports three types of queries: snapshot queries, incremental queries, and read-optimized queries.
Learn more about Hudi at https://hudi.apache.org
Prerequisites for building Apache Hudi: a Unix-like system (e.g. Linux, Mac OS X), Java 8, and Maven.
# Checkout code and build
git clone https://github.com/apache/hudi.git && cd hudi
mvn clean package -DskipTests -DskipITs
To build the Javadoc for all Java and Scala classes:
# Javadoc generated under target/site/apidocs
mvn clean javadoc:aggregate -Pjavadocs
The default Scala version supported is 2.11. To build for Scala 2.12, use the scala-2.12 profile:
mvn clean package -DskipTests -DskipITs -Dscala-2.12
Please visit https://hudi.apache.org/docs/quick-start-guide.html to quickly explore Hudi's capabilities using spark-shell.
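As a taste of what the quick-start guide covers, the spark-shell session below sketches upserting and reading back a small Hudi table. The table name, path, and sample data are illustrative, and the snippet assumes a spark-shell launched with the matching hudi-spark-bundle on the classpath; consult the quick-start guide for the exact launch command and option keys for your version.

```scala
// Assumes spark-shell was started with a Hudi Spark bundle, e.g. (versions illustrative):
//   spark-shell --packages org.apache.hudi:hudi-spark-bundle_2.11:0.5.3 \
//     --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'

import org.apache.spark.sql.SaveMode
import spark.implicits._

val tableName = "hudi_trips"               // illustrative table name
val basePath  = "file:///tmp/hudi_trips"   // illustrative storage path

// A tiny DataFrame standing in for real trip data
val df = Seq(
  ("trip-1", "rider-A", 27.70, "2020-06-02"),
  ("trip-2", "rider-B", 33.90, "2020-06-02")
).toDF("uuid", "rider", "fare", "ts")

// Upsert into a Hudi table; record key and precombine field are required write options
df.write.format("hudi").
  option("hoodie.datasource.write.recordkey.field", "uuid").
  option("hoodie.datasource.write.precombine.field", "ts").
  option("hoodie.table.name", tableName).
  mode(SaveMode.Overwrite).
  save(basePath)

// Snapshot query: read the table back through the Hudi datasource
val tripsDF = spark.read.format("hudi").load(basePath + "/*")
tripsDF.select("uuid", "rider", "fare").show()
```

The same DataFrame write API also drives incremental pulls by setting the query-type option, which the quick-start guide walks through step by step.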