tag | b0e4993612e0d546e6efffe551712d5a79aa4419
---|---
tagger | Sivabalan Narayanan <sivabala@uber.com>, Mon Jun 15 22:18:28 2020 -0400
object | ce0c840ec1425536d7db33c2140497c48a164fdb

0.5.3

commit | ce0c840ec1425536d7db33c2140497c48a164fdb
---|---
author | Sivabalan Narayanan <sivabala@uber.com>, Wed Jun 10 15:00:29 2020 -0400
committer | Sivabalan Narayanan <sivabala@uber.com>, Wed Jun 10 15:00:29 2020 -0400
tree | ebfa5ec18c7f2c6aa9224b34a36d2aab1ebd2745
parent | 937566ee15c6a873258cb22f0cf78623f3c169fc

Bumping release candidate number 2
Apache Hudi (pronounced Hoodie) stands for Hadoop Upserts Deletes and Incrementals. Hudi manages the storage of large analytical datasets on DFS (Cloud stores, HDFS or any Hadoop FileSystem compatible storage).
Hudi supports three types of queries:

- **Snapshot Query** - provides snapshot queries on real-time data, using a combination of columnar and row-based storage (e.g. Parquet + Avro).
- **Incremental Query** - provides a change stream with records inserted or updated after a point in time.
- **Read Optimized Query** - provides excellent snapshot query performance via purely columnar storage (e.g. Parquet).
Learn more about Hudi at https://hudi.apache.org
Prerequisites for building Apache Hudi:

- Unix-like system (like Linux, Mac OS X)
- Java 8
- Maven
```shell
# Checkout code and build
git clone https://github.com/apache/hudi.git && cd hudi
mvn clean package -DskipTests -DskipITs
```
To build the Javadoc for all Java and Scala classes:
```shell
# Javadoc generated under target/site/apidocs
mvn clean javadoc:aggregate -Pjavadocs
```
The default supported Scala version is 2.11. To build for Scala 2.12, use the `scala-2.12` profile:

```shell
mvn clean package -DskipTests -DskipITs -Dscala-2.12
```
Please visit https://hudi.apache.org/docs/quick-start-guide.html to quickly explore Hudi's capabilities using spark-shell.
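As a minimal sketch of what the spark-shell quick-start looks like (assuming a spark-shell launched with a matching `hudi-spark-bundle` on the classpath and Kryo serialization enabled, and using the `QuickstartUtils` sample-data generator that ships with Hudi; the table name and path below are illustrative, and exact option names may differ between releases), writing and then snapshot-querying a small copy-on-write table:

```scala
// Launch spark-shell with the Hudi bundle first, e.g. (version coordinates are assumptions):
//   spark-shell --packages org.apache.hudi:hudi-spark-bundle_2.11:0.5.3 \
//     --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'

import org.apache.hudi.QuickstartUtils._
import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.config.HoodieWriteConfig._
import org.apache.spark.sql.SaveMode._

val tableName = "hudi_trips_cow"             // hypothetical table name
val basePath  = "file:///tmp/hudi_trips_cow" // hypothetical base path
val dataGen   = new DataGenerator

// Generate a handful of sample trip records and write them as a Hudi table
val inserts = convertToStringList(dataGen.generateInserts(10))
val df = spark.read.json(spark.sparkContext.parallelize(inserts, 2))
df.write.format("hudi").
  options(getQuickstartWriteConfigs).
  option(PRECOMBINE_FIELD_OPT_KEY, "ts").
  option(RECORDKEY_FIELD_OPT_KEY, "uuid").
  option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
  option(TABLE_NAME, tableName).
  mode(Overwrite).
  save(basePath)

// Snapshot query: read the table back as a regular DataFrame
val tripsDF = spark.read.format("hudi").load(basePath + "/*/*/*/*")
tripsDF.createOrReplaceTempView("hudi_trips_snapshot")
spark.sql("select uuid, partitionpath from hudi_trips_snapshot").show()
```

The record key, precombine, and partition-path fields (`uuid`, `ts`, `partitionpath`) match the schema produced by the quick-start data generator; for your own data, point them at the corresponding columns.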