commit e9f114f3dd8673b8b66b024e62cc1e3e808e7ea9
author:    Suneel Marthi <smarthi@apache.org>  Sat Mar 07 22:08:35 2020 -0500
committer: yanghua <yanghua1127@gmail.com>     Sun Mar 08 22:38:40 2020 +0800
tree:      867bb4eb65c758dcaa6152bdcc7c8e28507526c6
parent:    afaf4bac31dd2081d1080fd3d969d1fadc800da7

[HUDI-581] NOTICE need more work as it missing content form included 3rd party ALv2 licensed NOTICE files (#1354)

* [HUDI-581] - Add 3rd party library NOTICE
* [HUDI-581]: NOTICE need more work as it missing content form included 3rd party ALv2 licensed NOTICE files
Apache Hudi (Incubating) (pronounced "hoodie") stands for Hadoop Upserts Deletes and Incrementals. Hudi manages the storage of large analytical datasets on DFS (cloud stores, HDFS, or any Hadoop FileSystem compatible storage).
Hudi supports three types of queries:
* Snapshot Query - Provides snapshot queries on real-time data, using a combination of columnar & row-based storage (e.g. Parquet + Avro).
* Incremental Query - Provides incremental queries on incremental data.
* Read Optimized Query - Provides read-optimized queries on columnar data.
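As a sketch of how the three query types surface in the Spark DataSource API (this assumes a spark-shell session with the Hudi bundle on the classpath and an existing Hudi table at a hypothetical basePath; option key names follow the Hudi datasource configs and may differ across Hudi versions):

```scala
// Sketch only: requires a live spark-shell with Hudi jars; basePath is hypothetical.
val basePath = "file:///tmp/hudi_trips"

// Snapshot query: the latest view of the table (the default query type).
val snapshotDF = spark.read.format("hudi").load(basePath)

// Incremental query: only records committed after a given instant time.
val incrementalDF = spark.read.format("hudi").
  option("hoodie.datasource.query.type", "incremental").
  option("hoodie.datasource.read.begin.instanttime", "20200307000000").
  load(basePath)

// Read optimized query: columnar base files only, skipping row-based log files.
val readOptimizedDF = spark.read.format("hudi").
  option("hoodie.datasource.query.type", "read_optimized").
  load(basePath)
```

The trade-off: snapshot queries see the freshest data, read-optimized queries give columnar scan performance, and incremental queries let downstream jobs pull only what changed since their last run.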
Learn more about Hudi at https://hudi.apache.org
Prerequisites for building Apache Hudi:
* Unix-like system (like Linux, Mac OS X)
* Java 8
* Maven
# Checkout code and build
git clone https://github.com/apache/incubator-hudi.git && cd incubator-hudi
mvn clean package -DskipTests -DskipITs
To build the Javadoc for all Java and Scala classes:
# Javadoc generated under target/site/apidocs
mvn clean javadoc:aggregate -Pjavadocs
The default Scala version supported is 2.11. To build for Scala 2.12, use the scala-2.12 profile:
mvn clean package -DskipTests -DskipITs -Dscala-2.12
Please visit https://hudi.apache.org/docs/quick-start-guide.html to quickly explore Hudi's capabilities using spark-shell.
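As a minimal sketch of that quick start (the bundle coordinates, version, table name, and paths below are assumptions for illustration; exact write options and load path patterns vary by Hudi version, so follow the guide for your setup), a Hudi write/read round trip in spark-shell looks like:

```scala
// Launch (shell), with an assumed bundle version -- match it to your build:
//   spark-shell --packages org.apache.hudi:hudi-spark-bundle_2.11:0.5.1-incubating \
//     --conf spark.serializer=org.apache.spark.serializer.KryoSerializer
import org.apache.spark.sql.SaveMode

val tableName = "hudi_demo"              // hypothetical table name
val basePath  = "file:///tmp/hudi_demo"  // hypothetical local path

// Write a small DataFrame as a Hudi table (upsert is the default write operation).
val df = Seq((1, "a", 100L), (2, "b", 200L)).toDF("id", "name", "ts")
df.write.format("hudi").
  option("hoodie.datasource.write.recordkey.field", "id").
  option("hoodie.datasource.write.partitionpath.field", "name").
  option("hoodie.datasource.write.precombine.field", "ts").
  option("hoodie.table.name", tableName).
  mode(SaveMode.Overwrite).
  save(basePath)

// Read it back as a snapshot query.
spark.read.format("hudi").load(basePath).show()
```

The record key identifies rows for upserts, the precombine field breaks ties when multiple updates arrive for the same key, and the partition path controls the table's directory layout on DFS.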