Read and apply schema for each log block from the metadata header instead of the latest schema
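The change described above can be sketched in miniature: each log block stores the schema (version) it was written with in its own metadata header, and readers decode the block with that schema instead of the table's latest one. This is a toy Python model under assumed names; none of the identifiers below are Hudi's actual classes or APIs.

```python
# Toy model: per-block schema stored in a metadata header.
# Names (SCHEMAS, write_block, read_block) are illustrative, not Hudi's API.

SCHEMAS = {
    1: ["id", "name"],            # schema version 1
    2: ["id", "name", "email"],   # version 2 added a field
}

def write_block(schema_version, rows):
    """A log block: a metadata header naming its schema, plus positional row data."""
    fields = SCHEMAS[schema_version]
    return {
        "header": {"schema_version": schema_version},
        "rows": [[row[f] for f in fields] for row in rows],
    }

def read_block(block):
    """Decode using the schema recorded in the block's own header."""
    fields = SCHEMAS[block["header"]["schema_version"]]
    return [dict(zip(fields, values)) for values in block["rows"]]

old_block = write_block(1, [{"id": 1, "name": "a"}])
new_block = write_block(2, [{"id": 2, "name": "b", "email": "b@x"}])

# Decoding an old block with the latest schema would misalign fields;
# decoding each block with its own recorded schema round-trips correctly.
print(read_block(old_block))  # [{'id': 1, 'name': 'a'}]
print(read_block(new_block))  # [{'id': 2, 'name': 'b', 'email': 'b@x'}]
```

The point of the fix is visible in the round-trip: old blocks stay readable even after the table's schema has evolved.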

Hudi

Hudi (pronounced Hoodie) stands for Hadoop Upserts anD Incrementals. Hudi manages storage of large analytical datasets on HDFS and serves them out via two types of tables:

  • Read Optimized Table - Provides excellent query performance via purely columnar storage (e.g. Parquet)
  • Near-Real time Table (WIP) - Provides queries on real-time data, using a combination of columnar & row-based storage (e.g. Parquet + Avro)
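The two table types above can be modeled in miniature: a columnar base snapshot serves read-optimized queries, while a row-based log of recent upserts merged in at query time serves near-real-time queries. This is a conceptual toy in Python; the data and function names are illustrative, not Hudi's API.

```python
# Toy model of the two table types (illustrative, not Hudi's API).

base = {  # compacted columnar snapshot, keyed by record id
    1: {"id": 1, "value": "old"},
    2: {"id": 2, "value": "stable"},
}

log = [  # row-based upserts arriving since the last compaction
    {"id": 1, "value": "new"},       # update to an existing record
    {"id": 3, "value": "inserted"},  # brand-new record
]

def read_optimized():
    """Query only the columnar base: fast, but misses recent upserts."""
    return sorted(base.values(), key=lambda r: r["id"])

def near_real_time():
    """Merge the upsert log over the base at query time: fresher, costlier."""
    merged = dict(base)
    for row in log:
        merged[row["id"]] = row
    return sorted(merged.values(), key=lambda r: r["id"])

print(read_optimized())   # record 1 still shows "old"; record 3 is absent
print(near_real_time())   # record 1 shows "new"; record 3 appears
```

The trade-off in the bullets falls out directly: the read-optimized path scans only compacted columnar data, while the near-real-time path pays a merge cost for freshness.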

For more, head over here.