commit    3b5c9f7f37b269c5e2df0feb11376333f29380e2
author    Karuppayya <karuppayya1990@gmail.com>  Wed Nov 20 15:10:26 2024 -0800
committer GitHub <noreply@github.com>  Wed Nov 20 15:10:26 2024 -0800
tree      a58cb3a9f220e9f4b8b066002847634bcfb3f4ab
parent    a8f42d1b3954fb53a40204f2aa3a99080d637201
Spark 3.5: Procedure to compute table stats (#10986)
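The procedure added by this commit is invoked through Spark's CALL syntax. Below is a minimal sketch of calling it from a Java Spark application, assuming the procedure is exposed as compute_table_stats under the catalog's system namespace as in the Iceberg Spark procedure documentation; the catalog name spark_catalog, the table db.sample, and the column names are placeholders.

import org.apache.spark.sql.SparkSession;

public class ComputeTableStatsExample {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("compute-table-stats")
        .getOrCreate();

    // Compute statistics for every column of the table (placeholder names).
    spark.sql("CALL spark_catalog.system.compute_table_stats(table => 'db.sample')").show();

    // Or restrict the computation to specific columns.
    spark.sql(
        "CALL spark_catalog.system.compute_table_stats("
            + "table => 'db.sample', columns => array('id', 'data'))")
        .show();
  }
}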
Iceberg is a high-performance format for huge analytic tables. Iceberg brings the reliability and simplicity of SQL tables to big data, while making it possible for engines like Spark, Trino, Flink, Presto, Hive and Impala to safely work with the same tables, at the same time.
Background and documentation are available at https://iceberg.apache.org
Iceberg is under active development at the Apache Software Foundation.
The Iceberg format specification is stable and new features are added with each version.
The core Java library is located in this repository and is the reference implementation for other libraries.
Documentation is available for all libraries and integrations.
Iceberg tracks issues in GitHub and prefers to receive contributions as pull requests.
Community discussions happen primarily on the dev mailing list or on specific issues.
Iceberg is built using Gradle with Java 11, 17, or 21.
To invoke a build and run tests: ./gradlew build
To skip tests: ./gradlew build -x test -x integrationTest
To fix code style for the default targets: ./gradlew spotlessApply
To fix code style for all targets: ./gradlew spotlessApply -DallModules
Iceberg table support is organized in library modules:
iceberg-common contains utility classes used in other modules
iceberg-api contains the public Iceberg API
iceberg-core contains implementations of the Iceberg API and support for Avro data files; this is what processing engines should depend on
iceberg-parquet is an optional module for working with tables backed by Parquet files
iceberg-arrow is an optional module for reading Parquet into Arrow memory
iceberg-orc is an optional module for working with tables backed by ORC files
iceberg-hive-metastore is an implementation of Iceberg tables backed by the Hive metastore Thrift client
iceberg-data is an optional module for working with tables directly from JVM applications (see the sketch after this list)

Iceberg also has modules for adding Iceberg support to processing engines:

iceberg-spark is an implementation of Spark's Datasource V2 API for Iceberg with submodules for each Spark version (use the runtime jars for a shaded version)
iceberg-flink contains classes for integrating with Apache Flink (use iceberg-flink-runtime for a shaded version)
iceberg-mr contains an InputFormat and other classes for integrating with Apache Hive
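As a rough illustration of the iceberg-api, iceberg-core, and iceberg-data modules above (working with tables directly from a JVM application), here is a hedged sketch that creates a table through a HadoopCatalog; the warehouse path, namespace, and schema are placeholders, and any other Catalog implementation is used the same way.

import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.hadoop.HadoopCatalog;
import org.apache.iceberg.types.Types;

public class CreateTableExample {
  public static void main(String[] args) {
    // Placeholder schema: a required id, an optional data column, and an event timestamp.
    Schema schema = new Schema(
        Types.NestedField.required(1, "id", Types.LongType.get()),
        Types.NestedField.optional(2, "data", Types.StringType.get()),
        Types.NestedField.required(3, "ts", Types.TimestampType.withZone()));

    // Partition data files by day(ts).
    PartitionSpec spec = PartitionSpec.builderFor(schema).day("ts").build();

    // HadoopCatalog keeps table metadata under a warehouse directory (placeholder path).
    HadoopCatalog catalog = new HadoopCatalog(new Configuration(), "/tmp/warehouse");
    Table table = catalog.createTable(TableIdentifier.of("db", "sample"), schema, spec);
    System.out.println("Created table at " + table.location());
  }
}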
NOTE: The tests require Docker to execute. On macOS (with Docker Desktop), you might need to create a symbolic link to the Docker socket so that it can be detected by the tests:
sudo ln -s $HOME/.docker/run/docker.sock /var/run/docker.sock
See the Multi-Engine Support page to learn about Iceberg compatibility with different Spark, Flink, and Hive versions. For other engines such as Presto or Trino, please visit their websites for Iceberg integration details.
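For the Spark integration specifically, the sketch below shows one way to register an Iceberg catalog and read a table through the Datasource V2 integration, assuming the matching iceberg-spark-runtime jar is on the classpath; the catalog name demo, the warehouse path, and the table name are placeholders.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ReadIcebergTableExample {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("read-iceberg")
        // Register an Iceberg catalog named "demo" backed by a Hadoop warehouse (placeholder path).
        .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.demo.type", "hadoop")
        .config("spark.sql.catalog.demo.warehouse", "/tmp/warehouse")
        .getOrCreate();

    // Read the table through the catalog registered above.
    Dataset<Row> df = spark.table("demo.db.sample");
    df.show();
  }
}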
This repository contains the Java implementation of Iceberg. Other implementations can be found at: