build(release): bump version to 0.2.0-rc.2
README.md

The hudi-rs project aims to broaden the use of Apache Hudi for a diverse range of users and projects.

| Source    | Installation Command |
|-----------|----------------------|
| PyPI      | `pip install hudi`   |
| Crates.io | `cargo add hudi`     |

## Example usage

> [!NOTE]
> These examples expect that a Hudi table exists at `/tmp/trips_table`, created by following the quick start guide.

### Python

Read a Hudi table into a PyArrow table:

```python
from hudi import HudiTableBuilder
import pyarrow as pa

hudi_table = (
    HudiTableBuilder
    .from_base_uri("/tmp/trips_table")
    .with_option("hoodie.read.as.of.timestamp", "20241122010827898")
    .build()
)
records = hudi_table.read_snapshot(filters=[("city", "=", "san_francisco")])

arrow_table = pa.Table.from_batches(records)
result = arrow_table.select(["rider", "city", "ts", "fare"])
print(result)
```

### Rust (DataFusion)

```shell
cargo new my_project --bin && cd my_project
cargo add tokio@1 datafusion@42
cargo add hudi --features datafusion
```

Update `src/main.rs` with the code snippet below, then run `cargo run`.

```rust
use std::sync::Arc;

use datafusion::error::Result;
use datafusion::prelude::{DataFrame, SessionContext};
use hudi::HudiDataSource;

#[tokio::main]
async fn main() -> Result<()> {
    let ctx = SessionContext::new();
    let hudi = HudiDataSource::new_with_options(
        "/tmp/trips_table",
        [("hoodie.read.as.of.timestamp", "20241122010827898")]).await?;
    ctx.register_table("trips_table", Arc::new(hudi))?;
    let df: DataFrame = ctx.sql("SELECT * from trips_table where city = 'san_francisco'").await?;
    df.show().await?;
    Ok(())
}
```

## Work with cloud storage

Ensure that cloud storage credentials are set as environment variables, e.g., `AWS_*`, `AZURE_*`, or `GOOGLE_*`. The relevant storage environment variables are picked up automatically, and a target table's base URI with a scheme such as `s3://`, `az://`, or `gs://` is resolved against the corresponding storage service.
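For example, for an S3-backed table you might export the following before running your program (the values below are placeholders, not real credentials; the exact set of variables you need depends on your AWS setup):

```shell
# Placeholder AWS credentials -- replace with your own.
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_REGION="us-west-2"
```

Analogous `AZURE_*` and `GOOGLE_*` variables apply to Azure and Google Cloud storage.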

Alternatively, you can pass the storage configuration as options to `HudiTableBuilder` or `HudiDataSource`.

### Python

```python
from hudi import HudiTableBuilder

hudi_table = (
    HudiTableBuilder
    .from_base_uri("s3://bucket/trips_table")
    .with_option("aws_region", "us-west-2")
    .build()
)
```

### Rust (DataFusion)

```rust
use datafusion::error::Result;
use hudi::HudiDataSource;

#[tokio::main]
async fn main() -> Result<()> {
    // Storage options are passed alongside the table's base URI.
    let hudi = HudiDataSource::new_with_options(
        "s3://bucket/trips_table",
        [("aws_region", "us-west-2")],
    ).await?;
    Ok(())
}
```

## Contributing

Check out the contributing guide for all the details about making contributions to the project.