The hudi-rs
project aims to broaden the use of Apache Hudi for a diverse range of users and projects.
| Source | Installation Command |
|-----------|----------------------|
| PyPI      | `pip install hudi`   |
| Crates.io | `cargo add hudi`     |
> [!NOTE]
> These examples expect that a Hudi table exists at `/tmp/trips_table`, created using the quick start guide.
Read a Hudi table into a PyArrow table.
```python
from hudi import HudiTable

import pyarrow as pa
import pyarrow.compute as pc

# Load the table and read the latest snapshot as record batches
hudi_table = HudiTable("/tmp/trips_table")
records = hudi_table.read_snapshot()

# Assemble the batches into a PyArrow table, then select and filter columns
arrow_table = pa.Table.from_batches(records)
result = arrow_table.select(["rider", "ts", "fare"]).filter(
    pc.field("fare") > 20.0)
print(result)
```
```shell
cargo new my_project --bin && cd my_project
cargo add tokio@1 datafusion@39
cargo add hudi --features datafusion
```
Update `src/main.rs` with the code snippet below, then run `cargo run`.
```rust
use std::sync::Arc;

use datafusion::error::Result;
use datafusion::prelude::{DataFrame, SessionContext};
use hudi::HudiDataSource;

#[tokio::main]
async fn main() -> Result<()> {
    let ctx = SessionContext::new();

    // Register the Hudi table as a DataFusion table provider
    let hudi = HudiDataSource::new("/tmp/trips_table").await?;
    ctx.register_table("trips_table", Arc::new(hudi))?;

    // Query the table with SQL and print the results
    let df: DataFrame = ctx
        .sql("SELECT * FROM trips_table WHERE fare > 20.0")
        .await?;
    df.show().await?;
    Ok(())
}
```
Ensure cloud storage credentials are set properly as environment variables, e.g., `AWS_*`, `AZURE_*`, or `GOOGLE_*`. The relevant storage environment variables will be picked up automatically, and a target table's base URI with a scheme such as `s3://`, `az://`, or `gs://` will be handled accordingly.
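As a sketch of the pattern, credentials can be exported in the environment before opening a table by its cloud base URI. The bucket name, path, and credential values below are placeholders, not part of the project:

```python
import os

# Placeholder AWS credentials for illustration; substitute real values,
# or set these in your shell before launching the process.
os.environ["AWS_ACCESS_KEY_ID"] = "<your-access-key-id>"
os.environ["AWS_SECRET_ACCESS_KEY"] = "<your-secret-access-key>"
os.environ["AWS_REGION"] = "us-west-2"

# With the variables set, the same API works against an s3:// base URI
# (hypothetical bucket and path shown):
# from hudi import HudiTable
# hudi_table = HudiTable("s3://my-bucket/trips_table")
# records = hudi_table.read_snapshot()
```

The same approach applies to `az://` and `gs://` URIs with the corresponding `AZURE_*` or `GOOGLE_*` variables.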
Check out the contributing guide for all the details about making contributions to the project.