commit    2c84570455fc6ad1d4afa2e71e8b101506faf5ae
author    Shiyan Xu <2701446+xushiyan@users.noreply.github.com>  Sun Jul 14 23:41:53 2024 -0500
committer Shiyan Xu <2701446+xushiyan@users.noreply.github.com>  Sun Jul 14 23:41:53 2024 -0500
tree      3d083f4eb87ffab962fe4b09bccfe5f833098844
parent    a5321c95bc6be92eaed292bc184ded8891c88b75
build: bump version to 0.1.0
The hudi-rs project aims to broaden the use of Apache Hudi for a diverse range of users and projects.
| Source | Installation Command |
|---|---|
| PyPI | `pip install hudi` |
| Crates.io | `cargo add hudi` |
Read a Hudi table into a PyArrow table:

```python
import pyarrow as pa
import pyarrow.compute as pc

from hudi import HudiTable

hudi_table = HudiTable("/tmp/trips_table")
records = hudi_table.read_snapshot()

arrow_table = pa.Table.from_batches(records)
result = arrow_table.select(
    ["rider", "ts", "fare"]).filter(
    pc.field("fare") > 20.0)
print(result)
```
Add the `hudi` crate with the `datafusion` feature to your `Cargo.toml`:

```toml
[dependencies]
hudi = { version = "0", features = ["datafusion"] }
tokio = "1"
datafusion = "39.0.0"
```
```rust
use std::sync::Arc;

use datafusion::error::Result;
use datafusion::prelude::{DataFrame, SessionContext};
use hudi::HudiDataSource;

#[tokio::main]
async fn main() -> Result<()> {
    let ctx = SessionContext::new();
    let hudi = HudiDataSource::new("/tmp/trips_table").await?;
    ctx.register_table("trips_table", Arc::new(hudi))?;
    let df: DataFrame = ctx
        .sql("SELECT * from trips_table where fare > 20.0")
        .await?;
    df.show().await?;
    Ok(())
}
```
Ensure cloud storage credentials are set properly as environment variables, e.g., `AWS_*`, `AZURE_*`, or `GOOGLE_*`. Relevant storage environment variables will then be picked up. The target table's base URI with schemes such as `s3://`, `az://`, or `gs://` will be processed accordingly.
Check out the contributing guide for all the details about making contributions to the project.