This is a Python library that binds to the Apache Arrow in-memory query engine DataFusion.
Like PySpark, it allows you to build a query plan through SQL or a DataFrame API against in-memory data, Parquet, or CSV files, run it in a multi-threaded environment, and obtain the result back in Python.
It also allows you to use UDFs and UDAFs for complex operations.
The major advantage of this library over other execution engines is that it achieves zero-copy between Python and its execution engine: there is no cost in using UDFs, UDAFs, or collecting results back to Python beyond having to acquire the GIL while those operations run.
Its query engine, DataFusion, is written in Rust, which offers strong guarantees around thread safety and the absence of memory leaks.
Technically, zero-copy is achieved via the Arrow C Data Interface.
Simple usage:
```python
import datafusion
import pyarrow

# an alias
f = datafusion.functions

# create a context
ctx = datafusion.ExecutionContext()

# create a RecordBatch and a new DataFrame from it
batch = pyarrow.RecordBatch.from_arrays(
    [pyarrow.array([1, 2, 3]), pyarrow.array([4, 5, 6])],
    names=["a", "b"],
)
df = ctx.create_dataframe([[batch]])

# create a new statement
df = df.select(
    f.col("a") + f.col("b"),
    f.col("a") - f.col("b"),
)

# execute and collect the first (and only) batch
result = df.collect()[0]

assert result.column(0) == pyarrow.array([5, 7, 9])
assert result.column(1) == pyarrow.array([-3, -3, -3])
```
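The same context can also build a plan from SQL against registered files. The snippet below is a minimal sketch, not part of the original example: the file path `example.parquet` is hypothetical, and it assumes the context exposes `register_parquet` and `sql`, which register a file under a table name and build a DataFrame from a SQL query.

```python
import datafusion

ctx = datafusion.ExecutionContext()

# register a local Parquet file under the table name "example"
# (the path below is hypothetical)
ctx.register_parquet("example", "example.parquet")

# build the same kind of plan through SQL instead of the DataFrame API
df = ctx.sql("SELECT a, b, a + b AS total FROM example LIMIT 10")

# nothing is executed until the results are collected
batches = df.collect()
```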
We can also run a query with a user-defined function (UDF):

```python
def is_null(array: pyarrow.Array) -> pyarrow.Array:
    return array.is_null()


udf = f.udf(is_null, [pyarrow.int64()], pyarrow.bool_())

df = df.select(udf(f.col("a")))
```
For user-defined aggregate functions (UDAFs), an accumulator class is provided:

```python
import pyarrow
import pyarrow.compute


class Accumulator:
    """
    Interface of a user-defined accumulation.
    """

    def __init__(self):
        self._sum = pyarrow.scalar(0.0)

    def to_scalars(self) -> [pyarrow.Scalar]:
        return [self._sum]

    def update(self, values: pyarrow.Array) -> None:
        # not nice since pyarrow scalars can't be summed yet; this breaks on `None`
        self._sum = pyarrow.scalar(self._sum.as_py() + pyarrow.compute.sum(values).as_py())

    def merge(self, states: pyarrow.Array) -> None:
        # not nice since pyarrow scalars can't be summed yet; this breaks on `None`
        self._sum = pyarrow.scalar(self._sum.as_py() + pyarrow.compute.sum(states).as_py())

    def evaluate(self) -> pyarrow.Scalar:
        return self._sum


df = ...

udaf = f.udaf(Accumulator, pyarrow.float64(), pyarrow.float64(), [pyarrow.float64()])

df = df.aggregate(
    [],
    [udaf(f.col("a"))]
)
```
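As a usage sketch (not part of the original example), assuming the `ctx`, `batch`, `f`, and `udaf` defined above, aggregating the RecordBatch from the simple-usage snippet (column `a` = [1, 2, 3]) should yield a single row containing the sum:

```python
# hypothetical end-to-end check of the UDAF defined above
df = ctx.create_dataframe([[batch]])
result = df.aggregate([], [udaf(f.col("a"))]).collect()[0]

# one row, containing the sum of column "a" as a float64
assert result.column(0) == pyarrow.array([6.0])
```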
To install from pip:

```bash
pip install datafusion
```
To develop, you will need Rust and Cargo installed. We use the workflow recommended by pyo3 and maturin.
Bootstrap:
```bash
# fetch this repo
git clone git@github.com:apache/arrow-datafusion.git
cd arrow-datafusion/python

# prepare development environment (used to build wheel / install in development)
python3 -m venv venv

# activate the venv
source venv/bin/activate
pip install -r requirements.txt
```
Whenever Rust code changes (your changes or via `git pull`):
```bash
# make sure you activate the venv using "source venv/bin/activate" first
maturin develop
python -m pytest
```
To change test dependencies, edit `requirements.in` and run:
```bash
# install pip-tools (this only needs to be done once); consider running it inside the venv
pip install pip-tools

# change requirements.in and then run
pip-compile --generate-hashes
```
To update dependencies, run
```bash
pip-compile --upgrade --generate-hashes
```
More details are available in the pip-tools documentation.