This is a Python library that binds to the Apache Arrow in-memory query engine, DataFusion.
DataFusion's Python bindings can be used as a foundation for building new data systems in Python, and several projects are built on top of them.
For tips on tuning parallelism, see Maximizing CPU Usage in the configuration guide.
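As a minimal sketch (using the SessionConfig API shown later in this README; the value 16 is only an illustrative assumption), you can raise the number of target partitions so more cores are used:

from datafusion import SessionConfig, SessionContext

# Sketch: increase target partitions so scans, joins, and aggregations are
# split across more cores; the right value depends on your hardware and workload.
config = SessionConfig().with_target_partitions(16)
ctx = SessionContext(config)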
The following example demonstrates running a SQL query against a Parquet file using DataFusion, storing the results in a Pandas DataFrame, and then plotting a chart.
The Parquet file used in this example can be downloaded from the following page:
from datafusion import SessionContext

# Create a DataFusion context
ctx = SessionContext()

# Register table with context
ctx.register_parquet('taxi', 'yellow_tripdata_2021-01.parquet')

# Execute SQL
df = ctx.sql("select passenger_count, count(*) "
             "from taxi "
             "where passenger_count is not null "
             "group by passenger_count "
             "order by passenger_count")

# convert to Pandas
pandas_df = df.to_pandas()

# create a chart
fig = pandas_df.plot(kind="bar", title="Trip Count by Number of Passengers").get_figure()
fig.savefig('chart.png')
This produces the following chart:
You can use SessionContext's register_view method to convert a DataFrame into a view and register it with the context.
from datafusion import SessionContext, col, literal

# Create a DataFusion context
ctx = SessionContext()

# Create sample data
data = {"a": [1, 2, 3, 4, 5], "b": [10, 20, 30, 40, 50]}

# Create a DataFrame from the dictionary
df = ctx.from_pydict(data, "my_table")

# Filter the DataFrame (for example, keep rows where a > 2)
df_filtered = df.filter(col("a") > literal(2))

# Register the dataframe as a view with the context
ctx.register_view("view1", df_filtered)

# Now run a SQL query against the registered view
df_view = ctx.sql("SELECT * FROM view1")

# Collect the results
results = df_view.collect()

# Convert results to a list of dictionaries for display
result_dicts = [batch.to_pydict() for batch in results]

print(result_dicts)
This will output:
[{'a': [3, 4, 5], 'b': [30, 40, 50]}]
It is possible to configure the runtime environment (memory and disk settings) and session configuration when creating a context.
from datafusion import RuntimeEnvBuilder, SessionConfig, SessionContext

runtime = (
    RuntimeEnvBuilder()
    .with_disk_manager_os()
    .with_fair_spill_pool(10000000)
)
config = (
    SessionConfig()
    .with_create_default_catalog_and_schema(True)
    .with_default_catalog_and_schema("foo", "bar")
    .with_target_partitions(8)
    .with_information_schema(True)
    .with_repartition_joins(False)
    .with_repartition_aggregations(False)
    .with_repartition_windows(False)
    .with_parquet_pruning(False)
    .set("datafusion.execution.parquet.pushdown_filters", "true")
)
ctx = SessionContext(config, runtime)
Refer to the API documentation for more information.
Printing the context will show the current configuration settings.
print(ctx)
For information about how to extend DataFusion Python, please see the extensions page of the online documentation.
See examples for more information.
uv add datafusion
pip install datafusion # or python -m pip install datafusion
conda install -c conda-forge datafusion
You can verify the installation by running:
>>> import datafusion
>>> datafusion.__version__
'0.6.0'
This assumes that you have Rust and Cargo installed. We use the workflow recommended by PyO3 and Maturin. Maturin can be installed via either uv or pip; both approaches should offer the same experience. We recommend uv since it offers significant performance improvements over pip.
Currently, protobuf support requires either protobuf or cmake to be installed.
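For example (package names assumed for Debian/Ubuntu and Homebrew; use your platform's equivalent):

# Debian / Ubuntu
sudo apt-get install -y protobuf-compiler cmake
# macOS with Homebrew
brew install protobuf cmake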
Bootstrap (uv):
By default, uv will attempt to build the datafusion Python package itself. For development we prefer to build it manually, which means that when creating your virtual environment with uv sync you need to pass the additional flag --no-install-package datafusion, and uv run commands need the additional parameter --no-project.
# fetch this repo
git clone git@github.com:apache/datafusion-python.git
# cd to the repo root
cd datafusion-python/
# create the virtual environment
uv sync --dev --no-install-package datafusion
# activate the environment
source .venv/bin/activate
Bootstrap (pip):
# fetch this repo
git clone git@github.com:apache/datafusion-python.git
# cd to the repo root
cd datafusion-python/
# prepare development environment (used to build wheel / install in development)
python3 -m venv .venv
# activate the venv
source .venv/bin/activate
# update pip itself if necessary
python -m pip install -U pip
# install dependencies
python -m pip install -r pyproject.toml
The tests rely on test data in git submodules.
git submodule update --init
Whenever Rust code changes (your changes or via git pull):
# make sure you activate the venv using "source .venv/bin/activate" first
maturin develop --uv
python -m pytest
Alternatively if you are using uv you can do the following without needing to activate the virtual environment:
uv run --no-project maturin develop --uv
uv run --no-project pytest
To run the FFI tests within the examples folder, after you have built datafusion-python with the previous commands:
cd examples/datafusion-ffi-example
uv run --no-project maturin develop --uv
uv run --no-project pytest python/tests/_test_*py
datafusion-python takes advantage of pre-commit to assist developers with code linting to help reduce the number of commits that ultimately fail in CI due to linter errors. Using the pre-commit hooks is optional for the developer but certainly helpful for keeping PRs clean and concise.
Our pre-commit hooks can be installed by running pre-commit install. This installs the configuration from your DATAFUSION_PYTHON_ROOT/.github directory and runs the hooks each time you commit, aborting the commit if an offending lint is found so you can fix it locally before pushing.
The pre-commit hooks can also be run ad hoc, without installing them, by simply running pre-commit run --all-files.
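For convenience, the two usual invocations look like this:

# install the hooks once; they will then run on every commit
pre-commit install

# or run all hooks ad hoc without installing them
pre-commit run --all-files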
NOTE: the current pre-commit hooks require docker and cmake. See the note on protobuf above.
There are scripts in ci/scripts for running Rust and Python linters.
./ci/scripts/python_lint.sh
./ci/scripts/rust_clippy.sh
./ci/scripts/rust_fmt.sh
./ci/scripts/rust_toml_fmt.sh
This project includes an AI agent skill for auditing which features from the upstream Apache DataFusion Rust library are not yet exposed in these Python bindings. This is useful when adding missing functions, auditing API coverage, or ensuring parity with upstream.
The skill accepts an optional area argument:
scalar functions
aggregate functions
window functions
dataframe
session context
ffi
types
all
If no argument is provided, it defaults to checking all areas. The skill will fetch the upstream DataFusion documentation, compare it against the functions and methods exposed in this project, and produce a coverage report listing what is currently exposed and what is missing.
The skill definition lives in .ai/skills/check-upstream/SKILL.md and follows the Agent Skills open standard. It can be used by any AI coding agent that supports skill discovery, or followed manually.
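If you want to spot-check coverage by hand, one rough sketch (using the datafusion.functions module; this is not part of the skill itself) is to list what the bindings currently expose and compare it against the upstream function reference:

# Rough manual check: list the function names the Python bindings expose,
# to compare against the upstream DataFusion documentation.
from datafusion import functions as f

exposed = sorted(name for name in dir(f) if not name.startswith("_"))
print(len(exposed), "names exposed")
print(exposed[:20])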
To change test dependencies, edit pyproject.toml and run:
uv sync --dev --no-install-package datafusion