Add SKILL.md and enrich package docstring (#1497)

* Add AGENTS.md and enrich __init__.py module docstring

Add python/datafusion/AGENTS.md as a comprehensive DataFrame API guide
for AI agents and users. It ships with pip automatically (Maturin includes
everything under python-source = "python"). Covers core abstractions,
import conventions, data loading, all DataFrame operations, expression
building, a SQL-to-DataFrame reference table, common pitfalls, idiomatic
patterns, and a categorized function index.

Enrich the __init__.py module docstring from 2 lines to a full overview
with core abstractions, a quick-start example, and a pointer to AGENTS.md.

Closes #1394 (PR 1a)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Clarify audience of root vs package AGENTS.md

The root AGENTS.md (symlinked as CLAUDE.md) is for contributors working
on the project. Add a pointer to python/datafusion/AGENTS.md which is
the user-facing DataFrame API guide shipped with the package. Also add
the Apache license header to the package AGENTS.md.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Add PR template and pre-commit check guidance to AGENTS.md

Document that all PRs must follow .github/pull_request_template.md and
that pre-commit hooks must pass before committing. List all configured
hooks (actionlint, ruff, ruff-format, cargo fmt, cargo clippy, codespell,
uv-lock) and the command to run them manually.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Remove duplicated hook list from AGENTS.md

Let the hooks be discoverable from .pre-commit-config.yaml rather than
maintaining a separate list that can drift.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Fix AGENTS.md: Arrow C Data Interface, aggregate filter, fluent example

- Clarify that DataFusion works with any Arrow C Data Interface
  implementation, not just PyArrow.
- Show the filter keyword argument on aggregate functions (the idiomatic
  HAVING equivalent) instead of the post-aggregate .filter() pattern; a
  short sketch follows this list.
- Update the SQL reference table to show FILTER (WHERE ...) syntax.
- Remove the now-incorrect "Aggregate then filter for HAVING" pitfall.
- Add .collect() to the fluent chaining example so the result is clearly
  materialized.
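
A minimal sketch of the filter keyword described above, assuming a
DataFrame df with columns "color" and "price" (names are illustrative)
and f bound to datafusion.functions; it mirrors SQL's
SUM(price) FILTER (WHERE price > 10):

  from datafusion import col
  from datafusion import functions as f

  result = df.aggregate(
      [col("color")],
      [f.sum(col("price"), filter=col("price") > 10).alias("total_over_10")],
  )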

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Update agents file after working through the first tpc-h query using only the text description

* Add feedback from working through each of the TPC-H queries

* Address Copilot review feedback on AGENTS.md

- Wrap CASE/WHEN method-chain examples in parentheses and assign them to a
  variable so they are valid Python as shown (Copilot #1, #2); see the
  sketch after the verification note below.
- Fix INTERSECT/EXCEPT mapping: the default distinct=False corresponds to
  INTERSECT ALL / EXCEPT ALL, not the distinct forms. Updated both the
  Set Operations section and the SQL reference table to show both the
  ALL and distinct variants (Copilot #4).
- Change write_parquet / write_csv / write_json examples to file-style
  paths (output.parquet, etc.) to match the convention used in existing
  tests and examples. Note that a directory path is also valid for
  partitioned output (Copilot #5).

Verified INTERSECT/EXCEPT semantics with a script:
  df1.intersect(df2)                -> [1, 1, 2]  (= INTERSECT ALL)
  df1.intersect(df2, distinct=True) -> [1, 2]     (= INTERSECT)
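
A hedged sketch of the parenthesized CASE/WHEN chain from the first bullet
above, assuming a column named "size" (the column name, thresholds, and
labels are illustrative):

  from datafusion import col, lit
  from datafusion import functions as f

  size_class = (
      f.when(col("size") > 100, lit("large"))
      .when(col("size") > 10, lit("medium"))
      .otherwise(lit("small"))
  )
  df = df.select(col("size"), size_class.alias("size_class"))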

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Use short-form comparisons in AGENTS.md examples

Drop lit() on the RHS of comparison operators since Expr auto-wraps raw
Python values, matching the style the guide recommends (Copilot #3, #6).

Updates examples in the Aggregation, CASE/WHEN, SQL reference table,
Common Pitfalls, Fluent Chaining, and Variables-as-CTEs sections, plus
the __init__.py quick-start snippet. Prose explanations of the rule
(which cite the long form as the thing to avoid) are left unchanged.
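
For illustration, the two equivalent forms (the column name is arbitrary):

  from datafusion import col, lit

  df.filter(col("a") > lit(2))  # long form with an explicit lit()
  df.filter(col("a") > 2)       # short form; the raw value is wrapped automatically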

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Move user guide from python/datafusion/AGENTS.md to SKILL.md

The in-wheel AGENTS.md was not a real distribution channel -- no shipping
agent walks site-packages for AGENTS.md files. Moving to SKILL.md at the
repo root, with YAML frontmatter, lets the skill ecosystems (npx skills,
Claude Code plugin marketplaces, community aggregators) discover it.

Update the pointers in the contributor AGENTS.md and the __init__.py
module docstring accordingly. The docstring now references the GitHub
URL since the file no longer ships with the wheel.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* Address review feedback: doctest, streaming, date/timestamp

- Convert the __init__.py quick-start block to doctest format so it is
  picked up by `pytest --doctest-modules` (already the project default),
  preventing silent rot.
- Extract streaming into its own SKILL.md subsection with guidance on
  when to prefer execute_stream() over collect(), sync and async
  iteration, and execute_stream_partitioned() for per-partition streams
  (see the sketch after this list).
- Generalize the date-arithmetic rule from Date32 to both Date32 and
  Date64 (both reject Duration at any precision, both accept
  month_day_nano_interval), and note that Timestamp columns differ and
  do accept Duration.
- Document the PyArrow-inherited type mapping returned by
  to_pydict()/to_pylist(), including the nanosecond fallback to
  pandas.Timestamp / pandas.Timedelta and the to_pandas() footgun where
  date columns come back as an object dtype.
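
A minimal sketch of the streaming pattern, assuming an existing DataFrame
df (the per-batch handling is illustrative):

  # Stream record batches instead of materializing everything with collect()
  for batch in df.execute_stream():
      arrow_batch = batch.to_pyarrow()  # work on each batch incrementally

  # One stream per partition, for parallel or per-partition consumption
  partition_streams = df.execute_stream_partitioned()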

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* Distinguish user guide from agent reference in module docstring

The docstring pointed readers at SKILL.md as a "comprehensive guide," but
SKILL.md is written in a dense, skill-oriented format for agents — humans
are better served by the online user guide. Put the online docs first as
the primary reference and label the SKILL.md link as the agent reference.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

README.md

DataFusion in Python

This is a Python library that binds to DataFusion, the Apache Arrow-based in-memory query engine.

DataFusion's Python bindings can be used as a foundation for building new data systems in Python. Here are some examples:

  • Dask SQL uses DataFusion's Python bindings for SQL parsing, query planning, and logical plan optimizations, and then transpiles the logical plan to Dask operations for execution.
  • DataFusion Ballista is a distributed SQL query engine that extends DataFusion's Python bindings for distributed use cases.
  • DataFusion Ray is another distributed query engine that uses DataFusion's Python bindings.

Features

  • Execute queries using SQL or DataFrames against CSV, Parquet, and JSON data sources.
  • Queries are optimized using DataFusion's query optimizer.
  • Execute user-defined Python code from SQL.
  • Exchange data with Pandas and other DataFrame libraries that support PyArrow (see the short sketch after this list).
  • Serialize and deserialize query plans in Substrait format.
  • Experimental support for transpiling SQL queries to DataFrame calls with Polars, Pandas, and cuDF.
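
As a brief illustration of the Pandas exchange mentioned above (a hedged sketch; the sample data is arbitrary):

import pandas as pd
from datafusion import SessionContext

ctx = SessionContext()

# Create a DataFusion DataFrame from a Pandas DataFrame ...
df = ctx.from_pandas(pd.DataFrame({"a": [1, 2, 3]}))

# ... and bring query results back into Pandas
pandas_result = df.to_pandas()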

For tips on tuning parallelism, see Maximizing CPU Usage in the configuration guide.

Example Usage

The following example demonstrates running a SQL query against a Parquet file using DataFusion, storing the results in a Pandas DataFrame, and then plotting a chart.

The Parquet file used in this example can be downloaded from the New York City Taxi and Limousine Commission trip record data page.

from datafusion import SessionContext

# Create a DataFusion context
ctx = SessionContext()

# Register table with context
ctx.register_parquet('taxi', 'yellow_tripdata_2021-01.parquet')

# Execute SQL
df = ctx.sql("select passenger_count, count(*) "
             "from taxi "
             "where passenger_count is not null "
             "group by passenger_count "
             "order by passenger_count")

# convert to Pandas
pandas_df = df.to_pandas()

# create a chart
fig = pandas_df.plot(kind="bar", title="Trip Count by Number of Passengers").get_figure()
fig.savefig('chart.png')

This produces a bar chart of trip counts by passenger count, saved as chart.png.

Registering a DataFrame as a View

You can use SessionContext's register_view method to convert a DataFrame into a view and register it with the context.

from datafusion import SessionContext, col, literal

# Create a DataFusion context
ctx = SessionContext()

# Create sample data
data = {"a": [1, 2, 3, 4, 5], "b": [10, 20, 30, 40, 50]}

# Create a DataFrame from the dictionary
df = ctx.from_pydict(data, "my_table")

# Filter the DataFrame (for example, keep rows where a > 2)
df_filtered = df.filter(col("a") > literal(2))

# Register the dataframe as a view with the context
ctx.register_view("view1", df_filtered)

# Now run a SQL query against the registered view
df_view = ctx.sql("SELECT * FROM view1")

# Collect the results
results = df_view.collect()

# Convert results to a list of dictionaries for display
result_dicts = [batch.to_pydict() for batch in results]

print(result_dicts)

This will output:

[{'a': [3, 4, 5], 'b': [30, 40, 50]}]

Configuration

It is possible to configure the runtime (memory and disk settings) and session configuration when creating a context.

from datafusion import RuntimeEnvBuilder, SessionConfig, SessionContext

# Runtime environment: OS disk manager plus a fair spill pool (size in bytes)
runtime = (
    RuntimeEnvBuilder()
    .with_disk_manager_os()
    .with_fair_spill_pool(10000000)
)

# Session-level configuration options
config = (
    SessionConfig()
    .with_create_default_catalog_and_schema(True)
    .with_default_catalog_and_schema("foo", "bar")
    .with_target_partitions(8)
    .with_information_schema(True)
    .with_repartition_joins(False)
    .with_repartition_aggregations(False)
    .with_repartition_windows(False)
    .with_parquet_pruning(False)
    .set("datafusion.execution.parquet.pushdown_filters", "true")
)

ctx = SessionContext(config, runtime)

Refer to the API documentation for more information.

Printing the context will show the current configuration settings.

print(ctx)

Extensions

For information about how to extend DataFusion Python, please see the extensions page of the online documentation.

More Examples

See the examples directory for more information, including:

  • Executing Queries with DataFusion
  • Running User-Defined Python Code
  • Substrait Support

How to install

uv

uv add datafusion

Pip

pip install datafusion
# or
python -m pip install datafusion

Conda

conda install -c conda-forge datafusion

You can verify the installation by running:

>>> import datafusion
>>> datafusion.__version__
'0.6.0'

How to develop

This assumes that you have Rust and Cargo installed. We use the workflow recommended by pyo3 and maturin. The Maturin tools used in this workflow can be installed via either uv or pip; both approaches offer the same experience, but uv is recommended since it is significantly faster than pip.

Currently, either protobuf or cmake must be installed for protobuf support.

Bootstrap (uv):

By default uv will attempt to build the datafusion Python package. For development we prefer to build manually. This means that when creating your virtual environment with uv sync, you need to pass the additional flag --no-install-package datafusion, and for uv run commands the additional parameter --no-project.

# fetch this repo
git clone git@github.com:apache/datafusion-python.git
# cd to the repo root
cd datafusion-python/
# create the virtual environment
uv sync --dev --no-install-package datafusion
# activate the environment
source .venv/bin/activate

Bootstrap (pip):

# fetch this repo
git clone git@github.com:apache/datafusion-python.git
# cd to the repo root
cd datafusion-python/
# prepare development environment (used to build wheel / install in development)
python3 -m venv .venv
# activate the venv
source .venv/bin/activate
# update pip itself if necessary
python -m pip install -U pip
# install dependencies
python -m pip install -r pyproject.toml

The tests rely on test data in git submodules.

git submodule update --init

Whenever Rust code changes (your changes or via git pull):

# make sure you activate the venv using "source .venv/bin/activate" first
maturin develop --uv
python -m pytest

Alternatively if you are using uv you can do the following without needing to activate the virtual environment:

uv run --no-project maturin develop --uv
uv run --no-project pytest

To run the FFI tests within the examples folder, after you have built datafusion-python with the previous commands:

cd examples/datafusion-ffi-example
uv run --no-project maturin develop --uv
uv run --no-project pytest python/tests/_test_*py

Running & Installing pre-commit hooks

datafusion-python uses pre-commit to assist developers with code linting and to reduce the number of commits that ultimately fail in CI due to linter errors. Using the pre-commit hooks is optional but helpful for keeping PRs clean and concise.

Our pre-commit hooks can be installed by running pre-commit install, which installs the hooks into your DATAFUSION_PYTHON_ROOT/.git directory and runs them each time you perform a commit, aborting the commit if an offending lint is found so that you can make changes locally before pushing.

The pre-commit hooks can also be run ad hoc, without installing them, by simply running pre-commit run --all-files.

NOTE: the current pre-commit hooks require docker and cmake. See the note on protobuf above.

Running linters without using pre-commit

There are scripts in ci/scripts for running Rust and Python linters.

./ci/scripts/python_lint.sh
./ci/scripts/rust_clippy.sh
./ci/scripts/rust_fmt.sh
./ci/scripts/rust_toml_fmt.sh

Checking Upstream DataFusion Coverage

This project includes an AI agent skill for auditing which features from the upstream Apache DataFusion Rust library are not yet exposed in these Python bindings. This is useful when adding missing functions, auditing API coverage, or ensuring parity with upstream.

The skill accepts an optional area argument:

  • scalar functions
  • aggregate functions
  • window functions
  • dataframe
  • session context
  • ffi types
  • all

If no argument is provided, it defaults to checking all areas. The skill will fetch the upstream DataFusion documentation, compare it against the functions and methods exposed in this project, and produce a coverage report listing what is currently exposed and what is missing.

The skill definition lives in .ai/skills/check-upstream/SKILL.md and follows the Agent Skills open standard. It can be used by any AI coding agent that supports skill discovery, or followed manually.

How to update dependencies

To change test dependencies, edit pyproject.toml and run:

uv sync --dev --no-install-package datafusion