commit b54e648a7ee82e1ef292ef36df7d49902171b94f
author: wiedld <wiedld@users.noreply.github.com> | Wed Jan 01 08:03:11 2025 -0500
committer: GitHub <noreply@github.com> | Wed Jan 01 08:03:11 2025 -0500
tree: 74fae4f9f5c4a8fc87a3ec44763478a55d273d50
parent: aafec07e086463fc7ed72c704e9f7e367460618a
Supporting writing schema metadata when writing Parquet in parallel (#13866)

* refactor: make ParquetSink tests a bit more readable
* chore(11770): add new ParquetOptions.skip_arrow_metadata
* test(11770): demonstrate that the single threaded ParquetSink is already writing the arrow schema in the kv_meta, and allow disablement
* refactor(11770): replace with new method, since the kv_metadata is inherent to TableParquetOptions and therefore we should explicitly make the API apparent that you have to include the arrow schema or not
* fix(11770): fix parallel ParquetSink to encode arrow schema into the file metadata, based on the ParquetOptions
* refactor(11770): provide deprecation warning for TryFrom
* test(11770): update tests with new default to include arrow schema
* refactor: including partitioning of arrow schema inserted into kv_metadata
* test: update tests for new config prop, as well as the new file partition offsets based upon larger metadata
* chore: avoid cloning in tests, and update code docs
* refactor: return to the WriterPropertiesBuilder::TryFrom<TableParquetOptions>, and separately add the arrow_schema to the kv_metadata on the TableParquetOptions
* refactor: require the arrow_schema key to be present in the kv_metadata, if it is required by the configuration
* chore: update configs.md
* test: update tests to handle the (default) required arrow schema in the kv_metadata
* chore: add reference to arrow-rs upstream PR
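As a sketch of the option this change introduces (the field path below is assumed from the commit message, not verified against a released API), disabling the arrow schema embedded in the Parquet key/value metadata might look like:

```rust
use datafusion::config::TableParquetOptions;

fn parquet_options_without_arrow_schema() -> TableParquetOptions {
    // Assumption from this commit: `skip_arrow_metadata` lives on the
    // global ParquetOptions embedded in TableParquetOptions; when false
    // (the default), the serialized arrow schema is written into the
    // Parquet key/value metadata.
    let mut options = TableParquetOptions::default();
    options.global.skip_arrow_metadata = true;
    options
}

fn main() {
    let options = parquet_options_without_arrow_schema();
    assert!(options.global.skip_arrow_metadata);
}
```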
DataFusion is an extensible query engine written in Rust that uses Apache Arrow as its in-memory format.
This crate provides libraries and binaries for developers building fast and feature-rich database and analytic systems, customized to particular workloads. See use cases for examples. Several related subprojects target end users.
“Out of the box,” DataFusion offers [SQL] and [DataFrame] APIs, excellent performance, built-in support for CSV, Parquet, JSON, and Avro, extensive customization, and a great community.
DataFusion features a full query planner, a columnar, streaming, multi-threaded, vectorized execution engine, and partitioned data sources. You can customize DataFusion at almost all points including additional data sources, query languages, functions, custom operators and more. See the Architecture section for more details.
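As a minimal sketch of those two APIs (assuming the datafusion and tokio crates as dependencies, and a local example.csv with columns a and b), the same aggregation can be expressed via SQL or the DataFrame API:

```rust
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    let ctx = SessionContext::new();

    // Register a CSV file as a table and query it with SQL.
    ctx.register_csv("example", "example.csv", CsvReadOptions::new())
        .await?;
    let df = ctx.sql("SELECT a, MIN(b) FROM example GROUP BY a").await?;
    df.show().await?;

    // The equivalent query through the DataFrame API.
    let df = ctx
        .table("example")
        .await?
        .aggregate(vec![col("a")], vec![min(col("b"))])?;
    df.show().await?;

    Ok(())
}
```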
DataFusion is great for building projects such as domain-specific query engines, new database platforms, data pipelines, query languages, and more. It lets you start quickly from a fully working engine and then customize the features specific to your use. Click Here to see a list of known users.
Please see the contributor guide and communication pages for more information.
This crate has several features which can be specified in your Cargo.toml.
Default features:
- nested_expressions: functions for working with nested types such as array_to_string
- compression: reading files compressed with xz2, bzip2, flate2, and zstd
- crypto_expressions: cryptographic functions such as md5 and sha256
- datetime_expressions: date and time functions such as to_timestamp
- encoding_expressions: encode and decode functions
- parquet: support for reading the Apache Parquet format
- regex_expressions: regular expression functions, such as regexp_match
- unicode_expressions: include unicode aware functions such as character_length
- unparser: enables support to reverse LogicalPlans back into SQL
- recursive_protection: uses recursive for stack overflow protection

Optional features:
- avro: support for reading the Apache Avro format
- backtrace: include backtrace information in error messages
- pyarrow: conversions between PyArrow and DataFusion types
- serde: enable arrow-schema's serde feature
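A downstream crate enables an optional feature in its Cargo.toml, e.g. `datafusion = { version = "...", features = ["avro"] }`. As a sketch of what that unlocks (assuming read_avro and AvroReadOptions are exposed through the prelude when the feature is on, and that example.avro exists):

```rust
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    let ctx = SessionContext::new();
    // `read_avro` is only available when datafusion is built with the
    // `avro` feature; "example.avro" is a placeholder path.
    let df = ctx
        .read_avro("example.avro", AvroReadOptions::default())
        .await?;
    df.show().await?;
    Ok(())
}
```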
The Rust toolchain releases are tracked at Rust Versions and follow semantic versioning. A Rust toolchain release can be identified by a version string like 1.80.0, or more generally major.minor.patch.
DataFusion supports the last 4 stable Rust minor versions released and any such versions released within the last 4 months.
For example, given the releases 1.78.0, 1.79.0, 1.80.0, 1.80.1, and 1.81.0, DataFusion will support 1.78.0, which is 3 minor versions prior to the most recent minor release, 1.81.
Note: if a Rust hotfix is released for the current MSRV, the MSRV will be updated to the specific minor version that includes all applicable hotfixes; this takes precedence over the other policies.
DataFusion enforces its MSRV policy using an MSRV CI check.
Public methods in Apache DataFusion evolve: while we try to maintain a stable API, we also improve it over time. As a result, we typically deprecate methods before removing them, according to the deprecation guidelines.
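As an illustration of that process (a sketch only; the struct and method names below are hypothetical, not DataFusion APIs), Rust's #[deprecated] attribute keeps the old method compiling while steering callers to the replacement:

```rust
pub struct Catalog;

impl Catalog {
    /// Hypothetical replacement method.
    pub fn table_names(&self) -> Vec<String> {
        vec![]
    }

    /// Hypothetical deprecated method, kept so existing code keeps
    /// compiling while emitting a compiler warning.
    #[deprecated(since = "44.0.0", note = "use `table_names` instead")]
    pub fn list_tables(&self) -> Vec<String> {
        self.table_names()
    }
}

fn main() {
    let catalog = Catalog;
    // Calling the deprecated method still works but warns at compile time.
    #[allow(deprecated)]
    let _ = catalog.list_tables();
}
```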