fix: Provide more generic API for the capacity limit parsing (#20372)

## Which issue does this PR close?
- Closes #20371.

## Rationale for this change
Currently, `datafusion.runtime.max_temp_directory_size` is a disk-based
config, but when it is set to an invalid limit or an invalid unit, the
error message refers to a "memory limit". This is inconsistent: a
disk-based runtime config produces a memory-related error. The error
message can be made more generic by including the name of the
problematic config, so that it covers both memory- and disk-based
capacity/size settings.

**Current:**
```
statement error DataFusion error: Error during planning: Failed to parse number from memory limit 'invalid_size'
SET datafusion.runtime.max_temp_directory_size = 'invalid_size'

statement error DataFusion error: Error during planning: Unsupported unit 'B' in memory limit '1024B'
SET datafusion.runtime.max_temp_directory_size = '1024B'
```

**New:**
```
statement error DataFusion error: Error during planning: Failed to parse number from 'datafusion.runtime.max_temp_directory_size', limit 'invalid_size'
SET datafusion.runtime.max_temp_directory_size = 'invalid_size'

statement error DataFusion error: Error during planning: Unsupported unit 'B' in 'datafusion.runtime.max_temp_directory_size', limit '1024B'. Unit must be one of: 'K', 'M', 'G'
SET datafusion.runtime.max_temp_directory_size = '1024B'
```

## What changes are included in this PR?
This PR offers the following improvements:
1. The error message now includes the name of the problematic config,
so it covers all use cases (both memory- and disk-based capacity/size
settings),
2. The allowed units are now listed in the error message,
3. `SessionContext.parse_memory_limit()` is renamed to
`SessionContext.parse_capacity_limit()` to cover both memory- and
disk-based capacity/size settings. This applies to both
`SessionContext` and the benchmark utils,
4. New unit test cases are added to cover these negative use cases.
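Roughly, the renamed helper can be sketched as follows. This is an illustrative sketch only, not the exact DataFusion implementation (which may differ in details such as fractional values and bounds checking); the error strings match the examples above, and the function takes the config name so the same parser serves both memory and disk limits:

```rust
/// Parse a human-readable capacity limit such as "100M" or "2G".
/// `config_name` is embedded in the errors so the message fits both
/// memory- and disk-based settings. Illustrative sketch only.
fn parse_capacity_limit(config_name: &str, limit: &str) -> Result<u64, String> {
    if limit.len() < 2 {
        return Err(format!(
            "Failed to parse number from '{config_name}', limit '{limit}'"
        ));
    }
    // Split the trailing unit character from the numeric part.
    let (number, unit) = limit.split_at(limit.len() - 1);
    let number: f64 = number.parse().map_err(|_| {
        format!("Failed to parse number from '{config_name}', limit '{limit}'")
    })?;
    let factor = match unit {
        "K" => 1u64 << 10,
        "M" => 1u64 << 20,
        "G" => 1u64 << 30,
        other => {
            return Err(format!(
                "Unsupported unit '{other}' in '{config_name}', limit '{limit}'. \
                 Unit must be one of: 'K', 'M', 'G'"
            ))
        }
    };
    Ok((number * factor as f64) as u64)
}

fn main() {
    let name = "datafusion.runtime.max_temp_directory_size";
    assert_eq!(parse_capacity_limit(name, "2G"), Ok(2 * (1u64 << 30)));
    // Both failure modes now mention the offending config name.
    assert!(parse_capacity_limit(name, "invalid_size")
        .unwrap_err()
        .contains("Failed to parse number from"));
    assert!(parse_capacity_limit(name, "1024B")
        .unwrap_err()
        .contains("Unsupported unit 'B'"));
}
```

Because the config name is a parameter rather than hard-coded wording, the same function can be reused for any future capacity-style setting without touching the error text.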

## Are these changes tested?
Yes, new unit test cases are added to cover the changes.

## Are there any user-facing changes?
Yes, more detailed and generic error messages are exposed to end users.

---------

Co-authored-by: Martin Grigorov <martin-g@users.noreply.github.com>
4 files changed
tree: 5e42193505ed598b130b178cea008188f8bf78da
README.md

Apache DataFusion



DataFusion is an extensible query engine written in Rust that uses Apache Arrow as its in-memory format.

This crate provides libraries and binaries for developers building fast and feature-rich database and analytic systems, customized for particular workloads. See use cases for examples. The following related subprojects target end users:

“Out of the box,” DataFusion offers SQL and DataFrame APIs, excellent performance, built-in support for CSV, Parquet, JSON, and Avro, extensive customization, and a great community.

DataFusion features a full query planner, a columnar, streaming, multi-threaded, vectorized execution engine, and partitioned data sources. You can customize DataFusion at almost all points including additional data sources, query languages, functions, custom operators and more. See the Architecture section for more details.


What can you do with this crate?

DataFusion is great for building projects such as domain-specific query engines, new database platforms and data pipelines, query languages and more. It lets you start quickly from a fully working engine, and then customize those features specific to your needs. See the list of known users.

Contributing to DataFusion

Please see the contributor guide and communication pages for more information.

Crate features

This crate has several features which can be specified in your Cargo.toml.

Default features:

  • nested_expressions: functions for working with nested types such as array_to_string
  • compression: reading files compressed with xz2, bzip2, flate2, and zstd
  • crypto_expressions: cryptographic functions such as md5 and sha256
  • datetime_expressions: date and time functions such as to_timestamp
  • encoding_expressions: encode and decode functions
  • parquet: support for reading the Apache Parquet format
  • sql: support for SQL parsing and planning
  • regex_expressions: regular expression functions, such as regexp_match
  • unicode_expressions: include Unicode-aware functions such as character_length
  • unparser: enables support to reverse LogicalPlans back into SQL
  • recursive_protection: uses the recursive crate for stack overflow protection

Optional features:

  • avro: support for reading the Apache Avro format
  • backtrace: include backtrace information in error messages
  • parquet_encryption: support for using Parquet Modular Encryption
  • serde: enable arrow-schema's serde feature
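For example, an optional feature can be enabled alongside the defaults in your Cargo.toml. This is an illustrative fragment; the version shown is a placeholder, not a recommendation:

```toml
# Illustrative only: "*" is a placeholder; pin to the DataFusion
# release you actually use.
[dependencies]
datafusion = { version = "*", features = ["avro", "backtrace"] }
```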

DataFusion API Evolution and Deprecation Guidelines

Public methods in Apache DataFusion evolve over time: while we try to maintain a stable API, we also improve the API over time. As a result, we typically deprecate methods before removing them, according to the deprecation guidelines.
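As a generic illustration of this policy (a sketch, not DataFusion's actual code; the names and version are placeholders), a renamed API is typically kept for a while with a deprecation attribute pointing at its replacement:

```rust
// Generic sketch of the deprecate-before-remove pattern; the parser
// body is a minimal stand-in that accepts plain byte counts only.
fn parse_capacity_limit(limit: &str) -> Option<u64> {
    limit.parse().ok()
}

#[deprecated(since = "0.1.0", note = "use parse_capacity_limit instead")]
fn parse_memory_limit(limit: &str) -> Option<u64> {
    // The old name forwards to the new implementation.
    parse_capacity_limit(limit)
}

fn main() {
    // Callers of the old name still compile, but see a deprecation
    // warning until the method is removed in a later release.
    #[allow(deprecated)]
    let old = parse_memory_limit("1024");
    assert_eq!(old, parse_capacity_limit("1024"));
}
```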

Dependencies and Cargo.lock

Following the guidance on committing Cargo.lock files, this project commits its Cargo.lock file.

CI uses the committed Cargo.lock file, and dependencies are updated regularly using Dependabot PRs.