feat: Enhance `array_slice` functionality to support `ListView` and `LargeListView` types (#18432)

## Which issue does this PR close?

<!--
We generally require a GitHub issue to be filed for all bug fixes and
enhancements and this helps us generate change logs for our releases.
You can link an issue to this PR using the GitHub syntax. For example
`Closes #123` indicates that this PR will close issue #123.
-->

- Closes #18351

## Rationale for this change

`array_slice` should accept `ListView` / `LargeListView` inputs directly, rather than requiring callers to first cast them to `List` / `LargeList`.

<!--
Why are you proposing this change? If this is already explained clearly
in the issue then this section is not needed.
Explaining clearly why changes are proposed helps reviewers understand
your changes and offer better suggestions for fixes.
-->

## What changes are included in this PR?


- Extend `array_slice_inner` to handle `ListView` / `LargeListView` arrays
directly.
- Share the stride/bounds logic between the list and list-view
implementations via a new `SlicePlan`.
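The shared bounds logic can be sketched as follows. `resolve_slice` is a hypothetical stand-in for the per-element math a `SlicePlan` would share between the `List` and `ListView` code paths; the exact semantics below (1-based, inclusive, negative-index-aware, clamped) follow `array_slice`'s documented behavior, but the function name and shape are assumptions, not the PR's implementation:

```rust
/// Sketch: resolve a 1-based, inclusive `array_slice(begin, end)` request
/// against a single list element of length `len`, returning the half-open
/// `(start, stop)` range of child values to copy. Negative indices count
/// back from the end of the element.
fn resolve_slice(begin: i64, end: i64, len: i64) -> (i64, i64) {
    // Map 1-based (and negative) positions onto 0-based offsets.
    let start = if begin < 0 { len + begin } else { begin - 1 };
    // Inclusive `end` becomes an exclusive `stop`.
    let stop = if end < 0 { len + end + 1 } else { end };
    // Clamp so out-of-range requests yield an empty or truncated slice
    // rather than an error. The same plan works for offset-based lists
    // and offset/size-based list views, since both reduce to a
    // (start, stop) pair per element.
    let start = start.clamp(0, len);
    let stop = stop.clamp(start, len);
    (start, stop)
}
```

Computing these ranges once, independently of the array's physical layout, is what lets the list and list-view implementations share a single code path.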



<!--
There is no need to duplicate the description in the issue here but it
is sometimes worth providing a summary of the individual changes in this
PR.
-->

## Are these changes tested?
Yes

<!--
We typically require tests for all PRs in order to:
1. Prevent the code from being accidentally broken by subsequent changes
2. Serve as another way to document the expected behavior of the code

If tests are not included in your PR, please explain why (for example,
are they covered by existing tests)?
-->

## Are there any user-facing changes?

<!--
If there are user-facing changes then we may require documentation to be
updated before approving the PR.
-->

<!--
If there are any breaking changes to public APIs, please add the `api
change` label.
-->
Yes. `array_slice` now accepts `ListView` and `LargeListView` arrays
without requiring an explicit cast.
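For illustration, assuming a hypothetical table `t` whose column `v` is a `ListView` of integers, a query like the following now works without a cast:

```sql
-- `array_slice` is 1-based and inclusive; with this change the call below
-- works whether `v` is a List, LargeList, ListView, or LargeListView column.
SELECT array_slice(v, 2, 4) FROM t;
-- e.g. for v = [1, 2, 3, 4, 5] the result is [2, 3, 4]
```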
Apache DataFusion



DataFusion is an extensible query engine written in Rust that uses Apache Arrow as its in-memory format.

This crate provides libraries and binaries for developers building fast and feature-rich database and analytic systems, customized to particular workloads. See use cases for examples. Several related subprojects target end users.

“Out of the box,” DataFusion offers SQL and Dataframe APIs, excellent performance, built-in support for CSV, Parquet, JSON, and Avro, extensive customization, and a great community.

DataFusion features a full query planner, a columnar, streaming, multi-threaded, vectorized execution engine, and partitioned data sources. You can customize DataFusion at almost all points including additional data sources, query languages, functions, custom operators and more. See the Architecture section for more details.


What can you do with this crate?

DataFusion is great for building projects such as domain-specific query engines, new database platforms and data pipelines, query languages and more. It lets you start quickly from a fully working engine and then customize the features specific to your use case. See the list of known users for examples.

Contributing to DataFusion

Please see the contributor guide and communication pages for more information.

Crate features

This crate has several features which can be specified in your Cargo.toml.

Default features:

  • nested_expressions: functions for working with nested types, such as array_to_string
  • compression: reading files compressed with xz2, bzip2, flate2, and zstd
  • crypto_expressions: cryptographic functions such as md5 and sha256
  • datetime_expressions: date and time functions such as to_timestamp
  • encoding_expressions: encode and decode functions
  • parquet: support for reading the Apache Parquet format
  • sql: support for SQL parsing / planning
  • regex_expressions: regular expression functions, such as regexp_match
  • unicode_expressions: Unicode-aware functions such as character_length
  • unparser: support for reversing LogicalPlans back into SQL
  • recursive_protection: uses the recursive crate for stack overflow protection
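For example, a Cargo.toml that trims the default feature set down to just the pieces a project needs might look like this (the version number is illustrative; pin to the release you actually target):

```toml
[dependencies]
# Disable default features, then opt back in to the ones required.
datafusion = { version = "50", default-features = false, features = [
    "parquet",
    "sql",
    "datetime_expressions",
] }
```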

Optional features:

  • avro: support for reading the Apache Avro format
  • backtrace: include backtrace information in error messages
  • parquet_encryption: support for using Parquet Modular Encryption
  • pyarrow: conversions between PyArrow and DataFusion types
  • serde: enable arrow-schema's serde feature

DataFusion API Evolution and Deprecation Guidelines

Public methods in Apache DataFusion evolve over time: while we try to maintain a stable API, we also improve the API over time. As a result, we typically deprecate methods before removing them, according to the deprecation guidelines.

Dependencies and Cargo.lock

Following the guidance on committing Cargo.lock files, this project commits its Cargo.lock file.

CI uses the committed Cargo.lock file, and dependencies are updated regularly using Dependabot PRs.