ARROW-10354: [Rust][DataFusion] regexp_extract function to select regex groups from strings

Adds a `regexp_extract` compute kernel to select a substring based on a regular expression.
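For context, a minimal sketch of what such a kernel can look like, assuming the `regex` crate and an arrow-rs vintage whose string-builder methods return `Result`; the name and signature here are illustrative rather than the exact code in this PR:

```rust
use arrow::array::{Array, StringArray, StringBuilder};
use arrow::error::{ArrowError, Result};
use regex::Regex;

/// For each value, return the contents of the first capture group of `pattern`,
/// or null if the value is null or does not match.
pub fn regexp_extract(array: &StringArray, pattern: &str) -> Result<StringArray> {
    let re = Regex::new(pattern)
        .map_err(|e| ArrowError::ComputeError(format!("invalid regex: {}", e)))?;
    let mut builder = StringBuilder::new(array.len());
    for i in 0..array.len() {
        if array.is_null(i) {
            builder.append_null()?;
        } else {
            match re.captures(array.value(i)).and_then(|caps| caps.get(1)) {
                Some(m) => builder.append_value(m.as_str())?,
                None => builder.append_null()?,
            }
        }
    }
    Ok(builder.finish())
}
```

Called with e.g. `regexp_extract(&array, r"(\d+)")`, this would pull the first run of digits out of every row and yield null where there is no match.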

Some things I did that I may be doing wrong:

* I exposed `GenericStringBuilder`
* I build the resulting array using a builder, which looks quite different from e.g. the substring kernel. Should I change it to match, e.g. for performance reasons?
* In order to apply the new function in DataFusion, I did not see a better solution than to handle the pattern string as a `StringArray`, take its first record to compile the regex pattern, and apply that to all values (a rough sketch of this follows below). Is there a way to require that an argument be a literal/scalar so that it cannot be filled by e.g. another column? I consider my current implementation quite error-prone and would like to make it a bit more robust.
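As a rough illustration of that workaround, the sketch below (with a hypothetical helper name, and arrow's `ComputeError` standing in for whatever error type the real code uses) compiles the regex once from the first row of the pattern array:

```rust
use arrow::array::{Array, StringArray};
use arrow::error::{ArrowError, Result};
use regex::Regex;

/// Sketch of the workaround described above: the pattern argument arrives as a
/// StringArray, so compile the regex from its first row and reuse it for every
/// value. This silently assumes the pattern is a literal that is identical in
/// every row, which is exactly the fragility mentioned above.
fn compile_pattern_from_first_row(patterns: &StringArray) -> Result<Regex> {
    if patterns.is_empty() || patterns.is_null(0) {
        return Err(ArrowError::ComputeError(
            "regexp_extract expects a non-null pattern literal".to_string(),
        ));
    }
    Regex::new(patterns.value(0))
        .map_err(|e| ArrowError::ComputeError(format!("invalid regex pattern: {}", e)))
}
```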

Closes #9428 from sweb/ARROW-10354/regexp_extract

Authored-by: Florian Müller <florian@tomueller.de>
Signed-off-by: Andrew Lamb <andrew@nerdnetworks.org>
README.md

Apache Arrow


Powering In-Memory Analytics

Apache Arrow is a development platform for in-memory analytics. It contains a set of technologies that enable big data systems to process and move data fast.

Major components of the project include:

Arrow is an Apache Software Foundation project. Learn more at arrow.apache.org.

What's in the Arrow libraries?

The reference Arrow libraries contain many distinct software components:

  • Columnar vector and table-like containers (similar to data frames) supporting flat or nested types
  • Fast, language-agnostic metadata messaging layer (using Google's Flatbuffers library)
  • Reference-counted off-heap buffer memory management, for zero-copy memory sharing and handling memory-mapped files
  • IO interfaces to local and remote filesystems
  • Self-describing binary wire formats (streaming and batch/file-like) for remote procedure calls (RPC) and interprocess communication (IPC)
  • Integration tests for verifying binary compatibility between the implementations (e.g. sending data from Java to C++)
  • Conversions to and from other in-memory data structures
  • Readers and writers for various widely-used file formats (such as Parquet, CSV)

Implementation status

The official Arrow libraries in this repository are in different stages of implementing the Arrow format and related features. See our current feature matrix on git master.

How to Contribute

Please read our latest project contribution guide.

Getting involved

Even if you do not plan to contribute to Apache Arrow itself or Arrow integrations in other projects, we'd be happy to have you involved: