GH-42143: [R] Sanitize R metadata (#41969)

### Rationale for this change

`arrow` uses R's `serialize()`/`unserialize()` to store additional
metadata in the Arrow schema. This PR adds extra checking and
sanitizing to make reading this metadata robust to data of unknown
provenance.
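
As background on why this matters, here is a minimal illustration of the attack class (not code from this PR or from `arrow`): a serialized environment can smuggle a promise whose expression runs as soon as the restored object is touched.

```r
# Minimal illustration of the risk: a serialized payload can carry a
# promise whose expression executes when the restored object is used.
e <- new.env()
delayedAssign("x", message("arbitrary code runs here"), assign.env = e)
payload <- serialize(e, NULL)

restored <- unserialize(payload)
# On R versions predating the 4.4 RDS hardening, touching the binding
# forces the promise and runs the smuggled expression:
restored$x
```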

### What changes are included in this PR?

* When writing metadata, we strip out all but simple types: strings,
numbers, booleans, lists, etc. Objects of other types, such as
environments, external pointers, and other language objects, are removed
(a rough sketch of such a filter follows this list).
* When reading metadata, the same filter is applied. If there are types
that are not in the allowlist, one of two things happens. By default,
they are removed with a warning. If you set
`options(arrow.unsafe_metadata = TRUE)`, the full metadata, including
disallowed types, is returned, also with a warning (see the usage sketch
after this list). This option is an escape hatch in case we are too
strict in dropping types when reading files produced by older versions
of the package that did not filter them out.
* `unserialize()` is called in a way that prevents promises contained in
the data from being invoked automatically. This technique works on all
versions of R: it does not depend on the patch for RDS reading that was
included in R 4.4.
* Other sanity checks are stricter about only reading back in
something of the form we wrote out: we assert that the data is
ASCII-serialized and that, if it is compressed, the compression is gzip,
the same way we do on serialization (a sketch of these checks follows
this list). It's not clear that this is necessary, but it doesn't hurt
to be extra strict here.
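
A rough sketch of the kind of allowlist filter described in the first two items; `sanitize_metadata()` is a hypothetical name, and the real implementation also has to handle attributes and other details:

```r
# Hypothetical sketch of an allowlist filter (not the package's actual code).
sanitize_metadata <- function(x) {
  allowed <- c("NULL", "character", "double", "integer", "logical",
               "complex", "raw")
  if (is.list(x)) {
    # Recurse into lists and drop elements that did not survive the filter
    out <- lapply(x, sanitize_metadata)
    out[vapply(out, Negate(is.null), logical(1))]
  } else if (typeof(x) %in% allowed) {
    x
  } else {
    # Environments, functions, external pointers, language objects, etc.
    NULL
  }
}
```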
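
A usage sketch of the escape hatch (`"old_file.parquet"` is a placeholder path):

```r
# Opt in to keeping metadata with disallowed types; a warning is still
# raised. Only do this for files you trust.
options(arrow.unsafe_metadata = TRUE)
df <- arrow::read_parquet("old_file.parquet")
```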
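
And a sketch of the byte-level sanity checks from the last item; `check_serialized_metadata()` is a hypothetical helper, not the package's actual function:

```r
check_serialized_metadata <- function(raw_meta) {
  # If the payload is compressed, it must be gzip (magic bytes 0x1f 0x8b),
  # matching what we produce on serialization.
  if (length(raw_meta) >= 2 && identical(raw_meta[1:2], as.raw(c(0x1f, 0x8b)))) {
    raw_meta <- memDecompress(raw_meta, type = "gzip")
  }
  # ASCII serialization streams begin with "A" plus a newline; reject others.
  stopifnot(identical(rawToChar(raw_meta[1:2]), "A\n"))
  raw_meta
}
```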

### Are these changes tested?

Yes

### Are there any user-facing changes?

For most users, no. But:

**This PR contains a "Critical Fix".** 

Without this patch, it is possible to construct an Arrow or Parquet file
containing code that executes when the R metadata is applied during
conversion to a data.frame. If you are using an older version of the
package and are reading data from a source you do not trust, you can
read into a `Table` and use its internal `$to_data_frame()` method, as
in `read_parquet(..., as_data_frame = FALSE)$to_data_frame()`. This
should skip the reading of the R metadata.
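
For example (`"untrusted.parquet"` is a placeholder path):

```r
# On older versions of the package, avoid applying R metadata from an
# untrusted file by converting through the Table method:
tab <- arrow::read_parquet("untrusted.parquet", as_data_frame = FALSE)
df <- tab$to_data_frame()  # should skip applying the stored R metadata
```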
* GitHub Issue: #42143