commit    801de2fbcf5bcbce0c019ed4b35ff3fc863b141b
author    Neal Richardson <neal.p.richardson@gmail.com>  Fri Jun 14 16:09:34 2024 -0400
committer GitHub <noreply@github.com>                    Fri Jun 14 16:09:34 2024 -0400
tree      dcbf085c9a904f2725cb31f3ccdb347d0759d9dc
parent    69e8a78c018da88b60f9eb2b3b45703f81f3c93d
GH-42143: [R] Sanitize R metadata (#41969)

### Rationale for this change

`arrow` uses R's `serialize()`/`unserialize()` to store additional metadata in the Arrow schema. This PR adds extra checking and sanitizing to make reading this metadata robust to data of unknown provenance.

### What changes are included in this PR?

* When writing metadata, we strip out all but simple types: strings, numbers, booleans, lists, etc. Objects of other types, such as environments, external pointers, and other language types, are removed.
* When reading metadata, the same filter is applied. If there are types not in the allowlist, one of two things happens. By default, they are removed with a warning. If you set `options(arrow.unsafe_metadata = TRUE)`, the full metadata, including disallowed types, is returned, also with a warning. This option is an escape hatch in case we are too strict about dropping types when reading files produced by older versions of the package that did not filter them out.
* `unserialize()` is called in a way that prevents promises contained in the data from being automatically invoked. This technique works on all versions of R: it does not depend on the patch for RDS reading that was included in R 4.4.
* Other sanity checking is stricter about only reading back in something of the form we wrote out: we assert that the data is ASCII-serialized and, if it is compressed, that the compression is gzip, matching what we do on serialization. It's not clear that this is necessary, but it doesn't hurt to be extra strict here.

### Are these changes tested?

Yes.

### Are there any user-facing changes?

For most users, no. But: **This PR contains a "Critical Fix".** Without this patch, it is possible to construct an Arrow or Parquet file containing code that would execute when the R metadata is applied while converting to a data.frame.

If you are using an older version of the package and are reading data from a source you do not trust, you can read into a `Table` and use its internal `$to_data_frame()` method, as in `read_parquet(..., as_data_frame = FALSE)$to_data_frame()`. This should skip the reading of the R metadata.

* GitHub Issue: #42143
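To make the allowlist idea concrete, here is a minimal sketch in R of a recursive type filter over unserialized metadata. This is not the package's actual implementation; `sanitize_metadata` and `safe_types` are hypothetical names, and the real code does additional checks (serialization format, compression, promise handling) described above.

```r
# Hypothetical allowlist of "simple" types, per the commit message:
# strings, numbers, booleans, lists, etc.
safe_types <- c("NULL", "logical", "integer", "double", "complex",
                "character", "raw", "list")

sanitize_metadata <- function(x) {
  if (!typeof(x) %in% safe_types) {
    # Environments, external pointers, closures, language objects, etc.
    warning("Stripping metadata element of type ", typeof(x))
    return(NULL)
  }
  if (is.list(x)) {
    # Recurse into lists; disallowed elements become NULL placeholders.
    x[] <- lapply(x, sanitize_metadata)
  }
  x
}

# Example: a list mixing plain data with an environment.
meta <- list(name = "col1", attrs = list(1:3, env = new.env()))
clean <- suppressWarnings(sanitize_metadata(meta))
# clean$attrs$env is now NULL; plain values are untouched.
```

For untrusted files read with an older package version, the commit message's workaround is to bypass the metadata step by reading into a `Table` first, e.g. `read_parquet(..., as_data_frame = FALSE)$to_data_frame()`.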
Apache Arrow is a development platform for in-memory analytics. It contains a set of technologies that enable big data systems to process and move data fast.
Major components of the project include:
Arrow is an Apache Software Foundation project. Learn more at arrow.apache.org.
The reference Arrow libraries contain many distinct software components:
The official Arrow libraries in this repository are in different stages of implementing the Arrow format and related features. See our current feature matrix on git main.
Please read our latest project contribution guide.
Even if you do not plan to contribute to Apache Arrow itself or Arrow integrations in other projects, we'd be happy to have you involved: