
Test data files for Parquet compatibility and regression testing

  • delta_byte_array.parquet: string columns with DELTA_BYTE_ARRAY encoding. See delta_byte_array.md for details.
  • delta_length_byte_array.parquet: string columns with DELTA_LENGTH_BYTE_ARRAY encoding.
  • delta_binary_packed.parquet: INT32 and INT64 columns with DELTA_BINARY_PACKED encoding. See delta_binary_packed.md for details.
  • delta_encoding_required_column.parquet: required INT32 and STRING columns with delta encoding. See delta_encoding_required_column.md for details.
  • delta_encoding_optional_column.parquet: optional INT64 and STRING columns with delta encoding. See delta_encoding_optional_column.md for details.
  • nested_structs.rust.parquet: used to test that the Rust Arrow reader can look up the correct field in a nested struct. See ARROW-11452.
  • data_index_bloom_encoding_stats.parquet: optional STRING column. Contains optional metadata: bloom filters, column index, offset index and encoding stats.
  • null_list.parquet: a single empty list, generated from the JSON {"emptylist":[]} to test correct read/write behaviour of this base case.
  • alltypes_tiny_pages.parquet: small page sizes with dictionary encoding and a page index, generated by Impala.
  • alltypes_tiny_pages_plain.parquet: small page sizes with plain encoding and a page index, generated by Impala.
  • rle_boolean_encoding.parquet: optional BOOLEAN columns with RLE encoding.
  • fixed_length_byte_array.parquet: optional FIXED_LEN_BYTE_ARRAY column with a page index. See fixed_length_byte_array.md for details.
  • int32_with_null_pages.parquet: optional INT32 column with random null pages. See int32_with_null_pages.md for details.
  • datapage_v1-uncompressed-checksum.parquet: uncompressed INT32 columns in v1 data pages with a matching CRC.
  • datapage_v1-snappy-compressed-checksum.parquet: Snappy-compressed INT32 columns in v1 data pages with a matching CRC.
  • datapage_v1-corrupt-checksum.parquet: uncompressed INT32 columns in v1 data pages with a mismatching CRC.
  • overflow_i16_page_cnt.parquet: a row group with more than INT16_MAX pages.
  • bloom_filter.bin: deprecated bloom filter binary with a binary header and Murmur3 hashing.
  • bloom_filter.xxhash.bin: bloom filter binary with a Thrift header and xxHash hashing.
  • nan_in_stats.parquet: statistics contain NaN in max, written by PyArrow 0.8.0. See the note below on “NaN in stats”.
  • rle-dict-snappy-checksum.parquet: Snappy-compressed, dictionary-encoded INT32 and STRING columns in format v2 with a matching CRC.
  • plain-dict-uncompressed-checksum.parquet: uncompressed, dictionary-encoded INT32 and STRING columns in format v1 with a matching CRC.
  • rle-dict-uncompressed-corrupt-checksum.parquet: uncompressed, dictionary-encoded INT32 and STRING columns in format v2 with a mismatching CRC.
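
The delta-encoded files ship with companion _expect.csv files (e.g. delta_binary_packed_expect.csv) holding the expected decoded values, which makes a quick cross-check possible. A minimal sketch, assuming PyArrow and that a cast is enough to reconcile CSV type inference with the Parquet schema:

import pyarrow.csv as pcsv
import pyarrow.parquet as pq

table = pq.read_table("data/delta_binary_packed.parquet")
expected = pcsv.read_csv("data/delta_binary_packed_expect.csv")

# CSV type inference may not match the Parquet schema exactly,
# so align the types before comparing values.
assert table.cast(expected.schema).equals(expected)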

TODO: Document the remaining files in the table above.

Encrypted Files

Test files with the .parquet.encrypted suffix are encrypted using Parquet Modular Encryption.

A detailed description of Parquet Modular Encryption can be found here:

 https://github.com/apache/parquet-format/blob/encryption/Encryption.md

The following keys and key ids (the ids are used with a key_retriever) were used to encrypt the columns and footer in all of the encrypted files; the same keys appear as Python byte strings after the list:

  • Encrypted/Signed Footer:
    • key: {0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5}
    • key_id: “kf”
  • Encrypted column named double_field:
    • key: {1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,0}
    • key_id: “kc1”
  • Encrypted column named float_field:
    • key: {1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,1}
    • key_id: “kc2”
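
For reference, the same keys expressed as Python byte strings (each is a 16-byte, i.e. 128-bit, AES key):

# The three 16-byte AES keys listed above, keyed by their key ids.
FOOTER_KEY = bytes([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5])        # "kf"
DOUBLE_FIELD_KEY = bytes([1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 0])  # "kc1"
FLOAT_FIELD_KEY = bytes([1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 1])   # "kc2"

# The mapping a key retriever resolves key ids against.
KEYS_BY_ID = {"kf": FOOTER_KEY, "kc1": DOUBLE_FIELD_KEY, "kc2": FLOAT_FIELD_KEY}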

The following files are encrypted with AAD prefix “tester”:

  1. encrypt_columns_and_footer_disable_aad_storage.parquet.encrypted
  2. encrypt_columns_and_footer_aad.parquet.encrypted

Sample code that reads and checks these files can be found in the following tests:

cpp/src/parquet/encryption-read-configurations-test.cc
cpp/src/parquet/test-encryption-util.h

The external_key_material_java.parquet.encrypted file was encrypted using parquet-mr with external key material enabled, so the key material is found in the _KEY_MATERIAL_FOR_external_key_material_java.parquet.encrypted.json file. This data was written using the org.apache.parquet.crypto.keytools.mocks.InMemoryKMS KMS client, which is compatible with the TestOnlyInServerWrapKms KMS client used in C++ tests.
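
As the file names show, external key material follows a naming convention: the JSON file sits in the same directory as the data file and is named _KEY_MATERIAL_FOR_<data file name>.json. A small hypothetical helper that derives the path:

import os

def key_material_path(parquet_path: str) -> str:
    # External key material lives beside the data file, named
    # _KEY_MATERIAL_FOR_<data file name>.json.
    directory, name = os.path.split(parquet_path)
    return os.path.join(directory, f"_KEY_MATERIAL_FOR_{name}.json")

key_material_path("external_key_material_java.parquet.encrypted")
# -> '_KEY_MATERIAL_FOR_external_key_material_java.parquet.encrypted.json'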

Checksum Files

The schema for the datapage_v1-*-checksum.parquet test files is:

message m {
    required int32 a;
    required int32 b;
} 

The detailed structure for these files is as follows (a sketch of the CRC check appears after the list):

  • data/datapage_v1-uncompressed-checksum.parquet:

    [ Column "a" [ Page 0 [correct crc] | Uncompressed Contents ][ Page 1 [correct crc] | Uncompressed Contents ]]
    [ Column "b" [ Page 0 [correct crc] | Uncompressed Contents ][ Page 1 [correct crc] | Uncompressed Contents ]]
    
  • data/datapage_v1-snappy-compressed-checksum.parquet:

    [ Column "a" [ Page 0 [correct crc] | Snappy Contents ][ Page 1 [correct crc] | Snappy Contents ]]
    [ Column "b" [ Page 0 [correct crc] | Snappy Contents ][ Page 1 [correct crc] | Snappy Contents ]]
    
  • data/datapage_v1-corrupt-checksum.parquet:

    [ Column "a" [ Page 0 [bad crc] | Uncompressed Contents ][ Page 1 [correct crc] | Uncompressed Contents ]]
    [ Column "b" [ Page 0 [correct crc] | Uncompressed Contents ][ Page 1 [bad crc] | Uncompressed Contents ]]
    

The detailed structure for the *-dict-*-checksum.parquet test files is as follows (a read example with verification enabled appears after the list):

  • data/rle-dict-snappy-checksum.parquet:

    [ Column "long_field" [ Dict Page [correct crc] | Compressed PLAIN Contents ][ Page 0 [correct crc] | Compressed RLE_DICTIONARY Contents ]]
    [ Column "binary_field" [ Dict Page [correct crc] | Compressed PLAIN Contents ][ Page 0 [correct crc] | Compressed RLE_DICTIONARY Contents ]]
    
  • data/plain-dict-uncompressed-checksum.parquet:

    [ Column "long_field" [ Dict Page [correct crc] | Uncompressed PLAIN_DICTIONARY(DICT) Contents ][ Page 0 [correct crc] | Uncompressed PLAIN_DICTIONARY Contents ]]
    [ Column "binary_field" [ Dict Page [correct crc] | Uncompressed PLAIN_DICTIONARY(DICT) Contents ][ Page 0 [correct crc] | Uncompressed PLAIN_DICTIONARY Contents ]]
    
  • data/rle-dict-uncompressed-corrupt-checksum.parquet:

    [ Column "long_field" [ Dict Page [bad crc] | Uncompressed PLAIN Contents ][ Page 0 [correct crc] | Uncompressed RLE_DICTIONARY Contents ]]
    [ Column "binary_field" [ Dict Page [bad crc] | Uncompressed PLAIN Contents ][ Page 0 [correct crc] | Uncompressed RLE_DICTIONARY Contents ]]
    

Bloom Filter Files

The bloom filter examples have been generated by parquet-mr. They are not Parquet files; they contain only the bloom filter header and payload.

For each of bloom_filter.bin and bloom_filter.xxhash.bin, the bloom filter was generated by inserting the strings “hello”, “parquet”, “bloom”, “filter”.

bloom_filter.bin uses the original Murmur3-based bloom filter format as of https://github.com/apache/parquet-format/commit/54839ad5e04314c944fed8aa4bc6cf15e4a58698.

bloom_filter.xxhash.bin uses the newer xxHash-based bloom filter format as of https://github.com/apache/parquet-format/commit/3fb10e00c2204bf1c6cc91e094c59e84cefcee33.
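
For illustration, the newer format is a split-block bloom filter keyed by XXH64 (seed 0). The sketch below is a hypothetical membership check using the third-party xxhash package, assuming the raw bitset bytes have already been sliced out from behind the Thrift header of bloom_filter.xxhash.bin:

import struct

import xxhash  # third-party package providing the XXH64 hash

# Block salt constants from the parquet-format bloom filter specification.
SALT = (0x47b6137b, 0x44974d91, 0x8824ad5b, 0xa2b7289d,
        0x705495c7, 0x2df1424b, 0x9efc4947, 0x5c6bfb31)

def might_contain(bitset: bytes, value: bytes) -> bool:
    h = xxhash.xxh64(value, seed=0).intdigest()
    num_blocks = len(bitset) // 32              # one block = 8 x 32-bit words
    block = ((h >> 32) * num_blocks) >> 32      # top 32 bits select the block
    words = struct.unpack_from("<8I", bitset, block * 32)
    x = h & 0xFFFFFFFF                          # low 32 bits set one bit per word
    return all((words[i] >> (((x * SALT[i]) & 0xFFFFFFFF) >> 27)) & 1
               for i in range(8))

# For bloom_filter.xxhash.bin, might_contain(bitset, s.encode()) should be
# True for each of "hello", "parquet", "bloom" and "filter".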

NaN in stats

Prior to version 1.4.0, the C++ Parquet writer wrote NaN values into the min and max statistics; it has since been corrected to ignore NaN values when calculating statistics. For backwards compatibility when reading files, the following rules were established in PARQUET-1222 (sketched in code after the list):

  • If the min is a NaN, it should be ignored.
  • If the max is a NaN, it should be ignored.
  • If the min is +0, the row group may contain -0 values as well.
  • If the max is -0, the row group may contain +0 values as well.
  • When looking for NaN values, min and max should be ignored.
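
A minimal sketch of these rules as a helper over a column chunk's floating point min/max (hypothetical; None stands for "no usable bound", which also covers the last rule, since a dropped bound cannot be used to rule out NaNs):

import math

def sanitize_float_stats(min_val, max_val):
    # Backwards-compatibility rules from PARQUET-1222.
    if min_val is not None and math.isnan(min_val):
        min_val = None                  # a NaN min carries no information
    if max_val is not None and math.isnan(max_val):
        max_val = None                  # a NaN max carries no information
    if min_val == 0.0 and math.copysign(1.0, min_val) > 0:
        min_val = -0.0                  # a +0 min may hide -0 values
    if max_val == 0.0 and math.copysign(1.0, max_val) < 0:
        max_val = 0.0                   # a -0 max may hide +0 values
    return min_val, max_val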

The file nan_in_stats.parquet was generated with:

import pyarrow as pa # version 0.8.0
import pyarrow.parquet as pq
from numpy import NaN

tab = pa.Table.from_arrays(
    [pa.array([1.0, NaN])],
    names="x"
)

pq.write_table(tab, "nan_in_stats.parquet")

metadata = pq.read_metadata("nan_in_stats.parquet")
metadata.row_group(0).column(0)
# <pyarrow._parquet.ColumnChunkMetaData object at 0x7f28539e58f0>
#   file_offset: 88
#   file_path: 
#   type: DOUBLE
#   num_values: 2
#   path_in_schema: x
#   is_stats_set: True
#   statistics:
#     <pyarrow._parquet.RowGroupStatistics object at 0x7f28539e5738>
#       has_min_max: True
#       min: 1
#       max: nan
#       null_count: 0
#       distinct_count: 0
#       num_values: 2
#       physical_type: DOUBLE
#   compression: 1
#   encodings: <map object at 0x7f28539eb4e0>
#   has_dictionary_page: True
#   dictionary_page_offset: 4
#   data_page_offset: 36
#   index_page_offset: 0
#   total_compressed_size: 84
#   total_uncompressed_size: 80