)]}'
{
  "log": [
    {
      "commit": "ec771cc372a119222a68759335715f824872d217",
      "tree": "38b8d1ed54bc16ce882a0da74a1a49d60eacd67b",
      "parents": [
        "ec32ac31d648fbfbd2255ed32dae0ea509ff9ffa"
      ],
      "author": {
        "name": "Sava Vranešević",
        "email": "20240220+svranesevic@users.noreply.github.com",
        "time": "Fri Apr 03 22:57:38 2026 +0200"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 03 16:57:38 2026 -0400"
      },
      "message": "Expose option to set line terminator for CSV writer (#9617)\n\n# Which issue does this PR close?\n\n\u003c!--\nWe generally require a GitHub issue to be filed for all bug fixes and\nenhancements and this helps us generate change logs for our releases.\nYou can link an issue to this PR using the GitHub syntax.\n--\u003e\n\n- Closes #9571.\n\n# Rationale for this change\n\n\u003c!--\nWhy are you proposing this change? If this is already explained clearly\nin the issue then this section is not needed.\nExplaining clearly why changes are proposed helps reviewers understand\nyour changes and offer better suggestions for fixes.\n--\u003e\n\nEnable configuring line terminator for CSV writer.\n\n# What changes are included in this PR?\n\n\u003c!--\nThere is no need to duplicate the description in the issue here but it\nis sometimes worth providing a summary of the individual changes in this\nPR.\n--\u003e\n\nSee above.\n\n# Are these changes tested?\n\n\u003c!--\nWe typically require tests for all PRs in order to:\n1. Prevent the code from being accidentally broken by subsequent changes\n2. Serve as another way to document the expected behavior of the code\n\nIf tests are not included in your PR, please explain why (for example,\nare they covered by existing tests)?\n--\u003e\n\nYes, added tests.\n\n# Are there any user-facing changes?\n\n\u003c!--\nIf there are user-facing changes then we may require documentation to be\nupdated before approving the PR.\n\nIf there are any breaking changes to public APIs, please call them out.\n--\u003e\n\nYes, expose option to set line terminator for CSV writer.\n\n---------\n\nCo-authored-by: svranesevic \u003csvranesevic@users.noreply.github.com\u003e\nCo-authored-by: Andrew Lamb \u003candrew@nerdnetworks.org\u003e"
    },
    {
      "commit": "ec32ac31d648fbfbd2255ed32dae0ea509ff9ffa",
      "tree": "e25bf5cccf8c601869e3ee907aa03a7b92f803e1",
      "parents": [
        "ade038153a66464c56e21e350085bfcf950be09f"
      ],
      "author": {
        "name": "Liam Bao",
        "email": "liam.zw.bao@gmail.com",
        "time": "Fri Apr 03 16:57:21 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 03 16:57:21 2026 -0400"
      },
      "message": "Support nested REE in arrow-ord partition function (#9642)\n\n# Which issue does this PR close?\n\n\u003c!--\nWe generally require a GitHub issue to be filed for all bug fixes and\nenhancements and this helps us generate change logs for our releases.\nYou can link an issue to this PR using the GitHub syntax.\n--\u003e\n\n- Closes #9640.\n\n# Rationale for this change\n\n\u003c!--\nWhy are you proposing this change? If this is already explained clearly\nin the issue then this section is not needed.\nExplaining clearly why changes are proposed helps reviewers understand\nyour changes and offer better suggestions for fixes.\n--\u003e\n\nAlthough rare, it\u0027s totally valid to have nested REE and dict encoding\nand we should handle it correctly.\n\n# What changes are included in this PR?\n\n\u003c!--\nThere is no need to duplicate the description in the issue here but it\nis sometimes worth providing a summary of the individual changes in this\nPR.\n--\u003e\n\nProcess nested REE, nested dict, dict of REE, REE of dict gracefully\n\n# Are these changes tested?\n\n\u003c!--\nWe typically require tests for all PRs in order to:\n1. Prevent the code from being accidentally broken by subsequent changes\n2. Serve as another way to document the expected behavior of the code\n\nIf tests are not included in your PR, please explain why (for example,\nare they covered by existing tests)?\n--\u003e\n\nYes\n\n# Are there any user-facing changes?\n\n\u003c!--\nIf there are user-facing changes then we may require documentation to be\nupdated before approving the PR.\n\nIf there are any breaking changes to public APIs, please call them out.\n--\u003e"
    },
    {
      "commit": "ade038153a66464c56e21e350085bfcf950be09f",
      "tree": "54cbd44e27285d690d6ae8d5f2005fa84c0b014d",
      "parents": [
        "2b851d9b30ce76b70e68e29391fdef63719e694b"
      ],
      "author": {
        "name": "Ed Seidl",
        "email": "etseidl@users.noreply.github.com",
        "time": "Fri Apr 03 13:55:38 2026 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 03 16:55:38 2026 -0400"
      },
      "message": "No longer allow BIT_PACKED level encoding in Parquet writer (#9656)\n\n# Which issue does this PR close?\n\n- Closes #9635.\n\n# Rationale for this change\nThe `BIT_PACKED` encoding for repetition and definition levels has long\nbeen deprecated. Remove the possibility of using it.\n\n# What changes are included in this PR?\n\n\u003c!--\nThere is no need to duplicate the description in the issue here but it\nis sometimes worth providing a summary of the individual changes in this\nPR.\n--\u003e\n\n# Are these changes tested?\n\nShould be covered by existing tests\n\n# Are there any user-facing changes?\n\nNo, only changes to API marked \"experimental\""
    },
    {
      "commit": "2b851d9b30ce76b70e68e29391fdef63719e694b",
      "tree": "05b5b9b61625b42c86c649f19af4842117b7bec3",
      "parents": [
        "07a363632bb57a1cab74b31ce4c67c8ec9814ebd"
      ],
      "author": {
        "name": "Adam Gutglick",
        "email": "adam@spiraldb.com",
        "time": "Fri Apr 03 12:23:31 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 03 13:23:31 2026 +0200"
      },
      "message": "Add List and ListView take benchmarks (#9626)\n\n# Which issue does this PR close?\n\n- Closes https://github.com/apache/arrow-rs/issues/9627.\n\n# Rationale for this change\n\nAdding benchmarks makes it easier to measure performance and evaluate\nthe impact of changes to the implementation. I also have a PR including\nsome significant improvements, but figured its worth splitting it into\ntwo parts, LMK if its better to do that in one step.\n\n# What changes are included in this PR?\n\nAdd a couple of utility functions to generate list and list_view arrays\nwithout providing a seed\n\n# Are these changes tested?\n\nBenchmarks run locally, same setup as other benchmarks.\n\n# Are there any user-facing changes?\n\nNo"
    },
    {
      "commit": "07a363632bb57a1cab74b31ce4c67c8ec9814ebd",
      "tree": "3c95072ebc19199dfa17525769c338f7214353bc",
      "parents": [
        "f5365c343a576e0b7e48ef69897c53326e9d5539"
      ],
      "author": {
        "name": "Liam Bao",
        "email": "liam.zw.bao@gmail.com",
        "time": "Thu Apr 02 17:43:28 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 02 17:43:28 2026 -0400"
      },
      "message": "[Json] Replace `ArrayData` with typed Array construction in json-reader (#9497)\n\n# Which issue does this PR close?\n\n\u003c!--\nWe generally require a GitHub issue to be filed for all bug fixes and\nenhancements and this helps us generate change logs for our releases.\nYou can link an issue to this PR using the GitHub syntax.\n--\u003e\n\n- Part of #9298.\n\n# Rationale for this change\n\n\u003c!--\nWhy are you proposing this change? If this is already explained clearly\nin the issue then this section is not needed.\nExplaining clearly why changes are proposed helps reviewers understand\nyour changes and offer better suggestions for fixes.\n--\u003e\n\nWhile implementing `ListViewArrayDecoder` in arrow-json, I noticed we\ncould potentially retire `ArrayDataBuilder` inside `ListArrayDecoder`.\nTherefore, I\u0027d like to use a small PR here to make sure there\u0027s no\nregression\n\n# What changes are included in this PR?\n\n\u003c!--\nThere is no need to duplicate the description in the issue here but it\nis sometimes worth providing a summary of the individual changes in this\nPR.\n--\u003e\n\nReplace `ArrayDataBuilder` with `GenericListArray` in `ListArrayDecoder`\n\n# Are these changes tested?\n\n\u003c!--\nWe typically require tests for all PRs in order to:\n1. Prevent the code from being accidentally broken by subsequent changes\n2. Serve as another way to document the expected behavior of the code\n\nIf tests are not included in your PR, please explain why (for example,\nare they covered by existing tests)?\n--\u003e\n\nCovered by existing tests\n\n# Are there any user-facing changes?\n\n\u003c!--\nIf there are user-facing changes then we may require documentation to be\nupdated before approving the PR.\n\nIf there are any breaking changes to public APIs, please call them out.\n--\u003e\n\nNo"
    },
    {
      "commit": "f5365c343a576e0b7e48ef69897c53326e9d5539",
      "tree": "3fea8a1c7e7595c73f5c4b8c250959223a0f2d0c",
      "parents": [
        "652c95018349b37864799f088bb3d7b5eba97e90"
      ],
      "author": {
        "name": "Adam Gutglick",
        "email": "adam@spiraldb.com",
        "time": "Wed Apr 01 21:58:50 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 01 16:58:50 2026 -0400"
      },
      "message": "Use nextest to run Miri in CI (#9629)\n\n# Which issue does this PR close?\n\n- Closes #NNN.\n\n# Rationale for this change\n\nMiri in CI is VERY slow (around 2.5 hours), but the github runners\nactually have 4 vCPUs and some memory, so using nextest can give us some\nspeedup.\n\n# What changes are included in this PR?\n\nInstall nextest in CI and then use it to run Miri\n\n# Are these changes tested?\n\ntested the script locally\n\n# Are there any user-facing changes?\n\nNo"
    },
    {
      "commit": "652c95018349b37864799f088bb3d7b5eba97e90",
      "tree": "63a392952da5719eaaae3f17d37a1353621bb522",
      "parents": [
        "a05129a08ecf2f1f39f7eba56eb1d76848f79c3a"
      ],
      "author": {
        "name": "Andrew Lamb",
        "email": "andrew@nerdnetworks.org",
        "time": "Wed Apr 01 16:50:47 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 01 16:50:47 2026 -0400"
      },
      "message": "[arrow-pyarrow]: restore nicer pyarrow-arrow error message (#9639)\n\n# Which issue does this PR close?\n\n- Follow on to https://github.com/apache/arrow-rs/pull/9594\n\n# Rationale for this change\n\n\n@kylebarron says\nhttps://github.com/apache/arrow-rs/pull/9594#discussion_r3004995827:\n\n\u003e fwiw previously there was a nice user-facing error here, while now the\nerror generated from extract will be much more obtuse. Ideally this\nexception will never be raised except if the producer doesn\u0027t follow the\nspec correctly.\n\n# What changes are included in this PR?\nRestore the nice error\n\n# Are these changes tested?\n\nyes, added a test\n\n# Are there any user-facing changes?\n\n\u003c!--\nIf there are user-facing changes then we may require documentation to be\nupdated before approving the PR.\n\nIf there are any breaking changes to public APIs, please call them out.\n--\u003e"
    },
    {
      "commit": "a05129a08ecf2f1f39f7eba56eb1d76848f79c3a",
      "tree": "cc798c9377f5f4d2213bcd79887439c0f20047ba",
      "parents": [
        "1f07e54c1209a512f3ce15f4e576f0ca5d1c8d97"
      ],
      "author": {
        "name": "Hippolyte Barraud",
        "email": "hippolyte.barraud@gmail.com",
        "time": "Wed Apr 01 15:21:19 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 01 15:21:19 2026 -0400"
      },
      "message": "feat(parquet): stream-encode definition/repetition levels incrementally (#9447)\n\n# Which issue does this PR close?\n\n- Closes #9446.\n- closes https://github.com/apache/arrow-rs/pull/9636\n\n# Rationale for this change\n\nWhen writing a Parquet column with very sparse data,\n`GenericColumnWriter` accumulates unbounded memory for definition and\nrepetition levels. The raw `i16` values are appended into `Vec\u003ci16\u003e`\nsinks on every `write_batch` call and only RLE-encoded in bulk when a\ndata page is flushed. For a column that is almost entirely nulls, the\nactual RLE-encoded output can be tiny, yet the intermediate buffer grows\nlinearly with the number of rows.\n\n# What changes are included in this PR?\n\nReplace the two raw-level `Vec\u003ci16\u003e` sinks (`def_levels_sink` /\n`rep_levels_sink`) with streaming `LevelEncoder` fields\n(`def_levels_encoder` / `rep_levels_encoder`). Behavior is the same, but\nwe keep running RLE-encoded state rather than the full list of rows in\nmemory. Existing logic is reused.\n\n# Are these changes tested?\n\nYes, all tests passing.\nBenchmarks show no regression. `list_primitive` benches improved by\n3-5%:\n\n```\nBenchmarking list_primitive/default: Warming up for 3.0000 s\nWarning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 6.1s, enable flat sampling, or reduce sample count to 60.\nlist_primitive/default  time:   [1.2109 ms 1.2171 ms 1.2248 ms]\n                        thrpt:  [1.6999 GiB/s 1.7105 GiB/s 1.7194 GiB/s]\n                 change:\n                        time:   [−3.7197% −2.8848% −2.0036%] (p \u003d 0.00 \u003c 0.05)\n                        thrpt:  [+2.0445% +2.9705% +3.8634%]\n                        Performance has improved.\nFound 4 outliers among 100 measurements (4.00%)\n  3 (3.00%) high mild\n  1 (1.00%) high severe\nBenchmarking list_primitive/bloom_filter: Warming up for 3.0000 s\nWarning: Unable to complete 100 samples in 5.0s. 
You may wish to increase target time to 7.5s, enable flat sampling, or reduce sample count to 50.\nlist_primitive/bloom_filter\n                        time:   [1.4405 ms 1.4810 ms 1.5292 ms]\n                        thrpt:  [1.3615 GiB/s 1.4058 GiB/s 1.4452 GiB/s]\n                 change:\n                        time:   [−6.4332% −4.7568% −2.9048%] (p \u003d 0.00 \u003c 0.05)\n                        thrpt:  [+2.9917% +4.9944% +6.8755%]\n                        Performance has improved.\nFound 5 outliers among 100 measurements (5.00%)\n  2 (2.00%) high mild\n  3 (3.00%) high severe\nBenchmarking list_primitive/parquet_2: Warming up for 3.0000 s\nWarning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 6.3s, enable flat sampling, or reduce sample count to 60.\nlist_primitive/parquet_2\n                        time:   [1.2271 ms 1.2311 ms 1.2362 ms]\n                        thrpt:  [1.6841 GiB/s 1.6911 GiB/s 1.6966 GiB/s]\n                 change:\n                        time:   [−5.8536% −4.9672% −4.1905%] (p \u003d 0.00 \u003c 0.05)\n                        thrpt:  [+4.3738% +5.2269% +6.2175%]\n                        Performance has improved.\nFound 5 outliers among 100 measurements (5.00%)\n  2 (2.00%) high mild\n  3 (3.00%) high severe\nlist_primitive/zstd     time:   [2.0056 ms 2.0148 ms 2.0262 ms]\n                        thrpt:  [1.0275 GiB/s 1.0333 GiB/s 1.0381 GiB/s]\n                 change:\n                        time:   [−4.7073% −3.6719% −2.6698%] (p \u003d 0.00 \u003c 0.05)\n                        thrpt:  [+2.7431% +3.8118% +4.9398%]\n                        Performance has improved.\nFound 12 outliers among 100 measurements (12.00%)\n  2 (2.00%) high mild\n  10 (10.00%) high severe\nlist_primitive/zstd_parquet_2\n                        time:   [2.0455 ms 2.0730 ms 2.1120 ms]\n                        thrpt:  [1009.4 MiB/s 1.0043 GiB/s 1.0178 GiB/s]\n                 change:\n                        time:   
[−5.8626% −3.7672% −1.4196%] (p \u003d 0.00 \u003c 0.05)\n                        thrpt:  [+1.4401% +3.9146% +6.2277%]\n                        Performance has improved.\nFound 7 outliers among 100 measurements (7.00%)\n  2 (2.00%) high mild\n  5 (5.00%) high severe\n\nBenchmarking list_primitive_non_null/default: Warming up for 3.0000 s\nWarning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 6.6s, enable flat sampling, or reduce sample count to 60.\nlist_primitive_non_null/default\n                        time:   [1.3199 ms 1.3333 ms 1.3504 ms]\n                        thrpt:  [1.5384 GiB/s 1.5581 GiB/s 1.5740 GiB/s]\n                 change:\n                        time:   [−4.1662% −2.3491% −0.7148%] (p \u003d 0.01 \u003c 0.05)\n                        thrpt:  [+0.7200% +2.4056% +4.3473%]\n                        Change within noise threshold.\nFound 6 outliers among 100 measurements (6.00%)\n  3 (3.00%) high mild\n  3 (3.00%) high severe\nBenchmarking list_primitive_non_null/bloom_filter: Warming up for 3.0000 s\nWarning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 8.4s, enable flat sampling, or reduce sample count to 50.\nlist_primitive_non_null/bloom_filter\n                        time:   [1.6567 ms 1.6668 ms 1.6805 ms]\n                        thrpt:  [1.2362 GiB/s 1.2464 GiB/s 1.2540 GiB/s]\n                 change:\n                        time:   [−2.7884% −1.3493% +0.2820%] (p \u003d 0.07 \u003e 0.05)\n                        thrpt:  [−0.2812% +1.3677% +2.8684%]\n                        No change in performance detected.\nFound 4 outliers among 100 measurements (4.00%)\n  1 (1.00%) high mild\n  3 (3.00%) high severe\nBenchmarking list_primitive_non_null/parquet_2: Warming up for 3.0000 s\nWarning: Unable to complete 100 samples in 5.0s. 
You may wish to increase target time to 7.2s, enable flat sampling, or reduce sample count to 50.\nlist_primitive_non_null/parquet_2\n                        time:   [1.4279 ms 1.4409 ms 1.4551 ms]\n                        thrpt:  [1.4277 GiB/s 1.4418 GiB/s 1.4550 GiB/s]\n                 change:\n                        time:   [−2.0598% −0.9952% −0.1318%] (p \u003d 0.04 \u003c 0.05)\n                        thrpt:  [+0.1319% +1.0052% +2.1032%]\n                        Change within noise threshold.\nFound 3 outliers among 100 measurements (3.00%)\n  2 (2.00%) high mild\n  1 (1.00%) high severe\nlist_primitive_non_null/zstd\n                        time:   [2.6966 ms 2.7358 ms 2.7994 ms]\n                        thrpt:  [759.93 MiB/s 777.60 MiB/s 788.89 MiB/s]\n                 change:\n                        time:   [−3.8379% −2.1418% +0.0785%] (p \u003d 0.03 \u003c 0.05)\n                        thrpt:  [−0.0784% +2.1887% +3.9911%]\n                        Change within noise threshold.\nFound 7 outliers among 100 measurements (7.00%)\n  3 (3.00%) high mild\n  4 (4.00%) high severe\nlist_primitive_non_null/zstd_parquet_2\n                        time:   [2.7684 ms 2.7861 ms 2.8099 ms]\n                        thrpt:  [757.07 MiB/s 763.55 MiB/s 768.44 MiB/s]\n                 change:\n                        time:   [−6.4460% −4.1387% −2.1474%] (p \u003d 0.00 \u003c 0.05)\n                        thrpt:  [+2.1946% +4.3174% +6.8901%]\n                        Performance has improved.\n```\n\n# Are there any user-facing changes?\n\nNone. Some internal symbols are now unused. I added some\n`#[allow(dead_code)]` statements since these were experimental-visible\nand might be externally relied on.\n\n---------\n\nSigned-off-by: Hippolyte Barraud \u003chippolyte.barraud@datadoghq.com\u003e\nCo-authored-by: Andrew Lamb \u003candrew@nerdnetworks.org\u003e"
    },
    {
      "commit": "1f07e54c1209a512f3ce15f4e576f0ca5d1c8d97",
      "tree": "202d20fe052972f70e880c39c315ed1e91ff33c2",
      "parents": [
        "61b5763a368db4bfa76e8fbafdbf26718f39a031"
      ],
      "author": {
        "name": "Andrew Lamb",
        "email": "andrew@nerdnetworks.org",
        "time": "Wed Apr 01 10:09:37 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 01 10:09:37 2026 -0400"
      },
      "message": "Disable failing arrow_writer benchmark (#9638)\n\n# Which issue does this PR close?\n\n- Part of https://github.com/apache/arrow-rs/issues/9637\n# Rationale for this change\n\nI can\u0027t benchmark the arrow-writer changes in\nhttps://github.com/apache/arrow-rs/pull/9447 due to hitting a panic:\n- https://github.com/apache/arrow-rs/issues/9637\n\n# What changes are included in this PR?\n\nTemporarily disable the cdc benchmarks until the underlying bug is fixed\n\n# Are these changes tested?\n\n\u003c!--\nWe typically require tests for all PRs in order to:\n1. Prevent the code from being accidentally broken by subsequent changes\n2. Serve as another way to document the expected behavior of the code\n\nIf tests are not included in your PR, please explain why (for example,\nare they covered by existing tests)?\n--\u003e\n\n# Are there any user-facing changes?\n\n\u003c!--\nIf there are user-facing changes then we may require documentation to be\nupdated before approving the PR.\n\nIf there are any breaking changes to public APIs, please call them out.\n--\u003e"
    },
    {
      "commit": "61b5763a368db4bfa76e8fbafdbf26718f39a031",
      "tree": "300d0cbf577ecfeaf16bb4cfb1dabc29e1faa630",
      "parents": [
        "51bf8a40f72e37528cf36419f8f453ccd0e45868"
      ],
      "author": {
        "name": "Thomas Tanon",
        "email": "thomas@pellissier-tanon.fr",
        "time": "Tue Mar 31 22:40:44 2026 +0200"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 31 16:40:44 2026 -0400"
      },
      "message": "pyarrow: Small code simplifications (#9594)\n\n# Rationale for this change\n\nMakes the code simpler and more readable by relying on new PyO3 and Rust\nfeatures. No behavior should have changed outside of an error message if\n`__arrow_c_array__` does not return a tuple\n\n# What changes are included in this PR?\n\n- use `.call_method0(M)?` instead of `.getattr(M)?.call0()`\n- Use `.extract()` that allows more advanced features like directly\nextracting tuple elements\n- remove temporary variables just before returning\n- use \u0026raw const and \u0026raw mut pointers instead of casting and addr_of!"
    },
    {
      "commit": "51bf8a40f72e37528cf36419f8f453ccd0e45868",
      "tree": "437ea93f9720492bec9cfba7b25ad3fc0c5b62c9",
      "parents": [
        "f91231160716d2b726a6bd01ef1b596c9ff69e17"
      ],
      "author": {
        "name": "Konstantin Tarasov",
        "email": "33369833+sdf-jkl@users.noreply.github.com",
        "time": "Tue Mar 31 15:44:32 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 31 15:44:32 2026 -0400"
      },
      "message": "[Variant] extend shredded null handling for arrays (#9599)\n\n# Which issue does this PR close?\n\n\u003c!--\nWe generally require a GitHub issue to be filed for all bug fixes and\nenhancements and this helps us generate change logs for our releases.\nYou can link an issue to this PR using the GitHub syntax.\n--\u003e\n\n- Closes #8400.\n\n# Rationale for this change\n\nCheck issue\n\u003c!--\nWhy are you proposing this change? If this is already explained clearly\nin the issue then this section is not needed.\nExplaining clearly why changes are proposed helps reviewers understand\nyour changes and offer better suggestions for fixes.\n--\u003e\n\n# What changes are included in this PR?\n\n- Added `AppendNullMode` enum supporting all semantics.\n- Replaced the bool logic to the new enum\n- Fix test outputs for List Array cases\n\n\u003c!--\nThere is no need to duplicate the description in the issue here but it\nis sometimes worth providing a summary of the individual changes in this\nPR.\n--\u003e\n\n# Are these changes tested?\n- Added unit tests\n\u003c!--\nWe typically require tests for all PRs in order to:\n1. Prevent the code from being accidentally broken by subsequent changes\n2. Serve as another way to document the expected behavior of the code\n\nIf tests are not included in your PR, please explain why (for example,\nare they covered by existing tests)?\n--\u003e\n\n# Are there any user-facing changes?\n\n\u003c!--\nIf there are user-facing changes then we may require documentation to be\nupdated before approving the PR.\n\nIf there are any breaking changes to public APIs, please call them out.\n--\u003e"
    },
    {
      "commit": "f91231160716d2b726a6bd01ef1b596c9ff69e17",
      "tree": "89059c1adaf0586f025f72234b1ff35db558b804",
      "parents": [
        "1a169cd638aa4b72ccb4961e37e5014a66308718"
      ],
      "author": {
        "name": "Krisztián Szűcs",
        "email": "szucs.krisztian@gmail.com",
        "time": "Tue Mar 31 21:31:35 2026 +0200"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 31 15:31:35 2026 -0400"
      },
      "message": "feat(parquet): derive `PartialEq` and `Eq` for `CdcOptions` (#9602)\n\n# Rationale for this change\n\nCdcOptions only contains primitive fields (usize, usize, i32) so\nderiving PartialEq and Eq is straightforward. This is needed by\ndownstream crates such as DataFusion that embed CdcOptions in their own\nconfiguration structs and need to compare them.\n\n# What changes are included in this PR?\n\nImplemented PartialEq and Eq for CdcOptions.\n\n# Are these changes tested?\n\nAdded an equality test.\n\n# Are there any user-facing changes?\n\nNo."
    },
    {
      "commit": "1a169cd638aa4b72ccb4961e37e5014a66308718",
      "tree": "586742e3795cd1afcea7d7c1bec2d19f5ae3ba7b",
      "parents": [
        "77e4d05fe0f199ccfaad578e58278329534a9c3d"
      ],
      "author": {
        "name": "Alexander Rafferty",
        "email": "hello@alexanderrafferty.com",
        "time": "Wed Apr 01 06:27:16 2026 +1100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 31 15:27:16 2026 -0400"
      },
      "message": "Fix `MutableBuffer::clear` (#9622)\n\n# Which issue does this PR close?\n\n- closes https://github.com/apache/arrow-rs/pull/9593\n\n# Rationale for this change\n\nIn a previous PR (#9593), I change instances of `truncate(0)` to\n`clear()`. However, this breaks the test `test_truncate_with_pool` at\n`arrow-buffer/src/buffer/mutable.rs:1357`, due to an inconsistency\nbetween the implementation of `truncate` and `clear`. This PR fixes that\ntest.\n\n# What changes are included in this PR?\n\nThis PR copies a section of code related to the `pool` feature present\nin `truncate` but absent in `clear`, fixing the failing unit test.\n\n# Are these changes tested?\n\nYes.\n\n# Are there any user-facing changes?\n\nNo."
    },
    {
      "commit": "77e4d05fe0f199ccfaad578e58278329534a9c3d",
      "tree": "f40a20df4e110c28010e4162e9521dd3974f5ba8",
      "parents": [
        "aa9432c8833f5701085e8b933b30560d21df9f80"
      ],
      "author": {
        "name": "Liam Bao",
        "email": "liam.zw.bao@gmail.com",
        "time": "Tue Mar 31 09:59:46 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 31 09:59:46 2026 -0400"
      },
      "message": "[Json] Add json reader benchmarks for Map and REE (#9616)\n\n# Which issue does this PR close?\n\n\u003c!--\nWe generally require a GitHub issue to be filed for all bug fixes and\nenhancements and this helps us generate change logs for our releases.\nYou can link an issue to this PR using the GitHub syntax.\n--\u003e\n\n- Relates to #9497.\n\n# Rationale for this change\n\n\u003c!--\nWhy are you proposing this change? If this is already explained clearly\nin the issue then this section is not needed.\nExplaining clearly why changes are proposed helps reviewers understand\nyour changes and offer better suggestions for fixes.\n--\u003e\n\n# What changes are included in this PR?\n\n\u003c!--\nThere is no need to duplicate the description in the issue here but it\nis sometimes worth providing a summary of the individual changes in this\nPR.\n--\u003e\n\nAs part of the effort to move the Json reader away from `ArrayData`\ntoward typed `ArrayRef` APIs, it\u0027s necessary to change the\n`ArrayDecoder::decode` interface to return `ArrayRef` directly and\nupdates all decoder implementations (list, struct, map, run-end encoded)\nto construct typed arrays without intermediate `ArrayData` round-trips.\nNew benchmarks for map and run-end encoded decoding are added to verify\nthere is no performance regression.\n\n# Are these changes tested?\n\n\u003c!--\nWe typically require tests for all PRs in order to:\n1. Prevent the code from being accidentally broken by subsequent changes\n2. Serve as another way to document the expected behavior of the code\n\nIf tests are not included in your PR, please explain why (for example,\nare they covered by existing tests)?\n--\u003e\nYes\n\n# Are there any user-facing changes?\n\n\u003c!--\nIf there are user-facing changes then we may require documentation to be\nupdated before approving the PR.\n\nIf there are any breaking changes to public APIs, please call them out.\n--\u003e\nNo"
    },
    {
      "commit": "aa9432c8833f5701085e8b933b30560d21df9f80",
      "tree": "1b253a49df12c8947c963a6da5adec54015436eb",
      "parents": [
        "c194e54dc4e8fcd8b9333eea2528d5db1c1ba912"
      ],
      "author": {
        "name": "Adrian Garcia Badaracco",
        "email": "1755071+adriangb@users.noreply.github.com",
        "time": "Sun Mar 29 23:11:05 2026 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 30 17:11:05 2026 +1100"
      },
      "message": "Fix `extend_nulls` panic for UnionArray (#9607)\n\n## Summary\n\n- Fix `MutableArrayData::extend_nulls` which previously panicked\nunconditionally for both sparse and dense Union arrays\n- For sparse unions: append the first type_id and extend nulls in all\nchildren\n- For dense unions: append the first type_id, compute offsets into the\nfirst child, and extend nulls in that child only\n\n## Background\n\nThis bug was discovered via DataFusion. `CaseExpr` uses\n`MutableArrayData` via `scatter()` to build result arrays. When a `CASE`\nexpression returns a Union type (e.g., from `json_get` which returns a\nJSON union) and there are rows where no `WHEN` branch matches (implicit\n`ELSE NULL`), `scatter` calls `extend_nulls` which panics with \"cannot\ncall extend_nulls on UnionArray as cannot infer type\".\n\nAny query like:\n```sql\nSELECT CASE WHEN condition THEN returns_union(col, \u0027key\u0027) END FROM table\n```\nwould panic if `condition` is false for any row.\n\n## Root Cause\n\nThe `extend_nulls` implementation for Union arrays unconditionally\npanicked because it claimed it \"cannot infer type\". However, the Union\u0027s\nfield definitions (child types and type IDs) are available in the\n`MutableArrayData`\u0027s data type — there\u0027s enough information to produce\nvalid null entries by picking the first declared type_id.\n\n## Test plan\n\n- [x] Added test for sparse union `extend_nulls`\n- [x] Added test for dense union `extend_nulls`\n- [x] Existing `test_union_dense` continues to pass\n- [x] All `array_transform` tests pass\n\n🤖 Generated with [Claude Code](https://claude.com/claude-code)\n\n---------\n\nCo-authored-by: Claude Opus 4.6 (1M context) \u003cnoreply@anthropic.com\u003e\nCo-authored-by: Jeffrey Vo \u003cjeffrey.vo.australia@gmail.com\u003e"
    },
    {
      "commit": "c194e54dc4e8fcd8b9333eea2528d5db1c1ba912",
      "tree": "3ce5ce77e5e665ab4bd8e6cb2c559439d34ee6f3",
      "parents": [
        "7f307c031f31a691be566f5e20171455c41dd661"
      ],
      "author": {
        "name": "Konstantin Tarasov",
        "email": "33369833+sdf-jkl@users.noreply.github.com",
        "time": "Thu Mar 26 18:18:42 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 26 18:18:42 2026 -0400"
      },
      "message": "[Variant]  Add unshredded `Struct` fast-path for `variant_get(..., Struct)` (#9597)\n\n# Which issue does this PR close?\n\n\u003c!--\nWe generally require a GitHub issue to be filed for all bug fixes and\nenhancements and this helps us generate change logs for our releases.\nYou can link an issue to this PR using the GitHub syntax.\n--\u003e\n\n- Closes #9596.\n\n# Rationale for this change\n\nCheck issue\n\u003c!--\nWhy are you proposing this change? If this is already explained clearly\nin the issue then this section is not needed.\nExplaining clearly why changes are proposed helps reviewers understand\nyour changes and offer better suggestions for fixes.\n--\u003e\n\n# What changes are included in this PR?\n\nReuse `shred_basic_variant` as a fast path for unshredded `Struct`\nhandling in `variant_get(..., Struct)`\n\u003c!--\nThere is no need to duplicate the description in the issue here but it\nis sometimes worth providing a summary of the individual changes in this\nPR.\n--\u003e\n\n# Are these changes tested?\n\nYes, added two unit tests to establish safe mode behavior.\n\u003c!--\nWe typically require tests for all PRs in order to:\n1. Prevent the code from being accidentally broken by subsequent changes\n2. Serve as another way to document the expected behavior of the code\n\nIf tests are not included in your PR, please explain why (for example,\nare they covered by existing tests)?\n--\u003e\n\n# Are there any user-facing changes?\n\n\u003c!--\nIf there are user-facing changes then we may require documentation to be\nupdated before approving the PR.\n\nIf there are any breaking changes to public APIs, please call them out.\n--\u003e"
    },
    {
      "commit": "7f307c031f31a691be566f5e20171455c41dd661",
      "tree": "bfa992a3ab2bcd460e184d57b35c5d35932d10f8",
      "parents": [
        "1f1c3a4cea6972ade7ff73a7765521c21a992e4f"
      ],
      "author": {
        "name": "Raúl Cumplido",
        "email": "raulcumplido@gmail.com",
        "time": "Thu Mar 26 23:17:50 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 26 18:17:50 2026 -0400"
      },
      "message": "fix: Stop using https://dist.apache.org/repos/dist/dev/arrow/KEYS for verification (#9604)\n\n# Which issue does this PR close?\n\n- Closes #9603 \n\n# Rationale for this change\n\nThe release and dev KEYS files could get out of synch.\nWe should use the release/ version:\n- Users use the release/ version not dev/ version when they verify our\nartifacts\u0027 signature\n- https://dist.apache.org/ may reject our request when we request many\ntimes by CI\n\n# What changes are included in this PR?\n\nUse\n`https://www.apache.org/dyn/closer.lua?action\u003ddownload\u0026filename\u003darrow/KEYS`\nto download the KEYS file and the expected\n`https://dist.apache.org/repos/dist/dev/arrow` for the RC artifacts.\n\n# Are these changes tested?\n\nYes, I\u0027ve verified 58.1.0 1 both previous to the change and after the\nchange.\n\n# Are there any user-facing changes?\n\nNo"
    },
    {
      "commit": "1f1c3a4cea6972ade7ff73a7765521c21a992e4f",
      "tree": "08d2e1962983043b4b47284ca309095aa4874698",
      "parents": [
        "f2512b5341ec66dcafe9de94ae382401ce5e8698"
      ],
      "author": {
        "name": "Liam Bao",
        "email": "liam.zw.bao@gmail.com",
        "time": "Thu Mar 26 18:15:41 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 26 18:15:41 2026 -0400"
      },
      "message": "Support `ListView` codec in arrow-json (#9503)\n\n# Which issue does this PR close?\n\n\u003c!--\nWe generally require a GitHub issue to be filed for all bug fixes and\nenhancements and this helps us generate change logs for our releases.\nYou can link an issue to this PR using the GitHub syntax.\n--\u003e\n\n- Closes #9340.\n\n# Rationale for this change\n\n\u003c!--\nWhy are you proposing this change? If this is already explained clearly\nin the issue then this section is not needed.\nExplaining clearly why changes are proposed helps reviewers understand\nyour changes and offer better suggestions for fixes.\n--\u003e\n\n# What changes are included in this PR?\n\n\u003c!--\nThere is no need to duplicate the description in the issue here but it\nis sometimes worth providing a summary of the individual changes in this\nPR.\n--\u003e\n\nSupport `ListView` codec in arrow-json. Using `ListLikeArray` trait to\nsimplify implementation.\n\n# Are these changes tested?\n\n\u003c!--\nWe typically require tests for all PRs in order to:\n1. Prevent the code from being accidentally broken by subsequent changes\n2. Serve as another way to document the expected behavior of the code\n\nIf tests are not included in your PR, please explain why (for example,\nare they covered by existing tests)?\n--\u003e\n\nTests added\n\n# Are there any user-facing changes?\n\n\u003c!--\nIf there are user-facing changes then we may require documentation to be\nupdated before approving the PR.\n\nIf there are any breaking changes to public APIs, please call them out.\n--\u003e\n\nNew encoder/decoder"
    },
    {
      "commit": "f2512b5341ec66dcafe9de94ae382401ce5e8698",
      "tree": "84ed4a7cdc90fb8eefc8bbbb4ad5d0139443a0fb",
      "parents": [
        "398962ec67bc777eca1c635c8ef01a9c634530eb"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Thu Mar 26 18:11:17 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 26 18:11:17 2026 -0400"
      },
      "message": "chore(deps): update sha2 requirement from 0.10 to 0.11 (#9618)\n\nUpdates the requirements on [sha2](https://github.com/RustCrypto/hashes)\nto permit the latest version.\n\u003cdetails\u003e\n\u003csummary\u003eCommits\u003c/summary\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/RustCrypto/hashes/commit/ffe093984c004769747e998f77da8ff7c0e7a765\"\u003e\u003ccode\u003effe0939\u003c/code\u003e\u003c/a\u003e\nRelease sha2 0.11.0 (\u003ca\nhref\u003d\"https://redirect.github.com/RustCrypto/hashes/issues/806\"\u003e#806\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/RustCrypto/hashes/commit/8991b65fe400c31c4cc189510f86ae642c470cd9\"\u003e\u003ccode\u003e8991b65\u003c/code\u003e\u003c/a\u003e\nUse the standard order of the \u003ccode\u003e[package]\u003c/code\u003e section fields (\u003ca\nhref\u003d\"https://redirect.github.com/RustCrypto/hashes/issues/807\"\u003e#807\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/RustCrypto/hashes/commit/3d2bc57db40fd6aeb25d6c6da98d67e2784c2985\"\u003e\u003ccode\u003e3d2bc57\u003c/code\u003e\u003c/a\u003e\nsha2: refactor backends (\u003ca\nhref\u003d\"https://redirect.github.com/RustCrypto/hashes/issues/802\"\u003e#802\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/RustCrypto/hashes/commit/faa55fb83697c8f3113636d88070e5f5edc8c335\"\u003e\u003ccode\u003efaa55fb\u003c/code\u003e\u003c/a\u003e\nsha3: bump \u003ccode\u003ekeccak\u003c/code\u003e to v0.2 (\u003ca\nhref\u003d\"https://redirect.github.com/RustCrypto/hashes/issues/803\"\u003e#803\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/RustCrypto/hashes/commit/d3e6489e56f8486d4a93ceb7a8abf4924af1de7b\"\u003e\u003ccode\u003ed3e6489\u003c/code\u003e\u003c/a\u003e\nsha3 v0.11.0-rc.9 
(\u003ca\nhref\u003d\"https://redirect.github.com/RustCrypto/hashes/issues/801\"\u003e#801\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/RustCrypto/hashes/commit/bbf6f51ff97f81ab15e6e5f6cf878bfbcb1f47c8\"\u003e\u003ccode\u003ebbf6f51\u003c/code\u003e\u003c/a\u003e\nsha2: tweak backend docs (\u003ca\nhref\u003d\"https://redirect.github.com/RustCrypto/hashes/issues/800\"\u003e#800\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/RustCrypto/hashes/commit/155dbbf2959dbec0ec75948a82590ddaede2d3bc\"\u003e\u003ccode\u003e155dbbf\u003c/code\u003e\u003c/a\u003e\nsha3: add default value for the \u003ccode\u003eDS\u003c/code\u003e generic parameter on\n\u003ccode\u003eTurboShake128/256\u003c/code\u003e...\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/RustCrypto/hashes/commit/ed514f2b34526683b3b7c41670f1887982c3df64\"\u003e\u003ccode\u003eed514f2\u003c/code\u003e\u003c/a\u003e\nUse published version of \u003ccode\u003ekeccak\u003c/code\u003e v0.2 (\u003ca\nhref\u003d\"https://redirect.github.com/RustCrypto/hashes/issues/799\"\u003e#799\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/RustCrypto/hashes/commit/702bcd83735a49c928c0fc24506924f5c0aa22af\"\u003e\u003ccode\u003e702bcd8\u003c/code\u003e\u003c/a\u003e\nMigrate to closure-based \u003ccode\u003ekeccak\u003c/code\u003e (\u003ca\nhref\u003d\"https://redirect.github.com/RustCrypto/hashes/issues/796\"\u003e#796\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/RustCrypto/hashes/commit/827c043f82d57666a0b146d156e91c39535c1305\"\u003e\u003ccode\u003e827c043\u003c/code\u003e\u003c/a\u003e\nsha3 v0.11.0-rc.8 (\u003ca\nhref\u003d\"https://redirect.github.com/RustCrypto/hashes/issues/794\"\u003e#794\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eAdditional commits viewable in 
\u003ca\nhref\u003d\"https://github.com/RustCrypto/hashes/compare/groestl-v0.10.0...sha2-v0.11.0\"\u003ecompare\nview\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/details\u003e\n\u003cbr /\u003e\n\n\nDependabot will resolve any conflicts with this PR as long as you don\u0027t\nalter it yourself. You can also trigger a rebase manually by commenting\n`@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n\u003cdetails\u003e\n\u003csummary\u003eDependabot commands and options\u003c/summary\u003e\n\u003cbr /\u003e\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits\nthat have been made to it\n- `@dependabot show \u003cdependency name\u003e ignore conditions` will show all\nof the ignore conditions of the specified dependency\n- `@dependabot ignore this major version` will close this PR and stop\nDependabot creating any more for this major version (unless you reopen\nthe PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop\nDependabot creating any more for this minor version (unless you reopen\nthe PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop\nDependabot creating any more for this dependency (unless you reopen the\nPR or upgrade to it yourself)\n\n\n\u003c/details\u003e\n\nSigned-off-by: dependabot[bot] \u003csupport@github.com\u003e\nCo-authored-by: dependabot[bot] \u003c49699333+dependabot[bot]@users.noreply.github.com\u003e"
    },
    {
      "commit": "398962ec67bc777eca1c635c8ef01a9c634530eb",
      "tree": "206d0e0520a9d5cfaed87b8ab7996c5ac7e1be75",
      "parents": [
        "980ea0b36c79a9e996efd90ad5f24571f0f9c0e0"
      ],
      "author": {
        "name": "Mikhail Zabaluev",
        "email": "mikhail.zabaluev@flarion.io",
        "time": "Wed Mar 25 20:05:53 2026 +0200"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 25 19:05:53 2026 +0100"
      },
      "message": "deps: fix `object_store` breakage for 0.13.2 (#9612)\n\n# Rationale for this change\n\nThe `object_store` crate release 0.13.2 breaks the build of parquet\nbecause it feature-gates the `buffered` module. I have filed\nhttps://github.com/apache/arrow-rs-object-store/issues/677 about the\nbreakage; meanwhile this fix is made in expectation that 0.13.2 will not\nbe yanked and the feature gate will remain.\n\n# What changes are included in this PR?\n\nBump the version to 0.13.2 and requesting the \"tokio\" feature.\n\n# Are these changes tested?\n\nThe build should succeed in CI workflows.\n\n# Are there any user-facing changes?\n\nNo\n\nCo-authored-by: Mikhail Zabaluev \u003cmikhail.zabaluev@gmail.com\u003e"
    },
    {
      "commit": "980ea0b36c79a9e996efd90ad5f24571f0f9c0e0",
      "tree": "4da0455d93ef7abbbc3c29d7326c787be7babeb5",
      "parents": [
        "70445c54ff510e1b9f8d6e6782b92a1ce45fbb28"
      ],
      "author": {
        "name": "Daniël Heres",
        "email": "danielheres@gmail.com",
        "time": "Tue Mar 24 14:03:23 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 24 14:03:23 2026 +0100"
      },
      "message": "Reduce per-byte overhead in VLQ integer decoding (#9584)\n\n## Which issue does this PR close?\n\nCloses #9580\n\n## Rationale\n\nThe current VLQ decoder calls `get_aligned` for each byte, which\ninvolves repeated offset calculations and bounds checks in the hot loop.\n\n## What changes are included in this PR?\n\nAlign to the byte boundary once, then iterate directly over the buffer\nslice, avoiding per-byte overhead from `get_aligned`.\n\n## Are there any user-facing changes?\n\nNo.\n\n🤖 Generated with [Claude Code](https://claude.com/claude-code)\n\nCo-authored-by: Claude Opus 4.6 (1M context) \u003cnoreply@anthropic.com\u003e"
    },
    {
      "commit": "70445c54ff510e1b9f8d6e6782b92a1ce45fbb28",
      "tree": "ac4d11db3fdc3060cac68a43ce2696d05dbdf69d",
      "parents": [
        "6471e9ac72a79fd13963568ec3294a76fab826a6"
      ],
      "author": {
        "name": "Jochen Görtler",
        "email": "grtlr@users.noreply.github.com",
        "time": "Sun Mar 22 15:48:05 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Mar 22 10:48:05 2026 -0400"
      },
      "message": "Add `quoted_strings` to `FormatOptions` (#9221)\n\n# Rationale for this change\n\nIn some cases, it is desirable to print strings with surrounding\nquotation marks. A typical example that we run into in\nhttps://github.com/rerun-io/rerun is a `StructArray` that contains empty\nstrings:\n\nCurrent formatting:\n\n```text\n{name: }\n```\n\nAdded option in this PR:\n\n```text\n{name: \"\"}\n```\n\n# What changes are included in this PR?\n\nThis PR relies on `std::fmt::Debug` to do the actual formatting of\nstrings, which means that all escaping is handled out of the box.\n\n# Are these changes tested?\n\nThis PR contains test for different types of inputs, including escape\nsequences. Additionally, it also tests the `StructArray` example\noutlined above.\n\n# Are there any user-facing changes?\n\nBy default this option is false, making the feature opt-in.\n\n---------\n\nCo-authored-by: Andrew Lamb \u003candrew@nerdnetworks.org\u003e"
    },
    {
      "commit": "6471e9ac72a79fd13963568ec3294a76fab826a6",
      "tree": "0cb289b7ecedc5087147c9d803be016af4e0bf93",
      "parents": [
        "6cadf3b4de916c707e2103b123a168154e668a33"
      ],
      "author": {
        "name": "Daniël Heres",
        "email": "danielheres@gmail.com",
        "time": "Fri Mar 20 20:38:26 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 20 20:38:26 2026 +0100"
      },
      "message": "Pre-reserve output capacity in ByteView/ByteArray dictionary decoding (#9590)\n\n## Summary\n\n- Reserve `output.views` capacity in\n`ByteViewArrayDecoderDictionary::read` before the decode loop\n- Reserve `output.offsets` capacity in\n`ByteArrayDecoderDictionary::read` before the decode loop\n\nThis avoids per-chunk reallocation during `extend` calls inside the\ndictionary decode loop.\n\nCloses #9587\n\n## Test plan\n\n- [ ] Existing tests pass (no functional change, only pre-allocation)\n- [ ] Benchmark dictionary-encoded StringView/BinaryView/String reads\n\n🤖 Generated with [Claude Code](https://claude.com/claude-code)\n\nCo-authored-by: Claude Opus 4.6 (1M context) \u003cnoreply@anthropic.com\u003e"
    },
    {
      "commit": "6cadf3b4de916c707e2103b123a168154e668a33",
      "tree": "7ad56369e56e4e5ca2d98e4e7ae9910fcfc1e704",
      "parents": [
        "322f9ce681ed51aa0c99b6517d5f43b7279ecc52"
      ],
      "author": {
        "name": "Andrew Lamb",
        "email": "andrew@nerdnetworks.org",
        "time": "Fri Mar 20 15:00:39 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 20 15:00:39 2026 -0400"
      },
      "message": "Prepare for 58.1.0 Release (#9573)\n\n# Which issue does this PR close?\n\n- part of https://github.com/apache/arrow-rs/issues/9108\n\n# Rationale for this change\n\nPrepare for next release\n\n# What changes are included in this PR?\n\n1. Update version to `58.1.0`\n2. Add changelog. See rendered preview here:\nhttps://github.com/alamb/arrow-rs/blob/alamb/prepare_58.1.0/CHANGELOG.md\n\n# Are these changes tested?\n\nBy CI\n# Are there any user-facing changes?\n\nYes"
    },
    {
      "commit": "322f9ce681ed51aa0c99b6517d5f43b7279ecc52",
      "tree": "17a52e7afe1e9a6f8c57306d1c6cb4e5aec07e56",
      "parents": [
        "bc74c7192a48bd36a9e79b883a3482af396a2350"
      ],
      "author": {
        "name": "Kunal",
        "email": "155142500+kunalsinghdadhwal@users.noreply.github.com",
        "time": "Fri Mar 20 19:26:43 2026 +0530"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 20 09:56:43 2026 -0400"
      },
      "message": "[Variant] Add unshred_variant support for Binary and LargeBinary types (#9576)\n\n# Which issue does this PR close?\n\n\u003c!--\nWe generally require a GitHub issue to be filed for all bug fixes and\nenhancements and this helps us generate change logs for our releases.\nYou can link an issue to this PR using the GitHub syntax.\n--\u003e\n\n- Closes #9526 \n\n# Rationale for this change\n\n\u003c!--\nWhy are you proposing this change? If this is already explained clearly\nin the issue then this section is not needed.\nExplaining clearly why changes are proposed helps reviewers understand\nyour changes and offer better suggestions for fixes.\n--\u003e\n\n`shred_variant` already supports Binary and LargeBinary types (#9525,\n#9554), but unshred_variant does not handle these types. This means\nshredded Binary/LargeBinary columns cannot be converted back to\nunshredded VariantArrays.\n\n# What changes are included in this PR?\n\nAdds unshred_variant support for DataType::Binary and\nDataType::LargeBinary in parquet-variant-compute/src/unshred_variant.rs:\n  - New enum variants PrimitiveBinary and PrimitiveLargeBinary\n  - Match arms in append_row and try_new_opt\n  - AppendToVariantBuilder impls for BinaryArray and LargeBinaryArray\n\n\n\n# Are these changes tested?\n\nYes\n\n# Are there any user-facing changes?\n\nNo breaking changes\n\n---------\n\nSigned-off-by: Kunal Singh Dadhwal \u003ckunalsinghdadhwal@gmail.com\u003e"
    },
    {
      "commit": "bc74c7192a48bd36a9e79b883a3482af396a2350",
      "tree": "d9fe240fd6a829567cdf136c38e201946df272ed",
      "parents": [
        "39dda22517e6369d006aaac5eaac53d9cd72c29b"
      ],
      "author": {
        "name": "Krisztián Szűcs",
        "email": "szucs.krisztian@gmail.com",
        "time": "Fri Mar 20 14:54:53 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 20 09:54:53 2026 -0400"
      },
      "message": "feat(parquet): add content defined chunking for arrow writer (#9450)\n\n# Which issue does this PR close?\n\n- Closes #NNN.\n\n# Rationale for this change\n\nRust implementation of https://github.com/apache/arrow/pull/45360\n\nTraditional Parquet writing splits data pages at fixed sizes, so a\nsingle inserted or deleted row causes all subsequent pages to shift —\nresulting in nearly every byte being re-uploaded to content-addressable\nstorage (CAS) systems. CDC determines page boundaries via a rolling\ngearhash over column values, so unchanged data produces identical pages\nacross different writes enabling storage cost reductions and faster\nupload times.\n\nSee more details in https://huggingface.co/blog/parquet-cdc\n\nThe original C++ implementation\nhttps://github.com/apache/arrow/pull/45360\n\nEvaluation tool https://github.com/huggingface/dataset-dedupe-estimator\nwhere I already integrated this PR to verify that deduplication\neffectiveness is on par with parquet-cpp (lower is better):\n\n\u003cimg width\u003d\"984\" height\u003d\"411\" alt\u003d\"image\"\nsrc\u003d\"https://github.com/user-attachments/assets/e6e80931-ac76-4bdd-bf9c-ba7e06559411\"\n/\u003e\n\n\n# What changes are included in this PR?\n\n- **Content-defined chunker**  at `parquet/src/column/chunker/`\n- **Arrow writer integration** integrated in `ArrowColumnWriter`\n- **Writer properties** via `CdcOptions` struct (`min_chunk_size`,\n`max_chunk_size`, `norm_level`)\n- **ColumnDescriptor**: added `repeated_ancestor_def_level` field to for\nnested field values iteration\n\n# Are these changes tested?\n\nYes — unit tests are located in `cdc.rs` and ported from the C++\nimplementation.\n\n# Are there any user-facing changes?\n\nNew **experimental** API, disabled by default — no behavior change for\nexisting code:\n\n```rust\n// Simple toggle (256 KiB min, 1 MiB max, norm_level 0)\nlet props \u003d WriterProperties::builder()\n    .set_content_defined_chunking(true)\n    
.build();\n\n// Explicit CDC parameters\nlet props \u003d WriterProperties::builder()\n    .set_cdc_options(CdcOptions { min_chunk_size: 128 * 1024, max_chunk_size: 512 * 1024, norm_level: 1 })\n    .build();\n```\n\n---------\n\nCo-authored-by: Ed Seidl \u003cetseidl@users.noreply.github.com\u003e"
    },
    {
      "commit": "39dda22517e6369d006aaac5eaac53d9cd72c29b",
      "tree": "fcaf4abae82ba0b9c379a7ed9ec3939cf54c4dbf",
      "parents": [
        "d53df605656d8012eca42e8ddffe165362a1a4cb"
      ],
      "author": {
        "name": "Peter L",
        "email": "cetra3@hotmail.com",
        "time": "Sat Mar 21 00:14:56 2026 +1030"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 20 09:44:56 2026 -0400"
      },
      "message": "Make Sbbf Constructers Public (#9569)\n\n# Which issue does this PR close?\n\nNone\n\n# Rationale for this change\n\nWe want to use the SBBF Bloom Filter, but need to construct/serialize it\nmanually. Currently there is no way to create a new `Sbbf` outside of\nthis crate. Alongside this: we want to store the `Sbbf` in a\n`FixedSizedBinary` column for some fancy indexing.\n\n# What changes are included in this PR?\n\nSome methods become public\n\n# Are these changes tested?\n\nN/A\n\n# Are there any user-facing changes?\n\nYes, we add a few more public methods to the `Sbbf` struct"
    },
    {
      "commit": "d53df605656d8012eca42e8ddffe165362a1a4cb",
      "tree": "a5ba7640eca4dca35df2abfb7d0f0c0b7635d67b",
      "parents": [
        "44f5dfc607892bab849a4dba008b6ee8966c1461"
      ],
      "author": {
        "name": "Kunal",
        "email": "155142500+kunalsinghdadhwal@users.noreply.github.com",
        "time": "Fri Mar 20 01:44:01 2026 +0530"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 19 21:14:01 2026 +0100"
      },
      "message": "feat: Optimize from_bitwise_binary_op with 64-bit alignment (#9441)\n\n# Which issue does this PR close?\n\n\n- Closes #9378 \n\n# Rationale for this change\n\nthe optimizations as listed in the issue description\n\n- Align to 8 bytes\n- Don\u0027t try to return a buffer with bit_offset 0 but round it to a\nmultiple of 64\n- Use chunk_exact for the fallback path\n\n\n# What changes are included in this PR?\n\nWhen both inputs share the same sub-64-bit alignment (left_offset % 64\n\u003d\u003d right_offset % 64), the optimized path is used. This covers the\ncommon cases (both offset 0, both sliced equally, etc.). The BitChunks\nfallback is retained only when the two offsets have different sub-64-bit\nalignment.\n\n# Are these changes tested?\n\nYes the tests are changed and they are included\n\n# Are there any user-facing changes?\n  \n\nYes, this is a minor breaking change to from_bitwise_binary_op:\n\n- The returned BooleanBuffer may now have a non-zero offset (previously\nalways 0)\n- The returned BooleanBuffer may have padding bits set outside the\nlogical range in values()\n\n---------\n\nSigned-off-by: Kunal Singh Dadhwal \u003ckunalsinghdadhwal@gmail.com\u003e\nCo-authored-by: Andrew Lamb \u003candrew@nerdnetworks.org\u003e"
    },
    {
      "commit": "44f5dfc607892bab849a4dba008b6ee8966c1461",
      "tree": "883b38598ec58853b8105c5f5689764e66ef11a2",
      "parents": [
        "14f1eb97fbf017dbd0faef749f62f6cd9389a451"
      ],
      "author": {
        "name": "Daniël Heres",
        "email": "danielheres@gmail.com",
        "time": "Thu Mar 19 19:49:12 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 19 19:49:12 2026 +0100"
      },
      "message": "perf: Coalesce page fetches when RowSelection selects all rows (#9578)\n\n## Summary\n\n- When a `RowSelection` selects every row in a row group, `fetch_ranges`\nnow treats it as no selection, producing a single whole-column-chunk I/O\nrequest instead of N individual page requests\n- This reduces the number of I/O requests for subsequent filter\npredicates when an earlier predicate passes all rows\n\n## Details\n\nIn `InMemoryRowGroup::fetch_ranges`, when both a `RowSelection` and an\n`OffsetIndex` are present, the code enters a page-level fetch path that\nuses `scan_ranges()` to produce individual page ranges. Even when the\nselection covers all rows, this produces N separate ranges (one per\npage).\n\nThe fix: before entering the page-level path, check if the selection\u0027s\n`row_count()` equals the row group\u0027s total row count. If so, drop the\nselection and take the simpler whole-column-chunk path.\n\nThis commonly happens when a multi-predicate `RowFilter` has an early\npredicate that passes all rows in a row group (e.g., `CounterID \u003d 62` on\na row group where all rows have `CounterID \u003d 62`).\n\n## Test plan\n\n- [x] Existing tests pass (snapshot updated to reflect fewer I/O\nrequests)\n- [x] `test_read_multiple_row_filter` verifies the coalesced fetch\npattern\n\n🤖 Generated with [Claude Code](https://claude.com/claude-code)\n\nCo-authored-by: Claude Opus 4.6 (1M context) \u003cnoreply@anthropic.com\u003e"
    },
    {
      "commit": "14f1eb97fbf017dbd0faef749f62f6cd9389a451",
      "tree": "3802916470cec674504b0295ac74c92e3c438915",
      "parents": [
        "55a7768bbb95976e1dac29facb2ea337aa4d89b6"
      ],
      "author": {
        "name": "Thomas Tanon",
        "email": "thomas@pellissier-tanon.fr",
        "time": "Thu Mar 19 18:42:07 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 19 13:42:07 2026 -0400"
      },
      "message": "pyarrow: Cache the imported classes to avoid importing them each time (#9439)\n\n# Which issue does this PR close?\n\n- Closes #9438.\n\n# Rationale for this change\n\nSpeed up conversion by only importing `pyarrow` once.\n\n# What changes are included in this PR?\n\n- Use `PyOnceLock::import` to import the types.\n- Remove some not useful `.extract::\u003cPyBackedStr\u003e()?` (the `Display`\nimplementation already does something similar)\n\n# Are these changes tested?\n\nCovered by existing tests. It would be nice to add benchmark but it\nmight require to:\n- either add a dependency to a python benchmark runner\n- write some hacky code to import `pyarrow` from criterion tests (likely\nby running `pip`/`uv` from the Rust benchmark code)\n\n# Are there any user-facing changes?\n\nNo"
    },
    {
      "commit": "55a7768bbb95976e1dac29facb2ea337aa4d89b6",
      "tree": "bd11da521b96e2009c3b46a4d177626386cf3088",
      "parents": [
        "42ab0bcef7c2257772dfb7de77b04051350e18cb"
      ],
      "author": {
        "name": "Konstantin Tarasov",
        "email": "33369833+sdf-jkl@users.noreply.github.com",
        "time": "Thu Mar 19 13:41:47 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 19 13:41:47 2026 -0400"
      },
      "message": "[Variant] Add `variant_to_arrow` `Struct` type support (#9572)\n\n# Which issue does this PR close?\n\n\u003c!--\nWe generally require a GitHub issue to be filed for all bug fixes and\nenhancements and this helps us generate change logs for our releases.\nYou can link an issue to this PR using the GitHub syntax.\n--\u003e\n\n- Closes #9529 .\n\n# Rationale for this change\n\n- In a follow up PR, can fix the `variant_get` TODO:\n\nhttps://github.com/apache/arrow-rs/blob/3b6179658203dc1b1610b67c1777d5b8beb137fc/parquet-variant-compute/src/variant_get.rs#L89-L92\n- When we know that Struct VariantArray is not shredded can reuse\n`shred_basic_variant`\n\u003c!--\nWhy are you proposing this change? If this is already explained clearly\nin the issue then this section is not needed.\nExplaining clearly why changes are proposed helps reviewers understand\nyour changes and offer better suggestions for fixes.\n--\u003e\n\n# What changes are included in this PR?\n\n- Added `StructVariantToArrowRowBuilder` builder.\n- Moved `make_variant_to_arrow_row_builder` logic to\n`make_typed_variant_to_arrow_row_builder` to reuse by `Struct` array\u0027s\ninner fields.\n- Changed a `variant_get` test to show that it now handles unshredded\n`Struct` `VariantArray`\n\u003c!--\nThere is no need to duplicate the description in the issue here but it\nis sometimes worth providing a summary of the individual changes in this\nPR.\n--\u003e\n\n# Are these changes tested?\n- Yes, added `test_struct_row_builder_handles_unshredded_nested_structs`\n- Everything else still works.\n\u003c!--\nWe typically require tests for all PRs in order to:\n1. Prevent the code from being accidentally broken by subsequent changes\n2. 
Serve as another way to document the expected behavior of the code\n\nIf tests are not included in your PR, please explain why (for example,\nare they covered by existing tests)?\n--\u003e\n\n# Are there any user-facing changes?\nNo\n\u003c!--\nIf there are user-facing changes then we may require documentation to be\nupdated before approving the PR.\n\nIf there are any breaking changes to public APIs, please call them out.\n--\u003e\n\n---------\n\nCo-authored-by: Ryan Johnson \u003cscovich@users.noreply.github.com\u003e"
    },
    {
      "commit": "42ab0bcef7c2257772dfb7de77b04051350e18cb",
      "tree": "708552aed558f440fc958b43456b122ec460d78d",
      "parents": [
        "88422cbdcbfa8f4e2411d66578dd3582fafbf2a1"
      ],
      "author": {
        "name": "Ed Seidl",
        "email": "etseidl@users.noreply.github.com",
        "time": "Thu Mar 19 10:39:35 2026 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 19 13:39:35 2026 -0400"
      },
      "message": "fix: Used `checked_add` for bounds checks to avoid UB (#9568)\n\n# Which issue does this PR close?\n\n- Closes #9543.\n\n# Rationale for this change\n\nSee issue, but it is possible to construct arguments to\n`arrow_buffer::bit_util::bit_mask::set_bits` that overflow the bounds\nchecking protecting unsafe code.\n\n# What changes are included in this PR?\nUse `checked_add` when doing the bounds checking and panic when an\noverflow occurs.\n\n# Are these changes tested?\n\nYes\n\n# Are there any user-facing changes?\n\nNo"
    },
    {
      "commit": "88422cbdcbfa8f4e2411d66578dd3582fafbf2a1",
      "tree": "e46cee6fd06538908bc4cacbda02455fcf79e37f",
      "parents": [
        "7ea7cdc55a20162346e2e006ac4589a30f7bfdbb"
      ],
      "author": {
        "name": "Alfonso Subiotto Marqués",
        "email": "alfonso.subiotto@polarsignals.com",
        "time": "Wed Mar 18 20:47:38 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 18 15:47:38 2026 -0400"
      },
      "message": "arrow-flight: generate dict_ids for dicts nested inside complex types (#9556)\n\nSome cases were missing.\n\n# Which issue does this PR close?\n\n\u003c!--\nWe generally require a GitHub issue to be filed for all bug fixes and\nenhancements and this helps us generate change logs for our releases.\nYou can link an issue to this PR using the GitHub syntax.\n--\u003e\n\n- Closes #9555 .\n\n# Rationale for this change\n\n\u003c!--\nWhy are you proposing this change? If this is already explained clearly\nin the issue then this section is not needed.\nExplaining clearly why changes are proposed helps reviewers understand\nyour changes and offer better suggestions for fixes.\n--\u003e\nFix flight encoding panic\n\n# What changes are included in this PR?\n\n\u003c!--\nThere is no need to duplicate the description in the issue here but it\nis sometimes worth providing a summary of the individual changes in this\nPR.\n--\u003e\nAssigning dict ids properly to nested dicts\n\n\n# Are these changes tested?\n\n\u003c!--\nWe typically require tests for all PRs in order to:\n1. Prevent the code from being accidentally broken by subsequent changes\n2. Serve as another way to document the expected behavior of the code\n\nIf tests are not included in your PR, please explain why (for example,\nare they covered by existing tests)?\n--\u003e\nYes. The same tests fail on main.\n\n# Are there any user-facing changes?\n\n\u003c!--\nIf there are user-facing changes then we may require documentation to be\nupdated before approving the PR.\n\nIf there are any breaking changes to public APIs, please call them out.\n--\u003e\n\nSigned-off-by: Alfonso Subiotto Marques \u003calfonso.subiotto@polarsignals.com\u003e"
    },
    {
      "commit": "7ea7cdc55a20162346e2e006ac4589a30f7bfdbb",
      "tree": "142f9627ad49a369ae66e451611752d18d6f22d7",
      "parents": [
        "f4ab49e9f3621e72f875b5da26c0dffae880249c"
      ],
      "author": {
        "name": "Tobias Schwarzinger",
        "email": "tobias.schwarzinger@tuwien.ac.at",
        "time": "Wed Mar 18 20:36:23 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 18 15:36:23 2026 -0400"
      },
      "message": "Optimize `take_fixed_size_binary` For Predefined Value Lengths (#9535)\n\n# Which issue does this PR close?\n\n- Related to https://github.com/apache/arrow-rs/issues/279\n\n# Rationale for this change\n\nThe `take` kernel is very important for many operations (e.g.,\n`HashJoin` in DataFusion IIRC). Currently, there is a gap between the\nperformance of the take kernel for primitive arrays (e.g.,\n`DataType::UInt32`) and fixed size binary arrays of the same length\n(e.g., `FixedSizeBinary\u003c4\u003e`).\n\nIn our case this lead to a performance reduction when moving from an\ninteger-based id column to a fixed-size-binary-based id column. This PR\naims to address parts of this gap.\n\nThe 16-bytes case would especially benefit operations on UUID columns.\n\n# What changes are included in this PR?\n\n- Add `take_fixed_size` that can be called for set of predefined\nfsb-lengths that we want to support. This is a \"flat buffer\" version of\nthe `take_native` kernel.\n\n# Are these changes tested?\n\nI\u0027ve added another test that still exercises the non-optimized code\npath.\n\n# Are there any user-facing changes?\n\nNo"
    },
    {
      "commit": "f4ab49e9f3621e72f875b5da26c0dffae880249c",
      "tree": "baddd6aa1eec8c406e5979d6d84f6dce4ca023b1",
      "parents": [
        "8745c3560ba6b688e3cb8e1599e4da82b4168be4"
      ],
      "author": {
        "name": "Konstantin Tarasov",
        "email": "33369833+sdf-jkl@users.noreply.github.com",
        "time": "Wed Mar 18 15:23:24 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 18 15:23:24 2026 -0400"
      },
      "message": "[Variant] clean up `variant_get` tests (#9518)\n\n# Which issue does this PR close?\n\n- closes #9517.\n\n# Rationale for this change\ncheck issue\n\u003c!--\nWhy are you proposing this change? If this is already explained clearly\nin the issue then this section is not needed.\nExplaining clearly why changes are proposed helps reviewers understand\nyour changes and offer better suggestions for fixes.\n--\u003e\n\n# What changes are included in this PR?\n- Use `variant_shred` in test macros\n- Use `VariantArray::from_parts` instead of using `StructArrayBuilder`\n\u003c!--\nThere is no need to duplicate the description in the issue here but it\nis sometimes worth providing a summary of the individual changes in this\nPR.\n--\u003e\n\n# Are these changes tested?\nyes, changes pass same tests\n\u003c!--\nWe typically require tests for all PRs in order to:\n1. Prevent the code from being accidentally broken by subsequent changes\n2. Serve as another way to document the expected behavior of the code\n\nIf tests are not included in your PR, please explain why (for example,\nare they covered by existing tests)?\n--\u003e\n\n# Are there any user-facing changes?\n\n\u003c!--\nIf there are user-facing changes then we may require documentation to be\nupdated before approving the PR.\n\nIf there are any breaking changes to public APIs, please call them out.\n--\u003e\nno"
    },
    {
      "commit": "8745c3560ba6b688e3cb8e1599e4da82b4168be4",
      "tree": "2bab1ceb3146ec36e7de109143beee42c3c7a1ea",
      "parents": [
        "ea3c0509bcee34e1e85152db56d085c19ae05e9c"
      ],
      "author": {
        "name": "Alexander Rafferty",
        "email": "hello@alexanderrafferty.com",
        "time": "Thu Mar 19 06:18:33 2026 +1100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 18 15:18:33 2026 -0400"
      },
      "message": "Move `ValueIter` into own module, and add public `record_count` function (#9557)\n\n# Which issue does this PR close?\n\nAnother smaller PR extracted from #9494.\n\n# Rationale for this change\n\nI\u0027ve moved `ValueIter` into its own module because it\u0027s already\nself-contained, and because that will make it easier to review the\nchanges I have made to `arrow-json/src/reader/schema.rs`.\n\nI\u0027ve also added a public `record_count` function to `ValueIter` - which\ncan be used to simplify consuming code in Datafusion which is currently\ntracking it separately.\n\n# What changes are included in this PR?\n\n* Moved `ValueIter` into own module\n* Added `record_count` method to `ValueIter`\n\n# Are these changes tested?\n\nYes.\n\n# Are there any user-facing changes?\n\nAddition of one new public method, `ValueIter::record_count`."
    },
    {
      "commit": "ea3c0509bcee34e1e85152db56d085c19ae05e9c",
      "tree": "fb77ca59e1a52df6ae4c8d6110511baa5042deee",
      "parents": [
        "c50ea6eaaf484620d4895896400ab0e2ced731ce"
      ],
      "author": {
        "name": "Peter L",
        "email": "cetra3@hotmail.com",
        "time": "Thu Mar 19 05:44:35 2026 +1030"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 18 15:14:35 2026 -0400"
      },
      "message": "Add `claim` method to recordbatch for memory accounting (#9433)\n\n# Which issue does this PR close?\n\nNone specifically but aligns with some of the changes in\nhttps://github.com/apache/arrow-rs/issues/8137\n\n# Rationale for this change\n\nIt should be easy to claim a `RecordBatch` in totality with an arrow\nmemory pool\n\n# What changes are included in this PR?\n\nAdds a few methods to bubble up the `claim` to `RecordBatch` level if\nthe `pool` feature is enabled.\n\n# Are these changes tested?\n\nYes \u0026 new tests added\n\n# Are there any user-facing changes?\n\nIf `pool` feature is added, a new `claim` method on `RecordBatch` and\nassociated structs"
    },
    {
      "commit": "c50ea6eaaf484620d4895896400ab0e2ced731ce",
      "tree": "629fce764896def3184f8c9b46c024962c03242f",
      "parents": [
        "3b6179658203dc1b1610b67c1777d5b8beb137fc"
      ],
      "author": {
        "name": "Ed Seidl",
        "email": "etseidl@users.noreply.github.com",
        "time": "Wed Mar 18 11:47:48 2026 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 18 14:47:48 2026 -0400"
      },
      "message": "Optimize delta binary decoder in the case where bitwidth\u003d0 (#9477)\n\n# Which issue does this PR close?\n\n- Closes #9476.\n\n# Rationale for this change\n\nExplore if we can achieve the speedups seen in arrow-cpp\n(https://github.com/apache/arrow/pull/49296).\n\n# What changes are included in this PR?\n\nAdds special cases to the delta binary packed decoder when bitwidth for\na miniblock is 0. The optimization avoids relying on previous values to\ndecode current ones.\n\n# Are these changes tested?\n\nYes, tests have been added, as well as new benchmarks.\n\n# Are there any user-facing changes?\n\nNo"
    },
    {
      "commit": "3b6179658203dc1b1610b67c1777d5b8beb137fc",
      "tree": "4d18b42d0f4dcb2562c7029e582cb23b1f28f239",
      "parents": [
        "66313ae9a18bd5479c5be97aaaf926fd5f64cdb9"
      ],
      "author": {
        "name": "Raz Luvaton",
        "email": "16746759+rluvaton@users.noreply.github.com",
        "time": "Wed Mar 18 16:27:12 2026 +0200"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 18 10:27:12 2026 -0400"
      },
      "message": "fix: first next_back() on new RowsIter panics (#9505)\n\n# Which issue does this PR close?\n\nN/A\n\n# Rationale for this change\n\nit should not panic\n\n# What changes are included in this PR?\n\ncorrectly use last row in `next_back`\n\n# Are these changes tested?\n\nyes\n\n# Are there any user-facing changes?\n\nthey can now use `next_back`"
    },
    {
      "commit": "66313ae9a18bd5479c5be97aaaf926fd5f64cdb9",
      "tree": "34b8a2cf19ca90e61030ef8a76d97dda150dddab",
      "parents": [
        "c4b43bb916aa91d366d17013d867992d931aae70"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Wed Mar 18 10:12:12 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 18 10:12:12 2026 -0400"
      },
      "message": "Bump actions/download-artifact from 7 to 8 (#9488)\n\nBumps\n[actions/download-artifact](https://github.com/actions/download-artifact)\nfrom 7 to 8.\n\u003cdetails\u003e\n\u003csummary\u003eRelease notes\u003c/summary\u003e\n\u003cp\u003e\u003cem\u003eSourced from \u003ca\nhref\u003d\"https://github.com/actions/download-artifact/releases\"\u003eactions/download-artifact\u0027s\nreleases\u003c/a\u003e.\u003c/em\u003e\u003c/p\u003e\n\u003cblockquote\u003e\n\u003ch2\u003ev8.0.0\u003c/h2\u003e\n\u003ch2\u003ev8 - What\u0027s new\u003c/h2\u003e\n\u003ch3\u003eDirect downloads\u003c/h3\u003e\n\u003cp\u003eTo support direct uploads in \u003ccode\u003eactions/upload-artifact\u003c/code\u003e,\nthe action will no longer attempt to unzip all downloaded files.\nInstead, the action checks the \u003ccode\u003eContent-Type\u003c/code\u003e header ahead of\nunzipping and skips non-zipped files. Callers wishing to download a\nzipped file as-is can also set the new \u003ccode\u003eskip-decompress\u003c/code\u003e\nparameter to \u003ccode\u003efalse\u003c/code\u003e.\u003c/p\u003e\n\u003ch3\u003eEnforced checks (breaking)\u003c/h3\u003e\n\u003cp\u003eA previous release introduced digest checks on the download. If a\ndownload hash didn\u0027t match the expected hash from the server, the action\nwould log a warning. Callers can now configure the behavior on mismatch\nwith the \u003ccode\u003edigest-mismatch\u003c/code\u003e parameter. 
To be secure by\ndefault, we are now defaulting the behavior to \u003ccode\u003eerror\u003c/code\u003e which\nwill fail the workflow run.\u003c/p\u003e\n\u003ch3\u003eESM\u003c/h3\u003e\n\u003cp\u003eTo support new versions of the @actions/* packages, we\u0027ve upgraded\nthe package to ESM.\u003c/p\u003e\n\u003ch2\u003eWhat\u0027s Changed\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eDon\u0027t attempt to un-zip non-zipped downloads by \u003ca\nhref\u003d\"https://github.com/danwkennedy\"\u003e\u003ccode\u003e@​danwkennedy\u003c/code\u003e\u003c/a\u003e in\n\u003ca\nhref\u003d\"https://redirect.github.com/actions/download-artifact/pull/460\"\u003eactions/download-artifact#460\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003eAdd a setting to specify what to do on hash mismatch and default it\nto \u003ccode\u003eerror\u003c/code\u003e by \u003ca\nhref\u003d\"https://github.com/danwkennedy\"\u003e\u003ccode\u003e@​danwkennedy\u003c/code\u003e\u003c/a\u003e in\n\u003ca\nhref\u003d\"https://redirect.github.com/actions/download-artifact/pull/461\"\u003eactions/download-artifact#461\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003e\u003cstrong\u003eFull Changelog\u003c/strong\u003e: \u003ca\nhref\u003d\"https://github.com/actions/download-artifact/compare/v7...v8.0.0\"\u003ehttps://github.com/actions/download-artifact/compare/v7...v8.0.0\u003c/a\u003e\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003c/details\u003e\n\u003cdetails\u003e\n\u003csummary\u003eCommits\u003c/summary\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/actions/download-artifact/commit/70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3\"\u003e\u003ccode\u003e70fc10c\u003c/code\u003e\u003c/a\u003e\nMerge pull request \u003ca\nhref\u003d\"https://redirect.github.com/actions/download-artifact/issues/461\"\u003e#461\u003c/a\u003e\nfrom 
actions/danwkennedy/digest-mismatch-behavior\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/actions/download-artifact/commit/f258da9a506b755b84a09a531814700b86ccfc62\"\u003e\u003ccode\u003ef258da9\u003c/code\u003e\u003c/a\u003e\nAdd change docs\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/actions/download-artifact/commit/ccc058e5fbb0bb2352213eaec3491e117cbc4a5c\"\u003e\u003ccode\u003eccc058e\u003c/code\u003e\u003c/a\u003e\nFix linting issues\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/actions/download-artifact/commit/bd7976ba57ecea96e6f3df575eb922d11a12a9fd\"\u003e\u003ccode\u003ebd7976b\u003c/code\u003e\u003c/a\u003e\nAdd a setting to specify what to do on hash mismatch and default it to\n\u003ccode\u003eerror\u003c/code\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/actions/download-artifact/commit/ac21fcf45e0aaee541c0f7030558bdad38d77d6c\"\u003e\u003ccode\u003eac21fcf\u003c/code\u003e\u003c/a\u003e\nMerge pull request \u003ca\nhref\u003d\"https://redirect.github.com/actions/download-artifact/issues/460\"\u003e#460\u003c/a\u003e\nfrom actions/danwkennedy/download-no-unzip\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/actions/download-artifact/commit/15999bff51058bc7c19b50ebbba518eaef7c26c0\"\u003e\u003ccode\u003e15999bf\u003c/code\u003e\u003c/a\u003e\nAdd note about package bumps\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/actions/download-artifact/commit/974686ed5098c7f9c9289ec946b9058e496a2561\"\u003e\u003ccode\u003e974686e\u003c/code\u003e\u003c/a\u003e\nBump the version to \u003ccode\u003ev8\u003c/code\u003e and add release notes\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/actions/download-artifact/commit/fbe48b1d2756394be4cd4358ed3bc1343b330e75\"\u003e\u003ccode\u003efbe48b1\u003c/code\u003e\u003c/a\u003e\nUpdate test names to make it clearer what they 
do\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/actions/download-artifact/commit/96bf374a614d4360e225874c3efd6893a3f285e7\"\u003e\u003ccode\u003e96bf374\u003c/code\u003e\u003c/a\u003e\nOne more test fix\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/actions/download-artifact/commit/b8c4819ef592cbe04fd93534534b38f853864332\"\u003e\u003ccode\u003eb8c4819\u003c/code\u003e\u003c/a\u003e\nFix skip decompress test\u003c/li\u003e\n\u003cli\u003eAdditional commits viewable in \u003ca\nhref\u003d\"https://github.com/actions/download-artifact/compare/v7...v8\"\u003ecompare\nview\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/details\u003e\n\u003cbr /\u003e\n\n\n[![Dependabot compatibility\nscore](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name\u003dactions/download-artifact\u0026package-manager\u003dgithub_actions\u0026previous-version\u003d7\u0026new-version\u003d8)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)\n\nDependabot will resolve any conflicts with this PR as long as you don\u0027t\nalter it yourself. 
You can also trigger a rebase manually by commenting\n`@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n\u003cdetails\u003e\n\u003csummary\u003eDependabot commands and options\u003c/summary\u003e\n\u003cbr /\u003e\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits\nthat have been made to it\n- `@dependabot show \u003cdependency name\u003e ignore conditions` will show all\nof the ignore conditions of the specified dependency\n- `@dependabot ignore this major version` will close this PR and stop\nDependabot creating any more for this major version (unless you reopen\nthe PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop\nDependabot creating any more for this minor version (unless you reopen\nthe PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop\nDependabot creating any more for this dependency (unless you reopen the\nPR or upgrade to it yourself)\n\n\n\u003c/details\u003e\n\nSigned-off-by: dependabot[bot] \u003csupport@github.com\u003e\nCo-authored-by: dependabot[bot] \u003c49699333+dependabot[bot]@users.noreply.github.com\u003e"
    },
    {
      "commit": "c4b43bb916aa91d366d17013d867992d931aae70",
      "tree": "c245c059ce2d26d74a1ceef4751c17a5cd4825bd",
      "parents": [
        "00ad7fca2fc5e09c0da5f56f87edc3a454eec576"
      ],
      "author": {
        "name": "Mikhail Zabaluev",
        "email": "mikhail.zabaluev@gmail.com",
        "time": "Wed Mar 18 16:11:40 2026 +0200"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 18 10:11:40 2026 -0400"
      },
      "message": "feat(arrow-avro): `HeaderInfo` to expose OCF header (#9548)\n\n# Which issue does this PR close?\n\n- Closes #9460.\n\n# Rationale for this change\n\nRework of #9462 along the lines proposed in\nhttps://github.com/apache/arrow-rs/pull/9462#issuecomment-3995541243.\n\n# What changes are included in this PR?\n\nAdd `HeaderInfo` as a cheaply cloneable value to expose header\ninformation parsed from an Avro OCF file.\n\nAdd `read_header_info` function to the `reader` module, and its async\ncounterpart to the `reader::async_reader` module, to read the header\nfrom the file reader and return `HeaderInfo`.\n\nAdd `build_with_header` method to async reader builder to enable reuse\nof the header with multiple readers.\n\n# Are these changes tested?\n\nAdded a test for the async reader.\n\n# Are there any user-facing changes?\n\nNew API in arrow-avro:\n\n* `reader::HeaderInfo`\n* `reader::read_header_info` and\n`reader::async_reader::read_header_info`\n*  `build_with_header` method of `AvroAsyncFileReader`\u0027s builder.\n\n---------\n\nCo-authored-by: Connor Sanders \u003c170039284+jecsand838@users.noreply.github.com\u003e"
    },
    {
      "commit": "00ad7fca2fc5e09c0da5f56f87edc3a454eec576",
      "tree": "cc88ccc1bff596e67595d47158e5c573b61a89b5",
      "parents": [
        "edc3cb78b4e2b9bf6a21e4c522d0f9e90fa10532"
      ],
      "author": {
        "name": "Burak Şen",
        "email": "buraksenb@gmail.com",
        "time": "Wed Mar 18 16:25:34 2026 +0300"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 18 09:25:34 2026 -0400"
      },
      "message": "chore: extend record_batch macro to support variables and expressions (#9522)\n\n# Which issue does this PR close?\n- Closes #9245.\n\n# Rationale for this change\nCurrently record_batch! macro supports only literal values. In\ndatafusion repository there is also a record_batch! macro that supports\nthis.\n\n\nhttps://github.com/apache/datafusion/issues/13037 can be closed after\nDatafusion repository upgrades version\n\n# What changes are included in this PR?\nExtend record_batch! macro to support datafusion equivalent added in: \n\n# Are these changes tested?\nI\u0027ve actually ported datafusion logic to here. I was not sure if it\nmakes sense to add unit tests for this macro but I can if requested\n\n# Are there any user-facing changes?\nNo breaking changes to downstream since this only extends macro\n\n---------\n\nCo-authored-by: Andrew Lamb \u003candrew@nerdnetworks.org\u003e"
    },
    {
      "commit": "edc3cb78b4e2b9bf6a21e4c522d0f9e90fa10532",
      "tree": "9dff140b1e255bc0317359b769215a92c479e598",
      "parents": [
        "e3926a96b7b807e54cb303791a3d31cd9591357b"
      ],
      "author": {
        "name": "Alfonso Subiotto Marqués",
        "email": "alfonso.subiotto@polarsignals.com",
        "time": "Wed Mar 18 14:25:02 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 18 09:25:02 2026 -0400"
      },
      "message": "arrow-select: fix MutableArrayData interleave for ListView (#9560)\n\nThe previous code did not extend child data buffers.\n\nI\u0027m preparing a PR for an optimized listview interleave, but wanted to\nmake sure the fallback path was correct before comparing benchmarks.\n\n# Which issue does this PR close?\n\n\u003c!--\nWe generally require a GitHub issue to be filed for all bug fixes and\nenhancements and this helps us generate change logs for our releases.\nYou can link an issue to this PR using the GitHub syntax.\n--\u003e\n\n- Closes #9559 \n- Closes https://github.com/apache/arrow-rs/pull/9562\n- https://github.com/apache/arrow-rs/issues/9561\n\n# Rationale for this change\n\n\u003c!--\nWhy are you proposing this change? If this is already explained clearly\nin the issue then this section is not needed.\nExplaining clearly why changes are proposed helps reviewers understand\nyour changes and offer better suggestions for fixes.\n--\u003e\nFix a bug\n\n# What changes are included in this PR?\n\n\u003c!--\nThere is no need to duplicate the description in the issue here but it\nis sometimes worth providing a summary of the individual changes in this\nPR.\n--\u003e\nBugfix and test\n\n# Are these changes tested?\n\n\u003c!--\nWe typically require tests for all PRs in order to:\n1. Prevent the code from being accidentally broken by subsequent changes\n2. Serve as another way to document the expected behavior of the code\n\nIf tests are not included in your PR, please explain why (for example,\nare they covered by existing tests)?\n--\u003e\nYes\n\n# Are there any user-facing changes?\n\n\u003c!--\nIf there are user-facing changes then we may require documentation to be\nupdated before approving the PR.\n\nIf there are any breaking changes to public APIs, please call them out.\n--\u003e\nListView interleaves did not succeed previously.\n\nSigned-off-by: Alfonso Subiotto Marques \u003calfonso.subiotto@polarsignals.com\u003e"
    },
    {
      "commit": "e3926a96b7b807e54cb303791a3d31cd9591357b",
      "tree": "3ed060820d4503541e466a6c95b4003b7204e286",
      "parents": [
        "19889a33f63427c4b22ab3b7fcb62b77dbe9ddec"
      ],
      "author": {
        "name": "Daniël Heres",
        "email": "danielheres@gmail.com",
        "time": "Wed Mar 18 14:04:03 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 18 09:04:03 2026 -0400"
      },
      "message": "Add mutable operations to BooleanBuffer (Bit*Assign) (#9567)\n\n# Which issue does this PR close?\n\n\u003c!--\nWe generally require a GitHub issue to be filed for all bug fixes and\nenhancements and this helps us generate change logs for our releases.\nYou can link an issue to this PR using the GitHub syntax.\n--\u003e\n\n- Closes #NNN.\n\n# Rationale for this change\n\nI want to avoid allocating a new buffer when doing `\u0026`.\nWe can use `\u0026\u003d` this way.\n\n\n# What changes are included in this PR?\n\n\n# Are these changes tested?\n\n\n# Are there any user-facing changes?\n\n---------\n\nCo-authored-by: Andrew Lamb \u003candrew@nerdnetworks.org\u003e"
    },
    {
      "commit": "19889a33f63427c4b22ab3b7fcb62b77dbe9ddec",
      "tree": "feef3bcef2e48e1a37544c4cb97eb502ec5036a9",
      "parents": [
        "d42610711d12a03b810bf0297d38a36029093304"
      ],
      "author": {
        "name": "Adrian Garcia Badaracco",
        "email": "1755071+adriangb@users.noreply.github.com",
        "time": "Wed Mar 18 02:58:01 2026 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 18 08:58:01 2026 +0100"
      },
      "message": "Use chunks_exact for has_true/has_false to enable compiler unrolling (#9570)\n\n## Summary\n- Replace `.chunks(64)` with `.chunks_exact(16)` in `has_true()` and\n`has_false()` as suggested in\nhttps://github.com/apache/arrow-rs/pull/9511#discussion_r2950942579\n- With `chunks_exact`, the compiler can fully unroll the inner fold\n(guaranteed size, no inner branch/loop), allowing a smaller block size\nfor more frequent short-circuit exits without regressing the full-scan\npath\n\n## Benchmark results (block size 16 vs baseline)\n- Full-scan worst case (65536): No regression (~49ns both)\n- Early-exit cases (65536): ~27% faster (6.0ns → 4.4ns)\n- Small arrays (64, 1024): Unchanged\n\n## Test plan\n- [x] All 13 existing `test_has` tests pass\n\nrun benchmarks boolean_array\n\n@DanDanDan Would appreciate your review!\n\n🤖 Generated with [Claude Code](https://claude.com/claude-code)\n\nCo-authored-by: Claude Opus 4.6 (1M context) \u003cnoreply@anthropic.com\u003e"
    },
    {
      "commit": "d42610711d12a03b810bf0297d38a36029093304",
      "tree": "e792906fad87687f39532fd2800458b728200a43",
      "parents": [
        "bedabc59eb80e24e222398d6c4e38a4f783bf999"
      ],
      "author": {
        "name": "Adrian Garcia Badaracco",
        "email": "1755071+adriangb@users.noreply.github.com",
        "time": "Wed Mar 18 00:31:25 2026 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 18 06:31:25 2026 +0100"
      },
      "message": "Add has_true() and has_false() to BooleanArray (#9511)\n\n## Motivation\n\nWhen working with `BooleanArray`, a common pattern is checking whether\n*any* true or false value exists — e.g.\n`arr.true_count() \u003e 0` or `arr.false_count() \u003d\u003d 0`. This currently\nrequires `true_count()` / `false_count()`, which scan the **entire**\nbitmap to count every set bit (via `popcount`), even though we only need\nto know if at least one exists.\n\nThis PR adds `has_true()` and `has_false()` methods that short-circuit\nas soon as they find a matching value, providing both:\n\n1. **Better performance** — faster on large arrays in the best case\n2. **More ergonomic API** — `arr.has_true()` expresses intent more\nclearly than `arr.true_count() \u003e 0`\n\n## Callsites in DataFusion\n\nThere are several places in DataFusion that would benefit from these\nmethods:\n\n- **`datafusion/functions-nested/src/array_has.rs`** —\n`eq_array.true_count() \u003e 0` → `eq_array.has_true()`\n- **`datafusion/physical-plan/src/topk/mod.rs`** — `filter.true_count()\n\u003d\u003d 0` check → `!filter.has_true()`\n- **`datafusion/datasource-parquet/src/metadata.rs`** —\n`exactness.true_count() \u003d\u003d 0` and `combined_mask.true_count() \u003e 0`\n- **`datafusion/physical-plan/src/joins/nested_loop_join.rs`** —\n`bitmap.true_count() \u003d\u003d 0` checks\n- **`datafusion/physical-expr-common/src/physical_expr.rs`** —\n`selection_count \u003d\u003d 0` from `selection.true_count()`\n- **`datafusion/physical-expr/src/expressions/binary.rs`** —\nshort-circuit checks for AND/OR\n\n## Benchmark Results\n\n```\nScenario                          true_count     has_true       has_false      Speedup (best)\n─────────────────────────────────────────────────────────────────────────────────────────────\nall_true, 64                      4.32 ns        4.08 ns        4.76 ns        ~1.1x\nall_false, 64                     4.30 ns        4.25 ns        4.52 ns        
~1.0x\nall_true, 1024                    5.15 ns        4.52 ns        4.99 ns        ~1.1x\nall_false, 1024                   5.17 ns        4.55 ns        5.00 ns        ~1.1x\nmixed_early, 1024                 5.22 ns        —              5.04 ns        ~1.0x\nnulls_all_true, 1024              12.84 ns       4.10 ns        12.92 ns       ~3.1x\nall_true, 65536                   100.06 ns      5.96 ns        49.70 ns       ~16.8x (has_true)\nall_false, 65536                  99.33 ns       49.30 ns       6.19 ns        ~16.0x (has_false)\nmixed_early, 65536                100.10 ns      —              6.20 ns        ~16.1x (has_false)\nnulls_all_true, 65536             522.94 ns      4.05 ns        521.82 ns      ~129x (has_true)\n```\n\nThe key wins are on larger arrays (65,536 elements), where\n`has_true`/`has_false` are **up to 16-129x faster** than\n`true_count()` in best-case scenarios (early short-circuit). Even in\nworst case (must scan entire array), performance is\ncomparable to `true_count`.\n\n## Implementation\n\nThe implementation processes bits in 64-bit chunks using\n`UnalignedBitChunk`, which handles arbitrary bit offsets and aligns\ndata for SIMD-friendly processing.\n\n- **`has_true` (no nulls):** OR-folds 64-bit chunks, short-circuits when\nany bit is set\n- **`has_false` (no nulls):** AND-folds 64-bit chunks, short-circuits\nwhen any bit is unset (with padding bits masked to 1)\n- **With nulls:** Iterates paired `(null, value)` chunks, checking `null\n\u0026 value !\u003d 0` (has_true) or `null \u0026 !value !\u003d 0`\n(has_false)\n\n### Alternatives considered\n\n1. **Fully vectorized (no early stopping):** Would process the entire\nbitmap like `true_count()` but with simpler bitwise ops\ninstead of popcount. Marginally faster than `true_count()` but misses\nthe main optimization opportunity.\n2. **Per-element iteration with early stopping:** `self.iter().any(|v| v\n\u003d\u003d Some(true))`. 
Simple but processes one bit at a\ntime, missing SIMD vectorization of the inner loop. Our approach\nprocesses 64 bits at a time while still supporting early\nexit.\n\nThe chosen approach balances SIMD-friendly bulk processing (64 bits per\niteration) with early termination, giving the best of\nboth worlds.\n\n## Test Plan\n\n- Unit tests covering: all-true, all-false, mixed, empty, nullable\n(all-valid-true, all-valid-false, all-null), non-aligned\nlengths (65 elements, 64+1 with trailing false)\n- Criterion benchmarks comparing `has_true`/`has_false` vs `true_count`\nacross sizes (64, 1024, 65536) and data distributions\n\n🤖 Generated with [Claude Code](https://claude.com/claude-code)\n\n---------\n\nCo-authored-by: Claude Opus 4.6 \u003cnoreply@anthropic.com\u003e"
    },
    {
      "commit": "bedabc59eb80e24e222398d6c4e38a4f783bf999",
      "tree": "bd1b1a27412044cf03a2a706ba0dfec83e07fbef",
      "parents": [
        "d1ec77065c6b606bce97b7acd51b2079182822ad"
      ],
      "author": {
        "name": "Mikhail Zabaluev",
        "email": "mikhail.zabaluev@gmail.com",
        "time": "Tue Mar 17 22:52:29 2026 +0200"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 17 16:52:29 2026 -0400"
      },
      "message": "feat(arrow-avro): Configurable Arrow timezone ID for Avro timestamps (#9280)\n\n# Which issue does this PR close?\n\n- Closes #9279.\n\n# Rationale for this change\n\nEnable an alternative representation of UTC timestamp data types with\nthe \"UTC\" timezone ID, which is useful for interoperability with\napplications preferring that form.\n\n# What changes are included in this PR?\n\nIn the `ReaderBuilder` API, add a new method `with_tz` that allows users\nto specify the timezone ID for Avro logical types that represent UTC\ntimestamps. The choices are between \"+00:00\" and \"UTC\" and can be\nselected by the new `Tz` enumeration.\n\n# Are these changes tested?\n\nAdded unit tests to verify the representation with different `Tz`\nparameter values.\n\n# Are there any user-facing changes?\n\nA new `with_tz` method is added to `arrow_avro::reader::ReaderBuilder`.\n\n---------\n\nCo-authored-by: Connor Sanders \u003c170039284+jecsand838@users.noreply.github.com\u003e\nCo-authored-by: Andrew Lamb \u003candrew@nerdnetworks.org\u003e"
    },
    {
      "commit": "d1ec77065c6b606bce97b7acd51b2079182822ad",
      "tree": "d596f2918b4ad2a620a5e6f1dac6ac6867ccea82",
      "parents": [
        "e7b4842f7b4a8a1766baef3ddd35d5d305e63b5f"
      ],
      "author": {
        "name": "Val Lorentz",
        "email": "vlorentz@softwareheritage.org",
        "time": "Tue Mar 17 20:00:09 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 17 15:00:09 2026 -0400"
      },
      "message": "DeltaBitPackEncoderConversion: Fix panic message on invalid type (#9552)\n\n# Which issue does this PR close?\n\n- Closes #9551.\n\n# Rationale for this change\n\nDeltaBitPackDecoder supports Int32Type, UInt32Type, Int64Type, and\nUInt64Type; but the error message claimed it supported only Int32Type\nand Int64Type\n\n# What changes are included in this PR?\n\n* changed the error message\n* deduplicated the string\n* extended `ensure_phys_ty!()` to allow anything `panic!()` does\n\n# Are these changes tested?\n\nno\n\n# Are there any user-facing changes?\n\nonly the panic message\n\n---------\n\nCo-authored-by: Daniël Heres \u003cdanielheres@gmail.com\u003e\nCo-authored-by: Andrew Lamb \u003candrew@nerdnetworks.org\u003e"
    },
    {
      "commit": "e7b4842f7b4a8a1766baef3ddd35d5d305e63b5f",
      "tree": "55333dc11b98bfff7ad328bacc74ebfaee940e47",
      "parents": [
        "68b607631dc930d7220b82356be30cc0e5b9cac2"
      ],
      "author": {
        "name": "Raz Luvaton",
        "email": "16746759+rluvaton@users.noreply.github.com",
        "time": "Tue Mar 17 20:25:36 2026 +0200"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 17 14:25:36 2026 -0400"
      },
      "message": "feat: add `RunArray::new_unchecked` and `RunArray::into_parts` (#9376)\n\n# Which issue does this PR close?\n\nN/A\n\n# Rationale for this change\n\nAllow making changes without validation (for example, replacing the\nvalues)\n\n# What changes are included in this PR?\n\nAdded two functions\n\n# Are these changes tested?\n\nYes\n\n# Are there any user-facing changes?\n\nYes, two new functions"
    },
    {
      "commit": "68b607631dc930d7220b82356be30cc0e5b9cac2",
      "tree": "34695a356796a7e65608da23352031f010df87f2",
      "parents": [
        "a8fe8b32045f32bc59794b9ad919ba08d22ef514"
      ],
      "author": {
        "name": "Daniël Heres",
        "email": "danielheres@gmail.com",
        "time": "Tue Mar 17 19:01:00 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 17 14:01:00 2026 -0400"
      },
      "message": "[minor] Download clickbench file when missing (#9553)\n\n# Which issue does this PR close?\n\n- Closes #NNN.\n\n# Rationale for this change\n\nDownload the clickbench benchmark file automatically when it is not\npresent locally.\n\n# What changes are included in this PR?\n\n# Are these changes tested?\n\n# Are there any user-facing changes?"
    },
    {
      "commit": "a8fe8b32045f32bc59794b9ad919ba08d22ef514",
      "tree": "2823fc071524e6b66e47f1f59179255d443da30f",
      "parents": [
        "55ff6eb7885f757f2d8637400f223eb84bb6a500"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Tue Mar 17 07:26:00 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 17 07:26:00 2026 +0100"
      },
      "message": "chore(deps): update lz4_flex requirement from 0.12 to 0.13 (#9565)\n\nUpdates the requirements on\n[lz4_flex](https://github.com/pseitz/lz4_flex) to permit the latest\nversion.\n\u003cdetails\u003e\n\u003csummary\u003eChangelog\u003c/summary\u003e\n\u003cp\u003e\u003cem\u003eSourced from \u003ca\nhref\u003d\"https://github.com/PSeitz/lz4_flex/blob/main/CHANGELOG.md\"\u003elz4_flex\u0027s\nchangelog\u003c/a\u003e.\u003c/em\u003e\u003c/p\u003e\n\u003cblockquote\u003e\n\u003ch1\u003e0.13.0 (2026-03-15)\u003c/h1\u003e\n\u003ch3\u003eFeatures\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eAdd option to reuse compression dict \u003ca\nhref\u003d\"https://redirect.github.com/PSeitz/lz4_flex/pull/207\"\u003e#207\u003c/a\u003e\n(thanks \u003ca\nhref\u003d\"https://github.com/matthewfollegot\"\u003e\u003ccode\u003e@​matthewfollegot\u003c/code\u003e\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3\u003eFixes\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eFix handling of invalid match offsets during decompression \u003ca\nhref\u003d\"https://github.com/PSeitz/lz4_flex/commit/055502e\"\u003e#055502e\u003c/a\u003e\n(thanks \u003ca\nhref\u003d\"https://github.com/Marcono1234\"\u003e\u003ccode\u003e@​Marcono1234\u003c/code\u003e\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003cpre\u003e\u003ccode\u003eInvalid match offsets (offset \u003d\u003d 0) during decompression were\nnot properly\nhandled, which could lead to invalid memory reads. 
This is a security\nfix\nthat was also backported to 0.12.1 and 0.11.6.\n\u003c/code\u003e\u003c/pre\u003e\n\u003cul\u003e\n\u003cli\u003eFix \u003ccode\u003eget_maximum_output_size\u003c/code\u003e overflow on 32-bit targets\n\u003ca href\u003d\"https://redirect.github.com/PSeitz/lz4_flex/pull/205\"\u003e#205\u003c/a\u003e\n(thanks \u003ca\nhref\u003d\"https://github.com/dglittle\"\u003e\u003ccode\u003e@​dglittle\u003c/code\u003e\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003cpre\u003e\u003ccode\u003eCast input_len to u64 before multiplying by 110, avoiding\noverflow on\n32-bit targets (e.g. wasm32) where input_len * 110 overflows usize\nwhen input_len \u0026gt; ~39MB.\n\u003c/code\u003e\u003c/pre\u003e\n\u003ch1\u003e0.12.1 (2026-03-14)\u003c/h1\u003e\n\u003ch3\u003eSecurity Fix\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eFix handling of invalid match offsets during decompression \u003ca\nhref\u003d\"https://github.com/PSeitz/lz4_flex/commit/a0b9154\"\u003e#a0b9154\u003c/a\u003e\n(thanks \u003ca\nhref\u003d\"https://github.com/Marcono1234\"\u003e\u003ccode\u003e@​Marcono1234\u003c/code\u003e\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003cpre\u003e\u003ccode\u003eInvalid match offsets (offset \u003d\u003d 0) during decompression were\nnot properly\nhandled, which could lead to invalid memory reads on untrusted input.\nUsers on 0.12.x should upgrade to 0.12.1.\n\u003c/code\u003e\u003c/pre\u003e\n\u003ch1\u003e0.12.0 (2025-11-11)\u003c/h1\u003e\n\u003cul\u003e\n\u003cli\u003eFix integer overflows when decoding large payloads \u003ca\nhref\u003d\"https://redirect.github.com/PSeitz/lz4_flex/pull/192\"\u003e#192\u003c/a\u003e\n(thanks \u003ca\nhref\u003d\"https://github.com/teh-cmc\"\u003e\u003ccode\u003e@​teh-cmc\u003c/code\u003e\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003cpre\u003e\u003ccode\u003eThis fixes an u32 integer overflow when decoding large\npayloads in the block format.\nNote: The block format is not suitable for such large 
payloads, since it\nkeeps everything in memory. Consider using the frame format for large\ndata.\n\u003cp\u003eThis change also removes a unsafe fast-path for write_integer to\nsimplify the code.\u003cbr /\u003e\nThe performance impact is on incompressible data, which is already fast\nenough.\u003cbr /\u003e\n\u003c/code\u003e\u003c/pre\u003e\u003c/p\u003e\n\u003ch1\u003e0.11.6 (2026-03-14)\u003c/h1\u003e\n\u003ch3\u003eSecurity Fix\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eFix handling of invalid match offsets during decompression \u003ca\nhref\u003d\"https://github.com/PSeitz/lz4_flex/commit/84cdafb\"\u003e#84cdafb\u003c/a\u003e\n(thanks \u003ca\nhref\u003d\"https://github.com/Marcono1234\"\u003e\u003ccode\u003e@​Marcono1234\u003c/code\u003e\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003cpre\u003e\u003ccode\u003eInvalid match offsets (offset \u003d\u003d 0) during decompression were\nnot properly\nhandled, which could lead to invalid memory reads on untrusted input.\nUsers on 0.11.x should upgrade to 0.11.6.\n\u003c/code\u003e\u003c/pre\u003e\n\u003c!-- raw HTML omitted --\u003e\n\u003c/blockquote\u003e\n\u003cp\u003e... 
(truncated)\u003c/p\u003e\n\u003c/details\u003e\n\u003cdetails\u003e\n\u003csummary\u003eCommits\u003c/summary\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/PSeitz/lz4_flex/commit/bfaae84cd4131e432577f04a0476c661e67cbdb0\"\u003e\u003ccode\u003ebfaae84\u003c/code\u003e\u003c/a\u003e\nrelease 0.13.0\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/PSeitz/lz4_flex/commit/055502ee5d297ecd6bf448ac91c055c7f6df9b6d\"\u003e\u003ccode\u003e055502e\u003c/code\u003e\u003c/a\u003e\nfix handling of invalid match offsets during decompression\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/PSeitz/lz4_flex/commit/7191df8231f2be4daa70c9171ed1c1521123efe5\"\u003e\u003ccode\u003e7191df8\u003c/code\u003e\u003c/a\u003e\nmake hashtable visibility crate public\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/PSeitz/lz4_flex/commit/1bdafca3edf87b60fb5e045af9c37702e5c83ca5\"\u003e\u003ccode\u003e1bdafca\u003c/code\u003e\u003c/a\u003e\nadd doc comments\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/PSeitz/lz4_flex/commit/c90fc91feebc8d583e33ead82030882526d0fc86\"\u003e\u003ccode\u003ec90fc91\u003c/code\u003e\u003c/a\u003e\nlz4_block exposes option to reuse compression dict\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/PSeitz/lz4_flex/commit/22e77f9bd191f31f958f75fd48891ee3d70a70d5\"\u003e\u003ccode\u003e22e77f9\u003c/code\u003e\u003c/a\u003e\nDelete .github/workflows/typos.yml\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/PSeitz/lz4_flex/commit/2991a09be12bad4574205daa3b2b09b2fc27f17f\"\u003e\u003ccode\u003e2991a09\u003c/code\u003e\u003c/a\u003e\nfix get_maximum_output_size overflow on 32-bit targets\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/PSeitz/lz4_flex/commit/7b5fb80e759e29c85aab6545bc143c4d4a217103\"\u003e\u003ccode\u003e7b5fb80\u003c/code\u003e\u003c/a\u003e\nadd minimal security 
policy\u003c/li\u003e\n\u003cli\u003eSee full diff in \u003ca\nhref\u003d\"https://github.com/pseitz/lz4_flex/compare/0.12.0...0.13.0\"\u003ecompare\nview\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/details\u003e\n\u003cbr /\u003e\n\n\nDependabot will resolve any conflicts with this PR as long as you don\u0027t\nalter it yourself. You can also trigger a rebase manually by commenting\n`@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n\u003cdetails\u003e\n\u003csummary\u003eDependabot commands and options\u003c/summary\u003e\n\u003cbr /\u003e\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits\nthat have been made to it\n- `@dependabot show \u003cdependency name\u003e ignore conditions` will show all\nof the ignore conditions of the specified dependency\n- `@dependabot ignore this major version` will close this PR and stop\nDependabot creating any more for this major version (unless you reopen\nthe PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop\nDependabot creating any more for this minor version (unless you reopen\nthe PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop\nDependabot creating any more for this dependency (unless you reopen the\nPR or upgrade to it yourself)\n\n\n\u003c/details\u003e\n\nSigned-off-by: dependabot[bot] \u003csupport@github.com\u003e\nCo-authored-by: dependabot[bot] \u003c49699333+dependabot[bot]@users.noreply.github.com\u003e"
    },
    {
      "commit": "55ff6eb7885f757f2d8637400f223eb84bb6a500",
      "tree": "736a3eeb7e56a7f7f07245669f1516db8f9adc01",
      "parents": [
        "fcab5d234458de9dd6a9222f6336d51c18ae141d"
      ],
      "author": {
        "name": "Konstantin Tarasov",
        "email": "33369833+sdf-jkl@users.noreply.github.com",
        "time": "Mon Mar 16 10:23:01 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 16 07:23:01 2026 -0700"
      },
      "message": "add `shred_variant` support for `LargeUtf8` and `LargeBinary` (#9554)\n\n# Which issue does this PR close?\n\n- Closes #9525 .\n\n# Rationale for this change\n\ncheck issue.\n\n# What changes are included in this PR?\n\nAdd `shred_variant` support for `LargeUtf8` and `LargeBinary`\n\n# Are these changes tested?\n\nYes, unit tests.\n\n# Are there any user-facing changes?\n\nNo"
    },
    {
      "commit": "fcab5d234458de9dd6a9222f6336d51c18ae141d",
      "tree": "edd44922dd0a4fbdefdf9d92ab178b23b6cdbb52",
      "parents": [
        "83b6908f92de32c6695d95d7dc2b0a0116aa3185"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Sat Mar 14 19:28:21 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Mar 14 19:28:21 2026 +0100"
      },
      "message": "chore(deps): bump black from 24.3.0 to 26.3.1 in /parquet/pytest (#9545)\n\nBumps [black](https://github.com/psf/black) from 24.3.0 to 26.3.1.\n\u003cdetails\u003e\n\u003csummary\u003eRelease notes\u003c/summary\u003e\n\u003cp\u003e\u003cem\u003eSourced from \u003ca\nhref\u003d\"https://github.com/psf/black/releases\"\u003eblack\u0027s\nreleases\u003c/a\u003e.\u003c/em\u003e\u003c/p\u003e\n\u003cblockquote\u003e\n\u003ch2\u003e26.3.1\u003c/h2\u003e\n\u003ch3\u003eStable style\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003ePrevent Jupyter notebook magic masking collisions from corrupting\ncells by using\nexact-length placeholders for short magics and aborting if a placeholder\ncan no longer\nbe unmasked safely (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/5038\"\u003e#5038\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3\u003eConfiguration\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eAlways hash cache filename components derived from\n\u003ccode\u003e--python-cell-magics\u003c/code\u003e so custom\nmagic names cannot affect cache paths (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/5038\"\u003e#5038\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3\u003e\u003cem\u003eBlackd\u003c/em\u003e\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eDisable browser-originated requests by default, add configurable\norigin allowlisting\nand request body limits, and bound executor submissions to improve\nbackpressure\n(\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/5039\"\u003e#5039\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2\u003e26.3.0\u003c/h2\u003e\n\u003ch3\u003eStable style\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eDon\u0027t double-decode input, causing non-UTF-8 files to be corrupted\n(\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4964\"\u003e#4964\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eFix crash on standalone comment in lambda default 
arguments (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4993\"\u003e#4993\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003ePreserve parentheses when \u003ccode\u003e# type: ignore\u003c/code\u003e comments would\nbe merged with other\ncomments on the same line, preventing AST equivalence failures (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4888\"\u003e#4888\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3\u003ePreview style\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eFix bug where \u003ccode\u003eif\u003c/code\u003e guards in \u003ccode\u003ecase\u003c/code\u003e blocks\nwere incorrectly split when the pattern had\na trailing comma (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4884\"\u003e#4884\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eFix \u003ccode\u003estring_processing\u003c/code\u003e crashing on unassigned long\nstring literals with trailing\ncommas (one-item tuples) (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4929\"\u003e#4929\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eSimplify implementation of the power operator \u0026quot;hugging\u0026quot;\nlogic (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4918\"\u003e#4918\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3\u003ePackaging\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eFix shutdown errors in PyInstaller builds on macOS by disabling\nmultiprocessing in\nfrozen environments (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4930\"\u003e#4930\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3\u003ePerformance\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eIntroduce winloop for windows as an alternative to uvloop (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4996\"\u003e#4996\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eRemove deprecated function \u003ccode\u003euvloop.install()\u003c/code\u003e in favor 
of\n\u003ccode\u003euvloop.new_event_loop()\u003c/code\u003e\n(\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4996\"\u003e#4996\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eRename \u003ccode\u003emaybe_install_uvloop\u003c/code\u003e function to\n\u003ccode\u003emaybe_use_uvloop\u003c/code\u003e to simplify loop\ninstallation and creation of either a uvloop/winloop evenloop or default\neventloop\n(\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4996\"\u003e#4996\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3\u003eOutput\u003c/h3\u003e\n\u003c!-- raw HTML omitted --\u003e\n\u003c/blockquote\u003e\n\u003cp\u003e... (truncated)\u003c/p\u003e\n\u003c/details\u003e\n\u003cdetails\u003e\n\u003csummary\u003eChangelog\u003c/summary\u003e\n\u003cp\u003e\u003cem\u003eSourced from \u003ca\nhref\u003d\"https://github.com/psf/black/blob/main/CHANGES.md\"\u003eblack\u0027s\nchangelog\u003c/a\u003e.\u003c/em\u003e\u003c/p\u003e\n\u003cblockquote\u003e\n\u003ch2\u003e26.3.1\u003c/h2\u003e\n\u003ch3\u003eStable style\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003ePrevent Jupyter notebook magic masking collisions from corrupting\ncells by using\nexact-length placeholders for short magics and aborting if a placeholder\ncan no longer\nbe unmasked safely (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/5038\"\u003e#5038\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3\u003eConfiguration\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eAlways hash cache filename components derived from\n\u003ccode\u003e--python-cell-magics\u003c/code\u003e so custom\nmagic names cannot affect cache paths (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/5038\"\u003e#5038\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3\u003e\u003cem\u003eBlackd\u003c/em\u003e\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eDisable browser-originated requests by default, add configurable\norigin allowlisting\nand request body 
limits, and bound executor submissions to improve\nbackpressure\n(\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/5039\"\u003e#5039\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2\u003e26.3.0\u003c/h2\u003e\n\u003ch3\u003eStable style\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eDon\u0027t double-decode input, causing non-UTF-8 files to be corrupted\n(\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4964\"\u003e#4964\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eFix crash on standalone comment in lambda default arguments (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4993\"\u003e#4993\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003ePreserve parentheses when \u003ccode\u003e# type: ignore\u003c/code\u003e comments would\nbe merged with other\ncomments on the same line, preventing AST equivalence failures (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4888\"\u003e#4888\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3\u003ePreview style\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eFix bug where \u003ccode\u003eif\u003c/code\u003e guards in \u003ccode\u003ecase\u003c/code\u003e blocks\nwere incorrectly split when the pattern had\na trailing comma (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4884\"\u003e#4884\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eFix \u003ccode\u003estring_processing\u003c/code\u003e crashing on unassigned long\nstring literals with trailing\ncommas (one-item tuples) (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4929\"\u003e#4929\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eSimplify implementation of the power operator \u0026quot;hugging\u0026quot;\nlogic (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4918\"\u003e#4918\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3\u003ePackaging\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eFix shutdown errors in PyInstaller builds on macOS by 
disabling\nmultiprocessing in\nfrozen environments (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4930\"\u003e#4930\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3\u003ePerformance\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eIntroduce winloop for windows as an alternative to uvloop (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4996\"\u003e#4996\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eRemove deprecated function \u003ccode\u003euvloop.install()\u003c/code\u003e in favor of\n\u003ccode\u003euvloop.new_event_loop()\u003c/code\u003e\n(\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4996\"\u003e#4996\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eRename \u003ccode\u003emaybe_install_uvloop\u003c/code\u003e function to\n\u003ccode\u003emaybe_use_uvloop\u003c/code\u003e to simplify loop\ninstallation and creation of either a uvloop/winloop evenloop or default\neventloop\n(\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/4996\"\u003e#4996\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003c!-- raw HTML omitted --\u003e\n\u003c/blockquote\u003e\n\u003cp\u003e... 
(truncated)\u003c/p\u003e\n\u003c/details\u003e\n\u003cdetails\u003e\n\u003csummary\u003eCommits\u003c/summary\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/psf/black/commit/c6755bb741b6481d6b3d3bb563c83fa060db96c9\"\u003e\u003ccode\u003ec6755bb\u003c/code\u003e\u003c/a\u003e\nPrepare release 26.3.1 (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/5046\"\u003e#5046\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/psf/black/commit/69973fd6950985fbeb1090d96da717dc4d8380b0\"\u003e\u003ccode\u003e69973fd\u003c/code\u003e\u003c/a\u003e\nHarden blackd browser-facing request handling (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/5039\"\u003e#5039\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/psf/black/commit/4937fe6cf241139ddbfc16b0bdbb5b422798909d\"\u003e\u003ccode\u003e4937fe6\u003c/code\u003e\u003c/a\u003e\nFix some shenanigans with the cache file and IPython (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/5038\"\u003e#5038\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/psf/black/commit/2e641d174469c505d5ae905e75d4c769597e681f\"\u003e\u003ccode\u003e2e641d1\u003c/code\u003e\u003c/a\u003e\ndocs: remove outdated Black Playground references (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/5044\"\u003e#5044\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/psf/black/commit/c014b22a2d5e0632587b47b81151658bddfa0b88\"\u003e\u003ccode\u003ec014b22\u003c/code\u003e\u003c/a\u003e\nRemove unused internal code (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/5041\"\u003e#5041\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/psf/black/commit/0dae20b2d009f2f03de8696d06b0c947d3abafc9\"\u003e\u003ccode\u003e0dae20b\u003c/code\u003e\u003c/a\u003e\nAdd new changelog 
(\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/5036\"\u003e#5036\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/psf/black/commit/c5c1cbddd92cecb554ac2a77a24139dd76831030\"\u003e\u003ccode\u003ec5c1cbd\u003c/code\u003e\u003c/a\u003e\nMinor release patches (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/5035\"\u003e#5035\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/psf/black/commit/7e5a828c37d71b6a6666e28eed444816def6a8f4\"\u003e\u003ccode\u003e7e5a828\u003c/code\u003e\u003c/a\u003e\ndocs: clarify relationship between Black style and PEP 8 (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/5025\"\u003e#5025\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/psf/black/commit/69705deb8776e7c5e585668da106d1abe2cb8d77\"\u003e\u003ccode\u003e69705de\u003c/code\u003e\u003c/a\u003e\ndocs: add clearer pyproject configuration guidance (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/5026\"\u003e#5026\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/psf/black/commit/35ea67920b7f6ac8e09be1c47278752b1e827f76\"\u003e\u003ccode\u003e35ea679\u003c/code\u003e\u003c/a\u003e\nPrepare release 26.3.0 (\u003ca\nhref\u003d\"https://redirect.github.com/psf/black/issues/5032\"\u003e#5032\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eAdditional commits viewable in \u003ca\nhref\u003d\"https://github.com/psf/black/compare/24.3.0...26.3.1\"\u003ecompare\nview\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/details\u003e\n\u003cbr /\u003e\n\n\n[![Dependabot 
compatibility\nscore](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name\u003dblack\u0026package-manager\u003dpip\u0026previous-version\u003d24.3.0\u0026new-version\u003d26.3.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)\n\nDependabot will resolve any conflicts with this PR as long as you don\u0027t\nalter it yourself. You can also trigger a rebase manually by commenting\n`@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n\u003cdetails\u003e\n\u003csummary\u003eDependabot commands and options\u003c/summary\u003e\n\u003cbr /\u003e\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits\nthat have been made to it\n- `@dependabot show \u003cdependency name\u003e ignore conditions` will show all\nof the ignore conditions of the specified dependency\n- `@dependabot ignore this major version` will close this PR and stop\nDependabot creating any more for this major version (unless you reopen\nthe PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop\nDependabot creating any more for this minor version (unless you reopen\nthe PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop\nDependabot creating any more for this dependency (unless you reopen the\nPR or upgrade to it yourself)\nYou can disable automated security fix PRs for this repo from the\n[Security Alerts\npage](https://github.com/apache/arrow-rs/network/alerts).\n\n\u003c/details\u003e\n\nSigned-off-by: dependabot[bot] \u003csupport@github.com\u003e\nCo-authored-by: dependabot[bot] \u003c49699333+dependabot[bot]@users.noreply.github.com\u003e"
    },
    {
      "commit": "83b6908f92de32c6695d95d7dc2b0a0116aa3185",
      "tree": "5d13eecd034ca342d99d000f66ec425644e61e19",
      "parents": [
        "002426087ea9106b616194a5d0942aedba2bc884"
      ],
      "author": {
        "name": "Daniël Heres",
        "email": "danielheres@gmail.com",
        "time": "Sat Mar 14 18:50:38 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Mar 14 18:50:38 2026 +0100"
      },
      "message": "Unroll interleave -25-30% (#9542)\n\n# Which issue does this PR close?\n\n\u003c!--\nWe generally require a GitHub issue to be filed for all bug fixes and\nenhancements and this helps us generate change logs for our releases.\nYou can link an issue to this PR using the GitHub syntax.\n--\u003e\n\n- Closes #NNN.\n\n# Rationale for this change\n\n```\n\n🤖: Benchmark completed\n\nDetails\n\ngroup                                                                                        main                                   interleave\n-----                                                                                        ----                                   -----------\ninterleave dict(20, 0.0) 100 [0..100, 100..230, 450..1000]                                   1.08    805.6±8.28ns        ? ?/sec    1.00   748.5±14.05ns        ? ?/sec\ninterleave dict(20, 0.0) 1024 [0..100, 100..230, 450..1000, 0..1000]                         1.18      2.6±0.00µs        ? ?/sec    1.00      2.2±0.01µs        ? ?/sec\ninterleave dict(20, 0.0) 1024 [0..100, 100..230, 450..1000]                                  1.21      2.6±0.01µs        ? ?/sec    1.00      2.2±0.02µs        ? ?/sec\ninterleave dict(20, 0.0) 400 [0..100, 100..230, 450..1000]                                   1.16   1431.6±3.11ns        ? ?/sec    1.00  1232.9±14.26ns        ? ?/sec\ninterleave dict_distinct 100                                                                 1.03      2.9±0.12µs        ? ?/sec    1.00      2.9±0.07µs        ? ?/sec\ninterleave dict_distinct 1024                                                                1.02      2.9±0.06µs        ? ?/sec    1.00      2.8±0.03µs        ? ?/sec\ninterleave dict_distinct 2048                                                                1.03      2.9±0.02µs        ? ?/sec    1.00      2.8±0.08µs        ? 
?/sec\ninterleave dict_sparse(20, 0.0) 100 [0..100, 100..230, 450..1000]                            1.00      2.7±0.26µs        ? ?/sec    1.02      2.8±0.21µs        ? ?/sec\ninterleave dict_sparse(20, 0.0) 1024 [0..100, 100..230, 450..1000, 0..1000]                  1.11      5.3±0.31µs        ? ?/sec    1.00      4.8±0.40µs        ? ?/sec\ninterleave dict_sparse(20, 0.0) 1024 [0..100, 100..230, 450..1000]                           1.16      4.8±0.25µs        ? ?/sec    1.00      4.1±0.23µs        ? ?/sec\ninterleave dict_sparse(20, 0.0) 400 [0..100, 100..230, 450..1000]                            1.05      3.5±0.31µs        ? ?/sec    1.00      3.3±0.29µs        ? ?/sec\ninterleave i32(0.0) 100 [0..100, 100..230, 450..1000]                                        1.21    313.8±1.03ns        ? ?/sec    1.00    258.9±4.98ns        ? ?/sec\ninterleave i32(0.0) 1024 [0..100, 100..230, 450..1000, 0..1000]                              1.34  1856.5±17.40ns        ? ?/sec    1.00  1385.9±32.73ns        ? ?/sec\ninterleave i32(0.0) 1024 [0..100, 100..230, 450..1000]                                       1.34   1848.6±8.80ns        ? ?/sec    1.00  1382.4±48.64ns        ? ?/sec\ninterleave i32(0.0) 400 [0..100, 100..230, 450..1000]                                        1.37    843.3±7.37ns        ? ?/sec    1.00   615.5±22.71ns        ? ?/sec\ninterleave i32(0.5) 100 [0..100, 100..230, 450..1000]                                        1.09    604.2±5.60ns        ? ?/sec    1.00    555.1±4.48ns        ? ?/sec\ninterleave i32(0.5) 1024 [0..100, 100..230, 450..1000, 0..1000]                              1.12      4.3±0.01µs        ? ?/sec    1.00      3.8±0.04µs        ? ?/sec\ninterleave i32(0.5) 1024 [0..100, 100..230, 450..1000]                                       1.13      4.4±0.06µs        ? ?/sec    1.00      3.9±0.17µs        ? ?/sec\ninterleave i32(0.5) 400 [0..100, 100..230, 450..1000]                                        1.12  1889.4±19.68ns        ? 
?/sec    1.00  1691.5±17.15ns        ? ?/sec\ninterleave list\u003ci64\u003e(0.0,0.0,20) 100 [0..100, 100..230, 450..1000]                           1.07      2.7±0.03µs        ? ?/sec    1.00      2.5±0.03µs        ? ?/sec\ninterleave list\u003ci64\u003e(0.0,0.0,20) 1024 [0..100, 100..230, 450..1000, 0..1000]                 1.06     26.2±0.11µs        ? ?/sec    1.00     24.6±0.31µs        ? ?/sec\ninterleave list\u003ci64\u003e(0.0,0.0,20) 1024 [0..100, 100..230, 450..1000]                          1.06     25.9±0.14µs        ? ?/sec    1.00     24.5±0.29µs        ? ?/sec\ninterleave list\u003ci64\u003e(0.0,0.0,20) 400 [0..100, 100..230, 450..1000]                           1.07     10.5±0.21µs        ? ?/sec    1.00      9.9±0.06µs        ? ?/sec\ninterleave list\u003ci64\u003e(0.1,0.1,20) 100 [0..100, 100..230, 450..1000]                           1.05      5.8±0.25µs        ? ?/sec    1.00      5.5±0.06µs        ? ?/sec\ninterleave list\u003ci64\u003e(0.1,0.1,20) 1024 [0..100, 100..230, 450..1000, 0..1000]                 1.05     47.4±2.23µs        ? ?/sec    1.00     45.2±0.14µs        ? ?/sec\ninterleave list\u003ci64\u003e(0.1,0.1,20) 1024 [0..100, 100..230, 450..1000]                          1.06     48.0±2.35µs        ? ?/sec    1.00     45.5±0.64µs        ? ?/sec\ninterleave list\u003ci64\u003e(0.1,0.1,20) 400 [0..100, 100..230, 450..1000]                           1.05     19.2±0.90µs        ? ?/sec    1.00     18.2±0.03µs        ? ?/sec\ninterleave str(20, 0.0) 100 [0..100, 100..230, 450..1000]                                    1.01    786.8±1.50ns        ? ?/sec    1.00    779.4±4.35ns        ? ?/sec\ninterleave str(20, 0.0) 1024 [0..100, 100..230, 450..1000, 0..1000]                          1.04      6.3±0.12µs        ? ?/sec    1.00      6.0±0.02µs        ? ?/sec\ninterleave str(20, 0.0) 1024 [0..100, 100..230, 450..1000]                                   1.04      6.2±0.08µs        ? ?/sec    1.00      6.0±0.01µs        ? 
?/sec\ninterleave str(20, 0.0) 400 [0..100, 100..230, 450..1000]                                    1.09      2.7±0.01µs        ? ?/sec    1.00      2.4±0.01µs        ? ?/sec\ninterleave str(20, 0.5) 100 [0..100, 100..230, 450..1000]                                    1.04  1064.4±19.37ns        ? ?/sec    1.00   1023.8±3.56ns        ? ?/sec\ninterleave str(20, 0.5) 1024 [0..100, 100..230, 450..1000, 0..1000]                          1.03     10.3±0.06µs        ? ?/sec    1.00     10.1±0.13µs        ? ?/sec\ninterleave str(20, 0.5) 1024 [0..100, 100..230, 450..1000]                                   1.02     10.3±0.05µs        ? ?/sec    1.00     10.1±0.54µs        ? ?/sec\ninterleave str(20, 0.5) 400 [0..100, 100..230, 450..1000]                                    1.04      3.7±0.03µs        ? ?/sec    1.00      3.6±0.17µs        ? ?/sec\ninterleave str_view(0.0) 100 [0..100, 100..230, 450..1000]                                   1.01    856.9±2.90ns        ? ?/sec    1.00    849.1±7.00ns        ? ?/sec\ninterleave str_view(0.0) 1024 [0..100, 100..230, 450..1000, 0..1000]                         1.00      5.0±0.15µs        ? ?/sec    1.02      5.1±0.02µs        ? ?/sec\ninterleave str_view(0.0) 1024 [0..100, 100..230, 450..1000]                                  1.00      4.9±0.05µs        ? ?/sec    1.04      5.1±0.02µs        ? ?/sec\ninterleave str_view(0.0) 400 [0..100, 100..230, 450..1000]                                   1.00      2.2±0.05µs        ? ?/sec    1.03      2.2±0.01µs        ? ?/sec\ninterleave struct(i32(0.0), i32(0.0) 100 [0..100, 100..230, 450..1000]                       1.20    874.3±4.12ns        ? ?/sec    1.00   729.1±12.04ns        ? ?/sec\ninterleave struct(i32(0.0), i32(0.0) 1024 [0..100, 100..230, 450..1000, 0..1000]             1.34      4.0±0.01µs        ? ?/sec    1.00      3.0±0.02µs        ? ?/sec\ninterleave struct(i32(0.0), i32(0.0) 1024 [0..100, 100..230, 450..1000]                      1.31      4.0±0.04µs        ? 
?/sec    1.00      3.0±0.01µs        ? ?/sec\ninterleave struct(i32(0.0), i32(0.0) 400 [0..100, 100..230, 450..1000]                       1.24  1905.1±19.48ns        ? ?/sec    1.00  1532.8±33.13ns        ? ?/sec\ninterleave struct(i32(0.0), str(20, 0.0) 100 [0..100, 100..230, 450..1000]                   1.00   1340.9±6.76ns        ? ?/sec    1.01  1347.8±12.50ns        ? ?/sec\ninterleave struct(i32(0.0), str(20, 0.0) 1024 [0..100, 100..230, 450..1000, 0..1000]         1.08      8.3±0.16µs        ? ?/sec    1.00      7.7±0.02µs        ? ?/sec\ninterleave struct(i32(0.0), str(20, 0.0) 1024 [0..100, 100..230, 450..1000]                  1.08      8.3±0.06µs        ? ?/sec    1.00      7.7±0.06µs        ? ?/sec\ninterleave struct(i32(0.0), str(20, 0.0) 400 [0..100, 100..230, 450..1000]                   1.09      3.7±0.13µs        ? ?/sec    1.00      3.4±0.02µs        ? ?/sec\ninterleave struct(str(20, 0.0), str(20, 0.0)) 100 [0..100, 100..230, 450..1000]              1.05   1927.3±9.31ns        ? ?/sec    1.00  1842.2±18.19ns        ? ?/sec\ninterleave struct(str(20, 0.0), str(20, 0.0)) 1024 [0..100, 100..230, 450..1000, 0..1000]    1.04     12.6±0.06µs        ? ?/sec    1.00     12.1±0.08µs        ? ?/sec\ninterleave struct(str(20, 0.0), str(20, 0.0)) 1024 [0..100, 100..230, 450..1000]             1.04     12.6±0.03µs        ? ?/sec    1.00     12.1±0.14µs        ? ?/sec\ninterleave struct(str(20, 0.0), str(20, 0.0)) 400 [0..100, 100..230, 450..1000]              1.04      5.4±0.07µs        ? ?/sec    1.00      5.2±0.04µs        ? ?/sec\n```\n\n# What changes are included in this PR?\n\n\u003c!--\nThere is no need to duplicate the description in the issue here but it\nis sometimes worth providing a summary of the individual changes in this\nPR.\n--\u003e\n\n# Are these changes tested?\n\n\u003c!--\nWe typically require tests for all PRs in order to:\n1. Prevent the code from being accidentally broken by subsequent changes\n2. 
Serve as another way to document the expected behavior of the code\n\nIf tests are not included in your PR, please explain why (for example,\nare they covered by existing tests)?\n--\u003e\n\n# Are there any user-facing changes?\n\n\u003c!--\nIf there are user-facing changes then we may require documentation to be\nupdated before approving the PR.\n\nIf there are any breaking changes to public APIs, please call them out.\n--\u003e\n\n---------\n\nCo-authored-by: Claude Opus 4.6 (1M context) \u003cnoreply@anthropic.com\u003e"
    },
    {
      "commit": "002426087ea9106b616194a5d0942aedba2bc884",
      "tree": "c6ec40083bb0d3dd004af7b731f43df5b1e73491",
      "parents": [
        "393117979882e97a15125edd142c70a5e2c16386"
      ],
      "author": {
        "name": "xudong.w",
        "email": "wxd963996380@gmail.com",
        "time": "Sat Mar 14 22:18:18 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Mar 14 22:18:18 2026 +0800"
      },
      "message": "Replace interleave overflow panic with error (#9549)\n\n# Which issue does this PR close?\n\n\u003c!--\nWe generally require a GitHub issue to be filed for all bug fixes and\nenhancements and this helps us generate change logs for our releases.\nYou can link an issue to this PR using the GitHub syntax.\n--\u003e\n\n- Closes #NNN.\n\n# Rationale for this change\n\n\u003c!--\nWhy are you proposing this change? If this is already explained clearly\nin the issue then this section is not needed.\nExplaining clearly why changes are proposed helps reviewers understand\nyour changes and offer better suggestions for fixes.\n--\u003e\n\n\n\n# What changes are included in this PR?\n\n\u003c!--\nThere is no need to duplicate the description in the issue here but it\nis sometimes worth providing a summary of the individual changes in this\nPR.\n--\u003e\n\nReplace interleave overflow panic with error\n\n# Are these changes tested?\n\n\u003c!--\nWe typically require tests for all PRs in order to:\n1. Prevent the code from being accidentally broken by subsequent changes\n2. Serve as another way to document the expected behavior of the code\n\nIf tests are not included in your PR, please explain why (for example,\nare they covered by existing tests)?\n--\u003e\nYes UT\n\n# Are there any user-facing changes?\n\n\u003c!--\nIf there are user-facing changes then we may require documentation to be\nupdated before approving the PR.\n\nIf there are any breaking changes to public APIs, please call them out.\n--\u003e"
    },
    {
      "commit": "393117979882e97a15125edd142c70a5e2c16386",
      "tree": "3a69678bae5241fef94dddbd634565db9f9afb70",
      "parents": [
        "c214c3c6f539c50ff644a3d92571375c57ffe11b"
      ],
      "author": {
        "name": "Oleks V",
        "email": "comphead@users.noreply.github.com",
        "time": "Fri Mar 13 02:54:56 2026 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 13 10:54:56 2026 +0100"
      },
      "message": "chore: Protect `main` branch with required reviews (#9547)\n\n# Which issue does this PR close?\n\n\u003c!--\nWe generally require a GitHub issue to be filed for all bug fixes and\nenhancements and this helps us generate change logs for our releases.\nYou can link an issue to this PR using the GitHub syntax.\n--\u003e\n\n- Closes #NNN.\n\n# Rationale for this change\n\nCurrently any user with `write` access can merge the PR without review.\nGood practice to get at least 1 review before the merge\n\n\u003c!--\nWhy are you proposing this change? If this is already explained clearly\nin the issue then this section is not needed.\nExplaining clearly why changes are proposed helps reviewers understand\nyour changes and offer better suggestions for fixes.\n--\u003e\n\n# What changes are included in this PR?\n\n\u003c!--\nThere is no need to duplicate the description in the issue here but it\nis sometimes worth providing a summary of the individual changes in this\nPR.\n--\u003e\n\n# Are these changes tested?\n\n\u003c!--\nWe typically require tests for all PRs in order to:\n1. Prevent the code from being accidentally broken by subsequent changes\n2. Serve as another way to document the expected behavior of the code\n\nIf tests are not included in your PR, please explain why (for example,\nare they covered by existing tests)?\n--\u003e\n\n# Are there any user-facing changes?\n\n\u003c!--\nIf there are user-facing changes then we may require documentation to be\nupdated before approving the PR.\n\nIf there are any breaking changes to public APIs, please call them out.\n--\u003e"
    },
    {
      "commit": "c214c3c6f539c50ff644a3d92571375c57ffe11b",
      "tree": "31cb9e566f408c43a28364eb6ffa5c5b2bc957f5",
      "parents": [
        "92a239a54e33043f05fef98d81d3c7bd2b926467"
      ],
      "author": {
        "name": "Alexander Rafferty",
        "email": "hello@alexanderrafferty.com",
        "time": "Fri Mar 13 20:54:04 2026 +1100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 13 10:54:04 2026 +0100"
      },
      "message": "Add benchmark for `infer_json_schema` (#9546)\n\n# Which issue does this PR close?\n\nSplit out from #9494 to make review easier. It simply adds a benchmark\nfor JSON schema inference.\n\n# Rationale for this change\n\nI have an open PR that significantly refactors the JSON schema inference\ncode, so I want confidence that not only is the new code correct, but\nalso has better performance than the existing code.\n\n# What changes are included in this PR?\n\nAdds a benchmark.\n\n# Are these changes tested?\n\nN/A\n\n# Are there any user-facing changes?\n\nNo"
    },
    {
      "commit": "92a239a54e33043f05fef98d81d3c7bd2b926467",
      "tree": "c0093379a582036e4e0de28bce35853b76eb747b",
      "parents": [
        "6931d881d88b515574133e4edda7757b5ee2dd56"
      ],
      "author": {
        "name": "Bruno",
        "email": "brunocauet@gmail.com",
        "time": "Thu Mar 12 07:31:45 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 12 17:31:45 2026 +1100"
      },
      "message": "Implement min, max, sum for run-end-encoded arrays. (#9409)\n\nEfficient implementations:\n* min \u0026 max work directly on the values child array.\n* sum folds over run lengths \u0026 values, without decompressing the array.\n\nIn particular, those implementations takes care of the logical offset \u0026\nlen of the run-end-encoded arrays. This is non-trivial:\n* We get the physical start \u0026 end indices in O(log(#runs)), but those\nare incorrect for empty arrays.\n* Slicing can happen in the middle of a run. For sum, we need to track\nthe logical start \u0026 end and reduce the run length accordingly.\n\nFinally, one caveat: the aggregation functions only work when the child\nvalues array is a primitive array. That\u0027s fine ~always, but some client\nmight store the values in an unexpected type. They\u0027ll either get None or\nan Error, depending on the aggregation function used.\n\nThis feature is tracked in\nhttps://github.com/apache/arrow-rs/issues/3520."
    },
    {
      "commit": "6931d881d88b515574133e4edda7757b5ee2dd56",
      "tree": "5003d805ab87df389074079839144045de576cbf",
      "parents": [
        "2956dbf30fe5b50f8f76e6bad93505a8e7b86eb5"
      ],
      "author": {
        "name": "Mikhail Zabaluev",
        "email": "mikhail.zabaluev@gmail.com",
        "time": "Wed Mar 11 23:59:10 2026 +0200"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 11 17:59:10 2026 -0400"
      },
      "message": "feat: expose arrow schema on async avro reader (#9534)\n\n# Rationale for this change\n\nExposes the Arrow schema produced by the async Avro file reader,\nsimilarly to the `schema` method on the synchronous reader.\n\nThis allows an application to prepare casting or other schema\ntransformations with no need to fetch the first record batch to learn\nthe produced Arrow schema. Since the async reader only parses OCF\ncontent for the moment, the schema does not change from batch to batch.\n\n# What changes are included in this PR?\n\nThe `schema` method for `AsyncAvroFileReader` exposes the Arrow schema\nof record batches that are produced by the reader.\n\n# Are these changes tested?\n\nAdded tests verifying that the returned schema matches the expected.\n\n# Are there any user-facing changes?\n\nAdded a `schema` method to `AsyncAvroFileReader`."
    },
    {
      "commit": "2956dbf30fe5b50f8f76e6bad93505a8e7b86eb5",
      "tree": "b4a5d4859f6825448a014f3e8001481e469c3347",
      "parents": [
        "b3e047f59a562020a0fd50e7c68c4e6cbd53687d"
      ],
      "author": {
        "name": "Ryan Johnson",
        "email": "scovich@users.noreply.github.com",
        "time": "Wed Mar 11 12:46:51 2026 -0600"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 11 14:46:51 2026 -0400"
      },
      "message": "fix: Do not assume missing nullcount stat means zero nullcount (#9481)\n\n# Which issue does this PR close?\n\n- Closes https://github.com/apache/arrow-rs/issues/9451\n- Closes https://github.com/apache/arrow-rs/issues/6256\n\n# Rationale for this change\n\nA reader might be annoyed (performance wart) if a parquet footer lacks\nnullcount stats, but inferring nullcount\u003d0 for missing stats makes the\nstats untrustworthy and can lead to incorrect behavior.\n\n# What changes are included in this PR?\n\nIf a parquet footer nullcount stat is missing, surface it as None,\nreserving `Some(0)` for known-no-null cases.\n\n# Are these changes tested?\n\nFixed one unit test that broke, added a missing unit test that covers\nthe other change site.\n\n# Are there any user-facing changes?\n\nThe stats API doesn\u0027t change signature, but there is a behavior change.\nThe existing doc that called out the incorrect behavior has been removed\nto reflect that the incorrect behavior no longer occurs."
    },
    {
      "commit": "b3e047f59a562020a0fd50e7c68c4e6cbd53687d",
      "tree": "27843fa9273c7754a2114f581317716763ed5b6c",
      "parents": [
        "ba02ab9b339480241de32b90a372fd443bf3ab5b"
      ],
      "author": {
        "name": "Peter L",
        "email": "cetra3@hotmail.com",
        "time": "Thu Mar 12 05:15:07 2026 +1030"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 11 14:45:07 2026 -0400"
      },
      "message": "Fix Invalid offset in sparse column chunk data error for multiple predicates (#9509)\n\n# Which issue does this PR close?\n\nRaised an issue at https://github.com/apache/arrow-rs/issues/9516 for\nthis one\n\nSame issue as https://github.com/apache/arrow-rs/issues/9239 but\nextended to another scenario\n\n# Rationale for this change\n\nWhen there are multiple predicates being evaluated, we need to reset the\nrow selection policy before overriding the strategy.\n\nScenario:\n- Dense initial RowSelection (alternating select/skip) covers all pages\n→ Auto resolves to Mask\n- Predicate 1 evaluates on column A, narrows selection to skip middle\npages\n- Predicate 2\u0027s column B is fetched sparsely with the narrowed selection\n(missing middle pages)\n- Without the fix, the override for predicate 2 returns early\n(policy\u003dMask, not Auto), so Mask is used and tries to read missing pages\n→ \"Invalid offset\" error\n\n# What changes are included in this PR?\n\nThis is a one line change to reset the selection policy in the\n`RowGroupDecoderState::WaitingOnFilterData` arm\n\n# Are these changes tested?\n\nYes a new test added that fails currently on `main`, but as you can see\nit\u0027s a doozy to set up.\n\n# Are there any user-facing changes?\n\nNope"
    },
    {
      "commit": "ba02ab9b339480241de32b90a372fd443bf3ab5b",
      "tree": "5e5f3cb1d0426bbb2885b5ac78990d75329183b1",
      "parents": [
        "a475f844d8473eb1d69baebf4337e1c1e1de235c"
      ],
      "author": {
        "name": "Filippo",
        "email": "12383260+notfilippo@users.noreply.github.com",
        "time": "Wed Mar 11 18:59:51 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 11 13:59:51 2026 -0400"
      },
      "message": "feat(memory-tracking): expose API to NullBuffer, ArrayData, and Array (#8918)\n\n# Which issue does this PR close?\n\nPart of #8137. Follow up of #7303. Replaces #8040.\n\n# Rationale for this change\n\n#7303 implements the fundamental symbols for tracking memory. This patch\nexposes those APIs to a higher level Array and ArrayData.\n\n# What changes are included in this PR?\n\nNew `claim` API for NullBuffer, ArrayData, and Array. New `pool`\nfeature-flag to arrow, arrow-array, and arrow-data.\n\n# Are these changes tested?\n\nAdded a doctest on the `Array::claim` method.\n\n# Are there any user-facing changes?\n\nAdded API and a new feature-flag for arrow, arrow-array, and arrow-data."
    },
    {
      "commit": "a475f844d8473eb1d69baebf4337e1c1e1de235c",
      "tree": "36bc9ab4fe486c7188eb22aa1593d7544cd1da53",
      "parents": [
        "d3c79006f2595e144d539f56b3054fe916ab184b"
      ],
      "author": {
        "name": "Liam Bao",
        "email": "liam.zw.bao@gmail.com",
        "time": "Wed Mar 11 13:50:02 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 11 13:50:02 2026 -0400"
      },
      "message": "[Json] Add benchmarks for list json reader (#9507)\n\n# Which issue does this PR close?\n\n\u003c!--\nWe generally require a GitHub issue to be filed for all bug fixes and\nenhancements and this helps us generate change logs for our releases.\nYou can link an issue to this PR using the GitHub syntax.\n--\u003e\n\n- Relates to #9497.\n\n# Rationale for this change\n\n\u003c!--\nWhy are you proposing this change? If this is already explained clearly\nin the issue then this section is not needed.\nExplaining clearly why changes are proposed helps reviewers understand\nyour changes and offer better suggestions for fixes.\n--\u003e\n\nAdd benchmark for `ListArray` in `json_reader` to support the\nperformance evaluation of #9497\n\n# What changes are included in this PR?\n\n\u003c!--\nThere is no need to duplicate the description in the issue here but it\nis sometimes worth providing a summary of the individual changes in this\nPR.\n--\u003e\n\n- Benchmarks for decoding and serialize json list to `ListArray`.\n- Benchmarks for `ListArray` and `FixedSizeListArray` json writer\n\n# Are these changes tested?\n\n\u003c!--\nWe typically require tests for all PRs in order to:\n1. Prevent the code from being accidentally broken by subsequent changes\n2. Serve as another way to document the expected behavior of the code\n\nIf tests are not included in your PR, please explain why (for example,\nare they covered by existing tests)?\n--\u003e\n\nBenchmarks only\n\n# Are there any user-facing changes?\n\n\u003c!--\nIf there are user-facing changes then we may require documentation to be\nupdated before approving the PR.\n\nIf there are any breaking changes to public APIs, please call them out.\n--\u003e\n\nNo"
    },
    {
      "commit": "d3c79006f2595e144d539f56b3054fe916ab184b",
      "tree": "23750196f19803a84600c0d5f1d95dcf0f7f99b1",
      "parents": [
        "33aed330b962d40e6e6b456bc4cd13ec80967f75"
      ],
      "author": {
        "name": "Qi Zhu",
        "email": "821684824@qq.com",
        "time": "Wed Mar 11 18:37:37 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 11 06:37:37 2026 -0400"
      },
      "message": "fix: handle Null type in try_merge for Struct, List, LargeList, and Union (#9524)\n\n# Which issue does this PR close?\n\nField::try_merge correctly handles DataType::Null for primitive types\nand when self is Null, but fails when self is a compound type (Struct,\nList, LargeList, Union) and from is Null. This causes Schema::try_merge\nto error when merging schemas where one has a Null field and another has\na\nconcrete compound type for the same field.\n\nThis is common in JSON inference where some files have null values for\nfields that are structs/lists in other files.\n\n- Closes[ #9523](https://github.com/apache/arrow-rs/issues/9523)\n\n# Rationale for this change\n\nAdd `DataType::Null` arms to the Struct, List, LargeList, and Union\nbranches in `Field::try_merge`, consistent with how primitive types\nalready handle it.\n\n# What changes are included in this PR?\n\nAdd `DataType::Null` arms to the Struct, List, LargeList, and Union\nbranches in `Field::try_merge`, consistent with how primitive types\nalready handle it.\n# Are these changes tested?\n\n- Added test `test_merge_compound_with_null` covering Struct, List,\n  LargeList, and Union merging with Null in both directions.\n- Existing tests continue to pass.\n\n# Are there any user-facing changes?\n\nNo"
    },
    {
      "commit": "33aed330b962d40e6e6b456bc4cd13ec80967f75",
      "tree": "9781b93adbd1d834efc044ef371362bedbdee4af",
      "parents": [
        "d2e2cdafed93a8e0152fe1d018ec2cef154ccb20"
      ],
      "author": {
        "name": "Daniël Heres",
        "email": "danielheres@gmail.com",
        "time": "Wed Mar 11 08:55:51 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 11 08:55:51 2026 +0100"
      },
      "message": "Make with_file_decryption_properties pub instead of pub(crate) (#9532)\n\n# Which issue does this PR close?\n\n\n- Closes #NNN.\n\n# Rationale for this change\nI would like to use `ParquetMetaDataPushDecoder` in arrow-datafusion,\nbut the `with_file_decryption_properties` function is pub(crate), so I\ncan\u0027t fully implement the encryption feature.,\n\n# What changes are included in this PR?\nMake it pub\n\n# Are these changes tested?\n\nNot needed\n\n# Are there any user-facing changes?\n\nNow pub"
    },
    {
      "commit": "d2e2cdafed93a8e0152fe1d018ec2cef154ccb20",
      "tree": "c2a732700c17873062a3b956f746ee826e9ed349",
      "parents": [
        "0b044835a8180100c89b60d856e9f67634b5d5e7"
      ],
      "author": {
        "name": "Jonas Dedden",
        "email": "university@jonas-dedden.de",
        "time": "Mon Mar 09 21:32:53 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 09 16:32:53 2026 -0400"
      },
      "message": "Fix skip_records over-counting when partial record precedes num_rows page skip (#9374)\n\n# Which issue does this PR close?\n\n- Closes #9370 .\n\n# Rationale for this change\n\nThe bug occurs when using RowSelection with nested types (like\nList\u003cString\u003e) when:\n1. A column has multiple pages in a row group\n2. The selected rows span across page boundaries\n  3. The first page is entirely consumed during skip operations\n\nThe issue was in `arrow-rs/parquet/src/column/reader.rs:287-382`\n(`skip_records` function).\n\n**Root cause:** When `skip_records` completed successfully after\ncrossing page boundaries, the `has_partial` state in the\n`RepetitionLevelDecoder` could incorrectly remain true.\n\nThis happened when:\n\n- The skip operation exhausted a page where has_record_delimiter was\nfalse\n- The skip found the remaining records on the next page by counting a\ndelimiter at index 0\n- When a subsequent read_records(1) was called, the stale\nhas_partial\u003dtrue state caused count_records to incorrectly interpret the\nfirst repetition level (0) at index 0 as ending a \"phantom\" partial\nrecord, returning (1 record, 0 levels, 0 values) instead of properly\nreading the actual record data.\n\nFor a more descriptive explanation, look here:\nhttps://github.com/apache/arrow-rs/issues/9370#issuecomment-3861143928\n\n# What changes are included in this PR?\n\nAdded code at the end of skip_records to reset the partial record state\nwhen all requested records have been successfully skipped.\n\nThis ensures that after skip_records completes, we\u0027re at a clean record\nboundary with no lingering partial record state, fixing the array length\nmismatch in StructArrayReader.\n\n# Are these changes tested?\n\nCommit\nhttps://github.com/apache/arrow-rs/commit/365bd9a4ced7897f391e4533930a0c9683952723\nintroduces a test showcasing this issue with v2 data pages only on a\nunit-test level. 
PR https://github.com/apache/arrow-rs/pull/9399 could\nbe used to showcase the issue in an end-to-end way.\n\nPreviously wrong assumption that thought it had to do with mixing v1 and\nv2 data pages:\n\n```\nIn b52e043 I added a test that I validated to fail whenever I remove my fix.\n\n  Bug Mechanism                                                                                                                                                                                             \n                                                                                                                                                                                                            \n  The bug requires three ingredients:                                                                                                                                                                       \n\n  1. Page 1 (DataPage v1): Contains a nested column (with rep levels). During skip_records, all levels on this page are consumed. count_records sees no following rep\u003d0 delimiter, so it sets               \n  has_partial\u003dtrue. Since has_record_delimiter is false (the default InMemoryPageReader returns false when more pages exist), flush_partial is not called.\n  2. Page 2 (DataPage v2): Has num_rows available in its metadata. When num_rows \u003c\u003d remaining_records, the entire page is skipped via skip_next_page() — this does not touch the rep level decoder at all,\n  so has_partial remains stale true from page 1.\n  3. Page 3 (DataPage v1): When read_records loads this page, the stale has_partial\u003dtrue causes the rep\u003d0 at position 0 to be misinterpreted as completing a \"phantom\" partial record. 
This produces (1\n  record, 0 levels, 0 values) instead of reading the actual record data.\n\n  Test Verification\n\n  - With fix (flush_partial at end of skip_records): read_records(1) correctly returns (1, 2, 2) with values [70, 80]\n  - Without fix: read_records(1) returns (1, 0, 0) — a phantom record with no data, which is what causes the \"Not all children array length are the same!\" error when different sibling columns in a struct\n  produce different record counts\n```\n\n---------\n\nCo-authored-by: Ed Seidl \u003cetseidl@users.noreply.github.com\u003e\nCo-authored-by: Andrew Lamb \u003candrew@nerdnetworks.org\u003e"
    },
    {
      "commit": "0b044835a8180100c89b60d856e9f67634b5d5e7",
      "tree": "1e6028e6938c081d75361eb0de4ef80a7344b650",
      "parents": [
        "edd2c8eef5a7b702947a25e3223539e3723d5aac"
      ],
      "author": {
        "name": "Matthew Kim",
        "email": "38759997+friendlymatthew@users.noreply.github.com",
        "time": "Mon Mar 09 14:41:30 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 09 11:41:30 2026 -0700"
      },
      "message": "support string view unshred variant (#9514)\n\n# Which issue does this PR close?\n\n- Closes https://github.com/apache/arrow-rs/issues/9512\n\n# Rationale for this change\n\nYou can build a Variant with a StringView type shredded out, but calling\n`unshred_variant` will fail with not yet implemented"
    },
    {
      "commit": "edd2c8eef5a7b702947a25e3223539e3723d5aac",
      "tree": "2b117c4c55d5e5ec865d01e380c3e569c477f872",
      "parents": [
        "fec3c021e85f34723250c413891f580657a1eb4f"
      ],
      "author": {
        "name": "Matthew Kim",
        "email": "38759997+friendlymatthew@users.noreply.github.com",
        "time": "Mon Mar 09 12:57:17 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 09 09:57:17 2026 -0700"
      },
      "message": "support large string for unshred variant (#9515)\n\n# Which issue does this PR close?\n\n- Closes https://github.com/apache/arrow-rs/issues/9513\n\n# Rationale for this change\n\n`VariantArray::try_new` and `canonicalize_and_verify_data_type` both\naccept `LargeUtf8` as a valid shredded variant type. However\nunshred_variant currently only handles Utf8 for string typed_value\ncolumns\n\nThis means a VariantArray with a LargeUtf8 typed_value column can be\nconstructed successfully, but calling unshred_variant on it fails"
    },
    {
      "commit": "fec3c021e85f34723250c413891f580657a1eb4f",
      "tree": "08112dbbe8d5f347662a25a416bd0730639d2bf5",
      "parents": [
        "097c2038971b9306f8a9c3c767f64d1794e2eb2f"
      ],
      "author": {
        "name": "Tim-53",
        "email": "82676248+Tim-53@users.noreply.github.com",
        "time": "Mon Mar 09 13:45:16 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 09 08:45:16 2026 -0400"
      },
      "message": "fix: remove incorrect debug assertion in BatchCoalescer  (#9508)\n\n# Which issue does this PR close?\n\n - Closes https://github.com/apache/arrow-rs/issues/9506\n\n\n# Rationale for this change\n\n`Vec::reserve(n)` does not guarantee exact capacity, Rust\u0027s\n`MIN_NON_ZERO_CAP` optimization means `reserve(2)` gives capacity \u003d 4\nfor most numeric types, causing `debug_assert_eq!(capacity, batch_size)`\nto panic in debug mode when `batch_size \u003c 4`.\n# What changes are included in this PR?\n\nReplace `reserve` with `reserve_exact` in `ensure_capacity` in both\n`InProgressPrimitiveArray` and `InProgressByteViewArray`.\n`reserve_exact` skips the amortized growth optimization and allocates\nexactly the requested capacity, making the assertion correct.\n\n# Are these changes tested?\nNo. This only fixes an incorrect debug assertion. \n\n# Are there any user-facing changes?\nNo"
    },
    {
      "commit": "097c2038971b9306f8a9c3c767f64d1794e2eb2f",
      "tree": "8c904b0702b464b6586f67214a885a7a66bd223c",
      "parents": [
        "5ba451531efd2e98de38f6a8443aad605b6b5cc5"
      ],
      "author": {
        "name": "Ed Seidl",
        "email": "etseidl@users.noreply.github.com",
        "time": "Sat Mar 07 12:48:33 2026 -0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Mar 07 21:48:33 2026 +0100"
      },
      "message": "Add some benchmarks for decoding delta encoded Parquet (#9500)\n\n# Which issue does this PR close?\n\n\u003c!--\nWe generally require a GitHub issue to be filed for all bug fixes and\nenhancements and this helps us generate change logs for our releases.\nYou can link an issue to this PR using the GitHub syntax.\n--\u003e\n\n- Part of #9476.\n\n# Rationale for this change\n\nAdd benchmarks to show benefit of the optimizations in #9477\n\n# What changes are included in this PR?\n\nAdds some benches for DELTA_BINARY_PACKED, DELTA_BYTE_ARRAY, and\nDELTA_LENGTH_BYTE_ARRAY. The generated data is meant to show the benefit\nof special casing for miniblocks with a bitwidth of 0.\n\n# Are these changes tested?\n\nJust benches\n\n# Are there any user-facing changes?\n\nNo"
    },
    {
      "commit": "5ba451531efd2e98de38f6a8443aad605b6b5cc5",
      "tree": "879380c08e838ad8fefdec34b6b616fbe37f1667",
      "parents": [
        "8c89814ef12be9603eee6aa6edeacedef0a6c5a3"
      ],
      "author": {
        "name": "Bruno",
        "email": "brunocauet@gmail.com",
        "time": "Thu Mar 05 04:44:44 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 04 20:44:44 2026 -0700"
      },
      "message": "Simplify downcast_...!() macro definitions (#9454)\n\n1. Reduce some quantifiers from `*` to `?` when 2+ occurrences would\ngenerate invalid Rust code. `$(if $pred:expr)*`\n2. Clean up 4-armed recursive macros: \n  * put the base case first\n  * explain the fixups\n* fix all at once, going directly to the base case, instead of possibly\nmultiple hoops\n\nThe inital motivation was getting rust-analyzer to stop choking on such\nmacros usage where the left-hand side was a tuple and the\nright-hand-side an expr."
    },
    {
      "commit": "8c89814ef12be9603eee6aa6edeacedef0a6c5a3",
      "tree": "db9b4803beed04727255f88260b48780a1c5d1b9",
      "parents": [
        "5025e6825971c7618532515b572026c61f8589b8"
      ],
      "author": {
        "name": "Mikhail Zabaluev",
        "email": "mikhail.zabaluev@gmail.com",
        "time": "Thu Mar 05 01:56:08 2026 +0200"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 04 18:56:08 2026 -0500"
      },
      "message": "refactor: simplify dynamic state for Avro record projection (#9419)\n\n# Rationale for this change\n\nThe inner loop in `Projector::project_record` gives the optimizer\nsomewhat complicated dynamic data to branch through.\nThe sparse arrays in `Projector` are redundantly coded: `None` in the\nindex positions of `writer_to_reader` must match `Some` in\n`skip_decoders` and vice versa.\n\n# What changes are included in this PR?\n\nRefactor record projection state with a single array of directive-like\nenums corresponding to each writer schema field.\n\n# Are these changes tested?\n\nAdded a benchmark for record projection (the benchmark code is partially\nshared with #9397).\nSomewhat counterintuitively for me, it does not show improvement on a\nmore complex case with a mix of projected fields, but does improve the\nsimpler one-field projection cases.\n \nPasses the existing tests."
    },
    {
      "commit": "5025e6825971c7618532515b572026c61f8589b8",
      "tree": "656ea4873be573f43028f07ea69e07a61560e9ba",
      "parents": [
        "e4b68e6f82e41d3f06182e39723183c28e47afa4"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Tue Mar 03 19:22:37 2026 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 03 19:22:37 2026 -0700"
      },
      "message": "Update strum_macros requirement from 0.27 to 0.28 (#9471)\n\nUpdates the requirements on\n[strum_macros](https://github.com/Peternator7/strum) to permit the\nlatest version.\n\u003cdetails\u003e\n\u003csummary\u003eChangelog\u003c/summary\u003e\n\u003cp\u003e\u003cem\u003eSourced from \u003ca\nhref\u003d\"https://github.com/Peternator7/strum/blob/master/CHANGELOG.md\"\u003estrum_macros\u0027s\nchangelog\u003c/a\u003e.\u003c/em\u003e\u003c/p\u003e\n\u003cblockquote\u003e\n\u003ch2\u003e0.28.0\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003e\n\u003cp\u003e\u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/pull/461\"\u003e#461\u003c/a\u003e:\nAllow any kind of passthrough attributes on\n\u003ccode\u003eEnumDiscriminants\u003c/code\u003e.\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003ePreviously only list-style attributes (e.g.\n\u003ccode\u003e#[strum_discriminants(derive(...))]\u003c/code\u003e) were supported. Now\npath-only\n(e.g. \u003ccode\u003e#[strum_discriminants(non_exhaustive)]\u003c/code\u003e) and\nname/value (e.g. \u003ccode\u003e#[strum_discriminants(doc \u003d\n\u0026quot;foo\u0026quot;)]\u003c/code\u003e)\nattributes are also supported.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003e\n\u003cp\u003e\u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/pull/462\"\u003e#462\u003c/a\u003e:\nAdd missing \u003ccode\u003e#[automatically_derived]\u003c/code\u003e to generated impls not\ncovered by \u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/pull/444\"\u003e#444\u003c/a\u003e.\u003c/p\u003e\n\u003c/li\u003e\n\u003cli\u003e\n\u003cp\u003e\u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/pull/466\"\u003e#466\u003c/a\u003e:\nBump MSRV to 1.71, required to keep up with updated \u003ccode\u003esyn\u003c/code\u003e and\n\u003ccode\u003ewindows-sys\u003c/code\u003e dependencies. 
This is a breaking change if\nyou\u0027re on an old version of rust.\u003c/p\u003e\n\u003c/li\u003e\n\u003cli\u003e\n\u003cp\u003e\u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/pull/469\"\u003e#469\u003c/a\u003e:\nUse absolute paths in generated proc macro code to avoid\npotential name conflicts.\u003c/p\u003e\n\u003c/li\u003e\n\u003cli\u003e\n\u003cp\u003e\u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/pull/465\"\u003e#465\u003c/a\u003e:\nUpgrade \u003ccode\u003ephf\u003c/code\u003e dependency to v0.13.\u003c/p\u003e\n\u003c/li\u003e\n\u003cli\u003e\n\u003cp\u003e\u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/pull/473\"\u003e#473\u003c/a\u003e:\nFix \u003ccode\u003ecargo fmt\u003c/code\u003e / \u003ccode\u003eclippy\u003c/code\u003e issues and add GitHub\nActions CI.\u003c/p\u003e\n\u003c/li\u003e\n\u003cli\u003e\n\u003cp\u003e\u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/pull/477\"\u003e#477\u003c/a\u003e:\n\u003ccode\u003estrum::ParseError\u003c/code\u003e now implements\n\u003ccode\u003ecore::fmt::Display\u003c/code\u003e instead\n\u003ccode\u003estd::fmt::Display\u003c/code\u003e to make it \u003ccode\u003e#[no_std]\u003c/code\u003e\ncompatible. Note the \u003ccode\u003eError\u003c/code\u003e trait wasn\u0027t available in core\nuntil \u003ccode\u003e1.81\u003c/code\u003e\nso \u003ccode\u003estrum::ParseError\u003c/code\u003e still only implements that in std.\u003c/p\u003e\n\u003c/li\u003e\n\u003cli\u003e\n\u003cp\u003e\u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/pull/476\"\u003e#476\u003c/a\u003e:\n\u003cstrong\u003eBreaking Change\u003c/strong\u003e - \u003ccode\u003eEnumString\u003c/code\u003e now\nimplements \u003ccode\u003eFrom\u0026lt;\u0026amp;str\u0026gt;\u003c/code\u003e\n(infallible) instead of \u003ccode\u003eTryFrom\u0026lt;\u0026amp;str\u0026gt;\u003c/code\u003e when the\nenum has a \u003ccode\u003e#[strum(default)]\u003c/code\u003e variant. 
This more accurately\nreflects that parsing cannot fail in that case. If you need the old\n\u003ccode\u003eTryFrom\u003c/code\u003e behavior, you can opt back in using\n\u003ccode\u003eparse_error_ty\u003c/code\u003e and \u003ccode\u003eparse_error_fn\u003c/code\u003e:\u003c/p\u003e\n\u003cpre lang\u003d\"rust\"\u003e\u003ccode\u003e#[derive(EnumString)]\n#[strum(parse_error_ty \u003d strum::ParseError, parse_error_fn \u003d\nmake_error)]\npub enum Color {\n    Red,\n    #[strum(default)]\n    Other(String),\n}\n\u003cp\u003efn make_error(x: \u0026amp;str) -\u0026gt; strum::ParseError {\nstrum::ParseError::VariantNotFound\n}\n\u003c/code\u003e\u003c/pre\u003e\u003c/p\u003e\n\u003c/li\u003e\n\u003cli\u003e\n\u003cp\u003e\u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/pull/431\"\u003e#431\u003c/a\u003e:\nFix bug where \u003ccode\u003eEnumString\u003c/code\u003e ignored the\n\u003ccode\u003eparse_err_ty\u003c/code\u003e\nattribute when the enum had a \u003ccode\u003e#[strum(default)]\u003c/code\u003e\nvariant.\u003c/p\u003e\n\u003c/li\u003e\n\u003cli\u003e\n\u003cp\u003e\u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/pull/474\"\u003e#474\u003c/a\u003e:\nEnumDiscriminants will now copy \u003ccode\u003edefault\u003c/code\u003e over from the\noriginal enum to the Discriminant enum.\u003c/p\u003e\n\u003cpre lang\u003d\"rust\"\u003e\u003ccode\u003e#[derive(Debug, Default, EnumDiscriminants)]\n#[strum_discriminants(derive(Default))] // \u0026lt;- Remove this in 0.28.\nenum MyEnum {\n    #[default] // \u0026lt;- Will be the #[default] on the MyEnumDiscriminant\n    #[strum_discriminants(default)] // \u0026lt;- Remove this in 0.28\n    Variant0,\n    Variant1 { a: NonDefault },\n}\n\u003c/code\u003e\u003c/pre\u003e\n\u003c/li\u003e\n\u003c/ul\u003e\n\u003c!-- raw HTML omitted --\u003e\n\u003c/blockquote\u003e\n\u003cp\u003e... 
(truncated)\u003c/p\u003e\n\u003c/details\u003e\n\u003cdetails\u003e\n\u003csummary\u003eCommits\u003c/summary\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/Peternator7/strum/commit/7376771128834d28bb9beba5c39846cba62e71ec\"\u003e\u003ccode\u003e7376771\u003c/code\u003e\u003c/a\u003e\nPeternator7/0.28 (\u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/issues/475\"\u003e#475\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/Peternator7/strum/commit/26e63cd964a2e364331a5dd977d589bb9f649d8c\"\u003e\u003ccode\u003e26e63cd\u003c/code\u003e\u003c/a\u003e\nDisplay exists in core (\u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/issues/477\"\u003e#477\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/Peternator7/strum/commit/9334c728eedaa8a992d1388a8f4564bbccad1934\"\u003e\u003ccode\u003e9334c72\u003c/code\u003e\u003c/a\u003e\nMake TryFrom and FromStr infallible if there\u0027s a default (\u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/issues/476\"\u003e#476\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/Peternator7/strum/commit/0ccbbf823c16e827afc263182cd55e99e3b2a52e\"\u003e\u003ccode\u003e0ccbbf8\u003c/code\u003e\u003c/a\u003e\nHonor parse_err_ty attribute when the enum has a default variant (\u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/issues/431\"\u003e#431\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/Peternator7/strum/commit/2c9e5a9259189ce8397f2f4967060240c6bafd74\"\u003e\u003ccode\u003e2c9e5a9\u003c/code\u003e\u003c/a\u003e\nAutomatically add Default implementation to EnumDiscriminant if it\nexists on ...\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/Peternator7/strum/commit/e241243e48359b8b811b8eaccdcfa1ae87138e0d\"\u003e\u003ccode\u003ee241243\u003c/code\u003e\u003c/a\u003e\nFix existing 
cargo fmt + clippy issues and add GH actions (\u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/issues/473\"\u003e#473\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/Peternator7/strum/commit/639b67fefd20eaead1c5d2ea794e9afe70a00312\"\u003e\u003ccode\u003e639b67f\u003c/code\u003e\u003c/a\u003e\nfeat: allow any kind of passthrough attributes on\n\u003ccode\u003eEnumDiscriminants\u003c/code\u003e (\u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/issues/461\"\u003e#461\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/Peternator7/strum/commit/0ea1e2d0fd1460e7492ea32e6b460394d9199ff8\"\u003e\u003ccode\u003e0ea1e2d\u003c/code\u003e\u003c/a\u003e\ndocs: Fix typo (\u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/issues/463\"\u003e#463\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/Peternator7/strum/commit/36c051b91086b37d531c63ccf5a49266832a846d\"\u003e\u003ccode\u003e36c051b\u003c/code\u003e\u003c/a\u003e\nUpgrade \u003ccode\u003ephf\u003c/code\u003e to v0.13 (\u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/issues/465\"\u003e#465\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/Peternator7/strum/commit/9328b38617dc6f4a3bc5fdac03883d3fc766cf34\"\u003e\u003ccode\u003e9328b38\u003c/code\u003e\u003c/a\u003e\nUse absolute paths in proc macro (\u003ca\nhref\u003d\"https://redirect.github.com/Peternator7/strum/issues/469\"\u003e#469\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eAdditional commits viewable in \u003ca\nhref\u003d\"https://github.com/Peternator7/strum/compare/v0.27.0...v0.28.0\"\u003ecompare\nview\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/details\u003e\n\u003cbr /\u003e\n\n\nDependabot will resolve any conflicts with this PR as long as you don\u0027t\nalter it yourself. 
You can also trigger a rebase manually by commenting\n`@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n\u003cdetails\u003e\n\u003csummary\u003eDependabot commands and options\u003c/summary\u003e\n\u003cbr /\u003e\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits\nthat have been made to it\n- `@dependabot show \u003cdependency name\u003e ignore conditions` will show all\nof the ignore conditions of the specified dependency\n- `@dependabot ignore this major version` will close this PR and stop\nDependabot creating any more for this major version (unless you reopen\nthe PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop\nDependabot creating any more for this minor version (unless you reopen\nthe PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop\nDependabot creating any more for this dependency (unless you reopen the\nPR or upgrade to it yourself)\n\n\n\u003c/details\u003e\n\nSigned-off-by: dependabot[bot] \u003csupport@github.com\u003e\nCo-authored-by: dependabot[bot] \u003c49699333+dependabot[bot]@users.noreply.github.com\u003e"
    },
    {
      "commit": "e4b68e6f82e41d3f06182e39723183c28e47afa4",
      "tree": "b45ed920400272e581dd36c88add032616c985d8",
      "parents": [
        "bee4595c13665b9dfbd2da3dd0232423a4f2b3c9"
      ],
      "author": {
        "name": "Fokko Driesprong",
        "email": "fokko@apache.org",
        "time": "Mon Mar 02 22:51:19 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 02 13:51:19 2026 -0800"
      },
      "message": "Add `append_non_nulls` to `StructBuilder` (#9430)\n\n# Which issue does this PR close?\n\n- Closes #9429\n\nI\u0027m doing some performance optimization, and noticed that we have a loop\nadding one value to the null mask at a time. Instead, I\u0027d suggest adding\n`append_non_nulls` to do this at once.\n\n```\nappend_non_nulls(n) vs append(true) in a loop (with bitmap allocated)\n\n┌───────────┬───────────────────┬─────────────────────┬─────────┐\n│     n     │ append(true) loop │ append_non_nulls(n) │ speedup │\n├───────────┼───────────────────┼─────────────────────┼─────────┤\n│ 100       │ 251 ns            │ 73 ns               │ ~3x     │\n├───────────┼───────────────────┼─────────────────────┼─────────┤\n│ 1,000     │ 2.0 µs            │ 94 ns               │ ~21x    │\n├───────────┼───────────────────┼─────────────────────┼─────────┤\n│ 10,000    │ 19.3 µs           │ 119 ns              │ ~162x   │\n├───────────┼───────────────────┼─────────────────────┼─────────┤\n│ 100,000   │ 191 µs            │ 348 ns              │ ~549x   │\n├───────────┼───────────────────┼─────────────────────┼─────────┤\n│ 1,000,000 │ 1.90 ms           │ 3.5 µs              │ ~543x   │\n└───────────┴───────────────────┴─────────────────────┴─────────┘\n```\n\n\n# Rationale for this change\n\nIt adds a new public API in favor of performance improvements.\n\n# What changes are included in this PR?\n\nA new public API\n\n# Are these changes tested?\n\nYes, with new unit-tests.\n\n# Are there any user-facing changes?\n\nJust a new convient API."
    },
    {
      "commit": "bee4595c13665b9dfbd2da3dd0232423a4f2b3c9",
      "tree": "8f3637ee402ae96c66f7cd412c1f1ef5d42a73ac",
      "parents": [
        "01d34a8bee7fae52afd167469ef9e75ff9533309"
      ],
      "author": {
        "name": "Fokko Driesprong",
        "email": "fokko@apache.org",
        "time": "Mon Mar 02 22:51:03 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 02 13:51:03 2026 -0800"
      },
      "message": "Add `append_nulls` to `MapBuilder` (#9432)\n\n# Which issue does this PR close?\n\nCloses #9431\n\n# Rationale for this change\n\nIt would be nice to add `append_nulls` to MapBuilder, similar to\n`append_nulls` on `GenericListBuilder`. Appending the nulls at once,\ninstead of using a loop has some nice performance implications:\n\n```\nBenchmark results (1,000,000 nulls):\n\n┌─────────────────────────┬─────────┐\n│         Method          │  Time   │\n├─────────────────────────┼─────────┤\n│ append(false) in a loop │ 2.36 ms │\n├─────────────────────────┼─────────┤\n│ append_nulls(N)         │ 50 µs   │\n└─────────────────────────┴─────────┘\n```\n\n# What changes are included in this PR?\n\nA new public API.\n\n# Are these changes tested?\n\nWith some fresh unit tests.\n\n# Are there any user-facing changes?\n\nA nice and convient new public API"
    },
    {
      "commit": "01d34a8bee7fae52afd167469ef9e75ff9533309",
      "tree": "42532483aa03ce7fd10e1f447aab5010a1e3b504",
      "parents": [
        "73a516e3bc9d3850f16b66d6cb65d01e6b080c97"
      ],
      "author": {
        "name": "Fokko Driesprong",
        "email": "fokko@apache.org",
        "time": "Mon Mar 02 22:50:41 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 02 13:50:41 2026 -0800"
      },
      "message": "Add `append_value_n` to GenericByteBuilder (#9426)\n\n# Which issue does this PR close?\n\n- Closes #9425.\n\n# Rationale for this change\n\nI noticed that this method is available on PrimitiveTypeBuilder, but\nmissing on the GenericByteBuilder, which make sense since the gain is\nless, but after benchmarking, it shows a solid 10%. Mostly because the\nmore efficient allocation of the null-mask.\n\n```\n┌───────────────────┬────────────────┬───────────────────┬─────────┐\n│     Benchmark     │ append_value_n │ append_value loop │ Speedup │\n├───────────────────┼────────────────┼───────────────────┼─────────┤\n│ n\u003d100/len\u003d5       │ 371 ns         │ 408 ns            │ 10%     │\n├───────────────────┼────────────────┼───────────────────┼─────────┤\n│ n\u003d100/len\u003d30      │ 456 ns         │ 507 ns            │ 10%     │\n├───────────────────┼────────────────┼───────────────────┼─────────┤\n│ n\u003d100/len\u003d1024    │ 1.81 µs        │ 1.95 µs           │ 8%      │\n├───────────────────┼────────────────┼───────────────────┼─────────┤\n│ n\u003d1000/len\u003d5      │ 2.39 µs        │ 2.87 µs           │ 17%     │\n├───────────────────┼────────────────┼───────────────────┼─────────┤\n│ n\u003d1000/len\u003d30     │ 3.41 µs        │ 3.89 µs           │ 12%     │\n├───────────────────┼────────────────┼───────────────────┼─────────┤\n│ n\u003d1000/len\u003d1024   │ 12.3 µs        │ 14.4 µs           │ 15%     │\n├───────────────────┼────────────────┼───────────────────┼─────────┤\n│ n\u003d10000/len\u003d5     │ 23.8 µs        │ 29.3 µs           │ 19%     │\n├───────────────────┼────────────────┼───────────────────┼─────────┤\n│ n\u003d10000/len\u003d30    │ 33.7 µs        │ 39.0 µs           │ 14%     │\n├───────────────────┼────────────────┼───────────────────┼─────────┤\n│ n\u003d10000/len\u003d1024  │ 115.9 µs       │ 135.0 µs          │ 14%     │\n├───────────────────┼────────────────┼───────────────────┼─────────┤\n│ 
n\u003d100000/len\u003d5    │ 227.5 µs       │ 278.6 µs          │ 18%     │\n├───────────────────┼────────────────┼───────────────────┼─────────┤\n│ n\u003d100000/len\u003d30   │ 328.1 µs       │ 377.9 µs          │ 13%     │\n├───────────────────┼────────────────┼───────────────────┼─────────┤\n│ n\u003d100000/len\u003d1024 │ 1.16 ms        │ 1.34 ms           │ 14%     │\n└───────────────────┴────────────────┴───────────────────┴─────────┘\n```\n\nI think this is still worthwhile to be added. Let me know what the\ncommunity thinks!\n\n# What changes are included in this PR?\n\nA new public API.\n\n# Are these changes tested?\n\nYes!\n\n# Are there any user-facing changes?\n\nA new public API."
    },
    {
      "commit": "73a516e3bc9d3850f16b66d6cb65d01e6b080c97",
      "tree": "62f5c033866c23a9dd896fd7f924e908dffe52a3",
      "parents": [
        "d99043e3c3a30f283cc2b3332770f8e65e8d9d8e"
      ],
      "author": {
        "name": "Liam Bao",
        "email": "liam.zw.bao@gmail.com",
        "time": "Mon Mar 02 16:49:56 2026 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 02 13:49:56 2026 -0800"
      },
      "message": "Move `ListLikeArray` to arrow-array to be shared with json writer and parquet unshredding (#9437)\n\n# Which issue does this PR close?\n\n- Part of #9340.\n\n# Rationale for this change\n\nJson writers for ListLike types (List/ListView/FixedSizeList) are pretty\nsimilar apart from the element range representation. We already had a\ngood way to abstract this kind of encoder in parquet variant\nunshredding. Given this, it would be good to move this `ListLikeArray`\ntrait to arrow-array to be shared with json/parquet\n\n# What changes are included in this PR?\n\nMove `ListLikeArray` trait from parquet-variant-compute to arrow-array\n\n# Are these changes tested?\n\nCovered by existing tests\n\n# Are there any user-facing changes?\n\nNew pub trait in arrow-array"
    },
    {
      "commit": "d99043e3c3a30f283cc2b3332770f8e65e8d9d8e",
      "tree": "0bd2675f6be7c39a88fc788008fec2e778069856",
      "parents": [
        "4d8e8baed0a712f875d7ee83536be2c983261631"
      ],
      "author": {
        "name": "Congxian Qiu",
        "email": "qcx978132955@gmail.com",
        "time": "Tue Mar 03 05:49:08 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 02 13:49:08 2026 -0800"
      },
      "message": "[Variant] Enahcne bracket access for VariantPath (#9479)\n\n# Which issue does this PR close?\n\n- Closes #9478 .\n\n# What changes are included in this PR?\n\n- Fix the typo\n- Enhance the bracket access for the variant path\n\n# Are these changes tested?\n\n- Add some tests to cover the logic\n\n# Are there any user-facing changes?\n\nNo"
    },
    {
      "commit": "4d8e8baed0a712f875d7ee83536be2c983261631",
      "tree": "647bedf96f43b51c31543e5de3844084cfa568d0",
      "parents": [
        "9ec9f578fc7e1fa38534e3cf4859822c50001be5"
      ],
      "author": {
        "name": "Yan Tingwang",
        "email": "tingwangyan2020@163.com",
        "time": "Tue Mar 03 05:48:19 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 02 13:48:19 2026 -0800"
      },
      "message": "chore: remove duplicate macro `partially_shredded_variant_array_gen` (#9498)\n\n# Which issue does this PR close?\n\n- Closes #9492 .\n\n# What changes are included in this PR?\n\nSee title.\n\n# Are these changes tested?\n\nYES\n\n# Are there any user-facing changes?\n\nNO"
    },
    {
      "commit": "9ec9f578fc7e1fa38534e3cf4859822c50001be5",
      "tree": "79e4cd0d653c1de0298e8cea6a20b052b7e6830e",
      "parents": [
        "a7acf3d7396d763c0ae2ebba6190358ce574ee5f"
      ],
      "author": {
        "name": "Yan Tingwang",
        "email": "tingwangyan2020@163.com",
        "time": "Tue Mar 03 04:24:31 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 02 12:24:31 2026 -0800"
      },
      "message": "Deprecate ArrowTimestampType::make_value in favor of from_naive_datetime (#9491)\n\nMark ArrowTimestampType::make_value as deprecated and migrate internal\ncallers to the newer from_naive_datetime API.\n\n# Which issue does this PR close?\n\n- Closes #9490 .\n\n# Rationale for this change\n\nFollow-up from PR #9345.\n\n# What changes are included in this PR?\n\nMark ArrowTimestampType::make_value as deprecated and migrate internal\ncallers to the newer from_naive_datetime API.\n\n# Are these changes tested?\n\nYES.\n\n# Are there any user-facing changes?\n\nMigration Path: Users should replace:\n\n```rust\n// Old\nTimestampSecondType::make_value(naive)\n```\nWith:\n\n```rust\n// New\nTimestampSecondType::from_naive_datetime(naive, None)\n```"
    },
    {
      "commit": "a7acf3d7396d763c0ae2ebba6190358ce574ee5f",
      "tree": "55c3c54d0d8cd240ea3fa91a39c0e9da8be4751f",
      "parents": [
        "a20753c70c74258831df149e6fb222b6ec501098"
      ],
      "author": {
        "name": "Jochen Görtler",
        "email": "grtlr@users.noreply.github.com",
        "time": "Mon Mar 02 14:11:37 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 03 00:11:37 2026 +1100"
      },
      "message": "Convert `prettyprint` tests in `arrow-cast` to `insta` inline snapshots (#9472)\n\n# Rationale for this change\n\nThe motivation for this PR is to create to improve the testing\ninfrastructure as a precursor to the following PR:\n\n- #9221 \n\n@Jefffrey seemed to be in favor of using `insta` for more tests:\nhttps://github.com/apache/arrow-rs/pull/9221#discussion_r2735246111\n\n# What changes are included in this PR?\n\nThis PR does not do logic changes, but is a straightforward translation\nof the current tests. More test cases, especially around escape\nsequences can be added in follow up PRs.\n\n# Are these changes tested?\n\nYes, to review we still need to manually confirm that no test cases\nchanged accidentally.\n\n# Are there any user-facing changes?\n\nNo."
    },
    {
      "commit": "a20753c70c74258831df149e6fb222b6ec501098",
      "tree": "93ffe28273ebbcffa085da78a3f82deeafd7d9e2",
      "parents": [
        "ae934888bb87196d272340bc528e93dd516bc9e6"
      ],
      "author": {
        "name": "Andrew Lamb",
        "email": "andrew@nerdnetworks.org",
        "time": "Fri Feb 27 14:48:28 2026 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Feb 27 14:48:28 2026 -0500"
      },
      "message": "Update planned release schedule in README.md (#9466)\n\n- part of https://github.com/apache/arrow-rs/issues/8466\n\nUpdate release schedule based on historical reality"
    },
    {
      "commit": "ae934888bb87196d272340bc528e93dd516bc9e6",
      "tree": "d0e9b91baaa51a5e743b1fccdfe2b0a2a0fdc1b2",
      "parents": [
        "183f8c1c5361ac5f026d6fbfa8e99a2920dcb652"
      ],
      "author": {
        "name": "Mikhail Zabaluev",
        "email": "mikhail.zabaluev@gmail.com",
        "time": "Fri Feb 27 20:09:41 2026 +0200"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Feb 27 13:09:41 2026 -0500"
      },
      "message": "fix: resolution of complex type variants in Avro unions (#9328)\n\n# Which issue does this PR close?\n\n- Closes #9336\n\n# Rationale for this change\n\nWhen an Avro reader schema has a union type that needs to be resolved\nagainst the type in the writer schema, resolution information other than\nprimitive type promotions is not properly handled when creating the\ndecoder.\nFor example, when the reader schema has a nullable record field that has\nan added nested field on top of the fields defined in the writer schema,\nthe record type resolution needs to be applied, using a projection with\nthe default field value.\n\n# What changes are included in this PR?\n\nExtend the union resolution information in the decoder with variant\ndata for enum remapping and record projection. The `Projector` data\nstructure with `Skipper` decoders makes part of this information,\nwhich necessitated some refactoring.\n\n# Are these changes tested?\n\nTODO:\n\n- [x] Debug failing tests including a busy-loop failure mode.\n- [ ] Add more unit tests exercising the complex resolutions.\n\n# Are there any user-facing changes?\n\nNo."
    },
    {
      "commit": "183f8c1c5361ac5f026d6fbfa8e99a2920dcb652",
      "tree": "2f77438a23714aabbf80c456bcd06089c016f00e",
      "parents": [
        "2bf6909305091c69edddb0f16c76184edd206141"
      ],
      "author": {
        "name": "Bruno",
        "email": "brunocauet@gmail.com",
        "time": "Fri Feb 27 14:34:49 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Feb 27 06:34:49 2026 -0700"
      },
      "message": "Add PrimitiveRunBuilder::with_data_type() to customize the values\u0027 DataType (#9473)\n\nThis enables setting a timezone or precision \u0026 scale on parameterized\nDataType values.\n\nNote: I think the panic is unfortunate, and a try_with_data_type() would\nbe sensible.\n\n# Which issue does this PR close?\n\n- Closes https://github.com/apache/arrow-rs/issues/8042.\n\n# Are these changes tested?\n\nYes\n\n# Are there any user-facing changes?\n\n- Adds `PrimitiveRunBuilder::with_data_type`."
    },
    {
      "commit": "2bf6909305091c69edddb0f16c76184edd206141",
      "tree": "fdad6c0b273fee1b99b67974e9b6c23281edbd19",
      "parents": [
        "a2cffdbf85c94e6850b725ce2f9d0f2d9b5ebb32"
      ],
      "author": {
        "name": "Konstantin Tarasov",
        "email": "33369833+sdf-jkl@users.noreply.github.com",
        "time": "Wed Feb 25 16:56:53 2026 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Feb 25 13:56:53 2026 -0800"
      },
      "message": "Add list-like types support to VariantArray::try_new (#9457)\n\n# Which issue does this PR close?\n\n- Closes #9455.\n\n# Rationale for this change\ncheck issue\n\n# What changes are included in this PR?\n\nAdded list types support to `VariantArray` data type checking\n\n# Are these changes tested?\n\n# Are there any user-facing changes?\n"
    },
    {
      "commit": "a2cffdbf85c94e6850b725ce2f9d0f2d9b5ebb32",
      "tree": "817f5144b088dd464e03123b1aa03d55fa8713f8",
      "parents": [
        "ff736e0167348ffdd66d7502614cc7749c8690c4"
      ],
      "author": {
        "name": "Eyad Ibrahim",
        "email": "159264031+Eyad3skr@users.noreply.github.com",
        "time": "Tue Feb 24 14:16:08 2026 +0200"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Feb 24 07:16:08 2026 -0500"
      },
      "message": "Add `NullBuffer::from_unsliced_buffer` helper and refactor call sites (#9411)\n\nImplements a helper to replace the pattern of creating a `BooleanBuffer`\nfrom an unsliced validity bitmap and filtering by null count. Previously\nthis was done with `BooleanBuffer::new(...)` plus\n`Some(NullBuffer::new(...)).filter(|n| n.null_count() \u003e 0);` now it is a\nsingle call to` NullBuffer::try_from_unsliced(buffer, len)`, which\nreturns `Some(NullBuffer)` when there are nulls and `None` when all\nvalues are valid.\n\n- Added `try_from_unsliced` in `arrow-buffer/src/buffer/null.rs` with\ntests for nulls, all valid, all null, empty\n- Refactor `FixedSizeBinaryArray::try_from_iter_with_size` and\n`try_from_sparse_iter_with_size` to use it\n- Refactor `take_nulls` in `arrow-select` to use it\n\n\nCloses #9385"
    },
    {
      "commit": "ff736e0167348ffdd66d7502614cc7749c8690c4",
      "tree": "134243db83953e2d83a03a36a1ed3cfbc6c96e6a",
      "parents": [
        "9af5c75899993d619556972ca2f2f878df1037f2"
      ],
      "author": {
        "name": "Jason",
        "email": "940334249@qq.com",
        "time": "Tue Feb 24 08:53:50 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Feb 23 16:53:50 2026 -0800"
      },
      "message": "docs(parquet): Fix broken links in README (#9467)\n\nFix missing link in Parquet README"
    },
    {
      "commit": "9af5c75899993d619556972ca2f2f878df1037f2",
      "tree": "0e98d08a5230862d1a0ce82231fd069e6457a285",
      "parents": [
        "25ad2d488fe19175b2cd15e86c3bc1153cb5a38b"
      ],
      "author": {
        "name": "Jason",
        "email": "940334249@qq.com",
        "time": "Tue Feb 24 03:47:00 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Feb 23 11:47:00 2026 -0800"
      },
      "message": "refactor: simplify iterator using cloned().map(Some) (#9449)\n\n# Which issue does this PR close?\n\n# Rationale for this change\n\nUse .cloned().map(Some) instead of .map(|b| Some(b.clone()))\nfor better readability and idiomatic Rust style.\n\n# What changes are included in this PR?\n\n# Are these changes tested?\n\n# Are there any user-facing changes?\n"
    },
    {
      "commit": "25ad2d488fe19175b2cd15e86c3bc1153cb5a38b",
      "tree": "d230fda01182171f2f5f65ba5a009bac0677b455",
      "parents": [
        "9d0e8beae74fedb362d88cbc6e32d9760657c9de"
      ],
      "author": {
        "name": "Jason",
        "email": "libevent@yeah.net",
        "time": "Fri Feb 20 10:24:11 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Feb 20 13:24:11 2026 +1100"
      },
      "message": "docs: fix markdown link syntax in README (#9440)\n\n# Which issue does this PR close?\n\n\u003c!--\nWe generally require a GitHub issue to be filed for all bug fixes and\nenhancements and this helps us generate change logs for our releases.\nYou can link an issue to this PR using the GitHub syntax.\n--\u003e\n\n- Closes #NNN.\n\n\u003cimg width\u003d\"933\" height\u003d\"379\" alt\u003d\"image\"\nsrc\u003d\"https://github.com/user-attachments/assets/5e1cda10-c605-4daa-8bc3-9b02011d13f2\"\n/\u003e\n\n\n# Rationale for this change\n\n\u003c!--\nWhy are you proposing this change? If this is already explained clearly\nin the issue then this section is not needed.\nExplaining clearly why changes are proposed helps reviewers understand\nyour changes and offer better suggestions for fixes.\n--\u003e\n\n# What changes are included in this PR?\n\n\u003c!--\nThere is no need to duplicate the description in the issue here but it\nis sometimes worth providing a summary of the individual changes in this\nPR.\n--\u003e\n\n# Are these changes tested?\n\n\u003c!--\nWe typically require tests for all PRs in order to:\n1. Prevent the code from being accidentally broken by subsequent changes\n2. Serve as another way to document the expected behavior of the code\n\nIf tests are not included in your PR, please explain why (for example,\nare they covered by existing tests)?\n--\u003e\n\n# Are there any user-facing changes?\n\n\u003c!--\nIf there are user-facing changes then we may require documentation to be\nupdated before approving the PR.\n\nIf there are any breaking changes to public APIs, please call them out.\n--\u003e"
    },
    {
      "commit": "9d0e8beae74fedb362d88cbc6e32d9760657c9de",
      "tree": "c3ff6d62ca1c823291f4f3f59175c9c31e4e809b",
      "parents": [
        "ab9c0627892586e5e45832999253d2877a54c3d4"
      ],
      "author": {
        "name": "Andrew Lamb",
        "email": "andrew@nerdnetworks.org",
        "time": "Thu Feb 19 10:45:53 2026 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Feb 19 10:45:53 2026 -0500"
      },
      "message": "Update version to 58.0.0 and add CHANGELOG (#9420)\n\n# Which issue does this PR close?\n\n- Part of https://github.com/apache/arrow-rs/issues/8466\n\n# Rationale for this change\n\n- https://github.com/apache/arrow-rs/issues/8466\n\n# What changes are included in this PR?\n\n1. Update version to 58.0.0\n2. Update CHANGELOG. See rendered version here:\nhttps://github.com/alamb/arrow-rs/blob/alamb/prepare_58/CHANGELOG.md\n\nI\u0027ll update the changelog with this command\n```shell\nARROW_GITHUB_API_TOKEN\u003dXXXX ./dev/release/update_change_log.sh\n```\n\n\n# Are these changes tested?\n\nN/A\n\n# Are there any user-facing changes?\n\nNew version"
    },
    {
      "commit": "ab9c0627892586e5e45832999253d2877a54c3d4",
      "tree": "51abe774c51e664d291ac305afe8ec0d161acd01",
      "parents": [
        "c129c7cfc27bf64ea07665f27db5bc1f485b66cc"
      ],
      "author": {
        "name": "Mateusz Matejuk",
        "email": "esavier@erglabs.org",
        "time": "Thu Feb 19 10:49:00 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Feb 19 10:49:00 2026 +0100"
      },
      "message": "fix: fixed trait functions clash get_date_time_part_extract_fn (#8221) (#9424)\n\n# Which issue does this PR close?\n\nfixes #8221 \n\n# Rationale for this change\n\nIt blocks users from building if the overall build state in Cargo.lock\nhas `Chrono` in versions \u003e\u003d 0.4.40.\nRecently `Chrono` added `quarter()` function that clashes with\n`Datelike`\u0027s, and requires disambiguation, or build will fail. That also\nmakes users unable to use it in larger projects.\n\n# What changes are included in this PR?\n\n`arrow-rs/arrow-arith/src/temporal.rs:91`\n`get_date_time_part_extract_fn()`\n\nI forced `DatePart::Quarter` to return `Datelike::quarter()`.\nWith versions \u003c 0.4.40 of `Chrono` it worked since it did not export\nthis function.\n\n# Are these changes tested?\n\nFull testing suite is not failing.\nI added a few tests confirming that quarter() does it job, but those are\nnot regression tests. Those will only test if the quarter() actually\nworks.\n\nBuild-related tests are difficult to achieve and require extra setup,\nmoreover are very fragile.\nBenchmarks were run, no deviation was found.\n\n# Are there any user-facing changes?\n\nNo, users should not be affected.\n\n---------\n\nCo-authored-by: Marco Neumann \u003cmarco@crepererum.net\u003e"
    },
    {
      "commit": "c129c7cfc27bf64ea07665f27db5bc1f485b66cc",
      "tree": "d5cb8c0422c48c37cd5c4e7077fee9b83d073fd4",
      "parents": [
        "2f40f78e4feae3aee261d9608cede9535e1429e0"
      ],
      "author": {
        "name": "Fokko Driesprong",
        "email": "fokko@apache.org",
        "time": "Wed Feb 18 16:22:16 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Feb 18 16:22:16 2026 +0100"
      },
      "message": "Avoid allocating a `Vec` in `StructBuilder` (#9428)\n\n# Which issue does this PR close?\n\nResolves #9427\n\nWhile going through the code, @scovich noticed that it allocates a\n`vec![false; n]` to be appended to the null buffer, which is not very\nefficent:\n\n```\nappend_nulls: vec![false; n] (old) vs append_n_nulls (new)\n\n┌─────────┬─────────────────┬──────────────────────┬─────────┐\n│    n    │ old (vec alloc) │ new (append_n_nulls) │ speedup │\n├─────────┼─────────────────┼──────────────────────┼─────────┤\n│ 100     │ 82 ns           │ 43 ns                │ ~2x     │\n├─────────┼─────────────────┼──────────────────────┼─────────┤\n│ 1,000   │ 319 ns          │ 47 ns                │ ~7x     │\n├─────────┼─────────────────┼──────────────────────┼─────────┤\n│ 10,000  │ 2,540 ns        │ 68 ns                │ ~37x    │\n├─────────┼─────────────────┼──────────────────────┼─────────┤\n│ 100,000 │ 25,526 ns       │ 293 ns               │ ~87x    │\n└─────────┴─────────────────┴──────────────────────┴─────────┘\n```\n\n# Rationale for this change\n\nMOAR efficient\n\n# What changes are included in this PR?\n\nAvoid allocating a `Vec`.\n\n# Are these changes tested?\n\nExisting tests\n\n# Are there any user-facing changes?\n\nLess memory consumption and a happy CPU"
    },
    {
      "commit": "2f40f78e4feae3aee261d9608cede9535e1429e0",
      "tree": "376eb79d9e136c59f37b362c58617b96d7fa63d7",
      "parents": [
        "442e1b8d952f5f15cc0922165e56a8f42bd1e716"
      ],
      "author": {
        "name": "Congxian Qiu",
        "email": "qcx978132955@gmail.com",
        "time": "Tue Feb 17 20:18:55 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Feb 17 07:18:55 2026 -0500"
      },
      "message": "[Variant] Support `[\u0027fieldName\u0027]` in VariantPath parser (#9276)\n\n# Which issue does this PR close?\n\n\u003c!--\nWe generally require a GitHub issue to be filed for all bug fixes and\nenhancements and this helps us generate change logs for our releases.\nYou can link an issue to this PR using the GitHub syntax.\n--\u003e\n\n- Closes #9050.\n- close #8954\n\n# Rationale for this change\n\n\u003c!--\nWhy are you proposing this change? If this is already explained clearly\nin the issue then this section is not needed.\nExplaining clearly why changes are proposed helps reviewers understand\nyour changes and offer better suggestions for fixes.\n--\u003e\n\n# What changes are included in this PR?\n\nAdd `[fieldName]` support in `VariantPath` parser, will throw an error\nif the parser fails.\n\nAlso support escaping `\\` inside brackets. If we force users to use\nbrackets when the field contains special characters, maybe we can also\nclose #8954\n\nSample behaviors(read more on the code doc)\n\n- `[foo]` -\u003e filed `foo`\n- `[2]` -\u003e index 2\n- `[a.b]` -\u003e field `a.b`\n- `[a\\]b]` -\u003e field `a]b`\n- `[a\\xb]` -\u003e field `axb`\n\n\n# Are these changes tested?\n\n\u003c!--\nWe typically require tests for all PRs in order to:\n1. Prevent the code from being accidentally broken by subsequent changes\n2. Serve as another way to document the expected behavior of the code\n\nIf tests are not included in your PR, please explain why (for example,\nare they covered by existing tests)?\n--\u003e\n\nAdded tests\n\n# Are there any user-facing changes?\n\n\u003c!--\nIf there are user-facing changes then we may require documentation to be\nupdated before approving the PR.\n\nIf there are any breaking changes to public APIs, please call them out.\n--\u003e\n\nYes, there are some user-facing changes, but `parquet-variant` is still\nexperient for now, so maybe we don\u0027t need to wait for a major version."
    },
    {
      "commit": "442e1b8d952f5f15cc0922165e56a8f42bd1e716",
      "tree": "ddc26794f7dcfd1824366d770fd9b4b4c84ef07a",
      "parents": [
        "df635903108418d95f7d0fc2101091684d8504fd"
      ],
      "author": {
        "name": "Mikhail Zabaluev",
        "email": "mikhail.zabaluev@gmail.com",
        "time": "Mon Feb 16 13:27:44 2026 +0200"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Feb 16 06:27:44 2026 -0500"
      },
      "message": "perf: optimize skipper for varint values used when projecting Avro record types (#9397)\n\n# Rationale for this change\n\nThe `Skipper` implementation, used to skip over unneeded fields when\nprojecting an Avro record type to a reader schema, delegates to the\n`read_vlq` cursor method for variable-length integer types. Besides\nchecking the validity of the encoding, the decoding method performs\ncomputations to obtain the value, which is discarded at the skipper call\nsite.\n\n# What changes are included in this PR?\n\nProvide a dedicated code path to skip over an encoded variable-length\ninteger, and use it to implement `Skipper` for the types that uses this\nencoding.\n\n# Are these changes tested?\n\nA benchmark is added to evaluate the performance improvement.\nIt shows about 7% improvement in my testing on 11th Gen Intel Core\ni5-1135G7.\n\n# Are there any user-facing changes?\n\nNo"
    },
    {
      "commit": "df635903108418d95f7d0fc2101091684d8504fd",
      "tree": "0b556c252ba65f5ec5fe7a90694f41056212ae39",
      "parents": [
        "39a2b71e55e3fa12ee06defb1d133f828bb383f3"
      ],
      "author": {
        "name": "Daniël Heres",
        "email": "danielheres@gmail.com",
        "time": "Sun Feb 15 15:04:00 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Feb 15 15:04:00 2026 +0100"
      },
      "message": "[Minor] Use per-predicate projection masks in arrow_reader_clickbench benchmark (#9413)\n\n# Which issue does this PR close?\n\n- Closes #NNN.\n\n# Rationale for this change\n\nAs suggested by Claude - currently it uses a projection mask for all\ncolumns, significantly slowing down queries that have multiple\npredicates.\nThis makes it more in line with consumer side (e.g. DataFusion) (so we\ncan more accurately benchmark improvements).\n\nIt shows the perf difference in a number of (multi-filter) queries:\n\n```\ngroup                                             clickbench-optimizations               main\narrow_reader_clickbench/async_object_store/Q22    1.00    151.8±6.46ms        ? ?/sec    1.52    230.5±1.68ms        ? ?/sec\narrow_reader_clickbench/async_object_store/Q36    1.00     26.3±0.24ms        ? ?/sec    4.30    113.1±0.67ms        ? ?/sec\narrow_reader_clickbench/async_object_store/Q37    1.00      9.3±0.06ms        ? ?/sec    9.64     89.7±1.20ms        ? ?/sec\narrow_reader_clickbench/async_object_store/Q38    1.00     22.4±0.26ms        ? ?/sec    1.44     32.3±0.29ms        ? ?/sec\narrow_reader_clickbench/async_object_store/Q39    1.00     38.1±0.66ms        ? ?/sec    1.09     41.5±0.35ms        ? ?/sec\narrow_reader_clickbench/async_object_store/Q40    1.00     13.0±0.15ms        ? ?/sec    2.96     38.6±0.45ms        ? ?/sec\narrow_reader_clickbench/async_object_store/Q41    1.00     10.1±0.11ms        ? ?/sec    2.83     28.5±0.73ms        ? ?/sec\narrow_reader_clickbench/async_object_store/Q42    1.00      5.6±0.05ms        ? ?/sec    1.87     10.5±0.12ms        ? ?/sec\n```\n\n# What changes are included in this PR?\n\n# Are these changes tested?\n\n\n# Are there any user-facing changes?\n\n---------\n\nCo-authored-by: Claude Opus 4.6 \u003cnoreply@anthropic.com\u003e"
    },
    {
      "commit": "39a2b71e55e3fa12ee06defb1d133f828bb383f3",
      "tree": "2e50a1029ad7dd031a8f403521faad66cef88aaa",
      "parents": [
        "d8946ca0775ab7fe0eef2fdea4b8bb3d55ec6664"
      ],
      "author": {
        "name": "Connor Sanders",
        "email": "170039284+jecsand838@users.noreply.github.com",
        "time": "Sat Feb 14 04:22:27 2026 -0600"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Feb 14 05:22:27 2026 -0500"
      },
      "message": "Add additional Arrow type support  (#9291)\n\n# Which issue does this PR close?\n\n- Closes #9290 \n\n# Rationale for this change\n\n**NOTE TO REVIEWERS:** Over 1500 lines of this diff are tests.\n\n`arrow-avro` currently cannot encode/decode a number of Arrow\n`DataType`s, and some types have schema/encoding mismatches that can\nlead to incorrect data (even when encoding succeeds).\n\nThe goal is:\n\n* **No more `ArrowError::NotYetImplemented` (or similar) when\nwriting/reading an Arrow `RecordBatch` containing supported Arrow\ntypes**, excluding **Sparse Unions** (will be handled separately).\n* **When compiled with `feature \u003d \"avro_custom_types\"`:** Arrow to Avro\nto Arrow should **round-trip the Arrow `DataType`** (including\nwidth/signedness/time units and relevant metadata using **Arrow-specific\ncustom logical types** following the established `arrow.*` pattern.\n* **When compiled without `avro_custom_types`:** Arrow types should be\nencoded to the **closest standard Avro primitive / logical type**, with\nany necessary lossy conversions documented and consistently applied.\n\n# What changes are included in this PR?\n\nImplementation of all existing missing `arrow-avro` types except for\nSparse Unions\n\n# Are these changes tested?\n\nYes\n\n# Are there any user-facing changes?\n\nYes, additional type support is being added which is user-facing."
    },
    {
      "commit": "d8946ca0775ab7fe0eef2fdea4b8bb3d55ec6664",
      "tree": "7833b804412ad9ecb04d1263aa3a2e6c9eb38aa9",
      "parents": [
        "70089ac5c1e8de99cd9af780bb3ccb4564ae8ef7"
      ],
      "author": {
        "name": "Jonas Dedden",
        "email": "university@jonas-dedden.de",
        "time": "Fri Feb 13 15:49:10 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Feb 13 09:49:10 2026 -0500"
      },
      "message": "Fix `ArrowArrayStreamReader` for 0-columns record batch streams (#9405)\n\n# Which issue does this PR close?\n\n- Closes https://github.com/apache/arrow-rs/issues/9394\n\n# Rationale for this change\n\nPR https://github.com/apache/arrow-rs/pull/8944 introduced a regression\nthat 0-column record batch streams could not longer be decoded.\n\n# What changes are included in this PR?\n\n- Construct `RecordBatch` with `try_new_with_options` using the `len` of\nthe `ArrayData`, instead of letting it try to implicitly determine `len`\nby looking at the first column (this is what `try_new` does).\n- Slight refactor and reduction of code duplication of the existing\n`test_stream_round_trip_[import/export]` tests\n- Introduction of a new `test_stream_round_trip_no_columns` test \n\n# Are these changes tested?\n\nYes, both export and import are tested in\n`test_stream_round_trip_no_columns`.\n\n# Are there any user-facing changes?\n\n0-column record batch streams should be decodable now."
    },
    {
      "commit": "70089ac5c1e8de99cd9af780bb3ccb4564ae8ef7",
      "tree": "539eee38392c9fe92e543830f1c8229930e01cd8",
      "parents": [
        "7fbbde24aee76e00bffd9088375086f53985fb90"
      ],
      "author": {
        "name": "Abhishek",
        "email": "166850903+Abhisheklearn12@users.noreply.github.com",
        "time": "Fri Feb 13 19:48:59 2026 +0530"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Feb 14 01:18:59 2026 +1100"
      },
      "message": "feat: support RunEndEncoded arrays in arrow-json reader and writer (#9379)\n\n# Which issue does this PR close?\n\n- Closes #9359.\n\n# Rationale for this change\n\nThe `arrow-json` crate does not support `RunEndEncoded` arrays. This\nadds read and write support for `RunEndEncoded` arrays in the JSON\nreader and writer.\n\n# What changes are included in this PR?\n\n- Add `DataType::RunEndEncoded` match arm in `make_decoder` function\n- Add `RunEndEncodedArrayDecoder` that decodes JSON values and\nrun-length encodes consecutive equal values\n- Add `DataType::RunEndEncoded` match arm in `make_encoder` function\n- Add `RunEndEncodedEncoder` that maps logical indices to physical\nindices via `get_physical_index()`\n- Add tests for RunEndEncoded read, write, and roundtrip\n\n# Are these changes tested?\n\nYes. Added seven tests:\n\n- `test_read_run_end_encoded` - tests basic read with consecutive runs\n- `test_run_end_encoded_roundtrip` - tests write then read back\n- `test_read_run_end_encoded_consecutive_nulls` - tests null run\ncoalescing\n- `test_read_run_end_encoded_all_unique` - tests no compression when all\nvalues unique\n- `test_read_run_end_encoded_int16_run_ends` - tests Int16 run end type\n- `test_write_run_end_encoded` - tests writing string REE array\n- `test_write_run_end_encoded_int_values` - tests writing integer REE\narray\n\n# Are there any user-facing changes?\n\nYes. `RunEndEncoded` arrays can now be serialized to and deserialized\nfrom JSON using the `arrow-json` crate."
    },
    {
      "commit": "7fbbde24aee76e00bffd9088375086f53985fb90",
      "tree": "3e250b72f523b5bc4cb5c0913a299be80fde3020",
      "parents": [
        "7d16cd039b74fc24a766eb852856278de7f4567b"
      ],
      "author": {
        "name": "Bruno",
        "email": "brunocauet@gmail.com",
        "time": "Fri Feb 13 15:17:43 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Feb 14 01:17:43 2026 +1100"
      },
      "message": "Remove lint issues in parquet-related code. (#9375)\n\n* CacheOptions: `use` in arrow/push_decoder/reader_builder/mod.rs was\n  unused. But removing it triggers a doc complaint in array_reader.rs.\n  I fix it by spelling out `CacheOptions` at one usage site.\n\n* ExtensionType import was unused in arrow/schema/extension.rs when\n  compiling with default features. Only `use` it inside the\n  feature-guarded code paths.\n\n* Prefix unused function arguments with `_`.\n\n* Some code in parquet/tests/arrow_reader/io/mod.rs is unused. As I lack\n  knowledge of the context, I just add just add #[allow(dead_code)].\n# Rationale for this change\n\nrust-analyzer bugs me about those unused symbols \u0026 co.\n\n---------\n\nCo-authored-by: Jeffrey Vo \u003cjeffrey.vo.australia@gmail.com\u003e\nCo-authored-by: Andrew Lamb \u003candrew@nerdnetworks.org\u003e"
    },
    {
      "commit": "7d16cd039b74fc24a766eb852856278de7f4567b",
      "tree": "785b9e2f82496221a80a3928df9d7f362cad4220",
      "parents": [
        "7c833d2be88c7f639d6e33d68036d93345e9f344"
      ],
      "author": {
        "name": "Daniël Heres",
        "email": "danielheres@gmail.com",
        "time": "Fri Feb 13 00:02:58 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Feb 13 00:02:58 2026 +0100"
      },
      "message": "Use zstd::bulk API in IPC and Parquet with context reuse for compression and decompression (#9400)\n\n# Which issue does this PR close?\n\n- Closes #9401\n\n# Rationale for this change\nSwitch parquet and IPC zstd codec from the streaming API\n(zstd::Encoder/Decoder) to the bulk API\n(zstd::bulk::Compressor/Decompressor) with reusable contexts. This\navoids the overhead of reinitializing zstd contexts on every\ncompress/decompress call, yielding ~8-11% speedup on benchmarks.\n\nParquet: Store Compressor and Decompressor in ZSTDCodec, reused across\ncalls. IPC: Add DecompressionContext (mirroring existing\nCompressionContext) with a reusable bulk Decompressor, threaded through\nRecordBatchDecoder.\n\n\n```\n  Benchmark: cargo bench -p parquet --features experimental --bench compression -- \"Zstd\"                                                                                                     \n  ┌────────────────────────────────┬──────────┬───────────┬────────┐         \n  │           Benchmark            │   Main   │ Optimized │ Change │\n  ├────────────────────────────────┼──────────┼───────────┼────────┤\n  │ compress ZSTD - alphanumeric   │ 866 µs   │ 789 µs    │ -9.6%  │\n  ├────────────────────────────────┼──────────┼───────────┼────────┤\n  │ decompress ZSTD - alphanumeric │ 1.125 ms │ 1.007 ms  │ -8.8%  │\n  ├────────────────────────────────┼──────────┼───────────┼────────┤\n  │ compress ZSTD - words          │ 2.869 ms │ 2.590 ms  │ -9.7%  │\n  ├────────────────────────────────┼──────────┼───────────┼────────┤\n  │ decompress ZSTD - words        │ 1.001 ms │ 848 µs    │ -10.6% │\n  └────────────────────────────────┴──────────┴───────────┴────────┘\n  IPC Reader Decompression (10 batches)\n\n  Benchmark: cargo bench -p arrow-ipc --features zstd --bench ipc_reader -- \"zstd\"\n  ┌─────────────────────────────────────────┬──────────┬───────────┬────────┐\n  │                Benchmark                │   Main   │ Optimized │ Change │\n  
├─────────────────────────────────────────┼──────────┼───────────┼────────┤\n  │ StreamReader/read_10/zstd               │ 2.756 ms │ 2.540 ms  │ -7.8%  │\n  ├─────────────────────────────────────────┼──────────┼───────────┼────────┤\n  │ StreamReader/no_validation/read_10/zstd │ 2.601 ms │ 2.352 ms  │ -9.6%  │\n  └─────────────────────────────────────────┴──────────┴───────────┴────────┘\n```\n\n# What changes are included in this PR?\n\n# Are these changes tested?\n\n\n# Are there any user-facing changes?\n\n---------\n\nCo-authored-by: Claude Opus 4.6 \u003cnoreply@anthropic.com\u003e\nCo-authored-by: Andrew Lamb \u003candrew@nerdnetworks.org\u003e"
    }
  ],
  "next": "7c833d2be88c7f639d6e33d68036d93345e9f344"
}
