{
  "log": [
    {
      "commit": "fb7da6954bfd1fe76e03e2c545c800bf30dd8636",
      "tree": "c3d5c63affc49eadae7e35a3c7fd07198b47a9ec",
      "parents": [
        "0b1cf61a15e7536856f3a9e68343671a5aad476d"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Tue May 05 09:27:48 2026 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 05 09:27:48 2026 -0700"
      },
      "message": "Bump tokio from 1.52.1 to 1.52.2 (#2233)\n\nBumps [tokio](https://github.com/tokio-rs/tokio) from 1.52.1 to 1.52.2.\n\u003cdetails\u003e\n\u003csummary\u003eRelease notes\u003c/summary\u003e\n\u003cp\u003e\u003cem\u003eSourced from \u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/releases\"\u003etokio\u0027s\nreleases\u003c/a\u003e.\u003c/em\u003e\u003c/p\u003e\n\u003cblockquote\u003e\n\u003ch2\u003eTokio v1.52.2\u003c/h2\u003e\n\u003ch1\u003e1.52.2 (May 4th, 2026)\u003c/h1\u003e\n\u003cp\u003eThis release reverts the LIFO slot stealing change introduced in\n1.51.0 (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7431\"\u003e#7431\u003c/a\u003e),\ndue to [its performance impact]\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/8065\"\u003e#8065\u003c/a\u003e.\n(\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/8100\"\u003e#8100\u003c/a\u003e)\u003c/p\u003e\n\u003cp\u003e\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7431\"\u003e#7431\u003c/a\u003e:\n\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/pull/7431\"\u003etokio-rs/tokio#7431\u003c/a\u003e\n\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/8065\"\u003e#8065\u003c/a\u003e:\n\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/pull/8065\"\u003etokio-rs/tokio#8065\u003c/a\u003e\n\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/8100\"\u003e#8100\u003c/a\u003e:\n\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/pull/8100\"\u003etokio-rs/tokio#8100\u003c/a\u003e\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003c/details\u003e\n\u003cdetails\u003e\n\u003csummary\u003eCommits\u003c/summary\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/4abe9d732eb01f7b092a571c3dcc4fbd266f4067\"\u003e\u003ccode\u003e4abe9d7\u003c/code\u003e\u003c/a\u003e\nchore: prepare Tokio v1.52.2 (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/8115\"\u003e#8115\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/f82bcf3f45eb9d0dad9d7e45251adf67223f03b6\"\u003e\u003ccode\u003ef82bcf3\u003c/code\u003e\u003c/a\u003e\nMerge \u0027tokio-1.51.2\u0027 into \u0027tokio-1.52.x\u0027 (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/8114\"\u003e#8114\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/7db9bc41f18dffb6953f762a5f8e2f4ddb54d80d\"\u003e\u003ccode\u003e7db9bc4\u003c/code\u003e\u003c/a\u003e\ntest: revert \u0026quot;remove \u003ccode\u003echurn()\u003c/code\u003e task from\n\u003ccode\u003elifo_stealable\u003c/code\u003e\u0026quot; (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/8114\"\u003e#8114\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/64834ec7018de92fadf00d053b565263913439c1\"\u003e\u003ccode\u003e64834ec\u003c/code\u003e\u003c/a\u003e\nchore: prepare Tokio v1.51.2 (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/8113\"\u003e#8113\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/967f5715a71d5d2600b71da8c4ab652c4e644a41\"\u003e\u003ccode\u003e967f571\u003c/code\u003e\u003c/a\u003e\nruntime: revert \u0026quot;steal tasks from the LIFO slot\u0026quot; 
(\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/8100\"\u003e#8100\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/9271e3ed05928eafbeed9dd31d93aebaa49d2aad\"\u003e\u003ccode\u003e9271e3e\u003c/code\u003e\u003c/a\u003e\nMerge tokio-1.51.x (for \u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/8101\"\u003e#8101\u003c/a\u003e)\ninto tokio-1.52.x (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/8106\"\u003e#8106\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/cd1823f43efa95439b79a5a4507df65f83822004\"\u003e\u003ccode\u003ecd1823f\u003c/code\u003e\u003c/a\u003e\nRevert \u0026quot;Pin stable to 1.94 for tokio-1.51.x\u0026quot; (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/8106\"\u003e#8106\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/a97cf12ed9b90e3d5c1557f3afb47f43fcb84301\"\u003e\u003ccode\u003ea97cf12\u003c/code\u003e\u003c/a\u003e\nMerge tokio-1.47.x (commit 670a907c55c7) into tokio-1.51.x (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/8105\"\u003e#8105\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/bde3f20b0fd5de85a8946c4c5c623c039dcfa842\"\u003e\u003ccode\u003ebde3f20\u003c/code\u003e\u003c/a\u003e\nPin stable to 1.94 for tokio-1.51.x (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/8105\"\u003e#8105\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/670a907c55c7f7b27da203208e65da60de6598b2\"\u003e\u003ccode\u003e670a907\u003c/code\u003e\u003c/a\u003e\nci: fix CI on tokio-1.47.x (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/8101\"\u003e#8101\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eAdditional commits viewable in \u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/compare/tokio-1.52.1...tokio-1.52.2\"\u003ecompare\nview\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/details\u003e\n\u003cbr /\u003e\n\n\n[![Dependabot compatibility\nscore](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name\u003dtokio\u0026package-manager\u003dcargo\u0026previous-version\u003d1.52.1\u0026new-version\u003d1.52.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)\n\nDependabot will resolve any conflicts with this PR as long as you don\u0027t\nalter it yourself. 
You can also trigger a rebase manually by commenting\n`@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n\u003cdetails\u003e\n\u003csummary\u003eDependabot commands and options\u003c/summary\u003e\n\u003cbr /\u003e\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits\nthat have been made to it\n- `@dependabot show \u003cdependency name\u003e ignore conditions` will show all\nof the ignore conditions of the specified dependency\n- `@dependabot ignore this major version` will close this PR and stop\nDependabot creating any more for this major version (unless you reopen\nthe PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop\nDependabot creating any more for this minor version (unless you reopen\nthe PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop\nDependabot creating any more for this dependency (unless you reopen the\nPR or upgrade to it yourself)\n\n\n\u003c/details\u003e\n\nSigned-off-by: dependabot[bot] \u003csupport@github.com\u003e\nCo-authored-by: dependabot[bot] \u003c49699333+dependabot[bot]@users.noreply.github.com\u003e"
    },
    {
      "commit": "0b1cf61a15e7536856f3a9e68343671a5aad476d",
      "tree": "0f73e8d225ea433b7a5ccf08f699313e1e0d5fda",
      "parents": [
        "456b3b73732b3b40e2b0e138818573573450cbe4"
      ],
      "author": {
        "name": "guixiaowen",
        "email": "58287738+guixiaowen@users.noreply.github.com",
        "time": "Tue May 05 13:01:09 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 05 13:01:09 2026 +0800"
      },
      "message": "[AURON #2227] auron-build.sh fails to parse options after -D arguments (#2228)\n\n# Which issue does this PR close?\n\nCloses #2227 \n\n# Rationale for this change\n\nWhen a -D Maven system property (e.g. -DskipBuildNative) appears before\nother command-line options, the script stops parsing further arguments,\ncausing valid options after it to be ignored.\n\nEG\n\nsh auron-build.sh --pre --sparkver 3.5 --scalaver 2.12 -DskipBuildNative\n--sparktests true\n\nOnce a -D* argument is encountered, the argument parsing stops (due to\nbreak logic in -* handler).\nAny options appearing after -D* (e.g. --sparktests, --sparkver) are not\nparsed.\n\nERROR INFO\n\n`\n[INFO] Starting Apache Auron build...\n\nUnable to parse command line options: Unrecognized option: --sparktests\n\nusage: mvn [options] [\u003cgoal(s)\u003e] [\u003cphase(s)\u003e]\n\nOptions:\n-am,--also-make If project list is specified,\nalso build projects required by\n\n`\n\n# What changes are included in this PR?\nHandle -D* separately and do not break parsing:\n\nAfter this PR change, the build command now executes correctly.\n\nsh auron-build.sh --pre --sparkver 3.5 --scalaver 2.12 -DskipBuildNative\n--sparktests true\n\nsh auron-build.sh --pre --sparkver 3.5 --scalaver 2.12 -DskipBuildNative\n--sparktests true -Dscalafix.mode\u003dCHECK\n\n\n# Are there any user-facing changes?\n\n# How was this patch tested?"
    },
    {
      "commit": "456b3b73732b3b40e2b0e138818573573450cbe4",
      "tree": "7fafcd1dd164e318db8c133450c7aac4cda5bd4d",
      "parents": [
        "4bb09d43150f53fa2434cf1435a89c7645967a5c"
      ],
      "author": {
        "name": "Ming Wei",
        "email": "weimingdiit@gmail.com",
        "time": "Tue May 05 12:58:58 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue May 05 12:58:58 2026 +0800"
      },
      "message": "[AURON #2181] Implement native support for cume_dist window function (#2205)\n\n# Which issue does this PR close?\n\nCloses #2181 \n\n# Rationale for this change\nAuron does not currently support the `cume_dist()` window function in\nnative execution, so queries using it fall back to Spark.\n\n`cume_dist()` must follow Spark semantics exactly: it returns the number\nof rows preceding or equal to the current row within the partition\nordering, divided by the total number of rows in the partition, and rows\nin the same peer group must return the same value. Because this depends\non the full partition size and peer group boundaries, it cannot be\nimplemented correctly with the existing purely streaming window path.\n\n\n# What changes are included in this PR?\nThis PR adds native support for `cume_dist()` window function end to\nend.\n\nThe main changes are:\n\n- Add `CUME_DIST` to the window function protobuf and planner conversion\npath.\n- Extend `NativeWindowBase` to recognize Spark\u0027s `CumeDist` expression\nand serialize it into the native window plan.\n- Add a native `CumeDistProcessor` that computes `cume_dist()` with\nSpark-compatible semantics:\n  - rows in the same peer group produce the same value\n  - the result is `(peer_group_end_position) / (partition_size)`\n  - single-row partitions return `1.0`\n- Extend native window execution with a full-partition processing path\nfor window functions that require complete partition context.\n- Use that full-partition path for `cume_dist()` to ensure correctness.\n- Add regression tests on both the native execution side and the Spark\nSQL side.\n\n\n# Are there any user-facing changes?\nYes.\n\nQueries using `cume_dist()` window function can now stay on the native\nexecution path instead of falling back to Spark, as long as the rest of\nthe plan is supported by Auron. No new user-facing configuration is\nintroduced.\n\n# How was this patch tested?\nCI.\n\nSigned-off-by: weimingdiit \u003cweimingdiit@gmail.com\u003e"
    },
    {
      "commit": "4bb09d43150f53fa2434cf1435a89c7645967a5c",
      "tree": "765d28aa8340914623e774d8d56c478e9eb27b7c",
      "parents": [
        "016c7a4e99eeee86584aa0127752f81123588e70"
      ],
      "author": {
        "name": "Stefan Wang",
        "email": "1fannnw@gmail.com",
        "time": "Tue Apr 28 01:51:33 2026 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 28 16:51:33 2026 +0800"
      },
      "message": "[AURON #1924] [BUILD] Add `-Xlint:_` to scala-maven-plugin and resolve related warnings (#2216)\n\n# Which issue does this PR close?\n\nCloses #1924. Sets up the next sibling under the [BUILD] epic #1869\n(#1925) without changing its scope.\n\n# Rationale for this change\n\n`-Xlint:_` enables Scala\u0027s bundled lint checks, several of which catch\nreal bug classes that the existing `-Ywarn-unused` + `-Xfatal-warnings`\ndo not. This turns the flag on in both the base `scala-maven-plugin`\nconfig and the `scala-2.13` profile, then resolves every warning it\nsurfaces so the matrix build stays green.\n\n# What changes are included in this PR?\n\n**pom.xml** -- add `-Xlint:_` and `-Wconf:cat\u003dlint-inaccessible:s` to\nboth `scala-maven-plugin` arg lists. The `lint-inaccessible` silencer is\ncommented inline: Auron implements Spark plugin SPIs\n(`AppHistoryServerPlugin`, the `Shims` trait,\n`AuronRssShuffleWriterBase`) whose contracts take `private[spark]` types\n(`ElementTrackingStore`, `SparkUI`, `IndexShuffleBlockResolver`,\n`ShuffleWriteMetricsReporter`). Exposing them in our overrides is\nrequired by Spark, not a defect, so silencing the category project-wide\nis cleaner than annotating every override site.\n\n**Source fixes** for the warnings that are real bug classes (8 sites\nacross 8 files):\n\n- `lint:adapted-args` -- `:+ (\"k\", v)` on a `Seq[(String, V)]` was being\nsilently auto-tupled. Made the `Tuple2` explicit via `-\u003e`. Files:\n`ConvertToNativeBase`, `NativeBroadcastExchangeBase`,\n`NativeParquetInsertIntoHiveTableBase`.\n\n- `lint:nonlocal-return` -- `return` from inside a `for { ... }` /\n`foreach { ... }` body desugars to throw/catch via\n`NonLocalReturnControl`. Replaced with `Iterable.exists` in\n`sparkver.matchVersion`, and an early-exit `while`-loop over an iterator\nin `SparkOnHeapSpillManager.spill`.\n\n- pattern shadowing (`lint:type-parameter-shadow` / pattern-shadow) --\npattern bindings reusing names from the enclosing scope. Renamed: `case\nin: ... \u003d\u003e` -\u003e `case limited: ... \u003d\u003e` / `case fileIn: ... \u003d\u003e` in\n`AuronBlockStoreShuffleReaderBase`; `case expr \u003d\u003e expr` -\u003e `case other\n\u003d\u003e other` in `SparkUDAFWrapperContext`; `case nativeXScanExec` -\u003e `case\nscan` in `AuronEmptyNativeRddSuite`.\n\n# Are there any user-facing changes?\n\nNo -- build-only.\n\n# How was this patch tested?\n\nLocal `./auron-build.sh --pre --sparkver \u003cv\u003e --scalaver \u003cs\u003e\n-DskipBuildNative -Dspotless.skip\u003dtrue` was green for:\n\n- Spark 3.0 / Scala 2.12\n- Spark 3.1 / Scala 2.12\n- Spark 3.2 / Scala 2.12\n- Spark 3.4 / Scala 2.12\n- Spark 3.5 / Scala 2.12\n- Spark 3.5 / Scala 2.13\n\nSpark 3.3 / Scala 2.12 left for the matrix CI to cover (middle of the\nrange, unlikely to surface different lint categories). Happy to iterate\non any warning CI catches that I missed locally."
    },
    {
      "commit": "016c7a4e99eeee86584aa0127752f81123588e70",
      "tree": "d5f9406ff1d0275570a5ba82ceabc1cd0c29d617",
      "parents": [
        "55eb5894929bc072d29c673af76a5725007f223c"
      ],
      "author": {
        "name": "cxzl25",
        "email": "3898450+cxzl25@users.noreply.github.com",
        "time": "Mon Apr 27 16:18:02 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 27 16:18:02 2026 +0800"
      },
      "message": "[AURON #1801] Add MacOS build (#1839)\n\n# Which issue does this PR close?\n\nCloses #1801\n\n# Rationale for this change\n\n# What changes are included in this PR?\n\n# Are there any user-facing changes?\n\n# How was this patch tested?"
    },
    {
      "commit": "55eb5894929bc072d29c673af76a5725007f223c",
      "tree": "84e5db4c4d6dab8d999a38aa4ab726ac8ad05271",
      "parents": [
        "183c365c2a9e227f307133fd711263ab237278eb"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Mon Apr 27 15:05:31 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 27 15:05:31 2026 +0800"
      },
      "message": "Bump rustls-webpki from 0.103.12 to 0.103.13 (#2214)"
    },
    {
      "commit": "183c365c2a9e227f307133fd711263ab237278eb",
      "tree": "a83b16b5b09cf89a3c19ad78801813cac4e57445",
      "parents": [
        "ec10e3051ee7f7ccc9a3a8bc6c7ef7d4d4c3da30"
      ],
      "author": {
        "name": "Ming Wei",
        "email": "weimingdiit@gmail.com",
        "time": "Mon Apr 27 13:37:38 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 27 13:37:38 2026 +0800"
      },
      "message": "[AURON #2182] Implement native support for percent_rank window function (#2204)\n\n# Which issue does this PR close?\n\nCloses #2182\n\n# Rationale for this change\nAuron does not currently support `percent_rank()` in native window\nexecution, so queries using this function fall back to Spark. This\nleaves a gap in native window function coverage.\n\n`percent_rank()` cannot be implemented with the existing streaming-style\nwindow processors alone, because its result depends on both the row rank\nand the total number of rows in the partition. To match Spark semantics,\nthe native engine needs to evaluate it with full-partition context.\n\n# What changes are included in this PR?\nThis PR adds native support for `percent_rank()` window function end to\nend.\nThe main changes are:\n- Add `PERCENT_RANK` to the window function protobuf and planner\nconversion path so Spark plans can be serialized into native plans\ncorrectly.\n- Extend `NativeWindowBase` to recognize Spark\u0027s `PercentRank`\nexpression and convert it to the native window function enum.\n- Introduce a native `PercentRankProcessor` that computes percent rank\nwith Spark-compatible semantics:\n  - rows in the same peer group share the same rank\n  - the result is `(rank - 1) / (partition_size - 1)`\n  - single-row partitions return `0.0`\n- Add a full-partition execution path in native window execution for\nfunctions that require complete partition context, and use that path for\n`percent_rank()`.\n- Add tests on both the native execution side and the Spark SQL side to\nverify correctness.\n\n\n# Are there any user-facing changes?\nYes.\n\nQueries using `percent_rank()` window function can now stay on the\nnative execution path instead of falling back to Spark, as long as the\nrest of the plan is supported by Auron. No user-facing configuration\nchanges are introduced.\n\n\n# How was this patch tested?\nCI.\n\nSigned-off-by: weimingdiit \u003cweimingdiit@gmail.com\u003e"
    },
    {
      "commit": "ec10e3051ee7f7ccc9a3a8bc6c7ef7d4d4c3da30",
      "tree": "b107c5cd98eb7ea56a855d2526557a65e46d58f6",
      "parents": [
        "390417cf3b62f73d1d4c500f678dd670422b768a"
      ],
      "author": {
        "name": "yew1eb",
        "email": "yew1eb@gmail.com",
        "time": "Mon Apr 27 12:53:23 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 27 12:53:23 2026 +0800"
      },
      "message": "[AURON #2003] Extract common package configs to workspace Cargo.toml (#2004)\n\n# Which issue does this PR close?\n\nCloses #2003\n\n# Rationale for this change\n\n# What changes are included in this PR?\nMove `version, license, edition` from sub-crates to root\nworkspace.package.\nKeep `resolver` in sub-crates since it does not support workspace\ninheritance.\n\n# Are there any user-facing changes?\n\n# How was this patch tested?"
    },
    {
      "commit": "390417cf3b62f73d1d4c500f678dd670422b768a",
      "tree": "a8fb9a933e7c8ddc071ee446b80a8652c0b217c9",
      "parents": [
        "6ac62c14421c2cfabd0cf646553d53546cb0c74e"
      ],
      "author": {
        "name": "yaommen",
        "email": "myanstu@163.com",
        "time": "Thu Apr 23 21:49:45 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 23 21:49:45 2026 +0800"
      },
      "message": "[AURON #2211] Bump Celeborn 0.6.3 (#2212)\n\n# Which issue does this PR close?\n\nCloses #2211\n\n# Rationale for this change\n\nCeleborn `0.6.3` has been released, so Auron\u0027s Celeborn `0.6` profile\nshould be aligned with the latest patch release.\n\n# What changes are included in this PR?\n\nUpdate `celebornVersion` in the `celeborn-0.6` profile from `0.6.2` to\n`0.6.3`.\n\n# Are there any user-facing changes?\n\nNo.\n\n# How was this patch tested?\n\nVerified the `celeborn-0.6` profile resolves `celebornVersion` to\n`0.6.3`.\n\nSigned-off-by: yanmin \u003cmyanstu@163.com\u003e"
    },
    {
      "commit": "6ac62c14421c2cfabd0cf646553d53546cb0c74e",
      "tree": "7f7b34a0d8ad1008f56a5409fbbb5d5057a95e85",
      "parents": [
        "4725fcd2aad676dd938158445052788e6515ad91"
      ],
      "author": {
        "name": "Ming Wei",
        "email": "weimingdiit@gmail.com",
        "time": "Thu Apr 23 19:52:06 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 23 19:52:06 2026 +0800"
      },
      "message": "[AURON #2180] Implement native support for nth_value window function (#2203)\n\n# Which issue does this PR close?\n\nCloses #2180 \n\n# Rationale for this change\nAuron did not previously support native execution for Spark nth_value\nwindow function, so queries using nth_value would fall back to Spark\nexecution.\n\nSpark defines nth_value(input, offset) as returning the value at the\noffsetth row from the beginning of the current window frame, with offset\nstarting from 1. When IGNORE NULLS is specified, null input rows should\nbe skipped when counting toward the target position. To improve Spark\ncompatibility, Auron needs native support for both the regular and\nIGNORE NULLS variants.\n\nThe current native window executor only supports cumulative row-frame\nsemantics, so this change scopes native conversion to the frame that is\nalready supported correctly:\nROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW.\n\n# What changes are included in this PR?\n\nThis PR adds end-to-end native support for nth_value window function\nacross the Spark frontend, planner, and native engine.\n\nThe changes include:\n- adding new protobuf/planner window function variants for NTH_VALUE and\nNTH_VALUE_IGNORE_NULLS\n- extending Spark-side window conversion in NativeWindowBase to\nrecognize Spark NthValue\n- using reflection to access NthValue fields so the code remains\ncompatible with Spark 3.0, where this expression class is not available\n- adding a native Rust processor to evaluate nth_value\n- implementing both standard counting semantics and IGNORE NULLS\nsemantics\n- restricting native conversion to the supported cumulative row frame\n- adding Rust-side execution tests\n- adding Scala integration tests to verify result correctness and native\noperator conversion\n\n# Are there any user-facing changes?\nYes.\nQueries using nth_value can now be executed natively when they use the\nsupported frame:\nROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW.\n\nBoth of the following forms are supported:\n- nth_value(input, offset)\n- nth_value(input, offset) IGNORE NULLS\n\nQueries using unsupported window frames will continue to fall back to\nSpark execution.\n\n# How was this patch tested?\nCI.\n\nSigned-off-by: weimingdiit \u003cweimingdiit@gmail.com\u003e"
    },
    {
      "commit": "4725fcd2aad676dd938158445052788e6515ad91",
      "tree": "b37410b009fad5bcef745ac2dd600bea9f173906",
      "parents": [
        "f54d60e689d269bd9b70abd7bd65e3defb5e3ea9"
      ],
      "author": {
        "name": "Shreyesh",
        "email": "shreyesh.arangath@gmail.com",
        "time": "Tue Apr 21 07:28:55 2026 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 21 07:28:55 2026 -0700"
      },
      "message": "[AURON #2155] Date-part extraction functions missing timezone handling for Timestamp inputs  (#2156)\n\n# Which issue does this PR close?\n\nCloses #2155\n\n# Rationale for this change\n\nFive date-part extraction functions in NativeConverters.scala use\nbuildExtScalarFunction, which does not pass the session timezone to the\nnative Rust implementation:\n\nBy contrast, Hour, Minute, Second, and WeekOfYear correctly use\nbuildTimePartExt, which passes sessionLocalTimeZone for TimestampType\ninputs.\n\nThis inconsistency can cause incorrect results for timestamp inputs near\ndate boundaries in non-UTC timezones.\n\nAffected functions:\n\n- Year (Spark_Year) — not timezone-aware\n- Month (Spark_Month) — not timezone-aware\n- DayOfMonth (Spark_Day) — not timezone-aware\n- DayOfWeek (Spark_DayOfWeek) — not timezone-aware\n- Quarter (Spark_Quarter) — not timezone-aware\n\n# What changes are included in this PR?\nThis PR fixes the bug described above\n\n# Are there any user-facing changes?\nCorrectness issues fixed\n\n# How was this patch tested?\nUnit tests\n\n---------\n\nCo-authored-by: Claude Opus 4.6 \u003cnoreply@anthropic.com\u003e"
    },
    {
      "commit": "f54d60e689d269bd9b70abd7bd65e3defb5e3ea9",
      "tree": "5901d00ed3ea03678bf574a46f82ee5f9cfc1508",
      "parents": [
        "f329c64dafce9fadd088a6695d8b145ed9640bb9"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Tue Apr 21 20:35:24 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 21 20:35:24 2026 +0800"
      },
      "message": "Bump rustls-webpki from 0.103.4 to 0.103.12 (#2207)"
    },
    {
      "commit": "f329c64dafce9fadd088a6695d8b145ed9640bb9",
      "tree": "a00852cb9b5f15a13ca00fb066d4e5974621da1d",
      "parents": [
        "5fe95f244a1bdc452b454e5c4ead3d889354d609"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Tue Apr 21 18:51:21 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 21 18:51:21 2026 +0800"
      },
      "message": "Bump tokio from 1.52.0 to 1.52.1 (#2210)"
    },
    {
      "commit": "5fe95f244a1bdc452b454e5c4ead3d889354d609",
      "tree": "3f208589d250b36578e09c18ee86ec40043685a5",
      "parents": [
        "b606d26b40088d2cf3e0fe6c9044cce30ee353d1"
      ],
      "author": {
        "name": "Shreyesh",
        "email": "shreyesh.arangath@gmail.com",
        "time": "Mon Apr 20 10:16:16 2026 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 20 10:16:16 2026 -0700"
      },
      "message": "[AURON #2208] correctness testing: setup correctness testing scaffolding for all spark versions (#2209)\n\n# Which issue does this PR close?\n\nIntroduce empty test modules for Spark 3.1/3.2/3.4/3.5/4.0/4.1 alongside\nthe existing spark33 module. Each module ships only a Maven pom and an\nempty AuronSparkTestSettings stub so that profile activation and the\nreflection lookup in common/SparkTestSettings both succeed.\n\nCloses #2208 \n\n# Rationale for this change\nN/A\n\n# What changes are included in this PR?\nScaffolding for correctness tests\n\n# Are there any user-facing changes?\nN/A\n\n# How was this patch tested?\nUnit tests"
    },
    {
      "commit": "b606d26b40088d2cf3e0fe6c9044cce30ee353d1",
      "tree": "99a91e43a15eb3951e040fc1463327fb55c83eef",
      "parents": [
        "0ffb571f6f88de4a9bbc63816db54ddf7cf87c3d"
      ],
      "author": {
        "name": "Ming Wei",
        "email": "weimingdiit@gmail.com",
        "time": "Sat Apr 18 12:35:34 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Apr 18 12:35:34 2026 +0800"
      },
      "message": "[AURON #2154] Implement native function of str_to_map (#2190)\n\n# Which issue does this PR close?\n\nCloses #2154\n\n# Rationale for this change\nThis PR implements native support for Spark `str_to_map(text[,\npairDelim[, keyValueDelim]])` in Auron.\n\nIn addition to adding native execution for the function itself, this\nchange aligns delimiter handling with Spark semantics by evaluating\n`pairDelim` and `keyValueDelim` with Java regex behavior instead of Rust\nregex behavior. This is important because Spark’s `StringToMap`\nsemantics are based on Java regex splitting, and some valid Spark\npatterns, such as look-around expressions, are not supported by Rust’s\nregex engine.\n\n## Motivation\n`str_to_map` is a commonly used Spark function for constructing maps\nfrom delimited strings. Before this change, Auron did not support it\nnatively.\n\nA straightforward Rust implementation can handle simple regex\ndelimiters, but it does not fully match Spark semantics because Spark\nuses Java regex behavior for both delimiters. That difference can lead\nto incorrect splitting or execution errors for regex patterns that are\nvalid in Spark but unsupported in Rust regex.\n\nThis PR addresses both gaps:\n- it adds native support for `str_to_map`\n- it makes delimiter splitting follow Spark-compatible Java regex\nsemantics\n\n## What changes are included in this PR?\nThis PR:\n- adds native conversion for Spark `StringToMap` expressions in\n`NativeConverters`\n- registers a new native function entry point for `Spark_StrToMap`\n- implements native `str_to_map` evaluation in\n`datafusion-ext-functions`\n- propagates nulls consistently with Spark semantics\n- applies `pairDelim` using Java regex `split(..., -1)`\n- applies `keyValueDelim` using Java regex `split(..., 2)`\n- preserves Spark behavior where a missing value becomes `null`\n- preserves Spark duplicate-key handling via\n`spark.sql.mapKeyDedupPolicy`\n- adds a JVM bridge helper so delimiter splitting uses Java `Pattern`\nsemantics instead of Rust regex semantics\n- adds integration coverage in `AuronFunctionSuite` for standard cases,\nregex delimiters, Java-regex-specific delimiters, duplicate keys, and\n`LAST_WIN` dedup policy\n\n## Why this design?\n\nThe main design choice in this PR is to use Java regex splitting through\nthe existing JNI bridge instead of relying only on Rust’s regex crate.\n\nThis was chosen because Spark semantics for `str_to_map` are defined by\nJava regex behavior. Using Java `Pattern.split` avoids semantic\nmismatches for constructs such as look-around and other\nJava-regex-specific behavior. That gives better correctness and makes\nnative `str_to_map` behavior match Spark more closely.\n\nThe native side still owns the rest of the function logic, including:\n- row-wise null propagation\n- duplicate-key handling\n- map construction in Arrow/native format\n\nThis keeps the Spark-specific regex semantics where they belong while\npreserving the native execution path for the rest of the work.\n\n## How was this patch tested?\nCI.\n\nSigned-off-by: weimingdiit \u003cweimingdiit@gmail.com\u003e"
    },
    {
      "commit": "0ffb571f6f88de4a9bbc63816db54ddf7cf87c3d",
      "tree": "3dadf8d508d2bcccda6552de1f251ebde5a92db1",
      "parents": [
        "0be0e7994738df08ddd87554ca023452f03c6341"
      ],
      "author": {
        "name": "Ming Wei",
        "email": "weimingdiit@gmail.com",
        "time": "Sat Apr 18 12:34:59 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Apr 18 12:34:59 2026 +0800"
      },
      "message": "[AURON #2176] Implement native support for lead window function (#2188)\n\n# Which issue does this PR close?\n\nCloses #2176 \n\n# Rationale for this change\nAuron’s native window support previously covered rank-like functions and\na subset of aggregate window functions, but did not support offset-based\nwindow functions such as `lead(...)`.\n\nThis PR extends native window coverage with a conservative first step:\n- support `lead(...)`\n- preserve Spark-compatible behavior for `input`, `offset`, and\n`default`\n- keep unsupported semantics out of the native path rather than\napproximating them incorrectly\n\n\n# What changes are included in this PR?\n\nThis PR:\n- adds `Lead` handling in `NativeWindowBase`\n- extends the protobuf/planner window function enum with `LEAD`\n- adds native planner support to decode `LEAD` into the native window\nplan\n- introduces a native `LeadProcessor` in `datafusion-ext-plans`\n- evaluates `lead` using Spark-compatible offset/default/null behavior\n- adds a full-partition processing path for `lead` so that lookahead\nworks correctly across input batches\n- adds Rust regression coverage for cross-batch `lead`\n- adds Scala regression tests for:\n  - native `lead(...)` execution\n  - Spark fallback for `lead(... ) IGNORE NULLS`\n\nThe native implementation supports Spark semantics for:\n\n- `lead(input)`\n  - default offset is `1`\n  - default value is `null`\n\n- `lead(input, offset, default)`\n- returns the value of `input` at the `offset`th row after the current\nrow in the same window partition\n  - if the target row exists and `input` there is `null`, returns `null`\n  - if the target row does not exist, returns `default`\n\nSupported scope in this PR:\n- standard `RESPECT NULLS` behavior\n\nNot supported natively in this PR:\n- `IGNORE NULLS`\n\nUnsupported `IGNORE NULLS` queries continue to fall back to Spark to\npreserve correctness.\n\n\n# Are there any user-facing changes?\nYes.\nQueries using `lead(...)` can now remain on Auron’s native window\nexecution path when they use supported semantics.\nQueries using unsupported `lead(... ) IGNORE NULLS` behavior will\ncontinue to fall back to Spark.\n\n# How was this patch tested?\nCI.\n\n---------\n\nSigned-off-by: weimingdiit \u003cweimingdiit@gmail.com\u003e"
    },
    {
      "commit": "0be0e7994738df08ddd87554ca023452f03c6341",
      "tree": "b393aaade80d42398afb47b83bed48f74c447965",
      "parents": [
        "bad4012df2f5fbda6e2e090336a17ac466ba2f09"
      ],
      "author": {
        "name": "Weiqing Yang",
        "email": "wiyang@linkedin.com",
        "time": "Fri Apr 17 01:23:33 2026 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 17 16:23:33 2026 +0800"
      },
      "message": "[AURON #1859] Convert math operators to Auron native operators (#2167)\n\n# Which issue does this PR close?\n\nCloses #1859\n\n# Rationale for this change\n\nNext step in the Flink integration track (#1264). After #1856\nestablished the converter framework (`FlinkRexNodeConverter`,\n`FlinkNodeConverterFactory`, `ConverterContext`), this PR adds the first\nconcrete converter implementations that translate Flink/Calcite\n`RexNode` expressions into Auron native `PhysicalExprNode` protobuf\nrepresentations. Required before #1857 (`FlinkAuronCalcOperator`) can\nwire them into the execution pipeline.\n\n# What changes are included in this PR?\n\nThree new `FlinkRexNodeConverter` implementations (3 commits, 6 new\nfiles, 1 modified):\n\n**Commit 1 — `RexInputRefConverter`**\n- Converts `RexInputRef` column references to `PhysicalColumn` (name +\nindex)\n- Resolves column names from the input schema via `ConverterContext`\n\n**Commit 2 — `RexLiteralConverter`**\n- Converts `RexLiteral` scalar values to `ScalarValue` with Arrow IPC\nbytes\n- Supports: TINYINT, SMALLINT, INTEGER, BIGINT, FLOAT, DOUBLE, DECIMAL,\nBOOLEAN, CHAR, VARCHAR, NULL\n- Serializes via single-element Arrow vector → `ArrowStreamWriter` →\n`ipc_bytes` field\n\n**Commit 3 — `RexCallConverter`**\n- Converts `RexCall` operator expressions dispatched by `SqlKind`\n- Arithmetic: `+`, `-`, `*`, `/`, `%` → `PhysicalBinaryExprNode` with\nexplicit type promotion\n- Unary: `-` → `PhysicalNegativeNode`, `+` → identity passthrough\n- CAST → `PhysicalTryCastNode`\n- Type promotion via `getCommonTypeForComparison()` +\n`castIfNecessary()` (per reviewer PoC)\n- Recursive operand conversion via `FlinkNodeConverterFactory`\n\n**pom.xml** — Added `**/*Test.java` to surefire includes for the planner\nmodule\n\n**Key design decisions** (per @Tartarus0zm PoC in\nhttps://github.com/apache/auron/issues/1859#issuecomment-4181565860):\n- `PhysicalTryCastNode` (not `PhysicalCastNode`) for all type casts\n- Explicit type promotion — does not rely solely on Calcite upstream\nCASTs\n- Output type cast when compatible type differs from declared return\ntype\n- Division by zero handling deferred to #1857 (requires Rust-side\n`Flink_NullIfZero`)\n\n# Are there any user-facing changes?\n\nNo.\n\n# How was this patch tested?\n\n33 unit tests across 3 test classes:\n- `RexInputRefConverterTest` — 4 tests (node class, isSupported,\nfirst/second column)\n- `RexLiteralConverterTest` — 12 tests (all supported types, null\nliteral, unsupported type rejection)\n- `RexCallConverterTest` — 17 tests (all 5 arithmetic ops, unary\nminus/plus, cast, mixed-type promotion, output-type cast, nested\nexpressions, isSupported checks, `getCommonTypeForComparison` direct\ntests)"
    },
    {
      "commit": "bad4012df2f5fbda6e2e090336a17ac466ba2f09",
      "tree": "141c2ee595b2db1a949b50519cf643ce2ee963e5",
      "parents": [
        "0ea67676483770d017a712094efefc89d95828ac"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Fri Apr 17 15:49:29 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 17 15:49:29 2026 +0800"
      },
      "message": "Bump tokio from 1.51.1 to 1.52.0 (#2206)"
    },
    {
      "commit": "0ea67676483770d017a712094efefc89d95828ac",
      "tree": "253d1116c25fe5ab550740dd6fb7f8b703222b1d",
      "parents": [
        "bddb8a999f3923b4cad2d0c4c37219d258c7d1f4"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Tue Apr 14 12:23:12 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 14 12:23:12 2026 +0800"
      },
      "message": "Bump org.apache.kafka:kafka-clients from 3.9.1 to 3.9.2 in /auron-flink-extension/auron-flink-runtime (#2198)"
    },
    {
      "commit": "bddb8a999f3923b4cad2d0c4c37219d258c7d1f4",
      "tree": "9b163f236e0823d35abc998e63136b99141d91de",
      "parents": [
        "193b5cbd540f4e632ad8f694fcf4d617d545606b"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Tue Apr 14 12:18:19 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 14 12:18:19 2026 +0800"
      },
      "message": "Bump rand from 0.9.2 to 0.9.3 (#2201)"
    },
    {
      "commit": "193b5cbd540f4e632ad8f694fcf4d617d545606b",
      "tree": "1a30c911d59a9d8df438800996ed2113efc4612d",
      "parents": [
        "0cbfeed129fdbab834dd05da9fe3e98dceb9ba4e"
      ],
      "author": {
        "name": "Ming Wei",
        "email": "weimingdiit@gmail.com",
        "time": "Sun Apr 12 11:11:05 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Apr 12 11:11:05 2026 +0800"
      },
      "message": "[AURON #2175] Add native support for the _file metadata column (#2184)\n\n# Which issue does this PR close?\n\nCloses #2175 \n\n# Rationale for this change\nThis PR adds native support for Iceberg metadata columns in Auron,\nstarting with `_file`.\n\nPreviously, Iceberg scans fell back whenever metadata columns were\nprojected. With this change, queries that read `_file` can remain on the\nnative Iceberg scan path.\nIceberg metadata columns are useful in real workloads for debugging,\nlineage, and inspection queries. However, Auron previously treated\nmetadata columns as unsupported and fell back to Spark.\n\nThis PR improves native Iceberg scan coverage by supporting metadata\ncolumns that can be represented as file-level constant values, while\nstill falling back for unsupported row-level metadata columns.\n\n# What changes are included in this PR?\n\nThis PR:\n- adds native support for the Iceberg `_file` metadata column\n- keeps unsupported metadata columns such as `_pos` on the fallback path\n- extends `IcebergScanPlan` to distinguish between:\n  - file-backed data columns\n  - metadata columns materialized outside the file payload\n- updates `IcebergScanSupport` to stop rejecting all metadata columns\nunconditionally\n- passes supported metadata values through the native Iceberg scan path\nas per-file constant values\n- updates `NativeIcebergTableScanExec` to project both normal data\ncolumns and supported metadata columns\n- adds integration tests in `AuronIcebergIntegrationSuite`\n\n# Scope of support in this PR\n\nThis PR intentionally takes a conservative approach.\n\nSupported in native scan:\n- `_file`\n\nStill falls back:\n- `_pos`\n- other unsupported metadata columns that require row-level metadata\nhandling\n\n# Why this design?\n`_file` is a file-level metadata column: every row coming from the same\nfile shares the same value. That makes it a good fit for the existing\nnative file-scan path by treating it as a per-file constant column.\n\nIn contrast, `_pos` is row-level metadata and cannot be represented\ncorrectly with the same mechanism, so it remains unsupported in native\nexecution for now.\n\n\n# How was this patch tested?\nCI.\n\n---------\n\nSigned-off-by: weimingdiit \u003cweimingdiit@gmail.com\u003e"
    },
    {
      "commit": "0cbfeed129fdbab834dd05da9fe3e98dceb9ba4e",
      "tree": "650d982203efd13352946eaf6c9dda1bace360ed",
      "parents": [
        "f251bbb21a5396395b13204976d4d68f8e847e59"
      ],
      "author": {
        "name": "Ming Wei",
        "email": "weimingdiit@gmail.com",
        "time": "Sat Apr 11 07:47:15 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Apr 11 07:47:15 2026 +0800"
      },
      "message": "[AURON #2163] Support native Iceberg scans with residual filters via scan pruning and post-scan native filter (#2164)\n\n# Which issue does this PR close?\nCloses #2163 \n\n# Rationale for this change\nThe previous behavior was too conservative for Iceberg scans with\nresidual filters. Even when the scan could still be executed natively\nand the remaining filter logic could be handled above the scan, the\nplanner would fall back entirely.\n\nThis PR improves native coverage for Iceberg reads by:\n- preserving correctness for unsupported predicates\n- increasing native scan applicability for common filter patterns\n- reusing the existing native filter path instead of requiring full\nscan-level predicate support up front\n\nThis is an incremental improvement to Iceberg native execution, not full\nIceberg feature coverage.\n\n# What changes are included in this PR?\nThis PR:\n- removes the unconditional fallback for Iceberg scans with\nnon-`alwaysTrue` residual filters\n- extends `IcebergScanPlan` to carry `pruningPredicates`\n- extracts Iceberg scan filter expressions and converts a supported\nsubset into Spark expressions\n- converts those Spark expressions into native scan pruning predicates\n- passes pruning predicates down through `NativeIcebergTableScanExec`\n- keeps unsupported predicates on the upper `NativeFilter` path\n- adds integration coverage for:\n  - equality-based pruning\n  - `IN`-based pruning\n- partial pushdown where only part of the predicate is pushed to scan\npruning\n\n## Supported predicate scope in this PR\n\nThe scan-pruning conversion added here supports a limited subset of\nIceberg expressions, including:\n- `AND`\n- `OR`\n- `NOT`\n- `IS NULL`\n- `IS NOT NULL`\n- `IS NAN`\n- `NOT NAN`\n- comparison predicates such as `\u003d`, `!\u003d`, `\u003c`, `\u003c\u003d`, `\u003e`, `\u003e\u003d`\n- `IN`\n- `NOT IN`\n\nThe current implementation intentionally avoids pushing some types\nthrough scan pruning, including:\n- `StringType`\n- `BinaryType`\n- `DecimalType`\n\nUnsupported predicates are not pushed into scan pruning and are instead\nleft for post-scan native filtering.\n\n# How was this patch tested?\nIntegration coverage was added in `AuronIcebergIntegrationSuite`"
    },
    {
      "commit": "f251bbb21a5396395b13204976d4d68f8e847e59",
      "tree": "896664645fdbf99e86e655973fbd633152500d9a",
      "parents": [
        "988c4bf348827204515a7101a84df1d37d4f7863"
      ],
      "author": {
        "name": "xuzifu666",
        "email": "1206332514@qq.com",
        "time": "Fri Apr 10 12:38:24 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 10 12:38:24 2026 +0800"
      },
      "message": "[AURON #2185] Integration with Flink metrics in AuronKafkaSourceFunction (#2186)\n\n# Which issue does this PR close?\n\nCloses https://github.com/apache/auron/issues/2185\n\n# Rationale for this change\n\n# What changes are included in this PR?\n\n# Are there any user-facing changes?\n\n# How was this patch tested?"
    },
    {
      "commit": "988c4bf348827204515a7101a84df1d37d4f7863",
      "tree": "f72b12d574d9738715c39dcdee7fbbbdbe5c0318",
      "parents": [
        "5814d40e7a595e78c9624d980845d3bf5de37d05"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Fri Apr 10 11:00:13 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 10 11:00:13 2026 +0800"
      },
      "message": "Bump sonic-rs from 0.5.7 to 0.5.8 (#2114)"
    },
    {
      "commit": "5814d40e7a595e78c9624d980845d3bf5de37d05",
      "tree": "a1ce7282f79cd493cf9a404019008ef744c836d3",
      "parents": [
        "3c97babe0b6cedd56ef8888b1173cac6fa940cd4"
      ],
      "author": {
        "name": "Ming Wei",
        "email": "weimingdiit@gmail.com",
        "time": "Fri Apr 10 06:48:24 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 10 06:48:24 2026 +0800"
      },
      "message": "[AURON #2153] Implement native function of map_from_entries (#2169)\n\n# Which issue does this PR close?\n\nCloses #2153 \n\n# Rationale for this change\n`map_from_entries(...)` was not supported in Auron’s native execution\npath.\nThis PR extends native coverage for Spark map functions using the\nexisting extension-function pattern already used for Spark-specific\nfunctions such as `map_concat(...)`. The goal is to support this\nfunction natively while preserving Spark-compatible behavior.\n\n# What changes are included in this PR?\nThis PR:\n- adds `MapFromEntries` conversion in `NativeConverters`\n- passes Spark’s `spark.sql.mapKeyDedupPolicy` to the native\nimplementation\n- registers `Spark_MapFromEntries` in `datafusion-ext-functions`\n- implements `map_from_entries(...)` in `spark_map.rs`\n- handles Spark-compatible semantics for:\n  - null input array -\u003e null result\n  - null entry inside the input array -\u003e null result\n  - null key -\u003e error\n  - duplicate keys -\u003e error by default\n  - duplicate keys with `LAST_WIN` -\u003e last value wins\n  - null values -\u003e allowed\n- adds Scala regression tests in `AuronFunctionSuite`\n- adds Rust unit tests in `spark_map.rs`\n\n# Are there any user-facing changes?\nQueries using `map_from_entries(arrayOfEntries)` can now run through\nAuron’s native extension-function path instead of falling back or\nremaining unsupported.\n\n# How was this patch tested?\nCI.\n\nSigned-off-by: weimingdiit \u003cweimingdiit@gmail.com\u003e"
    },
    {
      "commit": "3c97babe0b6cedd56ef8888b1173cac6fa940cd4",
      "tree": "d821f51a2dae56d8934472c1141c6edc402643f6",
      "parents": [
        "74844702d1ca9928fe1c623e38da386cfb42c34b"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Thu Apr 09 17:27:20 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 09 17:27:20 2026 +0800"
      },
      "message": "Bump tokio from 1.51.0 to 1.51.1 (#2187)"
    },
    {
      "commit": "74844702d1ca9928fe1c623e38da386cfb42c34b",
      "tree": "9b9d55ee4e382c623188bca00a40d3507d83ddc6",
      "parents": [
        "429d5a1405fa1001b3f2dc8e21c48455bfa52588"
      ],
      "author": {
        "name": "Ming Wei",
        "email": "weimingdiit@gmail.com",
        "time": "Wed Apr 08 14:12:49 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 08 14:12:49 2026 +0800"
      },
      "message": "[AURON #2152] Implement native function of map_from_arrays (#2165)\n\n# Which issue does this PR close?\n\nCloses #2152 \n\n# Rationale for this change\nThis PR adds native support for Spark `map_from_arrays(keys, values)`\nthrough Auron’s extension-function path.\n\nSpark defines this function as creating a map from the given key/value\narrays, with the requirement that all elements in `keys` must be\nnon-null. This change implements that behavior in Auron and adds\nregression coverage.\n\n`map_from_arrays(...)` was not supported in Auron’s native execution\npath.\n\nThis PR extends native map-function coverage while keeping behavior\naligned with Spark semantics, following the existing extension-function\npattern already used for Spark-specific functions such as\n`map_concat(...)`.\n\n\n# What changes are included in this PR?\nThis PR:\n- adds `MapFromArrays` conversion in `NativeConverters`\n- registers `Spark_MapFromArrays` in `datafusion-ext-functions`\n- implements `map_from_arrays(...)` in `spark_map.rs`\n- enforces Spark-compatible null-key rejection\n- handles row-level null propagation\n- adds Scala and Rust regression tests\n\n# Are there any user-facing changes?\nYes.\nQueries using `map_from_arrays(keys, values)` can now run through\nAuron’s native extension-function path.\n\n# How was this patch tested?\nUT.\n\n---------\n\nSigned-off-by: weimingdiit \u003cweimingdiit@gmail.com\u003e"
    },
    {
      "commit": "429d5a1405fa1001b3f2dc8e21c48455bfa52588",
      "tree": "a241fb06b6ba6ac040a7c8ed8523fc71d4762095",
      "parents": [
        "5e5e59b31cc5f4640b0cc72bb5334cf4a0cedba2"
      ],
      "author": {
        "name": "Shreyesh",
        "email": "shreyesh.arangath@gmail.com",
        "time": "Mon Apr 06 10:56:33 2026 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 06 10:56:33 2026 -0700"
      },
      "message": "[AURON #2166] Wire existing native scalar functions in NativeConverters (#2162)\n\n# Which issue does this PR close?\n\nCloses #2166 \n\n# Rationale for this change\nThese functions already had protobuf enum values and native\nRust/DataFusion implementations but were missing the Scala converter\nwiring in NativeConverters.scala.\n\n# What changes are included in this PR?\n- Support for ascii, bit_length, chr, translate, replace, date_trunc \n# Are there any user-facing changes?\n\n# How was this patch tested?\n- Unit tested through AuronFunctionSuite\n\n---------\n\nCo-authored-by: Claude Opus 4.6 \u003cnoreply@anthropic.com\u003e"
    },
    {
      "commit": "5e5e59b31cc5f4640b0cc72bb5334cf4a0cedba2",
      "tree": "dc0866d253b741cc664d6a0d9a079b9e2b200871",
      "parents": [
        "9c4debc0618f3f9d874165f8a5a6d85c9c3121aa"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Mon Apr 06 20:49:18 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 06 20:49:18 2026 +0800"
      },
      "message": "Bump tokio from 1.50.0 to 1.51.0 (#2168)"
    },
    {
      "commit": "9c4debc0618f3f9d874165f8a5a6d85c9c3121aa",
      "tree": "62f8276413aa9164c0b5e2aa1537d2c7144fb24c",
      "parents": [
        "83c06bc2175c1220a963bf673db2a846d7039569"
      ],
      "author": {
        "name": "Ming Wei",
        "email": "weimingdiit@gmail.com",
        "time": "Mon Apr 06 14:24:14 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 06 14:24:14 2026 +0800"
      },
      "message": "[AURON #2134] Implement native function of months_between (#2149)\n\n# Which issue does this PR close?\nCloses #2134 \n\n# Rationale for this change\nSpark `months_between(...)` was not supported in Auron’s native\nexecution path, so queries using it fell back instead of being planned\nand executed natively.\n\nThis change adds native support through the extension-function path and\nkeeps the behavior aligned with Spark semantics for `DATE` and\n`TIMESTAMP` inputs, including `roundOff` handling and session time zone\naware calculations.\n\n\n# What changes are included in this PR?\nThis PR:\n- adds Spark `MonthsBetween` expression conversion in `NativeConverters`\n- adds `buildMonthsBetweenExt` to pass `date1`, `date2`, `roundOff`, and\nsession time zone to the native function\n- registers `Spark_MonthsBetween` in `datafusion-ext-functions`\n- implements `spark_months_between` in `spark_dates.rs`\n- handles Spark-compatible same-day and last-day-of-month cases,\nfractional month calculation, rounding, and null propagation\n- adds regression coverage in `AuronFunctionSuite`\n- adds Rust unit tests for core `months_between` semantics\n\n# Are there any user-facing changes?\nNO.\n# How was this patch tested?\nCI.\n\n---------\n\nSigned-off-by: weimingdiit \u003cweimingdiit@gmail.com\u003e"
    },
    {
      "commit": "83c06bc2175c1220a963bf673db2a846d7039569",
      "tree": "e268465984535a2500913df40e3b1c7c554e4acb",
      "parents": [
        "dd242db57fc2e026f8bb363f467024583c76b4b4"
      ],
      "author": {
        "name": "slfan1989",
        "email": "55643692+slfan1989@users.noreply.github.com",
        "time": "Sun Apr 05 11:42:39 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Apr 05 11:42:39 2026 +0800"
      },
      "message": "[AURON #1996] Standardize Cargo workspace edition and resolver configuration. (#1997)\n\n### Which issue does this PR close?\n\nCloses #1996\n\n### Rationale for this change\n\nStandardize the Cargo workspace configuration to ensure consistent\ndependency resolution and edition settings across all crates.\n\nThis change:\n- Upgrades workspace to use resolver 2 for improved dependency\nresolution\n- Centralizes edition configuration at the workspace level\n- Removes conflicting resolver and edition settings from individual\ncrates\n\nResolver 2 is the default resolver for Rust 2021 edition and provides\nimproved dependency version resolution strategy.\n\n### What changes are included in this PR?\n\n- Add `resolver \u003d \"2\"` to the workspace configuration\n- Add `[workspace.package]` section with unified `edition \u003d \"2024\"`\n- Update all 8 sub-crates to inherit edition from workspace using\n`edition.workspace \u003d true`\n- Remove conflicting `resolver \u003d \"1\"` declarations from 6 sub-crates\n\n### Are there any user-facing changes?\n\nNo.\n\n### How was this patch tested?\n\nVerified with `cargo check --workspace` - all crates compile\nsuccessfully with the unified configuration.\n\nSigned-off-by: slfan1989 \u003cslfan1989@apache.org\u003e"
    },
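The unified layout this PR describes boils down to a small amount of Cargo.toml wiring; a minimal sketch (member paths illustrative, not the actual Auron crate list):

```toml
# Root Cargo.toml: resolver 2 plus a single workspace-level edition.
[workspace]
resolver = "2"
members = ["native-engine/*"]   # illustrative glob, not Auron's real members

[workspace.package]
edition = "2024"

# Each sub-crate's Cargo.toml then inherits the edition instead of
# declaring its own:
# [package]
# edition.workspace = true
```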
    {
      "commit": "dd242db57fc2e026f8bb363f467024583c76b4b4",
      "tree": "74d0d55e104450e686ee27e9355be9c5e1f54cbe",
      "parents": [
        "a332a182d6d74863a8005c59ddc4be706c0f3c1c"
      ],
      "author": {
        "name": "Ming Wei",
        "email": "weimingdiit@gmail.com",
        "time": "Sun Apr 05 11:21:13 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Apr 05 11:21:13 2026 +0800"
      },
      "message": "[AURON #2130] Implement native function of weekofyear (#2131)\n\n# Which issue does this PR close?\nCloses #https://github.com/apache/auron/issues/2130\n\n# Rationale for this change\nTo improve compatibility with Spark SQL date functions, we should\nimplement `weekofyear()` with Spark-compatible semantics.\n\nExpected behavior\n\nFunction name: `weekofyear(expr)`\n\nReturn type: `INT`\n\nWeek semantics:\n- A week starts on Monday\n- Week 1 is the first week of the year with more than 3 days\n- This matches Spark’s ISO-style week numbering behavior\n\nExamples:\n- `weekofyear(\u00272009-07-30\u0027)` -\u003e `31`\n- `weekofyear(\u00272016-01-01\u0027)` -\u003e `53`\n- `weekofyear(\u00272017-01-01\u0027)` -\u003e `52`\n\nSupports: `DATE`, `TIMESTAMP`, and compatible string/date inputs\nconsistent with existing date extraction functions\n\nAdditional expectations:\n- Null-safe: returns `NULL` if input is `NULL`\n- Array and scalar inputs: consistent with existing native date\nextraction functions\n- Cross-year boundary behavior should match Spark semantics exactly\n\n\n# What changes are included in this PR?\nThis PR adds native support for the `weekofyear()` function with\nSpark-compatible semantics.\nThe following changes are included:\n\n- Added native implementation of `spark_weekofyear()` in the expression\nlayer\n- Added `WeekOfYear` expression support in `NativeConverters` for proper\nSpark -\u003e native translation\n- Registered `Spark_WeekOfYear` in native function dispatch\n- Added unit tests to verify correctness for:\n  - normal date inputs\n  - cross-year boundary cases\n  - Spark-compatible ISO week numbering semantics\n  - null input handling\n\n# Are there any user-facing changes?\nNO.\n\n# How was this patch tested?\n- Added and ran targeted Rust unit tests for `spark_weekofyear()`\n- Verified expected results for representative Spark-compatible cases\nsuch as:\n  - `weekofyear(\u00272009-07-30\u0027) \u003d 31`\n  - `weekofyear(\u00272016-01-01\u0027) \u003d 53`\n  - `weekofyear(\u00272017-01-01\u0027) \u003d 52`\n\nSigned-off-by: weimingdiit \u003cweimingdiit@gmail.com\u003e"
    },
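The ISO-style numbering described above is the same scheme chrono's `iso_week()` implements, so the PR's expected values can be checked in a few lines (a sketch, not Auron's `spark_weekofyear`):

```rust
use chrono::{Datelike, NaiveDate};

// Spark's weekofyear: weeks start on Monday, and week 1 is the first week
// of the year containing more than 3 days (ISO-8601 numbering).
fn weekofyear(d: NaiveDate) -> u32 {
    d.iso_week().week()
}

fn main() {
    assert_eq!(weekofyear(NaiveDate::from_ymd_opt(2009, 7, 30).unwrap()), 31);
    assert_eq!(weekofyear(NaiveDate::from_ymd_opt(2016, 1, 1).unwrap()), 53);
    assert_eq!(weekofyear(NaiveDate::from_ymd_opt(2017, 1, 1).unwrap()), 52);
}
```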
    {
      "commit": "a332a182d6d74863a8005c59ddc4be706c0f3c1c",
      "tree": "ab876d3c3944f25eb11e5c4a5810f3a03f907f23",
      "parents": [
        "385846891b4ebf56cf68d111454d4c64cfaee8da"
      ],
      "author": {
        "name": "Yizhong Zhang",
        "email": "zyzzxycj@gmail.com",
        "time": "Sat Apr 04 19:02:39 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Apr 04 19:02:39 2026 +0800"
      },
      "message": "[AURON #2076] Fix typos and unnecessary memory allocation in native engine (#2077)\n\n\u003c!--\n- Start the PR title with the related issue ID, e.g. \u0027[AURON #XXXX]\nShort summary...\u0027.\n--\u003e\n# Which issue does this PR close?\nCloses #2076\n\n# Rationale for this change\nFix typos, improve unnecessary memory usage.\n\n# What changes are included in this PR?\nFix typos.\nUse `unwrap_or_else(|| panic` instead of `expect(\u0026format`.\n\n# Are there any user-facing changes?\nNo.\n\n# How was this patch tested?\nUT \u0026 manual tests\n\nCo-authored-by: zhangyizhong \u003czhangyizhong03@kuaishou.com\u003e"
    },
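The allocation fix above is a standard Rust idiom: `expect(&format!(...))` builds its panic message eagerly on every call, while `unwrap_or_else(|| panic!(...))` defers the formatting to the failure path. A minimal sketch (names hypothetical, not the actual Auron call sites):

```rust
use std::collections::HashMap;

fn lookup(map: &HashMap<String, i32>, key: &str) -> i32 {
    // Before: *map.get(key).expect(&format!("missing key: {key}"))
    // allocates the message String even when the key exists.
    *map.get(key)
        .unwrap_or_else(|| panic!("missing key: {key}"))
}

fn main() {
    let map = HashMap::from([("batch_size".to_string(), 8192)]);
    assert_eq!(lookup(&map, "batch_size"), 8192);
}
```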
    {
      "commit": "385846891b4ebf56cf68d111454d4c64cfaee8da",
      "tree": "2a2639250ecbe92eb9e642603be8ce56586c42a7",
      "parents": [
        "6b94710de594ea42f4fbda0116d2eba19e3524f8"
      ],
      "author": {
        "name": "Ming Wei",
        "email": "weimingdiit@gmail.com",
        "time": "Sat Apr 04 18:52:33 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Apr 04 18:52:33 2026 +0800"
      },
      "message": "[AURON #2140] Implement native function of map_concat (#2141)\n\n# Which issue does this PR close?\n\nCloses #https://github.com/apache/auron/issues/2140\n\n# Rationale for this change\n`map_concat(...)` was not supported in Auron’s native execution path, so\nqueries using it fell back instead of being executed natively.\n\nThis change adds support through the extension-function path, which fits\nthe existing pattern for Spark-specific functions implemented outside\nthe standard builtin `ScalarFunction` chain.\n\n# What changes are included in this PR?\nThis PR:\n- adds Spark `MapConcat` expression conversion in `NativeConverters`\n- introduces a dedicated native implementation in `spark_map.rs`\n- registers `Spark_MapConcat` in `datafusion-ext-functions`\n- adds a regression test in `AuronFunctionSuite`\n\n# Are there any user-facing changes?\nNO.\n# How was this patch tested?\nCI.\n\n---------\n\nSigned-off-by: weimingdiit \u003cweimingdiit@gmail.com\u003e\nCo-authored-by: Copilot \u003c175728472+Copilot@users.noreply.github.com\u003e"
    },
    {
      "commit": "6b94710de594ea42f4fbda0116d2eba19e3524f8",
      "tree": "7c646bb9661b1ccc38aaa92af7dc63315129cf0b",
      "parents": [
        "9e48cc72a2eeede8e298df2f61db784f711283f0"
      ],
      "author": {
        "name": "yew1eb",
        "email": "yew1eb@gmail.com",
        "time": "Fri Apr 03 17:09:37 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 03 17:09:37 2026 +0800"
      },
      "message": "[AURON #2158] fix: change const to static in batch_size() to fix OnceCell caching (#2159)\n\nFixes AURON-2158. The const OnceCell was creating a new instance on\nevery call, causing the cache to never hit.\n\n# Which issue does this PR close?\n\nCloses #2158 \n\n# Rationale for this change\n\n# What changes are included in this PR?\n\n# Are there any user-facing changes?\n\n# How was this patch tested?"
    },
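The bug class here is a well-known Rust gotcha: a `const` item is inlined at every use site, so each call sees a fresh, empty `OnceCell` and the init closure runs every time, whereas a `static` is a single shared location that caches properly. A minimal sketch (names and value hypothetical):

```rust
use once_cell::sync::OnceCell;

// Buggy form: `const BATCH_SIZE: OnceCell<usize> = OnceCell::new();`
// materializes a new, empty cell at each use, so the cache never hits.
// Fixed form: one shared cell, initialized exactly once.
static BATCH_SIZE: OnceCell<usize> = OnceCell::new();

fn batch_size() -> usize {
    *BATCH_SIZE.get_or_init(|| {
        // Stand-in for the real, comparatively expensive config lookup.
        8192
    })
}

fn main() {
    assert_eq!(batch_size(), 8192);
}
```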
    {
      "commit": "9e48cc72a2eeede8e298df2f61db784f711283f0",
      "tree": "2643d5ef5bb7adaa870d892ac3aa78f339aafddd",
      "parents": [
        "edaac4a354b911e365bd2fbdac8cfc01912744a0"
      ],
      "author": {
        "name": "Weiqing Yang",
        "email": "wiyang@linkedin.com",
        "time": "Wed Apr 01 22:38:32 2026 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 02 13:38:32 2026 +0800"
      },
      "message": "[AURON #1856] Introduce Flink expression-level converter framework (#2146)\n\n# Which issue does this PR close?                         \n\n  Closes #1856\n\n  # Rationale for this change\n\nAuron\u0027s Flink integration has the data exchange layer in place (Arrow\nWriter #1850, Arrow Reader #1851) but lacks the conversion\ninfrastructure — the machinery that decides which Flink expressions and\naggregates can be converted to native execution and how to translate\nthem into Auron\u0027s protobuf representation.\n\n# What changes are included in this PR?\n   \nFive foundational Java classes in `auron-flink-planner`:\n                                                            \n  | Class | Role |\n  |-------|------|\n| `FlinkNodeConverter\u003cT\u003e` | Generic base interface: `getNodeClass()`,\n`isSupported()`, `convert()` → `PhysicalExprNode` |\n| `FlinkRexNodeConverter` | Sub-interface for Calcite `RexNode`\nexpressions |\n| `FlinkAggCallConverter` | Sub-interface for Calcite `AggregateCall`\naggregates |\n| `FlinkNodeConverterFactory` | Singleton registry with separate\n`rexConverterMap` + `aggConverterMap`, typed registration and fail-safe\ndispatch |\n| `ConverterContext` | Immutable holder for input schema (`RowType`),\nFlink config, Auron config, and classloader |\n   \npom.xml: `flink-core` scope changed from `test` to `provided` (required\nfor `ReadableConfig`).\n                                                            \nFramework-only — no concrete converter implementations. Follow-up issues\nwill add RexLiteral, RexInputRef, RexCall, and aggregate converters.\nDesign doc: `docs/PR-AURON-1856/AURON-1856-DESIGN.md`\nReview helper: `docs/reviewhelper/AURON-1856/01-converter-framework.md`\n  # Are there any user-facing changes?                      \n\n  No.\n\n  # How was this patch tested?\n\n8 unit tests. Checkstyle: 0 violations."
    },
    {
      "commit": "edaac4a354b911e365bd2fbdac8cfc01912744a0",
      "tree": "741e30877e05de1329f667376132edead389753e",
      "parents": [
        "4b92ce3133862d835c744764065fda08215d234c"
      ],
      "author": {
        "name": "cxzl25",
        "email": "3898450+cxzl25@users.noreply.github.com",
        "time": "Thu Apr 02 10:48:10 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 02 10:48:10 2026 +0800"
      },
      "message": "[AURON #2147] Set thread name for ArrowFFIExporter (#2148)\n\n# Which issue does this PR close?\n\nCloses #2147\n\n# Rationale for this change\n\n# What changes are included in this PR?\n\n# Are there any user-facing changes?\n\n# How was this patch tested?"
    },
    {
      "commit": "4b92ce3133862d835c744764065fda08215d234c",
      "tree": "0512381ace6ac975c3b7606384b0273858316355",
      "parents": [
        "3518098e62736a1537e45bcef41cccce85f7d070"
      ],
      "author": {
        "name": "Ming Wei",
        "email": "weimingdiit@gmail.com",
        "time": "Wed Apr 01 10:25:25 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 01 10:25:25 2026 +0800"
      },
      "message": "[AURON #2126] Add native support for acosh function (#2135)\n\n# Which issue does this PR close?\n\nCloses # https://github.com/apache/auron/issues/2126\n\n# Rationale for this change\nSpark Acosh expressions were not wired into Auron’s standard builtin\nscalar function conversion path, so acosh(expr) could not be planned\nthrough the native backend.\n\nThis change follows the existing ScalarFunction flow used by other\nbuiltin math functions such as acos, asin, and atan: Spark expression\nconversion in NativeConverters, protobuf enum registration in\nauron.proto, and planner mapping in planner.rs. This keeps acosh aligned\nwith the current architecture instead of introducing a custom extension\nfunction path.\n\n# What changes are included in this PR?\nThis PR:\n\nadds Spark Acosh expression conversion in NativeConverters\nintroduces ScalarFunction::Acosh in auron.proto\nmaps ScalarFunction::Acosh in planner.rs\nenables acosh(expr) through the standard builtin ScalarFunction chain\n\n# Are there any user-facing changes?\nNo.\n# How was this patch tested?\nCI.\n\n---------\n\nSigned-off-by: weimingdiit \u003cweimingdiit@gmail.com\u003e"
    },
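For reference, `acosh(x) = ln(x + sqrt(x^2 - 1))` and is defined only for `x >= 1`; Rust's built-in `f64::acosh` already returns `NaN` outside that domain, so the native side needs only the wiring described above. A quick sanity check (sketch only):

```rust
fn main() {
    // acosh(1) = 0; inputs below 1 are outside the domain and yield NaN.
    assert!(1.0f64.acosh().abs() < 1e-15);
    assert!(0.5f64.acosh().is_nan());
    // Agreement with the logarithmic form.
    let x = 2.0f64;
    assert!((x.acosh() - (x + (x * x - 1.0).sqrt()).ln()).abs() < 1e-12);
}
```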
    {
      "commit": "3518098e62736a1537e45bcef41cccce85f7d070",
      "tree": "bd0fba12c85376bbf26f7c42a7effe25d2da2e70",
      "parents": [
        "0e832f2cb9298a0c99bba5e5e261f6f518a974ed"
      ],
      "author": {
        "name": "cxzl25",
        "email": "3898450+cxzl25@users.noreply.github.com",
        "time": "Wed Apr 01 07:11:52 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 31 16:11:52 2026 -0700"
      },
      "message": "[AURON #2137] Native thread name appends the original thread name (#2144)\n\n# Which issue does this PR close?\n\nCloses #2137\n\n# Rationale for this change\n\n# What changes are included in this PR?\n\n# Are there any user-facing changes?\n\n# How was this patch tested?"
    },
    {
      "commit": "0e832f2cb9298a0c99bba5e5e261f6f518a974ed",
      "tree": "182dc50e5e4dbef91f678adfa9442a6dfc9504c7",
      "parents": [
        "9f7c72701f0360e06551532ed6a48a4dba13e2a3"
      ],
      "author": {
        "name": "Ming Wei",
        "email": "weimingdiit@gmail.com",
        "time": "Wed Apr 01 05:50:35 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 01 05:50:35 2026 +0800"
      },
      "message": "[AURON #2128] Implement native function of dayofweek (#2129)\n\n# Which issue does this PR close?\n\nCloses #https://github.com/apache/auron/issues/2128\n\n# Rationale for this change\nTo achieve full compatibility with Spark’s date functions, we should\nimplement `dayofweek()` with the following characteristics:\n\n_Expected behavior_\n\nFunction name: `dayofweek(expr)`  \nReturn value: `Sunday \u003d 1, Monday \u003d 2, ..., Saturday \u003d 7`  \nExample:\n\n`dayofweek(\u00272009-07-30\u0027)` → `5`\n\nSupports: `DATE`, `TIMESTAMP`, and compatible string/date inputs\nconsistent with existing date extraction functions\n\nNull-safe: should return `NULL` if input is `NULL`  \nArray and scalar inputs: consistent with current date extraction\nfunction implementations\n\n\n# What changes are included in this PR?\nThis PR adds native support for the `dayofweek()` function with\nSpark-compatible semantics.\nThe following changes are included:\n\n- Added native implementation of `spark_dayofweek()` in the expression\nlayer.\n- Added `DayOfWeek` expression support in `NativeConverters` for proper\nSpark → native translation.\n- Added unit tests to verify correctness.\n\n# Are there any user-facing changes?\nNo.\n\n# How was this patch tested?\nCI.\n\nSigned-off-by: weimingdiit \u003cweimingdiit@gmail.com\u003e"
    },
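chrono exposes the same Sunday = 1 .. Saturday = 7 numbering directly, so the PR's example can be reproduced in a couple of lines (a sketch, not Auron's `spark_dayofweek`):

```rust
use chrono::{Datelike, NaiveDate};

// Spark's dayofweek: Sunday = 1, Monday = 2, ..., Saturday = 7.
fn dayofweek(d: NaiveDate) -> u32 {
    d.weekday().number_from_sunday()
}

fn main() {
    // 2009-07-30 was a Thursday, so the result is 5, matching the PR example.
    assert_eq!(dayofweek(NaiveDate::from_ymd_opt(2009, 7, 30).unwrap()), 5);
}
```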
    {
      "commit": "9f7c72701f0360e06551532ed6a48a4dba13e2a3",
      "tree": "05aed760f0e973bca4a17f72cd429969e0cff5c6",
      "parents": [
        "1554515c6e1ad05d8669cbf38e5e6ca28931f4ef"
      ],
      "author": {
        "name": "zhangmang",
        "email": "zhangmang1@163.com",
        "time": "Tue Mar 31 20:31:06 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 31 20:31:06 2026 +0800"
      },
      "message": "[AURON #2062] Kafka source support watermark with idleness (#2142)\n\n# Which issue does this PR close?\n\nCloses #2062 \n\n# Rationale for this change\n* Kafka source support watermark with idleness\n\n# What changes are included in this PR?\n* use `org.apache.flink.api.common.eventtime.WatermarkGenerator` to\ngenerate watermark for every partition\n\n# Are there any user-facing changes?\n* No\n\n# How was this patch tested?\n* add UT AuronKafkaSourceITCase#testEventTimeTumbleTvfWindowWithIdle"
    },
    {
      "commit": "1554515c6e1ad05d8669cbf38e5e6ca28931f4ef",
      "tree": "8fbaf95e26cd083d230709dd47fa0521b017e274",
      "parents": [
        "455962782b0a94c85bd93c60f949baef7e098b42"
      ],
      "author": {
        "name": "zhangmang",
        "email": "zhangmang1@163.com",
        "time": "Tue Mar 31 20:30:51 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 31 20:30:51 2026 +0800"
      },
      "message": "[AURON #1848] fix pb list name missing (#2143)\n\n# Which issue does this PR close?\n\nCloses #1848 \n\n# Rationale for this change\n* PB list type add field names\n\n# What changes are included in this PR?\n* shared_list_array_builder add `FieldRef`\n\n# Are there any user-facing changes?\n* No\n\n# How was this patch tested?\n* No"
    },
    {
      "commit": "455962782b0a94c85bd93c60f949baef7e098b42",
      "tree": "e972b91f3a532395c8bb9b6bef033fa7334ce801",
      "parents": [
        "c1e13a75fbb010ed99e19bc46a61dadb61b14f30"
      ],
      "author": {
        "name": "bkhan",
        "email": "xorsum@outlook.com",
        "time": "Mon Mar 30 15:08:53 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 30 15:08:53 2026 +0800"
      },
      "message": "[AURON #2138] Fix some typos in native (#2139)\n\n# Which issue does this PR close?\n\nCloses #2138\n\n# Rationale for this change\n\n# What changes are included in this PR?\n\n# Are there any user-facing changes?\n\n# How was this patch tested?"
    },
    {
      "commit": "c1e13a75fbb010ed99e19bc46a61dadb61b14f30",
      "tree": "791861928cc1b6e17e679b5626a8a740b4fd2349",
      "parents": [
        "cb81ca14007fcdaf27bfcc558bdce9f980c4933e"
      ],
      "author": {
        "name": "bkhan",
        "email": "xorsum@outlook.com",
        "time": "Mon Mar 30 12:03:52 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 30 12:03:52 2026 +0800"
      },
      "message": "[AURON #2123] Remove pull_request_template.md comment (#2136)\n\n# Which issue does this PR close?\n\nCloses #2123\n\n# Rationale for this change\n\n# What changes are included in this PR?\n\n# Are there any user-facing changes?\n\n# How was this patch tested?"
    },
    {
      "commit": "cb81ca14007fcdaf27bfcc558bdce9f980c4933e",
      "tree": "6f4f5172258f825989d80be95db065f2847e2737",
      "parents": [
        "5c85070fff0a7349811e445038e1ca26b740c291"
      ],
      "author": {
        "name": "cxzl25",
        "email": "3898450+cxzl25@users.noreply.github.com",
        "time": "Mon Mar 30 10:53:40 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 30 10:53:40 2026 +0800"
      },
      "message": "[AURON #2119] Add `isTaskRunning` method to check task status in SparkAuronAdaptor (#2120)\n\n# Which issue does this PR close?\n\nCloses #2119\n\n# Rationale for this change\n\nWhen Spark initiates a task kill (e.g. stage cancellation, speculative\nexecution kill, or user-triggered job cancellation), the Rust native\nengine always receives true from the JNI callback\nJniBridge.isTaskRunning(), making it unable to detect that the task has\nalready been killed.\n\n# What changes are included in this PR?\n\n# Are there any user-facing changes?\n\n# How was this patch tested?"
    },
    {
      "commit": "5c85070fff0a7349811e445038e1ca26b740c291",
      "tree": "5380939c56e0b34831650398785628e51fc96fd9",
      "parents": [
        "2b77f1ec6776618891d908403c0edd3ea2091111"
      ],
      "author": {
        "name": "cxzl25",
        "email": "3898450+cxzl25@users.noreply.github.com",
        "time": "Fri Mar 27 21:55:18 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 27 21:55:18 2026 +0800"
      },
      "message": "[AURON #2121] Update artifact name to include Celeborn and Uniffle versions (#2122)\n\n# Which issue does this PR close?\n\nCloses #2121\n\n# Rationale for this change\n\n# What changes are included in this PR?\n\n# Are there any user-facing changes?\n\n# How was this patch tested?"
    },
    {
      "commit": "2b77f1ec6776618891d908403c0edd3ea2091111",
      "tree": "3069bafcf9858bf0d6393c4c5753df5cf532e524",
      "parents": [
        "0eb8e7033dea4ffda22949f838cefda19130e075"
      ],
      "author": {
        "name": "Yuepeng Pan",
        "email": "panyuepeng@apache.org",
        "time": "Thu Mar 26 12:26:23 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 26 12:26:23 2026 +0800"
      },
      "message": "[AURON #2116] Make libInputStream auto-close when creating in Flink/Spark AuronAdaptor. (#2117)\n\n# Which issue does this PR close?\n\nCloses #2116\n\n# Rationale for this change\n\nfix the input stream leaking.\n\n# What changes are included in this PR?\n\nMake libInputStream auto-close when creating in FlinkAuronAdaptor.java\n\n# Are there any user-facing changes?\n\nN.A\n\n# How was this patch tested?\n\nN.A\n\nSigned-off-by: Yuepeng Pan \u003cpanyuepeng@apache.org\u003e"
    },
    {
      "commit": "0eb8e7033dea4ffda22949f838cefda19130e075",
      "tree": "bc655db3126b2b72ad359a3c7f9c2aecad3b164f",
      "parents": [
        "f0953e1e46217d3c2b4f45c457e2d7377176fb0e"
      ],
      "author": {
        "name": "zhangmang",
        "email": "zhangmang1@163.com",
        "time": "Wed Mar 25 15:13:11 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 25 15:13:11 2026 +0800"
      },
      "message": "[AURON #2059] Optimize option names to align with the Flink community (#2113)\n\n# Which issue does this PR close?\n\nCloses #2059 \n\n# Rationale for this change\nOptimize option names to align with the Flink community\n\n# What changes are included in this PR?\n* modify option name\n\n# Are there any user-facing changes?\n* No\n\n# How was this patch tested?\n* no need"
    },
    {
      "commit": "f0953e1e46217d3c2b4f45c457e2d7377176fb0e",
      "tree": "921e3496b63a490288709401f5dd2efa20f6f6cf",
      "parents": [
        "03776da8ec5207b1d404ed2eb6ec45d24c317bb2"
      ],
      "author": {
        "name": "zhangmang",
        "email": "zhangmang1@163.com",
        "time": "Tue Mar 24 14:37:02 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 24 14:37:02 2026 +0800"
      },
      "message": "[AURON #1849] Introduce native json deserializer (#2112)\n\n# Which issue does this PR close?\n\nCloses #1849 \n\n# Rationale for this change\n* auron flink kafka connector support json\n\n# What changes are included in this PR?\n* add json_deserializer to deserialize JSON data from Kafka\n* modify kafka_scan_exec to supports selecting different deserializers\nbased on the data format\n\n# Are there any user-facing changes?\n* No\n\n# How was this patch tested?\n* No kafka environment, test via rust UT for json_deserializer"
    },
    {
      "commit": "03776da8ec5207b1d404ed2eb6ec45d24c317bb2",
      "tree": "bc18d41f058b22b5aca2d043de5cc40dc3bc0bfc",
      "parents": [
        "ffbf4373b61400becedd3ee36d46125f0925f477"
      ],
      "author": {
        "name": "zhangmang",
        "email": "zhangmang1@163.com",
        "time": "Mon Mar 23 19:38:20 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 23 19:38:20 2026 +0800"
      },
      "message": "[AURON #2083] Support kafka partition discovery (#2111)\n\n# Which issue does this PR close?\n\nCloses #2083 \n\n# Rationale for this change\n* Auron Kafka Source supports automatic detection of new Kafka\npartitions\n\n# What changes are included in this PR?\n* modify AuronKafkaDynamicTableFactory and AuronKafkaDynamicTableSource\nto add a partition discovery interval\n* modify AuronKafkaSourceFunction to add partition discovery and write\nto native\n* modify `kafka_scan_exec.rs` to enhance the ability to periodically\nmonitor partition changes\n\n# Are there any user-facing changes?\n* No\n\n# How was this patch tested?\n* No kafka environment"
    },
    {
      "commit": "ffbf4373b61400becedd3ee36d46125f0925f477",
      "tree": "7bff9fa856b3d9aa48de400f28e61d64803c93c0",
      "parents": [
        "c48e26f6354e22a90e851019262dc0019c02bb48"
      ],
      "author": {
        "name": "zhangmang",
        "email": "zhangmang1@163.com",
        "time": "Mon Mar 23 11:19:26 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 23 11:19:26 2026 +0800"
      },
      "message": "[AURON #2093] Introduce kafka mock source (#2106)\n\n# Which issue does this PR close?\n\nCloses #2093 \n\n# Rationale for this change\n* Since we don’t have a Kafka environment, testing is difficult;\ntherefore, we’ve introduced a simulated Kafka source that allows us to\nspecify the data to be sent, thereby achieving our testing objectives.\n\n# What changes are included in this PR?\n* add kafka_mock_scan_exec to send mock data\n* add support for specifying mock data in the Kafka Table Factory\n* add test AuronKafkaSourceITCase and AuronKafkaSourceTestBase\n\n# Are there any user-facing changes?\n* No\n\n# How was this patch tested?\n* test via UT"
    },
    {
      "commit": "c48e26f6354e22a90e851019262dc0019c02bb48",
      "tree": "1429c23ed57af781645f54bbaf62bff020a1bffb",
      "parents": [
        "471a86583cb9f9dcb89c758d7468b791d2043ca9"
      ],
      "author": {
        "name": "Yizhong Zhang",
        "email": "zyzzxycj@gmail.com",
        "time": "Mon Mar 23 10:52:32 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 23 10:52:32 2026 +0800"
      },
      "message": "[AURON #2107] Fix driver spill NPE error (#2108)\n\n# Which issue does this PR close?\n\nCloses #2107\n\n# Rationale for this change\nWhen `SparkOnHeapSpillManager.current` is called on the driver side,\n`TaskContext.get` returns null, causing a NPE.\n\n# What changes are included in this PR?\nAdd a null check when TaskContext is unavailable.\nUpdate the JNI bridge SIG_TYPE to resolve methods against the\nOnHeapSpillManager.\n\n# Are there any user-facing changes?\nNo.\n\n# How was this patch tested?\nUT\n\nCo-authored-by: zhangyizhong \u003czhangyizhong03@kuaishou.com\u003e"
    },
    {
      "commit": "471a86583cb9f9dcb89c758d7468b791d2043ca9",
      "tree": "088f7b712e60b1f5eb7592e603a6691776c261c4",
      "parents": [
        "10cf0fdf2b1d3872b9415112e2d9c105b933d198"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Sat Mar 21 13:50:52 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Mar 21 13:50:52 2026 +0800"
      },
      "message": "Bump actions/upload-artifact from 6 to 7 (#2109)"
    },
    {
      "commit": "10cf0fdf2b1d3872b9415112e2d9c105b933d198",
      "tree": "418da3716111a401001d8ea51968139bd2da8f76",
      "parents": [
        "7da2ba5d10113ede0706db4cc228e1d563e7faf3"
      ],
      "author": {
        "name": "slfan1989",
        "email": "55643692+slfan1989@users.noreply.github.com",
        "time": "Fri Mar 20 10:33:34 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 20 10:33:34 2026 +0800"
      },
      "message": "[AURON #2030] Add Native Scan Support for Apache Hudi Copy-On-Write Tables. (#2031)\n\n### Which issue does this PR close?\n\nCloses #2030\n\n### Rationale for this change\n\nThis PR adds native scan support for Hudi Copy-On-Write (COW) tables,\nenabling Auron to accelerate Hudi table reads by converting\n`FileSourceScanExec` operations to native Parquet/ORC scan\nimplementations.\n\n### What changes are included in this PR?\n\n#### 1. **New Module: `thirdparty/auron-hudi`**\n- **`HudiConvertProvider`**: Implements `AuronConvertProvider` SPI to\nintercept and convert Hudi `FileSourceScanExec` to native scans\n- Detects Hudi file formats (`HoodieParquetFileFormat`,\n`HoodieOrcFileFormat`)\n  - Converts to `NativeParquetScanExec` or `NativeOrcScanExec`\n  - Handles timestamp fallback logic automatically\n\n- **`HudiScanSupport`**: Core detection and validation logic\n  - File format recognition with `NewHoodie*` format rejection\n  - Table type resolution via multi-source metadata fallback:\n    - Options → Catalog → `.hoodie/hoodie.properties`\n  - MOR table detection and rejection\n- Time travel query detection (via `as.of.instant`, `as.of.timestamp`\noptions)\n  - FileIndex class hierarchy verification\n\n#### 2. **Configuration**\n- Added `spark.auron.enable.hudi.scan` config option (default: `true`)\n- Respects existing Parquet/ORC timestamp scanning configurations\n- Runtime Spark version validation (3.0–3.5 only)\n\n#### 3. **Build \u0026 Integration**\n- **Maven**: New profile `hudi-0.15` with enforcer rules\n  - Validates `hudiEnabled\u003dtrue` property\n  - Restricts Spark to 3.0–3.5\n  - Pins Hudi version to 0.15.0\n\n- **Build Script**: Enhanced `auron-build.sh`\n  - Added `--hudi \u003cVERSION\u003e` parameter\n  - Version compatibility validation\n  - Auto-enables `hudiEnabled` property\n\n- **CI/CD**: New workflow `.github/workflows/hudi.yml`\n  - Matrix testing: Spark 3.0–3.5 × JDK 8/17/21 × Scala 2.12\n  - Independent Hudi test pipeline\n\n### Are there any user-facing changes?\n\n## New Configuration Option\n\n```scala\n// Enable Hudi native scan (enabled by default)\nspark.conf.set(\"spark.auron.enable.hudi.scan\", \"true\")\n```\n\n### How was this patch tested?\n\nAdd Junit Test.\n\nSigned-off-by: slfan1989 \u003cslfan1989@apache.org\u003e"
    },
    {
      "commit": "7da2ba5d10113ede0706db4cc228e1d563e7faf3",
      "tree": "ca12537d563901c6ed391135f0b880d7673eccf2",
      "parents": [
        "440cf90db79b9f72fbf63d1a1884705bfbed574d"
      ],
      "author": {
        "name": "zhangmang",
        "email": "zhangmang1@163.com",
        "time": "Thu Mar 19 17:02:29 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 19 17:02:29 2026 +0800"
      },
      "message": "[AURON #2104] Migrate the Native dependency configuration from SparkAuronConfiguration to AuronConfiguration (#2105)\n\n# Which issue does this PR close?\n\nCloses #2104 \n\n# Rationale for this change\nDecoupling the Native Engine and Spark, migrate the configuration items\ncurrently in the Spark module to the auron-core module\n\n# What changes are included in this PR?\nMigrate these configurations from SparkAuronConfiguration to\nAuronConfiguration.\n* TOKIO_WORKER_THREADS_PER_CPU\n* SPARK_TASK_CPUS\n* SUGGESTED_BATCH_MEM_SIZE\n\n# Are there any user-facing changes?\n* No\n\n# How was this patch tested?\n* No need test"
    },
    {
      "commit": "440cf90db79b9f72fbf63d1a1884705bfbed574d",
      "tree": "be22b7b85b2097587e38302b9db9c3b1d6574991",
      "parents": [
        "5998f1cd630acc4881786d7309b362c86e51f549"
      ],
      "author": {
        "name": "zhangmang",
        "email": "zhangmang1@163.com",
        "time": "Thu Mar 19 11:04:13 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 19 11:04:13 2026 +0800"
      },
      "message": "[AURON #2062] Fix AuronKafkaSourceFunction watermark was not generated (#2098)\n\n# Which issue does this PR close?\n\nCloses #2062\n\n# Rationale for this change\n* use `org.apache.flink.table.runtime.generated.WatermarkGenerator` to\ncalculate the watermark\n\n# What changes are included in this PR?\n* add `flink-table-runtime` dependency\n* modify `AuronKafkaSourceFunction` to use\n`org.apache.flink.table.runtime.generated.WatermarkGenerator`\n\n# Are there any user-facing changes?\n* No\n\n# How was this patch tested?\n* JNI and Spark need to be decoupled"
    },
    {
      "commit": "5998f1cd630acc4881786d7309b362c86e51f549",
      "tree": "7dd5a4b1a09a08e2cedc65d92d99ff2ea7b10313",
      "parents": [
        "1820797d3750991aed5e138a6ee43f24cbf2380e"
      ],
      "author": {
        "name": "zhangmang",
        "email": "zhangmang1@163.com",
        "time": "Wed Mar 18 20:16:57 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 18 20:16:57 2026 +0800"
      },
      "message": "[AURON #2102] Initializing JavaClasses in JNI and decoupling Spark (#2103)\n\n# Which issue does this PR close?\n\nCloses #2102 \n\n# Rationale for this change\nMany fields in JavaClasses are tightly coupled with Spark Java code; we\ndecide whether to load the relevant code based on the engine.\n\n# What changes are included in this PR?\n* Introduce getEngineName API for `JniBridge` and `AuronAdaptor`\n* modify jni_bridge add engine type checking when initializing\nJavaClasses\n\n# Are there any user-facing changes?\n* No\n\n# How was this patch tested?\n* No"
    },
    {
      "commit": "1820797d3750991aed5e138a6ee43f24cbf2380e",
      "tree": "2e975dde9f709ac341fa6060f2aa48f0e75d157e",
      "parents": [
        "6ac9e89c4c5fc9ad30fe2f93a55bac82d74bd032"
      ],
      "author": {
        "name": "zhangmang",
        "email": "zhangmang1@163.com",
        "time": "Wed Mar 18 14:09:43 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 18 14:09:43 2026 +0800"
      },
      "message": "[AURON #2099] Fix rust lint test fail (#2100)\n\n# Which issue does this PR close?\n\nCloses #2099 \n\n# Rationale for this change\n* Fix CI pipeline `Rust Lint \u0026 Test` failure issues\n\n# What changes are included in this PR?\n* fix kafka_scan_exec / pb_deserializer / shared_map_array_builder\nSyntax issues\n\n# Are there any user-facing changes?\n* No\n\n# How was this patch tested?\n* no"
    },
    {
      "commit": "6ac9e89c4c5fc9ad30fe2f93a55bac82d74bd032",
      "tree": "e35f16fab7be56ef7f31f7beec3a0ebbdeb2eddd",
      "parents": [
        "cb4bc336d65721377a511aca1b229b1746177100"
      ],
      "author": {
        "name": "yew1eb",
        "email": "yew1eb@gmail.com",
        "time": "Tue Mar 17 14:52:52 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 17 14:52:52 2026 +0800"
      },
      "message": "[AURON #1912] Clean up rust default lints (#2039)\n\n\u003c!--\n- Start the PR title with the related issue ID, e.g. \u0027[AURON #XXXX]\nShort summary...\u0027.\n--\u003e\n# Which issue does this PR close?\n\nCloses #1912 \n\n# Rationale for this change\n\n# What changes are included in this PR?\n\n# Are there any user-facing changes?\n\n# How was this patch tested?"
    },
    {
      "commit": "cb4bc336d65721377a511aca1b229b1746177100",
      "tree": "337fa4ff230178f910787f8336790b54580dc717",
      "parents": [
        "d1ac7fe7a478e989adbf7425db3874ea627fef60"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Tue Mar 17 14:16:07 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 17 14:16:07 2026 +0800"
      },
      "message": "Bump org.apache.kafka:kafka-clients from 3.4.0 to 3.9.1 in /auron-flink-extension/auron-flink-runtime (#2092)"
    },
    {
      "commit": "d1ac7fe7a478e989adbf7425db3874ea627fef60",
      "tree": "009e4177ebcf82c05b2430a8d9d1b574f8ec414f",
      "parents": [
        "b45135652bae7fd114a1832255a2bd7e16f3f4ba"
      ],
      "author": {
        "name": "xTong",
        "email": "lwqcode@foxmail.com",
        "time": "Tue Mar 17 10:24:56 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 17 10:24:56 2026 +0800"
      },
      "message": "[AURON #1850] Add ArrowFieldWriters for temporal and composite types (#2086)\n\n# Which issue does this PR close?\n\nPartially addresses #1850 (Part 2b of the Flink RowData to Arrow\nconversion).\n\n# Rationale for this change\n\nPart 2a (#2079) implemented `ArrowFieldWriter` base class, 12 basic type\nwriters, and `FlinkArrowWriter` orchestrator. This PR completes the\nremaining 5 writer types (Time, Timestamp, Array, Map, Row), enabling\nfull coverage of all Flink logical types supported by the Arrow type\nmapping introduced in Part 1 (#1959).\n\nThe implementation follows Flink\u0027s official `flink-python` Arrow module\nas established in Part 2a, with the same `forRow()`/`forArray()`\ndual-mode factory pattern and template method design.\n\n# What changes are included in this PR?\n\n## Commit 1: 5 ArrowFieldWriters + unit tests (10 files, +1509 lines)\n\n- **`TimeWriter`** — Handles all 4 Arrow time precisions\n(`TimeSecVector`, `TimeMilliVector`, `TimeMicroVector`,\n`TimeNanoVector`) via instanceof dispatch. Flink stores TIME as int\n(milliseconds), converted to each precision with `L`-suffixed literals\nto avoid int overflow.\n- **`TimestampWriter`** — Handles all 4 Arrow timestamp precisions.\nCombines `TimestampData.getMillisecond()` (long) and\n`getNanoOfMillisecond()` (int) for sub-millisecond precision.\nConstructor validates `timezone \u003d\u003d null` via `Preconditions.checkState`,\nmatching Flink official — timezone is not handled at the writer layer.\n- **`ArrayWriter`** — Delegates to an `elementWriter`\n(`ArrowFieldWriter\u003cArrayData\u003e`) for each array element. Overrides\n`finish()`/`reset()` to propagate to the element writer.\n- **`MapWriter`** — Arrow maps are `List\u003cStruct{key, value}\u003e`. Holds\nseparate key and value writers operating on `ArrayData`. Sets\n`structVector.setIndexDefined()` for each entry. Overrides\n`finish()`/`reset()` to propagate to key/value writers.\n- **`RowWriter`** — Nested struct handling with\n`ArrowFieldWriter\u003cRowData\u003e[]` for child fields. Caches a `nullRow`\n(`GenericRowData`) in the constructor for null struct handling (avoids\nper-call allocation). Uses a single child-write loop for both null and\nnon-null paths, matching Flink official.\n- **Unit tests**: `TimeWriterTest` (8), `TimestampWriterTest` (9),\n`ArrayWriterTest` (5), `MapWriterTest` (3), `RowWriterTest` (3) — 28\ntests covering all precisions, null handling, reset/multi-batch, edge\ncases (pre-epoch timestamps, empty arrays/maps).\n\n## Commit 2: Factory method extension + integration test (2 files, +158\nlines)\n\n- **`FlinkArrowUtils`** — Extended `createArrowFieldWriterForRow()` and\n`createArrowFieldWriterForArray()` with branches for `TimeWriter`,\n`TimestampWriter`, `ArrayWriter`, `MapWriter`, `RowWriter`. MapVector\ncheck is placed before ListVector (since `MapVector extends\nListVector`). Timestamp branch extracts precision from both\n`TimestampType` and `LocalZonedTimestampType`.\n- **`FlinkArrowWriterTest`** — Added `testWriteTemporalAndComplexTypes`\nintegration test covering TIME(6), TIMESTAMP(6), TIMESTAMP_LTZ(3),\nARRAY\\\u003cINT\\\u003e, MAP\\\u003cVARCHAR, INT\\\u003e, ROW\\\u003cnested_id INT\\\u003e. Updated\n`testUnsupportedTypeThrows` to use `MultisetType` (since `ArrayType` is\nnow supported).\n\n# Scope\n\nThis PR completes all Flink-to-Arrow writer types. The remaining work\nfor #1850 is the reverse direction (Arrow-to-Flink reader), which is\ntracked separately.\n\n# Are there any user-facing changes?\n\nNo. 
Internal API for Flink integration.\n\n# How was this patch tested?\n\n36 tests across 6 test classes (28 new + 8 existing):\n\n```bash\n./build/mvn test -Pflink-1.18 -Pspark-3.5 -Pscala-2.12 \\\n  -pl auron-flink-extension/auron-flink-runtime -am -DskipBuildNative\n```\n\nResult: 36 pass, 0 failures."
    },
    {
      "commit": "b45135652bae7fd114a1832255a2bd7e16f3f4ba",
      "tree": "03dc0956fa9c4a978327563bbb81bd604d40488e",
      "parents": [
        "072bcb7f7eb063378d4b10822aa52d3244194ebb"
      ],
      "author": {
        "name": "bkhan",
        "email": "xorsum@outlook.com",
        "time": "Mon Mar 16 22:22:02 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 16 22:22:02 2026 +0800"
      },
      "message": "[AURON #2094] chore: add Iceberg version to AuronBuildInfo (#2095)\n\n# Which issue does this PR close?\n\nCloses #2094\n\n# Rationale for this change\n\n# What changes are included in this PR?\n\n# Are there any user-facing changes?\n\n# How was this patch tested?"
    },
    {
      "commit": "072bcb7f7eb063378d4b10822aa52d3244194ebb",
      "tree": "c3873bc098a26a8aa0ed73b49745efdefad4e8e4",
      "parents": [
        "aaaea52e4ac97ce7b04acdb237f7b6262a29995f"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Mon Mar 16 21:17:28 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 16 21:17:28 2026 +0800"
      },
      "message": "Bump lz4_flex from 0.12.0 to 0.13.0 (#2091)"
    },
    {
      "commit": "aaaea52e4ac97ce7b04acdb237f7b6262a29995f",
      "tree": "e997e4f6602f5f4c559fec76c748cfc06188d49d",
      "parents": [
        "c0c4a0bc5e1c0dc31890b0dc093605b05a9f6749"
      ],
      "author": {
        "name": "zhangmang",
        "email": "zhangmang1@163.com",
        "time": "Mon Mar 16 18:55:20 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 16 18:55:20 2026 +0800"
      },
      "message": "[AURON #2062] AuronKafkaSourceFunction support generating watermarks (#2089)\n\n# Which issue does this PR close?\n\nCloses #2062 \n\n# Rationale for this change\n* auron kafka source support flink watermark\n\n# What changes are included in this PR?\n* modify AuronKafkaSourceFunction, Assign tasks to their corresponding\nKafka partitions and generate watermark\n* modify kafka_scan_exec.rs, Consume the list of kafka partitions passed\nfrom the Java side\n* add KafkaTopicPartitionAssigner , copy from flink\n* add SourceContextWatermarkOutputAdapter , copy from flink\n\n# Are there any user-facing changes?\n* No\n\n# How was this patch tested?\n* There is currently no Kafka environment integration, so automated\ntesting is not possible."
    },
    {
      "commit": "c0c4a0bc5e1c0dc31890b0dc093605b05a9f6749",
      "tree": "5ea685bab0b6d1afbcd0e075a1380b98225bb511",
      "parents": [
        "5998623334e4aefe0887fa6862282e10367e574c"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Fri Mar 13 17:20:16 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 13 17:20:16 2026 +0800"
      },
      "message": "Bump once_cell from 1.21.3 to 1.21.4 (#2088)"
    },
    {
      "commit": "5998623334e4aefe0887fa6862282e10367e574c",
      "tree": "97ee58f79928b82cde59e8a2558a585a44d528ba",
      "parents": [
        "ebf14908d036d9631a76c9dc078ea7259b98f55f"
      ],
      "author": {
        "name": "Calvin Kirs",
        "email": "kirs@apache.org",
        "time": "Thu Mar 12 12:00:17 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 12 12:00:17 2026 +0800"
      },
      "message": "Update copyright year in NOTICE file (#2087)"
    },
    {
      "commit": "ebf14908d036d9631a76c9dc078ea7259b98f55f",
      "tree": "36e878521462fa163c40e509556d2e79e672e9f9",
      "parents": [
        "d8677b6c81ccf7ea993aaf0b3288c3a8edc69574"
      ],
      "author": {
        "name": "zhangmang",
        "email": "zhangmang1@163.com",
        "time": "Wed Mar 11 16:50:12 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 11 16:50:12 2026 +0800"
      },
      "message": "[AURON #2061] AuronKafkaSourceFunction support flink checkpoint (#2084)\n\n# Which issue does this PR close?\n\nCloses #2061 \n\n# Rationale for this change\n* Auron kafka source support flink checkpoint\n\n# What changes are included in this PR?\n* modify AuronKafkaSourceFunction add flink checkpoint interface\n* modify kafka_scan_exec.rs add offset restore and commit mechanism\n* copy from KafkaTopicPartition, flink state compatibility requirements\n\n# Are there any user-facing changes?\n* No\n\n# How was this patch tested?\n* There is currently no Kafka environment integration, so automated\ntesting is not possible."
    },
    {
      "commit": "d8677b6c81ccf7ea993aaf0b3288c3a8edc69574",
      "tree": "9a6b3db011d0c66bca32aabba0cc68e3a93992ec",
      "parents": [
        "f899c8456387c012881e78744eb11122c4922a50"
      ],
      "author": {
        "name": "cxzl25",
        "email": "3898450+cxzl25@users.noreply.github.com",
        "time": "Wed Mar 11 14:30:36 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 11 14:30:36 2026 +0800"
      },
      "message": "[AURON #1981] Fix ORC range start index out of range for slice of length (#1986)\n\n# Which issue does this PR close?\n\nCloses #1981\n\n# Rationale for this change\n\n# What changes are included in this PR?\n\n# Are there any user-facing changes?\n\n# How was this patch tested?\nlocal test"
    },
    {
      "commit": "f899c8456387c012881e78744eb11122c4922a50",
      "tree": "bba047332544db7049106dc9e9d6345cadbca1be",
      "parents": [
        "64de43f3ba3818aa387ec422d5580165e3633a8f"
      ],
      "author": {
        "name": "zhangmang",
        "email": "zhangmang1@163.com",
        "time": "Tue Mar 10 19:29:23 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 10 19:29:23 2026 +0800"
      },
      "message": "[AURON #2060] Introduce Auron Flink Kafka TableSource (#2082)\n\n# Which issue does this PR close?\n\nCloses #2060 \n\n# Rationale for this change\n* Implementing the integration of Flink Kafka Source and Auron Native\nengine\n\n# What changes are included in this PR?\n* add AuronKafkaSourceFunction\n* add AuronColumnarRowData\n* add SchemaConverters\n* modify FlinkArrowReader\n* modify FlinkArrowUtils\n* FlinkAuronConfiguration\n* AuronKafkaDynamicTableFactory\n* AuronKafkaDynamicTableSource\n\n# Are there any user-facing changes?\n* No\n\n# How was this patch tested?\n* There is currently no Kafka environment integration, so automated\ntesting is not possible."
    },
    {
      "commit": "64de43f3ba3818aa387ec422d5580165e3633a8f",
      "tree": "10e221993e4ab441dc29489a937bb68e68e823a6",
      "parents": [
        "8dc034a09338ffde319e6e3493508b0baf1a2e5c"
      ],
      "author": {
        "name": "xTong",
        "email": "lwqcode@foxmail.com",
        "time": "Tue Mar 10 17:21:46 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 10 17:21:46 2026 +0800"
      },
      "message": "[AURON #1850] Add ArrowFieldWriter and FlinkArrowWriter for basic types (#2079)\n\n# Which issue does this PR close?\n\nPartially addresses #1850 (Part 2a of the Flink RowData to Arrow\nconversion).\n\n# Rationale for this change\n\nPer AIP-1, the Flink integration data path requires converting Flink\n`RowData` into Arrow `VectorSchemaRoot` for export to the native engine\n(DataFusion/Rust). This PR implements the writer layer for basic types,\nfollowing Flink\u0027s official `flink-python` Arrow implementation as\nrequested during Part 1 review (#1959).\n\n# What changes are included in this PR?\n\n## Commit 1: ArrowFieldWriter base class + 12 type writers (16 files,\n+2181 lines)\n\n- **`ArrowFieldWriter\u003cIN\u003e`** — Generic abstract base class using\ntemplate method pattern (`write()` → `doWrite()` + count++), aligned\nwith Flink\u0027s `flink-python` `ArrowFieldWriter`.\n- **12 concrete writers** in `writers/` sub-package, each with\n`forRow()`/`forArray()` dual-mode factory methods:\n- Numeric: `IntWriter`, `TinyIntWriter`, `SmallIntWriter`,\n`BigIntWriter`, `FloatWriter`, `DoubleWriter`\n- Non-numeric: `BooleanWriter`, `VarCharWriter`, `VarBinaryWriter`,\n`DecimalWriter`, `DateWriter`, `NullWriter`\n- **Key design**: Each writer (except `NullWriter`) has two `public\nstatic final` inner classes (`XxxWriterForRow` / `XxxWriterForArray`)\nbecause Flink\u0027s `RowData` and `ArrayData` have no common getter\ninterface.\n- **Special cases**:\n- `NullWriter`: No inner classes needed, `doWrite()` is empty\n(NullVector values are inherently null)\n- `DecimalWriter`: Takes precision/scale parameters, includes\n`fitBigDecimal()` validation before writing (aligned with Flink\u0027s\n`fromBigDecimal` logic)\n- **Unit tests**: `IntWriterTest` (5), `BasicWritersTest` (20),\n`NonNumericWritersTest` (12) — 37 tests\n\n## Commit 2: FlinkArrowWriter orchestrator + factory methods (3 files,\n+482 lines)\n\n- **`FlinkArrowWriter`** — Orchestrates per-column\n`ArrowFieldWriter\u003cRowData\u003e[]` to write Flink `RowData` into Arrow\n`VectorSchemaRoot`. Lifecycle: `create()` → `write(row)*` → `finish()` →\n`reset()`.\n- **Factory methods in `FlinkArrowUtils`** —\n`createArrowFieldWriterForRow()`/`createArrowFieldWriterForArray()`\ndispatch writer creation based on Arrow vector type (instanceof chain).\nBoth are package-private.\n- **Integration tests**: `FlinkArrowWriterTest` (7) — all-types write,\nnull handling, multi-row batches, reset, empty batch, zero columns,\nunsupported type. Total: **53 tests, all passing**.\n\n# Scope\n\nThis PR covers basic types only. Time, Timestamp, and complex types\n(Array/Map/Row) will be in Part 2b.\n\n# Are there any user-facing changes?\n\nNo. Internal API for Flink integration.\n\n# How was this patch tested?\n\n53 tests across 4 test classes:\n\n```bash\n./build/mvn test -Pflink-1.18 -Pspark-3.5 -Pscala-2.12 \\\n  -pl auron-flink-extension/auron-flink-runtime -am -DskipBuildNative\n```\n\nResult: 53 pass, 0 failures."
    },
    {
      "commit": "8dc034a09338ffde319e6e3493508b0baf1a2e5c",
      "tree": "454b5dc67a9c422f3856c7ae8a616670fcf42270",
      "parents": [
        "4fef6262f416bef6e311151f7e2d69fce7364e2b"
      ],
      "author": {
        "name": "Weiqing Yang",
        "email": "yangweiqing001@gmail.com",
        "time": "Mon Mar 09 20:58:19 2026 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 10 11:58:19 2026 +0800"
      },
      "message": "[AURON #1851] Introduce Arrow to Flink RowData reader (#2063)\n\n# Which issue does this PR close?\n\n Closes #1851\n\n# Rationale for this change\nPer AIP-1, the Flink integration data path requires converting Arrow\nvectors returned by the native engine (DataFusion/Rust) back into Flink\nRowData so downstream Flink operators can process results.\n\n# What changes are included in this PR?\n- FlinkArrowReader orchestrator — zero-copy columnar access via\nColumnarRowData + VectorizedColumnBatch\n  - 16 ArrowXxxColumnVector wrappers for all 17 supported types\n  - Decimal fromUnscaledLong optimization for precision ≤ 18\n  - Batch reset support for streaming pipelines\n  - 21 unit tests in FlinkArrowReaderTest\n\n# Are there any user-facing changes?\n No. Internal API for Flink integration.\n\n# How was this patch tested?\n21 tests: ./build/mvn test -pl auron-flink-extension/auron-flink-runtime\n-am -Pscala-2.12 -Pflink-1.18 -Pspark-3.5 -DskipBuildNative\n-Dtest\u003dFlinkArrowReaderTest\nResult: 21 pass, 0 failures."
    },
    {
      "commit": "4fef6262f416bef6e311151f7e2d69fce7364e2b",
      "tree": "332503961fc6e96a59e73caa7163fcf2266686fe",
      "parents": [
        "fc06ee63dbcc2fbfad174b3185dd138225f3dda5"
      ],
      "author": {
        "name": "zhangmang",
        "email": "zhangmang1@163.com",
        "time": "Mon Mar 09 19:02:45 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 09 19:02:45 2026 +0800"
      },
      "message": "[AURON #2059] Introduce Auron Native Kafka DynamicTableFactory (#2078)\n\n# Which issue does this PR close?\n\nCloses #2059\n\n# Rationale for this change\n* Introduce Auron Kafka DynamicTableFactory\n\n# What changes are included in this PR?\n* add AuronKafkaDynamicTableFactory\n* add AuronKafkaDynamicTableSource\n* KafkaConstants\n\n# Are there any user-facing changes?\n* No\n\n# How was this patch tested?\n* test vim UT: AuronKafkaDynamicTableFactoryTest"
    },
    {
      "commit": "fc06ee63dbcc2fbfad174b3185dd138225f3dda5",
      "tree": "8ac9e88fbd794a983b03cbd86e0adc2a8241b8bc",
      "parents": [
        "7da41e132f0144900b013e03f95f63c3609a84cd"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Mon Mar 09 18:43:29 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 09 18:43:29 2026 +0800"
      },
      "message": "Bump actions/upload-artifact from 6 to 7 (#2075)"
    },
    {
      "commit": "7da41e132f0144900b013e03f95f63c3609a84cd",
      "tree": "c2136880bcf00b4d91bdc2d6297fc011125b3365",
      "parents": [
        "046a808f15ca7e0a3b5949fa6f4e85f640f8d254"
      ],
      "author": {
        "name": "Bryton Lee",
        "email": "brytonlee01@gmail.com",
        "time": "Mon Mar 09 15:07:06 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 09 15:07:06 2026 +0800"
      },
      "message": "[AURON #1957] Fix panic/coredumps/memory leaks and Java thread InterruptedException errors. (#1980)\n\n# Which issue does this PR close?\n\nCloses #1957 \n\n# Rationale for this change\n\nOptimized rt.rs to prevent SendError panic to gracefully finalize a\nnative plan execution.\n\ncoredumps and memory leaks are all introdced a memory leak issue inside\nffi_reader_exec.rs, when a stream is droped, the Java side ffi import\nhas a race condition to write memory that was drop in rust side.\n\nOptimized ArrowFFIExporter.scala close() logic to gracefully close\noutputThread without sending errors.\n\n# What changes are included in this PR?\nChanged rt.rs, ffi_reader_exec.rs and ArrowFFIExporter.scala.\n\n# Are there any user-facing changes?\nNo\n\n# How was this patch tested?\nIt was tested on the latest master branch and  some earlier versions."
    },
    {
      "commit": "046a808f15ca7e0a3b5949fa6f4e85f640f8d254",
      "tree": "d88502088888ba60f3eaa0bdce490309e8fe3b31",
      "parents": [
        "34a85904d74938499abf6702afd9ed1ae4e1a908"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Mon Mar 09 15:04:42 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 09 15:04:42 2026 +0800"
      },
      "message": "Bump tokio from 1.49.0 to 1.50.0 (#2070)\n\nBumps [tokio](https://github.com/tokio-rs/tokio) from 1.49.0 to 1.50.0.\n\u003cdetails\u003e\n\u003csummary\u003eRelease notes\u003c/summary\u003e\n\u003cp\u003e\u003cem\u003eSourced from \u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/releases\"\u003etokio\u0027s\nreleases\u003c/a\u003e.\u003c/em\u003e\u003c/p\u003e\n\u003cblockquote\u003e\n\u003ch2\u003eTokio v1.50.0\u003c/h2\u003e\n\u003ch1\u003e1.50.0 (Mar 3rd, 2026)\u003c/h1\u003e\n\u003ch3\u003eAdded\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003enet: add \u003ccode\u003eTcpStream::set_zero_linger\u003c/code\u003e (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7837\"\u003e#7837\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003ert: add \u003ccode\u003eis_rt_shutdown_err\u003c/code\u003e (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7771\"\u003e#7771\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3\u003eChanged\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eio: add optimizer hint that \u003ccode\u003ememchr\u003c/code\u003e returns in-bounds\npointer (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7792\"\u003e#7792\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eio: implement vectored writes for \u003ccode\u003ewrite_buf\u003c/code\u003e (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7871\"\u003e#7871\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eruntime: panic when \u003ccode\u003eevent_interval\u003c/code\u003e is set to 0 (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7838\"\u003e#7838\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eruntime: shorten default thread name to fit in Linux limit (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7880\"\u003e#7880\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003esignal: remember the result of \u003ccode\u003eSetConsoleCtrlHandler\u003c/code\u003e\n(\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7833\"\u003e#7833\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003esignal: specialize windows \u003ccode\u003eRegistry\u003c/code\u003e (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7885\"\u003e#7885\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3\u003eFixed\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eio: always cleanup \u003ccode\u003eAsyncFd\u003c/code\u003e registration list on\nderegister (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7773\"\u003e#7773\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003emacros: remove (most) local \u003ccode\u003euse\u003c/code\u003e declarations in\n\u003ccode\u003etokio::select!\u003c/code\u003e (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7929\"\u003e#7929\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003enet: fix \u003ccode\u003eGET_BUF_SIZE\u003c/code\u003e constant for \u003ccode\u003etarget_os \u003d\n\u0026quot;android\u0026quot;\u003c/code\u003e (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7889\"\u003e#7889\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eruntime: avoid redundant unpark in current_thread scheduler (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7834\"\u003e#7834\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eruntime: don\u0027t park in \u003ccode\u003ecurrent_thread\u003c/code\u003e if\n\u003ccode\u003ebefore_park\u003c/code\u003e defers waker 
(\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7835\"\u003e#7835\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eio: fix write readiness on ESP32 on short writes (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7872\"\u003e#7872\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eruntime: wake deferred tasks before entering\n\u003ccode\u003eblock_in_place\u003c/code\u003e (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7879\"\u003e#7879\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003esync: drop rx waker when oneshot receiver is dropped (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7886\"\u003e#7886\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eruntime: fix double increment of \u003ccode\u003enum_idle_threads\u003c/code\u003e on\nshutdown (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7910\"\u003e#7910\u003c/a\u003e,\n\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7918\"\u003e#7918\u003c/a\u003e,\n\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7922\"\u003e#7922\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3\u003eUnstable\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003efs: check for io-uring opcode support (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7815\"\u003e#7815\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eruntime: avoid lock acquisition after uring init (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7850\"\u003e#7850\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3\u003eDocumented\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003edocs: update outdated unstable features section (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7839\"\u003e#7839\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eio: clarify the behavior of \u003ccode\u003eAsyncWriteExt::shutdown()\u003c/code\u003e\n(\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7908\"\u003e#7908\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eio: explain how to flush stdout/stderr (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7904\"\u003e#7904\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eio: fix incorrect and confusing \u003ccode\u003eAsyncWrite\u003c/code\u003e\ndocumentation (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7875\"\u003e#7875\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003ert: clarify the documentation of \u003ccode\u003eRuntime::spawn\u003c/code\u003e (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7803\"\u003e#7803\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003ert: fix missing quotation in docs (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7925\"\u003e#7925\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eruntime: correct the default thread name in docs (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7896\"\u003e#7896\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eruntime: fix \u003ccode\u003eevent_interval\u003c/code\u003e doc (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7932\"\u003e#7932\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003esync: clarify RwLock fairness documentation (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7919\"\u003e#7919\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003esync: clarify that \u003ccode\u003erecv\u003c/code\u003e returns \u003ccode\u003eNone\u003c/code\u003e once\nclosed and 
no more messages (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7920\"\u003e#7920\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003etask: clarify when to use \u003ccode\u003espawn_blocking\u003c/code\u003e vs dedicated\nthreads (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7923\"\u003e#7923\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003etask: doc that task drops before \u003ccode\u003eJoinHandle\u003c/code\u003e completion\n(\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7825\"\u003e#7825\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003esignal: guarantee that listeners never return \u003ccode\u003eNone\u003c/code\u003e (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7869\"\u003e#7869\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003etask: fix task module feature flags in docs (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7891\"\u003e#7891\u003c/a\u003e)\u003c/li\u003e\n\u003c/ul\u003e\n\u003c!-- raw HTML omitted --\u003e\n\u003c/blockquote\u003e\n\u003cp\u003e... (truncated)\u003c/p\u003e\n\u003c/details\u003e\n\u003cdetails\u003e\n\u003csummary\u003eCommits\u003c/summary\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/0273e45ead199dac7725faee1e3dc35a9c8753ab\"\u003e\u003ccode\u003e0273e45\u003c/code\u003e\u003c/a\u003e\nchore: prepare Tokio v1.50.0 (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7934\"\u003e#7934\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/e3ee4e58dc9bb7accf26dfd51b0a2146922b5269\"\u003e\u003ccode\u003ee3ee4e5\u003c/code\u003e\u003c/a\u003e\nchore: prepare tokio-macros v2.6.1 (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7943\"\u003e#7943\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/8c980ea75a0f8dd2799403777db700c2e8f4cda4\"\u003e\u003ccode\u003e8c980ea\u003c/code\u003e\u003c/a\u003e\nio: add \u003ccode\u003ewrite_all_vectored\u003c/code\u003e to \u003ccode\u003etokio-util\u003c/code\u003e (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7768\"\u003e#7768\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/e35fd6d6b7d9a8ba37ee621835ef91372c2565cb\"\u003e\u003ccode\u003ee35fd6d\u003c/code\u003e\u003c/a\u003e\nci: fix patch during clippy step (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7935\"\u003e#7935\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/03fe44c10302fdb55c29dbe5b08d4f8769c80272\"\u003e\u003ccode\u003e03fe44c\u003c/code\u003e\u003c/a\u003e\nruntime: fix \u003ccode\u003eevent_interval\u003c/code\u003e doc (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7932\"\u003e#7932\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/d18e5dfbb0cdc28725bebb28cde80a6c11ee32bc\"\u003e\u003ccode\u003ed18e5df\u003c/code\u003e\u003c/a\u003e\nio: fix race in \u003ccode\u003eMock::poll_write\u003c/code\u003e (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7882\"\u003e#7882\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/f21f2693f02aec9a876ac2bd21566c85e15b682e\"\u003e\u003ccode\u003ef21f269\u003c/code\u003e\u003c/a\u003e\nruntime: fix 
race condition during the blocking pool shutdown (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7922\"\u003e#7922\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/d81e8f0acbdd7d866bce4f733b3545fd834c7840\"\u003e\u003ccode\u003ed81e8f0\u003c/code\u003e\u003c/a\u003e\nmacros: remove (most) local \u003ccode\u003euse\u003c/code\u003e declarations in\n\u003ccode\u003etokio::select!\u003c/code\u003e (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7929\"\u003e#7929\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/25e7f2641ef2555d688c267059431a2802805f1d\"\u003e\u003ccode\u003e25e7f26\u003c/code\u003e\u003c/a\u003e\nrt: fix missing quotation in docs (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7925\"\u003e#7925\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003e\u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/commit/e1a91ef114a301b542d810abab9956f2868861b9\"\u003e\u003ccode\u003ee1a91ef\u003c/code\u003e\u003c/a\u003e\nutil: fix typo in docs (\u003ca\nhref\u003d\"https://redirect.github.com/tokio-rs/tokio/issues/7926\"\u003e#7926\u003c/a\u003e)\u003c/li\u003e\n\u003cli\u003eAdditional commits viewable in \u003ca\nhref\u003d\"https://github.com/tokio-rs/tokio/compare/tokio-1.49.0...tokio-1.50.0\"\u003ecompare\nview\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/details\u003e\n\u003cbr /\u003e\n\n\n[![Dependabot compatibility\nscore](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name\u003dtokio\u0026package-manager\u003dcargo\u0026previous-version\u003d1.49.0\u0026new-version\u003d1.50.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)\n\nDependabot will resolve any conflicts with this PR as long as you don\u0027t\nalter it yourself. You can also trigger a rebase manually by commenting\n`@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n\u003cdetails\u003e\n\u003csummary\u003eDependabot commands and options\u003c/summary\u003e\n\u003cbr /\u003e\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits\nthat have been made to it\n- `@dependabot show \u003cdependency name\u003e ignore conditions` will show all\nof the ignore conditions of the specified dependency\n- `@dependabot ignore this major version` will close this PR and stop\nDependabot creating any more for this major version (unless you reopen\nthe PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop\nDependabot creating any more for this minor version (unless you reopen\nthe PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop\nDependabot creating any more for this dependency (unless you reopen the\nPR or upgrade to it yourself)\n\n\n\u003c/details\u003e\n\nSigned-off-by: dependabot[bot] \u003csupport@github.com\u003e\nCo-authored-by: dependabot[bot] \u003c49699333+dependabot[bot]@users.noreply.github.com\u003e"
    },
    {
      "commit": "34a85904d74938499abf6702afd9ed1ae4e1a908",
      "tree": "379c2e4df011def81918a35bfadc5d54d3166b30",
      "parents": [
        "b364521c2163054ad7b9e73970cb6c69be07cb2f"
      ],
      "author": {
        "name": "slfan1989",
        "email": "55643692+slfan1989@users.noreply.github.com",
        "time": "Mon Mar 09 10:40:29 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 09 10:40:29 2026 +0800"
      },
      "message": "[AURON #2015] Add Native Scan Support for Apache Iceberg Copy-On-Write Tables. (#2016)\n\n\u003c!--\n- Start the PR title with the related issue ID, e.g. \u0027[AURON #XXXX]\nShort summary...\u0027.\n--\u003e\n# Which issue does this PR close?\n\nCloses #2015\n\n### Rationale for this change\n\nThis PR adds native scan support for Apache Iceberg Copy-On-Write (COW)\ntables to improve query performance. Currently, Auron lacks direct\nintegration with Iceberg, forcing all Iceberg queries to use Spark\u0027s\nnative execution path, missing opportunities for native engine\nacceleration.\n\n#### Key Motivations:\n\n- Enable Auron\u0027s native execution engine to read Iceberg tables directly\n- Leverage native performance optimizations for Iceberg COW tables\n- Provide automatic fallback to Spark scan for unsupported scenarios\n- Lay the foundation for future Iceberg feature enhancements (MOR\ntables, pruning predicates, etc.)\n\n### What changes are included in this PR?\n\n#### Core Implementation:\n\n- **IcebergConvertProvider** - SPI extension point that detects Iceberg\nscans and decides whether to use native execution\n- **IcebergScanSupport** - Decision logic that validates scan plans and\nchecks for COW table eligibility\n- **NativeIcebergTableScanExec** - Native execution node that converts\nIceberg FileScanTask to native scan plans\n\n#### Build \u0026 Configuration:\n- Updated `pom.xml` with Iceberg version management and Maven enforcer\nrules\n- Modified `auron-build.sh` to support Iceberg build parameters\n- Added configuration option: `spark.auron.enable.iceberg.scan`\n(default: true)\n\n#### Supported Features:\n- Iceberg COW tables (Parquet and ORC formats)\n- Projection pushdown (column pruning)\n- Partitioned and non-partitioned tables\n- Automatic fallback for unsupported scenarios\n\n#### Version Support:\n- Spark: 3.4, 3.5, 4.0 only\n- Iceberg: 1.10.1 only (enforced by Maven)\n\n### Are there any user-facing changes?\n\n**No Breaking Changes**: Existing functionality remains unchanged.\nIceberg support is additive and disabled by default in unsupported\nscenarios.\n\n### How was this patch tested?\n\n#### Unit \u0026 Integration Tests:\n\n- Added 10 integration test cases in AuronIcebergIntegrationSuite:\n  - Simple COW table scan\n  - Projection pushdown\n  - Partitioned table with partition filter\n  - Orc format support\n  - Empty table handling\n  - Residual filters fallback\n  - Metadata columns fallback\n  - Decimal type fallback\n  - Delete files (MOR) fallback\n  - Configuration toggle functionality\n\n#### Test Environment:\n- Spark versions: 3.4.4, 3.5.8, 4.0.2\n- Iceberg version: 1.10.1\n- File formats: Parquet, ORC\n- Scala versions: 2.12, 2.13\n\nSigned-off-by: slfan1989 \u003cslfan1989@apache.org\u003e"
    },
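The Iceberg entry above introduces the `spark.auron.enable.iceberg.scan` toggle. A minimal sketch of how such a session-level flag could be set, assuming a plain local SparkSession; only the config key and its stated default come from the PR text, everything else is illustrative:

```scala
// Hypothetical usage sketch: flipping the native Iceberg scan toggle named
// in the PR above. The SparkSession setup is generic Spark API; only the
// key "spark.auron.enable.iceberg.scan" comes from the entry itself.
import org.apache.spark.sql.SparkSession

object IcebergScanToggle {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("auron-iceberg-scan-demo")
      .master("local[*]")
      // Default is true per the PR text; set to "false" to force Spark's scan.
      .config("spark.auron.enable.iceberg.scan", "true")
      .getOrCreate()

    // Queries against eligible Iceberg COW tables would take Auron's native
    // scan path, with automatic fallback for unsupported scenarios.
    spark.stop()
  }
}
```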
    {
      "commit": "b364521c2163054ad7b9e73970cb6c69be07cb2f",
      "tree": "1e5141142fbbd748f5d4a34b711eb94c96c4716b",
      "parents": [
        "02c0a2847667b0125fdbfb7afcff9461eb77c0a8"
      ],
      "author": {
        "name": "slfan1989",
        "email": "55643692+slfan1989@users.noreply.github.com",
        "time": "Mon Mar 09 10:37:47 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 09 10:37:47 2026 +0800"
      },
      "message": "[AURON #2020] [BUILD] Add `--threads` option to control Maven build parallelism. (#2021)\n\n### Which issue does this PR close?\n\nCloses #2020\n\n### Rationale for this change\n\nCurrently, local builds are always single-threaded, which can be slow on\nmulti-core machines. Docker builds hardcode `-T8`, which cannot be\noverridden by users. This change adds a `--threads` option to\n`auron-build.sh` to give users control over Maven build parallelism.\n\n\n### What changes are included in this PR?\n\n1. **auron-build.sh**\n   - Added `--threads` parameter parsing and validation\n- Unified thread configuration logic: user-specified value takes\nprecedence, Docker defaults to 8 threads, local defaults to\nsingle-threaded\n   - Removed hardcoded `-T8` from Docker build section\n   - Updated help text to document the new option\n\n2. **CONTRIBUTING.md**\n   - Documented `--threads` option under Build Options section\n   - Described default behavior for local vs Docker builds\n\n### Are there any user-facing changes?\n\nYes. Users can now specify `--threads` to control Maven build\nparallelism\n\nDefault behavior remains unchanged (backward compatible):\n- Local builds: single-threaded\n- Docker builds: 8 threads\n\n### How was this patch tested?\n\n- Verified `./auron-build.sh --help` displays correct usage information\n\nSigned-off-by: slfan1989 \u003cslfan1989@apache.org\u003e"
    },
    {
      "commit": "02c0a2847667b0125fdbfb7afcff9461eb77c0a8",
      "tree": "703a16ef6ee4360fd12d6099179d548aa52fba76",
      "parents": [
        "0aa013a12c99b1acabf6ac6180f246798b25b9c9"
      ],
      "author": {
        "name": "bkhan",
        "email": "xorsum@outlook.com",
        "time": "Fri Mar 06 14:58:27 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 06 14:58:27 2026 +0800"
      },
      "message": "[AURON #2049] skipping the plan stability validation for tpcds q54 test in spark3.5 (#2071)\n\n# Which issue does this PR close?\n\nCloses #2049\n\n# Rationale for this change\n\n# What changes are included in this PR?\n\n# Are there any user-facing changes?\n\n# How was this patch tested?"
    },
    {
      "commit": "0aa013a12c99b1acabf6ac6180f246798b25b9c9",
      "tree": "68c317c0f7f4ae3b49aa6c6cb9a5903b8f4cc473",
      "parents": [
        "9df46cafa6e51000c2910889bc21c8de31693dd1"
      ],
      "author": {
        "name": "zhangmang",
        "email": "zhangmang1@163.com",
        "time": "Fri Mar 06 11:42:34 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 06 11:42:34 2026 +0800"
      },
      "message": "[AURON #2058] Introduce KafkaScanExec (#2072)\n\n# Which issue does this PR close?\n\nCloses #2058 \n\n# Rationale for this change\n* add native kafka consumer\n\n# What changes are included in this PR?\n* add Protobuf Node : KafkaScanExecNode \n* add Native kafka_scan_exec.rs\n\n# Are there any user-facing changes?\n* No\n\n# How was this patch tested?\n* No need test for kafka_scan_exec.rs\n* kafka_scan_exec#test_flink_kafka_partition_assign Validate Kafka\npartition allocation strategy."
    },
    {
      "commit": "9df46cafa6e51000c2910889bc21c8de31693dd1",
      "tree": "32270de8fd2747fcece277cb2c434dc4f444b0d9",
      "parents": [
        "72232d57e03b0bee57341f282ce76ed39aac2303"
      ],
      "author": {
        "name": "yew1eb",
        "email": "yew1eb@gmail.com",
        "time": "Thu Mar 05 12:57:04 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 05 12:57:04 2026 +0800"
      },
      "message": "[AURON #2050] Support UnaryMinus expression (#2051)\n\n# Which issue does this PR close?\n\nCloses #2050\n\n# Rationale for this change\n\n# What changes are included in this PR?\n\n# Are there any user-facing changes?\n\n# How was this patch tested?"
    },
    {
      "commit": "72232d57e03b0bee57341f282ce76ed39aac2303",
      "tree": "382fe014f3a4d1ab1dcc9988c3a1d05f55643ccf",
      "parents": [
        "f4ef0442c6a3872e1cbe5db75ef5748ce3bb5139"
      ],
      "author": {
        "name": "zhangmang",
        "email": "zhangmang1@163.com",
        "time": "Wed Mar 04 17:51:45 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 04 17:51:45 2026 +0800"
      },
      "message": "[AURON #1848] Introduce native protobuf deserializer (#2069)\n\n# Which issue does this PR close?\nCloses #1848 \n\n# Rationale for this change\n* Supports deserialize Protobuf data\n\n# What changes are included in this PR?\n* add flink_deserializer.rs\n* pb_deserializer.rs\n* shared_array_builder.rs\n* shared_list_array_builder.rs\n* shared_map_array_builder.rs\n* shared_struct_array_builder.rs\n* add prost-types and prost-reflect\n\n# Are there any user-facing changes?\n* No\n# How was this patch tested?\n* pb_deserializer#test_parse_messages_with_kafka_meta_basic\n* pb_deserializer#test_parse_messages_with_kafka_meta_nested\n* pb_deserializer#test_parse_messages_with_kafka_meta_empty\n*\npb_deserializer#test_parse_messages_with_kafka_meta_different_partitions"
    },
    {
      "commit": "f4ef0442c6a3872e1cbe5db75ef5748ce3bb5139",
      "tree": "d9a8765c6645b34d03d2a06d8c904b0d8c336913",
      "parents": [
        "e96a0060f3c1c5fea762603a3505086be9ce0c82"
      ],
      "author": {
        "name": "slfan1989",
        "email": "55643692+slfan1989@users.noreply.github.com",
        "time": "Wed Mar 04 14:44:24 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 04 14:44:24 2026 +0800"
      },
      "message": "[AURON #2052] NPE fix in `AuronUniffleShuffleReader` (#2053)\n\n### Which issue does this PR close?\n\nCloses #2052\n\n### Rationale for this change\n\nIn `AuronUniffleShuffleReader`, there are potential\n`NullPointerException` risks when handling partitions:\n\n- `partitionToExpectBlocks.get(partition)` may return null, but the code\ndirectly calls `.isEmpty()` without null check\n- `partitionToShuffleServers.get(partition)` may return null, but the\ncode uses it directly without validationThis can cause the shuffle\nreader to crash when reading partitions that have missing or incomplete\nmetadata.\n\nThis can cause the shuffle reader to crash when reading partitions that\nhave missing or incomplete metadata.\n\n### What changes are included in this PR?\n\n### Are there any user-facing changes?\n\nNo.\n\n### How was this patch tested?\n\nExists Junit Test.\n\nSigned-off-by: slfan1989 \u003cslfan1989@apache.org\u003e"
    },
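The NPE fix above comes down to wrapping nullable `Map#get` results instead of dereferencing them. A hedged Scala sketch of that pattern; the map and field names follow the PR text, while the element types and the empty-set fallback are assumptions for illustration:

```scala
// Sketch of the null-safe lookup described in the entry above. Field names
// follow the PR text; types and fallback behaviour are assumptions.
import java.{util => ju}

object SafePartitionLookup {
  def expectedBlocks(
      partitionToExpectBlocks: ju.Map[Integer, ju.Set[String]],
      partition: Integer): ju.Set[String] =
    // Option(...) turns a possible null from Map#get into None, so callers
    // never call .isEmpty() on a null reference.
    Option(partitionToExpectBlocks.get(partition))
      .getOrElse(ju.Collections.emptySet[String]())

  def main(args: Array[String]): Unit = {
    val m = new ju.HashMap[Integer, ju.Set[String]]()
    println(expectedBlocks(m, 0).isEmpty) // true, instead of an NPE
  }
}
```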
    {
      "commit": "e96a0060f3c1c5fea762603a3505086be9ce0c82",
      "tree": "ce0101e11e2bd9dd094277d0e0e1663e3cce863b",
      "parents": [
        "18f26a49f5320334922d1ed985b1bab40e621b77"
      ],
      "author": {
        "name": "bkhan",
        "email": "xorsum@outlook.com",
        "time": "Wed Mar 04 14:34:30 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 04 14:34:30 2026 +0800"
      },
      "message": "[AURON #2064] PR title check (#2066)\n\n# Which issue does this PR close?\n\nCloses #2064\n\n# Rationale for this change\n\n# What changes are included in this PR?\n\n# Are there any user-facing changes?\n\n# How was this patch tested?\n\n---------\n\nCo-authored-by: Copilot \u003c175728472+Copilot@users.noreply.github.com\u003e\nCo-authored-by: cxzl25 \u003c3898450+cxzl25@users.noreply.github.com\u003e"
    },
    {
      "commit": "18f26a49f5320334922d1ed985b1bab40e621b77",
      "tree": "afc09022496c0af43a2b436b6b24c48fe3398968",
      "parents": [
        "6f77955d3586652226b5983f0077b1242d8b9e29"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Tue Mar 03 20:50:07 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 03 20:50:07 2026 +0800"
      },
      "message": "Bump futures from 0.3.31 to 0.3.32 (#2065)\n\n"
    },
    {
      "commit": "6f77955d3586652226b5983f0077b1242d8b9e29",
      "tree": "64703acfd753fa04bf373962f2afb4cb6e101fba",
      "parents": [
        "0976dee5837c63564ee7d77ea302f8b3388c1509"
      ],
      "author": {
        "name": "zhangmang",
        "email": "zhangmang1@163.com",
        "time": "Tue Mar 03 17:14:13 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 03 17:14:13 2026 +0800"
      },
      "message": "[AURON #1858] Introduce Flink Auron interface (#2057)\n\n# Which issue does this PR close?\n\nCloses #1858\n\n# Rationale for this change\n\nIntroducing Flink support for the Operator and Function interfaces\nimplemented by Auron to facilitate subsequent feature development.\n\n# What changes are included in this PR?\n* add SupportsAuronNative\n* add FlinkAuronFunction\n* FlinkAuronOperator\n\n# Are there any user-facing changes?\n\n* No\n\n# How was this patch tested?\n* No need test"
    },
    {
      "commit": "0976dee5837c63564ee7d77ea302f8b3388c1509",
      "tree": "e91e72c06099745e54d2768f367dbcb74be45d31",
      "parents": [
        "04b48c31371d795763bc436c6e83d2a0e44ab49c"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Tue Mar 03 15:28:27 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 03 15:28:27 2026 +0800"
      },
      "message": "Bump bytes from 1.11.0 to 1.11.1 (#1983)\n\n"
    },
    {
      "commit": "04b48c31371d795763bc436c6e83d2a0e44ab49c",
      "tree": "d22f3a94cb14b18f3e92beb4611e2c780a7620bb",
      "parents": [
        "af99782d25a73e5a944c8f6bab18deffd8cf27fb"
      ],
      "author": {
        "name": "slfan1989",
        "email": "55643692+slfan1989@users.noreply.github.com",
        "time": "Tue Mar 03 14:31:48 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 03 14:31:48 2026 +0800"
      },
      "message": "[AURON #2011] History Server fails when BuildInfo event is missing. (#2012)\n\n### Which issue does this PR close?\n\nCloses #2011\n\n### Rationale for this change\n\nThe History Server plugin currently crashes during initialization when\nthe `AuronBuildInfoUIData` record is missing from the KVStore. This\ncauses applications without BuildInfo events to either fail plugin\ninitialization or show no Auron tab.\n\n### What changes are included in this PR?\n\n1. **AuronSQLAppStatusStore**: Changed `buildInfo()` to return\n`Option[AuronBuildInfoUIData]`, catching `NoSuchElementException` and\nother exceptions to return `None` instead of throwing\n2. **AuronSQLHistoryServerPlugin**: Removed the null check and always\ncreate the Auron tab, letting the UI handle empty state\n3. **AuronAllExecutionsPage**: Added `buildInfoSummary()` method to\nhandle `Option[AuronBuildInfoUIData]`:\n   - `Some`: displays BuildInfo table as before\n- `None`: shows user-friendly message \"Auron build information is not\navailable for this application.\"\n\n\n### Are there any user-facing changes?\n\nYes. When BuildInfo is not available:\n- Before: Plugin initialization fails or no Auron tab appears\n- After: Auron tab displays with a clear warning message explaining\nBuildInfo is unavailable\n\n### How was this patch tested?\n\n- Existing unit tests pass\n\nSigned-off-by: slfan1989 \u003cslfan1989@apache.org\u003e"
    },
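The History Server fix above changes `buildInfo()` to return an `Option` and to swallow lookup failures instead of crashing plugin initialization. A self-contained sketch of that behaviour, using a stand-in store rather than the real KVStore API (only the NoSuchElementException-to-None handling follows the PR text):

```scala
// Minimal sketch of the Option-returning accessor described above.
// AuronBuildInfoUIData and FakeStore are stand-ins, not the real classes.
import scala.util.control.NonFatal

final case class AuronBuildInfoUIData(buildInfo: Seq[(String, String)])

final class FakeStore(records: Map[String, AuronBuildInfoUIData]) {
  def read(key: String): AuronBuildInfoUIData =
    records.getOrElse(key, throw new NoSuchElementException(key))
}

object BuildInfoLookup {
  def buildInfo(store: FakeStore): Option[AuronBuildInfoUIData] =
    try Some(store.read("auron-build-info"))
    catch {
      case _: NoSuchElementException => None // record absent: UI shows a notice
      case NonFatal(_)               => None // any other read failure: treat as absent
    }

  def main(args: Array[String]): Unit =
    println(buildInfo(new FakeStore(Map.empty))) // None, no crash
}
```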
    {
      "commit": "af99782d25a73e5a944c8f6bab18deffd8cf27fb",
      "tree": "80ec460d2e080de11781b8bab70521c8e7cc3b4f",
      "parents": [
        "b2b04cdacf3b6894d67375eeaa66dd1f1c7218a2"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Tue Mar 03 12:35:48 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 03 12:35:48 2026 +0800"
      },
      "message": "Bump futures-util from 0.3.31 to 0.3.32 (#2010)\n\n"
    },
    {
      "commit": "b2b04cdacf3b6894d67375eeaa66dd1f1c7218a2",
      "tree": "aea1bbbcfeddeb87ffed5cc8d6931db29bf6d9f2",
      "parents": [
        "bd83f3dbf3ef8ffff1af60a3e7d8ac8b66780f65"
      ],
      "author": {
        "name": "slfan1989",
        "email": "55643692+slfan1989@users.noreply.github.com",
        "time": "Tue Mar 03 11:30:52 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 03 11:30:52 2026 +0800"
      },
      "message": "[AURON #2026] Migrate from deprecated JavaConverters to CollectionConverters. (#2027)\n\n### Which issue does this PR close?\n\nCloses #2026\n\n### Rationale for this change\n\nThe codebase currently uses the deprecated\n`scala.collection.JavaConverters` API for Java-Scala collection\nconversions. This API was deprecated in Scala 2.13 in favor of\n`scala.jdk.CollectionConverters`.\n\nCurrent issues:\n- Deprecation warnings are being suppressed in build configuration\n(pom.xml:1055)\n- Using outdated API that doesn\u0027t follow Scala best practices\n- Potential migration barrier when upgrading to Scala 2.13+\n\nThe project already includes `scala-collection-compat` dependency\n(version 2.12.0), which provides a compatibility layer allowing the use\nof modern `scala.jdk.CollectionConverters` API in Scala 2.12 without any\nbehavioral changes.\n\n### What changes are included in this PR?\n\n**How this works with Scala 2.12:**\nThe `scala-collection-compat` library provides a compatibility layer\nthat:\n- In Scala 2.12: redirects `scala.jdk.CollectionConverters` to\n`scala.collection.JavaConverters`\n- In Scala 2.13+: uses native `scala.jdk.CollectionConverters`\n\nThis ensures zero runtime behavioral changes.\n\n### Are there any user-facing changes?\n\nNo. This is purely an internal code refactoring with no user-facing\nchanges or behavioral differences.\n\n### How was this patch tested?\n\nExisting unit tests pass.\n\nSigned-off-by: slfan1989 \u003cslfan1989@apache.org\u003e"
    },
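The migration above is purely an import-level change; `.asScala`/`.asJava` call sites stay the same, which is why `scala-collection-compat` can make it a no-op on Scala 2.12. A small sketch (generic Scala, nothing Auron-specific assumed):

```scala
// The import-level change the entry above describes: same converter calls,
// new import location. On Scala 2.12 this compiles via the
// scala-collection-compat shim; on 2.13+ it is the standard library.
// Before (deprecated): import scala.collection.JavaConverters._
import scala.jdk.CollectionConverters._

object ConvertersDemo {
  def main(args: Array[String]): Unit = {
    val javaList = new java.util.ArrayList[String]()
    javaList.add("auron")
    val asScala: Seq[String] = javaList.asScala.toSeq        // Java -> Scala
    val backToJava: java.util.List[String] = asScala.asJava  // Scala -> Java
    println((asScala, backToJava))
  }
}
```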
    {
      "commit": "bd83f3dbf3ef8ffff1af60a3e7d8ac8b66780f65",
      "tree": "77498f295f16319e7c2cb2c322db5f7ab1e04b8d",
      "parents": [
        "b8bf7253d950e6ce0a514423f6243ddfd19dae5a"
      ],
      "author": {
        "name": "zhangmang",
        "email": "zhangmang1@163.com",
        "time": "Tue Mar 03 10:38:47 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 03 10:38:47 2026 +0800"
      },
      "message": "Introduce FlinkAuronAdaptor and FlinkAuronAdaptorProvider (#2056)\n\n# Which issue does this PR close?\n\nCloses #1855\n\n# Rationale for this change\n* Introduce FlinkAuronAdaptor and FlinkAuronAdaptorProvider\n# What changes are included in this PR?\n* add FlinkAuronAdaptor\n* add FlinkAuronAdaptorProvider\n# Are there any user-facing changes?\n* NO\n# How was this patch tested?\n* Test vim UT FlinkAuronAdaptorTest#testCreateAuronAdaptor"
    },
    {
      "commit": "b8bf7253d950e6ce0a514423f6243ddfd19dae5a",
      "tree": "3482ce71f4379dec55d2f03ab7c11be4e16f84f2",
      "parents": [
        "6b0643c63f8e7f9cb5754a2c0443dc5f13b3547a"
      ],
      "author": {
        "name": "zhangmang",
        "email": "zhangmang1@163.com",
        "time": "Mon Mar 02 20:31:26 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 02 20:31:26 2026 +0800"
      },
      "message": "[AURON #1854] Introduce FlinkAuronConfiguration (#2055)\n\n\u003c!--\n- Start the PR title with the related issue ID, e.g. \u0027[AURON #XXXX]\nShort summary...\u0027.\n--\u003e\n# Which issue does this PR close?\n\nCloses #1854 \n\n# Rationale for this change\n\nIntroduce FlinkAuronConfiguration to unify access operations to\nFlinkConfiguration within Auron.\n\n# What changes are included in this PR?\n\n* add SparkAuronConfiguration\n* add FlinkAuronConfigurationTest\n* CommonTestUtils\n\n\n# Are there any user-facing changes?\n* NO\n# How was this patch tested?\n* Test vim UT FlinkAuronConfigurationTest#testGetConfigFromFlinkConfig"
    },
    {
      "commit": "6b0643c63f8e7f9cb5754a2c0443dc5f13b3547a",
      "tree": "0698f410d0e8dc6cb9ab7c62847aad38dd01d2b1",
      "parents": [
        "df3e06558be4e87ca36f7d2752e637f300840974"
      ],
      "author": {
        "name": "yew1eb",
        "email": "yew1eb@gmail.com",
        "time": "Mon Mar 02 20:24:52 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 02 20:24:52 2026 +0800"
      },
      "message": "[AURON #2043] Bump hadoop from 3.4.2 to 3.4.3 (#2044)\n\n# Which issue does this PR close?\n\nCloses #2043 \n\n# Rationale for this change\nHadoop 3.4.3 has been released, and we’re planning to upgrade our Hadoop\nversion to 3.4.3.\n\n# What changes are included in this PR?\n\n# Are there any user-facing changes?\n\n# How was this patch tested?"
    },
    {
      "commit": "df3e06558be4e87ca36f7d2752e637f300840974",
      "tree": "6c146f2110de011ebd3b377ddec2c7f8e37bae03",
      "parents": [
        "b9c9fdd605c9e9f8297725d367f8339870d31c2e"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Mon Mar 02 14:57:56 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 02 14:57:56 2026 +0800"
      },
      "message": "Bump actions/upload-artifact from 6 to 7 (#2041)\n\n"
    },
    {
      "commit": "b9c9fdd605c9e9f8297725d367f8339870d31c2e",
      "tree": "42edc13bb7b78952d519fe5081b32108f2a37892",
      "parents": [
        "41bd32f2aaa9d8af17f3bde98847a130690baac3"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Mon Mar 02 12:08:58 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 02 12:08:58 2026 +0800"
      },
      "message": "Bump actions/download-artifact from 7 to 8 (#2040)\n\n"
    },
    {
      "commit": "41bd32f2aaa9d8af17f3bde98847a130690baac3",
      "tree": "7a0b8fe39b3933d8b3a3fc13c63c855660b72add",
      "parents": [
        "d85c40c58225fe62ce7599cfade08911a6ede62f"
      ],
      "author": {
        "name": "slfan1989",
        "email": "55643692+slfan1989@users.noreply.github.com",
        "time": "Mon Mar 02 10:23:59 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 02 10:23:59 2026 +0800"
      },
      "message": "[AURON #2017] [BUILD] Add Spark 4.x support to dev/reformat script. (#2018)\n\n\u003c!--\n- Start the PR title with the related issue ID, e.g. \u0027[AURON #XXXX]\nShort summary...\u0027.\n--\u003e\n### Which issue does this PR close?\n\nCloses #2017\n\n### Rationale for this change\n\nWith Spark 4.0 and 4.1 support added to the project, the `dev/reformat`\nscript needs to be updated to handle formatting and style checks for\nthese new versions. Spark 4.x requires JDK 17+ and Scala 2.13, while\nSpark 3.x uses JDK 8 and Scala 2.12. The script should automatically\nswitch between these environments.\n\n### What changes are included in this PR?\n\n#### 1. Fix Flink Maven profile\n- Before: -Pflink,flink-1.18\n- After: -Pflink-1.18\n- Reason: Avoid activating non-existent flink profile\n\n#### 2.Add Spark 4.x support\n- Add spark-4.0 and spark-4.1 to the version sweep list\n- Auto-switch to scala-2.13 profile for Spark 4.x (Spark 4.x requires\nScala 2.13)\n- Auto-switch to JDK 17 for Spark 4.x (Spark 4.x requires JDK 17+)\n- Auto-switch back to JDK 8 for Spark 3.x versions\n\n#### 3.Update CI workflow (.github/workflows/style.yml)\n- Add JDK 17 setup alongside existing JDK 8\n- Enable style check to work with both Spark 3.x and Spark 4.x versions\n\n### Are there any user-facing changes?\nNo.\n\n### How was this patch tested?\nVerified automatic JDK switching works for Spark 3.x (JDK 8) and Spark\n4.x (JDK 17)\n\n---------\n\nSigned-off-by: slfan1989 \u003cslfan1989@apache.org\u003e"
    },
    {
      "commit": "d85c40c58225fe62ce7599cfade08911a6ede62f",
      "tree": "9f93292fa8ba14b09a7c2b2bdfb34aeeb1a5630e",
      "parents": [
        "1c4a4aebd59bfd50ad2daab2b0c8e5f7f0c2e18c"
      ],
      "author": {
        "name": "cxzl25",
        "email": "3898450+cxzl25@users.noreply.github.com",
        "time": "Sun Mar 01 09:50:28 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Mar 01 09:50:28 2026 +0800"
      },
      "message": "[AURON #2047] Fix Scala 2.13 CI NoClassDefFoundError (#2048)\n\n# Which issue does this PR close?\n\nCloses #2047\n\n# Rationale for this change\n\n```bash\n  if [ 2.13 \u003d \"2.13\" \u0026\u0026 \"spark-3.5\" !\u003d \"spark-4.0\" \u0026\u0026 \"spark-3.5\" !\u003d \"spark-4.1\" ]; then\n    SPARK_FILE\u003d\"spark-3.5.8-bin-hadoop3-scala2.13.tgz\"\n  else\n    SPARK_FILE\u003d\"spark-3.5.8-bin-hadoop3.tgz\"\n  fi\n```\n\n```\n/home/runner/work/_temp/5e39dd44-8905-4fc9-ad30-a316cba0dc50.sh: line 2: [: missing `]\u0027\n--2026-02-27 05:16:45--  https://www.apache.org/dyn/closer.lua/spark/spark-3.5.8/spark-3.5.8-bin-hadoop3.tgz?action\u003ddownload\n```\n\n\nhttps://github.com/apache/auron/actions/runs/22473356538/job/65096490529?pr\u003d2040\n\n\n# What changes are included in this PR?\n\n# Are there any user-facing changes?\n\n# How was this patch tested?\nGHA\n\n```\n--2026-02-27 13:27:10--  https://www.apache.org/dyn/closer.lua/spark/spark-3.5.8/spark-3.5.8-bin-hadoop3-scala2.13.tgz?action\u003ddownload\n```"
    },
    {
      "commit": "1c4a4aebd59bfd50ad2daab2b0c8e5f7f0c2e18c",
      "tree": "8430734e813fe8d712d509f13f85d9f6a00a66d8",
      "parents": [
        "b68d88d9673b182f1512f6688456d44f5274a90d"
      ],
      "author": {
        "name": "slfan1989",
        "email": "55643692+slfan1989@users.noreply.github.com",
        "time": "Sat Feb 28 17:15:36 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Feb 28 17:15:36 2026 +0800"
      },
      "message": "[AURON #2022] [BUG] Native engine panic on closed channel causes JVM crash. (#2023)\n\n### Which issue does this PR close?\n\nCloses #2022\n\n### Rationale for this change\n\nWhen running TPC-DS queries (q60-q69) with Spark 4.0 / JDK 21, the\nnative engine panics when `WrappedSender::send` attempts to send data\ninto a closed channel. This panic escalates into a JVM crash with exit\ncode 134.\n\nThe root cause is a race condition where the producer task continues\nsending data while the receiver has already been closed/canceled due to\ntask completion or cancellation. The current implementation panics on\nsend failure instead of handling it gracefully.\n\nLink:\nhttps://github.com/apache/auron/actions/runs/22128240337/job/63964149831?pr\u003d2018\n\n### What changes are included in this PR?\n\nModified `WrappedSender::send` in `execution_context.rs` to gracefully\nhandle channel closure:\n- Check `send().await.is_err()` instead of panicking\n- Log debug message with context (partition_id, task_id, session_id) for\nobservability\n- Return early without updating metrics when channel is closed\n\n### Are there any user-facing changes?\n\nNo user-facing changes. \nThis is an internal fix that prevents JVM crashes when tasks are\ncanceled or completed early.\n\n### How was this patch tested?\n\nExist Junit Test.\n\nSigned-off-by: slfan1989 \u003cslfan1989@apache.org\u003e"
    },
    {
      "commit": "b68d88d9673b182f1512f6688456d44f5274a90d",
      "tree": "18a13023626e0b45452a334989842d4d4e8be081",
      "parents": [
        "ec54cf24c42cad0d9c168add9b0561daa4243382"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Thu Feb 26 23:57:09 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Feb 26 23:57:09 2026 +0800"
      },
      "message": "Bump sonic-rs from 0.5.6 to 0.5.7 (#2038)\n\n"
    },
    {
      "commit": "ec54cf24c42cad0d9c168add9b0561daa4243382",
      "tree": "d78db049e71b06155d71512a04dcff03cd4c0656",
      "parents": [
        "8e0681e4531df16848dbd86aea84a748aa0c4d65"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Thu Feb 26 23:56:09 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Feb 26 23:56:09 2026 +0800"
      },
      "message": "Bump chrono from 0.4.43 to 0.4.44 (#2036)\n\n"
    },
    {
      "commit": "8e0681e4531df16848dbd86aea84a748aa0c4d65",
      "tree": "54d564729fb32f8a2d41b6f05eb473c73a749428",
      "parents": [
        "d2af8a4b6d1d8f2522790caa04a65d055a270230"
      ],
      "author": {
        "name": "slfan1989",
        "email": "55643692+slfan1989@users.noreply.github.com",
        "time": "Wed Feb 25 18:50:13 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Feb 25 18:50:13 2026 +0800"
      },
      "message": "[AURON #2028] Fix extra brace in SparkOnHeapSpillManager log message. (#2029)\n\n### Which issue does this PR close?\n\nCloses #2028\n\n### Rationale for this change\n\nThere is an extra closing brace in the log message in\n`SparkOnHeapSpillManager.scala` at line 161, causing improper log\nformatting.\n\n### What changes are included in this PR?\n\nRemove the extra closing brace `}` from the log message\n\n### Are there any user-facing changes?\n\nNo.\n\n### How was this patch tested?\n\nExists Junit Test.\n\nSigned-off-by: slfan1989 \u003cslfan1989@apache.org\u003e"
    }
  ],
  "next": "d2af8a4b6d1d8f2522790caa04a65d055a270230"
}
