{
  "log": [
    {
      "commit": "66dfacbd4916ca5a1058bcb8113932cfad6ae157",
      "tree": "bc034c60c86077b9d7e23f313fd193e401e3d731",
      "parents": [
        "d0fae6e17d6fcf74f012a1431e1d4aca6e0b7b07"
      ],
      "author": {
        "name": "ChengHui Chen",
        "email": "chchen110@gmail.com",
        "time": "Wed May 06 21:26:44 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 06 21:26:44 2026 +0800"
      },
      "message": "[python] Fix broken test `branch_manager_test` (#7773)"
    },
    {
      "commit": "d0fae6e17d6fcf74f012a1431e1d4aca6e0b7b07",
      "tree": "bc4900abf50b26923605bd0a1538535cc8b10c93",
      "parents": [
        "487bed2abff9bc001c0a2a72ba03c9200dab56e9"
      ],
      "author": {
        "name": "XiaoHongbo",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Wed May 06 14:33:29 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed May 06 14:33:29 2026 +0800"
      },
      "message": "[flink] Fix StackOverflowError when building global index with many index columns and partitions (#7754)"
    },
    {
      "commit": "487bed2abff9bc001c0a2a72ba03c9200dab56e9",
      "tree": "7470238392363d5cdb045f5694d4de91cc16ab59",
      "parents": [
        "61bd57ea91603de9b1ecd927abd122e5dadebfc5"
      ],
      "author": {
        "name": "chaoyang",
        "email": "chaoyang@apache.org",
        "time": "Sun May 03 08:06:01 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun May 03 08:06:01 2026 +0800"
      },
      "message": "[python] Align Identifier with Java by encoding branch into the object field (#7738)"
    },
    {
      "commit": "61bd57ea91603de9b1ecd927abd122e5dadebfc5",
      "tree": "84bef7492270ee048fc5e530a95c27de0e490190",
      "parents": [
        "47f44ee1507f067cf36985c4eacd13eaf7432367"
      ],
      "author": {
        "name": "XiaoHongbo",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Sun May 03 08:03:22 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun May 03 08:03:22 2026 +0800"
      },
      "message": "[lumina] Add lance format support for Lumina vector index e2e test (#7753)"
    },
    {
      "commit": "47f44ee1507f067cf36985c4eacd13eaf7432367",
      "tree": "095265b38ba42002cad942b76e09279a0164243f",
      "parents": [
        "718f292c95c43a9bf8cf022c0ab50e301bd9a4a4"
      ],
      "author": {
        "name": "chaoyang",
        "email": "chaoyang@apache.org",
        "time": "Sun May 03 08:02:57 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun May 03 08:02:57 2026 +0800"
      },
      "message": "[python] Implement branch CRUD on FileSystemCatalog (#7755)"
    },
    {
      "commit": "718f292c95c43a9bf8cf022c0ab50e301bd9a4a4",
      "tree": "4fbd0c8e4282512aa1c7cd6bd84ede33e34df80d",
      "parents": [
        "b4e54ada3b14a88ea1d5798f90a8026569210981"
      ],
      "author": {
        "name": "chaoyang",
        "email": "chaoyang@apache.org",
        "time": "Sun May 03 08:02:12 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun May 03 08:02:12 2026 +0800"
      },
      "message": "[python] Fix FileSystemBranchManager from-tag and fast-forward path computation (#7756)\n\nFix a pre-existing bug in `FileSystemBranchManager._copy_with_branch`\nthat made `create_branch(tag_name\u003d...)` and `fast_forward` raise\n`SameFileError`: the `SnapshotManager` it produced still pointed at the\nmain-branch snapshot directory, so `copy_file(src, dst)` collapsed to\n`src \u003d\u003d dst`.\n\nThe fix mirrors Java `SnapshotManager.copyWithBranch`\n(paimon-core/.../utils/SnapshotManager.java:89-95): the Python\n`SnapshotManager` now carries an explicit `branch` field and a\n`copy_with_branch(branch_name)` factory, so its `snapshot_dir` /\n`get_snapshot_path(...)` resolve to\n`{table_path}/branch/branch-{name}/snapshot/...` for non-main branches.\n`FileSystemBranchManager._copy_with_branch` then dispatches to the\nper-manager factories (`SnapshotManager.copy_with_branch` /\n`SchemaManager.copy_with_branch`) instead of blindly reconstructing a\nmain-branch `SnapshotManager`.\n\n`SnapshotLoader` is rebranched in lockstep so REST-path catalog loads\ntarget the requested branch rather than falling back to the main-branch\nidentifier (mirrors Java `SnapshotLoaderImpl.copyWithBranch`)."
    },
    {
      "commit": "b4e54ada3b14a88ea1d5798f90a8026569210981",
      "tree": "c6e0adfdc54dea7b84ed9d3e02cb2388757eb9b9",
      "parents": [
        "f816b3e73e40dfd7ecfad8a95a004acd15b6512d"
      ],
      "author": {
        "name": "chaoyang",
        "email": "chaoyang@apache.org",
        "time": "Fri May 01 09:25:29 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri May 01 09:25:29 2026 +0800"
      },
      "message": "[python] Add branch CRUD REST implementation to RESTCatalog (#7747)\n\n`pypaimon`\u0027s `Catalog` ABC has stubs for `create_branch` / `drop_branch`\n/ `fast_forward` / `list_branches` raising `NotImplementedError`, and is\nmissing `rename_branch` entirely. The `RESTCatalog` never overrode any\nof them, so calling any branch operation on a real REST catalog raises\n`NotImplementedError` instead of issuing the REST call. Java defines all\nfive (`paimon-core/.../catalog/Catalog.java:843-912`) with a complete\nREST implementation in `paimon-core/.../rest/RESTCatalog.java:703-769`.\nThis PR closes that gap.\n\nThis is the sister PR of [#7746 — Tag\nCRUD](https://github.com/apache/paimon/pull/7746); same shape (abstract\nstub + RESTCatalog overrides + wire DTOs + URL builders + mock REST\nserver route handlers + tests). `FilesystemCatalog` inherits the\nabstract `NotImplementedError` stubs; a Python-side `BranchManager` port\nis tracked separately."
    },
    {
      "commit": "f816b3e73e40dfd7ecfad8a95a004acd15b6512d",
      "tree": "c0a0527619a30de355b5d0097b562849391247bd",
      "parents": [
        "75e7ed233482e5aa292155660fbbda78eca344ff"
      ],
      "author": {
        "name": "chaoyang",
        "email": "chaoyang@apache.org",
        "time": "Fri May 01 09:24:35 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri May 01 09:24:35 2026 +0800"
      },
      "message": "[python] Implement tag CRUD on FileSystemCatalog (#7751)\n\nPR #7746 added abstract tag stubs to `Catalog` and a complete\n`RESTCatalog` implementation, but left `FileSystemCatalog` inheriting\n`NotImplementedError`. The merge note said implementing filesystem tag\nCRUD required porting a Python `TagManager`. As it turns out,\n`pypaimon/tag/tag_manager.py::TagManager` and\n`FileStoreTable.create_tag` / `delete_tag` / `list_tags` /\n`tag_manager()` **already exist on master**, so this PR is a thin\ncatalog-layer wrapper that delegates to those helpers and translates\n`ValueError` / return-value-`False` into the typed catalog exceptions\nused by the rest of the API."
    },
    {
      "commit": "75e7ed233482e5aa292155660fbbda78eca344ff",
      "tree": "dfbe5741a5b998fc4c65fd93663cedd174fe7f3c",
      "parents": [
        "c688560bed00477535fc8ccbb8980d45e6d5a922"
      ],
      "author": {
        "name": "XiaoHongbo",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Fri May 01 09:23:45 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri May 01 09:23:45 2026 +0800"
      },
      "message": "[python] Fix vector search RecursionError with many index readers (#7752)\n\nVector search throws `RecursionError: maximum recursion depth exceeded`\nwhen there are many index readers. This PR fixes the above issue by\nchanging `or_`/`and_` in `GlobalIndexResult` from lazy to eager\nevaluation"
    },
    {
      "commit": "c688560bed00477535fc8ccbb8980d45e6d5a922",
      "tree": "e751d06404aff3b2307bb3c647a0a3f187cabcca",
      "parents": [
        "904a5f7dd7703398cf3e42b477c78632a9fb6af4"
      ],
      "author": {
        "name": "chaoyang",
        "email": "chaoyang@apache.org",
        "time": "Thu Apr 30 17:05:35 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 30 17:05:35 2026 +0800"
      },
      "message": "[python] Add tag CRUD API to Catalog (REST + abstract) (#7746)\n\n`pypaimon`\u0027s `Catalog` ABC has stubs for branch CRUD (`create_branch` /\n`drop_branch` / `list_branches` / `fast_forward`) but no tag methods at\nall — calling `catalog.create_tag(...)` on master raises\n`AttributeError`. Java\u0027s `Catalog`\n(`paimon-core/.../catalog/Catalog.java:914-985`) defines `createTag` /\n`getTag` / `listTagsPaged` / `deleteTag`. This PR ports that surface to\nPython with a complete `RESTCatalog` implementation.\n\n`FilesystemCatalog` inherits the abstract `NotImplementedError` stubs\n(mirrors how it inherits the existing branch stubs). A concrete\nfilesystem implementation requires a Python-side `TagManager` port and\nis tracked as a separate follow-up."
    },
    {
      "commit": "904a5f7dd7703398cf3e42b477c78632a9fb6af4",
      "tree": "17880ae043b77f72e28120b7a65bbbd9c7a2632f",
      "parents": [
        "6c1aac8c07f228bf6c265fc557e6192da7ff3923"
      ],
      "author": {
        "name": "0dunay0",
        "email": "bicorn@yelp.com",
        "time": "Wed Apr 29 10:04:30 2026 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 29 23:04:30 2026 +0800"
      },
      "message": "[arrow] Normalize timezone strings for arrow-rust compatibility (#7705)\n\n`ZoneId.systemDefault().toString()` can produce GMT-prefix timezone\nstrings like `GMT-10:00` when the JVM timezone is set to a GMT-prefix\noffset. Arrow-rust doesn\u0027t recognize the `GMT` prefix and rejects these\nstrings.\n\n`normalized()` converts them to formats arrow-rust accepts:\n- `GMT-10:00` becomes `-10:00`\n- `GMT+05:30` becomes `+05:30`\n- `UTC` / `GMT` becomes `Z`\n- IANA names like `America/New_York` pass through unchanged"
    },
    {
      "commit": "6c1aac8c07f228bf6c265fc557e6192da7ff3923",
      "tree": "4815e49e7801e0bcbdbc8db9eb02cc408ffa9063",
      "parents": [
        "234e528fd935f700fd26e816ad088bfc3b28c35e"
      ],
      "author": {
        "name": "Dapeng Sun(孙大鹏)",
        "email": "dapeng.sdp@alibaba-inc.com",
        "time": "Wed Apr 29 22:51:37 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 29 22:51:37 2026 +0800"
      },
      "message": "[iceberg] Fix typo in IcebergRestMetadataCommitter (#7728)"
    },
    {
      "commit": "234e528fd935f700fd26e816ad088bfc3b28c35e",
      "tree": "a5c93d77bd1cc9fc0088128734a788b72a26a5d0",
      "parents": [
        "c40420fdf2fb7b6fef301905aec2134b132790b9"
      ],
      "author": {
        "name": "Dapeng Sun(孙大鹏)",
        "email": "dapeng.sdp@alibaba-inc.com",
        "time": "Wed Apr 29 22:51:19 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 29 22:51:19 2026 +0800"
      },
      "message": "[arrow] Replace System.out.println with LOG.debug in ArrowBundleWriter (#7729)"
    },
    {
      "commit": "c40420fdf2fb7b6fef301905aec2134b132790b9",
      "tree": "aa5f0862a5348879b07e1b40b7518a3b8685672c",
      "parents": [
        "12c32b145e8c7455943e61f5fb10da1bd83b4ca9"
      ],
      "author": {
        "name": "Nick Del Nano",
        "email": "ndelnano@yelp.com",
        "time": "Wed Apr 29 07:25:21 2026 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 29 22:25:21 2026 +0800"
      },
      "message": "[core] [iceberg] Support syncing Iceberg metadata to multiple Hive databases (#7696)\n\nIssue:  https://github.com/apache/paimon/issues/7694\n\n\u003e Paimon\u0027s Iceberg compatibility supports a single Iceberg table in the\ncatalog with metadata.iceberg.database and metadata.iceberg.table\nvalues. I\u0027d like it to support the option of multiple db.table\nreferences so that I can register the Iceberg metadata to both a legacy\ndb.table value and a new one. This will allow me to migrate my platform\nonto this feature without changing existing queries while also using my\nnew db.table naming standard.\n\n\u003e I don\u0027t have a view implementation available in my environment so I\ncannot use views to solve it.\n\nSupport registering Iceberg metadata in multiple Hive databases by\nallowing semicolon-delimited values in metadata.iceberg.database (e.g.\n\u0027db1;db2;db3\u0027)."
    },
    {
      "commit": "12c32b145e8c7455943e61f5fb10da1bd83b4ca9",
      "tree": "e5d14f69f7446c440d73987e8fbf7fc7e42de35c",
      "parents": [
        "91f9d6774a52c61b920bb0e32bac5d9819c717b4"
      ],
      "author": {
        "name": "chaoyang",
        "email": "1955938454@qq.com",
        "time": "Wed Apr 29 22:22:36 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 29 22:22:36 2026 +0800"
      },
      "message": "[python] Fix predicate index stale after with_projection narrows read_type (#7741)\n\nPredicate leaves are built against the *original* table schema via\n`PredicateBuilder`, so each leaf\u0027s `index` field encodes a position in\nthat schema\u0027s column order. `SplitRead.__init__` then takes the\npredicate as-is and forwards it to the row-level `FilterRecordReader`.\n\nWhen the caller chains `.with_projection(...)` onto the read builder,\n`read_type` becomes narrower or reordered relative to the original\nschema. The `OffsetRow` handed to `FilterRecordReader` uses `read_type`\nindices, so the stale `index` carried on the predicate either\n\n* raises `\"Position N is out of bounds for row arity M\"` when `N` is\npast the end of the projected row, or\n* silently selects the wrong column when `N` is still in range but no\nlonger points at the predicate field.\n\nThis shows up most easily on PK tables (the row-level filter path) when\na non-PK column is filtered while the projection narrows the read."
    },
    {
      "commit": "91f9d6774a52c61b920bb0e32bac5d9819c717b4",
      "tree": "4dfffc5dbcc1a57e5a84cc3024cf2f87812fbc7f",
      "parents": [
        "abf1ca65594d81ef36643b52fbc7fd8ecdc6323b"
      ],
      "author": {
        "name": "Arnav Balyan",
        "email": "60175178+ArnavBalyan@users.noreply.github.com",
        "time": "Wed Apr 29 19:48:17 2026 +0530"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 29 22:18:17 2026 +0800"
      },
      "message": "[iceberg] Support timestamp conversion for IcebergDataField (#7734)"
    },
    {
      "commit": "abf1ca65594d81ef36643b52fbc7fd8ecdc6323b",
      "tree": "0bfe473290c67088791edfdefec584ff0abb086c",
      "parents": [
        "14745f42f9a5f739087135d032f65a7cda310c53"
      ],
      "author": {
        "name": "ChengHui Chen",
        "email": "chchen110@gmail.com",
        "time": "Wed Apr 29 22:06:15 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 29 22:06:15 2026 +0800"
      },
      "message": "[python] Push down IndexedSplit row ranges to Lance reader (#7666)\n\nAfter a global index search returns a sparse set of matching row IDs,\nthe data fetch path previously used RowIdFilterRecordBatchReader for\nLance — reading the entire Lance file and discarding non-matching rows\nin memory.\n\nThis change converts the global row ID ranges to local file offsets and\npushes them down to lance.file.LanceFileReader.take_rows(), so only the\nmatched rows are physically read from disk, leveraging Lance\u0027s native\nrandom-access capability."
    },
    {
      "commit": "14745f42f9a5f739087135d032f65a7cda310c53",
      "tree": "86ef78e2d4fb272bdd264cb922bb76e64dcaaacd",
      "parents": [
        "1d05829c21d92dd01314de3355deb4c26bd372e1"
      ],
      "author": {
        "name": "ChengHui Chen",
        "email": "chchen110@gmail.com",
        "time": "Wed Apr 29 22:05:03 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 29 22:05:03 2026 +0800"
      },
      "message": "[python] Support VARIANT type in pypaimon (#7635)\n\nThis PR adds VARIANT read/write support to pypaimon, with a particular\nfocus on shredded VARIANT.\n\n- **Write**: when `variant.shreddingSchema` is configured on a table,\nVARIANT columns are written in shredded Parquet format according to the\nschema.\n- **Read**: shredded VARIANT columns are automatically reassembled back\ninto standard `struct\u003cvalue: binary, metadata: binary\u003e` form,\ntransparent to the caller.\n\nShredded column pruning and predicate pushdown will be built on top of\nthis PR."
    },
    {
      "commit": "1d05829c21d92dd01314de3355deb4c26bd372e1",
      "tree": "7e647443c8e12699e048e1d54ef18d583dfbb9ae",
      "parents": [
        "fee912c5d4da700c0430d153a81eda60c4a1a91d"
      ],
      "author": {
        "name": "yunfengzhou-hub",
        "email": "yuri.zhouyunfeng@outlook.com",
        "time": "Tue Apr 28 22:43:45 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 28 22:43:45 2026 +0800"
      },
      "message": "[deploy] Move extensions to root pom (#7724)"
    },
    {
      "commit": "fee912c5d4da700c0430d153a81eda60c4a1a91d",
      "tree": "3ff46eff80cd66c0608740abe57c9bc837bc73ab",
      "parents": [
        "4b170b6cd826455a82e8cc2552eb92f33792a3df"
      ],
      "author": {
        "name": "Dapeng Sun(孙大鹏)",
        "email": "dapeng.sdp@alibaba-inc.com",
        "time": "Tue Apr 28 22:18:45 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 28 22:18:45 2026 +0800"
      },
      "message": "[core] Replace reflective FileIO lookups with HadoopOptionsProvider interface (#7722)"
    },
    {
      "commit": "4b170b6cd826455a82e8cc2552eb92f33792a3df",
      "tree": "772825b464a90fd36819c416293f21c7b5723d7a",
      "parents": [
        "60b6852218957fc087c04b7e1254a493039f5856"
      ],
      "author": {
        "name": "Nicholas Jiang",
        "email": "programgeek@163.com",
        "time": "Tue Apr 28 17:52:36 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 28 17:52:36 2026 +0800"
      },
      "message": "[python] Support FileScanner partition_predicate to fix FileStoreCommit overwrite for sanity check (#7711)\n\nAdd `partition_predicate` support to `FileScanner`, matching the Java\nand Rust implementations:\n\n- Previously, callers like `FileStoreCommit.overwrite()`,\n`drop_partitions()`, and `CommitScanner` had to build\nfull-schema-indexed predicates (`PredicateBuilder(table.fields)`) that\n`FileScanner` would internally re-trim to partition keys via\n`trim_and_transform_predicate`.\n- Now `FileScanner` accepts an optional `partition_predicate` parameter\nindexed directly by partition keys. When provided, it is used as\n`partition_key_predicate` without any index translation. The existing\npredicate parameter is preserved for the read path. All write-path\ncallers are simplified to use `PredicateBuilder(partition_keys_fields)`\nand pass the result as `partition_predicate`."
    },
    {
      "commit": "60b6852218957fc087c04b7e1254a493039f5856",
      "tree": "f46c6340b96868dd7759fdf5276eb05afe2f1377",
      "parents": [
        "3f414ce3b14e09869ebe1c668f6ca0b052422064"
      ],
      "author": {
        "name": "XiaoHongbo",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Tue Apr 28 17:51:20 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 28 17:51:20 2026 +0800"
      },
      "message": "[lance] Fix ArrayIndexOutOfBoundsException when reading vector search results with Lance format (#7719)\n\nReading vector search results on lance format throws\n`java.lang.ArrayIndexOutOfBoundsException: Index 2 out of bounds for\nlength 2`. This PR fixes the issue so that lance format can work well in vector search"
    },
    {
      "commit": "3f414ce3b14e09869ebe1c668f6ca0b052422064",
      "tree": "61dace9e194a1df58e4b71aaf88183673b6658d2",
      "parents": [
        "81bc068aa87470f3e9175acc848f0dddf22f202b"
      ],
      "author": {
        "name": "Jingsong Lee",
        "email": "jingsonglee0@gmail.com",
        "time": "Tue Apr 28 16:42:32 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 28 16:42:32 2026 +0800"
      },
      "message": "[core] Skip tryUpgrade for pkClusteringOverride since sort order differs from pk order (#7709)"
    },
    {
      "commit": "81bc068aa87470f3e9175acc848f0dddf22f202b",
      "tree": "5c428bd30efba6b02a841b490150c75d218ad784",
      "parents": [
        "8fc7772d2a1b1bbef8853d57c7fc63f79df82d46"
      ],
      "author": {
        "name": "ChengHui Chen",
        "email": "chchen110@gmail.com",
        "time": "Tue Apr 28 16:42:13 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 28 16:42:13 2026 +0800"
      },
      "message": "[python] Support Kerberos authentication for HDFS (#7716)\n\nAdd Kerberos support for HDFS in PyPaimon, aligned with Java Paimon\u0027s\n`SecurityConfiguration`. Supports keytab login (automatic `kinit`) and\nexisting ticket cache mode, with fallback keys `security.principal` /\n`security.keytab` for Java compatibility.\n\nIncludes `SecurityOptions` config class, unit tests, sample script, and\ndocumentation."
    },
    {
      "commit": "8fc7772d2a1b1bbef8853d57c7fc63f79df82d46",
      "tree": "957e45dd78cfe2d37dc0f6a1bc067ae7c99167c7",
      "parents": [
        "f9eb5026e3cd48c9ba16ee7742fb261b47526a79"
      ],
      "author": {
        "name": "jerry",
        "email": "jinglining0@gmail.com",
        "time": "Tue Apr 28 16:41:31 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 28 16:41:31 2026 +0800"
      },
      "message": "[format] Update Vortex JNI to 0.69.0 (#7718)\n\nThis PR upgrades the Vortex JNI integration from 0.24.0 to 0.69.0 and\nadapts Paimon\u0027s Vortex format implementation to the updated Java API and\nprotobuf layout.\n\nIt also fixes several correctness issues found during the upgrade:\n- Write record batches through Arrow IPC byte arrays instead of Arrow\nFFI, so Java can reset Arrow writers after each flush without keeping\nnative-borrowed Arrow buffers alive until file close.\n- Keep the Vortex writer close path robust by attempting to close both\nthe native writer and Arrow writer even when flush fails, while\npreserving suppressed cleanup exceptions.\n- Support projected reads that include virtual row-tracking fields by\nreading only physical columns and mapping them back by field id/name.\n- Update Vortex scalar, dtype, expression, temporal metadata, and JNI\nwrappers for 0.69.0."
    },
    {
      "commit": "f9eb5026e3cd48c9ba16ee7742fb261b47526a79",
      "tree": "5dd410b1e440bf56078dd5ef6b943c5cd20c39aa",
      "parents": [
        "bf47a5b0755e9ba988d0295cbb0e9d6f84a8db2c"
      ],
      "author": {
        "name": "Kerwin Zhang",
        "email": "xiyu.zk@alibaba-inc.com",
        "time": "Tue Apr 28 14:09:24 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 28 14:09:24 2026 +0800"
      },
      "message": "[spark] Add paimon spark4.1 module (#7648)"
    },
    {
      "commit": "bf47a5b0755e9ba988d0295cbb0e9d6f84a8db2c",
      "tree": "503da91da306184a650cb5c5f16121093708c9f4",
      "parents": [
        "aa5d9e60d1c922eadcc7b67ca50f093a15ecbb30"
      ],
      "author": {
        "name": "XiaoHongbo",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Tue Apr 28 10:19:07 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 28 10:19:07 2026 +0800"
      },
      "message": "[lance] Fix zero-column read error in data evolution with BTree global index (#7714)"
    },
    {
      "commit": "aa5d9e60d1c922eadcc7b67ca50f093a15ecbb30",
      "tree": "a42373d719eef7d12564970a47699dbe243addcb",
      "parents": [
        "dd2273f70d2f5298a3a35a557c6b462f961e3647"
      ],
      "author": {
        "name": "Faiz",
        "email": "wxy407679@antgroup.com",
        "time": "Tue Apr 28 10:12:58 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 28 10:12:58 2026 +0800"
      },
      "message": "[spark] supports reading blob descriptors from different FS with paimon (#7713)"
    },
    {
      "commit": "dd2273f70d2f5298a3a35a557c6b462f961e3647",
      "tree": "23b12f8374ff31ad6ced72422c3a26c786a6101f",
      "parents": [
        "4a64e72e5eb2484e0670154b8d7afdc3d743922b"
      ],
      "author": {
        "name": "jerry",
        "email": "jinglining0@gmail.com",
        "time": "Mon Apr 27 18:29:34 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 27 18:29:34 2026 +0800"
      },
      "message": "[lumina] Rename index identifier `lumina-vector-ann` to `lumina (#7707)\n\nRename the Lumina vector global index identifier from\n`lumina-vector-ann` to\n`lumina` for a cleaner, shorter name. Existing tables created with the\nold\nidentifier continue to work via a\n`LegacyLuminaVectorGlobalIndexerFactory`\nregistered through SPI, so no on-disk migration is required. File naming\non\n  disk is unchanged (the writer has always used the `lumina-` prefix)."
    },
    {
      "commit": "4a64e72e5eb2484e0670154b8d7afdc3d743922b",
      "tree": "a8b8dd5ff09b539fa29727cfa5fd3fcccb588a83",
      "parents": [
        "30d311cf2e5a4ba3f763d48ce244076cabf58938"
      ],
      "author": {
        "name": "JingsongLi",
        "email": "jingsonglee0@gmail.com",
        "time": "Mon Apr 27 17:01:37 2026 +0800"
      },
      "committer": {
        "name": "JingsongLi",
        "email": "jingsonglee0@gmail.com",
        "time": "Mon Apr 27 17:01:37 2026 +0800"
      },
      "message": "[python] Minor refactor for S3 options in pyarrow_file_io\n"
    },
    {
      "commit": "30d311cf2e5a4ba3f763d48ce244076cabf58938",
      "tree": "288290a912b85e670d9af44289bfe58f703c9d5d",
      "parents": [
        "4a267491269ad587bc7bba2fc6104385aa7ac1ab"
      ],
      "author": {
        "name": "Colin",
        "email": "hansichan.crypto@gmail.com",
        "time": "Mon Apr 27 16:55:07 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 27 16:55:07 2026 +0800"
      },
      "message": "[python] Support S3 options in filesystem catalog (#7712)\n\nSupport authenticated S3 warehouses in PyPaimon filesystem catalog by\naccepting Paimon-compatible S3 option names such as `s3.access-key`,\n`s3.secret-key`, `s3.endpoint`, and temporary credential/session-token\nsettings. Also keeps compatibility with existing `fs.s3.*` PyPaimon\noption names and documents the filesystem catalog S3 usage."
    },
    {
      "commit": "4a267491269ad587bc7bba2fc6104385aa7ac1ab",
      "tree": "1b044c640755b8d0c9495440e3f0e497424ca0d7",
      "parents": [
        "cec7050bfb1b43e5e5f8e842116520c91f8c2526"
      ],
      "author": {
        "name": "0dunay0",
        "email": "bicorn@yelp.com",
        "time": "Mon Apr 27 01:34:19 2026 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 27 14:34:19 2026 +0800"
      },
      "message": "[iceberg] Fix compaction commits emitting \u0027overwrite\u0027 instead of \u0027replace\u0027 in Iceberg snapshot operation (#7702)\n\nWhen Paimon compacts an LSM table with the Iceberg metadata committer\nenabled, the resulting Iceberg snapshot has its `operation` field set to\n`\"overwrite\"`. According to the [Iceberg\nspec](https://iceberg.apache.org/spec/#snapshots), compaction and file\nrewrite operations should use `\"replace\"` instead. Using `\"overwrite\"`\nmakes downstream consumers like Polaris REST catalogs and query engines\nthink the compaction was a data mutation, which is incorrect.\n\nThe problem was in\n`IcebergCommitCallback.createWithDeleteManifestFileMetas()`, which\nhardcoded `IcebergSnapshotSummary.OVERWRITE` whenever files were\nremoved, without checking whether the commit was actually a compaction.\n\nThe fix:\n- Add an `IcebergSnapshotSummary.REPLACE` constant for the `\"replace\"`\noperation\n- Pass `Snapshot.CommitKind` into `createWithDeleteManifestFileMetas()`\nand use `REPLACE` when `commitKind \u003d\u003d COMPACT`, `OVERWRITE` otherwise\n- Add an assertion in `IcebergCompatibilityTest.testDeleteImpl()` to\nverify that compaction commits produce `operation: \"replace\"` in the\nIceberg snapshot metadata"
    },
    {
      "commit": "cec7050bfb1b43e5e5f8e842116520c91f8c2526",
      "tree": "8aca708a01a7cdc7cd5f7845a119f7d3d45bc740",
      "parents": [
        "8a96ee93bed618760888b6a9441236cc070f02d9"
      ],
      "author": {
        "name": "XiaoHongbo",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Mon Apr 27 14:32:12 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 27 14:32:12 2026 +0800"
      },
      "message": "[python] Fix pathlib.Path producing backslashes paths on Windows (#7700)"
    },
    {
      "commit": "8a96ee93bed618760888b6a9441236cc070f02d9",
      "tree": "61f0e71b8cdbc8d3abfd59c3ce3bf7a992924f90",
      "parents": [
        "9f9b8db42acc56d7fbc4a025692d9275019b424e"
      ],
      "author": {
        "name": "XiaoHongbo",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Mon Apr 27 11:47:13 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 27 11:47:13 2026 +0800"
      },
      "message": "[lance] Lance format support row tracking (#7698)"
    },
    {
      "commit": "9f9b8db42acc56d7fbc4a025692d9275019b424e",
      "tree": "fcb19653eb2e353fb07606e2232878c17de493eb",
      "parents": [
        "9807fe413d6c0dca21d5e325207b0ff5b2e07b89"
      ],
      "author": {
        "name": "XiaoHongbo",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Sun Apr 26 14:30:38 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Apr 26 14:30:38 2026 +0800"
      },
      "message": "[core] Skip partition-specific table-default options for non-partitioned tables (#7681)"
    },
    {
      "commit": "9807fe413d6c0dca21d5e325207b0ff5b2e07b89",
      "tree": "b355c2afdca4442a5a37bdf133f1783d2342d695",
      "parents": [
        "ec1552814613a46b42c7c985e85ec26e441ac675"
      ],
      "author": {
        "name": "XiaoHongbo",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Sun Apr 26 12:57:18 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Apr 26 12:57:18 2026 +0800"
      },
      "message": "[core] Skip file index for changelog files (#7697)\n\nChangelog files are sequentially consumed by streaming readers, file\nindex (bloom filter/bitmap) provides no benefit. Skip generating .index\nfiles for changelog to avoid unnecessary storage overhead and potential\norphan files after changelog expiration. These orphan .index files\nincrease I/O pressure on the orphan file clean job."
    },
    {
      "commit": "ec1552814613a46b42c7c985e85ec26e441ac675",
      "tree": "42afaae9ebb317389d9a9150e0ba6f1a658d03f1",
      "parents": [
        "e5214cc7d9a362a05a3918184ee730ccd3952ec4"
      ],
      "author": {
        "name": "umi",
        "email": "55790489+discivigour@users.noreply.github.com",
        "time": "Fri Apr 24 08:55:09 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 24 08:55:09 2026 +0800"
      },
      "message": "[core] Fix sort by sequence.field.sort-order when writing pk table. (#7684)"
    },
    {
      "commit": "e5214cc7d9a362a05a3918184ee730ccd3952ec4",
      "tree": "3c8bac8ce7365e21f913e8eb84c3f843d8f326fe",
      "parents": [
        "ed1661e7402a5a75b4f035a7bd39f642eb636be0"
      ],
      "author": {
        "name": "Oleksandr Nitavskyi",
        "email": "oleksandr.nitavskyi@datadoghq.com",
        "time": "Fri Apr 24 02:46:29 2026 +0200"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 24 08:46:29 2026 +0800"
      },
      "message": "[iceberg] Fix NPE in IcebergRestMetadataCommitter when table has no snapshots (#7688)"
    },
    {
      "commit": "ed1661e7402a5a75b4f035a7bd39f642eb636be0",
      "tree": "b1d7604625317d6189e6f873d12e3801056e194c",
      "parents": [
        "9df16bfec441320f663048a385b58f330938af93"
      ],
      "author": {
        "name": "Kerwin Zhang",
        "email": "xiyu.zk@alibaba-inc.com",
        "time": "Fri Apr 24 08:44:46 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 24 08:44:46 2026 +0800"
      },
      "message": "[spark] Fix ArrayIndexOutOfBoundsException reading empty partitioned format table (#7692)"
    },
    {
      "commit": "9df16bfec441320f663048a385b58f330938af93",
      "tree": "50a955c1b5008d060051ca1c27065fdb4b2e1241",
      "parents": [
        "fe359ed41416bac826d45853b7ca7a768489e962"
      ],
      "author": {
        "name": "Zouxxyy",
        "email": "zouxinyu.zxy@alibaba-inc.com",
        "time": "Fri Apr 24 08:44:34 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 24 08:44:34 2026 +0800"
      },
      "message": "[test] Fix FormatTable directory creation in RESTFileSystemCatalog (#7693)"
    },
    {
      "commit": "fe359ed41416bac826d45853b7ca7a768489e962",
      "tree": "d8aeed7872cd1a3089d361f9c9ce5a90939f705e",
      "parents": [
        "8fbb498e3c38476395fbf7a14c4378b6ba5b91a4"
      ],
      "author": {
        "name": "jerry",
        "email": "jinglining0@gmail.com",
        "time": "Thu Apr 23 20:00:47 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 23 20:00:47 2026 +0800"
      },
      "message": "[index] vector search support predict in python (#7691)"
    },
    {
      "commit": "8fbb498e3c38476395fbf7a14c4378b6ba5b91a4",
      "tree": "3d742cb83a54610b3a22d558dc6353660dda75e8",
      "parents": [
        "def22dcf24ceb1660a8f94ec25cd4c2999181405"
      ],
      "author": {
        "name": "Jingsong Lee",
        "email": "jingsonglee0@gmail.com",
        "time": "Thu Apr 23 17:13:32 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 23 17:13:32 2026 +0800"
      },
      "message": "[core] Introduce \u0027compaction.small-file-ratio\u0027 (#7690)\n\nThe ratio of target file size. Files whose size is smaller than\ntarget-file-size compaction.small-file-ratio will be picked for\ncompaction rewriting. This avoids compacting the same file repeatedly\ndue to compression inaccuracy causing output files to be slightly\nsmaller than the target size."
    },
    {
      "commit": "def22dcf24ceb1660a8f94ec25cd4c2999181405",
      "tree": "1743c5c81a1576902aa8a5f4bbb7d760045a5fe4",
      "parents": [
        "08c8a2e60030ea11f90bd14af5374de130f67320"
      ],
      "author": {
        "name": "Jingsong Lee",
        "email": "jingsonglee0@gmail.com",
        "time": "Wed Apr 22 21:36:13 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 22 21:36:13 2026 +0800"
      },
      "message": "[core] Enhance ArrowBundleWriter to more batch scenarios (#7678)\n\nEnhance `ArrowBundleWriter`:\n- Enhance `VectorizedBundleRecords` input for batch processing."
    },
    {
      "commit": "08c8a2e60030ea11f90bd14af5374de130f67320",
      "tree": "674577faed47301a7005c850bf2a83cfe00bb0ff",
      "parents": [
        "6ed44fe6268fe5a3f662f147a32415f0807e9cd2"
      ],
      "author": {
        "name": "linchen1101",
        "email": "1309571797@qq.com",
        "time": "Wed Apr 22 20:46:22 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 22 20:46:22 2026 +0800"
      },
      "message": "[core] Optimize the printing of error logs for snapshot expiration (#7654)"
    },
    {
      "commit": "6ed44fe6268fe5a3f662f147a32415f0807e9cd2",
      "tree": "a2dc28a6a107569f1ba6ec306d38c7eb8adf30ec",
      "parents": [
        "ce0836d13c1a58c42fb9da21004de5289d953d92"
      ],
      "author": {
        "name": "XiaoHongbo",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Tue Apr 21 20:23:29 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 21 20:23:29 2026 +0800"
      },
      "message": "[core] Optimize error message when creating PK table with row-tracking enabled  (#7679)"
    },
    {
      "commit": "ce0836d13c1a58c42fb9da21004de5289d953d92",
      "tree": "258421d60addba0397a60e853b25dde38723d67b",
      "parents": [
        "04c8c521c4ac83c7ab57a0f08dab8ea45b29cd73"
      ],
      "author": {
        "name": "Faiz",
        "email": "wxy407679@antgroup.com",
        "time": "Tue Apr 21 17:04:04 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 21 17:04:04 2026 +0800"
      },
      "message": "[docs] warn user the potential inaccuracy of global index. (#7677)"
    },
    {
      "commit": "04c8c521c4ac83c7ab57a0f08dab8ea45b29cd73",
      "tree": "b3e83b0929bc6333fe5e085c1fe14c614570c0ce",
      "parents": [
        "3c2620af1395a154e34de3909440231fb3c5196b"
      ],
      "author": {
        "name": "YeJunHao",
        "email": "41894543+leaves12138@users.noreply.github.com",
        "time": "Mon Apr 20 21:34:42 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 20 21:34:42 2026 +0800"
      },
      "message": "[docs] Remove External-Storage Descriptor Fields and MERGE INTO Support sections from blob docs (#7676)"
    },
    {
      "commit": "3c2620af1395a154e34de3909440231fb3c5196b",
      "tree": "a7720b7f5caa9589280d2897109ffe04a52d593e",
      "parents": [
        "c8a0d2af47372893f872da9aa7235ac9ed922835"
      ],
      "author": {
        "name": "LsomeYeah",
        "email": "94825748+LsomeYeah@users.noreply.github.com",
        "time": "Mon Apr 20 12:15:22 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 20 12:15:22 2026 +0800"
      },
      "message": "[common] handle tmp file when classifying file type (#7675)"
    },
    {
      "commit": "c8a0d2af47372893f872da9aa7235ac9ed922835",
      "tree": "8253987ff40861a2982a27ba8f33dfa78a921c2b",
      "parents": [
        "05c2d3df0a23e3751b3c892a84a679d226a78c92"
      ],
      "author": {
        "name": "Dapeng Sun(孙大鹏)",
        "email": "dapeng.sdp@alibaba-inc.com",
        "time": "Mon Apr 20 09:45:26 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 20 09:45:26 2026 +0800"
      },
      "message": "[jindo] Extract doDelete() template method from HadoopCompliantFileIO (#7668)\n\nRefactor `HadoopCompliantFileIO.delete()` to delegate to a new\n`protected doDelete(Path, boolean)` template method. This allows\nsubclasses (e.g. JindoFileIO) to intercept delete operations\n(trash/recycle-bin, audit logging, etc.) while retaining access to the\nreal filesystem delete via `doDelete()`."
    },
    {
      "commit": "05c2d3df0a23e3751b3c892a84a679d226a78c92",
      "tree": "5afd7cb1119f4f0d343d7cd7ca0ed04c3683ffb6",
      "parents": [
        "aa66e398c54507ddb74293693cb27bd2585ed724"
      ],
      "author": {
        "name": "XiaoHongbo",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Mon Apr 20 09:44:48 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 20 09:44:48 2026 +0800"
      },
      "message": "[python] Fix blob read OOM by wrapping lazy stream in BlobRef (#7672)"
    },
    {
      "commit": "aa66e398c54507ddb74293693cb27bd2585ed724",
      "tree": "f9a808c48b3db0785b4c198dc09b5583e7e1c64f",
      "parents": [
        "aa6fff57ef20eacb1d4e20adbaa1d3ed4b8febff"
      ],
      "author": {
        "name": "XiaoHongbo",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Sun Apr 19 19:07:53 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Apr 19 19:07:53 2026 +0800"
      },
      "message": "[deploy] Fix JDK 17 deploy script to use renamed Spark4 module artifactIds (#7669)"
    },
    {
      "commit": "aa6fff57ef20eacb1d4e20adbaa1d3ed4b8febff",
      "tree": "2bdf814aad5434f64e1ad6c4eb92eb14b5733529",
      "parents": [
        "6f6dcc9739d3758d453065dab5df20feb7f4e914"
      ],
      "author": {
        "name": "Zouxxyy",
        "email": "zouxinyu.zxy@alibaba-inc.com",
        "time": "Sat Apr 18 08:29:16 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Apr 18 08:29:16 2026 +0800"
      },
      "message": "[spark] Support CREATE TABLE LIKE (#7663)\n\nSupport `CREATE TABLE LIKE` in Paimon Spark integration on Spark 3.4+.\n\nBehavior:\n- `SparkCatalog`: rewrite `CREATE TABLE LIKE` to the Paimon command,\nexcept `STORED AS`, which remains unsupported.\n- If `USING` is omitted in `SparkCatalog`, inherit the source table\nprovider.\n- `SparkGenericCatalog`: rewrite only when `USING paimon` is specified;\notherwise keep Spark native behavior.\n- Copy source table comment and properties only when source and target\nproviders match. If providers differ, keep only the comment and log a\nwarning."
    },
    {
      "commit": "6f6dcc9739d3758d453065dab5df20feb7f4e914",
      "tree": "7f91c9b33a3becd5a453febeefad67354b74c428",
      "parents": [
        "9a301e6671d69b47270825c343edf2510d70d734"
      ],
      "author": {
        "name": "XiaoHongbo",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Fri Apr 17 21:36:00 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 17 21:36:00 2026 +0800"
      },
      "message": "[core] Do not pushdown limit when non partition filter is present (#7665)\n\nIf limit pushdown is applied before non-partition filters are evaluated,\nscanning may stop too early and return fewer rows than requested. To fix\nthis, limit pushdown is disabled when non-partition filters exist."
    },
    {
      "commit": "9a301e6671d69b47270825c343edf2510d70d734",
      "tree": "744c4df43356e9860e2dc2a85f689dd029d2c870",
      "parents": [
        "ee28b547c0b7c8c0a7ed0530cb08e13aa9ed5c8b"
      ],
      "author": {
        "name": "littlecoder04",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Thu Apr 16 21:47:48 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 16 21:47:48 2026 +0800"
      },
      "message": "[python] Fix index_manifest not inherited from previous snapshot with append commit (#7662)"
    },
    {
      "commit": "ee28b547c0b7c8c0a7ed0530cb08e13aa9ed5c8b",
      "tree": "0b982040eba7314170fd39feac4889ab416f3e1b",
      "parents": [
        "26f266b07bc0aaabb8ef2fa1604fad9860ac09a1"
      ],
      "author": {
        "name": "Jiajia Li",
        "email": "plusplusjiajia@alibaba-inc.com",
        "time": "Thu Apr 16 16:12:22 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 16 16:12:22 2026 +0800"
      },
      "message": "[python] Fix Identifier.is_system_table() (#7651)"
    },
    {
      "commit": "26f266b07bc0aaabb8ef2fa1604fad9860ac09a1",
      "tree": "22ff7e6b9ad61977814eed3c8de6a0f00094f34d",
      "parents": [
        "afe48b26382ed70e6f6cd9a3303d432da70c8ba5"
      ],
      "author": {
        "name": "Faiz",
        "email": "wxy407679@antgroup.com",
        "time": "Thu Apr 16 16:12:02 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 16 16:12:02 2026 +0800"
      },
      "message": "[core] fix BlobDescriptor toString format (#7653)"
    },
    {
      "commit": "afe48b26382ed70e6f6cd9a3303d432da70c8ba5",
      "tree": "711ee29e12dc75e4e2afed6b5b84f5274df82e8e",
      "parents": [
        "03394f996fb692afbc3ccb438ff4334124546901"
      ],
      "author": {
        "name": "littlecoder04",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Thu Apr 16 15:24:39 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 16 15:24:39 2026 +0800"
      },
      "message": "[python] Generate date-stamped dev package alongside standard sdist (#7661)"
    },
    {
      "commit": "03394f996fb692afbc3ccb438ff4334124546901",
      "tree": "af0642edc826fa6cc631220c60596080416a7a2e",
      "parents": [
        "cf96eed0d70b88a877bd999f422499ec64d30ee3"
      ],
      "author": {
        "name": "littlecoder04",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Thu Apr 16 13:20:01 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 16 13:20:01 2026 +0800"
      },
      "message": "[core][python] Fix commitKind mislabeled as OVERWRITE for data-evolution merge into (#7639)"
    },
    {
      "commit": "cf96eed0d70b88a877bd999f422499ec64d30ee3",
      "tree": "7c89ece2c5b50b1cd9c5c46cfbb7016446ae9417",
      "parents": [
        "79e1481d43ecdd2f6a573f0971a1df62495b8751"
      ],
      "author": {
        "name": "littlecoder04",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Thu Apr 16 12:09:19 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 16 12:09:19 2026 +0800"
      },
      "message": "[deploy] Fix empty javadoc jar when building with JDK 8 (#7658)\n\nThe --ignore-source-errors flag is only supported by JDK 9+. Move it to\na JDK 9+ activated profile so that JDK 8 builds can generate javadoc\ncorrectly.\n\n### Before：\nOnly META-INF files are generated in doc jar with JDK8\nDoc jar looks good in JDK11 and JDK17.\n\n### After:\nDoc jar looks good in JDK8, JDK11 and JDK17."
    },
    {
      "commit": "79e1481d43ecdd2f6a573f0971a1df62495b8751",
      "tree": "9e0e82d58eac4ec1d9760cb1036f00293516332d",
      "parents": [
        "da8dde093df9afb9fdf5221c5de0ab4ddcebaef4"
      ],
      "author": {
        "name": "lszskye",
        "email": "57179283+lszskye@users.noreply.github.com",
        "time": "Thu Apr 16 09:00:22 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 16 09:00:22 2026 +0800"
      },
      "message": "[core] Fix FieldListaggAgg distinct mode incorrectly skipping values (#7652)\n\nFix a bug in `FieldListaggAgg.agg()` where the distinct deduplication\nlogic\nincorrectly uses substring matching (`BinaryString.contains()`) instead\nof\nexact token matching, causing valid values to be silently dropped.\n\nFor example, with delimiter `,`:\n- accumulator \u003d `\"abc,def,asd\"`\n- inputField \u003d `\"ab,xy\"`\n- Token `\"ab\"` is incorrectly skipped because\n`\"abc,def,asd\".contains(\"ab\")`\n  returns `true`\n- Result: `\"abc,def,asd,xy\"` (missing `\"ab\"`)\n- Expected: `\"abc,def,asd,ab,xy\"`"
    },
    {
      "commit": "da8dde093df9afb9fdf5221c5de0ab4ddcebaef4",
      "tree": "94d13e927255129eb9b7c7d4296ab9a19f5e78b0",
      "parents": [
        "77ad5fc17dfe3782b4e11e790fbe7940f38296e9"
      ],
      "author": {
        "name": "Yann Byron",
        "email": "biyan900116@gmail.com",
        "time": "Wed Apr 15 15:24:00 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 15 15:24:00 2026 +0800"
      },
      "message": "[spark] Support column aliases and comments for Paimon views (#7625)"
    },
    {
      "commit": "77ad5fc17dfe3782b4e11e790fbe7940f38296e9",
      "tree": "2e86b48693eeb9a53dde3812f03a695e6559cc9f",
      "parents": [
        "a45be9f0d2775790245b8b6008f8c7d3f34265ab"
      ],
      "author": {
        "name": "shyjsarah",
        "email": "44659226+shyjsarah@users.noreply.github.com",
        "time": "Tue Apr 14 21:16:01 2026 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 15 12:16:01 2026 +0800"
      },
      "message": "[rest] Align view permission handling with table in RESTCatalog. (#7641)"
    },
    {
      "commit": "a45be9f0d2775790245b8b6008f8c7d3f34265ab",
      "tree": "09c537d807132b1f52a1b35e488286a1f319819f",
      "parents": [
        "7c93bd7206fa01c2896cad1a2118c8039018d40d"
      ],
      "author": {
        "name": "lxy",
        "email": "38709059+lxy-9602@users.noreply.github.com",
        "time": "Wed Apr 15 12:03:11 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 15 12:03:11 2026 +0800"
      },
      "message": "[core] Fix initRow in AggregateMergeFunction and PartialUpdateMergeFunction to set all fields including nullable ones (#7645)"
    },
    {
      "commit": "7c93bd7206fa01c2896cad1a2118c8039018d40d",
      "tree": "6c14ef92334087b85c25074d7729af1e2d7bbdd6",
      "parents": [
        "6e3a7d3db763df46d13d0d36e7127a234d940bad"
      ],
      "author": {
        "name": "Jingsong Lee",
        "email": "jingsonglee0@gmail.com",
        "time": "Tue Apr 14 17:48:36 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 14 17:48:36 2026 +0800"
      },
      "message": "[core] Avoid key bytes OOM in ClusteringFileRewriter.sortAndRewriteFile (#7642)\n\nAvoid key bytes OOM in ClusteringFileRewriter.sortAndRewriteFile.\nRemoving the in-memory List\u003cbyte[]\u003e collectedKeys and the batchPutIndex\nmethod eliminates the unbounded memory accumulation."
    },
    {
      "commit": "6e3a7d3db763df46d13d0d36e7127a234d940bad",
      "tree": "682b4049be6c913388db8581ff035956dfbb2aca",
      "parents": [
        "ade5d201c994fbe668a446acc502c6f2741f8007"
      ],
      "author": {
        "name": "dependabot[bot]",
        "email": "49699333+dependabot[bot]@users.noreply.github.com",
        "time": "Tue Apr 14 11:36:25 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 14 11:36:25 2026 +0800"
      },
      "message": "Bump org.apache.logging.log4j:log4j-1.2-api from 2.25.3 to 2.25.4 (#7640)"
    },
    {
      "commit": "ade5d201c994fbe668a446acc502c6f2741f8007",
      "tree": "2b19099e2b7c0657694ee325f6522957bf3a3654",
      "parents": [
        "dc061b446eceb54748bc9833a080bf69c5ef8aa5"
      ],
      "author": {
        "name": "LsomeYeah",
        "email": "94825748+LsomeYeah@users.noreply.github.com",
        "time": "Mon Apr 13 18:53:54 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 13 18:53:54 2026 +0800"
      },
      "message": "[core] Improve error message for fallback branch schema mismatch (#7633)"
    },
    {
      "commit": "dc061b446eceb54748bc9833a080bf69c5ef8aa5",
      "tree": "d94853c80da2a45e7beeb6a990cc0c0b43f51bf3",
      "parents": [
        "23985653a1ee0e5716aaf48e016ce3e029249a07"
      ],
      "author": {
        "name": "LsomeYeah",
        "email": "94825748+LsomeYeah@users.noreply.github.com",
        "time": "Mon Apr 13 17:25:11 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 13 17:25:11 2026 +0800"
      },
      "message": "[common] move FileType from paimon-core to paimon-common (#7632)"
    },
    {
      "commit": "23985653a1ee0e5716aaf48e016ce3e029249a07",
      "tree": "500934e003b9285cb7dfcde570cc0a311c138100",
      "parents": [
        "a1a8dcb1652222ee87a7555e0044e1d10f3942a4"
      ],
      "author": {
        "name": "yuzelin",
        "email": "33053040+yuzelin@users.noreply.github.com",
        "time": "Mon Apr 13 16:56:02 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 13 16:56:02 2026 +0800"
      },
      "message": "[core] Fix DataEvolutionCompactTask potential resource leak (#7634)"
    },
    {
      "commit": "a1a8dcb1652222ee87a7555e0044e1d10f3942a4",
      "tree": "00147b92b287ce0a4643f209014c7907e8dc6957",
      "parents": [
        "52a626983147ab11a3500bd2566a11dd560a1ec2"
      ],
      "author": {
        "name": "Jingsong Lee",
        "email": "jingsonglee0@gmail.com",
        "time": "Mon Apr 13 14:48:03 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 13 14:48:03 2026 +0800"
      },
      "message": "[core] Allow pk-clustering-override without explicit DV for first-row merge engine (#7622)\n\nFirst-row merge engine already has built-in deletion vector semantics,\nso requiring users to explicitly enable deletion-vectors is unnecessary."
    },
    {
      "commit": "52a626983147ab11a3500bd2566a11dd560a1ec2",
      "tree": "4756d7bbd7ee1df9188a70b139c92934dfc77d8c",
      "parents": [
        "dec9ec9811549c3b408b759b510c1538abaaa058"
      ],
      "author": {
        "name": "jerry",
        "email": "jinglining0@gmail.com",
        "time": "Mon Apr 13 13:56:36 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 13 13:56:36 2026 +0800"
      },
      "message": "[index] lumina index build support append model (#7631)"
    },
    {
      "commit": "dec9ec9811549c3b408b759b510c1538abaaa058",
      "tree": "a6899a0e8e4a1926653f0698ef6db9f6fec64425",
      "parents": [
        "f15c510a41e55e9eab94e87c4538ec6f1dd18684"
      ],
      "author": {
        "name": "liziyan",
        "email": "49580493+ziyanTOP@users.noreply.github.com",
        "time": "Mon Apr 13 11:41:34 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 13 11:41:34 2026 +0800"
      },
      "message": "[cdc] fix special regex characters unescaped in excluded table patterns (#7619)"
    },
    {
      "commit": "f15c510a41e55e9eab94e87c4538ec6f1dd18684",
      "tree": "08bd7d3f75507344b94cb9e17f83a13310da27ba",
      "parents": [
        "ee987516c87664768bcab45e882aaac835555575"
      ],
      "author": {
        "name": "YeJunHao",
        "email": "41894543+leaves12138@users.noreply.github.com",
        "time": "Mon Apr 13 11:33:22 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 13 11:33:22 2026 +0800"
      },
      "message": "[common] Replace commons StringUtils usages with Paimon StringUtils (#7603)"
    },
    {
      "commit": "ee987516c87664768bcab45e882aaac835555575",
      "tree": "5f09fa125042e40e2481bb0547397e9b368d3bd6",
      "parents": [
        "369b8d928cc8c35e653dcb3a1205bf305ccf24eb"
      ],
      "author": {
        "name": "jerry",
        "email": "jinglining0@gmail.com",
        "time": "Mon Apr 13 11:20:33 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 13 11:20:33 2026 +0800"
      },
      "message": "[index] system table index add field (#7623)"
    },
    {
      "commit": "369b8d928cc8c35e653dcb3a1205bf305ccf24eb",
      "tree": "a37a22e3049ba270531dd5e082113d31e220181a",
      "parents": [
        "82c3d83ad9959c73a7bdfea0228b7fbcb1ac3dcd"
      ],
      "author": {
        "name": "Zouxxyy",
        "email": "zouxinyu.zxy@alibaba-inc.com",
        "time": "Mon Apr 13 11:15:07 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 13 11:15:07 2026 +0800"
      },
      "message": "[spark] Fix exprId mismatch in MERGE INTO by constructing Join plan directly (#7624)\n\nFix exprId mismatch bug in MERGE INTO when action expressions reference\nboth source\nand target columns (e.g., `COALESCE(src.b, dest.b)`).\n\nReplace Dataset API join (`sourceDS.join(targetDS, ...)`) with manual\n`Join` plan node\nconstruction, following the same pattern as Spark\u0027s\n`RewriteMergeIntoTable`. This\npreserves original exprIds and eliminates the need for position-based\nrebinding."
    },
    {
      "commit": "82c3d83ad9959c73a7bdfea0228b7fbcb1ac3dcd",
      "tree": "4d626651be739fcadd83cdf720cdf68c85fa03f1",
      "parents": [
        "22baf60059a09ff4f09afe4c5ac2b962c68fe975"
      ],
      "author": {
        "name": "littlecoder04",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Sun Apr 12 15:52:13 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Apr 12 15:52:13 2026 +0800"
      },
      "message": "[python] Add conflict detection in shard update (#7630)\n\n### Purpose\nThis PR is a follow-up to #7323. PR #7323 introduced conflict detection\nfor Python data evolution updates, but the shard-update path was not\ncovered.\n\nAs a result, when shard update and compact run concurrently, the shard\nupdate may commit successfully against a stale scan snapshot instead of\nfailing fast. The problem only shows up later during read, with the\nerror:\n`All files in a field merge split should have the same row count`.This\nPR extends the same conflict-detection coverage to the shard-update\npath.\n\n### Tests\n\nrun_compact_conflict_test in run_mixed_tests.sh"
    },
    {
      "commit": "22baf60059a09ff4f09afe4c5ac2b962c68fe975",
      "tree": "c6ca51e16224acdf394b6240ba9d1d17a5307d9a",
      "parents": [
        "6a8167f1d5c682e9f9a04ed9b1443234f5235988"
      ],
      "author": {
        "name": "littlecoder04",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Fri Apr 10 22:11:03 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 10 22:11:03 2026 +0800"
      },
      "message": "[python] Fix duplicate _ROW_ID when file exceeds read batch size (#7626)\n\nWhen a data file contains more than 1024 rows, upsert_by_arrow_with_key\nfails with: `ValueError: Input data contains duplicate _ROW_ID values`.\nThis PR fixes above issue by advancing first_row_id in\n  DataFileBatchReader._assign_row_tracking after each batch."
    },
    {
      "commit": "6a8167f1d5c682e9f9a04ed9b1443234f5235988",
      "tree": "6fc5684727de69e786104189d4947e3f5aaa5822",
      "parents": [
        "72600f9bb5edaeb7845c001b228de25caba6a7a8"
      ],
      "author": {
        "name": "Kerwin Zhang",
        "email": "xiyu.zk@alibaba-inc.com",
        "time": "Fri Apr 10 21:10:09 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 10 21:10:09 2026 +0800"
      },
      "message": "[spark] Support partition statistics in SHOW TABLE EXTENDED PARTITION command (#7612)"
    },
    {
      "commit": "72600f9bb5edaeb7845c001b228de25caba6a7a8",
      "tree": "bde75841073fb415e7f10e8ba8eaba89f13a1262",
      "parents": [
        "f424d261b76753c44af0d9ed35a9342604980d08"
      ],
      "author": {
        "name": "littlecoder04",
        "email": "xiaohongbo.xhb@alibaba-inc.com",
        "time": "Fri Apr 10 16:18:23 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 10 16:18:23 2026 +0800"
      },
      "message": "[core] Fix blob read failure after alter table and compaction (#7618)\n\nReading a blob table fails after ALTER TABLE SET and compaction with:\n`java.lang.IllegalArgumentException: All files in this bunch should have\nthe same schema id.` This PR fixes the above issue by do not check\nschemaId for blob file and do not allow rename for blob col."
    },
    {
      "commit": "f424d261b76753c44af0d9ed35a9342604980d08",
      "tree": "4ecb249764667d0ec1bbb6d1600d64e616e925cb",
      "parents": [
        "fe9b975c868cd1f81a81c2d68041348a31670d56"
      ],
      "author": {
        "name": "LsomeYeah",
        "email": "94825748+LsomeYeah@users.noreply.github.com",
        "time": "Fri Apr 10 13:08:50 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 10 13:08:50 2026 +0800"
      },
      "message": "[core] Add FileType enum for classifying Paimon files (#7613)"
    },
    {
      "commit": "fe9b975c868cd1f81a81c2d68041348a31670d56",
      "tree": "e2c4af9c8c0abb7cdf5754cbc407c19b74a20388",
      "parents": [
        "1e1c9f4ed8c28420e8248862b15b03fce9b7edca"
      ],
      "author": {
        "name": "tsreaper",
        "email": "tsreaper96@gmail.com",
        "time": "Fri Apr 10 13:06:47 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 10 13:06:47 2026 +0800"
      },
      "message": "[core] When scan.primary-branch is set, allow primary branch to be an append only table (#7614)\n\nIn `scan.fallback-branch`, we allow main branch to be append only, and\nfallback branch to have primary key. So in `scan.primary-branch`, we\nshould allow main branch to have primary key, and primary branch to be\nappend only."
    },
    {
      "commit": "1e1c9f4ed8c28420e8248862b15b03fce9b7edca",
      "tree": "7d1b9cbebd7504af8b35f8913ad35fcbd80097f7",
      "parents": [
        "4557599a9ae6ea8e84365b66fb5f6f91882d85e8"
      ],
      "author": {
        "name": "jerry",
        "email": "jinglining0@gmail.com",
        "time": "Fri Apr 10 09:32:14 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 10 09:32:14 2026 +0800"
      },
      "message": "[index] Support incremental lumina index build in Flink (#7611)"
    },
    {
      "commit": "4557599a9ae6ea8e84365b66fb5f6f91882d85e8",
      "tree": "4541feecbfc52980f965fb39de8158d23e9f84db",
      "parents": [
        "ef2d480b6d6a999387ca38ca9ffe02f91c2d47e4"
      ],
      "author": {
        "name": "Faiz",
        "email": "wxy407679@antgroup.com",
        "time": "Thu Apr 09 22:24:24 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 09 22:24:24 2026 +0800"
      },
      "message": "[flink] Fix compatibility issues of StreamExecutionEnv (#7615)"
    },
    {
      "commit": "ef2d480b6d6a999387ca38ca9ffe02f91c2d47e4",
      "tree": "e8798411f87a44ea635a263a854094fdc3763e8b",
      "parents": [
        "70110f229aa0cf30df0cef81cce7cfb219dc2a58"
      ],
      "author": {
        "name": "Weitai Li",
        "email": "l8261793@gmail.com",
        "time": "Thu Apr 09 22:23:54 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 09 22:23:54 2026 +0800"
      },
      "message": "[core] Add hashCode to AbstractFileStoreTable for consistency with equals (#7616)"
    },
    {
      "commit": "70110f229aa0cf30df0cef81cce7cfb219dc2a58",
      "tree": "173bf0f86f9b19aa4ac7e00585d49051d4b3f4a0",
      "parents": [
        "e69109d1229ad1bda8f8d529ef047c3c1acd277f"
      ],
      "author": {
        "name": "littlecoder04",
        "email": "xiaohongbo04@gmail.com",
        "time": "Thu Apr 09 20:49:39 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 09 20:49:39 2026 +0800"
      },
      "message": "[spark] Fix RowTrackingTable reading by adding an _ROW_ID existence check before adding _ROW_ID (#7606)"
    },
    {
      "commit": "e69109d1229ad1bda8f8d529ef047c3c1acd277f",
      "tree": "0dd69bfa89827372f4cbc67ca3c7e2f0f731a206",
      "parents": [
        "f6cb6c849fc4ade93cf3366b79a4b2955b8919a2"
      ],
      "author": {
        "name": "littlecoder04",
        "email": "xiaohongbo04@gmail.com",
        "time": "Thu Apr 09 13:26:25 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 09 13:26:25 2026 +0800"
      },
      "message": "[python] Add file_source assertion in row-tracking commit to fail fast on missing file_source (#7610)\n\nIf file_source is None, the condition `None \u003d\u003d 0` silently evaluates to\nFalse, skipping row ID assignment. This causes nextRowId not to\nincrement, leading to first_row_id conflicts. So this PR adds a file_source\ncheck like Java."
    },
    {
      "commit": "f6cb6c849fc4ade93cf3366b79a4b2955b8919a2",
      "tree": "64aa175e09d239264abcf64da72a6fac766641cd",
      "parents": [
        "1bf23b1e5d98ce33c3ff32076ed9ae616b819dcf"
      ],
      "author": {
        "name": "jerry",
        "email": "jinglining0@gmail.com",
        "time": "Wed Apr 08 11:28:58 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 08 11:28:58 2026 +0800"
      },
      "message": "[python] Fix lumina vector index code in Python (#7609)"
    },
    {
      "commit": "1bf23b1e5d98ce33c3ff32076ed9ae616b819dcf",
      "tree": "2b65d04f0813d10a8285eb6a7da8084f047c3fad",
      "parents": [
        "f5b9decbff14751b83fd029f8346928c951166af"
      ],
      "author": {
        "name": "Faiz",
        "email": "wxy407679@antgroup.com",
        "time": "Tue Apr 07 21:53:14 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 07 21:53:14 2026 +0800"
      },
      "message": "[flink] Fix de-merge-into runtimeContext possibly throwing a NoSuchMethod error (#7607)"
    },
    {
      "commit": "f5b9decbff14751b83fd029f8346928c951166af",
      "tree": "0fc908782cd64f736c764282d2056cba7077fa33",
      "parents": [
        "99bb80478f286d5e4889bcaf1e9005b85c8fedf5"
      ],
      "author": {
        "name": "jerry",
        "email": "jinglining0@gmail.com",
        "time": "Tue Apr 07 21:27:49 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 07 21:27:49 2026 +0800"
      },
      "message": "[index] support lumina index in python (#7579)\n\nAdd Lumina vector index read/search support to paimon-python.\nUsage example:\n```python\n  from pypaimon import CatalogFactory\n\n  catalog \u003d CatalogFactory.create({\u0027warehouse\u0027: \u0027/path/to/warehouse\u0027})\n  table \u003d catalog.get_table(\u0027default.my_table\u0027)\n\n  # Step 1: Vector search — find top-5 nearest neighbors\n  builder \u003d table.new_vector_search_builder()\n  builder.with_vector_column(\u0027embedding\u0027)\n  builder.with_query_vector([0.1, 0.2, 0.3, 0.4])\n  builder.with_limit(5)\n  result \u003d builder.execute_local()\n\n  # Step 2: Read the matching rows\n  read_builder \u003d table.new_read_builder()\n  scan \u003d read_builder.new_scan().with_global_index_result(result)\n  plan \u003d scan.plan()\n  arrow_table \u003d read_builder.new_read().to_arrow(plan.splits())\n  print(arrow_table.to_pandas())\n```"
    },
    {
      "commit": "99bb80478f286d5e4889bcaf1e9005b85c8fedf5",
      "tree": "c269c8e176cd218f090e4775663e62f9e7148d3c",
      "parents": [
        "b8cc8cdf650689679921314b405d7887b367421f"
      ],
      "author": {
        "name": "sanshi",
        "email": "43472713+lilei1128@users.noreply.github.com",
        "time": "Tue Apr 07 14:42:44 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 07 14:42:44 2026 +0800"
      },
      "message": "[docs] Add Spark SQL example for fast_forward in manage-branches (#7604)"
    },
    {
      "commit": "b8cc8cdf650689679921314b405d7887b367421f",
      "tree": "7c5b4bdc06b8b207983968f00a3d2190d78d7e97",
      "parents": [
        "0223348b6c62ee6697ac56fe23afb5d3cfac80f9"
      ],
      "author": {
        "name": "Jiajia Li",
        "email": "plusplusjiajia@alibaba-inc.com",
        "time": "Tue Apr 07 11:44:33 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 07 11:44:33 2026 +0800"
      },
      "message": "[python] Fix stats evolution index mapping to use field ID instead of field name (#7593)\n\nSimpleStatsEvolutions._create_index_cast_mapping was using field.name to\nbuild the mapping between table schema and data schema. This breaks\nafter column renames because names change while field IDs remain stable.\nChanged to use field.id, aligning with Java\u0027s\nSchemaEvolutionUtil.createIndexMapping."
    },
    {
      "commit": "0223348b6c62ee6697ac56fe23afb5d3cfac80f9",
      "tree": "c2b7310636d644b67bc7b09175df2685c3771ab6",
      "parents": [
        "2b557d427ad7bfcaf411f0dd89de3cc4e7f1e138"
      ],
      "author": {
        "name": "yuzelin",
        "email": "33053040+yuzelin@users.noreply.github.com",
        "time": "Tue Apr 07 11:43:43 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 07 11:43:43 2026 +0800"
      },
      "message": "[core] Fix TagAutoCreation.forceCreatingSnapshot to use SINK_PROCESS_TIME_ZONE (#7600)\n\nTagAutoCreation.forceCreatingSnapshot in the ProcessTimeExtractor branch\nuses LocalDateTime.now() (machine timezone) to determine whether to\nforce creating a snapshot. When sink.process-time-zone is configured\ndifferently from the machine timezone (e.g. UTC on an Asia/Shanghai\nmachine), the tag creation time is incorrect — it triggers at the\nmachine\u0027s midnight instead of the configured timezone\u0027s midnight."
    },
    {
      "commit": "2b557d427ad7bfcaf411f0dd89de3cc4e7f1e138",
      "tree": "bc66613dad2bcc4b0bab1a533bbb9b68ff7486c6",
      "parents": [
        "9227cf71404ca22e2b336876f98ce74022aefd0e"
      ],
      "author": {
        "name": "Jiajia Li",
        "email": "plusplusjiajia@alibaba-inc.com",
        "time": "Tue Apr 07 09:58:38 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 07 09:58:38 2026 +0800"
      },
      "message": "[python] Fix BlockHandle varlen decoding in SstFileIterator.read_batch (#7597)\n\nread_batch() incorrectly used fixed-width struct.unpack to decode\nBlockHandle, while the SST format uses variable-length encoding. Extract\nshared _parse_block_handle() to ensure both seek_to() and read_batch()\nuse consistent varlen decoding."
    },
    {
      "commit": "9227cf71404ca22e2b336876f98ce74022aefd0e",
      "tree": "b9d287cbf22908077f238504861ef611ebae73ff",
      "parents": [
        "051dc7594564d6871f640e592bf9b961acd0e9d8"
      ],
      "author": {
        "name": "Jingsong Lee",
        "email": "jingsonglee0@gmail.com",
        "time": "Mon Apr 06 20:10:49 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 06 20:10:49 2026 +0800"
      },
      "message": "[python] Introduce DataFusion SQL to PyPaimon (#7599)\n\nThis PR introduces SQL query capabilities in PyPaimon, based on\nPyPaimon-rust + DataFusion."
    },
    {
      "commit": "051dc7594564d6871f640e592bf9b961acd0e9d8",
      "tree": "04c1b46382161ad9c81c35cab442113b3b619952",
      "parents": [
        "bb00b63dc25a1c4a138dce2bd41a6fc00a1721d7"
      ],
      "author": {
        "name": "Kerwin Zhang",
        "email": "xiyu.zk@alibaba-inc.com",
        "time": "Fri Apr 03 16:02:12 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 03 16:02:12 2026 +0800"
      },
      "message": "[core] Support partition and bucket predicate pushdown for BucketsTable (#7592)"
    },
    {
      "commit": "bb00b63dc25a1c4a138dce2bd41a6fc00a1721d7",
      "tree": "d69f7f99f5f3c08947348a9aa411ddeaa8f3ec94",
      "parents": [
        "4a4907d6bcd4e2af46e3bf7fa58dd406b77e7589"
      ],
      "author": {
        "name": "Wenchao Wu",
        "email": "60921147+Stephen0421@users.noreply.github.com",
        "time": "Fri Apr 03 12:36:06 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 03 12:36:06 2026 +0800"
      },
      "message": "[hotfix] Fix incorrect integer comparison to prevent misjudgment in chain table (#7591)"
    },
    {
      "commit": "4a4907d6bcd4e2af46e3bf7fa58dd406b77e7589",
      "tree": "38abddf0409cc8cd81cfa9a6a37211e5f688773e",
      "parents": [
        "409dd85e51065496301aa3db55babf72de250368"
      ],
      "author": {
        "name": "bryndenZh",
        "email": "49369598+bryndenZh@users.noreply.github.com",
        "time": "Thu Apr 02 22:58:38 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 02 22:58:38 2026 +0800"
      },
      "message": "[core] Fix partial-update sequence-group delete mismatch under projected reads (#7586)"
    },
    {
      "commit": "409dd85e51065496301aa3db55babf72de250368",
      "tree": "6fb2dabc912628b048d98784f001606547f4ed8e",
      "parents": [
        "5427a958c73fdbcbb1a30e61b65e9d9c3d0f1b95"
      ],
      "author": {
        "name": "Zouxxyy",
        "email": "zouxinyu.zxy@alibaba-inc.com",
        "time": "Thu Apr 02 21:52:18 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 02 21:52:18 2026 +0800"
      },
      "message": "[spark] Fix view query failure with VARCHAR/CHAR column types (#7585)\n\nFix querying Paimon View fails with \"Cannot up cast from STRING to\nVARCHAR(n)\" when the underlying table has VARCHAR or CHAR columns.\n\nSpark replaces CharType/VarcharType with StringType during V2 table\nresolution, but the view schema still preserves the original types. This\ncauses UpCast(StringType → VarcharType(n)) to fail. The fix uses\nCharVarcharUtils.replaceCharVarcharWithStringInSchema to normalize the\nview schema before applying UpCast."
    },
    {
      "commit": "5427a958c73fdbcbb1a30e61b65e9d9c3d0f1b95",
      "tree": "45403f61a4f36d20e02cbb6b01ebc4f943dafd1c",
      "parents": [
        "654f4e878c168166ded38febf89c8bec0ec34bbf"
      ],
      "author": {
        "name": "XiaoHongbo",
        "email": "1346652787@qq.com",
        "time": "Thu Apr 02 20:03:35 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 02 20:03:35 2026 +0800"
      },
      "message": "[python] Support REST function API in pypaimon (#7559)"
    },
    {
      "commit": "654f4e878c168166ded38febf89c8bec0ec34bbf",
      "tree": "30092f8d2c67d51c6478fedeca8d03c92403f1ae",
      "parents": [
        "6cad835a06425dba7b689413ab5dc4eb2b46def6"
      ],
      "author": {
        "name": "umi",
        "email": "55790489+discivigour@users.noreply.github.com",
        "time": "Thu Apr 02 17:53:35 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 02 17:53:35 2026 +0800"
      },
      "message": "[python] Fix rust toolchain action check (#7583)"
    },
    {
      "commit": "6cad835a06425dba7b689413ab5dc4eb2b46def6",
      "tree": "1121c7b58f3f99ef3b8fdf21580a996b5262bf80",
      "parents": [
        "5571f4abea35d3f03000ff855141216c29536378"
      ],
      "author": {
        "name": "jerry",
        "email": "jinglining0@gmail.com",
        "time": "Thu Apr 02 16:40:10 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 02 16:40:10 2026 +0800"
      },
      "message": "[index] Support vector type in lumina index (#7580)"
    }
  ],
  "next": "5571f4abea35d3f03000ff855141216c29536378"
}
