)]}'
{
  "log": [
    {
      "commit": "726edbf54bf816c041c7e23967b8fa0b9a86d8ab",
      "tree": "df081c73f25ac15a012bc4d3e86113173390c201",
      "parents": [
        "5d6c9726748b0a5c0f19addbc183f18e994c1f6a"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Fri Apr 10 00:30:26 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 10 00:30:26 2026 -0400"
      },
      "message": "[Docs] Complete API reference for tvm.relax backend and testing modules (#19378)\n\nas per title"
    },
    {
      "commit": "5d6c9726748b0a5c0f19addbc183f18e994c1f6a",
      "tree": "ba48f044d528ea614bfe817d5d0ab975f8db3dca",
      "parents": [
        "4f5a17a4ae4f04061bb7ed4327922bc831bb0d3a"
      ],
      "author": {
        "name": "Soowon Jeong",
        "email": "soowon1106@gmail.com",
        "time": "Fri Apr 10 07:31:19 2026 +0900"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 09 18:31:19 2026 -0400"
      },
      "message": "[BugFix][Relax] Fix ONNX `get_converter` index underflow for old opsets (#19376)\n\n## Description\n\n`OnnxOpConverter.get_converter()` has an index underflow bug: when a\nmodel\u0027s opset is below all implemented converter versions, the selection\nindex wraps to -1 (Python negative indexing), silently picking the\n**latest** implementation instead of the earliest.\n\n### Root cause\n\n```python\nversions \u003d sorted(versions + [opset])\nversion \u003d versions[max([i for i, v in enumerate(versions) if v \u003d\u003d opset]) - 1]\n#                                                                          ^^^\n#         opset\u003d11, versions\u003d[11, 13, 18] → index\u003d0 → 0-1\u003d-1 → versions[-1]\u003d18\n```\n\n### Impact\n\n14 operators affected. For opset 11-12 models, these ops silently\nproduce wrong results:\n\n| Operator | Impl versions | Opset 11 dispatches to | Correct |\n|----------|:---:|:---:|:---:|\n| ReduceMean | [13, 18] | **v18** | v13 |\n| ReduceL1/L2 | [13, 18] | **v18** | v13 |\n| ReduceLogSum | [13, 18] | **v18** | v13 |\n| ReduceLogSumExp | [13, 18] | **v18** | v13 |\n| ReduceProd | [13, 18] | **v18** | v13 |\n| ReduceSumSquare | [13, 18] | **v18** | v13 |\n| ReduceMax/Min | [11, 18] | correct | correct |\n| Pad, Scatter, ScatterND, RoiAlign | various | **wrong** | — |\n\nThe v18 implementations read `axes` from **inputs**, but opset 11-12\npasses `axes` as **attributes** — so axes becomes None and the op\nreduces over all dimensions.\n\nExample: `ReduceMean(axes\u003d[2,3])` on shape `(2,3,4,4)`:\n- Before fix: output shape `(1,1,1,1)` (wrong, all-axis reduction)\n- After fix: output shape `(2,3,1,1)` (correct)\n\n### Fix\n\nReplace the index arithmetic with an explicit filter:\n```python\ncandidates \u003d [v for v in impl_versions if v \u003c\u003d opset]\nversion \u003d max(candidates) if candidates else impl_versions[0]\n```\n\n### Testing\n\nAdded opset 11 test cases for 7 Reduce operators 
in\n`test_all_reduce_funcs_axes_attr`. All 20 new tests pass. Existing tests\nunaffected (570 pass, 14 pre-existing failures in axes_input/topk\nunrelated to this change).\n\n```bash\npytest tests/python/relax/test_frontend_onnx.py -k \"axes_attr and 11\" -v\n```"
    },
    {
      "commit": "4f5a17a4ae4f04061bb7ed4327922bc831bb0d3a",
      "tree": "024958719fe674ebd99bdccf67f4804df51c70b5",
      "parents": [
        "fb6453a817264efd5b2e19c8b3a118e6b383725b"
      ],
      "author": {
        "name": "Soowon Jeong",
        "email": "soowon1106@gmail.com",
        "time": "Fri Apr 10 03:42:33 2026 +0900"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 09 14:42:33 2026 -0400"
      },
      "message": "[BugFix][Relax] Fix ONNX Clip converter for opset 11-12 (#19375)\n\n## Description\n\nONNX changed the Clip operator from attribute-based min/max (opset 1-10)\nto input-based min/max (opset 11+). The Relax ONNX frontend only had\n`_impl_v1` (attributes) and `_impl_v13` (inputs), so opset 11-12 models\nwere dispatched to `_impl_v1` which ignores the input-based min/max and\nfalls back to `-inf`/`inf` defaults, making Clip a no-op.\n\nThis caused **silent numerical divergence** in any opset 11-12 model\nusing Clip/ReLU6 (e.g. MobileNetV2-12 from the ONNX Model Zoo).\n\n### Root cause\n\n`OnnxOpConverter.get_converter()` selects the largest `_impl_v*` version\n\u003c\u003d the model opset. With only `v1` and `v13`, opset 11-12 mapped to\n`v1`, which reads min/max from attributes — but opset 11+ passes them as\ninputs.\n\n### Fix\n\nAdd `_impl_v11` that delegates to `_impl_v13`.\n\n### Results (MobileNetV2, opset 12)\n\n| Metric | Before | After |\n|--------|:---:|:---:|\n| max abs diff vs ORT | 1.72e+06 | **8.58e-06** |\n| cosine similarity | 0.222 | **1.000** |\n| Top-5 match | No | **Yes** |\n\n## Testing\n\n```bash\npytest tests/python/relax/test_frontend_onnx.py -k \"clip\" -v\n```\n\nAll 6 existing Clip tests pass (opset 6 and 13+). The fix only affects\nopset 11-12 dispatch."
    },
    {
      "commit": "fb6453a817264efd5b2e19c8b3a118e6b383725b",
      "tree": "d8477c684b27a2c01b95526d35070029253ec6dc",
      "parents": [
        "7dcdb56273e598e739074153537e88b5765ef6ef"
      ],
      "author": {
        "name": "Soowon Jeong",
        "email": "soowon1106@gmail.com",
        "time": "Fri Apr 10 03:38:57 2026 +0900"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 09 14:38:57 2026 -0400"
      },
      "message": "[Relax][TFLite] Fix `MIRROR_PAD`/`ONE_HOT` converters and add tests for `PAD`, `PADV2`, `MIRROR_PAD`, `TOPK_V2`, `ONE_HOT` (#19373)\n\nPart of #18971\n\nTwo bugs in the TFLite Relax frontend converters are fixed, and unit\ntests\nare added for the **Padding / Sparse / Other** category operators\nclaimed in\nthat tracking issue.\n\n## Bug fixes\n\n### `convert_mirror_pad`\nCalled `relax.op.nn.mirror_pad` which does not exist in the Relax op\nnamespace. Replaced with `relax.op.nn.pad` using `pad_mode\u003d\"reflect\"`\nfor\nREFLECT mode (the modes are semantically equivalent). SYMMETRIC mode\nraises\n`OpAttributeUnImplemented` as there is no direct Relax equivalent.\n\n### `convert_one_hot`\n- `on_value` and `off_value` were passed as `Expr` (constant tensor\nnodes),\n  but `relax.op.one_hot` requires `PrimValue` arguments.\n- An extra `dtype` positional argument was passed, which the function\n  signature does not accept.\n\nFixed by extracting the scalar from each tensor buffer and wrapping it\nin\n`relax.PrimValue` with the correct dtype via `tvm.tirx.FloatImm` /\n`tvm.tirx.IntImm`.\n\n## Tests added\n\nEach test uses the `verify()` + `tf.Module` pattern and includes an\nexplicit\nexpected IRModule verified with `tvm.ir.assert_structural_equal`.\n\n| Operator | TFLite op | Notes |\n|----------|-----------|-------|\n| `test_pad` | `PAD` | constant zero padding |\n| `test_pad_v2` | `PADV2` | explicit `constant_values\u003d5.0` |\n| `test_mirror_pad` | `MIRROR_PAD` | REFLECT mode |\n| `test_topk_v2` | `TOPK_V2` | returns top-k values |\n| `test_one_hot` | `ONE_HOT` | float32 on/off values, depth\u003d4 |"
    },
    {
      "commit": "7dcdb56273e598e739074153537e88b5765ef6ef",
      "tree": "eb2c1f991232485db2c799ec1711d27d44ab3ff7",
      "parents": [
        "643cf60f40f330173299ef1484abc9f650dbf798"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Thu Apr 09 13:22:13 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 09 13:22:13 2026 -0400"
      },
      "message": "[Docs] Add API reference for tvm.s_tir submodules: dlight, meta_schedule, backend (#19369)\n\nAdd API reference for tvm.s_tir submodules: dlight, meta_schedule, backend"
    },
    {
      "commit": "643cf60f40f330173299ef1484abc9f650dbf798",
      "tree": "6adeee1715660fb9d5201c23e1870962edc25890",
      "parents": [
        "2e6ee08eafc328b82a965e49d106d61828c1d623"
      ],
      "author": {
        "name": "Bana",
        "email": "banabilalt@gmail.com",
        "time": "Wed Apr 08 21:38:24 2026 +0300"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 08 14:38:24 2026 -0400"
      },
      "message": "[Relax][TFLite] Add test coverage for Reduction operations (#18971) (#19370)\n\nCloses part of #18971\n\n---\n\n## Description\nThis PR improves the test coverage of the TFLite frontend to Relax\nconverter by adding comprehensive tests for the **\"Reductions\" group.**\n\n**Operations covered:**\n* `SUM` (`tf.reduce_sum`)\n* `MEAN` (`tf.reduce_mean`)\n* `REDUCE_MAX` (`tf.reduce_max`)\n* `REDUCE_MIN` (`tf.reduce_min`)\n* `REDUCE_PROD` (`tf.reduce_prod`)\n\n## Changes made:\n* Added a parameterized testing function (`test_reduction_ops`) to cover\ncombinations of the aforementioned reduction operators.\n* Covered a variety of axis configurations, including positive scalars,\nlists of axes, negative indices, and `None` (global reduction).\n* Tested with different combinations of the `keepdims` flag\n(`True`/`False`) and dtypes (`float32`/`int32`).\n* Handled the representation of global reductions (`axis\u003dNone`) in the\nexpected Relax IR Module `_make_reduce_expected` utility by expanding it\nto all input axes, perfectly mirroring the frontend\u0027s output graph\nstructure (`list(range(len(input_shape)))`).\n\n## Testing\n```bash\npytest tests/python/relax/test_frontend_tflite.py::test_reduction_ops\n```\n\u003cimg width\u003d\"931\" height\u003d\"42\" alt\u003d\"Screenshot 2026-04-08 102859\"\nsrc\u003d\"https://github.com/user-attachments/assets/f666199b-b839-42e7-a6ad-2753b92b45b0\"\n/\u003e"
    },
    {
      "commit": "2e6ee08eafc328b82a965e49d106d61828c1d623",
      "tree": "7021d38632a2ea2de076442a7b1f0d5b9a3c4d21",
      "parents": [
        "2345e6ea2665dcc681c78cdc2d2da650d87130c3"
      ],
      "author": {
        "name": "Soowon Jeong",
        "email": "soowon1106@gmail.com",
        "time": "Thu Apr 09 03:35:22 2026 +0900"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 08 14:35:22 2026 -0400"
      },
      "message": "[BugFix] Align `tir.round` to ties-to-even across all backends (#19368)\n\n## Problem\n\n`tir.round` constant-folds using `std::nearbyint` (IEEE 754\nties-to-even), but all backends lower it to platform `round()` which\nuses ties-away-from-zero. This means compiled code can produce different\nresults from constant-folded code for midpoint values:\n\n| Input | Constant-fold (ties-to-even) | Compiled (ties-away) |\n|-------|-----|------|\n| 0.5   | 0.0 | 1.0  |\n| 2.5   | 2.0 | 3.0  |\n| -0.5  | 0.0 | -1.0 |\n\nThis was identified as a follow-up to #19367 — see [this\ncomment](https://github.com/apache/tvm/pull/19367#issuecomment-4201800320).\n\n## Fix\n\nAlign all backends to use ties-to-even intrinsics, matching the\nconstant-folding behavior:\n\n| Backend | Before | After |\n|---------|--------|-------|\n| LLVM/ROCm/Hexagon | `llvm::Intrinsic::round` |\n`llvm::Intrinsic::nearbyint` |\n| NVPTX | `__nv_round[f]` | `__nv_nearbyint[f]` |\n| CUDA | `round`/`roundf` | `nearbyint`/`nearbyintf` (f16/bf16 already\nused `hrint`) |\n| Metal/OpenCL | `round` | `rint` |\n| Vulkan/SPIR-V | `GLSLstd450Round` | `GLSLstd450RoundEven` |\n\nAlso fixes OpenCL codegen where `tir.nearbyint` was incorrectly mapped\nto OpenCL `round()` instead of `rint()`.\n\nUpdates `op.h` documentation to explicitly state ties-to-even semantics\nfor both `round()` and `nearbyint()`.\n\n## Testing\n\n```\npython -m pytest tests/python/tirx-base/test_tir_intrin.py -xvs\n```\n\nNew `test_round_ties_to_even` verifies midpoint inputs `[0.5, 1.5, 2.5,\n3.5, -0.5, -1.5, -2.5, -3.5]` produce ties-to-even results on the LLVM\nbackend. All 12 tests pass (10 passed, 2 skipped for CUDA).\n\n---------\n\nCo-authored-by: Claude Opus 4.6 (1M context) \u003cnoreply@anthropic.com\u003e"
    },
    {
      "commit": "2345e6ea2665dcc681c78cdc2d2da650d87130c3",
      "tree": "bf5009add25dc2d665e682d61e80e1bcaaef752c",
      "parents": [
        "865ace905f3bcdc96d87d7cd901f19df59020e47"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Wed Apr 08 00:58:00 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 08 00:58:00 2026 -0400"
      },
      "message": "[Docs] Add API reference documentation for tvm.script module (#19366)\n\nAdd API reference documentation for tvm.script module"
    },
    {
      "commit": "865ace905f3bcdc96d87d7cd901f19df59020e47",
      "tree": "16b14e378104af2291388c552ceb170edfcd53e0",
      "parents": [
        "9acbf4ae6b075d441b1ed7f9bd2f37a64521aec2"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Wed Apr 08 00:57:33 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 08 00:57:33 2026 -0400"
      },
      "message": "[Docs] Add DLight and MetaSchedule deep-dive instructions (#19356)\n\nThis pr adds a instructions covering MetaSchedule and Flight usage in\ndeep dive"
    },
    {
      "commit": "9acbf4ae6b075d441b1ed7f9bd2f37a64521aec2",
      "tree": "1d1478b72491073f12ebf40e03902faa3dbfeae8",
      "parents": [
        "acfc4837389323f952feabe602f10e665f3dd59e"
      ],
      "author": {
        "name": "3em0",
        "email": "59153706+3em0@users.noreply.github.com",
        "time": "Wed Apr 08 11:18:05 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 07 23:18:05 2026 -0400"
      },
      "message": "[BugFix][Relax] Add structural_equal verification to subroutine cache lookup (#18962)\n\n## Summary\n\n- `SubroutineMixin._get_subroutine()` used `structural_hash` as the sole\ncache key without `structural_equal` verification. If two different\n`arg_sinfo` values produced the same 64-bit hash (collision), the cache\nwould return a previously compiled function with mismatched parameter\nshapes, leading to silently incorrect compiled output.\n- Changed the cache to store a list of `(arg_sinfo, result)` pairs per\nhash bucket and verify with `structural_equal` on lookup, consistent\nwith the pattern in `block_builder.cc`.\n- Added a security advisory document and regression test.\n\n## Root Cause\n\nThe subroutine cache (`cls._gvar`) was keyed by\n`(structural_hash(arg_sinfo), is_dataflow)`. A hash match was treated as\nproof of structural equality, skipping the necessary `structural_equal`\ncheck. This is a hash-only lookup anti-pattern — hash determines the\nbucket, but equality must confirm the match.\n\nFor comparison, `block_builder.cc` correctly uses `StructuralHash` +\n`StructuralEqual` together as the hash and equality functions for\n`std::unordered_map`.\n\n## Test plan\n\n- [ ] Existing test `test_linear` passes (no regression)\n- [ ] New test `test_different_shapes_produce_distinct_subroutines`\npasses — verifies that the same Module class with different input shapes\ngenerates distinct subroutines\n\n🤖 Generated with [Claude Code](https://claude.com/claude-code)\n\n---------\n\nCo-authored-by: 79475432@qq.com \u003cdem0@kali.kali\u003e\nCo-authored-by: Claude Opus 4.6 (1M context) \u003cnoreply@anthropic.com\u003e\nCo-authored-by: gemini-code-assist[bot] \u003c176961590+gemini-code-assist[bot]@users.noreply.github.com\u003e"
    },
    {
      "commit": "acfc4837389323f952feabe602f10e665f3dd59e",
      "tree": "803f59e8e486a19384671775ce9d752cb67c5603",
      "parents": [
        "f0863fd5c711e9fec08e2f3b2cb69b8e02933a31"
      ],
      "author": {
        "name": "Soowon Jeong",
        "email": "soowon1106@gmail.com",
        "time": "Wed Apr 08 04:52:47 2026 +0900"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Apr 07 15:52:47 2026 -0400"
      },
      "message": "[BugFix][ONNX] Fix Round op to use ties-to-even (#19367)\n\n## Problem\n\nThe ONNX `Round` operator specification requires **ties-to-even**\n(banker\u0027s) rounding:\n\n\u003e \"For cases where number is exactly halfway between two integers, it\nrounds to the nearest even integer.\"\n\u003e — https://onnx.ai/onnx/operators/onnx__Round.html\n\nHowever, the current TVM implementation produces **ties-away-from-zero**\nresults on midpoint values:\n\n| Input | Expected (ties-to-even) | Actual (ties-away) |\n|-------|------------------------|--------------------|\n| 0.5   | 0.0                    | 1.0                |\n| 1.5   | 2.0                    | 2.0                |\n| 2.5   | 2.0                    | 3.0                |\n| -0.5  | 0.0                    | -1.0               |\n| -2.5  | -2.0                   | -3.0               |\n\nThis was reported in issue #18590.\n\n## Root Cause\n\nThe lowering chain for `relax.op.round`:\n\n```\nrelax.op.round -\u003e (LegalizeOps) -\u003e topi.round() -\u003e te.round -\u003e tir.round -\u003e llvm::round\n```\n\n`llvm::round` is defined as ties-away-from-zero (C99 `round()`), while\n`llvm::nearbyint` uses the IEEE 754 default rounding mode\n(ties-to-even).\n\n## Fix\n\n**`python/tvm/topi/math.py`**: Switch `topi.round()` from `te.round` to\n`te.nearbyint`. This lowers to `tir.nearbyint` -\u003e `llvm::nearbyint`,\nwhich respects IEEE 754 ties-to-even.\n\n**`src/target/source/intrin_rule_webgpu.cc`**: Register `tir.nearbyint`\nfor the WebGPU backend. 
WGSL `round()` is already ties-to-even per the\nWGSL spec, so `tir.nearbyint` -\u003e `round` is the correct mapping.\n\n**`tests/python/relax/test_frontend_onnx.py`**: Add\n`test_round_ties_to_even()` with explicit midpoint inputs to prevent\nregression.\n\n## Testing\n\n```\npython -m pytest tests/python/relax/test_frontend_onnx.py::test_round_ties_to_even -xvs\npython -m pytest \"tests/python/relax/test_frontend_onnx.py::test_unary[Round]\" -xvs\n```\n\nBoth pass. The new test compares TVM output against onnxruntime (which\ncorrectly implements ties-to-even) for inputs `[0.5, 1.5, 2.5, -0.5,\n-1.5, -2.5]`.\n\nFixes #18590"
    },
    {
      "commit": "f0863fd5c711e9fec08e2f3b2cb69b8e02933a31",
      "tree": "7ae217fcba673089c0bb8ac186da2dcc2f7e7001",
      "parents": [
        "072c849439f887aec6f70e061de0eba2e3d9b811"
      ],
      "author": {
        "name": "Bana",
        "email": "banabilalt@gmail.com",
        "time": "Tue Apr 07 04:05:00 2026 +0300"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 06 21:05:00 2026 -0400"
      },
      "message": "[Relax][ONNX] Support ConcatFromSequenc/SequenceInsert with new_axis\u003d1 (#19361)\n\n## Description\n\nThis PR adds support for `new_axis\u003d1` in the ONNX `ConcatFromSequence`\noperator, which was previously raising a `NotImplementedError`.\n\n*Note regarding the tracking issue:* The tracking issue listed this task\nas \"SequenceInsert — Does not support inserting with new axis.\", but i\nthink it meant `ConcatFromSequence`. ONNX\u0027s `SequenceInsert` does not\nhave a `new_axis` attribute, whereas `ConcatFromSequence` does and was\nthrowing the \"Insert new axis is not supported yet\" error. This PR\nimplements the missing feature.\n\n## Changes:\n- Replaced the `NotImplementedError` in `ConcatFromSequence` with\n`relax.op.stack(inputs[0], axis\u003daxis)`\n- Removed the `pytest.skip` from `test_concat_from_sequence`.\n- Parameterized the test to explicitly check both standard concatenation\n(`new_axis\u003d0, axis\u003d0` yielding `[64, 32]`) and stacking operations\n(`new_axis\u003d1, axis\u003d1` yielding `[32, 2, 32]`).\n\n## Testing\nI tested the implementation via running:\n```\npytest tests/python/relax/test_frontend_onnx.py::test_concat_from_sequence\n```\nand all tests passed:\n\u003cimg width\u003d\"1044\" height\u003d\"89\" alt\u003d\"image\"\nsrc\u003d\"https://github.com/user-attachments/assets/25d6ad26-ad4b-4437-9fa5-e29efc9e0c9f\"\n/\u003e\n\n\n## Reference\nhttps://onnx.ai/onnx/operators/onnx__ConcatFromSequence.html\nhttps://onnx.ai/onnx/operators/onnx__SequenceInsert.html\n\npartially addresses #18945"
    },
    {
      "commit": "072c849439f887aec6f70e061de0eba2e3d9b811",
      "tree": "f4da368abfd00fe5c5589b1092c5a51a5990794b",
      "parents": [
        "e0df8028bfe40bfdd5c84437bf516995968270e8"
      ],
      "author": {
        "name": "Kryptonite",
        "email": "oalazizi75@gmail.com",
        "time": "Tue Apr 07 04:04:13 2026 +0300"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 06 21:04:13 2026 -0400"
      },
      "message": "[Test][TFLite] Add unit tests for RESIZE_BILINEAR and RESIZE_NEAREST_NEIGHBOR ops (#19365)\n\nPartially closes #18971.\n\nAdd parametrized unit tests for `RESIZE_BILINEAR` and\n`RESIZE_NEAREST_NEIGHBOR` following the existing `verify()` +\n`tf.Module` pattern in the file.\n\n## Tests added (13 total)\n\n### `RESIZE_BILINEAR` (7 cases via `test_resize_bilinear`)\n| Case | Input shape | Output size | `coordinate_transformation_mode` |\n|------|-------------|-------------|----------------------------------|\n| upsample, default | `(1, 4, 4, 1)` | `8×8` | `half_pixel` |\n| downsample, default | `(1, 8, 8, 3)` | `4×4` | `half_pixel` |\n| align_corners | `(1, 4, 4, 1)` | `7×7` | `align_corners` |\n| half_pixel_centers | `(1, 4, 4, 2)` | `8×8` | `half_pixel` |\n| multichannel / batch \u003e 1 | `(2, 6, 6, 16)` | `12×12` | `half_pixel` |\n| identity | `(1, 5, 5, 3)` | `5×5` | `half_pixel` |\n| non-square | `(1, 4, 8, 1)` | `8×16` | `half_pixel` |\n\n### `RESIZE_NEAREST_NEIGHBOR` (6 cases via\n`test_resize_nearest_neighbor`)\n| Case | Input shape | Output size | `coordinate_transformation_mode` |\n`rounding_method` |\n\n|------|-------------|-------------|----------------------------------|-------------------|\n| upsample, default | `(1, 2, 2, 1)` | `4×4` | `half_pixel` |\n`round_prefer_ceil` |\n| downsample, default | `(1, 8, 8, 3)` | `4×4` | `half_pixel` |\n`round_prefer_ceil` |\n| align_corners | `(1, 4, 4, 1)` | `7×7` | `align_corners` | `\"\"` |\n| multichannel / batch \u003e 1 | `(4, 3, 3, 8)` | `6×6` | `half_pixel` |\n`round_prefer_ceil` |\n| non-square | `(1, 4, 8, 1)` | `8×16` | `half_pixel` |\n`round_prefer_ceil` |\n| identity | `(1, 3, 3, 2)` | `3×3` | `half_pixel` | `round_prefer_ceil`\n|\n\n## Implementation notes\n\n- `Expected` modules are built programmatically via `relax.BlockBuilder`\nrather than TVMScript — the TVMScript parser does not accept runtime\nvariables in type annotations, which would be required inside a\nparametrized test.\n- 
All 13 new tests pass.\n- `test_fill`, `test_batch_matmul`, and `test_batch_matmul_adj` were\nalready failing on unmodified `main` before this PR. No regressions\nintroduced.\n- Tested with Python 3.10.20, TensorFlow 2.9.0, NumPy 1.26.4."
    },
    {
      "commit": "e0df8028bfe40bfdd5c84437bf516995968270e8",
      "tree": "ef474096a69580bc0f07bcd52bd83126819b3a42",
      "parents": [
        "d58e9f7bbef42c2256df693d8fdec8bd8cce0b69"
      ],
      "author": {
        "name": "Bana",
        "email": "banabilalt@gmail.com",
        "time": "Mon Apr 06 21:58:58 2026 +0300"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 06 14:58:58 2026 -0400"
      },
      "message": "[Docs] TFLite tests requiring Python 3.10 and specific package versions to avoid core dumps (#19364)\n\npartially fixes #19348"
    },
    {
      "commit": "d58e9f7bbef42c2256df693d8fdec8bd8cce0b69",
      "tree": "9d628fac3ff93799e5e195ecdbf5374f3ccb8a46",
      "parents": [
        "d1f5583e0efeb2bd02907e84e30034de46a934af"
      ],
      "author": {
        "name": "Sai Gopal Reddy Kovvuri",
        "email": "ksgr5566@gmail.com",
        "time": "Mon Apr 06 14:56:15 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 06 14:56:15 2026 -0400"
      },
      "message": "[WebGPU] Add gating logic for subgroup shuffle primitives (#18823)\n\n## Summary\nThis adds gating logic on top of #17699 to support optional subgroup\nshuffle\nprimitives based on a compile-time flag.\n\n## Problem\nThe PR #17699 always generates subgroup shuffle ops when targeting\nWebGPU.\nHowever, not all WebGPU devices support subgroups. We need a way to:\n- Default to shared memory reductions (universally compatible)\n- Optionally enable subgroup shuffles for devices that support them\n\n## Solution\nImplement gating via TVM target parameter:\n- Default `thread_warp_size\u003d1` disables warp reductions (uses shared\nmemory + barriers)\n- Add target parser `UpdateWebGPUAttrs()` that sets\n`thread_warp_size\u003d32` when `supports_subgroups\u003dtrue`\n- Add `--enable-subgroups` CLI flag in mlc-llm to surface the option to\nusers\n\nThe gating happens at the reduction path selection level\n(`IsWarpReduction()` in\n`lower_thread_allreduce.cc`), ensuring subgroup ops are never generated\nunless explicitly enabled.\n\n## Testing\n\nTested with Llama-3.2-1B-q4f16_1. Baseline (no flag) uses shared memory\nreductions;\nwith flag, generates subgroupShuffle* ops.\nBoth the generated WGSLs here:\nhttps://gist.github.com/ksgr5566/301664a5dda3e46f44092be4d09b2d4f\nBenchmarking:\nhttps://gist.github.com/ksgr5566/c9bd5bc5aadba999ec2f2c38eb0c49b3"
    },
    {
      "commit": "d1f5583e0efeb2bd02907e84e30034de46a934af",
      "tree": "eb43dce0abc329d691ae61c36f2dd80fc33a2b68",
      "parents": [
        "93e28d1126d41158e440fc42db427e7358f30f44"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Mon Apr 06 14:52:17 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 06 14:52:17 2026 -0400"
      },
      "message": "[Docs] Add tutorial for importing models from PyTorch, ONNX, and TFLite (#19354)\n\nThis pr adds a tutorial for user to have a quick understanding of how to\nimport models from our supporting frontends.\n\n\nBesides, this pr also adds `absl::InitializeLog` and `trackable_obj`\nwarnings to the CI docs ignore list — these are emitted by TensorFlow\u0027s\nC++ runtime during import and cannot be suppressed from Python."
    },
    {
      "commit": "93e28d1126d41158e440fc42db427e7358f30f44",
      "tree": "56c728eff4dd4d74bbdc277e685b7daf2bf79229",
      "parents": [
        "9ec6a52e2194aa890cc1d9c7204a852e8e15f775"
      ],
      "author": {
        "name": "Soowon Jeong",
        "email": "soowon1106@gmail.com",
        "time": "Tue Apr 07 03:49:23 2026 +0900"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 06 14:49:23 2026 -0400"
      },
      "message": "[BugFix][TVMScript] Fix invalid f-string format spec causing TypeError on Python 3.14 (#19362)\n\n## Problem\n\nOn Python 3.14, any use of TVMScript raises a `TypeError` before the\nmodule body is even parsed:\n\n```\nTypeError: unsupported format string passed to type.__format__\n```\n\nThe traceback points to\n`python/tvm/script/parser/core/diagnostics.py:120`:\n\n```python\nraise TypeError(f\"Source for {obj:!r} not found\")\n```\n\n## Root Cause\n\n`{obj:!r}` is an invalid f-string expression. The `:` introduces a\n`format_spec`, so `!r` is passed to `type.__format__` as a format string\n— which it does not support.\n\nThe intended syntax for a `repr()` conversion is `{obj!r}` (no colon).\n\nPython 3.14 re-implemented f-string parsing under [PEP\n701](https://peps.python.org/pep-0701/) and now strictly validates\nformat specs, surfacing this latent bug. Python 3.10–3.13 silently\npassed the invalid spec to `__format__` and happened not to raise in\nmost code paths, so the bug went unnoticed.\n\n## Fix\n\n```diff\n- raise TypeError(f\"Source for {obj:!r} not found\")\n+ raise TypeError(f\"Source for {obj!r} not found\")\n```\n\nOne character change. Valid across all Python versions \u003e\u003d 3.6.\n\n## Testing\n\nVerified on Python 3.14.2 (darwin/arm64):\n\n- TVMScript `ir_module` + `prim_func` parses and compiles correctly\nafter the fix\n- Full TVMScript test suite: **628 passed, 1 xfailed** (the 1 failure in\n`test_tvmscript_roundtrip.py::test_roundtrip[relax_symbolic_size_var]`\nis pre-existing and unrelated to this change)"
    },
    {
      "commit": "9ec6a52e2194aa890cc1d9c7204a852e8e15f775",
      "tree": "536f2117e6ece67786637da3d55ced23ab8b7e8f",
      "parents": [
        "628b394ed779a518e1e3aaeb0866d0884d5abadb"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Mon Apr 06 14:47:57 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 06 14:47:57 2026 -0400"
      },
      "message": "[Docs] Add Dataflow Pattern Language (DPL) documentation for Relax (#19358)\n\nDPL (`tvm.relax.dpl`) is heavily used across the TVM stack — operator\nfusion, CUTLASS/cuBLAS/cuDNN backend dispatch, and user-defined graph\ntransforms all rely on it. Since there is no doc explaining how to use\nit, this pr adds deep-dive documentation\n(`docs/deep_dive/relax/dpl.rst`) covering DPL\u0027s pattern construction,\nmatching, rewriting APIs, and integration with `FuseOpsByPattern`\nbackend dispatch passes."
    },
    {
      "commit": "628b394ed779a518e1e3aaeb0866d0884d5abadb",
      "tree": "8af22807dd3dd25bff352c11eead82b150db272d",
      "parents": [
        "744ef561e40d1232823a72ca29386e351c62a821"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Mon Apr 06 12:02:52 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 06 12:02:52 2026 -0400"
      },
      "message": "[Docs] Add Disco distributed runtime architecture overview (#19357)\n\nAdd Disco distributed runtime architecture overview"
    },
    {
      "commit": "744ef561e40d1232823a72ca29386e351c62a821",
      "tree": "4e858b81494f12c2d9b8afcb7fcdc98fa99bfc34",
      "parents": [
        "491480d4db14444189a230d91cea747453be70e9"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Mon Apr 06 12:01:56 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 06 12:01:56 2026 -0400"
      },
      "message": "[Docs] Fix outdated paths, links, and add missing API references across documentation (#19351)\n\nThis PR is a follow-up PR of #19344 , continuing cleaning up outdated\nitems in docs"
    },
    {
      "commit": "491480d4db14444189a230d91cea747453be70e9",
      "tree": "19188577d64815c67633317dbd6cda908aea0c3d",
      "parents": [
        "5673754e7c70de5bab04f06fbe70c424dcf15092"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Mon Apr 06 09:51:19 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 06 09:51:19 2026 -0400"
      },
      "message": "[Docs] Add tvm.s_tir.analysis API reference page (#19353)\n\nThis PR adds the API reference documentation for `tvm.s_tir.analysis`.\n                                                           \n`tvm.s_tir.analysis` functions use Var in their type annotations, which\nexists in both `tvm.tirx` and `tvm.relax`. The existing disambiguator\nuses common module prefix to pick the right one, but `tvm.s_tir` shares\nno prefix with either. The new `tvm_module_type_preference` mapping\ntells the disambiguator to prefer `tvm.tirx` types for `tvm.s_tir.*`\nmodules."
    },
    {
      "commit": "5673754e7c70de5bab04f06fbe70c424dcf15092",
      "tree": "8e7a4bd5c858929ebaab35c3f5818a9877e54525",
      "parents": [
        "36a82f515960b1460909ffebfa9466b71c5acd95"
      ],
      "author": {
        "name": "Yong Wu",
        "email": "yongcale@gmail.com",
        "time": "Mon Apr 06 05:26:19 2026 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Apr 06 08:26:19 2026 -0400"
      },
      "message": "Bump tvm-ffi to 1fed0a (#18967)\n\nbump tvm-ffi to the [v0.1.10rc2 commit\n](https://github.com/apache/tvm-ffi/commit/1fed0ae0421e614d45662e8ee6bcae353d3ab2ea)."
    },
    {
      "commit": "36a82f515960b1460909ffebfa9466b71c5acd95",
      "tree": "9ae38d091374226738ae4f120bf91405b4971874",
      "parents": [
        "b8ecf2f803a7b0cdfc382ba38fc74ee53227c5fe"
      ],
      "author": {
        "name": "liggest",
        "email": "43201720+liggest@users.noreply.github.com",
        "time": "Mon Apr 06 00:27:09 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Apr 05 12:27:09 2026 -0400"
      },
      "message": "[BugFix][TVMScript] Add `doc.keyword` handling for `ExprEvaluator._visit` (#19352)\n\nAdd handling for `doc.keyword` nodes in `ExprEvaluator._visit` to ensure\nexpressions (e.g. `BoolOp`) in keyword arguments are processed with\ncorrect evaluation methods.\n\nFix #18972 . For more details, please refer to this issue."
    },
    {
      "commit": "b8ecf2f803a7b0cdfc382ba38fc74ee53227c5fe",
      "tree": "127b5a8c188f758f3cd55a7e65913f98bff37f02",
      "parents": [
        "befee5827eb6816c37c7bce1fe70c37182f7cc14"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Sat Apr 04 22:56:26 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Apr 04 22:56:26 2026 -0400"
      },
      "message": "[Docs] Add Relax VM architecture overview in documentation (#19350)\n\nThis adds Relax VM architecture overview in documentation"
    },
    {
      "commit": "befee5827eb6816c37c7bce1fe70c37182f7cc14",
      "tree": "71a4e25fcc65ed20991f881f9bde14dc8f826dd3",
      "parents": [
        "28aac4744dbc1153b8f9f937616294720533026e"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Sat Apr 04 22:55:19 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Apr 04 22:55:19 2026 -0400"
      },
      "message": "[Docs] Fix outdated code examples, typos, and missing API reference in documentation(2) (#19344)\n\nThis PR is a follow-up of #18965\n- Fix incorrect variable names in Relax dataflow code example (`lv0` →\n`lv`, `b` → `n`) in\n`docs/deep_dive/relax/learning.rst`\n- Fix `func.time_evaluator(func.entry_name, ...)` to\n`func.time_evaluator(\"add_one\", ...)`\nin `docs/how_to/tutorials/cross_compilation_and_rpc.py`, since\n`entry_name` is a class\nconstant `\"main\"` but the compiled function is named `\"add_one\"`\n- Fix typo `tvfm.testing` → `tvm.testing` in\n`docs/how_to/dev/pytest_target_parametrization.rst`\n- Add missing `tvm.relax.frontend.tflite` automodule entry to\n`docs/reference/api/python/relax/frontend.rst`"
    },
    {
      "commit": "28aac4744dbc1153b8f9f937616294720533026e",
      "tree": "46de99b140bcc16bdc3756c68f17eb2a3ae12aa5",
      "parents": [
        "78b7286d323fa991b18012f7e16fb14d7ee773c4"
      ],
      "author": {
        "name": "Bana",
        "email": "banabilalt@gmail.com",
        "time": "Sat Apr 04 22:26:47 2026 +0300"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Apr 04 15:26:47 2026 -0400"
      },
      "message": "[Relax][TFLite] Add NON_MAX_SUPPRESSION_V5 support (#19349)\n\nPartially resolves #18928\n\n\nto run the tests:\n```bash\npytest tests/python/relax/test_frontend_tflite.py -v -k \"nms\"\n```\n\u003cimg width\u003d\"728\" height\u003d\"44\" alt\u003d\"image\"\nsrc\u003d\"https://github.com/user-attachments/assets/fbd4092a-bc7f-459d-8d51-b0ef926b241f\"\n/\u003e"
    },
    {
      "commit": "78b7286d323fa991b18012f7e16fb14d7ee773c4",
      "tree": "c008809e84400d5c6695763d47b9fbd7bc74415e",
      "parents": [
        "87f291589720d8a6eec7aae519511a6cb95af372"
      ],
      "author": {
        "name": "Bana",
        "email": "bbaltawalbeh23@cit.just.edu.jo",
        "time": "Sat Apr 04 18:38:15 2026 +0300"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Apr 04 11:38:15 2026 -0400"
      },
      "message": "fix: TFLite model retrieval with error handling (#19347)\n\ncloses #19346"
    },
    {
      "commit": "87f291589720d8a6eec7aae519511a6cb95af372",
      "tree": "d20db3fc83afdd8544b4e23d57761f33c7c2dc8d",
      "parents": [
        "e3dda2398f58dd4e1d3a5deb61761d96489077d5"
      ],
      "author": {
        "name": "Dayuxiaoshui",
        "email": "158081477+Dayuxiaoshui@users.noreply.github.com",
        "time": "Sat Apr 04 14:15:18 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Apr 04 02:15:18 2026 -0400"
      },
      "message": "test(relax): cover TFLite LOG and GREATER_EQUAL in test_frontend_tflite (#19343)\n\n- Extend test_element_wise with tf.math.log -\u003e R.log.\n- Extend test_split_compare with tf.math.greater_equal -\u003e\nR.greater_equal, matching the existing split-tensor compare pattern and\nbool Expected IR. https://github.com/apache/tvm/issues/18971"
    },
    {
      "commit": "e3dda2398f58dd4e1d3a5deb61761d96489077d5",
      "tree": "647f03b4764d20cff7a607f0dd4ed093dba15057",
      "parents": [
        "66019626a9612e76ecd4edc3da1965db621a171d"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Sat Apr 04 00:57:03 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Apr 04 00:57:03 2026 -0400"
      },
      "message": "[TFLite][Frontend] Add expected IRModule checks for conv2d, pool2d, and batch_matmul tests (#18970)\n\nAdd expected IRModule checks for conv2d, pool2d, and batch_matmul tests"
    },
    {
      "commit": "66019626a9612e76ecd4edc3da1965db621a171d",
      "tree": "0994fb714afba2f88bf2f7a7821d761ec2326cbb",
      "parents": [
        "427b66da1acd5d36c481773570ed500f0e6f1c1b"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Sat Apr 04 00:56:15 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Apr 04 00:56:15 2026 -0400"
      },
      "message": "[Docs] Fix outdated code examples, types, and missing references across documentation   (#18965)\n\n- Fix incorrect function names (`lnumpy_matmul`→`lnumpy_linear`,\n`lnumpy_relu`→`lnumpy_relu0`) and undefined variables (`lv0`→`lv`,\n`b`→`n`) in Relax learning tutorial\n- Add missing `I`, `T`, `R` imports in Relax and TensorIR learning\ntutorials\n- Update `pass_infra.rst` to match current source: fix `PassInfoNode`\nfield order and add `traceable`, correct `PassContextNode`\narray types (`Expr`→`String`), remove obsolete `StringImm` cast in\n`SequentialNode`, and add `traceable` param to `Create*Pass`\nsignatures                                                      \n- Replace stale `PrintIRBefore`/`PrintAfter` TODOs with\nalready-implemented instruments (`PrintBeforeAll`, `PrintAfterAll`,\n`PassPrintingInstrument`, `DumpIR`)\n- Add missing `tvm.relax.op.vision` and `tvm.relax.op.vm` to API\nreference\n- Add `PythonDomain.find_obj` patch to resolve ambiguous\ncross-references for classes that exist in multiple TVM namespaces (e.g.\n`StringImm` in both `tvm.relax` and `tvm.tirx`). This is a general\nsolution that reuses the existing `tvm_class_name_rewrite_map` and also\nbenefits `Var`, `Call`, etc."
    },
    {
      "commit": "427b66da1acd5d36c481773570ed500f0e6f1c1b",
      "tree": "aa6d583bf8d0c92ae16ba28d5f1f612b0a91ea4f",
      "parents": [
        "10ba3c232bfce87d7dc4f079c7717ac82ce7203a"
      ],
      "author": {
        "name": "Kryptonite",
        "email": "oalazizi75@gmail.com",
        "time": "Sat Apr 04 00:31:03 2026 +0300"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Apr 03 17:31:03 2026 -0400"
      },
      "message": "[Frontend][ONNX] Fix SplitToSequence keepdims\u003d0 and uneven last chunk (#19341)\n\n### Summary\n\nFixes two spec violations in `SplitToSequence`:\n\n1. **keepdims\u003d0** was raising `NotImplementedError`. The fix squeezes\nthe split axis from each chunk when `split` is scalar and `keepdims\u003d0`.\nPer spec:\n\u003e \"If input \u0027split\u0027 is specified [as a 1-D array], this attribute is\nignored\" —\n   verified against ORT.\n\n2. **Uneven last chunk** was raising `ValueError`. The spec states: \"The\nlast chunk alone may be smaller than \u0027split\u0027 if the input size is not\ndivisible by \u0027split\u0027.\" Fixed by using index-based splitting via\n`range(chunk_size, dim_size, chunk_size)` instead of a count.\n\nReference: https://onnx.ai/onnx/operators/onnx__SplitToSequence.html\nCloses part of #18945"
    },
    {
      "commit": "10ba3c232bfce87d7dc4f079c7717ac82ce7203a",
      "tree": "03de8ef03f9b117f91f6e0b36c71c119d793a2c0",
      "parents": [
        "ccb84cd74ad89665f741143f2ea7462c406bbbbb"
      ],
      "author": {
        "name": "Kryptonite",
        "email": "oalazizi75@gmail.com",
        "time": "Fri Apr 03 06:29:56 2026 +0300"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 02 23:29:56 2026 -0400"
      },
      "message": "[Frontend][ONNX] Support select_last_index for ArgMax and ArgMin (#18969)\n\n### Summary\n\nThis PR implements the `select_last_index` attribute (introduced in\nopset 12) for the `ArgMax` and `ArgMin` ONNX operators.\n\nPreviously, setting `select_last_index\u003d1` raised\n`OpAttributeUnImplemented`. This closes the limitation tracked in the\nONNX frontend issue.\n\n### Implementation\n\nWhen `select_last_index\u003d1`, the input tensor is reversed along the\nreduction axis using `relax.op.flip`, argmax/argmin is computed on the\nflipped copy, and the result is remapped back to the original index\nspace via `last_idx \u003d (axis_size - 1) - flipped_idx`\n\nCloses part of #18945\n\n---------\n\nSigned-off-by: OmarAzizi \u003coalazizi75@gmail.com\u003e"
    },
    {
      "commit": "ccb84cd74ad89665f741143f2ea7462c406bbbbb",
      "tree": "b5e32d86ff932f013c03b3d206ba9cbf8d124cd3",
      "parents": [
        "81889decfaa5061e2bdc53ae75a7b395ecd649ce"
      ],
      "author": {
        "name": "Akaash Parthasarathy",
        "email": "43900735+akaashrp@users.noreply.github.com",
        "time": "Thu Apr 02 14:17:19 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 02 14:17:19 2026 -0400"
      },
      "message": "[Web] Fix rollup errors and bump tvmjs version (#18958)\n\n- Fix rollup errors by upgrading rollup version and updating rollup\nconfig\n- Bump tvmjs version to 0.24.0-dev3"
    },
    {
      "commit": "81889decfaa5061e2bdc53ae75a7b395ecd649ce",
      "tree": "351564a5283a83936b877295088b4c1626f1c38e",
      "parents": [
        "ad65d9fb15c8fc2f3afa03710946cd04c67475c8"
      ],
      "author": {
        "name": "Akaash Parthasarathy",
        "email": "43900735+akaashrp@users.noreply.github.com",
        "time": "Thu Apr 02 14:17:02 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 02 14:17:02 2026 -0400"
      },
      "message": "[Fix] Replace str(target.kind) with target.kind.name for Target objects (#18959)\n\nReplace `str(target.kind)` with `target.kind.name` for `Target` objects\nsince `target.kind` is a `TargetKind` object while `target.kind.name`\nyields a string describing the target"
    },
    {
      "commit": "ad65d9fb15c8fc2f3afa03710946cd04c67475c8",
      "tree": "31a78145377aaaa525f27a74d1ed8285547b043e",
      "parents": [
        "eb531188f2a2cebbeb48a568e3fe978e6ce46f19"
      ],
      "author": {
        "name": "Akaash Parthasarathy",
        "email": "43900735+akaashrp@users.noreply.github.com",
        "time": "Thu Apr 02 14:16:42 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 02 14:16:42 2026 -0400"
      },
      "message": "[WebGPU] Reserve additional keywords to avoid WGSL identifier collisions (#18960)\n\nCompiling Qwen3.5 yielded WGSL of the following form: `var\u003cstorage,\nread_write\u003e storage : array\u003cf32\u003e;\n`. This led to a \u0027cyclic dependency\u0027 error due to the identifier\ncollision. This PR reserves keywords such as storage to avoid parsing\nerrors."
    },
    {
      "commit": "eb531188f2a2cebbeb48a568e3fe978e6ce46f19",
      "tree": "e9de268bf5cdd97a473d92a9e654ea8f23641c62",
      "parents": [
        "a7bfc857b901177a5e82a86d6f9cc2ffed763e09"
      ],
      "author": {
        "name": "Ruslan Baratov",
        "email": "x@ruslo.dev",
        "time": "Thu Apr 02 23:47:09 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 02 11:47:09 2026 -0400"
      },
      "message": "[DOC] Fix various issues (#18966)\n\n- Fix few typos\n- Unify Android naming\n- Fix HTTPS link"
    },
    {
      "commit": "a7bfc857b901177a5e82a86d6f9cc2ffed763e09",
      "tree": "41ee16a747131223bcda7382268a79f6b9444b74",
      "parents": [
        "ec0daad082aa1c35c6d8e4bcc6cd450038cc994c"
      ],
      "author": {
        "name": "Gabe Guralnick",
        "email": "gnguralnick@gmail.com",
        "time": "Wed Apr 01 21:00:49 2026 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Apr 02 00:00:49 2026 -0400"
      },
      "message": "[FIX] Inline ceil_log2 in gpu_2d_continuous_cumsum to fix MakePackedAPI error (#18957)\n\n- The intermediate variable `ceil_log2` in `gpu_2d_continuous_cumsum`\ncreated a `LetStmt`-bound `Var` in the TIR function\n- When `MakePackedAPI` processed the function, it reported `ceil_log2`\nas an undefined variable not passed as an API argument\n- Inline the expression directly into `total_rounds` to avoid the\nintermediate `Var` — the computation is identical\n\n## Test plan\n- Compile a model that uses GPU sampling (e.g. any LLM with top-p\nsampling on Metal) and verify compilation succeeds\n- The error this fixes: `Check failed: undefined.size() \u003d\u003d 0: In\nPrimFunc gpu_2d_continuous_cumsum variables [ceil_log2] are used, but\nare not passed in as API arguments`\n\nCo-authored-by: Akaash Parthasarathy \u003c43900735+akaashrp@users.noreply.github.com\u003e"
    },
    {
      "commit": "ec0daad082aa1c35c6d8e4bcc6cd450038cc994c",
      "tree": "9fb76a7d32b36cf476807d30134a0697e79948b8",
      "parents": [
        "fd9d9db130fb9ede300a062b95f81dd186dc5140"
      ],
      "author": {
        "name": "Dayuxiaoshui",
        "email": "158081477+Dayuxiaoshui@users.noreply.github.com",
        "time": "Thu Apr 02 11:25:40 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Apr 01 23:25:40 2026 -0400"
      },
      "message": "[Relax][ONNX] Support Resize dynamic ROI via TOPI (#18963)\n\nThe ONNX Resize converter previously rejected non-constant ROI inputs,\nwhich blocked models where ROI is provided at runtime. This change adds\na dynamic-ROI path lowered through TOPI resize kernels while preserving\nthe existing relax.image.resize* path for static ROI.\n\nSpecifically:\n- add reusable helper to convert ONNX full ROI ([starts..., ends...])\ninto spatial ROI vector\n- add reusable helper to emit topi.image.resize1d/2d/3d for dynamic ROI\n- keep static ROI fast path for relax.image.resize2d/resize3d\n- normalize dynamic ROI expr before emit_te to ensure struct_info is\npopulated\n- handle optional Resize inputs (roi/scales/sizes) more defensively\n- add frontend test coverage with graph-input ROI:\ntest_resize_dynamic_roi_tf_crop_and_resize\n\nRef: apache/tvm#18945"
    },
    {
      "commit": "fd9d9db130fb9ede300a062b95f81dd186dc5140",
      "tree": "29c6f1ee10778e709fc16c803135bccb2e034699",
      "parents": [
        "d293b7a7074e514a6d669adacacff77a0095b926"
      ],
      "author": {
        "name": "HoYi",
        "email": "62729549+Aharrypotter@users.noreply.github.com",
        "time": "Tue Mar 31 22:33:59 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 31 10:33:59 2026 -0400"
      },
      "message": "[Relax][ONNX] Fix shape/dynamic restrictions for `Squeeze`/`Unsqueeze` and `Slice` (#18955)\n\n## Summary\n\nRelates to #18945.\n\nThis PR improves ONNX frontend handling for dynamic\n`Unsqueeze`/`Squeeze`/`Slice`, tightens validation paths, and adds\ntargeted structural/negative regression tests.\n\n- Refactor constant-path `Unsqueeze` lowering to use a single `reshape`\nbased on computed target shape.\n- Remove scalar-specific branching and repeated `expand_dims` in the\nconstant path.\n- Add/keep structural helper usage in ONNX frontend tests for Relax\ncall-op checks.\n- Add regression coverage for scalar-input `Unsqueeze`.\n\n## Changes\n\n- Add dynamic-axes conversion paths for `Unsqueeze` and `Squeeze`:\n  - infer output shape via runtime shape-tensor construction\n- lower to `relax.reshape` with validated shape rank/length assumptions\n- Improve `Slice` conversion robustness:\n  - support dynamic parameter forms with stricter rank/length validation\n  - reject invalid zero-step inputs when statically known\n  - fix docstring wording (`Splice` -\u003e `Slice`)\n- Strengthen ONNX frontend tests:\n  - negative test for duplicate `Unsqueeze` axes\n- structural IR check for dynamic `Slice` (`relax.dynamic_strided_slice`\npresent, `relax.strided_slice` absent)\n  - negative test for zero-step `Slice`\n- Refactor constant-path `Unsqueeze` scalar handling:\n- replace scalar special-casing + repeated `expand_dims` with one\ntarget-shape `reshape`\n  - add scalar-input regression test\n- Restore shared test helper used by structural Relax call-op checks.\n\n## validation\n\n- `ruff check`: passed\n- `pre-commit --files`: passed\n- `pytest`: 8 passed"
    },
    {
      "commit": "d293b7a7074e514a6d669adacacff77a0095b926",
      "tree": "18602002b937e9066926a52877d1a7c1ce23474e",
      "parents": [
        "c79caf0c0040fcdea8850214b84fbae4e26b543a"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Mon Mar 30 18:09:10 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 30 18:09:10 2026 -0400"
      },
      "message": "[Docs] Align documentation with tirx/s_tir namespace split (#18953)\n\nSince we\u0027ve split the old `tir` namespace into `tirx` (core IR /\nlowering) and `s_tir` (schedule primitives / auto-tuning), some outdated\ndocumentation need to be updated. The global rename still leaves a few\nconcept-level references using \"tirx\" in prose (for example, \"Relax and\ntirx programs\"). Since \"tirx\" now refers only to one part of the old\nTensorIR stack, these higher-level references should use \"TensorIR\"\ninstead, so they correctly cover both `tirx` and `s_tir`.\n\nIn this PR, we\n- Add tirx / s_tir module descriptions to\n`docs/deep_dive/tensor_ir/index.rst` and `docs/arch/index.rst` (new\n`tvm/s_tir` section, updated `tvm/tirx` section).\n- Fix concept-level prose in `docs/arch/index.rst` and\n`docs/arch/pass_infra.rst` to use \"TensorIR\" instead of \"tirx\" where\nreferring to the concept rather than the namespace.\n- Fix `docs/arch/runtimes/vulkan.rst` to use \"TIR\" instead of \"tirx\" in\ndebug shader description.\n- Correct `tvm/dlight` → `tvm/s_tir/dlight` section path and \"tirx\nschedules\" → \"s_tir schedules\" in `docs/arch/index.rst`.\n- Revert unintended label changes in `abstraction.rst` and\n`tir_creation.py` (labels kept as `_tir-abstraction`, `_tir-creation`).\n- Revert unintended title change in `tir_transformation.py` (kept as\n\"Transformation\").\n- Revert `exclude-members` change in `tirx/tirx.rst` (kept original\nlist)."
    },
    {
      "commit": "c79caf0c0040fcdea8850214b84fbae4e26b543a",
      "tree": "ccae2b39005c81406af78e3d910dce78d3660474",
      "parents": [
        "cb5e290931fa403110c047618b3aad0e9df60607"
      ],
      "author": {
        "name": "YinHanke",
        "email": "hankeyin@gmail.com",
        "time": "Tue Mar 31 01:13:58 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 30 13:13:58 2026 -0400"
      },
      "message": "[Relax][ONNX] Complete ShapeExpr reshape handling in ONNX frontend (#18956)\n\n## Summary\n\nComplete `Reshape` handling for shape values in the Relax ONNX frontend.\n\n## Changes\n\n- keep `ShapeExpr -\u003e Reshape([-1])` on the shape-specialized path\n- materialize `ShapeExpr` to an `int64` tensor for other reshape targets\nand apply regular tensor reshape semantics\n- add frontend coverage for `Shape -\u003e Reshape([-1])`\n- add frontend coverage for reshaping shape outputs to non-`[-1]`\ntargets such as `[1, 3]` and `[3, 1]`\n- extend symbolic shape deduction coverage to include the common `Shape\n-\u003e Reshape([-1]) -\u003e Gather -\u003e Unsqueeze` shape-construction pattern\n\n## Validation\n\n- `pytest -k \u0027test_symbolic_shape_deduction or test_reshape_shape_output\nor test_reshape\u0027`\n\nThis PR completes the `Reshape` limitation in the Relax ONNX frontend\noperator work tracked in #18945."
    },
    {
      "commit": "cb5e290931fa403110c047618b3aad0e9df60607",
      "tree": "572adc9d6a6bfdce4b47420772947a62bd4a2ff1",
      "parents": [
        "e229bda76faff035a19fcdc515be51059ed4957b"
      ],
      "author": {
        "name": "HoYi",
        "email": "62729549+Aharrypotter@users.noreply.github.com",
        "time": "Mon Mar 30 19:33:35 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 30 07:33:35 2026 -0400"
      },
      "message": "[Relax][ONNX] Add Optional and MatMulInteger16 frontend support (#18950)\n\n## Summary\n\nThis PR adds Relax ONNX frontend support for:\n- `Optional`\n- `OptionalHasElement`\n- `OptionalGetElement`\n- `MatMulInteger16` from the `com.microsoft` domain\n\nThe implementation follows existing TVM ONNX frontend patterns and keeps\nOptional handling explicit through an empty-Optional sentinel during\nimport.\n\n## Changes\n\n- add ONNX frontend converters for `Optional`, `OptionalHasElement`, and\n`OptionalGetElement`\n- add ONNX frontend converter for `MatMulInteger16`\n- extend ONNX attribute parsing to handle `TYPE_PROTO`\n- preserve empty Optional values during import and unwrap them\nconsistently\n- register Optional-related ops and `MatMulInteger16` in the ONNX\nconverter map\n- handle Optional outputs correctly in importer output counting and\nnormalization\n- tighten converter docstrings and input validation for better\nconsistency with nearby TVM code\n\n## Tests\n\nAdded or updated tests in `tests/python/relax/test_frontend_onnx.py` to\ncover:\n- numerical correctness for `MatMulInteger16`\n- structural IR checks for `MatMulInteger16`\n- invalid dtype rejection for `MatMulInteger16`\n- tensor and sequence Optional round-trips\n- empty Optional behavior for `OptionalHasElement`\n- structural IR checks ensuring Optional ops are erased as expected\n- missing `type` attribute rejection for empty `Optional`\n- empty `OptionalGetElement` rejection\n\n## Validation\n\nValidated with:\n- `python -m ruff check python/tvm/relax/frontend/onnx/onnx_frontend.py\ntests/python/relax/test_frontend_onnx.py`\n- `python -m pytest -n 1 tests/python/relax/test_frontend_onnx.py -k\n\"optional or matmulinteger16\" -v`\n\nResult:\n- `13 passed`\n\nThis PR completes the ONNX `MatMulInteger16` and `Optional` work tracked\nin https://github.com/apache/tvm/issues/18945."
    },
    {
      "commit": "e229bda76faff035a19fcdc515be51059ed4957b",
      "tree": "a0fb5c07c1a79d645fd4329a82f2c23dbfc0df8c",
      "parents": [
        "8597d21a8d065e235f59e82ca178617107857c1e"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Sun Mar 29 15:26:49 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Mar 29 15:26:49 2026 -0400"
      },
      "message": "[Docs] Add tutorial for mixing Python/PyTorch with TVM using BasePyModule (#18947)\n\nThis pr add a new tutorial `mix_python_and_tvm_with_pymodule.py`\ndemonstrating how to use `BasePyModule` to mix Python/PyTorch functions\nwith TIR and Relax in a single IRModule.\n## Tutorial Contents (7 steps)                                  \n- **Step 1**: `I.pyfunc` + `call_tir` basics, DLPack zero-copy\nconversion, `show()`\n- **Step 2**: Debugging with `print` in pyfuncs — inspect intermediate\ntensors without compiling\n- **Step 3**: Realistic pipeline combining `call_tir`,\n`call_dps_packed`, and Python/PyTorch in one forward pass\n- **Step 4**: Dynamic function registration via `add_python_function`\n- **Step 5**: `RelaxToPyFuncConverter` — convert Relax IR to PyTorch at\ndifferent compilation stages (before and after passes) to verify\nnumerical correctness\n- **Step 6**: `R.call_py_func` — cross-level calls between compiled\nRelax VM and Python functions\n- **Step 7**: Symbolic shapes for dynamic batch sizes\nThis pr also fixs a bug in `BasePyModule._compile_functions` where\nmodules without Relax functions would incorrectly attempt Relax VM\ncompilation, producing spurious warnings like `Failed to compile Relax\nVM: \u0027NoneType\u0027 object has no attribute \u0027kind\u0027`."
    },
    {
      "commit": "8597d21a8d065e235f59e82ca178617107857c1e",
      "tree": "c504dde0f110e2d05411bc8e4f466796531f74bf",
      "parents": [
        "4df6b1750b790cb413c833f15f9741904871df4b"
      ],
      "author": {
        "name": "Kryptonite",
        "email": "oalazizi75@gmail.com",
        "time": "Sun Mar 29 22:08:45 2026 +0300"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Mar 29 15:08:45 2026 -0400"
      },
      "message": "[Frontend][ONNX] Add MatMulInteger support to Relax ONNX frontend (#18951)\n\n### Summary\n\nImplements the `MatMulInteger` operator (opset 10) in the Relax ONNX\nfrontend — INT8 matrix multiplication. Required for quantized model\ninference (e.g. ONNX QDQ models).\n\nCloses #18945 (Tier 1 — MatMulInteger operator)\n\n### Tests\n\n- All 4 `int8`/`uint8` dtype combinations, with and without scalar zero\npoints\n- 3-D and 4-D batched matmul\n\n---------\n\nSigned-off-by: OmarAzizi \u003coalazizi75@gmail.com\u003e"
    },
    {
      "commit": "4df6b1750b790cb413c833f15f9741904871df4b",
      "tree": "057d99a5f329c58647c0172c27e11ad30b4dafec",
      "parents": [
        "4de1f11344608d2305891c7fc585bc4f089158eb"
      ],
      "author": {
        "name": "YinHanke",
        "email": "hankeyin@gmail.com",
        "time": "Mon Mar 30 00:29:43 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Mar 29 12:29:43 2026 -0400"
      },
      "message": "[Relax][ONNX] Add roi_pool op and MaxRoiPool frontend support (#18952)\n\n## Summary\n\nAdd Relax `roi_pool` support and wire it through the ONNX frontend for\n`MaxRoiPool`.\n\n## Changes\n\n- add `relax.vision.roi_pool`, including attrs, Python wrapper, struct\ninfo inference, and legalization\n- add TOPI `roi_pool` compute for NCHW layout\n- support ONNX `MaxRoiPool` in the Relax ONNX frontend\n- handle empty / out-of-bound pooled bins according to ONNX/reference\nsemantics, returning `0` instead of propagating invalid reductions\n- add regression tests for Relax op inference, legalization, and ONNX\nfrontend import\n- add out-of-bound ROI coverage to make sure fully invalid pooled bins\nstill match ONNX Runtime\n\n## Validation\n\n- `pytest tests/python/relax/test_op_vision.py -k roi_pool`\n- `pytest tests/python/relax/test_frontend_onnx.py -k \u0027max_roi_pool\u0027`\n\n\nThis PR completes the `MaxRoiPool` portion of the Relax ONNX frontend\noperator work tracked in #18945."
    },
    {
      "commit": "4de1f11344608d2305891c7fc585bc4f089158eb",
      "tree": "c291e3a3cf2702b4211de6f82c8e404164a3435a",
      "parents": [
        "52b5d55fc4e629bed002b6e4e0383088a52ad17b"
      ],
      "author": {
        "name": "Dayuxiaoshui",
        "email": "158081477+Dayuxiaoshui@users.noreply.github.com",
        "time": "Sun Mar 29 17:42:35 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Mar 29 05:42:35 2026 -0400"
      },
      "message": "[Relax] Add conv3d_transpose and ONNX ConvTranspose 3D support (#18948)\n\nIntroduce relax.nn.conv3d_transpose (attrs, C++ inference/layout, Python\nAPI) and lower it to TOPI group_conv3d_transpose_ncdhw when using\nNCDHW/IODHW with dilation 1, matching the conv2d_transpose legalization\npolicy.\n\nWire the Relax ONNX frontend to emit conv3d_transpose for 5D inputs.\nExtend tests for ONNX, struct info, LegalizeOps, and TVMScript\nround-trip; fix ConvTranspose test output spatial size to include\noutput_padding.https://github.com/apache/tvm/issues/18945"
    },
    {
      "commit": "52b5d55fc4e629bed002b6e4e0383088a52ad17b",
      "tree": "fcaa2956c323d3fb38f982f4406910f00160eda5",
      "parents": [
        "38eb79c63f1514402e75efdf71ab1868532115c8"
      ],
      "author": {
        "name": "Kryptonite",
        "email": "oalazizi75@gmail.com",
        "time": "Sat Mar 28 20:25:47 2026 +0300"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Mar 28 13:25:47 2026 -0400"
      },
      "message": "[Frontend][ONNX] Add If operator support to Relax ONNX frontend (#18946)\n\n### Summary\n\nThis PR implements the ONNX `If` operator in the Relax ONNX frontend.\nThe `If` operator enables conditional branching in ONNX models, where a\nboolean condition selects between two subgraph branches (`then_branch`\nand `else_branch`) at runtime. This is required for any model with\nruntime-dependent execution paths.\n\nCloses #18945 (Tier 1 — `If` operator)\n\n### Implementation Notes\n\n- The main challenge is that `relax.If` cannot be emitted inside a\ndataflow block, which is how the ONNX frontend normally builds the\nentire graph. To handle this, when the graph contains an `If` node, the\nfunction body is built as a regular binding block instead — matching the\napproach used by the PyTorch Relax frontend for `torch.cond`.\n\n- Each branch is an ONNX subgraph that can reference values from the\nouter graph. A new `_convert_subgraph` method handles converting these\nsubgraphs into Relax expressions, making outer-scope values available to\nthe branch while ensuring branch-local bindings don\u0027t leak back to the\nparent graph.\n\n### Why `relax.If` cannot live inside a dataflow block\n\nDataflow blocks in Relax carry a semantic guarantee: every operation\ninside them must be pure and side-effect-free with no control flow. This\nallows the compiler to treat the entire block as a static computational\ngraph for optimizations like operator fusion and constant folding. An\n`If` node breaks this guarantee by introducing runtime-dependent\nbranching, so Relax\u0027s well-formedness checker explicitly forbids it. I\ndiscovered this when the checker raised:\n```\nThis IR is not well-formed: If nodes are not allowed to appear in dataflow blocks.\n```\n\nThe fix — skipping the dataflow block when the graph contains an `If`\nnode — mirrors exactly how the PyTorch Relax frontend handles\n`torch.cond`.\n\n### Known Limitations\n\n**Dataflow block**: Models whose top-level graph contains an `If` node\nare built without a dataflow block, which may affect downstream\noptimisation passes that rely on dataflow block structure.\n\n### Tests\n\nFour new tests covering: scalar and tensor conditions, condition\ncomputed from another op, and multiple branch outputs. All verified\nagainst onnxruntime via `check_correctness`.\n\n---------\n\nSigned-off-by: OmarAzizi \u003coalazizi75@gmail.com\u003e"
    },
    {
      "commit": "38eb79c63f1514402e75efdf71ab1868532115c8",
      "tree": "fa46a047930deb22aa5419cb6509ff124ccc9ff6",
      "parents": [
        "9dcc73131648b17fe2e500f81e94508e123e13de"
      ],
      "author": {
        "name": "HoYi",
        "email": "62729549+Aharrypotter@users.noreply.github.com",
        "time": "Sun Mar 29 01:24:27 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Mar 28 13:24:27 2026 -0400"
      },
      "message": "[Relax][Vision] Add get_valid_counts and classic NMS (#18943)\n\n## Summary\n\nAdd `relax.vision.get_valid_counts` and classic\n`relax.vision.non_max_suppression` for object-detection post-processing\npipelines.\n\n`get_valid_counts` performs score-based bounding box filtering and\ncompacts valid boxes to the front of each batch. Classic\n`non_max_suppression` performs flexible IoU-based suppression on\nfiltered boxes, complementing existing `all_class_non_max_suppression`\nfor custom post-processing workflows.\n\nThis PR implements the Relax-level registration, legalization, TOPI\ncompute, and test coverage for both operators.\n\n## Changes\n\n**Relax op registration and legalization:**\n- C++ op functions, FFI registration, and struct info inference for both\noperators (`vision.h`, `vision.cc`)\n- Python wrappers with Relax docstrings (`vision.py`)\n- Legalization to `topi.vision.get_valid_counts` and\n`topi.vision.non_max_suppression`\n- Additional struct-info validation for `score_index`, `id_index`, and\n`coord_start` when `elem_length` is statically known\n\n**TOPI and testing:**\n- Full TOPI implementation for `get_valid_counts`\n- Reimplementation of classic `non_max_suppression` in TOPI\n- NumPy reference implementations in `tvm.topi.testing` for both\noperators\n- Op-level tests for struct info inference, legalization, invalid\nattribute ranges, and e2e numerical correctness\n- Stronger legalization tests that verify both `relax.call_tir`\nintroduction and removal of the original Relax vision op\n\n## Limitations\n\n- Attribute range validation for `score_index`, `id_index`, and\n`coord_start` is only enforced when the input `elem_length` is\nstatically known during struct-info inference.\n- Classic `non_max_suppression` follows the existing Relax / TOPI API\nshape and is intended for single-class or class-aware custom\npost-processing flows, distinct from `all_class_non_max_suppression`.\n\n## Validation\n\n```bash\npytest tests/python/relax/test_op_vision.py -k \"get_valid_counts\" -v\npytest tests/python/relax/test_op_vision.py -k \"test_nms_\" -v\n```\nAll related tests passed."
    },
    {
      "commit": "9dcc73131648b17fe2e500f81e94508e123e13de",
      "tree": "cb9df6213a5d9557f351cee58b9e7268bfb55cd2",
      "parents": [
        "44dbd138d51314309955fba8c3d1294e4a89da35"
      ],
      "author": {
        "name": "Akaash Parthasarathy",
        "email": "43900735+akaashrp@users.noreply.github.com",
        "time": "Sat Mar 28 09:39:35 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Mar 28 09:39:35 2026 -0400"
      },
      "message": "[Web] Update includes after FFI JSON refactor (#18944)\n\nInclude `ffi/extra/json_parser.cc` and `ffi/extra/json_writer.cc` to\nmaintain compatibility after FFI JSON refactor"
    },
    {
      "commit": "44dbd138d51314309955fba8c3d1294e4a89da35",
      "tree": "0f7091d9a5b85ae1059e9fb13f1cd1bf1463ee49",
      "parents": [
        "3eb86f78ed1bdb2111118924d16e92bf2d1b054d"
      ],
      "author": {
        "name": "Tianqi Chen",
        "email": "tqchen@users.noreply.github.com",
        "time": "Sat Mar 28 09:38:18 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Mar 28 09:38:18 2026 -0400"
      },
      "message": "[FFI] Bump tvm-ffi to 63224e3 and fix regressions (#18938)\n\n## Summary\n\nBump tvm-ffi submodule from c85fd42 (#471) to 63224e3 (#512), spanning\n41 commits with 7 breaking changes. Fix regressions introduced by the\nbump:\n\n### Fixes\n\n1. **Duplicate field declarations in C++ types** — New tvm-ffi\nauto-wires `__init__` from C++ reflection by walking the parent type\nchain. Child types that re-declared parent fields\n(`RXPlaceholderOpNode`, `FunctionFrameNode`) caused duplicate parameter\nerrors. Fixed by removing duplicate field registrations from child\ntypes.\n\n2. **Repr format regression** (7 tests) — New tvm-ffi `CObject.__repr__`\nuses dataclass repr. Added `Node.__repr__` in `python/tvm/ir/base.py` to\nuse TVMScript printer for IR nodes.\n\n3. **Host/device function split** (3 tests) — `str(target.kind)` now\nreturns full dataclass repr instead of kind name. Changed to\n`target.kind.name` in `python/tvm/tirx/build.py`.\n\n4. **`__slots__` enforcement** — New tvm-ffi enforces `__slots__\u003d()` on\nObject subclasses. Added `__slots__ \u003d (\"__dict__\",)` to classes that\nneed instance attributes: `Pass`, `BlockBuilder`, `TVMDerivedObject`.\n\n### Changes\n- `3rdparty/tvm-ffi` — submodule bump c85fd42 → 63224e3\n- `python/tvm/ir/base.py` — `Node.__repr__` using TVMScript printer\n- `python/tvm/ir/transform.py` — `Pass.__slots__ \u003d (\"__dict__\",)`\n- `python/tvm/tirx/build.py` — `target.kind.name` instead of\n`str(target.kind)`\n- `python/tvm/relax/block_builder.py` — `BlockBuilder.__slots__ \u003d\n(\"__dict__\",)`\n- `python/tvm/runtime/support.py` — `TVMDerivedObject.__slots__ \u003d\n(\"__dict__\", \"__weakref__\")`\n- `python/tvm/s_tir/meta_schedule/utils.py` —\n`TVMDerivedObject.__slots__ \u003d (\"__dict__\",)`\n- `include/tvm/script/ir_builder/relax/frame.h` — remove duplicate field\nregistrations\n- `src/relax/ir/emit_te.h` — remove duplicate field registrations\n\n## Test plan\n- [x] tirx-base: 251 passed, 23 skipped\n- [x] relax import + build: verified\n- [ ] Full CI"
    },
    {
      "commit": "3eb86f78ed1bdb2111118924d16e92bf2d1b054d",
      "tree": "b06d816693c21f2586b75f575f6b33a68ede796f",
      "parents": [
        "f66c04840fac174cea193ab78241a7d09c06cf47"
      ],
      "author": {
        "name": "Dayuxiaoshui",
        "email": "158081477+Dayuxiaoshui@users.noreply.github.com",
        "time": "Sat Mar 28 12:20:22 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Mar 28 00:20:22 2026 -0400"
      },
      "message": "[Relax][TOPI] Add relax.vision.multibox_transform_loc for SSD/TFLite box decode (#18942)\n\nIntroduce relax.vision.multibox_transform_loc with\nMultiboxTransformLocAttrs: decode center-size offsets against ltrb\npriors, softmax on class logits, and optional clip, threshold masking,\nand background score zeroing. Register the C++ op with FInferStructInfo\nchecks for shapes and dtypes (including batch and 4*N consistency).\nLegalize to topi.vision.multibox_transform_loc.\n\nAdd tests for struct inference, invalid inputs, Legalize+e2e on LLVM,\nattribute branches, and TVMScript roundtrip. Add a standalone numpy\nreference under topi/testing (not exported from tvm.topi.testing to\navoid pulling scipy).\n\nUpdate TFLite frontend NotImplementedError text for\nDETECTION_POSTPROCESS and NON_MAX_SUPPRESSION_V5 to note multibox is\navailable and link tracking issue #18928."
    },
    {
      "commit": "f66c04840fac174cea193ab78241a7d09c06cf47",
      "tree": "f7811b32b94e5d182bd0cf099e7e82a7887dd23e",
      "parents": [
        "9c8e5a60376ff16a7a88dc841271befd9f32bf96"
      ],
      "author": {
        "name": "cj",
        "email": "erhsh_165@126.com",
        "time": "Sat Mar 28 02:56:22 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 27 14:56:22 2026 -0400"
      },
      "message": "[DOC] Fix inconsistent code comments (#18939)\n\nThe file `tvm/python/update_version.py` does not exist, so remove the\ncomment."
    },
    {
      "commit": "9c8e5a60376ff16a7a88dc841271befd9f32bf96",
      "tree": "b15185a1ed8b0f3a70619a54ede9654a4504eb46",
      "parents": [
        "53b65762816693cdaa81f67baec72f378b5945bc"
      ],
      "author": {
        "name": "Ruihang Lai",
        "email": "ruihangl@cs.cmu.edu",
        "time": "Fri Mar 27 09:30:37 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 27 09:30:37 2026 -0400"
      },
      "message": "[TIR] Handle Bind in LowerDeviceKernelLaunch (#18912)\n\nDeviceInfoCollector did not track Bind statements, so when CSE (or\nany other pass) inserted a Bind before a thread_extent AttrStmt, the\ncollected extent referenced a locally-bound variable instead of\nfunction parameters.  LowerDeviceKernelLaunch then produced dangling\nreferences in the host function.\n\nFix: record Bind definitions in DeviceInfoCollector and inline them\nwhen extracting thread_extent values and dynamic shared memory sizes."
    },
    {
      "commit": "53b65762816693cdaa81f67baec72f378b5945bc",
      "tree": "a24eb34470aaea7c8d94447904a3a69318493f1d",
      "parents": [
        "90e6e8b77a76907a1eee8d2b97b7bc01315f69e5"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Fri Mar 27 03:17:49 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 27 16:17:49 2026 +0900"
      },
      "message": "[Fix] Fix tvm.tir references in Tflite frontend (#18940)\n\nas per title"
    },
    {
      "commit": "90e6e8b77a76907a1eee8d2b97b7bc01315f69e5",
      "tree": "c81a00e6dc33f56f13d38656d485eb7c625af4cb",
      "parents": [
        "2f2469e6371dd4c9f89cba3924d877d090230861"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Fri Mar 27 01:51:09 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 27 01:51:09 2026 -0400"
      },
      "message": "[Docs] Fix duplicate license headers and incorrect module paths after tirx rename (#18941)\n\n- Remove duplicate Apache license blocks in\n`docs/get_started/overview.rst` and `docs/README.md` introduced by\n#18913, which render as visible garbage text in built documentation\n- Fix incorrect `tirx/schedule` and `tirx/tensor_intrin` paths in\n`docs/arch/index.rst` — these modules are in `s_tir/`, not `tirx/`"
    },
    {
      "commit": "2f2469e6371dd4c9f89cba3924d877d090230861",
      "tree": "21642d1b8b027a70b328b603cd2ff01b6d73e3fe",
      "parents": [
        "bf6ed31302486fe189c9872642c4a5dbd5c7988f"
      ],
      "author": {
        "name": "HoYi",
        "email": "62729549+Aharrypotter@users.noreply.github.com",
        "time": "Fri Mar 27 10:38:10 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 26 22:38:10 2026 -0400"
      },
      "message": "[Relax] Add affine_grid operator with PyTorch and ONNX frontend support (#18933)\n\n## Summary\n\nAdd `relax.image.affine_grid` operator for Spatial Transformer Networks,\nalong with PyTorch and ONNX frontend integration.\n\nTOPI compute (`topi.image.affine_grid`) already exists. This PR\ncompletes the Relax-level registration and frontend support, following\nthe existing `resize2d` / `grid_sample` pattern.\n\n## Changes\n\n**Relax op registration:**\n- C++ op function, FFI registration, and struct info inference\n(`resize.h`, `resize.cc`)\n- Python wrapper with flexible size parameter handling (`image.py`)\n- Legalization to `topi.image.affine_grid` with `PrimExpr` → `int`\nconversion\n- Op-level tests (struct info inference + e2e numerical correctness) and\nlegalization test\n\n**PyTorch frontend:**\n- Converter for `aten.affine_grid_generator.default`\n- Layout conversion from TVM `[N,2,H,W]` to PyTorch `[N,H,W,2]` via\n`permute_dims`\n- Single-kernel path is 5.6x faster than the decomposed path (30+ ops)\n- Structural IR test and numerical correctness test\n\n**ONNX frontend:**\n- `AffineGrid` converter with `_impl_v20` (opset 20, when the op was\nfirst introduced)\n- Support for constant size tensor `[N,C,H,W]`\n- Layout conversion from TVM `[N,2,H,W]` to ONNX `[N,H,W,2]`\n- End-to-end correctness test against ONNX Runtime\n\n## Limitations\n\n- Only `align_corners\u003dTrue` is supported (matches current TOPI\nimplementation)\n- Only 2D affine grid is supported\n\n## Validation\n\n```bash\npytest tests/python/relax/test_op_image.py -k \"affine_grid\" -v           # 8 passed\npytest tests/python/relax/test_transform_legalize_ops_image.py -k \"affine_grid\" -v  # 1 passed\npytest tests/python/relax/test_frontend_from_exported_program.py -k \"affine_grid\" -v  # 2 passed\npytest tests/python/relax/test_frontend_onnx.py -k \"affine_grid\" -v     # 1 passed\n```\n\nAll 12 tests passed.\n\n---------\n\nCo-authored-by: Claude Opus 4.6 \u003cnoreply@anthropic.com\u003e"
    },
    {
      "commit": "bf6ed31302486fe189c9872642c4a5dbd5c7988f",
      "tree": "b3628bbfe99a09737e24b4ba3b89b93e2db73198",
      "parents": [
        "1e08eb2fa9dcb59a4f34917a94c659d81ad36c0c"
      ],
      "author": {
        "name": "Nirdesh Devadiya",
        "email": "89861517+nirdesh17@users.noreply.github.com",
        "time": "Thu Mar 26 20:14:27 2026 +0530"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 26 10:44:27 2026 -0400"
      },
      "message": "[Relax][PyTorch] Add 3D interpolate support using resize3d (#18937)\n\nAdds support for torch.nn.functional.interpolate 3D mode in Relax\nfrontend.\n\n- Handles 5D inputs (NCDHW)\n- Maps to relax.op.image.resize3d\n- Ensures correct layout handling\n- Adds tests for scale_factor and size cases\n\nAll tests pass locally.\n\npart of #18928\n\nSigned-off-by: nirdesh17 \u003cnirdeshdevadiya17@gmail.com\u003e"
    },
    {
      "commit": "1e08eb2fa9dcb59a4f34917a94c659d81ad36c0c",
      "tree": "808337ae620b2e10d38751c6332d178ba06b5258",
      "parents": [
        "4d36a45b9f5d3a52b1337c77bd6162fd71796c87"
      ],
      "author": {
        "name": "YinHanke",
        "email": "hankeyin@gmail.com",
        "time": "Thu Mar 26 22:41:54 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 26 10:41:54 2026 -0400"
      },
      "message": "[Relax][ONNX][Torch] Add roi_align support and frontend integration (#18936)\n\n## Summary\n\nAdd Relax `roi_align` support and wire it through the ONNX and PyTorch\nfrontends.\n\n## Changes\n\n- add `relax.vision.roi_align`, including attrs, Python wrapper, struct\ninfo inference, and legalization\n- add TOPI `roi_align` compute and keep both legacy and aligned ROIAlign\nsemantics\n- support ONNX `RoiAlign`, including `coordinate_transformation_mode`\nhandling for `output_half_pixel` and `half_pixel`\n- support PyTorch `torchvision.ops.roi_align` in the exported-program\nfrontend, including the `aligned` flag\n- add regression tests for Relax op inference, legalization, TVMScript\nparsing, ONNX frontend import, and PyTorch frontend import\n- add aligned ROIAlign test coverage to make sure sub-pixel RoIs no\nlonger use the legacy `min\u003d1.0` clamp\n\n## Validation\n\n- `pytest tests/python/relax/test_op_vision.py -k roi_align`\n- `pytest tests/python/relax/test_tvmscript_parser_op_vision.py -k\nroi_align`\n- `pytest tests/python/relax/test_frontend_onnx.py -k roi_align`\n- `pytest tests/python/relax/test_frontend_from_exported_program.py -k\nroi_align`\n\nThis PR completes the Relax/ONNX/Torch roi_align work tracked in #18928."
    },
    {
      "commit": "4d36a45b9f5d3a52b1337c77bd6162fd71796c87",
      "tree": "85c084c6e1b0a11a457a7c48ccc2c8add3b53551",
      "parents": [
        "59c14f6d45a961a5384a7f40e52e58d2c83ea343"
      ],
      "author": {
        "name": "Dayuxiaoshui",
        "email": "158081477+Dayuxiaoshui@users.noreply.github.com",
        "time": "Thu Mar 26 12:37:25 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 26 00:37:25 2026 -0400"
      },
      "message": "[Relax][ONNX] Add image.resize3d op and wire 5D Resize (#18931)\n\n## Summary\n\n- Add Relax `image.resize3d` end-to-end: attrs, C++ op\nregistration/inference, Python API, and legalization to\n`topi.image.resize3d`.\n- Update ONNX 5D `Resize` to emit `relax.image.resize3d` instead of\ndirect `emit_te(topi.image.resize3d)`.\n- Reuse the existing `resize2d` implementation pattern, which let us\nmove faster while keeping behavior consistent and risk low.\n- Add tests for op inference, TVMScript parser, legalization, ONNX\nimport, and `resize3d` negative/error cases.\n\n## Test Plan\n\n- `python3 -m pytest -q tests/python/relax/test_op_image.py -k\n\u0027resize3d\u0027`\n- `python3 -m pytest -q\ntests/python/relax/test_transform_legalize_ops_image.py -k \u0027resize3d\u0027`\n- `python3 -m pytest -q\ntests/python/relax/test_tvmscript_parser_op_image.py -k \u0027resize3d\u0027`\n- `python3 -m pytest -q tests/python/relax/test_frontend_onnx.py -k\n\u0027resize_nd_sizes or resize_5d_emits_relax_resize3d\u0027`"
    },
    {
      "commit": "59c14f6d45a961a5384a7f40e52e58d2c83ea343",
      "tree": "ee7036fd573387e107258ca7ce0e0d60063c6404",
      "parents": [
        "e53cfe138c8660c473d7008c8ccb950ab673d5aa"
      ],
      "author": {
        "name": "Kryptonite",
        "email": "oalazizi75@gmail.com",
        "time": "Thu Mar 26 03:28:26 2026 +0300"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 25 20:28:26 2026 -0400"
      },
      "message": "[Relax][ONNX] Add GridSample ONNX frontend integration (#18932)\n\n### Summary\n\n- Implements ONNX `GridSample` frontend integration for Relax, which was\npreviously commented out in the converter map.\n- Adds `GridSample` converter class that handles ONNX→TVM grid shape\ntranspose (`[N, H_out, W_out, 2]` → `[N, 2, H_out, W_out]`) and\nmode/padding attribute mapping.\n- Reuses the existing `grid_sample` Relax op\n(`relax.op.image.grid_sample`), which already exists, keeping the change\nminimal and focused on the frontend layer.\n- Adds tests covering all supported mode/padding_mode/align_corners\ncombinations.\n\nCloses part of #18928\n\n### Notes for Maintainers\n\nThe ONNX spec defines the default mode as `\"linear\"`, but onnxruntime\nonly accepts `\"bilinear\"`. I\u0027ve set the converter default to\n`\"bilinear\"` — happy to add a `\"linear\"` → `\"bilinear\"` translation if\nneeded for spec compliance. (**Edit: This was addressed**)\n\n### Test Plan\n```bash\npython3 -m pytest -q tests/python/relax/test_frontend_onnx.py -k \u0027grid_sample\u0027\n```\n\n---------\n\nSigned-off-by: OmarAzizi \u003coalazizi75@gmail.com\u003e"
    },
    {
      "commit": "e53cfe138c8660c473d7008c8ccb950ab673d5aa",
      "tree": "9ee8941dab9a4a5f56c13923f3dc166928fe45bd",
      "parents": [
        "ff883dbcbc7e52db32d1d320e796477082c72197"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Wed Mar 25 17:10:35 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 25 17:10:35 2026 -0400"
      },
      "message": "[Frontend][TFLite] Fix undefined symbols and Relay API remnants in TFLite frontend (#18929)\n\nThe TFLite frontend was ported from Relay but contains several undefined\nsymbols and Relay-specific APIs that cause runtime errors. This PR cleans\nup these issues so that working code paths are clean and broken paths fail\nwith clear `NotImplementedError` instead of `NameError`."
    },
    {
      "commit": "ff883dbcbc7e52db32d1d320e796477082c72197",
      "tree": "89fbe03e25378a1146bf9c3735707b6f089d22c7",
      "parents": [
        "7b3fa38e6b76564dabe5d6b09023a566a1520c27"
      ],
      "author": {
        "name": "Tianqi Chen",
        "email": "tqchen@users.noreply.github.com",
        "time": "Tue Mar 24 11:17:29 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 25 00:17:29 2026 +0900"
      },
      "message": "Revert \"fix: add safety warning to pickle_memoize cache loading\" (#18926)\n\nReverts apache/tvm#18925"
    },
    {
      "commit": "7b3fa38e6b76564dabe5d6b09023a566a1520c27",
      "tree": "e8554a5354ecad967311ab8fcf8ea7c5eb4c8455",
      "parents": [
        "5e05f70a1a9a9974ab0a5371e64edd0774599d94"
      ],
      "author": {
        "name": "scruge1",
        "email": "190668909+scruge1@users.noreply.github.com",
        "time": "Tue Mar 24 05:39:47 2026 +0000"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 24 01:39:47 2026 -0400"
      },
      "message": "fix: add safety warning to pickle_memoize cache loading (#18925)\n\n## Summary\n`pickle_memoize` loads cached pickle files via `pickle.load()` without\nany integrity verification or user warning. If an attacker can write to\nthe cache directory, they can inject malicious pickle payloads that\nexecute arbitrary code on next load.\n\n## Fix\nAdds a `UserWarning` when loading pickle cache files to alert users\nabout the security risk.\n\n## Related\nHuntr security vulnerability report (CWE-502: Deserialization of\nUntrusted Data)\n\nSigned-off-by: scruge1 \u003cscruge1@proton.me\u003e\nCo-authored-by: scruge1 \u003cscruge1@proton.me\u003e"
    },
    {
      "commit": "5e05f70a1a9a9974ab0a5371e64edd0774599d94",
      "tree": "89fbe03e25378a1146bf9c3735707b6f089d22c7",
      "parents": [
        "d33630c2a28c6b6d6b86f58ec952d1f4e14df29f"
      ],
      "author": {
        "name": "Siva",
        "email": "quic_sivb@quicinc.com",
        "time": "Mon Mar 23 08:36:32 2026 +0530"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 23 12:06:32 2026 +0900"
      },
      "message": "[Relax][TFLite] Introduce TensorFlow Lite frontend (#18868)\n\nVerified across the full range of classification nets.\nQuantization is disabled at the moment.\nA few unsupported ops remain in the conversion maps; they will need to\nbe mapped in the future as the Relax op inventory grows."
    },
    {
      "commit": "d33630c2a28c6b6d6b86f58ec952d1f4e14df29f",
      "tree": "db778d2db17d86bc5035b17bf27f94ba62ae55ed",
      "parents": [
        "00eb226f8d6f569f93a5770510af23ab2159a850"
      ],
      "author": {
        "name": "Yu Chengye",
        "email": "60293095+kabu1204@users.noreply.github.com",
        "time": "Sat Mar 21 21:43:24 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Mar 21 09:43:24 2026 -0400"
      },
      "message": "[Vulkan] Avoid explicit layout decoration on non-interface allocations (#18914)\n\nSPIR-V codegen currently emits `ArrayStride` and `Offset` decorations\nfor non-interface allocations in `GetStructArrayType()`. That is correct\nfor descriptor-backed interface blocks, but not for static workgroup\nallocations.\n\nI hit this while bringing up tilelang vulkan shared memory allocation\npath: the vulkan validation rejected shaders that used shared memory\nlowered through this path:\n```\ntvm.error.InternalError: Check failed: res \u003d\u003d SPV_SUCCESS (-10 vs. 0) :\nindex\u003d44 error:[VUID-StandaloneSpirv-None-10684] Invalid explicit layout decorations on type for operand \u002725[%_ptr_Workgroup__struct_24]\u0027\n  %A_shared \u003d OpVariable %_ptr_Workgroup__struct_24 Workgroup\n```\n\nFIX: This PR keeps layout decoration for interface blocks, and skips for\nnon-interface allocations such as static shared/workgroup memory. A new\ncompile-only test is added for this.\n\nOne possible concern is that there\u0027s already a pre-existing test using\n`fetch_to_shared`."
    },
    {
      "commit": "00eb226f8d6f569f93a5770510af23ab2159a850",
      "tree": "4b313678939a32f269273c22411fdc21baa71a3a",
      "parents": [
        "c581abeee410cc5426a667f64b3ac27b59ace2af"
      ],
      "author": {
        "name": "Masahiro Hiramori",
        "email": "contact@mshr-h.com",
        "time": "Fri Mar 20 21:04:46 2026 +0900"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 20 20:04:46 2026 +0800"
      },
      "message": "[Refactor] Update type references from tir to tirx in PyTorch ExportedProgram frontend (#18920)\n\nFollow up for #18913 and #18917"
    },
    {
      "commit": "c581abeee410cc5426a667f64b3ac27b59ace2af",
      "tree": "d60b495b4d2f95e30afd8ce8657fc9a191789ac3",
      "parents": [
        "69336ac4a5f9e244deaaad94a031417c64114b2c"
      ],
      "author": {
        "name": "Masahiro Hiramori",
        "email": "contact@mshr-h.com",
        "time": "Fri Mar 20 15:18:52 2026 +0900"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 20 02:18:52 2026 -0400"
      },
      "message": "[Relax][PyTorch] Fix _slice and _expand for dynamic shapes in PyTorch ExportedProgram frontend (#18918)\n\nFixes two issues when translating PyTorch models with dynamic shapes:\n\n1. **_slice**: Resolve `fx.Node` references in start/end/step arguments\nand detect identity slices where the symbolic end equals the tensor\ndimension (avoids redundant `strided_slice` ops).\n\n2. **_expand**: Fall back to FX node metadata when `shape_of()` returns\n`None` for tensors with unknown shapes."
    },
    {
      "commit": "69336ac4a5f9e244deaaad94a031417c64114b2c",
      "tree": "3183a6cac282d0ed13c903b9440eb6cbbef6062e",
      "parents": [
        "141c22fd8abad2d729926232ccde1233f519330c"
      ],
      "author": {
        "name": "Masahiro Hiramori",
        "email": "contact@mshr-h.com",
        "time": "Fri Mar 20 15:17:36 2026 +0900"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 20 02:17:36 2026 -0400"
      },
      "message": "[TOPI] Fix strided_slice_with_axes to handle negative axis values (#18917)\n\nNegative axis values (e.g., `axes\u003d[-1]`) in `strided_slice_with_axes`\nwere used directly as array indices without normalization, causing an\n`IndexError` during `LegalizeOps`.\n\nThis PR normalizes negative axes to positive equivalents before passing\nthem to `StridedSliceCanonicalizeBegin`, `StridedSliceOutputShape`, and\nthe compute lambda."
    },
    {
      "commit": "141c22fd8abad2d729926232ccde1233f519330c",
      "tree": "ab795e3b08aea2995bdb9a0144372269debce47f",
      "parents": [
        "c9fb8cd3cd481fd77de1aeb18e40cc197e4f59cf"
      ],
      "author": {
        "name": "Tianqi Chen",
        "email": "tqchen@users.noreply.github.com",
        "time": "Thu Mar 19 21:27:54 2026 -0700"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 19 21:27:54 2026 -0700"
      },
      "message": "[Refactor] Bring up tirx namespace (#18913)\n\nThis PR brings up the tirx namespace. We have been splitting the\noriginal tir namespace to carve out the high-level component s_tir, and\nthis PR renames the remaining low-level part to the tirx namespace."
    },
    {
      "commit": "c9fb8cd3cd481fd77de1aeb18e40cc197e4f59cf",
      "tree": "f77630e334ae00281b42b932a711d66d6784a794",
      "parents": [
        "42a534996cce3ac76d44c75c23af595190eae450"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Wed Mar 18 13:57:14 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 18 13:57:14 2026 -0400"
      },
      "message": "[Docs] Clean up stale references from recent refactors (#18908)\n\nThis is a follow-up pr of #18906 \n\n- **Removed \"Unity\" branding**: Replace \"Apache TVM Unity\" with \"Apache\nTVM\" across all docs (Unity branch has been merged into\n  main)\n- **Fixed stale Python APIs**: `tvm.instrument` → `tvm.ir.instrument`,\n`tvm.transform` → `tvm.ir.transform`, `tvm.module.Module` →\n`tvm.runtime.Module`, `tvm.convert` → `tvm.runtime.convert`,\n`tvm.runtime.load` → `tvm.runtime.load_module`\n- **Fixed S-TIR migration**: `tvm.tir.transform.DefaultGPUSchedule` →\n`tvm.s_tir.transform.DefaultGPUSchedule`; added missing\n  `s_tir/transform` API doc page\n- **Removed references to deleted features**: AutoTVM/AutoScheduler\n(removed), FewShotTuning (phased out in #18864), i386 CI\n(removed in #18737), VTA (removed), TensorFlow frontend (not in Relax),\nMXNet/Gluon (archived)\n- **Fixed stale C++ references**: `NodeRef` → `ObjectRef`, `make_node` →\n`ffi::make_object`, removed unused `using\ntvm::runtime::Registry`, `src/relax/transforms/` →\n`src/relax/transform/`\n- **Updated CI docs**: Added GitHub Actions lint workflow info to\n`ci.rst`\n- **Fixed broken links**: dead tutorial links in `pass_infra.rst`,\n`gallery/` → `docs/how_to/tutorials/`, `how-to/index.rst` →\n  `how_to/dev/index.rst`, `discuss.tvm.ai` → `discuss.tvm.apache.org`\n- **Updated overview**: TensorFlow → ONNX in supported framework list,\nAutoTVM → RPC in security docs"
    },
    {
      "commit": "42a534996cce3ac76d44c75c23af595190eae450",
      "tree": "465cce870fda28e7fec1539164fa3a94d67abed0",
      "parents": [
        "be2ea89e443f5eb857ce1217d0287a076c0b0fbe"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Wed Mar 18 13:56:09 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 18 13:56:09 2026 -0400"
      },
      "message": "[Target][LLVM] Fix -mcpu validation compatibility across LLVM versions (#18909)\n\nPR #18884 replaced the `getAllProcessorDescriptions()` enumeration check\nwith `MCSubtargetInfo::isCPUStringValid()` to fix LLVM 22+ where\n`apple-m1` became an alias not listed in the enumeration. However, on\nLLVM 19, `isCPUStringValid(\"apple-m1\")` returns false, even though the\nCPU is valid and present in the enumeration, producing errors like:\n```\nError: Using LLVM 19.1.7\nwith `-mcpu\u003dapple-m1` is not valid in `-mtriple\u003darm64-apple-macos`, using default `-mcpu\u003dgeneric`\n[20:03:31] /workspace/tvm/src/target/llvm/llvm_instance.cc:219: Error: Using LLVM 19.1.7 with\n`-mcpu\u003dapple-m1` is not valid in `-mtriple\u003darm64-apple-macos`, using default `-mcpu\u003dgeneric`\n[20:03:31] /workspace/tvm/src/target/llvm/llvm_instance.cc:219: Error: Using LLVM 19.1.7 with\n`-mcpu\u003dapple-m2` is not valid in `-mtriple\u003darm64-apple-macos`, using default `-mcpu\u003dgeneric`\n[20:03:31] /workspace/tvm/src/target/llvm/llvm_instance.cc:219: Error: Using LLVM 19.1.7 with\n`-mcpu\u003dapple-m2` is not valid in `-mtriple\u003darm64-apple-macos`, using default `-mcpu\u003dgeneric`\n[20:03:31] /workspace/tvm/src/target/llvm/llvm_instance.cc:219: Error: Using LLVM 19.1.7 with\n`-mcpu\u003dapple-m1` is not valid in `-mtriple\u003darm64-apple-macos`, using default `-mcpu\u003dgeneric`\n[20:03:31] /workspace/tvm/src/target/llvm/llvm_instance.cc:219: Error: Using LLVM 19.1.7 with\n`-mcpu\u003dapple-m1` is not valid in `-mtriple\u003darm64-apple-macos`, using default `-mcpu\u003dgeneric`\n```\n\n- Fix `-mcpu` validation regression introduced by #18884, where\n`isCPUStringValid()` returns false for valid CPUs like `apple-m1` on\nLLVM 19, causing spurious `LOG(ERROR)` warnings\n- Take the union of both validation methods: try `isCPUStringValid()`\nfirst (handles LLVM 22+ aliases), then fall back to\n`getAllProcessorDescriptions()` enumeration (handles LLVM 17-21)"
    },
    {
      "commit": "be2ea89e443f5eb857ce1217d0287a076c0b0fbe",
      "tree": "2e01416fab166eff2b2f574032106ad0542808f8",
      "parents": [
        "474cde494dac47cab60d8cd4181ae22bfee98bf5"
      ],
      "author": {
        "name": "Shushi Hong",
        "email": "820958424@qq.com",
        "time": "Mon Mar 16 12:07:23 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 16 12:07:23 2026 -0400"
      },
      "message": "[Docs] Update outdated references from recent refactors   (#18906)\n\n- Update Python version requirement from 3.7/3.8 to 3.10 in docs\n- Migrate lint commands from `docker/lint.sh` / `docker/bash.sh ci_lint`\nto `pre-commit` (#18807)\n- Replace `make` / `make runtime` with CMake commands (root Makefile\nremoved in #18828)\n- Remove `dmlc::ThreadLocalStore` reference in pass_infra (dmlc phased\nout in #18779)\n- Update FFI header links in runtime.rst (`packed_func.h` →\n`tvm/ffi/function.h`)\n  - Remove stale `tvm.testing.ErrorTest` example in error_handling.rst\n  - Remove dead `apps/extension` link in runtime.rst"
    },
    {
      "commit": "474cde494dac47cab60d8cd4181ae22bfee98bf5",
      "tree": "0ce0b81b99a4801adb9a7c2ac285835ca99f6b00",
      "parents": [
        "353cc701cfacf7d066791997204c6d211eec0638"
      ],
      "author": {
        "name": "kimm240",
        "email": "67453494+kimm240@users.noreply.github.com",
        "time": "Mon Mar 16 13:10:50 2026 +0900"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 16 00:10:50 2026 -0400"
      },
      "message": "[Optimization][Operator] Implement and enable Conv2d-Reshape-Add-ReLU fusion (#18240)\n\nThis commit extends the make_fused_bias_activation_pattern function to\nsupport\nPyTorch frontend\u0027s specific IR generation pattern for convolution\noperations\nwith bias. When PyTorch models with bias\u003dTrue are converted to Relax IR,\nthe\nfrontend generates a conv2d -\u003e reshape -\u003e add -\u003e relu sequence instead\nof the\nsimpler conv2d -\u003e add -\u003e relu pattern that existing fusion logic\nexpected.\n  \nThe key changes include:  \n  \n1. Add allow_reshape parameter to make_fused_bias_activation_pattern in\nboth\ndpl/pattern.py and backend/patterns.py with default value False to\nmaintain\n   backward compatibility.  \n  \n2. When allow_reshape\u003dTrue, the pattern matcher now recognizes and fuses\nthe\ncomplete conv2d -\u003e reshape -\u003e add -\u003e relu sequence into a single\ncomposite\nfunction, eliminating intermediate tensor allocations and kernel launch\n   overhead.  \n  \n3. The original pattern (allow_reshape\u003dFalse) only fuses conv2d -\u003e add\n-\u003e relu,\nleaving the reshape operation outside the fused function, which results\nin\n   suboptimal performance for PyTorch-originated models.  \n  \nThis enhancement enables more efficient operator fusion for PyTorch\nmodels,\nreducing memory usage and improving execution performance by capturing\nthe\ncomplete computation pattern in a single fused kernel. The\nimplementation\nmaintains full backward compatibility while extending support for\nPyTorch\nfrontend\u0027s specific IR generation patterns.  \n  \nComprehensive tests are added to verify the fusion behavior with both\nold and\nnew patterns, ensuring correctness across different convolution types\n(Conv1d,\nConv2d, Conv3d) and validating that fusion only occurs when appropriate\nconditions are met.\n\n---------\n\nCo-authored-by: kim hyun gyu \u003ckimm240@telepix.net\u003e"
    },
    {
      "commit": "353cc701cfacf7d066791997204c6d211eec0638",
      "tree": "19a59eebae4491a2d39f0e66c0c8d27684177d7d",
      "parents": [
        "64a24c4104110c83c15ac507effbc18845b2d530"
      ],
      "author": {
        "name": "Akaash Parthasarathy",
        "email": "43900735+akaashrp@users.noreply.github.com",
        "time": "Sun Mar 15 03:39:29 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Mar 15 15:39:29 2026 +0800"
      },
      "message": "[Web][Experimental] Add support for cross-origin storage caching (#18893)\n\n1. Add support for cross-origin storage caching\n2. Bump version to 0.24.0-dev2"
    },
    {
      "commit": "64a24c4104110c83c15ac507effbc18845b2d530",
      "tree": "b0c20ebe04197ea0aadfd8e18889d1895f60b43a",
      "parents": [
        "3788e99b8cd66c45111ba94a1b685291bf2bd64b"
      ],
      "author": {
        "name": "Siva",
        "email": "quic_sivb@quicinc.com",
        "time": "Sun Mar 15 11:49:43 2026 +0530"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Mar 15 02:19:43 2026 -0400"
      },
      "message": "[RELAX][LAYOUT] Support multiple-axis packing (#18869)\n\nLike OIHW[4o4i], where we can pack multiple axes.\nHelpful when handling complex target layouts.\nThis PR covers the layout representation and transforms for these."
    },
    {
      "commit": "3788e99b8cd66c45111ba94a1b685291bf2bd64b",
      "tree": "2f3e8a9849c6e31e919f99fdae9dc54db119a744",
      "parents": [
        "122592b1d80c4dbfef896e8fb4e590d75a2ada67"
      ],
      "author": {
        "name": "Masahiro Hiramori",
        "email": "contact@mshr-h.com",
        "time": "Sun Mar 15 13:33:36 2026 +0900"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Mar 15 00:33:36 2026 -0400"
      },
      "message": "[Relax][PyTorch] Add torch.cond support to ExportedProgram frontend (#18904)\n\nAdd support for importing torch.ops.higher_order.cond from PyTorch\nExportedProgram into Relax IR. This enables torch.cond-based conditional\nlogic to be represented as relax.If with branch functions.\n\nKey changes:\n- Add \u0027cond\u0027 entry to ExportedProgramImporter.convert_map\n- Add _has_cond_op() to detect cond in FX graphs\n- Add _import_branch_subgraph() to translate branch GraphModules into\nseparate Relax functions with fresh symbolic vars\n- Add _cond() converter that emits relax.If\n- Skip DataflowBlock when graph contains cond (use BindingBlock)\n- Add gt/lt to symbolic comparison operators\n- Add 4 tests with structural equality: basic, shape predicate, tuple\noutput, nested cond\n\n---------\n\nCo-authored-by: Copilot \u003c223556219+Copilot@users.noreply.github.com\u003e"
    },
    {
      "commit": "122592b1d80c4dbfef896e8fb4e590d75a2ada67",
      "tree": "bb6ec899e07c90c22f22824e1d51c233c6b2674c",
      "parents": [
        "1499bdae5950b7bcaa585df4e736b894d11768f1"
      ],
      "author": {
        "name": "anusha975",
        "email": "canusha835@gmail.com",
        "time": "Fri Mar 13 12:20:15 2026 +0530"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 13 02:50:15 2026 -0400"
      },
      "message": "[Relax] Add input type validation for make_shape and corresponding tests (#18870)\n\n## Summary\n\nThis PR adds input type validation for `make_shape` in Relax to ensure\nonly valid argument types are accepted.\n\n## Changes\n\n- Added validation logic to `make_shape`\n- Improved error handling for invalid inputs\n- Added unit tests in `test_make_shape.py` to verify behavior\n\n## Motivation\n\nClear validation improves robustness and prevents unexpected runtime\nerrors.\nIt also ensures consistent behavior when invalid inputs are provided.\n\n## Testing\n\n- Added new tests in `tests/python/relax/test_make_shape.py`\n- Verified tests run successfully locally\n\nPlease let me know if any modifications or improvements are required."
    },
    {
      "commit": "1499bdae5950b7bcaa585df4e736b894d11768f1",
      "tree": "1cbf04af5cc055af8c00b8ae635527007dc0e2d3",
      "parents": [
        "7a41af002bf1f77de7b22a7eb06333fa4ed0ceac"
      ],
      "author": {
        "name": "Masahiro Hiramori",
        "email": "contact@mshr-h.com",
        "time": "Wed Mar 11 01:29:38 2026 +0900"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 10 12:29:38 2026 -0400"
      },
      "message": "[Relax][PyTorch] Fix crash on dynamic shapes with identity slice in ExportedProgram importer (#18903)\n\nFixes `TypeError: \u0027NoneType\u0027 object is not iterable` when importing\nmodels with dynamic batch dimensions that contain identity slices (e.g.,\n`x[:, :H, :W, :]` on a dynamic batch dim).\n\n**Root cause:** `aten.slice.Tensor(x, 0, 0, INT_MAX)` (an identity slice\non a dynamic dim `s`) produces a result with shape `[T.min(INT_MAX, s),\n...]` instead of `[s, ...]`. When this is combined with the original\ntensor via `add`, TVM cannot unify the shapes, resulting in\n`struct_info.shape \u003d None`. Any subsequent `view`/`reshape` then crashes\ncalling `list(None)`.\n\nThis pattern appears in models like `swin_t`, where shifted window\nattention crops padded features with `x[:, :H, :W, :].contiguous()`.\n\n**Changes:**\n- `exported_program_translator.py`: Skip `strided_slice` for identity\nslices (`start\u003d0, end\u003e\u003dINT_MAX, step\u003d1`) and return the input tensor\ndirectly.\n- `base_fx_graph_translator.py`: Guard the identity-reshape check in\n`_reshape` against `None` shape."
    },
    {
      "commit": "7a41af002bf1f77de7b22a7eb06333fa4ed0ceac",
      "tree": "baab14546e88589c790be99d251b47e9c8638ecc",
      "parents": [
        "728eb1517f1877d23b71edf91ebd0a288960503b"
      ],
      "author": {
        "name": "Gabe Guralnick",
        "email": "gnguralnick@gmail.com",
        "time": "Tue Mar 10 10:39:09 2026 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 10 11:39:09 2026 -0400"
      },
      "message": "[FIX][LLVM] Use isCPUStringValid for mcpu validation instead of enumerating processor descriptions (#18884)\n\n## Summary\n\nFix false rejection of `apple-m1`, `apple-m2`, and `apple-m3` as LLVM\nCPU names when building TVM with LLVM 22+.\n\n## Behavior\n\nAfter following the [installation from source\ninstructions](https://tvm.apache.org/docs/install/from_source.html) and\nbuilding against LLVM 22, every `import tvm` produces spurious error\nmessages:\n\n```\nError: Using LLVM 22.1.0 with `-mcpu\u003dapple-m1` is not valid in `-mtriple\u003darm64-apple-macos`, using default `-mcpu\u003dgeneric`\nError: Using LLVM 22.1.0 with `-mcpu\u003dapple-m2` is not valid in `-mtriple\u003darm64-apple-macos`, using default `-mcpu\u003dgeneric`\n```\n\nThese are triggered by the Metal target tag registrations in\n`python/tvm/target/tag_registry/metal.py`, which use `apple-m1` and\n`apple-m2` as the host `-mcpu`. The CPUs are silently downgraded to\n`generic`.\n\n## Root cause\n\nLLVM 22 reorganized its AArch64 processor table. `apple-m1` through\n`apple-m3` are now CPU **aliases** — fully valid and accepted by\n`createTargetMachine` and `isCPUStringValid()`, but no longer returned\nby `MCSubtargetInfo::getAllProcessorDescriptions()`.\n\nTVM\u0027s `LLVMTargetInfo` constructor validates `-mcpu` by enumerating\n`getAllProcessorDescriptions()` and checking membership, so it misses\nalias-only names.\n\n## Fix\n\nReplace the enumeration-based check with a new `IsValidCPU()` method\nthat uses `MCSubtargetInfo::isCPUStringValid()`, which correctly handles\nboth primary names and aliases. This API has been available since at\nleast LLVM 7, well before TVM\u0027s minimum supported version.\n\n## Validation\n\n- Built and tested on macOS (Apple Silicon) with LLVM 22.1.0\n- `python -c \"import tvm; print(tvm.__file__)\"` produces clean output\nwith no error messages\n\n---------\n\nCo-authored-by: Gabriel Guralnick \u003cgabriel@imbue.com\u003e"
    },
    {
      "commit": "728eb1517f1877d23b71edf91ebd0a288960503b",
      "tree": "5eb1ffdbab3c56ca4aa499b46abd1080c1663e4f",
      "parents": [
        "87bd8af37d17b3dffaf6b70ce1f43bb983b72939"
      ],
      "author": {
        "name": "Ruihang Lai",
        "email": "ruihangl@cs.cmu.edu",
        "time": "Mon Mar 09 13:58:43 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 09 13:58:43 2026 -0400"
      },
      "message": "[Build] Fix version regex to anchor at line start in pyproject.toml (#18892)\n\nAdd ^ anchor to the version regex so it matches only the top-level\n`version \u003d \"...\"` instead of all three occurrences, which caused\nhit_count \u003d\u003d 3 and a RuntimeError in sync_version."
    },
    {
      "commit": "87bd8af37d17b3dffaf6b70ce1f43bb983b72939",
      "tree": "cc905c1b9a4ac10fca61917902dc9429a7295fbd",
      "parents": [
        "a89b9f288014917339ba04de412a748c9cd82b58"
      ],
      "author": {
        "name": "Tianqi Chen",
        "email": "tqchen@users.noreply.github.com",
        "time": "Mon Mar 09 08:09:06 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 09 08:09:06 2026 -0400"
      },
      "message": "[TVMScript] Remove T.Bind backward-compat alias (#18891)\n\n## Summary\n- Remove `Bind \u003d bind` backward-compat alias from `ir.py`\n- Remove `\"Bind\"` from `__all__` exports\n- Follows #18889 which renamed `T.Bind` → `T.bind`\n\n## Test plan\n- [x] tvmscript roundtrip/printer/ir_builder tests pass (232 passed)\n- [x] pre-commit lint passes"
    },
    {
      "commit": "a89b9f288014917339ba04de412a748c9cd82b58",
      "tree": "ad851b9e0547a444fa422a84d5df7355d6b6e4f7",
      "parents": [
        "5d3b525675937be6c178b3f358e94e41ca080df8"
      ],
      "author": {
        "name": "YinHanke",
        "email": "hankeyin@gmail.com",
        "time": "Mon Mar 09 06:39:21 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Mar 08 18:39:21 2026 -0400"
      },
      "message": "[TOPI] Reject non-float inputs for inverse unary math ops (#18880)\n\n## Summary\n\nReject non-float inputs for inverse trigonometric and hyperbolic unary\nops in TOPI.\n\n## Changes\n\n- add a shared floating-point dtype check for inverse unary math ops in\nTOPI\n- apply the check to `topi.acos`, `topi.acosh`, `topi.asin`,\n`topi.asinh`, and `topi.atanh`\n- add TE tests covering integer-input rejection for these ops\n- add regression tests covering successful LLVM build for both `float32`\nand `bfloat16`\n\n## Validation\n\n- `tests/python/te/test_te_create_primfunc.py -k \u0027topi_float_unary\u0027`\n- local repro now fails early with a clear `TypeError` for integer\ninputs\n- local regression check confirms the valid `float32` and `bfloat16`\npaths still compile with LLVM\n\n## Issue\n\nFixes #18729"
    },
    {
      "commit": "5d3b525675937be6c178b3f358e94e41ca080df8",
      "tree": "3a190a87e21d2235951e0888918e633ec65f9a91",
      "parents": [
        "f83cebb54c50718fa5f97835172680d5ee25d6a8"
      ],
      "author": {
        "name": "YinHanke",
        "email": "hankeyin@gmail.com",
        "time": "Mon Mar 09 06:35:14 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Mar 08 18:35:14 2026 -0400"
      },
      "message": "[TIR] Reject non-floating inputs for trig unary ops (#18879)\n\n## Summary\n\nReject non-floating inputs for trig-style TIR unary ops.\n\n## Changes\n\n- reject non-floating inputs for trig-style TIR unary ops such as `tan`,\n`sin`, and `cos`\n- add the same dtype check in the Python TIR wrapper so\n`topi.tan(int32)` fails early with a clear `TypeError`\n- add regression tests for `tvm.tir.tan(int32)` and `topi.tan(int32)`\n\n## Validation\n\n- `tests/python/tir-base/test_tir_constructor.py -k\n\u0027math_unary_constructor_requires_float_dtype or\ntopi_tan_requires_float_dtype\u0027 -q`\n- local repro for the original `where -\u003e tan(int32)` case now fails\nearly with `TypeError`\n- verified `topi.tan(float32)` still builds with `target\u003d\"llvm\"`\n\n## Issue\n\nFixes #18769"
    },
    {
      "commit": "f83cebb54c50718fa5f97835172680d5ee25d6a8",
      "tree": "47dc0c6847707eab26d783e794f4e40c941533e8",
      "parents": [
        "72de1226765e5095e218a48ed042d4e23cc21cc6"
      ],
      "author": {
        "name": "Tianqi Chen",
        "email": "tqchen@users.noreply.github.com",
        "time": "Sun Mar 08 18:29:24 2026 -0400"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sun Mar 08 18:29:24 2026 -0400"
      },
      "message": "[TVMScript] Normalize T.Bind to T.bind for statement builder convention (#18889)\n\n## Summary\n- Rename `T.Bind` (capitalized) to `T.bind` (lowercase) to match\nTVMScript naming convention: statement builders use lowercase\n(`T.evaluate`, `T.buffer_store`, `T.bind`), expression constructors use\ncapitalized (`T.Cast`, `T.Select`, `T.Let`)\n- Keep `Bind \u003d bind` backward-compat alias\n- Update parser, printer references, and all test files\n\n## Test plan\n- [x] tvmscript tests (771 passed)\n- [x] tir-transform tests (346 passed)\n- [x] tir-base tests (224 passed)\n- [x] pre-commit lint passes"
    },
    {
      "commit": "72de1226765e5095e218a48ed042d4e23cc21cc6",
      "tree": "34086d27ccb5035ed76d5141f2e7547b0920b2db",
      "parents": [
        "717c822a800c9a8c699f89e01889da5f74ddcd72"
      ],
      "author": {
        "name": "Tianqi Chen",
        "email": "tqchen@users.noreply.github.com",
        "time": "Sat Mar 07 18:38:50 2026 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Mar 07 18:38:50 2026 -0500"
      },
      "message": "[TIR][REFACTOR] Revamp Common Subexpression Elimination (#18886)\n\n## Summary\n\nThis PR rebuilds TIR Common Subexpression Elimination (CSE) using\na two-phase architecture:\n\n- **Phase 1 — CSEPlanner**: Read-only visitor that builds a scope tree\nand expression DAG. Computes a plan (InsertBeforeTable + ExprRemapTable)\nin a single pass using shallower-first processing with repr propagation\n— no cascade loop needed.\n- **Phase 2 — CSERewriter**: Mechanical mutator that inserts\n`Bind(cse_var, expr)` statements and substitutes expressions per the\nplan.\n\nKey improvements over the old implementation:\n- **Simpler architecture**: Two clean classes (planner + rewriter)\ninstead of interleaved analysis/mutation\n- **No cascade loop**: Shallower-first processing with repr propagation\nresolves all CSE opportunities in one plan + one rewrite\n- **Incremental DAG construction**: Expression depth, children, and\nconsumed counts computed during bottom-up scan — no separate traversals\n- **No single-use bindings**: Consumed count tracking avoids introducing\nbindings that would only be used once\n- **Unified insertion via VisitStmt**: SeqStmt flattening handles all\ninsertion contexts uniformly\n\nOther changes:\n- Rename `CommonSubexprElimTIR` → `CommonSubexprElim`, remove\n`enable_cse_tir` and `identify_equiv_terms` params\n- Move old CSE tools (used by cache_index) to\n`cache_index_helpers.{cc,h}`\n- Remove unused `arith.detect_common_subexpr` API\n- Add `T.bind` as lowercase alias for `T.Bind`"
    },
    {
      "commit": "717c822a800c9a8c699f89e01889da5f74ddcd72",
      "tree": "cd614e48ce6c0ad8c9d2a2a61bb5599a0fe15def",
      "parents": [
        "9dfa116c942180e90092505d679b422ff3073410"
      ],
      "author": {
        "name": "Tianqi Chen",
        "email": "tqchen@users.noreply.github.com",
        "time": "Sat Mar 07 13:44:54 2026 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Mar 07 13:44:54 2026 -0500"
      },
      "message": "[FIX] Fix cumsum kernel sblock_alloc_buffer for non-sblock buffer (#18887)\n\n## Summary\n\n- Fix `gpu_2d_continuous_cumsum` using `T.sblock_alloc_buffer` for `Tmp`\nbuffer that is used across multiple kernel launches (not within a single\nsblock). Changed to `T.alloc_buffer`.\n- `T.sblock_alloc_buffer` places the buffer in SBlock metadata, making\nsubsequent references to buffer dimensions (used by `ceil_log2`)\nundefined after the AllocBuffer/DeclBuffer refactor.\n\nFixes #18885"
    },
    {
      "commit": "9dfa116c942180e90092505d679b422ff3073410",
      "tree": "c719ab5a8298e9fa16194f6af389aa6aef571a90",
      "parents": [
        "c0a305dfeb0848ccd06763753eb11ace8d72e7e6"
      ],
      "author": {
        "name": "Miti",
        "email": "biniqi.ardit@kumadigital.com",
        "time": "Sat Mar 07 17:25:54 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Sat Mar 07 11:25:54 2026 -0500"
      },
      "message": "[Metal] Batched command dispatch and staging buffer pool (#18877)"
    },
    {
      "commit": "c0a305dfeb0848ccd06763753eb11ace8d72e7e6",
      "tree": "1f6db17dce32b82a535c8200a383df773aa6c63a",
      "parents": [
        "17d1a289004243ae3bf6cef88a456135229289cd"
      ],
      "author": {
        "name": "Masahiro Hiramori",
        "email": "contact@mshr-h.com",
        "time": "Sat Mar 07 09:39:34 2026 +0900"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 06 19:39:34 2026 -0500"
      },
      "message": "[TARGET] Fix round-trip reconstruction of targets with canonicalizer-generated `feature.*` attrs (#18883)\n\nFix #18882 \n\n`TargetNode::ToConfig()` exports all target attrs, including derived\n`feature.*` fields set by target canonicalizers. However,\n`TargetInternal::FromConfig()` rejects these keys during schema\nvalidation because they are not declared in the target kind schema. This\nbreaks round-tripping exported configs through `Target(config)`.\n\nThis PR strips `feature.*` keys from the config before\n`ConfigSchema::Resolve`, then merges them back afterward. Canonicalizer\noutput is authoritative — if the canonicalizer re-emits a `feature.*`\nkey, it overwrites the preserved value. Unknown non-`feature.*` keys\ncontinue to fail validation as before.\n\nChanges:\n- src/target/target.cc: Extract and re-merge `feature.*` keys around\nschema resolution in `FromConfig()`\n- tests/cpp/target_test.cc: Add tests for single-target round-trip,\nnested-host round-trip, and continued rejection of unknown non-feature\nkeys\n\n---------\n\nCo-authored-by: gemini-code-assist[bot] \u003c176961590+gemini-code-assist[bot]@users.noreply.github.com\u003e"
    },
    {
      "commit": "17d1a289004243ae3bf6cef88a456135229289cd",
      "tree": "f9c241f282c41460e3d47da6afd3d41fc5bac4ed",
      "parents": [
        "689d2b51b2baf25b48704f58d68428f378152061"
      ],
      "author": {
        "name": "Tianqi Chen",
        "email": "tqchen@users.noreply.github.com",
        "time": "Fri Mar 06 08:08:16 2026 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 06 08:08:16 2026 -0500"
      },
      "message": "[FIX][Adreno] Replace AllocBuffer with Bind in texture alloc injection (#18881)\n\n## Summary\n\n- Fix `inject_texture_alloc.cc` to replace `AllocBuffer` with just a\n`Bind(nd_mem_alloc_with_scope)`, consistent with `LowerVtcmAlloc`\n- Previously kept a redundant `AllocBuffer` alongside the `Bind` in a\n`SeqStmt`\n\nFollow-up to #18876."
    },
    {
      "commit": "689d2b51b2baf25b48704f58d68428f378152061",
      "tree": "a2aee19f53432cc7af053853f13c7541fb5d71ba",
      "parents": [
        "14e41c681ac7e65af7e1118f86c55af7f2834043"
      ],
      "author": {
        "name": "Tianqi Chen",
        "email": "tqchen@users.noreply.github.com",
        "time": "Fri Mar 06 06:47:20 2026 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Fri Mar 06 06:47:20 2026 -0500"
      },
      "message": "[REFACTOR][TIR] Remove body from AllocBuffer and DeclBuffer (#18876)\n\n## Summary\n\n- Remove `body` field from `AllocBufferNode` and `DeclBufferNode`,\nmaking them flat statements consistent with `Bind`\n- Buffer scope extends to end of enclosing scope via flat `SeqStmt`\nsemantics\n- 60 files changed across core IR, codegen backends, transforms, script\nIR builder, and tests\n\n## Test plan\n\n- All existing test suites pass (tir-transform, tir-base, tvmscript,\ns_tir, codegen, C++)"
    },
    {
      "commit": "14e41c681ac7e65af7e1118f86c55af7f2834043",
      "tree": "1dd8d4af7bc74cb7a39c3d0cf9b97af0c03ebd63",
      "parents": [
        "419a8c861e6766dbbde5cd184fb77651f2b51587"
      ],
      "author": {
        "name": "YinHanke",
        "email": "hankeyin@gmail.com",
        "time": "Fri Mar 06 12:38:32 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 05 23:38:32 2026 -0500"
      },
      "message": "[Relax][ONNX] Support dynamic repeats for Tile (#18878)\n\n## Summary\n\nSupport dynamic `repeats` for ONNX Tile in the Relax frontend.\n\n## Changes\n\n- add a dynamic Tile conversion path for ONNX when `repeats` is a graph\ninput\n- expose `topi.dyn_tile` to the Python/packed TOPI interface\n- add frontend tests for dynamic `repeats`\n\n## Validation\n\n- `tests/python/relax/test_frontend_onnx.py -k test_tile_dynamic_repeats\n-q`\n- local end-to-end repro matches ONNX Runtime\n\n## Issue\nFixes #18752"
    },
    {
      "commit": "419a8c861e6766dbbde5cd184fb77651f2b51587",
      "tree": "f3d446feaa8dd92e27713bab6133d7ed8c0e83b0",
      "parents": [
        "dd7e260656c9556ffedfa5a8e5193f763778a9ec"
      ],
      "author": {
        "name": "Miti",
        "email": "biniqi.ardit@kumadigital.com",
        "time": "Fri Mar 06 01:13:09 2026 +0100"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 05 19:13:09 2026 -0500"
      },
      "message": "Batched GPU dispatch and object caching for WebGPU runtime (#18871)\n\n## Summary\n- Batch compute dispatches into a single GPUCommandEncoder, flushing on\nsync/readback instead of per-dispatch submit to reduce JS↔GPU transition\noverhead during LLM decode\n- Cache uniform buffers (FIFO/512), bind groups (FIFO/256), shape\ntuples, and pool MAP_READ staging buffers to eliminate redundant GPU\nobject creation\n- Fix padding self-assignment bug in `deviceCopyToGPU`"
    },
    {
      "commit": "dd7e260656c9556ffedfa5a8e5193f763778a9ec",
      "tree": "441e7d0888c9b4041b3a99bfe678ae9228b6287d",
      "parents": [
        "079e4af391031e24fcdfd8a4e8a752282792d3a9"
      ],
      "author": {
        "name": "Zhengke Zhou",
        "email": "zhengke.zhou.dev@gmail.com",
        "time": "Thu Mar 05 21:55:41 2026 +0800"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 05 08:55:41 2026 -0500"
      },
      "message": "[chore] Update docker/README.md documentation and fix links (#18875)\n\n## Summary\n- Fix Markdown link syntax for build.sh.\n- Correct docker/bash.sh usage examples to use proper image names and\nshortcuts.\n\n---------\n\nCo-authored-by: gemini-code-assist[bot] \u003c176961590+gemini-code-assist[bot]@users.noreply.github.com\u003e"
    },
    {
      "commit": "079e4af391031e24fcdfd8a4e8a752282792d3a9",
      "tree": "d866199b55178c112083ae8d93a62f855da307c9",
      "parents": [
        "969fad363be13375a4f4ecbc6a5fb2030e8e3f41"
      ],
      "author": {
        "name": "Tianqi Chen",
        "email": "tqchen@users.noreply.github.com",
        "time": "Thu Mar 05 08:55:02 2026 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Thu Mar 05 08:55:02 2026 -0500"
      },
      "message": "[REFACTOR][TIR] Rename LetStmt to Bind and flatten to sequential semantics (#18874)\n\n## Summary\n\nRename `LetStmtNode`/`LetStmt` to `BindNode`/`Bind` and remove the\n`body` field.\nThe variable defined by `Bind(var, value)` is now visible in all\nsubsequent\nstatements within the same enclosing scope, rather than being scoped to\na nested body.\n\nThis flattens deeply nested let-chains into sequential\n`SeqStmt([Bind(...), Bind(...), ...])`,\nmaking the IR easier to read, transform, and analyze.\n\n## Key Changes\n\n- **New `BindNode`**: `{var, value}` — no body field. Variable scope is\nthe enclosing\n  statement\u0027s body (For, IfThenElse, AllocBuffer, etc.)\n- **ScopeStack pattern**: Passes that need scope-aware cleanup\n(ConvertSSA, CSE,\ntir_visitor_with_path) use `ScopeStack` instead of manual save/restore\nor RAII wrappers\n- **All passes migrated**: 89 files updated across codegen backends, TIR\ntransforms,\n  S-TIR transforms, analyses, TVMScript printer/parser/ir_builder"
    },
    {
      "commit": "969fad363be13375a4f4ecbc6a5fb2030e8e3f41",
      "tree": "e32a9606a92a6484d718620227565a6c89503f42",
      "parents": [
        "0fba1606be11fc2612e8cbeecdeaa299b36faa2c"
      ],
      "author": {
        "name": "Tianqi Chen",
        "email": "tqchen@users.noreply.github.com",
        "time": "Wed Mar 04 22:06:03 2026 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 04 22:06:03 2026 -0500"
      },
      "message": "[TIR] Add VisitBufferDef/VisitBufferUse to base StmtVisitor/StmtMutator (#18873)\n\n"
    },
    {
      "commit": "0fba1606be11fc2612e8cbeecdeaa299b36faa2c",
      "tree": "ff120c401b92a59cd7cbd34513d0a1dc6484424d",
      "parents": [
        "21e52254842bf624ab8cbc3c35c65704d89c6471"
      ],
      "author": {
        "name": "Tianqi Chen",
        "email": "tqchen@users.noreply.github.com",
        "time": "Wed Mar 04 11:59:20 2026 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 04 11:59:20 2026 -0500"
      },
      "message": "[REFACTOR][TIR] Introduce AllocBuffer and phase out Allocate+DeclBuffer (#18865)\n\n## Summary\n\nThis PR introduces `AllocBufferNode`/`AllocBuffer` as a single TIR\nstatement that both allocates memory and declares a buffer into scope.\nThis replaces the previous pattern of `Allocate(var, dtype, shape, cond,\nDeclBuffer(buf, body))` with the simpler `AllocBuffer(buf, body)`.\n\n### Main changes\n\n- **New IR node** `AllocBufferNode` with fields `{buffer, annotations,\nbody}` — same semantics as `DeclBuffer` but also allocates memory\n- **TVMScript**: `T.alloc_buffer(shape, dtype, scope)` now emits\n`AllocBuffer` directly (statement-level allocation).\n`T.sblock_alloc_buffer(...)` for SBlock-level buffer allocation (full\nparameter set)\n- **All codegen backends** (C, CUDA, Metal, OpenCL, WebGPU, LLVM, NVPTX,\nAMDGPU, SPIR-V) updated to handle `AllocBufferNode`\n- **All TIR transforms** (storage_rewrite, flatten_buffer,\nvectorize_loop, lower_warp_memory, etc.) updated\n- **All S-TIR transforms** (compact_buffer_region, merge_shared_memory,\ninject_double_buffer, etc.) updated\n- **Removed `AllocateNode`** entirely — `AllocBuffer` is now the sole\nallocation primitive\n- **Removed `AllocDescriptor`** from merge_shared_memory_allocations —\nuses `Buffer` objects directly\n- **Added `AllocBuffer::ConstantAllocationSize()`** inline helper method\n\n### Design rationale\n\nThe old `Allocate + DeclBuffer` pair was a historical artifact:\n`AllocateNode` stored raw fields (`buffer_var`, `dtype`, `extents`,\n`condition`) separate from the `Buffer` object, requiring pattern\nmatching (`IsAllocateDeclBufferPattern`) to reconstruct the buffer\nassociation. 
`AllocBuffer` unifies this into a single node with a proper\n`Buffer` reference, simplifying codegen backends and transform passes.\n\n225 files changed, ~3500 insertions/deletions (net near-zero, mostly\nmechanical migration).\n\n## Test plan\n\n- [x] All TIR base tests pass\n- [x] All TIR transform tests pass\n- [x] TVMScript roundtrip tests pass\n- [x] S-TIR transform tests pass\n- [x] Codegen tests pass\n- [x] All-platform minimal tests pass\n- [x] C++ functor tests pass\n- [x] Pre-commit clean (clang-format, ruff, etc.)"
    },
    {
      "commit": "21e52254842bf624ab8cbc3c35c65704d89c6471",
      "tree": "8ff0acf966898ab70f3a33185544b07d06d2ddd5",
      "parents": [
        "75f858973273ea313a103c1375b2f9fd52778d7d"
      ],
      "author": {
        "name": "Siva",
        "email": "quic_sivb@quicinc.com",
        "time": "Wed Mar 04 19:19:54 2026 +0530"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Wed Mar 04 08:49:54 2026 -0500"
      },
      "message": "[ADRENO] Revive and consolicate Adreno features (#18867)\n\nEnable opencl target for gpu tests.\nConsolidates all Adreno tests under tests/python/relax/backend/adreno\nChanges to CLML corresponding to recent changes on json codegen/runtime.\nDocker specification for Adreno (ci_gpu + Android SDK, Gradle)."
    },
    {
      "commit": "75f858973273ea313a103c1375b2f9fd52778d7d",
      "tree": "3b5cced73f5c6cc0c7823ba3731da3246c5d1e12",
      "parents": [
        "521440ea8e025492b7a81ca906be5016b8ef9095"
      ],
      "author": {
        "name": "Masahiro Hiramori",
        "email": "contact@mshr-h.com",
        "time": "Tue Mar 03 15:04:24 2026 +0900"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Tue Mar 03 15:04:24 2026 +0900"
      },
      "message": "[CI] Update images to `20260301-134651-63f099ad` (#18827)\n\n"
    },
    {
      "commit": "521440ea8e025492b7a81ca906be5016b8ef9095",
      "tree": "1e8c4ecd44d4b3410720559c75e0cf95bee7052c",
      "parents": [
        "c481950807791f0d3c9e005381f76555b0ceb5aa"
      ],
      "author": {
        "name": "Tianqi Chen",
        "email": "tqchen@users.noreply.github.com",
        "time": "Mon Mar 02 14:03:20 2026 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 02 14:03:20 2026 -0500"
      },
      "message": "[REFACTOR][TIR] Cleanup AttrStmt attributes (#18862)\n\n## Summary\n\n- Phase out 12 unused `tir::attr` constants (`scan_*`, `channel_*`,\n`pipeline_*`, `buffer_bind_scope`, `coproc_*`, `loop_scope`) and remove\ntheir dead code paths\n- Move 11 S-TIR-owned attributes (`async_*`, `double_buffer_*`,\n`fragment_*`, `pragma_loop_partition_hint`, `reduce_scope`,\n`virtual_thread`) from `tir::attr` to `s_tir::attr`\n- Alphabetize the remaining 15 `tir::attr` constants"
    },
    {
      "commit": "c481950807791f0d3c9e005381f76555b0ceb5aa",
      "tree": "51082e1739d62aa2a7551d56096d32c7199d2ec2",
      "parents": [
        "6627296c69a963635a5ab6d7277c9c3a3e9ec88d"
      ],
      "author": {
        "name": "Tianqi Chen",
        "email": "tqchen@users.noreply.github.com",
        "time": "Mon Mar 02 12:49:42 2026 -0500"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "time": "Mon Mar 02 12:49:42 2026 -0500"
      },
      "message": "[Relax][Refactor] Phase out FewShotTuning (#18864)\n\n## Summary\n\n- Remove `FewShotTuning` pass from Relax transform (C++ implementation,\nPython bindings, and test file)\n- The pass is unused in the current codebase and can be safely removed\n\n## Files Changed\n\n- `include/tvm/relax/transform.h` — Remove declaration\n- `python/tvm/relax/transform/__init__.py` — Remove from imports\n- `python/tvm/relax/transform/transform.py` — Remove Python function\n- `src/relax/transform/few_shot_tuning.cc` — Delete (C++ implementation)\n- `tests/python/relax/test_transform_few_shot_tuning.py` — Delete (test\nfile)"
    }
  ],
  "next": "6627296c69a963635a5ab6d7277c9c3a3e9ec88d"
}
