[BugFix] Align `tir.round` to ties-to-even across all backends (#19368)

## Problem

`tir.round` constant-folds using `std::nearbyint` (IEEE 754
ties-to-even), but every backend lowers it to the platform `round()`,
which rounds ties away from zero. As a result, compiled code can produce
different results from constant-folded code for midpoint values:

| Input | Constant-fold (ties-to-even) | Compiled (ties-away) |
|-------|-----|------|
| 0.5   | 0.0 | 1.0  |
| 2.5   | 2.0 | 3.0  |
| -0.5  | 0.0 | -1.0 |

This was identified as a follow-up to #19367 — see [this
comment](https://github.com/apache/tvm/pull/19367#issuecomment-4201800320).
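The two tie-breaking rules can be contrasted in plain Python, whose built-in `round()` already implements ties-to-even (the `round_ties_away` helper below is a hypothetical illustration of the old backend behavior, not TVM code):

```python
# Ties-to-even: halfway cases go to the nearest even integer.
# Python's built-in round() follows this rule, like std::nearbyint.
assert round(0.5) == 0
assert round(2.5) == 2
assert round(-0.5) == 0

import math

def round_ties_away(x: float) -> float:
    """Ties-away-from-zero, like the platform round() the backends used."""
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

assert round_ties_away(0.5) == 1.0
assert round_ties_away(2.5) == 3.0
assert round_ties_away(-0.5) == -1.0
```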

## Fix

Align all backends to use ties-to-even intrinsics, matching the
constant-folding behavior:

| Backend | Before | After |
|---------|--------|-------|
| LLVM/ROCm/Hexagon | `llvm::Intrinsic::round` | `llvm::Intrinsic::nearbyint` |
| NVPTX | `__nv_round[f]` | `__nv_nearbyint[f]` |
| CUDA | `round`/`roundf` | `nearbyint`/`nearbyintf` (f16/bf16 already used `hrint`) |
| Metal/OpenCL | `round` | `rint` |
| Vulkan/SPIR-V | `GLSLstd450Round` | `GLSLstd450RoundEven` |

Also fixes OpenCL codegen where `tir.nearbyint` was incorrectly mapped
to OpenCL `round()` instead of `rint()`.

Updates `op.h` documentation to explicitly state ties-to-even semantics
for both `round()` and `nearbyint()`.
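On CPU targets these intrinsics ultimately resolve to the libm functions of the same names, so the before/after difference can be reproduced directly from Python via `ctypes` (a sketch assuming a Unix libm and the default `FE_TONEAREST` rounding mode):

```python
import ctypes
import ctypes.util

# Load the C math library (assumes a Unix system where libm is findable).
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")
for fn in (libm.round, libm.nearbyint):
    fn.restype = ctypes.c_double
    fn.argtypes = [ctypes.c_double]

# Before: backends lowered tir.round to round() -- ties away from zero.
assert libm.round(2.5) == 3.0
assert libm.round(-0.5) == -1.0
# After: nearbyint() under the default rounding mode -- ties to even.
assert libm.nearbyint(2.5) == 2.0
assert libm.nearbyint(-0.5) == 0.0
```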

## Testing

```
python -m pytest tests/python/tir-base/test_tir_intrin.py -xvs
```

New `test_round_ties_to_even` verifies that the midpoint inputs `[0.5,
1.5, 2.5, 3.5, -0.5, -1.5, -2.5, -3.5]` produce ties-to-even results on
the LLVM backend. All 12 tests succeed (10 passed, 2 CUDA tests
skipped).
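The expected values for those midpoint inputs can be cross-checked against NumPy, whose `np.rint` also implements ties-to-even (illustrative only; the actual test runs through the TVM LLVM backend):

```python
import numpy as np

x = np.array([0.5, 1.5, 2.5, 3.5, -0.5, -1.5, -2.5, -3.5], dtype="float32")
# Ties-to-even: each halfway case rounds to the nearest even integer.
expected = np.array([0.0, 2.0, 2.0, 4.0, 0.0, -2.0, -2.0, -4.0], dtype="float32")
assert np.array_equal(np.rint(x), expected)
```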

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The most recent version focuses on a cross-level design with TensorIR as the tensor-level representation and Relax as the graph-level representation and Python-first transformations. The project's current design goal is to make the ML compiler accessible by enabling most transformations to be customizable in Python and bringing a cross-level representation that can jointly optimize computational graphs, tensor programs, and libraries. The project is also a foundation infra for building Python-first vertical compilers for domains, such as LLMs.