| commit | a54af64872c68913309541f6f30e75da3921ef77 | [log] [tgz] |
|---|---|---|
| author | Shushi Hong <820958424@qq.com> | Sun Sep 21 23:36:18 2025 -0400 |
| committer | GitHub <noreply@github.com> | Sun Sep 21 23:36:18 2025 -0400 |
| tree | 0c4344f09f2eb1660c424784e8646abc25609d4a | |
| parent | eb45a46d7004c05a0f613b087d7cf82c19ce6196 [diff] |
[Relax][Backend] Implement R.call_py_func operator for calling Python functions from compiled TVM (#18326)
This PR implements the `R.call_py_func` operator, which allows compiled
TVM Relax modules to call Python functions at runtime. This enables
integration between TVM's compiled code and Python through a
VM backend implementation.
#### Simple Usage with BasePyModule
```python
@I.ir_module
class MyModule(BasePyModule):
    @I.pyfunc
    def torch_relu(self, x):
        return torch.relu(x)

    @R.function
    def forward(x: R.Tensor((10,), "float32")) -> R.Tensor((10,), "float32"):
        return R.call_py_func("torch_relu", (x,), out_sinfo=R.Tensor((10,), "float32"))
```
#### Direct VM Backend Usage (Manual)
```python
# Manually register Python function with VM backend
register_func = tvm.get_global_func("vm.builtin.register_py_func")
register_func("my_func", my_python_function)
# Use in a Relax function (compiled to the VM backend)
@R.function
def test(x: R.Tensor((5,), "float32")) -> R.Tensor((5,), "float32"):
    return R.call_py_func("my_func", (x,), out_sinfo=R.Tensor((5,), "float32"))
# Manual cleanup (required for direct VM backend usage)
clear_func = tvm.get_global_func("vm.builtin.clear_py_func_registry")
clear_func()
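The register/look-up/clear flow above can be sketched as a plain-Python analogue of a named-function registry. This is an illustrative sketch only: the dictionary and helper names below are assumptions for exposition, not TVM's actual VM backend internals.

```python
# Illustrative plain-Python analogue of a named-function registry.
# Names here (PY_FUNC_REGISTRY, register_py_func, ...) are hypothetical,
# chosen to mirror the builtin names used above; they are not TVM internals.

PY_FUNC_REGISTRY = {}

def register_py_func(name, func):
    """Register a Python callable under a string name."""
    PY_FUNC_REGISTRY[name] = func

def call_py_func(name, args):
    """Look up a registered callable by name and invoke it with args."""
    if name not in PY_FUNC_REGISTRY:
        raise KeyError(f"Python function '{name}' is not registered")
    return PY_FUNC_REGISTRY[name](*args)

def clear_py_func_registry():
    """Drop all registrations, mirroring the manual-cleanup step above."""
    PY_FUNC_REGISTRY.clear()

# Usage mirroring the snippet above
register_py_func("my_func", lambda xs: [v * 2.0 for v in xs])
result = call_py_func("my_func", ([1.0, 2.0, 3.0],))
clear_py_func_registry()
```

The explicit `clear_py_func_registry` step matters in the manual flow because the registry holds references to the registered callables until it is cleared.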
```
Apache TVM is a compiler stack for deep learning systems. It is designed to close the gap between the productivity-focused deep learning frameworks and the performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation for different backends.
TVM is licensed under the Apache-2.0 license.
Check out the TVM Documentation site for installation instructions, tutorials, examples, and more. The Getting Started with TVM tutorial is a great place to start.
TVM adopts the Apache committer model. We aim to create an open-source project maintained and owned by the community. Check out the Contributor Guide.
TVM started as a research project for deep learning compilation. The first version of the project benefited greatly from earlier projects in the community.
Since then, the project has gone through several rounds of redesigns. The current design is also drastically different from the initial design, following the development trend of the ML compiler community.
The most recent version focuses on a cross-level design, with TensorIR as the tensor-level representation, Relax as the graph-level representation, and Python-first transformations. The project's current design goal is to make the ML compiler accessible by enabling most transformations to be customized in Python and by providing a cross-level representation that can jointly optimize computational graphs, tensor programs, and libraries. The project also serves as foundational infrastructure for building Python-first vertical compilers for specific domains, such as LLMs.