commit 5012462ef88acfd6a84b3f28135b361a8788f257
Author:     Ruihang Lai <lairuihangdongdong@qq.com>
AuthorDate: Sat Jul 31 13:04:02 2021 +0800
Commit:     GitHub <noreply@github.com>
CommitDate: Fri Jul 30 22:04:02 2021 -0700
Tree:       8f2c5281bcd690e814649ab0801c5acc3a9e4032
Parent:     c8a892b66d22965c42df8384a3d8405d024ddf5f
[TensorIR][M2a] Reduction Factoring (RFactor) (#8544)

Co-authored-by: Junru Shao <junrushao1994@gmail.com>
Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
Co-authored-by: Hongyi Jin <3231950289@qq.com>
Co-authored-by: Wuwei Lin <wuwei@apache.org>
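The commit above introduces reduction factoring (rfactor) for TensorIR. As a rough intuition for what rfactor does, the sketch below is a minimal pure-Python analogy, not the TVM API: a single long reduction is rewritten into independent partial reductions (which a scheduler could parallelize) followed by a short final reduction over the partial buffer. All names here are illustrative.

```python
def reduce_sum(values):
    """Plain serial reduction: one long dependency chain."""
    total = 0
    for v in values:
        total += v
    return total


def rfactor_sum(values, factor):
    """Factored reduction, in the spirit of rfactor:
    stage 1 computes `factor` independent partial sums (each could run
    on its own thread), stage 2 reduces the small partial buffer."""
    # Stage 1: partial reductions over strided chunks.
    partial = [0] * factor
    for i, v in enumerate(values):
        partial[i % factor] += v
    # Stage 2: final reduction over the partial results.
    total = 0
    for p in partial:
        total += p
    return total


data = list(range(100))
assert reduce_sum(data) == rfactor_sum(data, 8) == 4950
```

Because addition is associative and commutative, the factored form computes the same result as the serial loop while exposing parallelism in stage 1; that is the correctness argument rfactor relies on for reductions in general.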
Documentation | Contributors | Community | Release Notes
Apache TVM is a compiler stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends.
© Contributors. Licensed under the Apache-2.0 license.
TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community. Check out the Contributor Guide.
We learned a lot from the following projects when building TVM.