commit    4b67daccb9764c332e00e21acad3d462878f9214
author    masahi <masahi129@gmail.com>    Sat Jul 31 23:20:24 2021 +0900
committer GitHub <noreply@github.com>    Sat Jul 31 10:20:24 2021 -0400
tree      f4a885ed58dc1a76883788480533c8d0434e4176
parent    28de742d5270d2df3beb1713538967bf8a6962dd
[CUDA] Support multiple TIR-level dynamic shared memory allocations (#8571)
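The commit title refers to supporting several TIR-level dynamic shared memory allocations in one CUDA kernel. In CUDA, a kernel gets only a single `extern __shared__` buffer sized at launch time, so a backend that allows multiple dynamic allocations typically merges them by assigning each one an aligned byte offset into that single buffer. The sketch below is a hypothetical, simplified illustration of that offset-packing step (the function name and signature are invented for this example; it is not TVM's actual implementation):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper: given the byte sizes of several dynamic shared-memory
// allocations, assign each an offset into one merged buffer, rounding each
// offset up to `align` bytes. Returns the per-allocation offsets and writes
// the total merged-buffer size (what the kernel launch would request) to
// `total`.
std::vector<size_t> PackSharedAllocations(const std::vector<size_t>& sizes,
                                          size_t align, size_t* total) {
  std::vector<size_t> offsets;
  size_t cur = 0;
  for (size_t s : sizes) {
    // Round the running offset up to the next multiple of `align`.
    cur = (cur + align - 1) / align * align;
    offsets.push_back(cur);
    cur += s;
  }
  *total = cur;
  return offsets;
}
```

Inside the generated kernel, each logical allocation would then be a pointer into the single `extern __shared__` array at its computed offset.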
Apache TVM is a compiler stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends.
© Contributors. Licensed under the Apache-2.0 license.
TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community. Check out the Contributor Guide.
We learned a lot from the following projects when building TVM.