
Open Deep Learning Compiler Stack

Documentation | Contributors | Community | Release Notes


Apache TVM is a compiler stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends.
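As a minimal sketch of that workflow (assuming a local TVM build with the LLVM backend enabled; the operator, shapes, and target string here are illustrative, not prescriptive), the snippet below defines a small Relay module, compiles it for a CPU target, and runs it with the graph executor:

```python
import numpy as np

import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Build a tiny Relay module: a single conv2d with 3x3 kernels.
data = relay.var("data", shape=(1, 3, 32, 32), dtype="float32")
weight = relay.var("weight", shape=(8, 3, 3, 3), dtype="float32")
conv = relay.nn.conv2d(data, weight, padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], conv))

# Compile for the local CPU; other backends use other target
# strings (e.g. "cuda" for NVIDIA GPUs).
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm")

# Execute the compiled module with the graph executor.
dev = tvm.cpu(0)
rt = graph_executor.GraphModule(lib["default"](dev))
rt.set_input("data", np.random.uniform(size=(1, 3, 32, 32)).astype("float32"))
rt.set_input("weight", np.random.uniform(size=(8, 3, 3, 3)).astype("float32"))
rt.run()
print(rt.get_output(0).numpy().shape)  # (1, 8, 32, 32)
```

In practice the Relay module would come from a framework importer (e.g. relay.frontend for ONNX, TensorFlow, or PyTorch models) rather than being built by hand; only the target string changes when retargeting the same model to a different backend.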

License

© Contributors. Licensed under the Apache-2.0 license.

Contribute to TVM

TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community. Check out the Contributor Guide.

Acknowledgement

We learned a lot from the following projects when building TVM.

  • Halide: Parts of TVM's TIR and arithmetic simplification module originate from Halide. We also learned from and adapted parts of Halide's lowering pipeline.
  • Loopy: the use of integer set analysis and its loop transformation primitives.
  • Theano: the design inspiration for the symbolic scan operator for recurrence.