Open deep learning compiler stack for cpu, gpu and specialized accelerators


Open Deep Learning Compiler Stack

Documentation | Contributors | Community | Release Notes


Apache TVM (incubating) is a compiler stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends.


© Contributors. Licensed under the Apache-2.0 license.

Contribute to TVM

TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community. Check out the Contributor Guide.


Acknowledgement

We learned a lot from the following projects when building TVM.

  • Halide: Part of TVM's TIR and arithmetic simplification module originates from Halide. We also learned and adapted parts of the lowering pipeline from Halide.
  • Loopy: use of integer set analysis and its loop transformation primitives.
  • Theano: the design inspiration for the symbolic scan operator for recurrence.