[VTA][HotFix] Relay->VTA quantization fix (#4433)

* Relay -> VTA fix

* Set opt_level to 3 for quantization so that batchnorm is folded
3 files changed
README.md

Open Deep Learning Compiler Stack

Documentation | Contributors | Community | Release Notes


Apache TVM (incubating) is a compiler stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends.

License

© Contributors, licensed under the Apache-2.0 license.

Contribute to TVM

TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community. Check out the Contributor Guide.

Acknowledgement

We learned a lot from the following projects when building TVM.

  • Halide: TVM uses HalideIR as the data structure for arithmetic simplification and low-level lowering. We also learned from and adapted parts of the lowering pipeline from Halide.
  • Loopy: the use of integer set analysis and its loop transformation primitives.
  • Theano: the design inspiration of symbolic scan operator for recurrence.