[Frontend] Add Span filling for frontends to Relay (#9723)

* [Frontend] Add Span filling for frontends to Relay

* Add a common span-filling feature for the TF1/TF2, TFLite, and PyTorch frontends.
* Add test cases for span filling in each frontend.
* Expose Tuple and TupleGetItem to the Python side.
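The idea behind span filling can be sketched in plain Python. This is a hypothetical illustration, not the PR's actual code or TVM's real `tvm.ir.Span` API: each Relay expression produced from a source-framework node is tagged with a span naming that node, and when one source node lowers to several Relay ops, all of them share the same span.

```python
# Hypothetical sketch of span filling during frontend conversion.
# `Span` and `Expr` are simplified stand-ins for tvm.ir.Span and
# Relay expressions; the recursion mirrors how a converter would tag
# every freshly created sub-expression with its source node's name.

class Span:
    """Records the name of the source-framework node an expression came from."""
    def __init__(self, source_name):
        self.source_name = source_name

class Expr:
    """Simplified stand-in for a Relay expression."""
    def __init__(self, op_name, args=(), span=None):
        self.op_name = op_name
        self.args = list(args)
        self.span = span

def fill_span(expr, source_name):
    """Attach a span to `expr` and any not-yet-spanned sub-expressions.

    In a one-to-many conversion (one source node -> several Relay ops),
    every generated op should trace back to the same source node, so the
    recursion stops as soon as it meets an expression that already has
    a span (it belongs to an earlier source node).
    """
    if expr.span is None:
        expr.span = Span(source_name)
        for arg in expr.args:
            fill_span(arg, source_name)
    return expr

# Example: a single fully-connected source node lowering to dense + bias_add.
bias_add = Expr("nn.bias_add", args=[Expr("nn.dense")])
fill_span(bias_add, "fully_connected/dense_1")

print(bias_add.span.source_name)          # outer op's span
print(bias_add.args[0].span.source_name)  # inner op shares the same span
```

Stopping at already-spanned expressions is what keeps spans correct when converted subgraphs are reused as inputs to later nodes.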

* [Frontend] Add Span filling for frontends to Relay

* Fix lint errors
* Change the default string of scope_part in PyTorch
* Reorder the span position for one-to-many conversions

* [Frontend] Add Span filling for frontends to Relay

* Nit fixes
* Add a bool flag to control span printing
* Refactor the PyTorch get-span logic into a briefer form

* [Frontend] Add Span filling for frontends to Relay

* Add one more condition for the span filler
* Refine the format for PyTorch nodes without a scopeName

* [Frontend] Add Span filling for frontends to Relay

* Fix lint
13 files changed
README.md

Open Deep Learning Compiler Stack

Documentation | Contributors | Community | Release Notes


Apache TVM is a compiler stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends.

License

TVM is licensed under the Apache-2.0 license.

Getting Started

Check out the TVM Documentation site for installation instructions, tutorials, examples, and more. The Getting Started with TVM tutorial is a great place to start.

Contribute to TVM

TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community. Check out the Contributor Guide.

Acknowledgement

We learned a lot from the following projects when building TVM.

  • Halide: Part of TVM's TIR and arithmetic simplification module originates from Halide. We also learned from and adapted parts of the lowering pipeline from Halide.
  • Loopy: use of integer set analysis and its loop transformation primitives.
  • Theano: the design inspiration for the symbolic scan operator for recurrence.