[Relay][TOPI] Fix meaning of conv2d_transpose output_padding parameter (#4318)

* Add output_padding to generic

* Add output_padding to the reference impl

* Add output_padding to arm_cpu

* Add output_padding to the test

* Add output_padding for cuda

* Add output_padding for x86

* Make use of the new output_padding argument in Relay

* Adjust conv2d_transpose Relay test

* Fix lint errors

* Fix the VTA declaration of conv2d_transpose

* Support output padding in conv2d_transpose

* Some output_padding values will break an IR pass

* Fix the new conv2d_transpose test

* Update tophub

* Fix conv1d output_padding too

* Fix the conv1d_transpose reference function

* Fix the CUDA implementation

* Fix the TOPI test for conv1d

* Update the versions in tophub.py

Co-authored-by: Thierry Moreau <tmoreau@octoml.ai>
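
For context, output_padding resolves the output-shape ambiguity of a transposed convolution: the forward convolution's integer division maps several input sizes to the same output size, and output_padding selects among them by adding rows/columns on the bottom/right of the result. Below is a minimal sketch of the shape arithmetic only; the helper name is hypothetical and is not the TOPI API.

    # Hypothetical helper illustrating the output-shape arithmetic that
    # output_padding controls; not the actual TOPI implementation.
    def conv2d_transpose_out_size(in_size, kernel, stride, padding, output_padding):
        # The forward convolution computes
        #   out = (in + 2*padding - kernel) // stride + 1,
        # and the integer division loses information: several input sizes
        # map to the same output size. output_padding disambiguates which
        # size the transposed convolution produces.
        return (in_size - 1) * stride - 2 * padding + kernel + output_padding

    # For a 2x2 stride-2 transposed conv over a 4x4 input with no padding:
    assert conv2d_transpose_out_size(4, 2, 2, 0, 0) == 8  # default size
    assert conv2d_transpose_out_size(4, 2, 2, 0, 1) == 9  # one extra row/col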
README.md

VTA: Open, Modular, Deep Learning Accelerator Stack

VTA (Versatile Tensor Accelerator) is an open-source deep learning accelerator complemented with an end-to-end TVM-based compiler stack.

The key features of VTA include:

  • Generic, modular, open-source hardware
    • Streamlined workflow to deploy to FPGAs.
    • Simulator support to prototype compilation passes on regular workstations.
  • Driver and JIT runtime for both simulator and FPGA hardware back-end.
  • End-to-end TVM stack integration
    • Direct optimization and deployment of models from deep learning frameworks via TVM.
    • Customized and extensible TVM compiler back-end.
    • Flexible RPC support to ease deployment and program FPGAs with the convenience of Python (see the sketch after this list).
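
As a rough illustration of the RPC workflow, here is a minimal sketch of connecting to a board and reprogramming it from Python, assuming an RPC server is already running on the device; the host address and port are placeholders for your own setup.

    from tvm import rpc
    import vta

    # Connect to the RPC server running on the FPGA board
    # (address and port are placeholders).
    remote = rpc.connect("192.168.2.99", 9091)

    # Load the default VTA bitstream onto the FPGA, then reset the
    # JIT runtime so it matches the current VTA configuration.
    vta.program_fpga(remote, bitstream=None)
    vta.reconfig_runtime(remote)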

Learn more about VTA in the TVM documentation.