| commit | 3667499e915e4f1a6f44e9992e0db6c3f0b2a748 | |
|---|---|---|
| author | abergeron <abergeron@gmail.com> | Fri Jan 10 19:24:52 2020 -0500 |
| committer | Wuwei Lin <wuwei@apache.org> | Fri Jan 10 19:24:52 2020 -0500 |
| tree | 8a7932a4031de079f141b3cab29a7f89d701c604 | |
| parent | 7ffd5c5b33a30df64ad826e2f5490d632ef7f305 | |
[Relay][TOPI] Fix meaning of conv2d_transpose output_padding parameter (#4318)

* Add output_padding to generic
* Add output_padding to the reference impl
* Add output_padding to arm_cpu
* Add output_padding to the test
* Add output_padding for cuda
* Add output_padding for x86
* Make use of the new output_padding argument in Relay
* Adjust conv2d_transpose Relay test
* Fix lint errors
* Fix the VTA declaration of conv2d_transpose
* Support output padding in conv2d_transpose
* Some output padding values will break the IR pass
* Fix new conv2d_transpose test
* Update tophub
* Fix conv1d output_padding too
* Fix the conv1d_transpose reference function
* Fix the cuda impl
* Fix the topi test for conv1d
* Update the versions in tophub.py

Co-authored-by: Thierry Moreau <tmoreau@octoml.ai>
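For context on what the fixed parameter means, here is a minimal sketch (plain Python, not TVM code; the helper name is hypothetical) of how `output_padding` enters the transposed-convolution output-shape arithmetic:

```python
def conv2d_transpose_out_size(in_size, kernel, stride, padding, output_padding=0):
    # Transposed convolution inverts the forward conv's shape arithmetic.
    # Because a strided forward conv maps several input sizes to the same
    # output size, output_padding adds extra rows/cols on one side of the
    # result to disambiguate which size is recovered (requires
    # 0 <= output_padding < stride).
    assert 0 <= output_padding < max(stride, 1)
    return (in_size - 1) * stride - 2 * padding + kernel + output_padding

# A forward conv with kernel 3, stride 2, padding 1 maps both a 7-wide and
# an 8-wide input to a 4-wide output; output_padding selects which one the
# transpose reproduces.
print(conv2d_transpose_out_size(4, kernel=3, stride=2, padding=1))                    # 7
print(conv2d_transpose_out_size(4, kernel=3, stride=2, padding=1, output_padding=1))  # 8
```

The commit's point is that `output_padding` must only enlarge the output, never shift where the padding is applied, which is why the reference implementation, the per-backend schedules, and the Relay lowering all had to agree on this formula.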
VTA (Versatile Tensor Accelerator) is an open-source deep learning accelerator, complemented by an end-to-end, TVM-based compiler stack.