commit    453d0c87b555afcce8797b8a63e155aa8f92a020
author    Tianqi Chen <tqchen@users.noreply.github.com>  Sun Jan 05 21:39:33 2020 -0800
committer GitHub <noreply@github.com>  Sun Jan 05 21:39:33 2020 -0800
tree      f49ee12b2295b0592923dae9b1e02f8ca1d56b83
parent    80f518a57bd9a1b59140d750413a7fb0c94a21a2
[REFACTOR][IR] Introduce SeqStmt to replace ir::Block (#4627)

* [REFACTOR][IR] Introduce SeqStmt to replace Block

  ir::Block was used to represent a sequence of Stmts in the original low-level IR. The nested ir::Block structure is not really friendly for recursive visits, especially when the statements are unrolled. This PR introduces a SeqStmt that directly stores a sequence of statements in an Array container. The new SeqStmt will be used as a replacement for the original Block structure.

* [REFACTOR] Migrate use of Block to SeqStmt.
* [REFACTOR] Remove Block
* Add more comments per yizhi's comment
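The motivation above can be sketched with a minimal (hypothetical) model: these are not TVM's actual classes, just stand-ins showing why a right-nested `Block` chain forces recursion merely to enumerate sibling statements, while a flat `SeqStmt` stores them in one array that can be iterated directly.

```python
# Hypothetical sketch (not TVM's real IR classes) contrasting the old nested
# ir::Block pairing with the flat SeqStmt structure this commit introduces.

class Stmt:
    pass

class Eval(Stmt):
    """Leaf statement evaluating a single value (illustrative stand-in)."""
    def __init__(self, value):
        self.value = value

class Block(Stmt):
    """Old style: a sequence is a nested chain of (first, rest) pairs."""
    def __init__(self, first, rest):
        self.first = first
        self.rest = rest

class SeqStmt(Stmt):
    """New style: a sequence is one node holding an array of statements."""
    def __init__(self, seq):
        self.seq = seq

def collect_block(stmt, out):
    # Enumerating siblings in a nested Block requires recursing the chain.
    if isinstance(stmt, Block):
        collect_block(stmt.first, out)
        collect_block(stmt.rest, out)
    else:
        out.append(stmt.value)

def collect_seq(stmt):
    # Enumerating a SeqStmt is a flat loop over the stored array.
    return [s.value for s in stmt.seq]

nested = Block(Eval(1), Block(Eval(2), Eval(3)))
flat = SeqStmt([Eval(1), Eval(2), Eval(3)])

out = []
collect_block(nested, out)
print(out == collect_seq(flat) == [1, 2, 3])
```

Both traversals yield the statements in the same order; the flat form simply removes the bookkeeping that nesting imposes on every visitor.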
VTA (versatile tensor accelerator) is an open-source deep learning accelerator complemented with an end-to-end TVM-based compiler stack.