Open, Modular, Deep Learning Accelerator


VTA Hardware Design Stack


VTA (Versatile Tensor Accelerator) is an open-source deep learning accelerator complemented by an end-to-end, TVM-based compiler stack.

The key features of VTA include:

  • Generic, modular, open-source hardware
    • Streamlined workflow to deploy on FPGAs.
    • Simulator support to prototype compilation passes on regular workstations.
  • Driver and JIT runtime for both the simulator and FPGA hardware back-ends.
  • End-to-end TVM stack integration
    • Direct optimization and deployment of models from deep learning frameworks via TVM.
    • Customized and extensible TVM compiler back-end.
    • Flexible RPC support to ease deployment and program FPGAs with the convenience of Python.