Open, Modular, Deep Learning Accelerator


VTA Hardware Design Stack


VTA (Versatile Tensor Accelerator) is an open-source deep learning accelerator, complemented by an end-to-end, TVM-based compiler stack.

The key features of VTA include:

  • Generic, modular, open-source hardware.
    • Streamlined workflow to deploy to FPGAs.
    • Simulator support to prototype compilation passes on regular workstations.
  • Driver and JIT runtime for both the simulator and FPGA hardware back-ends.
  • End-to-end TVM stack integration.
    • Direct optimization and deployment of models from deep learning frameworks via TVM.
    • Customized and extensible TVM compiler back-end.
    • Flexible RPC support to ease deployment and program FPGAs with the convenience of Python.
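To illustrate the "generic, modular" point above: VTA's hardware parameters (target back-end, data-type widths, GEMM tile shape, on-chip buffer sizes) are driven by a single JSON configuration file rather than hard-coded into the design. The sketch below is an illustrative `vta_config.json` assuming the functional-simulator target; exact field names and default values may differ across VTA versions, so treat it as an example of the mechanism, not a reference configuration.

```json
{
  "TARGET": "sim",
  "HW_VER": "0.0.2",
  "LOG_INP_WIDTH": 3,
  "LOG_WGT_WIDTH": 3,
  "LOG_ACC_WIDTH": 5,
  "LOG_BATCH": 0,
  "LOG_BLOCK": 4,
  "LOG_UOP_BUFF_SIZE": 15,
  "LOG_INP_BUFF_SIZE": 15,
  "LOG_WGT_BUFF_SIZE": 18,
  "LOG_ACC_BUFF_SIZE": 17
}
```

Fields prefixed with LOG_ are log2 values (e.g. LOG_INP_WIDTH of 3 means 8-bit inputs). In principle, changing TARGET from the simulator to an FPGA board preset retargets the same design and compiler stack without touching model code, which is what the streamlined FPGA deployment workflow above relies on.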