Release Note

The Open Neural Network Compiler (ONNC) is a compiler that connects the Open Neural Network Exchange Format (ONNX) to every deep learning accelerator (DLA).

New Features

NVDLA Backend

  • The first open-source compiler backend that supports the NVIDIA Deep Learning Accelerator (NVDLA)
  • Initial release of nv_full hardware configuration support
  • Support status for the models in the ONNX model zoo — ONNC can compile 6 models and run them successfully on the NVDLA virtual platform; 2 models are not supported by the nv_full configuration, and the other 4 models require support for additional operators.

Framework Support

  • Interpreter Interface — A target backend can now provide its own customized interpreter (see the first sketch after this list).
  • Vanilla Backend — A template for porting a new backend (see the second sketch after this list).
  • Statistic API
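
The interpreter interface lets a target backend execute compute operators with its own semantics, typically by overriding per-operator handlers. The following is a minimal, self-contained sketch of that visitor pattern; the type names (BaseInterpreter, ConvOp, ReluOp, MyBackendInterpreter) are hypothetical stand-ins, not ONNC's actual interpreter API.

    #include <cstdio>

    // Hypothetical operator types; ONNC's real compute-operator classes differ.
    struct ConvOp { /* weights, strides, ... */ };
    struct ReluOp { };

    // A visitor-style interpreter: one visit method per compute operator,
    // each with a default (reference) implementation.
    class BaseInterpreter {
    public:
      virtual ~BaseInterpreter() = default;
      virtual void visit(const ConvOp&) { std::puts("default Conv"); }
      virtual void visit(const ReluOp&) { std::puts("default Relu"); }
    };

    // A backend overrides only the operators it wants to execute differently,
    // for example to model its accelerator's arithmetic or data layout.
    class MyBackendInterpreter : public BaseInterpreter {
    public:
      void visit(const ConvOp&) override { std::puts("backend-specific Conv"); }
      using BaseInterpreter::visit;  // keep the inherited Relu overload visible
    };

    int main() {
      MyBackendInterpreter interp;
      interp.visit(ConvOp{});  // dispatches to the backend's Conv handler
      interp.visit(ReluOp{});  // falls back to the default behaviour
      return 0;
    }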
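
The Vanilla backend shows the shape of a backend port: a backend class that plugs target-specific passes into each phase of the compilation pipeline (operator selection, memory allocation, code emission). The sketch below is a self-contained approximation of that structure; PassManager, BackendTemplate, and the hook names here are stand-ins and do not match ONNC's headers exactly.

    #include <cstdio>
    #include <functional>
    #include <utility>
    #include <vector>

    // Stand-in for a pass pipeline: passes are queued, then run in order.
    using Pass = std::function<void()>;
    struct PassManager {
      std::vector<Pass> passes;
      void add(Pass p) { passes.push_back(std::move(p)); }
      void run() { for (auto& p : passes) p(); }
    };

    // Stand-in for the backend base class: one hook per compilation phase.
    struct BackendTemplate {
      virtual ~BackendTemplate() = default;
      virtual void addTensorSel(PassManager& pm) = 0;  // map ONNX nodes to target ops
      virtual void addMemAlloc(PassManager& pm) = 0;   // assign tensor addresses
      virtual void addCodeEmit(PassManager& pm) = 0;   // emit the target loadable
    };

    // Porting a new backend means filling these hooks with target passes.
    struct MyNewBackend : BackendTemplate {
      void addTensorSel(PassManager& pm) override { pm.add([] { std::puts("tensor selection"); }); }
      void addMemAlloc(PassManager& pm) override  { pm.add([] { std::puts("memory allocation"); }); }
      void addCodeEmit(PassManager& pm) override  { pm.add([] { std::puts("code emission"); }); }
    };

    int main() {
      PassManager pm;
      MyNewBackend backend;
      backend.addTensorSel(pm);
      backend.addMemAlloc(pm);
      backend.addCodeEmit(pm);
      pm.run();  // runs the three phases in order
      return 0;
    }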

Tools

ONNI

  • Add more verbose levels for debugging and benchmarking (levels 1 to 4)
  • Add flag --dry-run: Skip inference and only print statistics.
  • Add flag --onnx-opt: Enable the ONNX optimizer.
  • Add flag -fLinearScanAlgo=<string>: Select the linear scan allocation algorithm, first-fit or best-fit (default: first-fit); see the sketch after this list.
  • Add flag --enable-x86-fuse-conv-relu: Enable Conv-ReLU fusion in the x86 backend.
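
For context on -fLinearScanAlgo: when the linear scan allocator needs space for a tensor, first-fit takes the first free block that is large enough, while best-fit takes the smallest free block that still fits. The sketch below is a generic illustration of the two policies over a simple free list, not ONNC's allocator code.

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct Block { std::size_t offset, size; };

    // First-fit: return the index of the first free block large enough.
    int firstFit(const std::vector<Block>& freeList, std::size_t request) {
      for (std::size_t i = 0; i < freeList.size(); ++i)
        if (freeList[i].size >= request) return static_cast<int>(i);
      return -1;  // no block fits
    }

    // Best-fit: return the index of the smallest free block that still fits.
    int bestFit(const std::vector<Block>& freeList, std::size_t request) {
      int best = -1;
      for (std::size_t i = 0; i < freeList.size(); ++i)
        if (freeList[i].size >= request &&
            (best < 0 || freeList[i].size < freeList[static_cast<std::size_t>(best)].size))
          best = static_cast<int>(i);
      return best;
    }

    int main() {
      const std::vector<Block> freeList = {{0, 64}, {96, 32}, {160, 48}};
      const std::size_t request = 30;
      std::printf("first-fit -> block %d\n", firstFit(freeList, request));  // block 0 (size 64)
      std::printf("best-fit  -> block %d\n", bestFit(freeList, request));   // block 1 (size 32)
      return 0;
    }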

Documentation
