Release Note

ONNC framework

[New feature] ONNC supports new operators Clip, Max, Min, ReduceMean, and PRelu.

C Backend

[New feature] ONNC can compile models into C files.
[New feature] ONNC provides a library containing function implementations for the 116 neural network operators defined in the ONNX rel-1.3.0 specification.
[New feature] The ONNC library can call the Intel MKL-DNN library to accelerate the computation of Conv (convolution) and Gemm (matrix multiplication) on Intel CPUs.

Supported ONNX Operators

  • Add
  • AveragePool
  • BatchNormalization
  • Concat
  • Conv
  • Clip (new)
  • Gemm
  • GlobalAveragePool
  • Identity
  • LRN
  • Max (new)
  • MaxPool
  • Min (new)
  • Mul
  • PRelu (new)
  • Relu
  • ReduceMean (new)
  • Reshape
  • Softmax
  • Sum
  • Transpose (used in ShuffleNet)
  • Unsqueeze

Supported ONNX Models

Some hardware modules inside NVDLA change the precision of the prediction results. If a calibrator does not take these hardware architectural characteristics into account in its algorithm, it may not preserve the precision of some AI models. For large AI models, this lack of architectural consideration can produce unacceptable errors. …

Release Note

ONNC framework

  • [New Feature] add methods for manipulating ComputeOperator input/output links
  • [New Feature] add methods for erasing Value in Module
  • [New Feature] add new method addOnncIrOptimization() for class TargetBackend
  • [New Feature] add new method runOnComputeGraph() for class CustomPass<T>
  • [New Feature] add several utility libraries
  • [New Feature] add 5 ONNC IR optimization passes

Release Note

New Features

NVDLA Backend

  • The first open-source compiler backend that supports NVIDIA Deep Learning Accelerator (NVDLA)
  • Initial release of nv_full hardware configuration support
  • Support status for the models in the ONNX model zoo: ONNC can compile 6 models and run them on the NVDLA virtual platform successfully. 2 models are not supported by…

NOTE: The feature described below is scheduled to be available in version 1.0.0.

ONNC serves as a bridge between AI frameworks and the underlying accelerator hardware. Like GCC in the traditional compiler world, ONNC intends to support any kind of deep learning accelerator (DLA) with a unified interface for the…


The Open Neural Network Compiler (ONNC) is a compiler that connects the Open Neural Network Exchange Format (ONNX) to every deep learning accelerator (DLA).
