Release Note

ONNC framework

  • [New Feature] add methods for manipulating ComputeOperator input/output links
  • [New Feature] add methods for erasing Value in Module
  • [New Feature] add new method addOnncIrOptimization() for class TargetBackend
  • [New Feature] add new method runOnComputeGraph() for class CustomPass<T>
  • [New Feature] add several utility libraries
  • [New Feature] add 5 ONNC IR optimization passes
  • [Bug fix] fix segmentation fault due to unexpected global opt<T> object initialization order
  • [Bug fix] fix name collision when using type LiveInterval
  • [Bug fix] remove C++11-incompatible code
  • [Bug fix] fix bugs in default ComputeVisitor::visit() implementation
  • [Bug fix] fix ONNC runtime bugs for 12 ONNX model zoo models
  • [Bug fix] add detailed error messages for unsupported ONNX operators

NVDLA Backend

  • [New Feature] support more operators
  • [New Feature] add NVDLA UMD/KMD patch file (in the nvdla directory)
  • [New Feature] add single-layer test models for the 16 supported ONNX operators (in the single_layer_test directory)
  • [Bug fix] avoid creating duplicate AddressListEntry
  • [Bug fix] fix memory-sharing logic for Reshape
  • [Bug fix] fix the split-Conv algorithm to produce correct inference results
  • [Bug fix] fix incorrect data and weight bank allocation logic for Conv
  • [Bug fix] fix incorrect memory source setting for group Conv
  • [Bug fix] fix incorrect weight packing logic for Conv
  • [Bug fix] fix incorrect data cube size calculation logic
  • [Bug fix] remove redundant MemoryListEntry blocks in NVDLA Loadables
  • [Bug fix] fix incorrect NVDLA Loadable task submission logic
  • [Bug fix] avoid allocating MemoryListEntry for unused tensors
  • [Bug fix] fix handling of AveragePool when the attribute count_include_pad is 0
  • [Bug fix] fix segmentation fault for Conv without bias
  • [Bug fix] fix LRN lookup table settings

ONNX Operator Support

Please refer to Supported Operators for more details.

The following operators are supported; those marked (new) were added in this release.

  • AveragePool
  • BatchNormalization
  • Concat
  • Conv
  • Gemm
  • GlobalAveragePool (new)
  • LRN
  • MaxPool
  • Mul (new)
  • Relu
  • Reshape
  • Softmax
  • Sum
  • Transpose (used in ShuffleNet) (new)
  • Unsqueeze (new)

ONNX Model Zoo Support Status

ONNC can successfully compile the 12 ONNX models listed in the following table from the ONNX Model Zoo and run inference correctly on the NVDLA virtual platform (with the nv_full hardware configuration).

The Open Neural Network Compiler (ONNC) is a compiler that connects the Open Neural Network Exchange Format (ONNX) to every deep learning accelerator (DLA).