Incorporating the Intel MKL-DNN Library into ONNC Runtime

Motivation

ONNC Runtime is written in plain C. The advantage is that it runs on any CPU, but generic C code cannot exploit the vendor-optimized kernels of modern hardware and therefore performs poorly. The ONNC Calibrator runs inference through ONNC Runtime, so calibration is slow: calibrating a VGG19 model with two hundred pictures takes two hours.

Experiment

Take the following steps to incorporate the Intel MKL-DNN library into ONNC Runtime:

  1. Import the MKL-DNN library into the ONNC Runtime source tree
  2. Call MKL-DNN primitives from the ONNC Runtime operator implementations
  3. Link ONNC Runtime against the MKL-DNN library

Results

The performance of the convolution operator (Conv) improved significantly.

Conclusion

ONNC (Open Neural Network Compiler) is a retargetable compilation framework designed specifically for proprietary deep learning accelerators. Its software architecture expedites porting ONNC to any Deep Learning Accelerator (DLA) design that supports ONNX (Open Neural Network Exchange) operators. By incorporating the Intel MKL-DNN library, ONNC Runtime shows significant improvement in execution time.
