Install OpenVINO Runtime following the official installation tutorial.
Configure the OpenVINO environment with `source <INSTALL_DIR>/setupvars.sh`.
Clone the repository and export the test model:

```bash
git clone https://github.com/dkurt/openvino_pytorch_layers.git
cd openvino_pytorch_layers
python examples/complex_mul/export_model.py
```
As a result you will get a `model.onnx` file with the model, plus `inp.npy` and `inp1.npy` with the input tensors and `ref.npy` with the reference output of the ONNX model.
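For reference, here is a minimal, hypothetical sketch of such an export script. It is not the repository's actual `export_model.py` (which registers a custom ONNX operator so that the extension kernel is used); it only illustrates the data layout and the files being produced. The `ComplexMul` module and all tensor shapes are illustrative assumptions.

```python
# Hypothetical sketch, not the repository's export_model.py.
import numpy as np
import torch

class ComplexMul(torch.nn.Module):
    # Complex tensors are stored as real tensors with a trailing dim of 2:
    # [..., 0] holds the real part, [..., 1] the imaginary part.
    def forward(self, a, b):
        real = a[..., 0] * b[..., 0] - a[..., 1] * b[..., 1]
        imag = a[..., 0] * b[..., 1] + a[..., 1] * b[..., 0]
        return torch.stack([real, imag], dim=-1)

model = ComplexMul()
inp, inp1 = torch.rand(1, 3, 2), torch.rand(1, 3, 2)
ref = model(inp, inp1)

# Export the model and save inputs/reference output for later comparison
torch.onnx.export(model, (inp, inp1), 'model.onnx',
                  input_names=['input', 'input1'], output_names=['output'])
np.save('inp.npy', inp.numpy())
np.save('inp1.npy', inp1.numpy())
np.save('ref.npy', ref.numpy())
```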
Build the CPU extension:

```bash
cd openvino_pytorch_layers/user_ie_extensions
mkdir build && cd build
cmake .. && make
```
As a result you will get the `user_ie_extensions/build/libuser_cpu_extension.so` file.
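To quickly verify the build, you can try loading the library (a minimal sketch; the full test script below does the same as its first step):

```python
from openvino.runtime import Core

# add_extension raises an error if the shared library is missing
# or was built against an incompatible OpenVINO version.
core = Core()
core.add_extension('user_ie_extensions/build/libuser_cpu_extension.so')
print('Extension loaded successfully')
```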
Create a `test_ov_model.py` file inside the `openvino_pytorch_layers` folder and run it.
`openvino_pytorch_layers/compare.py` is the same script, but for the old OpenVINO API (see the sketch at the end of this section).
```python
# test_ov_model.py
import numpy as np
from openvino.runtime import Core

# Load reference values
inp = np.load('inp.npy')
inp1 = np.load('inp1.npy')
ref_res = np.load('ref.npy')

# Create Core and register the user extension
core = Core()
core.add_extension('user_ie_extensions/build/libuser_cpu_extension.so')

# You can get .xml and .bin OpenVINO model files with
# `mo --input_model model.onnx --extension user_ie_extensions/build/libuser_cpu_extension.so`
# or load the model from the .onnx file directly
model = core.read_model('model.onnx')
compiled_model = core.compile_model(model, 'CPU')

results = compiled_model.infer_new_request({'input': inp, 'input1': inp1})
predictions = next(iter(results.values()))

# Compare ONNX reference and OpenVINO model results
diff = np.max(np.abs(ref_res - predictions))
print('Res diff:', diff)
```
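If the extension and model are loaded correctly, the printed difference should be close to zero (floating-point noise only).

For the old OpenVINO API mentioned above, here is a rough equivalent sketch, assuming the pre-2022 Inference Engine Python bindings; see `compare.py` in the repository for the actual script, and note that the extension library built for the old API may differ:

```python
import numpy as np
from openvino.inference_engine import IECore

# Load reference values
inp = np.load('inp.npy')
inp1 = np.load('inp1.npy')
ref_res = np.load('ref.npy')

# Create IECore and register the user extension for CPU
ie = IECore()
ie.add_extension('user_ie_extensions/build/libuser_cpu_extension.so', 'CPU')

# Read the ONNX model and load it onto the CPU device
net = ie.read_network('model.onnx')
exec_net = ie.load_network(net, 'CPU')

results = exec_net.infer({'input': inp, 'input1': inp1})
predictions = next(iter(results.values()))
print('Res diff:', np.max(np.abs(ref_res - predictions)))
```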