Associate II
February 26, 2024
Solved

INTERNAL ERROR: Value of zero not correct

  • February 26, 2024
  • 2 replies
  • 3493 views

Hello, everyone! I'm learning to deploy my object detection model using STM32Cube.AI. My model was trained in PyTorch, and since STM32Cube.AI doesn't support .pth files, I converted the model to ONNX format. Because the model size didn't meet the development board's requirements, I then quantized it; noting that STM32Cube.AI officially recommends quantizing in ONNX format, I did exactly that. However, when I analyze the quantized model with STM32Cube.AI, the following error occurs (the model before quantization can be analyzed normally). Can anyone help me figure out what's going on?

[Screenshot of the STM32Cube.AI error attached: mc_daydayup_0-1708936849552.png]


Best answer by fauvarque.daniel

FYI, the fix for this model will be available in the December 2024 release

Regards

Daniel

2 replies

Associate II
February 26, 2024

By the way, the quantized ONNX model runs normally with onnxruntime.

fauvarque.daniel
ST Employee
February 26, 2024

STM32Cube.AI supports ONNX quantization in QDQ format, per channel, with INT8.

You can look in the embedded documentation, in the quantization chapter, for a sample script using ONNX.

That said, could you share the model with us so we can analyze what's going on?

Thanks

fauvarque.daniel
ST Employee
February 26, 2024

Oops, I didn't see that the model was attached.

Associate II
February 26, 2024

Aha, thank you very much for your reply. That's exactly the procedure I followed from the quantization section of the embedded documentation. My model works fine with onnxruntime.InferenceSession. Attached is a test image.

[Test image attached: mc_daydayup_0-1708946384477.png]