Issues converting YOLOv8 ONNX model to uint8 .nb for STM32MP25
Hi everyone,
I’m currently working on deploying a YOLOv8 model on the STM32MP25 using ST Edge AI. Here’s what I’ve done so far and where I’m stuck.
Training & Export:
ONNX model details:
Input: [1, 3, 256, 256], float32
Output: [1, 5, 1344], float32
I successfully converted it to .nb using:
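(Roughly the standard invocation — I'm paraphrasing from memory; `yolov8_256.onnx` is a placeholder path, and the exact options should be double-checked against `stedgeai generate --help` on your install:)

```shell
# ST Edge AI Core CLI: float ONNX -> .nb for the STM32MP25 NPU
# (model path is a placeholder; verify option names for your tool version)
stedgeai generate --model yolov8_256.onnx --target stm32mp25
```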
This produces a float16 model (I tried to run it on the board, but it did not work).
Problem:
The STM32MP25 NPU expects uint8 models, so I tried converting the input/output to uint8:
This fails during compilation with:
Observation:
Without the uint8 conversion, the .nb file generates normally.
Explicitly requesting uint8 fails.
The error suggests the ST Edge AI compiler doesn’t support direct uint8 conversion for this ONNX model.
I've also tried the ST Edge AI Developer Cloud, but that did not work either.
My Question:
Has anyone successfully converted a YOLOv8 ONNX model to uint8 for STM32MP25? Are there recommended steps for preparing the model for uint8 quantization?
Any advice on preprocessing, exporting, or using ST tools to get a uint8-ready model would be highly appreciated.
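For context on preprocessing: my current understanding (corrections welcome) is that once the input is uint8, the float normalization is folded into the input tensor's scale/zero-point, mirroring ONNX QuantizeLinear. The scale and zero-point values below are illustrative, not taken from my model:

```python
import numpy as np

def quantize_input(x_float, scale, zero_point):
    """Map a float tensor to uint8 with per-tensor scale/zero-point,
    following the ONNX QuantizeLinear convention:
    q = saturate(round(x / scale) + zero_point)."""
    q = np.round(x_float / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

# With scale = 1/255 and zero_point = 0, a [0, 1]-normalized pixel
# maps back to its raw 0..255 value.
img = np.array([0, 64, 128, 255], dtype=np.float32) / 255.0
print(quantize_input(img, 1.0 / 255.0, 0))  # maps back to [0, 64, 128, 255]
```

In other words, on-target code would feed raw pixel bytes directly, with the scale/zero-point coming from the quantized model's input QuantizeLinear node.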
Thanks!
Aman
