ONNX model compiled to INT8 I/O in STEdgeAI despite FP32 input/output
Hi all,
I’m trying to compile an ONNX model whose input and output are float32. When I compile it with ST Edge AI, the generated ONNX reports both the input and the output as INT8 instead of float32. I’d like to understand:
- Is this behavior specific to compiling the model as ONNX?
- Or is there a particular reason STEdgeAI forces INT8 input/output during compilation (e.g., quantization settings, optimization constraints, or a tool limitation)?

Any clarification on how STEdgeAI handles mixed-precision models and I/O types would be much appreciated.
Thanks in advance!
