Solved

ONNX model compiled to INT8 I/O in STEdgeAI despite FP32 input/output

  • February 9, 2026
  • 1 reply
  • 208 views

Hi all,

I’m trying to compile an ONNX model that has float32 inputs and outputs. When I compile it with ST Edge AI, the generated ONNX shows both the input and the output as INT8 instead of float32. I would like to understand:

- Is this behavior specific to compiling the model as an ONNX model?

- Or is there another reason ST Edge AI forces INT8 input/output during compilation (e.g., quantization settings, optimization constraints, or tool limitations)?

Any clarification on how ST Edge AI handles mixed-precision models and I/O types would be really helpful.

Thanks in advance!

Best answer by Julian E.

Hi @Afreen,

 

You can select the input and output data types with the --input-data-type and --output-data-type options.

You can find the details here:

https://stedgeai-dc.st.com/assets/embedded-docs/command_line_interface.html#ref_input_data_type_option

 

You will see that for quantized QDQ ONNX models, the default I/O data type is changed to INT8.
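For example, a minimal sketch of a generate call that forces float32 I/O (the model file name and target below are placeholders, not from this thread; check the linked documentation for the exact options supported by your tool version):

stedgeai generate --model model_qdq.onnx --target stm32n6 --input-data-type float32 --output-data-type float32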

 

Have a good day,

Julian
