Associate II
October 12, 2023
Solved

Model compression with quantized model

  • October 12, 2023
  • 1 reply
  • 1926 views

The STM32CubeMX AI tool has an option to compress a model. Is there any way to get compression to work on a quantized model? I'm guessing the answer is no, because when I try with a .tflite or .onnx model it complains that only float or double is supported. If there is any way to get model compression to work with an integer model, I would appreciate some tips on how that might be done.

This topic has been closed for replies.
Best answer by marchold

I found my answer in another post. Apparently compression is only available on floating-point dense layers.
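For context, the compression that X-CUBE-AI applies to float dense layers is a weight-sharing scheme: the original 32-bit weights are replaced by small indices into a shared codebook of representative values, which is why it requires floating-point weights to begin with. Below is a toy sketch of that general technique (a 1-D k-means codebook), purely to illustrate the idea; it is not ST's actual implementation, and the function names are made up for this example.

```python
# Toy weight-sharing compression: replace each float weight by an index
# into a small codebook built with 1-D k-means. Illustrative only --
# not the X-CUBE-AI implementation.
import random

def kmeans_1d(values, k, iters=20):
    """Cluster scalar values into k centroids with plain Lloyd iterations."""
    centroids = random.sample(values, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centroids[j]))
            buckets[nearest].append(v)
        # Recompute each centroid; keep the old one if its bucket is empty.
        centroids = [sum(b) / len(b) if b else centroids[i]
                     for i, b in enumerate(buckets)]
    return centroids

def compress(weights, k=16):
    """Return (codebook, indices): each weight maps to its nearest centroid."""
    codebook = kmeans_1d(weights, k)
    indices = [min(range(k), key=lambda j: abs(w - codebook[j]))
               for w in weights]
    return codebook, indices

def decompress(codebook, indices):
    """Rebuild an (approximate) weight list from codebook + indices."""
    return [codebook[i] for i in indices]

# With k=16, each 32-bit weight shrinks to a 4-bit index plus the
# shared 16-entry codebook -- roughly an 8x reduction on large layers.
weights = [random.gauss(0.0, 1.0) for _ in range(256)]
codebook, indices = compress(weights, k=16)
approx = decompress(codebook, indices)
```

Since a quantized model has already replaced its float weights with low-bit integers, there is little left for this kind of codebook pass to gain, which is consistent with the tool only offering it for float layers.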
