Model compression with a quantized model
The STM32CubeMX AI tool has an option to compress a model. Is there any way to get compression to work on a quantized model? I suspect the answer is no, because when I try with a .tflite or .onnx model it complains that only float or double types are supported. If there is any way to apply model compression to an integer model, I would appreciate tips on how that might be done.
