A self-trained model takes too much flash
Hi,
I have a self-trained image classification model with two classes, quantized with dynamic range quantization (the unquantized model is about 1.5 MB, and it is about 500 KB after quantization). After benchmarking, this model still takes too much flash and RAM.
Is there any method to reduce the flash and RAM consumption? The model I am using is MobileNet; should I use a different model for image classification, or is there another model you would recommend?
Also note that the STM32 Developer Cloud does not support quantization for my self-trained model, so I quantized the model myself using dynamic range quantization.
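For reference, this is roughly how I do the dynamic range quantization with the TFLite converter. The tiny two-class model here is only a hypothetical stand-in so the snippet is self-contained; my actual model is the trained MobileNet:

```python
import tensorflow as tf

# Hypothetical stand-in for the trained two-class MobileNet classifier,
# just so this example runs on its own.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Dynamic range quantization: weights are stored as int8,
# activations are still computed in float at runtime.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()

with open("model_quant.tflite", "wb") as f:
    f.write(tflite_quant_model)
```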
Thank you!
