Problem deploying ST SSD Mobilenet v1 on STM32N6
Hello,
I have an STM32N6570-DK discovery kit and I am trying to follow the instructions in the ST Model Zoo Services to train, evaluate and deploy an object detection model that can detect cars. I downloaded the Pascal_VOC_2012 dataset, did some custom data cleanup to keep only the images containing vehicles (classes [bicycle, bus, car, motorbike]), and on that data I used the predefined scripts in the ST Model Zoo to train st_ssd_mobilenet_v1. Training went well and the evaluation gave good accuracy results.
Then I used the scripts to make predictions on some images. The prediction options "host" and "stedgeai_host" also worked fine. With the "stedgeai_n6" option, the compilation and programming of the board succeed, but the inference freezes. After some debugging I found that there is a software epoch that performs a Tile operation, and this is the one that freezes. In network_generate_report.txt I noticed some question marks just before this Tile epoch:
epoch 39 HW
epoch 40 -SW- ( Conv )
epoch 41 ??
epoch 42 ??
epoch 43 -SW- ( Tile )
epoch 44 -SW- ( QuantizeLinear )
epoch 45 HW
In the generated network.c code this is translated into epoch 40 being the Conv and epoch 41 the Tile, which I think is not how it should be.
So, is this something you have seen before?
I am using X-Cube-AI 10.0.0 with STEdgeAI 2.0.0 and STM32Cube_FW_N6_V1.1.1
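For context, the custom data cleanup I mentioned above is essentially the following (a simplified sketch assuming the standard Pascal VOC Annotations/*.xml layout; keep_image and filter_dataset are just my own helper names, not Model Zoo functions):

```python
import xml.etree.ElementTree as ET
from pathlib import Path

# The only VOC classes I want to keep
VEHICLE_CLASSES = {"bicycle", "bus", "car", "motorbike"}

def keep_image(annotation_xml: str) -> bool:
    """Return True if the VOC annotation contains at least one vehicle object."""
    root = ET.fromstring(annotation_xml)
    names = {obj.findtext("name") for obj in root.iter("object")}
    return bool(names & VEHICLE_CLASSES)

def filter_dataset(annotations_dir: str) -> list[str]:
    """Collect the image filenames whose annotation mentions a vehicle class."""
    kept = []
    for xml_path in sorted(Path(annotations_dir).glob("*.xml")):
        if keep_image(xml_path.read_text()):
            kept.append(xml_path.stem + ".jpg")
    return kept
```

After this filtering I also stripped the non-vehicle objects from the kept annotations before training, so the model only sees the four classes.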
While I am at it, I would like to ask two additional questions:
1. In all the .yaml configuration files in the ST Model Zoo Services, the quantization section sets the output type to float. However, when I run prediction with "stedgeai_host" or "stedgeai_n6", they complain that the quantized model must have an int8 output, and the same happens when I use some of the predefined models from the model zoo. Does that mean that to use them I have to re-quantize the corresponding .h5 model myself?
2. In the training README I read that I can train MobileNet with pretrained_weights: imagenet, but this cannot be done with any of the YOLO models; the pretrained_weights option is not recognized. However, the documentation mentions that for tiny_yolo the pretrained weights come from COCO. So, is there a way to use pretrained weights for the YOLO models as well?
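Regarding question 1, this is the kind of quantization section I am referring to (reproduced from memory from a Model Zoo user_config.yaml, so the exact field names may be slightly off):

```yaml
quantization:
  quantizer: TFlite_converter
  quantization_type: PTQ
  quantization_input_type: uint8
  quantization_output_type: float   # <- this is what the stedgeai_* prediction modes reject
```

If I simply change quantization_output_type to int8 and re-run the quantization step, is that the intended workflow, or am I missing something?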
Thanks in advance for your help.
