Floating-point Audio Classification inference in FP-AI-SENSING1
Reading the UM2524 user manual, I noticed the following statement on page 14: "A training script for HAR is provided in the Utilities\AI_Resources\Training Scripts\HAR folder along with a Jupyter Notebook to explain all the steps taken." This appears to be an oversight.
In any case, it seems that ASC (Acoustic Scene Classification) is deployed in the FP-AI-SENSING1 package as a fixed-point NN inference.
I have developed a simple two-class floating-point implementation.
Can I modify some files in the package (e.g. asc_processing.c and asc_processing.h) to adapt it to my inference, or is there a reason the network would need to be quantized to integers?
Do you have any tips for this kind of adaptation?
The UM2524 manual is available at https://www.st.com/resource/en/user_manual/um2524-getting-started-with-the-stm32cube-function-pack-for-ultralow-power-iot-nodes-for-ai-applications-based-on-audio-and-motion-sensing-stmicroelectronics.pdf
Thank you
M
