llcc
Senior
July 6, 2025
Question

STM32N6570 and storing a model in the flash


Hello! How can I store the model's input data in the flash of the STM32N6570-DK, then deploy the model, read the input data from flash, and perform inference? Are there any relevant tutorials for this? I previously followed the model deployment tutorial in the link below. Can this deployment method support reading input_data directly from flash? If not, are there alternative approaches to achieve this goal? Thanks.
https://stm32ai-cs.st.com/assets/embedded-docs/stneuralart_getting_started.html 
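In outline, the approach being asked about works because data written to external flash on the STM32N6570-DK is visible to the application through the XSPI memory-mapped region (assumed here to start at 0x70000000; verify against the STM32N6 reference manual), so the input blob can be read with a plain pointer and copied into the network's input buffer. The sketch below is a minimal illustration, not ST's documented method: the flash region is simulated with a const array so it compiles on a host, and the inference entry points mentioned in the comments depend on the code X-CUBE-AI generates for your model.

```c
#include <stdint.h>
#include <string.h>

/* On target, the blob written to external flash is visible at the XSPI
 * memory-mapped base (assumed 0x70000000 on the N6 -- verify in the
 * reference manual):
 *     const int8_t *flash_input = (const int8_t *)0x70000000UL;
 * Here that region is simulated with a const array so the pattern can be
 * compiled and exercised on a host. */
static const int8_t flash_input[] = { 0x10, -0x22, 0x33, -0x44,
                                      0x55,  0x66, -0x77, 0x08 };

#define AI_INPUT_SIZE ((int)sizeof(flash_input))

/* Copy the quantized int8 input from (memory-mapped) flash into the
 * network's input buffer; returns the number of bytes copied. */
int load_input_from_flash(int8_t *dst, int dst_size)
{
    int n = (dst_size < AI_INPUT_SIZE) ? dst_size : AI_INPUT_SIZE;
    memcpy(dst, flash_input, (size_t)n);
    return n;
}

/* Usage on target (names depend on your generated code):
 *     int8_t input_buffer[AI_INPUT_SIZE];
 *     load_input_from_flash(input_buffer, AI_INPUT_SIZE);
 *     // then hand input_buffer to the generated inference entry point,
 *     // e.g. ai_network_run() with X-CUBE-AI, or the LL_ATON runtime
 *     // on the Neural-ART NPU.
 */
```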


Julian E.
Technical Moderator
July 8, 2025

Hello @llcc,

 

We don't currently have a tutorial for that.

We should deliver in the coming days a tutorial about creating an X-CUBE-AI application that runs an AI inference, together with an FSBL (first-stage boot loader) that loads this application from external flash into internal RAM. I will update you once it is available.
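Until that tutorial is out, the FSBL flow described above reduces to two steps: copy the application image out of the memory-mapped external flash into internal RAM, then branch to its reset handler. The sketch below is an assumption-laden illustration, not ST's reference FSBL: all addresses and the image size are placeholders to replace with values from the STM32N6 reference manual and your linker script, and a real FSBL would also set the stack pointer and vector table base (CMSIS `__set_MSP()`, `SCB->VTOR`) before jumping.

```c
#include <stdint.h>
#include <string.h>

/* Placeholder addresses -- take the real values from the STM32N6
 * reference manual and your linker script. */
#define EXT_FLASH_BASE  0x70000000UL        /* XSPI memory-mapped flash */
#define APP_LOAD_ADDR   0x34000000UL        /* internal AXISRAM         */
#define APP_SIZE        (512UL * 1024UL)    /* application image size   */

typedef void (*app_entry_t)(void);

/* FSBL step 1: copy the application image from memory-mapped external
 * flash into internal RAM. */
static void fsbl_copy_app(void)
{
    memcpy((void *)APP_LOAD_ADDR, (const void *)EXT_FLASH_BASE, APP_SIZE);
}

/* FSBL step 2: jump to the copied image.  Word 0 of a Cortex-M vector
 * table is the initial stack pointer, word 1 the reset handler; a real
 * FSBL would load MSP and VTOR from these before branching. */
static void fsbl_jump_to_app(void)
{
    const uint32_t *vectors = (const uint32_t *)APP_LOAD_ADDR;
    ((app_entry_t)vectors[1])();
}
```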

 

Have a good day,

Julian

In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.
llcc (Author)
Senior
July 8, 2025

Hello, I am currently trying to write my input_data_int8.bin file to flash, but I keep getting an error that says 'Error: failed to download the file'. How can I fix this? Thanks!

[Screenshot: llcc_0-1751981748206.png]

Julian E.
Technical Moderator
July 8, 2025

Hello @llcc,

 

Did you select the external loader for the DK board?

[Screenshot: JulianE_0-1751984216862.png]

(Click on the bottom left EL icon)
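The same operation can also be done from the command line with `STM32_Programmer_CLI` and its `-el` (external loader) option. The loader filename and destination address below are assumptions: pick the `.stldr` matching the DK board from the `ExternalLoader` folder of your STM32CubeProgrammer installation, and check the destination address against your memory map.

```shell
# Program the input blob into external flash through the board's
# external loader.  Loader path/name and destination address are
# assumptions to adapt to your setup.
STM32_Programmer_CLI -c port=SWD \
  -el "<CubeProgrammer>/bin/ExternalLoader/MX66UW1G45G_STM32N6570-DK.stldr" \
  -w input_data_int8.bin 0x70000000
```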

 

Have a good day,

Julian

 