Associate III
April 11, 2025
Solved

How to understand the output of X-Cube-AI after analyzing a model?

  • April 11, 2025
  • 1 reply
  • 730 views

I am using X-Cube-AI to deploy a neural network onto an STM32N6570-DK board. I selected the "n6-allmems-O3" option, and after analyzing, it shows two values, labeled used RAM and used Flash. Since I selected the "allmems" option, I am wondering whether the RAM usage refers to the external or the internal RAM, and whether the Flash usage refers to the internal or the external Flash.


Julian E. (Best answer)
Technical Moderator
April 11, 2025

Hello @Z-YF,

 

Here is the doc:

  1. https://stedgeai-dc.st.com/assets/embedded-docs/stneuralart_neural_art_compiler.html#ref_st_neural_art_option
  2. https://stedgeai-dc.st.com/assets/embedded-docs/stneuralart_neural_art_compiler.html#ref_aton_compiler_mempools

 

Basically, the memory pool describes which memories you authorize the compiler to use when allocating the weights and activations of your model.

When you use allmems, you authorize the compiler to use all of them; it will try to use the fastest memory first, then fall back to slower ones, to give you the best inference time.
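To make the "fastest memory first" idea concrete, here is a toy sketch of such a greedy placement strategy. This is my own illustration, not the actual ST Neural-ART allocator, and the pool names and sizes are hypothetical; check your .mdesc/.mpool files for the real memories and capacities on the STM32N657.

```python
# Toy illustration (NOT the ST Neural-ART allocator): each buffer is
# placed in the fastest pool that still has room, spilling to slower
# pools when the fast ones are full.

def place_buffers(pools, buffers):
    """pools: list of (name, capacity_bytes), ordered fastest -> slowest.
    buffers: list of (name, size_bytes). Returns {buffer_name: pool_name}."""
    free = {name: cap for name, cap in pools}
    order = [name for name, _ in pools]  # fastest first
    placement = {}
    for buf, size in buffers:
        for pool in order:
            if free[pool] >= size:
                free[pool] -= size
                placement[buf] = pool
                break
        else:
            raise MemoryError(f"{buf} ({size} B) fits in no pool")
    return placement

# Hypothetical numbers, just for the demo.
pools = [("AXISRAM (internal)", 2_500_000), ("octoFlash (external)", 64_000_000)]
buffers = [("activations", 2_000_000), ("weights", 8_000_000)]
print(place_buffers(pools, buffers))
```

With these made-up sizes, the activations fit in internal RAM while the weights spill to external Flash, which matches the behavior the answer describes for allmems.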

 

This can also be useful:

https://stedgeai-dc.st.com/assets/embedded-docs/stneuralart_programming_model.html

 

Have a good day,

Julian

​In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.
Z-YF (Author)
Associate III
April 19, 2025

Hi,

I'm just checking: if I want X-Cube-AI to automatically distribute the model between the external Flash and the internal RAM of the STM32N6570-DK board, I need to configure the AI core's JSON file with the actual addresses of the external and internal memories, right?

Julian E.
Technical Moderator
April 22, 2025

Hello @Z-YF,

 

Yes, I think the best way is to edit user_neuralart.json and add an entry, for example:

  1. Copy one of the existing entries
  2. Give it a name: CUSTOM_MPOOLS in my case
  3. Create an mpool file and set its path on the "memory_pool" line
  4. Edit other fields if you want
  5. Save
 "CUSTOM_MPOOLS": {
   "memory_pool": "./my_mpools/MY_CUSTOM_MPOOL.mpool",
   "memory_desc": "./my_mdescs/stm32n6.mdesc",
   "options": "--optimization 3 --all-buffers-info --mvei --no-hw-sw-parallelism --cache-maintenance --Oalt-sched --native-float --enable-virtual-mem-pools --Omax-ca-pipe 4 --Oshuffle-dma --Ocache-opt --Os"
 },
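A typo in the edited JSON or a wrong path on the "memory_pool" line is easy to miss, so here is a small helper I'd use to sanity-check the file before running the compiler. It is not part of X-CUBE-AI; it assumes profiles are top-level JSON objects like the CUSTOM_MPOOLS entry above, and that the mpool/mdesc paths resolve relative to the JSON file, which may not match exactly how the tool resolves them.

```python
# Sanity-check helper (my own, not an ST tool): parse user_neuralart.json
# and report any profile whose "memory_pool" or "memory_desc" file is
# missing, so path typos are caught before invoking the compiler.
import json
from pathlib import Path

def check_profiles(json_path):
    base = Path(json_path).parent
    profiles = json.loads(Path(json_path).read_text())
    problems = []
    for name, cfg in profiles.items():
        if not isinstance(cfg, dict):
            continue  # skip any non-profile top-level keys
        for key in ("memory_pool", "memory_desc"):
            p = cfg.get(key)
            if p and not (base / p).exists():
                problems.append(f"{name}: {key} -> {p} not found")
    return problems

# Example: print any broken paths, or confirm the file looks OK.
if __name__ == "__main__":
    issues = check_profiles("user_neuralart.json")
    print("\n".join(issues) if issues else "all referenced files found")
```

Running it on the edited user_neuralart.json should list any entry whose referenced .mpool or .mdesc file cannot be found.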

 

You can find the file here:

C:\Users\YOUR_USER\STM32Cube\Repository\Packs\STMicroelectronics\X-CUBE-AI\10.0.0\scripts\N6_scripts\

 

Then in Cube AI you can select it:

[screenshot: JulianE_0-1745311011349.png]

 

Have a good day,

Julian
