Associate II
March 18, 2025
Solved

Which software do I need to install to start with STM32N6570-DK?

  • March 18, 2025
  • 5 replies
  • 4127 views

I am a newbie with the STM32N6570-DK.

My goal is to deploy an AI model on this board. Which software do I need to install to get started with the STM32N6570-DK? I mean the complete list of software.

Thank you.

Best answer by Julian E.

Hello @Anhem ,

 

I would suggest you start with ST Model Zoo: 

STMicroelectronics/stm32ai-modelzoo: AI Model Zoo for STM32 devices

 

This GitHub repository is composed of two parts:

  • ST Model Zoo: where you can download models
  • ST Model Zoo Services: where you will find Python scripts, mainly to retrain models, perform quantization, and deploy on the N6.

 

Make sure to follow the "Before You Start" section in ST Model Zoo Services (at the end of the README) to install everything correctly.

Also, to deploy a model on the N6, follow this document (an example for object detection). You will need to download a few additional things:

stm32ai-modelzoo-services/object_detection/deployment/README_STM32N6.md at main · STMicroelectronics/stm32ai-modelzoo-services

 

Have a good day,

Julian

 

 

5 replies

Andrew Neil
Super User
March 18, 2025
A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work.
Julian E. (Best answer)
Technical Moderator
March 18, 2025

(See the best answer quoted at the top of the thread.)
In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.
Anhem (Author)
Associate II
March 19, 2025

Hi @Julian E. 

Thank you. I am following that link.

Step 1: I installed STM32CubeIDE and STEdgeAI (for generating C code from tflite/onnx).

Step 2: I also downloaded STM32N6 Getting Started V1.0.0 and copied application_code into modelzoo-services: https://github.com/STMicroelectronics/stm32ai-modelzoo-services/blob/main/application_code/object_detection/STM32N6/README_ModelZoo.md

I have a question: do I need to install CubeMX and then install X-CUBE-AI inside CubeMX?

As I understand it, STEdgeAI can already generate C code from tflite/onnx, so why do we also need X-CUBE-AI?

 

Moreover, I tried installing CubeMX and X-CUBE-AI inside CubeMX, and it seems that I need to download STEdgeAI and the Neural-ART components again inside CubeMX. This confused me, because in step 1 above we had already installed STEdgeAI and Neural-ART.

 

Julian E.
Technical Moderator
March 19, 2025

Hello @Anhem ,

 

Sure, it is a bit confusing at first.

 

CubeMX is useful for generating a project when starting from scratch.

The application code projects were themselves started with CubeMX.

In your case, you will use the provided application code, so you don't need it. In the future, if you want to start a project from a blank slate, you will need it to configure the I/O, get the LL and HAL libraries, etc.

 

As for X-CUBE-AI, I would describe it as a GUI inside CubeMX for the ST Edge AI core.

So, if you were to add an AI model manually to a new project, the easiest way would be to use X-CUBE-AI, which generates all your C files plus a CubeIDE project with template code.

Alternatively, you could run ST Edge AI directly, take the generated files, add them to your project folder manually, and add the path in CubeIDE so the builder can find them.

With the Model Zoo, it is the deployment script that calls ST Edge AI, retrieves your model as C files, and places them automatically into the application code.

 

To sum up, the ST Edge AI core is the heart of everything; it is the software that does the conversion.

Most of the AI tools are ways to use it: the ST Edge AI Developer Cloud uses it in the background, and so do X-CUBE-AI and the Model Zoo scripts.

So, for now, you have installed everything that you need.
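For reference, the Model Zoo deployment flow ultimately shells out to the stedgeai CLI. Below is a minimal sketch of that call, assuming the Linux install path and the generate flags that appear in the deployment log later in this thread; the model file and output directory are placeholders:

```python
import subprocess
from pathlib import Path

def build_generate_cmd(stedgeai: str, model: str, out_dir: str) -> list:
    # Mirrors the 'stedgeai generate' invocation printed by the
    # Model Zoo deployment log (N6 target, uint8 input, channel-last).
    return [
        stedgeai, "generate",
        "--target", "stm32n6",
        "-m", model,
        "--output", out_dir,
        "--input-data-type", "uint8",
        "--inputs-ch-position", "chlast",
    ]

cmd = build_generate_cmd(
    "/opt/ST/STEdgeAI/2.0/Utilities/linux/stedgeai",  # install path from the log
    "ssd_mobilenet_v2_fpnlite_035_192_int8.tflite",   # placeholder model file
    "generated",                                      # placeholder output dir
)
# Only run the tool if it is actually installed on this machine.
if Path(cmd[0]).exists():
    subprocess.run(cmd, check=True)
```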

 

Have a good day,

Julian

Anhem (Author)
Associate II
March 20, 2025

Hi @Julian E. 

I followed this link https://github.com/STMicroelectronics/stm32ai-modelzoo-services/blob/main/object_detection/deployment/README_STM32N6.md and got the error below.

My PC does not have a GPU, so I commented out some code in stm32ai.py.

2025-03-20 13:56:14.310314: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2025-03-20 13:56:14.310329: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
[INFO] : Running `deployment` operation mode
2025/03/20 13:56:16 INFO mlflow.tracking.fluent: Experiment with name '<unnamed>' does not exist. Creating a new experiment.
[INFO] : ClearML config check
[INFO] : The random seed for this simulation is 123
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
[INFO] : Generating C header file for Getting Started...
[INFO] : This TFLITE model doesnt contain a post-processing layer
loading model.. model_path="../../stm32ai-modelzoo/object_detection/ssd_mobilenet_v2_fpnlite/ST_pretrainedmodel_public_dataset/coco_2017_person/ssd_mobilenet_v2_fpnlite_035_192/ssd_mobilenet_v2_fpnlite_035_192_int8.tflite"
loading conf file.. "../../application_code/object_detection/STM32N6/stmaic_STM32N6570-DK.conf" config="None"
"n6 release" configuration is used
[INFO] : Selected board : "STM32N6570-DK Getting Started Object Detection (STM32CubeIDE)" (stm32_cube_ide/n6 release/stm32n6)
[INFO] : Compiling the model and generating optimized C code + Lib/Inc files: ../../stm32ai-modelzoo/object_detection/ssd_mobilenet_v2_fpnlite/ST_pretrainedmodel_public_dataset/coco_2017_person/ssd_mobilenet_v2_fpnlite_035_192/ssd_mobilenet_v2_fpnlite_035_192_int8.tflite
setting STM.AI tools.. root_dir="", req_version=""
 Cube AI Path: "/opt/ST/STEdgeAI/2.0/Utilities/linux/stedgeai".
[INFO] : Offline CubeAI used; Selected tools: 10.0.0 (x-cube-ai pack)
loading conf file.. "../../application_code/object_detection/STM32N6/stmaic_STM32N6570-DK.conf" config="None"
"n6 release" configuration is used
compiling... "ssd_mobilenet_v2_fpnlite_035_192_int8_tflite" session
 model_path : ['../../stm32ai-modelzoo/object_detection/ssd_mobilenet_v2_fpnlite/ST_pretrainedmodel_public_dataset/coco_2017_person/ssd_mobilenet_v2_fpnlite_035_192/ssd_mobilenet_v2_fpnlite_035_192_int8.tflite']
 tools : 10.0.0 (x-cube-ai pack)
 target : "STM32N6570-DK Getting Started Object Detection (STM32CubeIDE)" (stm32_cube_ide/n6 release/stm32n6)
 options : --st-neural-art default@../../application_code/object_detection/STM32N6/Model/user_neuralart.json --input-data-type uint8 --inputs-ch-position chlast
[returned code = 255 - FAILED]
$ cwd: None
$ args: /opt/ST/STEdgeAI/2.0/Utilities/linux/stedgeai generate --target stm32n6 -m ../../stm32ai-modelzoo/object_detection/ssd_mobilenet_v2_fpnlite/ST_pretrainedmodel_public_dataset/coco_2017_person/ssd_mobilenet_v2_fpnlite_035_192/ssd_mobilenet_v2_fpnlite_035_192_int8.tflite --output /media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/experiments_outputs/2025_03_20_13_56_16 --workspace /media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/experiments_outputs/2025_03_20_13_56_16 --st-neural-art default@../../application_code/object_detection/STM32N6/Model/user_neuralart.json --input-data-type uint8 --inputs-ch-position chlast
ST Edge AI Core v2.0.0-20049

PASS: 0%| | 0/82 [00:00<?, ?it/s]
[... progress bar updates trimmed ...]
PASS: 89%|████████▉ | 73/82 [00:04<00:00, 16.22it/s]

 >>>> EXECUTING NEURAL ART COMPILER


 /opt/ST/STEdgeAI/2.0/Utilities/linux/atonn -i "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/experiments_outputs/2025_03_20_13_56_16/ssd_mobilenet_v2_fpnlite_035_192_int8_OE_3_1_0.onnx" --json-quant-file "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/experiments_outputs/2025_03_20_13_56_16/ssd_mobilenet_v2_fpnlite_035_192_int8_OE_3_1_0_Q.json" -g "network.c" --load-mdesc "/opt/ST/STEdgeAI/2.0/Utilities/configs/stm32n6.mdesc" --load-mpool "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/Model/my_mpools/stm32n6-app2.mpool" --save-mpool-file "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/experiments_outputs/2025_03_20_13_56_16/neural_art__network/stm32n6-app2.mpool" --out-dir-prefix "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/experiments_outputs/2025_03_20_13_56_16/neural_art__network/" --all-buffers-info --no-hw-sw-parallelism --cache-maintenance --enable-virtual-mem-pools --native-float --optimization 3 --Os --Omax-ca-pipe 4 --Ocache-opt --enable-epoch-controller --output-info-file "c_info.json"


 Unable to execute the command "['/opt/ST/STEdgeAI/2.0/Utilities/linux/atonn', '-i', '/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/experiments_outputs/2025_03_20_13_56_16/ssd_mobilenet_v2_fpnlite_035_192_int8_OE_3_1_0.onnx', '--json-quant-file', '/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/experiments_outputs/2025_03_20_13_56_16/ssd_mobilenet_v2_fpnlite_035_192_int8_OE_3_1_0_Q.json', '-g', 'network.c', '--load-mdesc', '/opt/ST/STEdgeAI/2.0/Utilities/configs/stm32n6.mdesc', '--load-mpool', '/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/Model/my_mpools/stm32n6-app2.mpool', '--save-mpool-file', '/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/experiments_outputs/2025_03_20_13_56_16/neural_art__network/stm32n6-app2.mpool', '--out-dir-prefix', '/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/experiments_outputs/2025_03_20_13_56_16/neural_art__network/', '--all-buffers-info', '--no-hw-sw-parallelism', '--cache-maintenance', '--enable-virtual-mem-pools', '--native-float', '--optimization', '3', '--Os', '--Omax-ca-pipe', '4', '--Ocache-opt', '--enable-epoch-controller', '--output-info-file', 'c_info.json']"
 [Errno 13] Permission denied: '/opt/ST/STEdgeAI/2.0/Utilities/linux/atonn'



 E103(CliRuntimeError): Error calling the Neural Art compiler - []
Error executing job with overrides: []
Traceback (most recent call last):
 File "/home/anhem/miniconda3/envs/stm_32/lib/python3.10/site-packages/clearml/binding/hydra_bind.py", line 230, in _patched_task_function
 return task_function(a_config, *a_args, **a_kwargs)
 File "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/stm32ai_main.py", line 228, in main
 process_mode(cfg)
 File "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/stm32ai_main.py", line 102, in process_mode
 deploy(cfg)
 File "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/../deployment/deploy.py", line 111, in deploy
 stm32ai_deploy_stm32n6(target=board, stlink_serial_number=stlink_serial_number, stm32ai_version=stm32ai_version, c_project_path=c_project_path,
 File "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/../../common/deployment/common_deploy.py", line 515, in stm32ai_deploy_stm32n6
 stmaic_local_call(session)
 File "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/../../common/deployment/common_deploy.py", line 489, in stmaic_local_call
 stmaic.compile(session=session, options=opt, target=session._board_config)
 File "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/../../common/stm32ai_local/compile.py", line 216, in cmd_compile
 raise Exception('Error during compilation')
Exception: Error during compilation

What do I need to do to fix it? Thank you.

Anhem (Author)
Associate II
March 20, 2025

@Julian E. 

Here is my user_config.yaml. I kept it the same as the original. I see the stedgeai and IDE paths. Do I need to change these paths?

I also see some places referring to the STM32H7 board. Do I need to change those as well?

general:
  project_name: COCO_2017_person_Demo
  model_type: st_yolo_lc_v1
  # choices=[st_ssd_mobilenet_v1, ssd_mobilenet_v2_fpnlite, tiny_yolo_v2, st_yolo_lc_v1,
  #          st_yolo_x, yolo_v8, yolo_v5u]
  model_path:
  logs_dir: logs
  saved_models_dir: saved_models
  gpu_memory_limit: 16
  num_threads_tflite: 4
  global_seed: 127

operation_mode: chain_tqeb
# choices=['training', 'evaluation', 'deployment', 'quantization', 'benchmarking',
#          'chain_tqeb', 'chain_tqe', 'chain_eqe', 'chain_qb', 'chain_eqeb', 'chain_qd']

dataset:
  name: COCO_2017_person
  class_names: [ person ]
  training_path: /dataset/coco_person_2017_tfs/train
  validation_path:
  validation_split: 0.1
  test_path: /dataset/coco_person_2017_tfs/val
  quantization_path: /dataset/coco_person_2017_tfs/val
  quantization_split: 0.01

preprocessing:
  rescaling: { scale: 1/127.5, offset: -1 }
  resizing:
    aspect_ratio: fit
    interpolation: nearest
  color_mode: rgb

data_augmentation:
  ########## For tiny_yolo_v2 and st_yolo_lc_v1 only ###########
  random_periodic_resizing:
    period: 10
    image_sizes: [(192, 192), (224, 224), (256, 256), (288, 288), (320, 320), (352, 352),
                  (384, 384), (416, 416), (448, 448), (480, 480), (512, 512),
                  (544, 544), (576, 576), (608, 608)]
  random_flip:
    mode: horizontal
  random_crop:
    crop_center_x: (0.25, 0.75)
    crop_center_y: (0.25, 0.75)
    crop_width: (0.5, 0.9)
    crop_height: (0.5, 0.9)
    change_rate: 0.9
  random_contrast:
    factor: 0.4
  random_brightness:
    factor: 0.3

training:
  model:
    # alpha: 0.35
    input_shape: (192, 192, 3)
    # pretrained_weights: imagenet
  dropout:
  batch_size: 64
  epochs: 4
  optimizer:
    Adam:
      learning_rate: 0.005
  callbacks:
    ReduceLROnPlateau:
      monitor: val_map
      patience: 10
      factor: 0.25
    ModelCheckpoint:
      monitor: val_map
    EarlyStopping:
      monitor: val_map
      patience: 20

postprocessing:
  confidence_thresh: 0.001
  NMS_thresh: 0.5
  IoU_eval_thresh: 0.4
  plot_metrics: False   # Plot precision versus recall curves. Default is False.
  max_detection_boxes: 100

quantization:
  quantizer: TFlite_converter
  quantization_type: PTQ
  quantization_input_type: uint8
  quantization_output_type: float
  granularity: per_channel   # per_tensor
  optimize: False            # can be True if per_tensor
  export_dir: quantized_models

benchmarking:
  board: STM32H747I-DISCO

tools:
  stedgeai:
    version: 10.0.0
    optimization: balanced
    on_cloud: True
    path_to_stedgeai: C:/Users/<XXXXX>/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/<*.*.*>/Utilities/windows/stedgeai.exe
  path_to_cubeIDE: C:/ST/STM32CubeIDE_<*.*.*>/STM32CubeIDE/stm32cubeide.exe

deployment:
  c_project_path: ../../application_code/object_detection/STM32H7/
  IDE: GCC
  verbosity: 1
  hardware_setup:
    serie: STM32H7
    board: STM32H747I-DISCO

mlflow:
  uri: ./experiments_outputs/mlruns

hydra:
  run:
    dir: ./experiments_outputs/${now:%Y_%m_%d_%H_%M_%S}
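Before launching a deployment, it can help to check that the tool paths configured in the YAML actually exist on disk. Here is a small sketch (the Linux paths below are illustrative substitutes for the Windows defaults above, not values from this thread's config):

```python
from pathlib import Path

def missing_tool_paths(paths: dict) -> list:
    # Return the names of configured tool paths that do not exist on disk.
    return [name for name, p in paths.items() if not Path(p).exists()]

# Illustrative values; on Linux these replace the Windows defaults
# shown in user_config.yaml.
tools = {
    "path_to_stedgeai": "/opt/ST/STEdgeAI/2.0/Utilities/linux/stedgeai",
    "path_to_cubeIDE": "/opt/st/stm32cubeide/stm32cubeide",
}
for name in missing_tool_paths(tools):
    print(f"warning: '{name}' points to a non-existent file; fix user_config.yaml")
```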

 

Julian E.
Technical Moderator
March 20, 2025

When you run the command:

python stm32ai_main.py

it uses user_config.yaml by default.

 

If you want to use the one you sent me in the zip, you should run:

python stm32ai_main.py --config-name YOUR_YAML.yaml

 

You do indeed need to edit the ST Edge AI and CubeIDE paths.

(You did this in the yaml in the zip, but not in user_config.yaml, which is the file used when you do not pass --config-name.)

 

In your error I see "Permission denied", so you may have to fix the permissions of your stedgeai folder with chmod.
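For example (a sketch, assuming the install path from the error log), only the execute bit on the stedgeai and atonn binaries needs to be restored; a recursive chmod 777 is broader than necessary:

```python
import os
import stat
from pathlib import Path

def ensure_executable(path: Path) -> bool:
    # Add the user execute bit if it is missing; return True only if
    # the file exists and is executable afterwards.
    if not path.exists():
        return False
    if not os.access(path, os.X_OK):
        path.chmod(path.stat().st_mode | stat.S_IXUSR)
    return os.access(path, os.X_OK)

# Binary names taken from the error log; adjust the directory to your install.
for tool in ("stedgeai", "atonn"):
    ensure_executable(Path("/opt/ST/STEdgeAI/2.0/Utilities/linux") / tool)
```

Note that changing permissions under /opt usually requires root, so the equivalent `sudo chmod u+x` from a shell may be needed instead.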

 

Let me know if it works.

Also, please revert the Python file you edited for the GPU to its original state.

 

Julian

Anhem (Author)
Associate II
March 24, 2025

Hi @Julian E. 

After editing the permissions with

sudo chmod 777 -R /opt/ST/STEdgeAI/
sudo chmod 777 -R /opt/ST/STEdgeAI/*

and running

python3 stm32ai_main.py --config-path ./config_file_examples/ --config-name deployment_n6_ssd_mobilenet_v2_fpnlite_config.yaml

the process took too long and stopped at the point shown in the attached log. I waited 30 minutes, but nothing more happened, just as before changing the permissions. My board shows 2 LEDs (red and orange). It seems I need to use a USB-C to USB-C cable (not USB-A to USB-C) to ensure enough power. By the way, which cable should I use: a data cable, or a charge-only cable like those used for phone charging? But that is a separate problem.

How can I fix this deployment and permission issue? I am afraid I will need to reinstall STEdgeAI, because it currently does not run after changing the permissions. Thank you.

 

(stm_32) /media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src  python3 stm32ai_main.py --config-path ./config_file_examples/ --config-name deployment_n6_ssd_mobilenet_v2_fpnlite_config.yaml 

2025-03-24 09:14:14.630486: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2025-03-24 09:14:14.630502: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
[INFO] : Running `deployment` operation mode
[INFO] : ClearML config check
[INFO] : The random seed for this simulation is 123
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
[INFO] : Generating C header file for Getting Started...
[INFO] : This TFLITE model doesnt contain a post-processing layer
loading model.. model_path="/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/stm32ai-modelzoo/object_detection/ssd_mobilenet_v2_fpnlite/ST_pretrainedmodel_public_dataset/coco_2017_person/ssd_mobilenet_v2_fpnlite_035_192/ssd_mobilenet_v2_fpnlite_035_192_int8.tflite"
loading conf file.. "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/stmaic_STM32N6570-DK.conf" config="None"
"n6 release" configuration is used
[INFO] : Selected board : "STM32N6570-DK Getting Started Object Detection (STM32CubeIDE)" (stm32_cube_ide/n6 release/stm32n6)
[INFO] : Compiling the model and generating optimized C code + Lib/Inc files: /media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/stm32ai-modelzoo/object_detection/ssd_mobilenet_v2_fpnlite/ST_pretrainedmodel_public_dataset/coco_2017_person/ssd_mobilenet_v2_fpnlite_035_192/ssd_mobilenet_v2_fpnlite_035_192_int8.tflite
setting STM.AI tools.. root_dir="", req_version=""
 Cube AI Path: "/opt/ST/STEdgeAI/2.0/Utilities/linux/stedgeai".
[INFO] : Offline CubeAI used; Selected tools: 10.0.0 (x-cube-ai pack)
loading conf file.. "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/stmaic_STM32N6570-DK.conf" config="None"
"n6 release" configuration is used
compiling... "ssd_mobilenet_v2_fpnlite_035_192_int8_tflite" session
 model_path : ['/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/stm32ai-modelzoo/object_detection/ssd_mobilenet_v2_fpnlite/ST_pretrainedmodel_public_dataset/coco_2017_person/ssd_mobilenet_v2_fpnlite_035_192/ssd_mobilenet_v2_fpnlite_035_192_int8.tflite']
 tools : 10.0.0 (x-cube-ai pack)
 target : "STM32N6570-DK Getting Started Object Detection (STM32CubeIDE)" (stm32_cube_ide/n6 release/stm32n6)
 options : --st-neural-art default@/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/Model/user_neuralart.json --input-data-type uint8 --inputs-ch-position chlast

 

Julian E.
Technical Moderator
March 25, 2025

Hello @Anhem,

 

Please set both boot pins to the right. This should solve the issue.

Also, please use only a USB-C to USB-C data cable. Otherwise, when the application runs, you will encounter an issue because of insufficient voltage.

 

Have a good day,

Julian 

 

Anhem (Author)
Associate II
March 25, 2025

Hi @Julian E. 

Please set both boot pins to the right. This should solve the issue.

In the deployment part of this guide, https://github.com/STMicroelectronics/stm32ai-modelzoo-services/blob/main/object_detection/deployment/README_STM32N6.md#table-of-contents, step 3.3 says BOOT0 - RIGHT, and that after running the commands

cd ../src/
python stm32ai_main.py --config-path ./config_file_examples/ --config-name deployment_n6_ssd_mobilenet_v2_fpnlite_config.yaml

both pins are set to the LEFT.

So I will try your suggestion: first set both pins to the RIGHT, then after running the Python script set both pins to the LEFT to run inference.

Also, please use only a USB-C to USB-C data cable. When the application runs, you will encounter an issue because of insufficient voltage.

Yes, I understood.

 

Thank you so much. I could successfully run the tflite model from the Model Zoo. You are right: we need to set both BOOT pins to the RIGHT before running the Python script, then turn them back to the LEFT afterwards. I also used a USB-C to USB-C data cable (LED LD3 is yellow-green, so that's fine). Moreover, on Linux we need to solve the permission issue and the PATH of some STM32 components. That is the experience I want to share.

 

I have one more question. As I understand it, I am currently running from internal flash (internal memory). Is that right? And how much memory can I use with this model? I want to know this value to estimate whether my AI model can run with these settings.

 

Moreover, I know that the STM32N6570-DK supports an SD card. Does that mean we can put all the C source code plus the AI model on the SD card and run inference with LARGER models? If so, could you please share a guide (how to set up the board to run from an SD card, and how to build and flash; ideally with an AI model example)?

 

Julian E.
Technical Moderator
March 26, 2025

Hello @Anhem,

 

You can find the available and used memory when running stedgeai generate. For example:

[attached image: JulianE_2-1742978429606.png]

https://stedgeai-dc.st.com/assets/embedded-docs/stneuralart_getting_started.html

 

When you use external memory (anything other than the CPU and NPU RAM shown in the image above), inference-time performance drops drastically, because the data must be fetched from external memory before it can be used.

 

So yes, you can run larger models and use an SD card, but it is not advised. If your use case does not need a fast inference time for any reason, it can still be useful.

 

I am not an embedded engineer, so I don't really know how to use an SD card. You can ask about that on the MCU forum board.

Regarding the AI side, you will certainly need to edit the memory-pool file:

[attached image: JulianE_3-1742979017168.png]

Memory-pool descriptor file: $STEDGEAI_CORE_DIR/Utilities/windows/targets/stm32/resources/mpools/stm32n6.mpool

 

When you use the ST Edge AI core, it reads your memory-pool file, which indicates which memories are available, their start addresses, their sizes, etc.

 

If you look at the default one, you will see a description of the available memories, corresponding to the first image I sent.

[attached image: JulianE_4-1742979208247.png]
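As a rough illustration of that bookkeeping (hypothetical pool names, addresses, and sizes; the real values live in the stm32n6.mpool file, whose exact format is not shown here), one can sum the on-chip pool sizes to estimate whether a model's buffers fit without spilling to external memory:

```python
from dataclasses import dataclass

@dataclass
class MemPool:
    # Hypothetical stand-in for one memory-pool entry: a named region
    # with a start address and a size in bytes.
    name: str
    start: int
    size: int

def fits_on_chip(model_bytes: int, pools: list) -> bool:
    # True if the model's buffers fit in the combined on-chip pools.
    return model_bytes <= sum(p.size for p in pools)

# Hypothetical on-chip pools; real names, addresses, and sizes come
# from the stm32n6.mpool descriptor.
on_chip = [
    MemPool("npuRAM", 0x34100000, 1024 * 1024),
    MemPool("cpuRAM", 0x34000000, 512 * 1024),
]
print(fits_on_chip(600 * 1024, on_chip))       # a 600 KiB model fits on chip
print(fits_on_chip(4 * 1024 * 1024, on_chip))  # a 4 MiB model would need external memory
```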

 

Because our real use cases tend to focus on fast inference time, we don't currently have any example using an SD card.

 

Have a good day,

Julian
