Associate
May 24, 2024
Question

Want to deploy TFLM library on STM32L562E


Hi,

Currently, I want to use the TFLM library for inference rather than Cube.AI. Here is what I am doing:

1. Download the latest TFLM library.

2. Build it with: "make -f tensorflow/lite/micro/tools/make/Makefile TARGET=cortex_m_generic TARGET_ARCH=cortex-m33 OPTIMIZED_KERNEL_DIR=cmsis_nn microlite"

3. Create a new project in STM32CubeIDE and copy the tensorflow and third_party folders produced by the build into the project.

4. Add the tensorflow and third_party folders to the project's include paths.

 

Is the above process correct? I have tried multiple times, but the build always fails with errors about missing files in the tensorflow folder. Do you have any suggestions?
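Missing-file errors often come from hand-copying only part of the source tree. One possible alternative (a sketch, assuming a checkout of the upstream tflite-micro repository; the script path and flags come from that repo and may differ between snapshots) is to let TFLM generate a self-contained source tree and import that whole tree into the IDE project:

```shell
# 1. Build the static library, as in step 2 above:
make -f tensorflow/lite/micro/tools/make/Makefile \
     TARGET=cortex_m_generic TARGET_ARCH=cortex-m33 \
     OPTIMIZED_KERNEL_DIR=cmsis_nn microlite

# 2. Alternatively, generate a tree containing every source and header
#    the build needs, then import it into STM32CubeIDE in one piece
#    instead of copying folders by hand ("/path/to/tflm_tree" is a
#    placeholder output directory):
python3 tensorflow/lite/micro/tools/project_generation/create_tflm_tree.py \
     --makefile_options="TARGET=cortex_m_generic TARGET_ARCH=cortex-m33 OPTIMIZED_KERNEL_DIR=cmsis_nn" \
     /path/to/tflm_tree
```

If headers are still reported missing after this, the usual culprit is an include path that does not cover the third_party subfolders (flatbuffers, gemmlowp, kissfft, ruy).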

Thanks in advance

 


fauvarque.daniel
ST Employee
May 24, 2024

X-CUBE-AI inside STM32CubeMX can create a project for you using the TensorFlow Lite Micro runtime.

The snapshot that we use is not the latest one, but it should at least give you a working project that you can then update with the latest sources.

Associate
May 24, 2024

Dear Daniel,

For the "TensorFlow Lite Micro runtime", is it true that there is an analysis tool in CubeMX that can show the system performance based on the TFLM runtime (as shown in the picture below)?

My goal is to debug each piece of the TFLM code and see how it works. Is there any way to import the TFLM library onto an STM32 board and debug it?

 

 

[Image: haoliu0027_0-1716580414064.png]

 

fauvarque.daniel
ST Employee
May 27, 2024

In the UI, with the TFLite Micro runtime, you can run an automatic validation on the target: the tool generates, compiles, and flashes a validation program onto the target and then runs its "validate" command.

With the validate command, you'll get the inference time and a comparison of the inference results between the code running on the target and a Python run.

You can also generate the system performance project, which displays just the inference time on a serial port.

Regards

Associate
August 8, 2025

I am trying to run TFLM on an STM32F769 eval board. I built TFLM as a static library and included it in my project, but I am getting errors when calling AllocateTensors(). Has anyone had any luck with it?
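For context, AllocateTensors() most commonly fails when the tensor arena is too small or when an op used by the model was not registered with the resolver. Below is a minimal sketch of the usual setup with the status actually checked, assuming the standard tflite-micro API (class and function names come from the upstream headers; g_model and the arena size are placeholders you must supply and tune):

```cpp
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/micro/micro_log.h"
#include "tensorflow/lite/schema/schema_generated.h"

// g_model is the model flatbuffer, e.g. exported with xxd (placeholder).
extern const unsigned char g_model[];

// A too-small arena is the most common cause of AllocateTensors() failing.
// 16 KiB is only a placeholder; size it to your model.
constexpr int kArenaSize = 16 * 1024;
alignas(16) static uint8_t tensor_arena[kArenaSize];

bool SetupInterpreter() {
  const tflite::Model* model = tflite::GetModel(g_model);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    MicroPrintf("Model schema version mismatch");
    return false;
  }

  // Every op the model uses must be registered here; a missing op makes
  // AllocateTensors() fail with a "Didn't find op" message. The four ops
  // below are examples only; match them to your model.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddFullyConnected();
  resolver.AddReshape();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(model, resolver,
                                              tensor_arena, kArenaSize);

  // Check the return status instead of ignoring it.
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    MicroPrintf("AllocateTensors failed: arena too small or missing op?");
    return false;
  }
  return true;
}
```

Building with MicroPrintf routed to your UART usually turns a silent failure into a message naming the missing op or the number of bytes the arena is short by.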

Julian E.
Technical Moderator
August 11, 2025

Hello @hadi81,

 

We no longer support TFLM.

You can find our documentation on supported NN runtimes here:

https://stedgeai-dc.st.com/assets/embedded-docs/index.html

 

Have a good day,

Julian

In order to give better visibility on the answered topics, please click 'Accept as Solution' on the reply that solved your issue or answered your question.