PGaja.1
Associate
May 12, 2022
Question

No source available for "ai_platform_network_process() at 0x8xxxxx"

  • May 12, 2022
  • 6 replies
  • 4980 views

Hi,

We are trying a very simple deep neural network model with 20 inputs and 3 outputs and want to generate inferences in real time. We are following the instructions from the document "Getting started with X-CUBE-AI Expansion Package for Artificial Intelligence (AI)" (UM2526), with X-CUBE-AI 7.1.0 in STM32CubeMX.

Has anyone encountered the same error?

Thanks in advance.

6 replies

Andrew Neil
Super User
May 12, 2022

At what point, exactly, do you get that message?

It sounds like you're using the debugger and trying to step into, or break within, a function called ai_platform_network_process()?

So the first question is: do you have the source code for that function?

If you don't have the source code, then "no source available" is obviously true!

A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work.
PGaja.1 (Author)
Associate
May 24, 2022

I have not made any changes to the files generated by STM32CubeMX. Also, this is the code we are using for inference in the neural network:

 static ai_handle network = AI_HANDLE_NULL;
 ai_network_report report;
 ai_bool res;
 ai_error err;
 ai_i32 nbatch;
 const ai_u16 batch_size = 1;

 err = ai_network_create(&network, AI_NETWORK_DATA_CONFIG);

 AI_ALIGNED(4) static ai_u8 activations[AI_NETWORK_DATA_ACTIVATIONS_SIZE];

 const ai_network_params params = {
  AI_NETWORK_DATA_WEIGHTS(ai_network_data_weights_get()),
  AI_NETWORK_DATA_ACTIVATIONS(activations) };

 nbatch = ai_network_run(network, &ai_input[0], &ai_output[0]);
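For comparison, UM2526's embedded client API documents a create → init → run sequence, and the snippet above builds `params` but never passes it to `ai_network_init()`, which the API expects before `ai_network_run()`. A hedged sketch of the full sequence follows; the symbol names assume a model generated with the default name "network" and may differ in your project:

```c
#include <stdbool.h>
#include "network.h"       /* generated by X-CUBE-AI */
#include "network_data.h"

static ai_handle network = AI_HANDLE_NULL;
AI_ALIGNED(4) static ai_u8 activations[AI_NETWORK_DATA_ACTIVATIONS_SIZE];

/* Returns true when the network is ready for ai_network_run(). */
static ai_bool aiInit(void)
{
  ai_error err = ai_network_create(&network, AI_NETWORK_DATA_CONFIG);
  if (err.type != AI_ERROR_NONE) {
    return false;          /* creation failed: inspect err.type / err.code */
  }

  const ai_network_params params = {
    AI_NETWORK_DATA_WEIGHTS(ai_network_data_weights_get()),
    AI_NETWORK_DATA_ACTIVATIONS(activations)
  };

  /* Without this call the runtime stays uninitialized and run() can
   * return 0 batches. */
  return ai_network_init(network, &params);
}
```

This is a sketch against the classic embedded client API headers, not a drop-in replacement for the generated application template.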

ATosu.1
Associate
March 15, 2023

I am facing the same problem. I imported a Keras model, and the Cube.AI tool could analyze the model successfully, but it did not create a source file for that function. There is just a declaration for the function, but no definition.

Did you find a solution for this? If so, please share it with me too.

josepauloo
Graduate
November 29, 2024

Hi,

I'm also facing the same issue while trying to deploy a simple neural network model using STM32Cube.AI. I'm working with a small model (https://www.youtube.com/watch?v=crJcDqIUbP4&t=85s).

I'm using X-CUBE-AI version 7.1.0 and STM32CubeMX for code generation. Although the code generation process completes successfully, I'm encountering runtime errors when trying to execute the network on the STM32. Despite following the instructions and ensuring all configurations seem correct, the inference does not run as expected. Function network_model_run() returns 0.

It would be great if someone who has encountered and resolved this issue could share their approach. Any help or guidance would be appreciated!

Thanks in advance.
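When the run call returns 0 instead of the expected batch count, the runtime usually records why, and it can be read back. A hedged sketch, again assuming the default "network" prefix (the generated prefix in this project may differ), with `network`, `ai_input`, and `ai_output` declared as in the generated application template:

```c
#include <stdio.h>
#include "network.h"   /* generated by X-CUBE-AI */

/* Run one inference and, on failure, print what the runtime reported. */
static void run_and_report(void)
{
  ai_i32 nbatch = ai_network_run(network, &ai_input[0], &ai_output[0]);
  if (nbatch != 1) {
    /* ai_network_get_error() returns the last error the runtime recorded */
    const ai_error err = ai_network_get_error(network);
    printf("E: inference failed - type=0x%02x code=0x%02x\r\n",
           (int)err.type, (int)err.code);
  }
}
```

The error type/code values are defined in ai_platform.h and narrow the problem down (e.g. an init or parameter issue) much faster than the bare return value.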

Julian E.
Technical Moderator
December 2, 2024

Hello @josepauloo,

Could you try using the latest version of X-CUBE-AI (v9.1) and come back to me if you still encounter the issue?

Have a good day,

Julian

In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.
josepauloo
Graduate
December 2, 2024

Hi Julian,

Thank you for your suggestion. I have already attempted to implement the model using the latest version of X-CUBE-AI (v9.1), but unfortunately, I am still encountering the same runtime issue during the network execution phase.

Attached, you can find the main.c file, if it helps!

 

Have a great day as well!

Best regards,

José

hamitiya
ST Employee
December 3, 2024

Hello,

Did you try to generate a project using a "SystemPerformance" template from X-CUBE-AI?

Also, I see you are calling ai_sine_model_run(), but I do not see any call to aiInit(). Is it wrapped in your function?

 

Best regards,

Yanis

Julian E.
Technical Moderator
December 10, 2024

Hello @josepauloo,

 

I would like to use X-CUBE-AI more to create some documentation through easy examples, but we are pretty short-handed at the moment...

You can generate a code example that uses the model with a random input and sends some metrics (inference time, predicted class, etc.) over serial on an STM32H747I-DISCO using the ST Dev Cloud:

(screenshot: JulianE_0-1733847586547.png)

You need to import a .onnx, .h5, or .tflite model, then go through the steps. You can basically skip everything until you reach the last part.

You can download different things depending on what you want.

Currently, CubeMX will have an issue because the Cube.AI version in the Dev Cloud is higher than the one you can download in CubeMX (version 10.0 came out today, so it may already be fixed).

I don't know if it works for other boards; you can test it. I tested it with this H7, so it works there.

Have a good day,

Julian
josepauloo
Graduate
December 12, 2024

Hello Julian,

Thank you for your support. In the meantime, I managed to make my model work using the structure of the Embedded Inference Client API (file:///C:/Users/jpaul/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/9.1.0/Documentation/embedded_client_api.html) instead of the Embedded Inference Client ST Edge AI APIs (file:///C:/Users/jpaul/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/9.1.0/Documentation/embedded_client_stai_api.html). When creating the project in CubeMX (I used version 9 of X-CUBE-AI), I could not find any functions starting with “stai” in the generated files. So, I tried the other structure, and it worked.

Unfortunately, the documentation linked above does not clarify many important details, making problem-solving more challenging. For instance, the TensorFlow version supported by X-CUBE-AI 9 is 2.12.0; using a newer version causes errors during network model analysis in CubeMX. This should have been documented. There are significant version discrepancies (between TensorFlow, Keras, and the STM tools), which complicate the process.
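The version pin reported above can be guarded against before exporting a model. A minimal, hypothetical sketch; the supported version tuple is taken from José's report for X-CUBE-AI 9 and may differ for other releases:

```python
# Hypothetical pre-export guard: X-CUBE-AI 9.x was reported in this thread
# to require TensorFlow 2.12.0; newer versions fail model analysis in CubeMX.
SUPPORTED_TF = (2, 12, 0)  # assumption, per the report above

def tf_version_ok(version: str) -> bool:
    """Return True if a TensorFlow version string matches the supported release."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts == SUPPORTED_TF
```

In practice you would call this with `tensorflow.__version__` in your training/export script and abort early, instead of discovering the mismatch inside CubeMX.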

Thank you for your help and support.

 

Wishing you and your team a joyful holiday season!
José Paulo

Julian E.
Technical Moderator
December 12, 2024

Hello @josepauloo,

 

Thank you very much for your comment.

 

Cube.AI is a plugin for CubeMX built on ST Edge AI, which you can also use from the CLI.

  • The Embedded Inference Client API is the "old API" that CubeMX uses
  • The Embedded Inference Client ST Edge AI API is the "new API" that you can opt into via the Extra command line options field in the Advanced options

(screenshot: JulianE_0-1734016791248.png)

So, as you said, you don't get the stai functions from the new API by default.

I wasn't aware of this either, but now I know thanks to you :).

As for what to add in the Extra command Line options, I will need to look for it.

 

The date for the migration from the old API to the new one is not defined.

 

Concerning the required versions (for TensorFlow, for example), you can find them in the installation part of the documentation, but it may not be that clear.

(screenshot: JulianE_1-1734017191025.png)

 

I also wish you a joyful holiday season!

Julian
