Raggio
Associate III
February 20, 2022
Solved

X-CUBE-AI 7.1.0 Generated Code Initialisation and use

  • February 20, 2022
  • 1 reply
  • 2321 views

Hello, everyone.

I had a problem with version 7.1.0 of X-CUBE-AI.

When I load the TFLite model of the neural network and generate the code, I have problems initialising the input and output buffers. In particular, this assignment fails: ai_buffer ai_input[AI_NETWORK_IN_NUM] = AI_NETWORK_IN;

The compiler rejects the AI_NETWORK_IN macro, which is flagged as deprecated and now returns ai_buffer*. Has anyone else encountered the same problem? How can it be solved?

In version 5.20.0, I had no problems with initialisation and had already evaluated many different examples.

Thank you,

Davide

    This topic has been closed for replies.
    Best answer by jean-michel.d

    Hello Raggio,

    I don't know where you use this assignment, but since 7.x the macro "AI_NETWORK_IN" is mapped onto a function (ai_network_inputs_get()). To create the handles of the I/O tensors before using them with the ai_network_run() function, the following typical code is expected:

    /* Pointers to the model's input/output tensor descriptors */
    static ai_buffer *ai_input;
    static ai_buffer *ai_output;

    void aiInit(void) {
      ...
      /* Retrieve pointers to the model's input/output tensors */
      ai_input = ai_network_inputs_get(network);
      ai_output = ai_network_outputs_get(network);
      ...
    }

    /*
     * Run inference
     */
    int aiRun(const void *in_data, void *out_data) {
      ai_i32 n_batch;
      ai_error err;

      /* 1 - Update the I/O handles with the address of the data payloads */
      ai_input[0].data = AI_HANDLE_PTR(in_data);
      ai_output[0].data = AI_HANDLE_PTR(out_data);

      /* 2 - Perform the inference */
      n_batch = ai_network_run(network, &ai_input[0], &ai_output[0]);
      if (n_batch != 1) {
        err = ai_network_get_error(network);
        ...
      }
      ...
      return 0;
    }

    br,

    Jean-Michel

    1 reply

    jean-michel.d (Best answer)
    ST Employee
    March 4, 2022

    (Answer quoted above.)

    Raggio (Author)
    Associate III
    March 10, 2022

    Thank you Jean-Michel,

    Your solution is right, thank you for the answer.

    Best Regards,

    Raggio