Associate II
September 30, 2024
Solved

Validation Error: Unable to generate the "libai_network" library

  • September 30, 2024
  • 2 replies
  • 2952 views

 

Hello,

I am currently working on validating a machine learning model that I have deployed on an STM32 microcontroller. While the model successfully runs on the STM32, the results it produces differ significantly from those obtained when running the model on a computer.

Additionally, when I attempt to perform validation on a desktop environment, I encounter the following error: 

E103(CliRuntimeError): Unable to generate the "libai_network" library.

Since the website would not accept the .tflite file, I have attached a picture of the model's structure instead. The model was trained with TensorFlow 2.17 and uses approximately 75 KB of flash memory and 15 KB of RAM.

I would greatly appreciate any guidance or assistance in resolving these issues.

Thank you!


2 replies

fauvarque.daniel
ST Employee
October 3, 2024

You can attach a zip with the model in it; that will help us understand the root cause.

hamitiya
ST Employee
October 3, 2024

Hello,

In order to provide us with your .tflite model, could you please change its extension to an allowed one such as .7z, or compress it into a .7z archive?

 

Best regards,

Yanis

 

In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.
JinchenAuthor
Associate II
October 3, 2024

Thanks so much for the reply.

Please find the model in the attachment.

 

hamitiya (Best answer)
ST Employee
October 3, 2024

While performing validation, we see this error instead of the one you reported earlier:

 

 

TOOL ERROR: tensorflow/lite/kernels/transpose_conv.cc:299 weights->type != input->type (INT8 != FLOAT32)
Node number 17 (TRANSPOSE_CONV) failed to prepare.
Failed to apply the default TensorFlow Lite delegate indexed at 0.

 

Based on the model you shared, we can confirm it has float32 inputs and outputs, but we also expect the same type for the weights, which is not the case.

Layer 17 "TransposeConv" has quantized int8 weights (name: tfl.pseudo_qconst1) 
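One way to find such tensors yourself is to list every int8 tensor in the .tflite file. A minimal sketch, assuming TensorFlow is installed on your desktop (the file path below is a placeholder, not from this thread):

```python
# Sketch: list every int8 tensor in a .tflite file, to spot layers like
# the TransposeConv weights reported above.
# Assumes TensorFlow is installed; the path in the usage note is a placeholder.
import numpy as np
import tensorflow as tf

def list_int8_tensors(tflite_path):
    """Return (index, name) for each int8 tensor in the model."""
    interpreter = tf.lite.Interpreter(model_path=tflite_path)
    return [(d["index"], d["name"])
            for d in interpreter.get_tensor_details()
            if d["dtype"] == np.int8]

# Usage (placeholder path):
#   for idx, name in list_int8_tensors("network.tflite"):
#       print(f"tensor {idx} ({name}) is int8")
```

`get_tensor_details()` works without calling `allocate_tensors()`, so it can inspect a model even when the TRANSPOSE_CONV node fails to prepare.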

 

I would suggest modifying the model so that this layer has float32 weights. Also, check whether any other layers have int8 weights.
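A common source of such mixed int8/float32 weights is converting with optimizations enabled; re-converting without them keeps all weights float32. A minimal sketch, assuming the original Keras model is still available (the names here are placeholders, not from this thread):

```python
# Sketch: re-export a Keras model to TensorFlow Lite without weight
# quantization, so all weights remain float32.
# Assumes the original Keras model is available; names are placeholders.
import tensorflow as tf

def export_float32(keras_model, out_path):
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    # Leaving converter.optimizations unset avoids dynamic-range
    # quantization (tf.lite.Optimize.DEFAULT), which stores constant
    # weights as int8 and produces a mixed-type graph like this one.
    tflite_bytes = converter.convert()
    with open(out_path, "wb") as f:
        f.write(tflite_bytes)
    return tflite_bytes
```

If you do want a quantized model on the STM32, full integer quantization (with a representative dataset) is the usual alternative, so that all tensors, not just some weights, become int8.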

 

Best regards,

Yanis
