MAles.1
Associate II
April 20, 2021
Question

X-CUBE-AI, tflite model: INTERNAL ERROR: Transpose of batch not supported


Using X-CUBE-AI 6.0.0, when I try to import a tflite model of an RNN with both Dense and LSTM layers, this error is returned. I can't find any information about it. Is there anything that can be done?

Thank you very much

    This topic has been closed for replies.

    5 replies

    MAles.1 (Author)
    Associate II
    April 20, 2021

    As additional information, this error is reported during the "Analyze" step. Using the full Keras model directly there is no such error, apart from the model being too big for the available RAM.

    fauvarque.daniel
    ST Employee
    April 26, 2021

    Transpose of batch is not supported because after that layer the batch size is likely no longer 1, and we only support a batch size of 1.

    This transpose layer is added by the tflite converter. Are you using TensorFlow version 2.3.1?

    Are you converting to tflite in order to use tflite post-training quantization?

    Can you share your model so we can look at the issue?

    Regards

    Daniel

    MAles.1 (Author)
    Associate II
    April 26, 2021

    Hi, I'm using TF 2.4.1 and converting a Keras model to tflite. The problem happens with or without quantization.

    I'm attaching an example model.

    Thank you very much

    https://drive.google.com/file/d/1dvC4nHtnFZNU4Ka5ETkwRb6BwUKeoYbp/view?usp=sharing

    IHema.1
    Visitor II
    May 6, 2021

    Hi,

    I have exactly the same problem. My neural network is made up of LSTM + Dense layers, trained with TensorFlow 2.4.1. The input of the network is a vector of 200 consecutive data points, and the output is a vector of 200 x 2 probabilities, one per data point for each of our 2 categories. However, when converting the TensorFlow Lite file to a C file with STM32CubeMX, I also get the "Transpose of batch not supported" error message.

    Can you help me solve this problem?

    MAles.1 (Author)
    Associate II
    May 6, 2021

    I'm the original poster; in the meantime we solved it by not using TF Lite. We reduced the original model by downsampling the input data, which shrinks the input vector size. This lowered the RAM usage enough that we can use the original Keras model directly.
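The workaround in outline (the downsampling factor and shapes below are illustrative; the actual values depend on the model):

```python
import numpy as np

# Illustrative original input: one window of 400 consecutive
# samples, shape (batch, time_steps, features).
x = np.random.randn(1, 400, 1).astype(np.float32)

# Keep every 2nd sample: the time dimension halves, so the LSTM
# unrolls over fewer steps and the activations need less RAM.
factor = 2
x_small = x[:, ::factor, :]

print(x_small.shape)  # (1, 200, 1)
```

The Keras model is then rebuilt with the smaller `input_shape` and retrained on the downsampled data, and the resulting `.h5` model is imported into X-CUBE-AI directly, bypassing the tflite converter entirely.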

    ARaja.2
    Visitor II
    February 4, 2022

    Still facing this issue. The workaround suggested by @MAles.1 works, but the Analyze option still fails for a TFLite model. X-CUBE-AI version 7.1.0, TensorFlow version 2.5.

    Is there an update / fix for this?