rickforescue
Associate II
February 25, 2022
Question

INTERNAL ERROR: 'FLOAT16' --> Is it that STM does not support tf.float16 formats?

  • February 25, 2022
  • 2 replies
  • 2040 views

I tried to compress a Keras model. It is a very simple neural network model with just fully connected (dense) layers. I converted it to tflite and compressed the weights, originally tf.float32, to tf.float16. When I uploaded the model, it gave me this error:

Neural Network Tools for STM32AI v1.6.0 (STM.ai v7.1.0-RC3) 

INTERNAL ERROR: 'FLOAT16'

Is it that STM does not support float16 formats?

More info: it's a tflite model and I used the STM32Cube.AI runtime.
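For context on what the compression step does: post-training float16 quantization stores each float32 weight as a 16-bit half, halving the weight storage at the cost of precision. The snippet below is only an illustration of that storage change using numpy (the poster's actual model and converter code are not shown in the thread):

```python
import numpy as np

# Simulate a dense layer's float32 weights (hypothetical values).
w32 = np.array([0.1234567, 1.0000001, 65504.0], dtype=np.float32)

# "Compressing" to float16 halves storage: 2 bytes per weight vs 4,
# but the mantissa shrinks to 10 bits (~3 decimal digits).
w16 = w32.astype(np.float16)

print(w32.nbytes, w16.nbytes)  # 12 6
print(w16)
```

Note that 65504.0 is the largest finite float16 value; anything larger overflows to infinity when cast, which is one reason float16 weight compression is usually paired with per-layer scaling.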

    This topic has been closed for replies.


    TDK
    Super User
    February 26, 2022

    Half-precision floats aren't supported by the FPUs on Cortex-M4 or Cortex-M7.

    "If you feel a post has answered your question, please click "Accept as Solution"."
    waclawek.jan
    Super User
    February 26, 2022

    As usual, there's no point in discussing this without knowing the STM32 model (core type) and the compiler.

    Actually, half-precision floats came up recently as a question in a local forum (in Czech), and there appears to be *some* support in Cortex-M, although compilers probably won't support it seamlessly.

    JW
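    (For readers of this thread: the *some* support referred to here is, as far as I know, limited to conversion — the Cortex-M7 FPU can widen fp16 to fp32 via VCVTB/VCVTT, but has no fp16 add/multiply instructions, so all arithmetic happens in single precision. A hedged Python sketch of the half-to-single decode that such a conversion performs, for anyone curious about the bit layout:)

```python
import struct

def half_to_float(h: int) -> float:
    """Decode a 16-bit IEEE-754 half into a Python float.

    This mirrors what a half-to-single conversion instruction does:
    widen fp16 (1 sign, 5 exponent, 10 fraction bits) to fp32 so
    arithmetic can run in single precision.
    """
    sign = (h >> 15) & 1
    exp = (h >> 10) & 0x1F
    frac = h & 0x3FF
    if exp == 0x1F:                      # infinity / NaN
        bits32 = (sign << 31) | (0xFF << 23) | (frac << 13)
    elif exp == 0:                       # zero / subnormal
        if frac == 0:
            bits32 = sign << 31
        else:
            # Normalize the subnormal: shift until the implicit bit appears.
            while not (frac & 0x400):
                frac <<= 1
                exp -= 1
            frac &= 0x3FF
            bits32 = (sign << 31) | ((exp + 1 + 127 - 15) << 23) | (frac << 13)
    else:                                # normal number: rebias 15 -> 127
        bits32 = (sign << 31) | ((exp + 127 - 15) << 23) | (frac << 13)
    return struct.unpack("<f", struct.pack("<I", bits32))[0]

print(half_to_float(0x3C00))  # 1.0  (half-precision encoding of one)
```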

    TDK
    Super User
    February 26, 2022

    Weird of them to support that instruction and yet not support any add/subtract/mult/divide instructions on FLOAT16.

    https://developer.arm.com/documentation/dui0646/a/The-Cortex-M7-Instruction-Set/Multiply-and-divide-instructions
