Explorer
May 22, 2017
Question

How to use half-precision floating-point variables

  • 6 replies
  • 6287 views
Posted on May 22, 2017 at 12:26

Hello,

I would like to use half-precision floating-point variables in my project, but I don't know how. I'm working on a NUCLEO-L073 board that embeds a Cortex-M0 STM32L073 MCU, which does not have an FPU. I'm using the SW4STM32 Eclipse environment. I saw that gcc offers some options:

https://gcc.gnu.org/onlinedocs/gcc-4.5.1/gcc/Half_002dPrecision.html

 

but I don't know how to set the flags or use the specific libraries to activate this feature.

If you have any ideas, please feel free to share your thoughts about my issue.

Thank you very much.

Best regards,

Aurélien

    This topic has been closed for replies.

    6 replies

    Super User
    May 22, 2017
    Posted on May 22, 2017 at 12:36

    The gcc link you've given above says, among other things:

    "The __fp16 type is a storage format only. For purposes of arithmetic and other operations, __fp16 values in C or C++ expressions are automatically promoted to float."

    Are you sure you want to use this? Tell us more about your project requirements, if you want to discuss this further.

    JW

    Visitor II
    May 22, 2017
    Posted on May 22, 2017 at 12:55

    You can surely use float if you want or need it.

    I expect the IDE to automatically select the proper emulation library and build options ('soft').

    As Jan mentioned, this will have a significant impact on run-time and code size.

    Explorer
    May 24, 2017
    Posted on May 24, 2017 at 14:22

    Hi,

    Thanks for your replies. Actually, I intend to transmit data over the air (Sigfox/LoRa ...), but I'm limited in payload size, and to save bytes I wanted to use the half-precision float format if possible.

    Aurélien
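If the only goal is a compact over-the-air representation, the 16-bit encoding can be done by hand, without any compiler support for __fp16: pack the float into an IEEE 754 binary16 bit pattern before transmission and expand it on the receiving side. A minimal sketch in portable C (function names are illustrative; truncating rounding, and values too small for a normal half are flushed to zero):

```c
#include <stdint.h>
#include <string.h>

/* Pack a 32-bit float into an IEEE 754 binary16 bit pattern.
 * Minimal sketch: truncates the mantissa (no rounding) and flushes
 * results too small for a normal half to zero. */
static uint16_t float_to_half(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);              /* type-pun via memcpy */

    uint16_t sign = (uint16_t)((bits >> 16) & 0x8000u);
    int32_t  exp  = (int32_t)((bits >> 23) & 0xFFu) - 127 + 15; /* re-bias */
    uint16_t mant = (uint16_t)((bits >> 13) & 0x3FFu); /* top 10 mantissa bits */

    if (exp >= 31)                               /* overflow, Inf or NaN */
        return sign | 0x7C00u | ((bits & 0x7FFFFFu) ? 0x200u : 0u);
    if (exp <= 0)                                /* too small: flush to 0 */
        return sign;
    return sign | (uint16_t)(exp << 10) | mant;
}

/* Expand a binary16 bit pattern back to a 32-bit float. */
static float half_to_float(uint16_t h)
{
    uint32_t sign = (uint32_t)(h & 0x8000u) << 16;
    uint32_t exp  = (h >> 10) & 0x1Fu;
    uint32_t mant = h & 0x3FFu;
    uint32_t bits;

    if (exp == 31)       bits = sign | 0x7F800000u | (mant << 13); /* Inf/NaN */
    else if (exp == 0)   bits = sign;            /* zero (subnormals flushed) */
    else                 bits = sign | ((exp - 15 + 127) << 23) | (mant << 13);

    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}
```

The uint16_t goes out as two bytes and the receiver reverses the transform. Note the costs: a 10-bit mantissa and a maximum magnitude of about 65504, which is part of why the later replies steer toward scaled integers on an FPU-less M0.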

    Visitor II
    May 24, 2017
    Posted on May 24, 2017 at 14:31

    There seem to be no convincing arguments for using float. Using even one float variable or constant will pull in a bunch of supporting conversion and calculation routines, hogging up several kBytes.

    If it's just for data transfer, integer data types, scaled math and ad-hoc conversion routines can do the job in much less space.

    Graduate II
    May 24, 2017
    Posted on May 24, 2017 at 16:16

    If you understand the range of the numbers being used, then scaling into a suitably sized integer is how this gets done.

    Even where something might fit in a 32-bit float, using an integer can give more precision if the range is known/constrained.

    Using 16- or 32-bit floating point is a trap for the unwary, and they usually fall into it.
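The scaled-integer approach described above can be sketched as follows, assuming a hypothetical temperature reading with an agreed range of -40 to +85 degC, scaled by 100 into an int16_t (two bytes on the wire, 0.01 degC resolution; names are illustrative):

```c
#include <stdint.h>

/* Encode a temperature with a known range of -40.0 .. +85.0 degC as a
 * scaled signed 16-bit integer (centidegrees): 2 bytes, 0.01 degC steps. */
static int16_t temp_to_wire(float celsius)
{
    if (celsius > 85.0f)  celsius = 85.0f;       /* clamp to agreed range */
    if (celsius < -40.0f) celsius = -40.0f;
    /* scale by 100 and round to the nearest integer */
    return (int16_t)(celsius * 100.0f + (celsius >= 0.0f ? 0.5f : -0.5f));
}

/* Receiver side: undo the scaling. */
static float wire_to_temp(int16_t raw)
{
    return (float)raw / 100.0f;
}
```

If the sensor already delivers an integer raw value and the receiving end also works in centidegrees, the float math disappears from the MCU entirely, taking the soft-float support code with it.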

    Explorer
    May 24, 2017
    Posted on May 24, 2017 at 16:30

    Yes, I think you're right. I will use scaled integers instead of floats; it will be easier and more suitable for my chip and my project.

    Thank you very much,

    BR,

    Aurélien

    Graduate
    February 23, 2024

    Hi

    I am also stuck in the same situation. I want to implement a 16-bit half-precision datatype for an STM32 MCU. I have searched the web for a procedure, but none of what I found was convincing. I did find information that the Clang compiler can be used, but I don't know how that can be done in STM32CubeIDE.

    Did you get your job done?

    Please let us know if you found a solution.

    Thanks

    Super User
    February 23, 2024

    Hi,

    You can set it in the IDE: menu -> Project -> Properties ...

    AScha3_0-1708696787970.png

    Set 'software' ... to use the floating-point library here.

    Graduate
    May 31, 2024

    Sorry for the late response. I got stuck on other problems not related to this.

    But I still cannot resolve this problem of using float16. I have the same settings as you suggested. I have attached a screenshot of the window.

    float16Setting.png

    I tried to declare a variable as __fp16, but it does not work. I get the following error:

    testVariable__fp16.png

     

    Am I missing something: a library, a compiler setting, or something else?

     

    I need this setting to enable a custom deep learning model on STM32 MCUs, to save memory for the weights. Any help will be appreciated and very helpful.

    Thank you 

    deepak kumar

    Super User
    February 23, 2024

    > I wanted to implement 16bit half precision

    Why?

    JW