Explorer
May 22, 2024
Solved

STM32H7 ADC calibration, I don't understand

  • 1 reply
  • 2376 views

Hi all,

I am experimenting with H7 ADC (after having tried the G4 ADC).

I see in the HAL that the calibration function, HAL_ADCEx_Calibration_Start(), takes one more parameter on the H7 than on the G4: one must specify ADC_CALIB_OFFSET or ADC_CALIB_OFFSET_LINEARITY.

I am confused, and reading the datasheet does not help me. I can't understand the difference between the two methods.

Should I use ADC_CALIB_OFFSET or ADC_CALIB_OFFSET_LINEARITY when calling HAL_ADCEx_Calibration_Start()?

Or should I call the calibration *twice*, once for each option?

TY,
linuxfan

    This topic has been closed for replies.
    Best answer by linuxfan


    1 reply

linuxfan (Author, Best answer)
    Explorer
    May 22, 2024

    Never mind. After reading AN5354, "getting-started-with-the-stm32h7-series-mcu-16bit-adc", things are a little clearer.

    The linearity calibration has to be run once (after a reset), while ADC_CALIB_OFFSET must be performed once at startup and possibly again later, when conditions change.
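    In practice that could look like the following sketch. It assumes a CubeMX-generated ADC handle `hadc1` on a single-ended channel; the call order (linearity first, then offset, both before starting conversions) follows my reading of AN5354, so treat it as a starting point rather than a definitive recipe.

    ```c
    /* Sketch: H7 calibration at startup, before HAL_ADC_Start().
       hadc1 is assumed to be an already-initialized ADC handle. */

    /* Linearity calibration: once after each reset. */
    if (HAL_ADCEx_Calibration_Start(&hadc1, ADC_CALIB_OFFSET_LINEARITY,
                                    ADC_SINGLE_ENDED) != HAL_OK) {
        Error_Handler();
    }

    /* Offset calibration: once at startup, and again later if
       operating conditions (e.g. chip temperature) change. */
    if (HAL_ADCEx_Calibration_Start(&hadc1, ADC_CALIB_OFFSET,
                                    ADC_SINGLE_ENDED) != HAL_OK) {
        Error_Handler();
    }
    ```

    Note that calibration must run while the ADC is not converting; the HAL takes care of the required enable/disable sequence.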

     

    Graduate
    September 18, 2025

    I am a beginner, and I'm more confused after seeing your answer: should I call the ADC calibration repeatedly to reduce error?

    I'm working on an STM32H723ZG, using its 16-bit ADC. I perform the ADC calibration at start (the linearity one), and I am getting 1-20 mV of error; how can I bring this under 1 mV? Without calibration the error was in the 50-60 mV range. Your insights would be helpful.

    linuxfan (Author)
    Explorer
    September 19, 2025

    As I said in the previous post, best results are achieved by calling the calibration once for linearity and once for offset. Then, the offset calibration should be repeated when the temperature of the chip changes significantly.

    This is what I have understood, but I could be wrong.

    That said, and since you describe yourself as a beginner, I can add that lowering the error under 1 mV seems difficult. ADCs are complicated and many, many things influence their behaviour. One should take care of the reference voltage, which should be clean and stable; the acquisition time and the impedance of the source; the frequency of the conversions, the range of the input signal, and so on. And noise and temperature changes are not under the control of the user...

    Even with a very stable source (the voltage to read) and a stable temperature, multiple ADC readings will differ. Every application should cope with this in whatever way is most appropriate for it.
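    One common way to cope with reading-to-reading scatter is to average several samples; this reduces random noise but of course not systematic offset. A minimal sketch (the sample values here are synthetic; on the target each one would come from HAL_ADC_GetValue() after a conversion):

    ```c
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Rounded mean of n raw ADC samples, to smooth out random noise. */
    static uint32_t adc_average(const uint16_t *samples, size_t n)
    {
        uint64_t sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += samples[i];
        return (uint32_t)((sum + n / 2) / n);
    }

    int main(void)
    {
        /* Synthetic noisy readings scattered around 2048. */
        uint16_t noisy[8] = {2046, 2050, 2048, 2047, 2049, 2048, 2046, 2050};
        printf("%u\n", (unsigned)adc_average(noisy, 8)); /* prints 2048 */
        return 0;
    }
    ```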

    Perhaps the most significant parameter for an ADC is the ENOB (effective number of bits). Suppose you have an ADC with 12 bits of resolution, an ENOB of 10 (which is quite good), and a measurement range from 0 to 3.3 volts.

    In theory, you would have 4096 different values: 3300 mV divided by 4096 gives about 0.8 mV, i.e. every step in the converted value means about 0.8 mV. But if the effective number of bits (ENOB) is 10, you have in reality only 1024 significant steps in your reading, i.e. 3300 mV divided by 1024, about 3.2 mV per step instead of 0.8. This means that even if you read, say, 32 mV, the real value could be anywhere within roughly plus or minus 1.6 mV of that. This is "uncertainty".
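    The arithmetic above can be checked in a few lines (the 12-bit/ENOB-10/3300 mV numbers are just the example values from this post):

    ```c
    #include <stdio.h>

    int main(void)
    {
        /* Example figures: 12-bit ADC, ENOB of 10, 0..3300 mV range. */
        double full_scale_mv = 3300.0;
        double ideal_step_mv = full_scale_mv / 4096.0; /* 2^12 codes */
        double enob_step_mv  = full_scale_mv / 1024.0; /* 2^10 effective */

        printf("ideal 12-bit step:       %.2f mV\n", ideal_step_mv); /* 0.81 */
        printf("effective (ENOB=10) step: %.2f mV\n", enob_step_mv); /* 3.22 */
        return 0;
    }
    ```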

    Beware, I am not a guru: I hope someone else can confirm what I am writing. But it seems to me that reducing the error to 1 mV with a 12-bit ADC is impossible.