Super User
November 1, 2019
Solved

STM32H7 64MHz HSI is off by 4%!

  • November 1, 2019
  • 8 replies
  • 5219 views

According to the STM32H745xI/G datasheet, the 64MHz HSI is accurate to +/- 0.3 MHz around room temperature, so +/- 0.5%.

During testing, I noticed about 10-20% of my UART characters were getting dropped. I then looked at the signal on a scope and noticed the frequency is off.

After redirecting the system clock to MCO2 (PC9) and measuring on a scope, I discovered the problem:

The HSI RC (64 MHz) on the STM32H745 chip is off by 4%!! So much for the "factory calibration".

Am I missing something here?

Clock initialization (480MHz):

 /** Initializes the CPU, AHB and APB buses clocks */
 RCC_OscInitStruct.OscillatorType = RCC_OSCILLATORTYPE_HSI;
 RCC_OscInitStruct.HSIState = RCC_HSI_DIV1;
 RCC_OscInitStruct.HSICalibrationValue = RCC_HSICALIBRATION_DEFAULT;
 RCC_OscInitStruct.PLL.PLLState = RCC_PLL_ON;
 RCC_OscInitStruct.PLL.PLLSource = RCC_PLLSOURCE_HSI;
 RCC_OscInitStruct.PLL.PLLM = 32;
 RCC_OscInitStruct.PLL.PLLN = 480;
 RCC_OscInitStruct.PLL.PLLP = 2;
 RCC_OscInitStruct.PLL.PLLQ = 2;
 RCC_OscInitStruct.PLL.PLLR = 2;
 RCC_OscInitStruct.PLL.PLLRGE = RCC_PLL1VCIRANGE_1;
 RCC_OscInitStruct.PLL.PLLVCOSEL = RCC_PLL1VCOWIDE;
 RCC_OscInitStruct.PLL.PLLFRACN = 0;
 if (HAL_RCC_OscConfig(&RCC_OscInitStruct) != HAL_OK) {
   Error_Handler();
 }

 /** Initializes the CPU, AHB and APB buses clocks */
 RCC_ClkInitStruct.ClockType = RCC_CLOCKTYPE_HCLK | RCC_CLOCKTYPE_SYSCLK
                             | RCC_CLOCKTYPE_PCLK1 | RCC_CLOCKTYPE_PCLK2
                             | RCC_CLOCKTYPE_D3PCLK1 | RCC_CLOCKTYPE_D1PCLK1;
 RCC_ClkInitStruct.SYSCLKSource = RCC_SYSCLKSOURCE_PLLCLK;
 RCC_ClkInitStruct.SYSCLKDivider = RCC_SYSCLK_DIV1;
 RCC_ClkInitStruct.AHBCLKDivider = RCC_HCLK_DIV2;
 RCC_ClkInitStruct.APB3CLKDivider = RCC_APB3_DIV2;
 RCC_ClkInitStruct.APB1CLKDivider = RCC_APB1_DIV2;
 RCC_ClkInitStruct.APB2CLKDivider = RCC_APB2_DIV2;
 RCC_ClkInitStruct.APB4CLKDivider = RCC_APB4_DIV2;

 if (HAL_RCC_ClockConfig(&RCC_ClkInitStruct, FLASH_LATENCY_4) != HAL_OK) {
   Error_Handler();
 }

MCO initialization:

HAL_RCC_MCOConfig(RCC_MCO2, RCC_MCO2SOURCE_SYSCLK, RCC_MCODIV_10);

Measured frequency is 45.99 MHz, which is 4.2% below the expected 48 MHz (480 MHz SYSCLK divided by 10).


If I change to 400MHz, the result is the same. -4.2% from what it should be.
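As an aside (not part of the original measurement setup): to rule out the PLL entirely, the raw 64 MHz HSI can also be routed straight to MCO1 (PA8) with a standard HAL call; the divider choice here is arbitrary:

```c
/* Output HSI directly on MCO1 (PA8), bypassing the PLL.
 * With DIV4, a correctly trimmed HSI should read 16 MHz on the scope.
 * HAL_RCC_MCOConfig also configures the GPIO alternate function. */
HAL_RCC_MCOConfig(RCC_MCO1, RCC_MCO1SOURCE_HSI, RCC_MCODIV_4);
```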

    This topic has been closed for replies.
    Best answer by TDK


    8 replies

    TDKAuthor
    Super User
    November 1, 2019

    Yep, that's the problem. CubeMX sets the HSI trim (HSITRIM) to 0x20, which differs from its reset value of 0x40. After changing it to 0x40, the clock is now at 48.25 MHz, an error of +0.5%.

    Yet another instance of the hardware being fine but CubeMX producing a bug. I had thought at least the power initialization in CubeMX would be robust.

    My chip revision is V (REV_ID = 0x2003).

    This is the offending definition:

    #define RCC_HSICALIBRATION_DEFAULT (0x20U) /* Default HSI calibration trimming value */

    Visitor II
    June 19, 2020

    > I had thought at least the power initialization in CubeMX would be robust.

    From the same team that can't even get toggling pins right?

    Graduate II
    November 2, 2019

    > I had thought at least the power initialization in CubeMX would be robust.

    Isn't it "a bit" naive to expect that the same code monkeys, who make a limited, inflexible driver architecture and bloated code full of bugs, where not a single component is of reasonable quality, could make GUI-generated code robust?

    Technical Moderator
    November 18, 2019

    Hello,

    RCC_HSICALIBRATION_DEFAULT should be revision dependent.

    This is reported internally to be fixed by our development team.

    -Amel

    PS: the problem is not on STM32CubeMX generated code, but on the header file stm32h7xx_hal_rcc.h which is copied as is from the STM32CubeH7 package.

    Visitor II
    June 18, 2020

    The issue is that RCC_HSICALIBRATION_DEFAULT is hardcoded as 0x40 in STM32Cube_FW_H7_V1.6.0 and STM32Cube_FW_H7_V1.7.0, but was 0x20 in STM32Cube_FW_H7_V1.5.0. The correct trim value is dependent on the chip revision.

    Here is a fix that works for all chip revisions. Insert it in a user code area after SystemClock_Config() is called, so it survives CubeMX's code regeneration.

    /* USER CODE BEGIN SysInit */
    if (HAL_GetREVID() <= REV_ID_Y)
    {
      /* Default HSI calibration trimming value, for STM32H7 rev.Y */
      __HAL_RCC_HSI_CALIBRATIONVALUE_ADJUST(0x20U);
    }
    /* USER CODE END SysInit */

    Graduate II
    June 19, 2020

    > RCC_HSICALIBRATION_DEFAULT should be revision dependent.

    > The issue is that RCC_HSICALIBRATION_DEFAULT is hardcoded

    https://github.com/STMicroelectronics/STM32CubeH7/blob/79196b09acfb720589f58e93ccf956401b18a191/Drivers/STM32H7xx_HAL_Driver/Inc/stm32h7xx_hal_rcc.h#L214

    Am I on a different internet? =)

    > the problem is not on STM32CubeMX generated code

    But the problem with the HAL code is that it changes this value at all. How do you expect a calibrated value to be placed in source code and compiled? Has anyone at ST with a brain ever thought about it for even a minute?

    TDKAuthor
    Super User
    June 19, 2020

    This question is quite old. Note the post date. The library has no doubt been updated since then.
    Super User
    June 22, 2020

    In the last post of the related thread linked above, https://community.st.com/s/question/0D50X0000B41tlASQQ/stm32h743-hsi-frequency-waaaayyy-off, it's reported that CubeMX generates the calibration-value-changing code (also with an incorrect parameter) for the 'L452, so this part of the problem (that CubeMX generates the call at all) may have a wider scope than just the 'H743.

    I don't have CubeMX, but maybe this is related to some tickbox being ticked inadvertently?

    JW

    TDKAuthorAnswer
    Super User
    June 22, 2020

    There's a bit of misunderstanding/misinformation going on here. The hard-coded calibration value cannot be overwritten. What __HAL_RCC_HSI_CALIBRATIONVALUE_ADJUST and similar does is adjust HSITRIM, which indirectly adjusts HSICAL. The original HSICAL can always be reset by restoring the default on-reset value of HSITRIM, which, unfortunately, differs depending on chip rev.

    ST's current code has the following:

    #if defined(RCC_HSICFGR_HSITRIM_6)
    #define RCC_HSICALIBRATION_DEFAULT (0x40U) /* Default HSI calibration trimming value, for STM32H7 rev.V and above */
    #else
    #define RCC_HSICALIBRATION_DEFAULT (0x20U) /* Default HSI calibration trimming value, for STM32H7 rev.Y */
    #endif

    but since RCC_HSICFGR_HSITRIM_6 is always defined, it always evaluates to 0x40. It's not like there are different include files for different chip revisions.

    The solution proposed by @DWest.1​ works and is similar to what I did, as long as you call it after HAL_RCC_OscConfig. IMO, HAL_RCC_OscConfig shouldn't be touching HSITRIM at all.
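That approach can be generalized so a single binary handles both silicon revisions (a sketch, not verbatim from the thread; HAL_GetREVID and REV_ID_Y come from the STM32H7 HAL headers):

```c
/* Restore the documented on-reset HSITRIM midpoint for the silicon
 * revision detected at runtime: 0x20 on rev.Y, 0x40 on rev.V and later.
 * Call this after HAL_RCC_OscConfig, which overwrites HSITRIM. */
uint32_t trim = (HAL_GetREVID() <= REV_ID_Y) ? 0x20U : 0x40U;
__HAL_RCC_HSI_CALIBRATIONVALUE_ADJUST(trim);
```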

    CubeMX lists the calibration value it is using, so you could either adjust it there if you know your chip rev ahead of time or within code. And because it's listed and there is no option to select "default" or "do not change", it doesn't allow for it to be adjusted based on chip revision. Consistent with the code, but not super helpful.

    Presumably, as the old chip revision becomes less common, this will be less of an issue.

    Technical Moderator
    June 25, 2020

    > but since RCC_HSICFGR_HSITRIM_6 is always defined, it always evaluates to 0x40. It's not like there are different include files for different chip revisions.

    > The solution proposed by @DWest.1 (Community Member) works and is similar to what I did, as long as you call it after HAL_RCC_OscConfig. IMO, HAL_RCC_OscConfig shouldn't be touching HSITRIM at all.

    ==> This is reported again to our development team.

    Visitor II
    October 7, 2021

    I have the same problem on H73x, using CubeMX v6.3.0, and MCU Package 1.9.0.

    Cube sets the HSI calibration value to 32 by default, with an allowed maximum of 63, when it should be 64 with a maximum of 127. This causes a miscalibrated HSI clock, which affects every peripheral that uses it.

    TDKAuthor
    Super User
    October 7, 2021

    It's fixed in recent CubeMX versions for the H7 family. Ensure the correct hardware revision on your chip is selected in CubeMX.

    https://github.com/STMicroelectronics/STM32CubeH7/blob/master/Drivers/STM32H7xx_HAL_Driver/Inc/stm32h7xx_hal_rcc.h#L7172

    Visitor II
    October 7, 2021

    Hi again,

    It sounds like the stm32h7xx_hal_rcc.h solution checked in for STM32Cube_FW_H7_V1.8.0 expects the developer to know the chip revision prior to compile time?

    That sounds awful for a product's lifecycle. I don't expect the calibration default value to change with new chip revisions, but I am concerned that production will get a batch of chips with an unknown revision. Load up the approved binary and experience timing failures.