Associate II
January 7, 2026
Solved

Calibrating the STM32's real-time clock (RTC)

  • 9 replies
  • 1283 views

I have an STM32L011F4P6 with an RTC. It runs at 3.3 V and 16 MHz off the internal HSI RC oscillator. The RTC is driven by a ±20 ppm 32.768 kHz crystal. Everything is at room temperature (23 to 25 °C). After calibration, I am losing several seconds per day! What's wrong?

Here's more information:
I have an auto-calibration routine that outputs the RTC's 1 Hz signal on PA2 and captures the rising edges with TIM2 channel 3, running at 16 MHz. I capture the start time (1st edge), interrupt on overflow and add 65536 to a 32-bit variable, and capture the final (33rd) edge to get the total number of 16 MHz clocks in 32 seconds according to the RTC. At the same time, I have a GPS module with a 1 Hz output fed into PA10. TIM21 channel 1 is set up identically to TIM2, to capture the total number of 16 MHz clocks in 32 seconds according to the GPS.

I take the two totals and find the difference, not in ppm but in calibration units (0.954 ppm each).
I get a reasonable value of 17 (about 16 ppm), and I adjust the RTC calibration accordingly.
I don't recall whether it read +17 or -17, but the magnitude seems reasonable.
This value is stored in EEPROM and loaded into the RTC on power-up as well.

 f_TempFloat = (float)(i32_RTCclockTotalCount - i32_GPSclockTotalCount);
 f_TempFloat /= (float)i32_GPSclockTotalCount;
 f_TempFloat *= 1048576; // fractional error -> calibration units (1/2^20)
 if (f_TempFloat > 0)
 {
     i16_FactoryCalibrationValue = (int16_t)(f_TempFloat + 0.5f);
 }
 else
 {
     i16_FactoryCalibrationValue = (int16_t)(f_TempFloat - 0.5f);
 }
 if (i16_FactoryCalibrationValue > 511)
 {
     i16_FactoryCalibrationValue = 511;
 }
 else if (i16_FactoryCalibrationValue < -511)
 {
     i16_FactoryCalibrationValue = -511;
 }
 if (i16_FactoryCalibrationValue > 0)
 {
     u32_SmoothCalibPlusPulses = RTC_CALR_CALP;
     u32_SmoothCalibMinusPulsesValue = 512 - i16_FactoryCalibrationValue;
 }
 else
 {
     u32_SmoothCalibPlusPulses = 0x00000000u;
     u32_SmoothCalibMinusPulsesValue = 0 - i16_FactoryCalibrationValue;
 }
 // Disable write protection
 RTC->WPR = 0xCA;
 RTC->WPR = 0x53;
 RTC->CALR = (uint32_t)(u32_SmoothCalibPeriod | u32_SmoothCalibPlusPulses | u32_SmoothCalibMinusPulsesValue);
 // Enable write protection
 RTC->WPR = 0xFE;
 RTC->WPR = 0x64;

 
The clock is intended to operate outside, though so far I'm only testing at room temperature.
I am using the onboard temperature sensor to detect changes in temperature and adjust the calibration accordingly.  The GPS calibration routine will always be done at room temperature, immediately after power up, so I calibrate the raw reading at that time to 25C.

During normal operation, the temperature sensor is read every 5 minutes and a temperature calibration factor is calculated. This factor is added to i16_FactoryCalibrationValue and the RTC calibration is updated.

#define XTAL_K_VALUE 0.034f // Frequency/temperature curve coefficient (ppm/degC^2)
#define XTAL_T_VALUE 25.0f // Turnover Temperature value (C)
#define PPM_VALUE 0.953674f // Adjustment granularity

int16_t Get_Current_Temperature(void)
{
    uint16_t u16_temp;

    ADC->CCR |= ADC_CCR_TSEN; // Sensor startup time <= 10 us
    ADC1->CHSELR = LL_ADC_CHANNEL_TEMPSENSOR;

    // Busy-wait for the sensor startup time
    uint32_t waitLoopIndex = (10 * (SystemCoreClock / 1000000U)); // ~10 us of CPU cycles
    while (waitLoopIndex != 0U)
    {
        waitLoopIndex--;
    }

    // Wait for the end-of-conversion flag, with a ~100 us timeout
    waitLoopIndex = (100 * (SystemCoreClock / 1000000U));
    ADC1->ISR = ADC_ISR_EOC; // Clear EOC by writing 1 (plain write, not |=, so other rc_w1 flags stay untouched)
    ADC1->CR |= ADC_CR_ADSTART;
    while (((ADC1->ISR & ADC_ISR_EOC) != ADC_ISR_EOC) && (waitLoopIndex != 0U))
    {
        waitLoopIndex--;
    }
    ADC->CCR &= ~ADC_CCR_TSEN; // Turn off the temp sensor

    if (waitLoopIndex)
    {
        // EOC is also cleared by reading ADC_DR below
        ReadEEPROM(EEPROM_TEMPSENSOR_CAL1_ADDR, &u16_temp);

        return (((int32_t)((ADC1->DR * ((uint32_t)(f_voltageRef * 1000.0f))) / TEMPSENSOR_CAL_VREFANALOG)
                 - (int32_t)*TEMPSENSOR_CAL1_ADDR)
                * (int32_t)(TEMPSENSOR_CAL2_TEMP - TEMPSENSOR_CAL1_TEMP)
                / (int32_t)((int32_t)*TEMPSENSOR_CAL2_ADDR - (int32_t)*TEMPSENSOR_CAL1_ADDR)
                + TEMPSENSOR_CAL1_TEMP);
    }
    else
    {
        return -100; // Timeout: ignore this value
    }
}


What is causing me to lose time?
I have already checked and confirmed that the temp sensor reads within 19-29C (24+/-5 C rounds to an adjustment factor of 0).
I have already checked that the adjustment is the correct direction by using a frequency generator.  A slower external clock results in a slower calibrated RTC and vice versa.

Is it the frequent writes to the calibration register that are causing issues?

Best answer by AScha.3


9 replies

TDK
Super User
January 7, 2026

Do you take into account the current CALR values? I don't see that done in the code. Perhaps log how CALR is changing to UART on each update and see what's happening. It should be relatively constant. You should be adjusting values rather than writing new ones and ignoring what's currently in there.

Didn't quite follow the math you described. Why are you adding 65536 instead of just taking the difference? Otherwise the procedure seems sound.

If you feel a post has answered your question, please click "Accept as Solution".
waclawek.jan
Super User
January 8, 2026

> Do you take into account the current CALR values?

Are you referring to the fact that RTC_OUT comes from the already CALR-corrected prescaler output?

That would indeed make a huge difference. If it weren't taken into account, it would probably result in alternating real-ppm/close-to-zero values for each next CALR; the long-term effect would be that only roughly half of the real ppm difference gets corrected.

JW

TDK
Super User
January 8, 2026

Yes, precisely this. And it should be obvious if CALR gets logged.

waclawek.jan
Super User
January 8, 2026

What happens if you perform the comparison to GPS just once, write the calculated value, and then leave the RTC running without touching it for the next 24 hours (or whatever time you deem adequate), to see whether it drifts or not?

JW

Associate II
January 8, 2026

In case it's missed: I clear out RTC->CALR prior to GPS calibration, so that the GPS pulses are compared to the unadjusted RTC pulses.

Since the HSI could have jitter and its frequency could change over the 32 second measurement window, I have the PCB wait for the 1st rising edge to turn on the RTC pulse train.  This aligns the two clock edges so any frequency shifting on the HSI affects both rising edges equally, and effectively cancels out.

After starting this thread, I set up two PCBs.  One retains the code as listed above.  The other one has the temperature checks and adjustments removed so the RTC->CALR is only set once on power up from the stored GPS calibration value.  I'm letting it run awhile to see if removing that has any effect.

I know the calibration works by adding or subtracting pulses from the 1,048,576 pulses in 32 seconds. The added/removed pulses are supposed to be spread evenly over those 32 seconds. As far as I know, where the clock is inside that 32-second window is not visible to the user. Do writes to RTC->CALR reset that 32-second window? Could writing to it cause it to skip some of the added/removed pulses, and thus accumulate error?

LCE
Principal II
January 8, 2026

Apart from the CALR issue, just to make sure...

- the RTC crystal is ±20 ppm, which means you might see up to ±1.7 seconds per day

- check the crystal datasheet: ±20 ppm does not mean it's perfect at room temperature, only that it's within this error range -> and then comes the temperature drift...

- are the timers set up correctly? (mind the +/-1 for some registers)

- the L011 can run at max. 32 MHz; maybe some other interrupt is influencing your timing?

 

You probably know all that, but sometimes we forget the simple things.

Associate II
January 8, 2026

Thank you!  I've been lost in the weeds before and missed obvious things.

The ±20 ppm is the base tolerance of the crystal. Any deviation from the ideal load capacitance will add to that error. I used C0G capacitors, but so far I'm removing temperature as a variable by only calibrating and testing at 24 ±1 °C.

I believe the timers are correct, but I can post the timer setup code if there's reason to suspect it.

During calibration, the only interrupts running are TIM2, TIM21 and LPTIM.
The LPTIM interrupt is where all the main code runs.
This might be an obvious overlook, but since I'm using the CCR registers to capture the edge times and compensating for timer overflows, I assume any interrupt latency won't affect the results.

During normal (non-calibration) operation, the device sleeps and the LPTIM interrupt periodically wakes it (about every 50 ms). It reads the RTC and compares the RTC minute with a local variable. If they differ, it moves a motor attached to a minute hand and sets the local variable equal to the RTC minute. There will be some ±25 ms jitter in when the minute hand moves, but that's not critical as long as it averages out over time.

If RTC min % 5 == 0, and it just changed, the temperature compensation code is run.
The temperature compensation code is disabled on one of the two PCBs.

Associate II
January 8, 2026

The PCB without the temperature compensation code came in over 1 minute fast in 24 hours.  This is around 700 ppm off!

I am now running it in debugger (I have no external serial bus or anything) to directly monitor the RTC values.
This will eliminate any issues with the motor driving code and the physical clock mechanism to focus exclusively on the RTC code and calibration.

I'll report back after several hours to see if there's any drift.

waclawek.jan
Super User
January 9, 2026

Isn't the RTC set by mistake to LSI? 

JW

TDK
Super User
January 9, 2026

When you do the calibration, do you detect an error of 700 ppm? Or do the errors come at discrete steps, perhaps when the chip is starting up or when you are doing something with RTC?

It's unlikely that a crystal is off by 700 ppm, so probably something else to blame here.

The LSE has a CSS you can enable to see if it's dropping out. Or you can hook up a logic analyzer to the 1 Hz signal and record it for an hour. Shouldn't be too hard to catch if it's off by 1 minute over 24 hours.

"If you feel a post has answered your question, please click ""Accept as Solution""."
Associate II
January 9, 2026

Good question, but it should be set to LSE.
Any reason not to use "LOW" drive strength?

 

 LL_RCC_LSE_SetDriveCapability(LL_RCC_LSEDRIVE_LOW);
 LL_RCC_LSE_Enable();

 /* Wait till LSE is ready */
 while (LL_RCC_LSE_IsReady() != 1)
 {
 }
 LL_RCC_SetRTCClockSource(LL_RCC_RTC_CLKSOURCE_LSE);
 LL_RCC_EnableRTC();

 


I started the timer last night at 4:57:00 pm with the RTC set to 0:00:00.
(16 hours and 57 minutes behind).

Today at 10:08:00 am (17 hours and 10 minutes later) it read 17:09:58.

Allowing for delays in setting breakpoints as well as startup time, I can see 1-2 seconds of slack.
Even if it were legitimate drift, that's 23 ppm instead of 700+ ppm.

The issue may then be in the motor driving code. I'll let it run a full 24 hours and report back.
Then I'll add the temperature compensation code back in and see what happens.

mbarg.1
Senior III
January 9, 2026

We use NTP, but the same approach should work with a PPS signal.

First, we use the RTC value to get a subsecond (1/1024 s) timestamp; we use the whole date and time, but you could use only the subsecond part. IMPORTANT: we use direct register access (3 words) to get a reference date/time aligned to the subsecond register.

Then we compute the offset from the reference; in your case the reference is a subsecond value of 0. You get a signed error (in the range -512 to +511).

If we are too far off (abs(error) > XmaxDrift, usually 16), we drive the setting to maximum correction according to the error's sign.

Wait.

Once we are in range, we apply small correction steps to stay within ±8 error, a kind of simple IIR fuzzy filter.

Once settled (no error change for some seconds), we relax the test little by little, down to once an hour. We have clocks outside with big temperature changes and the standard Nucleo crystal, and that has proven enough to stay in range.

Hope this helps.

Associate II
January 9, 2026

I appreciate the solution, but unfortunately that won't work for this.

The final product is going to be mass produced and then operate in the field over a specified operating temperature range of -40 to +85C.  It operates without any access to any sort of time reference in the field, like GPS, cell signal, network access etc. The target accuracy is +/-1 minute over 180 days.  That works out to +/-3.858 PPM.

Since it will be mass produced, it would be difficult to spend extended time periods in calibration.  Your procedure would take hours, which we won't have in a production environment.  I will probably have to fight the production manager for the 32 seconds I'm currently requiring.  I wish I could do an extended time calibration.  I already am going to have to rely on the typical temperature drift characteristics of the crystal, even though those specs have tolerance on them as well.

My measurement resolution should be one part in 16 MHz × 32 seconds, i.e. ~2 parts per billion. That should be plenty for a base calibration of ±3.8 ppm, unless I'm not accounting for something. As I said above, it's easy to get lost in the weeds and overlook something obvious.

AScha.3
Best answer
Super User
January 9, 2026

range of -40 to +85C.

+

target accuracy is +/-1 minute over 180 days.  That works out to +/-3.858 PPM.

Forget it. That's not realistic, because:

A standard 32.768 kHz tuning-fork crystal typically drifts by approximately -150 ppm to -180 ppm at the extremes of the -40°C to +85°C range.

And the base tolerance is only ±20 ppm anyway.

For anything close to your expectations you need a TCXO as an external oscillator: LSE bypass.

To maintain high accuracy across this wide temperature range, a temperature-compensated crystal oscillator (TCXO) or an integrated RTC with internal compensation (like the DS3231) is required to stay within ±10 ppm.

btw

I use a MEMS TCXO:

https://www.mouser.de/ProductDetail/SiTime/SiT1552AC-JE-DCC-32.768D?qs=dMZC7um9hO%2FIvYvvHhpVFQ%3D%3D


 
