Visitor II
April 14, 2021
Question

Why does HAL_PWR_EnterSLEEPMode use NOP instead of DSB to "flush instructions"?


I am using an STM32L071CB MCU. It has a Cortex-M0+ core.

I noticed that the implementation of `HAL_PWR_EnterSLEEPMode` in the latest version of the HAL uses a NOP instruction to "flush instructions" before WFI. I don't see how that is guaranteed, though: as I understand it, a NOP just adds one cycle of latency:

```c
/**
  * @brief  Enters Sleep mode.
  * @note   In Sleep mode, all I/O pins keep the same state as in Run mode.
  * @param  Regulator: Specifies the regulator state in SLEEP mode.
  *          This parameter can be one of the following values:
  *            @arg PWR_MAINREGULATOR_ON: SLEEP mode with regulator ON
  *            @arg PWR_LOWPOWERREGULATOR_ON: SLEEP mode with low power regulator ON
  * @param  SLEEPEntry: Specifies if SLEEP mode is entered with WFI or WFE instruction.
  *          When WFI entry is used, tick interrupt have to be disabled if not desired as
  *          the interrupt wake up source.
  *          This parameter can be one of the following values:
  *            @arg PWR_SLEEPENTRY_WFI: enter SLEEP mode with WFI instruction
  *            @arg PWR_SLEEPENTRY_WFE: enter SLEEP mode with WFE instruction
  * @retval None
  */
void HAL_PWR_EnterSLEEPMode(uint32_t Regulator, uint8_t SLEEPEntry)
{
  uint32_t tmpreg = 0U;

  /* Check the parameters */
  assert_param(IS_PWR_REGULATOR(Regulator));
  assert_param(IS_PWR_SLEEP_ENTRY(SLEEPEntry));

  /* Select the regulator state in Sleep mode ------------------------------*/
  tmpreg = PWR->CR;

  /* Clear PDDS and LPDS bits */
  CLEAR_BIT(tmpreg, (PWR_CR_PDDS | PWR_CR_LPSDSR));

  /* Set LPSDSR bit according to PWR_Regulator value */
  SET_BIT(tmpreg, Regulator);

  /* Store the new value */
  PWR->CR = tmpreg;

  /* Clear SLEEPDEEP bit of Cortex System Control Register */
  CLEAR_BIT(SCB->SCR, SCB_SCR_SLEEPDEEP_Msk);

  /* Select SLEEP mode entry -----------------------------------------------*/
  if (SLEEPEntry == PWR_SLEEPENTRY_WFI)
  {
    /* Request Wait For Interrupt */
    __WFI();
  }
  else
  {
    /* Request Wait For Event */
    __SEV();
    __WFE();
    __WFE();
  }

  /* Additional NOP to ensure all pending instructions are flushed
     before entering low power mode */
  __NOP();
}
```

Why isn't the DSB (Data Synchronization Barrier) instruction used instead? This seems like a better fit for flushing pending instructions. From the ARM docs:

[Attached screenshots: ARM documentation describing the DSB instruction]

So it seems to me that NOP could be "good enough" in many cases, but if a long-running memory operation is queued up before, say, WFI, then DSB would be the "safer" choice. Is that true, and would there be any drawbacks? Why did ST choose to use NOP only?
