Graduate II
April 17, 2025
Question

Should DMA variables be volatile

  • 6 replies
  • 1531 views

I am using DMA in a few places, and I am wondering whether these variables should be volatile. As it is, the compiler has no idea when data will be read from or written into these variables. When that is the case, shouldn't you make the variable volatile? However, I don't think I have seen them declared volatile in the example code. Am I missing something here?

    This topic has been closed for replies.


    Super User
    April 17, 2025

    Which "variables", exactly, are you referring to?

    As you say, variables which get written-to beyond the compiler's view should be qualified as volatile.

    But variables used to control the DMA, or only read-by the DMA - maybe not...

     

    Perhaps provide a concrete example?

    Explorer
    April 17, 2025

    It depends, I would say.
    The buffers themselves don't need it, since the direct access by the DMA is not visible to the compiler; it is simply triggered by hardware.

    When using DMA, I usually configure a transfer-complete (TC) interrupt, in which I set a flag for another routine.
    This flag, which is basically a stand-in for the DMA buffer, must be volatile so that it is not optimized out.

    I don't know what Cube code does in this regard, I don't use it.
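    The transfer-complete flag pattern described above can be sketched as follows; the names (`dma_tc_flag`, `DMA_TC_Handler`, `poll_for_transfer`) are illustrative, not taken from any particular HAL:

```c
#include <stdbool.h>

/* Set by the DMA transfer-complete interrupt, read by the main loop.
   'volatile' tells the compiler every access must really happen.     */
static volatile bool dma_tc_flag = false;

/* Hypothetical transfer-complete ISR (name is illustrative). */
void DMA_TC_Handler(void)
{
    dma_tc_flag = true;   /* signal the main loop */
}

/* Main-loop consumer: without 'volatile', the compiler could hoist
   the flag read out of the loop and spin forever on a stale value.  */
void poll_for_transfer(void)
{
    while (!dma_tc_flag) {
        /* wait (or sleep) until the ISR sets the flag */
    }
    dma_tc_flag = false;  /* acknowledge; the DMA buffer is now valid */
}
```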

    Super User
    April 17, 2025

    @Ozone,

    > The buffers themselves don't need it, since the direct access by the DMA is not visible to the compiler; it is simply triggered by hardware.

    I beg to differ. It is the data in the buffers which are "evaluated" (written or read, for M2P or P2M transfers respectively) by the processor and thus by the compiler (unless the DMA transfers directly from peripheral to peripheral, which IMO is exceedingly rare). If the compiler does not see any other part of the program consume the data written to the buffer (for M2P), or does not see any other part of the program write data to the buffer (for P2M), then it can simply omit the write to, or read from, the buffer.
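    As a sketch of the M2P case (all names here are hypothetical):

```c
#include <stdint.h>

/* Without 'volatile', the compiler sees no reader of tx_buf anywhere in
   the program, so at high optimisation it may delete the stores.       */
static uint8_t tx_buf[4];

/* With 'volatile', every store must be performed, because the object
   may be read outside the compiler's view (here: by the DMA engine).  */
static volatile uint8_t tx_rdy_buf[4];

void prepare_frames(void)
{
    for (int i = 0; i < 4; i++) {
        tx_buf[i]     = (uint8_t)i;  /* candidate for elimination */
        tx_rdy_buf[i] = (uint8_t)i;  /* must reach memory         */
    }
    /* ...start the M2P DMA transfer here... */
}
```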

    This flag

    What flag? There's no link, just an underscore.

    JW

    Explorer
    April 17, 2025

    >> This flag
    > What flag? There's no link just an underscore.

    The "flag" variable I use from within the interrupt routine.
    Not to be confused with a flag bit in some peripheral register.


    I was talking about the case where the compiler only sees the buffer values being consumed, but not the initial write access (which is performed by the DMA). I use this method relatively often for asynchronously processing input data, and I have never had to mark the DMA buffer as "volatile" for it to work properly.

    Carl_G (Author)
    Graduate II
    April 17, 2025

    That access to the buffer is the issue. For example, the compiler or MCU can see when you put data into the buffer, but has no idea if, when, or where that data is coming out. And vice versa when the data flows in the opposite direction.

    Super User
    April 17, 2025

    So mark that as volatile.

    I don't think compilers tend to cache entire buffers - but qualifying it as volatile shouldn't hurt...

     

    As buffers tend to be accessed via pointers, you'd have to be careful about where you put the 'volatile' keyword(s):

    volatile char *vpch;            // pointer to a volatile char
    char *volatile pchv;            // volatile pointer to a char
    volatile char *volatile vpchv;  // volatile pointer to a volatile char

     

    Explorer
    April 17, 2025

    > I don't think compilers tend to cache entire buffers - ...

    Although I'd like to note that I did not consider the data caches of the F7 and H7 cores in my remarks.
    Another aspect to possibly consider.

    Carl_G (Author)
    Graduate II
    April 17, 2025

    To be pedantic, the compiler isn't caching; it's just delaying reads and writes. The MCU will do multilevel caching. Both are potential problems. Everything is working for me now, but it needs to be right: you never know when conditions will be right to enable some new optimization on any given compiler run. And it doesn't need to affect the whole buffer; just one byte will cause a failure.

    Super User
    April 17, 2025

    @Carl_G wrote:

    To be pedantic the compiler isn't caching. 


    You're right - that was poor terminology on my part.

     

    PS:

    Or was it?

    A commonly-given example is where a piece of code makes multiple reads of a variable:

    Instead of actually reading from memory each time, the compiler might optimise by reading once into a register, and using that.

    Which is, surely, a form of "caching" ?

     

    But, indeed, distinct from hardware caching.
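    The multiple-reads case can be sketched as follows (the names are illustrative):

```c
#include <stdint.h>

uint32_t status;            /* plain: repeated reads may be folded   */
volatile uint32_t vstatus;  /* volatile: each read must hit memory   */

uint32_t sum_two_reads(void)
{
    /* The compiler may load 'status' once into a register and use
       that value twice - the "caching" being discussed here.        */
    return status + status;
}

uint32_t sum_two_volatile_reads(void)
{
    /* With 'volatile', two separate loads must be performed.        */
    return vstatus + vstatus;
}
```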

    Super User
    April 17, 2025

    > Instead of actually reading from memory each time, the compiler might optimise by reading once into a register, and using that.

    IIRC this is called register allocation or register optimization. Not caching.

     

    Super User
    April 17, 2025

    If the data cache is in use, DMA buffers should be cleaned before a transmit and invalidated before reading received data. Volatile doesn't help here.

    Variables should be marked volatile if the value can change while a function is executing and the function needs the updated value: for example, a flag which you set in an interrupt and then check and process in the main loop. (This is a little more complicated in practice, since link-time optimization can cause functions to be combined, so to speak.)
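    The clean-before-transmit / invalidate-before-read sequence might look like this on a Cortex-M7 part; `SCB_CleanDCache_by_Addr()` and `SCB_InvalidateDCache_by_Addr()` come from CMSIS (core_cm7.h), and no-op stubs are provided here only so the sketch stays self-contained off-target:

```c
#include <stdint.h>
#include <string.h>

/* On a real Cortex-M7 target these are CMSIS functions; the stubs
   below only stand in when building off-target.                     */
#ifndef __CORTEX_M
static void SCB_CleanDCache_by_Addr(uint32_t *addr, int32_t dsize)
{ (void)addr; (void)dsize; }
static void SCB_InvalidateDCache_by_Addr(uint32_t *addr, int32_t dsize)
{ (void)addr; (void)dsize; }
#endif

/* 32-byte aligned and a multiple of the 32-byte cache-line size. */
static uint8_t dma_buf[64] __attribute__((aligned(32)));

void dma_send(const uint8_t *data, size_t len)
{
    memcpy(dma_buf, data, len);
    /* Clean: push CPU-written bytes out of D-cache so the DMA
       engine reads the real data from memory.                       */
    SCB_CleanDCache_by_Addr((uint32_t *)dma_buf, (int32_t)sizeof dma_buf);
    /* ...start the M2P DMA transfer from dma_buf here...            */
}

void dma_receive_done(void)
{
    /* Invalidate: drop stale cache lines so CPU reads see what
       the DMA actually wrote to memory.                             */
    SCB_InvalidateDCache_by_Addr((uint32_t *)dma_buf, (int32_t)sizeof dma_buf);
    /* ...now safe to parse dma_buf...                               */
}
```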

     

    Graduate II
    April 17, 2025

    If a buffer is initialised (which is always the case for C, if it's not an automatic), then the optimizer can make assumptions about the value it holds. If that buffer is used to hold (say) an SPI command sequence and is also used to receive a response (replacing the original content), then the optimizer may assume the value has not changed if the receiving transfer takes place via DMA:

    void f1( void )
    {
        uint8_t buffer = 0xa5; // Command value

        spiDMASendReceive( &buffer );

        switch ( buffer )
        {
        case 0x00: action00(); break; // Eliminated by optimization
        case 0x01: action01(); break; // Eliminated by optimization
        default:   break;             // Only feasible value without 'volatile'
        }
    }

    You can see the effect here, where no code is generated for f1(), but it is for f2() as 'volatile' qualification is added.

    And yes, caching (hardware) and optimization (compile time) are not the same, but both can appear to "break" the code.

    Edited to make it clear that I was referring to hardware caching, not run time caching of calculated values.
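    The f2() referred to above is not shown; a possible counterpart, assuming the same hypothetical spiDMASendReceive() and action functions (stubbed here so the sketch is self-contained), would differ only in the qualifier:

```c
#include <stdint.h>

static int last_action = -1;
static void action00(void) { last_action = 0; }
static void action01(void) { last_action = 1; }

/* Stub standing in for the DMA transfer in the original example:
   overwrites the command byte with a simulated 0x01 response.      */
static void spiDMASendReceive(volatile uint8_t *buf) { *buf = 0x01; }

void f2( void )
{
    volatile uint8_t buffer = 0xa5; /* Command value */

    spiDMASendReceive( &buffer );

    switch ( buffer )               /* 'volatile': must re-read    */
    {
    case 0x00: action00(); break;   /* retained by the optimizer   */
    case 0x01: action01(); break;   /* retained by the optimizer   */
    default:   break;
    }
}
```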

    Super User
    April 17, 2025

    @CTapp.1 wrote:

    And yes, caching (runtime) and optimization (compile time) are not the same


    But the optimisation might be to use "caching" (unlikely in the case of a buffer, but quite possible for a simple int, say).

    See the PS to my previous post.

    Graduate II
    April 17, 2025

    That's where we get into terminology and definitions! I was only considering hardware, so will update my reply to reflect that ;)

    The language standard doesn't help either: it does mention "caching" (generally meaning that a computed value may be stored in a static buffer* so that the result can be reused), but has nothing to say about optimization other than that it is irrelevant (the program must behave the same).

    * This is generally a "design time" optimization made during compiler implementation, whereas the elimination of a read would be made during compilation.