Visitor II
March 10, 2025
Question

STM32N6 VENC at higher resolutions

  • 12 replies
  • 2768 views

Hi,

We have an STM32N6570-DK and are testing camera input, video encoding, and writing to an SD card.

We have not been able to make the "VENC_SDCard_Appli" example work, the SD card just contains zeros.

The "VENC_SDCard_ThreadX" example does work however, resulting in an 800x480 video.

Are there any examples on encoding at 1080p resolution? Changing the encoder parameters and level results in memory allocation failures in EncAsicMemAlloc_V2.

Many thanks

 

    This topic has been closed for replies.


    ST Employee
    January 6, 2026
     
     
You have to look at the latest STM32CubeN6 (v1.3.0).
The configurability of the VENC_RTSP_Server and VENC_USB applications has been enhanced:
=> You can change the encoding settings in the venc_h264_config.c file.
     
     
#if defined(FULL_1080P_SLICE)
  /* ... 1080p slice-mode encoder settings ... */
#elif defined(FULL_1080P_FRAME)
  /* ... 1080p frame-mode encoder settings ... */
#endif
     
    For information about encoder allocation hooks, refer to the integration guide, which describes them in detail:
    ./Middlewares/Third_Party/VideoEncoder/doc/Hantro.VC8000NanoE.V50x.SW.Integration.Guide-v1.02-20200708.pdf (see EWLMalloc, EWLMallocRefFrm, and EWLMallocLinear).
    One important point is that EWLMallocLinear returns contiguous, aligned, and uncached memory.
     
    In the examples above, the memory hooks use ThreadX memory pools.
     
    There are three main buffers:
     
    uint8_t input_frame[...]: raw frame as captured by the camera or DCMIPP
    uint8_t ewl_pool[...]: H.264 encoder internal memory
    uint8_t output_block_buffer[...]: encoded bitstream
     
    Hope this helps,
    Daniel
    Graduate II
    January 6, 2026

    Hi @DanielS 

Thank you very much. Because the ST example project using the Neural-ART accelerator (STM32N6-GettingStarted-ObjectDetection) is bare metal rather than ThreadX, we had been focusing on integrating the bare-metal VENC_SD example. It's very helpful to know that configuration in the VENC_RTSP_Server and VENC_USB applications is handled differently.

Would it be a good way forward for us to take this configuration handling from VENC_RTSP_Server and VENC_USB and port it over to the STM32N6-GettingStarted-ObjectDetection example?

Does this configuration in the VENC_RTSP_Server and VENC_USB applications also place the reference frame used by the VENC in PSRAM? It wasn't clear to me from the source code how the reference frame is handled in these applications, or whether it is treated as part of the "VENC internal buffer".

    /* Buffer placement ---------------------------------------------------------*/
    /** Location macro for VENC internal buffer */
    #define VENC_BUFFER_LOCATION IN_PSRAM
    /** Location macro for input frame buffer */
    #define INPUT_FRAME_LOCATION IN_PSRAM

    I'd guessed that the reference frame is treated as part of "VENC internal buffer" from this but I wasn't sure:

     refPic = picBuffer->refPic; /* Reference frame store */
     cur_pic = picBuffer->cur_pic; /* Reconstructed picture */

    Many thanks!

    Will

    ST Employee
    January 7, 2026

    Hello @will Robertson,

     

In our ThreadX examples, there is no distinction between EWLMallocRefFrm (the reference frame allocator) and EWLMallocLinear (the allocator for buffers shared between the encoder software and hardware).
Both use the same ThreadX pool allocator, mapped to the same physical memory block: uint8_t ewl_pool[VENC_POOL_SIZE] ALIGN_32 VENC_BUFFER_LOCATION;

    Therefore, when VENC_BUFFER_LOCATION is set to "IN_PSRAM", all memory used by the encoder resides in external PSRAM.

    Regards,
    Daniel

    ST Employee
    January 8, 2026

    Hello @Will_Robertson,

The EWL layer is available in three variants: ThreadX, FreeRTOS, and no OS.
The variant is selected by the EWL_ALLOC_API compile flag.
The memory allocators are also defined as __weak, which allows you to adapt memory placement to your specific requirements.

    Best regards,
    Daniel