Associate II
February 24, 2026
Solved

STM32N6570-DK – LTDC display glitches only when NPU (X-CUBE-AI) is running

  • February 24, 2026
  • 3 replies
  • 218 views

Hello ST community,

I am working on an application on STM32N6570-DK using the camera to detect humans with an AI model (X-CUBE-AI + NPU) and draw bounding boxes on the LCD using LTDC.

I am facing a display corruption / glitch issue that only appears under very specific conditions, and I strongly suspect an AXI / interconnect / NPU side effect, possibly related to an undocumented reset or QoS state.

Project architecture:

Camera -> DCMIPP -> PIPE1 -> BUFFER1 -> LTDC_L1
                 -> PIPE2 -> BUFFER2 -> NPU -> ROI config of DCMIPP

Memory placement:

  • LTDC framebuffer: AXIRAM2

  • AI input/output buffers: AXIRAM5

  • No buffer sharing between LTDC and AI

Problem description:

  • When MX_X_CUBE_AI_Process() is commented out, the camera preview is perfectly stable on the LCD.

  • When MX_X_CUBE_AI_Process() is enabled, continuous display glitches appear (tearing / corrupted lines).

  • The issue happens only when LTDC is fetching the framebuffer while the NPU is running.

  • Even with the camera fully desynchronized (snapshot mode, manual triggering), the issue still happens.

If I start a debug session (SWD):

  • The application runs perfectly

  • No display glitches at all

  • Even after:

    • leaving debug mode

    • software reset

  • The system keeps working until power is removed

After a power cycle, the issue reappears.

 

Questions:

  1. Does the NPU act as a high-priority AXI master that can starve LTDC fetches?

  2. Are there AXI QoS / arbitration / FIFO states that are:

    • modified during debug attach

    • not reset by software reset

    • not configurable from CubeMX / HAL?

  3. Is there a known erratum or recommended initialization sequence when using LTDC + NPU concurrently?

  4. Is there a way to force a full AXI / interconnect reset or re-arbitration from software?

  5. Are there debug-related side effects (AXI flush, FIFO clear, QoS rebalance) documented somewhere?

Thank you in advance.


3 replies

Senior III
February 24, 2026

I've had the same issue with an audio buffer and a frame buffer stored on internal RAM and external PSRAM respectively. Despite accessing them at different times, the images displayed via the LTDC had glitches. In the end, it came down to both the priority and the speed of access to the external RAM.

Are you using an RTOS? If so, I think you need to tweak the priorities of the threads/tasks.

Also, have you checked the AXIRAM configuration? I would try to speed up access to it and see whether performance improves.

lou_v (Author)
Associate II
February 24, 2026

No, I'm coding bare metal.

I will check if I can do something with the AXIRAM config.

lou_v (Author)
Associate II
February 25, 2026

I have enabled LTDC error interrupts and I consistently observe FIFO underrun (FUIF) reported in LTDC ISR2 when the NPU is running in parallel with LTDC.
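For reference, this is roughly how the error interrupts are hooked through the HAL. A minimal sketch: the callback and error-code names (HAL_LTDC_ErrorCallback, HAL_LTDC_ERROR_FU) come from the generic STM32 LTDC HAL driver, and on the N6 the underrun flag lives in LTDC_ISR2 as FUIF, so verify the exact names against the STM32N6 HAL headers.

```c
/* Sketch: surface LTDC transfer-error / FIFO-underrun interrupts.
 * Callback and error-code names are from the generic STM32 LTDC HAL
 * driver; check them against the STM32N6 HAL before use. */

extern LTDC_HandleTypeDef hltdc;      /* the project's existing LTDC handle */

volatile uint32_t fuif_count = 0;     /* FIFO-underrun counter for debugging */

/* Weak HAL callback, invoked from HAL_LTDC_IRQHandler() on TE/FU errors */
void HAL_LTDC_ErrorCallback(LTDC_HandleTypeDef *h)
{
    if (h->ErrorCode & HAL_LTDC_ERROR_FU) {
        fuif_count++;                 /* LTDC was starved on the AXI bus */
    }
    h->ErrorCode = HAL_LTDC_ERROR_NONE;
}

/* LTDC error interrupt vector (name per the device startup file) */
void LTDC_ER_IRQHandler(void)
{
    HAL_LTDC_IRQHandler(&hltdc);
}
```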

Key experimental results:

  • If the AI/NPU is never started, LTDC runs indefinitely without any FUIF or display corruption.

  • If the AI/NPU runs for a few seconds while LTDC is enabled, a FIFO underrun occurs.

  • Stopping the AI is NOT sufficient to recover:

    • The display remains corrupted

    • FUIF continues to be reported

  • Only a full LTDC hardware reset via RCC (FORCE_RESET / RELEASE_RESET) restores correct display behavior.

This indicates that a single FIFO underrun permanently corrupts the LTDC internal pipeline/state, and the peripheral does not self-recover.
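The recovery path described above can be sketched as follows. This is only an illustration of the reset sequence that worked here, assuming hltdc and the layer config are the project's existing handles; the FORCE/RELEASE reset macros follow the usual STM32 HAL RCC naming, so confirm them in the N6 HAL.

```c
/* Sketch: recover the LTDC after a fatal FIFO underrun by resetting the
 * peripheral through RCC and re-running the init from scratch.
 * hltdc and pLayerCfg are assumed to be the project's existing
 * handle and layer configuration. */

extern LTDC_HandleTypeDef   hltdc;
extern LTDC_LayerCfgTypeDef pLayerCfg;

void LTDC_Recover(void)
{
    __HAL_RCC_LTDC_FORCE_RESET();    /* assert the LTDC reset line in RCC */
    __HAL_RCC_LTDC_RELEASE_RESET();  /* release it: registers back to defaults */

    /* Re-apply the full LTDC configuration */
    HAL_LTDC_Init(&hltdc);
    HAL_LTDC_ConfigLayer(&hltdc, &pLayerCfg, LTDC_LAYER_1);
}
```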

 

When I attach a debugger (SWD) once:

  • LTDC and NPU can run in parallel without any FIFO underrun

  • No FUIF is ever triggered

  • The system remains stable even after leaving debug mode and performing software resets

  • The issue only reappears after a power cycle

If anyone has an idea how to fix this, please let me know.

Thanks in advance

lou_v (Author) · Best answer
Associate II
March 5, 2026

Hi, I finally found my problem.

When the NPU is running, most of the AI example projects put the CPU to sleep with __WFE(). But when you do that, some peripheral clocks may be gated off if you never configured them to stay enabled in Sleep mode. X-CUBE-AI generates a function (via CubeMX) to prevent that: set_clk_sleep_mode.

But you have to add any new peripherals you enable in your project yourself; for me, that was the LTDC.
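In other words, the fix boils down to keeping the LTDC clock enabled while the core sleeps in __WFE(). A hedged sketch of the addition to the generated function (the *_CLK_SLEEP_ENABLE macro name follows the pattern used on other STM32 families; check the exact STM32N6 HAL RCC macro for the LTDC):

```c
/* Sketch: keep the LTDC clock running during Sleep mode, otherwise the
 * display is starved every time the CPU executes __WFE() while waiting
 * on the NPU. Macro name follows the usual STM32 HAL pattern; verify it
 * against the STM32N6 HAL RCC header. */

static void set_clk_sleep_mode(void)
{
    /* ...clocks already handled by the X-CUBE-AI generated code... */

    __HAL_RCC_LTDC_CLK_SLEEP_ENABLE();  /* keep the LTDC fed during __WFE() */
}
```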