Visitor II
September 26, 2019
Solved

No memory for tty_prepare_flip_string

  • 4 replies
  • 1758 views

Hello,

I'm using the OpenAMP_TTY_echo example with some modifications.

if (VirtUart0RxMsg) {
    VirtUart0RxMsg = RESET;

    while (data < 65535) {
        itoa(data, buf, 10);
        if (VIRT_UART_Transmit(&huart0, (uint8_t *)buf, 10) != VIRT_UART_OK) {
            Error_Handler();
        }
        data++;
    }

    data = 0;
}

On the Linux side I use write() on ttyRPMSG0.

But if I don't read on the Linux side, the kernel log gives me this message:

rpmsg_tty virtio0.rpmsg-tty-channel.-1.0: No memory for tty_prepare_flip_string

I think I understand the problem, but what can I do to prevent it?

    This topic has been closed for replies.
    Best answer by ArnaudP


    4 replies

    Technical Moderator
    September 27, 2019

    Hi @lenonrt

    I understand you are not reading the messages on the A7 side.

    So I guess the message queue or buffer becomes full.

    Try implementing a process on the A7 that reads the messages.

    Olivier
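
    A minimal sketch of such a reader process on the A7 side. The device name /dev/ttyRPMSG0 is taken from this thread; drain() itself is generic and simply reads until EOF so the kernel's flip buffers cannot fill up (a real application would process the data instead of just counting it).

```c
#include <unistd.h>

/* Read from fd until EOF (or error) and return the number of bytes drained.
 * Continuously reading is what prevents the
 * "No memory for tty_prepare_flip_string" condition on the Linux side. */
static long drain(int fd)
{
    char buf[512];
    long total = 0;
    ssize_t n;

    while ((n = read(fd, buf, sizeof buf)) > 0)
        total += n;          /* a real application would process buf here */
    return total;
}

/* Usage sketch (device name assumed from the thread):
 *     int fd = open("/dev/ttyRPMSG0", O_RDONLY);
 *     if (fd >= 0) { drain(fd); close(fd); }
 */
```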

    lenonrt (Author)
    Visitor II
    October 17, 2019

    Hi @Community member, thank you for your answer!

    Yes, you are right.

    I was trying to use it only for sending data. I now also receive the messages on the A7 side.

    But after a few seconds the problem occurs again.

    I did some tests with 496-byte buffer messages using a request-and-answer method.

    I get 18 Mb/s, which isn't good. I saw in the forum that you said ST achieved 15.4 MB/s.

    Now I'm developing a version that sends just a pointer.

    Lenon

    ArnaudP (Answer)
    ST Employee
    October 25, 2019

    Hello @lenonrt,

    Regarding the "No memory for tty_prepare_flip_string" message, the bottleneck seems to be your application. Many root causes could explain this issue, and it probably depends on the other tasks the processor is executing.

    Regarding the code you shared, your RPMsg buffers are filled with only 10 bytes. Each message generates an interrupt on the Cortex-A7, which is not optimal. I would suggest concatenating the data to fill the whole RPMsg buffer, which can hold up to 496 bytes (by default). With this optimization you should significantly decrease the number of mailbox (IPCC) interrupts.
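
    A minimal host-side sketch of this batching idea. It does not use the VIRT_UART API directly: transmit() is a hypothetical stand-in for VIRT_UART_Transmit() that only counts calls, and the 496-byte size comes from the default RPMsg payload mentioned above. The point is that packing many counter values per buffer reduces the number of transmits (and hence IPCC interrupts) from tens of thousands to a few hundred.

```c
#include <stdio.h>
#include <string.h>

#define RPMSG_BUF_SIZE 496  /* default RPMsg payload size from the answer */

static unsigned tx_calls;

/* Hypothetical stand-in for VIRT_UART_Transmit(): count each send. */
static int transmit(const char *buf, size_t len)
{
    (void)buf; (void)len;
    tx_calls++;
    return 0;
}

/* Send the counter values 0..last, packing them into one RPMsg-sized
 * buffer and flushing only when the buffer is full, instead of issuing
 * one transmit per value. Returns the number of transmits issued. */
static unsigned send_counters_batched(unsigned last)
{
    char buf[RPMSG_BUF_SIZE];
    size_t used = 0;
    tx_calls = 0;

    for (unsigned v = 0; v <= last; v++) {
        char item[16];
        int n = snprintf(item, sizeof item, "%u\n", v);
        if (n < 0)
            continue;
        if (used + (size_t)n > sizeof buf) {  /* buffer full: flush it */
            transmit(buf, used);
            used = 0;
        }
        memcpy(buf + used, item, (size_t)n);
        used += (size_t)n;
    }
    if (used > 0)                             /* flush the remainder */
        transmit(buf, used);
    return tx_calls;
}
```

    With the thread's 0..65534 counter loop, this issues on the order of 800 transmits rather than 65535.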

    Now, if you are looking for an alternative based on separate shared buffers, you can have a look here: https://github.com/STMicroelectronics/meta-st-stm32mpu-app-logicanalyser

    This sample relies on an rpmsg_sdb Linux driver which exposes buffers to userland. RPMsg is used to provide the buffer addresses to the Cortex-M4, which informs the Cortex-A7 when a buffer is filled.

    Be aware that the rpmsg_sdb driver is provided only as an example implementing transfers from the Cortex-M4 to the Cortex-A7. You can use it as a guide to implement your own driver.

    Arnaud

    lenonrt (Author)
    Visitor II
    October 25, 2019

    Hello,

    Thanks for the information!