TCP Client

Solved

  • Graduate · December 8, 2023
  • 1 reply
  • 5208 views

Hello, I'm developing a TCP client for an STM32F4 MCU.

I'm using the lwIP sockets API and FreeRTOS to do so.

This client will both send and receive data when triggered by the RTC (every hour or so), based on an ADC reading.

My understanding is that both the sockets and netconn APIs are based on a state machine, so they need to run in a different thread than MX_LWIP_Init(). After that function is called, I create a thread that communicates with my server application.

I initialise the thread with xTaskCreate(TcpThread, "TCP", 128 * 4, NULL, osPriorityNormal, &tcp_thread_id);

In this thread I create, connect, write to, and close the socket, checking each call for errors and reporting them over UART, as one should.

After this I delete the thread with vTaskDelete(NULL).

I then create the thread again 5 s later, after an osDelay(5000).
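For readers without the attachments, the cycle described above looks roughly like this. This is a sketch reconstructed from the description only, not the attached source; names such as RestartLoop are placeholders, while TcpThread and tcp_thread_id come from the post:

```c
#include "FreeRTOS.h"
#include "task.h"
#include "cmsis_os.h"
#include "lwip/sockets.h"

static TaskHandle_t tcp_thread_id;

/* Opens, uses and closes one socket, then deletes itself. */
static void TcpThread(void *arg)
{
    (void)arg;
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    /* ... connect(), send()/recv(), each error reported over UART ... */
    close(sock);
    vTaskDelete(NULL);               /* the task ends here */
}

/* Elsewhere, the task is re-created every 5 s. */
void RestartLoop(void)
{
    for (;;) {
        xTaskCreate(TcpThread, "TCP", 128 * 4, NULL,
                    osPriorityNormal, &tcp_thread_id);
        osDelay(5000);
    }
}
```

Note that when a task deletes itself its stack is only reclaimed later by the idle task, and anything not closed before vTaskDelete(NULL) stays allocated, which would be consistent with a crash after a variable number of connections.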

The application seems to run fine, but it crashes after anywhere from 24 to 1100 connections without showing any errors over UART.

Is there something wrong with my implementation?

I've attached main and client source files to this post.

If anyone has tips on how to solve this, I would very much appreciate them.

Thank you,

Marc

    This topic has been closed for replies.
    Best answer by Bob S


    1 reply

    Bob S (Best answer)
    Super User
    December 8, 2023

    > My understanding is both sockets and netconn api's are based on state machine so they need to run in a different thread than MX_LWIP_Init() 

    Not quite. The netconn and socket APIs need to be called from a different task/thread than the TCP/IP main thread, but that is always the case: tcpip_init() creates the tcpip task to handle the API calls, so any thread you create is "different" from it.

    Don't delete your task and re-start/re-create it; that is inefficient and unnecessary. Have the task wait on a semaphore. When your A/D data is ready, give that semaphore (from a separate task, obviously). Your task then opens the socket, sends data, closes the socket and goes back to waiting for the semaphore.
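    A minimal sketch of that semaphore pattern, assuming FreeRTOS and the lwIP sockets API. The server address, port, and payload buffer here are placeholders, not values from the thread:

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"
#include "lwip/sockets.h"

static SemaphoreHandle_t data_ready;   /* create with xSemaphoreCreateBinary()
                                          before the tasks start */
static uint8_t adc_payload[4];         /* hypothetical ADC sample buffer */

static void TcpThread(void *arg)
{
    (void)arg;
    for (;;) {
        /* Block here until the ADC task gives the semaphore. */
        xSemaphoreTake(data_ready, portMAX_DELAY);

        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0)
            continue;                  /* log the error over UART */

        struct sockaddr_in server = {0};
        server.sin_family = AF_INET;
        server.sin_port = htons(5000);                       /* placeholder */
        server.sin_addr.s_addr = inet_addr("192.168.1.10");  /* placeholder */

        if (connect(sock, (struct sockaddr *)&server, sizeof(server)) == 0)
            send(sock, adc_payload, sizeof(adc_payload), 0);

        close(sock);  /* always close, so lwIP can free the connection */
    }
}

/* From the ADC task, once a reading is complete: */
void AdcReadingDone(void)
{
    xSemaphoreGive(data_ready);
}
```

    The task is created once and loops forever; only the socket is opened and closed per connection, so nothing is leaked between cycles.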

    Or, if the TCP task also reads the A/D data, then have that task start a (FreeRTOS) timer with a period of 5 seconds and wait for a semaphore. The timer callback gives that semaphore, waking your TCP task, which then collects the data, opens the socket, sends the data, closes the socket, and goes back to waiting for the semaphore.
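    The timer variant might be sketched like this (same assumptions; the 5-second period and all names are illustrative):

```c
#include "FreeRTOS.h"
#include "timers.h"
#include "semphr.h"

static SemaphoreHandle_t wake_tcp;
static TimerHandle_t     wake_timer;

/* Auto-reloading timer callback: wakes the TCP task each period. */
static void WakeTimerCb(TimerHandle_t t)
{
    (void)t;
    xSemaphoreGive(wake_tcp);
}

void SetupPeriodicWake(void)
{
    wake_tcp   = xSemaphoreCreateBinary();
    wake_timer = xTimerCreate("wake",
                              pdMS_TO_TICKS(5000),  /* period */
                              pdTRUE,               /* auto-reload */
                              NULL, WakeTimerCb);
    xTimerStart(wake_timer, 0);
}

/* The TCP task then loops:
   xSemaphoreTake(wake_tcp, portMAX_DELAY);
   read the ADC, open the socket, send, close, repeat. */
```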

    Graduate
    December 9, 2023

    Ok thank you Bob!

    I just learned how to check whether the memory is being freed, and apparently it isn't freed before the function exits, as I had assumed it would be with the higher-level blocking API. That is what is causing the crash.
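    For anyone wanting to reproduce that check, one way is to log the free heap once per connection cycle, assuming heap_4 or another FreeRTOS heap scheme that implements xPortGetFreeHeapSize(); a steadily shrinking number indicates a leak:

```c
#include <stdio.h>
#include "FreeRTOS.h"
#include "task.h"

/* Call once per connection cycle; a steadily falling number
   shows that something is not being freed. */
void LogFreeHeap(void)
{
    printf("free heap: %u bytes\r\n",
           (unsigned)xPortGetFreeHeapSize());
}
```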

    So semaphores would not be bad, even for long intervals between connections, longer than an hour?

    Thanks again for your help,

    Marc

    Graduate II
    December 9, 2023

    There is no time limit: a task can wait indefinitely, if that is appropriate. Why would semaphores be bad? Anyway, FreeRTOS task notifications are even simpler and more efficient.
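    A sketch of the notification version, assuming FreeRTOS v8.2 or later (which added the direct-to-task notification API); the names here are placeholders:

```c
#include "FreeRTOS.h"
#include "task.h"

static TaskHandle_t tcp_task;   /* stored when the task is created */

static void TcpThread(void *arg)
{
    (void)arg;
    for (;;) {
        /* Block indefinitely until notified; pdTRUE clears the
           notification count on exit (binary-semaphore behaviour). */
        ulTaskNotifyTake(pdTRUE, portMAX_DELAY);
        /* open the socket, send, close ... */
    }
}

/* From the RTC or ADC trigger (task context): */
void Trigger(void)
{
    xTaskNotifyGive(tcp_task);
}
```

    Unlike a semaphore, no separate kernel object needs to be created, which is why notifications are lighter for one-to-one signalling like this.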