Influence of LOG() on UART Read


I wrote code where the ESP32 sends 5 requests to a device (UART2) and reads 5 responses (about 70 bytes each). For convenience, I inserted informational messages LOG(LL_INFO, ("Message")) into the code (UART0).
This way it takes about 2 seconds to read all the data and show all informational messages in the terminal.

Then I commented out only the lines with LOG(LL_INFO, ("Message")). After that it takes about 5 seconds to read all the data and show the result message in the terminal. And I noticed that every request+response is preceded by a pause of about 1 second.

It's very strange to me: more informational messages LOG(LL_INFO, ("Message")) mean less code execution time; fewer informational messages mean more code execution time.

Maybe someone knows why?

P.S. There are no delays like mgos_msleep() in the code.


The act of logging may of course be intrusive, but removing it should definitely not make your code slower…
Your requests should not take that long to be processed unless you are not firing on the proper event or not checking the proper condition. This is an event-driven framework, and the same event handler serves both TX and RX for a UART. What exactly are you doing (code)?


Excuse me for the Russian comments! :slight_smile:


// Receive-data event handler
static void device_uart_cb(int uart_no, void *param) {
    assert(uart_no == DEVICE_SERIAL_PORT);
    size_t rx_av = mgos_uart_read_avail(DEVICE_SERIAL_PORT);
    if (rx_av == 0) {
        if (!device_reading_in_process) return;  // nothing received yet after the request
        device_reading_in_process = false;
        //LOG(LL_INFO, ("- Data reception finished"));
        //LOG(LL_INFO, ("Total bytes - %d", (int) device->connection.receive_buffer.len));
        //LOG(LL_INFO, ("End of data reception"));
        return;
    }
    LOG(LL_INFO, ("."));
    device_reading_in_process = true;
    //LOG(LL_INFO, ("Bytes available - %d", (int) rx_av));
    //LOG(LL_INFO, ("Reading the input buffer"));
    mgos_uart_read_mbuf(DEVICE_SERIAL_PORT, &device->connection.receive_buffer, rx_av);
    //LOG(LL_INFO, ("SlaveID: %.2x, Receive Buffer: %d, Read Available: %d",
    //              device->, device->connection.receive_buffer.len, rx_av));
    (void) param;
}

// Start sending data
static bool device_start_transaction() {
    if (device->connection.transmit_buffer.len > 0) {
        //LOG(LL_INFO, ("- Preparing to send the message"));
        //LOG(LL_INFO, ("- Sending..."));
        mgos_uart_write(DEVICE_SERIAL_PORT, device->connection.transmit_buffer.buf,
                        device->connection.transmit_buffer.len);
        device_reading_in_process = false;
        mgos_uart_set_dispatcher(DEVICE_SERIAL_PORT, device_uart_cb, NULL);
        return true;
    }
    return false;
}

// Read the volume
void device_read_volume() {
    device->connection.transmit_buffer.buf = GetVolumeMessage;
    device->connection.transmit_buffer.len = 10;
}


And the second observation:
In the handler device_uart_cb(int uart_no, void *param), when I uncomment the line LOG(LL_INFO, (".")), the code executes faster; when I comment it out, slower.


I’m too lazy to read code that long, sorry. Please trim it to the minimum reproducing your problem or wait for someone less lazy than I am.


I trimmed the code in my previous post.


Well, I can't follow your code. If you ask me, I wouldn't mess with internal structures, not even for reading, unless it is absolutely necessary (and perhaps not even then).
Your callback will also be called when some bytes have been transmitted and there is more room in the FIFO for more data. I guess your device_reading_in_process is there to get some sort of half-duplex operation.
You seem to read the received buffer when mgos_uart_read_avail() returns 0 data available, but no data to be read will not invoke the callback; so, if I'm correct, you will end up being called after some data was transmitted and there is room, or when the magic internals somehow determine that your callback has to be called.
What most people do is know the length of the message or how it ends, read as much data as is available, and parse it to detect the end of a data frame to be later processed or delivered to the upper layer.


Thanks for your explanation!

You seem to read the received buffer when mgos_uart_read_avail() returns 0 data available, but no data to be read will not invoke the callback…

No, I read data only when mgos_uart_read_avail() returns a value != 0. If it returns 0 and we have not received any data soon after the request, we leave the handler with return; but if we have already received some data and on the next handler call mgos_uart_read_avail() returns 0, we stop reading and begin to parse the response message from the buffer.

I think I misunderstand the mechanism behind the handler installed with mgos_uart_set_dispatcher()… and I still cannot correctly understand the process of transmitting and receiving data in half-duplex mode in Mongoose OS. Maybe it is because I have never worked with operating systems.

I don't understand why the system waits 1 second between the last response and the next request (if I don't use LOG(LL_INFO, ("."));). Is it some system timeout?

And if I understand correctly, my LOG(LL_INFO, (".")); causes the handler to be called again? But this cannot affect the reception of data in any way, because at that moment the debug UART0 (not UART2) is used, right?

Sorry for the many, possibly stupid questions, but I really want to figure this out.


UARTs these days are full-duplex; they have two pins. Some UARTs do have some sort of support for muting your RX when transmitting, to help RS-485 users, but AFAIK that is not the case here and there is no built-in support for half-duplex. You can just disable RX by calling the proper function, though.
The dispatcher/handler mechanism is quite simple: there is only one callback, and it is called when the OS wants. That usually corresponds to two well-defined situations, but since there are two ring buffers plus two FIFOs, IIRC, and two hardware FIFOs on the chip, you are in trouble if you expect to control the timing.
The two situations I mentioned are:

  • you sent some data and now there is room to send more data
  • you received some data, and either it is enough to fill the FIFO, so it has been taken out and is being presented to you, or it has been lingering for a while in the buffer, nothing else came in, and it is time for you to take it and do something with it. I'm not sure, but I think there is also a periodic call when your buffer is empty.

The scheme is somewhat similar to what UN*X does; you can configure those thresholds: a length threshold and a lingering-time threshold.
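If memory serves, the lingering-time side of this is exposed in Mongoose OS through the rx_linger_micros field of struct mgos_uart_config; treat the field name and the values below as assumptions to verify against your mos version. A configuration fragment would look roughly like:

```c
struct mgos_uart_config ucfg;
mgos_uart_config_set_defaults(DEVICE_SERIAL_PORT, &ucfg);
ucfg.baud_rate = 9600;
/* How long received bytes may linger in the RX buffer before the
 * dispatcher is invoked anyway (assumed field name; check your SDK). */
ucfg.rx_linger_micros = 20;
if (!mgos_uart_configure(DEVICE_SERIAL_PORT, &ucfg)) {
  LOG(LL_ERROR, ("Failed to configure UART%d", DEVICE_SERIAL_PORT));
}
```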

What people usually do is get whatever comes, put it in a buffer, and process it.
When you are sending, you send as much as you can and keep sending when there is room, until you finish.
Whatever else you have to do, you do at the upper layer. I love to work at the processor level and cram everything into the UART interrupts; with FIFOs that becomes a bit more involved, and with an OS you forget about that and just take the data.

I don't know what it does, but device_get_data_from_message() will be called when mgos_uart_read_avail() returns 0 and device_reading_in_process is true; that is, after you call mgos_uart_write(), or some time after you got the last bytes of your message and emptied the buffer by calling mgos_uart_read_mbuf(), if that timeout I mentioned really exists, or if there is some other event I'm not aware of that calls you "just in case".
When there really is data available, your callback won't be called immediately unless your message is long enough to exceed the set length threshold; whatever is smaller than that threshold will remain there until the time threshold expires.


Thanks for your detailed answer, something became clearer!)