Need a way to flush the TCP send buffer

Hello,

I need to be able to upload HTTP POST requests that are a minimum of 256 KiB from an ESP8266. I'm reading a large file from external SPI flash. Originally the main problem was that we did not have enough RAM to buffer the entire POST body, so I broke the request into multiple TCP packets.

However, Mongoose just takes this data and appends it to an internal send buffer inside struct mg_connection, so I run into the same problem: we run out of memory.

More specifically, I have a for loop in an event handler that calls mg_send repeatedly. So what I need is either a way to make mg_send block until the data has actually been sent, or a way to know when to stop queuing data and when to resume once there is space in the send buffer again.

  1. My goal is: Upload a large HTTP POST request in multiple TCP packets in order to conserve RAM
  2. My actions are: I call mg_send on small chunks of memory during the request
  3. The result I see is: A large file causes Out Of Memory errors due to the messages being queued internally
  4. My expectation & question is: To send the TCP packets in a blocking manner so that the send queue doesn't grow too large. Is there a mechanism for this in Mongoose's networking library?

I figure the solution is going to be putting the thread to sleep so that the networking thread can start emptying the send queue, but I don't know how the tasks are set up in FreeRTOS within Mongoose OS.

Anyone tried something like this?

Maybe this can help.


Also, https://github.com/mongoose-os-apps/http-fetch

If you're not using Mongoose OS but the plain Mongoose Library, you can process the MG_EV_HTTP_CHUNK event, which is fired for every chunk in the upload.
Alternatively, for a non-chunked upload, use MG_EV_RECV.
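
For the chunked case, a rough sketch of such a handler (Mongoose 6.x style API; the target path is made up here and error handling is omitted):

#include <stdio.h>
#include "mongoose.h"

/* Illustrative only: write each received chunk straight to a file so the
 * whole body never has to sit in RAM. "/upload.bin" is a made-up path. */
static void ev_handler(struct mg_connection *nc, int ev, void *ev_data) {
  if (ev == MG_EV_HTTP_CHUNK) {
    struct http_message *hm = (struct http_message *) ev_data;
    FILE *fp = fopen("/upload.bin", "ab");
    if (fp != NULL) {
      fwrite(hm->body.p, 1, hm->body.len, fp);
      fclose(fp);
    }
    nc->flags |= MG_F_DELETE_CHUNK;  /* drop the processed chunk from the receive buffer */
  }
}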

Using Mongoose OS is preferable, however, because there is already an FS.* set of functions for managing files, and you can upload large files to the ESP8266 with no issues.

Thanks for the replies! So I've been able to implement both of your suggestions, but the caveat still remains that each PUT request needs to be a minimum of 256 KiB. I should mention that I'm uploading to Google Cloud using its JSON Resumable Upload API:

https://cloud.google.com/storage/docs/json_api/v1/how-tos/resumable-upload

So I'm uploading the file in chunks of 256 KiB, each broken into multiple TCP packets. I've verified that this works by uploading a small file as multiple TCP packets without issue.

What I have right now is the initial PUT request for a single chunk:

PUT https://www.googleapis.com/upload/storage/v1/b/myBucket/o?uploadType=resumable&upload_id=xa298sd_sdlkj2 HTTP/1.1
Content-Length: 524288
Content-Type: image/jpeg
Content-Range: bytes 0-524287/2000000
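
In code, queueing that header block looks roughly like this (the bucket name, upload_id, and sizes are the placeholder values above; the docs show the absolute URL, but over the socket I send the path plus a Host header):

/* Sketch: queue the chunk's header block on an already-connected nc. */
static void send_chunk_headers(struct mg_connection *nc) {
  mg_printf(nc,
            "PUT /upload/storage/v1/b/myBucket/o"
            "?uploadType=resumable&upload_id=xa298sd_sdlkj2 HTTP/1.1\r\n"
            "Host: www.googleapis.com\r\n"
            "Content-Length: 524288\r\n"
            "Content-Type: image/jpeg\r\n"
            "Content-Range: bytes 0-524287/2000000\r\n"
            "\r\n");
}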

That header block is then followed by separate mg_send calls for the body. The logic kind of looks like this:

for( unsigned int i = 0; i < transfer_target; i += transfer_actual )
{
    // Read the next block from flash and queue it on the connection
    transfer_actual = fread(
            cbdata->gstore.massive_buffer,
            1,
            MGIX_GSTORE_BUFFER_SIZE,
            fp );
    mg_printf(cbdata->nc,
            "%.*s",
            (int)transfer_actual,
            cbdata->gstore.massive_buffer);
}

I think what I'm missing is a call to mg_mgr_poll() between calls to mg_printf. And since there is no guarantee that the packet will be transmitted during a single call to poll, I'll need to register a handler for MG_EV_SEND and use a flag to know when it's safe to queue the next block.

However, since I’m using Mongoose OS, will there be a negative impact of having this function block for a long time?

Cheers

  1. Do not block
  2. The mg_printf() does not send, it only copies into the output buffer. The mg_mgr_poll() drains that buffer into the network.
  3. Thus a state machine needs to be implemented, akin to mg_http_transfer_file_data() in https://github.com/cesanta/mongoose/blob/master/src/mg_http.c; see the sketch below.
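
For illustration, a minimal sketch of that state machine (Mongoose 6.x style API; UPLOAD_BLOCK, SEND_HIGH_WATER and struct upload_state are invented names, and the handler signature may differ under Mongoose OS):

#include <stdio.h>
#include "mongoose.h"

#define UPLOAD_BLOCK 1024      /* bytes read from flash per refill */
#define SEND_HIGH_WATER 2048   /* don't refill while this much is still queued */

struct upload_state {
  FILE *fp;          /* file being uploaded, already opened and positioned */
  size_t remaining;  /* bytes of the current 256 KiB chunk still to send */
};

/* Top up the output buffer a little at a time instead of queueing the
 * whole chunk at once; called from the event handler below. */
static void feed_connection(struct mg_connection *nc) {
  struct upload_state *st = (struct upload_state *) nc->user_data;
  char buf[UPLOAD_BLOCK];
  while (st->remaining > 0 && nc->send_mbuf.len < SEND_HIGH_WATER) {
    size_t want = st->remaining < sizeof(buf) ? st->remaining : sizeof(buf);
    size_t got = fread(buf, 1, want, st->fp);
    if (got == 0) break;            /* EOF or read error: stop feeding */
    mg_send(nc, buf, (int) got);    /* copies into nc->send_mbuf */
    st->remaining -= got;
  }
  /* when st->remaining reaches 0, finish the chunk and start the next one */
}

static void ev_handler(struct mg_connection *nc, int ev, void *ev_data) {
  (void) ev_data;
  switch (ev) {
    case MG_EV_POLL:   /* periodic tick from mg_mgr_poll() */
    case MG_EV_SEND:   /* some queued data was written to the socket */
      feed_connection(nc);
      break;
  }
}

Here nc->user_data is assumed to point at an upload_state set up when the connection was created, so the handler never queues more than a small, bounded amount of data at a time.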