Limitations on ESP-32 file download

#1
  1. I am trying to log a large number of A/D samples for testing of my device. I would like to be able to download 10 seconds' worth of data. Each data point consists of two A/D samples (each encoded as 4 ASCII bytes) and a single control byte, separated by tabs and terminated by a newline; each data point is thus 12 characters long. 1000 data points are generated each second. After a test, the data is downloaded from the ESP-32 using

curl -d '{"filename": "datafile.txt"}' 192.168.4.1/rpc/FS.Get

I am partially successful at this (as described below). I think that there must be some limitations that are preventing ultimate success.

  2. My actions are to collect the data into arrays p, q and n at 1000 samples per second. I then call the following function:
int scanzone (char* ZSstring, int ZSlen, int depth){	
	fp=fopen("datafile1.txt","w+");
	loggerstring[0]='\0';
	for (i=0;i<loglimit/2;i++){
		sprintf(dumstring,"%d",p[i]); strcat(loggerstring,dumstring); strcat(loggerstring,"\t");
		sprintf(dumstring,"%d",q[i]); strcat(loggerstring,dumstring); strcat(loggerstring,"\t");
		sprintf(dumstring,"%1d",n[i]); strcat(loggerstring,dumstring); strcat(loggerstring,"\n");
	}
	fputs(loggerstring,fp);
	fclose(fp);
	
	fp=fopen("datafile2.txt","w+");
	loggerstring[0]='\0';
	for (i=loglimit/2;i<loglimit;i++) {
		sprintf(dumstring,"%d",p[i]); strcat(loggerstring,dumstring); strcat(loggerstring,"\t");
		sprintf(dumstring,"%d",q[i]); strcat(loggerstring,dumstring); strcat(loggerstring,"\t");
		sprintf(dumstring,"%1d",n[i]); strcat(loggerstring,dumstring); strcat(loggerstring,"\n");
	}
	fputs(loggerstring,fp);
	fclose(fp);

	logpointer=0;
	return 1;				
}
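A side note on the code above: loggerstring must be sized for the worst case, roughly loglimit/2 × 12 bytes per file plus the terminator. A small sketch of that sizing (the 4-digit sample width and 1-digit control value are my assumptions, inferred from the post, not stated in the code):

```c
/* Worst-case capacity of loggerstring for `count` data points,
   assuming samples of at most 4 decimal digits and a 1-digit
   control value (assumptions mine, not from the post). */
static long logger_capacity(long count) {
    const long chars_per_point = 4 + 1 + 4 + 1 + 1 + 1; /* p \t q \t n \n */
    return count * chars_per_point + 1;                 /* + '\0' */
}
```

With loglimit = 8000 (8 s at 1000 points/s), logger_capacity(8000 / 2) gives 48001 bytes per file; anything smaller risks strcat overrunning the buffer.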

After this is complete, I attempt to download the data using:

curl -d '{"filename": "datafile.txt"}' 192.168.4.1/rpc/FS.Get

  3. The results depend on the size of the arrays p, q and n and the value of logpointer. If 3000 data points are taken using the above method, everything works perfectly: I get 3000 properly recorded data points in base64-encoded form. If I attempt to increase to 4000 data points, I receive no reaction to

curl -d '{"filename": "datafile.txt"}' 192.168.4.1/rpc/FS.Get

(i.e., control returns to the command-line prompt after about 1 second, and no data or error message is received).

I am monitoring the available heap size in both cases. With 3000 data points, the free heap space is about 160K; with 4000 data points, it is reduced to about 150K.

  4. I am sure that the sudden change between 3000 and 4000 data points is due to some system limitation that I am not aware of. Can anyone describe such a limitation (or indicate a better process to achieve the desired results)?

Thank you as always for your expert assistance,

JSW

#2

The limitation comes from the available memory.

[Nov 25 13:07:06.258] mg_rpc.c:293            FS.Get via HTTP 192.168.0.39:53788
[Nov 25 13:07:06.264] mgos_service_filesy:176 Sending datafile2.txt
[Nov 25 13:07:07.232] E:M 61583
[Nov 25 13:07:07.234] Heap summary for capabilities 0x00001800:
[Nov 25 13:07:07.238]   At 0x3ffae6e0 len 6432 free 0 allocated 6276 min_free 0
[Nov 25 13:07:07.243]     largest_free_block 0 alloc_blocks 31 free_blocks 0 total_blocks 31
[Nov 25 13:07:07.249]   At 0x3ffba948 len 153272 free 50600 allocated 101612 min_free 1320
[Nov 25 13:07:07.255]     largest_free_block 49624 alloc_blocks 254 free_blocks 3 total_blocks 257
[Nov 25 13:07:07.262]   At 0x3ffe0440 len 129984 free 7296 allocated 122644 min_free 7296
[Nov 25 13:07:07.268]     largest_free_block 7296 alloc_blocks 2 free_blocks 1 total_blocks 3
[Nov 25 13:07:07.274]   Totals:
[Nov 25 13:07:07.275]     free 57896 allocated 230532 min_free 8616 largest_free_block 49624
[Nov 25 13:07:07.282] E:M 61455
[Nov 25 13:07:07.283] Heap summary for capabilities 0x00001800:
[Nov 25 13:07:07.286]   At 0x3ffae6e0 len 6432 free 0 allocated 6276 min_free 0
[Nov 25 13:07:07.291]     largest_free_block 0 alloc_blocks 31 free_blocks 0 total_blocks 31
[Nov 25 13:07:07.298]   At 0x3ffba948 len 153272 free 50600 allocated 101612 min_free 1320
[Nov 25 13:07:07.304]     largest_free_block 49624 alloc_blocks 254 free_blocks 3 total_blocks 257
[Nov 25 13:07:07.311]   At 0x3ffe0440 len 129984 free 7296 allocated 122644 min_free 7296
[Nov 25 13:07:07.317]     largest_free_block 7296 alloc_blocks 2 free_blocks 1 total_blocks 3
[Nov 25 13:07:07.323]   Totals:
[Nov 25 13:07:07.324]     free 57896 allocated 230532 min_free 8616 largest_free_block 49624
[Nov 25 13:07:07.331] mgos_mongoose.c:66      New heap free LWM: 8616

The HTTP RPC channel sends all the requested data in one reply. It needs several dynamically allocated buffers which, if the data is large, will exhaust the heap. The log above shows that the requests to allocate 61583 and 61455 bytes could not be completed (the size of the example datafile2.txt is 45952 bytes).
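As a rough cross-check (my arithmetic, not from the log): base64 inflates the reply by a factor of 4/3, so a ~46 KB file needs a ~61 KB buffer, which matches the failed allocations above:

```c
/* Size of the base64 encoding of n bytes: every 3 input bytes
   become 4 output bytes, with padding for the final group. */
static long base64_size(long n) {
    return ((n + 2) / 3) * 4;
}
```

base64_size(45952) gives 61272, within a few hundred bytes of the 61455- and 61583-byte allocations in the log; the remainder is presumably the JSON framing of the reply.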

Use mos --port ws://192.168.4.1/rpc get datafile2.txt and it will complete without memory allocation problems, and the returned data will be plain ASCII, not base64 encoded. The rpc-ws library is needed in this case.

PS. Simpler code:

  char buf[64];
  for (i = 0; i < loglimit / 2; i++) {
    sprintf(buf, "%d\t%d\t%d\n", p[i], q[i], n[i]);
    fputs(buf, fp);
  }

#3

Hi nliviu,

This method is vastly superior in every way! I am now able to store and quickly download 25 seconds worth of data in an instantly usable format!

One small issue that I can't explain: the returned data is always slightly smaller than requested (i.e., if I request 25000 points, I get 24992). This is certainly good enough for my data measurements, but it may cause problems for downloads of other types of data. Is this some kind of buffer issue? (I am using the simpler code.)

Also, is there a list of ws rpc calls?

Thanks for your help!

JSW

#4

An RPC request can be made via any installed channel: serial, http, ws, mqtt, udp.
https://mongoose-os.com/docs/mongoose-os/userguide/rpc.md

Do you close the file after the fputs loop?

#5

I do now. Works perfectly.
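For readers landing here later, the fix presumably amounts to adding fclose() after the loop, which flushes stdio's buffer so the tail of the data actually reaches the file. A minimal sketch (my reconstruction using names from the thread, not the exact code):

```c
#include <stdio.h>

/* Sketch of the corrected logging loop: p, q, n are the sample
   arrays from the original post. Returns 1 on success, 0 on error. */
static int write_log(const char *path, const int *p, const int *q,
                     const int *n, int count) {
    FILE *fp = fopen(path, "w");
    if (fp == NULL) return 0;
    char buf[64];
    for (int i = 0; i < count; i++) {
        sprintf(buf, "%d\t%d\t%d\n", p[i], q[i], n[i]);
        fputs(buf, fp);
    }
    fclose(fp);   /* without this, the last stdio buffer's worth of
                     data never reaches the file */
    return 1;
}
```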

Thanks for your help,

JSW