Hey @chris,
Great question. There are a number of ways to solve this. Personally, I’ve had the most luck with:
- Base64-encoding the file
- Splitting it up into 8KB chunks (arbitrary, but this seems to be a “safe” size for limited-bandwidth connections)
- Using the web.post API to individually send those chunks to Notehub (and beyond to whatever endpoint you are using)
- Reassembling the file on the server (being aware that Notehub will have converted the payload to binary for you)
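The steps above can be sketched in plain Python. This builds the list of `web.post` requests you'd feed to the Notecard one at a time; note that the route name and the chunk-index path convention here are placeholders I made up for illustration, so check the Notecard API reference for the exact `web.post` fields your firmware supports:

```python
import base64

CHUNK_SIZE = 8 * 1024  # 8KB: arbitrary, but a "safe" size for limited-bandwidth links

def build_chunk_requests(data: bytes, route: str = "file-upload"):
    """Split a file into 8KB chunks and wrap each one in a web.post request.

    The route name "file-upload" stands in for whatever proxy route you
    have configured in Notehub.
    """
    total = (len(data) + CHUNK_SIZE - 1) // CHUNK_SIZE
    requests = []
    for i in range(total):
        chunk = data[i * CHUNK_SIZE:(i + 1) * CHUNK_SIZE]
        requests.append({
            "req": "web.post",
            "route": route,
            # web.post takes the payload base64-encoded; Notehub decodes
            # it back to binary before forwarding to your endpoint.
            "payload": base64.b64encode(chunk).decode("ascii"),
            # Encoding the chunk index/total in the URL path lets the
            # server reassemble in order. This is my own convention, not
            # part of the web.post API.
            "name": f"/chunk/{i}/{total}",
        })
    return requests
```

On the server side, reassembly is just decoding each payload (if your endpoint receives it still base64-encoded) and concatenating the chunks in index order.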
You can get fancier by also performing a checksum on each chunk to make sure the file transfer was successful.
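For the checksum idea, one hedged sketch (field names are illustrative, not a defined wire format) is to ship an MD5 digest alongside each chunk and have the server verify it after decoding:

```python
import base64
import hashlib

def chunk_with_checksum(chunk: bytes) -> dict:
    """Pair a base64-encoded chunk with its MD5 digest so the receiver
    can verify the transfer. Field names are illustrative only."""
    return {
        "payload": base64.b64encode(chunk).decode("ascii"),
        "md5": hashlib.md5(chunk).hexdigest(),
    }

def verify_chunk(entry: dict) -> bool:
    """Server-side check: decode the payload and compare digests."""
    raw = base64.b64decode(entry["payload"])
    return hashlib.md5(raw).hexdigest() == entry["md5"]
```

If a chunk fails verification, the server can flag it and the device can retry just that chunk rather than the whole file.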
I’m hoping to put together a bit of a “best practices” guide for this in the near future. In the meantime, we just did a webinar with Edge Impulse on remotely updating ML model files, which has a file transfer component.
Hope that helps!