@florent The actual problem we had to solve was that AWS S3 allows up to 10000 parts per multipart upload, whereas other implementations only allow 1000 by default, so the s3 library calculates a chunk size that is too small for those backends.
We tried raising the limit to 10000 in the Swift proxy multipart and Swift S3 configuration, but that caused other problems.
We also ran into the same issue using Mender with MinIO as a backend: uploads fail for files over 10 GB.
Understanding this better now, the solution could simply be to let users specify a maximum number of chunks, which would default to 10000 but could be set to 1000.
We settled on our solution because of this code in the s3 js files:
chunk_size = max(MIN_PART_SIZE, ceil(file_size / 10000))
It was easier to raise MIN_PART_SIZE than to modify the code to turn the hardcoded 10000 into a parameter, but it means we have to re-apply the patch after each XO update.
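For illustration, the configurable-limit idea could look something like the sketch below. The names `MAX_PARTS`, `MIN_PART_SIZE`, `chunkSize`, and the `S3_MAX_PARTS` environment variable are assumptions for this sketch, not the actual identifiers in the library:

```javascript
// Sketch: make the part-count limit configurable instead of hardcoding
// 10000 (AWS S3's maximum). Backends like Swift or MinIO that default to
// 1000 parts could then be supported without patching the code.
const MIN_PART_SIZE = 5 * 1024 * 1024; // 5 MiB, S3's minimum part size

// Hypothetical override, e.g. S3_MAX_PARTS=1000 for Swift/MinIO defaults.
const MAX_PARTS = Number(process.env.S3_MAX_PARTS ?? 10000);

function chunkSize(fileSize) {
  // Same formula as the existing code, with the limit as a variable.
  return Math.max(MIN_PART_SIZE, Math.ceil(fileSize / MAX_PARTS));
}
```

With `MAX_PARTS` set to 1000, a 10 GiB file yields roughly 10 MiB chunks instead of hitting the minimum, which keeps the upload under the backend's part limit.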
Either solution would work for us, thanks for following up on this.