@florent I have tested in our dev environment backing up a 30GB VM, which compressed into 22GB and was uploaded in 739 parts of 40MB each.
I will test a larger VM next week, but I think this successful test is already conclusive.
Thank you
@florent That would be great. Thanks again!
@florent The actual problem we had to solve was that AWS S3 allows 10000 parts per multipart upload, whereas other implementations by default only allow 1000, so the S3 library calculates a chunk size that is too small for those backends.
We tried configuring both the Swift proxy multipart limit and Swift S3 to 10000, but that caused other problems.
We also ran into the same issue using Mender with MinIO as a backend: uploads failed with files over 10GB.
Understanding this better now, the solution could simply be to let the user specify a maximum number of parts, which would be 10000 by default but could be set to 1000.
We settled on our solution because of this code in the s3.js files:
chunk_size = max(MIN_PART_SIZE, ceil(file_size / 10000))
It was easier to set MIN_PART_SIZE than to modify the code to take a parameter instead of the hard-coded 10000, and we re-patch the code after each XO update.
Either solution would work for us, thanks for following up on this.
Hi @olivierlambert,
Finally got round to following up on this.
I found a better solution using DEFAULT_CHUNK_SIZE, which is normally set to 4MB. With MAX_PART_NUMBER at 1000, this limits the largest object backed up to third-party S3 implementations to about 4GB.
Increasing DEFAULT_CHUNK_SIZE in the S3 code, or exposing it as a setting (in a configuration file or the web GUI), would allow backups of VMs larger than 4GB without the memory overhead of tracking several thousand object parts.
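The arithmetic behind that limit can be written out directly; the constant values below mirror the ones quoted in this post, not necessarily the current xo-server source:

```javascript
// Illustrative arithmetic for the ~4 GB ceiling described above.
const DEFAULT_CHUNK_SIZE = 4 * 1024 * 1024; // 4 MB, as quoted in the post
const MAX_PART_NUMBER = 1000;               // part limit of Swift-style backends

// Largest object the backend can accept at this chunk size (~4 GB):
const maxObjectSize = DEFAULT_CHUNK_SIZE * MAX_PART_NUMBER;

// Conversely, the chunk size needed to fit a VM within the part limit:
function requiredChunkSize(vmSize) {
  return Math.ceil(vmSize / MAX_PART_NUMBER);
}
// e.g. a 30 GB VM needs roughly 30 MB chunks when only 1000 parts are allowed.
```

This is why raising DEFAULT_CHUNK_SIZE (or making it configurable) lifts the ceiling without needing more parts.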
Regards,
Mark
We use S3 swift proxy for swift storage and have built some infrastructure around it.
When we tried S3 for backup we got this error:
xo-server[144792]: error: InvalidArgument: Part number must be an integer between 1 and 1000, inclusive
Swift defaults to a maximum of 1000 parts, as do several commercial S3 implementations. We have tried increasing it, but that caused instability in one of the applications built around Swift SLO.
The solution I've implemented is to reduce MAX_PART_NUMBER in
@xen-orchestra/fs/dist/s3.js to 1000, but this is overwritten on each upgrade.
Could an option be added to the AWS S3 parameters to set this? Or even just the ability to override the value in a configuration file somewhere.