Nothing new on this.
But it now sounds like an encryption problem; it seems to have nothing to do with cloud storage.
@olivierlambert
Sorry, no AWS to test with.
I have now tested several times with several different values, but I get the same result with every attempt: the error occurs after about 3 hours.
And I don't think it's a Backblaze bug.
For testing purposes, I installed a local Minio server and added it as an encrypted remote in Xen Orchestra.
The same error occurs, every time after about 12 - 13 minutes.
My test job (full mirroring with selected VMs) contains 2 VM backups: a small one (XO, about 7 GB) that is mirrored correctly on the first try to both remotes (Minio and Backblaze).
When mirroring the large VM, the error occurs after about the same amount of time every time (I tried various large VM backups...).
Then I created another Minio remote (another bucket) without encryption and ran the same backup mirror job against the unencrypted remote.
And this time, it went through without any errors...
So it must be a bug related to S3 remotes, large VMs, full mirroring and encryption!
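If anyone wants to reproduce this, the throwaway Minio instance was started roughly like this (just a sketch; the container name, ports and data path are my own choices, adjust as needed):
docker run --name minio -p 9000:9000 -p 9001:9001 minio/minio server /data --console-address ":9001"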
@olivierlambert said in Backup Fail: Trying to add data in unsupported state:
It might be related to BackBlaze being overloaded at some point. Our advice:
- reduce backup concurrency
- reduce block concurrency during upload (writeblockConcurrency) and merge (mergeBlockConcurrency) in the config.toml
yesterday I reduced writeblockConcurrency to 12 and started the backup.
Same error. I will try some other values.
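For reference, the change in my xo-server config.toml looked roughly like this (a sketch; the section name and the exact key casing are assumptions on my part, so verify against the sample config shipped with xo-server):

# assumed section and key names, check xo-server's sample config
[backups]
writeBlockConcurrency = 12
mergeBlockConcurrency = 2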
Here is the error message from the orchestra.log file:
2024-09-18T09:50:27.217Z xo:backups:worker INFO starting backup
2024-09-18T12:57:16.979Z xo:backups:worker WARN possibly unhandled rejection {
error: Error: Trying to add data in unsupported state
at Cipheriv.update (node:internal/crypto/cipher:186:29)
at /root/git-down/xen-orchestra/@xen-orchestra/fs/dist/_encryptor.js:52:22
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async pumpToNode (node:internal/streams/pipeline:135:22)
}
2024-09-18T12:57:21.817Z xo:backups:AbstractVmRunner WARN writer step failed {
error: Error: Trying to add data in unsupported state
at Cipheriv.update (node:internal/crypto/cipher:186:29)
at /root/git-down/xen-orchestra/@xen-orchestra/fs/dist/_encryptor.js:52:22
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async pumpToNode (node:internal/streams/pipeline:135:22),
step: 'writer.run()',
writer: 'FullRemoteWriter'
}
2024-09-18T12:57:22.065Z xo:backups:worker INFO backup has ended
2024-09-18T12:57:22.076Z xo:backups:worker INFO process will exit {
duration: 11214858233,
exitCode: 0,
resourceUsage: {
userCPUTime: 1092931109,
systemCPUTime: 108325008,
maxRSS: 404280,
sharedMemorySize: 0,
unsharedDataSize: 0,
unsharedStackSize: 0,
minorPageFault: 2966382,
majorPageFault: 2,
swappedOut: 0,
fsRead: 134218296,
fsWrite: 0,
ipcSent: 0,
ipcReceived: 0,
signalsCount: 0,
voluntaryContextSwitches: 2662776,
involuntaryContextSwitches: 1238267
},
summary: { duration: '3h', cpuUsage: '11%', memoryUsage: '394.8 MiB' }
}
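If I read the stack trace correctly, it points at Node's Cipheriv: that exact error is what Node throws when update() is called on a cipher that has already been finalized. A minimal standalone reproduction of the Node behavior (not XO code, just to illustrate the error):

const crypto = require('node:crypto');

const key = crypto.randomBytes(32);
const iv = crypto.randomBytes(16);
const cipher = crypto.createCipheriv('aes-256-ctr', key, iv);

cipher.update('first chunk');
cipher.final();

// calling update() after final() throws:
// Error: Trying to add data in unsupported state
cipher.update('next chunk');

So it looks like something (a retry? an aborted stream?) keeps writing to the cipher stream after it was finalized.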
Before this error, I had the following error:
transfer
Start: 2024-09-11 15:14
End: 2024-09-11 16:07
Duration: an hour
Error: no tomes available
Type: full
I was able to fix that one by giving the Xen Orchestra VM more RAM; I assumed those errors were triggered by some kind of timeout.
When the current error first occurred, I doubled the RAM again. Unfortunately, that didn't help.
So, same error after 3 hours of uploading/mirroring an uncompressed backup to the encrypted Backblaze remote.
transfer
Start: 2024-09-17 07:27
End: 2024-09-17 10:33
Duration: 3 hours
Error: Trying to add data in unsupported state
Yes, I am making a non-compressed backup now.
And then I will try to mirror it with the mirroring job.
Report follows... but the uncompressed backup and upload will take some time.
Sorry... I forgot... it was with zstd compression.
Hi,
Same problem here.
It's an encrypted S3 remote to Backblaze.
Full mirror backup with selected VMs.
A small VM like Xen Orchestra works; as soon as a large VM is added (approx. 500 GB), the error occurs after about 3 hours.
Tried several times.
Xen Orchestra built from source.