XCP-ng

    Backup Fail: Trying to add data in unsupported state

      olivierlambert Vates 🪐 Co-Founder CEO

      Can you try without it and report?

        daniel.grimm

        Yes.

        I am making an uncompressed backup now.
        And then I will try to mirror it with the mirroring job.

        Report follows... but the uncompressed backup and upload will take some time.

          daniel.grimm

          So, same error after 3 hours of uploading/mirroring an uncompressed backup to the encrypted Backblaze remote.

          transfer
          Start: 2024-09-17 07:27
          End: 2024-09-17 10:33
          Duration: 3 hours
          Error: Trying to add data in unsupported state
          
            olivierlambert Vates 🪐 Co-Founder CEO

            I have the feeling it might be related to Backblaze and a potential timeout or something 🤔

              daniel.grimm

              Before this error, I had the following error:

              transfer
                  Start: 2024-09-11 15:14
                  End: 2024-09-11 16:07
                  Duration: an hour
                  Error: no tomes available
              Type: full

              I was able to fix this by giving the Xen Orchestra VM more RAM.
              I thought these errors were triggered by some kind of timeout.

              When the current error first occurred, I doubled the RAM again. Unfortunately, that didn't help.

                olivierlambert Vates 🪐 Co-Founder CEO

                Error: no tomes available

                🤔 Never heard of this before.

                  olivierlambert Vates 🪐 Co-Founder CEO

                  It seems to come from Backblaze, e.g. https://github.com/mastodon/mastodon/issues/30030

                  Sadly, I'm not sure this is something we are able to fix on our side 😕

                  (Linked issue: "Backblaze B2 'no tomes available' and related errors", opened by cadusilva in mastodon/mastodon, #30030.)

                    olivierlambert Vates 🪐 Co-Founder CEO

                    It might be related to Backblaze being overloaded at some point. Our advice:

                    • reduce backup concurrency
                    • reduce block concurrency during upload (writeblockConcurrency) and merge (mergeBlockConcurrency) in the config.toml (see the sketch below)
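
                    As a rough sketch only (the file location, the section name, and the exact key casing are assumptions to verify against the sample config.toml shipped with xo-server), lowering those values could look like this:

                        # config.toml of xo-server (path depends on how XO is installed)
                        # hypothetical values: lower concurrency to reduce pressure on the remote
                        [backups]
                        writeBlockConcurrency = 8
                        mergeBlockConcurrency = 2

                    Restart xo-server afterwards so the new values are picked up.
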
                      nvoss @olivierlambert

                      @olivierlambert @florent

                      Of note in our case: we use Wasabi (S3-compatible) as the remote in one setup and a Synology NAS as our local remote in the other. Both of those remotes fail with the unsupported-state error when the backups are encrypted.

                      In the same encrypted job I have the following machines, with their backup sizes and durations:

                      VM1 - 31.55GB - 47 mins
                      VM2 - 14.51GB - 22 mins
                      VM3 - 30.28GB - 48 mins
                      VM4 - 45.33GB - 24 mins
                      VM5 - FAIL - 1hr 27 min
                      VM6 - 2.14GB - 4 mins
                      VM7 - FAIL - 1hr 28 min
                      VM8 - 35.95GB - 1hr 5 min

                      The two failing machines have thin-provisioned disks whose sizes are:
                      VM5 -- 128GB and 100GB which are 10.94GB and 86MB on disk
                      VM7 -- 123GB and 128GB which are 11.09GB and 10.3MB on disk

                      At first I thought it was size-related, or perhaps related to duration. But what's causing that extra duration for machines of these sizes? Something about activity on the Windows VMs?

                      Or perhaps it's related to having multiple disks on Windows machines?

                        olivierlambert Vates 🪐 Co-Founder CEO

                        It might be a different problem (zstd compression failing on the host) versus a problem with the S3 provider. That's why I'd like to separate the two things.

                          daniel.grimm @olivierlambert

                          @olivierlambert said in Backup Fail: Trying to add data in unsupported state:

                          It might be related to BackBlaze being overloaded at some point. Our advice:

                          • reduce backup concurrency
                          • reduce block concurrency during upload (writeblockConcurrency) and merge (mergeBlockConcurrency) in the config.toml

                          Yesterday I reduced writeblockConcurrency to 12 and started the backup.
                          Same error. I will try some other values.

                          Here is the error message from the orchestra.log file:

                          2024-09-18T09:50:27.217Z xo:backups:worker INFO starting backup
                          2024-09-18T12:57:16.979Z xo:backups:worker WARN possibly unhandled rejection {
                            error: Error: Trying to add data in unsupported state
                                at Cipheriv.update (node:internal/crypto/cipher:186:29)
                                at /root/git-down/xen-orchestra/@xen-orchestra/fs/dist/_encryptor.js:52:22
                                at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
                                at async pumpToNode (node:internal/streams/pipeline:135:22)
                          }
                          2024-09-18T12:57:21.817Z xo:backups:AbstractVmRunner WARN writer step failed {
                            error: Error: Trying to add data in unsupported state
                                at Cipheriv.update (node:internal/crypto/cipher:186:29)
                                at /root/git-down/xen-orchestra/@xen-orchestra/fs/dist/_encryptor.js:52:22
                                at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
                                at async pumpToNode (node:internal/streams/pipeline:135:22),
                            step: 'writer.run()',
                            writer: 'FullRemoteWriter'
                          }
                          2024-09-18T12:57:22.065Z xo:backups:worker INFO backup has ended
                          2024-09-18T12:57:22.076Z xo:backups:worker INFO process will exit {
                            duration: 11214858233,
                            exitCode: 0,
                            resourceUsage: {
                              userCPUTime: 1092931109,
                              systemCPUTime: 108325008,
                              maxRSS: 404280,
                              sharedMemorySize: 0,
                              unsharedDataSize: 0,
                              unsharedStackSize: 0,
                              minorPageFault: 2966382,
                              majorPageFault: 2,
                              swappedOut: 0,
                              fsRead: 134218296,
                              fsWrite: 0,
                              ipcSent: 0,
                              ipcReceived: 0,
                              signalsCount: 0,
                              voluntaryContextSwitches: 2662776,
                              involuntaryContextSwitches: 1238267
                            },
                            summary: { duration: '3h', cpuUsage: '11%', memoryUsage: '394.8 MiB' }
                          }
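
                          For what it's worth, this particular Node.js error is raised by the crypto module when a cipher object receives more data after it has been finalized. A minimal standalone illustration (not Xen Orchestra code, just the underlying crypto behaviour) would be:

                              // repro.js - shows when Node raises "Trying to add data in unsupported state"
                              const crypto = require('crypto');

                              const key = crypto.randomBytes(32);
                              const iv = crypto.randomBytes(16);
                              const cipher = crypto.createCipheriv('aes-256-cbc', key, iv);

                              cipher.update('some data');
                              cipher.final();              // the cipher is now finalized
                              cipher.update('more data');  // throws: Trying to add data in unsupported state

                          So it looks as if the encrypted-remote pipeline ends up feeding the cipher stream after it has already been closed, which would fit the error only showing up on long, encrypted transfers.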
                          
                          
                            daniel.grimm

                            I have now tested several times with several different values, but I get the same result with every attempt: the error occurs after about 3 hours.

                            And I don't think it's a Backblaze bug.

                            For testing purposes, I installed a local MinIO server and added it as an encrypted remote in Xen Orchestra.
                            The same error occurs, this time after about 12-13 minutes every run.
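
                            (For anyone who wants to reproduce a similar local test: a throwaway MinIO instance can be started with Docker roughly as below and then added as an S3-compatible remote in XO. The container name, credentials and data path are only example values.)

                                docker run -d --name minio-test \
                                  -p 9000:9000 -p 9001:9001 \
                                  -e MINIO_ROOT_USER=minioadmin \
                                  -e MINIO_ROOT_PASSWORD=minioadmin \
                                  -v /srv/minio-data:/data \
                                  quay.io/minio/minio server /data --console-address ":9001"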

                            My test job (full mirroring with selected VMs) contains 2 VM backups: a small one (the XO VM, about 7 GB) that is mirrored correctly on the first try on both remotes (MinIO and Backblaze), and a large one. The error occurs after roughly the same amount of time whenever the large VM is mirrored (I tried various large VM backups...).

                            Then I created another MinIO remote (another bucket) without encryption and ran the same backup mirror job to the unencrypted remote.
                            And this time, it went through without any errors...

                            So it must be a bug related to S3 remotes, large VMs, full mirroring and encryption!

                              olivierlambert Vates 🪐 Co-Founder CEO

                              I'd love to see if you have the same error with AWS S3, because that would tremendously help to debug.

                                daniel.grimm @olivierlambert

                                @olivierlambert
                                Sorry, no AWS to test with.

                                  djingo

                                  I am seeing this problem on an encrypted NFS remote (the server runs unRAID), too. Only one of the 6 VMs that get a full backup is affected, and it is always the same VM that shows this error. The same schedule for these VMs did not produce this error on an unencrypted NFS remote on the same server. Also, using delta backups instead of full backups on the same encrypted remote does not produce this error.

                                  "stack": "Error: Trying to add data in unsupported state\n    at Cipheriv.update (node:internal/crypto/cipher:181:29)\n    at /etc/xen-orchestra/@xen-orchestra/fs/dist/_encryptor.js:52:22\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async pumpToNode (node:internal/streams/pipeline:135:22)"
                                  

                                  Is there any news regarding this issue?

                                    daniel.grimm @djingo

                                    Nothing new on this.

                                    But now it sounds like an encryption problem; it seems to have nothing to do with cloud storage.

                                      djingo @daniel.grimm

                                      @daniel-grimm definitely not related to cloud in my case!
