XCP-ng

    Some backups failing

RickyH

Most backups are working fine, but five are not; they end with "SR_BACKEND_FAILURE_44 (There is insufficient space)". Those five VMs will not snapshot either. The host is on XCP-ng 8.2.1.

Local storage on the host is 51% used.

[screenshot]

This is one of the VMs that won't back up. There are six identical VMs, and only one of them backs up correctly.

[screenshot]

      Any ideas?
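(For anyone wanting to reproduce this outside of Xen Orchestra, a minimal sketch, run on the pool master; the UUID is a placeholder for one of the failing VMs.)

    # List VMs to find the UUID of one of the failing VMs.
    xe vm-list params=uuid,name-label
    # Attempt a manual snapshot; on the affected VMs this should fail
    # with the same SR_BACKEND_FAILURE_44 error.
    xe vm-snapshot uuid=<vm-uuid> new-name-label=manual-test-snap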

Danp (Pro Support Team)

It sounds like you are running low on disk space on the SR where these VMs are located. Have you checked the Unhealthy VDIs section under the Dashboard > Health tab to see if you have a coalesce issue?
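If the CLI is easier, a rough sketch for checking coalesce activity on the host itself (the SR UUID is a placeholder):

    # The storage manager logs garbage-collection / coalesce work to /var/log/SMlog.
    grep -iE "coalesce|SR_BACKEND" /var/log/SMlog | tail -n 20
    # Re-scanning the SR kicks off the garbage collector if it isn't already running.
    xe sr-scan uuid=<sr-uuid>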

RickyH @Danp

@Danp Hi Dan, there is only one unhealthy VDI, but about 110 orphaned VDIs (500 MB)...?

[screenshot]
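(A rough way to cross-check XO's orphaned-VDI list from the host CLI; a sketch only, with the SR UUID as a placeholder. It lists managed, non-snapshot VDIs that no VBD references.)

    # For each managed, non-snapshot VDI on the SR, print it if no VBD points at it.
    for vdi in $(xe vdi-list sr-uuid=<sr-uuid> managed=true is-a-snapshot=false \
                 params=uuid --minimal | tr ',' ' '); do
      if [ -z "$(xe vbd-list vdi-uuid=$vdi --minimal)" ]; then
        xe vdi-list uuid=$vdi params=name-label,virtual-size
      fi
    done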

RickyH

Just to add something: the one VM that snapshots fine had management agent 9.4.1 on it, and the other five have 9.3.3. So I assumed this might have been the issue and upgraded one of them to 9.4.1 as a test. Still the same error.

[screenshot]

Note that about ten other VMs on the host snapshot and back up just fine.
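(The agent version each guest reports can also be read from the host; the VM UUID below is a placeholder.)

    # XAPI exposes the in-guest tools/agent version as a map of major/minor/micro/build.
    xe vm-param-get uuid=<vm-uuid> param-name=PV-drivers-version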

RickyH

              The error log for the snapshot failure:

              vm.snapshot
              {
                "id": "567459b9-362c-90e8-9b2b-9e61572243ff"
              }
              {
                "code": "SR_BACKEND_FAILURE_44",
                "params": [
                  "",
                  "There is insufficient space",
                  ""
                ],
                "task": {
                  "uuid": "f2c46023-ea35-7d61-1106-b799adec4648",
                  "name_label": "Async.VM.snapshot",
                  "name_description": "",
                  "allowed_operations": [],
                  "current_operations": {},
                  "created": "20250829T10:31:09Z",
                  "finished": "20250829T10:31:16Z",
                  "status": "failure",
                  "resident_on": "OpaqueRef:b71238e8-bce1-4a59-b9be-870e2de57558",
                  "progress": 1,
                  "type": "<none/>",
                  "result": "",
                  "error_info": [
                    "SR_BACKEND_FAILURE_44",
                    "",
                    "There is insufficient space",
                    ""
                  ],
                  "other_config": {},
                  "subtask_of": "OpaqueRef:NULL",
                  "subtasks": [
                    "OpaqueRef:09be2d2a-c450-42fc-8c3a-3c876274bb18"
                  ],
                  "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 80))((process xapi)(filename list.ml)(line 110))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 122))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 130))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 171))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 209))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 220))((process xapi)(filename list.ml)(line 121))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 222))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 442))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/xapi_vm_snapshot.ml)(line 33))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 131))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 205))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 95)))"
                },
                "message": "SR_BACKEND_FAILURE_44(, There is insufficient space, )",
                "name": "XapiError",
                "stack": "XapiError: SR_BACKEND_FAILURE_44(, There is insufficient space, )
                  at Function.wrap (file:///opt/xo/xo-builds/xen-orchestra-202408190902/packages/xen-api/_XapiError.mjs:16:12)
                  at default (file:///opt/xo/xo-builds/xen-orchestra-202408190902/packages/xen-api/_getTaskResult.mjs:13:29)
                  at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202408190902/packages/xen-api/index.mjs:1041:24)
                  at file:///opt/xo/xo-builds/xen-orchestra-202408190902/packages/xen-api/index.mjs:1075:14
                  at Array.forEach (<anonymous>)
                  at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202408190902/packages/xen-api/index.mjs:1065:12)
                  at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202408190902/packages/xen-api/index.mjs:1238:14)"
              }
              
Danp (Pro Support Team)

                What is the type and size of the SR? How much space is free on it?

[screenshot]
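The same information is also available from the host CLI:

    # Size, usage and type for every SR visible to the pool.
    xe sr-list params=uuid,name-label,type,physical-size,physical-utilisation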

RickyH

[screenshot]

                  The SR of the host is 51% full.

The VM: [screenshot]

Danp (Pro Support Team)

The 1 TB disk resides on a different SR, FME_Snap, which appears to be near capacity. This is why you can't take a snapshot of, or back up, these VMs.
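On a thick-provisioned (LVM-based) SR, taking a snapshot needs roughly the disk's full virtual size in free space, so a near-full SR fails in exactly this way. A sketch for seeing which SR each of a VM's disks lives on (the VM name is a placeholder):

    # For each of the VM's disks, print the VDI name, its size and its SR.
    for vdi in $(xe vbd-list vm-name-label=<vm-name> type=Disk \
                 params=vdi-uuid --minimal | tr ',' ' '); do
      xe vdi-list uuid=$vdi params=name-label,virtual-size,sr-name-label
    done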

RickyH @Danp

@Danp Thanks, Dan, that makes sense. However, one of those six VMs is backing up and snapshotting perfectly, and it has the extra disk too.
Why would this be?

[screenshot]

[screenshot]
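(One way to compare from the CLI: dump what is actually allocated per VDI on FME_Snap; if the working VM's extra disk sits on another SR, that should show up here. The SR name is taken from the thread; the fields are standard xe VDI parameters.)

    # Virtual vs physically allocated size for every VDI on the FME_Snap SR.
    xe vdi-list sr-name-label=FME_Snap params=name-label,virtual-size,physical-utilisation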

RickyH

Is there any possibility of converting that SR to thin provisioning? Would that help in this case?

RickyH

Or could I get a one-off backup of those VMs by disconnecting that disk temporarily?
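(A sketch of that workaround, with placeholder UUIDs; the unplug only works while the guest supports hot-unplug, otherwise the VM needs to be shut down first.)

    # Find the VBD that connects the 1TB disk to the VM.
    xe vbd-list vm-uuid=<vm-uuid> params=uuid,vdi-name-label,device
    # Detach it, run the one-off backup, then re-attach.
    xe vbd-unplug uuid=<vbd-uuid>
    xe vbd-plug uuid=<vbd-uuid>
    # Note: if the snapshot still picks up the unplugged disk, the VBD itself may
    # need to be removed (xe vbd-destroy) and recreated afterwards (xe vbd-create).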

RickyH

Thanks, everyone, for your help. I believe I have a way forward now.
