XCP-ng

    Posts

    • RE: Backup Suddenly Failing

      @pilow - I know this has gotten a bit off topic, but in the interests of keeping folks informed... I've managed to migrate ALL VDIs over to remote NFS Storage, except this problem VM.

      To solve that problem, I:

      1. Added a second disk on the remote storage, attached it, and booted into clonezilla.
      2. Cloned the disk from INSIDE the VM.
      3. Detached and removed the old disk.

      The local SR is now empty of disks. I decided that instead of rebuilding the RAID with a new disk, I'm just going to pull the disks completely and replace them with shiny new ones. It's not worth the uncertainty of what MIGHT happen; all the disks in that array are the same age.

      EDIT: Backup Succeeded on the new disk, so I think you hit the nail on the head @pilow ... Failing SR.

      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @Pilow - This is why I'm nervous... BUT... EVERYTHING else has a good backup. I plan to shut down all the VMs and then backup once more while I rebuild the array.
      Fingers crossed.

      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @Pilow - Well... Didn't you ruin my day yesterday... LOL.

      Long story short... The local SR is on a RAID5 in the server. The damn thing isn't reporting that a drive has failed (hard failed; it's not even showing up in the RAID controller anymore). The array says degraded but otherwise fine... so it's weird to me that this could be related. I have a drive on order and plan to rebuild the array as soon as it arrives.

      Once that happens, I will start investigating this again... Maybe it magically clears up.

      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @pilow & @florent - The plot thickens. I'm unable to full-clone the VM...

      {
        "id": "0mn6ds7ih",
        "properties": {
          "method": "vm.copy",
          "params": {
            "vm": "afe4bee2-745d-da4a-0016-c74751856556",
            "sr": "247ef8a6-9c10-e100-acd3-c9193f34ddc3",
            "name": "ADMIN-VM02_COPY"
          },
          "name": "API call: vm.copy",
          "userId": "b06e5d9f-a602-4b76-a7bb-b1c915712ca3",
          "type": "api.call"
        },
        "start": 1774463552009,
        "status": "failure",
        "updatedAt": 1774464678345,
        "end": 1774464678344,
        "result": {
          "code": "VDI_COPY_FAILED",
          "params": [
            "Fatal error: exception Unix.Unix_error(Unix.EIO, \"read\", \"\")\n"
          ],
          "task": {
            "uuid": "555f90cc-12b7-7c2c-a2df-0f29a16a007e",
            "name_label": "Async.VM.copy",
            "name_description": "",
            "allowed_operations": [],
            "current_operations": {},
            "created": "20260325T18:32:32Z",
            "finished": "20260325T18:51:18Z",
            "status": "failure",
            "resident_on": "OpaqueRef:22c5ddea-00c6-f412-4439-536c4bbdca63",
            "progress": 1,
            "type": "<none/>",
            "result": "",
            "error_info": [
              "VDI_COPY_FAILED",
              "Fatal error: exception Unix.Unix_error(Unix.EIO, \"read\", \"\")\n"
            ],
            "other_config": {},
            "subtask_of": "OpaqueRef:NULL",
            "subtasks": [
              "OpaqueRef:655cc4e3-0205-ba7d-5831-4b191ecfba9e"
            ],
            "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 77))((process xapi)(filename list.ml)(line 110))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 120))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 128))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 171))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 210))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 221))((process xapi)(filename list.ml)(line 121))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 223))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 461))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/xapi_vm.ml)(line 791))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/rbac.ml)(line 228))((process xapi)(filename ocaml/xapi/rbac.ml)(line 238))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 78)))"
          },
          "message": "VDI_COPY_FAILED(Fatal error: exception Unix.Unix_error(Unix.EIO, \"read\", \"\")\n)",
          "name": "XapiError",
          "stack": "XapiError: VDI_COPY_FAILED(Fatal error: exception Unix.Unix_error(Unix.EIO, \"read\", \"\")\n)\n    at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202603241416/packages/xen-api/_XapiError.mjs:16:12)\n    at default (file:///opt/xo/xo-builds/xen-orchestra-202603241416/packages/xen-api/_getTaskResult.mjs:13:29)\n    at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202603241416/packages/xen-api/index.mjs:1078:24)\n    at file:///opt/xo/xo-builds/xen-orchestra-202603241416/packages/xen-api/index.mjs:1112:14\n    at Array.forEach (<anonymous>)\n    at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202603241416/packages/xen-api/index.mjs:1102:12)\n    at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202603241416/packages/xen-api/index.mjs:1275:14)"
        }
      }
      
      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @Pilow - I can try this, but not until a bit later.

      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @Pilow - I did just that. Fails in the exact same way.

      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @Pilow - The issue is that I've changed nothing... And the job is suddenly failing. And all the other jobs and VMs are working just fine, so I don't think that would have anything to do with it.

      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @Pilow - That's correct. It's the same host and same job and same remote.

      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @florent - Same. Still just this one failing.

      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @florent - Here is the JSON. Removing the Snapshots now and trying again with the merge synchronously toggled off.

      Note the remote is a Synology using NFS, if that matters.

      {
        "data": {
          "mode": "delta",
          "reportWhen": "failure"
        },
        "id": "1774449668020",
        "jobId": "7fc5396a-5383-4dab-91fe-6758eb8b7474",
        "jobName": "ADMIN VMS",
        "message": "backup",
        "scheduleId": "d09acecc-cc98-4cfd-84a4-5bfd1575b20f",
        "start": 1774449668020,
        "status": "failure",
        "infos": [
          {
            "data": {
              "vms": [
                "b827a2ad-361d-e44c-19ca-f9d632baacf8",
                "afe4bee2-745d-da4a-0016-c74751856556"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "b827a2ad-361d-e44c-19ca-f9d632baacf8",
              "name_label": "ADMIN-VM01"
            },
            "id": "1774449670085",
            "message": "backup VM",
            "start": 1774449670085,
            "status": "success",
            "tasks": [
              {
                "id": "1774449670095",
                "message": "clean-vm",
                "start": 1774449670095,
                "status": "success",
                "end": 1774449670170,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1774449670451",
                "message": "snapshot",
                "start": 1774449670451,
                "status": "success",
                "end": 1774449672123,
                "result": "dad1585e-4094-88aa-4894-d521fae5cb63"
              },
              {
                "data": {
                  "id": "9f2e49f9-4e87-444a-aa68-4cbf73f28e6d",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1774449672123:0",
                "message": "export",
                "start": 1774449672123,
                "status": "success",
                "tasks": [
                  {
                    "id": "1774449673924",
                    "message": "transfer",
                    "start": 1774449673924,
                    "status": "success",
                    "end": 1774449690670,
                    "result": {
                      "size": 283115520
                    }
                  },
                  {
                    "id": "1774449697186",
                    "message": "clean-vm",
                    "start": 1774449697186,
                    "status": "success",
                    "tasks": [
                      {
                        "id": "1774449698513",
                        "message": "merge",
                        "start": 1774449698513,
                        "status": "success",
                        "end": 1774449706694
                      }
                    ],
                    "end": 1774449706704,
                    "result": {
                      "merge": true
                    }
                  }
                ],
                "end": 1774449706707
              }
            ],
            "end": 1774449706707
          },
          {
            "data": {
              "type": "VM",
              "id": "afe4bee2-745d-da4a-0016-c74751856556",
              "name_label": "ADMIN-VM02"
            },
            "id": "1774449670088",
            "message": "backup VM",
            "start": 1774449670088,
            "status": "failure",
            "tasks": [
              {
                "id": "1774449670096",
                "message": "clean-vm",
                "start": 1774449670096,
                "status": "success",
                "end": 1774449670110,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1774449670452",
                "message": "snapshot",
                "start": 1774449670452,
                "status": "success",
                "end": 1774449673024,
                "result": "77d9de45-e6b7-d202-9245-7db47b6fd9c9"
              },
              {
                "data": {
                  "id": "9f2e49f9-4e87-444a-aa68-4cbf73f28e6d",
                  "isFull": true,
                  "type": "remote"
                },
                "id": "1774449673024:0",
                "message": "export",
                "start": 1774449673024,
                "status": "failure",
                "tasks": [
                  {
                    "id": "1774449674094",
                    "message": "transfer",
                    "start": 1774449674094,
                    "status": "failure",
                    "end": 1774451157435,
                    "result": {
                      "text": "HTTP/1.1 500 Internal Error\r\ncontent-length: 266\r\ncontent-type: text/html\r\nconnection: close\r\ncache-control: no-cache, no-store\r\n\r\n<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred; please wait a while and try again. If the problem persists, please contact your support representative.<h1> Additional information </h1>VDI_IO_ERROR: [ Device I/O errors ]</body></html>",
                      "message": "stream has ended with not enough data (actual: 397, expected: 2097152)",
                      "name": "Error",
                      "stack": "Error: stream has ended with not enough data (actual: 397, expected: 2097152)\n    at readChunkStrict (/opt/xo/xo-builds/xen-orchestra-202603241416/@vates/read-chunk/index.js:88:19)\n    at process.processTicksAndRejections (node:internal/process/task_queues:104:5)\n    at async #read (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:98:65)\n    at async generator (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:199:22)\n    at async Timeout.next (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@vates/generator-toolbox/dist/timeout.mjs:14:24)\n    at async generatorWithLength (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/disk-transform/dist/Throttled.mjs:12:44)\n    at async Throttle.createThrottledGenerator (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@vates/generator-toolbox/dist/throttle.mjs:53:30)\n    at async ThrottledDisk.diskBlocks (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/disk-transform/dist/Disk.mjs:26:30)\n    at async Promise.all (index 0)\n    at async ForkedDisk.diskBlocks (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/disk-transform/dist/SynchronizedDisk.mjs:18:30)"
                    }
                  },
                  {
                    "id": "1774451158098",
                    "message": "clean-vm",
                    "start": 1774451158098,
                    "status": "success",
                    "end": 1774451158157,
                    "result": {
                      "merge": false
                    }
                  }
                ],
                "end": 1774451158216
              }
            ],
            "end": 1774451158218,
            "result": {
              "errno": -2,
              "code": "ENOENT",
              "syscall": "stat",
              "path": "/opt/xo/mounts/9f2e49f9-4e87-444a-aa68-4cbf73f28e6d/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/7fc5396a-5383-4dab-91fe-6758eb8b7474/530abab7-9ea9-43d4-be6e-acb3fbf67065/20260325T144114Z.alias.vhd",
              "message": "ENOENT: no such file or directory, stat '/opt/xo/mounts/9f2e49f9-4e87-444a-aa68-4cbf73f28e6d/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/7fc5396a-5383-4dab-91fe-6758eb8b7474/530abab7-9ea9-43d4-be6e-acb3fbf67065/20260325T144114Z.alias.vhd'",
              "name": "Error",
              "stack": "Error: ENOENT: no such file or directory, stat '/opt/xo/mounts/9f2e49f9-4e87-444a-aa68-4cbf73f28e6d/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/7fc5396a-5383-4dab-91fe-6758eb8b7474/530abab7-9ea9-43d4-be6e-acb3fbf67065/20260325T144114Z.alias.vhd'\nFrom:\n    at NfsHandler.addSyncStackTrace (/opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/fs/dist/local.js:21:26)\n    at NfsHandler._getSize (/opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/fs/dist/local.js:113:48)\n    at /opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/fs/dist/utils.js:29:26\n    at new Promise (<anonymous>)\n    at NfsHandler.<anonymous> (/opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/fs/dist/utils.js:24:12)\n    at loopResolver (/opt/xo/xo-builds/xen-orchestra-202603241416/node_modules/promise-toolbox/retry.js:83:46)\n    at new Promise (<anonymous>)\n    at loop (/opt/xo/xo-builds/xen-orchestra-202603241416/node_modules/promise-toolbox/retry.js:85:22)\n    at NfsHandler.retry (/opt/xo/xo-builds/xen-orchestra-202603241416/node_modules/promise-toolbox/retry.js:87:10)\n    at NfsHandler._getSize (/opt/xo/xo-builds/xen-orchestra-202603241416/node_modules/promise-toolbox/retry.js:103:18)"
            }
          }
        ],
        "end": 1774451158219
      }
      
      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @florent - Any additional information I can provide? The backup is still failing, and there's really no indication why.

      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @florent - It was on, I toggled it off, re-ran the backup, still failed.

      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @olivierlambert - You beat me to it. I did just that. I'm at the latest commit and just reran the backups. This is the ONLY VM that fails out of 6 VMs.

      Failed with the exact same error. The files referenced are different, but the error related to the stream is the exact same.

      The VM works fine. There's no indication of an issue with the virtual hard drive itself.

      I've also tried with the VM running and powered off. Same issue.

      It happens to be a Windows 10 VM. I do have another Windows VM that backs up just fine (but it's a Windows Server VM).

      They all back up to the exact same remote.

      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @olivierlambert - It's XO from sources. The about page says:

      Xen Orchestra, commit d1736
      Master, commit f2b19

      What's weird is that I have 6 other backups, all configured the same way, and all work perfectly fine. There's even another VM in that same backup, and that one works fine too.

      posted in Backup
      JSylvia007
    • Backup Suddenly Failing

      Howdy all! I have a backup configuration that's been working fine for years. It's for some non-critical VMs, but recently, one of the VMs mysteriously started failing.

      Since it's not critical, I just said "screw it" and deleted the backup and recreated the configuration. This completely new backup is still failing, and only on that one problem VM.

      Error: stream has ended with not enough data (actual: 397, expected: 2097152)
      
      Error: ENOENT: no such file or directory, stat '/opt/xo/mounts/9f2e49f9-4e87-444a-aa68-4cbf73f28e6d/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/7fc5396a-5383-4dab-91fe-6758eb8b7474/530abab7-9ea9-43d4-be6e-acb3fbf67065/20260323T124158Z.alias.vhd'
      

      I re-ran it and got some more info, but that could just be because the initial backup in this new configuration had failed...

      ADMIN-VM02 (xcpng01) 
      Clean VM directory 
      VHD check error
      path
      "/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/e57785aa-f99d-4f67-b951-5c6ac5fef518/530abab7-9ea9-43d4-be6e-acb3fbf67065/20260313T020007Z.alias.vhd"
      error
      {"generatedMessage":false,"code":"ERR_ASSERTION","actual":false,"expected":true,"operator":"==","diff":"simple"}
      
      orphan merge state
      mergeStatePath
      "/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/e57785aa-f99d-4f67-b951-5c6ac5fef518/530abab7-9ea9-43d4-be6e-acb3fbf67065/.20260313T020007Z.alias.vhd.merge.json"
      missingVhdPath
      "/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/e57785aa-f99d-4f67-b951-5c6ac5fef518/530abab7-9ea9-43d4-be6e-acb3fbf67065/20260313T020007Z.alias.vhd"
      
      missing target of alias
      alias
      "/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/e57785aa-f99d-4f67-b951-5c6ac5fef518/530abab7-9ea9-43d4-be6e-acb3fbf67065/20260313T020007Z.alias.vhd"
      
      Start: 2026-03-23 17:26
      End: 2026-03-23 17:26
      

      Any idea what's going on here?

      posted in Backup
      JSylvia007
    • RE: Remove a VM without Destroying the Disk

      @Anonabhar Thanks!! This worked exactly as you described!

      posted in Xen Orchestra
      JSylvia007
    • RE: Remove a VM without Destroying the Disk

      @Anonabhar - Brilliant. That's exactly what I was looking for...

      Second (but related question) - once I delete the VM, will all my backups for that VM become invalid? I don't actually mind deleting the VM and the active disk, IF I can just go into restore and restore the VM.

      posted in Xen Orchestra
      JSylvia007
    • Remove a VM without Destroying the Disk

      Hey all! I'm trying to decommission my first Xen VM. In VMware, I used to just "unregister" the VM and then move the disk files over to cold storage on my NAS.

      When I try to "remove" the VM in XCP-ng, it warns me that it's going to delete the disks.

      Question is... how can I "remove" the VM configuration so that the VM is no longer available, but still keep the disks intact? Is this possible with XCP-ng? Perhaps I need to rethink how I'm executing this...

      posted in Xen Orchestra
      JSylvia007
    • Help with Command Line Translation to XO Jobs

      Howdy all!
      I have a script that runs the following 2 commands:

      xe vm-shutdown --multiple power-state=running tags:contains=NAS_REQD
      xe vm-shutdown --multiple power-state=running tags:contains=NAS_REQD_LAST
      

      Is there any way I can somehow convert these to the Jobs section? I'd love to execute this from the GUI within XO, but I'm not sure I have the skills LOL.

      Conversely, the reverse of this script calls this:

      xe vm-start power-state=halted vm=NS01
      sleep 5
      xe vm-start power-state=halted vm=RELAY01
      sleep 5
      xe vm-start --multiple power-state=halted tags:contains=NAS_REQD
      

      This allows two VMs to start first, and then all the rest to start at the same time. Obviously I can add vm.start for the 2 specific VMs, but how would I then express "everything else" with that tag on the Jobs screen?

      posted in Advanced features
      JSylvia007
    • RE: Host CPU Statistics

      @splastunov said in Host CPU Statistics:

      You can get it with such command

      That command doesn't give me the aggregate, it only gives me the CPU for Dom0. I'd like to get that aggregate number.

      posted in Development
      JSylvia007