XCP-ng

    JSylvia007 (@JSylvia007)

    • Reputation: 4
    • Profile views: 3
    • Topics: 6
    • Posts: 28
    • Followers: 0
    • Following: 0
    • Age: 41
    • Website: www.jacobsylvia.com
    • Location: Seekonk, MA, USA


    Best posts made by JSylvia007

    • RE: Installed certificates Section in XO - What does it do?

      @olivierlambert - I'd also like to say that I'm a very recent homelab convert over to XCP-ng. I've been using VMware for like... 15 years.

      I have a project at work that I use native Xen for, and have been really impressed with the security and control that we get out of it.

      I've been in the process of shifting over to XCP-ng over this Christmas break, and I've been really, really happy. There was a bit of a learning curve, but I'm glad to be getting an increase in functionality over my free ESXi instance.

      Really great work you're all doing!

      posted in Xen Orchestra
      JSylvia007

    Latest posts made by JSylvia007

    • RE: Backup Suddenly Failing

      @pilow - I know this has gotten a bit off topic, but in the interest of keeping folks informed... I've managed to migrate ALL VDIs over to remote NFS storage, except for this problem VM.

      To solve that problem, I:

      1. Added a second disk on the remote storage, attached it, and booted into Clonezilla.
      2. Cloned the disk from INSIDE the VM.
      3. Detached and removed the old disk.

      The local SR is now void of disks. I decided that instead of rebuilding the RAID with a new disk, I'm just going to completely pull the disks and replace them with shiny new ones. It's just not worth the uncertainty of what MIGHT happen. All the disks in that array are the same age.
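
      In case it helps anyone doing the same shuffle, here's a rough Python sketch (not from my actual run; the SR UUID is a placeholder and it assumes it runs on the host where the xe CLI is available) of how you could double-check that an SR really has no VDIs left before pulling its disks:

      import subprocess

      SR_UUID = "00000000-0000-0000-0000-000000000000"  # placeholder, use your local SR's UUID

      # `xe vdi-list sr-uuid=<uuid> --minimal` prints a comma-separated list of
      # VDI UUIDs on that SR, or nothing at all if the SR is empty.
      out = subprocess.run(
          ["xe", "vdi-list", f"sr-uuid={SR_UUID}", "--minimal"],
          capture_output=True, text=True, check=True,
      ).stdout.strip()

      if out:
          print("SR still has VDIs:", out.split(","))
      else:
          print("SR is empty; safe to pull the disks.")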

      EDIT: The backup succeeded on the new disk, so I think you hit the nail on the head, @pilow... failing SR.

      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @Pilow - This is why I'm nervous... BUT... EVERYTHING else has a good backup. I plan to shut down all the VMs and then back up once more while I rebuild the array.
      Fingers crossed.

      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @Pilow - Well... Didn't you ruin my day yesterday... LOL.

      Long story short: the local SR is on a RAID 5 in the server. The damn thing isn't reporting that a drive has failed, but one has hard-failed; it's not even showing up in the RAID controller anymore. The array says degraded but otherwise fine... so it's weird to me that this could be related. I have a drive on order and plan to rebuild the array as soon as it arrives.

      Once that happens, I will start investigating this again... Maybe it magically clears up.

      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @pilow & @florent - The plot thickens. I'm unable to full-clone the VM...

      {
        "id": "0mn6ds7ih",
        "properties": {
          "method": "vm.copy",
          "params": {
            "vm": "afe4bee2-745d-da4a-0016-c74751856556",
            "sr": "247ef8a6-9c10-e100-acd3-c9193f34ddc3",
            "name": "ADMIN-VM02_COPY"
          },
          "name": "API call: vm.copy",
          "userId": "b06e5d9f-a602-4b76-a7bb-b1c915712ca3",
          "type": "api.call"
        },
        "start": 1774463552009,
        "status": "failure",
        "updatedAt": 1774464678345,
        "end": 1774464678344,
        "result": {
          "code": "VDI_COPY_FAILED",
          "params": [
            "Fatal error: exception Unix.Unix_error(Unix.EIO, \"read\", \"\")\n"
          ],
          "task": {
            "uuid": "555f90cc-12b7-7c2c-a2df-0f29a16a007e",
            "name_label": "Async.VM.copy",
            "name_description": "",
            "allowed_operations": [],
            "current_operations": {},
            "created": "20260325T18:32:32Z",
            "finished": "20260325T18:51:18Z",
            "status": "failure",
            "resident_on": "OpaqueRef:22c5ddea-00c6-f412-4439-536c4bbdca63",
            "progress": 1,
            "type": "<none/>",
            "result": "",
            "error_info": [
              "VDI_COPY_FAILED",
              "Fatal error: exception Unix.Unix_error(Unix.EIO, \"read\", \"\")\n"
            ],
            "other_config": {},
            "subtask_of": "OpaqueRef:NULL",
            "subtasks": [
              "OpaqueRef:655cc4e3-0205-ba7d-5831-4b191ecfba9e"
            ],
            "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 77))((process xapi)(filename list.ml)(line 110))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 120))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 128))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 171))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 210))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 221))((process xapi)(filename list.ml)(line 121))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 223))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 461))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/xapi_vm.ml)(line 791))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/rbac.ml)(line 228))((process xapi)(filename ocaml/xapi/rbac.ml)(line 238))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 78)))"
          },
          "message": "VDI_COPY_FAILED(Fatal error: exception Unix.Unix_error(Unix.EIO, \"read\", \"\")\n)",
          "name": "XapiError",
          "stack": "XapiError: VDI_COPY_FAILED(Fatal error: exception Unix.Unix_error(Unix.EIO, \"read\", \"\")\n)\n    at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202603241416/packages/xen-api/_XapiError.mjs:16:12)\n    at default (file:///opt/xo/xo-builds/xen-orchestra-202603241416/packages/xen-api/_getTaskResult.mjs:13:29)\n    at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202603241416/packages/xen-api/index.mjs:1078:24)\n    at file:///opt/xo/xo-builds/xen-orchestra-202603241416/packages/xen-api/index.mjs:1112:14\n    at Array.forEach (<anonymous>)\n    at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202603241416/packages/xen-api/index.mjs:1102:12)\n    at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202603241416/packages/xen-api/index.mjs:1275:14)"
        }
      }
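
      If you want to pull the interesting bits out of a blob like that without scrolling, here's a quick throwaway Python sketch (it assumes the log entry above is saved locally as task.json; the filename is just an example):

      import json

      # Load an XO task-log entry like the one above (saved as task.json).
      with open("task.json") as f:
          entry = json.load(f)

      result = entry.get("result", {})
      print("status:", entry.get("status"))          # failure
      print("error code:", result.get("code"))       # VDI_COPY_FAILED
      for p in result.get("params", []):
          print("detail:", p.strip())                # the Unix.EIO read error

      # The nested XAPI task carries the same error plus a backtrace.
      task = result.get("task", {})
      print("xapi task:", task.get("name_label"), task.get("status"))
      print("error_info:", task.get("error_info"))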
      
      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @Pilow - I can try this, but not until a bit later.

      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @Pilow - I did just that. Fails in the exact same way.

      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @Pilow - The issue is that I've changed nothing, and the job is suddenly failing. All the other jobs and VMs are working just fine, so I don't think that would have anything to do with it.

      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @Pilow - That's correct. It's the same host, same job, and same remote.

      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @florent - Same. Still just this one failing.

      posted in Backup
      JSylvia007
    • RE: Backup Suddenly Failing

      @florent - Here is the JSON. Removing the snapshots now and trying again with the "merge synchronously" option toggled off.

      Note the remote is a Synology using NFS, if that matters.

      {
        "data": {
          "mode": "delta",
          "reportWhen": "failure"
        },
        "id": "1774449668020",
        "jobId": "7fc5396a-5383-4dab-91fe-6758eb8b7474",
        "jobName": "ADMIN VMS",
        "message": "backup",
        "scheduleId": "d09acecc-cc98-4cfd-84a4-5bfd1575b20f",
        "start": 1774449668020,
        "status": "failure",
        "infos": [
          {
            "data": {
              "vms": [
                "b827a2ad-361d-e44c-19ca-f9d632baacf8",
                "afe4bee2-745d-da4a-0016-c74751856556"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "b827a2ad-361d-e44c-19ca-f9d632baacf8",
              "name_label": "ADMIN-VM01"
            },
            "id": "1774449670085",
            "message": "backup VM",
            "start": 1774449670085,
            "status": "success",
            "tasks": [
              {
                "id": "1774449670095",
                "message": "clean-vm",
                "start": 1774449670095,
                "status": "success",
                "end": 1774449670170,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1774449670451",
                "message": "snapshot",
                "start": 1774449670451,
                "status": "success",
                "end": 1774449672123,
                "result": "dad1585e-4094-88aa-4894-d521fae5cb63"
              },
              {
                "data": {
                  "id": "9f2e49f9-4e87-444a-aa68-4cbf73f28e6d",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1774449672123:0",
                "message": "export",
                "start": 1774449672123,
                "status": "success",
                "tasks": [
                  {
                    "id": "1774449673924",
                    "message": "transfer",
                    "start": 1774449673924,
                    "status": "success",
                    "end": 1774449690670,
                    "result": {
                      "size": 283115520
                    }
                  },
                  {
                    "id": "1774449697186",
                    "message": "clean-vm",
                    "start": 1774449697186,
                    "status": "success",
                    "tasks": [
                      {
                        "id": "1774449698513",
                        "message": "merge",
                        "start": 1774449698513,
                        "status": "success",
                        "end": 1774449706694
                      }
                    ],
                    "end": 1774449706704,
                    "result": {
                      "merge": true
                    }
                  }
                ],
                "end": 1774449706707
              }
            ],
            "end": 1774449706707
          },
          {
            "data": {
              "type": "VM",
              "id": "afe4bee2-745d-da4a-0016-c74751856556",
              "name_label": "ADMIN-VM02"
            },
            "id": "1774449670088",
            "message": "backup VM",
            "start": 1774449670088,
            "status": "failure",
            "tasks": [
              {
                "id": "1774449670096",
                "message": "clean-vm",
                "start": 1774449670096,
                "status": "success",
                "end": 1774449670110,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1774449670452",
                "message": "snapshot",
                "start": 1774449670452,
                "status": "success",
                "end": 1774449673024,
                "result": "77d9de45-e6b7-d202-9245-7db47b6fd9c9"
              },
              {
                "data": {
                  "id": "9f2e49f9-4e87-444a-aa68-4cbf73f28e6d",
                  "isFull": true,
                  "type": "remote"
                },
                "id": "1774449673024:0",
                "message": "export",
                "start": 1774449673024,
                "status": "failure",
                "tasks": [
                  {
                    "id": "1774449674094",
                    "message": "transfer",
                    "start": 1774449674094,
                    "status": "failure",
                    "end": 1774451157435,
                    "result": {
                      "text": "HTTP/1.1 500 Internal Error\r\ncontent-length: 266\r\ncontent-type: text/html\r\nconnection: close\r\ncache-control: no-cache, no-store\r\n\r\n<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred; please wait a while and try again. If the problem persists, please contact your support representative.<h1> Additional information </h1>VDI_IO_ERROR: [ Device I/O errors ]</body></html>",
                      "message": "stream has ended with not enough data (actual: 397, expected: 2097152)",
                      "name": "Error",
                      "stack": "Error: stream has ended with not enough data (actual: 397, expected: 2097152)\n    at readChunkStrict (/opt/xo/xo-builds/xen-orchestra-202603241416/@vates/read-chunk/index.js:88:19)\n    at process.processTicksAndRejections (node:internal/process/task_queues:104:5)\n    at async #read (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:98:65)\n    at async generator (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:199:22)\n    at async Timeout.next (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@vates/generator-toolbox/dist/timeout.mjs:14:24)\n    at async generatorWithLength (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/disk-transform/dist/Throttled.mjs:12:44)\n    at async Throttle.createThrottledGenerator (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@vates/generator-toolbox/dist/throttle.mjs:53:30)\n    at async ThrottledDisk.diskBlocks (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/disk-transform/dist/Disk.mjs:26:30)\n    at async Promise.all (index 0)\n    at async ForkedDisk.diskBlocks (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/disk-transform/dist/SynchronizedDisk.mjs:18:30)"
                    }
                  },
                  {
                    "id": "1774451158098",
                    "message": "clean-vm",
                    "start": 1774451158098,
                    "status": "success",
                    "end": 1774451158157,
                    "result": {
                      "merge": false
                    }
                  }
                ],
                "end": 1774451158216
              }
            ],
            "end": 1774451158218,
            "result": {
              "errno": -2,
              "code": "ENOENT",
              "syscall": "stat",
              "path": "/opt/xo/mounts/9f2e49f9-4e87-444a-aa68-4cbf73f28e6d/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/7fc5396a-5383-4dab-91fe-6758eb8b7474/530abab7-9ea9-43d4-be6e-acb3fbf67065/20260325T144114Z.alias.vhd",
              "message": "ENOENT: no such file or directory, stat '/opt/xo/mounts/9f2e49f9-4e87-444a-aa68-4cbf73f28e6d/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/7fc5396a-5383-4dab-91fe-6758eb8b7474/530abab7-9ea9-43d4-be6e-acb3fbf67065/20260325T144114Z.alias.vhd'",
              "name": "Error",
              "stack": "Error: ENOENT: no such file or directory, stat '/opt/xo/mounts/9f2e49f9-4e87-444a-aa68-4cbf73f28e6d/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/7fc5396a-5383-4dab-91fe-6758eb8b7474/530abab7-9ea9-43d4-be6e-acb3fbf67065/20260325T144114Z.alias.vhd'\nFrom:\n    at NfsHandler.addSyncStackTrace (/opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/fs/dist/local.js:21:26)\n    at NfsHandler._getSize (/opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/fs/dist/local.js:113:48)\n    at /opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/fs/dist/utils.js:29:26\n    at new Promise (<anonymous>)\n    at NfsHandler.<anonymous> (/opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/fs/dist/utils.js:24:12)\n    at loopResolver (/opt/xo/xo-builds/xen-orchestra-202603241416/node_modules/promise-toolbox/retry.js:83:46)\n    at new Promise (<anonymous>)\n    at loop (/opt/xo/xo-builds/xen-orchestra-202603241416/node_modules/promise-toolbox/retry.js:85:22)\n    at NfsHandler.retry (/opt/xo/xo-builds/xen-orchestra-202603241416/node_modules/promise-toolbox/retry.js:87:10)\n    at NfsHandler._getSize (/opt/xo/xo-builds/xen-orchestra-202603241416/node_modules/promise-toolbox/retry.js:103:18)"
            }
          }
        ],
        "end": 1774451158219
      }
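
      And since these job logs nest pretty deep, here's a small Python sketch (again, it assumes the log above is saved locally as backup-log.json; not part of the job output) that walks the tree and prints only the tasks that didn't succeed:

      import json

      # Recursively walk an XO backup job log (like the one above) and print
      # every task that did not end in success, with its error if present.
      def walk(task, path=""):
          label = task.get("message", "job")
          here = f"{path} > {label}" if path else label
          if task.get("status") != "success":
              result = task.get("result")
              err = ""
              if isinstance(result, dict):
                  err = result.get("message") or result.get("code") or ""
              print(f"{here}: {task.get('status')} {err}".rstrip())
          for sub in task.get("tasks", []):
              walk(sub, here)

      with open("backup-log.json") as f:
          walk(json.load(f))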
      
      posted in Backup
      JSylvia007