XCP-ng

    acebmxer

    @acebmxer

    Reputation: 22
    Profile views: 5
    Posts: 81
    Followers: 0
    Following: 0


    Best posts made by acebmxer

    • RE: Backups not working

      Confirming on commit 1a7b5 that backups completed successfully.

      posted in Backup
    • RE: Windows 2025 Standard 24H2.11 (iso release of sept 25) crash on reboot with "INACCESSIBLE BOOT DEVICE 0x7B" in XCP 8.2.1 and XCP 8.3

      @Pilow

      Did a dirty upgrade, left all boxes checked, and the VM is set up to update drivers via Windows Update...

      Screenshot 2025-09-30 151025.png

      I have rebooted the VM and it restarts with no issues.

      Screenshot 2025-09-30 151319.png

      posted in XCP-ng
    • RE: Windows 2025 Standard 24H2.11 (iso release of sept 25) crash on reboot with "INACCESSIBLE BOOT DEVICE 0x7B" in XCP 8.2.1 and XCP 8.3

      @dinhngtu Same here; mine were two fresh VMs with fresh installs of Server 2025. The older ISO works, the newer ISO does not. For me, with the newer ISO the system rebooted 2 or maybe 3 times, then crashed to a hard power-off of the VM.

      Again, no Xen tools installed, as the OS never finished installing on a fresh VM. No OS upgrade.

      posted in XCP-ng
    • RE: Veeam backup with XCP NG

      Correction: XCP-ng backups are application-aware... just not accessible from the right-click menu on the backup. If you click from the menu bar at the top, it is available...

      Screenshot 2025-10-03 104430.png

      posted in Backup
    • RE: 1 out of 6 vms failing backup

      @Pilow

      I thought restarting the toolstack affected both hosts? Either way, I did try that, but on host 1 originally. So I restarted it on host 2 (where the problem is); still no luck. I rebooted host 2 and that seemed to do the trick. It took a few minutes for garbage collection to complete. All clean now.

      Thanks.

      posted in Backup
    • RE: Veeam backup with XCP NG

      @Pilow I didn't realize Veeam support for XCP-ng had entered public beta. Downloading now and going to start testing.

      posted in Backup
    • Mirror Incremental or Mirror Full backup Tags???

      Not sure if this has been mentioned before. You can create a tag called "Backup", and in your backup job you can have it back up every VM with that tag. Why is this not an option for mirror jobs? I have some VMs I want to mirror to another remote, while others not so much.

      Having tags would make this a lot easier to manage, so you don't have to re-edit the mirror job to add or remove a VM to be mirrored.

      posted in Backup
    • RE: visual bug / backup state filter

      I just checked this on XOCE commit d76e7 and it shows the same.

      Screenshot 2025-09-22 163035.png

      Screenshot 2025-09-22 163111.png

      Screenshot 2025-09-22 163139.png

      Screenshot 2025-09-22 163330.png

      posted in Backup
    • RE: Delta Backups failing again XOCE. Error: Missing tag to check there are some transferred data

      @pierrebrunet

      Updated to commit 04338 and the backups are fixed. Thank you.

      posted in Backup
    • Backups not working

      Last night 2 of my VMs failed to complete a delta backup. As the tasks could not be cancelled in any way, I rebooted XO (built from sources); the tasks still showed "running", so I restarted the toolstack on host 1 and the tasks cleared. I attempted to restart the failed backups and again the backup just hangs: it creates the snapshot but never transfers data. The remote is the same location as the NFS storage the VMs are running from, so I know the storage is good.

      A few more reboots of XO and toolstack restarts followed. I rebooted both hosts, and each time backups get stuck. If I try to start a new backup (same job), all VMs hang. I tried to run a full delta backup, same result. I tried to update XO, but I am on the current master build as of today (6b263). I tried to do a force update and the backup still never completes.

      I built a new VM for XO, installed from sources, and it still fails.

      Screenshot 2025-06-21 094543.png

      Here is one of the logs from the backups...

      {
        "data": {
          "mode": "delta",
          "reportWhen": "always",
          "hideSuccessfulItems": true
        },
        "id": "1750503695411",
        "jobId": "95ac8089-69f3-404e-b902-21d0e878eec2",
        "jobName": "Backup Job 1",
        "message": "backup",
        "scheduleId": "76989b41-8bcf-4438-833a-84ae80125367",
        "start": 1750503695411,
        "status": "failure",
        "infos": [
          {
            "data": {
              "vms": [
                "b25a5709-f1f8-e942-f0cc-f443eb9b9cf3",
                "3446772a-4110-7a2c-db35-286c73af4ab4",
                "bce2b7f4-d602-5cdf-b275-da9554be61d3",
                "e0a3093a-52fd-f8dc-1c39-075eeb9d0314",
                "afbef202-af84-7e64-100a-e8a4c40d5130"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "b25a5709-f1f8-e942-f0cc-f443eb9b9cf3",
              "name_label": "SeedBox"
            },
            "id": "1750503696510",
            "message": "backup VM",
            "start": 1750503696510,
            "status": "interrupted",
            "tasks": [
              {
                "id": "1750503696519",
                "message": "clean-vm",
                "start": 1750503696519,
                "status": "success",
                "end": 1750503696822,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1750503697911",
                "message": "snapshot",
                "start": 1750503697911,
                "status": "success",
                "end": 1750503699564,
                "result": "6e2edbe9-d4bd-fd23-28b9-db4b03219e96"
              },
              {
                "data": {
                  "id": "1575a1d8-3f87-4160-94fc-b9695c3684ac",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1750503699564:0",
                "message": "export",
                "start": 1750503699564,
                "status": "success",
                "tasks": [
                  {
                    "id": "1750503701979",
                    "message": "clean-vm",
                    "start": 1750503701979,
                    "status": "success",
                    "end": 1750503702141,
                    "result": {
                      "merge": false
                    }
                  }
                ],
                "end": 1750503702142
              }
            ],
            "warnings": [
              {
                "data": {
                  "attempt": 1,
                  "error": "invalid HTTP header in response body"
                },
                "message": "Retry the VM backup due to an error"
              }
            ]
          },
          {
            "data": {
              "type": "VM",
              "id": "3446772a-4110-7a2c-db35-286c73af4ab4",
              "name_label": "XO"
            },
            "id": "1750503696512",
            "message": "backup VM",
            "start": 1750503696512,
            "status": "interrupted",
            "tasks": [
              {
                "id": "1750503696518",
                "message": "clean-vm",
                "start": 1750503696518,
                "status": "success",
                "end": 1750503696693,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1750503712472",
                "message": "snapshot",
                "start": 1750503712472,
                "status": "success",
                "end": 1750503713915,
                "result": "a1bdef52-142c-5996-6a49-169ef390aa2e"
              },
              {
                "data": {
                  "id": "1575a1d8-3f87-4160-94fc-b9695c3684ac",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1750503713915:0",
                "message": "export",
                "start": 1750503713915,
                "status": "success",
                "tasks": [
                  {
                    "id": "1750503716280",
                    "message": "clean-vm",
                    "start": 1750503716280,
                    "status": "success",
                    "end": 1750503716383,
                    "result": {
                      "merge": false
                    }
                  }
                ],
                "end": 1750503716385
              }
            ],
            "warnings": [
              {
                "data": {
                  "attempt": 1,
                  "error": "invalid HTTP header in response body"
                },
                "message": "Retry the VM backup due to an error"
              }
            ]
          },
          {
            "data": {
              "type": "VM",
              "id": "bce2b7f4-d602-5cdf-b275-da9554be61d3",
              "name_label": "iVentoy"
            },
            "id": "1750503702145",
            "message": "backup VM",
            "start": 1750503702145,
            "status": "interrupted",
            "tasks": [
              {
                "id": "1750503702148",
                "message": "clean-vm",
                "start": 1750503702148,
                "status": "success",
                "end": 1750503702233,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1750503702532",
                "message": "snapshot",
                "start": 1750503702532,
                "status": "success",
                "end": 1750503704850,
                "result": "05c5365e-3bc5-4640-9b29-0684ffe6d601"
              },
              {
                "data": {
                  "id": "1575a1d8-3f87-4160-94fc-b9695c3684ac",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1750503704850:0",
                "message": "export",
                "start": 1750503704850,
                "status": "interrupted",
                "tasks": [
                  {
                    "id": "1750503706813",
                    "message": "transfer",
                    "start": 1750503706813,
                    "status": "interrupted"
                  }
                ]
              }
            ],
            "infos": [
              {
                "message": "Transfer data using NBD"
              }
            ]
          },
          {
            "data": {
              "type": "VM",
              "id": "e0a3093a-52fd-f8dc-1c39-075eeb9d0314",
              "name_label": "Docker of Things"
            },
            "id": "1750503716389",
            "message": "backup VM",
            "start": 1750503716389,
            "status": "interrupted",
            "tasks": [
              {
                "id": "1750503716395",
                "message": "clean-vm",
                "start": 1750503716395,
                "status": "success",
                "warnings": [
                  {
                    "data": {
                      "path": "/xo-vm-backups/e0a3093a-52fd-f8dc-1c39-075eeb9d0314/20250604T160135Z.json",
                      "actual": 6064872448,
                      "expected": 6064872960
                    },
                    "message": "cleanVm: incorrect backup size in metadata"
                  }
                ],
                "end": 1750503716886,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1750503717182",
                "message": "snapshot",
                "start": 1750503717182,
                "status": "success",
                "end": 1750503719640,
                "result": "9effb56d-68e6-8015-6bd5-64fa65acbada"
              },
              {
                "data": {
                  "id": "1575a1d8-3f87-4160-94fc-b9695c3684ac",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1750503719640:0",
                "message": "export",
                "start": 1750503719640,
                "status": "interrupted",
                "tasks": [
                  {
                    "id": "1750503721601",
                    "message": "transfer",
                    "start": 1750503721601,
                    "status": "interrupted"
                  }
                ]
              }
            ],
            "infos": [
              {
                "message": "Transfer data using NBD"
              }
            ]
          }
        ],
        "end": 1750504870213,
        "result": {
          "message": "worker exited with code null and signal SIGTERM",
          "name": "Error",
          "stack": "Error: worker exited with code null and signal SIGTERM\n    at ChildProcess.<anonymous> (file:///opt/xo/xo-builds/xen-orchestra-202506202218/@xen-orchestra/backups/runBackupWorker.mjs:24:48)\n    at ChildProcess.emit (node:events:518:28)\n    at ChildProcess.patchedEmit [as emit] (/opt/xo/xo-builds/xen-orchestra-202506202218/@xen-orchestra/log/configure.js:52:17)\n    at Process.ChildProcess._handle.onexit (node:internal/child_process:293:12)\n    at Process.callbackTrampoline (node:internal/async_hooks:130:17)"
        }
      }
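      Reading a log this long by hand is tedious; a quick way to spot which steps failed is to count the "status" fields. A minimal sketch, assuming the JSON above was saved as backup-log.json (the filename is just an example):

```shell
# Summarize the "status" fields of a saved XO backup log to spot failed
# steps at a glance (path is an assumption; adjust to where you saved it).
log_file=backup-log.json
[ -f "$log_file" ] && grep -o '"status": "[a-z]*"' "$log_file" | sort | uniq -c
```

      In the log above this immediately shows the interrupted transfer tasks alongside the successful snapshot and clean-vm steps.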
      

      Screenshot 2025-06-21 095553.png

      posted in Backup

    Latest posts made by acebmxer

    • RE: How to protect a VM and Disks from accidental exclusion

      While the system prevents you from deleting a VM's disk while the VM is running, there is nothing to stop you from deleting a disk from a VM that is powered off.

      The checkbox under Advanced for the VM just protects the VM itself, not the disks separately.

      I guess they have some staff that like to clean up things that should be left alone... that's my take.

      posted in XCP-ng
    • RE: Veeam backup with XCP NG

      @redneckitguy
      File-level restore works out of the box, but application restore (i.e. Active Directory) not so much. Still testing.

      Edit: the backup job from XCP-ng does not offer the option for an application-aware backup. When backing up via the agent, yes, but that's still two separate backups.

      Screenshot 2025-10-03 103535.png

      Screenshot 2025-10-03 103549.png

      Screenshot 2025-10-03 103605.png

      Screenshot 2025-10-03 103626.png

      posted in Backup
    • RE: 1 out of 6 vms failing backup

      Maybe this was caused by Veeam? Other VMs that were backed up by Veeam are not having issues; just this one.

      Screenshot 2025-10-02 220500.png

      When I click the forget button, this error shows:

      OPERATION_NOT_ALLOWED(VBD '817247bb-50a9-6b1a-04bc-1c7458e9f824' still attached to '5e876c35-6d27-4090-950b-a4d2a94d4ec8')
      

      Screenshot 2025-10-02 220806.png

      When I click disconnect:

      INTERNAL_ERROR(Expected 0 or 1 VDI with datapath, had 3)
      

      When I click forget:

      OPERATION_NOT_ALLOWED(VBD '817247bb-50a9-6b1a-04bc-1c7458e9f824' still attached to '5e876c35-6d27-4090-950b-a4d2a94d4ec8')
      

      When I click destroy:

      VDI_IN_USE(OpaqueRef:e9edb90f-c8c4-b7d5-889b-893779a626de, destroy)
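      For anyone digging into the same errors: the UUIDs embedded in these XAPI messages can be pulled out with a one-liner, so they can then be looked up on the host with the xe CLI (e.g. xe vbd-list / xe vbd-unplug). A sketch, using the first error message above:

```shell
# Extract the VBD and VM UUIDs from a XAPI error message (copied from
# the OPERATION_NOT_ALLOWED error above) for use with the xe CLI.
msg="OPERATION_NOT_ALLOWED(VBD '817247bb-50a9-6b1a-04bc-1c7458e9f824' still attached to '5e876c35-6d27-4090-950b-a4d2a94d4ec8')"
echo "$msg" | grep -oE '[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}'
```

      This prints the VBD UUID and the VM UUID on separate lines.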
      
      posted in Backup
    • 1 out of 6 vms failing backup

      Currently on commit CF044.

      Backups from last night all passed; this afternoon 1 Windows VM is failing its backup.

      Screenshot 2025-10-02 133426.png

        "id": "1759427171116",
                "message": "clean-vm",
                "start": 1759427171116,
                "status": "success",
                "end": 1759427171251,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1759427171846",
                "message": "snapshot",
                "start": 1759427171846,
                "status": "success",
                "end": 1759427174310,
                "result": "cd33d2e1-b161-4258-a1dc-704402cf9f96"
              },
              {
                "data": {
                  "id": "52af1ce0-abad-4478-ac69-db1b7cfcefd8",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1759427174311",
                "message": "export",
                "start": 1759427174311,
                "status": "success",
                "tasks": [
                  {
                    "id": "1759427175416",
                    "message": "transfer",
                    "start": 1759427175416,
                    "status": "success",
                    "end": 1759427349288,
                    "result": {
                      "size": 11827937280
                    }
                  },
                  {
                    "id": "1759427397567",
                    "message": "clean-vm",
                    "start": 1759427397567,
                    "status": "success",
                    "warnings": [
                      {
                        "data": {
                          "path": "/xo-vm-backups/2eaf6b24-ae55-6e01-a3a4-aa710d221834/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/0e0733db-3db0-437b-aff5-d2c7644c08a2/20251002T174615Z.vhd",
                          "error": {
                            "parent": "/xo-vm-backups/2eaf6b24-ae55-6e01-a3a4-aa710d221834/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/0e0733db-3db0-437b-aff5-d2c7644c08a2/20251002T040024Z.vhd",
                            "child1": "/xo-vm-backups/2eaf6b24-ae55-6e01-a3a4-aa710d221834/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/0e0733db-3db0-437b-aff5-d2c7644c08a2/20251002T170006Z.vhd",
                            "child2": "/xo-vm-backups/2eaf6b24-ae55-6e01-a3a4-aa710d221834/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/0e0733db-3db0-437b-aff5-d2c7644c08a2/20251002T174615Z.vhd"
                          }
                        },
                        "message": "VHD check error"
                      },
                      {
                        "data": {
                          "backup": "/xo-vm-backups/2eaf6b24-ae55-6e01-a3a4-aa710d221834/20251002T174615Z.json",
                          "missingVhds": [
                            "/xo-vm-backups/2eaf6b24-ae55-6e01-a3a4-aa710d221834/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/0e0733db-3db0-437b-aff5-d2c7644c08a2/20251002T174615Z.vhd"
                          ]
                        },
                        "message": "some VHDs linked to the backup are missing"
                      }
                    ],
                    "end": 1759427397842,
                    "result": {
                      "merge": false
                    }
                  }
                ],
                "end": 1759427397855
              }
            ],
            "infos": [
              {
                "message": "Transfer data using NBD"
              }
            ],
            "end": 1759427397855,
            "result": {
              "code": "VDI_IN_USE",
              "params": [
                "OpaqueRef:e9edb90f-c8c4-b7d5-889b-893779a626de",
                "destroy"
              ],
              "task": {
                "uuid": "db444258-2a26-a05f-3e5a-d6f1da90f46d",
                "name_label": "Async.VDI.destroy",
                "name_description": "",
                "allowed_operations": [],
                "current_operations": {},
                "created": "20251002T17:49:57Z",
                "finished": "20251002T17:49:57Z",
                "status": "failure",
                "resident_on": "OpaqueRef:fd6f7486-5079-c16d-eac9-59586ef1b0f9",
                "progress": 1,
                "type": "<none/>",
                "result": "",
                "error_info": [
                  "VDI_IN_USE",
                  "OpaqueRef:e9edb90f-c8c4-b7d5-889b-893779a626de",
                  "destroy"
                ],
                "other_config": {},
                "subtask_of": "OpaqueRef:NULL",
                "subtasks": [],
                "backtrace": "(((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 5189))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/helpers.ml)(line 1706))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 5178))((process xapi)(filename ocaml/xapi/rbac.ml)(line 188))((process xapi)(filename ocaml/xapi/rbac.ml)(line 197))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 77)))"
              },
              "message": "VDI_IN_USE(OpaqueRef:e9edb90f-c8c4-b7d5-889b-893779a626de, destroy)",
              "name": "XapiError",
              "stack": "XapiError: VDI_IN_USE(OpaqueRef:e9edb90f-c8c4-b7d5-889b-893779a626de, destroy)\n    at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202509301501/packages/xen-api/_XapiError.mjs:16:12)\n    at default (file:///opt/xo/xo-builds/xen-orchestra-202509301501/packages/xen-api/_getTaskResult.mjs:13:29)\n    at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202509301501/packages/xen-api/index.mjs:1073:24)\n    at file:///opt/xo/xo-builds/xen-orchestra-202509301501/packages/xen-api/index.mjs:1107:14\n    at Array.forEach (<anonymous>)\n    at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202509301501/packages/xen-api/index.mjs:1097:12)\n    at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202509301501/packages/xen-api/index.mjs:1270:14)\n    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)"
            }
          }
        ],
        "end": 1759427397855
      
      posted in Backup
    • RE: Veeam backup with XCP NG

      And when not accidentally routing the backups... it saves 6 min.

      Screenshot 2025-10-01 224148.png

      host
      Screenshot 2025-10-01 224223.png

      posted in Backup
    • RE: Veeam backup with XCP NG

      I decided to go all in and do 5 VMs at once.

      3 Linux and 2 Windows 11 VMs.

      Screenshot 2025-10-01 214608.png

      Screenshot 2025-10-01 214624.png

      Screenshot 2025-10-01 215639.png

      posted in Backup
    • RE: Veeam backup with XCP NG

      @Pilow

      How did you add your XCP-ng pool/host to Veeam? I'm sure it's stupid simple, but I can't figure it out...

      Yes, I am running the beta version.

      Screenshot 2025-10-01 164030.png

      Edit: after a full uninstall and reinstall, I see the option now. Now I can begin testing.

      posted in Backup
    • RE: Feature request add open in new tab to XO GUI

      @marcoi

      If using XOA, just add /v6 at the end of the URL.

      If using XO from sources, run the following command:

      sudo yarn run turbo run build --filter @xen-orchestra/web
      

      Which directory you need to run that command in depends on whose install script you used.

      For me, I need to run that command in:

      /opt/xo/xo-web
      
      posted in Xen Orchestra