Confirm commit 1a7b5 backups completed successfully.
Best posts made by acebmxer
-
RE: Delta Backups failing again XOCE. Error: Missing tag to check there are some transferred data
Updated to Commit 04338 and the backups are fixed Thank you.
-
Backups not working
Last night 2 of my VMs failed to complete a delta backup. As the tasks could not be canceled in any way, I rebooted XO (built from sources). The tasks still showed "running", so I restarted the toolstack on host 1 and the tasks cleared. I attempted to restart the failed backups and again the backup just hangs: it creates the snapshot but never transfers data. The remote is in the same location as the NFS storage the VMs are running from, so I know the storage is good.
A few more reboots of XO and the toolstack followed. I rebooted both hosts, and each time the backups get stuck. If I try to start a new backup (same job), all VMs hang. I tried to run a full delta backup, with the same result. I tried to update XO, but I am on the current master build as of today (6b263). I tried a force update, and the backup still never completes.
I built a new VM for XO, installed from sources, and the backups still fail.
Here is one of the logs from the backups...
{ "data": { "mode": "delta", "reportWhen": "always", "hideSuccessfulItems": true }, "id": "1750503695411", "jobId": "95ac8089-69f3-404e-b902-21d0e878eec2", "jobName": "Backup Job 1", "message": "backup", "scheduleId": "76989b41-8bcf-4438-833a-84ae80125367", "start": 1750503695411, "status": "failure", "infos": [ { "data": { "vms": [ "b25a5709-f1f8-e942-f0cc-f443eb9b9cf3", "3446772a-4110-7a2c-db35-286c73af4ab4", "bce2b7f4-d602-5cdf-b275-da9554be61d3", "e0a3093a-52fd-f8dc-1c39-075eeb9d0314", "afbef202-af84-7e64-100a-e8a4c40d5130" ] }, "message": "vms" } ], "tasks": [ { "data": { "type": "VM", "id": "b25a5709-f1f8-e942-f0cc-f443eb9b9cf3", "name_label": "SeedBox" }, "id": "1750503696510", "message": "backup VM", "start": 1750503696510, "status": "interrupted", "tasks": [ { "id": "1750503696519", "message": "clean-vm", "start": 1750503696519, "status": "success", "end": 1750503696822, "result": { "merge": false } }, { "id": "1750503697911", "message": "snapshot", "start": 1750503697911, "status": "success", "end": 1750503699564, "result": "6e2edbe9-d4bd-fd23-28b9-db4b03219e96" }, { "data": { "id": "1575a1d8-3f87-4160-94fc-b9695c3684ac", "isFull": false, "type": "remote" }, "id": "1750503699564:0", "message": "export", "start": 1750503699564, "status": "success", "tasks": [ { "id": "1750503701979", "message": "clean-vm", "start": 1750503701979, "status": "success", "end": 1750503702141, "result": { "merge": false } } ], "end": 1750503702142 } ], "warnings": [ { "data": { "attempt": 1, "error": "invalid HTTP header in response body" }, "message": "Retry the VM backup due to an error" } ] }, { "data": { "type": "VM", "id": "3446772a-4110-7a2c-db35-286c73af4ab4", "name_label": "XO" }, "id": "1750503696512", "message": "backup VM", "start": 1750503696512, "status": "interrupted", "tasks": [ { "id": "1750503696518", "message": "clean-vm", "start": 1750503696518, "status": "success", "end": 1750503696693, "result": { "merge": false } }, { "id": "1750503712472", "message": 
"snapshot", "start": 1750503712472, "status": "success", "end": 1750503713915, "result": "a1bdef52-142c-5996-6a49-169ef390aa2e" }, { "data": { "id": "1575a1d8-3f87-4160-94fc-b9695c3684ac", "isFull": false, "type": "remote" }, "id": "1750503713915:0", "message": "export", "start": 1750503713915, "status": "success", "tasks": [ { "id": "1750503716280", "message": "clean-vm", "start": 1750503716280, "status": "success", "end": 1750503716383, "result": { "merge": false } } ], "end": 1750503716385 } ], "warnings": [ { "data": { "attempt": 1, "error": "invalid HTTP header in response body" }, "message": "Retry the VM backup due to an error" } ] }, { "data": { "type": "VM", "id": "bce2b7f4-d602-5cdf-b275-da9554be61d3", "name_label": "iVentoy" }, "id": "1750503702145", "message": "backup VM", "start": 1750503702145, "status": "interrupted", "tasks": [ { "id": "1750503702148", "message": "clean-vm", "start": 1750503702148, "status": "success", "end": 1750503702233, "result": { "merge": false } }, { "id": "1750503702532", "message": "snapshot", "start": 1750503702532, "status": "success", "end": 1750503704850, "result": "05c5365e-3bc5-4640-9b29-0684ffe6d601" }, { "data": { "id": "1575a1d8-3f87-4160-94fc-b9695c3684ac", "isFull": false, "type": "remote" }, "id": "1750503704850:0", "message": "export", "start": 1750503704850, "status": "interrupted", "tasks": [ { "id": "1750503706813", "message": "transfer", "start": 1750503706813, "status": "interrupted" } ] } ], "infos": [ { "message": "Transfer data using NBD" } ] }, { "data": { "type": "VM", "id": "e0a3093a-52fd-f8dc-1c39-075eeb9d0314", "name_label": "Docker of Things" }, "id": "1750503716389", "message": "backup VM", "start": 1750503716389, "status": "interrupted", "tasks": [ { "id": "1750503716395", "message": "clean-vm", "start": 1750503716395, "status": "success", "warnings": [ { "data": { "path": "/xo-vm-backups/e0a3093a-52fd-f8dc-1c39-075eeb9d0314/20250604T160135Z.json", "actual": 6064872448, "expected": 6064872960 }, 
"message": "cleanVm: incorrect backup size in metadata" } ], "end": 1750503716886, "result": { "merge": false } }, { "id": "1750503717182", "message": "snapshot", "start": 1750503717182, "status": "success", "end": 1750503719640, "result": "9effb56d-68e6-8015-6bd5-64fa65acbada" }, { "data": { "id": "1575a1d8-3f87-4160-94fc-b9695c3684ac", "isFull": false, "type": "remote" }, "id": "1750503719640:0", "message": "export", "start": 1750503719640, "status": "interrupted", "tasks": [ { "id": "1750503721601", "message": "transfer", "start": 1750503721601, "status": "interrupted" } ] } ], "infos": [ { "message": "Transfer data using NBD" } ] } ], "end": 1750504870213, "result": { "message": "worker exited with code null and signal SIGTERM", "name": "Error", "stack": "Error: worker exited with code null and signal SIGTERM\n at ChildProcess.<anonymous> (file:///opt/xo/xo-builds/xen-orchestra-202506202218/@xen-orchestra/backups/runBackupWorker.mjs:24:48)\n at ChildProcess.emit (node:events:518:28)\n at ChildProcess.patchedEmit [as emit] (/opt/xo/xo-builds/xen-orchestra-202506202218/@xen-orchestra/log/configure.js:52:17)\n at Process.ChildProcess._handle.onexit (node:internal/child_process:293:12)\n at Process.callbackTrampoline (node:internal/async_hooks:130:17)" } }
-
RE: Backup Failure to BackBlaze
Thank you for the update. Last night, before I saw your replies, that is exactly what I did: I purged the backups for the two VMs from Backblaze. The backup job then completed successfully while copying over the previous backup images.
-
RE: XOA license issue
@Danp said in XOA license issue:
@acebmxer It looks like you've resolved the first issue by binding the trial license. You haven't yet purchased support, so you won't be able to bind licenses to your XCP-ng hosts. This has no effect on the functionality of the hosts.
OK, thank you for confirming that this is expected behavior. We do plan to pay for support; we just need to take care of a few things on our side before we do so.
-
RE: Host 2 shows Error but works just fine...
That pic was taken from the Host tab. But yes, I removed it from the Settings tab, and host 2 still shows under pool 1 and all looks OK.
Thank you.
-
Custom Tags not showing
This is probably something to do with Nginx Proxy Manager, but maybe someone can assist.
When I access xo-ce through the proxy manager, I cannot see my custom tags.
If I access xo-ce by IP address, I can see them...
-
RE: Disk import failed
Got it working. A user in another forum helped: I added the following line in the Advanced tab for the proxy host.
client_max_body_size 0;
I thought I had uploaded ISOs previously with no issues, but I may have been mistaken and uploaded them other ways for other reasons.
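For anyone else hitting this: the Advanced tab in Nginx Proxy Manager takes raw nginx directives, so the line just goes in there. A minimal sketch of the equivalent plain-nginx server block (hostname and upstream address here are made up) might look like:

```nginx
server {
    listen 443 ssl;
    server_name xo.example.com;  # hypothetical hostname

    # 0 disables nginx's request-body size limit, so large ISO and
    # disk-image uploads through the proxy aren't rejected with 413 errors
    client_max_body_size 0;

    location / {
        proxy_pass http://192.0.2.10;  # hypothetical XO address
        proxy_set_header Host $host;
    }
}
```

Without this, nginx defaults `client_max_body_size` to 1 MB, which silently breaks any disk or ISO import that goes through the proxy.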
Latest posts made by acebmxer
-
RE: 10gb backup only managing about 80Mb
I could be wrong, but in the VMware world the management interface didn't transfer much data, if any at all. It was only used to communicate with vSphere and/or the host, so there was no need to waste a 10Gb port on something that only sees KBs worth of data.
Our previous server had 2x 1Gb NICs for management, 1x 10Gb NIC for the network, 2x 10Gb NICs for storage, and 1x 10Gb NIC for vMotion.
-
RE: XOProxy Remote location setup
Thanks for the quick reply. Is this what you were referring to? - https://xen-orchestra.com/blog/xo-proxy-a-concrete-guide/
-
XOProxy Remote location setup
Can someone please verify that my steps are correct, or provide documentation on how to deploy XOProxy with a remote host/location.
Currently Working:
Site 1
XOA
Pool 1 - 2 XCP-ng hosts.
Thoughts:
- Deploy a new XCP-ng host (offsite location)
- Deploy a temporary XOA or XO-CE to configure the new pool's network and storage settings
- From the Site 1 XOA, add the Site 2 pool
- From the Site 1 XOA, deploy XOProxy to the Site 2 pool?
- Once XOProxy is deployed to the Site 2 pool, remove the Site 2 pool and re-add it using the proxy?
Am I overthinking this or missing anything?
-
RE: Delta Backups failing again XOCE. Error: Missing tag to check there are some transferred data
Updated to Commit 04338 and the backups are fixed Thank you.
-
RE: Delta Backups failing again XOCE. Error: Missing tag to check there are some transferred data
{ "data": { "type": "VM", "id": "b25a5709-f1f8-e942-f0cc-f443eb9b9cf3", "name_label": "SeedBox" }, "id": "1756290916238", "message": "backup VM", "start": 1756290916238, "status": "failure", "tasks": [ { "id": "1756290916244", "message": "clean-vm", "start": 1756290916244, "status": "success", "warnings": [ { "data": { "path": "/xo-vm-backups/b25a5709-f1f8-e942-f0cc-f443eb9b9cf3/20250816T170026Z.json", "actual": 26680452096, "expected": 26680452608 }, "message": "cleanVm: incorrect backup size in metadata" } ], "end": 1756290917313, "result": { "merge": false } }, { "id": "1756290917697", "message": "snapshot", "start": 1756290917697, "status": "success", "end": 1756290934498, "result": "53f984e4-0ac2-8f77-2c74-f71ed8e56a8f" }, { "data": { "id": "95ba58d1-f202-4900-9b1f-7933adbc6764", "isFull": false, "type": "remote" }, "id": "1756290934499", "message": "export", "start": 1756290934499, "status": "success", "tasks": [ { "id": "1756290936032", "message": "transfer", "start": 1756290936032, "status": "success", "end": 1756290942108, "result": { "size": 1061158912 } }, { "id": "1756290955378", "message": "clean-vm", "start": 1756290955378, "status": "success", "warnings": [ { "data": { "path": "/xo-vm-backups/b25a5709-f1f8-e942-f0cc-f443eb9b9cf3/20250827T103536Z.json", "actual": 1061158912, "expected": 1063779840 }, "message": "cleanVm: incorrect backup size in metadata" } ], "end": 1756290956472, "result": { "merge": false } } ], "end": 1756290956478 } ], "infos": [ { "message": "Transfer data using NBD" } ], "end": 1756290956478, "result": { "message": "Missing tag to check there are some transferred data ", "name": "Error", "stack": "Error: Missing tag to check there are some transferred data \n at IncrementalXapiVmBackupRunner._healthCheck (file:///opt/xo/xo-builds/xen-orchestra-202508271005/@xen-orchestra/backups/_runners/_vmRunners/_Abstract.mjs:76:13)\n at IncrementalXapiVmBackupRunner.run 
(file:///opt/xo/xo-builds/xen-orchestra-202508271005/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:407:16)\n at process.processTicksAndRejections (node:internal/process/task_queues:105:5)\n at async file:///opt/xo/xo-builds/xen-orchestra-202508271005/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38" } },
{ "message": "Transfer data using NBD" } ], "end": 1756290942772, "result": { "message": "Missing tag to check there are some transferred data ", "name": "Error", "stack": "Error: Missing tag to check there are some transferred data \n at IncrementalXapiVmBackupRunner._healthCheck (file:///opt/xo/xo-builds/xen-orchestra-202508271005/@xen-orchestra/backups/_runners/_vmRunners/_Abstract.mjs:76:13)\n at IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202508271005/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:407:16)\n at process.processTicksAndRejections (node:internal/process/task_queues:105:5)\n at async file:///opt/xo/xo-builds/xen-orchestra-202508271005/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38" } }, { "data": { "type": "VM",
-
Delta Backups failing again XOCE. Error: Missing tag to check there are some transferred data
I am on commit a4410.
Backups seem to have broken during one of yesterday's commits. I have not tried to run a full backup yet.
-
RE: XO (self build) tasks spamming
I just updated to Commit d1577 and I seem to have the bug now.
-
RE: Moving my homelab from vmware (vsphere8) to xcp ng
@olivierlambert said in Moving my homelab from vmware (vsphere8) to xcp ng:
Hi,
Since Vates doesn't develop VEEAM, I don't see how I could answer that question
A first closed beta started since yesterday. If you want to have your answer, you have to ask them
Glad to see strong movement with the Veeam team. @olivierlambert I know you are limiting beta testing to internal users at the moment, but if you do need further testing from the community, I would like to sign up.
Keep up the hard work, Vates team; you're doing great work.
-
Restore list & S3 Backblaze
When viewing the Restore page, I constantly have to press the "refresh backup list" button to see the correct total number of backups and the S3 restore points, or wait 2-3 minutes for it to update on its own. If I navigate away from the page and back again, I have to press refresh again or wait.
Can xo-ce not store this information and then update it periodically? Why does it need to contact S3 every time the page is visited?
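Something like a simple time-based cache is what I have in mind. This is purely illustrative (not XO's actual code; all names here are made up): keep the last fetched list and only re-query the remote when the cached copy is older than a TTL, with the refresh button forcing a refetch.

```javascript
// Illustrative TTL cache for a remote backup listing (hypothetical, not XO's API).
class CachedBackupList {
  constructor(fetchFn, ttlMs = 60_000) {
    this.fetchFn = fetchFn // e.g. a function that lists restore points on the S3 remote
    this.ttlMs = ttlMs
    this.cached = undefined
    this.fetchedAt = 0
  }

  async get({ force = false } = {}) {
    const stale = Date.now() - this.fetchedAt > this.ttlMs
    if (force || stale || this.cached === undefined) {
      // Only hit the remote when forced or when the cache has expired
      this.cached = await this.fetchFn()
      this.fetchedAt = Date.now()
    }
    return this.cached
  }
}

module.exports = { CachedBackupList }
```

With this pattern, visiting the Restore page would serve the cached list instantly, and only the explicit refresh button (or TTL expiry) would trigger a round trip to S3.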