Best posts made by acebmxer
-
Backups not working
Last night 2 of my VMs failed to complete a delta backup. As the tasks could not be cancelled in any way, I rebooted XO (built from sources). The tasks still showed "running", so I restarted the toolstack on host 1 and the tasks cleared. I attempted to restart the failed backups and again the backup just hangs: it creates the snapshot but never transfers data. The remote is in the same location as the NFS storage the VMs are running from, so I know the storage is good.
A few more reboots of XO and the toolstack followed, and I rebooted both hosts; each time the backups get stuck. If I try to start a new backup (same job), all VMs hang. I tried to run a full delta backup, same result. I tried to update XO, but I am on the current master build as of today (6b263); I tried a force update and the backup still never completes.
I built a new VM for XO, installed from sources, and it still fails.
Here is one of the logs from the backups...
{ "data": { "mode": "delta", "reportWhen": "always", "hideSuccessfulItems": true }, "id": "1750503695411", "jobId": "95ac8089-69f3-404e-b902-21d0e878eec2", "jobName": "Backup Job 1", "message": "backup", "scheduleId": "76989b41-8bcf-4438-833a-84ae80125367", "start": 1750503695411, "status": "failure", "infos": [ { "data": { "vms": [ "b25a5709-f1f8-e942-f0cc-f443eb9b9cf3", "3446772a-4110-7a2c-db35-286c73af4ab4", "bce2b7f4-d602-5cdf-b275-da9554be61d3", "e0a3093a-52fd-f8dc-1c39-075eeb9d0314", "afbef202-af84-7e64-100a-e8a4c40d5130" ] }, "message": "vms" } ], "tasks": [ { "data": { "type": "VM", "id": "b25a5709-f1f8-e942-f0cc-f443eb9b9cf3", "name_label": "SeedBox" }, "id": "1750503696510", "message": "backup VM", "start": 1750503696510, "status": "interrupted", "tasks": [ { "id": "1750503696519", "message": "clean-vm", "start": 1750503696519, "status": "success", "end": 1750503696822, "result": { "merge": false } }, { "id": "1750503697911", "message": "snapshot", "start": 1750503697911, "status": "success", "end": 1750503699564, "result": "6e2edbe9-d4bd-fd23-28b9-db4b03219e96" }, { "data": { "id": "1575a1d8-3f87-4160-94fc-b9695c3684ac", "isFull": false, "type": "remote" }, "id": "1750503699564:0", "message": "export", "start": 1750503699564, "status": "success", "tasks": [ { "id": "1750503701979", "message": "clean-vm", "start": 1750503701979, "status": "success", "end": 1750503702141, "result": { "merge": false } } ], "end": 1750503702142 } ], "warnings": [ { "data": { "attempt": 1, "error": "invalid HTTP header in response body" }, "message": "Retry the VM backup due to an error" } ] }, { "data": { "type": "VM", "id": "3446772a-4110-7a2c-db35-286c73af4ab4", "name_label": "XO" }, "id": "1750503696512", "message": "backup VM", "start": 1750503696512, "status": "interrupted", "tasks": [ { "id": "1750503696518", "message": "clean-vm", "start": 1750503696518, "status": "success", "end": 1750503696693, "result": { "merge": false } }, { "id": "1750503712472", "message": "snapshot", "start": 1750503712472, "status": "success", "end": 1750503713915, "result": "a1bdef52-142c-5996-6a49-169ef390aa2e" }, { "data": { "id": "1575a1d8-3f87-4160-94fc-b9695c3684ac", "isFull": false, "type": "remote" }, "id": "1750503713915:0", "message": "export", "start": 1750503713915, "status": "success", "tasks": [ { "id": "1750503716280", "message": "clean-vm", "start": 1750503716280, "status": "success", "end": 1750503716383, "result": { "merge": false } } ], "end": 1750503716385 } ], "warnings": [ { "data": { "attempt": 1, "error": "invalid HTTP header in response body" }, "message": "Retry the VM backup due to an error" } ] }, { "data": { "type": "VM", "id": "bce2b7f4-d602-5cdf-b275-da9554be61d3", "name_label": "iVentoy" }, "id": "1750503702145", "message": "backup VM", "start": 1750503702145, "status": "interrupted", "tasks": [ { "id": "1750503702148", "message": "clean-vm", "start": 1750503702148, "status": "success", "end": 1750503702233, "result": { "merge": false } }, { "id": "1750503702532", "message": "snapshot", "start": 1750503702532, "status": "success", "end": 1750503704850, "result": "05c5365e-3bc5-4640-9b29-0684ffe6d601" }, { "data": { "id": "1575a1d8-3f87-4160-94fc-b9695c3684ac", "isFull": false, "type": "remote" }, "id": "1750503704850:0", "message": "export", "start": 1750503704850, "status": "interrupted", "tasks": [ { "id": "1750503706813", "message": "transfer", "start": 1750503706813, "status": "interrupted" } ] } ], "infos": [ { "message": "Transfer data using NBD" } ] }, { "data": { "type": "VM", 
"id": "e0a3093a-52fd-f8dc-1c39-075eeb9d0314", "name_label": "Docker of Things" }, "id": "1750503716389", "message": "backup VM", "start": 1750503716389, "status": "interrupted", "tasks": [ { "id": "1750503716395", "message": "clean-vm", "start": 1750503716395, "status": "success", "warnings": [ { "data": { "path": "/xo-vm-backups/e0a3093a-52fd-f8dc-1c39-075eeb9d0314/20250604T160135Z.json", "actual": 6064872448, "expected": 6064872960 }, "message": "cleanVm: incorrect backup size in metadata" } ], "end": 1750503716886, "result": { "merge": false } }, { "id": "1750503717182", "message": "snapshot", "start": 1750503717182, "status": "success", "end": 1750503719640, "result": "9effb56d-68e6-8015-6bd5-64fa65acbada" }, { "data": { "id": "1575a1d8-3f87-4160-94fc-b9695c3684ac", "isFull": false, "type": "remote" }, "id": "1750503719640:0", "message": "export", "start": 1750503719640, "status": "interrupted", "tasks": [ { "id": "1750503721601", "message": "transfer", "start": 1750503721601, "status": "interrupted" } ] } ], "infos": [ { "message": "Transfer data using NBD" } ] } ], "end": 1750504870213, "result": { "message": "worker exited with code null and signal SIGTERM", "name": "Error", "stack": "Error: worker exited with code null and signal SIGTERM\n at ChildProcess.<anonymous> (file:///opt/xo/xo-builds/xen-orchestra-202506202218/@xen-orchestra/backups/runBackupWorker.mjs:24:48)\n at ChildProcess.emit (node:events:518:28)\n at ChildProcess.patchedEmit [as emit] (/opt/xo/xo-builds/xen-orchestra-202506202218/@xen-orchestra/log/configure.js:52:17)\n at Process.ChildProcess._handle.onexit (node:internal/child_process:293:12)\n at Process.callbackTrampoline (node:internal/async_hooks:130:17)" } }
-
RE: Backup Failure to BackBlaze
Thank you for the update. Last night, before I saw your replies, that is exactly what I did: I purged the backups for the two VMs from BackBlaze. The backup job then completed successfully, copying over the previous backup images.
-
RE: Host 2 shows Error but works just fine...
That pic was taken from the Host tab. But yes, I removed it from the Settings tab, and host 2 still shows under pool 1 and all looks OK.
Thank you.
-
RE: Disk import failed
Got it working, with some help from a user in another forum. I added the following line in the advanced tab for the proxy host:
client_max_body_size 0;
I thought I had uploaded ISOs previously with no issues, but I may have been mistaken and uploaded them in other ways for other reasons.
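For context, nginx caps request bodies at 1 MB by default via client_max_body_size, so large disk or ISO uploads passing through a reverse proxy get cut off; setting it to 0 disables the limit entirely. If the proxy in front of XO is plain nginx rather than a proxy-manager UI, the directive goes in the http, server, or location block that forwards to XO, roughly like this sketch (hostname, certificate paths, and upstream address are placeholders):

server {
    listen 443 ssl;
    server_name xo.example.lan;              # placeholder hostname
    ssl_certificate     /etc/ssl/xo.crt;     # placeholder cert paths
    ssl_certificate_key /etc/ssl/xo.key;
    client_max_body_size 0;                  # lift the default 1 MB body limit
    location / {
        proxy_pass http://192.0.2.10;        # placeholder XO backend address
    }
}

followed by an nginx reload for it to take effect.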
Latest posts made by acebmxer
-
RE: XOA license issue
Just an FYI: I currently only have 1 XCP-ng host, as I am still in the middle of the VMware migration. I have 2 more VMs to migrate, which is scheduled for tomorrow. If there are no issues, I will be able to bring over the 2nd host to XCP-ng.
-
RE: XOA fails after update to 5.106.0
@olivierlambert said in XOA fails after update to 5.106.0:
I don't think it's the same issue. Originally, you ended on a white page with "Cannot get /". It seems unrelated here, right?
Yeah, I think I have moved past the original issue and onto this one. That is why I said you can move this to its own thread if need be. Or I can open a support ticket now that it is usable.
-
RE: XOA license issue
I still believe this is an issue unless I am missing something here. I just fired up my XOA 5.105 and updated to the latest stable, 5.108, and it broke. I then updated to the latest XOA, 5.109.1, and it is still broken. Maybe if I jump straight from 5.105 to 5.109.1, skipping 5.108, it will work?
In this state I cannot navigate any menu; I am stuck at this screen.
EDIT:
Update. Feel free to move this to its own thread if need be.
I think I figured it out, but I am still having an issue. The screenshots below are from 5.105. Is the error
-
RE: Asking for upgrade on creating a backup ?
@utopianfish & @AtaxyaNetwork - Maybe related to this bug noted here- https://xcp-ng.org/forum/topic/10798/xoa-fails-after-update-to-5.106.0/71
-
RE: Windows Server not listening to radius port after vmware migration
I was following the directions stated here: https://docs.xcp-ng.org/installation/migrate-to-xcp-ng/, which say to remove the tools and then take a fresh snapshot.
VMFS 6 (6.7 and newer)
For VMFS 6, the VM needs to be shut down. Follow these steps:
- Uninstall VMware tools before migration.
- Remove all snapshots attached to the VM.
- Ensure no ISO files are mounted to the VM.
- Shut down the VM.
- Take a fresh snapshot.
- Start the migration process.
Also, I am aware that XO backups are not application-aware. We use Veeam for that backup data, and I still have Veeam backups from the servers to fall back on. I just wasn't sure if there were any other precautions I should be taking or thinking about.
-
RE: Windows Server not listening to radius port after vmware migration
It seems that after another restart or two of the server, reapplying the latest update (the same version that was installed), and restarting its services a few times, it appears to be working...
I now fear I might have issues when I go to migrate our Windows Active Directory master. I have moved over our backup DC (dc2), which runs our DHCP server and backup DNS. That appears to be working.
Are there any particular precautions I should be taking prior to moving that server? The previous steps followed were:
- Remove VMware tools and reboot the server.
- Once back up, shut down the server.
- Remove any old snapshots (none to remove).
- Take a snapshot.
- Start the migration process.
- Adjust CPU topology.
- Start the VM, log in, and adjust the date, time, and IP address as needed.
- Shut down the VM, enable VM tools via Windows Update, and attach the ISO with the management tools.
- Power on the VM and let it reboot as needed for driver installs.
- Install the management tools (let Windows Update handle the drivers).
- Run Windows Update until all updates are applied.
Should I take the one final snapshot prior to removing VMware tools, so that in case of an issue I can spin up the old VM? Open to ideas and suggestions.
-
Windows Server not listening to radius port after vmware migration
After migrating our Windows server that hosts our Duo Proxy Manager, I am having an issue.
[info] Testing section 'radius_client' with configuration:
[info] {'host': '192.168.20.16', 'pass_through_all': 'true', 'secret': '*****'}
[error] Host 192.168.20.16 is not listening for RADIUS traffic on port 1812
[debug] Exception: [WinError 10054] An existing connection was forcibly closed by the remote host
After the migration I did have to reset the IP address, and I did install the Xen tools via Windows Update.
Any suggestions? I am thinking I may have the same issue if I spin up the old VM, as the VMware tools were removed, which I think affected that NIC as well...
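One detail that may help narrow this down: on Windows, a UDP socket raises WinError 10054 when the target host answers with ICMP "port unreachable", so the error above usually means 192.168.20.16 is reachable but nothing is bound to UDP 1812 there (or a firewall is actively rejecting the packet), rather than the traffic being dropped somewhere along the path. A rough way to reproduce the check from the machine running the Duo proxy is a small probe like the sketch below (not the Duo connectivity tool; the payload is deliberately junk, so a healthy RADIUS server will normally ignore it and the probe will just time out):

import socket

HOST, PORT = "192.168.20.16", 1812   # values taken from the log above

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.settimeout(3)
s.connect((HOST, PORT))     # "connecting" a UDP socket lets ICMP errors surface
s.send(b"\x00" * 20)        # not a valid RADIUS packet, just a probe
try:
    data = s.recv(4096)
    print("got a reply, something is listening on", (HOST, PORT))
except socket.timeout:
    print("no reply: a listener may be silently discarding the junk probe, or the port is filtered")
except (ConnectionResetError, ConnectionRefusedError):
    # ICMP "port unreachable" came back (WinError 10054 on Windows,
    # ECONNREFUSED on Linux): nothing is listening on UDP 1812.
    print("port unreachable: nothing listening on UDP", PORT)
finally:
    s.close()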
-
RE: XOA fails after update to 5.106.0
@Andrew said in XOA fails after update to 5.106.0:
@acebmxer Did you activate your License?
Click on XOA then Licenses (at the top). Make sure you see "This license is active on this XOA".
Yes, in 5.105.0 (fresh install) it worked. When I upgraded to 5.108.1 it stopped. I tried v5.109 and it failed, same as @Lhoust.
Using XO from sources for the moment.
-
RE: XOA fails after update to 5.106.0
I just installed XOA at work, and after updating to the latest stable 5.108.1 I am facing the same error.
This is a fresh install in preparation to migrate from vSphere.