Best posts made by nuentes
-
RE: Ghost Tasks keep appearing
-
RE: Ghost Backup job keeps failing
@olivierlambert @florent Whoops. Yes, ok. In the CLI for my docker container. It turns out I didn't need to be in the xo-server folder; I needed to be in its dist subfolder. Here are my results:
root@54727a77cd7a:/etc/xen-orchestra/packages/xo-server/dist# ./db-cli.mjs ls job
5 record(s) found
{ id: '886244d4-6ec3-4d76-90b1-aafb7ce2eaef', mode: 'full' }
{ id: '570366d3-558f-4e0c-85ca-cc0a608d4633', mode: 'full', name: 'NAS Daily', remotes: '{"id":{"__or":[]}}', settings: '{"8500fa4c-88e8-4c5d-85f2-3b4c67575537":{"snapshotRetention":3}}', srs: '{"id":{"__or":[]}}', type: 'backup', userId: '2c7dcd77-8e98-43b4-8311-ebe81b2bb3c0', vms: '{"id":"b4b08d92-bb02-c02e-bbd7-fd5ed2575dd5"}' }
{ id: '9dfab633-9435-4ea0-9564-b0f73749e118', mode: 'full', name: 'NAS 6 Hour', remotes: '{"id":{"__or":[]}}', settings: '{"d737f6d8-734c-4c7e-aac7-09fe8e59e795":{"snapshotRetention":3}}', srs: '{"id":{"__or":[]}}', type: 'backup', userId: '2c7dcd77-8e98-43b4-8311-ebe81b2bb3c0', vms: '{"id":"b4b08d92-bb02-c02e-bbd7-fd5ed2575dd5"}' }
{ id: 'f3ada8d2-6979-4a16-90d5-ed8fc17cbe8f', mode: 'full', name: 'Disaster Recovery', remotes: '{"id":{"__or":[]}}', settings: '{"e3129421-1ea1-4dc4-8854-b629c62b5ae1":{"copyRetention":3}}', srs: '{"id":"b4f0994f-914a-564e-0778-6d747907ff9a"}', type: 'backup', userId: '2c7dcd77-8e98-43b4-8311-ebe81b2bb3c0', vms: '{"id":"b4b08d92-bb02-c02e-bbd7-fd5ed2575dd5"}' }
{ id: 'd9faea90-4cd5-47ef-ab92-646ac62f9b3c', mode: 'full', name: 'NAS Weekly', remotes: '{"id":{"__or":[]}}', settings: '{"b1957ab0-fd8d-4c83-ab93-27173abc4290":{"snapshotRetention":4}}', srs: '{"id":{"__or":[]}}', type: 'backup', userId: '2c7dcd77-8e98-43b4-8311-ebe81b2bb3c0', vms: '{"id":"b4b08d92-bb02-c02e-bbd7-fd5ed2575dd5"}' }
root@54727a77cd7a:/etc/xen-orchestra/packages/xo-server/dist# ./db-cli.mjs ls schedule
{ cron: '5 5 * * 1', enabled: 'true', id: 'b1957ab0-fd8d-4c83-ab93-27173abc4290', jobId: 'd9faea90-4cd5-47ef-ab92-646ac62f9b3c', name: 'Weekly', timezone: 'America/New_York' }
{ cron: '0 5 * * 0,3,5', enabled: 'true', id: '9ae19958-f28c-4e0b-aae4-a45836428d77', jobId: '886244d4-6ec3-4d76-90b1-aafb7ce2eaef', name: '', timezone: 'America/New_York' }
{ cron: '0 0 * * 4', enabled: 'true', id: 'e3129421-1ea1-4dc4-8854-b629c62b5ae1', jobId: 'f3ada8d2-6979-4a16-90d5-ed8fc17cbe8f', name: '', timezone: 'America/New_York' }
{ cron: '5 11,17,23 * * *', enabled: 'true', id: 'd737f6d8-734c-4c7e-aac7-09fe8e59e795', jobId: '9dfab633-9435-4ea0-9564-b0f73749e118', name: '6 Hours', timezone: 'America/New_York' }
{ cron: '5 5 * * 0,2,3,4,5,6', enabled: 'true', id: '8500fa4c-88e8-4c5d-85f2-3b4c67575537', jobId: '570366d3-558f-4e0c-85ca-cc0a608d4633', name: 'Daily', timezone: 'America/New_York' }
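In case it helps anyone reading along, here is a minimal sketch that decodes the five-field cron strings from the schedule records above. It is plain Python with no dependencies and only handles what these particular entries use (plain numbers, comma lists, and `*`); day-of-month and month are ignored since every record has `*` there.

```python
# Tiny decoder for the 5-field cron strings in the `db-cli.mjs ls schedule`
# output above. Only handles numbers, comma lists, and "*".

DAYS = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]

def describe(cron: str) -> str:
    minute, hour, _dom, _month, dow = cron.split()  # dom/month are "*" in all records
    times = ", ".join(f"{int(h):02d}:{int(minute):02d}" for h in hour.split(","))
    days = "every day" if dow == "*" else "on " + "/".join(DAYS[int(d)] for d in dow.split(","))
    return f"at {times} {days}"

# The five schedules from the records above:
for name, cron in [
    ("Weekly", "5 5 * * 1"),
    ("(unnamed)", "0 5 * * 0,3,5"),
    ("(unnamed)", "0 0 * * 4"),
    ("6 Hours", "5 11,17,23 * * *"),
    ("Daily", "5 5 * * 0,2,3,4,5,6"),
]:
    print(f"{name}: {describe(cron)}")
```

So for example "Weekly" decodes to 05:05 on Mondays, and "Daily" is actually every day except Monday at 05:05.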
Latest posts made by nuentes
-
RE: DR error - (intermediate value) is not iterable
I worked with ChatGPT on this for a bit. We have narrowed it down to an issue with the NFS Storage that I ship the backups to.
"When you recreated storage and moved data back, OMV is technically exporting a different underlying filesystem object than before. NFS clients that had an old handle cached (your XCP-ng host) try to access it and get ESTALE. That explains the initial backup errors and why deleting/re-adding the SR is failing now."
I had to remove the NFS storage from XCP-ng, then delete the NFS share from OMV, then add the NFS share back to OMV, and then add it back to XCP-ng.
I probably could have resolved this with a reboot, but I didn't wanna. This issue is resolved now.
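For anyone who hits the same ESTALE loop: the XCP-ng half of that sequence can be scripted with the standard `xe` CLI. This is a dry-run sketch only; the UUIDs, server address, and export path are placeholders (look up your own with `xe sr-list` and `xe pbd-list`), it prints the commands instead of executing them, and the share deletion/re-creation happens in the OMV UI between steps 1 and 3.

```shell
#!/bin/sh
# Dry-run sketch of the XCP-ng side of the fix: detach and forget the stale
# NFS SR, then (after re-creating the share in OMV) attach it fresh.
# All values below are placeholders -- substitute your own.

SR_UUID="00000000-0000-0000-0000-000000000000"   # stale NFS SR (placeholder)
PBD_UUID="11111111-1111-1111-1111-111111111111"  # its PBD on the host (placeholder)
NFS_SERVER="192.0.2.10"                          # OMV box (placeholder)
NFS_PATH="/export/backups"                       # re-created share (placeholder)

run() { echo "+ $*"; }   # print instead of execute; drop this wrapper to run for real

# 1. Detach the SR from the host, then forget it (data stays on the share).
run xe pbd-unplug uuid="$PBD_UUID"
run xe sr-forget uuid="$SR_UUID"

# 2. Delete and re-create the NFS export in OMV at this point.

# 3. Attach the re-created share as a fresh SR.
run xe sr-create type=nfs name-label="Backup_STOR" shared=true \
    device-config:server="$NFS_SERVER" device-config:serverpath="$NFS_PATH"
```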
-
DR error - (intermediate value) is not iterable
Yesterday I migrated all of my VM disks to a temp disk and reformatted my primary VM disk storage from LVM (thick) to EXT4 (thin), and migrated them back. This all went mostly fine - all my VMs were back online before I went to bed. The only issue was that my overnight DR jobs failed with error "(intermediate value) is not iterable". I've googled a bit, and haven't really found an explanation of that error. Here is my failure log from XCP-ng:
AlpineNUT (xcp-ng)
Snapshot
Start: 2025-08-16 06:54 End: 2025-08-16 06:54
Backup_STOR (897.6 GiB free - thin) - xcp-ng
transfer
Start: 2025-08-16 06:54 End: 2025-08-16 06:54 Duration: a few seconds
Error: (intermediate value) is not iterable
Start: 2025-08-16 06:54 End: 2025-08-16 06:54 Duration: a few seconds
Error: (intermediate value) is not iterable
Start: 2025-08-16 06:54 End: 2025-08-16 06:54 Duration: a few seconds
Error: (intermediate value) is not iterable
Type: full
-
RE: VUSB keeps disappearing
Thanks for the help. The script is a great idea that I'll look at implementing. I have absolutely no idea how I'd take a USB drive and use PCI passthrough, so I'm just going to focus on the script.
-
VUSB keeps disappearing
I have a VM that only has NUT installed. I have a VUSB for my CyberPower UPS that passes through to it. I've noticed that the VUSB keeps disappearing. A couple of days ago, I noticed that PeaNUT was reporting that NUT was offline. I checked this VM, and saw that the VUSB had not just disconnected, but it was completely unplugged as well. I thought nothing of it, shut down the VM, added the VUSB, and booted again.
Today I noticed the same thing. The VUSB does not even show in the list of VUSBs in XOA. Now, I'm certain you'll tell me that the physical USB disconnected or something, and I'm sure that would be the expected behavior in that scenario. But that definitely didn't happen. I just want to know what I should look at to troubleshoot this. How can I see when a VUSB disconnected, and maybe figure out what steps led to that?
-
RE: Backup fails with "VM_HAS_VUSBS" error
@olivierlambert so is the only option to:
- power off the VM
- backup
- power on the VM
If that's the case, I might just make another VM for the VUSB to attach to.
-
RE: Backup fails with "VM_HAS_VUSBS" error
@olivierlambert It doesn't seem to be done automatically yet, as I'm using 8.3 (up to date) and XOA commit 749f0 from 5 days ago.
I would love for this to get added. I looked into simply doing this with a cronjob (disconnect/reconnect VUSB before/after the backup jobs occur), but it seems that there is no command to simply reconnect the VUSB without performing a reboot. I'd really rather not need to reboot this VM with high regularity.
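For the record, the disconnect half is scriptable; it's the reconnect that isn't, since a VUSB only attaches at VM boot. A dry-run sketch with a placeholder UUID (it prints the `xe` command rather than running it; find your own UUID with `xe vusb-list`):

```shell
#!/bin/sh
# Dry-run sketch: unplug a VM's VUSB ahead of the backup window.
# There is no hot-plug counterpart -- the VUSB only attaches again at the
# next VM boot, which is exactly the limitation described above.

VUSB_UUID="22222222-2222-2222-2222-222222222222"  # placeholder

run() { echo "+ $*"; }   # print instead of execute

run xe vusb-unplug uuid="$VUSB_UUID"
# Re-attaching afterwards would require a VM restart.
```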
-
RE: Backup fails with "VM_HAS_VUSBS" error
@olivierlambert Sorry to resurrect an old subject, but I just started to have this issue myself. I have a VM running NUT with a VUSB connected to my UPS. I snapshot weekly, so the odds of a snapshot occurring during a power outage are quite low, and I have no problem with disconnecting/reconnecting in order to take a snapshot. It's a small price to pay to be able to automate the snapshot process.
-
RE: Ghost Tasks keep appearing
@olivierlambert Well that was easy. Much appreciated!
-
RE: Ghost Tasks keep appearing
@olivierlambert I spun up a new XOA (that process is pretty impressive, btw), and can confirm that I see the tasks there as well.
-
RE: Ghost Tasks keep appearing
@olivierlambert ok, you bring up a great point and that gives me some ideas of things to try.
Tell me - are backup jobs stored on the server, or stored in XOA? If I spin up a new XOA, will my backup jobs I've configured show up?