• How Best to Achieve Higher Transfer Speeds for Backup Jobs

    13
    0 Votes
    13 Posts
    322 Views
    K
    @ph7 Yep, that's what I've done. Actually, I've powered off the prior XOA instance. Not ideal, but a workable solution.
  • Invalid Health Check SR causes Backup to fail with no error

    5
    5
    0 Votes
    5 Posts
    156 Views
    T
    @olivierlambert - I'm not sure who at Vates would be the best person to ping, or whether there is another channel I should use to request enhancements; I'm happy to be directed to the right place if this isn't it. Even though I brought this upon myself, I do think it would be nice if Xen Orchestra improved the error handling/messaging for situations where a task fails due to an invalid object UUID. The UI already seems to make a simple XAPI call to look up the SR's name-label; when that call fails, the schedule configured with the invalid/unknown UUID displays that UUID in red text with a red triangle.
  • Deleting incremental backups

    7
    0 Votes
    7 Posts
    137 Views
    F
    @DustinB OK. I think I'm between a rock and a hard place! Thanks for the advice.
  • maybe a bug when restarting mirroring delta backups that have failed

    2
    2
    0 Votes
    2 Posts
    69 Views
    R
    @jshiells I believe this is by design for a mirror backup. It checks every VM to mirror (if no filter is selected, that's every VM on the SR); if there is no new recovery point, it copies nothing, because the destination already holds the latest version of that backup. When a filter is selected, it still checks every VM on the SR, but it does not copy data for the filtered-out VMs.
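    The decision logic described above can be sketched as follows. This is a hypothetical reading of the behavior, not the Xen Orchestra source; the data structures (`source_points`, `dest_points` as VM-to-newest-timestamp maps) are assumptions for illustration.

    ```python
    # Sketch of the mirror-backup decision: every VM on the source is inspected,
    # but data is copied only when the destination lacks the newest recovery point.

    def plan_mirror(source_points, dest_points, vm_filter=None):
        """Return the VMs whose newest recovery point must be copied.

        source_points / dest_points: dict mapping VM id -> newest recovery
        point timestamp (hypothetical structure for illustration).
        """
        to_copy = []
        for vm, newest in source_points.items():
            if vm_filter is not None and vm not in vm_filter:
                continue  # filtered out: still checked, but never copied
            if dest_points.get(vm) != newest:
                to_copy.append(vm)  # destination is missing or stale
        return to_copy
    ```

    With this reading, a run where every destination already has the latest recovery point produces an empty copy list, which matches the "nothing is copied" behavior reported above.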
  • Backup issues with S3 remote

    9
    4
    0 Votes
    9 Posts
    293 Views
    P
    @florent It's been quiet since I supplied the JSON logs for the failed backup jobs. I understand this is a low-priority case, and for my part I can live with these failed backups, since I use other methods (especially for the Linux machines) to back up important modified files. Has there been any progress in tracking down what's happening here? Two different kinds of failures since I reported:
    Got a "failure" (but success) in other jobs. To explain the "success" part: I can select the job when doing a restore, and "transfer" is green but "remote" is marked red. [image: 1741003478991-xcp-ng-backup-fail-or-ok-admin-ubuntu.png] ... and the remote is OK ... [image: 1741003521687-xcp-ng-backup-fail-or-ok-admin-ubuntu-remote-ok.png] Restoring from this backup seems to be possible: [image: 1741004632183-xcp-ng-backup-fail-or-ok-admin-ubuntu-restore-available.png] The same happens with the previously mentioned "Win 10 22H2 new": the backup gets a red blob on "remote" but green on "transfer", and a restore is available from that point in time. This is against another remote, which gets "full success" on other VMs.
    Another example, against the same remote as in the images above, with five jobs at approximately the same time (the machines belong to a backup sequence): [image: 1741004104739-backup-2025-03-03-2x-success-followed-by-3x-failure-followed-by-1x-success.png] The failures are identical: [image: 1741004215619-backup-2025-03-03-failure-example.png]
    If there is anything I can do to provide more details, explain what I should do to obtain the data you need; and if this problem is rare enough that it would never affect a real customer, then just close the case.
  • 8.3RC2 Backup Pool restore not compatible with 8.3 release

    1
    0 Votes
    1 Posts
    38 Views
    No one has replied
  • Question on NBD backups

    3
    0 Votes
    3 Posts
    106 Views
    A
    @TS79 That makes sense - thanks!
  • Backup failed with "Body Timeout Error"

    8
    0 Votes
    8 Posts
    321 Views
    A
    Update: It's weird:
    • There are three VMs on this host. The backup works with two of them but fails on the third with the "Body Timeout Error".
    • Two of the VMs are almost identical (same drive sizes). The only difference is that one was set up as "Other install media" (it came over from ESXi) and the one that fails was set up using the "Windows Server 2022" template.
    • I normally back up to multiple NFS servers, so I switched to trying one at a time; both failed.
    • After watching the backup run more times than I care to count, I found that at about the 95% stage the snapshot stops writing to the NFS share.
    • About that time, /var/log/xensource.log records this:
    o Feb 26 09:43:33 HOST1 xapi: [error||19977 HTTPS 192.168.1.5->:::80|[XO] VM export R:c14f4c4c1c4c|xapi_compression] nice failed to compress: exit code 70
    o Feb 26 09:43:33 HOST1 xapi: [ warn||19977 HTTPS 192.168.1.5->:::80|[XO] VM export R:c14f4c4c1c4c|pervasiveext] finally: Error while running cleanup after failure of main function: Failure("nice failed to compress: exit code 70")
    o Feb 26 09:43:33 HOST1 xapi: [debug||922 |xapi events D:d21ea5c4dd9a|xenops] Event on VM fbcbd709-a9d9-4cc7-80de-90185a74eba4; resident_here = true
    o Feb 26 09:43:33 HOST1 xapi: [debug||922 |xapi events D:d21ea5c4dd9a|dummytaskhelper] task timeboxed_rpc D:a08ebc674b6d created by task D:d21ea5c4dd9a
    o Feb 26 09:43:33 HOST1 xapi: [debug||922 |timeboxed_rpc D:a08ebc674b6d|xmlrpc_client] stunnel pid: 339060 (cached) connected to 192.168.1.6:443
    o Feb 26 09:43:33 HOST1 xapi: [debug||19977 HTTPS 192.168.1.5->:::80|[XO] VM export R:c14f4c4c1c4c|xmlrpc_client] stunnel pid: 296483 (cached) connected to 192.168.1.6:443
    o Feb 26 09:43:33 HOST1 xapi: [error||19977 :::80|VM.export D:c62fe7a2b4f2|backtrace] [XO] VM export R:c14f4c4c1c4c failed with exception Server_error(CLIENT_ERROR, [ INTERNAL_ERROR: [ Unix.Unix_error(Unix.EPIPE, "write", "") ] ])
    o Feb 26 09:43:33 HOST1 xapi: [error||19977 :::80|VM.export D:c62fe7a2b4f2|backtrace] Raised Server_error(CLIENT_ERROR, [ INTERNAL_ERROR: [ Unix.Unix_error(Unix.EPIPE, "write", "") ] ])
    • I have no idea if it means anything, but the "failed to compress" message made me try something: I changed "compression" from "Zstd" to "disabled", and that time it worked. Here are my results:
    o Regular backup to TrueNAS, compression "Zstd": backup fails.
    o Regular backup to TrueNAS, compression "disabled": backup succeeds.
    o Regular backup to a vanilla Ubuntu test VM, compression "Zstd": backup succeeds.
    o Delta backup to TrueNAS: backup succeeds.
    Sooooo… the $64,000 question is: why doesn't it work on that one VM when compression is on and the target is a TrueNAS box?
  • Question on Mirror backups

    10
    0 Votes
    10 Posts
    232 Views
    florentF
    @manilx Yes, that's it. We should change "full backup interval" to "base/complete backup interval" to clarify what a full is. And we will do it.
  • Backup fails with "VM_HAS_VUSBS" error

    18
    0 Votes
    18 Posts
    969 Views
    olivierlambertO
    Yes, so for that to be done automatically, you can set up the "offline backup" option, which will do exactly that
  • Backup XCP-NG instances, beyond Pool metadata backup

    4
    0 Votes
    4 Posts
    143 Views
    stormiS
    There's no built-in feature to back up changes made to dom0 outside of changes made via XAPI, so configuration as code is probably the best option here.
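    As a concrete (hypothetical) illustration of the configuration-as-code idea, a small script could capture a chosen list of dom0 config paths into a dated tarball on each run. The path list and destination here are illustrative assumptions, not an official Vates tool:

    ```python
    # Minimal sketch: archive selected config files into a timestamped tarball.
    # Which dom0 paths are worth tracking is an assumption left to the admin.
    import tarfile
    import time
    from pathlib import Path

    def backup_config(paths, dest_dir):
        """Archive the given files/directories into dest_dir; return the archive path."""
        dest = Path(dest_dir)
        dest.mkdir(parents=True, exist_ok=True)
        out = dest / f"dom0-config-{time.strftime('%Y%m%d%H%M%S')}.tar.gz"
        with tarfile.open(out, "w:gz") as tar:
            for p in paths:
                # arcname keeps entries relative, so the archive restores anywhere
                tar.add(p, arcname=Path(p).name)
        return out
    ```

    A proper configuration-as-code setup (Ansible, git-tracked config, etc.) would go further, but even a cron-driven snapshot like this records dom0 changes that XAPI-level metadata backups miss.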
  • Dumb question about multiple remotes

    Solved
    3
    0 Votes
    3 Posts
    82 Views
    A
    @olivierlambert Very cool...thanks for the explanation (but I'll admit I had to google "Node streams")!
  • Export to ova from command line

    Solved
    11
    0 Votes
    11 Posts
    2k Views
    A
    @ainsean Got it: xe vm-export vm=<uuid> filename=<path> compress=true
  • No more options for export

    Solved
    6
    4
    0 Votes
    6 Posts
    217 Views
    GheppyG
    It works for me again, thanks for your support
  • Weird performance alert. Start importing VM for no reason.

    5
    6
    0 Votes
    5 Posts
    131 Views
    P
    I increased the dom0 RAM from the default 1.75 GiB to 2 GiB. Hopefully this will do.
  • 0 Votes
    35 Posts
    895 Views
    marcoiM
    I just had a backup fail with a similar error. Details:
    XO Community: Xen Orchestra, commit 749f0; Master, commit 749f0. "Merge backups synchronously" was off on the backup job; going to enable it.
    [image: 1740267400062-88c76052-e8f1-43dc-a218-5794d46ebaad-image.png]
    {
      "data": {
        "type": "VM",
        "id": "4f715b32-ddfb-5818-c7bd-aaaa2a77ce70",
        "name_label": "PROD_SophosXG"
      },
      "id": "1740263738986",
      "message": "backup VM",
      "start": 1740263738986,
      "status": "failure",
      "warnings": [
        {
          "message": "the writer IncrementalRemoteWriter has failed the step writer.beforeBackup() with error Lock file is already being held. It won't be used anymore in this job execution."
        }
      ],
      "end": 1740263738993,
      "result": {
        "code": "ELOCKED",
        "file": "/run/xo-server/mounts/f992fff1-e245-48f7-8eb3-25987ecbfbd4/xo-vm-backups/4f715b32-ddfb-5818-c7bd-aaaa2a77ce70",
        "message": "Lock file is already being held",
        "name": "Error",
        "stack": "Error: Lock file is already being held\n at /opt/xo/xo-builds/xen-orchestra-202502212211/node_modules/proper-lockfile/lib/lockfile.js:68:47\n at callback (/opt/xo/xo-builds/xen-orchestra-202502212211/node_modules/graceful-fs/polyfills.js:306:20)\n at FSReqCallback.oncomplete (node:fs:199:5)\n at FSReqCallback.callbackTrampoline (node:internal/async_hooks:130:17)\nFrom:\n at NfsHandler.addSyncStackTrace (/opt/xo/xo-builds/xen-orchestra-202502212211/@xen-orchestra/fs/dist/local.js:21:26)\n at NfsHandler._lock (/opt/xo/xo-builds/xen-orchestra-202502212211/@xen-orchestra/fs/dist/local.js:135:48)\n at NfsHandler.lock (/opt/xo/xo-builds/xen-orchestra-202502212211/@xen-orchestra/fs/dist/abstract.js:234:27)\n at IncrementalRemoteWriter.beforeBackup (file:///opt/xo/xo-builds/xen-orchestra-202502212211/@xen-orchestra/backups/_runners/_writers/_MixinRemoteWriter.mjs:54:34)\n at async IncrementalRemoteWriter.beforeBackup (file:///opt/xo/xo-builds/xen-orchestra-202502212211/@xen-orchestra/backups/_runners/_writers/IncrementalRemoteWriter.mjs:68:5)\n at async file:///opt/xo/xo-builds/xen-orchestra-202502212211/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:343:7\n at async callWriter (file:///opt/xo/xo-builds/xen-orchestra-202502212211/@xen-orchestra/backups/_runners/_vmRunners/_Abstract.mjs:33:9)\n at async IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202502212211/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:342:5)\n at async file:///opt/xo/xo-builds/xen-orchestra-202502212211/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
      }
    }
    Email report of failure:
    VM Backup report
    Global status: failure 🚨
    • Job ID: bfadfecc-b651-4fd6-b104-2f015100db29
    • Run ID: 1740263706121
    • Mode: delta
    • Start time: Saturday, February 22nd 2025, 5:35:06 pm
    • End time: Saturday, February 22nd 2025, 5:37:59 pm
    • Duration: 3 minutes
    • Successes: 0 / 8
    1 Failure: PROD_SophosXG (Production Sophos Firewall Application)
    • UUID: 4f715b32-ddfb-5818-c7bd-aaaa2a77ce70
    • Start time: Saturday, February 22nd 2025, 5:35:38 pm
    • End time: Saturday, February 22nd 2025, 5:35:38 pm
    • Duration: a few seconds
    • ⚠️ the writer IncrementalRemoteWriter has failed the step writer.beforeBackup() with error Lock file is already being held. It won't be used anymore in this job execution.
    • Error: Lock file is already being held
    Manually run completed: PROD_SophosXG (xcp02)
    • Clean VM directory (cleanVm: incorrect backup size in metadata): Start 2025-02-22 18:37, End 2025-02-22 18:37
    • Snapshot: Start 2025-02-22 18:37, End 2025-02-22 18:37
    • Qnap NFS backup transfer: Start 2025-02-22 18:37, End 2025-02-22 18:38, Duration: a minute, Size: 12.85 GiB, Speed: 185.52 MiB/s
    • Start 2025-02-22 18:37, End 2025-02-22 18:44, Duration: 7 minutes
    • Start 2025-02-22 18:37, End 2025-02-22 18:44, Duration: 7 minutes
    • Type: delta
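    The ELOCKED error in the log above comes from an advisory lock file guarding the backup directory: a second job touching the same directory refuses to start while the first job's lock exists. A simplified sketch of that pattern follows; it is not proper-lockfile's actual implementation (which, among other things, also detects stale locks), just an illustration of the mechanism.

    ```python
    # Advisory lock-file pattern: atomic, exclusive creation of a sentinel file.
    import os

    class LockHeldError(Exception):
        """Raised when the lock file is already being held."""

    def acquire_lock(path):
        lock = path + ".lock"
        try:
            # O_EXCL makes creation atomic: it fails if the file already exists,
            # which is how two overlapping jobs detect each other.
            fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        except FileExistsError:
            raise LockHeldError(f"Lock file is already being held: {lock}")
        os.close(fd)
        return lock

    def release_lock(lock):
        os.remove(lock)
    ```

    This also explains why such failures can clear themselves: once the job holding the lock finishes (or the stale lock is cleaned up), the next run acquires it normally.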
  • Licensing XO Proxy

    3
    0 Votes
    3 Posts
    82 Views
    J
    Hi jkatz - I am a sales manager for Vates America Corp, and I apologize for any confusion, but 'complete feature set' refers to XOA. We list XOSTOR, XOPROXY, and Airgap Support all as 'optional' because they are add-ons and incur additional costs. One of my team members or I will be happy to answer any questions you have and discuss the various costs of our add-on offerings. We can be reached at sales@vates.tech and we look forward to hearing from you.
  • How to restore a VM from VHD files?

    18
    0 Votes
    18 Posts
    2k Views
    lawrencesystemsL
    @starmood For each VM backup run inside Xen Orchestra, it backs up not just the VHD but everything needed to restore that VM to any other XCP-ng host. So in a complete-loss situation you can load a new XCP-ng host, set up Xen Orchestra, point it at those backups, and restore any of the VMs with all their settings. The metadata backup of XCP-ng is just that, the metadata of the system, and it's not granular. It's good to have because things like network settings and which VMs are on which hosts will be there, but to me the VM backups are the most important. I have a tutorial covering how the backups work; it's from a bit over a year ago, there are EVEN MORE features now, and I will be doing a new video this year to cover that. https://youtu.be/weVoKm8kDb4?si=1z6IDqwnK1cxEGjm I also have a tutorial on how to automate the backup validation process: https://youtu.be/A0HTRF3dhQE?si=gZLXQUqLJmDkIQs6
  • XO6 Backup displayed in "start page/dash board" Feedback

    2
    4
    0 Votes
    2 Posts
    67 Views
    pdoniasP
    @ph7 Thanks for the report, we'll check that
  • Misleading status in VM->Backup screen

    7
    1
    0 Votes
    7 Posts
    200 Views
    J
    @olivierlambert @DustinB @Forza May I suggest it go a bit further, unless it already does. Could the VMs in Xen Orchestra show whether each was backed up successfully in the most recent job, as well as when it was last backed up? That way, if you haven't received (or read) the report yet, you can see the state at a glance, which makes decoding an orange status for a backup job easier: you know exactly which VMs need another backup run. Alternatively, show which VMs failed and which succeeded when you open the details for the job (task) in Xen Orchestra after an orange status in Xen Orchestra 6.