• Backup issues with S3 remote

    0 Votes
    9 Posts
    261 Views
    @florent It's been quiet since I supplied the JSON logs for the failed backup jobs. I understand this is a low-priority case, and for myself I can continue living with these failed backups, since I use other methods (especially for the Linux machines) to back up modified files of importance. Has there been any progress in tracking down what's happening here? Two different kinds of failures since I reported:
    I got a "failure" (but actually a success) in other jobs. To explain the "success" part of it: I can select the job when doing a restore, and the "transfer" is green but the "remote" is marked red. [image: 1741003478991-xcp-ng-backup-fail-or-ok-admin-ubuntu.png] ...and the remote is OK... [image: 1741003521687-xcp-ng-backup-fail-or-ok-admin-ubuntu-remote-ok.png] Restoring from this backup seems to be possible: [image: 1741004632183-xcp-ng-backup-fail-or-ok-admin-ubuntu-restore-available.png] The same happens with the previously mentioned "Win 10 22H2 new": the backup gets a red blob on "remote" but green on "transfer", and a restore is available from that point in time. This is against another remote, which gets "full success" on other VMs.
    Another example, against the same remote as in the images above, with five jobs at approximately the same time (the machines belong to a backup sequence): [image: 1741004104739-backup-2025-03-03-2x-success-followed-by-3x-failure-followed-by-1x-success.png] The failures are identical: [image: 1741004215619-backup-2025-03-03-failure-example.png]
    If there is anything I can do to provide you with more details, explain what I should do to obtain the data you need; and if this problem is rare enough to never have happened (or never will happen) to a real customer, then just close the case.
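    A useful way to gather more detail on a red "remote" step is to check the S3 remote outside of XO. A minimal sketch using the AWS CLI, where the bucket name and endpoint URL are placeholders for your own remote settings:

      # List the backup metadata XO wrote to the S3 remote
      aws s3 ls s3://xo-backups/xo-vm-backups/ --endpoint-url https://s3.example.com
      # Round-trip a small object to confirm write access and latency
      echo test > /tmp/xo-remote-check
      aws s3 cp /tmp/xo-remote-check s3://xo-backups/check.txt --endpoint-url https://s3.example.com

    If both succeed while XO still flags the remote red, the failure is more likely in XO's writer step than in basic S3 connectivity.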
  • 8.3RC2 Backup Pool restore not compatible with 8.3 release

    0 Votes
    1 Posts
    34 Views
    No one has replied
  • Question on NBD backups

    0 Votes
    3 Posts
    82 Views
    @TS79 That makes sense - thanks!
  • Backup failed with "Body Timeout Error"

    0 Votes
    8 Posts
    209 Views
    Update: It's weird:
    • There are three VMs on this host. The backup works with two but not with the third, which fails with the "Body Timeout Error".
    • Two of the VMs are almost identical (same drive sizes). The only difference is that one was set up as "Other install media" (it came over from ESXi) and the one that fails was set up using the "Windows Server 2022" template.
    • I normally back up to multiple NFS servers, so I changed to trying one at a time; both failed.
    • After watching it do the backup too many times to count, I found that, at about the 95% stage, the snapshot stops writing to the NFS share.
    • About that time, the file /var/log/xensource.log records this information:
      Feb 26 09:43:33 HOST1 xapi: [error||19977 HTTPS 192.168.1.5->:::80|[XO] VM export R:c14f4c4c1c4c|xapi_compression] nice failed to compress: exit code 70
      Feb 26 09:43:33 HOST1 xapi: [ warn||19977 HTTPS 192.168.1.5->:::80|[XO] VM export R:c14f4c4c1c4c|pervasiveext] finally: Error while running cleanup after failure of main function: Failure("nice failed to compress: exit code 70")
      Feb 26 09:43:33 HOST1 xapi: [debug||922 |xapi events D:d21ea5c4dd9a|xenops] Event on VM fbcbd709-a9d9-4cc7-80de-90185a74eba4; resident_here = true
      Feb 26 09:43:33 HOST1 xapi: [debug||922 |xapi events D:d21ea5c4dd9a|dummytaskhelper] task timeboxed_rpc D:a08ebc674b6d created by task D:d21ea5c4dd9a
      Feb 26 09:43:33 HOST1 xapi: [debug||922 |timeboxed_rpc D:a08ebc674b6d|xmlrpc_client] stunnel pid: 339060 (cached) connected to 192.168.1.6:443
      Feb 26 09:43:33 HOST1 xapi: [debug||19977 HTTPS 192.168.1.5->:::80|[XO] VM export R:c14f4c4c1c4c|xmlrpc_client] stunnel pid: 296483 (cached) connected to 192.168.1.6:443
      Feb 26 09:43:33 HOST1 xapi: [error||19977 :::80|VM.export D:c62fe7a2b4f2|backtrace] [XO] VM export R:c14f4c4c1c4c failed with exception Server_error(CLIENT_ERROR, [ INTERNAL_ERROR: [ Unix.Unix_error(Unix.EPIPE, "write", "") ] ])
      Feb 26 09:43:33 HOST1 xapi: [error||19977 :::80|VM.export D:c62fe7a2b4f2|backtrace] Raised Server_error(CLIENT_ERROR, [ INTERNAL_ERROR: [ Unix.Unix_error(Unix.EPIPE, "write", "") ] ])
    • I have no idea if it means anything, but the "failed to compress" made me try something. I changed "compression" from "Zstd" to "disabled", and that time it worked. Here are my results:
      Regular backup to TrueNAS, "compression" set to "Zstd": backup fails.
      Regular backup to TrueNAS, "compression" set to "disabled": backup is successful.
      Regular backup to a vanilla Ubuntu test VM, "compression" set to "Zstd": backup is successful.
      Delta backup to TrueNAS: backup is successful.
    Sooooo... the $64,000 question is: why doesn't it work on that one VM when compression is on and it's a TrueNAS box?
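    Since the log points at the host-side compression step ("nice failed to compress: exit code 70"), one way to isolate it from XO is to run the compressed export by hand on the host. A sketch, assuming the TrueNAS share is mounted at /mnt/truenas (a hypothetical path) and using the failing VM's UUID:

      # Reproduce the compressed export directly with xe; XCP-ng 8.x accepts compress=zstd
      xe vm-export vm=<vm-uuid> filename=/mnt/truenas/test-export.xva compress=zstd
      # Compare with an uncompressed export to the same share
      xe vm-export vm=<vm-uuid> filename=/mnt/truenas/test-export-raw.xva compress=false

    If the zstd export also dies around the 95% mark, the problem sits in the host's compression pipeline rather than in XO or the NFS remote.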
  • Question on Mirror backups

    0 Votes
    10 Posts
    184 Views
    florent
    @manilx Yes, that's it. We should rename the "full backup interval" to "base/complete backup interval" to clarify what a full is. And we will do it.
  • Backup fails with "VM_HAS_VUSBS" error

    0 Votes
    18 Posts
    852 Views
    olivierlambert
    Yes. For that to be done automatically, you can set up the "offline backup" option, which will do exactly that.
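    For context, the VM_HAS_VUSBS error is raised because the VM has a USB passthrough device attached, which blocks live snapshots; "offline backup" shuts the VM down around the backup, which removes the constraint. As a hedged manual alternative, assuming the vusb-* commands available since XenServer 7.3 / XCP-ng 7.x:

      # List USB passthrough devices attached to the VM
      xe vusb-list vm-uuid=<vm-uuid>
      # Unplug one before the backup window (re-create or replug it afterwards)
      xe vusb-unplug uuid=<vusb-uuid>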
  • Backup XCP-NG instances, beyond Pool metadata backup

    0 Votes
    4 Posts
    119 Views
    stormi
    There's no built-in feature to back up changes made to dom0 other than changes made via XAPI, so configuration as code is probably the best option here.
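    A crude but dependency-free sketch of that idea, shipping dom0's /etc to an external host on a schedule (host and path are placeholders; a real configuration-as-code setup with Ansible or similar is the better long-term fit):

      # Run from dom0, e.g. via a nightly cron entry
      tar czf - /etc | ssh backup@backuphost \
        "cat > /backups/dom0-$(hostname)-$(date +%F)-etc.tgz"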
  • Dumb question about multiple remotes

    Solved
    0 Votes
    3 Posts
    64 Views
    @olivierlambert Very cool...thanks for the explanation (but I'll admit I had to google "Node streams")!
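    For anyone else who had to look it up: the point of Node streams here is that XO can read the VM export once and fan the same stream out to every remote, rather than exporting once per remote. A rough shell analogy of that single-read fan-out, assuming your xe build streams to stdout when filename is left empty (a common but version-dependent trick) and that both remotes are mounted locally:

      # One export stream, duplicated to two remotes without reading the VM twice
      xe vm-export vm=<vm-uuid> filename= | \
        tee /run/xo-remote1/backup.xva > /run/xo-remote2/backup.xva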
  • Export to ova from command line

    Solved
    0 Votes
    11 Posts
    2k Views
    @ainsean Got it: xe vm-export vm=<uuid> filename=<path> compress=true
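    Worth noting for anyone landing here from the title: vm-export produces an XVA archive, not an OVA. The matching import, plus the zstd variant supported on XCP-ng 8.x, as a quick sketch:

      # Export with zstd compression (smaller and faster than gzip on 8.x)
      xe vm-export vm=<vm-uuid> filename=/mnt/backup/myvm.xva compress=zstd
      # Restore the archive on any host
      xe vm-import filename=/mnt/backup/myvm.xva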
  • No more options for export

    Solved
    0 Votes
    6 Posts
    192 Views
    Gheppy
    It works for me again, thanks for your support
  • Weird performance alert. Start importing VM for no reason.

    0 Votes
    5 Posts
    120 Views
    I increased the dom0 RAM from the default 1.75 GiB to 2 GiB. Hopefully this will do.
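    For anyone else tuning this: dom0 memory is set on the Xen command line and takes effect after a host reboot. A sketch following the XCP-ng documentation, with the size adjusted to taste:

      # Give dom0 4 GiB (run in dom0, then reboot the host)
      /opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=4096M,max:4096M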
  •

    0 Votes
    35 Posts
    723 Views
    marcoi
    I just had a backup fail with a similar error. Details: XO Community; Xen Orchestra, commit 749f0; Master, commit 749f0. "Merge backups synchronously" was off on the backup job; going to enable it. [image: 1740267400062-88c76052-e8f1-43dc-a218-5794d46ebaad-image.png]
    { "data": { "type": "VM", "id": "4f715b32-ddfb-5818-c7bd-aaaa2a77ce70", "name_label": "PROD_SophosXG" }, "id": "1740263738986", "message": "backup VM", "start": 1740263738986, "status": "failure", "warnings": [ { "message": "the writer IncrementalRemoteWriter has failed the step writer.beforeBackup() with error Lock file is already being held. It won't be used anymore in this job execution." } ], "end": 1740263738993, "result": { "code": "ELOCKED", "file": "/run/xo-server/mounts/f992fff1-e245-48f7-8eb3-25987ecbfbd4/xo-vm-backups/4f715b32-ddfb-5818-c7bd-aaaa2a77ce70", "message": "Lock file is already being held", "name": "Error", "stack": "Error: Lock file is already being held\n    at /opt/xo/xo-builds/xen-orchestra-202502212211/node_modules/proper-lockfile/lib/lockfile.js:68:47\n    at callback (/opt/xo/xo-builds/xen-orchestra-202502212211/node_modules/graceful-fs/polyfills.js:306:20)\n    at FSReqCallback.oncomplete (node:fs:199:5)\n    at FSReqCallback.callbackTrampoline (node:internal/async_hooks:130:17)\nFrom:\n    at NfsHandler.addSyncStackTrace (/opt/xo/xo-builds/xen-orchestra-202502212211/@xen-orchestra/fs/dist/local.js:21:26)\n    at NfsHandler._lock (/opt/xo/xo-builds/xen-orchestra-202502212211/@xen-orchestra/fs/dist/local.js:135:48)\n    at NfsHandler.lock (/opt/xo/xo-builds/xen-orchestra-202502212211/@xen-orchestra/fs/dist/abstract.js:234:27)\n    at IncrementalRemoteWriter.beforeBackup (file:///opt/xo/xo-builds/xen-orchestra-202502212211/@xen-orchestra/backups/_runners/_writers/_MixinRemoteWriter.mjs:54:34)\n    at async IncrementalRemoteWriter.beforeBackup (file:///opt/xo/xo-builds/xen-orchestra-202502212211/@xen-orchestra/backups/_runners/_writers/IncrementalRemoteWriter.mjs:68:5)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202502212211/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:343:7\n    at async callWriter (file:///opt/xo/xo-builds/xen-orchestra-202502212211/@xen-orchestra/backups/_runners/_vmRunners/_Abstract.mjs:33:9)\n    at async IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202502212211/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:342:5)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202502212211/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38" } }
    Email report of the failure:
    VM Backup report. Global status: failure 🚨
    • Job ID: bfadfecc-b651-4fd6-b104-2f015100db29
    • Run ID: 1740263706121
    • Mode: delta
    • Start time: Saturday, February 22nd 2025, 5:35:06 pm
    • End time: Saturday, February 22nd 2025, 5:37:59 pm
    • Duration: 3 minutes
    • Successes: 0 / 8
    1 Failure: PROD_SophosXG (Production Sophos Firewall Application)
    • UUID: 4f715b32-ddfb-5818-c7bd-aaaa2a77ce70
    • Start time: Saturday, February 22nd 2025, 5:35:38 pm
    • End time: Saturday, February 22nd 2025, 5:35:38 pm
    • Duration: a few seconds
    • ⚠️ the writer IncrementalRemoteWriter has failed the step writer.beforeBackup() with error Lock file is already being held. It won't be used anymore in this job execution.
    • Error: Lock file is already being held
    A manual run then completed: PROD_SophosXG (xcp02)
    • Clean VM directory (cleanVm: incorrect backup size in metadata): Start 2025-02-22 18:37, End 2025-02-22 18:37
    • Snapshot: Start 2025-02-22 18:37, End 2025-02-22 18:37
    • Qnap NFS, backup transfer: Start 2025-02-22 18:37, End 2025-02-22 18:38, Duration: a minute, Size: 12.85 GiB, Speed: 185.52 MiB/s
    • Qnap NFS, overall: Start 2025-02-22 18:37, End 2025-02-22 18:44, Duration: 7 minutes
    • Overall: Start 2025-02-22 18:37, End 2025-02-22 18:44, Duration: 7 minutes, Type: delta
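    The ELOCKED error means another process still holds the per-VM lock on the remote, typically an overlapping run or a leftover from a crashed one. A hedged check from the XO host, assuming proper-lockfile's default convention of a <path>.lock directory next to the locked path:

      # Look for a leftover lock beside the VM's backup directory
      ls -ld /run/xo-server/mounts/<remote-id>/xo-vm-backups/<vm-uuid>.lock
      # Remove it only once you are certain no backup job is still running
      rmdir /run/xo-server/mounts/<remote-id>/xo-vm-backups/<vm-uuid>.lock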
  • Licensing XO Proxy

    0 Votes
    3 Posts
    71 Views
    Hi jkatz, I am a sales manager for Vates America Corp, and I apologize for any confusion, but 'complete feature set' refers to XOA. We list XOSTOR, XOPROXY, and Airgap Support all as 'optional' because they are add-ons and incur additional costs. One of my team members or I will be happy to answer any questions you have and discuss the various costs of our add-on offerings. We can be reached at sales@vates.tech, and we look forward to hearing from you.
  • How to restore a VM from VHD files?

    0 Votes
    18 Posts
    1k Views
    lawrencesystems
    @starmood For each VM backup run inside Xen Orchestra, it backs up not just the VHD but everything needed to restore that VM to any other XCP-ng host. So in a complete-loss situation you can load a new XCP-ng host, set up Xen Orchestra, point the new Xen Orchestra at those backups, and restore any of the VMs with all their settings. The metadata backup of XCP-ng is just that, the metadata of the system, and it's not granular. It's good to have because things like network settings and which VMs are on the hosts will be there, but the VM backups are, to me, the most important. I have a tutorial covering how the backups work; it's from a bit over a year ago, and there are EVEN MORE features now, so I will be doing a new video this year to cover that. https://youtu.be/weVoKm8kDb4?si=1z6IDqwnK1cxEGjm I also have a tutorial on how you can automate the backup validation process: https://youtu.be/A0HTRF3dhQE?si=gZLXQUqLJmDkIQs6
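    To make "everything needed to restore" concrete, this is roughly what a backup job leaves on the remote; the layout below is a sketch from memory, and details vary by XO version:

      xo-vm-backups/<vm-uuid>/
        <timestamp>.json          # per-run VM metadata (settings, disks, networks)
        <timestamp>.xva           # full backups
        vdis/<job-id>/<vdi-uuid>/
          <timestamp>.vhd         # delta disk chains

    The .json files are why a fresh XO instance pointed at the share can rebuild the restore list with all VM settings intact.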
  • XO6 Backup displayed in "start page/dashboard": Feedback

    0 Votes
    2 Posts
    58 Views
    pdonias
    @ph7 Thanks for the report, we'll check that
  • Misleading status in VM->Backup screen

    0 Votes
    7 Posts
    183 Views
    @olivierlambert @DustinB @Forza May I suggest that it go a bit further, unless it already does so? Could the VMs in Xen Orchestra show whether they were backed up successfully in the most recent job, as well as when they were last backed up? If you happen not to have the report yet (or haven't read it), you could then see the state at a glance. That would make decoding the orange status for the backup job easier, so you know which VMs you need to re-run a backup for. Alternatively, show which ones failed and which succeeded when you open the details for the job (task) in Xen Orchestra following an orange status on Xen Orchestra 6.
  • remote encryption algorithm

    0 Votes
    4 Posts
    120 Views
    olivierlambert
    Ah great! It should be documented I suppose, ping @thomas-dkmt
  • Backup and Replication Strategy

    0 Votes
    2 Posts
    114 Views
    @Disbelief5920
    "Backup VMs from production cluster in datacenter to separate backup storage in same datacenter. (Delta Backup. Retention <X> Monthly, <X> Weekly, <X> daily.)" Easily done: through the scheduling system you can have the system perform Continuous Replication or a standard Delta job.
    "Copy the backup file from production datacenter backup storage to offsite DR datacenter backup storage. (Retention possibly different than production: <Y> Monthly, <Y> Weekly, <Y> daily.)" I think what you'd be looking at is the "Mirror Backup" job. This will copy your backup repo to another available repo.
    "Copy the backup from production datacenter or offsite DR datacenter to cloud storage (Backblaze)." Yes, this can be done in XO's interface or from your NAS/SAN solutions (or client-side server software).
    "Continuous replication from production cluster in datacenter to offsite DR datacenter (1-2x daily, for example)." I've had CR jobs running as often as every 5 minutes between two datacenters. Obviously bandwidth matters, but once or twice a day should be perfectly doable.
    "Is it possible to achieve the above by only backing up the production VM one time, so it does not need to do 2x snapshots and 2x data copies (1 for the backup and 1 for the replication jobs)?" You'd use the Delta job format for the bulk of these.
    "Is it possible to achieve the above with only 1 transfer across the WAN instead of once for the continuous replication and once for the delta backup?" No; CR is very expressly any changes that have happened, it's continuous by its nature.
    @Disbelief5920 said in Backup and Replication Strategy: "For example, is there a scenario that would first create a backup from production to onsite backup storage, then replicate across the WAN to the backup storage, then restore the VM from the backup storage to a DR cluster? This way the production NAS/XCP-ng nodes would only have to process/copy the data once, only 1 copy would transfer across the WAN, and only 1 restore would run from offsite backup storage to the DR cluster." I'm not 100% on this scenario, so I don't think I can readily help explain it.
  • Start backup for one single vm

    0 Votes
    7 Posts
    166 Views
    @rtjdamen I was in the same situation. I made a new schedule, BCK-Man, with a tag BCK-Man. When I needed a manual backup, I added the tag to that VM.
  • Question on backup sequence

    0 Votes
    6 Posts
    377 Views
    I would like to ask a follow-up question to confirm: if using sequences, should we disable the backup tasks on the Overview tab?