• 0 Votes
    8 Posts
    511 Views
    lawrencesystems
    @thomas-wood "Remotes" are for XO backups and SRs are for VM storage. Once you add an SR to a pool it's available to all hosts. Also, all hosts should be able to reach the IP of the NFS server before setting that up. I have a video explaining storage in XCP-ng here: https://youtu.be/xTo1F3LUhbE?si=QHYkABgElsEVOj6H
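    For reference, a minimal sketch (not from the thread) of creating a pool-wide NFS SR from the CLI; the server address 192.168.1.10 and export path /export/vms are placeholders, and the export should be reachable from every host first:

      # optional: confirm each host can see the NFS export before creating the SR
      showmount -e 192.168.1.10

      # create a shared NFS SR; shared=true makes it available to all hosts in the pool
      xe sr-create type=nfs shared=true content-type=user \
        name-label="NFS VM storage" \
        device-config:server=192.168.1.10 \
        device-config:serverpath=/export/vms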
  • Restoring a disk only not the whole VM

    0 Votes
    5 Posts
    217 Views
    olivierlambert
    This is not a backup but a replication
  • Mirror all incremental VM backups

    0 Votes
    2 Posts
    87 Views
    olivierlambert
    Ping @lsouai-vates
  • Continuous Backup Interrupted.

    0 Votes
    1 Post
    54 Views
    No one has replied
  • Issue with backup & snapshot

    Unsolved
    0 Votes
    3 Posts
    177 Views
    K
    @Danp Thanks for your response. I already tried those steps, but the problem is that I am getting an authorisation error.

      [17:17 xcp-ng-2 ~]# iscsiadm -m session
      tcp: [10] 10.204.228.100:3260,1026 iqn.1992-08.com.netapp:sn.302cea4af64811ec84d9d039ea3ae4de:vs.5 (non-flash)
      tcp: [7] 10.204.228.101:3260,1027 iqn.1992-08.com.netapp:sn.302cea4af64811ec84d9d039ea3ae4de:vs.5 (non-flash)
      tcp: [8] 10.204.228.103:3260,1029 iqn.1992-08.com.netapp:sn.302cea4af64811ec84d9d039ea3ae4de:vs.5 (non-flash)
      tcp: [9] 10.204.228.102:3260,1028 iqn.1992-08.com.netapp:sn.302cea4af64811ec84d9d039ea3ae4de:vs.5 (non-flash)

    Not sure why it's giving an authentication error:

      [17:18 xcp-ng-2 ~]# iscsiadm -m discovery -t sendtargets -p 10.204.228.100
      iscsiadm: Login failed to authenticate with target
      iscsiadm: discovery login to 10.204.228.100 rejected: initiator failed authorization
      iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure
      [17:18 xcp-ng-2 ~]#

    If the iscsi daemon has logged out, then how am I able to access the VMs? And how am I able to migrate a VM from one xcp-ng node to another? For migration and for accessing a VM, xcp-ng should be able to communicate with the SR, right?

    However, I tried restarting the iscsi service on one of the xcp-ng nodes, which had 2 VMs: one on the node's local storage and one on the SR. When I restarted the iscsid service and logged out of and back into the storage, the login failed with an authorisation error (not sure why it gave the error, strange), and then both VMs rebooted automatically along with the xcp-ng node. Once the xcp-ng node came up, the iscsi daemon automatically logged in to the storage again. I am not sure what to do in this case. Your support is much appreciated.
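    For reference, a minimal sketch (not from the thread) of the pieces that usually drive an "initiator failed authorization" rejection from a NetApp target: the host's initiator IQN must be listed in the SVM's igroup, and if CHAP is enforced the CHAP settings must match. The <chap-user>/<chap-secret> values are placeholders; the keys live in /etc/iscsi/iscsid.conf on the XCP-ng host:

      # the initiator name the NetApp igroup must contain
      cat /etc/iscsi/initiatorname.iscsi

      # CHAP settings (only if the SVM enforces CHAP), in /etc/iscsi/iscsid.conf
      discovery.sendtargets.auth.authmethod = CHAP
      discovery.sendtargets.auth.username = <chap-user>
      discovery.sendtargets.auth.password = <chap-secret>
      node.session.auth.authmethod = CHAP
      node.session.auth.username = <chap-user>
      node.session.auth.password = <chap-secret>

    Existing node records keep the auth settings they were discovered with, so the same keys may also need to be pushed into them, e.g. iscsiadm -m node -T <target-iqn> -p <portal> -o update -n node.session.auth.authmethod -v CHAP (and likewise for the username/password keys). Be careful: as seen above, logging sessions out on an SR that is in use can take down running VMs.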
  • Error: VM_HAS_PCI_ATTACHED when IN MEMORY Snapshot mode upon Delta backup

    0 Votes
    2 Posts
    80 Views
    olivierlambert
    Hi, it's not supported because you have a PCI device attached: memory snapshots aren't supported in that case, because resuming one would crash the VM, since the PCI device would have another context in the meantime. Offline means the VM will be shut down, snapshotted, then started. edit: "In memory" is like a live migration, so it's not compatible with PCI passthrough
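    For reference, one way (not from the thread) to check from the CLI whether a VM has a PCI device passed through before picking a snapshot mode; on XCP-ng this is commonly configured through the other-config:pci key, and the UUID below is a placeholder:

      # prints the passed-through PCI address(es) if the key is set, errors otherwise
      xe vm-param-get uuid=<vm-uuid> param-name=other-config param-key=pci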
  • 0 Votes
    1 Post
    66 Views
    No one has replied
  • Issue with SR and coalesce

    Unsolved
    0 Votes
    62 Posts
    2k Views
    nikade
    @tjkreidl said in Issue with SR and coalesce:
      @nikade Am still wondering if one of the hosts isn't connected to that SR properly. Re-creating the SR from scratch would do the trick, but it's a lot of work shuffling all the VMs to different SR storage. Might be worth it, of course, if it fixes the issue.
    Yeah, maybe, but I think there would be some kind of indication in XO if the SR wasn't properly mounted on one of the hosts. Let's see what happens; it's weird indeed that it's not shown.
  • Health Check Schedule

    0 Votes
    5 Posts
    134 Views
    M
    @ph7 Thanks for the help...I think that gets me close. I will tinker with it some more. Thanks again for the answers.
  • How Best to Achieve Higher Transfer Speeds for Backup Jobs

    0 Votes
    13 Posts
    436 Views
    K
    @ph7 Yep, that's what I've done. Actually, I've powered off the prior XOA instance. Not ideal, but a workable solution.
  • Invalid Health Check SR causes Bakup to fail with no error

    0 Votes
    5 Posts
    225 Views
    T
    @olivierlambert - I'm not sure who the best person at Vates to ping would be, or whether there is another channel I should use to request enhancements; I'm happy to be directed to the correct place if that's not here. Despite the fact that I brought this upon myself, I do think it would be nice if Xen Orchestra could improve the error handling/messaging for situations where a task fails due to an invalid object UUID. It seems the UI is already making a simple XAPI call to look up the name-label of the SR; when that call fails, the schedule configured with the invalid/unknown UUID displays that UUID in red text with a red triangle.
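    For reference, a rough sketch (not from the thread) of the kind of lookup being described, expressed via the xe CLI; the UUID is a placeholder, and an unknown UUID simply returns an error instead of the SR's name-label:

      xe sr-param-get uuid=<sr-uuid> param-name=name-label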
  • Deleting incremental backups

    0 Votes
    7 Posts
    187 Views
    F
    @DustinB OK. I think I'm between a rock and a hard place! Thanks for the advice.
  • maybe a bug when restarting mirroring delta backups that have failed

    0 Votes
    2 Posts
    102 Views
    R
    @jshiells I believe this is by design for a mirror backup. It checks every VM to mirror (if no filter is selected, that will be all of them on the SR); if there is no new recovery point, it will not copy anything, as it already has the latest version of that backup on the destination. When a filter is selected, it still checks every VM on the SR, but it does not copy data.
  • Backup issues with S3 remote

    0 Votes
    9 Posts
    332 Views
    P
    @florent It's been quiet since I supplied the JSON logs for the failed backup jobs. I understand this is a low-priority case, and for myself I can live with these failed backups, since I use other methods (especially for the Linux machines) to back up modified files of importance. Has there been any progress in tracking down what's happening here?
    Two different kinds of failures since I reported:
    Got a "failure" (but a success) in other jobs. To explain the "success" part of it: I can select the job when doing a restore, and the "transfer" is green but the "remote" is marked red. [image: 1741003478991-xcp-ng-backup-fail-or-ok-admin-ubuntu.png] ... and the remote is OK ... [image: 1741003521687-xcp-ng-backup-fail-or-ok-admin-ubuntu-remote-ok.png] Restoring from this backup seems to be possible: [image: 1741004632183-xcp-ng-backup-fail-or-ok-admin-ubuntu-restore-available.png]
    The same happens with the previously mentioned "Win 10 22H2 new": the backup gets a red blob on "remote", but green on "transfer", and a restore is available from that point in time. This is against another remote, which gets a "full success" on other VMs.
    Another example, against the same remote (as in the images above), with five jobs at approximately the same time (the machines belong to a backup sequence): [image: 1741004104739-backup-2025-03-03-2x-success-followed-by-3x-failure-followed-by-1x-success.png] The failures are identical: [image: 1741004215619-backup-2025-03-03-failure-example.png]
    If there is anything I can do to provide you with more details, let me know what I should do to obtain the data you need; and if this problem is rare enough that it never has happened (and never will happen) to a real customer, then just close the case.
  • 8.3RC2 Backup Pool restore not compatible with 8.3 release

    0 Votes
    1 Post
    48 Views
    No one has replied
  • Question on NBD backups

    0 Votes
    3 Posts
    151 Views
    A
    @TS79 That makes sense - thanks!
  • Backup failed with "Body Timeout Error"

    0 Votes
    8 Posts
    460 Views
    A
    Update: It's weird:
    • There are three VMs on this host. The backup works with two but not with the third; it fails with the "Body Timeout Error".
    • Two of the VMs are almost identical (same drive sizes). The only difference is that one was set up as "Other install media" (it came over from ESXi) and the one that fails was set up using the "Windows Server 2022" template.
    • I normally back up to multiple NFS servers, so I changed the job to try them one at a time; both failed.
    • After watching it do the backup too many times to count, I found that, at about the 95% stage, the snapshot stops writing to the NFS share.
    • About that time, /var/log/xensource.log records this information:
      Feb 26 09:43:33 HOST1 xapi: [error||19977 HTTPS 192.168.1.5->:::80|[XO] VM export R:c14f4c4c1c4c|xapi_compression] nice failed to compress: exit code 70
      Feb 26 09:43:33 HOST1 xapi: [ warn||19977 HTTPS 192.168.1.5->:::80|[XO] VM export R:c14f4c4c1c4c|pervasiveext] finally: Error while running cleanup after failure of main function: Failure("nice failed to compress: exit code 70")
      Feb 26 09:43:33 HOST1 xapi: [debug||922 |xapi events D:d21ea5c4dd9a|xenops] Event on VM fbcbd709-a9d9-4cc7-80de-90185a74eba4; resident_here = true
      Feb 26 09:43:33 HOST1 xapi: [debug||922 |xapi events D:d21ea5c4dd9a|dummytaskhelper] task timeboxed_rpc D:a08ebc674b6d created by task D:d21ea5c4dd9a
      Feb 26 09:43:33 HOST1 xapi: [debug||922 |timeboxed_rpc D:a08ebc674b6d|xmlrpc_client] stunnel pid: 339060 (cached) connected to 192.168.1.6:443
      Feb 26 09:43:33 HOST1 xapi: [debug||19977 HTTPS 192.168.1.5->:::80|[XO] VM export R:c14f4c4c1c4c|xmlrpc_client] stunnel pid: 296483 (cached) connected to 192.168.1.6:443
      Feb 26 09:43:33 HOST1 xapi: [error||19977 :::80|VM.export D:c62fe7a2b4f2|backtrace] [XO] VM export R:c14f4c4c1c4c failed with exception Server_error(CLIENT_ERROR, [ INTERNAL_ERROR: [ Unix.Unix_error(Unix.EPIPE, "write", "") ] ])
      Feb 26 09:43:33 HOST1 xapi: [error||19977 :::80|VM.export D:c62fe7a2b4f2|backtrace] Raised Server_error(CLIENT_ERROR, [ INTERNAL_ERROR: [ Unix.Unix_error(Unix.EPIPE, "write", "") ] ])
    • I have no idea if it means anything, but the "failed to compress" made me try something: I changed "compression" from "Zstd" to "disabled", and that time it worked.
    Here are my results:
    • Regular backup to TrueNAS, compression set to "Zstd": backup fails.
    • Regular backup to TrueNAS, compression set to "disabled": backup is successful.
    • Regular backup to a vanilla Ubuntu test VM, compression set to "Zstd": backup is successful.
    • Delta backup to TrueNAS: backup is successful.
    Sooooo... the $64,000 question is: why doesn't it work on that one VM when compression is on and the target is a TrueNAS box?
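    For reference, a rough check (not from the thread) to line up the failure with the compression step; this assumes the zstd binary is present in dom0 and that /mnt/test is a mount of the same TrueNAS NFS export (both placeholders):

      # confirm the export failure really is the compression step
      grep "failed to compress" /var/log/xensource.log

      # crude test: stream a few GB through zstd onto the NFS share and watch for errors
      dd if=/dev/zero bs=1M count=4096 | zstd -T0 -o /mnt/test/zstd-test.zst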
  • Question on Mirror backups

    0 Votes
    10 Posts
    342 Views
    florent
    @manilx yes, that's it. We should change "full backup interval" to "base/complete backup interval" to clarify what a "full" is, and we will do it.
  • Backup fails with "VM_HAS_VUSBS" error

    0 Votes
    18 Posts
    1k Views
    olivierlambert
    Yes, so for that to be done automatically, you can enable the "offline backup" option, which will do exactly that.
  • Backup XCP-NG instances, beyond Pool metadata backup

    0 Votes
    4 Posts
    179 Views
    stormi
    There's no built-in feature to back up changes made to dom0 outside of changes made via XAPI, so configuration as code is probably the best option here.
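    For reference, a minimal configuration-as-code sketch (not an official mechanism): keep your manual dom0 tweaks in a small, versioned script that can be re-applied after a host reinstall. The file names and package name below are illustrative placeholders.

      #!/bin/sh
      # reapply-dom0-config.sh - kept in git, re-run after (re)installing a host
      set -eu

      # example: re-apply a hand-edited sshd config kept next to this script (hypothetical file)
      install -m 0644 ./sshd_config /etc/ssh/sshd_config
      systemctl restart sshd

      # example: reinstall an extra dom0 package that pool metadata backups won't restore (placeholder name)
      yum install -y some-extra-package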