• CBT: the thread to centralize your feedback

    Pinned
    434
    1 Votes
    434 Posts
    384k Views
    A
    @rtjdamen As of writing this, I don't see any VM VDIs listed as attached to the Control Domain. I have 10+ VMs on 3 XCP-ng 8.3 servers in one pool, so I'll let you know if we come across that.
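    For anyone who wants to run the same check from dom0, here is a minimal sketch using standard xe commands (which fields to inspect is an assumption; the idea is just to spot stray VDIs plugged into the control domain):

        # Each host in the pool has its own control domain; check them all.
        for cd in $(xe vm-list is-control-domain=true params=uuid --minimal | tr ',' ' '); do
          echo "== Control domain $cd =="
          xe vbd-list vm-uuid="$cd" params=vdi-uuid,device,currently-attached
        done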
  • Feedback on immutability

    Pinned
    45
    2 Votes
    45 Posts
    9k Views
    florent
    @afk The agent is as dumb as possible. Also, if you encrypt the backup, the agent would need to decrypt the metadata to detect the chains, and thus have access to the encryption key, which means getting the encryption key out of XO and transferring it to the immutability agent. I think it will be easier to provide more feedback on the immutable backups from the XO side, since XO has access to the chain, and/or to alert when something looks strange.
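    For context on what the "dumb" agent actually enforces: as far as I understand, the protection is plain Linux file immutability on the remote, which is why the agent does not need to understand the chains at all. A rough illustration, assuming the remote is a local mount (the path is an example; adjust it to your backup repository layout):

        # Mark a finished VM backup directory immutable, then verify the flag.
        chattr -R +i /mnt/xo-remote/xo-vm-backups/<vm-uuid>
        lsattr -dR /mnt/xo-remote/xo-vm-backups/<vm-uuid>

        # After the retention period, the flag has to be lifted before XO can merge or delete chains.
        chattr -R -i /mnt/xo-remote/xo-vm-backups/<vm-uuid>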
  • Our future backup code: test it!

    2
    1 Votes
    2 Posts
    24 Views
    D
    @olivierlambert I just tried to build the install with that branch and got the following error:

        • Running build in 22 packages
        • Remote caching disabled
        @xen-orchestra/disk-transform:build: cache miss, executing c1d61a12721a1a1b
        @xen-orchestra/disk-transform:build: yarn run v1.22.22
        @xen-orchestra/disk-transform:build: $ tsc
        @xen-orchestra/disk-transform:build: src/SynchronizedDisk.mts(2,30): error TS2307: Cannot find module '@vates/generator-toolbox' or its corresponding type declarations.
        @xen-orchestra/disk-transform:build: error Command failed with exit code 2.

    OS: Rocky 9
    Yarn: 1.22.22
    Node: v22.14.0

    Let me know if you need any more information.
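    In case it helps anyone else hitting the same TS2307: in the monorepo this usually means the workspace packages haven't been installed and built in dependency order yet. A rough sketch of the from-source sequence (the branch name is a placeholder):

        # From the root of the xen-orchestra checkout
        git checkout <branch-under-test>
        yarn          # installs/links workspace packages such as @vates/generator-toolbox
        yarn build    # rebuilds every package in dependency order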
  • Potential bug with Windows VM backup: "Body Timeout Error"

    12
    3
    1 Votes
    12 Posts
    152 Views
    olivierlambert
    So maybe there's a timeout that's too long for XO. Adding @florent and/or @julien-f in the loop.
  • Restoring a disk only not the whole VM

    3
    0 Votes
    3 Posts
    58 Views
    florent
    @McHenry You can select the SR for each disk and ignore the disks you don't need. [image: 1742985028616-cf5f7060-298d-4453-abc7-95767ff72aaa-image.png] Then you can attach the disks to any other VM.
  • Continuous Backup Interrupted

    1
    0 Votes
    1 Posts
    19 Views
    No one has replied
  • Issue with backup & snapshot

    Unsolved
    3
    1
    0 Votes
    3 Posts
    43 Views
    K
    @Danp Thanks for your response. I already tried those steps, but the problem is that I am getting an authorization error.

        [17:17 xcp-ng-2 ~]# iscsiadm -m session
        tcp: [10] 10.204.228.100:3260,1026 iqn.1992-08.com.netapp:sn.302cea4af64811ec84d9d039ea3ae4de:vs.5 (non-flash)
        tcp: [7] 10.204.228.101:3260,1027 iqn.1992-08.com.netapp:sn.302cea4af64811ec84d9d039ea3ae4de:vs.5 (non-flash)
        tcp: [8] 10.204.228.103:3260,1029 iqn.1992-08.com.netapp:sn.302cea4af64811ec84d9d039ea3ae4de:vs.5 (non-flash)
        tcp: [9] 10.204.228.102:3260,1028 iqn.1992-08.com.netapp:sn.302cea4af64811ec84d9d039ea3ae4de:vs.5 (non-flash)

    Not sure why it's giving an authentication error:

        [17:18 xcp-ng-2 ~]# iscsiadm -m discovery -t sendtargets -p 10.204.228.100
        iscsiadm: Login failed to authenticate with target
        iscsiadm: discovery login to 10.204.228.100 rejected: initiator failed authorization
        iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure
        [17:18 xcp-ng-2 ~]#

    If the iSCSI daemon has logged out, how am I still able to access the VMs? And how am I able to migrate a VM from one XCP-ng node to another? For migration and for accessing a VM, XCP-ng should be able to communicate with the SR, right? However, I tried restarting the iSCSI service on one of the XCP-ng nodes that had two VMs: one on the node's local storage and another on the SR. When I restarted the iscsid service and logged out of and back into the storage, the login failed with an authorization error (not sure why it gave the error, which is strange), and then both VMs, along with the XCP-ng node, rebooted automatically. Once the XCP-ng node came back up, the iSCSI daemon automatically logged in to the storage. I am not sure what to do in this case. Your support is much appreciated.
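    Since the existing sessions stay logged in but discovery is rejected, the usual suspect is discovery-level CHAP credentials that are missing or out of date on the host. A sketch of where those live, assuming CHAP is what the NetApp filer expects (values are placeholders):

        # /etc/iscsi/iscsid.conf -- discovery CHAP is separate from session CHAP
        discovery.sendtargets.auth.authmethod = CHAP
        discovery.sendtargets.auth.username = <chap-username>
        discovery.sendtargets.auth.password = <chap-secret>

        # then retry discovery against the portal
        iscsiadm -m discovery -t sendtargets -p 10.204.228.100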
  • Error: VM_HAS_PCI_ATTACHED when IN MEMORY Snapshot mode upon Delta backup

    2
    0 Votes
    2 Posts
    27 Views
    olivierlambert
    Hi, it's not supported because you have a PCI device attached, so memory snapshots aren't supported: resuming such a snapshot would crash the VM, because the PCI device may have a different context in the meantime. Offline means the VM will be shut down, snapshotted, then started. Edit: "In memory" works like a live migration, so it's not compatible with PCI passthrough.
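    A quick way to see which VMs will trip this before scheduling a delta backup with memory snapshots; a small sketch, assuming passthrough was configured the usual way via other-config:pci on XCP-ng:

        # Prints the PCI devices passed through to the VM; errors out if none are configured.
        xe vm-param-get uuid=<vm-uuid> param-name=other-config param-key=pci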
  • Suspicious presentation of time in backup

    5
    3
    0 Votes
    5 Posts
    114 Views
    P
    @ph7 And in xsconsole, under Keyboard and Timezone: the current keyboard type is "Default"?? The timezone seems not to be set.
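    If the dom0 timezone really is unset, it can be fixed from the host shell; a rough sketch, assuming a systemd-based dom0 (the timezone name is an example):

        timedatectl                                   # shows the current time zone, if any
        timedatectl set-timezone Europe/Stockholm
        # or, without systemd tooling:
        ln -sf /usr/share/zoneinfo/Europe/Stockholm /etc/localtime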
  • Issue with SR and coalesce

    Unsolved
    62
    0 Votes
    62 Posts
    597 Views
    nikade
    @tjkreidl said in Issue with SR and coalesce:

        @nikade Am still wondering if one of the hosts isn't connected to that SR properly. Re-creating the SR from scratch would do the trick, but it's a lot of work shuffling all the VMs to different SR storage. Might be worth it, of course, if it fixes the issue.

    Yeah, maybe, but I think there would be some kind of indication in XO if the SR wasn't properly mounted on one of the hosts. Let's see what happens; it's indeed weird that nothing is shown.
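    Whether the SR is actually plugged on every host shows up in its PBDs, so that part at least is easy to rule out from the CLI; a minimal check (the SR UUID is a placeholder):

        # One PBD per host; currently-attached should be "true" on all of them.
        xe pbd-list sr-uuid=<sr-uuid> params=host-uuid,currently-attached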
  • Health Check Schedule

    5
    0 Votes
    5 Posts
    66 Views
    M
    @ph7 Thanks for the help...I think that gets me close. I will tinker with it some more. Thanks again for the answers.
  • How Best to Achieve Higher Transfer Speeds for Backup Jobs

    13
    0 Votes
    13 Posts
    272 Views
    K
    @ph7 Yep, that's what I've done. Actually, I've powered off the prior XOA instance. Not ideal, but a workable solution.
  • Invalid Health Check SR causes Backup to fail with no error

    5
    5
    0 Votes
    5 Posts
    108 Views
    T
    @olivierlambert - I'm not sure who would be the best person at Vates to ping, or whether there is another channel I should be using to request enhancements; I'm happy to be directed to the correct place if that's not here. Despite the fact that I brought this upon myself... I do think it would be nice if Xen Orchestra could improve the error handling/messaging for situations where a task fails due to an invalid object UUID. The UI already seems to make a simple XAPI call to look up the SR's name-label, and when that lookup fails, the schedule configured with the invalid/unknown UUID displays that UUID in red text with a red triangle.
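    Until the UI surfaces this more clearly, the same lookup the schedule view performs can be done by hand to validate a health-check SR UUID; a one-line sketch (the UUID is a placeholder):

        # Fails with an "unknown uuid"-style error if the SR no longer exists.
        xe sr-param-get uuid=<sr-uuid> param-name=name-label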
  • Deleting incremental backups

    7
    0 Votes
    7 Posts
    109 Views
    F
    @DustinB OK. I think I'm between a rock and a hard place! Thanks for the advice.
  • Maybe a bug when restarting mirror delta backups that have failed

    2
    2
    0 Votes
    2 Posts
    51 Views
    R
    @jshiells I believe this is by design for a mirror backup. It checks every VM to mirror (if no filter is selected, that will be every one on the SR); if there is no new recovery point, it will not copy anything, as it already has the latest version of that backup on the destination. When a filter is selected, it still checks every VM on the SR, but it does not copy their data.
  • Backup issues with S3 remote

    9
    4
    0 Votes
    9 Posts
    258 Views
    P
    @florent It's been quiet since I supplied the JSON logs for the failed backup jobs. I understand this is a low-priority case, and for myself I can live with these failed backups, since I use other methods (especially for the Linux machines) to back up modified files of importance. Has there been any progress in tracking down what's happening here?

    Two different kinds of failures since I reported:

    Got a "failure" (but success) in other jobs. To explain the "success" part of it: I can select the job when doing a restore, and the "transfer" is green but the "remote" is marked red. [image: 1741003478991-xcp-ng-backup-fail-or-ok-admin-ubuntu.png] ... and the remote is OK ... [image: 1741003521687-xcp-ng-backup-fail-or-ok-admin-ubuntu-remote-ok.png] Restoring from this backup seems to be possible: [image: 1741004632183-xcp-ng-backup-fail-or-ok-admin-ubuntu-restore-available.png] The same happens with the previously mentioned "Win 10 22H2 new": the backup gets a red blob on "remote" but green on "transfer", and a restore is available from that point in time. This is against another remote, which gets "full success" on other VMs.

    Another example, against the same remote as in the images above, with five jobs at approximately the same time (the machines belong to a backup sequence): [image: 1741004104739-backup-2025-03-03-2x-success-followed-by-3x-failure-followed-by-1x-success.png] The failures are identical: [image: 1741004215619-backup-2025-03-03-failure-example.png]

    If there is anything I can do to provide you with more details, explain what I should do to obtain the data you need; and if this problem is rare enough to never have happened (or to never happen) to a real customer, then just close the case.
  • 8.3RC2 Backup Pool restore not compatible with 8.3 release

    1
    0 Votes
    1 Posts
    34 Views
    No one has replied
  • Question on NBD backups

    3
    0 Votes
    3 Posts
    80 Views
    A
    @TS79 That makes sense - thanks!
  • Backup failed with "Body Timeout Error"

    8
    0 Votes
    8 Posts
    203 Views
    A
    Update: it's weird:
    • There are three VMs on this host. The backup works with two but not with the third; it fails with the "Body Timeout Error" error.
    • Two of the VMs are almost identical (same drive sizes). The only difference is that one was set up as "Other install media" (it came over from ESXi) and the one that fails was set up using the "Windows Server 2022" template.
    • I normally back up to multiple NFS servers, so I switched to trying one at a time; both failed.
    • After watching it run the backup too many times to count, I found that, at about the 95% stage, the snapshot stops writing to the NFS share.
    • About that time, /var/log/xensource.log records this information:
        o Feb 26 09:43:33 HOST1 xapi: [error||19977 HTTPS 192.168.1.5->:::80|[XO] VM export R:c14f4c4c1c4c|xapi_compression] nice failed to compress: exit code 70
        o Feb 26 09:43:33 HOST1 xapi: [ warn||19977 HTTPS 192.168.1.5->:::80|[XO] VM export R:c14f4c4c1c4c|pervasiveext] finally: Error while running cleanup after failure of main function: Failure("nice failed to compress: exit code 70")
        o Feb 26 09:43:33 HOST1 xapi: [debug||922 |xapi events D:d21ea5c4dd9a|xenops] Event on VM fbcbd709-a9d9-4cc7-80de-90185a74eba4; resident_here = true
        o Feb 26 09:43:33 HOST1 xapi: [debug||922 |xapi events D:d21ea5c4dd9a|dummytaskhelper] task timeboxed_rpc D:a08ebc674b6d created by task D:d21ea5c4dd9a
        o Feb 26 09:43:33 HOST1 xapi: [debug||922 |timeboxed_rpc D:a08ebc674b6d|xmlrpc_client] stunnel pid: 339060 (cached) connected to 192.168.1.6:443
        o Feb 26 09:43:33 HOST1 xapi: [debug||19977 HTTPS 192.168.1.5->:::80|[XO] VM export R:c14f4c4c1c4c|xmlrpc_client] stunnel pid: 296483 (cached) connected to 192.168.1.6:443
        o Feb 26 09:43:33 HOST1 xapi: [error||19977 :::80|VM.export D:c62fe7a2b4f2|backtrace] [XO] VM export R:c14f4c4c1c4c failed with exception Server_error(CLIENT_ERROR, [ INTERNAL_ERROR: [ Unix.Unix_error(Unix.EPIPE, "write", "") ] ])
        o Feb 26 09:43:33 HOST1 xapi: [error||19977 :::80|VM.export D:c62fe7a2b4f2|backtrace] Raised Server_error(CLIENT_ERROR, [ INTERNAL_ERROR: [ Unix.Unix_error(Unix.EPIPE, "write", "") ] ])
    • I have no idea if it means anything, but the "failed to compress" message made me try something. I changed "compression" from "Zstd" to "disabled", and that time it worked.

    Here are my results:
    • Regular backup to TrueNAS, "compression" set to "Zstd": backup fails.
    • Regular backup to TrueNAS, "compression" set to "disabled": backup is successful.
    • Regular backup to a vanilla Ubuntu test VM, "compression" set to "Zstd": backup is successful.
    • Delta backup to TrueNAS: backup is successful.

    Sooooo... the $64,000 question is: why doesn't it work on that one VM when compression is on and the target is a TrueNAS box?
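    The "nice failed to compress: exit code 70" line means dom0 is spawning the compressor for the export stream, so the Zstd-vs-TrueNAS question can be narrowed down outside XO; a hedged way to reproduce it on the host (VM and target path are placeholders, and compress=zstd assumes a recent enough xe):

        # Sanity-check the compressor itself on the host doing the export
        zstd --version

        # Reproduce the same compressed export path the backup job uses,
        # writing to the mounted TrueNAS NFS share
        xe vm-export vm=<vm-uuid-or-name> filename=/mnt/truenas-test/export-test.xva compress=zstd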