• Backing up from Replica triggers full backup

    1 Votes · 14 Posts · 403 Views
    florent
    @Pilow open a dedicated topic on this . I will ping the relevant team there is it iscsi + CBT ?
  • backup mail report says INTERRUPTED but it's not?

    0 Votes · 119 Posts · 9k Views
    florent
    Yes, the last changes are released in latest (6.3) tomorrow, if everything proceeds as intended. Mostly: https://github.com/vatesfr/xen-orchestra/pull/9622 (feat(xo-server): use index for getAllUnhealthyVdiChainsLength) and https://github.com/vatesfr/xen-orchestra/pull/9557 (fix(backups): better handling of generator cleanup). And this one was in 6.2: https://github.com/vatesfr/xen-orchestra/commit/e36e1012e20c9678efa15148179941cb284c39a6 (fix(xo-web): reducing polling for patches and license).
  • Backup Suddenly Failing

    0 Votes · 28 Posts · 781 Views
    tjkreidl
    @JSylvia007 Sorry, I'm really late to this thread, but note that backups can become problematic if the SR is around 90% full or more; the process needs some storage headroom. The fact that you could copy/clone VMs means your SR is working OK, but backups are a different situation. If need be, you can always migrate VMs to other storage, which is evidently what you ended up doing, and which frees up extra disk space. Also, backups are pretty intensive, so make sure you have enough CPU capacity and memory to handle the load. Finally, a defective SR will definitely cause issues if there are I/O errors, so watch your /var/log/SMlog for any such entries.
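    For reference, checking SR headroom and scanning SMlog from the host shell might look like this (a minimal sketch using stock xe commands and paths; adjust to your setup):
    ```sh
    # List each SR with its total size and current utilisation (both in bytes)
    xe sr-list params=name-label,physical-size,physical-utilisation

    # Scan the storage manager log for recent I/O errors during a backup window
    grep -iE 'error|fail' /var/log/SMlog | tail -n 50
    ```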
  • Backup strategy

    0 Votes · 4 Posts · 119 Views
    P
    And this is the result: [screenshot: backup job report]
  • Replication is leaving VDIs attached to Control Domain, again

    0 Votes · 9 Posts · 453 Views
    A
    @florent I rebuilt my XCP-ng hosting environment (everything faster and bigger, stuffed into one rack)... and this issue is now worse. The main changes in the new setup: 2x 40Gb networking, a faster NFS NVMe NAS, faster pool servers, more memory, and a much faster CR destination machine with ZFS. Running XCP-ng 8.3 (March 2026 updates) and XO (master, a2e33). Replication is leaving many VDIs attached to the control domain every day with the NBD connection count set to 2. Changing it to 1 seems to resolve the issue (no more VDIs stuck on the control domain).
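    For anyone hitting the same thing, the leftovers can be inspected with xe (a sketch; note that on a pool, is-control-domain=true returns one dom0 per host, so pick the right UUID):
    ```sh
    # UUID of the control domain (dom0)
    dom0=$(xe vm-list is-control-domain=true --minimal)

    # VBDs currently attached to dom0 and the VDIs behind them
    xe vbd-list vm-uuid="$dom0" currently-attached=true params=uuid,vdi-uuid,device

    # Only after confirming a VBD is a stale replication leftover, detach it:
    # xe vbd-unplug uuid=<vbd-uuid> && xe vbd-destroy uuid=<vbd-uuid>
    ```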
  • Potential bug with Windows VM backup: "Body Timeout Error"

    2 Votes · 59 Posts · 7k Views
    nikade
    @Pilow We tried that as well, same problem. Also tried with a VM on the same network, just another VLAN, and we're seeing the same thing. At first we figured it was because one of the XCP-ng hosts was on a remote site connected through an IPsec VPN, but that wasn't the case.
  • Best practice: Add dedicated host for CR or DR.

    0 Votes · 5 Posts · 141 Views
    P
    @Dezerd you just have to start a copy of the replica VM; that lets the original job keep replicating to the VM. There is no failover/failback mechanism AFAIK; if you work on the started replica VM, you will have to set up a replication going back to the original hosts.
  • Failed backup jobs since updating

    0 Votes · 7 Posts · 243 Views
    olivierlambert
    @florent [png screenshot attached]
  • "NOT_SUPPORTED_DURING_UPGRADE()" error after yesterday's update

    0 Votes · 24 Posts · 2k Views
    M
    @MajorP93 said:
    - disable HA on pool level
    - disable load balancer plugin
    - upgrade master
    - upgrade all other nodes
    - restart toolstack on master
    - restart toolstack on all other nodes
    - live migrate all VMs running on master to other node(s)
    - reboot master
    - reboot next node (live migrate all VMs running on that particular node away before doing so)
    - repeat until all nodes have been rebooted (one node at a time)
    - re-enable HA on pool level
    - re-enable load balancer plugin
    Never had any issues with that. No downtime for any of the VMs.
    Update time again, and same issue. I followed these steps: upgrade master, upgrade all other nodes, restart toolstack on master, restart toolstack on all other nodes, live migrate all VMs running on master to other node(s), reboot master. Now I can't migrate anything else. Live migration: NOT_SUPPORTED_DURING_UPGRADE. Warm migration: fails, and the VM shuts down immediately and needs to be forced back to life. CR backup to another server: NOT_SUPPORTED_DURING_UPGRADE.
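    For reference, the quoted sequence maps onto host-shell commands roughly like this (a minimal sketch, not the official procedure; the UUIDs are placeholders):
    ```sh
    xe pool-ha-disable                       # disable HA on the pool
    yum update                               # upgrade the master first, then each member
    xe-toolstack-restart                     # restart the toolstack on the master, then members
    xe host-evacuate uuid=<host-uuid>        # live-migrate VMs off a host before rebooting it
    reboot                                   # reboot the master, then one member at a time
    xe pool-ha-enable heartbeat-sr-uuids=<sr-uuid>   # re-enable HA once every host is done
    ```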
  • 0 Votes · 3 Posts · 127 Views
    K
    @olivierlambert done.
  • VHD Check Error

    0 Votes · 12 Posts · 328 Views
    A
    @Pilow While that could be true, only one VM was shuffled to another SR, yet all VMs are now being backed up with CBT enabled. Well... I just checked the backups again, and after the last scheduled backup at 1am all VMs fell back to full backup again.
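    To see whether CBT is still active on the disks after that 1am run, something like this should work (a sketch; cbt-enabled is a standard VDI field):
    ```sh
    # List the VDIs that currently have changed block tracking enabled
    xe vdi-list cbt-enabled=true params=uuid,name-label

    # Or check a single disk
    xe vdi-param-get uuid=<vdi-uuid> param-name=cbt-enabled
    ```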
  • Delta Backup not deleting old snapshots

    0 Votes · 7 Posts · 306 Views
    A
    @florent XO also can't migrate a VM to a new pool with a CD in the drive... It does generate an error, but the error is unclear.
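    Until that error is made clearer, ejecting the virtual CD before the migration works around it (a sketch; the VM name is a placeholder):
    ```sh
    # Remove the ISO from the VM's virtual drive, then retry the migration
    xe vm-cd-eject vm=<vm-name-label>
    ```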
  • Backup: ERR_OUT_OF_RANGE in RemoteVhdDisk.mergeBlock

    0 Votes · 14 Posts · 469 Views
    W
    @florent One last update. I reverted to Master branch (6699b) yesterday evening and the backup ran without issues overnight.
  • S3 Chunk Size

    0 Votes · 16 Posts · 875 Views
    olivierlambert
    502 is a response coming from your S3 provider, indicating the server is having an issue. Adding @florent to the loop.
  • 0 Votes · 4 Posts · 146 Views
    W
    @simonp I'm not sure which one, as I can see two config.toml files: the 1st is under /root/.config/xo-server/ (config.toml.txt), the 2nd under /opt/xo/xo-server/ (config.toml2.txt). Both config.toml files attached. Thank you. Best regards, Azren
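    As far as I know, the per-user file under ~/.config/xo-server/ overrides the one shipped in the install directory, so a quick diff shows what actually differs (a sketch using the two paths from the post above):
    ```sh
    # Compare the user-level override with the copy in the install directory
    diff /root/.config/xo-server/config.toml /opt/xo/xo-server/config.toml
    ```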
  • Backup and the replication - Functioning/Scale

    0 Votes · 20 Posts · 1k Views
    florent
    Thanks Andryi. We use round robin when using NBD, but to be fair, it does not change performance much in most cases. The concurrency setting (multiple connections to the same file) helps when there is high latency between XO and the host. So, @fcgo, if you have thousands of VMs, you should enable NBD: it will consume fewer resources on the dom0 and on XO, and the load will be spread across all available hosts.
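    For reference, NBD is enabled per network by adding the nbd purpose, as described in the XCP-ng docs (a sketch; the network UUID is a placeholder):
    ```sh
    # Find the network you want backup traffic to flow over
    xe network-list params=uuid,name-label,purpose

    # Allow NBD connections on it (purpose=insecure_nbd exists for unencrypted NBD)
    xe network-param-add param-name=purpose param-key=nbd uuid=<network-uuid>
    ```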
  • Mirror backup: Progress status and ETA

    1 Votes · 4 Posts · 300 Views
    A
    @Forza Too funny. I came across this post and clicked on the URL you referenced... and that earlier question was from me! Well, nothing has changed. I'm doing mirrored backups and I'm still blind as a bat.
  • Backups routinely get stuck on the health check

    Unsolved · 0 Votes · 10 Posts · 822 Views
    D
    Hello @Austin.Payne, I wanted to share my experience. I had similar issues through multiple XO versions. However, after learning that health checks rely on the management port, I did some more digging. TL;DR: it was a network configuration problem, not an XO or XCP-ng problem. If you have a multi-NIC setup and run XO as a VM on your XCP-ng host, I would recommend that whatever network you use for management is on the same NIC.
    Setup: XO is a VM on the XCP-ng host (only one host in the pool).
    Network setup:
    - eth0 = 1GB NIC = management interface for the XCP-ng host (192.168.0.0/24 network)
    - eth1 = 10GB DAC = NIC for the 192.168.0.0/24 network passed through to VMs (XO uses this NIC)
    - eth1.200 = 10GB DAC = VLAN 200 on the eth1 NIC for the storage network (10.10.200.0/28). Both the XCP-ng host and the VMs (including the XO VM) use this.
    IP setup:
    - XCP-ng host = 192.168.0.201 on eth0; 10.10.200.1 on eth1.200
    - XO VM = 192.168.0.202 on eth1; 10.10.200.2 on eth1.200
    - Remote NAS for backups = a different machine on the 10.10.200.0/28 network
    In this setup, backups would always finish, but health checks would hang indefinitely. However, after changing the XCP-ng host to use eth1 for the management interface instead of eth0 (as sketched below), health checks started passing flawlessly. I am not sure if the problem was the XCP-ng host connecting to the same network with two different NICs, or if eth1 was simply the better NIC and thus more reliable during the health check (which could also explain why backups always succeeded). It's also possible it was switch related: eth0 was connected to a Cisco switch and eth1/eth1.200 to a MikroTik switch. Again, I am not sure what actually solved it, but consolidating everything onto a single NIC fixed the issue for me (I also physically unplugged eth0 after the eth1 consolidation). Hopefully sharing my experience helps solve this issue for you.
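    A minimal sketch of that management-interface move, assuming you first identify the PIF corresponding to eth1 on the host:
    ```sh
    # Find the PIF that corresponds to eth1 on the host
    xe pif-list params=uuid,device,host-name-label,IP

    # Point the management interface at it (run from the host console, not over the old interface)
    xe host-management-reconfigure pif-uuid=<eth1-pif-uuid>
    ```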
  • XOA 6.2 parent VHD is missing followed by full Backup

    0 Votes · 7 Posts · 253 Views
    F
    @florent Awesome, I have patched a second location's XOA and am not experiencing the issue there anymore either.