the pool host maximum limit is 3200% right
right
@delacosta456 do not take into account the pool CPU USAGE...
for example I have a pool with one host, here is the POOL CPU USAGE :

yeah right ... 600%
the host :

BUT, if you switch on the STACKED VALUES

you get the POOL CPU USAGE value...
it adds up all core usage, so... not a very good view...
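the math, for the record: the graph just sums every pCPU. a sketch to check a host's pCPU count (UUID is a placeholder):
```
# pool CPU max % = total pCPUs across all hosts x 100
# e.g. one 6-pCPU host tops out at 600%, a 32-pCPU pool at 3200%
xe host-param-get uuid=<host-uuid> param-name=cpu_info param-key=cpu_count
```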
@ashinobi for your specific ISO SR problem, you could try this
wilsonqanda
9 Jan 2026, 22:41
@Pilow now that you mention that everything is seen as a snapshot, I remember all my ISOs are seen as snapshots too... so most ISOs cannot be mounted until I drag them into a folder and then put them back in the same folder, so XO reprocesses all the ISOs correctly. This might be related to the issue of all the VM snapshots.
@ashinobi you are now in the twilight zone.
this is a current bug where all VDIs are seen as snapshots, so they do not show up in the DISK tab of the VMs.
try to go to XO6, you will see them correctly.
this bug also gets in the way of ISO SRs...
on a VM with "no disk visible", you can snapshot, and REVERT to this snapshot: its VDIs will magically reappear. you would have to do so on each VM, but the bug will eventually make them disappear again.
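from the CLI, the workaround would look something like this (a sketch, UUIDs are placeholders):
```
# take a throwaway snapshot of the affected VM...
SNAP=$(xe vm-snapshot uuid=<vm-uuid> new-name-label=tmp-vdi-fix)
# ...revert to it right away: the VM's VDIs reappear in XO 5
xe snapshot-revert snapshot-uuid=$SNAP
# clean up the throwaway snapshot afterwards
xe snapshot-uninstall snapshot-uuid=$SNAP force=true
```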
go on the HOME/STORAGE page, on your SR, DISK tab
all VDIs should show a snapshot icon if you are affected
please report to this topic https://xcp-ng.org/forum/topic/11715/vdi-not-showing-in-xo-5-from-source./
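to compare what XAPI itself says vs what XO 5 shows, something like this (SR UUID is a placeholder):
```
# list the SR's VDIs with their real snapshot flag;
# if xe reports is-a-snapshot: false but XO 5 still shows the
# snapshot icon, you are hitting the XO-side bug
xe vdi-list sr-uuid=<sr-uuid> params=name-label,is-a-snapshot
```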
@acebmxer NFS remotes on the DS1819+ ?
we have an iSCSI SR (25Gb Mellanox, 6 PIFs per host, to a 25Gb MSA2062 dual-controller SAN)
our remotes are iSCSI OS-mounted volumes on MSA SANs, presented as S3 (MinIO VMs)
using XO PROXIES to offload backups from XOA
we max out at 150-200 Mb/s during backups
but we are on VHD VDIs; wondering if the added backup performance you report could be due to the QCOW2 format on the source SR?
will have to try VDIs on such an SR to see the diff
@acebmxer so, NBD it was...
holy moly, you have some good network performance !
what kind of SR at source ? and remote at destination ?
what about the PIFs ?
@acebmxer at the bottom of the POOL advanced tab, is BACKUP NETWORK set to the NBD-enabled network accessible by both the hosts and XOA?
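for reference, checking/enabling NBD on a network from the CLI is something like this (network UUID is a placeholder):
```
# see which networks already carry the nbd purpose
xe network-list params=uuid,name-label,purpose
# add the nbd purpose to the backup network if it's missing
xe network-param-add uuid=<network-uuid> param-name=purpose param-key=nbd
```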
@acebmxer I have a new case of managing to force the "fell back to full" error...
I'll create a new topic for this
in the meantime, if you can, do a toolstack restart on your pool when no tasks are ongoing
your backups with NBD could be better (spoiler alert: iptables rules...)
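for the record, that would be something like this on each host (10809 being the standard NBD port):
```
# restart the toolstack (no VM downtime, but it interrupts running tasks)
xe-toolstack-restart
# sanity check that no stray rule blocks the NBD port
iptables -nL | grep 10809
```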
Thanks.
The old snapshots are being removed as the total never increases beyond 16, so when a new snapshot is added, the old one is removed.
immediately removed, yes, but then garbage collection takes place.
and perhaps with 19x16 GC operations to process, it can't be done in one hour, and then the next CR is launched, etc etc...
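you can follow the coalesce work on the host holding the SR, something like:
```
# if that's 19 VMs x 16 snapshots, that's ~304 VDIs for the GC to chew on
# follow the coalesce activity in the SM log
grep -i coalesce /var/log/SMlog | tail
```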
@florent was finally able to read the pull request
/clap ! the fix seems totally legit and consistent with XOA RAM ramping up !
when will this be officially published ? 
so we can disable the daily reboot of XOA & XO PROXIES
@McHenry could you screenshot the health page ?
where we could see the chain length
@McHenry I don't think more than 3 snapshots triggers an error, just tested on one VM
it is not recommended for "in production" VMs, but for a CR destination, it's OK (as you would need to start a copy anyway)
your problem, failing CR jobs, is probably due to garbage collection not finishing in the one-hour timeframe when the chain is long.
@simonp patched tonight, a job that took 3 hours yesterday took only 1 tonight.
so, big improvement !
need to re-up concurrency to 2 or 4 on some jobs to see if I can squeeze more time out of the backup window
perhaps "in the context of a proceeding RPU, do not start halted VMs" ?
or "boot only halted VMs that have HA enabled" ?
but I can imagine corner cases where this is not wanted.
some chicken & egg problem.
@stormi indeed.
but the restarting host is by design empty of all VMs because of the evacuate process
the stopped VMs are on other hosts. so it's strange to see them booting when the restarted host comes online.
PS: as I wrote it, I understood my "error": halted VMs are on no host at all, just attached to the pool.
I did verify and yes
ha-reboot-vm-on-internal-shutdown ( RW): true
it is enabled on our pool, but HA is not:
ha-enabled ( RO): false
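(both fields are visible with something like:)
```
# pool UUID is yours; shows the HA-related flags side by side
xe pool-param-list uuid=<pool-uuid> | grep -E "ha-enabled|ha-reboot"
```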
@stormi no, we do not use HA, it's disabled on the POOL and on the VMs
@olivierlambert having done an RPU yesterday on 3 hosts, I still have the "bug" where some VMs with "auto start" switched ON, but halted on purpose, reboot when a host is rebooted.
we do stop some VMs during the RPU to lessen migration times, but at every host reboot, they DO START UP
annoying. not critical, but annoying.
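if needed during the RPU, auto start can be toggled off temporarily per VM, something like (VM UUID is a placeholder):
```
# auto_poweron lives in the VM's other-config map;
# setting it to false keeps a purposely halted VM down across host reboots
xe vm-param-set uuid=<vm-uuid> other-config:auto_poweron=false
```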