This may not be best practice.
In a two-host pool, if your replicated VMs live on the master and it's gone, you won't be able to start the replicated VMs:
you will first need to promote the slave to master.
Indeed, CR is better targeted at another pool.
@Dezerd you just have to start-copy the replica VM:
this lets the original job keep replicating to that VM.
There is no failover/failback mechanism AFAIK;
if you work on the started replica VM, you will have to set up a replication job going back to the original hosts.
@User-cxs it is possible as long as your OS accepts it.
It must be planned in the advanced settings of your VM.
It can then be hotplugged:

if your CPU limits are 2/8, you can hotplug 6 more vCPUs.
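For reference, here is a minimal sketch of what that looks like through the XenAPI Python bindings (the host URL, credentials and VM name are placeholders; VCPUs-max has to be set while the VM is halted, then the live count can be raised up to that limit):

```python
import XenAPI

# Hypothetical pool master and credentials, adjust to your setup
session = XenAPI.Session("https://xcp-host.example.com")
session.xenapi.login_with_password("root", "password")
try:
    vm = session.xenapi.VM.get_by_name_label("my-vm")[0]

    # Planned in advance, while the VM is halted: allow up to 8 vCPUs, start with 2
    # session.xenapi.VM.set_VCPUs_max(vm, "8")
    # session.xenapi.VM.set_VCPUs_at_startup(vm, "2")

    # Later, with the VM running: raise the live vCPU count (total, not increment)
    session.xenapi.VM.set_VCPUs_number_live(vm, "8")
finally:
    session.xenapi.session.logout()
```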
@florent oh nice,
we used to have as many "VMs with a timestamp in the name" as the number of replicas, plus multiple snapshots on the source VM.
Now we have "one replica VM with multiple snapshots"? Veeam-replica style...
Do the multiple snapshots persist on the source VM too?
If so, that's nice conceptually.
But when your replica is on lvmoiscsi,
not so nice.
PS: I didn't upgrade to the latest XOA/XCP patches yet.
@florent Florent, I can see the benefits of a unified VM name, but could you at least push the timestamp into a note on the VM?
It is important to know which timestamp a replica VM corresponds to, in order to choose the failover option wisely.
@nikade Hey Nikade,
Did you try to create a new job that starts a new chain? Just as a test.
When creating a VM in v6 and renaming the disk, you can only enter one letter at a time: you have to keep clicking, then type the next letter. No other field has this issue.
Looks like a JS focus problem, should be easy for the devs to fix.
@acebmxer when changing SR, a full pass is expected (it is documented), even with CBT enabled.
The CBT bitmap needs to be reconstructed on the destination SR, so you get one full pass, and the next passes will be deltas as expected.
@MajorP93 I did see that too,
but in a thick-provisioned lvmoiscsi environment it's hard to keep full snapshots around... CBT is quite a savior, but you get these quirks (backup falling back to a full) from time to time... it seems random.
The only time it's 100% predictable is when you have a DR job: the next delta of that VM WILL be a full...
@Greg_E hi there,
beyond the minimum number of hosts to be supported (I think it's 3) and the minimum number of disks to get good redundancy (I think it's 3 per host, and they must be identical), you have a replication parameter when building an XOSTOR.

It defaults to 2 (you have two copies of each workload), and this parameter impacts your total usable space (see the quick sketch below).
Also beware of the network requirements (for satellite connections and DRBD replication):
a minimum of 2 NICs per server, and DRBD replication should run on at least 10Gb NICs.
Tip: the linstor-controller is not always on the pool master...
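To give a rough idea of how the replication parameter eats into capacity, here is a minimal back-of-the-envelope sketch (the host and disk counts are made-up examples, and it ignores DRBD/LINSTOR metadata and provisioning overhead):

```python
# Rough XOSTOR usable-capacity estimate (ignores DRBD/LINSTOR metadata overhead)
def xostor_usable_tb(hosts: int, disks_per_host: int, disk_tb: float, replication: int) -> float:
    raw = hosts * disks_per_host * disk_tb   # total raw capacity across the pool
    return raw / replication                 # each block is stored 'replication' times

# Example: 3 hosts, 3 identical 2 TB disks each, replication=2 (the default)
print(xostor_usable_tb(3, 3, 2.0, 2))  # -> 9.0 TB usable out of 18 TB raw
```

With the default replication of 2, you roughly end up with half of the raw capacity as usable space.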
@kent you would have to roll back to an early December 2025 XO/XOA (before December 10th),
which is quite a long way back.
I'm just waiting for the devs to eventually fix it, as we have other ways to manage VDIs (API calls).
@benapetr you seem gifted at app development.
Do you know RVTools? https://www.dell.com/support/kbdoc/en-us/000325532/rvtools-4-7-1-installer
This tool is pretty handy when auditing VMware infrastructures: it can connect to vCenter or directly to ESXi and dump the full infrastructure configuration to CSV/XLSX (all aspects of the config, be it VMs, hosts, networks, datastores, files, ...).
I could see a real production use for the same kind of tool, but for XCP-ng pools/hosts.
It would be a great addition to XenAdmin!
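Just to illustrate the idea, a minimal sketch of such an export with the XenAPI Python bindings (host, credentials and the selected columns are assumptions; a real RVTools-like tool would also cover hosts, networks, SRs and so on):

```python
import csv
import XenAPI

# Hypothetical pool master and credentials, adjust to your setup
session = XenAPI.Session("https://xcp-host.example.com")
session.xenapi.login_with_password("root", "password")
try:
    vms = session.xenapi.VM.get_all_records()
    with open("vm_inventory.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "power_state", "vcpus", "memory_MiB"])
        for ref, vm in vms.items():
            # Skip templates and control domains, keep only real VMs
            if vm["is_a_template"] or vm["is_control_domain"]:
                continue
            writer.writerow([
                vm["name_label"],
                vm["power_state"],
                vm["VCPUs_at_startup"],
                int(vm["memory_static_max"]) // (1024 * 1024),
            ])
finally:
    session.xenapi.session.logout()
```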
@MajorP93 we have almost the same scale: 30 backup jobs (mixed delta/CR/replication/DR) for 95+ VMs,

but these are not run by XOA itself, they are dispatched across 4 XO proxies,
and XOA still gets OOM-killed.
@florent okay
but at 3 days without a reboot I'm like this:

still on XOA 6.1.2
uptime is 2 days for me

but I restarted xo-server yesterday.
Zoomed in on 2 hours:

@florent since your early patch of my XOA (the xo-server restart at the disk activity peak),

I'm noticing vif0 transmitting a bit on the network and different RAM activity over a timespan of 2 hours.
Something has changed; is it for the better?
Let the night decide.
@florent REST and xo-cli.
Tunnel ID sent by message.
@florent okay, disabled.
We make quite extensive use of API calls too, all over the place,
and websockets (to get the VM console in an in-house web app).
I don't know if this could have an impact.