just to #brag

Deploy of the worker is okay (one big Rocky Linux VM with default settings: 6 vCPUs, 6 GB RAM, 100 GB VDI)
First backup is fine, with decent speed! (to an XCP-ng-hosted MinIO S3)
will keep testing
Hey all,
We are proud of our new setup, a full XCP-ng hosting solution we racked in a datacenter today.
This is the production node; tomorrow I'll post the replica node!
XCP-ng 8.3, HPE hardware obviously, and we are preparing full automation of client onboarding via the API (from switch VLANs to firewall public IPs, and automatic VM deployment).
This needs a sticker "Vates Inside"
#vent
@sluflyer06 and I wish HPE would be added too 
Is there anywhere we can check the backlog / work in progress / to-do list for XO 6?
I did stick to "version: 1" in my working configuration.

Had to rename my "Ethernet 2" NIC name to "Ethernet2", without the space.
You have to use the exact template NIC name for this to work.
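For context, this is roughly the shape of a version 1 network config with the NIC name matching the template exactly; the addresses below are documentation placeholders, not my real config:

```yaml
# Hypothetical cloud-init network config (version 1).
# "Ethernet2" must match the template NIC name exactly (no space);
# the static addresses are placeholder values for illustration only.
version: 1
config:
  - type: physical
    name: Ethernet2
    subnets:
      - type: static
        address: 192.0.2.50/24
        gateway: 192.0.2.1
```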
@MajorP93 throw in multiple garbage collections during snapshot creation/removal of backups on a XOSTOR SR, and these SR scans really get in the way
Hi,
Smart backups are wonderful for managing smart selection of VMs to be backed up.
When we browse the RESTORE section, it would be cool to get the tags back visually, with the possibility to filter on them.
I'd like an "all VMs restorable with this particular tag" type of filter, hope I'm clear.

Perhaps something to add to XO6 ?
Could we have a way to know which backup is part of LTR?
In Veeam B&R, when doing LTR/GFS, there is a letter like W (weekly), M (monthly) or Y (yearly) to signal this in the UI.

That's pure cosmetics indeed, but practical.
@Bastien-Nollet okay, I'll do that tonight and will report back
@Bastien-Nollet said in backup mail report says INTERRUPTED but it's not ?:
file packages/xo-server-backup-reports/dist/index.js by a
Modification done, will give feedback
@DustyArmstrong said in Detached VM Snapshots after Warm Migration:
I mainly just want to know the best method to wipe XO and start over so it can rebuild the database.
ha, to wipe XO, check the troubleshooting section of the documentation here:
https://docs.xen-orchestra.com/troubleshooting#reset-configuration
Beware, it is a destructive command for your XO database!
@DustyArmstrong mmmm, what do you mean by pool master and pool slave?
You have one pool with 2 hosts? One master and one slave?
Could you screenshot the HOME/Hosts page?
And the SETTINGS/SERVERS page?
@DustyArmstrong ho okay, so the old XO is down.
You could have 2 XO instances connected to the same pools, as long as they do not have the same IP address.
Beware, I am not telling you to do that; it is not best practice at all. If you ever are in this situation, you must understand what could happen (detached snaps, licence problems, ...)
but I thought this was the case
In your case, did you do snaps/backups BEFORE reverting the XO config from the old XO to the new one?
As far as I understand, this could have given you detached snapshots (snapshots present on VMs but not initiated by the restored version of your currently online XO)
or do I overthink this too much?!
Keep us informed if you manage to clear the situation, curious about it
@DustyArmstrong I have a corner case where I see these detached snapshots, related to backups.
I have a (remote) pool whose master is attached through an XO Proxy to the main XOA.
All snapshots done by the remote XOA are seen as "detached" snapshots on the main XOA, where the pool is a distant one.
The main XOA does not own the backup jobs, so it is not aware of the snapshots and marks them as detached.
I think this is your current situation: you have 2 XO servers, one doing snapshots, the other seeing them as detached.
@Danp ha, thanks for the correction, I was so certain I had seen it that I didn't check on my master
@DustyArmstrong on your slave host, do a
# cat /etc/xensource/pool.conf
slave:xxx.xxx.xxx.xxx
You should see the IP address of the master. If not, correct it.
The master must be pingable and reachable from the management interface of the slaves for the slaves to have correct network propagation.
you can try the command on MASTER host, you should see
master
If you corrected the file on the slave host, reboot it; it should come back normally.
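As a side note, the file format is simple enough to check from a script. Here is a minimal sketch; the parse_pool_conf helper and the 192.0.2.10 address are my own placeholders, not an XCP-ng tool, assuming only the two formats shown above:

```shell
# Hypothetical helper (not an XCP-ng tool): classify the contents of
# /etc/xensource/pool.conf, which is either "master" or "slave:<master IP>".
parse_pool_conf() {
  case "$1" in
    master)  echo "role=master" ;;
    slave:*) echo "role=slave master_ip=${1#slave:}" ;;  # strip "slave:" prefix
    *)       echo "role=unknown" ;;
  esac
}

# Sample value; 192.0.2.10 is a documentation placeholder address.
parse_pool_conf "slave:192.0.2.10"   # prints: role=slave master_ip=192.0.2.10
```

On a real host you would feed it the file: parse_pool_conf "$(cat /etc/xensource/pool.conf)".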
@blueh2o said in Health check scripts. Where is the example?:
xen-orchestra/backups/docs/healthcheck example/wait30seconds.sh
that would be here https://github.com/vatesfr/xen-orchestra/tree/master/%40xen-orchestra/backups/docs
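I haven't checked the real contract, but a script like that wait30seconds.sh presumably just delays and exits. A minimal sketch of such a hook, assuming XO treats exit code 0 as a passed check and non-zero as a failure (verify against the docs linked above; health_check, WAIT_SECONDS and PROBE_FILE are my own placeholder names):

```shell
# Hypothetical health-check hook sketch; assumes the common convention that
# exit 0 = check passed, non-zero = check failed. All names are placeholders.
health_check() {
  sleep "${WAIT_SECONDS:-30}"               # give services time to come up
  [ -e "${PROBE_FILE:-/var/run/app.pid}" ]  # placeholder readiness probe
}

# Demo run with no wait, probing a file that exists on any Linux box:
WAIT_SECONDS=0 PROBE_FILE=/etc/passwd health_check && echo "healthy"
```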
@DustinB if he has CR, he could stop it, fail over, patch the primary host, and eventually fail back or reverse the CR job
Shouldn't you enter the ESXi IP rather than the vCenter IP to migrate?

give it a try
@DustinB I'd like to step in, from past experience.
Storage-wise, XOSTOR would need 3 hosts to be compliant, so you should follow the advice of shared storage like a NAS or an iSCSI SAN.
Plan accordingly if you need multipathing to the shared storage (no NFS) or thin provisioning (no iSCSI; lvmoiscsi is thick).