just to #brag

Deploy of the worker is okay (one big Rocky Linux VM with default settings: 6 vCPUs, 6 GB RAM, 100 GB VDI).
First backup is fine, with decent speed! (to an XCP-ng hosted MinIO S3)
Will keep testing.

Hey all,
We are proud of our new setup, a full XCP-ng hosting solution we racked in a datacenter today.
This is the production node; tomorrow I'll post the replica node!
XCP-ng 8.3, HPE hardware obviously, and we are preparing full automation of client provisioning via API (from switch VLANs to firewall public IPs, and automatic VM deployment).
This needs a sticker "Vates Inside"
#vent
@sluflyer06 and I wish HPE would be added too 
Is there anywhere we can check the backlog / work in progress / to-do list for XO6?
I stuck to version: 1 in my working configuration.

Had to rename my "Ethernet 2" NIC to "Ethernet2", without the space.
You have to use the exact NIC name from the template for this to work.
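For reference, a minimal sketch of the kind of network config I mean (cloud-init network config, version 1, assuming plain DHCP here; the adapter name must match the template's NIC name exactly, hence dropping the space):

```
# cloud-init network config, version 1 (sketch, assuming DHCP)
version: 1
config:
  - type: physical
    # must match the adapter name in the template exactly (no space)
    name: Ethernet2
    subnets:
      - type: dhcp
```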
@MajorP93 throw in multiple garbage collections during the snapshot/snapshot-removal phases of backups on an XOSTOR SR, and these SR scans really get in the way.
Hi,
Smart backups are wonderful for managing smart selection of the VMs to be backed up.
When we browse the RESTORE section, it would be cool to get the TAGs back visually, with the possibility to filter on them.
I'd like an "all VMs restorable with this particular TAG" type of filter, hope I'm clear.

Perhaps something to add to XO6?
Could we have a way to know which backups are part of LTR?
In Veeam B&R, when doing LTR/GFS, there is a letter like W (weekly), M (monthly), Y (yearly) to signal this in the UI.

That's pure cosmetics indeed, but practical.
@Bastien-Nollet okay, I'll do that tonight and will report back.
@Bastien-Nollet said in backup mail report says INTERRUPTED but it's not ?:
file packages/xo-server-backup-reports/dist/index.js by a
modification done, will give feedback
@robertblissitt yup, in hindsight this seems to be a good best practice...
My hosts were up for 4 months, and because of a DNS resolution problem they had 77 patches to catch up on (80 for the one with advanced telemetry enabled).
A rolling reboot would probably have surfaced the initial migration/evacuation problem (and the subsequent zombie VMs) up front,
with no patches applied yet and no pool left in a semi-upgraded state.
Note to my future self: try a rolling reboot first.
@olivierlambert so strange though. I have it enabled on 4 pools, but that didn't propagate to the hosts.
Still, VMs boot up when not wanted, as explained in the first post of the thread.
I'll try different combinations to see what's really going on.
Currently, when using the deploy URL, it still states Xen Orchestra 5, but we end up with XO6 as the default.

Mmm, there is also a POWER ON mode switch at the host level in the Advanced tab (disabled in my case),
so what is the expected behavior of these 3 switches?
Hi there,
Latest XOA, latest XCP-ng patches; but this pre-existed, it's not a new behavior.
No HA involved.
We have a pool of 3 hosts. Some VMs have AUTO POWER ON enabled in the Advanced tab.
We noticed that at every reboot of a host of this pool, these VMs start up automatically
--> this is not a wanted behavior for VMs that were purposely halted manually.
I thought of the AUTO POWER ON switch as "the host must restart the VM after a power loss if the VM was previously running",
so we get surprised each time we reboot a host, be it for simple maintenance or an RPU when patches are applied.
When we do an RPU, we manually shut down some minor VMs to speed up the evacuation process involved, but each time a host reboots, the VMs start up.
Annoying.
Perhaps we have a bad usage of the auto power on setting in the pool's ADVANCED tab?

Auto power on is enabled on the pool AND at the VM level.
Could someone explain what the difference is?
Perhaps to resolve the behavior we just have to disable auto power on at the pool level?!
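In case it helps the discussion, here is how I understand the flags can be checked from dom0 (a hedged sketch; UUIDs are placeholders and I'm assuming the UI switches map to the usual other-config:auto_poweron keys):

```
# Check the pool-level flag (placeholder UUID)
xe pool-param-get uuid=<pool-uuid> param-name=other-config param-key=auto_poweron

# Check the same flag on a given VM
xe vm-param-get uuid=<vm-uuid> param-name=other-config param-key=auto_poweron

# Disable it at the VM level if that turns out to be the culprit
xe vm-param-set uuid=<vm-uuid> other-config:auto_poweron=false
```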
@olivierlambert shoutout to @danp who took over the incident ticket.
He pointed me the right way to resolve the problem; my production pool is back up & running with its VMs.
There was indeed a difference between what was seen by "xl list" / "xenops-cli list" and what was seen by XOA in the web UI.
A couple of "xl destroy <domid>" commands to destroy the zombie VMs and a few toolstack restarts later, all is now up.
I don't know how the hell a simple RPU got me in this situation though...
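For future readers, roughly what that cleanup looked like from dom0 (a sketch from memory; domain IDs are placeholders, double-check that a domain really is a zombie before destroying it):

```
# Compare what Xen itself sees with what XOA shows in the web UI
xl list
xenops-cli list

# Destroy a zombie domain by its domain ID (placeholder)
xl destroy <domid>

# Restart the toolstack so XAPI and XOA resync their view
xe-toolstack-restart
```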
Currently having heavy issues with a production pool of 3 hosts.
RPU launched; all VMs except one evacuated from the master. We managed to shut this VM down and restart it on another host,
and the master patch & reboot proceeded.
Then the RPU tried to evacuate a slave host, and all its VMs are now locked; we can't shut them down or hard shutdown them.
We have a critical VM on this host that is still running. We tried to snapshot it in case a hard reboot of the host is needed, but got OPERATION NOT SUPPORTED DURING AN UPGRADE.
We manually installed the patches on the host without rebooting, and then the snapshot proceeded.
I hope this VM is secured by this snapshot...
A ticket is open with pro support but quite stalled for now... no news since yesterday. Ticket #7751752
@gduperrey we had the XOA update alert and upgraded to XOA 6.1.0,
but no sign of XCP-ng host updates?

When patches are available, they usually pop up on their own; is there something to do in the CLI now?
EDIT: my bad, we had a DNS resolution problem... I now see a bunch of updates available...
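For anyone hitting the same thing, the quick checks I mean from dom0 (a sketch; I'm assuming the standard XCP-ng update repo hostname, adjust to your mirrors):

```
# Check that dom0 can resolve the update repository
ping -c 1 updates.xcp-ng.org

# List pending patches without applying them
yum check-update

# Apply them from the CLI if needed (XO normally handles this for you)
yum update
```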
@olivierlambert planning to give RustFS a try, I'll report back (currently full MinIO).
@DRWhite85 as said earlier, you can.
Select whatever template, and then modify ALL the params you want. Templates are just... templates... presets...
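To illustrate the idea outside the UI, a hedged sketch with the xe CLI (template name, VM name and values are placeholders):

```
# Create a VM from whatever template (placeholder names)
xe vm-install template="Rocky Linux 9" new-name-label="my-vm"

# Then override any preset you want, e.g. vCPUs and memory
xe vm-param-set uuid=<vm-uuid> VCPUs-max=4
xe vm-param-set uuid=<vm-uuid> VCPUs-at-startup=4
xe vm-memory-limits-set uuid=<vm-uuid> static-min=4GiB dynamic-min=4GiB dynamic-max=4GiB static-max=4GiB
```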