so, I stopped rebooting my XOA every day

just patched to 6.1.1, which restarted xo-server

guess I'll have to leave the daily reboot task disabled for 48h to see if, with the new patch, RAM still ramps up.
will report back.
@dinhngtu tried on a Windows Server 2025 VM, UEFI BIOS
working well, no reboot needed
@florent I did healthchecks a month or two ago... I don't remember precisely,
but I didn't have this bug.
so it's either specific to the OP's installation, or there is a regression somewhere
@Greg_E I think the protection is doing its job: do not delete.
but in the case of a healthcheck, the process should bypass that for the restored VM
thoughts @bastien-nollet @florent ?
@olivierlambert okay, understood.
it is a small, minor cosmetic problem as of now.
I need to get you access to our internally developed app, to show you the potential of sub-XOAs ^^'
almost out of the alpha phase
hi,
We added pools/hosts to our main XOA through the XO Proxy httpUrl.
We are trying to get all our clients' on-premise pools & hosts into our "MSP-like" central XOA.
The hosts are shown as not licensed,

and so are the proxies

but on the remote sites there is a licence & pro support on the remote XOA.
I get it: the central XOA doesn't have the licences of the other on-prem XOAs, so we get this error.
Would it be possible to "fetch" the licence information when attaching these remote pools/hosts?
@john.c said in backup mail report says INTERRUPTED but it's not ?:
Are you using NodeJS 22 or 24 for your instance of XO?
here is the node version on our problematic XOA

this XOA does NOT manage backup jobs, they are totally offloaded to the XO proxies
XOA proxies:
[06:18 04] xoa@XOA-PROXY01:~$ node -v
v20.18.3
and XO CE :
root@fallback-XOA:~# node -v
v24.13.0
the memory problems arise on our XOA.
we have a spare XO CE, deployed with the ronivay script on an Ubuntu VM, that we use only when the main XOA is upgrading/rebooting.
same pools/hosts attached, essentially a read-only XO.
totally different behavior


@florent said in backup mail report says INTERRUPTED but it's not ?:
@Pilow We pushed a lot of memory fixes to master, would it be possible to test it ?
how so? I stop my daily reboot task and check if RAM still crawls up to 8 GB?
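in the meantime, a minimal sketch of how I could track it without babysitting the stats page, assuming xo-server runs under a node process and with an arbitrary log path:

# append a timestamped RSS sample of the biggest node process every 5 minutes
while true; do
  echo "$(date -Is) $(ps -C node -o rss= --sort=-rss | head -1) kB" >> /tmp/xo-rss.log
  sleep 300
done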
@randyrue on our side, we do not use custom fields but tags.
working well. it depends on the number of custom fields you manage, and perhaps it's not usable for you because you need field:value?
but nested tags could do the trick: field=value

@MajorP93 here are some screenshots of my XOA RAM

(lost the stats from before Sunday since I crashed my host during an RPU this weekend...)

you can clearly see the RAM crawling up and being dumped at each reboot.
here is one of my XOA Proxies (4 in total, they totally offload backups from my main XOA)

there is also a slope of RAM crawling up... the little spikes are overhead while backups are running.
I started to reboot the XOA + all 4 proxies every morning.
@MajorP93 probably irrelevant, but since the end of December I have noticed memory-leak behavior on my XOA.
I finally set up a job to restart it every day at 4:15am; otherwise, after about 48h it was saturating its RAM (8 GB...)
no more problems with a daily reboot, but something is cooking.
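for reference, the job itself is trivial; a minimal sketch as a root crontab entry on the XOA (assuming the service runs under systemd as xo-server; rebooting the whole VM works too):

# restart xo-server every day at 4:15am
15 4 * * * systemctl restart xo-server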
@robertblissitt yup, in hindsight this seems to be a good best practice...
my hosts were up for 4 months and, because of a DNS resolution problem, had 77 patches to catch up on (80 for the one with advanced telemetry enabled)
a rolling reboot would probably have surfaced the initial migration/evacuation problem (and the subsequent zombie VMs) up front,
with no patches applied and no pool left in a semi-upgraded state.
note to my future self: try a rolling reboot first.
@olivierlambert so strange though. I have it enabled on 4 pools, but that didn't propagate to the hosts.
still, VMs boot up when not needed, as explained in the first post of the thread.
I'll try different combinations to see what's really going on
currently, when using the deploy URL, it still says Xen Orchestra 5, but we end up with XO 6 as the default

mmm, there is a POWER ON mode switch at the host level too, in the Advanced tab (disabled in my case)
so what is the expected behavior of these 3 switches?
Hi there,
Latest XOA, latest XCP-ng patches; but this pre-existed, it's not a new behavior.
no HA involved.
We have a pool of 3 hosts. Some VMs have AUTO POWER ON enabled in the Advanced tab.
We noticed that at every reboot of a server in this pool, the halted VMs start up again:
--> this is not wanted behavior for VMs that were purposely halted by hand.
I thought of the AUTO POWER ON switch as "the host must restart the VM after a power loss if the VM was previously running",
so we get surprised each time we reboot a host, whether for simple maintenance or for an RPU when patches are applied.
when we do an RPU, we manually shut down some minor VMs to speed up the evacuation process involved, but each time a host reboots, those VMs start up.
annoying
perhaps we have a bad usage of the auto start setting in the pool's ADVANCED tab?

auto power on is enabled on the pool AND at the VM level.
Could someone explain what the difference is?
Perhaps to resolve the behavior we just have to disable auto power on at the pool level?!
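for what it's worth, a sketch of how I believe the flag can be inspected from a host shell, since XAPI keeps auto power on in other-config at both levels (the UUIDs are placeholders):

# pool-level flag
xe pool-param-get uuid=<pool-uuid> param-name=other-config param-key=auto_poweron
# per-VM flag
xe vm-param-get uuid=<vm-uuid> param-name=other-config param-key=auto_poweron
# disable it on a VM that should stay halted
xe vm-param-set uuid=<vm-uuid> other-config:auto_poweron=false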
@olivierlambert shoutout to @danp who took over the incident ticket
he pointed me in the right direction to resolve the problem; my production pool is back up & running with its VMs.
there was indeed a difference between what "xl list"/"xenops-cli list" saw and what XOA showed in the web UI.
a couple of "xl destroy <domid>" to destroy the zombie VMs, and a few toolstack restarts later, all is now up.
I don't know how the hell a simple RPU got me into this situation though...
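for the record, the cleanup itself, in case someone lands here with the same zombie-domain symptoms (the domain ID is an example; only destroy domains that XAPI no longer knows about):

# on the affected host: list domains as Xen sees them, compare with the XO web UI
xl list
# destroy a zombie domain by its domain ID (the ID column of xl list)
xl destroy 42
# restart the toolstack so XAPI resyncs its view
xe-toolstack-restart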
currently having heavy issues with a production cluster of 3 hosts
RPU launched; all VMs except one evacuated the Master. we managed to shut that VM down and restart it on another host.
Master patch & reboot proceeded
then the RPU tried to evacuate a slave host, and all its VMs are now locked; we can't shutdown/hard-shutdown them.
we have a critical VM on this host that is still running. we tried to snapshot it in case a hard reboot of the host became necessary, but got OPERATION NOT SUPPORTED DURING AN UPGRADE
we manually installed the patches on the host without rebooting, and then the snapshot proceeded
I hope this VM is secured by this snapshot...
a ticket is open with pro support but it's quite stalled for now... no news since yesterday. Ticket#7751752
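for context, roughly what we ran from the host console (the UUID and snapshot name are placeholders; a workaround, not an official procedure):

# install the pending patches on the host without rebooting (XCP-ng 8.x)
yum update
# after that, the safety snapshot went through
xe vm-snapshot uuid=<vm-uuid> new-name-label=pre-hard-reboot-safety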