@markxc could you tell us what endpoint you used ?
Posts
-
RE: Setting "Protect from accidental deletion" breaks with backups and healthchecks
@Greg_E I think protection is doing its job: do not delete.
But in the healthcheck case, the process should bypass it for the restored VM. Thoughts, @bastien-nollet @florent?
-
RE: licence "error" when through XO PROXY
@olivierlambert okay, understood.
It is a small, minor cosmetic problem as of now. I need to get you access to our internally developed app to show you the potential of sub-XOAs ^^'
Almost out of the alpha phase.
-
licence "error" when through XO PROXY
Hi,
We added pools/hosts on our main XOA via the XO PROXY httpUrl.
We are trying to get all our clients' on-premise pools & hosts into our "MSP-like" central XOA. The hosts are seen as not licensed

and so are the proxies

But on the remote sites, there is a licence & pro support in the remote XOA.
I get it: the central XOA does not hold the licences of the other on-prem XOAs, so we get this error.
Would it be possible to "fetch" the licence information when attaching these remote pools/hosts?
-
RE: backup mail report says INTERRUPTED but it's not ?
@john.c said in backup mail report says INTERRUPTED but it's not ?:
Are you using NodeJS 22 or 24 for your instance of XO?
here is the node version on our problematic XOA

This XOA does NOT manage backup jobs; they are totally offloaded to XO Proxies. XOA Proxies:
[06:18 04] xoa@XOA-PROXY01:~$ node -v
v20.18.3
and XO CE:
root@fallback-XOA:~# node -v
v24.13.0
-
RE: backup mail report says INTERRUPTED but it's not ?
Memory problems arise on our XOA.
We have a spare XO CE, deployed via the ronivay script on an Ubuntu VM, that we use only when the main XOA is upgrading/rebooting.
Same pools/hosts attached, quite a read-only XO. Totally different behavior.


-
RE: backup mail report says INTERRUPTED but it's not ?
@florent said in backup mail report says INTERRUPTED but it's not ?:
@Pilow We pushed a lot of memory fixes to master, would it be possible to test it ?
How so? Do I stop my daily-reboot task and check whether RAM still crawls up to 8 GB?
-
RE: filter for custom field
@randyrue On our side, we do not use custom fields, but TAGs.
Working well. It depends on the number of custom fields you manage, and it is perhaps not usable because you need field:value? But nested tags could do the trick: field=value

-
RE: backup mail report says INTERRUPTED but it's not ?
@MajorP93 here are some screenshots of my XOA RAM

(stats from before Sunday were lost, since I crashed my host during an RPU this weekend...)

You can clearly see RAM crawling up and being freed at each reboot. Here is one of my XOA Proxies (4 in total; they fully offload backups from my main XOA)

There is also a slope of RAM crawling up... the little spikes are overhead while backups are running.
I have started rebooting the XOA + all 4 proxies every morning.
-
RE: backup mail report says INTERRUPTED but it's not ?
@MajorP93 Probably irrelevant, but since the end of December I have noticed memory-leak behavior on my XOA.
I finally set up a job to restart it every day at 4:15 am; otherwise, after about 48 h it would saturate its RAM (8 GB...).
No more problems with a daily reboot, but something is cooking.
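For reference, the daily-restart workaround described above boils down to a plain cron entry (a sketch: the file name is hypothetical, the schedule matches the 4:15 am job, and rebooting the whole appliance is assumed, as in the post):

```
# Hypothetical /etc/cron.d/xoa-nightly-reboot on the XOA appliance:
# reboot the VM every day at 4:15 am to work around the leak
15 4 * * * root /sbin/reboot
```

A `systemctl restart xo-server` on the same schedule might be a lighter alternative if only the Node process is leaking.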
-
RE: XCP-ng 8.3 updates announcements and testing
@robertblissitt Yup, in hindsight this seems to be a good best practice...
My hosts had been up for 4 months and, because of a DNS resolution problem, had 77 patches to catch up on (80 for the one with advanced telemetry enabled). A rolling reboot would probably have surfaced the initial migration/evacuation problem (and the subsequent zombie VMs) first,
with no patches applied yet and no pool left in a semi-upgraded state.
Note to my future self: try a rolling reboot first.
-
RE: strange behavior of auto start of VMs in a pool - bug or feature ?
@olivierlambert So strange though. I have it enabled on 4 pools, and it didn't propagate to the hosts.
Still, VMs boot up when not needed, as explained in the first post of the thread.
I'll try different combinations to see what's really going on.
-
Need to change XOA version from 5 to 6 on deploy page?
Currently, when using the deploy URL, it still states Xen Orchestra 5, but we end up with XO 6 by default.


-
RE: strange behavior of auto start of VMs in a pool - bug or feature ?
Mmm, there is also a POWER ON mode switch at the host level, in the Advanced tab (disabled in my case).
So what is the expected behavior of these 3 switches?
- on the pool Advanced tab
- on the host Advanced tab
- on the VM Advanced tab
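For what it's worth, the pool- and VM-level switches should map to the `other-config:auto_poweron` key in the XAPI database (the key XenServer/XCP-ng has historically used), which can be checked from dom0. A sketch, with placeholder UUIDs; note that the host-level "Power on mode" is, as far as I know, about remote power-on of the host itself (wake-on-LAN, iLO, ...), not VM auto-start:

```
# Pool-level auto power on flag
xe pool-param-get uuid=<pool-uuid> param-name=other-config param-key=auto_poweron
# VM-level auto power on flag
xe vm-param-get uuid=<vm-uuid> param-name=other-config param-key=auto_poweron
# Turn it off for a single VM
xe vm-param-set uuid=<vm-uuid> other-config:auto_poweron=false
```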
-
strange behavior of auto start of VMs in a pool - bug or feature ?
Hi there,
Latest XOA, latest XCP-ng patches; but this pre-existed, it's not a new behavior.
No HA involved. We have a pool of 3 hosts. Some VMs have AUTO POWER ON enabled in the Advanced tab.
We noticed that at every reboot of a server in this pool: VMs with auto power on that are currently halted DO START.
--> This is not wanted behavior for VMs that were purposely halted manually.
I thought of the AUTO POWER ON switch as "the host must restart the VM after a power loss if the VM was previously running".
So we are surprised each time we reboot a host, be it for simple maintenance or an RPU when patches are applied.
When we do an RPU, we manually shut down some minor VMs to speed up the evacuation process, but each time a host reboots, those VMs start up.
Annoying. Perhaps we are misusing the auto start setting in the pool's ADVANCED tab?

auto power on is enabled on the pool AND at the VM level.
Could someone explain what is the difference ?
Perhaps to resolve the behavior we just have to disable auto power on at the pool level?!
-
RE: XCP-ng 8.3 updates announcements and testing
@olivierlambert Shoutout to @danp, who took over the incident ticket.
He pointed me the right way to resolving the problem; my production pool is back up & running with its VMs.
There was indeed a diff between what was seen by "xl list"/"xenops-cli list" and what XOA showed in the web UI.
A couple of "xl destroy <domid>" commands to kill the zombie VMs and some toolstack restarts later, all is now up. I don't know how the hell a simple RPU got me into this situation though...
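For anyone landing here with the same zombie-VM symptom, the comparison and cleanup described above look roughly like this from dom0 (a sketch: the domain ID comes from the `xl list` output, and `xe-toolstack-restart` is the stock XCP-ng helper):

```
# What the Xen hypervisor itself sees (2nd column is the domain ID)
xl list
# What the xenopsd layer sees, for comparison
xenops-cli list
# Destroy a zombie domain by its domain ID (not a Unix PID)
xl destroy <domid>
# Restart the toolstack so XAPI and XO resync their view
xe-toolstack-restart
```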
-
RE: XCP-ng 8.3 updates announcements and testing
Currently having heavy issues with a production cluster of 3 hosts.
RPU launched; all VMs except one evacuated the master. We managed to shut down this VM and restart it on another host.
The master's patching & reboot then proceeded.
Then the RPU tried to evacuate a slave host, and all its VMs are now locked; we can't shut them down or hard-shutdown them.
We have a critical VM on this host that is still running. We tried to snapshot it in case a hard reboot of the host becomes necessary, but got OPERATION NOT SUPPORTED DURING AN UPGRADE.
We manually installed the patches on the host without rebooting, and the snapshot then proceeded.
I hope this VM is secured by this snapshot... A ticket is open with pro support but quite stalled for now... no news since yesterday. Ticket#7751752
-
RE: XCP-ng 8.3 updates announcements and testing
@gduperrey We got the XOA update alert and upgraded to XOA 6.1.0,
but no sign of XCP-ng host updates?

When patches are available, they usually pop up on their own; is there something to do on the CLI now?
EDIT: my bad, we had a DNS resolution problem... I now see a bunch of updates available...
-
RE: S3 Chunk Size
@olivierlambert Planning to give RustFS a try; I'll report back (currently full MinIO).
-
RE: Create new virtual machine?
@DRWhite85 As said earlier, you can.
Select whatever template, then modify ALL the params you want. Templates are just... templates... presets...