@Tristis-Oris So you're on 
Correct? Maybe the dev team can help find the issue.
@Tristis-Oris said in VM migration loop:
@DustinB As I said, XO.
Okay so we'll assume XO from Source. What commit are you on?
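If you're not sure, a quick way to check from the shell on the XO machine (the path below is just an assumption for a typical from-source install; use wherever you cloned xen-orchestra):

# Hypothetical clone location; adjust to your install
cd /opt/xen-orchestra
# Print the commit the running XO was built from
git rev-parse --short HEAD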
@Danp Right..
Hrm, maybe he has too many concurrent jobs running that are beating up dom0.
@Tristis-Oris said in VM migration loop:
@olivierlambert
XO has been updated a few times this week. Xen 8.3. I don't know what to search for in the logs; I see nothing interesting.
Are you using XOA or XOCE/XO from Source?
@McHenry The issue is that host hst150 in pool HST150 has been using more than 95% of its available memory over the last 5 minutes, based on your screenshot.
Either migrate some VMs to another host in the pool or shut down/scale back some of your VMs' resources.
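If you'd rather check and rebalance from the CLI, a rough sketch (the UUIDs are placeholders):

# Compare total vs. free memory on the busy host
xe host-list name-label=hst150 params=name-label,memory-total,memory-free
# Live-migrate one of its VMs to another host in the same pool
xe vm-migrate uuid=<UUID_OF_THE_VM> host-uuid=<UUID_OF_THE_TARGET_HOST> live=true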
@samuelolavo Gracefully stop the VMs ahead of your host shutdown, and have "auto power on" enabled on them so that when your pool master (and slaves) come back online the VMs start on their own.
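If you prefer setting that from the CLI rather than the XO toggle, a sketch using the standard auto_poweron keys (UUIDs are placeholders):

# Auto power-on must be enabled at the pool level...
xe pool-param-set uuid=<UUID_OF_THE_POOL> other-config:auto_poweron=true
# ...and on each VM you want to start automatically
xe vm-param-set uuid=<UUID_OF_THE_VM> other-config:auto_poweron=true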
@dfrizon said in How to protect a VM and Disks from accidental exclusion:
@olivierlambert The idea is to block deletion of the VM and its disks even by root itself, and make it possible only via the command line in the console. That's why I started the post by mentioning the command...
We dream of the day when MFA authentication will be required to delete a VM...
How would you prevent the root account from taking action? That is the absolute opposite of root's permission set, as if there were an account with even more permissions than root.
You can use permission sets: move the team members who are deleting powered-off VMs that are protected from accidental deletion into a group that doesn't have permission to delete VMs, and at the same time remove their permission to delete items from your SR.
I think that would solve your problem without causing the logical permission issues described above.
@dfrizon said in How to protect a VM and Disks from accidental exclusion:
Hello everyone!
We want to protect some VMs and their associated disks from being deleted (accidentally or intentionally). The command below protects the VM from being deleted, but not the associated disks:
xe vm-param-set uuid=<UUID_OF_THE_VM> blocked-operations:destroy=true
What parameter needs to be included?
Thanks!!
Within XO this is very straightforward; it's under the VM's advanced details tab:
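On the CLI side you can at least verify the block you already set, and lift it again when a deletion is actually intentional; a sketch, assuming the standard xe parameter commands:

# Show which operations are currently blocked on the VM
xe vm-param-get uuid=<UUID_OF_THE_VM> param-name=blocked-operations
# Remove the destroy block again when you really do want to delete the VM
xe vm-param-remove uuid=<UUID_OF_THE_VM> param-name=blocked-operations param-key=destroy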

@olivierlambert said in CPU radio buttons on usage graph:
Those graphs are 10 years old, and XO 6 will be the default UI, so I think we can confidently say we won't take time to debug those old graphs.
Kind of what I had figured: XO 6 is the way forward.
@olivierlambert said in Netbox - Conflicting IP addresses break sync:
Hi,
What would be the expected behavior? It's normal to report a conflict, because if you boot the copy, while leaving the original running, you will indeed have a conflict.
Is it possible to disable the NIC on the DR replica to validate that it boots to a desktop?
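If you want to try that, one way from the CLI, assuming the replica is halted and you don't mind recreating the VIF afterwards (UUIDs are placeholders):

# Find the replica's virtual interfaces
xe vif-list vm-uuid=<UUID_OF_THE_REPLICA>
# Remove the VIF so the copy boots with no network attached
xe vif-destroy uuid=<UUID_OF_THE_VIF>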
@Pilow Changing the MAC address could affect DHCP and DNS. Changing the IP address happens inside the VM, and XCP-ng/XO don't directly edit those kinds of settings.
The same issue was reported here, but it seems it's already been fixed:
https://github.com/Jarli01/xenorchestra_installer/issues/137
Hrm...
Same thing here too, never noticed it until now.


Must be some sort of bug
@blueh2o said in CPU radio buttons on usage graph:
@DustinB Still there in commit d77d6. The deselected CPU disappears from the graph but reappears when it refreshes.
"refreshes" like when you press F5 in your browser?
@blueh2o said in CPU radio buttons on usage graph:
@DustinB commit 8f2b8
This looks like it was released on Sept 16. If you update to the latest, is this still an issue?
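If it's a manual from-source install, the usual refresh is roughly the following (a sketch; the path and service name are assumptions, so adjust them to your setup):

# Hypothetical clone location and service name
cd /opt/xen-orchestra
git pull --ff-only
yarn && yarn build
systemctl restart xo-server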
@blueh2o said in CPU radio buttons on usage graph:
@DustinB exactly. I figured that's what they were for but they don't seem to work for me.
What version of XO are you using?
@olivierlambert I think so.
@blueh2o You should be able to disable a specific CPU graph to see what your other cores (or one specific core) are doing.
For example, I have 32 cores on my hypervisor; while I can't disable every core at once, you can toggle them to see whether a specific core is getting slammed with work.

@farokh said in Xen Orchestra 5.110 V2V not working:
@DustinB Because I was led to believe that I could leave my VM running while the initial transfer is occurring and only have it shutdown for the final bit to be copied. I have VMs that I'd rather not shut down for some number of hours if I can avoid doing that.
You would do what Florent said: transfer the VM without shutting down the original VM on ESXi. Once the migration is completed, you'd then power off the ESXi VM and power on the XCP-ng copy.
@farokh said in Xen Orchestra 5.110 V2V not working:
@DustinB It forcibly powers off the VM.
You'd have to do what Florent is saying in this case. But I question why it should matter, as long as the migration completes successfully and the VM boots on your new pool...
@farokh said in Xen Orchestra 5.110 V2V not working:
@DustinB That's my point. The "Warm Migration" powers off the running machine.
Are you stating that it forcibly powers the VM off, or does it gracefully execute a shutdown, i.e. "Start > Shutdown"?