@jku209 Go to Home -> Storages, click the SR you want to change, and inside the SR information just double-click the name at the top; you should then be able to change it.
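If you prefer the CLI, the same rename can be done with xe (a minimal sketch; the UUID and names are placeholders):

```
# Find the SR's UUID, then set a new name-label on it
xe sr-list name-label="Old SR name" params=uuid
xe sr-param-set uuid=<sr-uuid> name-label="New SR name"
```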
@olivierlambert Great setup, and a very well-written article. So NFS is recommended over iSCSI because there are fewer ways to misconfigure it.
I'm running a cheap single-host system with 1x Xeon E-1230v5. I would love to get a second host up, maybe with a focus on desktop virtualization and GPU passthrough. Thanks for the hint on EPYC servers.
There's good inspiration on this board and in this community.
Hi!
XCP-ng Center is not officially supported as a client for XCP-ng. It's only community-maintained and doesn't report updates.
Applying patches shouldn't crash the host, at all. Check the logs to see if it's related, and share them if you have more info.
Rebooting is needed if you want the patches to be applied "for real". In theory, only some kinds of patches require a reboot (the kernel and Xen itself), but so far we can't discriminate between them, so we ask for a reboot to be 100% sure they are applied anyway.
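For reference, the usual update flow on an XCP-ng host is just this (a minimal sketch; run on each host, pool master first):

```
# Install all pending updates from the XCP-ng repositories
yum update
# Reboot so kernel/Xen updates are actually applied
reboot
```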
I tried to get the mountpoint with the command from @tony, but the result was always "The uuid you supplied was invalid."
To get a new/live log I restarted the toolstack (for the 100th time...) and now everything is working again. I don't know why, but okay...
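In case it helps anyone else, restarting the toolstack is done with:

```
# Restarts XAPI and related services without touching running VMs
xe-toolstack-restart
```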
Thank you very, very much @olivierlambert and @tony for your help!!!!
Thanks for the input @tony and @stormi. It turns out I've managed to get most of the machines back on the second node after getting some help to force it back to being the master, powering off dead VMs, etc., so I think I've avoided having to do this.
It is really useful info however, and I will probably give this a try at some point, just so I can if I ever need to.
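For anyone who ends up in the same situation, the commands involved were along these lines (a sketch only; UUIDs are placeholders, and the exact steps depend on your pool state):

```
# On the surviving host: promote it to pool master
xe pool-emergency-transition-to-master
# Tell the remaining pool members about the new master
xe pool-recover-slaves
# Mark a dead VM as halted so it can be started elsewhere
xe vm-reset-powerstate uuid=<vm-uuid> force=true
```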
Thanks again.
@erfant Probably not, because the nvme driver is loaded and there are no NVMe errors in the logs.
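That can be double-checked from dom0 with standard tools, e.g.:

```
# Confirm the nvme kernel module is loaded
lsmod | grep nvme
# Look for NVMe-related errors in the kernel log
dmesg | grep -i nvme
```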
@olivierlambert Thank you and your team for this great project and community! It's a nice place to share knowledge and learn new stuff. I've learned a lot here!
In other words, you must empty the host before adding it to the primary pool; of course, you can migrate VMs from one pool to another. And if your hardware is not identical, but similar, you'll have a heterogeneous pool.
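As a sketch, joining an emptied host to the pool from that host's CLI looks like this (the address and credentials are placeholders):

```
# Run on the host you want to add, pointing it at the pool master
xe pool-join master-address=<master-ip> master-username=root master-password=<password>
```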
That's because UEFI has been the default on Hyper-V for much longer than on XenServer/XCP-ng: when Windows 2012 R2 came out, only BIOS was available on XenServer, while on Hyper-V the template was already set to use UEFI.
Again, you should have been able to enable UEFI with Xen Orchestra.
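It can also be switched on the CLI if needed (a sketch; the UUID is a placeholder, and the change takes effect at the next VM boot):

```
# Switch the VM's firmware from BIOS to UEFI
xe vm-param-set uuid=<vm-uuid> HVM-boot-params:firmware=uefi
```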
> @Ascar said in "Disk Usage for Control Domain on server 'abc.mydomain.com' has reached 92%":
> If you see any deficiency in my plan please share your view.
The main drawback I see is that you may have to reboot that VM from time to time, so the SR will become unresponsive and may hang some tasks until it's back. And when you reboot your hosts, they will try to connect to a VM that may not be available yet.
I would say definitely no to using a VM as shared storage, but for an ISO SR it may be usable enough.
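If you go that route, attaching such an NFS share as an ISO SR is a one-liner (a sketch; the server and path are placeholders):

```
# Create an ISO SR backed by an NFS export
xe sr-create name-label="ISO library" type=iso content-type=iso \
  device-config:location=<nfs-server>:/path/to/isos
```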
To answer your last question: you can sort of make this work. You can set a "home" server for each VM, where it will try to start first; if the home server isn't available, it will start on any other available server. If you want absolute separation, then you'll have to make 2 pools and split them there. You should still be able to live-migrate the VMs between pools, but you won't be able to share your SR between pools.
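Setting that "home" server can be done in Xen Orchestra, or with xe via the VM's affinity field (a sketch; UUIDs are placeholders):

```
# Pin the VM's preferred (home) host
xe vm-param-set uuid=<vm-uuid> affinity=<host-uuid>
```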
@jstorgaard Updated packages are now available for you to test: https://github.com/xcp-ng/xcp/issues/434#issuecomment-688786905
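The exact test instructions are in the linked comment; as an assumption about the typical flow, testing packages on XCP-ng are usually pulled from the testing repository with something like this (check the issue for the real package list):

```
# Enable the testing repo just for this transaction
yum update --enablerepo=xcp-ng-testing "xen-*"
```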