Posts made by POleszkiewicz
-
RE: More than 64 vCPU on Debian11 VM and AMD EPYC
@olivierlambert With NVMeoF I can split them easily too (one target per namespace), and I actually gain redundancy compared to a local device (connect to two targets on different hosts and RAID1 them in the VM). Some newer NVMe drives support SR-IOV natively too, so no additional hardware would be needed to split one and pass it through to VMs (I have not tested this, though). I'm not sure about the price of those cards, but CX3s are really cheap, while CX5/6 are getting more affordable too.
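For reference, a rough sketch of what I mean by "target per namespace", using the Linux kernel NVMeoF target (nvmet) over configfs on the storage host; the subsystem NQN, namespace device and port address below are just placeholders:

```
# Export one NVMe namespace as its own NVMeoF subsystem, so each VM gets
# its own target (repeat per namespace / per VM)
modprobe nvmet
modprobe nvmet-rdma

SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2024-01.com.example:vm01-disk
mkdir -p "$SUBSYS"
echo 1 > "$SUBSYS/attr_allow_any_host"            # or restrict via allowed_hosts
mkdir -p "$SUBSYS/namespaces/1"
echo /dev/nvme0n1 > "$SUBSYS/namespaces/1/device_path"
echo 1 > "$SUBSYS/namespaces/1/enable"

# Expose it on an RDMA port; this is the address the VM's SR-IOV VF connects to
PORT=/sys/kernel/config/nvmet/ports/1
mkdir -p "$PORT"
echo ipv4     > "$PORT/addr_adrfam"
echo rdma     > "$PORT/addr_trtype"
echo 10.0.0.1 > "$PORT/addr_traddr"
echo 4420     > "$PORT/addr_trsvcid"
ln -s "$SUBSYS" "$PORT/subsystems/"
```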
-
RE: More than 64 vCPU on Debian11 VM and AMD EPYC
@olivierlambert Interesting, but where is the benefit over NVMeoF + SR-IOV, which is doable on a Mellanox CX3 (or better, CX5 and up)? Offloading dom0 work to specialized hardware is interesting, but what I see in these articles is basically equivalent to connecting to an NVMeoF target via an SR-IOV NIC, which has been doable for quite a while without any changes to XCP-ng.
-
RE: More than 64 vCPU on Debian11 VM and AMD EPYC
@olivierlambert What exactly do you support from Kalray? Could you tell us more?
-
High number of vCPUs performance penalty
Hi,
Is there any kind of architecture-imposed penalty for a VM with a high number of vCPUs?
I'm running a compute cluster with 4 x 8890v4 per node (96 cores, 192 threads) as Kubernetes worker nodes, and I'm wondering whether there is an architecture-imposed penalty once a single VM gets more than, say, 32 or 64 vCPUs.
If so, I can run a larger number of smaller worker nodes instead of a smaller number of bigger ones.
Could anyone from the team shed some light on this? What would be the sweet spot here?
-
RE: More than 64 vCPU on Debian11 VM and AMD EPYC
@olivierlambert said in More than 64 vCPU on Debian11 VM and AMD EPYC:
If you are heavily relying on disk perf, either:
- use multiple VDIs and RAID0 them (you'll get more than double the perf, because tapdisk is single-threaded)
- PCI passthrough a drive to the VM
Another option is to do NVMeoF with SR-IOV on the NIC: performance is pretty similar to bare metal with PCI passthrough, yet one NVMe drive can be divided between VMs (if it supports namespaces), and you can attach NVMe volumes from more than one source to the VM (for redundancy).
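To illustrate the redundancy part: a minimal sketch of the guest side with nvme-cli and mdadm, assuming the same namespace is exported by two different target hosts (NQN, addresses and device names are placeholders):

```
# Inside the VM, over the SR-IOV VF: attach the same volume from two storage hosts...
nvme connect -t rdma -a 10.0.0.1 -s 4420 -n nqn.2024-01.com.example:vm01-disk
nvme connect -t rdma -a 10.0.0.2 -s 4420 -n nqn.2024-01.com.example:vm01-disk

# ...and mirror them, so losing one storage host does not take the VM's disk down
# (the fabric devices typically show up as /dev/nvme1n1 and /dev/nvme2n1)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1
mkfs.ext4 /dev/md0
```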
-
RE: XCP-ng 8.3 betas and RCs feedback 🚀
Any possibility of compiling current Ceph client packages for 8.3? The ones in the 8.2.1 repos are pretty old.
-
RE: XCP-ng 8.3 betas and RCs feedback 🚀
Great work,
BTW, would it be possible to add nvme-cli to the installer image? It would be nice if we could attach NVMeoF at install time and install to NVMeoF volumes (while keeping /boot either on local USB/SD or on iSCSI). This way we could easily provision a cluster of diskless hosts, while keeping the system storage redundant by using MD RAID across two NVMeoF volumes located on different target hosts
(with some manual work to attach NVMeoF before mounting root).
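Roughly what the manual part looks like today with nvme-cli (TCP transport here just as an example; NQNs, addresses and device names are placeholders). Having this available in the installer would let the installation target itself live on such an array:

```
# See what each target host exports for this machine, then attach everything
nvme discover    -t tcp -a 192.168.10.1 -s 4420
nvme connect-all -t tcp -a 192.168.10.1 -s 4420
nvme connect-all -t tcp -a 192.168.10.2 -s 4420

# Mirror the two system volumes; the installer would then install onto /dev/md127,
# while /boot stays on local USB/SD or on iSCSI
mdadm --create /dev/md127 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
```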
-
RE: Custom BIOS/BootROM for a VM
Sure,
Another thing: the included iPXE is a version that does not support multipath SAN boot (I guess it would be nice to update it).
-
Idea for a (kind of) USB-based installation
I believe it would be nice to add a new mode of installation:
Put /boot on a USB stick/SD card, and everything else on HDD/SSD/NVMe/SAN.
This way we would be able to:
- Use any device as main disk (root FS) device, even if the server itself does not support booting from it
- Avoid problems with dying media due to excessive overwrites
It should be a relatively simple addition, I believe, and it could have a lot of value for people running older hardware (like HP DL servers older than Gen9) that want to use NVMe, or SAS in passthrough mode, as the system disk. A rough sketch of the split I have in mind is below.
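Something like this, with placeholder device names:

```
# small partition on the USB/SD card the firmware can actually boot from;
# holds only kernel + initrd, so it is rarely written to
/dev/sdb1        -> /boot

# everything that sees real I/O goes to the device the firmware cannot boot from
/dev/nvme0n1p1   -> /        (root FS)
/dev/nvme0n1p2   -> local SR / remaining storage
```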
-
Custom BIOS/BootROM for a VM
Hi,
I would like to load a boot ROM for my network card into a VM. The card runs with SR-IOV, so the VM gets a VF from it, and I already have the ROM extracted. This is possible with KVM, and I wonder whether something similar could be done with XCP-ng.
If loading a boot ROM is not possible, would it perhaps be possible to customize the BIOS/UEFI firmware that is loaded into the VM?
The background is that I would like to boot a diskless VM directly from SAN over SR-IOV to avoid virtualization latencies, and the card I want to use is not recognized by iPXE, so loading the manufacturer's boot ROM would solve the issue.
I know I can boot from a small "boot image" disk attached through the hypervisor and then boot from SAN using initrd drivers, but that would make provisioning/installing multiple VMs in the cluster much more complicated, and I would prefer a more straightforward way.
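For reference, this is roughly what I mean by "possible with KVM": plain QEMU can hand an extracted option ROM to a passed-through VF via the romfile property (the PCI address and ROM path below are just examples). Something equivalent exposed by XCP-ng would solve my case:

```
# QEMU/KVM: pass the SR-IOV VF through and supply the extracted option ROM,
# so the guest firmware runs the vendor's SAN boot code
# (other VM options omitted; address and path are examples)
qemu-system-x86_64 [...] \
    -device vfio-pci,host=0000:03:00.2,romfile=/var/lib/roms/cx5-vf-bootrom.rom
```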