XO to manage KVM?
-
It's the exact same thing for KVM. No virtualization platform is "meant" to be a big filer. Period.
Using passthrough you can partially solve the issue; I don't see why you aren't considering this.
-
Note about technical feasibility: XO uses the high-level XAPI to manage pools and VMs and is tightly coupled to that XAPI and all the concepts it defines (VDI, PIF, VIF, SR, etc.). Changing it to manage other virtualization solutions would require a total overhaul: lots of "if XAPI else...", and UI headaches where the concepts are too different. It would be a completely different tool in the end. I'm not a XO dev myself, but I can see that it's a huge project you are suggesting we develop, for benefits that are not so obvious.
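To give an idea of how central those objects are: roughly everything XO does ends up as XAPI calls on them, something like the commands below (just an illustration, the UUIDs are placeholders), and each of those objects has its own lifecycle and parameters with no one-to-one equivalent in libvirt.
# A few of the XAPI objects XO is built around
xe sr-list                          # Storage Repositories
xe vdi-list sr-uuid=<sr uuid>       # Virtual Disk Images inside an SR
xe pif-list host-uuid=<host uuid>   # Physical network interfaces of a host
xe vif-list vm-uuid=<vm uuid>       # Virtual network interfaces of a VM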
About your use case, I don't understand why you can't store your VMs on local storage. Why do you need them to be on a local NFS share?
-
@olivierlambert said in XO to manage KVM?:
It's the exact same thing for KVM. No virtualization platform is "meant" to be a big filer. Period.
I don't know why, but it seems I'm not able to get my message across. I don't want to run the file server "inside" the virtualisation solution. For my use case, the main difference between KVM and Xen is that I can run the file server and KVM side by side, both using separate local storage, whilst on Xen I can't do that because I can't install the file server on bare metal.
Thank you @stormi for raising the technical challenges of that implementation and I would say that the main benefit is extending the excellent tool that is XO to the expanding user base of KVM. Just think about all the KVM shops or heterogeneous datacentres that would be able to use XO.
Yes, indeed it's a big strategic change with technical and financial implications, but XO could be an alternative to CloudStack or oVirt.
-
The message isn't clear because I think we are fundamentally seeing virtualization differently
As long as you are using a host for virtualization, there's no "outside". A KVM host also acts as a "driver" domain/host for the VMs, implemented a bit differently but on the same general principle as Xen.
You can always install NFS inside the dom0, but that would be the same level of weirdness as doing it on a KVM host.
If you want a filer outside your virtualization, use another physical host. And again, passthrough is OK; there are various users here doing that with FreeNAS, for example.
-
Sounds like KVM would be a better solution for you, since you want a bare-metal file server AND a hypervisor solution wrapped into one.
-
@olivierlambert said in XO to manage KVM?:
The message isn't clear because I think we are fundamentally seeing virtualization differently
I guess so. I understand that when you look purely at virtualisation, Xen and KVM are similar in a sense: you have a host for the VMs and that's it. There is no "outside". The whole purpose of that physical server is hosting VMs.
I'm taking a physical view of the server, where an SME can only afford a single machine. In that scenario, you install software to address different necessities. KVM is "just another piece of software" addressing the need to host a few VMs, whilst NFS/SMB will be serving files, a web server will be running web apps, etc. Think along the lines of ClearOS or UnRAID.
@Biggen said in XO to manage KVM?:
Sounds like KVM would be a better solution for you, since you want a bare-metal file server AND a hypervisor solution wrapped into one.
Indeed, that's why I said XCP-ng wouldn't be a good fit, but XO would be a great addition to manage the virtualisation side of this solution.
And the use case is much wider if you think about larger companies and hosting providers running heterogeneous hosts with both Xen and KVM. I'm pretty sure there are plenty of hosting companies running both Xen and KVM in their DCs.
-
I disagree regarding KVM vs XCP-ng in this case. The result, system wide, is pretty similar. You can create a file server in the "host" with both solutions.
Again, I don't see any problem with using disk controller passthrough for your filer: you won't lose any performance and you'll have good isolation.
The rest of the available disks will be used as local storage for the other VMs.
Using this setup, you have your "one machine" setup.
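For illustration, turning one of those remaining disks into local VM storage is a one-liner on the XCP-ng side (a sketch only; the device name /dev/sdb and the label are just examples):
# Create a local LVM SR on a spare disk for the other VMs
xe sr-create host-uuid=<host uuid> name-label="Local storage 2" type=lvm content-type=user shared=false device-config:device=/dev/sdb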
-
@olivierlambert It's still one machine the way I proposed it.
One *nix machine running a SMB/NFS share as a file server. The same machine also runs a KVM instance for VMs. It's the way I've done virtualization forever, until I started messing with xcp-ng.
Actually, it's still the way I'd do a massive file server. I don't think a 20TB file server needs virtualization. But that's my opinion.
-
@Biggen I know, and you can do the same with XCP-ng if you like (installing NFS in the dom0).
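To be clear about what that means in practice, here is a rough sketch (not officially supported; it assumes the CentOS-based dom0, the export path and subnet are just examples, and nfs-utils may already be installed since the NFS SR driver needs it):
# In dom0 - unsupported, at your own risk
yum install nfs-utils
mkdir -p /srv/share
echo "/srv/share 192.168.1.0/24(rw,sync)" >> /etc/exports
systemctl enable --now nfs-server
exportfs -ra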
-
@olivierlambert said in XO to manage KVM?:
Again, I don't see any problem with using disk controller passthrough for your filer: you won't lose any performance and you'll have good isolation.
The rest of the available disks will be used as local storage for the other VMs.
Could you please confirm whether it is the controller or the disks that need to be passed through? If it's the controller, it means you need two controllers in the server.
@olivierlambert said in XO to manage KVM?:
(installing NFS in the dom0).
Apologies for my ignorance, because I never tried. Doesn't it create problems when updating XCP-ng?
@Biggen said in XO to manage KVM?:
Actually, it's still the way I'd do a massive file server. I don't think a 20TB file server needs virtualization. But that's my opinion.
It depends on the use case. Sometimes it's easier to manage the applications isolated in VMs. Even the file level backup software can run in a VM and access the files via NFS share.
-
- Yes, if you do a PCI passthrough, you are passing through the whole controller, which shouldn't be a problem if you have a dedicated controller for your SAS disks (eg an LSI card) and keep the rest for local VM storage (eg your SATA disks, or NVMe drives). Alternatively, you could pass through just the disks, but if you are thinking about a 20TiB filer, having a controller on a PCI card might make more sense.
- It's not creating problems per se; it just won't be preserved if you do an ISO upgrade, that's all.
- In your specific use case of "one machine fits all", I would buy an LSI PCIe card, connect all the filer disks to it, pass it through to the filer VM, problem solved (see the example commands below).
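A rough example of what the controller passthrough itself looks like on XCP-ng (the PCI address and UUID are placeholders; check the passthrough documentation for your version before doing this):
# Find the PCI address of the HBA
lspci | grep -i lsi
# Hide it from dom0 so it can be given to the filer VM, then reboot the host
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:03:00.0)"
# Assign the controller to the (halted) filer VM
xe vm-param-set other-config:pci=0/0000:03:00.0 uuid=<filer vm uuid>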
-
@beagle The simple answer here, which has been implied by others, is...
You DO NOT, EVER, install other "services" on your hypervisor in a production environment - no matter what your cost, convenience, etc. "desires" are.
That's just not how this is all supposed to work, and any technical guy worth his job position will know right away not only to recommend against setting it up like that, but to adamantly decline being "forced" to do so - consequences be damned.
What is even worse, is you mentioned a 20 TB requirement for file server storage. There is literally only one solution for this...
A DEDICATED file server - whether that be a VM with enough attached storage (VDIs using workarounds, or iSCSI/NFS/SMB direct attached), or you build a back-end NAS system that either serves the data directly or sits behind a front-end file server "proxy".
You also claim a need for high performance - this automatically dismisses any "cheap" solution that aligns with some of your concepts above. Do it right... the first time.
There is a huge difference between "finding a creative solution" and "ignorantly (intentionally or not) pursuing an unrealistic and ill-advised idea".
-
https://www.ebay.co.uk/ulk/itm/132562388180
Flash it into IT mode and the problem is solved in 30 minutes for £20, and you get the best of both worlds: XCP-ng and FreeNAS.
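For reference, the flashing itself is usually something along these lines with LSI's sas2flash utility (treat this as a sketch only: the exact firmware file names depend on the card model, so follow a proper crossflashing guide for yours):
sas2flash -listall                          # note the adapter and current firmware
sas2flash -o -e 6                           # erase the existing IR firmware (do not power off afterwards)
sas2flash -o -f 2118it.bin -b mptsas2.rom   # flash the IT-mode firmware and, optionally, the boot ROM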
Is SMAPIv3 needed for reliable individual disk passthrough? I always thought that ended in disaster? Waiting patiently so I can play with Ceph...
-
I think the solution you are looking for is direct disk access. It's possible. I use it for my file server VM (FreeNAS) with five 4TB WD Red drives (ZFS). Passing through the entire PCI storage card is better, but even cheap server motherboards have 6 SATA channels (one for the dom0 SSD, and five for file server storage).
This is my example (and my UUIDs)
mkdir /srv/direct-sata_sr
xe sr-create name-label="Direct SATA SR" name-description="SATA links to passthrough direct to VMs" type=udev content-type=disk device-config:location=/srv/direct-sata_sr
# New UUID generated on SR creation:
ba9b124c-c5ac-2c0b-7b1f-014d9cezzzzz
# Symlink each disk into the SR directory by its /dev/disk/by-id name:
ln -s /dev/disk/by-id/<disk id> /srv/direct-sata_sr/<disk id>
ln -s /dev/disk/by-id/ata-WDC_WD5000BEVT-26A0RT0_WD-WXE1A10V7603 /srv/direct-sata_sr/ata-WDC_WD5000BEVT-26A0RT0_WD-WXE1A10V7603
xe sr-scan uuid=ba9b124c-c5ac-2c0b-7b1f-014d9cezzzzz
xe vdi-list sr-uuid=ba9b124c-c5ac-2c0b-7b1f-014d9cezzzzz
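After the sr-scan, each symlinked disk shows up as a VDI in that SR; to hand one to the file server VM, attach it like any other disk (a sketch, UUIDs and device number are placeholders):
xe vbd-create vm-uuid=<vm uuid> vdi-uuid=<vdi uuid> device=1 mode=RW type=Disk bootable=false
xe vbd-plug uuid=<vbd uuid>    # or simply start the VM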
Good luck.