XO to manage KVM?
-
Sounds like KVM would be a better solution for you since you want a file server bare metal AND a hypervisor solution wrapped into one.
-
@olivierlambert said in XO to manage KVM?:
The message isn't clear because I think we are fundamentally seeing virtualization differently
I guess so. I understand that when you look purely at virtualisation, Xen and KVM will be similar in a sense: you have a host for the VMs and that's it. There is no "outside". The whole purpose of that physical server is hosting VMs.
I have a physical view of a server where an SME can only afford a single server. In that scenario, you install software to address different needs, and KVM is "just another piece of software" that addresses the need to host a few VMs, while NFS/SMB serves files, a webserver runs webapps, etc. Think along the lines of ClearOS or UnRAID.
@Biggen said in XO to manage KVM?:
Sounds like KVM would be a better solution for you since you want a file server bare metal AND a hypervisor solution wrapped into one.
Indeed, that's why I said XCP-ng wouldn't be a good fit, but XO would be a great addition to manage the virtualisation side of this solution.
And the use case is much wider if you think about larger companies and hosting providers running heterogeneous hosts with both Xen and KVM. I'm pretty sure there are plenty of hosting companies running both Xen and KVM in their DCs.
-
I disagree regarding KVM vs XCP-ng in this case. The result, system-wide, is pretty similar: you can create a file server in the "host" with both solutions.
Again, I don't see any problem with using disk controller passthrough for your filer: you won't lose any performance and you'll get good isolation.
The rest of the available disks will be used as local storage for the other VMs.
Using this setup, you have your "one machine" setup.
-
@olivierlambert It's still one machine the way I proposed it.
One *nix machine running an SMB/NFS share as a file server. The same machine also runs a KVM instance for VMs. It's the way I've done virtualization forever, until I started messing with XCP-ng.
Actually, it's still the way I'd do a massive file server. I don't think a 20TB file server needs virtualization. But that's my opinion.
-
@Biggen I know, and you can do the same with XCP-ng if you like (installing NFS in the dom0).
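For the record, a minimal sketch of what "NFS in the dom0" could look like. nfs-utils already ships in dom0 (it's needed for NFS SRs); the export path and subnet below are made-up examples, not part of anything stated above:

```shell
# Export a local directory from dom0 over NFS
# (/mnt/filestore and 192.168.1.0/24 are placeholder examples)
echo '/mnt/filestore 192.168.1.0/24(rw,sync,no_root_squash)' >> /etc/exports

# Start the NFS server and re-read the exports table
systemctl enable --now nfs-server
exportfs -ra
```

As noted below, this survives normal updates but won't be preserved across an ISO upgrade.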
-
@olivierlambert said in XO to manage KVM?:
Again, I don't see any problem to use disk controller passthrough for your filer, you won't lose any perfs and you'll have a good isolation.
The rest of the available disks will be used as local storage for the other VMs.
Could you please confirm whether it's the controller or the disks that need to be passed through? If it's the controller, it means you need two controllers in the server.
@olivierlambert said in XO to manage KVM?:
(installing NFS in the dom0).
Apologies for my ignorance, as I've never tried it. Doesn't it create problems when updating XCP-ng?
@Biggen said in XO to manage KVM?:
Actually, its still the way I'd do a massive file server. I don't think a 20TB file server needs virtualization. But that's my opinion.
It depends on the use case. Sometimes it's easier to manage the applications isolated in VMs. Even the file-level backup software can run in a VM and access the files via an NFS share.
-
- Yes, if you do a PCI passthrough, you are passing through the whole controller. That shouldn't be a problem if you have a dedicated controller for your SAS disks (e.g. an LSI card) and keep the rest for local VM storage (e.g. your SATA disks or NVMe drives). Alternatively, you could pass through just the disks, but if you are thinking about a 20TiB filer, having a controller on a PCI card might make more sense.
- It doesn't create problems per se; it just won't be preserved if you do an ISO upgrade, that's all.
- In your specific use case of "one machine fits all", I would buy an LSI PCIe card, connect all the disks to it, and pass it through to the filer VM. Problem solved.
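For reference, controller passthrough on XCP-ng boils down to hiding the device from dom0 and assigning it to the VM. A minimal sketch, assuming the HBA sits at PCI address 0000:04:00.0 (a placeholder; check yours with lspci):

```shell
# Find the PCI address of the storage controller (e.g. an LSI HBA)
lspci | grep -i lsi

# Hide the device from dom0 so it can be given to a guest
# (0000:04:00.0 is a placeholder address - use your own)
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:04:00.0)"
reboot

# After the reboot, attach the hidden device to the filer VM (placeholder UUID)
xe vm-param-set uuid=<filer-vm-uuid> other-config:pci=0/0000:04:00.0
```

The VM then sees the whole controller, and every disk behind it, as native hardware.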
-
@beagle The simple answer here, which has been implied by others, is...
You DO NOT, EVER, install other "services" on your hypervisor in a production environment - no matter what your cost, convenience, etc. "desires" are.
That's just not how this is all supposed to work, and any technical guy worth his job position will know right away not only to recommend against setting it up like that, but to adamantly decline being "forced" to do so - consequences be damned.
What is even worse is that you mentioned a 20 TB requirement for file server storage. There is literally only one solution for this...
A DEDICATED file server - whether that be a VM with enough attached storage (VDIs using workarounds, or iSCSI/NFS/SMB direct-attached), or a back-end NAS system that either serves the data directly or utilizes a front-end file server "proxy".
You also claim a need for high performance - this automatically rules out any "cheap" solution that aligns with some of your concepts above. Do it right... the first time.
There is a huge difference between "finding a creative solution" and "ignorantly (intentionally or not) pursuing an unrealistic and ill-advised idea".
-
https://www.ebay.co.uk/ulk/itm/132562388180
Flash it into IT mode and the problem is solved in 30 minutes for £20, and you get the best of both worlds: XCP-ng and FreeNAS.
Is SMAPIv3 needed for reliable individual disk passthrough? I always thought that this ended in disaster? Waiting patiently so I can play with Ceph......
-
I think the solution you are looking for is direct disk access. It's possible; I use it for my file server VM (FreeNAS) with five 4TB WD Red drives (ZFS). Passing through an entire PCI storage card is better, but even cheap server motherboards have 6 SATA channels (one for the dom0 SSD, and five for file server storage).
This is my example (and my UUIDs)
mkdir /srv/direct-sata_sr
xe sr-create name-label="Direct SATA SR" name-description="SATA links to passthrough direct to VMs" type=udev content-type=disk device-config:location=/srv/direct-sata_sr
A new UUID is generated on SR creation:
ba9b124c-c5ac-2c0b-7b1f-014d9cezzzzz
Then symlink each disk into the SR directory:
ln -s /dev/disk/by-id/<disk id> /srv/direct-sata_sr/<disk id>
For example:
ln -s /dev/disk/by-id/ata-WDC_WD5000BEVT-26A0RT0_WD-WXE1A10V7603 /srv/direct-sata_sr/ata-WDC_WD5000BEVT-26A0RT0_WD-WXE1A10V7603
xe sr-scan uuid=ba9b124c-c5ac-2c0b-7b1f-014d9cezzzzz
xe vdi-list sr-uuid=ba9b124c-c5ac-2c0b-7b1f-014d9cezzzzz
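To actually hand one of these VDIs to a VM, the usual xe plumbing should work (the UUIDs below are placeholders, not values from this thread):

```shell
# Create a VBD linking the direct-access VDI to the VM (placeholder UUIDs)
xe vbd-create vm-uuid=<vm-uuid> vdi-uuid=<vdi-uuid> device=1 mode=RW type=Disk

# Hot-plug it if the VM is already running
# (use the VBD UUID printed by vbd-create)
xe vbd-plug uuid=<vbd-uuid>
```

Inside the guest the disk then shows up as a regular block device (e.g. xvdb), which FreeNAS can add to a ZFS pool.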
Good luck.