XO to manage KVM?
-
On local storage, it works the same way as KVM: your VM disk is stored in a file (VHD or qcow2). It's exactly the same principle.
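For example, on a file-based local SR (the ext/thin-provisioned kind; the default LVM SR stores VDIs as logical volumes instead), each VDI is just a VHD file you can see from the dom0. Something like this, with the SR UUID as a placeholder:

```
# Each VDI on a file-based SR is a VHD file named after its UUID
ls -lh /run/sr-mount/<sr-uuid>/*.vhd
```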
Again, I don't really understand what you want to achieve, functionally speaking. Before talking about performance, what do you need? In simple words.
-
@olivierlambert said in XO to manage KVM?:
what do you need? In simple words.
As I said above:
@beagle said in XO to manage KVM?:
I'm looking for a 1 server solution for SMEs where file storage and VMs can be managed on the same server
-
I don't see any problem achieving that with XCP-ng. I think we even have people here doing exactly that.
-
I can't see any solution other than setting up NFS/SMB on a VM and attaching a disk for storage, but as discussed above this approach has two issues: (1) the 2TB limit on VDIs and (2) serving files stored inside a file (a VDI) instead of directly from the filesystem, which would give you poor performance.
These problems could probably be overcome by using disk passthrough, but that solution has other implications, for instance for backups.
I would appreciate it if you could point out another solution that I may be missing.
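For issue (1), one workaround would be attaching several VDIs under the limit and aggregating them inside the VM with LVM. A rough sketch, with placeholder UUIDs and device names (and it does nothing for issue (2)):

```
# On the host: create VDIs below the 2TiB ceiling and attach them to the VM
xe vdi-create sr-uuid=<sr-uuid> name-label=data1 virtual-size=2000GiB type=user
# ... repeat for data2, data3, then create VBDs to plug them into the VM

# Inside the VM: aggregate the attached disks into one big volume
pvcreate /dev/xvdb /dev/xvdc /dev/xvdd
vgcreate data /dev/xvdb /dev/xvdc /dev/xvdd
lvcreate -l 100%FREE -n files data
mkfs.ext4 /dev/data/files
```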
-
I still don't get what you want to do. I understand the part where you want to have e.g. a FreeNAS as a VM and share files from it. But why on earth use this VM to store other VMs' disks? That's a completely different usage, almost orthogonal.
You need one space to store your VMs, and another space for the "shared files" (e.g. SMB/NFS) between your VMs. It's NOT the same thing, and the requirements are vastly different (e.g. you use a hypervisor-aware backup system for your VMs, and file-level backup for your NAS).
Again, the "global" use case is unclear to me. Why not have your NAS VM on some of the disk space and keep the rest for your VM disks?
-
Apologies if I was not clear, but you misunderstood what I said.
Let me try to put it as a simple question: how would you set up a 20TB file server on XCP-ng?
-
My first answer is "I wouldn't" because it's not meant for that. However, if you have a good use case, then my second answer is "disk passthrough".
I would never back up or migrate a 20TiB VM anyway, at least not with a hypervisor-aware backup. Rsync would do a better job (inside the VM).
With a very large VM, you lose all the flexibility of virtualization anyway. So it's not "bad" if you want to avoid a second physical host for your file storage, because VM flexibility is already annihilated by the VM disk size itself.
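Something along these lines, run from inside the VM (host name and paths are placeholders):

```
# File-level backup of the filer's data over SSH:
# -a preserves permissions/ownership/timestamps, -H hard links,
# -A ACLs, -X extended attributes; --delete mirrors removals
rsync -aHAX --delete /srv/files/ backup@backup-host:/backup/files/
```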
-
@olivierlambert said in XO to manage KVM?:
My first answer is "I wouldn't" because it's not meant for that
And that brings me back to my OP where I said XCP-ng was not a good fit for that use case.
On the other hand, you can install the file server on bare metal and have KVM handle the virtualisation side: two completely different solutions working side by side. The piece missing in that overall solution is XO to manage the virtualisation.
Another upside of extending XO to manage KVM hosts is the possibility of managing a heterogeneous datacentre with both Xen and KVM hosts, similarly to CloudStack and oVirt.
-
It's the exact same thing for KVM. No virtualization platform is "meant" to be a big filer. Period.
Using passthrough, you can partially solve the issue; I don't see why you aren't considering it.
-
Note about technical feasibility: XO uses the high-level XAPI to manage pools and VMs and is tightly coupled to that XAPI and all the concepts it defines (VDI, PIF, VIF, SR, etc.). Changing it to manage other virtualization solutions would require a total overhaul: lots of "if XAPI else...", UI headaches when the concepts are too different. It would be a completely different tool in the end. I'm not an XO dev myself, but I can see that what you are suggesting we develop is a huge project, for benefits that are not so obvious.
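To give an idea of how XAPI-specific those concepts are, here is the kind of object model XO is built around, as seen from the xe CLI (the param selections are just examples):

```
xe sr-list  params=uuid,name-label,type          # Storage Repositories
xe vdi-list params=uuid,name-label,virtual-size  # Virtual Disk Images
xe pif-list params=uuid,device,network-uuid      # Physical InterFaces
xe vif-list params=uuid,vm-name-label            # Virtual InterFaces
```

None of these map one-to-one onto the concepts of libvirt or other stacks, hence the overhaul I'm talking about.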
About your use case, I don't understand why you can't store your VMs on local storage. Why do you need them to be on a local NFS share?
-
@olivierlambert said in XO to manage KVM?:
It's the exact same thing for KVM. No virtualization platform is "meant" to be a big filer. Period.
I don't know why, but it seems I'm not able to get my message across. I don't want to run the file server "inside" the virtualisation solution. For my use case, the main difference between KVM and Xen is that with KVM I can run the file server and the hypervisor side by side, each using separate local storage, whilst on Xen I can't do that because I can't install the file server on bare metal.
Thank you @stormi for raising the technical challenges of that implementation and I would say that the main benefit is extending the excellent tool that is XO to the expanding user base of KVM. Just think about all the KVM shops or heterogeneous datacentres that would be able to use XO.
Yes, indeed it's a big strategic change with technical and financial implications, but XO could be an alternative to CloudStack or oVirt.
-
The message isn't clear because I think we are fundamentally seeing virtualization differently
As long as you are using a host for virtualization, there's no "outside". The KVM host also acts as a "driver" domain/host for the VMs, a bit differently but still on the same general principle as Xen.
You can always install NFS inside the dom0, but that would be the same level of weirdness as doing it on the KVM host.
If you want a filer outside your virtualization, use another physical host. And again, passthrough is OK; there are various users here doing that with FreeNAS, for example.
-
Sounds like KVM would be a better solution for you since you want a file server bare metal AND a hypervisor solution wrapped into one.
-
@olivierlambert said in XO to manage KVM?:
The message isn't clear because I think we are fundamentally seeing virtualization differently
I guess so. I understand that when you look purely at virtualisation, Xen and KVM are similar in a sense: you have a host for the VMs and that's it. There is no "outside". The whole purpose of that physical server is hosting VMs.
I'm taking a physical view of a server, where an SME can only afford a single machine. In that scenario, you install software to address different needs: KVM is "just another piece of software" that addresses the need to host a few VMs, whilst NFS/SMB serves files, a webserver runs webapps, etc. Think along the lines of ClearOS or UnRAID.
@Biggen said in XO to manage KVM?:
Sounds like KVM would be a better solution for you since you want a file server bare metal AND a hypervisor solution wrapped into one.
Indeed, that's why I said XCP-ng wouldn't be a good fit, but XO would be a great addition to manage the virtualisation side of this solution.
And the use case is much wider if you think about larger companies and hosting providers running heterogeneous hosts with both Xen and KVM. I'm pretty sure there are plenty of hosting companies running both Xen and KVM in their DCs.
-
I disagree regarding KVM vs XCP-ng in this case. The result, system-wide, is pretty similar: you can create a file server in the "host" with both solutions.
Again, I don't see any problem with using disk controller passthrough for your filer: you won't lose any performance and you'll have good isolation.
The rest of the available disks will be used as local storage for the other VMs.
Using this setup, you have your "one machine" setup.
-
@olivierlambert It's still one machine the way I proposed it.
One *nix machine running an SMB/NFS share as a file server. The same machine also runs a KVM instance for VMs. It's the way I'd done virtualization forever, until I started messing with xcp-ng.
Actually, it's still the way I'd do a massive file server. I don't think a 20TB file server needs virtualization. But that's my opinion.
-
@Biggen I know, and you can do the same with XCP-ng if you like (installing NFS in the dom0).
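To be clear about what that would mean in practice, a minimal sketch on the dom0 (unsupported, the export path is a placeholder, and see below about upgrades):

```
# The dom0 is CentOS-based, so the stock NFS server packages work
yum install -y nfs-utils
echo "/srv/share *(rw,sync,no_root_squash)" >> /etc/exports
systemctl enable nfs-server
systemctl start nfs-server
exportfs -ra
```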
-
@olivierlambert said in XO to manage KVM?:
Again, I don't see any problem with using disk controller passthrough for your filer: you won't lose any performance and you'll have good isolation.
The rest of the available disks will be used as local storage for the other VMs.
Could you please confirm whether it is the controller or the disks that need to be passed through? If it's the controller, it means you need 2 controllers in the server.
@olivierlambert said in XO to manage KVM?:
(installing NFS in the dom0).
Apologies for my ignorance, as I've never tried it. Doesn't it create problems when updating XCP-ng?
@Biggen said in XO to manage KVM?:
Actually, it's still the way I'd do a massive file server. I don't think a 20TB file server needs virtualization. But that's my opinion.
It depends on the use case. Sometimes it's easier to manage the applications isolated in VMs. Even the file-level backup software can run in a VM and access the files via an NFS share.
-
- Yes, if you do a PCI passthrough, you are passing through the whole controller. That shouldn't be a problem if you have a dedicated controller for your SAS disks (e.g. an LSI card) and the rest for local VM storage (e.g. your SATA disks or NVMe drives). Alternatively, you could pass through just the disks, but if you are thinking about a 20TiB filer, having the controller on a PCI card makes more sense.
- It doesn't create problems per se; it just won't be preserved if you do an ISO upgrade, that's all.
- In your specific use case of "one machine fits all", I would buy an LSI PCIe card, connect all the filer disks to it, and pass it through to the filer VM: problem solved (rough commands sketched below).
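Roughly, the passthrough part would look like this (the PCI address and VM UUID are placeholders, and the VM must be halted when attaching the device):

```
# Find the controller's PCI address (BDF)
lspci | grep -i lsi

# Hide it from the dom0 so Xen can hand it to a guest, then reboot the host
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:03:00.0)"
reboot

# Attach the whole controller to the filer VM
xe vm-param-set uuid=<filer-vm-uuid> other-config:pci=0/0000:03:00.0
```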
-
@beagle The simple answer here, which has been implied by others, is...
You DO NOT, EVER, install other "services" on your hypervisor in a production environment - no matter what your cost, convenience, etc. "desires" are.
That's just not how this is all supposed to work; any technical guy worth his position will know right away not only to recommend against setting it up like that, but to adamantly decline being "forced" to do so - consequences be damned.
What is even worse is that you mentioned a 20TB requirement for file server storage. There is literally only one solution for this...
A DEDICATED file server - whether that be a VM with enough attached storage (VDIs using workarounds, or iSCSI/NFS/SMB direct-attached), or a back-end NAS system that either serves the data directly or sits behind a front-end file server "proxy".
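(To illustrate the "direct attached" flavour: the storage is mounted inside the guest itself, so no VDI is involved. Addresses and paths below are placeholders.)

```
# NFS export mounted straight into the VM
mount -t nfs nas.example.lan:/export/files /mnt/files

# iSCSI LUN attached in-guest with open-iscsi, then used as a raw block device
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node --login
```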
You also claim a need for high performance - this automatically rules out any "cheap" solution along the lines of some of the ideas above. Do it right... the first time.
There is a huge difference between "finding a creative solution" and "ignorantly (intentionally or not) pursuing an unrealistic and ill-advised idea".