Seeking community insight/review of my first Homelab design (includes some open technical questions)
-
Looks like USB passthrough is possible.
-
Seems like if I got an NVMe M.2 drive and this PCIe 3.0 x1 adapter, then I could have a more reliable backup with the following configuration (a rough mdadm sketch for the first mirror follows this list):
- [PCIe x1] use a single M.2 NVMe drive for:
  - Installing/hosting XCP-ng
  - Storing the associated VM ISOs
  - Storing virtual drives (VHDs) for all VMs
- [SATA-1/2] use the two 2TB SSD drives as a RAID-1 pool for the Jellyfin media server
- [SATA-3/4] use the two 2TB SSD drives as a 2nd RAID-1 pool for the TrueNAS datasets
- [SATA-5] use one Hitachi 3TB disk drive as a backup for the RAID-1 pool composed of [SATA-1/2]
- [SATA-6] use one Hitachi 3TB disk drive as a backup for the 2nd RAID-1 pool composed of [SATA-3/4]
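To make the plan concrete, here's a minimal sketch of how I think the first mirror could be created with mdadm. The device names are pure guesses (I'd check `lsblk` first), and if TrueNAS ends up owning the second pair of SSDs, it would build its own ZFS mirror instead:

```python
# Minimal sketch: create the Jellyfin RAID-1 mirror with mdadm.
# /dev/sdb and /dev/sdc are assumed names for the SATA-1/2 SSDs --
# verify with `lsblk` before running anything destructive like this.
import subprocess

def create_mirror(md_device: str, members: list[str]) -> None:
    """Build a two-disk RAID-1 array (wipes the member disks!)."""
    subprocess.run(
        ["mdadm", "--create", md_device,
         "--level=1", "--raid-devices=2", *members],
        check=True,
    )

create_mirror("/dev/md0", ["/dev/sdb", "/dev/sdc"])
```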
Updated purchase list includes:
- Three additional SanDisk SSD Plus 2TB (2.5-inch) SATA drives
- PNY GeForce GT 1030 GPU
- Single KVM switch that supports one console for two machines
- PCIe 3.0 x1 to NVMe adapter
- M.2 NVMe drive (at least 500GB)
-
Hi and welcome @joehays !
Don't worry, I moved the topic to what I felt was the right category. It's not obvious which category fits, so I might improve them a bit.
Anyway, what you are seeking is doable. In my own home lab, I'm using a "non-production" setup, i.e. the NAS in a VM. It creates some circular dependencies sometimes, but it's pretty handy if you want to keep to a minimal number of nodes while running many services.
Everything is a balance between performance, convenience, and reliability ("pick two"). There's no perfect solution, and you did right to ask for comments; the community around here is really cool, and I'm sure people will engage and bring some feedback.
Regarding some of your questions, answering off the top of my head:
- PCI passthrough is pretty "standard"; it works for most common devices (there are some exceptions for devices with a broken design that don't follow the PCIe spec)
- a USB controller is a PCI device, so you can pass it to your VM; any USB device plugged into that controller's ports on the host will then be visible only in the VM (quick sketch after this list)
- I would plan very, very soon for a way to store your backups somewhere other than the physical machine. I know it might be a PITA, but even a Raspberry Pi with an HDD can do the trick. This can be a life saver, because a backup isn't a backup until it's stored outside your "main" machine
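For the USB controller case, something along these lines would work from dom0. This is only a sketch: the VM UUID is a placeholder, and some devices also need to be hidden from dom0 first, so double-check against the XCP-ng docs:

```python
# Sketch (run on dom0): list USB controllers, then tag a VM so Xen
# passes the chosen controller through at the VM's next boot.
# The VM UUID and PCI address are placeholders.
import subprocess

def usb_controllers() -> list[str]:
    """PCI addresses (e.g. '00:14.0') of all USB controllers."""
    out = subprocess.run(["lspci"], capture_output=True, text=True, check=True)
    return [line.split()[0] for line in out.stdout.splitlines()
            if "USB controller" in line]

def passthrough(vm_uuid: str, pci_addr: str) -> None:
    """Attach the device to the VM via its other-config:pci key."""
    subprocess.run(
        ["xe", "vm-param-set", f"other-config:pci=0/0000:{pci_addr}",
         f"uuid={vm_uuid}"],
        check=True,
    )

print(usb_controllers())  # pick an address, then call passthrough()
```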
-
@olivierlambert to add to that:
A backup isn't a backup until you've tested the restore successfully.
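Even something as simple as a checksum walk over the original and the restored copy tells you a lot. A minimal sketch, where both paths are just examples:

```python
# Minimal restore test: compare SHA-256 digests of the source tree
# and the restored copy. Both paths are examples.
import hashlib
import pathlib

def tree_digest(root: str) -> dict[str, str]:
    """Map each file's relative path to its SHA-256 digest."""
    base = pathlib.Path(root)
    return {
        str(p.relative_to(base)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(base.rglob("*")) if p.is_file()
    }

assert tree_digest("/srv/original") == tree_digest("/mnt/restored"), \
    "restore does not match the original!"
```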
-
@olivierlambert
Excellent. Thank you!
-
@borzel
I'll definitely test it out. Thanks.
-
@olivierlambert @joehays Easiest backup to a remote location IMO is to NFS-mount a drive from some other system and back up to it. That way, it can always be exported to anywhere needed, even if your server(s) are destroyed and you have to start over from scratch.
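In practice that can be as simple as mounting the export and copying the archive over. A rough sketch, where the server name, export, and file paths are all examples:

```python
# Rough sketch: mount a remote NFS export, copy a backup archive to it,
# then unmount. Server, export, and paths are examples only.
import shutil
import subprocess

REMOTE = "backupbox:/export/backups"   # hypothetical NFS server + export
MOUNTPOINT = "/mnt/remote-backups"

subprocess.run(["mount", "-t", "nfs", REMOTE, MOUNTPOINT], check=True)
try:
    shutil.copy2("/var/backups/vm-dump.xva", MOUNTPOINT)
finally:
    subprocess.run(["umount", MOUNTPOINT], check=True)
```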
-
Big post and lots of good questions, love to see all the detail.
I'll chime in with a few thoughts I have and some experiences I've had, but in short everything you want to do is possible. I'll try to answer some of your open questions first then give some of my thoughts about the original goals that IMO maybe should be adjusted a bit.
-
You can pass through any PCIe device, so there aren't any issues with that; any PCIe 3.0 x4 GPU will be able to pass through and work on Windows or in any other OS you want to run in a VM.
-
Motherboard ports can be passed through as well, since they are just PCIe devices, but I'd maybe recommend getting a card anyway, as it makes things a bit easier and makes identifying which ports belong to the VM easier. That's just me, though.
-
As for your backups, I would recommend making sure they are on a different system, and ideally somewhere offsite for anything critical. Backblaze B2 is a good provider for that sort of thing and is reasonably priced; Xen Orchestra can back things up to Backblaze automatically too.
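Xen Orchestra handles this from its UI, but if you ever want to script an upload yourself, the official b2sdk Python package looks roughly like this. The key, bucket, and file names are all placeholders:

```python
# Sketch: push a backup archive to Backblaze B2 using the b2sdk package.
# Application key id/key, bucket name, and file names are placeholders.
from b2sdk.v2 import B2Api, InMemoryAccountInfo

api = B2Api(InMemoryAccountInfo())
api.authorize_account("production", "<application-key-id>", "<application-key>")
bucket = api.get_bucket_by_name("homelab-backups")
bucket.upload_local_file(local_file="/var/backups/vm-dump.xva",
                         file_name="xcpng/vm-dump.xva")
```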
-
The last question here about Jellyfin is VERY open ended, and we'd need some more info to know for sure: what movies? Resolution? 4K? HDR? Transcoded to what resolution? Still HDR after the transcode? etc.
Now for some thoughts of my own about your original goals, either things to keep in mind or recommended changes:
-
While it's great to use older hardware, keep in mind that a 7700K isn't a great CPU for this sort of thing when you want a lot of VMs; it's only 4 cores, which is a bit lacking, so don't expect the best overall performance.
-
It's great to virtualize TrueNAS to learn about it, but I would not recommend this in a real-world use case if you can at all avoid it. IMO NAS stuff should always be bare metal; this is something I've stuck to pretty firmly over the years, and it has served me super well plenty of times. I could list a million reasons for it, but really I just wanted to point it out. Again, for learning it's great, but just make sure you have REALLY good backups of that data. I virtualize literally 100% of everything else in my stack but my NAS.
-
Jellyfin is a fantastic route to go, but I would recommend running it as its own VM instead of on top of an already virtualized TrueNAS. It's easier to manage and maintain IMO, and if you are already virtualizing, it kinda begs the question: why not make it its own VM? That gives you more options in the future, and if you want to move to a bare metal NAS or something, you can easily do so without Jellyfin having to be moved.
-
Like I said above, if you really care about reliability of the NAS, go bare metal with it (and IMO use TrueNAS Core, not Scale; Scale is VERY good, but Core has been more stable in my experience and is still what I use in full production).
-
As I mentioned way above, we need to know more about what you're actually planning on streaming. I have worked with 4K streaming before, but you do NOT want to transcode that unless you can get ahold of like a 4090 or something along those lines (sometimes you can get decent 4K transcoded playback from something like a 2060, I've done this, but not with HDR and higher-bitrate content). I'd say stick with 1080p and, ideally, avoid transcoding altogether when possible; direct play is always going to be faster.
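If you want to sanity-check whether a given file is likely to direct play, ffprobe will tell you its overall bitrate. A quick sketch; the 20 Mbps client limit is just an example number:

```python
# Sketch: read a file's overall bitrate with ffprobe and compare it
# against an example client limit. The 20 Mbps threshold is arbitrary.
import subprocess

def bitrate_mbps(path: str) -> float:
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=bit_rate",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    value = out.stdout.strip()
    # Some containers report "N/A" instead of a number.
    return int(value) / 1_000_000 if value.isdigit() else 0.0

if bitrate_mbps("movie.mkv") > 20:
    print("likely needs a transcode for remote clients")
else:
    print("direct play should be fine")
```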
-
@tjkreidl and @planedrop
Awesome feedback! Thank you both. I'll definitely look into these suggestions.
-
@joehays Also, when it comes to streaming video, the speed of the SR and hypervisor is key. You won't want lag that results in dropped frames, as this will impact video quality. That matters especially if a VM hosted by your home lab is going to be part of your home entertainment system.
I have some proper servers that were professionally refurbished and are former production-grade hardware. They are very powerful, with dual-socket boards that can support two 12-core CPUs.
Combine this with a dedicated dual 10G network connection, similar to the basis of CHSCC's lab network setup. That way the VMs will get a fast, stable connection, which will be especially important for video streaming if you don't want too much buffering and/or dropped frames.
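Back-of-the-envelope math shows why the link size matters; the per-stream bitrates below are rough, typical values, not measurements:

```python
# Back-of-the-envelope: how many concurrent streams fit on a 10G link.
# Per-stream bitrates are rough assumptions, not measurements.
LINK_GBPS = 10
BITRATES_MBPS = {"1080p": 8, "4K HDR": 50}

usable_mbps = LINK_GBPS * 1000 * 0.8   # keep ~20% headroom for other traffic
for name, rate in BITRATES_MBPS.items():
    print(f"{name}: ~{int(usable_mbps // rate)} concurrent streams")
```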
-
These are examples of PCIe add-on cards that could work for software RAID of the boot drive. Booting from M.2 SATA is much more likely to work with old hardware than NVMe, and the XCP-ng installer may automatically make a software RAID out of the pair. Most other dual M.2 cards are funky in that one slot is M.2 NVMe and the other M.2 SATA. (A small health-check sketch for the resulting array follows the list.)
- $62.00 dual M.2 SATA only with no need for SATA cables. StarTech.com 2x M.2 SATA SSD Controller Card - PCIe - PCI Express M.2 SATA III Controller - NGFF Card Adapter (PEX2M2), Red
- $43.00 four port NVMe only PCIe card. GLOTRENDS PA41 Quad M.2 NVMe to PCIe 4.0 X16 Adapter Without PCIe Bifurcation Function, Support 22110/2280/2260/2242/2230 Size (PCIe Bifurcation Motherboard is Required)
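Once the installer has built the mirror, it's worth watching its health. A tiny sketch that scans /proc/mdstat, intended as a cron job on the host:

```python
# Tiny sketch: warn if any md array is degraded by scanning /proc/mdstat.
# Healthy arrays show e.g. "[2/2] [UU]"; a degraded one shows "[2/1] [U_]".
import re

with open("/proc/mdstat") as f:
    mdstat = f.read()

for match in re.finditer(r"\[(\d+)/(\d+)\]", mdstat):
    total, active = int(match.group(1)), int(match.group(2))
    if active < total:
        print(f"WARNING: array degraded ({active}/{total} members active)")
```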