Share your HomeLabs
-
Here is mine
HP DL360e Gen 8 for TrueNAS: 96GB of RAM, more processor than it needs, and 8x 500GB drives, serving an NFS share for the VMs.
3x HP DL360p Gen 8 for XCP-ng with 128GB of RAM and 20 cores / 40 threads worth of processors.
10Gbps networking to a MikroTik switch, plus an old Cisco 2960S for a second gigabit network (with PoE+).
There is also a cheap GPS/GNSS NTP clock at the top of the rack, just too far away to fit in the shot with these other items.
I like the spinning lights on the drives, just wish it was a little faster.
-
@Greg_E said in Share your HomeLabs:
I like the spinning lights on the drives, just wish it was a little faster.
haha
Nice homelab! What about the noise level?
-
@Greg_E Nice one, only issue is energy costs?
-
Noise for these is not too bad at normal lab use levels, but if a fan goes bad... you know it. They are double-stack fans, and if even one fan in a stack reports bad, all fans come up to full speed. Then it is really loud. Boot is the other loud spot, when the BMC goes through and checks everything.
The room where they are located also has the rest of my servers; the production Supermicros are at least twice as loud, which is part of the problem with 1U servers.
All that said, I have never stressed this system enough to know when the fans kick up. I'm mostly working on Windows and Active Directory things, and with my typical lab setup I'm only ever hitting about 10% on each VM, which is about 1-2% on the host.
I would guess that the Gen 9 or Gen 10 versions of these might be quieter, unless you get the high-TDP processors in there. All of the processors in mine are the L versions, so lower TDP and clock rate.
-
@Greg_E said in Share your HomeLabs:
There is also a cheap GPS/GNSS NTP clock at the top of the rack, just too far to show these other items.
What model and how do you have it set up? I've been tempted to add one to my setup but can't justify the cost.
Noise levels are the reason I've avoided rack mounted gear. Pizza boxes are just too loud.
-
It is the TF-NTP-LITE with the 10-meter indoor antenna, bought from eBay. They also have a cheaper model without the LCD display and maybe only a single Ethernet connection. So far, so good; it replaced the older devices in my production system that were often causing connection errors. I think my older gear was all NTP v1 and the servers were looking for something newer.
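For anyone wanting to point their hosts at a box like this, it's basically one line in chrony; the address below is just a placeholder for wherever the NTP appliance sits on your network:

```
# /etc/chrony/chrony.conf (client side) -- sketch only;
# 192.168.1.250 stands in for the appliance's real address
server 192.168.1.250 iburst prefer
```

After restarting chronyd, `chronyc sources -v` should show the appliance being selected as the time source.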
If I do it again, I'll probably look deeper into a PTP server running on a Pi 5, since they added PTP support in that model. You should be able to build a decent PTP server for around $200 USD (or less). I think Jeff Geerling is working on this right now to see how it compares to his Pi 4 with a PTP card attached to the top of a CM4 (or something like that).
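If anyone wants to try the Pi idea, linuxptp's ptp4l is the usual tool; a minimal, untested config sketch (assuming the NIC is eth0 and supports hardware timestamping) might look like:

```
# /etc/linuxptp/ptp4l.conf -- sketch only, not a tested Pi 5 config
[global]
priority1     127   # lower than the default 128 so clients prefer this clock
domainNumber  0
[eth0]
```

Run it with `sudo ptp4l -f /etc/linuxptp/ptp4l.conf -m` and watch the log output to confirm it takes the grandmaster role on your segment.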
I should add, I checked through iLO and each server is drawing 80-100 watts at effectively idle. The storage box was a solid 100 watts with the drives spinning and no VMs attached. That's a lot of power for just idling; I don't even want to look at my production servers, I'd bet they're an extra 50 watts each.
-
@Greg_E Fans and hard drives eat up more power than most people expect.
-
Bringing this back up, as I've changed things due to space limitations that are coming:
Top shelf is three VMware hosts and a TrueNAS SCALE storage box.
Bottom shelf is three XCP-ng hosts and two extras. I think one of the extras is going to be another XCP-ng host that I keep as a separate pool with internal storage, used for backups and backup verification. The other will probably end up as a general-purpose Linux desktop and probably run XO from sources as a backup (gigabit).
VM hosts are HP t740s (AMD V1756B processor) with 64GB of RAM and a Supermicro dual 10Gbps card (X520-based); one extra also has 10Gbps but only 16GB of RAM. All of these have an Intel i226-V card installed in the spare M.2 A+E slot, and all have the built-in Realtek gigabit NIC, which works well enough for what I need (with XCP-ng, not VMware).
Switching is now a MikroTik CRS326-24S+2Q+RM, with most of the hosts broken out from the QSFP+ ports (8 connections) and a few DAC cables.
-
@Greg_E Like this A LOT more than the previous one
-
It is not cheaper, but it draws about half the power at idle. When doing work the power still gets up there; they are 60-90 watt computers, which is still less than the 300-watt-capable HP servers at full tilt.
If you really stress your lab a lot, real servers are a win, as they draw less power for more performance; but if yours is like mine and idles a lot, cutting the power is nice. I'm at about 200 watts idle with the above, and that's two labs' worth. I was at 400 with the single old lab system.
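Quick back-of-the-envelope on what that halving is worth per year; the $0.15/kWh rate below is just an assumed example, swap in your own:

```python
# Rough annual energy cost of an always-on lab at a constant idle draw.
# The electricity rate is an assumed example, not a quoted figure.

def annual_cost(watts: float, rate_per_kwh: float = 0.15) -> float:
    """Annual cost in dollars for a constant draw of `watts`."""
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * rate_per_kwh

old_lab = annual_cost(400)   # the old rack servers at ~400 W idle
new_lab = annual_cost(200)   # the t740 setup at ~200 W idle

print(f"old: ${old_lab:.2f}/yr, new: ${new_lab:.2f}/yr, "
      f"saved: ${old_lab - new_lab:.2f}/yr")
# old: $525.60/yr, new: $262.80/yr, saved: $262.80/yr
```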
Mounts are cheap bookends, drilled and screwed to that specific shelf, and drilled for short M4 thumbscrews that go into the VESA mount on the t740.
If anyone reading this wants to duplicate it and sees the HP t755 cheap enough, I recommend that one, because it has 6 cores / 12 threads versus the t740's 4 cores / 8 threads. Both are DDR4 SODIMM and top out at 64GB. Both have a PCIe 3.0 x8 (x16-length) slot that takes half-height cards. And if the BIOS is locked, you can overwrite it with a programmer and enter the data needed to function (see the badcaps forum for my guide).