

  • Really kinda depends on your use case.

    For instance, if I'm building an ESXi cluster, then yeah, HPE all the way. It's quite trivial to figure out the option parts I need to complete the build and scour eBay for them. With those components tested and certified to work together, I shouldn't have to worry too much about weird issues popping up.

    Now, on the other hand, when building a NAS / SAN, I don't want to be locked into buying HPE-branded disks, so I opt for a Supermicro system. It doesn't care what brand of disk you use, so I'm free to buy what makes sense for the type of datastore I'm creating.

    Supermicro is also one of the few vendors that build their server platforms on standard ATX / EATX form factors, so it's pretty easy to get the chassis you like and build out the insides however you want. That also makes upgrading the server internals super easy: just buy later-gen components and transplant them. They're very good about making documentation and compatibility matrices available online.


  • The way to do this with an L3 managed switch is to use inter-VLAN routing and access control lists.

    The first part is simple enough: enable IP routing on the switch, then give your VLAN interfaces an IP address.

    To control which nets can talk to the others, you build ACLs and attach the policy to the VLAN. For instance, you can permit your workstation on the main net to talk to anything on nets 2, 3, and 4, and conversely they can talk back only to your workstation if you wish. Then you can deny anything on nets 2-4 from talking to each other. A rough sketch of that is below.
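
    Roughly what that looks like in Cisco IOS syntax - the VLAN IDs, subnets, and the workstation address (192.168.1.50) are all made-up placeholders, so adapt them to your own layout:

        ip routing

        interface Vlan1
         ip address 192.168.1.1 255.255.255.0
        interface Vlan2
         ip address 192.168.2.1 255.255.255.0
         ip access-group VLAN2-IN in

        ! Filter traffic entering the switch from net 2: replies to the
        ! workstation are allowed, the rest of the main net and nets 3-4
        ! are blocked, and everything else (e.g. internet-bound) passes.
        ip access-list extended VLAN2-IN
         permit ip 192.168.2.0 0.0.0.255 host 192.168.1.50
         deny   ip 192.168.2.0 0.0.0.255 192.168.1.0 0.0.0.255
         deny   ip 192.168.2.0 0.0.0.255 192.168.3.0 0.0.0.255
         deny   ip 192.168.2.0 0.0.0.255 192.168.4.0 0.0.0.255
         permit ip any any

    Nets 3 and 4 get the same treatment with their own ACLs.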



  • At one company I used to work for, we had an MSP on contract to basically back me up and provide 24x7 support. They were a WatchGuard dealer and had WatchGuard firewalls onsite at many of their client properties.

    We had Palo Alto firewalls at my company. During my tenure I got to know a lot of the MSP tier 2 and 3 techs, and we’d talk shop occasionally.

    It seemed like every day they were rebooting a WatchGuard at one of their client properties because it had locked up and become unmanageable, so they were basically taking businesses offline in the middle of the day just to get the firewall back online.

    I don't know whether it's the hardware, firmware, or software at the core of those issues, but armed with that knowledge and experience, I am unabashedly NOT a fan of WatchGuard gear.




  • Well, since you're going rackmount, bite the bullet and grab some enterprise gear for your VMs and containers. I run an HPE DL380 Gen10, and you can get them fairly inexpensively; the biggest cost driver on the grey market is RAM. They're surprisingly efficient for home use and will last forever. I have some G5s that I ran for about 7 years, and even though they sit on a warehouse rack in storage these days, they still run perfectly fine.

    For my NAS and SAN, I run a Supermicro 847 chassis (36 LFF bays) with an X11 motherboard running TrueNAS Scale. This setup lets me create multiple large arrays from one box: an SMB share that stores all my media for Plex, another array that's an iSCSI SAN feeding the VMware stack, and yet another for local backups, with plenty of room for expansion. A rough sketch of the layout is below.
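
    TrueNAS drives all of this through its web UI, but to give a feel for the shape of it, here's roughly what the pools would look like from a shell - the pool names, disk names, and zvol size are all hypothetical:

        # big raidz2 pool of LFF disks backing the SMB/Plex share
        zpool create tank raidz2 sda sdb sdc sdd sde sdf

        # separate pool of mirrored pairs for VM storage
        zpool create vmstore mirror sdg sdh mirror sdi sdj

        # a 4 TB zvol carved out of it, exported to VMware as an iSCSI extent
        zfs create -V 4T vmstore/esxi-lun0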

    Even with cloud backup services, it's good to keep a live local copy of everything plus a local backup, so you'll always find a need for more storage. It's good to have plenty of room to grow from the beginning.

    There are many ways to go about setting up shop. One design consideration is whether you want just enough to run the home, or significant capacity beyond that to truly lab and play with tech. Server platforms will run VMware, Nutanix, or Proxmox far better than a desktop platform will, and are worth the modest increase in power consumption. I prefer the two-box approach, separating compute from storage, because as much as I like the HPE DL platform, for home use I don't wanna be locked into buying HPE-branded disks any time I want to add storage. With the TrueNAS box I can add whatever disk I want and either expose it directly to the network or add it as another LUN to the hypervisor datastore.

    Rack gear is designed to move a lot of air. Ideally it should live in its own closet, away from people as much as possible - not only for the noise, but because people create dust, and servers will suck that dust in and coat everything inside with it. To keep your gear running well, keep it away from people.

    As for network and security, you said you're looking at UniFi - Ubiquiti has a decent ecosystem for the average prosumer. As long as you're not planning to expose services to the internet, you should be fine with that gear. If you want a more robust network security solution, you'd want to look into Firewalla, pfSense, OPNsense, or perhaps SonicWall.


  • Windows desktop OS is optimized for foreground applications, GUIs, etc., whereas Server OS is optimized for background services, multiple user connections, and minimal downtime.

    Neither of them is NAS software. Sure, you can set up an SMB share on desktop (see below), or build a fileserver on Server OS, but since you're looking to replace a Network Attached Storage device, there are better options out there.
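
    (For what it's worth, the desktop SMB share route really is a one-liner in PowerShell - the share name, path, and account here are made up:

        # share an existing folder over SMB (run as admin); names are hypothetical
        New-SmbShare -Name "Media" -Path "D:\Media" -FullAccess "MYPC\alice"

    ...but that gives you a share, not the snapshots, redundancy, and monitoring a real NAS OS provides.)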

    You could get a Supermicro server off eBay for cheap, either 2, 3, or 4U, with a motherboard generation around X10 or X11. If you'd rather have a tower, then something like a Dell T440. Load it up with the drives you want and throw TrueNAS on the OS drives. TrueNAS is free and does a really good job of what it was designed for.


  • A - You need to make sure that the opposite side can be removed or has sufficient venting. Rack equipment moves air front to back (cold aisle to hot aisle). If all you have for venting is the top, then you're going to need some heavy-duty fans to lift the hot air out of the cabinet and exhaust it somewhere.

    B - You can buy rack rails - they're basically angle iron, tapped either 10-32, 12-24, or M6. AV racks typically use tapped rails; server racks more commonly use square holes that accept cage nuts with M6 screws. Assuming you have 72" of internal height clearance, then https://www.server-rack-online.com/7205-es2p.html or https://www.penn-elcom.com/us/42u-rack-strip-with-square-holes-1-16in-thick-r0863-2mm-42 is the kind of rail kit you'd want (the first link is for a pair, the second is per rail). Don't forget the cage nuts and screws: https://www.server-rack-online.com/hdw-105-50.html

    For 19" racks, the distance between hole centers on the rails needs to be 18 & 5/16 inches. If you dont have about 20 inches between the cabinet walls (assuming about another 1.5 inches taken up by the width of the rails) then this cabinet is probably a bit of a lost cause

    All that said, for the $100 you'd spend on rails, there's probably someone out there with a proper server rack trying to get rid of it for that or less.





  • You'll need to keep the pfSense box, as that will remain your default router, as well as your firewall and VPN if you're using them. You would then trunk your VLANs to a managed switch.

    A Cisco WS-C3850-12X48U-L is a 48-port Gigabit UPOE switch where 12 of the ports are multigigabit (100 Mbps/1/2.5/5/10 Gbps Base-T), but you would need to bump your budget to about $600. It has a network module slot that can take 10 Gb SFP+ (or 40 Gb QSFP+) uplinks if you wanted to run fiber.

    If you don't wanna blow the budget on the switch, something like a WS-C3750X-48P would be perfectly usable. It's a 48-port 1G Base-T PoE+ switch with modular, redundant PSUs, it has the option for a 2-port 10G SFP+ network module, and you can usually find switches with the C3KX-NM-10G module installed for $100 or less. Either way, the trunk config sketch below applies.
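
    On either switch, the pfSense-facing trunk is only a few lines of IOS config - the port name and VLAN IDs here are placeholders:

        interface GigabitEthernet1/0/48
         description Uplink to pfSense
         ! the 3750X needs the encapsulation line; the 3850 is dot1q-only
         switchport trunk encapsulation dot1q
         switchport mode trunk
         switchport trunk allowed vlan 1,10,20,30

    pfSense then carries a matching 802.1Q-tagged interface per VLAN on its LAN port.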