Hi,
I’ve been using Proxmox on a mini pc for almost a year and now I would like to test/learn clustering and HA.
I have just received an HP 600 G9 Mini that I would like to use as the main “beefy” node, with ZFS on 2 NVMe drives. I also have an old Intel NUC i5 525U and a RPi 4.
What I would like to try is creating a 2-node cluster (HP Mini + NUC) with the RPi as a QDevice.
What are the best practices in terms of shared storage / HA? Is shared storage a must? I would like to have my critical services in HA (Home Assistant, nginx, Omada Controller, etc.).
Don’t know if it could help, but I also have a Synology mounted as NFS storage.
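For reference, the 2-node cluster + QDevice setup you describe comes down to a few commands. This is a sketch, not a full walkthrough — the cluster name is arbitrary and the `<...>` IPs are placeholders for your hosts:

```shell
# On the HP Mini (first node): create the cluster
pvecm create homelab

# On the NUC (second node): join it
pvecm add <hp-mini-ip>

# On the Raspberry Pi (running Raspberry Pi OS / Debian):
# install the external vote daemon
apt install corosync-qnetd

# On both PVE nodes: install the qdevice client
apt install corosync-qdevice

# From one cluster node: register the Pi as the tie-breaking vote
pvecm qdevice setup <rpi-ip>

# Verify: expected votes should now be 3
pvecm status
```

The Pi never runs VMs; it only supplies the third vote so that a single surviving node can still reach quorum.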
You don’t need shared storage. The simple method is to use ZFS and make sure the ZFS pool on each node has the same name. Then you can set up replication of the VM images and the HA config.
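A minimal sketch of that setup, assuming the matching pool is exposed as storage `local-zfs` on both nodes and the VM to protect is ID 100 (names and IDs are placeholders):

```shell
# Replication only works if both nodes have a storage with the same
# name backed by ZFS, so the replicated dataset has a matching target.

# Create a replication job for VM 100 to the second node, every 15 minutes
pvesr create-local-job 100-0 <other-node> --schedule '*/15'

# Check job state and last sync time
pvesr status

# Put the VM under HA management so it restarts on the surviving node
ha-manager add vm:100
```

The `*/15` schedule is the trade-off knob: shorter intervals mean less data lost on failover, at the cost of more frequent ZFS send/receive traffic.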
One drawback of using replication instead of shared storage is that any data written since the last replication job will be lost during a failover. Not ideal for mission-critical services or anything that relies on a database. Otherwise it’s a good solution.
For shared storage, you could try StarWind VSAN. The system requirements are pretty low and you would be able to use the internal storage as shared storage, so no need for an additional box. It also has a guide with the steps, so it shouldn’t be an issue: https://www.starwindsoftware.com/resource-library/starwind-virtual-san-vsan-free-configuration-guide-for-proxmox-virtual-environment-ve-kvm-vsan-deployed-as-a-controller-virtual-machine-cvm/
For clustering, 2 nodes are fine, and so is no shared storage. However, if HA clustering is your goal, then shared storage and at least 3 nodes are pretty much the only way to do it properly. Sure, you could do it without a 3rd node and without shared storage, but it wouldn’t be the proper way.

3 servers are needed because of how quorum works. Without that 3rd vote, when the two nodes lose contact, each considers the other one down, votes for itself, and acts accordingly. Ultimately you risk an outage, and if replication is used you even risk data corruption. That’s not a big deal in a home lab, but it kinda defeats the purpose of HA.

As for shared storage, you don’t need it, but it’s definitely much better than replication because 1. replication is taxing on the network, and 2. you don’t have to worry about how often to replicate the data, so your data is never “behind”. Personally I set my HA clusters up with Ceph as the shared storage, but nothing is stopping you from using ZFS replication; it’s really up to you which storage method you want to use.
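The quorum rule behind all of this (a partition needs a strict majority of the total votes) can be illustrated with a tiny shell function. This is my own sketch for illustration, not a Proxmox tool:

```shell
# A partition is quorate only when its votes are a strict majority
# of the cluster's total expected votes.
has_quorum() {  # usage: has_quorum <partition_votes> <total_votes>
    if [ $(( $1 * 2 )) -gt "$2" ]; then
        echo "quorate"
    else
        echo "no quorum"
    fi
}

has_quorum 1 2   # 2-node split-brain: each side holds 1 of 2 votes -> no quorum
has_quorum 2 3   # same split plus a QDevice vote: 2 of 3 -> quorate
```

This is exactly why a 2-node split-brain deadlocks (neither side has more than half) and why the RPi’s single extra vote fixes it.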