I'm putting together a requirement for an H/A ZFS build (may go Solaris 11 with Cluster 4) and would like feedback on my current order list if anyone could spare the time. The solution will be migrated to as the core production SAN once we've sufficiently tested it under load.
So far the kit list is as follows:
- 2 x heads utilising HP DL380 G7 servers (dual Xeon 6-core CPU, 144GB RAM, 2x local HDD for O/S)
- 8 x Intel dual-port 1GbE NICs (4 in each head)
- 4 x HBAs, LSI 9205-8e SAS2 (2 in each head) - each controller only has 1 connection to a disk enclosure SAS2 port
- 2 x disk enclosures (HP D2600, 12x 3.5in bays, dual domain SAS2)
- 2 x ZILs (STEC ZeusRAM 8GB SAS2 DP) - 1 in each disk enclosure
- 1 x L2ARC (STEC ZeusIOPS XE 300GB SAS2 DP) - only present in 1 disk enclosure
- 1 x Hot spare HDD (HP 2TB SAS 7.2K DP HDD - Seagate) - only present in 1 disk enclosure
- 18 x Data HDD (HP 2TB SAS 7.2K DP HDD - Seagate) - 9 in each enclosure
The ZFS setup is envisaged as a pool comprising 9 mirrored vdevs, with each mirror member in a separate disk enclosure for resilience, presenting approx 16TB via iSCSI. A mirrored ZIL pair will be split across both disk enclosures, whilst one enclosure will host the L2ARC drive and the other enclosure will host a dedicated hot spare assigned to the pool. The only aspect which may change is dropping the 300GB L2ARC for additional RAM in the heads (from 144GB to 288GB), due to equivalent cost and better performance.
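For what it's worth, the layout described above would look something like this at the command line. This is only a sketch: the pool name `tank`, the `c1t*`/`c2t*` device names (standing in for enclosure A and enclosure B), and the 2TB zvol size are all placeholders; real device IDs would come from `format` on the actual heads.

```shell
# Hypothetical device names: c1t* = enclosure A, c2t* = enclosure B.
# 9 mirrored vdevs, each mirror split across the two enclosures.
zpool create tank \
  mirror c1t0d0 c2t0d0 \
  mirror c1t1d0 c2t1d0 \
  mirror c1t2d0 c2t2d0 \
  mirror c1t3d0 c2t3d0 \
  mirror c1t4d0 c2t4d0 \
  mirror c1t5d0 c2t5d0 \
  mirror c1t6d0 c2t6d0 \
  mirror c1t7d0 c2t7d0 \
  mirror c1t8d0 c2t8d0

# Mirrored log (the two ZeusRAMs, one per enclosure);
# single L2ARC device in one enclosure, hot spare in the other.
zpool add tank log mirror c1t9d0 c2t9d0
zpool add tank cache c1t10d0
zpool add tank spare c2t10d0

# Example zvol to be exported over iSCSI:
zfs create -V 2T tank/lun0
```

With 9 x 2TB mirrors the raw pool is ~18TB, so ~16TB usable presented via iSCSI lines up with the figure above.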
Any comments or advice appreciated. Thanks,
RE: Hardware Advice - Added by FREDY about 1 year ago
Why are you using so many 1GbE NICs? Couldn't you use 4-port NICs and save some slots? Even then you'd have 12 NIC ports per node counting the onboard NICs, which seems a bit much to me. Are you expecting to use all that bandwidth, or are they intended for different purposes?
For the HBAs, would you not consider using a SAS switch (LSI has one that is certified by Nexenta)? That would simplify things, and you wouldn't need to add more HBAs when you need more JBODs. Or just cascade the JBODs; it's not a big issue as long as you don't saturate the 6Gb/s on one path, and you can have 2 paths.
For the L2ARC, I'm not sure about the price of a STEC drive for this purpose, but there are some newly certified OCZ Talos drives which seem to have good pricing and are SAS, so no need for interposers.
For the HA setup, I suggest that first, if you haven't done so yet, you install the trial version and test it, especially the failover times, etc.