Home labs – a poor man’s Fusion-IO?


While upgrading my home lab recently I found myself reconsidering the scale-up vs scale-out argument. There are countless articles about building your own home lab and whitebox hardware, but is there a good alternative to the accepted ‘two whiteboxes and a NAS’ scenario that’s so common for entry-level labs? I’m studying for the VCAP5-DCD, so while the ‘up vs out’ discussion is a well-trodden path there’s value (for me at least) in covering it again.

There are two main issues with many lab (and production) environments, mine included:

  1. Memory is a bottleneck, and doubly so in labs using low-end hardware – the vCenter appliance defaults to 8GB of RAM, as does vShield Manager, so anyone wanting to play with vCloud (for example) needs a lot of RAM.
  2. Affordable yet performant shared storage is also a challenge – I’ve used both consumer NAS units (from 2 to 5 bays) and ZFS-based appliances but I’m still searching for more performance (see the esxtop sketch below if you want to measure where your own lab is hurting).
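
Before spending money it’s worth confirming which of the two bottlenecks you’re actually hitting. A minimal sketch using esxtop on the host – the views and counters below are from ESXi 5.x and worth double-checking against your own build:

    esxtop        # interactive performance monitor, run from the ESXi shell (or resxtop remotely)
      m           # memory view: watch the 'state' field and MCTLSZ (a non-zero balloon size means RAM pressure)
      d           # disk adapter view: sustained DAVG/cmd in the tens of milliseconds points at the storage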

In an enterprise environment there are a variety of solutions to these challenges – memory density is increasing (up to 512GB per blade in the latest UCS servers, for example) and on the storage front SSDs and flash memory have spurred a wave of innovation. In particular Fusion-IO have had great success with their flash memory devices, which reduce the burden on shared storage while dramatically increasing performance. I was after something similar but without the budget.

When I built my newest home lab server, the vHydra, I used a dual socket motherboard to maximise the possible RAM (up to 256GB) and used local SSDs to supplement my shared storage. This has allowed me to address both issues above – I have a single server which can host a larger number of VMs with minimal reliance on my shared storage. The concepts are the same as those solutions like Fusion-IO deliver in production environments, but mine isn’t particularly scalable. In fact it doesn’t really scale at all – I’ll have to revert to centralised storage if I buy more servers. Nor does it have any resilience – the ESXi server itself isn’t clustered and the storage is a single point of failure as there’s no RAID. It is cheap however, and for lab testing I can live with those compromises. None of this is vaguely new of course – Simon Gallagher’s vTardis has been using these same concepts to provide excellent lab solutions for years. Is this really a poor man’s Fusion-IO? There’s nothing like the performance and nothing like the budget, but the objectives are the same. To be honest it’s probably a slightly trolling blog title – I won’t do it again. Promise! πŸ™‚
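
One practical wrinkle with local SSDs: ESXi 5.x doesn’t always detect them as SSDs, which matters if you want features like swap to host cache. A hedged sketch of the documented claim-rule workaround – naa.xxx is a placeholder for your device ID, and the syntax is worth checking against your ESXi build:

    esxcli storage core device list                        # find the SSD's device ID (naa.xxx)
    esxcli storage nmp satp rule add -s VMW_SATP_LOCAL --device naa.xxx --option=enable_ssd
    esxcli storage core claiming reclaim -d naa.xxx        # re-run the claim rules for the device
    esxcli storage core device list -d naa.xxx | grep SSD  # should report 'Is SSD: true' once it has worked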

If you’re thinking of building a home lab from scratch, consider buying a single large server with local SSD storage instead of multiple smaller servers with shared storage. You can always scale out later, or wait for Ceph or HDFS to eliminate the need for centralised storage at all…

Tip: It’s worth bearing in mind the 32GB limit on the free version of ESXi – unless you’re a vExpert or they reinstate the VMTN subscription you’ll be stuck with 60-day eval editions if you go above 32GB (or with buying a licence!).
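
If you’re not sure whether a host is over that limit, ESXi can tell you directly (the output line below is illustrative, from a 5.x host):

    esxcli hardware memory get
       Physical Memory: 34346303488 Bytes    # ~32GB – right on the free-licence boundary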

Further Reading

Is performant a word? πŸ™‚

7 thoughts on “Home labs – a poor man’s Fusion-IO?”

  1. Hello Ed,

    I just popped in to say that your posts are always damn interesting to read and they keep me discovering new stuff, so thanks and keep up the good work!

    Also your vHydra configuration is impressive! No more RAM bottlenecks for you I suppose!

    Carlo

  2. Great post, thanks Ed.

    I would love to have the room, money and resources for a large home lab, but I make do with a single computer (i7 CPU, 16GB RAM, 2TB SATA, 3 x 128GB SSD), then use VMware Workstation to create nested ESXi hosts etc. FreeNAS works a treat for the shared storage.

    1. Sounds like a very functional setup too – you don’t need 64GB to get a useful lab! Plenty of people have achieved great things with a lab like yours.

  3. Interesting stuff here Ed. I’m actually still deciding on a hypervisor at the moment for a small production setup. I do just love the ease and slickness of VMware though.

    I’ve said for some time that it would be a fun thing to watch a datacenter fire spreading and see VMware configured with HA, FT etc springing into action πŸ™‚ Not that we would wish that on anyone’s DC.

    It’s interesting for me that you’re going up rather than out. Only today I have bought an i3-2100T with 16GB RAM and a mini-ITX board with dual LAN onboard. I’m figuring I can get four of these guys in a 1U enclosure and get some fair compute power for a fairly low cost energy-wise. The i3-2100T is billed at 35W TDP. A single unit of this has only come in at about Β£230. All these will be piped to shared storage of course.

    1. Thanks Darren. I guess it depends on what you need – your solution looks pretty interesting (and cost effective), although I note that the i3-2100T doesn’t support VT-d, so no VMDirectPath. That isn’t a heavily used feature so it won’t be a problem for most people, but I’m hoping to use it with Nexenta/ZFS so your choice wouldn’t have been ideal for me. I’d be interested to know how you fit four in a 1U enclosure?
