Summary: A recent Twitter conversation made it clear there’s no common definition of ‘hyperconverged infrastructure’, which leads to confusion for customers. Technical marketing and analysts can assist, but understanding your own requirements, risks, and costs is always essential.
Hyperconverged infrastructure has been around for a few years (I first came across it at Gestalt IT’s SFD#2 with Nutanix back in 2012) and long enough for Gartner (here) and IDC (here) to create ‘magic quadrants’. Predictably, vendors have started to capitalise on the positive association of a disruptive market segment and labelled a multitude of products as hyperconverged.
What is ‘hyperconverged’ (and what isn’t)?
I inadvertently got involved in this debate on Twitter a while ago when asking how Maxta verified/certified the hardware used by their MxSP software (the answer is a combination of HCL and optional vendor qualification). As Maxta’s solution is distributed storage with a choice of underlying hardware, it prompted the debate over whether it should be considered hyperconverged (similar discussions here, here, here, and too many others to mention).
Seriously, who cares?
@DeepStorageNet @Bacon_Is_King @otherscottlowe @stu @MaxtaInc How we define 'hyper converged' is irrelevant. Effort/cost/risk matter.
— Ed Grigson (@egrigson) January 21, 2015
The technical part of my brain enjoys this type of discussion (and there were some interesting discussion points – see below), but customers are mainly interested in the cost, complexity, and level of risk of any solution, and those factors get fewer column inches. Steve Chambers nails this perfectly in his recent post ‘Copernicus and the Nutanix hypervisor’. I also really like Vaughn Stewart’s statement in his blog for Pure Storage:
Often we geeks will propound and pontificate on technical topics based on what I call ‘the vacuum of hypothetical merits’. Understanding and considering the potentials and shortcomings satisfy our intellectual curiosity and fuel our passion – however often these conversations are absent of business objectives and practical considerations (like features that are shipping versus those that are in development).
While I was writing this post (I usually take several weeks to gestate on my ideas and to find the time) Scott Lowe posted his thoughts on the matter, which largely match my own – if the choice of terminology helps people understand, evaluate, and compare solutions then it’s useful, but pick the solution which fits your requirements rather than one which fits some marketing definition.
Do we need a definition?
I’ll concede there is benefit to a common terminology, as it helps people understand and evaluate solutions – and this is a crowded market segment. In his article Scott offers what he considers a base definition of hyperconverged, and he’s worked extensively with many of the available solutions. Unfortunately I can’t help but see this as another ‘there are too many standards – we need another one to unify them’ type of argument (perfectly summed up by this xkcd)!
Final thoughts
Like it or not, the onus is on you to understand enough to make the right decision for you (or your business). Don’t expect anyone to do it for you. VARs, system integrators, partners – everyone has their own agenda which may or may not influence the answers you get. Maybe even including yours truly (as a member of a vendor club) despite my best intentions…
…and for the analysts and techies…
If EVO:RAIL is just the usual vSphere components plus h/w bundled by OEMs, is it really hyperconverged? Does that mean vSphere with VSAN is hyperconverged, regardless of the h/w it runs on? Enquiring minds must know! 🙂
Further Reading
SimpliVity’s take on what’s hyperconverged
Gabriel Chapman’s posts on Hyperconvergence (good read)
What differentiates converged and hyperconverged infrastructure? (Tom’s IT Pro)