Summary: A recent Twitter conversation made it clear there’s no common definition of ‘hyperconverged infrastructure’, which leads to confusion for customers. Technical marketing and analysts can assist, but understanding your requirements, risks, and costs yourself is always essential.
Hyperconverged infrastructure has been around for a few years (I first came across it at Gestalt IT’s SFD#2 with Nutanix back in 2012) and long enough for Gartner (here) and IDC (here) to create ‘magic quadrants’. Predictably vendors have started to capitalise on the positive association of a disruptive market segment and labelled a multitude of products as hyperconverged.
What is ‘hyperconverged’ (and what isn’t)?
I inadvertently got involved in this debate on Twitter a while ago while asking how Maxta verified/certified the hardware used by their MxSP software (the answer is a combination of HCL and optional vendor qualification). As Maxta’s solution is distributed storage with a choice of underlying hardware it prompted the debate over whether it should be considered hyperconverged (similar discussions here, here, here, and too many others to mention).
The technical part of my brain enjoys these types of discussions (and there were some interesting discussion points – see below) but customers are mainly interested in the cost, the complexity, and the level of risk of any solution, and these get fewer column inches. Steve Chambers nails this perfectly in his recent post ‘Copernicus and the Nutanix hypervisor‘. I also really like Vaughn Stewart’s statement in his blog for Pure Storage;
Often we geeks will propound and pontificate on technical topics based on what I call ‘the vacuum of hypothetical merits’. Understanding and considering the potentials and shortcomings satisfy our intellectual curiosity and fuel our passion – however often these conversations are absent of business objectives and practical considerations (like features that are shipping versus those that are in development).
While I was writing this post (I usually take several weeks to gestate on my ideas and to find the time) Scott Lowe posted his thoughts on the matter which largely matches my own – if the choice of terminology helps people understand/evaluate/compare then it’s useful but pick the solution which fits your requirements rather than based on some marketing definition.
Do we need a definition?
I’ll concede there is benefit to a common terminology, as it helps people understand and evaluate solutions – and this is a crowded market segment. In his article Scott defines what he considers a base definition for hyperconverged, and he’s worked extensively with many of the available solutions. Unfortunately I can’t help but see this as another ‘there are too many standards – we need another one to unify them’ type argument (perfectly summed up by this xkcd)!
Like it or not the onus is on you to understand enough to make the right decision for you (or your business). Don’t expect anyone to do it for you. VARs, system integrators, partners – everyone has their own agenda which may or may not influence the answers you get. Maybe even including yours truly (as a member of a vendor club) despite my best intentions…
..and for the analysts and techies…
If EVO:RAIL is just the usual vSphere components plus h/w bundled by OEMs, is it really hyperconverged? Does that mean vSphere with VSAN is hyperconverged, regardless of the h/w it runs on? Enquiring minds must know! 🙂
Summary: At VMworld in August VMware announced their new hyperconverged offering, EVO:RAIL. I found myself discussing this during TechFieldDay Extra at VMworld Barcelona and this post details my thoughts having spent a bit longer investigating. I’m not the first to write about EVO:RAIL so I’ll quickly recap the basics before giving my thoughts and some things to bear in mind if you’re considering EVO:RAIL.
Briefly, what is EVO:RAIL?
There’s no point in rediscovering the wheel so I’ll simply direct you to Julian Wood’s excellent series;
As of October 2014 there are now eight qualified OEM partners although beyond that list there’s very little actual information available yet. Most of the vendors have an information page but products aren’t actually shipping yet and it’s difficult to know how they’ll differentiate and compete with each other. Several partners already have their own offerings in the converged infrastructure space so it’ll be interesting to see how well EVO:RAIL fits into their overall product portfolios and how motivated they are to sell it (good thoughts on that for EMC, HP, and Dell). Unlike their own solutions, the form factor and hardware specifications are largely fixed so it’s going to be management additions (ILO cards, integration with management suites like HP OneView etc), service, and support that vary. For partners without an existing converged offering this is a great opportunity to easily and quickly compete in a growing market segment.
Management. The hyperconverged nature should mean improved management as VMware (and their partners) have done the heavy lifting of integration, licencing, performance tuning etc. EVO:RAIL also offers a lightweight GUI for those that value simplicity while also offering the usual vSphere Web Client and VMware APIs for those that want to use them. This is however a converged appliance and that comes with some limitations – you can manage it using the new HCIA interface or the Web Client, but it comes with its own vCSA instance so you can’t add it to an existing vCenter without losing support. It won’t use VUM for patching (although it does promise non-disruptive upgrades), though you can add the vCSA to an existing vCOps instance.
Simplicity. This is the strongest selling point in my opinion – EVO:RAIL is a turnkey deployment of familiar VMware technology. EVO:RAIL handles the deployment, configuration, and management and you can grow the compute and storage automatically as additional appliances are discovered and added. As the technology itself isn’t new there’s not much for support staff to learn, plus there’s ‘one throat to choke’ for both hardware and software (the OEM partner). Some people have pointed out that it doesn’t even use a distributed switch, despite being licenced with Ent+. Apparently the choice of a standard vSwitch was because of a potential performance issue with vDS and VSAN, which eventually turned out not to be an issue. Simplicity was also a key consideration and VMware felt there was no need for a vDS at this scale. I imagine we’ll see a vDS in the next iteration.
Flexibility. This is probably the biggest constraint for customers – it’s a ‘fixed’ appliance and there’s limited scope for change. The hardware and software you get with EVO:RAIL is fixed (4 nodes, 192GB RAM per node, no NSX etc) so even though you have a choice of who to buy it from, what you buy is largely the same regardless of who you choose. There is currently only one model so you have to scale linearly – you can’t buy a storage heavy node or a compute heavy node for example. EVO:RAIL is sold 4 nodes at a time and the SMB end of the market may find it hard to finance that kind of CAPEX. As mentioned earlier the partner is responsible for updates (firmware and patching) – you won’t be able to upgrade to the new version of vSphere until they’ve validated and released it for example. Likewise you can’t plug in that nice EMC VNX you have lying around to provide extra storage – you have to use the provided VSAN. Flexibility vs simplicity is always a tradeoff!
Interoperability/integration. In theory this is a big plus for EVO:RAIL as it’s the usual VMware components which have probably the best third party integration in the market (I’m assuming you can use full API access). Another couple of notable integration requirements;
10GbE networking (a ToR switch) is a requirement as it’s used to connect the four servers inside the 2U form factor, given the lack of a backplane. You’ll therefore need 8 ports per appliance (two per node). I spoke to VMware engineers at VMworld about this and was told VMware looked for a 2U form factor where they could avoid it but couldn’t. Many SMBs have not adopted 10GbE yet so it’s a potential stumbling block – of course partners may use this opportunity to bundle 10GbE networking, which would be a good way to differentiate their solution.
IPv6 is required for the discovery feature used when more EVO:RAIL appliances are added. This discovery process is proprietary to VMware though it operates much like Apple’s Bonjour, and apparently IPv6 is the only protocol which works (it guarantees a link-local address).
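VMware haven’t published the wire protocol, but the ‘guaranteed address’ point is standard IPv6 behaviour: every interface can auto-derive an fe80::/64 link-local address (historically from its MAC via modified EUI-64) with no DHCP or manual configuration, which is what makes link-local IPv6 a reliable substrate for zero-configuration appliance discovery. A minimal sketch of that derivation (illustrative only, not VMware’s code):

```python
def mac_to_link_local(mac: str) -> str:
    """Derive the modified EUI-64 IPv6 link-local address for a MAC address.

    Every IPv6 interface can auto-configure an fe80::/64 address like this
    with no DHCP or manual setup, so a newly racked appliance is always
    reachable by its neighbours for discovery.
    """
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe in the middle
    groups = ["%x" % ((eui64[i] << 8) | eui64[i + 1]) for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

print(mac_to_link_local("00:1b:21:3c:4d:5e"))        # fe80::21b:21ff:fe3c:4d5e
```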
Risk. This is always a consideration when adopting new technology but being a VMware-backed solution using familiar components will go a considerable way to reducing concern. VSAN is a v1.0 product, as is HCIA, although as the latter is simply a thin wrapper around existing, mature, and best-of-breed components it’s probably safe to say VSAN maturity is the only concern for some people (given initial teething issues). Duncan Epping has a blogpost about this very subject but his summary is ‘it’s fully supported’ so make sure you know your own comfort level when adopting new technology.
Cost. A choice of partners is great as it’ll allow customers to leverage existing relationships. It’s worth pointing out that you buy from the partner so any existing licencing agreements (site licences etc) with VMware probably won’t be applicable. At VMworld I was told VMware have had several customers enquire about large orders (in the hundreds) so it’ll be interesting to see how price affects adoption. I don’t think this is really targeted at service providers and I’ve no idea how pricing would work for them. Having spent considerable time compiling orders, having a single SKU for ordering is very welcome!
Talking of pricing, let’s have a look at ballpark costs. I’ve heard, though not been officially quoted, a cost of around €150,000 per 4 node block (or £120,000 for us Brits). This might seem high but bear in mind what you need;
UPDATE: 30th Nov – I realised I’d priced in four Supermicro chassis, rather than one, so I’ve updated the pricing.
Hardware. Let’s say approx £11k per node, so £45k for four nodes i.e. one appliance (this is approx – don’t quote!);
Supermicro FatTwin chassis (inc 10GB NICs) £3500 (one chassis for all four nodes)
Software. List pricing is approx £11k per node plus vCenter, so a shade under £50k
vCenter (vCSA) 5.5 = £2000
vSphere 5.5 = £2750 per socket = £5500 per node
VSAN v1 = £1500 per socket = £3000 per node
Log Insight = £1500 per socket = £3000 per node
Support and maintenance for 3 years on both hardware and software – approx £15k
Total cost: £110,000
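Totting those figures up as a sanity check (a rough sketch using the ballpark list prices above – none of these numbers are official quotes):

```python
# Sanity-checking the ballpark DIY pricing above (GBP, approximate -- don't quote!)
node_software = 5500 + 3000 + 3000    # vSphere + VSAN + Log Insight, per (2 socket) node
software = node_software * 4 + 2000   # four nodes plus one vCenter (vCSA)
hardware = 45000                      # four nodes incl. the shared FatTwin chassis
support = 15000                       # 3 years maintenance on hardware and software
total = hardware + software + support
print(software, total)                # software ~48k, total ~108k
```

That lands at roughly £108k, consistent with the £110k total above once rounding of the approximate per-node figures is allowed for.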
Once pricing is announced by the partners we’ll see just how much of a premium is being charged for the simplicity, automation, and integration that’s baked in to the EVO:RAIL appliance. There are of course multitudes of pricing options – you could just buy four commodity servers and an entry level SAN but there’s not much value in comparing apples and oranges (and I only have so much time to spend on this blogpost).
VMware aren’t the first to offer a converged appliance – in fact they’re several years behind. VCE’s vBlock was first back in 2010, followed by hyperconverged vendors like Nutanix and Simplivity. As John Troyer mentioned on vSoup’s VMworld podcast, Scale Computing use KVM to offer an EVO:RAIL competitor at cheaper prices (and have done for a few years). Looking at Gartner’s magic quadrant for converged infra it’s a pretty crowded market.
Microsoft recently announced their Cloud Platform Services (Cloud Pro thoughts on it) which was developed with Dell (who are obviously keeping their converged options wide open as they’ve also partnered with Nutanix and VMware on EVO:RAIL). While more similar to the upcoming EVO:RACK it’s another validation of the direction customers are expected to take.
From a market perspective I think VMware’s entry into the hyperconverged marketplace is both a big deal and a non-event. It’s big news because it will increase adoption of hyperconverged infrastructure, particularly in the SMB space, through increased awareness and because EVO:RAIL is backed by large tier 1 vendors. It’s a non-event in that EVO:RAIL doesn’t offer anything new other than form factor – it’s standard VMware technologies and you could already get similar (some would say superior) products from the likes of Nutanix, Simplivity and others.
Personally I’m optimistic and positive about EVO:RAIL. Reading the interview with Dave Shanley it’s impressive how much was achieved in 8 months by 6 engineers (backed by a large company, but nonetheless). If VMware can address the current limitations around management, integration, and flexibility, while maintaining the simplicity, it seems likely to be a winner.
Pricing for EVO:RAIL customers will be key although not all of the chosen partners are likely to compete on price.
Summary: A recap of the major announcement and my thoughts on both the announcements and the conference. It’s a long post because I use it as a personal record of thoughts – feel free to skim read!
Like last year I arrived in Barcelona on the Sunday so I had more time to settle in. This was my first conference as a VMware Partner but unfortunately Monday, Partner day, was a bit of a washout for me due to some registration issues which prevented me from getting into the sessions (and lunch!). I probably need to allow myself some time to adjust my perspective and learn the partner side of the fence, and it’s unfair to judge when I didn’t attend, but looking at session titles most of the partner sessions appeared to be sales focused rather than the roadmap or vision content which would have interested me more. I guess everyone’s interested in those so they become general sessions. Which brings me nicely to the keynote presentations….
I’ve come to accept and almost enjoy the reality that Europe plays second fiddle to the US conference, which means the bulk of new announcements have already been made at the US show. My first ever blogpost was ranting about why the US show was the obvious one to attend but I now find I enjoy the gap as it gives me time to digest, investigate, and dwell on what’s new. It is a smaller show with fewer vendors, sessions etc but there’s still no way you can see or learn everything that’s on offer in the three or four days so it’s equally worth attending.
I think the buzz was a bit more balanced across the product suite this year. Two years ago felt like it was all about storage with the mass market adoption of caching, flash, scale out and hybrid arrays whereas last year was all about NSX. This year NSX was clearly still buzzing (top HOL by a mile) and storage continued its disruptive evolution (PernixData, VAIO, VSAN) but the announcement of EVO:RAIL got the most column inches. vCloud Air products, vRealize Automation adoption and some of the DevOps focus were also capturing plenty of the discussions and sessions. While VMware may be propping up ‘legacy’ applications until the Web 2.0/AWS crowd take over the world (;-)) it’s still a vibrant, exciting, and quick moving place – and therefore enjoyable!
I’ll recap the major announcements that caught my eye. When you look past the vRealize rebranding itself there are new releases – though most aren’t available until later this year (not too long to wait though);
SDDC (core infra)
EVO:RAIL (and later EVO:RACK) will allow VMware’s partners to compete with the existing hyperconverged vendors, while also selling more VMware licences.
vSphere 6 was NOT released but continues as an open beta. I’m on the beta and there are some great new features on the way (vSMP could be a game changer, vVOLs are an improvement but have taken too long to arrive) but this is somewhat overdue given VMware’s previous two year per major release lifecycle.
NSX 6.1 was announced (and released). NSX continues to grab mindshare but I think it’s going to be a long adoption cycle (as I’ve written previously).
vCloud Air continues to evolve at a rapid pace. New services such as vRealize Air Mobile and vRealize Air Automation are the first to be announced but more will no doubt show up in short order.
vRealize Automation is being released on an aggressive six monthly release cycle which everyone is struggling to keep up with but it reflects the importance VMware attach to this product.
VMware purchased CloudVolumes and rebranded it to AppVolumes. I first came across this technology last year via CloudCast episode 87 which is worth a listen as background. Interesting stuff and one to watch.
Docker integration was announced (the cynics would say to keep the DevOps crowd happy). I agree that containers and VMs complement each other but I think containers are still a threat to VMware in some use cases – after all containers run on any hypervisor so they level the playing field somewhat and containers without VMware are largely free….
VMware’s entry in the hyperconverged space is both a big event and a non-event. It’s big news because it will increase adoption of hyperconverged infrastructure, particularly in the SMB space, through increased awareness and because EVO:RAIL is backed by large vendors. It’s a non-event in that EVO:RAIL doesn’t offer anything new other than form factor – it’s standard VMware technologies and you could already get similar (some would say superior) products from the likes of Nutanix and Simplivity and others. I’ll be posting my (generally positive) thoughts on EVO:RAIL soon (now posted).
NSX is here to stay. A cutdown version, NSX Lite, looks set to become a core part of vSphere at some point in the future, probably towards the end of 2015 (my guess). It may not have mass market adoption yet but there’s a lot of interest and actual customer deployments. It’s already baked into vCloud Air and will be part of the EVO:RACK stack when it’s released. VMware are clearly ‘betting the business’ on NSX succeeding.
The introduction last year of VMware’s own vCloud Hybrid Service, now known as vCloud Air (part of this year’s rebrand) makes it clear that even VMware’s partners weren’t keeping up so VMware have decided to compete their own way and create a public cloud where they can integrate the latest and greatest on a schedule they control. For some partners this evolution may be a challenge in the long term (is there still enough scope for adding value?) but for now it seems the partner network is alive and well. vCloud Air already seems to get as much focus from VMware as the vSphere suite, despite being only a year old, so I imagine we’ll see this pushed even more in the future. Whether they can really compete with the big four (AWS, Azure, Google, Rackspace) is yet to be seen – for Cloud IaaS VMware are still in the ‘niche’ quadrant according to Gartner.
Keeping customers engaged over vCloud Director seems to be an afterthought for VMware, despite the technology underpinning vCloud Air. I spoke to several colleagues at large service providers and all felt that vCD had a future and that VMware would help them transition to vCAC (the successor to the crown). For the first time this year many colleagues were actively involved with vCO/vCAC and coding workflows, though most also mentioned a steep learning curve and the changes to organisational structures that need to accompany adoption.
Competition is forcing VMware’s hand. The tagline for this year’s conference was ‘No Limit’ and the keynote was peppered with references to being ‘brave’ (and last year’s tagline was a not too dissimilar ‘Deny convention’). I think VMware are trying to encourage their customers to accelerate their pace of adoption and change. In 2012 I wrote about customers struggling to keep up and still think it’s a problem today. Likewise competition is forcing VMware to release products throughout the year rather than at the conference. Five years ago everything was released at VMworld whereas the last few major releases have come outside conference time – VSAN, vCHS, even the vSphere 6 beta. VMworld is still a great marketing platform but major releases can now arrive at any time of year.
I’m not very familiar with OpenStack but VMware’s development of an OpenStack distribution feels like they’re hedging their bets. If OpenStack adoption increases VMware have a stake in it and if not then there’s less competition. Time (and more educated folks) will tell.
This year I noted more of my peers developing code (for vRealize Automation – OK, vCAC) than ever before. vCenter Orchestrator has been ‘the best kept secret’ for about four years but the swing towards ‘infrastructure as code’ is actually taking hold in VMware-land.
Breakout Sessions/Hands On Labs
I only attended a few sessions this year – frankly I don’t know where the four days went! As all the sessions are online after the event I don’t prioritise them as much as I probably should, given that I rarely find time to watch them later! I was also more focused on work related technologies rather than new features as I was attending on company time rather than on my own time.
Site Recovery Manager 101: What’s New (BCO2394) – this was an introductory session which frankly I attended by mistake! I did learn a few new things and there was a nice tip at the end to check out another session (BCO1916.2 – Site Recovery Manager and Stretched Storage: Tech Preview of a New Approach to Active-Active Data Centers) which was covering a new SRM use case in combination with a MetroCluster. I didn’t have time to catch that session live but will be downloading it later.
Multi-Site Data Center Solutions with VMware NSX (NET1974). When I attended the NSX ICM course there was a lot of discussion around NSX being a single datacentre solution so I was curious what this session was going to cover. This was a great session which covered both enterprise and cloud use cases and was surprisingly easy to digest for a complex topic – that’s the sign of a good speaker (Ray Budavari). Well worth a watch.
Veeam Availability suite v8 deep dive (STO2905-SPO). This session highlighted the new features in the upcoming v8 release along with some useful best practices. The failover plans look useful (very similar to SRM failover plans) and I can see a use case for SureReplica (test/dev sandbox for replica VMs) although many of the other features are just ‘nice to have’ rather than revolutionary – WAN acceleration for replication jobs (backup-copy jobs only in v7), network traffic encryption, Netapp integration etc.
HOL-SDC-1423 – vCloud Suite Networking. I need to improve my networking knowledge and getting more familiar with vCNS and NSX is high on my list. I found this lab hard going – not technically difficult, just boring! Note to self – don’t take labs at the end of a long day when tired as it’s not productive!
HOL-SDC-1428 – VMware EVO:RAIL Introduction. I enjoyed this lab, simple though it is. It gives you a chance to get hands on with the simple GUI available with the EVO:RAIL.
HOL-SDC-1429 – Virtual Volumes Tech Preview. This provided hands-on experience and helped me understand how actually administering vVols might work rather than just the theory of how they advance profile-based management. Now we just have to wait for vSphere 6 to be released…
The Solutions Exchange
I didn’t make it to the vendor side of VMworld until late Wednesday and even then I spent less time than in previous years. The usual vendor enticements were on offer although I felt like the gimmicks and giveaways were slightly abated this year – fewer iPads, more t-shirts. This is a good thing, provided they can also provide technical information! I challenge myself every year to ‘take the pulse’ of the vendor ecosystem – who’s new, who’s thriving, who’s struggling, who’s going to be the next big thing? This year I thought the developments in the I/O path were one to watch so I checked out a few related vendors;
PernixData. I first saw these guys via Storage Field Day 3 back in April 2013 – that might not seem too long ago but in this industry it’s an age. Their ‘flash virtualisation platform’ is a read and write cache which operates across distributed hosts in a cluster to accelerate your I/O. I’ve also met with their CTO Satyam Vaghani on several occasions as I’m always impressed by both their technology and their ambitions. Along with Proximal Data (who I saw at SFD2) these companies have been talking about the I/O path’s potential for a couple of years. There’s a reason it’s called a platform not just a product. Prior to VMworld this year PernixData launched v2.0 of their FVP platform which includes using RAM as a distributed cache. Good stuff!
Diablo Technologies. These guys deliver ULLtraDIMMs, which essentially embed flash storage into your existing DIMM slots, facilitating blazing fast access in the process. In this thoughtful introduction to Diablo by Justin Warren he tackles the technology and possible use cases. It’s certainly an interesting idea but are there enough use cases? I was hoping to see Diablo at TechFieldDay Extra but sadly they presented on the Thursday when I couldn’t attend – time to watch the videos I guess.
SanDisk. I spoke to FlashSoft (a division of SanDisk) back in 2012 when server side SANs were just getting started and PernixData etc were just coming out of stealth. Since then server-side caching has grown in popularity and this year SanDisk have partnered with VMware on the upcoming VAIO filters. FlashSoft never struck me as the most popular flash cache solution so it’s interesting that VMware chose them as a partner. I wonder what this opening up of the APIs means for PernixData?
Here’s some further info and thoughts on some of these developments from Chris Wahl, Niels Hagoort, and Cormac Hogan. I was surprised that Proximal Data weren’t in attendance but their news page and twitter feed have been very quiet lately – maybe all is not well. I should also have spoken to Infinio and Atlantis Computing (as they operate in this space) but ran out of time.
For the first time (that I’m aware of) Oracle were in attendance. Given their licencing and certification/support stance on VMware it was brave to say the least! I’m familiar with their ‘converged’ infrastructure offering, the OVCA, having had some exposure to it at my previous employer but I didn’t find time to have a chat about it. I was surprised when Gartner put Oracle in the ‘leaders’ quadrant for converged infrastructure a few months ago (along with VCE, Netapp, and Cisco) but they must be doing something right. Whenever I mention OVM to anyone it gets short shrift though I’m not sure if that’s owing to actual knowledge or just because it’s ‘clever’ to bad mouth Oracle in the VMware world – personally I’ve never used OVM.
One of the interesting stands I always take time to visit is the VMware R&D team, now wrapped under the banner of ‘the office of the CTO’. They tend to pick a couple of ongoing projects and this year was no exception;
Auto-scaling applications (Download the full PDF here). I spoke to Xiaoyun Zhu who explained that they’re working on allowing applications to automatically scale out either vertically or horizontally based on a set of criteria. I remember trying to do something similar with dev/test environments and quickly found that while amending VMs is trivial that’s the tip of the iceberg – the application may need reconfiguring (buffers, caches etc) as will middleware, and determining the ‘trigger’ for the initial memory upgrade is not always simple. How do you determine when to scale up vs out? What’s ‘typical’ performance for an app? What if only one tier in a multi-tier application needs to scale? What if scaling one tier has a knock on effect and you need to scale out every tier? The kind of machine learning used to create dynamic thresholds in vRealize Operations is probably being used here and I can see great value in the ability to adapt a whole application on the fly. On the other hand much can already be done with the publicly available APIs and I can’t see how VMware would keep up with application revisions. This was also on display last year with a slightly reduced scope so it’s obviously not a quick win!
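To make the scale-up vs scale-out question concrete, here’s a toy policy sketch. To be clear, this is not VMware’s research code – the function name, thresholds, and tier model are all invented for illustration, and the real work presumably uses learned baselines rather than fixed thresholds:

```python
# Toy illustration of the scale-up vs scale-out decision discussed above.
# All names and thresholds are hypothetical; real systems derive thresholds
# dynamically (e.g. learned per-application performance baselines).
def scaling_action(tier, cpu_util, mem_util, max_vm_vcpus=8):
    """Return a scaling action for one application tier."""
    if cpu_util < 0.75 and mem_util < 0.75:
        return "none"
    # Prefer scaling up (hot-add resources) while the VM has headroom:
    # no application rebalancing or middleware reconfiguration needed.
    if tier["vcpus"] < max_vm_vcpus and not tier["stateless"]:
        return "scale-up"
    # Stateless or maxed-out tiers scale out instead -- but the app and
    # middleware must be reconfigured and load-balanced, which is the
    # hard part in practice (the 'tip of the iceberg' problem).
    return "scale-out"

print(scaling_action({"vcpus": 4, "stateless": False}, cpu_util=0.9, mem_util=0.6))  # scale-up
print(scaling_action({"vcpus": 8, "stateless": True}, cpu_util=0.9, mem_util=0.6))   # scale-out
```

Even this trivial policy shows why the problem is hard: the interesting questions (what counts as ‘typical’ load, what happens to the other tiers) live outside the function.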
High Performance Computing (Download the full PDF here). The last release of vSphere came with some specific features aimed at low latency applications but they come with considerable constraints to core features like vMotion. VMware aren’t resting on their laurels and are continuing to find ways of supporting low latency without constraints. High Performance Computing refers to Grid Computing and is often used in the sciences where crunching large numbers is commonplace. I had a good chat about the challenges and progress with Josh Simons from the HPC division.
In these environments milliseconds count – if you want to understand why and enjoy a good read, try Michael Lewis’s Flash Boys, which tells the story behind high frequency trading! I read it on the flights to and from Barcelona so it won’t take too long, and it’s recommended.
I also visited the VMware shop and found the selection of books now available to be very sobering! Compared to the early days of VMware there’s been an explosion in the complexity and breadth of topics you need to know. Two books caught my eye and are now on my Xmas wishlist – Cloud Computing by Thomas Erl and cloud networking by Gary Lee.
The blogger/community lounge (where I spent quite a bit of time) was nicely placed near the hands on labs and by the hang space. The vBrownBags (US agenda and sessions, Barcelona agenda), Engineers Unplugged (with guests Nick Howell and Gabriel Chapman), and VMworld TV crews were all in attendance doing their thing although sadly theCube wasn’t present as they only cover the larger US show. Lots of good content as always but sadly I missed this year’s vSoup VMworld podcast due to a clash with TechFieldDay Extra. This is the first time I’ve missed it since it started running in 2011. Instead they got John ‘the dude’ Troyer as a guest so it’s probably safe to assume I’ve lost my place for next year! Sad panda. 🙁
Taking my own advice I also went to the Meet an Expert sessions and had a couple of one on one sessions with VMware experts (Ninad Desai and Gurusimran Khalsa). This gave me the chance to put my question about the future of vCD directly to VMware staff although I had to go through quite a few people before I found someone who could give me a satisfactory answer (thanks Scott Harrison)! I’ve got a blogpost in the offing about this particular topic.
They often run one in the US but for the first time GestaltIT ran the TechFieldDay Extra event at the EMEA conference and I was invited as a delegate. I only saw two sponsors (X-IO and VMTurbo) as I could only attend one afternoon but both were interesting and as always there were good conversations both on and off camera with the other delegates (Andrea Mauro, Joep Piscaer, Arjan Timmerman, Marco Broekken, Martin Glassborrow, Nigel Poulton, Eric Ableson, Hans De Leenheer, & José Luis Gómez). It was fairly brief compared to a full event over a few days but still enjoyable and nice to meet a few new people who I’ve followed on twitter for a while. I was familiar with VMTurbo but X-IO were new to me – I’ll be posting my thoughts on both shortly. There was also a roundtable discussion on converged infrastructure which centred on EVO:RAIL – you can check out the videos by clicking on the logos below;
I watched a few of the vBrownBag sessions – one on SSO by Frank Buechsel (@fbuechsel) and another good one from Gabriel Chapman on converged infra which was commendable for including actual customer numbers. I also caught, more by chance than design, the vExpert daily which is always fun if not overly informational. The vBrownbag sessions felt a bit unloved stuck to the side of the hangspace and I wonder if it wouldn’t be better within the Solutions Exchange, given that people are used to watching presentations there? I expect that wouldn’t work as it would have cost implications. I should also mention the portable whiteboard I got from vBrownBag – maybe it’s a novelty but at least potentially useful! The vBrownBag sessions were recorded and are all online via their YouTube page.
I didn’t party as hard this year as I’ve got a newborn at home, so sleep (and a VMware vest!) was more of a priority – I skipped the vExpert/VCDX party, the Veeam party, and the official VMworld party. It also gave me a chance to write up notes, something I’d promised myself I’d do a better job of. I did kickstart the conference with the vRockstar party at the Hard Rock Cafe, which was great. I’ve got to know a lot of people over the last five years and it’s great to have a catch up over a drink and some tech chat. I spent much more time chatting about industry trends and canvassing opinion than in previous years. I did make the PernixData party (great venue) and had a good chat with Ather Beg from Xtravirt, plus Chris Dearden and Ricky El-Qasem from Veeam and Canopy Cloud respectively.
UPDATE: 12th Nov
I also spent some time recording some sessions for VMware EMEA, talking people through what to expect at VMworld. It makes me cringe seeing myself on camera (that’s why I’m a blogger – I can write rather than talk) but you can watch it on the official VMware blog.
For the last couple of years adoption of ‘converged infrastructure’ has been on the rise but until recently it wasn’t something I’d needed to understand beyond general market awareness and personal curiosity. I was familiar with some of the available solutions (in particular VCE’s vBlock and Netapp’s Flexpod) but I also knew there were plenty of other converged solutions which I wasn’t so familiar with. When the topic was raised at my company I realised that I needed to know more.
Google research quickly found a converged infrastructure primer at Wikibon which had the quotable “Nearly 2/3rds of the infrastructure that supports enterprise applications will be packaged in some type of converged solution by 2017“. The Wikibon report is well worth a read but it didn’t quite answer the questions I had, so I decided to delve into the various solutions myself. Before I continue I’ll review what’s meant by ‘converged infrastructure’ with a Wikipedia definition;
Converged infrastructure packages multiple information technology (IT) components into a single, optimized computing solution. Components of a converged infrastructure solution include servers, data storage devices, networking equipment and software for IT infrastructure management, automation and orchestration.
In a series of blogposts over the coming months I’m planning to summarize the converged offerings from various vendors including VCE, Netapp, HP, Oracle, IBM, Dell, and Hitachi. If I find time I’ll also cover the newer ‘hyperconverged’ offerings from Nutanix, Scale Computing, Pivot3 and Simplivity. This is largely for my own benefit and as a record of my thoughts – there’s quite a bit of material out there already so it may turn into a compilation of links. I don’t want to reinvent the wheel!
Q. Will this series of blogposts tell you which converged solution you should choose?
A. Nope. There are many factors behind these decisions and I (unfortunately) don’t have real world experience of them all.
CI solutions vary considerably in their degree of convergence and use cases. Steve Chambers (previously of VCE, now Canopy Cloud) has a good visualisation of the various solutions on a ‘convergence’ scale. If you haven’t read it already I’d strongly recommend you do so before continuing.
Why converged infrastructure?
Before I delve into the solutions let’s have a look at some factors which are common to them all – there’s no point looking at any solution unless you know how it’s going to add value.
Management. The management and orchestration tools are often what add real value to these solutions, and that’s typically the component people aren’t familiar with. Run a POC to understand how effective these new tools are. Do they offer an API?
Simplicity. Validated architectures, preconfigured and integrated stacks of hardware and software, and built-in automation all promise to ease the support burden of deploying and operating infrastructure. Who do you call to resolve problems? Will you be caught between vendors blaming each other’s components or is there a single point of contact/resolution? While a greenfield deployment may be simpler, if you add it to the existing mix (rather than as a replacement) then you’ve added complexity to your environment, and potentially increased your TCO rather than reduced it. Changes to existing processes may also impact job roles – maybe you won’t need a storage admin, for example – which can be a benefit but may require considerable change and entail uncertainty for existing staff.
Flexibility. Is deploying a large block of compute/network/storage granular enough for your project? Many vendors are now producing a range of solutions to counter this potential issue. While deployment may be quicker, consider ongoing operations – because the engineered systems need to be validated by the vendor you may not be able to take advantage of the newest hardware or software releases, including security patches. For example Oracle’s Exalogic v2, released in July 2012, ships with Linux v5 despite v6 being released in February 2011. The CPUs were Intel’s Westmere processors (launched in Jan 2011) instead of the E5 Romley line, which was released in March 2012. This isn’t just Oracle – to varying degrees this will hold true for any ‘engineered’ system.
Interoperability. Can you replicate data to your existing infrastructure or another flavour of converged infrastructure? What about backups, monitoring etc. – can you plumb them into existing processes and tools? Is there an API?
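To make the API question concrete, here’s the kind of quick integration check you might script during a POC. Everything here is hypothetical – the capability document, feature names, and schema are invented for illustration, since every vendor’s management API differs:

```python
import json

# Hypothetical capability document a CI management API might return.
# Real platforms each expose their own schema - this is illustrative only.
SAMPLE_CAPABILITIES = json.dumps({
    "api_version": "2.1",
    "features": ["metrics-export", "snmp-traps", "rest-provisioning"],
})

# Features our existing monitoring/backup tooling depends on (assumed).
REQUIRED = {"metrics-export", "snmp-traps"}

def can_integrate(capabilities_json: str) -> bool:
    """Return True if the platform advertises every feature our
    existing tooling needs to plumb in without custom glue code."""
    caps = json.loads(capabilities_json)
    return REQUIRED.issubset(caps.get("features", []))

print(can_integrate(SAMPLE_CAPABILITIES))  # True for the sample above
```

The point isn’t the code itself but the discipline: write down what your existing tools actually require, then verify the new platform advertises it before you buy.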
Risk. CI solutions can reduce the risk of operational issues – buy a 100 seat VDI block which has been designed and pretested for that purpose and you should be more confident that 100 users can work without issue. But what if your needs grow to 125 VDI users? Supplier management is also a factor – if a single vendor is now responsible for compute, networks, and storage, vendor lock-in becomes more significant, but consolidating vendors can also be a benefit.
Cost. CI is a great idea and an easy-to-grasp concept but there’s no such thing as a free lunch – someone is doing the integration work (both software and hardware) and that has to be paid for. CI solutions aren’t cheap and tend to have a large initial outlay (although Oracle have recently announced a leasing scheme which some are sceptical of!) so may be more suited to greenfield sites or larger projects. TCO is a complex issue but also bear in mind support costs – engineered systems can be expensive if you need to customize them after deployment. CI systems’ integrated nature may affect your refresh cycle and have an impact on your purchasing process.
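As a toy illustration of why the TCO question needs actual numbers rather than gut feel, here’s a back-of-envelope comparison. Every figure below is invented for the example, not sourced from any vendor:

```python
# Back-of-envelope 3-year TCO sketch - all figures are hypothetical.
YEARS = 3

def tco(capex, annual_support, annual_ops_cost):
    """Simple model: upfront cost plus recurring support and operations."""
    return capex + YEARS * (annual_support + annual_ops_cost)

# DIY stack: cheaper hardware, but more integration and ops effort.
diy = tco(capex=400_000, annual_support=40_000, annual_ops_cost=120_000)

# Converged block: bigger initial outlay, single support contract, less ops.
ci = tco(capex=600_000, annual_support=60_000, annual_ops_cost=60_000)

print(diy, ci)  # 880000 960000
```

Flip the assumed staff savings or stretch the horizon to five years and the answer changes – which is exactly the point: the ‘simpler’ option isn’t automatically the cheaper one, and the outcome hinges on inputs only you can supply.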
Workload. Interestingly virtualisation promised a future where the hardware didn’t matter, but the current bundling of CI solutions could be seen as a step backwards (as eloquently described by Theron Conrey in this blogpost ‘Is converged infrastructure a crutch?‘). There’s an interesting trend of extending the convergence through to the application tier, as seen in Oracle’s Exadata/Exalogic, VCE’s ‘specialised’ solutions (SAP Hana etc) and Netapp’s Flexpod Select solutions. This promises certification/validation through the entire stack but does raise an interesting situation where the application teams (who are closer to the business) increasingly influence infrastructure decisions…
Late last week I joined an illustrious line of community bloggers, vendors, and authors by having a ‘chinwag’ with Mike Laverick. Anyone who knows Mike knows that a quick chat can easily last an hour for all the right reasons – he’s passionate about VMware and technology in general and good at presenting complex ideas in an easily understood manner. I guess that’s why he recently became a senior cloud evangelist for VMware! We discussed a few topics which are close to my heart at the moment;
As time is limited on the actual chinwag I thought I’d offer a few additional thoughts on a couple of the topics we discussed.
Oracle and converged infrastructure
I didn’t want to get embroiled in a discussion about Oracle’s support stance on VMware as that’s been covered many times before, but it’s definitely still a barrier. Some of our Oracle team have peddled the ‘it’s not supported’ argument to senior management, and even though I’ve clarified the ‘supported vs certified’ distinction it’s a difficult perception to alter. Every vendor wants to push their own solutions so you can’t blame Oracle, but it sure is frustrating!
Of more interest to me is where converged infrastructure is going. As we discussed on the chinwag, Oracle are an interesting use case for converged infrastructure (or engineered systems, pick your terminology of choice) because it includes the application tier. Most other converged offerings (VCE, FlexPod, vStart and even hyperconverged solutions like Nutanix) tend to stop at the hypervisor, thus providing an abstraction layer that you can run whatever workload you like on. Oracle (with the possible exception of IBM?) may be unique in owning the entire stack – hardware, storage, networking, and compute, up through the hypervisor to their crown jewels, the Oracle database and applications. This gives them a position of strength to negotiate from even when certain layers are weak in comparison to ‘best of breed’, as is the case with OracleVM. Archie Hendryx explores this in his blogpost, although I think he undersells the advantage Oracle have of owning a tier 1 application – Dell’s vStart or VCE’s vBlock may offer competition from an infrastructure perspective but my company don’t run any Dell or VCE applications. If you’re not Oracle how do you compete with this? You team up to provide a ‘virtual stack’ optimised for various workloads – today VDI is the most common (see reference architectures from Nexenta, Nimble Storage et al). As the market for converged infrastructure grows I think we’ll see more of these ‘vertical’ stack style offerings.
After I described my problem getting vCD tabled as a viable technology for lab management, Mike rightly pointed out that many people are using vCD in test and dev – maybe more than in production. I agree with Mike but suspect that most are using dev/test as a POC for a production private cloud, not as a purpose-built lab management environment. I didn’t get time to discuss a couple of other points which both complicate the introduction of vCD even if you have an existing VMware environment;
Introducing vCD (or any cloud solution for that matter) is potentially a much bigger change than the initial introduction of server virtualisation. With server virtualisation the changes mainly affected the infrastructure teams, although provisioning, purchasing, networks and storage were all touched. If you’re intending to deliver test/dev environments you’re suddenly incorporating your applications too, potentially including the whole development/delivery lifecycle. If you go the whole hog to self-service then you potentially include an even larger part of the business, right up to the end users. That’s a very disruptive change for some ‘infrastructure guy’ to be proposing!
vCD recommends Enterprise+ licensing, which means I have to argue for the highest licensing level for test/dev even if I don’t have it in production.