Category Archives: Management

Converged infrastructure: an introduction

For the last couple of years adoption of ‘converged infrastructure’ has been on the rise but until recently it wasn’t something I’d needed to understand beyond general market awareness and personal curiosity. I was familiar with some of the available solutions (in particular VCE’s vBlock and Netapp’s Flexpod) but I also knew there were plenty of other converged solutions which I wasn’t so familiar with. When the topic was raised at my company I realised that I needed to know more.

Some quick Google research found a converged infrastructure primer at Wikibon which had the quotable “Nearly 2/3rds of the infrastructure that supports enterprise applications will be packaged in some type of converged solution by 2017”. The Wikibon report is well worth a read but it didn’t quite answer the questions I had, so I decided to delve into the various solutions myself. Before I continue I’ll review what’s meant by ‘converged infrastructure’ with a Wikipedia definition;

Converged infrastructure packages multiple information technology (IT) components into a single, optimized computing solution. Components of a converged infrastructure solution include servers, data storage devices, networking equipment and software for IT infrastructure management, automation and orchestration.

In a series of blogposts over the coming months I’m planning to summarize the converged offerings from various vendors including VCE, Netapp, HP, Oracle, IBM, Dell and Hitachi. If I find time I’ll also cover the newer ‘hyperconverged’ offerings from Nutanix, Scale Computing, Pivot3 and Simplivity. This is largely for my own benefit and as a record of my thoughts – there’s quite a bit of material out there already so it may turn into a compilation of links. I don’t want to reinvent the wheel!

Q. Will this series of blogposts tell you which converged solution you should choose?
A. Nope. There are many factors behind these decisions and I (unfortunately) don’t have real world experience of them all.

CI solutions vary considerably in their degree of convergence and use cases. Steve Chambers (previously of VCE, now CanopyCloud) has a good visualisation of the various solutions on a ‘convergence’ scale. If you haven’t read it already I’d strongly recommend you do so before continuing.

Why converged infrastructure?

Before I delve into the solutions let’s have a look at some factors which are common to them all – there’s no point looking at any solution unless you know how it’s going to add value.

  • Management. The management and orchestration tools are often what add real value to these solutions, and that’s typically the component that people aren’t familiar with. Run a POC to understand how effective these new tools are. Do they offer an API?
  • Simplicity – validated architectures, preconfigured and integrated stacks of hardware and software, and built-in automation all promise to ease the support burden of deploying and operating infrastructure. Who do you call to resolve problems? Will you be caught between vendors blaming each other’s components or is there a single point of contact/resolution? While a greenfield deployment may be simpler, if you add it to the existing mix (rather than as a replacement) then you’ve added complexity to your environment, and potentially increased your TCO rather than reduced it. Changes to existing processes may also impact job roles – maybe you won’t need a storage admin, for example – which can be a benefit but may require considerable change and entail uncertainty for existing staff.
  • Flexibility – Is deploying a large block of compute/network/storage granular enough for your project? Many vendors are now producing a range of solutions to counter this potential issue. While deployment may be quicker, consider ongoing operations – because the engineered systems need to be validated by the vendor you may not be able to take advantage of the newest hardware or software releases, including security patches. For example Oracle’s Exalogic v2, released in July 2012, ships with Linux v5 despite v6 being released in February 2011. The CPUs were Intel’s Westmere processors (launched in early 2010) instead of the E5 Romley line which was released in March 2012. This isn’t just Oracle – to varying degrees this will hold true for any ‘engineered’ system.
  • Interoperability. Can you replicate data to your existing infrastructure or another flavour of converged infrastructure? What about backups, monitoring etc – can you plumb them into existing processes and tools? Is there an API?
  • Risk. CI solutions can reduce the risk of operational issues – buy a 100-seat VDI block which has been designed and pretested for that purpose and you should be more confident that 100 users can work without issue. But what if your needs grow to 125 VDI users? Supplier management is also a factor – if a single vendor is now responsible for compute, networks, and storage, vendor lock-in becomes more significant, but consolidating vendors can also be a benefit.
  • Cost. CI is a great idea and an easy concept to grasp but there’s no such thing as a free lunch – someone is doing the integration work (both software and hardware) and that has to be paid for. CI solutions aren’t cheap and tend to have a large initial outlay (although Oracle have recently announced a leasing scheme which some are sceptical of!) so may be more suited to greenfield sites or larger projects. TCO is a complex issue but also bear in mind support costs – engineered systems can be expensive if you need to customize them after deployment. A CI system’s integrated nature may also affect your refresh cycle and have an impact on your purchasing process.
  • Workload. Interestingly, virtualisation promised a future where the hardware didn’t matter but the current bundling of CI solutions could be seen as a step backwards (as eloquently described by Theron Conrey in this blogpost ‘Is converged infrastructure a crutch?’). There’s an interesting trend of extending the convergence through to the application tier as seen in Oracle’s Exadata/Exalogic, VCE’s ‘specialised’ solutions (SAP Hana etc) and Netapp’s Flexpod Select solutions. This promises certification/validation through the entire stack but does raise an interesting situation where the application teams (who are closer to the business) increasingly influence infrastructure decisions…

There’s a thought provoking article at Computer Weekly discussing modular datacentres which takes the converged concept even further. Why bolt together building blocks in your datacentre when you can buy a complete datacentre in a box? Convergence on a larger scale! Next thing you know they’ll be using shipping containers for datacentres… 🙂

Further Reading

Converged infrastructure primer (Wikibon)

Management of converged infrastructures (the Virtualization Practice)

Engineers Unplugged session on hyperconverged infrastructure (7 mins)

The Future of Convergence think tank (2hr video)

EMC’s white paper “Time for Converged Infrastructure?” – some good points but with an obvious bias

Containerized datacenters – is a box a good fit?

Converged infrastructure and Object Oriented programming

The state of Converged infrastructure (Zenoss 2013 survey results)

Automating vSphere with Cody Bunch – book review

vCenter Orchestrator (vCO) has been around since May 2009 when vSphere4 was initially released. Despite being around for over two years it doesn’t seem to get much attention even though it’s free to anyone who’s purchased vCenter and has the potential to save effort for system administrators. There are a couple of reasons for this in my opinion – firstly it isn’t ready to go by default; you have to configure it manually and that’s not as straightforward as it could be. Secondly it looks intimidating once configured and does require some knowledge of the vSphere API and preferably of JavaScript. While neither is that hard to get to grips with, combined they make for quite a barrier to entry.

The first issue has been made significantly easier by the availability of the vCO appliance, and this book by Cody Bunch aims to take away some of the mystique behind the second challenge. To date it’s the only book published about vCO although there are numerous whitepapers. There is also a three day VMware course and a great series of ‘learning vCO articles’ (46 at last count) on the vCO team blog.

The book comes in at 260 pages, so it’s not quite the ‘doorstop’ that Scott Lowe’s ‘Mastering vSphere’ books tend to be. As with many technical books however the key is in understanding the content rather than having lots of it – you could easily spend a week learning a specific part of the API while you perfect a real world workflow. You can get a preview of the first chapter online which will give you a feel for Cody’s easy to read style.
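
To give a flavour of why the JavaScript and API knowledge matters, here’s a minimal sketch of the sort of scriptable task you end up writing inside a vCO workflow. It isn’t taken from the book; it assumes (purely for illustration) a workflow input parameter called vms of type Array/VC:VirtualMachine bound to the task, and it simply walks that array using attribute names that mirror the vSphere API.

```javascript
// Minimal vCO scriptable-task sketch (illustrative only, not from the book).
// Assumption: the workflow defines an input 'vms' of type Array/VC:VirtualMachine
// and binds it to this scriptable task.
for (var i = 0; i < vms.length; i++) {
    var vm = vms[i];
    // 'name' and 'runtime.powerState' mirror the vSphere API's VirtualMachine object
    System.log(vm.name + " is currently " + vm.runtime.powerState);
}
System.log("Inspected " + vms.length + " VMs");
```

A real workflow would obviously go on to do something useful with each VM (snapshot it, power it on and so on), but even this snippet shows the pattern: a little JavaScript plumbing plus some familiarity with the vSphere object model.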

The book is split into three sections plus appendices;

  1. Introduction, installation and configuration (50 pages)
  2. Working with Orchestrator (50 pages)
  3. Real world use cases (100 pages)
  4. Appendices – Onyx, VIX, troubleshooting, the vCO vApp (50 pages)

If you’re familiar with vCO (if you’ve done the VCAP4-DCA exam, for example, you probably installed and configured Orchestrator as it was on the blueprint) you won’t dwell too long on the first section as there’s not much you won’t already know. The vCO appliance gets a brief mention although it is covered in more detail in the appendices (it was released after the bulk of the book was already completed). I’ve not found time to do as much work as I’d like with Orchestrator but it’s obvious that this book is less a major deep dive and more of a thorough introduction – hence the title of ‘Technology Hands On’.

You can buy the book from Amazon.com or Amazon.co.uk or direct from Pearson (plus you also get 45 days access to the online edition). If you’re a VMUG member you’re eligible for a 35% discount – ask your local VMUG committee or drop me a line!

Further Reading

The official VMware vCO page

The vCO resources page (including forums, videos, FAQ etc)

The unofficial vCO blog

Cody Bunch’s section on vCO at Professional VMware.com

Joerg Lew’s website vCOPortal.de (VCI and all round vCO guru)

Tom Hollingsworth’s review of the book

Twitter people to follow;

Zerto’s Virtual Replication 2.0 – first looks

In this article I’m going to talk about Zerto, a data protection company specialising in virtualized and cloud infrastructures which I recently saw as part of Storage Field Day #2. They’ve presented twice before at Tech Field Days (as part of their launch in June 2011 and Feb 2012) so I was interested to see what new developments (if any) were in store for us. In their own words;

Zerto provides large enterprises with data replication solutions designed specifically for virtualized infrastructure and the cloud. Zerto Virtual Replication is the industry’s first hypervisor-based replication solution for tier-one applications, replacing traditional array-based BC/DR solutions that were not built to deal with the virtual paradigm.

When I first heard the above description I couldn’t help but think of VMware’s SRM product which has been available since June 2008. Zerto’s carefully worded statement is correct in that SRM relies on storage array replication for maximum functionality but I still think it’s slightly disingenuous. To be fair VMware are equally disingenuous when they claim “the only truly hypervisor level replication engine available today” for their vSphere Replication technology – marketing will be marketing! 🙂 Later in this article I’ll clarify the differences between these products but let’s start by looking at what Zerto offers.

Zerto offer a product called Zerto Virtual Replication which integrates with vCenter to replicate your VMware VMs to one or more sites in a simple and easy to use manner. Since July 30th 2012 when v2.0 was released it supports replication to various clouds along with advanced features such as multisite replication and vCloud Director compatibility. Zerto are on an aggressive release schedule given that the initial release (which won ‘Best of Show’ at VMworld 2011) was only a year earlier but in a fast moving market that’s a good thing. For an entertaining 90 second introduction which explains what it offers better than I could, check out the video below from the company’s website;

Just as server virtualization opened up possibilities by abstracting the guest OS from the underlying hardware, so data replication can benefit from moving ‘up the stack’ away from the storage array hardware and into the hypervisor. The extra layer of abstraction lifts certain constraints related to the storage layer;

  • Array agnostic – you can replicate between dissimilar storage arrays (for example Netapp at one end and EMC at the other). For both cloud and DR scenarios this could be a ‘make or break’ distinction compared to traditional array replication which requires similar systems at both ends. In fact you can replicate to local storage if you want – if you’re one of the growing believers in the NoSAN movement that could be useful…
  • Storage layout agnostic – because you choose which VMs to replicate rather than which volume/LUN on the array you’re less constrained when designing or maintaining your storage layout. When replicating you can also change between thin and thick provisioning, or from SAN to NAS, or from one datastore layout to another. A typical use case might be to replicate from thick at the source to thin provisioning at the DR location for example. There is a definite trend towards VM-aware storage and ditching LUN constraints – you see it with VMware’s vVols, storage arrays like Tintri and storage hypervisors like Virsto, so having the same liberating concept for DR makes a lot of sense.

Zerto goes further than just being ‘storage agnostic’ as it allows further flexibility;

  • Replicate VMs from vCD to vSphere (or vice versa). vCD to vCD is also supported. This is impressive stuff as it understands the Organization Networks, vApp containers etc and creates whatever’s needed to replicate the VMs.
  • vSphere version agnostic – for example use vSphere 4.1 at one end and vSphere 5.0 at the other. For large companies which can typically lag behind this could be the prime reason to adopt Zerto.

With any replication technology bandwidth and latency are concerns, as is WAN utilisation. Zerto uses virtual appliances on the source and destination hosts (combined with some VMware API calls, not a driver as this article states) and therefore isn’t dependent on changed block tracking (CBT), is storage protocol agnostic (i.e. you can use FC, iSCSI or NFS for your datastores) and offers compression and optimisation to boot. Zerto provide a profiling tool to ‘benchmark’ the rate of change per VM before you enable replication, thus allowing you to predict your replication bandwidth requirements. Storage I/O control (SIOC) is not supported today although Zerto are implementing their own functionality to allow you to limit replication bandwidth. Today it’s done on a ‘per site’ basis although there’s no scheduling facility so you can’t set different limits during the day or at weekends.
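
If you want a rough feel for what that profiling exercise feeds into, the back-of-the-envelope arithmetic looks something like the sketch below. To be clear, this is not Zerto’s tool or algorithm – just plain JavaScript with hypothetical change rates, an assumed compression ratio and a crude peak factor – but it shows why measuring the rate of change per VM matters before you commit to a WAN link.

```javascript
// Rough WAN-sizing arithmetic - NOT Zerto's profiling tool or algorithm.
// All figures below are hypothetical examples.
var vms = [
    { name: "sql01",  changeGBPerDay: 40 },  // busy database server
    { name: "web01",  changeGBPerDay: 2 },
    { name: "file01", changeGBPerDay: 15 }
];

var compressionRatio = 0.5;  // assume WAN compression roughly halves the traffic
var peakFactor = 3;          // changes rarely arrive evenly across 24 hours

var totalGBPerDay = 0;
for (var i = 0; i < vms.length; i++) {
    totalGBPerDay += vms[i].changeGBPerDay;
}

// 1 GB is roughly 8,000 Mbit; there are 86,400 seconds in a day
var averageMbps = (totalGBPerDay * compressionRatio * 8000) / 86400;
var peakMbps = averageMbps * peakFactor;

console.log("Average replication bandwidth: " + averageMbps.toFixed(1) + " Mbit/s");
console.log("Suggested WAN sizing (peak):   " + peakMbps.toFixed(1) + " Mbit/s");
```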

VMware’s vSphere is the only hypervisor supported today although we were told the roadmap includes others (but no date was given). With Hyper-V v3 getting a good reception I’d expect to see support for it sooner rather than later and that could open up some interesting options.

Zerto’s Virtual Replication vs VMware’s SRM

Let’s revisit that claim that Zerto is the “industry’s first hypervisor-based replication solution for tier-one applications“. With the advent of vSphere 5.1 VMware now have two solutions which could be compared to Zerto – vSphere Replication and SRM. The former is bundled free with vSphere but is not comparable – it’s quite limited (no orchestration, testing, reporting or enterprise-class DR functions) and only really intended for data protection not full DR. SRM on the other hand is very much competition for Zerto although for comparable functionality you require array level replication.

When I mentioned SRM to the Zerto guys they were quick to say it’s an apples-to-oranges comparison which to a point is true – with Zerto you specify individual or groups of VMs to replicate whereas with SRM you’re still stuck specifying volumes or LUNs at array level. Both products have their respective strengths but there’s a large overlap in functionality and many people will want to compare them. SRM is very well known and has the advantage of VMware’s backing and promotion – having a single ‘throat to choke’ is an attractive proposition for many. I’m not going to list the differences because others have already done all the hard work;

Zerto compared to vSphere Replication – the official Zerto blog

Zerto compared to Site Recovery Manager – a great comparison by Marcel Van den Berg (also includes VirtualSharp’s Reliable DR)

Looking through the comparisons with SRM there are quite a few areas where Zerto has an advantage, although to put it in context check out the pricing comparison at the end of this article;
NOTE: Since the above comparison was written SRM v5.1 has added support for vSphere Essentials Plus but everything else remains accurate

  • RTO in the low seconds rather than 15 mins
  • Compression of replication traffic
  • No resync required after host failures
  • Consistency groups
  • Cloning of the DR VMs for testing
  • Point in time recovery (up to a max of 5 days)
  • The ability to flag a VMDK as a pagefile disk. In this instance it will be replicated once (and then stopped) so that during recovery a disk is mounted but no replication bandwidth is required. SRM can’t do this and it’s very annoying!
  • vApps supported (and automatically updated when the vApp changes)
  • vCloud Director compatibility

If you already have storage array replication then you’ll probably want to evaluate Zerto and SRM.
If you don’t have (or want the cost of) array replication or want the flexibility of specifying everything in the hypervisor then Zerto is likely to be the best solution.

DR to the Cloud (DRaaS)

Of particular interest to some customers and a huge win for Zerto is the ability to recover to the cloud. Building on the flexibility to replicate to any storage array and to abstract the underlying storage layout allows you to replicate to any provider who’s signed up to Zerto’s solution. Multisite and multitenancy functionality was introduced in v2.0 and today there are over 30 cloud providers signed up including some of the big guys like Terremark, Colt, and Bluelock. Zerto have tackled the challenges of a single appliance (providers obviously wouldn’t want to run one per customer) providing secure multi-tenant replication with resource management included.

vCloud Director compatibility is another feather in Zerto’s cap, especially when you consider that VMware’s own ‘vCloud Suite’ lags behind (SRM only has limited support for vCD). One has to assume that this will be a short term advantage as VMware have promised tighter integration between their products.

Pricing

Often this is what it comes down to – you can have the best solution on the market but if you’re charging the most then that’s what people will expect to get. Zerto are targeting the enterprise so maybe it shouldn’t be a surprise that they’re also priced at the top end of the market. The table below shows pricing for SRM (both Standard and Enterprise editions) and Zerto;

SRM Standard – $195 per VM
SRM Enterprise – $495 per VM
Zerto Virtual Replication – $745 per VM

As you can see Zerto costs a significant premium over SRM. When making that comparison you may need to factor in the cost of storage array replication, as SRM using vSphere Replication is severely limited. These are all list prices so get your negotiating hat on! We were told that Zerto were seeing good adoption from all sizes of customer, from 15 VMs through to service providers.
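
To make the premium (and the array replication caveat) concrete, here’s a quick worked example using the list prices above for a hypothetical 50-VM estate. The array replication licence figure is a pure placeholder – it varies enormously by vendor and is often capacity-based – so treat this as an illustration of the arithmetic rather than a real quote.

```javascript
// List-price comparison for a hypothetical 50-VM estate, using the per-VM
// prices from the table above. The array-replication licence is a placeholder.
var protectedVMs = 50;

var srmStandardPerVM   = 195;
var srmEnterprisePerVM = 495;
var zertoPerVM         = 745;

var arrayReplicationLicence = 25000;  // hypothetical - check with your storage vendor

console.log("SRM Standard   : $" + (protectedVMs * srmStandardPerVM   + arrayReplicationLicence));
console.log("SRM Enterprise : $" + (protectedVMs * srmEnterprisePerVM + arrayReplicationLicence));
console.log("Zerto          : $" + (protectedVMs * zertoPerVM));
```

Once array replication licensing (and the matching arrays at both ends) is factored in, the per-VM gap can narrow considerably – which is exactly why the TCO comparison is rarely as simple as the headline price.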

Final thoughts

I’ve not used SRM in production since the early v1.0 days and I’ve not used Zerto in production either, so my thoughts are based purely on what I’ve read and been shown. I was very impressed with Zerto’s solution which certainly looks very polished and obviously trumps SRM in a few areas – which is why I took the time to investigate and write up my findings in this blogpost. From a simple and quick appliance-based installation (which was shown in a live demo to us) through to the GUI and even the pricing model, Zerto’s aim is to keep things simple and it looks as if they’ve succeeded (despite quite a bit of complexity under the hood). If you’re in the market for a DR solution take time to review the comparison with SRM above and see which fits your requirements and budget. Given how comprehensive the feature set is I wouldn’t be surprised to see this come out on top over SRM for many customers despite VMware’s backing for SRM and the cost differential.

Multi-hypervisor management could be a ‘killer feature’ for Zerto. It would distinguish the product for the foreseeable future (I’d be surprised to see this in VMware’s roadmap anytime soon despite their more hypervisor friendly stance) and needs to happen before VMware bake comparable functionality into the SRM product. Looking at the way VMware are increasingly bundling software to leverage the base vSphere product, there’s a risk that SRM features work their way down the stack and into lower priced SKUs – good for customers but a challenge for Zerto. There are definitely intriguing possibilities though – how about replicating from VMware to Hyper-V for example? As the use of cloud infrastructure increases the ability to run across heterogeneous infrastructures will become key and Zerto have a good start in this space with their DRaaS offering. If you don’t want to wait and you’re interested in multi-hypervisor management (and conversion) today check out Hotlink (thanks to my fellow SFD#2 delegates for that tip).

I see a slight challenge in Zerto targeting the enterprise specifically. Typically these larger companies will already have storage array replication and are more likely to have a mixture of virtual and physical, and therefore will still need array functionality for physical applications. This erodes the value proposition for Zerto. Furthermore if you have separate storage and virtualisation teams then moving replication away from the storage array could break accepted processes, not to mention put noses out of joint! Replication at the storage array is a well accepted and mature technology whereas virtualisation solutions still have to prove themselves in some quarters. In contrast VMware’s SRM may be seen to offer the best of both worlds by offering the choice of both hypervisor and/or array replication – albeit with a significantly less powerful replication engine (if using vSphere Replication) and with the aforementioned constraints around replicating LUNs rather than VMs. Zerto also have the usual challenges around convincing enterprises that as a ‘startup’ they’re able to provide the expected level of support – for an eloquent answer to this read ‘Small is beautiful’ by Sudheesh Nair on the Nutanix blog (who face the same challenges).

Disclosure: the Storage Field Day #2 event is sponsored by the companies we visit, including flight and hotel, but we are in no way obligated to write (either positively or negatively) about the sponsors.

Further Reading

Steve Foskett and Gabrie Van Zanten discuss Zerto (from VMworld 2012)

Good introduction to Zerto v1.0 (Marcel Van Den Berg)

Zerto and vSphere Host replication – what’s the difference?

Zerto vs SRM (and VirtualSharp’s ReliableDR)

Step away from the array – fun Zerto blog in true Dr Seuss style

vBrownbag Zerto demo from VMworld Barcelona 2012

Zerto replication and disaster recovery the easy way

Take two Zerto and call me in the morning (Chris Wahl from a previous TFD)

Musings of Rodos – Zerto (Rodney Haywood from a previous TFD)

451 group’s report on Zerto (March 2012)

Storage field day #2 coverage

Twitter contacts

@zertocorp – the official Zerto twitter account

Shannon Snowdon – Senior Technical Marketing Architect

@zertojjones – another Zerto employee

Using vCenter Operations v5 – Capacity features and conclusions (3/3)

In the first part of this series I introduced vCOps and its requirements before covering the new features in part two. This final blogpost covers the capacity features (available in the Advanced and higher editions) along with pricing information and my conclusions.

The previous trial I used didn’t include the capacity planning elements so I was keen to try this out. I’d used CapacityIQ previously (although only briefly) and found it useful but combined with the powerful analytics in vCOps it promises to be an even more compelling solution. VMware have created four videos with Ben Scheerer from the vCOps product team – they’re focused on capacity but if you’ve watched Kit Colbert’s overview much of it will be familiar;

UPDATE APRIL 2012 – VMware have just launched 2.5 hrs of free training for vCOps!

If you don’t have time to watch the videos and read the documentation (section 4 in the Advanced Getting Started guide) here are the key takeaways;

  • Capacity information is integrated throughout the product although modelling is primarily found under the ‘Planning’ view. Almost every view has some capacity information included either via the dynamic thresholds (which indicate the standard capacity used) or popup graphs of usage and trending.
  • Storage is now included in the capacity calculations (an improvement over CapacityIQ) resulting in a more complete analysis. Datastores are now shown in the Operations view although if you’re like me and use NFS direct to the guest OS it’s not going to be as comprehensive as using block protocols.
  • The capacity tools require more tailoring to your environment than the performance aspects but provide valuable information.
  • With vCOps you can both view existing and predicted capacity, and you can model changes like adding hosts or VMs – a simplified sketch of that kind of headroom arithmetic follows this list.
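
To illustrate what ‘modelling’ means in practice, here’s a deliberately simplified sketch of the kind of headroom arithmetic a capacity tool performs. It is not vCOps’ actual analytics – the real product trends growth over time and accounts for peaks, failover capacity, storage and so on – and every number below is hypothetical.

```javascript
// Simplified "how many more VMs fit?" model - an illustration only, not
// vCOps' analytics. All numbers are hypothetical.
var cluster = {
    totalGHz: 150, usedGHz: 90,   // CPU capacity vs current demand
    totalGB:  512, usedGB: 350    // memory capacity vs current demand
};

var avgVM = { ghz: 1.5, gb: 6 };  // average footprint of the existing VMs
var reserve = 0.2;                // keep 20% back for failover and spikes

function remainingVMs(total, used, perVM) {
    var usable = total * (1 - reserve);
    return Math.max(0, Math.floor((usable - used) / perVM));
}

var byCpu = remainingVMs(cluster.totalGHz, cluster.usedGHz, avgVM.ghz);
var byMem = remainingVMs(cluster.totalGB, cluster.usedGB, avgVM.gb);

// The most constrained resource determines the answer
console.log("Room for roughly " + Math.min(byCpu, byMem) + " more average VMs");
```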

Continue reading Using vCenter Operations v5 – Capacity features and conclusions (3/3)

Using vCenter Operations v5 – What’s new (2/3)

In part one of Using vCenter Operations I covered what the product does along with the different versions available and deployment considerations. In this post I’ll delve into what’s new and improved and in the final part I’ll cover capacity features, product pricing, and my overall conclusions. I had intended to cover the configuration management and application dependency features too but it’s such a big product I’ll have to write another blogpost or I’ll never finish!

Introductory learning materials

UPDATE APRIL 2012 – VMware have just launched 2.5 hrs of free training for vCOps.

Deep dive learning materials;

What’s new and improved in vCOps

Monitoring is a core feature and for some people the only one they’re concerned about. As the size of your infrastructure grows and becomes more complex the need for a tool to combine compute, network, and storage in real time also grows. Here are my key takeaways;

  • there’s a new dashboard screen which shows health (immediate issues), risks (upcoming issues) and efficiency (opportunities for improvement) in a single screen. The dashboard can provide a high level view of your infrastructure and works nicely on a plasma screen as your ‘traffic light’ view of the virtual world (and physical if you go with Enterprise+). The dashboard can also be targeted at the datacenter, cluster, host or VM level which I found very useful, although you can only customise the dashboard in the Enterprise versions. There is still the Operations view (the main view in vCOps v1) which now also includes datastores. This view scales extremely well – even if you have thousands of VMs and datastores across multiple vCenters they can all be displayed on a single screen.
    NOTE: If you find some or all of your datastores show up as grey with no data (as mine did) there is a hotfix available via VMware support.
Continue reading Using vCenter Operations v5 – What’s new (2/3)