
Archive for the ‘Virtualisation’ Category

IP Expo or VMworld Europe?

September 2nd, 2013 No comments

With VMworld in San Francisco, which I couldn’t attend, now a distant memory, my thoughts have turned to attending the European replay to be held in Barcelona this October. As VMware have grown over the years so has the conference (see some stats and how competing conferences compare) and, as I always tell my team, it isn’t just a virtualisation show – it covers all aspects of infrastructure. The downside is that VMworld isn’t vendor neutral (the clue is in the name!) so you won’t see Microsoft, Citrix, or Amazon solutions despite their increasing relevance to those interested in the enterprise cloud and virtualisation industry. I have to fund my own way to VMworld every year (though a blogger’s pass to the conference takes care of the lion’s share) so I’m definitely concerned with getting value for money.

This is the first year when I’ve questioned “is VMworld the best place to be?”

In the UK there’s a competing show, IP Expo, which for the last few years has clashed with VMworld Europe and does again this year. In the future I’m hoping this will change as the current venue, Earls Court, is due to be sold so alternative arrangements will have to be made. Unlike VMworld, IP Expo is vendor neutral so there’s a level playing field and you can investigate solutions and technology from all parties (hey, even Oracle are there for those 1% who use OVM!). VMworld has always focused on infrastructure more than the application tier and with the spinoff of Pivotal earlier this year that seems likely to continue, whereas IP Expo tends to be more well rounded. It’s interesting to note that Annika Jimenez, lead data scientist at Pivotal, is a keynote speaker at IP Expo while the VMworld keynotes were remarkably Pivotal free.

I attended an early IP Expo event (last decade) but haven’t been since so bear that in mind when considering my points below. I spoke to Jane Rimmer who’s attended both shows and sees things from both the attendee and the exhibitor viewpoints but nonetheless I’m probably making some assumptions! A few numbers;

                     VMworld                        IP Expo
Conference length:   3 days (plus a partner day)    2 days
Attendees:           6000                           14000
Exhibitors:          99                             240
Sessions/seminars:   265                            240

Why go to VMworld?

  1. For VMware focused technical content you can’t get anywhere else. There are over 265 sessions throughout the three days and while some are still heavy on the marketing there’s plenty of deep dive technical sessions. Group discussions are also an invaluable way to quiz senior engineers directly, something that’s very hard to do in any other forum. The onsite labs also offer a chance to get hands on with the latest technologies although there is a public beta still running outside the conference and the IP Expo labs look to be centered around Amazon and Oracle solutions.
  2. It’s three days of technology immersion, not two (unless you’re a partner in which case I’m assuming you’re going to VMworld regardless). Frankly it’s impossible to do justice to all the content in three days let alone cover even more ground with more exhibitors in two days!
  3. It’s the centre of the VMware universe for those three days. Anyone who’s anyone in the VMware ecosystem will be there – vendors, technical experts, bloggers etc. In theory there will be some announcements from VMware but in the past anything significant obviously comes out first at the US show. For career networking these are the folks you want to meet.
  4. It’s a conference, not an exhibition – it’s NOT free and therefore should be more educational and less sales focused. By introducing a barrier to entry it ups the ante – people don’t show up just to have a day out of the office :-). Vendors know that attendees have paid good money to be there so value your time (well, some more than others), VMware prepare a lot of content, and the other attendees you meet tend, I assume, to be more senior and focused.

Why go to IP Expo?

  1. It’s free. No cost, an easier ROI to justify, and less upfront planning required!
  2. It’s free. Did I mention that already? You can register here.
  3. It’s vendor neutral. You can see solutions from all the competing vendors in one place (including a VMware zone).
  4. It’s only two days so less time out of the office and less disruption to your workload.
  5. Amazon are the primary sponsor this year, the first time they’ve sponsored a third party event in Europe. Did anyone say Amazon are targeting the enterprise? Even if VMware is your lifeblood you need to know your competition (or coopetition) :-).
  6. If you’re interested in the application layer as well as infrastructure, IP Expo has more of interest.
  7. You can catch up on VMworld later, whereas with IP Expo you have to be there! Sessions at VMworld are recorded and access can be purchased separately, so while this is really a benefit of VMworld, if you’re interested in alternative vendors it’s almost possible to do both…

Which will you attend and why?

Categories: VMware

Twelve weeks is a long time in tech!

May 13th, 2013 No comments

Firstly an apology to those who regularly read my blog – I’ve just returned from three months’ paternity leave where I was largely ‘off the grid’ and had very little to do with technology and lots to do with changing nappies and singing nursery rhymes in public! I could write a blogpost about technology parallels but that’s already been covered by Bob Plankers so I thought I’d at least check on industry developments and write up the events that caught my attention in those months. In no particular order;

Obviously three months isn’t very long in strategic terms although there are a couple of interesting developments. With the acquisition of Virsto and the announcement of NSX VMware are progressing their ‘software defined’ datacentre vision while the hybrid cloud move was leaked last year and now seems obvious given their lack of progress against rival public cloud providers like Amazon. EMC aren’t ignoring the threat that the shift towards open source, commodity, and ‘software defined’ products poses to their existing product lines although it’ll be interesting to see how other storage vendors respond to the same challenges. From my limited viewpoint (my company aren’t really doing ‘cloud’ at all if you ignore shadow IT) OpenStack seems to be gaining ground – I see more coverage and more people I know getting involved.

Anything I’ve missed? What’s in store in the next twelve weeks? Interesting times!

Categories: VMware

Spring has sprung and it’s LonVMUG time again!

April 16th, 2013 1 comment

For those of us in the UK it may feel as if winter has gone on forever but finally the sun has shown its face and everyone has a new spring in their step. What to do with all that pent up energy?

Attend the London VMUG on Thursday 25th April of course! There’s a great line up of speakers and sponsors as always, and the sessions on Puppet, cloud storage, and heterogeneous vCD will get my attention. Below is the full agenda but note that you need to register for free in advance.

Where to go for the usergroup

London Chamber of Commerce and Industry, 33 Queen Street
London, EC4R 1AP (map)

Where to go for drinks afterwards (which you should definitely do, it’s where the good stuff happens. It’s a five minute walk from the usergroup)

The Pavilion End pub
23 Watling Street, Moorgate
London
EC4M 9BR (map)

Twitter: @lonvmug (or hashtag #lonvmug)

[Image: April 2013 VMUG agenda]

Hope to see you there!

Categories: VMware

Automating vSphere with Cody Bunch – book review

March 6th, 2013 No comments

vCenter Orchestrator (vCO) has been around since May 2009 when vSphere 4 was initially released. Despite being around for nearly four years it doesn’t seem to get much attention, even though it’s free to anyone who’s purchased vCenter and has the potential to save effort for system administrators. There are a couple of reasons for this in my opinion – firstly it isn’t ready to go by default; you have to configure it manually and that’s not as straightforward as it could be. Secondly it looks intimidating once configured and requires some knowledge of the vSphere API and preferably some JavaScript. While neither is that hard to get to grips with, combined they make for quite a barrier to entry.

The first issue has been made significantly easier by the availability of the vCO appliance, and this book by Cody Bunch aims to take away some of the mystique behind the second challenge. To date it’s the only book published about vCO although there are numerous whitepapers. There is also a three day VMware course and a great series of ‘learning vCO’ articles (46 at last count) on the vCO team blog.

The book comes in at 260 pages, so it’s not quite the ‘doorstop’ that Scott Lowe’s ‘Mastering vSphere’ books tend to be. As with many technical books, however, the key is in understanding the content rather than having lots of it – you could easily spend a week learning a specific part of the API while you perfect a real world workflow. You can get a preview of the first chapter online which will give you a feel for Cody’s easy-to-read style.

The book is split into three sections plus appendices;

  1. Introduction, installation and configuration (50 pages)
  2. Working with Orchestrator (50 pages)
  3. Real world use cases (100 pages)
  4. Appendices – Onyx, VIX, troubleshooting, the vCO vApp (50 pages)

If you’re familiar with vCO (if you’ve done the VCAP4-DCA exam, for example, you probably installed and configured Orchestrator as it was on the blueprint) you won’t dwell too long on the first section as there’s not much you won’t already know. The vCO appliance gets a brief mention although it is covered in more detail in the appendices (it was released after the bulk of the book was already completed). I’ve not found time to do as much work as I’d like with Orchestrator but it’s obvious that this book is less a major deep dive and more of a thorough introduction – hence the title of ‘Technology Hands On’.

You can buy the book from Amazon.com or Amazon.co.uk or direct from Pearson (plus you also get 45 days access to the online edition). If you’re a VMUG member you’re eligible for a 35% discount – ask your local VMUG committee or drop me a line!

Further Reading

The official VMware vCO page

The vCO resources page (including forums, videos, FAQ etc)

The unofficial vCO blog

Cody Bunch’s section on vCO at Professional VMware.com

Joerg Lew’s website vCOPortal.de (VCI and all round vCO guru)

Tom Hollingsworth’s review of the book

Twitter people to follow;

BetterWPSecurity – a great WordPress plugin but proceed with caution

February 19th, 2013 No comments

I’ve recently installed the BetterWPSecurity WordPress plugin, and found that while it’s very useful and does increase the security of WordPress it can also break your site.

Ah, Monday morning and the start of my three months’ paternity leave looking after my six-month-old son Zach. During his morning nap I logged into my blog to work on an article and noticed that my blog wasn’t loading articles correctly even though the home page worked just fine. Investigating further and looking at my site stats (I use both the Jetpack plugin and Google Analytics) clearly showed that something broke at the start of the weekend – I had nearly no traffic all weekend. Having just referred a colleague to my site for some information, and on my first day of paternity leave (ie less time on my hands, not more as some may think), this was definitely not ideal timing!

My first step was to check my logs for information, in this case the BetterWPSecurity log of changed files. This revealed that the .htaccess file in the root directory was changed late on Friday night at 11:35pm – and I knew that wasn’t me as I was tucked up in bed. My first thought was a hack, as the .htaccess file controls access to the site, but there was no redirect or site graffiti and the homepage still worked so that didn’t seem likely. I logged in via SSH to have a look at the .htaccess file but didn’t see anything obvious, although I’m no WordPress expert.


My priority was to get the blog working again so I tried restoring a copy of the changed file from the previous week’s backup (made via the BackWPUp plugin) only to find the backup wasn’t usable. Bad plugin! Luckily I’m a believer in ‘belt and braces’ and I knew my hosting company, EvoHosting, also took backups. I logged a call with them and within the hour they’d replied with the contents of the file from a week earlier. Sure enough the file had been changed but looking at the syntax it appeared to be an error rather than a malicious hack.

My .htaccess file when the site was working;

# BEGIN WordPress

RewriteEngine On

RewriteBase /

RewriteRule ^index\.php$ - [L]

RewriteCond %{REQUEST_FILENAME} !-f

RewriteCond %{REQUEST_FILENAME} !-d

RewriteRule . /index.php [L]

# END WordPress

My .htaccess file after the suspicious change;

# BEGIN Better WP Security

Order allow,deny

Allow from all

Deny from 88.227.227.32

# END Better WP Security

RewriteBase /

RewriteRule ^index\.php$ - [L]

RewriteCond %{REQUEST_FILENAME} !-f

RewriteCond %{REQUEST_FILENAME} !-d

RewriteRule . /index.php [L]

</IfModule>

# END WordPress
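For comparison, here’s what a correctly merged file should look like – a sketch on my part, assuming the standard WordPress rewrite block (the orphaned </IfModule> above suggests the original had the usual <IfModule mod_rewrite.c> wrapper) with the plugin’s ban block added above it rather than overwriting it:

```apache
# BEGIN Better WP Security
Order allow,deny
Allow from all
Deny from 88.227.227.32
# END Better WP Security

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
```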

I backed up the suspicious copy of the file (for future reference, ie writing this blogpost), restored the original et voila – the blog was working again. Step one complete, now to find the root cause…

Part of any diagnostic process is the question ‘what’s changed?’ and I had a suspicion that BetterWPSecurity could be the culprit as I’d only installed it a few weeks earlier. There was also the obvious issue of the new code in the .htaccess file which looked to belong to BetterWPSecurity. I checked the site access logs, which confirmed my hypothesis – someone had attempted to break into my site and, while attempting to block the attacker, BetterWPSecurity had mangled my .htaccess file. The logs below have been truncated to remove many of the brute force login attempts (there were plenty more) but note that on the final line (after BetterWPSecurity has blocked the attacker) the HTTP return code was 418 (“I’m a teapot”) rather than 200, and the suspect IP 88.227.227.32 is the same as the one denied in the mangled .htaccess file. Yes, you read that right, “I’m a teapot”! Here’s a full explanation for that April Fools’ error code. :-)

88.227.227.32 - - [15/Feb/2013:23:35:19 +0000] "POST /wp-login.php HTTP/1.1" 200 3017 "http://www.vexperienced.co.uk//wp-login.php" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0.1) Gecko/20100101 Firefox/4.0.1"
88.227.227.32 - - [15/Feb/2013:23:35:19 +0000] "POST /wp-login.php HTTP/1.1" 200 3017 "http://www.vexperienced.co.uk//wp-login.php" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0.1) Gecko/20100101 Firefox/4.0.1"
88.227.227.32 - - [15/Feb/2013:23:35:19 +0000] "POST /wp-login.php HTTP/1.1" 200 3017 "http://www.vexperienced.co.uk//wp-login.php" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0.1) Gecko/20100101 Firefox/4.0.1"
88.227.227.32 - - [15/Feb/2013:23:35:19 +0000] "POST /wp-login.php HTTP/1.1" 200 3017 "http://www.vexperienced.co.uk//wp-login.php" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0.1) Gecko/20100101 Firefox/4.0.1"
88.227.227.32 - - [15/Feb/2013:23:35:19 +0000] "POST /wp-login.php HTTP/1.1" 418 5 "http://www.vexperienced.co.uk//wp-login.php" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0.1) Gecko/20100101 Firefox/4.0.1"
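Spotting this pattern by eye works for a handful of lines but not for a whole weekend of logs. As a quick sketch (the regex targets combined-format lines like those above; the helper and sample data are my own, not part of any plugin), a few lines of Python can tally wp-login.php POSTs per IP and flag which IPs were eventually served a 418:

```python
import re
from collections import Counter

# Matches the start of an Apache combined-log-format line:
# ip ident user [timestamp] "METHOD /path PROTO" status ...
LOG_PATTERN = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+)[^"]*" (\d{3})')

def summarise(lines):
    """Count wp-login.php POSTs per IP and note which IPs ever received a 418."""
    attempts = Counter()
    blocked = set()
    for line in lines:
        m = LOG_PATTERN.match(line)
        if not m:
            continue
        ip, method, path, status = m.groups()
        if method == "POST" and path.endswith("wp-login.php"):
            attempts[ip] += 1
            if status == "418":  # Better WP Security's "I'm a teapot" block
                blocked.add(ip)
    return attempts, blocked

# Hypothetical sample in the same format as the log excerpt above
sample = [
    '88.227.227.32 - - [15/Feb/2013:23:35:19 +0000] "POST /wp-login.php HTTP/1.1" 200 3017 "-" "Mozilla/5.0"',
    '88.227.227.32 - - [15/Feb/2013:23:35:19 +0000] "POST /wp-login.php HTTP/1.1" 418 5 "-" "Mozilla/5.0"',
]
attempts, blocked = summarise(sample)
print(attempts)  # Counter({'88.227.227.32': 2})
print(blocked)   # {'88.227.227.32'}
```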

So BetterWPSecurity led me to the fault but also caused it. To be fair the plugin does warn you which settings are potentially going to cause issues but I’d assumed that it wouldn’t be me – dangerous things, assumptions. I’ve rectified the issue by restricting BetterWPSecurity from altering core system files.

My blog is fixed and I’m feeling quite chuffed that it was all resolved during a long lunchbreak – not a bad day’s work if I do say so myself! Lesson for today? Take warnings seriously and have multiple backups!

Categories: VMware

My ‘chinwag’ with Mike Laverick

January 21st, 2013 No comments

Late last week I joined an illustrious line of community bloggers, vendors, and authors by having a ‘chinwag’ with Mike Laverick. Anyone who knows Mike knows that a quick chat can easily last an hour for all the right reasons – he’s passionate about VMware and technology in general and good at presenting complex ideas in an easily understood manner. I guess that’s why he recently became a senior cloud evangelist for VMware! We discussed a few topics which are close to my heart at the moment;

  • Oracle
  • vCloud Director
  • Storage Field Day

You can listen to the audio (MP3 or the iPod/iPad friendly M4V) or watch the YouTube video. As time is limited on the actual chinwag I thought I’d offer a few additional thoughts on a couple of the topics we discussed.

Oracle and converged infrastructure

I didn’t want to get embroiled in a discussion about Oracle’s support stance on VMware as that’s been covered many times before, but it’s definitely still a barrier. Some of our Oracle team have peddled the ‘it’s not supported’ argument to senior management and even though I’ve clarified the ‘supported vs certified’ distinction it’s a difficult perception to alter. Every vendor wants to push their own solutions so you can’t blame Oracle, but it sure is frustrating!

Of more interest to me is where converged infrastructure is going. As we discussed on the chinwag Oracle are an interesting use case for converged infrastructure (or engineered systems, pick your terminology of choice) because it includes the application tier. Most other converged offerings (VCE, FlexPod, vStart and even hyperconverged solutions like Nutanix) tend to stop at the hypervisor, thus providing an abstraction layer that you can run whatever workload you like on. Oracle (with the possible exception of IBM?) may be unique in owning the entire stack from hardware – storage, networking, compute – through the hypervisor and up to their crown jewels, the Oracle database and applications. This gives them a position of strength to negotiate from even when certain layers are weak in comparison to ‘best of breed’, as is the case with OracleVM. Archie Hendryx explores this in his blogpost although I think he undersells the advantage Oracle have of owning a tier 1 application – Dell’s vStart or VCE’s vBlock may offer competition from an infrastructure perspective but my company don’t run any Dell or VCE applications. If you’re not Oracle how do you compete with this? You team up to provide a ‘virtual stack’ optimised for various workloads – today VDI is the most common (see reference architectures from Nexenta, Nimble Storage et al). As the market for converged infrastructure grows I think we’ll see more of these ‘vertical’ stack style offerings.

Here are a few blogposts I found interesting related to Oracle’s solutions: a look at the Exadata infrastructure, who manages the Exadata, and Exalogic 2.0 Focuses on Elastic Cloud.

vCloud Director

After I described my problem getting vCD tabled as a viable technology for lab management Mike rightly pointed out that many people are using vCD in test and dev – maybe more than in production. I agree with Mike but suspect that most are using dev/test as a POC for a production private cloud, not as a purpose-built lab management environment. I didn’t get time to discuss a couple of other points which both complicate the introduction of vCD even if you have an existing VMware environment;

  • Introducing vCD (or any cloud solution for that matter) is potentially a much bigger change than the initial introduction of server virtualisation was. Server virtualisation mainly affected the infrastructure teams, although provisioning, purchasing, networks and storage were all impacted. If you’re intending to deliver test/dev environments you’re suddenly incorporating your applications too, potentially including the whole development/delivery lifecycle. If you go the whole hog to self-service then you potentially include an even larger part of the business, right up to the end users. That’s a very disruptive change for some ‘infrastructure guy’ to be proposing!
  • vCD recommends Enterprise+ licensing, which means I have to argue for the highest licensing level for test/dev even if I don’t have it in production

If you’re interested in vCloud Director as a lab management solution here are links to some of the companies and technologies I mentioned;  SkyTap Cloud, VMworld session OPS-CSM2150 – “Lab management with VMware vCloud Director: Software development customer panel”, Frank Brix’s network fencing blogpost, and a good generic post about using the cloud for development.

Categories: VMware

Here’s what you missed in 2012 (LonVMUG)

December 3rd, 2012 No comments

It’s that time of year when I book the next London VMUG session into my calendar and rather than my usual ‘here’s the agenda, you should go‘ blogpost I thought I’d recap what the last year has delivered. If this doesn’t convince you that there’s value in attending a free event where you could have learnt about all the topics listed below as well as networked with your peers then nothing will. :-)

If there’s a topic you’d like covered or if you’d like to present something yourself get in touch with the organising committee. I’m planning to present at one of next year’s VMUG sessions (it’s about time!) because it’s a user group and real world experience can be gold dust for others to learn from. I’m told we’re a friendly audience!

Before you continue, register for the next session on 24th Jan 2013!


I’ve grouped them according to some industry trends so your own ‘pointy haired boss’ will also see the value;

I could mention the giveaways (iPad, Fusion-IO card, t-shirts, AppleTV etc) and the free beers afterwards, the fact we had at least five VCDXs presenting, and the live labs from EMC, VMTurbo, and Embotics – but you’re already sold, right?

Register for the next session on 24th Jan 2013 (did I mention it’s free?)

Categories: VMware

Zerto’s Virtual Replication 2.0 – first looks

November 28th, 2012 2 comments

In this article I’m going to talk about Zerto, a data protection company specialising in virtualized and cloud infrastructures which I recently saw as part of Storage Field Day #2. They’ve presented twice before at Tech Field Days (as part of their launch in June 2011 and Feb 2012) so I was interested to see what new developments (if any) were in store for us. In their own words;

Zerto provides large enterprises with data replication solutions designed specifically for virtualized infrastructure and the cloud. Zerto Virtual Replication is the industry’s first hypervisor-based replication solution for tier-one applications, replacing traditional array-based BC/DR solutions that were not built to deal with the virtual paradigm.

When I first heard the above description I couldn’t help but think of VMware’s SRM product which has been available since June 2008. Zerto’s carefully worded statement is correct in that SRM relies on storage array replication for maximum functionality but I still think it’s slightly disingenuous. To be fair VMware are equally disingenuous when they claim “the only truly hypervisor level replication engine available today” for their vSphere Replication technology – marketing will be marketing! :-) Later in this article I’ll clarify the differences between these products but let’s start by looking at what Zerto offers.

Zerto offer a product called Zerto Virtual Replication which integrates with vCenter to replicate your VMware VMs to one or more sites in a simple and easy to use manner. Since July 30th 2012, when v2.0 was released, it supports replication to various clouds along with advanced features such as multisite replication and vCloud Director compatibility. Zerto are on an aggressive release schedule given that the initial release (which won ‘Best of Show’ at VMworld 2011) was only a year earlier, but in a fast moving market that’s a good thing. For an entertaining 90 second introduction which explains what it offers better than I could, check out the video below from the company’s website;

Just as server virtualization opened up possibilities by abstracting the guest OS from the underlying hardware so data replication can benefit from moving ‘up the stack’ away from the storage array hardware and into the hypervisor. The extra layer of abstraction lifts certain constraints related to the storage layer;

  • Array agnostic – you can replicate between dissimilar storage arrays (for example Netapp at one end and EMC at the other). For both cloud and DR scenarios this could be a ‘make or break’ distinction compared to traditional array replication which requires similar systems at both ends. In fact you can replicate to local storage if you want – if you’re one of the growing believers in the NoSAN movement that could be useful…
  • Storage layout agnostic – because you choose which VMs to replicate rather than which volume/LUN on the array you’re less constrained when designing or maintaining your storage layout. When replicating you can also change between thin and thick provisioning, or from SAN to NAS, or from one datastore layout to another. A typical use case might be to replicate from thick at the source to thin provisioning at the DR location for example. There is a definite trend towards VM-aware storage and ditching LUN constraints – you see it with VMware’s vVols, storage arrays like Tintri and storage hypervisors like Virsto so having the same liberating concept for DR makes a lot of sense.

Zerto goes further than just being ‘storage agnostic’ as it allows further flexibility;

  • Replicate VMs from vCD to vSphere (or vice versa). vCD to vCD is also supported. This is impressive stuff as it understands the Organization Networks, vApp containers etc and creates whatever’s needed to replicate the VMs.
  • vSphere version agnostic – for example use vSphere 4.1 at one end and vSphere 5.0 at the other. For large companies which can typically lag behind this could be the prime reason to adopt Zerto.

With any replication technology bandwidth and latency are concerns, as is WAN utilisation. Zerto uses virtual appliances on the source and destination hosts (combined with some VMware API calls, not a driver as this article states) and therefore isn’t dependent on changed block tracking (CBT), is storage protocol agnostic (ie you can use FC, iSCSI or NFS for your datastores) and offers compression and optimisation to boot. Zerto provide a profiling tool to ‘benchmark’ the rate of change per VM before you enable replication, thus allowing you to predict your replication bandwidth requirements. Storage I/O control (SIOC) is not supported today although Zerto are implementing their own functionality to allow you to limit replication bandwidth. Today it’s done on a ‘per site’ basis although there’s no scheduling facility so you can’t set different limits during the day or at weekends.
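Zerto’s profiling tool does the real work, but the underlying arithmetic is simple enough to sketch. The function below is purely illustrative (the 50% compression ratio and the 10 GB/hour figure are my assumptions, not Zerto numbers): it converts a measured change rate into the sustained WAN bandwidth needed to keep up.

```python
def required_bandwidth_mbps(change_rate_gb_per_hour, compression_ratio=0.5):
    """Rough sustained WAN bandwidth (Mbps) needed for a VM's change rate.

    change_rate_gb_per_hour: measured writes for the VM (e.g. from a
    profiling run); compression_ratio: fraction of data remaining after
    the appliance's WAN compression (0.5 = half the bytes, an assumption).
    """
    bytes_per_hour = change_rate_gb_per_hour * 1024**3 * compression_ratio
    bits_per_second = bytes_per_hour * 8 / 3600
    return bits_per_second / 1_000_000  # megabits per second

# e.g. a VM writing 10 GB/hour with 50% compression
print(round(required_bandwidth_mbps(10), 1))  # → 11.9
```

In practice you'd sum this across all protected VMs and add headroom for resync bursts, which is exactly why measuring before enabling replication matters.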

VMware’s vSphere is the only hypervisor supported today although we were told the roadmap includes others (no date was given). With Hyper-V v3 getting a good reception I’d expect to see support for it sooner rather than later, and that could open up some interesting options.

Zerto’s Virtual Replication vs VMware’s SRM

Let’s revisit that claim that Zerto is the “industry’s first hypervisor-based replication solution for tier-one applications“. With the advent of vSphere 5.1 VMware now have two solutions which could be compared to Zerto – vSphere Replication and SRM. The former is bundled free with vSphere but is not comparable – it’s quite limited (no orchestration, testing, reporting or enterprise-class DR functions) and only really intended for data protection not full DR. SRM on the other hand is very much competition for Zerto although for comparable functionality you require array level replication.

When I mentioned SRM to the Zerto guys they were quick to say it’s an apples-to-oranges comparison which to a point is true – with Zerto you specify individual or groups of VMs to replicate whereas with SRM you’re still stuck specifying volumes or LUNs at array level. Both products have their respective strengths but there’s a large overlap in functionality and many people will want to compare them. SRM is very well known and has the advantage of VMware’s backing and promotion – having a single ‘throat to choke’ is an attractive proposition for many. I’m not going to list the differences because others have already done all the hard work;

Zerto compared to vSphere Replication - the official Zerto blog

Zerto compared to Site Recovery Manager - a great comparison by Marcel Van den Berg (also includes VirtualSharp’s Reliable DR)

Looking through the comparisons with SRM there are quite a few areas where Zerto has an advantage although to put it in context check out the pricing comparison at the end of this article;
NOTE: Since the above comparison was written SRM v5.1 has added support for vSphere Essentials Plus but everything else remains accurate

  • RTO in the low seconds rather than 15 mins
  • Compression of replication traffic
  • No resync required after host failures
  • Consistency groups
  • Cloning of the DR VMs for testing
  • Point in time recovery (up to a max of 5 days)
  • The ability to flag a VMDK as a pagefile disk. In this instance it will be replicated once (and then stopped) so that during recovery a disk is mounted but no replication bandwidth is required. SRM can’t do this and it’s very annoying!
  • vApps supported (and automatically updated when the vApp changes)
  • vCloud Director compatibility

If you already have storage array replication then you’ll probably want to evaluate Zerto and SRM.
If you don’t have (or want the cost of) array replication, or want the flexibility of specifying everything in the hypervisor, then Zerto is likely to be the best solution.

DR to the Cloud (DRaaS)

Of particular interest to some customers, and a huge win for Zerto, is the ability to recover to the cloud. The flexibility to replicate between any storage arrays and to abstract the underlying storage layout allows you to replicate to any provider who’s signed up to Zerto’s solution. Multisite and multitenancy functionality was introduced in v2.0 and today there are over 30 cloud providers signed up, including some of the big guys like Terremark, Colt, and Bluelock. Zerto have tackled the challenge of a single appliance (providers obviously wouldn’t want to run one per customer) providing secure multi-tenant replication with resource management included.

vCloud Director compatibility is another feather in Zerto’s cap, especially when you consider that VMware’s own ‘vCloud Suite’ lags behind (SRM only has limited support for vCD). One has to assume that this will be a short term advantage as VMware have promised tighter integration between their products.

Pricing

Often this is what it comes down to – you can have the best solution on the market but if you’re charging the most then that’s what people will expect. Zerto are targeting the enterprise so maybe it shouldn’t be a surprise that they’re also priced at the top end of the market. The table below shows pricing for SRM (both Standard and Enterprise editions) and Zerto;

                 SRM Standard    SRM Enterprise    Zerto Virtual Replication
Price per VM:    $195            $495              $745

As you can see Zerto costs a significant premium over SRM. When making that comparison you may need to factor in the cost of storage array replication, as SRM using vSphere Replication is severely limited. These are all list prices so get your negotiating hat on! We were told that Zerto were seeing good adoption from all sizes of customer, from 15 VMs through to service providers.
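To see how the list prices above play out at scale, here’s a trivial sketch (list licence prices only – array replication costs, support, and discounts are deliberately ignored):

```python
# List prices per protected VM, from the table above
PRICES = {"SRM Standard": 195, "SRM Enterprise": 495, "Zerto": 745}

def licence_cost(product, vm_count):
    """Total list-price licence cost for a given number of protected VMs."""
    return PRICES[product] * vm_count

for product in PRICES:
    print(f"{product}: ${licence_cost(product, 100):,} for 100 VMs")
# Zerto at 100 VMs lists at $74,500 vs $19,500 for SRM Standard --
# though SRM Standard also needs array replication for full functionality.
```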

Final thoughts

I’ve not used SRM in production since the early v1.0 days and I’ve not used Zerto in production either, so my thoughts are based purely on what I’ve read and been shown. I was very impressed with Zerto’s solution, which certainly looks very polished and clearly trumps SRM in a few areas – which is why I took the time to investigate and write up my findings in this blogpost. From a simple and quick appliance-based installation (shown to us in a live demo) through to the GUI and even the pricing model, Zerto’s aim is to keep things simple and it looks as if they’ve succeeded (despite quite a bit of complexity under the hood). If you’re in the market for a DR solution take time to review the comparison with SRM above and see which fits your requirements and budget. Given how comprehensive the feature set is I wouldn’t be surprised to see Zerto come out on top over SRM for many customers, despite VMware’s backing for SRM and the cost differential.

Multi-hypervisor management could be a ‘killer feature’ for Zerto. It would distinguish the product for the foreseeable future (I’d be surprised to see this in VMware’s roadmap anytime soon despite their more hypervisor-friendly stance) and needs to happen before VMware bake comparable functionality into SRM. Looking at the way VMware are increasingly bundling software to leverage the base vSphere product, there’s a risk that SRM features work their way down the stack and into lower-priced SKUs – good for customers but a challenge for Zerto. There are definitely intriguing possibilities though – how about replicating from VMware to Hyper-V, for example? As the use of cloud infrastructure increases, the ability to run across heterogeneous infrastructures will become key, and Zerto have a good start in this space with their DRaaS offering. If you don’t want to wait and you’re interested in multi-hypervisor management (and conversion) today, check out Hotlink (thanks to my fellow SFD#2 delegates for that tip).

I see a slight challenge in Zerto targeting the enterprise specifically. Typically these larger companies will already have storage array replication and are more likely to run a mixture of virtual and physical workloads, so they’ll still need array functionality for their physical applications. This erodes Zerto’s value proposition. Furthermore, if you have separate storage and virtualisation teams then moving replication away from the storage array could break accepted processes, not to mention put noses out of joint! Replication at the storage array is a well-accepted and mature technology whereas virtualisation solutions still have to prove themselves in some quarters. In contrast VMware’s SRM may be seen to offer the best of both worlds by offering the choice of hypervisor and/or array replication – albeit with a significantly less powerful replication engine (if using vSphere Replication) and with the aforementioned constraints around replicating LUNs rather than VMs. Zerto also face the usual challenge of convincing enterprises that, as a ‘startup’, they’re able to provide the expected level of support – for an eloquent answer to this read ‘Small is beautiful’ by Sudheesh Nair on the Nutanix blog (Nutanix face the same challenge).

Disclosure: the Storage Field Day #2 event is sponsored by the companies we visit, including flight and hotel, but we are in no way obligated to write (either positively or negatively) about the sponsors.

Further Reading

Steve Foskett and Gabrie Van Zanten discuss Zerto (from VMworld 2012)

Good introduction to Zerto v1.0 (Marcel Van Den Berg)

Zerto and vSphere Host replication – what’s the difference?

Zerto vs SRM (and VirtualSharp’s ReliableDR)

Step away from the array – fun Zerto blog in true Dr Seuss style

vBrownbag Zerto demo from VMworld Barcelona 2012

Zerto replication and disaster recovery the easy way

Take two Zerto and call me in the morning (Chris Wahl from a previous TFD)

Musings of Rodos – Zerto (Rodney Haywood from a previous TFD)

451 group’s report on Zerto (March 2012)

Storage field day #2 coverage

Twitter contacts

@zertocorp – the official Zerto twitter account

Shannon Snowdon – Senior Technical Marketing Architect

@zertojjones – another Zerto employee

Home labs – a poor man’s Fusion-IO?

November 22nd, 2012 6 comments

While upgrading my home lab recently I found myself reconsidering the scale up vs scale out argument. There are countless articles about building your own home lab and whitebox hardware but is there a good alternative to the accepted ‘two whiteboxes and a NAS’ scenario that’s so common for entry level labs? I’m studying for the VCAP5-DCD so while the ‘up vs out’ discussion is a well trodden path there’s value (for me at least) in covering it again.

There are two main issues with many lab (and production) environments, mine included;

  1. Memory is a bottleneck, and doubly so in labs using low-end hardware – the vCenter appliance defaults to 8GB of RAM, as does vShield Manager, so anyone wanting to play with vCloud (for example) needs a lot of RAM.
  2. Affordable yet performant shared storage is also a challenge – I’ve used both consumer NAS (from 2 to 5 bays) and ZFS-based appliances but I’m still searching for more performance.

In an enterprise environment there are a variety of solutions to these challenges – memory density is increasing (up to 512GB per blade in the latest UCS servers, for example) and on the storage front SSDs and flash memory have spurred a wave of innovation. In particular Fusion-IO have had great success with their flash memory devices, which reduce the burden on shared storage while dramatically increasing performance. I was after something similar, but without the budget.

When I built my newest home lab server, the vHydra, I used a dual-socket motherboard to maximise the possible RAM (up to 256GB) and local SSDs to supplement my shared storage. This addresses both issues above – I have a single server which can host a larger number of VMs with minimal reliance on my shared storage. The concept is the same as what solutions like Fusion-IO deliver in production environments, but mine isn’t particularly scalable. In fact it doesn’t really scale at all – I’ll have to revert to centralised storage if I buy more servers. Nor does it have any resilience – the ESXi server itself isn’t clustered and the storage is a single point of failure as there’s no RAID. It is cheap however, and for lab testing I can live with those compromises. None of this is remotely new of course – Simon Gallagher’s vTardis has used these same concepts to provide excellent lab solutions for years. Is this really a poor man’s Fusion-IO? There’s nothing like the performance and nothing like the budget, but the objectives are the same – so to be honest it’s probably a slightly trolling blog title. I won’t do it again. Promise! :-)

If you’re thinking of building a home lab from scratch, consider buying a single large server with local SSD storage instead of multiple smaller servers with shared storage. You can always scale out later, or wait for the likes of Ceph or HDFS to eliminate the need for centralised storage altogether…

Tip: It’s worth bearing in mind the 32GB limit on the free version of ESXi – unless you’re a vExpert or they reinstate the VMTN subscription you’ll be stuck with 60 day eval editions if you go above 32GB (or buying a licence!).

Further Reading

Is performant a word? :-)

Categories: VMware

Easing the pain of a VMware audit

November 21st, 2012 5 comments

I recently had to complete an external audit of our VMware estate and thought it might be useful to others to know what the process entails, what you’ll need to provide to the auditors, and a few issues that I wasn’t aware of beforehand around licencing compliance. The initial approach by the auditor will describe the overall process and expected timelines (which will vary based on the size of your company).

There are two main steps in the process – self-disclosure and validation;

  1. Self disclosure is where you detail your use of VMware software including vCenters, ESX/ESXi hosts, VMs, and licences. In our case this was collated into an Excel spreadsheet provided by the auditor (the deployment detail workbook). You’ll also have to answer some high level questions about your company (such as how many locations you have), how you audit internally (how you track licences – third party tools, vCenter etc), when you initially deployed VMware in your company, and some info about your contacts for the audit. How you collect this information is up to you but there are a few good choices;
    • Export data from vCenter using the GUI
    • Export data from vCenter using PowerCLI scripts
    • Use third party tools.

    I used a mixture of RVTools (a handy, free download) and PowerCLI scripts. The native ‘Export’ feature in vCenter isn’t very flexible (there’s no way to export all the MAC addresses of VMs, for example) and while RVTools came close it didn’t provide everything I needed either. I needed host uptime, and although RVTools does show the last reboot time I still had to translate that into days; it also didn’t cover the licencing for each host (which I could have got from vCenter). I’ve included the script I ran at the end of this post in case it’s of use to someone else.

  2. Validation. Once the disclosure is complete the auditor will want to ‘validate’ the information – auditor-speak for “are you telling the truth, the whole truth, and nothing but the truth?”! This can be done in a variety of ways depending on the size of your estate, your location, the auditor etc. It could include using your in-house auditing tools (Centennial for example), data from directories like Active Directory, or a scan of your network switches for a list of VMware MAC addresses (prefixes 00:05:69, 00:0C:29, 00:1C:14, as well as the more commonly known 00:50:56). The latter was the approach we took, due to our mixed Linux/Windows estate and the auditor’s preference. NOTE: you’ll do the actual collection of all the data, not the auditors – even if they’re onsite.
    In an ideal world the information collected in this step matches up nicely with the information you’ve disclosed – any discrepancies will need investigating and explaining. A few things that caught me out here;

    • Ensure you keep track of any changes to the VMware environment after the audit process kicks off (this is an audit requirement). Some of my discrepancies were because another admin had decommissioned some VMs after my initial disclosure so they flagged up as ‘missing’. Simple to explain, but time consuming to track down! This could be a real challenge in a larger environment.
    • Remember that VMkernel ports also have VMware MAC addresses, not just the VMs. I spent a while trying to find ‘phantom’ VMs before tracking down the issue. RVTools shows these in a separate tab so you’ll need to export both.
    • Even if you’re over entitled (you have more licences than you’re using) you’ll probably have to justify it, just to be sure you’re not hiding some part of your installation.
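The two mechanical bits above – matching switch-port MAC addresses against VMware’s OUI prefixes, and turning RVTools’ last-reboot timestamp into uptime in days – are easy to script. The sketch below is my own illustration: the function names are invented and the OUI list is just the four prefixes quoted in this post, so check the IEEE OUI registry before relying on it in a real audit.

```python
from datetime import datetime, timezone
from typing import Optional

# VMware OUI prefixes quoted in the post (00:50:56 is the best known).
# Verify against the IEEE OUI registry before using in anger.
VMWARE_OUIS = {"00:05:69", "00:0c:29", "00:1c:14", "00:50:56"}

def is_vmware_mac(mac: str) -> bool:
    """True if the MAC's first three octets match a VMware OUI.

    Accepts colon-, dash- or dot-separated addresses. Remember this
    flags VMkernel ports as well as VMs.
    """
    octets = mac.lower().replace("-", ":").replace(".", ":").split(":")
    return ":".join(octets[:3]) in VMWARE_OUIS

def uptime_days(last_reboot: datetime, now: Optional[datetime] = None) -> int:
    """Whole days since the last reboot (RVTools exports the reboot
    time; the disclosure workbook wanted uptime in days)."""
    now = now or datetime.now(timezone.utc)
    return (now - last_reboot).days
```

Filtering a switch CAM-table dump then becomes a one-liner, e.g. `vm_macs = [m for m in switch_macs if is_vmware_mac(m)]` – just be prepared to explain the VMkernel-port ‘phantoms’ it will surface.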

Read more…

Categories: VMware