Let your storage array do the heavy lifting with VAAI!
I’ve seen a few blogposts recently about storage features in vSphere5 and plenty of forum discussions about the level of support from various vendors but none that specifically address the Netapp world. As some of these features require your vendor to provide plugins and integration I’m going to cover the Netapp offerings and point out what works today and what’s promised for the future.
Many of the vSphere5 storage features work regardless of your underlying storage array, including StorageDRS, storage clusters, VMFS5 enhancements (provided you have block protocols) and the VMware Storage Appliance (vSA). The following vSphere features, however, are dependent on array integration;
- VAAI (the VMware Storage API for Array Integration). If you need a refresher on VAAI and what’s new in vSphere v5 check out these great blogposts by Dave Henry - part one covers block protocols (FC and iSCSI), part two covers NFS. The inimitable Chad Sakac from EMC also has a great post on the new vSphere5 primitives.
- VASA (the VMware Storage API for Storage Awareness). Introduced in vSphere5, this allows your storage array to send underlying implementation details of the datastore back to the ESXi host, such as RAID levels, replication, dedupe, compression, number of spindles etc. These details can be used by other features such as Storage Profiles and StorageDRS to make more informed decisions.
The main point of administration (and integration) when using Netapp storage is the Virtual Storage Console (VSC), a vCenter plugin created by Netapp. If you haven’t already got this installed (the latest version is v4, released March 16th 2012) then go download it (NOW account required). As well as the vCenter plugin you must ensure your version of ONTAP also supports the vSphere functionality – as of April 19th 2012 the latest release is ONTAP 8.1. You can find out more about its featureset from Netapp’s Nick Howell. As well as the core vSphere storage features the VSC enables some extra features;
These features are all covered in Netapp’s popular TR3749 (best practices for vSphere, now updated for vSphere5) and the VSC release notes.
Poor old NFS – no VAAI for you…
It all sounds great! You’ve upgraded to vSphere5 (with Enterprise or Enterprise Plus licensing), installed the VSC vCenter plugin and upgraded ONTAP to the shiny new 8.1 release. Your Netapp arrays are in place and churning out 1s and 0s at a blinding rate and you’re looking forward to giving vSphere some time off for good behaviour and letting your Netapp do the heavy lifting…
I spent some time at Christmas upgrading my home lab in preparation for the new VCAP exams which are due out in the first quarter of 2012. In particular I needed to improve my shared storage and hoped that I could reuse old h/w instead of buying something new. I’ve been using an Iomega IX2-200 for the last year but its performance is pretty pitiful so I usually reverted to local storage which rather defeated the purpose.
I started off having a quick look around at my storage options for home labs;
- Hardware appliances from QNAP, Synology, Iomega etc. There’s a great, comprehensive list at SmallNetBuilder.com which includes both performance and cost comparisons.
- Software appliances such as OpenFiler, FreeNAS, Datacore’s SANSymphony/SAN Melody, Starwind’s iSCSI SAN and Nexenta’s Community Edition.
- Virtual appliances such as UberVSA, LeftHand, Netapp’s ONTAP Simulator (Netapp customers only, and capacity limited)
- Or you can spin your own using a variety of base OSs: Oracle Solaris 11, Oracle Solaris 11 Express (free for non-commercial), FreeBSD (older version of ZFS), OpenIndiana and the upcoming Illumian project which is a fork from the now discontinued OpenSolaris. An interesting project is napp-it.org which shows you how to build your own NAS server using ZFS – well worth a look! Jimmy Dansbo has just published his ‘Poor man’s storage appliance’ which also looks very interesting and has an .OVA available for quick deployment.
Why pick Nexenta?
I’d used OpenFiler and FreeNAS before (both are very capable) but with so much choice I didn’t have time to evaluate all the other options (Greg Porter has a few comments comparing OpenFiler vs Nexenta). Datacore and Starwind’s solutions rely on Windows rather than being bare metal (which was my preference) and I’ve been hearing positive news about Nexenta more and more recently.
On the technical front the SSD caching and VAAI support make Nexenta stand out from the crowd.
UPDATE March 2012 – VMware have just confirmed that the fix will be released as part of vSphere5 U2. Interesting because as of today (March 15th) update 1 hasn’t even been released – how much longer will that be I wonder? I’m also still waiting for a KB article but it’s taking its time…
UPDATE May 2012 – VMware have just released article KB2013844 which acknowledges the problem – the fix (until update 2 arrives) is to rename your datastores. Gee, useful…
For the last few weeks we’ve been struggling with our vSphere5 upgrade. What I assumed would be a simple VUM orchestrated upgrade turned into a major pain, but I guess that’s why they say ‘never assume’!
Summary: there’s a bug in the upgrade process whereby NFS mounts are lost during the upgrade from vSphere4 to vSphere5;
- if you have NFS datastores with a space in the name
- and you’re using ESX classic (ESXi is not affected)
Our issue was that after the upgrade completed, the host would start back up but the NFS mounts would be missing. As we use NFS almost exclusively for our storage this was a showstopper. We quickly found that we could simply remount the NFS datastores with no changes or reboots required, so there was no obvious reason why the upgrade process didn’t remount them. With over fifty hosts to upgrade, however, the required manual intervention meant we couldn’t automate the whole process (OK, PowerCLI would have done the trick but I didn’t feel inspired to code a solution), and we aren’t licensed for Host Profiles, which would also have made life easier. Thus started the process of reproducing and narrowing down the problem.
- We tried both G6 and G7 blades as well as G6 rack mount servers (DL380s)
- We used interactive installs using a DVD of the VMware ESXi v5 image
- We used VUM to upgrade hosts using both the VMware ESXi v5 image and the HP ESXi v5 image
- We upgraded from ESX v4.0u1 to ESX v4.1 and then on to ESXi v5
- We used storage arrays with both Netapp ONTAP v7 and ONTAP v8 (to minimise the possibility of the storage array firmware being at fault)
- We upgraded hosts both joined to and isolated from vCenter
Every scenario we tried produced the same issue. We also logged a call with VMware (SR 11130325012) and yesterday they finally reproduced and identified the issue as a space in the datastore name. As a workaround you can simply rename your datastores to remove the spaces, perform the upgrade, and then rename them back. Not ideal for us (we have over fifty NFS datastores on each host) but better than a kick in the teeth!
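The naming logic behind that workaround is trivial, but for anyone scripting it here’s a minimal sketch (a hypothetical helper, not from VMware’s KB – the actual rename still has to be done in the VI client or via PowerCLI; this only shows the transformation):

```shell
# Hypothetical helper: derive a space-free datastore name to use during
# the vSphere 4 -> 5 upgrade, then rename back afterwards.
sanitize_name() {
  printf '%s\n' "$1" | tr ' ' '_'
}

sanitize_name "NFS Datastore 01"   # -> NFS_Datastore_01
```

With fifty-plus datastores per host you’d loop this over the datastore inventory rather than rename by hand.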
There will be a KB article released shortly so until then treat the above information with caution – no doubt VMware will confirm the technical details more accurately than I have done here. I’m amazed that no-one else has run into this six months after the general availability of vSphere5 – maybe NFS isn’t taking over the world as much as I’d hoped! I’ll update this article when the KB is posted but in the meantime NFS users beware.
Sad I know, but it’s kinda nice to have discovered my own KB article. Who’d have thought that having too much space in my datastores would ever cause a problem?
- Recall vicfg-* commands related to listing storage configuration
- Recall vSphere 4 storage maximums
- Identify logs used to troubleshoot storage issues
- Describe the VMFS file system
Skills and Abilities
- Use vicfg-* and esxcli to troubleshoot multipathing and PSA‐related issues
- Use vicfg-module to troubleshoot VMkernel storage module configurations
- Use vicfg-* and esxcli to troubleshoot iSCSI related issues
- Troubleshoot NFS mounting and permission issues
- Use esxtop/resxtop and vscsiStats to identify storage performance issues
- Configure and troubleshoot VMFS datastores using vmkfstools
- Troubleshoot snapshot and resignaturing issues
- Product Documentation
- vSphere Client
- vicfg-*, esxcli, resxtop/esxtop, vscsiStats, vmkfstools
There’s obviously a large overlap between diagnosing performance issues and tuning storage performance, so check section 3.1 in tandem with this objective.
Recall vicfg-* commands related to listing storage configuration
- esxcli corestorage | nmp | swiscsi
- showmount -e
- look for CONS/s – this indicates SCSI reservation conflicts and might indicate too many VMs in a LUN. This field isn’t displayed by default (press ‘f’ then ‘f’ again to add it)
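To put those bullets into practice, a few example invocations (the host and filer names are made up, and flags are as I recall them from the vSphere 4 CLI – treat this as a sketch, not gospel):

```shell
# List block devices and their NMP multipathing configuration
esxcli corestorage device list
esxcli nmp device list

# Check which NFS exports a filer is presenting (run from the host)
showmount -e filer01.example.com

# Capture esxtop in batch mode so counters like CONS/s can be
# analysed offline in perfmon or Excel
resxtop --server esx01.example.com -b -n 10 > storage-stats.csv
```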
Managing storage capacity is another potentially huge topic, even for a midsized company. The storage management functionality within vSphere is fairly comprehensive and a significant improvement over VI3.
- Identify storage provisioning methods
- Identify available storage monitoring tools, metrics and alarms
Skills and Abilities
- Apply space utilization data to manage storage resources
- Provision and manage storage resources according to Virtual Machine requirements
- Understand interactions between virtual storage provisioning and physical storage provisioning
- Apply VMware storage best practices
- Configure datastore alarms
- Analyze datastore alarms and errors to determine space availability
Tools & learning resources
Storage provisioning methods
There are three main protocols you can use to provision storage;
- Fibre channel
  - Block protocol
  - Uses multipathing (PSA framework)
  - Configured via vicfg-mpath, vicfg-scsidevs
- iSCSI
  - Block protocol
  - Uses multipathing (PSA framework)
  - Hardware or software initiators (boot from SAN is h/w initiator only)
  - Configured via vicfg-iscsi, esxcfg-swiscsi and esxcfg-hwiscsi, vicfg-mpath, esxcli
- NFS
  - File level (not block)
  - No multipathing (relies on underlying Ethernet network resilience)
  - Thin provisioned by default
  - No RDMs or MSCS support
  - Configured via vicfg-nas
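A few hedged CLI examples covering each protocol (the filer, volume and adapter names below are invented – substitute your own):

```shell
# NFS: mount an export from a filer as a datastore, then list mounts
esxcfg-nas -a -o filer01 -s /vol/vmware_ds1 nfs_ds1
esxcfg-nas -l

# Software iSCSI: enable the initiator, then rescan its adapter
esxcfg-swiscsi -e
esxcfg-rescan vmhba33

# FC/iSCSI: list paths and current multipathing state
esxcfg-mpath -l
```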
I won’t go into much detail on each; just make sure you’re happy provisioning storage for each protocol, both in the VI client and the CLI.
Know the various options for provisioning storage;
- VI client. Can be used to create/extend/delete all types of storage. VMFS volumes created via the VI client are automatically aligned.
- CLI – vmkfstools.
- NOTE: When creating a VMFS datastore via CLI you need to align it. Check VMFS alignment using ‘fdisk –lu’. Read more in Duncan Epping’s blogpost.
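As a sketch of that alignment check and a CLI-created datastore (the NAA identifier below is invented – use your own device path):

```shell
# Check alignment: for a VMFS3 partition the start sector should be 128
# (a 64KB offset with 512-byte sectors), which is what the VI client uses
fdisk -lu /dev/disks/naa.600508b4000971fa0000a00000710000

# Create a VMFS3 datastore (1MB block size) on partition 1 of that device
vmkfstools -C vmfs3 -b 1m -S my_vmfs_ds \
  /vmfs/devices/disks/naa.600508b4000971fa0000a00000710000:1
```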
- PowerCLI. Managing storage with PowerCLI – VMwareKB1028368
- Vendor plugins (Netapp RCU for example). I’m not going to cover this here as I doubt the VCAP-DCA exam environment will include (or assume any knowledge of) these!
When provisioning storage there are various considerations;
- Thin vs thick
- Extents vs true extension
- Local vs FC/iSCSI vs NFS
- VMFS vs RDM