Tag Archives: datastore

Netapp and vSphere5 storage integration

Let your storage array do the heavy lifting with VAAI!

I’ve seen a few blogposts recently about storage features in vSphere5, and plenty of forum discussions about the level of support from various vendors, but none that specifically address the Netapp world. As some of these features require your vendor to provide plugins and integration, I’m going to cover the Netapp offerings and point out what works today and what’s promised for the future.

Many of the vSphere5 storage features work regardless of your underlying storage array, including StorageDRS, datastore clusters, the VMFS5 enhancements (provided you have block protocols) and the vSphere Storage Appliance (VSA). The following vSphere features, however, are dependent on array integration:

  • VAAI (the vStorage APIs for Array Integration). If you need a refresher on VAAI and what’s new in vSphere5, check out these great blogposts by Dave Henry: part one covers block protocols (FC and iSCSI), part two covers NFS. The inimitable Chad Sakac from EMC also has a great post on the new vSphere5 primitives.
  • VASA (the vSphere Storage APIs for Storage Awareness). Introduced in vSphere5, this allows your storage array to report underlying implementation details of the datastore back to the ESXi host, such as RAID level, replication, dedupe, compression, number of spindles etc. These details can be used by other features such as Storage Profiles and StorageDRS to make more informed decisions.
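If you want to verify whether VAAI is actually in play on a given host, esxcli can report the per-device primitive status. A quick sketch from the ESXi 5.x shell (the naa device ID below is a placeholder, not a real LUN):

    # Show VAAI primitive status (ATS, Clone, Zero, Delete) for all block devices
    esxcli storage core device vaai status get
    # ...or for a single device (placeholder ID)
    esxcli storage core device vaai status get -d naa.60a98000486e2f65596f34683177394d

    # NFS VAAI needs a vendor NAS plugin installed as a VIB; check for one with
    esxcli software vib list | grep -i netapp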

The main point of administration (and integration) when using Netapp storage is the Virtual Storage Console (VSC), a vCenter plugin created by Netapp. If you haven’t already got this installed (the latest version is v4, released March 16th 2012) then go download it (NOW account required). As well as installing the vCenter plugin you must ensure your version of ONTAP supports the vSphere functionality – as of April 19th 2012 the latest release is ONTAP 8.1. You can find out more about its featureset from Netapp’s Nick Howell. On top of the core vSphere storage features, the VSC enables some extras of its own.

These are all covered in Netapp’s popular TR3749 (the best practices guide for vSphere, now updated for vSphere5) and the VSC release notes.

Poor old NFS – no VAAI for you…

It all sounds great! You’ve upgraded to vSphere5 (with Enterprise or Enterprise Plus licensing), installed the VSC vCenter plugin and upgraded ONTAP to the shiny new 8.1 release. Your Netapp arrays are in place, churning out 1s and 0s at a blinding rate, and you’re looking forward to giving vSphere some time off for good behaviour and letting your Netapp do the heavy lifting…

Continue reading Netapp and vSphere5 storage integration

Space: the final frontier (gotcha upgrading to vSphere5 with NFS)

———————————————–

UPDATE March 2012 – VMware have just confirmed that the fix will be released as part of vSphere5 U2. Interesting, because as of today (March 15th) Update 1 hasn’t even been released – how much longer will that be, I wonder? I’m also still waiting for a KB article but it’s taking its time…

UPDATE May 2012 – VMware have just released article KB2013844 which acknowledges the problem – the fix (until update 2 arrives) is to rename your datastores. Gee, useful…  🙂

———————————————–

For the last few weeks we’ve been struggling with our vSphere5 upgrade. What I assumed would be a simple VUM-orchestrated upgrade turned into a major pain, but I guess that’s why they say ‘never assume’!

Summary: there’s a bug in the upgrade process whereby NFS mounts are lost during the upgrade from vSphere4 to vSphere5 (a quick PowerCLI check for exposure follows the list):

  • if you have NFS datastores with a space in the name
  • and you’re using ESX classic (ESXi is not affected)
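If you want to know whether you’re exposed before upgrading, a quick PowerCLI check (a sketch; it assumes an existing Connect-VIServer session) will list the at-risk datastores:

    # List NFS datastores whose names contain a space
    Get-Datastore | Where-Object { $_.Type -eq "NFS" -and $_.Name -match " " } |
        Select-Object Name, Type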

Our issue was that after the upgrade completed, the host would start back up but the NFS mounts would be missing. As we use NFS almost exclusively for our storage this was a showstopper. We quickly found that we could simply remount the NFS with no changes or reboots required, so there was no obvious reason why the upgrade process didn’t remount them. With over fifty hosts to upgrade, however, the manual intervention required meant we couldn’t automate the whole process (OK, PowerCLI would have done the trick but I didn’t feel inspired to code a solution) and we aren’t licensed for Host Profiles, which would also have made life easier. Thus started the process of reproducing and narrowing down the problem.

  • We tried both G6 and G7 blades as well as G6 rack mount servers (DL380s)
  • We tried interactive installs from a DVD of the VMware ESXi v5 image
  • We used VUM to upgrade hosts using both the VMware ESXi v5 image and the HP ESXi v5 image
  • We upgraded from ESX v4.0 U1 to ESX v4.1 and then on to ESXi v5
  • We used storage arrays with both Netapp ONTAP v7 and ONTAP v8 (to minimise the possibility of the storage array firmware being at fault)
  • We upgraded hosts both joined to and isolated from vCentre

Every scenario we tried produced the same issue. We also logged a call with VMware (SR 11130325012) and yesterday they finally reproduced and identified the issue as a space in the datastore name. As a workaround you can simply rename your datastores to remove the spaces, perform the upgrade, and then rename them back. Not ideal for us (we have over fifty NFS datastores on each host) but better than a kick in the teeth!
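For what it’s worth, the rename dance can be scripted. A rough PowerCLI sketch (assumes a connected vCenter session; test it before trusting it with production datastores):

    # Temporarily strip spaces from NFS datastore names ahead of the upgrade,
    # remembering the originals so they can be restored afterwards
    $originals = @{}
    Get-Datastore | Where-Object { $_.Type -eq "NFS" -and $_.Name -match " " } | ForEach-Object {
        $newName = $_.Name -replace " ", "_"
        $originals[$newName] = $_.Name
        Set-Datastore -Datastore $_ -Name $newName
    }
    # After the upgrade, rename them back:
    # $originals.GetEnumerator() | ForEach-Object { Get-Datastore -Name $_.Key | Set-Datastore -Name $_.Value }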

There will be a KB article released shortly, so until then treat the above information with caution – no doubt VMware will confirm the technical details more accurately than I have done here. I’m amazed that no-one else has run into this six months after the general availability of vSphere5 – maybe NFS isn’t taking over the world as much as I’d hoped! I’ll update this article when the KB is posted but in the meantime, NFS users beware.

Sad, I know, but it’s kinda nice to have discovered my own KB article. Who’d have thought that having too much space in my datastores would ever cause a problem? 🙂

Error adding datastores to ESXi resolved using partedUtil

UPDATE Sept 2015 – there is new functionality in the vSphere Web Client (v6.0u1) that allows you to delete all partitions – good info via William Lam’s website. Similar functionality will come to the ESXi Embedded Host Client in a later update.

UPDATE March 2015 – some people are hitting a similar issue when trying to reuse disks previously used by VSAN. The process below may still work but there are a few other things to check, as detailed here by Cormac Hogan.

Over the Christmas break I finally got some time to upgrade my home lab. One of my tasks was to build a new shared storage server, and it was while installing the base ESXi (v5, build 469512) that I ran into an issue. I was unable to add any of the local disks to my ESXi host as VMFS datastores, getting the error “Call ‘HostDatastoreSystem.QueryVmfsDatastoreCreateOptions’ for object ‘ha-datastoresystem’ on ESXi…” as shown below:

The VI client error when adding a new datastore

I’d used this host and the same disks previously as an ESX4 host so I knew hardware incompatibility wasn’t an issue. Just in case, I tried VMFS3 (instead of VMFS5) with the same result. I’ve run into a similar issue before with HP DL380 G5s, where the workaround is to use the VI client connected directly to the host rather than via vCentre. I connected directly to the host but got the same result. At this point I resorted to Google, as I had a pretty specific error message. One of the first pages was this helpful blogpost at Eversity.nl (it’s always the Dutch, isn’t it?) which confirmed it was an issue with pre-existing or incompatible information on the hard disks. There are various situations which might leave pre-existing info on the disk:

  • Vendor array utilities (HP, Dell etc) can create extra partitions or fail to finalise the partition creation
  • GPT partitions created by Mac OSX, ZFS, W2k8 r2 x64 etc. Microsoft have a good explanation of GPT.

This made a lot of sense, as I’d previously been trialling this host (with ZFS pools) as a NexentaStor CE storage server.
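The fix, as the title gives away, was partedUtil. The full walkthrough is in the rest of the post, but the gist from the ESXi shell looks like this (the naa ID is a placeholder, and deleting partitions is destructive, so triple-check the device first):

    # Inspect whatever partition table is already on the disk
    partedUtil getptbl /vmfs/devices/disks/naa.600508b1001c3a1f
    # Delete a stale partition by number...
    partedUtil delete /vmfs/devices/disks/naa.600508b1001c3a1f 1
    # ...or write a fresh msdos label to clear out old GPT/ZFS remnants
    partedUtil mklabel /vmfs/devices/disks/naa.600508b1001c3a1f msdos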

Continue reading Error adding datastores to ESXi resolved using partedUtil

VCAP-DCA Study notes – 1.2 Manage Storage Capacity

Managing storage capacity is another potentially huge topic, even for a midsized company. The storage management functionality within vSphere is fairly comprehensive and a significant improvement over VI3.

Knowledge

  • Identify storage provisioning methods
  • Identify available storage monitoring tools, metrics and alarms

Skills and Abilities

  • Apply space utilization data to manage storage resources
  • Provision and manage storage resources according to Virtual Machine requirements
  • Understand interactions between virtual storage provisioning and physical storage provisioning
  • Apply VMware storage best practices
  • Configure datastore alarms
  • Analyze datastore alarms and errors to determine space availability

Tools & learning resources

Storage provisioning methods

There are three main protocols you can use to provision storage:

  • Fibre Channel
    • Block protocol
    • Uses multipathing (PSA framework)
    • Configured via vicfg-mpath, vicfg-scsidevs
  • iSCSI
    • Block protocol
    • Uses multipathing (PSA framework)
    • Hardware or software initiator (boot from SAN is hardware initiator only)
    • Configured via vicfg-iscsi, esxcfg-swiscsi and esxcfg-hwiscsi, vicfg-mpath, esxcli
  • NFS
    • File level (not block)
    • No multipathing (relies on the underlying Ethernet network for resilience)
    • Thin provisioned by default
    • No RDMs or MSCS support
    • Configured via vicfg-nas

I won’t go into much detail on each; just make sure you’re happy provisioning storage for each protocol, both in the VI client and at the CLI (a few representative commands are sketched below).
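These are sketches from the ESX service console (the vicfg-* remote equivalents take broadly the same arguments; names like filer01 and vmhba33 are placeholders):

    # NFS: mount an export as a datastore, then list current mounts
    esxcfg-nas -a -o filer01 -s /vol/nfs_ds1 nfs_ds1
    esxcfg-nas -l

    # iSCSI: enable the software initiator and rescan the adapter
    esxcfg-swiscsi -e
    esxcfg-rescan vmhba33

    # FC/iSCSI: list devices and their paths (PSA)
    esxcfg-scsidevs -l
    esxcfg-mpath -l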

Know the various options for provisioning storage:

  • VI client. Can be used to create/extend/delete all types of storage. VMFS volumes created via the VI client are automatically aligned.
  • CLI – vmkfstools.
    • NOTE: When creating a VMFS datastore via the CLI you need to align it yourself. Check VMFS alignment using ‘fdisk -lu’. Read more in Duncan Epping’s blogpost (a short sketch follows this list).
  • PowerCLI. Managing storage with PowerCLI – VMware KB1028368
  • Vendor plugins (Netapp RCU for example). I’m not going to cover this here as I doubt the VCAP-DCA exam environment will include (or assume any knowledge of) these!
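Here’s that vmkfstools route sketched out (device paths are placeholders; see Duncan’s post for the full alignment procedure):

    # Create a VMFS3 volume on an existing, already-aligned partition
    vmkfstools -C vmfs3 -b 1m -S my_datastore /vmfs/devices/disks/naa.60a98000486e:1

    # Verify alignment afterwards; a correctly aligned partition starts at sector 128
    fdisk -lu /dev/sdb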

When provisioning storage there are various considerations:

  • Thin vs thick (sketched below)
  • Extents vs true extension (VMFS Volume Grow)
  • Local vs FC/iSCSI vs NFS
  • VMFS vs RDM
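The thin vs thick decision, at the VMDK level at least, is easy to demonstrate with vmkfstools (paths are placeholders):

    # Create a 10GB thin-provisioned disk
    vmkfstools -c 10g -d thin /vmfs/volumes/my_datastore/vm1/vm1.vmdk
    # Create a 10GB eager-zeroed thick disk (required for FT and MSCS shared disks)
    vmkfstools -c 10g -d eagerzeroedthick /vmfs/volumes/my_datastore/vm1/vm1_ezt.vmdk
    # Inflate an existing thin disk to eagerzeroedthick
    vmkfstools -j /vmfs/volumes/my_datastore/vm1/vm1.vmdk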

Continue reading VCAP-DCA Study notes – 1.2 Manage Storage Capacity