VCAP5-DCA study notes – section 1.1 Implement and Manage Complex Storage Solutions

As with all my VCAP5-DCA study notes, these blogposts only cover material that’s new to vSphere5, so make sure you read the v4 study notes for section 1.1 first. When published, the VCAP5-DCA study guide PDF will be a complete standalone reference.

Knowledge

  • Identify RAID levels
  • Identify supported HBA types
  • Identify virtual disk format types

Skills and Abilities

  • Determine use cases for and configure VMware DirectPath I/O
  • Determine requirements for and configure NPIV
  • Determine appropriate RAID level for various Virtual Machine workloads
  • Apply VMware storage best practices
  • Understand use cases for Raw Device Mapping
  • Configure vCenter Server storage filters
  • Understand and apply VMFS resignaturing
  • Understand and apply LUN masking using PSA-related commands
  • Analyze I/O workloads to determine storage performance requirements
  • Identify and tag SSD devices
  • Administer hardware acceleration for VAAI
  • Configure and administer profile-based storage
  • Prepare storage for maintenance (mounting/un-mounting)
  • Upgrade VMware storage infrastructure

Tools & learning resources

With vSphere5 having been described as a ‘storage release’ there is quite a lot of new material to cover in Section 1 of the blueprint. First I’ll cover a couple of objectives which have only minor amendments from vSphere4.

Determine use cases for and configure VMware DirectPath I/O

The only real change is DirectPath vMotion, which is not as grand as it sounds. As you’ll recall from vSphere4, a VM using DirectPath I/O can’t use vMotion or snapshots (or any feature which relies on them, such as DRS and many backup products), and the device in question isn’t available to other VMs. The only change in vSphere5 is that you can vMotion a VM provided it’s running on Cisco UCS with a supported Cisco UCS Virtual Machine Fabric Extender (VM-FEX) distributed switch. Read all about it here – if this is in the exam we’ve got no chance!

Identify and tag SSD devices

This is a tricky objective if you don’t own an SSD drive to experiment with (although you can work around that limitation). You can identify an SSD disk in various ways;

  1. Using the vSphere client. Any view which shows the storage devices (‘Datastores and Datastore clusters view’, Host summary, Host -> Configuration -> Storage etc) includes a new column ‘Drive Type’ which lists Non-SSD or SSD (for block devices) and Unknown for NFS datastores.
  2. Using the CLI. Execute the following command and look for the ‘Is SSD:’ line for your specific device (a filtered example follows this list);
    esxcli storage core device list
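
To narrow that output down to a single device and just the SSD flag, you can filter it – a minimal sketch, with a placeholder NAA identifier standing in for your own device;

    # show only the SSD flag for one device
    esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx | grep "Is SSD"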

Tagging an SSD should be automatic but there are situations where you may need to do it manually. This can only be done via the CLI and is explained in this VMware article. The steps are similar to masking a LUN or configuring a new PSP;

  1. Check the existing claim rules
  2. Configure a new claim rule (a SATP rule) for your device, specifying the ‘enable_ssd’ option
  3. Reclaim the device (or reload the claim rules) so the new rule takes effect (a rough command sketch follows this list)
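
Putting those steps together, a rough sketch of the commands involved (the SATP name and NAA identifier below are placeholders – check your device’s actual SATP with ‘esxcli storage nmp device list’ first);

    # check the existing SATP claim rules
    esxcli storage nmp satp rule list
    # add a SATP rule tagging the device as SSD
    esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d naa.xxxxxxxxxxxxxxxx -o enable_ssd
    # reclaim the device so the new rule takes effect, then verify
    esxcli storage core claiming reclaim -d naa.xxxxxxxxxxxxxxxx
    esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx | grep "Is SSD"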

So you’ve identified and tagged your SSD, but what can you do with it? SSDs can be used with the new Swap to Host cache feature best summed up by Duncan over at Yellow Bricks;

“Using “Swap to host cache” will severely reduce the performance impact of VMkernel swapping. It is recommended to use a local SSD drive to eliminate any network latency and to optimize for performance.”

As an interesting use case here’s a post describing how to use Swap to Host cache with an SSD and laptop – could be useful for a VCAP home lab!

The above and more are covered very well in chapter 15 of the vSphere5 Storage guide.

Administer hardware acceleration for VAAI

Introduced in vSphere4.1, VAAI offloads storage operations to your storage array, reducing the load on your ESXi host as well as speeding up operations. VAAI features are enabled by default, although functionality depends on your storage array’s support. There are limited GUI options for VAAI administration and rather more from the CLI.

Configuring VAAI refers to checking (or setting) each primitive on the host (not on the datastore) and can be done via the GUI or command line (NOTE: this is for block only – NFS would be disabled on the array side);

  • To configure the VAAI primitives using the VI client go to a host’s Configuration -> Advanced Settings tab, then look for the following settings – a zero means it’s disabled, a one is enabled;
    DataMover.HardwareAcceleratedMove
    DataMover.HardwareAcceleratedInit
    VMFS3.HardwareAcceleratedLocking
  • To configure VAAI at the command line (for example to disable the FullClone primitive);
    esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0 

You can check the setting has been applied using the following command (a loop covering all three primitives is sketched after this list);

    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
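
If you want to check all three block primitives in one go, a small loop from the ESXi shell will do it – a minimal sketch that simply wraps the same command shown above;

    # list the current value of each VAAI advanced setting (1 = enabled, 0 = disabled)
    for p in /DataMover/HardwareAcceleratedMove /DataMover/HardwareAcceleratedInit /VMFS3/HardwareAcceleratedLocking; do
        esxcli system settings advanced list -o $p | grep -E "Path|Int Value"
    done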

Once you know the host is configured correctly, you can check each datastore to see if it supports VAAI;

  • To check if VAAI is enabled for a given datastore from the VI client, look at any view which lists datastores. There is a column named ‘Hardware Acceleration’ which shows either ‘Supported’, ‘Unsupported’ or ‘Unknown’ (Unknown meaning a VAAI operation hasn’t yet been tried on the datastore, or that some primitives are supported and some are not).
  • To check if VAAI is enabled for block devices from the command line you can use the same command used to check SSD tagging (so nothing new to remember);
    esxcli storage core device list

    That can be further refined using the ‘vaai’ namespace, which provides extra detail on each primitive and whether it’s supported or not (a per-device example follows this list);

    esxcli storage core device vaai status get

    You can still use the v4 command if it’s more familiar to you;

    esxcfg-scsidevs -l
  • To check if VAAI is enabled for NFS devices;
    esxcli storage nfs list
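
Going back to the block device commands above, they also accept a device argument if you want to check a single LUN – a minimal sketch with a placeholder NAA identifier;

    # VAAI primitive support for one block device
    esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx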

VMwareKB1021976 is a VAAI FAQ and is worth a read, as is Sean Duffy’s post which covers using ESXTOP with VAAI. There are some situations where VAAI will not work, and it’s possible the exam could test them. Jason Langer has a good post about VAAI and NetApp which details some useful troubleshooting steps. The official VMware docs on VAAI are also worth a read as they go into detail about the VAAI claim rules and filters.
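
If you want to see those VAAI claim rules and filters from the CLI, the claim rule list command takes a claim rule class – a quick sketch;

    # list the VAAI plugin claim rules and the VAAI filter claim rules
    esxcli storage core claimrule list --claimrule-class=VAAI
    esxcli storage core claimrule list --claimrule-class=Filter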

Configure and administer profile-based storage

There are a few steps to complete before you can use VM Storage Profiles;

  1. Check licensing – storage profiles are an Enterprise+ feature.
  2. Enable Storage Profiles (they’re disabled by default).
  3. Define and assign ‘capabilities’ which describe the underlying storage – RAID level, dedupe, replication etc. These capabilities can be either;
    • System Defined. These are automatically populated if your array supports VASA (the vSphere APIs for Storage Awareness), which requires a vendor-supplied plugin.
    • User Defined. The user has to define capabilities and there’s an extra step to associate them with datastores.
  4. Combine one or more capabilities to create a Storage Profile.
  5. Assign a storage profile to a VM.
  6. Check compliance either for all VMs (from the Storage Profiles view) or a single VM (via the context menu).

REMEMBER: Capabilities > Storage Profiles > VMs (and if using user defined capabilities you also need Capabilities > Datastores)

Mike Laverick has done a good blogpost on storage profiles with both Dell and NetApp storage arrays, as has Cormac Hogan over at the official VMware storage blog. Sections 20 and 21 of the VMware Storage guide cover Storage Providers (VASA) and VM Storage Profiles respectively, and Scott Lowe’s Mastering VMware vSphere5 book has good explanations on pages 298 and 341.

A few hints and tips;

  • There are two main places to look in the VI client – Storage Providers (for VASA) and VM Storage Profiles, both on the homepage.
  • VASA is enabled if there are entries under the Storage Providers tab on the vCenter homepage.
  • A datastore can only have one system-defined capability and one user-defined capability.
  • Each VMDK can have a different storage profile associated to it.
  • Storage Profiles are NOT compatible with RDMs.
  • Many of the views are not always up to date – click Refresh to be sure.
  • It’s possible to create storage profiles (and capabilities), and even associate them with each other, while storage profiles are disabled! When you check for compliance they’ll fail with a message that ‘Storage Profiles are not enabled’.
  • It is possible to create a storage profile with no capabilities but any associated VMs will show as Noncompliant.
  • Applying a storage profile can take a while so if you check compliance immediately it may show as failed.
  • The Noncompliant messages are rubbish – they don’t specify any detail to help you track down the mismatch!

Prepare storage for maintenance (mounting/unmounting)

vSphere5 distinguishes between planned device removal and unplanned storage loss (referred to as Permanent Device Loss, or PDL). From the wording, this objective is probably only concerned with planned outages, but you should know about both in case you get it wrong! Once storage has been presented from the storage array there are two related actions within vSphere, with different terminology for each;

  • Attaching/detaching links the block device presented by the storage array with the ESXi hosts.
  • Mounting/unmounting refers to a datastore created on top of the storage device (VMFS or NFS).

For example, when a new block device is found during an HBA rescan it’s automatically attached to the ESXi hosts. You can then create and mount a VMFS datastore on that device. To cleanly remove this storage you need to first unmount the VMFS and then detach the block device. This distinction is new to vSphere5 and has been added to help prevent the All Paths Down (APD) issue (which was largely addressed in vSphere4.1 U1). It’s interesting to note that the APD issue does still exist in vSphere5 (VMwareKB2004684), although work has been done to minimize the occurrences. A hardware failure, or an array which doesn’t return the right sense codes, can still result in failure of the ESXi host and the hosted VMs. A good reason to be nice to your storage admin and check the HCL!

Let’s say your storage array is being upgraded and you need to take the storage offline temporarily while the work is carried out. What do you need to do as a vSphere admin? VMwareKB2004605 covers the process in detail (including command line methods) but to summarise;

  • Ensure all VMs are moved out of the datastore, any RDMs are moved or deleted, Storage I/O Control is disabled, and the datastore isn’t part of a datastore cluster (Storage DRS) or used for HA heartbeating. vSphere checks these prerequisites and notifies you via a dialog, so you don’t have to remember them all.
  • Unmount the datastore. You can do this for a single host (Configuration > Storage) or a group of hosts (Inventory > Datastores and Datastore Clusters). You can also use ‘esxcli storage filesystem unmount’ if the command line is your thing. The unmounted datastore will still be displayed, grayed out with ‘(inactive)’ beside its name.
  • Detach the block device, which must be done PER HOST. In the VI client go to the host’s storage device view (Configuration > Storage devices) and click ‘Detach’. As with the unmount operation a check runs and notifies you via a dialog, so you’ll know if anything needs doing. The setting is persistent across reboots, so once a device is detached it will stay that way even if it’s re-presented by the storage array – you’ll have to explicitly ‘Attach’ it (again on all hosts) once the maintenance is complete. NOTE: because ‘Detach’ is run per host it’s possible to have a block device (and any contained datastores) available on one host and unavailable on another. The datastore will still show as available in the ‘Datastores and Datastore Clusters’ view even though some hosts may not have access. There’s no way to detach a device from multiple hosts at once using the GUI, but you can use ‘esxcli storage core device set --state=off’ from the command line, which is easy to repeat for multiple hosts (a combined CLI sketch follows this list).
  • Notify your storage team that it’s OK to perform maintenance on the storage.
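
For reference, a rough CLI sketch of the unmount and detach steps (the datastore label and NAA identifier are placeholders for your own environment);

    # find the datastore's label/UUID and its backing device
    esxcli storage filesystem list
    # unmount the VMFS datastore (by label here; -u <UUID> also works)
    esxcli storage filesystem unmount -l MyDatastore
    # detach the underlying block device (repeat on every host that sees it)
    esxcli storage core device set -d naa.xxxxxxxxxxxxxxxx --state=off
    # verify the device state
    esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx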

If the storage is being permanently removed from the array there are a few extra steps to follow on the ESXi host – again, see VMwareKB2004605. Another workaround is to mask the LUN at the ESXi host – this is also on the VCAP5-DCA blueprint, but the process is almost identical to vSphere4 (you use ‘esxcli storage core’ instead of ‘esxcli corestorage’).
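
As a quick refresher, a rough sketch of masking a LUN with the vSphere5 namespace (the rule number, adapter and channel/target/LUN values are placeholders for your own environment);

    # add a MASK_PATH claim rule for the path to the LUN
    esxcli storage core claimrule add -r 500 -t location -A vmhba33 -C 0 -T 0 -L 4 -P MASK_PATH
    # load the new rule, unclaim the existing paths so it takes effect, then run the rules
    esxcli storage core claimrule load
    esxcli storage core claiming unclaim -t location -A vmhba33 -C 0 -T 0 -L 4
    esxcli storage core claimrule run
    # verify the rule is in place
    esxcli storage core claimrule list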

Upgrade VMware storage infrastructure

Having covered preparing storage for maintenance, the only thing left (that I can think of) is VMFS upgrades. These are pretty simple;

  • VMFS3 volumes can be upgraded nondisruptively, either from the GUI or via ‘vmkfstools -T /vmfs/volumes/vol1‘ or ‘esxcli storage vmfs upgrade -l vol1‘ (a quick version check is sketched after this list).
  • Recommended practice is to create new VMFS5 volumes, svMotion all VMs across, and then remove the old VMFS3 volumes.
  • Only vSphere5 hosts can read VMFS5, so only upgrade when all your hosts are upgraded.
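
To confirm the VMFS version before and after an upgrade, either of the following will do – a minimal sketch with a placeholder volume name;

    # the 'Type' column shows VMFS-3 or VMFS-5 for each mounted volume
    esxcli storage filesystem list
    # detailed attributes (including the VMFS version) for a single volume
    vmkfstools -Ph /vmfs/volumes/vol1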

A list of VMFS5 features, along with which of them require a newly created volume rather than an upgrade, can be found in Cormac Hogan’s blogpost.
