Tag Archives: DirectPath I/O

VCAP5-DCA study notes – section 1.1 Implement and Manage Complex Storage Solutions

As with all my VCAP5-DCA study notes, these blogposts only cover material new to vSphere5, so make sure you read the v4 study notes for section 1.1 first. When published, the VCAP5-DCA study guide PDF will be a complete standalone reference.

Knowledge

  • Identify RAID levels
  • Identify supported HBA types
  • Identify virtual disk format types

Skills and Abilities

  • Determine use cases for and configure VMware DirectPath I/O
  • Determine requirements for and configure NPIV
  • Determine appropriate RAID level for various Virtual Machine workloads
  • Apply VMware storage best practices
  • Understand use cases for Raw Device Mapping
  • Configure vCenter Server storage filters
  • Understand and apply VMFS resignaturing
  • Understand and apply LUN masking using PSA-related commands
  • Analyze I/O workloads to determine storage performance requirements
  • Identify and tag SSD devices
  • Administer hardware acceleration for VAAI
  • Configure and administer profile-based storage
  • Prepare storage for maintenance (mounting/un-mounting)
  • Upgrade VMware storage infrastructure

Tools & learning resources

With vSphere5 having been described as a ‘storage release’, there is quite a lot of new material to cover in Section 1 of the blueprint. First I’ll cover a couple of objectives which have only minor amendments from vSphere4.

Determine use cases for and configure VMware DirectPath I/O

The only real change is DirectPath vMotion, which is not as grand as it sounds. As you’ll recall from vSphere4, a VM using DirectPath can’t use vMotion or snapshots (or any feature which relies on them, such as DRS and many backup products), and the device in question isn’t available to other VMs. The only change with vSphere5 is that you can vMotion a VM provided it’s on Cisco’s UCS and there’s a supported Cisco UCS Virtual Machine Fabric Extender (VM-FEX) distributed switch. Read all about it here – if this is in the exam we’ve got no chance!

Identify and tag SSD devices

This is a tricky objective if you don’t own an SSD drive to experiment with (although you can work around that limitation). You can identify an SSD disk in various ways;

  1. Using the vSphere client. Any view which shows the storage devices (‘Datastores and Datastore clusters view’, Host summary, Host -> Configuration -> Storage etc) includes a new column ‘Drive Type’ which lists Non-SSD or SSD (for block devices) and Unknown for NFS datastores.
  2. Using the CLI. Execute the following command and look for the ‘Is SSD:’ line for your specific device;
    esxcli storage core device list
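
If you know the device identifier you can narrow the output to a single device (the naa ID below is a placeholder for your own device);

    esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx | grep -i "Is SSD"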

Tagging an SSD should be automatic, but there are situations where you may need to do it manually. This can only be done via the CLI and is explained in this VMware article. The steps are similar to masking a LUN or configuring a new PSP (a rough sketch follows the steps);

  1. Check the existing SATP claim rules
  2. Add a new claim rule for your device, specifying the ‘enable_ssd’ option
  3. Unclaim and reclaim the device so the new rule takes effect
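
A minimal sketch of those steps (the naa ID is a placeholder, and VMW_SATP_LOCAL assumes a local disk – check which SATP currently claims your device first);

    # 1. Check the existing SATP claim rules
    esxcli storage nmp satp rule list
    # 2. Add a rule tagging the device as SSD
    esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d naa.xxxxxxxxxxxxxxxx -o enable_ssd
    # 3. Unclaim then reclaim the device so the rule takes effect
    esxcli storage core claiming unclaim -t device -d naa.xxxxxxxxxxxxxxxx
    esxcli storage core claiming reclaim -d naa.xxxxxxxxxxxxxxxx
    # Verify - the 'Is SSD:' line should now read true
    esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx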

So you’ve identified and tagged your SSD, but what can you do with it? SSDs can be used with the new Swap to Host cache feature best summed up by Duncan over at Yellow Bricks;

“Using “Swap to host cache” will severely reduce the performance impact of VMkernel swapping. It is recommended to use a local SSD drive to eliminate any network latency and to optimize for performance.”

As an interesting use case here’s a post describing how to use Swap to Host cache with an SSD and laptop – could be useful for a VCAP home lab!

The above and more are covered very well in chapter 15 of the vSphere5 Storage guide.

VCAP-DCA Study guide – 6.3 Troubleshooting Network Performance and Connectivity

Knowledge

  • Identify virtual switch entries in a Virtual Machine’s configuration file
  • Identify virtual switch entries in the ESX/ESXi Host configuration file
  • Identify CLI commands and tools used to troubleshoot vSphere networking configurations
  • Identify logs used to troubleshoot network issues

Skills and Abilities

  • Utilize net-dvs to troubleshoot vNetwork Distributed Switch configurations
  • Utilize vicfg-* commands to troubleshoot ESX/ESXi network configurations
  • Configure a network packet analyzer in a vSphere environment
  • Troubleshoot Private VLANs
  • Troubleshoot Service Console and vmkernel network configuration issues
  • Troubleshoot related issues
  • Use esxtop/resxtop to identify network performance problems
  • Use CDP and/or network hints to identify connectivity issues
  • Analyze troubleshooting data to determine if the root cause for a given network problem originates in the physical infrastructure or vSphere environment

Tools & learning resources

Identify virtual switch entries in a VM’s configuration file

The .VMX file contains entries for both vSS and vDS connections.

Consider a VM with three vNICs on two separate vDSs. When troubleshooting you may need to coordinate the values in the .VMX with the net-dvs output on the host;

  • NetworkName will show “” when the vNIC is on a vDS.
  • The .VMX will show the dvs.portId, dvs.portgroupId and dvs.connectionId used by the VM – all three values can be matched against the net-dvs output and used to check the port configuration details – load balancing, VLAN, packet statistics, security etc.

NOTE: Entries are not grouped together in the .VMX file so check the whole file to ensure you see all relevant entries.

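To illustrate, here’s roughly what a vSS-connected vNIC and a vDS-connected vNIC look like in the .VMX (all values below are invented for illustration, and the # lines are just annotations);

    # vSS-connected vNIC - identified by a portgroup name
    ethernet0.present = "TRUE"
    ethernet0.virtualDev = "vmxnet3"
    ethernet0.networkName = "VM Network"
    # vDS-connected vNIC - identified by dvs.* entries instead
    ethernet1.present = "TRUE"
    ethernet1.virtualDev = "vmxnet3"
    ethernet1.dvs.switchId = "c2 b5 2d 50 6e 73 4b 8e-0a 1d 9e 6f 5c 3a 2b 11"
    ethernet1.dvs.portgroupId = "dvportgroup-122"
    ethernet1.dvs.portId = "137"
    ethernet1.dvs.connectionId = "1389927568"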

Identify virtual switch entries in the ESX/ESXi host configuration file

The host configuration file (same file for both ESX and ESXi);

  • /etc/vmware/esx.conf

Like the .VMX file it contains entries for both switch types although there are only minimal entries for the vDS. Most vDS configuration is held in a separate database and can be viewed using net-dvs (see section 6.3.7).
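
A quick way to eyeball those entries (the grep pattern assumes the /net/ prefix used for the networking tree in esx.conf);

    # Show the networking entries in the host configuration file
    grep "/net/" /etc/vmware/esx.conf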

Command line tools for network troubleshooting

The usual suspects;

  • vicfg-nics
  • vicfg-vmknic
  • vicfg-vswitch (-b) for CDP
  • vicfg-vswif
  • vicfg-route
  • cat /etc/resolv.conf, /etc/hosts
  • net-dvs
  • ping and vmkping
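
Some example invocations (the switch name and IP are illustrative – add the usual --server/--vihost connection options when running via the vMA or vCLI);

    vicfg-nics -l                # list physical NICs with speed/duplex/driver
    vicfg-vswitch -l             # list vSwitches, portgroups and uplinks
    vicfg-vswitch -b vSwitch0    # show CDP status for a vSwitch
    vicfg-vmknic -l              # list vmkernel interfaces
    vicfg-route -l               # show the VMkernel routing table
    net-dvs -l                   # dump the vDS database (run on the host itself)
    vmkping 10.0.0.1             # ping using the vmkernel stack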

VCAP-DCA Study notes – 2.1 Implement and Manage Complex Virtual Networks

The VCAP-DCA lab is still v4.0 (rather than v4.1), which means features such as NIOC and load based teaming (LBT) aren’t covered. Even though the Nexus 1000V isn’t on the Network objectives blueprint (just the vDS), it’s worth knowing what extra features it offers, as some goals might require you to know when to use the Nexus 1000V rather than just the vDS.

Knowledge

  • Identify common virtual switch configurations

Skills and Abilities

  • Determine use cases for and apply IPv6
  • Configure NetQueue
  • Configure SNMP
  • Determine use cases for and apply VMware DirectPath I/O
  • Migrate a vSS network to a Hybrid or Full vDS solution
  • Configure vSS and vDS settings using command line tools
  • Analyze command line output to identify vSS and vDS configuration details

Tools & learning resources

Network basics (VCP revision)

Standard switches support the following features (see section 2.3 for more details);

  • NIC teaming
    • Based on source VM ID (default)
    • Based on IP Hash (used with Etherchannel)
    • Based on source MAC hash
    • Explicit failover order
  • VLANs (EST, VST, VGT) – see the CLI sketch after this list
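
As a quick CLI refresher (portgroup and vSwitch names are examples), EST/VST/VGT on a standard vSwitch boil down to the VLAN ID set on the portgroup;

    # VST: tag the portgroup with VLAN 10
    esxcfg-vswitch -p "Production" -v 10 vSwitch0
    # VGT: VLAN 4095 passes all tags through to the guest
    esxcfg-vswitch -p "TrunkedVMs" -v 4095 vSwitch0
    # EST: VLAN 0 means no vSwitch tagging (done on the physical switch instead)
    esxcfg-vswitch -p "Production" -v 0 vSwitch0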

vDS Revision

The vDistributed switch separates the control plane and the data plane to enable centralised administration as well as extra functionality compared to standard vSwitches. A good summary can be found at GeekSilver’s blog. Benefits;

  • Offers both inbound and outbound traffic shaping (standard switches only offer outbound)
    • Traffic shaping can be applied at both dvPortGroup and dvUplink PortGroup level
    • For dvUplink PortGroups ingress is traffic from external network coming into vDS, egress is traffic from vDS to external network
    • For dvPortGroups ingress is traffic from VM coming into vDS, egress is traffic from vDS to VMs
    • Configured via three policies – average bandwidth, burst size, and peak bandwidth (see the worked example after this list)
  • Ability to build a third party vDS on top (Cisco Nexus 1000v)
  • Traffic statistics are available (unlike standard vSwitches)
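
As a back-of-the-envelope example of how the three shaping policies interact (numbers are illustrative): with an average bandwidth of 100,000 Kbps, a peak of 200,000 Kbps and a burst size of 102,400 KB, a port can transmit above its average at up to peak rate for roughly 102,400 KB × 8 ÷ 200,000 Kbps ≈ 4 seconds before being throttled back.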

NOTES:

  • CDP and MTU are set per vDS (as they are with standard vSwitches).
  • PVLANs are defined at switch level and applied at dvPortGroup level.
  • There is one DVUplink Portgroup per vDS
  • NIC teaming is configured at the dvPortGroup level but can be overridden at the dvPort  level (by default this is disabled but it can be allowed). This applies to both dvUplink Portgroups and standard dvPortGroups although on an uplink you CANNOT override the NIC teaming or Security policies.
  • Policy inheritance (lower level takes precedence but override is disabled by default)
    • dvPortGroup -> dvPort
    • dvUplink PortGroup -> dvUplinkPort

NOTE: Don’t create a vDS with special characters in the name (I used ‘Lab & Management’) as it breaks host profiles – see VMwareKB1034327.

VCAP-DCA Study notes – 1.1 Implement and manage complex storage

Storage is an area where you can never know too much. For many infrastructures storage is the most likely cause of performance issues and a source of complexity and misconfiguration – especially given that many VI admins come from a server background (not storage) due to VMware’s server consolidation roots.

Knowledge

  • Identify RAID levels
  • Identify supported HBA types
  • Identify virtual disk format types

Skills and Abilities

  • Determine use cases for and configure VMware DirectPath I/O
  • Determine requirements for and configure NPIV
  • Determine appropriate RAID level for various Virtual Machine workloads
  • Apply VMware storage best practices
  • Understand use cases for Raw Device Mapping
  • Configure vCenter Server storage filters
  • Understand and apply VMFS resignaturing
  • Understand and apply LUN masking using PSA-related commands
  • Analyze I/O workloads to determine storage performance requirements

Tools & learning resources

Identify RAID levels

Common RAID types: 0, 1, 5, 6, 10. Wikipedia do a good summary of the basic RAID types if you’re not familiar with them. Scott Lowe has a good article about RAID in storage arrays, as does Josh Townsend over at VMtoday.

The impact of RAID types will vary depending on your storage vendor and how they implement RAID. Netapp (which I’m most familiar with) uses a proprietary RAID-DP, which is like RAID-6 but without the performance penalties (or so Netapp say).
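
As a quick worked example of why this matters (using the commonly quoted write penalties, which your array may mitigate): a workload of 1,000 IOPS at 70% read / 30% write needs roughly 700 + (300 × 4) = 1,900 backend IOPS on RAID 5 (write penalty 4), but only 700 + (300 × 2) = 1,300 on RAID 10 (write penalty 2).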

Supported HBA types

This is a slightly odd exam topic – presumably we won’t be buying HBAs as part of the exam, so what’s there to know? The best (only!) place to look for real world info is VMware’s HCL (which is now an online, searchable repository). Essentially it comes down to Fibre Channel or iSCSI HBAs.

Remember you can have a maximum of 8 HBAs or 16 HBA ports per ESX/ESXi server. You should not mix HBAs from different vendors in a single server – it can work but isn’t officially supported.
