VCAP-DCA Study Notes – 2.3 Deploy and Maintain Scalable Virtual Networks
- Identify VMware NIC Teaming policies
- Identify common network protocols
Skills and Abilities
- Understand the NIC Teaming failover types and related physical network settings
- Determine and apply Failover settings
- Configure explicit failover to conform with VMware best practices
- Configure port groups to properly isolate network traffic
Tools & learning resources
- Product Documentation
Identify, understand, and configure NIC teaming
The five available policies are:
- Route based on virtual port ID (default)
- Route based on IP hash (must be used with a static EtherChannel – no LACP). Beacon probing is not supported with this policy.
- Route based on source MAC address
- Route based on physical NIC load (new in vSphere 4.1; distributed switches only)
- Explicit failover
NOTE: These only affect outbound traffic. Inbound load balancing is controlled by the physical switch.
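As a rough illustration of how IP hash spreads outbound traffic, the sketch below models the selection as an XOR of the source and destination addresses modulo the number of active uplinks. This is a simplified model for intuition, not VMware's exact implementation:

```python
import ipaddress

def ip_hash_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Simplified model of 'Route based on IP hash': XOR the 32-bit
    source and destination addresses, then take the result modulo
    the number of active uplinks."""
    src = int(ipaddress.IPv4Address(src_ip))
    dst = int(ipaddress.IPv4Address(dst_ip))
    return (src ^ dst) % num_uplinks

# A given src/dst pair always maps to the same uplink, but
# different destinations can land on different uplinks.
print(ip_hash_uplink("10.0.0.5", "10.0.0.200", 2))  # → 1
print(ip_hash_uplink("10.0.0.5", "10.0.0.201", 2))  # → 0
```

Note that a single VM talking to a single destination still uses only one uplink – IP hash only helps when there are many source/destination pairs, which is why it pairs with a static EtherChannel on the physical side.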
Failover types and related physical network settings
- Cable pull/failure
- Switch failure
- Upstream switch failure
Change the NIC teaming policy for FT logging to IP hash – see VMware KB 1011966.
Use uplink failure detection (also known as link state tracking) to handle physical network failures outside direct visibility of the host.
With blades you typically don’t use NIC teaming, as each blade has a one-to-one mapping from each of its pNICs to the blade chassis switch. That switch may in turn use an EtherChannel to an upstream switch, but from the blade’s (and hence ESX’s) perspective it simply has multiple independent NICs, so route based on virtual port ID is the right choice.
Configuring failover settings
Failover settings can be configured at various levels on both standard and distributed switches:
- vSwitch, then port group
- dvPortGroup then dvPort
- dvUplinkPortGroup (NOTE: you can’t override at the dvUplinkPort level)
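The levels above follow a "most specific wins" rule: port-level overrides (where allowed) beat portgroup settings, which beat the switch-wide default. A trivial sketch of that resolution – the dicts are an illustrative model, not the real vSphere API:

```python
def effective_policy(switch, portgroup=None, port=None):
    """Resolve the effective teaming policy: port-level overrides
    (where permitted) win over portgroup settings, which win over
    the switch-wide default."""
    policy = dict(switch)  # start from the switch default
    for override in (portgroup, port):
        if override:
            policy.update(override)  # more specific level wins
    return policy

# Hypothetical settings for illustration:
vswitch_default = {"load_balancing": "portid", "active": ["vmnic0", "vmnic1"]}
mgmt_pg = {"active": ["vmnic0"], "standby": ["vmnic1"]}
print(effective_policy(vswitch_default, mgmt_pg))
```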
Explicit failover can be used to balance bandwidth while still providing resilience with a minimal number of pNICs. If you only have two pNICs available:
- Configure a single vSwitch and add both pNICs
- Configure two portgroups with explicit failover orders:
- Configure the management traffic portgroup to use pNIC1 as active with pNIC2 as standby.
- Configure the VM network portgroup to use pNIC1 as standby and pNIC2 as active.
This separates the traffic types onto different pNICs for optimal bandwidth while still providing resilience for both portgroups. VMware KB 1002722 describes this in more detail.
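The two-portgroup design above can be sketched as a simple selection function (the vmnic names are illustrative): each portgroup uses its first healthy active NIC, falling back to a standby NIC on failure.

```python
def select_uplink(active, standby, link_up):
    """Pick the uplink a portgroup with an explicit failover order
    will use: the first healthy active pNIC, else the first healthy
    standby pNIC. link_up maps pNIC name -> link state."""
    for nic in list(active) + list(standby):
        if link_up.get(nic, False):
            return nic
    return None  # no healthy uplink at all

links = {"vmnic0": True, "vmnic1": True}
mgmt = select_uplink(["vmnic0"], ["vmnic1"], links)    # management portgroup
vm_net = select_uplink(["vmnic1"], ["vmnic0"], links)  # VM network portgroup
print(mgmt, vm_net)  # → vmnic0 vmnic1 (traffic split across both pNICs)
```

When vmnic0 fails, the management portgroup falls back to vmnic1 and both portgroups share it until the link recovers – resilience at the cost of temporary contention.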
You can configure NIC teaming using the CLI (although this procedure isn’t covered in the standard documentation so won’t be available during the VCAP-DCA exam).
NOTE: With the vDS you get a diagram showing the actual path traffic takes through the switch. You can also confirm the actual NICs in use (and therefore whether your teaming is working as expected) using esxtop. More on this in section 6.3, Troubleshooting Network Connectivity.
Identify common network protocols
This has been covered elsewhere (in section 7.2 on the ESX firewall) and should be common knowledge. A few protocols which aren’t so common but are supported in vSphere:
- CDP (OSI layer 2)
- NTP (UDP port 123)
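To make the NTP entry concrete, the snippet below builds (but doesn’t send) a minimal NTPv3 client request of the kind a host sends to UDP port 123. The field packing follows the NTP header format; nothing here touches the network:

```python
import struct

def ntp_client_request() -> bytes:
    """Build a minimal 48-byte NTPv3 client request: LI=0, VN=3,
    Mode=3 packed into the first byte (0x1b), remaining header
    fields zeroed. In practice this is sent to UDP port 123."""
    first_byte = (0 << 6) | (3 << 3) | 3  # LI | VN | Mode
    return struct.pack("!B", first_byte) + b"\x00" * 47

pkt = ntp_client_request()
print(len(pkt), hex(pkt[0]))  # → 48 0x1b
```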
Isolation best practices
The following are generally accepted best practices (don’t let Tom Howarth hear you say that):
- Separate VM traffic and infrastructure traffic (vMotion, NFS, iSCSI)
- Use separate pNICs and vSwitches where possible
- VLANs can be used to isolate traffic (both from a broadcast and security perspective)
- When using NIC teams use pNICs from separate buses (i.e. don’t have a team comprising two pNICs on the same PCI card – use one onboard adapter and one from an expansion card)
- Keep FT logging on a separate pNIC and vSwitch (ideally 10 GbE)
- Use dedicated network infrastructure (physical switches etc) for storage (iSCSI and NFS)
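The "separate buses" guideline above is easy to audit with a trivial script, assuming you’ve already extracted a pNIC-to-PCI-bus mapping (e.g. from esxcfg-nics -l output; the mapping below is made up for illustration):

```python
from collections import defaultdict

def same_bus_teams(team, nic_to_bus):
    """Flag pNICs in a team that share a PCI bus, violating the
    'pNICs from separate buses' guideline. nic_to_bus maps
    pNIC name -> PCI bus identifier."""
    by_bus = defaultdict(list)
    for nic in team:
        by_bus[nic_to_bus[nic]].append(nic)
    # Only buses with more than one team member are a problem.
    return {bus: nics for bus, nics in by_bus.items() if len(nics) > 1}

# vmnic0 onboard, vmnic1/vmnic2 on one expansion card (illustrative):
buses = {"vmnic0": "0000:02", "vmnic1": "0000:0b", "vmnic2": "0000:0b"}
print(same_bus_teams(["vmnic1", "vmnic2"], buses))  # shares bus 0000:0b
print(same_bus_teams(["vmnic0", "vmnic1"], buses))  # → {} (OK)
```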
When you move to 10 GbE networks isolation is implemented differently (often using some sort of IO virtualisation like FlexConnect, Xsigo, or UCS) but the principles are the same. VMworld 2010 session TA8440 covers the move to 10 GbE and FCoE.