Monday, May 20, 2013

Questions regarding Networking in VMM 2012 SP1


Once in a while, I get questions from the beloved readers of my blog.
Some of them are also quite relevant to the rest of the community, and that is the case for this blog post. I received some questions about networking in VMM and can happily share the Q&A with you here:
--------------------------------------------------------------------------------------------------------------------------
Environment:
I would like to implement the converged fabric method via SCVMM 2012 SP1. Currently we do not have plans to use NVGRE; everything is using VLANs.
Our hosts have 2x10Gb and 4x1Gb physical NICs. For storage we use HBAs connected to an EMC SAN.

Q1: Logical switches:
Is it a good idea to create two logical switches in SCVMM? One for datacenter traffic (vNIC LM, vNIC Cluster, vNIC Mgmt) and one for VM guests. Should I use the 2x10Gb for the VM guests and the 4x1Gb for the datacenter traffic? Will the 4x1Gb be sufficient for datacenter traffic?
In Greg Cusanza's MMS 2013 session, only one logical switch is used.

A1:
It depends on the physical adapters in most cases. If you have, let’s say, 2x10GbE presented on your host, I would create one team (equal to one logical switch in VMM) and spread the different traffic types across virtual network adapters with corresponding QoS settings assigned to them.
But when you mix NICs with different speeds (1GbE and 10GbE) in the same team, you would not be too happy with the load balancing in that team. In that case, you can safely create two logical switches with VMM, separate the NICs into two teams, and assign the preferred traffic to each team. To decide which team and adapters to use for each traffic type, I recommend giving Live Migration and storage (iSCSI or SMB) a higher minimum bandwidth guarantee. This ensures that live migrations complete faster, and that your virtual machines’ hard disks get sufficient IOPS.

See common configurations here (the examples are shown for Hyper-V in Windows Server 2012 with PowerShell): http://technet.microsoft.com/en-us/library/jj735302.aspx
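
As a minimal sketch, this is roughly what such a converged configuration looks like with the Windows Server 2012 PowerShell cmdlets. The team name, adapter names, VLAN IDs and weight values below are only example assumptions; adjust them to your own environment:

# Team the two 10GbE adapters (check your adapter names with Get-NetAdapter)
New-NetLbfoTeam -Name "Team-10GbE" -TeamMembers "10GbE-1","10GbE-2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Create a converged virtual switch on top of the team, using weight-based QoS
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team-10GbE" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Add host virtual NICs for the different traffic types
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"

# Give Live Migration a higher minimum bandwidth guarantee (example weights)
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10

# Tag the host vNICs with their VLANs (example VLAN IDs)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 30

When you deploy a logical switch from VMM, it performs the equivalent steps on the host for you, based on the uplink port profile and virtual port profiles you have defined.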

Q2: Logical networks:
The following blog post recommends creating a logical network for each type of traffic (LM, Cluster, Mgmt, AppA-VLAN, AppB-VLAN, AppC-VLAN):
http://blogs.technet.com/b/scvmm/archive/2013/04/29/logical-networks-part-ii-how-many-logical-networks-do-you-really-need.aspx

On the other hand, the following video blog post shows creating only two logical networks, one for Datacenter and one for VM Guests, each with several network sites:
http://blogs.technet.com/b/yungchou/archive/2013/04/15/building-private-cloud-blog-post-series.aspx

What is your opinion on this? Which one is best practice? Does either have (dis)advantages? Would I lose any functionality if I choose one over the other?
(taking into account that we currently have 20 VLANs)

A2:
A logical network in VMM should represent the actual networks and sites that serve a function. Let’s say that ‘Management’ is the management network, where hosts connected to this network can communicate with each other. You can have different sites and subnets here (also VLANs), but all in all it’s the same logical network, serving the function of management traffic. Also remember that VM networks (which are abstractions of logical networks) are assigned to virtual network adapters when you use logical switches and teaming. So to get this straight, you must have a logical network for every different type of network traffic you will use in this configuration, because a VM network can only be associated with one logical network.
Typically, when following best practice for converged fabric in VMM, you will end up with a configuration similar to this:

1 Logical Network for Management
1 Logical Network (dedicated subnet/VLAN) for Live Migration
1 Logical Network (dedicated subnet/VLAN) for Cluster communication
1 or more Logical Networks for SMB 3.0 traffic (to support multi-channel in a scale-out file server cluster)
1 or more Logical Networks for iSCSI traffic
1 or more Logical Networks for VM guests (the VM networks you create afterwards will be associated with this logical network; by using a trunk on the uplink, you can easily assign the right subnet and VLAN directly on your VMs’ virtual adapters).

For more information about a common configuration with VMM, see http://blogs.technet.com/b/privatecloud/archive/2013/04/03/configure-nic-teaming-and-qos-with-vmm-2012-sp1-by-kristian-nese.aspx
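
As an illustration, here is a rough sketch of how one of these logical networks (Live Migration in this example) could be created with the VMM PowerShell module, together with its network site and VM network. The names, subnet, VLAN ID and host group are assumptions for the example, and you should verify the exact parameters against your VMM 2012 SP1 cmdlets:

# Assumed names and address ranges - adjust to your environment
$hostGroup = Get-SCVMHostGroup -Name "All Hosts"

# One logical network per traffic type, here Live Migration
$ln = New-SCLogicalNetwork -Name "Live Migration"

# A network site (logical network definition) with its dedicated subnet/VLAN
$subnetVlan = New-SCSubnetVLan -Subnet "10.10.20.0/24" -VLanID 20
New-SCLogicalNetworkDefinition -Name "Live Migration - Site1" -LogicalNetwork $ln `
    -SubnetVLan $subnetVlan -VMHostGroup $hostGroup

# The VM network that is later assigned to the host vNIC; it can only be
# associated with this one logical network
New-SCVMNetwork -Name "Live Migration" -LogicalNetwork $ln -IsolationType "NoIsolation"

You would then repeat the same pattern for Management, Cluster, storage and VM guest traffic, each with its own subnet and VLAN.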

Q3: Teaming:
In the same video blog by Yung Chou, they mention that for the backend traffic we should use an uplink port profile with the TransportPorts load-balancing algorithm, as this would give better load balancing.
For the VM guest traffic we should use Hyper-V Port.
This is the first time that I see this recommendation. What is your experience with this?

A3:
This is a tricky question, and the answer depends on how many NICs are present on your host.
If the number of virtual NICs greatly exceeds the number of team members, then Hyper-V Port is recommended.
Address hashing (such as TransportPorts) is best used when you want maximum bandwidth available to each individual connection.

I would recommend ordering the book ‘Windows Server 2012 Hyper-V Installation and Configuration Guide’ by Aidan Finn and his crew to get all the nasty details here.
For this to work from a VMM perspective, you would need to create two logical switches with different configurations.
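
To make that concrete, a rough sketch of the two uplink port profiles (one per logical switch) could look something like the lines below. The profile names are made up for the example, and the parameter names should be verified against the VMM 2012 SP1 PowerShell module:

# Uplink port profile for the datacenter/backend team
New-SCNativeUplinkPortProfile -Name "Uplink-Datacenter" `
    -LBFOTeamMode "SwitchIndependent" -LBFOLoadBalancingAlgorithm "TransportPorts"

# Uplink port profile for the VM guest team
New-SCNativeUplinkPortProfile -Name "Uplink-VMGuests" `
    -LBFOTeamMode "SwitchIndependent" -LBFOLoadBalancingAlgorithm "HyperVPort"

You then add each uplink port profile to its own logical switch, so the team carrying VM guest traffic uses Hyper-V Port while the backend team uses address hashing on transport ports.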
