Hyper-V and Network Teaming: Understanding the Link Speed

 

It’s great to see so many people trying out Hyper-V! I’ve long known that the virtual switch poses a conceptual hurdle. That’s why I wrote my earlier article to explain the virtual switch itself. I avoided talking about teaming in that post because mixing two difficult concepts usually ends badly. We had our longer series about Hyper-V and networking, which included a piece on teaming. However, that series requires a fair bit of reading, which no one wants to do when they’re anxious to get started on a new project. Unfortunately, not knowing several facts often results in a confused, frustrated administrator.

If you’re setting up a new Hyper-V virtual switch on a team of network adapters and something doesn’t seem right, this article is for you.

Common “Problems” Reported for Hyper-V Virtual Switches and Network Teams

Here are a few different “problems” that people have found:

  • The Hyper-V virtual switch’s adapter reports that it is only 1 gigabit (or 10 gigabit), even though it’s bound to a team of multiple gigabit (or 10 gigabit) adapters
  • File copies over a Hyper-V virtual switch are only 1 gigabit, same configuration as the first bullet
  • The dedicated IP address for the team is gone or not working as expected

If any of these things have happened to you, nothing is wrong. This behavior is expected, normal, and fully conforms to every applicable networking standard.

Vital Concepts for Hyper-V Virtual Switches and Network Teaming

The two articles linked from the first paragraph are important reading. You need to go through them, or an equivalent, at some point. If you already have a team/switch built and you just want some answers, this article will get you up-to-speed quickly so that you can make sense of what you’re seeing.

I want to begin by spelling out the most important points.

1. The Hyper-V Virtual Switch is a Switch

Many people start by telling me the way their system worked initially, then how its behavior changed after creating a virtual switch. They’ll say, “I only added a virtual switch!” [emphasis mine] That statement is a fair indicator of someone new to the world of virtualization. Just because you can’t touch or see a thing does not make its existence trivial. The Hyper-V virtual switch is a frame-slinging, VLAN-tagging, QoSing, machine-connecting powerhouse — just like a real switch. The only difference: it’s virtual. You’d never say, “My network didn’t change at all except that I added a new Cisco Catalyst between the core switch and the new servers.” At least, I hope you wouldn’t.

2. The Hyper-V Virtual Switch Does Not Have an IP

You cannot assign an IP address to the Hyper-V virtual switch. It has no layer 3 presence of any kind. You might see some things that make you think otherwise, but I promise you that this is the absolute truth.

3. The Hyper-V Virtual Switch Is Not Represented by a Network Adapter

You can bind the Hyper-V virtual switch to a physical adapter, but there isn’t any adapter that is the Hyper-V virtual switch. In many cases, a virtual adapter will be created that has the same name as the Hyper-V virtual switch, but it is not the switch.

4. Network Adapter Teaming Does Not Aggregate Bandwidth

Lots of people struggle mightily with this concept. They make a team of four gigabit adapters, see the link speed report of 4 gigabits, and believe that they’ve just made a great big 4 gigabit pipe. No! They have made a link aggregation of four gigabit adapters that act as a logical unit. If you want more explanation, try this article.

When multiple physical paths are available, some technologies can make use of that to increase transmission speed. However, adapter teaming does not inherently grant that capability. Some, such as MPIO and SMB multichannel, work better when you leave adapters unteamed.

We do not create adapter teams for bandwidth aggregation. To blatantly steal Microsoft’s acronym, we use them for load-balancing and failover. If you’re not familiar with the acronym that I’m referencing, all of the related PowerShell cmdlets contain “LBFO” and you invoke the graphical applet with lbfoadmin.exe.
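
If you want to see the full set of those cmdlets on your own system, a quick way to list everything whose noun contains “NetLbfo” is:

# List the built-in NIC teaming (LBFO) cmdlets available on this host
Get-Command *NetLbfo*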

Network Team Load Balancing

Load balancing is the closest that a team of network adapters ever comes to bandwidth aggregation. When a network application starts communicating with a remote system, the logical link group will choose a single member to carry that traffic. The next time that any application starts communicating, the group may choose a different member. With a Microsoft team, the load balancing algorithm makes that determination. Surprisingly enough, we have an article that covers the algorithms and how to choose one.
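
As a quick illustration, you can inspect and change the algorithm on an existing team with the LBFO cmdlets. The team name below follows this article’s example and is just a placeholder for yours:

# View the team's current teaming mode and load balancing algorithm
Get-NetLbfoTeam -Name 'vSwitchTeam'

# Change the team to the Dynamic algorithm (available on 2012 R2 and later)
Set-NetLbfoTeam -Name 'vSwitchTeam' -LoadBalancingAlgorithm Dynamic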

I want you to take away one thing from this point: a single communication stream can use exactly one physical pathway. It’s already computationally expensive to break data apart for network transmission and reassemble it at the destination. If all traffic needed to be broken apart and tracked over an arbitrary number of interconnects, network performance would degrade, not improve.

Failover

With adapter teams, we can prevent a failed NIC from isolating a host. If we extend that with stacked physical switches, or separate physical switches with switch-independent adapter teams, we can ensure that a Hyper-V host has no single point of failure in its networking.

Even though physical switch and NIC failures are rare, you can still get quite a bit of mileage from this feature. Your networking teams can update switch firmware without scheduling complete downtime. Physical switches can be attached to separate power supplies, partially shielding your traffic from electrical outages.
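
If you want to watch failover happen, a simple check is to look at the team and its members while you unplug one cable. The names below follow this article’s example:

# Overall team health; expect Up normally and Degraded while a member is down
Get-NetLbfoTeam -Name 'vSwitchTeam'

# Per-member status for the same team
Get-NetLbfoTeamMember -Team 'vSwitchTeam'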

Examining the Network Adapter Team and Virtual Switch Package

Let’s walk through a team/switch combination and look at all of the parts.

Create a Network Adapter Team

You need to start this whole thing by building a team. It is possible to create the virtual switch first, but that just makes all of this even more confusing. My intent with this article is to explain the concepts, so these are some brief instructions, not an exhaustive explanation.

In PowerShell, use New-NetLbfoTeam:

New-NetLbfoTeam -Name vSwitchTeam -TeamMembers 'PTL - vSwitch', 'PBL - vSwitch' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

I used a line similar to the above to create my team, so I need to give you a couple of notes. First, I named my real team “vSwitch”. I changed it to “vSwitchTeam” in the above so that it would be a little less confusing. The name can be just about anything that you want. I called mine “vSwitch” because I understand that it’s still a team and I prefer short names because I already type a lot. The “TeamMembers” entries use the names of network adapters that I had already renamed. Use Get-NetAdapter to see what yours are (and Rename-NetAdapter if you don’t like them). The “PTL” and “PBL” in my names refer to the location on the physical server. “P” is for “PCI slot” (as opposed to onboard). “BL” means “bottom left” when viewing the server’s rear panel; “TL” means “top left”.
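
If you’d like to follow the same renaming approach, something like this works. The original adapter name below is a placeholder; yours will differ:

# See the current adapter names, descriptions, and reported speeds
Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, LinkSpeed

# Rename an adapter to describe its physical location; 'Ethernet 3' is only an example
Rename-NetAdapter -Name 'Ethernet 3' -NewName 'PTL - vSwitch'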

Graphical installations of Windows Server provide a GUI for team creation. You can invoke it with LBFOadmin.exe or from Server Manager. On the Local Server tab, find the NIC Teaming line. It will have a blue link that reads Disabled or Enabled, depending on the server’s current status. Click that link to open the NIC Teaming applet. You’ll find the New Team link under the Tasks drop-down in the Teams section:

[Image: creating a new team in the NIC Teaming (lbfoadmin.exe) applet]

Once your team has been created, it will also have a “team NIC”. The team NIC is a logical adapter. It gives the operating system something to bind things to, like IP addresses or a Hyper-V virtual switch:

[Image: the team NIC as shown in Network Connections]

The distinction is a bit clearer in PowerShell:

[Image: the team NIC as shown in PowerShell]
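
If you want to reproduce that comparison yourself, these two cmdlets show the team and the logical team NIC that it exposes. The team name follows this article’s example:

# The team object itself
Get-NetLbfoTeam -Name 'vSwitchTeam'

# The logical team NIC that the team presents to the operating system
Get-NetLbfoTeamNic -Team 'vSwitchTeam'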

If we look at the properties for that team NIC, we’ll see that it reports a speed that is an aggregate of the speed of all constituent adapters:

[Images: the team adapter’s reported speed in Network Connections and in PowerShell]

If you only remember a single thing from this article, I want it to be this: these numbers are nothing more than a display. It’s not a promise, nothing has been done to prove it, and you may or may not ever be able to achieve it. For physical adapters, it’s the detected or overridden speed. For virtual adapters, the system simply added the members’ speeds together until it ran out of adapters and showed you the result.
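
You can see for yourself that the figure is just arithmetic by comparing the reported LinkSpeed values. The names below follow this article’s example and assume the team NIC kept the default name that matches the team:

# The logical team NIC reports the sum of its members' speeds
Get-NetAdapter -Name 'vSwitchTeam' | Select-Object Name, LinkSpeed

# The individual members report their own detected speeds
Get-NetAdapter -Name 'PTL - vSwitch', 'PBL - vSwitch' | Select-Object Name, LinkSpeed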

This is where a lot of people start doing pre-switch performance testing. That’s fine to do; I have no reason to stop you.

Create a Virtual Switch

I only use PowerShell to create virtual switches because the GUI annoys me. The GUI location is in Hyper-V Manager under the Virtual Switch Manager links, if that’s what you want to use. If you’ve never seen it before, it won’t take long to understand why it annoys me.

In PowerShell, use New-VMSwitch:

New-VMSwitch -Name vSwitch -NetAdapterName vSwitchTeam

If you’ve been looking at my pictures, there’s a mismatch between the above cmdlet and the “vSwitchTeam” item. That’s again because my personal team is named “vSwitch”. I’m using “vSwitchTeam” to draw a distinction between the virtual switch and the network team. The NetAdapterName parameter expects the name of the network adapter that you’re binding the virtual switch to. Its name can be found with Get-NetAdapter.

My long-time readers will also probably notice that I excluded the “AllowManagementOS” parameter. On my virtual switch, I would include it and set it to false so that I can take ownership of the virtual NIC creation process. However, I’m guessing that most of you found your way here because you used the “allow” option and currently have a virtual adapter named after your virtual switch. There’s nothing in the world wrong with that; it’s just not what I do.

In order for the rest of this article to make any sense, you must have at least one virtual NIC created for the management operating system.
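
For completeness, here is a rough sketch of the approach I just described, using this article’s example names; the vNIC name “Management” is only a placeholder, and if you already created your switch with the default “allow” option you don’t need any of this:

# Create the switch without an automatic management OS vNIC
New-VMSwitch -Name 'vSwitch' -NetAdapterName 'vSwitchTeam' -AllowManagementOS $false

# Add a management OS vNIC by hand; 'Management' is just an example name
Add-VMNetworkAdapter -ManagementOS -SwitchName 'vSwitch' -Name 'Management'

# Confirm what the management operating system now has
Get-VMNetworkAdapter -ManagementOS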

Examining the Management Operating System’s Virtual Adapter

This is where all of your hard work has left you: You’ve got a brand new team and a brand new switch and now it appears that something isn’t right.

What I usually see is someone testing network speeds using file copy. Stop. Seriously, stop. If I’m interviewing you for an IT job and I ask you what you use to test network speed and you say “file copy”, there won’t be a call back. If you’re working for me and I catch you using file copy to test network speed, it would not be in your best interests to use me as a professional reference when you’re looking for your next job. Now, if file copies make you suspect a network speed problem, that’s fine, if and only if you verify it with an actual network speed testing tool.

But, with that out of the way, let’s say that this is what you saw that bothered you:

[Image: the management operating system vNIC’s reported link speed]

The adapter that I’ve shown you here is named “vEthernet (Cluster)” because I don’t use the default vNIC that’s named after the switch. If your virtual switch is called “vSwitch” and you left the defaults, then yours will be called “vEthernet (vSwitch)”. The name doesn’t really matter; this is just explanatory. What we’re looking at here is the speed. What people tend to say to me is:

If the virtual switch is connected to a team, why does its adapter only show 1 gigabit?

They ask me that because they don’t realize that this adapter does not connect to the team. It connects to the virtual switch. The virtual switch connects to the team. To make it even more interesting, I’ve seen different numbers for these virtual adapters. Sometimes it’s 10 Gbps (most common on 2012 R2 and earlier). I’ve even had mine report the team’s aggregate speed on occasion. First, remember that this number does not mean anything. Depending on your load balancing algorithm, 1 Gbps might be as fast as it will ever go, or it might be able to transmit across all team members. This readout cannot tell you that.
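
A quick way to see both points from PowerShell: each management OS vNIC lists the virtual switch, not the team, as its connection, and its LinkSpeed is only a readout:

# Each management OS vNIC shows the virtual switch it connects to
Get-VMNetworkAdapter -ManagementOS | Format-Table Name, SwitchName

# The vEthernet adapters' LinkSpeed values are display-only figures
Get-NetAdapter -Name 'vEthernet*' | Format-Table Name, LinkSpeed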

This is what adding a virtual switch does to your teamed system:

[Image: the team before and after adding a Hyper-V virtual switch]

When you have a team without a virtual switch, then the “connection” between your physical system and the physical network is the team. It’s the first pathway that any outbound traffic will take. The host’s IP address is assigned to the logical NIC that represents the team. That’s the “before”.

When you add a Hyper-V virtual switch, you’re adding a switch. Sure, you can’t see or touch it, but it’s there. If you opted to “share” the physical network team with the virtual switch, then it created a virtual network adapter named after the virtual switch. The IP address of the host was transferred to that virtual adapter. That adapter has only a single “connection” to your switch. That’s the “after”.

The End Effect

Hopefully, the picture helps explain why you only see 1 Gbps (or 10 Gbps, depending on your hardware) for the management operating system’s vNICs. Hopefully, it also helps explain why you “lost” performance (for you file copiers). If you chose your load balancing algorithm wisely and set up network tests appropriately, you’ll see balancing from within the management operating system.

On to the big question: have you lost performance? The answer: NO!

  1. The purpose of employing a hypervisor is hosting virtual machines. You need to balance traffic across multiple guests. The transfer speed of the management operating system is not your top concern.
  2. By design, some of the accelerations and other network enhancements don’t work as well in the management operating system. Things are better in 2016, but that doesn’t matter. You should build your management operating system to play second fiddle to the guest operating systems because it is the management operating system for a hypervisor.
  3. Because of the way that the virtual switch operates, it can slow transfer speeds down. You will only notice it with 10 Gbps or faster adapters. VMQ is designed to alleviate the problem but might not eliminate it; a quick way to check VMQ status appears just below this list. It’s not related to teaming or aggregation, though.
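
If you want to see whether VMQ is present and enabled on your physical adapters, the check mentioned in point 3 looks something like this:

# Show VMQ status per adapter; Enabled reflects each adapter's current setting
Get-NetAdapterVmq | Format-Table Name, Enabled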

To really see the benefits of network adapter teaming with Hyper-V, build a few guests and run transfer speed tests from all of them simultaneously. You’ll find that load balancing works just fine.
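
While those guest tests run, watch the physical team members from the host; with a well-chosen load balancing algorithm you should see traffic spread across more than one of them. The standard performance counters are enough for a rough look:

# Sample throughput on every interface every two seconds, ten times
Get-Counter -Counter '\Network Interface(*)\Bytes Total/sec' -SampleInterval 2 -MaxSamples 10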


163 thoughts on "Hyper-V and Network Teaming: Understanding the Link Speed"

  • hassan says:

    hi
    i am having a problem with my network
    i have 4 nics
    2 are realtek gbe and 1 is intel 82575eb (dual port)
    i have created a team of 1 realtek and 2 ports of the intel 82575eb many times and reformatted the server many times, obtained drivers from the hardware manufacturer and also installed all updates. tried windows 7 and 2012 r2.
    with a single realtek nic, the connection speed is 1gbps and all the diskless clients boot very fast, but if i use a team of 1 realtek and 2 intel 82575eb ports, or only the 2 ports of the intel 82575eb, it really sucks. 1 client boots in 4 minutes, while with just the single realtek 1gbps it takes around 1 minute and 30 seconds or less to boot all 25 units.
    please help me out
    waiting for your kind response.

  • Stefan Di Nuto says:

    great article, thanks!
    I have made a team with 2x10Gb on all my hosts and disabled all the 1Gb NICs. When I run VMs like an Exchange DAG and SQL Always On AG, should I still make a virtual NIC in the VMs for heartbeat and intracluster traffic, or does this not matter?
    If I have this NIC team with 20Gb and want to implement iSCSI, should I dedicate one of the 1Gb NICs to iSCSI? A pity that for iSCSI I would then only have 1/2Gb in that case.
    thanks.

    • Eric Siron says:

      I would make a pair of vNICs for iSCSI and configure them for MPIO.
      It wouldn’t hurt to add some vNICs to the VMs for those additional features. Added pathways give the system more options.
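
      A rough sketch of that, with placeholder vNIC names on the switch from the article:

      # Two dedicated host vNICs for iSCSI traffic; names are examples
      Add-VMNetworkAdapter -ManagementOS -SwitchName 'vSwitch' -Name 'iSCSI-1'
      Add-VMNetworkAdapter -ManagementOS -SwitchName 'vSwitch' -Name 'iSCSI-2'
      # With the Multipath-IO feature installed, let MPIO claim iSCSI devices
      Enable-MSDSMAutomaticClaim -BusType iSCSI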

  • Stefan says:

    Hi Eric. Thanks for the iSCSI answer; now all is clear. Today I made a new vNIC (Add-VMNetworkAdapter) on the host. However, how can I use this vNIC interface in a guest cluster like an Exchange DAG or SQL Always On for the cluster network? I don’t see the network from a VM view.

    • Eric Siron says:

      One of us is confused about what you’re trying to do. You don’t use host vNICs in guests. You add vNICs directly to guests.

  • stefan says:

    Hi Eric. Thanks again. I checked it. Just add Hardware->NIC in the VM. Define a VLAN for heartbeat. Same on all Hosts.
    HPE still recommends having separate physical adapters on the host instead of converged.
    Regards

  • Vitaliy says:

    Hello,

    What about load balancing in the management OS? Like, if I joined 2x10Gb ports in a team, then connected this team to a virtual switch, and lastly added two virtual network adapters to the management system – will network traffic from SMB Multichannel on the management OS be balanced between both adapters (teaming mode is set to Dynamic)?

  • NewGuy says:

    I had the same comment of “I only added a virtual switch!”, which is what made me post… I truly believe that there might be something wrong with the Hyper-V Virtual Ethernet Adapter.
    I’m bringing a new (to me) Dell R930 server online and I just want the “warm fuzzies” that the network is truly operating as fast as possible. I have 10GB switches and 10GB cards in 3 other Dell servers and I really haven’t pushed them to see for myself that *anything* transfers at or near the 10GB speed that all of this hardware promises.
    So, I began toying with iPerf and NTttcp in order to prove it. In the process, I discovered the infamous VMQ problems with the BroadCom chipset. I also discovered that some 10GB cards simply like PCIe x8, instead of x4, slots to work at maximum.
    My biggest discovery, IMHO, came when I had finally resolved that all was well with the new server (and the old one I was testing against) and I was consistently getting a saturated 9.9Gbps with BOTH sending and receiving using NTttcp… Here we go with “all I did was add a virtual switch” on the new server, and then I re-ran my tests. I found that SENDING data out at 9.9Gbps was still working, but RECEIVING data through the new Virtual Switch (with NO VMs even defined on the machine, let alone assigned to it) suddenly capped out around 6.5-6.6Gbps. I removed the vSwitch and “ta da”, all was 9.9 again. I tried to tweak the Virtual Adapter, but there aren’t really that many parameters that can be changed, so that seemed to be a dead end. All the changes to the Adapter only made things worse, so I stopped. I noticed during all of this that the Driver for the Virtual Adapter is dated 2006. I’m using Server 2016 as the host on both sides with all updates applied.
    Maybe there’s something else I can do or maybe, as this article suggests, “it is what it is” and this is all you can expect to get out of it.

    • Eric Siron says:

      With 10GbE adapters on 2016 you want to use a switch-embedded team instead of the traditional team/switch combo.
      6.5Gbps is about the maximum that a single CPU core can process, so when you get that speed in NTttcp, then you know that it’s limiting your traffic to one core for whatever reason. Once you start putting VMs on it, it will have a much easier time finding ways to balance the load and your overall capability will be near what the hardware can do. You can see it by running multiple simultaneous NTttcp transmissions inside multiple guests.
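
      For reference, a minimal switch-embedded team on 2016 might look like this (adapter names are placeholders):

      # SET combines the team and the virtual switch into one object; members must match in speed
      New-VMSwitch -Name 'SETSwitch' -NetAdapterName 'NIC1', 'NIC2' -EnableEmbeddedTeaming $true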

      • NewGuy says:

        I have done several migrations lately and it rarely goes above 2.0-2.2Gbps with everything going. For our environment, SET and the other configuration changes would probably go unnoticed. Unless there is a simpler way to achieve the maximum, I’m leaving it alone since migrations are rare and usually done after hours anyway.

        Thanks for your comments and your other articles!

        • Eric Siron says:

          Migrations are kind of a different beast because of all the cross-host synchronization that needs to be done. Higher-end cards can handle it (as in, iWarp/RoCE cards). In 2019 you can do additional tuning for RSS and vRSS with host vNICs. But, I’m not sure that the effort will ever pay out in the situation that you describe.

  • Oliver says:

    Hi there,

    I have 8 NICs. 2 are directly attached to storage where MPIO is configured.

    That leaves 6 NICs. 2 of them are 10Gbps.

    Is it possible to put all of the adapters into a SET converged configuration?
    What is the best config with these 4x1Gbps and 2x10Gbps adapters in a two-node cluster?

    Thanks,
    Oliver

    • Eric Siron says:

      SET requires all adapters to have the same speed, and usually the same manufacturer as well.
      The best configuration is to disable the 4 gigabit adapters and converge over the 10GbE.

  • Marcin says:

    Recently I’ve been trying to do teaming the right way. I read lots of articles and visited many forums. Most suggest the approach you also seem to embrace: teaming at the host level. However, I found an interesting how-to at MS: https://docs.microsoft.com/en-us/windows-server/networking/technologies/nic-teaming/create-a-new-nic-team-on-a-host-computer-or-vm
    They state teaming should be done at the VM level, which after a second thought makes sense to me. What do you make of it all?

    • Eric Siron says:

      Nope, nope, nope, nope.
      Teaming in guests only if you want to use SR-IOV on the host.
      Guest teaming isn’t bad if you only have a few VMs, but it’s a micro-management nightmare and doesn’t do anything positive. Except when you want to use SR-IOV.
