Switching Hyper-V CSV Clustering from Software Teaming to Hardware Isolation

    I wrote this article for several reasons. There are many reasons why you might want to go from a software team to hardware, or vice versa, and I make no comment as to why you would do either.

    That is to say, there are some circumstances where you are better served with a software team, and some where you will see better results with hardware NICs and the NDIS driver.

    I will leave the why up to you. However, if you have the need, here is a rough guide to get you through converting the network of a software-teamed 2012 or 2016 cluster to hardware NICs and isolation.

    This article is high on text and low on screenshots. All I can do is apologize in advance.

    Some additional titles:

    Converting the Windows Cluster from Software Teaming to Best Practices, Using Hardware Isolation.

    Switching from Software Teaming to Hardware Isolation for CSV Clusters.

    My Software-Teamed Cluster Performs Poorly.

    Two Standards in Two Documents

    Why would you perform this change? You would do this if you're using 1GB NICs for your cluster and software teaming is not working well for you.

    I have found that the network changes around 2012 may be better suited to 10GB NICs than 1GB NICs. Aggregating 1GB NICs may not always be the best way to set up a cluster, depending on the needs of the customer and the type of software running.

    I find that the original network best practices suit some customers from a maintenance perspective, as well as for ease of understanding, when the IT staff are not full-time network admins.

    This article requires that your storage networks be on a separate switch from the clustering networks.

    Furthermore, the cluster networks should ideally be isolated with VLANs, by subnet, to keep traffic from each subnet separate. If the network is flat, that need not stop you from moving to this setup; just know that some isolation is recommended. Using a separate subnet per virtual network, however, is still a must.

    The design of your network should be based on the initial configuration guide for Hyper-V networks for a CSV cluster (2010).

    If you do a Bing search for “network guide for Hyper-V clustering”, you will find this article:

    Hyper-V: Live Migration Network Configuration Guide (2010)

    Then Bing “cluster guide for Hyper-V CSV cluster networks” and you should find:

    Network Recommendations for a Hyper-V Cluster in Windows Server 2012 (2013)

    The 2010 document lays out the premise of Windows clustering, with the goal of optimizing CSV Live Migrations. The 2013 document shows the embedded method of isolating the traffic within the teamed interfaces.

    The Opinion Section

    Having worked as an engineer on many Hyper-V cluster cases, I found the main reason for CSV cluster issues was some form of misconfiguration of the network relative to the best practices.

    In 2012, Microsoft added the additional choice of a software team, with LBFO options, using scripting to send the traffic down specific virtual network adapters.

    The changes in Server 2012 and 2016 have altered the perception of what can be done with the new technology. I now see the majority of Hyper-V networks using Microsoft teaming, which is fine, as long as you are also using virtual LANs to create isolation along with the team. The whole message of Server 2012 and 2016 is teaming and isolation; what I am seeing all too often is just the teaming portion.

    For this reason, I am bringing back the original specifications to make the statement that isolation is the key element of a CSV cluster. Teaming can be fine, but proper isolation comes first.

    This means that if you are not a network person and don’t have access to a network department to set things up for you, then you may want to stick with the original guidelines, which I am about to present. In this article, we are going to convert a cluster from a Microsoft-teamed network to a vanilla hardware-isolated cluster network.

    The assumption here is that all of your network adapters have been thrown into a team or two, and that no VLANs were created to isolate your traffic. If VLANs were created, simply use PowerShell to delete them, changing commands such as New-NetLbfoTeam to Remove-NetLbfoTeam, and so on.
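As a rough sketch only, removing an existing LBFO team might look like the following. The team name "VMTeam" and the VLAN ID are placeholders for whatever exists in your environment, not values from this article.

```powershell
# List the teams and their members first, so you know exactly what you are deleting.
Get-NetLbfoTeam
Get-NetLbfoTeamMember

# Remove any VLAN-tagged team interface (VLAN 10 here is only an example),
# then remove the team itself, which frees the physical NICs.
Remove-NetLbfoTeamNic -Team "VMTeam" -VlanID 10
Remove-NetLbfoTeam -Name "VMTeam" -Confirm:$false
```

Deleting the team returns the member NICs to the operating system as ordinary physical adapters, which is exactly the state the rest of this guide starts from.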


    Words of Caution

    You will start on a node that is not being used by the cluster. We will make the changes one node at a time; when the first node is done, move to the next.

    Please understand, this is a major change. You will likely have to take an outage to get this done across the whole cluster. I would make the changes on the first node (Node 1), then shut all cluster nodes down and bring up the node you converted. This will now be the primary cluster node, and as you update each additional node, it will come online into the cluster represented by your Node 1.
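To take a node out of service cleanly before working on it, something like the following sketch can be used (the node name is an assumption; -Drain requires Server 2012 or later):

```powershell
# Drain the roles off the node you are about to convert, then stop its cluster service.
Suspend-ClusterNode -Name "Node1" -Drain -Wait
Stop-ClusterNode -Name "Node1"

# ...perform the network conversion on Node1, then bring it back into the cluster:
Start-ClusterNode -Name "Node1"
Resume-ClusterNode -Name "Node1"
```

Draining first keeps running VMs off the node while you tear its networking apart.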

    How to Change Your Cluster, One Node at a Time

    So here are the steps:

    Pre-Configuration

    1. Run ipconfig /all > c:\ipconfig.txt on all cluster nodes.
    2. Pull the MACs of the software-teamed NICs in Server Manager -> NIC Teaming -> Teams.
    3. Save these files, even placing them in a spreadsheet, so you can match up the NICs you will be using. You need to make sure that you understand which NIC on each server will play each of the roles: VMNIC, Live Migrate, CSV, Management, Storage, and Replication.
    4. The columns of your spreadsheet are NIC Name, NIC Description, MAC Address, Old IP Address, New IP Address, Cluster Workload, and VLAN (if necessary).
    5. Remember not to record NICs that have parentheses around them or are multiplex adapters. These will be going away; we will rely only on physical adapters.
    6. The rows of your spreadsheet are as follows:
      1. NODE X
        1. VMNIC (ex. no subnet)
        2. LIVE MIGRATE (ex. 10.10.11.x)
        3. CSV (ex. 10.10.12.x)
        4. HOST (ex. 10.10.13.x)
        5. REPLICATION (ex. 10.10.14.x)
  • Your subnets need to be assigned per #6 above. The VMNICs won’t have an IP because they will all be dedicated to the Hyper-V switch. The easiest way to express this is that every Live Migrate NIC will be on the 10.10.11 network, every CSV NIC on the 10.10.12 network, every host NIC on the 10.10.13 network, and every Replication NIC on the 10.10.14 network. Every host will have a dedicated NIC for each of these networks.
  • Replication, above, would be ideal. If you don’t have enough NICs for this network, that’s OK. This NIC may also be called a storage NIC; if so, it will have Client for Microsoft Networks disabled and Register in DNS unchecked.
  • The Cluster Workload column of your spreadsheet can be populated from the workloads section below.
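The inventory steps above can be sketched in PowerShell. This is a hedged example, not the article's own script; the output paths are assumptions, and it simply dumps the physical adapter names, descriptions, and MACs into a CSV you can paste into your spreadsheet:

```powershell
# Capture the classic ipconfig output for reference (step 1).
ipconfig /all > C:\ipconfig.txt

# Export physical NIC names, descriptions, and MAC addresses for the spreadsheet (steps 2-4).
# -Physical skips the multiplex/virtual adapters the article says not to record.
Get-NetAdapter -Physical |
    Select-Object Name, InterfaceDescription, MacAddress, Status |
    Export-Csv -Path "C:\$env:COMPUTERNAME-nics.csv" -NoTypeInformation
```

Run it on every node, then merge the per-node CSVs into the single spreadsheet described above.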

Hopefully, these tidbits have helped you prepare for the actual work, because here we go. This will need to happen on every cluster node.


    1. Go into every VM and set the virtual NIC to Not Connected.
    2. Remove all network adapters from the software teams.
    3. Delete the virtual Hyper-V network, if external.
    4. Verify that all virtual NICs are gone. Restart the server if they are not.
    5. Label 5 (or more) NICs as follows (minimum 4):
      1. VMNIC
      2. Live Migrate
      3. CSV
      4. Management
      5. Replication
      6. Storage1, 2, etc.
    6. Go to Programs and Features and right-click the network adapter management application.
      1. Select the “Change” option.
      2. Add NDIS if Broadcom, or whatever the four-letter acronym is for the advanced service in your NIC brand’s installer.
    7. Place an IP, subnet mask, DNS, and gateway on the Management NIC.
    8. Place only an IP and subnet mask on the storage NICs. Disable Client for Microsoft Networks, and uncheck Register in DNS.
    9. Set the IP and subnet mask for Live Migrate, CSV, and Replication.
    10. Re-create the Hyper-V network using the VMNIC. It will be external and dedicated to the VMs (clear the checkbox that lets the management operating system share the adapter).
    11. For 1GB NICs, go to every NIC in Device Manager and disable VMQ. If 10GB NICs, leave it enabled.
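Steps 7 through 11 can be sketched in PowerShell as follows. The interface names and addresses are examples that match the spreadsheet layout above, not values from your environment, and the switch name "VMSwitch" is an assumption:

```powershell
# Label a physical NIC by its role (repeat for each NIC, per your spreadsheet).
Rename-NetAdapter -Name "Ethernet 2" -NewName "CSV"

# Management NIC: full IP, gateway, and DNS (step 7).
New-NetIPAddress -InterfaceAlias "Management" -IPAddress 10.10.13.21 -PrefixLength 24 -DefaultGateway 10.10.13.1
Set-DnsClientServerAddress -InterfaceAlias "Management" -ServerAddresses 10.10.13.5

# Storage NIC: IP and subnet only; no MS client binding, no DNS registration (step 8).
New-NetIPAddress -InterfaceAlias "Storage1" -IPAddress 10.10.15.21 -PrefixLength 24
Disable-NetAdapterBinding -Name "Storage1" -ComponentID ms_msclient
Set-DnsClient -InterfaceAlias "Storage1" -RegisterThisConnection $false

# Cluster-only NICs: IP and subnet, no gateway or DNS (step 9).
New-NetIPAddress -InterfaceAlias "Live Migrate" -IPAddress 10.10.11.21 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "CSV" -IPAddress 10.10.12.21 -PrefixLength 24

# External switch dedicated to the VMs (step 10), and VMQ off for a 1GB VMNIC (step 11).
New-VMSwitch -Name "VMSwitch" -NetAdapterName "VMNIC" -AllowManagementOS $false
Disable-NetAdapterVmq -Name "VMNIC"
```

The GUI steps and this sketch accomplish the same thing; use whichever fits your habits, but keep the per-subnet layout identical on every node.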

    That’s it. Once you have done the second machine, make sure you can ping across to the corresponding NICs on Host 1.

    As long as you pass the ping test, you may move on to changing the other nodes in the cluster.

    Once you have completed the changes and all machines ping properly, you may restart the nodes in a round-robin style if you have any kinks.

    The only thing left to do is to set your priorities for the networks. The priorities are in the center pane of Failover Cluster Manager; choose Networks in the left pane.


    The settings should be changed to the following:

    1. VMNIC – none (dedicated to VMs)
    2. Live Migrate – cluster communication only
    3. CSV – cluster communication only
    4. Management – cluster and client
    5. Replication – cluster and client
    6. Storage1, 2, etc. – N/A, no cluster communication
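The same settings can be applied from PowerShell. This is a sketch under assumed network names; use whatever names show up under Networks in Failover Cluster Manager. Note the VMNIC network will not appear at all, since the host has no IP on it:

```powershell
# Role values: 0 = no cluster communication, 1 = cluster only, 3 = cluster and client.
(Get-ClusterNetwork -Name "Live Migrate").Role = 1
(Get-ClusterNetwork -Name "CSV").Role = 1
(Get-ClusterNetwork -Name "Management").Role = 3   # client access flows through here
(Get-ClusterNetwork -Name "Replication").Role = 3
(Get-ClusterNetwork -Name "Storage1").Role = 0     # storage: no cluster communication
```

Because the cluster groups NICs by subnet, setting the role once applies to that network on every node.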

    And that’s it. Once you have that established, your cluster will now be working on physical NICs only.

    Now, with the NICs you have left, you may go back and add selective hardware teaming where it makes sense. No teaming on storage NICs. Honestly, it may be better to wait and see if you have any areas of weakness. For example, if the VMs seem slow during intense workdays, you may remove the VMNIC from Hyper-V, team it, and then put the multiplexor back in as the VMNIC used for Hyper-V VMs.

    I hope this has been helpful. Please remember that this works automatically because the cluster is able to identify NICs on different servers and match them to the same subnet. The NICs from the various servers will show up together in the Networks tab because they are on the same subnet. As such, the cluster knows that they will be performing a specific job in the cluster, based on the settings laid out in this article.

    This is not better or worse than software teaming, but it may be the right choice for some IT teams who are more familiar with hardware NIC assignments as part of their job description.

    Also, please understand that the days of teaming with 1GB NICs are coming to a close. Very quickly, 10GB NICs are becoming the standard. The difference between these setups becomes moot when you’re no longer using 1GB adapters for clustering.

    Thank you,

    Have a nice Day.
