VMware Multiple NIC vMotion generates unicast flooding in the network

Blog post updated 2013-07-18
The problem described in this blog post has been fixed in the ESXi 5.0 U2 and ESXi 5.1 U1 releases, but the original blog post is left unchanged.

I have been using the VMware multiple NIC vMotion configuration, VMware KB 2007467, where possible and needed, e.g. when running a high consolidation ratio per ESXi host or running large virtual machines.

I am aware of the limitations of running multiple NIC vMotion, e.g. on ESXi hosts with 2×10 Gbps NICs when the customer requires LACP configuration in the environment wherever possible.
A multiple NIC vMotion setup requires the port group "Failover Order" to be configured with active and standby adapters (for the vmnics or dvUplinks), which is not supported when running the "Load Balancing" policy "Route based on IP hash".
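On a standard vSwitch the active/standby failover order per vMotion port group can be set from the ESXi shell roughly like this. This is a minimal sketch: the port group names vMotion-01/vMotion-02 and the uplinks vmnic0/vmnic1 are assumptions for illustration, not taken from KB 2007467.

```shell
# Port group 1: vmnic0 active, vmnic1 standby (names are hypothetical)
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=vMotion-01 \
    --active-uplinks=vmnic0 --standby-uplinks=vmnic1

# Port group 2: mirrored order, so each vmknic prefers a different uplink
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=vMotion-02 \
    --active-uplinks=vmnic1 --standby-uplinks=vmnic0
```

On a distributed switch the equivalent is done per dvPortgroup in the vSphere (Web) Client under Teaming and Failover.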

Some time ago I experienced unicast flooding when using the multiple NIC vMotion setup. This applies to at least vSphere 5.0 and 5.1 (ESXi 5.0 and ESXi 5.1). The ESXi hosts had 2×10 Gbps NICs each, both vMotion VMK virtual adapters were located on the same subnet, and the vMotion network was placed on its own VLAN.

To get rid of the problem we had to remove the multiple NIC vMotion configuration and use a single 10 Gbps NIC for the vMotion process, which in this case is good enough.
The issue has been addressed internally by VMware and will most likely be fixed in vSphere 5.1 U1. Instead of removing the multiple NIC vMotion configuration you could try to adjust the MAC address table aging time-out on the physical switches, but this can (and probably will) cause problems elsewhere in the network infrastructure, so I wouldn't recommend it.

So if you see unicast flooding generated by your multiple NIC vMotion setup, I would advise you to use only one VMK virtual adapter for vMotion. Putting the VMK virtual adapters on different subnets will most likely give you problems, according to VMware KB 2052092.
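Falling back to a single vMotion vmknic can be done from the ESXi shell by removing the second VMK adapter. A minimal sketch, assuming the second vMotion adapter is vmk2 (the interface name is hypothetical; check your own hosts first):

```shell
# List the VMK adapters to identify the vMotion interfaces
esxcli network ip interface list

# Remove the second vMotion vmknic so only one remains (vmk2 is an assumption)
esxcli network ip interface remove --interface-name=vmk2
```

Alternatively, leave the vmknic in place and just untick the vMotion checkbox on it in the vSphere Client, which is easier to revert once the fix is released.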
