Nutanix AHV Network Configuration – Change Load Balancing

A while ago I published a blog post about how to configure the 10 Gbps network cards (NICs) as the only active NICs for a specific AHV-based Open vSwitch (OVS) bond, which by default is br0-up (previously bond0). Read that blog post here. This blog post is about changing the load balancing from the default active-backup (active/passive) setup to balance-slb.

The reason for this is that I want to take advantage of both 10 Gbps uplinks attached to my OVS bond without having to enable LACP, which is required if you want to use balance-tcp.

However, this still means (same as with the active-backup configuration) that a single VM can only take advantage of 1×10 Gbps at any given time, but the total active NIC capacity for the AHV host is 2×10 Gbps, e.g.:

  • VM1 uses the first 10 Gbps uplink
  • VM2 uses the second 10 Gbps uplink
  • VM3 …..
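Under the hood, balance-slb hashes each VM's source MAC address (and VLAN) into a bucket, and each bucket is pinned to a single uplink, which is why one VM never exceeds the bandwidth of one NIC. As a rough illustration of how this surfaces in the "ovs-appctl bond/show" output used later in this post (abridged, with made-up values and assumed interface names eth2/eth3; exact fields vary by OVS version):

    ---- br0-up ----
    bond_mode: balance-slb
    next rebalance: 9241 ms
    slave eth2: enabled
        active slave
        hash 87: 1024 kB load    <-- e.g. VM1's MAC landed in this bucket
    slave eth3: enabled
        hash 213: 2048 kB load   <-- e.g. VM2's MAC landed in this bucket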

Changing the load balancing requires you to run commands against the AHV hosts, either remotely from your workstation, directly on an AHV host, or via a Nutanix Controller Virtual Machine (CVM). I’ll use the CVM option since it gives me the ability to run a command against all AHV hosts in the cluster.

Follow the procedure below to verify the existing configuration, change the load balancing configuration, adjust the load balancing rebalance interval, and verify your settings again.

Depending on when and how your deployment was done, the bond interface name can be either br0-up or bond0. Verify the name using the commands below and use that name throughout the rest of this post.
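If you are unsure which name your bond has, you can also list every bond by name from a CVM (a quick sketch; assuming the ovs-appctl bond/list command is available in your AHV’s OVS build):

    # lists each bond on each AHV host together with its type/mode and member interfaces
    hostssh ovs-appctl bond/list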

  • Run any of the following commands to verify your existing settings:
    • hostssh ovs-appctl bond/show br0-up (the picture shows bond0, which is the old name)
      • will show the configuration for all AHV hosts in the cluster
    • ssh root@192.168.5.1 ovs-appctl bond/show br0-up (the picture shows bond0, which is the old name)
      • will show the configuration for the AHV host where the CVM is running, meaning you need to run this command on every CVM in the cluster
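    For reference, a trimmed example of what the before state could look like with the default active-backup mode (illustrative output only, with assumed interface names eth2/eth3; your counters and fields will differ by OVS version):

        ---- br0-up ----
        bond_mode: active-backup
        lacp_status: off
        slave eth2: enabled
            active slave
            may_enable: true
        slave eth3: enabled
            may_enable: true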
  • Run any of the following commands to set the load balancing mode to balance-slb:
    • hostssh ovs-vsctl set port br0-up bond_mode=balance-slb
      • will set the configuration for all AHV hosts in the cluster
    • ssh root@192.168.5.1 ovs-vsctl set port br0-up bond_mode=balance-slb
      • will set the configuration for the AHV host where the CVM is running, meaning you need to run this command on every CVM in the cluster
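    To confirm the mode stuck without reading the full bond/show output, a quick spot check is possible (a sketch, assuming your bond is named br0-up):

        # prints the bond_mode column of the port; expected output: balance-slb
        hostssh ovs-vsctl get port br0-up bond_mode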
  • The default rebalance interval for OVS balance-slb is 10 seconds, and Nutanix recommends using 30 seconds to limit the number of potential migrations of a VM between NIC X and NIC Y.
    With the default rebalance policy you’ll see a maximum value of 10 000 next to the “next rebalance” output from the “ovs-appctl bond/show br0-up” command (the picture shows bond0, which is the old name).

    The “next rebalance” counter starts at 10 000 (milliseconds) and counts down to zero; when it reaches zero a rebalance happens and the counter starts over from 10 000.
    Run any of the commands below to change the OVS balance-slb rebalance interval to 30 seconds (the value is given in milliseconds):

    • hostssh ovs-vsctl set port br0-up other_config:bond-rebalance-interval=30000
      • will set the configuration for all AHV hosts in the cluster
    • ssh root@192.168.5.1 ovs-vsctl set port br0-up other_config:bond-rebalance-interval=30000
      • will set the configuration for the AHV host where the CVM is running, meaning you need to run this command on every CVM in the cluster
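    You can read back just the interval the same way (remember the value is in milliseconds, so 30 seconds = 30000; a sketch assuming the bond is named br0-up):

        # prints the configured rebalance interval; expected output: "30000"
        hostssh ovs-vsctl get port br0-up other_config:bond-rebalance-interval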
  • Verify the new configuration by running one of the following commands:
    • hostssh ovs-appctl bond/show br0-up
      • will show the configuration for all AHV hosts in the cluster (the picture shows bond0, which is the old name)

        As you can see, the times have increased and show 41 sec for all AHV hosts.
    • ssh root@192.168.5.1 ovs-appctl bond/show br0-up
      • will show the configuration for the AHV host where the CVM is running, meaning you need to run this command on every CVM in the cluster
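If everything took effect you should see bond_mode: balance-slb and a “next rebalance” countdown that can now climb above the old 10 000 ms ceiling, for example (abridged, illustrative values only):

    ---- br0-up ----
    bond_mode: balance-slb
    next rebalance: 26120 ms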

This was tested with AHV 20160925.43 and AOS 5.0.2.
