vSphere NFS and VAAI monitoring

A couple of months ago I did a vSphere implementation where the back-end storage system was presented to the vSphere environment via NFS. Since my move to Nutanix this will be more common and is basically the only option today when I perform vSphere implementations. Nutanix presents storage to Hyper-V via SMB and to KVM via iSCSI, so there are other options as well, as I know you are aware.

One step during an implementation is of course to verify that everything works as expected, and this blog post is dedicated to the VMware vSphere APIs for Array Integration (VAAI).
The main functionality of VAAI is to offload certain I/O tasks to the storage system, which means the ESXi host can use its resources for other tasks and the operation completes faster. VAAI was introduced for block storage in vSphere 4.1 and for NFS in vSphere 5.0.
The VAAI primitives are not the same for block and NFS, and apart from the technical differences there is also an operational difference between the two, which I will focus on in this blog post.
There are many articles available covering e.g. VAAI functionality and the VAAI differences between block and NFS, so I will not go into details.

When using block access to your storage system you can verify which VAAI primitives are supported per storage device by running the following command:

  • esxcli storage core device vaai status get

The result will present the status of the VAAI primitives using the following output:

  • ATS Status: supported or unsupported
  • Clone Status: supported or unsupported
  • Zero Status: supported or unsupported
  • Delete Status: supported or unsupported
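
On a device backed by a VAAI-capable array, the command returns output along these lines (the NAA identifier and plugin name below are illustrative, not from my environment):

  ~ # esxcli storage core device vaai status get
  naa.60a98000486e2f65686f537749573042
     VAAI Plugin Name: VMW_VAAIP_NETAPP
     ATS Status: supported
     Clone Status: supported
     Zero Status: supported
     Delete Status: unsupported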

You can use the ESXi esxtop application, or remotely run resxtop, to monitor VAAI statistics via the following procedure:

  • Start esxtop, go to the disk device view, and select the available counters via “VAAISTATS = VAAI Stats” and “VAAISTATS/cmd = VAAI Latency Stats (ms)”
    • esxtop, press u, press f, press O and P, press enter

The VAAISTATS (O) selection will give you three new esxtop columns:

  • CLONE_RD: clone reads done via the VAAI API
  • CLONE_WR: clone writes done via the VAAI API
  • CLONE_F: failed VAAI clone commands, which should be zero
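
If you want to track these counters over time rather than watch them interactively, esxtop batch mode can write all selected counters to a CSV file that you can open in e.g. Windows Perfmon or Excel (the interval and sample count below are just examples):

  esxtop -b -d 10 -n 30 > vaai-stats.csv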

This is fairly simple to monitor, but what about when you connect to the back-end storage via NFS?
As far as I know there are still no esxtop counters you can use to monitor VAAI statistics for NFS.

So this is the procedure I usually follow when verifying the Full File Clone (NAS) VAAI primitive functionality for a vSphere environment using storage accessed via NFS:

  • Make sure the ESXi host has the VAAI plugin installed by running the following command:
    • esxcli software vib list | grep -i VAAI
  • Verify that the NFS Datastore/Datastores support VAAI by running one of the following two commands:
    • vmkfstools -Ph /vmfs/volumes/NFS_Datastore_Name/
    • esxcli storage nfs list
  • Create a virtual machine (VM) on the NFS datastore.
  • Clone the newly created VM to the same NFS Datastore and at the same time monitor the ESXi server log files by running the following command:
    • tail -f /var/log/vpxa.log | grep -i VAAI
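
For reference, the two datastore verification commands above report NAS VAAI support along these lines (the datastore name, NFS server IP and export path are illustrative, and the esxcli columns are abbreviated):

  ~ # vmkfstools -Ph /vmfs/volumes/NFS_Datastore_Name/ | grep -i VAAI
  NAS VAAI Supported: YES

  ~ # esxcli storage nfs list
  Volume Name         Host          Share  Accessible  Mounted  Hardware Acceleration
  NFS_Datastore_Name  192.168.1.10  /ctr1  true        true     Supported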

The tail command will provide you with an output similar to:

2014-06-16T09:55:01.524Z [FFF4CB70 info ‘Libs’ opID=2ab0ba16-7c-6b] VAAI-NAS :: NAS Mapping Used successfully for 32 times
2014-06-16T09:55:19.581Z [68B93B70 info ‘Libs’ opID=2ab0ba16-7c-6b] nutanix_nfs_plugin: Established VAAI session with NFS server
2014-06-16T09:55:19.666Z [68B93B70 info ‘Libs’ opID=2ab0ba16-7c-6b] VAAI-NAS [nutanix_nfs_plugin : /vmfs/volumes/94b4f095-bc73ee34] : CLONE [/vmfs/volumes/b494f095-ab73ee34/vcdx56-clone-001/vcdx56-clone-001-flat.vmdk] succeeded.

The output clearly states that the VAAI plugin was used successfully.
As you can see from the NFS plugin mentioned in the log, I used a Nutanix system for my tests.

If you are aware of another way to monitor VAAI for NFS-attached storage in vSphere, please let me know.
