
Nutanix .Next 2016 Summary

This blog post is my personal summary of what was announced, discussed and presented during the Nutanix .Next conference at the Wynn resort in Las Vegas, 20-22 June 2016. It was a great event and I'm sure you'll find the new stuff really exciting.

The key message during the conference was that Nutanix is the Enterprise Cloud Company that can provide you with all the features and capabilities, not just hyper-converged infrastructure (HCI), you need to build and manage your own elastic (public), predictable (private) or hybrid cloud solution.

The below is a high-level overview, not a complete description of feature functionality.

Run all your virtual and physical workloads on Nutanix with ABS

As you may already know, Nutanix supports VMs running on Nutanix using an in-guest iSCSI connection to a Nutanix Volume Group. This makes it possible for ESXi-based VMs running on Nutanix to use SCSI-based disks instead of NFS-based disks via the hypervisor. Some application vendors do not support running their applications on NFS-based vSphere datastores.

Acropolis Block Service (ABS) takes this even further: Nutanix now supports both VMs and physical servers connecting to the Nutanix Acropolis Distributed Storage Fabric (ADFS). Supported guest operating systems and applications include:

  • RHEL 6+
  • Windows 2008 R2 & Windows 2012 R2
  • Oracle RAC
  • Microsoft SQL Server
  • Microsoft Exchange Server

All the Nutanix ADFS features are supported, and new storage is added automatically when the Nutanix cluster scales out, so no client-side actions are required. The built-in load balancing removes the need for MPIO and ALUA. The following are a few use cases for ABS:

  • When the existing storage solution can be replaced with Nutanix but not all physical servers are end of life.
  • Windows clusters that require SCSI-3 persistent reservations for shared storage.
  • iSCSI for Microsoft Exchange Server.
  • Shared storage for Oracle RAC environments.
  • Windows Server Failover Clustering (WSFC).
  • Shared storage for Linux-based clusters.
  • Stand-alone physical servers with any of the above requirements.
  • Licensing requirements that force organisations to stay with physical servers connected to remote storage.
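
To make the use cases above a bit more concrete, here is a minimal sketch of provisioning a volume group for an external host via the Prism REST API. The endpoint path, payload fields, names and the initiator IQN are my own assumptions for illustration, not the documented ABS API, so verify against the Nutanix REST API reference before using anything like this.

```python
# Hypothetical sketch: create a volume group for an external iSCSI client
# via the Prism v2 REST API. Endpoint and payload fields are assumptions.
import requests

PRISM = "https://prism.example.local:9440"   # hypothetical Prism address
AUTH = ("admin", "secret")                   # replace with real credentials

payload = {
    "name": "oracle-rac-vg",                 # hypothetical volume group name
    "disk_list": [{"create_spec": {"size_mb": 512000}}],
    # IQN of the physical server that will log in to the volume group
    "attached_iqn_list": ["iqn.1994-05.com.redhat:rac-node1"],
}

resp = requests.post(
    f"{PRISM}/PrismGateway/services/rest/v2.0/volume_groups",
    json=payload,
    auth=AUTH,
    verify=False,   # lab only; use proper certificates in production
)
resp.raise_for_status()
print(resp.json())
```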

Self-Service

So there will be a built-in way in the Nutanix solution to create Projects and assign users (via Active Directory/LDAP) and resources to the projects. The resources include CPU, RAM, storage and network. This will help simplify deployment and delivery of both VMs and applications. Project admins will grant access to specific images, which the self-service end users can deploy at any time based on their quota.

The Self-Service Projects will be familiar to people who know e.g. OpenStack, and for people familiar with VMware technologies you can compare a Project with a vRealize Automation Tenant.
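
To illustrate the Project concept, here is a small sketch of how quota enforcement could look. The class and field names are invented for illustration and are not the Nutanix data model.

```python
# Conceptual sketch of a Self-Service Project with quotas.
# Names and fields are invented for illustration, not the Nutanix data model.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Project:
    name: str
    vcpu_quota: int
    ram_gb_quota: int
    storage_gb_quota: int
    users: List[str] = field(default_factory=list)           # AD/LDAP users
    allowed_images: List[str] = field(default_factory=list)  # deployable images
    vcpu_used: int = 0
    ram_gb_used: int = 0
    storage_gb_used: int = 0

    def can_deploy(self, image: str, vcpu: int, ram_gb: int, disk_gb: int) -> bool:
        """Return True if the request fits the project quota and image list."""
        return (
            image in self.allowed_images
            and self.vcpu_used + vcpu <= self.vcpu_quota
            and self.ram_gb_used + ram_gb <= self.ram_gb_quota
            and self.storage_gb_used + disk_gb <= self.storage_gb_quota
        )


dev = Project("dev-team", vcpu_quota=32, ram_gb_quota=128, storage_gb_quota=2048,
              users=["alice", "bob"], allowed_images=["centos7-base"])
print(dev.can_deploy("centos7-base", vcpu=4, ram_gb=16, disk_gb=100))  # True
```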

This is definitely something I'll blog more about in the future, so stay tuned.

AHV Improvements

Two major improvements were announced regarding AHV, formerly known as Acropolis Hypervisor, which was released 18 months ago and, in short, is a Nutanix hypervisor built on KVM. VM, network and virtual machine HA management are already integrated in PRISM Element, but additional configuration capabilities were announced. In addition, some behind-the-scenes improvements have been added.

Dynamic Scheduling Plus Affinity and Anti-Affinity Rules

The intelligent scheduling will proactively identify both CPU and storage contention and automatically, if/when needed, move VMs between the available AHV hosts in the cluster. The intelligent scheduling honors the scheduling rules.

Initial VM scheduling/placement, meaning AHV places VMs on an AHV host based on current utilisation, has been available since AHV went GA 18 months ago. However, configuration options will soon be available for the following:

  • VMs on same AHV Host – Useful for e.g. application isolation and when VM1 and VM2 are dependent on each other and communicate a lot. In that case, placing them on the same AHV host will increase performance.
  • VMs on different AHV Hosts – For application availability purposes, e.g. making sure domain controller VM1 and domain controller VM2 never run on the same AHV host.
  • VM to AHV Host – This is good from e.g. a licensing perspective, and in a mixed AHV cluster you can make sure VM X runs on AHV host Y where faster CPUs are available.

These configuration options are much like the vSphere Cluster DRS rules.
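
As a rough mental model of how an anti-affinity rule constrains placement, here is a small, purely illustrative sketch; it is not Nutanix code and the rule format is invented.

```python
# Illustrative sketch of placement that honors an anti-affinity rule.
# Not Nutanix code; data structures are invented for the example.
from typing import Dict, List, Optional

hosts: Dict[str, List[str]] = {            # host name -> VMs currently placed on it
    "ahv-1": ["dc01"],
    "ahv-2": [],
    "ahv-3": ["web01"],
}

anti_affinity_groups = [{"dc01", "dc02"}]  # these VMs must never share a host


def pick_host(vm: str) -> Optional[str]:
    """Return the least-loaded host that does not violate an anti-affinity rule."""
    for host in sorted(hosts, key=lambda h: len(hosts[h])):
        peers = set(hosts[host])
        if any(vm in group and peers & (group - {vm}) for group in anti_affinity_groups):
            continue                        # placing vm here would break the rule
        return host
    return None                             # no valid host found


target = pick_host("dc02")                  # picks ahv-2, not ahv-1 where dc01 runs
print(target)
```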

AHV Network Visualization

One thing that has not been exposed in PRISM Element is the network connections between VMs, bridges, bonds and physical switches. This has now been solved, and there is a clear view of how the AHV network, built on Open vSwitch, comes together.

You will have a selection of multiple views, including one that shows the Nutanix CVM and user VM network connections all the way from OVS bridge and OVS bond to the physical switch.


In addition to the layout, the health of the connections, the status of NICs and ports, and VLAN misconfigurations will be shown. LLDP/CDP will be used to discover and validate the network topology, and SNMP to get configuration information from the switches.
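
As a rough illustration of the kind of data such a view could be built from, here is a small sketch that assembles a host-to-switch topology from LLDP-style neighbour records; the record format is invented for the example and is not the Nutanix implementation.

```python
# Illustrative sketch: build a simple AHV-host-to-switch topology from
# LLDP-style neighbour records. Record format is invented for the example.
from collections import defaultdict

# Each record: (local host, local NIC, remote switch, remote port), roughly
# what LLDP/CDP neighbour discovery might report.
lldp_neighbours = [
    ("ahv-1", "eth0", "switch-a", "Ethernet1/1"),
    ("ahv-1", "eth1", "switch-b", "Ethernet1/1"),
    ("ahv-2", "eth0", "switch-a", "Ethernet1/2"),
]

topology = defaultdict(list)
for host, nic, switch, port in lldp_neighbours:
    topology[host].append({"nic": nic, "switch": switch, "port": port})

for host, links in topology.items():
    print(host)
    for link in links:
        print(f"  {link['nic']} -> {link['switch']} {link['port']}")
```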

All-Flash Extended Capabilities

Nutanix has had its all-flash array, the NX-9040, for quite some time, but now the all-flash capability has been extended to the NX-3000 series and is expected to be available on several NX G5 models in the future. With Nutanix data locality we make sure a VM's data is on the Nutanix node where it runs, which is really good for performance and also minimizes network traffic. Data locality will become even more important when new flash technologies such as NVMe and 3D XPoint become mainstream, since it does not take much from these devices to make the network a bottleneck.

HCI to Containers via Acropolis Docker Services (ACS)

With the next release of AOS, 4.7, Nutanix will release a Docker volume driver, written in Go, which runs as a Docker volume extension. ADFS can then provide persistent, yes you heard right, storage to containers directly via iSCSI, bypassing the hypervisor, meaning container data will still be available if you destroy or lose a container or host. The presented storage behaves as an intermediate container that runs in privileged mode and provides ADFS features like data locality, data saving features and data protection features.
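
To give a feel for how a Docker volume driver is consumed, here is a minimal sketch using the Docker SDK for Python; the driver name and option names are placeholders, not the actual Nutanix plugin.

```python
# Minimal sketch of consuming a Docker volume driver from Python.
# The driver name "nutanix" and its options are placeholders, not the real plugin.
import docker

client = docker.from_env()

# Ask the (hypothetical) volume driver for a persistent volume backed by ADFS.
volume = client.volumes.create(
    name="pgdata",
    driver="nutanix",                      # placeholder driver name
    driver_opts={"sizeGB": "50"},          # placeholder option
)

# Run a container that keeps its data on the persistent volume, so the data
# survives if the container itself is destroyed.
container = client.containers.run(
    "postgres:9.5",
    detach=True,
    volumes={volume.name: {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
print(container.short_id)
```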

Manage ESXi based VMs via Nutanix PRISM

“Can we manage ESXi-based VMs via Nutanix PRISM Element the same way we manage AHV-based VMs via Nutanix PRISM Element?” This is a question I get, not every week, but at least a few times each month. Until now the answer has been “No, you can’t, and I don’t know if we’re ever going to allow it either.”

Well, now Nutanix is taking its hypervisor-agnostic solution to another level, since it will be possible to do VM CRUD operations, launch the console, manage snapshots and convert VMs if needed from Nutanix PRISM.

Based on these capabilities, you understand that vSphere cluster, vSphere network and all other non-VM-related operations still require the vSphere Web Client and thereby vCenter Server. However, once these things are set up, you can use one single interface for managing both the Nutanix cluster and the ESXi-based VMs.

Nutanix thereby provides the same interface for managing VMs on two different hypervisor solutions, in addition to Nutanix cluster and AHV management.
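
As an idea of what driving such VM operations programmatically might look like, here is a hedged sketch against the Prism v2 REST API; the endpoint paths, payload and UUID are assumptions based on the general shape of the API, so verify them against the official REST API explorer.

```python
# Hedged sketch: list VMs and power one on via the Prism v2 REST API.
# Endpoint paths and payload fields are assumptions; verify in the API explorer.
import requests

PRISM = "https://prism.example.local:9440"   # hypothetical Prism address
AUTH = ("admin", "secret")
BASE = f"{PRISM}/PrismGateway/services/rest/v2.0"

# List VMs known to the cluster.
vms = requests.get(f"{BASE}/vms", auth=AUTH, verify=False).json()
for vm in vms.get("entities", []):
    print(vm.get("name"), vm.get("uuid"))

# Power on a VM by UUID (assumed endpoint and payload).
vm_uuid = "00000000-0000-0000-0000-000000000000"   # placeholder UUID
requests.post(
    f"{BASE}/vms/{vm_uuid}/set_power_state",
    json={"transition": "ON"},
    auth=AUTH,
    verify=False,
)
```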

 


Maybe this is just the first step in bringing the vSphere management features into Nutanix AOS, but I guess the future will tell.

Microsoft CPS Standard on Nutanix

Nutanix is Windows Server hardware certified for 2012 and 2012 R2, Microsoft Private Cloud Fast Track validated, and a CPS Standard alliance partner. Microsoft CPS is based on Windows Server 2012 R2, System Center 2012 R2 and Windows Azure Pack. Microsoft CPS can be deployed via Nutanix PRISM Element to build an Azure-consistent hybrid cloud. As an alternative, a custom approach can be used to build a private cloud solution based on Windows Azure Pack, Windows Server and Hyper-V.