
Acropolis Base Software 4.6 released


So yesterday, 2016-02-22, Nutanix released Acropolis Base Software version 4.6, formerly known as Nutanix Operating System (NOS). Even though it is just a dot release, it contains a lot of feature and performance enhancements as well as brand new features.

This is the next step in the Nutanix journey to make the cloud invisible, and it continues to make infrastructure management easier and easier.

I will cover some of the new features in separate blog posts over the next few days, but this post will at least provide you with an overview of what is included.

More 1 click upgrade goodness

As you might know, prior to 4.6 we were able to use the Prism 1 click upgrade functionality for Acropolis Base Software, the hypervisor, disk firmware and Nutanix Cluster Check (NCC), and now the following components are added to this much appreciated operational feature:

  • Foundation
  • BIOS
  • BMC


Volume Groups

Volume groups were introduced in Acropolis Base Software 4.5, and the initial reason was to add support for applications, read Microsoft Exchange, that were not supported on ESXi when NFS based storage was used. Volume groups have also been used for e.g. Microsoft Failover Clusters and Oracle Real Application Clusters (RAC), and they provide iSCSI (Internet Small Computer Systems Interface) access from the VM to the Nutanix cluster storage.

A volume group is a logical entity that includes one or multiple disks used by one or multiple VMs. The disks are placed on an existing container and are managed separately from the VM(s) accessing them.
A volume group is the iSCSI target, and each virtual machine disk in the volume group, which resides on the Distributed Storage Fabric (DSF), is a LUN seen from the VM.
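
To make it concrete, attaching a volume group from a Linux guest could look roughly like this using the standard open-iscsi tools. The portal IP and target IQN below are placeholders, not values from this release:

    # Discover the iSCSI targets exposed by the Nutanix cluster
    # (10.0.0.50 is a placeholder for a CVM IP reachable from the VM)
    iscsiadm -m discovery -t sendtargets -p 10.0.0.50:3260

    # Log in to the volume group target (the IQN is a made-up example)
    iscsiadm -m node -T iqn.2010-06.com.nutanix:myvg -p 10.0.0.50:3260 --login

    # The volume group disks now show up as local block devices, e.g. /dev/sdb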

The improvements for 4.6 include:

  • Prism (GUI) management integration
  • Snapshot and disaster recovery capabilities. After a restore operation the volume group will have the same UUID, but if you clone a volume group it will get a new UUID.
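
For the command line minded, creating and attaching a volume group on AHV goes roughly like this in acli. The names, container and size are made-up examples and the options are from memory, so verify them against the built-in acli help:

    # Create a volume group and add a 100 GB disk on container "ctr1"
    acli vg.create myvg
    acli vg.disk_create myvg container=ctr1 create_size=100G

    # Attach the volume group directly to the VM "myvm"
    acli vg.attach_to_vm myvg myvm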

Performance

As with any new version of Acropolis Base Software, performance is better. I’m sure there will be separate blog posts all over the place covering this in more detail in a few days.

Just to give you an idea of what we are talking about, I can mention that Nutanix is now very close to actually saturating the SSDs in their systems for some workloads, and I have seen latency drop by 25%.

As you have heard me say many times, it is not a single peak performance benchmark number that matters; it is about fulfilling the application requirements (real world performance) at a predictable scale, and this is what Nutanix is doing. That said, the 4.6 release also delivers the peak performance numbers that some vendors like to discuss.

Nutanix Guest Tools (NGT)

Nutanix Guest Tools (NGT) was introduced as a tech preview in Acropolis Base Software 4.5 but is now GA. NGT is a software package distributed and maintained by Nutanix, and it is a key component in the Application Mobility Fabric (AMF) that was announced back at the Nutanix .NEXT conference in June 2015. As of today it provides the following functionality:

  • Nutanix Guest Agent (NGA) service for communicating with the Nutanix CVM
  • Nutanix VM Mobility Drivers
    • Makes it possible to move a VM from e.g. the ESXi hypervisor to the Acropolis Hypervisor (AHV).
    • Used when converting an entire cluster from e.g. ESXi to AHV or the other way around. This feature is called DIAL.
    • Cross hypervisor DR projects.
  • Self service File Level Restore (FLR) Command Line Interface so files can be restored from Nutanix snapshots from within the VM.
  • Nutanix VSS Agent + H/W Provider for Windows VMs, meaning you can take application consistent snapshots of Windows VMs running on both AHV and ESXi.
  • Ability to run specific quiesce scripts on Linux based VMs before a snapshot is taken.

NGT requires Port 2047 to be open in the firewall from the VM to the CVM.
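
On a Linux VM with a restrictive outbound firewall, opening that port could look like the following. The CVM IP is a placeholder; the port is the one stated above:

    # Allow the Nutanix Guest Agent to reach the CVM
    # (10.0.0.50 is a placeholder for your CVM/cluster IP)
    iptables -A OUTPUT -p tcp -d 10.0.0.50 --dport 2047 -j ACCEPT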

The communication between the VM and the Nutanix clusters uses SSL certificates plus VM identity based authorisation and each Nutanix cluster is its own certificate authority (CA).

See a more detailed blog post about NGT here, and how to install and manage NGT here.

Replication Factor

When installing a Nutanix cluster you need to determine the availability level for the cluster. This means the number of data copies we store, and the options are:

  • Replication Factor (RF) 2, meaning 2 data copies and 3 metadata copies.
  • Replication Factor (RF) 3, meaning 3 data copies and 5 metadata copies.

This availability configuration is set on the cluster and also on individual containers. However, if you set RF2 on the cluster level you cannot set RF3 on the container level.
RF3 requires at least 5 Nutanix nodes, so if you start with 3 or 4 Nutanix nodes you get RF2, and prior to 4.6 there was no option to increase the RF configuration to 3 when you scaled out. With 4.6 you can change the cluster RF with just a single command. As usual, don’t make changes just because you can; make changes to fulfil a requirement.

The command “ncli cluster set-redundancy-state” is used for this configuration.
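
For example, raising the redundancy state from 2 to 3 would look something like this. The parameter name is from memory, so check the ncli help output before running it:

    # Raise the cluster redundancy state to 3 (RF3)
    ncli cluster set-redundancy-state desired-redundancy-factor=3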

Self Service File Level Restore

NGT is required for the File Level Restore functionality, and the feature makes it possible for a VM administrator to restore files from any available Nutanix protection domain snapshot for the specific VM. The only thing required from a Nutanix cluster administrative perspective is to activate the feature, since it is not activated per default.
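
From within the VM, the flow is roughly the following in the NGT command line shell. The command names are as I remember them from the documentation, and the disk label and snapshot id are made-up examples:

    # List the snapshots available for this VM
    ngtcli> flr ls-snaps

    # Attach a disk from a chosen snapshot; it shows up as a new drive/device
    ngtcli> flr attach-disk disk-label=scsi0:0 snapshot-id=1234

    # Copy the files you need, then detach the disk again
    ngtcli> flr detach-disk attached-disk-label=scsi0:0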

See separate blog post about this here (coming soon)

Volume Shadow Copy Service (VSS)

VSS is supported for VMs running on AHV and ESXi and makes it possible to take application consistent snapshots from a Nutanix perspective. It supports Windows Server 2008, 2008 R2, 2012 and 2012 R2, plus calling freeze and thaw scripts present in /sbin/pre_freeze and /sbin/post_thaw on Linux VMs.
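
A minimal sketch of what such scripts could contain, with "myapp" as a placeholder for whatever service needs quiescing:

    #!/bin/sh
    # /sbin/pre_freeze - runs right before the snapshot is taken.
    # Quiesce the application and flush dirty buffers to disk.
    systemctl stop myapp && sync

    #!/bin/sh
    # /sbin/post_thaw - runs right after the snapshot has been taken.
    systemctl start myapp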

If VSS snapshots are not supported in the VM, there will be ESXi application consistent snapshots for VMs running on ESXi and AHV crash consistent snapshots for VMs running on AHV.

Security Technical Implementation Guide (STIG)

The following STIGs are included in the 4.6 release:

  • AHV STIG
  • NOS OS STIG
  • Prism Web Server STIG
  • Prism Proxy Server STIG
  • OpenJDK JRE 8 STIG

There will be a Nutanix Acropolis Security operational guide available on the Nutanix portal. Nutanix does not call it a security hardening guide since the software is secure already. This is about adding extra compliance that might be required for some customers.

AHV VLAN Trunking

AHV can now pass several VLANs to a single VM NIC if needed. This will increase the number of applications that can run on AHV.
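
The configuration is done per VM NIC via acli, along these lines. The syntax is from memory and the VM, network and VLAN IDs are made-up examples, so verify against the acli help:

    # Add a trunked NIC to the VM "myvm", allowing VLANs 10 and 20
    # in addition to the native VLAN of the network "vlan0"
    acli vm.nic_create myvm network=vlan0 vlan_mode=kTrunked trunked_networks=10,20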

Cross Hypervisor Disaster Recovery

It is possible to run a Nutanix cluster with one hypervisor, replicate the VMs to a Nutanix cluster running another hypervisor and then power on the VMs there.
Windows server and client based operating systems are supported, as well as multiple Linux distributions.

Requirements are the Application Mobility Drivers, which come with NGT, installed in the VM and replication set up between the Nutanix clusters.

Metro Availability

It is now possible to perform disaster avoidance based on Nutanix cluster inode ID management. Less data needs to be included in a data resend when the Nutanix clusters reestablish communication after a communication failure.

Technical preview

There are, as usual, a few technical previews included in a new Acropolis Base Software release and this one is no exception. Remember, do not use tech preview features in your production environment!

Nutanix File Server

Many customers run their existing file servers as VMs on Nutanix, but now there is a Nutanix File Server option as well for AHV based Nutanix clusters. The initial release targets home directories and user profiles, and the Nutanix File Server consists of a minimum of 3 File Server VMs (FSVM) running on AHV.

There is a 1 to 1 relationship between a Nutanix node and an FSVM. The file server provides a high-level namespace, and you can have multiple file servers running on top of each AHV based Nutanix cluster, but one FSVM can only belong to one file server.

The following logical components make up the file service:

  • File server – High-level namespace. Every file server has its own FSVMs
  • Share – Exposed to users
  • Folder – Where data is stored

The following pre-reqs apply:

  • Managed Acropolis network with VLAN tag
  • Active Directory administrator privileges
    • Computer account and DNS records are created when joining the Active Directory.
  • Time sync in place

As of Acropolis Base Software 4.6, SMB 2.x is supported.

Windows or Linux Image Customization

When creating an AHV based VM you can now customise it from Prism using a Cloud-init or Sysprep script. You can upload a script from your PC, call a script located on the Nutanix cluster (DSF) or paste an XML based script in the text box.
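
As a trivial example, a Cloud-init script pasted into the text box could look like the snippet below. The hostname, user and package are of course just illustrations:

    #cloud-config
    hostname: web01
    users:
      - name: admin
        ssh_authorized_keys:
          - ssh-rsa AAAA... admin@example.com
    packages:
      - httpd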

In place cluster conversion

This is the 1 click feature that will actually transform an entire cluster to a different hypervisor. The VMs will be powered off, the existing hypervisor network configuration will be stored, the hypervisor hosts will be reinstalled, the hypervisor network configuration will be restored and the VMs will, as a last step, be powered on.

The release notes can be found here and the software can be downloaded here.

Additional software released

It’s not only Acropolis Base Software version 4.6 that was released. The following software versions were included in this release:

  • Prism Central 4.6 – Release notes available here and the software can be downloaded here.
    • Redesigned Prism Central Web Console User Interface and Home Page Dashboard
    • Entity Explorer
    • Configure Alerts and Notification
    • More Robust Historical Data
    • Prism Central Tech Previews
      • Dashboard Customization
      • Capacity Forecast and Planning – Really cool and will be very useful
      • Prism Search
  • NCC 2.2 – Release notes available here and the software can be downloaded here. A small example of running checks is shown after this list.
    • Run two or more individual checks at a time
    • Re-run failing checks
    • Run NCC health checks in parallel
    • Use npyscreen to display NCC status
  • Foundation 3.1 – Documentation and download available here.
    • VLAN Tagging
    • Improved Progress Monitoring
    • Support for the Lenovo HX platforms
    • AHV tar file is provided; ISO file is no longer provided
  • OpenStack Release 1.0
    • Drivers include Acropolis Compute (Nova), Image (Glance), Volume (Cinder) and Network (Neutron) drivers.
    • The OpenStack release is deployed as a standalone System VM (SVM) that communicates with the Acropolis APIs and, very important, enables an existing OpenStack environment to work with AHV without having to make changes to that environment. The AHV based Nutanix cluster will appear as a hypervisor.
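
As promised above, here is roughly how NCC checks are started from a CVM command line. run_all is the everyday command; the individual check path is an example, so list the modules with ncc health_checks to see the exact names on your version:

    # Run every NCC health check in one go
    ncc health_checks run_all

    # Re-run an individual check by its full path (example name)
    ncc health_checks system_checks cluster_version_check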