SMB vCloud Director design challenge – number of licenses

I have previously written about a vCloud Director (vCD) implementation where the vCD based VMs must coexist with traditional vSphere based VMs in the same vSphere cluster, post available here.

A few days ago a customer added an even more interesting parameter to such a solution. The customer has an existing vSphere environment, running both production VMs (e.g. vCenter Server, DNS, SMTP) and development VMs, according to the below figure. I was asked to implement vCD in the existing vSphere environment, meaning I once again had to implement vCD based VMs and vSphere based VMs in the same vSphere cluster. This is not, as mentioned in the other blog post, the recommended way of implementing vCD, but if that's the requirement we need to make the best of it. As usual, make sure you document everything during the process.

The customer presented the following constraints and requirements:

  • 3 ESXi hosts with the following specification exist:
    • HP DL380 G7
      • 2 x CPU
      • 48 GB RAM
  • DIMM speed must be maintained at 1333 MHz.
  • vCD and vSphere based VMs must coexist in the same vSphere cluster.
  • vSphere licenses for 6 physical CPUs are purchased and in use.
  • vCD licenses for 4 physical CPUs have been purchased.
  • No additional physical servers can be purchased due to lack of rack space.
  • Additional hardware components, e.g. CPU, RAM, NIC, and hard disks, can be purchased.
  • Must maintain n+1 failover capacity for the vSphere cluster.
  • Estimated RAM required in the cluster is 96 GB before the existing hardware is replaced.
  • vCD must be implemented as soon as possible.

The most critical constraint in this case was the number of vCD licenses. If implementing vCD in the existing vSphere cluster, we would end up in a situation where 1 of the 3 ESXi hosts is licensed for vSphere but not for vCD. The main problem, as far as I can see, with implementing vCD in a vSphere cluster where not all ESXi hosts are licensed for vCD is that vSphere HA and vSphere DRS are not aware of the vCD license constraint within the vSphere cluster.
From a vCD perspective it might work perfectly, since we just prepare 2 out of the 3 ESXi hosts as shown in the below vCD figure. From a vSphere perspective I can identify a few potential issues, e.g.:

  • What will happen when vSphere DRS wants to migrate a vCD based VM from a vCD licensed ESXi host to a non vCD licensed ESXi host?
  • What will happen when vSphere HA wants to restart a vCD based VM on a non vCD licensed ESXi host?

If anyone has tested the above, please let me know the result.

The recommended datastore to vSphere cluster configuration is to present all datastores to all ESXi hosts in the vSphere cluster. To avoid the scenarios described above, we could instead map datastores to individual ESXi hosts (within the same vSphere cluster) according to the below figure, ensuring no vCD based VM runs on a datastore accessible by the non vCD licensed ESXi host, in my case ESXi03.
[Figure: datastore to individual ESXi host mapping]


Design decision

  • Remove 1 ESXi host from the existing vSphere cluster, meaning we run all vCD based VMs and vSphere based VMs on 2 ESXi hosts.
  • Add physical RAM to the two remaining ESXi hosts; total RAM will be 96 GB per ESXi host.
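The n+1 RAM arithmetic behind these two decisions can be sketched in a few lines. The figures (96 GB required, 2 hosts at 96 GB each) come from the post; the helper function name is my own:

```python
# n+1 capacity check for the proposed 2-host design.
# Figures are taken from the post; the helper name is illustrative.

RAM_REQUIRED_GB = 96   # estimated RAM needed in the cluster
HOSTS = 2              # ESXi hosts remaining in the cluster
RAM_PER_HOST_GB = 96   # 48 GB existing + 48 GB added per host

def usable_ram_n_plus_1(hosts: int, ram_per_host_gb: int) -> int:
    """RAM still available in the cluster when one host fails
    (all hosts identical, so any host is the 'largest')."""
    return (hosts - 1) * ram_per_host_gb

usable = usable_ram_n_plus_1(HOSTS, RAM_PER_HOST_GB)
print(f"Usable RAM with one host failed: {usable} GB")
print("Requirement met:", usable >= RAM_REQUIRED_GB)
```

With one of the two 96 GB hosts down, 96 GB remains, which exactly meets the estimated 96 GB requirement; the original 3 × 48 GB layout gives the same 96 GB under n+1, so the RAM headroom is unchanged while the host count drops.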

Design justification

  • Lack of technical understanding/knowledge regarding placing vCD licensed and non vCD licensed ESXi hosts in the same vSphere cluster.
  • Potential risk of human error when managing vCD licensed and non vCD licensed ESXi hosts in the same vSphere cluster.
  • By adding physical RAM to each ESXi host, we maintain the total physical RAM requirement even during an ESXi host failure.
  • RAM speed will be maintained at 1333 MHz even when the additional RAM is installed. Available RAM configuration options for HP DL380 G7 can be found here.
  • The ESXi host removed from the existing vSphere cluster, but still managed by the vCenter Server, can be used for a few of the existing and potential new vSphere based development VMs.
  • The ESXi host removed from the existing vSphere cluster will be configured to use small memory pages, meaning a potentially higher consolidation ratio, i.e. more VMs per ESXi host.

Design implications

  • The design decision removes one third (1/3) of the total CPU compute power in the vSphere cluster. However, past utilization and capacity management forecasts show 4 CPUs will be enough until the existing hardware is replaced. The potential bottleneck will be physical RAM.
