
vCloud Director implementation for small businesses

So this will be the first blog post where I discuss an architectural challenge.

Several vCloud Director implementations I have conducted in Sweden lately have been for small businesses, or have at least involved small virtual environments.

Some of the customer requirements included in my last project were:

  • 4 ESXi hosts available.
  • Run vCloud Director based virtual machines for development and testing purposes only on the 4 ESXi hosts.
  • Run vSphere based virtual machines for production on the 4 ESXi hosts.
  • Run the management virtual machines (vSphere based) on the 4 ESXi hosts.
  • The existing SLA includes Gold, Silver, Bronze and vCloud.

The excellent VMware vCloud Architecture Toolkit (vCAT 3.0) does not cover implementing vCloud Director when these kinds of business and technical requirements exist.

The recommended VMware implementation approach separates the management workload (the vSphere based virtual machines required to run the vSphere and vCloud Director environment) from the resource workload (where the vCloud Director based virtual machines run) into separate vSphere clusters. The below figure describes an example vCloud Director implementation using separate ESXi hosts for the management cluster and the resource cluster.

[Figure: MGMT-RC, example implementation with separate management and resource clusters]

This approach was not possible in my project since the target number of virtual machines in the environment required 3 ESXi hosts, meaning it was not possible (even though it might not have been a very good idea anyway) to create two vSphere clusters with 2 ESXi hosts each.

The first thing on my agenda was to have the customer determine the priority among the SLA categories, and we ended up with the following priority values:

  • SLA Gold = 10
  • SLA Silver = 5
  • SLA Bronze = 3
  • SLA vCloud = 1

The Pay-As-You-Go allocation model will be used and the CPU Resource Guaranteed (%) will be set to 0 %. The justification for the Pay-As-You-Go allocation model and its configuration includes:

  • The flexibility required.
  • The low priority for the vCloud Director based virtual machines.

The Pay-As-You-Go allocation model does not offer virtual machine CPU and Memory configuration through vCloud Director regarding:

  • Priority
  • Reservation
  • Limit

[Screenshot: virtual machine CPU and memory settings in vCloud Director with the Pay-As-You-Go allocation model]
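
Even though these settings are hidden in vCloud Director under Pay-As-You-Go, the resulting values per virtual machine can still be inspected at the vSphere layer. A minimal PowerCLI sketch, where the vCenter server name is a placeholder:

```powershell
# Inspect per virtual machine CPU/memory shares, reservation and limit at the
# vSphere layer. Assumes VMware PowerCLI; the vCenter name below is a placeholder.
Connect-VIServer -Server vcenter.example.local

Get-VM |
    Get-VMResourceConfiguration |
    Select-Object VM, CpuSharesLevel, NumCpuShares, CpuReservationMhz, CpuLimitMhz,
                  MemSharesLevel, NumMemShares, MemReservationMB, MemLimitMB
```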

The Reservation Pool allocation model, which is not suitable in this scenario, would be required to enable per virtual machine CPU and Memory configuration through vCloud Director regarding:

  • Priority
  • Reservation
  • Limit

[Screenshot: virtual machine CPU and memory settings in vCloud Director with the Reservation Pool allocation model]

Once the priority among the SLA categories and the allocation model for vCloud Director were determined, I needed a way to guarantee the priority per virtual machine. The strategy I chose uses a vSphere cluster resource pool structure (based on the vCloud Director allocation model we were using) and takes advantage of the shares value configuration possibilities. The following vSphere cluster resource pool structure was created in the vSphere cluster.

[Screenshot: vSphere cluster resource pool structure]

The reason for creating the vCloud-SLA-vCloud vSphere cluster resource pool inside the vCloud-SLA vSphere cluster resource pool is to make sure we do not interfere with the vSphere cluster resource pools managed by vCloud Director when configuring the shares values.
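
A minimal PowerCLI sketch of how such a structure could be created, assuming a connected vCenter session (the cluster name is a placeholder; the shares values are configured later by the script):

```powershell
# Create the SLA resource pools directly under the cluster root, plus the nested
# vCloud-SLA-vCloud pool used as the vCloud Director target.
# Assumes an existing Connect-VIServer session; the cluster name is a placeholder.
$cluster = Get-Cluster -Name "Cluster01"

New-ResourcePool -Location $cluster -Name "vSphere-SLA-Gold"
New-ResourcePool -Location $cluster -Name "vSphere-SLA-Silver"
New-ResourcePool -Location $cluster -Name "vSphere-SLA-Bronze"
$vcloudSla = New-ResourcePool -Location $cluster -Name "vCloud-SLA"

# Nested pool handed to vCloud Director, so the pools vCloud Director creates and
# manages are left alone when the shares values on the top level pools are changed.
New-ResourcePool -Location $vcloudSla -Name "vCloud-SLA-vCloud"
```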

The below picture shows the vSphere cluster resource pool “System vDC (9b9225f8-6e0d-4c0a-abf7-b5ee6a58a10b)” created when the first Provider virtual datacenter (PvDC) was created.

[Screenshot: the System vDC resource pool in the vSphere cluster]

The below picture shows the vSphere cluster resource pool “OvDC-PAYG-customer-01 (b269a4a7-4305-4a90-9848-752259b048ba)” created when the first Organization virtual datacenter (OvDC) was created.

[Screenshot: the OvDC-PAYG-customer-01 resource pool in the vSphere cluster]

With the vSphere resource pool structure in place I could focus on how to guarantee the priority between virtual machines. Using the priority defined among the SLA categories, I created a PowerCLI script that will:

  • calculate the number of virtual machines in each vSphere cluster resource pool.
    Important: Include only the vSphere cluster resource pools placed directly under the vSphere cluster root. In my case:

    • vCloud-SLA
    • vSphere-SLA-Bronze
    • vSphere-SLA-Gold
    • vSphere-SLA-Silver
  • calculate the number of vCPUs and Memory GB per virtual machine per vSphere cluster resource pool.
  • multiply the total number of vCPUs in each vSphere cluster resource pool by the priority value for the resource pool's virtual machines.
  • multiply the total number of Memory GB in each vSphere cluster resource pool by the priority value for the resource pool's virtual machines.
  • use the values from the above calculations to configure the shares values per vSphere cluster resource pool (a sketch of this logic follows the example below).

Example: If we have 4 powered on virtual machines (each with 1 vCPU and 1 GB of memory) running in the vSphere cluster resource pool “vSphere-SLA-Gold”, I will configure the shares values according to the below figure (4 vCPUs × 10 = 40 custom CPU shares and 4 GB × 10 = 40 custom memory shares).

[Screenshot: shares values configured on the vSphere-SLA-Gold resource pool]
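
A minimal PowerCLI sketch of the calculation described above, assuming a connected vCenter session (the cluster name is a placeholder; the actual script is linked further down):

```powershell
# Set custom shares per resource pool: total vCPUs x SLA priority for CPU shares and
# total memory GB x SLA priority for memory shares, counting powered on VMs only.
$cluster  = Get-Cluster -Name "Cluster01"   # placeholder cluster name

# Priority per SLA category, keyed on the resource pool name under the cluster root.
$priority = @{
    "vSphere-SLA-Gold"   = 10
    "vSphere-SLA-Silver" = 5
    "vSphere-SLA-Bronze" = 3
    "vCloud-SLA"         = 1
}

foreach ($poolName in $priority.Keys) {
    $pool = Get-ResourcePool -Location $cluster -Name $poolName
    $vms  = Get-VM -Location $pool | Where-Object { $_.PowerState -eq "PoweredOn" }
    if (-not $vms) { continue }

    # Total vCPUs and memory (GB) for the powered on VMs in the pool.
    $vcpus = ($vms | Measure-Object -Property NumCpu   -Sum).Sum
    $memGB = ($vms | Measure-Object -Property MemoryGB -Sum).Sum

    # Multiply by the SLA priority and apply the results as custom shares values.
    Set-ResourcePool -ResourcePool $pool `
        -CpuSharesLevel Custom -NumCpuShares ([int]($vcpus * $priority[$poolName])) `
        -MemSharesLevel Custom -NumMemShares ([int]($memGB * $priority[$poolName])) |
        Out-Null
}
```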

Schedule the script to run as frequently as required; in our case every hour was enough.
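
One way to schedule it, assuming the script runs from a Windows machine with PowerCLI installed (the task name and script path below are made up for the example):

```powershell
# Create a Windows scheduled task that runs the shares script every hour.
# The task name and script path are placeholders.
schtasks /Create /SC HOURLY /TN "Set-ResourcePoolShares" `
    /TR "powershell.exe -NoProfile -File C:\Scripts\Set-ResourcePoolShares.ps1"
```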

Another option would be to use a static vSphere cluster resource pool configuration, which would require manual interaction several times a day.

The script I created can be found here: vSphere cluster resource pool configuration script.

I believe there are several other ways of achieving this based on the customer's business and technical requirements, but this is the strategy I have chosen to use for the time being.
