
vSphere cluster resource pool configuration script

This blog post covers the technical aspect (the script) that I mentioned in the blog post vCloud Director implementation for small businesses.

Input parameters:

  • 4 vSphere cluster resource pools exist at the vSphere cluster root level
    • SLA-Gold with priority value 10
    • SLA-Silver with priority value 5
    • SLA-Bronze with priority value 3
    • SLA-vCloud with priority value 1

    [Figure: the four SLA resource pools at the vSphere cluster root level]

  • Each vSphere cluster resource pool includes 4 virtual machines:
    • SLA-X-VM-01, 2 vCPU, 1 GB RAM
    • SLA-X-VM-02, 2 vCPU, 1 GB RAM
    • SLA-X-VM-03, 2 vCPU, 1 GB RAM
    • SLA-X-VM-04, 2 vCPU, 1 GB RAM
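
For reference, a minimal PowerCLI sketch that creates these four resource pools at the cluster root level. The cluster name Cluster01 is an assumption, not a value from this post; change it to match your environment.

Add-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue
Connect-VIServer vCenterServer
#
# Assumed cluster name - change to match your environment
$cluster = Get-Cluster -Name "Cluster01"
# Resource pool name => SLA priority value (from the list above)
$pools = @{"SLA-Gold" = 10; "SLA-Silver" = 5; "SLA-Bronze" = 3; "SLA-vCloud" = 1}
foreach ($name in $pools.Keys) {
    New-ResourcePool -Location $cluster -Name $name | Out-Null
}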

The below figure shows the vSphere cluster resource pool shares configuration in the vSphere Web Client before I ran the script.
[Figure: resource pool shares configuration before running the script]
When running the PowerCLI configuration script, I receive the output presented in the below figure.
[Figure: PowerCLI script output after the first run]
The below figure shows the vSphere cluster resource pool configuration in the vSphere Web Client after running the script the first time.
[Figure: resource pool configuration after the first run]
The Shares values seem to be correct when using the calculation formula described in the blog post http://magander.se/2013/03/19/vcloud-director-implementation-for-small-businesses/:

  • CPU Shares per vSphere cluster resource pool = “SLA category priority value” X “number of total vCPUs assigned to the virtual machines in the vSphere cluster resource pool” 
  • Memory Shares per vSphere cluster resource pool = “SLA category priority value” X “number of total Memory GB assigned to the virtual machines in the vSphere cluster resource pool”

Using the above formula we get the following figures for the vSphere cluster resource pool SLA-Gold:

  • CPU Shares
    • 10 (SLA priority value) X 8 (number of vCPUs in the vSphere cluster resource pool) = 80
  • Memory Shares
    • 10 (SLA priority value) X 4 (number of Memory GB in the vSphere cluster resource pool) = 40
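
For reference, the same arithmetic as a quick PowerShell check, using the SLA-Gold values from the list above:

$priority = 10            # SLA-Gold priority value
$totVCPU = 4 * 2          # 4 VMs with 2 vCPUs each
$totRAMGB = 4 * 1         # 4 VMs with 1 GB RAM each
$priority * $totVCPU      # CPU Shares = 80
$priority * $totRAMGB     # Memory Shares = 40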

I increased the number of vCPUs and number of Memory GB for the virtual machine SLA-vCloud-VM-01 according to the below figure.
[Figure: new vCPU and memory configuration for SLA-vCloud-VM-01]
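For reference, the same change can be made with PowerCLI. The values below (16 vCPUs and 16 GB RAM) are the ones implied by the totals in the calculation that follows; treat this as a sketch, not a transcript of what I ran:

# Requires the VM to be powered off, or CPU/memory hot add to be enabled
Get-VM -Name "SLA-vCloud-VM-01" | Set-VM -NumCpu 16 -MemoryGB 16 -Confirm:$false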
The below figure shows the PowerCLI script output after running the script the second time. The values for the SLA-vCloud section changed compared to the first PowerCLI script output figure.
[Figure: PowerCLI script output after the second run]
The Shares configuration changed to the expected values for the SLA-vCloud vSphere cluster resource pool according to the following calculation:

  • CPU Shares
    • 1 (SLA priority value) X 22 (number of vCPUs in the vSphere cluster resource pool) = 22
  • Memory Shares
    • 1 (SLA priority value) X 19 (number of Memory GB in the vSphere cluster resource pool) = 19

The below figure shows the vSphere cluster resource pool configuration in the vSphere Web Client after running the script the second time.
[Figure: resource pool configuration after the second run]
Below you’ll find the script. Once again, thanks to Luc Dekens @LucD22, Alan Renouf @alanrenouf and Niklas Åkerlund @vNiklas for your input while creating the script.

The “Get login password” section in the script avoids typing the password, required for the user running the script, in plain text in the script or in another file. The password is stored, encrypted, in the powerclicred file and can be used for vCenter Server authentication when scheduling the script in the Windows Task Scheduler.
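If you have not created the powerclicred file yet, a one-time sketch like the following generates it (the path is the one assumed by the script below):

# Run this once as the same Windows user that will run the scheduled script,
# since ConvertFrom-SecureString encrypts with that user's DPAPI key
Read-Host "Enter vCenter Server password" -AsSecureString | ConvertFrom-SecureString | Set-Content c:\vspherescripts\powerclicred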

Change the placeholder values in the script to your required values.

# Script to configure vSphere Cluster Resource Pool shares based on SLA priority and the number of powered-on VMs per vSphere Cluster Resource Pool
#
# Version 1.0
# Magnus Andersson, Real Time Services AB
#
Add-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue
#
#
$RPShares = @{}
#___________________________________________________________
# Manual configuration section starts here
#
# Change "name of the vSphere cluster resource pool" to the name of the vSphere Cluster Resource Pool you need to include.
# Change the X value. This value reflects the shares value per vCPU and GB RAM and should be the pre-defined SLA based priority value.
# Create one line per vSphere Cluster Resource Pool you need to include.
#
$RPShares.Add("name of the vSphere cluster resource pool", X)
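#
# Example, using the four vSphere cluster resource pools and priority values from this post:
# $RPShares.Add("SLA-Gold", 10)
# $RPShares.Add("SLA-Silver", 5)
# $RPShares.Add("SLA-Bronze", 3)
# $RPShares.Add("SLA-vCloud", 1)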
#
#
# Get login password
$pwd = Get-Content c:\vspherescripts\powerclicred | ConvertTo-SecureString
$cred = New-Object System.Management.Automation.PSCredential "domain\username", $pwd
#
#
# Connect to vCenter Server
#
Connect-VIServer vCenterServer -Credential $cred
# Manual configuration section ends here
#___________________________________________________________
#
#
foreach ($cluster in (Get-Cluster)) {
    foreach ($p in ($RPShares.Keys | Sort-Object)) {
        $RP = $cluster | Get-ResourcePool | ?{$_.Name -like $p}
        # Only powered-on VMs count towards the shares calculation
        $VMs = $RP | Get-VM | ?{$_.PowerState -eq "PoweredOn"}
        # Total GB RAM (1 GB = 1024 MB) and total vCPUs in the resource pool
        $TotVRAM = ($VMs | Measure-Object -Property MemoryMB -Sum).Sum/1024
        $TotVCPU = ($VMs | Measure-Object -Property NumCpu -Sum).Sum
        $VRAMShares = [math]::Round($TotVRAM * $RPShares.Item($p))
        $VCPUShares = [math]::Round($TotVCPU * $RPShares.Item($p))
        #
        Write-Host "Resource Pool:" $RP.Name
        Write-Host "Total CPU Shares: $VCPUShares (Total $TotVCPU vCPUs)"
        Write-Host "Total RAM Shares: $VRAMShares (Total $TotVRAM GB RAM)"
        Write-Host ""
        #
        # Set the calculated shares, falling back to 0 when the pool has no powered-on VMs
        if ($TotVCPU) {$RP | Set-ResourcePool -NumCpuShares $VCPUShares -Confirm:$false | Out-Null}
        else {$RP | Set-ResourcePool -NumCpuShares 0 -Confirm:$false | Out-Null}
        if ($TotVRAM) {$RP | Set-ResourcePool -NumMemShares $VRAMShares -Confirm:$false | Out-Null}
        else {$RP | Set-ResourcePool -NumMemShares 0 -Confirm:$false | Out-Null}
    }
}

I created a blog post about how to schedule a PowerCLI script in the Windows Task Scheduler, and it can be found here.
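
For reference, a scheduled task typically ends up running a command along these lines (the script path, file name and task name are assumptions, not values from this post):

# Create a daily task that runs the script at 06:00
schtasks /Create /TN "RP-Shares" /SC DAILY /ST 06:00 /TR "powershell.exe -NonInteractive -File c:\vspherescripts\rp-shares.ps1"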
