Some time ago I wrote about an undocumented feature that allows limiting the maximum disk size of a VM in vCloud Director. I was asked numerous times whether similar settings exist for vCPU and RAM maximums. Today I discovered that they do; however, they should be considered an experimental feature. I still find them useful, as a misconfigured VM with an extremely large number of vCPUs or huge RAM will impact the host it is running on and cause excessive swapping or high CPU ready times, so it is in the best interest of the vCloud Director system administrator to prevent it. The other option is to use blocking tasks as described here: CPU and Memory Limit enforcement for vCloud Director, and in a blog here.
The limits are set with the cell-management-tool command on any cell. A restart of the cell is not necessary.
$VCLOUD_HOME/bin/cell-management-tool manage-config -n vmlimits.memory.limit -v 65536
$VCLOUD_HOME/bin/cell-management-tool manage-config -n vmlimits.cpu.numcpus -v 16
The settings in the example above limit the maximum size of a VM to 16 vCPUs and 64 GB RAM (the memory limit is specified in MB).
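The memory limit is a raw MB figure, which is easy to get wrong. A minimal sketch of a conversion helper (the function name and the echoed command line are my own illustration, not part of cell-management-tool):

```shell
#!/bin/sh
# Hypothetical helper: convert a RAM limit in GB to the MB value
# that vmlimits.memory.limit expects (64 GB -> 65536 MB).
gb_to_mb() {
  echo $(( $1 * 1024 ))
}

MEM_LIMIT_MB=$(gb_to_mb 64)
# Print the command that would be run on the cell:
echo "\$VCLOUD_HOME/bin/cell-management-tool manage-config -n vmlimits.memory.limit -v $MEM_LIMIT_MB"
```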
- The limit is vCloud Director instance-wide and also applies to system administrators
- A VM with resources set above the limit will fail to power on with one of the following errors:
The operation could not be performed, because there are no more CPU resources
The operation could not be performed, because there are no more Memory resources
- The limit can be circumvented by using CPU or memory hot add to grow an already powered-on VM beyond the configured maximums
Again, consider this an experimental feature and use it at your own risk.
Although cloud services provide access to abstracted, seemingly infinite physical resources, the truth is that the physical infrastructure is not limitless. Pooling and distributed resource scheduling for compute, storage and network help, but in the end there is always a physical host, LUN or network uplink which constrains the granularity of scaling.
When it comes to storage, it is the datastore size that limits the maximum size of a virtual disk a cloud consumer can attach to his or her VM. Thin and fast provisioning and deduplication (NFS/VSAN) can be used to fit more data, and Storage DRS can shuffle data around when a particular datastore is filling up. Still, the service provider should not allow creation of arbitrarily sized vDisks (the vSphere maximum is 62 TB) in order to avoid a datastore out-of-space condition. For example, letting customers provision 4 TB thin disks on 3 TB LUNs is just asking for trouble.
Before vCloud Director 8.10, service providers leveraged blocking tasks with custom orchestration to check whether a provisioned VM was within provider-specified limits (RAM size, vDisk size, max vCPUs). There is a reference implementation published here: CPU and Memory Limit enforcement for vCloud Director.
vCloud Director 8.10 brings a hidden configuration option with which the service provider can globally set the maximum allowed size of a virtual disk.
The option can be set with the cell-management-tool command on a vCloud cell with the following syntax:
$VCLOUD_HOME/bin/cell-management-tool manage-config -n vmlimits.disk.capacity.maxMb -v 1000000
which sets the maximum disk size to 1,000,000 MB, i.e. roughly 1 TB.
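Since the -v value is again a raw MB figure, it helps to compute it explicitly. A minimal sketch following the decimal convention of the example above (1 TB = 1,000,000 MB); the helper name is my own, not part of the tool:

```shell
#!/bin/sh
# Hypothetical helper: convert a disk limit in TB to the MB value
# for vmlimits.disk.capacity.maxMb, using the example's decimal
# convention (1 TB = 1,000,000 MB).
tb_to_mb() {
  echo $(( $1 * 1000000 ))
}

echo "1 TB -> $(tb_to_mb 1) MB"   # the value used in the example above
echo "2 TB -> $(tb_to_mb 2) MB"
```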
Note: the command is run on one vCloud cell and its impact is immediate (no need to restart anything).
If the tenant tries to provision a larger vDisk, the operation fails with an error.
Note that the limit is not enforced for system administrators and existing disks are not affected.
What the limit should be is out of scope for this post, as there are many considerations that should be taken into account:
- datastore size
- can datastore grow?
- thin provisioning
- fast provisioning
- tenant snapshots
- provider snapshots (backup software generated)
- yellow and red datastore thresholds
- Storage DRS
- deduplication on the array
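To make a few of those considerations concrete, here is a back-of-the-envelope sizing sketch. Everything in it is an illustrative assumption of mine (the function, its parameters, and the percentages), not a vCloud Director or vSphere setting:

```shell
#!/bin/sh
# Hypothetical sizing sketch: derive a conservative value for
# vmlimits.disk.capacity.maxMb from the smallest datastore.
# All factors below are illustrative assumptions.
max_vdisk_mb() {
  datastore_mb=$1     # raw datastore size in MB
  yellow_pct=$2       # yellow threshold, e.g. 75 (%)
  snapshot_pct=$3     # headroom kept for snapshots, e.g. 20 (%)
  usable=$(( datastore_mb * yellow_pct / 100 ))
  echo $(( usable * (100 - snapshot_pct) / 100 ))
}

# A 3 TB (decimal) datastore, 75% yellow threshold, 20% snapshot headroom:
max_vdisk_mb 3000000 75 20
```

With these example inputs the sketch yields 1,800,000 MB, well under the raw 3 TB, which is the point: the enforced maximum should leave room for thresholds and snapshots rather than match the datastore size.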