Hyper-V is nearing the end of a long march toward memory overcommit, with Microsoft finally set to release Dynamic Memory early next year in Service Pack 1. But please do not call it “overcommit” when the Microsoft Virtualization folks are around; it really riles them up. Just think of this new Dynamic Memory feature as memory thin provisioning: each VM has the potential to use more physical memory as its demand grows, as long as another VM doesn’t request it first.
The design is achieved through two technologies: hot-add RAM and guest OS memory ballooning, the latter being a ripoff of the popular memory management feature pioneered in VMware ESX and trusted by customers for many, many years.
What makes Dynamic Memory unique is that Hyper-V administrators configure a VM with a startup and a maximum memory amount. On boot, the VM sees only the startup amount. As demand increases, the host dynamically allocates more physical memory until the maximum limit is reached. VMware ESX is more straightforward, and works with all guest operating systems, because the full amount of RAM is presented to VMs at all times.
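To make the thin-provisioning behavior concrete, here is a toy Python model of the scheme described above. This is a sketch under stated assumptions, not Microsoft’s actual algorithm: the class names, the simple first-come-first-served pool, and the megabyte figures are all invented for illustration.

```python
# Toy model of memory thin provisioning (illustrative only, not Hyper-V's
# real allocator): each VM boots with its startup amount and may grow toward
# its maximum, but only while the shared host pool still has memory left.

class Host:
    def __init__(self, physical_mb):
        self.free_mb = physical_mb  # unclaimed physical memory

class VM:
    def __init__(self, name, startup_mb, maximum_mb, host):
        self.name, self.maximum_mb, self.host = name, maximum_mb, host
        host.free_mb -= startup_mb  # the startup amount is claimed at boot
        self.allocated_mb = startup_mb

    def demand(self, extra_mb):
        """Grow toward the maximum; grant only what the host can still supply."""
        headroom = self.maximum_mb - self.allocated_mb
        grant = min(extra_mb, headroom, self.host.free_mb)
        self.host.free_mb -= grant
        self.allocated_mb += grant
        return grant

host = Host(physical_mb=8192)
a = VM("vm-a", startup_mb=1024, maximum_mb=8192, host=host)
b = VM("vm-b", startup_mb=1024, maximum_mb=8192, host=host)

a.demand(6144)         # vm-a grabs the remaining pool first...
print(b.demand(4096))  # ...so vm-b's later request gets nothing: prints 0
```

The point of the toy is the last line: both VMs were promised a generous maximum, but whichever one demands memory first wins, which is exactly the “as long as another VM doesn’t request it first” caveat.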
One of the unintended consequences of this Hyper-V engineering feat is a new set of workarounds for applications that are cognizant of available resources. For example, if an installer enforces a product’s minimum memory requirements, administrators must initially deploy Hyper-V VMs with static memory and then, if feasible, switch to dynamic memory after installation is complete. Either that, or try the old MS Paint trick. Do any applications even enforce memory requirements? Yes:
Dynamic Memory is clearly a non-starter for things like Oracle databases and Java applications that are tuned for specific resources, which is not such a big issue when you stop and realize that nobody would consider running them on Hyper-V to begin with.
Only VMware vSphere has a comprehensive set of memory management technologies that delivers benefits without introducing changes or workarounds to operational procedures. Now that’s dynamic.