Hyper-V is nearing the end of a long march to memory overcommit now that Microsoft will finally release Dynamic Memory early next year in Service Pack 1, but please do not call it “overcommit” when Microsoft Virtualization is around — it really riles them up. Just think of this new Dynamic Memory feature as memory thin-provisioning: each VM has the potential to utilize more physical memory as demand grows, just as long as another VM doesn’t request it first.
The design is achieved through two technologies: hot-add RAM and guest OS memory ballooning, the latter being a ripoff of the popular memory management feature pioneered in VMware ESX and trusted by customers for many, many years.
What makes Dynamic Memory unique is that Hyper-V administrators can configure a VM with startup and maximum memory. On boot, the VM only sees the startup amount. As demand increases, the host dynamically allocates more physical memory until the maximum limit is reached. VMware ESX is more straightforward, and works with all guest operating systems, because the full amount of RAM is presented to VMs at all times.
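The startup/maximum mechanics described above can be sketched as a toy model. This is purely illustrative (the class names, numbers, and grant logic are my own assumptions, not Microsoft's implementation): the guest boots seeing only the startup amount, and hot-add grows the allocation toward the maximum only while the host still has free physical RAM.

```python
# Toy model of Hyper-V Dynamic Memory hot-add (illustrative only;
# names and numbers are hypothetical, not Microsoft's implementation).

class Host:
    def __init__(self, free_mb):
        self.free_mb = free_mb  # unallocated physical RAM on the host


class DynamicMemoryVM:
    def __init__(self, name, startup_mb, maximum_mb):
        self.name = name
        self.startup_mb = startup_mb    # RAM visible to the guest at boot
        self.maximum_mb = maximum_mb    # hard ceiling for hot-add
        self.allocated_mb = startup_mb  # currently backed by host RAM

    def demand(self, needed_mb, host):
        """Ask the host to hot-add memory, capped at the VM's maximum
        and by whatever physical RAM the host has left."""
        target = min(needed_mb, self.maximum_mb)
        grant = min(target - self.allocated_mb, host.free_mb)
        if grant > 0:
            host.free_mb -= grant
            self.allocated_mb += grant
        return self.allocated_mb


host = Host(free_mb=2048)
vm = DynamicMemoryVM("web01", startup_mb=512, maximum_mb=4096)
vm.demand(3072, host)  # demand grows; host hot-adds only what is free
print(vm.allocated_mb, host.free_mb)  # 2560 0
```

Note the “as long as another VM doesn’t request it first” caveat falls out of the model naturally: the grant is capped by `host.free_mb`, so a VM asking late may get less than its maximum.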
One of the unintended consequences of this Hyper-V engineering feat is additional workarounds for applications that are cognizant of available resources. For example, if an installer enforces a product’s minimum memory requirements, administrators must initially deploy Hyper-V VMs with static memory and switch to dynamic memory — if feasible — after installation is complete. Either that, or try the old MS Paint trick. Do any applications even enforce memory requirements? Yes:
Dynamic Memory is clearly a non-starter for things like Oracle databases and Java applications that are tuned for specific resources, which is not such a big issue when you stop and realize that nobody would consider running them on Hyper-V to begin with.
Only VMware vSphere has a comprehensive set of memory management technologies that delivers benefits without introducing changes or workarounds to operational procedures. Now that’s dynamic.
Whether a rip-off or something that requires a learning curve to operate efficiently, the fact is, from my testing, Dynamic Memory will efficiently allow for 20%-30% more VMs per host. I do think the ballooning architecture is a little confusing to application and software vendors looking to see exactly how much RAM is allocated or has the potential to be allocated, but I will take it as progress. Since it is not an all-or-nothing option, Dynamic Memory will fit well with some workloads, while statically assigned memory will be used for others. This is something VMware shops have been doing for quite some time. In the short run, 20%-30% more VMs per host equals additional hard-dollar savings at a time when businesses need to stretch their budgets.
Rob
Well said, Rob. Thanks for weighing in… without resorting to personal attacks.
Eric
I wonder how this can work in the long term.
VM needs memory –> Hot add
VM not using memory –> ballooning takes it away
Since you cannot “hot remove” memory, after some weeks you will end up with many oversized VMs (in terms of RAM) with their balloons inflated. Pretty crappy!
Not to mention you need to manually configure min/max for each VM.
Not to mention any peak usage on the VM might trigger a hot add.
Not to mention you can use it only on OSes that support hot add on Hyper-V…
Looks like a very poorly designed feature.
Fernando, that would of course be the case if you viewed DM through a VMware lens. The ballooning driver removes the memory from the VM as it needs to, making it available for the host to allocate to another VM. The VM will not be oversized; it will be using as much memory as it needs. Should it require the memory again, DM will ensure that it has it. The balloon being inflated actually doesn’t matter. I’m not sure how this works on vSphere, but it sounds like it’s slightly different.
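The inflate/deflate cycle being argued about here can be sketched as a toy model too. Again, this is my own illustrative sketch, not actual balloon-driver code: inflating pins idle pages inside the guest so the host can reuse the physical RAM behind them, and deflating returns those pages when the guest needs them back.

```python
# Toy sketch of guest memory ballooning (illustrative only;
# not actual Hyper-V or vSphere driver behavior).

class Host:
    def __init__(self, free_mb):
        self.free_mb = free_mb  # physical RAM the host can hand out


class BalloonVM:
    def __init__(self, visible_mb, in_use_mb):
        self.visible_mb = visible_mb  # RAM the guest OS sees
        self.in_use_mb = in_use_mb    # what the guest actually uses
        self.balloon_mb = 0           # pages pinned by the balloon driver

    def inflate(self, host, reclaim_mb):
        """Host reclaims idle guest memory via the balloon driver."""
        idle = self.visible_mb - self.in_use_mb - self.balloon_mb
        taken = min(reclaim_mb, idle)
        self.balloon_mb += taken
        host.free_mb += taken
        return taken

    def deflate(self, host, return_mb):
        """Guest demand rises; the balloon releases pinned pages."""
        given = min(return_mb, self.balloon_mb, host.free_mb)
        self.balloon_mb -= given
        host.free_mb -= given
        return given


host = Host(free_mb=0)
vm = BalloonVM(visible_mb=4096, in_use_mb=1024)
vm.inflate(host, 2048)  # host reclaims 2048 MB of idle guest memory
vm.deflate(host, 512)   # guest needs some back; 512 MB returned
print(vm.balloon_mb, host.free_mb)  # 1536 1536
```

Fernando’s complaint maps onto `visible_mb` never shrinking: the guest keeps reporting the inflated total, and only the balloon accounting distinguishes usable from pinned memory. Stu’s counterpoint is that the physical RAM behind the balloon is genuinely available to other VMs regardless.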
Configuring a min/max isn’t really that much of an issue – you already set a maximum anyway, this just adds a startup figure, you can script it if you want. Peak usage on the VM triggering a hot add is exactly what dynamic memory is for, it’s not actually a drawback. Sure, it only works on OSes that support hot add on Hyper-V, but that just happens to be 80% of the target market anyway (Win2k3 and above).
I think it is at least strange to have VMs with huge memory sizes and the balloon inflated at all times. I bet some applications would have trouble with that; they were never designed with this in mind.
Even Windows auto-tunes some parameters based on the total memory. If you keep adding memory, you are playing a trick on the OS and applications.
Many applications will not be able to take advantage of this additional memory immediately.
Windows Server 2003 Standard does not support hot add (to my knowledge), so I don’t think 80% of the OSes is a correct number (it depends on how up to date the customer is with the latest MS OSes).
Fernando, rather than me telling you that “it isn’t like it is in the VMware world” repeatedly, if you want to know how this really works you should have a look at one of Ben Armstrong’s sessions from TechEd, available online at http://www.msteched.com. He covers the nuts and bolts of how it works.
To your point about hot add support, it doesn’t use the Windows hot add framework to add memory which means that support for Windows 2003 is there (it’s available right now in the RC release of SP1).
I probably should have put my disclaimer: I work for Microsoft.
Let’s forgive Fernando for not being up to date with the latest breaking news — when Dynamic Memory was first announced, the Enterprise or Datacenter edition was required. It turned out that some customers had standardized on Standard, no pun intended, and a hotfix was planned to accommodate that scenario.
Thanks Eric,
The Enterprise version costs four times more than Standard and historically does not have all the features (such as hot add). That’s why many organizations use the Standard version for almost everything, leaving Enterprise for clusters, etc.
But it looks like MS fixed it.
It also looks like you didn’t actually read my reply – hot add here does not use the hot add framework that was traditionally used for adding memory to physical systems (so the Standard/Enterprise distinction doesn’t matter). Again, rather than me tell you it’s different, if you really want to know, go watch one of those TechEd recordings. I know this is a VMware site and you’re not inclined to do so, but until you do you’re coming from an uninformed position.
Cheers & Merry Christmas
Stu
Sorry, I was not clear in my reply. I acknowledge the info you gave about the Standard version; I was just adding more info here, sorry.
No matter the terminology mess MS is making with this, DM is a copy of VMware’s proven ballooning technique, plus this hot-add stuff.
In the end, the objective is to accommodate more VMs on the host, which is called, guess what… overcommit!
Funny to see you guys implementing something you criticized for so long, with some terminology changed so it doesn’t look the same.
It reminds me of the “Who needs VMotion??” message from MS some time ago.
Fernando, rather than just making a bunch of (incorrect) assumptions, would it not be better to find out the facts first?
As for the actual blog posting, well, it’s no more than the usual VMware FUD, is it really? “Dynamic Memory is clearly a non-starter for things like Oracle databases and Java applications that are tuned for specific resources” – Because you always install an enterprise database and throw it straight out into production with no further config changes, don’t you?
Pingback: VMware vSphere offers the most comprehensive memory management and configuration capabilities | VCritical