The Truth About Hyper-V Memory Overcommit

The ability to assign more memory to virtual machines than is physically available on a host is called memory overcommit, and it is a major factor behind higher VM density — running more virtual machines per host increases efficiency and reduces cost. VMware ESX has provided this feature for multiple generations, giving it an advantage over competing hypervisors.

The Cost Per Application Calculator makes it clear that investing in VMware vSphere 4 significantly reduces your datacenter hardware footprint and associated costs.  Scott Drummonds, the VMware performance expert, recently explained how memory overcommit is the only way to effectively use all of the physical RAM in a hypervisor.

Each time this topic comes up, Microsoft revs up its marketing machine and responds like this:

The truth is that Hyper-V will have memory overcommit the moment Microsoft figures out how to build it. If that day ever comes, watch the messaging quickly change to the familiar “our customers asked us to implement this…” line.

Why is it fair to make such a bold claim?  Two words:

Dynamic Memory

It’s a little-known fact that some early releases of Hyper-V R2 actually had a feature that allowed administrators to assign more RAM to virtual machines than was physically present on the host — “Dynamic Memory,” a.k.a. memory overcommit.

We can look back and see what bloggers like Mark Wilson had to say:

Microsoft also spoke to me about a dynamic memory capability (just like the balloon model that competitors offer). I asked why the company had been so vocal in downplaying competitive implementations of this technology yet was now implementing something similar and Ward Ralston explained to me that this is not the right solution for everyone but may help to handle memory usage spikes in a VDI environment. Since then, I’ve been advised that dynamic memory will not be in the beta release of Windows Server 2008 R2 and Microsoft is evaluating options for inclusion (or otherwise) at release candidate stage.

And this blog provided more detail on the implementation:

In Hyper-V 1.0, physical memory was hard-allocated to the VMs, but in 2.0 the pool of memory is dynamically allocated and removed based on VM usage with no service interruption. Dynamically allocating memory to VMs can drastically improve host consolidation rates.

Hyper-V 2.0 VMs are configured with an initial RAM setting (how much the machine boots with) as well as minimum and maximum RAM values. Hyper-V then adds RAM using the Hot-Add function, and removes it using a balloon driver (for supported OSes).
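The initial/minimum/maximum scheme described above is easy to model. Here is a minimal sketch of one host-side rebalancing pass, assuming a simple demand-driven policy; the class and function names are hypothetical illustrations, not Hyper-V’s actual code or API:

```python
# Hypothetical sketch of a "Dynamic Memory" rebalancing pass.
# All names here are illustrative assumptions, not a real Hyper-V API.

from dataclasses import dataclass

@dataclass
class VM:
    name: str
    minimum_mb: int   # floor the balloon driver may shrink the VM to
    maximum_mb: int   # ceiling that hot-add may grow the VM to
    current_mb: int   # RAM currently assigned to the VM
    demand_mb: int    # the guest's observed memory demand

def rebalance(vms, host_mb):
    """Grow starved VMs via hot-add and shrink idle ones via the
    balloon driver, keeping each VM within [minimum, maximum] and
    the total within the host's physical RAM."""
    for vm in vms:
        target = max(vm.minimum_mb, min(vm.maximum_mb, vm.demand_mb))
        free = host_mb - sum(v.current_mb for v in vms)
        if target > vm.current_mb:
            # "Hot-add": grow, but never beyond what the host has free.
            vm.current_mb += min(target - vm.current_mb, free)
        elif target < vm.current_mb:
            # "Balloon": reclaim pages the guest is not using.
            vm.current_mb = target
    return vms
```

Note that because hot-add is capped by the host’s free RAM, a scheme like this by itself cannot hand out more total memory than physically exists; going beyond that requires reclamation techniques such as page sharing or host-level swap.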

For the visual learners in the crowd, take a look at this VM configuration dialog:

Note that this is nothing like Transparent Page Sharing in VMware ESX — Hyper-V VMs would actually be reconfigured using hot-add memory, so the guest operating systems must cooperate.
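For contrast, the core idea behind Transparent Page Sharing can be sketched in a few lines: the hypervisor hashes page contents and backs identical pages with a single stored copy, with no cooperation from the guest. This is a toy model for illustration only — the function name is made up, and real TPS additionally does a full byte-for-byte compare on a hash match and breaks sharing copy-on-write when a guest writes:

```python
# Toy model of content-based page deduplication, the idea behind TPS.
# Not VMware's implementation; for illustration only.

import hashlib

def share_pages(guest_pages):
    """Map each (vm, page_number) key to a content hash, keeping only
    one stored copy per unique page content."""
    physical = {}  # content hash -> the single stored page
    mapping = {}   # (vm, page_number) -> content hash
    for key, content in guest_pages.items():
        digest = hashlib.sha256(content).hexdigest()
        physical.setdefault(digest, content)  # store one copy only
        mapping[key] = digest
    return mapping, physical
```

Two idle guests full of identical zero pages would collapse to a handful of stored pages under a scheme like this, which is one reason homogeneous workloads consolidate so well when page sharing is available.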

In an interview with Steven Bink, even Bob Muglia acknowledged the need for overcommit:

We talked about Vmware ESX and its features like shared memory between VMs, “we definitely need to put that in our product” later [Muglia] said it will be in the next release. Like hot add memory, disk and nic’s will be and Live migration of course, which didn’t make it in this release.

By the way, hot-add memory didn’t make it into Hyper-V R2, either — VMware ESX 4 has it today.

Quick Memory Overcommit?

Evidently, dynamic memory was not even up to the rigorous “Quick” standard and was dropped from the release train. Perhaps some future edition of Hyper-V will offer Quick Memory Overcommit — with just a few seconds of VM downtime as RAM allocation is dynamically adjusted. But that’s just speculation.

Sour Grapes

Instead of finding a way to implement memory overcommit in Hyper-V R2, Microsoft has taken the alternate approach of attacking VMware and declaring the feature unnecessary, unsafe, or too expensive.

The fact is that memory overcommit is an extremely valuable capability, and VMware ESX has had it all along. Some of the Linux-based hypervisors are starting to figure it out. Until Hyper-V finally adds the feature, we’ll continue to hear how easy it is to simply buy more RAM.

How long can you afford to wait?

This entry was posted in Virtualizationism.

36 Responses to The Truth About Hyper-V Memory Overcommit

  1. Pingback: uberVU - social comments

  2. Fernando says:

    One thing that normally passes unnoticed: the higher VM density is not only because of TPS + ballooning. The overall superior design, scheduler performance, and direct driver model all together make VM density on ESX much higher than any competitor can dream of.

    Another clear distinction is that what we call Memory Overcommit is in reality the result of two technical features: TPS and ballooning (and swap if you will, but let’s not consider it).

    By the way, nice article. MS bashes this technology so much on their blogs, and with so much emphasis, that it will eventually be a ridiculous situation when they finally implement it in Hyper-V.

  3. Pingback: Tweets that mention Microsoft failed to implement memory overcommit in Hyper-V R2 | VCritical --

  4. Phil says:

    I agree with the sentiment of the article: Microsoft will eventually release this because it’s what the market wants, and they will then push it for all it’s worth. However, the fact still remains that it *is* actually cheaper to buy RAM than to buy ESX *unless* all your hosts are already fully populated.

    On the other hand, I have no idea what the feature of “superior design” is that Fernando refers to, which apparently in itself allows for higher VM density. Could we have some real-world facts to back that up, or is it just the usual FUD?

  5. Fernando says:


    Hyper-V uses generic Windows drivers, not optimized for virtualization. All I/O needs to go through the parent partition, creating a bottleneck in high-consolidation scenarios.
    ESX uses a direct driver model, which does not depend on a parent partition. More details here:

    Additionally, the CPU scheduler has great enhancements in vSphere:

    vSphere has a paravirtualized SCSI adapter, which delivers more throughput while using fewer CPU cycles:

    I would also point out that you can use resource pools to control resource usage, which also leads to more consolidation. On Hyper-V, there’s no way to guarantee resources to critical VMs when the system is under heavy load. On ESX, you can guarantee SLAs and prevent less important VMs from taking valuable resources from your host.

    This is the result of a platform created from scratch for virtualization, not a general OS “patched” to become a virtualization host. The superior design is obvious in this case.

    Not to mention other features not related to performance/consolidation.

    You should not think that memory overcommitment is the only thing that lets ESX support more VMs than Hyper-V; it is not, by any means.

    Just adding more memory instead of buying ESX licenses is too simplistic. In the end, you will need more servers to provide failover capacity, so all your hosts will need enough memory to support all VMs, plus some additional headroom. More servers means more space, more cables, more Windows instances to manage and patch, and so on; not a big deal in smaller environments, but significant for huge implementations.

    And when you buy ESX licenses, you get not only the capacity to use memory better, but a bunch of features that Hyper-V lacks (not related to performance/consolidation).

  6. Alex says:

    Is the author a technical guy or just another gossip collector?
    The statement about what Dynamic Memory was in the early R2 releases is WRONG. It is NOT overcommitment; it does NOT allow you to give VMs more memory than the host has. It is a dynamic way of increasing/decreasing the memory of a running VM.
    Speculating on a topic you don’t understand is not the right thing to do.

    • Eric Gray says:

      What would be the use case for dynamically increasing/decreasing memory in this way, if it is not to overcommit?

      • Eric, I suppose he means that memory can be reallocated without overcommitting. We already have memory hot-add in Windows 2003 Enterprise and above, but I haven’t heard about memory hot-subtract.

        So “dynamic memory” in this case equals “memory hot-add”, and you’re still limited by physical RAM. Which means that Microsoft is actually one step farther from the “dynamic datacenter” than they claim to be.

      • Oh, just a little insight 🙂
        Microsoft marketing invented their own terms for well-known industry techniques. So “maximum memory” on the screenshot equals “memory configured” on vSphere, and “minimum memory” equals “memory reserved”. They’re going to perform memory overcommitment via ballooning, but refuse to call it “memory overcommitment” 🙂

  7. Iain says:


    I think you may need to read this article again; overcommit has never been a way to give VMs more memory than the host has. It allows access to an amount of memory up to the physical limit of the host, but dynamically allocates it across the VMs.

    Microsoft have stated a couple of times how they’d like to be able to offer this, but the closest they will get is their “Quick” memory overcommit offering, which still incurs downtime for the VMs…

  8. Phil says:

    Fernando, that report regarding SCSI performance is meaningless; all it says is that VMware managed to improve performance over previous versions. In no way does it prove ESX is more scalable than other solutions.

    It’s like me posting a similar example from Microsoft

  9. Fernando says:


    It is not meaningless, but you are right, it is not a comparison with Hyper-V. It would be interesting to see an I/O performance comparison somewhere, but I have never seen one.

    I suppose you acknowledge the other points I gave. Lots of smaller architectural differences, plus TPS + ballooning, make a huge difference in the end for huge implementations.

    Don’t forget you have TPS + ballooning on the free ESXi, and on all SMB versions of vSphere, which makes the add-more-memory-versus-buy-ESX trade-off favorable to the ESX side, or at worst negligible.

    You don’t need an Enterprise Plus license for all situations.

  10. Phil, it is not actually cheaper to buy more memory. It depends on many factors.

    Take a look at diagrams here:

    If you’re going to include hardware costs, licenses for Windows, etc., at some point it can happen that Hyper-V will be more expensive than vSphere Enterprise Plus.

  11. Phil says:

    Anton, it is for me, since we pay .edu prices for everything Microsoft (SCVMM, SCOM, SCDPM); thus it’s a tiny percentage of the retail cost. VMware has nothing comparable price-wise in my sector.

    • Phil, we’re not .edu; we pay full cost. I’m actually a system administrator at an end-user telecom company, so I don’t want to sell you anything 🙂
      It’s sad that VMware has such a pricing policy, and I’m not happy with VMware prices either, but the price of the hypervisor and management layer is overvalued by the efforts of Microsoft marketing. In BIG real projects you’ll have expensive hardware, an expensive SAN, and doubly expensive applications, so the VMware price will be a couple of percent of the total cost. So the best choice for you is the platform you have good specialists around to implement and support, or the more advanced technology. And VMware has the most advanced virtualization technology in x86.
      In your case, maybe the VMware licensing team will do something with academic licenses; at least I hope so.

      My opinion about Hyper-V is simple: it’s a good solution for SMBs with fewer than 10 VMs, but not ready for the enterprise.

      • Phil says:

        We have no issues with our production systems running 100 VMs, so I would say it’s a bit harsh to state Hyper-V is only good for 10 or fewer.

        I believe VMware are facing obliteration in the small and medium sector if they don’t start competing on price.

        • Ken says:

          Is “free” cheap enough for you? ESXi Free comes at the ridiculously low price of $0.00 and includes such features as memory over-commit. I believe that, at that price, the “cost” argument becomes moot.

          • Phil says:

            $0.00 includes:

            Live migration? No
            High availability? No
            Centralised management from one console? No

            The above items are what is important to me, not memory overcommit. I’d be willing to bet that most people feel the same way; these are the three key features people look for in a virtualisation platform.

              • Phil, there is a little difference between “I want it all for $0” and “Is it really worth its price?”

              In the first case VMware is beaten, yes, without any doubt, and not by Hyper-V; XenServer performs better. But is it really $0? Is your hardware also free?

              If you’re implementing a complex solution and complex infrastructure, you should count hardware costs, networking costs, electrical power costs, licenses, and so on. Then you would have numbers with 6 or 7 digits to compare. And even if the VMware solution is more expensive, VMware has more features. Whether you will pay extra for them is for you to decide, but there are no “free” things, unless you see a piece of cheese and you’re a mouse.

              • Phil says:

                “XenServer performs better” – so where is the proof of this? Lots of people here complaining about MS FUD, but happy to create their own FUD against Hyper-V. Somewhat hypocritical, no?

              • Fernando says:

                That’s a point people often ignore. Normally companies will spend many thousands of dollars on servers, storage, services, and so on.
                In that scenario, ESX license prices will not be so significant.

    • Eric Gray says:


      I get the impression that you are aware of VMware academic pricing, is that correct?


      • Phil says:

        Yes, there is a difference of more than £9k between a Hyper-V/SCVMM setup and comparable features in vSphere. I just don’t see a compelling reason to use vSphere.

        Being a “more mature” product is the often-cited reason that vSphere is better; if I found Hyper-V unstable in practice then I might buy into that. It seems to me that most negative posts about stability come from people who have either never used Hyper-V or have only installed it on one underpowered workstation.

        Still, this is getting way off the original topic!

  12. Phil, I’m not saying that Hyper-V cannot run 100 VMs, or 500 VMs. But it’s actually easier to run 100+ VMs on vSphere, and an easier life for the administrator. You’ll have a lot of small things that each make your work a little simpler and a little bit faster. Only a little bit, but there are many such things.
    And yes, vSphere licenses are not really cheap, but would you trade your Mercedes C-Class for a Hyundai Accent?

  13. buy r4i says:

    Both ESX and Hyper-V are based on a hypervisor running on bare metal, but they use two different architectures. The main difference, as Andrew states, is that once ESX is up and running, the RHEL-based service console can actually be unavailable and the VMs on the ESX host will still run, even though they will be unmanageable.

    In Hyper-V, the VMs, or child partitions, are dependent on both the parent partition and the hypervisor. If the parent partition is not available and working, then the guest VMs will not work at all.

    • Phil says:

      Is this comment based on any experience or are you just making assumptions?

      On Hyper-V R1 we frequently had situations where the parent partition was totally unresponsive, which seemed to relate to a problem with the Shadow Copy service during backups. The only effect of the parent partition being unavailable was that we couldn’t use the console to manage the VMs; it DID NOT cause the running VMs to fail or become unresponsive.

      • Fernando says:

        From my understanding, the service console is a special VM, but I think it cannot completely fail without affecting the whole system. But definitely ESX VMs are less dependent on the SC.
        The point is that when the VMkernel loads, it takes over, and all I/O goes directly to the network and storage layers, without the need for the SC.
        For Xen and Hyper-V, all I/O goes through the parent partition.
        ESXi goes further, eliminating the SC completely.

        • Robert Young says:

          Actually, both technologies scale very well, and Fernando, a VMBus is provided for all hypervisor-aware OSes, so no I/O goes through the parent partition. Again, ESX, vSphere, XenServer, and Hyper-V all have good features, but I do have to admit that when looking at various workloads on one server, ESX does handle the varying memory requirements better, while with Hyper-V I have to do more capacity planning.

          Now, will MSFT come out with “Dynamic” or “Overcommit” memory? Right now it does not seem to be at the top of their list; as I stated earlier, they are going with the model of capacity planning and placement first, which would be to the benefit of hardware manufacturers.

          In my environment I run over 115 server VMs on a Hyper-V cluster with various workloads, while I run 128 on my ESX servers. I manage them both with SCVMM R2 very well, but again I have to say that getting ESX up and running required a little less time than the tuning I had to do on Hyper-V with NICs, CSVs, and I/O redirect capabilities…

  14. Robert Young says:

    Oh, my Hyper-V cluster is running on Hyper-V Server, NOT Windows 2008 Enterprise; it does not have a console at all and had to be configured with either CLI commands or a remote management console…

  15. Pingback: Post-R2 release of Hyper-V may include memory overcommit - Dynamic Memory | VCritical

  16. Pingback: Hypervisor Wars : memory overcommit -

  17. Pingback: Microsoft and VMware memory battle is the same song and dance - The Windows Server Notebook
