Hands off that CSV!

Cluster Shared Volumes (CSV) is the new Microsoft Hyper-V R2 feature that allows virtual machines to be saved as plain text files and then imported into a spreadsheet or database for editing.  Wait… that’s a different kind of CSV.

Actually, CSV is a layer on top of NTFS shared storage that provides some of the functionality of a cluster filesystem — multiple hosts can access a single LUN simultaneously.  No more one VM per LUN jokes, please.

Unlike VMware VMFS, CSV relies on a single coordinator node for all metadata updates to the LUN — interesting.
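That split, with data I/O going straight to disk from every node while metadata changes funnel through one coordinator, can be sketched as a toy model (purely illustrative Python; the class and method names are invented here, not any real CSV or NTFS API):

```python
# Toy model of CSV's design: every node writes data blocks directly to the
# shared LUN, but metadata updates are routed through a single coordinator
# node. Purely conceptual -- not a real CSV/NTFS interface.

class SharedLun:
    def __init__(self):
        self.blocks = {}      # data blocks, written directly by any node
        self.metadata = {}    # file metadata, updated only via the coordinator

class Node:
    def __init__(self, name, lun, coordinator=None):
        self.name = name
        self.lun = lun
        # Non-coordinator nodes forward metadata operations to the coordinator.
        self.coordinator = coordinator or self

    def write_data(self, block_id, payload):
        # Data I/O goes straight to the LUN from every node.
        self.lun.blocks[block_id] = payload

    def update_metadata(self, path, info):
        # Metadata updates are proxied through the single coordinator node.
        if self.coordinator is not self:
            self.coordinator.update_metadata(path, info)
        else:
            self.lun.metadata[path] = {"info": info, "by": self.name}

lun = SharedLun()
node1 = Node("node1", lun)                   # coordinator for this CSV
node2 = Node("node2", lun, coordinator=node1)

node2.write_data(42, b"vhd-extent")          # direct to disk
node2.update_metadata("vm1.vhd", "extend")   # proxied via node1

print(lun.metadata["vm1.vhd"]["by"])         # -> node1
```

If the coordinator node fails, the cluster simply designates another node as coordinator for that LUN; in this sketch, that would mean re-pointing each node's `coordinator` reference.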

The Secret Sauce

It turns out that CSV is a delicate feature and files on such a volume should never be managed directly.  In an effort to protect Hyper-V administrators from themselves, Microsoft has taken an interesting non-technical approach to preventing CSV misuse.

After installing Windows Server 2008 R2, configuring storage LUNs, enabling Failover Clustering, and adding cluster nodes, CSV must be enabled on the cluster.  In the process of doing so, a dialog box is presented that looks much like an end-user license agreement:

Enable Cluster Shared Volumes

The text inside the box states the following:

The Cluster Shared Volumes feature is only supported for use with Windows Server 2008 R2 Hyper-V role.  Creation, reproduction and storage of files on Cluster Shared Volumes that were not created by the Hyper-V role, including any user or application data stored under the ClusterStorage directory of the system drive on every node, are not supported and may result in unpredictable behavior, including data corruption or data loss on these shared volumes.

In order to proceed, administrators must check a box indicating that they have read the notice.  Note: if you are the one who checked the box in your organization, please be sure to pass the warning on to your coworkers.

Backups?  Someday.

What about backup (“reproduction”) of virtual machines stored on the CSV LUNs?  You cannot simply take backups of VM files like you would on a normal volume. In fact, if you attempt to back up any files on a Cluster Shared Volume by using the native Windows Server Backup tool, the following error is thrown:

Windows Server Backup - not for CSV

That leaves the Hyper-V administrator with just one option at this point: in-guest backups.  Eventually, System Center Data Protection Manager will support backing up Hyper-V VMs, but not right now.

Check out this hilarious ZDNet article bashing VMFS — they didn’t happen to mention any of these CSV advantages.

Is this an enterprise-class solution that is ready for your production workloads today?


106 Responses to Hands off that CSV!

  1. Pingback: Tweets that mention Hyper-V R2 Cluster Shared Volumes must be handled with care | VCritical -- Topsy.com

  2. Shawn says:

    Thanks Eric for reminding me that backup does not currently work with CSV volumes. That’s a key point customers need to know.

    I also had a great chuckle at the ZDNET blog you linked to.

  3. Shawn says:


    Do you have any of the technical details on how CSV works?
    I don’t know the full details yet, but I have seen mention that sometimes IO to a volume must be ‘proxied’ over the network to another node in the cluster.

  4. Eric Gray says:

    Shawn, glad you could get a laugh out of it. I’m not sure what’s funnier: that ZDNet article or the fact that Hyper-V fans are linking to it.

    As for how that proxying works… if there is a storage link failure, nodes can still access the volume via the coordinator node — with performance degradation. Same thing happens if a non-coordinator node tries to access the volume, which is why current backup tools cannot be used.

    This is another one of those “good enough” version 1 Microsoft technologies that will eventually become a whipping boy and the reason you need to upgrade when version 2 is released. E.g., Quick Migration and one-VM-per-LUN was good enough earlier this year — now you see MSFT using terms like “crap” and “headache” to describe them. 🙂

    Thanks for reading VCritical.


  5. Michael Hong says:

    Nice post Eric! In addition to backups, anti-virus, file replication, and file encryption apps also do not support CSV. Since anti-virus apps don’t yet support CSV, customers will need to take the risk and filter out the C:\ClusterStorage CSV directory. I wonder what would happen if a virus broke out on the host and wrote random files to a CSV volume. I guess you’re out of luck if anything happens to your data because MS already warned you in that disclaimer!

  6. NiTRo says:

    That’s really good to know!
    Thanks, Eric, for pointing this out.

  7. Stu Fox says:

    Interesting Eric, but I think you’re reading too much into that dialog box. Effectively it’s a warning that says “don’t try to use CSV for things other than Hyper-V (e.g. SQL) cause it’s not tested or supported”. That may well change in the future as it’s a very useful feature for other applications, but for now it’s Hyper-V only.
    Yes, we rely on a single coordinator node for metadata as you point out, but this is a very, very small component of the disk access; the vast majority is written directly from the nodes to the disk (e.g. the biggest write would be to VHD files, and this is direct). We can sustain a failure of the coordinator node; in this situation IO queues briefly while the LUN ownership is changed to another node.
    We won’t be the only ones supporting CSV backups – there are plenty of vendors who will be supporting this very soon (embarrassingly, probably quicker than DPM). The API is there on MSDN, so it’s available for anyone who wants to write a backup application for it, and we don’t have to go through a proxy to do our backup. From what I’ve seen out there, most backups of VMs are still using in-guest backup anyway, especially in the VMware world where VCB (at least the first version) was pretty clunky (I haven’t seen what the changes are in vSphere).

    To reply to Shawn, the only situation where IO must go across the network is if you lose direct access from the node to the storage (e.g. someone pulled out the fiber cable, or your iSCSI network cable/card failed). I just demoed this at TechEd NZ this week: I removed access to the iSCSI storage for a host with a running VM, and the VM continued to run and just redirected IO to another node in the cluster. Then you can live migrate the VMs off that failed node while you repair the fault.

    Question for Eric – what happens if an ESX host loses access to the storage underneath? Does that cause a reboot on another node, or does vSphere handle that somehow?


    Disclaimer: I work for Microsoft NZ, but these are my own opinions.

    • Eric Gray says:

      Hi Stu, great to have your input.

      The short answer is: don’t lose connectivity to your storage in the first place by using redundant connections and multipathing.

      The scenario you describe probably works better for a single VM than a dozen. Now you’ve given me an idea…

  8. Stu Fox says:


    It’s best practice to exclude the VHD & configuration file directories from your antivirus anyway, so in practice your antivirus would not see the CSV. If you’re using defense in depth there should be very few scenarios where a virus could get onto your system (the most likely scenario would be a worm rather than a virus, but you’ve got the firewall turned on, right?)



  9. Stu Fox says:


    Yeah, that’s good advice whether you’re virtualised or not, and one of our best practices as well. However, the additional flexibility of redirected IO just gives us extra failure tolerance.


  10. Matt says:

    Hi guys,

    Stu is spot on about the workings of CSV – it’s a highly resilient and scalable platform, which allows LUN sizes of up to 256TB, which, I think you’ll agree, is a great deal larger than what most platforms allow. To my knowledge, the maximum size of a VMFS datastore extent is 2TB, and the recommendation within the ESX Server Configuration Guide, page 77, is that you have 1 VMFS datastore per LUN. You can obviously add new extents up to a maximum of 64TB, but that’s still quite different from 256TB.

    With regards to your comment “Unlike VMware VMFS, CSV relies on a single coordinator node for all metadata updates to the LUN — interesting” – just to clarify, there isn’t one overall ‘head coordinator’, which all other nodes redirect their metadata through – each CSV has its own coordinator (don’t confuse that with each CSV needs its own dedicated node as a coordinator and thus # of CSVs has to equal # of nodes, as that is incorrect) i.e. if you have a 3 node cluster, and you create 3 CSVs, ‘ownership/coordination’ of those CSVs will be distributed across the 3 nodes, rather than 1 node holding all the cards. If you create 6 CSVs for those 3 nodes, each node will have 2 ‘ownerships’.
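Picture that ownership distribution as simple round-robin assignment (a hypothetical sketch; the actual CSV coordinator placement logic is not documented here and may differ):

```python
from collections import Counter

# Hypothetical sketch: spreading CSV coordinator "ownership" across
# cluster nodes round-robin, as in the 3-node / 6-CSV example above.
nodes = ["node1", "node2", "node3"]
csvs = ["CSV%d" % i for i in range(1, 7)]   # 6 CSVs on a 3-node cluster

# Each CSV gets its own coordinator; no single node holds all the cards.
ownership = {csv: nodes[i % len(nodes)] for i, csv in enumerate(csvs)}
per_node = Counter(ownership.values())

print(per_node)   # every node ends up coordinating 2 CSVs
```

With 3 CSVs on the same 3 nodes, the same assignment gives one coordinator role per node.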

    If you want more info on CSV, I would read here: http://windowsitpro.com/article/articleid/100867/q-how-do-cluster-shared-volumes-work-in-windows-server-2008-r2.html and http://download.microsoft.com/download/F/2/1/F2146213-4AC0-4C50-B69A-12428FF0B077/WS08%20R2%20Failover%20Clustering%20White%20Paper.doc

    Sure, we have a pop-up message that encourages no tinkering directly with the CSV stuff, but hey, if someone wanted to tinker and break some VM-related files, on either platform, provided they were given security access, I’m sure they could do it…

    Eric – you’ve rightly pointed out that you wouldn’t want to lose connection to your storage, but one of the key strengths of the MS platform is that it includes our load-balanced MPIO out of the box, for iSCSI and FC, for free, and allows the storage vendor to hook in with their own MPIO extensions, DSMs, etc., out of the box too, to enable more advanced load balancing. I’m pretty sure the VMware equivalent is only now available in Enterprise Plus; however, in the remainder of the SKUs, you still have the standard load-balancing options.

    With regards to backup – Stu is correct, backup vendors will have releases very soon. Remember how our software release cycle works – RTM is usually a few months apart from General Availability, which, for W7 and R2, is October 22nd, so this is when the launches happen, the ecosystem appears and begins to grow, OEMs ship hardware with pre-installed bits, etc., so we have a bit of time yet. A number of major vendors will have solutions available around this time. I had hands-on time with SnapManager for Hyper-V from NetApp a few days back, and this is designed to work with Hyper-V & CSV, and SnapDrive for R2 brings some cool Windows-specific functionality around on-the-fly LUN resizing (shrink and extend). DPM won’t make that date, but for in-guest and non-CSV backups, DPM 2007 will work just fine for R2.

    To comment on your “The scenario you describe probably works better for a single VM than a dozen” – inevitably it would, and by that I guess you’re referring to the Microsoft solution being able to Live Migrate 1 VM at a time between 2 hosts? If that is the case, I’d urge you to test whether it’s quicker to Live Migrate one VM after another or 2 at once. Remember though, the VMs would still be up and running; even though we’ve lost the storage connection to that box, we can get the VMs off it without them going down. This, I believe, could be automated by using PRO. SCOM would pick up that a host had lost connectivity to the storage, and it would talk to VMM; VMM would put the host in maintenance mode to avoid VMs being placed there, and evacuate the VMs to other hosts in the cluster. Sure, in the meantime, going from having a 4Gb Fibre Channel pipe to send I/O down to a number of gigabit (or 10GigE, I guess!) network links using SMB2 is bound to be less performant, but that’s the whole reason to monitor for this scenario and automate the failover should it occur. If you have a gigabit iSCSI backend, however, chances are the performance difference of going into redirection wouldn’t actually be that large.

    Final bit from me 🙂 – based on this “This is another one of those “good enough” version 1 Microsoft technologies that will eventually become a whipping boy and the reason you need to upgrade when version 2 is released. E.g., Quick Migration and one-VM-per-LUN was good enough earlier this year — now you see MSFT using terms like “crap” and “headache” to describe them”, how would you, pushing aside the marketing stuff, describe VMware Fault Tolerance, taking into account things like this: http://communities.vmware.com/blogs/vmroyale/2009/05/18/vmware-fault-tolerance-requirements-and-limitations? I’m not trying to ‘have a go’ at that technology, I’m just curious about what you/your readers think about it?

    Have a great weekend,

    • Eric Gray says:

      Matt, congratulations, you are definitely in the running for the Longest Comment Award.

      I see a lot of “coming soon” in these rebuttals. That is a problem because the Microsoft Virtualization message is that Hyper-V is available now. Everyone agrees that there is a lot more to virtualization than a hypervisor. Therefore, any solution based on Hyper-V R2 is simply incomplete. That was not the intended message of this blog post, but we ended up there — thank you very much.

      Which PRO pack can respond to “all paths down” and initiate automatic live migrations? As far as I can tell, PRO only has CPU and memory monitors. Or, is that a Failover Clustering feature?

      Perfect timing to hear from the community about VMware FT — plenty of blogs will be written in this two-week period on the topic.

      Special thanks to Michael, NiTRo, Fernando, and canalha for contributing.


  11. Fernando Moraes says:

    Another good point is that CSV cannot be used with replicated storage, which many big companies are using for DR strategy:


    One of the best things about virtualization is the ability to encapsulate VMs, making DR really easy.

    Sorry, not with Hyper-V.

    Enterprise ready? Hardly…

    Someone mentioned concurrent VMotion performance: as a VMware consultant in the field, I can say vSphere boosts VMotion performance dramatically, and moving 2, 3, 4 VMs at the same time is a piece of cake for vSphere – way faster than moving one VM at a time. As the link to Massimo’s blog mentions, with a single simultaneous Live Migration at a time, you would never have anything like DRS.

  12. Stu Fox says:


    I think you’re mistaken in your assumption. CSV can be used with replicated storage, just that the storage vendors have to allow for it. Currently they most likely don’t, but this will change – count on it…

    And who says DRS style functionality has to migrate multiple machines at a time? Sure it might be a little slower to flatten the load, but if you’re going to call that out as a problem, you should probably call out the fact that DRS doesn’t have any knowledge of what the application in the VM is doing, so it could just as easily transfer the problem to another host without actually resolving it.



  13. canalha says:

    I’d like to add that CSV is a version 1 technology. From a user standpoint: how does it scale? No one knows, or at least I don’t have any peers who know about it, as no one has used it yet – for obvious reasons. How do I troubleshoot it once my apps have performance issues? Is anyone experienced with it yet? No, it’s version 1.0.

    My point is that a key piece like CSV for Hyper-V impacts everything a hypervisor does to its VMs – so I’d never give up my very reliable VMs sitting today on VMware’s VMFS for it.

    We have to admit that VMware has a strong advantage here – they designed their clusters and their file system in harmony. It’s not an old design meant for something else being “patched” to support one more need, as with NTFS.

  14. Stu Fox says:


    I’ve seen really good scale and perf numbers internally; hopefully MSIT will publish that as a case study. In terms of troubleshooting, etc., it’s just NTFS underneath a filter driver, so it’s not actually very complicated. I also very much doubt that you’ll see perf problems from CSV (the nodes are writing directly to disk); I’d guess you’d see IO problems long before you see CSV problems. I’m also guessing there are a lot more people who know about NTFS than there are who know about VMFS.

    In regards to your last comment about NTFS being “patched”: as I understand it, there have been no changes to NTFS to support CSV. That’s the beauty of it – it’s just NTFS, which is a very well understood (and very mature) file system.



  15. Fernando says:


    It is “patched” indeed. Take, for example, remote replication support. A clear characteristic of a “patched” solution, in opposition to VMFS, which was built from scratch for a very specific purpose. So now vendors need to “patch” their replication software to work with CSV??? I wonder why this was never an issue with VMFS from day 0.

    In summary, today, you *cannot* use Hyper-V for DR, period. You “just” need to wait for storage vendors to fix their software.

    About DRS: having just 1 Live Migration at a time might not be enough to adjust to rapidly changing conditions, or to quickly evacuate a host if a hardware failure is imminent. For huge implementations, with hundreds of VMs and dozens of hosts, that might be an issue. Again, enterprise ready is the point here. For smaller workloads that would be “just enough”. Well, considering MS implements this someday, of course :-D. Today, if a VM is taking lots of resources on a Hyper-V host, the poor admin has nothing to do but move VMs manually (maybe with the money saved on licenses one can hire a team to do it 24/7? 😀 ).

    Your statement on DRS is pointless. DRS does not look inside the VMs, and that is by design. It is about balancing workloads, no matter what is running inside the VM.

  16. NiTRo says:


    NTFS is a good and well-known filesystem, but I recently got problems with big files on a fragmented partition. The problem is it’s a filesystem build for and dedicated to virtualization. Therefore, it has to do great 🙂

  17. NiTRo says:

    I mean “The problem is it’s a filesystem NOT build for and NOT dedicated to virtualization.”


  18. Stu Fox says:

    Interesting comment, Eric. Does that mean that you believe that any virtualisation solution must include every feature that is possible to use, whether or not a customer is going to use it? If that is the case, I have many VMware customers I’d like you to meet who are using precisely two additional features over and above the standard ESX components:
    1. VMotion
    2. DRS
    The fact is that regardless of whether or not there are things missing (or things vendors are working to have available at the official release date of Windows Server 2008 R2), this only matters if you are actually using those features. And that’s important.



    • Eric Gray says:


      You may have missed the memo — as a courtesy reminder, the official Microsoft position is as follows: 1) Nobody uses DRS, and 2) Hyper-V R2 is available now.

      It is true that while many Hyper-V customers have no need for exotic features such as the capability to back up virtual machine files, certain advanced users have discovered that these backups are helpful when things go awry. (In defense of Hyper-V, I should point out that its VM backups are not all that useful since there is no way to quickly bring up restored VMs like you can in ESX.)


  19. Stu Fox says:

    Fernando, my statement about DRS is far from pointless. If I have a machine that is pegged at 100% CPU and DRS decides to move it, unless DRS understands whether or not moving it will actually solve the problem that is causing 100% CPU then the move is ultimately fruitless. That’s the point.

    We also have DRS-style functionality in PRO with VMM & OpsMgr, but with the added ability for our partners to extend this to deal with additional vendor-specific scenarios (see http://www-947.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=SYST-MANAGE&brandind=5000016 for an example), or indeed for you to extend the system yourself.

    Funny that people are talking about us “patching” our filesystem to deal with an evolution in technology yet are somehow failing to recognise that by that same criteria VMware are “patching” their solution to provide things like VCB. Technology evolves, and it is important for vendors to be able to adapt their technology and in some cases this takes partners a while to catch up.



  20. canalha says:

    Well, let’s get real then. You are a Microsoft employee, defending your product because that’s what pays your salary. I’m ok with that – I do the same every day. But let’s not get religious about that.

    Eric’s post was about Microsoft CSV, which quickly became a comparison to VMware’s VMFS. The simple fact here is that CSV is still unproven, has limitations on what backup and antivirus software it can be used with, and has limitations on DR usage. That’s enough for me.

  21. NiTRo says:

    I agree with canalha; young doesn’t mean bad. Just young. MS shouldn’t try to run before learning to walk.

  22. Fernando says:


    DRS is designed to balance workloads across hosts; it does not intrude inside the VM. If a VM is at 100% CPU usage, there’s nothing DRS can do. Neither can Hyper-V!!!! The IBM link is for hardware monitoring, something available on ESX for a long time already. I wonder if you are here just to confuse people.

    Also, your comparison with VCB is completely wrong. VCB simply takes advantage of built-in capabilities (snapshots) to allow backup tools to access data. This is not “patching”; the comparison is plain wrong.

  23. Rick Vanover says:

    Curiously, does anyone know if CSV is available with any of the free Microsoft virtualization offerings? I thought that its reliance on MSCS required some upward purchase.

    vStorage VMFS is functional on the free versions of ESXi.

  24. Stu Fox says:

    Rick – CSV is available with Hyper-V Server 2008 R2, the free version that we ship, as is clustering and live migration.



  25. Rick Vanover says:

    Thank you, Stu, that is an important clarification. It makes sense that the installation that has only the Hyper-V role and a core-like interface would run CSV for those selected services (clustered resources).

  26. Stu Fox says:

    Fernando – that’s my point. Balancing workloads is fine and useful, but is only part of the story. If you can’t actually resolve why the VM needs to be moved (and in some cases moving is enough) then all you’re doing is moving a problem around. DRS is a good first step, but only solves a subset of problems.
    The IBM stuff was just a demonstration of how our partners can extend the PRO functionality to do their own stuff (so in the IBM case we get hardware remediation alerts through so the admins can take action – or even extend the management pack to automate the action). That was my point – PRO is extensible by partners or anyone who wants to. I’m not here to confuse at all, I’m here to clarify.



  27. Fernando says:

    PRO can be extended, ok, but the problem you mention, Hyper-V does not solve today. Also, you cannot use DR today. Lots of things you will need to wait for vendors to fix. That’s my point again: Enterprise ready ???? Not today 😀 Good enough for less-demanding customers ? Probably.

  28. Vladan says:

    That’s why it reinforces my thinking that Hyper-V is STILL not ready for production environments….

  29. Slick says:

    Even when Hyper-V is “ready” for production environments, it is still a free product. Do you really want your mission-critical stuff on a product that they have to give away in order for people to use it??

    256TB LUNs???? Can you imagine the disk latency on something that size?? And who the heck would design a virtual infrastructure with LUNs that size? Come on… hardly something you should be bragging about.

    Please stick to enhancing your flagship products like Exchange, SQL and SharePoint, which are great.

  30. Stu Fox says:

    Slick – like ESXi? You think VMware would be giving that away if Microsoft & Citrix hadn’t entered the market?

  31. Fernando says:


    “You think VMware would be giving that away if Microsoft & Citrix hadn’t entered the market?”

    So what? ESXi is an entry-point product, not meant to be used for huge implementations.
    You are seriously running out of arguments. Better to stop defending the indefensible 😀

  32. Slick says:

    Stu, I will agree with you that they probably wouldn’t be giving it away now if MS and Citrix didn’t give theirs away.

    I don’t think that VMware wanted to do this, but they needed to “compete” in the free hypervisor market….now that there is one.

  33. Eric Gray says:

    Don’t forget — when licensed, ESXi has all the functionality of classic ESX, just without the service console.

  34. Erik says:

    @Matt: With regard to the FT requirements: OK, FT has some limitations at the moment, but we all know VMware is working hard on eliminating them. They’re not trying to sell the MS-like bullshit that you don’t really need multiple vCPUs, RDMs, DirectPath I/O, etc.

    @Stu: With regard to “Does that mean that you believe that any virtualisation solution must include every feature that is possible to use, whether or not a customer is going to use it?” No, that’s not the point but if you don’t support core enterprise features and technologies it’s simply not enterprise ready. Deal with it.

    Stu: “I have many VMware customers I’d like you to meet who are using precisely two additional features over and above the standard ESX components: 1. Vmotion, 2. DRS” Yes, nice isn’t it. Two basic features which MS doesn’t even support (yet).

    I agree, don’t bite the hand that feeds you, and I applaud your attempts to defend Hyper-V, but the fact is that Hyper-V, even the R2 version, still has many shortcomings, caveats, flaws, whatever you want to call them. If you MS guys spent as much time on Hyper-V development as you do on making up your marketing bullshit, myth-busting myth-buster videos and blog comments 😉 you would probably have these basic features.

  35. Pingback: VMFS Isn’t the Problem Here - blog.scottlowe.org - The weblog of an IT pro specializing in virtualization, storage, and servers

  36. Aubrey Williams says:

    Only a Microsoft employee can blog with a straight face and claim that Hyper-V (from any perspective) is superior to VMware ESX. Yes, CSV is an interesting technology, evidence that Microsoft is making some serious efforts in moving Hyper-V into the Enterprise Virtualization arena.

    But Microsoft Hyper-V is, currently… Hype! The product is young, unproven and, frankly, has inferior enterprise features and performance when compared to Citrix and VMware.

    Hyper-V is free for the opposite reason that SharePoint and Exchange are not. If Hyper-V cost $10, Microsoft would not garner a single download of the product.

    And finally, as an enterprise VMware architect, I don’t care about ridiculous theoretical maximums. I want to see performance numbers on CSV and Hyper-V, running various enterprise workloads. Until Microsoft and its legion of propagandists start producing benchmarks, Hyper-V will remain in the same category as other free shareware.

  37. Rick Vanover says:

    @Slick: You say: I don’t think that VMware wanted to do this, but they needed to “compete” in the free hypervisor market….now that there is one.

    I agree with you, and while I’ll defend VMFS to the hill, VMware’s free offering is inferior to that of Citrix and Microsoft. But that’s not VMware’s objective – they want the big installations with the big bucks.

  38. Roger Pan says:

    It is interesting to see all the comments about DR and CSV. Can one of the “VMware” architects show one DR solution that actually works at the VMFS level? There is NONE, because VMware doesn’t let anyone put one into the hypervisor. You have to resort to storage-based replication to provide any sane DR for VMware, and for that you need to make sure that you stick to the exact same models of storage arrays, as most storage replication works only within the same kind of arrays. And oh, the only exception is the mothership, EMC. Nice way to take storage sales from non-EMC shops.

    None of the changes that come with CSV necessitate any design changes from a storage replication perspective. The storage arrays do not care if the I/O is coming from one host or a hundred. Putting the onus on MSFT for storage vendors’ qualification times is disingenuous. VMware has had 5 years and they still can’t get their act together on this. Talk to their partners and assess their level of frustration.

    Contrast that with Hyper-V. We were able to actually add support for a hypervisor-based DR and backup solution, including CSV, before R2 was even released. Not the hack that VCB is. We work with all three hypervisors, and partner with all three vendors, so this is not a biased opinion. All our attempts to do this with VMware over the years have been fruitless. And no, this post is not from an MSFT employee.

    At least with respect to DR and backup (CDP), the folks making tall claims from a VMware perspective need to reexamine their “facts”.

  39. Fernando says:


    Are you really serious?

    Can you show us some “hypervisor-based” DR solution for Hyper-V?

    MS itself says CSV does not support replication. You need to review your own stuff before adding wrong statements here.

    ESX is the only hypervisor to have an orchestrated DR solution (Site Recovery Manager), and it works like a charm, by the way.

    You are miserably missing the facts here.

  40. Stu Fox says:

    (I wasn’t going to continue to respond here, as some of the claims were getting a bit silly – maybe the guy who said NTFS has a problem with large files should talk to the guy who said Exchange & SQL were excellent for instance. And maybe the guy who claimed I was arguing that Hyper-V was superior could read my comments again.)


    As I understand it, CSV supports replication; replication vendors just have to allow for it (which is quite different). And to be fair, Hyper-V V1 had an automated DR solution (with the ability to fail back to the production site as well) using geo-clustering (which leverages vendor storage replication to replicate the VHDs etc). You may not have known that, but geo-clustering support is out of the box in Windows 2008, provided you have some way of replicating storage – just like SRM.



  41. Fernando says:

    Stu, you are right on the geo-cluster stuff; I didn’t know it applied to Hyper-V.
    It is not technically an orchestrated DR solution, but it might do the job. Anyway, CSV seems to break this functionality, right?

    I didn’t think or say for a moment that NTFS is bad. The opposite, actually. What I said is that CSV is kind of a “patch”, a “workaround” on top of NTFS.

    And I agree MS has killer products, such as Exchange, Office 2007 and so on 😀

  42. Roger Pan says:


    We are talking about 3rd-party vendors being able to replicate CSVs. Windows is open enough to allow us to do it. There weren’t any statements about CSV itself replicating to other CSV instances. That’s like saying VMFS replicates to remote VMFS file systems. (Which, by the way, would be great.)

    You are blithely ignoring the fact that SRM needs storage replication to be in place for it to be of any use. It’s great to fail over if you are lucky enough to fall under the compatibility matrix on your storage infrastructure side.

  43. Roger Pan says:

    Also, does anyone actually know how VMFS orchestrates the metadata updates for the filesystem? I can’t seem to find any information on this publicly.

    XenServer neatly sidesteps the problem by mapping virtual disks directly to logical volumes, and the orchestration between the pools is a simple “you own it now” at the end of XenMotion.

  44. Roger Pan says:

    Here is an interesting link. As with any performance numbers, applicability to real-world issues might not be there.


  45. NiTRo says:

    Stu, I’m not the guy who said NTFS has problems with big files; Microsoft does! http://support.microsoft.com/kb/967351
    And like I said, with a fragmented partition.

  46. Fernando says:


    Yeah, SRM needs replicated storage. Does Hyper-V have any similar solution without replicated storage? Noooo… Hyper-V does not have *any* DR solution.

    Where is the “hypervisor-based” DR solution example for Hyper-V?

    By the way, great, CSV supports 256 TB volumes and 4 billion files per volume. Everyone needs that, huh? Can you point to some REAL advantages, and not these *meaningless* numbers?

  47. Rick Vanover says:

    @Roger, @Fernando: In summary of what was said above – yes, VMware’s storage solution doesn’t address DR natively. But there are a lot of options – including SRM for a managed failover, volume replication at the storage-system level, and even out-of-band options like PlateSpin Forge.

    The menu is thick at the VMware restaurant. But, don’t get me wrong, I get hungry for In-N-Out also.

  48. Rick Vanover says:

    @NiTRo: Yeah, that comparison of VMFS and CSV is crap. Does no one respect the concept of a purpose-built file system?

    Does no one care about the built-in efficiencies to accommodate both big and small files via the primary and sub-blocks of the file system (1/2/4/8 MB primary blocks supplemented with 16 KB sub-blocks)?

    I’ll take it to the hill: VMFS is the most underrated technology VMware ever made.

  49. Fernando says:

    “The menu is thick at the VMware restaurant”

    Really ? What about Hyper-V restaurant ?

  50. Rick Vanover says:

    @Fernando: I’m making a thinly veiled reference there to price and offerings.

  51. Fernando says:

    Rick, sorry, I misunderstood you.

    I am waiting for an example of Hyper-V DR (just one is enough) 😀

  52. Roger Pan says:

    Just wait a month or two; you will see them show up from at least 2 vendors. A few months vs. years of wanting the same with VMFS. Keep in mind R2 has been out for a very short time. There are at least 2 already available for XenServer. The power of being open… (funny to see myself defending MSFT for being open, ha ha). I guess everything is relative.

    That other link was an FYI, with no opinions attached. You seem to have taken a real cudgel to me for sharing an informational link.

    On a different note, the MS slide that people seem to refer to, shown in this link, needs clarification. http://it20.info/blogs/main/archive/2009/03/19/196.aspx

    The specific capability being referred to is “stretched-cluster” support, where the replicated storage is used as part of a single cluster with VMs running on both sites, separated by a distance. This by NO means suggests one cannot do DR with plain Windows replication solutions. The DR case obviously assumes that the target storage is used for failover.

    For people that use classic stretched Windows clusters, this slide would make perfect sense. For all the rest, it might convey an entirely different limitation than it intends to convey.

  53. Stu Fox says:


    I thought I clarified this before – the DR solutions are built around geo-clustering, and there is plenty of stuff on the menu at Hyper-V restaurant:
    Netapp: http://www-download.netapp.com/edm/TT/docs/Seattle_Retail_Report_040909.pdf
    HP: http://h20195.www2.hp.com/V2/GetPDF.aspx/4AA2-6905ENW.pdf
    Doubletake: http://www.doubletake.com/english/products/double-take-virtualization/Pages/Double-Take-for-Hyper-V.aspx



  54. Roger Pan says:

    Believe me, I’d love to tout our soon-to-be-announced solution, but I want to avoid being killed by our marketing mavens :). We would have LOVED to be able to do the same with VMFS; we tried for years.

    I truly believe MS and its partner program give it a compelling advantage with respect to third-party solutions. VMware still hasn’t figured out how to scale its partner ecosystem and enable partners that are dying to work with them.

  55. Fernando says:

    Ok, 1 more thing on the Hyper-V waiting list 😀

    This has *nothing* to do with being open. The MS solution is NOT open by any means. NTFS and CSV are just as closed as VMFS is.

    For the “stretched-cluster” stuff, the MS slide is not specific, but that might be the case. Anyway, it is not as easy as it is with ESX. You need to “export” your Hyper-V VMs first; it is not just a matter of replicating the storage. Not DR-friendly.

    With ESX, you can do it without SRM: replicate, register the VMs, and power them on. With Hyper-V, Eric shows in this blog that it is not that easy!

    Let’s summarize:

    Today, Hyper-V has no DR option.
    ESX has several.

    End of story. Next time, bring real facts, not religious thoughts.

  56. Stu Fox says:


    You’re mistaken. With a stretch/geo cluster there is no need to export or import. You need the storage replicated underneath, but Windows Failover Clustering takes care of the failover (and failback as well, if you can replicate your storage both ways). In fact it’s more automated than SRM, as the cluster failover takes place automatically. This is very, very DR-friendly, which is kind of the point of failover clustering.

    And how could I forget from my list of solutions our good friends at EMC: http://www.emc.com/collateral/software/white-papers/h6346-disaster-recovery-geographically-dispersed-cross-site-virtual-environment-wp.pdf



  57. Roger Pan says:

    We are obviously on different wavelengths. I’m coming from a vendor perspective of wanting to enable what customers have been asking for: consolidated backup/DR in a storage-agnostic solution.

    WRT ‘open’ or not, I believe you are out of your depth. Can you point to a VMFS equivalent for layering upper filters at the volume or the file level? Windows has had it for years. MSFT partners have used it to create replication solutions up the wazoo; there are about 10-15 of them I personally know. Retargeting them to support CSV is fairly trivial.

    WRT the “no DR option” claim: do a Google search for Hyper-V and DR. The FIRST page lists at least 3 different vendors. 🙂 I won’t name them as they are our competition. It’s not hard to do a search.

    WRT re-signaturing without having to copy: it is possible, and ISVs like ours have used it to do minute-level RTOs on failover since W2K first came out. The only “new” thing is CSV support. I’m not aware of whether this resignaturing is possible within Hyper-V by itself, though. We use the VMware resignaturing for our ESX product, so I’m well aware of it. And it is nice that end users can do it themselves.

  58. Fernando says:


    You are right, but I am talking about manual DR.

    SRM does not perform the failover automatically, by design. The failover cluster for Hyper-V is basically regular MSCS, whose primary objective is not DR.

    And by the way, you cannot use a Hyper-V geo-cluster with Hyper-V R2.

    SRM is built from scratch for DR, and is 10 times more sophisticated for this purpose.

  59. Roger Pan says:

    I meant W2K8, not W2K… 🙂

  60. Stu Fox says:


    There is no reason you cannot use a geo-cluster with Hyper-V R2, provided your storage vendor can replicate the storage (as I’ve said above, there are a large number of them working towards this). The primary objective of failover clustering is availability, of which DR is one component. In the past it has certainly been true that DR was difficult, because geo-clustering was not supported, but with the Windows 2008 release DR became an option. So technology evolves (as I said above, expecting it to do otherwise would be naive).



  61. Fernando says:

    You must be using a different version of Google, since I could not find ANYTHING, even using all the possible terms.

    Please post your results here. Note that Windows DR is different from Hyper-V DR!

    NTFS might have the ability to add filters just because it is a general-purpose filesystem. VMFS does not have it because it does not need it; it is that way by design. You don’t need an antivirus filter, for example.

    I am still waiting for a real example of a DR solution for Hyper-V (any, really)… you are talking too much and not adding any real info here.

  62. Fernando says:


    Again, you are right. I just meant that SRM is focused on DR.

  63. Stu Fox says:


    My comment is in the moderation queue; I suspect Eric is moderating comments that look like spam (there are a number of links in my post, which is likely causing it, as my other posts go straight through), but it has links to DR solutions from NetApp, HP & Doubletake, and my other post has a link to a solution from EMC.

    Because Hyper-V leverages the Windows environment, Windows DR and Hyper-V DR are very tightly tied together.

    NTFS has the ability to add filters because we can’t predict what people might want to do with it, so we provide them the ability to do lots of things through filters. That’s about extensibility; it has nothing to do with the filesystem being general purpose. If you want to plug in a backup application, go for it (this is what DPM does – which lets us do host-level backup without having to proxy the backup). If you want to plug in AV, again, go for it. It’s a pretty powerful concept that allows people to do what they want, not what we want.



  64. Fernando says:


    I saw the links (the comment went to my mail). This Double-Take seems to be an SRM version for Hyper-V, and the others leverage Failover Clustering.

    But as we already discussed, none of them are ready for CSV. This is a problem MS created; you cannot do DR with an MS solution itself! Third parties also would need to adapt to CSV.

    For the NTFS filtering stuff: don’t get me wrong, I think this is great, but as VMFS is focused solely on virtualization, I guess VMware probably does not want to invest time in this. The comparison is not apples to apples.

  65. Roger Pan says:

    Filters are not limited to NTFS; even FS-level filters can tackle multiple file systems.

    Volume-level filters offer an FS-agnostic solution with much lower complexity and no possibility of interactions with virus scanners, etc.

    The CSV model even allows async solutions at the volume level to work; overlapping writes to the same offsets from different nodes aren’t possible. LM (live migration) can make the “ownership” transfer, but in a non-overlapping fashion.

    AFAIK, VMFS operates in similar ways.. except it doesn’t have any “open” form of filtering. It would be great if it did.
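    [Editor’s note] The non-overlapping-write and ownership-transfer behavior described in the comment above can be sketched as a toy model. This is purely illustrative Python, not Windows internals; every class and method name here is invented for the example:

    ```python
    # Toy model (NOT actual CSV code): every node performs data I/O
    # directly, but namespace/metadata changes are funneled through a
    # single coordinator node, and "ownership" moves atomically.

    class CsvVolume:
        def __init__(self, coordinator):
            self.coordinator = coordinator   # node that serializes metadata ops
            self.metadata = {}               # filename -> size (stand-in for NTFS metadata)
            self.blocks = {}                 # (filename, offset) -> bytes

        def write_block(self, node, filename, offset, data):
            # Direct I/O: any node may write already-allocated blocks
            # without involving the coordinator.
            if filename not in self.metadata:
                raise IOError("file not allocated; metadata op required first")
            self.blocks[(filename, offset)] = data

        def create_file(self, node, filename, size):
            # Metadata update: only the coordinator may change the namespace,
            # so two nodes can never race on the same metadata.
            if node != self.coordinator:
                raise PermissionError(f"redirect metadata op to {self.coordinator}")
            self.metadata[filename] = size

        def move_coordinator(self, new_node):
            # Ownership transfer, e.g. when the coordinator node goes away.
            self.coordinator = new_node

    vol = CsvVolume(coordinator="node1")
    vol.create_file("node1", "vm1.vhd", 10)
    vol.write_block("node2", "vm1.vhd", 0, b"data")  # non-coordinator direct I/O is fine
    vol.move_coordinator("node2")
    vol.create_file("node2", "vm2.vhd", 10)          # metadata ops work after transfer
    ```

    The point of the sketch is only that direct data writes and serialized metadata updates can coexist without overlapping writes from different nodes.
    
    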

  66. Fernando says:


    This search does not return any DR solution for Hyper-V. It returns “SteelEye Technology Announces DR for Hyper-V” as the first result, and going further, no real product.

    Stu pointed out one, Double-Take, but it does not work with Hyper-V R2 (CSV).

  67. Fernando says:

    Roger, VMFS does not have it (at least to my knowledge), but probably because VMware does not want it. All vSphere products have fully accessible APIs for doing pretty much everything, so why not VMFS? I cannot speak on behalf of VMware, but I guess it’s because there’s no demand for it.

  68. Roger Pan says:

    I’d beg to differ on the demand. 🙂 We live it; unfortunately for us, we can’t seem to do much about it. BTW, I’ve been a VMware user since their beta days. And I think they are a great engineering team…

  69. Elden says:

    Stumbled across this interesting conversation… lots of FUD, and very few facts. Now, there is a lot of “passion” in some of these posts (some of which are pretty out there), so I don’t think this will sway anyone’s opinion in this room full of critics. But hopefully, as a person who is part of the dev team that created and delivered CSV, I can help answer some of your questions.

    First, let me speak to some of the blog comments:

    “It turns out that CSV is a delicate feature and files on such a volume should never be managed directly.” – That’s not accurate; you can open up Windows Explorer, a cmd prompt, a PowerShell window, or your view of choice. CSV was a solution that was designed with Hyper-V in mind… and was only tested with Hyper-V… so it is only supported with Hyper-V. Nothing more… nothing less… if you don’t like the wording of the UI pop-up, I’m receptive to that feedback. But don’t try to read too much into it.

    Let’s talk about backup. For guest-based backup (which is the model many customers use), everything works… nothing to discuss. For host-based backup we have some new APIs, and vendors have to do some work to make their backup applications compatible with CSV. Computer Associates ARCserve supports CSV **today**, and many vendors are coming very VERY soon as we get closer to launch. The short answer is to go talk to your backup vendor, and I’m sure you will feel much better.

    Now on to replication… yes, replication solutions work with CSV. Good grief… so much FUD, so few facts. Yes, you can configure clustering across sites for DR. Windows Failover Clustering has been doing this for a decade with SQL, Exchange, etc., and it works just as well with Hyper-V, with an army of vendors that tightly integrate. And yes, we can do live migrations across distances too…

    One of the unique things we can do is support a single cluster stretched across sites, what we call a multi-site cluster. This introduces some challenges with a few vendors’ solutions. One of the fundamental assumptions from the past is that only one node in the cluster will ever access a LUN at a time, and that’s what vendors built their multi-site solutions to. With CSV, that assumption has changed… as CSV allows all nodes in the cluster to simultaneously access a volume. As I said, this has caused some interoperability challenges… but we are working hard with our vendors to address this. And there are several vendors in the pipe that will have multi-site CSV support very soon. Unfortunately I’m not at liberty to discuss them, as they haven’t made their formal announcements.

    I hope this helps answer some questions. If you have more questions on CSV, I’ll try to help out. If you are just going to dissect my post with lots of radical statements, I’ll leave you guys be with your inaccuracies and incorrect assumptions.

  70. Fernando says:


    You wrote a bible here just to confirm what we had already pointed out:

    We cannot have a geo-cluster for now, because vendors need to adjust their products to CSV. So, nothing new here. DR with Hyper-V R2 is not possible today, either with MS or third-party products.

    Other vendors do not need to fix it. MS does! CSV breaks MS geo-cluster functionality; it has nothing to do with 3rd parties!

    You try to minimize this by saying all will be fixed “very soon”, without any real facts or real vendor announcements.

    By the way, the latest ArcServe version does not have any mention of Hyper-V R2 or CSV (it mentions Hyper-V only). I think you are bluffing LOL 😀


    You were complaining about a lack of real facts, but you did not provide any!

  71. Stu Fox says:


    Let’s stick to facts then:

    CA ARCserve Backup is enhanced to provide you with the following functionality. Refer to the Readme for full details.
    – Added support for Windows 2008 R2 functionality:
    +Cluster Shared Volumes
    +System volumes without drive letters
    +Volumes mounted from Virtual Hard Disks

    Hope that helps


  72. Fernando says:

    Thanks Stu. CA probably hasn’t updated the main product page yet; I was mistaken on this one.
    But besides that, all the other stuff about DR still stands.

  73. tonyr says:

    Fernando, you’re shortly going to be toast on your “can’t replicate CSV” tirade. We’ve had two different major SAN vendors in house over the last month, and both demonstrated CSV replication!

    Also, a SCOM MP for site recovery was demonstrated! This was created by one of the above SAN vendors!

    So, Fernando, at that point what will be left as a pain point? Let me guess: memory overcommit, right?

  74. Fernando says:


    It is clear CSV can be replicated with SAN vendor technology. The point is, CSV does not work with MS’s *own* geo-cluster solution.

    It is also clear that 3rd parties can create DR solutions, but none are ready now; you guys are just saying “soon”, without any real facts or vendor announcements. Nobody here could point to any real, ready-to-go solution. If a customer needs it, what will you say? You cannot even commit to any dates, dude. Get real, not emotional.

    I don’t want to go even more off topic, but if you read Eric’s other blog posts, you will see many other pain points for Hyper-V (crappy Linux support, no DRS, no decent NIC teaming, no decent snapshot support, just to name a few).

  75. tonyr says:

    So anything that’s created by an MS partner is not a valid solution? How about the many 3rd-party backup solutions: are they not valid solutions for backups since MS didn’t create them?

    Since VMware can only support 3rd-party OSes, does that make them NOT a valid solution for those OSes?

    The funny thing is that I’ve never had anything but VMware in production, so it’s kind of odd discussing this. The deal is that my previous managers didn’t want to pay VMware another dime, so I had to start down the path of Hyper-V or Xen. Neither one of these at the time would cover the problem, but things have changed since then…

  76. Fernando says:

    “So anything that’s created by an MS partner is not a valid solution?”

    Of course they are valid; did I say they aren’t? But they are not ready yet. If your manager asks you for a DR strategy for Hyper-V R2, you will need to ask him to wait until an unspecified date (could be tomorrow, could be next year).

    Well, depending on your needs, Hyper-V or Xen will fit, but for demanding enterprise environments you will have many pain points.

  77. tonyr says:

    “Of course they are valid; did I say they aren’t?”

    “The point is, CSV does not work with MS’s *own* geo-cluster solution.”


  78. Fernando says:

    hahaahh 😀

    No, that’s not what I said, dude! OK, let’s recap: we are arguing that CSV is a “patch”, a “hack” on top of NTFS, to enable it to do something it was not originally supposed to do (as opposed to VMFS, created from scratch for this).

    This comes with some side effects. A good example is breaking the geo-cluster functionality, which (as Stu pointed out) worked with the first Hyper-V release.

    That does not prevent other vendors from doing their own stuff for Hyper-V DR.

  79. tonyr says:

    I’d like to add, once again, that all of the above posts are talking about corner cases.

    On my current contract I have an application that has a true five-9s requirement. Will I run this on Hyper-V R2? No, not yet. Will it meet five 9s? Not likely (but that’s another story). I will run everything else on Hyper-V R2. It’s funny: all the components are in the MS stack (BizTalk, SQL, IIS, etc.) but the core is going to be vSphere!

  80. tonyr says:

    Also, every antivirus package is a hack, right? (OK, yes, they are.) But they are filter drivers, and since NTFS was not designed with them in mind, does that mean NTFS is a kludged-up FS?

  81. Fernando says:

    I believe AV software is something NTFS was always aware of, and it provides the necessary filtering possibilities.

    But being a real clustered file system, accessed by many hosts at the same time, is something it was never meant to do since its creation around 199x.

    CSV fixes that, but the implications/stability/scalability are unknown/unproven. See the main subject of this post.

  82. Elden says:

    Hi Fernando,

    I’m not quite sure what you mean… CSV was designed specifically for Hyper-V, so your bold statements are quite false. Additionally, this has zero relevance to any naive assumptions about the functionality of the solution… I would love for you to articulate how VMFS’s SCSI-reserve locking mechanism, which completely limits its scalability, is better than CSV. Or how VMFS’s file locking for split-brain handling is better than our quorum arbitration?

    Side note: the correct terminology is a multi-site cluster; GeoCluster is a solution from Doubletake.

    Also, I’m a little confused… Failover Clustering is the only solution that allows nodes in the same cluster to be stretched across sites to achieve automatic DR recovery. Yes, we are actively working with replication vendors to sort out architectural considerations as Failover Clustering moves from a shared-nothing to a distributed model. VMware cannot, and has never been able to, do this. So can you help me understand how you are holding up some interoperability work being done on our side (which VMware can NOT do) as making VMware better? I’m quite perplexed by the foundation of your argument.

    Also, I would love for you to explain to me how VMFS enables simultaneous access for all nodes to read-only replicas in DR sites, making their solution superior.

  83. Fernando says:

    “CSV was designed specifically for Hyper-V, so your bold statements are quite false”

    NTFS was not made specifically for it. Of course CSV was!

    “VMware cannot, and has never been able to do this”
    And VMware does not need to. VMware SRM handles DR perfectly, without the need for a multi-site cluster.

    “I would love for you to explain to me how VMFS enables simultaneous access for all nodes to read-only replica’s in DR sites that makes their solution superior?”

    SRM integrates directly with the storage system (IBM, EMC, HP, NetApp, etc.) and presents the LUNs on the failover site when needed. No need to present a read-only LUN. A much more elegant way of handling this, and way superior to Hyper-V. This is the result of a product focused on DR, not a general-purpose cluster solution adapted to Hyper-V. Can you do complete DR testing without touching the original VMs with a multi-site cluster?

    For the SCSI locking stuff: what you say is FUD.

    CSV is a version 1.0 product, completely unproven, and you are arguing that it scales better than VMFS, which has 10 years of development behind it, proven every day, over and over, by 98% of Fortune 1000 companies!!! Wake up, get real!

    Just look at the VMware documentation on how VMFS scales. Where are the MS documents on this? Where are the real user scenarios? There are none, since this is a new/unproven product.

  84. Fernando says:


    For the “VMFS does not scale” FUD (and you are not the only one to spread it), I found a very good clarification here:


    Take a look at it.

    Does MS provide any documentation on CSV limits?

  85. canalha says:

    Guys: get over it, please. I think this stopped being productive several posts ago.

  86. Pingback: Configuration of Hyper-V Live Migration – RUN DMC Style | VM /ETC

  87. Pingback: The hypervisor is not running - Hypervisor.fr

  88. Pingback: Hyper-V CSV volumes must be monitored by a complex GUID instead of drive letter | VCritical

  89. Pingback: » CSV Ordinary Mind

  90. Paul Shearer says:

    To Fernando: We are a Fortune 500 company, and we are running Hyper-V R2 using CSVs. As for the backup and replication questions, we leverage NetApp (one of our SAN providers). They have a Hyper-V snapshot client that allows the CSV (a “volume” in NetApp speak) to be placed in a consistent state for backup. This is even better than a host-based backup, as it all happens between our SAN and backup infrastructure.

  91. tonyr says:

    Paul, yep, the NetApp stuff is cool. Have you looked at their DR stuff for Hyper-V R2?

  92. Larry says:

    I remember just 20 years ago when no one here would even be having these kinds of conversations. Virtualization? We were buying servers left and right to get away from the mainframe computing architecture. The more servers, the better. Compaq, Dell, IBM, HP all were very happy to offer their hardware and sell as many servers as they could.

    The processors, memory, I/O bus: everything started getting faster. Now they want us all to move back to a mainframe environment. After all, virtualization is the same theory as mainframe computing.

    What we should really be angry about, and as industry professionals DEMAND, is that applications be able to run on the hardware without needing a layer of virtualization. After all, the only reason to virtualize is that those DAMN applications will crash our machines if we put a bunch of them on the same machine!!!

    Even now, they really haven’t broken out of the x86 architecture – http://en.wikipedia.org/wiki/X86
    When they do, then what happens to the hypervisor? If the programmers pull the plug on all this and program applications like Exchange, SQL, Oracle, SharePoint, etc. to run on the same machine without needing virtualization encapsulation to prevent one application from screwing up another, what happens to virtualization? If we have come this far, tell me, should it really be hard to do?

    So is virtualization just another phase that the hardware companies are using to get us to purchase the bigger, faster machines that they are building?

    And why should we even need virtualization, if applications were programmed correctly in the first place to work efficiently on the hardware?

    By the way, I love both technologies and think Microsoft and VMware have BOTH done a good job in responding to this ever more complex, evolving computing environment!

  93. iceman says:

    Good argument about CSV on top of NTFS vs. VMFS…

    So how does CSV on NTFS handle disk fragmentation, besides at the OS level?

    I applaud VMFS because this file system was built for a VIRTUALIZATION PLATFORM from the day of its creation 12 YEARS AGO (THAT’S HOW SOLID THIS TECHNOLOGY IS) and DOESN’T NEED IT.


  94. Davi says:

    Hey Guys,

    a) How about Red Hat Linux GFS: R/W to a volume group (wowww) regardless of the technology, file type, or application?

    b) I can’t see how CSV can stripe its volume across multiple LUNs through controllers/paths to maximize performance. I’ve read too much about the hypothetical scenario where a node loses its connection to the storage. I would like to see good points about the MS solution when 40 VMs are running at 70% CPU, eating I/O from the storage, virtualizing DB, AD, FS, applications, etc. for 1,000+ users.

    Come on guys, MS is losing the battle to Linux, VMware and IBM PowerVM (which in my opinion is the best VM “server” nowadays)…

    However, the great joke is my clients asking me to deploy Hyper-V and Hyper-V + CSV in their datacenters regardless of whether CSV is beta or not (haha, I have seen that kind of disclaimer all across my beta software). God bless us!



  95. Davi says:

    One more thing: why the he** do I need antivirus software on my Hyper-V node? Can anyone give me a good reason?

    Tip: protecting me from malware is not a good reason, OK? I don’t have this concern on VMware.


  96. Elden says:

    Windows has an integrated multi-path infrastructure called Microsoft MPIO. It’s an extensible model which allows storage vendors to write custom DSMs that plug in and provide enhanced functionality as well. CSV takes advantage of MS MPIO just as all other Windows components do; nothing special. So CSV fully supports multi-pathing, with all the load-balancing and fault-tolerance policies it provides.
    As you pointed out, CSV also increases the overall availability of the system by providing increased fault tolerance: it can recover from storage I/O faults, including complete loss of all storage paths, by routing I/O over alternate networks. So CSV helps you achieve higher levels of availability and uptime to deliver on your SLAs.

    CSV is a distributed orchestration layer on top of NTFS, and for fragmentation it takes advantage of all the NTFS techniques. So again, nothing special here… exactly the same logic as a normal, non-clustered NTFS volume.

    Iceman, I’m a little confused by your comments… in my goal to provide technical answers, remember that CSV is not required for Hyper-V Live Migration or SCVMM Storage Quick Migration. CSV is an enabling technology that simplifies storage management, which customers are free to leverage as they please. Additionally, Failover Clustering has a very flexible plug-in storage model which allows ISVs to plug in as well. So Live Migration is supported in a wide range of scenarios, including 3rd-party host-based software replication, 3rd-party hardware-based storage replication, or 3rd-party clustered file systems. We empower customers to choose what’s right for them.

    CSV is not in beta; it is an RTM’d feature available in Windows Server 2008 R2, which was released over a year ago at this point. The vast majority of Hyper-V deployments are clustered and use CSV. The install base with CSV is quite significant, with a year of heavy usage under its belt; it’s quite tried and true.
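    [Editor’s note] The fault-tolerance behavior described in the comment above (fall back to redirected I/O over an alternate network when storage paths fail) can be sketched as a toy model. This is purely illustrative Python, not the actual MPIO/CSV implementation, and every name here is invented:

    ```python
    # Toy model (NOT Windows code): try direct storage paths in order;
    # if every path fails, redirect the I/O over the cluster network to
    # a node that can still reach the storage.

    def send_io(paths, network_redirect, payload):
        for path in paths:
            try:
                return path(payload)          # direct (fast) path
            except ConnectionError:
                continue                      # path failed; try the next one
        return network_redirect(payload)      # last resort: redirected I/O

    def good_path(p):
        return f"wrote {p} via direct path"

    def dead_path(p):
        raise ConnectionError("path down")

    # The second direct path handles the I/O even though the first is down.
    print(send_io([dead_path, good_path], lambda p: "redirected", "block"))
    # prints: wrote block via direct path
    ```

    The design point is simply that I/O keeps flowing as long as *some* route to the storage exists, at the cost of slower, redirected traffic when no direct path survives.
    
    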

  97. Hi,

    Nice discussion but there are some missing details from Double-Take’s point of view.

    DT has different solutions to replicate Hyper-V for DR, with or without a CSV implementation.
    A * Without CSV – we have two possibilities:
    A1. A stretch cluster, implemented as an extension of the MSFT multi-site cluster.
    A2. Host-level replication for Hyper-V hosts (like Veeam for VMware). DT Virtual Host for Hyper-V is a specific product built to replicate an active VM from one Hyper-V host to another. Cluster-to-cluster is supported.

    B * With CSV – Double-Take has protection for Cluster Shared Volumes: for real-time consistent replication you need to check out the VRA (Virtual Recovery Assistant) technology for Hyper-V. The same implementation has been done for VMware, and it’s a sort of SRM for heterogeneous storage:


  98. Lawrence says:

    Hyper-V CSV backup is a feature in Microsoft DPM 2010 v3. DPM 2010 is also the only officially recommended mechanism from Microsoft for backing up Hyper-V VMs on CSV.

  99. MR says:

    Whether it’s live migration, P2V, V2V, 256 TB support, or many such features: it’s all a pure copy-and-paste of VMware technology… so don’t feel great about it. VMware rocks! 🙂

Comments are closed.