Responsible Thin Provisioning in VMware vSphere

A cost-saving feature introduced in VMware vSphere 4 is fully supported thin-provisioned virtual disks. Thin provisioning reduces demand for SAN storage space by allowing virtual disks to consume only the space they actually use, growing as needed, instead of pre-allocating all space up front.

What’s new is not necessarily the technology; it’s the management. In fact, veteran VMware ESX administrators have been creating thin-provisioned virtual disks for years, in controlled scenarios, by way of the vmkfstools command.
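
To make that concrete, here is a minimal sketch of the old-school approach from the ESX service console. The datastore, folder, and disk names are hypothetical; adjust the paths and sizes for your environment.

    # Create a new 40GB thin-provisioned virtual disk
    vmkfstools -c 40g -d thin /vmfs/volumes/datastore1/myvm/myvm.vmdk

    # Or clone an existing disk into thin format (destination must not exist yet)
    vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm.vmdk \
        -d thin /vmfs/volumes/datastore1/myvm/myvm-thin.vmdk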

Before vSphere and ESX 4, however, thin disks came with risk: there was no simple way to account for the overcommitted storage on each LUN. Even with many gigabytes of free space, a small gang of thin-provisioned virtual machines could quickly grow to exceed datastore capacity during a sudden demand spike.

Complete Storage Accounting

Now in vSphere 4 there is a new element in the capacity section of the datastore Summary tab that shows total provisioned space: the maximum potential growth of all virtual machines if every thin-provisioned disk were fully utilized:

Datastore summary tab shows committed capacity

Virtual machines with snapshots have the potential to consume even more datastore space, so vSphere accounts for this condition, too. Take a look at this VM Summary tab, where the total provisioned storage includes:

  • Virtual hard disk (40GB)
  • Snapshot (another 40GB potential, worst-case)
  • VM swap file (1GB, sized according to the RAM assigned to the VM)

Storage resources for VM with snapshot

As you can see, a VM with a 40GB virtual disk can actually consume up to 81GB (40 + 40 + 1) of space on your SAN because of a forgotten snapshot! Use the snapshot alarm to stay in control. And don’t forget that VMware vSphere snapshots are perfectly suitable for production.
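
Incidentally, you can verify the gap between provisioned and actual consumption for any thin disk right from the service console. A quick sketch, again with hypothetical paths: ls reports the logical (provisioned) size of the flat file, while du reports the blocks actually allocated.

    # Provisioned size of the thin disk -- reports the full 40GB
    ls -lh /vmfs/volumes/datastore1/myvm/myvm-flat.vmdk

    # Space actually allocated on the datastore -- only what the guest has written
    du -h /vmfs/volumes/datastore1/myvm/myvm-flat.vmdk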

New Datastore Alarms

New alarms in vSphere prevent out-of-space surprises. Administrators can monitor not only the free space on a datastore but also the percentage overallocated, making it easy to adhere to policies concerning how aggressively thin provisioning is used in your environment.

Datastore alarm for disk overallocation
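
The overallocation figure is simply the ratio of total provisioned space to datastore capacity, expressed as a percentage. Assuming that definition, a quick sanity check with made-up numbers:

    # Overallocation % = total provisioned space / datastore capacity * 100
    # Hypothetical numbers: 1000GB provisioned on a 500GB datastore
    echo $(( 1000 * 100 / 500 ))    # prints 200, i.e. 200% overallocated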

Flexible Virtual Disk Re-configuration

When creating a new VM, opting for thin-provisioned disks is as easy as checking a box. If you change your mind later, you don’t have to start over: during a Storage VMotion operation, administrators can opt to change the VM disk format on the fly (see below). It is also possible to inflate thin disks to thick via a new menu in the Datastore Browser.

VM disk format selection during Storage VMotion
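
For the command-line inclined, the inflate operation is also available with vmkfstools. Another sketch with example paths; point the command at the disk’s descriptor file:

    # Inflate a thin disk to its full provisioned size (eagerzeroedthick)
    vmkfstools -j /vmfs/volumes/datastore1/myvm/myvm.vmdk

    # Or clone the disk to a thick-format copy as part of a manual move
    vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm.vmdk \
        -d zeroedthick /vmfs/volumes/datastore2/myvm/myvm.vmdk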

Danger!

You’ve probably heard the now-famous quote by Tom Bittman from Gartner:

Virtualization without good management is more dangerous than not using virtualization in the first place.

That goes double for thin-provisioned virtual disks. Without comprehensive accounting and monitoring in place, your virtual infrastructure may be heading for disaster. This level of insight is available only with VMware vSphere 4, and it’s built right into the platform.

What about Microsoft virtualization? Hyper-V R2 thin provisioning, known as “dynamically expanding disks,” is not a best practice, perhaps due to the lack of accounting and monitoring of storage overcommitment, which is especially critical now with Cluster Shared Volumes and multiple VMs per LUN.

Are you using vSphere thin provisioning?


18 Responses to Responsible Thin Provisioning in VMware vSphere

  1. Rick Schlander says:

    Eric,

    Great post – good to get the word out about this awesome new management tool built right into the platform.

    Thanks!

    Rick

  2. Eric Gray says:

    Glad you liked the post, Rick. Thanks for letting me know.

  3. Tom says:

    Thank you for writing this!!
    I believe you. 🙂 🙂

  4. Steven Ruby says:

    Hey Eric, great post! This is exactly the reason I haven’t implemented “thin provisioning” yet. If I give a DBA 4TB they are going to use 4TB; same thing with my VM environment: if it’s available, it will be used!

    Glad I found your blog, hope things are going well!

  5. Eric Gray says:

    Hey Steve, good to hear from you! Thanks for dropping by.

  6. Pingback: Tweets that mention VMware vSphere offers advanced accounting for overcommitted storage | VCritical -- Topsy.com

  7. Pingback: Use VMware vCenter alarms with PowerShell to automatically migrate virtual machines | VCritical

  8. nate says:

    I’ve been using thin provisioning on ESX since 3.0 on my 3PAR arrays. What I have done, and it seems to work well and pretty foolproof, is provision volumes thinly on the array and create VMs on them. The array’s thin provisioning is transparent to VMware, so when VMware says there is 100GB in use there may only be 10GB actually written. I usually provision 1TB LUNs and stop provisioning VMs once VMware thinks there is 500GB written. So that means even in the worst case VMware will only use roughly 500GB of physical space on the storage array. No chance of “over subscription”; the remaining space is there in case I get lazy or something and want to provision another VM on it. Since the space isn’t allocated on the storage there isn’t any cost associated with the “waste”. Thin provisioning licensing for my array, at least, is based on data written, so no lost $ there either.

    At the moment my production VMFS volumes are using 985GB out of 4096GB, my QA VMFS volumes are using 930GB out of 5120GB, and my internal IT VMFS volumes are using 410GB out of 4096GB, that’s written data to the array.

    From VMware’s perspective, production VMFS volumes are using 2075GB, QA VMFS volumes are using 1788GB, internal IT VMFS volumes are using 725GB.

    So in total, out of 13,312GB of thin-provisioned volumes for VMFS, 2,325GB has been written, an 83% savings over thick provisioning. Given this effective usage of space I don’t imagine having to provision new volumes any time soon, and we’re running out of things to put in VMs.

    But at least I can rest well knowing that despite provisioning so much more space than VMware really needs, it won’t ever cause a space issue for our system. In all, VMFS volumes account for 2.8% of the total space on our storage system.

  9. David says:

    Hi

    But what is the performance overhead on the VMFS LUN and the underlying storage (removing the vendor for the moment, and assuming the storage is thin provisioned as per Nate)? Yes, thin provisioning is supported in production, but at what cost?

    I have it enabled on a number of low-I/O VMs (web servers, AD, app VMs) but have avoided the large disk-intensive machines for the moment, until I have seen more definitive numbers on the performance and system “hit”.

    Do you have anything here, Eric, that could be referenced?

    Cheers
    David

  10. Pingback: How to recover when a VMware ESX datastore runs out of space | VCritical

  11. Pingback: Provision a Thin Provisioned Standby LUN For vSphere Thin Provisioning | VM /ETC

  12. Pingback: vSphere Thin-Provisioned Disk Performance | VCritical

  13. Eric Gray says:

    VMware just published a new performance study on thin provisioning:

    https://vcritical.com/2009/11/vsphere-thin-provisioned-disk-performance/

  14. Pingback: Welcome to vSphere-land! » Thin Provisioning Links

  15. Pingback: Most useful VMware vSphere storage posts of the last year « Steve Goodman's Tech Blog

  16. vicky says:

    Hi,
    I recently had a nightmare because of this disk space issue on my datastore. We are using ESXi 4.1. We don’t have vCenter.
    We had a reasonable amount of FREE disk space on the datastore. We also configured replication of VMs to another ESXi 4.1 host using third-party software.

    Unexpectedly, one of the VMs grew excessively large. The provisioned size was about 570GB for that VM. After the terrible issue, in the vSphere Client I found the size to be around 1.2TB.

    From my investigation, I concluded the amount of free disk space was the culprit.

    But this is still unclear to me: is there any bug in ESXi 4.1 which allowed the VM to grow beyond its provisioned size?
