Yesterday I wrote a summary of the hazards of using SCVMM to manage VI3 — highlighting previously published VCritical articles as well as linking to a new series of videos on Why Choose VMware. Looks like VMware may have hit a nerve, as the Microsoft virtualization team scrambled to issue a barrage of rebuttals.
My work has been called a lot of things, but “conjecture” was surprising. You may not like what I said, but it is factual and reproducible. Inaccuracies have always been corrected; leave a comment or contact me directly if you find one.
Isn’t it ironic that Microsoft is playing the “rhetoric and FUD” card? Oh, please, when it comes to technology rhetoric and FUD, Microsoft invented the game — virtualization is no exception. How about: SCVMM manages virtual and physical … single pane of glass…
Here is what I learned by reading Rakesh’s rebuttal to the video series:
- The reason SCVMM silently deletes your VMware templates, without warning, is that maintaining two copies would be a lot of trouble. Consider it a favor to VI3 admins, at no extra charge.
- VMware doesn’t have intelligent placement. Or at least, Rakesh isn’t aware that VMware offers intelligent placement as a feature of DRS. (Speaking of conjecture…)
Actually, a VM is intelligently placed every time it is powered on, not just when it is first created.
The VMotion migration wizard even allows users to select a cluster as the destination, and VMware DRS will place the VM on the most appropriate host (a PowerCLI sketch follows this list). Thank you very much.
- When managing VMware ESX, SCVMM artificially limits your VM migrations because it knows better than you: memory overcommit is bad! But if you insist, you can use PowerShell instead of the SCVMM console to perform those tasks (a scripted sketch appears below). Huh? Switch from this single pane of glass to this other single pane of DOS to write a script and…
- SCVMM allows you to view resource pools, which is practically the same as being able to manage them. Forget about the damage inflicted on your VI as SCVMM de-configures your careful resource partitioning (see the example below); you should not be using resource pools anyway. Partitioning resources on a single host is just not a good idea. Use host groups to partition hosts from one another instead; much easier to understand.
- The whole concept of port groups is awkward and leads to confusion, just like those confusing VLANs on physical switches. If network admins would just set up a separate physical switch for each layer 2 network, it would be a lot less awkward. Why make things complicated? (A port group sketch closes out this list.)
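For the record, here is roughly what intelligent placement looks like from the VMware side. This is a minimal PowerCLI sketch; the vCenter address, cluster name, and VM name are made up for illustration:

```powershell
# Connect to vCenter (hypothetical address)
Connect-VIServer -Server vc.example.com

# With DRS fully automated, every power-on gets intelligent placement
Get-Cluster -Name Prod |
  Set-Cluster -DrsEnabled:$true -DrsAutomationLevel FullyAutomated -Confirm:$false

# DRS picks the most appropriate host when the VM powers on
Get-VM -Name web01 | Start-VM

# In manual or partially automated mode, review and apply the recommendations
Get-DrsRecommendation -Cluster Prod | Apply-DrsRecommendation
```

No wizard required, and certainly no separate scripting console.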
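The scripted workaround Rakesh suggests would look something like this. I am reconstructing the SCVMM 2008 cmdlets from memory, with hypothetical server, host, and VM names, so treat it as a sketch rather than a supported procedure:

```powershell
# Connect to the (hypothetical) SCVMM server
Get-VMMServer -ComputerName scvmm.example.com

# Find the VM and the destination ESX host
$vm   = Get-VM -Name "web01"
$dest = Get-VMHost -ComputerName "esx02.example.com"

# The console blocks the migration; the cmdlet performs it anyway
Move-VM -VM $vm -VMHost $dest
```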
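As for those resource pools SCVMM so thoughtfully de-configures, here is the sort of partitioning at stake, again as a PowerCLI sketch with illustrative pool names and arbitrary limits:

```powershell
$cluster = Get-Cluster -Name Prod

# Carve the cluster into two pools: production gets priority,
# test is capped (the limits here are arbitrary examples)
New-ResourcePool -Location $cluster -Name "Production" -CpuSharesLevel High -MemSharesLevel High
New-ResourcePool -Location $cluster -Name "Test" -CpuLimitMhz 4000 -MemLimitMB 8192

# Place a VM into the production pool
Move-VM -VM (Get-VM -Name web01) -Destination (Get-ResourcePool -Name "Production")
```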
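And since port groups are apparently so awkward, behold the horror of carrying multiple VLANs on a single vSwitch. One more PowerCLI sketch; the host name, switch name, and VLAN IDs are invented:

```powershell
$vswitch = Get-VirtualSwitch -VMHost (Get-VMHost -Name esx01.example.com) -Name vSwitch1

# One virtual switch, many layer 2 networks; no extra physical switches required
New-VirtualPortGroup -VirtualSwitch $vswitch -Name "Web-VLAN20" -VLanId 20
New-VirtualPortGroup -VirtualSwitch $vswitch -Name "App-VLAN30" -VLanId 30
```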
But the most important thing I learned by reading that post is that “many many” customers are using SCVMM to manage VI3. I don’t know how many “many many” is, but if that truly is the case, then surely one or two of them will read this article and be willing to comment in support of just how well it’s working for them.
Oh, the title of that article proclaims, “…we must be doing something right.” Well, you can’t argue with that — marketing is something.
Pingback: Episode 3 on “FUD on the hypervisor management” « UP2V
I am using SCVMM to manage ESX. It is actually cool to see both environments in a single console. Plus, since I will never be 100% virtualized (at least not in the next 3 years), I like Microsoft’s management story. The ability to manage both physical and virtual workloads using Operations Manager is very nice, and my management loves it.
Hopefully by next year we’ll be almost 100% virtualized, and even if we aren’t, SCVMM does not have a home in my datacenter.
Fred, thanks for the input. Would you mind expanding a bit on your environment? For example, about how many hosts are you managing and are they mostly Hyper-V?
I’d like to know just how many Linux virtual machines are running in these oh-so-many SCVMM environments. I’m betting on oh-so-few.
John, that is an excellent question, especially considering the lack of Linux guest tool functionality.
Pingback: Microsoft employee caught leaving deceptive comments on blog | VCritical