Multiple hypervisors? What about multiple SANs?

Some of the cool kids are talking about a multi-hypervisor strategy these days.  Mostly journalists and such who could never truly feel the pain of dealing with two vastly different virtualization management platforms.

As you may recall, Red Hat is attempting to bring their new RHEV product to market by playing the old “don’t get stuck with a single vendor” trick.  Red Hat’s CEO claims:

“…customers don’t want one platform. They want two.”

Let’s save the multi-hypervisor discussion for another day, but shift to the topic of storage diversity.

While there may be a few environments that can standardize on a single storage area network technology, it typically makes sense to mix and match iSCSI, NFS, or Fibre Channel SANs to optimize for cost and performance.

During my recent foray into the chaotic world of Red Hat Enterprise Virtualization, I encountered an unbelievable storage limitation with their new KVM hypervisor.

“Virtualization diversity… when and where we say”

VMware vSphere customers are free to mix and match supported storage technologies within datacenters, clusters, and even hosts.

I would have thought that a company like Red Hat that is pushing heterogeneous solutions and multiple vendors might feel the same way when it comes to storage.  But that’s not quite how things go in the land of Red Hat Enterprise Virtualization.

With RHEV, a “Data Center” is configured to use a single type of storage; all clusters and hosts in that Data Center are restricted to that choice, so choose wisely.
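
To make the restriction concrete, here is a toy Python model of the rule as RHEV enforces it. This is purely an illustration of the behavior, not Red Hat’s API; the class and names are made up:

    # Illustration only -- not the RHEV API.  Every storage domain attached to
    # a RHEV Data Center must match the storage type fixed at creation time.
    class StorageTypeMismatch(Exception):
        pass

    class DataCenter:
        def __init__(self, name, storage_type):
            self.name = name
            self.storage_type = storage_type      # "iSCSI", "FCP", or "NFS"
            self.storage_domains = []

        def attach_storage_domain(self, domain_name, storage_type):
            if storage_type != self.storage_type:
                raise StorageTypeMismatch(
                    "%s is %s, but Data Center %s only accepts %s" %
                    (domain_name, storage_type, self.name, self.storage_type))
            self.storage_domains.append(domain_name)

    dc = DataCenter("Production", "iSCSI")
    dc.attach_storage_domain("fast-lun", "iSCSI")    # accepted
    dc.attach_storage_domain("nfs-archive", "NFS")   # raises StorageTypeMismatch

vSphere has no equivalent of that storage_type restriction: datastores of any supported type can coexist in the same datacenter, cluster, or host.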

Storage Freedom

VMware vSphere allows administrators to move the virtual disks of running VMs from one array to another — even between different types of arrays — with zero downtime.  Storage vMotion is a very powerful feature, and it is simply not available from Red Hat; only VMware offers it.
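
For those who automate vSphere, the same operation can be scripted. Below is a minimal sketch using the pyVmomi SDK; the vCenter address, credentials, VM name, and target datastore are placeholders:

    # Minimal Storage vMotion sketch using pyVmomi, the vSphere Python SDK.
    # The vCenter address, credentials, VM name, and datastore are placeholders.
    import time
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com",
                      user="administrator", pwd="secret")
    content = si.RetrieveContent()

    def find_by_name(vimtype, name):
        """Return the first inventory object of the given type with a matching name."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.DestroyView()

    vm = find_by_name(vim.VirtualMachine, "web01")
    target_ds = find_by_name(vim.Datastore, "fc-datastore-02")

    # Relocating only the datastore leaves the VM running on its current host
    # while its disks are copied to the new array.
    task = vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=target_ds))
    while task.info.state in (vim.TaskInfo.State.queued,
                              vim.TaskInfo.State.running):
        time.sleep(1)

    Disconnect(si)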

The Best Choice: VMware vSphere

Virtual machines have differing storage requirements — certainly not a one-size-fits-all component of an efficient virtual infrastructure.  The best platform for building your own private cloud is also the one with the broadest support for today’s storage technologies:  VMware vSphere.

Oh, by the way, did you know that VMware is positioned in the Leaders Quadrant of Gartner’s newly released x86 Server Virtualization Magic Quadrant?  Take a look at the full article; I’ll give a free subscription to the VCritical RSS feed to the first 10 people who spot Red Hat’s position.


20 Responses to Multiple hypervisors? What about multiple SANs?

  1. Jason Ruiz says:

    Seems a bit backwards with what KVM has to offer, but based on your previous articles it doesn’t surprise me.

    • Eric Gray says:

      Keep in mind that KVM is a kernel module that adds virtualization to a Linux host. RHEV is a management platform for such, and subject to the limitations of its designer.

      Open Source KVM can run VMs on any storage you like.
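
      To illustrate (a minimal sketch using the libvirt Python bindings; the guest name and disk path are placeholders): plain KVM via libvirt will boot a guest whose disk lives on a local file, a file on an NFS mount, or, with a block-type disk element, an iSCSI or FC LUN.

        import libvirt

        # Any storage that presents a file works here: a local image or a file
        # on an NFS mount.  For a raw LUN, switch the disk to type='block'
        # with a <source dev='/dev/...'/> element instead.
        DISK_PATH = "/mnt/nfs/vms/demo.img"   # placeholder path

        domain_xml = """
        <domain type='kvm'>
          <name>demo</name>
          <memory unit='MiB'>1024</memory>
          <vcpu>1</vcpu>
          <os><type arch='x86_64'>hvm</type></os>
          <devices>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw'/>
              <source file='%s'/>
              <target dev='vda' bus='virtio'/>
            </disk>
          </devices>
        </domain>
        """ % DISK_PATH

        conn = libvirt.open("qemu:///system")   # connect to the local KVM host
        dom = conn.defineXML(domain_xml)        # register the guest
        dom.create()                            # and start it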

  2. Pingback: Tweets that mention All hosts in a RHEV data center are restricted to a single storage technology | VCritical -- Topsy.com

  3. Scott Lewis says:

    Forget multiple SANs, what about a single SAN? I’ve been in environments where many hosts are loaded up with 1 Gbps Ethernet adapters and access the SAN via iSCSI, while other hosts with larger I/O requirements have a pair of Fibre Channel cards. VMware doesn’t really care how you attach to SAN storage. Are you telling me that in an emergency (i.e., hardware failure), I wouldn’t be able to migrate temporarily from an FC host to an iSCSI host, or vice versa, because the media type is different?

    Ouch, since now I need FC in all hosts in this scenario, which erases a bit more of the price “advantage”.

  4. Vincent says:

    It’s easy to bash a young product. VMware has been doing virtualization for almost 10 years now; KVM is just a few years old. I think it’s normal that it doesn’t have the same feature set as VMware.

    If RHEV doesn’t fit your needs, don’t use it. Use something that does fit your needs and stop bashing other products. It’s like telling a 5-year-old that he can’t solve algebra…

    BTW : http://www.linux-kvm.com/content/qemu-kvm-012-adds-block-migration-feature

    Regards,
    Vincent.

  5. >It’s easy to bash a young product.

    Red Hat claims that RHEV is as good as vSphere.

  6. Vincent says:

    When staying in the same price range, RHEV is indeed as good as vSphere. It sure is a stable hypervisor.
    When I look at the competitive feature paper on the Red Hat site, I think vSphere is actually better (more virtual NICs, more vCPUs per VM, etc.). But Red Hat is right when they say you can virtualize as well as with vSphere.

    • >When staying in the same price range, RHEV is indeed as good as vSphere.

      Really? Looks like Red Hat is kidding. HA depends on a single point of failure: the physical Windows box running RHEV-M. Your whole infrastructure depends on it.

  7. Dr. Kenneth Noisewater says:

    I’ve just done a few rounds of evaluating RHEV on modern hardware (Nehalem Xeons) and compared it to vSphere… RHEV supports virtio drivers for system/boot disks on Linux, which the latest version of vSphere 4 we have does not (LSI SAS for boot, and pvscsi for data only). RHEV’s virtio drivers for disk and net are built into the RHEL 5 installer, whereas vSphere needs to run guest additions.

    Those are the only advantages that RHEV has, besides pricing. They’re promising feature parity with vSphere 4 sometime next year.

    Plus, the RHEV-M runs on Windows, which for a Linux company is thunderously retarded. Hell, ganeti is more supportable from a Linux person’s perspective than RHEV-M, and if I could get ganeti working properly with gluster..

  8. Alzhy Wziak says:

    We run KVM on RHEL 5.4 as the hypervisor.
    We don’t need RHEV-M, since we manage everything with scripts and a GUI.
    So who needs RHEV-M?

    But any day now, RHEV-M will move to an entirely JBoss/Linux backend.

  9. Dr. Kenneth Noisewater says:

    Red Hat says a Linux-based RHEV-M is scheduled for next year at the earliest.

    Also, does your KVM solution do clustering and transparent host migration? Hot-add CPU or RAM (which RHEV does _not_ do)? Host consolidation and power control (which RHEV “pretends” to do but actually doesn’t)?

    If it were up to me, I’d consider Ganeti as well, but the features of VMware are QUITE nice..

  10. aumon says:

    Hi Friends,

    Please help me find some good RHEV deployment documentation (not from Red Hat).
    I am struggling with the storage configuration part.

    Regards,
    Arumon

    • Eric Gray says:

      Just out of curiosity, have you tried to get Red Hat support to help you with storage configuration?

      • arl says:

        Yes we tried, and failed.

        We created 6 vdisks in SCST and used them in a RHEV data center.
        We tested the scenario twice, using different multi-vdisk setups.

        We failed miserably because the RHEV VDSM Python scripts somehow
        failed (we even checked the Python byte code).

        The problem is that even though RHEV constructs the VG using the vdisks as PVs,
        it still checks PV consistency every time one creates new resources.
        The consistency check seems to fail, and the problem is within RHEV.

        Those Red Hat “engineers” are just dummies – they actually cannot do
        anything other than ask questions like “have you contacted your SAN administrator?”

        Naturally, all the engineers who are NOT dummies are assigned to development,
        because they are not dummies (maybe) 😉

        The fix was to create one larger vdisk. And this was not funny.

        //arl

        DirectTalk === some cultures cannot handle direct, to-the-point, semi-offensive articles. My proofreader told me so, but why should I care? [Sheldon Cooper]

    • Dr. Kenneth Noisewater says:

      Wish I could help, but I gave up on RHEV about 1.5 months ago. Perhaps when v3 comes out it’ll be worth looking at again..

  11. Chris Gilbert says:

    VMware/vSphere are excellent products. That being said, they have a severe pricing issue. We have 8 VMware ESXi 4 servers, running the free edition. We have an Openfiler SAN/NAS device, exporting storage via NFS. VMware gives us loads for free and works well. However, when you want to upgrade a bit, their licensing steps on you heavily.

    There is no licensing option (which isn’t extremely expensive) to allow us just to do simple things:

    – Use vSphere vCentre (or whatever it’s called now) for centralised management.
    – Do backups using standard off the shelf backup software (currently using customised ghettoVCB scripts).
    – Possibly live migration (but not that important)

    VMware Essentials, you say? No, that has a limit of 3 hosts. After that, we must pay £757 per CPU for the Standard Edition. That’s absolutely ridiculous for the minimal amount of extra functionality over the free edition.

    What to do?

    We use the servers mostly for development purposes, and much as I would like to, I don’t think I can get through £15,000 worth of software licenses… I had a hard enough time getting through a £5000 self-built SAN.

    Essentials: GBP 380 (for 3 Servers), up to 6 CPUs
    Standard: GBP 757.20 per CPU.

    There is no bridge between the two – you can’t buy an essentials license and top it up with extra CPU licenses for more servers. You can’t buy two essentials licenses, only one.

    This means standard costs at least £9084 for our current hardware. That’s just for the hosts though.

    And then of course:

    vCenter Server Foundation: GBP 1,140.00 – Ok fine.
    vCenter Server Standard: GBP 3,795.00 – More than twice the price.

    Plus, you must also pay for one year’s support on top of all these license fees. Not really feasible for a software business our size without a lot of planning – so we will have to struggle along with the free edition.

    The other major issue I have with VMware is – why the HELL do they keep changing the names of their products? Is this just to confuse people?
