Last year we ran a little series called Ask the Experts where you all wrote in your virtualization-related questions and we had them answered by experts at Intel and VMware, as well as our own head of IT/Datacenter, Johan de Gelas.

Given the growing importance of IT/Datacenter technology we wanted to run another round, this time handled exclusively by Johan. The topics are a little broader this time. If you have any virtualization or cloud computing related questions that you'd like to see Johan answer directly, just leave them in a comment here. We'll be picking a couple and will answer them next week in a follow-up post.

So have at it! Make the questions good - Johan is always up for a challenge :)

Comments

  • HMTK - Friday, March 18, 2011 - link

    As another VMware bigot I couldn't agree more. Lots of interesting questions you asked. In the long run I think Citrix has a problem. VMware is the market leader and currently has the best tech. Period. Microsoft however has deep, DEEP pockets and a great marketing machine and partner channel, so they will become a very serious competitor to VMware. Citrix will be somewhere behind those two, I think.

    As for price, most people here seem to think Hyper-V is free. The VMware hypervisor ESXi is free. To run Hyper-V you need at least a Windows Server 2008 R2 license. The management tools for both VMware and Microsoft are definitely NOT free, so you'd have to do a case-by-case comparison if price is the most important factor for you. For small outfits (3 hosts) VMware Essentials is very affordable, and Essentials Plus offers HA and vMotion. I do hope though that they stop limiting CPU cores in certain SKUs: this is simply stupid.
  • bgoobaah - Thursday, March 17, 2011 - link

    I know the hardware is just way too expensive, and there is so little data out there for it, but...

    UNIX virtualization. Compare it against VMware/Linux features. HPVM and PowerVM are probably the two big players. I know HP machines came down a ton in price since the chipsets and memory are now the same as in high-end Xeons (blades only). I'm not sure about the IBM side. Maybe these options will become more important in the future. Maybe not, if they cannot offer competitive features.
  • intelliclint - Friday, March 18, 2011 - link

    I am running three virtual servers at home: two Windows Server 2008 R2 servers and one Linux. They all run on the stand-alone Hyper-V Server 2008 R2, which is free from MS. One server is Active Directory, another is Exchange, and the last is the firewall, VPN, web server, and intrusion detection. I had little trouble setting up the MS servers, except that Exchange 2010 SP1 has too many prerequisites. The Linux server needed the Hyper-V integration drivers before the virtual NICs and hard drives would show up. You could use the legacy devices, but the virtual ones are faster.

    My point is that I went from three machines to one (that's five power connectors, since two had redundant power supplies), and the only thing I had to do was add more RAM to the remaining box, bringing it to 8 GB total. I am saving about 400 watts, which can really add up.
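
    To put that 400 W figure in perspective, here is a rough back-of-the-envelope sketch. The wattage comes from the comment above; the electricity rate is an assumed example value, so plug in your own.

    ```python
    # Rough annual savings from consolidating three servers onto one.
    # The 400 W figure is from the comment above; the $0.12/kWh electricity
    # rate is an assumed example value -- substitute your local rate.
    watts_saved = 400
    rate_per_kwh = 0.12
    hours_per_year = 24 * 365

    kwh_per_year = watts_saved * hours_per_year / 1000   # watts -> kWh
    dollars_per_year = kwh_per_year * rate_per_kwh

    print(f"~{kwh_per_year:.0f} kWh/year, roughly ${dollars_per_year:.0f}/year saved")
    # -> ~3504 kWh/year, roughly $420/year saved
    ```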
  • HMTK - Friday, March 18, 2011 - link

    Not only is Hyper-V free, but so are ESXi and XenServer. Whatever platform you use, you'll have to purchase the licenses for the OSes in the VMs, and the management tools if you want to use the really interesting stuff.

    Personally I don't know what the use of Hyper-V Server (the free one) is, because you don't get a GUI - which, granted, might be a plus for some. I run Hyper-V on my 2008 R2 desktop machine, but if I were going without a GUI I'd use ESXi or XenServer instead (I chose Hyper-V because of limited space). They're thinner and have better support for non-Microsoft OSes, especially ESXi.

    I also always wonder whether or not you need antivirus on your Hyper-V server.
  • GullLars - Saturday, March 19, 2011 - link

    My question is: what do the experts think of Fusion-io's ioMemory and VSL (Virtual Storage Layer)? And how do they think it will affect virtualization and cloud computing in the short-to-medium term and in the long term?

    Fusion-io has been around for a few years now and has done an outstanding job accelerating servers. Since their architecture scales well in highly parallel environments and is optimized for latency and power-efficient I/O, I'd think they have a great future in virtualization.
  • linesplice - Saturday, March 19, 2011 - link

    There is so much to talk about here on so many fronts that there probably needs to be some narrowing of topic and focus. Some thoughts:

    - If you haven't virtualized servers yet, you're probably a smaller company.

    - Most jobs and economic growth occur through small/medium businesses (at least in the US, according to the Department of Labor). I'd recommend a focus on companies that don't have a hundred servers (and big budgets).

    - Desktop virtualization is all the buzz. I've been through the manufacturers' materials, and due to license costs (VMware and Citrix) desktop virtualization is still expensive compared to 1:1 physical endpoints. You don't do it to save money (despite vendor claims, which are mostly bolstered by the assumption that you'll eliminate employees). It makes sense for hospitals and retail, but outside that? You still need MS licenses for the desktop, even thin clients aren't much less expensive than low-end workstations, and desktop virtualization still doesn't really work for laptop travelers (think airplane or non-connected environments).

    - Before someone says it - MSFT licenses. Frankly, unless you are a giant company and don't want to keep track of license info yourself, the only reason I can see to get an enterprise license or Software Assurance is that you want one license key to rule them all (or you already use every MSFT product ever made). We still purchase OEM because it's almost 50% less expensive, despite the need for us to track license info internally. I've yet to see an SA that saves anyone money, since MSFT has historically released products on a longer-than-three-year cycle. Google Apps isn't there yet for our company, but it might be in the next three years.

    - I read that VMware has reported less than 40% of the virtualization software purchased is actually deployed. It's the after-you-purchase gotchas that I suspect hold up projects. What are those items?

    - Someone else mentioned it, but there is a lack of independent testing. Gartner, Aberdeen, Forrester and the like only offer information based on reviews from people who have done the work, and that's like playing the "telephone game". Isn't there some information somewhere that breaks down: a. difficulty of installation (do I need a product-specific software expert in order to do it?), b. ease of management, c. complexity, d. cost, e. disaster recovery capabilities (and the ease of using them), f. gotchas?
    (For reference, our company network engineer learned how to do a Hyper-V install in a day.)

    - I understand and appreciate all the Unix, Linux and Mac folks out there, but let's face it (if I stick with my SMB theme), most companies run Windows Server.

    - Are there complexities or problems when virtualizing in production on a SAN and then having to migrate to DR with DAS (what about drive letters, targets, managing disk space, etc.)?

    - Someone posted about NFS. Bleh. CIFS (like it or hate it) is more common.

    - What's new with virtualization today, and does it only apply to mega-installs? What's new for next year? Beyond that, I see a lack of relevance due to changing tech.

    - Cloud computing...

    - For a small/medium business, what makes the most sense to outsource? We look at this all the time and we keep coming up with the same answers over and over. Because we already host all our applications internally, it's difficult to move just one of them somewhere else. Why? Because then we'd need to massively increase bandwidth to keep the applications communicating without bottlenecks. That increases cost and eliminates it as a possibility. Our company is extremely cost-conscious.

    - Does cloudsourcing email make sense? It's much more expensive than doing it in-house for 1,000 users. It's a no-brainer for a company of 25 or even 100.

    - If you aren't performing research or transactional processing, does cloudsourcing make sense? If yes, then for what applications? I'd cross off file server or document management immediately because of connectivity/bandwidth costs.

    There is probably more. Maybe a later post. Thanks!
  • HMTK - Sunday, March 20, 2011 - link

    - our customers are all SMB's (50 - 100 employees) and typically the only server that is NOT virtualized is the backup server

    - desktop virtualization is costly due to MS licensing and the need for more SAN storage. However, you save costs on deployment of new desktops. It works really well for laptop users. I manage a VMware View environment and the sales people take their VM along with them. XenDesktop works differently than View but it looks interesting as well.

    - Microsoft licensing is more than an easy way to keep track of software. OEM is cheaper up front, but the various licensing programs can be cost savers elsewhere. You should NEVER simply make a comparison of €€ or $$$ there.

    I think you must be working with VERY small SMBs, but even there virtualization makes sense. Current server hardware is way too powerful for many SMB workloads, but a lot of SMBs have more than one server. Putting them all on a single host will save you money and make migration to other hardware later on a lot easier, even if you don't use management servers like vCenter or SCVMM.

    Small example: I've got a customer with about 20 users who want to use Remote Desktop Services (they got Microsoft licenses nearly for free). I ordered a PowerEdge T710 (dual Xeon E5620, 24 GB RAM, 5 x 146 GB 10K SAS) which is perfectly fine for their new SBS 2011 and an RDS server, virtualized on the free ESXi. Two separate servers would have cost more and used more power and space.

    I share your doubts about the usefulness of putting some (or all) of your servers in an external cloud. For most small and medium companies, this doesn't make sense. Bandwidth and guaranteed connections are way too expensive. Just keep it in house.
  • erhardm - Sunday, March 20, 2011 - link

    As we all know, virtualization only really pays off when there's a SAN in use. My question is, knowing that it is usually I/O bound, what kind of HDDs are best used in the SAN? A few high-I/O 15K/10K hard drives, or a lot of 7,200 RPM drives? What's the best ratio between capacity and I/O performance in a virtualized environment?
  • HMTK - Wednesday, March 23, 2011 - link

    You don't need a SAN to virtualize in a useful way but if you want all the bells and whistles, you need that SAN.

    Bulk storage/archiving/backup is typically fine on 7,200 RPM drives, but if you want several VMs on a single LUN, forget about those things. The difference between 7,200 RPM and 10K RPM is considerable, but if your budget allows, go for 15K. Those disks will have a longer useful life; 7,200 RPM drives are really way too slow. Most of our customers have 3.5" 300 GB 15K SAS drives in their arrays (RAID 1, 5 or 10 depending on the purpose), and I like to keep enough free space for snapshots and moving things about. Usually we also have a mirror of high-capacity 7,200 RPM drives (1 TB+) to store ISOs, VM templates and other stuff that needs disk space but not performance.

    The next SANs we're going to sell will most definitely use 2.5" disks. More spindles = more IOPS. We recently bumped into serious performance issues on a VDI setup on a 3.5" SAN that had enough disk space but not enough spindles. VMware recommends about 20 virtual desktops per LUN, so a SAN with a mere 12 disks cannot host that many machines.
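
    As a rough sketch of why spindle count, rather than capacity, becomes the bottleneck, here is a crude IOPS budget. The per-disk IOPS, the per-desktop demand and the RAID penalty below are assumed rules of thumb, not measurements from that setup.

    ```python
    # Crude IOPS budget for a virtualization/VDI array: aggregate spindle IOPS
    # divided by steady-state per-desktop demand. All figures are assumed
    # rules of thumb, not measured values.
    IOPS_PER_DISK = {"7200_sata": 80, "10k_sas": 140, "15k_sas": 180}

    def desktops_supported(disks, disk_type, iops_per_desktop=15, raid_penalty=2):
        """Very rough estimate of how many virtual desktops a set of spindles can feed."""
        usable_iops = disks * IOPS_PER_DISK[disk_type] / raid_penalty  # crude RAID overhead
        return int(usable_iops / iops_per_desktop)

    print(desktops_supported(12, "15k_sas"))    # ~72 desktops on 12 x 15K spindles
    print(desktops_supported(48, "7200_sata"))  # ~128 desktops on a much wider 7,200 RPM array
    ```

    The exact numbers matter less than the shape of the math: a dozen 3.5" disks run out of IOPS long before they run out of gigabytes.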
  • schuang74 - Thursday, March 24, 2011 - link

    What you find in more modern implementations of shared storage for VMs is that a lot of companies are drifting away from traditional "fast" shared storage like SAN/Fibre Channel implementations and moving to iSCSI using slower 7,200 RPM / 10K SAS or SATA-based drives. The trade-off is that you have larger arrays with a higher number of spindles, which compensates for the slower rotational speed. SANs are pricey if all you use them for is shared storage; the strengths and benefits of a SAN aren't just the performance but rather the software: replication, snapshots, and redundancy. Running VMs off an iSCSI array with 7,200 RPM SATA 2/3 or SAS drives works well for most applications.

    Newer storage products let you build hybrid solutions that combine slower and faster storage cabinets and prioritize data placement between them, pushing the most frequently accessed data to the faster drives and the least accessed to the slower cabinets. Either way, the long-term maintenance and costs are much lower than with a traditional SAN.
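
    To make the placement logic above concrete, here is a minimal, purely illustrative sketch; real hybrid arrays do this in their own management software, and the extent names, access counts and fast-tier capacity below are invented for the example.

    ```python
    # Toy tiering heuristic: the most frequently accessed extents go on the fast
    # cabinet, everything else on the bulk cabinet. Names, counts and the
    # fast-tier capacity are invented for illustration only.
    from collections import Counter

    def assign_tiers(access_counts, fast_tier_extents):
        """Return {extent: 'fast' | 'bulk'} given per-extent access counts."""
        ranked = [extent for extent, _ in access_counts.most_common()]
        hot = set(ranked[:fast_tier_extents])
        return {extent: ("fast" if extent in hot else "bulk") for extent in ranked}

    counts = Counter({"lun0:ext1": 9500, "lun1:ext1": 4300, "lun0:ext2": 120, "lun1:ext2": 15})
    print(assign_tiers(counts, fast_tier_extents=2))
    # -> {'lun0:ext1': 'fast', 'lun1:ext1': 'fast', 'lun0:ext2': 'bulk', 'lun1:ext2': 'bulk'}
    ```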
