Virtualization - Ask the Experts #2
by Anand Lal Shimpi on July 27, 2010 12:10 PM EST
Our Ask the Experts series continues with another round of questions.
A couple of months ago we ran a webcast with Intel Fellow Rich Uhlig, VMware Chief Platform Architect Rich Brunner and myself. The goal was to talk about the past, present and future of virtualization. In preparation for the webcast we solicited questions from all of you; unfortunately, we only had an hour during the webcast to address them. Rich Uhlig from Intel, Rich Brunner from VMware and our own Johan de Gelas all agreed to answer some of your questions in a six-part series we're calling Ask the Experts. Each week we'll showcase three questions you asked about virtualization and provide answers from our panel of three experts. These responses haven't been edited and come straight from the experts.
If you'd like to see your question answered here leave it in the comments. While we can't guarantee we'll get to everything, we'll try to pick a few from the comments to answer as the weeks go on.
Question #1 by Eric A.
What types of computing will (likely) never benefit from virtualization?
Answer #1 by Johan de Gelas, AnandTech Senior IT Editor
Quite a few HPC applications that scale easily over multiple cores and can easily gobble up all the resources a physical host has. I don't see graphics-intensive applications being virtualized quickly either. And of course web servers that need to scale out (use multiple nodes) don't have much use for virtualization either.
Question #2 by Alexander H.
GPGPU is becoming very important these days. When can we expect virtual machines to tap this resource unobstructed?
Answer #2 by Rich Brunner, VMware Chief Platform Architect
Let me speak from the bare-metal hypervisor (server) point of view. If you directly expose a GPGPU to a VM (virtual machine), you make VMware VMotion of the VM to a different system too difficult, fragile, and costly to attempt. There is no guarantee that the target system of a VMware VMotion even has any graphics controller beyond the simple 2D VGA capability living in the BMC of the server, and few server customers want to waste the limited PCIe slots of a server on a graphics card. Even if you could claim some high-performance graphics controller in each server today, which we do not see our SMB and enterprise customers rushing toward right now, there is still no guarantee of compatibility even at the GPGPU instruction set level (OpenCL vs. CUDA vs. DirectCompute). This incompatibility breaks live migration. Attempting to address the compatibility requirements by emulating the GPGPU instruction set on systems which do not have it also leads to unacceptable performance. As a result, I do not expect anyone to seriously expose GPGPUs in a commercial enterprise hypervisor scenario for at least a few more years. But desktop hypervisors, which have fewer requirements for live migration, could get this to work sooner and paper over some of the incompatibility issues.
I think a better first step is for the hypervisor and VM to share common graphics rendering commands and primitives so that the hypervisor does not have to convert one graphics command set to another in order to render on the VM's behalf. A native driver in the hypervisor can then tweak the commands and take advantage of any hardware acceleration that a high-performance graphics card could provide if present. (This is being done today for "hosted" hypervisors such as VMware's Fusion product on the Mac with regard to OpenGL.) A GPGPU-capable graphics card offers the possibility of further "offline" (or asynchronous) acceleration of rendering and other hypervisor tasks that are invisible to the VMs on the server.
Having said that, it is clear that the microprocessor vendors are slowly integrating more capable graphics devices with their CPUs into the same processor package, at least for some market segments. If they ever decide to make this capability available in server processors, then more direct exposure of the GPGPU to the VM that does not break VMware VMotion may become possible due to the resulting widespread availability and commonality of integrated GPGPUs.
Question #3 by Craig R.
What is the roadmap for breakthrough security features and their implementation?
Answer #3 by Rich Uhlig, Intel Fellow
Going back to the early days of the definition of Intel VT, we actually had security in mind from the beginning, and so security is sort of already built into our existing VT feature roadmap. VMs provide a fundamentally stronger form of isolation between bodies of code because that isolation extends down to the OS kernel and device drivers running in ring 0. Our goal has been to help VMM software to further strengthen the security boundaries between VMs through hardware support. As an example, VT includes hardware mechanisms for remapping and blocking device DMA accesses to system memory, so that even a privileged ring-0 device driver running in one VM can’t access the memory belonging to another VM; that’s something that can’t be done without new hardware support. VT also simplifies the implementation of a VMM by reducing the amount of code needed to work around virtualization problems – that reduces the overall size of the trusted computing base and therefore the “attack surface” for malicious software to exploit. More recently, we’ve been adding hardware support to compute a cryptographic hash of the VMM kernel image that is loaded into the machine as it boots. This cryptographic measurement of the VMM can help to ensure that the VMM binary has not been tampered with before it begins to run. We call this “Trusted Execution Technology”.
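To make the "cryptographic measurement" idea concrete, here is a minimal Python sketch of a launch-time integrity check. This is only an analogy: Trusted Execution Technology performs the measurement in hardware and firmware and extends the result into TPM registers, whereas this sketch just hashes an image in software. The `vmm_image` bytes and the provisioning step are hypothetical placeholders.

```python
import hashlib

def measure(image: bytes) -> str:
    """Compute a cryptographic measurement (here, SHA-256) of a VMM image."""
    return hashlib.sha256(image).hexdigest()

def verify_launch(image: bytes, known_good: str) -> bool:
    """Allow the VMM to launch only if its measurement matches the
    known-good value recorded when the image was provisioned."""
    return measure(image) == known_good

# Hypothetical VMM kernel image; in reality this would be the binary
# loaded at boot, measured before it is allowed to execute.
vmm_image = b"\x7fELF...hypervisor kernel bytes..."
golden = measure(vmm_image)  # recorded at provisioning time

print(verify_launch(vmm_image, golden))               # True
print(verify_launch(vmm_image + b"tamper", golden))   # False: image modified
```

The key property is that any change to the image, however small, produces a completely different hash, so tampering is detected before the VMM runs rather than after.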