Introduction

First dual-core CPUs in 2005, then quad-cores in 2007: the multi-core snowball is rolling. The desktop market is still trying to figure out how to wield all this power; meanwhile, the server market is eagerly awaiting the eight-core CPUs due in 2009. The difference is that the server market has a real killer application, hungry for all that CPU power: virtualization.

While a lot has been written about the opportunities that virtualization brings (consolidation, hosting legacy applications, resource balancing, faster provisioning...), most publications about virtualization are rather vague about the "nuts and bolts". We talked to several hypervisor architects at VMworld 2008, and in this article we'll delve a bit deeper as we look to understand the impact of virtualization on performance.

Performance? Isn't that a non-issue? Modern virtualization solutions surely do not lose more than a few percent in performance, right? We'll show you that the answer is quite a bit different from what some of the sponsored white papers want you to believe. We'll begin today with a look at the basics of virtualization, and we will continue to explore the subject in future articles over the coming months.

In this first article we discuss "hardware virtualization", i.e. the technology that makes it possible to run several virtual servers on a single physical machine using hypervisors such as VMware's ESX, Xen, and Windows Server 2008's Hyper-V. We recently provided an introduction to application virtualization with Thinstall, SoftGrid, and other software packages at our new IT portal, it.anandtech.com. These articles are all about quantifying the performance of virtualized servers and understanding virtualization technologies a bit better.

Hardware or Machine Virtualization versus "Everyday" Virtualization

Every one of us has already used virtualization to some degree. In fact, most of us wouldn't be very productive without the virtualization that a modern OS offers us. A "natively running" server or workstation with a modern OS already virtualizes quite a few resources: memory, disks, and CPUs, for example. While there may be only 4GB of RAM in a Windows 2003 server, each of the dozens of running applications is given the illusion that it can use the full 2GB (or 3GB) user-mode address space. There might be only three disks in a RAID-5 array, but since you have created 10 volumes (or LUNs), it appears as if there are 10 disks in the machine. And although there might be only two CPUs in the server, you get the impression that the five actively running applications are all working in parallel at full speed.
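
To see the memory part of that illusion in action, here is a minimal sketch of our own (POSIX C, not from the original article). After fork(), the parent and child print the same virtual address for a local variable, yet each sees its own value, because the OS maps that virtual address to different physical memory for each process:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int value = 1;
    pid_t pid = fork();            /* create a second process */

    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }

    if (pid == 0) {                /* child process */
        value = 42;                /* changes only the child's copy */
        printf("child : &value = %p, value = %d\n", (void *)&value, value);
    } else {                       /* parent process */
        wait(NULL);                /* let the child print first */
        printf("parent: &value = %p, value = %d\n", (void *)&value, value);
    }
    return 0;
}

Compiled with any C compiler on a Unix-like system, the two processes typically report an identical address but different contents: each one lives in its own virtualized memory space.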

So why do we install a hypervisor (or VMM) to create fully virtualized servers if our modern operating systems already offer some degree of virtualization? The answer is that operating systems isolate applications only weakly: each process gets its own well-defined memory space, separating its data and instructions from the others, but processes still share the same files, may have access to shared memory, and run on the same OS configuration. In many situations this kind of isolation was, and is, not sufficient. For example, one process that takes up 100% of the CPU time can slow the other applications to a crawl, despite the fact that modern OSes use preemptive multitasking. With pure hardware virtualization, you get completely separate virtual servers, each with its own operating system (the guest OS), and communication between them is only possible via a (virtual) network.
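
To make the "one CPU hog slows everything down" point concrete, here is a rough sketch of our own (again POSIX C, not from the article): it times a fixed chunk of work while a number of busy-looping processes, given on the command line, spin in the background. Once the number of hogs approaches or exceeds the number of cores, the measured time climbs noticeably, even though the scheduler keeps preempting fairly.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <signal.h>
#include <unistd.h>
#include <sys/wait.h>

/* A fixed chunk of CPU work whose wall-clock time we measure. */
static double work(void)
{
    volatile double x = 0.0;
    for (long i = 0; i < 200 * 1000 * 1000; i++)
        x += (double)i * 1e-9;
    return x;
}

int main(int argc, char **argv)
{
    int hogs = (argc > 1) ? atoi(argv[1]) : 0;   /* number of busy loops */
    pid_t pids[64];

    /* Start the CPU hogs: each child spins until it is killed. */
    for (int i = 0; i < hogs && i < 64; i++) {
        pids[i] = fork();
        if (pids[i] == 0)
            for (;;) { /* burn CPU */ }
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    work();
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("with %d hog(s): %.2f s\n", hogs, secs);

    for (int i = 0; i < hogs && i < 64; i++)
        kill(pids[i], SIGKILL);                  /* clean up the hogs */
    while (wait(NULL) > 0)
        ;
    return 0;
}

Run it with 0, 2, 4, 8 hogs and compare the timings: as soon as the runnable processes outnumber the cores, everyone gets a smaller slice, which is exactly the kind of interference a hypervisor is asked to control.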
