Rethinking Virtualization

My first encounter with virtualization was in 1988, when a company called ReadySoft released a product called the A-Max. Because the Amiga and Macintosh shared the same Motorola 68000-based CPU, the A-Max allowed the Amiga to run the Mac OS as a virtual machine. The benefits were obvious: you could invest in a single machine (RAM, hard drives, video cards) and get the benefit of both platforms, using and allocating resources as needed for the task at hand. If you needed a full 4MB of RAM given to the Mac, you could bump the memory up and work in Photoshop; if you wanted to leave some breathing room for the Amiga, you could always bump it down to 1MB. This is much like Windows XP Mode in Windows 7 today, and this type of use still offers many of the benefits it had back then on my Amiga running Mac OS 7.

Server virtualization is a bit different. It focuses on exposing very specific applications, typically only a single application per VM. With server virtualization, we don't tend to use VMs to keep running old applications ad hoc on older operating systems, nor to enable access to a non-primary platform that we want to share hardware with, as we do with client virtualization. The reason we virtualize servers is primarily isolation and security. It's certainly far more efficient to run 40 web sites off a single IIS server than to host 40 VMs, each running a minimal Windows installation with its own independent instance of IIS, yet we accept that overhead for the isolation it buys: if someone manages to exploit the web server, they haven't exploited the email server as well. Isolation also simplifies configuration, since you don't have to debug a complex set of configuration requirements for each application, potentially conflicting DLLs, and so on.

To this point, it seems that the entire virtualization market has been driven by the physical hardware workstation model of virtualization. Current operating systems are designed entirely around a physical hardware stack. Even with JeOS ("just enough operating system") builds, there is still an entire virtual hardware layer that the application and OS must go through, and the applications typically run on established application stacks (ASP.NET, LAMP, etc.) indirectly through this simulated hardware. But what if this legacy didn't exist? What if someone looked at why we virtualize today, and created a platform specifically to address it?

With CPU virtualization support built into commodity processors, what would an operating system look like today if it were built from the ground up for a virtual machine? And what would a virtual machine look like if it didn't have to expose hardware abstractions to physical operating systems? Virtual machines are simulations of physical hardware: network cards, hard drives, BIOS, video cards. For most server virtualization workloads, all of this is at best useless and at worst a drain on resources. What if, instead, a virtual machine were simply a set of well-defined services for networking, memory, and storage? Services that virtual operating systems could use directly, without the weight of device drivers, filesystem drivers, and all that cruft from the physical world?
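To make the idea concrete, here is a minimal sketch of what "services instead of devices" might look like. Everything here is a hypothetical illustration: the names (Hypervisor, StorageService, NetworkService) and their methods are assumptions of mine, not any real hypervisor's API.

```python
class StorageService:
    """Hypothetical: key/value storage offered directly by the hypervisor.
    No virtual disk, no BIOS, no filesystem driver in the guest."""
    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = bytes(data)

    def get(self, key):
        return self._blobs.get(key)


class NetworkService:
    """Hypothetical: message delivery between guests by name.
    No emulated NIC, no MAC addresses, no driver stack."""
    def __init__(self):
        self._queues = {}

    def send(self, dest, payload):
        self._queues.setdefault(dest, []).append(payload)

    def receive(self, name):
        queue = self._queues.get(name, [])
        return queue.pop(0) if queue else None


class Hypervisor:
    """The 'machine' a guest sees is nothing but these service endpoints."""
    def __init__(self):
        self.storage = StorageService()
        self.network = NetworkService()


# A guest application would talk to the services directly.
hv = Hypervisor()
hv.storage.put("config", b"listen=8080")
hv.network.send("web-guest", b"GET /")

print(hv.storage.get("config"))         # b'listen=8080'
print(hv.network.receive("web-guest"))  # b'GET /'
```

The point of the sketch is what is absent: there is no device model to emulate and no driver for the guest to carry, only calls against services the hypervisor guarantees.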

If you carry the JeOS approach to its logical conclusion, what you end up with is a hypervisor that exposes not virtual machines but virtual platforms: stacks like ASP.NET, Apache/PHP, Mono, or Tomcat/JVM, or even entire applications such as SQL Server or Exchange running as their own operating systems. I suspect that we'll eventually end up somewhere like this, where the lines between the platform and the operating system continue to be erased.
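One way to see the shift is to contrast what a hypervisor is asked to provide today with what it would be asked to provide under a platform model. The field names below are illustrative assumptions, not any real hypervisor's schema.

```python
# Today: a guest is defined in terms of simulated hardware.
machine_definition = {
    "vcpus": 2,
    "memory_mb": 2048,
    "disks": [{"bus": "ide", "size_gb": 20}],
    "nics": [{"model": "e1000", "mac": "52:54:00:12:34:56"}],
    "firmware": "bios",
}

# The platform model: a guest is defined by the stack it runs and the
# services it consumes; disks, NICs, and firmware disappear entirely.
platform_definition = {
    "stack": "apache-php",        # or asp.net, tomcat/jvm, mono...
    "application": "storefront",  # hypothetical app name
    "services": {
        "network": {"listen": ["http"]},
        "storage": {"volumes": ["content"]},
        "memory": {"limit_mb": 512},
    },
}

# Nothing hardware-shaped survives in the platform definition.
hardware_keys = {"vcpus", "disks", "nics", "firmware"}
print(hardware_keys & platform_definition.keys())  # set()
```

Under this framing, the hypervisor stops answering "what hardware do you want simulated?" and starts answering "what stack do you want run?"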

I think it's about time for the next phase of virtualization: a virtual machine that isn't a machine at all.