What is I/O Virtualization (IOV)?
There are various forms of virtualization with some being much more popular than others. For example:
- Server Virtualization (we all know this one) – consolidating many physical servers into virtual servers that run on far fewer physical machines
- Desktop Virtualization – virtualizing desktops and running them on servers
- Network Virtualization – creating virtual networks in software that don’t require any physical network hardware (a must-have for server virtualization)
No matter the type of virtualization, the idea is the same: you are “decoupling the software from the hardware,” making the software hardware-independent. There are more benefits to virtualization than I have space to cover in this article but, trust me, it’s “good stuff”.
Instead, let’s talk about a topic that has been fascinating me lately and that is I/O Virtualization (or IOV).
What is I/O Virtualization?
Just as server virtualization decouples an operating system from the hardware, I/O virtualization decouples network and storage communications from their typical hardware path: the cables, the network/storage switches, and the network/storage adapters.
In my opinion, understanding IOV can best be described with pictures and math.
Here is how the typical datacenter server “does I/O” today:
With traditional I/O, EVERY server has:
- Network – one to four (or more) Ethernet connections, each requiring its own NIC, Ethernet cable, and switch port
- SAN – the large majority of servers are redundantly connected to a Fibre Channel (FC) SAN, which requires individual HBAs, FC cables, and FC switch ports
If you have 6 connections per server and 100 servers, you are talking about 600 cables, adapters, and switch ports – that’s a lot to buy, manage, and troubleshoot.
The theory with IOV is to take a single cable (or two if you want redundancy) and consolidate all the network and SAN connections onto that single, high-speed cable (does it sound similar to consolidating many smaller servers onto a single high-capacity server?).
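The arithmetic above can be sketched in a few lines of Python. This is just an illustration of the article’s example numbers (6 connections per server, 100 servers, one or two consolidated links) – the function names and defaults are mine, not vendor specifications.

```python
# Cable/port count: traditional per-adapter I/O vs. consolidated IOV.
# The defaults (4 NICs + 2 HBAs per server) match the article's
# 6-connections-per-server example and are illustrative only.

def traditional_connections(servers, nics_per_server=4, hbas_per_server=2):
    """Every NIC and HBA needs its own cable and its own switch port."""
    return servers * (nics_per_server + hbas_per_server)

def iov_connections(servers, redundant=True):
    """IOV consolidates all network and SAN traffic onto one
    high-speed link per server (two if you want redundancy)."""
    return servers * (2 if redundant else 1)

print(traditional_connections(100))          # 600
print(iov_connections(100))                  # 200 (redundant links)
print(iov_connections(100, redundant=False)) # 100
```

With the article’s numbers, consolidating onto redundant IOV links cuts the cable and switch-port count from 600 to 200 – the same kind of consolidation ratio that made server virtualization attractive.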
Here’s what it would look like:
Notice the huge reduction in network and storage cabling. That reduction is going to save you money in network and FC switches as well as time spent managing and troubleshooting all that cabling.
I like it because it just makes things simple, and that’s how things should be in a datacenter.
In this graphic, you’ll find some additional benefits of IOV:
As the slide points out, physical adapters are now virtual adapters but they work just as the traditional physical adapters would.
I first heard about I/O virtualization at VMworld 2009, where VMware was using IOV to connect the 1000+ servers in its lab environment, but I didn’t have time to go into much detail on it then. Recently, while attending VMware Partner Exchange (PEX) 2010, I had the chance to interview a real IOV guru from an IOV company, Virtensys (who also provided the graphics for this post). In the video, Stephen Spellicy of Virtensys whiteboards how IOV works, which may explain it even better than the graphics above.
In summary, virtualizing and consolidating I/O in the datacenter is an area where most IT Pros need more education. IOV is a relatively new form of virtualization, but it already offers plenty of benefits and shows how future datacenters could be simpler to build and manage – and, potentially, cheaper – thanks to IOV.