Last Update: Jun 27, 2022 | Published: Mar 12, 2012
What is virtualization? What are the different types of virtualization? And most importantly, what are the benefits of virtualization? In this guide, designed specifically for IT professionals who are new to virtualization, we’ll take a detailed look at the different types of virtualization as well as the benefits of each.
What I hope you get out of this article is that virtualization is not just for the datacenter, and it’s not just for large organizations. The same applies to its benefits: virtualization has a lot to offer IT professionals and, in many cases, end users. If you’re new to the concept of virtualization, or unfamiliar with the different shapes virtualization can take, this article is the perfect place to start.
Historically, there has been a 1-to-1 relationship between physical servers and operating systems. Low CPU, memory, and networking requirements matched nicely with the limited hardware resources available. As this model continued, however, the costs of doing business rose. The amount of power, physical space, and hardware required meant that costs were adding up.
Virtualization is all about abstraction. Hardware virtualization is accomplished by abstracting the physical hardware layer with a hypervisor (also known as a Virtual Machine Monitor). The hypervisor handles sharing the physical resources of the hardware between the guest operating systems running on the host. Physical resources are presented as abstracted devices in standard formats, so regardless of the underlying hardware platform, the guests see the same model. The virtualized operating system hooks into these resources as though they were physical entities.
Various levels of hardware virtualization exist that perform various levels of abstraction:
Full – The guest OS is unaware that it is being virtualized. The hypervisor will handle all OS-to-hardware requests on demand and may cache the results for future use. In this instance, the virtualized OS is completely isolated from the hardware layer by the hypervisor. This provides the highest level of security and flexibility as a broader range of operating systems can be virtualized.
Hardware assisted – Hardware vendors have seen value in virtualization and have tailored their devices to enhance performance or functionality. This is most evident in the AMD-V and Intel Virtualization Technology (VT-x) processor enhancements. With these processors, specific CPU calls are not translated by the hypervisor but are sent directly to the CPU. This reduces the hypervisor load and increases performance by removing the translation time from operating system calls.
Paravirtualized – The guest OS needs to be engineered in such a way that it knows it is virtualized. The kernel of the operating system is adjusted to replace instructions that cannot be virtualized with methods that interact directly with the hypervisor. The value of paravirtualized environments comes in the form of lower overhead and optimized operations. Paravirtualization is typically seen in Linux environments that include the Xen kernels, although it is increasingly common to find full virtualization vendors including paravirtualization drivers in their latest products.
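As a practical aside, the hardware-assisted extensions mentioned above show up as CPU feature flags. The sketch below (illustrative, not vendor tooling) parses a Linux `/proc/cpuinfo` dump for the `vmx` (Intel VT-x) and `svm` (AMD-V) flags:

```python
# Illustrative sketch: detect hardware-assisted virtualization support by
# checking CPU feature flags as exposed in /proc/cpuinfo on Linux.
# 'vmx' indicates Intel VT-x; 'svm' indicates AMD-V.

def virt_extensions(cpuinfo_text: str) -> set:
    """Return the hardware virtualization extensions named in a cpuinfo dump."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            found.update({"vmx", "svm"} & set(flags))
    return found

# On a real system you would read the file itself:
#   with open("/proc/cpuinfo") as f:
#       exts = virt_extensions(f.read())
sample = "processor : 0\nflags : fpu vme de pse tsc msr vmx sse2\n"
print(virt_extensions(sample))  # {'vmx'} -> Intel VT-x present
```

If neither flag appears, the hypervisor must fall back to full software translation of privileged instructions.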
Hypervisors can also be categorized into one of two primary categories:
Type I – Type I hypervisors are installed directly onto the hardware, similar to how a regular operating system may be installed on a single server. There is very little overhead associated with this technique, and performance is correspondingly higher. VMware ESXi, Microsoft Hyper-V, and Citrix XenServer are all examples of Type I hypervisors.
Type II – Type II hypervisors are installed onto an existing operating system. There is higher overhead, as the resources of the entire operating environment are managed by the host operating system, which may result in lower performance. VMware Workstation, Microsoft Virtual PC, and Oracle VirtualBox are examples of Type II hypervisors.
The main benefits of hardware virtualization include more efficient resource utilization, lower overall costs, and higher ROI, as well as increased uptime and IT flexibility. Let’s take a look at each of these benefits in more detail.
Managing applications and their distribution becomes a steep task for IT departments. Installation mechanisms differ from application to application. Some programs require certain helper applications or frameworks, and these may conflict with existing or new applications. Additionally, one-off applications exist for special users.
But how are virtual desktops handled? How are laptops handled? How do you update the software? Plus, which mechanism are you using to deploy the software? The considerations are quite daunting.
Software virtualization, like virtualization in general, abstracts the software installation procedure and creates virtual software installations. Virtualized software is an application that has been “installed” into its own self-contained unit. In Windows environments, this unit contains a virtual registry, %TEMP% directories, and storage locations. An application becomes a single unit that can be deployed as easily as copying a file to a location. Plus, the application can be allowed to interact with local system resources or stay within the unit.
The installation of the software into the self-contained unit becomes a “diff” style operation. A clean operating system is configured, a snapshot is taken of the environment, the application is installed and configured, and a new snapshot of the environment is taken. The difference between the snapshots is the virtualized application.
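The diff operation described above can be sketched in a few lines. This is a toy model, not any vendor's packaging format: each snapshot is represented as a mapping of path to content, and the "virtualized application" is whatever the installer added or changed.

```python
# Toy illustration of the snapshot-diff idea behind software virtualization:
# model each environment snapshot as a dict of path -> content, and the
# virtualized application package as the delta between snapshots.

def snapshot_diff(before: dict, after: dict) -> dict:
    """Return entries that are new or modified between two snapshots."""
    return {path: content
            for path, content in after.items()
            if before.get(path) != content}

clean = {r"C:\Windows\system.ini": "v1", r"HKLM\Software": "{}"}
installed = dict(clean, **{
    r"C:\Program Files\App\app.exe": "binary",      # new file
    r"HKLM\Software": '{"App": "1.0"}',             # modified registry key
})

package = snapshot_diff(clean, installed)
print(sorted(package))  # only the two entries the installer touched
```

Untouched parts of the clean OS (like `system.ini` here) never make it into the package, which is why the resulting unit stays small and portable.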
This methodology provides some fairly significant benefits to application managers:
In its most basic form, memory virtualization is seen as virtual memory, or swap, on servers and workstations. Conceptually, swap exists as a way to handle memory exhaustion without having to halt, or even kill, processes. Swap is a portion of the local storage environment that is designated as memory to the host system. The host sees the local swap as additional addressable memory locations and does not delineate between RAM and swap. However, the swap file is addressed at the upper bounds of the memory addressing, so physical memory is consumed before the swap is consumed. Using swap imposes a major performance penalty on the host system: the read/write speed of local storage, even solid-state drives (SSDs), is much slower than RAM. Disk contention also becomes a major issue, as the read/write rates to local storage are high and impact the ability of system operations to read from the same disk.
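The "RAM first, swap at the upper addresses" behavior can be modeled with a simplified allocator. This is purely illustrative; a real virtual memory subsystem works at page granularity with demand paging, but the ordering is the point:

```python
# Simplified model of how swap extends the address space: allocations are
# served from RAM first, and only once RAM is exhausted do they land in
# the slower swap region at the upper end of the address range.

class VirtualMemory:
    def __init__(self, ram_pages: int, swap_pages: int):
        self.ram_pages = ram_pages
        self.swap_pages = swap_pages
        self.used = 0  # pages allocated so far, from address zero upward

    def allocate(self, pages: int) -> str:
        if self.used + pages > self.ram_pages + self.swap_pages:
            raise MemoryError("out of memory and swap")
        start = self.used
        self.used += pages
        # Addresses beyond the RAM boundary live in swap.
        return "swap" if start >= self.ram_pages else "ram"

vm = VirtualMemory(ram_pages=4, swap_pages=4)
print(vm.allocate(4))  # 'ram'  -> physical memory consumed first
print(vm.allocate(2))  # 'swap' -> spills to the slower swap region
```

The model also shows why systems slow down rather than crash when RAM fills: allocations keep succeeding, just against much slower backing storage, until swap too is exhausted.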
High-bandwidth, low-latency environments are making use of memory virtualization as well. This can be seen in technologies like InfiniBand and high-performance cluster environments. Remote Direct Memory Access (RDMA) is used to provide remote access to another host’s memory without interfering with that host. Again, to the host utilizing the RDMA functionality, this becomes another section of addressable memory locations. However, the speed is much faster than that of the swap file, as it runs over high-bandwidth, low-latency connections. As converged networking over 10Gb links becomes more prevalent and Ethernet standards for RDMA are developed, RDMA is going to become an increasingly viable option.
Server virtualization vendors are taking advantage of their ability to abstract the memory resources of a given host, and are providing some interesting memory related functions. These functions include:
Benefits to using memory virtualization include:
Historically, there has been a strong link between the physical host and its locally installed storage devices. That paradigm is changing drastically, however, almost to the point that local storage is no longer needed. As technology progresses, more advanced storage devices are coming to market that provide more functionality and render local storage obsolete.
Storage virtualization is a major component of storage best practices for servers, in the form of controllers and functional RAID levels. Operating systems and applications with raw device access prefer to write directly to the disks themselves. The controllers configure the local storage in RAID groups and present the storage to the operating system as a volume (or multiple volumes, depending on the configuration). The operating system issues storage commands to the volumes, thinking it is writing directly to disk. In reality, the storage has been abstracted, and the controller determines how to write the data or retrieve the requested data for the operating system.
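The mapping a controller performs can be sketched for the simplest case, RAID 0 striping. This is an illustrative model only: the OS addresses logical blocks on a single volume, and the controller translates each one to a (disk, offset) pair.

```python
# Illustrative sketch of the abstraction a RAID controller performs:
# the OS writes to logical block N of one volume, and the controller
# maps it to a physical disk and an offset. This models RAID 0 striping.

def raid0_map(logical_block: int, num_disks: int) -> tuple:
    """Map a logical block to (disk index, stripe offset on that disk)."""
    return logical_block % num_disks, logical_block // num_disks

# Four consecutive logical writes alternate across two disks:
for block in range(4):
    disk, offset = raid0_map(block, num_disks=2)
    print(f"logical block {block} -> disk {disk}, offset {offset}")
```

Because consecutive writes land on different spindles, they can proceed in parallel, which is where striping gets its performance benefit. Mirroring and parity levels add redundancy on top of the same kind of translation.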
Storage virtualization is becoming increasingly common in various other forms:
Benefits to storage virtualization include:
Data exists in many forms in our environments. Sometimes the data is static, sometimes dynamic. Sometimes it is stored in a database, sometimes in a flat file. Sometimes it resides in the accounting system, sometimes in the operations system. Sometimes it is in Asia, sometimes in Europe. Sometimes it is integer based, sometimes string based.
Managing data location and availability can be difficult when trying to pull from many sources to analyze the data. Data virtualization is the ability to abstract the actual location, access method, and data types, allowing the end user to focus on the data itself. This is typically seen in corporate/IT dashboards, BI tools, and CRM tools.
The dashboard and BI/CRM tools are responsible for handling the abstraction of data locations. These tools are configured with various data sources that can aggregate the data into a single point for analysts to utilize. Data sources may include database connectors, APIs, website data, sensor data, file repositories, and application integrations. The analysts do not need to know where the data comes from, only that it exists and is correct.
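The pattern those tools implement can be sketched minimally: each source hides its own access method behind a common interface, and an aggregation layer merges them into one uniform view. The class names and record shapes below are hypothetical, chosen just for illustration:

```python
# Hypothetical sketch of the data-virtualization pattern: every source
# exposes the same fetch() interface, so the dashboard layer can merge
# them without knowing where or how the data is actually stored.

class CsvSource:
    def __init__(self, text):
        self.text = text                     # stands in for a flat file
    def fetch(self):
        rows = [line.split(",") for line in self.text.strip().splitlines()]
        return [{"region": r[0], "sales": int(r[1])} for r in rows]

class ApiSource:
    def __init__(self, records):
        self.records = records               # stands in for an HTTP API call
    def fetch(self):
        return [{"region": r["loc"], "sales": r["amt"]} for r in self.records]

def aggregate(sources):
    """Merge every source into one uniform record list for the analyst."""
    return [row for src in sources for row in src.fetch()]

data = aggregate([
    CsvSource("Asia,100\nEurope,200"),
    ApiSource([{"loc": "Americas", "amt": 300}]),
])
print(sum(r["sales"] for r in data))  # 600
```

The analyst's query at the end never mentions CSV or API; that is the abstraction the article describes.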
Benefits to data virtualization include:
Virtualization can be seen as the abstraction and creation of multiple logical systems on a single physical platform. This remains true for network virtualization, though less obviously than with server virtualization. Networking devices use techniques resembling both paravirtualization and hypervisors.
The first is loosely based on the idea of paravirtualization, where the underlying software creates a separate forwarding table for each virtual network, as MPLS does for each VRF. In MPLS, the OS maintains a single routing and forwarding database covering every VRF, but marks each entry with a tag identifying its owner. BGP is used to update the database, sharing both the routes and the tags to distribute the data throughout the network.
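The tagged-table idea can be modeled loosely (this is not real MPLS, just the bookkeeping): one shared table whose keys include the owning VRF, so different virtual networks can carry overlapping prefixes without colliding.

```python
# Loose model of per-VRF forwarding: a single table whose entries are
# tagged with the owning VRF, so lookups from different virtual networks
# can hold the same prefix without conflict.

class TaggedFib:
    def __init__(self):
        self.routes = {}  # (vrf_tag, prefix) -> next_hop

    def add(self, vrf, prefix, next_hop):
        self.routes[(vrf, prefix)] = next_hop

    def lookup(self, vrf, prefix):
        return self.routes.get((vrf, prefix))

fib = TaggedFib()
fib.add("CUSTOMER_A", "10.0.0.0/8", "192.0.2.1")
fib.add("CUSTOMER_B", "10.0.0.0/8", "198.51.100.1")  # same prefix, no clash

print(fib.lookup("CUSTOMER_A", "10.0.0.0/8"))  # 192.0.2.1
print(fib.lookup("CUSTOMER_B", "10.0.0.0/8"))  # 198.51.100.1
```

Overlapping address space across customers is precisely the problem VRFs solve, and the tag in the key is what keeps the two entries separate.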
The second technique resembles a hypervisor: the network device OS instantiates multiple instances of itself. Perhaps the most common example is the Cisco ASA firewall with its virtual contexts. Each context appears as a totally separate ASA instance while sharing access to the physical interfaces. No communication between contexts is possible within the ASA OS; all traffic must pass over physical interfaces.
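The isolation property of contexts can be sketched with a toy model (not ASA configuration syntax): each context keeps entirely private state, and nothing one context configures is visible to another.

```python
# Toy model of context-style partitioning: each security context holds
# its own private rule set; contexts cannot see one another's state even
# though they run on the same physical device.

class SecurityContext:
    def __init__(self, name):
        self.name = name
        self.rules = []  # private to this context

    def permit(self, src, dst):
        self.rules.append((src, dst))

    def allows(self, src, dst):
        return (src, dst) in self.rules

ctx_a = SecurityContext("tenant-a")
ctx_b = SecurityContext("tenant-b")
ctx_a.permit("10.1.0.0/16", "10.2.0.0/16")

print(ctx_a.allows("10.1.0.0/16", "10.2.0.0/16"))  # True
print(ctx_b.allows("10.1.0.0/16", "10.2.0.0/16"))  # False: rule is isolated
```

A rule added to one tenant's context has no effect on the other, mirroring the ASA behavior where each context is administered as if it were a separate appliance.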
Benefits to network virtualization include:
Virtualization overall, irrespective of the type, helps improve scalability and resource utilization. In most cases, the main benefit to IT professionals is ease of management, as virtualization helps centralize administrative tasks, whether they involve day-to-day updates or large-scale deployments and migrations.
But there’s a lot more to virtualization than what we’ve covered here. This guide was meant to help you wrap your head around the concept of virtualization, the main types of virtualization, and their benefits. What we didn’t cover are the disadvantages, challenges, and complexities of virtualization, but we’ll save those for another article.