Intro to Virtualization: Hardware, Software, Memory, Storage, Data and Network Virtualization Defined


What is virtualization? What are the different types of virtualization? And, most importantly, what are the benefits of virtualization? In this guide, designed specifically for IT professionals who are new to virtualization, we’ll take a detailed look at the different types of virtualization and the benefits of each.

What I hope you get out of this article is that virtualization is not just for the datacenter, and it’s not just for large organizations. The same goes for its benefits: virtualization has a lot to offer IT professionals and, in many cases, end users. If you’re new to the concept of virtualization, or you’re unfamiliar with the different shapes virtualization can take, this article is the perfect place to start.

What is Hardware Virtualization?

Historically, there has been a 1-to-1 relationship between physical servers and operating systems. Low CPU, memory, and networking requirements matched nicely with the limited hardware resources available. As this model continued, however, the costs of doing business rose. The amount of power, physical space, and hardware required meant that costs were adding up.

Virtualization is all about abstraction. Hardware virtualization is accomplished by abstracting the physical hardware layer with a hypervisor (also known as a Virtual Machine Monitor). The hypervisor handles sharing the physical resources of the hardware between the guest operating systems running on the host. Physical resources are presented to each guest in a standard, abstracted form, so regardless of the underlying hardware platform, the guest sees the same device model. The virtualized operating system is able to hook into these resources as though they were physical devices.

Various levels of hardware virtualization exist that perform various levels of abstraction:

Full – The guest OS is unaware that it is being virtualized. The hypervisor will handle all OS-to-hardware requests on demand and may cache the results for future use. In this instance, the virtualized OS is completely isolated from the hardware layer by the hypervisor. This provides the highest level of security and flexibility as a broader range of operating systems can be virtualized.

Hardware assisted – Hardware vendors have seen value in virtualization and have tailored their devices to enhance performance or functionality. This is most evident in the AMD-V and Intel Virtualization Technology (VT-x) processor enhancements. With these AMD and Intel processors, certain privileged CPU operations no longer have to be translated by the hypervisor and are executed directly by the CPU. This reduces the hypervisor’s load and increases performance by removing translation time from operating system calls.

Paravirtualized – The guest OS needs to be engineered in such a way that it knows it is virtualized. The kernel of the operating system is adjusted to replace instructions that cannot be virtualized with calls that interact directly with the hypervisor. The value of paravirtualized environments comes in the form of lower overhead and optimized operations. Paravirtualization is typically seen in Linux environments that include the Xen kernels, although it is increasingly common for full virtualization vendors to include paravirtualization drivers in their latest products.
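On a Linux host, a quick way to check whether the hardware-assisted support described above is present is to look for the CPU feature flags in /proc/cpuinfo: vmx indicates Intel VT-x and svm indicates AMD-V. The following is a minimal, Linux-specific sketch; the helper function itself is just an illustration, not part of any hypervisor product.

```python
# Minimal sketch: detect hardware-assisted virtualization support on a Linux host
# by looking for the CPU feature flags "vmx" (Intel VT-x) or "svm" (AMD-V).
def hw_virt_support(path="/proc/cpuinfo"):
    """Return 'Intel VT-x', 'AMD-V', or None based on the CPU feature flags."""
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    if "vmx" in flags:
                        return "Intel VT-x"
                    if "svm" in flags:
                        return "AMD-V"
    except OSError:
        pass  # not a Linux host, or /proc is unavailable
    return None

if __name__ == "__main__":
    support = hw_virt_support()
    print(f"Hardware-assisted virtualization: {support or 'not detected'}")
```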

Hypervisors also fall into one of two primary categories:

Type I – Type I hypervisors are installed directly onto the hardware, much as a regular operating system would be installed on a single server. There is very low overhead associated with this technique, and performance is generally better. VMware ESXi, Microsoft Hyper-V, and Citrix XenServer are all examples of Type I hypervisors.

Type II – Type II hypervisors are installed onto an existing operating system environment. There is higher overhead, because the host operating system manages all of the hardware resources, which may result in lower performance. VMware Workstation, Microsoft Virtual PC, and Oracle VirtualBox are examples of Type II hypervisors.
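To make the hypervisor concept a bit more concrete, the sketch below uses the libvirt Python bindings (the separately installed libvirt-python package) to connect to a local KVM/QEMU host and list its virtual machines. The connection URI qemu:///system is just one common choice, and the snippet is an illustration under those assumptions rather than something tied to the products named above.

```python
# Illustrative sketch: list virtual machines on a local KVM/QEMU host using the
# libvirt Python bindings (pip install libvirt-python). Assumes libvirtd is
# running and the current user may connect to qemu:///system.
import libvirt

def list_vms(uri="qemu:///system"):
    conn = libvirt.open(uri)  # raises libvirt.libvirtError if the connection fails
    try:
        for dom in conn.listAllDomains():
            state = "running" if dom.isActive() else "stopped"
            print(f"{dom.name():20s} {state}")
    finally:
        conn.close()

if __name__ == "__main__":
    list_vms()
```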

Benefits of Hardware Virtualization

The main benefits of hardware virtualization include more efficient resource utilization, lower overall costs, and higher ROI, as well as increased uptime and IT flexibility. Let’s take a look at each of these benefits in more detail.

  • More Efficient Resource Utilization: Physical resources can be shared amongst virtual machines. Unused resources, although allocated to a virtual machine, can be used by other virtual machines if the need exists.
  • Lower Overall Costs Due to Server Consolidation: Now that multiple operating systems can coexist on a single hardware platform, the number of physical servers, the rack space required, and power consumption all drop significantly.
  • Higher ROI: Servers can be expensive. By running multiple independent, isolated environments on a single hardware platform, IT makes better use of the purchase and gets the biggest bang for its buck.
  • Increased Uptime Due to Advanced Hardware Virtualization Features: Modern hypervisors provide highly orchestrated operations that take full advantage of hardware abstraction and help ensure maximum uptime. These functions include the ability to migrate a running virtual machine from one host to another dynamically, as well as to maintain a running copy of the virtual machine on another physical host in case the primary host fails.
  • Increased IT Flexibility: Hardware virtualization allows for quick deployment of server resources in managed and consistent ways. This results in IT being able to adapt quickly and provide the business with resources needed in an expedited time frame.

What is Software Virtualization?

Managing application installation and distribution can become a daunting task for IT departments. Installation mechanisms differ from application to application. Some programs require certain helper applications or frameworks, and these may conflict with existing or new applications. Additionally, one-off applications exist for special users.

But how are virtual desktops handled? How are laptops handled? How do you update the software? Plus, which mechanism are you using to deploy the software? The considerations are quite daunting.

Software virtualization, like virtualization in general, abstracts the software installation procedure and creates virtual software installations. Virtualized software is an application that has been “installed” into its own self-contained unit. In Windows environments, this unit contains a virtual registry, %TEMP% directories, and storage locations. An application becomes a single unit that can be deployed as easily as copying a file to a location. Plus, the application can be allowed to interact with local system resources or remain isolated within its unit.

The installation of the software into the self-contained unit becomes a “diff” style operation. A clean operating system is configured, a snapshot is taken of the environment, the application is installed and configured, and a new snapshot of the environment is taken. The difference between the snapshots is the virtualized application.
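As a rough illustration of that “diff” idea, the capture step can be thought of as comparing two filesystem snapshots and keeping only what changed. The sketch below only handles files (real products also capture registry keys, services, and so on), and the helper names are invented for illustration.

```python
# Conceptual sketch of the "diff"-style capture used in software virtualization:
# snapshot a directory tree before and after installing an application, then keep
# only the files that were added or modified.
import hashlib
import os

def snapshot(root):
    """Map relative file paths to content hashes under a directory tree."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                state[rel] = hashlib.sha256(f.read()).hexdigest()
    return state

def diff(before, after):
    """Return the set of files that were added or changed between two snapshots."""
    return {path for path, digest in after.items() if before.get(path) != digest}

# Usage: take snapshot(root) before installing the application, install and
# configure it, snapshot again, and diff(before, after) yields the file set
# that makes up the "virtualized" package.
```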

Benefits of Software Virtualization

This methodology provides some fairly significant benefits to application managers:

  • Client Deployments Become Easier: Virtual applications can be installed by copying a file to the workstation or linking to a file on a network share. Existing deployment methodologies can be leveraged to automate this functionality.
  • Added Security: Many software virtualization providers can tie execution rights to LDAP/Active Directory group membership, so only approved users are granted access and the software cannot be run on a machine that does not have access to the LDAP/Active Directory domain. Plus, time bomb functions may exist that expire the software after a specified amount of time.
  • Ease of Management: Managing updates becomes a much simpler task. Update one place; deploy the updated virtual application to the clients. If the update breaks something, just copy the original file back in place. Suddenly, it becomes possible to have a library of updated software for versioning and roll back functionality.
  • Software Migrations: Moving users from one software platform to another normally requires significant time and planning around deployment and the impact on end-user systems. With a virtualized software environment, the migration can be as simple as replacing one file with another.
  • Conflict Mitigation with Existing Software: Because software is housed in virtualized containers, applications that do not play nicely with each other can coexist on the same system. This is very useful for developers testing different software versions or running multiple versions of web browsers to verify application functionality.

What is Memory Virtualization?

In its most basic form, memory virtualization appears as virtual memory, or swap, on servers and workstations. Conceptually, swap exists as a way to handle systems that have exhausted physical memory without having to halt, or even kill, processes. Swap is a portion of the local storage environment that is designated as memory for the host system. The host sees the local swap as additional addressable memory locations and does not distinguish between RAM and swap. However, the swap file is addressed at the upper bounds of the memory address space, so physical memory is consumed before swap is used. Using swap imposes a major performance penalty on the host system: the read/write speed of local storage, even solid-state storage (SSD), is much slower than RAM. Plus, disk contention becomes a major issue, as the read/write rates to local storage are high and impact the ability of system operations to read from the same disk.
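To see the split between physical memory and swap on a Linux host, /proc/meminfo reports both. The short sketch below is Linux-specific and purely illustrative.

```python
# Minimal sketch: report physical memory vs. swap on a Linux host by parsing
# /proc/meminfo. The kernel reports these values in kibibytes (kB).
def meminfo(path="/proc/meminfo"):
    info = {}
    with open(path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key.strip()] = int(rest.split()[0])  # value in kB
    return info

if __name__ == "__main__":
    m = meminfo()
    for key in ("MemTotal", "MemFree", "SwapTotal", "SwapFree"):
        print(f"{key:10s} {m[key] / 1024:10.1f} MiB")
```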

High-bandwidth, low-latency environments are making use of memory virtualization as well. This can be seen in technologies like InfiniBand and high-performance cluster environments. Remote Direct Memory Access (RDMA) is used to provide remote access to another host’s memory without interfering with that host. Again, to the host utilizing the RDMA functionality, this becomes another section of addressable memory locations. However, access is much faster than a swap file because it runs over high-bandwidth, low-latency connections. As converged networking over 10Gb links becomes more prevalent, RDMA will become an increasingly viable option, as Ethernet standards for RDMA are being developed.

Server virtualization vendors are taking advantage of their ability to abstract the memory resources of a given host, and are providing some interesting memory related functions. These functions include:

  • The ability to share common memory pages across multiple virtual machines. This is great when a host is running multiple copies of the same operating system: there is no need for multiple copies of the same pages to exist, and sharing them frees up memory for use elsewhere (see the sketch after this list).
  • The ability to snapshot a memory state and revert back if the new state is not optimal.
  • The ability to transmit the memory state across the network to another host in order to move virtual machine operations to the new host.
  • The ability to compress physical memory contents in order to keep the physical host from utilizing swap.
  • The ability to reclaim unused, but allocated, memory for other virtual machines to utilize.
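The page-sharing idea from the first bullet above can be sketched very roughly as content-addressed storage of identical pages. Real hypervisors implement this at the page-table level with copy-on-write semantics; the class and data structures below are invented purely for illustration.

```python
# Conceptual sketch of transparent page sharing: identical memory pages from
# different virtual machines are stored once and referenced by content hash.
import hashlib

class SharedPagePool:
    def __init__(self):
        self.pages = {}   # content hash -> page bytes (stored once)
        self.refs = {}    # content hash -> number of mappings referencing it

    def map_page(self, page_bytes):
        """Store a page if unseen, otherwise just bump its reference count."""
        digest = hashlib.sha256(page_bytes).hexdigest()
        if digest not in self.pages:
            self.pages[digest] = page_bytes
        self.refs[digest] = self.refs.get(digest, 0) + 1
        return digest  # handle a "VM" would use to refer to the page

# Two VMs booting the same OS produce many identical pages; only one copy is kept.
pool = SharedPagePool()
vm1 = [pool.map_page(b"\x00" * 4096), pool.map_page(b"kernel code page")]
vm2 = [pool.map_page(b"\x00" * 4096), pool.map_page(b"kernel code page")]
print(len(pool.pages), "unique pages backing", sum(pool.refs.values()), "mappings")
```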

Benefits of Memory Virtualization

Benefits to using memory virtualization include:

  • Higher memory utilization by sharing contents and consolidating more virtual machines on a physical host.
  • A buffer of additional memory space, so services do not have to be halted while waiting for physical memory to free up.
  • Access to more memory than the chassis can physically allow.
  • Advanced server virtualization functions, like live migrations.

What is Storage Virtualization?

Historically, there has been a strong link between the physical host and its locally installed storage devices. That paradigm is changing drastically, however, almost to the point that local storage is no longer needed. As technology progresses, more advanced storage devices are coming to market that offer more functionality and make local storage increasingly unnecessary.

Storage virtualization is a major component of storage best practices for servers, in the form of controllers and functional RAID levels. Operating systems and applications that expect raw device access believe they are writing directly to the disks themselves. In reality, the controllers configure the local storage into RAID groups and present it to the operating system as a volume (or multiple volumes, depending on the configuration). The operating system issues storage commands to the volumes as though it were writing directly to disk, but the storage has been abstracted, and the controller determines how to write the data, or how to retrieve the requested data, on the operating system’s behalf.
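As a concrete, deliberately simplified illustration of that abstraction, the sketch below maps a logical block address on a presented volume to a physical disk and offset, the way a RAID-0 stripe set might. The stripe size and disk count are made-up parameters, not any vendor’s implementation.

```python
# Simplified sketch of the address translation a RAID controller performs:
# the OS writes to logical block N on the presented volume, and the controller
# decides which physical disk and offset actually hold that block (RAID-0 style
# striping, no parity).
def locate_block(logical_block, num_disks=4, stripe_blocks=16):
    """Map a logical block on the volume to (disk index, block offset on that disk)."""
    stripe = logical_block // stripe_blocks   # which stripe the block falls in
    within = logical_block % stripe_blocks    # position inside the stripe
    disk = stripe % num_disks                 # stripes rotate across the disks
    offset = (stripe // num_disks) * stripe_blocks + within
    return disk, offset

# The OS thinks block 1000 lives on "the volume"; the controller knows better.
print(locate_block(1000))  # -> (2, 248) with the defaults above
```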

Storage virtualization is becoming more and more present in various other forms:

  • File servers: The operating system is writing to a remote location with no need to understand how to write to the physical media.
  • pNFS: A component of NFS v4.1, pNFS involves making a request for data over an NFS share. However, the data may be stored in a variety of disparate locations and media. The requester has no idea where the data lives; that is handled by the NFS server.
  • DFS: Similar in concept to pNFS, DFS, Distributed File System, creates a filesystem-like view of data. However, the composition of the filesystem is differing file shares on the network. The filesystem appears to be a single volume, but it is comprised of multiple locations.
  • WAN Accelerators: Rather than send multiple copies of the same data over the WAN environment, WAN accelerators will cache data locally and present the re-requested blocks at LAN speed, while not impacting the WAN performance.
  • NAS and SAN: Storage is presented over the network to the operating system. NAS presents storage as file-level operations (like NFS and CIFS). SAN technologies present storage as block-level storage (like iSCSI and Fibre Channel) and receive operating instructions as if the storage were a locally attached device.
  • Storage Pools: Enterprise level storage devices can aggregate common storage devices, in the form of like disk types (speeds and capacity), to present an abstracted view of the storage environment for administrators to handle. The storage device handles which disks to place the data upon, versus the storage administrator deciding how to divide the available disks. This usually leads to higher reliability and performance as more disks are used.
  • Storage Tiering: Building on the storage pool concept, storage tiering analyzes the most commonly used data and places it on the highest-performing storage pool, while the least used data is placed on the lowest-performing pool (as sketched below). This operation is done automatically and without any interruption of service to the data consumer.
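A toy sketch of the tiering decision follows. The tier names, thresholds, and the access-count heuristic are invented for illustration; real arrays use far more sophisticated heat maps.

```python
# Toy sketch of storage tiering: place each extent on a tier based on how often
# it has been accessed in the most recent sampling window.
TIERS = [
    ("ssd",   100),  # hottest data: accessed 100+ times in the window
    ("sas",    10),  # warm data
    ("nl-sas",  0),  # cold data: everything else
]

def choose_tier(access_count):
    """Return the first (fastest) tier whose threshold the extent meets."""
    for tier, threshold in TIERS:
        if access_count >= threshold:
            return tier
    return TIERS[-1][0]

# extent id -> accesses observed in the last sampling window
heat_map = {"extent-001": 250, "extent-002": 12, "extent-003": 1}
placement = {extent: choose_tier(count) for extent, count in heat_map.items()}
print(placement)  # {'extent-001': 'ssd', 'extent-002': 'sas', 'extent-003': 'nl-sas'}
```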

Benefits of Storage Virtualization

Benefits to storage virtualization include:

  • Data is stored in more convenient locations away from the specific host. In the event of a host failure, the data is not necessarily compromised.
  • The storage devices are able to perform advanced functions like deduplication, replication, thin provisioning, and disaster recovery functionality.
  • By abstracting the storage level, IT operations can become more flexible in how storage is partitioned, provided, and protected.

What is Data Virtualization?

Data exists in many forms in our environments. Sometimes the data is static, sometimes dynamic. Sometimes it is stored in a database, sometimes in a flat file. Sometimes it resides in the accounting system, sometimes in the operations system. Sometimes it is in Asia, sometimes in Europe. Sometimes it is integer based, sometimes string based.

Managing data location and availability can be difficult when trying to pull from many sources to analyze the data. Data virtualization abstracts the actual location, access method, and data types, allowing the end user to focus on the data itself. This is typically seen in corporate/IT dashboards, BI tools, and CRM tools.

The dashboard and BI/CRM tools are responsible for handling the abstraction of data locations. These tools are configured with various data sources that can aggregate the data into a single point for analysts to utilize. Data sources may include database connectors, APIs, website data, sensor data, file repositories, and application integrations. The analysts do not need to know where the data comes from, only that it exists and is correct.
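A minimal sketch of the idea follows; the adapter classes and query interface are hypothetical rather than any specific BI product’s API. Each data source hides its own access details behind a common fetch method, so the consumer only ever sees rows.

```python
# Minimal sketch of data virtualization: each adapter hides where and how its
# data is stored behind a common fetch() interface, so the consumer just sees
# rows. The adapters, file names, and schema are hypothetical.
import csv
import sqlite3

class CsvSource:
    def __init__(self, path):
        self.path = path

    def fetch(self):
        with open(self.path, newline="") as f:
            return list(csv.DictReader(f))

class SqliteSource:
    def __init__(self, path, query):
        self.path, self.query = path, query

    def fetch(self):
        conn = sqlite3.connect(self.path)
        conn.row_factory = sqlite3.Row
        try:
            return [dict(row) for row in conn.execute(self.query)]
        finally:
            conn.close()

def combined_view(sources):
    """Aggregate rows from every source; the analyst never sees the plumbing."""
    rows = []
    for source in sources:
        rows.extend(source.fetch())
    return rows

# Usage (hypothetical files): the dashboard only ever calls combined_view().
# rows = combined_view([CsvSource("sales_emea.csv"),
#                       SqliteSource("ops.db", "SELECT region, amount FROM sales")])
```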

Benefits of Data Virtualization

Benefits to data virtualization include:

  • Less need for end users to know where the data lives. Connecting directly to the various sources may require deeper technical skills, higher security privileges, and an understanding of how the data is stored; data virtualization removes that burden.
  • Focus on correct analysis of the data. End users spend their time on their specific role or function, not worrying about how the data arrives, just that it does.

What is Network Virtualization?

Virtualization can be seen as the abstraction and creation of multiple logical systems on a single physical platform. For network virtualization this remains true, although not as obviously as with server virtualization. Networking devices use approaches analogous to both paravirtualization and hypervisor techniques.

The first is loosely based on the idea of paravirtualization, where the underlying software creates a separate forwarding table for each virtual network, such as MPLS does within each VRF. In MPLS, the device OS creates a routing and forwarding database for each VRF, marking each entry with a tag that identifies which VRF owns it. BGP is used to update the database and distributes the routes, along with their tags, throughout the network.
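A simplified sketch of that per-VRF separation follows; the class, names, and data structures are invented for illustration, and real routers implement this in the forwarding plane rather than in application code.

```python
# Simplified sketch of VRF-style separation: each virtual network gets its own
# forwarding table keyed by a VRF name, so overlapping prefixes in different
# VRFs never collide.
import ipaddress

class VrfRouter:
    def __init__(self):
        self.tables = {}  # vrf name -> {prefix: next hop}

    def add_route(self, vrf, prefix, next_hop):
        self.tables.setdefault(vrf, {})[ipaddress.ip_network(prefix)] = next_hop

    def lookup(self, vrf, address):
        """Longest-prefix match within a single VRF's table only."""
        addr = ipaddress.ip_address(address)
        matches = [p for p in self.tables.get(vrf, {}) if addr in p]
        if not matches:
            return None
        return self.tables[vrf][max(matches, key=lambda p: p.prefixlen)]

r = VrfRouter()
r.add_route("customer-a", "10.0.0.0/24", "192.0.2.1")
r.add_route("customer-b", "10.0.0.0/24", "198.51.100.1")  # same prefix, different VRF
print(r.lookup("customer-a", "10.0.0.5"))  # 192.0.2.1
print(r.lookup("customer-b", "10.0.0.5"))  # 198.51.100.1
```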

In the second, hypervisor-like approach, the network device OS instantiates multiple instances of itself. Perhaps the most common example is the Cisco ASA firewall with its use of virtual contexts. Each context appears as a completely separate ASA instance while sharing access to the physical interfaces. No communication between contexts is possible within the ASA OS; all traffic must pass over physical interfaces.

Benefits of Network Virtualization

Benefits to network virtualization include:

  • Service Orientation: As each business service is added to your IT infrastructure, some parts of the infrastructure are shared resources and others are dedicated resources. For example, an Ethernet switch is a shared resource, while a VLAN configured on that shared switch can act as a dedicated resource for a single service.
  • Better Change Control: Virtualization improves change management by separating functions into distinct areas. Changes to the configuration within one virtualized area have no impact on another area, making change approval easier.
  • Cost Savings: The cost of deploying and maintaining network equipment is high, and it can be cost effective to share firewalls, switches, and load balancers between services, instead of buying new physical equipment each time.
  • Security: Because the systems are logically separate, many security issues and risks can be easily addressed. Topics such as limiting access and limiting knowledge are simpler to handle.

Summary

Virtualization overall, irrespective of the type, helps improve scalability and resource utilization. In most cases, the main benefit to IT professionals is the ease of management, as virtualization helps to centralize administrative tasks, whether they involve day-to-day updates or large scale deployments and migrations.

But there’s a lot more to virtualization than what we’ve covered here. This guide was meant to help you get your head wrapped around the concept of virtualization, the main types of virtualization and their benefits. What we didn’t cover are the disadvantages, challenges and complexities of virtualization, but we’ll save that for another article.