One important evolution in server technology is virtualization, which has become an almost ubiquitous way to make servers more efficient. In this article, we’ll look at how it works and why it’s so important for networks running multiple applications.
What is virtualization?
Virtualization is hardly new: the concept dates back to the 1960s, and virtual servers have long outnumbered physical ones. So what does it mean, and how does it save you time, money and resources?
Virtualization is a method that takes a single server and divides it into many smaller, “virtual” server environments. It’s standard practice in server architecture and allows one physical server to take on a bunch of tasks, while consolidating hardware and maximizing resources. With virtualization, you’re not limited to housing just one application per server. You can host a number of applications all on one machine within separate environments that are encapsulated apart from the core operating system.
If a traditional server is a 1:1 ratio (1 machine: 1 operating system), virtualization creates a 1:many ratio. All of a sudden, that single server is getting a lot more done at once. Need to scale up and create a large number of virtual machines quickly? Virtualization makes that easy, too. You can standardize how you program your smaller virtual servers and then rapidly roll them out when needed. You can also migrate between servers without downtime, which is another time-saving bonus.
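That 1:many ratio can be sketched with a toy resource model. This is purely illustrative (the `Host` class and all names here are made up for the example, not any real hypervisor API): one physical machine with fixed CPU and memory is carved into several standardized guests.

```python
# Toy model of one physical host divided into several virtual machines.
# All names here are illustrative, not a real hypervisor API.

class Host:
    def __init__(self, cpus, mem_gb):
        self.cpus = cpus        # physical cores available
        self.mem_gb = mem_gb    # physical RAM available
        self.vms = []

    def create_vm(self, name, cpus, mem_gb):
        # Refuse to overcommit beyond the physical capacity.
        used_cpus = sum(vm["cpus"] for vm in self.vms)
        used_mem = sum(vm["mem_gb"] for vm in self.vms)
        if used_cpus + cpus > self.cpus or used_mem + mem_gb > self.mem_gb:
            raise RuntimeError(f"not enough capacity for {name}")
        vm = {"name": name, "cpus": cpus, "mem_gb": mem_gb}
        self.vms.append(vm)
        return vm

# One physical server (1) hosting many guests (many),
# all stamped out from the same standardized template:
host = Host(cpus=16, mem_gb=64)
for i in range(4):
    host.create_vm(f"web-{i}", cpus=2, mem_gb=8)

print(len(host.vms))  # → 4
```

Because every guest comes from the same template, rolling out more of them when demand spikes is just another loop iteration, which is the scaling convenience described above.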
Think of a server like a building with different floors. The first-floor lobby is the server’s operating system. On top of that is the second floor, a software layer that’s broken into rooms—the smaller, individual servers with their own operating systems. Sometimes there’s even a third floor, divided into even smaller rooms.
Hardware vs. software virtualization
There are also different approaches to virtualization. First, there’s hardware-assisted virtualization, where the computer’s CPU includes extensions (such as Intel VT-x or AMD-V) that handle common virtualization tasks directly in hardware and speed things up. That’s like adding a separate, faster elevator line in the lobby for regular visitors. Then, there’s software virtualization, which creates those second- and third-floor rooms that house the smaller operating systems.
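On a Linux machine, you can check whether your CPU exposes these hardware extensions by looking for the relevant flags in `/proc/cpuinfo` (`vmx` is Intel VT-x, `svm` is AMD-V). A quick sketch:

```shell
#!/bin/sh
# Look for hardware virtualization flags in the CPU feature list.
# vmx = Intel VT-x, svm = AMD-V. Prints "yes" or "no".
if grep -qE 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
    echo "hardware virtualization: yes"
else
    echo "hardware virtualization: no"
fi
```

If the answer is "no," software-only virtualization still works, but guests will run noticeably slower.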
An important component of this is the hypervisor (or virtual machine manager, VMM), a layer of software that creates and runs these “guest” operating systems. It’s like the building’s floor plans and utilities, and it can either run directly on the hardware (Type 1, or “bare-metal,” which is most efficient because it has direct access to the hardware) or run on top of an existing OS (Type 2, or “hosted,” one layer up). Type 2 hypervisors are a bit less efficient, though hardware-assisted virtualization has narrowed the gap over the years.
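The difference between the two hypervisor types comes down to where they sit in the stack. A minimal sketch of the layering (purely illustrative lists, ordered bottom to top):

```python
# Illustrative software stacks for the two hypervisor types,
# ordered from hardware (bottom) to guests (top).

type_1_stack = [
    "physical hardware",
    "hypervisor (Type 1, bare-metal)",  # runs directly on hardware
    "guest OS A",
    "guest OS B",
]

type_2_stack = [
    "physical hardware",
    "host operating system",            # the extra layer
    "hypervisor (Type 2, hosted)",
    "guest OS A",
    "guest OS B",
]

# Type 2 puts one extra layer between guests and hardware,
# which is why it tends to be a bit less efficient.
print(len(type_2_stack) - len(type_1_stack))  # → 1
```

That single extra layer is the whole trade-off: Type 2 is easier to install (it’s just another application on your existing OS), while Type 1 gets the direct hardware access.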
In addition to getting more done at once on the same machine, virtualization is helpful for maintenance, application security, scaling up, and provisioning, because the environments are isolated from one another.
Virtualization in the cloud
Virtualization has made a lot of what we do in the cloud (and hybrid cloud) possible. It’s become a large aspect of how we deploy operations to the cloud because it provides a layer of abstraction that’s essential to distributed environments. But it’s not without its challenges. Virtualizing in a hybrid cloud environment, for example, means that you could have a collection of different virtual machines running across public and private clouds, each hosting unique services. The complexity of securing a multi-layered, virtualized environment in a hybrid cloud grows right along with the complexity of the setup, so be sure to plan for IT security, data security, and virtual network monitoring.