Consumer technology is constantly changing, and the same goes for the technology used in data centers around the world. Just as consumers are now able to buy a single smartphone device to do just about anything they can dream up, IT buyers can now acquire a single device or solution for just about any infrastructure service they need.
This single device/solution concept is made possible by faster and faster server hardware, virtualization, and hyper-convergence.
In this course, we'll start by briefly introducing virtualization as a concept, in case it's new to you, and then discuss the state of virtualization today. Later on, we'll introduce you to hyper-convergence and the way it solves many of the challenges that virtualization introduces. By the end of the course, you'll understand the various types of hyper-convergence architectures and what sorts of use cases benefit most from hyper-convergence.
Already a Virtualization Expert?
If you already have a good grasp of virtualization and the advantages it brings, as well as the challenges it introduces, feel free to skip this lesson and move on to Lesson 2, "Hyperconvergence Foundations".
If you haven't started to virtualize your server infrastructure, or if you have started virtualizing but haven't yet achieved 100% virtualization, read this lesson before moving on. In this lesson, you'll learn about virtualization and become motivated to "virtualize everything", as is the norm at more and more companies.
What Is Virtualization?
If you work in an IT organization, surely you have at least heard about virtualization. Virtualization has changed the world of technology for large enterprises, small and medium-size businesses (SMBs), IT pros, and even many consumers.
Using software, virtualization abstracts away something that was traditionally physical and runs it as virtual.
But what does that mean, virtual? With server virtualization, the software emulates hardware for the purpose of abstracting the physical server from the operating system. This abstraction is done using a special piece of software called a hypervisor. The hypervisor either runs on top of or inside an operating system (such as Windows Server or a Linux variant) and allows you to run virtualized servers on top of that hypervisor. Those virtualized servers are typically called virtual machines or VMs. It is inside the VMs that you can install just about any guest operating system, applications, and data that you choose.
What companies large and small can do with virtualization is to take their existing physical servers, virtualize them, and run them inside VMs that run on top of a single host server. The result of this type of virtualization is that companies can consolidate many physical servers onto far fewer physical servers. In fact, some organizations can consolidate all their VMs onto a single host. Or preferably, as best practices would dictate, they can run all their virtual machines across two hosts in a cluster, storing virtual machine images on shared storage, so that one host could take over for the other host in the event of failure.
Instead of using the dictionary definition of abstraction to describe virtualization, most admins describe it with phrases such as:
Virtualization allows you to run much, much more on a single physical server (host) than ever before.
Virtualization allows IT organizations to do more with less.
Virtualization is when you run your servers on top of software-based virtual machines instead of on hardware machines.
As you can see in Figure 1-1, with a single physical server (having its own CPU, memory, and storage I/O resources), you are layering a hypervisor on top in place of the typical server operating system. Then, on top of that hypervisor, you are running two VMs, each with its own CPU, memory, and I/O resources, so that you can install your own guest operating system to run applications and store company data.
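To make the layering in Figure 1-1 concrete, here is a minimal, purely illustrative Python model of a host whose "hypervisor" carves physical resources into VMs. All class names, VM names, and resource figures are hypothetical and chosen only for illustration; real hypervisors are vastly more sophisticated.

```python
# Toy model of the Figure 1-1 layering: one physical host, a "hypervisor"
# that admits VMs only while unreserved CPU and memory remain.

class Host:
    def __init__(self, cpu_cores, memory_gb):
        self.cpu_cores = cpu_cores
        self.memory_gb = memory_gb
        self.vms = []

    def start_vm(self, name, cpu_cores, memory_gb):
        """Admit a VM only if enough unreserved resources remain."""
        used_cpu = sum(vm["cpu"] for vm in self.vms)
        used_mem = sum(vm["mem"] for vm in self.vms)
        if used_cpu + cpu_cores > self.cpu_cores or used_mem + memory_gb > self.memory_gb:
            raise RuntimeError(f"Not enough resources for {name}")
        self.vms.append({"name": name, "cpu": cpu_cores, "mem": memory_gb})

# Two former physical servers consolidated onto one host:
host = Host(cpu_cores=16, memory_gb=128)
host.start_vm("web01", cpu_cores=4, memory_gb=16)
host.start_vm("db01", cpu_cores=8, memory_gb=64)
print(len(host.vms))  # -> 2
```

The point of the sketch is simply that the hypervisor, not the guest operating system, owns the physical resources, and each VM sees only the slice it was given.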
IT professionals have looked to virtualization to save them from some serious challenges. Namely, virtualization has helped to overcome availability challenges and increase operational efficiency.
How Virtualization Was Supposed to Change IT
It’s great to look at server virtualization from the perspective of the business and show all the money that can be saved. After all, in many cases, some of that money saved can be used in the IT organization for other uses.
But what if you look at server virtualization from the perspective of the staff who administers the infrastructure? How does server virtualization change the daily life of the infrastructure admin?
Operational Efficiency
Admins are continually pushed to add more and more applications or support more users and devices. However, rarely are they offered any additional resources to manage and support all the additional infrastructure required. No additional administrative staff, no additional budget, and in many cases, not even any additional infrastructure to run the applications.
In other words, admins are simply expected to “do more with what you have… or with less”. This is especially true at SMBs, which have inherently smaller budgets than large enterprises.
Server virtualization is one of the few solutions that can actually allow admins to accomplish this “do-more-with-less” goal.
Server virtualization offers far greater efficiency in administration because:
Virtualized servers (VMs) are portable. They can easily be moved from one server to another, their virtual hardware can be resized when new resources are needed, and they can be cloned or copied to add more VMs.
Virtualized servers (VMs) are all managed from a single centralized management interface. Monitoring, performance management, and troubleshooting are all far more efficient than having many physical servers to contend with.
By having many fewer servers, admins have fewer servers to keep current. This is especially helpful when servers need updating (both hardware and software) or when troubleshooting, should the unexpected occur.
Technology Agility
End users expect admins to be able to bring up new applications or VMs within minutes and ensure that applications never go down.
Meeting those expectations is next to impossible with traditional physical servers; however, with server virtualization, admins can meet them easily.
With server virtualization, VMs are hardware independent and can be easily backed up, replicated, restored, cloned, or moved from one server or site to another.
Server virtualization allows admins to create a library of VM images and spin up new VMs whenever needed.
Finally, VMs can easily be moved from server to server or site to site without downtime in most cases.
Advanced Availability Features
Server virtualization also allows administrators to leverage more advanced data center functionality than would ever be possible with a purely physical data center. Here are some examples:
Virtualization backup. Virtualization backup makes data protection easy because it can easily back up only the changed blocks of a VM’s disk storage and send them to tape. Protected VMs can be recovered onto other servers as needed, and the underlying hardware is abstracted. As a result, VMs can easily be restored onto a very different physical server than the original.
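The "changed blocks" idea above can be sketched in a few lines: split a disk image into fixed-size blocks, hash each block, and ship only the blocks whose hashes differ from the previous backup. This is a toy illustration of the concept, not any vendor's actual changed-block tracking implementation; the block size and sample data are hypothetical.

```python
# Toy changed-block detection: hash fixed-size blocks of two disk
# snapshots and report which block indices differ.
import hashlib

BLOCK_SIZE = 4  # toy block size in bytes; real products use far larger blocks

def block_hashes(data: bytes) -> list:
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def changed_blocks(old: bytes, new: bytes) -> list:
    """Return the indices of blocks that differ between two snapshots."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, (a, b) in enumerate(zip(old_h, new_h)) if a != b]

disk_v1 = b"AAAABBBBCCCCDDDD"  # yesterday's image: 4 blocks
disk_v2 = b"AAAABXBBCCCCDDDD"  # today's image: only block 1 changed
print(changed_blocks(disk_v1, disk_v2))  # -> [1]
```

Backing up one changed block instead of the entire image is what makes incremental VM backups so much cheaper than full copies.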
Replication. Replication can be done all in software for any VM or group of VMs that need offsite data protection.
Virtualization-Induced Challenges
As you’ve learned above, server virtualization immediately offers the IT organization numerous benefits. However, data centers rarely shrink. Data center footprints tend to grow over time, creating the need to add more virtualization hosts (physical servers) to provide resources to run more VMs. As the infrastructure and application criticality grows, so does the need for high availability. Availability assurance is one of the most popular features of most server virtualization hypervisors. With server virtualization, when a physical server fails, all VMs that were running on it can be automatically restarted on surviving hosts.
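The automatic restart behavior described above can be sketched as a simple placement loop: when a host fails, each of its VMs is restarted on the surviving host with the most free capacity. This is a hedged, simplified illustration; real HA features add admission control, restart priorities, and heartbeating, and all host names and capacity numbers below are hypothetical.

```python
# Toy HA failover: restart every VM from a failed host on the surviving
# host with the most free capacity, or fail if no host has room.

def fail_over(hosts, failed):
    """hosts: {host: {"capacity": int, "vms": {vm_name: size}}}. Mutates in place."""
    orphans = hosts.pop(failed)["vms"]
    for vm, size in orphans.items():
        # Pick the surviving host with the most unreserved capacity.
        target = max(
            hosts,
            key=lambda h: hosts[h]["capacity"] - sum(hosts[h]["vms"].values()),
        )
        free = hosts[target]["capacity"] - sum(hosts[target]["vms"].values())
        if size > free:
            raise RuntimeError(f"No capacity to restart {vm}")
        hosts[target]["vms"][vm] = size

cluster = {
    "host-a": {"capacity": 32, "vms": {"web01": 8, "db01": 16}},
    "host-b": {"capacity": 32, "vms": {"app01": 8}},
    "host-c": {"capacity": 32, "vms": {}},
}
fail_over(cluster, "host-a")
survivors = sorted(vm for h in cluster.values() for vm in h["vms"])
print(survivors)  # -> ['app01', 'db01', 'web01']: every VM is running again
```

Notice the implicit requirement: the surviving hosts can only restart the orphaned VMs because they can read those VMs' disk files, which is exactly why shared storage enters the picture next.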
While high availability in server virtualization may be readily available and easy to implement, the same is not true for high availability for storage.
Because virtual server host-to-host failover requires shared storage to work, data center architects must deploy a shared storage system (SAN or NAS) on which to store the VM disk files. High availability for server virtualization mitigates a host failure in short order but does nothing to mitigate a failure of the shared storage itself. Because shared storage is often complex and expensive, the architecture that many server virtualization designs end up with is what's called the "3-2-1 design" (also known as the "inverted pyramid of doom").
Figure 1-2: The 3-2-1 infrastructure design
The 3-2-1 design (shown in Figure 1-2) is when, for example, you have 3 hosts, 2 network switches (for redundancy), and 1 shared storage array where all data is stored. In the 3-2-1 design, the shared storage array is the single point of failure, meaning that if the single shared storage array fails, everything else in the infrastructure goes down as well.
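Some back-of-the-envelope availability math shows why the single array dominates the design. Under the usual simplifying assumption that failures are independent, redundant components are down only when all of them are down (parallel), while the stack as a whole is up only when every tier is up (series). The per-component availability figures below are hypothetical, chosen only to illustrate the effect.

```python
# Availability sketch for the 3-2-1 design: 3 hosts and 2 switches are
# redundant (parallel), but everything sits in series behind 1 array.

def parallel(*avail):
    """Availability of N redundant components (any one suffices)."""
    unavail = 1.0
    for a in avail:
        unavail *= (1.0 - a)
    return 1.0 - unavail

def series(*avail):
    """Availability of components that must ALL be up."""
    total = 1.0
    for a in avail:
        total *= a
    return total

host = 0.99     # hypothetical per-host availability
switch = 0.999  # hypothetical per-switch availability
array = 0.999   # the single shared storage array

hosts = parallel(host, host, host)   # any one host can run the VMs
switches = parallel(switch, switch)  # two redundant switches
stack = series(hosts, switches, array)
print(round(stack, 6))  # -> 0.998998: capped just below the array's 0.999
```

However redundant the hosts and switches become, the whole stack can never be more available than the single array at the bottom of the pyramid.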
Unfortunately, too many organizations are stuck in the 3-2-1 design even as the number of servers in their infrastructure grows well beyond three hosts. Even large companies with 50 or more hosts still use this “inverted pyramid of doom” infrastructure design simply because they can’t afford the cost or handle the complexity to move beyond it. That’s because, to move to a redundant storage infrastructure where you don’t have the shared storage as the single point of failure, you must implement an infrastructure design similar to the one shown in Figure 1-3.
Figure 1-3: Virtual infrastructure with high availability for storage
With this design, you implement a redundant SAN or NAS array. When you do so, that array can eliminate the “1” in the 3-2-1 design architecture and ensure that critical applications running in the virtual infrastructure don’t go down should the shared storage suffer a failure.
The first challenge in this infrastructure addition is, of course, the cost. Buying shared storage to begin with is a challenge for many businesses. The idea of buying redundant shared storage and paying essentially double the price is painful.
The second challenge is complexity. Infrastructure admins commonly wear many different hats. Here are just some of the things an average infrastructure admin might have to deal with:
Hypervisor administration
Network administration with full redundancy
SAN (e.g., targets, LUNs, multipathing) or NAS administration
Hardware compatibility list (HCL) management
Patching and updating multiple systems and related firmware
Dealing with multiple support groups when unexpected trouble occurs, which usually results in finger-pointing and inefficiency
Maintaining multiple maintenance contracts, which is costly and time-consuming
Virtualization Is a Commodity
We’re pushing into the third decade of x86 virtualization being available to consumers, and into the second decade of server virtualization being utilized in the mainstream data center. At this point, most data center architects just expect virtualization. Many organizations have a “virtual first” policy, which essentially dictates that all workloads should be virtualized unless there’s a good reason to do otherwise.
Accepting virtualization as a standard data center practice is great, and it provides tremendous benefits to IT organizations and to the businesses they serve. However, the complexity that eats away at growing virtualized environments can start to negate some of the effects that virtualization was adopted for in the first place.
What modern data centers truly need is a solution that offers them all the benefits of virtualization without all the complexities that come along with virtualization, once the virtual infrastructure grows and advanced functionality is added (such as high availability for storage). Hyperconvergence is that solution, and you’re going to learn about that next.
Up Next
You now have a good understanding of what virtualization is and how it is supposed to help your company. But you’ve also just begun to realize that even while virtualization solves some big problems in the data center, it also introduces some new ones. In the next lesson, you’ll learn about a data center architecture called hyper-convergence that leverages virtualization and software-defined storage (SDS) to take efficiency, agility, and cost savings to the next level.