Is VMware Clustering / VMotion Complex Compared to Microsoft Failover Clustering?

My last post on VMware VMotion prompted several readers to protest, perhaps because of its provocative title. What I did was to compare VMware clustering with Microsoft failover clustering, and I came to the conclusion that both significantly add to the complexity of the environment. Interestingly, most commenters agreed that Microsoft clustering is complex but insisted that VMware clustering is not, yet failed to explain exactly why.

The complexity of clusters only partly lies in the software actually doing the clustering. What makes the stuff complex is the infrastructure you need for clustering. Let us compare Microsoft’s and VMware’s requirements:

Requirements for Microsoft Failover Clustering

Instead of defining hard requirements, Microsoft gives recommendations. In practice, the following setup is often used:

  • Shared storage on a SAN
  • Dedicated private network for intra-cluster communication
  • Similar hardware and software on all nodes

Requirements for VMware Clustering

In contrast to Microsoft, VMware very clearly defines hard requirements:

  • Some kind of shared storage, typically on a SAN
  • Dedicated private network for intra-cluster communication
  • Compatible set of processors in all cluster nodes
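
The processor requirement exists because a running guest may depend on instruction-set features of its current host, so the target node must offer at least the same feature set. A minimal Python sketch of that check, using hypothetical feature-flag sets rather than real CPU inventories:

```python
# Sketch of the CPU-compatibility check behind live migration.
# Feature-flag sets below are hypothetical examples, not real inventories.

def can_migrate(source_flags, target_flags):
    """A VM can move only if the target host supports every CPU
    feature the source host may have exposed to the guest."""
    return set(source_flags) <= set(target_flags)

node_a = {"sse2", "sse3", "nx"}           # older CPU
node_b = {"sse2", "sse3", "nx", "ssse3"}  # newer CPU, superset of node_a

print(can_migrate(node_a, node_b))  # True: older -> newer host works
print(can_migrate(node_b, node_a))  # False: newer -> older host does not
```

Real hypervisors refine this with CPU feature masking to level out differences between hosts, but the principle is the same subset test.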

It’s the Infrastructure!

Now, these requirements are strikingly similar. What should be noted, though, is that with VMware they are actual hard requirements, while with Microsoft they are only recommendations.

So, where does complexity come into play? With clusters, you simply have more components to worry about than with single servers. When administering a node, you always have to keep in mind that there are other nodes that should be configured similarly. Your network people must set up and maintain an additional logical network. Most importantly, though, you need a SAN. Don’t tell me SANs are simple, because they are not. And since SANs really need to be highly available, you need everything redundant: HBAs/NICs, switches, I/O controllers, and so on. Just understanding a diagram of such a setup is beyond a large portion of the typical IT staff, let alone designing and managing it.
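
The redundancy point can be made concrete with back-of-the-envelope availability arithmetic. The 99% per-component figure below is an assumed number for illustration only:

```python
# Availability of a SAN path consisting of HBA -> switch -> I/O controller.
# The 0.99 per-component availability is an assumption for illustration.

def serial(*avail):
    """All components in a chain must work for the path to work."""
    p = 1.0
    for a in avail:
        p *= a
    return p

def redundant_pair(a):
    """Two identical components in parallel; the stage survives
    as long as at least one of the two works."""
    return 1 - (1 - a) ** 2

A = 0.99  # assumed availability of each single component

single_path = serial(A, A, A)
dual_path = serial(redundant_pair(A), redundant_pair(A), redundant_pair(A))

print(f"single path: {single_path:.4%}")  # roughly 97%
print(f"dual path:   {dual_path:.4%}")    # roughly 99.97%
```

Duplicating every stage lifts path availability from roughly 97% to roughly 99.97%, which is exactly why it is done — but it also doubles the number of HBAs, switches, and controllers someone has to configure, monitor, and keep compatible.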

Wrap-Up

As before, I want to point out that I am not against clusters in any way. But I happen to think that managing clusters along with all the required infrastructure is not a trivial job. In many cases, the benefits of a clustered solution will outweigh the trouble, but there may be situations where clustering is not justified.

Also, I try not to be biased: I have already written articles that are pro VMware and con Microsoft.

When comparing the complexity of VMware and Microsoft clustering, I cannot find any fundamental difference. This is neither pro nor con VMware, but there seems to be a great number of VMware evangelists around who take it as an affront if any of VMware’s products is considered less than miles ahead of the competition, especially Microsoft.

Of course, I am willing to learn. If you feel that I have misjudged the situation, please tell me exactly where. Do not write about how great or powerful VMotion is - I already know that - but explain why it requires a less complex infrastructure than Microsoft failover clustering.
