IBM Debuts Hyperconverged Servers

In May, IBM announced it was partnering with Nutanix to “bring new workloads to hyperconverged deployments.” In July IBM unveiled two new hyperconverged systems. So what does IBM’s move into the hyperconverged infrastructure market mean? For that matter, what is a hyperconverged infrastructure?

Per Wikipedia, hyperconverged infrastructure describes systems that virtualize everything: a hypervisor, software-defined storage and software-defined networking, typically running on commodity hardware.

That's different from the IBM Power Systems servers I've used over the years, where the machines connect to a storage area network (SAN) via Fibre Channel adapters. PowerVM gives me a great hypervisor and access to an internal network switch, but a hyperconverged cluster of servers has direct-attached disks, and the servers communicate over a 10G Ethernet network, sans a SAN. Seriously, no SAN is involved.

So why is IBM interested in Nutanix? Nutanix claims to make your underlying infrastructure invisible, and the company has been growing by leaps and bounds over the past few years.

It’s very possible that you are already running—or at least thinking about running—an x86-based Nutanix cluster. Historically, Nutanix clusters would run on x86 hardware from Nutanix, Dell, HP or Lenovo. You would set up your cluster and choose your hypervisor: ESXi, Hyper-V or Nutanix’s free hypervisor, AHV, which is based on CentOS KVM.

As noted, IBM has two new servers, the CS821 and CS822, which run the Nutanix software. They’re available in a few different hardware configurations.

The CS821 is model 8005-12N. It's a 1U server with two 10-core 2.09 GHz POWER8 CPUs (up to 160 threads), 256 GB of memory and 7.68 TB of flash.

The CS822 is model 8005-22N. It's a 2U server with two 11-core 2.89 GHz POWER8 CPUs (up to 176 threads), 512 GB of memory and 15.36 TB of flash.
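Those maximum thread counts fall out of POWER8's SMT8 mode (up to eight hardware threads per core). A quick sanity check in Python (the function name is mine, just for illustration):

```python
def max_threads(sockets: int, cores_per_socket: int, smt: int = 8) -> int:
    """Maximum hardware threads: POWER8 runs up to 8 SMT threads per core."""
    return sockets * cores_per_socket * smt

print(max_threads(2, 10))  # CS821: 2 sockets x 10 cores x SMT8 = 160
print(max_threads(2, 11))  # CS822: 2 sockets x 11 cores x SMT8 = 176
```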

Now, under the IBM-Nutanix union, you have a choice when it comes to the processor: POWER or x86. The CS821 and CS822 servers run AHV, and the virtual machines running on top of the hypervisor are running Linux on Power. AIX and IBM i aren’t supported as virtual machines at this time.

Nutanix handles all cluster management through its Prism product. The management interface is accessible via browser, command line, shell, etc. You mix and match your clusters based on the hypervisor you pick, and run them all through the same instance of Prism (although you would have to drill down to manage each cluster individually). With the CS821 and CS822 machines, this means that your new POWER-based cluster will appear in Prism as just another cluster that happens to use a different processor. You won't be able to mix POWER and x86 nodes in the same cluster, but you can still manage a POWER cluster in much the same way as you'd manage an environment of existing x86 clusters.

What exactly do you gain by running Nutanix software? For starters, it's an established product that's scalable, reliable and distributed. The storage layer is handled by the Acropolis Distributed Storage Fabric (ADSF), which determines where to store your data on disk. Since a minimum cluster consists of three nodes, out of the box you get resilience: data is copied locally and also to at least one other node, depending on the resiliency factor you choose and how many nodes are in the cluster.
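To make the "local copy plus at least one remote copy" idea concrete, here's a toy sketch of replica placement. The function and its round-robin spread are invented for illustration; ADSF's actual placement logic is considerably more sophisticated.

```python
def place_replicas(block_id, local_node, nodes, copies=2):
    """Toy placement: one copy stays on the node that wrote the data,
    the remaining copies go to other nodes in the cluster.
    (Illustrative only; not how ADSF actually chooses targets.)"""
    others = [n for n in nodes if n != local_node]
    # Spread remote copies across the other nodes, keyed by block id.
    remote = [others[(block_id + i) % len(others)] for i in range(copies - 1)]
    return [local_node] + remote

# Minimum three-node cluster, two copies of each block:
placement = place_replicas(block_id=7, local_node="node-a",
                           nodes=["node-a", "node-b", "node-c"])
print(placement)  # the local copy plus one remote copy
```

The point of the sketch: losing any single node still leaves at least one surviving copy of every block, which is where the out-of-the-box resilience comes from.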

ADSF is designed for virtualization. It handles tiering across your spinning hard disks, SSDs, etc., and, as your VMs relocate to different hosts in the cluster, it will take care of getting the hot data to the right node. In addition, ADSF handles snapshots, clones, deduplication and compression.
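As a toy illustration of the tiering idea: extents that are read often belong on flash, and cold extents can live on spinning disk. The threshold and function here are made up for the sketch; ADSF's real heuristics track far more than a read rate.

```python
def pick_tier(reads_per_min, hot_threshold=100.0):
    """Toy tiering decision: hot extents go to SSD, cold extents to HDD.
    (The threshold is invented for illustration.)"""
    return "ssd" if reads_per_min >= hot_threshold else "hdd"

print(pick_tier(500))  # frequently read extent -> 'ssd'
print(pick_tier(2))    # rarely read extent -> 'hdd'
```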

You can set up replication factors for your storage depending on how many nodes you have in your cluster. For example, choosing RF3 will allow for one node in your cluster to fail. RF5 will allow for two nodes to fail.
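Those RF numbers are consistent with a majority-quorum rule: with RF copies, a strict majority must survive, so floor((RF - 1) / 2) nodes can fail. That framing is my reading of the numbers above, not something stated in Nutanix's documentation:

```python
def failures_tolerated(rf):
    """With rf copies and a majority quorum, a strict majority must survive,
    so (rf - 1) // 2 node failures can be absorbed."""
    return (rf - 1) // 2

print(failures_tolerated(3))  # RF3: one node can fail
print(failures_tolerated(5))  # RF5: two nodes can fail
```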

When it’s time to grow your cluster because you need more CPU, memory or disk, just add another node. It’s seamlessly discovered and integrated.

For an in-depth look at the technical specifications of the product, I recommend the Nutanix Bible.

Part 1 offers a brief history of infrastructure and discusses the problems that Nutanix is trying to solve. Part 2 primarily covers Prism: the basics of the GUI and navigation, upgrading your cluster and accessing I/O metrics. There are screen shots. In addition, there's a capacity planning feature that projects when it might make sense to add nodes based on current and predicted workloads.

Part 3 is the book of Acropolis, the storage, compute and virtualization platform. Acropolis is "a back-end service that allows for workload and resource management, provisioning, and operations…This gives workloads the ability to seamlessly move between hypervisors, cloud providers, and platforms." Included is a visual comparison of the Acropolis and Prism layers. Another image shows a typical node. That's followed by a visual of how a cluster looks with the nodes linked together.

Different Nutanix components are defined, including:

  • Cassandra, the metadata store
  • Zookeeper, the cluster configuration manager
  • Stargate, the I/O manager
  • Curator, the MapReduce cluster management and cleanup service
  • Prism, the UI and API
  • Genesis, the cluster component and service manager
  • Chronos, the job and task scheduler
  • Cerebro, the replication/DR manager
  • Pithos, the vDisk configuration manager
  • Acropolis Services, which handle task scheduling, execution, etc.
  • Dynamic Scheduler, which makes VM placement decisions

Finally, you can see how Nutanix handles the different levels of potential failure, including disk and node failures.

There’s much more, and the document continues to be updated. If you read through the Nutanix Bible, I think you will have a very good understanding of the platform and how it differs from other cluster solutions you’ve used.

As you continue to plan for updates to your data center, you should really give IBM Hyperconverged Systems powered by Nutanix a closer look.