The inefficient efficiency of Hyperconvergence (and other alternatives)

Last week I was on a call with an executive from a large HCI vendor. He asked me what I really thought about Hyperconvergence and whether, from my point of view, it would become dominant in all types of enterprises… My short answer was “yes and no”, and here is why.

When you talk with end users, the first reason they give for choosing HCI is its simplicity (which also means lower TCO, by the way). Generally speaking, HCI solutions are very good at serving “average” workloads, but when the going gets tough, the tough get going… and hyperconvergence as we know it is no longer enough.

When “good enough” is more than enough

Most modern HCI solutions scale fairly linearly, up to a point. By adding nodes to the cluster, you get more available resources almost immediately. They come in hybrid or all-flash configurations, covering a great number of different workloads, and internal compute resources are used to run guest VMs as well as the distributed storage layer. You can’t ask for too much: latency consistency and IOPS are not always top-notch, but they are good enough to satisfy the needs of end users.

The beauty of HCI lies in the fact that the sysadmin can manage all the resources in the cluster from a single interface (usually VMware vCenter), and most of the painful tasks we usually find in traditional storage are simplified or simply non-existent. In fact, a good VMware sysadmin can easily become a jack of all trades and manage the whole infrastructure without too much effort.

When the infrastructure is small or highly virtualized, hyperconvergence seems very efficient. Not because it is actually efficient, but because it looks efficient and quite predictable.

When “good enough” is not enough

The problem today, and probably for the foreseeable future, is that you can’t ask too much of a general-purpose hyperconverged infrastructure. There is no perfect HCI that can run all types of workloads while delivering high performance, large capacity and decent TCO at the same time.

HCI effectiveness strictly depends on the type of organization, its size and the kind of data and workloads it manages. It can be very high in small organizations, but it decreases quite rapidly in larger ones, and whenever workloads, or their combination, have very specific characteristics that stress one particular resource more than the others.

In some cases the solution comes from specialized HCI infrastructures. For example, and this is somewhat of a stretch, you can think of a Hadoop cluster as an HCI or, even better, of solutions like HDS HSP (which also adds OpenStack-based VM management to its specialized file system and is packaged as a single scale-out appliance).

Another interesting trend, especially when a lot of data is involved, is to leverage smarter storage solutions. There are now several startups working on AWS-Lambda-like functions applied to large storage repositories (usually object storage; e.g. OpenIO, NooBaa, Coho Data and, of course, Amazon AWS and Microsoft Azure), while others are embedding database engines and interfaces directly into the storage system (e.g. Iguaz.io). There are plenty of use cases, especially when you think about Big Data analytics or IoT. The idea is to offload specific compute tasks to a specialized, data-centric infrastructure.
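To make the idea concrete, here is a minimal sketch of the event-driven, data-local pattern, written against AWS Lambda and S3 simply because their APIs are widely known (the “derived/” output prefix and the line-counting task are hypothetical; OpenIO, NooBaa and the other products mentioned above expose their own, different interfaces):

    # Sketch only: an AWS Lambda function subscribed to S3 "ObjectCreated"
    # events, running a small task right next to the data instead of shipping
    # the data out to a separate compute cluster.
    import json
    import urllib.parse

    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # Each record describes an object that was just written to the bucket.
        # (The event notification is assumed to be filtered so that objects
        # under the "derived/" prefix do not re-trigger this function.)
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

            # Fetch the new object and run a small, data-local task on it
            # (here: counting lines as a stand-in for real processing).
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            line_count = body.count(b"\n")

            # Store the derived result next to the data it was computed from.
            s3.put_object(
                Bucket=bucket,
                Key=f"derived/{key}.json",
                Body=json.dumps({"source": key, "lines": line_count}).encode(),
            )

The point is not the snippet itself but the shape of the workflow: the data never leaves the storage platform, and only a small function travels to it.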

In other circumstances, when consistent low latency is the most important characteristic of the storage system, in-memory storage and I/O parallelization become more and more interesting. In this space there are several companies working on hardware or software products which leverage large RAM configurations and modern CPUs to achieve unprecedented results. Examples can be found in DataCore with its Parallel I/O technology, Diablo Technologies with its Memory1 DIMMs and Plexistor, as well as many others. In this scenario the goal is to bring data closer to the CPU and the application, reducing latency and optimizing network and storage communication.
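As a purely illustrative toy (none of the vendors above works this way internally, and the file path and block size below are made up), this sketch shows the basic intuition behind I/O parallelization: issuing many reads concurrently keeps the device queues full, so total wall-clock time ends up much closer to the latency of a single request than to the sum of all of them.

    # Toy example of parallel vs. serial block reads using a thread pool.
    import os
    from concurrent.futures import ThreadPoolExecutor

    BLOCK_SIZE = 4096  # hypothetical block size

    def read_block(path: str, offset: int) -> bytes:
        """Read one block at the given byte offset."""
        with open(path, "rb") as f:
            f.seek(offset)
            return f.read(BLOCK_SIZE)

    def read_blocks_parallel(path: str, offsets: list[int], workers: int = 16) -> list[bytes]:
        """Issue all block reads concurrently instead of one after another."""
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(lambda off: read_block(path, off), offsets))

    if __name__ == "__main__":
        # Read the first 32 blocks of a (hypothetical) file in parallel.
        path = "/tmp/example.dat"
        if os.path.exists(path):
            blocks = read_blocks_parallel(path, [i * BLOCK_SIZE for i in range(32)])
            print(f"read {len(blocks)} blocks")

The real products do this far closer to the metal (across cores, NVMe queues and RAM tiers), but the design choice is the same: stop serializing I/O and let the hardware work on many requests at once.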

Closing the circle

At the end of the day, Hyperconvergence is still a great solution for all those traditional workloads which don’t need large-capacity storage or very low latency. In other cases it becomes less effective and TCO increases rather quickly.

I don’t have exact numbers here, but it’s easy to see that in small enterprises (and highly virtualized environments) HCI is the way to go and it can potentially cover almost 100% of all needs. At the same time, with highly specialized applications accessing large amounts of data, such as for Big Data Analytics, we will need different architectures designed with high efficiency in mind.

HCI vendors are quite aware of this situation, and I wouldn’t be surprised to see vendors like Nutanix leveraging the technology they acquired from PernixData to improve their products (or build more specialized solutions) and cover a larger number of workloads over time.

Originally posted on Juku.it


