In this technical blog post, IDC explains why enterprises should look to disaggregated hyperconverged infrastructure (dHCI) for simple-to-manage, high-performance systems that combine storage, networking, and compute resources. Amazon, Google, and other hyperscalers popularized the software-defined storage at the heart of HCI, which then became available as an easy-to-adopt appliance. dHCI now improves the scalability, performance, and availability of HCI by enabling organizations to fine-tune the storage and compute resources in each node. Learn about this shift in storage architecture and economics, and how HPE delivers dHCI with unified management, automated configuration, and enterprise-class data services.
IDC Research Vice President Eric Burgener discusses the advancements behind HPE Nimble Storage dHCI and explains why organizations should look to this new architecture.
In the mid-2000s, hyperconverged infrastructure (HCI) emerged as a new architecture used by hyperscalers looking to create an infrastructure that was more flexible, easier to manage, scale, and refresh than traditional storage infrastructure, and that cost significantly less. Using software-defined, server-based storage clustered together over industry-standard Ethernet, hyperscalers like Amazon, Facebook, and Google popularized this approach. As awareness of HCI became more widespread, commercial versions were developed and offered in appliance form. The software-defined flexibility, easier management and scalability, and better economics gained HCI an immediate foothold with small and medium businesses as well as smaller enterprises, and the market quickly became one of the highest-growth segments in enterprise storage.
The simplicity of HCI appealed strongly to many IT organizations that were also experiencing a migration of storage management responsibilities away from dedicated storage administration groups and toward IT generalists like virtualization, Linux, and Windows administrators. When compared with how traditional three-tier infrastructure was bought, deployed, managed, supported, and upgraded, HCI had much to commend it. It offered a way to buy IT infrastructure that included all key resources (compute, storage, networking) under a single SKU, pre-validated to ensure interoperability, with a unified management interface, a single point of support contact for the entire infrastructure, and a non-disruptive expansion and upgrade path that enabled multi-generational technology refresh. All of this combined to create what many IT managers recognize as the “HCI experience”.
Interestingly, the simplicity of the “HCI experience” was attractive even to sophisticated storage administrators who didn’t necessarily need storage management to be simple but found they liked it that way. HCI began to be deployed for a wider variety of workloads, and HCI clusters began to grow in size (in terms of node count). Vendors added features supporting higher performance, denser capacity per node, and improved software functionality that made HCI a very able competitor in some smaller traditional SAN environments, and it was clear that HCI was having a cannibalizing effect on SAN revenues. In 2020, HCI will be almost a $9.4B market.
As traditional HCI configurations began to grow in size, however, and take on more mission-critical workloads, IT managers noted two issues:
- One, HCI was expanded by adding nodes that included relatively fixed ratios of compute and storage resources. In many cases, administrators might need only more storage capacity but were forced to add (and pay for) compute resources that they really didn’t need (and vice versa). In smaller organizations with more limited storage management expertise, the ease of purchasing, deploying, and managing added resources justified the additional cost, but in larger configurations overprovisioning unneeded resources could become quite costly.
- And two, because of the way recovery occurred in HCI environments, recoveries from a node failure could take a long time. As IT managers sought to host more mission-critical workloads on HCI, recovery time could be a real problem, particularly when availability requirements were in the “four nines” and above range. “Four nines” equates to just under 53 minutes per year of downtime, and recent IDC research indicates that 69.3% of IT organizations manage what they consider their “strategic” workloads to “four nines” or above (32.2% actually manage these workloads to “five nines” or higher, which equates to slightly more than five minutes of downtime per year).
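The “nines” figures cited above follow directly from simple arithmetic on the minutes in a year. A short sketch (the availability levels are from the text; the function name is just illustrative):

```python
# Maximum downtime per year implied by an availability percentage.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def max_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year allowed at a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("four nines", 99.99), ("five nines", 99.999)]:
    print(f"{label} ({pct}%): {max_downtime_minutes(pct):.2f} min/year")
# four nines  -> about 52.56 min/year ("just under 53 minutes")
# five nines  -> about 5.26 min/year ("slightly more than five minutes")
```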
An Emerging Market Category: Disaggregated HCI
The concept of disaggregated HCI was created to address these concerns. The purpose of the disaggregated HCI model is to provide the HCI experience in terms of ease of ordering, deployment, management, and support while resolving the independent resource scaling and availability concerns. These systems are targeted at customers who want the simplicity of the HCI experience for larger workloads that require the consistent performance and high availability of SAN environments. Buyers of these products are likely to be enterprises that like the “HCI experience” but were concerned about HCI performance, availability, and/or efficiency in larger configurations, or those that need traditional SAN capabilities but prefer the ease of use and better economics of HCI. As this segment begins to grow, disaggregated HCI will cannibalize from those two separate segments.
With disaggregated HCI, the ability to more finely tune the ratio of compute to storage resources reduces resource overprovisioning and makes much more efficient use of available resources. This becomes more of an advantage at larger scale, but it also matters for smaller workloads where unbalanced or unpredictable growth may result in the need for significantly more storage than compute resources (or vice versa). Disaggregated HCI lets customers dial in the optimal ratio of compute to storage resources and change that mix as necessary over time without overspending.
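The overprovisioning effect can be made concrete with a toy cost model. All node specs and prices below are invented for illustration only and do not reflect any vendor’s actual products or pricing; the point is that a fixed-ratio node must be sized to the larger of the two resource needs:

```python
# Hypothetical comparison: fixed-ratio HCI nodes vs. independently
# scaled compute and storage. All numbers are made up for illustration.
import math

HCI_NODE = {"cores": 32, "tb": 20, "cost": 40_000}   # compute + storage bundled
COMPUTE_NODE = {"cores": 32, "cost": 25_000}          # disaggregated: compute only
STORAGE_SHELF = {"tb": 20, "cost": 15_000}            # disaggregated: storage only

def hci_cost(need_cores: int, need_tb: int) -> int:
    # Fixed ratio: buy enough nodes to cover the LARGER of the two needs.
    nodes = max(math.ceil(need_cores / HCI_NODE["cores"]),
                math.ceil(need_tb / HCI_NODE["tb"]))
    return nodes * HCI_NODE["cost"]

def dhci_cost(need_cores: int, need_tb: int) -> int:
    # Independent scaling: buy each resource separately.
    return (math.ceil(need_cores / COMPUTE_NODE["cores"]) * COMPUTE_NODE["cost"]
            + math.ceil(need_tb / STORAGE_SHELF["tb"]) * STORAGE_SHELF["cost"])

# A storage-heavy workload: modest compute, lots of capacity.
print(hci_cost(need_cores=64, need_tb=200))   # 10 nodes      -> 400,000
print(dhci_cost(need_cores=64, need_tb=200))  # 2 + 10 blocks -> 200,000
```

In this hypothetical case the fixed-ratio design pays for 256 cores to get 200 TB, while the disaggregated design buys only the 64 cores actually needed.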
How one enterprise storage vendor – HPE – has implemented this solution is interesting. HPE combined HPE ProLiant servers with HPE Nimble Storage arrays into a single integrated system that scales in modular building blocks. Innovative software automates deployment, configuration, and integration with VMware vCenter for unified management across all resources, while Virtual Volumes enables VM-centric data services and provisioning. Support for the entire solution comes directly from HPE with the industry’s most advanced AIOps platform, HPE InfoSight. The use of HPE Nimble Storage arrays instead of traditional HCI storage allows compute and storage to be scaled independently and provides an enterprise-class, feature-rich set of proven data services (inline data reduction, snapshots, encryption, quality of service, replication, etc.) that deliver “six nines” availability in practice. The system, called HPE Nimble Storage dHCI, delivers the storage performance, availability, and resource configuration flexibility of SAN environments with the purchasing, deployment, management, and support models of the “HCI experience”.
In closing, it is interesting to note that the hyperscalers – the original inventors of traditional HCI – have also moved back towards a disaggregated model when building infrastructure at scale. For them, the cost savings of being able to balance compute and storage resources in the right ratios at scale won the day. This is not to say that traditional HCI will not continue to grow – it is a great model that meets the needs of many organizations, but for larger workloads that have more unpredictable growth patterns and SAN-class performance and availability requirements, disaggregated HCI – including HPE Nimble Storage dHCI – can be a better, more cost-effective and efficient solution.
For more information about dHCI, HPE Nimble Storage, and ProLiant servers, contact CSPi Technology Solutions today.