
On the road with NetApp HyperConverged Infrastructure

Servers, Storage and Networking | Posted on July 25, 2017 by Softchoice Advisor


This is a guest article written by Keith Aasen, Solutions Architect, NetApp. Keith has worked exclusively with virtualization since 2004. He designed and built some of the earliest VMware and VDI environments in Western Canada, and has designed and implemented projects of various sizes, both in Canada and in the southern US, for some of the largest companies in North America.


I recently completed a six-city roadshow to talk about the announcements that made up NetApp's 25th-anniversary celebration (if you missed it, you can watch the recording here). Although the payload of the 9.2 release of our ONTAP operating system was huge, I have done announcements like this before and was ready for the questions posed by customers in each city.

This roadshow, however, was the first opportunity I had to present the new NetApp HyperConverged Infrastructure (HCI). I was less sure how this was going to go over. With this offering, we are breaking the mold of HCI version 1.0, enabling true enterprise workloads on an HCI platform. As such, I was not sure how the attendees would respond. Would they understand the purpose and benefits of such an architecture? Would they understand the limitations of the existing offerings and how the NetApp HCI offering was different?

I shouldn’t have been worried.

As far as understanding the purpose of such an architecture, they definitely got it. Our partner community has done an excellent job of explaining how this sort of converged infrastructure is an enabler for data center transformation. What is it about converged infrastructure, hyper-converged in particular, that enables this transformation? In a word, Simplicity. HCI simplifies the deployment of resources, simplifies the management of infrastructure and even simplifies the teams managing the infrastructure.

This simplicity, and the unification of traditionally disparate resources, allows customers to optimize the resources they have, reducing cost and increasing value to the business.

So every city I visited got this: Simplicity was key. What about the limitations of the existing solutions?

The missing element of HCI version 1.0 solutions was Flexibility. These solutions achieved simple deployment but were wildly inflexible in how they were deployed, used and scaled. Here are some examples:

1. Existing Compute.

I asked the audience how many customers already had HCI deployed (very few) and then asked how many already had hypervisor servers deployed. Of course, everyone had that. Wouldn't it be nice to leverage the existing investment in those servers rather than abandon it? With NetApp HCI, you can purchase the initial cluster weighted toward the storage components and then use your existing VMware hosts. As those hosts age, you can grow the HCI platform as it makes sense. This reduces redundant compute in the environment and lets customers move to an HCI platform on a timeline that makes sense for them.

2. Massive Scalability.

The means by which most existing HCI vendors protect their data tends to limit each cluster to a modest number of nodes in order to preserve performance. This results in stranded resources (perhaps one cluster has excess CPU while another is starving), which drives up management overhead and cost because those stranded resources cannot be used. The NetApp HCI platform can scale massively with no performance impact, so no islands of resources form. Instead, we isolate and protect different workloads through the use of Quality of Service policies.
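To make the Quality of Service idea concrete, here is a minimal sketch of a per-workload policy with guaranteed-minimum, sustained-maximum and short-burst IOPS ceilings, in the style commonly used by storage QoS systems. All names and numbers are illustrative assumptions for this article, not NetApp's actual API.

```python
# Hypothetical per-workload QoS policy sketch (illustrative only; not
# NetApp's API). Each workload gets a floor, a ceiling and a burst limit.
from dataclasses import dataclass

@dataclass
class QosPolicy:
    min_iops: int    # guaranteed floor under contention
    max_iops: int    # sustained ceiling
    burst_iops: int  # short-term ceiling while burst credits remain

def allowed_iops(policy: QosPolicy, burst_credits: int) -> int:
    """Return this cycle's IOPS ceiling: the burst limit while credits
    remain, otherwise the sustained maximum."""
    return policy.burst_iops if burst_credits > 0 else policy.max_iops

# Example: a "gold" workload can burst to 20,000 IOPS briefly, settles
# back to 15,000, and is never starved below 5,000 under contention.
gold = QosPolicy(min_iops=5000, max_iops=15000, burst_iops=20000)
print(allowed_iops(gold, burst_credits=3))  # burst window -> 20000
print(allowed_iops(gold, burst_credits=0))  # sustained    -> 15000
```

Because every workload carries its own floor and ceiling, a noisy neighbor can never consume another workload's guaranteed minimum, which is what allows many workloads to share one large cluster safely.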

3. Part of a larger Data Fabric.

In a hybrid cloud data center model, it is critical to be able to move your data where you need it, when you need it. Some data and applications lend themselves to the public cloud model; others do not. Perhaps you have data created on site and want to leverage the cloud to run analytics against it. The NetApp HCI platform is part of the NetApp Data Fabric, which allows you to replicate data to ONTAP-based systems near or in major hyperscale clouds such as AWS and Azure. This ensures you can have the right data on-prem and the right data in the cloud without being trapped.

I want to thank everyone who came out for the roadshow and everyone who took the time to watch the recording of the webcast. If you want to hear more about the simplicity of HCI and the flexibility of the hybrid cloud model, please reach out to your technology partner.
