
Software-Defined Storage: what it is, and what it’s not

Servers, Storage and Networking | Posted on April 27, 2016 by bber

Editor's note: Storage expert Brad Beresh gets real about his definition of Software-Defined Storage, and it doesn't include the use of a traditional storage controller.

Recently, someone told me (with a very proud smile on his face) about his new purchase of a “software-defined storage” system. I am very interested in the field of software-defined anything, so I eagerly listened to the story.

Once he described what he had purchased, it turned out to be a traditional storage controller with familiar features: dual controllers, a selection of hard drive and SSD offerings, and so on. "What about this controller makes it 'software-defined'?" I asked. "It came with thin provisioning, compression, and tiering, and these features were embedded in the controller's software stack. It's a software-defined storage system," he said.

I strongly disagree that the above would qualify as a software-defined offering.

To be fair, the term Software-Defined Storage (SDS to IT folk) is vague and widely misunderstood. It can mean everything to everyone, and therefore nothing at all. Yes, the code of that controller contains the necessary features, and those features are, for the most part, implemented in software (though some manufacturers have offloaded some of them to hardware for performance).

Here is how I define software-defined storage
To me, SDS should be thought of as a specific application used to provide access to any storage through any connection via any hardware. More specifically, my definition of SDS contains the following qualifications:

  • The application is installed on a server or system – any system with any processor type or OS and it should be manufacturer independent.
  • It allows systems to connect however they like, whether through traditional means like Fibre Channel, via network file systems, iSCSI, etc.
  • The systems layer connects to storage – any storage.
  • The application layer is the gateway to all the storage and centrally provides all the features that are used in the storage world.
  • The environment is clustered and deployed as a single entity for total redundancy.
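The qualifications above can be sketched in a few lines of code: one software layer pools any backends and presents a single gateway, and callers never know which hardware holds their data. This is a minimal illustration of the concept, not any vendor's API; all class and backend names are hypothetical.

```python
class Backend:
    """Any storage device: vendor, protocol, and media type don't matter."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

class StoragePool:
    """The application layer: the single gateway to all the storage."""
    def __init__(self):
        self.backends = []

    def add(self, backend):
        # Any storage, anywhere, anytime: just register it with the pool.
        self.backends.append(backend)

    def total_free_gb(self):
        return sum(b.capacity_gb - b.used_gb for b in self.backends)

    def allocate(self, size_gb):
        # Place the volume on whichever backend has the most free space;
        # the caller never learns (or cares) which hardware was chosen.
        target = max(self.backends, key=lambda b: b.capacity_gb - b.used_gb)
        if target.capacity_gb - target.used_gb < size_gb:
            raise RuntimeError("pool exhausted")
        target.used_gb += size_gb
        return target.name

pool = StoragePool()
pool.add(Backend("vendor-a-array", 1000))   # a traditional array
pool.add(Backend("commodity-jbod", 500))    # cheap commodity disk
where = pool.allocate(200)                  # the pool decides placement
```

Growing capacity is just another `pool.add(...)` call, which is exactly the simplicity argument made below.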

Think about what this means to a software-defined environment. As I grow in my capacity needs, I just add more storage. Any storage, anywhere in the system, anytime. To me, the core value of SDS is in its simplicity.

Buying SDS is simple
All users are able to access this storage as desired by the administrator, and by design, the system does not stall with compatibility issues. The buyer only cares about selecting the best offering that suits their needs for performance/capacity/size/power draw etc. On installation day, there is no need to ready the disk and apply all the features and restrictions. Just apply the saved profile desired and go.
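"Apply the saved profile desired and go" can be pictured as follows: a profile captures the features the administrator wants, and the SDS layer stamps it onto any new disk on installation day. The profile keys and disk fields are illustrative assumptions, not a real product's configuration schema.

```python
# A saved profile: the features an admin wants every new disk to carry.
SAVED_PROFILE = {
    "thin_provisioning": True,
    "compression": True,
    "tiering": "auto",
    "access": ["nfs", "iscsi"],
}

def apply_profile(disk, profile):
    """Return a ready-to-use copy of the disk with the profile applied."""
    configured = dict(disk)        # leave the original record untouched
    configured.update(profile)     # stamp on the saved features
    configured["ready"] = True     # no manual prep on installation day
    return configured

new_disk = {"id": "disk-042", "capacity_gb": 4000}
ready_disk = apply_profile(new_disk, SAVED_PROFILE)
```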

Scaling SDS is simple
If I need more bandwidth or performance, I simply add another server to the "grid." The application will recognize that a new system has been added, analyze its performance, and apply the profile to bring it into the grid. If that new system happens to be a high-performance machine, the system will be intelligent enough to recognize this and use it appropriately: send it more load, and route the more demanding workloads its way.
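The performance-aware placement described above can be sketched like this: when a node joins the grid, the layer measures it and weights future workload placement by that score. Reducing "analyze the performance" to a single benchmark number is a deliberate simplification for illustration.

```python
class Grid:
    def __init__(self):
        self.nodes = {}  # node name -> measured performance score

    def join(self, name, benchmark_score):
        # The layer analyzes a newly added system; here that analysis
        # is a stand-in score supplied directly.
        self.nodes[name] = benchmark_score

    def place_workload(self):
        # Send more load to faster machines: pick the highest-scoring node.
        return max(self.nodes, key=self.nodes.get)

grid = Grid()
grid.join("old-server", 1.0)
grid.join("fast-server", 3.5)   # the new high-performance machine
chosen = grid.place_workload()  # the grid favors the faster node
```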

Updating SDS is simple
If I need to update firmware on a piece of hardware, I can dynamically evacuate that piece from the grid, do the update, then bring it back in… all while in production. If I need to replace an aging machine, I can evacuate its load to alternate resources and then remove the machine.
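A rolling update of the kind described above might look like this sketch: evacuate a node's load onto the rest of the grid, update the node out of production, then rejoin it. The `update_firmware` callback and the grid-as-dict representation are assumptions made for the example.

```python
def rolling_update(grid, node, update_firmware):
    """Update one node's firmware without taking the grid offline."""
    load = grid.pop(node)              # evacuate: the node leaves the grid
    survivors = list(grid)
    for i, unit in enumerate(load):    # spread its load over the others
        grid[survivors[i % len(survivors)]].append(unit)
    update_firmware(node)              # do the update out of production
    grid[node] = []                    # bring it back in, empty-handed

# A toy grid: node name -> list of volumes it currently serves.
grid = {"n1": ["vol-a", "vol-b"], "n2": ["vol-c"], "n3": []}
updated = []
rolling_update(grid, "n1", updated.append)  # n1 is updated in place
```

Replacing an aging machine is the same flow minus the final rejoin step.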

This methodology also makes it easier to communicate with other "sites," regardless of whether a site is geographically dispersed or in the cloud.

SDS is still a work in progress
I believe a lot of work has yet to take place before software-defined storage becomes a universal reality. There is one solution from IBM that gets us closer: the appliance-based Elastic Storage Server, built on IBM Spectrum Scale. This solution leverages the underlying General Parallel File System (GPFS) technology and provides features like:

  • Geographical global namespace (setting up a single namespace across multiple physical locations)
  • High performance, scalable storage
  • An optimized, Hadoop-compatible file system for big data and analytics
  • Integration with the Linear Tape File System (LTFS) and Tivoli Storage Manager (TSM) technologies

If you have any questions or are interested in learning more, leave a comment below or reach out to me directly.
