
Software-Defined Storage: what it is, and what it’s not

Servers, Storage and Networking | Posted on April 27, 2016 by bber

Editor's note: Storage expert Brad Beresh gets real about his definition of Software-Defined Storage, and it doesn't include the use of a traditional storage controller.

Recently, someone told me (with a very proud smile on his face) about his new purchase of a “software-defined storage” system. I am very interested in the field of software-defined anything, so I eagerly listened to the story.

After hearing about what he purchased, it turned out to be a traditional storage controller with familiar features: dual controllers, a selection of hard drive and SSD offerings, and so on. "What about this controller makes it 'software-defined'?" I asked. "It came with thin provisioning, compression, and tiering, and these features are embedded in the controller's software stack. It's a software-defined storage system," he said.

I strongly disagree that the above would qualify as a software-defined offering.

To be fair, the term Software-Defined Storage (SDS to IT folks) is vague and widely misunderstood. It can mean anything to anyone, which means it often conveys nothing at all. Yes, the code of that controller contains the necessary features, and those features, for the most part, are implemented in software (though some manufacturers have offloaded a few of them to hardware for performance).

Here is how I define software-defined storage
To me, SDS should be thought of as a dedicated application that provides access to any storage, through any connection, on any hardware. More specifically, my definition of SDS includes the following qualifications (a conceptual sketch follows the list):

  • The application is installed on a server or system – any system, with any processor type or OS – and it is manufacturer independent.
  • It lets systems connect however they like, whether through traditional means like Fibre Channel, network file systems, iSCSI, or anything else.
  • The systems layer connects to storage – any storage.
  • The application layer is the gateway to all the storage and centrally provides all the features that are used in the storage world.
  • The environment is clustered and deployed as a single entity for total redundancy.
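
To make these qualifications concrete, here is a minimal, purely illustrative Python sketch of the layering the list describes: an application layer that pools arbitrary backend devices and exports volumes over arbitrary protocols. Every class and method name here is invented for illustration; none of this is any vendor's API.

```python
# Illustrative only: the SDS layering described above, with invented names.

class BackendDevice:
    """Any storage, from any manufacturer, reached over any connection."""
    def __init__(self, name: str, capacity_gb: int, connection: str):
        self.name = name                # e.g. "vendor-b-jbod"
        self.capacity_gb = capacity_gb
        self.connection = connection    # "fibre-channel", "iSCSI", "NFS", ...

class SDSApplication:
    """The application layer: the single gateway to all the storage."""
    def __init__(self):
        self.pool: list[BackendDevice] = []

    def add_storage(self, device: BackendDevice) -> None:
        # "Any storage, anywhere in the system, anytime."
        self.pool.append(device)

    def total_capacity_gb(self) -> int:
        return sum(d.capacity_gb for d in self.pool)

    def provision_volume(self, size_gb: int, protocol: str) -> str:
        # Features (thin provisioning, tiering, ...) live centrally here,
        # not inside any one controller.
        assert size_gb <= self.total_capacity_gb(), "pool too small"
        return f"{size_gb} GB volume exported over {protocol}"

sds = SDSApplication()
sds.add_storage(BackendDevice("vendor-a-array", 10_000, "fibre-channel"))
sds.add_storage(BackendDevice("vendor-b-jbod", 50_000, "iSCSI"))
print(sds.provision_volume(500, "NFS"))  # 500 GB volume exported over NFS
```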

Think about what this means for a software-defined environment. As my capacity needs grow, I just add more storage. Any storage, anywhere in the system, anytime. To me, the core value of SDS is its simplicity.

Buying SDS is simple
All users can access this storage as the administrator sees fit, and by design, the system does not get bogged down in compatibility issues. The buyer only has to select the offering that best suits their needs for performance, capacity, size, power draw, and so on. On installation day, there is no need to ready the disks and apply every feature and restriction by hand. Just apply the desired saved profile and go.
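
As a thought experiment, the "apply the saved profile and go" step might look something like the following sketch. `StorageProfile` and `apply_profile` are hypothetical names I am using for illustration, not a real product's interface.

```python
# Hypothetical sketch: a saved profile applied on installation day.
from dataclasses import dataclass

@dataclass
class StorageProfile:
    """Captures features and restrictions once; reused on every install."""
    thin_provisioning: bool
    compression: bool
    tiering: bool
    replication_copies: int

# The profile is defined once, long before installation day.
GOLD = StorageProfile(thin_provisioning=True, compression=True,
                      tiering=True, replication_copies=2)

def apply_profile(device_name: str, profile: StorageProfile) -> None:
    # No manual disk prep: the SDS layer configures the new device
    # to match the saved profile.
    print(f"{device_name}: configured as {profile}")

apply_profile("new-array-01", GOLD)   # install day: apply and go
```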

Scaling SDS is simple
If I need more bandwidth or performance, I simply add another server to the "grid". The application recognizes that a new system has been added, analyzes its performance, and applies the profile to fold it into the grid. If that new system happens to be a high-performance machine, the grid is intelligent enough to recognize this and use it appropriately: send it more load, and route the more demanding workloads its way.
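
One plausible way to picture that behavior, again with invented names: after the analysis pass, the grid weights placement decisions by each node's measured performance, so a faster machine naturally attracts more of the load.

```python
# Illustrative only: performance-weighted placement across the grid.
import random

class GridNode:
    def __init__(self, name: str, measured_iops: int):
        self.name = name
        self.measured_iops = measured_iops  # from the analysis pass

class Grid:
    def __init__(self):
        self.nodes: list[GridNode] = []

    def add_node(self, node: GridNode) -> None:
        # Recognize the new system, analyze it, fold it into the grid.
        self.nodes.append(node)

    def pick_node_for_workload(self) -> GridNode:
        # Faster nodes receive proportionally more placements.
        weights = [n.measured_iops for n in self.nodes]
        return random.choices(self.nodes, weights=weights, k=1)[0]

grid = Grid()
grid.add_node(GridNode("old-server", measured_iops=50_000))
grid.add_node(GridNode("new-fast-server", measured_iops=400_000))
print(grid.pick_node_for_workload().name)  # usually "new-fast-server"
```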

Updating SDS is simple
If I need to update firmware on a piece of hardware, I can dynamically evacuate that piece out of the grid, do the update, then bring it back in… all while in production. If I need to replace an aging machine, I can evacuate its load to alternate resources and then remove it.
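
The evacuate-update-rejoin cycle is simple enough to sketch in a few lines. This is only a schematic of the idea, with made-up names; a real SDS layer would move data and sessions, not strings.

```python
# Schematic of a rolling update: evacuate, update, rejoin.
def rolling_update(grid: list[str], target: str) -> None:
    grid.remove(target)          # evacuate: load shifts to the rest
    print(f"{target}: load now carried by {grid}")
    print(f"{target}: firmware updated while out of the grid")
    grid.append(target)          # rejoin: back in production
    print(f"grid restored: {grid}")

rolling_update(["node-a", "node-b", "node-c"], "node-b")
```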

This methodology also makes it easier to communicate with other 'sites', whether that means a geographically dispersed location or one in the cloud.

SDS is still a work in progress
I believe a lot of work has yet to take place before software-defined storage becomes a universal reality. There is one solution from IBM that gets us closer: the appliance-based Elastic Storage Server, built on IBM Spectrum Scale. This solution leverages the underlying General Parallel File System (GPFS) technology and provides features like the following (a conceptual sketch of the global-namespace idea appears after the list):

  • Geographical global namespace (setting up a single namespace across multiple physical locations)
  • High performance, scalable storage
  • An optimized Hadoop-like file system for big data and analytics
  • Integration with the Linear Tape File System (LTFS) and Tivoli Storage Manager (TSM) technologies
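
To show what the first bullet means in practice, here is a tiny conceptual Python sketch of a global namespace: the same path is visible from every site, even though the data may live in several physical locations. This only illustrates the concept; it is not Spectrum Scale's interface.

```python
# Conceptual only: one namespace, many sites. Not a real GPFS API.
SITES = {
    "toronto":   {"/global/projects/alpha"},
    "frankfurt": {"/global/projects/alpha", "/global/archive/2015"},
}

def resolve(path: str) -> list[str]:
    """One path, visible everywhere; data may live at several sites."""
    return [site for site, paths in SITES.items() if path in paths]

# The same path resolves no matter where the user sits:
print(resolve("/global/projects/alpha"))   # ['toronto', 'frankfurt']
```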

If you have any questions or are interested in learning more, leave a comment below or reach out to me directly.
