Among the biggest obstacles to IT resilience is the “data dilemma.”
That data has become “the new oil” is a well-worn cliché by now. But clichés earn that status because they originate in the truth. And it’s true that today, data drives the decision-making that moves businesses forward. Protecting it is more important than ever.
Nonetheless, many IT leaders find themselves stuck between investing time and resources in data protection and focusing on digital transformation – often a clear preference of the business. In fact, 66% of enterprise leaders complain that unplanned downtime is stifling digital innovation.
This gives rise to an inconvenient truth: Innovation makes IT resilience more complex.
Whatever the vertical, the business expects IT to demonstrate value by introducing technologies and solutions that move the business forward – in other words, getting results.
But the same technologies that make IT environments agile, scalable and dynamic also add new factors to the IT resilience equation. Each added element builds on existing layers, making it harder to protect data, applications and workloads across the whole environment. Removing these legacy systems may lessen the burden, but even this isn’t always straightforward.
As organizations continue to approach the challenges of digital innovation with heterogeneous infrastructure, the breadth and complexity of IT resilience will expand rather than contract. Sooner or later, the scale of IT operations will outweigh the resources available to maintain and protect them.
Below, we’ll dive into the three biggest factors standing between digital innovation and data protection.
1) Silos Upon Silos
The size and scale of the modern enterprise have become, well, massive. Many have long since crossed into petabyte territory (one petabyte equals 1,000 terabytes). In fact, the quantity of data subject to business analytics is expected to increase 50x in the next five years.
Keeping these large-scale, data-driven organizations running is a mix of virtualization and hypervisor technologies, which in turn host a blend of Windows, Linux and UNIX-based operating systems.
Traditional applications, database platforms and Exchange now share space with new SaaS applications and web versions of legacy apps. Block- and file-based storage systems live alongside object storage, file sync-and-share and cloud storage services. The sprawling software-defined data center shows no signs of slowing down.
This results in silos – and more silos. One side-effect of this arrangement is that ensuring every system and workload is available becomes, well, complicated.
For example, it’s often impossible to share software-defined storage created with one hypervisor vendor’s technology with data from a workload hosted on a rival provider’s platform. In turn, each stack requires its own data protection processes. This adds considerable complexity to the process of integrating new workloads.
Any adjustment, planned or otherwise, can’t take place without considering its impact across the entire infrastructure. Adding new elements only exacerbates the effects of these unknowns. Meanwhile, low visibility into applications and their interdependencies across a hybrid landscape makes hitting recovery time and recovery point objective (RTO/RPO) requirements that much more difficult.
Any unanticipated move could render an entire IT resilience plan unworkable. Yet, IDC finds organizations spend just 12.4% of their IT budgets on backup and recovery solutions. Furthermore, 47% of organizations elect to manage backup and disaster recovery operations in-house.
2) Compromise in the Cloud
About 42% of organizations struggle to manage their legacy environments on a day-to-day basis. Big challenges around uptime and resilience prevent IT teams from focusing on game-changing, business transformation projects. But not to worry, the cloud changes all that, right?
The answer isn’t quite cut-and-dried. The public cloud promises a whole suite of new advantages in support of digital innovation. But the emergence of IaaS, SaaS and PaaS hasn’t eliminated the layers of complexity around management and optimization.
In fact, a recent Softchoice study with IDC found that 86% of public cloud customers have repatriated some of their workloads. The results also suggested that 50% of these will be migrated to private clouds or on-premise data centers in the next two years.
Our research also revealed that complexity around protecting data and business continuity was the top repatriation driver. Many organizations have pursued either public or private cloud to improve IT performance and deliver new applications and services faster. But that eagerness may have come with some misunderstanding of the vulnerability of cloud architecture to unplanned interruptions.
The growing necessity of hybrid and multicloud environments has only added complexity. Another recent survey of organizations looking to migrate to the public cloud ranked the biggest issues facing would-be adopters as the complexity around security (57%), legacy infrastructure (45%) and governance, risk and compliance (39%).
3) The Infrastructure Learning Curve
The rise of mobility and the cloud has resulted in a Golden Age for end users. Each application and the data it uses is interconnected through standard protocols and platforms. It’s available from any location on the user’s choice of devices. And it’s expected to work, all the time.
But working away in the background are the IT professionals who manage the infrastructure supporting all these users across all these disparate platforms and devices. The push to deliver innovative services and applications has coincided with a measurable rise in complexity.
As new technologies like the cloud and software-defined everything push digital transformation forward, they also have some undesirable effects. The learning curve for those responsible for deploying, managing and keeping new technology available gets steeper.
IDC’s 2019 State of IT Resilience Report found 58% of respondents expect the complexity of their data protection requirements to increase over the next two years. Just 21% felt they had adequate IT staff and skills to meet their organization’s requirements over the same period.
Meanwhile, Veeam reports 43% of business leaders believe that cloud providers can deliver better service levels for mission-critical data than their internal IT processes.
Without agreed-upon protocols and standards or the right in-house experience, new technology investments often fail to realize the intended results. Traditional methods prove ineffective for dealing with the added complexity. The result is data protection and risk mitigation plans that are misaligned with the technology they’re meant to protect. IT admins are left with “trial and error” and “hope for the best.”
Solving the Data Protection Dilemma
The complexity of mixed infrastructure environments has skyrocketed. The result: protecting everything is a greater challenge than ever. To get ahead of the IT resilience curve, modern organizations need to streamline protection for applications, data and infrastructure. They need to introduce backup and recovery policies aligned to their business strategy. From here, they need to test their backups to ensure they can recover critical systems against service levels demanded by the business.
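To make "testing backups against service levels" concrete, here is a minimal sketch of the idea. All of the workload names, RPO thresholds and timestamps below are hypothetical, and a real environment would pull this data from its backup tooling rather than a hard-coded dictionary; the point is simply that each workload's most recent backup can be checked against the recovery point objective the business has agreed to.

```python
from datetime import datetime, timedelta

# Hypothetical per-workload RPO targets: the maximum tolerable data loss,
# expressed as the maximum allowed age of the latest successful backup.
RPO_TARGETS = {
    "erp-db": timedelta(hours=1),
    "file-share": timedelta(hours=24),
}

def rpo_violations(last_backups, targets, now):
    """Return the workloads whose most recent backup is older than its RPO."""
    return [name for name, backed_up_at in last_backups.items()
            if now - backed_up_at > targets.get(name, timedelta(hours=24))]

# Example: the ERP database was last backed up two hours ago, which
# breaches its one-hour RPO; the file share is still within policy.
now = datetime(2019, 6, 1, 12, 0)
last_backups = {
    "erp-db": now - timedelta(hours=2),
    "file-share": now - timedelta(hours=3),
}
print(rpo_violations(last_backups, RPO_TARGETS, now))  # ['erp-db']
```

A report like this is only half the test, of course: proving the backups can actually be restored within the RTO still requires periodic recovery drills.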
Working with a third-party service provider like Softchoice helps you offload the burden of IT resilience with expert guidance and 24×7 support. That way you can refocus on realizing the ideas that will drive your business forward.
Are you ready to take the next step toward IT resilience?
And don’t miss the previous article in this series, “Is Your Risk Mitigation Strategy Resilient Enough?”
Protect your critical data and applications with our turnkey Backup as a Service solution. Reinforced by our deep understanding of data center and network technologies and enterprise-grade managed services, this offering helps you resolve issues faster and free IT resources to refocus on business transformation.