Be careful, they say, what you wish for. Most organizations have at last bought into the promise of near-fully virtualized enterprise data centers. And, in the long run, that’s unquestionably a good thing. After all, virtualizing your data center offers a variety of potential benefits, from reduced upfront capital to lower operating costs. Yet a whole host of complexities – not the least of which is exacerbated by the diverse group of virtualization rivals vying for a place in your data center – has made the challenge of installing and configuring a virtual environment daunting.
For instance, as organizations have begun consolidating their physical servers, pushing the envelope and becoming more and more virtualized, solving yesterday’s challenges has created newer and bigger ones. One, of course, is the lack of standards around virtualization. If IT is tasked with building a private cloud, more often than not it finds itself buying servers, storage, network infrastructure, a VMware hypervisor and management tools from a highly qualified but disparate group of vendors – and is then left to do all the heavy lifting required to stitch everything together.
Standards? What standards?
But that’s really just the tip of the iceberg. IT may rise to the challenge and do a bang-up job creating a solid foundation for its highly virtualized environment. What happens next, however, is just as important – for instance, when a virtual machine fails over to another server that’s running a different hypervisor version with different storage. Is the failover going to work? And what happens if it doesn’t?
To put it mildly, platform heterogeneity, differing standards and a lack of cross-discipline expertise are making the quest to get to the cloud – with its goals of improved resilience and ease of service activation – a little more difficult than most of us realized. Every purchase takes on an air of three-dimensional chess, requiring an understanding of how each piece works on its own and how it may interact with every other one.
Building a better foundation
A new solution called IBM BladeCenter Foundation for Cloud is aiming to change all this. A pre-configured bundle that offers a foundation optimized for highly virtualized environments, it’s based on the IBM System x® platform, BladeCenter and VMware. All parts come fully integrated, including racks, servers, storage, networking, software, PDUs, KVM, cables, you name it. In a complex world with little to no standardization, here’s a comprehensive, end-to-end platform with configurations that have been through rigorous design and performance analysis and that support architectures based on leading technologies like convergence and centralized management. That means that – relatively speaking – it’s faster and easier to order and deploy, with tested configurations for implementations of various sizes.
BladeCenter Foundation for Cloud offers a highly resilient environment with no single point of failure and advanced virtualization capabilities designed to simplify management. It automates network and storage address virtualization for faster failover recovery and easier expansion, as well as offers best-in-class systems management tools that can minimize downtime and allow IT to control both physical and virtual IT resources through a single interface.
As platform heterogeneity and complexity threaten to slow down the drive toward virtualization, BladeCenter Foundation for Cloud promises to help simplify and accelerate IT’s virtualized data center acquisition and deployment with time-tested virtualization implementation methods and best practices gained from IBM’s track record of successful virtualization deployments.
Readying your infrastructure with virtualization is a huge step – but it’s only the first phase in building a cloud for your organization.