Over the next five years, a billion more people will come online, with an estimated 15 billion connected devices and untold amounts of visual and interactive content, pushing the internet’s data footprint past 1,000 exabytes – or (get used to hearing this term a lot more) – 1 zettabyte.
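For scale, the conversion is straightforward decimal (SI) arithmetic:

```python
# Decimal (SI) storage units: 1 ZB = 1,000 EB = 10**21 bytes.
EXABYTE = 10**18
ZETTABYTE = 10**21
print(ZETTABYTE // EXABYTE)  # 1000
```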
This trifecta of more users, more devices and more data will inevitably escalate demands on IT and data centers, posing some daunting infrastructure challenges in terms of performance, efficiency and cost.
Wanted: more powerful and integrated engines.
That means the heart of the next-generation data center – and everything related to it, from enterprise infrastructure to workstations, cloud installations, storage systems and high-performance computing that will all be expected to “just work” – will need more flexible, powerful engines to keep things humming.
In other words, better processors.
Better in a whole host of ways: they’ll be expected to deliver better-than-ever performance whatever the constraints happen to be (floor space, power or budget), on workloads ranging from the most complicated scientific exploration to simple, yet crucial, web serving and infrastructure applications.
But just like a car engine, your ability to realize its full potential depends on its optimal integration with the rest of the machinery. For instance, what would happen if you put a race car engine into an economy car? The short answer: not much. You’d never get enough fuel to the engine and the power train wouldn’t be able to handle that much power, so you’d likely just sit there with your tires spinning – say hello, performance bottleneck.
To steer the metaphor a little further, your driving experience is also significantly improved if that high-performance engine doesn’t overheat or wildly burn through fuel. In other words, beyond simply making processors that can deal with orders of magnitude more data – with more cores, cache and memory – they’ll also have to be more adaptive and intelligent. So, for instance, processors will have to keep better track of how hard they’re running, deal with workload spikes as quickly as possible, and then drop back to a lower power state, reducing average power draw and your cost of operation.
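The spike-then-idle behavior described above can be sketched as a toy power-management loop. This is purely illustrative – the states, thresholds and numbers are hypothetical, not Intel’s actual implementation:

```python
# Toy sketch of an adaptive CPU power governor (hypothetical states and
# thresholds, not Intel's actual implementation). The idea: ramp up
# quickly on a workload spike, then drop back to a low-power state to
# cut average power draw.

LOW, NOMINAL, TURBO = "low-power", "nominal", "turbo"

def next_state(utilization: float, thermal_headroom: float) -> str:
    """Pick a power state from current utilization (0..1) and headroom (0..1)."""
    if utilization > 0.9 and thermal_headroom > 0.2:
        return TURBO      # spike: race to finish the work quickly
    if utilization > 0.3:
        return NOMINAL
    return LOW            # idle enough to save power

# A bursty workload: mostly idle, with one spike in the middle.
trace = [0.1, 0.15, 0.95, 0.97, 0.4, 0.1]
states = [next_state(u, thermal_headroom=0.5) for u in trace]
print(states)
# ['low-power', 'low-power', 'turbo', 'turbo', 'nominal', 'low-power']
```

The key property is visible in the trace: the processor spends most of its time in the low-power state and only burns peak power for the brief spike.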
Enter a new engine that’s designed to meet these daunting infrastructure and performance challenges at the heart of the next-generation data center: the Intel® Xeon® Processor E5-2600 product family.
More cores. More memory. More integration. More bandwidth.
Compared to the prior-generation Xeon 5600 processors, the latest E5-2600 family offers more capabilities and performance:
- Up to two additional cores for more raw computational power
- More memory, raising maximum capacity from 288 GB to 768 GB
- Around two-thirds more cache, plus an additional memory channel with higher-than-ever per-DIMM capacity to reduce latency
- Faster connections throughout the system, with up to twice the bandwidth
As alluded to earlier with our race car engine example, we know that as the computational power of the processor increases, the performance bottlenecks shift. With the new E5-2600 family, Intel has focused on improving I/O, networking and storage to ensure that data gets to the processor as efficiently as possible.
In addition, Intel’s Turbo Boost Technology 2.0 offers performance that adapts to spikes in workloads, delivering more computational power when it’s needed:
- More performance: Higher turbo speeds maximize performance for single- and multi-threaded applications.
- More intelligence: Adapts to conditions by not engaging turbo when memory and I/O are the bottlenecks.
- More efficiency: Manages power and thermal headroom to optimize time spent in turbo mode.
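The three bullets above amount to a simple decision rule: boost only when the cores – not memory or I/O – are the limiter, and only while power and thermal headroom last. Here is a minimal sketch of that rule with hypothetical thresholds; it is not the actual Turbo Boost 2.0 algorithm:

```python
# Hypothetical sketch of a bottleneck-aware turbo decision, in the spirit
# of the bullets above. Thresholds are invented for illustration; this is
# not Intel's actual Turbo Boost 2.0 algorithm.

def should_turbo(cpu_bound_ratio: float, power_headroom_watts: float,
                 thermal_headroom_c: float) -> bool:
    """Engage turbo only if the workload is core-bound (not stalled on
    memory/I/O) and there is power and thermal headroom to spend."""
    core_bound = cpu_bound_ratio > 0.7   # mostly retiring instructions, not stalling
    has_headroom = power_headroom_watts > 5.0 and thermal_headroom_c > 3.0
    return core_bound and has_headroom

# A memory-bound workload gains little from higher clocks, so no turbo:
print(should_turbo(cpu_bound_ratio=0.3, power_headroom_watts=20.0, thermal_headroom_c=10.0))  # False
# A compute-bound workload with headroom gets the boost:
print(should_turbo(cpu_bound_ratio=0.9, power_headroom_watts=20.0, thermal_headroom_c=10.0))  # True
```

Skipping turbo on memory- or I/O-bound work is the “more intelligence” bullet in practice: higher clocks would burn power without moving the real bottleneck.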
All in all, these advances and others, designed and embedded into the new E5-2600 family of processors, combine to make systems higher-performing, more efficient, more secure, easier to manage and ready to meet the infrastructure challenges of tomorrow’s data center.
Zettabytes, here we come.