Cost optimization in the cloud is a universal issue, whether using Microsoft or another provider. According to research by Softchoice, 57% of IT leaders have exceeded their cloud budget at one point or another. Twenty percent have exceeded it by more than 20%. Any organization investing in the cloud, to any extent, can experience cost overruns.
The budgetary practices that made sense in the era of on-premises (CapEx) computing are exactly the opposite of what's required to run the cloud efficiently (OpEx). Consider, for example, the issue of buying capacity for peak times. If your systems are on-prem, your company needs to own enough hardware to handle peak throughput. In the cloud, because resources are flexible, the opposite approach is optimal: maintain a baseline of computing power for day-to-day operations, and scale up as required.
Additionally, while the ease of purchasing new virtual machines in the cloud makes it appealing for burst computing, it also makes it very easy to overspend. Buying hardware, by contrast, takes time and requires approval processes that act as a natural brake on spending.
However, there is good news: with the right practices, cloud overspend is avoidable.
Ultimately, good cloud budgeting requires a comprehensive approach. However, there are a few quick steps that any organization using Azure can take to lower costs right away.
First, take advantage of Microsoft's reserved instances. They suit any stable, long-term workload, since they are purchased in one-year or three-year terms and offer savings of up to 72% compared with pay-as-you-go pricing. They are also more flexible than they once were: existing workloads can easily be moved onto a reserved instance, and the reserved capacity can be reconfigured over time.
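To see why reserved instances matter for stable workloads, consider a rough back-of-the-envelope comparison. The sketch below uses an illustrative hourly rate and the headline 72% discount figure, not real Azure prices:

```python
# Illustrative sketch: annual cost of a stable, always-on VM under
# pay-as-you-go vs. a reserved instance. The $0.20/hr rate is a made-up
# example; the 72% discount is the upper bound cited for 3-year terms.

HOURS_PER_YEAR = 8760

def annual_cost_payg(hourly_rate: float) -> float:
    """Pay-as-you-go: billed for every hour the VM runs."""
    return hourly_rate * HOURS_PER_YEAR

def annual_cost_reserved(hourly_rate: float, discount: float) -> float:
    """Reserved instance: same capacity at a discounted effective rate."""
    return hourly_rate * (1 - discount) * HOURS_PER_YEAR

payg = annual_cost_payg(0.20)                # $1,752.00/yr
reserved = annual_cost_reserved(0.20, 0.72)  # $490.56/yr
print(f"Pay-as-you-go: ${payg:,.2f}/yr")
print(f"Reserved:      ${reserved:,.2f}/yr")
```

The gap only matters if the workload really does run around the clock for the full term, which is why reservations should be limited to stable, long-term workloads.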
Another under-utilized feature is license portability, via the Azure Hybrid Benefit. Organizations running Windows Server or SQL Server with Software Assurance can apply their existing licenses to their VM images, reducing costs to a level comparable to Linux pricing.
Finally, a quick step that's rarely taken: simply turn off instances when they're not required. A surprising number of companies leave unneeded instances running around the clock, which is the equivalent of leaving your air conditioner on while you're away from home.
However, these steps don’t add up to a real cloud cost optimization strategy. This requires a comprehensive, organization-wide understanding of cloud requirements. It begins with governance.
A lot of cloud overspending comes down to a lack of transparency. In the worst-case scenario, IT is completely unaware of how different departments are using the cloud, and that makes accountability impossible. IT becomes responsible for hefty bills without being able to establish where they came from.
The best-case scenario, on the other hand, is knowing exactly who's using cloud, why, and how mission-critical their usage is. Having this knowledge naturally leads to better practices. When it's understood which departments are using what, it's easy to enact a sensible chargeback policy. Knowing whether a given workload is mission-critical makes it clear whether surplus compute is required. And knowing when a workload needs to be operational makes it possible to turn it off when it's not.
This isn’t a one-time project. Logging and labeling workloads need to be done on an ongoing basis. Clear and consistent reporting requires vigilance, especially as compute needs grow and evolve. Every purchase, no matter who makes it, requires some level of oversight. Remember that a misplaced decimal point in a provisioning script can entail a 10x overspend if it isn’t caught quickly.
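One concrete way to keep labeling honest on an ongoing basis is to enforce a tagging policy at provisioning time. The sketch below is an assumption about what such a policy might look like; the tag names are illustrative, not a standard:

```python
# Hypothetical sketch: check that every resource carries the tags needed
# for accountability. Tag names ("department", "project", etc.) are
# illustrative assumptions, not a prescribed schema.

REQUIRED_TAGS = {"department", "project", "mission_critical", "schedule"}

def missing_tags(resource_tags: dict) -> set:
    """Return the required tags a resource is missing."""
    return REQUIRED_TAGS - resource_tags.keys()

vm_tags = {"department": "finance", "project": "reporting"}
print(missing_tags(vm_tags))  # untagged fields to chase down
```

Run as a gate in a provisioning pipeline (or enforced via Azure Policy), a check like this ensures every bill line can be traced back to an owner, a purpose, and an expected operating schedule.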
Moreover, once this infrastructure is in place, there’s still the matter of choosing a cost measurement model that makes sense.
The most intuitive way of measuring cloud spend is the per-customer model, where each department is treated as an internal customer. This makes accountability easy: if a specific department overspends, that department is responsible, and IT isn't implicated when the overage isn't IT's fault.
Unfortunately, this isn't always possible or practical. Every organization contains a degree of ambiguity: services are shared across departments, and projects frequently fall between them. It's also much easier to apply this model to PaaS services, and not every organization is ready for (or needs) PaaS. Sometimes a per-project model is more appropriate, especially in companies with changeable workloads. For example, a company engaging in merger and acquisition (M&A) activity can budget for a period of increased demand around a merger.
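If resources carry consistent tags, both cost models fall out of the same billing data. A minimal sketch, assuming tagged line items (the record shape and tag names are assumptions for illustration):

```python
# Hypothetical sketch: roll up line-item cloud spend by tag, supporting
# either a per-department (internal customer) or per-project view.
from collections import defaultdict

def chargeback(line_items: list, key: str) -> dict:
    """Sum cost per value of the chosen tag, e.g. 'department' or 'project'."""
    totals = defaultdict(float)
    for item in line_items:
        totals[item["tags"].get(key, "untagged")] += item["cost"]
    return dict(totals)

bill = [
    {"cost": 120.0, "tags": {"department": "finance", "project": "reporting"}},
    {"cost": 80.0,  "tags": {"department": "finance", "project": "m-and-a"}},
    {"cost": 50.0,  "tags": {"department": "hr"}},
]
print(chargeback(bill, "department"))  # {'finance': 200.0, 'hr': 50.0}
print(chargeback(bill, "project"))     # untagged spend surfaces as 'untagged'
```

Note how untagged spend surfaces as its own bucket rather than disappearing, which is exactly the transparency problem governance is meant to solve.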
Of course, both of these cost models can be deployed on an ongoing basis. It doesn’t have to be one or the other. Once a baseline of good governance and optimization is achieved, a variety of budgetary practices become attainable.
However, that baseline can be difficult to achieve without outside help.
At Softchoice, we’re prepared to help any organization take command of their costs at any stage of Azure adoption.
For firms that haven't yet moved to the cloud, we introduce best practices early, ensuring that appropriate governance and reporting are in place from the beginning. For firms that have already adopted Azure but are seeing undue costs, we help bring finances under control.
Our Keystone services involve both short-term restructuring and long-term maintenance. We establish better cost management and install a cloud management dashboard that makes it easy to monitor spending. Then, on an ongoing basis, we provide optimization through scheduled reviews of insights and trends, offering recommendations as appropriate to keep overspend from creeping back in, as it often does.
Contact Softchoice to learn how we help our many Azure customers make cloud cost optimization not only possible but easy.