The relationship between developers and the cloud was practically love at first sight. For years, migration to the cloud continued unabated across countless industries, driven by its cost savings, flexibility, and scalability. But as cloud bills continue to skyrocket and economic volatility pushes businesses everywhere to cut costs, that is shifting. Some companies, possibly spurred by recent admissions from major cloud providers that costs are getting out of control, have raised the question of “declouding,” either for specific use cases or, in one notable case, entirely.
They have a point. Public cloud spending is heading toward the $600 billion mark, and, as Basecamp’s CEO put it, “much of these costs are based on provisions that never get used but are provisioned just in case they could be.”
The promise of the cloud was always its dynamism: users would be free to procure computing resources only when the business required them. Instead, we’ve arrived at a situation where companies constantly procure more than they need, just in case, wasting time, energy, and money.
What needs to change to make the cloud’s promise a reality?
A History of the Cloud – And How It Got So Expensive
The cloud came into existence to provide greater flexibility and scalability, powering the tech boom of the past 15 years. The pay-as-you-go model created nearly unlimited growth potential, enabling businesses to build and scale new technologies rapidly, to the delight of billions of users around the world.
But the scalability of the cloud has become its biggest financial drawback. The range of offerings, combined with how easy it is to spin up new servers, often leads users to procure significantly more than they need. Sometimes this is the work of DevOps teams who spin up servers for testing, get pulled onto the next business need, and never delete them. Other times, expensive compute instances are provisioned intentionally to cover extreme peaks in demand and prevent a situation where applications run out of capacity – in short, to ensure critical business continuity.
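Spotting the first kind of waste is often a matter of asking the cloud provider what is still running. Below is a minimal sketch of that idea, assuming AWS, the boto3 SDK, and read-only credentials; the “env” tag values and the seven-day threshold are illustrative assumptions, not a prescribed convention.

```python
# Rough sketch: flag long-running instances tagged as test/dev environments,
# common candidates for "spun up for testing and forgotten".
# Assumes boto3 is installed and read-only AWS credentials are configured;
# the tag values and 7-day threshold below are illustrative assumptions.
from datetime import datetime, timedelta, timezone

import boto3

MAX_AGE = timedelta(days=7)                 # assumption: test boxes older than a week are suspect
SUSPECT_ENVS = {"test", "dev", "staging"}   # assumption: your tagging convention may differ

ec2 = boto3.client("ec2")
now = datetime.now(timezone.utc)

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"].lower(): t["Value"].lower() for t in instance.get("Tags", [])}
            age = now - instance["LaunchTime"]
            if tags.get("env") in SUSPECT_ENVS and age > MAX_AGE:
                print(
                    f"{instance['InstanceId']} ({instance['InstanceType']}) "
                    f"tagged env={tags['env']} has been running for {age.days} days"
                )
```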
Regardless, the cost of overprovisioning, along with the hidden costs of various cloud services, is causing businesses to reassess their dependence on the cloud and question whether it’s worth the investment.
The Costs of Moving Back On-Prem
Yes, the cloud’s expensive, but would moving back to on-prem be less costly?
Dropbox famously moved 90% of its workloads back to on-prem servers, but that move isn’t right for everyone and has the potential to limit the innovation the cloud enables.
Moving back on-prem means accepting responsibility for ensuring high availability, low latency, and high performance as customer needs evolve. Businesses are also limited by the fixed capacity of their servers and must maintain a buffer so the system never runs out of headroom – a large buffer that sits dormant and unused most of the time.
What’s more, you’ll need a team of superhero SysAdmins to manage your data center. This rare breed of engineer is hard to find and expensive to keep.
Security vulnerabilities, power outages, cooling failures, fires, floods, and a litany of other dangers can quickly undo any initial savings from moving on-prem.
On the flip side, what happens if things do go well and none of these dangers unfold? The very nature of on-prem infrastructure means you cannot quickly absorb a major upswing in usage, which can cause significant UX and operational issues. This lack of agility makes it difficult to meet growing business demands at scale.
Instead, your entire operational capacity has to be planned in advance, with the knowledge that most of your infrastructure will sit idle. That’s an expensive pill to swallow.
New Year’s Resolutions – Start a Cloud Flexibility Journey
If leaving the cloud is not necessarily the answer, then it’s time for a new approach to cloud operations, one designed to be flexible and cost-efficient. Cloud engineers have a right to demand that their infrastructure become more dynamic: cost-efficient, yet flexible enough to absorb the inevitable changes in usage. The list of possible waste is long – unattached EBS volumes, idle Elastic IPs (EIPs) and load balancers (ELBs), rarely used instances, and instances running in distant regions. With the right tools to track both waste and cost, it becomes possible to spot where you can scale down – or “decloud” entirely.
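As a starting point, even a simple audit can surface two of the usual suspects named above. The sketch below is one way to do it, assuming AWS, the boto3 SDK, and read-only credentials; it is an illustration of the idea, not a complete cost-tracking tool.

```python
# Minimal waste-audit sketch, assuming boto3 and read-only AWS credentials.
# It flags EBS volumes attached to nothing and Elastic IPs associated with
# nothing, both of which keep billing while they sit idle.
import boto3

ec2 = boto3.client("ec2")

# Unattached EBS volumes: status "available" means no instance is using them.
unattached = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]
for vol in unattached:
    print(
        f"Unattached volume {vol['VolumeId']}: {vol['Size']} GiB, "
        f"created {vol['CreateTime']:%Y-%m-%d}"
    )

# Unassociated Elastic IPs: an address with no AssociationId or InstanceId
# is reserved (and billed) but serving no traffic.
for addr in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in addr and "InstanceId" not in addr:
        print(f"Idle Elastic IP {addr['PublicIp']} (allocation {addr.get('AllocationId', 'n/a')})")
```

Run per region (or loop over regions) and feed the output into whatever cost reporting you already use; the point is simply that this kind of visibility is what makes scaling down a decision rather than a guess.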
Cloud cost is a tremendous challenge for businesses as they scale, but for most, leaving the cloud is not the most cost-efficient or agile way to grow. Making cloud infrastructure more dynamic brings the early vision of tech democratization closer to reality, because the cloud becomes more affordable and accessible to all. Businesses can shrink their cloud footprint while making it more efficient, effective, agile, and ultimately far less expensive.
By Maxim Melamedov, CEO and Co-founder of Zesty