Cloud, a broken promise and back to basics

The journey to the cloud, accelerated by the outbreak of the Covid-19 pandemic, has produced more than a few nasty surprises for CIOs, CEOs and Boards of Directors who, after making investments, in many cases huge ones, to complete the promising leap to the cloud, are finding that its promises have not been kept.


The so-called hyperscalers that provide infrastructure, platform and public cloud services continue to emphasise the benefits of the transition from on-premises to cloud infrastructure. However, it is clear that they will have to make an extra effort if they want to prevent large companies from slowing down this “evolution” or even reversing the trend.

Multimillion-pound projects with no clear outcome, or whose results may be remarkable in terms of the capacity migrated but unfathomable in terms of the value added to the business, have taken some of the shine off the cloud and led many companies to rethink the role it should play in their strategies.

This reality, which we and our customers are perceiving, is also being reported by the consultancy firms observing and analysing the market. According to Gartner, more than half of all digitisation projects fail to meet the expectations of the CEO and management team, whether in terms of time (59%) or value generation (52%). Focusing specifically on the cloud, one possible path to the promised land of digitisation, it is telling that the consultancy McKinsey & Company recognises that some companies, instead of capturing the potential business value associated with the cloud, estimated at $1 trillion, are losing part of that value. This is mainly due to inefficiencies in the orchestration of cloud migrations, which add unexpected costs and delays. The waste is not trivial: the consultancy estimates that between 2021 and 2024 around $100 billion will be squandered on cloud migrations.

It is therefore clear that while the cost of hyperscale services may be lower on paper, their total cost of ownership (TCO) ends up being higher, and it is the TCO that is ultimately reflected in the income statement. The reason for this unpleasant surprise is twofold. First, there are hidden costs in the use of cloud services, such as those associated with processing unexpected peaks. Second, operating costs end up being higher, largely because of the greater complexity of the environment and the consequent need for specialised professionals, who are in short supply on the market and therefore command an additional cost.

Persistent shortcomings in the cloud

There is a third reason for this disappointment, which is nothing new but is crucial: the lack of vision and control of performance in cloud environments, whether public, private or hybrid. It so happens that the efficiency shortcomings that large companies were already suffering from not only still exist but are multiplying in the world of the cloud. And because there is no real overview or comprehensive plan for capacity and performance, it is becoming increasingly difficult to know what is going on and how much it costs.

This is not because of a lack of tools. The vast majority of large companies have several, but far from contributing to this overview, they offer partial snapshots, limited to certain environments and with a rather limited capacity to act (understood as the potential for making changes). Nor should we forget that once the decision to act has been taken, at most large companies it will have to be executed through a third-party service provider, adding both time and cost to the maintenance work.

Faced with this nightmare, consolidating providers, as has happened in the past, is gaining followers, since quality of service and customer experience are certainly fundamental. However, what CEOs and Boards of Directors really want is results: they take it for granted that technology, which is a cost, has to work, and work well.

In the cloud, as in the on-premises world, operational efficiency is the priority and must be demonstrated through KPIs that are material to the business, that is, financially relevant.

Moreover, the pressure to deliver results is growing in a scenario of reduced IT investment. Gartner has halved its forecast for growth in global IT spending in 2023: it now expects spending to grow 2.4% over 2022, to $4.5 trillion, down from the 5.1% growth it forecast the previous quarter. As such, CEOs, CFOs and the technology committees of Boards of Directors have a clear request for CIOs: an efficiency and service model that reduces costs without compromising operations.

Continuous vision and optimisation

The result is that we are witnessing the end of the cloud party and a return to basics. The monitoring and proactive management of technology, regardless of the model chosen, is the priority. So-called “Applied Observability” has risen to second place in Gartner's ranking of the “10 strategic technology trends for 2023”, and CIOs and CEOs agree that what they need is that single, detailed overview. They also want advanced capabilities to drill down, correlate, detect and resolve problems, and to optimise, optimise, optimise.

Company managers want to know for certain whether the infrastructures and applications that support the organisation’s operations, either in an on-premises model or in any of the different varieties of cloud, are working at full capacity. They want to know this not from a technical point of view, but in terms of business performance. They have questions such as: What is the consumption of infrastructure resources? Is it sized correctly? What are the response times of the different applications? Are they suitable for the different business processes? What improvements are necessary and what savings would they generate? Are the service level agreements being met? What is the performance level of my providers? And so on…

Performance can be understood as the optimal combination of efficiency/cost, and its continuous improvement is a must for large companies, which continue to face cost overruns, in some cases unbearable, in their ICT budgets.

Eliminating these cost overruns that, as we have seen, are the result of the sum of various costs (hidden costs due to unexpected processing, development costs, maintenance costs, etc.) requires an all-seeing eye and uninterrupted monitoring of the operation of infrastructures and applications, both during their development and once they are up and running.

It is also necessary to have the capacity to act, taking all dependencies into account and doing so in an increasingly automated way. Likewise, it is essential to be able to see how decisions involving technological change (such as migrations to the cloud) are reflected in the income statement and in customer service levels, and a key aspect is being able to measure the results of outsourcing policies towards technology providers, whether they are called IBM or hyperscalers.



Related insights

  • Faced with poor technology performance, ask for a second opinion

    There’s a new trend in the technology industry, and it’s been called FinOps, an acronym resulting from the fusion…

  • Technological performance, when the emperor has no clothes

    Aligning technology and business has always been a challenge, but one seemingly overcome with the advance of methodologies such as…

  • The cloud party is over, 2023 is the year of efficiency

    Millions have been invested in digital transformation in recent years, largely for the move to the cloud. Its siren calls…
