While cost-cutting strategies are not necessarily a better option than revenue-generating strategies, it is undeniable that the former are easier to implement. The explanation is simple: generating revenue requires rethinking business processes, the sales model and the value proposition to customers. As a result, most companies opt for cost-cutting, with the undesirable socio-economic consequences we are already familiar with.
A quick look at the most common cost-control strategies in technology brings up, for example, freezing staff numbers or halting new projects, which, curiously enough, tend to be precisely those aimed at generating revenue. Other measures that have been gaining popularity in recent times are migrations to lower-cost technologies, which often end up translating into other kinds of added costs; not forgetting, of course, the favourite measure of all: renegotiating contracts with providers.
In this last area, the prevailing trend is for volume-based outsourcing contracts and a constant reduction of prices, both for infrastructure providers and for integrators. In other words, price cutting has become the preferred way for companies to remain competitive.
However, without entering into ethical debates, we can highlight two paradoxes that are easy to spot in this type of strategy. The first lies in the lack of any assessment of how lowering prices for subcontractors will impact the quality level; the second, which borders on the absurd, is that if technology is considered truly strategic, shouldn’t its efficiency be prioritised over cost control?
At Orizon, these two points are fundamental in defining the concept of performance, which, in our experience, still does not receive the attention it truly deserves. When we warn that technology applications are not performing as they really should, what we are saying is firstly that the company is not achieving the business targets set or meeting the service commitments made, internal or external. Experience tells us that, on average, 20% of Service Level Agreements (SLAs) are not met, which in today’s hyper-interconnected scenario puts companies in an exceedingly difficult situation.
The second consequence of poor application performance is a direct increase in the consumption of technology resources. This over-consumption, in our experience, can easily represent up to 10% of infrastructure costs.
Thirdly, we cannot lose sight of the problems caused by low software quality, which, as we have already pointed out, are often a consequence of the repeated downward price pressure on integrators’ services. Here it is worth noting that poor software quality, through bad practices such as rework, re-coding or failure to honour guarantees, can account for up to 15% of integrators’ costs.
It should also be added that the potential for improvement is enormous, largely owing to the technological environment and, as in so many other areas, to the Pareto principle, also known as the 80/20 rule or the law of the vital few, according to which a very small number of bad practices generate most of the problems.
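The 80/20 dynamic described above can be sketched in a few lines of Python. The defect categories and counts below are entirely hypothetical, invented for illustration; the point is simply how a ranked, cumulative view isolates the "vital few" causes worth fixing first.

```python
# Hypothetical defect counts per bad practice. The categories and figures
# are invented for this sketch, not measured data.
defects_by_cause = {
    "unbounded SQL queries": 480,
    "missing index usage": 260,
    "chatty service calls": 140,
    "hard-coded timeouts": 45,
    "verbose logging in loops": 30,
    "duplicate validation": 20,
    "unclosed connections": 15,
    "magic numbers": 6,
    "inconsistent naming": 3,
    "dead code": 1,
}

def vital_few(counts, threshold=0.8):
    """Return the smallest set of causes covering `threshold` of all defects."""
    total = sum(counts.values())
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    covered, selected = 0, []
    for cause, n in ranked:
        selected.append(cause)
        covered += n
        if covered / total >= threshold:
            break
    return selected

top = vital_few(defects_by_cause)
print(f"{len(top)} of {len(defects_by_cause)} causes explain 80% of defects: {top}")
```

With these invented numbers, 3 of the 10 causes account for over 80% of the defects, which is the kind of concentration the Pareto principle predicts and the reason a bottom-up quality programme can start small and still move the needle.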
IT governance and performance culture
In this context, and if companies genuinely want to improve the way they measure the value of technology, it is imperative for efficiency to be at the heart of the measurement model, and this means addressing the cost element more radically. Ultimately, it is about implementing a new culture, one of performance, and addressing the process of continuous improvement of software quality from the bottom up, as opposed to the approach used in today’s most common solutions.
The development and adoption of a performance culture, understood as a perpetual cycle of improvement, is a key process within the IT governance of an organisation and has an impact on each and every one of its areas, including, of course, the company’s operating account, which, at the end of the day, is decisive for the business.
To conclude, it is imperative to know for certain whether the technological applications and infrastructures we have are working properly and at full performance, from both a technical and a business-support point of view. Measuring how software is working and identifying any problems it generates allows companies to manage their providers better and, even more importantly, to generate a cycle of continuous improvement in software quality, not only in the short term but also in the medium and long term, a time horizon that is not very common in our environment.