Software performance: banking is not winning

Banks are obliged to "open every day" with software applications of varying levels of innovation and quality (though always meeting certain minimum standards), while satisfying regulatory requirements and increasingly strict customer expectations in terms of availability, response times, usability and quality of service.

Business managers and ICT professionals at the banks are perfectly aware of this situation, but they are less aware of the additional costs they face due to software failures and deficiencies which, in addition to affecting operations, drive up infrastructure consumption and the time needed to resolve them. This use of time and resources represents a cost that ultimately ends up hitting the bottom line.

For many banks, protecting service and the customer comes at a cost to the business. Listening to CIOs and IT professionals at Spanish banks, it is abundantly clear that IT departments have chosen not to gamble with service or customer satisfaction (internal or external), but this philosophy inevitably carries an associated cost, one that requires the approval of the finance function, the CEO and, ultimately, the Board of Directors.

None of the three would be pleased to learn that the collar is steadily costing more than the dog it keeps happy, yet this is precisely what is happening. Leaving aside financial crises and toxic assets, it is also one of the drivers of banking mergers.

The figures do not lie. Despite the willingness and commitment to innovation shown by banks, the sector's expenditure on keeping its business running continues to exceed what it allocates to innovation. This was already the case in 2008, when 49.3% went to innovation and 50.7% to maintenance; in recent years the weight of the latter has only grown, and the split now stands at around 40% and 60%, respectively.

What is behind this large maintenance budget? The answer, in our experience, is clear: technological performance is not as good as it should be and is damaging the ability of the banks to innovate.

It is clear to everyone that the technological side of banking is not simple, especially for banks that were operating before the digital revolution and whose roots are grounded in the mainframe world. Two figures are enough to show this: on average, between 40% and 60% of a bank's technical components change within a year, and, according to our data, 50% of all components exhibit some kind of bad practice. Even more worryingly, when software is modified or updated, most of those bad practices remain, because they were never identified and there are no mechanisms in place to detect and correct them.

The problem is widespread, regardless of the environment. It is true that many new banks started operating in the digital world, and that the longest-running ones have migrated some of their applications to the cloud; even so, many of the truly core banking processes (customer onboarding, core banking, collection and payment management, risk and compliance) still end up on the mainframe. These systems are sometimes dismissed as 'legacy', mistakenly so, considering that in many banks they remain in great shape thanks to their guarantees of reliability, scalability and security.

As IBM is well aware, everything that ends up on the mainframe has a non-trivial cost; but so does everything that ends up in the cloud, as AWS and Microsoft well know.

Moreover, performance shortcomings not only increase costs but also impact other key indicators, such as availability. Banks continue to suffer availability problems in some basic operations more often than they would like, and these failures can have a multiplying effect at critical moments when transaction volumes soar, as was demonstrated during the Covid-19 pandemic and is still happening today.

In most cases, and despite banks claiming to have fully internalised a culture of software quality, the source of these failures, errors and deficiencies lies in the architectures and in the software itself, with the peculiarity, true to the Pareto Principle, that a few causes are responsible for most of the problems. An additional problem is banks' strong dependence on third-party suppliers, who are largely responsible for the inefficiencies.
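The Pareto effect described above is easy to check against any incident log: rank root causes by frequency and measure what share of failures the top few explain. The sketch below uses an entirely hypothetical incident feed and issue names for illustration:

```python
from collections import Counter

# Hypothetical incident log: each entry names the root-cause pattern
# (bad practice) behind a production failure. Counts are illustrative.
incidents = (
    ["unindexed-query"] * 40
    + ["n-plus-one-calls"] * 25
    + ["unbounded-cursor"] * 15
    + ["missing-timeout"] * 10
    + ["log-flood"] * 5
    + ["config-drift"] * 3
    + ["race-condition"] * 2
)

def pareto_share(events, top_n):
    """Fraction of all events explained by the top_n most frequent causes."""
    counts = Counter(events)
    top = sum(c for _, c in counts.most_common(top_n))
    return top / len(events)

# In this made-up sample, 3 of 7 causes explain 80% of the incidents.
print(f"Top 3 causes explain {pareto_share(incidents, 3):.0%} of incidents")
```

Running the same ranking over a real incident feed tells a bank exactly which handful of bad practices to fix first.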

In addition, although financial institutions have a myriad of monitoring solutions and APM tools at their disposal, they still lack a single, complete and detailed view of what is going on.

Automating continuous performance improvement

As a result of all this, at the end of the day (and the night), when it comes to performance, banks could earn more than they do, because there is significant scope for savings. This is precisely the ultimate goal of a Performance Operations Centre, a Technical Performance Office like BOA (Boost & Optimize Applications).

The BOA platform, which currently manages more than 500 million business processes, mainly in the banking and insurance sectors, has a threefold objective: to improve the performance of IT applications and infrastructures by detecting and eliminating problems, to reduce response times and to facilitate the reduction of total costs.

BOA operates in five phases with feedback loops: data capture; a census of processes and daily detections; data analysis and detection of improvement opportunities; monitoring of development; and verification of compliance and impact on targets through a series of KPIs. Intelligent automation is increasingly used across these phases. Currently, the BOA algorithms cover 79% of problems and bad practices, a critical point considering, as mentioned above, that the 50 most common issues are responsible for 80% of problems in the IT environment.
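The five phases can be pictured as a simple pipeline. The sketch below is purely illustrative: BOA's internals are not public, and every function name, process name and threshold here is a hypothetical stand-in for the real thing:

```python
# Illustrative sketch of a five-phase continuous performance-improvement
# loop. All names and thresholds are hypothetical; this is not BOA's
# actual implementation.

def capture(raw_feed):
    """Phase 1: capture raw execution data per process."""
    return [dict(r) for r in raw_feed]

def census(records):
    """Phase 2: build a census of processes with their latest detections."""
    return {r["process"]: r for r in records}

def detect_opportunities(inventory, threshold_ms=500):
    """Phase 3: analyse the data and flag improvement opportunities."""
    return [p for p, r in inventory.items() if r["elapsed_ms"] > threshold_ms]

def track_fixes(flagged, fixed):
    """Phase 4: monitor development, keeping the still-pending items."""
    return [p for p in flagged if p not in fixed]

def kpi(flagged, pending):
    """Phase 5: verify impact via a remediation-rate KPI."""
    return 1 - len(pending) / len(flagged) if flagged else 1.0

# Hypothetical daily feed of process timings.
feed = [
    {"process": "batch-settlement", "elapsed_ms": 900},
    {"process": "card-auth", "elapsed_ms": 120},
    {"process": "risk-scoring", "elapsed_ms": 650},
]
flagged = detect_opportunities(census(capture(feed)))
pending = track_fixes(flagged, fixed={"batch-settlement"})
print(f"Flagged: {flagged}, remediation rate: {kpi(flagged, pending):.0%}")
```

The KPI output of phase five feeds back into the next day's capture and census, which is what makes the loop continuous rather than a one-off audit.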

In addition to automation and intelligence for continuous improvement and the elimination of inefficiencies that hinder performance, BOA provides a complete, business-oriented view of what is happening through dashboards, with KPIs tailored to the different profiles: development, architecture, business channel, management and production.

The aim is to ensure that problems and inefficiencies do not go unnoticed or unchecked, because when this control is missing the business unknowingly incurs cost overruns, whether on infrastructure or on repeated software modifications. In our experience, these overruns amount to around 15% of technology investment, which is not an insignificant amount.
