From just watching to taking action

Monitoring tools have evolved alongside the huge technological leaps in the business world and can no longer be limited to continuous monitoring and optimisation. Now they must also support decision-making based on real, complete and detailed knowledge of the software's dynamic performance.


In the world of business IT, we have seen three major technological stages: the IBM mainframe world, the Java-Oracle world and, nowadays, the cloud world.

The first stage is widely regarded as an environment famous for its stability and security. In fact, despite frequent predictions of its extinction, the mainframe is not just surviving but growing, and it remains essential for many companies. That early stage has left a significant legacy, especially at organisations that were cutting-edge at the time, such as those in the financial and airline sectors.

For companies at the forefront of technology, the mainframe (first the IBM System/360 and later IBM Z) was the way forward. That explains why these companies have a legacy that they are obliged to optimise so that it continues to guarantee good service and compliance.

In the 1990s came the second stage: distributed systems. The mainframe was no longer alone; it began to coexist with other environments, and relational databases such as Oracle and the Java language emerged.

Companies invested in distributed environments to differing degrees: while in practice every company transformed the front end to make it more user-friendly, the back-end scenarios varied. Some chose to develop all their new applications in the distributed world, while others kept the mainframe as the core of their systems.

The reasons for these differing commitments to the distributed world, whose wisdom has yet to be determined, were mainly, but not exclusively, financial: the mainframe was more costly, and distributed systems promised both an alternative and a positive impact on the bottom line.

Now we come to the third stage: the cloud and microservices. As a financial model, the cloud lets CFOs convert a fixed cost into a variable one, and it undoubtedly implies a change in technology architectures as well. Companies want to be in the cloud because they want more automation and speed; in the cloud world, a priori, development is faster and, supposedly, cheaper.

The new challenges of the cloud world

The cloud model promises more automation, agility, flexibility, scalability and lower costs, so it seems difficult not to surrender to its call. But progress is not without its difficulties: from the viewpoint of monitoring, optimisation and performance-focused decision-making, problems appear in the cloud that had previously been solved or did not exist. In the mainframe world, and to some extent in the distributed world, there was end-to-end monitoring, timings were perfectly controlled and predictability was possible; in the cloud, the situation is quite different.

Indeed, the cloud model offers options, so it is a journey that requires decisions to be made and in which, moreover, it is difficult to know for sure what is happening. Dispersion is extreme: there are services you control, but many others belong to third parties. In short, the cloud can become something of a black box.

The situation can be further complicated by, on the one hand, the limitations established by legislation and, on the other, the presence or otherwise of a culture of technological excellence. Technological fervour or arbitrary decisions can turn the cloud’s supposed freedom of choice into a poisoned chalice.

In the cloud model, the freedom to choose "parts" increases the complexity of the systems, and actual implementations become unpredictable. At the same time, organisational structures grow, and development teams become dispersed and adopt independent methodologies.

In short, managing and controlling the development process becomes unfeasible, and performance ends up being measured in terms of service failures or outages, the function towards which today's AIOps tools are geared. However, four additional factors have a direct impact on competitiveness.

The first is infrastructure cost, both in z/OS environments and especially in the cloud, where architectures often scale poorly in cost terms, in private and public installations alike (each with its own distinct problems). The second is user experience, from the viewpoint of both response time and failure rates and of service perception; here again the cloud is more complex, since page loading is no longer sequential. The third key factor is compliance with Service Level Agreements (SLAs), and the fourth is the efficiency of the development cycle, understood as the objective measurement of the dynamic performance of the software at provider, group/application and programmer level.
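To make the user-experience and SLA factors concrete, here is a minimal sketch of how such KPIs might be computed from a request log. The sample data, the nearest-rank p95 and the 500 ms SLA target are all illustrative assumptions, not metrics or thresholds defined by any particular product.

```python
# Hypothetical request log for one service: (response_ms, succeeded) pairs.
requests = [(120, True), (340, True), (95, True), (1800, False),
            (210, True), (450, True), (2300, True), (130, True)]

latencies = sorted(ms for ms, _ in requests)

# User experience: p95 response time (nearest-rank on the sorted latencies).
p95 = latencies[min(len(latencies) - 1, int(0.95 * len(latencies)))]

# User experience: failure rate across all requests.
failure_rate = sum(1 for _, ok in requests if not ok) / len(requests)

# SLA compliance: share of requests answered within an assumed 500 ms target.
SLA_TARGET_MS = 500
sla_compliance = sum(1 for ms, _ in requests if ms <= SLA_TARGET_MS) / len(requests)

print(f"p95={p95} ms, failures={failure_rate:.1%}, SLA met={sla_compliance:.1%}")
# → p95=2300 ms, failures=12.5%, SLA met=75.0%
```

Even this toy example shows why percentiles matter more than averages: a single slow outlier dominates the p95 while barely moving the mean.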

Independent performance centre

In this context, none of the current APM, AIOps, DevOps or observability solutions can respond in a unified way to the needs of a process of monitoring, control and continuous improvement across the four aspects mentioned above. This is where the BOA (Boost & Optimize Applications) platform succeeds: it is the only one in which monitoring (proactive rather than reactive) and optimisation are carried out without leaving the platform.

BOA is also immune to the technological evolution described above. Regardless of the type of infrastructure, architecture, flavour of application or IT service consumption model, BOA operates as an independent performance centre focused on the continuous improvement of software code and the optimal use of infrastructure. To do this, it provides insight into, and the ability to act on, the main performance KPIs.

BOA makes it possible to implement a proactive, smart monitoring standard based on eight phases: data intake from multiple sources; processing; transformation; consolidation; definition of KPIs; establishment of procedures for the automated resolution of incidents; support for all of these processes; and continuous measurement of the results. It is a virtuous circle with constant feedback that, thanks to the continuous incorporation of new artificial-intelligence algorithms, is becoming increasingly automated.
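The eight-phase loop above can be sketched as a chain of small functions. Every name, data shape and threshold here is an assumption made for illustration only; it is not BOA's actual API or internal design.

```python
def intake(sources):                       # 1. data intake from multiple sources
    return [m for src in sources for m in src]

def process(raw):                          # 2. processing (validate raw records)
    return [m for m in raw if m.get("ms") is not None]

def transform(records):                    # 3. transformation (normalise types)
    return [{"app": m["app"], "ms": float(m["ms"])} for m in records]

def consolidate(metrics):                  # 4. consolidation per application
    out = {}
    for m in metrics:
        out.setdefault(m["app"], []).append(m["ms"])
    return out

def kpis(per_app):                         # 5. KPI definition (mean response time)
    return {app: sum(v) / len(v) for app, v in per_app.items()}

def remediate(kpi_map, threshold_ms=500):  # 6. automated incident resolution
    return [f"restart {app}" for app, avg in kpi_map.items() if avg > threshold_ms]

def support(actions):                      # 7. support the process (record actions)
    for a in actions:
        print("action:", a)
    return actions

def measure(kpi_map):                      # 8. continuous measurement (feed back)
    return kpi_map

# One pass through the loop on hypothetical metrics from two sources.
sources = [[{"app": "billing", "ms": 620}, {"app": "search", "ms": 180}],
           [{"app": "billing", "ms": 700}]]
kpi_map = measure(kpis(consolidate(transform(process(intake(sources))))))
support(remediate(kpi_map))
```

In a real deployment each phase would of course be far richer, but the structure, each stage feeding the next and the measurement phase closing the loop, is what makes the circle "virtuous".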

With BOA, it is possible to know what is happening and at the same time ensure continuous improvement and informed decision-making to guarantee the best performance levels.
