Bank of England
The Bank of England, London. Ben Stansall/AFP

The Bank of England recently experienced a near-miss when a systems outage delayed wholesale bank payments for several hours. Transactions due to be made overnight didn't go through until the morning. The failure was put down to a routine IT update which led to "intermittent technology communication problems".

Fortunately, the problem was limited to these wholesale payments, and did not affect other vital applications such as the real-time gross settlement (RTGS) service or the clearing house automated payment system (CHAPS), nor the general public.

But in 2014, the Bank wasn't so lucky, as a glitch took out both these applications. This delayed the processing of property payments on a massive scale, and hit thousands of home buyers. For large financial institutions like the Bank of England, the stakes are extremely high. Any kind of IT failure can have a huge impact on the consumers and businesses who rely on them – especially when a problem in one part of the system has a knock-on effect that takes down multiple critical applications.

So how can financial institutions make sure that problems in one system don't spread to other parts of the infrastructure? There are a variety of security products that organisations can purchase to protect against glitches and external threats. But it is one thing to buy a product, and another to integrate it in a way that fully protects the system without interfering with the complex interdependencies of critical applications.

Many organisations buy products straight away, before surveying how they would fit into the existing infrastructure. After plumbing them in, they have to work backwards to trace how the product can work with the legacy infrastructure already in place, and often find that it cannot be integrated without impeding functions further down the line.

The trouble is that when changes are made to one application, it is very difficult to tell whether they will have a knock-on effect on the wider infrastructure. Many financial institutions run on old legacy systems that have been continually added to by various people over the years, to keep pace with developments in technology.

Often the people who first designed these systems have moved on from the company, with newer coders patching over old systems. This means that no one has a complete picture of how the entire architecture functions. These infrastructures rely on a web of vital applications with complex and often opaque interdependencies. Without visibility of the entire architecture, it is impossible to tell for sure whether changing part of one application will have a negative impact somewhere else in the infrastructure.

This is complicated even further by the way data use has changed in recent years. Sensitive data is no longer kept in a single, centralised data centre. Cloud computing and bring-your-own-device schemes blur the perimeter of where data is held at the back end, and online and mobile banking do the same at the customer end.

Meanwhile, third parties can store data and provide applications that work both within and outside the existing system. Data storage is now spread across multiple locations with almost untraceable complexity, and the idea of securing a perimeter with a firewall is outdated.

Most banks and financial services institutions now realise that to fully protect against failure, each application needs a protection policy fitted to it individually, governing how it is allowed to interact and share data with other applications and users.
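To make the idea concrete, here is a minimal sketch of what such a per-application policy could look like, written in Python with invented application names and ports rather than anything drawn from a real bank's estate: each application carries an explicit allow-list, and anything not listed is denied by default.

```python
# A minimal sketch of a per-application interaction policy: each application
# carries an explicit allow-list of the peers and ports it may talk to, and
# anything not listed is denied by default. Names and ports are illustrative.

from dataclasses import dataclass, field


@dataclass
class AppPolicy:
    name: str
    # Mapping of peer application -> set of ports this app may use to reach it.
    allowed_outbound: dict = field(default_factory=dict)

    def permits(self, peer: str, port: int) -> bool:
        """Default-deny: a connection is allowed only if explicitly listed."""
        return port in self.allowed_outbound.get(peer, set())


# Hypothetical policy for one back-office application.
payments_gateway = AppPolicy(
    name="payments-gateway",
    allowed_outbound={"settlement-engine": {8443}, "audit-log": {6514}},
)

if __name__ == "__main__":
    # Permitted: the gateway may reach the settlement engine over port 8443.
    print(payments_gateway.permits("settlement-engine", 8443))  # True
    # Denied: nothing in the policy lets the gateway talk to a test database.
    print(payments_gateway.permits("test-database", 5432))      # False
```

In practice rules like these would live in firewall or micro-segmentation tooling rather than application code, but the default-deny shape is the same.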

To do this, they first need to understand interdependencies throughout the wider system, tracing how each application interacts with surrounding components. And detailed new regulations coming into force this year further intensify this need for visibility, by shifting the mandate from yearly tick-box compliance exercises to continuous assurance that systems will keep functioning securely.
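What that tracing might look like in its simplest form is sketched below, again with invented application names and connection records: observed app-to-app connections are folded into a dependency graph, which can then be walked to see everything that sits downstream of a component before it is changed.

```python
# A sketch of dependency tracing: fold observed app-to-app connections into a
# directed graph, then walk it to find everything that depends, directly or
# indirectly, on a component you plan to change. Records here are invented.

from collections import defaultdict, deque

# Each record is (caller, callee), e.g. taken from network flow logs.
observed_connections = [
    ("online-banking", "payments-gateway"),
    ("payments-gateway", "settlement-engine"),
    ("settlement-engine", "ledger-database"),
    ("reporting-service", "ledger-database"),
]

# Build a reverse index: for each application, who calls it?
dependents = defaultdict(set)
for caller, callee in observed_connections:
    dependents[callee].add(caller)


def impact_of_change(app: str) -> set:
    """Return every application that directly or indirectly depends on `app`."""
    affected, queue = set(), deque([app])
    while queue:
        for caller in dependents[queue.popleft()]:
            if caller not in affected:
                affected.add(caller)
                queue.append(caller)
    return affected


if __name__ == "__main__":
    # A change to the ledger database potentially touches everything upstream.
    print(impact_of_change("ledger-database"))
```

A real estate would feed this from flow logs or service discovery rather than a hand-written list, but even a toy graph makes the knock-on question answerable.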

Under MiFID II, for instance, financial institutions will need to ensure that development and testing environments for new software are completely sealed off from the live production environment, so that rogue or untested code cannot affect the wider system.
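MiFID II does not dictate how that segregation is demonstrated, but one plausible approach is a continuous check over observed traffic. The sketch below, with made-up environment tags and flow records, flags any connection that crosses from a test-tagged host into a production-tagged one.

```python
# A sketch of a continuous segregation check: flag any observed connection
# that crosses from a test-tagged host into a production-tagged one. The
# environment tags and flow records are invented for illustration.

environment = {
    "build-server-01": "test",
    "uat-app-03": "test",
    "payments-gateway": "production",
    "ledger-database": "production",
}

# Observed flows as (source host, destination host, destination port).
observed_flows = [
    ("uat-app-03", "build-server-01", 443),         # test -> test: fine
    ("payments-gateway", "ledger-database", 5432),  # prod -> prod: fine
    ("build-server-01", "ledger-database", 5432),   # test -> prod: violation
]


def segregation_violations(flows):
    """Return every flow that leaks from a test environment into production."""
    return [
        (src, dst, port)
        for src, dst, port in flows
        if environment.get(src) == "test" and environment.get(dst) == "production"
    ]


if __name__ == "__main__":
    for violation in segregation_violations(observed_flows):
        print("test-to-production contact:", violation)
```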

But ensuring this interaction cannot take place requires an understanding of exactly how each component fits into the surrounding infrastructure, and an awareness of every contact point.

To meet the requirements of GDPR, all companies will need the capability to protect against and detect any leak of customer information, and to be able to erase an individual's data from the whole of their systems if requested.
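An erasure request of that kind can only be honoured if every store holding the individual's records is known and reachable. The sketch below uses invented store names and in-memory stand-ins for real databases to show the shape of an orchestrator that walks a registry of stores and reports what was deleted where.

```python
# A sketch of erasure orchestration: walk a registry of every known data
# store, ask each to delete the subject's records, and keep a record of what
# was removed where. Store names and records are invented for illustration.

class InMemoryStore:
    """Stand-in for a real database or third-party system holding customer data."""

    def __init__(self, name, records):
        self.name = name
        self.records = records  # mapping of customer id -> data

    def erase(self, customer_id):
        """Remove the customer's data; return True if anything was deleted."""
        return self.records.pop(customer_id, None) is not None


def erase_everywhere(stores, customer_id):
    """Apply the erasure request to every registered store and report back."""
    return {store.name: store.erase(customer_id) for store in stores}


if __name__ == "__main__":
    registry = [
        InMemoryStore("crm", {"cust-42": {"name": "A. Customer"}}),
        InMemoryStore("marketing", {"cust-42": {"email": "a@example.com"}}),
        InMemoryStore("archive", {}),  # nothing held here for this customer
    ]
    print(erase_everywhere(registry, "cust-42"))
    # e.g. {'crm': True, 'marketing': True, 'archive': False}
```

The hard part in a real organisation is not the deletion call but keeping the registry complete, which is precisely the visibility problem described above.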

These new rules require financial firms to have complete visibility over where data is stored throughout the system, and how information is transferred between different applications. Gaining this kind of insight demands deep infrastructure expertise, to build a real-time picture of every way each application communicates with its surroundings. Only then can policies be designed to protect each one without interfering with its existing interdependencies, so that every critical application keeps running around the clock.