The Financial Stability Board (FSB) recently published a report warning of the risk to the economy posed by banks adopting artificial intelligence (AI) technology. The report called on firms to employ more specialist staff to oversee their AI strategies and so reduce the risk of unintended consequences from opaque systems. The issue of Opaque AI is one that AI technology vendors have actively acknowledged, and significant work is under way to help employees apply different types of AI to different situations, depending on the business objectives they set and the degree of transparency they require. By understanding the many faces of AI, financial services firms can explore the huge potential of these powerful technologies with more confidence.

One of the most popular notions of AI (and fuel for sensationalist, doomsday headlines) is based on the sort of technology that has 'human level' cognitive skills, also known as AGI or 'Artificial General Intelligence'. Despite impressive progress in a series of specialities, from driving cars to playing Go, AGI is hardly on the horizon and not about to take over the world, should it ever want to do so. What high-profile public debates between the likes of Elon Musk and Mark Zuckerberg leave out is that AI is already in common business use today, and that the real risks it poses have nothing to do with leaving us all in devastation.

Not all AI is created equal, and financial services organisations are already using the technology in their own ways. AI broadly appears in two forms – Transparent AI and Opaque AI – each with different uses, applications and impacts for businesses and users. In short, Transparent AI is a system whose insights can be understood and audited, allowing one to reverse-engineer each of its outcomes and see how it arrived at any given decision. Opaque AI, by contrast, is a system that cannot easily reveal how it works. Much like the human mind, it struggles to explain exactly how it arrived at a given insight or conclusion.
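To make the distinction concrete, here is a minimal sketch in Python using scikit-learn, with synthetic data and hypothetical feature names: a linear scorecard whose coefficients can be read and audited directly, next to a tree ensemble that predicts well but offers no comparably simple account of any single decision.

```python
# A minimal sketch of the Transparent/Opaque distinction using scikit-learn.
# Feature names and data are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "years_at_address"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Transparent: each coefficient states how a feature moves the decision,
# so any single outcome can be traced back to its inputs.
transparent = LogisticRegression().fit(X, y)
for name, coef in zip(features, transparent.coef_[0]):
    print(f"{name}: {coef:+.2f}")

# Opaque: an ensemble of hundreds of trees predicts well, but there is no
# comparably simple account of how any one decision was reached.
opaque = GradientBoostingClassifier(n_estimators=300).fit(X, y)
print("ensemble accuracy:", opaque.score(X, y))
```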

There is no 'good' or 'bad' AI – only appropriate or inappropriate use of each system, depending on one's needs. Opaque AI can bring a range of benefits in the right circumstances. Transparency is a constraint on AI that limits its power and effectiveness, and in some instances an Opaque system will therefore be the preferable solution.

Banks using AI today are less interested in apocalyptic scenarios and more focused on the very real risks this technology poses if it is used incorrectly in the here and now. These dangers include regulatory violations, diminished business value and significant brand damage. Though not disastrous for humanity, they can still determine the success or failure of a major financial institution.

One potential problem with an Opaque system is bias. Without the user's knowledge, an Opaque AI system may start to favour policies that break an organisation's brand promise. It can be surprisingly easy for an AI system to use 'neutral' data to infer sensitive customer traits, which it can then use to make non-neutral decisions. For example, an Opaque AI in a bank could interpret seemingly neutral customer data and start offering better deals to people based on race, gender, sexual orientation or political affiliation – which would, for obvious reasons, lead to disastrous outcomes.
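A simple audit for this proxy effect might look like the following sketch, assuming a hypothetical decision log held in pandas; the column names are illustrative only. It checks whether a 'neutral' input correlates with a protected trait, and whether outcomes then differ by group even though that trait was never given to the model.

```python
# Hypothetical sketch: checking whether a 'neutral' feature acts as a proxy
# for a protected attribute. Column names are assumptions for illustration.
import pandas as pd

df = pd.DataFrame({
    "postcode_cluster": [0, 0, 1, 1, 0, 1, 1, 0],
    "protected_group":  [0, 0, 1, 1, 0, 1, 0, 1],  # never fed to the model
    "offered_rate":     [3.1, 3.0, 4.2, 4.4, 3.2, 4.1, 4.0, 3.3],
})

# 1. Does the 'neutral' input encode the protected trait?
print(df["postcode_cluster"].corr(df["protected_group"]))

# 2. Do outcomes differ by group even though the trait was never used?
print(df.groupby("protected_group")["offered_rate"].mean())
```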

The choice between Transparent and Opaque becomes even more important in highly regulated industries, including financial services. For example, proper use of Opaque AI in lending can deliver improved accuracy and fewer errors. However, if banks are required to demonstrate how those decisions were reached, reverse-engineering an Opaque system's decision process becomes a challenge, or even a liability.

Financial services firms need to determine how much they are willing to trust their AI. To rely fully on an AI system, either the AI needs to be Transparent, so that business management can understand how it works, or, if the AI is Opaque, it needs to be tested before it is fully implemented. These tests need to be rigorous and thorough, extending beyond the system's viability in delivering business outcomes to look for unintended biases. In some areas of financial services, Opaque AI seems to be ruled out completely because of its lack of explainability.
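One such pre-deployment test might be a simple demographic-parity check on a held-out audit set, sketched below; the tolerance, the sample data and the function name are assumptions, not a complete fairness methodology.

```python
# Hedged sketch of one pre-deployment check: comparing approval rates across
# groups (demographic parity). The 0.2 tolerance is an assumed policy value.
import numpy as np

def approval_rate_gap(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in approval rate between any two groups."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

decisions = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])  # model's yes/no outputs
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])     # audit-set group labels

gap = approval_rate_gap(decisions, groups)
assert gap < 0.2, f"approval-rate gap {gap:.2f} exceeds tolerance"
```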

If a bank is to use AI effectively, its employees must be able to switch confidently between the different flavours of AI. Staff should be able to oversee AI strategies and so reduce the risk of unintended consequences from opaque systems. With GDPR arriving in May 2018, banks will need the ability to explain exactly how they reach certain algorithmic decisions about their customers. That favours organisations that can use software controls to allow Opaque AI where it is acceptable and insist on Transparent AI where it is necessary. It gives them a particular advantage: they can comply easily, yet still retain the edge Opaque AI can give them.
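Such a software control could be as simple as a policy registry that routes each use case to the appropriate class of model. The sketch below illustrates the idea; the use-case names and the two model stubs are hypothetical, and in practice the registry would be driven by compliance policy rather than hard-coded sets.

```python
# Hypothetical sketch of a policy gate: Transparent AI where explanation is
# required, Opaque AI where its accuracy edge is acceptable.

REQUIRES_EXPLANATION = {"credit_decision", "mortgage_pricing"}  # e.g. GDPR-sensitive
TOLERATES_OPACITY = {"marketing_ranking", "fraud_triage"}

def score_transparent(features):   # stub: interpretable scorecard model
    return sum(features) / len(features)

def score_opaque(features):        # stub: black-box ensemble
    return max(features)

def decide(use_case: str, features: list[float]) -> float:
    if use_case in REQUIRES_EXPLANATION:
        return score_transparent(features)   # decision must be auditable
    if use_case in TOLERATES_OPACITY:
        return score_opaque(features)        # opacity is acceptable here
    raise ValueError(f"no policy registered for use case: {use_case}")

print(decide("credit_decision", [0.2, 0.8, 0.5]))
print(decide("fraud_triage", [0.2, 0.8, 0.5]))
```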

Financial services firms are increasingly at a crossroads when it comes to selecting which AI system is right for them. They have to balance improvements in customer experience and better decisions against the liability of non-compliance and privacy violations. It is therefore all the more essential that companies understand the risks and benefits of AI and work with AI technology that can be trusted, controlled and adapted in line with ever-changing regulatory and customer requirements.

Dr Rob Walker is Vice President, Decision Management & Analytics at Pegasystems.