
AI in banking needs to be 'explainable'

28 Jun 2022 07:00

RNS Number : 3422Q
Tintra PLC
28 June 2022
 

 

28 June 2022

TINTRA PLC

 ("Tintra", the "Group" or the "Company")

 

Recent Press Articles

"AI in banking needs to be 'explainable'"

 

Richard Shearer, Group CEO, recently authored an article entitled "AI in banking needs to be 'explainable'", which was published in Finance Derivative, a global finance and business analysis magazine published by FM Publishing, Netherlands. The article can be viewed online at:

 

https://www.financederivative.com/ai-in-banking-needs-to-be-explainable/

 

A full copy of the text can also be found below. 

 

 

 

For further information, contact:

 

Tintra PLC

(Communications Head)

Hannah Haffield

h.haffield@tintra.com

Website www.tintra.com

020 3795 0421

 

Allenby Capital Limited

(Nomad, Financial Adviser & Broker)

John Depasquale / Nick Harriss / Vivek Bhardwaj

 

020 3328 5656

 

 

Tintra - Comment Piece

AI in banking needs to be 'explainable'

In the world of banking, AI is capable of making decisions free from the errors and prejudices of human workers - but we need to be able to understand and trust those decisions.

 

This growing recognition of the importance of 'Explainable AI' (XAI) isn't unique to the world of banking: it's a principle that animates discussion of AI as a whole.

 

IT and communications network firm Cisco has recently articulated a need for "ethical, responsible, and explainable AI" to avoid a future built on un-inclusive and flawed insights.

 

It's easy to envisage this kind of future unfolding, given that - in early February - it was revealed that Google's DeepMind AI is now capable of writing computer programs at a competitive level. If we can't spot flaws and errors at this stage, a snowball effect of automated, sophisticated, but misguided AI could start to dictate all manner of decisions, with worrying consequences.

 

In some industries, these consequences could be life-or-death. Algorithmic interventions in healthcare, for example, or the AI-based decisions made by driverless cars need to be completely trustworthy - which means we need to be able to understand how such AI arrives at its decisions.

 

Though banking-related AI may not capture the imagination as vividly as a driverless car turned rogue by its own artificial intelligence, the consequences of opaque, black-box approaches are no less concerning - especially in the world of AML, in which biased and faulty decision-making could easily go unnoticed, given the prejudices which already govern that practice.

 

As such, when AI is used to make finance and banking-related decisions that can have ramifications for individuals, organisations, or even entire markets, its processes need to be transparent.

 

Explaining 'explainable' AI

 

To understand the significance of XAI, it's important to define our terms.

According to IBM, XAI is "a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms."

These methods are increasingly necessary as AI capabilities continue to advance.
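
By way of illustration (a hypothetical sketch, not drawn from the article; the feature names and data are invented), one simple example of such a method is a small decision tree whose learned rules can be printed as plain, auditable logic for a human reviewer:

# Minimal sketch of comprehensible model output: a shallow decision tree
# whose rules a human reviewer can read directly.
# Feature names and data are hypothetical, for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["amount", "country_risk_score", "account_age_days"]
X = rng.normal(size=(200, 3))
y = (X[:, 1] > 0.5).astype(int)  # synthetic "flagged for review" label

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the learned rules as human-readable if/else logic.
print(export_text(tree, feature_names=feature_names))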

 

Those outside the sphere of this technology might assume that the data scientists and engineers who design and create these algorithms understand how their AI makes its decisions - but this isn't necessarily the case.

 

After all, AI is - as a rule - employed to perform and exhibit complex behaviours and operations; outperforming humans is therefore a sought-after goal on the one hand and an insidious risk on the other - hence the need for interpretable, explainable AI.

 

There are many business cases to be made for the development of XAI, with the Royal Society pointing out that interpretability in AI systems ensures that regulatory standards are being maintained, system vulnerabilities are assessed, and policy requirements are met.

 

However, the more urgent thread running throughout discussions of XAI is the ethical dimension of understanding AI decisions.

 

The Royal Society points out that achieving interpretability safeguards systems against bias; PwC names "ethics" as a key advantage of XAI; and Cisco points to the need for ethical and responsible AI in order to address the "inherent biases" that can - if left unchecked - inform insights that we might be tempted to act upon uncritically.

 

This risk is especially urgent in the world of banking, and in AML in particular.

 

 

Bias - eliminated or enhanced?

 

Western AML processes still involve a great deal of human involvement - and, crucially, human decision making.

 

This leaves the field vulnerable to a range of prejudices and biases against people and organisations based in emerging markets.

 

On the face of it, these biases would appear to be rooted in risk-averse behaviours and calculations - but, in practice, the result is an unsophisticated and sweeping set of punitive hurdles that unfairly inconvenience entire emerging regions.

 

Obviously, this set of circumstances seems to be begging for AI-based interventions in which prejudiced and flawed human workers are replaced with the speed, efficiency, and neutral coolness of calculation that we tend to associate with artificial intelligence.

 

However, while we believe this is absolutely the future of AML processes, it's equally clear that AI isn't intrinsically less biased than a human - and, if we ask an algorithm to engage with formidable amounts of data and forge subtle connections to determine the AML risk of a given actor or transaction, we need to be able to trust and verify its decisions.

 

That, in a nutshell, is why explainable AI is so necessary in AML: we need to ensure that AI resolves, rather than repeats, the issues that currently characterise KYC/AML practices.
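
As a loose illustration of what ensuring this might involve (a hypothetical sketch, not part of the article), one crude first check is to compare flag rates across regions - an unexplained gap between groups is a signal to examine the model's reasoning before trusting its decisions:

# Hedged sketch: a crude check that a risk model isn't simply repeating a
# geographic bias. Groups, rates, and data are synthetic, for illustration.
import numpy as np

rng = np.random.default_rng(2)
region = rng.choice(["emerging", "established"], size=1000)
# Simulate decisions that flag 'emerging' actors three times as often.
flagged = rng.random(1000) < np.where(region == "emerging", 0.30, 0.10)

for group in ("emerging", "established"):
    rate = flagged[region == group].mean()
    print(f"{group}: flagged at rate {rate:.2f}")
# A large gap unexplained by genuine risk factors warrants scrutiny of the
# model's decision process before its outputs are acted upon.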

 

There are different ways this can be achieved. The Royal Society proposes two categories: either the development of "AI methods that are inherently interpretable" or, alternatively, "the use of a second approach that examines how the first 'black box' system works."
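
To make these two categories concrete, the hypothetical Python sketch below (the data and models are invented for illustration) contrasts an inherently interpretable model, whose coefficients can be read directly, with a black-box model examined by a second, post-hoc method - here, permutation importance:

# Sketch of the Royal Society's two routes to interpretability.
# Data and models are synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))        # e.g. [amount, country_risk] (invented)
y = (X[:, 1] > 0.5).astype(int)      # synthetic AML flag

# Route 1: an inherently interpretable model - each coefficient states
# directly how a feature moves the predicted risk.
glass_box = LogisticRegression().fit(X, y)
print("coefficients:", glass_box.coef_)

# Route 2: a 'black box' plus a second approach that examines how it works -
# permutation importance shuffles each feature and measures the damage.
black_box = GradientBoostingClassifier(random_state=1).fit(X, y)
post_hoc = permutation_importance(black_box, X, y, n_repeats=10, random_state=1)
print("post-hoc importances:", post_hoc.importances_mean)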

 

Transparency and trust

 

The specific method used to achieve explainable AI in AML isn't as important as the drive to ensure that we don't place all our eggs in a potentially inscrutable basket: any AI we use to eliminate prejudice needs to have trust, confidence, and transparency placed at the heart of its calculations.

 

If we don't put these qualities first, the 'black box' of incomprehensible algorithms may well continue to put a 'black mark' by the names of innocent organisations whose only crime is to exist in what humans and AI falsely perceive to be the 'wrong place.'

 

ENDS

 

Richard Shearer, CEO of Tintra PLC

https://www.tintra.com

 

 

This information is provided by Reach, the non-regulatory press release distribution service of RNS, part of the London Stock Exchange. Terms and conditions relating to the use and distribution of this information may apply. For further information, please contact rns@lseg.com or visit www.rns.com.

Reach is a non-regulatory news service. By using this service an issuer is confirming that the information contained within this announcement is of a non-regulatory nature. Reach announcements are identified with an orange label and the word “Reach” in the source column of the News Explorer pages of London Stock Exchange's website so that they are distinguished from the RNS UK regulatory service. Other vendors subscribing for Reach press releases may use a different method to distinguish Reach announcements from UK regulatory news.

RNS may use your IP address to confirm compliance with the terms and conditions, to analyse how you engage with the information contained in this communication, and to share such analysis on an anonymised basis with others as part of our commercial services. For further information about how RNS and the London Stock Exchange use the personal data you provide us, please see our Privacy Policy.
 
END
 
 