A.I. deployment risks outstripping controls

As financial institutions and fintechs deploy more A.I. tools, how ready are they to explain their machines’ decisions?

The importance of artificial intelligence, particularly machine learning, was underlined earlier this week by a Refinitiv survey of more than 400 banks and asset managers, which shows financial institutions are further along in machine-learning deployment than is widely known.

The survey did not, however, address how A.I. is governed. It noted that a majority of C-suite executives now consider machine learning a core strategy, but Refinitiv didn’t ask whether they, or their data scientists, know how to explain a machine’s recommendation or decision.

Given the still-nascent use of A.I., such questions may not come to the fore yet. But they will.

Deployment, more and more

Refinitiv’s survey suggests M.L. is being applied mainly by buy-side firms (particularly in the U.S.) to manage risks, analyze performance and generate trading ideas.

DigFin, anecdotally, finds banks especially keen on A.I. for operational efficiency, sometimes with regulatory encouragement.

Tarrill Baker, chief data officer at HSBC, says banks are racing to deploy A.I. to meet compliance requirements, such as liquidity reports, or to deter crime.

Many aspects of finance could be driven by these models

Huayi Dong, Daiwa Capital Markets

But various A.I. techniques will be applied to an increasing range of services, perhaps sooner than we think.

“As A.I. becomes more integrated with our lives, many aspects of finance could be driven by these models,” said Huayi Dong, managing director and global head of electronic trading solutions at Daiwa Capital Markets, speaking at a recent conference organized by industry association ASIFMA.

Such decisions will touch both institutional and consumer relationships. User behavior, captured by a firm’s algorithms, will help machines determine everything from asset allocations to whether to short a stock, from how much to charge for insurance to whether to grant a loan.

Opening the black box

Right now surveys such as Refinitiv’s focus on the efficacy of A.I. For example, data scientists are most concerned about the quality of data inputs, or how readily they can combine unstructured data (news articles, research reports, transcripts) with structured data (financial and company information).

But if the data inputs are bad, can the algo still be used? If it is, and a customer doesn’t like the result, or a client’s trade goes wrong, or a chatbot delivers faulty information, who gets blamed? The computer coders, the machine trainers, the data suppliers, the portfolio manager or loan officer or sales trader, the CEO?

A.I. governance can be sliced in many ways, but the essential question is: can you explain why the machine responded a certain way? This will have an increasing bearing on client relationships, brand protection, and legal liability.
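
To make the question concrete, here is a rough sketch of what an explainable decision can look like, using a deliberately simple model: a logistic regression on hypothetical loan features, where each feature’s contribution to the score can be read off directly. The feature names and data are invented for illustration, not drawn from any firm mentioned here.

```python
# Minimal sketch of the "can you explain it?" test. Feature names and data
# are hypothetical; the point is that a linear model's per-feature
# contribution is directly readable, unlike a deep network's.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income_k", "debt_ratio", "years_at_job", "prior_defaults"]

# Toy training set: each row is an applicant, y is approve (1) / decline (0).
X = np.array([
    [60, 0.20, 5, 0],
    [35, 0.55, 1, 1],
    [80, 0.30, 8, 0],
    [28, 0.60, 2, 2],
    [52, 0.25, 4, 0],
    [31, 0.50, 1, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([[40, 0.45, 3, 1]])
decision = "approve" if model.predict(applicant)[0] == 1 else "decline"

# Per-feature contribution to the decision score (coefficient * value):
# a first-pass answer to "why did the machine respond this way?"
contributions = model.coef_[0] * applicant[0]
for name, c in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
    print(f"{name:>15}: {c:+.3f}")
print("decision:", decision)
```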

We can’t stifle innovation

Tarrill Baker, HSBC

For example, Dong says best practice requires data scientists to understand how the computer’s model works. “The results should be replicable, and audited,” he said, noting firms should expect to reproduce the results before a regulator. The same goes when a financial institution uses a third-party fintech: the firm needs to understand how the fintech’s models work.
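
What “replicable, and audited” can mean in practice is suggested by the sketch below: pin the random seed and record a fingerprint of the training data alongside the model’s headline metric, so the same run can be reproduced and checked later. The fields in the record are illustrative assumptions, not any firm’s or regulator’s actual audit format.

```python
# Rough sketch of a reproducible, auditable training run on synthetic data.
import hashlib
import json

import numpy as np
from sklearn.ensemble import RandomForestClassifier

SEED = 42
rng = np.random.default_rng(SEED)

X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=SEED).fit(X, y)

# An illustrative audit record: enough to rerun and verify the same result.
audit_record = {
    "seed": SEED,
    "data_sha256": hashlib.sha256(X.tobytes() + y.tobytes()).hexdigest(),
    "n_rows": int(X.shape[0]),
    "train_accuracy": float(model.score(X, y)),
}
print(json.dumps(audit_record, indent=2))
```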

One way to go about this is to deploy an A.I. only after weighing the added value for the customer against the potential risk, says HSBC’s Baker. “Customer protection is paramount but we can’t stifle innovation,” she said.

Underestimating the consequences

Andreas Burner, Vienna-based chief innovation officer at SmartStream Technologies, a longstanding vendor to financial institutions, worries that data-science teams, particularly at fintech startups, don’t understand the consequences of their use of A.I.

“If the risk in a use case has a regulatory impact, don’t apply deep learning,” he said, a category that includes natural language processing (NLP). “It’s okay if you’re using it for things like monitoring liquidity or identifying other patterns – things that won’t impact the market.”

Most finance applications should instead rely on machine-learning techniques that allow a machine, after being fed a lot of data, to run regression analyses, generalize from data, and detect patterns. Some of these techniques rely on mathematical formulas to vote on outcomes; others rely on massive amounts of data and vast computing power to fuel a data scientist’s algorithm.
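
The “formulas voting on outcomes” idea roughly describes ensemble methods. A minimal sketch, on synthetic data and with illustrative model choices, might look like this:

```python
# Minimal sketch of models "voting" on an outcome: an ensemble of simple,
# inspectable classifiers whose majority vote gives the prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] - X[:, 2] > 0).astype(int)  # a simple pattern to detect

ensemble = VotingClassifier(
    estimators=[
        ("logit", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
        ("forest", RandomForestClassifier(n_estimators=25, random_state=0)),
    ],
    voting="hard",  # each model casts one vote on the outcome
)
ensemble.fit(X, y)
print("training accuracy:", ensemble.score(X, y))
```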

Deep learning is a no-go for KYC

Andreas Burner, SmartStream

These methods are more reliable, provided the data inputs are valid; but Burner reckons too many financial applications are being tested using more exotic versions of A.I. that could end in tears.

Startups using, say, facial-recognition software based on machine learning are making themselves vulnerable to fraudsters, Burner says: “Deep learning is a no-go for KYC.”

More human than it looks

Nor should machine learning be taken for granted as an arbiter of truth.

“Statistics are probabilistic; A.I. is a black-box model based on prior experience or training data,” said Jason Tu, CEO of MioTech, an A.I. company that sells products to hedge funds and investment firms. Citing the example of AlphaGo’s surprise moves in its famous matchup against Go master Lee Sedol, he said, “In financial services, unstructured data is more art than science.”

Take sentiment analysis around a piece of news, a tool that more fund managers are eager to use. The same news might be considered positive by one person and negative by another. Those choices, embedded in the training data, will over time deliver different results. “It’s not just the algos,” Tu said.
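
A toy illustration of that point: the same three headlines, labeled by two hypothetical analysts who disagree on whether cost-cutting layoffs are good or bad news, yield models that disagree on a new headline. The texts and labels are invented for the example.

```python
# Same training texts, different human labels, different model behavior.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

headlines = [
    "company beats earnings forecast",
    "regulator opens probe into company",
    "company announces layoffs to cut costs",
]
labels_analyst_a = ["positive", "negative", "negative"]  # reads layoffs as bad news
labels_analyst_b = ["positive", "negative", "positive"]  # reads cost cuts as good news

model_a = make_pipeline(CountVectorizer(), MultinomialNB()).fit(headlines, labels_analyst_a)
model_b = make_pipeline(CountVectorizer(), MultinomialNB()).fit(headlines, labels_analyst_b)

new_headline = ["company plans further layoffs to cut costs"]
print("trained on analyst A's labels:", model_a.predict(new_headline)[0])
print("trained on analyst B's labels:", model_b.predict(new_headline)[0])
```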

Unstructured data is more art than science

Jason Tu, MioTech

For now, banks and fintechs say they are only deploying A.I. in cases that won’t hurt customers. Tu, for example, says his team backtests its models against human experience, so they can discard any unusual results.

In other words, there shouldn’t be room in financial services for an AlphaGo-like surprise, particularly where things going wrong could have a material impact, either on customers or on a firm’s regulatory obligations. Financial institutions should retain human supervision for such cases.
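
A hedged sketch of what “retain human supervision” could look like in code: route a model’s output to a human reviewer whenever the decision is material or the model is unsure. The thresholds and the function itself are invented for illustration.

```python
# Illustrative escalation gate: automate only small, high-confidence decisions.
def decide(model_score: float, exposure_usd: float,
           confidence_floor: float = 0.9, material_limit_usd: float = 100_000) -> str:
    confident = max(model_score, 1 - model_score) >= confidence_floor
    material = exposure_usd >= material_limit_usd
    if confident and not material:
        return "approve" if model_score >= 0.5 else "decline"
    return "escalate to human reviewer"

print(decide(0.97, 5_000))     # small and confident -> automated
print(decide(0.97, 250_000))   # material exposure -> human reviewer
print(decide(0.55, 5_000))     # model unsure -> human reviewer
```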

“If we use A.I. for a marketing campaign and we lose money, well, okay,” said Shameek Kundu, Singapore-based group chief data officer at Standard Chartered. “But if we’re an insurance company and we make biased or wrong decisions, that would impact the customer.”

Who, or what, is accountable?

But fears over machines run amok may miss the broader point: governance is not just about computers.

In Singapore, for example, banks have been shown to have a lending bias against Malays. The same thing happens to minorities in other countries. That’s true of human loan officers today. Is a machine going to replicate that bias? Probably, if it’s trained with the same inputs.
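
A basic check for that risk might look like the sketch below: train a model on historically biased approvals, then compare approval rates by group. The data and group labels are synthetic; the point is that a model trained on biased decisions tends to reproduce them unless someone looks.

```python
# Synthetic example: a model trained on biased approvals replicates the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, size=n)       # 0 = majority, 1 = minority (synthetic)
income = rng.normal(50, 15, size=n)

# Historical approvals: driven by income, with an unfair penalty for group 1.
approved = ((income - 12 * group + rng.normal(0, 5, size=n)) > 45).astype(int)

X = np.column_stack([income, group])
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical approval {approved[group == g].mean():.2f}, "
          f"model approval {pred[group == g].mean():.2f}")
```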

At some point in the future, when computers achieve general artificial intelligence, they may need to be held separately accountable. But the industry is not there yet.

We have to have the same accountability

Shameek Kundu, Standard Chartered

In the meantime, financial institutions need to extend their standards to A.I. “We have to make human efforts,” Kundu said. “We have to have the same accountability, whether we train people or machines.”

That sounds like a tall order, given the recent history of financial institutions. The need for better oversight may not be specific to A.I. (just as the need to supervise computers isn’t unique to finance), so perhaps banks and asset managers don’t need new or special governance for these tools.

But the speed, frequency and scale with which new sources of data are being applied to decisions are going to test today’s protocols for accountability. At some point, someone will want to take the gloves off and see what their A.I. can really do.

