Banks and insurers have been adopting various types of A.I. for up to a decade, but only now is the technology at a point where it can make serious headway. Further progress will require more sophisticated technology, which in turn will require far more trusted data, according to speakers at a conference on artificial intelligence in Hong Kong.
Georgio Mosis, head of innovative technologies at AIA, says the phase of digitizing paper, and then going mobile, is now giving way to fully realizing big data and the A.I. that makes insights possible. In turn this is going to change insurance companies’ priorities.
“Digital technologies create more touchpoints with customers, so customer engagement is becoming more important,” he said.
Todd Rathje, chief revenue officer at New York-based vendor Workfusion, says the global financial services industry last year spent $42 billion on manually intensive data-entry work. That is now the biggest target for applying A.I. tools.
From data entry to cognition
But it’s not the same A.I. that institutions have been using. Business process automation gave way to robotic process automation (RPA), with bots on desktops from operations desks to trading floors.
“The next wave is RPA-plus, or even RPA++,” he said, as cognitive capabilities are placed on top of these workflow tools.
That reflects the changing nature of data. RPA is good at data entry, but as more firms expand their operations to include unstructured data, they’ll need more sophisticated tools, such as natural language processing, to read, understand, and categorize information.
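A minimal sketch of what "read, understand, and categorize" might mean in practice. This is an illustrative keyword-scoring classifier, not any vendor's actual tooling; the categories and keywords are invented for the example, and real systems would use trained NLP models rather than word lists:

```python
# Illustrative only: route unstructured text to a category by keyword score.
# The taxonomy below is made up for the sketch.
from collections import Counter

CATEGORIES = {
    "invoice": {"invoice", "amount", "due", "payment"},
    "complaint": {"unhappy", "refund", "error", "delay"},
    "kyc": {"passport", "identity", "address", "verification"},
}

def categorize(text: str) -> str:
    """Return the category whose keywords appear most often in the text."""
    tokens = Counter(text.lower().split())
    scores = {
        name: sum(tokens[word] for word in keywords)
        for name, keywords in CATEGORIES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "uncategorized"

print(categorize("Invoice attached, payment due in 30 days"))  # → invoice
```

Even this toy version shows why unstructured data demands more than RPA: the bot is no longer copying a field from one screen to another, it has to decide what the document is before any workflow can run.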
In markets such as Hong Kong, the biggest opportunities will be trade finance and working capital, says Jeffrey Ng, head of fintech solutions at Ping An OneConnect, the tech-vending arm of Ping An’s bank.
The firm is trying to sell its platform to enable smaller financial institutions to build entire new businesses from scratch, on cutting-edge, customer-focused platforms.
Banks and insurers onboarding vendor applications should also find it easier to deploy solutions that cover multiple needs, assuming they embrace open source.
“If everything is now one business application, why not standardize it?” said Pavlos Panagiotidis, a data scientist at SAP. “If data models can be standardized, why not also standardize the algorithm?…This will be more important in the market.”
Data is a multiplier
For AIA’s Mosis, the differentiator will be the data itself. “Data and the way you use it is a multiplier…if the problem can be solved by A.I., the question is then do you have the data.”

This is especially the case with the challenge of explainability – being able to identify why machines make the decisions they do. Financial institutions will be under pressure from customers, shareholders and regulators to ensure they can do so. “You can’t have a black box to make decisions,” Mosis said. “The interpretation of the A.I. is more important.”
But as firms make greater use of unstructured data, such interpretations become harder. The A.I. tools will therefore need to become more sophisticated: today’s chatbots, for example, rely on decision trees (a lot of if/then scripts) to communicate, but they’re going to need neural networks and self-learning.
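The "decision tree of if/then scripts" can be sketched in a few lines. The intents and replies below are hypothetical, invented purely to show the pattern; no real insurer's bot is being quoted:

```python
# Illustrative scripted chatbot: nested if/then rules, the kind of
# decision-tree flow described above. All intents and wording are made up.
def scripted_reply(message: str) -> str:
    msg = message.lower()
    if "claim" in msg:
        if "status" in msg:
            return "Please enter your claim number to check its status."
        return "To file a claim, I need your policy number."
    if "premium" in msg or "payment" in msg:
        return "Premiums can be paid monthly or annually. Which would you like?"
    # Any phrasing the script's authors did not anticipate falls through
    # here -- the brittleness that neural, self-learning models aim to fix.
    return "Sorry, I didn't understand. Could you rephrase?"

print(scripted_reply("What is the status of my claim?"))
```

The limitation is visible in the last branch: every unscripted question gets the same canned fallback, which is exactly the gap between this and the Siri-level conversation Mosis describes below.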
“When I see my kids playing with [Apple’s voice assistant] Siri, that’s the level of communication we need to reach,” Mosis said. “We’re not there yet.”