A common setting for comedy sketches or cartoons based on the "computer says no" meme is the bank branch, home to a hapless teller powerless to do anything but pass on the decision made by a computer. The underlying premise is that anonymous machines have displaced humans in making decisions, rendering the human's role redundant and the decision-making process cold and seemingly arbitrary.
While this is not quite the reality in most businesses, the use of automated decision-making in financial services – whether at the branch, over digital channels, or during payment transactions – is well established. Especially on the retail side, pricing and credit/underwriting decisions are rarely made directly by front-office staff; instead, approvals are made by centralized systems based on information collated by staff and other sources. This may seem impersonal, but it is actually fairer, in that it drives far more consistent decision-making. It also allows faster decisions – those supporting payments, for example, happen almost instantaneously – and, from an efficiency perspective, it is far more scalable than a process requiring highly trained and experienced staff.
Significantly, while decision-making may appear relatively "black-box" in nature, decisions are actually made through a combination of rules created by analysts based on an institution's policies and risk preferences, and algorithms derived through predictive analytic models. These models are typically complex and based on vast amounts of historic data, but importantly in this regulated industry, the relationship between input data and decisions will typically be understood and explainable. For example, awareness of credit scores has grown in recent years, and most people realize that their credit-worthiness is related to past actions (such as making credit payments on time). This transparency also means that characteristics deemed unacceptable for such decision-making (e.g., gender or ethnicity) can be excluded from the process (at least explicitly).
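The combination described above – analyst-written policy rules applied first, with a predictive-model score deciding the remainder – can be sketched in a few lines of Python. All rule names, weights, and thresholds here are hypothetical illustrations, not any institution's actual policy; the "model" is a stand-in weighted sum rather than a trained model.

```python
# Illustrative sketch (hypothetical rules, weights, and thresholds): a credit
# decision combining analyst-written policy rules with a predictive score.

def policy_rules(applicant):
    """Hard rules set by analysts from the institution's risk policy."""
    if applicant["age"] < 18:
        return "decline", "below minimum age"
    if applicant["recent_defaults"] > 0:
        return "decline", "recent default on file"
    return None, None  # no rule fired; defer to the model score

def model_score(applicant):
    """Stand-in for a predictive model: a simple weighted sum of inputs.
    Protected characteristics (e.g., gender, ethnicity) are not inputs."""
    score = 500
    score += 10 * applicant["years_at_address"]
    score += 5 * applicant["on_time_payments"]
    score -= 50 * applicant["missed_payments"]
    return score

def decide(applicant, threshold=600):
    """Rules take precedence; otherwise approve if the score clears the bar."""
    decision, reason = policy_rules(applicant)
    if decision is not None:
        return decision, reason  # explainable: the specific rule that fired
    score = model_score(applicant)
    if score >= threshold:
        return "approve", f"score {score} >= {threshold}"
    return "decline", f"score {score} < {threshold}"

applicant = {"age": 34, "recent_defaults": 0, "years_at_address": 6,
             "on_time_payments": 24, "missed_payments": 1}
print(decide(applicant))  # → ('approve', 'score 630 >= 600')
```

Because every outcome traces either to a named rule or to a scored threshold, the decision can be evidenced end to end – the property regulators expect of such systems.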
However, advances in the field of artificial intelligence (AI), particularly in machine learning for predictive analytics, are creating challenges in this respect. In contrast to the traditional analytics approach, where a data scientist creates a predictive model, modern AI is shifting toward self-learning and behavioral/agent-based approaches that are more adaptable to changing and complex situations. While this suits fast-moving environments (such as fraud or cybersecurity), the relationship between input data and output decisions is often not clear. For example, AI could be used for video facial analysis to evaluate the truthfulness of applications. While still theoretical at this stage, an AI agent might become highly accurate at detecting deceitful applications if it could analyze enough of them; however, it would not be able to explain which factors drove its conclusions. From a regulatory perspective, this is likely to be unsatisfactory, and it is in turn driving the next wave of innovation in AI – the shift to explainable AI, where the basis of decisions and the decision flow can be detailed and evidenced.
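One simple way to see what "explainable" means in practice is a model whose score decomposes into per-factor contributions. The sketch below uses a linear model with hypothetical weights (again, not any real scoring formula): each input's contribution to the final score can be listed and ranked, which is precisely the evidence a black-box self-learning model cannot readily provide.

```python
# Illustrative sketch (hypothetical weights): explainability via a linear
# model whose decision decomposes into per-feature contributions.

WEIGHTS = {"on_time_payments": 5.0, "missed_payments": -50.0,
           "years_at_address": 10.0}
INTERCEPT = 500.0

def score_with_explanation(features):
    """Return the total score plus each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    total = INTERCEPT + sum(contributions.values())
    return total, contributions

total, why = score_with_explanation(
    {"on_time_payments": 24, "missed_payments": 1, "years_at_address": 6})

# Rank factors by how much they moved the score, largest effect first.
for name, delta in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {delta:+.1f}")
print(f"total score: {total:.1f}")
```

For genuinely complex models, post-hoc attribution techniques (such as permutation importance or Shapley-value methods) aim to recover a comparable factor-by-factor account; the linear case above is simply the setting where that account is exact.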
Straight Talk is a weekly briefing from the desk of the Chief Research Officer. To receive this newsletter by email, please contact us.