Originally published in Société Générale’s Tech Magazine Special Edition 2017
Artificial Intelligence in Financial Services: it has been called a boon, but there are times when it can go bust. And those busts can be incredibly costly. Just look at the loss the capital markets experienced on 23 April 2013, when a fake news tweet about explosions at the White House sent high-frequency trading algorithms – tuned to react to emotionally charged words – into overdrive over a non-existent terrorist attack: $136bn wiped out in three minutes.[1]
Three minutes later, humans identified the tweet as fake news, but the damage to the financial markets had already been done.
The cost to the markets may have been the result of a very simple rules-based algorithm – a loss that might have been avoided had these new high-frequency trading patterns been fused with more sophisticated machine learning mechanisms.
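To make the contrast concrete, here is a minimal, purely illustrative sketch of the kind of naive rules-based trigger described above: a function that fires on alarm keywords in a headline, with no assessment of the source or its credibility. The keyword list and function name are hypothetical and not drawn from any actual trading system.

```python
# Illustrative sketch of a naive keyword-triggered rule (hypothetical, not any real trading system).
ALARM_WORDS = {"explosion", "explosions", "attack", "injured"}

def should_sell(headline: str) -> bool:
    """Fire a sell signal if any alarm word appears - no check of source or veracity."""
    words = {w.strip(".,:!\"'").lower() for w in headline.split()}
    return bool(words & ALARM_WORDS)

tweet = "Breaking: Two Explosions in the White House and Barack Obama is injured."
print(should_sell(tweet))  # True - the rule reacts to the words alone
```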
But this is where one of the fundamental questions about AI, finance, and machine learning comes into play. It is not just about incorporating smarter machine learning, nor about applying the most sophisticated maths to assess risk profiles, spot market trends, or identify transaction anomalies that predict fraud. It is really about how to build strong AI without bias.
So, can we actually remove bias from AI algorithms if it is human beings who create those algorithms?
The Risk of Stereotyping
Because algorithms process huge chunks of data behind the scenes, it is tricky to know exactly why one client receives promotions for a particular type of ISA or interest rate, when another with similar creditworthiness and transaction behaviour receives something else. Or why one frequent traveller’s legitimate transactions across geographies trigger a fraud alert, while the fraudulent use of a stolen card number goes unnoticed because the spending pattern stays closer to home. Those are simple examples, but the issue has further-reaching implications for transparency around conduct risk at banks – especially because regulatory compliance using AI must demonstrate both the intention to comply and actual compliance.
This compliance starts with the data input. A recent UK Government Office for Science report notes that data used by automated systems should be carefully reviewed before being run through AI. Why? Because “algorithmic bias may contribute to the risk of stereotyping”.
Three Princeton University academics have shown that AI applications replicate the stereotypes found in human-generated data, and that this prejudice applies to both gender and race[2]. Using the GloVe algorithm, they reported that machine learning “absorbs stereotyped biases” when learning language (a rule-based exercise), after running a word-pairing experiment. For example, European names were consistently associated with pleasant terms, while African-sounding names were more often paired with unpleasant ones. The same went for female names, which were associated more with family than career, compared with male names. Each machine learning test replicated the results of the equivalent implicit-association tests completed by humans.
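A heavily simplified sketch of the idea behind that experiment follows: measure whether a name’s word vector sits closer to “pleasant” or “unpleasant” attribute vectors. The three-dimensional vectors and the names here are invented for illustration; the actual study used real GloVe embeddings, with hundreds of dimensions, trained on web-scale corpora.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: how closely two word vectors point in the same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy, made-up embeddings for illustration only (real GloVe vectors have hundreds of dimensions).
vectors = {
    "pleasant":   np.array([0.9, 0.1, 0.2]),
    "unpleasant": np.array([0.1, 0.9, 0.3]),
    "name_a":     np.array([0.8, 0.2, 0.1]),
    "name_b":     np.array([0.2, 0.8, 0.2]),
}

def association(word):
    # Positive score: closer to "pleasant"; negative: closer to "unpleasant".
    return cosine(vectors[word], vectors["pleasant"]) - cosine(vectors[word], vectors["unpleasant"])

for name in ("name_a", "name_b"):
    print(name, round(association(name), 3))
# If a corpus consistently pairs one group of names with pleasant contexts,
# the learned vectors reproduce that gap - the "absorbed" stereotype.
```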
Algorithms Can Be Codified With Bias
This underscores the importance of scrutinising data before running it through AI systems, because machine learning systems can only interpret the data they are trained on. It also reiterates that rules-based algorithms can be codified with bias, and that as culture changes (and the diversity of both employees and customers increases), we would be remiss to “train” AI systems on current (or older) culture, fixing – freezing – their outputs according to outdated social, political, and legal norms.
This is especially true as we look at the implications of AI in roles such as fraud detection, underwriting, insurance, and lending. As we start to incorporate new sources of risk data – social media, wearables, and an extended digital footprint – into new risk profiles, we must be aware of potential algorithmic bias.
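One practical safeguard, sketched below under assumed data, is a basic pre-deployment check that compares positive-outcome rates (loan approvals, fraud flags) across groups. The group labels, sample data, and any threshold applied to the resulting ratio are illustrative assumptions, not a regulatory standard.

```python
from collections import defaultdict

def outcome_rates(decisions):
    """decisions: iterable of (group_label, positive_outcome) pairs, e.g. loan approvals."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in decisions:
        totals[group] += 1
        positives[group] += positive
    return {g: positives[g] / totals[g] for g in totals}

# Illustrative sample data (hypothetical groups and outcomes).
sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", False), ("group_b", False)]

rates = outcome_rates(sample)
ratio = min(rates.values()) / max(rates.values())
print(rates, "ratio:", round(ratio, 2))  # a low ratio flags the model for human review
```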
No doubt AI will transform the speed and efficiency of financial services, but as long as humans are creating the algorithms powering artificial intelligence and machine learning, the possibility of biased outcomes exists. Either we become more stringent in how we code and what data we train the machines on, or we aim to create an algorithm that spots our biases for us. That might just be the most significant insurance policy we can create.
[1] Official Associated Press Twitter account at 13:07 on 23 April 2013: “Breaking: Two Explosions in the White House and Barack Obama is injured.”
[2] “Semantics derived automatically from language corpora contain human-like biases”, by Aylin Caliskan, Joanna Bryson, and Arvind Narayanan, published in Science, 2017.