Alpha Challengers – Sharing the Knowledge Episode 2: Artificial Intelligence
Mark.Freed / 25 Jul 2020
Sharing the Knowledge: AI in Finance
Organised by Iva Horčicová & Molly Petken, E2W Alpha Challengers recently had a great Sharing the Knowledge session focused on ‘Artificial Intelligence in Finance’. With three expert speakers – Julia Valentine, Steven Hunter and Ivana Bartoletti – all at the top of their games, it felt like there were several hours of insights and analysis squeezed into just one hour of talking.
There was so much covered we can’t fit it all in an article. Instead, we’ve picked out some of our highlights – please do watch the video if you’d like to learn more. We’re sure it will be a great use of your time.
Please also remember to sign up for future seminars, so you don’t miss out on these opportunities in future.
What is AI?
It’s easier to say what it isn’t. Despite what some may think, it’s not Terminators and Skynet. The general idea is to train machines to think like humans and even teach them to improve themselves. But what you can do with this is as varied as reading text, spotting patterns in data and writing Eurovision-style songs.
Here are a couple of more specific examples:
- Many companies have to rebalance their portfolios regularly to keep them in line with their policy rules. Identifying patterns in this process (such as the time of day it happens) can be valuable. It’s something that the best traders can do intuitively, to an extent at least, but AI can do it better. This is something that is likely to be used more and more as the cost of prediction falls.
- Analysts monitor news sources to get information about companies. AI can do this at scale much more quickly, plus it can then sift through the noise to find information that’s actually relevant to an analyst’s requirements (so, for example, information that could affect the prices of a company’s bonds). This means making sense of what’s being said, such as identifying the difference between a company being awarded a prize and the company being awarded a new contract.
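The second example above – telling a company winning a prize apart from a company winning a contract – can be sketched in code. The phrase lists and headlines below are illustrative assumptions, not anything the speakers shared; a real system would use trained language models rather than keyword matching.

```python
# Toy relevance filter: flag only headlines about events that could move a
# company's bond prices (e.g. new contracts), and discard similar-looking
# noise (e.g. prize announcements). Phrase lists are illustrative only.

RELEVANT_PHRASES = ["awarded a contract", "wins contract", "profit warning"]
NOISE_PHRASES = ["awarded a prize", "wins award"]

def is_relevant(headline: str) -> bool:
    text = headline.lower()
    if any(p in text for p in NOISE_PHRASES):
        return False
    return any(p in text for p in RELEVANT_PHRASES)

headlines = [
    "Acme Corp awarded a prize for workplace culture",
    "Acme Corp awarded a contract to build new rail line",
]
relevant = [h for h in headlines if is_relevant(h)]
```

The point is the shape of the task, not the method: the system must understand what is being said, not just that a company was mentioned.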
Does it have back-office applications as well?
Very much so. Machine learning, in particular, can make a huge difference when it comes to risk management and fraud detection. This is because AI can learn what good customer behaviour looks like, so it can then spot the actions that stand out. And it can do this much more quickly and effectively than people can. It can also be used to support more vulnerable customers by identifying the early signs of people in trouble, which can allow financial companies to give them help before the situation becomes more serious.
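The fraud-detection idea above – learn what normal behaviour looks like, then flag actions that stand out – can be illustrated with a minimal sketch. Real systems use far richer features and proper machine-learning models; the single-feature profile and the 3-sigma threshold here are assumptions for illustration only.

```python
from statistics import mean, stdev

# Learn a "normal behaviour" profile from a customer's past transaction
# amounts, then flag any new amount that deviates by more than
# `threshold` standard deviations from that profile.

def fit_profile(amounts):
    return mean(amounts), stdev(amounts)

def is_suspicious(amount, profile, threshold=3.0):
    mu, sigma = profile
    return abs(amount - mu) > threshold * sigma

history = [12.0, 9.5, 14.2, 11.8, 10.4, 13.1, 12.7, 9.9]  # illustrative data
profile = fit_profile(history)
```

With this profile, a routine purchase passes quietly while a wildly out-of-pattern one is flagged – the same logic, applied to early signs of financial distress, is what lets firms reach vulnerable customers before things get serious.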
What about the challenges with this technology?
It’s true that these technologies have costs as well. First of all, there’s the issue of privacy. People need to know if their data is being used to train a machine, and if a decision about them has been made by AI, there needs to be a way to flag this so it can be escalated to a person for review.
From an ethical perspective, there mustn’t be any bias in the system – and this can come from a wide range of sources. Using the wrong data or labelling it incorrectly can both cause problems, but even when you get this right, there can still be challenges.
A lot of data will be historic, as it’s more readily available, but it may not reflect the current situation and it could leave some groups under-represented, which may lead to a bias against them down the line.
A famous example from last year saw Goldman Sachs accused of gender discrimination when a husband and wife with shared finances both applied for Apple Cards, but the algorithm behind the cards gave the husband a credit limit 20 times higher than his wife’s.
It’s a difficult situation, because financial companies aren’t there to be social equalisers, but they also mustn’t discriminate against people or lock them out.
How do you tackle bias?
The process starts by recognising that there’s no one way to do this – and that even bias itself can vary, with something that works in one place not working elsewhere. One option is testing, so you can see the impact and implications for those who are likely to be affected the most. Another is to use algorithms to check for bias inside other algorithms. There are many more.
Is fintech a threat to the traditional players?
It depends on who you ask. Some of the traditional firms see new entrants as a threat. Others are searching for fintech companies to partner with, so they can develop their offerings. A third group are so set in their ways they don’t see fintech at all.
The thing is that while some fintech companies are actively setting out to disrupt the market, many more want to help companies that are already in place, rather than competing with them.
Are legislators and regulators keeping up with the developments?
Legislators are behind the curve, but that’s hardly surprising when you compare the speed of politics with the speed of technological development. The regulators, though, are making an amazing effort to look at ways that laws intersect and provide guidance on how to work with this. For example, the FCA worked closely with the Information Commissioner to interpret GDPR for AI, so firms can be compliant on privacy while their algorithms are audited.
This is a significant area, as more and more decisions are being made by algorithms using personal data. For example, Uber drivers are currently in court aiming to use GDPR to see how the Uber algorithm works.
Should people be worried about their jobs?
In general, probably not. That said, some highly repetitive or standardised jobs are likely to be at risk. For example, a paralegal who has to review thousands of documents for key terms and phrases could see a machine doing their job more quickly (and cheaply). But there will also be replacement jobs created by this trend – and those roles that rely on creativity or human interaction are likely to remain, as it is hard to build a machine to replace this.