Artificial Intelligence: What is AI allowed to do in the financial sector?
Elizabeth McCaul, a member of the ECB’s Supervisory Board, announced last year that the central bank intends to use the AI-based tool “Heimdall” to screen candidates for managerial positions in supervised banks. Will key decisions about access to top positions be made by machine intelligence in the future? At the very least, AI is already being used to aggregate extensive data and information and to extract what is relevant.
This is just one example of the possible areas of application for AI in the financial sector. In most cases, the advantage of the algorithms is that they can recognize patterns and regularities in large amounts of data that are not accessible to humans: for example in asset management and portfolio analysis, in the simulation of financial key figures, or in credit scoring and fraud detection.
Advanced AI methods and artificial neural networks perform impressively. However, the challenges involved in applying them should not be overlooked.
Challenge number 1: Black Box
Black box models generate a specific output from incoming data. Because of their complexity, however, it is difficult to understand how the system arrived at its conclusion; from a human perspective, the inner workings remain “in the dark”. In credit scoring, for example, there are requirements regarding the transparency of credit decisions. The computer may well be better at predicting whether a retail customer will service a loan or not, but the shortcoming remains that these decisions are not sufficiently understood and explained. This issue has already been addressed in the European Commission’s Ethics Guidelines for Trustworthy AI, by the Federal Financial Supervisory Authority and within the framework of the World Economic Forum.
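A minimal sketch, assuming a Python environment with scikit-learn, of what the black-box problem looks like in practice: the feature names, the synthetic data and the model choice below are invented for illustration. The model returns a score but no reasoning; a post-hoc technique such as permutation importance can at least indicate which inputs drive its decisions.

```python
# Illustrative only: synthetic applicant data and an opaque scoring model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Invented features: income, existing debt, years with current employer
X = rng.normal(size=(1000, 3))
# Synthetic repayment outcome that depends mostly on income and debt
y = (X[:, 0] - 0.5 * X[:, 1] + 0.2 * rng.normal(size=1000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# The model yields a probability for a new applicant, but no explanation.
print("score:", model.predict_proba(X[:1])[0, 1])

# Post-hoc probing: how much does accuracy drop when a feature is shuffled?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(["income", "debt", "tenure"], result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

Such probing narrows the gap, but it explains the model only approximately and after the fact, which is precisely why supervisors remain cautious.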
Challenge number 2: Adversarial Attacks
Another potential weakness of AI models is related to the black box problem: they are susceptible to so-called “adversarial attacks”. What is meant by this is that marginal changes to the input signals, which a human barely notices or does not notice at all, provoke incorrect results. For example, application data for a loan could be “manipulated” through minimal adjustments in such a way that the neural network is pushed towards a specific loan decision. To a human loan officer, these changes would be barely noticeable or would seem irrelevant.
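The following sketch, using an invented linear scoring model and made-up applicant values, illustrates the principle in the spirit of the fast gradient sign method: the gradient of the score with respect to the input tells the attacker in which direction each feature should be nudged, and a small, targeted perturbation pushes the score over the decision threshold.

```python
# Illustrative only: an adversarial nudge against a toy linear credit model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed model weights for [income, debt, tenure] (standardized features)
w = np.array([1.2, -1.5, 0.4])
b = -0.1

x = np.array([0.10, 0.40, 0.05])               # applicant just below the threshold
score = sigmoid(w @ x + b)
print("original score:", round(score, 3))       # about 0.36, i.e. rejected

# Gradient of the score with respect to the input gives the attack direction
grad = score * (1 - score) * w
eps = 0.3                                        # size of the per-feature nudge
x_adv = x + eps * np.sign(grad)

print("perturbed score:", round(sigmoid(w @ x_adv + b), 3))  # above 0.5, approved
print("largest change in any feature:", np.max(np.abs(x_adv - x)))
```

Real attacks on deep networks work analogously, only in far higher-dimensional input spaces, where the individual changes are correspondingly harder to spot.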
Artificial intelligence – history of an idea
In the 1950s, a research proposal coined the term artificial intelligence (AI) for the first time. AI was to “solve the kind of problems that were previously only intended for humans”. The term remains controversial to this day, however: it is unclear what exactly intelligence encompasses, and to what extent it requires a consciousness of its own.
In machine learning, the variant of artificial intelligence that dominates today, the system derives knowledge from large amounts of data, for example by using photos to learn what a cat looks like. Some experts, however, do not yet see intelligent behavior in this kind of pattern recognition.
Deep learning methods brought a breakthrough for many applications, including image recognition. They digitally model the neural networks of the brain, with their many nodes.
It is above all American IT companies such as Google, Microsoft, IBM and Amazon that have commercialized AI applications. These can be found, for example, in speech recognition on smartphones, in self-driving cars or in chatbots that communicate with customers on shopping sites.
Challenge number 3: Biases
One speaks of a bias when the data used to train the algorithm does not represent the population to which it will later be applied. This is the case, for example, with the ChatGPT software that has recently received so much attention. The “P” stands for “pre-trained” and refers, at the time of writing this column, to a period up to and including 2021. ChatGPT therefore does not know, for example, that the election to the Berlin House of Representatives had to be repeated in 2023. In this context, Lorena Jaume-Palasí, a researcher at LMU Munich, points to the danger that the past, with its corresponding values but also its discrimination, becomes the yardstick for the future.

An algorithm used for credit decisions could, for instance, be trained on a data set containing only very few previous borrowers from a certain region, all of whom happen to have a negative credit history. The algorithm could then come to the supposed conclusion that no borrower from that region will be able to service a loan in the future either, and on that basis refuse credit to everyone from the region.
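A minimal sketch of this failure mode, with entirely synthetic data, invented feature names and a simple logistic regression standing in for a real scoring system: one region is represented by only a handful of applicants, all of whom defaulted, and the trained model consequently gives lower scores to anyone from that region, regardless of their individual situation.

```python
# Illustrative only: a biased training set leads to a biased credit model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# 995 applicants from region A with mixed repayment outcomes ...
income_a = rng.normal(1.0, 1.0, 995)
repaid_a = (income_a + rng.normal(0.0, 0.5, 995) > 0.5).astype(int)

# ... and only 5 applicants from region B, all of whom defaulted.
income_b = rng.normal(1.0, 1.0, 5)
repaid_b = np.zeros(5, dtype=int)

income = np.concatenate([income_a, income_b])
region_b_flag = np.concatenate([np.zeros(995), np.ones(5)])
X = np.column_stack([income, region_b_flag])
y = np.concatenate([repaid_a, repaid_b])

model = LogisticRegression().fit(X, y)

# Two applicants with identical income, differing only in region:
print("region A score:", model.predict_proba([[2.0, 0.0]])[0, 1])
print("region B score:", model.predict_proba([[2.0, 1.0]])[0, 1])
# The region B score is lower solely because the few region B records
# in the training data all ended in default.
```

In practice, the region would rarely appear as an explicit input, but proxies such as the postcode can carry the same bias into the model.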
Outlook
The fact is: AI is finding its way into more and more areas of application in the financial sector, and in the past few months alone this development has gained enormous momentum. Market participants have high hopes for the use of artificial-intelligence methods in finance. Supervisory authorities, for their part, view the use of AI-based tools critically, primarily because of their black-box properties. In some cases, providers move faster than regulators can keep up. Certain obstacles still need to be cleared before AI can be used more widely.