by Stuart McMeechan.
From the smart home to self-driving cars, artificial intelligence is predicted to play an increasingly significant role in the future. But you don’t have to wait until then, argues Stuart McMeechan, EY Director, Risk Advisory, because it is already capable of transforming risk monitoring.
Those responsible for monitoring risks across large organisations are likely to find themselves under greater pressure than ever before. New and changing regulation is adding to their workload, while heavier fines and the power of social media are pushing the financial and reputational cost of breaches sky high. At the same time, increasing digitalisation means that both the amount of data available and the need for it to be analysed is expanding rapidly.
The good news is that support is available in the form of machine learning (ML), a type of AI that can continuously learn from the data it is fed.
Practical benefits, now
In an area that can sometimes seem big on ideas but short on specifics, let’s begin with some concrete examples.
Traditional methods of identifying unusual activity or fraudulent transactions involve hard-coding a set of rules. For example, the second line of defence might monitor procurement controls by looking for purchases payable to a vendor with the same address as an employee. Or an internal audit function may try to identify inflated sales by flagging orders processed just before month end but with a credit memo posted against them just after month end.
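The two rules above can be sketched as simple hard-coded checks. This is a minimal illustration, not a real monitoring system; the field names (vendor_address, order_date, credit_memo_date) and the three-day windows are assumptions made for the example.

```python
# Illustrative rule-based monitoring: hand-written checks over transaction
# records. Field names and thresholds are assumptions for the sketch.
from datetime import date

def shares_employee_address(purchase, employee_addresses):
    """Flag purchases payable to a vendor at a known employee address."""
    return purchase["vendor_address"] in employee_addresses

def suspicious_month_end_order(order, month_end):
    """Flag orders booked just before month end and credited just after."""
    booked_late = 0 <= (month_end - order["order_date"]).days <= 3
    credited_early = (
        order.get("credit_memo_date") is not None
        and 0 < (order["credit_memo_date"] - month_end).days <= 3
    )
    return booked_late and credited_early

purchases = [
    {"vendor_address": "12 High St", "amount": 950.0},
    {"vendor_address": "4 Oak Lane", "amount": 120.0},
]
employee_addresses = {"12 High St"}

flagged = [p for p in purchases if shares_employee_address(p, employee_addresses)]
print(len(flagged))  # 1
```

The limitation is visible immediately: each rule encodes one known pattern, so anything the rule-writer did not anticipate passes through unflagged.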
With supervised learning, a type of machine learning, a computer is supplied with ‘training data’. Training data is a set of transactional data with the known anomalies tagged, for example the sham expenses or dummy sales. An algorithm then ‘learns’ what makes these anomalies unique and uses this logic to analyse future transactions or activity, flagging those that follow a similar pattern.
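As a hedged sketch of that supervised approach, the example below trains a scikit-learn classifier on transactions where the anomalies have been tagged, then scores unseen transactions. The features and the synthetic data are invented purely for illustration; real training data would be historical transactions labelled by investigators.

```python
# Sketch of supervised anomaly detection: a classifier 'learns' what makes
# tagged anomalies unique and scores future transactions for similarity.
# The synthetic features [amount, days_to_credit_memo, round_amount_flag]
# are assumptions for the example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic transactions: normal activity vs. tagged anomalies
normal = rng.normal([200, 30, 0], [50, 10, 0.1], size=(500, 3))
anomalous = rng.normal([950, 2, 1], [100, 1, 0.1], size=(25, 3))

X = np.vstack([normal, anomalous])
y = np.array([0] * 500 + [1] * 25)  # 1 = known anomaly in the training data

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score new activity: probability a transaction follows the anomaly pattern
scores = clf.predict_proba(X_test)[:, 1]
print(f"flagged {int((scores > 0.5).sum())} of {len(X_test)} transactions")
```

Note that the algorithm is never told the rules; it infers the distinguishing patterns from the tagged examples, which is what allows it to generalise beyond checks a human thought to write down.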
There are a number of advantages to the ML approach. Firstly, risk analysis and decisions no longer depend on rules built by a small number of humans. Instead, the algorithm uses advanced statistics to spot the linking data points and build the rules itself, making flagged exceptions more likely to be genuine. Another key advantage is that, as new training data flows through the algorithm, it adapts over time, meaning it can pick up on emerging risks and changing human behaviour.
The advantages above can enable risk personnel to focus their limited resources on investigating the activity most likely to represent genuine risk, making them better able to protect their organisation from financial or reputational loss.
So, once set up, not only is the human resource input for ML lower than with traditional methods, but the output is far higher in terms of risk monitoring.
The feedback loop
Machine learning’s key feature is its ability to continuously learn and refine its decision making. This requires the implementation of a feedback loop. For example, if the algorithm flags activity that is actually low risk, this should be fed back into the ML algorithm, which over time will make it more accurate.
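The loop can be sketched in miniature. The toy model below flags transactions above a learned amount threshold; when an analyst marks a flagged item as a false positive, that verdict is appended to the training set and the next refit adjusts the threshold. The FlaggingModel class and its threshold logic are illustrative assumptions, not a real detection model.

```python
# Minimal sketch of a feedback loop: analyst verdicts on flagged items are
# fed back into the training data and the model is periodically refit.
class FlaggingModel:
    """Toy model: flags transactions above a learned amount threshold."""

    def __init__(self):
        self.threshold = 500.0

    def fit(self, labelled):
        # Place the threshold midway between the largest amount confirmed
        # normal and the smallest amount confirmed risky
        normal = [amt for amt, is_risk in labelled if not is_risk]
        risky = [amt for amt, is_risk in labelled if is_risk]
        if normal and risky:
            self.threshold = (max(normal) + min(risky)) / 2

    def flags(self, amount):
        return amount > self.threshold

training = [(120.0, False), (200.0, False), (900.0, True)]
model = FlaggingModel()
model.fit(training)

# An analyst reviews a flagged 620.0 item and marks it a false positive...
training.append((620.0, False))

# ...and the next scheduled refit raises the threshold accordingly
model.fit(training)
print(model.flags(620.0))  # False after feedback
```

In a production setting the same shape applies, only with a richer model and verdicts captured from case-management tooling rather than appended by hand.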
The need for a feedback loop will require the risk function to operate differently, perhaps having dedicated resources to continuously train ML algorithms. In return, they get a new method of risk detection that is able to spot new or more complex activity by unearthing emerging patterns that humans may not have detected.
AI has huge potential in this area, but it is still an unfamiliar technology for risk managers. The key is using the right algorithms and training them correctly to look for and find the sort of anomalies risk managers want to identify. As training data is going to be crucial to powering ML, it is worth storing transactions of interest in a database now, so that they are ready and waiting to fuel monitoring algorithms. In some cases, a company may not have sufficient examples of suspect transactions to train an algorithm with. In that case, it may look to training-data brokers for cross-organisation training data.
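Storing transactions of interest need not be elaborate. The sketch below keeps tagged examples in SQLite; the schema and field names are assumptions for illustration, and an in-memory database stands in for the persistent store a real risk function would use.

```python
# Illustrative sketch: banking tagged transactions of interest now, so
# labelled training data is ready when ML monitoring is adopted.
# Schema and fields are assumptions, not a prescribed design.
import sqlite3

conn = sqlite3.connect(":memory:")  # a file path in practice
conn.execute(
    """CREATE TABLE tagged_transactions (
        txn_id TEXT PRIMARY KEY,
        amount REAL,
        description TEXT,
        is_anomaly INTEGER  -- 1 = confirmed anomaly, 0 = reviewed and cleared
    )"""
)
rows = [
    ("T-1001", 950.0, "duplicate vendor invoice", 1),
    ("T-1002", 120.0, "routine stationery order", 0),
]
conn.executemany("INSERT INTO tagged_transactions VALUES (?, ?, ?, ?)", rows)
conn.commit()

anomalies = conn.execute(
    "SELECT txn_id FROM tagged_transactions WHERE is_anomaly = 1"
).fetchall()
print(anomalies)  # [('T-1001',)]
```

Capturing the cleared items alongside the confirmed anomalies matters: supervised learning needs examples of both classes to learn what distinguishes them.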
Exciting though this is, it’s probably just the tip of the iceberg. Today’s ML applications tend to be limited to specific areas such as fraud detection, KYC compliance and credit risk. But if we look right across the risk spectrum, it’s not hard to see that data plays a crucial part in monitoring risk at every stage. Therefore, ML is likely to disrupt the analysis methods throughout the classic three lines of defence model.
For example, ML could be used to monitor cases where key ERP controls are being bypassed. Or a Key Risk Indicator (KRI) could be accompanied by a pack of high-risk anomalies found by ML.
As we began by talking about the future, it seems fitting to finish on how AI can play a part in predicting it. ML’s forecasting ability is based on its capacity to learn all relevant factors and plot how they would influence any future event it is asked to calculate. This could not only help to spot a potential fraud or control failure before it materialises, but be a valuable tool in assessing less black-and-white areas of risk management. For example, will the consultants of the future turn to machines to measure the risks around their clients’ strategic options?
What does seem certain is that AI in general and ML in the immediate term will be an increasingly important part of our personal and professional lives. The first step we recommend to clients is running a pilot, as the sooner businesses start to understand and use ML, the more likely they are to realise the benefits.
For more information visit our website
Contact: Stuart McMeechan