AI: treat with caution when investing
The roaring global success of ChatGPT has revealed a fundamental trend to the general public: the new possibilities that artificial intelligence (AI) offers to improve efficiency across a large number of economic sectors. So much so that some people can already picture themselves using AI to manage their investment portfolio on their own. That is a bad idea, however, especially if investing isn’t your specialty. Here’s everything you need to know.
Key takeaways
Although the use of AI has been expanding in the world of finance for several years, the ChatGPT phenomenon has undoubtedly accelerated the democratisation of this technology. However, as with all new technologies, AI must be treated with care, especially if you’re not an expert. It’s important to remain cool-headed and apply critical thinking to avoid serious disappointment. Don’t even think about getting started without calling on the expertise of a seasoned professional.
AI in the world of finance
AI is already well established in the world of finance. Trading, asset allocation, risk management and even credit assessment are just some of the areas where it generates real efficiency gains. According to the IMF, spending on AI in the financial sector is forecast to double by 2027, reaching USD 97 billion.
As with all new technologies, AI carries risks as well as opportunities. Although professionals take these very seriously, it is not always easy for individual investors to fully grasp the issues, or to contain their excitement when new “toys” like ChatGPT or other AI solutions are released.
One of the main risks is trusting the technology, and the data it generates, unquestioningly. This is all the more true now that off-the-shelf AI tools are proliferating, giving everyone the impression that they can do it all, including managing an investment portfolio single-handedly. To understand why such blind trust is risky, a hypothetical example is more telling than a lengthy explanation.
One of the main risks is trusting the technology, and the data it generates, unquestioningly.
From euphoria to stock market crash
Let’s take a look at the following scenario. You have decided to use AI software to manage your investments. You choose ready-to-use software that promises you tailored investment advice based on an algorithm that adapts to your needs and is fed by a huge quantity of high-quality data.
One morning, while you are scanning for new opportunities, the software draws your attention to a piece of information that could well signal a trend worth following if you want to reap big returns on your equity investments. In short, your software informs you that a company has used AI of its own to analyse thousands of interviews and conversations with business leaders and compare them with their companies’ results. Setting share price performance against the natural language processing analysis revealed an intriguing correlation: companies whose executives say “please,” “thank you,” and “you’re welcome” more often than others show better share price performance. On this basis, the AI suggests you invest heavily in those companies. Although the companies have very diverse profiles, the AI expresses no particular reservations: it vouches for both the safety and the effectiveness of its recommendation.
Some time later, your software also suggests that you invest massively in companies that are particularly engaged with corporate social responsibility (CSR) issues. After all, there could be a correlation between these “responsible” criteria and performance, since the good manners highlighted previously suggest that the firms reviewed in the study care about social matters. As the weeks go by, you watch the shares of these companies appreciate, which strengthens your resolve to follow the AI’s advice. Then, one morning, things take a very unpleasant turn.
The first of the polite-boss companies identified by your AI publish their results, and they are terrible. Their share prices nosedive. Naturally, your first instinct is to sell and exit these positions as quickly as possible, but you cannot find a buyer. You finally realise that, far from having received tailored advice, your software made the same recommendation to a very large number of investors. Like you, they used AI and bought these companies’ shares en masse, which is what inflated their prices in the first place. Now that the results are out, no one wants to buy these stocks, everyone wants to sell them, and the investment loss is immense.
On top of this already unpleasant situation comes another concern: the panic on the markets generates a widespread crisis of confidence in the CSR segment, even among companies that had nothing to do with the initial recommendation. Investors now associate CSR with window-dressing and rush to distance themselves from it. A CSR stock market crash looms, and the outcome looks complex because the risk is spread across a significant number of players. What began as a bankruptcy risk for a few companies is turning into a systemic crisis that could shake the entire financial sector.
How did it come to this? How could the AI get it so wrong? Quite simply because the pattern linking the politeness of certain business leaders to their share price performance was only a spurious correlation, in no way a causal effect capable of predicting the future. Sadly, the lesson comes too late and is learned the hard way.
A simple spurious correlation should in no case guide investment decisions.
Understanding this hypothetical nightmare scenario
Does this example seem extreme to you? It is nevertheless both realistic and plausible. First, because the correlation between politeness and business performance was actually found by an American analysis company called Prattle using its natural language processing software. Second, because a causal link could seem plausible, and this makes us less vigilant. Yet when Prattle adjusted its results for market and sector movements over the two days following each earnings call, the politeness effect had all but disappeared. It was a simple spurious correlation that could in no way guide investment decisions.
To function, AI processes colossal quantities of data and, as statisticians like to say, the more you look, the more you find. We now know that the larger a data set, the more arbitrary correlations it is bound to contain: they appear purely because of the sheer volume of data processed. And although many of these correlations are spurious, they are still used to try to predict market movements. Patterns found in this way, through a mechanism related to what statisticians call “overfitting,” are then used to build strategies that may work in the short term by chance, but that in no way guarantee recurring success over the longer term.
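To see the mechanism at work, here is a minimal, purely illustrative sketch in Python (the “signals” and figures are invented for the demonstration, not drawn from any real trading system): among thousands of random candidate signals, at least one will almost always correlate strongly with a sample of market returns, yet show no predictive power on fresh data.

```python
import numpy as np

# Purely illustrative: with enough random candidate signals, some will
# correlate strongly with returns by pure chance (the "more you look,
# the more you find" effect).
rng = np.random.default_rng(42)

n_days = 250         # one year of daily "returns"
n_signals = 10_000   # candidate predictors, all pure noise

returns = rng.standard_normal(n_days)
signals = rng.standard_normal((n_signals, n_days))

# In-sample: pick the signal that best "predicts" returns on this sample.
corrs = np.array([np.corrcoef(s, returns)[0, 1] for s in signals])
best = np.argmax(np.abs(corrs))
print(f"Best in-sample correlation: {corrs[best]:+.3f}")  # typically above 0.2

# Out-of-sample: the same signal tested against fresh returns has no edge.
new_returns = rng.standard_normal(n_days)
oos = np.corrcoef(signals[best], new_returns)[0, 1]
print(f"Same signal out of sample:  {oos:+.3f}")          # close to zero
```

The “winning” signal looks impressive in the sample where it was found and collapses to noise on new data, which is precisely the trap described above.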
With the democratisation of AI tools in finance, the subject is topical: more and more players, including novice investors, will be confronted with AI-generated investment proposals without necessarily having the ability to assess their relevance. Spurious correlations did not appear with the emergence of AI; they have always existed on the markets. This is exactly why financial professionals always check the relevance of the patterns found before investing. For example, did you know that butter production in Bangladesh is closely correlated with US stock market returns? The correlation is real, but only a human mind is currently capable of spotting the absurdity of keeping it in a predictive financial model.
Financial professionals always check the relevance of the patterns found before investing; novices rarely do.
Risks that could descend into systemic crisis
AI must be handled with care and, above all, the data it presents must not be taken at face value. Beyond the circulation of spurious correlations, other risks can emerge with AI.
- Garbage in, garbage out. As the example of spurious correlations shows, poor data quality leads to false results. Statisticians commonly use the expression “garbage in, garbage out”: if bad data goes in, you should not expect good results to come out. Many of today’s AI models are trained on data sets created by humans, who may have introduced errors or biases. This can perpetuate discrimination or propagate errors by conveying spurious correlations, akin to the butter-production-in-Bangladesh example.
- Black boxes that are difficult to crack. Some highly sophisticated AI systems, such as neural networks, follow an internal logic that the human brain cannot comprehend. This lack of explainability poses a real problem when it comes to adjusting investment strategies: it is impossible to know what to change in the event of poor performance, or what to replicate in another model if the performance is there. It is equally impossible to determine whether a strategy’s success is actually due to the superiority of the AI model and its ability to extract the relationships underlying the processed data, or whether other factors are at play.
- From herd behaviour to a chain of negative reactions. Simultaneous execution of large sales or purchases by traders using similar AI-based models could create new sources of vulnerability by generating market volatility and instability linked to episodes of illiquidity. Large-scale deployment of the same AI models could amplify the interconnectedness of financial markets and institutions in unexpected ways, potentially increasing correlations and dependencies between variables that were previously unrelated. Furthermore, the use of off-the-shelf algorithms by much of the market could lead to herd behaviour, convergence and one-way markets, further amplifying volatility risks. Without market participants “willing” to act as shock absorbers by taking the other side of transactions, this herd behaviour could lead to episodes of illiquidity, or even create new systemic risks. The short simulation after this list illustrates the mechanism.
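As a rough illustration of that last point, here is a toy simulation in Python (all parameters are hypothetical, chosen only for the demonstration): when every trader reads the same news through the same model, their orders all land on the same side of the market and there is no one left to absorb them.

```python
import numpy as np

rng = np.random.default_rng(0)

n_traders = 1_000
bad_news = -1.0  # a shared negative signal, e.g. disappointing earnings

def net_order_flow(model_diversity: float) -> int:
    """Net buy/sell imbalance when every trader reacts to the same news.

    `model_diversity` is the spread of each trader's private reading of
    the signal: diverse models disagree, a single shared model does not.
    """
    readings = bad_news + rng.normal(0.0, model_diversity, n_traders)
    orders = np.where(readings < 0, -1, +1)  # -1 = sell one unit, +1 = buy
    return int(orders.sum())

for diversity, label in [(2.0, "diverse models"), (0.0, "one shared model")]:
    flow = net_order_flow(diversity)
    # Crude linear price-impact assumption: 0.01% move per unit of imbalance.
    print(f"{label:>16}: net order flow {flow:+5d}, "
          f"price impact {0.0001 * flow:+.1%}")
```

With diverse models, dissenting traders absorb part of the selling pressure; with a single shared model, the market turns one-way and the (assumed) linear price impact more than doubles.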
Humans must remain in charge
Alongside the opportunities it generates, AI carries its share of risks. This is all the more true in investing as financial markets are, by their nature, unstable environments, which do not lend themselves well to the development of automated predictive models that rely on the past to foresee the future. Certainly, AI will learn, progress and adapt. However, today and particularly in unexpected situations, humans are capable of making more informed decisions by drawing on their knowledge. Our situational intelligence still far surpasses that of AI.
AI does not (yet) outperform human intelligence, and if you want to incorporate it into your investment strategy, we highly recommend doing so through a seasoned professional, who will be able to verify what the computers produce in terms of data quality, explainability, stability and performance. Now is clearly not the time to entrust the management of your retirement savings plan or your assets to ChatGPT. AI is not bad in itself, but new investors must be warned: using this technology properly remains complex.