High-performance gaming and artificial intelligence computing giant NVIDIA launched its Deep Learning Institute (DLI) last year, and is now offering its first courses on applying this technology to the finance vertical.
One of the first deep learning lab courses focusing specifically on the domain of capital markets trading will be taking place on 5 December at Newsweek's AI and Data Science in Capital Markets event in New York (places are limited).
Andy Steinbach, head of AI in financial services and senior director at NVIDIA, explained: "There's not a lot of academic research that shows how to take these neural network techniques and adapt them to finance. It became clear to us that was sorely needed.
"We set out to develop labs that would show how to marry the basic building blocks like auto-encoders, recurrent neural networks, reinforcement learning, with very relevant finance problems like algorithmic trading, statistical arbitrage, optimising trade execution, and so we have done that."
High performance computing and AI
NVIDIA has the wind at its back as it pushes into the AI and deep learning future. The parallel architecture of its hardware makes building models and training algorithms on large data sets achievable far faster than with CPUs alone.
NVIDIA's DGX-1 system, a powerful out-of-the-box deep learning starter appliance for a data science team, comes with a cloud software registry containing deep learning frameworks in pre-built, plug-and-play software containers.
Steinbach said: "A lot of IT infrastructure teams haven't developed expertise on deep learning yet. So that task of getting deep learning frameworks running – whether it's CPU or GPU – might fall to the data science team. But the DGX allows you to avoid your data science team spending a lot of time on infrastructure operations and just to do the AI work you need them to do."
In addition to GPU architecture being well suited to algorithms that need to scale across many parallel calculations, deep learning frameworks help deal with the diversity of network types. Recurrent neural networks, for example, are good for financial engineering because they allow you to incorporate time series. Tools like auto-encoders help when you lack labelled data: there may be some behavioural pattern you cannot label in advance, and you want the network to discover it for itself.
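To make the auto-encoder idea concrete, here is a minimal pure-Python sketch (an illustration, not code from NVIDIA's DLI labs): a tiny linear auto-encoder with a one-unit bottleneck learns the relationship between two correlated features, and a point that breaks the learned pattern then stands out through high reconstruction error, with no labels ever supplied.

```python
import random

random.seed(0)

# Toy data: two features that normally move together (a simple "behavioural pattern").
normal = [(x, 2 * x + random.gauss(0, 0.05))
          for x in [random.uniform(-1, 1) for _ in range(200)]]

# Linear auto-encoder with a 1-unit bottleneck: encode h = w1*a + w2*b,
# decode as (v1*h, v2*h). Trained with plain stochastic gradient descent
# on the squared reconstruction error.
w1, w2, v1, v2 = 0.5, 0.5, 0.5, 0.5
lr = 0.02
for _ in range(500):
    for a, b in normal:
        h = w1 * a + w2 * b
        ea, eb = v1 * h - a, v2 * h - b   # reconstruction errors
        dh = ea * v1 + eb * v2            # error pushed back through the decoder
        v1 -= lr * ea * h
        v2 -= lr * eb * h
        w1 -= lr * dh * a
        w2 -= lr * dh * b

def recon_error(a, b):
    h = w1 * a + w2 * b
    return (v1 * h - a) ** 2 + (v2 * h - b) ** 2

# A point obeying the learned pattern reconstructs well; one breaking it does not.
print(recon_error(0.5, 1.0))    # small: fits the pattern
print(recon_error(0.5, -1.0))   # large: anomalous, flagged without any labels
```

Real anomaly detectors use deep non-linear auto-encoders over many features, but the principle is the same: whatever reconstructs poorly does not fit the behaviour the network has learned.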
Then there are more advanced techniques like reinforcement learning, which can learn game strategies; Google's DeepMind used it to create AlphaGo, the program that beat the world's best Go player. There are also generative adversarial networks, which learn to mimic data so well that their output can fool the network judging it.
Steinbach said: "Deep learning frameworks allow researchers to very quickly structure these networks and not have to recreate the wheel in terms of software every time."
"It's interesting that most of these algorithms reduce to a master training algorithm under the hood, called back-propagation, which is highly parallel and makes it easier to scale the computations across many GPUs.
"That algorithm takes large streams of data – it can be images, it can be audio, it can be tick data – and trains these networks and all the parameters, and so these complicated deep learning algorithms map onto this relatively simple algorithm that's massively parallel.
"And if your deep neural network isn't training fast enough on a big data set, you can essentially push a button and scale it out to more GPUs in a data centre."
Market prediction problems
While there has been something of a "big bang" within AI around use cases like computer vision, applying this type of learning to the world of finance remains a proposition with fewer academic proof points and less example code to fall back on. Trying to predict asset prices means running models on large, non-stationary datasets and, by extension, attempting to predict the future, something quants have been grappling with for decades.
Financial markets have been described as a short-term voting machine and a longer-term weighing machine; in other words, trend following and speculation give way over time to more meaningful valuation. Deciphering these non-deterministic behavioural patterns is a challenge: models of behaviour are accurate only up to a point, a limitation known as model risk.
"However, deep learning represents a paradigm shift in data science, which comes about when you start out by using the data to build a model, rather than the other way around," said Steinbach. "That avoids model risk in a big way and so that's going to become more significant in quant financial engineering."
The behavioural patterns alluded to above can have characteristic time scales. These patterns can exist at the microsecond level, for example, where high frequency traders are making tiny arbitrage gains by exploiting small differences between different exchanges with high speed networks. There are day trading patterns around momentum stocks where people all have to close out their positions at lunch time or at the end of the day.
"These days many of these sets of rules will be computer driven. What if you could study the patterns in the market and you could infer what rules people are using to trade; if you could infer that, then you could predict ahead on that time scale," said Steinbach.
Deep learning algorithms are also useful for flagging when a regime change is happening in the market; the algorithm may not be able to predict the actual behaviour after that, but at least you have a red flag coded into your algorithm that can say, "hey, all bets are off; you had better reduce your risk," noted Steinbach.
"The algorithm may be able to tell you the market regime is changing - and looking through a set of algorithms trained on past behaviour, it actually recognises that it's changing to a different regime from a couple of years back, or another behavioural pattern it recognises.
"Deep learning can spot trends, sentiment changes and risk-on, risk-off behaviour, which could be something as simple as a sector rotation from one type of stock into a more defensive type."
Overfitting and deep learning
Overfitting is a classic statistical problem whereby a model doesn't generalise well from training data to unseen data. It's when a model describes random error and noise rather than the underlying relationship or signal, often because the model is excessively complex and perhaps has too many parameters. Deep learning can mitigate overfitting using regularisation techniques such as "dropout".
Steinbach said: "What dropout does is randomly set weights to zero; what it is trying to do is make all of the individual neurons able to contribute something on their own.
"It is trying to eliminate unlikely combinations of neurons that might contribute to overfitting. There is a kind of Occam's Razor principle; if you have a crazy curve that goes through every data point, you are likely to be overfitting.
"If you use these techniques that properly avoid overfitting, the network won't find a correlation that's not there. If it can't produce accurately, it won't give you the wrong answer."
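As a small sketch of the regularisation technique discussed above (as commonly implemented, dropout zeroes unit activations; randomly zeroing weights instead is the related DropConnect variant), here is "inverted" dropout in pure Python. Each training pass sees a different random sub-network, so no single co-adapted combination of neurons can dominate:

```python
import random

random.seed(42)

def dropout(activations, p_drop):
    """Inverted dropout: zero each unit with probability p_drop during
    training, and scale the survivors by 1/(1 - p_drop) so the expected
    activation is unchanged when dropout is switched off at test time."""
    keep = 1.0 - p_drop
    return [a / keep if random.random() < keep else 0.0 for a in activations]

layer = [0.8, -0.3, 1.2, 0.5, -0.9, 0.1, 0.7, -0.4]

# During training, a fresh random mask is drawn on every pass.
for step in range(3):
    print(dropout(layer, p_drop=0.5))

# At test time dropout is disabled and the full, unmasked layer is used.
print(layer)
```

Because each neuron must be useful across many random masks, the network is pushed toward the simpler, more robust fits that the Occam's Razor principle above describes.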
Andy Steinbach will be talking about deep learning at Newsweek's AI and Data Science in Capital Markets conference on December 6-7 in New York, the most important gathering of experts in Artificial Intelligence and Machine Learning in trading. Join us for two days of talks, workshops and networking sessions with key industry players.