
Generating Financial Series with Generative Adversarial Networks Part 2

Fernando De Meer

24/06/2019


This is a follow-up to a recent post in which we discussed how to generate 1-dimensional financial time series with Generative Adversarial Networks. If you haven’t read that post yet, we suggest you do so, since it introduces the building blocks used in this one.

Here we will go over the process behind generating multidimensional time series with GANs, the challenges this task poses, and our approach to overcoming them.

From 1 to many dimensions

In theory, jumping from generating 1-dimensional series to multidimensional ones should be easy: we are only switching from vectors to matrices, so a modification of our neural networks should do the trick, right? Wrong!

This approach resulted in mode collapse. The networks had to be scaled up significantly, which made training very costly and unstable, and we ended up with a generator outputting nonsense most of the time. We had to come up with a more sophisticated generation procedure.

Back at the drawing board, we looked for inspiration in a recently published paper that presented a novel GAN setup, with performance similar to WGAN-GP in terms of sample quality and robustness but much more computationally efficient. This setup is called the Relativistic Average GAN, and it involves a loss that allows the generator to lower the probability of real data being real!

As crazy and counter-intuitive as this may seem, the setup works wonders, and its mathematical justification is quite solid. We recommend reading the paper behind it if you’re interested in the full details, but for the purpose of this post, it’s just a GAN setup with nice training qualities and fast training times.
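To make the "lowering the probability of real data being real" idea concrete, here is a minimal sketch of the Relativistic Average GAN losses in numpy. This follows the loss form in the RaGAN paper (critic scores compared against the *average* score of the opposite class); the exact networks and hyperparameters we used are not shown.

```python
import numpy as np

def _bce_with_logits(logits, target):
    # Numerically stable binary cross-entropy on raw (unbounded) critic scores.
    logits = np.asarray(logits, dtype=float)
    return np.mean(np.maximum(logits, 0) - logits * target
                   + np.log1p(np.exp(-np.abs(logits))))

def ragan_losses(real_logits, fake_logits):
    # The critic judges whether a real sample looks more realistic than the
    # *average* fake sample (and vice versa). Note the generator loss also
    # pushes the probability of real data being real *down* (the real_rel
    # term with target 0).
    real_rel = real_logits - np.mean(fake_logits)
    fake_rel = fake_logits - np.mean(real_logits)
    d_loss = _bce_with_logits(real_rel, 1.0) + _bce_with_logits(fake_rel, 0.0)
    g_loss = _bce_with_logits(fake_rel, 1.0) + _bce_with_logits(real_rel, 0.0)
    return d_loss, g_loss
```

When the critic cleanly separates real from fake scores, the discriminator loss is small and the generator loss is large, as expected.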

Conditional GAN

In all the GAN setups described so far, the input to the generator is always a noise vector. This way, we are simply asking a well-trained Generator to generate a random sample from the model distribution. It is possible, however, to condition this generation on additional information that is also added as an input to the Generator.

In many papers, Conditional GANs have proven to be more robust and able to produce better-quality samples than classical GANs. Conditional information is very diverse: it can be a class label as in [1], a corrupted image to be reconstructed as in [2], or a base photo to “beautify” as in [3]. In our case, as we will detail in an upcoming section, we will teach our GAN to generate different dimensions of a time series by conditioning on one dimension, which we will call the “base” dimension.
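One common way to wire up such conditioning (a sketch of the general scheme; the exact architecture in our setup may differ) is to concatenate the conditioning series with the latent noise on the generator side, and to feed the (base, associate) pair to the discriminator:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator_input(base_window, latent_dim=8):
    # Concatenate the conditioning "base" window with a latent noise vector
    # to form the conditional generator's input.
    noise = rng.standard_normal(latent_dim)
    return np.concatenate([base_window, noise])

def discriminator_input(base_window, associate_window):
    # The discriminator sees the pair as two channels, so it judges whether
    # the "associate" series is plausible *given* the "base" series.
    return np.stack([base_window, associate_window])

base = rng.standard_normal(100)            # a 100-day "base" window
g_in = generator_input(base)               # shape (108,)
d_in = discriminator_input(base, base)     # shape (2, 100)
```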

Multidimensional Generation Procedure

In order to overcome the “curse of dimensionality” of the multidimensional case, we propose the following generation procedure:

  1. We first take rolling periods of the series, then choose one of the dimensions of the series as the “base” dimension and all the others as “associate” dimensions.
  2. We construct a dataset for each “associate” dimension by taking the synchronous periods of the “base” and the corresponding “associate” dimension.
  3. We train a Relativistic Average GAN in a conditional setup by supplying the “base” dimension as conditional information to the generator and having it generate the “associate” dimension. We then input both series to the discriminator. This way, the generator can learn what the behaviour of the “associate” dimension would be given a “base” dimension.
  4. We now train a WGAN-GP as in our previous post only on the “base” dimension and generate scenarios of the “base” dimension.
  5. Finally, for each sample of the “base” dimension, we conditionally generate each of the “associate” dimensions with each of the trained Relativistic Average GANs by giving as input the “base” dimension.
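Putting Steps 4 and 5 together, the generation stage can be sketched as follows. The two generator functions here are hypothetical stand-ins for the trained networks (in practice they would be the trained WGAN-GP and the per-dimension RaGANs); the point is the shape of the orchestration, not the models themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

def wgan_gp_generate_base(n_scenarios, window):
    # Stand-in for the trained WGAN-GP sampling "base" scenarios (Step 4).
    return rng.standard_normal((n_scenarios, window))

def ragan_generate_associate(base_scenarios):
    # Stand-in for one trained conditional RaGAN: one "associate" window
    # generated per "base" window (Step 5).
    return rng.standard_normal(base_scenarios.shape)

def generate_multidimensional(n_scenarios=10, window=100, n_associates=3):
    base = wgan_gp_generate_base(n_scenarios, window)      # Step 4
    associates = [ragan_generate_associate(base)           # Step 5: one
                  for _ in range(n_associates)]            # RaGAN per dim
    # Resulting tensor: (n_scenarios, 1 + n_associates, window)
    return np.stack([base] + associates, axis=1)

scenarios = generate_multidimensional()
```

Each scenario is thus built around a single coherent "base" path, which is what keeps the cross-dimension dependence intact.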

Example Generation

In order to illustrate our procedure, we choose as a dataset a 2-dimensional time series composed of the VIX and the SP500 indices. These two indices have a pronounced negative correlation: whenever the SP500 has a prolonged downward period, market volatility increases, and so does the VIX.

VIX and SPY dynamics

We choose the daily returns of the SP500 as the “base” series and the returns of the VIX as the “associate” series. We construct the dataset by taking rolling periods of 100 days, advancing 100 days each time, making pairs of “base” series from the SP500 and “associate” series from the VIX, following Steps 1 and 2. After having trained our Relativistic Average GAN as in Step 3, we can ask it to conditionally generate “associate” series by giving the “base” series as inputs. The resulting rolling price series look like the following:

synthetic VIX scenarios


It’s easy to see that the RaGAN has learned what rules the dynamics of the VIX with respect to the SP500: it spikes on downturns of the SP500 and slowly decreases on its upward movements.
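The dataset construction of Steps 1 and 2 (synchronous 100-day windows, advancing 100 days each time) can be sketched as below. The function name and the random series are illustrative; in practice the inputs would be the actual SP500 and VIX daily returns.

```python
import numpy as np

def make_conditional_dataset(base_returns, associate_returns,
                             window=100, step=100):
    # Cut the two aligned return series into synchronous windows of
    # `window` days, advancing `step` days each time (non-overlapping when
    # step == window), giving (base, associate) training pairs.
    pairs = []
    for start in range(0, len(base_returns) - window + 1, step):
        pairs.append((base_returns[start:start + window],
                      associate_returns[start:start + window]))
    return pairs

# Illustrative stand-ins for the SP500 and VIX daily return series.
sp500 = np.random.default_rng(0).standard_normal(1000)
vix = np.random.default_rng(1).standard_normal(1000)
pairs = make_conditional_dataset(sp500, vix)   # 10 pairs of 100-day windows
```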

Following Step 4, we now need to generate new “base” series as in our earlier post. Since in Step 3 we trained the Relativistic Average GANs on returns of both the SP500 and the VIX, along with the SP500 returns we also need to generate the starting point of the VIX for each SP500 “base” series (remember that the VIX is a mean-reverting series, so the level at which it starts changes its dynamics a lot!).
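Turning a generated return window plus a starting level back into a price path is a simple cumulative product; a minimal sketch (the function name is ours):

```python
import numpy as np

def returns_to_prices(start_level, returns):
    # Rebuild a price path from a starting level and a series of simple
    # returns: p_t = start_level * prod_{s<=t} (1 + r_s). For a
    # mean-reverting series like the VIX, start_level matters a lot.
    return start_level * np.cumprod(1.0 + np.asarray(returns))

# e.g. a short synthetic VIX window starting at a level of 15
vix_path = returns_to_prices(15.0, [0.10, -0.05, 0.02])
```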

Finally, following Step 5, we conditionally generate the VIX returns given our synthetic SP500 series and the VIX starting point, and compute the rolling series of VIX prices. The results look like the following:

synthetic VIX scenarios


The generated samples clearly reflect the negative correlation present in the real data, and the VIX series also have their characteristic shape, with sudden spikes and curve-shaped downtrends! In an upcoming post we will explore how to measure the quality of our generated datasets. How best to do this is still an ongoing debate and a hot area of research, but we will go with a widely accepted approach that consists of testing the performance of a classifier trained on synthetic data.