Risk Management

To be or not to be (correlated)

P. López

05/07/2018


There are many problems that a data scientist encounters when “fighting” financial data for the first time: nothing is normally distributed, most problems are tough (low signal to noise ratio) and non-stationary high-dimensional time series are ubiquitous.

In Quantdare we have spoken many times about one of the main sources of non-stationarity in financial time series: volatility. It is very well-known that volatility (standard deviation) is not constant in financial markets and volatility clustering is one of the clearest characteristics of financial returns.

In this blog, you can find some references to the matter, for example in our posts Learning with kernels, Financial Time Series Generation or How do stock market prices work? (and probably more that I’m missing).

First things first: the covariance matrix for two assets can be written as

\[\Sigma = \begin{bmatrix} \sigma_{1}^{2} & \rho \sigma_{1} \sigma_{2}\\ \rho \sigma_{1} \sigma_{2} & \sigma_{2}^{2}\end{bmatrix}\]

where \(\sigma_{i}\) is the volatility (standard deviation) of asset \(i\).

When we speak about time-varying volatility, we are actually speaking about \(\sigma_{i}\) changing with time. However, the off-diagonal terms of the covariance matrix can vary with time too (and they do). These off-diagonal terms are the covariances and, once standardised by the volatilities of the variables, they become the well-known Pearson correlation coefficient \(\rho\) (if you don’t know what I’m talking about, you can visit this really neat explanation of what Pearson correlation is and how it is used in finance).
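To make this concrete, here is a minimal Python sketch (with made-up volatilities and a made-up correlation, not real data) that builds the two-asset covariance matrix above and recovers \(\rho\) by standardising the off-diagonal term:

```python
import numpy as np

# Illustrative values, not estimated from data
sigma1, sigma2, rho = 0.20, 0.10, -0.30

# The 2x2 covariance matrix from the formula above
cov = np.array([
    [sigma1**2,             rho * sigma1 * sigma2],
    [rho * sigma1 * sigma2, sigma2**2],
])

# Standardising the off-diagonal term by the volatilities gives back rho
rho_recovered = cov[0, 1] / (np.sqrt(cov[0, 0]) * np.sqrt(cov[1, 1]))
print(rho_recovered)  # -0.3 (up to floating point)
```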

Why is this important?

As you probably know, covariance is important because most (if not all) portfolio optimisation problems include the following quadratic form:

\[\omega^{T}\,\Sigma\,\omega \]

where \(\Sigma\) is the variance-covariance matrix.

This quadratic form is the level of risk (variance) of our portfolio and, consequently, this expression is vital for risk management, diversification, computing the efficient frontier and much more…
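As a quick sanity check, here is a small sketch (with illustrative weights and parameters) showing how the quadratic form gives the portfolio variance, and how an underestimated correlation understates risk:

```python
import numpy as np

# Illustrative two-asset portfolio: weights, vols and correlation
w = np.array([0.6, 0.4])
sigma1, sigma2, rho = 0.20, 0.10, 0.5
cov = np.array([[sigma1**2,             rho * sigma1 * sigma2],
                [rho * sigma1 * sigma2, sigma2**2]])

# Portfolio variance is the quadratic form w' Sigma w
port_var = w @ cov @ w
port_vol = np.sqrt(port_var)

# If the true rho were 0.5 but we plugged in -0.5, the same formula
# would report a much lower (wrong) level of risk
cov_wrong = cov.copy()
cov_wrong[0, 1] = cov_wrong[1, 0] = -0.5 * sigma1 * sigma2
understated_vol = np.sqrt(w @ cov_wrong @ w)
print(port_vol, understated_vol)
```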

It is easy to see that, if the matrix \(\Sigma\) changes with time, then our optimisation objective will change too and our optimal solution or risk assessment can become totally wrong!

In addition, there’s strong academic evidence in favour of the existence of contagion, i.e. increasing correlations between assets during financial crises. This is especially true for equity markets, but it’s a stylised fact of financial returns in general. If we don’t take this effect into account, we will be underestimating the level of risk of our portfolio, and that’s really scary.

Let’s look at a very simple example.

A clarifying example

Gold is widely known as the anti-dollar, because of its negative correlation with the US dollar.

Using the EURUSD exchange rate as an inverse proxy for the value of the dollar, we can measure the strength of this relationship. Since EURUSD is the dollar-denominated price of one euro, we expect a positive correlation between gold prices and EURUSD.

However, since correlations are dynamic and vary with time, how can we know the true value of the instantaneous correlation between EURUSD and gold as a function of time?

A first solution would be to compute the rolling Pearson correlation coefficient between the returns of the two series. But an important question quickly comes to mind: which window should we use?

[Figure: rolling correlation between gold and EURUSD for different sample windows]

As you can see, this simple estimate of correlation varies a lot depending on the sample window, so we may need a more powerful tool to solve this problem.
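A quick sketch of this window-dependence, using synthetic returns with a known, constant correlation of 0.5 (the post itself uses EURUSD and gold data, which we don’t reproduce here):

```python
import numpy as np
import pandas as pd

# Two synthetic return series with true correlation 0.5
rng = np.random.default_rng(0)
n = 1000
z = rng.standard_normal((n, 2))
x = pd.Series(z[:, 0])
y = pd.Series(0.5 * z[:, 0] + np.sqrt(1 - 0.5**2) * z[:, 1])

# Same data, three windows: the shorter the window, the noisier the
# rolling correlation path, even though the true correlation is constant
stds = {w: x.rolling(w).corr(y).std() for w in (30, 90, 250)}
for w, s in stds.items():
    print(f"window={w}: std of rolling corr = {s:.3f}")
```

The spread of the rolling estimates shrinks as the window grows, but a long window would also smooth away genuine changes in correlation: that is exactly the trade-off the post is about.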

The experiment

In order to find the best way to uncover the true dynamic correlation between two assets, we are going to create two artificial correlation time series, obtained by filtering (smoothing) the rolling sample correlations from the previous plot. One of them varies quickly; the other is much smoother.

[Figure: the two artificial dynamic correlation series, one fast-varying and one slow-varying]

To generate random samples from a multivariate Gaussian distribution we need more information than just correlations: we have to artificially create full covariance matrices that vary with time. From each of these dynamic covariance matrices we are going to (jointly) sample a pair of artificial returns.

In order to do so, at each time step we use the Cholesky decomposition \(L_{t}\) of the instantaneous covariance matrix \(\Sigma_{t}\) to draw two correlated samples from a multivariate Gaussian distribution:

\[ \Sigma_{t} = L_{t}L_{t}^{T}, \qquad x_{t} = L_{t}z_{t} \]

where \(x_{t}\) is a vector of generated random returns and \(z_{t}\) is a vector of independent samples from a standard normal.
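A minimal sketch of this sampling step, with an illustrative sinusoidal correlation path and constant volatilities instead of the filtered paths used in the post (note that numpy’s `cholesky` returns the lower-triangular factor \(L\) with \(\Sigma = LL^{T}\)):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
# Illustrative slowly varying correlation path (not the one in the post)
rho_t = 0.5 * np.sin(np.linspace(0, 4 * np.pi, n))
sigma1 = sigma2 = 0.01  # constant vols, for simplicity

returns = np.empty((n, 2))
for t in range(n):
    # Instantaneous covariance matrix Sigma_t
    cov_t = np.array([[sigma1**2,                rho_t[t] * sigma1 * sigma2],
                      [rho_t[t] * sigma1 * sigma2, sigma2**2]])
    L = np.linalg.cholesky(cov_t)   # Sigma_t = L L^T, L lower-triangular
    z = rng.standard_normal(2)      # independent standard normal samples
    returns[t] = L @ z              # correlated pair with covariance Sigma_t
```

The sample correlation computed over any stretch of `returns` should then track the average of `rho_t` over that stretch.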

Repeating this procedure for several timesteps \(t\), we get:

[Figure: simulated return pairs for the fast-varying and slow-varying covariances]

It’s easy to notice that, for the fast-varying covariance, the volatility of the returns is less stable, and so is the correlation coefficient between the two series.

Once we have generated these random pairs of series, we’re going to recover the dynamic correlation coefficient of each pair using different standard techniques. Since we have created the pairs ourselves, we know their true dynamic correlation and can therefore run a proper competition between the different algorithms. May the best win!

The tools of the quant

To tackle this problem, there are several tools that any good quant can use:

    1. Dynamic conditional correlation model: this model is a form of multivariate GARCH that assumes an ARMA process for the conditional correlation matrix and univariate GARCH(1, 1) processes for the volatility of the individual assets.
    2. BEKK model: this is another multivariate GARCH version but, in this case, the full covariance matrix follows an ARMA process.
    3. Risk metrics 2006: this methodology by JP Morgan is broadly used in the industry. The classical version proposes estimating the covariance matrix with an exponentially weighted scheme over past samples, using a smoothing (decay) parameter of 0.94. The 2006 version blends the estimates obtained with a range of different smoothing parameters in order to produce a more accurate final estimate.
    4. Kalman filter: this is actually a state space model, but it can be used to compute instantaneous regression betas, as explained in this very nice post. In order to use it here, the beta of the regression has to be equivalent to the correlation coefficient; by definition, this is the case when the samples are scaled to unit variance. In our example, as volatility also changes with time, we have standardised the returns with the one-year rolling volatility.
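To give a flavour of tool 4, here is a minimal scalar Kalman filter for a random-walk regression beta. This is a simplified sketch, not the exact model used in the experiment; the state-noise and observation-noise values are illustrative choices:

```python
import numpy as np

def kalman_beta(x, y, q=1e-4, r=1.0):
    """Filter a time-varying beta in y_t = beta_t * x_t + noise,
    where beta_t follows a random walk with variance q."""
    beta, p = 0.0, 1.0              # state estimate and its variance
    betas = np.empty(len(x))
    for t in range(len(x)):
        p = p + q                   # predict: random-walk state
        k = p * x[t] / (x[t]**2 * p + r)       # Kalman gain
        beta = beta + k * (y[t] - beta * x[t])  # update with observation
        p = (1.0 - k * x[t]) * p
        betas[t] = beta
    return betas

# Usage on synthetic unit-variance series with true correlation 0.6;
# with unit variances, the filtered beta is the correlation estimate
rng = np.random.default_rng(2)
n = 2000
z = rng.standard_normal((n, 2))
x = z[:, 0]
y = 0.6 * x + np.sqrt(1 - 0.6**2) * z[:, 1]
est = kalman_beta(x, y)
```

Here the ratio `q / r` plays the role of the smoothing parameter: a larger `q` lets the filtered beta react faster, at the cost of a noisier estimate.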

Once we have introduced some algorithms to try, let’s make them compete:

Fast-varying correlation

[Figure: estimates from the four methods vs. the true fast-varying correlation]

Slow-varying correlation

[Figure: estimates from the four methods vs. the true slow-varying correlation]

In the plots above, the gray area represents the square error of the estimates.

Surprisingly enough, all the methods are pretty good at estimating the original dynamic correlation of the returns and, at the very least, they all roughly capture whether the correlation changes quickly or slowly.

In order to compare the inference accuracy of the different algorithms we can use the Mean Square Error of the estimated correlation relative to the true one:

              Fast     Slow
DCC           0.0194   0.0068
BEKK          0.0204   0.0079
Riskmetrics   0.0229   0.0162
Kalman        0.0142   0.0038

Even though the Kalman filter is a very general technique, it manages to beat all the models that were specifically designed for finance. Bravo!
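For reference, the scoring behind the table above is just a mean square error against the known true correlation path. A sketch with synthetic stand-in estimates (not the actual model outputs):

```python
import numpy as np

# A known true correlation path and two fake "model" estimates of it
rng = np.random.default_rng(3)
n = 1000
true_rho = 0.5 * np.sin(np.linspace(0, 2 * np.pi, n))

estimates = {
    "noisy":  true_rho + 0.15 * rng.standard_normal(n),
    "smooth": true_rho + 0.05 * rng.standard_normal(n),
}

# Mean square error of each estimated path vs. the true path
mse = {name: np.mean((est - true_rho) ** 2) for name, est in estimates.items()}
print(mse)
```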

Getting Bayesian

And finally, just because everything is better when you do it Bayes’ way, we can estimate the correlation as the beta of a Bayesian rolling regression. It is extremely easy to implement with probabilistic programming in Python thanks to the pymc3 package; in the docs you can find a very nice example of Bayesian rolling regression.

Using this Bayesian approach, it is straightforward to come up with some uncertainty bounds around our correlation estimate.

Even though having uncertainty estimates is much better than not having them, the plots suggest that the estimated 10th and 90th percentile bounds are a bit too conservative:

[Figures: Bayesian correlation estimates with 10th and 90th percentile bands, fast-varying and slow-varying cases]

To sum up, the Kalman filter and other dynamic regression methods (including Bayesian ones) seem genuinely better at estimating time-varying correlation than the (more complex) multivariate financial models specifically designed for the task. Deal with it, JP Morgan!

Unfortunately, our example is too simple for portfolio applications. If you are interested in more than the correlation between two series, or you need to estimate the full covariance matrix of many assets, the dynamic regression approach is no longer an elegant option and the problem becomes much harder. For those cases, this paper proposes a very elegant alternative to the traditional multivariate GARCH models for estimating a full covariance matrix.

I hope you find this post useful and, from now on, please get your correlations right!
