risk management

Principal Component Analysis

j3

04/11/2016


Principal Component Analysis (PCA) is a technique used to reduce the dimensionality of a data set, finding the causes of variability and sorting them by importance.

How?

If you have a set of observations (features, measurements, etc.) that can be projected on a plane (X, Y) such as:

[Figure: the observations plotted on the (X, Y) plane]

You can display the same points using the X* and Y* axes, which are also orthogonal.

[Figure: the same observations seen from the X* and Y* axes]

If instead your observations were these:

[Figure: observations lying on a straight line in the (X, Y) plane]

And you make the same change of basis to X* and Y*:

[Figure: the same observations on the X* and Y* axes]

It turns out that you can explain all the observations in the single dimension X*. The Y* axis can be ignored: it takes the same value for every observation, so it contains no information.

I have therefore reduced the dimensions from two to one, without losing information.
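As an illustration (the numbers and variable names are mine, not the post's), a minimal numpy sketch of this two-to-one reduction, assuming perfectly collinear observations:

```python
import numpy as np

# Observations that lie on a straight line in the (X, Y) plane
obs = np.array([[1.0, 2.0],
                [2.0, 4.0],
                [3.0, 6.0],
                [4.0, 8.0]])

# Centre the data (PCA assumes zero-mean observations)
centred = obs - obs.mean(axis=0)

# Covariance matrix and its eigendecomposition
S = np.cov(centred, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(S)  # eigenvalues in ascending order

# Project onto the new axes, largest component (X*) first
projected = centred @ eigvecs[:, ::-1]

# The second column (Y*) is constant at zero: it carries no information
print(np.allclose(projected[:, 1], 0.0))  # True
```

All the variability survives in the first column, so the second dimension can be dropped without losing anything.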


So how did I obtain X* and Y* axes?

The first Principal Component (X*) is defined as the linear combination of the original variables that has maximum variance. The values on this first component are

z1 = O a1

where O is the matrix of observations with zero mean (and therefore z1 has zero mean too). Its variance is

Var(z1) = a1' S a1

where S is the variance–covariance matrix of the observations. Imposing the restriction a1'a1 = 1 and introducing a Lagrange multiplier λ:

M = a1' S a1 − λ(a1'a1 − 1)

Maximizing this expression means differentiating with respect to a1 and setting the result to zero:

∂M/∂a1 = 2 S a1 − 2λ a1 = 0

which is simply S a1 = λ a1: a1 is an eigenvector of the matrix S, and λ its corresponding eigenvalue.
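A quick numerical check of that derivation (a sketch with invented data, not the post's): the leading eigenvector of S satisfies the Lagrange condition, and no other unit vector achieves a larger variance a' S a:

```python
import numpy as np

rng = np.random.default_rng(0)
O = rng.normal(size=(200, 3))
O -= O.mean(axis=0)            # zero-mean observations
S = np.cov(O, rowvar=False)    # covariance matrix

eigvals, eigvecs = np.linalg.eigh(S)      # ascending eigenvalues
a1, lam = eigvecs[:, -1], eigvals[-1]     # leading eigenpair

# S a1 = lambda a1, exactly as the Lagrange condition requires
assert np.allclose(S @ a1, lam * a1)

# No random unit vector beats the leading eigenvector's variance
for _ in range(1000):
    a = rng.normal(size=3)
    a /= np.linalg.norm(a)
    assert a @ S @ a <= lam + 1e-12
```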

Ufff, algebra… so to sum up?

You have to find the axis X* such that the orthogonal distance from the points to it is minimal. X* then captures the greatest variability of the data and is therefore the first Principal Component.

And with Y*? Simply take one that is orthogonal to X*, and this will be your second Principal Component.

Okay, so thus far, you’ve made a base change, and can represent the points in a different way in the plane and sort them by importance… Want an example?

Take 4 assets, 2 fixed income and 2 equity. Using their annual return, volatility and maximum drawdown as variables, we get the following observation matrix O.

[Table: O, one row per asset, columns annual return, volatility and maximum drawdown]

[Figure: the 4 assets plotted in 3D]

WoW, 3D!

You can transform this representation into a 2D one without losing anything if, for example, volatility and maximum drawdown provide the same information to the whole set, or if they are correlated.

The eigenvalues associated with the covariance matrix of the normalized O are:

[Table: eigenvalues of the covariance matrix]

And the new components are related to the original variables as follows:

[Table: loadings of the two principal components on annual return, volatility and maximum drawdown]

Together they retain nearly 100% of the information contained in O.

[Figure: the 4 assets plotted on the two new axes]

You have reduced the dimensionality while maintaining the relationships within the set. This lets you view the status of the assets in a plane with two new axes: one measures risk as a combination of volatility and MDD; the other measures return.
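A small sketch in that spirit (the figures are invented for illustration, not the post's data): four assets described by annual return, volatility and maximum drawdown, with the last two strongly correlated, compress onto two components:

```python
import numpy as np

# Hypothetical O: rows = assets, cols = (annual return, volatility, max drawdown)
O = np.array([[0.02, 0.03, 0.031],   # fixed income 1
              [0.03, 0.04, 0.042],   # fixed income 2
              [0.08, 0.20, 0.210],   # equity 1
              [0.10, 0.25, 0.260]])  # equity 2

Z = (O - O.mean(axis=0)) / O.std(axis=0)   # normalise each variable
S = np.cov(Z, rowvar=False)
eigvals = np.linalg.eigh(S)[0][::-1]       # eigenvalues, descending

# Share of total variance explained by the first two components
explained = eigvals[:2].sum() / eigvals.sum()
print(explained)   # close to 1: two axes carry almost everything
```

Because volatility and maximum drawdown move together here, the third eigenvalue is nearly zero and the 3D picture collapses to a plane.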

What about adding more dimensions?

It's definitely possible. PCA lets you understand multidimensional data sets through their most representative subset of dimensions.

With N assets in a set, the assets themselves can be the dimensions and their returns the observations. If each eigenvector groups the assets closest to it, you can filter out duplicated behaviour and build rich universes with fewer elements.

And, given an asset allocation such as:

W = x1 R1 + x2 R2 + … + xn Rn

where the xi are the weights and the Ri the returns of the assets: the first Principal Component of the covariance matrix of the n assets is the one that contains the most information, so its associated eigenvector defines weights that maximize the variance of W.
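Sketching that last idea (invented returns and my own variable names, not the post's): take the leading eigenvector of the asset-return covariance matrix as the weight vector x; among unit-norm weight vectors, it maximizes Var(W):

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.normal(0.0005, 0.01, size=(250, 4))   # daily returns of 4 assets
S = np.cov(R, rowvar=False)                   # asset covariance matrix

eigvals, eigvecs = np.linalg.eigh(S)
x = eigvecs[:, -1]                 # weights = leading eigenvector, ||x|| = 1

W = R @ x                          # portfolio return series
# Var(W) = x' S x equals the largest eigenvalue of S
assert np.isclose(np.var(W, ddof=1), eigvals[-1])
```

Note these eigenvector weights need not be positive or sum to one; a real allocation would rescale or constrain them.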

Uffff, enough for today!!

Want to read this post in Spanish?
