The Kalman filter is a method for estimating unobservable state variables from observable variables that may contain measurement errors.
The algorithm requires two types of equations: those that relate the state variables to the observable variables (main equations) and those that determine the temporal structure of the state variables (state equations). The state variables are estimated from their own dynamics (the time dimension) as well as from the measurements of the observable variables obtained at each point in time (the cross-sectional dimension). The dynamics are summarised in two steps:
- Estimate the state variables using their own dynamics (prediction stage).
- Improve the first estimation using the information of the observable variables (correction stage).
An attractive feature of this methodology is its recursive character. Once the algorithm predicts the new state at time t, it adds a correction term, and the new “corrected” state serves as the initial condition for the next step, t+1. In this way, the estimation of the state variables uses all the information available at the moment, and not only the information from the stage immediately before the estimate is made (this is known as “signal extraction”).
Suppose the following main equation:
Zt = Ht * xt + ut
Zt is a vector of observable variables, xt is the vector of state variables, and Ht is the matrix that relates the two. The error term ut is included to accommodate the possibility that the observable variables contain some measurement error. This term is uncorrelated with the state variables, shows no autocorrelation, and is assumed to follow a normal distribution with zero mean:
ut ~ N(0, Rt)
Suppose, on the other hand, that the dynamics of the state variables are determined by the following state equation:
xt = At * xt-1 + et
et is an error vector included to model the uncertainty of the state variables. Its components are uncorrelated with each other, show no autocorrelation, and are uncorrelated with the error of the main equation. It is assumed to follow a Gaussian distribution with zero mean:
et ~ N(0,Q)
…where Q denotes the covariance matrix of the components of the error term (it can be constant or time-varying).
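As an illustration, the two equations can be simulated directly. The Python sketch below uses assumed scalar values for At, Ht, Q and Rt (held constant over time, and not taken from any real data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed scalar model: all coefficients and variances are illustrative.
A, H = 0.95, 1.0   # state transition (At) and observation (Ht) terms
Q, R = 0.1, 0.5    # variances of the state error et and measurement error ut
T = 200

x = np.zeros(T)    # unobservable state variables
x[0] = rng.normal(0.0, 1.0)
for t in range(1, T):
    x[t] = A * x[t - 1] + rng.normal(0.0, np.sqrt(Q))  # state equation
z = H * x + rng.normal(0.0, np.sqrt(R), T)             # main equation
```

The filter's task is then to recover the path of x using only the noisy observations z.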
1. Prediction equations: let xt,opt be the optimal estimator of the state vector and Pt its variance; the prediction equations are:
xt+1,opt^p = At * xt,opt
Pt+1^p = At * Pt * At' + Q
These terms are simply the predictions of the state variables, taking into account only the dynamics they have been assumed to follow.
2. Correction equations: improve the predictions using the information of the observable variables at t+1, which is known:
xt+1,opt = xt+1,opt^p + Kt+1 * (Zt+1 - Zt+1^p) = xt+1,opt^p + Kt+1 * (Zt+1 - Ht+1 * xt+1,opt^p)
Pt+1 = (Id - Kt+1 * Ht+1) * Pt+1^p
where Kt is the term known as the Kalman gain:
Kt = Pt^p * Ht' * (Ht * Pt^p * Ht' + Rt)^-1
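Putting the two stages together, one full iteration of the filter can be sketched as follows. This is a generic matrix implementation of the equations above; the function name and the use of a plain matrix inverse (rather than a numerically safer solve) are choices made here for clarity:

```python
import numpy as np

def kalman_step(x_opt, P, z_next, A, H, Q, R):
    """One prediction + correction iteration; all arguments are 2-D arrays."""
    # Prediction stage
    x_pred = A @ x_opt                    # xt+1,opt^p = At * xt,opt
    P_pred = A @ P @ A.T + Q              # Pt+1^p = At * Pt * At' + Q
    # Kalman gain
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Correction stage
    x_next = x_pred + K @ (z_next - H @ x_pred)
    P_next = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_next, P_next
```

For example, with scalar A = H = 1, Q = 0.1, R = 0.5, prior variance P = 1 and an observation of 1.0, a single step gives a gain of 0.6875 and shrinks the variance from the predicted 1.1 to 0.34375.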
The algorithm is simple to implement: at each stage t, the prediction is executed, followed by the subsequent correction. The process requires choosing good initial conditions for the state vector and its variance, as well as choosing the models that describe the dynamics of the state variables and the models for the variances of the errors of the main and state equations.
Example: estimating the time-varying market beta. Japanese market.
We tested the methodology by estimating the evolution of the market beta of a stock using the Kalman filter.
Our starting model is the CAPM, which defines the following main equation:
Rt = at + bt * Rm,t + ut -> Zt = Ht * xt + ut, where Ht = (1, Rm,t) and xt = (at, bt)'
The observable variables are the market return, Rm,t, and the stock return, Rt. In particular, we have assumed the intercept at (which, in the CAPM, represents the return of a risk-free asset) to be constant and equal to zero. Therefore, we have a single state variable, the market beta bt. On the other hand, beta has been assumed to follow an autoregressive process of order one with At = Id = 1, that is, a random walk. In addition, all the errors of the main and state equations have been assumed to have constant variance.
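We do not reproduce the Japanese market data here, but the setup can be sketched on simulated returns. In the snippet below, the true beta follows a random walk (At = 1), the intercept is fixed at zero, and all the numeric values (variances, sample size, market volatility) are assumptions chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

T = 500
Q, R = 1e-4, 1e-4   # assumed state and measurement error variances
beta_true = 1.0 + np.cumsum(rng.normal(0.0, np.sqrt(Q), T))  # random-walk beta
r_m = rng.normal(0.0, 0.05, T)                               # market returns
r = beta_true * r_m + rng.normal(0.0, np.sqrt(R), T)         # stock returns

# Scalar Kalman filter: the state is beta_t, and Ht = r_m[t] at each step.
beta_hat = np.zeros(T)
b, P = 1.0, 1.0     # initial conditions for the state and its variance
for t in range(T):
    P_pred = P + Q                                    # prediction (At = 1)
    K = P_pred * r_m[t] / (r_m[t] ** 2 * P_pred + R)  # Kalman gain
    b = b + K * (r[t] - r_m[t] * b)                   # correction
    P = (1.0 - K * r_m[t]) * P_pred
    beta_hat[t] = b
```

The filtered path beta_hat tracks the drifting beta_true step by step, whereas an ordinary least-squares regression over the whole sample would return a single constant beta.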
The results show a beta that varies over time, with the following profile:
We consider this a good alternative to the classic definition of beta because it does not assume a constant beta for the whole period, which is the main criticism levelled at the CAPM. In addition, the method filters the estimation noise step by step.
This has been a first approximation. Future work using this same method could allow the variances of the errors of the state and main equations to vary over time, for example by estimating a conditional-variance model such as a GARCH.