When dealing with mathematical modeling, choosing the right scale to frame the equations can make the difference between a successful, lasting model and a poor description of reality.
In today’s post, we explore two important scaling procedures that arise in finance: the annualisation of returns and of volatility. These are common terms in the industry and the building blocks of many other metrics, so it is paramount to have their meaning and underlying assumptions straight.
Context and motivation
The usual measure to assess a portfolio’s profitability is the arithmetic return, whose formula you are familiar with,
$$r_t = \frac{p_t}{p_{t-1}} - 1.$$
The price \(p_t\) could be the portfolio value (the total sum of its assets under management), a stock price, an interest rate index price or even a currency pair value. The frequency with which we sample prices can be daily, weekly, monthly, etc. Typically, the larger the time scale \(T\) over which you measure the return, the larger the return becomes in absolute terms.
To this arithmetic expression we link the so-called logarithmic return, defined as
$$ \ln \left(1 + r_t\right) = \ln \left(\frac{p_t}{p_{t-1}}\right), $$
where, in order to simplify the equations, one can define the logarithmic price,
$$ P_t := \ln p_t \Rightarrow \ln \left(1 + r_t\right) = P_t - P_{t-1}. $$
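As a minimal illustration (a sketch assuming numpy and a made-up price series), both definitions are straightforward to compute in practice:

```python
import numpy as np

# Hypothetical daily closing prices (made-up numbers, purely illustrative)
prices = np.array([100.0, 101.5, 100.8, 102.3, 103.0])

# Arithmetic returns: r_t = p_t / p_{t-1} - 1
arithmetic_returns = prices[1:] / prices[:-1] - 1

# Logarithmic returns: ln(1 + r_t) = ln(p_t) - ln(p_{t-1}) = P_t - P_{t-1}
log_returns = np.diff(np.log(prices))

print(arithmetic_returns)
print(log_returns)  # close to the arithmetic returns for small daily moves
```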
This nonlinear, yet simple, transformation has many powerful applications. It will ease the manipulation of compounded quantities and allow us to leverage statistical theorems.
Statistical realizations
Each day, for each listed asset, the price goes up and down, generating at the end of the day a realization of the return. Starting at any date, after a year has gone by we have 260 realizations of the random process that generates the daily returns, 52 of the weekly ones and 12 of the monthly ones. This sample from a larger population, which grows with every passing day, can be summarised through statistics such as the mean \(\mu\) and the variance \(\sigma^2\). It is precisely in these definitions where concepts start to get messy with annualisation.
Scaling profits
From a sequence of returns \(\{r_t\}\) one can compute the total return through the compounding formula,
$$ 1+R_T = \frac{p_T}{p_0} = \prod_{i=1}^T(1+r_i) $$
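Continuing the sketch (numpy assumed, returns made up to roughly match the prices above), the total return can be obtained by compounding, or equivalently by summing the log returns and exponentiating:

```python
import numpy as np

# Made-up daily returns, roughly the r_t implied by the price sketch above
arithmetic_returns = np.array([0.0150, -0.0069, 0.0149, 0.0068])

# Total return by compounding: 1 + R_T = prod(1 + r_i)
total_return = np.prod(1.0 + arithmetic_returns) - 1.0

# Equivalent route through log returns: sum them, then exponentiate
total_return_via_logs = np.expm1(np.sum(np.log1p(arithmetic_returns)))

print(total_return, total_return_via_logs)  # both equal p_T / p_0 - 1
```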
If we wanted to compare two assets with different track records, both in length and in the market conditions they lived through, it would be unfair to do it through the total return, right? After all, one of them had more time to grow (and to lose). So it is an accepted practice to annualise the return, that is, to find the yearly return \(r_\tau\) that would produce the same total return at the end of the time span. The formula to answer that question is a simple geometric mean,
$$ 1 + r_{\tau} = (1+R_T)^{1/n} = \left(\prod_{i=1}^T(1+r_i)\right)^{1/n}, $$
whose exponent is the inverse of \(n\), the ratio between the number of realizations \(T\) that generated that total return and the number of realizations \(\tau\) that make up a year (there is not much consensus on this number, since each firm seems to use its own estimate of the average number of trading days in a year; we take 52 weeks × 5 weekdays, so \(\tau = 260\)):
$$ n = \frac{T}{\tau}. $$
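A hedged sketch of the annualisation itself, assuming numpy, a made-up daily return series and the \(\tau = 260\) convention (the helper name annualise_return is our own, not a standard library function):

```python
import numpy as np

def annualise_return(daily_returns, periods_per_year=260):
    """Constant yearly return with the same total return: (1 + R_T)^(1/n) - 1."""
    total_growth = np.prod(1.0 + np.asarray(daily_returns))  # 1 + R_T
    n = len(daily_returns) / periods_per_year                # n = T / tau
    return total_growth ** (1.0 / n) - 1.0

# Example: two made-up years of daily returns
rng = np.random.default_rng(0)
daily = rng.normal(0.0004, 0.01, size=2 * 260)
print(annualise_return(daily))
```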
So far, so good. The above formula is well-known and there is not much mystery about it. But here is where the magic starts. Let’s take logarithms on both sides of the annualisation equation and use the properties of the logarithm to rearrange. We get
$$ \ln \left(1+r_\tau\right) = \frac{\tau}{T} \left(\sum_{i=1}^T\ln(1+r_i)\right). $$
Now, if we set \(\tau = 1\), we obtain a familiar expression: the sample estimator of the mean!
$$ \ln \left(1+r_1\right) = \frac{1}{T} \left(\sum_{i=1}^T\ln(1+r_i)\right) = \mu_T. $$
\(\ln(1+r_1)\), where \(r_1\) is the constant daily return that would end up with the same total return as the original sequence of returns under consideration, is equal to the mean of the logarithmic returns. And for any other time span of \(\tau\) days, different from a year, it is as simple as multiplying that mean by \(\tau\),
$$\ln \left(1+r_{\tau} \right) = \tau \cdot \mu_T.$$
So, in log terms, the profitability over a time span of \(\tau\) days scales linearly with \(\tau\), the proportionality factor being the mean of the distribution of log returns.
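As a quick numerical check (same made-up series and numpy assumption as before), the geometric-mean route and the scaled-mean-of-log-returns route give the same answer up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(0)
daily = rng.normal(0.0004, 0.01, size=2 * 260)  # two made-up years of daily returns
tau = 260

# Route 1: geometric-mean annualisation of the gross returns
r_tau_geometric = np.prod(1.0 + daily) ** (tau / len(daily)) - 1.0

# Route 2: scale the mean log return by tau, then transform back
mu = np.mean(np.log1p(daily))
r_tau_from_mean = np.expm1(tau * mu)

print(r_tau_geometric, r_tau_from_mean)  # identical up to floating-point error
```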
Scaling risk
Let’s have a look at risk, understood here as the variability of future prices. You have probably seen a formula involving a square root, but where does it come from and what is its interpretation?
Random walk
Scaling the returns was easy; after all, we were only playing with their definition and the mathematical properties of logarithms. To play the same little game with risk, we need to introduce some strong assumptions in order to recover the formula known to everyone. We state that log prices follow what is known as a random walk,
$$ P_{t} = P_{t-1} + a_t, $$
where \(a_t\) is known as the innovation and is the source of the stochastic behaviour. The innovation term is modeled as a normal white noise process, that is,
$$ a_t \sim \mathcal{N}\left(0, \sigma^2\right). $$
If from an initial log price \(P_0\) you start evolving the process along \(\tau\) steps, you would get
$$ P_\tau-P_0 = \sum_{i=1}^\tau a_i. $$
The gap between the final and initial log price is the sum of the realizations of each innovation along the path. The variance of this walk is the sum of the variances of each innovation, since they are independent and identically distributed random variables,
$$ Var\left(P_\tau-P_0\right) = Var\left(\sum_{i=1}^\tau a_i\right) = \sum_{i=1}^\tau Var\left(a_i\right). $$
Since we have assumed constant variance through the innovation process, we find
$$ Var\left(P_\tau-P_0\right) = \tau\sigma^2, $$
which gives us the famous formula for volatility annualisation,
$$ \sigma_\tau = \sqrt{\tau}\sigma. $$
So, the standard deviation of the profitability over \(\tau\) days scales with the square root of \(\tau\) relative to the standard deviation of the underlying distribution of log returns (the variance itself scales linearly with \(\tau\)).
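A quick Monte Carlo sanity check of this result (a sketch assuming numpy and a made-up daily \(\sigma\)): simulate many random-walk paths and compare the empirical \(\tau\)-step standard deviation with \(\sqrt{\tau}\,\sigma\):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 0.01        # made-up daily volatility of the innovations
tau = 260           # horizon in trading days
n_paths = 100_000

# Each path: P_tau - P_0 is the sum of tau i.i.d. normal innovations
innovations = rng.normal(0.0, sigma, size=(n_paths, tau))
log_price_change = innovations.sum(axis=1)

print(log_price_change.std())   # empirical sigma_tau
print(np.sqrt(tau) * sigma)     # theoretical sqrt(tau) * sigma
```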
When we set \(\tau = 260\), we recover the standard deviation of the yearly return. The interesting thing is that you could have used more than one year of realizations to compute that annual volatility. Why? Because the more sample realizations you use to compute the mean and the standard deviation, the closer they should get to the true population statistics. You could also compute monthly volatility from a three-year-long track record; simply set \(\tau = 21\).
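In practice, this means taking the sample standard deviation of the daily log returns and multiplying by the appropriate square root; a minimal sketch, assuming numpy, three made-up years of data and the \(\tau = 260\) / \(\tau = 21\) conventions:

```python
import numpy as np

rng = np.random.default_rng(7)
daily_log_returns = rng.normal(0.0, 0.01, size=3 * 260)  # three made-up years of data

sigma_daily = daily_log_returns.std(ddof=1)  # sample standard deviation

sigma_annual = np.sqrt(260) * sigma_daily    # tau = 260 trading days
sigma_monthly = np.sqrt(21) * sigma_daily    # tau = 21 trading days

print(sigma_daily, sigma_monthly, sigma_annual)
```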
Fallacies
The previous results are pretty nice and elegant, but we must put them into the context of their assumptions. First, we have assumed that both the mean and the variance exist, which is not a fact you can take for granted with financial figures (yes, the computer will always give you a number, but that doesn’t mean it is meaningful).
Secondly, provided they exist, they should be constant in time. In practice, this condition is only approximately met when we compute them over a sufficient number of realizations, spanning more than 10 years.
And finally, returns should follow a normal distribution! What’s more, we have built all of the above assuming a zero mean, when it is clearly not zero (you could argue it is small, but it is of the same order of magnitude as the log returns themselves).
To get a better estimation of monthly or annual variation, one could go ahead and build more complex models which account for the time variability of the mean and the variance. That path is exciting and full of mathematical challenges which require skill to tackle. But as former trader Nassim Taleb warned us, you are better off creating a strategy that benefits from volatility rather than trying to predict it accurately.
Conclusions
Today we have learned that annualising is just a particular case of scaling the location and scale parameters of a normal distribution of log returns. The first-order moment, the mean or expected value, scales linearly with time. The scale parameter, the standard deviation (the square root of the second-order central moment), scales with the square root of time.
So next time you use them to contrast the performance of two portfolios, funds or indices, you might want to check whether they have similar distributions of log returns through time before annualising their results; otherwise, the comparison might be unfair.
Thanks for reading!