Markov chains are well known in both mathematics and finance. It is common to describe the market as a set of states, for instance bull and bear, and from these two many other states can be derived. Markov chains are a useful tool when you want to establish the transition relationships between those states.

There are some articles in this blog that introduce the idea of Markov chains and explain a method to use them as an investment indicator. I encourage you to dig into those posts. Meanwhile, here you can find some basic notions on how to use Markov chains to detect when a trend is over and, as a result, adapt your investment.

## Markov chain

A Markov chain is a discrete-time stochastic process that moves from one state to another with a certain probability. These chains have some special properties and characteristics.
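As a minimal sketch of this idea, here is a simulation of a two-state chain; the states (bear and bull) and the transition probabilities are made up for illustration:

```python
import random

# Hypothetical two-state chain: 0 = bear, 1 = bull.
# Row i holds the probabilities of moving from state i to each state.
P = [[0.7, 0.3],   # from bear: stay bear 70%, switch to bull 30%
     [0.2, 0.8]]   # from bull: switch to bear 20%, stay bull 80%

def simulate(P, start, n_steps, seed=42):
    """Sample a path of the chain for n_steps transitions."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(n_steps):
        state = rng.choices(range(len(P)), weights=P[state])[0]
        path.append(state)
    return path

path = simulate(P, start=1, n_steps=10)
```

Each step only looks at the current state and the corresponding row of `P`, which is exactly the memoryless behaviour described below.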

### Transition matrix

The transition matrix is a stationary matrix whose elements define the conditional probability that the chain moves from one state to another, from i to j, given that it is currently in state i. That is:

\[P = \left(\begin{array}{ccc}p_{11} & \dots & p_{1N} \\ \vdots & \ddots & \vdots \\ p_{N1} & \dots & p_{NN}\end{array}\right)\]

\[p_{ij} = P(X_{t+1}=x_{j} \,|\, X_{t}=x_{i})\]

### Probability of state change

The main characteristic of Markov chains is that they are memoryless: the chain only looks at the current state to determine which state will be next. Therefore, the conditional probability of being in state j after n days, given that the chain is in state i today, does not depend on the path taken between states i and j:

\[P^{n} = \left(\begin{array}{ccc}p^{n}_{11} & \dots & p^{n}_{1N} \\ \vdots & \ddots & \vdots \\ p^{n}_{N1} & \dots & p^{n}_{NN}\end{array}\right)\]

\[p^{n}_{ij} = P(X_{t+n}=x_{j} \,|\, X_{t}=x_{i}) = \left(P^{n}\right)_{ij}\]

That is, the n-step probability \(p^{n}_{ij}\) is the (i, j) entry of the n-th power of the transition matrix.

## In practice

Let’s try to use Markov chains with Apple stock. First, we use a technique that splits its historical prices into different market regimes; note that you could use whichever technique you trust. In this case, the method identifies 7 regimes, as in the next figure, where red colours indicate negative regimes and green colours indicate positive regimes:

Let each regime be a state of a Markov chain. In this way, the chain does not care about the stock's earlier regimes, only the immediately preceding one.
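Assuming we already have the daily regime label for each date (the sequence below is invented for illustration; in the article it comes from the regime-detection technique applied to Apple prices), the transition matrix can be estimated by counting transitions between consecutive days:

```python
import numpy as np

# Hypothetical daily regime labels (1..7).
regimes = [7, 7, 7, 6, 6, 7, 7, 3, 3, 2, 2, 2, 7, 7, 7, 7]

n_states = 7
counts = np.zeros((n_states, n_states))
for i, j in zip(regimes[:-1], regimes[1:]):
    counts[i - 1, j - 1] += 1

# Normalise each row so it sums to 1 (rows with no visits stay zero).
row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
```

Row i of `P` then holds the estimated probabilities of moving from regime i+1 to every other regime on the next day.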

Continuing with the example, the transition matrix shows that Apple tends to stay in the same regime with high probability, especially in the case of the best regime, regime 7:

If we compute the probability of remaining in the same state for n days, we see, as the transition matrix already suggested, that the most persistent state is regime 7, which corresponds to a strong run-up. We also find that the rest of the states only persist for a few days, so we can assume that after about 6 days a new regime will arrive.
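This persistence calculation only needs the diagonal of the transition matrix: the probability of staying in regime i for n more days is \(p_{ii}^{\,n}\), and the expected length of a visit to that regime is \(1/(1-p_{ii})\). A sketch with made-up self-transition probabilities:

```python
# Hypothetical probability of staying in each regime from one day
# to the next (the diagonal of the transition matrix).
p_stay = {1: 0.70, 2: 0.75, 3: 0.80, 7: 0.97}

def prob_stay(p_ii, n):
    """Probability of remaining in the same regime for n more days."""
    return p_ii ** n

def expected_duration(p_ii):
    """Expected number of consecutive days spent in the regime."""
    return 1.0 / (1.0 - p_ii)

# Under these numbers regime 7 persists far longer than the rest,
# while regimes 1-3 are expected to last only a handful of days.
```

Numbers like these are what justify the rule of thumb above: if a short-lived regime has already lasted around 6 days, a regime change becomes increasingly likely.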

## Investment strategy based on Markov chain

We will try a strategy based on the data analysed from this Markov chain. The idea is to reduce exposure to Apple stock when it is in a bad regime, and to use the Markov chain to increase exposure again when the probability of remaining in that regime is falling.

The rules are as follows:

- Basic strategy:
- If the regime is 1 (strong bear), the investment will be 30%
- If the regime is 2 (bear), the investment will be 70%
- If the regime is 3 (neutral negative), the investment will be 80%
- If the regime is 4, 5, 6, or 7 (between neutral positive and strong bull), the investment will be 100%

- Based on the Markov chain analysis:
- If regime 7 has persisted for more than 30 days, the investment will be reduced by 10 points, down to 90%
- If regime 1 has persisted for more than 9 days, the investment will be increased by 10 points, up to 100%
- If regime 2 or 3 has persisted for more than 6 days, the investment will be increased by 10 points, up to 100%
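The rules above can be sketched as a function of the current regime and how many days it has lasted. The function name and the reading of "increased/reduced by 10 points" are my own illustration, not the author's actual code:

```python
def exposure(regime, days_in_regime):
    """Return the fraction of capital invested, following the rules above."""
    # Basic strategy: exposure depends only on the current regime.
    base = {1: 0.30, 2: 0.70, 3: 0.80}.get(regime, 1.00)

    # Markov-chain adjustments based on how long the regime has lasted.
    if regime == 7 and days_in_regime > 30:
        base = max(base - 0.10, 0.90)   # trim exposure near the end of a run-up
    elif regime == 1 and days_in_regime > 9:
        base = min(base + 0.10, 1.00)   # strong bear is likely ending
    elif regime in (2, 3) and days_in_regime > 6:
        base = min(base + 0.10, 1.00)   # bear/neutral negative is likely ending
    return base
```

For example, a strong bull regime that has lasted 40 days drops exposure from 100% to 90%, while a strong bear regime that has lasted 10 days lifts it from 30% to 40%.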

The next figure shows the result of this strategy using the Markov chain built from Apple stock.

The basic part of the strategy plays the main role in the outperformance; however, the Markov chain helps at certain moments to decrease exposure when a positive trend is about to end, as in July 2020.

## To conclude

Markov chains can be handy for estimating the probability of a state change; however, it is equally important to have a good system for determining the set of states from which to build the Markov chain.