
Understanding Nifty Volatility

Definition

Volatility (σ) is a measure of the variation in the price of a financial instrument over time. Historical volatility is derived from a time series of past market prices. There are different ways of calculating it; at StockViz, we use Yang-Zhang volatility.
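
For the curious, here is a minimal sketch of the Yang-Zhang estimator computed over a rolling window. It assumes daily OHLC bars in a pandas DataFrame with hypothetical column names (Open, High, Low, Close); the actual StockViz implementation may differ in its details.

```python
import numpy as np
import pandas as pd

def yang_zhang_volatility(ohlc, window=20, trading_days=252):
    """Rolling, annualized Yang-Zhang volatility from daily OHLC bars."""
    o, h, l, c = ohlc["Open"], ohlc["High"], ohlc["Low"], ohlc["Close"]

    overnight = np.log(o / c.shift(1))   # close-to-open (gap) return
    open_close = np.log(c / o)           # open-to-close return
    hi_open = np.log(h / o)
    lo_open = np.log(l / o)

    # Rogers-Satchell term: drift-independent measure of the intraday range
    rs = hi_open * (hi_open - open_close) + lo_open * (lo_open - open_close)

    k = 0.34 / (1.34 + (window + 1) / (window - 1))

    var_overnight = overnight.rolling(window).var()    # sample variance (n-1)
    var_open_close = open_close.rolling(window).var()
    var_rs = rs.rolling(window).mean()

    return np.sqrt((var_overnight + k * var_open_close + (1 - k) * var_rs) * trading_days)
```

The 20-day window and 252-day annualization shown above are just illustrative defaults.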

σ is one of the biggest contributors to option premiums. Understanding its true nature will help you trade it better.

Volatility spikes

Observe the volatility spikes since 2005. Even though the average is around 0.3, it's not uncommon to see huge swings.

[Chart: Nifty volatility, 2005 onwards]

Fat tails abound

[Charts: histograms of Nifty volatility computed over 10, 20, 30 and 50-day windows]

Trading strategy

Always try to be on the long side of volatility. It might be tempting, while trading options, to try and clip the carry from θ-decay. But you should always be aware of the fat tails of volatility, which can crush many months of carry P&L overnight.

The most important assumption

Prices and Returns

Prices don’t follow a stable statistical distribution (they are not ‘stationary’): there is no obvious mean price, and it doesn’t make sense to talk about the standard deviation of the price. Working with such non-stationary time series is a hassle.

[Chart: NIFTY, 2005-2014]

Returns, on the other hand, are distributed somewhat like a normal (Gaussian) distribution.

[Chart: histogram of NIFTY daily returns]

And there doesn’t seem to be any auto-correlation between consecutive returns.

[Chart: autocorrelation of NIFTY daily returns]

If returns are normally distributed, then how are prices distributed? It turns out that the logarithm of the price is normally distributed, i.e., prices are log-normally distributed. Why? Because

returns(t) = log(price(t)/price(t-1))

Since the log price is just the starting log price plus the sum of all the intervening returns, statisticians can magically transform a random time series (prices) into something that is normally distributed (returns) and work with that instead. Almost all asset pricing models that you will come across in the literature have this basic assumption at heart.
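
As a quick illustration of the transformation (a sketch, assuming a pandas Series of daily closing prices called `close`):

```python
import numpy as np
import pandas as pd

# close: a pandas (pd) Series of daily closing prices, assumed to exist
# Daily log returns: returns(t) = log(price(t) / price(t-1))
returns = np.log(close / close.shift(1)).dropna()

print(f"mean:            {returns.mean():.5f}")
print(f"std deviation:   {returns.std():.5f}")
print(f"skew:            {returns.skew():.2f}")      # ~0 for a normal distribution
print(f"excess kurtosis: {returns.kurtosis():.2f}")  # ~0 for a normal; positive => fat tails
print(f"lag-1 autocorr:  {returns.autocorr(lag=1):.3f}")
```

A near-zero lag-1 autocorrelation is consistent with the autocorrelation chart above, while a clearly positive excess kurtosis is the first hint of the fat tails discussed next.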

Fat tails

The assumption that returns are normally distributed allows mathematically precise models to be constructed. However, those models are not very accurate.

In the normal distribution, events that deviate from the mean by five or more standard deviations (“5-sigma events”) have an extremely low probability: rare events can still happen, but they tend to be mild compared to those produced by fat-tailed distributions. Fat-tailed distributions, on the other hand, can have “undefined sigma” (more technically, the variance is not bounded).

For example, the Black–Scholes model of option pricing is based on a normal distribution. If the distribution is actually a fat-tailed one, then the model will under-price options that are far out of the money, since a 5- or 7-sigma event is much more likely than the normal distribution would predict.
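
To get a feel for the size of that gap, here is a small sketch comparing two-sided tail probabilities under a normal distribution and under a fat-tailed Student-t with 3 degrees of freedom, rescaled to unit variance (the choice of 3 degrees of freedom is purely illustrative):

```python
from scipy import stats

DF = 3                                   # degrees of freedom for the fat-tailed alternative
unit_scale = (DF / (DF - 2)) ** -0.5     # rescale the t so its variance is 1

for sigmas in (3, 5, 7):
    p_normal = 2 * stats.norm.sf(sigmas)                # two-sided tail probability, normal
    p_fat = 2 * stats.t.sf(sigmas / unit_scale, DF)     # same threshold, fat-tailed
    print(f"{sigmas}-sigma move: normal {p_normal:.2e}, "
          f"fat-tailed {p_fat:.2e}, ratio ~{p_fat / p_normal:.0f}x")
```

The exact ratios depend entirely on the degrees of freedom chosen; the point is only that the normal distribution assigns a vanishingly small probability to moves that fat-tailed processes produce with some regularity.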

Precision vs Accuracy

When you build models, the precision that they provide may lull you into a false sense of security. You may be able to compute risk right down to the 8th decimal place. However, it is important to remember that the assumptions on which these models are built don’t lend themselves to accuracy. At best, these models are guides to good behavior, and nothing more.

[Illustration: accuracy vs. precision]

Sources:
Fat-tailed distribution

Big Data’s Big Blind-spots


Yesterday, we discussed how theoretical models can be used to draw biased conclusions by using faulty assumptions. If the models then get picked up without an understanding of those assumptions, expensive mistakes follow. But are empirical models free from such bias, especially if the data set is big enough? Absolutely not.

In an article titled “Big data: are we making a big mistake?” in the FT, author Tim Harford points out that by merely finding statistical patterns in the data, data scientists are focusing too much on correlation and giving short shrift to causation.

But a theory-free analysis of mere correlations is inevitably fragile. If you have no idea what is behind a correlation, you have no idea what might cause that correlation to break down.

All the problems that you had in “small” data exist in “big” data; they are just tougher to find. When it comes to data, size isn’t everything: you still need to deal with sample error and sample bias.

For example, it is in principle possible to record and analyse every message on Twitter and use it to draw conclusions about the public mood. But while we can look at all the tweets, Twitter users are not representative of the population as a whole. According to the Pew Research Internet Project, in 2013, US-based Twitter users were disproportionately young, urban or suburban, and black.

Worse still, as the data set grows, it becomes harder to figure out whether a pattern is statistically significant, i.e., whether it could have emerged purely by chance.
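
A toy simulation (purely illustrative, using nothing but random numbers) of how sheer size multiplies the number of comparisons, and with it the number of “patterns” that appear by chance alone:

```python
import numpy as np

rng = np.random.default_rng(42)

n_series, n_obs = 200, 500
data = rng.standard_normal((n_series, n_obs))   # 200 completely unrelated random series

# Correlate every pair and count how many clear a naive |r| > 0.1 "pattern" bar
corr = np.corrcoef(data)
pairwise = corr[np.triu_indices(n_series, k=1)]
print(f"{n_series * (n_series - 1) // 2} pairs tested, "
      f"{(np.abs(pairwise) > 0.1).sum()} look 'interesting' by chance alone")
```

With pure noise as input, hundreds of pairs will typically clear that bar, which is exactly why theory-free pattern hunting is fragile.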

The whole article is worth a read; plan to spend some time on it: Big data: are we making a big mistake?

Models don’t lie, incorrect assumptions do


An engineer, a physicist and an economist are stranded on a deserted island with nothing to eat. A crate containing many cans of soup washes ashore and the three ponder how to open the cans.
Engineer: Let’s climb that tree and drop the cans on the rocks.
Physicist: Let’s heat each can over our campfire until the increase in internal pressure causes it to open.
Economist: Let’s assume we have a can opener.

In a recent paper titled Chameleons: The Misuse of Theoretical Models in Finance and Economics, Paul Pfleiderer of Stanford University lays out how some models, built on assumptions with dubious connections to the real world, end up being used to inform policy and other decision-making. He terms these models “Chameleons.”

Notice how similar these two abstracts are:

To establish that high bank leverage is the natural (distortion-free) result of intermediation focused on liquid-claim production, the model rules out agency problems, deposit insurance, taxes, and all other distortionary factors. By positing these idealized conditions, the model obviously ignores some important determinants of bank capital structure in the real world. However, in contrast to the MM framework – and generalizations that include only leverage-related distortions – it allows a meaningful role for banks as producers of liquidity and shows clearly that, if one extends the MM model to take that role into account, it is optimal for banks to have high leverage.

– “Why High Leverage is Optimal for Banks” by Harry DeAngelo and René Stulz.

To establish that high intake of alcohol is the natural (distortion free) result of human liquid-drink consumption, the model rules out liver disease, DUIs, health benefits, spousal abuse, job loss and all other distortionary factors. By positing these idealized conditions, the model obviously ignores some important determinants of human alcohol consumption in the real world. However, in contrast to the alcohol neutral framework – and generalizations that include only overconsumption-related distortions – it allows a meaningful role for humans as producers of that pleasant “buzz” one gets by consuming alcohol, and shows clearly that if one extends the alcohol neutral model to take that role into account, it is optimal for humans to be drinking all of their waking hours.

– “Why High Alcohol Consumption is Optimal for Humans” by Bacardi and Mondavi 😉

These are a good illustration that one can generally develop a theoretical model to produce any result within a wide range.

Read the whole thing at your leisure: Chameleons: The Misuse of Theoretical Models in Finance and Economics

Machine Learning Stocks, Part II

We had previously rolled out the “similar stocks” feature, which grouped stocks based on risk and technical parameters. The idea behind grouping/clustering stocks is that it lets you find better alternatives to your favorite stocks. We have now combined machine learning with our Fundamental Quantitative Scores to cluster stocks based on fundamental metrics.

For example, here’s what Glenmark’s cluster looks like:

[Chart: Glenmark and its fundamental cluster]
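
For a flavor of the kind of clustering involved, here is a minimal sketch using k-means on a hypothetical DataFrame `fundamentals` of per-stock metrics (the column names and the “GLENMARK” ticker are assumptions for illustration; the actual StockViz pipeline and features may differ):

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical fundamental metrics, one row per stock (index = ticker)
features = ["roe", "debt_to_equity", "earnings_yield", "sales_growth"]
clean = fundamentals[features].dropna()

X = StandardScaler().fit_transform(clean)   # put metrics on a comparable scale
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)

clusters = pd.Series(labels, index=clean.index)
similar_to_glenmark = clusters[clusters == clusters["GLENMARK"]].index.tolist()
```

Stocks that land in the same cluster are “similar” on these fundamentals; the number of clusters shown here is arbitrary.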


Related:

Machine Learning Stocks
Quantitative Value Series
StockViz Trading/demat Account
Fundamental Quantitative Scores for Stocks