Category: Investing Insight

Investing insight to make you a better investor.

HFT: The debate goes mainstream


Michael Lewis’ new book “Flash Boys” is all about High Frequency Trading (HFT). High Frequency Traders, he says, use advanced computers to make tens of billions of dollars by jumping in front of investors:

The United States stock market, the most iconic market in global capitalism, is rigged.

I am not sure “rigged” is the right word. Traders have been trying to get ahead of the “flow” for ages. It’s just that technology finally caught up and now allows firms to do what they used to do more efficiently. Barry Ritholtz of TBP points out that trading has always been a zero-sum game.

One trader’s gain is another trader’s loss. Only in the case of HFT, the losers are the investors — by way of their pension funds, retirement accounts and institutional funds. The HFTs’ take — the “skim” — comes out of these large institutions’ trade executions.

The defense came hard and fast. William O’Brien, president of BATS Global Markets: It’s like GM writing a book saying it’s unfair for the automotive industry that Elon Musk created a new car.

We welcome anyone who is building a better mousetrap for our nation’s investors, but I don’t think blind accusation is the right way. I know he has a business model that says everybody but him rips you off.

High-frequency traders account for 40% to 70% of all trading on US stock markets. The numbers for India are said to be similar. And since HFTs only intermediate trades, it is hard for them to lose money. Tradebot, one of the biggest high-frequency traders around, had not had a losing day in four years.

How can individual/retail investors protect themselves from getting skimmed by HFTs? First, understand that you can’t avoid them. So stop trying to trade intra-day and lengthen your investment horizon to at least a couple of months. Second, know that the “skim” is a few pennies/paisas per trade and affects you in a meaningful way only if you make a lot of trades.
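A quick back-of-the-envelope sketch of that second point. The per-share skim and trade sizes below are assumed illustrative numbers, not figures from the article — the point is only that the cost scales with how often you trade:

```python
# Illustrative only: skim size and trade size are assumptions, not measured data.
skim_per_share = 0.01      # assume HFTs capture ~1 cent per share traded
shares_per_trade = 100     # assume a typical retail order of 100 shares

def yearly_skim(trades_per_year):
    """Total annual cost paid to the 'skim' at the assumed rates."""
    return trades_per_year * shares_per_trade * skim_per_share

# A hyperactive trader making ~5,000 trades a year vs. a long-horizon
# investor rebalancing 12 times a year.
print(f"active trader: ${yearly_skim(5000):,.2f}/year")
print(f"investor:      ${yearly_skim(12):,.2f}/year")
```

At these assumed rates the active trader hands over a few thousand dollars a year while the infrequent trader pays almost nothing, which is the article’s point: lengthening your horizon makes the skim negligible.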

Josh Brown of TRB:

The bottom line is this – there have always been insiders, unscrupulous dealers and some participants with unfair advantages over others. HFT is just the latest in a long line of shenanigans and the moment you outlaw it or modify it or babysit it out of existence, there’ll be a new broad-daylight robbery format waiting right behind it.
The stock market hasn’t become rigged, IT STARTED OUT RIGGED.


Big Data’s Big Blind-spots


Yesterday, we discussed how theoretical models built on faulty assumptions can be used to draw biased conclusions. If the models then get picked up without an understanding of those assumptions, it leads to expensive mistakes. But are empirical models free from such bias, especially if the data set is big enough? Absolutely not.

In an article titled “Big data: are we making a big mistake?” in the FT, author Tim Harford points out that by merely finding statistical patterns in the data, data scientists are focusing too much on correlation and giving short shrift to causation.

But a theory-free analysis of mere correlations is inevitably fragile. If you have no idea what is behind a correlation, you have no idea what might cause that correlation to break down.

All the problems that you had in “small” data exist in “big” data; they are just tougher to find. When it comes to data, size isn’t everything: you still need to deal with sample error and sample bias.

For example, it is in principle possible to record and analyse every message on Twitter and use it to draw conclusions about the public mood. But while we can look at all the tweets, Twitter users are not representative of the population as a whole. According to the Pew Research Internet Project, in 2013, US-based Twitter users were disproportionately young, urban or suburban, and black.

Worse still, as the data set grows, it becomes harder to figure out whether a pattern is statistically significant, i.e., whether such a pattern could have emerged purely by chance.
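A minimal simulation makes this concrete. Below, both the “market” series and every candidate “signal” are pure noise, so any correlation we find is spurious by construction — yet with enough signals tested, dozens clear the usual significance bar. (All numbers here are assumptions chosen for illustration, and the Pearson correlation is computed by hand to keep the sketch self-contained.)

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# One noise "market" series and many unrelated noise "signals":
# every true correlation is zero by construction.
n_days, n_signals = 50, 1000
market = [random.gauss(0, 1) for _ in range(n_days)]
signals = [[random.gauss(0, 1) for _ in range(n_days)]
           for _ in range(n_signals)]

# |r| > 0.28 is roughly the conventional p < 0.05 cutoff for n = 50.
spurious = [s for s in signals if abs(pearson(market, s)) > 0.28]
print(f"{len(spurious)} of {n_signals} noise signals look 'significant'")
```

Roughly 5% of the pure-noise signals will look “significant” at that threshold, which is exactly Harford’s warning: the more patterns you mine from a big data set, the more of them are chance masquerading as insight.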

The whole article is worth a read; plan to spend some time on it: Big data: are we making a big mistake?

Models don’t lie, incorrect assumptions do


An engineer, a physicist and an economist are stranded on a deserted island with nothing to eat. A crate containing many cans of soup washes ashore and the three ponder how to open the cans.
Engineer: Let’s climb that tree and drop the cans on the rocks.
Physicist: Let’s heat each can over our campfire until the increase in internal pressure causes it to open.
Economist: Let’s assume we have a can opener.

In a recent paper titled Chameleons: The Misuse of Theoretical Models in Finance and Economics, Paul Pfleiderer of Stanford University lays out how some models, built on assumptions with dubious connections to the real world, end up being used to inform policy and other decision making. He terms these models “chameleons.”

Notice how similar these two abstracts are:

To establish that high bank leverage is the natural (distortion-free) result of intermediation focused on liquid-claim production, the model rules out agency problems, deposit insurance, taxes, and all other distortionary factors. By positing these idealized conditions, the model obviously ignores some important determinants of bank capital structure in the real world. However, in contrast to the MM framework – and generalizations that include only leverage-related distortions – it allows a meaningful role for banks as producers of liquidity and shows clearly that, if one extends the MM model to take that role into account, it is optimal for banks to have high leverage.

– “Why High Leverage is Optimal for Banks” by Harry DeAngelo and René Stulz.

To establish that high intake of alcohol is the natural (distortion free) result of human liquid-drink consumption, the model rules out liver disease, DUIs, health benefits, spousal abuse, job loss and all other distortionary factors. By positing these idealized conditions, the model obviously ignores some important determinants of human alcohol consumption in the real world. However, in contrast to the alcohol neutral framework – and generalizations that include only overconsumption-related distortions – it allows a meaningful role for humans as producers of that pleasant “buzz” one gets by consuming alcohol, and shows clearly that if one extends the alcohol neutral model to take that role into account, it is optimal for humans to be drinking all of their waking hours.

– “Why High Alcohol Consumption is Optimal for Humans” by Bacardi and Mondavi 😉

These abstracts are a good illustration that one can develop a theoretical model to produce nearly any result within a wide range.

Read the whole thing at your leisure: Chameleons: The Misuse of Theoretical Models in Finance and Economics