“The incredibly inaccurate efficient market theory [caused] a lethally dangerous combination of asset bubbles, lax controls, pernicious incentives and wickedly complicated instruments that led to our current plight.” However, long before then, academic researchers had been training their guns on the EMH. Notably, however, none of them anticipated the catastrophic outcomes narrated by Lowenstein and Grantham. Instead, they were much more concerned about the nitty-gritty of improving the explanatory powers of the theory at the margin.
Hardly anyone questioned its foundations. This was an era in which the ‘rational expectations’ school of thought, pioneered by economists at the University of Chicago, was in rapid ascendancy. It believed in the primacy of markets as an article of faith: markets knew how to value resources and allocate them most efficiently through an impartial and robust price mechanism. The invisible hand of the market, so the argument ran, knew better than the visible boot of the state.
6| The 300 Club | The Death of Common Sense | April 2012

So, the new research focused on the narrow issue of whether past price changes could predict future price changes. Researchers did find weak evidence that the past foretold the future.
But these studies did not address a number of critical questions:
• How is the information generated before it impacts market prices?
• What mechanism causes the information to be reflected in prices?
• What is the incentive for anyone to generate the information?
• Why would anybody do any research on a company, if trading on information is unprofitable?
• If nobody collects any information, how can prices still reflect all the information?
• Most importantly, are markets ‘efficient’ in the sense that they can price assets correctly?
These questions led to a number of refinements of the original idea propounded by Samuelson.
Grossman and Stiglitz [1980] focused on information acquisition. They showed that those who invest in research are rewarded with speculative profits, so that they at least recoup the cost of that activity. By being the first movers of the ‘invisible hand’, they drive prices towards their fair economic value. By extension, the authors envisaged a role for active management backed by superior resources and skills.
In a parallel strand of research, there also emerged the arbitrage pricing theory [Ross, 1976], which showed that the activity of arbitrageurs would naturally drive expected returns to a level that correctly reflects the risk-return trade-off of any asset.
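Ross’s pricing relation can be stated compactly. The notation below is the standard textbook form of the APT, not drawn from this report:

```latex
% Arbitrage Pricing Theory: expected return as a linear function of
% factor exposures (standard textbook notation)
E[R_i] = \lambda_0 + \beta_{i1}\lambda_1 + \beta_{i2}\lambda_2 + \dots + \beta_{ik}\lambda_k
```

Here λ₀ is the riskless rate, β_ij is asset i’s sensitivity to the j-th systematic factor, and λ_j is the risk premium per unit of that exposure. Any asset whose expected return strays from this line offers a near-riskless profit, and arbitrageurs trading it away restore the relation.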
The idea was further refined in a paper built on the old adage from John Maynard Keynes that ‘markets can stay irrational longer than you can stay solvent’ [Shleifer and Vishny, 1997]. They showed that high financing risk forces arbitrageurs to be cautious about exploiting mispricing. The outcome can be calamitous if this risk is ignored, as happened in the case of Long-Term Capital Management.
Its highly leveraged bet on the convergence of US, European and Japanese bond yields following the Asian currency crisis was sound, and the convergence did eventually happen. In the meantime, however, the leverage bankrupted LTCM and created a systemic crisis in 1998.
Over time, empirical studies came to acknowledge that active management can, and does, regularly exploit the deviations from equilibrium prices via specialised knowledge, lower trading costs, low management fees and a financing structure that rides out price anomalies persisting over a long period.
Indeed, if everybody shared the same opinion, nobody would trade [Black, 1986].
Differences in opinion create inefficiency and this in turn is the basis for trading. Earnings from active management are a reward for informed investors for identifying and exploiting mispricing created by other investors. But that is not all.
Researchers argued that the segmentation of markets and investors can affect the market values of securities, over and above their business fundamentals [Barberis, Shleifer and Wurgler, 2003]. Investors are shown to pigeon-hole securities – by, for example, geography, index or size – owing to information limitations, trading restrictions and trading costs.
Those who attempt to test the EMH empirically will discover that it is nigh on impossible to test its two key propositions: (a) markets are efficient since they incorporate all available information; and (b) markets provide a fair valuation of securities. Neither proposition can be independently tested via conventional econometric methods. Hence the EMH can never be rejected [Campbell, Lo, and MacKinlay, 1997].
While a number of factors – ‘anomalies’ – have been identified as delivering higher returns over time that cannot be explained by the EMH, there is no consensus on whether these factors reflect the existence of an inefficient market or the dynamic nature of risks that no model can explain.
The sceptics, as a result, go for the jugular: anomalies mean that the whole paradigm of rational expectations that reigned supreme for nearly fifty years is no more than an ideological aspiration about how markets ought to work under the tenets of neo-classical economics. The crash-landing of its two cherished idols – CAPM and EMH – in 2008 shows all too well that they were as remote from the complexities of markets as the man on the moon.
Writing in The New York Times Magazine in September 2009, another Nobel Laureate, Paul Krugman, argued that Chicago School free market theorists “mistook beauty … for truth”.
The synthetic outrage provoked by the article generated more heat than light [Frydman and Goldberg, 2011].
The advocates of the EMH countered that it is still alive and well except for periodic distortions. The stock market is a voting mechanism in the short term, but a weighing mechanism in the long term. True value will win out in the end.
They also contend that the EMH never stated that the markets are ‘efficient’ in the sense they can price assets correctly: all it said was that prices reflect all known information. It does not say that this information is valued correctly in any sense: prices merely reflect the current consensus of the market without preventing market changes on a whim. In short, markets can be inefficient and inaccurate.
This volte-face is all the more remarkable for its tacit subtlety. For belief continues to reign supreme over reason: reality is not allowed to obscure the theory! No wonder the average investor is bewildered. No wonder Lowenstein and Grantham pull no punches.
For now, it is worth restating the measured conclusion of the most detailed review, presented in a recent landmark report commissioned by the Norwegian Government Pension Fund [Ang, Goetzmann, and Schaefer, 2009]:
“The balance between indexation and active management is a choice variable for which the optimum depends on general beliefs about the existence and potential of manager skill, the pricing opportunities afforded within a given market, the time preferences and risk aversion of the investor, and the expertise and incentive contract of the specific manager.” Translation: the EMH leapt from unwarranted assumptions to pre-conceived conclusions.
4. A bullet dodged

The original attempts to check the randomness of stock prices looked at whether the way a price behaved in the past is any guide to how it will behave in the future. They showed that stock prices did not behave as pure random walks: future price changes were influenced by past movements. The market has a memory, after all.
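Those early serial-correlation checks are straightforward to sketch. The snippet below is illustrative only – the simulated series, seed and parameters are my own assumptions, not taken from any study cited here. It contrasts a memoryless market with a ‘momentum’ market by measuring the lag-1 autocorrelation of returns:

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation: do today's moves echo yesterday's?"""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[t] - mean) * (xs[t + 1] - mean) for t in range(n - 1))
    return cov / var

random.seed(42)

# Returns of an idealised random walk: independent draws, so past
# changes carry no information about future changes.
iid_returns = [random.gauss(0, 1) for _ in range(5000)]

# A 'momentum' market: each return partly echoes the previous one,
# so the past does foretell the future - the market has a memory.
mom_returns = [0.0]
for _ in range(4999):
    mom_returns.append(0.5 * mom_returns[-1] + random.gauss(0, 1))

print(round(lag1_autocorr(iid_returns), 3))  # close to zero
print(round(lag1_autocorr(mom_returns), 3))  # clearly positive
```

A statistically significant positive autocorrelation is exactly the ‘weak evidence that the past foretold the future’ those studies reported.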
To economists and psychologists engaged in the field of behavioural finance, such short-run momentum is consistent with the “bandwagon effect”. The famous example was the psychological contagion that led to irrational exuberance during the tech bubble of the 1990s [Shiller, 2001].
The behaviouralists acknowledge the inherent fallibility of mortal investors. To them, humans are highly imperfect organisms, given to bouts of greed and fear. They are impatient; they make analytical errors, misinterpret data and overrate their abilities. Moreover, they are hard-wired for self-deception, plain ad hocery and faulty logic, contrary to the premises of the EMH. They are not rational calculating machines, free of systematic biases, whose behaviour can be predicted by mathematical models.
The most memorable indictment from this behavioural perspective was this:

“Just because markets are unpredictable doesn’t mean they are efficient. The leap in this logic was one of the most remarkable errors in the history of economic thought.”

Before then, however, the new behavioural edifice had been exposing fault lines in the EMH ever since the landmark publication of Prospect Theory [Kahneman and Tversky, 1979].
It accepts that there is often a reasonable balance between different types of investors in the market, and that deviations in valuation are often corrected. But look under the bonnet and you’ll find a whole bunch of cognitive biases ticking away – sometimes cancelling each other out, most times not. These biases reflect imperfections in investors’ perceptions of reality.
In finance, four biases are most common:
• Mental accounting: dividends are perceived as additions to income; capital gains are not
• Biased expectations: people tend to be overconfident in their predictions of the future
• Reference dependence: investment decisions are affected by an investor’s reference point, which tends to be arbitrary
• Representativeness heuristic: investors mistake good companies for good stocks, not realising that the stock is usually already fairly valued, leaving little upside potential.
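Reference dependence can be made concrete with the value function of Prospect Theory. A minimal sketch, using the curvature and loss-aversion parameters commonly cited from Tversky and Kahneman’s later estimates (α = β = 0.88, λ = 2.25) purely as an illustration:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect Theory value function: outcomes are valued relative to
    a reference point (here, zero), and losses loom larger than
    equivalent gains because lam > 1."""
    if x >= 0:
        return x ** alpha          # concave over gains
    return -lam * ((-x) ** beta)   # convex and steeper over losses

# A $100 loss hurts more than a $100 gain pleases:
gain = prospect_value(100)    # approx. 57.5
loss = prospect_value(-100)   # approx. -129.5
print(abs(loss) / gain)       # ratio of 2.25: loss aversion
```

The ratio equals λ: a loss of a given size is weighted roughly two-and-a-quarter times as heavily as an equal gain, which is the reference-dependent behaviour described above.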
If the new behavioural finance is closer to reality than the old EMH paradigm of rational, calculating utility-balancing economic man, why has it failed to make major inroads into conventional thinking? There are two reasons.
The first reason is the power of the old guard, protecting the citadel of free market economics. It was the scientist Max Planck who observed that science advances “one funeral at a time”: the old, controlling generation must die before new ideas that threaten its conception can take hold.
The second reason is that the EMH appears to work a lot of the time and then suddenly blows up. It is analogous to the relationship between Newton’s laws of gravity and Einstein’s theory of relativity: the former approximates the latter well enough, so long as extremes such as speeds approaching that of light are left out.

The problem seems to arise from the frequency of events in markets. High-frequency events tend to follow the predictions of the EMH: if asset prices deviate too far from an accepted norm, the normal mechanism of the market brings them back in line. It is the rare, extreme events that escape the theory’s grip.
Finally, for all its fresh insights, the new behavioural finance can tell us why things go horribly wrong, but not when. Not surprisingly, Samuelson admired Kahneman but considered much of the work in behavioural finance “a lot of noise” [Bernstein, 2007].
He doubted if one could make money out of it. To him, most investors do not even understand how to capitalise on behavioural anomalies, even if they are sceptics about efficiency and fans of behavioural finance. However, he did not address the bigger issue: namely, to what extent can such behavioural biases cause market contagion, with disastrous consequences for the world economy?
Research attention remained firmly focused on the nuts and bolts of the EMH. On the one hand, some proponents of behavioural finance recognise its limitations, as spelt out by Samuelson. On the other, the proponents of the EMH started to factor in behavioural effects.
This synthesis is clear from the emergence of The Adaptive Markets Hypothesis [Lo, 2004].
It argues that investors are hardly capable of the kind of utility optimisation assumed in the EMH. Since optimisation is costly and since humans are limited in their computational abilities, they engage in ‘satisficing’: making choices that are satisfactory, not optimal.
Such decisions are reached not analytically, but through trial and error that enables one to develop simple rules of thumb that evolve into heuristics over time.
Thus, when the environment changes, the heuristics of the old environment are not necessarily suited to the needs of the new. The mismatch gives rise to behavioural biases: actions that seem ill-advised in the context in which they are taken.
However, according to Lo, the new paradigm of the AMH is still in its infancy and requires a great deal more empirical testing before it can dislodge the EMH. He admits that:
“The internal consistency and logical elegance of the EMH framework are almost hypnotic, and it is all too easy to forget that the EMH is merely a figment of our imagination, meant to serve as an approximation – and not always a terribly accurate one – to a far more complex reality.
Unlike the law of gravity and the theory of special relativity, there are no immutable laws of nature from which the EMH has been derived.”

The implications are clear. Neither the CAPM nor the EMH has the necessary empirical credence; quite the reverse. Yet they remained firmly anchored in the investor psyche and in policy thinking in the West – at least until the 2008 market meltdown. It reminded us all too painfully that [Derman, 2011]:
“CAPM is a useful way of thinking about a model world that is, quite often, far from the world we live in.”
5. The moment of reckoning

In hindsight, it beggars belief that the sub-prime mortgage boom in the US lasted for as long as it did.
The Federal Reserve could not foresee the concealed time bomb. Nor did it have any inkling that a sub-prime crisis in the US would soon tip into a global disaster via the new mark-to-market rules introduced after 2004. So keen was it to sustain the economic recovery of the 1990s that, at every whiff of a market downturn, fresh liquidity was pumped into the system. With the banking system awash with cash, product innovation proliferated.