
Much Quantitative Research is Misleading – at Best.

First let me state a few disclaimers, just in case anyone bothers to read this post.

I am a firm believer in quantitative analysis and the scientific method – without which we would still be living in caves and wearing animal skins for warmth. Unfortunately, however, the tendency of most researchers (probably unwittingly) is to draw wide and sweeping conclusions from limited testing on limited data. This is especially prevalent in financial market research and is not limited to research aimed at the retail market; weighty and didactic tomes aimed at sophisticated institutional investors can be equally misleading.

Most of this website is at present devoted to a systematic approach to trading individual stocks which I have rather preposterously called the “Smart Beta Rotational Stock Momentum System”. For many years my trading and my research have been based on the belief that momentum is a real phenomenon in financial markets which can be relied upon to produce profit, provided one runs profits and cuts losses. Worryingly, no one can come up with a definitive explanation for why momentum exists, and some deny that it exists at all.

Have I done enough testing on enough data to convince myself that my system represents some sort of reality and will remain profitable in future years? The answer is no, not yet and perhaps not ever.

Can such certainty or at least a high degree of probability ever be achieved? Are financial markets deterministic or random? Are they perhaps deterministic but practically speaking unpredictable? I cannot answer any of these questions definitively and nor can anyone else. At present.  Perhaps not ever.

Where have I failed? Where have I been slapdash? Primarily I am not satisfied that I have tested the system over enough stocks and over a long enough period. I have concentrated mostly on US listed stock data and my data provider has few stocks with a daily price history going back beyond 1962. I believe that CRSP has NYSE daily listed stock data going back in some cases to 1921 and clearly it is important to obtain as much data as you can going back in time as far as possible.

But even then markets change, and data from 1921 may not necessarily be relevant to trading today’s market. For instance, my published research on the futures markets shows that trend “efficiency” has deteriorated steadily over the past 40 years: the increased noise from ever-increasing participation in these markets has made it ever more difficult to profit from momentum trades – hence, perhaps, the less than inspiring recent five-year performance of many of the leading Commodity Trading Advisors.

I have a sneaky feeling that there IS an inherent order to the universe (including even that part of it we call the financial markets) but we are still a long way from discovering it. And I don’t mean god – I’m an atheist.

But look, let’s get to the point – enough tendentious and self-opinionated waffle.

I have spent the past couple of weeks looking at “market timing” and “asset allocation” and have looked at a number of well written papers on the topic. In particular it is frequently claimed that you can achieve better risk adjusted and/or absolute return by exiting the market during a downturn and re-entering when the storm subsides. On a fully mechanical basis. I have made such claims myself but have seriously come to wonder whether the suggested process has been or will be robust over the long term.

Why? Well, briefly, a friend in Singapore set a hare running a couple of weeks ago when he mentioned a concept called “Dual Momentum”. Nothing new here – get out when momentum turns down, get back in again when it turns up. Nonetheless my friend pointed me towards a well written paper which led me down the avenue of “trading the equity curve”.

For the uninitiated, “trading the equity curve” is exactly the same approach. Stop your trading/trading program/investment/whatever when the market turns sour – you hope to achieve lower drawdown and volatility and higher CAGR. Hope.

I used a 12 month lookback. Put simply, if today’s price level dips below that of 260 trading days ago, cease trading. When (if) today’s price exceeds that of 12 months ago, re-commence trading.  And yes, I studied many different lookback periods and many different methods including all the usual smoothing tools such as moving averages of price.
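For concreteness, the basic switch can be sketched in a few lines of Python. This is a minimal illustration of the rule as stated (the function name `equity_curve_filter` is my own, not the author’s actual test code):

```python
import numpy as np

def equity_curve_filter(prices, lookback=260):
    """Risk-on/risk-off switch: trade only while today's price exceeds
    the price `lookback` trading days ago (roughly 12 months).
    Returns a boolean array; True means trading is switched on."""
    prices = np.asarray(prices, dtype=float)
    on = np.zeros(len(prices), dtype=bool)
    # Compare each day's price with the price `lookback` days earlier.
    on[lookback:] = prices[lookback:] > prices[:-lookback]
    return on
```

The same skeleton extends to the smoothed variants mentioned above by replacing the raw prices with, say, a moving average.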

I also tested the approach on many different instruments and trading schemes, over many different time periods. And no – I cannot claim I have looked at “enough” instruments or enough time periods.

After two weeks of intense coding and back testing what I can claim is as follows:

Sometimes it works, sometimes it doesn’t! 

I will post just two examples.

Both examples use exactly the same system with exactly the same (large) portfolio of ETFs and identical parameters except in one respect: the day of the month on which re-allocation takes place.  Take a good look and remember these two examples differ only in terms of the monthly re-allocation date.

Believe me, much more extreme examples came up in my research.

Using Date 1 as a re-allocation date suggests that using this risk on/ risk off method could yield an almost identical CAGR for lower volatility and a lower drawdown. Using Date 2 as a re-allocation date strongly suggests otherwise. Too little data leads inexorably to curve fitting and incorrect, overly optimistic conclusions.


So very many websites out there claim success based on the use of a risk on/risk off methodology on a tiny handful of indices or ETFs. Even if they (usefully) back test over different time periods (albeit never going back far enough, since the data is not available), the conclusions have to be misleading. So what if swapping between the Lehman 20 Years US Bond Index and the S&P 500 monthly based on momentum has worked (in theory) over the past 20 years? That is no guarantee the relationship and correlation will hold over the next 20, whatever economic theory might suggest. It should, it may, but will it?

The only conclusion you can draw is that only maximum diversification will give you a chance of success over the long term even if this does mean a lower, watered down CAGR.

Use multiples of everything: stocks, bonds, currencies, other asset classes, investment methodologies (mechanical or otherwise), geographies, service providers, banks, brokers, custodians… need I go on? You can never know what will implode/explode or when – only that it will, with monotonous regularity.

I have no doubt that many will take a different view. But for what it’s worth, that’s mine.

15 thoughts on “Much Quantitative Research is Misleading – at Best.”

  1. Comment addressed to me on another website:

    Were the “round trip signals” (exit trading completely, and reenter sometime later) numerous
    enough to give you a warm feeling of robustness? Eyeballing the equity curves, I think I
    may only see 4-6 places where the equity curve goes flatline, i.e., when one of these
    “signals” is active. Personally, I like to look at this figure of merit: (#signals / #sys.parameters)
    as a feelgood indication of possible robustness, and I prefer to see it above 100-to-1.
    You’ve got one parameter (12 month lookback); do you have >100 signals?

    Maybe it would be useful to divide your capital into 28 equal pieces, and trade each piece separately.
    The first piece trades your system + your portfolio and reallocates on the first of the month.
    The second piece trades your system + portfolio and reallocates on the second of the month.
    The third piece trades your system and reallocates on the third of the month … et cetera.
    Now you can be CERTAIN that you haven’t accidentally or deliberately “cherry picked” a
    reallocation date. You trade them all!

  2. My reply:

    What you see here, as you can well imagine, is the tip of a large iceberg. To be brief I am very sure of the robustness of the following conclusion:
    “Sometimes it works, sometimes it doesn’t.”

    The above example is one of very many tests I ran on different systems and different single instrument equity curves, which gave results of many shades in between “works” and “does not work”.

    You are correct in your assumption that the screenshots above show very few trades but, as explained, these were supplemented by many other trades on many different equity curves. It matters not that some curves were based on trading systems, some on the SPY, some on mutual funds, some on bonds and so forth. As you know, I tend to be pretty thorough.

    This is not to say “Market Timing does not work”. What it does say is that, like any other trading system, there are times when it works and times when it does not. In aggregate I believe that it DOES work.

    But the point of the article was that it was inspired by scepticism concerning the figures contained on various websites specialising in tactical asset allocation and the like.

    That is my whole point: they use an absurdly small number of instruments in their tests and hence very few trades. In my view it is extremely misguided and certainly NOT robust to take a few examples of “dual momentum” and to draw the conclusion that it works. Yes, I believe it does work in the aggregate and over many instruments and time periods, but I would under no circumstances care to use “dual momentum” on a couple of indices as do the quoted websites. I would not care to switch any one particular stand-alone investment on and off (or any small group of investments) relying on this method to get me in and out at the “right” time.

    To your other point, I am in entire agreement, although a division of capital into 28 is not really feasible. What the one featured system on my website DOES do is to divide capital into 5, each portion of which trades and re-allocates on a separate day of the relevant month. A further twist is that the dates are rolling and not fixed.

    You are thus scaling both in and out of trades as well as hedging your bets with different roll dates.
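    As a toy illustration of that blending of tranches (hypothetical function names, assuming each tranche’s reallocation-date-dependent risk switch has already been computed):

```python
import numpy as np

def tranche_curve(daily_returns, risk_on):
    """Equity curve for one tranche: earn the market's daily return
    only while this tranche's own risk switch is on, else hold cash (0%)."""
    r = np.where(risk_on, np.asarray(daily_returns, dtype=float), 0.0)
    return np.cumprod(1.0 + r)

def blended_curve(daily_returns, risk_on_by_tranche):
    """Equal-weight blend of tranches, one per reallocation date;
    averaging removes the dependence on any single chosen date."""
    curves = [tranche_curve(daily_returns, on) for on in risk_on_by_tranche]
    return np.mean(curves, axis=0)
```

    With 5 (or 28) tranches, no single reallocation date can dominate the result – which is exactly the point of the staggered scheme.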

    And of course, as you can well imagine, I have back tested many other roll periods.

  3. My further comment:
    Also, let’s face it: trend following, momentum trading – whatever you care to call it – IS market timing.

    But neither you nor I would care to draw the conclusion that it works by back testing a single 2 or 3 instrument portfolio.

    One of the things I am doing at present is to back test that system on my website against random portfolios constructed from the great majority of the 110,000 equity instruments contained in the CSI world equity database.

    And as we both well know, even that is no guarantee of “robustness” in terms of giving any form of reliable predictability for future performance. It’s probably about the best you can do (unless you want to subscribe to a few other longer-dated databases) but it is certainly no guarantee of future success.

  4. Have you looked at the Absolute Momentum paper from the author of the blog you mention? In that paper the author successfully applies a simple trend following rule to a number of different markets, including equities, bonds, real estate, commodities, and gold. His data goes back to 1974, while your examples go back only to 2001. You should also look at “Time Series Momentum” by Moskowitz, Ooi, and Pedersen, who successfully apply the same method to futures on 58 different markets, and “A Century of Evidence on Trend Following Investing” by Hurst, Ooi, and Pedersen, which applies a similar trend following method to 4 major asset classes all the way back to 1903.

    1. Thanks, yes, read all of that. My examples were merely two at the tip of my back testing iceberg. I don’t believe in running an asset allocation policy based on 4 or 5 instruments, even if these instruments are “somewhat” representative of an asset class as a whole. For bonds I would split it out a great deal further than these guys do; ditto all other classes. As per my silly little book, where I used a great number of indices, some of them home made, in my testing – in one case going back to 1897.

  5. Well, the bottleneck is getting the actual data. ETFs are relatively new, and last I checked, nobody backcast them to god knows when. And anyone who provides data for free provides crummy quality. E.g. for futures data, Quandl just has gaping holes in so many of its instruments, to the point that I had to throw out the entire script – because removing a day or two every so often is one thing, but losing chunks because of no OHL data (Settle will do in place of Close) is just unacceptable.

    CRSP and other stock data is A) prohibitively expensive for the hobbyist researcher or small fund and B) is limited to equities data, I believe.

    If there’d be a compendium of free, high-quality data that anyone could just backtest on, things would be good.

    1. For my book I used index data adjusted for costs and tracking error. As you say, ETF data is just too recent – iShares were one of the first there, but that was only in 1996. For commodities and bonds I made my own indices using futures prices and the T Bill 3 month rate. Currently I am using CSI data and am about to subscribe to Norgate. For backtesting, the 45,000 mutual fund time series provided by CSI are very helpful as a substitute for indices/ETFs.
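    One way to build such a home-made total-return index is to add the collateral yield to the futures excess return. A sketch, with hypothetical names, assuming daily excess returns from back-adjusted futures and the 3-month T-Bill rate already converted to a per-day rate:

```python
import numpy as np

def total_return_index(futures_excess_returns, tbill_daily_rates, base=100.0):
    """Home-made total-return index: each day's futures excess return
    plus the collateral yield (T-Bill rate) earned on the notional,
    compounded from a base level."""
    total = np.asarray(futures_excess_returns, dtype=float) \
          + np.asarray(tbill_daily_rates, dtype=float)
    return base * np.cumprod(1.0 + total)
```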

  6. increasing participation in these markets has made it ever more difficult to profit from momentum trades

    I would think momentum effects would grow stronger as more pile on.

    1. I don’t think it works quite that way, unfortunately. I published quite a few pieces over the past couple of years on the breakdown of “trend efficiency” in the futures markets. I will try to re-publish all that stuff here on my personal website. Effectively my argument is that “bad” volatility (noise) has increased, making trends more difficult to follow. Futures prices progress from A to B in a more and more jagged line, causing more unprofitable chop. Periods of strong, efficient trends can still make up for this, but my research shows that in futures at least trend following has become progressively more difficult over the past 40 years.

      1. Sounds like an interesting topic. Feel free to ping me isomorphisms@@@@@@@sdf… or elsewhere when you put them up.

  7. two weeks of intense coding

    Was the 2008–14 period (between your book and now) when you learnt to code? (noting your b.g. in law and banking)

    1. Gosh no. I first became computer literate back in 1987 in Hong Kong, when I was working as an analyst for SBCI’s stockbroking outfit. I was engaged in the full gamut of equity research and prepared analysis reports for institutional investors, using spreadsheets to form earnings forecasts, balance sheet and cash flow forecasts and so forth. I was involved with technical analysis also, and the preparation of charts with PE bands and all the fun of the fair, using Apple and data direct from the HKSE which had to be back-adjusted. So I have been in the game for more years than I care to remember.

    2. I love your website by the way! I must have an in depth look at it.

      1. Cool! Comments / criticisms always welcome. There should be Disqus comments on most pages. (And the wordpress site is just a mirror of some of the older content.)
