Logical Invest Universal Investment Strategy

Here is the blurb from Logical Invest:

“The idea for this Universal Investment Strategy was to develop a strategy which has an adaptive allocation between 0% and 100% for each ETF (TLT / SPY) depending on the market situation.

The way to calculate the optimum composition is done by calculating which composition had the maximum Sharpe ratio during an optimized look back period (normally 50-80 days). During normal market periods, the maximum Sharpe ratio is not at a 100% SPY or at a 100% TLT allocation, but somewhere in between. To calculate this maximum Sharpe ratio, I loop through all possible compositions from 0%SPY-100%TLT to 100%SPY-0%TLT and calculate the resulting Sharpe ratio for the look back period.

For my UIS strategy I tweaked the Sharpe formula a little bit. Normally the Sharpe ratio is calculated by Sharpe = rd/sd with rd=mean daily return and sd= standard deviation of daily returns. I don’t use the risk free rate, as I only use the Sharpe ratio to do a ranking. My algorithm uses the modified Sharpe formula Sharpe = rd/(sd^f) with f=volatility factor. The f factor allows me to change the importance of volatility.

- If f=0, then sd^0=1 and the ranking algorithm will choose the composition with the highest performance without considering volatility.
- If f=1, this is the normal Sharpe formula.
- If f>1, then I rather want to find SPY-TLT combinations with a low volatility. With high f values, the algorithm becomes a “minimum variance” or “minimum volatility” algorithm.
To get good results, the f factor should normally be higher than 1. This way you do not need to rebalance too much. In a whipsaw market, rebalancing also has the negative effect of selling low and buying high on small intermediate market corrections. This is why a system which considers only performance will not do well.

The good f factor for a system can be found by “walk forward” optimization iterations of your backtests. Normally a good value for f is about 2, but the factor changes slightly, adapting to the current market conditions.”
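The modified Sharpe ranking described in the blurb can be sketched as a small function. This is a minimal illustration, not Logical Invest's actual code; the function name and sample return series are my own, chosen to show how raising f flips the ranking between a high-return/high-volatility mix and a steadier one.

```python
import numpy as np

def modified_sharpe(daily_returns, f=2.0):
    """Rank metric from the blurb: mean daily return divided by the
    standard deviation of daily returns raised to the volatility factor f.

    f=0 ranks purely on performance, f=1 is the usual (risk-free-rate-free)
    Sharpe ratio, and f>1 leans toward minimum-volatility compositions.
    """
    daily_returns = np.asarray(daily_returns, dtype=float)
    return daily_returns.mean() / (daily_returns.std() ** f)

# Two illustrative return streams: one steady, one choppier but with a
# higher mean. At f=0 the choppy stream ranks first; at f=2 the steady
# stream overtakes it because volatility is penalised quadratically.
steady = np.array([0.0010, 0.0012, 0.0009, 0.0011, 0.0010])
choppy = np.array([0.010, -0.008, 0.012, -0.009, 0.011])
```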

I chose to code in Python and so adapted the code found in the Quantopian link above for my own software:

```python
import numpy as np

# Strategy parameters (example values; the blurb suggests tuning these
# via walk-forward optimisation).
lower_bound, upper_bound, increment = 0.0, 1.01, 0.01
volatilityFactor = 2.0

# `prices` is a DataFrame of daily closes with SPY in the first column
# and TLT in the second.
returns = prices.pct_change().dropna()

ratio = [0, 1]
max_sharpe = -1000
ratios = []
for weight in np.arange(lower_bound, upper_bound, increment):
    ratios.append([weight, 1 - weight])
for c in range(len(ratios)):
    # Blend the two return streams at the candidate weights.
    temp_returns = returns.iloc[:, 0] * ratios[c][0] + returns.iloc[:, 1] * ratios[c][1]
    # Modified Sharpe: mean return over std dev raised to the volatility factor.
    sharpe = temp_returns.mean() / (temp_returns.std() ** volatilityFactor)
    if sharpe > max_sharpe:
        ratio = ratios[c]
        max_sharpe = sharpe
wts = ratio
```

There could, of course, be something wrong with my interpretation, especially since my results do not closely match a further version I drafted using SciPy's optimize with basin hopping. I must try a brute-force grid search in SciPy and see how it compares with the brute-force method outlined here.
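A brute-force grid search in SciPy could be sketched with `scipy.optimize.brute`. This is only an illustration of the approach, not my actual comparison run: the synthetic return series stand in for SPY and TLT, and the f value and grid spacing are assumptions.

```python
import numpy as np
from scipy.optimize import brute

rng = np.random.default_rng(0)
# Hypothetical daily returns standing in for SPY and TLT.
spy = rng.normal(0.0005, 0.010, 250)
tlt = rng.normal(0.0003, 0.008, 250)

def neg_modified_sharpe(params, f=2.0):
    """Negative modified Sharpe of the blend, as a function of the SPY weight
    (brute minimises, so we negate the ranking metric)."""
    w = float(np.atleast_1d(params)[0])
    blended = w * spy + (1.0 - w) * tlt
    return -blended.mean() / (blended.std() ** f)

# Scan the SPY weight over [0, 1] in steps of 0.01. finish=None skips the
# local polishing step so the answer stays on the grid, matching the
# hand-rolled loop above.
best = brute(neg_modified_sharpe, (slice(0.0, 1.01, 0.01),), finish=None)
best_w = float(np.atleast_1d(best)[0])
```

By construction the grid point returned can be no worse than either all-SPY or all-TLT under the same metric.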

I used unleveraged TLT and SPY taken from Yahoo before the selfish cads closed down their useful source of readily available data.

Anyway, here is what I got:

The trouble is that in this (perhaps faultily constructed) version it did not do what it promised on the box: much of the time it took on/off binary bets, as can be seen below:

Here is what I get with a simple risk parity / inverse volatility weighting approach for a very slightly different period (a better risk-adjusted return and a lower maximum drawdown):
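The inverse volatility weighting used for that comparison can be sketched as follows. This is the generic textbook form, not necessarily the exact calculation behind my chart; the lookback length and the synthetic return series are assumptions.

```python
import numpy as np
import pandas as pd

def inverse_vol_weights(returns, lookback=60):
    """Simplest two-asset risk parity: weight each asset by the reciprocal
    of its trailing volatility, normalised so the weights sum to 1."""
    vol = returns.tail(lookback).std()
    inv = 1.0 / vol
    return inv / inv.sum()

# Hypothetical daily returns: SPY choppier than TLT, so TLT should
# receive the larger weight.
rng = np.random.default_rng(1)
rets = pd.DataFrame({
    "SPY": rng.normal(0.0005, 0.012, 250),
    "TLT": rng.normal(0.0003, 0.007, 250),
})
w = inverse_vol_weights(rets)
```

Unlike the modified-Sharpe search, this allocation ignores mean returns entirely, which is one reason it tends to produce smoother, less binary weight paths.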
