Workshop Portfolio: Testing the Relative Strength Strategy

Developing viable mechanical strategies requires rigorous historical testing to ensure that we are not putting our faith (and our money) into statistical flukes. When the Relative Strength strategy was tested out-of-sample, it actually performed better relative to the market than it did in the original test used to develop it. That gives us confidence that the strategy is a valid one.


By Todd Beaird (TMF Synchronicity)
January 2, 2001

Some of you may be wondering about the viability of mechanical investing after the beating it took last year. Remember that here at the Workshop, our goal is to outperform the market. Even with a very lofty goal of beating the market by 10% per year, being down 20% when the market is down 30% counts as a winning year. We also have to remember that highly volatile strategies, even very successful ones, tend to fall further in a bad market but make up for it when the market recovers.

Over the long run (and you are investing in the market for the long run, right?) you'll be richly rewarded if you beat the market -- on average. You don't have to beat it every year. Our expectation is that our strategies will beat the market soundly over the long run. That expectation is largely based on past performance, but how do we know our mechanical investing strategies will beat the market going forward? It is possible that our superb historical results have been little more than random chance. In fact, it is even probable that this is the case for at least some strategies.

How can this be? Here at the Workshop we spend a lot of time looking for strategies that have outperformed the market in the past. If you spend enough time searching through large amounts of data, as we do, it's not difficult to find some combination of factors associated with stocks that beat the market. Those are the strategies we talk about, since no one is interested in strategies that underperformed. But how do we know they didn't just beat the market by luck? We have to test very, very carefully. As we mentioned a few weeks ago, the simplest way to test is to retest using data that was not part of our original test. This is known as "out-of-sample" data.

Let's look at the returns for a five-stock Relative Strength-26 week (RS-26) annual strategy, with portfolios selected each January and held for one year. The RS-26 strategy was developed in 1997, when it was backtested to 1986. Besides driving one of our core strategies, the Relative Strength factor appears in almost all of our other strategies. It is probably the single most important factor that we use.
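
To make the mechanics concrete, here is a minimal sketch of how such a pick might be made. It assumes a hypothetical universe of Timeliness 1 stocks given as (ticker, price 26 weeks ago, price today) tuples; the actual screen data, tie-breaking rules, and execution details are not spelled out in this article.

```python
def rs26_picks(universe, n=5):
    """Return the n stocks with the strongest trailing 26-week price
    return from a universe of (ticker, price_26_weeks_ago, price_today)
    tuples -- a rough sketch of the RS-26 idea, not the official screen.
    """
    ranked = sorted(
        universe,
        key=lambda s: s[2] / s[1] - 1.0,   # trailing 26-week return
        reverse=True,                      # strongest performers first
    )
    return [ticker for ticker, _, _ in ranked[:n]]

# Hypothetical usage: pick five stocks each January and hold for one year.
# picks = rs26_picks(timeliness_1_universe)
```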

The RS-26 strategy has been officially followed at the Workshop since January 1, 1998. This makes results from 1998 forward out-of-sample, and the 1986 through 1997 results are in-sample. But three years of out-of-sample returns don't make for a very impressive test.

Thanks to the amazing volunteer efforts of the Workshop community, there is now data going back to 1969. The data from 1969 through 1985 is completely out-of-sample since it was not used (or even available) when the original strategy was designed. As a matter of fact, many of us were concerned that Relative Strength strategies would underperform in bear markets like the ones that prevailed from 1969 through 1985.
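
To make the distinction concrete, here's a minimal sketch of the bookkeeping. It assumes the return history is stored as a simple dictionary mapping each year to that year's return (a layout chosen here purely for illustration); only the 1986-1997 backtest years count as in-sample, and everything else -- the older data unearthed by the volunteers as well as the live results since 1998 -- is out-of-sample.

```python
def split_sample(returns_by_year, backtest_years=range(1986, 1998)):
    """Separate the years the strategy was developed on (in-sample)
    from every other year (out-of-sample) -- here, both the 1969-1985
    data found later and the live results from 1998 onward."""
    in_sample = {y: r for y, r in returns_by_year.items() if y in backtest_years}
    out_of_sample = {y: r for y, r in returns_by_year.items() if y not in backtest_years}
    return in_sample, out_of_sample
```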

The RS-26 strategy starts with the 100 stocks that are currently ranked "Timeliness 1" by Value Line ("T1"). If we know that a group of stocks returned an average of 30% per year, we wouldn't be very happy with a strategy that pulled five stocks from that group that returned a mere 20% per year -- even if 20% beats the pants off the market. We'd be better off choosing stocks from the original group at random. So it is important to compare our strategy with both the market and with the "universe" of stocks that we started with. That's what the tables below do.

In-sample test, 1986-1997

Year   RS-26    T1      S&P 
1986   49.6%   23.5%   18.7%
1987   12.4%   -1.2%    5.3%
1988  -11.1%   16.0%   16.6%
1989   46.7%   28.7%   31.7%
1990    6.2%   -6.6%   -3.1%
1991  121.5%   56.7%   30.5%
1992   20.0%   10.1%    7.6%
1993    8.3%   18.5%   10.1%
1994   42.8%    4.6%    1.3%
1995   22.3%   31.3%   37.6%
1996   26.8%   27.0%   23.0%
1997   17.2%   25.8%   33.4%
CAGR   26.9%   18.5%   17.0%
As you can see, the RS-26 strategy performs well in-sample, as expected. We wouldn't be talking about it if it didn't. However, it shows considerable volatility. We also see that it lost to the market in three of the last five years studied and to the T1 stocks in four of the last five years. Still, the long-term average is impressive.
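
For reference, the CAGR row is just the geometric average of the yearly returns: compound them all together and take the n-th root. Here's a quick sketch in Python, applied to the RS-26 column above.

```python
def cagr(annual_returns):
    """Compound annual growth rate from a list of yearly returns,
    expressed as decimals (0.496 means +49.6%)."""
    growth = 1.0
    for r in annual_returns:
        growth *= 1.0 + r
    return growth ** (1.0 / len(annual_returns)) - 1.0

# RS-26 column from the in-sample table, 1986-1997:
rs26 = [0.496, 0.124, -0.111, 0.467, 0.062, 1.215,
        0.200, 0.083, 0.428, 0.223, 0.268, 0.172]
print(f"{cagr(rs26):.1%}")   # about 26.9%, matching the CAGR row above
```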

Let's look further back. How did this strategy perform out-of-sample?

Out-of-sample test, 1969-1985
Year   RS-26    T1      S&P
1969  -14.5%  -17.7%  -11.4%
1970   12.0%   -8.9%    3.9%
1971   76.9%   26.5%   14.3%
1972   20.0%   10.1%   19.0%
1973   19.7%  -17.1%  -14.7%
1974  -14.8%  -23.1%  -26.5%
1975   18.6%   51.6%   37.2%
1976   34.4%   35.3%   23.9%
1977   15.1%   15.8%   -7.2%
1978   31.0%   19.8%    6.6%
1979   10.4%   25.6%   18.6%
1980   53.5%   50.2%   32.5%
1981  -22.0%   -1.9%   -4.9%
1982   18.2%   33.7%   21.5%
1983   54.3%   25.2%   22.6%
1984  -25.7%   -8.6%    6.3%
1985   60.4%   38.6%   31.7%
CAGR   17.0%   12.5%    8.7%
From 1969 through 1985, the stocks ranked Timeliness 1 by Value Line substantially outperformed the S&P 500, returning 12.5% per year compared to only 8.7% for the index. In our first test (above, 1986-1997), the difference was much smaller (18.5% for T1, compared to 17.0% for the S&P).

But look at the returns for RS-26. It returned a hefty 17% compound annual growth rate (CAGR) in the out-of-sample period. If anything, these results are even better than the in-sample data. The raw return isn't as high, but in proportional terms it beats the market and the Timeliness 1 stocks by an even wider margin. We still see a lot of volatility and years where it loses to T1 or the market or both, but many very impressive winning years make up for that. This is why we are so excited about our Relative Strength strategies.

But a skeptic could question these great numbers. After all, past performance is no guarantee of future results. So let's look at how this strategy has performed since we started tracking it at the Workshop. Results for 2000 are through December 26. Unfortunately, we do not have returns for Timeliness 1 stocks for 2000.

Post-discovery period, 1998-2000
Year   RS-26    T1      S&P
1998   28.0%    9.3%   28.6%
1999  116.6%   24.1%   21.0%
2000  -31.8%           -9.3%
CAGR   23.6%           12.2%
From 1998 through 2000, RS-26 nearly doubled the returns of the S&P, in spite of the recent terrible year. Also, our RS-26 strategy outperformed Value Line's Timeliness 1 stocks in 1998 and 1999. Interestingly enough, the Timeliness 1 stocks underperformed the S&P overall for 1998 through 1999. I'm looking forward to the results from year 2000 for Timeliness 1.

So, the results look pretty impressive, if you have the stomach for the volatility and the patience to ride out the bad years. But remember that this is only one of our Workshop strategies, and I've examined only January starts. I'll be presenting more results in upcoming weeks here and on the Foolish Workshop discussion board. I look forward to seeing you next week -- same Fool Time, same Fool Channel.

P.S. There's still time to give to Foolanthropy! Help us surpass last year's total of $800,000.