Monday, November 2, 2009

Risk periodicity revisited

Our monthly newsletter went out last week, and we immediately received inquiries and comments about one of the topics: risk. I extended my recent blog remarks on this subject to shed further light on it, but there's still confusion and a need for greater clarity.

What are the potential time periods we could choose for risk statistics? Well, being realistic we have years, quarters, months, and days.

One of the key aspects of any risk measurement is having a big enough sample to make the results valuable. Many risk statistics assume that the return distribution is normal. And while many have found this to be an invalid assumption, the basic rule of thumb for the minimum number of observations still generally holds: 30. Most firms, I believe, use 36 months, although many obviously use more or fewer; for now, let's assume we're going to use 36.
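To make the arithmetic concrete, here is a minimal sketch in Python of the basic 36-observation calculation; the return figures are invented purely for illustration:

```python
import numpy as np

# 36 hypothetical monthly returns, in decimal form (illustrative only)
np.random.seed(0)
monthly_returns = np.random.normal(loc=0.007, scale=0.04, size=36)

# Sample standard deviation (ddof=1), the usual risk statistic
print(f"36-month standard deviation: {monthly_returns.std(ddof=1):.4f}")
```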

Okay, so let's consider our options again: Years. Is it realistic to expect many money management firms to have 36 years of returns? And even if they did, would there be much value in reviewing them to assess risk? Probably not. I don't know about you, but the Dave Spaulding of 36 years ago is quite different from today's model, and the same can probably be said for many firms, along with their portfolio managers and research staffs. Looking at a 36-year period might prove interesting, but not of much value when it comes to risk assessment.

Let's try quarters: 36 quarters equals nine years. Many firms can support this. We could derive risk on a rolling 36-quarter basis, yes? But do people think in these terms? I wouldn't rule this out, but I doubt it would be very popular.
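If a firm did go the quarterly route, the mechanics are simple enough; a sketch, assuming pandas and invented quarterly returns, of a rolling 36-quarter standard deviation:

```python
import numpy as np
import pandas as pd

# Twelve years (48 quarters) of hypothetical quarterly returns
np.random.seed(1)
quarterly = pd.Series(np.random.normal(0.02, 0.07, size=48),
                      index=pd.period_range("1998Q1", periods=48, freq="Q"))

# Standard deviation over a rolling 36-quarter (nine-year) window
rolling_risk = quarterly.rolling(window=36).std()
print(rolling_risk.dropna().tail())
```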

Next we have months. We need only three years to come up with 36 months. This is achievable by many firms, and the information is recent enough to give greater confidence that the management hasn't changed too much over the period. We start to see "noise" appearing a bit more here, though. Noise, as our newsletter points out, can refer to a few things, including the inaccuracies that often exist in daily valuations and the excessive volatility that can appear, which tends to be smoothed out at a monthly level. While one might still sense its presence, noise isn't as sharp with months as it is with days. Think about some of the huge shifts we've seen in the market on a daily basis; by the time we get to month-end, they've often been counterbalanced or offset by huge swings going the other way. Is there value for the investor in including such movements in their analysis?
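Here's a small sketch of that offsetting effect, with an invented, deliberately choppy month of 21 trading days: the daily numbers swing widely, yet compounding them into a single monthly return nets most of it away:

```python
import numpy as np

# A choppy hypothetical month: 21 trading days of large, alternating moves
daily = np.array([0.015, -0.020, 0.018, -0.016, 0.012, -0.014, 0.017,
                  -0.019, 0.013, -0.011, 0.016, -0.018, 0.014, -0.012,
                  0.019, -0.017, 0.011, -0.013, 0.020, -0.015, 0.010])

# Geometrically link the daily returns into one monthly return
monthly = np.prod(1 + daily) - 1

print(f"Daily standard deviation:  {daily.std(ddof=1):.4%}")
print(f"Compounded monthly return: {monthly:.4%}")
```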

For days, all we need is about two months of track record to have 36 observations, so this should be easy for everyone save the firm that just hung out its shingle. A concern with daily data is that we may be looking too closely at the numbers; after all, aren't investors supposed to have long-term horizons? Can we be thinking long term if we're staring at days? Granted, I look at the market throughout the day myself, but I also have to confess that doing so can cause a certain degree of anxiety. The market often reacts to big swings from day to day: some investors see big positive moves as opportunities for short-term profit, while others see big drops as chances to pick up issues they're interested in at a bargain price. The fact that a 150+ point up move is followed by a 200+ point down move reflects activity that will produce large volatility numbers but probably doesn't help much with risk evaluation. The chance of error creeping in is also much greater with daily data, partly because most firms don't reconcile daily; they reconcile monthly. Even benchmark providers often won't correct errors in daily data (they may not correct them in end-of-month data, either, but we at least hope that they would).
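To put numbers on that up-then-down scenario (assuming, for illustration, an index sitting at 10,000): the two days produce sizeable return swings while the net move is modest:

```python
# Hypothetical index at 10,000: up 150 points one day, down 200 the next
level_0 = 10_000
level_1 = level_0 + 150   # day 1 close
level_2 = level_1 - 200   # day 2 close

day1 = level_1 / level_0 - 1   # +1.50%
day2 = level_2 / level_1 - 1   # about -1.97%
net = level_2 / level_0 - 1    # -0.50%

print(f"Day 1: {day1:+.2%}, Day 2: {day2:+.2%}, Net: {net:+.2%}")
```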

One must also take comparability into consideration. Morningstar, for example, uses monthly data. And while they shouldn't be considered the "last word in periodicity," they are arguably using an approach that they have found has the greatest value. The GIPS(R) Executive Committee has decided to require a 36-month standard deviation effective January 2011. And I believe that most firms employ months in their calculations.
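In practice the 36-month figure is often annualized by scaling by the square root of 12; a sketch, again with invented returns (the annualization step is a common convention, not something the requirement above dictates):

```python
import numpy as np

# Trailing 36 hypothetical monthly returns (illustrative only)
np.random.seed(2)
monthly = np.random.normal(0.006, 0.035, size=36)

monthly_std = monthly.std(ddof=1)        # 36-month standard deviation
annualized = monthly_std * np.sqrt(12)   # common annualization convention

print(f"Monthly: {monthly_std:.4f}  Annualized: {annualized:.4f}")
```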

An interesting argument is to tie the risk measure's periodicity to the frequency of client reporting: e.g., if you meet with a client quarterly, use quarterly periods. There may be a variety of reasons for quarterly sessions; to conclude from them that the client wants quarterly periods is, I think, a stretch. One could easily confirm the client's wishes here. If they DO want quarterly, then fine, provide it. But often they are looking to the manager to be the "expert" on the frequency to employ.

But, at the end of the day (as it happens to be as I complete this note), firms can choose whatever measure they feel best meets their needs. But beware: you can't compare risk statistics across periodicities; a 36-observation statistic built from years, months, or days yields very different numbers ... try it to confirm this statement. Nor, for that matter, can you compare a three-year period where you used different frequencies ... not comparable. Sorry.
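For those who'd like to try it, a minimal sketch with a simulated return stream: the same nine-year history, sampled at daily, monthly, and yearly frequencies, produces standard deviations on entirely different scales:

```python
import numpy as np
import pandas as pd

# Nine years of hypothetical daily returns (roughly 252 trading days/year)
np.random.seed(3)
daily = pd.Series(np.random.normal(0.0003, 0.01, size=252 * 9),
                  index=pd.bdate_range("2000-01-03", periods=252 * 9))

# Compound the same stream into monthly and yearly returns
monthly = (1 + daily).resample("M").prod() - 1
yearly = (1 + daily).resample("A").prod() - 1

for label, series in [("daily", daily), ("monthly", monthly), ("yearly", yearly)]:
    print(f"{label:>7} std dev: {series.std(ddof=1):.4%}")
```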

1 comment:

  1. I think people providing risk/stat/characteristics need to KNOW the following before their presentation.

    For example, portfolio BETA could be calculated in various ways: using linear regression, covariance/variance, or a weighted portfolio average of security betas (two of these are sketched below). What does BETA mean? Will it depend on the following:
    The equation/method used to generate the results
    The frequency of the data used to calculate
    The data source used
