Tuesday, December 31, 2013

A great example of why BF is better than BHB

The late Damien Laker once opined that there was no difference between the two "Brinson models," Brinson-Fachler and Brinson-Hood-Beebower. I went out of my way to enlighten him on this subject, pointing out that the allocation effect for the latter uses only the benchmark sector return, while the former uses the benchmark sector return minus the overall benchmark return.

As a result, there can be sizable differences in the results and, at times, even sign-switching!

The BHB model rewards investors who overweight positively performing sectors, while BF does so only if the sector outperforms the overall benchmark. When discussing this in our training classes, I explain that if all the sectors are positive, BHB would want the investor to overweight them all!

The WSJ today reported that all 10 stock sectors of the S&P 500 will, in fact, post gains for the year ("All 10 Stock Sectors Post Gains in Big Year," by Dan Strumpf, page C4).

Thus, we have an occurrence in which the investor is supposedly expected to overweight everything; but how can they do this without borrowing money? They can't. BF, more correctly I believe, wants the investor to overweight only those sectors whose returns fall above the benchmark itself.
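To make the sign-switch concrete, here's a minimal sketch of the two allocation effects for a single sector (the weights and returns are hypothetical, chosen to mirror the scenario above: a positive sector that still trails the overall benchmark):

```python
# Compare BHB and BF allocation effects for one sector.
# BHB: (wp - wb) * Rb_sector
# BF:  (wp - wb) * (Rb_sector - Rb_total)

def bhb_allocation(wp, wb, rb_sector):
    """BHB allocation effect: rewards overweighting any positive sector."""
    return (wp - wb) * rb_sector

def bf_allocation(wp, wb, rb_sector, rb_total):
    """BF allocation effect: rewards overweighting only when the sector
    beats the overall benchmark."""
    return (wp - wb) * (rb_sector - rb_total)

# Overweight (+5%) a sector that is positive (4%) but trails the
# overall benchmark (10%).
wp, wb = 0.15, 0.10
rb_sector, rb_total = 0.04, 0.10

print(f"BHB allocation: {bhb_allocation(wp, wb, rb_sector):+.4%}")           # +0.2000%
print(f"BF allocation:  {bf_allocation(wp, wb, rb_sector, rb_total):+.4%}")  # -0.3000%
```

Same decision, opposite signs: BHB applauds the overweight because the sector is positive, while BF penalizes it because the sector lagged the benchmark.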

And so we have a perfect example to draw upon to demonstrate the superiority of the older (by only one year) model, which Gary Brinson crafted with Nimrod Fachler.

Friday, December 27, 2013

Be mindful of the costs before setting new rules

In yesterday's Wall Street Journal there was an article regarding the impact of the so-called "Volcker rule" on small banks ("Banks Play Small Ball Vs. Volcker," by Andrew B. Johnson and Andrew Ackerman). In it we find the following: "In a legal filing with the U.S. Court of Appeals for the District of Columbia, the trade group said U.S. regulators showed 'utter disregard' for the costs the provision would impose on numerous small banks." [emphasis added]

I was immediately reminded of some of the provisions that have been proposed over the years for those who comply with the Global Investment Performance Standards (GIPS(R)). I mentioned before the response from one individual to the costs that would be realized by firms to provide compliant presentations to existing clients ("well, isn't that just too bad").

At the most recent GIPS conference in Boston we learned that consideration was being given to requiring compliant firms to provide copies of presentations to all mutual fund shareholder prospects. The costs have been estimated to exceed $1 million a year for some firms. Having sat in many meetings of groups involved with the Standards, I know that cost is rarely, if ever, a topic of discussion. But it needs to be. What are the benefits of such a requirement versus the costs that would be incurred? As I've noted in the past, this idea about mutual fund shareholders brings up a bigger question: who is the prospect/client for a GIPS compliant firm? In my (and many others') view, the fund itself is the client/prospect, not the shareholders, as the shareholders are being marketed to by the fund itself.

While I would not characterize the attitude of those who oversee the Standards as showing "utter disregard" for the costs, I do hope they will be mindful of the impact of new ideas on the purse strings of those who have chosen to comply. The idea of a "level playing field," long a hallmark of the Standards, may become a thing of the past if costs make it no longer feasible for firms to comply.

Taking costs into consideration should be a required step for those who set rules that others are required to follow!

Tuesday, December 24, 2013

Season's Greetings!!! Merry Christmas!!! Happy New Year!!!

I am taking this week and next week off, so I won't be posting very much. My office closed early today, and will close early on New Year's Eve. And, of course, we'll be closed tomorrow and New Year's Day.

2013 has been a grand year for our company in so many ways. I am grateful for the contributions from our entire team, but especially Patrick Fowler, our company President & COO; Chris Spaulding, EVP who is responsible for business development; and Steve Sobhi, VP and head of Western Region Sales.

Let me take this opportunity to extend a warm Merry Christmas to our Christian friends and colleagues, and a Happy New Year to all of you. May 2014 be a fruitful, blessed, healthy, and stupendous new year for you, your families, and your companies.

Thursday, December 19, 2013

Why do we compound returns?

John Simpson, CIPM, and I are each (individually) teaching our Fundamentals of Performance Measurement course this week to two different clients: CalPERS in California and the Florida State Board of Administration in, well, Florida, where else? One topic we take up is "multi-period performance," which means geometric linking.

I typically ask "why don't we just add the individual period returns together?" For example, if our returns are 1%, 2%, and 3%, why not just show 6% (the sum of the three)? The math is pretty simple, right?

The reason is compounding: returns compound. That is, returns from later periods benefit from gains/losses from prior periods. But what do we mean by that?

Well, I constructed an example to demonstrate how this works, and I think it's helpful.

Column A shows the three monthly returns (1.00%, 2.00%, 3.00%), which, if simply added together (what we might call "arithmetic linking"), yields 6.00% (note that I don't show this in the table). The geometrically linked result appears in cell A5 (6.1106%).

To demonstrate how compounding works, we begin with an investment of $100 (B1). Column B shows the three periods' gains based solely on the returns applied to this amount ($1.00, $2.00, $3.00).

Compounding means that subsequent period returns are applied to all the prior period results. And so, let's step through this.

Cell C3 shows how the second period's return is applied to the prior period's result (2.00% of the $1.00 earned in the first period equals $0.0200). And so, the second period's 2% return is applied not only to the initial value ($100) but also to the amount earned in the first period ($1.00).

Cell C4 is the third period's return (3.00%) applied against the prior period's gain on the initial value (i.e., it's 3.00% × 2.00% × $100).

Cell D4's gain comes from applying the third period's return to the second period's gain ($0.0200), which, in turn, came from the second period's return (2.00%) applied to the first period's gain (i.e., it's 3.00% × 2.00% × 1.00% × $100).

Cell E4's value comes from the application of the third period's return times the gain from the first period: 3.00% × 1.00% × $100: i.e., we don't just apply subsequent period returns to the immediate period before, but to all prior periods.

When we add these seven values together (1.00 + 2.00 + 3.00 + 0.0200 + 0.0600 + 0.0006 + 0.0300) we get 6.1106. Note that this ties in nicely with the compounded return. We earned $6.1106 for the period, which is 6.1106% of the initial value ($100).

Does this make sense? Each subsequent period's return is applied to each prior period's gain, as well as the starting value.

Here's an interesting point: you get the same results, regardless of the order of the returns.

This is because of the commutative property of multiplication (i.e., A × B = B × A); since compounding involves multiplication, the rule has to apply here.
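The seven-component decomposition, and the order-independence point, can be sketched as follows (using the same 1%, 2%, 3% returns and $100 starting value as above):

```python
from itertools import permutations

returns = [0.01, 0.02, 0.03]  # the three monthly returns
start = 100.0

# Geometric linking: (1 + r1)(1 + r2)(1 + r3) - 1
linked = 1.0
for r in returns:
    linked *= 1 + r
linked -= 1
print(f"Geometrically linked: {linked:.4%}")  # 6.1106%

# The seven dollar components described in the cells above:
components = [
    0.01 * start,                # period 1 on the $100
    0.02 * start,                # period 2 on the $100
    0.03 * start,                # period 3 on the $100
    0.02 * 0.01 * start,         # period 2 on period 1's gain (C3)
    0.03 * 0.02 * start,         # period 3 on period 2's gain (C4)
    0.03 * 0.01 * start,         # period 3 on period 1's gain (E4)
    0.03 * 0.02 * 0.01 * start,  # period 3 on the compounded gain (D4)
]
print(f"Sum of components: ${sum(components):.4f}")  # $6.1106

# Order doesn't matter, because multiplication is commutative:
for perm in permutations(returns):
    acc = 1.0
    for r in perm:
        acc *= 1 + r
    assert abs(acc - (1 + linked)) < 1e-12
```

The sum of the seven components ($6.1106 on a $100 start) ties to the geometrically linked 6.1106%, and every ordering of the returns produces the same result.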

Thoughts, reactions, insights, other ideas?

Wednesday, December 18, 2013

Acknowledging an anniversary and offering an opinion

We are putting the "final touches" on The Journal of Performance Measurement's(R) Supplement issue, which will include summaries of two of our surveys: attribution and presentation standards. While writing my letter (as publisher), I realized that it has been 20 years since the AIMR-PPS(R) was published. I'm not aware that this anniversary met with any fanfare, though it should have. It's clear that this publication took what the FAF (Financial Analysts Federation) dreamed of in 1986 and made it a reality; it also laid the groundwork for a global standard, which became a reality in 1999 with the publication of GIPS(R).

I was sent a question yesterday regarding GIPS, dealing with the frequency of mutual funds being included in composites. I think there are two parts to this question.

First, should mutual funds be included in a firm's definition? In my view there is no reason why they shouldn't be. It isn't difficult to get these accounts into composites (either standalone, combined with other funds investing in the same strategy, or with separate accounts being managed to the fund's strategy), and it increases the firm's assets under management.

And so, if you agree, then the second question is, should the funds be in a composite? It makes no sense to ask this question if the funds are not part of the firm definition, right? And so, if they are, they must be in a composite. The only question is, do you combine them with separate accounts that are managed to the same strategy?

Reason for: it increases the composite's assets.

Reason against: given that funds typically experience daily cash flows, their performance may be materially different from that of the individual accounts, so separating them may make sense.

And so, it's up to you whether to combine.

Thoughts?  Further questions or comments?

Monday, December 16, 2013

Francis Xavier Desharnais

It is with deep sadness that I inform you of the death of Frank Desharnais. Frank was a bona fide performance measurement professional: with more than 25 years in our industry, Frank held leadership roles at Lazard Asset Management, Deutsche Asset Management, Neuberger Berman, and, most recently, Turner Investment Partners.

Ours is a rather small industry, and Frank was well known and liked by many. I learned of his passing from our mutual friend, Diana Merenda, who worked with him at Deutsche and Lazard.

Frank was a member of the Performance Measurement Forum and a frequent attendee at our annual PMAR (Performance Measurement, Attribution & Risk) conferences, which I think speaks to his desire to continue to grow professionally, despite his experience and expertise.

He was a New Yorker by birth as well as choice, and only left the area to assume the role at Turner, a Philadelphia-area investment advisor. Despite the relocation, he remained a Yankee fan.

I knew Frank for around 20 years, though I regret not having spent more time with him. I had the chance to dine with him shortly after he joined Turner, and enjoyed learning more about him.

Frank left us too soon, and I know that he will be missed by many, especially by his parents, siblings, and other relatives. May he rest in peace.

Friday, December 13, 2013

Annualization and leap years ... there's the rub!

Nine years ago I was reviewing a client's performance system, and noticed on one of their reports they showed "annualized" and "cumulative" returns for the prior one, two, three, five, ... years.

It struck me as quite odd that the prior year's cumulative and annualized returns were different: how could this be? Recall that we annualize returns by taking the cumulative return, adding one, and either (a) taking the nth root, where "n" is the number of years, or (b) raising the number to the 1/n power, and then subtracting one. Well, any number raised to the 1/1 (i.e., first) power, or having its 1st root taken, will yield that same number, meaning the annualized return HAS TO equal the cumulative. But why not in this case?

Well, after some reflection I realized they weren't using the number of years (which in this case would have been "1") but rather the number of days in the period (366, since the prior year was 2004, a leap year) and dividing by 365. Oops!
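A quick sketch of the flaw (the 8.50% cumulative return for 2004 is hypothetical):

```python
cumulative = 0.0850  # hypothetical one-year (2004) cumulative return

# Correct: the period is one year, so the annualized return
# equals the cumulative return.
n_years = 1
correct = (1 + cumulative) ** (1 / n_years) - 1

# The flawed approach: exponent of 365 / days-in-period, where the
# period (2004, a leap year) has 366 days.
days = 366
flawed = (1 + cumulative) ** (365 / days) - 1

print(f"correct annualized: {correct:.4%}")  # 8.5000%
print(f"flawed annualized:  {flawed:.4%}")   # slightly below 8.50%
```

With the 365/366 exponent the "annualized" one-year return no longer matches the cumulative return, which is exactly the oddity that appeared on the client's report.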

This started me on a quest; sadly, one not unlike Don Quixote's, which has yet to get me to my desired goal of a definitive and clear answer to the question: what to do?

A client asked us this week about this very subject, and my colleagues (John Simpson and Jed Schneider) and I engaged in a back-and-forth discussion on it. Space does not allow me to do much here, other than to say that there is no "rule" on this subject, that there are a variety of ways firms handle it (some not so good, some okay), and that the "right" way is rather complex. And finally, one must weigh the complexity against the error that results from using something other than the "right" method. I suspect that most folks would deem the error "immaterial."

I will take this subject up in this month's newsletters, and expand upon it further in an upcoming article.

Tuesday, December 10, 2013

Handling cash when trading

Asset management firms have, for the past few decades, generally agreed that "trade date" (t/d) accounting is preferred. This practice has become so common that custodians regularly provide trade date reports (where they used to provide only settlement date (s/d) reports), and even brokerage statements are often reported on a trade date basis. While managers (and their clients) are concerned about settlement, it's understood to be in the hands of the "back office" folks, and it will only become an issue if there are problems.

If a security is sold, resulting in proceeds of $100,000, the portfolio managers want to know this so they can invest that money that day or within a day or two; however, if it's held in a "limbo state" awaiting settlement, they may not be aware of it.

Being consistent

A client sent us a note recently regarding an office that recognized the security side of trades on trade date, but the cash movement on settlement date. For example, if 10,000 shares of a security are sold on December 10 for €125,000, the security's position on their system is reduced on that day; however, the resulting cash won't appear until s/d. Or, if they decide to buy 25,000 shares of a security costing $250,000, the security will appear on t/d, but the cash won't be reduced until settlement date.

This is simply incorrect; it will result in return errors, especially if t/d and s/d span a month-end. I suggest this be corrected on both a going-forward and historical basis.
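Here's a minimal sketch of how the mismatch distorts a month-end return (hypothetical values; it assumes a sale whose trade date falls in December but whose settlement date falls in January, with no market movement otherwise):

```python
# Hypothetical: a €500,000 portfolio sells a €125,000 position on
# December 30 (trade date); settlement is January 2.

bmv = 500_000.0  # beginning (December 1) market value

# Correct trade-date accounting: the proceeds are booked as cash on
# 12/30, so the December 31 value is unchanged.
emv_correct = 500_000.0
r_correct = emv_correct / bmv - 1

# Mismatched accounting: the position is removed on trade date, but
# the cash doesn't appear until settlement, in January. The money
# simply vanishes from the December 31 valuation.
emv_mismatched = 500_000.0 - 125_000.0
r_mismatched = emv_mismatched / bmv - 1

print(f"correct December return:    {r_correct:+.2%}")     # +0.00%
print(f"mismatched December return: {r_mismatched:+.2%}")  # -25.00%
```

The €125,000 reappears in January, producing an offsetting phantom gain there: two badly wrong monthly returns from one bookkeeping inconsistency.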

Is GIPS(R) at fault?

You may recall that the Global Investment Performance Standards define trade date as T (the day of the trade), T+1, T+2, or T+3. And so, T+3 can be considered trade date, and perhaps someone is suggesting that both the cash movement and the security movement are being done on "trade date." While I can see the logic of this argument, the expectation is that they occur on the same day. If that isn't the case, errors will result.

Trading ahead

Many firms engage in trading ahead of cash arriving. For example, if a client tells them they are wiring €1 million on Friday, the portfolio manager may trade on Tuesday or Wednesday, knowing the cash will be available on Friday for settlement. While controls and formal agreements should be in place to allow this to happen, that is a separate issue from this discussion.

I recommend that the firm create a "pseudo cash" account (or "anticipated cash," or "cash due," etc.) for the amount coming in; the trades that occur should go against this amount. Thus, a simulated external cash flow occurs on the date trading commences. When the cash actually comes in, it can be treated as a cash flow, netting against the outflow from this secondary cash account.
Policies & procedures

Firms should probably have policies regarding cash flow treatment if it is anything other than standard trade-date movement (and, of course, the treatment should be correct!).

Something rather simple can, at times, be complex. Hopefully this is helpful. If you have any thoughts, please share them; thanks!

Monday, December 9, 2013

How often should a GIPS compliant firm be verified?

The Spaulding Group surveys on the Global Investment Performance Standards have consistently shown that most compliant firms get their verifications done annually. But is this the optimal frequency? Well, let's consider this topic for a moment.

How about more often than annual?

A couple of verifiers we know encourage quarterly verifications. We believe strongly that this most benefits the verifiers themselves, as it allows them to keep their staff busy throughout the year. The disadvantaged are, in my view, the clients. Why? Because we believe that quarterly is:
  • Too frequent. There is nothing to be gained from having your verifier show up four times a year.
  • Disruptive. Having your verifier come to your offices four times a year can create stress and frustration. But even if your verifier is one who rarely shows up at your offices (insisting instead on doing the work remotely), you are still disrupted: you've got to gather materials four times a year and answer any questions that arise, which takes time away from your normal, and no doubt more productive, activities.
  • Potentially more expensive, since there is no economy of scale that you're taking advantage of.
And so, we strongly oppose such a level of frequency. Mind you, we like to be kept busy throughout the year, too, but we offer many other services (e.g., consulting, operations reviews, systems reviews, training), and so we spend time on these activities rather than revisiting our clients' offices three more times a year (recall that we only do "on site" verifications).

How about less frequently than annual?

By having verifications done less often than annually, the firm should save some money: by bringing the verifier in every two or three years, for example, they can "bundle" the years together and obtain a degree of economy of scale.

However, we don't recommend this, because:
  1. Falling out of compliance. It is very easy for firms to overlook something or make mistakes if they extend the period between verifications for too long. Asset management firms are dynamic: they add new strategies, change old ones, add new clients, and experience staff and organizational changes, all of which can impact the firm's GIPS compliance. In addition, the Standards are complex and do change, so less frequent verification increases the risk of becoming non-compliant.
  2. Marketing disadvantage. Recall that GIPS-compliant firms must now not only state whether they've been verified, but also, the period of their verification. Once you've gone beyond a year, your verification is seen by many as being "stale." Thus, you may have a bit of a credibility challenge with some prospects. Unfortunately, you may not even be aware that you're being overlooked because of a stale verification.
  3. What prospects expect. We have reason to believe that most prospective clients expect annual verifications. Since most firms get annual verifications, to do them less often isn't seen as a wise decision. Do the potential savings justify the risks? Recall that verification, like compliance itself, is an investment.
And so, what is the optimal frequency for verifications?

Well, just like Goldilocks, who found one bed too hard, one too soft, and one "just right":
  • Quarterly is too often
  • Every two, three, or more years is too seldom
  • Annual is "just right."
Most of our verification clients get their verifications done annually, and we believe this is the optimal frequency. The benefits:
  • Increased confidence that the firm is remaining in compliance
  • Less disruption to the organization
  • The ability to avoid "stale" verifications, which makes the firm more competitive.
We believe that prospective clients, too, expect to see annual verifications.

If you have any questions, thoughts, or insights on this,  please share!

Wednesday, December 4, 2013

We're winning ... finally! Money-weighting is catching on!

Having preached the benefits of money-weighted returns for the past several years, along with a group of comrades, including Steve Campisi, CFA, I'm quite pleased to see that the idea is catching on.

GASB, the U.S. Governmental Accounting Standards Board, now requires public pension funds to report the internal rate of return (IRR). I was asked to provide some guidance during the directive's development. One could fully understand why time-weighting was considered, but they wisely chose money-weighting.

Our neighbor to the north, Canada, has a new set of securities industry standards that also mandate the IRR.

GIPS(R)'s new initiative to encourage asset owners to comply includes the recommendation for the IRR. Regretfully, I didn't recommend in my comment letter that this be a requirement: I should have. But, a recommendation, for now, is very good.

What next? Well, there's plenty of room for more. For example, the GIPS Executive Committee should see the wisdom of mandating the use of the IRR whenever the manager controls cash flows. This is something a few of us have been asking about for some time; actually, dating back to the mid-1990s under the AIMR-PPS(R).

Stay tuned; I'm sure more will follow!

Tuesday, December 3, 2013

Let's disaggregate aggregation

I recently learned of a firm that has enhanced their system to provide the ability to aggregate multiple portfolios for performance reporting. They reference GIPS(R), so clearly the fact that it's permitted within the Standards for composite returns is seen as justification for their work.

When embarking on any performance or risk development / design work, it is extremely important to understand WHAT is being done; the PURPOSE of it; what it is intended TO DO.

I have many times criticized the use of the aggregate method; to me, it's seriously flawed. I've shown examples of the flaws, which apparently prompted the GIPS Executive Committee to declare the ability to add accounts intra-month invalid (which I'm fine with, but would have preferred a public comment period, since this was a change to the Standards, and as such, warranted input from those affected).

Are we looking for (a) the average return of the portfolios or (b) the return of the portfolios as if they constituted a single account? Chances are it's the former, which would have ruled out this approach.
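A small sketch of how the two interpretations diverge (hypothetical portfolios, valued daily, with one external flow; the divergence appears precisely because the aggregate treats the flow as if it belonged to one big account):

```python
# Two portfolios over two days; portfolio B receives an external
# flow of 100 at the end of day 1.
#
# Portfolio A: 100 -> 110 -> 121          (+10%, +10%)  TWR = 21%
# Portfolio B: 100 -> 105, +100 flow -> 205 -> 205  (+5%, 0%)  TWR = 5%

twr_a = (110 / 100) * (121 / 110) - 1   # 0.21
twr_b = (105 / 100) * (205 / 205) - 1   # 0.05

# (a) The average: beginning-value-weighted mean of the two TWRs.
avg = (100 * twr_a + 100 * twr_b) / 200
print(f"weighted average: {avg:.4%}")    # 13.0000%

# (b) The aggregate method: combine the values and treat them
# as one account.
r1 = 215 / 200 - 1   # day 1: 200 -> 215 (before the flow)
r2 = 326 / 315 - 1   # day 2: 315 (after the flow) -> 326
agg = (1 + r1) * (1 + r2) - 1
print(f"aggregate method: {agg:.4%}")    # ~11.25%
```

The two answers (13.00% versus roughly 11.25%) describe genuinely different things, which is why deciding WHAT you want before building the calculation matters.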

There are many ways to calculate returns; it's important to spend the time understanding what is intended in order to properly select the ideal approach.

Monday, December 2, 2013

Gilbert Beebower

I just learned (via P&I, November 25, page 4) of the passing of Gilbert Beebower.

You may recall that he, along with Gary Brinson and L. Randolph Hood, wrote a piece for the Financial Analysts Journal (1986: "Determinants of Portfolio Performance") that helped in the development of the nascent ideas of performance attribution. What many of us refer to as the "BHB" model was adopted by numerous asset managers and software vendors.

Although the article's chief aim was to recognize the importance of asset allocation, a peripheral benefit was the encouragement to employ attribution to better identify the sources of return. And even though the earlier Brinson-Fachler model appears to be the dominant approach, we cannot overlook the contributions of Beebower, et al.

He no doubt made many contributions to our industry, and is deserving of our recognition.

A Half-Dozen Sharpe Ratio Facts

Just when you thought it was safe to ...

I'm doing a bit of research on the Sharpe and Information ratios, and am finding loads of confusion. This may end up becoming an article, but for now I'll share with you some "facts," at least as I understand them, regarding the Sharpe ratio.

1. The Sharpe ratio was developed by William F. Sharpe (Sharpe, William F. (1966). "Mutual Fund Performance." Journal of Business), who was awarded the Nobel Prize in Economics.

1a. Sharpe's Nobel was not awarded for this risk measure, but rather for his work on CAPM.

2. Sharpe referred to his measure as "reward to variability," and to Treynor's (which was published in 1965) as the "reward to volatility."

2a. Neither term seems to have made it into the common lexicon.

3. Sharpe introduced a revised version of his formula in 1994 (Sharpe, William. (1994). "The Sharpe Ratio." Journal of Portfolio Management). It appears, though not yet confirmed, that the earlier version dominates in our industry.

4. Although it appears that many firms use annualized values in their formula, Sharpe (1994) states that "To maximize information content, it is usually desirable to measure risks and returns using fairly short (e.g., monthly) periods. For purposes of standardization it is then desirable to annualize the results."

5. In Sharpe (1994), the author acknowledges that "The literature surrounding the Sharpe Ratio has, unfortunately, led to a certain amount of confusion." For example, he cites an article by Treynor and Black that defines the ratio as "the square of the measure we describe," which, as Sharpe points out, would mean that it is always positive.

6. Although the Sharpe ratio is often criticized, from our research it remains the dominant risk-adjusted measure.
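As a small illustration of fact 4, here's a sketch that measures the ratio on monthly data and then annualizes the result (the returns are hypothetical, and the multiply-by-√12 step is a common annualization convention, offered here as an assumption rather than as Sharpe's prescription):

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical monthly portfolio and risk-free returns.
portfolio = [0.012, -0.004, 0.021, 0.008, -0.010, 0.015,
             0.006, 0.011, -0.002, 0.017, 0.004, 0.009]
risk_free = [0.002] * 12

# Excess (differential) returns, per Sharpe's definition.
excess = [p - rf for p, rf in zip(portfolio, risk_free)]

# Measure on fairly short (monthly) periods...
monthly_sharpe = mean(excess) / stdev(excess)

# ...then annualize the result for standardization (sqrt-of-12
# convention; an assumption, as noted above).
annualized_sharpe = monthly_sharpe * sqrt(12)

print(f"monthly Sharpe:    {monthly_sharpe:.4f}")
print(f"annualized Sharpe: {annualized_sharpe:.4f}")
```

The order of operations matters: computing the ratio from monthly observations and then scaling it generally gives a different answer than computing it from annualized inputs.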

Stay tuned: more to follow!

Wednesday, November 27, 2013

Happy Thanksgiving & Happy Chanukah

Apparently, this is the first time since 1888 that Thanksgiving and Chanukah have overlapped (I wasn't around, but my friends Steve Campisi and Herb Chain recalled the event; both are significantly older than I), and it won't occur again for 70,000+ years (I doubt that I'll see that event). And so,...

Tuesday, November 26, 2013

Introducing Sonia Renza-Elizabeth Spaulding

So proud, I'm showing off; hope you don't mind.

Our third grandchild and first granddaughter.

Born to our son, Christopher, and his lovely wife, Monica.

  • Born 11/25/2013
  • At 4:18 pm
  • 8lbs. 13oz
  • 21 inches
  • Renza in name = Monica's mom
  • Elizabeth in name = Chris' mom; aka, my wife, Betty

Annualizing performance attribution

I was in Australia (both Sydney and Melbourne) last week, (1) speaking at a conference, (2) conducting a conference workshop, and (3) teaching our two-day attribution class. It was a very hectic time, and I failed to do any posting, which is a bit unusual for me.

Yesterday, we held our monthly "Think Tank" session, and one caller asked about "annualizing attribution." I found this to be quite an intriguing topic.

It occurred to me that there are three issues that must be considered:
  1. Should we even be reporting attribution for periods greater than a year?
  2. If yes, should we annualize?
  3. If yes, how (i.e., what are the mechanics)?
At the present time, I'm not sure that it's advisable to extend attribution past a year, but need to give this some more thought, as well as discuss the topic with a few colleagues. My concern is that longer period results might "smooth out" the details and provide little benefit to the recipient. However, they may also prove to be quite insightful! In addition to chatting, I will most likely conduct some testing, too, to discover what we might learn.

In the meantime, if you have any thoughts, please let me know.

Friday, November 15, 2013

Topology and performance attribution

It may seem like a stretch, but the very esoteric and complex mathematical field of topology and performance attribution have something in common: mapping. Topologists map points from one surface to another, while PMPs (Performance Measurement Professionals) map weights and returns from the portfolio to the benchmark.

The following graphic should give you an idea of the mapping we do:
I will have more to say about this topic in this month's newsletters.

Tuesday, November 12, 2013

More on reporting standards, guidance, principles

I have had the opportunity to engage in conversation with several folks over the past month or so regarding the CFA Institute's "Principles for Investment Reporting." I remain generally opposed; actually, perhaps more so.

For a document that promotes "transparency," it hardly lives by that principle, given its own lack of transparency (who are its members?).

At no time have the views of the public been solicited regarding the content of the principles.

Interestingly, although the document is intended for the asset management firm's clients, the committee, as I understand it, has no members that work for a pension fund, foundation, endowment, etc. How can you ensure you're providing the right principles without seeking the input from these very persons?

In a recent conversation regarding its "five principles," one in particular garnered a lot of attention: "Client preferences are reflected in the investment report." Think about it: how difficult will it be to check off this principle? Are asset managers expected to solicit feedback on their reporting from their clients? What if the firm decides it can't justify the expense of modifying its reports to meet "client preferences"? This principle, while sounding quite attractive and perhaps even reasonable, will no doubt open up a huge "can of worms."

The industry has yet to be asked if they think this is a good idea; and yet, whenever we've surveyed the market, we have found general opposition to anything that looks like a reporting standard.

I am unclear as to what "principles" are supposed to be; guidance would, I believe, be more welcome. And, I think avoiding having managers feeling obligated or compelled to say that they "comply" would be preferred.

Monday, November 11, 2013

Investing in employees through training

Yes, this IS a dilemma: do you invest the money into providing your performance and risk measurement team the training they need to do a good job, and run the risk they'll leave, or avoid teaching them and save the money?

Fortunately, most firms see the wisdom in providing the education. It not only enhances the team's ability to do an effective job, it also helps boost morale. Skilled employees benefit the firm, but they also become more marketable to others; this is a fact all firms must confront.

Retaining good employees can always be a challenge, no doubt. But one way to do this is by making such an investment.

The Spaulding Group has been offering training for more than 20 years. Next week, I'll be teaching a class in Melbourne, Australia. John Simpson and Jed Schneider also regularly conduct classes. And this week, we're offering a webinar dedicated to attribution. To learn more about our offerings, please contact us.

Monday, November 4, 2013

Flourishing as an objective

I have begun to read Mass Flourishing, by Edmund Phelps, a Nobel Laureate in Economics. He begins with the following:
“Flourishing is the heart of prospering – engagement, meeting challenges, self-expression, and personal growth…A person’s flourishing comes from the experience of the new: new situations, new problems, new insights, and new ideas to develop and share.”

To flourish is, in my view, a worthy goal. Is it not a great way to live one's life?

One of the reasons the world of investment performance and risk measurement appeals to many of us is its dynamics: it constantly provides new situations, new problems, new insights, and new ideas. It is, in reality, a place to flourish. I hope you agree.

Saturday, November 2, 2013

Buy high and sell low isn't a recipe for success, but it's the strategy most investors seem to follow

In this weekend's WSJ, Jason Zweig discusses how investors too often make poor contribution/withdrawal decisions (see "How Investors Leave Billions on the Table").

He mentions how Pimco's Total Return fund had a 12-month return (as of September 30) of -0.74%, which outperformed its index (the Barclays U.S. Aggregate) by more than 100 bps. And yet, the average investor had a return of -1.4%, still better than the index (which returned -1.89%) but far below the fund itself. And why? Because investors decided to withdraw $7.3 billion in May and June, just before the fund rebounded.

"Buy low, sell high" is the mantra one should practice, but too often, because of emotions, we observe "buy high, sell low."

Jason's citing of the average investor experience acknowledges the value of a money-weighted alternative, which takes cash flows into consideration. All funds should provide their investors with their respective personal rates of return. If an investor's return outperforms the fund itself, bravo: good timing of cash flows! But if it doesn't, well, they goofed. It also wouldn't hurt to provide a comparable index that takes these flows into consideration. Meaning, the investor will see (1) how the fund did, relative to the time-weighted index, and (2) how they, the investor, did, using a money-weighted return, alongside the index presented in a money-weighted fashion.
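The money-weighted alternative that Zweig's numbers point to can be sketched in a few lines. The solver below, and its cash-flow figures, are illustrative only, not from the article:

```python
def money_weighted_return(begin_value, end_value, flows, total_days):
    """Money-weighted (IRR-style) return over one period, found by bisection.

    flows: list of (day, amount); positive = contribution, negative = withdrawal.
    """
    def grown_value(rate):
        value = begin_value * (1 + rate)
        for day, amount in flows:
            # each flow compounds only for the fraction of the period it was invested
            value += amount * (1 + rate) ** ((total_days - day) / total_days)
        return value

    lo, hi = -0.99, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if grown_value(mid) > end_value:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# With no flows, money- and time-weighted returns agree:
print(round(money_weighted_return(100.0, 110.0, [], 365), 4))  # → 0.1

# A hypothetical mid-year withdrawal before a rebound; the investor's
# money-weighted return can then differ sharply from the fund's:
print(round(money_weighted_return(100.0, 60.0, [(182, -50.0)], 365), 3))
```

A fund could pair this personal return with the same index valued in a money-weighted fashion, as suggested above.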

Friday, November 1, 2013

"A stunning implication..."

The words in this post's title appear in Nobel Laureate in Physics Leon Lederman's The God Particle. I can't say exactly why, but when I read them, those words struck me.

A stunning implication.

I once read that good writers write one word at a time; I believe Lederman does exactly that.

We are, if you will, in the "implication" game. That is, we regularly assess the implications of actions taken by portfolio managers. While we call this "performance attribution," that is essentially what is being evaluated: the implications of the actions taken.

In their 1986 FAJ paper, Brinson, Hood & Beebower pointed out that while attribution wasn't new, it was evolving. I've commented that it has evolved considerably since then. The questions we should ponder: is it still evolving, and evolving enough? Do we think it's as good as it gets? I would hope not.

We haven't seen many new linking methods introduced of late, nor multicurrency models. Perhaps Menchero's, Frongello's, Cariño's, and GRAP's methods are adequate for most people's needs, so it's possible that no additional linking methods are justified. But I believe that multicurrency should be able to grow beyond what we have today.

At a minimum, we should have a framework around which such models can be used and claimed. For example, one of our clients said they offered Karnosky-Singer, but only produce a single currency effect. We've concluded that this cannot legitimately be K-S, as the beauty of the model Denis and Brian crafted is that this effect is bifurcated between what happens in the underlying market and the contribution from the currency forwards that are included.

Perhaps it's time for another "GRAP." An initiative to sit back and ask
  • what are we doing right,
  • what are we doing wrong,
  • what do we still not understand,
  • what can be improved upon,
  • what is truly lacking,
  • and what else?
Something to reflect upon, perhaps, as we begin to bring to a close another calendar year.

Thursday, October 31, 2013

If God didn't want us to lie, He wouldn't have invented politicians

I've been wanting to use the line in today's heading, and stumbled upon a way (though it may be a stretch). I came up with it recently, and think it's clever (but I'm biased a bit). As a former politician, I am fully aware of the linkage between the art of lying and politics. But enough of that. How did I decide I could use it here?

Well, I had a conversation recently with someone from a public pension fund, who told me that she is under some pressure from one of the state's elected officials to bring their group into compliance with GIPS(R) (Global Investment Performance Standards). I thought this was excellent.

Although I haven't yet had the chance to chat with this fellow, I'm guessing that the recognition that the Standards promote ethical behavior, full disclosure, and transparency, as well as their being seen as best practice, is reason enough to justify this step.

It's way too early to tell whether compliance by asset owners will catch on, but here's at least one case where it's likely.

Thursday, October 24, 2013

DON'T ONLY "show me the money"

A memorable line from the Tom Cruise/Cuba Gooding Jr. movie, Jerry Maguire, is "show me the money," uttered multiple times by Gooding. As a result, it has become part of our society's broader lexicon.

This reinforces the point that we occasionally look more at the money than we ought to, as I pointed out in my post about Warren Buffett's post-2008 success.

I visited an asset manager's website recently and found that they declared the following (and I am paraphrasing a bit): we turned $1 million into roughly $20 million over the past 29 years.

Impressive, right? Wow! To turn $1 million into $20 million?

Although I don't have the exact dates, I estimated that the annualized return was 9.89 percent, which also may sound impressive. This was an equity strategy, so let's see how the S&P 500 did. I called upon my friend, Steve Campisi, CFA, who reported it was 10.10 percent! This firm's return isn't bad, but it isn't what the index did. Again, perhaps with some adjustments to the dates we may see that the manager did better. But using only dollars and not showing a benchmark is, in my view, a "no-no."
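The arithmetic behind such an estimate is easy to sketch. Note that with the article's round numbers taken literally ($1 million to exactly $20 million over exactly 29 years) the implied figure comes out near 10.9%; a different answer, like the 9.89 percent above, presumably reflects different assumed dates or amounts, which is exactly why the exact dates matter:

```python
def annualized_return(begin_value, end_value, years):
    """Geometric average annual return implied by a dollar multiple."""
    return (end_value / begin_value) ** (1.0 / years) - 1.0

# $1 million to roughly $20 million over 29 years:
r = annualized_return(1_000_000, 20_000_000, 29)
print(f"{r:.2%}")  # → 10.88%
```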

Monday, October 21, 2013

A webinar like no other ... kind of scary!

This Halloween (October 31) at 11:00 AM (EST), John Simpson, Jed Schneider, and I will host our monthly webinar. This month has a theme ... I wonder if you can figure it out? It's titled:

We will cover a lot of scary and interesting stuff about performance and risk measurement. It will be fun and informative! Hope you can join us.
The Spaulding Group's monthly webinar series is intended as an inexpensive way to provide quality training and education to your staff and colleagues. For many, it's become a "lunch and learn" session.
To learn more or to sign up, please contact Jaime Puerschner at 732-873-5700 or by email at JPuerschner@SpauldingGrp.com.
You'll have fun ... we guarantee it!

p.s., wearing costumes is optional for this program.
p.p.s., this webinar is free for our verification clients and members of the Performance Measurement Forum.

Sunday, October 20, 2013

Baseball and material errors

It is rare that people keep track of errors, but baseball, with its love of statistics, does just that: goof during a game and it will usually get recorded.

In last night's Detroit Tigers vs. Boston Red Sox game, we were treated to three errors, two by the same player. And, in my view, there was one more, though it was ruled a hit; we'll touch on that shortly.

In baseball, we don't distinguish between degrees of error: that is, there is no mention of one being "material" and another "non-material." This doesn't mean that commentators, reporters, pundits, and fans won't lay the blame on someone's deeds.

Going into the 7th inning, Detroit was up 2-1 over Boston. In that inning, Detroit's shortstop, Jose Iglesias, mishandled what might be called a "routine double-play ball," which resulted in the bases becoming loaded. The next Boston batter, Shane Victorino, hit a grand slam home run. There is little doubt that failing to "turn two" was a major factor in the game's outcome.

In the 9th inning, Detroit's Austin Jackson reached first on what was ruled an infield single; I scored it as an error, because the Boston shortstop, Stephen Drew, appeared to mishandle the ball just as Iglesias had two innings earlier. But Jackson didn't make it past second base, and Boston went on to win (5-2).

Material errors truly make a difference: they cause results to turn out differently than they would have otherwise.

p.s., It's interesting that offensive errors are not tracked. Detroit's Prince Fielder stumbled while getting back to third base and was tagged out, as the second part of a double play. This base-running error may have cost Detroit a run or two.

p.p.s., Perhaps it was fitting that Detroit's error-prone shortstop struck out for the last out of the game.

Friday, October 18, 2013

Timing & GIPS Compliance

There are probably few documents that say as much about timing as the Global Investment Performance Standards (GIPS(R)). But our focus here is more limited, and won't address everything that appears.

One of the most important questions is when a manager should become compliant. Must they wait five years (since a firm must report five years, or since inception), or at least one year?

First, the issue about "five years or since inception" is often confusing: if the firm has five or more years of history, THEN they must show at least five (building to 10); however, if the firm is less than five years old, then they must report since inception (and again, build to 10 years of annual returns).

Now, to the question: ASAP! That is, as soon as possible the firm should begin to become compliant. And "why?," you might ask. Because the sooner you begin, the easier the process.
  • The firm can design its policies and procedures, and immediately begin to use them.
  • They can add accounts to composites as they are brought on.
  • They won't have to look back over history but rather will be building in "real time."
  • And, the firm can immediately take advantage of their claim of compliance, even though their history may not be extensive.
GIPS now requires firms to show "stub" periods for performance; that is, you must report composite performance even if the strategy's first accounts were added partway through the year. This means that firms can almost immediately have something to report. But just as I discussed earlier this week, returns for short time periods have limited value. That being said, if a firm wishes to grow its business, GIPS compliance is usually a good start. The firm can include a disclosure about the limited time being represented. To me, this would be in keeping with "full disclosure," but it isn't a requirement.

Wednesday, October 16, 2013

Timing & Risk Reporting

How long before you report risk measures as part of your performance?

Returns can be shown for a day, a few days, a week, a month, a quarter, a year, etc. Risk statistics typically rely upon a series of returns. But how many and which ones?

The standard seems to be 36. This conforms well with the usual expectation that we have at least 30 observations for standard deviation to be meaningful. But 36 what? Can we use days, for example?

Well, on the surface that would seem to make some sense, yes? Why not start reporting risk after a strategy or portfolio or composite has been managed for a month or so? We can run numbers, right?

Well, yes, we can. But we generally think of daily returns as being "too noisy." What do we mean by this? Essentially that the fluctuations are too extreme and don't necessarily exhibit anything. We can see choppy days that smooth out a bit when we shift to months. If we were to find days acceptable, why not hours? We could measure hourly returns and track the prior 36; or the prior 36 minute returns; or, to be REALLY extreme, the prior 36 seconds. This is all possible, but would serve no purpose other than to confuse.

I think we'd actually prefer to use quarters, but the 30 observation rule would mean we couldn't begin reporting until we've been at it for 7 1/2 years; that seems a bit long. Thus, we've pretty much settled on months.

Could we run our risk measures using fewer than 30 observations? Yes, of course we could; we could, for example, use just a few months. But to do so would mislead, I believe. To show someone a return for the prior quarter, for example, along with its associated standard deviation (based either on days or months), would suggest that the number has meaning when, in reality, it doesn't; the period just isn't long enough (or, if days were used, as noted above, it's too noisy). One must guard against reporting anything that could be misconstrued; we can't mislead our clients or prospects.

What if the client insists on seeing risk measures immediately? Then, of course, do what the client wants; but also include a disclaimer regarding the shortcomings of using a short period to draw any sort of viable or valid conclusion.

GIPS(R) (Global Investment Performance Standards) now requires compliant firms to report an ex post, annualized, 36-month standard deviation. We again see the use of 36 months. Could you report for shorter periods? Again, of course you could. But, I think it's better not to, unless you include a disclaimer; something like "we are showing you a 24-month cumulative return, and so decided to show its associated 24-month annualized standard deviation, but given that this is for a relatively short period, please don't put a lot of emphasis on it, because the statistic doesn't have a whole lot of value until we reach 30 months."
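A minimal sketch of that statistic (monthly sample standard deviation scaled by √12, a common convention; details such as the sample vs. population form are a policy choice left to the firm):

```python
import math

def annualized_std_dev(monthly_returns):
    """Annualized ex post standard deviation from monthly returns (sample form)."""
    n = len(monthly_returns)
    if n < 36:
        # per the discussion above, short windows warrant a disclaimer (or refusal)
        raise ValueError(f"only {n} monthly observations; 36 expected")
    mean = sum(monthly_returns) / n
    variance = sum((r - mean) ** 2 for r in monthly_returns) / (n - 1)
    return math.sqrt(variance) * math.sqrt(12)
```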

Monday, October 14, 2013

A week about timing, starting with returns

It occurred to me that we could spend some focused time on the issue of time (pun intended). Let us begin with rates of return.

One of the ironies of performance measurement is that the term "time-weighting" has really nothing to do with the weighting of time; it's a term that was carried over from the 1968 BAI (Bank Administration Institute) performance standards. But time is an important component of rates of return.

We can speak of the issue of frequency of valuations. At one time it was not uncommon for firms to value their portfolios annually. Today, that may seem quite odd, but given the lack of computing power and the absence of any performance systems, asset managers relied primarily on manual calculations. Over time (that word again), the valuation interval shrank to quarterly, then monthly, and now it's typically either (a) daily or (b) whenever a large cash flow occurs. Some occasionally speak of "real time" valuations, but I think that would take this topic to an extremity that is ill advised.

When should you begin to report your performance?

But for the purpose of this discussion I am not speaking of such things. Rather, my focus is on how much time is needed before one should begin to REPORT PERFORMANCE!

Let's say that you've begun a new strategy or just opened shop and have your first client. When do you begin to report your rates of return (internally, to your client(s), or to prospective investors)?

The Global Investment Performance Standards (GIPS(R)) have, in a way, answered this, at least for prospective investors, because they now require the reporting of "stub periods." That is, returns for periods less than a year. And so, if you've begun a strategy in October, we probably expect to see returns for the end of the year, starting with November or December (depending on your timing to add an account to your composite).

It seems to be fairly common practice to report monthly returns, and so, if we have a new client we will most likely be reporting returns to them almost immediately.

As far as internal reporting, many firms report daily, weekly, and/or month-to-date returns. This is fine, as it is a way to "keep your finger on the pulse" of what is going on. What is done with the information is important to consider. That is, how much importance is placed on it, how is it being interpreted, and what actions may be taken as a result?

Short-term reporting of returns is perfectly fine, provided we understand that this information has very little meaning. It would be wrong to draw much from just a month or even a few months of returns. If returns are extremely bad, perhaps we look to determine what is going wrong; but if they are extremely good, don't start celebrating just yet. You need more time to properly assess skill.

A gambling analogy

I hope I am not disparaging our profession by bringing up gambling; it seems to fit, at least in this case.

What's the worst thing that can happen to someone the first time they visit a casino? I think it's to win a lot of money. And why? Because this may make them think that
  1. They're pretty good at gambling
  2. It's pretty easy to win
  3. They have a secret strategy that no one else ever figured out.
If they win big, they'll be back. And, eventually they will lose. But, given their earlier success, they may believe that the loss was an aberration (odd, because their win was probably the aberration), and so they will continue to gamble, knowing that another big win is just around the corner.

If they were to record their wins and losses over time, chances are they'd find that, on average, they lost. But they may not realize this unless they gamble over a period of time. The casinos know the odds; they want to keep gamblers in their casinos (thus, the typical absence of windows or clocks) and to keep them coming back (thus the "comps"), knowing that the winners will, on average, become losers. This has to be true; otherwise, where did the money for the fancy and lavish buildings come from?

How much reliance should be placed on short-term investment performance?

The same can be said for investing. Perhaps over a short time someone does extremely well with their investing. There's a reason the industry generally disallows annualizing returns for periods less than a year: it's because a good month or two, annualized, will present a return that is based on the assumption that their performance will continue, when there is no assurance that it will (thus the standard line, past performance is no indication of future results).

If a manager has a good month, two, three, or even several more, it is probably still too early to celebrate, at least too enthusiastically. There's also a reason why institutions typically want at least five years of performance before bringing a new manager on: because a short period of success may not be sustainable.

We have, on occasion, been contacted by folks who have invested their own money for a few months; they've decided they want to become GIPS compliant. And while we encourage early adoption of the Standards (we'll discuss this later this week), it may be too early for these folks to quit their day job to enter the world of professional money management.

A benchmark for timing may be the requirements for a normal distribution: in general, we want at least 30 observations. We often "round this" to 36 months, which is often the basis for risk measurement (we'll discuss this, too, this week).

In some firms, a new manager who does extremely well in a short time may be prematurely rewarded; this is partly done out of fear that this individual may go elsewhere. But will the success continue? Only time will tell!

Warning labels

Should there be a disclosure with initial short period returns? Perhaps. Something to the effect that this performance is for a short period, and may not yet reflect the true skill of the investor or the strategy; that additional time will be needed to fully gauge this success. And, that success relative to the strategy's benchmark may fluctuate over time, and that by no means should the reader expect continuous out-performance.

Time matters, even with time-weighting; it's just a matter of how much it matters.

Tuesday, October 8, 2013

Explaining what we do ... in a picture

I occasionally describe the formulas we use as a series of bifurcations, starting with time- and money-weighting. And, sometimes I begin with a graphical representation. Well, this morning, I decided to take that graphic to its nth degree, and solicited input from my colleagues, John Simpson and Jed Schneider.

This journey began with the following:

And after a few iterations, it now appears as:

I know ... you can't read it. But, if you click on it you can.

Is it done? No, probably not, but it will be soon. It's a series of bifurcations (and one or two trifurcations (is that a word?) tossed in), which summarizes the world of rates of return. I think it's kind of cool ... how about you?

Monday, October 7, 2013

Far be it for ME to "rain on Warren Buffett's parade," but ...

I was struck by a front page story in today's Wall Street Journal titled "Buffett's Crisis-Lending Haul Reaches $10 Billion." But it was actually the summary under "What's News" that initially got my attention: "Buffett's investments during the financial crisis have brought in $10 billion, a pre-tax return of nearly 40%."

Now, to read "40%" would get anyone's attention, except for one thing: there's something missing! And what's that?

TIME! Over what time period was this return realized? The past day, week, month, year, five years?

A return without time
is worthless.

I became suspicious when I read that a loan of $4.4 billion "is expected to net Berkshire a profit of at least $680 million." Sorry, but I'm not really that impressed with these numbers.

We find the following chart included in the article

which highlights six of the companies Mr. Buffett invested in during the crisis. We see the amount invested as well as the profit (from dividends and appreciation). On the surface, to make $9.95 billion on a $25.20 billion investment seems great, but without the element of time, what's the point?

And so, I decided to do my own analysis on the statistics provided. I calculated the cumulative and annualized returns for each investment, and compared them with the S&P 500 for the same period; and what do we see?

With all due respect to the Sage of Omaha, these returns are not terribly impressive. Unless I am missing something, for each investment the S&P 500 did better; in some cases, MUCH better.

An important point regarding my numbers: they start with the month end value for the S&P prior to the month of the initial investment and end at the end of September 2013. If profits were realized much sooner, then these returns would have to be altered. But not knowing this information, I carried it through the end of last month. Are my numbers perfect? Of course not, as I am missing some key information, but they at least do something that is critically important: include the element of time.
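To illustrate the mechanics (the five-year holding period below is my assumption, roughly fall 2008 through September 2013, not a figure from the article):

```python
def annualized_from_profit(invested, profit, years):
    """Annualized return implied by a dollar profit over an assumed period."""
    return ((invested + profit) / invested) ** (1.0 / years) - 1.0

# ~$9.95 billion profit on ~$25.20 billion invested, over an assumed five years:
r = annualized_from_profit(25.20e9, 9.95e9, 5)
print(f"{r:.1%}")  # → 6.9%
```

Note that the article's "nearly 40%" matches the cumulative figure (9.95/25.20 ≈ 39.5%); spread over an assumed five years, the annual figure is far less striking.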

We can never lose sight of the fact that with returns, the associated time period needs to be included, too; otherwise, it's a meaningless statistic. Just as hearing that some baseball player has hit a certain number of home runs means zip without knowing the length of time it took to hit them.

p.s., in addition to time, a benchmark is also critically important, to fully gauge the success of one's investing.

Friday, October 4, 2013

A geometric approach to materiality (Part III)

I didn't anticipate a third posting on this topic, but I received an interesting note from a reader that I wanted to share and comment on:

I disagree with your thoughts on using "arithmetic relative" when considering material differences in portfolio returns. By reporting returns as a percentage, this is already stating a relative figure (to the value of the portfolio). Therefore, I think the "arithmetic absolute" is a better method of determining any material difference.

I like to think of it this way: why should this decision be taken on luck? If an error occurred in a month where performance was close to stale, why should this be more material than one where performance was fortunate enough to be high (or unfortunate enough to have a large negative position)?

For example:

A fund of $100,000,000 has failed to account for a $500,000 cash flow. It has reported returns of 1% in August 2013 and 15% in September 2013. For simplicity let us assume cash flows occur only on the 1st of the month.

If the error occurred in August, the actual return would be 0.49751% (modified Dietz).

If the error occurred in September, the actual return would be 14.4335% (modified Dietz).

Using "arithmetic relative," August would show 50.249% error and September a 3.777% error. So you would probably want to state a material difference if the error occurred in August but not September. However, in monetary terms the error is the same; it’s just luck which month it happened to occur in.

If using "arithmetic absolute" and having a limit of 50 bps a material difference would be stated whichever month the error occurred in. I think this is much more consistent as the error is of the same value.

As an investor, I would be more concerned with the monetary impact. Percentages are a nice way to compare between funds, but at the end of day profit or loss is where my concern would lie. Hence, I still believe "arithmetic absolute" is more appropriate in determining material differences.
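The reader's example can be sketched as follows. The flow is assumed to land on day one and is weighted for the full month, which reproduces the August figure exactly; the reader's September figure differs in the last digits, presumably from a slightly different flow weighting:

```python
def modified_dietz(begin_value, end_value, flow, flow_weight=1.0):
    """Modified Dietz return with a single external flow."""
    return (end_value - begin_value - flow) / (begin_value + flow_weight * flow)

bv, flow = 100_000_000, 500_000
for month, reported in (("August", 0.01), ("September", 0.15)):
    ev = bv * (1 + reported)        # ending value implied by the reported return
    actual = modified_dietz(bv, ev, flow)
    abs_diff = reported - actual    # "arithmetic absolute" difference
    rel_diff = abs_diff / reported  # "arithmetic relative" difference
    print(f"{month}: actual {actual:.4%}, "
          f"absolute {abs_diff:.4%}, relative {rel_diff:.3%}")
```

Both absolute differences exceed roughly 50 bps, so an absolute threshold flags both months; the relative difference, by contrast, collapses from about 50% in August to under 4% in September, which is the crux of the disagreement.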

This reader raises a very interesting point: yes, both the mistake and the resulting magnitude in the error are identical, at least from an absolute perspective, so why wouldn't we treat them the same?

My suggestion that arithmetic relative or, better yet (it appears), geometric, is better than arithmetic absolute to determine materiality for errors has to do with utility theory. That is, does one experience the same reaction when they see an identical difference in absolute terms (e.g., an error of 1.00%) when the returns are low (e.g., 0.25% to 1.25%) versus when they're high (e.g., 27.35% to 28.35%)? I suspect not. Going from 0.25 to 1.25 is a big jump (in relative terms), while from 27.35 to 28.35 the increase does not seem to be as great.

The point, I believe, rests on what your definition of "materiality" is. Not only do the GIPS(R) standards (rightly) fail to prescribe thresholds for materiality (leaving that properly in the hands of the compliant firm), but they also lack a definition for the term. My belief is that in the context of errors, it's a change that would cause the reader to have a different perspective on the information shown. Of course, we all react differently, so it's impossible to know for sure what this would be in every case, so we base it on our own best judgment; perhaps the "prudent man rule" applies here.

As an analogy, if your child came home and said they got an A on an exam, but later said they were mistaken, that it was actually an A- or A+, would your response be significantly different? But what if they said it was actually a C? Should the policy be applied consistently based on the magnitude of the error (50 or 100 bps, for example, in absolute terms), or on the likely response to the error, using our best judgment?

When I teach our firm's attribution class I occasionally address the issue of proportionality, and use weight lifting as an example. I used to regularly lift weights, so I have some familiarity with this topic. If, for example, you're engaged in a particular exercise where you typically begin with 20 lbs, then go to 30, then to 40, you are increasing each time (in absolute terms) by 10 lbs. But, if you do a different exercise where you start with 120 lbs, and go to 130, and then 140, you are again bumping up by 10 lbs each time. But, do you think that you feel the same increase between the different weights? I strongly doubt it. Going from 20 to 40, for example, is a doubling of the weight, while going from 120 to 140 is only a small percentage increase. This analogy isn't perfect, but hopefully it helps.

In reality, if you prefer arithmetic absolute, that's fine with me; most firms seem to use this approach. Plus, it is probably easier to implement.

This exercise has allowed me to devote additional time to this rather interesting topic, to provide some examples, and to craft (what I believe is) the first attempt at a geometric approach (as noted a couple days ago, I'm sure Carl Bacon is proud, and perhaps a bit envious that he didn't think of it first!). I also want to thank our reader for submitting his comments, as they've allowed me to ponder this a bit further and offer some additional perspectives.

Care to chime in? Please do!

p.s., Sadly, I had to stop lifting weights some time ago because I was often accused of using steroids.

p.p.s., A more detailed review of this topic will be presented in this month's newsletters.