In Defense of Cash

There is a debate in the investment community about the merits of Schwab including a cash allocation in its new roboadvisor offering. Let us leave aside the merits of roboadvisors (short answer: they are great for some people and terrible for others) and focus on the idea of an investor holding a steady cash allocation as a percentage of total investable assets. Betterment and Wealthfront, two of the early movers in the roboadvisor space, have piled on Schwab. The upstarts argue the cash allocation is merely a cynical ploy orchestrated by Schwab to generate higher revenues from client accounts. Schwab, meanwhile, argues it is simply a prudent allocation. So, is cash a good or bad investment in a portfolio account? The answer holds implications not just for roboinvestors but for all investors alike, and as the title suggests, I think there is a conclusive answer.

Here is Betterment’s argument against cash:

  • “Cash has a significant chance of a negative real return over time due to inflation risk.”
  • “Cash assets can present a conflict of interest when the investment manager is advising cash and then re-investing it for its own revenue.”
  • “You never hold cash at Betterment, as we use fractional shares. That ensures every dollar—down to the penny—is fully invested in a diversified portfolio of stocks and bonds.”

The crux of these points is ancillary to the true debate. In fact, Betterment’s argument boils down to a marketing stance more than an investment argument. Cullen Roche at Pragmatic Capitalism nicely demonstrates how, over the very long run, cash does in fact generate a nice, non-correlated return for portfolios; yet this is merely the tip of the iceberg in defense of cash. I will do my best to round out the case here.

First, it’s important to note that Warren Buffett would strongly disagree with the roboadvisor assessment of cash. Alice Schroeder offers the following take on Buffett’s perspective: “he thinks of cash differently than the conventional investors. This is one of the most important things I learned from him: the optionality of cash. He thinks of cash as a call option with no expiration date, an option on every asset class, with no strike price.” [emphasis added]. Here we have one of the foremost authorities on Warren Buffett labeling a cash allocation as amongst “the most important” elements of Buffett’s investment prowess. If cash is so important to Warren Buffett, who are these roboadvisors to say otherwise?

While this point is merely an appeal to (quite the) authority, it is worth exploring how and why it is not merely fallacious thinking. For that, we can turn to Claude Shannon, also known as “the father of information theory.” I first cited Shannon in my post explaining how the Kelly Criterion can be used to size positions. Unsurprisingly, this was not Shannon’s only investment insight. One of the more interesting conclusions Shannon reached about investing demonstrates how it is possible to “make money off of a random walk,” with cash as the secret weapon.
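As a quick refresher on that post, here is a minimal sketch of the classic Kelly formula for a simple binary bet (my own illustration, not Shannon’s derivation):

```python
def kelly_fraction(p, b):
    """Kelly bet size f* = (p*b - (1 - p)) / b, where p is the win
    probability and b is the net profit per dollar staked on a win."""
    return (p * b - (1 - p)) / b

# A 55% chance of winning an even-money bet: stake 10% of the bankroll.
print(kelly_fraction(0.55, 1.0))   # 0.10
```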

First let us look at the chart Betterment offered to support its case against cash, as it helps set the stage for why cash is so effective and what Betterment and WealthFront may be missing in building their story:

Notice something about both lines? There is nothing jagged or wavelike about them. Have you ever observed a stock moving in such fashion? Has any actual historical performance looked as smooth as these two lines, other than Madoff’s fund? Sure, this is standard operating procedure for presenting simulations of what forward performance could look like in an optimized portfolio, but it is only useful as a rough guide. Reality assuredly will be different, and while no one can guarantee the end return will differ (though it likely will), we can all guarantee that the path from the bottom left to the top right will be different. The fact is, the path of stock price movements has consequences for portfolio returns (human behavioral consequences aside--that alone could be its own extended blog post). There is considerable evidence behind the notion that in the short run, stock market movements are merely a random walk. This is another way of saying that stock price movements will be noisy and volatile, with up and down days scattered across time following no real, predictable pattern. In many respects, this is one of the more important philosophical underpinnings behind the existence of roboadvisors in the first place. It should then be no wonder that this fact has serious consequences for the benefits of cash as a strategic allocation.

The following explanation of how cash can effectively boost returns comes from Fortune’s Formula by William Poundstone:

Shannon described a way to make money off a random walk. He asked the audience to consider a stock whose price jitters up and down randomly, with no overall upward or downward trend. Put half your capital into the stock and half into a “cash” account. Each day, the price of the stock changes. At noon each day, you “rebalance” the portfolio. That means you figure out what the whole portfolio (stock plus cash account) is presently worth, then shift assets from stock to cash account or vice versa in order to recover the original 50-50 proportion of stock and cash.

To make this clear: Imagine you start with $1,000, $500 in stock and $500 in cash. Suppose the stock halves in price the first day. (It’s a really volatile stock.) This gives you a $750 portfolio with $250 in stock and $500 in cash. That is now lopsided in favor of cash. You rebalance by withdrawing $125 from the cash account to buy stock. This leaves you with a newly balanced mix of $375 in stock and $375 in cash.

Now repeat. The next day, let’s say the stock doubles in price. The $375 in stock jumps to $750. With the $375 in the cash account, you have $1,125. This time, you sell some stock, ending up with $562.50 each in stock and cash.

Look at what Shannon’s scheme has achieved so far. After a dramatic plunge, the stock’s price is back to where it began. A buy-and-hold investor would have no profit at all. Shannon’s investor has made $125.

This scheme defies most investors’ instincts. Most people are happy to leave their money in a stock that goes up. Should the stock keep going up, they might put more of their free cash into the stock. In Shannon’s system, when a stock goes up, you sell some of it. You also keep pumping money into a stock that goes down.

Poundstone then offers a chart of Shannon’s performance in a 50/50 cash/stock portfolio rebalanced once per unit of time:

It turns out the rebalanced portfolio beats the fully invested portfolio while also minimizing volatility. The example above is clearly a far more extreme version of the cash allocation and stock volatility Schwab (or most investors) would take on in a real portfolio; however, even in more subtle form the effect is noticeable and real. Note how jagged, rather than smooth, these lines are. Jagged lumpiness is a reality we all must contend with in financial markets.
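To make the arithmetic concrete, here is a minimal simulation of Shannon’s scheme (my own sketch, not Poundstone’s). It reproduces the two-day example above and then runs a longer trendless walk, with cash assumed to earn nothing:

```python
import math
import random

def rebalanced(returns, cash_frac=0.5):
    """Grow $1, rebalancing back to a fixed stock/cash mix each period."""
    stock, cash = 1 - cash_frac, cash_frac
    for r in returns:                    # r is the stock's gross return
        stock *= r                       # cash earns nothing in this sketch
        total = stock + cash
        stock, cash = total * (1 - cash_frac), total * cash_frac
    return stock + cash

# Poundstone's two-day example: the stock halves, then doubles.
print(rebalanced([0.5, 2.0]))   # 1.125 -> Shannon's $125 profit on $1,000

# A longer trendless walk: the stock halves or doubles with equal odds.
random.seed(1)
walk = [random.choice([0.5, 2.0]) for _ in range(250)]
print(math.prod(walk))          # buy-and-hold outcome of the same walk
print(rebalanced(walk))         # the 50/50 rebalanced portfolio compounds
```

The halve-or-double volatility is absurd by design; with realistic volatility the rebalancing bonus shrinks, but the mechanism is identical.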

Long-term investors of all kinds need to acknowledge how hard it is to predict short-run movements in stocks. Even in a good value investing opportunity with an impending catalyst, one can never know with certainty which way a stock will move. We can rely on “asset classes” in the most general sense to earn a positive return over long enough timeframes, but we can never know in advance how long that long run needs to be. Further, we must acknowledge the unfortunate reality that decades of stagnation in price appreciation are possible even alongside a growing “intrinsic value”—we call this multiple compression. In such an environment (more so than in a trending environment), cash serves as imposed discipline: when rebalancing is automatic, one systematically buys low and sells high.

Clearly rebalancing is part of the roboadvisors' strategy of switching between stocks and bonds when an allocation leaves a tolerance band; however, there are long stretches of time when stocks and bonds are correlated, and meaningful periods when cash would offer not just a strong buffer against volatility but an actual enhancer of return. Poundstone references how counterintuitive Shannon’s methodology appears. As counterintuitive as it may be, it is assuredly true, and the benefits are both actual and behavioral. If you like better returns with less volatility, then cash must be an important component of your portfolio.

Lucky to Live in this Era of Indexation

Last week we were greeted with writings from two of the best investors and thought leaders: Howard Marks of Oaktree and Murray Stahl of Horizon Kinetics. The decades of wisdom that Marks and Stahl share with us youngens via these writings are a gift we must all take advantage of. I am about to grossly oversimplify the points of both of these greats in order to riff off them into a point of my own. I give this warning both to preempt any complaints about my simplification, and as a suggestion to do yourself a favor and read what both of these gentlemen have to say before going one sentence further here. If you are kind and/or interested enough to return to this site once done with those pieces, please feel free to do so.

Since I received a link to Marks’ memo first, my evening reading started there and proceeded to Stahl’s piece. This was a fortunate coincidence. Marks lays out the case for the role luck plays in living life and attaining success in financial markets, tracing it to the idea that markets are mostly efficient, except in those areas with a “lack of information…and competition.” Meanwhile, Stahl examines what he believes to be one of the single largest sources of market inefficiency today, in what he calls “indexation.” After reading both pieces, I couldn’t help but think: “we are lucky to be investors in markets in this era of indexation.” This one thought struck me as the perfect conjunction between the two pieces.

Stahl has used the word “indexation” to explain the phenomenon whereby more assets and managers are investing in indices and ETFs which are designed to “provide portfolio exposure to very specific criteria, such as an asset class, an industry sub-sector, a growth metric, a stock market capitalization band, and so forth.” Over time, Stahl has discovered and invested in several of the inefficiencies resulting from such a phenomenon, including the “owner-operator” whose major stockholder manages the company, spin-offs designed to streamline business operations, etc. I recommend reading Stahl as to why these opportunities arise in today’s market.

Why do I say we are lucky to invest in this era of indexation? Because, as Stahl argues, indexation is an incredible source of market inefficiency. As more and more dollars seek out exposure in the broadest of ways, there is ample opportunity for those of us who seek to “turn over as many rocks as possible” to find the right opportunity. Two of my favorite setups fit this bill, although I never specifically delineated these ideas in writing as an outgrowth of indexation. This is so because both setups have existed as long as there have been markets, and are in many respects traceable to behavioral traits of human beings. What has changed is that indexation provides a natural outlet through which these behavioral weaknesses are even more pronounced than in years past. I have named these setups “Guilty by Association” and “I’ve got a Label, but I don’t Subscribe.” While there are similarities between the two, they deserve to be thought about separately.

"Guilty by Association"

When a company is “Guilty by Association,” it is treated the same way as a more identifiable peer group or index solely because of some kind of perceived proximity. These tend to be situations that are more macro in nature, where a broader problem is reflected onto a specific company or sector. Some examples might be helpful.

During the crisis period in Europe, all European stocks were hit with equal force. The market “threw the good out with the bad” so-to-speak. One particular class of opportunities we spent considerable time on (and ultimately made significant investments in) was businesses listed in Europe, with a revenue base that was largely global. In other words, these were companies that traded in Europe, though they did the majority of their business outside of Europe itself. In these situations, there was selling, even from investors not situated in Europe, due to fears about the Eurozone’s viability. Yet, these companies themselves were in a position where if the Euro actually collapsed, they were unlikely to be significantly impacted in a negative way. In other words, they were “Guilty by Association” with the currency in which their shares were priced.

Another example is the hatred of muni bonds in today’s environment. The entire asset class is hated due to concerns about Detroit’s bankruptcy and Puerto Rico’s solvency woes. Because Detroit and Puerto Rico are municipalities, conventional investment wisdom holds that municipal bonds in the general sense must therefore be in trouble. This kind of extrapolation is abundant and wrong.

Indexation impacts these areas because people who invest in broad-based ETFs or indices sell their exposure entirely in order to avoid the perceived fear. In doing so, the selling of the basket forces mechanical selling of all the subsidiary components without consideration for which specific constituents are and are not impacted on a fundamental level by the feared problem. Thus the good gets thrown out with the bad and is “guilty by association.”

"I've got a Label, but I don't Subscribe"

This is the micro twin of “guilty by association.” Since so much money is moving into ETFs, and ETFs are trading with all kinds of sector and niche labels, there is pressure to fit each and every company into some kind of cookie-cutter genre. These labels impact how analysts and investors alike think about specific companies. Stocks get assigned to analysts based on the “sector” they cover, and many investors invest in sectors or companies that are in accordance with a specific mandate. I had been planning a blog post for a while called “Beware of Labels,” but I think all of those points would better fit the context of this post. One of the biggest misnomers in today’s markets is the “technology” label.  Mr. Market today dumbs "technology" down to mean: a) any company that is on the Internet; and/or, b) any company that makes hardware.

In my opinion, there simply is no such thing as an Internet company. There are retail companies that operate on the Internet (and at this point, is there a single retail company that doesn’t?), there are B2B companies that use the Internet to offer their services, and there are financial companies that provide web-based platforms. To ascribe the label “Internet” to one company and not another merely reflects the fact that some companies are old and some are new. And even that is an oversimplification, for there are older Internet companies still called as much despite being more analogous to marketing companies. And yet somehow, all these various, wide-ranging businesses end up with the “Technology” label despite the fact that their differences are far more pronounced and abundant than their similarities.

In a perfect world, we would throw away the technology label and call these companies what they are, whether that be media, retail, etc., but this isn’t a perfect world, and that creates opportunities for us investors seeking out inefficiencies. Heck, the “Telecommunications” sector is somehow a sub-sector of “Technology” and includes a company as old as AT&T (though I am aware today’s AT&T was actually one of the Baby Bells that ended up swallowing Mama whole). The biggest impact labeling has is on how analysts model these companies and on the types of investors who are drawn to (or pushed away from) different sectors. We all know how popular comparables analysis is, and it too becomes incredibly misleading when similarities and differences are conflated with one another.

An example of this would be my investment experience with Google. Over the past few years, Mr. Market has called Google an “Internet stock” and a “one-trick-pony” at that. To that end, analysts and investors alike oversimplified in comparing Google only to other Internet stocks, and in a perceived battle against Apple, this same community viewed the company as out of its league (See GigaOM, CBS News and HBR on the "one-trick-pony"). I took a different perspective: Google is more akin to a media company whose advantage lies in the infrastructure and distribution side. Wikipedia describes media as “the storage and transmission channels or tools used to store and deliver information or data.” This certainly seems like an apropos description of Google, and it’s more clearly reflective of who pays Google money at the end of the day--advertisers, much like how we think about “traditional” media. If you think about Google this way, and realize one of the company’s crucial advantages is in how it stores, aggregates, categorizes and distributes information, it’s clear that Google does and can do far more things than “just” search. YouTube is a natural fit in this type of company, more so than just an Internet or search company, and as such, it leverages the advantages of Google’s platform while also leaving open the opportunity for Google to naturally segue into other areas altogether. Within that context, Google looks far less like a one-trick-pony, YouTube’s valuation becomes increasingly important (see my writeup on the importance of YouTube), and the company is in fact more diverse and capable beyond “just” search.

Labeling is a human endeavor, something we do in many disparate fields. One of the most well known is biological taxonomy (I think every adult still remembers “King Phillip came over for good spaghetti”), which is an organizational hierarchy. While labels have always been used in stock markets, only now are they actual forces behind the mechanical allocation of capital, thanks to the proliferation of ETFs and “indexation.” Even in biology, there are blurred lines between different species. This is but one reason we have seen a great increase in spin-offs: when a company that is thought of, and thus modeled, “that” way has a subsidiary that doesn’t fit the bigger mold, that subsidiary tends to be “underappreciated” by Mr. Market.

Market Inefficiencies

A lot of people, myself included, like ripping on the Efficient Market Hypothesis. This is not without merit; however, as Marks emphatically argues, there is much truth and wisdom in the idea that market participants are in fact really good at incorporating known information into the price of securities. When we look to make investments, we must then do what Marks implies in another of his spectacular memos and ask ourselves, “what is the mistake that makes this a mispriced investment opportunity?” With these two examples based on the problems associated with Stahl’s “indexation,” we have two areas in the abstract within which we can identify mistakes. To that end, we are lucky to live in this era of indexation for how it leads the market to repeatedly and mistakenly misvalue companies.

 

Disclosure: Long Google

Michael Mauboussin on the Santa Fe Institute and Complex Adaptive Systems

Michael Mauboussin of Credit Suisse is one of the best strategists on Wall Street and a thought leader who consistently introduces some of the most compelling topics to the financial community. It is therefore no surprise Mauboussin is now Chairman of the Board of Trustees at the Santa Fe Institute, an organization which specializes in the multi-disciplinary study of complex adaptive systems. I recently had the privilege of interviewing Mauboussin about his involvement with the Santa Fe Institute and his thoughts on complexity. Enjoy (and be sure to follow the links to some fascinating further readings):

 

Elliot: Now that you’re Chairman of the Board of Trustees at the Santa Fe Institute, what are your goals and visions for how to more broadly inject SFI’s lessons on complexity into the financial community’s understanding of markets?

Michael: In my role at SFI, the primary goal is to make sure that the Institute does, and can, do great science. The unifying theme is the study of complex adaptive systems. But the goal is to have a place where there’s support for important, transdisciplinary research. 

That said, I would love to continue to see this type of thinking work its way into our understanding of financial markets. That is happening to some degree. One example is Andrew Lo’s work on the Adaptive Market Hypothesis. Another example is Blake LeBaron’s work on markets using agent-based models. I think it’s a more complete way of viewing markets than a standard rational agent model or the assumption of the absence of arbitrage. The problem is that modeling complex adaptive systems is a lot messier than those other approaches.

Elliot: When we last met at an event introducing The Success Equation to SFI members in New York, I asked you what the right “success equation” is for a young investor. Your response was to “keep coming to these events.” How did you first learn about the Santa Fe Institute? And how did you come to embrace the SFI?

Michael: I first learned about SFI in 1995 at a Baltimore Orioles baseball game, where Bill Miller was my host and the proselytizer. He explained how this new research group dedicated to the study of complex systems was coming up with cool and useful insights about business and markets. Specifically, he was taken with Brian Arthur’s work on “increasing returns.” This work showed that under some conditions returns actually move sharply away from the mean. This is counter to classic microeconomic thinking that assumes returns are mean-reverting. 

In many ways I was primed for the message. I had been doing a lot of reading, especially in the area of science, and so this way of thinking made sense to me from the beginning.

Elliot: Did you have a bias towards one market philosophy before you adopted the complex adaptive system mental model?  

Michael: Although I had a solid liberal arts background before starting on Wall Street, I had very little background in business or finance. As a result, I had few preconceived notions of how things worked. It’s a challenge to come up with clear conclusions based on an observation of what happens in markets. On the one hand, you see clear evidence that some people do better than the indexes and that there are patterns of booms and crashes over the centuries. These suggest that markets are inefficient. On the other hand, there’s also clear evidence that it’s really hard to beat the market over time, and that the market is more prescient than the average investor. So for me, at least, there was an intellectual tug of war going on in my head. 

I have to admit to being struck by the beauty of the efficient markets hypothesis as described by the economists at the University of Chicago. At the forefront of this, of course, was Eugene Fama, who recently won the Nobel Prize in part for his work in this area. What’s alluring about this approach is that it comes with a lot of mental models. You can equate risk with volatility. You can build portfolios that are optimal relative to your preference for risk. And so forth. Because you can assume that prices are an unbiased estimate of value, you can do a lot with it. The market’s amazing ability to impound information into prices impresses me to this day.

So it was with this mental tug of war as a backdrop that I learned about the idea of complex adaptive systems. Suddenly, it all clicked into place. A simple description of a complex adaptive system has three parts. First, there are heterogeneous agents. These can be ants in an ant colony, neurons in your brain, or investors in a market. Second, these agents interact, leading to a process called “emergence.” Third, the product of emergence is a global system that has properties and characteristics that can’t be divined solely by looking at the underlying agents. Reductionism doesn’t work.

What instantly drew me to this way of thinking is that it describes markets very well and it is very common in nature. The central beauty of this approach is that it provides some sense of when markets are likely to be efficient—in the classic sense—and when inefficiencies will creep in. Specifically, markets tend to be efficient when the agents operate in a truly heterogeneous fashion and the aggregation mechanism is working smoothly. Diversity is essential, both in nature and in markets, and the system has to be able to take advantage of that diversity. There are some neat examples in experimental economics to show how this works. It’s really wondrous. 

On the flip side, when you lose diversity the system can become very inefficient. And that’s also what we see in markets—diversity loss leads to booms and crashes. Now the loss in diversity can be sociological, in which we all start to believe the same thing, or it can be technical, such as the winding up or winding down of a leverage cycle. But here we have a framework that accommodates the fact that markets are pretty darned good with the fact that they periodically go haywire. And SFI was at the center of this kind of thinking.

Elliot: It’s interesting that your answer on what theory of markets you subscribe to is not in the “black or white” vein whereby one must be in one camp and one camp only. It seems like much of the divisiveness in today’s discourse (in many arenas) stems from people’s unwillingness to see these kinds of shades of grey, though as you suggest, that mentality is not for everyone.  Do you meet resistance from people when explaining your stance? Is there a way to get others to embrace “complexity” when people have an innate desire for linear, orderly explanations that are essentially either/or answers? 

Michael: Most of us are uncomfortable with ambiguity—we’d rather just have a point of view and stick to it. But in markets, the real answer clearly lies between the folks who believe that markets are perfectly efficient and those who believe it’s largely inefficient. By the way, if you think the market is mostly inefficient there is no reason to participate because even if you have a sense that you are buying a dollar at a discount there is no assurance that the market will ever recognize that value. So some degree of market efficiency is essential even for those who believe that markets have inefficiencies. 

My goal is less to get people to change their view and more to establish a better understanding of how things work. Once you learn about markets as a complex adaptive system and appreciate the implications, it is difficult to go back to a more traditional point of view.

Elliot: In More Than You Know, you said, “The best way to describe how I feel following a SFI symposium is intellectually intoxicated.” Are there steps you take following these events to transform the ideas you’ve learned and the relationships you’ve built into expanding the scope of your own knowledgebase? And how are you able to harness this intoxication into productive output? 

Michael: I wish I could be more systematic in this regard, but I think it’s fair to say that the ideas from SFI have permeated every aspect of my work. Perhaps a couple of examples will help make the point.

I’ve already mentioned conceptualizing markets as a complex adaptive system. This alone is a large step, because rather than simply moaning about the limitations of standard finance theory, you have a framework for thinking about what’s going on.

I’ve also already mentioned Brian Arthur’s work on increasing returns. Many businesses are being defined less by their specific market segment and more by the ecosystem they create. And it is often the case that in a battle of ecosystems, one will come out on top. So this set of steps provides a mental model to understand the process of increasing returns and, as important, how to identify them in real time.

Ideas from SFI have inspired my work in many other ways, from understanding power law distributions in social systems to network theory to collective decision making to the processes underlying innovation. I could go on. But suffice it to say that there is hardly an area of markets, business, or decision making where your thinking wouldn’t be improved by learning, and internalizing, the kinds of ideas coming out of the SFI.

Elliot: In More Than You Know, you also introduce Charlie Munger and SFI as “Two sources in particular [that] have inspired my thinking on diversity. The first is the mental-models approach to investing, tirelessly advocated by Berkshire Hathaway's Charlie Munger. The second is the Santa Fe Institute (SFI), a New Mexico-based research community dedicated to multidisciplinary collaboration in pursuit of themes in the natural and social sciences.” It seems only natural that adopting Charlie Munger’s perspective to mental models would lead one to the SFI. Can you talk about the synergies between these two worldviews in making you a better analyst? What role did your adoption of Munger’s framework play in your attraction to the SFI?

Michael: Charlie Munger is a very successful businessman. Probably the first thing to note about him is that he reads constantly. He’s a learning machine. There’s bound to be a good outcome if you dedicate yourself to reading good stuff over a long period of time. That alone should be inspiring.

So as I think about the synergies between the worldviews, a few thoughts come to mind. First, it’s essential to provide your mind with good raw material. That means exposing yourself to a lot of disciplines and learning the key tenets. It also means spending time with people who think differently than you do. 

Second, you have to be willing and able to make connections. What are the similarities between disease and idea propagation? What can an ant colony teach me about innovation? What do physical phenomena, such as earthquakes, tell us about social phenomena, such as stock market crashes? You need good raw material to make connections, but you also have to be careful to avoid superficial links.

Finally, there is the idea of thinking backwards. Munger is a big advocate for this. You observe that something is where it is: How did it get there? Why did it get there? There are some fascinating challenges in this regard right now. We know, for example, that the sizes of cities and companies follow power laws. Why? By what mechanism does this happen? No one really knows, and the prospect of solving those kinds of challenges is exciting.

But I have to finish with the point that this approach to the world is not for everyone. The interest or capability to work in this fashion is far from universal. So I wouldn’t recommend this to everybody. Rather, I would encourage it if you have a proclivity to think this way.  

Elliot: You talk of a benefit of the mental models approach as having a diverse array of models that you can fit a given situation, rather than fitting a given situation to a one-size-fits-all model.  Can you shed some insight on a) how you built up your quiver of models; b) how you organize these models (either mentally or tangibly); and c) how you choose which model to use in a given situation?

Michael: Yes, I think the metaphor is that of a toolbox. If you have one tool only, you’ll try to apply it to all of the problems you see. And we all know people who are just like that.

The mental models approach seeks to assemble a box with many tools. The idea is to learn the big ideas from many disciplines. What are the main ideas from psychology? Sociology? Linguistics? Anthropology? Biology? And on and on. In many cases you don’t have to be a deep expert to get leverage from a big idea. One of my favorite examples is evolution. Spend some time really understanding evolution. It is a mental model that applies broadly and provides insights that other approaches simply can’t.

I’m not sure I’m much of an example, but I have strived to read widely. This in part has been inspired by the people and ideas I have encountered at SFI. Most of my organization comes through writing or teaching. For me, that is a way to consolidate my understanding. If I can’t effectively write or teach something, I don’t understand it. Now I’m sure I write about things I don’t understand as well, but I try my best to represent the science as accurately as possible.

As for choosing the right model, the key there is to look for a fit. One concept that intrigues me is that nature has taken on and solved lots of hard problems, and there’s a lot we can learn from observing how nature works. So you might learn how to run a committee more effectively if you understand the basic workings of a honeybee colony. Or you might have insight about the resources your company should allocate to experimentation by examining ant foraging strategies. 

The risk is that you take the wrong tool out of the toolbox. But I think that risk is a lot smaller than the risk of using the same tool over and over. I’ll also mention that the work of Phil Tetlock, a wonderful psychologist at the University of Pennsylvania, suggests that so-called “foxes,” people who know a little about a lot of topics, tend to be more effective forecasters than so-called “hedgehogs,” those with a single worldview. So not only is this an intellectually appealing way to go, there’s solid evidence that it’s useful in the real world. 

Elliot: When you cite how Brian Arthur’s work “showed that under some conditions returns actually move sharply away from the mean. This is counter to classic microeconomic thinking that assumes returns are mean-reverting,” it makes me think about feedback loops and this passage from More Than You Know: “Negative feedback is a stabilizing factor, while positive feedback promotes change. Too much of either type of feedback can leave a system out of balance.” Positive feedback loops are seemingly the force that drives conditions away from the mean. How can we think about feedback loops in a more constructive way, and are there steps we can take to understand when/where/how they will appear? As a follow-up, is there a good mental model for thinking about when and how breakpoints appear in feedback loops?

Michael: There’s been a great deal written about this idea, albeit not necessarily using this exact language. One classic work on this is Everett Rogers’s book, Diffusion of Innovations. He was one of the first to describe how innovations—whether a new seed of corn or an idea—spread. From this a lot of other ideas emanated, including the idea of a tipping point, where momentum for diffusion accelerates. 

The Polya urn model is also useful in this context. A basic version of the model starts with balls of two colors, say black and white, in an urn at some ratio. You then randomly select one ball, match it with a ball of the same color, and replace it. For example, say you started with 3 black balls and 3 white balls, so 50 percent of the balls are black. Now you draw a ball, observe that it’s black, and return it to the urn with an additional black ball. So the percentage of black balls is now 57 percent (4/7). 

This urn model is very simple but demonstrates the principles behind positive feedback nicely. Specifically, it’s nearly impossible in advance to predict what’s going to happen, but once one color gets ahead sufficiently, it dominates the outcomes. (You can play a little more sophisticated version here.) It’s interesting to hit the simulator over and over to simply observe how the outcomes vary.
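(To see this lock-in for yourself, here is a minimal simulation of the basic urn described above; the sketch is mine, not Michael’s.)

```python
import random

def polya_urn(black=3, white=3, draws=500):
    """Draw a ball at random, then return it with another of the same color."""
    for _ in range(draws):
        if random.random() < black / (black + white):
            black += 1
        else:
            white += 1
    return black / (black + white)

random.seed(0)
# Ten identical starting urns lock onto very different long-run shares.
print(sorted(round(polya_urn(), 2) for _ in range(10)))
```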

Another area where this model pops up is in format, or standard, wars. The classic example is Betamax versus VHS, but there are plenty of examples throughout history. Here again, as one standard gets ahead, positive feedback often kicks in and it wins the war.   

Now I don’t think there’s any easy way to model positive feedback, but these are some of the mental models that may help one consider what’s going on.

Elliot: You talk about Munger’s advice to think backwards and invert. I think your first book was Expectations Investing which provided a framework for estimating the embedded assumptions in an equity’s price. Yet you also warn that this way of thinking isn’t for everyone. Was this something you realized after sharing the ideas with many or were you always aware of this? Do you have any ideas for why this has a relatively narrow audience? Is there a natural tie-in to the behavioral biases of humans and why this doesn’t work for everyone? (For example, the human proclivity towards the narrative bias to explain past events) And if so, how can we think backwards more rationally and overcome these biases?

Michael: Steven Crist, the well-known handicapper, has a line about horse race bettors in his essay, “Crist on Value,” that I love to repeat. He says, “The issue is not which horse in the race is the most likely winner, but which horse or horses are offering odds that exceed their actual chances of victory. This may sound elementary, and many players may think they are following this principle, but few actually do.” Take out the word “horse” and insert the word “stock” and you’ve captured the essence of the problem. 

Our natural tendency is to buy what is doing well and to sell what is doing poorly. But as Crist emphasizes, it doesn’t really matter how fast the horse will run, it matters how fast the horse will run relative to the odds on the tote board. Great investors separate the fundamentals from the expectations, and average investors don’t. Most of us are average investors.   

My advice, then, is to try to be very explicit about segregating the fundamentals and the expectations. Sometimes high expectations stocks are attractive because the company will do better still than what’s in the price. Great. That’s a buy. Sometimes there are stocks with low expectations that are dear because the company can’t even meet those beat down results. That’s called a value trap. So, constantly and diligently ask and answer the question, “what’s priced in?” Doing so is very helpful.  

Learning Risk and the "Limits to Forecasting and Prediction" With the Santa Fe Institute

Last October, I had the privilege of attending the Santa Fe Institute and Morgan Stanley's Risk Conference, and it was one of my most inspiring learning experiences of the year (read last year's post on the conference and, separately, my writeup of Ed Thorp's talk about the Kelly Criterion). It's hard not to marvel at the brainpower concentrated in a room with some of the best practitioners from a variety of multi-disciplinary fields, ranging from finance to physics to computer science and beyond. I would like to thank Casey Cox and Chris Wood for inviting me to these special events.

I first learned about the Santa Fe Institute (SFI) from Justin Fox's The Myth of the Rational Market. Fox concludes his historical narrative of economics and the role the efficient market hypothesis played in leading the field astray with a note of optimism about the SFI's application of physics to financial markets. Fox highlights the initial resistance of economists to the idea of physics-based models (including Paul Krugman's lament about "Santa Fe Syndrome") before explaining how the profession has in fact taken a tangible shift towards thinking about markets in a complex, adaptive way.  As Fox explains:

These models tend to be populated by rational but half-informed actors who make flawed decisions, but are capable of learning and adapting. The result is a market that never settles down into a calmly perfect equilibrium, but is constantly seeking and changing and occasionally going bonkers. To name just a few such market models...: "adaptive rational equilibrium," "efficient learning," "adaptive markets hypothesis," "rational belief equilibria." That, and Bill Sharpe now runs agent-based market simulations...to see how they play out.

The fact that Bill Sharpe has evolved to a dynamic, in contrast to equilibrium-based perspective on markets and that now Morgan Stanley hosts a conference in conjunction with SFI is telling as to how far this amazing multi-disciplinary organization has pushed the field of economics (and importantly, SFI's contributions extend well beyond the domain of economics to areas including anthropology, biology, linguistics, data analytics, and much more). 

Last year's focus on behavioral economics provided a nice foundation upon which to learn about the "limits to forecasting and prediction." The conference once again commenced with John Rundle, a physics professor at UC-Davis with a specialty in earthquake prediction, speaking about some successful and some failed natural disaster forecasts (Rundle operates a great site called OpenHazards). Rundle first offered a distinction between forecasting and prediction: whereas a prediction is a statement validated by a single observation, a forecast is a statement for which multiple observations are required to establish a confidence level.

He then offered a decomposition of risk into its two subcomponents: Risk = Hazard × Exposure. The hazard component relates to your forecast (i.e., the potential for being wrong), while the exposure component relates to the magnitude of your risk (i.e., how much you stand to lose should your forecast be wrong). I find this a particularly meaningful breakdown considering how many colloquially conflate hazard with risk while ignoring the multiplier effect of exposure.
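To illustrate with my own numbers rather than Rundle's: a trade with a 10% chance of the thesis being wrong (the hazard) and $1 million at stake (the exposure) carries $100,000 of expected loss; halve the position and the risk halves, even though the forecast, and hence the hazard, is unchanged.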

As I did last year, I'll share my notes from the presentations below. Again, I want to make clear that my notes are geared towards my practical needs and are not meant as a comprehensive summation of each presentation. I will also look to do a second post which sums up some of the questions and thoughts that have been inspired by my attendance at the conference, for the truly great learning experiences tend to raise even more questions than they do offer answers.

Antti Ilmanen, AQR Capital

With Forecasting, Strategic Beats Tactical, and Many Beats Few

Small, but persistent edges can be magnified by diversification (and to a lesser extent, time). The bad news is that near-term predictability is limited (and humility is needed) and long-term forecasts which are right might not setup for good trades. I interpret this to mean that the short-term is the domain of randomness, while in the long-term even when we can make an accurate prediction, the market most likely has priced this in.

Intuitive predictions inherently take longer time-frames. Further, there is performance decay whereby good strategies fade over time. In order to properly diversify, investors must combine some degree of leverage with shorting. Ilmanen likes to combine momentum and contrarian strategies, and prefers forecasting cross-sectional trades rather than directional ones.

When we make long-term forecasts for financial markets, we have three main anchors upon which to build: history, theory, and current conditions. For history, we can use average returns over time; for theory, we can use CAPM; and for current conditions, we can apply the DDM. Such forecasts are as much art as they are science, and the relative weights of each input depend on your time horizon (i.e., the longer your timeframe, the less current conditions matter for the eventual accuracy of your forecast).

Historically the Equity Risk Premium (ERP) has averaged approximately 5%, and in today's environment the inverse Shiller CAPE (aka the cyclically adjusted earnings yield) is approximately 5%, meaning that 4-5% long-run returns in equity markets are justifiable, though ERPs have varied over time. Another way to look at projected returns is through the expected return of a 60/40 (60% equities / 40% bonds) portfolio. This is Ilmanen's preferred methodology, and in today's low-rate environment it points to a 2.6% long-run return.
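As a back-of-the-envelope check on that 2.6% figure (the inputs below are my illustrative assumptions, not Ilmanen's):

```python
# Illustrative inputs, not Ilmanen's: ~4% long-run return for equities
# (per the ERP discussion above) and a small positive return for bonds
# roughly reproduce the 2.6% figure for a 60/40 portfolio.
stocks, bonds = 0.040, 0.005
print(0.6 * stocks + 0.4 * bonds)   # 0.026 -> a 2.6% long-run return
```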

In forecasting and market positioning, "strategic beats tactical." People are attracted to contrarian signals, though the reality of contrarian forecasting is disappointing. The key is to try and get the long-term right, while humbly approaching the tactical part of it. Value signals like the CAPE tend to be very useful for forecasting. To highlight this, Ilmanen shared a chart of the 1/CAPE vs. the next five year real return.

Market timing strategies have "sucked" in recent decades. In equity, bond and commodity markets alike, Sharpe Ratios have been negative for timing strategies. In contrast, value + momentum strategies have exhibited success in timing US equities in particular, though most of the returns happened early in the sample and were driven more by the momentum coefficient than value. Cheap starting valuations have resulted in better long-run returns due to the dual forces of yield capture (getting the earnings yield) and mean reversion (value reverting to longer-term averages). 

Since the 1980s, trend-following strategies have exhibited positive long-run returns. Such strategies work best over 1-12 month periods, but not longer. Cliff Asness of AQR says one of the biggest problems with momentum strategies is how people don't embrace them until too late in each investment cycle, at which point they are least likely to succeed. However, even in down market cycles, momentum strategies provided better tail-risk protection than did other theoretically safe assets like gold or Treasuries.  This was true in eight of the past 10 "tail-risk periods," including the Great Recession.

In an ode to diversification, Ilmanen suggested that investors "harvest many premia you believe in," including alternative asset classes and traditional capital markets. Stocks, bonds and commodities exhibit similar Sharpe Ratios over long time-frames, and thus equal-weighting an allocation to each asset class would result in a higher Sharpe than the average of the constituent parts. We can take this one step farther and diversify amongst strategies, in addition to asset classes, with the four main strategies being value, momentum, carry (aka high yield) and defensive.
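The arithmetic behind that claim is worth seeing. Under textbook assumptions of equal volatilities and equal Sharpe Ratios, the equal-weighted portfolio's Sharpe depends only on the number of assets and their pairwise correlation (a sketch of the standard formula, not AQR's model):

```python
import math

def equal_weight_sharpe(s, n, rho):
    """Sharpe Ratio of an equal-weighted mix of n assets, each with
    Sharpe s, equal volatility, and pairwise correlation rho."""
    return s * n / math.sqrt(n + n * (n - 1) * rho)

print(equal_weight_sharpe(0.4, 3, 1.0))   # 0.40 -- perfect correlation, no benefit
print(equal_weight_sharpe(0.4, 3, 0.2))   # ~0.59 -- modest correlation helps
print(equal_weight_sharpe(0.4, 3, 0.0))   # ~0.69 -- uncorrelated assets help most
```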

Over the long-run, low beta strategies in equities have exhibited high returns, though at the moment low betas appear historically expensive relative to normal times.  That being said, value as a signal has not been useful historically in market-timing.

If there are some strategies that exhibit persistently better returns, why don't all investors use them? Ilmanen highlighted the "4 c's" of conviction, constraints, conventionality and capacity as reasons for opting out of successful investment paths.

 

Henry Kaufman, Henry Kaufman & Company

The Forecasting Frenzy

Forecasting is a long-term human endeavor, and the forecaster in the business/economics arena is from the same vein as soothsayers and palm readers. In recent years, the number of forecasters and forecasts alike has grown tremendously. Sadly, forecasting continues to fail due to the following four behavioral biases:

  1. Herding--forecasts minimally fluctuate around a mean, and few are ever able to anticipate dramatic changes. When too many do anticipate dramatic changes, the path itself can change preventing such predictions from coming true.
  2. Historical bias--forecasts rest on the assumption that the future will look like the past. While economies and markets have exhibited broad repetitive patterns, history "rhymes, but does not repeat."
  3. Bias against bad news--No one institutionally predicts negative events, as optimism is a key biological mechanism for survival. Plus, negative predictions are often hard to act upon. When Kaufman warned of interest rate spikes and inflation in the 1970s, people chose to tune him out rather than embrace the uncomfortable reality. 
  4. Growth bias--stakeholders in all arenas want continued expansion and growth at all times, even when it is impractical.

Collectively, the frenzy of forecasts has far outpaced our ability to forecast. With long-term forecasting, there is no scientific process for making such predictions. An attempt to project future geopolitical events based on the past is a futile exercise. In economics, fashions contribute to unsustainable momentums, both up and down, that lead to considerable challenges in producing accurate forecasts.

Right now, Kaufman sees some worrying trends in finance. First is the politicization of monetary policy, and he fears this will not reverse soon. The tactics the Fed is undertaking today are unprecedented and becoming entrenched. The idea of forward guidance in particular is very dangerous, for it relies entirely upon forecasts. Since it is well established that even expert forecasts are often wrong, logic dictates that the entire concept of forward guidance is premised on a shaky foundation. Second, monetary policy has eclipsed fiscal policy as our go-to remedy for economic troubles. This is so because people like the quick and easy fixes offered by monetary solutions, as opposed to the much slower fiscal ones. In reality, the two (fiscal and monetary policy) should be coordinated. Third, economists are not paying enough attention to increasing financial concentration. There are fewer key financial institutions, and each is bigger than what used to be regarded as big. If/when the next one fails, and the government runs it through the wind-down process, those assets will end up in the hands of the remaining survivors, further concentrating the industry.

The economics profession should simply focus on whether we as a society will have more or less freedom going forward. Too much of the profession instead focuses on what the next datapoint will be. In the grand scheme of things, the next datapoint is completely irrelevant, especially when the "next" completely ignores any revisions to prior data.  There is really no functional, or useful purpose for this type of activity.

 

Bruce Bueno de Mesquita, New York University

The Predictioneer's Game

The standard approach to making predictions or designing policy around questions about the future is to "ask the expert." Experts today are simply dressed-up oracles. They know facts, history and details, but forecasts require insight and methods that experts simply don't have. The accuracy of experts is no better than throwing darts.

Good predictions should use logic and evidence, and a better way to do this is using game theory. This works because people are rationally self-interested, have values and beliefs, and face constraints. Experts simply cannot analyze emotions or account for skills and clout in answering tough geopolitical questions. That being said, game theory is not a substitute for good judgment and it cannot replace good internal debate.

People in positions of power have influencers (like a president and his/her cabinet). In a situation with 10 influencers, there are 3.6 million possible interactions in a complex adaptive situation (meaning what one person says can change what another thinks and does). In any single game, there are 16 x (N^2 - N) possible predictions, where N is the number of players.
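The scale here is easy to verify; note that tying the 3.6 million figure to 10! is my inference, not his statement:

```python
import math

def possible_predictions(n):
    """Bueno de Mesquita's count of possible predictions in one game."""
    return 16 * (n**2 - n)

print(possible_predictions(10))   # 1,440 predictions for ten players
print(math.factorial(10))         # 3,628,800 orderings of ten influencers,
                                  # apparently the "3.6 million interactions"
```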

In order to build a model that can make informed predictions, you need to know who the key influencers are. Once you know this, you must then figure out: 1) what they want on the issue; 2) how focused they are on that particular problem; 3) how influential each player could be, and to what degree they will exert that influence; and, 4) how resolved each player is to find an answer to the problem.  Once this information is gathered, you can build a model that can predict with a high degree of accuracy what people will do.  To make good predictions, contrary to what many say, you do not need to know history. It is much like a chessmaster who can walk up to a board in the middle of a game and still know what to do next.

With this information, people can make better, more accurate predictions on identified issues, while also gaining a better grasp for timing. This can help people in a game-theory situation come up with strategies to overcome impediments in order to reach desired objectives.

Bueno de Mesquita then shared the following current predictions:

  • Senkaku Island dispute between China and Japan - As a relevant aside, Xi Jinping's power will shrink over the next three years. Japan should let their claims rest for now, rather than push. It will take two years to find a resolution, which will most likely include a joint venture between Japan and China for expropriation of the natural gas reserves.
  • Argentina - The “improvements” in today’s business behavior are merely aesthetic in advance of the key mid-term elections. Kirchner is marginalizing political rivals, and could make a serious move to consolidate power for the long-term.
  • Mexico - There is a 55% chance of a Constitutional amendment to open up energy, a 10% chance of no reform, and a 35% chance for international oil companies to get deep water drilling rights.  Mexico is likely to push through reforms in fiscal policy, social security, energy, labor and education, and looks to have a constructive backdrop for economic growth.
  • Syria with or without Assad will be hostile to the Western world.
  • China will look increasingly inward, with modest liberalization on local levels of governance and a strengthening Yuan.
  • The Eurozone will have an improving Spain and a higher likelihood that the Euro currency will be here to last.
  • Egypt is on the path to autocracy.
  • South Africa is at risk of turning into a rigged autocracy.

 

Aaron Clauset, University of Colorado and SFI

Challenges of Forecasting with Fat-Tailed Data

(Please note: statistics is most definitely not my strong suit. The content in Clauset's talk was very interesting, though some of it was over my head. I will therefore try my best to summarize the substance based on my understanding of it)

In attempting to predict fat-tail events, we are essentially trying to "predict the unpredictable." Fat tails exhibit high variance, so the average of a sample of data does not represent what is seen numerically. In such samples, there is a substantial gap between the two extremes of the data, and we see these distributions in book sales (best-sellers like Harry Potter), earthquakes (power law distributions), market crashes, terror attacks and wars. With earthquakes, we know a lot about the physics behind them and how they are distributed, whereas with war we know it follows some statistical pattern, but the data is dynamic rather than fixed, since certain events influence subsequent events.

Clauset approached the question of modeling rare events through an attempt to ascertain how probable 9/11 was, and how likely another one is. The two sides of answering this question are building a model (to discover how probable it was) and making a prediction (to forecast how likely another would be). For the purposes of the model, one cares only about large events because they have disproportionate consequences. When analyzing the data, we don't know what the distribution of the upper tail looks like because there simply are not enough datapoints. To overcome these problems, the modeler needs to separate the tail from the body, build a multiple-tail model, bootstrap the data, and repeat.
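As a toy version of the bootstrap step (a deliberately simplified sketch, not Clauset's actual tail model), resampling the data over and over shows how widely a tail estimate swings when large events are scarce:

```python
import random

def bootstrap_tail_prob(data, threshold, n_boot=1000):
    """Bootstrap the empirical probability of an event at or above a
    threshold, returning the middle 95% of the resampled estimates."""
    estimates = sorted(
        sum(x >= threshold for x in random.choices(data, k=len(data))) / len(data)
        for _ in range(n_boot)
    )
    return estimates[int(0.025 * n_boot)], estimates[int(0.975 * n_boot)]

random.seed(7)
# Hypothetical event severities with a single extreme observation.
events = [1, 1, 2, 2, 3, 3, 4, 5, 8, 40]
print(bootstrap_tail_prob(events, 40))   # a wide interval: the tail is noisy
```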

In Clauset's analysis of the likelihood for 9/11, he found that it was not an outlier based on both the model, and the prediction. There is a greater than 1% chance of such an event happening. While this may sound small, it is within the realm of possible outcomes, and as such it deserves some attention. This has implications for policymakers, because considering it is a statistical possibility, we should pursue our response within a context that acknowledges this reality.

There are some caveats to this model however. An important one is that terrorism is not a stationary process, and events can create feedback loops which drive ensuing events. Further, events themselves that in the data appear independent are not actually so. When forecasting fat tails, model uncertainty is always a big problem. Statistical uncertainty is a second one, due to the lack of enough data points and the large fluctuations in the tails themselves. Yet still, there is useful information within the fat tails which can inform our understanding of them. 

 

Philip Tetlock, University of Pennsylvania

Geopolitical Forecasting Tournaments Test the Limits of Judgment and Stretch the Boundaries of Science

I summarized Tetlock's talk at last year's SFI Risk Conference, so I suggest checking out those notes on the IARPA Forecasting Tournament as well. IARPA has several goals/benefits: 1) making explicit one's implicit theories of good judgment; 2) getting people in the habit of treating beliefs like testable hypotheses; and, 3) helping people discover the drivers of probabilistic accuracy. (All of the above are reasons I would love to participate in the next round.) With regard to each area there are important lessons.

There is a spectrum that runs from perfectly predictable on the left to perfectly unpredictable on the right, and no person or system can perfectly predict everything. In any prediction, there is a trade-off between false positives and correct hits; this trade-off is called the accuracy function.

With the forecasting tournament, people get to put their pet theories to the test. This can help improve the "assertion-to-evidence" ratio in debates between opposing schools of thought (for example, the Keynesians vs. the Hayekians). Tournaments would be a great way to hold each school accountable for its predictions, while also eliciting evidence as to why events are expected to transpire in a given way.

In the tournament, participants are judged using a Brier score, a measure that originated in weather forecasting to gauge the accuracy of probabilistic predictions over time. The best performers tend to persist: the top 2% from one year demonstrated minimal regression to the mean, leading to the conclusion that predictions are 60% skill and 40% luck on the luck/skill spectrum.
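For reference, the Brier score on binary events is just the mean squared error between stated probabilities and realized 0/1 outcomes, so lower is better. A quick illustration with invented forecasts:

```python
import numpy as np

def brier_score(probabilities, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    p = np.asarray(probabilities, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return np.mean((p - o) ** 2)

# Forecaster A is confident and mostly right; B hedges everything at 50%.
outcomes = [1, 0, 1, 1, 0]
print(brier_score([0.9, 0.1, 0.8, 0.7, 0.2], outcomes))  # ~0.038 (good)
print(brier_score([0.5] * 5, outcomes))                  # 0.25 (uninformative)
```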

There are tangible benefits to interaction and collaboration. The groups with the smartest, most open-minded participants consistently outperformed all others, and those who used probabilistic reasoning in making predictions were amongst the best performers. IARPA concentrated some of the best performers into "super teams" to see if they could beat the "wisdom of crowds," and the super teams won quite handily; ability homogeneity, rather than being a problem, enhanced success. Elitist algorithms were used to generate aggregate forecasts by "extremizing" the forecasts of the best forecasters and weighting them most heavily (five people with a .7 Brier-based accuracy would upgrade to approximately .85, owing to the non-correlation of their successes). (Slight digression: it was interesting sitting behind Ilmanen during this lecture and seeing him nod, as this theme resonated perfectly with his point that diversification can lift a portfolio's Sharpe ratio above the average of its constituent parts.)
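As I understand the extremizing step, the aggregated probability is pushed away from 0.5 on the theory that independent forecasters each hold only part of the available information. A common functional form looks like the sketch below; this is an illustration of the general idea, not IARPA's exact algorithm, and the exponent `a` is a hypothetical tuning parameter:

```python
import numpy as np

def extremize(probabilities, a=2.0):
    """Average the forecasts, then push the result away from 0.5.

    a > 1 extremizes; a = 1 returns the plain average. In practice the
    exponent would be fit to past tournament data.
    """
    p = np.mean(probabilities)
    return p ** a / (p ** a + (1 - p) ** a)

# Five forecasters independently lean the same way; the aggregate is made
# more confident than any individual forecast.
print(extremize([0.70, 0.65, 0.75, 0.70, 0.68]))  # ~0.84
```

The design intuition: if several forecasters with uncorrelated errors all lean toward 70%, the crowd collectively "knows" more than any one of them, so the aggregate deserves to sit closer to certainty than the simple average.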

There are three challenges when thinking about the value of a forecasting tournament. First, automated forecasting by machines is getting better, so why bother with people? While this is a fair point, human judgment remains a very valuable tool and can actually improve the performance of these algorithms. Second, efficient market theory argues that whatever can be anticipated is already "priced in," so there should be little economic value to a good prediction anyway; yet markets and people alike have very poor peripheral vision, and good prediction can in fact be valuable in that context. Last, game-theoretic models like Bueno de Mesquita's can generate forecasts from their own framework; while these might be framed as rivals to the tournament, they are probably better viewed as complementary endeavors.

Buffett, Soros and Uncle Sam

I recently came across an interesting piece comparing the returns of Warren Buffett and George Soros (h/t @ReformedBroker). The post immediately caught my attention, for Buffett and Soros are two of my favorite minds in investing. I am oversimplifying greatly, but from Buffett I learned much about the importance of patience, quality and management integrity, while from Soros I learned the importance of identifying self-fulfilling cycles and reflexive processes in financial markets. While some like to contrast these two gentlemen as taking opposing views of markets, I think their approaches are not mutually exclusive. In fact, combining their lessons has been a potent force in crafting my own, unique approach to investing.

In the piece comparing the relative performance of Buffett and Soros, the author includes the following chart:

The author then asks, “George's track record is better but Warren is richer. Why?” while offering the following answer:

The snowball of POSITIVE compounding for longer. Both were born in August 1930 and Warren ran his hedge fund from 1957 but George didn't set up his until 1969. Warren was lucky to be in Omaha while Dzjchdzhe Shorash was in Budapest, more affected by WW2. Also Warren got into currency trading and philanthropy later. George's outperformance is due to stronger international diversification and because reflexivity is ignored. Value investing is copied more than reflexivity investing. The boom bust of Eurozone sovereign credits and subprime CDOs are quintessential examples of reflexivity. Crises are PREDICTABLE. And profitable if you have expertise.

Sure, some of these factors played a role in Buffett’s wealth relative to Soros’, but this framing is largely misleading, and the most crucial point is ignored entirely. Simply put, these return figures are not presented on an apples-to-apples basis. Buffett’s returns are presented using the growth in Berkshire Hathaway’s book value, while Soros’ returns are presented using his hedge funds’ returns. The author is therefore comparing Buffett’s after-tax returns with Soros’ pre-tax returns. (There is a second key point that many Buffett followers will pick up on: book value does not reflect the true realizable value of many Berkshire assets, and is therefore understated relative to the intrinsic value of the company. While important, my intent here is to focus on the tax consequences, so beyond this mention I will skip digging into that reality.)

We can re-plot the relative returns of Soros and Buffett in order to more closely portray what the comparative returns would look like on an after-tax basis. For the purposes of this comparison, I assumed that each year, 20% of Soros’ returns would be paid out in taxes. This is obviously a simplification, and not intended to be historically accurate, as everyone has their own unique tax profile, and long and short-term trades have different consequences. I am merely cherry-picking a number that, if anything, is probably favorable to Soros in light of the following factors: 1) capital gains tax rates were higher than today’s 15% during much of the time period covered in this analysis; 2) we know that Soros profited in capital markets subject to hybrid tax rates between long and short-term capital gains (like commodity and foreign exchange markets); and, 3) from Soros’ own journal in Alchemy of Finance (which I strongly recommend reading), we know that he engaged in many short-term, speculative trades that would be subject to ordinary income tax rates.

There is a second simplification I’ve made for the purposes of this comparison in assuming that returns were earned on a straight-line basis, rather than calculating each investor’s returns per year, adjusting for taxes and plotting those out. Again, the purpose here is to demonstrate the impact of taxes on returns, and not to be perfectly precise about who is better than whom.
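Under those two simplifications, the adjustment reduces to a constant annual drag on Soros’ return. Here is a minimal sketch of the re-plot logic; the CAGR inputs are illustrative (Soros’ pre-tax figure is simply backed out of the after-tax result, not taken from the cited article), and the horizon is arbitrary:

```python
# Straight-line returns with a flat 20% annual tax drag, per the two
# simplifications above. CAGR inputs are illustrative approximations.
buffett_cagr = 0.214        # Berkshire book value growth is already after-tax
soros_pretax_cagr = 0.2625  # hedge fund returns are reported pre-tax
tax_rate = 0.20             # assumed share of each year's gain paid in taxes

soros_aftertax_cagr = soros_pretax_cagr * (1 - tax_rate)  # 21.0%

years = 30  # illustrative horizon
print(f"Buffett, $1 after {years}y: ${(1 + buffett_cagr) ** years:,.0f}")
print(f"Soros,   $1 after {years}y: ${(1 + soros_aftertax_cagr) ** years:,.0f}")
```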

As we can see below, the end result looks quite different when compared on an after-tax basis:

 

 

Plotted this way, Buffett’s compounded annual growth rate (CAGR) remains 21.4%, while Soros’ falls to 21.0%. Now some might argue that an investor in Berkshire would still have to pay taxes on his or her investment, and this is true; but the clear intent of the cited article was to compare each investor’s performance track record, as stated by the author and as evidenced by the focus on the CAGR of Berkshire’s book value rather than the performance of the stock itself.
One of the biggest problems with performance reporting generally is that it systematically ignores tax consequences, yet there can be huge differences between two strategies with identical “returns.” In reality, only after-tax returns matter. Buffett’s partner, Charlie Munger, offered the following important point on targeting after-tax, rather than pre-tax, returns (from Munger's "On the Art of Stock Picking"):
Another very simple effect I very seldom see discussed either by investment managers or anybody else is the effect of taxes. If you're going to buy something which compounds for 30 years at 15% per annum and you pay one 35% tax at the very end, the way that works out is that after taxes, you keep 13.3% per annum. In contrast, if you bought the same investment, but had to pay taxes every year of 35% out of the 15% that you earned, then your return would be 15% minus 35% of 15% or only 9.75% per year compounded. So the difference there is over 3.5%. And what 3.5% does to the numbers over long holding periods like 30 years is truly eye-opening. If you sit back for long, long stretches in great companies, you can get a huge edge from nothing but the way that income taxes work.
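Munger’s arithmetic is straightforward to verify; here is a quick check of my own, using the 30-year, 15%, 35% assumptions in the quote:

```python
# Deferred tax: compound 30 years at 15%, pay 35% on the gain once at the end.
deferred_multiple = 1 + (1.15 ** 30 - 1) * (1 - 0.35)
deferred_cagr = deferred_multiple ** (1 / 30) - 1  # ~13.4%; Munger rounds to 13.3%

# Annual tax: 35% of each year's 15% gain is taxed away as it is earned.
annual_cagr = 0.15 * (1 - 0.35)                    # exactly 9.75%

print(f"tax once at the end: {deferred_cagr:.2%} per year")
print(f"tax every year:      {annual_cagr:.2%} per year")
```

That roughly 3.6-point gap compounds into the “huge edge” Munger describes: over 30 years it is the difference between turning $1 into about $43 versus about $16.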

I am a fan and student of both Mr. Buffett and Mr. Soros and have no horse in this race; it should be clear to all that both men’s returns are about as good as they get over such a long time-frame. To summarize, there are two key points I want to emphasize. For individual investors, it’s extremely important to plan your investments so as to maximize after-tax, not pre-tax, returns. Don’t be fooled simply by the appreciation in your portfolio; think about what portion of your gains you are paying to Uncle Sam (taxes) come April 15th each year. For those who work with investment managers or invest via funds, it’s extremely important, when looking at performance reports, to think about what the after-tax returns of a strategy look like.

 

Disclosure: Long shares of BRK.B in my own and client accounts.