A day with SFI learning "Optimality vs Fragility"

Recently I had the privilege of attending Santa Fe Institute's latest joint conference with Morgan Stanley. This time, the topic was "Optimality vs Fragility: Are Optimality and Efficiency the Enemies of Robustness and Resilience?" The topic was both intriguing and timely, and the speakers were interesting, informative and a little bit more controversial than in years past. This made for an outstanding day. The audience in the room included some big names in finance and science alike, setting the stage for fascinating Q&As and stimulating conversations during the breaks.

This year, rather than writing one big post covering all of the lectures, I will break each down into its own entry. Here are the subsequent posts in order (and their respective links). Let this serve as your guide in navigating through the day:

Cris Moore--Optimization from Mt. Fuji to the Rockies

Nassim Taleb--Defining and Mapping Fragility

John Doyle--Universal Laws and Architectures for Robust Efficiency in Nets, Grids, Bugs, Hearts and Minds

Rob Park--Logic and Intent: Shaping Today's Financial Markets

Juan Enriquez--Are Humans Optimal?

Dan Geer--Optimality and Fragility of the Internet

What I like to think about is how the lectures relate to what I do in markets, and where there is overlap and dissension between the speakers. Further, I like to analyze how some of these lectures fit (or don't) with my preexisting views. I would love to hear what others think. Here are a few of my observations to get you all started:

  • Cris Moore's point that "best" is not necessarily optimal, and that a confluence of models (what he calls data clusters) can yield better outcomes, is extremely important in financial markets.
  • Nassim Taleb's suggestion that stress tests should focus on accelerating pain, rather than spot analysis, is a powerful one that all risk managers should think about.
  • John Doyle's observation about the tradeoffs between robustness and efficiency is directly applicable to portfolio construction.
  • Rob Park's explanation of how algorithms are designed to express human intent, and the areas in which that can go wrong, has me rethinking my understanding of the risks from HFT.
  • Juan Enriquez opened everyone's eyes to how big the advances are in life science and the consequences this holds for the "secular stagnation" debate.
  • Dan Geer's explanation of why, online, we can choose only two of "security, convenience and freedom" is both an enlightening and frightening call to action.

Again I will caution that these are my notes from the sessions. There is no guarantee of accuracy or completeness. I specifically focused on points that were intriguing to me, and purposely left out areas where the subject matter and terminology were too far removed from my competency. 


Nassim Taleb

Defining and Mapping Fragility

 

  • Black swans are not about fat tail events. They are about how we do not know the probabilities in the tail.
  • Mistaking absence of evidence for evidence of absence is a severe problem
    • Too much is based on non-evidentiary methods
  • Financial instruments (options) are more fat-tailed than the function suggests
    • P(x) is non-linear
    • Thus the dynamics of exposure are different than the dynamics of the security
    • To that end, the law of large numbers doesn’t apply to options
  • “Anyone who uses the word variance does not trade options”
    • The measure of a fat tail is a distribution’s kurtosis
  • There was a great chart of 50 years of data across markets (a sketch of this concentration effect follows these notes)
    • In the S&P in particular, 80% of the kurtosis can be attributed to one single day (the 1987 crash)
    • This statistic would not converge in a broad study of S&P data
    • One can only talk about variance if the error coefficient of the variance is under control
    • In silver, 98% of its 50-year variance comes from 1 observation
  • EVT (extreme value theory) is very problematic because we don’t know what the tail alpha is.
    • In VaR, a small change in assumptions can add many zeros
    • There is no confidence at all in the tails of these models
    • Tail events often have no predecessors, meaning such events do not occur in the data. Tails that have not yet occurred are problematic.
  • A short option position pays until a random shock. Asymmetric downside to defined, modest upside. This bet does not like variability (dispersion), volatility.
  • Look at the level of k (presumably kurtosis) and see its sensitivity to the scale of the distribution. This is fragility.
    • Volatility = the scale of the distribution
    • The payoff in the tail increases with sigma
  • If you define fragility, you can measure it even without understanding the probabilities in the tail
    • Nonlinearity of the payoff in the tail means that the rate of harm increases disproportionately with each instance of harm
    • What is nonlinear has a negative response to volatility
  • Fragility hates 2nd order effects. For example: if you like 72 degree room temperature, 2 days at 70 degrees is better than 1 at 0 and the next at 140.
  • Much of nature demonstrates “S” curves
    • In the convex portion of the S-curve, we want dispersion. In the concave portion we do not (stability)
  • How to measure risk in portfolios: takes issue with the IMF’s emphasis on stress tests looking at a “worst” past instance, which is a stationary point in time.
    • Dexia went out of business shortly after “passing” such a stress test
    • Solution: do 3 stress tests and figure out the acceleration of harm past a certain point, as conditions get worse (see the second sketch after these notes).
      • We should care about increasing levels of risk, not degree
      • Risk increases asymmetrically, so if the rate of acceleration is extreme, this is stress.
  • Praised Marty Leibowitz for “figuring out convexity in bonds”
  • “Convex losses, concave gains ---> thin tails ---> robust”
  • Antifragile=convex, benefits from variability
  • One can look at the past to see the degree of fragility. You get more information and more measurable data from something that went down and then back up in the past than from something that went down and stayed at 0.
  • Adding observations (N) is concave. Adding dimensions (D) is convex: spurious correlations increase.
    • There is a large-D, small-N problem in epidemiology.
    • The NSA is one of the few organizations that uses data well, but only because it is not interested in many things, just the few that have value for what it is trying to do.
  • In PCA, variations are regime-dependent.
  • We can lower nonlinearity of a price (buying options) 
    • It is hard to turn fragile into antifragile, but you can make it robust (a teacup: you can put lead in it).
    • Robustness requires the absence of an absorbing barrier – no 0s or 1s in the transition probabilities. You don’t get stuck in, or die in, a specific state.
  • “Small is beautiful”
  • Q from the audience, asserting that “VaR is the best we have”:
    • A pilot says on a flight to Moscow: “We don’t have a map of Moscow, but we do have one of Paris.” You get off that plane. We don’t use random maps for that reason, and the same logic applies to VaR.
    • Using VaR under this logic is troubling because it encourages people to take more risk than they really think they are taking. They anchor to the probabilities of VaR, not reality.
  • Liquidation costs are concave. There are diseconomies of scale from massive size.
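
Two of these points are easy to make concrete. First, the kurtosis-concentration effect: below is a minimal sketch (my own illustration using synthetic data, with one hypothetical crash-sized day standing in for 1987 — this is not Taleb's actual chart):

```python
import numpy as np

def max_day_kurtosis_share(returns):
    """Share of the sample fourth moment contributed by the single
    largest day -- a gauge of how badly kurtosis fails to converge."""
    demeaned = returns - returns.mean()
    fourth = demeaned ** 4
    return fourth.max() / fourth.sum()

rng = np.random.default_rng(0)
normal_days = rng.normal(0.0, 0.01, 12_500)  # ~50 years of thin-tailed daily moves
with_crash = np.append(normal_days, -0.20)   # add one hypothetical 1987-style day

print(f"Gaussian sample: {max_day_kurtosis_share(normal_days):.1%}")  # well under 1%
print(f"With one crash:  {max_day_kurtosis_share(with_crash):.1%}")   # ~80%
```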

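Second, the accelerating-harm stress test. Here is a sketch of the idea as I understood it; the loss functions and shock levels are hypothetical stand-ins, not Taleb's specification:

```python
def harm_acceleration(loss_fn, shocks=(-0.05, -0.10, -0.15)):
    """Run three progressively worse stress levels and compare the
    increments of loss. A positive result means losses are accelerating
    (convex), which is the signature of fragility."""
    l1, l2, l3 = (loss_fn(s) for s in shocks)
    return (l3 - l2) - (l2 - l1)

linear_exposure = lambda s: -100 * s                    # loses proportionally
short_option_like = lambda s: -100 * s + 4000 * s ** 2  # losses grow with the square

print(harm_acceleration(linear_exposure))    # 0.0  -> no acceleration, robust
print(harm_acceleration(short_option_like))  # 20.0 -> accelerating, fragile
```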
 

A Conversation with Joe Peta on Baseball, Stocks and Performance Analysis

I'm very excited to share my interview with Joe Peta here. Joe is the author of Trading Bases: A Story About Wall Street, Gambling and Baseball and a Managing Director at Novus. As you may realize from the title, Trading Bases combines (compounds) a bunch of my interests. In his work at Novus, Joe has been doing some groundbreaking research on performance attribution, dissecting volumes of individual fund and investment manager performance records in order to gain deep insight into how to identify the true drivers of alpha. His insights, drawn from years of experience on trading desks, running his own hedge fund, and now studying some of the best alpha-makers in the world, are incredibly valuable for anyone looking to make money in the stock market. Plus, Joe's outlook on life, whereby he took a challenging physical injury and turned it into an intellectually stimulating and ultimately rewarding experience in baseball betting, is both inspiring and a great lesson for all in critical thinking. I'm very thankful that Joe took the time to discuss some of these topics with me here! Enjoy:

 

Elliot: Novus has been putting out some great stuff, particularly on Julian Robertson and his Tiger Cub progeny, as well as European short interest. Can you talk a little about your role at Novus and your effort to build out the investing equivalent of baseball’s WAR--the WAGEs?

Joe: Novus was founded by ex-Fund of Fund investors who were dismayed at the lack of use of analytics in evaluating fund managers. Despite the fact that study after study has shown that today’s fund winners correlate poorly with tomorrow’s winners, in 2007, our founders wondered why the entire industry of allocators (endowments, state and corporate pension plans, sovereign wealth funds, fund of funds, etc.) all based their allocation decisions on the same process: Review the 1-, 3-, and 5-year track record and have a face-to-face interview with the fund manager to see if “he gets it.”

With data and analytic tools available that didn’t exist in the years prior when the “track record/interview” process was the only way to evaluate managers, Novus was founded with the idea of augmenting that process with analytics. Since our founding nearly seven years ago, the firm counts as clients not just allocators but asset managers (hedge funds, for the most part) as well. Allocators use us to manage their data and provide portfolio analytics while managers, thanks to the possession of daily data, can use Novus as an invaluable self-analysis tool.

Novus found me after reading my book Trading Bases. The first time I saw their analytics platform for hedge fund managers I nearly slapped myself in the head, thinking: “these guys have commercialized the exact stuff I tried to build on Excel at my hedge fund.” The whole goal of this type of analysis is to find out what you or your firm are good at and then leverage those skills. It’s a simple concept really rooted in the division of labor – if you want your hedge fund to be successful, find out what you do that persistently creates alpha and then leverage those skills. Leveraging alpha is smart; leveraging beta is risky; and, undertaking activities that you’re not good at is stupid.

These are exactly the management concepts that have arisen out of the sabermetric revolution in baseball. Buster Posey doesn’t try to steal bases, because he’s slow-footed. Getting thrown out stealing would erode his other highly-coveted skills that make him more valuable than his peers. Yet, I see PMs who specialize in tech consistently losing money trading bonds or oil. If confronted, they’ll spin some sort of crap about correlations, but it’s just another form of behavioral finance weakness. They either don’t know what they are good at, and therefore don’t stick to it, or they simply cannot help themselves from trying to be something they are not. (I used to tell a fabulous semiconductor PM I knew who consistently lost money trading oil that he would laugh his ass off if T. Boone Pickens dabbled in SOX trading. My friend would laugh knowingly – and then go right back to trading the USO.)

Novus, at its core, is designed to detect those skills and weaknesses. It can help an allocator invest more intelligently and it can help managers improve returns. Baseball, sadly, is way more advanced at separating skill from luck when examining results than the financial industry is.

 

Elliot: Are you still running the baseball hedge fund while at Novus?

Joe: My complete immersion in the world of model-based baseball betting covered the 2011 and 2012 baseball seasons. The entire 2011 experience is chronicled in my book, and that led to the 2012 adventure of raising a $1 million fund and moving to Vegas for the majority of the season. Other than some personal season-long futures bets that I made this year and last, I have made virtually no single-game bets. It’s a time-consuming venture that really ended when my book came out last year and I started to look for the next step in my career.

Two questions that arise from those that read the book are 1) if the model returned 41% and 14% in ’11 and ’12 respectively, why don’t you raise a bigger fund, and 2) why would you disclose virtually your entire process in the book? The answer is that my experience in Vegas convinced me the returns aren’t scalable. I estimated my true edge to be somewhere in the low teens (based on my model’s slightly better least-mean-squared forecasting error over more than 4,800 regular season games and my allocation schedule, which produced daily volatility roughly akin to the QQQs). I learned that my $1mm fund could only be 2 to 2.5x bigger (at the most) before the returns would have diminished due to the inability to size my bets up to reflect a larger fund. You might protest that 13% a year on, say, $2mm is still $260,000 in returns a year, and I would agree that’s a very material amount of money, but it’s money that would belong to my investors; managers typically get 20% of the profits. So at the most optimistic, you could say I had a business model with an annual expected value of $50,000. The 23-year-old me would have jumped at that, but as a married forty-something financial industry professional with young children, it’s just not a viable business model. I love that I had a two-year experience that resulted from a horrible accident, but I’ve essentially removed myself from that world.

 

Elliot: While WAGEs is a comparative stat, how can I use it to my advantage as an individual and an investor to better assess my own performance? How can I use WAGEs to think critically and smartly about myself in the quest for constant improvement?

Joe: I have 100% confidence that a WAGEs (Worth of Actively Generated skills) type analysis of every single active trader/investor can yield extremely valuable insights that can lead to self-improvement and ultimately higher returns. Here’s why:

If you’ve decided to actively manage your money or your client’s money, you are rejecting the idea of passive investing. The SPYs are essentially the replacement-level player of the investing world. (Obviously, you can find an ETF to replicate a strategy that might be more refined than the S&P 500.) So by deciding to actively invest, you’re implicitly stating you can outperform the low-cost, readily available index.

It turns out that if you really think about what the SPYs are and what they represent, there are three potential areas of weakness you could exploit (skill required in parentheses).

1) The SPYs are 100% invested at all times. (Market exposure)

2) The SPYs invest in 500 different companies (Security selection)

3) The weighting of each company is pre-defined (Sizing)

(Let’s address a potential fourth area—sector allocations are also pre-defined. You could cite sector selection as an area of SPY weakness and therefore a skill you may possess to exploit it, but in truth if that’s your skill, you’d be better off passively investing in an ETF subset of the SPYs, or what I call active-passive investing).

To justify active investing, one must possess a persistent edge in at least one of those areas of skill. A WAGEs-based analysis will determine your skill-based grades (or value) in each of those areas. The savvy investor will therefore limit his or her activities to the areas of skill and neutralize the areas where there is anti-skill. I have seen plenty of instances where poor market exposure management consistently erodes security selection skill. That can be identified with a WAGEs analysis and corrected by choosing to run a static-net-exposure fund. Yet that skill/detraction remains undetected, or certainly unquantified, when looking at results alone.

By the way, in the world of hedge fund investing, to my eye, by far the most crucial and the most persistent (or sticky) skill from one period to the next is the Sizing skill. If I ever meet a manager who tells me they run an evenly-weighted portfolio of 33 (3% equal-weighting) or 40 (2.5% equal-weighting) stocks, I would never invest with them. The ability to size positions, to outsize bets when the expected return is highest, is the hallmark of successful investors (e.g. Tiger Cubs). If you neglect to utilize that dial, I simply don’t believe you have much of a chance of ever beating a passive fund over time. In short, I suspect you are simply a style investor (say, deep value) and I’m certain I can get that factor exposure cheaper via an ETF.

 

Elliot: That’s a powerful point you make about position sizing. In Trading Bases you spent some time outlining how you came up with your own tiered approach, based on your model’s level of conviction in a given outcome. Was this point about position sizing something you were aware of in developing your own trading strategy? Is there anything you would do differently now with what you have learned on the performance attribution side ever since? Are there any particular approaches to position sizing that you have found effective in the fund managers you have analyzed?

Joe: My sizing decisions for the fund were informed by the experiences of any serious blackjack or poker player: you need to get more chips in the pot when the odds are more in your favor. This philosophy, of course, is expressed numerically in the Kelly Criterion. I modified the Kelly formula to reflect the philosophy, but I sacrificed the chance to maximize returns (which the formula is designed to do) by having far more respect for drawdowns. (The idea of never making a bet today which can imperil your ability to take advantage of a bet with positive expected value in the future is one I spend a great deal of time on in the book, highlighted by my first-hand experience with the Lehman bankruptcy. In fact, despite zero references to baseball, it's the longest chapter in the book.) Applying that today, and merging it with my experience at Novus, I'd have to say if I were back at a hedge fund I would spend a long time with the PM and analysts exploring the idea of narrowing the number of holdings in the portfolio. We (Novus) have definitely found that 'farm teams' -- those sets of positions, typically between one-half percent and two percent of a fund, that almost all large funds carry -- typically have sub-par performance. Those small positions that PMs love to say "force me to follow the name" end up costing a lot more alpha, if not outright dollars, than many realize. So I'd consider the merits of a more concentrated approach, and if I were an allocator investing in hedge funds, I'd be very receptive to that strategy.
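
For readers who haven't encountered it, the Kelly Criterion Joe mentions reduces to a one-line formula for a binary bet. Here is a minimal sketch; the `fraction` multiplier is a generic stand-in for drawdown-conscious modifications like Joe's (his exact adjustment isn't spelled out here):

```python
def kelly_stake(p_win, odds, fraction=0.5):
    """Suggested stake as a share of bankroll for a binary bet.

    p_win    : the model's probability of winning
    odds     : net payout per unit staked (e.g. 10/11 for a -110 line)
    fraction : scaling below full Kelly, trading growth for smaller drawdowns
    """
    full_kelly = (p_win * (odds + 1) - 1) / odds
    return max(0.0, fraction * full_kelly)

# A bet the model likes at 55% against a standard -110 line:
print(f"{kelly_stake(0.55, 10 / 11):.2%} of bankroll")  # ~2.75%
```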

 

Elliot: Are there differences in how we need to think about investment styles? For example, in WAR we adjust a player’s value to reflect his position. In market performance attribution, are there different ways to structure the analysis for value, growth, macro, etc. styles?

Joe: I think all attribution/evaluation has to be context-dependent. For instance, if you are the healthcare PM at a multi-manager platform hedge fund forced to run 20% net long, you should not be dinged because the sandbox you are forced to invest in presented an unfavorable environment this year. There are so many things out of the control of a manager (dispersion, volatility, multiple expansion and contraction) that you want to isolate the actual factors that the manager can control. The same goes for factors you are referring to such as value vs. growth, large cap vs. small cap, etc.

That said, if a PM by charter runs a hedge fund with zero restrictions on style, sectors, etc. they need to be graded on their ability to shift allocations whether it’s by sector, asset class, etc. I actually find this is a pretty easy trait to evaluate over time that can be revealing to the PM.

 

Elliot: What are some of the behavioral differences between working on a trade desk and running a baseball-based hedge fund? Do you think it’s advantageous that in baseball your wager is set upon its placement, whereas in markets you get instantaneous feedback and can change your mind (and position) on a whim?

Joe: My approach and implementation of baseball wagering, while massively informed by my time on trading desks, purposefully looked nothing like my stock-trading activity. I can explain why with a story about my mid and late-career experiences on Wall Street. While running the desk at a $200mm+ long/short equity hedge fund for five years, I became obsessed with the weaknesses I felt our head PM (and my partner) exhibited, because his shortcomings fell in the area of behavioral finance. Later, after the fall of Lehman Brothers (which was our third partner in the management company and by far, the largest investor in our fund) necessitated the closing of our fund, I moved back to the buy side. There, my assistant (who I had handpicked based on first-hand observation at a prior job) simply couldn’t “get it” when it came to executing customer orders and maintaining order in our prop book. It drove me bananas, because like my former partner, the shortcomings weren’t due to a lack of intelligence or a fear of doing the homework around our names, etc.; it was a behavioral finance weakness. Specifically, they had an inability to do nothing. Both of them felt the need to react to the latest stimuli: whether an item on CNBC, or changing prices on the screens in front of them, etc. and then take action. I was so agitated by their behavior, which consistently cost money, that I swore if I ever ran a large trading organization, in the spirit of the Notre Dame locker room in which hangs a “Play Like a Champion Today” sign that every player touches on the way to the field, I’d hang a sign over the trading desk that everyone would be forced to stare at all day. It would read: “Don’t Just do Something, Stand There”.

I ran my baseball fund after these observations, and as such, I took that philosophy to its most logical extreme; my fund was 100% model-based so I never put my fingers on the scale. Now, unless you are running a black-box, algorithm-based hedge fund, that approach isn’t entirely applicable to stock trading. After all, stocks aren’t priced with a correct answer like the outcome of a baseball game. Stock trading absolutely involves pattern recognition and the need to sometimes be a contrarian. I’ve always said: “stock trading is like surfing – you can use astronomy to read the charts, you can hire weathermen to give you tide and wind predictions and you can even put sensors in the water. But if you really want to know what surfing conditions are like, if you want to feel the tides, you’ve got to get in the fucking water.”

Model-based baseball betting allowed me to completely remove emotion from the investment picture. Most traders would do well to do less trading, but still, the markets are different.

 

Elliot: I love the idea of “Don’t Just do Something, Stand There.” In performance analysis, is there a quantifiable connection between lower turnover and long-term returns? Alternatively, is there a way we can judge how efficient and minimalist a fund’s actions are beyond just looking at turnover?

Joe: I'm very agnostic when it comes to investing styles. I am steadfast in believing there isn't just one way to invest or trade, or run a fund. This applies to every style, strategy, system, etc. you can think of. However, there is only one way to judge the effectiveness of any money manager, and that's value creation (think alpha) vs. a passive benchmark (risk-adjusted, of course).

Therefore, you will never hear me say, "you trade too much," or "you need to let your winners run", etc. What I will say though is this, "You should stop whipping your net exposure around because you're no good at it." Or, "your winners, on average, contribute only 10% more basis points than your average loser, and that's bottom-quartile performance." Those data-based insights suggest a change in behavior is needed (less trading and let winners run/cut losses earlier, respectively.) That's what I love about properly designed data analysis -- it doesn't fault style, it identifies skills.

So to your second question, we can certainly measure a fund's efficiency, and all things being equal more efficient is better than less efficient, but I'm not sure it matters. Trading/turnover isn't necessarily inefficient, but examining the results will determine if the PM ordering the turnover makes effective decisions.

 

Elliot: Markets are a dynamic world, changing constantly. There are interconnections and feedback loops, and no two periods are exactly the same. Do you think this is an area where performance analysis in baseball is materially different than that of markets?

Joe: Baseball, due to its lack of interdependency, is far, far, far easier to model accurately than other sports, let alone financial markets. A baseball game is basically a series of 70 or so one-on-one match-ups between pitcher and batter that can be modeled with a fair degree of certainty (thanks to modern computing power, pioneering work on the persistence and reliability of results as well as aging curves done by the early sabermetric researchers, and of course baseball’s wonderfully data-rich history) in a way that just isn’t possible in other sports, where teammates play such an important role in results.

I’m fond of saying that “it didn’t matter what league he played in, who his catcher was, the ballpark he pitched in or the batters he faced. Randy Johnson could be relied upon to strike out 33% of the batters he faced during a ten-year stretch of his prime.” You just can’t do that in other sports. You would have to qualify things as follows: “Tom Brady, playing behind this offensive line, throwing to these receivers, supported by this running back, and leading a Bill Belichick-designed scheme can be expected to complete 62% of his throws.”

Trading and the markets, of course, are even harder to model (in the short term), and the reason can be summed up, as is often the case, by a Warren Buffett quote (which he attributes to his mentor, Benjamin Graham): “In the short run, the stock market is a voting machine, but in the long run it is a weighing machine.”

 

Elliot: What are some of the best adaptive traits you have seen both behaviorally and empirically from traders to accommodate changing markets? Are there certain people you just knew would be able to succeed in any market environment no matter what?

Joe: The best book I read on this topic is Way of the Turtle, by Curtis Faith. I absolutely believe there are people who can succeed in any market environment. They are extremely adaptive, trust their instincts and, most importantly, have an ability to understand the trading environment they are currently in. By that I mean this: the greatest PMs have an ability to listen to input from their analysts and take action on it at one period of time, and then take that exact same information and do the opposite at a different period of time. That’s understanding field position, and it’s an ineffable skill. However, if you run a hedge fund it’s invaluable. (Now, one can argue that if you’re a Warren Buffett-type investor or running a low-turnover, value-oriented mutual fund with massively sticky 401(k) assets, etc., that type of skill means little. I’d happen to agree with you, but it’s not the world most investors live in.)

 

Elliot: How are we to think about something like situational awareness in both baseball and investing? Some traders I have seen are great at knowing when to press (swing for the fences) and when to be defensive, how can we quantify such traits? Further, how can we determine when this management of net exposure is a sustainable, persistent edge versus something that has been executed well in the past, though will not necessarily be repeatable in the future?

Joe: Great questions that to some degree combine a few of my answers above. I used the term “ineffable” to describe the allocation process of a PM, so clearly I agree with the premise of your question. Additionally, when fellow golfers desperate for improvement confronted the notoriously prickly Ben Hogan, he’d growl “the answer is in the dirt.” Well, I firmly believe that when it comes to a trader/PM identifying a persistent edge, the answer is in the data.

Not necessarily in the results, but the data. Successful investors, whether they excel at exposure management, security selection, or sizing, do so in very identifiable ways, and it can be detected in roughly 140-150 trading days (or about 7 months). From an analytics standpoint, I can’t tell you in advance how a PM or trader might be good at exposure management, but I’m telling you that, by examining the data, I believe I have an excellent chance of identifying it going forward – at least a better chance than someone who is just examining returns.

A lot of that confidence comes from my study of baseball analytics. For example, looking at the strikeout, walk, and groundball rates of a pitcher after he’s faced just 70 batters offers a better chance of predicting his ERA going forward than anyone else can relying on just his current ERA or the “eye test.” That’s because I’m looking at past skill demonstration, not results. It should be very easy to see how that type of analysis is applicable to asset manager evaluation.

 

Elliot: I must say, in reading Trading Bases, I was extremely impressed with your optimistic worldview and how you used that perspective in order to turn a difficult injury into an exciting and rewarding challenge. Although you’re not working at a trading desk right now, do you think there are any skills you acquired, or something you learned about yourself in your time running the baseball fund which would benefit you in the markets?

Joe: I think every good trader realizes they benefit from every new day they sit in front of the screens. It's another observation for the internal, pattern recognition database. In that way I think it's a lot like being a quarterback. The more reps you get, the slower the endeavor becomes, leading to better decision making. Fortunately, unlike athletes, as we get more experience, we don't lose our physical (actually, cognitive) skills -- at least not at anywhere close to the rate athletic skill ages. That said, while my 15+ years as a trader were enormously helpful in everything about conceiving, modeling, and running a baseball fund, I can't think of a way it would make me a better trader of stocks today.

 

Elliot: Do you think the presence of uneconomic/irrational gamblers, whose primary interest in placing a wager is supporting a partisan rooting interest in baseball betting, creates a persistent inefficiency?

Joe: That is an interesting question because my knee-jerk reaction is yes, of course. In my book, I describe the Yankees, prior to the 2011 playoffs, possessing a retail mark-up similar to an Hermes scarf. (Analogies like this helped my wife, the initial editor of my manuscript, wade through the baseball stuff).

You only need to look at the history of point-spread results to know this. For years and years, if you simply bet against the Super Bowl champion in every game the next year, you made money. Blindly betting against the Dallas Cowboys and Notre Dame appears to have a small positive expected value, because they attract irrational, fan-based wagers. (Vegas-based wisdom has it that if you’re going to bet against USC when they play a night game at home, never place the bet before the last minute. That’s because an 8:00 PST start means it’s the last game on the board on Saturday night, and deep-pocketed Los Angeles-based gamblers, desperate to make up for earlier losses, can be counted on to bet on USC in droves, pushing up their line in the last hour before kick-off.)

However, there is some strong evidence that very sharp gambling syndicates have neutralized that play to the point that currently there may not be much meat left on that bone. The biggest reason the house wins isn’t dumb money; it’s the $11-to-win-$10 spread on football and basketball games (this house edge is considerably smaller in baseball).
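
A quick back-of-the-envelope check on that spread (my arithmetic, not Joe's): laying $11 to win $10 means a bettor must win 11/21 of the time just to break even.

```python
risk, win = 11, 10
breakeven = risk / (risk + win)
print(f"breakeven win rate: {breakeven:.2%}")   # 52.38%

# Expected profit per bet for a no-edge (coin-flip) bettor:
p = 0.5
ev = p * win - (1 - p) * risk
print(f"EV per $11 risked: ${ev:.2f} ({ev / risk:.2%})")  # -$0.50, about -4.5%
```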

 

Elliot: Did your model produce a higher IRR on long-term plays (ie your “futures basket”) or on your single-game wagers?

Joe: I don’t have any confidence in projecting the true expected value of my futures bets because the sample size is too small. I do know that from 2011 to 2013 I had a smaller forecasting error each year than any other publication I could find, in print or online, including the Vegas line. (That was independently verified in 2013 by the website “Stats in the Wild” and all my futures picks from 2011 forward are in print.) About 40% of the way through the 2014 season, I’m struggling to maintain a 50% pace on my preseason ten-pack of picks. At about 10 picks a year, and since futures plays are so sensitive to changes in personnel due to trades, injuries, etc. I don’t have as much confidence as I do in battling through 2,430 games on a daily basis – and my capital allocation in 2011/12 reflected that.

 

Elliot: Betting on teams, games and seasons seems to me the macro equivalent in baseball, while fantasy baseball strikes me as a kind of micro. In macro, mean reversion tends to be a powerful force, while in micro there are far more feedback loops. Does this analogy translate to baseball? What are some implications of differences in performance analysis and attribution in macro vs micro?

Joe: I’d be lying if I tried to force my answer to fit that analogy. I would simply make this distinction: there is a much larger variance in outcomes in the short-term even if the model is just as accurate. Therefore, the real skill is proper capital allocation. You can make the same distinction in stock market investing when comparing a “trade” versus an “investment.”

 

Elliot: I’m not sure if you have read some of the recent analyses of John Maynard Keynes’ investment track-record (here’s a good overview from Jason Zweig). There were some interesting observations about how the man who had perhaps the best access to global economic data, and the deepest understanding of the macro system could not make money on his macro wagers, and his results only became stellar once he shifted to a focus on micro. Given the analogy between the micro and macro in the baseball world with the market world, and given your results and conviction were highest in the micro realm in contrast to the macro, do you think there is a broader implication of this? Have you seen anything in your performance analysis at funds that suggests there’s a better, more consistent opportunity set in micro than macro?

Joe: Let me try and give you an answer that touches on both your questions -- I think I speak for everyone at Novus who has taken a look at performance data and reached this conclusion: Value-adding micro skills (stock selection and sizing, to name two big ones) are far more prevalent and persistent than value-adding macro skills (say, exposure management, sector/strategy allocation, etc.). That said, that does not mean the latter does not exist. My personal view is this: Those that add value via macro-informed implementation are infrequent, but the managers that have “it” display it consistently. For instance, as I state in my conference presentations (backed up with some fun data) I believe Jim Cramer has that macro skill. I would trust him to set my net exposures on a daily basis, but there are a whole lot of fund managers (and probably RIAs, etc.) who would be better off not touching that dial.

 

Elliot: The question about hot and cold hitters has bugged me. On the one hand, I completely acknowledge the connection between luck/skill and ultimate outcomes. On the other, I think Yogi Berra nailed it in saying that “baseball is ninety percent mental and the other half is physical.” I have seen players change their approach when in slumps, and hot hitters bring a more patient plate approach, to the point where at least some degree of the outcome can’t merely be the random dispersion of non-related, uncorrelated events and simple cluster luck. To connect this to trading, I have seen great traders lose track of their process when their mental fortitude was in a vulnerable position. Is there something to what I’m saying or am I overthinking this?

Joe: I think the biggest danger in your conclusion is that as humans, “our eyes lie.” We do a very poor job of recalling the events that don’t fit our narrative. I’m not saying you aren’t right; I’m simply noting that proving a connection between those events is monumentally difficult. Look, I get the fact that as a Knicks fan you were petrified when Reggie Miller had the ball late in a game. It’s true that he had the ability to “heat up” and make five straight long-range buckets quickly, but the truth is, he has that ability at any time of the game, because like Steph Curry today, he’s one of the most skilled shooters. Time and time again, studies show there is a minuscule, if any, amount of evidence to point to athletes performing at a higher level “in the clutch.”

I believe that’s true in our profession as well. If you truly are a trained, skilled professional, you will have periods of under- or over-performance throughout your career. It’s just that our need to explain, or attach a narrative to, everything leads people to conclude, “ahh, I was going through a divorce during that bad period,” while completely ignoring the fact that a parent died four years earlier, during a period in which they outperformed.

 

Elliot: Let me rephrase this question to set aside the clutch hitter/shooter/scorer point, for I do think that’s fairly convincing. I think my reluctance to fully embrace (as opposed to largely believe in) the concept is most evident in cold streaks. What starts as a random distribution can lead to self-doubt, which leads to an adjusted approach, making ruts a self-perpetuating feedback loop. In investing and trading, I’ve seen this manifest as the idea that mistakes beget more mistakes. Is there a way to attribute this to some other factor rather than mere random distribution?

Joe: Personally, you will get no disagreement from me. I don't think people can perform better in the clutch, or in high leverage situations in any endeavor. The implied premise in asserting as much is that they don't try as hard at other times. However, I'm equally sure that ignoring or dismissing the opposite side of that coin is a fallacy. Pressure can absolutely destroy someone's decision making and performance.

I'm reminded of the very first mentor I had on Wall Street, a relationship which is fondly recalled in Trading Bases. The head of Lehman's Nasdaq trading was my age, almost to the day, but it was always a mentor/mentee relationship. He was a former defensive lineman on Penn State's 1986 NCAA championship team, a huge physical presence. And, during my first months on the desk, he said to me in an effort to test my fortitude and confidence: "Peta, do you know what the vomit zone is? It's what I call putts from 5 to 7 feet. Everyone can make them on the practice green, but when you need to make one to win a hole, it makes some people want to vomit."

I think that marries both of our points. You can't get better in a match than the skill level you possess on the practice green, but you damn well can get worse, or as that noted investor M. Mathers has said, "Knees weak, arms are heavy, there's vomit on his sweater already, mom's spaghetti"

 

Elliot: So far this season, which baseball teams are most over and underperforming your model? Do you think any of these teams are legitimate in their over / underperformance? Do you see any quantifiable lessons or adjustments that could have captured some of these differences?

Joe: I haven’t run my team models on a daily basis like I used to, but the degree of the Giants’ success has surprised me. They’ve got the best record in baseball even after dropping six of their last seven games. While I was wrong on the early season success of their pitching staff because they’ve been better – through a skill-based lens – than I projected, the same wasn’t true of the offense. The hitters also had better results than I projected, but it wasn’t due to skill-based performance; it was luck. Through the first quarter of the season, they benefitted from the sequencing of their hits, which is uncontrollable, or as I dub it in my book, “cluster luck.” That has reverted in the last 30 or so games, as they’ve cooled off. There are some other examples as well, but the ability to make money off of observations like this is dependent on pricing, of course. Since I’m not betting daily, I don’t know what sort of adjustments the oddsmakers have made to each team’s expectations. It’s always possible that I may have been right to be pessimistic on the Giants over the last week, but if the oddsmakers were more pessimistic, I may not have made any money.

The main thing to do from a model-building standpoint is attempt to identify the results which are repeatable. Michael Mauboussin, who I know we are both big fans of, captures it perfectly when he describes the need to untangle skill from luck when looking at results. I love that description because it makes me think of a thoughtful and curious scientist in a lab using tweezers to examine a strand of DNA.

 

Elliot: Which team out there do you think has been hit by particularly bad luck, and should have a better run from here-on-out?

Joe: The Dodgers, with their stellar starting rotation and a quite potent lineup, had no business being 9 1/2 games out of first place, as they were when we started this conversation (as recently as June 8). It's down to 4 games now, and they are not only going to win the NL West, but they are also going to the World Series. Now, I bet on the Nationals to win the NL during Spring Training, but that was based on price. The Dodgers are the best team in the NL by a comfortable margin. We'll see if it plays out.

Two other teams that are far better than their records are the two teams with the fewest wins in each league: the Tampa Bay Rays and the Chicago Cubs. In the case of the Cubs, though, it might not translate into any more success during the 2nd half of the season because they very well may sell off any of their assets with value.

On the other side of the coin, if there's a team I think could lose 100 games despite being comfortably ahead of that pace now, it's the New York Mets. (Sorry.)

 

Elliot: I want to dispute one of your assured suggestions in Trading Bases. Wayne Gretzky is the second best hockey player ever, to Mario Lemieux. Discuss…

Joe: I took a shot at hockey fans in a footnote in the book (suggesting certain sabermetric formulas could be used in hockey, but until we find a hockey fan that can operate a calculator it remains unproven) and incurred the wrath of many readers for it. I simply won’t take the bait here and admit I know nothing about hockey.

 

Elliot: It is torture being a Mets fan...

Joe: I so wanted to be on the Daily Show during my book tour just so I could use the ‘50 Shades of Shea’ line on Mets’ fan Jon Stewart.

The Market's Betting Line: A Look at Implied Growth

This post is co-written by Elliot Turner and David Doran

There’s been a great conversation in the blogosphere, specifically about the validity and predictive power of Cyclically Adjusted P/E Ratios (aka CAPE) and more generally about the valuation of US equities. The incredible run of the last year has only made the valuation debate more important. John Hussman has been one of the more vocal advocates on the bearish side, while the pseudonymous @JesseLivermore from the Twittersphere has done an outstanding job deconstructing CAPE. In doing so, @JesseLivermore has highlighted why CAPE might not be relevant, along with presenting an alternative way to look at valuation.

Here is what the CAPE looks like right now:


We think indicators like the CAPE offer valuable information, though they should never be treated as conclusive. Further, we think no single indicator has any worth outside of context. In the effort to simplify, the investment community often underappreciates context.

One key area where CAPE misses context is in how it treats the valuation of businesses. Since CAPE deals with P/E, it is purely a reflection of the price of a security relative to its earnings power. This ignores capitalization, and as such presents a considerable problem for comparisons across time. Consider: a company that trades at $100 per share with $10 cash/share and $0 debt/share that earns $10 in net income/share, versus a company that trades at $100 per share with $0 cash/share and $10 debt/share that also earns $10 in net income. On a P/E basis, both companies trade at a 10 P/E, though the two companies are not “worth” the same. Assuming all else is equal, the company with a net cash position of $10/share is clearly worth more to the equity holder than the company with $10/share of debt.
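
To make the example concrete, here is a toy calculation (a deliberate simplification that ignores, among other things, interest earned or paid on the cash and debt):

```python
def headline_and_ex_cash_pe(price, eps, net_cash_per_share):
    """Compare the headline P/E with a P/E computed after backing net cash
    (cash minus debt, per share) out of the price."""
    headline = price / eps
    ex_cash = (price - net_cash_per_share) / eps
    return headline, ex_cash

# The two companies from the text: identical 10x headline P/Es...
print(headline_and_ex_cash_pe(100, 10, +10))  # net cash -> (10.0, 9.0)
print(headline_and_ex_cash_pe(100, 10, -10))  # net debt -> (10.0, 11.0)
```

On an ex-cash basis, the net-cash company is meaningfully cheaper, which is exactly the distinction a pure price-to-earnings measure like CAPE cannot see.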

For this reason, some turn to the market’s price-to-book ratio, or Tobin’s Q, in order to gain insight from the relationship between the market price and companies’ asset value. In fact, Eugene Fama, in his studies on market efficiency, long ago documented that a low P/B has high predictive power for outsized returns. While Fama was specifically referencing individual securities, it is reasonable to conclude that buying the market at lower P/Bs should thus lead to higher returns than buying at high P/Bs.



As of today, the market’s P/B is similarly situated to where it was when the 1990s bull market began in 1995. It is below the average P/B of the last 24 years, and right at the average of this timeframe if you exclude the 1998-2001 bubble years. While not at extremely low levels, the P/B is near the low end of its range of the prior 24 years. While some may argue CAPE is better at incorporating the mean-regressing nature of the market than P/B, book value itself is a slow-moving metric and gives us a sense of where the tangible worth of businesses stands. Further, changes in book value are far less volatile than changes in earnings, and as a result book value offers investors a more stable arbiter of value. Altogether, P/B tells a very different story than CAPE.

We do not want to take the extra step of concluding whether the market is fairly priced, cheap or expensive in today’s environment. Instead, our goal is to lay out an additional concept--implied growth--as another contribution to the CAPE Debate, show where the bar is set in terms of future expectations, be slightly suggestive as to which way we lean, but ultimately leave it to you the reader to decide and opine on whether the market’s estimation of implied growth is fair or not. That being said, we do think it’s a stretch to call today’s valuations extreme. The numbers implicit in today’s market price are rationalizable when viewed in the proper context and compared to historical levels of growth, cost of capital and returns.

Earnings and Book Meet at Implied Growth

Looking at P/B in conjunction with CAPE certainly provides better context, and consequently better predictive power, for investors; however, in our opinion there are still limitations to this approach. What if there were a way to look at earnings relative to book? A way where earnings and book value could be used together to get a sense for where the market’s valuation stands? We could look at ROE over time, which reflects the relationship between P/B and P/E, but that datapoint alone looks like the definition of a random walk.

One thing we could do is invert (start with a certain datapoint and work backwards) to figure out what we really want to know: at what level of market valuation can I expect the best future return? There is a formula that allows an investor to figure out what the P/B ratio for a given security should be, given the ROE, implied growth (g), and cost of capital (r). This formula is a derivation of the Dividend Discount Model, which is based on the incontrovertible principle that an asset is worth the sum of its discounted future cash flows.

Justified P/B = (ROE - g) / (r - g)

For the purposes of the market, it turns out, we can actually plug every one of these variables into the equation aside from the growth rate. This is extremely valuable, because we can then rearrange the formula to solve for implied growth, given ROE, cost of capital and P/B. This is a useful metric because it tells us what level of earnings growth the market is pricing in at this moment. We can then compare the current datapoint to implied growth historically to provide context. Using that context, we can judge whether the market is being too optimistic or pessimistic.

As of today, here is the information we know:

Market’s P/B = 2.58
ROE (10-year average) = 13.62%
r (WACC) = 7.91% (we have used the equity risk premium as estimated by Aswath Damodaran and added it to the 10-year Treasury yield; see our discussion on WACC below)

When we plug these numbers into the equation above and solve for g, we get a result of g=4.31%.
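
For anyone who wants to verify the algebra or run scenarios of their own, here is a minimal sketch of the inversion (inputs from above; expect small rounding differences versus the figures quoted in the text):

```python
def implied_growth(pb, roe, r):
    """Invert justified P/B = (ROE - g) / (r - g) to solve for g."""
    return (r * pb - roe) / (pb - 1)

def implied_return(pb, roe, g):
    """The same identity, solved for the discount rate r instead."""
    return (roe - g) / pb + g

pb, roe, wacc = 2.58, 0.1362, 0.0791
print(f"implied growth: {implied_growth(pb, roe, wacc):.2%}")         # ~4.30%

# Scenarios discussed below: growth comes in at 3.31%, and a 1% rise in rates
print(f"return if g = 3.31%: {implied_return(pb, roe, 0.0331):.2%}")  # ~7.31%
print(f"g if WACC = 8.91%:   {implied_growth(pb, roe, 0.0891):.2%}")  # ~5.93%
```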

It’s important to define what exactly this number means. When we talk about growth, we are talking about the implied growth rate of underlying earnings, which means this number is inclusive of the net effect of equity issuance (repurchases). In other words, if companies in aggregate were net issuers of new shares, then this would be a drain on future growth; conversely, were companies to repurchase shares this would be accretive to future growth.

One of the shortfalls of CAPE analysis that @JesseLivermore astutely pointed out is that today and yesterday are not exactly the same. Making matters yet more complex, no two datapoints in the marketplace are stationary on even an intraday basis. The world is dynamic and things change fast. This is but one reason why we like looking at implied growth. Implied growth does something for us that neither CAPE nor any other backwards-looking metric does: it provides a very clear threshold that future growth needs to match in order for us to capture our desired return (the desired return being the 10-year Treasury plus an equity risk premium). Simultaneously, implied growth tells us what levels of growth will leave us short of our desired return, and what hurdle we need to clear for an outsized return.

The key takeaway here is that in order for an equity investor today to earn his/her cost of capital (7.91%) then earnings must grow at a rate of 4.31% from here on out. In essence, this 4.31% serves as our betting line: if growth hits the over, an investor should earn in excess of his cost of capital; and if growth hits the under, then an investor’s earnings will fall short of the cost of capital. Historically g has been in the range of 5-6%, having averaged 5.3% since 1950. This suggests that in isolation, today’s levels are in-line with the “slow-growth” environment so many market analysts make reference to.

Investors can then use this benchmark and ask hypothetically: what would my return look like ten years down the line, given different levels of actual realized growth, holding all else constant? Let’s say the growth in earnings actually tracks at 3.31%, falling a full 1% shy of the 4.31% implied in today’s market price. Holding P/B and ROE constant, we can then solve for “r” to get our expected return. With 3.31% growth, we find that “r” = 7.31%. This would leave us 60 bps short of our expected return based on today’s metrics. Alternatively, let’s say growth ends up a full 1% above 4.31%. Solving for “r” we see that our expected return would check in at 8.53%, or 62 bps above today’s level. Or say you were concerned about a rise in interest rates, and wanted to know what implied growth would look like were benchmark rates to rise by 1%: you could add 100 bps to the 7.91% WACC, hold ROE and P/B constant, and again solve for “g”. We then learn that were rates to rise by 1%, the market would be pricing in 5.93% growth in EPS. We’ll have a deeper discussion about rates below. While we present these as examples, it’s possible for an investor to model out numerous different scenarios using this equation, assuming various paths for interest rates, ROE and/or growth.

There’s an important point to make here: an investor need not make precisely 7.91% over time even if growth were to come in precisely at 4.31% annualized. This is merely the rate at which intrinsic value will increase, and the level at which the market would be fairly valued. As we should all know by now, markets often overshoot to the upside and to the downside. There is also a high degree of path dependency, depending on when an investor’s capital needs take place. Were all our equation inputs to stay exactly the same for 10 years except for the numerator in P/B (i.e. were the index price to stagnate, but ROE, WACC and g to stay the same over that time period), the outcome would be a very cheap equity market, but no actual return on capital over the prior decade. To an extent this is an oversimplification, considering an investor would also earn dividends that can be reinvested, thus increasing the overall yield on the portfolio as time marches on; however, the fact remains that it’s certainly possible for the justified value to work out as planned, but at the same time, for Mr. Market to not cooperate.

One more beautiful feature of this equation is that we can deploy it with equal validity and power for individual stocks or for the broader market, whereas CAPE’s power is confined to the broader market and, to a lesser extent, cyclical stocks. There is a second nice element as applied to the micro: the level of implied growth in the market provides a nice reference point for thinking about individual company valuations. We can use this level as a benchmark from which to say a given stock is expensive or cheap. If a stock’s expected growth is greater (less) than the market’s implied growth, then that company most likely deserves an above (below) market multiple. But this point is merely an aside.

Micro Factors that Matter in the Big Picture

In the discussion on CAPE, many who warn of overvaluation today cite “artificially low interest rates” as distorting the discount rate applied to equity valuations. In the context of CAPE this is a fair point (albeit one we would still argue is weak). In the context of implied growth, that point is rendered entirely moot. One must be a realist and look at the actual, factual influence that interest rates have on a company’s cost of capital. Low interest rates, as is evidenced in various arenas, allow borrowers to tap capital at a lower cost. Rates will eventually rise and raise the cost of capital. However, given how long rates have been low, companies have had ample time to finance themselves cheaply and lock in rates for the long term. The early years of a rising rate regime will be somewhat mitigated by companies already having secured low-rate financing.

Let us illustrate how these low rates work to a company’s advantage. Assume we managed a company and were planning to undertake a new project. From this new project, we expect a return of $100 on an investment funded with $100 of debt (and $0 equity) at a 10% interest rate. If everything were to go smoothly, our return would be $100 minus the cost of our debt, or 10% of $100. We can write this out as $100 - ($100 * 0.10) = $90. Now let’s say interest rates for our company’s debt were to drop to 5%, but our expected return were to stay exactly the same. The math on this very same project would turn into $100 - ($100 * 0.05) = $95. Nothing changed insofar as the opportunities for the business go; however, it would now be able to earn $5 extra on the very same project purely because of the lower cost of capital.

Complain all you would like about interest rates being “artificially” low, but the reality of the situation is clear. So long as companies can and do tap into lower-cost sources of capital, the returns available to those companies will rise accordingly. We will let academics handle the debate about how and why interest rates are so low; instead we will focus on business analysis, looking at the ways in which low interest rates tangibly alter the math in valuing companies.

Stated another way, the change in interest rates is shifting a portion of returns on corporate investment from the pockets of bondholders to the pockets of shareholders. When Siemens did a debt-backed share repurchase, their CFO, now CEO, Joe Kaeser had the following to say: “Our plan to swap expensive equity for historically cheap debt capital is being executed in grand style. The fixed interest rates we’ve obtained will ensure that we’ll continue to profit from today’s extremely favorable conditions over the long term.” We see this as a stated corporate objective from many blue chip companies around the world, with FedEx recently joining the chorus in announcing a simultaneous bond issue and share repurchase. FedEx issued $2 billion in bonds with rates ranging from 4% to 5.1% in order to buy back more stock; its ROE is currently 9.9%. In 2013, a highly levered company like Tenet Healthcare was able to issue debt at 4.25% and 4.375% and directly use the proceeds to retire debt at 8.875% and 10.00%, respectively. The bottom line is that this is valuable to shareholders.

If you are spending time arguing over whether low interest rates are justifiable, you are missing the forest for the trees and failing to see that companies are actually capitalizing on the status quo in order to drive shareholder value into the future. If you can hold ROE static while sending WACC lower, then all of a sudden you have created extra value for shareholders. This is the upshot of the point we were making above about companies capitalizing on low interest rates.

The Historical Context of Implied Growth

To construct a chart of implied growth for the S&P 500 since mid-2000, we had to come up with a cost of capital to plug into the equation. Our first approach was to take Aswath Damodaran’s Equity Risk Premium and add it to the 10-year Treasury yield at each point in time. Each datapoint uses the rolling 10-year average ROE for the S&P 500 and the TTM book value for the purposes of P/B. We plugged the known numbers into the equation above and solved for implied growth on a rolling basis.

Since something like the cost of capital can be highly subjective and very noisy from day to day, depending on changes in the market’s price and equity risk premium, we decided to take a second look using the historical long-run return on equities. We did this because we thought it important to try to drown out some of the “noise” in the data created by these fluctuations. That was done by taking the actual annualized equity return for that particular datapoint going back to 1929 and then adding the spot ten-year yield to approximate a “historical WACC.” For example, to get the equity risk premium for the year 2001, we took the total annualized return of the S&P 500 from 1929 to 2001 and subtracted the total annualized return of the 10-year Treasury over that same time period, giving us an approximation of the actual excess return earned by equity holders over that period. For the WACC, we then added that number to the spot 10-year Treasury at that point in time.
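A minimal sketch of that second construction, under stated assumptions: the cumulative return multiples, the 2001 spot yield, and the sample P/B and ROE below are hypothetical placeholders, and `implied_growth` is the same rearrangement shown earlier.

```python
def annualized(cumulative_multiple, years):
    """Convert a cumulative growth multiple (e.g., 960x) into an annual rate."""
    return cumulative_multiple ** (1 / years) - 1

def historical_wacc(eq_multiple, bond_multiple, years, spot_10y):
    """Realized ERP (annualized equity return minus annualized 10-year Treasury
    return since 1929), added back to the spot 10-year yield."""
    erp = annualized(eq_multiple, years) - annualized(bond_multiple, years)
    return erp + spot_10y

def implied_growth(price_to_book, roe, wacc):
    # Same rearrangement as before: g = (P/B * WACC - ROE) / (P/B - 1)
    return (price_to_book * wacc - roe) / (price_to_book - 1)

# Hypothetical 2001 datapoint: ~960x on equities and ~41x on Treasuries since 1929.
w = historical_wacc(eq_multiple=960, bond_multiple=41, years=72, spot_10y=0.05)
g = implied_growth(price_to_book=4.0, roe=0.14, wacc=w)
print(f"WACC: {w:.1%}, implied growth: {g:.1%}")  # roughly 9.7% and 8.3%
```

Repeating that solve date by date, with each date’s rolling ROE and TTM book value, traces out the implied growth series charted below.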

This is what we found using both WACCs:



Low P/B inherently means that implied growth sets a lower hurdle for an investor to earn their cost of capital. This is so even when a low P/B is combined with a historically average ROE. From both of these charts it becomes pretty obvious just how obscene the implied expectations embedded in the indices were in 2000, especially considering that EPS growth has averaged 5.3% since 1950. 2011 is another interesting datapoint. During the European debt crisis selloff, the S&P was pricing in close to zero EPS growth despite actual, robust EPS growth and a bounce-back in ROEs. In fact, the orange line shows clearly that for much of 2011 and 2012, markets were pricing in zero earnings growth in perpetuity. This is consistent with much of the rhetoric at the time. While the popular discussion has shifted towards “multiple expansion” being the sole driver of equity returns in 2013, it’s important to consider the context: equities went from pricing in absolutely no growth to pricing in fairly modest growth assumptions.

It is with this implied growth equation we can see very clearly exactly what people are referring to when they say “multiple expansion” has been the driver of returns. Interestingly, since the bounce-back from the Great Recession began, the 5% threshold for implied growth has been strong resistance for equity markets. Each time implied growth has reached those levels, markets have either sold off or gone sideways. In light of the aforementioned 5.3% average annual growth since 1950, this market action is consistent with the “low growth” environment theme, as equities have basically been ping-ponging between zero implied growth and 5% ever since the bounceback from crisis began.

It’s also worth pointing out how, despite the S&P 500 being higher today than it was in 2000, implied growth is much lower. Through this, we can visualize how the market’s multiple has compressed over time. In valuation theory, another way to break down value is to think of the equation Current Value = Tangible Value + Future Value. When implied growth is lower, the future value portion of this equation is also lower, but the tangible value level is higher. Tangible value includes the present asset base of the company and the value of the company’s sustainable earnings level. Here we can see pretty clearly the impact that the increasing capitalization of corporations can have on the question of fair value, something that CAPE simply cannot and does not capture.
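One common way to operationalize that decomposition is to capitalize sustainable earnings as the tangible piece and treat the remainder of the price as the market’s payment for future growth. A hedged sketch, with all inputs hypothetical (the text does not specify how it computes the split):

```python
def value_decomposition(price, sustainable_eps, wacc):
    """Split price into tangible value (sustainable earnings capitalized as a
    no-growth perpetuity) and future value (the residual paid for growth)."""
    tangible = sustainable_eps / wacc
    future = price - tangible
    return tangible, future

# Hypothetical index-level inputs.
tangible, future = value_decomposition(price=1850.0, sustainable_eps=110.0, wacc=0.0791)
print(f"Tangible: {tangible:.0f}, Future: {future:.0f}")  # ~1391 and ~459
```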



Disclosure: Both authors are long shares of Siemens (NYSE: SI)

Biotech: Popping the Allegations of a Bubble

There’s a popular view going around these days that biotech is in a bubble. Jim Grant was the first I know of to publicly express this view, and he rationalized his argument by asserting that the Fed’s zero interest rate policy was distorting markets, without even a cursory mention of the specific developments that have transpired in the biotech sector over the recent past. Several smart market participants whom I respect greatly have echoed this perspective. I want to use this post to dispel that notion.

I’m a humanities guy, not a science guy. I am also a generalist, not a biotech investor. I have some exposure to the sector and don’t plan to increase or decrease that exposure any time soon. That being said, I will leave the science vague and hope someone more knowledgeable, with more skin in the game, can expand on this argument. As recently as the late 1980s, the drug discovery process was entirely centered on literally sifting through dirt in order to find molecules that might hold some therapeutic power. The science was simply a matter of “leaving no stone unturned” in a quest to find anything that just might work. Then came a pivotal moment in the late 1980s, when drug discovery evolved into a process of learning how diseases and ailments operate on a molecular level and then working backwards via inversion to find proteins that could positively change the active mechanism of the problem. If you are interested in this development and its business effect, I strongly recommend the book The Billion-Dollar Molecule by Barry Werth.

Today we are undergoing another profound change, and the catalyst was the mapping of the human genome. Not long ago, Peter Thiel and Marc Andreessen debated whether there is real innovation happening in our economy today. Surprisingly, not even Andreessen, who took the “yes there is innovation” side of the debate, mentioned genomics and the impact it’s having on people’s lives around the world. The only real mention of biotech was Thiel’s complaint about the FDA getting in the way too much, though if anything, this is not borne out by what has transpired these last few years. The problem for biotech is that its impact is very intangible compared to the smartphones we all carry in our pockets everywhere. Genomics has greatly accelerated the process and efficiency of drug discovery. The results are evident, though people don’t see or feel them. In 2012, new drug approvals by the FDA hit a sixteen-year high. Although 2013 did not see a new high in approvals, it did see the largest aggregate market opportunity for new approvals. I will oversimplify to make the point very clear: let’s say the average drug development timeframe was 10 years and has now accelerated to 5 years. Drug development inherently becomes worth more money if the time to earning first cash flows is cut in half, as the sketch below shows.
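Here is that oversimplification in discounted cash flow terms. A toy sketch, with every input hypothetical: a flat $100 annual cash flow for ten years of sales, discounted at 10%.

```python
def pv_of_drug(annual_cash_flow, years_until_launch, years_of_sales, discount_rate):
    """Present value of a flat cash-flow stream that begins once development ends.
    A deliberately crude model: real pipelines are lumpier than this."""
    return sum(
        annual_cash_flow / (1 + discount_rate) ** (years_until_launch + t)
        for t in range(1, years_of_sales + 1)
    )

slow = pv_of_drug(100, years_until_launch=10, years_of_sales=10, discount_rate=0.10)
fast = pv_of_drug(100, years_until_launch=5, years_of_sales=10, discount_rate=0.10)
print(f"10-year development: {slow:.0f}; 5-year development: {fast:.0f}")
# Roughly 237 vs. 381 -- halving development time adds ~60% to present value.
```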

The above provides some justification for why “this time is different.” Things can be different and still a bubble, though, so to further dispel this notion I want to point out two anecdotal examples of why the bubble assertion is wrong. Again I want to qualify that overvalued and/or overextended does not mean something is a bubble. For starters, let me borrow Robert Shiller’s definition of a bubble (as paraphrased by me from Shiller’s panel at the Economist’s Buttonwood Gathering): a bubble is a price-mediated feedback between prices and market participants, with excessive enthusiasm, media participation, and regret from those who are not involved. This “psycho-economic phenomenon” is a defining characteristic that becomes ingrained in a culture and is related to long-term expectations that cannot be pinned down quantitatively. Let me offer the following chart, and you tell me where there’s a bubble:

Simply put, we see none of this. Biotech has barely reentered market participants’ consciousness, despite the big players returning to top-line growth for the first time in years following their patent cliff. There are few if any stories in the mainstream media about biotech billionaires. If you want to see hype, look no further than social media companies. Do we see anything remotely resembling awareness in the masses that biotech has been a strong sector? I get asked all the time by clients about Tesla, Bitcoin, Twitter, etc., and I’ve never once been asked about biotech. And yes, I think Tesla, Bitcoin, and Twitter prices comfortably fit Shiller’s definition of a bubble.

So let me offer two anecdotes on price and valuation in biotech to provide some context to this discussion.

1) Regeneron: In 1991 it was considered common knowledge that Regeneron was a bubble, insanely priced and unsustainable. Here’s what the pundits were saying at the time (and do read that link, for it’s quite telling how similar the complaints are to today’s): “Regeneron is a real long shot for investors: With no potential products even slated for clinical trials, the company is a good 10 to 12 years from delivering a marketable product…. ‘These are companies that have no product, and no prospect of revenue for three years or more. It only makes sense for them to make money when investors are in a feeding frenzy.’”

Fast-forward to today and an investor in Regeneron’s IPO is up ~1,789% in 23 years compared to ~390% for the S&P 500. This alone does not prove biotech is not a bubble but it highlights an important point that is fairly unique to this sector: even if you pay a high starting price, when you are right you will make multiples of your money. Simply put: bad investments in biotech will be worthless and successful investments will be worth multiples. The starting point matters little.
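For context, here is the compounding math behind those cumulative figures (a quick sketch; the conversion is standard CAGR arithmetic):

```python
def cagr(total_return_pct, years):
    """Convert a cumulative percentage return into a compound annual growth rate."""
    return (1 + total_return_pct / 100) ** (1 / years) - 1

print(f"Regeneron: {cagr(1789, 23):.1%} a year")  # ~13.6%
print(f"S&P 500:   {cagr(390, 23):.1%} a year")   # ~7.2%
```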

2) Incyte: This company is experiencing a wave of success, up a cool 207% over the past 52 weeks. Its first approved product, Jakafi, earned $235.4 million in revenue in 2013 and is still growing today. In the pipeline, Incyte is working on one of the first treatments for pancreatic cancer that actually improves patient survival. This first big spate of commercial success and pipeline expansion would have a rational observer expecting this company to be way above record-high levels, especially if we are in a bubble, right? Wrong.

In 1999, shares of this stock were changing hands below today’s prices, though well above where INCY was upon earning its first real revenues. Back then there weren’t any signs of imminent success to be found. It’s definitely much easier to call something a bubble in hindsight, but the magnitude of the differences between then and now is striking. The fact that 1999 is still so fresh in many market participants’ memories is probably a powerful force in the proliferation of bubble assertions today.

Preclinical biotechs are valued based on the odds of approval, the size of the market opportunity, and the percentage of that market the treatment can capture, all discounted to today based on the time it will take to reach positive cash flow (see the sketch below). It is unquestionably silly when biotechs surge in unison riding the wave of one company’s success. What happened in the wake of Intercept’s NASH primary endpoint success is not rational, and many companies did not deserve the pop they received. But that happens all the time in markets, even when there is no bubble.
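That framework reduces to a probability-weighted, discounted expected value. A toy sketch under loudly hypothetical assumptions (the perpetuity treatment of peak cash flow and every input below are mine, not from the text):

```python
def preclinical_value(prob_approval, market_size, market_share,
                      margin, years_to_cash_flow, discount_rate):
    """Expected value of a preclinical asset: size the peak cash flow, capitalize
    it as a perpetuity at launch, discount it back, and weight by approval odds."""
    peak_cash_flow = market_size * market_share * margin
    value_at_launch = peak_cash_flow / discount_rate
    return prob_approval * value_at_launch / (1 + discount_rate) ** years_to_cash_flow

# Hypothetical: 15% approval odds, $2B market, 25% share, 30% margin, 6 years out.
value = preclinical_value(0.15, 2e9, 0.25, 0.30, 6, 0.10)
print(f"${value / 1e6:.0f}M")  # ~$127M
```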

This is not a great time to pile into biotech, as some of the favorable developments discussed above have already been reflected in prices. But a bubble means run for shelter and seek cover, and that too is inappropriate right now. Something that is overextended is not necessarily also a bubble. It’s very possible, almost probable, that biotech will go down 15% before going up from here. The fact of the matter is that biotech is the same as it ever was: the big boys are priced in line with the market, with no premium attached to their multiples, and the small companies that investors get “right” will be worth multiples of what they are today, while the wrong ones will be worthless.

 

Thanks to my buddy who helped me pull this together so quickly today, you know who you are!

 

Disclosure: No position in any of the stocks mentioned, though a small portion of the portfolio is long specific biotech stocks.