Benchmarks are great for telling you where you stand compared to others. In venture capital, there are three common problems with benchmarks: history, math, and ego.
1. Benchmarks are History
Benchmarks are backward-looking. Past performance is no guarantee of future results. Maybe market mechanics changed. Maybe the supply of and demand for startup funding in a specific segment shifted. Think about investing in Bitcoin in early 2017 vs. early 2019.
2. Benchmarks, Averages, and the Long Tail
Union Square Ventures’ Albert Wenger wrote an outstanding blog post on sample correlation in fat-tailed distributions such as venture fund returns. That is why we think in terms of “top quartile,” “top 10%,” or “top 5%” rather than taking a sample mean and variance.
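A quick simulation shows why quantiles are the safer summary. The sketch below is my own illustration (not taken from Wenger’s post), and the Pareto shape parameter is an arbitrary assumption chosen to make the tail heavy: across independent samples, the sample mean jumps around with whichever outlier happens to be drawn, while the quartile and decile thresholds stay comparatively stable.

```python
import random
import statistics

random.seed(42)

def simulate_fund_multiples(n_funds, alpha=1.2):
    """Draw fund return multiples from a fat-tailed Pareto distribution.
    With alpha close to 1, the theoretical mean is dominated by rare outliers."""
    return [random.paretovariate(alpha) for _ in range(n_funds)]

# Draw several independent samples and compare how stable the
# sample mean is versus the quantile thresholds.
for trial in range(3):
    sample = sorted(simulate_fund_multiples(200))
    mean = statistics.mean(sample)
    upper_quartile = sample[int(0.75 * len(sample))]
    top_decile = sample[int(0.90 * len(sample))]
    print(f"trial {trial}: mean={mean:5.2f}  "
          f"upper quartile={upper_quartile:4.2f}  top 10%={top_decile:4.2f}")
```

In a fat-tailed sample like this, most of the mean comes from a handful of draws, which is exactly why a single fund entering or leaving the sample can move the “average” dramatically.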
Another way to look at this is the distribution of preliminary venture fund results by vintage year. Cambridge Associates produces reports for some of the VC funds it tracks (which, again, is a small sample of the total number of VC funds). Here is an excerpt of the Gross Total Value to Paid-In Multiple (“TVPI”) table for US Information Technology investments. All performance figures in these reports are gross returns from portfolio investments, i.e., gross of fees, expenses, and carried interest. A few definitions from the report:
- Year of Initial Investment: the calendar year in which a fund made its initial investment in a portfolio company.
- Pooled Return: aggregates all cash flows and ending NAVs in a sample to calculate a dollar-weighted return.
- Top 10%, Upper Quartile, Lower Quartile, and Bottom 10%: thresholds based on the individual investment TVPIs within an initial investment year.
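To make the pooled-versus-median distinction concrete, here is a toy calculation. The fund names and figures are entirely made up for illustration, and a real pooled return is dollar-weighted over dated cash flows; the simple pooled multiple below just captures the mechanical point that pooling lets one outlier dominate.

```python
import statistics

# Hypothetical fund-level figures (made-up numbers, for illustration only):
# paid_in = capital called from LPs; total_value = distributions + ending NAV.
funds = {
    "Fund A": {"paid_in": 100.0, "total_value": 95.0},
    "Fund B": {"paid_in": 100.0, "total_value": 110.0},
    "Fund C": {"paid_in": 100.0, "total_value": 120.0},
    "Fund D": {"paid_in": 100.0, "total_value": 130.0},
    "Fund E": {"paid_in": 100.0, "total_value": 800.0},  # the outlier fund
}

def tvpi(f):
    """Total Value to Paid-In multiple for a single fund."""
    return f["total_value"] / f["paid_in"]

multiples = sorted(tvpi(f) for f in funds.values())
median_tvpi = statistics.median(multiples)

# A pooled multiple treats all funds as one portfolio: aggregate total value
# over aggregate paid-in capital, so the biggest outcomes dominate.
pooled_tvpi = (sum(f["total_value"] for f in funds.values())
               / sum(f["paid_in"] for f in funds.values()))

print(f"median TVPI: {median_tvpi:.2f}x, pooled TVPI: {pooled_tvpi:.2f}x")
# median 1.20x vs. pooled 2.51x: the gap is driven entirely by the outlier.
```

Four of the five hypothetical funds cluster near 1x, yet the pooled figure is more than double the median, which is the same signature the Cambridge Associates numbers show.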
As you can see from these numbers, you could have outperformed more than half of all VC investments in US Information Technology simply by putting the LP capital calls into a savings account (at a bank that didn’t fail in 2007 or 2008, I should add). But look at the Top 10% performance and its distance from the Upper Quartile. The fact that the Pooled Return sits so far above the Median hints at how much the top 10% must have outperformed the rest of the investments.
3. Benchmarks and Our Egos
It’s a myth that we get better by benchmarking ourselves against others. In reality, better outcomes do not come from mimicry. Harvard research has shown that comparing ourselves to others has negative consequences:
In some cases, we benchmark against those who are more capable or accomplished, which can be counterproductive when we fail to match them. In other cases, often in a subconscious effort to preserve our self-esteem, we rate ourselves against people who are less successful — a “downward comparison” that is obviously anathema to personal development.
Structured reflection on successes and mistakes through after-event reviews (AERs) turns out to be more effective and promotes experience-based leadership development. Focus on getting better than you were yesterday. Live up to your own potential and aspirations, not somebody else’s. My hunch: structured reflection reveals the “why,” not just the “what.”