Coinmetrics was created to publish hard-to-acquire data about major public blockchains, and to promote some ratios we thought were instructive. Since the founding of this website, the field of cryptoasset valuation has matured and grown significantly. The cryptoassets in question also continue to grow and change, meriting thoughtfulness about various analytical tools. While users are more empowered than ever, uncertainty remains about a) whether ratio analysis is appropriate, b) how to interpret major ratios, and c) the shortcomings of such analyses. In this piece, we'll discuss ratio analysis, its shortcomings, and some common mistakes. As always, we urge skepticism and restraint in the interpretation of our data.

The truth about the nascent industry of cryptoasset valuation and fundamental analysis is that very few best practices exist. Serious attempts at economic models have been made – one that we particularly like is this hard-to-parse paper from Pagnotta and Buraschi. (The authors kindly wrote a non-technical explainer to detail their model. That is a discussion for another time.) If you don't have a PhD in statistics, and you're still interested in assessing cryptoasset networks, you might find ratios more convenient. We have made it as simple as possible, and you can even use our formula builder tool to build your own. But where to start? There is a lot of data at your disposal and not a lot of guidance. So let's start by assessing ratios generally.

I. Types of ratios

The data we have enables you to conduct time-series and cross-sectional analysis. Time-series analysis means tracking the change in a characteristic over time. For instance, you might analyze long-term US equity trends with the cyclically adjusted price-to-earnings ratio (or CAPE). That measures aggregate equity prices for a basket of stocks against the corporate earnings they are producing (with the denominator – real earnings per share – smoothed on a trailing 10-year basis). What you have with the CAPE is a numerator that tracks a pricing element and a denominator drawn from a fundamental valuation indicator. The cyclical adjustment refers to the smoothing of earnings to subtract out variance over market cycles. If earnings stay roughly constant and price rises rapidly, the ratio will increase and might present a warning signal to the value-minded investor. This is a ratio which is informative over time: you just compare it against itself!

If you believe the CAPE is useful, then you might develop a strategy that involves selling the index when it hits some key threshold – maybe 25 – and buying when it's in a trough – maybe around 8. We aren't endorsing the CAPE, but it is a common ratio used to analyze US equities, as it offers a lot of history to compare the present against. (Other ratios in that same family include the "Buffett indicator" – the aggregate public equity market cap against GDP – or market-wide Tobin's Q: the market value of equity against corporate net worth.)
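To make the mechanics concrete, here is a minimal sketch of how a CAPE-style ratio and a naive threshold signal could be computed, assuming you already have monthly price and inflation-adjusted earnings-per-share series. The function names, window, and thresholds are illustrative, not a recommendation.

```python
import pandas as pd

def cape_ratio(price: pd.Series, real_eps: pd.Series) -> pd.Series:
    """Price divided by the trailing 10-year average of real earnings per share."""
    smoothed_eps = real_eps.rolling(window=120).mean()  # 120 months = 10 years
    return price / smoothed_eps

def naive_signal(cape: pd.Series, sell_above: float = 25.0, buy_below: float = 8.0) -> pd.Series:
    """Toy mean-reversion rule: 'sell' when the CAPE is rich, 'buy' when it is cheap."""
    return cape.apply(lambda c: "sell" if c > sell_above else "buy" if c < buy_below else "hold")
```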

All of the time-series strategies that exploit those ratios rely on a simple assumption: the belief that some stable, fundamental relationship exists between the price and underlying value of an asset, and that if the relationship deviates in some meaningful way from the long-term average, it will eventually revert to the mean. The pitfall comes if the relationship isn't stable at all, and you are misled into thinking it will mean-revert. Having insufficient data is one way to virtually ensure you fall victim to this. Ultimately, even ratios with a century of data behind them, like the Shiller CAPE, might not be stable. Some analysts suspect that we've entered a new era of rich valuations, and that we'll never return to the more modest valuations of yesteryear. One's belief regarding the stability of the relationship is therefore vitally important in trading against the ratio on a time-series basis.

Cross-sectional analysis involves comparing an asset to others in a similar class. This could mean comparing Exxon to other oil and gas stocks or Ford to a family of automakers. Ratios allow for the comparison of assets which are different in size: Ford might have earnings which are 20x those of a competitor, but when you standardize those earnings against price, you can determine whether one is more richly valued than the other. If you are careful, and you pick a good sample, you can use this method to determine what the market thinks of an asset compared to its peers. You might compare the P/E ratios of large pharmaceuticals and find that the market is willing to pay more per dollar of Johnson & Johnson earnings than per dollar of Pfizer or Merck earnings.
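As a toy illustration of that standardization, here is a minimal sketch comparing price-to-earnings ratios across a hypothetical peer group. The company names and figures are made up purely for illustration.

```python
# Hypothetical peer group: market capitalization and annual earnings, in dollars.
peers = {
    "AutomakerA": {"market_cap": 180e9, "earnings": 15e9},
    "AutomakerB": {"market_cap": 60e9, "earnings": 6e9},
    "AutomakerC": {"market_cap": 90e9, "earnings": 12e9},
}

# P/E standardizes for size: a larger company is not automatically "more expensive".
for name, financials in peers.items():
    pe = financials["market_cap"] / financials["earnings"]
    print(f"{name}: P/E = {pe:.1f}")
```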

In equity valuation, checking relative P/E among like-kind assets is a popular way of teasing out their individual traits and figuring out which companies the market really likes, relative to a common baseline. If two companies have virtually identical business models, margins, earnings, and leverage ratios, but one has a much higher EV/EBITDA multiple, that's probably a sign that the market is a lot more optimistic about that company. It might have a star CEO or some valuable IP or competitive positioning or regulatory relationship or a hot new product line. The point is that ratios are used cross-sectionally for a huge variety of reasons – but they are only meaningful if the comparison is valid and fair. Comparing the P/E ratio of a utility company to a fast-growing tech company will probably not be informative – the tech company is bound to have a higher multiple, just by its very nature.

So there you have it. The big mistakes one can make with ratios are assuming, for time-series purposes, that a fundamental relationship is stable over time; and assuming, for cross-sectional purposes, that two assets with radically different features can be meaningfully compared.

II. Ratio analysis in the wild

How are ratios useful, beyond just comparing two companies or assets? One answer is that they can be used in valuation. In fact, if you read sell-side equity research from Wall Street institutions, you'll find that ratio analysis is used frequently for valuation. Even though it is not a substitute for the more rigorous DCF, discounted dividend, or discounted residual income models, it is still extremely common in equity valuation.

Typically it works a bit like this: Company A is planning an IPO. It sits within a given industry group. It has a comparable business model, book value, and strategic positioning to several other companies within that industry group. The average EV/EBITDA ratio for its peers is E. Company A had an EBITDA of N in the last fiscal year. Thus, we can apply the EV/EBITDA ratio to Company A (E = x / N, where E and N are both known; solve for x, i.e. x = E × N) and derive an implied valuation, assuming it ends up being priced similarly to its peers. It works the same way with P/E or P/S or any of the other common ratios. (Finviz does a good job of illustrating these.) The simple idea here is that you can guess at a company's valuation from some fundamental piece of information (cash flow, earnings, revenue, sales, etc.) and the common ratio of that fundamental to market price for its closest market relatives.
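A minimal sketch of that comparables exercise, with made-up peer multiples and a made-up EBITDA for Company A (none of these figures come from a real filing):

```python
# Hypothetical EV/EBITDA multiples observed among Company A's closest peers.
peer_multiples = [8.5, 9.2, 7.8, 10.1]

# Company A's EBITDA for the last fiscal year (illustrative figure, in dollars).
company_a_ebitda = 450e6

# From E = x / N, solve for x: the implied enterprise value is E * N.
average_multiple = sum(peer_multiples) / len(peer_multiples)
implied_enterprise_value = average_multiple * company_a_ebitda

print(f"Average peer EV/EBITDA: {average_multiple:.1f}")
print(f"Implied enterprise value: ${implied_enterprise_value / 1e9:.2f}B")
```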

This exercise is no guarantee of a great valuation, and indeed, ratio-based valuation is generally much less precise and rigorous than other approaches. However, it is a decent way to eyeball the value of a financial asset, sanity-check it, and situate it within its peer group.

To sum up: ratios are quite powerful. They can generate a comparison between an asset and its own history (time series), or between an asset and its peer group (cross-sectional), and in some cases they can be used to derive a valuation of the asset. Since cryptoassets generally don't yield cash flows (there are some exceptions, like Siafunds, Augur if it ever launches, masternodes, and various other schemes), they can hardly be valued by adding up expected earnings. Thus ratio analysis can only yield relative values, as mentioned above.

III. What not to do

A while back, a well-reputed fund released a memo with an analysis of Bitcoin price on a per-transaction basis. They posted a chart that looked a bit like this:

The idea here is that the Bitcoin price had a fairly stable and mean-reverting relationship with transaction count. The blue line appeared to show that the typical range for the “price per 1000 txn” ratio was between about 2 and 4, and when it went above that, it might be a good time to sell. You could have designed an even more granular strategy to counter-trade the ratio if you wanted. This strategy would have worked pretty well from Q2 2013 to Q1 2017. Let’s zoom out and scroll forward in time:

As you can see, the period where the ratio traded in the 2-4 range only accounted for one portion of Bitcoin history, when Bitcoin price and transaction behavior were relatively stable. After that, Bitcoin price skyrocketed and the ratio rose to new highs, where it has remained for well over a year. The ratio would have been useless after early 2017, and if you had been relying on that narrow rule for your trading, you would have missed the entire rally.

This happened because Bitcoin’s transaction count hit a ceiling at around 300-400k transactions per day, but price continued to rise unabated. Even after price cooled off in Q1 2018, the ratio didn’t return to its 2015 levels. A serious discontinuity had emerged. The Bitcoin system was able to tolerate a lot more value injection while supporting it with roughly the same number of transactions per day. This is because, ultimately, Bitcoin price isn’t a function of its transaction count! That’s only one small variable in the entire system – other important variables include average transaction value, fees, hashrate, accumulated trust, miner concentration, developer innovation and so on. Designing an investment strategy around a particular, narrow set of historical data is referred to as overfitting a model. It is always possible to overfit your model to some sample. But since the future is much harder to predict, and complex systems may not maintain stable characteristics over time, overfitted models are prone to failure. That was the case with the price-per-1000-transactions ratio and trading strategy.
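To see how easily such a rule gets baked into code, here is a minimal sketch of the counter-trading strategy described above, assuming you have daily price and transaction-count series. The 2-4 band is the one visible in the first chart; calibrating it on that narrow 2013-2017 window is precisely the kind of overfitting that failed out of sample.

```python
import pandas as pd

def price_per_1000_txns(price: pd.Series, txn_count: pd.Series) -> pd.Series:
    """USD price divided by daily transaction count, scaled to 1,000 transactions."""
    return price / (txn_count / 1000)

def counter_trade_signal(ratio: pd.Series, lower: float = 2.0, upper: float = 4.0) -> pd.Series:
    """Naive mean-reversion rule: sell above the band, buy below it, otherwise hold."""
    return ratio.apply(lambda r: "sell" if r > upper else "buy" if r < lower else "hold")
```

After early 2017 this rule would simply say "sell" indefinitely, which is just another way of saying that the relationship it assumes no longer holds.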

IV. Best practices

What was the mistake here? The creators of the price-to-transaction ratio erred in three ways:

  1. they assumed that the relationship between prices and transactions would be relatively stable and mean-reverting;
  2. they used an inferior fundamental variable (transaction count) as a proxy for demand; and
  3. they trained their model on an insufficiently large dataset.

Now, my criticism might appear harsh, as the chart may have been a throwaway model, but these are common mistakes to make. The solutions are:

  • have a firm intuition as to the relationship between the fundamental variable and price
  • don’t train a model on just a few months of data, especially if it is concerned with long-term trends
  • don’t assume that the future will be just like the past
  • think carefully about how your fundamental indicator might decouple, inflect, or be considerably altered with regard to price
  • make sure the data you’re using isn’t too noisy
  • make sure you understand the data that is the basis of your model

That last point bears repeating. Easy access to data can lead some into complacency about the nature and source of that data. There are few trusted data providers in this industry, and many websites are side projects set up by developers in their spare time. Additionally, data can be extremely political, as was the case in the scaling debate. We think that the solution to this is extreme transparency, auditability, open-source backends, and frankness about the shortcomings of the data. (Reminder: you can visit our github to audit our process or experiment with our code.) We post frequently about the shortcomings of our data. It certainly isn't perfect, and we urge you not to treat it as if it were a canonical record of truth. See this and this and this post on the topic.

More generally, it is worth having a very precise understanding of the data you're tracking over time. What does transaction count actually mean? It is an aggregation of transactions in a 24h period. But each transaction is a collection of inputs and outputs, which constitute payments and change outputs. The distinction between a payment and a change output is sometimes subjective and unknowable. Transactions can encode a single output or hundreds of them! Additionally, any payment can transmit an arbitrarily large amount of value in satoshis. Thus, the relationship between transaction count and the economic activity occurring on the network is an extremely tenuous one – we will cover this in a future post.
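To make that concrete, here is a minimal sketch with entirely made-up transactions showing why transaction count says little about payment count or value transferred. In real chain analysis, the change flag below would itself be a heuristic guess rather than a known fact.

```python
# Hypothetical day with three transactions. Each output is (value_in_btc, is_change).
transactions = [
    {"outputs": [(0.5, False), (0.3, True)]},            # one payment plus change
    {"outputs": [(0.1, False)] * 200 + [(2.0, True)]},   # batched: 200 small payments
    {"outputs": [(1500.0, False), (12.0, True)]},        # one very large payment
]

txn_count = len(transactions)
payment_count = sum(1 for t in transactions for v, is_change in t["outputs"] if not is_change)
value_transferred = sum(v for t in transactions for v, is_change in t["outputs"] if not is_change)

print(f"Transactions: {txn_count}")                  # 3
print(f"Payments:     {payment_count}")              # 202
print(f"Value (BTC):  {value_transferred:.1f}")      # 1520.5
```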

In statistics, if a variable is unknowable or difficult to apprehend directly with data, one usually looks for a measurable variable whose variation tracks the unobservable one. The replacement or substitute variable is called a proxy. If you're using transaction count to proxy the number of actual payments taking place between entities on the Bitcoin blockchain on a given day, you are using a poor proxy. Any variation in the number of outputs per transaction, change outputs per transaction, payments per transaction, value per transaction, or type of transaction would alter the nature of the relationship between the proxy and the underlying ground truth. And in practice, those things are changing! Our data unfortunately isn't granular or precise enough to reflect that, but new tools coming soon will rectify that.

Future posts will cover how the relationship between ground truth and proxies in Bitcoin is changing, and what that means for ratio analysis. In particular, we will look at the impact of batching on transaction count and payments.
