Technical Scalability Creates Social Scalability

May 25, 2021 | 16 minute read

This essay assumes familiarity with Nick Szabo’s essay on Social Scalability, Vitalik Buterin’s essay on Weak Subjectivity, and Haseeb Qureshi’s essay on Why Decentralization Isn’t As Important As You Think.

This essay is not a rebuttal of Szabo’s essay.

Near the end of his essay, Szabo defines social scalability as “sacrific[ing] computational efficiency and scalability—consum[ing] more cheap computational resources—in order to reduce and better leverage the great expense in human resources needed to maintain the relationships between strangers involved [in] modern institutions such as markets, large firms, and governments.”

The triumph of Bitcoin over Bitcoin Cash and Bitcoin SV certainly supports this theory.

However, since Szabo wrote his seminal essay in February 2017, the crypto ecosystem has come a long way. Although Szabo conceived of the idea of smart contracts back in the 1990s, only over the last 24 months has the world at large come to appreciate the most useful application of smart contracts: DeFi.

DeFi changes everything.

This essay assumes that the most important function of a blockchain is not non-sovereign money in and of itself. Rather, the most important function of a blockchain is DeFi.

Changing the primary function of the system from strictly censorship-resistant, non-sovereign money to a highly functional, programmable development environment for financial applications should impact every single layer of the technology stack, from the networking layer (e.g. gossip vs Turbine) through the execution environment (e.g. EVM vs SeaLevel). (For a brief overview of how Solana differs from Ethereum, see the appendix of this essay.)

Blockchains should therefore be designed and managed over time first and foremost as DeFi development platforms.

Predictability Is Power

“I very frequently get the question: 'What's going to change in the next 10 years?' And that is a very interesting question; it's a very common one. I almost never get the question: 'What's not going to change in the next 10 years?' And I submit to you that that second question is actually the more important of the two—because you can build a business strategy around the things that are stable in time. ... [I]n our retail business, we know that customers want low prices, and I know that's going to be true 10 years from now. They want fast delivery; they want vast selection. It's impossible to imagine a future 10 years from now where a customer comes up and says, 'Jeff I love Amazon; I just wish the prices were a little higher,' [or] 'I love Amazon; I just wish you'd deliver a little more slowly.' Impossible. And so the effort we put into those things, spinning those things up, we know the energy we put into it today will still be paying off dividends for our customers 10 years from now. When you have something that you know is true, even over the long term, you can afford to put a lot of energy into it.”

— Jeff Bezos, Founder & CEO, Amazon

Coinbase has about 50M registered users. So does Robinhood. So do most of the major US banks.

Let’s assume that Coinbase’s strategic imperative is to migrate 100% of its users to DeFi as quickly as possible, and that the regulatory environment supports this. How would Coinbase do this today on Ethereum?

This question is unanswerable for now. It’s not that it’s technically impossible; it may very well be possible. But no single person or organization in the world can answer this question today. Why?

Because no one actually knows how Ethereum is going to scale. As a simple example, Vitalik has said that he believes optimistic rollups are likely the best scaling solution in the near-to-medium term, and that zk-rollups will become dominant in the long term. When and how will this transition happen? What infrastructure will need to be (re)built? How will capital flow between these various rollups, and what are the implications for smart contract developers, wallets, users, liquidity providers, fiat on-ramps, etc.?

Moreover, whatever scaling solutions end up mattering, it’s unlikely that any of them will exist as a single monolithic instantiation (meaning, for example, a single Optimism rollup). The future of Ethereum scaling is going to be heterogeneous.

In the long run, this is probably a good thing for Ethereum. Each of the various scaling solutions has its own trade-offs, and it’s not clear which set of trade-offs is optimal, or what the best way to combine those solutions is. The best path for the Ethereum ecosystem in the long run is therefore to experiment with many scaling solutions, figure out which ones work best for which applications, and then solve the resulting bridging, interoperability, and latency problems after that.

Furthermore, all of the teams building scaling solutions are well funded, shipping, and onboarding customers. So none of the teams are going to give up anytime soon.

So how is Coinbase going to onboard more than 50M users to DeFi?

At that level of scale, the most important thing to optimize for is certainty. Any company at that scale will demand certainty both in the present and as far into the future as possible.

Organizations at scale simply cannot afford to bet on the wrong tech stack. The opportunity cost of being wrong, and the explicit cost of having to later migrate/bridge, are massive.

I contend that the only blockchain protocol that can answer this question—or will be able to answer this question within the next 24 months—is Solana.

All of the rollup-based scaling solutions are subject to the problems I outlined above. The same is true for sharding. Despite billions of dollars in research and development from an array of capable teams (Cosmos, Polkadot, Avalanche, etc.), none of the sharding systems actually works at meaningful scale (most don’t even work in production at all). Even once they work at proof-of-concept scale, there will be lots of emergent problems to manage (e.g. failed cross-shard transactions, exchange integrations, and more).

To be clear, I’m not saying that scaling via sharding and rollups can’t work. I’m actually reasonably optimistic that both solutions will, eventually. But neither of these scaling strategies really works today, and both will create a lot of secondary and tertiary problems that have to be worked through. It’s hard to see a world in which impartial organizations that demand certainty around scalability will get the certainty they need in the next 24 months, because there are just so many intertwined components to scaling Ethereum.

The Social Scalability Costs of Fragmentation

Beyond the lack of certainty outlined above, fragmenting applications across shards and roll ups creates explicit new social coordination costs that are not present in single-shard systems. Some examples:

  1. Block times and computational throughput vary between layer 1 and various layer 2s. This has direct implications for how risk should be managed in all DeFi protocols that manage risk (which includes almost all the major primitives other than Uniswap/Sushiswap). DeFi teams have already committed to deploy their contracts across many layer 1s and 2s. However, each execution environment will require unique risk parameters. This will increase the amount of social coordination required in each protocol’s community, and slow down the rate of development.
  2. Withdrawing from optimistic rollups (ORUs) takes a long time. It’s widely expected that market makers will bridge liquidity between rollups and layer 1s. However, the implementation details of this are tricky. Should protocol front ends offer this natively? If so, should they “contract” with a specific market maker (e.g. how Citadel Securities contracts with Robinhood for PFOF)? Or should front ends leave this to users to figure out? What if the user wants to move from one ORU to another… how does the user tell the application this, so that the user can use Connext or Thorchain instead of withdrawing to Layer 1?
  3. For Metamask users (who are, for the most part, power users), it may be reasonable for users to manage this complexity on their own. But for curated wallets like Exodus or Argent that try to abstract these complexities, how much additional dev time will wallet teams spend managing these problems? How much development of actual new features will be foregone? What if a market maker stops providing liquidity on that particular bridge / segment for some reason? What backup options are there?
  4. Developer tooling must be updated to deal with new data structures (unprocessed transactions for ORUs, and zk outputs for ZKRs). Indexing and query layers will need serious upgrades, and it’s likely that application developers will need to rewrite their subgraphs to deal with new data structures (it’s probably impossible to map an EVM subgraph to Starkware’s Cairo, for example). Developers are going to be forced to rewrite huge amounts of their applications across various heterogeneous scaling solutions.

DevEx is going to become meaningfully more challenging as sharding and rollups proliferate. None of these problems are intractable, but they will slow the rate of development and frustrate a lot of developers who just don’t want to deal with all of these plumbing issues.

Predictable, Boring Scaling

Solana supports a sustained rate of 50,000 transactions per second today on a global network of more than 600 nodes. More importantly, Solana provides a predictable path to scale indefinitely. Because it natively executes transactions in parallel on GPUs, it can take advantage of the massive gains in parallelism that GPUs provide.

Moore’s Law is perhaps the most important economic force of the last 50 years. However, today it’s kind of a sham.

About 10-15 years ago, Moore’s Law stopped working for single-threaded computation. It stopped because heat creation increases super-linearly with clock speed. That’s why fanned devices (desktops and laptops) have stalled at about 3.5-4.0 GHz, and fanless devices (phones and tablets) at 1.5-2.0 GHz. While various optimizations have produced some gains in single-threaded performance over the last 10 years, single-threaded performance is no longer doubling every 18-24 months.

Almost all of the computational gains from the last decade have come from chip specialization (FPGAs and ASICs) and parallelism. Modern desktop graphics cards have more than 4,000 cores. The number of cores per card has been growing in a manner consistent with Moore’s law over the last decade, and that is expected to continue because increasing core count doesn’t impact heat creation to nearly the same degree that increasing the clock speed does.

Solana is the only blockchain that natively supports intra-shard parallelism, via its SeaLevel runtime. SeaLevel executes transactions directly on GPUs. When Nvidia releases new GPUs with 8,000 cores in 12-24 months, the Solana network’s computational bandwidth will roughly double.

And developers won’t know, or care, or have to change a single line of code.

This is the definition of predictability: writing code now, knowing that it will work in perpetuity, and knowing that the code will execute for less cost tomorrow than it does today.

The primary physical limitation in scaling computation is heat dissipation. Solana literally scales at the limits of physics.

Quantifying Decentralization And Weak Subjectivity

On the surface, many consider the Solana protocol insufficiently decentralized. They don’t really quantify this claim, but they repeat it regardless. So let’s quantify the things that can be quantified.

First, hardware costs:

  • Bitcoin runs on a $25 Raspberry Pi, and only requires a flimsy internet connection.
  • Ethereum is intended to run on a $500 laptop (though this claim is questionable given current gas limits), and requires a broadband connection.
  • Solana runs on a ~$3,500 server, and assumes a gigabit connection.

The last major consideration is state size. At 50,000 TPS and billions of users, Solana’s state will balloon. This is totally fine! Why? Because 1) Solana is assumed to run on servers with upgradeable storage (not laptops, which generally are not upgradeable), 2) NVMe SSDs scale read and write performance linearly via RAID 0, and 3) multi-terabyte NVMe SSDs cost less than $300. The cost and logistics of scaling state storage are trivial.
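To make that storage claim concrete, here is a back-of-envelope sketch in Python. The bytes-per-transaction figure is my own illustrative assumption, not a measured Solana number; the $300-per-2TB drive price is the essay’s own figure.

```python
# Back-of-envelope estimate of ledger/state growth at sustained load.
# BYTES_PER_TXN is an illustrative assumption, not a measured figure.
TPS = 50_000               # sustained transactions per second (from the essay)
BYTES_PER_TXN = 250        # assumed average bytes written per transaction
SECONDS_PER_DAY = 86_400

daily_bytes = TPS * BYTES_PER_TXN * SECONDS_PER_DAY
daily_tb = daily_bytes / 1e12
print(f"~{daily_tb:.2f} TB/day")              # ~1.08 TB/day under these assumptions

# At ~$300 per 2TB NVMe drive, the marginal daily storage cost:
cost_per_day = (daily_tb / 2) * 300
print(f"~${cost_per_day:.0f}/day in drives")  # ~$162/day
```

Even if the real per-transaction footprint is several times larger, the marginal cost stays in the hundreds of dollars per day, which is noise for an operator already running a ~$3,500 server.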

If you are this far into this essay and understand everything I’ve written so far, there is a greater than 50% probability you’re using a $2,000+ MacBook Pro. Given this reality—that the world’s 50-100M developers have a strong preference for high-end machines—I’m skeptical that optimizing for $500-$1,000 hardware is optimal. What is so special about the $500-$1,000 price point?

It’s fair to ask what the upper bound of hardware requirements should be. Certainly $25,000 is too high, because developers don’t own that kind of hardware. But rather than prescribing an arbitrary hardware cost, a better way to think about this is to ask how many nodes are enough to achieve sufficient censorship resistance. Obviously, the word “sufficient” is inherently subjective, but if you assume that 1M is the right number, then the natural question is: “are there enough $3,500 servers with gigabit connections in the world such that there is a reasonable path to 1M nodes?”

Given the number of gamers, developers, and businesses in the world with high-end hardware, it’s pretty hard to see how the answer to that question is no.

The question of hardware cost cannot be considered in isolation. It must be considered in the context of the design goals of the system.

Earlier, I argued that blockchains should be optimized for DeFi rather than for maximum censorship resistance (which would mean optimizing for 100M or 1B nodes instead of 1M). Optimizing for $25 or even $500 hardware is simply not necessary, because the vast majority of people will never run a node to begin with. So why optimize hardware costs, and the protocol itself, for them?

Everything Is Weakly Subjective

This leads to weak subjectivity and the recognition that decentralization isn’t as important as you think.

The world is weakly subjective. What does that mean? Consider the following examples.

When you last walked into a high rise structure, did you examine the architecture first, and interview the major contractors, to ensure the structure would not collapse and kill you?

Or consider an airplane. Or a car. Or your house.

Everything in the world is based on some degree of trust. The world would not function if every person had to independently verify the structural integrity of everything they interact with.

Stated positively: the world works because everyone knows that enough other people have verified the assumptions of a system such that it is safe with an extremely high degree of probability.

This is the foundational assumption of weak subjectivity. Applied to node count, the key question is: even if you are not running your own node, can you reasonably assume that enough other people and institutions are running nodes such that you can trust the system?

Today, Solana has roughly 600 nodes, up from about 100 a year ago. Like virtually all blockchains, this number is growing over time. So long as the ecosystem continues to grow, there is no reason to believe that node count will shrink. The Solana network—like all other major chains—naturally decentralizes over time as more people use it and more value flows over it.

This is also why Qureshi is right that decentralization isn’t as important as you think. Decentralization matters in order to achieve censorship resistance. It’s not clear where that threshold is (there aren’t enough counterfactuals to reason from), but the precise threshold doesn’t matter. What does matter is 1) that the shape of the curve is in fact a reverse S-curve, and 2) knowing that blockchains decentralize—and therefore increase censorship resistance—over time. So long as the chain is decentralizing at a sufficient pace—and you believe it can sustain that pace—users are very likely to maintain the censorship resistance that they need.

Risk-Decentralization Curve

Credit: Haseeb Qureshi, Why Decentralization Isn’t As Important As You Think


The fundamental question at hand is: what is the dog, and what is the tail?

For crypto’s first decade following Bitcoin’s inception in 2009, it was clear that censorship-resistant, non-sovereign money was the dog, and other applications were the tail.

But that is changing. It is now clear to everyone in crypto that DeFi is going to completely reshape finance. I contend that the roles have reversed: DeFi is the dog, and non-sovereign money is the tail.

Both require degrees of censorship resistance, but given engineering constraints, there is a fundamental trade-off between maximizing the utility of DeFi, and maximizing the censorship resistance of the system.

When the fundamental design goals of the system change, so should the tech stack. In order to scale DeFi to billions of people, every layer of the stack has to be reconsidered from first principles.

Credits: Thanks to Hasu for reviewing drafts of this post.

Disclosure: Multicoin has established, maintains and enforces written policies and procedures reasonably designed to identify and effectively manage conflicts of interest related to its investment activities. Multicoin Capital abides by a “No Trade Policy” for the assets listed in this report for 3 days (“No Trade Period”) following its public release. At the time of publication, Multicoin Capital holds positions in SOL and ETH.

Appendix: Brief Comparison of Ethereum vs Solana

Bitcoin and Ethereum made lots of assumptions in their respective designs. Perhaps the most salient were at the network layer and execution layer.

The Networking Layer

At the network layer, Bitcoin and Ethereum utilize gossip protocols. What is a gossip protocol? A protocol in which every node rebroadcasts the data it receives to its peers, indiscriminately. While this maximizes resiliency, it comes at the cost of performance: rebroadcasting data in a highly redundant manner is by definition inefficient, and therefore poorly suited to high-throughput DeFi applications.

Solana, on the other hand, invented a new networking protocol called Turbine, which is inspired by the BitTorrent protocol. Turbine is optimized for efficiency. How does it work? Consider a 1MB block. Rather than transmitting the entire block to another node, a node transmits a 10KB chunk (1% of the block) to nodes #2 and #3, and those nodes rebroadcast that 10KB chunk to nodes #4 and #5, etc. The original node then broadcasts a different 10KB chunk to nodes #6 and #7, and those nodes rebroadcast it to nodes #8 and #9, etc. Although this introduces some latency in data propagation, it maximizes the absolute data throughput of the network as a whole. Moreover, the beauty of this model is that as the number of nodes increases, the bandwidth required of each node stays the same. The only degradation is that latency increases with log(n) (which is very sublinear), as opposed to most other systems, in which it increases linearly or superlinearly.
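The propagation pattern above can be sketched as a toy simulation. The fanout of 2 matches the example in the paragraph; real Turbine uses larger fanouts plus erasure coding, so treat this as an illustration of the scaling shape rather than the actual implementation.

```python
def propagation_hops(n_nodes: int, fanout: int) -> int:
    """Hops needed for one chunk to reach every node in a fanout tree.

    At each hop, every node on the frontier forwards the chunk to
    `fanout` new peers, so coverage grows geometrically and the hop
    count (i.e. latency) grows like log(n).
    """
    hops, reached, frontier = 0, 1, 1
    while reached < n_nodes:
        frontier *= fanout
        reached += frontier
        hops += 1
    return hops

# Per-node upload cost is constant regardless of network size: each node
# only ever forwards chunks to `fanout` peers, never the whole block to
# everyone. Only latency grows, and only logarithmically:
for n in (600, 10_000, 1_000_000):
    print(n, propagation_hops(n, fanout=2))
# prints: 600 9 / 10000 13 / 1000000 19
```

Growing the network from 600 nodes to 1M adds only ten hops of propagation latency in this model, which is exactly the essay’s point: per-node bandwidth is unaffected, and latency degrades sublinearly.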

The Execution Layer

At the execution layer, the EVM is a single-threaded computer. Because any transaction can modify any part of the global state, in order to support parallelism, the system needs some way to ensure that two transactions don’t try to write to the same part of the state concurrently. The EVM elects not to deal with this problem at all, and simply executes all transactions serially.

Solana is the only protocol that attempts to deal with intra-shard concurrency. How? Solana’s runtime, SeaLevel, requires that transaction headers specify all the parts of state that the transaction will touch. With this information, SeaLevel can determine which transactions may collide, and it serializes those. All non-overlapping transactions can run in parallel across thousands of GPU cores.
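As a rough sketch of how declared state access enables this kind of scheduling, consider the following Python toy. The data model and the greedy batching are my own illustration, not Solana’s actual implementation, which additionally handles fees, priorities, and account locking details.

```python
from dataclasses import dataclass, field

@dataclass
class Txn:
    # A transaction declares up front which accounts it reads and writes,
    # mirroring how SeaLevel transaction headers list state accesses.
    name: str
    reads: set = field(default_factory=set)
    writes: set = field(default_factory=set)

def conflicts(a: Txn, b: Txn) -> bool:
    # Two transactions conflict if either one writes state the other touches.
    return bool(a.writes & (b.reads | b.writes) or b.writes & (a.reads | a.writes))

def schedule(txns):
    """Greedily pack transactions into batches of mutually non-conflicting
    transactions; each batch can execute in parallel."""
    batches = []
    for txn in txns:
        for batch in batches:
            if not any(conflicts(txn, other) for other in batch):
                batch.append(txn)
                break
        else:
            batches.append([txn])
    return batches

txns = [
    Txn("t1", reads={"oracle"}, writes={"alice"}),
    Txn("t2", reads={"oracle"}, writes={"bob"}),  # disjoint writes: runs with t1
    Txn("t3", writes={"alice"}),                  # touches alice: waits for t1
]
print([[t.name for t in batch] for batch in schedule(txns)])
# prints: [['t1', 't2'], ['t3']]
```

The key property is that shared reads (both t1 and t2 read the oracle account) do not force serialization; only overlapping writes do. That is what lets non-overlapping DeFi transactions fan out across thousands of cores.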