Questions From the Crypto Idea Maze
By Kyle Samani | December 20, 2018 | 10 Minute Read
As 2018 comes to a close, we’ve been thinking about some of the big questions that crypto still needs to answer.
Although there is some clarity on the mega theses for crypto —1) global, state-free money 2) decentralized/open/composable finance 3) web3—it’s still unclear how society is going to get there. Or in other words, the idea maze is real. Our friend Jon Choi published a fantastic crypto idea maze a few months ago.
After digesting Jon’s post, you can get mentally trapped. One useful way out of this mental trap is to think smaller, and try to find avenues in which things can compound. With compounding, it becomes possible to re-examine problem and opportunity sets in new ways that break out of the idea maze (e.g. it never made sense for Uber to try to pull off Uber Eats without already having the massive logistics infrastructure and driver fleet from the transportation business).
So the point of this post is to do just that: to think smaller, and explore the unanswered questions across the crypto landscape. A side effect of this post is that it serves as a useful benchmark for how we are thinking at the end of 2018. A year from now, we will look back on this and laugh. That’s a good thing.
It’s impossible to cleanly separate all of the things that crypto impacts (and that impact crypto), but we’ve tried to segment the major questions for clarity.
Some of the sections get pretty technical. For reader convenience, the sections are ordered progressing from least technical to most technical.
- What layers of the Web3 stack can capture value today, and in the future?
- What if AugurCos demonstrate to regulators that prediction markets aren't bad, and regulators then legalize prediction markets? Why would AugurCos still use Augur?
- How can we objectively model the value of governance for governance tokens (e.g. ZRX)?
Some have tried to model this as the cost of forking, and while that is probably the right mental framework, the actual model itself is extremely subjective.
- What are all the forms of state that a protocol can store in order to maintain defensibility?
- Maker stores the value of the collateral deposited
- Augur stores the history of events that it has acted as an oracle for, and deposited capital that’s currently locked in markets.
- Work tokens – such as Skale, Keep, Graph, and Livepeer – store information about layer 2 nodes and the work they’re performing.
- 0x stores some small amount of state about how users interact with the contract. Is that amount of state enough to be worth governing?
- As we think about bridging fiat to crypto, who will win?
- Right now it looks like Coinbase has the lead.
- But Robinhood, Revolut, and startups like Good Money are going even farther up the customer experience ladder so that they can be both the consumer’s legacy fiat bank and bridge to crypto.
- And the legacy asset managers like Fidelity are entering the marketplace.
- Can any of these players create network effects and increasing returns to scale to crowd out their competitors?
- When and under what circumstances will proprietary payment currencies get forked out?
- DDEX just forked ZRX.
- GNT, RDN, BAT, and many others are ripe for forking.
- Can external loci of control, such as the Brave browser, enable a proprietary payment token like BAT to maintain some value?
- Under what circumstances do zombie chains like Bitcoin Private, Litecoin, and Ethereum Classic die?
- The obvious answer is when miners stop mining them. Miners will mine so long as they can sell mined coins, which means that so long as someone is willing and able to buy, a chain will stay alive.
- There are two major reasons for an exchange to delist an asset (other than the dev team committing fraud):
- If liquidity becomes low enough that the administrative cost of keeping the trading pair open exceeds the revenue generated from trading the asset.
- A chain is 51% attacked, allowing the attackers to steal funds from the exchange.
- What gaming use cases actually make games better?
- There are four kinds of players: killers, achievers, socializers, and explorers. Which of these player types is most likely to care about true digital scarcity, and sovereign asset ownership?
STABLECOINS AND FIATCOINS
- Which type of stablecoin will be used to on-board users to dapps? Fiat-backed, seigniorage shares, or collateralized?
- When will the first seigniorage shares-based systems enter a crisis of confidence, and collapse?
- Given how capital-inefficient collateralized stablecoins are, can they scale to $1T+?
- Given that collateral-backed stablecoins are a perfect example of unregulated leverage in the economy (shadow banking), how much demand can there be? Will consumers ever use a system like Maker to get a mortgage, or to get leverage on their stock portfolios? What about large hedge funds, who take on most of the leverage in highly regulated equity and credit markets?
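A back-of-envelope sketch of the capital inefficiency question above, assuming Maker-style collateralization. The 150% minimum collateralization ratio matches Maker's single-collateral DAI parameters circa 2018; the 2.5x "practical" ratio is a purely illustrative assumption, since vault holders keep buffers above the minimum:

```python
# How much collateral must be locked for a collateralized stablecoin
# to reach $1T outstanding?
TARGET_SUPPLY = 1e12          # $1T of stablecoins outstanding
MIN_RATIO = 1.5               # protocol minimum (150%, per Maker circa 2018)
PRACTICAL_RATIO = 2.5         # assumed average ratio users actually hold

min_collateral = TARGET_SUPPLY * MIN_RATIO
practical_collateral = TARGET_SUPPLY * PRACTICAL_RATIO

print(f"Minimum collateral locked:   ${min_collateral / 1e12:.1f}T")
print(f"Practical collateral locked: ${practical_collateral / 1e12:.1f}T")
```

Under these assumptions, $1T of stablecoin supply requires $1.5T to $2.5T of crypto collateral locked up, which is the crux of the scaling question.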
- Will fiat-backed stablecoins fulfill Bitcoin’s vision of providing a “way out” for people who live in Venezuela, Argentina, and other hyperinflationary countries?
- Since the USD is the world's most recognized currency, why would these people opt into BTC over USDC or DAI?
- What will be the first country to move their fiat currency onto a permissionless, global, public blockchain without KYC, while retaining seigniorage rights?
- For a country that’s respected (e.g. Switzerland or Singapore), this is arguably the single largest arbitrage on the planet. This would allow that central bank to increase demand for its currency tremendously, instantly enriching itself and its citizens.
TOKENIZED SECURITIES
- Securities are not fundamentally bearer assets. Given that, why should they be on a blockchain?
- The best answer is probably around administrative efficiency.
- The more interesting long-term answer is that by tokenizing value, it becomes composable, and eligible for use as part of the open finance stack.
- Will the growth of public DLTs create enough urgency among legacy financial institutions to implement permissioned DLTs that address the technical shortcomings of their legacy systems?
- Can we create legal constructs in which equity holders cryptographically commit to certain actions – for example not selling their equity for 10 years – in order to receive some preferential treatment over equity holders who are unwilling to make the same commitment?
GENESIS TOKEN DISTRIBUTION
- For currencies (not layer 2 tokens), does token distribution at the time of genesis matter? What about currencies whose distributions were theoretically fair, but practically unfair?
- It’s estimated that Satoshi alone has ~5% of all BTC. Other early miners likely comprise upwards of 15-20% of the BTC supply.
- At genesis, Ethereum launched with 72M ETH spread across ~9,000 ICO participants. Today, there are ~100M ETH. It’s unclear if/when ETH will move to a supply cap, or some tail emissions, but it seems likely that this will happen before there are 150M ETH given the current supply schedule. This would imply that ~50% of all ETH were dispersed at genesis.
- The Dfinity foundation will own >40% of all DFN tokens at launch.
- Ripple Inc. and its founders own well over 60% of total Ripple tokens (XRP).
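The genesis-share figures above are simple arithmetic. A quick sanity check on the ETH numbers, using the approximate supply figures quoted in the text (the 150M cap is the hypothetical assumed there, not a confirmed protocol parameter):

```python
# Sanity check on the genesis-distribution figures quoted above.
eth_genesis = 72e6        # ETH distributed at genesis
eth_today = 100e6         # approximate supply at end of 2018
eth_assumed_cap = 150e6   # hypothetical eventual cap assumed in the text

frac_today = eth_genesis / eth_today
frac_at_cap = eth_genesis / eth_assumed_cap

print(f"Genesis share of current supply: {frac_today:.0%}")   # 72%
print(f"Genesis share at a 150M cap:     {frac_at_cap:.0%}")  # 48%, i.e. ~50%
```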
GOVERNANCE AND LINDY EFFECT
- When should a chain be upgradeable?
- Under no circumstances?
- Only if there’s a catastrophic bug? (Note, answering “yes” to this question implies that the underlying protocol is weakly subjective, not purely objective)
- Frequently in the “early days,” and much less frequently as the chain evolves?
- How should hard fork decisions be made?
- Off-chain, and thus subjectively?
- One-coin, one-vote?
- Can we figure out some sort of self-sovereign identity or other sybil prevention mechanism that would allow for one-person-one-vote, or quadratic-voting mechanisms?
- Should code be law?
- If not, how do we handle dispute resolution? Should it be strictly opt in, or opt out?
- Should there be native protocol layer support? If so, how do you know which “jurisdiction” a conflict should be arbitrated in?
- As the global market for arbitration services grows, people will create new legal jurisdictions as a form of competition to attract commerce, and ultimately generate revenues for providing arbitration services in those jurisdictions. When will we see the first entirely virtual legal jurisdiction that lives exclusively on a blockchain?
- Is the “hardness” of money monotonically inversely correlated with hard forks?
- Let’s say that after a decade long battle that includes many hard forks, a single smart contract platform emerges as the winner, and that chain powers many trillions of dollars of commerce, but that it no longer hard forks because there is simply too much inertia as the total user base continues to grow. Will people and institutions consider that asset hard money if they don’t believe it will fork on a go-forwards basis, even if it has forked in the past?
- Are commodity-money digital assets compatible with a fractional reserve banking system?
- Who will act as the lender of last resort in the event of a run on the banks?
- Is QE intrinsically bad?
- In most countries, the answer is clearly yes, as most currencies have been hyperinflated away on a long enough time scale.
- But in countries like the US, QE helped save the system during the Great Depression and the financial crisis. Or as Ray Dalio likes to say, QE allows for a “beautiful deleveraging.”
- Will stand alone privacy coins such as Zcash, Monero, and Grin survive?
- Given that all privacy systems are based on getting “lost in the crowd,” if a smart contract platform becomes the big winner and copies all of the privacy tech, what reason do stand alone privacy coins have to exist?
- Will consumers ever care about online privacy? What could cause them to care?
- Does everything above not matter at all? Is distribution the only thing that matters? If so, can Telegram, Facebook, Google, or others figure out how to leverage their existing distribution to win?
- In order to deliver the best user experiences, do teams need to go full stack, a la Tari/Big Neon and Algorand?
SCALING AND DECENTRALIZATION
- There are fundamentally seven ways to scale blockchains. Vitalik highlights five, but there are a couple more. The sixth is improvements in consensus that break the pareto-efficient frontier (whereas many separate chains and big blocks are merely movements along the pareto-efficient frontier, AKA the scalability trilemma). The seventh is recursive composition of ZKPs, a la Coda.
- What is the right combination of layer 1 and layer 2 scaling?
- If you rely exclusively on layer 2 scaling, all kinds of second order problems emerge:
- What plasma chain am I (the user) in?
- What are all of the possible plasma chains?
- Why does it take so long to withdraw my NFT from a plasma chain?
- Why do I have to pay 10 basis points to withdraw from a plasma chain instantly?
- I thought I had 10 ETH, but my wallet says I only have 4 ETH that are accessible?
While these are all fundamentally solvable problems, in the short term it’s clear that relying on layer 2 scaling is going to create a multitude of UX problems. Given how diverse the range of layer 2 solutions is, it will likely take a few years before the market even knows how to develop a UX that abstracts all of this complexity.

If you focus on layer 1 scaling, the only solutions that offer a path to 1,000x+ improvements are supernodes (centralization of block production, a la EOS) and recursive composition of ZKPs, a la Coda. While the EOS approach is the “easiest,” it may create enough baggage that the chain can’t recover, even if EOS ultimately adopts everyone else’s innovations. Coda’s approach is probably the most technically risky given that we need at least a 1,000x improvement in ZKP wall-clock proving times for this approach to really work, but it probably provides the “cleanest” path for layer 1 scaling.
- Will sharding work in the foreseeable future? Ethereum’s sharding roadmap has been pushed back and redone so many times that it’s hard to take the timeline seriously at this point. Although we’re pretty certain sharding will eventually work, there are still a lot of unsolved problems. Even once sharding is working, basically all of the UX problems outlined above with regards to layer 2 scaling will apply. This will slow the sharding rollout significantly.
- How many block producers/miners is enough to ensure sufficient censorship resistance, and provide high enough guarantees with regards to data availability? Both censorship resistance and data availability by definition face diminishing returns as node count increases, while node count increases slow down network performance.
- In Bitcoin, there are <15 mining pools that matter, and ~10,000 full nodes that act as unpaid validators. Note that miners provide consensus on transaction ordering, whereas full nodes provide consensus on transaction validity.
- In Ethereum 1.0, there are <25 mining pools that matter, and ~7,500 full nodes that act as unpaid validators.
- In EOS, there are 21 block producers (BPs), and another ~60 paid, backup BPs that act as validators. There are another ~200 that are unpaid.
- It looks like Tendermint is going to launch with support for 100-300 paid validators, and no unpaid validators.
- In Coda, there will likely be <1,000 BPs, but every single user (desktop and mobile) will be a full node that can verify the correctness of the recursive SNARKs that Coda produces. Assuming Coda is adopted by 50,000 people, 50,000 nodes would validate the integrity of the chain.
- Will there be many smaller chains (e.g. Cosmos zones), will there be a few large chains (e.g. Ethereum), or will everything exist as part of a multi-chain system (e.g. Polkadot)? If there are many small chains, how do they guarantee security? If there’s a giant chain, how does it scale?
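The diminishing-returns claim about validator count can be made concrete with a toy model: if each validator independently fails to flag an invalid block (or chooses to censor) with some fixed probability, the chance that every validator misses it shrinks geometrically in the node count, so each additional node adds less absolute safety than the last. The 50% per-node failure probability is an arbitrary illustrative assumption, not a measured quantity:

```python
# Toy model: probability that ALL n independent validators miss or
# censor an invalid block, given each misses with probability p.
def prob_all_nodes_miss(n: int, p: float = 0.5) -> float:
    """P(no validator flags the block) = p^n under independence."""
    return p ** n

for n in (5, 10, 20, 50):
    print(f"{n:>3} nodes -> miss probability {prob_all_nodes_miss(n):.2e}")
```

By n = 20 the miss probability is already below one in a million; going from 20 to 50 nodes buys many more orders of magnitude of safety that barely matter in practice, while the extra nodes slow the network down, which is the trade-off the question points at.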
- If there is a multi-chain, can the chains create greater-than-the-sum-of-their-parts returns with regards to safety?
- Given that running the EVM on plasma is an unsolved problem, what is the right approach to layer 2 chains?
- Adopt handicapped versions of plasma that only support asset transfers (e.g. Plasma Cash/Debit) and not full EVM execution. This is what Loom and Elph are building.
- Adopt Skale, which provides a probabilistic approach to safety (whereas plasma provides absolute safety guarantees), grow the network, and eventually move to full plasma if/when EVM on plasma is solved.
- How much layer-2 execution can be replaced by something like zero-knowledge execution (see here and here)?
- Will payment and state channel networks develop in such a way that meaningful liquidity can be routed through them? Can any entities other than exchanges actually become liquidity hubs in practice? Can independent businesses provide this? If so, can they do so without taking on balance sheet risk on the price of Bitcoin? Given that the cost of borrowing Bitcoin today is about 12%, will users accept that kind of cost to route payments over payment channels?
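The 12% borrow-cost question lends itself to a back-of-envelope calculation: a routing hub must charge at least enough per payment to cover the cost of the capital locked while the payment is in flight. The lock durations below are illustrative assumptions:

```python
# Break-even routing fee for a hub that borrows BTC at ~12%/yr and
# locks that capital in a channel while a payment is in flight.
ANNUAL_RATE = 0.12  # annual cost of borrowing BTC, per the text

def min_fee_bps(lock_days: float) -> float:
    """Minimum fee (in basis points of the amount routed) to cover
    the cost of capital locked for lock_days."""
    return ANNUAL_RATE * (lock_days / 365) * 10_000

for days in (0.1, 1, 7):
    print(f"capital locked {days:>4} days -> break-even fee "
          f"{min_fee_bps(days):.2f} bps")
```

At 12%/yr, each day of capital lock-up costs roughly 3.3 basis points of the routed amount, so hubs with slow-settling or rebalancing-heavy channels need fees well above what users pay on-chain today.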
THE WEB3 STACK
- How do we decentralize asynchronous communications and content sharing protocols on the Internet (messaging, social media, message boards, etc)?
- Freenet, Zeronet, and Scuttlebutt rely on different networking stacks, and gossip different kinds of information based on node availability and P2P trust models. However, none of these systems are anywhere near usable. This is largely due to the lack of a logically centralized storage and database system.
- IPFS offers the world logically centralized but architecturally decentralized storage, but it’s still unclear how Filecoin will work to ensure persistence.
- Can Textile build the tooling to abstract all of the additional complexities developers need on top of IPFS, including key management, different types of one/two-way relationship models, authentication, etc?
- What will a decentralized database look like?
- Will developers store data in something like Bigchain/Bluzelle/Chromapolis/Picolo, each of which is effectively a traditional database with some added immutability and proof-of-stake properties?
- Or will developers throw data into IPFS (and perhaps Ethereum), and use The Graph to query it?
- Or will developers want a bit more control, and leverage something like OrbitDB, which provides different types of structured databases on top of a decentralized event log on IPFS?
- How much of the right-hand side of the Web3 stack needs to be decentralized? As STARKs become easier to integrate to provide proofs of computational integrity, why aren’t the services on the right-hand side of the Web3 stack better off as centralized services that provide STARK proofs? Censorship is the best counterargument. However, will the market need more than 2 or 3 STARK-ified web3 services?
- Permissioned and consortium chains are slowly but quietly growing. At what point do they “dock” into public chains, and what are the benefits to the permissioned chains, and to the public chains, when this happens? (This is analogous to companies building out intranets before the internet and cloud were usable, and then integrating and migrating intranet services with/to cloud services.)
ZERO KNOWLEDGE PROOFS (ZKP)
- Do ZKPs break everything?
- As Nick Szabo articulated in his seminal essay on social scalability, technical inefficiency creates social scalability. That is, because 10,000+ nodes independently verify the integrity of the Bitcoin blockchain, Bitcoin becomes socially scalable for everyone.
- Using ZKPs, a single node can perform a computation (for our purposes, update the state of the ledger) and produce a proof that anyone can verify. Given this, why should we architect trust-minimized systems with redundancy and inefficiency when we can architect them with ZKPs at both layer 1 and layer 2 instead?
- The biggest technical barrier is performance. We need at least a 1,000x improvement in wall-clock time for prover systems in order for this to become possible.
We’ve spent a lot of time this year thinking about the questions above. We’ve begun to develop strong non-consensus conviction on some of these questions. Most are still wide open.
Ultimately, the only way to answer these questions is with data. Amazingly, all of the data required to answer these questions is in plain sight, living in the blockchains for anyone to examine. While it’s easy to answer these kinds of questions with enough hindsight, it’s much harder to answer them with conviction – and make portfolio allocations – while there is still some uncertainty.
Where possible, we’ve been investing in our internal data science infrastructure to help us better answer these questions. We’re also spending a lot of time on the ground with entrepreneurs and developers to figure out what’s working and what’s not. Sometimes, just a few anecdotes from developers can help us develop the conviction to make an investment decision.
We’re going to see a lot of existing hypotheses challenged in 2019, and we’re going to see the emergence of many more. For example, the decentralized/open/composable finance mega thesis was not coherent going into 2018. Going into 2019, that opportunity set is quite clear.
After reading this, given the level of uncertainty, it’s natural to ask: why are we devoting our lives to this?
Unintuitively, despite the incredible uncertainty in the implementation details, we have remarkable clarity on the ultimate outcomes, and those are the 3 mega-theses for crypto: 1) a $100T global, state-free money 2) open/decentralized/composable finance, and 3) web3 (self-sovereign ownership of data).
We know our north star, and we continue to work towards it.
Bonus: see Zao Yang’s amazing thread on the evolution of open questions in crypto 2015 – 2018.