If there's one thing I've learned over the past couple of years, it's that one synchronous block-space will not be sufficient for all of the applications we'll eventually see on-chain.
This wasn't as obvious in 2018, when the possibilities weren't as fleshed out. Last year's DeFi summer and this year's NFT summer have cemented the fact that there's a lot of application design space to play with. And the rise of non-Ethereum Layer 1s this year have further drilled in that block-space is in very high demand.
Given that we're likely to live in a multi-chain world and applications on different chains will want to communicate with one another, it's valuable to understand how the connective tissue between chains will work. I want to explore one angle of this in an abstract sense, without necessarily picking on a single project.
In this post, I want to examine a specific tradeoff inherent in these systems. It's a tradeoff that's kinda obvious when you reflect on it.
I'm also standing on the shoulders of giants in presenting it; it's spelled out in an excellent paper by Zamyatin et al. I aim just to put some more color behind the core statement of that paper.
The tradeoff, in the words of the paper's authors, is that cross-chain communication is impossible without either:

- a trusted third party (TTP), or
- a synchrony assumption on the communication between the chains.
The relationship between these is fundamental and quite intuitive, but requires digging through a little bit of jargon. I hope to make the intuition clear here.
Let's talk a bit about synchrony assumptions. They're a common framework for thinking about communication in distributed systems, and they're criminally poorly understood.
Think of a distributed system as a message-passing protocol that needs to work in spite of failures or adversaries in the communication infrastructure. We can then define how bad those failures can get before the system stops working. This is exactly what these synchrony assumptions aim to do.
A system that functions under the 'synchronous' assumption assumes that messages can only be delayed by the network/adversaries by some known fixed upper bound.
A system that functions under the 'asynchronous' assumption, on the other hand, assumes that the network/adversaries can delay messages by any finite amount of time.
The latter is more robust: it says the system is better at dealing with the uncertainties introduced by the environment. There are also 'middle ground' assumptions, usually called 'partial synchrony', but we don't need to define those here.
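To make the distinction concrete, here's a toy Python sketch (my own illustration, not drawn from the paper). Under synchrony, a receiver who waits out the known delay bound can safely conclude that no message was sent; under asynchrony, no finite wait is ever conclusive, so the only safe behavior is to keep waiting.

```python
import time

DELTA = 5.0  # synchronous model: known upper bound on message delay (seconds)

def receive_synchronous(mailbox, sent_at):
    """Under synchrony, waiting DELTA past the send time is conclusive:
    if nothing has arrived by the deadline, nothing was sent."""
    deadline = sent_at + DELTA
    while time.monotonic() < deadline:
        if mailbox:
            return mailbox.pop(0)
        time.sleep(0.01)
    return None  # safe conclusion: no message was sent

def receive_asynchronous(mailbox):
    """Under asynchrony, delays are finite but unbounded, so no timeout
    is ever conclusive -- the receiver can only keep waiting."""
    while not mailbox:
        time.sleep(0.01)
    return mailbox.pop(0)
```

The asymmetry is the whole point: `receive_synchronous` can return a definitive "no message", while `receive_asynchronous` has no way to distinguish "delayed" from "never sent".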
How does this relate to cross-chain communication? Well, the aforementioned paper shows that if you have a system that implements asynchronous cross-chain communication, you could use it to solve an older problem called 'fair exchange'. And we know from a much older paper (Pagnia and Gärtner's 1999 result) that asynchronous fair exchange is impossible without a trusted third party (TTP).
Therefore, asynchronous cross-chain communication is impossible without a TTP. And we can only give up this requirement on a TTP by having a more stringent synchrony assumption on the system.
This has all been very theoretical so far. Let's talk about real systems.
From the previous section, we know that to communicate a message between two chains, we either:

- rely on a trusted third party (TTP), or
- make a stronger synchrony assumption.
What really are the tradeoffs here? It might be helpful to compare systems that require a TTP with those that require a stronger synchrony assumption.
TTPs seem well understood. A TTP is just a 'lower security/decentralization' middleman that exists between the two chains. Cross-chain systems that rely on a validator set separate from the two chains they're bridging are an example of this.
A stronger synchrony assumption manifests as something like retry logic or a timeout. Hash time-locked contracts (HTLCs), which a few years ago were considered the key primitive for cross-chain communication, are an illustrative example: in an HTLC, the stronger synchrony assumption shows up as necessary retry logic and a free option problem.
In the previous section, I stated that systems with minimal synchrony assumptions are better at dealing with uncertainties in the environment. In the case of something like an HTLC, the communication can fail if there's a strong financial incentive to make it fail: think of a market maker who wants to block or delay the value transfer because they hold certain assets in both venues.
A strong enough financial incentive in the environment can make the communication fail.
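For intuition, here's a minimal sketch of the hashlock/timelock mechanics (a toy model, not any production contract; the class, names, and time handling are my own):

```python
import hashlib
import time

class HTLC:
    """Toy hash time-locked contract: funds unlock to the receiver with the
    hash preimage before the deadline, or refund to the sender after it."""
    def __init__(self, sender, receiver, amount, hashlock, timelock):
        self.sender, self.receiver, self.amount = sender, receiver, amount
        self.hashlock = hashlock  # sha256 hex digest the preimage must match
        self.timelock = timelock  # time after which a refund is allowed
        self.settled = False

    def claim(self, preimage, now=None):
        now = time.time() if now is None else now
        assert not self.settled and now < self.timelock
        assert hashlib.sha256(preimage).hexdigest() == self.hashlock
        self.settled = True
        return (self.receiver, self.amount)

    def refund(self, now=None):
        now = time.time() if now is None else now
        assert not self.settled and now >= self.timelock
        self.settled = True
        return (self.sender, self.amount)
```

The `timelock` is the synchrony assumption made concrete: if the claim is delayed past it, the transfer falls back to a refund, and until the deadline the counterparty effectively holds a free option on completing the trade.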
This last point is sorta true of systems that require a TTP though! In that case, the TTP needs to be trusted to have a financial incentive to maintain the integrity of the bridge. The difference is in how the financial incentive to block communication is expressed.
I believe there's a natural duality here. Either you have to trust a lower security 'middleman' or you need to trust that the communication won't be disrupted.
If the cross-chain communication introduces juicy enough MEV, the communication might fail in either case! Either because the TTP goes corrupt or because a big enough market maker makes it fail.
How do modern cross-chain communication systems distribute across this tradeoff? There are better articles out there that categorize all the systems; I'll just look at a few illustrative examples here.
Connext's nxtp is a classic example of a cross-chain system that adds a stronger synchrony assumption. Once users have finished their negotiation with routers, there's a two-phase prepare/fulfill mechanism for completing the appropriate transactions on both sides of the bridge. If the communication doesn't happen within a timeout, it doesn't happen.
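The shape of that flow looks roughly like this (a heavily simplified toy of the general prepare/fulfill/cancel pattern, not Connext's actual state machine):

```python
class TwoPhaseTransfer:
    """Toy two-phase transfer: funds lock in 'prepare', the transfer
    completes in 'fulfill', and expiry cancels anything still pending."""
    def __init__(self, expiry):
        self.expiry = expiry
        self.state = "created"

    def prepare(self, now):
        assert self.state == "created" and now < self.expiry
        self.state = "prepared"   # funds locked on both chains

    def fulfill(self, now):
        assert self.state == "prepared" and now < self.expiry
        self.state = "fulfilled"  # transfer completes on both chains

    def cancel(self, now):
        assert self.state == "prepared" and now >= self.expiry
        self.state = "cancelled"  # timeout hit: locked funds are returned
```

The `expiry` plays the same role the timelock does in an HTLC: it's the point at which the system gives up waiting and unwinds, which is exactly the stronger synchrony assumption at work.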
On the other hand, the Wormhole bridge between Solana and Ethereum introduces a TTP. The 'guardian' validator set that exists between the two chains is required to make a 2/3 majority attestation about a transfer for it to be carried through. If this set is corrupted or (collectively) doesn't want a transfer to go through, it won't.
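The quorum arithmetic behind that attestation is simple to sketch (a toy check of the 2/3 threshold only; real verification checks guardian signatures over the attested message):

```python
def attestation_passes(signers, guardian_set):
    """Toy quorum check: the attestation carries only if strictly more
    than 2/3 of the guardian set signed it."""
    valid = signers & guardian_set  # ignore signatures from non-guardians
    return 3 * len(valid) > 2 * len(guardian_set)
```

With a 19-guardian set, for example, 13 signatures clear the threshold and 12 don't, so any coalition of 7 guardians can halt a transfer by withholding signatures.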
One thing that needs stressing is that neither approach is universally better. Marketing material from systems on either side of this tradeoff will claim that they're making the 'right' tradeoff, but as stated before, both tradeoffs present a way in which these systems can fail.
There are also some systems that make one of these tradeoffs less obviously. My favorite example is the Optimism DAI bridge for fast withdrawals. The TTP here is MakerDAO itself! Users rely on the DAO to provide DAI liquidity, and thus integrity, to the proposed fDAI asset.
It's kinda odd to think of a DAO as a TTP, but if we consider that anything with less robust security than the chains communicating is 'trusted' relative to them, a DAO can serve this purpose! I'm personally excited to see more DAOs taking on the role of TTP for facilitating cross-chain communications.
The future looks multi-chain and facilitating communication between chains will present real opportunity for those ready for it. It's important for those of us using these new systems to understand the tradeoffs they're making.
In this post, I examined one fundamental tradeoff, between strong synchrony assumptions and trusted third parties (TTPs). The boundaries between chains are very interesting to me, so I'll dive into other facets in future ones.
As is true in everything I write, if this stuff is interesting to you, feel free to get in touch! I love pushing deeper into the unknown.