Examining the ‘Honesty’ Assumption
Editorial

Threshold Network

In a recent blog post, we covered how the term “decentralization” has become misused and abused within the Web3 ecosystem, denuding it of its original technical meaning. But “decentralization” is not the only term that is often misappropriated. Two other commonly misused and misunderstood terms are “honesty” and “trust.”

Version one of the Threshold bridge is what is known as a “trustless” bridge from Bitcoin to Ethereum. To gain a clear understanding of why this is, we should first define and understand its antithesis: a “trusted” bridge.

With a trusted bridge, you send your assets to the bridge (which could be centralized or decentralized) and trust that it will mint a token in exchange; when you later send the minted tokens back, you trust that they will be redeemed. Just as someone who puts their money in a bank must trust that it will be there to withdraw at a later date, a user of a custodial bridge trusts that when they send an asset over the bridge, it will issue them their token on Ethereum, and that when they send that token back sometime in the future, they will receive their asset in exchange. The second term referred to above, “honesty,” doesn’t really enter into the equation, because there is only one entity you are interacting with: the centralized authority. “Honesty” comes into play when you are dealing with decentralized models in which multiple operators have to approve or collateralize transactions.

Regardless of whether a bridge is “fully trusted” or “trustless”, users are still vulnerable to both theft and censorship. Neither is a perfectly secure solution.

That users are often confused about the meanings of “trust” and “honesty” in Web3 is understandable. In the non-technical, quotidian world, these words go hand-in-hand. Both have positive connotations. If you “trust” someone, it’s likely because they are “honest”. Trust begets honesty, and honesty begets trust. But at the technical, programmatic level, these concepts have very distinct, separable qualities.

Although it might seem counterintuitive at a linguistic level, a protocol cannot simultaneously be “trustless” and require honesty. If a bridge is characterized as “trustless”, people may infer that this means it requires “zero trust” because it is perfectly, programmatically “honest.” However, on a truly technical level, the situation becomes more complex.

tBTC V1: A ‘Trustless Bridge’

tBTC V1 was deemed “trustless” because transactions were decentralized and crypto-economically secured, rather than performed by a “trusted”, centralized authority. Anytime anyone wanted to deposit BTC in order to bridge it over to Ethereum, three randomly selected operators had to collateralize the deposit with their own ETH, thereby over-collateralizing the BTC. For example, if a user wanted to deposit 10 Bitcoin, three Keep operators were selected who had to come up with 15 BTC worth of ETH (150% of the deposit’s value) to collateralize it. If those operators ever colluded to steal the underlying Bitcoin, they would end up losing more value in ETH than they would gain in BTC. Therefore, everyone was incentivized to be ‘honest’.
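To see that incentive in numbers, here is a minimal sketch in Python. The 150% ratio comes from the article; the prices, function names, and the simple payoff model are illustrative assumptions, not the actual v1 contracts.

```python
COLLATERAL_RATIO = 1.5  # v1 bonds were roughly 150% of the deposit's value

def required_bond_eth(deposit_btc: float, btc_price_in_eth: float) -> float:
    """ETH the selected operators must lock up, combined, to back a deposit."""
    return deposit_btc * btc_price_in_eth * COLLATERAL_RATIO

def theft_payoff_eth(deposit_btc: float, btc_price_in_eth: float) -> float:
    """Net outcome for colluding operators: they gain the BTC but forfeit the bond."""
    stolen_value = deposit_btc * btc_price_in_eth
    forfeited_bond = required_bond_eth(deposit_btc, btc_price_in_eth)
    return stolen_value - forfeited_bond  # negative whenever the ratio exceeds 1.0

# Example: a 10 BTC deposit while 1 BTC trades at 15 ETH.
print(required_bond_eth(10, 15))  # 225.0 ETH must be bonded
print(theft_payoff_eth(10, 15))   # -75.0 ETH: stealing is a losing trade
```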

While this sort of theft never occurred, if it had, the system would have automatically created an auction using the collateralized ETH to try to buy tBTC. It would have offered a higher and higher price until it was offering the entire 150% collateral and a buyer took advantage of the easy arbitrage. Any tBTC the system bought would be immediately destroyed to maintain the supply peg. For example, if 1 BTC were stolen, the system would use the bond to buy back and destroy 1 tBTC. So, in the rare event that operators colluded to steal the underlying Bitcoin, arbitrageurs would profit on the trade and the customer would be made whole.
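The escalating-offer mechanism can be sketched as follows. This is an illustrative model of the idea described above, not the actual liquidation contract; the step count, function name, and example prices are assumptions.

```python
def run_liquidation_auction(bond_eth: float, tbtc_needed: float,
                            market_price_eth_per_tbtc: float, steps: int = 100) -> float:
    """Offer an increasing fraction of the seized ETH bond for the tBTC that must
    be bought back and burned; stop as soon as the offer beats the market price,
    i.e. as soon as the arbitrage becomes profitable for any buyer."""
    for i in range(1, steps + 1):
        fraction_offered = i / steps
        offer_eth = bond_eth * fraction_offered
        if offer_eth >= tbtc_needed * market_price_eth_per_tbtc:
            return offer_eth  # a buyer takes the trade; the purchased tBTC is burned
    return bond_eth  # worst case: the entire 150% bond is on the table

# Example: 1 tBTC must be bought back, 22.5 ETH is bonded, tBTC trades at 15 ETH.
print(run_liquidation_auction(bond_eth=22.5, tbtc_needed=1.0, market_price_eth_per_tbtc=15.0))
```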

Despite its relative security, the major drawback of this design was the huge capital requirement, which created a supply cap on the total amount of Bitcoin that could be bridged. That supply cap was determined by how much capital operators were willing to come up with to secure the system. For example, if the bridge were to secure a billion dollars worth of BTC, operators would have to find 150% of that value in ETH as collateral. That turned out to be far too much to ask of operators. And yet to arrange the protocol in any other manner would mean that the bridge was no longer crypto-economically secure: with the collateral requirement gone, operators could theoretically be incentivized to collude, act dishonestly, and steal the underlying BTC.

Because of this supply constraint, tBTC v1 had an inherent limitation on its growth. Its success depended entirely on how much collateral operators were willing to put into the system in the form of ETH, and the ETH they were willing to bond could not keep up with the demand from people who wanted to bridge BTC.

One of the most noteworthy aspects of tBTC v1 turned out not to be its success as a product, but rather the interesting and unexpected social phenomenon it produced. Occasionally, someone would “misfund” by accidentally sending the wrong amount to the underlying Bitcoin addresses. For example, someone would want to make a 1 BTC deposit. The operators would have to collateralize that deposit with 1.5 BTC worth of ETH. But suppose the person making the deposit accidentally sent 10 BTC instead of 1? At the level of the smart contract, there is nothing to keep this from happening: Bitcoin does not stop you from sending the wrong amount, and on the Ethereum side, nothing stops the operators from rejecting that deposit. In other words, there was no crypto-economic security, since the deposit was severely under-collateralized. In this circumstance, the operators providing the ETH were incentivized to act selfishly and take the 10 BTC; after all, they would only have had to give up 1.5 BTC worth of ETH themselves. From a purely game-theoretic perspective, you would expect the operators to act in their own self-interest and simply take the money. But this is not the behavior we witnessed in such situations. Instead, something truly beautiful would happen.

What was observed instead was that operators went out of their way to try to return the BTC to the person who had made the incorrect deposit. tBTC v1 wasn’t even built to accommodate such altruism, so operators were trying to find loopholes in order to refund misfunded deposits. Surprisingly, operators didn’t even want to be responsible for that much under-collateralized BTC. Again and again, they would try to “make things right” rather than take the money and run.

tBTC v1 had been modeled as a completely cynical system, one that expected operators to act both selfishly and hyper-rationally. However, this simply was not the case.

tBTC V2: A ‘Trust-Minimized Bridge’

This impulse for altruism became the inspiration for what would evolve into tBTC V2, as it sparked the following question: what if the capital constraint could be replaced by what is called an “honesty assumption”? The idea would be to have 100 signers on each wallet. In order to steal any underlying Bitcoin, 51 members of the signing group would need to collude; as long as at least 50 operators in the group are honest and refuse to steal, none can do so. If it is assumed that a certain percentage of the operator base simply won’t steal, it is then possible to calculate the probability that any particular wallet is immune to this sort of theft. The ‘honesty parameter’ asks: what percentage of operators are honest? From there: what is the probability that a wallet will have at least 51 dishonest operators, given that X% of operators are honest? And finally: what is the probability that we have nothing but good wallets for 5 straight years, given that X% of operators are honest and we produce a wallet every week?
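To make that chain of questions concrete, here is a minimal sketch assuming a 100-signer wallet with a 51-of-100 collusion threshold, and modeling each signer seat as an independent draw. That independence is a simplification of the real selection process, and the function names are ours, not part of the protocol.

```python
from math import comb

WALLET_SIZE = 100   # signers per wallet
THRESHOLD = 51      # signers that must collude to move the wallet's BTC

def p_bad_wallet(honest_fraction: float) -> float:
    """Probability that a single wallet contains at least THRESHOLD dishonest
    signers, treating each seat as an independent draw (binomial approximation)."""
    p_dishonest = 1.0 - honest_fraction
    return sum(
        comb(WALLET_SIZE, k) * p_dishonest**k * (1.0 - p_dishonest)**(WALLET_SIZE - k)
        for k in range(THRESHOLD, WALLET_SIZE + 1)
    )

def p_all_wallets_good(honest_fraction: float, years: int = 5,
                       wallets_per_week: int = 1) -> float:
    """Probability that every wallet produced over `years` (one per week by
    default) stays below the theft threshold, assuming wallets are independent."""
    n_wallets = 52 * years * wallets_per_week
    return (1.0 - p_bad_wallet(honest_fraction)) ** n_wallets

# Example: if 90% of operators are honest...
print(p_bad_wallet(0.90))        # chance that any one wallet could be compromised
print(p_all_wallets_good(0.90))  # chance that no wallet is compromised over 5 years
```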

Since the calculated risk was sufficiently low, there was no longer a need for each wallet to be individually collateralized. Instead, V2 offers what is termed a “coverage pool.” This operates very similarly to insurance: people put in money and are then paid a percentage yield on committed assets over time. The coverage pool then pays out in those rare instances of dishonesty and theft. Creating the coverage pool and removing the onus of collateral from individual operators and wallets has allowed v2 to massively reduce the capital constraint that hampered the growth of V1.
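As a rough mental model of how such a pool socializes risk, here is a toy sketch. The class, its accounting, and the numbers are illustrative assumptions only and do not reflect the actual Threshold coverage pool contracts.

```python
class CoveragePool:
    """Toy model of a shared backstop: providers earn yield on committed assets,
    and the pool, rather than per-wallet bonds, absorbs rare theft events."""

    def __init__(self) -> None:
        self.balances: dict[str, float] = {}

    def deposit(self, provider: str, amount: float) -> None:
        self.balances[provider] = self.balances.get(provider, 0.0) + amount

    def accrue_yield(self, rate: float) -> None:
        # Pay every provider proportionally to what they have committed.
        for provider in self.balances:
            self.balances[provider] *= (1.0 + rate)

    def cover_loss(self, loss: float) -> None:
        # Socialize a theft across providers, pro rata, up to the pool's size.
        total = sum(self.balances.values())
        if total == 0:
            return
        paid = min(loss, total)
        for provider in self.balances:
            self.balances[provider] -= paid * self.balances[provider] / total

pool = CoveragePool()
pool.deposit("alice", 100.0)
pool.deposit("bob", 300.0)
pool.accrue_yield(0.05)   # providers earn a percentage yield over time
pool.cover_loss(40.0)     # a rare theft is made whole from the pool
print(pool.balances)
```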

This all raises a question: does it mean that wallets themselves are no longer crypto-economically secure? It does not. Instead, the proper model for conceptualizing the v2 bridge is as a gradient rather than as something binary. The v2 bridge exists on a gradient because its operators exist on a gradient: some are more honest than others. Thus the tBTC V2 system model can be termed a “trust-minimized” bridge.

So, whether someone is touting a “trustless bridge” or a “fully trusted bridge” as a protocol that is perfectly crypto-economically secure, there is one thing you can trust: they aren’t being fully honest.