I’m writing this simply as a record of what I believe to be the background history of the Avalanche consensus protocol. It may be of interest to some people.
In 2016, Amaury gave the talk A Journey to the Moon: Bitcoin as a World Currency. Since even before then, he had been preoccupied with how to spread consensus work out over time. Currently, Bitcoin requires fast validation of blocks once they are received in order to maintain network incentives; to scale efficiently, however, this work must be spread out consistently over time.
From the time I started working on Bitcoin Cash, we often spoke on this subject, and he had sent me a video about the “avalanching” process in Ripple’s consensus mechanism. Their mechanism is efficient, but requires permissioned nodes to work. Now, I had been interested in statistically based consensus protocols since working at Apcera, as they are more efficient and faster than Raft and Paxos. I was working on these ideas as an alternative to Raft for our state store, though I never implemented them in practice. Ripple uses a similar, but different, mechanism to the one I was working on at Apcera, and in a different context.
The basic premise of a statistical consensus protocol is to sample or broadcast a state to a large enough subset of nodes that, via hypergeometric relationships, you can ensure the network will come to consensus within a specific amount of time. This is well-known statistics, usually applied to quality assurance in manufacturing, and to my knowledge it had not previously been applied to network protocols. The fundamental analogy is that nodes with incorrect state are like defective products in a lot being inspected.
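To make the quality-assurance analogy concrete, here is a minimal sketch of the hypergeometric calculation; it is my own illustration, not taken from any published protocol. Given N nodes of which K hold a conflicting state, it computes the chance that a random sample of n nodes, drawn without replacement, is dominated by the correct state.

```python
from math import comb

def hypergeom_pmf(N: int, K: int, n: int, k: int) -> float:
    """P(exactly k conflicting nodes appear in a sample of n, drawn
    without replacement from N nodes of which K are conflicting).
    This is the same distribution used in manufacturing lot inspection."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def prob_sample_majority_correct(N: int, K: int, n: int) -> float:
    """P(fewer than half of the sampled nodes hold the conflicting state),
    i.e. the chance a single sampled quorum reflects the honest majority."""
    return sum(hypergeom_pmf(N, K, n, k)
               for k in range(min(K, n) + 1)
               if k < n / 2)

# Example: 1000 nodes, 100 of them (10%) holding a conflicting state,
# sampling 20 nodes at random. The sample majority is almost always correct.
print(prob_sample_majority_correct(1000, 100, 20))
```

The numbers here (N=1000, K=100, n=20) are arbitrary illustrations; the point is only that the probability of a misleading sample falls off sharply as the sample size grows, which is what lets the sample size be chosen to hit a target confidence.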
In March of 2018, I discussed this idea with Amaury. To avoid a permissioned network, the idea was to estimate the total node count in the network via other statistical tools, and then use that to decide how many nodes to sample for an estimate of state (a quorum). If the estimate’s uncertainty was too high, this quorum would be thrown out, a new one picked, and the process repeated. Amaury liked the idea, but we did not finalize a particular protocol, as there were concerns about network overhead and about how to apply it only to double spends. This was shortly before the Satoshi Vision conference in Tokyo, where Emin Gün Sirer was present.
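The sample-and-retry loop described above might be sketched as follows. This is a hypothetical illustration: the function names, the quorum size, and the uncertainty threshold are all my own placeholders, not anything Amaury and I actually pinned down.

```python
import random
from math import sqrt

def poll_until_confident(peer_votes, quorum_size=30, max_stderr=0.08,
                         max_rounds=50, rng=random):
    """Sample quorums of peers until one yields a confident state estimate.

    peer_votes: a list of 0/1 votes, one per known peer (1 = accepts the state).
    A quorum is discarded and a new one drawn whenever the standard error of
    its estimated acceptance fraction exceeds max_stderr.
    """
    for _ in range(max_rounds):
        quorum = rng.sample(peer_votes, quorum_size)  # draw a fresh quorum
        p = sum(quorum) / quorum_size                 # estimated acceptance fraction
        stderr = sqrt(p * (1 - p) / quorum_size)      # rough uncertainty of p
        if stderr <= max_stderr:
            return p >= 0.5                           # confident: majority verdict
    raise RuntimeError("no confident quorum found within max_rounds")
```

The p ≥ 0.5 cutoff and the simple binomial standard-error check stand in for whatever confidence rule a real protocol would use; the structure of the loop (sample, check uncertainty, discard and resample) is the part that matches the discussion.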
In Tokyo, Emin, Amaury, and I met privately to discuss this idea, as well as a few others. On May 16th of 2018, the paper Snowflake to Avalanche: A Novel Metastable Consensus Protocol Family for Cryptocurrencies was published by “Team Rocket” (the sworn enemies of Satoshi in the Pokémon series). Emin, to my knowledge, was the first person to note that this paper had been anonymously posted on IPFS. My personal belief is that Emin is the originator of the paper: that after our conversations he went back to his lab and had grad students write it, either with the participation of Amaury or otherwise. I was not invited to assist in this process.
The exact protocol is not identical, but is in practice the same as what was discussed. Sybil resistance, I don’t believe, is necessary, and it was not part of the protocol I discussed with Amaury. Sybil resistance comes from the same idea as in Bitcoin: one node, one vote. If only a subset of nodes participates in voting by using a staking protocol, a sufficiently large set of non-staked nodes can censor signed votes, just as they can on Bitcoin. Again, like the Bitcoin consensus protocol, a Sybil attack on Avalanche only lets you censor transactions; it does not allow you to create invalid states.
Now, whoever wrote the Avalanche paper deserves credit for introducing the academic and mathematical rigor I would not otherwise have bothered with. It’s disappointing to me that I was (assuming my assertions are correct) not invited to participate in writing this paper and to gain useful experience from doing so. This seems to be common practice within academia, and it has happened to me before.
I am also not a fan of Avalanche as a standalone cryptocurrency consensus protocol. It lacks a way to provide fair coin distribution; Proof of Work has a rightful place in creating scarce digital resources. I also believe that Avalanche, as specified, will suffer from large bandwidth requirements: confirming each transaction takes around 2 KB of network traffic, which is significantly more overhead than on the Bitcoin network. Additionally, to my knowledge, the problem of running Avalanche only on double spends is still unsolved, prohibiting the potential to save bandwidth through that mechanism.