The Blockchain Trilemma

Yanis Mekhfi
-
May 10, 2023


Introduced by Vitalik Buterin in one of his 2017 blog posts, the “Blockchain Trilemma” is one of the biggest obstacles to a wider adoption of decentralized blockchains. In the middle of a massive bull run, the Ethereum network couldn’t cope with its growing number of users, which caused very long transaction times and very high fees, especially during big events like the CryptoKitties mint in 2017. These problems led the founder of Ethereum to formulate the Trilemma, which states that a blockchain can’t be fully decentralized, secure and highly scalable at the same time. Is it still true? Is it essential to combine these 3 characteristics? Let’s try to figure it out!

I/ Decentralization: at the heart of public blockchains

Decentralization is a defining characteristic of public blockchains: the first one that worked, the Bitcoin network, is all about letting people store and exchange value without relying on any trusted third party. But why is it so important? The traditional banking system has been working for decades and already allows people to store their assets safely and to process transactions quickly in many countries.

The existing system is functional, but many of its flaws stem from its centralization. Beyond its technical qualities (only an internet connection is required to use it, for example), the Bitcoin network allows everyone in the world to exchange and store value in an independent and permissionless way, while banks can arbitrarily deny anyone’s transactions.

Moreover, banks transfer value through currencies that are managed exclusively by central banks, whereas Bitcoin (the network) uses bitcoin (the token, the currency), which follows precise and unalterable rules unless a majority of network participants decides otherwise.

Decentralization is therefore the key driver of the rise of blockchains, allowing them to avoid any single point of failure and to remain permissionless, robust and censorship resistant. But how can this be measured? How decentralized should a blockchain be? The best-known indicator is the Nakamoto coefficient, which represents the minimum number of entities that would have to collude to take control of a blockchain.

The results depend on the calculation method and change over time; the highest score indicates the highest decentralization.

There are several ways to calculate the Nakamoto coefficient, more or less complicated, and they show different results depending on the chosen parameters. In the table above, the Nakamoto coefficient is expressed in its simplest form: the number of nodes required to alter the consensus mechanism. According to these results, taking over the 2 biggest nodes on Ethereum is enough to disrupt the production of new blocks, whereas Mina can’t be stopped unless its 78 biggest nodes are taken over.
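To make that “simplest form” concrete, here is a minimal Python sketch of the calculation; the stake distribution and the 51% / 33% thresholds are illustrative assumptions, not real network data:

```python
def nakamoto_coefficient(shares, threshold=0.51):
    """Minimum number of entities whose combined share of the consensus
    resource (stake, hash rate, nodes, ...) exceeds `threshold`."""
    total = sum(shares)
    cumulative = 0.0
    # Greedily count the largest participants first.
    for count, share in enumerate(sorted(shares, reverse=True), start=1):
        cumulative += share
        if cumulative / total > threshold:
            return count
    return len(shares)  # degenerate case: threshold never reached

# Hypothetical stake distribution, not real network data.
stakes = [250, 180, 150, 120, 100, 80, 60, 40, 20]
print(nakamoto_coefficient(stakes))        # 3 entities control > 51%
print(nakamoto_coefficient(stakes, 1/3))   # 2 with a BFT-style 33% threshold
```

Note how the answer depends on the chosen threshold, which is one reason the published coefficients vary with the calculation method.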

However, the Nakamoto coefficient can’t be the only measure of how decentralized a network is. Taking the example of Mina: if all 78 nodes required to shut down the network were located in Ukraine, the coefficient would still look good, but the network wouldn’t be decentralized enough, because it would depend on a very specific area of the world that is currently at war. The coefficient has to be high, of course, but other parameters, such as the geographical concentration of nodes, also matter for the protocol’s security.

II/ Security and blockchain consensus mechanisms

Highly correlated with decentralization, security is another important corner of our triangle. This part of the trilemma is often challenged because decentralized blockchains are still very young and still being tested over time. Many exploits have been discovered, and others are still to come, as these protocols are often open source, which makes it easy for malicious actors to look for vulnerabilities. The security risks related to the use of a blockchain are numerous (phishing attacks, exploits, scams, …), but most of them result from human error. This section focuses on the risks related to the choice of consensus mechanism.

Quick overview of some blockchain consensus mechanisms

As you can see from this graph, many consensus mechanisms have been designed in recent years. Proof-of-Work (PoW) remains to date the most trusted consensus algorithm, as it has run the Bitcoin network since its beginnings without any outage, hack or exploit. In short, network participants who want to take part in the creation and validation of new blocks have to use their machines to solve a mathematical problem that gets harder over time. Their energy expenditure (costly, and rising as the problem gets harder) proves their willingness to secure the network rather than compromise it.
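To make the puzzle concrete, here is a toy Python sketch of PoW mining; the “leading zero bits” difficulty is a simplification of Bitcoin’s actual target mechanism, and the header bytes are made up:

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int) -> int:
    """Find a nonce so that SHA-256(header + nonce) starts with
    `difficulty_bits` zero bits: a toy version of the PoW puzzle."""
    target = 1 << (256 - difficulty_bits)  # valid hashes must be below this
    nonce = 0
    while True:
        digest = hashlib.sha256(block_header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # costly to find, but instant for anyone to verify
        nonce += 1

# ~2**16 = 65,536 hashes on average; each extra difficulty bit doubles the work.
print(mine(b"block data", difficulty_bits=16))
```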

By default, the protocol considers the longest chain that has been created to be the legitimate one. To compromise the network, you need to gather more than 51% of the computing power dedicated to Bitcoin (about $1.2m per hour as of 22 February 2023) and to maintain it long enough to outpace the longest chain. Time is on the side of this consensus system: the more time passes, the harder it becomes to take over the longest chain. Besides being extremely expensive, such an attack would also be pointless for the attacker, because the bitcoin price would crash, since everything that happens on-chain is transparent.
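The claim that time favors this consensus system can be quantified with the gambler’s ruin result from the Bitcoin whitepaper: an attacker holding a fraction q of the hash rate, starting z blocks behind, ever catches up with probability (q/p)^z, where p = 1 − q. A quick sketch:

```python
def catch_up_probability(q: float, z: int) -> float:
    """Chance that an attacker with hash-rate share q ever catches up
    from z blocks behind (gambler's ruin, Bitcoin whitepaper section 11)."""
    p = 1.0 - q  # honest majority's share
    if q >= p:
        return 1.0  # a 51% attacker always catches up eventually
    return (q / p) ** z

# Even a large 30% attacker becomes hopeless after a few confirmations.
for z in (1, 6, 12):
    print(z, round(catch_up_probability(0.30, z), 6))
# 1 -> 0.428571, 6 -> 0.006196, 12 -> 0.000038 (approximately)
```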

Highly secure and trusted across the industry, PoW is the Bitcoin network’s strength but also a weakness. At a time when the climate emergency matters more and more to users, PoW may be seen by its critics as a huge and useless energy expense.
The Proof-of-Stake (PoS) consensus, adopted by Ethereum with the Merge, is getting more and more popular because it is far greener than PoW, and because it identifies honest network participants by requiring them to lock a defined amount of tokens on the network (staking), which they can lose if they validate invalid blocks.

Energy consumption all but disappears, but PoS is considered by many to be less secure, because it is still very new and needs to be tested over time. Moreover, even if it is very expensive, only tokens are needed to take over the system, unlike Bitcoin, where an attacker would also have to amass enormous amounts of hardware and energy.
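Here is a minimal sketch of the PoS mechanics described above (stake-weighted selection plus slashing); the validator names, stakes and 50% penalty are illustrative assumptions, and real protocols like Ethereum’s are far more elaborate:

```python
import random

class Validator:
    def __init__(self, name: str, stake: float):
        self.name = name
        self.stake = stake  # tokens locked as collateral

def pick_proposer(validators: list[Validator]) -> Validator:
    # Selection probability is proportional to locked stake.
    return random.choices(validators, weights=[v.stake for v in validators])[0]

def slash(validator: Validator, fraction: float = 0.5) -> None:
    # Penalty for signing an invalid block: part of the stake is lost.
    validator.stake *= 1.0 - fraction

validators = [Validator("alice", 320), Validator("bob", 64), Validator("carol", 32)]
print(pick_proposer(validators).name)  # "alice" about 77% of the time
slash(validators[1])                   # bob's stake drops from 64 to 32
```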

However, PoS is very popular among builders and users because it paves the way for scalability and programmability.

III/ Scalability: required for mass adoption

Convenient and robust as it is, a blockchain is often a victim of its own success when faced with a very large number of transactions. The more decentralized a blockchain is, the less scalable it will be, meaning a lower transaction rate. The transaction rate shows how scalable a blockchain is by representing the number of transactions it can process in a given time (usually measured per second), which is capped by the speed at which network participants reach consensus on each block of the chain.

In short, consensus takes time to reach, mainly because miners / validators do not necessarily have the internet bandwidth and high-quality hardware needed to run a highly scalable blockchain (with large blocks and a fast block creation frequency).
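A back-of-envelope sketch of that cap, using Bitcoin-like numbers (1 MB blocks, roughly 250-byte transactions, one block every ~600 seconds) as illustrative assumptions:

```python
def max_tps(block_size_bytes: int, avg_tx_bytes: int, block_interval_s: float) -> float:
    """Upper bound on throughput: transactions per block / seconds per block."""
    return (block_size_bytes / avg_tx_bytes) / block_interval_s

# Bitcoin-like parameters: 1 MB blocks every ~600 s, ~250-byte transactions.
print(max_tps(1_000_000, 250, 600))  # ~6.7 transactions per second
```

Bigger blocks or shorter intervals raise the bound, but they also raise the bandwidth and hardware bar for running a node, which is exactly the trade-off against decentralization.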

Since the previous bull run, during which Ethereum was heavily congested, scalability has become a major concern for Web3 builders. Many projects are trying to solve this problem, either by creating new blockchains based on new technologies (sharding, subnets, sidechains, …) or by developing layer 2 solutions on top of existing blockchains.

Layer 2 solutions are especially popular among new projects aiming to fix the scalability issues of blockchains, because it is currently too hard to natively speed up a blockchain’s performance while maintaining decentralization and security.

There are already many different layer 2 technologies, but they generally work by processing transactions off the main chain: transactions are executed on a secondary blockchain (the layer 2) that is fully focused on scalability, then transferred to the main blockchain (the layer 1) in the form of a single transaction to be confirmed. This two-step approach lets users transact quickly on the layer 2 while benefiting from the security and decentralization of the layer 1.
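As a rough sketch of that two-step flow, assuming a plain hash as the batch commitment (real rollups post compressed transaction data or validity proofs, not a bare digest):

```python
import hashlib
import json

def execute_off_chain(transactions: list) -> list:
    # The layer 2 executes every transaction cheaply, off the main chain.
    return [{**tx, "status": "executed"} for tx in transactions]

def commit_to_l1(batch: list) -> str:
    # A single compact commitment lands on the layer 1 for the whole batch.
    payload = json.dumps(batch, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

txs = [{"from": "alice", "to": "bob", "amount": i} for i in range(1000)]
print(commit_to_l1(execute_off_chain(txs)))  # 1,000 transactions, one digest
```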

Another example of recent scalability solutions is modular blockchains, such as Celestia or Polygon Avail. In contrast to monolithic blockchains, also known as traditional layer 1s (Ethereum, Cardano, Solana, Tezos, Avalanche, …), modular blockchains are divided into three parts, each with a specific role in the blockchain’s functioning.

Monolithic blockchains versus Modular blockchains

In a few words, smart contracts are developed and executed in the “Execution” layer, while transaction data is stored and validated by the “Consensus & Data Availability” layer. The “Settlement” layer is an intermediate step that acts as a hub for every execution layer, where conflicts are settled and transactions are finalized.

In the case of Celestia, the blockchain is therefore a “Consensus & Data Availability” layer on top of which any kind of blockchain can be built, depending on your abilities and needs. For example, a small project that needs its own blockchain can focus exclusively on the scalability of its execution layer while delegating the security and decentralization of its blockchain to Celestia’s consensus & data availability layer, which would already gather numerous and diversified validators.
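Here is a hypothetical sketch of that division of labor; the class names and interfaces are invented for illustration and do not reflect Celestia’s actual API:

```python
class DataAvailabilityLayer:
    """Orders transaction data and keeps it retrievable (Celestia's role)."""
    def __init__(self):
        self.blobs = []

    def publish(self, blob: bytes) -> int:
        self.blobs.append(blob)     # data is ordered and made available
        return len(self.blobs) - 1  # position in the canonical ordering

class ExecutionLayer:
    """Runs smart contracts / transactions; outsources data ordering below."""
    def __init__(self, da: DataAvailabilityLayer):
        self.da = da
        self.state = {}

    def process(self, account: str, delta: int) -> None:
        self.state[account] = self.state.get(account, 0) + delta  # execute
        self.da.publish(f"{account}:{delta}".encode())  # delegate availability

chain = ExecutionLayer(DataAvailabilityLayer())
chain.process("alice", 100)  # executed locally, data published to the DA layer
```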

The scalability of blockchains is a recurring issue because their usage has increased significantly with the advent of more complex use cases. For example, the Web3 gaming industry didn’t exist a few years ago and is now about to bring millions of players on-chain to trade virtual pets in Dogamí or collectible cards in Oval3. That’s why Dogamí chose Tezos as its blockchain infrastructure: a very scalable blockchain, secured by PoS, with a Nakamoto coefficient of 5.

IV/ A question of choice and use case

While it is hard to combine decentralization with security and scalability, the whole ecosystem is working hard on solving this fundamental problem for infrastructure blockchains.

As the triangle above shows, the trilemma remains unsolved: no project manages to place itself at the exact center of the triangle. For now, each blockchain has to make trade-offs depending on its purpose: a decentralized store of value like Bitcoin doesn’t need extreme scalability but can’t compromise on security, whereas a supply chain blockchain shared between a few actors would probably favor security and scalability over decentralization.

That doesn’t mean the Trilemma is unsolvable: layer 2 solutions and modular blockchains are very promising, provided they can maintain the security of the infrastructure despite its increasing complexity.

In the end, this could help deliver decentralized solutions that can truly compete with proprietary infrastructures, which are still often faster, cheaper and more convenient to use.