Settling the Block Size Debate


This is a guest post by Eric Lombrozo, the Co-CEO and CTO of Ciphrex Corp., a software company pioneering decentralized consensus network technology. Lombrozo is also a founding member of the CryptoCurrency Security Standards Steering Committee and has been a longtime contributor to the open source Bitcoin core development effort.

Introduction

In the last few months, a contentious debate has arisen surrounding the issue of a hardcoded constant in the consensus rules of the Bitcoin network. While on the surface it appears to be a simple enough change, this single issue has opened up a veritable Pandora’s box.

What is the block-size limit and why is it there?

When the Bitcoin network was in its infancy, several assumptions had to be made regarding what kind of computational resources a typical Bitcoin node would have. Among these resources were network bandwidth, storage space and processor speed. If blocks were allowed to grow too big, they would swamp these resources, making it easy to attack the network and discouraging people from running a node. On the other hand, if blocks were too small, network resources would go needlessly underutilized, keeping the number of transactions too low. Although available computational resources vary widely between devices and computer technology continues to evolve, for the sake of simplicity a single size was chosen: one megabyte.

It all comes down to economics

The block size limit is, at its core, an economic decision. It balances transaction load with availability of computational resources to handle the load. Block space is subject to the same economic principles of supply and demand as any other scarce resource.

In the early days of the Bitcoin network, it was expected that transaction load would remain well below the prescribed limit for some time. However, it was anticipated that eventually blocks would fill up as more and more nodes joined the network.

In order to deal with resource scarcity, a transaction fee had already been introduced into the protocol design. The idea was that a fee market would naturally develop, with those wanting their transactions prioritized for processing offering to pay higher fees. Until fees could sustain the network, an inflationary block reward subsidy, which halves every 210,000 blocks (about once every four years), would provide an incentive for miners to continue building blocks.
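The subsidy schedule itself fits in a few lines of code. The sketch below is an illustrative Python reimplementation, not the reference client's code, but it uses the actual consensus parameters: an initial subsidy of 50 BTC and a halving every 210,000 blocks.

```python
COIN = 100_000_000          # satoshis per bitcoin
HALVING_INTERVAL = 210_000  # blocks between subsidy halvings (~4 years)

def block_subsidy(height: int) -> int:
    """Block reward subsidy, in satoshis, at a given block height."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:      # the subsidy has been shifted all the way down to zero
        return 0
    return (50 * COIN) >> halvings

# The subsidy drops from 50 BTC to 25 BTC at height 210,000,
# to 12.5 BTC at height 420,000, and so on.
assert block_subsidy(0) == 50 * COIN
assert block_subsidy(210_000) == 25 * COIN
```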

The problem is that there is a chicken-and-egg situation inherent in this. As long as block space is abundant, there is little incentive to develop the proper infrastructure to accommodate a fee market, and until recently most of this infrastructure has assumed abundance. Block space scarcity requires a significant change to how we propagate transactions, yet these problems are only now beginning to be addressed as scarcity becomes a reality.

As the block reward subsidy continues to decrease over time, fees will become more and more important as a means for paying for the network. Blockchains are expensive to maintain – and block space isn’t free. Someone needs to foot the bill. Removing scarcity by increasing block size shifts the costs from transaction senders to validator nodes – the very nodes upon which we vitally depend for the continued secure operation of the network.

But isn’t the block size limit about scalability?

A major misconception that has arisen around this issue is that it is all about being able to scale up the network. Scalability is indeed a serious concern, and there has recently been significant progress in addressing it via noncustodial off-chain payment channels, but it is tangential to this discussion and beyond the scope of this article. The block-size limit is about economics, not about scalability. Hopefully, we can clear up this misconception once and for all.

In computer systems, scalability is the property of a system to be able to continue to operate within design specifications as we increase the amount or size of data it must process. Since computational resources are finite, any real-world computer system has limits beyond which it will no longer behave desirably.

A maximum block size was imposed on the Bitcoin network to make sure we never surpass this limit. Undesirable behavior isn’t merely a matter of annoyance here – the security of the network itself is at stake. Surpassing the operational limit means the security model upon which the network was built can no longer be relied upon.

One of the assumptions in this security model is that it is reasonably cheap to validate transactions. Beyond a certain block size, the cost of transaction validation grows beyond what the honest nodes in the network can bear or are willing to bear. Without proper validation, no transaction can be considered secure.

Raising the block size limit can only securely increase the number of transactions the network processes as long as the new limit remains below the point where validation starts to fail.

Can’t we make optimizations to the software to reduce validation cost?

Yes, we can. There are a number of areas that could still be improved; among them are block propagation mechanisms, compression or pruning of data, more efficient signature schemes, and stronger limits on operation counts. Many of these are indeed good ideas and should be pursued. All else being equal, reducing the cost of validation can only improve the network’s health. However, barring a major algorithmic breakthrough along with a substantial protocol redesign and/or a sudden acceleration of Moore’s Law and Nielsen’s Law, there is a hard theoretical limit to how much we can really do to reduce costs.

Even given these optimizations, the computational cost of validation grows at a greater-than-linear rate with block size. Roughly, this means that multiplying the block size by X raises the cost of validating the block by a factor larger than X. We would still be many orders of magnitude short of satisfying global demand if we expect to include everyone’s transactions in the blockchain while also expecting the network to continue to function properly. Block space is inherently scarce.
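To see where the superlinearity comes from, consider the legacy signature-hashing behavior as one well-known example: each input’s signature check re-hashes roughly the entire transaction, so the hashing work grows with the number of inputs times the transaction size. The toy model below illustrates this; the byte counts are rough assumptions, not exact serialization sizes.

```python
# Toy model of superlinear validation cost via legacy signature hashing.
INPUT_SIZE = 150   # assumed bytes per input (approximate)
OUTPUT_SIZE = 34   # assumed bytes per output (approximate)

def sighash_bytes(num_inputs: int, num_outputs: int) -> int:
    """Approximate bytes hashed to verify all signatures in one transaction."""
    tx_size = num_inputs * INPUT_SIZE + num_outputs * OUTPUT_SIZE
    return num_inputs * tx_size   # each input re-hashes ~the whole transaction

# Doubling the number of inputs roughly quadruples the hashing work:
small = sighash_bytes(num_inputs=100, num_outputs=1)
large = sighash_bytes(num_inputs=200, num_outputs=1)
print(large / small)   # ~4x, not 2x
```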

At what point does validation start to fail?

There is strong evidence that we are quickly nearing this point, if we have not already passed it. The computational cost of validation is currently increasing far faster than the cost of computational resources is decreasing. In addition to the superlinear complexity inherent in the protocol, the protocol design has two fateful errors that further compound the problem: it assumes that miners will properly validate blocks and that clients will be able to easily obtain reasonably secure, short proofs for their transactions that they can check for themselves.

But don’t miners validate blocks?

Sometimes. That’s the first error. Miners have at least some incentive to validate, but they don’t get paid to do so. They get paid for finding nonces that make blocks hash within a target, a.k.a. proof-of-work, and nothing about that requires them to validate the blocks they build upon. They do lose their rewards if their own block is invalid or if they mine atop an invalid block, but as long as those feeding them blocks almost always feed them valid blocks, it can actually be rational to cut corners on validation to save costs. The occasional invalid block might statistically be more than offset by the cuts in nominal operational expenses. This scenario has something of a tragedy of the commons element to it.
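A back-of-the-envelope calculation makes the trade-off concrete. All of the numbers below are assumptions chosen purely for illustration; the point is only that when invalid blocks are rare enough, the expected reward forfeited by skipping validation can be smaller than the cost of validating.

```python
# Illustrative expected-value comparison for a miner considering skipping validation.
hashrate_share = 0.05        # assumed fraction of network hashrate
blocks_per_month = 30 * 144  # ~144 blocks mined network-wide per day
p_invalid_parent = 0.0005    # assumed chance the block being built on is invalid
block_reward_btc = 25.0      # assumed reward per block

# Expected BTC forfeited per month by mining on unvalidated parents
expected_loss = hashrate_share * blocks_per_month * p_invalid_parent * block_reward_btc

validation_cost_btc = 5.0    # assumed monthly cost of full validation infrastructure

print(f"expected loss from skipping validation: {expected_loss:.2f} BTC/month")
print(f"cost of validating:                     {validation_cost_btc:.2f} BTC/month")
# With these (assumed) numbers, cutting corners on validation 'pays'.
```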

Particularly costly for miners is propagation latency. The longer it takes them to receive, validate, and propagate a new block, the higher the chance someone else will beat them to it. Also costly for miners are maintenance and support for validation nodes to make sure they correctly adhere to the consensus rules. Then there’s the actual cost of the computational resources required to run the node. And even if the miner is willing to incur all these costs, a software bug could still cause them to mine an invalid block. And all the above would be true even without adding mining pools to the picture, which greatly amplify validation errors.

These concerns are not merely hypothetical musings. This scenario has already come to pass. Around July 4, 2015, a network fork occurred for exactly these reasons, causing many clients, websites and online services to accept invalid blocks. Rather than putting up with the costs of validation themselves, they were relying on the miners to validate for them…and the miners themselves were not validating. The costlier validation is, the more likely such scenarios become.

Unfortunately, the protocol currently lacks a means to directly compensate validators securely. If we could do this it would likely lead to a much more robust and secure economic model.

Why can’t clients validate the blocks themselves?

While the resource requirements to run a full validation node are still within the capabilities of modern servers, they have greatly surpassed the capabilities of smaller devices, particularly mobile devices with intermittent or restricted network connections. Even most desktop and laptop systems are already heavily taxed by having to validate one-megabyte blocks – and the need to run one’s own validation node greatly degrades the end-user experience.

The second fateful error in the protocol design is the assumption that, even though running a full validation node is impractical for most users, they would still be able to request from other nodes reasonably secure, short proofs that the blocks and transactions they’ve received are valid. A satisfactory mechanism for this has so far failed to materialize, and the problem is greatly exacerbated when miners also fail to validate properly. Instead, most clients currently rely on centralized services to perform the validation for them – and sadly, the July 4, 2015 fork demonstrated that even with good intentions, these centralized services cannot be counted upon to always validate properly either.
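The closest thing that does exist is the SPV-style Merkle branch: a short proof that a transaction is included in a block whose header the client already has. The sketch below (the proof layout is illustrative) shows the check, and also why it falls short of what the protocol assumed: inclusion in a block says nothing about whether the block itself is valid.

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Double SHA-256, as used for Bitcoin's Merkle tree."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_branch(tx_hash: bytes, branch: list, index: int, merkle_root: bytes) -> bool:
    """Check that tx_hash at position `index` hashes up to merkle_root.

    branch: sibling hashes from the leaf level up to just below the root.
    Proves inclusion in the block only -- NOT that the block is valid.
    """
    h = tx_hash
    for sibling in branch:
        if index & 1:                  # our node is the right child
            h = dsha256(sibling + h)
        else:                          # our node is the left child
            h = dsha256(h + sibling)
        index >>= 1
    return h == merkle_root
```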

What can we do about all this?

Like it or not, the era of block space abundance is coming to an end. Even if, after addressing all the above concerns, we agreed it was safe to raise the size limit, sooner or later we would bump up against the new limit…probably sooner rather than later. And if we raise it without carefully addressing those concerns, as well as the enormous risk associated with incompatible changes to consensus rules, we risk nothing short of a collapse of the network security model, making the issue of block size moot.

Regardless of whether – and when – we ultimately end up increasing the block size limit, we are already dealing with scarcity. Nodes that relay transactions on the network are already being forced to prioritize them to reduce memory load and avoid denial-of-service attacks. Spam attacks are already causing some blocks to fill up. A fee market is already starting to develop. I trust human ingenuity to find a good way forward when presented with these challenges.
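The kind of prioritization relay nodes are being forced into can be sketched very simply: rank transactions by fee per byte and keep the best-paying ones under a memory budget. This is a simplified illustration, not Bitcoin Core’s actual mempool policy.

```python
from dataclasses import dataclass

@dataclass
class Tx:
    txid: str
    size_bytes: int
    fee_satoshis: int

    @property
    def fee_rate(self) -> float:
        """Fee density in satoshis per byte."""
        return self.fee_satoshis / self.size_bytes

def select_for_mempool(txs: list, max_bytes: int) -> list:
    """Greedily keep the best-paying transactions under a memory budget."""
    kept, used = [], 0
    for tx in sorted(txs, key=lambda t: t.fee_rate, reverse=True):
        if used + tx.size_bytes <= max_bytes:
            kept.append(tx)
            used += tx.size_bytes
    return kept
```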

Regarding validation costs, I’m hopeful that eventually we’ll be able to make it cheap and efficient for everyone to properly validate – or at worst, develop mechanisms for securely outsourcing and enforcing validation. This will require substantial reengineering of the protocol – and perhaps of the Internet itself. Even if this is still a few years off, given all the major developments occurring in this space I believe it will eventually happen. For now, I would strongly urge caution when doing anything that further increases validation costs.


Photo Tiger Pixel / Flickr
