Bitcoin Forum
Author Topic: Scaling internal to node but trustlessly with "coalition"  (Read 97 times)
Bipedaljoe (OP) | Jr. Member | Activity: 31 | Merit: 10
December 31, 2025, 06:46:59 PM | #1

Bitcoin could scale to practically unlimited block size by scaling a node internally through delegation. That is, a team of 1024 people could manage blocks 1024 times larger (this requires an ordered Merkle tree so the work can be coordinated in parallel, and propagation of transactions in the mempool and in blocks filtered by transaction hash range). But that kind of scaling relies on trust, which is perhaps limited by Dunbar's number, so teams would realistically be closer to 16 or 32 people, which gives only 16-32 times larger blocks. You can, however, apply trustless coordination while still scaling internal to the node.

Similar schemes are often suggested in blockchain projects, but they are assumed to be centrally agreed on ("split the blockchain into 16 shards"). It is possible instead for nodes to scale internally into as many parts as they want, while still using similar trustless mechanisms. You can do "stateless validation", where block validation and block production are randomly delegated within a team (so no one knows which "shard" they will be assigned to for a given block), while each member has a dedicated shard where they store their part of the blocks and the UTXO database. You can throw in an extra round of random audit as well (as Polkadot has done). And then all members mine on the block together, which proves devotion to a certain node-as-a-team (in the consensus-mechanism sense this behaves a bit like a "coalition", which may be easier to see if you consider proof-of-stake). The node can at the same time use P2Pool or similar to be part of a larger mining pool.
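
To make the hash-range delegation concrete, here is a minimal sketch in Python (my own illustration, not a spec; names like NUM_SUBNODES and the 2-byte prefix split are just assumptions):

Code:
import hashlib

NUM_SUBNODES = 1024  # team size; anywhere from 16 to 1024 as discussed above

def subnode_for(txid: bytes) -> int:
    """Map a transaction id to a sub-node by hash range (first 2 bytes)."""
    prefix = int.from_bytes(txid[:2], "big")   # 0..65535
    return prefix * NUM_SUBNODES // 65536      # even split of the id space

def my_slice(txids, my_index: int):
    """The transactions this sub-node relays, validates and stores."""
    return [t for t in txids if subnode_for(t) == my_index]

# usage: a sub-node filters mempool/block announcements down to its own range
txids = [hashlib.sha256(str(i).encode()).digest() for i in range(10)]
print(my_slice(txids, my_index=0))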

Summary: There are a lot of scaling ideas circulating in the blockchain space; they generally assume everyone should agree on a certain number of shards (and one validator per shard), but you can apply a very similar model internally within a node instead. Then you get something resembling what mining pools have been, but for everything else a node does. I think this is an idea that has not gotten much attention. With this, a node that has hardware enough to handle 32 GB blocks can do so, others might need to team up with a thousand people using trustless coordination, and others might need a team of 16 people who trust one another. This approach forces nothing on anyone, and does not really change the protocol (besides requiring an ordered Merkle tree...). It acknowledges that Nakamoto consensus already scales, and that any scaling of it should be internal to how it already works.
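
To illustrate the random-delegation aspect of the "stateless validation" and audit idea above (leaving out the witness machinery): the per-block assignment of members to shards can be derived from public randomness such as the previous block hash, so no one knows their shard in advance and every member computes the same assignment. A minimal sketch, all names assumed:

Code:
import hashlib
import random

def assignments(prev_block_hash: bytes, members: list, num_shards: int):
    """Deterministically shuffle members onto shards for this block."""
    seed = int.from_bytes(hashlib.sha256(prev_block_hash).digest(), "big")
    rng = random.Random(seed)
    order = list(members)
    rng.shuffle(order)
    validators = {s: order[s % len(order)] for s in range(num_shards)}
    # extra random audit round: a second, different member re-checks each shard
    auditors = {s: rng.choice([m for m in members if m != validators[s]])
                for s in range(num_shards)}
    return validators, auditors

# every member computes the same assignment from the same public data
v, a = assignments(b"\x00" * 32, ["alice", "bob", "carol", "dave"], num_shards=4)
print(v)
print(a)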

Bipedaljoe (OP) | Jr. Member | Activity: 31 | Merit: 10
December 31, 2025, 08:06:20 PM | #2

Quote
Isn't the bandwidth the real bottleneck here? Even if you have a thousand people sharing the work, they still all need to download the huge blocks anyway.

No, the beauty of it is that you can propagate transactions in blocks and in the mempool by transaction hash ranges, so that rather than being node-to-node it becomes sub-node to sub-node. So if you take a 1 MB block size and instead do 1024 sub-nodes with 1 MB "sub-blocks", you get 1 GB blocks, but each sub-node manages roughly the same bandwidth, computation and storage as a single node with 1 MB blocks (in a simple trust-based version they manage up to about twice the sub-block size, to cover the UTXO requests and such between one another; with the trustless approach it is probably still roughly the same). This is why it scales to practically unlimited block size. Note that it requires an ordered Merkle tree, so that sub-nodes can do their part of the block by themselves and know which part it is (and then only assemble the Merkle root together).
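
For the ordered Merkle tree part: each sub-node can build the subtree over its own hash-range slice alone and in parallel, and the team then only exchanges the small sub-roots to assemble the block's Merkle root. A toy sketch (simple pairwise hashing, not Bitcoin's exact Merkle rules):

Code:
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    """Plain binary Merkle tree over an ordered list of hashes."""
    level = list(leaves) or [h(b"")]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last hash if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# toy example: 4 sub-nodes, each holding its own ordered hash-range slice
shards = [[h(bytes([s, i])) for i in range(3)] for s in range(4)]

# each sub-node computes its sub-root independently and in parallel...
sub_roots = [merkle_root(slice_) for slice_ in shards]

# ...and only these few sub-roots need to be exchanged to build the block root
block_root = merkle_root(sub_roots)
print(block_root.hex())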
Bipedaljoe (OP) | Jr. Member | Activity: 31 | Merit: 10
January 01, 2026, 10:02:35 AM | #3

Quote
It sounds a bit like how mining pools already work, but for validation. If we can pool hashpower, why not pool validation power too, right?

Exactly. And if pooling of validation can be done "underneath" Nakamoto consensus and internal to the validator, there is no reason to centrally agree on how it has to be done (i.e., the approach Vitalik Buterin and many others have taken). Anyone who has to pool to compete with the larger block size is incentivized to do so, and to figure out how to do so, or they do not get paid (since they cannot validate; only the beefy nodes can manage such large blocks on their own).
Bipedaljoe (OP) | Jr. Member | Activity: 31 | Merit: 10
January 01, 2026, 12:44:28 PM | #4

In the sharding debate, this model can be positioned as intra-validator sharding subordinating inter-validator sharding. It combines the best of both worlds: thesis, antithesis, synthesis. I made an illustration to show it.



Thesis: Inter-validator sharding
Antithesis: Intra-validator sharding
Synthesis: Intra-validator sharding subordinates inter-validator sharding

With the synthesis, a node can do as it wants: shard or not shard, use trust or trust-minimization. It is dynamic to need relative to available hardware, which is the rationale for intra-validator sharding, but it also supports teaming up, which is the rationale for inter-validator sharding. It is the best of both worlds, exactly. It is also more complex, but self-organizing (nodes are incentivized to solve the needs they themselves have for their level of available hardware and bandwidth; there is no one-size-fits-all solution to sharding here).
DaveF | Legendary | Activity: 4074 | Merit: 7062
January 01, 2026, 02:53:41 PM | #5
Merited by Mia Chloe (3)

IMO, and nothing more, this looks like an answer in search of a question.

Off-the-shelf hardware that is YEARS OLD can more than handle much larger blocks than we are doing now. I have a customer who is running three 7th-gen PCs, each with 2 VMs on it. All 6 VMs are the same and are running full nodes of BTC - BCH - LTC - ETH - [I forgot the 5th coin Roll Eyes], along with whatever software is needed to talk to the ATM. And they are not even stressing.

Same with the bandwidth. Setting aside the IBD, even someone on 56k dial-up internet could keep up with 16 MB blocks. Yeah, you don't want to run a node on dial-up, but the point remains the same.

By the time we get to and need much larger block validation, old hardware will be that much more powerful.

As I started with, IMO, I'm not saying it's not a good idea, just that it's not really needed at the moment.

-Dave

Bipedaljoe (OP) | Jr. Member | Activity: 31 | Merit: 10
January 02, 2026, 07:48:54 AM | #6

Dave, the position you describe is the "intra-node sharding" side. This is one of the two sides in the scaling debate; the other is "inter-node sharding". On the "intra-node sharding" side, a typical argument is that advances in hardware mean intra-node sharding will always be sufficient. The "inter-node sharding" side rejects that claim. Two sides that both claim the other is wrong, two sides that cannot be reconciled. I simply point out that both sides can be combined into a unified model. Then reality can show who was right and who was wrong (most likely, both sides were right about some things and wrong about some things).