Bitcoin could scale to practically unlimited block sizes by scaling a node internally through delegation. That is, a team of 1,024 people could manage blocks 1,024 times larger (this requires an ordered Merkle tree so the team can coordinate in parallel, plus propagation of mempool transactions and blocks filtered by transaction-hash range). But such a scheme scales by trust, which is perhaps limited by Dunbar's number, so teams would realistically be closer to 16 or 32 people, giving only 16-32 times larger blocks.

You can, however, apply trustless coordination while still scaling internally to the node. Similar schemes are often suggested in blockchain projects but assumed to be centrally agreed on ("split the blockchain into 16 shards"); it is instead possible for nodes to scale internally into as many parts as they want, while still using similar trustless mechanisms. You can do "stateless validation", where block validation and block production are randomly delegated within the team (so no one knows which "shard" they will be assigned for a given block), while each member has a dedicated shard where they store their part of the blocks and the UTXO database. You can throw in an extra round of audit that is random as well (as Polkadot has done). Then all members mine on the block together, which proves devotion to a certain node-as-a-team (in the consensus-mechanism sense this behaves a bit like a "coalition", which may be easier to see if you consider proof-of-stake). The node can at the same time use P2Pool or similar to be part of a larger mining pool.
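The two mechanisms above can be sketched in a few lines: partitioning transactions by hash range so each team member stores only its dedicated shard, and randomly delegating validation per block so no member knows its assignment in advance. This is a minimal illustration, not a protocol specification; the 16-member team, the `shard_of` helper, and the choice of the block hash as a shuffle seed are all assumptions made for the example.

```python
import hashlib
import random

NUM_SHARDS = 16  # hypothetical team of 16 members, one dedicated shard each


def shard_of(txid_hex: str, num_shards: int) -> int:
    """Map a txid to a shard by contiguous hash range.

    Shard i holds txids whose leading 32 bits fall in [i/n, (i+1)/n) of
    the id space, so mempool and block propagation can be filtered by range.
    """
    prefix = int(txid_hex[:8], 16)  # first 32 bits of the txid
    return prefix * num_shards >> 32


# Stand-in transaction ids (real txids are double-SHA256 of the transaction).
txids = [hashlib.sha256(str(i).encode()).hexdigest() for i in range(1000)]

# Each member stores and relays only the transactions in its own hash range.
by_shard: dict[int, list[str]] = {s: [] for s in range(NUM_SHARDS)}
for txid in txids:
    by_shard[shard_of(txid, NUM_SHARDS)].append(txid)

# "Stateless validation": per block, validation work is randomly delegated.
# Seeding the shuffle from the block hash (an assumption here) means the
# assignment is unpredictable beforehand but reproducible by every member.
rng = random.Random(b"block-hash-as-seed")
members = list(range(NUM_SHARDS))
rng.shuffle(members)
assignment = dict(zip(members, range(NUM_SHARDS)))  # member -> shard to validate
```

A second, independently seeded shuffle of the same form would give the extra random audit round mentioned above.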
Summary: many scaling ideas circulate in the blockchain space, and they generally assume everyone should agree on a certain number of shards (with one validator per shard), but you can apply a very similar model internally within a node instead. You then get something resembling what mining pools have been, but for everything else a node does. I think this may be an idea that has not been getting much attention. With it, a node that has enough hardware to handle 32 GB blocks can do so on its own; others might need to team up with a thousand people under trustless coordination, and others with a team of 16 people who trust one another. This approach forces nothing on anyone, and does not really change the protocol (besides requiring an ordered Merkle tree). It acknowledges that Nakamoto consensus already scales, and that any scaling of it should be internal to how it already works.
