NOTE: Moderated topic.
I think you forgot to hit that self moderate button.
It would be relatively easy to make the blocks contain hashes of off-chain bundles that record additional transactions. These bundles could then be whatever size, or they could be one megabyte each, with dozens or hundreds as needed.
Nodes getting just the blocks could then easily verify that a block chain has grown from the genesis block and see how much proof-of-work it contains, allowing them to pick valid longest-chains without tracking the bulk of transactions.
I don't quite understand this. Are you talking about full nodes here? If so, then a full node would still have to download all of those bundles, verify all of the transactions, make sure the bundles hash to the hashes in the block, and check that those hashes commit to the merkle root. The bandwidth requirement is still the same, and the CPU overhead is slightly higher due to the extra hashing. A full node has to do this; otherwise a malicious miner could be producing invalid blocks or just inserting arbitrary hashes.
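To make the point concrete, here's a rough sketch of what that full-node check looks like, assuming the block commits to the bundles via a merkle root over the bundle hashes (the exact commitment scheme isn't specified in the proposal, so this is just one plausible reading):

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(hashes: list[bytes]) -> bytes:
    """Merkle root over a list of hashes, duplicating the last on odd layers."""
    if not hashes:
        return sha256d(b"")
    layer = hashes
    while len(layer) > 1:
        if len(layer) % 2 == 1:
            layer = layer + [layer[-1]]
        layer = [sha256d(layer[i] + layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]

def full_node_check(bundles: list[bytes], committed_root: bytes) -> bool:
    """A full node must fetch EVERY bundle, hash it, and check the block's
    commitment. (Validating each transaction inside the bundles would then
    follow; none of that work goes away.)"""
    bundle_hashes = [sha256d(b) for b in bundles]
    return merkle_root(bundle_hashes) == committed_root
```

The point being: the verifier still downloads and hashes every byte of every bundle, so nothing about the bundling saves a full node any bandwidth.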
Spending a txOut would require transmitting both the merkle branch of the txOut in the current txOut set (to show that it hasn't been spent) and the bundle containing the tx record where that txOut originates (so that the client can check the old transaction). The receiving client could then check the validity of the txOut.
What is the "merkle branch of the txOut"?
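For what it's worth, if the quoted idea means a conventional merkle inclusion proof over some committed txOut set, verifying such a branch is cheap and standard. A minimal sketch (the branch format here is my own assumption, not anything from the proposal):

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_branch(leaf: bytes, branch: list[tuple[bytes, str]],
                  root: bytes) -> bool:
    """Walk a merkle branch from a leaf up to the root.
    Each step is (sibling_hash, side): 'left' means the sibling is
    concatenated on the left before hashing, 'right' means on the right."""
    h = sha256d(leaf)
    for sibling, side in branch:
        h = sha256d(sibling + h) if side == "left" else sha256d(h + sibling)
    return h == root
```

But note what a branch like this can and can't show: it proves membership in whatever set the root commits to, and proving *unspentness* additionally requires that the root itself commit to the current unspent set, which is exactly the part the quoted proposal leaves vague.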
And, poof. You create another level of "lightweight client" that checks the block chain itself but doesn't check individual transactions except for those transactions that directly affect it.
It isn't very lightweight if you are still downloading 60+ GB of blockchain, although I suppose it is lighter than what a full node requires.
And the block size no longer limits the transaction rate.
So it would scale better, or at least it wouldn't fail with a hard limit when transaction rates increase.
Right, but then what happens when someone decides to spam the network, as we have seen in the past? This brings back the old argument against having an unlimited block size, which is what you have essentially proposed.
But would it scale better *enough*? Regardless of how it's done, lifting the tx rate limit means increasing the bandwidth/storage limit for anybody who's downloading and checking the full transaction record, by the same amount as if you had increased the block size limit itself. Because, ultimately, they are the same limit.
It would allow more transactions, but you would eventually run into hardware limitations and potentially stop many users from running full nodes due to the bandwidth and storage requirements. This is a centralizing factor.
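Some back-of-the-envelope arithmetic makes the "same limit" point obvious. The per-transaction size here (~250 bytes) is just a rough illustrative assumption:

```python
# Illustrative assumptions, not measurements:
# ~1 MB per block, one block every ~10 minutes, ~250 bytes per transaction.
block_bytes = 1_000_000
block_interval_s = 600
tx_bytes = 250

base_rate_tps = block_bytes / tx_bytes / block_interval_s  # ~6.7 tx/s
base_bandwidth_Bps = block_bytes / block_interval_s        # ~1.7 kB/s sustained

# Whether the extra transactions live in bigger blocks or in off-chain
# bundles, a full verifier's sustained bandwidth grows linearly with them:
for multiplier in (1, 10, 100):
    print(f"{multiplier:>4}x tx rate -> "
          f"{base_rate_tps * multiplier:6.1f} tx/s, "
          f"{base_bandwidth_Bps * multiplier / 1000:8.1f} kB/s")
```

At 100x the transaction rate a full verifier needs ~170 kB/s sustained plus the corresponding storage growth, no matter which data structure carries the transactions.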
One advantage to miners over just increasing the block size: you'd only need to download the block itself to get the ability to form a new valid block, so you still get the propagation times etc of one-megabyte maximum-size blocks. You aren't particularly penalized for bandwidth, provided you can use your bandwidth FIRST to get the block and THEN start downloading the bundle.
The disadvantage is that miners who haven't yet finished downloading the transaction bundle would risk orphaned blocks if they include any transactions that were broadcast before the previous bundle, because they wouldn't know yet whether those transactions were already in it. So if a tx didn't make it into the very first block it could have been in, it might be a long-ish time before anybody would risk including it in a new block.
As sdp said, a miner would still have to download and verify the bundles before mining, otherwise the block he is building on could be invalid. That said, many miners are SPV mining, so they aren't validating blocks anyway, or are doing so in parallel. It wouldn't affect them, but it is bad practice.
I would also like to point out that changing the block structure to do this would require a hard fork anyway.