Smaller blocks with shorter generation times would solve this.
No, it wouldn't.
This would be a better solution than creating monolithic blocks that may not be filled.
If they are not filled, then they are not monolithic.
Blocks are only as big as the number of bytes worth of transactions they contain. When people talk about increasing the blocksize, they are talking about increasing the maximum size allowed per block. A higher limit lets a block contain more transactions, because it takes more transactions before the block hits that limit.
Every block also has an 80-byte header.
Let's look at an example. Let's say:
- You have 9999920 bytes worth of unconfirmed transactions
- You allow a maximum block size of 10000000 bytes
- The block is solved after 10 minutes
Ten minutes later you confirm all the transactions in a single block. After you add the one 80-byte header, you will have added 10000000 bytes to the blockchain.
If, instead:
- You have the same 9999920 bytes worth of unconfirmed transactions
- You allow a maximum block size of 500000 bytes
- A block is solved every 30 seconds
Then after 20 blocks you'll again have added 10000000 bytes to the blockchain,
BUT . . .
Since each block requires its own header, you'll only have confirmed:
(500000 - 80) * 20 = 499920 * 20 = 9998400 bytes worth of transactions.
Since you started with 9999920 bytes worth of unconfirmed transactions and you only confirmed 9998400 bytes, you still have:
9999920 - 9998400 = 1520 bytes worth of unconfirmed transactions.
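If it helps, here's a minimal Python sketch of the same arithmetic. The names are just for illustration; only the 80-byte header, the block sizes, and the backlog figure come from the example above.

```python
# A minimal sketch of the arithmetic above (illustrative names).

HEADER_BYTES = 80

def confirmed_in_window(max_block_bytes, blocks_per_window):
    """Return (transaction bytes confirmed, bytes added to the chain)."""
    tx_bytes_per_block = max_block_bytes - HEADER_BYTES
    return tx_bytes_per_block * blocks_per_window, max_block_bytes * blocks_per_window

backlog = 9999920  # unconfirmed transaction bytes

# One 10000000-byte block every 10 minutes
print(confirmed_in_window(10000000, 1))    # (9999920, 10000000)

# Twenty 500000-byte blocks, one every 30 seconds
confirmed, growth = confirmed_in_window(500000, 20)
print(confirmed, growth)                   # 9998400 10000000
print(backlog - confirmed)                 # 1520 bytes left unconfirmed
```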
You could solve this problem of confirming fewer transactions by allowing the blocksize to be a bit bigger than 500000 bytes, but then the total number of bytes added to the blockchain every 10 minutes would be larger.
Having smaller blocks that are generated faster will result in either a blockchain that grows faster or fewer transactions confirmed in a given amount of time.
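To put that trade-off in general terms, here's another small, purely illustrative sketch: it asks how much the chain must grow to confirm the same backlog within a 10-minute window, as a function of the block interval. Only the 80-byte header figure comes from the example above; every extra block adds one more header, so shorter intervals mean more total growth for the same amount of confirmed transaction bytes.

```python
# Purely illustrative: chain growth needed to confirm a fixed backlog
# in a 10-minute window, for different block intervals.

HEADER_BYTES = 80
WINDOW_SECONDS = 600  # 10 minutes

def chain_growth(backlog_bytes, block_interval_seconds):
    blocks = WINDOW_SECONDS // block_interval_seconds
    # The backlog itself plus one header per block.
    return backlog_bytes + blocks * HEADER_BYTES

print(chain_growth(9999920, 600))  # 10000000 (1 block,  1 header)
print(chain_growth(9999920, 30))   # 10001520 (20 blocks, 20 headers)
```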