The 'transaction index' that you're describing sounds like it would have more overhead just to relay one chunk than
Matt's relaynetwork protocol typically needs to relay a whole block.
Clients wouldn't confirm/update the chaintip after every transaction. It is mainly just a way to prevent wasting bandwidth by receiving the same newblock messages from multiple peers.
The main idea is to send partial (unverified) blocks. I think the relaynetwork protocol doesn't do block validation either?
It remembers which transactions have already been sent and sends either a short reference or the full transaction accordingly. The assumed topology being:
p2p --> bitcoind <-> relaynode <-> relaynode <-> bitcoind <--- p2p
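The remember-what-was-sent behaviour could be sketched roughly like this (a hypothetical illustration, not the actual relaynetwork wire format; all names are made up):

```python
# Sketch: relay a transaction as a short reference if this peer has
# already received it, otherwise send it in full and remember it.

class TxRelayCache:
    def __init__(self):
        self.sent = {}        # txid -> short index assigned on first send
        self.next_index = 0

    def encode(self, txid, raw_tx):
        """Return a compact reference if txid was sent before, else the full tx."""
        if txid in self.sent:
            return ("ref", self.sent[txid])
        self.sent[txid] = self.next_index
        self.next_index += 1
        return ("tx", raw_tx)

cache = TxRelayCache()
first = cache.encode("aa" * 32, b"rawtxbytes")   # full transaction goes out
second = cache.encode("aa" * 32, b"rawtxbytes")  # only a short reference
```

The point being that once most of a block's transactions have already crossed the link, the block announcement itself shrinks to mostly references.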
Each relaynode has a bitcoind doing border routing?
Inherently, under that system, the entire block must be received before it can be relayed to peers. With smaller messages, individual chunks can be forwarded even before the rest of the block has been received.
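A minimal sketch of that cut-through idea, assuming chunks arrive as an iterable and ignoring validation and ordering (names are illustrative):

```python
# Forward each chunk to peers as soon as it arrives, instead of
# buffering the whole block and relaying it afterwards.

def relay_stream(incoming_chunks, peers):
    """Forward chunks to every peer as they arrive."""
    for chunk in incoming_chunks:
        for peer in peers:
            peer.append(chunk)   # stand-in for an actual network send

peer_a, peer_b = [], []
relay_stream([b"chunk0", b"chunk1", b"chunk2"], [peer_a, peer_b])
```

With store-and-forward, nothing would reach peer_a until all three chunks had arrived; here each chunk goes out immediately.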
For larger blocks, especially if the 1MB limit is lifted, that latency would be more significant.
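A back-of-the-envelope comparison makes the difference concrete (illustrative numbers, not measurements): store-and-forward pays the full block transfer time at every hop, while chunked relay pays it once plus roughly one chunk per extra hop.

```python
# Hypothetical parameters: an 8 MB block over 1 MB/s links across 5 hops.
block_bytes = 8_000_000
chunk_bytes = 16_000
bandwidth = 1_000_000            # bytes per second per link
hops = 5

transfer = block_bytes / bandwidth               # 8.0 s per full transfer
store_and_forward = hops * transfer              # whole block at each hop
cut_through = transfer + (hops - 1) * chunk_bytes / bandwidth

print(store_and_forward)  # 40.0 seconds
print(cut_through)        # 8.064 seconds
```

The gap grows linearly with both hop count and block size, which is why the latency matters more if the 1MB limit is lifted.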
Block verification is not needed as DOS protection, indeed, but it's arguably important that the verifying network censor invalid blocks: SPV clients cannot verify them on their own, and keeping invalid blocks from propagating maintains strong incentives against producing them.
SPV clients would still have to make sure that there are sufficient confirms. This is just a way to distribute blocks without having to receive the entire block first.
At the other end of the spectrum there are ideas that can give basically perfect parallelism and perfect efficiency relative to already-transmitted data, but they're certainly more complex.