It doesn't take too long to process that far, but it starts really slowing down at around block 350,000. I think that is when P2SH started being used a lot more and transactions just got bigger and more complicated.
P2SH should actually speed up parallel verification. You can fully validate a P2SH transaction without looking at any other transactions.
The only serial part is connecting it to the inputs. That requires verifying that the sum of the input values is >= the sum of the output values, and also running a Hash160 for each input to make sure that the sub-script matches the hash.
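The split described above can be sketched in a few lines. This is a toy model, not Bitcoin's wire format or actual validation code, and it uses SHA-256 as a stand-in for Hash160 (which is really RIPEMD160(SHA256(x))) so the sketch runs without a RIPEMD160 digest:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

# Toy transaction model (hypothetical, not Bitcoin's format): each input
# carries the redeem script being spent plus the script hash that the
# previous output committed to, and a value already resolved from the UTXO set.
def script_hash(script: bytes) -> bytes:
    # Stand-in for Hash160 = RIPEMD160(SHA256(x)); plain SHA-256 here
    # so the example runs on any Python build.
    return hashlib.sha256(script).digest()

def check_scripts(tx: dict) -> bool:
    # Parallelizable part: needs nothing outside this one transaction.
    return all(script_hash(i["redeem_script"]) == i["script_hash"]
               for i in tx["inputs"])

def check_values(tx: dict) -> bool:
    # Serial part: input values come from earlier transactions, so
    # connecting them to the chain can't be done per-tx in isolation.
    return (sum(i["value"] for i in tx["inputs"])
            >= sum(o["value"] for o in tx["outputs"]))

def verify_block(txs: list) -> bool:
    # Script checks fan out across workers; value checks stay serial.
    with ThreadPoolExecutor() as pool:
        scripts_ok = all(pool.map(check_scripts, txs))
    return scripts_ok and all(check_values(tx) for tx in txs)
```

The point of the structure is that `check_scripts` touches only the transaction itself, so it can run for every transaction concurrently, while `check_values` is the part that ties the transaction to its inputs.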
User veqtrus on Reddit informs me that checkpoints were phased out after v0.10.0 (headers-first). If that's so, then they're basically irrelevant now. Block #295000 is really ancient history.
Block 295000 has a difficulty of 6119726089. With a 4 GH/s miner, you could find a header that builds on that block every 200 years or so. For that effort, you can force everyone in the network to store 80 bytes of data.
Before headers first, you could force everyone on the network to store 1 MB of data, since you could force them to store a full block. With headers first, they will accept and store your header [*], but won't download the full block, since it isn't on the longest chain.
It is a cost tradeoff. The latest block has a difficulty of 1880739956. Even if that were the checkpoint, it would only be around 3 times harder to create fake blocks. Moving to headers first improved things by a factor of 12,500 (1 MB / 80 bytes).
Headers first combined with 295000 as a checkpoint gives 3841 times better protection than blocks-only with 399953 as a checkpoint.
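The quoted factors can be reproduced directly from the figures in this thread (taking those difficulty values as given):

```python
# Headers-first gain: an attacker forces 80 bytes stored instead of a 1 MB block.
headers_gain = 1_000_000 / 80             # 12,500x

# Ratio between the checkpoint-295000 difficulty and the latest difficulty,
# both as quoted above.
diff_295000 = 6_119_726_089
diff_latest = 1_880_739_956
diff_ratio = diff_295000 / diff_latest    # ~3.25x, the "around 3 times"

# Net factor for headers-first @ 295000 vs blocks-only @ 399953.
net = headers_gain / diff_ratio
print(int(net))                           # 3841
```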
[*] Actually, I am not sure if they even commit it to disk. It might just waste 80 bytes of RAM.