Please note that this post is intended to be neutral and informational. If anything here is wrong, please let me know and I will fix it.
All right, let's do this. Mostly balanced response. Just a couple issues. First mistake:
There is also a verification slowdown that is caused by large blocks. The time required to verify a block grows exponentially in relation to the block size.
The time required to verify a block does NOT grow exponentially in relation to the block size. The time required to verify a transaction does -- with the current implementation -- grow quadratically in relation to the transaction size, because each signature check re-hashes most of the transaction. One can indeed create a block consisting of a single transaction large enough to fill the block to max size, and this would be much worse in a double-sized block. But don't mistake a transaction-size issue for a block-size issue. There are also other ways of mitigating this problem.
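To see why the scaling tracks transaction size rather than block size, here is a toy cost model (not real Bitcoin code; the sizes and the hash function are illustrative assumptions). Under the legacy sighash rules, verifying each input's signature re-hashes roughly the whole transaction, so the total bytes hashed grow quadratically with the number of inputs:

```python
import hashlib

def legacy_verification_work(num_inputs, bytes_per_input=100):
    """Toy model: each input's signature check hashes the whole
    serialized transaction, so total hashing work is ~quadratic
    in transaction size. Constants are illustrative, not consensus."""
    tx = b"\x00" * (num_inputs * bytes_per_input)  # whole serialized tx
    bytes_hashed = 0
    for _ in range(num_inputs):
        hashlib.sha256(tx).digest()  # one sighash computation per input
        bytes_hashed += len(tx)
    return bytes_hashed

# Doubling the number of inputs quadruples the hashing work:
assert legacy_verification_work(200) == 4 * legacy_verification_work(100)
```

Note that a block full of many small transactions does not hit this worst case; only a single near-block-sized transaction does, which is why it is a transaction-size issue.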
Segregated witness is a solution to transaction malleability which has a side effect of decreasing the size of a transaction and thus increasing the number of transactions which can be put into a block.
No, segregated witness does nothing to decrease the size of a transaction. Instead, it actually increases transaction size by a few bytes. There is, however, accounting legerdemain involved: part of the transaction (the witness data) simply is not counted at full weight against the block limit. But make no mistake -- the entire transaction must still flow across the network, and the entire transaction's bytes are necessary for full validation of the transaction.
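The accounting trick can be sketched as follows (a simplified illustration of the BIP 141 weight formula; the example byte counts are made up). Witness bytes are discounted 4x in the "weight" metric, so the transaction's counted size shrinks even though every byte still travels over the wire:

```python
import math

def segwit_sizes(base_size, witness_size):
    """Simplified BIP 141 accounting: non-witness bytes count 4 weight
    units each, witness bytes count 1. Returns (bytes transmitted,
    virtual size used against the block limit)."""
    total_bytes = base_size + witness_size   # what peers actually transmit
    weight = base_size * 4 + witness_size    # consensus weight metric
    vsize = math.ceil(weight / 4)            # "virtual size" in vbytes
    return total_bytes, vsize

# Hypothetical transaction: 250 base bytes + 110 witness bytes.
total, vsize = segwit_sizes(base_size=250, witness_size=110)
# 360 bytes cross the network, but only 278 vbytes count toward the limit.
```

So more transactions fit under the limit not because any transaction got smaller, but because witness bytes are counted at a quarter of their real size.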