We need to distinguish between segregation and aggregation: a truly scalable blockchain needs both.
Segregation is what SegWit already offers; it helps in pruning unnecessary witness data once blocks are buried deep enough and re-orgs are practically impossible.
As for aggregation in its ultimate form, as @ETF and Andrew have correctly reminded us, we have Schnorr, which has its own story and controversies (not too many, but still). The main obstacle would be hard-fork phobia in Bitcoin, and I don't exactly know what they are going to do about it.
But I think there is something we can do in terms of aggregation with current ECDSA technology. I don't know exactly how effective it would be, since I have no statistics right now showing how much it would matter, but I think it is worth discussing here.
The following is a modified version of the OP's pseudo script:
type: AGGREGATED,
inputs: {
    tx_ids: <id_list> [,<id_list>]                  // id_list :: id [,id]
    tx_pubkeys: <pubkey> [,<pubkey>]                // one pubkey per id_list
    aggregated_signatures: <signature> [,<signature>]
},
outputs: {
    type: P2KH,
    new_owners: <(address, amount, script)> [,<(address, amount, script)>]
}
Here we are on the same old rails, but with a small difference that can sometimes be a huge advantage, making some boilerplate substantially smaller.
Every payment made to a wallet address in Bitcoin creates a UTXO that is spendable once, as an input of a transaction. Now suppose a business/person has announced an address to another group/person for recurring payments. With the current Bitcoin transaction format, for the owner to spend the total balance it takes a one-payment-one-input transaction, which is amazingly stupid and a waste of space/bandwidth.
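To get a feel for the potential savings, here is a rough back-of-the-envelope comparison. The byte counts are approximations I'm assuming (a legacy P2PKH input is roughly 148 bytes: a 36-byte outpoint, a ~107-byte scriptSig carrying the signature and pubkey, and a 4-byte sequence); an aggregated format would repeat only the outpoints and attach one signature/pubkey pair per key:

```python
# Hypothetical size estimates, not exact consensus serialization sizes.
OUTPOINT = 36          # txid (32 bytes) + output index (4 bytes)
SIG = 72               # DER-encoded ECDSA signature, rough upper bound
PUBKEY = 33            # compressed public key
LEGACY_INPUT = 148     # approximate size of one full P2PKH input

def legacy_size(n_inputs: int) -> int:
    """Current format: every input repeats its own signature and pubkey."""
    return n_inputs * LEGACY_INPUT

def aggregated_size(n_inputs: int) -> int:
    """Proposed format: inputs to the same key share one sig and one pubkey."""
    return n_inputs * OUTPOINT + SIG + PUBKEY

for n in (1, 10, 100):
    print(n, legacy_size(n), aggregated_size(n))
```

Under these assumptions, 100 recurring payments to one address would shrink from ~14,800 bytes of inputs to ~3,700, so the savings grow linearly with the number of payments to the same key.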
In the proposed model above, wallets arrange inputs that share the same PubkeyHash in their output script into id-list(s), and disclose the unique pubkey(s) separately, along with the corresponding signature(s).
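The grouping step could be sketched like this (a minimal illustration with my own made-up names and toy string values, not a concrete wallet implementation; real code would carry full outpoints and produce the signatures at signing time):

```python
from collections import defaultdict

def aggregate_inputs(utxos):
    """Group spendable outputs by the pubkey-hash in their output script,
    so that each key contributes one id-list and one disclosed pubkey.

    `utxos` is a list of (txid, pubkey_hash) pairs; returns the input
    section of the proposed AGGREGATED transaction format (signatures
    omitted, since they are attached when the owner signs).
    """
    groups = defaultdict(list)
    for txid, pubkey_hash in utxos:
        groups[pubkey_hash].append(txid)
    return {
        "tx_ids": list(groups.values()),    # one id-list per key
        "tx_pubkeys": list(groups.keys()),  # stand-in: the wallet would disclose full pubkeys
    }

utxos = [("tx1", "alice"), ("tx2", "alice"), ("tx3", "bob")]
print(aggregate_inputs(utxos))
# → {'tx_ids': [['tx1', 'tx2'], ['tx3']], 'tx_pubkeys': ['alice', 'bob']}
```

The point is that however many payments "alice" received, her inputs collapse into one id-list covered by a single signature.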
It looks to be a good pre-Schnorr optimization, imo.
To be honest, I have not yet studied extensively whether it could be deployed as a soft fork.