I wasn't aware of 131 when I wrote that text, but aggregating public key reuse is a perennial proposal which recurs every 6 to 9 months and is shot down each time.
Do you have a link to any of these proposals? I wrote BIP131, so I'm curious to see other people's (failed) take on it.
Inserting a huge incentive to compromise fungibility and privacy into the system to get a modest capacity boost is a non-starter-- even more so than it was in 2011, when I first recall seeing it proposed. And yes, some people currently use Bitcoin in ways that damage the system-- it can take that-- but that in no way makes it acceptable to reward the harmful behavior.
I keep seeing this posted over and over again by multiple people, but it makes no sense to me. I'm referring to the notion that me using the system a certain way has any effect on your privacy-- as if me re-using addresses had any effect on you. Could you explain how this is so? It's not a self-evident claim at all.
Let's say my shower is bugged with a hidden camera. This will cause my privacy to be lost, not anyone else's. The only way this bugged shower will affect your privacy is if I were to paste a naked picture of you over the camera hole, and in order to do that, I'd need a naked picture of you in the first place. I can't violate someone's privacy unless I have their private information to begin with. If you're sending me bitcoin from a wallet that generates a new change address for every transaction, and I'm using the same address over and over again, what is the proverbial naked picture of you that I'm leaking to the world?
(As an aside, the example you give is pretty broken-- if every customer pays to the same address, you cannot distinguish which of multiple concurrent payments has actually gone through. That only works so long as your hotdog stand is a failure; as soon as you have multiple customers paying close together in time, it turns into a mass of confusion.)
Oh come on, you're being pedantic. If you want to accept concurrent payments, you print out multiple QR codes, one for each register.
No kind of aggregation can just be done by "flipping a bit in the version field"; that is utterly incompatible with the design of the system, violates all the layering, and would be rejected as coin theft by all existing nodes.
You're the only person who thinks this.
In fact, the way you're describing it here would result in _immediate_ funds loss, even absent an attacker. Imagine an additional payment shows up that you weren't expecting when you signed, but happens to arrive at miners first: the total value of that additional payment gets converted into fees! As you've described it here, it would also be replay-vulnerable-- someone could send the same transaction to the chain a second time to move new payments that have shown up since.
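The funds-loss mode described here follows from how fees are defined: any input value not claimed by an output becomes the miner fee. A minimal sketch in satoshis (all values hypothetical):

```python
def implicit_fee(input_sats, output_sats):
    # In Bitcoin, whatever input value the outputs don't claim is fee.
    return sum(input_sats) - sum(output_sats)

# What the signer expected: sweep 4 BTC + 2 BTC and pay 5.9999 BTC
# back to themselves, leaving a 10,000-satoshi fee.
inputs = [400_000_000, 200_000_000]
outputs = [599_990_000]
print(implicit_fee(inputs, outputs))  # 10000

# An unexpected 3 BTC payment to the same address arrives before the
# transaction confirms and gets swept by the same signature: since the
# outputs were fixed at signing time, the whole 3 BTC becomes fee.
print(implicit_fee(inputs + [300_000_000], outputs))  # 300010000
```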
Only UTXOs confirmed in a block prior to the signed input get included in the coalesce. Let's say you have three UTXOs to the same address:
#1 value 4 BTC in block 1000
#2 value 2 BTC in block 2000
#3 value 1 BTC in block 3000
If you include input #2 in your coalescing transaction, only outputs #1 and #2 will be included in the coalesce. Output #3 is left off because it was confirmed in a block after the output included in the transaction. The total input amount in this case would be 6 BTC.
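The selection rule described above can be sketched as follows (names like `Utxo` and `coalesce_inputs` are illustrative, not taken from BIP131):

```python
from dataclasses import dataclass

@dataclass
class Utxo:
    txid: str
    value_btc: int
    height: int  # block height that confirmed this output

def coalesce_inputs(utxos, signed):
    # Sweep the signed UTXO plus every UTXO (to the same address)
    # confirmed at an earlier-or-equal block height.
    return [u for u in utxos if u.height <= signed.height]

# The three UTXOs from the example, signing against #2 (block 2000).
utxos = [Utxo("utxo1", 4, 1000),
         Utxo("utxo2", 2, 2000),
         Utxo("utxo3", 1, 3000)]
swept = coalesce_inputs(utxos, signed=utxos[1])
total = sum(u.value_btc for u in swept)
print([u.txid for u in swept], total)  # ['utxo1', 'utxo2'] 6
```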
That kind of design also results in a substantial scalability loss, as every node would need an additional search index over the UTXO set (or would have to perform a linear scan of it, which currently takes tens of seconds) in order to gather all the inputs with the same scriptPubKey.
You just invoked a pet peeve of mine. You can't throw out numbers like that without mentioning the hardware. Currently the UTXO database has 35 million rows. On my 3-year-old laptop I can query a 35-million-row database and have it finish in a few hundred milliseconds. I guess if you're running this on an Apple II it would take "tens of seconds"...
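For a concrete sense of the trade-off being argued here, a toy sketch (all names illustrative) of the two ways a node could gather outputs by scriptPubKey-- a linear scan of the whole set versus maintaining an extra index:

```python
from collections import defaultdict

# Toy UTXO set: (txid, vout) -> (scriptPubKey, value). A real node
# keys its database by outpoint, which is why a by-script lookup
# needs either a full scan or a second index.
utxo_set = {
    ("a", 0): ("spk1", 4),
    ("b", 1): ("spk2", 7),
    ("c", 0): ("spk1", 2),
}

def gather_linear(spk):
    # O(n) scan over the entire set -- the "linear scan" case.
    return [op for op, (s, _) in utxo_set.items() if s == spk]

# Maintaining an extra index makes lookup cheap, at the cost of more
# storage and extra writes every time the UTXO set changes.
by_script = defaultdict(set)
for op, (s, _) in utxo_set.items():
    by_script[s].add(op)

def gather_indexed(spk):
    return sorted(by_script[spk])

print(gather_linear("spk1"))   # [('a', 0), ('c', 0)]
print(gather_indexed("spk1"))  # [('a', 0), ('c', 0)]
```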
Even if it does take a long time to query the UTXO database, it shouldn't matter, because the whole point is to spend those outputs. This results in a smaller UTXO database, which makes subsequent queries faster.