Author Topic: Thaddeus Dryja's post-quantum recovery method: could it also fight spam?  (Read 127 times)
d5000 (OP)
Legendary

Activity: 4396
Merit: 9392


Decentralization Maximalist


September 04, 2025, 07:57:26 PM
Last edit: September 05, 2025, 05:56:44 PM by d5000
Merited by vapourminer (4), mcdouglasx (1), stwenhao (1)
 #1

In May, Tadge Dryja published a very interesting recovery mechanism for coins in case the quantum threat becomes real.

Basically, the mechanism works like this: from a certain deadline on (e.g. when QCs become strong enough to break ECDSA), people would only be able to move coins locked by non-quantum-safe scripts if, before moving them, they publish a second proof that they know the private key. The easiest way is to calculate a valid TXID of a transaction which will be sent later. This TXID must then be published in an OP_RETURN output to become a valid proof, and must be confirmed for several blocks before the coins can be moved, to prevent a quantum attacker from stealing the proof.
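To make the commit-then-reveal flow a bit more concrete, here is a minimal Python sketch. It is only my illustration of the idea, not Dryja's actual design; the maturity constant and the exact encoding of the OP_RETURN payload are assumptions.

Code:
import hashlib

COMMIT_MATURITY = 100  # assumed number of confirmations before a reveal is accepted

def txid(raw_tx: bytes) -> str:
    """TXID as displayed: double SHA-256 of the serialized transaction, byte-reversed hex."""
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()[::-1].hex()

def make_commitment_script(raw_spending_tx: bytes) -> bytes:
    # Step 1 (commit): build the future spending transaction offline and publish
    # only its TXID in an OP_RETURN output; no public key is revealed yet.
    return b"\x6a\x20" + bytes.fromhex(txid(raw_spending_tx))  # OP_RETURN <32 bytes>

def reveal_is_valid(raw_spending_tx: bytes, commitments: dict, tip_height: int) -> bool:
    # Step 2 (reveal): the pre-built transaction is only accepted if its TXID
    # matches a commitment that has been buried for COMMIT_MATURITY blocks.
    commit_height = commitments.get(txid(raw_spending_tx))
    return commit_height is not None and tip_height - commit_height >= COMMIT_MATURITY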

I've already mentioned it here.

But now, thinking about the spam problem which has been discussed since roughly 2023 and led to the lifting of the OP_RETURN limits, @stwenhao mentioned a UTXO "expiration date" proposed in May 2025 on DelvingBitcoin (Edit: see this link; I had originally written 2020) as a possible idea to avoid polluting the UTXO set with NFTs/tokens which use "fake public keys".

What if this idea of a UTXO "expiration date" is combined with Dryja's method?

A possible scheme could be the following one (a rough code sketch follows the list):

1) UTXOs are stored in the UTXO set for a full halving period.
2) After that period expires, the UTXOs are dropped. Transactions spending these UTXOs will fail validation.
3) All old UTXOs which suffered that fate can, however, be "reactivated" with Tadge Dryja's method: publishing a proof, like a valid TXID created with the private key associated with the address (or script), in an OP_RETURN output of another transaction (of any kind). The Bitcoin client detects this OP_RETURN and adds the UTXO back to the UTXO set for another full halving period.
4) Once quantum computing becomes a problem and a post-quantum cryptosystem has been deployed, the UTXO expiration period could be shortened, and there would be a quite smooth transition from the "spam prevention" mechanism to the "quantum recovery" mechanism.
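A very rough sketch of how a node might handle steps 1-3; all names, the exact halving constant, and the proof check are mine and purely illustrative:

Code:
HALVING_PERIOD = 210_000  # blocks; "a full halving period" from step 1

utxo_set = {}       # outpoint -> (serialized_utxo: bytes, height_added: int)
expired_utxos = {}  # dropped entries; a real node could also re-derive them from block data

def proof_is_valid(proof: bytes, serialized_utxo: bytes) -> bool:
    # Placeholder: would check the OP_RETURN proof (e.g. a valid TXID) that only
    # the holder of the private key could have produced.
    raise NotImplementedError

def on_new_block(height: int) -> None:
    # Step 2: UTXOs unspent for a full halving period are dropped from the set;
    # transactions spending them now fail validation.
    for outpoint, (utxo, since) in list(utxo_set.items()):
        if height - since >= HALVING_PERIOD:
            expired_utxos[outpoint] = utxo
            del utxo_set[outpoint]

def on_reactivation_proof(outpoint, proof: bytes, height: int) -> None:
    # Step 3: an OP_RETURN proof re-adds the expired UTXO for another halving period.
    utxo = expired_utxos.get(outpoint)
    if utxo is not None and proof_is_valid(proof, utxo):
        utxo_set[outpoint] = (utxo, height)
        del expired_utxos[outpoint]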

Apart from development cost, I see only one disadvantage: hodlers holding coins for more than 4 years would have to pay a little bit more in transaction fees for the OP_RETURN output to be able to move the coins, and they would not be able to move them "spontaneously". The coins could still always be recovered, though, so the method would not be confiscatory.

But the big advantage is the following: Stampchain-style "fake public keys" used only to store data, and other UTXOs which will never be moved (e.g. dust), would expire definitively from the UTXO set. One could even create a pruning mechanism for these UTXOs, similar to OP_RETURN pruning.

This would definitively eradicate any incentive to use fake public keys / fake addresses to store arbitrary data on-chain, because the data would not be permanent and thus NFTs would no longer be possible. It would also protect old UTXOs like "Satoshi's coins" from quantum computing attacks.

Thoughts?

I don't feel knowledgeable enough to propose this on the mailing list and may suffer from the Dunning-Kruger effect a bit Wink so for sure the proposal has some flaw, but I thought it could be a way to move forward with better chances to succeed than other proposals I've read about. Was such a thing perhaps already discussed?

stwenhao
Hero Member

Activity: 505
Merit: 1029


September 05, 2025, 04:04:20 AM
Last edit: September 05, 2025, 04:25:14 AM by stwenhao
Merited by vapourminer (4), d5000 (4)
 #2

Quote
proposed in 2020 in DelvingBitcoin
You read "May 20" as 2020, but it is "May 20, 2025", if you hover your mouse over it. This idea is quite recent.

Quote
UTXOs are stored in the UTXO set for a full halving period.
Why should there be any time limit at all? I think instead of UTXO expiration, it should be something like "UTXO hashing and pruning". Which means that if you have a tree of all existing UTXOs, you can locate each of them in a merkle tree of transactions. Which also means that for every existing UTXO, you can provide the block header hash in which it was created, the merkle root, and the whole path from that merkle root to a given transaction. And then, inside a transaction, there are SHA-256 chunks, with an Initialization Vector, and partial hashes, which you can get from running SHA-256 on top of a given 512-bit data chunk.

Which means that there is no need for UTXOs to expire. If a given user can provide a proof that a given coin is spendable, then that user could potentially reveal such data even after decades, because why not. The only limitation would then be the cost of spending a given coin. And then, nodes would only need minimal information about the UTXO tree, enough to remove old entries and add new ones, but they wouldn't need to store everything, and that duty could be moved from nodes to users, where each user takes care of their own coins, just like they keep their own private keys.
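For illustration, here is a minimal Python sketch of the kind of merkle inclusion proof described above (Bitcoin-style double SHA-256; the names and the proof format are my own simplification):

Code:
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_path(txid: bytes, merkle_root: bytes, path: list) -> bool:
    """Check that `txid` is committed to by `merkle_root`.

    `path` is a list of (sibling_hash, sibling_is_right) pairs from leaf to root,
    i.e. the data a user would keep to prove their coin exists in a given block.
    """
    h = txid
    for sibling, sibling_is_right in path:
        h = dsha256(h + sibling) if sibling_is_right else dsha256(sibling + h)
    return h == merkle_root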

Quote
hodlers holding coins for more than 4 years would have to pay a little bit more transaction fees for the OP_RETURN output to be able to move the coins
All proofs should live in a separate space, unrelated to the existing transaction structure. This property is needed because you may have some timelocked transaction, or some multisig, and you don't want to lose it if another party is no longer there. Which means it should be something like the witness is for legacy nodes: additional transaction data, which is ignored by the current version, but required to be there by new nodes. I usually call it "commitment space".

Quote
One could even create a pruning mechanism for these UTXOs, similar to OP_RETURN pruning.
Note that a pruned node can be "unpruned" by downloading data from peers. Which means that if the UTXO set is hashed, then it should be possible to "unprune" things by importing proofs that the coins are there (even without having any signatures which would allow them to be moved somewhere else).

Quote
I don't feel knowledgeable enough to propose this on the mailing list and may suffer from the Dunning-Kruger effect a bit
I guess some content like that is likely to be accepted on the mailing list. I think you are close to reaching the level where your posts would be published there. You can always try; in the worst case they will tell you to go back to the forum, but I don't think that would happen.

But if you want to write something there, then note one thing: starting a new topic on the mailing list is discouraged. If you reply to someone else's post, your chances of getting through moderation are higher. Which means that while on bitcointalk it is obvious to start a new topic when you have some new idea, on the mailing list it would be better to place your reply under Tadge Dryja's old topic, or under anything else that was accepted in the past.

Why? Because each topic on the list is tracked separately. If you quote someone and write a reply, then it is usually on-topic, and it is accepted. But if you start a completely new topic, then you need to justify that this new thing is worth discussing. And if your initial post doesn't do so, then it may be rejected.

Quote
Was such a thing perhaps already discussed?
Well, as you can see, there is also Delving Bitcoin. It is more similar to a forum, because there your posts are accepted first and may be removed later. So you can start on Delving if you are worried about starting with the mailing list. Your idea definitely meets the "Delving level" in my eyes, but it may not meet the "mailing list level" yet; especially if you don't quote anything from the past and start a new topic, it may be rejected.

Edit: Another reason not to set any timestamps, like "4 years", is that the current pruning level is arbitrarily chosen by node operators. Someone can run with "prune=550", someone else with "prune=210000", and at the end of the day it is not related to "how much time passed", but rather to "how many bytes it took". Which means it seems quite reasonable to connect UTXO pruning with the existing block pruning. If you can only see the last 288 blocks, then any UTXOs below that level are unreachable for your node. So you could have "prune=550" and "pruneutxo=210000" in your config, and decide how much space you are going to allocate for the UTXO set before you start hashing that content and keeping only the hash of some database entries, instead of keeping everything.
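To illustrate, such a config could look roughly like this ("prune" is a real Bitcoin Core option; "pruneutxo" does not exist anywhere, it is only the hypothetical knob described above):

Code:
# bitcoin.conf (sketch)
prune=550         # existing option: keep only ~550 MiB of recent block data
pruneutxo=210000  # hypothetical option: keep full UTXO entries up to ~210000 MiB;
                  # beyond that, older entries are hashed and their data dropped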

d5000 (OP)
Legendary

Activity: 4396
Merit: 9392


Decentralization Maximalist


September 05, 2025, 08:01:55 PM
Last edit: September 07, 2025, 04:30:41 PM by d5000
Merited by stwenhao (1)
 #3

Quote
Why should there be any time limit at all? I think instead of UTXO expiration, it should be something like "UTXO hashing and pruning".
That would address the UTXO set scalability problem and would make UTXOs less expensive for the nodes to maintain (if done well), but it would not have any other beneficial side effects, like reducing the incentives for NFTs. It would also not solve the "quantum recovery" problem.

I've re-read the discussion from May 2025 and indeed it went in that direction (if I interpret it correctly, the idea of @ajtowns was that the most recent UTXOs would stay in a traditional UTXO set, while the older ones / those of lower value would be "located" in a Utreexo "accumulator").

The idea here was a bit different, albeit of course inspired by the DelvingBitcoin thread. You would have a clear expiration date, and after it you would only be able to re-construct the UTXO by providing an additional proof. If this proof isn't even possible to construct, as is the case with "fake public keys", then the UTXO can be forgotten by the nodes. So the idea was to lower the incentives to store NFTs on-chain as a "side effect", and in addition to provide a post-quantum recovery mechanism at the same time.

Anyway, I think I found the flaw in the idea of using this mechanism to fight spam: an NFT or "other type of data/spam" creator still has incentives to create his NFT if the data stays in the blockchain files.

This could only be prevented if the software didn't only allow you to prune (like on a regular pruned node) but removed the original data completely, like OP_RETURN in theory allows, with only some additional kind of proof being stored if the UTXO is later recovered with an additional proof. This would thus need a new way of storing the blockchain, or at least these UTXOs.

If this is possible with some ZeroSync-style magic, then I still think this combination of "UTXO expiration" and "UTXO recovery" could be an interesting emergency mechanism. Not something that should be implemented in any case, but only if spam and quantum computing problems become hard to handle in other ways.

Edit: A relatively simple way to achieve what I wanted -- allowing the UTXOs to be pruned from the blockchain data as well once they expire (a small code sketch follows the list):

1) allow nodes to store a UTXO hash (instead of the original UTXO) in the blockchain data as well, once the expiration time has passed; this hash would then be part of the transaction's merkle tree.
2) the person owning the key to spend the UTXO must thus, in addition to the proof that he owns the private key, provide the complete UTXO from another source: either from his own storage/backup, or from "archival nodes" which store all UTXOs regardless of their expiration and could charge a fee for that.
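A minimal sketch of the spend-time check in step 2; the names are mine and the key-proof verification is only a placeholder:

Code:
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# After expiration, the node keeps only the hash of each dropped UTXO
# (committed in the transaction's merkle tree, per step 1).
stored_utxo_hashes = {}  # outpoint -> sha256(serialized UTXO)

def verify_key_proof(key_proof: bytes, full_utxo: bytes) -> bool:
    # Placeholder for the Dryja-style proof that the spender knows the private key.
    raise NotImplementedError

def accept_spend_of_expired_utxo(outpoint, full_utxo: bytes, key_proof: bytes) -> bool:
    # Step 2: the spender (or an archival node) supplies the full UTXO data,
    # which is checked against the hash the pruned node kept...
    if sha256(full_utxo) != stored_utxo_hashes.get(outpoint):
        return False
    # ...plus the proof of key ownership required by the original proposal.
    return verify_key_proof(key_proof, full_utxo)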

Effectively this would ensure that spam in "fake public keys" can be forgotten forever, although it would be a bit more controversial than the original proposal because it de facto requires long-term hodlers to back up their UTXOs too (if they don't want to move them around).

Edit 2: To frame it in other words, this would create a three-tier system of nodes:

1) archival nodes, where everything is stored;
2) regular pruned nodes, where only the last XXXX blocks are stored (like pruned nodes now),
3) nodes storing everything except OP_RETURNs and expired UTXOs, which could be the standard configuration.

NotATether
Legendary

Activity: 2086
Merit: 8931


Search? Try talksearch.io


September 07, 2025, 01:04:00 PM
 #4

Quote
Basically, the mechanism works like this: from a certain deadline on (e.g. when QCs become strong enough to break ECDSA), people would only be able to move coins locked by non-quantum-safe scripts if, before moving them, they publish a second proof that they know the private key. The easiest way is to calculate a valid TXID of a transaction which will be sent later. This TXID must then be published in an OP_RETURN output to become a valid proof, and must be confirmed for several blocks before the coins can be moved, to prevent a quantum attacker from stealing the proof.

I don't understand why a second proof is necessary for normal addresses. If you can already figure out the private key from a public key using kangaroos, then how will this protect P2WPKH-P2SH addresses? They are unlocked using a private key already. Similarly, the other two public-key-hash address types already use OP_EQUALVERIFY. Is the author suggesting that this is only meant for addresses with custom scripts?

If so, then you have to consider that there are quite few of these addresses relative to the P2[W]PKH addresses. So how will this be a net benefit?

d5000 (OP)
Legendary

Activity: 4396
Merit: 9392


Decentralization Maximalist


September 07, 2025, 04:44:57 PM
 #5

Quote
I don't understand why a second proof is necessary for normal addresses. If you can already figure out the private key from a public key using kangaroos, then how will this protect P2WPKH-P2SH addresses? They are unlocked using a private key already.
I hope I understand your question correctly. Smiley Tadge Dryja's method is meant for the case where quantum computers become so fast that it becomes dangerous to sign a transaction and broadcast it to the mempool, because an attacker with a fast QC could simply double-spend it before it is confirmed, by computing the private key from the public key and then replacing the transaction.

If you instead have to submit and confirm the TXID (or another proof that you know the private key) earlier, and only a transaction with that proof "attached" is accepted by consensus, then this kind of attack becomes impossible. In the post-QC scenario this of course means that a post-quantum cryptosystem must already be available.

Before that "Quantum armageddon day", the simple resource to not re-use addresses is enough to prevent quantum theft.

PS: I don't know what kangaroos have to do with this -- aren't kangaroo algorithms a method to solve puzzle transactions (with low security) using traditional computing means?
