Author Topic: UTXO set size does not increase storage costs, you just suck at computer science  (Read 3163 times)
killerstorm (OP)
Legendary
Activity: 1022
Merit: 1033
March 18, 2013, 05:15:31 PM
#21

Quote
Which branch of computer science postulates that? Transactions arrive nearly randomly during the inter-block period, so the mean time available to verify a transaction is about 5 minutes.

What is going on here with this "in-RAM database requirement"?

DoS attack: somebody sends you lots of fake transactions with non-existent inputs. To dismiss such a transaction you need to confirm that the input is indeed non-existent, and that requires one or more read operations.

If you store the UTXO set on hard disk drives, it would be pretty much trivial to DoS-attack you, because one HDD can do only something like 100 random reads per second.
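A back-of-envelope sketch of that asymmetry, as a minimal Python model; the 100 reads/sec matches the HDD figure above, while the attacker's send rate is an illustrative assumption:

Code:
# Back-of-envelope model of the fake-input flood described above.
HDD_RANDOM_READS_PER_SEC = 100   # spinning disk, random I/O (figure from above)
READS_PER_FAKE_TX = 1            # at least one failed lookup per bogus input

def max_rejected_per_sec(iops=HDD_RANDOM_READS_PER_SEC,
                         reads_per_tx=READS_PER_FAKE_TX):
    """How many bogus transactions a disk-backed UTXO set can reject per second."""
    return iops / reads_per_tx

attack_rate = 1_000  # fake tx/sec the attacker generates (assumption)
capacity = max_rejected_per_sec()
print(f"node rejects ~{capacity:.0f} fake tx/s; attacker sends {attack_rate}/s")
print(f"unprocessed backlog grows by ~{attack_rate - capacity:.0f} tx/s")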

This is why people think that there should be a limit on the UTXO set, for example via the introduction of a minimum viable transaction output value. (Say, if a TXO's value is 1 satoshi it might be seen as spam.)

Please understand me correctly: this is just what I saw in discussions here, and it appears to be a prevalent opinion.

Of course, it is possible to design a system in such a way that UTXO set size won't be a problem and won't affect costs; that is what I'm talking about here...

Peter Todd
Legendary
Activity: 1120
Merit: 1164
March 18, 2013, 05:33:42 PM
#22

Quote
Or maybe there is some sort of game-theoretic advantage to broadcasting the transactions as late as possible? I'm not aware of any, but I haven't put much thought into the game-theoretic aspects of this concept of hiding the transactions from the other miners. To me it seems counterproductive globally.

There is: while your competitors are downloading your new block they are wasting their time; they aren't working on extending the best chain. Provided your block is still downloaded and verified by >50% of the hashing power before the next block is downloaded and verified, your block will be extended by the majority of the hashing power, and thus you benefit from the reduced competition from the miners who couldn't download and validate your block fast enough. This also means miners can offer a discount on fees, provided you promise not to submit your transactions to other miners. (Trivially verified if you keep track of orphans.)
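A rough way to see the size of this effect: treat block discovery as a Poisson process and ask how likely it is that a competing block appears while yours is still being downloaded and verified. A minimal Python sketch; the hash-power share and the delays are illustrative assumptions:

Code:
import math

BLOCK_INTERVAL = 600.0  # seconds, Bitcoin's target block spacing

def p_race_during_propagation(delay_s, competitor_hash_share):
    # Block finding is approximately Poisson, so the chance that competitors
    # holding `competitor_hash_share` of the hash power find a block within
    # `delay_s` seconds is 1 - exp(-share * delay / interval).
    return 1.0 - math.exp(-competitor_hash_share * delay_s / BLOCK_INTERVAL)

# Assumed: competitors hold 70% of the hash power.
for delay in (5, 30, 120):
    p = p_race_during_propagation(delay, 0.7)
    print(f"download+verify delay {delay:3d}s -> P(race) = {p:.3f}")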

2112
Legendary
Activity: 2128
Merit: 1073
March 18, 2013, 06:56:38 PM
#23

Quote from: Peter Todd
There is: while your competitors are downloading your new block they are wasting their time; they aren't working on extending the best chain. Provided your block is still downloaded and verified by >50% of the hashing power before the next block is downloaded and verified, your block will be extended by the majority of the hashing power, and thus you benefit from the reduced competition from the miners who couldn't download and validate your block fast enough. This also means miners can offer a discount on fees, provided you promise not to submit your transactions to other miners. (Trivially verified if you keep track of orphans.)
I still don't get it.

"My block" consists of (1) header (2) list of hashes of positively verified transactions (3) coinbase transaction. "My block" is the shortest, so it actually has a propagation advantage over the "legacy blocks". "My nodes" don't hide the transactions, they undertake to distribute them as widely as possible before the "my block" gets broadcast. All the nodes operating through "my protocol" have plenty of time to verify the early broadcast transactions and just need to store in the RAM the hashes of positively verified transactions.

The "legacy block" format allowed hiding the blocks from the p2p network by not broadcasting them and disclosing them late at the end of the block. Hence the "block verification" overhead came from and allowed to play the deception games.

The way I see it, "my block" is beneficial over the "legacy block" both in the information-theoretic sense (it nearly halves the network load) and in the game-theoretic sense.

BTW. "my block" isn't really mine. This optimization was openly discussed after the publishing of Microsoft's "red balloon" paper.

The only game-theoretic advantage that I see for the "legacy block" is that, by being "legacy", the code doesn't need to change, and the people who memorized the source code of Satoshi's client have an advantage over developers new to Bitcoin. If a hard fork is allowed, that advantage is nullified. What else am I missing?

2112
Legendary
Activity: 2128
Merit: 1073
March 20, 2013, 09:29:19 PM
#24

Quote from: killerstorm
Quote
What is going on here with this "in-RAM database requirement"?

DoS attack: somebody sends you lots of fake transactions with non-existent inputs. To dismiss such a transaction you need to confirm that the input is indeed non-existent, and that requires one or more read operations.

If you store the UTXO set on hard disk drives, it would be pretty much trivial to DoS-attack you, because one HDD can do only something like 100 random reads per second.

This is why people think that there should be a limit on the UTXO set, for example via the introduction of a minimum viable transaction output value. (Say, if a TXO's value is 1 satoshi it might be seen as spam.)

Please understand me correctly: this is just what I saw in discussions here, and it appears to be a prevalent opinion.

Of course, it is possible to design a system in such a way that UTXO set size won't be a problem and won't affect costs; that is what I'm talking about here...
Thanks for the explanation. I think such a DoS would/should trigger the "misbehaving peer" defense that is already implemented in Satoshi's client.

I still see that the majority of developers consider block verification as a sort of race that happens after each block gets broadcast. I consider that an implementation artefact of the proof-of-concept code. From both the information-theoretic and the game-theoretic point of view, the block should only refer to transactions that were already broadcast and were verified at a leisurely pace during the inter-block period.
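A minimal sketch of that flow in Python; verify_fully() is a hypothetical stand-in for real script and UTXO validation:

Code:
# Transactions are fully validated as they trickle in during the inter-block
# period, so accepting a hash-list block reduces to in-RAM membership checks.
verified_txids = set()

def verify_fully(raw_tx):
    """Stand-in for full validation (signatures, UTXO lookups, etc.)."""
    return True  # always "valid" in this sketch

def on_transaction(txid, raw_tx):
    # Runs between blocks; with minutes of slack per transaction,
    # even slow storage keeps up with validation.
    if verify_fully(raw_tx):
        verified_txids.add(txid)

def on_block(txids):
    # Block acceptance: every referenced txid must already be verified.
    return all(t in verified_txids for t in txids)

on_transaction("ab" * 32, b"raw-tx-bytes")
print(on_block(["ab" * 32]))  # True: the referenced tx was pre-verified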

I'm not really familiar with game-theoretic methods of system analysis. I'm way more comfortable with the hierarchical control systems methodology, where the nodes intend to cooperate honestly but have limited information about the state of the whole system or about various resource limitations in internal processing.

I'll be watching the evolution of this problem with great interest. Maybe someone has a game-theoretic attack hidden up their sleeve. Or it may turn out that this is just a problem in industrial and organizational psychology, where hard-scientific reasoning has to take a back seat.
