Author Topic: [Whitepaper] The Decrits Consensus Algorithm  (Read 2364 times)
phillipsjk
Legendary
Activity: 1008
Merit: 1001

November 15, 2014, 05:48:47 PM
#21

Quote
Records are every 10 seconds, so that would cut latency to 1/60th, but the actual data that needs to be sent is only a few bytes regardless of the number of transactions (among well-connected peers). And the memory cache is organized in a way that makes receiving the data slightly more involved, but once that is done it is very easy to convert it into a new packet for each connected peer. It's pretty moot though, because in testing 100k tx take milliseconds to process. 5-second confirmations are very much a possibility, although typical confirmations will be more in the range of 10-12 seconds.

Making the records more frequent does not reduce the latency at all; that latency is the product of the laws of physics and processing delays.

My (admittedly slow) node spends about 700ms generating new Bitcoin blocks, plus another 1.7 seconds processing new P2Pool blocks (which arrive every 30 seconds). This is with an SSD, BTW.
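
For a rough sense of scale (the 0.2s propagation figure below is just a guess on my part; the rest are the numbers above):
Code:
# Rough budget for a 10-second record interval, using the figures above.
record_interval = 10.0   # seconds between records
block_build     = 0.7    # seconds to build a Bitcoin block (measured)
p2pool_process  = 1.7    # extra seconds per P2Pool block (estimated)
propagation     = 0.2    # assumed one-way network propagation delay

used = block_build + p2pool_process + propagation
print(f"{used:.1f}s of each {record_interval:.0f}s interval ({used / record_interval:.0%})")
# -> 2.6s of each 10s interval (26%)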

Edit: Given the aggressive block record times, how are accidental forks and orphans (children of potential forks) handled?

Ix (OP)
Full Member
Activity: 218
Merit: 128

November 15, 2014, 07:11:54 PM
#22

Quote
Making the records more frequent does not reduce the latency at all; that latency is the product of the laws of physics and processing delays.

It doesn't reduce the overall latency, but it certainly does divide it up into smaller chunks, and that is all that matters for getting confirmations. And as I said, the bandwidth requirement is significantly reduced, so the latency of transmitting the data is incurred when the original transaction is broadcast, not when the record is transmitted.
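
The general shape is something like this toy sketch (greatly simplified, not the actual implementation; the lexicographic ordering and the field names are placeholders):
Code:
import hashlib

def canonical_order(txids):
    # Placeholder ordering rule: sort txids lexicographically.
    return sorted(txids)

def record_commitment(slot, txids):
    # One hash commits to the whole transaction set for this block of time.
    h = hashlib.sha256()
    h.update(slot.to_bytes(8, "big"))
    for txid in canonical_order(txids):
        h.update(txid)
    return h.digest()

def build_record(slot, txids, sign):
    # The packet sent to peers: slot, commitment, signature.
    # Its size does not grow with the number of transactions.
    commitment = record_commitment(slot, txids)
    return {"slot": slot, "commitment": commitment, "sig": sign(commitment)}

def accept_record(record, local_txids, verify_sig):
    # A peer that has already seen the same transactions only recomputes the
    # commitment and checks one signature - no per-transaction verification.
    expected = record_commitment(record["slot"], local_txids)
    return expected == record["commitment"] and verify_sig(record["commitment"], record["sig"])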

Quote
My (admittedly slow) node spends about 700ms generating new Bitcoin blocks, plus another 1.7 seconds processing new P2Pool blocks (which arrive every 30 seconds). This is with an SSD, BTW.

I assume most of that is verifying transactions? If the two peers have seen all the transactions in a record, there is no need to verify anything but the record's signature. I'm not going to delve too deeply into how my memory cache works, but the transaction order is canonical, and once a peer has received a record packet from another peer and converted it into something the local cache can use, it is reading mostly contiguous chunks of memory, which is extremely fast. And it doesn't need to search - it gets a direct pointer to the location of each transaction.
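
In spirit it works something like this (a simplified illustration, not my actual cache code; the layout and names are made up):
Code:
class TxCache:
    # Append-only store: raw transactions packed into one contiguous buffer,
    # with an index from txid to (offset, length).

    def __init__(self):
        self.buffer = bytearray()
        self.index = {}                       # txid -> (offset, length)

    def add(self, txid, raw_tx):
        self.index[txid] = (len(self.buffer), len(raw_tx))
        self.buffer.extend(raw_tx)

    def get(self, txid):
        # Direct lookup: no searching, no disk access.
        offset, length = self.index[txid]
        return bytes(self.buffer[offset:offset + length])

    def assemble(self, txids):
        # Pull a record's transactions in canonical order; recently seen
        # transactions sit in largely contiguous memory, so this is cheap.
        return [self.get(txid) for txid in sorted(txids)]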

Quote
Edit: Given the aggressive block record times, how are accidental forks and orphans (children of potential forks) handled?

There is no competition - one voice controls one block of time (unless there are so many voices that multiple control each block of time, in which case they use a modulo function to determine which transactions each includes). Duplicated transactions in subsequent records that did not acknowledge the earlier one are just ignored, and they do not impose any real penalty on data transmission or verification time. Double-spent transactions are resolved by ceding to the record that controlled the earlier block of time - most of the time, anyway; this is covered in a little bit of detail in the paper. Unfortunately, for the sake of SPV nodes this might require a "reversed transactions" log, which still won't require many resources. But maybe not - if they are worried enough to watch for their balance to update, they will notice that it did not update anyway.
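
As a toy illustration of those two rules (the exact modulo detail and the field names here are made up for the example):
Code:
def assigned_to_me(txid, my_index, num_voices):
    # When several voices share one block of time, split the pending
    # transactions among them by a modulo over the txid (assumed detail).
    return int.from_bytes(txid, "big") % num_voices == my_index

def resolve_double_spend(conflicting_records):
    # Cede to the record that controlled the earlier block of time.
    return min(conflicting_records, key=lambda rec: rec["slot"])

# Example: spends of the same output appear in slots 9 and 7;
# the slot-7 record wins and the later spend is ignored/reversed.
winner = resolve_double_spend([{"slot": 9, "tx": "b"}, {"slot": 7, "tx": "a"}])
assert winner["tx"] == "a"
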
phillipsjk
Legendary
Activity: 1008
Merit: 1001

November 15, 2014, 09:48:29 PM
#23

Quote
My (admittedly slow) node spends about 700ms generating new Bitcoin blocks, plus another 1.7 seconds processing new P2Pool blocks (which arrive every 30 seconds). This is with an SSD, BTW.

I assume most of that is verifying transactions? If the two peers have seen all the transactions in a record, there is no need to verify anything but the record's signature. I'm not going to delve too deeply into how my memory cache works, but the transaction order is canonical, and once a peer has received a record packet from another peer and converted it into something the local cache can use, it is reading mostly contiguous chunks of memory, which is extremely fast. And it doesn't need to search - it gets a direct pointer to the location of each transaction.

The 700ms is the time required to build a (500kB) block from already-verified transactions. The 1.7s was an estimate based on my expected dead-on-arrival (DOA) rate vs. my actual DOA rate. It may include disk/network/hasher latency, but I don't really know.

I am assuming the large spikes on the pypy side of the graph are caused by disk cache misses. The disk is an SSD, so a lag of an extra second would probably have to correspond to over 500 seeks (assuming 2ms each). P2Pool run under pypy appears to use about 3x the memory of P2Pool run under python (1200 vs. 400 MB). My memory usage graph is blank for some reason.
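
Spelling out that arithmetic (the 2ms per seek is an assumption):
Code:
extra_lag = 1.0      # one extra second of lag in the spikes
seek_time = 0.002    # assumed 2ms per seek
print(extra_lag / seek_time)     # 500.0 seeks needed to explain one second

pypy_mb, cpython_mb = 1200, 400  # observed memory use, MB
print(pypy_mb / cpython_mb)      # 3.0x memory under pypy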

Ix (OP)
Full Member
Activity: 218
Merit: 128

November 16, 2014, 03:24:16 AM
#24

I don't know enough about bitcoin's memory cache, or about the python or pypy implementation's, to comment. It seems silly to be hitting the disk for any reason for transactions that are not very old. Even with bitcoin's bloated transactions, 300 bytes/sec is pitifully small, and anyone should be able to keep several hours' worth in memory without an issue.
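
To put a number on "several hours" (using the ~300 bytes/sec rate above):
Code:
rate_bytes_per_sec = 300       # rough transaction data rate quoted above
hours = 6                      # "several hours"
mb = rate_bytes_per_sec * 3600 * hours / 1e6
print(f"{mb:.1f} MB")          # ~6.5 MB - trivially held in memory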