Bitcoin Forum
Author Topic: New paper: Accelerating Bitcoin's Transaction Processing  (Read 36344 times)
TierNolan
Legendary
Activity: 1232
Merit: 1104
January 06, 2014, 02:04:42 PM
 #161

Quote
Our proposal includes keeping the hashes of off-chain blocks in each block (new ones not included in any of the ancestors of the block). The chain, when read sequentially, therefore contains all off-chain block hashes, and thus any node can know the entire tree (if a node is missing some block, it simply requests it or its header).

Interesting.

That eliminates the need for each sub-tree to be evaluated.  The weight for each block can be directly calculated purely from within the block.

You would need to embed a sub-header into the coinbase (or OP_RETURN tx).

Each node would have to remember all headers from orphaned blocks that are referenced.  A block would be invalid if it re-references a header.
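
A minimal sketch of that validation rule (the class and method names are mine, purely for illustration; nothing here is taken from the paper itself):

Code:
class OffChainRefs:
    """Tracks off-chain (orphaned) block headers referenced by chain
    blocks; each block embeds the hashes of off-chain blocks not already
    referenced by any of its ancestors."""

    def __init__(self):
        self.seen = set()  # header hashes already referenced

    def accept(self, new_refs):
        """Reject a block that re-references a header; otherwise record
        its new references."""
        if any(h in self.seen for h in new_refs):
            return False
        self.seen.update(new_refs)
        return True

    @staticmethod
    def weight(new_refs):
        """Weight contributed by a block, computable purely from within
        it: the block itself plus its newly referenced off-chain blocks."""
        return 1 + len(new_refs)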

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
asdf
Hero Member
Activity: 527
Merit: 500
January 07, 2014, 10:29:38 AM
Last edit: January 07, 2014, 11:35:15 AM by asdf
 #162

Quote from: asdf
Would you be including the prefixes of the merkle tree nodes too? These would serve as a checksum, so the receiver would at least know which transaction has the colliding prefix. In fact, with the tree nodes included, you could be much less conservative with the prefix length, sending 2 or even 1 byte(s).

Collisions could also be handled by having a re-send system.

Quote from: TierNolan
Hashes would be grouped into groups of 32, along with the merkle root of those 32 hashes as a checksum.

The sender could estimate how much of a prefix is required.  If 4 bytes were required, then a group of 32 hashes would require (32 * 4 + 32) = 160 bytes, i.e. 5 bytes per hash.

The receiver could replace the 32 prefixes with its best guesses from the memory pool and then check the CRC/merkle root.  If it doesn't match, then it could ask for those hashes in expanded form.

Assume a block with 4000 transactions.

Peer 1: Sends block with 5 bytes per hash.

This works out at 4000 * 5 bytes = 20kB

Peer 2: Checks block and finds 2 groups with a mismatch

Peer 2: sends tx request

byte[32]: Block hash
varInt: group count
int: index 1
int: index 2

This works out as 32 + 1 + 4 + 4 = 41 bytes

Peer 1: Sends full transaction hashes for those groups

This works out at (2 * 32) * 32 = 2048 bytes

This is much less than sending all the hashes in full, 4000 * 32 = 128kB.  However, it comes at the cost of an additional round-trip delay.

If each compressed hash were prefixed by a length, the sender could estimate which hashes require longer prefixes.

The birthday paradox may cause issues here.
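
A minimal sketch of the grouped-prefix scheme quoted above (the function names, the double-SHA256 checksum choice, and 32-byte txids are assumptions for illustration; TierNolan leaves the CRC/merkle-root choice open):

Code:
import hashlib

GROUP_SIZE = 32   # hashes per group, as in the example above
PREFIX_LEN = 4    # bytes sent per transaction hash

def sha256d(data):
    # Double-SHA256, used here as the per-group checksum.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def compress_block(txids):
    # Sender: per-hash prefixes plus one 32-byte checksum per group.
    groups = [txids[i:i + GROUP_SIZE] for i in range(0, len(txids), GROUP_SIZE)]
    return [([h[:PREFIX_LEN] for h in g], sha256d(b"".join(g))) for g in groups]

def resolve_block(compressed, mempool):
    # Receiver: replace each prefix with its best guess from the memory
    # pool, verify the group checksum, and list groups to re-request.
    # (Two mempool txids sharing a prefix is exactly the collision case;
    # the checksum catches it.)
    by_prefix = {txid[:PREFIX_LEN]: txid for txid in mempool}
    resolved, mismatched = [], []
    for index, (prefixes, checksum) in enumerate(compressed):
        guesses = [by_prefix.get(p) for p in prefixes]
        if None in guesses or sha256d(b"".join(guesses)) != checksum:
            mismatched.append(index)   # ask for these hashes in full
        resolved.append(guesses)
    return resolved, mismatched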

This "groups" idea is essentially the same thing as the merkle tree, if you make the group size 2. With smaller group size, you have to resend less transaction hashes in the case of collision. Also, the birthday paradox is less of an issue; you could use smaller prefixes.

Also, consider that we can compress the merkle nodes. Collisions in the nodes can be checked with the parent nodes all the way up the tree, but it becomes exponentially less likely that you'll need to resend a branch as you move up the tree. The advantage of this hierarchical checksum is that you can make the prefixes, of the nodes and the transaction hashes, much smaller.

I think that re-requesting any transactions is too costly, so we should work out the prefix size so that there are "probably" no collisions, but only just. Someone needs to do the math.
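
A sketch of that hierarchical checksum under assumed parameters (2-byte node prefixes; the function names are mine): the receiver rebuilds the tree from its guessed transactions and compares truncated internal nodes, with the lowest level pinpointing which leaves to re-request.

Code:
import hashlib

NODE_PREFIX = 2   # bytes per internal-node prefix, per the suggestion above

def sha256d(data):
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def tree_levels(leaves):
    # All levels of a bitcoin-style merkle tree, leaves first; an odd
    # node count duplicates the last node.
    levels = [leaves]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        levels.append([sha256d(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def node_prefixes(leaves):
    # Sender: truncated internal nodes, a layered checksum over the tree.
    return [[h[:NODE_PREFIX] for h in lvl] for lvl in tree_levels(leaves)[1:]]

def bad_leaf_indexes(guessed_leaves, sent_prefixes):
    # Receiver: a mismatch at the lowest internal level pinpoints a
    # two-leaf subtree to re-request (assumes two or more leaves).
    ours = node_prefixes(guessed_leaves)
    bad_pairs = [i for i, (a, b) in enumerate(zip(ours[0], sent_prefixes[0])) if a != b]
    return [leaf for i in bad_pairs for leaf in (2 * i, 2 * i + 1)]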
TierNolan
Legendary
Activity: 1232
Merit: 1104
January 07, 2014, 01:35:46 PM
 #163

Quote from: asdf
I think that re-requesting any transactions is too costly, so we should work out the prefix size so that there are "probably" no collisions, but only just. Someone needs to do the math.

Right.  The prefix length depends on the size of the memory pool and the number of transactions in a block.

Assume there are 100k entries in the memory pool and 4k transactions per block.

If you have 4 bytes per transaction, then the odds of a transaction matching one of the transactions in the memory pool are 100k * (1 / pow(2, 32)) = 1 in 43,000.

If there are 4000 transactions in the block, then at least one of them will collide roughly every 43,000 / 4,000 = 10.75 blocks, which is probably acceptable.

However, there is no clear definition of the size of the memory pool.  When a peer requests a block, it could specify how many bytes per transaction it wants; the reply would use at least that many bytes per entry.
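
As a runnable check of that arithmetic (the constants are the ones assumed in this post):

Code:
def collision_stats(mempool=100_000, block_txs=4_000, prefix_bytes=4):
    # Odds that a given transaction's prefix matches some mempool entry,
    # and the expected number of blocks between collisions.
    p_tx = mempool / 2 ** (8 * prefix_bytes)    # ~1 in 43,000 for 4 bytes
    return p_tx, 1 / (p_tx * block_txs)

p_tx, blocks = collision_stats()
print(f"per-transaction collision odds: 1 in {1 / p_tx:,.0f}")  # 1 in 42,950
print(f"at least one collision every ~{blocks:.2f} blocks")     # ~10.74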

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
coinzcoinzcoinz
Hero Member
Activity: 530
Merit: 500
November 10, 2014, 04:23:16 PM
 #164

Darkcoin solved this
https://www.youtube.com/watch?v=zBjUPj-TmFE
DumbFruit
Sr. Member
Activity: 433
Merit: 267
November 10, 2014, 07:54:36 PM
Last edit: November 10, 2014, 08:23:42 PM by DumbFruit
 #165

Is there a better description of what it solved, exactly? Instant worldwide consensus is impossible. Running two nodes on the same computer and getting fast confirmation times by jacking up the block times gets a person a first-class ticket to Facepalm Town.

Update: Alright, this is what he's talking about:
https://www.darkcoin.io/about/what-is-darkcoin/instant-transaction-technology/

So Darkcoin uses the idea of "locking" a transaction via "masternodes", and the masternodes can somehow give a user some kind of confidence that the transaction isn't being double-spent.

The exact same problems that you have with consensus otherwise would exist with "locks", unless I'm missing something.

Under 4.3 I would like to see a better defense of "This would happen very quickly, in the matter of a few seconds in most cases."
Is this because of highly centralized "Masternodes"?

By their (dumb) fruits shall ye know them indeed...
instagibbs
Member
Activity: 114
Merit: 12
November 10, 2014, 10:58:20 PM
 #166


A fairly broken, over-complicated, brittle version of 2-of-2 signing.

Why not just use GA.it or something similar? Same benefits (requires some trust), with no crazy systemic risks.

(I know why but it's rhetorical. Any decentralized system that claims secure 0-conf without trust is lying.)
ZcOoe0wRpk
Newbie
Activity: 2
Merit: 0
May 30, 2023, 01:48:07 AM
 #167

Quote
I will be carefully monitoring this technology; if it proves to be successful on paper and in test runs, and Bitcoin developers fail to implement this, I'll be investing in an alt-coin which uses it.

In the end, this proposal did not receive recognition from Bitcoin's developers. There is now an alternative implementation called Kaspa, which extends the BTC consensus in this direction. Kaspa was created by avivz78 and his student Yonatan Sompolinsky. With one-second blocks it will quickly reach 32 blocks per second, as a completely decentralized PoW coin.
Flexystar
Full Member
Activity: 1092
Merit: 227
June 15, 2023, 01:48:50 PM
 #168

I am looking at the paper from a layman's point of view, since I'm not that technical. However, I have been involved in one thread about understanding why the block size has been kept limited to 1MB or 3MB or something else. Here in your paper I read that you can actually modify the block size and include a higher number of transactions. Before this paper, my understanding was that the block size was limited and could never be touched since the time Satoshi released it.

But is this happening any time soon, or does this paper just describe the scalability issue and how it could be resolved "if" it is implemented?