Bitcoin Forum
Author Topic: Reasons to keep 10 min target blocktime?  (Read 4348 times)
TierNolan
Legendary
Offline

Activity: 1232
Merit: 1083

July 23, 2013, 12:25:54 PM
 #21

Of course this would also require that the partial confirmations be broadcast along with the hash of all the transactions in the block, in order for nodes to know whether their transaction is being worked on, and for them to be able to verify the block (that its difficulty is greater than or equal to 1/600th of the block chain difficulty).

This could be split.  Miners could broadcast "template" blocks.

POW > 1/600: Just the header (every second)
POW > 1/60: Header + block hashes (every 10 seconds)

You could also make it more merkle based.

So, I send

header + all hashes

Later I just have to send

header + merkle root

If I change the block, I could send the updated hashes since the last time.

Clients that heard the first transmission could build up the new block.

Other miners could send

previous: <old hash>
new hash: <new hash>
transactions: 0, <new coinbase>

Having said that, inherently, miners need to update the coinbase transaction for the "extra nonce".

At 500 transactions, sending all the hashes would be 16kB, so it is not insignificant.

Another option would be to pay miners to include transactions.  If only 1% of transactions need to be included and there are 512 transactions, then you only need to send the path.  This gives 320 bytes per transaction and 5 transactions, so 1.6kB.  You couldn't send that every second to every node.
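The arithmetic here can be sanity-checked with a short sketch (not from the post; the function name is illustrative). With 512 transactions the Merkle tree is 9 levels deep, so a path is the transaction's own hash plus 9 sibling hashes:

```python
import math

def merkle_path_bytes(n_txs: int, hash_len: int = 32) -> int:
    # A Merkle path is the transaction hash itself plus one sibling
    # hash per tree level (tree depth = ceil(log2(n_txs))).
    depth = math.ceil(math.log2(n_txs))
    return (depth + 1) * hash_len

per_tx = merkle_path_bytes(512)   # 10 hashes * 32 bytes = 320 bytes
included = 512 // 100             # ~1% of the block's transactions = 5
total = per_tx * included         # 5 * 320 = 1600 bytes = 1.6 kB
```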

Nodes could flag themselves as "HEADER_MONITOR" nodes, and support lots of headers.

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
runeks
Legendary
Offline

Activity: 980
Merit: 1008

July 23, 2013, 01:25:20 PM
Last edit: July 23, 2013, 01:36:30 PM by runeks
 #22

Quote from: TierNolan
Of course this would also require that the partial confirmations be broadcast along with the hash of all the transactions in the block, in order for nodes to know whether their transaction is being worked on, and for them to be able to verify the block (that its difficulty is greater than or equal to 1/600th of the block chain difficulty).

This could be split.  Miners could broadcast "template" blocks.

POW > 1/600: Just the header (every second)
POW > 1/60: Header + block hashes (every 10 seconds)

You could also make it more merkle based.

So, I send

header + all hashes

Later I just have to send

header + merkle root

If I change the block, I could send the updated hashes since the last time.

Clients that heard the first transmission could build up the new block.

Other miners could send

previous: <old hash>
new hash: <new hash>
transactions: 0, <new coinbase>

Having said that, inherently, miners need to update the coinbase transaction for the "extra nonce".

At 500 transactions, sending all the hashes would be 16kB, so it is not insignificant.

Another option would be to pay miners to include transactions.  If only 1% of transactions need to be included and there are 512 transactions, then you only need to send the path.  This gives 320 bytes per transaction and 5 transactions, so 1.6kB.  You couldn't send that every second to every node.

Nodes could flag themselves as "HEADER_MONITOR" nodes, and support lots of headers.
I had thought of a "diff"-like approach to this.

We have a cache time, which basically determines how far back a miner can reference a previous block. So if the miner then changes the transactions in a block he or someone else has published, he would simply send:

Code:
<block_header>
ref_block = <block_hash_reference>
add_txs = <list of transactions to be added>
rm_txs = <list of transactions to be removed>

Nodes would then cache the last <cache time> seconds of blocks, and they would be able to reconstitute the complete block from blocks in the cache by adding the transactions <add_txs> and removing the transactions <rm_txs> from <block_hash_reference>.
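The cache-and-diff reconstitution described here can be sketched as follows (a minimal illustration; the names `TemplateDiff` and `reconstitute` are my own, not from any proposal):

```python
from dataclasses import dataclass, field

@dataclass
class TemplateDiff:
    ref_block: str          # hash of the cached template this diff refers to
    add_txs: list = field(default_factory=list)   # tx hashes to add
    rm_txs: list = field(default_factory=list)    # tx hashes to remove

def reconstitute(cache: dict, diff: TemplateDiff) -> set:
    """Rebuild a full template's tx set from a cached template plus a diff."""
    txs = set(cache[diff.ref_block])
    txs -= set(diff.rm_txs)
    txs |= set(diff.add_txs)
    return txs

cache = {"abc123": {"t1", "t2", "t3"}}   # templates seen in the last minute
diff = TemplateDiff("abc123", add_txs=["t4"], rm_txs=["t2"])
rebuilt = reconstitute(cache, diff)
```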

So if we use 60 seconds as a cache time it means we only need to send out "full blocks" (block header plus all transaction hashes needed to verify the block header) every 60 seconds.

If we assume a 1 MB block size and 300 bytes per transaction, that's around 3500 transaction hashes of 32 bytes each. But blocks are only 0.5 MB on average, because they start out with a size of 0 and end up with a size of 1 MB (if we assume the maximum block size is used, as a worst case scenario). So each full block message is 3500*32*0.5 bytes = 55 KB on average, every 60 seconds. That's 0.9 KB/s for the complete blocks. Add to that the block headers every second, which may contain insertions or deletions. It should be possible to keep the data rate at around 10 KB/s for 10 peers.
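The bandwidth figures above check out (a quick arithmetic sketch, using the post's own rounding of 1 MB / 300 B to roughly 3500 transactions):

```python
block_size = 1_000_000             # 1 MB maximum block size
tx_size = 300                      # assumed average bytes per transaction
n_txs = 3500                       # ~ block_size / tx_size, rounded as in the post
full_template = n_txs * 32 * 0.5   # 32-byte hashes, blocks half full on average
rate = full_template / 60          # one full template message per 60 seconds
# full_template = 56,000 bytes (~55 KB); rate ~ 933 B/s (~0.9 KB/s)
```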

When a new node joins the network it can ask for the latest full block, so it's able to reconstitute blocks from "diff messages".

Perhaps a Merkle Tree solution would be more efficient, since nodes really only care about their own transactions. I haven't done the calculations on this.
TierNolan
Legendary
Offline

Activity: 1232
Merit: 1083

July 23, 2013, 01:39:36 PM
 #23

Quote from: runeks
Code:
<block_header>
ref_block = <block_hash_reference>
add_txs = <list of transactions to be added>
rm_txs = <list of transactions to be removed>

Pretty much the same as I suggested.  However, it is smaller just to have

<index>, <new hash>

since any new hash will either replace an old one or extend the chain.

You could have

<index>, <000....000>

to mean delete.
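This (index, new hash) encoding can be sketched as follows (an illustrative implementation; the name `apply_updates` and the deletion-marker convention are assumptions based on the description above):

```python
DELETE = "0" * 64   # an all-zero hash marks a deletion at that index

def apply_updates(hashes: list, updates: list) -> list:
    """Apply (index, new_hash) updates: replace, extend, or delete."""
    out = list(hashes)
    deletions = []
    for index, new_hash in updates:
        if new_hash == DELETE:
            deletions.append(index)      # remember deletions, apply last
        elif index == len(out):
            out.append(new_hash)         # an index just past the end extends
        else:
            out[index] = new_hash        # otherwise replace in place
    for index in sorted(deletions, reverse=True):
        del out[index]
    return out

updated = apply_updates(["a", "b", "c"],
                        [(1, "x"), (3, "d"), (0, DELETE)])
```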

Quote
Nodes would then cache the last <cache time> seconds of blocks, and they would be able to reconstitute the complete block from blocks in the cache by adding the transactions <add_txs> and removing the transactions <rm_txs> from <block_hash_reference>.

That assumes the new miner is the same as the old one.

There needs to be a rule that miners try to keep their block similar to the template block.

For example, you could require that blocks have their transactions sorted according to their hash.

Quote
If we assume a 1 MB block size

That's 1.7kB/s

Quote
and 300 bytes per transaction, that's around 3500 transaction hashes of 32 bytes each, every 60 seconds.

That's double the cost of just downloading the chain, and people are already complaining about that.

I don't think clearing the cache every minute is a good plan.  Better to keep it for at least 1-2 blocks length.

Quote
Perhaps a Merkle Tree solution would be more efficient, since nodes really only care about their own transactions. I haven't done the calculations on this.

The problem is you need to give the full path.  I think a full template and diffs is at least as efficient.

Another way to "template" is to allow grouping of transactions directly.  Transactions are already sent anyway.

Encouraging miners to use mostly the same transactions is a good plan anyway.

A miner sees a header and notices it has a hash that it doesn't have, so it asks its peers for it.  It would also show evidence of double spending.

runeks
Legendary
Offline

Activity: 980
Merit: 1008

July 23, 2013, 02:36:07 PM
 #24

Quote from: TierNolan
Code:
<block_header>
ref_block = <block_hash_reference>
add_txs = <list of transactions to be added>
rm_txs = <list of transactions to be removed>

Pretty much the same as I suggested.  However, it is smaller just to have

<index>, <new hash>

since any new hash will either replace an old one or extend the chain.

You could have

<index>, <000....000>

to mean delete.
Right. The concept holds. How it's encoded is less important.

Quote
Nodes would then cache the last <cache time> seconds of blocks, and they would be able to reconstitute the complete block from blocks in the cache by adding the transactions <add_txs> and removing the transactions <rm_txs> from <block_hash_reference>.

Quote
That assumes the new miner is the same as the old one.

No, that's not necessary as far as I can see. You simply have a module which gets blocks above or equal to 1/600th of the difficulty submitted to it. This module keeps the last full block template, or set of block templates, in memory and figures out which of the previous templates to publish a diff against. It quickly calculates which of the templates from the last 60 seconds would give the smallest message size if a diff were produced against it, and publishes that to the network.

But as you mention, the 60 second cache time isn't really necessary if nodes can just ask for the last n block templates when they connect to the network.

Quote
If we assume a 1 MB block size

That's 1.7kB/s

Quote
and 300 bytes per transaction, that's around 3500 transaction hashes of 32 bytes each, every 60 seconds.

That's double the cost of just downloading the chain, and people are already complaining about that.
I think you are misunderstanding me.

If each block can be no larger than 1 MB, and we assume each transaction is 300 bytes, then it contains around 3500 transactions.

Publishing the hash, 32 bytes each, of 3500 transactions is 109 KB. But the block is not 1 MB right after a new block is found. It starts out being very small, so that the full block templates are small right after a new block is found, and get larger and larger until the next block is found. That's why I assume they are, on average, only 50% of 109 KB.

Quote
I don't think clearing the cache every minute is a good plan.  Better to keep it for at least 1-2 blocks length.
Yeah, that makes sense. If new nodes can just connect and request full block templates, then there's no need for a short cache time.

I was originally thinking it would be a "broadcast-only" protocol, where miners just broadcast partial confirmations and these cascade throughout the network via the other peers. This keeps traffic down, but it means that new nodes need to wait until their cache contains the relevant full block templates before they can verify blocks.

Quote
A miner sees a header and notices it has a hash that it doesn't have, so it asks its peers for it.  It would also show evidence of double spending.
Yes, this is interesting too. It means that unless miners are deliberately mining double spends, it will be easier for miners to find and resolve double spends.
TierNolan
Legendary
Offline

Activity: 1232
Merit: 1083

July 23, 2013, 02:54:24 PM
 #25

Quote from: runeks
Publishing the hash, 32 bytes each, of 3500 transactions is 109 KB. But the block is not 1 MB right after a new block is found. It starts out being very small, so that the full block templates are small right after a new block is found, and get larger and larger until the next block is found. That's why I assume they are, on average, only 50% of 109 KB.

It depends.  It might be full size, but change as paying transactions overwrite free ones.

I think trying to have all miners target the same set of transactions would be a good thing.  Obviously, they would each have their own coinbase.

OTOH, the more complex you make it, the less willing miners might be to bother.

runeks
Legendary
Offline

Activity: 980
Merit: 1008

July 23, 2013, 03:06:52 PM
 #26

Quote
Publishing the hash, 32 bytes each, of 3500 transactions is 109 KB. But the block is not 1 MB right after a new block is found. It starts out being very small, so that the full block templates are small right after a new block is found, and get larger and larger until the next block is found. That's why I assume they are, on average, only 50% of 109 KB.

It depends.  It might be full size, but change as paying transactions overwrite free ones.

I think trying to have all miners target the same set of transactions would be a good thing.  Obviously, they would each have their own coinbase.

OTOH, the more complex you make it, the less willing miners might be to bother.
Good point. If the 1 MB limit is reached, then a block might reach 1 MB shortly after a new block is found, and transactions are just replaced with others that have higher fees after that.

I think miners should be free to choose whichever transactions they wish. I don't want to change which transactions they mine; that should be up to them completely. I think it might be worthwhile to allow diffs of diffs, to reduce network traffic further. I need to implement something first to find out how much processing power and RAM this will consume. I think the real constraint is bandwidth; calculating and reassembling diffs shouldn't take much processing power, even with multiple levels of diffs.

Complexity shouldn't be a problem, as it's hidden from the miners. I imagine they just connect to my program instead of bitcoind, and it reports the difficulty as 1/600th of what it really is. When it receives a share within the partial-confirmation range, it processes it and sends it into the partial-confirmation P2P network; when it's a valid block, it passes it on to bitcoind to be published. Bear in mind that only solo miners and pools would need to publish these partial confirmations: pool miners send their shares to their pool anyway, so the pool just needs to be connected to the partial-confirmation P2P network.
TierNolan
Legendary
Offline

Activity: 1232
Merit: 1083

July 23, 2013, 03:43:05 PM
 #27

Quote from: runeks
Good point. If the 1 MB limit is reached, then a block might reach 1 MB shortly after a new block is found, and transactions are just replaced with others that have higher fees after that.

However, miners could target old transaction "groups" based on the merkle tree.  If 4 transactions were overwritten but were part of the same branch, then you could use that branch's root instead of the individual hashes.

It is basically a compression algorithm problem.

In fact, it could be implemented as exactly that on a peer to peer connection basis.

You send the entire block header + hashes and the compression algorithm compresses repeats.
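A rough sketch of the per-connection compression idea, using zlib's streaming API (an illustration of the effect, not a protocol proposal; the fake hash list stands in for real transaction hashes):

```python
import hashlib
import zlib

# Fake 500-transaction hash list (~16 kB of incompressible-looking data).
hashes = b"".join(hashlib.sha256(str(i).encode()).digest() for i in range(500))

# One compression context per peer connection: the second, identical
# transmission compresses to a fraction of the first, because the
# compressor's 32 kB window still holds the earlier bytes and it can
# emit back-references instead of literals.
comp = zlib.compressobj()
first = comp.compress(hashes) + comp.flush(zlib.Z_SYNC_FLUSH)
second = comp.compress(hashes) + comp.flush(zlib.Z_SYNC_FLUSH)
```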

piotr_n
Legendary
Offline

Activity: 2053
Merit: 1354

aka tonikt

July 23, 2013, 05:36:55 PM
 #28

The main reason why the 10 min target doesn't change is that the network does not agree on changing it.
If you could convince 50+% of miners to change the protocol, I have no doubt that they'd be able to handle a forked client which does it for them - whoever had stayed on the satoshi branch would have been doomed. Or on any other branch.
If they had enough hashing power, nobody would be able to stop it - but the same applies to you trying to change the protocol.
But miners don't seek new rules in this protocol - they like the money as it is.
At least they are sane :)
So no worries.

Check out gocoin - my original project of full bitcoin node & cold wallet written in Go.
PGP fingerprint: AB9E A551 E262 A87A 13BB  9059 1BE7 B545 CDF3 FD0E
stdset (OP)
Hero Member
Offline

Activity: 572
Merit: 506

August 03, 2013, 06:09:50 AM
Last edit: August 03, 2013, 07:29:34 AM by stdset
 #29

It seems to me that the main idea is to use a p2pool-like fast blockchain to provide faster confirmations.
Let's compare that to a hardfork:

P2pool-like chain:
Pros: no hardfork needed.
Cons: all major pools need to be convinced to use it. Bitcoin infrastructure needs to be upgraded to get the advantage of using that p2pool-like chain. At any moment significant mining power can switch back to the good old mining style, which would reduce the usefulness of this solution. Any number of p2pool miniconfirmations might turn to nothing if the next block is found by a non-participating party.

Hardfork:
Pros: all bitcoin infrastructure gets upgraded at once.
Cons: it's a hardfork. There is a theoretical risk of increasing the orphan rate.

I don't mention the cost of running an SPV node, because that cost is very low anyway.

Btw, recently I sold several btc locally to several different people. Not all of them understand how bitcoin works. A common bitcoin user just wants bitcoins to appear in his wallet. It was not very comfortable for me leaving them before they saw their btc received, even though I was trying to explain how everything works and convincing them that there is nothing to worry about.

stdset (OP)
Hero Member
Offline

Activity: 572
Merit: 506

August 03, 2013, 07:39:16 AM
 #30

Quote
If miners use software that verifies that blocks it finds are reported by the pool, and the pool publishes the headers of its shares, then no time is needed to verify the pool is working honestly.

Good solution, surprisingly simple.

TierNolan
Legendary
Offline

Activity: 1232
Merit: 1083

August 03, 2013, 09:32:23 AM
 #31

Quote
Let's say you create blocks every 10 seconds, and append them to a P2Pool-like share-chain, which must follow the same rules as the Bitcoin blockchain in the sense that no transactions in a share must conflict with a transaction in a previous share. The effect of this is that fees become irrelevant, or at least that you can only choose which transactions to include from the previous 10 seconds of transactions. If a block becomes full after 5 minutes, you would be forced to mine the share chain and not be able to remove low fee transactions even if new high fee transactions come in. You only have a window of 10 seconds within which you can choose which transactions to include. Once that share is calculated you cannot remove transactions from the share chain.

This isn't necessarily true; the rule could be that you can't create a double spend.

If TX1 is in the fast-chain, then transactions that spend any of the inputs into TX1 cannot be added to main chain blocks.

There would be no problem with leaving TX1 out of your block as long as you don't include a double spend of any of TX1's inputs.

The fast-chain would have to be fast, so signature checks could be skipped.  The owner of the UTXO would be the only one who knew the public key.  The only thing that would need to be checked is that the public key provided hashes to the address.
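The cheap check described here can be sketched as follows (an illustration; Bitcoin actually uses SHA-256 followed by RIPEMD-160 for address hashes, but plain SHA-256 stands in so the sketch runs on builds without RIPEMD-160, and the key bytes are made up):

```python
import hashlib

def pubkey_matches_address(pubkey: bytes, addr_hash: bytes) -> bool:
    # Fast path: confirm only that the revealed public key hashes to the
    # address commitment; the expensive ECDSA signature check is deferred.
    return hashlib.sha256(pubkey).digest() == addr_hash

pk = b"\x02" + b"\x11" * 32            # stand-in compressed public key
addr = hashlib.sha256(pk).digest()     # the "address" commitment
ok = pubkey_matches_address(pk, addr)
```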

The fast chain could restrict itself to only standard transactions.  Using more complex transactions would be slower.

There is a risk that when a transaction is broadcast someone changes the transaction, since they know the public key (and signature checks are skipped).  Some kind of fast signature would be useful.

A new "fast transaction" standard transaction could be added.  This would include a 4 byte nonce.

The rule could be that fast transactions must have a number of leading zeros in their hash.  If someone modified the transaction, then they would have to re-do the nonce updates.

The number of zeros could be dynamically controlled to try to keep spam low.  It should be low enough that it takes < 10 seconds to solve on a mobile device.
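The nonce-grinding scheme above can be sketched like this (a minimal illustration; the function name and the 12-bit difficulty are my own choices, picked so the search takes a few thousand hashes):

```python
import hashlib

def grind_nonce(tx_body: bytes, zero_bits: int) -> int:
    # Grind a 4-byte nonce until sha256(tx || nonce) has the required
    # number of leading zero bits -- a small anti-spam proof of work
    # that an attacker who modifies the transaction must redo.
    target = 1 << (256 - zero_bits)
    for nonce in range(2**32):
        digest = hashlib.sha256(tx_body + nonce.to_bytes(4, "little")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
    raise RuntimeError("no valid nonce in 32-bit space")

nonce = grind_nonce(b"example fast transaction", zero_bits=12)
```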

Is there an actual signature algorithm which gets high speed at the expense of lower security?  I guess any algorithm would work if the key size was lowered? 

Could an ECDSA 64 bit key give 5-10 mins of security and be very fast?
