Author Topic: Exactly 600 seconds between blocks  (Read 2948 times)
Come-from-Beyond (OP)
Legendary
Offline

Activity: 2142
Merit: 1010

August 31, 2013, 01:54:32 PM
 #21

Quote
If you really want 600 seconds between blocks, a better idea is to generate some "difficulty transactions" that reuse the peer "difficulty transactions", and these can be based on the time elapsed since the last block, or the average number of blocks during the last n seconds.

I like your idea; it seems better.
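
A minimal sketch of how a time-dependent target along these lines might be computed; the constants and function names here are illustrative assumptions, not anything specified in the thread:

Code:
# Illustrative only: a target that relaxes as the time since the last block
# grows, so a block becomes easier to find the longer the network waits.
TARGET_SPACING = 600            # desired seconds between blocks
MAX_TARGET     = 2**256 - 1     # hashes treated as 256-bit integers

def current_target(base_target, seconds_since_last_block):
    # After 600 s the target starts to grow, i.e. difficulty starts to fall.
    factor = max(1.0, seconds_since_last_block / TARGET_SPACING)
    return min(int(base_target * factor), MAX_TARGET)

def meets_target(block_hash_int, base_target, seconds_since_last_block):
    return block_hash_int < current_target(base_target, seconds_since_last_block)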
BombaUcigasa
Legendary
Offline

Activity: 1442
Merit: 1005



August 31, 2013, 03:42:54 PM
 #22

Quote
If you really want 600 seconds between blocks, a better idea is to generate some "difficulty transactions" that reuse the peer "difficulty transactions", and these can be based on the time elapsed since the last block, or the average number of blocks during the last n seconds.

I like your idea; it seems better.
Again, this still trades away some security (attack resilience, honest work) for the sake of human accessibility (predictable responsiveness). Someone should make an altcoin and test this.

Consider an attacker with any number of peers on his private network (a GPU farm, for example), who ignores the p2p network for a while and decides on his own that his total hash power is sufficient to sustain a 10-minute block frequency for the whole network. His private nodes will "see" that everyone "else" has been working hard and long to find the next block, yet the 10-minute period has elapsed. The private peers will therefore accept lower-difficulty blocks as valid, and when one is found it will be sent to the public network.

This private miner can create blocks with lower difficulty than expected, so how could the whole network accept them? Looking at such a block, it is valid with respect to the previous blockchain, but its difficulty is lower than the time of broadcast justifies. Obviously the network will discard this block. But what if the network has two such miners?
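
One way a node could apply that rule is to judge a relaxed-difficulty block against its own clock, so a privately pre-mined block broadcast too early fails the check and is discarded; a rough sketch, where the function and its parameters are assumptions for illustration:

Code:
import time

TARGET_SPACING = 600  # seconds

def accept_block(block_work, full_work, parent_timestamp, now=None):
    # A node judges the block against its OWN clock, not the miner's claim.
    now = time.time() if now is None else now
    elapsed = max(now - parent_timestamp, 1)
    if block_work >= full_work:
        return True                     # meets full difficulty, always acceptable
    # Relaxed requirement: the required work shrinks as more time passes,
    # so a low-difficulty block that arrives "too early" is rejected.
    required_work = full_work * TARGET_SPACING / elapsed
    return block_work >= required_work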

What if a public node working honestly finds a block, but by the time it is broadcast the network has advanced the difficulty and someone else finds a block at the higher difficulty? Obviously the network will accept the higher-difficulty block, but doesn't this cause orphans and lost efficiency? The idea of agreeing on honest work with unknown results while also ensuring timely performance is pretty cool, but implementing a method that is both more efficient and more secure than what we have now may be impossible.
BombaUcigasa
Legendary
Offline

Activity: 1442
Merit: 1005



August 31, 2013, 03:50:17 PM
 #23

You might also want to read this:
https://bitcointalk.org/index.php?topic=102355.0
https://bitcointalk.org/index.php?topic=260180.0
TierNolan
Legendary
Offline

Activity: 1232
Merit: 1104


August 31, 2013, 05:12:02 PM
 #24

Quote
Since the, as I call them, spacer blocks have no ordering, there is no orphan problem for these blocks.

You need some mechanism to merge chains, in order for the whole thing to work properly.  If you have 4 fast blocks between 2 full blocks, then you need some way to maintain ordering (between the slow blocks).

A -> a, b, c, d -> B

Looking at B, you can't tell if there were 4 fast blocks.

There would need to be a fast chain, with (at least) 2 inputs.

If b and c happened together, then the result is that d must merge the 2 chains.  It has 2 parents, rather than just 1.

A -> a -> (b, c) => d -> B
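
In data-structure terms this would mean a fast block header carrying a list of parent hashes instead of a single one; a hypothetical sketch (the class and field names are mine, not from any proposal here):

Code:
# Illustrative header for a fast ("spacer") block that may list several
# parents, so a block like d can merge the b and c branches.
class FastBlockHeader:
    def __init__(self, parent_hashes, merkle_root, timestamp, nonce):
        self.parent_hashes = list(parent_hashes)   # 1 hash normally, 2+ when merging
        self.merkle_root = merkle_root
        self.timestamp = timestamp
        self.nonce = nonce

# d merging the two sibling fast blocks b and c:
# d = FastBlockHeader(parent_hashes=[hash_of_b, hash_of_c], ...)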

It should be allowed to merge "around" a full block.  The objective is just to allow PoW to be merged into a tree more efficiently.

Code:
+-------------------+
|                   |
V                   |
A -> a -> b -> d -> B
     |
     +--> c

B has a "slow" link back to A.

The work that went into c is potentially wasted, but it could be merged back in.

Code:

+-------------------+
|                   |
V                   |
A -> a -> b -> d -> B -> e -> f
     |                   ^
     |                   |
     +--> c -------------+

Someone looking at f can trace back the entire tree starting at f. Since the links to c and e are backwards links (included in the header), c (and e) must have been created before f.

This allows all orphans to be merged back in, and so prevents wasting of PoW.
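
As a rough illustration of that trace-back, a node at f could walk every backwards link and sum the work of all reachable blocks, so merged orphans such as c still count toward the total; the block lookup and field names below are assumptions:

Code:
# Illustrative traversal: starting from f, follow every backwards link and
# accumulate the work of all reachable blocks.
def total_work(tip_hash, get_block):
    seen, stack, work = set(), [tip_hash], 0
    while stack:
        h = stack.pop()
        if h in seen:
            continue
        seen.add(h)
        block = get_block(h)            # look up a header by its hash
        work += block.work
        stack.extend(block.parent_hashes)
    return work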

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF