Bitcoin Forum
Author Topic: Reasons to keep 10 min target blocktime?  (Read 4348 times)
stdset (OP)
Hero Member
*****
Offline Offline

Activity: 572
Merit: 506



View Profile
July 21, 2013, 08:19:54 PM
Merited by ABCbits (1)
 #1

I know it was discussed many times before, but I didn't find a clear, convincing analysis showing the advantages of a 10 min block time over shorter times.
From this topic: https://bitcointalk.org/index.php?topic=250735.msg2666847#msg2666847 it seems that, to a first approximation, the orphan rate doesn't depend on the target block time. Of course it ultimately does depend on it. But again, is there a solid analysis of this dependency?
Are there other advantages of longer block times over shorter ones?
On the other hand, there are clear advantages of shorter block times over longer ones. One can argue that those advantages are mostly of a cosmetic, marketing nature, but we shouldn't underestimate the importance of marketing. And not all of the advantages are cosmetic: e.g. with shorter block times one needs much less time to estimate the honesty of a non-PPS mining pool.

"Bitcoin: mining our own business since 2009" -- Pieter Wuille
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4158
Merit: 8411



View Profile WWW
July 21, 2013, 08:47:54 PM
Merited by ABCbits (6)
 #2

You know this has been discussed many times before. It would really be best if you spent some more time studying those discussions rather than starting a new thread and externalizing the studying cost…

You seem to know the primary arguments against it, but I'll repeat the ones I think are most interesting:

(1) Orphaning rate depends on the block time relative to communication and validation delay (formula given in the link).  In the limit, as the block time goes to zero, the network stops converging and typical reorganizations tend toward infinite length.  The actual delays depend on network topography and block size. As an aside: in the past we've seen global convergence times on Bitcoin exceed two minutes. Although software performance has improved since then, there doesn't seem to be a ton of headroom before convergence failures would become likely in practice; fast convergence is certainly harder with larger blocks.
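A back-of-the-envelope way to see this dependence: with Poisson block arrivals and a roughly fixed propagation delay, the chance that a competing block is found while yours is still propagating is about 1 − exp(−delay/blocktime). A minimal sketch (the 10-second delay is an assumed, illustrative figure, not a measured one):

```python
import math

def orphan_rate(prop_delay_s: float, block_time_s: float) -> float:
    """Approximate chance that a competing block is found during
    propagation, assuming Poisson block arrivals and a fixed delay."""
    return 1.0 - math.exp(-prop_delay_s / block_time_s)

# Same assumed 10-second propagation delay, different target block times:
for t in (600, 300, 60, 10):
    print(f"{t:4d}s blocks: ~{orphan_rate(10, t):.1%} orphan rate")
```

For block times much longer than the delay this is roughly delay/blocktime, but it blows up as the block time approaches the propagation delay, which is the convergence failure described above.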

(1a) There have been altcoins whose creators didn't understand this, set their block times stupidly low, and suffered nearly instant convergence failure (e.g. liquidcoin). There are others that may start failing if they ever get enough transaction volume that validation actually takes a bit of time.

(2) The computational/bandwidth/storage cost of running an SPV node, querying a remote computation oracle for signing, or presenting a Bitcoin proof in a non-Bitcoin chain is almost entirely due to the header rate. Going to 5 minutes, for example, would double these costs. Increasing costs for the most cost-sensitive usages is not very attractive.

(3) With the exception of 1-confirmation transactions, once you are slow enough that orphaning isn't a major consideration, there is no real security difference that depends on the particular rate. For moderate-length attacks the sum of computation matters, and how you dice it up doesn't matter much. One-confirm security, however, isn't particularly secure.

(3a)  If there is actually a demand for fast low security evidence of mining effort,  you can achieve that simply by having miners publish shares like P2Pool does. You could then look at this data and estimate how much of the network hashrate is attempting to include the transaction you're interested in.  This doesn't, however, create the orphaning/convergence problems of (1) or the bandwidth/storage impact on disinterested nodes of (2).

(3b) Because mining is a stochastic lottery, confirmations can take a rather long time even when the mean is small. Few things that you could describe as "needing" a 2-minute mean would actually still be happy when it sometimes takes 5 times that. Those applications simply need to use mechanisms other than global consensus as their primary mechanism.
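The "stochastic lottery" point can be quantified: with Poisson mining, inter-block waits are exponentially distributed, so the tail probabilities don't depend on the mean at all. A quick sketch:

```python
import math

# Exponential inter-block times: P(wait > k * mean) = exp(-k),
# whatever the mean is. So a 2-minute mean still leaves ~13.5% of
# blocks taking over 4 minutes and ~0.7% taking over 10 minutes.
for k in (1, 2, 5):
    print(f"P(wait > {k} x mean) = {math.exp(-k):.2%}")
```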

(4) While you can debate the fine details of the parameters (perhaps 20 minutes or 5 minutes would have been wiser), because of the above none of the arguments is all that compelling.  Changing this parameter would require the consent of essentially all surviving Bitcoin users; absent a really compelling argument, it simply isn't going to happen.

If you'd like to explore these ideas as an intellectual novelty, Amiller's ideas about merging in evidence of orphaned blocks to target an orphaning rate instead of a time are probably the most interesting. The problems then become how to prevent cliques of fast miners from self-centralizing against more distant groups who can't keep up, and how to produce proofs for SPV clients that remain succinct in the face of potentially quite fast blocks.
2112
Legendary
*
Offline Offline

Activity: 2128
Merit: 1068



View Profile
July 22, 2013, 02:09:57 AM
Last edit: July 22, 2013, 02:29:30 AM by 2112
Merited by ABCbits (2)
 #3

In addition to the excellent points made by gmaxwell:

(5) you really want to make sure that your coin can work and converge through Tor. That includes:

(5a) plain Tor: 3 hops

(5b) hidden service Tor: 6 hops + onion address lookup time

Edit: If you still aren't convinced please reread the discussion with Lolcust about Geist Geld with 15 seconds of interblock time.

https://bitcointalk.org/index.php?topic=42417.msg528001#msg528001

Please comment, critique, criticize or ridicule BIP 2112: https://bitcointalk.org/index.php?topic=54382.0
Long-term mining prognosis: https://bitcointalk.org/index.php?topic=91101.0
kjj
Legendary
*
Offline Offline

Activity: 1302
Merit: 1025



View Profile
July 22, 2013, 05:06:41 AM
Merited by ABCbits (1)
 #4

And if that isn't enough, bitcoin has a bunch of magic numbers in it.  Most of them seem to have been picked because they were round numbers that fit into a broad range of values that were "good enough".

Consider that the average block time could have been chosen anywhere from 1 second to 86400 seconds.  1 second is clearly too short for network convergence.  86400 seconds is clearly too long for effective commerce.  Between those two extremes is a broad band that is "good enough" for both, as well as regions that are questionable for one or the other.  Would 60 seconds be too short?  Probably.  120?  Maybe.  Would 3600 seconds be too long?  Probably.  1800?  Maybe.

And the various magic numbers interact.  For example, the block time and the subsidy function come together to set the generation curve.

Satoshi was either very lucky, or very thoughtful.  The numbers he picked all came together to give us a workable system.  The success of the system gives a very strong (quasi)anthropic argument: we are here discussing the merits of various key values because the system works well enough for us to care about it.  If he had chosen poorly, bitcoin would not have taken off.

And finally, consider litecoin.  Litecoin has a 150 second block time.  This number appears to still be large enough that convergence isn't a huge problem.  But litecoin does not appear to be used for commerce other than buying and selling litecoins, so a shorter time seems to provide no commercial benefits.  If the long block time was holding us back, the world would have beaten a path to litecoin.

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
dserrano5
Legendary
*
Offline Offline

Activity: 1974
Merit: 1029



View Profile
July 22, 2013, 08:55:13 AM
 #5

Satoshi was either very lucky, or very thoughtful.  The numbers he picked all came together to give us a workable system.  The success of the system gives a very strong (quasi)anthropic argument: we are here discussing the merits of various key values because the system works well enough for us to care about it.  If he had chosen poorly, bitcoin would not have taken off.

Without completely discarding a bit of luck, I lean towards "very thoughtful". We don't know how many chains he/they started in order to test different sets of parameters before finally accepting and announcing the current one.
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4158
Merit: 8411



View Profile WWW
July 22, 2013, 03:32:47 PM
 #6

And finally, consider litecoin.  Litecoin has a 150 second block time.  This number appears to still be large enough that convergence isn't a huge problem.
I'm not so sure there. As I mentioned, we've observed propagation times in Bitcoin that would have made Litecoin catch fire given its inter-block gap. The difference is that Bitcoin is bigger and has more transactions, though perhaps the performance improvements made in 0.8+ have removed this risk.
yaffare
Newbie
*
Offline Offline

Activity: 45
Merit: 0


View Profile
July 22, 2013, 07:37:24 PM
 #7

Isn't the argumentation at least a little flawed?

Shorter block times also mean smaller blocks.

It should make no difference whether you have one long validation per 10-minute block, or ten short validations for ten 1-minute blocks. The sum is the same.
Meni Rosenfeld
Donator
Legendary
*
expert
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
July 22, 2013, 07:46:07 PM
 #8

(3) With the exception of 1 confirmation transactions, once you are slow enough that orphaning isn't a major consideration there is no real security difference that depend on the particular rate. For moderate length attacks sum computation matters and how you dice it up doesn't matter much. One confirm security— however— isn't particular secure.
That's false, which is the main point I was trying to explain in https://bitcoil.co.il/Doublespend.pdf.

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4158
Merit: 8411



View Profile WWW
July 22, 2013, 07:49:09 PM
 #9

That's false, which is the main point I was trying to explain in https://bitcoil.co.il/Doublespend.pdf.
I'm speaking specifically about confirm count without regard to latency.  If what I was saying wasn't true, miners publishing all their diff 1 shares would suddenly make Bitcoin a million times more secure.
Meni Rosenfeld
Donator
Legendary
*
expert
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
July 22, 2013, 08:29:52 PM
 #10

That's false, which is the main point I was trying to explain in https://bitcoil.co.il/Doublespend.pdf.
I'm speaking specifically about confirm count without regard to latency.
Under the assumption that orphaning isn't an issue, with a lower mean block time you need to wait less on average for a given level of security.

For example, let's say that with either 5 min or 10 min mean time, orphaning isn't an issue. Let's say also that you want a 10%-hashrate attacker to have less than 2% chance of successfully double spending, so you wait for 3 confirms.

If 10 min is the time constant, you have to wait 30 mins on average. If it's 5 min, you have to wait 15 mins. This is an advantage of a lower time constant, as suggested by the OP.
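Meni's numbers can be checked against the attack-success formula from his paper (a sketch; it assumes a q-fraction attacker with no pre-mined head start and ignores orphaning):

```python
from math import comb

def double_spend_prob(q: float, n: int) -> float:
    """Probability a q-hashrate attacker eventually reverses a
    transaction buried under n confirmations (negative-binomial
    analysis; no head start, orphans ignored)."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # majority attacker always succeeds eventually
    return 1.0 - sum(comb(n + k - 1, k) * (p**n * q**k - p**k * q**n)
                     for k in range(n + 1))

print(double_spend_prob(0.1, 3))  # ~1.7%, under the 2% target above
print(double_spend_prob(0.1, 4))  # ~0.55%, roughly 3x lower
```

The 3-confirmation figure for a 10% attacker comes out around 1.7%, consistent with the "less than 2%" threshold in the example, and one extra confirmation cuts it by about a factor of 3.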

 If what I was saying wasn't true, miners publishing all their diff 1 shares would suddenly make Bitcoin a million times more secure.
The behavior of the security function is different in different regions of parameters so an extreme case does not prove anything about a more typical case. With infinitesimal blocks the focus is on total compute power spent as you say, but with more typical block times the focus is on the statistical properties of an atemporal random walk.

Of course with a network difficulty of 1 you'd never converge, but this defies the assumption that "you are slow enough that orphaning isn't a major consideration".


E.g. with shorter blocktimes one needs much less time to estimate honesty of a non pps mining pool.
If miners use software that verifies that blocks it finds are reported by the pool, and the pool publishes the headers of its shares, then no time is needed to verify the pool is working honestly.

Cubic Earth
Legendary
*
Offline Offline

Activity: 1176
Merit: 1018



View Profile
July 22, 2013, 09:09:39 PM
 #11

I am concerned that a possible practical improvement is getting lost in the theory discussion.  With lots of factors to balance, there is a risk that any big change would lead to problems.  No one wants that.  But from what I understand, changing the target interval to 5 minutes would be safe.  Max block size would be shrunk to 500KB, and coins issued per block would be cut in half.

Let's take your arguments, gmaxwell, and look at them in the context of this specific proposal.

(1) Orphaning rate depends on the time relative to communications & validation delay (formula given in the link).  At the limit as the block-time goes to zero the network will stop converging and typical reorganizations tend to infinitely long.  The actual delays depend on network topography and block size. And as an aside— in the past we've seen global convergence times on Bitcoin get up to over two minutes, although the software performance has been improved since then it doesn't seem that there a ton of headroom before convergence failures would be likely in practice, certainly fast convergence is harder with larger blocks.

So the block time would be at 5 min in this case.  The orphan rate would be higher, but I'm guessing it wouldn't rise by more than 5% or 10%.  And let's remember, the orphan rate is under 1% of all blocks, so the actual percentage of orphans would change from, let's say, 0.80% to something like 0.85%.  I'm just making these numbers up, so let me know if you disagree or have better estimates of the actual values.  I don't know too much about the convergence concept, but I would guess that 500KB blocks every 5 minutes would still be well within the margin of safety.

(1a) There have been altcoins who didn't understand this and set their block times to be stupidly low and suffered pretty much instant convergence failure (e.g. liquidcoin). There are other ones that may start failing if they ever get enough transaction volume that validation actually takes a bit of time.

Good to know.  Let's not do that!  I don't think anyone would think 5 minutes is stupidly low (unless they were stupid?).

(2) The computational/bandwidth/storage cost of running a SPV node, or query a remote computation oracle for signing, or to present a bitcoin proof in a non-bitcoin chain is almost entirely due to the header rate. Going to 5 minutes, for example, would double these costs. Increasing costs for the most cost-sensitive usages is not very attractive.

This may be a good point.  I don't know enough about how the headers work, but I'll try.  Block headers are about 80 bytes, even for empty blocks.  If every block thus far had been empty, we would have ~20MB worth of block headers since the genesis block; that number would be ~40MB if the interval had been 5 mins from the start.  A question: would the header of a 100-transaction block plus the header of an empty block be the same size as the headers of two 50-transaction blocks combined?  If that is true, then the added header overhead would seem reasonable: basically an additional 80 bytes every 10 minutes over what we generate now.
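For what it's worth, the arithmetic checks out: a block header is a fixed 80 bytes no matter how many transactions the block carries, so halving the interval simply doubles header storage. A rough check (the ~4.5-year span is an approximation of genesis to mid-2013):

```python
HEADER_BYTES = 80            # fixed size, independent of tx count
MINUTES_PER_DAY = 24 * 60

def header_mb(days: float, block_time_min: float) -> float:
    """Total header storage in MB for the given span and block time."""
    blocks = days * MINUTES_PER_DAY / block_time_min
    return blocks * HEADER_BYTES / 1e6

days = 4.5 * 365  # assumed span: roughly genesis to mid-2013
print(f"10-min blocks: ~{header_mb(days, 10):.0f} MB of headers")
print(f" 5-min blocks: ~{header_mb(days, 5):.0f} MB of headers")
```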

(3) With the exception of 1 confirmation transactions, once you are slow enough that orphaning isn't a major consideration there is no real security difference that depend on the particular rate. For moderate length attacks sum computation matters and how you dice it up doesn't matter much. One confirm security— however— isn't particular secure.

Seems like Meni is taking you up on the security side.  Statistics aside, let's not forget the importance of that first confirmation!  I've met dozens of people in coffee shops, helping them get some bitcoins.  In purely human terms, there is a big difference between 5 and 10 minutes: 5 minutes is fast food, 10 is not.  You expect putting gas in a car to take 5 minutes; 10 would be a drag.  5 minutes just feels a lot faster than 10.  And since that first confirmation is way, way (maybe 100x?) more secure than an unconfirmed transaction, everything happening twice as fast would be very convenient for the bitcoin community as it is today.  I agree that in the future there will be lots of solutions to these issues, but to get there from here we need to keep people happy and things convenient so that the community lives on.

Of course, if Meni is right, and I think he is, then you can multiply that savings if you need more confirmations.  The security currently provided by three confirmations (30 mins) could be matched by four confirmations at 5 minutes (20 mins), and time savings would be available at any desired level of security.

(3a)  If there is actually a demand for fast low security evidence of mining effort,  you can achieve that simply by having miners publish shares like P2Pool does. You could then look at this data and estimate how much of the network hashrate is attempting to include the transaction you're interested in.  This doesn't, however, create the orphaning/convergence problems of (1) or the bandwidth/storage impact on disinterested nodes of (2).

It's good there are options, but this is a very technical solution.  Don't forget we want to make bitcoin better even for minimally technical users, so it is not an argument against a 5 minute block time.

(3b) Because mining is a stochastic lottery confirmations can take a rather long time even when the mean is small. Few things which you can describe as "needing" a 2 minute mean would actually still be happy with it taking 5 times that sometimes. Those applications simply need to use other mechanisms than global consensus as their primary mechanism.

The distribution curve doesn't change shape, but the timescale underneath it will be shrunk by a factor of two!  Again, for transactions where two people initiate the transaction and then wait it out, be that 1, 2 or 6 blocks, whatever your number is, you will be waiting half as long on average.  Adjusting to keep the security level the same would not result in a 50% time savings, but maybe 30%.  Meni will tell you the exact amount.

(4) While you can debate the fine details of the parameters— perhaps 20 minutes or 5 minutes would have been wiser— because of the above none of the arguments are all that compelling.  Changing this parameter would require the consent of all of the surviving Bitcoin users, absent a really compelling argument it simply isn't going to happen.

No one wants to wait longer.  This is 99% a technical issue, and if the core dev team gives its blessing, people will be very excited.  Why not propose raising it to 20 minutes and see if there is a similar level of excitement?  I tend to think we need a hard fork to prove to ourselves that we are a dynamic community and can handle the challenge.  With core dev support, consensus would easily be over 95%.

d'aniel
Sr. Member
****
Offline Offline

Activity: 461
Merit: 251


View Profile
July 22, 2013, 10:55:51 PM
 #12

That's false, which is the main point I was trying to explain in https://bitcoil.co.il/Doublespend.pdf.
I'm speaking specifically about confirm count without regard to latency.
Under the assumption that orphaning isn't an issue, with a lower mean block time you need to wait less on average for a given level of security.

For example, let's say that with either 5 min or 10 min mean time, orphaning isn't an issue. Let's say also that you want a 10%-hashrate attacker to have less than 2% chance of successfully double spending, so you wait for 3 confirms.

If 10 min is the time constant, you have to wait 30 mins on average. If it's 5 min, you have to wait 15 mins. This is an advantage of a lower time constant, as suggested by the OP.

In the case where an attacker is purchasing his hashing power on-demand, wouldn't halving the block period also halve the cost of any n-block chain reversal, since on average the attacker would need to rent the same fraction q of total hashing power, but for only half the time?
Meni Rosenfeld
Donator
Legendary
*
expert
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
July 22, 2013, 11:27:45 PM
 #13

Seems like Meni is taking you up on the security side.  Statistics aside, lets not forget the importance of that first confirmation!  I've met dozens of people in coffee shops helping them to get some bitcoins.  In purely human terms, there is a big difference between 5 and 10 minutes.  5 minutes is fast food, 10 is not.  You expect putting gas in a car to take 5 minutes - 10 would be a drag.  5 minutes just feels a lot faster than 10.  And since that first confirmation is way, way (maybe 100x?) more secure than an unconfirmed one, everything happening twice as fast would be very convenient for the bitcoin community as it is today.  I agree that in the future there will be lots of solutions to these issues, but to get there from here we need to keep people happy and things convenient so that the community lives on.
On the other hand, in practice even 0-confirmation transactions are reasonably secure. There is no need to wait for confirmations in a coffee shop.

That's false, which is the main point I was trying to explain in https://bitcoil.co.il/Doublespend.pdf.
I'm speaking specifically about confirm count without regard to latency.
Under the assumption that orphaning isn't an issue, with a lower mean block time you need to wait less on average for a given level of security.

For example, let's say that with either 5 min or 10 min mean time, orphaning isn't an issue. Let's say also that you want a 10%-hashrate attacker to have less than 2% chance of successfully double spending, so you wait for 3 confirms.

If 10 min is the time constant, you have to wait 30 mins on average. If it's 5 min, you have to wait 15 mins. This is an advantage of a lower time constant, as suggested by the OP.

In the case where an attacker is purchasing his hashing power on-demand, wouldn't halving the block period also halve the cost of any n-block chain reversal, since on average the attacker would need to rent the same fraction q of total hashing power, but for only half the time?
Yes, but that's negligible since for <50% hashrate, success probability decreases exponentially with confirmations. In my example, if the block time is 5 min, the merchant can wait for 1 more confirmation (20 mins, still much less than 30), cutting success probability by a factor of 3 thus tripling the average cost of a successful attack.

Cubic Earth
Legendary
*
Offline Offline

Activity: 1176
Merit: 1018



View Profile
July 22, 2013, 11:33:39 PM
 #14

In the case where an attacker is purchasing his hashing power on-demand, wouldn't halving the block period also halve the cost of any n-block chain reversal, since on average the attacker would need to rent the same fraction q of total hashing power, but for only half the time?

Meni's paper is a very good read.  Here is a table from his paper and the two notes that followed.

•If the attacker controls more hashrate than the honest network, no amount of confirmations will reduce the success rate below 100%.

•There is nothing special about the default, often-cited figure of 6 confirmations. It was chosen based on the assumption that an attacker is unlikely to amass more than 10% of the hashrate, and that a negligible risk of less than 0.1% is acceptable. Both these figures are arbitrary, however; 6 confirmations are overkill for casual attackers, and at the same time powerless against more dedicated attackers with much more than 10% hashrate.

gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4158
Merit: 8411



View Profile WWW
July 22, 2013, 11:48:21 PM
 #15

I am concerned that a possible practical improvement is getting lost in the theory discussion.
You have studiously ignored what was probably the most important point I made.  There is no improvement possible here that requires a rule change.  You could happily publish half-difficulty shares and a share-chain like P2Pool's if faster evidence of confirmation were needed.  In light of that alone, and regardless of any speculation that perhaps 5 minutes might actually also be safe (speculation I consider highly risky, because it's only safe under the current network load and topology), I believe your advocacy here has no chance of forward progress.
Meni Rosenfeld
Donator
Legendary
*
expert
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
July 23, 2013, 07:46:57 AM
 #16

Meni's paper is a very good read.  Here is a table from his paper and the two notes that followed.
These two notes are actually part of a list of 7 notes which has some figures in the middle.

runeks
Legendary
*
Offline Offline

Activity: 980
Merit: 1008



View Profile WWW
July 23, 2013, 09:22:02 AM
 #17

I am concerned that a possible practical improvement is getting lost in the theory discussion.
You have studiously ignored what was probably the most important point I made.  There is no improvement possible here which requires a rule change. You could happily publish half-difficulty shares and share-chain like P2Pool if faster evidence of confirmation was needed.  In light of that alone, regardless of any speculation about perhaps 5 minutes might actually also be safe (speculation I consider highly risky because it's only safe under the current network load and topology), I believe your advocacy here has no chance of forward progress.
This is not equivalent to a shorter block time. The issue with this approach is that the miners who do not participate in this system have greater freedom in choosing which transactions to include in a block, and can thus make more money from fees. Perhaps this isn't an issue now, but it will become one at some point.

Let's say you create blocks every 10 seconds and append them to a P2Pool-like share chain, which must follow the same rules as the Bitcoin blockchain in the sense that no transaction in a share may conflict with a transaction in a previous share. The effect of this is that fees become irrelevant, or at least that you can only choose which transactions to include from the previous 10 seconds of transactions. If a block becomes full after 5 minutes, you would be forced to mine the share chain and would not be able to drop low-fee transactions even if new high-fee transactions came in. You only have a window of 10 seconds within which you can choose which transactions to include; once a share is calculated you cannot remove its transactions from the share chain.

This creates an incentive for miners not to participate in this faster share chain, unless we can find some elegant way of compensating miners for their effort that doesn't bloat the block chain (I haven't been able to figure out how to do this yet).
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4158
Merit: 8411



View Profile WWW
July 23, 2013, 09:29:07 AM
 #18

This is not equivalent to a shorter block time. The issue with this approach is that the miners who do not participate in this system have greater freedom in choosing which transactions to include in a block, and can thus make more money from fees. Perhaps this isn't an issue now, but it will become an issue at some point.
Wherever did I say it was optional?  I'm sorry if I was unclear. You can achieve equivalent behavior without a hard fork and without bloating up the headers with a higher block rate. I didn't say that you could achieve actual early consensus without imposing _any_ new rules (although if you only want evidence of intention, that's another matter). You simply enforce the share chain as a soft-forking rule at the tip of the chain, and forget about it once it's buried; non-miners who don't care about this activity could ignore it.

You're over-constraining it with assumptions. 10 seconds? The poster referenced 5 minutes.
TierNolan
Legendary
*
Offline Offline

Activity: 1232
Merit: 1083


View Profile
July 23, 2013, 09:58:46 AM
 #19

This is not equivalent to a shorter block time. The issue with this approach is that the miners who do not participate in this system have greater freedom in choosing which transactions to include in a block, and can thus make more money from fees. Perhaps this isn't an issue now, but it will become an issue at some point.

Even as evidence it would be helpful.  Each miner broadcasts all headers with 1/64 of the standard difficulty.  This allows ties to be broken more quickly, so random reversals are much less likely.

It helps even with malicious reversals, as long as honest nodes are willing to mine against a slightly shorter chain.

Forks could be compared using total number of low POW headers on each block in the chain.

So, if there were 2 possible chains which fork at B/B', then the first chain would still win.

A(63) <- B(72) <- C(37) <- D(58)

and

A(63) <- B'(6) <- C'(9) <- D'(4) <- E(7) <- F(6)

Miners would know that almost all of the hashing power is on the first fork, so it will eventually overpower the 2nd fork.

Even if only 75% of the miners have that rule, the top fork will win.
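Under this rule, the fork comparison is just a sum of the low-difficulty headers seen on each fork's blocks. A toy sketch of the example above:

```python
def fork_weight(low_pow_counts):
    """Compare forks by the total low-difficulty headers observed on
    their blocks, rather than by raw chain length."""
    return sum(low_pow_counts)

# The two forks diverge after the shared block A(63):
chain_1 = [72, 37, 58]         # B, C, D
chain_2 = [6, 9, 4, 7, 6]      # B', C', D', E, F

# Chain 2 is longer (5 blocks vs 3) but carries far less hashpower:
print(fork_weight(chain_1), fork_weight(chain_2))  # 167 vs 32
```

Even though the second fork has more blocks, the header totals reveal where the hashpower actually is, which is exactly why the first fork would be expected to win.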

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
runeks
Legendary
*
Offline Offline

Activity: 980
Merit: 1008



View Profile WWW
July 23, 2013, 11:56:23 AM
 #20

This is not equivalent to a shorter block time. The issue with this approach is that the miners who do not participate in this system have greater freedom in choosing which transactions to include in a block, and can thus make more money from fees. Perhaps this isn't an issue now, but it will become an issue at some point.
Wherever did I say it was optional?
I don't think I understand your proposal then. How can a soft fork not be optional? Isn't the fact that it's optional what makes it a soft fork?

Quote
You simply enforce the share chain as a soft-forking rule at the tip of the chain, and forget about it once it's buried; non-miners who don't care about this activity could ignore it.
I don't understand this.

Quote
You're over-constraining it with assumptions. 10 seconds? The poster referenced 5 minutes.
The same would be the case for 5-minute blocks in the share chain. After a new 5-minute share is found, miners following this approach would no longer be able to alter the transactions in that share and replace them with higher-fee transactions that come in afterwards. The miners who ignore the share chain/soft fork would be able to, thus earning more in fees.

This is not equivalent to a shorter block time. The issue with this approach is that the miners who do not participate in this system have greater freedom in choosing which transactions to include in a block, and can thus make more money from fees. Perhaps this isn't an issue now, but it will become an issue at some point.

Even as evidence it would be helpful.  Each miner broadcasts all headers with 1/64 of the standard difficulty.  This allows ties to be broken more quickly, so random reversals are much less likely.

It helps even with malicious reversals, as long as honest nodes are willing to mine against a slightly shorter chain.

Forks could be compared using total number of low POW headers on each block in the chain.

So, if there were 2 possible chains which fork at B/B', then the first chain would still win.

A(63) <- B(72) <- C(37) <- D(58)

and

A(63) <- B'(6) <- C'(9) <- D'(4) <- E(7) <- F(6)

Miners would know that almost all of the hashing power is against the first fork, so it will eventually overpower the 2nd fork.

Even if only 75% of the miners have that rule, the top fork will win.
I agree. It's an interesting concept.

I've been thinking of writing a proof-of-concept using a 1-second "block time". Ie. miners would publish blocks with 1/600th of the difficulty into a P2P network. This could both be useful for other miners, to see which chain is being worked on the most, but also by non-mining nodes, to see if their transactions have been picked up by the network.

The latter benefit was the cause of my initial interest. I thought it would be nice to be able to see whether one's transaction has been picked up by the network and is being worked on. As it is now, you wait in ignorance for ~10 minutes, and at some point you get a single confirmation. With this system you'd get a "partial confirmation" every ~1 second on average, and you'd get feedback, first of all, on whether your transaction has been picked up, but also on when it's discarded in favor of higher-fee transactions. If you receive partial confirmations for your transaction for 2 minutes and they stop all of a sudden, you'd want to resend your transaction with a higher fee if you want it included in the next block. This would tighten the feedback loop between miners and non-mining nodes and make fee discovery more efficient.

Of course this would also require that the partial confirmations be broadcast along with the hash of all the transactions in the block, in order for nodes to know whether their transaction is being worked on, and for them to be able to verify the block (that its difficulty is greater than or equal to 1/600th of the block chain difficulty).
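A rough sketch of what verifying such a partial confirmation could look like, assuming the 1/600th-difficulty rule described above (this is illustrative only; real Bitcoin headers are 80 bytes and targets come from the nBits field):

```python
import hashlib

# Sketch of checking a "partial confirmation": the header must meet at
# least 1/600th of the full block difficulty, i.e. its hash must fall
# below a target 600 times larger than the full-block target.

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def meets_partial_target(header: bytes, full_target: int, fraction: int = 600) -> bool:
    """True if the header hash satisfies 1/fraction of full difficulty."""
    relaxed_target = full_target * fraction
    h = int.from_bytes(double_sha256(header), "little")
    return h <= relaxed_target
```

A node receiving ~1 such header per second could use this check plus the accompanying transaction hashes to see whether its transaction is being worked on.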
TierNolan
Legendary
*
Offline Offline

Activity: 1232
Merit: 1083


View Profile
July 23, 2013, 12:25:54 PM
 #21

Of course this would also require that the partial confirmations be broadcast along with the hash of all the transactions in the block, in order for nodes to know whether their transaction is being worked on, and for them to be able to verify the block (that its difficulty is greater than or equal to 1/600th of the block chain difficulty).

This could be split.  Miners could broadcast "template" blocks.

POW > 1/600: Just the header (every second)
POW > 1/60: Header + block hashes (every 10 seconds)

You could also make it more merkle based.

So, I send

header + all hashes

Later I just have to send

header + merkle root

If I change the block, I could send the updated hashes since the last time.

Clients that heard the first transmission could build up the new block.

Other miners could send

previous: <old hash>
new hash: <new hash>
transactions: 0, <new coinbase>

Having said that, inherently, miners need to update the coinbase transaction for the "extra nonce".

At 500 transactions, sending all the hashes would be 16 kB, so it is not insignificant.

Another option would be to pay miners to include transactions.  If only 1% of transactions need to be included and there are 512 transactions, then you only need to send the path.  This gives 320 bytes per transaction and 5 transactions, so 1.6 kB.  You couldn't send that every second to every node.

Nodes could flag themselves as "HEADER_MONITOR" nodes, and support lots of headers.
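The merkle-path arithmetic above can be checked in a few lines (a back-of-the-envelope sketch, assuming 32-byte hashes and a proof consisting of the transaction hash plus one sibling hash per tree level):

```python
import math

# Cost of proving one of n transactions via its merkle path:
# the transaction hash itself plus one sibling hash per tree level.

HASH_SIZE = 32

def merkle_proof_bytes(n_txs: int) -> int:
    levels = math.ceil(math.log2(n_txs))     # sibling hashes along the path
    return HASH_SIZE * (levels + 1)          # + the transaction hash itself

n = 512
per_tx = merkle_proof_bytes(n)               # 32 * (9 + 1) = 320 bytes
interesting = n // 100                       # ~1% of transactions -> 5
print(per_tx, interesting * per_tx)          # 320 1600  (~1.6 kB)
```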

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
runeks
Legendary
*
Offline Offline

Activity: 980
Merit: 1008



View Profile WWW
July 23, 2013, 01:25:20 PM
Last edit: July 23, 2013, 01:36:30 PM by runeks
 #22

Of course this would also require that the partial confirmations be broadcast along with the hash of all the transactions in the block, in order for nodes to know whether their transaction is being worked on, and for them to be able to verify the block (that its difficulty is greater than or equal to 1/600th of the block chain difficulty).

This could be split.  Miners could broadcast "template" blocks.

POW > 1/600: Just the header (every second)
POW > 1/60: Header + block hashes (every 10 seconds)

You could also make it more merkle based.

So, I send

header + all hashes

Later I just have to send

header + merkle root

If I change the block, I could send the updated hashes since the last time.

Clients that heard the first transmission could build up the new block.

Other miners could send

previous: <old hash>
new hash: <new hash>
transactions: 0, <new coinbase>

Having said that, inherently, miners need to update the coinbase transaction for the "extra nonce".

At 500 transactions, sending all the hashes would be 16 kB, so it is not insignificant.

Another option would be to pay miners to include transactions.  If only 1% of transactions need to be included and there are 512 transactions, then you only need to send the path.  This gives 320 bytes per transaction and 5 transactions, so 1.6 kB.  You couldn't send that every second to every node.

Nodes could flag themselves as "HEADER_MONITOR" nodes, and support lots of headers.
I had thought of a "diff"-like approach to this.

We have a cache time, which basically determines how far back a miner can reference a previous block. So if the miner then changes the transactions in a block he or someone else has published, he would simply send:

Code:
<block_header>
ref_block = <block_hash_reference>
add_txs = <list of transactions to be added>
rm_txs = <list of transactions to be removed>

Nodes would then cache the last <cache time> seconds of blocks, and they would be able to reconstitute the complete block from blocks in the cache by adding the transactions <add_txs> and removing the transactions <rm_txs> from <block_hash_reference>.
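A minimal sketch of the reconstitution step, assuming the referenced template is still in the local cache (field names mirror the message format above and are purely illustrative):

```python
# Rebuild a full block template from a cached base template plus a diff.

def apply_diff(cache: dict, ref_block: str, add_txs: list, rm_txs: list) -> set:
    """Reconstitute a template's transaction set from a diff message."""
    base = cache[ref_block]            # KeyError -> base template has expired
    txs = set(base)
    txs.difference_update(rm_txs)      # drop the replaced transactions
    txs.update(add_txs)                # add the newcomers
    return txs

cache = {"abc123": {"tx1", "tx2", "tx3"}}
rebuilt = apply_diff(cache, "abc123", add_txs=["tx4"], rm_txs=["tx2"])
print(sorted(rebuilt))  # ['tx1', 'tx3', 'tx4']
```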

So if we use 60 seconds as a cache time it means we only need to send out "full blocks" (block header plus all transaction hashes needed to verify the block header) every 60 seconds.

If we assume a 1 MB block size and 300 bytes per transaction, that's around 3500 transaction hashes of 32 bytes each. But blocks are only 0.5 MB on average, because they start out with a size of 0 and end up with a size of 1 MB (if we assume the maximum block size is used, as a worst case scenario). So each full block message is 3500*32*0.5 bytes = 55 KB on average, every 60 seconds. That's 0.9 KB/s for the complete blocks. Add to that the block headers every second, which may contain insertions or deletions. It should be possible to keep the data rate at around 10 KB/s for 10 peers.
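Those figures can be reproduced directly (same assumptions as the post: worst-case 1 MB blocks, ~300-byte transactions, 32-byte hashes, half-full templates on average):

```python
# Reproducing the bandwidth estimate: a worst-case 1 MB block holds
# roughly 3500 transactions of ~300 bytes, a full template carries one
# 32-byte hash per transaction, and templates average half-full.

TX_SIZE = 300
BLOCK_SIZE = 1_000_000
HASH_SIZE = 32
CACHE_SECONDS = 60

txs_per_block = BLOCK_SIZE // TX_SIZE      # ~3333; the post rounds to 3500
full_template = 3500 * HASH_SIZE           # 112,000 bytes (~109 KB)
avg_template = full_template // 2          # ~55 KB average
rate = avg_template / CACHE_SECONDS        # ~0.93 KB/s per peer
print(avg_template, round(rate))           # 56000 933
```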

When a new node joins the network it can ask for the latest full block, so it's able to reconstitute blocks from "diff messages".

Perhaps a Merkle Tree solution would be more efficient, since nodes really only care about their own transactions. I haven't done the calculations on this.
TierNolan
Legendary
*
Offline Offline

Activity: 1232
Merit: 1083


View Profile
July 23, 2013, 01:39:36 PM
 #23

Code:
<block_header>
ref_block = <block_hash_reference>
add_txs = <list of transactions to be added>
rm_txs = <list of transactions to be removed>

Pretty much the same as I suggested.  However, it is smaller just to have

<index>, <new hash>

Since, any new hash will either replace an old one or extend the chain.

you could have

<index>, <000....000>

to mean delete.
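A sketch of this compact encoding (the semantics are filled in illustratively: an index one past the end extends the list, an all-zero hash deletes, anything else replaces in place):

```python
# Apply (index, new_hash) patches to a template's transaction-hash list.

DELETE = "0" * 64  # all-zero hash sentinel meaning "delete this entry"

def apply_patches(hashes: list, patches: list) -> list:
    out = list(hashes)
    deletions = []
    for index, new_hash in patches:
        if new_hash == DELETE:
            deletions.append(index)
        elif index == len(out):
            out.append(new_hash)      # extend the transaction list
        else:
            out[index] = new_hash     # replace an existing entry
    for index in sorted(deletions, reverse=True):
        del out[index]                # delete last so indices stay valid
    return out

txs = ["aa", "bb", "cc"]
print(apply_patches(txs, [(1, "dd"), (3, "ee"), (0, DELETE)]))
# ['dd', 'cc', 'ee']
```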

Quote
Nodes would then cache the last <cache time> seconds of blocks, and they would be able to reconstitute the complete block from blocks in the cache by adding the transactions <add_txs> and removing the transactions <rm_txs> from <block_hash_reference>.

That assumes the new miner is the same as the old one.

There needs to be a rule that miners try to keep their blocks similar to the template block.

For example, you could require that blocks have their transactions sorted according to their hash.

Quote
If we assume a 1 MB block size

That's 1.7kB/s

Quote
and 300 bytes per transaction, that's around 3500 transaction hashes of 32 bytes each, every 60 seconds.

That's double relative to just downloading the chain and people are already complaining about it.

I don't think clearing the cache every minute is a good plan.  Better to keep it for at least 1-2 block lengths.

Quote
Perhaps a Merkle Tree solution would be more efficient, since nodes really only care about their own transactions. I haven't done the calculations on this.

The problem is you need to give the full path.  I think a full template and diffs is at least as efficient.

Another way to "template" is to allow grouping of transactions directly.  Transactions are already sent anyway.

Encouraging miners to use mostly the same transactions is a good plan anyway.

A miner sees a header and notices it has a hash that the miner doesn't have, so it asks its peers for it.  It would also show evidence of double spending.

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
runeks
Legendary
*
Offline Offline

Activity: 980
Merit: 1008



View Profile WWW
July 23, 2013, 02:36:07 PM
 #24

Code:
<block_header>
ref_block = <block_hash_reference>
add_txs = <list of transactions to be added>
rm_txs = <list of transactions to be removed>

Pretty much the same as I suggested.  However, it is smaller just to have

<index>, <new hash>

Since, any new hash will either replace an old one or extend the chain.

you could have

<index>, <000....000>

to mean delete.
Right. The concept holds. How it's encoded is less important.

Nodes would then cache the last <cache time> seconds of blocks, and they would be able to reconstitute the complete block from blocks in the cache by adding the transactions <add_txs> and removing the transactions <rm_txs> from <block_hash_reference>.

That assumes the new miner is the same as the old one.
No that's not necessary as far as I can see. You simply have a module which gets blocks above or equal to 1/600th of the difficulty submitted to it. This module simply keeps the last full block template or set of block templates in memory, and figures out which of the previous templates to publish a diff against. So it quickly calculates which of the templates from the last 60 seconds would give the smallest message size if a diff was produced against it, and publishes this to the network.

But as you mention, the 60 second cache time isn't really necessary if nodes can just ask for the last n block templates when they connect to the network.

If we assume a 1 MB block size

That's 1.7kB/s

Quote
and 300 bytes per transaction, that's around 3500 transaction hashes of 32 bytes each, every 60 seconds.

That's double relative to just downloading the chain and people are already complaining about it.
I think you are misunderstanding me.

If each block can be no larger than 1 MB, and we assume each transaction is 300 bytes, then it contains around 3500 transactions.

Publishing the hash, 32 bytes each, of 3500 transactions is 109 KB. But the block is not 1 MB right after a new block is found. It starts out being very small, so that the full block templates are small right after a new block is found, and get larger and larger until the next block is found. That's why I assume they are, on average, only 50% of 109 KB.

Quote
I don't think clearing the cache every minute is a good plan.  Better to keep it for at least 1-2 blocks length.
Yeah that makes sense. If new nodes can just connect and request full block templates, then there's no need to have a short cache time.

I was originally thinking it would be a broadcast-only protocol, so that miners just broadcast partial confirmations and they cascade through the network via the other peers. This keeps traffic down, but it means that new nodes need to wait until their cache contains the relevant full block templates before they can verify blocks.

Quote
A miner sees a header and notices it has a hash that the miner doesn't have, so it asks his peers for it.  It would also show evidence of double spending.
Yes this is interesting too. It means that unless miners are deliberately mining double spends, it will be easier for the miners to find and resolve double spends.
TierNolan
Legendary
*
Offline Offline

Activity: 1232
Merit: 1083


View Profile
July 23, 2013, 02:54:24 PM
 #25

Publishing the hash, 32 bytes each, of 3500 transactions is 109 KB. But the block is not 1 MB right after a new block is found. It starts out being very small, so that the full block templates are small right after a new block is found, and get larger and larger until the next block is found. That's why I assume they are, on average, only 50% of 109 KB.

It depends.  It might be full size, but change as paying transactions overwrite free ones.

I think trying to have all miners target the same set of transactions would be a good thing.  Obviously, they would each have their own coinbase.

OTOH, the more complex you make it, the less willing miners might be to bother.

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
runeks
Legendary
*
Offline Offline

Activity: 980
Merit: 1008



View Profile WWW
July 23, 2013, 03:06:52 PM
 #26

Publishing the hash, 32 bytes each, of 3500 transactions is 109 KB. But the block is not 1 MB right after a new block is found. It starts out being very small, so that the full block templates are small right after a new block is found, and get larger and larger until the next block is found. That's why I assume they are, on average, only 50% of 109 KB.

It depends.  It might be full size, but change as paying transactions overwrite free ones.

I think trying to have all miners target the same set of transactions would be a good thing.  Obviously, they would each have their own coinbase.

OTOH, the more complex you make it, the less willing miners might be to bother.
Good point. If the 1 MB limit is reached, then a block might reach 1 MB shortly after a new block is found, with transactions just being replaced by others that have higher fees after that.

I think miners should be free to choose whichever transactions they wish. I don't want to change which transactions they mine. That should be up to them completely. I think it might be worthwhile to allow diffs of diffs, to reduce network traffic further. I need to implement something first to find out how much processing power and RAM this will consume. I think the real constraint is bandwidth, calculating and reassembling diffs shouldn't take much processing power, even if it's multiple levels of diffs, I think.

Complexity shouldn't be a problem, as it's hidden from the miners. I imagine they just connect to my program instead of bitcoind, and I tell them the difficulty is 1/600th of what it really is. When I receive a share that is within the partial confirmation range, I process it and send it into the partial confirmation P2P network, and when it's a valid block I send it on to bitcoind for it to be published. Bear in mind that only solo miners and pools would need to publish these partial confirmations. Pool miners send their shares to the pools anyway, and then the pool just needs to be connected to the partial confirmation P2P network.
TierNolan
Legendary
*
Offline Offline

Activity: 1232
Merit: 1083


View Profile
July 23, 2013, 03:43:05 PM
 #27

Good point. If the 1 MB limit is reached then it might reach 1 MB shortly after a new block is found, and transactions are just replaced with others than have higher fees after that.

However, miners could target old transaction "groups" based on the merkle tree.  If 4 transactions were overwritten but part of the same branch, then you could use that root instead of the entire hash.

It is basically a compression algorithm problem.

In fact, it could be implemented as exactly that on a peer to peer connection basis.

You send the entire block header + hashes and the compression algorithm compresses repeats.
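The effect can be demonstrated with a small sketch using stream compression (zlib here, purely as an illustration): when two successive templates share most of their hashes and pass through the same compression context, the repeats in the second one collapse to back-references.

```python
import os
import zlib

# Stream two templates through one zlib context so repeated hash runs in
# the second template compress against the first. Illustrative only.

hashes = [os.urandom(32) for _ in range(500)]
template_1 = b"".join(hashes)
template_2 = b"".join(hashes[:490] + [os.urandom(32) for _ in range(10)])

comp = zlib.compressobj()
first = comp.compress(template_1) + comp.flush(zlib.Z_SYNC_FLUSH)
second = comp.compress(template_2) + comp.flush(zlib.Z_SYNC_FLUSH)

# The first template is random data and barely compresses; the second is
# mostly a repeat of the first, so it shrinks dramatically.
print(len(template_2), len(second))
```

This is exactly the "compression algorithm problem" framing: per-connection compression gets most of the benefit of explicit diffs without a new message format.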

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
piotr_n
Legendary
*
Offline Offline

Activity: 2053
Merit: 1354


aka tonikt


View Profile WWW
July 23, 2013, 05:36:55 PM
 #28

The main reason why the 10 min target doesn't change is that the network does not agree on changing it.
If you could convince 50+% of miners to change the protocol, I have no doubt that they'd be able to handle a forked client which does it for them - whoever had stayed on the satoshi branch would have been doomed. Or on any other branch.
If they had enough hashing power, nobody would be able to stop them in such a case - but the same applies to anyone trying to change the protocol.
But miners don't seek new rules in this protocol - they like the money as it is.
At least they are sane :)
So no worries

Check out gocoin - my original project of full bitcoin node & cold wallet written in Go.
PGP fingerprint: AB9E A551 E262 A87A 13BB  9059 1BE7 B545 CDF3 FD0E
stdset (OP)
Hero Member
*****
Offline Offline

Activity: 572
Merit: 506



View Profile
August 03, 2013, 06:09:50 AM
Last edit: August 03, 2013, 07:29:34 AM by stdset
 #29

It seems to me that the main idea is to use a p2pool-like fast blockchain to provide faster confirmations.
Let's compare that to a hardfork:

P2pool-like chain:
Pros: no hardfork needed
Cons: all major pools need to be convinced to use it. Bitcoin infrastructure needs to be upgraded to get the advantage of using that p2pool-like chain. At any moment, significant mining power can switch back to the good old mining style, which would reduce the usefulness of this solution. Any amount of p2pool miniconfirmations might turn to nothing if the next block is found by a non-participating party.

Hardfork:
Pros: all bitcoin infrastructure gets upgraded at once.
Cons: it's a hardfork. There is a theoretical risk of increasing the orphan rate.

I don't mention cost of running an SPV node, because that cost is very low anyway.

Btw, recently I sold several BTC locally to several different people. Not all of them understand how Bitcoin works. A common Bitcoin user just wants bitcoins to appear in his wallet. It was not very comfortable for me to leave them before they saw their BTC received, even though I was trying to explain how everything works and convince them that there was nothing to worry about.

stdset (OP)
Hero Member
*****
Offline Offline

Activity: 572
Merit: 506



View Profile
August 03, 2013, 07:39:16 AM
 #30

If miners use software that verifies that the blocks it finds are reported by the pool, and the pool publishes the headers of its shares, then no time is needed to verify the pool is working honestly.
Good solution, surprisingly simple.

TierNolan
Legendary
*
Offline Offline

Activity: 1232
Merit: 1083


View Profile
August 03, 2013, 09:32:23 AM
 #31

Let's say you create blocks every 10 seconds, and append them to a P2Pool-like share-chain, which must follow the same rules as the Bitcoin blockchain in the sense that no transactions in a share must conflict with a transaction in a previous share. The effect of this is that fees become irrelevant, or at least that you can only choose which transactions to include from the previous 10 seconds of transactions. If a block becomes full after 5 minutes, you would be forced to mine the share chain and not be able to remove low fee transactions even if new high fee transactions come in. You only have a window of 10 seconds within which you can choose which transactions to include. Once that share is calculated you cannot remove transactions from the share chain.

This isn't necessarily true; the rule could be that you can't create a double spend.

If TX1 is in the fast-chain, then transactions that spend any of the inputs into TX1 cannot be added to main chain blocks.

There would be no problem with leaving TX1 out of your block as long as you don't include a double spend of any of TX1's inputs.

The fast-chain would have to be fast, so signature checks could be skipped.  The owner of the UTXO would be the only one who knew the public key.  The only thing that would need to be checked would be that the public key provided hashes to the address.

The fast chain could restrict itself to only standard transactions.  Using more complex transactions would be slower.

There is a risk that when a transaction is broadcast someone changes the transaction, since they know the public key (and signature checks are skipped).  Some kind of fast signature would be useful.

A new "fast transaction" standard transaction could be added.  This would include a 4 byte nonce.

The rule could be that fast transactions must have a number of leading zeros in their hash.  If someone modified the transaction, then they would have to re-do the nonce updates.

The number of zeros could be dynamically controlled to try to keep spam low.  It should be low enough that it takes < 10 seconds to solve on a mobile device.
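A sketch of what grinding and checking such a nonce could look like (the serialization here is a stand-in, not real transaction encoding, and the leading-zero check is expressed as a hash-below-target comparison):

```python
import hashlib

# Grind a 4-byte nonce until the transaction hash has the required
# number of leading zero bits, as the spam check for "fast transactions".

def grind_nonce(tx_body: bytes, zero_bits: int) -> int:
    target = 1 << (256 - zero_bits)          # hash must be below this
    for nonce in range(2**32):
        digest = hashlib.sha256(tx_body + nonce.to_bytes(4, "little")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
    raise RuntimeError("nonce space exhausted")

def check_pow(tx_body: bytes, nonce: int, zero_bits: int) -> bool:
    digest = hashlib.sha256(tx_body + nonce.to_bytes(4, "little")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - zero_bits))

nonce = grind_nonce(b"pay 1 BTC to alice", 16)      # ~65k hashes on average
print(check_pow(b"pay 1 BTC to alice", nonce, 16))  # True
```

Anyone modifying the transaction would invalidate the hash and have to redo the grinding, which is the deterrent being described.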

Is there an actual signature algorithm which gets high speed at the expense of lower security?  I guess any algorithm would work if the key size was lowered? 

Could an ECDSA 64 bit key give 5-10 mins of security and be very fast?

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF