2112
Legendary
Offline
Activity: 2128
Merit: 1073
|
|
October 04, 2011, 05:49:40 AM |
|
Anyone using block times for anything important is quite simply a fool.
This quote is worth preserving. There are a number of people on this forum who claim that bitcoin as-it-is-right-now is ready to maintain certifiable accounts for businesses. Who will be right?
|
|
|
|
FreeMoney
Legendary
Offline
Activity: 1246
Merit: 1016
Strength in numbers
|
|
October 04, 2011, 06:28:07 AM |
|
Anyone using block times for anything important is quite simply a fool.
This quote is worth preserving. There are a number of people on this forum who claim that bitcoin as-it-is-right-now is ready to maintain certifiable accounts for businesses. Who will be right?
Luke is not saying bitcoin isn't good to go. He's saying the times shouldn't be used as times.
|
Play Bitcoin Poker at sealswithclubs.eu. We're active and open to everyone.
|
|
|
kano (OP)
Legendary
Offline
Activity: 4606
Merit: 1851
Linux since 1997 RedHat 4
|
|
October 04, 2011, 11:15:15 AM |
|
Anyone using block times for anything important is quite simply a fool.
This quote is worth preserving. There are a number of people on this forum who claim that bitcoin as-it-is-right-now is ready to maintain certifiable accounts for businesses. Who will be right?
Luke is not saying bitcoin isn't good to go. He's saying the times shouldn't be used as times.
... and of course luke-jr is also correct about the times themselves. They are the time you requested a getwork, which is NOT the time the block was found. Looking at the relationship between the two numbers: the difference can be anything from 0 seconds up to 120 seconds (0 seconds if you are very lucky, have a very fast hashing device and very short network latency). Default pushpool accepts work up to 120 seconds after the getwork request? (msg.c: WORK_EXPIRE_INT = 120,)
So I guess that's also a possible hack to make your blocks always win if they are in an orphan battle? Since you expect people to respond in 60 seconds (and most mining programs assume that also), the pool could push the time back almost 60 seconds and the miners would see no unexpected extra expired blocks.
... Meanwhile ... back on topic. So the suggestion is to actually redo the default 2016 calculation (due to the unstated but expected hacks from having 2 different calculations working together ...)?
Edit: though by the sounds of things it won't get implemented anyway ...
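The 0-120 second window kano describes can be made concrete with a small sketch. The constants come from the post (pushpool's default WORK_EXPIRE_INT of 120 s, and the ~60 s response interval most miners assume); the function names are hypothetical illustrations, not pushpool's API:

```python
# Hypothetical sketch of the timestamp lag kano describes: the block
# header carries the time of the getwork request, not the solve time,
# so a share solved `solve_delay` seconds later has a timestamp that
# is `solve_delay` seconds stale. Constants are from the post.

WORK_EXPIRE_INT = 120   # pushpool default: accept work this long after getwork
MINER_REFRESH = 60      # response interval most mining programs assume

def timestamp_lag(solve_delay):
    """Seconds by which the header timestamp trails the real solve time.
    Raises if the share arrives after pushpool's expiry window."""
    if not 0 <= solve_delay <= WORK_EXPIRE_INT:
        raise ValueError("share outside pushpool's expiry window")
    return solve_delay

# Per kano's hack scenario, a pool could additionally backdate the
# getwork timestamp by up to WORK_EXPIRE_INT - MINER_REFRESH seconds
# before miners would start seeing unexpected expired work.
MAX_SILENT_BACKDATE = WORK_EXPIRE_INT - MINER_REFRESH  # 60 seconds
```

This only illustrates the arithmetic of the window; whether a backdated timestamp actually helps in an orphan race depends on how nodes pick between competing chains.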
|
|
|
|
mndrix
Michael Hendricks
VIP
Sr. Member
Offline
Activity: 447
Merit: 258
|
|
October 04, 2011, 02:19:50 PM |
|
Any block chain fork should be avoided as long as feasibly possible. When it happens, we should have a large list of housekeeping changes to make at once.
Is anyone maintaining a branch with these sorts of housekeeping changes? A block chain fork may be years in the future, but it might be worth maintaining such a branch to keep such long-term changes in mind.
|
|
|
|
Gavin Andresen
Legendary
Offline
Activity: 1652
Merit: 2301
Chief Scientist
|
|
October 04, 2011, 02:35:06 PM |
|
Is anyone maintaining a branch with these sorts of housekeeping changes? A block chain fork may be years in the future, but it might be worth maintaining such a branch to keep such long-term changes in mind.
Good idea. Who wants to volunteer? I'm too busy...
|
How often do you get the chance to work on a potentially world-changing project?
|
|
|
Luke-Jr
Legendary
Offline
Activity: 2576
Merit: 1186
|
|
October 04, 2011, 06:20:41 PM |
|
Any block chain fork should be avoided as long as feasibly possible. When it happens, we should have a large list of housekeeping changes to make at once.
Is anyone maintaining a branch with these sorts of housekeeping changes? A block chain fork may be years in the future, but it might be worth maintaining such a branch to keep such long-term changes in mind.
I was thinking that would be a good idea. I can help, but I don't think I have time to maintain such a branch by myself.
|
|
|
|
kano (OP)
Legendary
Offline
Activity: 4606
Merit: 1851
Linux since 1997 RedHat 4
|
|
October 05, 2011, 12:52:44 PM |
|
Just an FYI: the difficulty estimator at #ozcoin (;;bc,diffchange) is saying -9.4% expected, but calculated based on only the last 3 days: -13.2% ...
|
|
|
|
Transisto
Donator
Legendary
Offline
Activity: 1731
Merit: 1008
|
|
October 05, 2011, 01:48:27 PM |
|
Just an FYI: the difficulty estimator at #ozcoin (;;bc,diffchange) is saying -9.4% expected, but calculated based on only the last 3 days: -13.2% ...
Is Ozcoi.in's estimate more accurate than bitcoinchart's @ -5%? Estimates based on 3 days are near meaningless because of the luck factor, and anyway difficulty adjusts based on the past 2016 blocks (we're already half-way). IMO what we're seeing was the delay for people in the EU to realize unprofitability and stop their rigs.
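The disagreement between the 3-day and full-window estimates comes down to the same formula applied over different samples. A minimal sketch of that formula (illustrative numbers only, not the thread's data), assuming the standard retarget ratio of intended time over actual time:

```python
# Difficulty-change estimate implied by an observed average block
# spacing: Bitcoin retargets every 2016 blocks by the ratio of the
# intended timespan to the actual one, so a shorter sample window
# just plugs noisier average spacings into the same ratio.

TARGET_SPACING = 600  # intended seconds per block

def estimated_change(avg_spacing):
    """Fractional difficulty change, e.g. -0.09 for roughly -9%."""
    return TARGET_SPACING / avg_spacing - 1.0

# Blocks averaging 11 minutes imply about a -9.1% retarget:
print(round(estimated_change(660) * 100, 1))  # -> -9.1
```

With only ~3 days of blocks, Poisson variance in the spacings easily moves this figure by several percent, which is why the short-window and full-window estimates diverge.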
|
|
|
|
Bobnova
|
|
October 05, 2011, 02:34:46 PM |
|
Hell, you don't even have to have an attack, and the "doomsday" scenario is written into the code. Mining right now is marginally profitable if you have efficient GPUs and cheap electricity. It fairly obviously isn't profitable for a decent number of people, as evidenced by the dropping hash rate and difficulty. Now look into the future a little bit to the 50% drop in rewards. Presto! Anybody without free electricity won't be mining profitably anymore, and bitcoin has a namecoin-type issue.
Bitcoin prices better at least double by then, or bitcoin is in serious trouble.
|
BTC: 1AURXf66t7pw65NwRiKukwPq1hLSiYLqbP
|
|
|
Transisto
Donator
Legendary
Offline
Activity: 1731
Merit: 1008
|
|
October 05, 2011, 02:49:58 PM |
|
... Mining right now is marginally profitable if you have efficient GPUs and cheap electricity. ... Presto! Anybody without free electricity won't be mining profitably anymore, and bitcoin has a namecoin type issue.
Bitcoin prices better at least double by then, or bitcoin is in serious trouble.
* Have you pondered ASIC efficiency before saying "efficient GPUs"?
* Winter is coming in the US, so don't worry about people not having free electricity.
* Price does not have to double, only difficulty has to halve.
* Calm down.
|
|
|
|
Meni Rosenfeld
Donator
Legendary
Offline
Activity: 2058
Merit: 1054
|
|
October 05, 2011, 05:19:56 PM |
|
Hell, you don't even have to have an attack, and the "doomsday" scenario is written into the code. Mining right now is marginally profitable if you have efficient GPUs and cheap electricity. It fairly obviously isn't profitable for a decent number of people, as evidenced by the dropping hash rate and difficulty. Now look into the future a little bit to the 50% drop in rewards. Presto! Anybody without free electricity won't be mining profitably anymore, and bitcoin has a namecoin-type issue.
Bitcoin prices better at least double by then, or bitcoin is in serious trouble.
Halving is not going to cause doomsday, for several reasons.
- Capital expenditure is a major component in mining cost. First, this means that there will be plenty of people who are making more than twice their electricity cost.
- Second, it means that in the time before halving, people will avoid buying hardware in anticipation of decreased profitability, so the difficulty will be less than it would have otherwise been.
- The price will gradually increase in the time before halving in anticipation of the reduced supply.
* Price does not have to double, only difficulty has to halve.
Not a counterargument by itself, because the point with doomsday is that difficulty doesn't get a chance to adjust.
|
|
|
|
Transisto
Donator
Legendary
Offline
Activity: 1731
Merit: 1008
|
|
October 05, 2011, 07:27:59 PM |
|
Not a counterargument by itself, because the point with doomsday is that difficulty doesn't get a chance to adjust.
About THE halving to 25 BTC: I think volumes will be written on the subject when the time comes. Lots of things may change by then, like a bigger share of miners using ASICs/FPGAs, and greater adoption overall. And at that time there will still be people mining in the winter scenario, enough to get the difficulty adjusted within a month or so. What fatal difference does it make to wait 30 min per confirmation instead of 10 min? When investing in bitcoins, this is a basic concept to understand and live with.
|
|
|
|
maaku
Legendary
Offline
Activity: 905
Merit: 1012
|
|
October 05, 2011, 07:48:50 PM |
|
The problem is that if you have a drop-off of 50%, that's 50% of the potential computing power of the network that is now disillusioned with mining. And in fact, once you cross that 50% threshold it becomes *more* profitable for such miners to collaborate on forking the bitcoin (in terms of BTC, ignoring for the moment what effect this would have on the exchanges).
In reality people come and go for their own reasons, and you're never going to get 100% buy-in to cheat the system from miners who left. But it becomes a possibility once you cross that 50% threshold, so a conservative approach would be to never let that happen.
|
I'm an independent developer working on bitcoin-core, making my living off community donations. If you like my work, please consider donating yourself: 13snZ4ZyCzaL7358SmgvHGC9AxskqumNxP
|
|
|
kano (OP)
Legendary
Offline
Activity: 4606
Merit: 1851
Linux since 1997 RedHat 4
|
|
October 06, 2011, 12:17:57 AM Last edit: October 06, 2011, 01:44:50 AM by kano |
|
Just an FYI Difficulty estimator at #ozcoin (;;bc,diffchange) is saying: -9.4% expected but calculated based on only the last 3 days: -13.2% ...
Is Ozcoi.in's estimate more accurate than bitcoinchart's @ -5%? Estimates based on 3 days are near meaningless because of the luck factor, and anyway difficulty adjusts based on the past 2016 blocks (we're already half-way). IMO what we're seeing was the delay for people in the EU to realize unprofitability and stop their rigs.
OK then, I calculated it myself - feel free to point out any errors in: http://tradebtc.net/diffcalc.html
Firstly, "3 day are near meaningless" - um, have you ever done statistics? See if you understand this: a sample > 20% of the population ...
OK, so from my table:
last 432 times: 147800 18:00:10 2-Oct-2011 UTC 0x1a09ee5d (1689334.4045971) 14m 40s 10m 51.66s -8.61%
back to first block after last diff change: 147168 22:47:04 27-Sep-2011 UTC 0x1a09ee5d (1689334.4045971) 6m 24s 10m 53.95s -8.99%
last 2016 times: 146216 00:35:02 21-Sep-2011 UTC 0x1a098ea5 (1755425.3203287) 1m 36s 10m 41.79s -6.96%
So I guess bitcoinchart is a piece of ... also
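For reference against the table, the retarget rule those percentages feed into looks roughly like this - a simplified sketch of the reference client's calculation, ignoring its well-known off-by-one block-count quirk (the 4x clamp never matters for changes this small):

```python
# Simplified retarget: the actual timespan of the last 2016 blocks,
# clamped to [target/4, target*4], scales the difficulty by
# target_timespan / actual_timespan.

TARGET_TIMESPAN = 2016 * 600  # two weeks, in seconds

def next_difficulty(old_difficulty, actual_timespan):
    clamped = max(TARGET_TIMESPAN // 4,
                  min(actual_timespan, TARGET_TIMESPAN * 4))
    return old_difficulty * TARGET_TIMESPAN / clamped

# E.g. if the window took 10% longer than two weeks, difficulty
# drops by about 9%:
print(round(next_difficulty(1689334.0, int(TARGET_TIMESPAN * 1.1))))
```

Note kano's table percentages are quoted as (target - actual) / target, which differs slightly from the target/actual - 1 ratio the client actually applies; for single-digit changes the two are close.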
|
|
|
|
Transisto
Donator
Legendary
Offline
Activity: 1731
Merit: 1008
|
|
October 06, 2011, 12:30:38 AM |
|
At 1060/2016 = 52% as of 2011-09-28; we are on the 5th, so 29-30-31-1-2-3-4-5: 8 days out of 14 = 57%. We're at an expected 9.6% increase...
You are right, BTCchart is a POS.
|
|
|
|
sadpandatech
|
|
October 06, 2011, 01:59:06 AM Last edit: October 06, 2011, 06:52:01 AM by sadpandatech |
|
You are right BTCchart is POS.
Ayee, we went -3.76495171661444% on 9/27, and http://dot-bit.org/tools/nextDifficulty.php has been the closest thing to accurate I have seen. According to dot-bit:
Instant as of block 148,241 = +1.35848534244243%
Expected at block 149,184 = -3.92677906619912%
That will likely change; the instant was ~-4% earlier due to variance causing 2 days' worth of bad luck for a few of the larger pools.
|
If you're not excited by the idea of being an early adopter 'now', then you should come back in three or four years and either tell us "Told you it'd never work!" or join what should, by then, be a much more stable and easier-to-use system. - GA
It is being worked on by smart people. -DamienBlack
|
|
|
kano (OP)
Legendary
Offline
Activity: 4606
Merit: 1851
Linux since 1997 RedHat 4
|
|
October 06, 2011, 05:21:44 AM |
|
You are right BTCchart is POS.
Ayee, we went -3.76495171661444% on 9/27, and http://dot-bit.org/tools/nextDifficulty.php has been the closest thing to accurate I have seen. According to dot-bit: Instant as of block 148,241 = +1.35848534244243%; Expected at block 149,184 = -3.92677906619912%. That will likely change; the instant was ~-4% earlier due to variance causing 2 days' worth of bad luck for a few of the larger pools.
My table is simply the data for the last 2017 blocks. Do with it as you will.
However, "Instant" - yes, that figure is COMPLETELY meaningless (yes, that is an example of it being true).
As for the guess at -4%, I'd love to know where they even get that from. Look at my actual figures and tell me how you can estimate -4%. Take the number at half way and divide by 2?
|
|
|
|
sadpandatech
|
|
October 06, 2011, 07:22:39 AM Last edit: October 06, 2011, 07:41:04 AM by sadpandatech |
|
My table is simply the data for the last 2017 blocks. Do with it as you will. However, "Instant" - yes, that figure is COMPLETELY meaningless (yes, that is an example of it being true). As for the guess at -4%, I'd love to know where they even get that from. Look at my actual figures and tell me how you can estimate -4%. Take the number at half way and divide by 2?
Sorry, did not see the table before. I was wondering where you got your %'s from.
Couple of 'errors' - well, questions really. Where are you getting your starting "running average" figure from? And wouldn't the running average time be more accurate if, at the point of difficulty change, you bumped it up or down by the same % that difficulty adjusted? Being that, in the case of it going down, the 'average' time to solve should be less, so the penalty in adjustment % would be greater. It's a sloppy fix, and not being a smart guy like you peeps I can't offer up a fancy, terminology-laced explanation, but give it a try and see. ;p
i.e. 147167 - 147168 should consider the difficulty adjustment; instead it appears to be treated the same as 147166 - 147167:
147168 22:47:04 27-Sep-2011 UTC 0x1a09ee5d (1689334.4045971) 6m 24s 10m 53.95s -8.99%
147167 22:40:40 27-Sep-2011 UTC 0x1a098ea5 (1755425.3203287) 2m 00s 10m 53.45s -8.91%
147166 22:38:40 27-Sep-2011 UTC 0x1a098ea5 (1755425.3203287) 1m 58s 10m 52.95s -8.83%
Should we not bump the running time average from 10m 53.95s up by 0.37647003433787% to 11m 18.57s?
Edit: very sloppy on my part. Would like to see the actual spreadsheet to see how you are doing the math. I know the time is lacking the needed adjustment for the diff change, but am not sure about it being fixed by bumping the 'running average'. We would want to add the % change I suggested into the formula you are using for the running average before it actually calculates the new running average time. Which, if my brain isn't completely fried, would only be adding the ~0.3% to the .50s, the expected calculated change in time.
So instead of the .50s up that it currently is, it would be .5188s added to 53.45s, or a new current running average time of 10m 53.97s. Very trivial when I look at it like that, but across a few more diff changes it would begin to add up if not corrected for, imho.
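sadpandatech's boundary correction can be sketched as a difficulty normalization: rescale each solve time from an earlier difficulty period by the ratio of the current difficulty to the one it was mined at, so times on both sides of a retarget are comparable. This is a hedged illustration of the idea with made-up numbers, not the thread's actual spreadsheet math:

```python
# Normalize block solve times across a difficulty boundary: at constant
# hash rate, expected solve time scales with difficulty, so a time
# mined at difficulty d is rescaled by current/d before averaging.

def normalized_average(blocks, current_difficulty):
    """blocks: list of (solve_time_seconds, difficulty_at_that_block)."""
    times = [t * current_difficulty / d for t, d in blocks]
    return sum(times) / len(times)

# Two 600 s blocks, one mined at half the current difficulty: its time
# counts double, so the average is 900 s rather than a naive 600 s.
print(normalized_average([(600.0, 2.0), (600.0, 1.0)], 2.0))  # -> 900.0
```

A flat percentage bump at the boundary, as suggested above, approximates this ratio for small difficulty changes.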
|
|
|
|
kano (OP)
Legendary
Offline
Activity: 4606
Merit: 1851
Linux since 1997 RedHat 4
|
|
October 06, 2011, 07:38:51 AM |
|
My running average is simply the average of all block generation times from the top down to the line it's shown on. The first one is of course the time to generate the last block, the 2nd is the average of 1 and 2, etc. So you can see what the average is at any point running back from 'now' (now being block 148231, 23:57:28 5-Oct-2011 UTC - in the past).
I can regenerate it (it's just php - and takes, as you can read, almost 4 minutes to do 2017 getblockbycount's to my bitcoind).
The standard difficulty calculation is only interested in the blocks from calculation time back to the last difficulty change, so I've made the averages and % relate to that.
Yeah, I guess the 2016 line isn't really accurate since it crossed the diff change boundary - ignore it.
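Kano's table construction, as he describes it, amounts to a backward cumulative average over inter-block times. A minimal sketch of the same idea (his actual script is PHP and not shown, so this is an assumed reimplementation):

```python
# Backward running averages: row k is the mean of the newest k
# inter-block times, so the average over any window depth - e.g. the
# blocks since the last difficulty change - can be read off directly.

def backward_running_averages(spacings_newest_first):
    averages, total = [], 0.0
    for count, spacing in enumerate(spacings_newest_first, start=1):
        total += spacing
        averages.append(total / count)
    return averages

print(backward_running_averages([880.0, 120.0, 800.0]))
# -> [880.0, 500.0, 600.0]
```

Walking newest-first is what makes the retarget window a single row lookup, at the cost of the boundary-crossing issue discussed above.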
|
|
|
|
sadpandatech
|
|
October 06, 2011, 07:46:05 AM |
|
My running average is simply the average of all block generation times from the top down to the line it's shown on.
Ahhh, must need sleep. Also did not see you were running backwards. ;/ Why not from the bottom up? Edited my last post a bit to fix my half-assed attempt at maths...
Yeah I guess the 2016 line isn't really accurate since it crossed the diff change boundary, ignore it
Hehe, I am fairly certain, even if I keep missing the rest, that my bump idea will fix that. Just do it in reverse, since I for whatever dumb reason thought you were calculating from oldest first. So instead of it dropping .50s, drop it .52s.
|
|
|
|
|