Author Topic: Increasing the block size is a good idea; 50%/year is probably too aggressive  (Read 14267 times)
andrewbb
Member
**
Offline Offline

Activity: 81
Merit: 10


View Profile
October 24, 2014, 06:44:09 PM
 #181

Maybe this is a stupid question, but...


Miners are in it for the fees (plus mining coins).

Why not just set a minimum fee in relation to the probability of mining a coin?
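One toy reading of that suggestion, with every number below invented for illustration: scale a per-transaction minimum fee off the block subsidy, so fee income stays proportional to what miners already earn from new coins.

Code:
#include <cstdio>

int main() {
    // All parameters hypothetical -- one possible reading of the suggestion.
    const double subsidyBtc     = 25.0;    // block subsidy in late 2014
    const double targetFeeShare = 0.01;    // assume fees ~1% of subsidy
    const double txPerFullBlock = 2000.0;  // roughly a full 1MB block

    double minFeeBtc = subsidyBtc * targetFeeShare / txPerFullBlock;
    std::printf("toy minimum fee: %.8f BTC (%.0f satoshis)\n",
                minFeeBtc, minFeeBtc * 1e8);
    return 0;
}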
solex
Legendary
*
Offline Offline

Activity: 1078
Merit: 1002


100 satoshis -> ISO code


View Profile
October 24, 2014, 07:27:59 PM
 #182

Heck, jump to 20MB and grow it at 40% for 20 years; that's fine with me *but* be prepared to alter that if there be a need.

That's fine by me. My last proposal does this. What does everyone think? I say we start building some idea of what can pass community consensus. We may need to leave NewLiberty searching for the legendary ideal solution.

Since the monumental debate which occurred in February 2013 (which you will remember), I have despaired for 18 months that nothing will get done until the limit is actually hit, throughput is crippled for days, and the world's press laughs at Bitcoin for years afterward.

Which is why I even argued for a 3MB cap, just to buy some time.

https://bitcointalk.org/index.php?topic=709970.msg8129058#msg8129058

That 40% for 20 years is more than fine by me :-)
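For scale, a quick sketch of what that schedule compounds to: roughly 108MB after 5 years, ~580MB after 10, and ~17GB per block after 20 (my arithmetic on the proposal's own numbers, nothing official).

Code:
#include <cmath>
#include <cstdio>

int main() {
    const double startMb = 20.0;   // proposed starting cap
    const double growth  = 0.40;   // proposed annual growth
    for (int year = 0; year <= 20; year += 5)
        std::printf("year %2d: cap ~ %.0f MB\n",
                    year, startMb * std::pow(1.0 + growth, year));
    return 0;
}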

acoindr
Legendary
*
Offline Offline

Activity: 1050
Merit: 1002


View Profile
October 24, 2014, 07:43:13 PM
 #183

Since the monumental debate which occurred in February 2013 (which you will remember), I have despaired for 18 months that nothing will get done until the limit is actually hit, throughput is crippled for days, and the world's press laughs at Bitcoin for years afterward.

I do remember. I'm really hoping it doesn't take running into the limit to provide impetus to take action. Not only would we likely get negative press as you mention, but it would highlight the issue to people completely oblivious to it. If we can't get people with good knowledge of the subject to agree, how would we fare after adding even more noise to the signal?

Which is why I even argued for a 3MB cap, just to buy some time.

https://bitcointalk.org/index.php?topic=709970.msg8129058#msg8129058

I think half-measures only increase the likelihood we can't get a comprehensive solution.

That 40% for 20 years is more than fine by me :-)

Thanks for your feedback and for being IMO reasonable :)
Syke
Legendary
*
Offline Offline

Activity: 3878
Merit: 1193


View Profile
October 24, 2014, 08:12:35 PM
 #184

1) What is the maximum value for MAX_BLOCKSIZE functionally possible given the APIs being used?

There was a 32 MB limit to messages (which I assume still exists), so 32 MB is the max it could simply be raised to right now without further code changes. Breaking blocks into multiple messages would be a significant code change to go above 32 MB.
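For reference, a minimal sketch of the two constants in play here, with names and values as I recall them from the Bitcoin Core sources of this era (MAX_SIZE in serialize.h, MAX_BLOCK_SIZE in main.h); an illustration, not the real validation path.

Code:
#include <cstdio>

// Hard cap on any single P2P message the serializer will accept: 32 MiB.
static const unsigned int MAX_SIZE = 0x02000000;

// Consensus cap on a block: 1 MB. Raising it above MAX_SIZE would require
// splitting a block across multiple messages -- the "significant code
// change" referred to above.
static const unsigned int MAX_BLOCK_SIZE = 1000000;

int main() {
    std::printf("message limit: %u bytes (~%u MB)\n", MAX_SIZE, MAX_SIZE >> 20);
    std::printf("block limit:   %u bytes\n", MAX_BLOCK_SIZE);
    std::printf("headroom without splitting messages: %ux\n",
                MAX_SIZE / MAX_BLOCK_SIZE);
    return 0;
}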

Buy & Hold
David Rabahy
Hero Member
*****
Offline Offline

Activity: 709
Merit: 501



View Profile
October 24, 2014, 09:25:29 PM
 #185

So who are we kidding with this?  Are we doing the block segment code now or later?  Bump it to 32MB now to buy us time to do the block segment code.
solex
Legendary
*
Offline Offline

Activity: 1078
Merit: 1002


100 satoshis -> ISO code


View Profile
October 24, 2014, 09:30:47 PM
Last edit: October 24, 2014, 11:00:26 PM by solex
 #186

Breaking blocks into multiple messages would be a significant code change to go above 32 MB.

So who are we kidding with this?  Are we doing the block segment code now or later?  Bump it to 32MB now to buy us time to do the block segment code.

The block segment code is not even needed for a very long time (edit: apart from node bootstrapping).
IBLT (invertible Bloom lookup table) blocks of 32MB would support at least 3GB standard blocks on disk, or 20,000 TPS.
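Rough arithmetic behind those figures; the ~100x compaction factor and 250-byte average transaction are assumptions, not measurements.

Code:
#include <cstdio>

int main() {
    // Assumed: IBLT set reconciliation relays ~1/100th of a block (peers
    // already hold most transactions), ~250-byte average transaction,
    // 600-second block interval.
    const double ibltBytes    = 32e6;
    const double compaction   = 100.0;
    const double avgTxBytes   = 250.0;
    const double blockSeconds = 600.0;

    double blockBytes = ibltBytes * compaction;    // ~3.2 GB on disk
    double txPerBlock = blockBytes / avgTxBytes;   // ~12.8 million
    double tps        = txPerBlock / blockSeconds; // ~21,000 TPS
    std::printf("block: %.1f GB, %.1fM tx, ~%.0f TPS\n",
                blockBytes / 1e9, txPerBlock / 1e6, tps);
    return 0;
}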

Realpra
Hero Member
*****
Offline Offline

Activity: 815
Merit: 1000


View Profile
October 24, 2014, 09:33:12 PM
 #187

I will just weigh in on this as someone who has worked with Bitcoin merkle trees in practice:

1. The limit could be infinite and we would be fine. Hence I support any proposal to increase block size as much as possible.
2. That said a slow ~40% growth rate gives us time to improve the clients to scale nicely. Again I give full support to this.
3. The things that make this possible are swarm nodes and aggressive merkle tree pruning.

There are two hard forks needed in Bitcoin; this is the first. The next will be more decimals. Nothing else I know about is needed.
(Made sure of that before I jumped on the wagon, you know ;) )

Scaling details:
Swarm nodes:
Put/implemented as SIMPLY as possible (it can also be trustless, decentralized and peer-to-peer) -> Two people each run a "half" node and simply tell each other whether their half of the block was valid; boom, 2X network capacity.
(Rinse and repeat/complicate as needed ;) )

Aggressive merkle tree pruning:
1. Spent/provably unspendable TXs are pruned.
2. Dust-size and rarely/never-used unspent TXs can ALSO be pruned by miners -> The owner, should he exist, will just have to provide the merkle branch leading to the header chain, plus the other TX data, at spend time. (Self-storage of TX data, not just keys, basically; see the sketch below.)
A miner who does not know about a TX will have to either (a) not include it in his blocks or (b) get the information from someone else.
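A minimal sketch of the spend-time verification this implies: anyone holding only block headers can check a presented merkle branch against a block's merkle root. The toy hash stands in for Bitcoin's double-SHA256, and the tree here is hypothetical.

Code:
#include <cstdint>
#include <cstdio>
#include <functional>
#include <string>
#include <utility>
#include <vector>

using Hash = uint64_t;  // stand-in for a 256-bit digest

Hash ToyHash(Hash a, Hash b) {  // stand-in for SHA256d(a || b)
    return std::hash<std::string>{}(std::to_string(a) + ":" + std::to_string(b));
}

// Fold the branch from the leaf (txid) up to the root. Each step supplies
// the sibling hash and whether that sibling sits on the left.
Hash BranchToRoot(Hash txid, const std::vector<std::pair<Hash, bool>>& branch) {
    Hash h = txid;
    for (const auto& [sibling, siblingIsLeft] : branch)
        h = siblingIsLeft ? ToyHash(sibling, h) : ToyHash(h, sibling);
    return h;
}

int main() {
    // Four leaves, two levels: what a full node computed when the block was
    // mined. The verifier below only ever sees the root and the branch.
    std::vector<Hash> leaves = {11, 22, 33, 44};
    Hash n01 = ToyHash(leaves[0], leaves[1]);
    Hash n23 = ToyHash(leaves[2], leaves[3]);
    Hash root = ToyHash(n01, n23);

    // Branch for leaf index 2: sibling 44 on the right, then n01 on the left.
    std::vector<std::pair<Hash, bool>> branch = {{leaves[3], false}, {n01, true}};
    std::printf("branch verifies against header root: %s\n",
                BranchToRoot(leaves[2], branch) == root ? "yes" : "no");
    return 0;
}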

Security:
Complex issue, but it will be okay. (In another thread I have described how Bitcoin clients can delay miner blocks from miners that NEVER include their valid TXs for instance.)

In general ->
Bitcoin is consensus-based; if an issue is serious enough, it will be solved. A Bitcoin software "crash" will never happen, because all issues will be solved.
In 2010 anyone could spend anyone else's Bitcoin... you probably didn't even know about that, right? What happened? -> Nothing; solved and "forgotten".

Cheap and sexy Bitcoin card/hardware wallet, buy here:
http://BlochsTech.com
acoindr
Legendary
*
Offline Offline

Activity: 1050
Merit: 1002


View Profile
October 24, 2014, 11:09:08 PM
 #188

2. That said a slow ~40% growth rate gives us time to improve the clients to scale nicely. Again I give full support to this.

Awesome. We're looking good for 40% annual increases :)

As Cubic Earth said, we don't need 100% consensus. We just need general consensus. Let's try to keep rallying around a 40% increase game plan. The more people trumpet it, the more it becomes the agreed-upon way forward.
NewLiberty
Legendary
*
Offline Offline

Activity: 1204
Merit: 1002


Gresham's Lawyer


View Profile WWW
October 24, 2014, 11:11:33 PM
 #189

Since the monumental debate which occurred in February 2013 (which you will remember), I have despaired for 18 months that nothing will get done until the limit is actually hit, throughput is crippled for days, and the world's press laughs at Bitcoin for years afterward.

I do remember. I'm really hoping it doesn't take running into the limit to provide impetus to take action. Not only would we likely get negative press as you mention, but it would highlight the issue to people completely oblivious to it. If we can't get people with good knowledge of the subject to agree, how would we fare after adding even more noise to the signal?

Which is why I even argued for a 3MB cap, just to buy some time.

https://bitcointalk.org/index.php?topic=709970.msg8129058#msg8129058

I think half-measures only increase the likelihood we can't get a comprehensive solution.

That 40% for 20 years is more than fine by me :-)

Thanks for your feedback and for being IMO reasonable :)


The 40% per year starting at 20MB is a half measure.
It's an improvement over the first round of 50%, but it is still picking numbers that are arbitrarily guessed, with some justification.

We aren't seeking "legendary" or "ideal", but thank you for your rhetoric, and also for being a solidly reliable, unvarying advocate for whatever the loudest voice says.
I know I can rely on you for that: if any of the better suggestions catch traction, you will just pile on with whichever you think is likely to get consensus.
You are also very reasonable, and your reasons clear:  Seek consensus.  Attack dissension.

I don't need to be right.  I am just as happy to be wrong; the happiness comes from improvement.

It will be nice to have this 40% solution in pocket as a minimum-quality temporary patch, while a fix may be devised that would not need future adjustment.
The max block size was already reduced once for being too large, and also once for being too small.  It isn't as though we haven't been here before; it would be a good one to see solved eventually.

FREE MONEY1 Bitcoin for Silver and Gold NewLibertyDollar.com and now BITCOIN SPECIE (silver 1 ozt) shows value by QR
Bulk premiums as low as .0012 BTC "BETTER, MORE COLLECTIBLE, AND CHEAPER THAN SILVER EAGLES" 1Free of Government
acoindr
Legendary
*
Offline Offline

Activity: 1050
Merit: 1002


View Profile
October 24, 2014, 11:18:25 PM
 #190

It will be nice to have this 40% solution in pocket as a minimum-quality temporary patch, while a fix may be devised that would not need future adjustment.

Awesome. Like I said, I'm happy for you to keep searching. If I can count you in for passing a 40% solution in the meantime I'll be your best friend ;)
David Rabahy
Hero Member
*****
Offline Offline

Activity: 709
Merit: 501



View Profile
October 25, 2014, 01:30:29 AM
 #191

Um, is that it?  How do we know if we've reached consensus?  When will the version with the increased MAX_BLOCKSIZE be available?
NewLiberty
Legendary
*
Offline Offline

Activity: 1204
Merit: 1002


Gresham's Lawyer


View Profile WWW
October 25, 2014, 02:38:39 AM
 #192

Um, is that it?  How do we know if we've reached consensus?  When will the version with the increased MAX_BLOCKSIZE be available?

I would stipulate that we agree that both Gavin's first and second solutions are an improvement over the current code; I'd further opine that the second is a better guess even than the first.
I would maintain that our best so far is still a horrible miss of an opportunity.  With any luck we won't get another opportunity on this one in quite a while.  It is not a good solution, but it can get us at least up to the next time it has to be adjusted.

It is probably a different question whether to make a change, and if so when.  And another question as to whether there is a consensus to do so.

The answer to both might be in the same little bit of work.

In order to increase predictability, we might want to have some criteria for looking at this parameter, not just for now, but also for the future.
We have done the expedient before in changing it.
Each time should continue to be an improvement over the last.  It is a patch, not a fix, and it will probably last longer than what came before.
It is far less than Satoshi's suggestion.  We should recognize that it very well may need to change again.


Your questions, David, are good ones.  They suggest the way to answer them may be found in a few other questions:

If the plan is to keep changing MAX_BLOCKSIZE whenever we think MAX_BLOCKSIZE is awry, how does one know when MBS is off? 
What defines a crisis sufficient to get easy consensus?

Or put another way:
How do we measure the risk of preventing legitimate transactions?  When the risk is high enough, we do this again.


Answering these satisfactorily would likely foster an easy consensus.

This would also be a step toward the design goals discussed on the last page.
If we get those defined ahead of hitting that change criterion, we may yet end up with something still better.

FREE MONEY1 Bitcoin for Silver and Gold NewLibertyDollar.com and now BITCOIN SPECIE (silver 1 ozt) shows value by QR
Bulk premiums as low as .0012 BTC "BETTER, MORE COLLECTIBLE, AND CHEAPER THAN SILVER EAGLES" 1Free of Government
BitMos
Full Member
***
Offline Offline

Activity: 182
Merit: 123

"PLEASE SCULPT YOUR SHIT BEFORE THROWING. Thank U"


View Profile
October 25, 2014, 02:54:38 AM
 #193

The people at keepbitcoinfree.org don't want to change the 1MB now at all. They think it's necessary for Tor and other considerations, but I agree with Syke that not everyone needs to be able to run a full node.

Thank you for bringing this perspective so eloquently. There are, to my limited knowledge, only 6 options for scalability:

1. with the fees, it will be adjusted automatically (don't pay enough, no tx for you) / BEST OPTION
2. bigger blocks
3. faster blocks
4. alts
5. data compression (it will fit in those 640KB btw)
6. dynamic blocks (everything changes depending on usage)

Option 6 is a little bit complex, and with alts there is no need to fork! Maybe the global payment system is just a pipe dream... but a global payment system, why not...

money is faster...
NewLiberty
Legendary
*
Offline Offline

Activity: 1204
Merit: 1002


Gresham's Lawyer


View Profile WWW
October 25, 2014, 03:09:50 AM
Last edit: October 25, 2014, 04:38:03 AM by NewLiberty
 #194

1) What is the maximum value for MAX_BLOCKSIZE functionally possible given the APIs being used?

2) What is the maximum value which has been tested successfully?  Have any sizes been tested that fail?

3) Why not just set it to that value once right now (or soon) to the value which works and leave it at that?
       3.1) What advantage is there to delaying the jump to maximum tested value?

No miner is consistently filling up even the tiny 1MB blocks possible now.  We see no evidence of self-dealing transactions.  What are we afraid of?

Heck, jump to 20MB and grow it at 40% for 20 years; that's fine with me *but* be prepared to alter that if there be a need.  How will we know we need to jump it up faster?  A few blocks at the current maximum is hardly a reason to panic but when the pool of transactions waiting to be blocked starts to grow without any apparent limit then we've waited too long.

The first time it was fixed, it was reduced from 32MB to 1MB, temporarily, until other things were fixed.  Pretty much all the reasons for that have abated since, though.
(backstops in front of backstops)

The maximum successfully "tested" is what we have now, 1MB,
and there it sits at the top of the wish list.
https://en.bitcoin.it/wiki/Hardfork_Wishlist


We are at an average of less than 1/3rd of that now?
https://blockchain.info/charts/avg-block-size

If we were to extrapolate the growth rate, we are still far from a crisis, or from getting transactions backed up because of this.
This provides opportunity for still better proposals to emerge in the months ahead.
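A rough extrapolation along those lines; the growth rates below are assumptions, not data. Starting from ~0.3MB average blocks, the 1MB cap binds in roughly 1.7 to 4.6 years depending on the assumed growth.

Code:
#include <cmath>
#include <cstdio>
#include <initializer_list>

int main() {
    const double avgMb = 0.3;   // roughly today's average block size
    const double capMb = 1.0;   // current consensus cap
    for (double growth : {0.3, 0.5, 1.0}) {  // assumed annual growth rates
        double years = std::log(capMb / avgMb) / std::log(1.0 + growth);
        std::printf("at %3.0f%%/yr growth: cap binds in ~%.1f years\n",
                    growth * 100.0, years);
    }
    return 0;
}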

FREE MONEY1 Bitcoin for Silver and Gold NewLibertyDollar.com and now BITCOIN SPECIE (silver 1 ozt) shows value by QR
Bulk premiums as low as .0012 BTC "BETTER, MORE COLLECTIBLE, AND CHEAPER THAN SILVER EAGLES" 1Free of Government
David Rabahy
Hero Member
*****
Offline Offline

Activity: 709
Merit: 501



View Profile
October 25, 2014, 04:42:30 AM
 #195

But clearly some blocks are already full right up to the 1MB limit.  I've been doing transactional systems for 30+ years; the serious trouble will start when the average over reasonable periods of time, e.g. an hour or so but not more than a day, begins to approach ~70%.

http://en.wikipedia.org/wiki/Little's_law

Per https://blockchain.info/charts/n-transactions?showDataPoints=true&timespan=1year&show_header=true&daysAverageString=1&scale=0&format=csv&address=, Nov. 28, 2013 had the most transactions in a day, i.e. 102010.  From https://blockchain.info/block-height/271850 to https://blockchain.info/block-height/272030, i.e. 180 blocks that day, one wonders what the block size distribution looked like.  Gosh, it would be useful to have the size of the pool of waiting transactions at that time.

Per https://blockchain.info/charts/n-transactions-per-block?showDataPoints=false&timespan=1year&show_header=true&daysAverageString=1&scale=0&format=csv&address=, we had an average of 560 transactions per block (only the 8th highest day so far).  Feb. 27, 2014 had the highest average transactions per block of 618 so far.

April 3, 2014 had the highest average block size at 0.365623MB.  Arg, a day is too long.  I just bet the hourly average peaks around 70% of 1MB.

Does *anyone* have a record of the pool of waiting transactions?  That's our key.  When there are ~2,000 transactions in the queue waiting then we would expect a full 1MB block to be coming out next.  When there are ~4,000 transactions in the queue waiting then we would expect two full 1MB blocks to be coming out next.  In this state, transactions can expect to take ~20 minutes to confirm.  ~6,000 waiting -> 30 minute confirmation times.  And so on.

7t/s * 60s/m = 420t/m, 420t/m * 10m/block = 4200t/block.  That does not match observations:  Observations reveal only about 2000t/block.  2000t/block * 1block/10m = 200t/m, 200t/m * 1m/60s ~= 3.3t/s.  Who thinks we can squeeze 4200t/block?  3.3t/s * 86400s/d = 285,120t/d.  Trouble is closer than we thought.  70% * 285,120t/d = 199,584t/d.
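A sketch of that arithmetic, using the observed ~2,000 transactions per full block; the totals differ slightly from the figures above only because 2000/600 is kept exact rather than rounded to 3.3t/s.

Code:
#include <cstdio>
#include <initializer_list>

int main() {
    const double txPerBlock = 2000.0;       // observed size of a full 1MB block
    double tps      = txPerBlock / 600.0;   // ~3.33 t/s effective capacity
    double txPerDay = tps * 86400.0;        // ~288,000 t/d (rounding to 3.3 gives 285,120)
    double trouble  = 0.70 * txPerDay;      // the ~70% utilization knee

    std::printf("effective capacity: %.2f t/s, %.0f t/d\n", tps, txPerDay);
    std::printf("trouble threshold (~70%%): %.0f t/d\n", trouble);

    // Little's law intuition: N waiting transactions take about
    // N / txPerBlock blocks of ~10 minutes each to drain.
    for (double queued : {2000.0, 4000.0, 6000.0})
        std::printf("%5.0f queued -> ~%.0f minutes to confirm\n",
                    queued, queued / txPerBlock * 10.0);
    return 0;
}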

Gentlemen, I've seen this too many times before; when the workload grows to somewhere north of 200,000t/d we *will* begin to see the pool of waiting transactions grow to tens of thousands and confirmation times will be well over an hour.

Increase the MAX_BLOCKSIZE as soon as is reasonable.  20MB, 32MB, whatever.  Then enhance the code to segment blocks to exceed the API limit after that.
NewLiberty
Legendary
*
Offline Offline

Activity: 1204
Merit: 1002


Gresham's Lawyer


View Profile WWW
October 25, 2014, 05:35:44 AM
Last edit: October 25, 2014, 06:26:01 AM by NewLiberty
 #196

But clearly some blocks are already full right up to the 1MB limit.  I've been doing transactional systems for 30+ years; the serious trouble will start when the average over reasonable periods of time, e.g. an hour or so but not more than a day, begins to approach ~70%.

So if there were a flexible adjustment that kept the highest average below 70%, or even below 50% to be safer, then we would have a flexible adjustment that would be fit for purpose?   Maybe even better than a fixed increase?  Would it be even better if the target max size were 400% of the average, to bring the average to 25% of the max?
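A sketch of what such a rule might look like. There is no concrete proposal behind this; the 400% target, the damping bounds, and the function name are all invented for illustration.

Code:
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

// Hypothetical retarget rule: every adjustment period, set the cap to 400%
// of the recent average block size, so the average sits near 25% of max.
double NextMaxBlockSize(const std::vector<double>& recentSizesMb,
                        double currentMaxMb) {
    double avg = std::accumulate(recentSizesMb.begin(), recentSizesMb.end(), 0.0)
                 / recentSizesMb.size();
    double target = 4.0 * avg;
    // Damp the move, as difficulty retargeting does, to blunt gaming:
    // never more than 2x up or 0.5x down in one period.
    return std::clamp(target, currentMaxMb * 0.5, currentMaxMb * 2.0);
}

int main() {
    std::vector<double> sizes = {0.30, 0.35, 0.28, 0.40};  // hypothetical MB
    std::printf("next cap: %.2f MB\n", NextMaxBlockSize(sizes, 1.0));
    return 0;
}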


http://en.wikipedia.org/wiki/Little's_law

Per https://blockchain.info/charts/n-transactions?showDataPoints=true&timespan=1year&show_header=true&daysAverageString=1&scale=0&format=csv&address=, Nov. 28, 2013 had the most transactions in a day, i.e. 102010.  From https://blockchain.info/block-height/271850 to https://blockchain.info/block-height/272030, i.e. 180 blocks that day, one wonders what the block size distribution looked like.  Gosh, it would be useful to have the size of the pool of waiting transactions at that time.
The largest block of that period, by a good margin, was in the 880KB range.  https://blockchain.info/block-height/271998
The average was a bit less than half that.

Per https://blockchain.info/charts/n-transactions-per-block?showDataPoints=false&timespan=1year&show_header=true&daysAverageString=1&scale=0&format=csv&address=, we had an average of 560 transactions per block (only the 8th highest day so far).  Feb. 27, 2014 had the highest average transactions per block of 618 so far.

April 3, 2014 had the highest average block size at 0.365623MB.  Arg, a day is too long.  I just bet the hourly average peaks around 70% of 1MB.

Does *anyone* have a record of the pool of waiting transactions?  That's our key.  When there are ~2,000 transactions in the queue waiting then we would expect a full 1MB block to be coming out next.  When there are ~4,000 transactions in the queue waiting then we would expect two full 1MB blocks to be coming out next.  In this state, transactions can expect to take ~20 minutes to confirm.  ~6,000 waiting -> 30 minute confirmation times.  And so on.

7t/s * 60s/m = 420t/m, 420t/m * 10m/block = 4200t/block.  That does not match observations:  Observations reveal only about 2000t/block.  2000t/block * 1block/10m = 200t/m, 200t/m * 1m/60s ~= 3.3t/s.  Who thinks we can squeeze 4200t/block?  3.3t/s * 86400s/d = 285,120t/d.  Trouble is closer than we thought.  70% * 285,120t/d = 199,584t/d.

Gentlemen, I've seen this too many times before; when the workload grows to somewhere north of 200,000t/d we *will* begin to see the pool of waiting transactions grow to tens of thousands and confirmation times will be well over an hour.

What did you see, and where did you see it?
200Kt/d may be several years away, yes?
https://blockchain.info/charts/n-transactions?timespan=all&showDataPoints=false&daysAverageString=1&show_header=true&scale=0&address=
(If you are fond of extrapolating this might tell you something)

Or 200kt/d may be much sooner.
So flexible limits are probably better than "whatever", right?

Increase the MAX_BLOCKSIZE as soon as is reasonable.  20MB, 32MB, whatever.  Then enhance the code to segment blocks to exceed the API limit after that.


FREE MONEY1 Bitcoin for Silver and Gold NewLibertyDollar.com and now BITCOIN SPECIE (silver 1 ozt) shows value by QR
Bulk premiums as low as .0012 BTC "BETTER, MORE COLLECTIBLE, AND CHEAPER THAN SILVER EAGLES" 1Free of Government
acoindr
Legendary
*
Offline Offline

Activity: 1050
Merit: 1002


View Profile
October 25, 2014, 06:30:34 PM
 #197

Gavin, if you're still reading: in my humble opinion we need a few things to move forward.

You're still the closest thing this community has to Satoshi/leadership, so I think the impetus comes from you. IMO you should update your Scalability Roadmap to reflect 40% annual increases, noting that seems able to garner consensus. Maybe a second update includes a step-by-step process for rolling out the update (fork), so people know what to expect. I think starting to talk about things in a matter-of-fact way will engender confidence and expectation of what's to come. Thanks again for your hard work.

For the rest of us I think it's helpful to be supportive, in any way we can, of pushing forward this update. Notice I've updated my signature. If we can explain to those wondering whether it's the best option that it certainly is (implementing something while covering the most possible bases), people may go along.
Cubic Earth
Legendary
*
Offline Offline

Activity: 1176
Merit: 1018



View Profile
October 25, 2014, 09:06:35 PM
 #198

Gavin, if you're still reading: in my humble opinion we need a few things to move forward.

You're still the closest thing this community has to Satoshi/leadership, so I think the impetus comes from you. IMO you should update your Scalability Roadmap to reflect 40% annual increases, noting that seems able to garner consensus. Maybe a second update includes a step-by-step process for rolling out the update (fork), so people know what to expect. I think starting to talk about things in a matter-of-fact way will engender confidence and expectation of what's to come. Thanks again for your hard work.

For the rest of us I think it's helpful to be supportive, in any way we can, of pushing forward this update. Notice I've updated my signature. If we can explain to those wondering whether it's the best option that it certainly is (implementing something while covering the most possible bases), people may go along.

My sentiments exactly.  I have actually found this whole thread to be quite heartening.

NewLiberty - you have done a good job tirelessly advocating for a certain approach.  It almost seems as if every even-numbered post is yours, and every odd-numbered post is from a different poster who steps up to the plate to explain to you the flaws in your position.  You seem like a perfectionist, which in general isn't a bad thing at all.  But action is needed now, even if we don't have some perfect 'forever' solution. Fortunately we don't need a 100% consensus to move forward, just an undefined super-majority.  I'd guess we have it around the 20MB / 40% concept.

My personal opinion is 20MB /40% is a kick-ass combo.  Between that, the headers-first downloading, the O(1) miner backbone system, and (hopefully) sidechains, we are looking at the most important set of technical improvements since bitcoin started.
solex
Legendary
*
Offline Offline

Activity: 1078
Merit: 1002


100 satoshis -> ISO code


View Profile
October 25, 2014, 11:16:04 PM
Last edit: October 26, 2014, 04:51:00 AM by solex
 #199

My personal opinion is 20MB /40% is a kick-ass combo.  Between that, the headers-first downloading, the O(1) miner backbone system, and (hopefully) sidechains, we are looking at the most important set of technical improvements since bitcoin started.

+1 Great summary.

OT, but relevant. An interesting piece by Jake Yocom-Piatt, highlighting the CPU bottleneck of signature verification, which seems to rear its head next, after the artificial constraint on block size is lifted:

https://blog.conformal.com/btcsim-simulating-the-rise-of-bitcoin/
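A rough sense of the scale of that bottleneck; the per-verification cost and signatures-per-transaction below are assumed ballpark figures, not numbers from the article.

Code:
#include <cstdio>

int main() {
    // Assumed ballpark figures:
    const double tps          = 20000.0;  // throughput discussed upthread
    const double sigsPerTx    = 2.0;      // average signatures per transaction
    const double verifyMicros = 100.0;    // one ECDSA verify, ~100 microseconds

    double coreLoad = tps * sigsPerTx * verifyMicros / 1e6;
    std::printf("ECDSA load: %.1f core-seconds per wall-clock second\n", coreLoad);
    std::printf("=> roughly %.0f cores kept busy by verification alone\n", coreLoad);
    return 0;
}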

The road ahead might be rocky, but at least it is better than facing a dead-end.

NewLiberty
Legendary
*
Offline Offline

Activity: 1204
Merit: 1002


Gresham's Lawyer


View Profile WWW
October 26, 2014, 01:28:31 AM
Last edit: October 26, 2014, 01:46:22 AM by NewLiberty
 #200

Gavin, if you're still reading: in my humble opinion we need a few things to move forward.

You're still the closest thing this community has to Satoshi/leadership, so I think the impetus comes from you. IMO you should update your Scalability Roadmap to reflect 40% annual increases, noting that seems able to garner consensus. Maybe a second update includes a step-by-step process for rolling out the update (fork), so people know what to expect. I think starting to talk about things in a matter-of-fact way will engender confidence and expectation of what's to come. Thanks again for your hard work.

For the rest of us I think it's helpful to be supportive, in any way we can, of pushing forward this update. Notice I've updated my signature. If we can explain to those wondering whether it's the best option that it certainly is (implementing something while covering the most possible bases), people may go along.

My sentiments exactly.  I have actually found this whole thread to be quite heartening.

NewLiberty - you have done a good job tirelessly advocating for a certain approach.  It almost seems as if every even-numbered post is yours, and every odd-numbered post is from a different poster who steps up to the plate to explain to you the flaws in your position.  You seem like a perfectionist, which in general isn't a bad thing at all.  But action is needed now, even if we don't have some perfect 'forever' solution. Fortunately we don't need a 100% consensus to move forward, just an undefined super-majority.  I'd guess we have it around the 20MB / 40% concept.

My personal opinion is 20MB /40% is a kick-ass combo.  Between that, the headers-first downloading, the O(1) miner backbone system, and (hopefully) sidechains, we are looking at the most important set of technical improvements since bitcoin started.

There have been no unaddressed flaws yet in what I have advocated, other than the fact that it does not include a concrete proposal for a specific finished algorithm.  :-/
What I have been advocating is for a flexible solution to be designed so that we won't have to do this again.  It hasn't been accomplished yet.

If folks decide that reaching an average of 1/3 full blocks is a sufficient impetus to implement something without delay, even if that implementation may well have to be adjusted again in the future (one way or the other), and in future years when it may be much harder to put through such a change... then of course the expedient 2nd Gavin solution will be implemented.

If, however, before the implementation there is a flexible proposal that doesn't also introduce unmanaged perverse incentives, I suspect folks may line up behind it.

In the meantime, I expect to continue taking this role of the loyal opposition in order to either 1) find that better solution, or 2) galvanize the consensus.
If the discussion we are having here doesn't happen publicly, and doesn't look at every option, people may think that what is selected is not the best that we can do under the circumstances.

If, after exhausting all arguments against it, the time comes to implement (probably early 2015), the discussions and debate should have concluded with every criticism having had a chance to be heard, and the best we can do at the moment being implemented.  That will either be "the simplest thing that can possibly work" or something less simple but with better chances of working indefinitely.  Either is an improvement.

FREE MONEY1 Bitcoin for Silver and Gold NewLibertyDollar.com and now BITCOIN SPECIE (silver 1 ozt) shows value by QR
Bulk premiums as low as .0012 BTC "BETTER, MORE COLLECTIBLE, AND CHEAPER THAN SILVER EAGLES" 1Free of Government