Author Topic: Increasing the block size is a good idea; 50%/year is probably too aggressive  (Read 14267 times)
solex
Legendary
Offline
Activity: 1078
Merit: 1002
100 satoshis -> ISO code
October 23, 2014, 09:28:00 PM
#161

David, this is what you need

At this point it might be advisable to relax the presentation with some charts based on actual data ;).

Unfortunately I'm using quite old data (missing some blocks, so the longest chain ends at block 210060), as you can see from this query result.

Code:
     last_block_date     | blocks_in_db | avg_blocktime_seconds | avg_blocktime_minutes
------------------------+--------------+-----------------------+-----------------------
 2012-11-29 01:19:00+01 |       210060 |  586.2221984194991907 |    9.7769351613824622

So here we go: some histograms, click images for slightly larger versions.

[histogram images: distribution of block solve times in 1-minute bins; not preserved in this copy]
Observations and clarifications/notes:

  • I'm looking at overlapping sequences, so a block that takes 127 minutes to calculate would result in multiple sequences being counted
  • The case of a 3-block sequence taking at least 127 minutes to find happened 759 out of 210,060 times (0.3613%)
  • The case of a 4-block sequence taking at least 127 minutes to find happened 1,551 out of 210,060 times (0.7383%)
  • 135,421 blocks (out of 210,060) have been solved in less than 10 minutes (64.47%)
  • There can be negative block times because miners' clocks can be unsynced
  • The block that took longest to calculate was block #2 (7,719 minutes). It might've been block #1, but we don't know how long that took
  • I put a confusing date in the upper right corner; the data is from 2012-11-29
  • The first and last "bins" include the rest of the data (for example, the last bin contains the number of blocks that took 127 minutes or more to find)
  • Surprisingly to me, the "bin" with the most blocks is the 1-2 minute bin (1:00 to 1:59.999) (bar labeled "1" in the charts)

The queries I used are here: http://pastebin.com/tPg1RQtG
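
For anyone without a block database handy, here is a rough Python sketch of the same binning (the variable names are mine, and `timestamps` is assumed to be the block header times, in unix seconds and chain order; the real SQL is in the pastebin above):

Code:
# Rough sketch: bin inter-block times into 1-minute buckets.
# `timestamps` = block header times (unix seconds) in chain order.
def blocktime_histogram(timestamps, max_bin=127):
    bins = [0] * (max_bin + 1)          # bins[i] = blocks taking i..i+1 minutes
    for prev, curr in zip(timestamps, timestamps[1:]):
        minutes = (curr - prev) / 60.0  # can be negative: unsynced miner clocks
        idx = max(0, min(int(minutes), max_bin))  # first/last bins absorb the rest
        bins[idx] += 1
    return bins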

Does this stuff look like it could be correct to you guys?

EDIT: some corrections

acoindr
Legendary
Offline
Activity: 1050
Merit: 1002
October 24, 2014, 12:35:57 AM
#162

NewLiberty, we can continue back and forth trying to sway one another and who knows how that will turn out. How about the following compromise:

We implement Gavin's plan - go to 20MB blocks and 50% annual increases thereafter. That is the default. However, we add a voting component: we make it possible to restrain the increase by, say, 1/2 if enough blocks contain some flag in the block header. It could also be used to raise the scheduled increase by 1/2 if the model proves too conservative for computing growth. I think a header variable for this was mentioned in the first round of the block size debate.

I think this is the best of both worlds. It provides a measure of predictability and simplicity, while allowing the community to bend capacity more in line with growth over time if needed. What do you think?
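
To make the idea concrete, here is a rough sketch in Python (the flag encoding, window size, and thresholds are illustrative guesses, not part of the proposal):

Code:
# Illustrative sketch of header-flag voting over a trailing window.
BASE_GROWTH = 0.50   # Gavin's default: +50% per year
WINDOW      = 2016   # e.g. one retarget period

def growth_rate(recent_flags):
    """recent_flags: one entry per block, 'slower', 'faster', or None."""
    if len(recent_flags) < WINDOW:
        return BASE_GROWTH
    window = recent_flags[-WINDOW:]
    if window.count('slower') > WINDOW // 2:
        return BASE_GROWTH / 2      # restrain the scheduled increase by 1/2
    if window.count('faster') > WINDOW // 2:
        return BASE_GROWTH * 1.5    # raise the scheduled increase by 1/2
    return BASE_GROWTH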
NewLiberty
Legendary
Offline
Activity: 1204
Merit: 1002
Gresham's Lawyer
October 24, 2014, 03:21:08 AM
#163

Quote from: acoindr on October 24, 2014, 12:35:57 AM
NewLiberty, we can continue back and forth trying to sway one another and who knows how that will turn out. How about the following compromise:

We implement Gavin's plan - go to 20MB blocks and 50% annual increases thereafter. That is the default. However, we add a voting component: we make it possible to restrain the increase by, say, 1/2 if enough blocks contain some flag in the block header. It could also be used to raise the scheduled increase by 1/2 if the model proves too conservative for computing growth. I think a header variable for this was mentioned in the first round of the block size debate.

I think this is the best of both worlds. It provides a measure of predictability and simplicity, while allowing the community to bend capacity more in line with growth over time if needed. What do you think?

I don't recall Gavin ever proposing what you are suggesting here.  The 1st round was 50% per year; the 2nd proposal was 20MB + 40% per year, yes?


I'm less a fan of voting than you might imagine.
It is mostly useful when there are two bad choices rather than one good one, and a choice is forced.  I maintain hope for a good solution yet, one that gives us an easy consensus.

This flag gives only miners the votes?  That doesn't seem better than letting the transactions, or the miner fee, be the votes.
It's better than a bad idea, though, as it does provide some flexibility and sensitivity to future realities, and relies on proof of work for voting.
It fails the test of being a self-regulating approach, and remains based on arbitrary guesses.
So I don't think it is the "best" of either world, but also not the worst.  More like an engineering trade-off.

Presumably this is counting years by blocks, yes?
This would give a 100MB max block size in 2018 and a gigabyte 6 years later, but blocks are coming faster than the years, so it likely wouldn't take that long.

At such increases, Bitcoin could support (current) Visa processing peak rates within a decade, and a lot sooner if the votes indicate faster and the block solving doesn't slow too much.  (perhaps as soon as 6 years, by 2020)
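
Checking those numbers (counting in calendar years from 20MB in 2014; this is just arithmetic, not a forecast):

Code:
# Back-of-the-envelope check of the +50%/year schedule.
size_mb = 20.0
for year in range(2014, 2025):
    print(year, round(size_mb), "MB")
    size_mb *= 1.5
# 2018 -> ~101 MB; 2024 -> ~1153 MB, i.e. roughly a gigabyte
# six years after the 100MB mark.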

The idea has a lot of negatives.  Possibly it's fixable.
Thank you for bringing forward the suggestion.

NewLiberty
Legendary
Offline
Activity: 1204
Merit: 1002
Gresham's Lawyer
October 24, 2014, 03:38:46 AM
Last edit: October 24, 2014, 04:45:58 AM by NewLiberty
#164

Quote from: btchris
My (ideal) goals, in particular, would be to (1) never kick out grandma, and (2) never prevent a miner from including a legitimate transaction. (edited to add: those are in priority order)

We share these design goals, and priority.  They aren't comprehensive for me though.

I'd add (at least)
3) provide conditions conducive for mining when the transaction fees are supporting the network  
4) avoid future changes on the same issue.
And of course avoid introducing new unmitigated risks (more of a criterion than a goal).


If we wanted to be brutally pedantic on ourselves we could kick around the definitions of who grandma might be, and what makes a transaction legitimate, but I agree with the sentiment entirely.

Syke
Legendary
Offline
Activity: 3878
Merit: 1193
October 24, 2014, 05:31:59 AM
#165

Quote from: NewLiberty on October 24, 2014, 03:38:46 AM
If we wanted to be brutally pedantic on ourselves we could kick around the definitions of who grandma might be, and what makes a transaction legitimate, but I agree with the sentiment entirely.

I'd rather not implement a grandma-cap on bitcoin's growth. Grandma doesn't need to run a full node. She can use an SPV or other thin client.

-ck
Legendary
Offline
Activity: 4102
Merit: 1632
Ruu \o/
October 24, 2014, 05:53:35 AM
#166

Quote from: David Rabahy
I dunno; here I am watching for blocks at or near the limit of 1MB and along comes ... it just seems strange to me https://blockchain.info/block-height/326639 -- apparently the miner couldn't be bothered to include even one transaction other than the coinbase transaction in the block?  Could the pool have been empty from his point of view?
This is a common optimisation virtually all crappy pools use shortly after a new block: since their software can't scale to get miners working on a new block full of transactions quickly, they just broadcast a blank sheet for the first lot of work after a block change. Most pools blame bitcoind for being slow to accept a block and generate a new template, and it is indeed quite slow, but it's obviously more than just that (since I don't ever include transaction-free blocks in my own pool software).
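
In rough pseudo-Python, the pattern those pools follow looks like this (a simplified sketch, not ckpool's actual code; the helper names are invented):

Code:
# On a block change, a lazy pool pushes coinbase-only work at once
# and only later switches miners onto a full template.
def on_new_block(prev_hash, push_work, request_full_template):
    # 1) instant, transaction-free work so hashrate isn't idle
    push_work({'prev': prev_hash, 'txs': []})
    # 2) meanwhile build a real template (the slow bitcoind call,
    #    e.g. getblocktemplate), then switch to it
    template = request_full_template()
    push_work({'prev': prev_hash, 'txs': template['transactions']})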

Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel
2% Fee Solo mining at solo.ckpool.org
-ck
solex
Legendary
Offline
Activity: 1078
Merit: 1002
100 satoshis -> ISO code
October 24, 2014, 06:01:22 AM
#167

Quote from: David Rabahy
I dunno; here I am watching for blocks at or near the limit of 1MB and along comes ... it just seems strange to me https://blockchain.info/block-height/326639 -- apparently the miner couldn't be bothered to include even one transaction other than the coinbase transaction in the block?  Could the pool have been empty from his point of view?
Quote from: -ck on October 24, 2014, 05:53:35 AM
This is a common optimisation virtually all crappy pools use shortly after a new block: since their software can't scale to get miners working on a new block full of transactions quickly, they just broadcast a blank sheet for the first lot of work after a block change. Most pools blame bitcoind for being slow to accept a block and generate a new template, and it is indeed quite slow, but it's obviously more than just that.

Gee. When gmaxwell said that there was a lot of low-hanging fruit, in terms of possible improvements, perhaps it was not obvious just how low and how dangling some of that fruit actually is.

btchris
Hero Member
Offline
Activity: 672
Merit: 504
a.k.a. gurnec on GitHub
October 24, 2014, 12:38:05 PM
#168

Quote from: NewLiberty on October 24, 2014, 03:38:46 AM
If we wanted to be brutally pedantic on ourselves we could kick around the definitions of who grandma might be, and what makes a transaction legitimate, but I agree with the sentiment entirely.

Quote from: Syke on October 24, 2014, 05:31:59 AM
I'd rather not implement a grandma-cap on bitcoin's growth. Grandma doesn't need to run a full node. She can use an SPV or other thin client.

Although we can argue about details, we (or at least I) have been using "grandma" as shorthand for "Bitcoin hobbyist", which Gavin had equated to "somebody with a current, reasonably fast computer and Internet connection, running an up-to-date version of Bitcoin Core and willing to dedicate half their CPU power and bandwidth to Bitcoin." Is that reasonable?
btchris
Hero Member
Offline
Activity: 672
Merit: 504
a.k.a. gurnec on GitHub
October 24, 2014, 12:53:48 PM
#169

Quote from: btchris
My (ideal) goals, in particular, would be to (1) never kick out grandma, and (2) never prevent a miner from including a legitimate transaction. (edited to add: those are in priority order)

Quote from: NewLiberty on October 24, 2014, 03:38:46 AM
We share these design goals, and priority.  They aren't comprehensive for me though.

I'd add (at least)
3) provide conditions conducive for mining when the transaction fees are supporting the network
4) avoid future changes on the same issue.
And of course avoid introducing new unmitigated risks (more of a criterion than a goal).

It seems to me that (1) and (2) could both be implemented with either the static (Gavin's) method or some reactive method, although I suspect the reactive method can do (1) more safely/conservatively. If a reactive method can do (2) safely enough (I suspect it could), I'd prefer it. A reactive method seems much more likely to meet (4).

If I understand you correctly, (3) takes us back to an artificial cap on block size to prevent a perceived, as Gavin put it, "Transaction Fee Death Spiral." I've already made my rant on that subject; no need to repeat it.

I'm of the opinion that reaching consensus on (3) is more important, and possibly more difficult, than any static-vs-reactive consensus. (3) is an economic question, whereas static-vs-reactive is closer to an implementation detail.
NewLiberty
Legendary
Offline
Activity: 1204
Merit: 1002
Gresham's Lawyer
October 24, 2014, 01:34:04 PM
#170

Quote from: NewLiberty on October 24, 2014, 03:38:46 AM
If we wanted to be brutally pedantic on ourselves we could kick around the definitions of who grandma might be, and what makes a transaction legitimate, but I agree with the sentiment entirely.

Quote from: Syke on October 24, 2014, 05:31:59 AM
I'd rather not implement a grandma-cap on bitcoin's growth. Grandma doesn't need to run a full node. She can use an SPV or other thin client.
lol @ grandma-cap for Bitcoin
We agree, in this case "grandma" is substituting for "bitcoin enthusiast" WRT bandwidth availability, I think.
I'm guessing he was thinking that our enthusiast might be living with grandma, or maybe is grandma, IDK?

btchris
Hero Member
Offline
Activity: 672
Merit: 504
a.k.a. gurnec on GitHub
October 24, 2014, 01:40:20 PM
#171

Quote from: Syke on October 24, 2014, 05:31:59 AM
I'd rather not implement a grandma-cap on bitcoin's growth. Grandma doesn't need to run a full node. She can use an SPV or other thin client.
Quote from: NewLiberty on October 24, 2014, 01:34:04 PM
lol @ grandma-cap for Bitcoin
We agree, in this case "grandma" is substituting for "bitcoin enthusiast" WRT bandwidth availability, I think.
I'm guessing he was thinking that our enthusiast might be living with grandma, or maybe is grandma, IDK?

Actually I picked the term up from NewLiberty's post, but yes that's what I was assuming it meant. Should the term "grandma-cap" make it into the BIP?
NewLiberty
Legendary
Offline
Activity: 1204
Merit: 1002
Gresham's Lawyer
October 24, 2014, 01:59:47 PM
Last edit: October 25, 2014, 03:03:23 AM by NewLiberty
#172

Quote from: Syke on October 24, 2014, 05:31:59 AM
I'd rather not implement a grandma-cap on bitcoin's growth. Grandma doesn't need to run a full node. She can use an SPV or other thin client.
Quote from: NewLiberty on October 24, 2014, 01:34:04 PM
lol @ grandma-cap for Bitcoin
We agree, in this case "grandma" is substituting for "bitcoin enthusiast" WRT bandwidth availability, I think.
I'm guessing he was thinking that our enthusiast might be living with grandma, or maybe is grandma, IDK?
Quote from: btchris on October 24, 2014, 01:40:20 PM
Actually I picked the term up from NewLiberty's post, but yes that's what I was assuming it meant. Should the term "grandma-cap" make it into the BIP?

Ah yes, the backstop reference: grandma at the ballpark watching the grandkid play, protected by the backstop.

It's that fence behind the kid that protects them from the wild pitch and the thrown bat.

NewLiberty
Legendary
Offline
Activity: 1204
Merit: 1002
Gresham's Lawyer
October 24, 2014, 02:37:20 PM
Last edit: October 25, 2014, 12:11:44 PM by NewLiberty
#173

Quote from: btchris
My (ideal) goals, in particular, would be to (1) never kick out grandma, and (2) never prevent a miner from including a legitimate transaction. (edited to add: those are in priority order)
Quote from: NewLiberty on October 24, 2014, 03:38:46 AM
We share these design goals, and priority.  They aren't comprehensive for me though.

I'd add (at least)
3) provide conditions conducive for mining when the transaction fees are supporting the network
4) avoid future changes on the same issue.
And of course avoid introducing new unmitigated risks (more of a criterion than a goal).
Quote from: btchris on October 24, 2014, 12:53:48 PM
It seems to me that (1) and (2) could both be implemented with either the static (Gavin's) method or some reactive method, although I suspect the reactive method can do (1) more safely/conservatively. If a reactive method can do (2) safely enough (I suspect it could), I'd prefer it. A reactive method seems much more likely to meet (4).

If I understand you correctly, (3) takes us back to an artificial cap on block size to prevent a perceived, as Gavin put it, "Transaction Fee Death Spiral." I've already made my rant on that subject; no need to repeat it.

I'm of the opinion that reaching consensus on (3) is more important, and possibly more difficult, than any static-vs-reactive consensus. (3) is an economic question, whereas static-vs-reactive is closer to an implementation detail.

I think you are missing the point entirely on #3, probably my fault for being overly brief there and not really explaining the point in this context.

The artificial cap on block size would fail the test of #3.  So would a too-high max block size, if node maintenance and storage costs make processing transactions infeasible when supported only by TX fees.  We have never yet seen a coin that survives on transaction-fee-supported mining.  Bitcoin survives on its inflation.  What is sought here is compensation at an appropriate level.  We don't know what that level is, but it may be something like a fraction of a percent of all coins.
Currently TX fees are 1/300th of miner compensation.  After the next halving, we may be around 1/100 if TX volume continues to grow.  Fees will still be well within marginal costs, and so still not significant.
This is fundamentally a centralisation risk, and a security risk if perverse incentives are created.
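
To put rough numbers on that ratio (the fee figures below are assumptions chosen only to make the stated ratios come out; amounts are BTC per block):

Code:
# Back-of-the-envelope check of the fee-to-compensation ratios.
subsidy = 25.0                 # BTC per block in 2014
fees    = subsidy / 299        # ~0.084 BTC/block -> fees are 1/300 of reward
print(fees / (subsidy + fees))                    # ~0.0033  (1/300)

subsidy_after = 12.5                              # after the 2016 halving
fees_grown    = fees * 1.5                        # "if TX continue to grow"
print(fees_grown / (subsidy_after + fees_grown))  # ~0.0099  (about 1/100)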

Much mining can be done with a single node.  Costs are discrete between nodes and mining and asymmetric.  If costs for node maintenance overwhelm the expected rewards at <x hashrate / y nodes then we lose all mining under that hashrate irrespective of other costs, and we lose nodes per hashrate.  People look at hashrate to determine network health and not so much at node population and distribution, but both are essential.

It is not so much an artificial limit created for profitability; it is a technical limit to preserve network resilience through node population and distribution, by being sensitive to that ratio.  Much of the discussion on blocksize economics treats mining and node maintenance as the same thing.  They aren't the same thing at all.  It's more a chain-length vs hashrate issue.
In later years, there is a very long chain, and most coins transacted will be the most recent.  Old coins are meant to get to move for free; this reduces the UTXO block depth.  We don't know how it will play out; it's uncharted territory.  #3 is more about not creating a perverse incentive that unbalances this and doesn't materialize until the distant future, than about encouraging compensation through an artificial constraint on supply.

For better clarity I should swap #3 for:

3) provide conditions conducive for node maintenance and mining when the transaction fees are supporting the network by avoiding perverse incentives.

btchris
Hero Member
Offline
Activity: 672
Merit: 504
a.k.a. gurnec on GitHub
October 24, 2014, 04:46:39 PM
#174

Quote from: NewLiberty on October 24, 2014, 02:37:20 PM
Much mining can be done with a single node.  Costs are discrete between nodes and mining[,] and [those costs are] asymmetric.
...
People look at hashrate to determine network health and not so much at node population and distribution, but both are essential.

(Text in brackets added by me to indicate what I understood you to be saying.) Agreed.

Quote from: NewLiberty on October 24, 2014, 02:37:20 PM
If costs for node maintenance overwhelm the expected rewards at <x hashrate / y nodes then we lose all mining under that hashrate irrespective of other costs, and we lose nodes per hashrate.

We lose nodes per hashrate, which is bad and leads to (or rather continues the practice of) miners selling their votes to node operators, but I don't see how we lose hashrate; we just centralize control of hashrate to amortize node maintenance costs (still bad).

Quote from: NewLiberty on October 24, 2014, 02:37:20 PM
In later years, there is a very long chain, and most coins transacted will be the most recent.
...
We don't know how it will play out; it's uncharted territory.  #3 is more about not creating a perverse incentive that unbalances this and doesn't materialize until the distant future, than about encouraging compensation through an artificial constraint on supply.

So long as the grandma-cap can be maintained, it seems like all of your discussion would already be covered. The hope has always been that new techniques (IBLT, tx pruning, UTXO commitments, etc.) will keep this possible.

However there is no way to see into the distant future. Any chosen grandma-cap could be incorrect, and any cap more restrictive than that to meet #3 could also be incorrect. I don't disagree that #3 is desirable, only that it may not be implementable. Having said that, as long as a more restrictive cap has little to no chance of interfering with #2 (never prevent a miner from including a legitimate tx), I'd have no problem with it.

Quote from: NewLiberty on October 24, 2014, 02:37:20 PM
3) provide conditions conducive for node maintenance and mining when the transaction fees are supporting the network by avoiding perverse incentives.

TL;DR - This goal implies the "only permit an exponential increase in the max blocksize during periods of demand" rule in your initial example, correct?
Syke
Legendary
Offline
Activity: 3878
Merit: 1193
October 24, 2014, 04:48:29 PM
#175

Quote from: NewLiberty on October 24, 2014, 01:34:04 PM
lol @ grandma-cap for Bitcoin
We agree, in this case "grandma" is substituting for "bitcoin enthusiast" WRT bandwidth availability, I think.
I'm guessing he was thinking that our enthusiast might be living with grandma, or maybe is grandma, IDK?

Some people want everyone to run a full node. I'm suggesting that's not a good idea. We should not limit bitcoin growth such that everyone can run a full node. Not everyone needs to run a full node to benefit from bitcoin.

BitMos
Full Member
Offline
Activity: 182
Merit: 123
"PLEASE SCULPT YOUR SHIT BEFORE THROWING. Thank U"
October 24, 2014, 04:56:09 PM
#176


Quote from: -ck on October 24, 2014, 05:53:35 AM
This is a common optimisation virtually all crappy pools use shortly after a new block: since their software can't scale to get miners working on a new block full of transactions quickly, they just broadcast a blank sheet for the first lot of work after a block change. Most pools blame bitcoind for being slow to accept a block and generate a new template, and it is indeed quite slow, but it's obviously more than just that (since I don't ever include transaction-free blocks in my own pool software).

It's a personal honor to read from you. :o

btchris
Hero Member
Offline
Activity: 672
Merit: 504
a.k.a. gurnec on GitHub
October 24, 2014, 04:57:58 PM
#177

Quote from: NewLiberty on October 24, 2014, 01:34:04 PM
lol @ grandma-cap for Bitcoin
We agree, in this case "grandma" is substituting for "bitcoin enthusiast" WRT bandwidth availability, I think.
I'm guessing he was thinking that our enthusiast might be living with grandma, or maybe is grandma, IDK?
Quote from: Syke on October 24, 2014, 04:48:29 PM
Some people want everyone to run a full node. I'm suggesting that's not a good idea. We should not limit bitcoin growth such that everyone can run a full node. Not everyone needs to run a full node to benefit from bitcoin.

A "bitcoin enthusiast" is not everyone. See Gavin's definition I just quoted above. Without at least some limit, bitcoin nodes become centralized. Also, the suggested exponential upper limits seem very unlikely to limit bitcoin growth.
acoindr
Legendary
Offline
Activity: 1050
Merit: 1002
October 24, 2014, 05:39:00 PM
Last edit: October 24, 2014, 06:42:54 PM by acoindr
#178

Quote from: NewLiberty on October 24, 2014, 03:21:08 AM
I don't recall Gavin ever proposing what you are suggesting here.  The 1st round was 50% per year; the 2nd proposal was 20MB + 40% per year, yes?

His Scalability Roadmap calls for 50% annually. He has since mentioned 40% being acceptable, but none of his proposals seem to have been accepted by you, so what's the difference? Do you feel 40% is better than 50%? Would 30% be even better? What is your fear?

That's why I asked you for a bullet point list (not a paper), to get an idea of your thinking in specifics of concern.

Quote from: NewLiberty on October 24, 2014, 03:21:08 AM
I'm less a fan of voting than you might imagine.
It is mostly useful when there are two bad choices rather than one good one, and a choice is forced.  I maintain hope for a good solution yet, one that gives us an easy consensus.

This is the problem I have with you. You seem to think there is some mystical silver bullet that simply hasn't been discovered yet, and you implore everyone to keep searching for it, for once it's found the population will cheer, exalt it to the highest, and run smiling to the voting booths in clear favor. That is a pipe dream. Somebody is always going to see things differently. There is no ideal solution, because everything is subjective and arbitrary in terms of the priorities of the advocate. The only ideal solution is to remove all areas of concern, meaning no cap at all but with everyone in the world having easy access to enough computing resources to keep up with global transaction volume. That's not our situation, so we have to deal with things as best we can. Best, again, is completely subjective. The people at keepbitcoinfree.org don't want to change the 1MB now at all. They think, for Tor and other considerations, it's necessary, but I agree with Syke that not everyone needs to be able to run a full node.

Quote from: NewLiberty on October 24, 2014, 03:21:08 AM
The idea has a lot of negatives.  Possibly it's fixable.
Thank you for bringing forward the suggestion.

Then suggest something. At least I tried moving in a direction toward your priority. Can we see that from you? Again - what I think is most important involves some measure of simplicity and predictability. We're building a global payment system, to be used potentially by enormous banks and the like; this isn't aiming to be some arbitrary system for a few geeks trading game points.
David Rabahy
Hero Member
Offline
Activity: 709
Merit: 501
October 24, 2014, 06:23:23 PM
#179

1) What is the maximum value for MAX_BLOCKSIZE functionally possible given the APIs being used?

2) What is the maximum value which has been tested successfully?  Have any sizes been tested that fail?

3) Why not just set it right now (or soon) to the value which works and leave it at that?
       3.1) What advantage is there to delaying the jump to the maximum tested value?

No miner is consistently filling up even the tiny 1MB blocks possible now.  We see no evidence of self-dealing transactions.  What are we afraid of?

Heck, jump to 20MB and grow it at 40% for 20 years; that's fine with me *but* be prepared to alter that if there is a need.  How will we know we need to jump it up faster?  A few blocks at the current maximum is hardly a reason to panic, but when the pool of transactions waiting to be blocked starts to grow without any apparent limit, then we've waited too long.
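
One way to watch for that, as a rough sketch in Python (the window is illustrative, not a tested rule, and `mempool_sizes` is an assumed per-block sample of the waiting-transaction count, not an existing API):

Code:
# Crude early-warning signal: does the backlog trend upward
# over a long window?
def backlog_alarm(mempool_sizes, window=1000):
    if len(mempool_sizes) < window:
        return False
    recent = mempool_sizes[-window:]
    first, second = recent[:window // 2], recent[window // 2:]
    # compare average backlog in the two halves of the window
    growth = (sum(second) - sum(first)) / (window // 2)
    return growth > 0   # persistent growth -> capacity warning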
acoindr
Legendary
Offline
Activity: 1050
Merit: 1002
October 24, 2014, 06:40:27 PM
#180

Quote from: David Rabahy on October 24, 2014, 06:23:23 PM
Heck, jump to 20MB and grow it at 40% for 20 years; that's fine with me *but* be prepared to alter that if there is a need.

That's fine by me. My last proposal does this. What does everyone think? I say we start building some idea of what can pass community consensus. We may need to leave NewLiberty searching for the legendary ideal solution.