Bitcoin Forum
Author Topic: A Scalability Roadmap  (Read 12474 times)
jonny1000 (Full Member, Activity: 120)
October 06, 2014, 06:02:49 PM  #1

Please see Gavin's writeup below:
https://bitcoinfoundation.org/2014/10/a-scalability-roadmap/

I think this is a very good and interesting read with some fantastic points; however, it is likely to be considered controversial by some in this community.  In particular, a fixed schedule for increasing the block size limit over time is a significant proposal.

Is Gavin saying this should grow at 50% per year because bandwidth has been increasing at this rate in the past?  Might it not be safer to choose a rate lower than historic bandwidth growth?  Also how do we know this high growth in bandwidth will continue?

Gavin mentioned that this is "similar to the rule that decreases the block reward over time"; however, the block reward decreases by 50%, while an increase of 50% is quite different.  A 50% fall every 4 years implies that there will never be more than 21 million coins; 50% growth in the block size limit implies exponential growth forever.  Perhaps after 21 million coins is reached Bitcoin will stop growing, so if one wants the comparison to hold, the block size limit's growth rate could halve every 4 years, reaching zero growth when 21 million coins are reached, although I do not know the best solution to this.  Can anyone explain why exponential growth is a good idea?
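A rough sketch (illustrative numbers only, not a proposal) of the difference between the two schedules: 50% growth forever, versus a growth rate that itself halves every 4 years like the block reward does:

```python
# Illustrative comparison of two hypothetical block size limit schedules:
# (a) the limit grows 50% every year indefinitely;
# (b) the growth rate itself halves every 4 years, like the block reward.
limit_fixed = 1.0   # MB, schedule (a)
limit_decay = 1.0   # MB, schedule (b)
rate = 0.5
for year in range(1, 21):
    limit_fixed *= 1.5
    limit_decay *= 1 + rate
    if year % 4 == 0:
        rate /= 2   # halve the growth rate every 4 years
print(f"after 20 years: (a) {limit_fixed:.0f} MB, (b) {limit_decay:.1f} MB")
# after 20 years: (a) 3325 MB, (b) 28.5 MB
```

Under (b) the limit converges toward a ceiling rather than growing without bound, which is the closer analogue of the 21-million-coin cap.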

In my view, should volumes increase above the 7-transactions-per-second level in the short term, a quick fix like doubling the block size limit should be implemented.  A longer-term solution like an annual increase in the block size limit would require more research into transaction fees and the impact this could have on incentivising miners.  Ultimately we may need a more dynamic system where the block size limit is determined in part by transaction volume, network difficulty and transaction fees, as well as potentially a growth rate, although a more robust theoretical understanding of this system may be required before we reach that point.
 
Many thanks
Gavin Andresen (Legendary, Chief Scientist, Activity: 1652)
October 06, 2014, 06:25:02 PM  #2

Quote
Is Gavin saying this should grow at 50% per year because bandwidth has been increasing at this rate in the past?  Might it not be safer to choose a rate lower than historic bandwidth growth?  Also how do we know this high growth in bandwidth will continue?

Yes, that is what I am saying.

"Safer": there are two competing threats here. Raise the block size too slowly and you discourage transactions and increase their price. The danger is that Bitcoin becomes irrelevant for anything besides huge transactions, is used only by big corporations, and is too expensive for individuals. Hurray, we just reinvented the SWIFT or ACH systems.

Raise it too quickly and it gets too expensive for ordinary people to run full nodes.

So I'm saying: the future is uncertain, but there is a clear trend. Let's follow that trend, because it is the best predictor we have of what will happen.

If the experts are wrong, and bandwidth growth (or CPU growth or memory growth or whatever) slows or stops in ten years, then fine: change the largest-block-I'll-accept formula. Lowering the maximum is easier than raising it (lowering is a soft-forking change that would only affect stubborn miners who insisted on creating larger-than-what-the-majority-wants blocks).


RE: a quick fix like doubling the size:

Why doubling? Please don't be lazy; at least do some back-of-the-envelope calculations to justify your numbers (to save you some work: the average Bitcoin transaction is about 250 bytes). The typical broadband home internet connection can support much larger blocks today.
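That back-of-the-envelope calculation might look like this (a sketch: the 250-byte average comes from the post above, the 10-minute block interval is the protocol target, and the block sizes passed in are just examples):

```python
# Back-of-the-envelope throughput for a given block size limit.
AVG_TX_BYTES = 250        # rough average transaction size
BLOCK_INTERVAL_S = 600    # one block every ~10 minutes on average

def tx_per_second(block_size_bytes: int) -> float:
    """Transactions per second a given block size limit can carry."""
    return block_size_bytes / AVG_TX_BYTES / BLOCK_INTERVAL_S

print(f"{tx_per_second(1_000_000):.1f} tps at the current 1 MB limit")   # ~6.7
print(f"{tx_per_second(2_000_000):.1f} tps if the limit were doubled")   # ~13.3
```

This is why doubling alone buys so little headroom: it takes the ceiling from roughly 7 to roughly 13 transactions per second.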

How often do you get the chance to work on a potentially world-changing project?
jonny1000 (Full Member, Activity: 120)
October 06, 2014, 06:54:59 PM  #3

Dear Gavin

Thanks for your reply.  When I said "safer" I meant the risk that average bandwidth speeds grow more slowly in the future than their historic rates.  I totally agree that reducing the block size limit would only be a soft fork, rather than the hard fork required to increase the limit; I hadn't thought of that.  I guess you could be right; maybe the best option is to follow trends.

Quote
Raise the block size too slowly and you discourage transactions and increase their price.  The danger is Bitcoin becomes irrelevant for anything besides huge transactions, and is used only by big corporations and is too expensive for individuals. Hurray, we just reinvented the SWIFT or ACH systems.

This is a good point; however, what about all the people like Andreas Antonopoulos, who constantly say things like, "Bitcoin is not a faster, cheaper, more efficient way of shopping; thinking of it this way misses the point. It's more than that: it's a distributed platform for open, permissionless, trustless, blah blah blah… innovation."?
This is also an interesting point of view, and I think it would be good to somehow find a balance between it and faster, cheaper transactions.  But I think you are right: somehow the block size limit needs to increase.
 
Yes, I admit I was being lazy by saying doubling.  A normal home internet connection can easily keep up; however, is there not a potential danger of block propagation times becoming larger, which also needs to be considered, at least until IBLT has been "fully" implemented?
Cubic Earth (Sr. Member, Activity: 390)
October 06, 2014, 07:14:13 PM  #4

I like the block size scaling idea.

It:

1) Grows the on-chain transaction limit.

2) Should keep the network within reach of hobbyists, and therefore, as decentralized as now.

3) Is extremely simple, so everyone should be able to understand it.

4) Provides some certainty going forward.  Since bitcoin is a foundation upon which many things are built, having that certainty sooner is better.


A question: when would the 50% increases start?  Could the progression be kick-started by jumping directly to, say 2MB or 4MB, and then doubling thereafter?  Or would that put too much strain on the network?

BTC tip jar: 1FDWkfHkEef72AwQazThWZr3GH4LiJtWr2
https://bitco.in/forum/members/cubicearth.2359/
jonny1000 (Full Member, Activity: 120)
October 06, 2014, 07:51:48 PM  #5

This is what 50% growth per annum looks like.  How will miners earn income when the block reward is low and the block size limit is increasing at such an exponential rate that transaction fees will also be low, even if demand grows at, say, 40% per annum?
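One way to sketch that worry (purely illustrative rates, not data): if capacity compounds faster than demand, blocks get steadily emptier and the competition for block space that would otherwise support fees shrinks:

```python
# Illustrative only: capacity growing 50%/yr vs demand growing 40%/yr.
# Utilization (demand / capacity) shrinks steadily, and with it the
# fee pressure from competition for scarce block space.
capacity, demand = 1.0, 1.0   # normalized starting points (hypothetical)
for year in range(1, 21):
    capacity *= 1.5
    demand *= 1.4
print(f"utilization after 20 years: {demand / capacity:.1%}")
# utilization after 20 years: 25.2%
```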

elendir (Jr. Member, Activity: 58)
October 06, 2014, 08:00:31 PM  #6

I wonder what the miners think about the block size proposal. I believe they can choose the size of a block to be anywhere between 0 and 1MB today. Should the 1MB limit be raised, the final decision will remain with the miners, right?

Coinee = First cash-to-cash money transfer backed by bitcoin.
Cubic Earth (Sr. Member, Activity: 390)
October 06, 2014, 08:27:44 PM  #7

Quote
I wonder what the miners think about the blocksize proposal. I believe they can choose the size of the block to be anywhere between 0 to 1MB today. Should the 1MB limit be raised, the final decision will remain with the miners, right?

Correct.  Miners do not have to include any transactions in their blocks if they so choose.

Also, miners do not have to build off any particular block.  Let's say a miner starts issuing blocks filled to the brim with 1-satoshi transactions.  The other miners could all agree (or collude, if you see it that way) to reject the 'spam' block and build off the previous one.

alpet (Legendary, Activity: 1681)
October 07, 2014, 12:49:15 PM  #8

Hi all!
Why not allow partial nodes alongside full nodes? Just for example: I could deploy partial nodes on some office computers and set a 10GB disk space quota for every node. Some of these nodes would serve the blockchain for 2011-2012, others for 2013-2014. I think this solution is a little more flexible and spreads the load across many more users.
P.S.: This text was automatically translated from Russian.

Novacoin we trust!
Sail by Aeroflot trains.
jonny1000 (Full Member, Activity: 120)
October 07, 2014, 01:02:14 PM  #9

I have tried to analyze what is going on graphically.  As Gavin said, typically there is a price boom which causes higher demand for transactions.  This is represented by a shift to the right in the demand curve below.  In order to keep the transaction price low, a similar shift to the right may be required in the supply curve, which could be caused by an increase in the block size limit.  I think it might be a bit presumptuous to assume demand will continue to grow exponentially, especially at such a high rate.

[Chart: Current supply & demand curves for space in blocks]

[Chart: Shift in supply & demand curves]

Note: figures for illustrative purposes only.
delulo (Sr. Member, Activity: 441)
October 07, 2014, 05:43:21 PM  #10

Am I interpreting this
Quote
Imposing a maximum size that is in the reach of any ordinary person with a pretty good computer and an average broadband internet connection eliminates barriers to entry that might result in centralization of the network.
correctly, in that it refers to barriers to entry for running a full node?

instagibbs (Member, Activity: 114)
October 07, 2014, 06:18:47 PM  #11

Quote
Hi all!
Why not allow partial nodes alongside full nodes? Just for example: I could deploy partial nodes on some office computers and set a 10GB disk space quota for every node. Some of these nodes would serve the blockchain for 2011-2012, others for 2013-2014. I think this solution is a little more flexible and spreads the load across many more users.
P.S.: This text was automatically translated from Russian.

https://github.com/bitcoin/bitcoin/pull/4701

It's being worked on.
Peter R (Legendary, Activity: 1050)
October 08, 2014, 04:26:47 AM  #12

Originally, I imagined a floating blocksize limit based on demand, but after reading Gavin's roadmap I support his recommendation.  The limit should be increased (in a codified way) at a constant yearly % based on historical growth rates for internet bandwidth.  It's important that the blocksize limit be known a priori in order to give innovators more confidence to build on top of our network.

On a somewhat related note, here's a chart that shows Bitcoin's market cap overlaid with the daily transaction volume (excluding popular addresses).  The gray line extrapolates when in time and at what market cap we might begin to bump into the current 1 MB limit.    
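The same extrapolation can be sketched numerically (all inputs here are hypothetical round numbers, not the chart's data; the ceiling math assumes ~250-byte transactions and ~144 blocks per day):

```python
import math

AVG_TX_BYTES = 250      # rough average transaction size
BLOCKS_PER_DAY = 144    # one block per ~10 minutes
cap = 1_000_000 // AVG_TX_BYTES * BLOCKS_PER_DAY   # ~576,000 tx/day at 1 MB

current = 80_000   # hypothetical present-day tx/day
growth = 0.5       # hypothetical 50%/yr demand growth
years = math.log(cap / current) / math.log(1 + growth)
print(f"capacity {cap:,} tx/day; ~{years:.1f} years until full blocks")
```

With those assumed inputs the 1 MB limit binds in roughly five years; the point is the method, not the particular date.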


Run Bitcoin Unlimited (www.bitcoinunlimited.info)
Cubic Earth (Sr. Member, Activity: 390)
October 08, 2014, 04:50:23 AM  #13

I was expecting Gavin's roadmap to be hotly debated, but this thread has been relatively quiet.  Is the debate unfolding somewhere else?  Or is there just not much debate about it?

Nice charts, PeterR.

theymos (Administrator, Legendary, Activity: 2758)
October 08, 2014, 05:37:37 AM  #14

Quote
If the experts are wrong, and bandwidth growth (or CPU growth or memory growth or whatever) slows or stops in ten years, then fine: change the largest-block-I'll-accept formula. Lowering the maximum is easier than raising it (lowering is a soft-forking change that would only affect stubborn miners who insisted on creating larger-than-what-the-majority-wants blocks).

Lowering the limit afterward wouldn't be a soft-forking change if the majority of mining power was creating too-large blocks, which seems possible.

I think that a really conservative automatic increase would be OK, but 50% yearly sounds too high to me. If this happens to exceed some residential ISP's actual bandwidth growth, then eventually that ISP's customers will be unable to be full nodes unless they pay for a much more expensive Internet connection. The idea of this sort of situation really concerns me, especially since the loss of full nodes would likely be gradual and easy to ignore until after it becomes very difficult to correct.

As I mentioned on Reddit, I'm also not 100% sure that I agree with your proposed starting point of 50% of a hobbyist-level Internet connection. This seems somewhat burdensome for individuals. It's entirely possible that Bitcoin can be secure without a lot of individuals running full nodes, but I'm not sure about this.

Determining the best/safest way to choose the max block size isn't really a technical problem; it has more to do with economics and game theory. I'd really like to see some research/opinions on this issue from economists and other people who specialize in this sort of problem.

1NXYoJ5xU91Jp83XfVMHwwTUyZFK64BoAD
jonald_fyookball (Legendary, Activity: 1176)
October 08, 2014, 07:27:56 AM  #15

I'm not really qualified to comment on the merits of Gavin's plan, but on the surface it sounds like a thoughtful proposal.  I must say, it is exciting to see solutions to scalability being proposed, and I'm sure it is encouraging to the greater Bitcoin community.  Just the fact that a plan is on the table should be a nice jab at the naysayers/skeptics who have been "ringing the alarm bell" on this issue.

Although, in a sense, they are correct that the issue requires action.  I would like to thank Gavin and the other developers for all the great work they've done and continue to do for Bitcoin.

Hats off to you sir.

spin (Sr. Member, Activity: 359)
October 08, 2014, 10:25:25 AM  #16

The post is a great read on the direction of ongoing development. Such posts are really helpful for hobbyists like myself to get an idea of where things are headed.  I'm keen on testing and supporting some of the new stuff; I'd love to try out headers-first syncing, for example.

Quote
After 12 years of bandwidth growth that becomes 56 billion transactions per day on my home network connection — enough for every single person in the world to make five or six bitcoin transactions every single day. It is hard to imagine that not being enough; according the the Boston Federal Reserve, the average US consumer makes just over two payments per day.

I have no idea, but the average consumer is not the only one making transactions.  There are also businesses, so the two-payments-per-day statistic is not all that's relevant.   But 5 to 6 transactions per person per day should cover that too Smiley
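For what it's worth, the quoted 56-billion figure can be roughly reconstructed (assumed inputs; Gavin's post does not give the exact ones): a home link carrying about 1.25 MB/s of 250-byte transactions, compounded at 50% per year for 12 years.

```python
# Assumed inputs: a ~10 Mbit/s (1.25 MB/s) home connection saturated with
# ~250-byte transactions, with bandwidth growing 50%/yr for 12 years.
link_bytes_per_s = 1.25e6
tx_per_s_today = link_bytes_per_s / 250          # ~5,000 tx/s
tx_per_day_future = tx_per_s_today * 1.5**12 * 86_400
print(f"{tx_per_day_future / 1e9:.0f} billion tx/day")
# 56 billion tx/day
```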

Small typo:
Quote
I expect the initial block download problem to be mostly solved in the next relase or three of Bitcoin Core. The next scaling problem that needs to be tackled is the hardcoded 1-megabyte block size limit that means the network can suppor only approximately 7-transactions-per-second.


If you liked this post buy me a beer.  Beers are quite cheap where I live!
194YjsiwmGm3hcbPcJWWyzRAS9CQLX1fJL
cbeast (Donator, Legendary, Activity: 1736)
October 08, 2014, 10:51:55 AM  #17

Is there a relationship between hashrate and bandwidth? By increasing the block size, and thereby bandwidth usage, would that eat into the bandwidth available for hashing? For example, if a peta-miner maxes out their bandwidth with just hashrate, then increasing the block size would lower their hashrate. They would have to buy more bandwidth, if it is available. It might favor miners living where there is better internet connectivity rather than cheap electricity or a cold climate. Enlarging the block size could thus help decentralize mining.

Any significantly advanced cryptocurrency is indistinguishable from Ponzi Tulips.
TierNolan (Legendary, Activity: 1120)
October 08, 2014, 03:07:06 PM  #18

What is the plan for handling the 32MB message limit?

Would the 50% per year increase be matched by a way to handle unlimited block sizes?

Blocks would have to be split over multiple messages (or the message limit increased).
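Splitting could be as simple as chunking the serialized block (a sketch only, not the actual P2P protocol, which would also need message framing and reassembly rules):

```python
MAX_MSG_BYTES = 32 * 1024 * 1024   # the current 32 MB protocol message limit

def split_block(raw_block: bytes, max_len: int = MAX_MSG_BYTES) -> list[bytes]:
    """Split a serialized block into ordered chunks of at most max_len bytes."""
    return [raw_block[i:i + max_len] for i in range(0, len(raw_block), max_len)]

# A hypothetical 70 MB block would need 3 messages (32 + 32 + 6 MB).
chunks = split_block(b"\x00" * (70 * 1024 * 1024))
print(len(chunks))  # 3
```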

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
Skoupi (Sr. Member, Activity: 252)
October 08, 2014, 03:13:43 PM  #19

Quote
Raise it too quickly and it gets too expensive for ordinary people to run full nodes.

Ordinary people aren't supposed to run full nodes anyway  Tongue
Mrmadden (Newbie, Activity: 6)
October 08, 2014, 03:58:47 PM  #20

50% is conservative relative to extrapolated storage and computing cost efficiencies, which have been improving by roughly 100% and 67% annually.

50% is very risky relative to extrapolated bandwidth costs, which have been improving by only about 50% annually.

I would dial it back from 50% to 40%.  Hobbyists will want to run full nodes over residential connections, and 50% is just too close for comfort.