Bitcoin Forum
Author Topic: Increasing the block size is a good idea; 50%/year is probably too aggressive  (Read 14296 times)
acoindr
Legendary
*
Offline Offline

Activity: 1050
Merit: 1002


View Profile
October 14, 2014, 08:24:35 PM
Last edit: October 15, 2014, 04:09:47 AM by acoindr
 #21

As I think it through, 50% per year may not be too aggressive.

Drilling down into the problem we find the last mile is the bottleneck in bandwidth:

http://en.wikipedia.org/wiki/Last_mile

That page is a great read/refresher for this subject, but basically:

Quote
The last mile is typically the speed bottleneck in communication networks; its bandwidth limits the bandwidth of data that can be delivered to the customer. This is because retail telecommunication networks have the topology of "trees", with relatively few high capacity "trunk" communication channels branching out to feed many final mile "leaves". The final mile links, as the most numerous and thus most expensive part of the system, are the most difficult to upgrade to new technology. For example, telephone trunklines that carry phone calls between switching centers are made of modern optical fiber, but the last mile twisted pair telephone wiring that provides service to customer premises has not changed much in 100 years.

I expect Gavin's great link to Nielsen's Law of Internet Bandwidth refers only to copper wire lines. Nielsen's own measurements, which have been updated through this year (and continue to be in line with his law), top out at 120 Mbps in 2014. Innovation allowing further increases on copper lines is likely near its end, although DSL was still the dominant broadband access technology globally according to a 2012 study.

The next step is fiber to the premises. A refresher on fiber-optics communication:

Quote
Fiber-optic communication is a method of transmitting information from one place to another by sending pulses of light through an optical fiber. The light forms an electromagnetic carrier wave that is modulated to carry information. First developed in the 1970s, fiber-optic communication systems have revolutionized the telecommunications industry and have played a major role in the advent of the Information Age. Because of its advantages over electrical transmission, optical fibers have largely replaced copper wire communications in core networks in the developed world. Optical fiber is used by many telecommunications companies to transmit telephone signals, Internet communication, and cable television signals. Researchers at Bell Labs have reached internet speeds of over 100 petabits per second using fiber-optic communication.

The U.S. has one of the highest ratios of Internet users to population, but is far from leading the world in bandwidth. Being first in technology isn't always advantageous (see iPhone X vs iPhone 1). Japan leads FTTP with 68.5 percent penetration of fiber-optic links, with South Korea next at 62.8 percent. The U.S. by comparison is in 14th place with 7.7 percent. Similar to users leapfrogging to mobile phones for technology-driven services in parts of Africa, I expect many places to go directly to fiber as Internet usage increases globally.

Interestingly, fiber is a future-proof technology in contrast to copper, because once laid future bandwidth increases can come from upgrading end-point optics and electronics without changing the fiber infrastructure.

So while fiber may be expensive to deploy initially, once it's in place I foresee deviation from Nielsen's Law to the upside. Indeed, in 2012 Wilson Utilities in Wilson, North Carolina rolled out FTTH (fiber to the home) with speed offerings of 20/40/60/100 megabits per second. In late 2013 they achieved 1 gigabit fiber to the home.
trout
Sr. Member
****
Offline Offline

Activity: 333
Merit: 252


View Profile
October 14, 2014, 09:16:17 PM
 #22

It doesn't matter how high the physical limit is: with an exponential growth rule (any x% per year) the limit will eventually be reached and exceeded, at which point keeping the rule is as good as making the max size infinite.


why not linear growth, like  +n MB per block halving, or quadratic like +n MB per n'th block halving?
Gavin Andresen
Legendary
*
qt
Offline Offline

Activity: 1652
Merit: 2301


Chief Scientist


View Profile WWW
October 14, 2014, 10:07:23 PM
 #23

why not linear growth, like  +n MB per block halving, or quadratic like +n MB per n'th block halving?

Because network bandwidth, CPU, main memory, and disk storage (the potential bottlenecks) are all growing exponentially right now, and are projected to continue growing exponentially for the next couple decades.

Why would we choose linear growth when the trend is exponential growth?

Unless you think we should artificially limit Bitcoin itself to linear growth for some reason. Exponential growth in number of users and usage is what we want, yes?

Syke
Legendary
*
Offline Offline

Activity: 3878
Merit: 1193


View Profile
October 14, 2014, 11:45:38 PM
 #24

To answer your question of what would happen if the block size were increased to 1 GB tomorrow: it would introduce new attack vectors which, if exploited, would require intervention by miners and developers to resolve.

Like what? What "new" attack vectors? It is already quite cheap to attack the current 1 MB blocksize. What would it cost to attack a 1 GB blocksize vs the current 1 MB blocksize?

NewLiberty
Legendary
*
Offline Offline

Activity: 1204
Merit: 1002


Gresham's Lawyer


View Profile WWW
October 15, 2014, 12:18:33 AM
 #25

To answer your question of what would happen if the block size were increased to 1 GB tomorrow: it would introduce new attack vectors which, if exploited, would require intervention by miners and developers to resolve.

Like what? What "new" attack vectors? It is already quite cheap to attack the current 1 MB blocksize. What would it cost to attack a 1 GB blocksize vs the current 1 MB blocksize?
The cost is not that significant.  Heck, the whole BTC market cap is not that significant.

If there were 6 GB block size bloat per hour?
A financial attack could do this independently.
Miners could do this free-ish.
Small miners would fail, as would all hobby miners.

Full nodes would become centralized, increased 51% risks, etc.
These are just the obvious.  No more decentralisation for Bitcoin.

Syke
Legendary
*
Offline Offline

Activity: 3878
Merit: 1193


View Profile
October 15, 2014, 05:51:07 AM
 #26

The cost is not that significant.  Heck, the whole BTC market cap is not that significant.

If there were 6 GB block size bloat per hour?
A financial attack could do this independently.
Miners could do this free-ish.
Small miners would fail, as would all hobby miners.

Full nodes would become centralized, increased 51% risks, etc.
These are just the obvious.  No more decentralisation for Bitcoin.

From the wiki:

Quote
Note that a typical transaction is 500 bytes, so the typical transaction fee for low-priority transactions is 0.1 mBTC (0.0001 BTC), regardless of the number of bitcoins sent.

To spam the 1 MB blocksize takes roughly .2 BTC per block, or 1.2 BTC per hour. That's only $500 per hour.

To spam a 1 GB blocksize takes roughly 200 BTC per block, or 1200 BTC per hour. That's $500,000 per hour!

A 1 GB blocksize is far more costly to attack. We could increase the blocksize to 1 GB now and nothing would happen because there aren't that many transactions to fill such blocks.
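
(A back-of-the-envelope Python sketch of this arithmetic: the 500-byte transaction size and 0.0001 BTC minimum fee come from the wiki quote above; the six blocks per hour and the roughly $400/BTC exchange rate are illustrative assumptions, not from the post.)

Code:
# Rough spam cost per hour, assuming 500-byte txs, the 0.0001 BTC minimum fee,
# 6 blocks/hour, and an illustrative ~$400/BTC exchange rate.
TX_BYTES = 500
MIN_FEE_BTC = 0.0001
BLOCKS_PER_HOUR = 6
USD_PER_BTC = 400  # assumed, not from the post

def spam_cost_per_hour(block_size_bytes):
    txs_per_block = block_size_bytes // TX_BYTES
    btc = txs_per_block * MIN_FEE_BTC * BLOCKS_PER_HOUR
    return btc, btc * USD_PER_BTC

for label, size in [("1 MB", 1_000_000), ("1 GB", 1_000_000_000)]:
    btc, usd = spam_cost_per_hour(size)
    print(f"{label}: {btc:,.1f} BTC/hour (~${usd:,.0f}/hour)")
# 1 MB: 1.2 BTC/hour (~$480/hour)
# 1 GB: 1,200.0 BTC/hour (~$480,000/hour)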

painlord2k
Sr. Member
****
Offline Offline

Activity: 453
Merit: 254


View Profile
October 15, 2014, 01:45:06 PM
 #27


To spam the 1 MB blocksize takes roughly .2 BTC per block, or 1.2 BTC per hour. That's only $500 per hour.

To spam a 1 GB blocksize takes roughly 200 BTC per block, or 1200 BTC per hour. That's $500,000 per hour!

A 1 GB blocksize is far more costly to attack. We could increase the block size to 1GB now and nothing would happen because there aren't that many transactions to fill such blocks.


I calculate a current cost of $252/hour to spam 1 MB blocks, which scales to $252,000/hour to spam 1 GB blocks.
With a 1 MB block, if an attacker spams the blocks, users have no way to counter the attack by raising the fees they pay; to push the attacker's cost to $252,000/hour they would have to pay $10 per transaction.
With a 1 GB block, if the attacker spams the blocks, users just need to move from 1 cent to 2 cents per transaction, and the cost of the attack moves from $252K to $504K per hour. At 4 cents per transaction it becomes $1M per hour.

Remember that the cost of spamming the blocks goes directly into the pockets of the miners, so they can reinvest the money in better bandwidth and storage and move to 2 GB blocks, doubling the cost for the attacker again.
At $1M per hour that is about $100M in four days, $750M in a month, and $8.76 billion per year (more than ten times the miners' income today).
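
(A minimal sketch of that scaling argument, taking the post's $252/hour baseline for 1 MB blocks as given; the block sizes, fee multipliers, and durations below are illustrative.)

Code:
# Attack cost scales linearly with block size, with the fee level users are
# willing to pay, and with duration. Baseline: $252/hour for 1 MB blocks.
BASE_USD_PER_HOUR = 252.0  # from the post: spamming 1 MB blocks at today's fee

def attack_cost(block_size_mb, fee_multiplier=1.0, hours=1):
    return BASE_USD_PER_HOUR * block_size_mb * fee_multiplier * hours

print(attack_cost(1000))           # 1 GB blocks, same fee:      252,000 $/hour
print(attack_cost(1000, 2))        # users double their fee:     504,000 $/hour
print(attack_cost(1000, 4, 8760))  # ~4x fee, kept up all year: ~8.8e9 $ (~$8.8B)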
trout
Sr. Member
****
Offline Offline

Activity: 333
Merit: 252


View Profile
October 15, 2014, 05:05:01 PM
 #28

why not linear growth, like  +n MB per block halving, or quadratic like +n MB per n'th block halving?

Why would we choose linear growth when the trend is exponential growth?


Because exponential growth is unsustainable, it is bound to cap at some point in the near future. We have no idea at what constant it will reach saturation. Instead, we can try a slow growth of the parameter, knowing that it will surpass any constant and thus probably catch up with the real limit at some point, and hoping that the growth is slow enough to be at most a minor nuisance after that.
acoindr
Legendary
*
Offline Offline

Activity: 1050
Merit: 1002


View Profile
October 15, 2014, 05:56:05 PM
 #29

Because exponential growth is unsustainable

Not inherently. It depends on the rate of growth and what is growing. For example, a 1% per year addition to bandwidth is exceedingly conservative, based on historical evidence.

it is bound to cap at some point in the near future.

Define 'near future'. Is that 5 years, 10 years, 40? And what makes you say that? It's easy to make a general unsupported statement. Don't be intellectually lazy. Show the basis for your reasoning, please.
trout
Sr. Member
****
Offline Offline

Activity: 333
Merit: 252


View Profile
October 15, 2014, 06:23:31 PM
 #30

Because exponential growth is unsustainable

Not inherently. It depends on the rate of growth and what is growing. For example, a 1% per year addition to bandwidth is exceedingly conservative, based on historical evidence.

it is bound to cap at some point in the near future.


Physical parameters have physical limits, which are constants. So unbounded growth is unsustainable, even linear growth. However, with less-than-exponential growth one can expect it to become negligible from some point on (that is, less than x% per year for any x).

Looking at past data and just extrapolating the exponent one sees is myopic reasoning: the exponential growth is only due to the novelty of the given technology. It will stop when saturation is reached, that is, when the physical limit of the parameter in question is close.

If you want a concrete example, look at CPU clock growth over the past few decades.

Quote
Define 'near future'. Is that 5 years, 10 years, 40? And what makes you say that? It's easy to make a general unsupported statement. Don't be intellectually lazy. Show the basis for your reasoning, please.
I'm not making predictions about constants that we don't know, but when speaking about exponential growth that isn't even necessary. Want to know how fast an exponential grows? Take your 50% growth and, just out of curiosity, see for which n your (1.5)^n exceeds the number of atoms in the universe. That gives some idea. Yes, you can use 1%, i.e. (1.01)^n; the difference is not important.

Of course one can say: let's put it at 50% per year until bandwidth stops growing that fast, and then we fork again. But this only postpones the problem. Trying to predict now exactly when this happens, and to program for it now, seems futile.
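
(For the curious, that n is easy to compute with a couple of lines of Python; the ~10^80 figure for atoms in the observable universe is the commonly cited estimate.)

Code:
import math

ATOMS = 10**80  # commonly cited estimate for the observable universe

for growth in (1.5, 1.01):
    n = math.log(ATOMS) / math.log(growth)  # smallest n with growth**n > ATOMS
    print(f"{growth}: n ~ {math.ceil(n)}")
# 1.5: n ~ 455
# 1.01: n ~ 18513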



Gavin Andresen
Legendary
*
qt
Offline Offline

Activity: 1652
Merit: 2301


Chief Scientist


View Profile WWW
October 15, 2014, 06:34:45 PM
 #31

Of course one can say: let's put it at 50% per year until bandwidth stops growing that fast, and then we fork again. But this only postpones the problem. Trying to predict now exactly when this happens, and to program for it now, seems futile.

Okey dokey.  My latest straw-man proposal is 40% per year growth for 20 years. That seems like a reasonable compromise based on current conditions and trends.

You seem to be looking hard for reasons not to grow the block size-- for example, yes, CPU clock speed growth has stopped. But number of cores put onto a chip continues to grow, so Moore's Law continues.  (and the reference implementation already uses as many cores as you have to validate transactions)
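
(For scale, a quick look at the cumulative factor implied by that straw man; the 20 MB starting size is the one discussed elsewhere in the thread and is used here purely for illustration.)

Code:
# 40% per year compounded over 20 years, applied to an assumed 20 MB start.
START_MB = 20
factor = 1.4 ** 20
print(f"growth factor: {factor:,.0f}x")                  # ~837x
print(f"final cap: {START_MB * factor / 1000:,.1f} GB")   # ~16.7 GB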

PS: I got positive feedback from a couple of full-time, professional economists on my "block size economics" post, it should be up tomorrow or Friday.

trout
Sr. Member
****
Offline Offline

Activity: 333
Merit: 252


View Profile
October 15, 2014, 07:39:13 PM
 #32

Of course one can say: let's put it at 50% per year until bandwidth stops growing that fast, and then we fork again. But this only postpones the problem. Trying to predict now exactly when this happens, and to program for it now, seems futile.

Okey dokey.  My latest straw-man proposal is 40% per year growth for 20 years. That seems like a reasonable compromise based on current conditions and trends.

You seem to be looking hard for reasons not to grow the block size-- for example, yes, CPU clock speed growth has stopped. But number of cores put onto a chip continues to grow, so Moore's Law continues.  (and the reference implementation already uses as many cores as you have to validate transactions)

Actually, I'm not looking for reasons not to grow the block size: I suggested sub-exponential growth instead, for example quadratic (and that was a serious suggestion).

About the 40% over 20 years: what if you overshoot by, say, 10 years, and as a result of 40% growth over those 10 extra years the max block size grows so much that it is effectively infinite (1.4^10 ~ 30)? The point being, with an exponent it's too easy to overshoot. Then if you want to solve the resulting problem by another fork, it may be much harder to reach a consensus, since the problem will be of a very different nature (too much centralization vs. too expensive transactions).

acoindr
Legendary
*
Offline Offline

Activity: 1050
Merit: 1002


View Profile
October 15, 2014, 08:14:08 PM
 #33

I'm not making predictions about constants that we don't know, but when speaking about exponential growth that isn't even necessary. Want to know how fast an exponential grows? Take your 50% growth and, just out of curiosity, see for which n your (1.5)^n exceeds the number of atoms in the universe. That gives some idea.

But the proposal isn't to exceed the number of atoms in the universe. It's to increase the block size for 20 years and then stop. If we do that starting with a 20 MB block at 50% per year, we arrive at 44,337 MB after 20 years. That's substantially under the number of atoms in the universe.

The point being, with an exponent it's too easy to overshoot.

How so? You can know exactly what value each year yields. It sounds like you're faulting exponents for exponents' sake. Instead, give the reason you feel the resulting values are inappropriate. Here they are (year: max block size in MB):

1: 20
2: 30
3: 45
4: 68
5: 101
6: 152
7: 228
8: 342
9: 513
10: 769
11: 1153
12: 1730
13: 2595
14: 3892
15: 5839
16: 8758
17: 13137
18: 19705
19: 29558
20: 44337
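
(The schedule above is just 20 MB compounded at 50% per year, rounded to the nearest MB; a two-line sketch reproduces it.)

Code:
# Reproduce the table: 20 MB starting size, +50% per year, rounded to MB.
size = 20.0
for year in range(1, 21):
    print(f"{year}: {round(size)}")
    size *= 1.5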
NewLiberty
Legendary
*
Offline Offline

Activity: 1204
Merit: 1002


Gresham's Lawyer


View Profile WWW
October 15, 2014, 08:53:18 PM
 #34


To spam the 1 MB blocksize takes roughly .2 BTC per block, or 1.2 BTC per hour. That's only $500 per hour.

To spam a 1 GB blocksize takes roughly 200 BTC per block, or 1200 BTC per hour. That's $500,000 per hour!

A 1 GB blocksize is far more costly to attack. We could increase the block size to 1GB now and nothing would happen because there aren't that many transactions to fill such blocks.


I calculate a current cost of $252/hour to spam 1 MB blocks, which scales to $252,000/hour to spam 1 GB blocks.
With a 1 MB block, if an attacker spams the blocks, users have no way to counter the attack by raising the fees they pay; to push the attacker's cost to $252,000/hour they would have to pay $10 per transaction.
With a 1 GB block, if the attacker spams the blocks, users just need to move from 1 cent to 2 cents per transaction, and the cost of the attack moves from $252K to $504K per hour. At 4 cents per transaction it becomes $1M per hour.

Remember that the cost of spamming the blocks goes directly into the pockets of the miners, so they can reinvest the money in better bandwidth and storage and move to 2 GB blocks, doubling the cost for the attacker again.
At $1M per hour that is about $100M in four days, $750M in a month, and $8.76 billion per year (more than ten times the miners' income today).
Theory and practice diverge here.
A miner can put as many transactions as they like in a block with no fees.
The cost is then replicated across every full node which must store it in perpetuity.

David Rabahy
Hero Member
*****
Offline Offline

Activity: 709
Merit: 503



View Profile
October 15, 2014, 09:01:05 PM
 #35

Perhaps I am stating the obvious:

1 MB / 10 min ≈ 1,667 B/s

Do not try running Bitcoin on a system with less bandwidth; even 9600 baud isn't enough. Hmm, what does happen? Do peers give up trying to catch it up?

A (the?) serious risk of continuing with a block size that is too small: if/when the block size bottlenecks Bitcoin, the backlog of transactions will accumulate. If the inflow doesn't subside for long enough, the backlog will grow without limit until something breaks; and besides, who wants transactions sitting in a queue for ages?

What functional limit(s) constrain the block size? 2 MB, 10 MB, 1 GB, 10 GB, 1 TB, 100 TB: at some point something will break. Let's crank up the size on testnet until it fails, just to see it happen.

The alternative to all this is to reduce the time between blocks. Five minutes between blocks gives us the same throughput as jumping up to 2 MB.
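
(A sketch of that minimum-bandwidth arithmetic for a few block size / block interval combinations; the particular sizes and intervals are just examples.)

Code:
# Minimum sustained bandwidth needed just to keep up with the chain,
# for a few illustrative (block size, block interval) combinations.
def min_bandwidth(block_bytes, interval_minutes):
    return block_bytes / (interval_minutes * 60)  # bytes per second

for size_mb, minutes in [(1, 10), (2, 10), (1, 5), (1000, 10)]:
    bps = min_bandwidth(size_mb * 1_000_000, minutes)
    print(f"{size_mb} MB every {minutes} min -> {bps:,.0f} B/s")
# 1 MB every 10 min -> 1,667 B/s
# 2 MB every 10 min -> 3,333 B/s
# 1 MB every 5 min -> 3,333 B/s  (same throughput as 2 MB / 10 min)
# 1000 MB every 10 min -> 1,666,667 B/s (~13 Mbps)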

Why do we fear hard forks?  They are a highly useful tool/technique.  We do not know for sure what the future will bring.  When the unexpected comes then we must trust those on the spot to handle the issue.
trout
Sr. Member
****
Offline Offline

Activity: 333
Merit: 252


View Profile
October 15, 2014, 09:10:57 PM
 #36


10: 769
...
20: 44337

so if the bandwidth growth happens to stop in 10 years, then in 20 years you end up with a max block of 44,337 whereas the "comfortable" size (if we consider 1 MB comfortable right now) is only 769. I call that "easy to overshoot" because predicting technology decades ahead is hard, and the difference between these numbers is huge.
solex
Legendary
*
Offline Offline

Activity: 1078
Merit: 1006


100 satoshis -> ISO code


View Profile
October 15, 2014, 09:35:58 PM
 #37

A miner can put as many transactions as they like in a block with no fees.

This is solved by implementing IBLT (invertible Bloom lookup tables) as the standard block transmission method, although this is not a short-term goal.

A miner can get his or her IBLT blocks accepted only if the vast majority of the transactions in them are already known to, and accepted as sensible by, the majority of the network. It shifts the pendulum of power back towards all the non-mining nodes, because miners must treat the consensus tx mempool as work they are obliged to do. It also allows for huge efficiency gains, shifting the bottleneck from bandwidth to disk storage, RAM and CPU (which already have a much greater capacity). In theory, 100 MB blocks which get written to the blockchain could be sent using only 1 or 2 MB of data on the network. I don't think many people appreciate how fantastic this idea is.
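
(For readers unfamiliar with the data structure, below is a toy Python sketch of an invertible Bloom lookup table used for set reconciliation. It is not the proposed wire format; the cell count, hash count, and helper names are illustrative assumptions. The point is that what a miner would transmit stays small and roughly proportional to the mempool difference, not to the block size.)

Code:
import hashlib

K = 3    # hash functions per key (illustrative)
M = 120  # number of cells; must comfortably exceed the expected set difference

def _h(data: bytes, salt: int) -> int:
    return int.from_bytes(hashlib.sha256(bytes([salt]) + data).digest()[:8], "big")

def _indices(key: int):
    kb = key.to_bytes(32, "big")
    return [_h(kb, i) % M for i in range(K)]

def _key_hash(key: int) -> int:
    return _h(key.to_bytes(32, "big"), 255)

class IBLT:
    def __init__(self):
        # each cell: [count, xor of keys, xor of key hashes]
        self.cells = [[0, 0, 0] for _ in range(M)]

    def insert(self, key: int):
        for i in _indices(key):
            c = self.cells[i]
            c[0] += 1
            c[1] ^= key
            c[2] ^= _key_hash(key)

    def subtract(self, other):
        out = IBLT()
        for i in range(M):
            out.cells[i] = [self.cells[i][0] - other.cells[i][0],
                            self.cells[i][1] ^ other.cells[i][1],
                            self.cells[i][2] ^ other.cells[i][2]]
        return out

    def decode(self):
        """Peel a subtracted IBLT; returns (keys only in self, keys only in other)."""
        mine, theirs = set(), set()
        progress = True
        while progress:
            progress = False
            for c in self.cells:
                if c[0] in (1, -1) and _key_hash(c[1]) == c[2]:
                    key, side = c[1], c[0]
                    (mine if side == 1 else theirs).add(key)
                    for i in _indices(key):  # remove the recovered key everywhere
                        cc = self.cells[i]
                        cc[0] -= side
                        cc[1] ^= key
                        cc[2] ^= _key_hash(key)
                    progress = True
        return mine, theirs

# Usage sketch: the miner sends an IBLT of the block's txids; a node inserts
# its own mempool txids into another IBLT, subtracts, and peels to learn which
# few transactions it is missing (and which it has that the block omitted):
#   missing, surplus = block_iblt.subtract(mempool_iblt).decode()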

The debate here is largely being conducted on the basis that the existing block transmission method is going to remain unimproved. This is not the case, and a different efficiency method, tx hashes in relayed blocks, is already live.

Okey dokey.  My latest straw-man proposal is 40% per year growth for 20 years. That seems like a reasonable compromise based on current conditions and trends.

Gavin, I hope you proceed with what you think is best.

NewLiberty
Legendary
*
Offline Offline

Activity: 1204
Merit: 1002


Gresham's Lawyer


View Profile WWW
October 15, 2014, 09:45:09 PM
 #38

Why do we fear hard forks?  They are a highly useful tool/technique.  We do not know for sure what the future will bring.  When the unexpected comes then we must trust those on the spot to handle the issue.
From a security perspective, the more useful something is, the more risk it tends to have.
Hard forks today are much easier than they will be later: there are only a couple of million systems to update simultaneously, with an error-free code change and no easy back-out process.
Later there will hopefully be a few more systems, with more functionality and complexity.
This is the reason I maintain hope that a protocol can be designed to accommodate the needs of the future with less guesswork/extrapolation. This is not an easy proposition, and it is not one most are accustomed to developing for. It is not enough to make something that works; we need something that can't be broken in an unpredictable future.

Whatever the result of this, we limit the usability of Bitcoin to some segment of the world's population and limit the use cases.
2.0 protocols have larger transaction sizes.  Some of this comes down to how the revenue gets split, with whom and when.  Broadly the split is between miners capitalizing on the scarce resource of block size to exact fees, and the Bitcoin protocol users who are selling transactions.

Block rewards are mapped out to 2140.  If we are looking at 10-20 years ahead only, I think we can still do better.

If we start with Gavin's proposal and set a target increase of 50% per year, but make this increase sensitive to the contents of the block chain (fee amounts, number of transactions, transaction sizes, etc.) and adjust the maximum size up or down based on actual usage, need, and network capability, we may get a result that can survive and accommodate the changes that we are not able to predict.

50% may be too high or too low; likewise 40% ending after a fixed period. The problem is that we do not know today what the future holds; these are just best guesses, and so they are guaranteed to be wrong.

Gavin is on the payroll of TBF, which primarily represents the protocol users and, to a lesser degree, the miners. This is not to suggest that his loyalties are suspect; I start from the view that we all want what is best for Bitcoin, but I recognize that he may simply be getting more advice and concern from some interests and less from others. All I want is the best result we can get, and to have the patience to wait for that. After all, how often do you get to work on something that can change the world? It is worth the effort to try for the best answer.

JustusRanvier had a good insight about needing bandwidth price data from the future, which is not available in the block chain today but ultimately may be with 2.0 oracles. However, depending on those for the protocol would introduce other new vulnerabilities. The main virtue of Gavin's proposal is its simplicity; its main failings are that it is arbitrary, insensitive to changing conditions, and inflexible.

acoindr
Legendary
*
Offline Offline

Activity: 1050
Merit: 1002


View Profile
October 15, 2014, 10:03:51 PM
 #39

so if the bandwidth growth happens to stop in 10 years

Why would it? Why on earth would it???

Look, Jakob Nielsen reports his bandwidth in 2014 is 120 Mbps, which is in the same ballpark as the 90 Mbps figure Gavin mentions for his own calculations. Let's use 100 Mbps as a "good" bandwidth starting point, which at 50% per year yields:

1: 100
2: 150
3: 225
4: 338
5: 506
6: 759
7: 1139
8: 1709
9: 2563
10: 3844
11: 5767
12: 8650
13: 12975
14: 19462
15: 29193
16: 43789
17: 65684
18: 98526
19: 147789
20: 221684

Researchers at Bell Labs just set a record for data transmission over copper lines of 10 Gbps. So we can use that as a bound for currently existing infrastructure in the U.S. We wouldn't hit that until year 13 above, and that's copper.

Did you not read my earlier post on society's bandwidth bottleneck, the last mile? I talk about society moving to fiber to the premises (FTTP) to upgrade bandwidth. Countries like Japan and South Korea have already installed this at over 60% penetration. The U.S. is at 7.7%, and I personally saw fiber lines being installed to a city block a week ago. Researchers at Bell Labs have achieved over 100 petabits per second of data transmission over fiber-optic lines. Do you realize how much peta is? 1 petabit = 10^15 bits = 1,000,000,000,000,000 bits = 1,000 terabits.
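
(As a sanity check on those bounds: here is when the 50%-per-year curve above, starting from 100 Mbps, would cross the 10 Gbps copper figure and the 100 petabit/s fiber figure just cited.)

Code:
import math

START_MBPS = 100
GROWTH = 1.5
limits_mbps = {
    "10 Gbps (copper record)": 10_000,
    "100 Pbps (fiber record)": 100 * 10**9,  # 100 petabits/s expressed in Mbps
}
for label, limit in limits_mbps.items():
    # table year n has 100 * 1.5**(n-1) Mbps; find the first year at or above the limit
    year = math.ceil(math.log(limit / START_MBPS, GROWTH)) + 1
    print(f"{label}: crossed in year {year}")
# 10 Gbps (copper record): crossed in year 13
# 100 Pbps (fiber record): crossed in year 53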

That's a real world bound for fiber, and that's what we're working toward. Your fears appear completely unsubstantiated. On what possible basis, given what I've just illustrated, would you expect bandwidth to stop growing, even exponentially from now, after only 10 years?!?
Syke
Legendary
*
Offline Offline

Activity: 3878
Merit: 1193


View Profile
October 15, 2014, 10:33:00 PM
 #40

Theory and practice diverge here.
A miner can put as many transactions as they like in a block with no fees.
The cost is then replicated across every full node which must store it in perpetuity.

And the rest of the miners are free to ignore a block like that. You have yet to convince me there's a problem. Miners can fill blocks with worthless free transactions today.

Then maybe the problem isn't a large maximum blocksize, but the allowance of unlimited feeless transactions. They are not free for the network to store in perpetuity, so why not eliminate them?

Eliminate free transactions and eliminate the maximum blocksize. Problem solved forever.
