Author Topic: Bitcoin Block Size Conflict Ends With Latest Update  (Read 3347 times)
ArticMine
Legendary
Activity: 2282
Merit: 1050

Monero Core Team
June 22, 2015, 04:46:20 PM
 #41

   8MB cap
   Doubling every two years (so 16MB in 2018)
    For twenty years


It is one thing to temporarily increase the size to give a bit more breathing room for sidechains, payment channels and the Lightning Network to be tested, and another thing to simply double the limit every two years to remove the pressure of discovering more efficient means of scaling.

This is a horrible plan and one that I cannot support. (coming from someone who has defended Gavin and Hearn in the past)

Why do you think hardware cannot keep up with it? What in the last 25 years points toward that idea? I think we will most likely be fine.

It isn't blockchain bloat and hard disk space I'm concerned with, but network latency and bandwidth, which won't scale that quickly.

There are many valid security and centralization concerns, some of which Nick Szabo has raised -
https://twitter.com/NickSzabo4/status/611259452402987008


History indicates otherwise. Nielsen's Law of Internet Bandwidth (http://www.nngroup.com/articles/law-of-bandwidth/) gives 50% per year compounded, so a doubling every two years is actually below that.

Edit: 1.5^20 > 3325 vs 2^10 = 1024

Concerned that blockchain bloat will lead to centralization? Storing less than 4 GB of data once required the budget of a superpower and a warehouse full of punched cards. https://upload.wikimedia.org/wikipedia/commons/8/87/IBM_card_storage.NARA.jpg https://en.wikipedia.org/wiki/Punched_card
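For anyone who wants to check that arithmetic, here is a quick sketch of the two growth rates compounded over 20 years (nothing Bitcoin-specific, just the compounding):

Code:
# Compare Nielsen's Law (50%/year) with the proposal's doubling every two years,
# both compounded over 20 years.
nielsen_factor = 1.5 ** 20        # 50% per year for 20 years
proposal_factor = 2 ** (20 // 2)  # one doubling every two years -> 10 doublings

print(f"Nielsen's Law over 20 years: x{nielsen_factor:,.0f}")  # ~x3,325
print(f"Proposal over 20 years:      x{proposal_factor:,}")    # x1,024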
BitUsher
Legendary
Activity: 994
Merit: 1034
June 23, 2015, 12:01:47 AM
Last edit: June 23, 2015, 12:19:19 AM by BitUsher
 #42

History indicates otherwise. Nielsen's Law of Internet Bandwidth http://www.nngroup.com/articles/law-of-bandwidth/ 50% per year compounded so a doubling every two years is actually below that.

Edit: 1.5^20 > 3325 vs 2^10 = 1024

The link you provided cites advertised plans boasting hypothetical peak bandwidth possibilities and not real life bandwidth averages.

Additionally, this doesn't address all the concerns:

1) Latency caused by larger blocks incentivizes the centralization of mining pools
2) Not everyone worldwide lives in a location where bandwidth is growing at the same rate
3) Advertised bandwidth rates are not the same as real-world bandwidth rates
4) ISPs often put soft caps on the total bandwidth transferable per month and throttle the user's speed to a crawl once exceeded. More are no longer advertising unlimited monthly bandwidth and are instead setting explicit transfer limits and hard caps with overage charges.
5) Full nodes at home need to compete with the bandwidth demands of HD video streaming, which most users rely on and which is getting increasingly demanding. Most people don't want to spend most of their bandwidth supporting a full node and give up streaming Netflix or torrenting.
6) Supporting nodes over Tor is a concern
7) Most ISP plans are asymmetric and have much slower upload speeds, which are also not growing at the same rate as download speeds.
8) There are interesting attacks possible with larger blocks - http://eprint.iacr.org/2015/578.pdf

Let's look at the historical record of real-world bandwidth averages -


http://explorer.netindex.com/maps?country=United%20States

1/2008      5.86 Mbps
12/2008    7.05 Mbps
12/2009    9.42  Mbps
12/2010    10.03  Mbps
12/2011    12.36   Mbps
12/2012    15.4   Mbps
12/2013    20.62   Mbps
12/2014    31.94  Mbps

Thus you can see that even if I were to ignore many parts of the world where internet isn't scaling as fast and focus on the "first world", bandwidth speeds aren't scaling up as quickly as you suggest.
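A rough sketch of the implied annual growth rate behind those numbers (it treats 1/2008 to 12/2014 as roughly 7 years and ignores everything else):

Code:
# Implied compound annual growth rate (CAGR) of the averages above, compared with
# the ~41%/year needed to double every two years and Nielsen's 50%/year.
start_mbps, end_mbps, years = 5.86, 31.94, 7   # 1/2008 -> 12/2014, roughly 7 years

cagr = (end_mbps / start_mbps) ** (1 / years) - 1
needed_for_2y_doubling = 2 ** 0.5 - 1

print(f"Measured CAGR:                  {cagr:.1%}")                     # ~27%
print(f"Needed to double every 2 years: {needed_for_2y_doubling:.1%}")   # ~41%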
yayayo
Legendary
Activity: 1806
Merit: 1024
June 23, 2015, 12:19:56 AM
 #43

History indicates otherwise. Nielsen's Law of Internet Bandwidth http://www.nngroup.com/articles/law-of-bandwidth/ 50% per year compounded so a doubling every two years is actually below that.

Edit: 1.5^20 > 3325 vs 2^10 = 1024

The link you provided cites advertised plans boasting hypothetical peak bandwidth possibilities and not real life bandwidth averages.

Additionally, this doesn't address all the concerns:

1) Latency caused by larger blocks incentivizes the centralization of mining pools
2) Not everyone worldwide lives in locations which has bandwidth growing at the same rates
3) Advertised bandwidth rates are not the same as real world bandwidth rates
4) ISPs often put soft caps on total bandwidth used on accounts and stunting the user speed to a crawl.  More are no longer advertising unlimited bandwidth per month and setting clear total amount transferable limits and hardcaps with overage charges.
5) Full nodes at home need to compete with the bandwidth needs of HD video streaming used by most users which is getting increasingly demanding. Most people don't want to expend most of their bandwidth on supporting a full node and stop streaming Netflix and or torrenting.
6) Supporting nodes over TOR is a concern

Lets look at the historical account of real world bandwidth averages -

http://explorer.netindex.com/maps?country=United%20States

1/2008      5.86 Mbps
12/2008    7.05 Mbps
12/2009    9.42  Mbps
12/2010    10.03  Mbps
12/2011    12.36   Mbps
12/2012    15.4   Mbps
12/2013    20.62   Mbps
12/2014    31.94  Mbps

Thus you can see that even if I were to ignore many parts of the world where internet isn't scaling as fast and focus on the "first world", bandwidth speeds aren't scaling up as quickly as you suggest.

You have to take into account that these measurements are download speeds. However, the real bottleneck is upload speeds, which are far below that.

Also, these measurements are short-term and cover direct ISP connectivity only (this is what Nielsen's observations are based on). These measurements and growth rates are in no way applicable to a decentralized multi-node network. In addition, most ISPs have explicit or implicit data transfer limitations - they will throttle down your connection if you exceed a certain transfer volume.

So I fully agree with your assessment that Hearn's and Gavin's plan is horrible - in fact, it's even more horrible than you've shown.
There's no way I will ever support such a fork.

ya.ya.yo!

BitUsher
Legendary
Activity: 994
Merit: 1034
June 23, 2015, 12:33:05 AM
Last edit: June 23, 2015, 12:44:32 AM by BitUsher
 #44

You have to take into account that these measurements are download speeds. However, the real bottleneck is upload speeds, which are far below that.

Also, these measurements are short-term and cover direct ISP connectivity only (this is what Nielsen's observations are based on). These measurements and growth rates are in no way applicable to a decentralized multi-node network. In addition, most ISPs have explicit or implicit data transfer limitations - they will throttle down your connection if you exceed a certain transfer volume.

So I fully agree with your assessment that Hearn's and Gavin's plan is horrible - in fact, it's even more horrible than you've shown.
There's no way I will ever support such a fork.

ya.ya.yo!

Correct... I added 2 more items while you were typing. Except, keep in mind those speeds are the total combined upload and download speeds, so a 31.94 Mbps average means 21.88 Mbps download and 9.86 Mbps upload in 12/2014; thus the situation is even worse than what you are suggesting.

When you look at many other countries the growth is much slower as well. I was deliberately taking one of the better-case scenarios to be generous.


All the numbers I am citing only consider broadband as well, not mobile, which is much slower and comes with much lower soft caps.
All these users are sharing a node's demands with their other data use, much of it streaming video, which is very demanding. Their proposal will essentially make hosting a full node at home a thing of the past over time.
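As a back-of-the-envelope illustration of what 8MB blocks could mean for a home connection (the peer count is an assumption of mine, and it ignores transaction relay, headers and any future relay optimizations):

Code:
# Very rough monthly data estimate for a home full node if blocks averaged 8 MB.
# Assumptions: ~144 blocks/day, each block downloaded once and uploaded whole to
# a handful of peers; ignores tx relay and any compression.
block_mb = 8
blocks_per_day = 144   # one block per ~10 minutes
upload_peers = 5       # assumed number of peers each block is relayed to
days = 30

download_gb = block_mb * blocks_per_day * days / 1024
upload_gb = download_gb * upload_peers

print(f"Download: ~{download_gb:.0f} GB/month")  # ~34 GB
print(f"Upload:   ~{upload_gb:.0f} GB/month")    # ~169 GB with 5 peers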
 
danielW
Sr. Member
Activity: 277
Merit: 253
June 23, 2015, 03:16:28 AM
 #45

Moore's law is not a law but a trend. The Bitcoin Foundation already went almost bankrupt betting on a trend.

We can increase the size when the trend materialises, not before.

btw, I am pretty sure my internet speed in Australia did not increase 8x in the last 6 years.

Decentralisation is Bitcoin's unique killer feature, and I believe it should be the top priority, even if it results in slower user growth short term.
 
jubalix
Legendary
Activity: 2618
Merit: 1022
June 23, 2015, 03:17:59 AM
 #46

The real answer is not just to invest in BTC, but also to hold some LTC, Peercoin, NXT, even Doge, etc.

If BTC does fail, LTC or others will take its place and not make the same mistakes.

Admitted Practicing Lawyer::BTC/Crypto Specialist. B.Engineering/B.Laws

https://www.binance.com/?ref=10062065
MicroGuy
Legendary
Activity: 2506
Merit: 1030

Twitter @realmicroguy
June 23, 2015, 03:20:16 AM
 #47

The correct answer is to go with Gavin's idea except double the block size every 4 years instead of 2.

We can fork it again later, but with the more conservative approach, you can give some love to the Chinese and keep them happy.
Sitarow
Legendary
Activity: 1792
Merit: 1047
June 23, 2015, 03:23:48 AM
Last edit: June 23, 2015, 03:39:38 AM by Sitarow
 #48

Moore's law is not a law but a trend. The Bitcoin Foundation already went almost bankrupt betting on a trend.

We can increase the size when the trend materialises, not before.

btw, I am pretty sure my internet speed in Australia did not increase 8x in the last 6 years.

Decentralisation is Bitcoin's unique killer feature, and I believe it should be the top priority, even if it results in slower user growth short term.
 

Moore's law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development.
The capabilities of many digital electronic devices are strongly linked to Moore's law: quality-adjusted microprocessor prices, memory capacity, and so on. The trend has continued for more than half a century, but "Moore's law" should be considered an observation or projection and not a physical or natural law.

Moore's law describes a driving force of technological and social change, productivity, and economic growth.
Gordon Moore in 2015 foresaw that the rate of progress would reach saturation: "I see Moore’s law dying here in the next decade or so."

However, The Economist has opined that predictions that Moore's law will soon fail are almost as old as the law itself, with the time of the trend's eventual end being uncertain.

Hard disk drive areal density
Quote
– A similar observation (sometimes called Kryder's law) was made as of 2005 for hard disk drive areal density.[119] Several decades of rapid progress resulted from the use of error correcting codes, the magnetoresistive effect, and the giant magnetoresistive effect. The Kryder rate of areal density advancement slowed significantly around 2010, because of noise related to smaller grain size of the disk media, thermal stability, and writability using available magnetic fields.[120][121]

Network capacity
Quote
– According to Gerry/Gerald Butters,[122][123] the former head of Lucent's Optical Networking Group at Bell Labs, there is another version, called Butters' Law of Photonics,[124] a formulation that deliberately parallels Moore's law. Butters' law says that the amount of data coming out of an optical fiber is doubling every nine months.[125] Thus, the cost of transmitting a bit over an optical network decreases by half every nine months. The availability of wavelength-division multiplexing (sometimes called WDM) increased the capacity that could be placed on a single fiber by as much as a factor of 100. Optical networking and dense wavelength-division multiplexing (DWDM) are rapidly bringing down the cost of networking, and further progress seems assured. As a result, the wholesale price of data traffic collapsed in the dot-com bubble. Nielsen's Law says that the bandwidth available to users increases by 50% annually.

Source

The correct answer is to go with Gavin's idea except double the block size every 4 years instead of 2.

We can fork it again later, but with the more conservative approach, you can give some love to the Chinese and keep them happy.

Gordon E. Moore, co-founder of Intel Corporation and Fairchild Semiconductor, revised the forecast doubling time to two years in 1975.
The period is often quoted as 18 months because of Intel executive David House, who predicted that chip performance would double every 18 months (a combination of the effect of more transistors and of them being faster).

How long will it take for all of the 21 million BTC subsidy to be unlocked? Perhaps it was fine-tuned too closely to Moore's law.

Physical data storage follows Kryder's law, and new solid-state memory technology may have accelerated this area of development into Moore's law territory.

Network bandwidth is tuned to Nielsen's law.
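For reference, the subsidy schedule itself can be summed directly (a small sketch; the ~4-year era length is an approximation based on the 10-minute block target):

Code:
# Sum Bitcoin's subsidy schedule: 50 BTC per block, halving every 210,000 blocks.
# Each 210,000-block era lasts roughly 4 years at one block per ~10 minutes.
subsidy = 50 * 100_000_000   # era subsidy in satoshis
total = 0
eras = 0
while subsidy > 0:
    total += subsidy * 210_000
    subsidy //= 2
    eras += 1

print(f"Total issuance: {total:,} satoshis = {total / 1e8:,.4f} BTC over {eras} eras")
print(f"Subsidy effectively ends around the year {2009 + eras * 4}")  # roughly 2140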
Oscilson
Sr. Member
Activity: 434
Merit: 250
June 23, 2015, 10:11:42 AM
 #49

The correct answer is to go with Gavin's idea except double the block size every 4 years instead of 2.

We can fork it again later, but with the more conservative approach, you can give some love to the Chinese and keep them happy.

Start with 2MB next year, double the block size every 2 years
Lauda
Legendary
Activity: 2674
Merit: 2965

Terminated.
June 23, 2015, 10:25:10 AM
 #50

Start with 2MB next year, double the block size every 2 years
I highly doubt that would be enough, even though the number would only be 4 times smaller, and in 20 years we would have 2GB blocks.
We would only have 6 transactions per second in practice (the theoretical maximum is 14) until 2018. I think that we might need more.
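Those transactions-per-second figures come from simple arithmetic; the average transaction sizes below are assumptions, not measured values:

Code:
# Rough transactions-per-second ceiling for a given block size limit.
# Assumed average transaction sizes: ~250 bytes (small txs) vs ~550 bytes (typical).
def tps(block_size_mb, avg_tx_bytes):
    return block_size_mb * 1_000_000 / avg_tx_bytes / 600  # one block per ~600 seconds

print(f"2 MB blocks, 250-byte txs: ~{tps(2, 250):.0f} tps")  # theoretical-ish ceiling
print(f"2 MB blocks, 550-byte txs: ~{tps(2, 550):.0f} tps")  # closer to practice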

This is not a thread about discussing Moore's law, or other similar ones. Let us not drift away too much.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
Q7
Sr. Member
Activity: 448
Merit: 250
June 23, 2015, 11:55:39 AM
 #51

Finally. But it caught me by surprise when they announced that the block size will double every two years, which I think needs to be reconsidered in terms of whether it would be practical in the long run. It's not the computer spec we are talking about here, because to me processing power can scale up accordingly; what is more worrying is the internet bandwidth usage. Where I come from, it will mean paying a ton for the bills.

BitUsher
Legendary
Activity: 994
Merit: 1034
June 23, 2015, 12:56:21 PM
 #52

Looks like those bandwidth test averages are even more optimistic than we both assumed, as they mainly deal with burst speeds, have some flaws, and don't consider packet limiting and restrictions imposed by ISPs.


I found some excellent data.  Ookla has been empirically measuring upload and download speeds for over a decade and from all over the world, based on its speed test results:

http://explorer.netindex.com/maps

The pricing information is lacking, however.  
The data are far from "excellent". They are mostly bullshit numbers measured using short-term bursts. I checked several markets that I'm very familiar with, and all of those were successfully gamed by the DOCSIS cable providers using equipment with "powerboost" (or similar marketing names).

"Powerboost" is an ultra-short-term (a few to a few-teen or sometimes a few-ty seconds) bandwidth increase made available to the modems of customers that haven't maxed out their bandwidth in the previous minutes.

The configuration details are highly proprietary and vary by market and by time of day and week. But the overall effect is that the DOCSIS modem seriously approaches 100Mbps LAN performance for a few packets in bursts.

In some markets that I know, the VDSL2 competitors (which optimize average bandwidth over periods of weeks) didn't even rank among the "TOP ISPS". In reality, for non-bursty loads the VDSL2 providers outperform the DOCSIS providers, especially on the upload side, as VDSL2 is a fundamentally symmetric technology that is sold as asymmetric only for market-segmenting reasons.

I'm not even going to delve into further restrictions on consumer broadband, where the providers explicitly limit the number of packet flows that can be handled by the customer's equipment. Ookla (and almost everyone else) measures 2-flow single-TCP connections, which have really nothing in common with peer-to-peer technologies like Bitcoin or BitTorrent.

Executive summary:

Bullshit marketing numbers; divide by 3-5-10 to get the real number achievable with P2P technologies and continuous operation.

------------------------------------------------

The correct answer is to go with Gavin's idea except double the block size every 4 years instead of 2.

This is better, but I still don't like the idea of creating a framework where we continuously "kick the can down the road", and what it does to de-incentivize better solutions. I do agree with Gavin and Hearn that we should increase the limit, and that it is better to do so before we start hitting the block limit continuously, for the sake of temporarily buying some more time to implement and test other solutions. I also empathize with them as to why they want to incorporate code that scales automatically, so they can avoid this multi-year hard fork debate in the future... but this "crisis" is actually a good thing because it is forcing us to think of creative solutions and test more.

 
peligro
Hero Member
Activity: 593
Merit: 500

1NoBanksLuJPXf8Sc831fPqjrRpkQPKkEA
June 23, 2015, 03:05:38 PM
 #53

Well, let's see the sequel; it seems we need more information.

.......

Parameters are:

    8MB cap
    Doubling every two years (so 16MB in 2018)
    For twenty years
   ....

Does that mean in 20 years after it starts doubling the blocks will be 8192 MB, which is 8.192 GB per block?

Have I done my maths right?

8 x 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2 = 8192

To keep it simple I'm not considering the difference between mebibytes and megabytes.

What? That's too much, right? Undecided He said doubling every 2 years, so every 2 years it increases by 8MB. The parameters will be implemented next year (2016) and then in 2018 it increases by 8MB, so 8+8=16MB.
You mean 20 years after it starts. Start from 2016 and the end is 2036.

8MB(2016) + 8MB = 16MB(2018) + 8MB = 24MB(2020) + 8MB = 32MB(2022) + 8MB = 40MB(2024) + 8MB = 48MB(2026) + 8MB = 56MB(2028) + 8MB = 64MB(2030) + 8MB = 72MB(2032) + 8MB = 80MB(2034) + 8MB = 88MB in 2036.
CMIIW.


~iki

No, the 8192MB is correct. You only added 8MB every 2 years; you need to double the max block size of the previous period.

Though 8.2GB per block, i.e. one every ten minutes, sounds extreme. Ok, 20 years in the future is a long way off, but still. I wonder how hard disk space will develop till then. Or whatever we will use to write data on by then.  Roll Eyes
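Spelling the schedule out (assuming the cap doubles at the start of every second year from 2016):

Code:
# Block size cap under the proposal: 8 MB in 2016, doubling every two years for 20 years.
cap_mb = 8
for year in range(2016, 2037, 2):
    print(f"{year}: {cap_mb:>5} MB  (~{cap_mb / 1024:.2f} GB)")
    cap_mb *= 2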
yayayo
Legendary
Activity: 1806
Merit: 1024
June 23, 2015, 03:33:45 PM
 #54

Start with 2MB next year, double the block size every 2 years
I highly doubt that would be enough, even though the number would only be 4 times smaller, and in 20 years we would have 2GB blocks.
We would only have 6 transactions per second in practice (the theoretical maximum is 14) until 2018. I think that we might need more.

This is not a thread about discussing Moore's law, or other similar ones. Let us not drift away too much.

The point is that Bitcoin does not scale well to process all transactions natively. It's simply a waste of resources to process all microtransactions on the blockchain. Microtransactions < $ 1 simply do not need the same level of security as bigger transactions. They should move off-chain/side-chain/second-layer.

The max_blocksize should be increased conservatively to ensure that decentralization is not hurt, because decentralization gives Bitcoin value. Solutions for microtransactions are being developed right now.

Hearn's and Gavin's plan is simply not well thought out and outright dangerous for the future of Bitcoin as a decentralized currency. Relying on Moore's / Nielsen's "law" is simply extrapolation of past trends (based on a very limited timespan), without any evidence that these past trends can be sustained in the future. There are natural boundaries to, e.g., further miniaturization, so betting on these trends for another 20 years is unwise.

ya.ya.yo!

Klestin
Hero Member
Activity: 493
Merit: 500
June 23, 2015, 04:38:58 PM
 #55

   8MB cap
   Doubling every two years (so 16MB in 2018)
    For twenty years


It is one thing to temporarily increase the size to give a bit more breathing room for sidechains, payment channels and the Lightning Network to be tested, and another thing to simply double the limit every two years to remove the pressure of discovering more efficient means of scaling.

This is a horrible plan and one that I cannot support. (coming from someone who has defended Gavin and Hearn in the past)

At first I thought you were being alarmist, but I came to the conclusion that you are absolutely correct once I realized as you did that:

- Increasing the block limit to X MB guarantees that every block will be exactly X MB
- Bandwidth is a limiting factor right now, since 1MB barely makes it over a 14.4k baud modem in 10 minutes, and that's the best we now have
- Miners have absolutely no control over block size whatsoever, and no method (such as transaction fee policies) to decide which transactions to include
- This proposed code change is permanent and can never ever be revised based on future facts
BitUsher
Legendary
Activity: 994
Merit: 1034
June 23, 2015, 07:10:21 PM
 #56

At first I thought you were being alarmist, but I came to the conclusion that you are absolutely correct once I realized as you did that:

I'm picking up on the sarcasm so let me address your comments:

- Increasing the block limit to X MB guarantees that every block will be exactly X MB

Agreed. Just like now, where most blocks are 20-30% full, we cannot assume that transaction volume will automatically fill the available block limit. Like Gavin and Hearn, I think it is prudent to prepare for this possibility beforehand, whether it is caused by an attacker "testing" the network or by wide-scale adoption. Thus one should test for, and understand the tradeoffs of, all the available block space being used, in order to prepare for it.

- Bandwidth is a limiting factor right now, since 1MB barely makes it over a 14.4k baud modem in 10 minutes, and that's the best we now have

Bandwidth is fine right now, but there are other considerations, like miners' concern about propagation time (some are already purposely limiting or excluding transactions for a slight edge), running a node over Tor, and the fact that, as I have shown, network bandwidth historically hasn't scaled at the rate being proposed.
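To give a feel for the propagation numbers involved, here is a naive sketch that assumes blocks are forwarded whole with no compression or relay network; the upload speeds and hop count are assumptions:

Code:
# Naive time to push one full block to a single peer at a given upload speed,
# plus a crude multi-hop estimate.
def seconds_per_block(block_mb, upload_mbps):
    return block_mb * 8 / upload_mbps   # MB -> megabits, divided by Mbps

for block_mb in (1, 8):
    for upload_mbps in (1, 5, 10):
        t = seconds_per_block(block_mb, upload_mbps)
        print(f"{block_mb} MB block at {upload_mbps} Mbps up: "
              f"{t:.0f} s per peer, ~{4 * t:.0f} s over 4 hops")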

- Miners have absolutely no control over block size whatsoever, and no method (such as transaction fee policies) to decide which transactions to include

Miners ultimately decide what the effective limit will be, as they can choose whether or not to include transactions even if developers increase the block limit. My concern with this is the same concern I have with Garzik's proposal. The fact that 5 Chinese companies control 60% of the hashrate and can set the limit with or without any developers agreeing concerns me. I would be much happier giving miners more control if the hashrate were more distributed, but we are a long way off from that at the moment.

- This proposed code change is permanent and can never ever be revised based on future facts

Sure, we can always switch it back with another hard fork... but why not come to consensus on a proposal that is best for the community and properly tested, to avoid the negative PR and a few more years of work debating and testing?

Gavin just submitted the BIP a couple of days ago... so it's still early in the testing. We should probably test his suggestion on a testnet under various scenarios before we agree or disagree with it. Initially, I don't think it is a wise proposal, but I am certainly open to my mind being changed with the right evidence.
SISAR
Hero Member
Activity: 651
Merit: 518
June 24, 2015, 12:07:52 AM
 #57

Read below:

An issue that has been the source for months of debate and rancor throughout the Bitcoin mining and developer community over Bitcoin's block size appears to have reached a long-awaited resolution. Within the most recent BitcoinXT update, Gavin Andresen has begun the process of revising the block chain individual block size from 1 MB to 8 MB starting next year. This is deemed necessary for the overall growth and usability of Bitcoin, as the current limits of seven transactions per second are becoming insufficient for the growing global community as consumer and business interest increases.

These impending updates were revealed on GitHub, and this is what is in store for the upcoming “hard fork”, taken directly from GitHub, posted by Gavin Andresen:

Implement hard fork to allow bigger blocks. Unit test and code for a bigger block hard fork.

Parameters are:

    8MB cap
    Doubling every two years (so 16MB in 2018)
    For twenty years
    Earliest possible chain fork: 11 Jan 2016
    After miner supermajority (code in the next patch)
    And grace period once miner supermajority achieved (code in next patch)

The 1 MB block size debate has been a constant issue for months, with Andresen and Mike Hearn discussing the need to upgrade the block size to as much as 20 MB. China's major exchanges and mining interests came out against any block size changes initially, deriding the extra operating costs and complexities involved with mostly empty blocks. After further review, the increase was deemed warranted to an 8 MB size, much smaller than the 20 MB requests by the Core Developers. An accord was reached, and the revisions will take effect next year.

We attempted to contact Hearn and Andresen for more information and will provide updated information as it becomes available. It seems some details are still to be sorted out in the next coding batch within the coming days. We’ll keep our readers informed of any further developments.

What do you think of these new core updates and the automatic changes every two years? Share above and comment below.

Source : https://www.cryptocoinsnews.com/bitcoin-block-size-conflict-ends-latest-update/

Bitcoin XT is a fork of Bitcoin that almost no one uses, why care what Gavin and Mike are doing with it?
SebastianJu
Legendary
Activity: 2674
Merit: 1082

Legendary Escrow Service - Tip Jar in Profile
June 24, 2015, 12:39:28 PM
 #58

Start with 2MB next year, double the block size every 2 years
I highly doubt that would be enough, even though the number would only be 4 times smaller, and in 20 years we would have 2GB blocks.
We would only have 6 transactions per second in practice (the theoretical maximum is 14) until 2018. I think that we might need more.

This is not a thread about discussing Moore's law, or other similar ones. Let us not drift away too much.

The point is that Bitcoin does not scale well to process all transactions natively. It's simply a waste of resources to process all microtransactions on the blockchain. Microtransactions < $ 1 simply do not need the same level of security as bigger transactions. They should move off-chain/side-chain/second-layer.

The max_blocksize should be increased conservatively to ensure that decentralization is not hurt, because decentralization gives Bitcoin value. Solutions for microtransactions are being developed right now.

Hearn's and Gavin's plan is simply not well thought out and outright dangerous for the future of Bitcoin as a decentralized currency. Relying on Moore's / Nielsen's "law" is simply extrapolation of past trends (based on a very limited timespan), without any evidence that these past trends can be sustained in the future. There are natural boundaries to, e.g., further miniaturization, so betting on these trends for another 20 years is unwise.

ya.ya.yo!

Even though I don't like the way Gavin and Hearn are handling things, with something like extortion, their solution is most probably the closest to what will happen in the future. Moore's law is the most accurate guess we have for how things will develop.

Saying that... if it goes another way in the end, then nothing prevents us from changing the protocol again and adjusting to the actual needs at that time.

Please ALWAYS contact me through bitcointalk pm before sending someone coins.
Oscilson
Sr. Member
Activity: 434
Merit: 250
June 28, 2015, 08:15:08 AM
 #59

I think we should increase the size of blocks slowly, doubling every 3 or 4 years. This will increase the fees paid by users to maintain the network and encourage the development of sidechains, so that the main chain grows slowly.
forlackofabettername
Full Member
Activity: 150
Merit: 100
June 28, 2015, 08:44:26 AM
 #60

Hold on Nancy, I think we've got a broad consensus forming in the community to exclude Gavin and Mike from the dev team... so what's that shit you've been talking about again?

"If you see fraud and don't shout fraud, you are a fraud"
  -- Nassim Taleb