Bitcoin Forum
Author Topic: Gavin Andresen Proposes Bitcoin Hard Fork to Address Network Scalability  (Read 18346 times)
deepceleron (Legendary) | Activity: 1512 | Merit: 1032
October 13, 2014, 06:38:57 PM | #41


For each additional second it takes for a block to propagate there is only a ~1/600 chance (maybe more if the difficulty is increasing) that the block will be orphaned because of the extra propagation time. If the additional TX fees make it worthwhile for miners to take this risk, then they will include the additional TXs and accept the risk of their found block being orphaned.

It's about a 1 in 600.5001389 chance that a block will be found within a second of the previous one. However, a block found within the network propagation delay does not always result in an orphan: it is not an orphan if the same miner also finds the following block.

That is the problem with orphans: they weaken decentralized mining against attack by favoring larger miners and discarding proofs-of-work that would otherwise contribute to the chain's security. An extreme demonstration of this was just seen on testnet after a reset to difficulty 1 while hashrate remained at difficulty-100k levels: even holding 1% of the network hashrate it was impossible to get your found block published, because the largest miner was finding blocks at nearly one per second and building upon its own chain, even though it held neither a majority of the hashrate nor ran an "attack" client.

An attacker can improve the chance of a malicious block being accepted by not including unrelated transactions, keeping it small and fast to propagate. This is on top of a 51% attack effectively becoming a 49.83% attack given a one-second delay between legitimate miners. There is already a multi-second delay in pooled mining between the pool software learning of a new block from bitcoind, pushing the new work out to miners, and the mining software flushing stale work and getting the new block hashing on hardware.
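To check those figures, an illustrative Python sketch (assuming a Poisson model of block arrivals; the last line reproduces the 49.83% number above by subtracting the one-second race probability from the 50% majority threshold):

Code:
import math

BLOCK_INTERVAL = 600.0  # seconds, Bitcoin's target block spacing

def race_probability(delay_seconds):
    """Chance a competing block is found within delay_seconds of another,
    under exponentially distributed block arrival times."""
    return 1.0 - math.exp(-delay_seconds / BLOCK_INTERVAL)

p = race_probability(1.0)
print("P(race within 1 s) = %.7f (about 1 in %.7f)" % (p, 1 / p))
# -> about 1 in 600.5001389, the figure quoted above

# A one-second propagation handicap costs an honest majority roughly p
# of its effective weight, turning the 50% threshold into ~49.83%.
print("effective attack threshold = %.2f%%" % (50 - 100 * p))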
"With e-currency based on cryptographic proof, without the need to trust a third party middleman, money can be secure and transactions effortless." -- Satoshi
Conqueror (Legendary) | Activity: 1354 | Merit: 1020
I was diagnosed with brain parasite
October 13, 2014, 06:43:44 PM | #42

I think it is a wise choice.
Fork it!
Minecache (Legendary) | Activity: 2198 | Merit: 1024
Vave.com - Crypto Casino
October 13, 2014, 06:58:31 PM | #43

What's a fork? And what's the difference between forks when it's stiff or flaccid?

Not everyone's a bitcoin MSc Engineer FFS.

BittBurger (Hero Member) | Activity: 924 | Merit: 1001
October 13, 2014, 07:54:20 PM | #44

I am glad to see that the viewpoint of the dev team has changed from "We'll deal with that later" to "We need to do this now, so there is a later".

The industry is evaluating bitcoin for its capabilities and restrictions now, for things they will build "later".

Hats off to Gavin for taking the reins.  Wish it wasn't such a one-man show in this regard.

Any chance someone can begin to manage the development priorities and enhancements, and create a timeline for execution, so there is a "5 year plan" set in stone?

-B-

Owner: "The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
View it on the Blockchain | Genesis Block Newspaper Copies
Cryddit (Legendary) | Activity: 924 | Merit: 1129
October 14, 2014, 03:15:52 AM | #45

So, the limit is defined in main.h, line 42.
https://github.com/bitcoin/bitcoin/blob/5505a1b13f75af9f0f6421b42d97c06e079db345/src/main.h#L42

And the test is done at main.cpp at line 727.
https://github.com/bitcoin/bitcoin/blob/3222802ea11053f0dd69c99fc2f33edff554dc17/src/main.cpp#L727

So this looks like a very simple change. 

Is there any way for someone to play silly buggers right around the transition period?  With the main chain and other chains at different heights, can the client ever be in a state where it is willing to accept larger blocks on one chain, or unwilling to accept them, because of the block height of some different chain?
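To make that concern concrete, an illustrative sketch in Python (not Bitcoin Core's actual C++; FORK_HEIGHT and the growth schedule here are assumptions) of a height-gated limit. The safe answer to the question is that the check may depend only on the height of the block within the chain it extends, never on another chain's tip:

Code:
ONE_MB = 1000000
FORK_HEIGHT = 350000        # hypothetical activation height
BLOCKS_PER_YEAR = 52560     # ~10-minute blocks

def max_block_size(height):
    """Height-gated limit: 1 MB before the fork, then +50% per year."""
    if height < FORK_HEIGHT:
        return ONE_MB
    years = (height - FORK_HEIGHT) // BLOCKS_PER_YEAR
    return int(ONE_MB * 1.5 ** years)

def check_block_size(block_size, height):
    # The rule depends only on where this block sits in the chain it
    # extends -- never on the tip height of some other, competing chain.
    return block_size <= max_block_size(height)

assert check_block_size(900000, FORK_HEIGHT - 1)        # fits the old rule
assert not check_block_size(1200000, FORK_HEIGHT - 1)   # too big pre-fork
assert check_block_size(1200000, FORK_HEIGHT + BLOCKS_PER_YEAR)  # 1.5 MB cap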

TheFootMan (Hero Member) | Activity: 490 | Merit: 500
October 14, 2014, 07:42:32 PM | #46

So, the limit is defined in main.h, line 42.
https://github.com/bitcoin/bitcoin/blob/5505a1b13f75af9f0f6421b42d97c06e079db345/src/main.h#L42

And the test is done at main.cpp at line 727.
https://github.com/bitcoin/bitcoin/blob/3222802ea11053f0dd69c99fc2f33edff554dc17/src/main.cpp#L727

So this looks like a very simple change. 

Is there any way for someone to play silly buggers right around the transition period?  With the main chain and other chains at different heights, can the client ever be in a state where it is willing to accept larger blocks on one chain, or unwilling to accept them, because of the block height of some different chain?




Could anyone lay out exactly what the risks are with the proposed hard fork? I.e. what could go wrong, and what are the odds of it going wrong (if that can even be calculated)?
tvbcof (Legendary) | Activity: 4592 | Merit: 1276
October 14, 2014, 08:24:20 PM | #47

Would not the best solution be a dynamic update?

With Gavin's suggestion the max block size will become:

2014: 1 MB
2015: 1.5 MB
2016: 2.25 MB
2017: 3.375 MB
2018: 5.0625 MB
2019: 7.59375 MB
2020: 11.390625 MB
2021: 17.0859375 MB
2022: 25.62890625 MB
2023: 38.44335937 MB
...

LOL!  I've got to wonder if that wasn't one of the ideas that came out of the closed-door sessions with the CFR.  I guess we'll never know, since Gavin never saw fit either to swear off private conversations or to give the community a debriefing on any that may or may not have happened.  At least not that I saw.  A huge percentage of humans lack the ability to conceptualize an exponential function, and people who make it to CFR status understand this deficiency well and deploy it liberally as a weapon.

That said, I find the proposed formula workable insofar as it would not destroy the core concepts of the system.  At these relatively modest rates the system would still be operational under significant attack and it would still be feasible to verify transactions fully by my estimation.  I'm in.

The downside of this modest growth is that it would not come close to solving the 'problem' of giving the system the capacity to serve as an exchange currency.  So, what's the point?  Especially if sidechains or some other logical scaling mechanism develops.  One way or another, let's run up against the economics of transaction fees and see how they actually work before doing something as drastic as a hard fork.


sig spam anywhere and self-moderated threads on the pol&soc board are for losers.
FattyMcButterpants (Sr. Member) | Activity: 448 | Merit: 250
October 15, 2014, 03:34:46 AM | #48

Would not the best solution be a dynamic update?

With Gavin's suggestion the max block size will become:

2014: 1 MB
2015: 1.5 MB
2016: 2.25 MB
2017: 3.375 MB
2018: 5.0625 MB
2019: 7.59375 MB
2020: 11.390625 MB
2021: 17.0859375 MB
2022: 25.62890625 MB
2023: 38.44335937 MB
...

LOL!  I've got to wonder if that wasn't one of the ideas that came out of the closed-door sessions with the CFR.  I guess we'll never know, since Gavin never saw fit either to swear off private conversations or to give the community a debriefing on any that may or may not have happened.  At least not that I saw.  A huge percentage of humans lack the ability to conceptualize an exponential function, and people who make it to CFR status understand this deficiency well and deploy it liberally as a weapon.

That said, I find the proposed formula workable insofar as it would not destroy the core concepts of the system.  At these relatively modest rates the system would still be operational under significant attack and it would still be feasible to verify transactions fully by my estimation.  I'm in.

The downside of this modest growth is that it would not come close to solving the 'problem' of giving the system the capacity to serve as an exchange currency.  So, what's the point?  Especially if sidechains or some other logical scaling mechanism develops.  One way or another, let's run up against the economics of transaction fees and see how they actually work before doing something as drastic as a hard fork.


The proposed change to the block size aims to keep up with anticipated TX volume growth over time. You need to remember that the average block size right now is well under 1 MB, so even though the max block size is 1 MB, we are really not starting at that size when comparing the future max block size to today's typical block.

In the event that the block size growth is not able to keep up with TX growth, the protocol could easily be forked again to increase the block size growth.
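To put numbers on that headroom, an illustrative Python sketch regenerating the quoted schedule (the ~0.3 MB current average block size is an assumed figure, not from this thread):

Code:
# 1 MB in 2014, growing 50% per year, versus an assumed average block today.
AVG_BLOCK_MB = 0.3

limit_mb = 1.0
for year in range(2014, 2024):
    print("%d: limit %.8g MB (~%.0fx today's average)"
          % (year, limit_mb, limit_mb / AVG_BLOCK_MB))
    limit_mb *= 1.5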
tvbcof (Legendary) | Activity: 4592 | Merit: 1276
October 15, 2014, 04:32:19 AM | #49

...
In the event that the block size growth is not able to keep up with TX growth, the protocol could easily be forked again to increase the block size growth.

Ya, of course it could.  What could be easier than a simple little hard fork?  Personally I never understood why we didn't do hard forks several times per week.


sig spam anywhere and self-moderated threads on the pol&soc board are for losers.
CIYAM (Legendary) | Activity: 1890 | Merit: 1078
Ian Knowles - CIYAM Lead Developer
October 15, 2014, 04:50:27 AM | #50

I'm not sure if this has been mentioned elsewhere, but if *malleability* could be resolved then one very simple way to reduce the amount of data per tx is to put only the transaction ids in blocks (not the actual signed transaction script).

From memory a typical raw tx is 200+ bytes, so if we just stored the tx hash (32 bytes) then we'd have made nearly a ten-fold improvement in bandwidth usage for blocks (of course if a node doesn't already have all of a new block's txs in its memory pool then it would need to request them in order to validate the block).

Note that the tx scripts still need to be stored (until they can be pruned), so this is not a suggestion about "disk storage" but about reducing *bandwidth* (the txs are already being broadcast, so they don't really need to be *repeated* in each block as well).
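Some back-of-envelope arithmetic for that saving (assuming a ~250-byte average transaction; the estimate above says "200+ bytes"):

Code:
TXID_BYTES = 32
AVG_TX_BYTES = 250          # assumption; the post estimates "200+ bytes"
MAX_BLOCK_BYTES = 1000000
BLOCK_INTERVAL_S = 600

print("relay compression: ~%.1fx smaller blocks"
      % (float(AVG_TX_BYTES) / TXID_BYTES))
txids_per_block = MAX_BLOCK_BYTES // TXID_BYTES
print("txids per 1 MB block: %d" % txids_per_block)
print("implied throughput: ~%.0f tx/s"
      % (float(txids_per_block) / BLOCK_INTERVAL_S))
# ~31250 txids -> ~52 tx/s, in line with the "50 txs/sec" figure
# cited later in the thread for this idea.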

With CIYAM anyone can create 100% generated C++ web applications in literally minutes.

GPG Public Key | 1ciyam3htJit1feGa26p2wQ4aw6KFTejU
Cryddit (Legendary) | Activity: 924 | Merit: 1129
October 15, 2014, 06:01:38 PM | #51

I'm not sure if this has been mentioned elsewhere, but if *malleability* could be resolved then one very simple way to reduce the amount of data per tx is to put only the transaction ids in blocks (not the actual signed transaction script).

From memory a typical raw tx is 200+ bytes, so if we just stored the tx hash (32 bytes) then we'd have made nearly a ten-fold improvement in bandwidth usage for blocks (of course if a node doesn't already have all of a new block's txs in its memory pool then it would need to request them in order to validate the block).

Note that the tx scripts still need to be stored (until they can be pruned), so this is not a suggestion about "disk storage" but about reducing *bandwidth* (the txs are already being broadcast, so they don't really need to be *repeated* in each block as well).


This.  *EXACTLY* this.  I said this three pages ago and the responding silence was deafening.  Come on guys, this is a REALLY good idea, and needs a response. 
Minecache (Legendary) | Activity: 2198 | Merit: 1024
Vave.com - Crypto Casino
October 16, 2014, 12:23:38 AM | #52

What's a hard fork, and what's a soft fork? FFS, I've asked this before and no one answered. Can't the bitcoin geniuses step up to the plate and use this forum to educate instead of joking and berating?

I await...

cbeast (Donator, Legendary) | Activity: 1736 | Merit: 1006
Let's talk governance, lipstick, and pigs.
October 16, 2014, 01:35:21 AM | #53

What's a hard fork, and what's a soft fork? FFS, I've asked this before and no one answered. Can't the bitcoin geniuses step up to the plate and use this forum to educate instead of joking and berating?

I await...
http://bitcoin.stackexchange.com/questions/9173/what-is-a-hard-fork
Quote
Simply put, a so-called hard fork is a change of the Bitcoin protocol that is not backwards-compatible; i.e., older client versions would not accept blocks created by the updated client, considering them invalid. Obviously, this can create a blockchain fork when nodes running the new version create a separate blockchain incompatible with the older software.

http://bitcoin.stackexchange.com/questions/30817/what-is-a-soft-fork
Quote
Softforks restrict block acceptance rules in comparison to earlier versions.

That way, any blocks considered valid by the newer version are still valid in the old version. If at least 51% of the mining power shifts to the new version, the system self-corrects. (If less than 51% switch to the new version, it behaves like a hardfork though.)

Blocks created by old versions of BitcoinCore that are invalid under the new paradigm might commence a short-term "old-only fork". Eventually they would be overtaken by a fork of the new paradigm, as the hashing power working on the old paradigm would be smaller ("only old versions") than on the new paradigm ("accepted by all versions").
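A toy way to see the difference (an illustration only, not Bitcoin Core code): model block validity as a predicate over block size; a hard fork relaxes the rule, a soft fork restricts it:

Code:
MAX_OLD = 1000000           # old rule: blocks up to 1 MB

def valid_old(size):
    return size <= MAX_OLD

def valid_after_hard_fork(size):
    # Relaxed rule: new nodes accept blocks old nodes reject -> chain split.
    return size <= 20000000  # hypothetical new limit

def valid_after_soft_fork(size):
    # Restricted rule: anything new nodes accept, old nodes accept too.
    return size <= 500000

assert valid_after_hard_fork(5000000) and not valid_old(5000000)
assert valid_after_soft_fork(400000) and valid_old(400000)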

Any significantly advanced cryptocurrency is indistinguishable from Ponzi Tulips.
scarsbergholden (Hero Member) | Activity: 686 | Merit: 500
October 16, 2014, 01:38:09 AM | #54

...
In the event that the block size growth is not able to keep up with TX growth, the protocol could easily be forked again to increase the block size growth.

Ya, of course it could.  What could be easier than a simple little hard fork?  Personally I never understood why we didn't do hard forks several times per week.


There are serious risks to miners when Bitcoin is hard forked. There is a possibility that some miners will not accept the fork at first, which would leave those miners mining on what would become a worthless blockchain.

The network should really only be forked when it is absolutely necessary.

Argwai96 (Legendary) | Activity: 1036 | Merit: 1000
Thug for life!
October 16, 2014, 11:59:12 PM | #55

I'm not sure if this has been mentioned elsewhere, but if *malleability* could be resolved then one very simple way to reduce the amount of data per tx is to put only the transaction ids in blocks (not the actual signed transaction script).

From memory a typical raw tx is 200+ bytes, so if we just stored the tx hash (32 bytes) then we'd have made nearly a ten-fold improvement in bandwidth usage for blocks (of course if a node doesn't already have all of a new block's txs in its memory pool then it would need to request them in order to validate the block).

Note that the tx scripts still need to be stored (until they can be pruned), so this is not a suggestion about "disk storage" but about reducing *bandwidth* (the txs are already being broadcast, so they don't really need to be *repeated* in each block as well).

This change would require users to check with nodes for every single transaction on the blockchain. The way the blockchain is set up now, a user can download what they think is the blockchain and easily check each block to make sure no transaction spends the same input twice, while if the blockchain only contains the TXID of each tx then someone would not only need to download the blockchain but also connect to what they hope is an honest node to confirm the inputs and outputs of each TX.
CIYAM (Legendary) | Activity: 1890 | Merit: 1078
Ian Knowles - CIYAM Lead Developer
October 17, 2014, 02:16:14 AM | #56

This change would require users to check with nodes for every single transaction on the blockchain. The way the blockchain is set up now, a user can download what they think is the blockchain and easily check each block to make sure no transaction spends the same input twice, while if the blockchain only contains the TXID of each tx then someone would not only need to download the blockchain but also connect to what they hope is an honest node to confirm the inputs and outputs of each TX.

The transaction hashes (malleability issues aside) mean they don't need to worry about the *honesty* of their peers: either each tx matches its hash or it doesn't. The basic design of the blockchain hasn't been changed either, and *normal bitcoin core nodes* that relay txs will already have (at least most of) the txs for new blocks in their memory pool.

If you are worried about a "brand new node" that needs to catch up, then I would suggest that it could be given "full blocks". As stated, my goal wasn't to save on disk space (so blocks would still be *stored* exactly as now). The idea is that for "general block publication" (to nodes that have most if not all relevant txs in their memory pool) there is no need for the tx scripts to be included in the block, saving a lot of bandwidth.

Nodes that "don't relay transactions" (nor keep a memory pool) would be a problem but I think that generally they would be connecting to things like "stratum" servers which could always just send them "complete blocks".

With CIYAM anyone can create 100% generated C++ web applications in literally minutes.

GPG Public Key | 1ciyam3htJit1feGa26p2wQ4aw6KFTejU
Argwai96 (Legendary) | Activity: 1036 | Merit: 1000
Thug for life!
October 17, 2014, 02:59:52 AM | #57

This change would require users to check with nodes for every single transaction on the blockchain. The way the blockchain is set up now, a user can download what they think is the blockchain and easily check each block to make sure no transaction spends the same input twice, while if the blockchain only contains the TXID of each tx then someone would not only need to download the blockchain but also connect to what they hope is an honest node to confirm the inputs and outputs of each TX.

The transaction hashes (malleability issues aside) mean they don't need to worry about the *honesty* of their peers: either each tx matches its hash or it doesn't. The basic design of the blockchain hasn't been changed either, and *normal bitcoin core nodes* that relay txs will already have (at least most of) the txs for new blocks in their memory pool.

If you are worried about a "brand new node" that needs to catch up, then I would suggest that it could be given "full blocks". As stated, my goal wasn't to save on disk space (so blocks would still be *stored* exactly as now). The idea is that for "general block publication" (to nodes that have most if not all relevant txs in their memory pool) there is no need for the tx scripts to be included in the block, saving a lot of bandwidth.

Nodes that don't relay transactions (nor keep a memory pool) would be a problem, but I think they would generally be connecting to things like "stratum" servers, which could always just send them "complete blocks".

So to restate your proposal: you are saying that blocks (when initially propagated) should contain only the TXIDs of the confirmed transactions in each block, and nodes should store the signed transaction for each TXID in their mempool.

A new node would need to check each signed TX in order for it to validate that the blockchain is in fact valid.

I am confused as to how the blocks would go from only having a TXID to having the entire signed TX.
CIYAM (Legendary) | Activity: 1890 | Merit: 1078
Ian Knowles - CIYAM Lead Developer
October 17, 2014, 03:10:05 AM | #58

So to restate your proposal: you are saying that blocks (when initially propagated) should contain only the TXIDs of the confirmed transactions in each block, and nodes should store the signed transaction for each TXID in their mempool.

Yes - basically the blocks just contain the txids, which can be matched against those in each node's memory pool (assuming they are present; nodes may need to request txs they don't already know about).

A new node would need to check each signed TX in order for it to validate that the blockchain is in fact valid.

I am confused as to how the blocks would go from only having a TXID to having the entire signed TX.

That would be part of the "validation" - to illustrate:

Code:
Before Validation:
<block header><txid 1><txid 2>...<txid n>

Step 1:
<block header><tx 1 expanded><txid 2>...<txid n>

Step 2:
<block header><tx 1 expanded><tx 2 expanded>...<txid n>

Step n:
<block header><tx 1 expanded><tx 2 expanded>...<tx n expanded>

So during validation the block is "expanded" by replacing the txids with the actual txs and then the expanded block can be persisted.
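A minimal sketch of that expansion step (illustrative Python; the mempool lookup and the request_tx peer-fetch callback are assumed interfaces, not Bitcoin Core's):

Code:
import hashlib

def tx_hash(raw_tx):
    """Bitcoin-style double SHA-256 of the raw transaction bytes."""
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()

def expand_block(header, txids, mempool, request_tx):
    """Replace each txid with the full signed tx, taking it from the
    local memory pool when present and requesting it from a peer
    otherwise."""
    txs = []
    for txid in txids:
        raw_tx = mempool.get(txid)
        if raw_tx is None:
            raw_tx = request_tx(txid)   # fetch from a peer
        # Peer honesty doesn't matter: the tx either hashes to the
        # txid committed in the block or it doesn't.
        if tx_hash(raw_tx) != txid:
            raise ValueError("received tx does not match its txid")
        txs.append(raw_tx)
    return header, txs  # expanded block, ready to validate and persist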

With CIYAM anyone can create 100% generated C++ web applications in literally minutes.

GPG Public Key | 1ciyam3htJit1feGa26p2wQ4aw6KFTejU
franky1 (Legendary) | Activity: 4214 | Merit: 4475
October 17, 2014, 03:37:51 AM | #59

If Gavin proposes to fork the chain, I propose that he jumps the max limit to 20 MB or more for 2015, and then does the 50% increase per year. At the moment the tx-per-second limits for the next 10 years proposed by Gavin are:
2014: 7/sec
2015: 10/sec (rounded down for easy maths, don't nitpick)
2016: 15/sec
2017: 22/sec
2018: 33/sec
2019: 49/sec
2020: 73/sec
2021: 109/sec
2022: 163/sec
2023: 244/sec
2024: 366/sec

366 tx per second is still not competitive enough to cope with Visa/Mastercard tx/s volume in 10 years.

yet if we start at 20 MB in 2015:
2015: 140/sec
2016: 210/sec
2017: 315/sec
2018: 472/sec
2019: 708/sec
2020: 1063/sec
2021: 1594/sec
2022: 2392/sec
2023: 3588/sec
2024: 5382/sec

which is more appealing, as it is able to handle large volume.

Just remember this only opens up the potential tx volume; it won't actually bloat the blockchain unless there are actual transactions to fill the limits, meaning in 2015 we may have the potential to handle 140 tx per second even if the actual tx average is still less than a dozen (thus giving a nice buffer to cope with unpredictable growth).
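For reference, an illustrative Python sketch regenerating both tables, assuming ~7 tx/s per MB of block space; exact compounding gives slightly higher late-year numbers than the hand-rounded first table above:

Code:
TPS_PER_MB = 7.0   # ~7 tx/s per MB of block space, the thread's baseline

def schedule(start_mb, start_year, years):
    """Print tx/s capacity for a starting size growing 50% per year."""
    for i in range(years):
        tps = start_mb * 1.5 ** i * TPS_PER_MB
        print("%d: %d/sec" % (start_year + i, int(tps)))

schedule(1.0, 2014, 11)    # Gavin's proposal as described above
schedule(20.0, 2015, 10)   # the 20 MB starting point proposed here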

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
CIYAM (Legendary) | Activity: 1890 | Merit: 1078
Ian Knowles - CIYAM Lead Developer
October 17, 2014, 04:23:42 AM | #60

Note that with the "compressed blocks" idea we'd be able to handle 50 txs/sec with the current 1MB limit.

With CIYAM anyone can create 100% generated C++ web applications in literally minutes.

GPG Public Key | 1ciyam3htJit1feGa26p2wQ4aw6KFTejU