Topic: How will merchants behave when there is a hard fork and they are not sure who will win?
acoindr (Legendary)
February 22, 2013, 08:16:13 PM  #41

Quote
[...] I find it neither surprising nor unhealthy that the various primary developers have different ideas about what is important. [...]

I agree. I call it free market forces check and balance.

Quote
No hard fork will be introduced unless it's clear who will win. [...]

Yes, that appears to be the case. I like Gavin's approach to upgrading:

Quote from: Gavin Andresen
A hard fork won't happen unless the vast super-majority of miners support it.

E.g. from my "how to handle upgrades" gist https://gist.github.com/gavinandresen/2355445

Quote
Example: increasing MAX_BLOCK_SIZE (a 'hard' blockchain split change)

Increasing the maximum block size beyond the current 1MB per block (perhaps changing it to a floating limit based on a multiple of the median size of the last few hundred blocks) is a likely future change to accommodate more transactions per block. A new maximum block size rule might be rolled out by:

- New software creates blocks with a new block.version
- Allow greater-than-MAX_BLOCK_SIZE blocks if their version is the new block.version or greater and 100% of the last 1000 blocks are new blocks (51% of the last 100 blocks if on testnet).

100% of the last 1000 blocks is a straw-man; the actual criteria would probably be different (maybe something like block.timestamp is after 1-Jan-2015 and 99% of the last 2000 blocks are new-version), since this change means the first valid greater-than-MAX_BLOCK_SIZE block immediately kicks anybody running old software off the main block chain.

This ensures a consensus vote from the field.
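To make the rollout rule concrete, here is a minimal sketch of the kind of supermajority check the gist describes. It is illustrative only, not Bitcoin Core code; the version number, window, and threshold are placeholders taken from the straw-man numbers above.

Code:
#include <cstddef>
#include <cstdint>
#include <vector>

// Placeholder parameters taken from the straw-man example above; a real
// rollout would pick its own version number, window, and threshold.
static const std::int32_t NEW_BLOCK_VERSION = 3;
static const std::size_t VOTE_WINDOW = 1000;     // number of recent blocks examined
static const std::size_t VOTE_THRESHOLD = 1000;  // new-version blocks required (here: 100%)
static const std::uint32_t MAX_BLOCK_SIZE = 1000000;  // current 1 MB limit

// Count how many of the last VOTE_WINDOW blocks advertise the new version
// (versions are ordered oldest to newest).
bool LargeBlocksAllowed(const std::vector<std::int32_t>& recentVersions)
{
    if (recentVersions.size() < VOTE_WINDOW)
        return false;  // not enough history to measure the vote

    std::size_t newVersionCount = 0;
    for (std::size_t i = recentVersions.size() - VOTE_WINDOW; i < recentVersions.size(); ++i)
        if (recentVersions[i] >= NEW_BLOCK_VERSION)
            ++newVersionCount;

    return newVersionCount >= VOTE_THRESHOLD;
}

// A block over the old limit is acceptable only if it advertises the new
// version AND the supermajority condition has been met.
bool CheckBlockSize(std::uint32_t blockSize, std::int32_t blockVersion,
                    const std::vector<std::int32_t>& recentVersions)
{
    if (blockSize <= MAX_BLOCK_SIZE)
        return true;  // always valid under the old rule
    return blockVersion >= NEW_BLOCK_VERSION && LargeBlocksAllowed(recentVersions);
}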
twolifeinexile (OP, Full Member)
February 22, 2013, 08:26:37 PM  #42

Quote from: acoindr
Yes, that appears to be the case. I like Gavin's approach to upgrading: "A hard fork won't happen unless the vast super-majority of miners support it." [...] This ensures a consensus vote from the field.
If the approach is as described by Gavin, with a consensus of 99%~100% of the hashing power, then it should be able to maintain confidence.
To me, the size itself is not the problem; the confidence is the problem.
Maged (Legendary)
February 23, 2013, 04:57:31 AM  #43

Quote
[...] limit would be raised, but the discussions about whether it would be raised at all, with opposition from a significant amount of people. That turned the idea of raising the limit into a complete non-starter since it requires a hard fork, despite the fact that changing the maximum block size has been the plan since the very beginning. [...]

Quote
Do you have anything by way of actual evidence of this? Not that I do or don't believe it, but it is easy and common on this forum for people to pull shit like this right out of their ass.

Why did the limit get set as it is, knowing that it was going to be a nightmare if/when it ever was to be changed (if you have any real clue)?

That realization about the impossibility of raising the maximum block size via a hard fork worried me to the point where I actually think that my statement that it would require a hard fork might be wrong. ;)

But seriously, I think that I've come up with a soft-fork version of accomplishing the same thing. A soft fork would allow us to deploy the rules for increasing the maximum block size in less than a month instead of several years, assuming it was coded ahead of time and it was considered extremely high priority. It's a terrible hack job, but to the users it would be fairly seamless. Not to give too much away just yet, but it'll be fully backwards-compatible, functionality-wise, down to version 0.6.0. I'll post the technical details in the next few days.
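For readers unfamiliar with the distinction being relied on here, a toy sketch of why a soft fork keeps old nodes on the same chain. This is not Maged's actual scheme (which he has not published at this point); the Block fields and rule names are placeholders. The key property is that a soft fork only tightens what counts as valid.

Code:
#include <cstdint>

// Placeholder block summary, for illustration only.
struct Block {
    std::uint32_t serializedSize;
    bool obeysExtraSoftForkRule;  // whatever additional restriction the soft fork introduces
};

static const std::uint32_t MAX_BLOCK_SIZE = 1000000;

// What an old (0.6.0-era) node checks: only the original rules.
bool OldRulesValid(const Block& b)
{
    return b.serializedSize <= MAX_BLOCK_SIZE;
}

// What an upgraded node checks after a soft fork: the original rules PLUS an
// extra restriction. Because the new rule set only tightens validity, every
// block new-rules miners produce is still valid to old nodes, so nobody is
// forced off the chain.
bool NewRulesValid(const Block& b)
{
    return OldRulesValid(b) && b.obeysExtraSoftForkRule;
}

// A hard fork, by contrast, would accept blocks that OldRulesValid() rejects
// (e.g. serializedSize > MAX_BLOCK_SIZE), splitting old nodes onto their own chain.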

twolifeinexile (OP, Full Member)
February 23, 2013, 05:08:51 AM  #44

Quote from: Maged
But seriously, I think that I've come up with a soft-fork version of accomplishing the same thing. [...] It's a terrible hack job, but to the users it would be fairly seamless. Not to give too much away just yet, but it'll be fully backwards-compatible, functionality-wise, down to version 0.6.0. I'll post the technical details in the next few days.
How? Old versions reject blocks bigger than the limit; your new version generates one and asks the old versions to accept it? Did you discover a vulnerability/bug in the old version?
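The premise of the question is right as far as plain validation goes: an old node applies a fixed size cap and never consults any version vote, so an over-limit block is simply invalid to it. A minimal illustration, simplified rather than taken from the actual Bitcoin source:

Code:
#include <cstdint>
#include <cstdio>

// Simplified stand-in for the fixed rule an old client enforces; the real
// check lives in the block-acceptance code, but the effect is the same:
// block size is compared against a hard-coded 1 MB cap, with no notion of
// any version-based vote.
static const std::uint32_t MAX_BLOCK_SIZE = 1000000;

bool OldNodeAcceptsSize(std::uint32_t serializedSize)
{
    return serializedSize <= MAX_BLOCK_SIZE;
}

int main()
{
    // A 2 MB block is rejected no matter what block.version it carries,
    // which is why a straightforward size increase is a hard fork.
    std::printf("2 MB block accepted by old node? %s\n",
                OldNodeAcceptsSize(2000000) ? "yes" : "no");  // prints "no"
    return 0;
}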
tvbcof (Legendary)
February 23, 2013, 05:20:14 AM  #45

Quote from: Maged
But seriously, I think that I've come up with a soft-fork version of accomplishing the same thing. [...] It's a terrible hack job, but to the users it would be fairly seamless. Not to give too much away just yet, but it'll be fully backwards-compatible, functionality-wise, down to version 0.6.0. I'll post the technical details in the next few days.

Awesome! A 'terrible hack job' to accomplish something I desperately don't want. What's not to love? :)

I believe my bitcoind build pre-dates 0.6 so I'll be anxiously awaiting your details.


Maged (Legendary)
February 26, 2013, 03:03:14 AM  #46

Quote from: twolifeinexile
How? Old versions reject blocks bigger than the limit; your new version generates one and asks the old versions to accept it? Did you discover a vulnerability/bug in the old version?
I'll put up a post about it in Development & Technical Discussion sometime this week when I have a few spare hours to answer the inevitable questions. For now, I just want people to realize that there may be other ways to go about this.

jl2012 (Legendary)
February 26, 2013, 05:29:51 PM  #47

Quote from: Maged
But seriously, I think that I've come up with a soft-fork version of accomplishing the same thing. [...] It's a terrible hack job, but to the users it would be fairly seamless. [...] I'll put up a post about it in Development & Technical Discussion sometime this week when I have a few spare hours to answer the inevitable questions. For now, I just want people to realize that there may be other ways to go about this.

Very interesting... It sounds like an exploit. Can't wait to see it.
