Bitcoin Forum
Topic: The Barry Silbert segwit2x agreement with >80% miner support. (Read 119966 times)
ComputerGenie (Hero Member) | May 26, 2017, 10:19:59 AM | #321

OK, someone set me straight...
"BU is dead", "BU is nothing", "BU is irrelevant"...

BU is still well above 1/3 of the hashrate signal (AntPool 16.77%, BTC.TOP 10.06%, ViaBTC 4.19%, etc.).
Bitmain, who supposedly signed this agreement, is still signaling the same as they have been.

 Huh

Gyrsur (Legendary) | May 26, 2017, 10:39:12 AM | #322

Quote
Activate a 2 MB hard fork on September 21, 2017

2009: the year Bitcoin went live.

21 million bitcoins in total.

Hopefully 9/21 will not become the 9/11 of Bitcoin.

EDIT: OK, the HF will be moved to December.

-ck (OP) (Legendary) | May 26, 2017, 11:23:29 AM | #323

OK, someone set me straight...
"BU is dead", "BU is nothing", "BU is irrelevant"...

BU is still well above 1/3 of the hashrate signal (AntPool 16.77%, BTC.TOP 10.06%, ViaBTC 4.19%, etc.).
Bitmain, who supposedly signed this agreement, is still signaling the same as they have been.
It actually takes time and effort to make pools signal something different. Pools, being the conservative entities that they are, will only change when they have a clear thing to change to. Their miner agreement clusterfuck has yet to produce any code for them to use to signal with so they'll just leave everything as is for now.

ComputerGenie (Hero Member) | May 26, 2017, 11:31:38 AM | #324

OK, someone set me straight...
"BU is dead", "BU is nothing", "BU is irrelevant"...

BU is still well above 1/3 of the hashrate signal (AntPool 16.77%, BTC.TOP 10.06%, ViaBTC 4.19%, etc.).
Bitmain, who supposedly signed this agreement, is still signaling the same as they have been.
It actually takes time and effort to make pools signal something different. Pools, being the conservative entities that they are, will only change when they have a clear thing to change to. Their miner agreement clusterfuck has yet to produce any code for them to use to signal with so they'll just leave everything as is for now.
That's about what I thought, just wanted to be sure I was getting it.  Undecided

Carlton Banks (Legendary) | May 26, 2017, 12:22:17 PM (last edit: 12:47:45 PM) | #325

Both camps have clearly already decided on central planning to dictate or coerce whether it's going to be on-chain or off-chain scaling and we seemingly have a binary choice between the two stupid extremes.  Neither wants to let the market choose freely and decide for itself how best to grow.  I'd argue that both sides are spineless cowards in this regard.

So, everyone should just make their own personalised Bitcoin, with exactly the rules and limits they want, "because decentralise all the things"? Roll Eyes


Do it. I'm sure everyone will be accepting DooMADCoin and CarltonCoin without question; they'll audit our code themselves, just a quick 5-minute job before they trade with us for the first time ever, and they'll keep their "money" with all the other IndividualCoins they get from everyone else.

This is the inherent problem with a static blocksize.  In the act of choosing in advance a maximum amount of space to allow, you're also deciding in advance how the growth will occur, rather than allowing demand to speak for itself.  SegWit 1MB vs SegWit 2MB are both stupid answers to the problem.  Make it variable.

You're wrong.

You may as well pull out Keynesian economics arguments in favour of making the 21 million BTC supply variable too. It's been explained to you a million and one times why variable blocksizes are not a sensible design, but you put your fingers in your ears and start shouting about variable blocksizes again, in a vain attempt to wish all the problems with that idea away.

Variable blocksizes do not have any valid design; you tried to make the design valid by making the variability negligible, and therefore pointless. If, and let me repeat, if, someone can design a variable blocksize algorithm that's actually going to work in this inconveniently real world, by all means, make the case. But in the meantime, seeing as no valid design exists, please be quiet.

CoinCube (Legendary) | May 26, 2017, 01:07:22 PM | #326

OK, someone set me straight...
"BU is dead", "BU is nothing", "BU is irrelevant"...

BU is still well above 1/3 of the hashrate signal (AntPool 16.77%, BTC.TOP 10.06%, ViaBTC 4.19%, etc.).
Bitmain, who supposedly signed this agreement, is still signaling the same as they have been.
It actually takes time and effort to make pools signal something different. Pools, being the conservative entities that they are, will only change when they have a clear thing to change to. Their miner agreement clusterfuck has yet to produce any code for them to use to signal with so they'll just leave everything as is for now.

The lack of code is probably a good thing at this juncture. I have been following this dispute only recently but the strong impression I get is that the only possible path to consensus on something like the miner agreement is if the code is ultimately produced, vetted and shipped by the core developers.

The miner agreement at this stage is simply a proposal. That proposal must now be vetted by the rest of the community to determine if a grudging consensus can be built around it.

Hopefully such a consensus will be possible. It strikes me as a far healthier path forward than stagnation forever in the current state or a civil war leading to a split into two coins.

DooMAD (Legendary) | May 26, 2017, 01:23:36 PM (last edit: 01:36:30 PM) | #327

Not sure what tangent you've wandered off on now, but the crux of the matter was that one camp clearly pressurises on-chain tx, mostly to the exclusion of all else, and the other camp pressurises off-chain tx, mostly to the exclusion of all else.  Neither camp wants to consider a healthy mix between the two.  Both are willing to herd and funnel users into their desired and preempted growth ideal by either providing potentially too much or potentially too little space, respectively.  This is the inherent problem with a static blocksize.  In the act of choosing in advance a maximum amount of space to allow, you're also deciding in advance how the growth will occur, rather than allowing demand to speak for itself.  SegWit 1MB vs SegWit 2MB are both stupid answers to the problem.  Make it variable.

You're wrong.

You may as well pull out Keynesian economics arguments in favour of making the 21 million BTC supply variable too. It's been explained to you a million and one times why variable blocksizes are not a sensible design, but you put your fingers in your ears and start shouting about variable blocksizes again, in a vain attempt to wish all the problems with that idea away.

Variable blocksizes do not have any valid design; you tried to make the design valid by making the variability negligible, and therefore pointless. If, and let me repeat, if, someone can design a variable blocksize algorithm that's actually going to work in this inconveniently real world, by all means, make the case. But in the meantime, seeing as no valid design exists, please be quiet.

That's a fair bit of verbal fluff just to voice the view that you don't personally think it's a valid design.  Did you have anything else to offer besides opinion?  Also, you can't argue it's "negligible" or "pointless" when you yourself clarified in the other thread the small potential of a maximum 8.16MB combined base and witness after 4 years.  Or is that what you deem negligible now?  Plus, if I had gone with larger adjustments, potentially resulting in larger increases, you no doubt would have bitched about the possible threat to node decentralisation and the usual 'gigablocks by midnight' nonsense.  There's literally no pleasing you.  If it wasn't proposed by Core, you shoot it down by fair means or foul.

-ck (OP) (Legendary) | May 26, 2017, 01:24:20 PM | #328

The lack of code is probably a good thing at this juncture. I have been following this dispute only recently, but the strong impression I get is that the only possible path to consensus on something like the miner agreement is if the code is ultimately produced, vetted and shipped by the core developers.

The miner agreement at this stage is simply a proposal. That proposal must now be vetted by the rest of the community to determine if a grudging consensus can be built around it.

Hopefully such a consensus will be possible. It strikes me as a far healthier path forward than stagnation forever in the current state or a civil war leading to a split into two coins.
That sounds good in principle, but alas the miner agreement does a lot wrong that core can't condone. If they were to run with core's segwit implementation and a 2MB hard fork it would be different (as already proposed on the mailing list), but the agreement goes to great pains to say that segwit will be activated on a different signaling bit and concurrently with a hard fork. Core can't agree to something that undoes the existing implementation, which won't expire till November, in order to adopt this more radical approach. If core agrees to do segwit followed by a 2MB HF, it has to be with their existing implementation, or they lose the next 6 months' opportunity to activate the heavily tested segwit component that is already prepared.

The mining consortium has to ease its stance to meet them, or we do nothing for another 6 months again, or we get forks galore, or we risk working from an outside-provided code base. BU proved that's not a safe option with their incredibly unstable implementation of just one feature. The miner agreement is one made by people who appear to not even know what they're agreeing to, and it ignores - or doesn't understand - what is realistically doable in a safe manner.

The halfway point is core agreeing to their current segwit implementation AND a 2MB base blocksize hard fork that they implement. It's my gut feeling that's what we'll end up with, but there needs to be a lot of rhetoric, chest thumping and circle jerking in the interim.

ComputerGenie (Hero Member) | May 26, 2017, 01:45:56 PM | #329

Someone, please remind me why we need any size limit in 2017; when CPUs are pushing 100 GFLOPS and GPUs are pushing 10k, surely more can be processed than produced...
Why are we still sitting with a 2011 mentality?  Huh

CoinCube (Legendary) | May 26, 2017, 01:56:05 PM (last edit: 02:41:23 PM) | #330

The lack of code is probably a good thing at this juncture. I have been following this dispute only recently, but the strong impression I get is that the only possible path to consensus on something like the miner agreement is if the code is ultimately produced, vetted and shipped by the core developers.

The miner agreement at this stage is simply a proposal. That proposal must now be vetted by the rest of the community to determine if a grudging consensus can be built around it.

Hopefully such a consensus will be possible. It strikes me as a far healthier path forward than stagnation forever in the current state or a civil war leading to a split into two coins.
That sounds good in principle, but alas the miner agreement does a lot wrong that core can't condone. If they were to run with core's segwit implementation and a 2MB hard fork it would be different (as already proposed on the mailing list), but the agreement goes to great pains to say that segwit will be activated on a different signaling bit and concurrently with a hard fork. Core can't agree to something that undoes the existing implementation, which won't expire till November, in order to adopt this more radical approach. If core agrees to do segwit followed by a 2MB HF, it has to be with their existing implementation, or they lose the next 6 months' opportunity to activate the heavily tested segwit component that is already prepared. The mining consortium has to ease its stance to meet them, or we do nothing for another 6 months again, or we get forks galore, or we risk working from an outside-provided code base. BU proved that's not a safe option with their incredibly unstable implementation of just one feature. The miner agreement is one made by people who appear to not even know what they're agreeing to, and it ignores - or doesn't understand - what is realistically doable in a safe manner. The halfway point is core agreeing to their current segwit implementation AND a 2MB base blocksize hard fork that they implement. It's my gut feeling that's what we'll end up with, but there needs to be a lot of rhetoric, chest thumping and circle jerking in the interim.


As a member of the community who is totally uninvolved in either mining or development, but who nevertheless has a substantial interest in bitcoin, I kindly request that all parties commence immediately with the necessary chest thumping so we can move on to the halfway point bolded above.


Holliday (Legendary) | May 26, 2017, 02:02:17 PM | #331

Someone, please remind me why we need any size limit in 2017; when CPUs are pushing 100 GFLOPS and GPUs are pushing 10k, surely more can be processed than produced...
Why are we still sitting with a 2011 mentality?  Huh

It's been 4 days since I restarted my Bitcoin node (in order to make a configuration change) and I've uploaded 300 gigabytes of data to peers since then. I'm on track to upload over 2 terabytes this month.

A block size increase will increase the bandwidth required to run a full node.

Increasing the bandwidth required to run a full node will certainly reduce the number of full nodes. It's up to each user to decide whether that is desirable or not.

Carlton Banks (Legendary) | May 26, 2017, 02:03:27 PM | #332

Not sure what tangent you've wandered off on now, but the crux of the matter was that one camp clearly pressurises on-chain tx, mostly to the exclusion of all else, and the other camp pressurises off-chain tx, mostly to the exclusion of all else.  Neither camp wants to consider a healthy mix between the two.  Both are willing to herd and funnel users into their desired and preempted growth ideal by either providing potentially too much or potentially too little space, respectively.  This is the inherent problem with a static blocksize.  In the act of choosing in advance a maximum amount of space to allow, you're also deciding in advance how the growth will occur, rather than allowing demand to speak for itself.  SegWit 1MB vs SegWit 2MB are both stupid answers to the problem.  Make it variable.

You're wrong.

You may as well pull out Keynesian economics arguments in favour of making the 21 million BTC supply variable too. It's been explained to you a million and one times why variable blocksizes are not a sensible design, but you put your fingers in your ears and start shouting about variable blocksizes again, in a vain attempt to wish all the problems with that idea away.

Variable blocksizes do not have any valid design; you tried to make the design valid by making the variability negligible, and therefore pointless. If, and let me repeat, if, someone can design a variable blocksize algorithm that's actually going to work in this inconveniently real world, by all means, make the case. But in the meantime, seeing as no valid design exists, please be quiet.

That's a fair bit of verbal fluff just to voice the view that you don't personally think it's a valid design.  Did you have anything else to offer besides opinion?  Also, you can't argue it's "negligible" or "pointless" when you yourself clarified in the other thread the small potential of a maximum 8.16MB combined base and witness after 4 years.  Or is that what you deem negligible now?  Plus, if I had gone with larger adjustments, potentially resulting in larger increases, you no doubt would have bitched about the possible threat to node decentralisation and the usual 'gigablocks by midnight' nonsense.  There's literally no pleasing you.  If it wasn't proposed by Core, you shoot it down by fair means or foul.

*sigh*

There is no point in variable blocksize, because there's no way of stopping it being turned into "gigablocks by midnight", your caricature of preference when it comes to making a point, seeing as you've got no valid arguments against the fact that variable blocksize is easily gamed, hence your moratorium on that argument in your so-called discussion thread.

And my argument, that you didn't anticipate having to censor, was that allowing such tiny increases is pointless anyway. It's not the size of the increase that makes the slightest bit of difference, it's the actual method you're arguing for, "market driven" variability in blocksize.

See if this can penetrate the thickness of your skull: the market includes people who don't like Bitcoin and want it to become centralised and fail. Those people will push the blocksize as high as possible no matter the cost to them, and the more well-financed such opponents are, the more likely they are to throw every resource they have at raising that blocksize to the absolute max.

Hence, blocksize should always be a programmed constant, with step changes, sure, but a programmed constant, just like the coin supply. Now, argue that (irrefutable) point, or shut your mouth.

ComputerGenie (Hero Member) | May 26, 2017, 02:06:06 PM | #333

It's been 4 days since I restarted my Bitcoin node (in order to make a configuration change) and I've uploaded 300 gigabytes of data to peers since then. I'm on track to upload over 2 terabytes this month.
A block size increase will increase the bandwidth required to run a full node.
Increasing the bandwidth required to run a full node will certainly reduce the number of full nodes. It's up to each user to decide whether that is desirable or not.
Given that I run one on what is considered "small" bandwidth (10M down 0.5M up) by modern standards, I'm not sure I'm seeing that as a viable reason/excuse.

Edit: I'm not sure I see the logic in throttling a protocol because some potential users cannot utilize the same bandwidth as "Hillbilly Wireless" (and, yes, that's actually my ISP).

BillyBobZorton (Legendary) | May 26, 2017, 02:30:43 PM | #334

Looks like Tone Vays and Trace Mayer have joined the UASF team. I expect UASF to keep gaining traction as miners keep fucking around. ASICBOOST needs to die; if Jihan keeps avoiding the segwit soft fork to keep milking fees, he's going down in history as the Goliath destroyed by an army of UASF Davids.
ComputerGenie (Hero Member) | May 26, 2017, 02:32:23 PM | #335

...ASICBOOST needs to die...
Can we keep the tinfoil-hat stuff out of this?  Roll Eyes

DooMAD (Legendary) | May 26, 2017, 02:37:52 PM | #336

There is no point in variable blocksize, because there's no way of stopping it being turned into "gigablocks by midnight", your caricature of preference when it comes to making a point, seeing as you've got no valid arguments against the fact that variable blocksize is easily gamed, hence your moratorium on that argument in your so-called discussion thread.

If there were no safeguards in place, then it would be easily gamed, which is why several safeguards have been introduced and more are under review.  I want to take every reasonable precaution to prevent gaming the system.


And my argument, that you didn't anticipate having to censor, was that allowing such tiny increases is pointless anyway.

Yes, it's fair to say I wasn't anticipating you completely contradicting yourself by saying that the proposal was too conservative and the adjustments were too small.  How large would you like the potential increases to be?   Tongue


See if this can penetrate the thickness of your skull: the market includes people who don't like Bitcoin and want it to become centralised and fail. Those people will push the blocksize as high as possible no matter the cost to them, and the more well-financed such opponents are, the more likely they are to throw every resource they have at raising that blocksize to the absolute max.

Hence, blocksize should always be a programmed constant, with step changes, sure, but a programmed constant, just like the coin supply. Now, argue that (irrefutable) point, or shut your mouth.

Has anyone actually done any research into how many transactions drop out of the mempool because they took too long to confirm?  All of the spam transactions people link to when they post threads about spam attacks have already been confirmed into a block.  So see if this can penetrate the thickness of your skull: the 1MB limit didn't prevent the spam from increasing the total size of the blockchain.  The spam was confirmed into the blockchain.  All the 1MB cap achieved was to make it take longer to confirm.

Spam.  Still.  Got.  In.

While I don't doubt you have the right intentions, the fact is you don't seem to understand what you're even arguing for.  The scaling proposal you support does not prevent motivated attackers from increasing the total size of the blockchain.  There isn't a single proposal that does.  Which is why I want to look at things like increased fees for repeated transactions from the same address over a short period.  Something that would actually help deter spam, more than any arbitrary cap ever could.

CoinCube (Legendary) | May 26, 2017, 02:48:58 PM | #337

Just to interject:

From what I have read on this dispute, it appears very clear that, regardless of their technical merits, neither

1) Variable blocksize
nor
2) SegWit alone without a block size increase

can achieve consensus at this moment in bitcoin history. Thus neither is a viable pathway forward right now.

Some compromise position must be adopted. It does not matter if it is a perfect solution. All it needs to be is safe and good enough to achieve widespread if grudging consensus.

mindrust (Legendary) | May 26, 2017, 03:29:39 PM | #338

Someone, please remind me why we need any size limit in 2017; when CPUs are pushing 100 GFLOPS and GPUs are pushing 10k, surely more can be processed than produced...
Why are we still sitting with a 2011 mentality?  Huh

It's been 4 days since I restarted my Bitcoin node (in order to make a configuration change) and I've uploaded 300 gigabytes of data to peers since then. I'm on track to upload over 2 terabytes this month.

A block size increase will increase the bandwidth required to run a full node.

Increasing the bandwidth required to run a full node will certainly reduce the number of full nodes. It's up to each user to decide whether that is desirable or not.

Taking the ability to run full nodes away from users and giving it to elite mining companies with access to resources will disturb the balance. Users will have no say in bitcoin's future. Miners will be the only deciding factor.

Too powerful miners will be too easy a target for governments. Government killz minerz, bitcoin dies.

That's what all you BU shills have failed to understand from the beginning. Please fork the fuck away asap, take your BUcoin and let us be.

dinofelis (Hero Member) | May 26, 2017, 03:48:47 PM | #339

Someone, please remind me why we need any size limit in 2017; when CPUs are pushing 100 GFLOPS and GPUs are pushing 10k, surely more can be processed than produced...
Why are we still sitting with a 2011 mentality?  Huh

It's been 4 days since I restarted my Bitcoin node (in order to make a configuration change) and I've uploaded 300 gigabytes of data to peers since then. I'm on track to upload over 2 terabytes this month.

A block size increase will increase the bandwidth required to run a full node.

Increasing the bandwidth required to run a full node will certainly reduce the number of full nodes. It's up to each user to decide whether that is desirable or not.

Taking the ability to run full nodes away from users and giving it to elite mining companies with access to resources will disturb the balance. Users will have no say in bitcoin's future. Miners will be the only deciding factor.

But this is already the case, and solely due to bitcoin's design, which determines consensus by hash power: miners are put in a game that leads to a Nash equilibrium of all of them playing by the same rules, which we call "consensus". What changed from the original conception, although it was highly predictable, was the clustering of mining power into a small number of pools: 5 for a majority, 20 for essentially a 99% majority.
Bitcoin's design, on top of that, makes it extremely difficult for a mining pool not to be in the majority consensus: the very slow difficulty adaptation kills any small minority of miners deviating from the consensus.

What does this bring us?
a) Even if there are only 20 mining pools, if they remain decentralized, meaning they don't collude, meaning they don't "sit in a room and agree on something", then bitcoin's protocol remains what it is.
b) If a serious hash majority of these mining pools (say, 8 of the big ones) colludes, meaning they sit in a room and decide to take action together, then that action will be what bitcoin's new protocol becomes.
c) If two comparable camps appear among these mining pools, then they can remain locked in a prisoner's dilemma as in (a), or they can "pull the trigger" and break miner consensus. ONLY AT THAT POINT can users say something (in the market). The only thing a UASF could obtain is that the trigger is pulled somewhat faster, by making at least one camp believe it might win. But since the nodes are not necessarily representative of the market, that's a risky bet.

Users can vote in the market if miners break miner consensus and fork into two coins. As long as there is miner consensus, users can just decide to accept it, or to leave bitcoin and their holdings behind them.

This was not the case as long as miners were manifold and dispersed throughout the P2P network, because the P2P network FILTERED blocks coming from just about anywhere. But this is not the case any more. Miners don't need the P2P network, with Joe's node in his basement, to get blocks from their peers; relying on it would make them lose too many blocks. You can estimate the connectivity between miners from the orphan rate.

https://blockchain.info/charts/n-orphaned-blocks?timespan=60days

There have been 3 orphaned blocks, more than a month ago, and nothing since!

You would almost think that there is only ONE MINER POOL, so good are their connections. Joe's node in his basement is totally out of this game.

3 blocks in 2 months is essentially 3 collisions over roughly 10,000 blocks produced. In other words, mining pools learn of each other's blocks within a time lapse of the order of 10 minutes × 3 / 10,000 = 180 ms!

Joe's node in his basement has filtered ZILCH between miners. In 180 ms, they know one another's blocks (otherwise, purely statistically, they would have published many more orphaned blocks).

dinofelis (Hero Member) | May 26, 2017, 04:01:09 PM | #340

Increasing the bandwidth required to run a full node will certainly reduce the number of full nodes. It's up to each user to decide whether that is desirable or not.

I think the first requirement of a user is to be able to do transactions in a cheap and permissionless way. It is very funny to run your own node in your basement if you're a geek, but you don't need that to do transactions. Ask a user whether he wants to be able to transact cheaply and without any hassle, at the price of investing in a few-TB hard disk and maybe upgrading his internet link if he wants to be a geek in his basement; or whether he wants to keep using his old core-duo PC with its 300 GB disk for his geeky full node in his basement, but with the current transaction difficulties. I think the answer will come very fast. But it is sufficient to make a hard fork and ask the market to vote on the two models; that's how the free market is supposed to work:
do you want a big car that can run over all kinds of terrain but can only reach 100 mph, or do you want a sports car that can do 200 mph but only on good roads? Sell both! The customer will choose. There may be a public for both.

By far most users of bitcoin have never run a full node. They don't need it. A full node simply copies what the miners made, and the miners are the entities that keep the miners in check. The wallet downloads the right data from a full node, which doesn't need to be trusted, because the wallet can verify the cryptographic authenticity: that the data it received is the data the miners made.