Bitcoin Forum
Author Topic: I hold no view seeking information: 4 to 8 MB now. Technical Objections?  (Read 2234 times)
jubalix (OP) | Legendary | April 22, 2017, 03:03:18 AM | #1

What prevents, say, a block size increase to 4 MB or 8 MB now, or some such, given that BTC can reasonably be said to be achieving higher usage, and bandwidth and storage are getting cheaper?

Why would the pro-Segwit camp be against this?

Why would the pro-BU camp be against this?

I am seeking technical arguments only, and maybe financial interests as a consequence of the alternatives, not emotive responses.

At present I fail to see why both camps could not just upgrade to 4 or 8 MB and continue with their BU and Segwit campaigns. It sort of just raises the threshold and allows more usage at lower fees.

I also fail to see why, if satoshi originally had 32 MB blocks, any party would object to going up to this size as usage increases.

Also, I don't buy that you will get spam in blocks, because you have to pay to be included, so all transactions are legitimate.


cryptoanarchist | Legendary | April 22, 2017, 03:33:11 AM | #2

Part of the reason so many people are switching clients away from Core is that they have tried to ask the same question and just got name-called and banned. What does that tell you?

achow101 | Moderator, Legendary | April 22, 2017, 03:48:52 AM | #3

There are multiple problems with a block size increase in general, regardless of how big you make it. While 4 or 8 MB might sound reasonable to you, it is in fact not really sustainable when you consider everything else that can result from having larger blocks.

An important thing to keep in mind when designing large robust systems is that you must always assume that the worst case scenario can and will happen.

Firstly, there is the problem of quadratic sighashing. Increasing the block size to 4 MB means that we would be allowing a theoretical 4 MB transaction which, due to its size, could take a very long time to validate. A block was mined a while back that took ~30 seconds to validate because it consisted of just one gigantic 1 MB transaction. Since the hashing work grows quadratically with transaction size, a similar 4 MB transaction in a 4 MB block would take roughly 16 times as long, about 480 seconds, to validate.
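
To make the quadratic scaling concrete, here is a rough back-of-the-envelope model (illustrative Python, not code from any actual client; the 148-byte figure is just an assumed size for a typical legacy input):
Code:
# Rough model of legacy (pre-segwit) sighash cost; illustration only.
TYPICAL_INPUT_BYTES = 148  # assumed size of a simple legacy input

def bytes_hashed(tx_size_bytes):
    # Each of the n inputs re-hashes ~the whole transaction to verify
    # its signature, so total hashing grows quadratically with tx size.
    n_inputs = tx_size_bytes // TYPICAL_INPUT_BYTES
    return n_inputs * tx_size_bytes

one_mb = bytes_hashed(1_000_000)
four_mb = bytes_hashed(4_000_000)
print(four_mb / one_mb)  # ~16.0: 4x the size means ~16x the hashing work
# So if 1 MB took ~30 s, a 4 MB worst case takes ~30 * 16 = ~480 s.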

Secondly, increasing the block size in general increases the burden on full nodes in terms of bandwidth and disk space. Right now the blockchain is already fairly large and growing at a fairly fast pace: it gains ~1 GB every week or so. In the worst case, 4 MB blocks would mean the blockchain grows at ~4 GB per week. That growth is quite large and hard to sustain. Full nodes need to download that amount of data per week and upload it to multiple peers. That consumes a lot of bandwidth, and people will likely stop running full nodes due to the extra cost. Furthermore, it will become more and more difficult to bring new nodes online, since syncing consumes so much bandwidth and disk space, so it is unlikely that people will be starting up new full nodes. Overall, this extra cost is a centralizing pressure and will result in fewer full nodes and an increased burden on those who currently run them. And larger blocks don't just affect bandwidth and disk space; they also require more processing power and memory to fully process, which raises the minimum machine requirements as well.
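
A quick sanity check on those growth numbers (illustrative arithmetic, assuming one block every ~10 minutes):
Code:
# Worst-case chain growth for a given maximum block size.
BLOCKS_PER_WEEK = 6 * 24 * 7  # ~1008 blocks at one per ~10 minutes

def weekly_growth_gb(block_mb):
    return block_mb * BLOCKS_PER_WEEK / 1000

for size_mb in (1, 4, 8):
    print(f"{size_mb} MB blocks -> ~{weekly_growth_gb(size_mb):.1f} GB/week")
# 1 MB -> ~1.0 GB/week (matches the observed ~1 GB/week today)
# 4 MB -> ~4.0 GB/week, before counting uploads to multiple peers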

There was a paper published a year or so ago which analyzed the ability of the network to support various block sizes. I believe it concluded that, based on network bandwidth alone, the network might have been able to support 4 MB blocks and still keep most of the full nodes. However, it did not consider the machine specs required for larger blocks, so if you factor those in, the maximum sustainable block size is likely smaller.

Lastly, such a change would require a hard fork. Hard forks are hard to coordinate: you have to get everyone on board and upgraded at the same time. By this time, two block-size-increase hard forks have been attempted, and both have failed. With all of the politics, contention, and toxicity going on right now, I highly doubt that we would be able to get the consensus required to activate such a hard fork. Additionally, planning out, implementing, and testing a safe fork (hard or soft) takes a long time, so such a fork would not be ready for months, if not a year or more.

jonald_fyookball | Legendary | April 22, 2017, 04:10:55 AM | #4

Quote from: achow101 on April 22, 2017, 03:48:52 AM
[achow101's post #3 above, quoted in full: quadratic sighashing, bandwidth and disk burden on full nodes, and hard-fork coordination]



Mostly correct, but these problems are solvable. 

Flextrans (bundled in Bitcoin Classic) offers an already-coded solution to the quadratic hashing. We can also limit the transaction size or the number of sigops.

Bandwidth and disk space requirements would increase naturally, but Gavin did testing on 8 MB blocks, and if you think about it, even full 32 MB blocks only represent 1.68 TB a year of storage.
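
The 1.68 TB figure is easy to verify (illustrative arithmetic, assuming every block is full and one block every ~10 minutes):
Code:
# Annual storage for continuously full 32 MB blocks.
blocks_per_year = 6 * 24 * 365      # 52,560 blocks at ~10 minutes each
mb_per_year = 32 * blocks_per_year  # 1,681,920 MB
print(mb_per_year / 1_000_000)      # ~1.68 TB/year, as claimed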

Hard forks require coordination, but many altcoins have successfully hard forked without any major issues that I'm aware of, and I don't think it would require a year. Even if it were done very abruptly, miners kicked off the main chain unexpectedly could simply rejoin.

cryptoanarchist | Legendary | April 22, 2017, 05:06:01 AM | #5

Quote from: jonald_fyookball
Bandwidth and disk space requirements would increase naturally, but Gavin did testing on 8 MB blocks, and if you think about it, even full 32 MB blocks only represent 1.68 TB a year of storage.

Not only that, but we're already seeing terabyte hard drives at Best Buy. I'm old enough to remember that not so long ago (2000) I was blown away when a guy ordered a whole GB of memory for his Sun Starfire server. He had a total WHOPPING 4 GB in it!!! That was an insane amount of memory in a server the size of a refrigerator! Now most people have more than that in their frickin' laptops.

By the time Bitcoin scales up to Visa size, most people will probably have 4 TB of RAM and 500 TB (or more) hard drives.

Carlton Banks | Legendary | April 22, 2017, 07:14:52 AM | #6

I had to re-download the 120 GB blockchain only recently, due (I think) to not allowing Bitcoin Core to shut down properly, which corrupted part of the database.

If the blockchain were growing at 8x the rate (or, with Segwit, 20x or 40x if the base block were increased to 4 MB or 8 MB), I would have had to completely change all my plans around that; it would have taken so much more than the 2+ days it took (using a Sandy Bridge laptop with an external 1 TB Samsung 850 SSD). An 8 MB base block with a corresponding 32 MB witness allowance (i.e. 40x) would have been far too much to contemplate.


This illustrates that even a 100+ GB IBD (initial block download) and validation is taxing in real-world conditions, today.
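
Scaling that experience up with some rough arithmetic (illustrative only; real IBD time is often bounded by signature validation rather than raw bandwidth, so treat these as optimistic floors):
Code:
# Rough IBD time if the chain were N times larger, at the same
# effective throughput as the 120 GB / ~2 day download above.
chain_gb, days = 120, 2.0
gb_per_day = chain_gb / days       # ~60 GB/day effective throughput

for multiple in (8, 20, 40):
    est_days = chain_gb * multiple / gb_per_day
    print(f"{multiple}x chain -> ~{est_days:.0f} days of IBD")
# 8x -> ~16 days; 40x -> ~80 days at the same effective rate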


Why are on-chain advocates never interested in the alternative hard-fork proposals, such as improving the transaction encoding efficiency or using more space-efficient signatures? By always insisting on plain block size increases and nothing else, on-chain big-blockers ignore the threat of Mike Hearn's infamous "only 8 Google datacenters will run the Bitcoin network" future scenario. Jonald Fyookball talks constantly about this, and yet recently admitted he doesn't even run a Bitcoin node; he thinks someone else should do it for him.

If we increase even to 4 MB, there is a risk that node numbers will go down, because the computing and network resources needed to keep up with that rate of blockchain growth will overwhelm too many of the people who run nodes today. And I'm willing to risk the 4 MB (that is, 1 MB base + 3 MB witness) that Segwit increases the block size to. 8 MB? Forget it for at least 2 years; it's bound to hurt the node count even more than 4 MB.
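
For reference, the "1 MB base + 3 MB witness" framing is a loose description of Segwit's block weight rule (BIP 141): non-witness bytes count 4 weight units each, witness bytes count 1, with a 4,000,000-unit cap. A sketch:
Code:
# BIP 141 block weight, illustration only.
MAX_BLOCK_WEIGHT = 4_000_000

def block_weight(base_bytes, witness_bytes):
    # Non-witness bytes count 4 weight units each, witness bytes count 1.
    return 4 * base_bytes + witness_bytes

print(block_weight(1_000_000, 0))        # 4,000,000: a full all-legacy block
print(block_weight(250_000, 3_000_000))  # 4,000,000: a witness-heavy block
# The second block is ~3.25 MB in total; ~4 MB total is only approached
# as the non-witness portion shrinks toward zero.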

jubalix (OP) | Legendary | April 22, 2017, 11:06:18 AM | #7

Quote from: achow101 on April 22, 2017, 03:48:52 AM
[achow101's post #3 above, quoted in full]
Quote from: jonald_fyookball on April 22, 2017, 04:10:55 AM
[jonald_fyookball's post #4 above, quoted in full]


OK... I see the quadratic issue seems solvable by part of Segwit, or by Flextrans, so there would seem to be at least a reasonable argument that this part of the solution should be acceptable to all parties.

As to the size issue, well, I feel 4 MB is not that large; I mean, even I could download that in time from where I am... I guess this argument is one of degree... and of mechanisms to distribute the data...

Nagadota | Hero Member | April 22, 2017, 11:52:53 AM | #8

Quote from: jubalix
As to the size issue, well, I feel 4 MB is not that large; I mean, even I could download that in time from where I am... I guess this argument is one of degree...
Sure, it's not that large when it's just one block and you're downloading it as a slight addition to what you already have.

But when you're talking about downloading hundreds of gigabytes of data each year, needing to keep your node running nearly all of the time to keep up, and having to have great bandwidth to handle it, that's when nodes become less accessible.

A block size increase is great for a temporary boost in transaction capacity. But when you're talking about long-term upgrades to the protocol, so that Bitcoin can become even more immutable because it is sufficiently scalable, you have to accept that you can't just keep shoving up the block size. You have to find ways to upgrade the protocol and the actual system properly; otherwise you'll end up with the scenario of a few mining monopolies running full nodes and no one else.

jubalix (OP) | Legendary | April 22, 2017, 01:58:22 PM (last edit 02:19:26 PM) | #9

Quote from: Nagadota on April 22, 2017, 11:52:53 AM
[Nagadota's post #8 above, quoted in full: hundreds of gigabytes a year would make nodes less accessible; the protocol has to be upgraded properly rather than by just raising the block size]

OK, I see this may be a considerable issue... what would help alleviate this, code-wise, apart from block size?

Carlton Banks | Legendary | April 22, 2017, 02:42:42 PM | #10

Quote from: jubalix
OK, I see this may be a considerable issue... what would help alleviate this, code-wise, apart from block size?

Increase the efficiency with which the transactions themselves are stored in the blocks. Don't make the blocks bigger, make the transactions smaller. I can't believe I'm the only person regularly arguing in favour of this incredibly common-sense and simple concept, which actually fulfills the definition of what the word "scaling" means.
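
As one concrete (hypothetical) example of what making transactions smaller could look like: swapping ~72-byte DER-encoded ECDSA signatures for 64-byte fixed-size signatures, as Schnorr-style proposals would allow. The byte counts below are my assumed approximations for a simple pay-to-pubkey-hash spend:
Code:
# Illustrative per-transaction savings from more compact signatures.
DER_ECDSA_SIG, FIXED_SIG = 72, 64  # approx. signature sizes in bytes
PUBKEY, OUTPUT, OVERHEAD = 33, 34, 10

def tx_bytes(sig_len, n_in, n_out):
    # Per input: outpoint(36) + script overhead(~5) + sig + pubkey + seq(4).
    return n_in * (36 + 5 + sig_len + PUBKEY + 4) + n_out * OUTPUT + OVERHEAD

now = tx_bytes(DER_ECDSA_SIG, 2, 2)  # ~378 bytes for a 2-in/2-out spend
compact = tx_bytes(FIXED_SIG, 2, 2)  # ~362 bytes with fixed-size signatures
print(f"{1 - compact / now:.1%} smaller")  # ~4% per transaction; aggregating
# signatures across inputs (one signature per tx) would save much more.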

jubalix (OP) | Legendary | April 22, 2017, 03:05:47 PM | #11

Quote from: Carlton Banks
Increase the efficiency with which the transactions themselves are stored in the blocks. Don't make the blocks bigger, make the transactions smaller.

How much can we do this, in your understanding? Or anyone else's, for that matter?

It seems Segwit does allow some compression; is there a part of that codebase we could take that would not be objectionable to, say, the BU camp?

Or what other algorithms could we use to effect compression?

cryptoanarchist | Legendary | April 22, 2017, 04:02:38 PM | #12

Quote from: jubalix on April 22, 2017, 03:05:47 PM
[jubalix's post #11 above, asking how much transactions can be shrunk and what compression algorithms are available]

For most of us, Carlton has been on 'ignore' for years now. He's a known troll. SegWit doesn't make transactions smaller; it just separates part of the data out. Now ask yourself why that is a better solution than just making the blocks bigger, and then you'll start smelling the bullshit.

mda | Member | April 22, 2017, 07:08:53 PM | #13

No technical objections, only economic ones.
You can't run Bitcoin on three full nodes and support a price of hundreds of thousands of dollars per coin.
achow101 | Moderator, Legendary | April 22, 2017, 09:29:46 PM | #14

Quote from: jonald_fyookball
Flextrans (bundled in Bitcoin Classic) offers an already-coded solution to the quadratic hashing. We can also limit the transaction size or the number of sigops.
Segwit also offers an already-coded, tested, and reviewed solution to quadratic sighashing (Flextrans has not had the same level of testing and review; IIRC the current implementation has several vulnerabilities as well). However, changing to Flextrans is a lot more work than changing to Segwit: it completely rewrites the entire transaction format. Doing a complete rewrite of that in every single wallet software will take a lot of time, and there are a lot of things that can be messed up in the rewriting. As someone who has partially implemented Segwit for a wallet, I can say Segwit is really not that bad to implement, as it is an extension of the existing transaction format.

Limiting the transaction size is not really a solution; it prevents people from making those large transactions which can, at times, actually be useful. For example, they can be used to clean up a large number of spam UTXOs at a lower cost than multiple transactions. F2Pool did this with the large 1 MB transaction it made when cleaning up a lot of small, spam UTXOs from broken brainwallets.

Quote from: jonald_fyookball
Bandwidth and disk space requirements would increase naturally, but Gavin did testing on 8 MB blocks, and if you think about it, even full 32 MB blocks only represent 1.68 TB a year of storage.
As I said, the concern is not just disk space but also bandwidth.

Do you also realize that people aren't going to buy new hard drives every year just to store the blockchain? Maintaining growth of 1.68 TB a year would be quite burdensome on node operators: they would constantly have to buy new hard drives, and have the bandwidth to move that much data to and from multiple peers, while still keeping disk space for their personal data and bandwidth for personal use.
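
To put the bandwidth side in rough numbers (my own illustrative arithmetic, assuming a node that uploads to 8 peers):
Code:
# Sustained bandwidth implied by 1.68 TB/year of new block data.
SECONDS_PER_YEAR = 365 * 24 * 3600
bytes_per_year = 1.68e12                                 # 1.68 TB
down_mbps = bytes_per_year * 8 / SECONDS_PER_YEAR / 1e6  # ~0.4 Mbit/s
up_mbps = down_mbps * 8                                  # serving 8 peers
print(f"~{down_mbps:.1f} Mbit/s down, ~{up_mbps:.1f} Mbit/s up, 24/7")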

Quote from: jonald_fyookball
Hard forks require coordination, but many altcoins have successfully hard forked without any major issues that I'm aware of, and I don't think it would require a year. Even if it were done very abruptly, miners kicked off the main chain unexpectedly could simply rejoin.
Altcoins don't have the same number of users, network conditions, market cap, or development situation as Bitcoin does. Bitcoin and altcoins are not comparable.

It is not an issue of being "kicked off the main chain", but rather that a short-term, abrupt hard fork would almost certainly result in two chains and a lot of uncertainty about which one is Bitcoin.

jonald_fyookball | Legendary | April 23, 2017, 12:26:29 AM | #15

Quote from: achow101
a short-term, abrupt hard fork would almost certainly result in two chains and a lot of uncertainty about which one is Bitcoin.

How so?  If the hard fork was done with a clear majority of hashpower (especially via a majority of pools), the longest chain would clearly be Bitcoin.


achow101 | Moderator, Legendary | April 23, 2017, 12:44:36 AM | #16

Quote from: jonald_fyookball
How so? If the hard fork was done with a clear majority of hashpower (especially via a majority of pools), the longest chain would clearly be Bitcoin.
A hard fork requires not just a majority but practically everyone involved: nearly all miners and nearly all users. Just a "clear majority" (e.g. 75%) is not enough, as the remainder can still keep the original chain going, albeit rather slowly for a few weeks. Even if all of the current hash power forked away, users who did not go along with the fork could keep the original chain going using older mining hardware that had previously been retired.
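
The "rather slowly for a few weeks" part follows from the difficulty rule: difficulty only retargets every 2016 blocks, so a minority chain crawls until then (illustrative arithmetic):
Code:
# Time to the next difficulty retarget on a minority chain.
RETARGET_BLOCKS = 2016
normal_days = RETARGET_BLOCKS * 10 / (60 * 24)  # ~14 days at 10 min/block

for share in (0.50, 0.25, 0.10):
    print(f"{share:.0%} of hashpower -> ~{normal_days / share:.0f} days "
          f"of ~{10 / share:.0f}-minute blocks before difficulty adjusts")
# e.g. 25% -> ~56 days of ~40-minute blocks on the original chain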

jonald_fyookball | Legendary | April 23, 2017, 12:46:31 AM | #17

Quote from: achow101
the remainder can still keep the original chain going

They could, sure.  But I doubt there would be much confusion about what was happening.

achow101 | Moderator, Legendary | April 23, 2017, 12:56:01 AM | #18

Quote from: jonald_fyookball
They could, sure.  But I doubt there would be much confusion about what was happening.
Bitcoin has quite a large userbase, and I'm pretty sure that a significant portion of the community would not really be aware of a hard fork (just as a significant portion of the community doesn't really seem to be aware of BU or Segwit, or know anything about them). Furthermore, there are the issues of transaction replay, what the new coins are listed as on exchanges, and what they call themselves. If both forks call themselves "Bitcoin" or some variation of it (e.g. "Bitcoin Original"), that can be confusing. It is especially confusing for new users who have just started using Bitcoin, or are about to start, when the fork happens, as they don't know enough about what is going on to make an informed decision. Unless a hard fork actually gains consensus (i.e. practically everyone in the community agrees with it), I think there will be confusion when a hard fork happens.

jonald_fyookball | Legendary | April 23, 2017, 01:28:54 AM | #19

Quote from: achow101
Bitcoin has quite a large userbase, and I'm pretty sure that a significant portion of the community would not really be aware of a hard fork.

That is nonsense. A Bitcoin fork in the context we are speaking about would be all over the news, maybe even mainstream news. It would be all anyone talked about for weeks and months.

Anyway, I've made my point that your objections are solvable. People who don't want things to be solved will always see objections.


achow101 | Moderator, Legendary | April 23, 2017, 01:42:59 AM | #20

Quote from: jonald_fyookball
That is nonsense. A Bitcoin fork in the context we are speaking about would be all over the news, maybe even mainstream news. It would be all anyone talked about for weeks and months.
BU, Segwit, Classic, and XT are all over the news and have even hit mainstream media. They have been nearly all anyone has talked about for months, practically years. Just take a look at the first page of Bitcoin Discussion and of both subreddits: BU and Segwit (and previously Classic and XT) make up a significant portion of what people are talking about. And yet a significant number of people still don't know what they are and ask questions about them, just like the OP of this thread. The current situation is comparable to the hypothetical hard fork, and yet many people are still asking questions about the proposals and are confused about what they are and what they do.

Quote from: jonald_fyookball
Anyway, I've made my point that your objections are solvable. People who don't want things to be solved will always see objections.
You have in no way addressed my counterpoints to your "solutions" to my objections.
