Bitcoin Forum
Author Topic: Post your SegWit questions here - open discussion - big week for Bitcoin!  (Read 84727 times)
Blockchain Mechanic
Full Member
***
Offline Offline

Activity: 380
Merit: 103

Developer and Consultant


View Profile WWW
January 04, 2017, 10:52:13 AM
 #161

Quote
current transactions have a quadratic scaling problem, it would be possible to create 32MB of transaction(s) on one block that would take half hour to process with current bitcoind on the current network protocol, bringing the network to a standstill. By the way, segwit transactions don't have this scaling problem, they scale linearly.

I am only just beginning to spread my wings in the deeper aspects of the protocol; you have just given me an interesting tidbit to pursue.

But does this not reinforce my statement? Segwit + 32 MB blocks...?

No. Why would it?

Ask yourself this question: why was the 32 MB limit abandoned in favour of a 1 MB limit?

I don't need to ask myself, I know.

Patronizing attitude aside, you did not address the issue.

I think you may be having problems interpreting my intentions


There was no content in what you or ck said that supported the idea that a 32 MB limit would be feasible. And yet you made a positive statement to the contrary (in bold above).

And so, I was asking you the most helpful question I could, in order to help you understand. If you're more interested in losing control of your ego/emotions, then you're definitely asking yourself the wrong questions (and asking in the wrong forum/website too; we don't help people with emotional-outburst problems here).


Sorry if my statement put you off; your response was pretty curt and lacked some, shall I say, finesse. I wanted to say that since we already know we will fill up the blocks even with segwit, why not shift the conversation from "blockweight == 4 MB" and make it really about true scaling while maintaining functionality? A 32 MB weight is large enough to make "purists" like myself stop making noise and really think.

Again, I'm sorry; in hindsight perhaps I did misinterpret.

Equality vs Equity...
Discord :- BlockMechanic#8560
Carlton Banks
Legendary
*
Offline Offline

Activity: 3430
Merit: 3071



View Profile
January 04, 2017, 10:52:39 AM
 #162

perhaps a self-adjusting max block weight would serve us better?

You're demonstrating the same problem with your presentation here.

achow101 said nothing that could lead you to this conclusion, and you've said nothing to qualify the statement, so I'm going to ask you a question again.


What reasoning can you provide for wanting an algorithm that adjusts the size of witness blocks?

Vires in numeris
cellard
Legendary
*
Offline Offline

Activity: 1372
Merit: 1250


View Profile
January 04, 2017, 01:56:50 PM
 #163

Patronizing attitude aside, you did not address the issue. I am not campaigning for 32 MB blocks, simply making my position known and asking... "why not?" It's a hell of a lot more expensive now to spam transactions; even with a script, just to troll, you pay a significant amount, and nothing short of a bored billionaire or a state can sustain that. Now let's be honest: if a billionaire seriously decided to put the screws to bitcoin, we'd all feel it. They were willing to increase the overall size to just below 4 MB, so why not just REVERT to 32 MB and have segwit?
First of all, the 32 MB figure is not actually a maximum block size but rather the maximum network message size. This effectively sets the upper limit of any maximum, so with segwit and the largest possible blocks, that would be an 8 MB max block size but a 32 MB max block weight.
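To make the size-versus-weight distinction concrete, here is a rough Python sketch of the BIP 141 weight formula (illustrative only, not consensus code; the 8 MB / 32 MB numbers are the hypothetical discussed in this thread):

Code:
# weight = 3 * base_size + total_size, where base_size excludes
# witness data and total_size includes it (BIP 141).
def block_weight(base_size, total_size):
    return 3 * base_size + total_size

# Today: a 1 MB block with no witness data hits the 4M weight cap.
print(block_weight(1_000_000, 1_000_000))   # 4,000,000

# The hypothetical above: an 8 MB block with no witness data
# weighs 32M, i.e. a 32 MB max block weight.
print(block_weight(8_000_000, 8_000_000))   # 32,000,000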

While segwit makes sighashing linear, having 32 times the current maximum means that the worst-case block will take up to 32 times longer to verify. That can take up to several minutes.
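A toy cost model of that difference (Python; the byte counts are made-up assumptions, not real serialization sizes):

Code:
# Legacy signing hashes (nearly) the whole transaction once per input,
# so total bytes hashed grow quadratically with the input count.
# BIP 143 reuses cached midstate commitments, so work grows linearly.
def legacy_hash_bytes(num_inputs, bytes_per_input=150):
    tx_size = num_inputs * bytes_per_input
    return num_inputs * tx_size          # O(n^2)

def segwit_hash_bytes(num_inputs, digest_bytes=200):
    return num_inputs * digest_bytes     # O(n)

for n in (100, 1_000, 10_000):
    print(n, legacy_hash_bytes(n), segwit_hash_bytes(n))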

That aside, making the maximum block size (with segwit and all, so actually max block weight) 32 MB puts a significant strain on all full nodes. That is a theoretical maximum of 32 MB every ten minutes, which amounts to ~4.6 GB every day. This means a few things: the blockchain will grow at a maximum rate of ~4.6 GB every single day, a lot of download bandwidth will be eaten up, and even more upload bandwidth will be eaten up. This means that it will become very difficult for regular users to maintain proper full nodes (i.e. default settings as people normally run, no bandwidth limiting). This will hurt decentralization, as full nodes will be centralized to high-bandwidth, high-powered servers likely located in data centers. At the very least, it becomes very costly to maintain a full node.
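The ~4.6 GB figure is easy to check (a back-of-envelope sketch assuming the worst case, every block a full 32 MB):

Code:
MB = 1_000_000
blocks_per_day = 24 * 60 // 10        # one block per ~10 minutes = 144
daily = 32 * MB * blocks_per_day
print(daily / 1e9)                    # ~4.6 GB per day
print(daily * 365 / 1e12)             # ~1.7 TB per year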

Besides the cost of operating a full node, having such a large maximum makes starting up a new full node even more expensive than it already is. The full node first has to download the entire blockchain. Right now it is at 100 GB. Should the blockchain grow at 4.6 GB per day, that would become very large, very quickly. People would be spending hours, probably days, to download and verify the entire thing.

Now you might say that this won't happen as this is the worst case scenario. However, with these proposals you always need to think of the worst case scenario. If the worst case scenario cannot be handled, then the proposal needs to be changed such that the worst case scenario can be handled. You can't just say that the worst case scenario probably won't happen because there is still a chance that the worst case can happen, and that is not good, especially with changing consensus being so difficult now.

Thank you for your detailed answer and explanations... If you don't mind: two years from now, with segwit, we somehow start filling blocks again. Now what? Is the problem not compounded?

I'm not just asking about the tech here, but socially as well: in two years, if segwit is overwhelmed, the naysayers will start the "I told you so" carnival.

Please, I actually have no idea how we could truly scale bitcoin, hence my choice of the best of both worlds. The worst case would ruin us, but perhaps a self-adjusting max block weight would serve us better?

Core devs WANT to raise the blocksize to 2MB, contrary to the popular btc reddit FUD, but they want to do it right, and right means segwit goes first; it's as simple as that. You have neutral people like Andreas A. advocating for segwit first before hardforking too. I don't know what else those tools need to realize they are wrong. If only we could all cooperate and get segwit going as soon as possible, we could potentially fuel the current rocket into another solar system, since with segwit a lot of cool features will be possible.

I think there was only one Core dev who wanted to stay at 1MB (or even make it smaller). The rest want 2MB.
achow101
Moderator
Legendary
*
expert
Offline Offline

Activity: 3374
Merit: 6535


Just writing some code


View Profile WWW
January 04, 2017, 03:46:31 PM
 #164

Thank you for your detailed answer and explanations... If you don't mind: two years from now, with segwit, we somehow start filling blocks again. Now what? Is the problem not compounded?

I'm not just asking about the tech here, but socially as well: in two years, if segwit is overwhelmed, the naysayers will start the "I told you so" carnival.

Please, I actually have no idea how we could truly scale bitcoin, hence my choice of the best of both worlds. The worst case would ruin us, but perhaps a self-adjusting max block weight would serve us better?
By that point in time, there should be multiple things available: 1) a well-liked hard fork proposal that contains a block weight increase as well as several other things, and 2) second layer solutions such as LN or sidechains. Right now, all available solutions are essentially just "kicking the can down the road", meaning that nothing will truly fix the problem, just delay the inevitable. The Bitcoin network cannot scale to VISA levels by block size alone; it requires second layer solutions such as LN and sidechains in order to scale up that high. Hopefully, by the time the block size becomes a problem again, these second layer solutions will be available.

Core devs WANT to raise the blocksize to 2MB, contrary to the popular btc reddit FUD, but they want to do it right, and right means segwit goes first; it's as simple as that. You have neutral people like Andreas A. advocating for segwit first before hardforking too. I don't know what else those tools need to realize they are wrong. If only we could all cooperate and get segwit going as soon as possible, we could potentially fuel the current rocket into another solar system, since with segwit a lot of cool features will be possible.

I think there was only one Core dev who wanted to stay at 1MB (or even make it smaller). The rest want 2MB.
Many of the Core devs are in favor of even larger block sizes (segwit has a max of 4 MB). IIRC most are in favor of a well-designed dynamic block size algorithm. However, all current dynamic block size proposals can be relatively easily gamed by miners or others to either push the limit to something large and undesirable or to something small and still undesirable.
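To illustrate the gaming problem, consider a hypothetical median-based rule (a sketch, not any specific proposal):

Code:
# Next limit = 2x the median size of the last 11 blocks. A miner
# majority can ratchet the limit upward by padding their own blocks.
from statistics import median

def next_limit(recent_sizes):
    return 2 * median(recent_sizes)

organic = [400_000] * 11
print(next_limit(organic))              # 800,000

gamed = [400_000] * 5 + [800_000] * 6   # majority pads to the max
print(next_limit(gamed))                # 1,600,000 - and repeat

Each padded window doubles the limit, so a patient miner majority can push it as high (or, by mining empty blocks, as low) as they like.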

Carlton Banks
Legendary
*
Offline Offline

Activity: 3430
Merit: 3071



View Profile
January 04, 2017, 04:34:07 PM
 #165

IIRC most are in favor of a well-designed dynamic block size algorithm. However, all current dynamic block size proposals can be relatively easily gamed by miners or others to either push the limit to something large and undesirable or to something small and still undesirable.

I'm not convinced the design problem can be solved, although Satoshi famously solved a supposedly insolvable problem to get us to where we are, so never say never.

But the issue with dynamic sizing is this: there will always be some practical absolute maximum blocksize, above which block validation would take too long for some critical proportion of the network to handle. So it will always be necessary to have some margin of safety below that absolute maximum as a de facto maximum.

And what, in practice, is the difference between that outcome and the current approach of making the blocksize a consensus rule? Miners can already choose less than the practical maximum, and they do. Increasingly less so, but occasionally blocks far less than 1MB make it into the chain. I'm failing to see how that state of affairs differs from having a capped dynamic size TBH, other than it being simpler. I would be happy to be proved wrong (dynamic resizing was my initial preference when the debate about blocksize in the community began).

Vires in numeris
amaclin
Legendary
*
Offline Offline

Activity: 1260
Merit: 1019


View Profile
January 04, 2017, 05:28:10 PM
 #166

...although Satoshi famously solved a supposedly insolvable problem...
Sorry, what problem?
(Bonus question: what is the cost of the solution?)
Carlton Banks
Legendary
*
Offline Offline

Activity: 3430
Merit: 3071



View Profile
January 04, 2017, 06:13:26 PM
 #167

...although Satoshi famously solved a supposedly insolvable problem...
Sorry, what problem?
(Bonus question: what is the cost of the solution?)

The so-called "Byzantine Generals" problem, where mutually distrusting parties must reach agreement over an insecure transmission channel; the cost of the solution is the energy used for proof-of-work hashing.
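To make that cost concrete, a toy proof-of-work loop (illustrative only; real mining hashes an 80-byte block header against a difficulty-derived target):

Code:
import hashlib

# Find a nonce whose double-SHA256 falls below a target. Halving the
# target doubles the expected work; that work is the cost of the solution.
def mine(header: bytes, target: int) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(hashlib.sha256(
            header + nonce.to_bytes(8, "little")).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

print(mine(b"toy-header", 2 ** 240))    # ~65,536 attempts on average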

Why am I being quizzed about facts that we're both aware of?

Vires in numeris
amaclin
Legendary
*
Offline Offline

Activity: 1260
Merit: 1019


View Profile
January 04, 2017, 07:06:29 PM
 #168

The so-called "Byzantine Generals" problem, where mutually distrusting parties must reach agreement over an insecure transmission channel; the cost of the solution is the energy used for proof-of-work hashing.
You say the words without thinking about their meaning.
The original problem has several actors: generals, armies, and messengers.
There are no "miners-who-try-hashes-for-a-profit" in that mathematical scheme.
What if the cost of the solution is bigger than the cost of the armies?

Carlton Banks
Legendary
*
Offline Offline

Activity: 3430
Merit: 3071



View Profile
January 04, 2017, 07:19:28 PM
 #169

What if the cost of solution is bigger than the cost of armies?

You're off topic, but your analogy is, in fact, a direct mapping of actual reality: actual armies (and other instruments of force/violence) are what prop up central banking hegemony. And if securing the Bitcoin network were more expensive than mustering an army that could take on every global superpower simultaneously, then it might be a better idea to "simply" do the latter. Would you like to start a new thread for this bizarre tangent you're leading us into? In the appropriate sub, maybe?

Vires in numeris
cellard
Legendary
*
Offline Offline

Activity: 1372
Merit: 1250


View Profile
January 04, 2017, 10:59:35 PM
 #170

IIRC most are in favor of a well-designed dynamic block size algorithm. However, all current dynamic block size proposals can be relatively easily gamed by miners or others to either push the limit to something large and undesirable or to something small and still undesirable.

I'm not convinced the design problem can be solved, although Satoshi famously solved a supposedly insolvable problem to get us to where we are, so never say never.

But the issue with dynamic sizing is this: there will always be some practical absolute maximum blocksize, above which block validation would take too long for some critical proportion of the network to handle. So it will always be necessary to have some margin of safety below that absolute maximum as a de facto maximum.

And what, in practice, is the difference between that outcome and the current approach of making the blocksize a consensus rule? Miners can already choose less than the practical maximum, and they do. Increasingly less so, but occasionally blocks far less than 1MB make it into the chain. I'm failing to see how that state of affairs differs from having a capped dynamic size TBH, other than it being simpler. I would be happy to be proved wrong (dynamic resizing was my initial preference when the debate about blocksize in the community began).


Indeed, a dynamic blocksize sounds so elegant, since anything that doesn't require renegotiating consensus and just acts as dictated by an algorithm would be ideal in a system like bitcoin; but as you said, it is easily exploitable. I've heard Monero addresses the spam trolls with a dynamic fee, but I'm not sure how it works... As far as I know, we already have dynamic fees (the higher the transaction demand, the higher the fee), so I don't see how they are solving the problem.
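For what it's worth, my understanding (from memory and simplified, so treat this as a sketch) is that Monero's mechanism is a block reward penalty rather than a fee rule: blocks above the recent median size cut into the miner's own reward.

Code:
def reward_after_penalty(base_reward, block_size, median_size):
    if block_size <= median_size:
        return base_reward
    if block_size > 2 * median_size:
        return 0.0                        # oversize: effectively invalid
    excess = block_size / median_size - 1
    return base_reward * (1 - excess ** 2)

print(reward_after_penalty(10.0, 120_000, 100_000))   # 9.6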
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4158
Merit: 8382



View Profile WWW
January 04, 2017, 11:22:13 PM
 #171

If we aren't continually filling blocks then that is a disaster.
jackg
Copper Member
Legendary
*
Offline Offline

Activity: 2856
Merit: 3071


https://bit.ly/387FXHi lightning theory


View Profile
January 05, 2017, 02:53:52 AM
 #172

If we aren't continually filling blocks then that is a disaster.

Do you mean like this, this, and several other blocks before them?

These were just released and are NOT full (as I understand it). If the limit is 1,000 KB and some of these are 1 KB off, then there's a problem, isn't there? Most simple transactions (1 sending address --> 1 receiving address, which are the most likely kind) are less than 500 bytes (most less than 300 bytes), so these blocks aren't being filled even though there is space for at least another TWO transactions in each block.

Maybe improving the network to not do this is a better place to start than segwit? (Although I'll accept segwit when it goes live, after 95% adoption.)
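The leftover-space arithmetic, sketched (assuming roughly 300 bytes for a simple spend):

Code:
limit = 1_000_000          # 1,000 KB legacy limit, in bytes
block = 999_000            # "1 KB off"
typical_tx = 300           # rough size of a 1-in/1-out transaction
print((limit - block) // typical_tx)    # 3 more would have fit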
achow101
Moderator
Legendary
*
expert
Offline Offline

Activity: 3374
Merit: 6535


Just writing some code


View Profile WWW
January 05, 2017, 03:12:29 AM
 #173

Do you mean like this, this, and several other blocks before them?

These were just released and are NOT full (as I understand it). If the limit is 1,000 KB and some of these are 1 KB off, then there's a problem, isn't there? Most simple transactions (1 sending address --> 1 receiving address, which are the most likely kind) are less than 500 bytes (most less than 300 bytes), so these blocks aren't being filled even though there is space for at least another TWO transactions in each block.

Maybe improving the network to not do this is a better place to start than segwit? (Although I'll accept segwit when it goes live, after 95% adoption.)
I think he is talking about the blocks that are not 990+ KB, which occur semi-frequently. These are often either empty blocks or simply just not full.

jackg
Copper Member
Legendary
*
Offline Offline

Activity: 2856
Merit: 3071


https://bit.ly/387FXHi lightning theory


View Profile
January 05, 2017, 12:33:35 PM
 #174

Do you mean like this, this, and several other blocks before them?

These were just released and are NOT full (as I understand it). If the limit is 1,000 KB and some of these are 1 KB off, then there's a problem, isn't there? Most simple transactions (1 sending address --> 1 receiving address, which are the most likely kind) are less than 500 bytes (most less than 300 bytes), so these blocks aren't being filled even though there is space for at least another TWO transactions in each block.

Maybe improving the network to not do this is a better place to start than segwit? (Although I'll accept segwit when it goes live, after 95% adoption.)
I think he is talking about the blocks that are not 990+ KB, which occur semi-frequently. These are often either empty blocks or simply just not full.

But most of the blocks I added also aren't full, and they appear more frequently. A block of 990 KB or less suggests there may not be a problem at all, since there must either be a deficit of transactions or an ill-configured miner.
-ck
Legendary
*
Offline Offline

Activity: 4088
Merit: 1631


Ruu \o/


View Profile WWW
January 05, 2017, 01:03:16 PM
 #175

Anything over 990kb is effectively full for the current network block size limits. The algorithm to fill the last few bytes of a 1MB block is designed to not waste heaps of time sorting through thousands of transactions just to find the last few bytes to fill the block.

However, none of this matters, because the size of the block ultimately is up to the miner's configuration. Some miners haven't even bothered changing the default, which is set to 750kb in bitcoind, while others (like p2pool miners) have lowered it to work around terrible speed issues in their mining pool design with more transactions. Furthermore, the network may have 1 million transactions pending, but the miner is free to mine their next block with absolutely zero transactions beyond their generation transaction, and many large pools still do such an optimisation as a workaround for slow block changes in the rest of their toolchain.

Basing block size on some dynamic mechanism driven by the last block sizes is silly, since it means the block size will depend on miners' whims and not really represent how many pending transactions are on the network. Alternatively, basing it on the number of pending transactions is also silly, because one man's high priority transaction is another's spam, and vice versa. Dynamic sounds good in theory but fails to address the issue that not all miners are altruistic and choose defaults that are best for the network.
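A simplified template builder shows that "effectively full" behaviour (a sketch; Bitcoin Core's actual selection is more involved, e.g. ancestor-feerate ordering):

Code:
# Take transactions best-feerate first and stop early instead of
# scanning thousands of transactions to squeeze in a few last bytes.
def build_template(mempool, max_size=1_000_000, min_tx_size=200):
    """mempool: list of (size_bytes, feerate) tuples."""
    chosen, used = [], 0
    for size, feerate in sorted(mempool, key=lambda t: -t[1]):
        if used + size <= max_size:
            chosen.append((size, feerate))
            used += size
        elif max_size - used < min_tx_size:
            break       # effectively full: the tail isn't worth filling
    return chosen, used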

Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel
2% Fee Solo mining at solo.ckpool.org
-ck
nikkisnowe
Member
**
Offline Offline

Activity: 105
Merit: 10


View Profile
January 05, 2017, 08:58:52 PM
 #176

Is there potential that the 95% threshold could be reduced at some point? Considering that a single miner with greater than 5% could prevent SegWit's adoption, I assume that this was never the intention.
amaclin
Legendary
*
Offline Offline

Activity: 1260
Merit: 1019


View Profile
January 05, 2017, 09:19:27 PM
 #177

Is there potential that the 95% threshold could be reduced at some point?
Yes, period.
Because 90% is also a majority. The majority *can* change anything in any consensus at any moment.
cellard
Legendary
*
Offline Offline

Activity: 1372
Merit: 1250


View Profile
January 07, 2017, 11:20:34 PM
 #178

Is there potential that the 95% threshold could be reduced at some point?
Yes, period.
Because 90% is also a majority. The majority *can* change anything in any consensus at any moment.

So if we reach 90% of segwit signalling, we could vote that it's not 95% anymore but 90%? Wouldn't that piss some people off? I don't get it, so I would like to know.

Also, has 95% of a big group of people ever agreed on doing anything? Shouldn't this have been foreseen a long time ago? I think 95% is too much... but at the same time, it's great to guarantee that the big majority of people want something; still, that 5% could be potential trolls, so I think 90% is probably a good enough compromise.
achow101
Moderator
Legendary
*
expert
Offline Offline

Activity: 3374
Merit: 6535


Just writing some code


View Profile WWW
January 08, 2017, 12:53:46 AM
 #179

So if we reach 90% of segwit signalling, we could vote that it's not 95% anymore but 90%? Wouldn't that piss some people off? I don't get it, so I would like to know.
If 90% of the miners decide to orphan the blocks of the remaining 10%, then they would have 100% consensus. It isn't that we can vote to change the threshold but rather that the threshold can effectively be lowered if a majority of miners can orphan all the blocks of the miners who are not signalling the change.
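Sketched in Python (BIP 9-style counting over a 2016-block window; the block counts are invented for illustration):

Code:
WINDOW, THRESHOLD = 2016, 0.95

def locked_in(signals):
    # signals: list of booleans, True = block signals the soft fork
    return sum(signals) / len(signals) >= THRESHOLD

mixed = [True] * 1850 + [False] * (WINDOW - 1850)   # ~91.8% signalling
print(locked_in(mixed))                  # False: below the 95% threshold
survivors = [s for s in mixed if s]      # non-signalling blocks orphaned
print(locked_in(survivors))              # True: 100% of what remains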

Also, has 95% of a big group of people ever agreed on doing anything? Shouldn't this have been foreseen a long time ago? I think 95% is too much... but at the same time, it's great to guarantee that the big majority of people want something; still, that 5% could be potential trolls, so I think 90% is probably a good enough compromise.
Yes, this has been done before. All soft forks in the past have had a threshold of 95% and we have activated several soft forks already with that threshold.

dlemfjqm
Newbie
*
Offline Offline

Activity: 47
Merit: 0


View Profile
January 08, 2017, 01:30:44 AM
 #180

What does it mean for Bitcoin if Segwit never activates?