Bitcoin Forum

Bitcoin => Development & Technical Discussion => Topic started by: gentlemand on February 12, 2019, 07:14:14 PM



Title: Luke Jr's 300kb blocks
Post by: gentlemand on February 12, 2019, 07:14:14 PM
I see this subject has bubbled up again on Twitter and Luke Jr is proposing an experimental soft fork with smaller blocks.

https://cryptoinsider.com/on-reducing-the-bitcoin-block-size-to-300kb/

Rather unfortunately, it appears to have been latched on to by the usual casualties who zombified themselves with BCH and other dead ends. They're expressing lots of 'concern' about ongoing sustainability now that Bitcoin turns out to have had big blocks all along.

Does it have any merit as an idea or is the entire thing one guy's worry being twisted into yet another attempt to undermine BTC?



Title: Re: Luke Jr's 300kb blocks
Post by: gmaxwell on February 12, 2019, 07:33:22 PM
one guy's worry being twisted into yet another attempt to undermine BTC?
I never even heard of it until your link.  So yes.


Title: Re: Luke Jr's 300kb blocks
Post by: Carlton Banks on February 12, 2019, 07:39:00 PM
It's a really old proposal. I thought it was trying to be too smart/subtle.


The idea was to reduce to a 300kB base size, but also set a graduated increase schedule based on absolute block heights (the 300kB step was set to take place at a block height back in 2017, IIRC). It would finally reach a 1MB base size again in 2024, and continue at a percentage rate after that (also IIRC). In other words, if the proposal were adopted today, we'd be past the 300kB stage already.

This was partly a psychologically based proposal, which is why people reacted badly, lol. I think Luke knew that a 300kB base size would get laughed off, but he figured that since the blockchain grows constantly, the closer we get to 2024 (when the 1MB base would be reached again), the more people might begin to realise that reducing from the 1MB base size was smarter than it sounded back when the blockchain was a more manageable size.

Note that all of this is using base block figures; the real possible block size would be 2-4x the base block (so 300kB would in fact be 600-1200kB if all transactions in a given block are segwit txs).

Bear in mind that as we're still not in 2024, Luke's plan may actually work, and reducing the base from 1MB to whatever the schedule would be stepped to now (which has of course increased beyond 300kB) might look good to some people. It would probably still take some convincing, but there's still 5 years left on the clock.
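Purely as an illustration of what such a height-based step schedule looks like, here's a toy sketch. Every height and step size in it is made up; they are not the actual numbers from Luke's proposal, which I don't have to hand.

Code:
# Toy sketch of a height-based step schedule of the kind described above.
# Every height and size below is INVENTED for illustration; these are not
# the actual numbers from Luke Jr's proposal.

ILLUSTRATIVE_SCHEDULE = [
    (0,       1_000_000),  # legacy 1 MB base limit
    (460_000,   300_000),  # hypothetical 2017 step down to 300 kB
    (560_000,   450_000),  # hypothetical gradual steps back up
    (660_000,   600_000),
    (760_000,   800_000),
    (840_000, 1_000_000),  # back to a 1 MB base around 2024
]

def base_size_limit(height: int) -> int:
    """Illustrative base-size limit at a given block height."""
    limit = ILLUSTRATIVE_SCHEDULE[0][1]
    for step_height, step_limit in ILLUSTRATIVE_SCHEDULE:
        if height >= step_height:
            limit = step_limit
    return limit

def base_size_with_growth(height: int, rate_per_year: float = 0.04) -> int:
    """After the last step, grow at an assumed percentage rate per year."""
    blocks_per_year = 144 * 365
    limit = base_size_limit(height)
    if height > 840_000:
        years = (height - 840_000) / blocks_per_year
        limit = int(limit * (1 + rate_per_year) ** years)
    return limit

print(base_size_with_growth(563_000))  # early-2019 height -> 450000 under these made-up steps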


Title: Re: Luke Jr's 300kb blocks
Post by: andreibi on February 12, 2019, 08:04:25 PM
We're going to have to wait until the Lightning Network gets to near or half capacity. So far, the current solution is working.


Title: Re: Luke Jr's 300kb blocks
Post by: aliashraf on February 12, 2019, 10:06:59 PM
What I get from Luke Jr's original tweet is that it's about a demonstration to prove (to somebody?) that it is possible to have temporary soft forks!

I don't see anything like advocacy for such a change in Luke's tweet. The article OP has linked to is a different story tho....

I didn't have enough time to track the writer down, but my first impressions are not encouraging. He is obviously twisting things to indirectly accuse the Core team of being too LN-oriented and engaged in a kind of conspiracy against Bitcoin to reduce its throughput, making fees jump high and pushing people to layer-2 solutions like LN.

Edit: it is just ridiculous, I found more tweets from Luke, he is seriously championing such a soft fork! The guy is going completely off the rails with his fruitless obsession with decentralization, imo.


Title: Re: Luke Jr's 300kb blocks
Post by: odolvlobo on February 13, 2019, 12:38:19 AM
Articles about Twitter threads are stupid.


Title: Re: Luke Jr's 300kb blocks
Post by: gmaxwell on February 13, 2019, 05:04:15 AM
The idea was to reduce to a 300kB base size, but also set a graduated increase schedule based on absolute block heights (the 300kB step was set to take place at a block height back in 2017, IIRC). It would finally reach a 1MB base size again in 2024, and continue at a percentage rate after that (also IIRC). In other words, if the proposal were adopted today, we'd be past the 300kB stage already.
That was the old thing, but apparently on twitter he's talking about something even less reasonable now.

In any case, I showed this thread to other developers and the response was uniformly a big wtf.

Initial sync time does suck and is a problem, but the damage has been done-- since long ago-- no amount of size reduction will solve it. If it would then perhaps these ideas would get a bit of traction, but it won't...


Title: Re: Luke Jr's 300kb blocks
Post by: aliashraf on February 13, 2019, 06:48:06 AM
What Luke is championing (on Twitter  ;D) is both improving sync time and block propagation delay.

Of the two, the latter has been improved a lot by FIBRE (very low, << 0.001, orphan rates right now) and is under even more development AFAIK, but the first one, initial sync time, is an open problem.

As for the initial sync (bootstrapping) problem, besides what Greg Maxwell has noted above (Luke's proposal wouldn't help the current situation, just reduce the pace at which it is getting worse), I think there is a definite need for the original problem to be addressed somehow.

On bitcointalk we've been discussing it recently:

here         https://bitcointalk.org/index.php?topic=5046152.msg46692966#msg46692966

and here: https://bitcointalk.org/index.php?topic=5103449.msg49479330#msg49479330


Title: Re: Luke Jr's 300kb blocks
Post by: 1Referee on February 13, 2019, 08:28:46 AM
In any case, I showed this thread to other developers and the response was uniformly a big wtf.
He seems pretty vocal on Twitter, which actually worried me a bit, because it's beyond ridiculous to even suggest something like that where we are right now.

Initial sync time does suck and is a problem, but the damage has been done-- since long ago-- no amount of size reduction will solve it. If it would then perhaps these ideas would get a bit of traction, but it won't...
It's not as bad as it seems. If you're running a decent setup, the sync time is pretty reasonable actually. I set up a second node myself earlier this year, and it synced in like 11-12 hours, while I expected it to take at least a day. RPis are a different story, but then again, run a decent setup and you don't have these problems.

In the end, the average person won't run a node even with a very small block size. Why? They just don't give a fuck. People who do give a fuck, and merchants, will continue to spec out their hardware to run their node in the most stable possible manner.

Luke Jr lost his mind.


Title: Re: Luke Jr's 300kb blocks
Post by: aliashraf on February 13, 2019, 09:14:17 AM
It's not as bad as it seems. If you're running a decent setup, the sync time is pretty reasonable actually. I set up a second node myself earlier this year, and it synced in like 11-12 hours, while I expected it to take at least a day. RPis are a different story, but then again, run a decent setup and you don't have these problems.

In the end, the average person won't run a node even with a very small block size. Why? They just don't give a fuck. People who do give a fuck, and merchants, will continue to spec out their hardware to run their node in the most stable possible manner.
Are you arguing something like "it is just fine, people should sit on the backbone (like me) and boot in half a day if they have real incentives, being a bitcoin whale (like me)"? And you expect Luke to appreciate your argument and back off?  ;D

Average users have incentives to join, like you and other bitcoin whales :P they just can't, and it is getting worse as time passes. A UTXO commitment/reconciliation protocol could change the scene radically, imo.
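To make that last idea concrete, here is a minimal conceptual sketch of a UTXO set commitment: hash every unspent output into a Merkle-style root that blocks could commit to, so a bootstrapping node could fetch a UTXO snapshot and verify it against the commitment instead of replaying all of history. The serialization and helper names here are invented for illustration, not any actual proposal's format.

Code:
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def utxo_leaf(txid_hex: str, vout: int, amount_sats: int, script_hex: str) -> bytes:
    """Hash one unspent output. The field layout here is invented for illustration."""
    blob = (bytes.fromhex(txid_hex) + vout.to_bytes(4, "little") +
            amount_sats.to_bytes(8, "little") + bytes.fromhex(script_hex))
    return sha256d(blob)

def utxo_commitment(leaves: list) -> bytes:
    """Simple Merkle root over the UTXO leaves (odd levels duplicate the last hash)."""
    if not leaves:
        return b"\x00" * 32
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# A new node that could verify this root via consensus would only need a recent
# UTXO snapshot plus recent blocks, instead of replaying the whole chain.
example = [utxo_leaf("00" * 32, 0, 50_000_000, "76a914" + "11" * 20 + "88ac")]
print(utxo_commitment(example).hex())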



Title: Re: Luke Jr's 300kb blocks
Post by: 1Referee on February 13, 2019, 09:34:32 AM
Are you arguing something like "it is just fine, people should sit on the backbone (like me) and boot in half a day if they have real incentives, being a bitcoin whale (like me)"? And you expect Luke to appreciate your argument and back off?  ;D
It has worked up to where we are today, and the number of nodes continues to increase. That's not exactly a sign of there being a problem, is it? ::)

If you also consider that with Lightning growing even more nodes will pop up -- because whoever the operators are, there is now a financial incentive to run a node -- the number of nodes will continue to increase.

Average users have incentives to join, like you and other bitcoin whales :P they just can't, and it is getting worse as time passes. A UTXO commitment/reconciliation protocol could change the scene radically, imo.
The average user's incentive is to buy low and sell higher. They use lightweight clients, or they don't use a client at all and just leave their coins on an exchange.

Don't break something that works extremely well today. We know that it works. Messing around with something so important is a gamble, and will likely drive people away from Bitcoin.


Title: Re: Luke Jr's 300kb blocks
Post by: Carlton Banks on February 13, 2019, 10:26:17 AM
That was the old thing, but apparently on twitter he's talking about something even less reasonable now.

In any case, I showed this thread to other developers and the response was uniformly a big wtf.

I see. OK, Luke's a bit single-minded about 300kB. This completely invalidates my interpretation of the old proposal; it seems he wanted everyone to take a 300kB base size seriously even back when he first promoted it.


Initial sync time does suck and is a problem, but the damage has been done-- since long ago-- no amount of size reduction will solve it. If it would then perhaps these ideas would get a bit of traction, but it won't...

Wasn't there an idea from sipa to change how transactions are serialized that reduced the entire chain's size? If one cares about IBD, one ought to be most interested in proposals that remediate the historic chain size as well as those that improve it forwards in time.


Title: Re: Luke Jr's 300kb blocks
Post by: aliashraf on February 13, 2019, 10:38:44 AM
It has worked up to where we are today, and the number of nodes continues to increase. That's not exactly a sign of there being a problem, is it? ::)
Actually the number of nodes slightly decreased in 2018 while bitcoin continued to grow and become more popular, so yes, there is a problem.

Quote
If you also consider that with Lightning growing even more nodes will pop up -- because whoever the operators are, there is now a financial incentive to run a node -- the number of nodes will continue to increase.
LN nodes can do their job by connecting to light clients like spruned (https://github.com/gdassori/spruned), and running an LN node is not much of a financial incentive and does not compensate for a fancy full node setup.

On the other hand, given mass adoption of LN, channel setup/flush operations will load the network even more.
This is what Luke Jr fails to understand: he thinks people will migrate from the main chain to layer 2, but that is not exactly what happens when LN gets adopted; instead, new use cases would emerge.

Quote
Average users have incentives to join, like you and other bitcoin whales :P they just can't, and it is getting worse as time passes. A UTXO commitment/reconciliation protocol could change the scene radically, imo.
The average user's incentive is to buy low and sell higher. They use lightweight clients, or they don't use a client at all and just leave their coins on an exchange.
Current SPV wallets should be eliminated and replaced with pruned full nodes (that sync fast, imo). Luke is right in this respect: don't transact if you can't verify.

Quote
Don't break something that works extremely well today. We know that it works. Messing around with something so important is a gamble, and will likely drive people away from Bitcoin.
Not best engineering practice at all. Improvements are inevitable; we just have to be cautious and not panic.
"Extremely well" was too much by the way. ;)


Title: Re: Luke Jr's 300kb blocks
Post by: aliashraf on February 13, 2019, 12:48:06 PM
Luke Jr lost his mind.

An argument can be made that Luke Jr was never in a rational frame of mind, to begin with.
;D ;D
Sorry, couldn't help it, still can't.  :D


Title: Re: Luke Jr's 300kb blocks
Post by: 1Referee on February 13, 2019, 12:50:41 PM
Actually the number of nodes slightly decreased in 2018 while bitcoin continued to grow and become more popular, so yes, there is a problem.
Fluctuations are pretty normal, especially with how the bear market made lots of 'enthusiasts', startups and services quit and take down their nodes.

https://bitnodes.earn.com/dashboard/?days=730

As you can see, there is steady/healthy growth. The size of the chain has grown significantly, but the number of nodes increased regardless of that. So no, there is no problem. :)

On the other hand, given mass adoption of LN, channel setup/flush operations will load the network even more.
This is what Luke Jr fails to understand: he thinks people will migrate from the main chain to layer 2, but that is not exactly what happens when LN gets adopted; instead, new use cases would emerge.
What bothers me more is that he is basically betting on a measure that will force people onto layer 2, and that's something you don't do. Leaving people free to use the network however they want is what we have always had, and that's exactly why SegWit adoption isn't picking up. I don't like that certain entities aren't upgrading, but on the other hand, that's the freedom they have, and we should respect that. Bitcoin is freedom at the end of the day.

Current SPV wallets should be eliminated and replaced with pruned full nodes (that sync fast, imo). Luke is right in this respect: don't transact if you can't verify.
Not much to disagree with here, but hey, it all comes down to the freedom aspect. People can choose whatever they think offers the most usefulness to them.


"Extremely well" was too much by the way. ;)
Right. 'It works well enough in its current state' is more fitting, I guess. :P

An argument can be made that Luke Jr was never in a rational frame of mind, to begin with.
That cracked me up, lol. :D


Title: Re: Luke Jr's 300kb blocks
Post by: Carlton Banks on February 13, 2019, 01:46:55 PM
Luke Jr lost his mind.

An argument can be made that Luke Jr was never in a rational frame of mind, to begin with.

Luke has eccentric ideas for sure, but he also has good ideas that get adopted.


Title: Re: Luke Jr's 300kb blocks
Post by: gmaxwell on February 13, 2019, 02:27:20 PM
"The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man."

That said ... there are degrees. :)

I'm disappointed with the press circus Luke has contributed to here, -- it's not the first time he's set things up perfectly for his words to be taken out of context and then been so so surprised at what happened. But he does make useful contributions, and in the fullness of time drawing more attention to the initial sync problem may be one too, even though I disagree with the approach.

As you can see, there is steady/healthy growth.
Careful with those assumptions,  if you filter out nodes from a couple popular VPS providers and nodes that have obvious behavioral tells that they're fake... the picture looks much less rosy.  A lot of "nodes" are sybils set up-- presumably-- to track transaction origins.

Wasn't there an idea from sipa to change how transactions are serialized that reduced the entire chain's size? If one cares about IBD, one ought to be most interested in proposals that remediate the historic chain size as well as those that improve it forwards in time.
Blockstream has unpublished code that implements an alternative serialization that reduces tx sizes by around 25%.   I don't think it would actually improve IBD time except for very fast computers on fairly slow internet connections... initial sync is more utxo-update bound than bandwidth bound for most users. It might even slow it down, since the compact serialization is slower to decode. On a ludicrously fast machine (24 core 3GHz, nvme storage, syncing from local hosts over 10gbe) sync currently only proceeds at about 50mbit/sec.  I've been nagging them to publish it.  Their interest is in using it to increase capacity on the sat signal, but it's more generally useful.

I expect and hope that all the IBD activity will move into the background. After that happens, then the time it takes is less important than the resources-- and at that point a 25% bandwidth improvement would look pretty good.
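Rough arithmetic behind that, as a sketch -- the chain size and home connection speed below are assumptions, the 50mbit/sec validation ceiling is the figure above:

Code:
# Back-of-the-envelope IBD estimate. The chain size and home connection speed
# are assumptions; the 50 Mbit/s validation ceiling is the figure quoted above
# for a ludicrously fast machine.

chain_size_gb = 200.0      # assumed raw chain size, early 2019
download_mbit_s = 20.0     # assumed home connection
validate_mbit_s = 50.0     # quoted validation throughput ceiling

def hours(gigabytes: float, mbit_per_s: float) -> float:
    """Time to move/validate `gigabytes` at `mbit_per_s`."""
    return gigabytes * 8_000 / mbit_per_s / 3_600

print(f"download-bound:        {hours(chain_size_gb, download_mbit_s):5.1f} h")
print(f"validation-bound:      {hours(chain_size_gb, validate_mbit_s):5.1f} h")
# A 25% smaller serialization only shrinks the download-bound figure; if
# validation is the bottleneck, total IBD time barely moves (and could get
# worse if the compact encoding is slower to decode).
print(f"download-bound, -25%:  {hours(chain_size_gb * 0.75, download_mbit_s):5.1f} h")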


Title: Re: Luke Jr's 300kb blocks
Post by: Carlton Banks on February 13, 2019, 02:37:00 PM
So Luke's new proposal is actually a way to soft-fork to 600k weight units, which is not at all the same as 300kB.

That's truly bizarre as an idea, apologies to Luke. That would mean keeping the base size at 1MB, and only being able to use 600kB within that base limit. The segwit discount would still reduce fees for segwit tx's, but the incentive to use segwit tx's as a way to boost capacity would disappear if the base size (1MB) was higher than the weight limit (600kWU). Don't see the rationale for that at all.


I'm disappointed with the press circus Luke has contributed to here, -- it's not the first time he's set things up perfectly for his words to be taken out of context and then been so so surprised at what happened. But he does make useful contributions, and in the fullness of time drawing more attention to the initial sync problem may be one too, even though I disagree with the approach.

It seems like Luke has a fascination for exploring possibilities without much reasoning as to why the ends are desirable. In the case of the actual BIP141 segwit soft fork, that approach was great, as Luke was motivated to figure out a way to implement segwit. Someone with an "it'll never work" attitude would never have done so.


Blockstream has unpublished code that implements an alternative serialization that reduces tx sizes by around 25%.   I don't think it would actually improve IBD time except for very fast computers on fairly slow internet connections... initial sync is more utxo-update bound than bandwidth bound for most users. It might even slow it down, since the compact serialization is slower to decode. On a ludicrously fast machine (24 core 3GHz, nvme storage, syncing from local hosts over 10gbe) sync currently only proceeds at about 50mbit/sec.  I've been nagging them to publish it.  Their interest is in using it to increase capacity on the sat signal, but it's more generally useful.

50Mbit/s is high-end validation performance? Interesting.


I expect and hope that all the IBD activity will move into the background. After that happens, then the time it takes is less important than the resources-- and at that point a 25% bandwidth improvement would look pretty good.

Are you referring to the hybrid SPV concept? (SPV synchronisation finishes first, IBD continues in the background). Or new UTXO set tech?


Title: Re: Luke Jr's 300kb blocks
Post by: gmaxwell on February 13, 2019, 02:50:44 PM
would disappear if the base size (1MB) was higher than the weight limit (600kWU).
There isn't any _size_ limit in the protocol or implementation at all anymore; the weight limit completely replaced the blocksize limit. There isn't any "base size" in the protocol.  So a limit of 600k weight would result in blocks roughly 15% of current sizes given current usage patterns (though probably more, since usage would change).
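For reference, a minimal sketch of the BIP 141 weight arithmetic that the 600k figure applies to (the example block sizes are made up):

Code:
import math

def block_weight(non_witness_bytes: int, total_bytes: int) -> int:
    """BIP 141: weight = non-witness (stripped) size * 3 + total size."""
    return non_witness_bytes * 3 + total_bytes

def vsize(weight: int) -> int:
    """Virtual size in vbytes: weight / 4, rounded up."""
    return math.ceil(weight / 4)

MAX_BLOCK_WEIGHT = 4_000_000   # current consensus limit
PROPOSED_WEIGHT  =   600_000   # the reported new target

print(PROPOSED_WEIGHT / MAX_BLOCK_WEIGHT)  # 0.15 -> roughly 15% of today's capacity

# Hypothetical block: 140 kB of non-witness data, 220 kB total including witnesses.
w = block_weight(140_000, 220_000)
print(w, vsize(w))                         # 640000 WU (over a 600k cap), 160000 vbytes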



Title: Re: Luke Jr's 300kb blocks
Post by: 1Referee on February 13, 2019, 02:54:24 PM
Careful with those assumptions,  if you filter out nodes from a couple popular VPS providers and nodes that have obvious behavioral tells that they're fake... the picture looks much less rosy.  A lot of "nodes" are sybils set up-- presumably-- to track transaction origins.

That makes sense, but still, even taking that into consideration: more and more individuals are running a lightning node at this stage for geeky purposes (hundreds of physical nodes have been sold, and there is plenty of demand for more, but little to no supply to meet it), and later to earn a pretty penny by scooping up routing fees, so we'll see the ratio between 'fake' nodes and legit ones become completely different.

People need an incentive to run a node at home, and Lightning provides that incentive, especially in form of these plug-n-play physical nodes.


Title: Re: Luke Jr's 300kb blocks
Post by: gmaxwell on February 13, 2019, 03:06:11 PM
and later to earn a pretty penny by scooping up routing fees
That's almost certainly not the case. The only time when fees could at all be high is when there aren't many people doing it.  What we've seen so far in lightning (and previously in joinmarket) is that fees rapidly race to pretty low values in competition.

Quote
and Lightning provides that incentive, especially in form of these plug-n-play physical nodes.
At the moment, but eliminating any need to run a node is a major focus of development effort for lightning developers.

There is an inherent incentive: radically improved security and privacy.  But it's only enough to overcome a certain (low) level of cost... thus the concern about managing that cost.

As an aside, a lot of that "node hardware" being sold won't keep up for that long due to limited memory/storage/speed.


Title: Re: Luke Jr's 300kb blocks
Post by: 1Referee on February 13, 2019, 03:53:33 PM
That's almost certainly not the case. The only time when fees could at all be high is when there aren't many people doing it.  What we've seen so far in lightning (and previously in joinmarket) is that fees rapidly race to pretty low values in competition.
Definitely agree that more competition results in lower fees, but it comes down to transaction volume in the end. Many hops add up to a pretty penny (pretty enough to keep running a node) after a month or so. I strongly believe that Lightning is capable of that with enough adoption.

As an aside, a lot of that "node hardware" being sold won't keep up for that long due to limited memory/storage/speed.
That's a valid concern. These physical nodes indeed have a shelf life, which I seem to have ignored. Thanks for pointing that out.


Title: Re: Luke Jr's 300kb blocks
Post by: aliashraf on February 13, 2019, 04:31:19 PM
My final take on this thread:

Do something, anything, about fast sync, not with Luke's approach but in his spirit: no SPVs, more full nodes.

Since OP started this thread I've been banging my head over and over again:
https://i.imgur.com/7RpQZaC.gif



Title: Re: Luke Jr's 300kb blocks
Post by: jubalix on February 17, 2019, 12:06:11 AM
I saw this as well.

Is his argument that "full nodes are dropping", and that he wants to avoid centralisation to keep the network strong?

Where can we get a figure on how many full nodes there are? I saw Coin Dance had node counts but could not see the full nodes.





Title: Re: Luke Jr's 300kb blocks
Post by: jubalix on February 17, 2019, 12:10:20 AM
It's a really old proposal. I thought it was trying to be too smart/subtle.


The idea was to reduce to a 300kB base size, but also set a graduated increase schedule based on absolute block heights (the 300kB step was set to take place at a block height back in 2017, IIRC). It would finally reach a 1MB base size again in 2024, and continue at a percentage rate after that (also IIRC). In other words, if the proposal were adopted today, we'd be past the 300kB stage already.

This was partly a psychologically based proposal, which is why people reacted badly, lol. I think Luke knew that a 300kB base size would get laughed off, but he figured that since the blockchain grows constantly, the closer we get to 2024 (when the 1MB base would be reached again), the more people might begin to realise that reducing from the 1MB base size was smarter than it sounded back when the blockchain was a more manageable size.

Note that all of this is using base block figures; the real possible block size would be 2-4x the base block (so 300kB would in fact be 600-1200kB if all transactions in a given block are segwit txs).

Bear in mind that as we're still not in 2024, Luke's plan may actually work, and reducing the base from 1MB to whatever the schedule would be stepped to now (which has of course increased beyond 300kB) might look good to some people. It would probably still take some convincing, but there's still 5 years left on the clock.


OK, if this is the case, this is identical to my argument for some sort of increase over time at some rate ax^n where n is 0.05 or some such. You could probably work n out as a function of block space, network load, usage, user base (estimates of course) and tech improvement curves for HD space and bandwidth.

Edit

Sorry, I mean the logistic function (S-curve), f(x) = L / (1 + e^(-k(x - x0))), for block size, and I have been saying that now for about 2 years???
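In code, the shape I mean looks like this -- all the parameters here are hypothetical, just to show the curve:

Code:
import math

def logistic_block_limit(height: int,
                         ceiling_bytes: float = 8_000_000,     # hypothetical upper bound L
                         k: float = 1 / 150_000,               # hypothetical steepness
                         midpoint: float = 800_000) -> float:  # hypothetical x0
    """Logistic (S-curve) limit: f(x) = L / (1 + e^(-k * (x - x0)))."""
    return ceiling_bytes / (1 + math.exp(-k * (height - midpoint)))

for h in (560_000, 700_000, 800_000, 900_000, 1_100_000):
    print(h, f"{logistic_block_limit(h) / 1e6:.2f} MB")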


Title: Re: Luke Jr's 300kb blocks
Post by: jubalix on February 17, 2019, 12:13:42 AM
It's not as bad as it seems. If you're running a decent setup, the sync time is pretty reasonable actually. I set up a second node myself earlier this year, and it synced in like 11-12 hours, while I expected it to take at least a day. RPis are a different story, but then again, run a decent setup and you don't have these problems.

In the end, the average person won't run a node even with a very small block size. Why? They just don't give a fuck. People who do give a fuck, and merchants, will continue to spec out their hardware to run their node in the most stable possible manner.
Are you arguing something like "it is just fine, people should sit on the backbone (like me) and boot in half a day if they have real incentives, being a bitcoin whale (like me)"? And you expect Luke to appreciate your argument and back off?  ;D

Average users have incentives to join, like you and other bitcoin whales :P they just can't, and it is getting worse as time passes. A UTXO commitment/reconciliation protocol could change the scene radically, imo.



What LJr really wants to do is be PeerCoin


Title: Re: Luke Jr's 300kb blocks
Post by: cellard on February 17, 2019, 05:15:56 AM
I saw this as well.

Is his argument that "full nodes are dropping", and that he wants to avoid centralisation to keep the network strong?

Where can we get a figure on how many full nodes there are? I saw Coin Dance had node counts but could not see the full nodes.





I don't see how his proposal would make people suddenly make the effort to run full nodes. The current growth of the blockchain is not that big of a deal within current settings. My folder is around 235 GB. A 4TB drive is pretty cheap these days, so it should have you covered for years, and during those years I assume that disk sizes will keep growing as well.

How realistic is it that it becomes impossible to host the blockchain on a single drive? I don't see it happening at the current roughly linear growth.
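Rough numbers, as a back-of-the-envelope sketch (the 235 GB is my folder size above; the average block size is an assumption):

Code:
# Back-of-the-envelope chain growth projection. Current size is taken from the
# post above; the average block size is an assumption.

current_size_gb = 235.0
avg_block_mb = 1.2             # assumed average block size in the segwit era
blocks_per_year = 144 * 365

growth_gb_per_year = avg_block_mb * blocks_per_year / 1_000
years_until_full = (4_000 - current_size_gb) / growth_gb_per_year  # 4 TB drive

print(f"~{growth_gb_per_year:.0f} GB/year of growth")          # ~63 GB/year
print(f"a 4 TB drive lasts roughly {years_until_full:.0f} more years at this rate")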

Maybe it would be cool to have a way to host the blockchain across different HDDs. I don't see why this isn't possible; the client just has to know where it left off in the last file to keep downloading and validating onto the next drive. The entire blockchain is still hosted there, so it counts as a full node.

Anyway, my point was that his 300kB idea will not change people's minds about making the effort to run a full node. It's a matter of mentality, not of whether we have 1MB or 300kB. The difference is not that big of a deal imo. People without the right mentality to run a full node will stay on Electrum or whatever.


Title: Re: Luke Jr's 300kb blocks
Post by: jubalix on February 17, 2019, 09:34:23 AM
and later to earn a pretty penny by scooping up routing fees
That's almost certainly not the case. The only time when fees could at all be high is when there aren't many people doing it.  What we've seen so far in lightning (and previously in joinmarket) is that fees rapidly race to pretty low values in competition.

Quote
and Lightning provides that incentive, especially in form of these plug-n-play physical nodes.
At the moment, but eliminating any need to run a node is a major focus of development effort for lightning developers.

There is an inherent incentive: radically improved security and privacy.  But it's only enough to overcome a certain (low) level of cost... thus the concern about managing that cost.

As an aside, a lot of that "node hardware" being sold won't keep up for that long due to limited memory/storage/speed.

I have asked this before, and not really had a straight answer: why should the block size not be increased with an S-curve function? (https://en.wikipedia.org/wiki/Logistic_function) If I recall, you put me to rights on the demand-side issue, and sort of said yeah, maybe... but my memory is sketchy and it was a long time ago.

I would like a square answer as to why not?

It could end the whole blocksize issue forever and a day, with almost nil impact on the current core philosophy.

What is the negative technical argument?

Or any negative argument at all to this?



Title: Re: Luke Jr's 300kb blocks
Post by: cellard on February 18, 2019, 03:43:17 AM
and later to earn a pretty penny by scooping up routing fees
That's almost certainly not the case. The only time when fees could at all be high is when there aren't many people doing it.  What we've seen so far in lightning (and previously in joinmarket) is that fees rapidly race to pretty low values in competition.

Quote
and Lightning provides that incentive, especially in form of these plug-n-play physical nodes.
At the moment, but eliminating any need to run a node is a major focus of development effort for lightning developers.

There is an inherent incentive: radically improved security and privacy.  But it's only enough to overcome a certain (low) level of cost... thus the concern about managing that cost.

As an aside, a lot of that "node hardware" being sold won't keep up for that long due to limited memory/storage/speed.

I have asked this before, and not really had a straight answer: why should the block size not be increased with an S-curve function? (https://en.wikipedia.org/wiki/Logistic_function) If I recall, you put me to rights on the demand-side issue, and sort of said yeah, maybe... but my memory is sketchy and it was a long time ago.

I would like a square answer as to why not?

It could end the whole blocksize issue forever and a day, with almost nil impact on the current core philosophy.

What is the negative technical argument?

Or any negative argument at all to this?



I don't think anything but linear is safe... you don't really know how hardware will progress over time, how much it will cost and so on. I don't see any solution to so-called "on-chain scaling", which is why Bitcoin has become de facto digital gold and not something that can be used realistically at scale (as in global usage) on-chain. This doesn't mean research in the field should stop, you never know... however, what's clear is that most of the effort in Bitcoin should be spent reviewing already-existing code rather than on more exotic stuff. I mean, Core had that inflation bug recently while other clients weren't affected. So Luke should be reviewing code instead of attempting stuff that probably will never have any consensus anyway.


Title: Re: Luke Jr's 300kb blocks
Post by: jubalix on February 24, 2019, 02:06:05 AM
and later to earn a pretty penny by scooping up routing fees
That's almost certainly not the case. The only time when fees could at all be high is when there aren't many people doing it.  What we've seen so far in lightning (and previously in joinmarket) is that fees rapidly race to pretty low values in competition.

Quote
and Lightning provides that incentive, especially in form of these plug-n-play physical nodes.
At the moment, but eliminating any need to run a node is a major focus of development effort for lightning developers.

There is an inherent incentive: radically improved security and privacy.  But it's only enough to overcome a certain (low) level of cost... thus the concern about managing that cost.

As an aside, a lot of that "node hardware" being sold won't keep up for that long due to limited memory/storage/speed.

I have asked this before, and not really had a straight answer: why should the block size not be increased with an S-curve function? (https://en.wikipedia.org/wiki/Logistic_function) If I recall, you put me to rights on the demand-side issue, and sort of said yeah, maybe... but my memory is sketchy and it was a long time ago.

I would like a square answer as to why not?

It could end the whole blocksize issue forever and a day, with almost nil impact on the current core philosophy.

What is the negative technical argument?

Or any negative argument at all to this?



I don't think anything but linear is safe... you don't really know how hardware will progress over time, how much it will cost and so on. I don't see any solution to so-called "on-chain scaling", which is why Bitcoin has become de facto digital gold and not something that can be used realistically at scale (as in global usage) on-chain. This doesn't mean research in the field should stop, you never know... however, what's clear is that most of the effort in Bitcoin should be spent reviewing already-existing code rather than on more exotic stuff. I mean, Core had that inflation bug recently while other clients weren't affected. So Luke should be reviewing code instead of attempting stuff that probably will never have any consensus anyway.

Would you agree that if usage is up, HD space cost goes down, and bandwidth cost goes down, then we are effectively seeing the 1MB becoming smaller, for no reason?

I.e. we can afford larger blocks, at least to the extent that bandwidth and HD space costs fall and CPU power per $ goes up?





Title: Re: Luke Jr's 300kb blocks
Post by: Wind_FURY on February 24, 2019, 08:23:08 AM
and later to earn a pretty penny by scooping up routing fees
That's almost certainly not the case. The only time when fees could at all be high is when there aren't many people doing it.  What we've seen so far in lightning (and previously in joinmarket) is that fees rapidly race to pretty low values in competition.

Quote
and Lightning provides that incentive, especially in form of these plug-n-play physical nodes.
At the moment, but eliminating any need to run a node is a major focus of development effort for lightning developers.

There is an inherent incentive: radically improved security and privacy.  But it's only enough to overcome a certain (low) level of cost... thus the concern about managing that cost.

As an aside, a lot of that "node hardware" being sold won't keep up for that long due to limited memory/storage/speed.

I have asked this before, and not really had a straight answer: why should the block size not be increased with an S-curve function? (https://en.wikipedia.org/wiki/Logistic_function) If I recall, you put me to rights on the demand-side issue, and sort of said yeah, maybe... but my memory is sketchy and it was a long time ago.

I would like a square answer as to why not?

It could end the whole blocksize issue forever and a day, with almost nil impact on the current core philosophy.

What is the negative technical argument?

Or any negative argument at all to this?



I don't think anything but linear is safe... you don't really know how hardware will progress over time, how much it will cost and so on. I don't see any solution to so-called "on-chain scaling", which is why Bitcoin has become de facto digital gold and not something that can be used realistically at scale (as in global usage) on-chain. This doesn't mean research in the field should stop, you never know... however, what's clear is that most of the effort in Bitcoin should be spent reviewing already-existing code rather than on more exotic stuff. I mean, Core had that inflation bug recently while other clients weren't affected. So Luke should be reviewing code instead of attempting stuff that probably will never have any consensus anyway.

Would you agree that if usage is up, HD space cost goes down, and bandwidth cost goes down, then we are effectively seeing the 1MB becoming smaller, for no reason?

I.e. we can afford larger blocks, at least to the extent that bandwidth and HD space costs fall and CPU power per $ goes up?


Have you recently tried doing the 200 GB initial blockchain download? It is a pain. It might be easy with your bandwidth, but not all Bitcoin users will have access to high bandwidth, or be able to upgrade to it. I believe they would quit.


Title: Re: Luke Jr's 300kb blocks
Post by: BitcoinFX on February 24, 2019, 10:40:08 AM
My final take on this thread:

Do something, anything, about fast sync, not with Luke's approach but in his spirit: no SPVs, more full nodes.

Since OP started this thread I've been banging my head over and over again:
https://i.imgur.com/7RpQZaC.gif



... snip ...

And apparently shrinking the blocksize is a solution now  :D

Where the alternative solution is to make money (a digital cash!) 'heavier'? ...

- https://news.mlh.io/i-hacked-the-middle-out-compression-from-silicon-valley-06-16-2015

"... Please let me know if I overlooked anything that could make me a member of the Three Comma Club. I want a boat. And doors that open vertically..."

- https://www.hoover.org/research/middle-out-economics

"... in which he advanced a middle-out thesis for economic growth: “The fundamental law of capitalism is, if workers don’t have any money, businesses . . . don’t have any customers.” ..."

  ::)

   :D


Title: Re: Luke Jr's 300kb blocks
Post by: cellard on April 02, 2019, 03:19:17 AM
and later to earn a pretty penny by scooping up routing fees
That's almost certainly not the case. The only time when fees could at all be high is when there aren't many people doing it.  What we've seen so far in lightning (and previously in joinmarket) is that fees rapidly race to pretty low values in competition.

Quote
and Lightning provides that incentive, especially in form of these plug-n-play physical nodes.
At the moment, but eliminating any need to run a node is a major focus of development effort for lightning developers.

There is an inherent incentive: radically improved security and privacy.  But it's only enough to overcome a certain (low) level of cost... thus the concern about managing that cost.

As an aside, a lot of that "node hardware" being sold won't keep up for that long due to limited memory/storage/speed.

I have asked this before, and not really had a straight answer: why should the block size not be increased with an S-curve function? (https://en.wikipedia.org/wiki/Logistic_function) If I recall, you put me to rights on the demand-side issue, and sort of said yeah, maybe... but my memory is sketchy and it was a long time ago.

I would like a square answer as to why not?

It could end the whole blocksize issue forever and a day, with almost nil impact on the current core philosophy.

What is the negative technical argument?

Or any negative argument at all to this?



I don't think anything but linear is safe... you don't really know how hardware will progress over time, how much it will cost and so on. I don't see any solution to so-called "on-chain scaling", which is why Bitcoin has become de facto digital gold and not something that can be used realistically at scale (as in global usage) on-chain. This doesn't mean research in the field should stop, you never know... however, what's clear is that most of the effort in Bitcoin should be spent reviewing already-existing code rather than on more exotic stuff. I mean, Core had that inflation bug recently while other clients weren't affected. So Luke should be reviewing code instead of attempting stuff that probably will never have any consensus anyway.

Would you agree that if usage is up, HD space cost goes down, and bandwidth cost goes down, then we are effectively seeing the 1MB becoming smaller, for no reason?

I.e. we can afford larger blocks, at least to the extent that bandwidth and HD space costs fall and CPU power per $ goes up?





Yes, I agree that we could afford doubling the blocksize right now and it would be far from the end of the world. However, the main point being discussed here by those who consider all the game theory involved is: HOW do you make a blocksize increase without ending up in a clusterfuck of 2 competing "Bitcoins", with all the drama that always carries? (exchanges listing one or the other, the price crashing, miners speculating with hashrate, everyone claiming they own the real bitcoin....) Because of that, I don't see how hard forks are possible anymore at all; no matter what the hardfork is about, there wouldn't be enough consensus, so you would end up with 2 competing coins.


Title: Re: Luke Jr's 300kb blocks
Post by: Carlton Banks on April 02, 2019, 08:43:55 AM
I don't see how hard forks are possible anymore at all

There are a lot of other hardfork changes that are totally non-controversial, so you're exaggerating.


Title: Re: Luke Jr's 300kb blocks
Post by: spartacusrex on April 03, 2019, 10:35:13 AM
Can I be cheeky and say - This problem has already been fixed. We just need to survive until the fix is implemented.

(I'm a programmer, not a cryptographer. Slap me down on error.)

It's all to do with the new zk-STARKs. Like zk-SNARKs, but a faster, smaller, better, hash-based, quantum-secure version?

They still take hours to compute and are still far too large, but that'll be 'fixed'. The pace of improvement is just that fast at the moment :)

When it is, in the next 10/20 years, we'll be able to use a recursive fixed-size zero-knowledge proof that proves the latest block is valid, that it is linked to its parent, that the parent's proof is valid, and the cumulative PoW..  ::)

Some of this tech is already out there in various PoPoW (Proof of Proof of Work) forms using the original SNARKs.

So, in 51 years, you'll have a zk-proof that the last 50 years of the blockchain are valid, with the cumulative total PoW, in about 10-20 MB, and then the normal chain for the last year.
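Purely as a conceptual sketch of that recursive structure -- placeholder hashes stand in for the proofs, and none of this is real zk-STARK code:

Code:
from dataclasses import dataclass
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

@dataclass
class RecursiveProof:
    """Stand-in for a succinct proof that the tip block is valid, links to its
    parent, the parent's proof was valid, and cumulative PoW is as claimed."""
    tip_hash: bytes
    cumulative_work: int
    proof: bytes               # placeholder; a real zk-STARK would go here

def extend(prev: RecursiveProof, block_hash: bytes, block_work: int) -> RecursiveProof:
    # A real prover would verify prev.proof and the new block inside the circuit;
    # here we just fold everything into a hash as a placeholder.
    new_proof = h(prev.proof, block_hash, prev.cumulative_work.to_bytes(32, "big"))
    return RecursiveProof(block_hash, prev.cumulative_work + block_work, new_proof)

genesis = RecursiveProof(h(b"genesis"), 0, h(b"genesis-proof"))
tip = extend(genesis, h(b"block-1"), 100)
print(tip.cumulative_work, tip.proof.hex()[:16])
# A syncing node would check one constant-size proof for the old history and
# then validate only recent blocks normally.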


Title: Re: Luke Jr's 300kb blocks
Post by: jubalix on April 03, 2019, 01:25:33 PM
and later to earn a pretty penny by scooping up routing fees
That's almost certainly not the case. The only time when fees could at all be high is when there aren't many people doing it.  What we've seen so far in lightning (and previously in joinmarket) is that fees rapidly race to pretty low values in competition.

Quote
and Lightning provides that incentive, especially in form of these plug-n-play physical nodes.
At the moment, but eliminating any need to run a node is a major focus of development effort for lightning developers.

There is an inherent incentive: radically improved security and privacy.  But it's only enough to overcome a certain (low) level of cost... thus the concern about managing that cost.

As an aside, a lot of that "node hardware" being sold won't keep up for that long due to limited memory/storage/speed.

S-curve: why not?