Bitcoin Forum
Topic: SegWit + Variable and Adaptive (but highly conservative) Blockweight Proposal (Read 2054 times)
Pages: « 1 [2] All
unamis76 — May 12, 2017, 05:01:30 PM — #21

I vaguely remember reading about this proposal on the Development forum (not sure if the OP is the same) and I think deploying this would be a very reasonable solution that would interest both "sides" of this question. In addition, many people have already talked several times about dynamic blocks... And I think the only reason this hasn't been implemented yet is that we don't have enough development work done on it.

I'm all for scaling. I wouldn't mind seeing a system like this go live on Bitcoin.

Plus, this thread is a nice read in a forum where there's been a lot of hate lately Smiley
hv_ — May 12, 2017, 05:05:11 PM — #22

Quote:
I like the idea of adaptive blocksizes.

I also favor the idea that capacity should always outpace demand, so I think the increases have to be substantially greater than this.  I think Ethereum does it in a smart way.  If I am correct, it is something like 20% more than a moving average of actual capacity used.  This ensures blocks keep getting bigger as needed.

Quote:
How do you differentiate real demand from spam demand?

If someone like Ver decides to dump millions of dollars worth of spam transactions in order to make the blockchain huge, how do you stop this? If it's automated, the blockchain will just adapt to this demand (even if it's fake), centralizing the nodes as a result.

I just don't see how flexible blocksize schemes aren't exploitable.

Define Spam first.

 Grin
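The Ethereum-style mechanism quoted above (a cap floating a fixed margin above a moving average of actual usage) could be sketched like this. The window length and the 20% headroom are illustrative assumptions, not Ethereum's actual parameters:

```python
def next_block_cap(recent_sizes, headroom=1.2):
    """Hypothetical adaptive cap: a fixed percentage above the
    moving average of space actually used in recent blocks."""
    avg = sum(recent_sizes) / len(recent_sizes)
    return avg * headroom
```

So if recent blocks averaged 1 MB of real usage, the cap drifts up to about 1.2 MB, and the ceiling keeps following demand. The catch, as the quoted reply points out, is that "usage" includes spam.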

DooMAD (OP) — May 14, 2017, 11:09:56 AM — #23

Quote:
How do you differentiate real demand from spam demand?

If someone like Ver decides to dump millions of dollars worth of spam transactions in order to make the blockchain huge, how do you stop this? If it's automated, the blockchain will just adapt to this demand (even if it's fake), centralizing the nodes as a result.

I just don't see how flexible blocksize schemes aren't exploitable.

Quote from: hv_
Define Spam first.

 Grin

I think that's another one of those things that everyone finds difficult to agree on.  The safest definition for me would be deliberate and repeated transactions with no intention to transfer any value, but it's not always easy to recognise such transactions if the culprit is determined to cover their tracks.  Some attackers are more blatant than others.  Equally, it's easy to lose context and assume that all small-value or low-fee transactions are spam, but that isn't a safe assumption, given users in less economically wealthy parts of the world getting involved.  All we can really do is minimise the motivation to engage in deliberate spamming by making it expensive or difficult (or both) to do.

Was Litecoin's spam fix ever implemented in Bitcoin?  And if not, could we look at implementing that as part of this proposal?

The One — May 17, 2017, 10:46:54 PM — #24

Quote from: DooMAD
Else IF more than 90% of block's size, found in the first 2016 of the last difficulty period, is less than 50% MaxBlockSize
    THEN BaseMaxBlockSize = BaseMaxBlockSize -0.01MB
      WitnessMaxBlockSize = WitnessMaxBlockSize -0.03MB


If a 1MB block had 1 × 250k tx and a 2MB block had 1 × 250k tx, surely the file size of the block is the same? Or not? Empty space doesn't take up any bytes?

Is there any point in reducing the maxblocksize?
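For reference, the reduction rule quoted above can be sketched roughly as follows. This is a non-authoritative reading of the proposal; the 90%/50% thresholds and the 0.01/0.03 MB steps are the values from the quote:

```python
def adjust_caps_down(block_sizes, max_block_size, base_cap, witness_cap):
    """If more than 90% of the last 2016 blocks used less than 50%
    of MaxBlockSize, step both caps down by the proposal's amounts."""
    small = sum(1 for s in block_sizes if s < 0.5 * max_block_size)
    if small / len(block_sizes) > 0.9:
        base_cap -= 0.01     # MB, base step from the quote
        witness_cap -= 0.03  # MB, witness step from the quote
    return base_cap, witness_cap
```

Note that the trigger looks only at how full blocks were, not at their on-disk size, which is what the question below is getting at: the rule shrinks the ceiling, not the blocks themselves.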

arklan — May 17, 2017, 11:35:36 PM — #25

i think it's meant to be like the rising and (rare, but possible) falling of the difficulty. it can adjust up, or down, as needed.

though as i sit here typing this i'm having a hard time thinking of a real reason it would need to get smaller. the difficulty, of course, needs to match the available hash power to provide security and keep the prescribed block times. but since we can already make and confirm smaller blocks if we want to, i don't know that a reduction in max size is needed.

i'm not a coder of any kind, though. don't take my word for it.

DooMAD (OP) — May 18, 2017, 06:06:04 PM — #26

Quote from: The One
If 1mb block had 1 * 250k tx and 2mb block had 1 * 250k tx - surely the file size of the block is the same? or not? Empty spaces don't have any bytes??

Is there any point in reducing the maxblocksize?

Quote from: arklan
i think it's meant to be like the rising and (rare, but possible) falling of the difficulty. it can adjust up, or down, as needed.

though as i sit here typing this i'm having a hard time thinking of a real reason it would need to get smaller. the difficulty, of course, needs to match the available hash power to provide security and keep the prescribed block times. but since we can already make and confirm smaller blocks if we want to, i don't know that a reduction in max size is needed.

i'm not a coder of any kind, though. don't take my word for it.

There are actually a few reasons for that decision:

Partly it's the simple fact that we don't know what the future holds or what the levels of demand may be as time goes by.  So if we're aiming for the proposal to be adaptable to demand in real time, it makes sense that we don't want to arbitrarily limit the types of situations it can adapt to.  

Then, as previously mentioned, there are the disincentives to spam, or to game the system with artificial volume.  If demand isn't legitimate, a reduction will negate any fraudulent increases as soon as the attack can't be maintained.  We don't want to encourage spam.  That's a huge no-no.  While miners can certainly choose to make smaller blocks voluntarily, it should be noted there is a clear financial benefit to be gained from cramming in more transactions and collecting more fees as a result.  Gaming the system to reach a higher blocksize to squeeze in more tx in this manner is also a no-no.  We want natural, organic growth, not manipulation.

Also, many deem fee pressure to be an important characteristic of Bitcoin.  In an ideal world it should have a fair amount of consistency and not fluctuate too wildly.  While we obviously don't want fees to be too high, at the same time, we don't want them to be too low, either.  If the space available exceeds demand, fees could potentially diminish, which could sway the alignment of incentives for miners.  This particular issue is a big problem with all of the "whole number" blocksize proposals, that generally involve at least doubling the blocksize and completely obliterating any kind of fee pressure.  As such, changes should be smaller and more frequent.

And lastly, there are the legitimate concerns over the costs of bandwidth for full nodes as the total blocksize increases.  We have to take every reasonable precaution to prevent any large increases that could potentially result in a drop in node count.  Plus, there have been enough instances in this increasingly ugly scaling debate where one side appears to be shouting over the other and not taking opposing views into consideration.  With this proposal, I'd hope those on both sides of the argument at least feel their voice is being heard.

Hope that clears it up.  Smiley

arklan — May 18, 2017, 06:11:16 PM — #27

a very detailed and clear response. thanks!

The One — May 18, 2017, 06:48:23 PM — #28

Quote from: The One
If 1mb block had 1 * 250k tx and 2mb block had 1 * 250k tx - surely the file size of the block is the same? or not? Empty spaces don't have any bytes??

Is there any point in reducing the maxblocksize?

Quote from: DooMAD
Hope that clears it up.  Smiley

You haven't answered my questions.

DooMAD (OP) — May 18, 2017, 07:38:44 PM — #29

Quote from: The One
If 1mb block had 1 * 250k tx and 2mb block had 1 * 250k tx - surely the file size of the block is the same? or not? Empty spaces don't have any bytes??

Is there any point in reducing the maxblocksize?

Quote from: DooMAD
Hope that clears it up.  Smiley

Quote from: The One
You haven't answered my questions.

In the scenario you've described, both blocks would be the same size.  Empty and unused space indeed doesn't create any additional data requirements.  But I did give at least 3 reasons why there is a point in being able to reduce the maximum blocksize if the space isn't being used.  Unused space has the potential to be abused.  We want to limit the potential for abuse.

By the same token, you could ask if there is any point in having a maximum blocksize at all.  It essentially amounts to the same thing.  Smaller is generally considered safer.

The One — May 18, 2017, 08:14:08 PM — #30

Quote from: The One
If 1mb block had 1 * 250k tx and 2mb block had 1 * 250k tx - surely the file size of the block is the same? or not? Empty spaces don't have any bytes??

Is there any point in reducing the maxblocksize?

Quote from: DooMAD
Hope that clears it up.  Smiley

Quote from: The One
You haven't answered my questions.

Quote from: DooMAD
In the scenario you've described, both blocks would be the same size.  Empty and unused space indeed doesn't create any additional data requirements.  But I did give at least 3 reasons why there is a point in being able to reduce the maximum blocksize if the space isn't being used.  Unused space has the potential to be abused.  We want to limit the potential for abuse.

By the same token, you could ask if there is any point in having a maximum blocksize at all.  It essentially amounts to the same thing.  Smaller is generally considered safer.

That is what I want to know.

d5000 — May 20, 2017, 12:56:41 AM — #31

Regarding the problem of eventually "decreasing" the maximum block size: I have thought a bit about it. I'm not an expert, but I also think it would be desirable to decrease the maximum block size when blocks are far from full, to disincentivise spam attacks. It would not be a show-stopper, however, because the proposal is so conservative that a spam attack would be very, very expensive anyway.

Regarding franky1's orphan risk because of rescanning nodes: I think there is no other way than what arklan said, to store the maxblocksize changes in the blockchain, so that nodes are aware of the changes when they rescan. There would perhaps be another possibility - to make a conditional decision ("if CheckedBlockSize > ActualMaxBlockSize and CheckedBlockHeight < (ActualBlockHeight - 2016) then AcceptBlock") so nodes can accept larger blocks when they rescan and are more than one difficulty period below the actual block height, but I don't know if this introduces new attack vectors, like nodes passing a fake ActualBlockHeight value.
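That conditional might look like the following as code. This is only a sketch of the rule as stated; whether a node can trust the tip height it compares against is exactly the open question:

```python
def accept_on_rescan(checked_size, checked_height, actual_max_size, tip_height):
    """Accept a historical block during rescan if it fits the current
    cap, or if it is more than one difficulty period (2016 blocks)
    behind the tip, i.e. it was presumably valid under an older, larger cap."""
    if checked_size <= actual_max_size:
        return True
    return checked_height < tip_height - 2016
```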

DooMAD (OP) — May 21, 2017, 11:08:27 PM (last edit: May 21, 2017, 11:22:42 PM) — #32

Quote from: d5000
Regarding franky1's orphan risk because of rescanning nodes: I think there is no other way than what aklan said, to store the maxblocksize changes in the blockchain, so the nodes are aware of the changes when they rescan. There would be perhaps another possibility  - to make a conditional decision ("if CheckedBlockSize > ActualMaxBlockSize and CheckedBlockHeight < (ActualBlockHeight - 2016) then AcceptBlock") so nodes can accept larger blocks when they rescan and are more than one difficulty period under the actual block height, but I don't know if this introduces new attack vectors like nodes passing a fake ActualBlockHeight value.

There must be a simple fix, since (at least as far as I remember seeing) no one raised the issue when BIP106 was originally proposed, or, more particularly, when lukejr proposed reducing the blocksize.  I'm sure a dev wouldn't have made a proposal with a gaping hole in it; someone would have voiced concerns well before this point if it were a showstopper.  Obviously this wouldn't work as a soft fork, so if all nodes are upgraded, it stands to reason we can tell them not to reject blocks that were valid at the time they were mined.

As for blocks being newly appended to the chain at the moment of a reduction, miners could voluntarily operate a soft cap of 0.01 base and 0.03 witness under the current threshold if they wanted to play it safe.  Effectively they would be operating two weeks ahead of the actual limit.  Plus that's only an issue if blocks are full to the brim at the time.

franky1 — May 21, 2017, 11:22:38 PM (last edit: May 21, 2017, 11:35:58 PM) — #33

Quote from: d5000
Regarding the problem to "decrease" the maximum block size eventually: I have thought a bit about it, I'm not an expert but I think also it would be desirable to decrease the maximum block size in the case blocks are far from being full, to dis-incentive spam attacks. It would however not be a show-stopper because the proposal is really so conservative that a spam attack would be very, very expensive anyway.

Regarding franky1's orphan risk because of rescanning nodes: I think there is no other way than what aklan said, to store the maxblocksize changes in the blockchain, so the nodes are aware of the changes when they rescan. There would be perhaps another possibility  - to make a conditional decision ("if CheckedBlockSize > ActualMaxBlockSize and CheckedBlockHeight < (ActualBlockHeight - 2016) then AcceptBlock") so nodes can accept larger blocks when they rescan and are more than one difficulty period under the actual block height, but I don't know if this introduces new attack vectors like nodes passing a fake ActualBlockHeight value.

Quote from: DooMAD
There must be a simple fix, since (at least as far as I remember seeing) no one raised the issue when BIP106 was originally proposed, or, more particularly, when lukejr proposed reducing the blocksize.  I'm sure a dev wouldn't have made a proposal with a gaping hole in it.  Someone would have voiced concerns well before this point if it were a showstopper.  Obviously this wouldn't work as a soft fork, so if all nodes are upgraded, it stands to reason we can tell them not to reject blocks that were valid at the time.

As for blocks being newly appended to the chain at the moment of a reduction, miners could voluntarily operate a soft cap of .01 base and .03 witness under the current threshold if they wanted to play it safe.  Effectively they could operate a week in lieu of the actual limit.  Plus that's only an issue if the blocks are full to the brim at the time.

there is a simple fix.. without all the kludgy code to drop the blocksize and ensure resyncing doesn't cause orphaning issues when the blocksize drops

after all, decreasing the blocksize hurts everyone should demand pick up again in a fortnight's time but hit the decreased wall (so that's just silly), plus all the complexities of trying to avoid the rescan orphan things i said before

but if the block remained at 4mb but was 'empty', it would cost a spammer a hell of a lot more to fill it compared to a block that decreased to under 4mb..


the solution is simple.. a new fee priority formula
a better fee priority mechanism ensures the spammers pay more for spamming every block while not causing issues for the normal folk.

here is one example - not perfect, but think about it.
imagine we decided it's acceptable that people should have a way to get priority if they have a lean tx and signal that they only want to spend funds once a day (a reasonable expectation),
where
if they want to spend more often, costs rise,
if they want bloated tx, costs rise..
at which point things like LN would become a viable option for those that are innocent but need to spend regularly.

that then allows those that just pay their rent once a month or buy groceries every couple of days to be ok using onchain bitcoin.. and makes the cost of trying to spam the network (every block) expensive enough that they would be better off using LN (for things like faucet raiding/day trading every 1-10 minutes).

so let's think about a priority fee that's not about rich vs poor (like the old one was) but about reducing respend spam and bloat.

let's imagine we actually use the tx age combined with CLTV to signal the network that a user is willing to add some maturity time if their tx age is under a day - signalling they want it confirmed while allowing themselves to be locked out of spending for an average of 24 hours (that's what CLTV does).

and where the bloat of the tx vs the blocksize has some impact too... rather than the old formula, which was more about the value of the tx.

as you can see, it's not about tx value. it's about bloat and age.
this way
those not wanting to spend more than once a day who don't bloat the blocks get preferential treatment onchain ($0.01).
if you are willing to wait a day but you're taking up 1% of the blockspace, you pay more ($0.44).
if you want to be a spammer spending every block, you pay the price ($1.44).
and if you want to be a total ass-hat and be both bloated and respending EVERY BLOCK, you pay the ultimate price ($63.72).

note this is not perfect. but think about it.

in short, decreasing the blocksize consensus can cause more issues for everyone, and requires more coding and more kludge,
whereas a fee priority makes frequent spammers pay more.
the fee priority also makes sure that people innocent or guilty of spamming DON'T pay the same penalty, thus being fair to the innocent that care about the transactions they make, and penalising the ones that don't care and just wanna respend as fast as possible but refuse to use LN.
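To make the idea concrete, a toy version of such a formula might look like the following. Every constant here is invented for illustration (it does not reproduce the dollar figures above), but it combines the two inputs named in the post: bloat relative to blocksize, and how recently the coins moved, with a CLTV maturity signal as the opt-out:

```python
def priority_fee(base_fee, tx_bytes, block_bytes, hours_since_last_move,
                 cltv_locked=False):
    """Toy fee-priority formula: fee grows with the tx's share of
    block space (bloat) and with respend frequency (coin freshness),
    unless the sender accepts a CLTV maturity lock."""
    bloat = 1 + 100 * (tx_bytes / block_bytes)
    if cltv_locked or hours_since_last_move >= 24:
        respend = 1.0  # well-aged coins, or willing to wait: no penalty
    else:
        respend = 24 / max(hours_since_last_move, 0.2)
    return base_fee * bloat * respend
```

Under this sketch, a lean once-a-day spender pays roughly the base fee, while a bloated every-block respender pays orders of magnitude more - the shape of the penalty table above, not its actual values.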

DooMAD (OP) — May 21, 2017, 11:35:01 PM — #34

Quote from: franky1
the simple solution is a better fee priority formulae..
that way you dont have to decrease the blocksize that hurts everyone should in a fortnights time demand picks up again but hits the decreased wall.. (as thats just silly)

but if the block remained at 4mb but was 'empty' it would cost a spammer a hell of a lot more to fill it. compared to a block that decreased to under 4mb
by decreasing the blocksize means he can fill the block with less transactions. which is stupid aswell as all the complexities of trying to avoid the rescan orphan things i said before

a better fee priority mechanism ensures the spammers pay more for spamming every block while not causing issues for the normal folk

Well, I did ask:

Was Litecoin's spam fix ever implemented in Bitcoin?  And if not, could we look at implementing that as part of this proposal?

which is related to fees making it harder to spam, and then the thread died for almost 3 days.   Grin

But yeah, let's look at the fee priority mechanism as well.  Each level of security we can add makes it that bit more robust.  But I'm reluctant to drop the reduction aspect in the same way I'm reluctant to adopt Carlton's fixed upper cap.  I sincerely doubt you'd accept his idea and there's no way he'd accept yours, heh.  Both views are a bit too far towards one of the polarised extremes.  In order to be a compromise, I'm trying to steer this thing somewhere towards a happy middle-ground.

franky1 — May 21, 2017, 11:46:06 PM (last edit: May 22, 2017, 12:01:50 AM) — #35

Quote from: franky1
the simple solution is a better fee priority formulae..
that way you dont have to decrease the blocksize that hurts everyone should in a fortnights time demand picks up again but hits the decreased wall.. (as thats just silly)

but if the block remained at 4mb but was 'empty' it would cost a spammer a hell of a lot more to fill it. compared to a block that decreased to under 4mb
by decreasing the blocksize means he can fill the block with less transactions. which is stupid aswell as all the complexities of trying to avoid the rescan orphan things i said before

a better fee priority mechanism ensures the spammers pay more for spamming every block while not causing issues for the normal folk

Quote from: DooMAD
Well, I did ask:

Was Litecoin's spam fix ever implemented in Bitcoin?  And if not, could we look at implementing that as part of this proposal?

which is related to fees making it harder to spam, and then the thread died for almost 3 days.   Grin

But yeah, let's look at the fee priority mechanism as well.  Each level of security we can add makes it that bit more robust.

the value of a TX is meaningless.. and 'rich' spammers got around the old fee mechanism by having a TX where one output had 10kbtc and the other outputs of the same tx had 1sat each,
which allowed them to not pay a fee, because the old formula was based on value.

this ended up hurting everyone else though. especially those from 3rd world countries who were not spammers and did not have 10k btc to counter the fee, where they just innocently wanted to send more than a couple of hours' labour (only a few cents) but ended up paying more in fees.. while malicious spammers didn't pay a fee because they simply worked around the 'value' test.


all that matters is how 'fresh' the coins are and how bloated the tx is.

i see no reason at all for ANYONE to need a 10%-20% allocation of a block just for 1 tx.
so things like
4k txsigops of 20k blocksigops
or
16k txsigops of 80k blocksigops
are literally asking for trouble (5 tx fills the block).

i suggest if the block is going to be 4mb (80k blocksigops),
then make txsigops 2k... and make sure even if blocksigops rises, txsigops does not. that way each increase makes it harder.

also
the 100kb 'larger than' tx data rule.. again, who the hell deserves 10% of block space?
bring that down to 10kb or less, and keep it down even if the blocksize increases.

that way it makes it cost more to fill up a block.
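The arithmetic behind this point is simple: the smaller the per-tx sigop cap relative to the block's sigop budget, the more separate transactions (and thus separate fees) a spammer needs to exhaust the block. A quick sketch, using the figures from the post:

```python
def min_txs_to_exhaust(block_sigops, txsigops_cap):
    """Minimum number of transactions needed to consume a block's
    whole sigop budget when each tx maxes out the per-tx cap."""
    return block_sigops // txsigops_cap
```

With an 80k block budget, a 16k per-tx cap lets 5 transactions fill the block, while the suggested 2k cap forces at least 40 - eight times as many fee-paying transactions for the same attack.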

franky1 — May 22, 2017, 12:09:30 AM (last edit: May 22, 2017, 12:45:41 AM) — #36

Quote from: DooMAD
But I'm reluctant to drop the reduction aspect in the same way I'm reluctant to adopt Carlton's fixed upper cap.  I sincerely doubt you'd accept his idea and there's no way he'd accept yours, heh.  Both views are a bit too far towards one of the polarised extremes.  In order to be a compromise, I'm trying to steer this thing somewhere towards a happy middle-ground.

carlton's 'infinite growth'.. or as the reddit-script fudster buzzword calls it, "gigabytes by midnight"..
i facepalm that.

imagine it this way.
we are in 2013.. consensus is 1mb.. but policy is 0.5mb.
now imagine if that 0.5mb was not simply a decision pools made alone, but something nodes had some control of, to ensure it didn't jack up above 0.5mb too fast.
where nodes had a speed-test benchmark mechanism in their node which publicised what they could cope with.
nodes wouldn't necessarily orphan the blocks above 0.5mb, but would at least highlight to pools where pools should slow down if there was not a good, healthy node capability.

EG
2018 new rules:
8mb consensus
nodes publicise 4mb capability

pools make blocks below 4mb at healthy increments of 1mb-4mb over time (eg 0.25mb/year, kind of like 2011-2015 (roughly)).. where pools know that what they do will not cause risks to nodes.. thus not causing orphan drama, or drops in node count.
if pools know what the network can handle, then pools know what not to risk.


separate rant:
what i do truly laugh at is that while the "gigabytes by midnight" fudsters are screaming 'it will kill full node count',
they are not arguing about how many full nodes are dropping due to pruned, no-witness (stripped/filtered/downstream) features which have been added and which we are told are "all good and safe".

DooMAD (OP) — May 22, 2017, 01:15:50 PM — #37

With the Miner-activated fork vs User-activated fork situation looming on the horizon, time is running out if you don't want a fixed blocksize <air quote>"solution"</air quote>, which will undoubtedly make us revisit this same horrific debacle when we start hitting another purpose-built wall later.  Whoever activates the fork, either blinkered and shortsighted outcome is foolish.

Either we reach an intelligent compromise soon, or we descend into chaos and farce once again in the future.

It's time to decide.

d5000 — May 22, 2017, 07:09:11 PM — #38

Yes. I think what we need is code - an actual implementation draft - and a real BIP proposal, as soon as we can. The UASF/MAHF polarization doesn't look good. Unfortunately, I am the wrong person for this (I only know a little Python, and no C++ at all).

Any news with respect to the "orphaning risk problem"? I have looked at Luke's BIP but there's nothing about it there. And unfortunately, here my knowledge of the issue comes to its limit.

Maybe it would even be OK to start with a BIP and ignore that issue for now, or to draft two versions (one with the decrease option, the other without it).
