Bitcoin Forum
Poll
Question: Would you approve the compromise "Segwit + 2MB"?
Yes - 78 (62.4%)
No - 35 (28%)
Don't know - 12 (9.6%)
Total Voters: 125

Author Topic: [POLL] Possible scaling compromise: BIP 141 + BIP 102 (Segwit + 2MB)  (Read 14254 times)
d5000
Legendary
*
Offline Offline

Activity: 3010
Merit: 2925


Decentralization Maximalist


View Profile
March 07, 2017, 08:07:34 PM
Last edit: April 06, 2017, 03:05:23 AM by d5000
 #1

Update: For those new to the topic: there is already a concrete proposal, with a patch ready to be tested, that implements this compromise solution, called "Segwit2MB", by Core security auditor Sergio Demian Lerner.

I have read this compromise proposal from "ecafyelims" at Reddit and want to know if there is support for it here in this forum.

Compromise: Let's merge BIP 102 (2MB HF) and BIP 141 (Segwit SF)

Quote from: Reddit user ecafyelims
Let's merge BIP 102 (2MB HF) and BIP 141 (Segwit SF) into a single HF (with overwhelming majority consensus).

Since Segwit changes how the blocksize is calculated to use weights, our goal with the merger would be 2MB of transactional data.

The Segwit weighting system measures transaction weight as 3x(non-witness base data) + (base data plus witness data) - i.e. weight = 3x base size + total size. This weight is then limited to 4M units, favoring witness data.

Transactions aren't all base or all witness data. So, in practice, the block size limit with Segwit falls somewhere between 1MB (base data only) and 4MB (witness data only).

With this proposed merger, we would increase the Segwit weight limit from 4M to 8M. This would allow 2MB of base data, which is the goal of the 2MB HF.

It's a win-win solution. We get 2MB increase and we get Segwit.

I know this compromise won't meet the ideals of everyone, but that's why it's a compromise. No one wins wholly, but we're better off than where we started.

It's very similar to what was already proposed last year at the Satoshi Roundtable. What is the opinion of the Bitcointalk community?
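For concreteness, a small worked sketch of the weight arithmetic quoted above (an illustration only - the C++ here is not from any BIP or client):

// Illustration of the BIP 141 weight formula and the proposed 8M limit.
// weight = 3*base_size + total_size, where total = base + witness.
#include <cstdint>
#include <iostream>

uint64_t TxWeight(uint64_t base_bytes, uint64_t witness_bytes) {
    return 3 * base_bytes + (base_bytes + witness_bytes);
}

int main() {
    const uint64_t limit = 8000000; // proposed limit: 2x BIP 141's 4M weight units

    // Pure base data costs 4 weight units per byte, so the limit allows
    // 8M / 4 = 2,000,000 bytes of base data -- the 2MB HF goal.
    std::cout << "base-only capacity: " << limit / 4 << " bytes\n";

    // Witness bytes cost only 1 weight unit, so a (hypothetical)
    // witness-only block could reach 8MB serialized.
    std::cout << "witness-only capacity: " << limit << " bytes\n";
}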

DooMAD
Legendary
*
Offline Offline

Activity: 2898
Merit: 1966


Leave no FUD unchallenged


View Profile WWW
March 07, 2017, 08:17:16 PM
 #2

I'm all for compromise, but still feel that any static, fixed size is a clumsy and crude solution.  As many have argued previously, it's merely kicking the can down the road.  SegWit plus a modified hybrid of BIP100 and BIP106 would be more flexible, adaptable and future-proof.  Not only that, but a sudden, arbitrary, one-time surge in space leads to uncertainty and the possibility of abuse by spammers.  The change is healthier if it's gradual and predictable.

d5000
Legendary
*
Offline Offline

Activity: 3010
Merit: 2925


Decentralization Maximalist


View Profile
March 07, 2017, 08:22:56 PM
 #3

@DooMAD: What you say is certainly valid - it's a short-term fix, but it would fix the current bottlenecks and would already enable Lightning and other off-chain methods to be tested; in 2-3 years we could then switch to a more flexible variant.

And I think it would be very difficult, in the current situation, to reach an agreement that includes some kind of "vote" by the miners for a certain block size, although your proposal seems to be much more moderate than BU (I have only read it superficially, though).


Gimpeline
Hero Member
*****
Offline Offline

Activity: 555
Merit: 507



View Profile
March 07, 2017, 08:28:58 PM
 #4

You have a good point, but I'm pretty sure that there will be no compromise.
The BU side thinks that the Segwit side are idiots, and the Segwit side knows that the BU side are idiots.
There is no compromise
Sundark
Hero Member
*****
Offline Offline

Activity: 560
Merit: 502


View Profile
March 07, 2017, 08:29:11 PM
 #5

We can say for sure that anything SegWit-related is no longer gonna be accepted. The anti-SegWit movement is too strong.
Naysayers will just conclude that this compromise is simply a backdoor for SegWit and that the 2MB blocks are just smoke and mirrors.
AgentofCoin
Legendary
*
Offline Offline

Activity: 1092
Merit: 1001



View Profile
March 07, 2017, 08:35:14 PM
 #6

Unfortunately, I think we are about to head into the next evolution of Bitcoin
blockchain theory, in which what will occur has not really happened before
and we will learn new lessons about how Bitcoin truly functions.

Some will lose greatly and others will win, but the current community and
economy will suffer as a whole. When certain actors are no longer properly
incentivized, they will split the single-chain premise, simply because they can. For
someone to take such a risk with confidence requires either clairvoyance or
absolute madness.

This is mostly due to communication failures, misunderstandings, ideologies,
and egos. Compromises are likely over. Now is the quiet before the storm.

If something doesn't happen soon, all out war will begin.
But, maybe that is the only answer to this question, sadly.

Extremism is malicious, within a Consensus system.

I support a decentralized & unregulatable ledger first, with safe scaling over time.
Request a signed message if you are associating with anyone claiming to be me.
-ck
Legendary
*
Offline Offline

Activity: 3416
Merit: 1359


Ruu \o/


View Profile WWW
March 07, 2017, 08:44:34 PM
 #7

Code them up together, but allow each component to be activated *separately*, thus allowing clients to choose which component they wish to support... I suspect support for BIP102 will be a lot higher now (yes, I know about the quadratic scaling issue).
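A rough sketch of what "separately activatable" could look like with BIP9-style version bits (the bit assignments here are hypothetical, not from any deployment schedule):

// Sketch of -ck's suggestion: ship both changes in one client, but gate
// each behind its own BIP9 version bit. Bit numbers are hypothetical.
#include <cstdint>
#include <iostream>

const int BIT_SEGWIT = 1; // hypothetical bit for the BIP 141 component
const int BIT_2MB    = 5; // hypothetical bit for the BIP 102 component

bool SignalsBit(uint32_t version, int bit) {
    // BIP9: the top three version bits must be 001 for the remaining
    // bits to be interpreted as individual deployment signals.
    return (version & 0xE0000000) == 0x20000000 && (version & (1u << bit)) != 0;
}

int main() {
    uint32_t v = 0x20000002; // signals bit 1 only
    std::cout << "segwit component signalled: " << SignalsBit(v, BIT_SEGWIT) << "\n";
    std::cout << "2MB component signalled:    " << SignalsBit(v, BIT_2MB) << "\n";
}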

Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel
2% Fee Solo mining at solo.ckpool.org
-ck
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 07, 2017, 09:19:17 PM
 #8

Is it not the case that segwit coded as a hard fork would mean that all UTXOs can be spent with segwit? No stupid network topology introduced like with the soft-fork mechanism? If so, then yes, I think it would be accepted, unless someone thinks there are reasons why it would be a bad idea. Although my worry then would be that we would be fighting for the next capacity hard fork with no leverage.

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
hv_
Legendary
*
Offline Offline

Activity: 2100
Merit: 1051

Clean Code and Scale


View Profile WWW
March 07, 2017, 09:50:54 PM
 #9

Given that the dev faction's (Blockstream/Core) solution, SW, looks rejected by the miner faction, the compromise should be proposed by the second faction, not the first one again.

And we might need a third faction - merchants? - to moderate, in case.

Carpe diem  -  understand the White Paper and mine honest.
Fix real world issues: Check out b-vote.com
The simple way is the genius way - Satoshi's Rules: humana veris _
AgentofCoin
Legendary
*
Offline Offline

Activity: 1092
Merit: 1001



View Profile
March 07, 2017, 10:15:44 PM
Last edit: March 07, 2017, 10:27:33 PM by AgentofCoin
 #10

Is it not the case that segwit coded as a hard fork would mean that all UTXOs can be spent with segwit? No stupid network topology introduced like with the soft-fork mechanism? If so, then yes, I think it would be accepted, unless ...

Given that the dev faction's (Blockstream/Core) solution, SW, looks rejected by the miner faction, the compromise should be proposed by the second faction, not the first one again.
And we might need a third faction - merchants? - to moderate, in case.


The issue here is that if the BU community and BU devs are not willing to cap the blocksize
or cap the blockweight, then there can never be compromise. They will fork eventually,
since they are extremists. They are not looking out for the future, only for themselves now,
in the most perfect form of greed: the greed that kills the golden goose, which is the most
stupid of all greeds.

 - BU's fundamental purpose is Semi-Unrestricted block building (accelerates network centralization).
This is to bring about a more currency-like device now, instead of later.
They do not mind network centralization, or else deny/ignore the possibility of its occurrence.

 - CORE's fundamental purpose is Semi-Restricted block building (preserves network decentralization).
This is to maintain unregulatability and other such aspects, now and later.
They do not mind slowed user growth or high fees, or else deny/ignore their possible impacts.

They are fundamentally opposed. Like a couple that has different interests now and changed over time.
The normal situation would be that the couple would break up and each do their own thing.

If a compromise can be reached, it will be either full capitulation or a masterful answer still unknown.

I support a decentralized & unregulatable ledger first, with safe scaling over time.
Request a signed message if you are associating with anyone claiming to be me.
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 07, 2017, 10:29:13 PM
 #11

BU's fundamental purpose is Semi-Unrestricted block building (accelerates network centralization).
CORE's fundamental purpose is Semi-Restricted block building (preserves network decentralization).

Bigger blocks tend toward network centralisation, but decentralise the user base (more people can afford to send bitcoin).
Small blocks allow greater network decentralisation, but centralise the user base (only a few big actors can afford to send bitcoin).

How decentralised is LN?

It would seem the former of these two options was envisaged by the creator at the time: nodes centralising around well-connected mining nodes and bitcoin service providers, and users using SPV wallets.

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
AgentofCoin
Legendary
*
Offline Offline

Activity: 1092
Merit: 1001



View Profile
March 07, 2017, 10:44:25 PM
 #12

BU's fundamental purpose is Semi-Unrestricted block building (accelerates network centralization).
CORE's fundamental purpose is Semi-Restricted block building (preserves network decentralization).
...
How decentralised is LN?

I am not qualified to answer.
In theory, if designed and implemented appropriately, equal to Bitcoin's.


It would seem the former of these two options was envisaged by the creator at the time: nodes centralising around well-connected mining nodes and bitcoin service providers, and users using SPV wallets.

Yes, but there is one problem that is a constant misunderstanding in the
whole Bitcoin community.

When this belief was stated by Satoshi, Nodes were a single entity.
The miners were validators and the validators were miners.
There was only one. Now, there are two separate systems.

Due to these two separate systems, there are two possible choices now.
Satoshi's original comments (as to Nodes) no longer apply to today's reality.

I support a decentralized & unregulatable ledger first, with safe scaling over time.
Request a signed message if you are associating with anyone claiming to be me.
tbonetony
Sr. Member
****
Offline Offline

Activity: 441
Merit: 250


No zuo no die why you try, u zuo u die dont be shy


View Profile
March 07, 2017, 10:51:51 PM
 #13

Yes, I like this idea. If this could be a configuration choice for node operators to vote on, that would be even better.

Forget about BU and BC, we need Bitcoin United Grin

Customizable full-featured crypto trading platform in development. Ask for a demo if interested.
I offer private S9 rental for various lengths: https://bitcointalk.org/index.php?topic=1708351.0
Swimmer63
Legendary
*
Offline Offline

Activity: 1593
Merit: 1004



View Profile
March 07, 2017, 11:08:22 PM
 #14

I am not technically qualified to comment in detail.  But I am very much for compromise and 2MB with Segwit is an excellent place to start.  Let's see who is serious about moving btc ahead. 
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 07, 2017, 11:21:52 PM
 #15

It would seem the former of these two options was envisaged by the creator at the time: nodes centralising around well-connected mining nodes and bitcoin service providers, and users using SPV wallets.

Yes, but there is one problem that is a constant misunderstanding in the
whole Bitcoin community.

When this belief was stated by Satoshi, Nodes were a single entity.
The miners were validators and the validators were miners.
There was only one. Now, there are two separate systems.

Due to these two separate systems, there are two possible choices now.
Satoshi's original comments (as to Nodes) no longer apply to today's reality.

I don't think the fact that not all nodes are mining nodes changes the fundamental premise. In fact it strengthens it as there are more validation nodes.

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 07, 2017, 11:32:22 PM
 #16

BU's fundamental purpose is Semi-Unrestricted block building (accelerates network centralization 1).
CORE's fundamental purpose is Semi-Restricted block building (preserves network decentralization 2).

1 Bigger blocks tend toward network centralisation,
3 but decentralise the user base (more people can afford to send bitcoin).

2 Small blocks allow greater network decentralisation,
4 but centralise the user base (only a few big actors can afford to send bitcoin).

you're putting words into people's mouths.

1. BU/big-blockers don't want one brand running anything... most BU/big-blockers are happy with bitcoinj, xt, classic, btcd, etc all running on the same level playing field, all using real consensus to come to agreement.. and if core got rid of the blockstream corporation, they would be happy with core too (the main gripe is blockstream's centralist control)

2. small-blockers have shown distaste for anything not blockstream-inspired/funded.. (rekt campaigns against bitcoinj, xt, classic and bu)

3. correct (people keep funds on their personal privkeys and use future LN services voluntarily)

4. correct (people move funds to new keys and multisigs where payments need an LN counterparty signing, but done forcefully due to politics/fee war games)

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
BitUsher
Legendary
*
Offline Offline

Activity: 994
Merit: 1031


View Profile
March 07, 2017, 11:41:27 PM
 #17

Changing MAX_BLOCK_SIZE to 2MB + segwit literally means we will see 4-8MB blocks.
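Spelling out the arithmetic behind that claim (my own working, assuming the doubled 8M weight limit from the opening post's proposal):

weight = 3*base + (base + witness) = 4*base + witness <= 8,000,000
  all base (witness = 0):   4*base <= 8M   ->  2MB blocks
  all witness (base ~ 0):   witness <= 8M  ->  up to ~8MB on the wire
  witness ~ 2x base:        6*base <= 8M   ->  ~4MB total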

No thanks.

I don't care if this HF proposal comes from Core or another repo; I will reject it and stay on the original chain. Developers have no power over what software I choose to run.

4-8MB blocks are too big, and I am only interested in a HF that includes many HF-wishlist items and permanently solves the scaling problem, instead of kicking the can down the road a few months.


Here are some academic papers that reflect how dangerous blocks over 4MB currently are and one reason why segwit limits blocksizes to 4MB max.

http://bitfury.com/content/5-white-papers-research/block-size-1.1.1.pdf

http://fc16.ifca.ai/bitcoin/papers/CDE+16.pdf

This is a dangerous precedent, giving control to miners or developers without community consensus. Hard forks should be a matter of last resort, and when a SF is on the table that offers essentially the same blocksize increase as Classic, we should be grateful and take it instead.

There is significant moral hazard and social hazard in forcing a HF on the community, and in how that would make us all insecure in the future. If the developers or miners are perceived as a group that can change the protocol without overwhelming consensus of the bitcoin users, then they can easily be manipulated and attacked by bad actors like states and others.

There is guaranteed to be a split where 2-3 coins exist, which will cause short-term havoc due to uncertainty, loss of immutability, breaking the 21-million-limit promise, moral hazard, social hazard, etc... (Remember, Bitcoin is actually used for things, unlike Ethereum, which is 100% speculation; thus a split is far more damaging to bitcoin.)

The ETF follows the most-worked chain only, thus if the miners are swayed back to the original chain after many of us dump our split coins, all those ETF investors lose their investment value.

4-8MB blocks will increase miner and node centralization and force me to shut down my personal node.
AgentofCoin
Legendary
*
Offline Offline

Activity: 1092
Merit: 1001



View Profile
March 08, 2017, 12:01:35 AM
 #18

It would seem the former of these two options was envisaged by the creator at the time: nodes centralising around well-connected mining nodes and bitcoin service providers, and users using SPV wallets.

Yes, but there is one problem that is a constant misunderstanding in the
whole Bitcoin community.

When this belief was stated by Satoshi, Nodes were a single entity.
The miners were validators and the validators were miners.
There was only one. Now, there are two separate systems.

Due to these two separate systems, there are two possible choices now.
Satoshi's original comments (as to Nodes) no longer apply to today's reality.
I don't think the fact that not all nodes are mining nodes changes the fundamental premise. In fact it strengthens it as there are more validation nodes.

No, you missed my point. I thought it was more evident.
Non-mining nodes are not incentivized the way miner nodes are.

The point being:
both can have opposite votes now (two sides), instead of the vote
always tending toward the same incentivized choice by Hardfork.
Now there are more possibilities such as Softforks. (User activated
fork is now another one, but is a new theory which is riskier than
either soft or hard.)

All I'm saying is when you cite Satoshi about Nodes, it was before
the Node split and is entirely in a different context that no longer
exists.

I support a decentralized & unregulatable ledger first, with safe scaling over time.
Request a signed message if you are associating with anyone claiming to be me.
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 08, 2017, 12:14:55 AM
 #19

I don't think the fact that not all nodes are mining nodes changes the fundamental premise. In fact it strengthens it as there are more validation nodes.
No, you missed my point. I thought it was more evident.
Non-mining nodes are not incentivized the way miner nodes are.

The point being:
both can have opposite votes now (two sides), instead of the vote
always tending toward the same incentivized choice by Hardfork.
Now there are more possibilities such as Softforks. (User activated
fork is now another one, but is a new theory which is riskier than
either soft or hard.)

All I'm saying is when you cite Satoshi about Nodes, it was before
the Node split and is entirely in a different context that no longer
exists.

But non-mining nodes are the majority of the network. Miners have to produce blocks that follow the network consensus or they get orphaned. Hard-fork consensus means nodes update first, miners follow.

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
AgentofCoin
Legendary
*
Offline Offline

Activity: 1092
Merit: 1001



View Profile
March 08, 2017, 12:19:03 AM
 #20

I don't think the fact that not all nodes are mining nodes changes the fundamental premise. In fact it strengthens it as there are more validation nodes.
No, you missed my point. I thought it was more evident.
Non-mining nodes are not incentivized the way miner nodes are.

The point being:
both can have opposite votes now (two sides), instead of the vote
always tending toward the same incentivized choice by Hardfork.
Now there are more possibilities such as Softforks. (User activated
fork is now another one, but is a new theory which is riskier than
either soft or hard.)

All I'm saying is when you cite Satoshi about Nodes, it was before
the Node split and is entirely in a different context that no longer
exists.
But non-mining nodes are the majority of the network. Miners have to produce blocks that follow the network consensus or they get orphaned. Hard-fork consensus means nodes update first, miners follow.

That doesn't matter. Miners can fork away without node validators,
and the minority hash chain's difficulty will be too high (and may not
adjust in time to prevent possible failure). Meanwhile the miner nodes
are on a new chain, mining & semi-validating away. The minority hash
chain could, in short time, have no hash at all, in theory.

Your statement only applies if there is no malicious forking.
When the two node systems split, this possibility came into existence.
Prior to this, they always moved as one, since there was only one.
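Rough numbers on the "difficulty may not adjust in time" point (a back-of-envelope sketch, assuming the pre-fork difficulty was tuned to the full hashrate):

block interval ~ 10 minutes / (fraction of hash remaining)
retarget window = 2016 blocks
  50% of hash stays:  ~20 min blocks  ->  next retarget in ~28 days
  10% of hash stays: ~100 min blocks  ->  next retarget in ~140 days
   5% of hash stays: ~200 min blocks  ->  next retarget in ~280 days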

I support a decentralized & unregulatable ledger first, with safe scaling over time.
Request a signed message if you are associating with anyone claiming to be me.
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 08, 2017, 12:27:48 AM
 #21



I see your point about malicious forking. At this point I need others' viewpoints to consider.

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 08, 2017, 01:32:02 AM
Last edit: March 08, 2017, 01:46:39 AM by franky1
 #22



I see your point about malicious forking. At this point I need others' viewpoints to consider.
side note for you both on "malicious forking":
many people are overusing the umbrella term "forking".

try to stick with clear definitions (even I'm using gmaxwell's buzzwords to keep things clear, so even he can't poke the bear)

e.g. soft = pools move without nodes
e.g. hard = nodes move and pools follow

consensus = near-unanimous agreement (a few orphans now and again, but one chain)
controversial = arguably low agreement (lots of orphans before it settles down to one chain)
bilateral split = intentional avoidance of consensus/orphans/opposition (an altcoin creator with a second chain sustaining life)



now then:
segwit is in essence a soft consensus.
segwit has 2 parts: although pools change the rules without nodes' consent, segwit also changes other things, like the network topology (FIBRE), so that upstream peers (central/top, close to the pools) are able to translate and pass downstream block data in a form nodes can consent to/accept.

however, putting segwit aside and looking at bip9 (which is how pools were given the vote), a soft bilateral split can happen.
bip9 allows:
soft (pool) BILATERAL SPLIT

BIP9 changed to a new quorum sensing approach that is MUCH less vulnerable to false triggering, so 95% under it is more like 99.9% under the old approach.  basically when it activates, the 95% will have to be willing to potentially orphan the blocks of the 5% that remain
If there is some reason when the users of Bitcoin would rather have it activate at 90%  ... then even with the 95% rule the network could choose to activate it at 90% just by orphaning the blocks of the non-supporters until 95%+ of the remaining blocks signaled activation.
in essence ignoring opposing pools, whereby those other pools still hash,
which can lead to:
a split of 2 chains where they continue and just build on top of THEMselves, or
they give up and find other jobs, or
they change software and join the majority



a totally different bip, not even in any bitcoin version right now..
hard (node and pool) BILATERAL SPLIT
the new UASF does this. (although the buzzword is meant to make it sound easier, by stroking the sheep to sleep by pretending it's a "soft" fix - the S of UASF - but because it's node-caused, it's hard)

because it involves intentionally rejecting valid blocks purely based on who created them, it leads to the pools:
a split of 2 chains where they continue and just build on top of THEMselves, or
they give up and find other jobs, or
they change software and join the majority.

however, because nodes are doing this, it can cause further controversy (among the nodes, not the pools), because the nodes are then selecting differently what is acceptable and hanging onto different chains.

which is where UASF is worse than hard consensus or hard bilateral, due to even more orphan drama
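For reference, a bare-bones sketch of the BIP9 counting the gmaxwell quote describes (simplified from the real per-deployment state machine; orphaning non-signalling blocks, as in the quote, effectively shrinks the window to supporters only):

// Simplified sketch of BIP9 lock-in counting. The real implementation
// tracks a state machine per deployment; this shows only the 95% check.
#include <cstddef>
#include <iostream>
#include <vector>

bool LockedIn(const std::vector<bool>& signals) {
    const size_t window = 2016, threshold = 1916; // mainnet: 95% of a retarget window
    if (signals.size() != window) return false;
    size_t count = 0;
    for (bool s : signals) count += s ? 1 : 0;
    return count >= threshold;
}

int main() {
    std::vector<bool> w(2016, false);
    for (size_t i = 0; i < 1916; ++i) w[i] = true; // exactly the threshold
    std::cout << "locked in: " << LockedIn(w) << "\n"; // prints 1
}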

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
AgentofCoin
Legendary
*
Offline Offline

Activity: 1092
Merit: 1001



View Profile
March 08, 2017, 02:28:37 AM
 #23



I see your point about malicious forking. At this point I need others' viewpoints to consider.
side note for you both on "malicious forking":
many people are overusing the umbrella term "forking".

try to stick with clear definitions (even I'm using gmaxwell's buzzwords to keep things clear, so even he can't poke the bear)

e.g. soft = pools move without nodes
e.g. hard = nodes move and pools follow

consensus = near-unanimous agreement (a few orphans now and again, but one chain)
controversial = arguably low agreement (lots of orphans before it settles down to one chain)
bilateral split = intentional avoidance of consensus/orphans/opposition (an altcoin creator with a second chain sustaining life)
...

For the sake of clarification, for users who wish to know what I was referring to:
it would be a "controversial hardfork". Node banning due to a majority of miners
being out of consensus does not matter, since those miners are intentionally leaving
the node network for good to create a chain that will directly fight with the old (replays).
In a sense, it is like dragging the old network with it. It would be a test of whether
the network follows hash or the hash follows the network.

No one would want to do a purposeful bilateral hardfork, since then there would be time to
protect against such a situation, and economies would be able to choose one or the other.
A bilateral fork is when two parties agree to disagree. A controversial hardfork is by its
nature malicious, since it will cause issues if performed correctly.

The real difference is that one is programmed to split, and the other is an accidental split
that is purposefully attempting to maintain an invalid chain to the point at which it may
become a valid chain. In this event, malicious miners could in theory, with enough hash,
continue indefinitely, never needing to provide the forewarning a bilateral fork does.

A controversial hardfork is malicious. A bilateral hardfork, in theory, is not.

I support a decentralized & unregulatable ledger first, with safe scaling over time.
Request a signed message if you are associating with anyone claiming to be me.
kiklo
Legendary
*
Offline Offline

Activity: 1092
Merit: 1000



View Profile
March 08, 2017, 06:03:27 AM
 #24

I have read this compromise proposal from "ecafyelims" at Reddit and want to know if there is support for it here in this forum.

Compromise: Let's merge BIP 102 (2MB HF) and BIP 141 (Segwit SF)

Quote from: Reddit user ecafyelims
Let's merge BIP 102 (2MB HF) and BIP 141 (Segwit SF) into a single HF (with overwhelming majority consensus).

...

Guy, there is No Compromise, no matter what: if Segwit is Activated, the Miners are Fucked.
Segwit allows LN to function without Trust, and will allow LN to Steal Transaction Fees directly from the miners.

There is No Compromise Possible as long as they want Segwit.


 Cool

FYI:
LN does need Segwit so that LN can be trustless.
Otherwise LN requires a separate Trust system in place, or very long, extensive Time Locks - both of which the LN Devs don't want.

https://www.reddit.com/r/Bitcoin/comments/5eqm2b/can_ln_work_without_segwit/?st=izzovrzk&sh=b2fe8b0a
Quote
Yeah you can do LN without segwit. It's less efficient, and there are some features you won't be able to do.

With segwit, you can have a 3rd party "watch" your channel for you in case your counterparty tries to broadcast an old, fraudulent transaction.
The 3rd party can automatically grab your money back for you. And the watcher doesn't even know about your transactions or your balances while watching.

That whole feature is pretty much gone without segwit.
You'd have to tell the watcher everything about your channel, and the only thing they'd be able to do is e-mail you to let you know if fraud occurred.


The other main disadvantage to segwit-less LN is that channels would have a preset duration. That's a pretty big downside.

If segwit doesn't activate after a long time, we could re-program some of the current code to work without segwit.
I think everyone's hoping we don't have to as that'd be a bit disappointing, but doable.
As I meme'd at scaling HK, there are levels of LN we are prepared to accept


In Short, without Segwit any version of LN is going to be Crap and No threat to the Miners at all.   Wink

FYI2:
Bitcoin Unlimited fixes the issue for good without segwit; nothing else needs to happen.

BTC Core Devs placed their own personal interests in front of BTC performance; they have to be fired, as they can no longer be trusted.
Segwit is a Trojan: once activated, it can never be removed from the blockchain.

https://www.reddit.com/r/btc/comments/5vbofp/initially_i_liked_segwit_but_then_i_learned/

Quote
You wanted people like me to support you and install your code, Core / Blockstream?

Then you shouldn't have a released messy, dangerous, centrally planned hack like SegWit-as-a-soft-fork - with its random, arbitrary, centrally planned, ridiculously tiny 1.7MB blocksize -
and its dangerous "anyone-can-spend" soft-fork semantics.

Now it's too late. The market will reject SegWit - and it's all Core / Blockstream's fault.

The market prefers simpler, safer, future-proof, market-based solutions such as Bitcoin Unlimited.

Quote
The damage which would be caused by SegWit (at the financial, software, and governance level) would be massive:

    Millions of lines of other Bitcoin code would have to be rewritten (in wallets, on exchanges, at businesses) in order to become compatible with all the messy non-standard kludges and workarounds which Blockstream was forced into adding to the code (the famous "technical debt") in order to get SegWit to work as a soft fork.

    SegWit was originally sold to us as a "code clean-up". (Heck, even I initially fell for it when I saw an early presentation by Pieter Wuille on YouTube, from one of Blockstream's many censored Bitcoin scaling-stalling conferences.)

    But as we all later discovered, SegWit is just a messy hack.

    Probably the most dangerous aspect of SegWit is that it changes all transactions into "ANYONE-CAN-SPEND" without SegWit - all because of the messy workarounds necessary to do SegWit as a soft-fork. The kludges and workarounds involving SegWit's "ANYONE-CAN-SPEND" semantics would only work as long as SegWit is still installed.

    This means that it would be impossible to roll-back SegWit - because all SegWit transactions that get recorded on the blockchain would now be interpreted as "ANYONE-CAN-SPEND" - so, SegWit's dangerous and messy "kludges and workarounds and hacks" would have to be made permanent - otherwise, anyone could spend those "ANYONE-CAN-SPEND" SegWit coins!

    Segwit cannot be rolled back because to non-upgraded clients, ANYONE can spend Segwit txn outputs. If Segwit is rolled back, all funds locked in Segwit outputs can be taken by anyone. As more funds gets locked up in segwit outputs, incentive for miners to collude to claim them grows.


freedomno1
Legendary
*
Offline Offline

Activity: 1722
Merit: 1070


Learning the troll avoidance button :)


View Profile WWW
March 08, 2017, 07:42:06 AM
Last edit: March 08, 2017, 08:49:27 AM by freedomno1
 #25

Code them up together, but allow each component to be activated *separately*, thus allowing clients to choose which component they wish to support... I suspect support for BIP102 will be a lot higher now (yes, I know about the quadratic scaling issue).

When you get a transaction stuck on the chain for days on end with the standard fee, and you have a two-to-three-hour window to lock in a rate on a Bitcoin exchange to convert to fiat -
yep, BIP102 most certainly will start acquiring more support from the average users.

On the topic of compromise: if people can get the support to do this scaling compromise, then sure, let's go with it.
In the end, as long as the block size rises in the short term, we can keep kicking the can down the road, and that works for now.

While we're at it, someone could make a MSR on what Bitcoin's operating requirement is for the Chinese miners.

I worry that a scaling compromise may be just perpetual kicking of the can, and if we can't even kick the can, we may really just end up with two chains.
A hard fork may be the only way we will see consensus, as the capabilities and requirements where the miners are located are part of the current issue, and a stakeholder mandate is needed, not just a miner one.
https://randomoracle.wordpress.com/2016/01/25/observations-on-bitcoins-scaling-challenge/




No one would want to do a purposeful bilateral hardfork, since then there would be time to
protect against such a situation, and economies would be able to choose one or the other.
A bilateral fork is when two parties agree to disagree. A controversial hardfork is by its
nature malicious, since it will cause issues if performed correctly.

The real difference is that one is programmed to split, and the other is an accidental split
that is purposefully attempting to maintain an invalid chain to the point at which it may
become a valid chain. In this event, malicious miners could in theory, with enough hash,
continue indefinitely, never needing to provide the forewarning a bilateral fork does.

A controversial hardfork is malicious. A bilateral hardfork, in theory, is not.


I agree no one would want to do a purposeful hardfork if it can be resolved with a soft fork, but someone should at least try to get a general feel for the best solution in case a hardfork is the resulting outcome.

As you pointed out there are two clear development paths.

 - BU's fundamental purpose is Semi-Unrestricted block building (accelerates network centralization).
This is to bring about a more currency-like device now, instead of later.
They do not mind network centralization, or else deny/ignore the possibility of its occurrence.

 - CORE's fundamental purpose is Semi-Restricted block building (preserves network decentralization).
This is to maintain unregulatability and other such aspects, now and later.
They do not mind slowed user growth or high fees, or else deny/ignore their possible impacts.

Whether it becomes two chains instead of one, and we see a bilateral hardfork, is the real question.


(Mumble sometimes someone bumps an old post and necros but this one is relevant to today)
https://bitcointalk.org/index.php?topic=48.0
Anon TumbleBit
Newbie
*
Offline Offline

Activity: 11
Merit: 0


View Profile
March 08, 2017, 08:19:58 AM
 #26

No hard fork please - you know the price crashed due to Chinese Antpool mining a BU block? A hard fork will 100% kill bitcoin, or at least hazard the price.
kiklo
Legendary
*
Offline Offline

Activity: 1092
Merit: 1000



View Profile
March 08, 2017, 08:24:10 AM
 #27

No hard fork please - you know the price crashed due to Chinese Antpool mining a BU block? A hard fork will 100% kill bitcoin, or at least hazard the price.

No offense, but if you truly believe what you just said, you should sell every crypto coin you own and stick with cash.
You are too timid for this environment.


 Cool
d5000
Legendary
*
Offline Offline

Activity: 3010
Merit: 2925


Decentralization Maximalist


View Profile
March 08, 2017, 08:20:34 PM
 #28

I have read DooMAD's proposal now and I like it a bit. It would give less power to miners, as they can only vote for small block size increases, but it would eliminate the need for future hardforks. The only problem I see is that it could encourage spam attacks (to give miners an incentive to vote for higher blocksizes), but spam attacks will be even more expensive than today because of the "transaction fees must be higher than in the last period" requirement, so they are not for everyone.
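A very rough sketch of that voting mechanism as described above - the step size, threshold, and fee gate are illustrative guesses, not DooMAD's exact parameters:

// Rough sketch of a miner-voted blocksize adjustment gated on rising fees.
// All parameter values here are guesses for illustration only.
#include <cstdint>
#include <iostream>

uint64_t NextLimit(uint64_t limit, double vote_share_up,
                   uint64_t fees_now, uint64_t fees_prev) {
    // An increase only passes if total fees rose versus the last period,
    // which makes spamming your way to bigger blocks expensive.
    if (vote_share_up > 0.5 && fees_now > fees_prev) return limit + limit / 10;
    return limit;
}

int main() {
    std::cout << NextLimit(1000000, 0.8, 220, 200) << "\n"; // 1100000 (+10%)
    std::cout << NextLimit(1000000, 0.8, 180, 200) << "\n"; // 1000000 (fees fell)
}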

Code them up together, but allow each component to be activated *separately*, thus allowing clients to choose which component they wish to support... I suspect support for BIP102 will be a lot higher now (yes, I know about the quadratic scaling issue).

That certainly sounds like a good idea, if the community decides to support this proposal. Would Core allow that kind of compromise proposal, coded into a real pull request, to be activated?

Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 08, 2017, 08:28:21 PM
 #29

It most likely will not work. As I have outlined in a recent post, there are too many different and "entrenched" camps.

There are a lot of different "camps":
1) BU only.
2) Core only.
3) Soft-fork only.
4) Hard-fork only.
5) Only block-size increase.
6) Only block-size decrease.
7) No hard-fork at any cost.
8.) Other?
We can even expand on this. There are people that think 51% of hashrate (node percentage is irrelevant) is adequate for a hard fork as an upgrade, and there are those who think that 100% is required. Both of these ideologies are absurd.

I have read DooMAD's proposal now and I like it a bit. It would give less power to miners, as they can only vote for small block size increases, but it would eliminate the need for future hardforks. The only problem I see is that it could encourage spam attacks (to give miners an incentive to vote for higher blocksizes), but spam attacks will be even more expensive than today because of the "transaction fees must be higher than in the last period" requirement, so they are not for everyone.
I do have to add that, while I think it would still be extremely hard to gather 90-95% consensus on both ideas, I think both would reach far higher and easier support than either Segwit or BU.

That certainly sounds like a good idea, if the community decides to support this proposal. Would Core allow that kind of compromise?
Core cannot stop community/miner consensus. Let me see a viable BIP + code first, then we can talk about that.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
AgentofCoin
Legendary
*
Offline Offline

Activity: 1092
Merit: 1001



View Profile
March 08, 2017, 08:49:31 PM
 #30

It most likely will not work. As I have outlined in a recent post, there are too many different and "entrenched" camps.

There are a lot of different "camps":
1) BU only.
2) Core only.
3) Soft-fork only.
4) Hard-fork only.
5) Only block-size increase.
6) Only block-size decrease.
7) No hard-fork at any cost.
8.) Other?
We can even expand on this. There are people that think 51% of hashrate (node percentage is irrelevant) is adequate for a hard fork as an upgrade, and there are those who think that 100% is required. Both of these ideologies are absurd.
...

I agree with this, and for some reason the community is blind to it.
There seems to be a denial and slight delusion about all this.
This is mainly why I have come to the unfortunate conclusion that a hardfork to
attack or to split the community is not far off now. We are just chasing our tails.

Also, I'm sure that if the ETF is denied on Friday, Core supporters and BU supporters
will blame each other, and if it is accepted (which IMO is not likely) each side will take
the credit. The point being that those two sides are locked into perpetual hate.

I support a decentralized & unregulatable ledger first, with safe scaling over time.
Request a signed message if you are associating with anyone claiming to be me.
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 08, 2017, 09:53:56 PM
 #31

Core cannot stop community/miner consensus.
firstly core bypassed community consensus using bip9 by going soft..

Let me see a viable BIP + code first, then we can talk about that.

secondly, read bip 9 - yep, it can be changed. even gmaxwell admits this.
BIP9 changed to a new quorum sensing approach that is MUCH less vulnerable to false triggering, so 95% under it is more like 99.9% (C) under the old approach.  basically when it activates, the 95% will have to be willing to potentially orphan the blocks of the 5% that remain(B)
If there is some reason when the users of Bitcoin would rather have it activate at 90%  ... then even with the 95% rule the network could choose to activate it at 90% just by orphaning the blocks of the non-supporters until 95%+ of the remaining blocks signaled activation.(A)

^ this is where the UASF comes in (A leads to B leads to C)

thirdly, you personally know about banning nodes. you have often highlighted yourself as the guy with all the 'know' of the node IPs everyone should ban

it's not rocket science

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
-ck
Legendary
*
Offline Offline

Activity: 3416
Merit: 1359


Ruu \o/


View Profile WWW
March 08, 2017, 10:56:49 PM
 #32

Code them up together, but allow each component to be activated *separately*, thus allowing clients to choose which component they wish to support... I suspect support for BIP102 will be a lot higher now (yes, I know about the quadratic scaling issue).

That certainly sounds like a good idea, if the community decides to support this proposal. Would Core allow that kind of compromise proposal, coded into a real pull request, to be activated?
Core would not, because they're all convinced we must have segwit before increasing the block size, to prevent a quadratic-scaling sigop DDoS happening... though segwit doesn't change the sigops included in regular transactions, it only makes segwit transactions scale linearly, which is why the blocksize increase proposal is still not on the hard roadmap for core as is. If block generation is biased against heavy-sigop transactions in the core code (this does not need a consensus change, soft fork or hard fork), then pool operators would have to consciously include heavy-sigop transactions intentionally in order to create a DDoS-type block - would they do that? Always assume that if a malicious vector exists then someone will try and exploit it, though it would be very costly for a pool to risk slow block generation/orphaning by doing so.
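A toy model of the quadratic-versus-linear distinction being described (not Core's validation code; it only shows the growth rates):

// Toy model of legacy vs BIP 143 signature-hashing cost. Not real
// validation code; it only illustrates O(n^2) vs O(n) growth.
#include <cstdint>
#include <iostream>

// Legacy sighash: each of n inputs re-hashes (roughly) the whole
// transaction, whose size itself grows with n. Bytes hashed ~ n^2.
uint64_t LegacyBytesHashed(uint64_t n, uint64_t bytes_per_input) {
    return n * (n * bytes_per_input);
}

// BIP 143 (segwit): shared midstates make per-input work constant. O(n).
uint64_t SegwitBytesHashed(uint64_t n, uint64_t bytes_per_input) {
    return n * bytes_per_input;
}

int main() {
    for (uint64_t n : {100, 1000, 5000}) {
        std::cout << n << " inputs: legacy ~" << LegacyBytesHashed(n, 150)
                  << " bytes, segwit ~" << SegwitBytesHashed(n, 150) << " bytes\n";
    }
}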

Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel
2% Fee Solo mining at solo.ckpool.org
-ck
Carlton Banks
Legendary
*
Offline Offline

Activity: 3220
Merit: 2599



View Profile
March 08, 2017, 11:26:35 PM
 #33

Core would not, because they're all convinced we must have segwit before increasing the block size, to prevent a quadratic-scaling sigop DDoS happening... though segwit doesn't change the sigops included in regular transactions, it only makes segwit transactions scale linearly, which is why the blocksize increase proposal is still not on the hard roadmap for core as is. If block generation is biased against heavy-sigop transactions in the core code (this does not need a consensus change, soft fork or hard fork), then pool operators would have to consciously include heavy-sigop transactions intentionally in order to create a DDoS-type block - would they do that? Always assume that if a malicious vector exists then someone will try and exploit it, though it would be very costly for a pool to risk slow block generation/orphaning by doing so.

Maybe you can help to dispel some commonplace FUD about the sigops DDoS attack.

Bitcoin already has a sigops per block limit to mitigate the risk of this attack (despite the FUD that introducing such a limit is all that's needed to solve the problem, which is pretty dumb considering that's already what happens Roll Eyes)




Now, surely this limit could continue to apply to the base 1MB block _after_ segwit is activated (which can only contain sigs from non-segwit transactions)? Is this how the fork is coded anyway?


Vires in numeris
jbreher
Legendary
*
Offline Offline

Activity: 2912
Merit: 1515


lose: unfind ... loose: untight


View Profile
March 09, 2017, 01:09:57 AM
 #34

No hard fork please - you know the price crashed due to Chinese Antpool mining a BU block? A hard fork will 100% kill bitcoin, or at least hazard the price.

That's just silly hysteria. You should be made aware that BU blocks have been mined into the main Bitcoin blockchain for over a year now.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1002


Core dev leaves me neg feedback #abuse #political


View Profile
March 09, 2017, 02:14:42 AM
 #35

Forgive me if i'm a bit skeptical that segwit is a good idea.  It sounds very complicated and
that once we do it, we'll be stuck with it.

I thought it was initially promoted as a way to avoid a hard fork but if we're going to
be forking to 2MB anyway, what is the point?


AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 09, 2017, 02:22:12 AM
 #36

No hard fork please - you know the price crashed due to Chinese Antpool mining a BU block? A hard fork will 100% kill bitcoin, or at least hazard the price.

That's just silly hysteria. You should be made aware that BU blocks have been mined into the main Bitcoin blockchain for over a year now.

It's just a flag added to the block to show whether the pool is voting for segwit or BU, nothing more. The blocks are exactly the same otherwise as far as I know.

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
jbreher
Legendary
*
Offline Offline

Activity: 2912
Merit: 1515


lose: unfind ... loose: untight


View Profile
March 09, 2017, 03:06:09 AM
 #37

No hard fork please - you know the price crashed due to Chinese Antpool mining a BU block? A hard fork will 100% kill bitcoin, or at least hazard the price.

That's just silly hysteria. You should be made aware that BU blocks have been mined into the main Bitcoin blockchain for over a year now.

It's just a flag added to the block to show whether the pool is voting for segwit or BU, nothing more. The blocks are exactly the same otherwise as far as I know.

Well, yes. I was just illustrating how hysterical the notion that "the price crashed due to Chinese Antpool mining a BU block" was.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 09, 2017, 04:43:12 AM
Last edit: March 09, 2017, 05:00:35 AM by franky1
 #38

Bitcoin already has a sigops per block limit to mitigate the risk of this attack (despite the FUD that introducing such a limit is all that's needed to solve the problem, which is pretty dumb considering that's already what happens Roll Eyes)

core's limit is not low enough:

MAX_BLOCK_SIGOPS_COST = 80000;                         // max sigop cost per block (consensus rule)
MAX_STANDARD_TX_SIGOPS_COST = MAX_BLOCK_SIGOPS_COST/5; // = 16000, max per standard tx (relay policy)

who the damned F*ck should be allowed to build a single tx that uses 20% of a block!!
who the damned F*ck should be allowed to build a single tx that has 16,000 sigops!!

lower the TX sigop limit to something rational and you won't have to worry about delays due to sigop validation

also, one thing CB is missing out on:

Core would not because they're all convinced we must have segwit before increasing the block size to prevent a quadratic scaling sigop DDoS happening... though segwit doesn't change the sigops included in regular transactions, it only makes segwit transactions scale linearly

meaning native-transaction users can still sigop-SPAM.
segwit has nothing to do with disarming the whole block.. just segwit-transaction users.

again, because carlton is not quite grasping it:
though segwit doesn't change the sigops included in regular transactions,

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 09, 2017, 07:07:17 AM
 #39

firstly core bypassed community consensus using bip9 by going soft..
No.

thirdly, you personally know about banning nodes. you have often highlighted yourself as the guy with all the 'know' of the node IPs everyone should ban
Banning nodes is completely fine.

Anyone ready for the altcoin called BTU? Roll Eyes




https://twitter.com/zaifdotjp/status/839692674412142592
https://twitter.com/SatoshiLite/status/839676935768715264

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
Carlton Banks
Legendary
*
Offline Offline

Activity: 3220
Merit: 2599



View Profile
March 09, 2017, 07:38:33 AM
 #40

apparently, a Sigop DDoS attack is possible now, because the Sigops-per-block limit is too high.


Why isn't anyone using the attack then? Cheesy


Always assume that if a malicious vector exists then someone will try and exploit it

Vires in numeris
jbreher
Legendary
*
Offline Offline

Activity: 2912
Merit: 1515


lose: unfind ... loose: untight


View Profile
March 09, 2017, 04:52:22 PM
 #41

Bitcoin already has a sigops per block limit to mitigate the risk of this attack (despite the FUD that introducing such a limit is all that's needed to solve the problem, which is pretty dumb considering that's already what happens Roll Eyes)
who the damned F*ck should be allowed to build a single tx that uses 20% of a block!!
who the damned F*ck should be allowed to build a single tx that has 16,000 sigops!!

Bitcoin is permissionless. The proper answer to "who the damned F*ck should be allowed to build..." is anyone who wishes to.

But another aspect of permissionless is the fact that no miner is compelled to include that transaction into a block. In fact, there is a natural disincentive to any miner including such a transaction into a block. Blocks that take a long time to validate are likely to be orphaned by other blocks which validate quickly.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
Carlton Banks
Legendary
*
Offline Offline

Activity: 3220
Merit: 2599



View Profile
March 09, 2017, 05:32:06 PM
 #42

there is a natural disincentive to any miner including such a transaction into a block. Blocks that take a long time to validate are likely to be orphaned by other blocks which validate quickly.

You're sounding like a Segwitter jbreher


How come Franky, the Bitcoin genius who loudly shouts about how loudly shouting makes him right all the time, can't figure this out too

Vires in numeris
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 09, 2017, 05:37:35 PM
 #43

Bitcoin already has a sigops per block limit to mitigate the risk of this attack (despite the FUD that introducing such a limit is all that's needed to solve the problem, which is pretty dumb considering that's already what happens Roll Eyes)
who the damned F*ck should be allowed to build a single tx that uses 20% of a block!!
who the damned F*ck should be allowed to build a single tx that has 16,000 sigops!!

Bitcoin is permissionless. The proper answer to "who the damned F*ck should be allowed to build..." is anyone who wishes to.

But another aspect of permissionless is the fact that no miner is compelled to include that transaction into a block. In fact, there is a natural disincentive to any miner including such a transaction into a block. Blocks that take a long time to validate are likely to be orphaned by other blocks which validate quickly.

but this is where the rules need to be defined, so that if a malicious pool did add it, the LOWER tx-sigop limit would mean it is knowingly rejected by consensus, so pools won't bother.

by keeping it at 16,000, malicious pools 'could' accept it and know it's a valid block, and only have to worry about 'timing' as the disincentive, rather than RULES.

however, if you prefer not to have RULES and just rely on the belief or faith of pools.. then that is a weakness.
however, if belief and faith were strong enough for such an occurrence not to happen, then quadratics has never been a problem and will never be a problem, because bitcoin is protected by the belief that it will get orphaned due to 'time'.

meaning segwit doesn't need 100% of users to move their funds to segwit keys (weeks after activation) just to possibly get a segwit fix to fix it (never gonna happen anyway), because belief alone in 'time' has protected the network thus far and will continue to

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
jbreher
Legendary
*
Offline Offline

Activity: 2912
Merit: 1515


lose: unfind ... loose: untight


View Profile
March 09, 2017, 06:09:04 PM
 #44

Bitcoin already has a sigops per block limit to mitigate the risk of this attack (despite the FUD that introducing such a limit is all that's needed to solve the problem, which is pretty dumb considering that's already what happens Roll Eyes)
who the damned F*ck should be allowed to build a single tx that uses 20% of a block!!
who the damned F*ck should be allowed to build a single tx that has 16,000 sigops!!

Bitcoin is permissionless. The proper answer to "who the damned F*ck should be allowed to build..." is anyone who wishes to.

But another aspect of permissionless is the fact that no miner is compelled to include that transaction into a block. In fact, there is a natural disincentive to any miner including such a transaction into a block. Blocks that take a long time to validate are likely to be orphaned by other blocks which validate quickly.

but this is where the rules need to be defined.

No new rules need be defined.

Quote
however if you prefer not to have RULES and just rely on belief or faith in pools.. then that is a weakness.
however if belief and faith were strong enough for such an occurrence not to happen, then quadratics has never been a problem and never will be, because bitcoin is protected by the belief that it will get orphaned due to 'time'.

No weakness. No belief. No faith. Natural incentives of rational self-interest.

Quote
meaning segwit doesn't need 100% of users to move their funds to segwit keys (weeks after activation) just to possibly get a segwit fix to fix it (never gonna happen anyway), because belief alone in 'time' has protected the network thus far and will continue to

Yes - now you're getting it.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
d5000
Legendary
*
Offline Offline

Activity: 3010
Merit: 2925


Decentralization Maximalist


View Profile
March 09, 2017, 08:49:31 PM
 #45

I have read DooMAD's proposal now and I like it a bit. [...]
I do have to add that, while I think that it would still be extremely hard to gather 90-95% consensus on both ideas, I think both would reach far higher and easier support than either Segwit or BU.

I don't understand that statement. Are you talking about DooMAD's idea (modified BIP100+BIP106) or the compromise proposed by "ecafyelims", or both?

I ask because I think DooMAD's "10%-blocksize-change-voting proposal" sounds interesting and if there is support by staff/respected community members/devs then it would be worth discussing it in a separate thread to elaborate a "final BIP".

Core would not, because they're all convinced we must have segwit before increasing the block size, to prevent a quadratic scaling sigop DDoS happening... though segwit doesn't change the sigops included in regular transactions, it only makes segwit transactions scale linearly, which is why the blocksize increase proposal is still not on the hard roadmap for core as is.

Well, if I understand right (as an [almost] non-programmer), that problem could be solved by coding the change proposal in a way that explicitly delays the hardfork until a lot of time (several months) has passed after Segwit activation. That should be possible - it would then signal the "big blockers" that their desired blocksize change will come, but would give the system time to adopt Segwit transactions.

franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 09, 2017, 09:07:31 PM
 #46

Quote
meaning segwit doesn't need 100% of users to move their funds to segwit keys (weeks after activation) just to possibly get a segwit fix to fix it (never gonna happen anyway), because belief alone in 'time' has protected the network thus far and will continue to

Yes - now you're getting it.

my "doesnt need" was sarcasm because segwit does actually need it.
the sarcasm was pointed at those that think segwit will fix the issues simply by the belief in 'time'

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
jbreher
Legendary
*
Offline Offline

Activity: 2912
Merit: 1515


lose: unfind ... loose: untight


View Profile
March 09, 2017, 09:39:16 PM
 #47

Quote
meaning segwit doesn't need 100% of users to move their funds to segwit keys (weeks after activation) just to possibly get a segwit fix to fix it (never gonna happen anyway), because belief alone in 'time' has protected the network thus far and will continue to

Yes - now you're getting it.

my "doesnt need" was sarcasm because segwit does actually need it.
the sarcasm was pointed at those that think segwit will fix the issues simply by the belief in 'time'

Your sarcasm escaped me. Though I must admit your statement elicited some surprise, as you have indeed been otherwise consistent in your advancing the notion that the quadratic hashing time solution within The SegWit Omnibus Changeset could simply be stepped around by an attacker.

What I am trying to tell you is that it doesn't matter. It's not a problem in any systemic sense. Market forces will conspire to orphan blocks that take an inordinate amount of time to verify. Regardless of which -- or even if any -- scaling solution be adopted.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
-ck
Legendary
*
Offline Offline

Activity: 3416
Merit: 1359


Ruu \o/


View Profile WWW
March 09, 2017, 09:43:26 PM
 #48

apparently, Sigop DDoS attack is possible now, because the Sigops per block limit is too high.


Why isn't anyone using the attack then? Cheesy


Always assume that if a malicious vector exists then someone will try and exploit it
From memory the closest we've come to that to date was that single transaction 1MB block from f2pool that took nodes up to 25 seconds to validate. It is possible to make it much worse, but newer versions of bitcoind (and probably faster node CPUs) would have brought that down. Rusty at the time estimated it could still take up to 11 seconds with 1MB:

https://rusty.ozlabs.org/?p=522

So yeah it has been used... possibly unwittingly at the time.
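
For anyone who hasn't seen the quadratic part spelled out: with legacy signatures, each input re-hashes (nearly) the whole transaction, so the bytes hashed grow with the square of the input count. A back-of-envelope sketch (the byte sizes are assumptions, not the actual layout of the f2pool tx):

Code:
def legacy_sighash_bytes(n_inputs, bytes_per_input=180, overhead=100):
    # each of the n inputs hashes roughly the full serialized tx once
    tx_size = overhead + n_inputs * bytes_per_input
    return n_inputs * tx_size

for n in (500, 1000, 5000):
    print("%5d inputs: ~%.0f MB hashed" % (n, legacy_sighash_bytes(n) / 1e6))

Ten times the inputs means roughly a hundred times the hashing, which is why a single fat transaction can dominate a block's validation time.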

Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel
2% Fee Solo mining at solo.ckpool.org
-ck
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 09, 2017, 10:12:47 PM
 #49

Your sarcasm escaped me. Though I must admit your statement elicited some surprise, as you have indeed been otherwise consistent in your advancing the notion that the quadratic hashing time solution within The SegWit Omnibus Changeset could simply be stepped around by an attacker.

What I am trying to tell you is that it doesn't matter. It's not a problem in any systemic sense. Market forces will conspire to orphan blocks that take an inordinate amount of time to verify. Regardless of which -- or even if any -- scaling solution be adopted.

so there has never been an excessively quadratic block in bitcoin.. due to faith in "time"

Cheesy are you sure Cheesy

careful how you reply, careful what you say next. the blockchain history never lies

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 09, 2017, 10:30:48 PM
 #50

From memory the closest we've come to that to date was that single transaction 1MB block from f2pool that took nodes up to 25 seconds to validate. It is possible to make it much worse, but newer versions of bitcoind (and probably faster node CPUs) would have brought that down. Rusty at the time estimated it could still take up to 11 seconds with 1MB:

https://rusty.ozlabs.org/?p=522

So yeah it has been used... possibly unwittingly at the time.

by limiting a block's ability to contain a 1mb tx
by having the improvements of libsecp256k1
by having hardware improvements (a raspberry Pi is now at least v3), quadratics doesn't become an issue.

if blocks grow to say 8mb we just keep tx sigops BELOW 16,000 (we don't increase tx sigop limits when block limits rise).. thus no problem.

however needing to rely on "probability of time" or "hope of 100% user migration of funds to segwit keypairs" is not a good enough solution to me, and not a good enough thing to be selling segwit as the solution for. (because 100% key adoption won't occur, malicious users will stay with native keys on purpose)
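
For what it's worth, the rule being argued for here is tiny to express. A sketch (the 16,000 figure is the one used in this thread; the rest is illustrative, not Core or BU code):

Code:
MAX_TX_SIGOPS = 16_000   # per-tx cap held FIXED even as block limits rise

def tx_passes_rule(tx_sigops, tx_bytes, block_limit_bytes=8_000_000):
    # reject by RULE rather than relying on orphan 'timing'
    return tx_sigops <= MAX_TX_SIGOPS and tx_bytes <= block_limit_bytes

print(tx_passes_rule(15_999, 950_000))   # True: under both caps
print(tx_passes_rule(16_001, 950_000))   # False: over the per-tx sigop cap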

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
Hexadecibel
Human Intranet Liason
VIP
Hero Member
*
Offline Offline

Activity: 571
Merit: 504


I still <3 u Satoshi


View Profile
March 09, 2017, 10:50:53 PM
 #51

No compromise.
AgentofCoin
Legendary
*
Offline Offline

Activity: 1092
Merit: 1001



View Profile
March 09, 2017, 11:05:21 PM
Last edit: March 09, 2017, 11:38:07 PM by AgentofCoin
 #52

No compromise.

When I read your comment, this is what my mind flashed to.
https://www.youtube.com/watch?v=9fdcIwHKd_s

I support a decentralized & unregulatable ledger first, with safe scaling over time.
Request a signed message if you are associating with anyone claiming to be me.
jbreher
Legendary
*
Offline Offline

Activity: 2912
Merit: 1515


lose: unfind ... loose: untight


View Profile
March 09, 2017, 11:34:01 PM
 #53

Your sarcasm escaped me. Though I must admit your statement elicited some surprise, as you have indeed been otherwise consistent in your advancing the notion that the quadratic hashing time solution within The SegWit Omnibus Changeset could simply be stepped around by an attacker.

What I am trying to tell you is that it doesn't matter. It's not a problem in any systemic sense. Market forces will conspire to orphan blocks that take an inordinate amount of time to verify. Regardless of which -- or even if any -- scaling solution be adopted.

so there has never been an excessively quadratic block in bitcoin.. due to faith in "time"

are you sure

careful how you reply, careful what you say next. the blockchain history never lies

No, "there has never been an excessively quadratic blocks in bitcoin". Yes, I am sure.

Of course, 'excessive' requires finesse. There has been, to my knowledge, one block in the history of bitcoin that contained a single transaction of nearly 1MB. That transaction took quite a long time to validate due to the quadratic hash time issue.

But what have been the repercussions of this event? Naddadamnthing. Well, there has been endless FUD yammering and chicken-little-ing. But in terms of the core function of Bitcoin, there has been exactly zero effect. In other words, not excessive.

Of course, if a persistent repeated sequence of such blocks were to be somehow mined back-to-back, that might slow transaction processing to a crawl*.

That is, if no other miner bothered to mine a competing block. Which, of course, is what a rational miner would do in such a situation. For then he would reap the rewards of a more-quickly validating block. (That would be the coinbase reward for solving a block).

The 'excessivity' solves itself. Through natural incentive of rational self-interest.

Sure, we should replace the current algorithm with one that scales linearly. After we address more pressing issues. Such as the cartel-like hard capped transaction production quota.

*Anyone supporting the core approach to 'scaling' has already tacitly accepted transaction processing being slowed to a crawl.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
-ck
Legendary
*
Offline Offline

Activity: 3416
Merit: 1359


Ruu \o/


View Profile WWW
March 09, 2017, 11:37:46 PM
 #54

Of course, if a persistent repeated sequence of such blocks were to be somehow mined back-to-back, that might slow transaction processing to a crawl*.

That is, if no other miner bothered to mine a competing block. Which, of course, is what a rational miner would do in such a situation. For then he would reap the rewards of a more-quickly validating block. (That would be the coinbase reward for solving a block).

The 'excessivity' solves itself. Through natural incentive of rational self-interest.
You keep talking about miners mining this more quickly validating block... there is no code currently that can try to validate two different blocks concurrently and pick the one that validates faster. The first one that comes in will be under validation while any other blocks that come in wait before they can be validated, so unless someone has a rewrite that does what you claim, the problem still exists. First block that hits will always win.

Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel
2% Fee Solo mining at solo.ckpool.org
-ck
unamis76
Legendary
*
Offline Offline

Activity: 1512
Merit: 1001


View Profile
March 09, 2017, 11:48:32 PM
 #55

This and other similar mixes of BIPs have been suggested... If it scales, I'm down for it, or pretty much anything else.

I have read DooMAD's proposal now and I like it a bit. It would give less power to miners as they can only vote for small block size increases, but would eliminate the need for future hardforks. The only problem I see is that it could encourage spam attacks (to give incentives to miners to vote higher blocksizes), but spam attacks will be even more expensive than today because of the "transaction fees being higher than in last period" requirement, so they are not for everyone.

Very interesting post. The other issue I see here is that such code doesn't exist, thus isn't tested, so it can't be deployed anytime soon.

franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 09, 2017, 11:50:11 PM
 #56

also worth noting

miners don't care about blocksize.

an ASIC holds no hard drive. an asic receives a sha256 hash and a target.
all an asic sends out is a second sha256 hash that meets a certain criteria of X number of 0's at the start

miners don't care if it's a 0.001mb or an 8gb block..
block data still ends up as a sha hash

and a sha hash is all a miner cares about



pools (the managers and propagators of block data and hashes) do care what goes into a block

when talking about blockdata, aim your 'miner' argument to concern pools ... not the miner (asic)

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
jbreher
Legendary
*
Offline Offline

Activity: 2912
Merit: 1515


lose: unfind ... loose: untight


View Profile
March 10, 2017, 12:02:52 AM
 #57

Of course, if a persistent repeated sequence of such blocks were to be somehow mined back-to-back, that might slow transaction processing to a crawl*.

That is, if no other miner bothered to mine a competing block. Which, of course, is what a rational miner would do in such a situation. For then he would reap the rewards of a more-quickly validating block. (That would be the coinbase reward for solving a block).

The 'excessivity' solves itself. Through natural incentive of rational self-interest.
You keep talking about miners mining this more quickly validating block... there is no code currently that can try to validate two different blocks concurrently and pick the one that validates faster. The first one that comes in will be under validation while any other blocks that come in wait before they can be validated, so unless someone has a rewrite that does what you claim, the problem still exists. First block that hits will always win.

No disrespect intended. But should excessively-long-to-validate blocks ever become significant, mining using an implementation that does not perform parallel validation is a guaranteed route to bankruptcy.

"no code" - you sound pretty sure of yourself there. It may even be the case ... right up until the point in time that it is not.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
-ck
Legendary
*
Offline Offline

Activity: 3416
Merit: 1359


Ruu \o/


View Profile WWW
March 10, 2017, 01:03:17 AM
 #58

Of course, if a persistent repeated sequence of such blocks were to be somehow mined back-to-back, that might slow transaction processing to a crawl*.

That is, if no other miner bothered to mine a competing block. Which, of course, is what a rational miner would do in such a situation. For then he would reap the rewards of a more-quickly validating block. (That would be the coinbase reward for solving a block).

The 'excessivity' solves itself. Through natural incentive of rational self-interest.
You keep talking about miners mining this more quickly validating block... there is no code currently that can try to validate two different blocks concurrently and pick the one that validates faster. The first one that comes in will be under validation while any other blocks that come in wait before they can be validated, so unless someone has a rewrite that does what you claim, the problem still exists. First block that hits will always win.

No disrespect intended. But should excessively-long-to-validate blocks ever become significant, mining using an implementation that does not perform parallel validation is a guaranteed route to bankruptcy.

"no code" - you sound pretty sure of yourself there. It may even be the case ... right up until the point in time that it is not.
Right, there is no *public* code that I'm aware of, and I do hack on bitcoind for my own purposes, especially the mining components, so I'm quite familiar with the code. As for "up until the point in time that it is not", well, that's the direction *someone* should take with their code if they don't wish to pursue other fixes for sigop scaling issues as a matter of priority - if they wish to address the main reason core is against an instant block size increase. Also note that header-first mining, which most Chinese pools do (AKA SPV/spy mining), and which is proposed for BU, has no idea what is in a block and can never choose the one with fewer sigops.
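
To illustrate the header-first point: a spy-mining pool starts work from the 80-byte header alone, so there is nothing in hand to compare sigop counts with. A caricature (hypothetical names, not any pool's code):

Code:
def spy_mine_template(new_header_hash):
    # build an empty work template on top of a block we have NOT validated;
    # no transactions, sizes or sigop counts are available at this point,
    # so "pick the block with fewer sigops" is impossible by construction
    return {"prev_block": new_header_hash, "transactions": [], "coinbase_only": True}

print(spy_mine_template("00" * 32))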

Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel
2% Fee Solo mining at solo.ckpool.org
-ck
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 10, 2017, 01:11:41 AM
 #59

i'm starting to see what game jbreher is playing.

now it's public that segwit can't achieve the sigop fix, so he is now full-on downplaying how bad sigops actually are...
simply to downplay segwit's promises by subtly saying 'yea segwit don't fix it, but it don't matter because there's never been a sigop problem'

rather than admit segwit fails to meet a promise, it's twisted into 'it don't matter that it doesn't fix it'.

much like luke JR downplaying how much of a bitcoin contributor he is at the consensus agreement, by backtracking and saying he signed as a human not a bitcoin contributor.. much like changing his hat and pretending to be just a janitor to get out of the promise to offer a dynamic blocksize with core.

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
jbreher
Legendary
*
Offline Offline

Activity: 2912
Merit: 1515


lose: unfind ... loose: untight


View Profile
March 10, 2017, 01:37:55 AM
 #60

Of course, if a persistent repeated sequence of such blocks were to be somehow mined back-to-back, that might slow transaction processing to a crawl*.

That is, if no other miner bothered to mine a competing block. Which, of course, is what a rational miner would do in such a situation. For then he would reap the rewards of a more-quickly validating block. (That would be the coinbase reward for solving a block).

The 'excessivity' solves itself. Through natural incentive of rational self-interest.
You keep talking about miners mining this more quickly validating block... there is no code currently that can try to validate two different blocks concurrently and pick the one that validates faster. The first one that comes in will be under validation while any other blocks that come in wait before they can be validated, so unless someone has a rewrite that does what you claim, the problem still exists. First block that hits will always win.

No disrespect intended. But should excessively-long-to-validate blocks ever become significant, mining using an implementation that does not perform parallel validation is a guaranteed route to bankruptcy.

"no code" - you sound pretty sure of yourself there. It may even be the case ... right up until the point in time that it is not.
Right, there is no *public* code that I'm aware of, and I do hack on bitcoind for my own purposes, especially the mining components, so I'm quite familiar with the code. As for "up until the point in time that it is not", well, that's the direction *someone* should take with their code if they don't wish to pursue other fixes for sigop scaling issues as a matter of priority - if they wish to address the main reason core is against an instant block size increase. Also note that header-first mining, which most Chinese pools do (AKA SPV/spy mining), and which is proposed for BU, has no idea what is in a block and can never choose the one with fewer sigops.

https://bitco.in/forum/threads/buip033-passed-parallel-validation.1545/

For those unwilling to click through:

Quote
BUIP033: Parallel Validation
Proposer: Peter Tschipper
Submitted on: 10/22/2016

Summary:

Essentially Parallel Validation is a simple concept. Rather than validating each block within the main processing thread, we instead create a separate thread to do the block validation. If more than one block arrives to be processed then we create yet another thread. There are currently up to 4 parallel block processing threads available making a big block DDOS attack impossible. Furthermore, if any attacker were somehow able to jam all 4 processing threads and another block arrived, then the processing for the largest block would be interrupted allowing the smaller block to proceed, unless the larger block or blocks have most proof of work. So only the most proof of work and smallest blocks will be allowed to finish in such a case.

If there are multiple blocks processing at the same time, when one of the blocks wins the race to complete, then the other threads of processing are interrupted and the winner will be able to update the UTXO and advance the chain tip. Although the other blocks that were interrupted will still be stored on disk in the event of a re-org.

...
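
A toy version of the mechanism, to make it concrete (sleeps stand in for script checks; this is a sketch of the concept, not BU's actual implementation):

Code:
import threading, time

winner, winner_lock, cancel = [], threading.Lock(), threading.Event()

def validate(name, chunks):
    for _ in range(chunks):
        if cancel.is_set():      # a rival block finished first: stop work
            return
        time.sleep(0.001)        # one chunk of (simulated) script checking
    with winner_lock:
        if not winner:
            winner.append(name)  # first complete validation wins...
            cancel.set()         # ...and interrupts the slower candidates

blocks = [("normal_block", 50), ("sigop_bloated_block", 5000)]
threads = [threading.Thread(target=validate, args=b) for b in blocks]
for t in threads: t.start()
for t in threads: t.join()
print("chain tip advances with:", winner[0])   # normal_block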

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
jbreher
Legendary
*
Offline Offline

Activity: 2912
Merit: 1515


lose: unfind ... loose: untight


View Profile
March 10, 2017, 01:40:11 AM
 #61

i'm starting to see what game jbreher is playing.
...

Now you just look silly. I'll leave it at that.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
-ck
Legendary
*
Offline Offline

Activity: 3416
Merit: 1359


Ruu \o/


View Profile WWW
March 10, 2017, 02:04:32 AM
 #62

https://bitco.in/forum/threads/buip033-passed-parallel-validation.1545/

For those unwilling to click through:

Quote
BUIP033: Parallel Validation
Proposer: Peter Tschipper
Submitted on: 10/22/2016

Summary:

Essentially Parallel Validation is a simple concept. Rather than validating each block within the main processing thread, we instead create a separate thread to do the block validation. If more than one block arrives to be processed then we create yet another thread. There are currently up to 4 parallel block processing threads available making a big block DDOS attack impossible. Furthermore, if any attacker were somehow able to jam all 4 processing threads and another block arrived, then the processing for the largest block would be interrupted allowing the smaller block to proceed, unless the larger block or blocks have most proof of work. So only the most proof of work and smallest blocks will be allowed to finish in such a case.

If there are multiple blocks processing at the same time, when one of the blocks wins the race to complete, then the other threads of processing are interrupted and the winner will be able to update the UTXO and advance the chain tip. Although the other blocks that were interrupted will still be stored on disk in the event of a re-org.

...

Thanks, I wasn't aware of that. Probably something worth offering in conjunction with BIP102 then.

Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel
2% Fee Solo mining at solo.ckpool.org
-ck
nillohit
Full Member
***
Offline Offline

Activity: 154
Merit: 100

***crypto trader***


View Profile
March 10, 2017, 10:57:16 AM
 #63

I support SegWit  Grin

Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 10, 2017, 11:45:35 AM
 #64

I do have to add that, while I think that it would still be extremely hard to gather 90-95% consensus on both ideas, I think both would reach far higher and easier support than either Segwit or BU.
I don't understand that statement. Are you talking about DooMAD's idea (modified BIP100+BIP106) or the compromise proposed by "ecafyelims", or both?
Both.

I ask because I think DooMAD's "10%-blocksize-change-voting proposal" sounds interesting and if there is support by staff/respected community members/devs then it would be worth discussing it in a separate thread to elaborate a "final BIP".
The idea is worth discussing on its own, regardless of whether there is support from others. Do note that "support by staff" (if you're referring to Bitcointalk staff) is useless. Excluding achow101 and potentially dabs, the rest have very limited or just standard knowledge. Did you take a look at the recent luke-jr HF proposal? Achow101 modified it by removing the initial size reduction. Read: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013544.html

if blocks grow to say 8mb we just keep tx sigops BELOW 16,000 (we don't increase tx sigop limits when block limits rise).. thus no problem.
That's not how this works.

https://bitco.in/forum/threads/buip033-passed-parallel-validation.1545/

For those unwilling to click through:

Quote
BUIP033: Parallel Validation
Proposer: Peter Tschipper
Submitted on: 10/22/2016

Summary:

Essentially Parallel Validation is a simple concept. Rather than validating each block within the main processing thread, we instead create a separate thread to do the block validation. If more than one block arrives to be processed then we create yet another thread. There are currently up to 4 parallel block processing threads available making a big block DDOS attack impossible. Furthermore, if any attacker were somehow able to jam all 4 processing threads and another block arrived, then the processing for the largest block would be interrupted allowing the smaller block to proceed, unless the larger block or blocks have most proof of work. So only the most proof of work and smallest blocks will be allowed to finish in such a case.

If there are multiple blocks processing at the same time, when one of the blocks wins the race to complete, then the other threads of processing are interrupted and the winner will be able to update the UTXO and advance the chain tip. Although the other blocks that were interrupted will still be stored on disk in the event of a re-org.
Which effectively.. solves nothing.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
DooMAD
Legendary
*
Offline Offline

Activity: 2898
Merit: 1966


Leave no FUD unchallenged


View Profile WWW
March 10, 2017, 03:11:46 PM
 #65

I ask because I think DooMAD's "10%-blocksize-change-voting proposal" sounds interesting and if there is support by staff/respected community members/devs then it would be worth discussing it in a separate thread to elaborate a "final BIP".
The idea is worth discussing on its own, regardless of whether there is support from others. Do note that "support by staff" (if you're referring to Bitcointalk staff) is useless. Excluding achow101 and potentially dabs, the rest have very limited or just standard knowledge. Did you take a look at the recent luke-jr HF proposal? Achow101 modified it by removing the initial size reduction. Read: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013544.html

I could get behind Achow101's proposal (the link in that linuxfoundation text ended with an extraneous "." which breaks the link) if that one proves less contentious. I'd also consider tweaking mine to a lower percentage if that helped, or possibly even a flat 0.038MB if we wanted an absolute guarantee that there was no conceivable way it could increase by more than 1MB in the space of a year. But recurring increases every diff period are unlikely if the total fees generated have to increase every time. We'd reach an equilibrium between fee pressure easing very slightly when it does increase and then slowly rising again as blocks start to fill once more at the new, higher limit.
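
The 0.038MB figure does check out: a 2016-block difficulty period comes around roughly 26 times a year, so a flat step of that size can never add a full megabyte in twelve months. Quick arithmetic:

Code:
blocks_per_day = 24 * 6                            # one block per ~10 minutes
periods_per_year = 365.25 * blocks_per_day / 2016
print(round(periods_per_year, 1))                  # ~26.1 difficulty periods/year
print(round(periods_per_year * 0.038, 3), "MB worst-case growth per year")   # ~0.991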

Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 10, 2017, 03:44:17 PM
 #66

I support SegWit  Grin
I forgot to mention in my previous post that this is a healthy stance to have, as the majority of the technology-oriented participants of the ecosystem are fully backing Segwit.

I could get behind Achow101's proposal (the link in that linuxfoundation text ended with an extraneous "." which breaks the link) if that one proves less contentious.
I think it does, as it doesn't initially reduce the block size. This is what made luke-jr's proposal extremely contentious and effectively useless.

I'd also consider tweaking mine to a lower percentage if that helped, or possibly even a flat 0.038MB if we wanted an absolute guarantee that there was no conceivable way it could increase by more than 1MB in the space of a year. 
I don't like fixed increases in particular either. Percentage-based movements in both directions would be nice, but the primary problem with those is preventing the system from being gamed. Even with 10%, eventually this 10% is going to be a lot. Who's to say that at a later date, such movements would be technologically acceptable?

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
DooMAD
Legendary
*
Offline Offline

Activity: 2898
Merit: 1966


Leave no FUD unchallenged


View Profile WWW
March 10, 2017, 04:07:54 PM
 #67

I'd also consider tweaking mine to a lower percentage if that helped, or possibly even a flat 0.038MB if we wanted an absolute guarantee that there was no conceivable way it could increase by more than 1MB in the space of a year. 
I don't like fixed increases in particular either. Percentage-based movements in both directions would be nice, but the primary problem with those is preventing the system from being gamed. Even with 10%, eventually this 10% is going to be a lot. Who's to say that at a later date, such movements would be technologically acceptable?

The thing to bear in mind is we'll never make any decision if we're too afraid to make a change because there's a possibility that it might need changing at a later date.  Plus, the good news is, it would only require a soft fork to restrict it later.  But yes, movements in both directions, increases and decreases alike would be ideal.  This also helps as a disincentive to game the system with artificial transactions because your change would be undone next diff period if demand isn't genuine.

Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 10, 2017, 05:03:37 PM
 #68

I'd also consider tweaking mine to a lower percentage if that helped, or possibly even a flat 0.038MB if we wanted an absolute guarantee that there was no conceivable way it could increase by more than 1MB in the space of a year. 
I don't like fixed increases in particular either. Percentage-based movements in both directions would be nice, but the primary problem with those is preventing the system from being gamed. Even with 10%, eventually this 10% is going to be a lot. Who's to say that at a later date, such movements would be technologically acceptable?
The thing to bear in mind is we'll never make any decision if we're too afraid to make a change because there's a possibility that it might need changing at a later date.  Plus, the good news is, it would only require a soft fork to restrict it later.  But yes, movements in both directions, increases and decreases alike would be ideal.  This also helps as a disincentive to game the system with artificial transactions because your change would be undone next diff period if demand isn't genuine.
You could argue that it may already be quite late/near impossible to make such 'drastic' changes. I've been giving this some thought, but I'm not entirely sure. I'd like to see some combination of the following:
1) % changes either up or down.
2) Adjustments that either align with difficulty adjustments (not sure if this makes things more complicated or riskier, hence the latter) or monthly adjustments.
3) Fixed maximum cap. Since we can't predict what the state of the network and underlying technology/hardware will be far in the future, it is best to create a top-maximum cap a few years in the future. Yes, I know that this requires more changes later, but it is better than nothing or 'risking'/hoping miners are honest, et al.
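
Something like the following shape, as a sketch (the 10% step and the 8MB cap are placeholders, not agreed numbers):

Code:
def next_block_limit(current, blocks_full, fees_rising,
                     step=0.10, hard_cap=8_000_000):
    # 1) percentage moves in both directions,
    # 2) applied once per adjustment period,
    # 3) always clamped by a fixed future cap
    if blocks_full and fees_rising:
        proposed = current * (1 + step)
    else:
        proposed = current * (1 - step)
    return min(proposed, hard_cap)

limit = 1_000_000
for _ in range(30):                      # ~30 periods of sustained demand
    limit = next_block_limit(limit, blocks_full=True, fees_rising=True)
print(round(limit))                      # pinned at the 8,000,000 cap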

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 10, 2017, 07:40:37 PM
Last edit: March 10, 2017, 07:51:04 PM by franky1
 #69

You could argue that it may already be quite late/near impossible to make such 'drastic' changes. I've been giving this some thought, but I'm not entirely sure. I'd like to see some combination of the following:
1) % changes either up or down.
2) Adjustments that either align with difficulty adjustments (not sure if this makes things more complicated or riskier, hence the latter) or monthly adjustments.
3) Fixed maximum cap. Since we can't predict what the state of the network and underlying technology/hardware will be far in the future, it is best to create a top-maximum cap a few years in the future. Yes, I know that this requires more changes later, but it is better than nothing or 'risking'/hoping miners are honest, et al.

imagine a case where there were 2 limits (4 overall: 2 for nodes, 2 for pools)
hard technical limit that everyone agrees on, and below that a preference limit (adjustable to the demand of dynamics).

now imagine
we call the hard technical limit (like the old consensus.h) the one that only moves when the NETWORK as a whole has done speed tests to say what is technically possible and come to a consensus.
EG 8mb has been seen as acceptable today by all speed tests.
the entire network agrees to stay below this, pools and nodes
as a safety measure it's split up as 4mb for the next 2 years, then 8mb 2 years after that..

thus allowing for up to 2-4 years to tweak and make things leaner and more efficient and allow time for real world tech to enhance
(fibre optic internet adoption and 5G mobile internet) before stepping the consensus.h forward again



then the preferential limit (a further safety measure) that is adjustable and dynamic (policy.h) and keeps pools and nodes in line in a more fluid, temporary, adjustable agreement. to stop things moving too fast, but fluid if demand occurs

now then, nodes can flag the policy.h whereby if the majority of nodes' preferences are at 2mb, pools' consensus.h only goes to 1.999
however if under 5-25% of nodes are at 2mb and over 75% of nodes are above 2mb, then POOLS can decide on the orphan risk of raising their pools' consensus.h above 2mb but below the majority node policy

also note: pools' actual block making is below their (the pools') consensus.h

let's make it easier to imagine.. with a picture

black line.. consensus.h. whole network RULE. changed by speed tests and real world tech / internet growth over time (the ultimate consensus)
red line.. node policy.h. node dynamic preference agreement. changed by dynamics or personal preference
purple line.. pools' consensus.h. below the network RULE, but affected by mempool demand vs nodes' overall preference policy.h vs (orphan) risk
orange line.. pools' policy.h, below pools' consensus.h


so imagine
2010
32mb too much, let's go for 1mb
2015
pools are moving their limit up from 0.75mb to 0.999mb
mid 2017
everyone agrees to 2 years of 4mb network capability (then 2 years of 8mb network capability)
everyone agrees to a 2mb preference
pools agree their max capability will be below everyone's network capability but step up due to demand and node preference MAJORITY
pools' preference (actual blocks built) sits below the other limits but can affect the node minority to shift (EB)
mid 2019
everyone agrees to 2 years of 8mb network capability then 2 years of 16mb network capability
some move preference to 4mb, some move under 3mb, some don't move
late 2019
MINORITY of nodes have their preference shifted by dynamics of (EB)
2020
MINORITY nodes manually change their preference to not be controlled by dynamics of (EB)
late 2020
MINORITY of nodes have their preference shifted by dynamics of (EB)
2021
MINORITY nodes manually change their preference to not be controlled by dynamics of (EB)
mid 2021
a decision is made whereby node preference and pool preference are safe to control blocks at X% scaling per difficulty adjustment period
pools' preference (actual blocks built) sits below the other limits but can shift the MINORITY nodes' preference via (EB) should they lag behind

p.s
it's just a brainfart. no point nitpicking the numbers or dates. just read the concept. i even made a picture to keep people's attention spans entertained.

and remember, all of these 'dynamic' fluid agreements are extra safety limits BELOW the black network consensus limit
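
Reduced to code, the layering reads something like this sketch (names echo the post's consensus.h / policy.h labels; all numbers are the post's examples):

Code:
from statistics import median

HARD_LIMIT = 4_000_000   # network-wide rule, moved only by periodic speed tests

def pool_block_limit(node_policy_prefs, margin=1_000):
    # pools build just under the majority node preference,
    # and never at or above the network hard limit
    majority_pref = median(node_policy_prefs)
    return min(majority_pref, HARD_LIMIT) - margin

prefs = [2_000_000] * 60 + [3_000_000] * 40   # 60% of nodes prefer 2mb
print(pool_block_limit(prefs))                # 1999000: just under the 2mb majority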

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
d5000
Legendary
*
Offline Offline

Activity: 3010
Merit: 2925


Decentralization Maximalist


View Profile
March 10, 2017, 08:35:55 PM
 #70

I like that we're moving forward in the discussion, it seems. The original compromise that was the reason for me to start the thread now looks a bit dated.

I would support Lauda's maximum cap idea, as it's true that there could be circumstances where such a flexible system could be gamed.

The challenge is now to find a number for this cap. I had done some very rough calculations that a 1 TB/year blockchain (that would be equivalent to approximately 20 MB blocks) would enable 160 million people to do about 1-3 transactions (depending on the TX size) per month. That would be just enough for this user base if we assume that Lightning Network and similar systems can manage smaller payments. 1 TB/year seems pretty high, but I think it's manageable in the near future (~5 years from now).

Obviously if we want the 7 billion people on earth to be able to use Bitcoin on-chain the limit would be much higher, but I think even the most extreme BU advocates don't see that as a goal.
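
Rough arithmetic behind those numbers (the 250-byte average TX size is an assumption; the calculation above left it open):

Code:
blocks_per_year = 365.25 * 144                  # ~52,596 blocks
tb_per_year = blocks_per_year * 20e6 / 1e12     # 20 MB blocks
tx_per_block = 20e6 / 250                       # assumed 250-byte average tx
tx_per_month = tx_per_block * 144 * 30.44
print("%.2f TB/year" % tb_per_year)                                   # ~1.05
print("%.1f tx/user/month for 160M users" % (tx_per_month / 160e6))   # ~2.2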

AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 10, 2017, 08:52:28 PM
 #71

My thoughts are:

Was the 1 MB cap introduced as an anti-spam measure when everybody used the same satoshi node, and did that version simply stuff all mempool transactions into the block in one go?

Big mining farms are probably not using reference nodes, since they probably wouldn't be able to pick transactions that have been prioritised using a transaction accelerator.

Increasing the block size cap in the simplest manner would avoid BU technical debt, as the emergent consensus mechanism probably wouldn't work very well if people do not configure their nodes (it would hit a 16MB cap in a more complicated manner.)

Miners have to weigh up the benefits of the higher processing costs required to build a bigger block versus the orphan risk associated with the delay caused by it. In other words, a more natural fee market develops.

So it won't be massive blocks by midnight.

Any comments? (probably a silly question  Wink )
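
To make that weigh-up concrete, a sketch (the block reward is the 2017 figure; the per-tx delay and fees are illustrative):

Code:
import math

BLOCK_REWARD_BTC = 12.5

def marginal_orphan_cost(extra_delay_s, interval_s=600.0):
    # expected reward lost through the extra orphan risk that a tx's
    # added propagation/validation delay imposes on the block
    return BLOCK_REWARD_BTC * (1.0 - math.exp(-extra_delay_s / interval_s))

def include_tx(fee_btc, extra_delay_s=0.02):
    return fee_btc > marginal_orphan_cost(extra_delay_s)

print(include_tx(0.0005))   # True: fee beats the ~0.00042 BTC orphan cost
print(include_tx(0.0001))   # False: not worth the extra orphan risk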


Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 10, 2017, 09:12:25 PM
 #72

I like that we're moving forward in the discussion, it seems. The original compromise that was the reason for me to start the thread now looks a bit dated.

I would support Lauda's maximum cap idea, as it's true that there could be circumstances where such a flexible system could be gamed.

The challenge is now to find a number for this cap. I had done some very rough calculations that a 1 TB/year blockchain (that would be equivalent to approximately 20 MB blocks) would enable 160 million people to do about 1-3 transactions (depending on the TX size) per month. That would be just enough for this user base if we assume that Lightning Network and similar systems can manage smaller payments. 1 TB/year seems pretty high, but I think it's manageable in the near future (~5 years from now).

Obviously if we want the 7 billion people on earth to be able to use Bitcoin on-chain the limit would be much higher, but I think even the most extreme BU advocates don't see that as a goal.

mhm
don't think 7 billion by midnight.

think rationally. like 1 billion over decades.. then your fears start to subside and you start to see natural progression is possible

bitcoin will never be a one-world single currency. it will probably be in the top 10 'nations' list, with maybe 500mill people. and it won't be overnight. so relax about the "X by midnight" scare stories told on reddit.

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 10, 2017, 10:39:55 PM
 #73

imagine a case where there were 2 limits (4 overall: 2 for nodes, 2 for pools)
hard technical limit that everyone agrees on, and below that a preference limit (adjustable to the demand of dynamics).
Yes, that's exactly what my 'proposal/wish' is supposed to have. A dynamic lower bound and a fixed upper bound. The question is, how do we determine an appropriate upper bound and for what time period? Quite a nice concept IMHO. Do you agree?

i even made a picture to keep people's attention spans entertained
What software did you do this in? (out of curiosity)

The challenge is now to find a number for this cap. I had done some very rough calculations that a 1 TB/year blockchain (that would be equivalent to approximately 20 MB blocks) would enable 160 million people to do about 1-3 transactions (depending on the TX size) per month. That would be just enough for this user base if we assume that Lightning Network and similar systems can manage smaller payments. 1 TB/year seems pretty high, but I think it's manageable in the near future (~5 years from now).
Problems:
1) 20 MB is too big right now.
2) 1 TB is definitely too big. Just imagine the IBD after 2 years.
3) You're thinking too big. Think smaller. We need some room to handle the current congestion, we do not need room for 160 million users yet.

Increasing the block size cap in the simplest manner would avoid BU technical debt, as the emergent consensus mechanism probably wouldn't work very well if people do not configure their nodes (it would hit a 16MB cap in a more complicated manner.)
Preference level for me:
Segwit + dynamic block size proposal (as discussed so far) > Segwit alone > block size increase HF alone > BTU emergent consensus. The latter is risky and definitely not adequately tested.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 10, 2017, 10:57:33 PM
 #74

Preference level for me:
Segwit + dynamic block size proposal (as discussed so far) > Segwit alone > block size increase HF alone > BTU emergent consensus. The latter is risky and definitely not adequately tested.

Preference level for me would be (current moment of thought - I reserve the right to change my mind):
Segwit + dynamic block size HF > block size HF > BTU > Segwit SF. The latter introduces a two-tiered network system and a lot of technical debt.

Although a quick and simple static block size increase is needed ASAP to allow time to get the development of the preferred option right.

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
jbreher
Legendary
*
Offline Offline

Activity: 2912
Merit: 1515


lose: unfind ... loose: untight


View Profile
March 11, 2017, 12:31:40 AM
Last edit: March 11, 2017, 12:48:10 AM by jbreher
 #75

https://bitco.in/forum/threads/buip033-passed-parallel-validation.1545/

For those unwilling to click through:

Quote
BUIP033: Parallel Validation
Proposer: Peter Tschipper
Submitted on: 10/22/2016

Summary:

Essentially Parallel Validation is a simple concept. Rather than validating each block within the main processing thread, we instead create a separate thread to do the block validation. If more than one block arrives to be processed then we create yet another thread. There are currently up to 4 parallel block processing threads available making a big block DDOS attack impossible. Furthermore, if any attacker were somehow able to jam all 4 processing threads and another block arrived, then the processing for the largest block would be interrupted allowing the smaller block to proceed, unless the larger block or blocks have most proof of work. So only the most proof of work and smallest blocks will be allowed to finish in such a case.

If there are multiple blocks processing at the same time, when one of the blocks wins the race to complete, then the other threads of processing are interrupted and the winner will be able to update the UTXO and advance the chain tip. Although the other blocks that were interrupted will still be stored on disk in the event of a re-org.
Which effectively.. solves nothing.

Exactly. There is no problem which requires solving. This merely eliminates the DoS potential that quadratic hash time exploits might incur, if there were not this obvious workaround already inherent in the protocol.

Lesser implementations that have no embedded nullification of this exploit may wish to take note.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 12:43:59 AM
 #76


i even made a picture to keep people's attention spans entertained
What software did you do this in? (out of curiosity)


i just quickly opened up microsoft excel and added some 'insert shape' and lines..
i use many different packages depending on what i need. some graphical, some just whatever office doc i happen to already have open

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 12:47:02 AM
 #77

Exactly. There is no problem which requires solving. This merely eliminates the DoS potential that quadratic hash time exploits might incur, if there were not this obvious workaround already inherent in the protocol.

lol

blockstreamer: segwit solves quadratics, it's a must, it's needed. quadratics is a big deal and segwit promises to solve it
community: malicious users will stick to native keys, thus still quadratic spamming even with segwit active.. meaning segwit's promise=broke
blockstreamer: quadratics has never been a problem, relax, it's no big deal

i now await the usual rebuttal rhetoric
"blockstream never made any contractual commitment nor guarantee to fix sigop spamming" - as they backtrack on earlier promises and sales pitches
or
personal attack (edit: there we have it, p.s. personal attacks aimed at me sound like whistles in the wind)

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
jbreher
Legendary
*
Offline Offline

Activity: 2912
Merit: 1515


lose: unfind ... loose: untight


View Profile
March 11, 2017, 12:49:16 AM
 #78

Exactly. There is no problem which requires solving. This merely eliminates the DoS potential that quadratic hash time exploits might incur, if there were not this obvious workaround already inherent in the protocol.

lol

blockstreamer: segwit solves quadratics, it's a must, it's needed. quadratics is a big deal and segwit promises to solve it
community: malicious users will stick to native keys, segwit's promise=broke
blockstreamer: quadratics has never been a problem, relax, it's no big deal

You're looking ridiculous again, franky1. Y'all might wanna reel you-self back in.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 11, 2017, 12:15:32 PM
 #79

Having a little thought about this concept of 'emergent consensus'. Is not the fact that different versions of nodes, or different versions of different node implementations, exist on the network today a form of 'emergent consensus'?

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 12:33:39 PM
Last edit: March 11, 2017, 12:48:28 PM by franky1
 #80

Having a little thought about this concept of 'emergent consensus'. Is not the fact that different versions of nodes, or different versions of different node implementations, exist on the network today a form of 'emergent consensus'?

the answer to your question is..

basically that BU and core already have the variables..

nodes: consensus.h policy.h
pools: consensus.h policy.h

and that all nodes have 2 limits, although not utilised to the best of their ability.. meaning at the non-mining level core does not care about policy.h

and the punchline i was going to reveal to Lauda about my example of dynamics:
BU uses
consensus.h (...) as the upperbound limit (32mb (2009), then 1mb for years, and in the future going up as the hard limits, EG 16mb)
policy.h (...) as the more fluid value BELOW consensus.h that, if the node is in the minority, can be pushed by EB or by the user manually without needing to wait for events. which is signalled in their useragent, eg 2mb and dynamically going up

core, however, requires tweaking code and recompiling to change both each time
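
The BU side of that, in caricature (a toy version of the excessive-block / acceptance-depth idea; the EB and AD values here are assumptions, not anyone's shipped defaults):

Code:
def accept_block(block_size, eb=2_000_000, blocks_built_on_top=0, ad=4):
    # a block above this node's excessive-block size (EB) is ignored at
    # first, but accepted once AD blocks have been built on top of it --
    # the "pushed by EB" effect on lagging minority nodes described above
    return block_size <= eb or blocks_built_on_top >= ad

print(accept_block(2_500_000, blocks_built_on_top=0))  # False: over EB, ignored
print(accept_block(2_500_000, blocks_built_on_top=4))  # True: chain moved on, follow it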

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
Searing
Copper Member
Legendary
*
Offline Offline

Activity: 2842
Merit: 1429


Clueless!


View Profile
March 11, 2017, 12:38:20 PM
 #81

I have read this compromise proposal from "ecafyelims" at Reddit and want to know if there is support for it here in this forum.

Compromise: Let's merge BIP 102 (2MB HF) and BIP 141 (Segwit SF)

Quote from: Reddit user ecafyelims
Let's merge BIP 102 (2MB HF) and BIP 141 (Segwit SF) into a single HF (with overwhelming majority consensus).

Since Segwit changes how the blocksize is calculated to use weights, our goal with the merger would be 2MB of transactional data.

Segwit weighting system measures the transaction weight to be 3x(non-witness base data) + (base data with witness data). This weight is then limited to 4M, favoring witness data.

Transactions aren't all of base or witness. So, in practice, the blocksize limit is somewhere between 1MB (only base data) and 4MB (only witness data) with Segwit.

With this proposed merger, we will increase Segwit weight limit from 4M to 8M. This would allow 2MB of base data, which is the goal of the 2MB HF.

It's a win-win solution. We get 2MB increase and we get Segwit.

I know this compromise won't meet the ideals of everyone, but that's why it's a compromise. No one wins wholly, but we're better off than where we started.

It's very similar to what was already proposed last year at the Satoshi Roundtable. What is the opinion of the Bitcointalk community?



AT this point in time it is about POWER to move the future of btc imho. The devs of any flavor.. most are mega whales.. so it is the coding/power trip now...
as reasonable as this sounds... I just don't see it happening, because with the price above 1k, a 1mb block size is just dandy as far as bitcoin core cares. NOT saying
I agree with either camp, but bitcoin core... sees BTC as a store of value.. so imho... it can sit at 1mb for years as long as the price reflects that store-of-value thinking

thus stalemate.. thus 1mb btc... so the only other option, if I'm correct (hope i'm not), is an attempted BU fork and/or BU getting 51% of the folk to push their view

all very silly.. just compromise already.. it's NOT like, if we had another unexpected btc fork like back in the day, they would not pop out a hard fix anyway

(what do I know, I at one time drank the BFL kool aid) but it just seems it is about status/power and the devs of any flavor just really, really don't like the other camp

 

Old Style Legacy Plug & Play BBS System. Get it from www.synchro.net. Updated 1/1/19. It also works with Windows 10 and allows 16 bit DOS game doors on the same Win 10 Machine! Five Minute Install! Look it over uninstalls just as fast! Freeware! Full BBS System! It is a frigging hoot!:)
Carlton Banks
Legendary
*
Offline Offline

Activity: 3220
Merit: 2599



View Profile
March 11, 2017, 01:03:19 PM
 #82

just compromise already

Segwit IS the compromise. I've refrained from saying this until recently, but I think 4MB is too big. I'd be much happier with a Segwit proposal that kept the size at 1MB, but in the hope that others would recognise that 4MB is meeting in the middle, I helped to promote Segwit, hoping they would accept it. The fact that they have rejected Segwit only demonstrates that bigger blocks have nothing to do with it; it's about having power over the source code.

Vires in numeris
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 01:26:03 PM
Last edit: March 11, 2017, 01:36:54 PM by franky1
 #83

segwit is not the compromise

activating segwit solves nothing.
moving people to segwit keys after activation is then only a 'percentage of a solution'

never 100% solving the bugs, never 100% fixing, never 100% boosting. because even after activation segwit will still be contending against native key users

also, the 4mb segwit weight is not utilised.
AT VERY BEST the expectation is 2.1mb.. the other 1.9mb would be left empty.
segwit cannot 'resegwit' again to utilise the extra 1.9mb of weight.

the extra weight would (from reading core/blockstream plans) be filled with bloat data, such as confidential commitments appended onto the end of a tx (not extra tx capacity), bloating a tx that would, without confidential commitments, have been a lot leaner



segwit also turns the network into a 2-tier network of upstream 'filters' and downstream nodes, rather than an equal network of nodes that all agree on the same thing.

for the reddit crew, in simple terms: segwit fullnode = full data.. downstream = 'tl;dr' nodes

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
7788bitcoin
Legendary
*
Offline Offline

Activity: 1918
Merit: 1016


Crypto Casino & Sportsbook


View Profile
March 11, 2017, 02:01:17 PM
 #84

They are just two very different approaches... I thought the activation of Segwit would then be followed by an 'easier/better' future plan for a blocksize increase?

Perhaps we can compromise and buy time by allowing bigger blocks (eg. 2MB) to activate, and then decide if Segwit should be implemented?


inBitweTrust
Hero Member
*****
Offline Offline

Activity: 658
Merit: 500



View Profile
March 11, 2017, 02:06:49 PM
 #85

This second compromise is nothing but a veiled attempt at setting a precedent whereby we force a HF without consensus on the community, giving the decision to either miners or developers instead of the users themselves. As we can see from this poll, consensus over a HF is nowhere near being found, and thus the HF proposal on offer isn't anywhere near good enough to be considered. I don't want to even consider accepting politically motivated hard forks; I just want to focus on what's right for bitcoin.

Carlton Banks
Legendary
*
Offline Offline

Activity: 3220
Merit: 2599



View Profile
March 11, 2017, 02:07:12 PM
 #86

They are just two very different approaches... I thought the activation of Segwit would then be followed by an 'easier/better' future plan for a blocksize increase?

Perhaps we can compromise and buy time by allowing bigger blocks (eg. 2MB) to activate, and then decide if Segwit should be implemented?


Segwit IS the compromise, and it's more of a compromise towards big blocks than what you're suggesting. Roll Eyes


Which is bigger, 2 MB blocks or 4 MB blocks   Roll Eyes

Vires in numeris
naughty1
Sr. Member
****
Offline Offline

Activity: 406
Merit: 250



View Profile
March 11, 2017, 02:14:09 PM
 #87

As many have argued before, I think this will not work; on the contrary, it is damaging. I personally think segwit plus a different mechanism, such as BIP 106, would be more flexible. I have a great deal of faith in that approach: we need a healthy and predictable change, not unexpected measures. But in the actual situation I think this is very difficult; the miners will always keep to their decision, so it is hard to change.









AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 11, 2017, 02:14:22 PM
 #88

Which is bigger, 2 MB blocks or 4 MB blocks   Roll Eyes

And that 4MB is 1MB of transactional data space, and 3MB of segwit data space, the latter of which is mostly reserved for future use.

So don't mislead others into thinking that all of a sudden we will get a 4 fold increase in transactional capacity. We won't.

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 02:15:43 PM
 #89

They are just two very different approaches... I thought the activation of Segwit would then be followed by an 'easier/better' future plan for a blocksize increase?

Perhaps we can compromise and buy time by allowing bigger blocks (eg. 2MB) to activate, and then decide if Segwit should be implemented?

or, if we're having an organised hard consensus anyway (meaning old nodes, the small minority outside the activation threshold, have to drop off):
dynamic blocks (using policy.h, the lower bound, as the dynamic flagging scaler) plus segwit keys, where the witness is appended to the tail of the tx,
without needing separation (of trees (blocks)).

that way ALL nodes validate the same thing.

(i'll get to the punchline later about the then-lack of need for segwit.. but i want to see if people run scenarios in their heads first, to click their lightbulb moment into realising what segwit does or doesn't do)

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 02:20:26 PM
 #90

for clarity

Which is bigger, 2 MB blocks or 4 MB blocks   Roll Eyes

And that 4MB is
1MB of transactional data space, and 3MB of buffer space that only partially fills, dependent on the % of segwit users in the base block
(0% segwit in 1mb base = 0 of the 3mb extra used (1mb total))
(10% segwit in 1mb base = 0.1mb of the 3mb used (1.1mb total))
(100% segwit in 1mb base = 1.1mb of the 3mb used (2.1mb total))

the latter of which (at least 1.9mb) is mostly reserved for future use.

So don't mislead others into thinking that all of a sudden we will get a 4 fold increase in transactional capacity. We won't.

FTFY
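
spelled out, franky1's fill percentages follow a simple linear model: total ≈ 1mb base + ~1.1mb × (segwit share of the base block). a quick sketch of that arithmetic (the 1.1mb witness figure at full adoption is the assumption behind the oft-quoted ~2.1mb; illustrative only):

Code:
# sketch of the fill model above (assumes ~2.1 MB total at 100% typical segwit use)
BASE_MB = 1.0
WITNESS_AT_FULL_ADOPTION_MB = 1.1

def total_block_mb(segwit_share):
    # only the segwit share of base-block txs contributes witness data on top of the base
    return BASE_MB + WITNESS_AT_FULL_ADOPTION_MB * segwit_share

for share in (0.0, 0.1, 1.0):
    print(f"{share:4.0%} segwit -> {total_block_mb(share):.1f} MB total")
# 0% -> 1.0 MB, 10% -> 1.1 MB, 100% -> 2.1 MB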

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 11, 2017, 04:49:34 PM
 #91

And that 4MB is 1MB of transactional data space, and 3MB of segwit data space, the latter of which is mostly reserved for future use.

So don't mislead others into thinking that all of a sudden we will get a 4 fold increase in transactional capacity. We won't.
In theory you can get up to 14 TPS with Segwit. However, with realistic usage that is not the case (similarly with the current network having a theoretical capacity of 7 TPS). Segwit will definitely deliver >2 MB according to the latest usage patterns.
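
(For a rough sense of where figures like 7 and 14 TPS come from, here is a back-of-envelope sketch; the ~250-byte average transaction is an assumption, and real averages vary with usage patterns:)

Code:
# back-of-envelope TPS arithmetic (illustrative assumptions, not a measurement)
AVG_TX_BYTES = 250        # assumed average transaction size
BLOCK_INTERVAL_S = 600    # ~10-minute blocks

def tps(block_bytes):
    return block_bytes / AVG_TX_BYTES / BLOCK_INTERVAL_S

print(f"1 MB blocks:         ~{tps(1_000_000):.1f} TPS")  # ~6.7, the oft-quoted ~7
print(f"2 MB effective size: ~{tps(2_000_000):.1f} TPS")  # ~13.3, the oft-quoted ~14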

segwit is not the compromise
It is.

activating segwit solves nothing.
It does.

because even after activation segwit will still be contending against native key users
Nobody cares. You can't DOS the network with "native" keys post Segwit.

segwit also turns the network into a 2-tier network of upstream 'filters' and downstream nodes, rather than an equal network of nodes that all agree on the same thing.
This is only the case if the majority of nodes don't support Segwit. Ironically to your statement, the big majority is in favor of Segwit.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 05:05:19 PM
Last edit: March 11, 2017, 06:13:38 PM by franky1
 #92

And that 4MB is 1MB of transactional data space, and 3MB of segwit data space, the latter of which is mostly reserved for future use.

So don't mislead others into thinking that all of a sudden we will get a 4 fold increase in transactional capacity. We won't.
In theory you can get up to 14 TPS with Segwit. However, with realistic usage that is not the case (similarly with the current network having a theoretical capacity of 7 TPS). Segwit will definitely deliver >2 MB according to the latest usage patterns.
emphasis on >2mb
> (should be UP TO, but you're saying more than).. and only with 100% segwit key use do you get 2mb
don't downplay it as if nothing needs to be done by users to attain the 2mb..
also, factoring in native spam and users not using segwit keys, the entire baseblock won't be 100% segwit users, meaning the 2mb is not attained
EG
imagine there were 4500 users. so far they argue over a blocksize that can only fit ~2250
even if 4499 users moved to segwit,
1 user can make 2249 NATIVE transactions, meaning only 1 segwit transaction gets in. so the 'blocksize' only becomes ~1.000444mb

segwit is not the compromise
It is.

lauda: compromise meaning lost, sold out, victim: 'you left your password on your girlfriend's phone, now your funds are compromised'
community: compromise meaning an agreed reduced level

segwit is not an agreed reduced level. it's a risk of screwing many over for the fortunes of the corporate elite

activating segwit solves nothing.
It does.

go on PROVE IT!! explain it

because even after activation segwit will still be contending against native key users
Nobody cares. You can't DOS the network with "native" keys post Segwit.

you can.
native keys still work after segwit activates. otherwise 16mill coins are locked and unspendable!!

segwit also turns the network into a 2-tier network of upstream 'filters' and downstream nodes, rather than an equal network of nodes that all agree on the same thing.
This is only the case if the majority of nodes don't support Segwit. Ironically to your statement, the big majority is in favor of Segwit.
segwit activates by pool signalling only.
meaning (if all pools were equal, for simple explanation):
19 out of 20 pools activate it.
1 pool gets disregarded.
but then the node count turns into
~3000 full-validation UPSTREAM filters,
and
3000 hodgepodge downstream nodes that don't fully validate: some may have witness data, some may not; some may be pruned, some may not be.

which the upstream nodes won't sync from, but "could" filter to (if they were not banlist-biased)

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 11, 2017, 05:09:46 PM
 #93

emphasis >2mb
> (should be UP TO) but only with 100% segwit key use do you get 2mb
don't downplay it as if nothing needs to be done by users to attain the 2mb.. users need to move funds to new keys to attain it.
There is nothing wrong with that. Users are incentivized to start using Segwit and plenty of providers are either already ready or are 'in-progress'.

lauda: compromise meaning lost, sold out, victim: 'you left your password on your girlfriend's phone, now your funds are compromised'
No. That is just one of the meanings, see here: http://www.dictionary.com/browse/compromise

segwit is not an agreed reduced level. it's a risk of screwing many over for the fortunes of the corporate elite
This is bullshit and you know it.

go on PROVE IT!! explain it
Everything is properly explained on the Bitcoin Core website. Do I really need to draw it out for you?

you can.
native keys still work after segwit activates. otherwise 16mill coins are locked and unspendable!!
Wrong. The DOS attack vector is not present at 1 MB, and you can't create a 2 MB block with native keys when Segwit is activated.

~3000 upstream full validation UPSTREAM filters.
and
3000 hodgepodge of downstream nodes that dont fully validate, may have witness may not have, may be prunned may not be.
You can blame BU for their stubbornness to implement SWSF. A lot of the very outdated nodes are irrelevant IMO, they don't properly validate some newer soft forks anyways (+ potentially have security holes as they can be very outdated, e.g. <0.10.0).

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 05:13:27 PM
 #94

you can.
native keys still work after segwit activates. otherwise 16mill coins are locked and unspendable!!
Wrong. The DOS attack vector is not present at 1 MB, and you can't create a 2 MB block with native keys when Segwit is activated.

lauda please

native keys would fill the 1mb base block so that segwit can't get a chance.. thus there won't be a 2mb block..
EG
imagine there were 4500 users. so far they argue over a blocksize that can only fit ~2250
even if 4499 users moved to segwit,
1 user can make 2249 NATIVE transactions, meaning only 1 segwit transaction gets into the base. so the 'blocksize' only becomes ~1.000444mb
in short
even if 99.9% of users moved over to segwit, they are still subject to normal bloat from a malicious bloater filling the base block, which takes up the base block space and keeps segwit key users out. thus the ratio of segwit-in-base:witness is super low... thus the total blocksize remains super low, while the base block is super full of native bloat
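
spelled out as arithmetic (a sketch of the example above; the per-tx size is just 1mb divided by ~2250 average txs, and the lone segwit tx's witness is treated as roughly one more average tx, which is an assumption for illustration):

Code:
# sketch of the native-spam example above (illustrative assumptions)
BASE_MB = 1.0
TXS_PER_BLOCK = 2250                  # assumed average-size txs fitting in the 1mb base
AVG_TX_MB = BASE_MB / TXS_PER_BLOCK   # ~0.000444 MB each

native_spam_txs = 2249                # one malicious native user fills the base block
segwit_txs = 1                        # only one segwit tx squeezes in

base_mb_used = (native_spam_txs + segwit_txs) * AVG_TX_MB  # ~1.0 -> base 100% full
witness_mb = segwit_txs * AVG_TX_MB   # only segwit txs add witness data beyond the base
total_mb = base_mb_used + witness_mb
print(f"total serialised block: ~{total_mb:.6f} MB")       # ~1.000444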

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 11, 2017, 05:15:41 PM
 #95

you can.
native keys still work after segwit activates. otherwise 16mill coins are locked and unspendable!!
Wrong. The DOS attack vector is not present at 1 MB, and you can't create a 2 MB block with native keys when Segwit is activated.

lauda please

native keys would fill the 1mb base block so that segwit can't get a chance.. thus there isn't a 2mb block..
In other words: You can't DOS the network at 1 MB using native keys post Segwit. Which is my whole point. Stop with these strawman arguments.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 05:21:41 PM
 #96

In other words: You can't DOS the network at 1 MB using native keys post Segwit. Which is my whole point. Stop with these strawman arguments.

you need to really study more.
simply saying "can't, 'coz can't" or "wrong, because ad-hom"

is becoming very apparent as your rebuttal.

please study these things beyond the 2-paragraph sales pitches of empty promises.

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 11, 2017, 05:31:37 PM
 #97

In other words: You can't DOS the network at 1 MB using native keys post Segwit. Which is my whole point. Stop with these strawman arguments.
you need to really study more.
simply saying "can't, 'coz can't" or "wrong, because ad-hom"

is becoming very apparent as your rebuttal.

please study these things beyond the 2-paragraph sales pitches of empty promises.
I don't need to study anything. You have a fallacious way of arguing and reasoning. You completely changed my argument in order to refute it with your own. You created an argument that I did not make, also known as a strawman argument. You can't DoS the network with native keys with Segwit. Period. You should buy this with your employer's money:


"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
European Central Bank
Legendary
*
Offline Offline

Activity: 1288
Merit: 1087



View Profile
March 11, 2017, 05:39:00 PM
 #98

no one will compromise and something will break. either that's bitcoin itself or the will of one of the opposing sides. i kind of get the impression the unlimited fans would prefer to fatally mangle bitcoin and then blame core afterwards.

best case is that unlimited becomes the alt it always wanted to be and everyone else ignores it until it goes away.
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 05:42:54 PM
Last edit: March 11, 2017, 06:22:51 PM by franky1
 #99

In other words: You can't DOS the network at 1 MB using native keys post Segwit. Which is my whole point. Stop with these strawman arguments.
you need to really study more.
simply saying "can't, 'coz can't" or "wrong, because ad-hom"

is becoming very apparent as your rebuttal.

please study these things beyond the 2-paragraph sales pitches of empty promises.
I don't need to study anything. You have a fallacious way of arguing and reasoning. You completely changed my argument in order to refute it with your own. You created an argument that I did not make, also known as a strawman argument.

you can fill blocks after activation with native transactions; otherwise the 16mill coins (46mill UTXOs) are locked and unspendable (because they are on native keys right now).

if you are saying native keys can't be spent on activation day.. then your own funds cannot be added to a block (because your own funds are on native keys right now)


if you can admit native transactions can be added to blocks, you start to see that people with native keys will just spam the 1mb base block.
thus
reducing the room inside the 1mb baseblock, reducing how many other people's txs get in, and thus reducing the ratio of base:witness usage.. so the 2mb you harp on about is not attained.
EG
if only a couple of segwit txs get in.. it equates to something small like ~1.000450 total serialised blocksize, but where the 'block' is 100% full. meaning everyone else's txs sit in the mempool waiting.. and waiting



my point is this:
you said
Segwit will definitely deliver >2 MB according to the latest usage patterns.

you have mis-sold a "definitely deliver" by then saying > (i'm thinking you should have used <, but even that is still mis-selling)

meaning it's an EMPTY promise.
just like saying
bitcoin 2009-2016 will definitely deliver >7tx/s (the actual math was something like 7.37tx/s)

which we all know we never got to.. thus it was an empty promise

much like ISPs mis-selling internet speeds:
"sign with us and you will definitely get up to 100mb/s"

users sign up.. no one gets 100mb/s, the best some get is 60mb/s, and the majority get under 40mb/s

and you can then come back with the stupid argument "i did say > (more than), i never promised you'd actually get it" (but logically you should have said up to, or been more honest about the chances of getting it)

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
Carlton Banks
Legendary
*
Offline Offline

Activity: 3220
Merit: 2599



View Profile
March 11, 2017, 06:10:31 PM
 #100

Which is bigger, 2 MB blocks or 4 MB blocks   Roll Eyes

And that 4MB is 1MB of transactional data space, and 3MB of segwit data space, the latter of which is mostly reserved for future use.

So don't mislead others into thinking that all of a sudden we will get a 4 fold increase in transactional capacity. We won't.

When you say "Segwit data", you're talking about the data that signs transactions, to prove that the real user actually sent the money.


Are you sure it's not you misleading everyone dwarf? By pretending that signing the transactions is somehow something new, or unneeded? Smiley

Vires in numeris
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 11, 2017, 06:14:15 PM
 #101

Which is bigger, 2 MB blocks or 4 MB blocks   Roll Eyes

And that 4MB is 1MB of transactional data space, and 3MB of segwit data space, the latter of which is mostly reserved for future use.

So don't mislead others into thinking that all of a sudden we will get a 4 fold increase in transactional capacity. We won't.

When you say "Segwit data", you're talking about the data that signs transactions, to prove that the real user actually sent the money.


Are you sure it's not you misleading everyone dwarf? By pretending that signing the transactions is somehow something new, or unneeded? Smiley

Don't worry, franky1 gave a bit more detail in case my wording could be considered misleading.

Quote from: franky1
for clarity

Quote from: AngryDwarf on Today at 02:14:22 PM
Quote from: Carlton Banks on Today at 02:07:12 PM
Which is bigger, 2 MB blocks or 4 MB blocks   Roll Eyes

And that 4MB is
1MB of transactional data space, and 3MB of buffer space that only partially fills, dependent on the % of segwit users in the base block
(0% segwit in 1mb base = 0 of the 3mb extra used (1mb total))
(10% segwit in 1mb base = 0.1mb of the 3mb used (1.1mb total))
(100% segwit in 1mb base = 1.1mb of the 3mb used (2.1mb total))

the latter of which (at least 1.9mb) is mostly reserved for future use.

So don't mislead others into thinking that all of a sudden we will get a 4 fold increase in transactional capacity. We won't.

FTFY

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
Carlton Banks
Legendary
*
Offline Offline

Activity: 3220
Merit: 2599



View Profile
March 11, 2017, 06:19:41 PM
 #102

Don't worry, franky1 gave a bit more detail in case my wording could be considered misleading.

There is no "reserved for future use". Franky is a misleading statement incarnate.


The Segwit testnet mined a 4MB block, just by including alot of multi-signature transactions (which obviously have a much heavier transaction:signature ratio than regular transactions)



Do you have any arguments that don't involve subverting plainly observable facts?

Vires in numeris
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 11, 2017, 06:30:46 PM
 #103

There is no "reserved for future use". Franky is a misleading statement incarnate.

The Segwit testnet mined a 4MB block just by including a lot of multi-signature transactions (which obviously have a much higher signature-to-transaction-data ratio than regular transactions)

So is this the extreme case where a large number of inputs is used in a transaction to fill up the segwit space?

Perhaps you should share with the class what this test case is. Please expand my knowledge and dispel my misconceptions by providing a bit more information.

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 06:32:40 PM
Last edit: March 11, 2017, 07:03:07 PM by franky1
 #104

Don't worry, franky1 gave a bit more detail in case my wording could be considered as misleading.

There is no "reserved for future use". Franky is a misleading statement incarnate.

The Segwit testnet mined a 4MB block just by including a lot of multi-signature transactions (which obviously have a much higher signature-to-transaction-data ratio than regular transactions)

Do you have any arguments that don't involve subverting plainly observable facts?

much like bitcoin testnet got its 7tx/s.. but that never happened in reality on bitcoin mainnet after 8 years of trying

which is why devs are thinking about other novel things to append to transactions, such as confidential commitments, to fill up the at-least-1.9mb gap. because even the 2.1mb fill isn't going to get reached

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
d5000
Legendary
*
Offline Offline

Activity: 3010
Merit: 2925


Decentralization Maximalist


View Profile
March 11, 2017, 06:38:08 PM
 #105

The challenge is now to find a number for this cap. [...]
1) 20 MB is too big right now.
2) 1 TB is definitely too big. Just imagine the IBD after 2 years.
3) You're thinking too big. Think smaller. We need some room to handle the current congestion, we do not need room for 160 million users yet.

160 million users and a 20 MB maximum block size (1 TB/year) as a mid-term goal is based on present consumer HD storage prices, but also on the idea of capturing a significant part (at least 10%) of the market of Western Union and similar services (WU claims to have 1 billion clients). The remittance market is, for the time being, the most interesting one for BTC if it manages to continue offering fees of less than ~1 USD per (simple) transaction.

The "upper cap" of 20 MB could be the mid-term cap, to be reached ~10 years from now. We could set a lower cap for the first 2-3 years (5 MB should be enough, or 2 MB + Segwit) because of current bandwidth limitations. Or a moving cap based on speed tests like the one Franky proposes (a good idea, I think).


Carlton Banks
Legendary
*
Offline Offline

Activity: 3220
Merit: 2599



View Profile
March 11, 2017, 06:41:42 PM
 #106

There is no "reserved for future use". Franky is a misleading statement incarnate.

The Segwit testnet mined a 4MB block just by including a lot of multi-signature transactions (which obviously have a much higher signature-to-transaction-data ratio than regular transactions)

So is this the extreme case where a large number of inputs is used in a transaction to fill up the segwit space?

The opposite.

Multi-signature means a single input signed by more than one key.



How can you pretend not to understand something so simple.....

Vires in numeris
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 11, 2017, 06:45:51 PM
 #107

if you are saying native keys cant be spent on activation day.. then your funds cannot be added to a block (because your own funds are on native keys right now)
I didn't say that; stop twisting my argument.

if you can admit native transactions can be added to blocks. you start to see that people with native keys will just spam the 1mb base block.
Irrelevant. You can spam whatever you want, miners can prioritize Segwit transactions and are incentivized to do so.

you have mis-sold a "definitely deliver' by then saying > (im thinking you should have used < but even that is still mis-selling)

meaning its an EMPTY promise.
Nope. Read the above.

160 million users and 20 MB maximum block size (1 TB/year) as a mid-term goal is based on the present consumer HD market storage prices, but also on the idea to capture a significant (at least 10%) part of the market of Western Union and similar services (WU claims to have 1 billion clients). The remittance market is, for the time being, the most interesting one for BTC if it manages to continue to offer fees of less than ~1 USD per (simple) transaction.

The "upper cap" of 20 MB could be the mid-term cap, to be reached ~ 10 years from now. We could set a lower cap for the first 2-3 years (5 MB should be enough, or 2 MB + Segwit) because of current bandwith limitations. Or a moving cap based on speed tests like the one Franky proposes (good idea, I think).
You are not thinking about this straight. Let's say there is no risk from 20 MB blocks, DoS- nor orphan-wise. Storing 1 TB per year maybe won't be *that big of a problem* for existing nodes. You are forgetting about:
1) IBD.
2) Reindexing in case something goes wrong.

Imagine syncing and validating a 3 TB blockchain from scratch. Do I need to run my nodes only on top-end Xeon machines? There were some discussions about a 'drastic' future in which new nodes would never be able to catch up (I think this was at the 2015 scaling workshop).
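
(A toy estimate of why IBD is the worry; the sustained download and validation rates below are assumptions, and real syncs are burstier while new blocks keep arriving:)

Code:
# toy IBD estimate for a 3 TB chain (assumed sustained rates, illustrative only)
CHAIN_TB = 3.0
DOWNLOAD_MBIT_S = 50        # assumed sustained download bandwidth, megabits/s
VALIDATE_MB_S = 5           # assumed sustained validation throughput, megabytes/s

chain_mb = CHAIN_TB * 1_000_000
download_days = chain_mb * 8 / DOWNLOAD_MBIT_S / 86_400
validate_days = chain_mb / VALIDATE_MB_S / 86_400
print(f"download:   ~{download_days:.0f} days")    # ~6
print(f"validation: ~{validate_days:.0f} days")    # ~7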

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 11, 2017, 06:47:34 PM
 #108

There is no "reserved for future use". Franky is a misleading statement incarnate.

The Segwit testnet mined a 4MB block just by including a lot of multi-signature transactions (which obviously have a much higher signature-to-transaction-data ratio than regular transactions)

So is this the extreme case where a large number of inputs is used in a transaction to fill up the segwit space?

The opposite.

Multi-signature means a single input signed by more than one key.



How can you pretend not to understand something so simple.....

Quote
Perhaps you should share with the class what this test case is. Please expand my knowledge and dispel my misconceptions by providing a bit more information.

I've never used multi-sig, so it is a gap in my knowledge. Perhaps if pro-segwit people would stop hurling insults around and explain things better I might change my mind on what I think the best way forward is. So please explain this test case, and how it would work without segwit.

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
AliceWonderMiscreations
Full Member
***
Offline Offline

Activity: 182
Merit: 100


View Profile WWW
March 11, 2017, 06:50:49 PM
 #109

Imagine syncing and validating a 3 TB blockchain from scratch. Do I need to run my nodes only on top end Xeon machines? There was some discussions about a 'drastic' future in which new nodes would never be able to catch up (I think this was scaling workshop 2015).

No, bandwidth matters more than threads.

The "drastic" future of which you speak sounds like FUD to me.

However I would highly recommend using a Xeon anyway simply to get ECC - it's worth it. As transistors continue to get smaller, the bits flipped by cosmic rays increase.

I hereby reserve the right to sometimes be wrong
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 06:59:33 PM
 #110

The "drastic" future of which you speak sounds like FUD to me.

much like, in 1996 (in the days of 56k modems and 4gb hard drives), shouting
"don't make Call of Duty: MW an online multiplayer download in the future, 'coz 60gb downloads and 1mb/s bandwidth"..

in short, we won't have 20mb blocks tonight. so let's NOT stop dynamic blocks starting at 2mb this year purely on '20mb blocks by midnight' doomsday rhetoric..
...when the reality is years away

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
Carlton Banks
Legendary
*
Offline Offline

Activity: 3220
Merit: 2599



View Profile
March 11, 2017, 07:00:06 PM
 #111

I've never used multi-sig, so it is a gap in my knowledge. Perhaps if pro-segwit people would stop hurling insults around and explain things better I might change my mind on what I think the best way forward is. So please explain this test case, and how it would work without segwit.

I'd do it were it not for your false accusation that I hurled insults at you

The evidence that this didn't even happen is in plain black and white on this page

Vires in numeris
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 11, 2017, 07:03:32 PM
 #112

No, bandwidth matters more than threads.
I don't think downloading 1 TB over 1 year is a problem. The upload side though would be.

The "drastic" future of which you speak sounds like FUD to me.
No. It is research, and you can try to find the video yourself.

However I would highly recommend using a Xeon anyway simply to get ECC - it's worth it. As transistors continue to get smaller, the bits flipped by cosmic rays increase.
Oh yes, let's make nodes expensive to run. That is good for decentralization!

in short, we won't have 20mb blocks tonight. so let's NOT stop dynamic blocks starting at 2mb this year purely on '20mb blocks by midnight' doomsday rhetoric..
...when the reality is years away
Nobody is talking about 20 MB blocks tonight; you have reading comprehension problems.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
d5000
Legendary
*
Offline Offline

Activity: 3010
Merit: 2925


Decentralization Maximalist


View Profile
March 11, 2017, 07:10:41 PM
Last edit: March 11, 2017, 07:21:28 PM by d5000
 #113

@Lauda: Maybe I'm overly optimistic about technology development in the coming 10 years. (A possible user base of less than 160 million by 2027 would mean a pretty low "cap" for Bitcoin's price imho, as I don't believe in the "digital gold" thing, and even using LN you need some on-chain transactions. A not-small part of the population of this forum thinks that Bitcoin will replace all fiat money next year or so Wink ).

In the case of IBD I think that in that "drastic future" most users would end up downloading blockchain snapshots. That has some centralization risks, but I think they are manageable. Also, reindexing maybe won't be a thing low-end-equipment users do regularly; they would simply redownload a snapshot.

We're obviously talking about end users with consumer-level equipment. Professional users that run servers in well-connected datacenters should have no problems with 20 MB blocks, I think.

Edit: What upper limit would you consider realistic?

AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 11, 2017, 07:11:27 PM
 #114

The Segwit testnet mined a 4MB block just by including a lot of multi-signature transactions (which obviously have a much higher signature-to-transaction-data ratio than regular transactions)

So how many signatures were associated with the address of this transaction, and how many signatures did it require?

Would the separation of witness data from transaction data make the space used any less?

How important is this to the implementation of lightning networks?

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
Carlton Banks
Legendary
*
Offline Offline

Activity: 3220
Merit: 2599



View Profile
March 11, 2017, 07:17:18 PM
 #115

You can't expect good will from those to whom you demonstrate bad will, Dwarf

Vires in numeris
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 11, 2017, 07:24:58 PM
 #116

You can't expect good will from those to whom you demonstrate bad will, Dwarf

Calling me Dwarf instead of AngryDwarf might be perceived as an insult if it were to come from an Elf. Maybe it's a perception-of-tone thing. Neither did I directly say you were throwing insults; you inferred that from my statement that it came from pro-segwit people.

Internet forums are not for the thin-skinned.

If you don't want to answer the question for me, you could answer it for the benefit of other people.

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
Carlton Banks
Legendary
*
Offline Offline

Activity: 3220
Merit: 2599



View Profile
March 11, 2017, 07:32:53 PM
 #117

If your name is inherently insulting to you, you should take responsibility for choosing it

Vires in numeris
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 11, 2017, 07:43:21 PM
 #118

If you name is inherently insulting to you, you should take responsibility for choosing it

I could make a word play on your username, but I assume you would rather divert this into a slanging match than add to the technical discussion.

So like I say, you can choose to explain it for the benefit of other people, or you can choose not to.

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 11, 2017, 07:50:12 PM
 #119

@Lauda: Maybe I'm overly optimistic about technology development in the coming 10 years. (A possible user base of less than 160 million by 2027 would mean a pretty low "cap" for Bitcoin's price imho, as I don't believe in the "digital gold" thing, and even using LN you need some on-chain transactions. A not-small part of the population of this forum thinks that Bitcoin will replace all fiat money next year or so Wink ).
You shouldn't be optimistic nor rely on speculative predictions of the future when it comes to Bitcoin's security. You need to be conservative, to say the least. If you don't believe that Bitcoin is digital gold, or you don't understand where the current value stems from, then you have to re-examine everything.

In the case of IBD I think that in that "drastic future" most users would end up downloading blockchain snapshots. That has some centralization risks, but I think they are manageable. Also, reindexing maybe won't be a thing low-end-equipment users do regularly; they would simply redownload a snapshot.
You shouldn't throw in centralizing aspects like they are trivial changes. The impact of something like that, and the potential security concerns, have probably not been properly researched.

We're obviously talking about end users with consumer-level equipment. Professional users that run servers in well-connected datacenters should have no problems with 20 MB blocks, I think.
I don't understand why you want me, as a user, to spend a lot of money running my node in a datacenter. I use Bitcoin Core for everything: node, wallet, cold storage.

Edit: What upper limit would you consider realistic?
In what time frame? Next 5, 10 years?

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 08:02:17 PM
 #120

You shouldn't be optimistic nor relying on speculative predictions of the future when it comes to Bitcoin's security.

then stop throwing around speculations of things like 20mb blocks... or your gang's other fake doomsdays of "gigabytes by midnight" and "visa by tomorrow"

as that's speculative prediction of the future.

stick to rational, REAL abilities now:

8mb upper limit and 2mb lower (dynamic) limit.

that gives a while to reassess the 8mb over time,
rather than waiting and halting growth for years due to fears of 20mb blocks.


I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 11, 2017, 08:13:11 PM
 #121

You shouldn't be optimistic nor relying on speculative predictions of the future when it comes to Bitcoin's security.
then stop throwing around speculations of things like 20mb blocks... or your gang's other fake doomsdays of "gigabytes by midnight" and "visa by tomorrow"
You are engaging in a discussion between another user and myself; we are free to discuss whatever we want and however we want. If you can't comprehend our discussion properly, then don't comment on it.

8mb upper limit and 2mb lower (dynamic) limit.
Where is the academic research backing up that 8 MB is safe as an upper limit? For which time frame? How does the 2 MB grow to 8 MB? At what intervals?

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 08:17:12 PM
 #122

You shouldn't be optimistic nor relying on speculative predictions of the future when it comes to Bitcoin's security.
then stop throwing around speculations of things like 20mb blocks... or your gang's other fake doomsdays of "gigabytes by midnight" and "visa by tomorrow"
You are engaging in a discussion between another user and myself; we are free to discuss whatever we want and however we want. If you can't comprehend our discussion properly, then don't comment on it.

if you want a private conversation between 2 people then go to Private message with them

8mb upper limit and 2mb lower (dynamic) limit.
Where is the academic research backing up that 8 MB is safe as an upper limit? For which time frame? How does the 2 MB grow to 8 MB? At what intervals?

even core deemed 8mb network-safe but decided 4mb was super safe.. go ask your buddies.
as for how...

are you forgetting the example of dynamics? i even drew you a picture to keep your concentration span open.

what you need to understand about having the two limits is that the NODES flag what they are happy with and POOLS work below that.
meaning it won't get out of control of what general nodes can cope with, because the nodes are flagging it.

oh, and i remember rusty russell (blockstream) had some stats on 8mb being fine. so try not to proclaim that 8mb is evil, as it's your gang's own agreement that 8mb is fine as an upper limit, with a preference for less as a lower limit

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 11, 2017, 08:20:26 PM
 #123

if you want a private conversation between 2 people then go to Private message with them
You need to take some courses to improve your faulty comprehension skills. We can discuss whenever we want and wherever we want.

even core deemed 8mb network-safe but decided 4mb was super safe.. go ask your buddies.
Core deemed "8 MB network safe"? Where?
Segwit 8 MB weight safe != 8 MB block size limit safe.

are you forgetting the example of dynamics? i even drew you a picture to keep your concentration span open
No. I'm asking you for the specifics of your proposal, but there are none apparently.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 11, 2017, 08:22:20 PM
 #124

even core deemed 8mb network-safe but decided 4mb was super safe.. go ask your buddies.
Core deemed "8 MB network safe"? Where?
Segwit 8 MB weight safe != 8 MB block size limit safe.

So what is the technical reason for this to be the case?

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 11, 2017, 08:23:52 PM
 #125

even core deemed 8mb network-safe but decided 4mb was super safe.. go ask your buddies.
Core deemed "8 MB network safe"? Where?
Segwit 8 MB weight safe != 8 MB block size limit safe.
So what is the technical reason for this to be the case?
For one (ignoring everything else): Linear sigops validation vs. quadratic validation.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 08:29:23 PM
Last edit: March 11, 2017, 08:47:27 PM by franky1
 #126

For one (ignoring everything else): Linear sigops validation vs. quadratic validation.

lol segwit doesn't solve it!!

even after segwit activates, native key users are not disarmed.
segwit only disarms those who voluntarily use segwit keys, since segwit-key tx signing doesn't have the quadratics problem.

it does not take the quadratics opportunity away from native key users.

reducing or keeping the per-tx sigops limit at or below 16,000 sigops, no matter what the blocksize is, ensures quadratic spammers cannot spam large-sigop quadratic txs
core 0.14: MAX_BLOCK_SIGOPS_COST = 80000;
core 0.14: MAX_STANDARD_TX_SIGOPS_COST = MAX_BLOCK_SIGOPS_COST/5;
meaning
core 0.14: MAX_STANDARD_TX_SIGOPS_COST = 16000;

core 0.12: MAX_BLOCK_SIZE = 1000000;
core 0.12: MAX_BLOCK_SIGOPS = MAX_BLOCK_SIZE/50;
meaning
core 0.12: MAX_BLOCK_SIGOPS = 20000;
core 0.12: MAX_STANDARD_TX_SIGOPS = MAX_BLOCK_SIGOPS/5;
meaning
core 0.12: MAX_STANDARD_TX_SIGOPS = 4000;
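
a quick check of that arithmetic (a python sketch; the names mirror the C++ constants quoted above):

Code:
# the quoted constants, re-derived (python sketch mirroring the C++ names)
MAX_BLOCK_SIGOPS_COST = 80_000                               # core 0.14
MAX_STANDARD_TX_SIGOPS_COST = MAX_BLOCK_SIGOPS_COST // 5     # 16000

MAX_BLOCK_SIZE = 1_000_000                                   # core 0.12
MAX_BLOCK_SIGOPS = MAX_BLOCK_SIZE // 50                      # 20000
MAX_STANDARD_TX_SIGOPS = MAX_BLOCK_SIGOPS // 5               # 4000

print(MAX_STANDARD_TX_SIGOPS_COST, MAX_STANDARD_TX_SIGOPS)   # 16000 4000
# five max-sigop standard txs exhaust a 0.14 block's whole sigop budget:
print(MAX_BLOCK_SIGOPS_COST // MAX_STANDARD_TX_SIGOPS_COST)  # 5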

so segwit actually allowed the per-tx sigops limit to increase from 4000 (v0.12) to 16000 (v0.14), on the thinking that everyone using segwit keys would be defense enough.. they have not thought about native users taking advantage.

oh, and please don't instantly reply until you've read the code.
read the code; don't just reply "wrong, because blockstream are kings"


edit: after reading lauda's instant reply below.. i actually pasted in and did the maths for him from core's own code...
lauda: read the code.

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 11, 2017, 08:35:44 PM
 #127

lol segwit doesn't solve it!!

even after segwit activates, native key users are not disarmed.
segwit only disarms those who voluntarily use segwit keys, since segwit-key tx signing doesn't have the quadratics problem.
You still don't understand how Segwit works?
1 MB quadratic hashing = no DoS risk AFAIK.
2 MB quadratic hashing -> DoS risk.
Segwit activated 2 MB block (Segwit TXs) = no DoS risk (linear hashing)
Segwit activated 1 MB block (old TXs) = no DoS risk at 1 MB (quadratic hashing).
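
(As a toy cost model of the quadratic vs. linear difference: legacy sighash re-hashes roughly the whole transaction once per input, while a BIP143-style scheme keeps per-input work roughly constant. The byte figures below are assumptions for illustration, not actual validation code.)

Code:
# toy signature-hashing cost model (illustrative; not actual validation code)
BYTES_PER_INPUT = 40          # assumed serialized size per input
SEGWIT_PER_INPUT_COST = 100   # assumed constant per-input hashing work under BIP143

def legacy_cost(n_inputs):
    tx_bytes = n_inputs * BYTES_PER_INPUT
    return n_inputs * tx_bytes            # each input re-hashes ~the whole tx -> O(n^2)

def segwit_cost(n_inputs):
    tx_bytes = n_inputs * BYTES_PER_INPUT
    return tx_bytes + n_inputs * SEGWIT_PER_INPUT_COST   # -> O(n)

for n in (1_000, 5_000, 25_000):
    print(n, legacy_cost(n), segwit_cost(n))
# legacy cost grows ~625x when inputs grow 25x; segwit cost grows ~25x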

reducing or keeping the per-tx sigops limit at or below 16,000 sigops, no matter what the blocksize is, ensures quadratic spammers cannot spam large-sigop quadratic txs
That.. doesn't actually solve it, IIRC. Have you actually proposed this somewhere / done the calculations, or did you pull 16k out of thin air?

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 11, 2017, 08:38:28 PM
 #128

What is the reason for old tx's using quadratic hashing instead of linear hashing, and why is it considered safe with segwit if not for normal transactions?

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 11, 2017, 08:39:46 PM
 #129

What is the reason for old tx's using quadratic hashing instead of linear hashing, and why is it considered safe with segwit if not for normal transactions?
That's the way that it is currently implemented; a known inefficiency (O(n^2) time). This is one of the reasons for which Segwit is quite beneficial. They packed up a lot of improvements at once.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 11, 2017, 08:43:43 PM
 #130

What is the reason for old tx's using quadratic hashing instead of linear hashing, and why is it considered safe with segwit if not for normal transactions?
That's the way that it is currently implemented; a known inefficiency (O(n^2) time). This is one of the reasons for which Segwit is quite beneficial. They packed up a lot of improvements at once.

But is there any reason that this could not be implemented on the old tx's?

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 11, 2017, 08:47:51 PM
 #131

But is there any reason that this could not be implemented on the old tx's?
Segwit introduces a new transaction type which can't be malleated like old TXs and which has linear scaling. I don't know exactly what would be needed to make old TXs scale linearly as well. I'm going to assume it would require a hard fork of some kind.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 08:52:22 PM
 #132

But is there any reason that this could not be implemented on the old tx's?
Segwit introduces a new transaction type which can't be malleated like old TXs and which has linear scaling. I don't know exactly what would be needed to make old TXs scale linearly as well. I'm going to assume it would require a hard fork of some kind.

sigop attack:
v0.12 had a 4000-sigops-per-tx limit (read the code)
v0.14 has a 16000-sigops-per-tx limit (read the code)

so now check the code.
https://github.com/bitcoin/bitcoin/tree/0.14/src
core 0.14: MAX_BLOCK_SIGOPS_COST = 80000;
core 0.14: MAX_STANDARD_TX_SIGOPS_COST = MAX_BLOCK_SIGOPS_COST/5;
meaning
core 0.14: MAX_STANDARD_TX_SIGOPS_COST = 16000

https://github.com/bitcoin/bitcoin/tree/0.12/src
core 0.12: MAX_BLOCK_SIZE = 1000000;
core 0.12: MAX_BLOCK_SIGOPS = MAX_BLOCK_SIZE/50;
meaning
core 0.12: MAX_BLOCK_SIGOPS = 20000
core 0.12: MAX_STANDARD_TX_SIGOPS = MAX_BLOCK_SIGOPS/5;
meaning
core 0.12: MAX_STANDARD_TX_SIGOPS = 4000;
nothing stops a native tx from sigops spamming; you can only control how much spam is allowed

blockstream just thinks that people using segwit keys is defense enough. they have not realised malicious users will stick to native keys

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 11, 2017, 08:53:55 PM
 #133

But is there any reason that this could not be implemented on the old tx's?
Segwit introduces a new transaction type which can't be malleated like old TXs and which has linear scaling. I don't know exactly what would be needed to make old TXs scale linearly as well. I'm going to assume it would require a hard fork of some kind.

That is what I am wondering. If segwit were implemented as a hard fork, could transaction malleation and the quadratic sigops spam attack be solved for good? Could a native address automatically become a segwit address, negating the need for users to move UTXOs from native keys to segwit keys (which is going to cost a transaction fee and put unnecessary pressure on network capacity)?

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 08:56:01 PM
 #134

That is what I am wondering. If segwit were implemented as a hard fork, could the transaction malleation and quadratic sigop spam attacks be solved for good? Could a native address automatically be a segwit address, negating the need for users to move UTXO's from native keys to segwit keys (which is going to cost a transaction fee).

nope.

the only real defense is to keep the per-tx limit down... or to blindly assume malicious people will move to segwit keys and disarm themselves, so that everyone ends up using segwit keys

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 11, 2017, 08:58:22 PM
 #135

-snip-
nothing stops a native tx from sigops spamming
You still don't understand it. Spamming with native keys after Segwit has been activated is useless. You can't DoS the network with them.

If segwit were implemented as a hard fork, could the transaction malleation and quadratic sigop spam attacks be solved for good?
No. The difference between a Segwit soft fork (SWSF) and a Segwit hard fork (SWHF) is negligible (aside from hard forks being dangerous without consensus). In order for something like that to happen, it would probably require a whole different BIP and approach.

Could a native address automatically be a segwit address, negating the need for users to move UTXO's from native keys to segwit keys (which is going to cost a transaction fee and put unnecessary pressure on network capacity)?
I doubt it. Even the other attempt at fixing malleability with a hard fork called Flextrans (from the Classic dev, i.e. a BTU supporter) doesn't do that.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 09:00:26 PM
 #136

-snip-
nothing stops a native tx from sigops spamming
You still don't understand it. Spamming with native keys after Segwit has been activated is useless. You can't DoS the network with them.

you can by filling the base block.

EG
a block based on v0.12 fills the 1MB block with sigop txs totalling 4000 sigops per tx
a block based on v0.14 fills the 1MB block with sigop txs totalling 16000 sigops per tx

edit: here is the clincher: just 5 such txs use up the block's sigop limit.. no other txs can be added

READ THE CODE. not the sales pitch by blockstreamers on reddit

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 11, 2017, 09:12:44 PM
 #137

-snip-
nothing stops a native tx from sigops spamming
You still don't understand it. Spamming with native keys after Segwit has been activated is useless. You can't DoS the network with them.

Is this because it will eventually only be possible to send to a segwit key, or is there some function in the two-tier network that the SWSF creates?

If segwit were implemented as a hard fork, could the transaction malleation and quadratic sigop spam attacks be solved for good?
No. The difference between a Segwit soft fork (SWSF) and a Segwit hard fork (SWHF) is negligible (aside from hard forks being dangerous without consensus). In order for something like that to happen, it would probably require a whole different BIP and approach.

So does this mean a soft fork bypasses consensus?

Could a native address automatically be a segwit address, negating the need for users to move UTXO's from native keys to segwit keys (which is going to cost a transaction fee and put unnecessary pressure on network capacity)?
I doubt it. Even the other attempt at fixing malleability with a hard fork called Flextrans (from the Classic dev, i.e. a BTU supporter) doesn't do that.

Does flextrans require a new address key type as well?

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
d5000
Legendary
*
Offline Offline

Activity: 3010
Merit: 2925


Decentralization Maximalist


View Profile
March 11, 2017, 09:14:39 PM
 #138

If you don't believe that Bitcoin is digital gold, or you don't understand where the current value stems from, then you have to re-examine everything.
Maybe we have different opinions here. I believe Bitcoin's value comes mainly from its usability as a value-transfer (and later also value-storage) platform for many use cases among many users ("network effect"), and from its advantage over similar cryptocurrencies ("altcoins"). But that could lead to a long discussion, so here in this thread let's focus on the block size issue. Wink

In the case of IBD I think that in that "drastic future" most users will end up downloading blockchain snapshots. That has some centralization risks, but I think they are manageable. [...]
You shouldn't throw in centralizing aspects like they are trivial changes. The impact of something like that, and potential security concerns are probably not properly researched.
Then I would encourage research on that topic - I think it's inevitable that "lighter" IBD procedures will be provided at some point. Maybe Electrum and other light wallets could serve as objects in such a study.

Quote
We're obviously talking about end users with consumer-level equipment. Professional users that use servers in well-connected datacenters should have no problems with 20 MB blocks, I think.
I don't understand why you want me, as a user, to spend a lot of money to run my node in datacenters? I use Bitcoin Core for everything, node, wallet, cold storage.

No! Obviously the goal must be to allow end users to run their nodes on PCs or notebooks. That was only a comment about professional equipment today - because the power of pro equipment should be reached by consumer-level hardware at most a decade later. (Connectivity/bandwidth is another point; here you're right that upload bandwidth growth is the main bottleneck.)
Edit: What upper limit would you consider realistic?
In what time frame? Next 5, 10 years?
Let's say 5 years, 10 years maybe is too far away.

(The 20 MB blocks were only an example to show the approximate relation between block size and possible user base, for now, I won't insist on this number)

Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 11, 2017, 09:31:54 PM
 #139

you can by filling the base block.

EG
a block based on v0.12 fills the 1MB block with sigop txs totalling 4000 sigops per tx
a block based on v0.14 fills the 1MB block with sigop txs totalling 16000 sigops per tx
Are you trying to say that Bitcoin can be DOS'ed at 1 MB now? Roll Eyes

Is this because it will eventually only be possible to send to a segwit key, or is there some function in the two-tier network that the SWSF creates?
No. You can refuse to use Segwit if you do not want to.

So does this mean a soft fork bypasses consensus?
No. Soft forks are backwards compatible, therefore mitigating the risk of a network split.

Does flextrans require a new address key type as well?
Yes.
I believe Bitcoin's value comes mainly from its usability ....
 But that could lead to a long discussion, so here in this thread, let's focus on the block size issue. Wink
It sounded like I was talking to Roger Ver for a second, but okay.

Then I would encourage research on that topic - I think it's inevitable that "lighter" IBD procedures will be provided at some point. Maybe Electrum and other light wallets could serve as objects in such a study.
Then encourage it, but don't spread it around like it is trivial until we know for 'sure'.

No! Obviously the goal must be to allow end users to run their nodes on PCs or notebooks. That was only a comment about professional equipment today - because the power of pro equipment should be reached by consumer-level hardware at most a decade later. (Connectivity/bandwidth is another point; here you're right that upload bandwidth growth is the main bottleneck.)
Noted. My bad.

Let's say 5 years, 10 years maybe is too far away.

(The 20 MB blocks were only an example to show the approximate relation between block size and possible user base, for now, I won't insist on this number)
We also need to determine whether we are talking about a block size in the traditional sense or a post-Segwit 'base + weight' size (as the "new" block size). Which is it?

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
jbreher
Legendary
*
Offline Offline

Activity: 2912
Merit: 1515


lose: unfind ... loose: untight


View Profile
March 11, 2017, 09:38:51 PM
 #140

What is the reason for old tx's using quadratic hashing instead of linear hashing, and why is it considered safe with segwit if not for normal transactions?
That's simply how legacy signature hashing is implemented; it's a known inefficiency (O(n^2) time). This is one of the reasons Segwit is so beneficial: they packed a lot of improvements into a single upgrade.

But is there any reason that this could not be implemented on the old tx's?

The 'DoS' doesn't even require a protocol change to nullify. Indeed, there is a natural incentive already in the protocol that ensures it will never become a systemic problem. If large-time-to-verify blocks ever became A Thing, miners would employ parallel validation, ensuring that such blocks get orphaned by faster-to-verify blocks.

Miners who gravitate to parallel validation will earn more income, and miners who do not employ it will go bankrupt over time. As will miners who create such DoS blocks.

This is already part of the protocol. No change is needed.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
jbreher
Legendary
*
Offline Offline

Activity: 2912
Merit: 1515


lose: unfind ... loose: untight


View Profile
March 11, 2017, 09:45:37 PM
 #141

So does this mean a soft fork bypasses consensus?
No. Soft forks are backwards compatible

Note that this requires believing that making nodes that currently operate in a trustless manner suddenly dependent upon others for security fits the definition of 'backwards compatible'. I think that definition of 'backwards compatible' is ludicrous. YMMV.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 11, 2017, 10:15:09 PM
 #142

The 'DoS' doesn't even require a protocol change to nullify. Indeed, there is a natural incentive already in the protocol that ensures it will never become a systemic problem. If large-time-to-verify blocks ever became A Thing, miners would employ parallel validation, ensuring that such blocks get orphaned by faster-to-verify blocks.

Miners who gravitate to parallel validation will earn more income, and miners who do not employ it will go bankrupt over time. As will miners who create such DoS blocks.

This is already part of the protocol. No change is needed.
I've asked for a refresher on 'parallel validation':
Quote
<harding> many miners currently mine empty blocks on top of unvalidated (but PoW-correct) new blocks.  There's no reason to expect them to behave differently under BTU, so most miners would probably extend the chain with the high-validation-work block rather than create an alternative block at the same height.
<harding> Thus parallel validation doesn't get you anything unless a low-validation-work block is coincidentally produced at the same time as a high-validation-work block.
<harding> parallel validation only helps you in the rare case that there are two or more blockchains with the same PoW.  Miners are disincentivized to create such chains since one of them is certain to lose, so the incentives probably favor them extending a high-validation-work block rather than creating a competing low-validation-work block.
<harding> Imagine block A is at the tip of the chain.  Some miner then extends that chain with block B, which looks like it'll take a long time to verify.  As a miner, you can either attempt to mine block C on top of block B, mining without validation but creating chain ABC that certainly has the most PoW.  Or you can mine block B' that is part of chain AB' that will have less PoW than someone who creates chain ABC.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 10:18:52 PM
 #143

So does this mean a soft fork bypasses consensus?
No. Soft forks are backwards compatible

Note that this requires believing that making nodes that currently operate in a trustless manner suddenly dependent upon others for security fits the definition of 'backwards compatible'. I think that definition of 'backwards compatible' is ludicrous. YMMV.

something we can agree on.. needing segwit nodes as the 'upstream filters' (gmaxwell's own buzzword) is bad for security. plus it's not "backward compatible"

i prefer the term backward trimmed (trimmable), or backwards 'filtered' (using gmaxwell's word), to make it clearer that old nodes are not getting fully validatable block data
not a perfect term, but at least it's slightly clearer about what segwit is "offering", compared to the half-truths, half-promises and word twisting used to avoid giving a real answer.

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
d5000
Legendary
*
Offline Offline

Activity: 3010
Merit: 2925


Decentralization Maximalist


View Profile
March 11, 2017, 10:19:48 PM
 #144

Let's say 5 years, 10 years maybe is too far away.
We also need to determine whether we are talking about a block size in the traditional sense or a post-Segwit 'base + weight' size (as the "new" block size). Which is it?
The 20 MB I mentioned before was calculated in the straightforward traditional [non-segwit] way. So to compare to my previous calculation, and because unfortunately Segwit is still not active, I would be more interested in the "traditionally calculated" value. But you can obviously add an estimation for a post-Segwit size.

Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 11, 2017, 10:25:23 PM
 #145

sigop attack
v0.12 had a 4000 sigop per tx limit (read the code)
v0.14 had a 16000 sigop per tx limit (read the code)

so now check the code.
https://github.com/bitcoin/bitcoin/tree/0.14/src
core 0.14: MAX_BLOCK_SIGOPS_COST = 80000;
core 0.14: MAX_STANDARD_TX_SIGOPS_COST = MAX_BLOCK_SIGOPS_COST/5;
meaning
core 0.14: MAX_STANDARD_TX_SIGOPS_COST = 16000
You almost made me fall for this.. I was too tired to verify your numbers myself right away. That '80000' number is the Segwit number, i.e. it is scaled for the 4 MB weight: 80 000/4 = 20 000. Now if you apply the 'MAX_BLOCK_SIGOPS_COST/5' rule to this number, you get.. 4000.  Roll Eyes
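To make the unit conversion explicit, here is the same arithmetic as a compilable sketch (the constants match the 0.14 source quoted above; the division by 4 is the BIP141 witness scale factor, i.e. one legacy sigop counts as 4 cost units):

Code:
#include <cstdint>
#include <iostream>

int main() {
    const std::int64_t WITNESS_SCALE_FACTOR = 4;       // BIP141 scaling
    const std::int64_t MAX_BLOCK_SIGOPS_COST = 80000;  // consensus limit, 0.14
    const std::int64_t MAX_STANDARD_TX_SIGOPS_COST =
        MAX_BLOCK_SIGOPS_COST / 5;                     // standardness policy, 0.14

    // Expressed in pre-segwit legacy sigops, the 0.14 limits are
    // unchanged from 0.12: 20000 per block, 4000 per standard tx.
    std::cout << "block: "
              << MAX_BLOCK_SIGOPS_COST / WITNESS_SCALE_FACTOR
              << ", standard tx: "
              << MAX_STANDARD_TX_SIGOPS_COST / WITNESS_SCALE_FACTOR
              << " legacy sigops\n";
}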

The 20 MB I mentioned before was calculated in the straightforward traditional [non-segwit] way. So to compare to my previous calculation, and because unfortunately Segwit is still not active, I would be more interested in the "traditionally calculated" value. But you can obviously add an estimation for a post-Segwit size.
I'm not exactly sure how to mitigate the DoS vector in that case. If that were mitigated in some way, I'd say a 10 MB upper limit for the next 5 years. I doubt anyone could expect that we'd need more than 30 TPS + all the secondary-layer solutions so quickly.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 10:28:23 PM
 #146

sigop attack
v0.12 had a 4000 sigop per tx limit (read the code)
v0.14 had a 16000 sigop per tx limit (read the code)

so now check the code.
https://github.com/bitcoin/bitcoin/tree/0.14/src
core 0.14: MAX_BLOCK_SIGOPS_COST = 80000;
core 0.14: MAX_STANDARD_TX_SIGOPS_COST = MAX_BLOCK_SIGOPS_COST/5;
meaning
core 0.14: MAX_STANDARD_TX_SIGOPS_COST = 16000
You almost made me fall for this.. I was too tired to verify your numbers myself right away. That '80000' number is the Segwit number, i.e. it is scaled for the 4 MB weight: 80 000/4 = 20 000. Now if you apply the 'MAX_BLOCK_SIGOPS_COST/5' rule to this number, you get.. 4000.  Roll Eyes

i used 0.12 as an example of how many quadratics were permissible prior to segwit
and
i used 0.14 as an example of how many quadratics were permissible post segwit
prior: 4000
post: 16000

but in actual fact v0.14 is not 4000 pre-segwit; it's actually still 16,000 pre-segwit (for pools using these up-to-date versions, e.g. 0.14, today)
check the code

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 11, 2017, 10:33:48 PM
 #147

So does this mean a soft fork bypasses consensus?
No. Soft forks are backwards compatible

Note that this requires believing that making nodes that currently operate in a trustless manner suddenly dependent upon others for security fits the definition of 'backwards compatible'. I think that definition of 'backwards compatible' is ludicrous. YMMV.

This is my biggest concern about segwit being implemented as a soft fork. All nodes are equal until they are not. If it were implemented as a hard fork, we wouldn't have this two-tier network system, if I understand correctly.

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 11, 2017, 10:35:53 PM
 #148

This is my biggest concern about segwit being implemented as a soft fork. All nodes are equal until they are not. If it were implemented as a hard fork, we wouldn't have this two-tier network system, if I understand correctly.
Segwit is like any other soft fork before it. Nodes that do not update do not validate the new rules. With a hard fork, by contrast, nodes that do not update are cut off from the network.

i used 0.12 as an example of how many quadratics were permissible prior to segwit
and
i used 0.14 as an example of how many quadratics were permissible post segwit
prior: 4000
post: 16000

but in actual fact v0.14 is not 4000 pre-segwit; it's actually still 16,000 pre-segwit
check the code
No, you misunderstand this completely. This is absurd. You can create a TX with 20k sigops maximum, you just can't do this with Core. You're confusing policy and consensus rules and you've misread the code. See this for example: https://github.com/bitcoin/bitcoin/pull/8438

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 10:37:42 PM
 #149

No, you misunderstand this completely. This is absurd. You can create a TX with 20k sigops maximum, you just can't do this with Core. You're confusing policy and consensus rules and you've misread the code.

admit there is a 2 tiered system. not the word twisting

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 11, 2017, 10:39:25 PM
 #150

No, you misunderstand this completely. This is absurd. You can create a TX with 20k sigops maximum, you just can't do this with Core. You're confusing policy and consensus rules and you've misread the code.
admit there is a 2 tiered system. not the word twisting
As soon as you admit to being wrong with your "numbers". We all know that day won't come. Roll Eyes

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 11, 2017, 11:02:17 PM
 #151

This is my biggest concern about segwit being implemented as a soft fork. All nodes are equal until they are not. If it were implemented as a hard fork, we wouldn't have this two-tier network system, if I understand correctly.
Segwit is like any other soft fork before it. Nodes that do not update do not validate the new rules. With a hard fork, by contrast, nodes that do not update are cut off from the network.

Did any soft fork that came before it create a two-tier network system? At least with a hard fork, miners will not create segwit blocks until the vast majority of nodes have upgraded. Those who find their nodes unable to sync will upgrade their nodes. With the two-tier network system introduced by the SWSF, nodes that have not been upgraded are being fed filtered data, so they are no longer full nodes. This appears to be a mechanism to bypass full-node consensus, if the miners agree to start creating segwit blocks. Miners that do not wish to upgrade find they have to, or risk having their blocks orphaned, so they are basically forced to upgrade. Please someone correct my misunderstanding; otherwise I have a right to feel rather uncomfortable about this.

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
franky1
Legendary
*
Offline Offline

Activity: 3318
Merit: 2207



View Profile
March 11, 2017, 11:14:06 PM
 #152

No, you misunderstand this completely. This is absurd. You can create a TX with 20k sigops maximum, you just can't do this with Core. You're confusing policy and consensus rules and you've misread the code. See this for example: https://github.com/bitcoin/bitcoin/pull/8438

this is your misunderstanding
the 20k limit (old v0.12) is the BLOCK LIMIT for sigops
the 4000 is the TRANSACTION limit
meaning a malicious spammer can make 5 TXs of 4,000 sigops in v0.12 and FILL THE BLOCK'S sigop limit (no more txs allowed)

the 80k limit (v0.14) is the BLOCK LIMIT for sigop cost
the 16000 is the TRANSACTION limit
meaning a malicious spammer can make 5 TXs of 16,000 sigops in v0.14 and FILL THE BLOCK'S sigop limit (no more txs allowed)

as for your link - https://github.com/bitcoin/bitcoin/pull/8438
Quote
Treat high-sigop transactions as larger rather than rejecting them

meaning they acknowledge they are allowing higher-sigop transactions, which can be used for quadratic attacks.

they simply think it's not a problem. but let's say things move forward in the future: if they then allowed 32,000 sigops per tx and 160,000 per block, that's still 5 txs per block, and because a native malicious user will do it, the TIME to process 5 txs of 32,000 compared to last year's 5 txs of 4,000 will have an impact...

the solution is: yes, increase the BLOCK sigop limit, but don't increase the TX sigop limit. keep it low - 16,000 maybe, but preferably 4,000 - as a constant barrier against malicious native-key quadratic creators.
meaning if the block limit was 80,000, a malicious user would have to make 20 txs to fill it, instead of just 5..
and because it's 4,000 x 20 instead of 16,000 x 5, the validation time is improved (see the sketch below)
but they haven't
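Under the simplifying assumption that legacy validation work grows roughly with the square of a transaction's sigop count (a model for illustration, not a benchmark), the arithmetic above looks like this:

Code:
#include <iostream>

int main() {
    // Work model (an assumption): per-tx legacy validation work ~ sigops^2,
    // since each signature check hashes data that itself grows with the tx.
    const long long budget = 80000;                 // block-wide sigop budget
    for (long long per_tx : {4000LL, 16000LL}) {
        long long n = budget / per_tx;              // txs needed to fill the budget
        long long work = n * per_tx * per_tx;       // total modeled hashing work
        std::cout << n << " txs of " << per_tx
                  << " sigops -> relative work " << work << "\n";
    }
}

Under this model, 20 txs of 4,000 sigops cost about a quarter of the work of 5 txs of 16,000, even though both fill the same block-wide budget.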

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
jbreher
Legendary
*
Offline Offline

Activity: 2912
Merit: 1515


lose: unfind ... loose: untight


View Profile
March 11, 2017, 11:20:02 PM
 #153

So does this mean a soft fork bypasses consensus?
No. Soft forks are backwards compatible

Note that this requires believing that making nodes that currently operate in a trustless manner suddenly dependent upon others for security fits the definition of 'backwards compatible'. I think that definition of 'backwards compatible' is ludicrous. YMMV.

This is my biggest concern about segwit being implemented as a soft fork. All nodes are equal until they are not. If it were implemented as a hard fork, we wouldn't have this two-tier network system, if I understand correctly.

Well, there is yet another effect which seems rarely discussed. Under The SegWit Omnibus Changeset, there are essentially two classes of bitcoins: those that have been created by legacy, and those which have been created by SegWit. This is by definition a destruction of fungibility.

How important fungibility is to you is something only you can decide.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
d5000
Legendary
*
Offline Offline

Activity: 3010
Merit: 2925


Decentralization Maximalist


View Profile
March 11, 2017, 11:30:33 PM
 #154

I'm not exactly sure how to mitigate the DoS vector in that case. If that were mitigated in some way, I'd say a 10 MB upper limit for the next 5 years. I doubt anyone could expect that we'd need more than 30 TPS + all the secondary-layer solutions so quickly.

OK, 10 MB looks good to me (it would be possible to handle at least 50 million users with that) - and it's also close to Franky's 8 MB. With Segwit, if I understand it correctly, that transaction capacity (30 tps) would be roughly equivalent to a 2-4 MB limit.
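As a rough sanity check on these throughput figures, assuming an average transaction size of about 500 bytes and the 600-second block interval (both ballpark assumptions, not measurements):

Code:
#include <iostream>

int main() {
    const double block_interval_s = 600.0;  // target block spacing
    const double avg_tx_bytes = 500.0;      // assumed average size; varies in practice
    for (double block_mb : {1.0, 2.0, 4.0, 10.0}) {
        double tps = block_mb * 1e6 / avg_tx_bytes / block_interval_s;
        std::cout << block_mb << " MB blocks -> ~" << tps << " tps\n";
    }
}

With these assumptions, 10 MB blocks land at roughly 33 tps, consistent with the "more than 30 TPS" figure above.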

jbreher
Legendary
*
Offline Offline

Activity: 2912
Merit: 1515


lose: unfind ... loose: untight


View Profile
March 11, 2017, 11:44:17 PM
 #155

The 'DoS' doesn't even require a protocol change to nullify. Indeed, there is a natural incentive already in the protocol that ensures it will never become a systemic problem. If large-time-to-verify blocks ever became A Thing, miners would employ parallel validation, ensuring that such blocks get orphaned by faster-to-verify blocks.

Miners who gravitate to parallel validation will earn more income, and miners who do not employ it will go bankrupt over time. As will miners who create such DoS blocks.

This is already part of the protocol. No change is needed.
I've asked for a refresher on 'parallel validation':
Quote
<harding> many miners currently mine empty blocks on top of unvalidated (but PoW-correct) new blocks.  There's no reason to expect them to behave differently under BTU, so most miners would probably extend the chain with the high-validation-work block rather than create an alternative block at the same height.
<harding> Thus parallel validation doesn't get you anything unless a low-validation-work block is coincidentally produced at the same time as a high-validation-work block.
<harding> parallel validation only helps you in the rare case that there are two or more blockchains with the same PoW.  Miners are disincentivized to create such chains since one of them is certain to lose, so the incentives probably favor them extending a high-validation-work block rather than creating a competing low-validation-work block.
<harding> Imagine block A is at the tip of the chain.  Some miner then extends that chain with block B, which looks like it'll take a long time to verify.  As a miner, you can either attempt to mine block C on top of block B, mining without validation but creating chain ABC that certainly has the most PoW.  Or you can mine block B' that is part of chain AB' that will have less PoW than someone who creates chain ABC.

Harding's concern would be well-founded, but only to the extent that all miners would suddenly start performing only zero-transaction block mining. Which of course is ludicrous.

What is not said is that miners who perform zero-transaction mining do so only until they are able to validate the block they are mining atop. Once they have validated that block, they modify the block they are mining to include a load of transactions. They cannot include that load of transactions before validation because, until the parent is validated, they have no idea which transactions they need to exclude from the block they are mining. For if they mine a block that includes a transaction already mined in a previous block, their block would be orphaned for invalidity.

So what would happen with parallel validation under such a scenario?

Miner A is mining at height N. As he is doing so, miner B solves a block that contains an aberrant quadratic-hash-time transaction (let us call this the 'ADoS block', for attempted denial of service) at height N, and propagates it to the network.
Miner A, who implements parallel validation and zero-transaction mining, stops mining his height N block. He spawns a thread to start validating the ADoS block at height N. He starts mining a zero-transaction block at height N+1 atop ADoS.
Miner C solves a normal validation time block C at height N and propagates it to the network.
When Miner A receives block C, he spawns another thread to validate block C. He is still mining the zero-transaction block atop ADoS.
A short time thereafter, Miner A finishes validation of block C. ADoS is still not validated. So Miner A builds a new block at height N+1 atop block C, full of transactions, and switches to mining that.
From the perspective of Miner A, he has orphaned Miner B's ADoS block.
Miner A may or may not win round N+1. But statistically, he has a much greater chance to win round N+1 than any other miner that does not perform parallel validation. Indeed, until the ADoS block is fully validated, it is at risk of being orphaned.
The net result is that miners have a natural incentive to operate in this manner, as it assures them a statistical advantage in the case of ADoS blocks. So if Miner A does not win round N+1, another miner that implements parallel validation assuredly will. End result: ADoS is orphaned.

End result: Harding's concern is irrelevant. The quadratic hash time problem solves itself. No change to the protocol needed.
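A minimal sketch of the policy described above (hypothetical code, far simpler than any real miner's block handling): validate competing same-height blocks concurrently, and extend whichever candidate finishes validating first.

Code:
#include <chrono>
#include <future>
#include <iostream>
#include <string>
#include <thread>

struct Block {
    std::string id;
    int validate_ms;  // stand-in for script/sighash verification time
};

static bool Validate(const Block& b) {
    // Placeholder for full block validation; an ADoS block just takes longer.
    std::this_thread::sleep_for(std::chrono::milliseconds(b.validate_ms));
    return true;
}

int main() {
    Block ados{"B (ADoS)", 3000};  // slow, quadratic-sighash block at height N
    Block normal{"C", 50};         // ordinary competing block at height N

    // Validate both candidates concurrently instead of serially.
    auto fB = std::async(std::launch::async, Validate, ados);
    auto fC = std::async(std::launch::async, Validate, normal);

    // While neither has validated, a miner would build an empty block on the
    // first-seen tip (elided here). The first candidate to validate wins:
    if (fC.wait_for(std::chrono::milliseconds(500)) == std::future_status::ready
            && fC.get()) {
        std::cout << "extending " << normal.id << " with a full block; "
                  << ados.id << " is orphaned before it finishes validating\n";
    }
    fB.wait();  // let the slow validation thread finish before exiting
}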

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
LazyTownSt
Newbie
*
Offline Offline

Activity: 17
Merit: 0


View Profile
March 11, 2017, 11:45:17 PM
 #156

This is a massive issue. I'm surprised at the lack of votes so far.
jbreher
Legendary
*
Offline Offline

Activity: 2912
Merit: 1515


lose: unfind ... loose: untight


View Profile
March 11, 2017, 11:59:29 PM
 #157

This is a massive issue. Im surprised at the lack of votes so far

'Voting' is pointless. The only 'votes' that matter are tendered by people choosing which code they are running.

I'm 'voting' BU.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 12, 2017, 08:14:23 AM
 #158

this is your misunderstanding
the 20k limit (old v0.12) is the BLOCK LIMIT for sigops
the 4000 is the TRANSACTION limit
meaning a malicious spammer can make 5 TXs of 4,000 sigops in v0.12 and FILL THE BLOCK'S sigop limit (no more txs allowed)

the 80k limit (v0.14) is the BLOCK LIMIT for sigop cost
the 16000 is the TRANSACTION limit
meaning a malicious spammer can make 5 TXs of 16,000 sigops in v0.14 and FILL THE BLOCK'S sigop limit (no more txs allowed)
Nope. Wrong. You are confusing policy rules, consensus rules and Segwit. The 80k number is Segwit-only. A non-Core client can create a TX with 20k sigops at most, which is the maximum that the consensus rules allow (not the numbers that you're writing about, i.e. neither 4k nor 16k).

Well, there is yet another effect which seems rarely discussed. Under The SegWit Omnibus Changeset, there are essentially two classes of bitcoins: those that have been created by legacy, and those which have been created by SegWit. This is by definition a destruction of fungibility.
No. It does not destroy fungibility.

End result: Harding's concern is irrelevant. The quadratic hash time problem solves itself. No change to the protocol needed.
Definitely; everyone is an honest actor in this network and we are all living on a rainbow. Roll Eyes

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
jbreher
Legendary
*
Offline Offline

Activity: 2912
Merit: 1515


lose: unfind ... loose: untight


View Profile
March 12, 2017, 09:08:33 AM
 #159

Well, there is yet another effect which seems rarely discussed. Under The SegWit Omnibus Changeset, there are essentially two classes of bitcoins: those that have been created by legacy, and those which have been created by SegWit. This is by definition a destruction of fungibility.
No. It does not destroy fungibility.

Do you understand that 'fungibility' is the property that no units of a thing have differing characteristics from other units?

Quote
End result: Harding's concern is irrelevant. The quadratic hash time problem solves itself. No change to the protocol needed.
Definitely; everyone is an honest actor in this network and we are all living on a rainbow.

Way to make a technical rebuttal, Lauda. You're certainly on your game tonight.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 12, 2017, 09:19:03 AM
 #160

Do you understand that 'fungibility' is the property that no units of a thing have differing characteristics from other units?
So for you, being part of the legacy UTXO set vs the Segwit UTXO set is an adequate characteristic to destroy fungibility? What happens when *all* (in theory) keys are Segwit UTXOs? Is fungibility suddenly restored?

Way to make a technical rebuttal, Lauda. You're certainly on your game tonight.
I've come to realize that it is pointless to even attempt that, since you only perceive what you want to. You are going to come to the same conclusion each time, regardless of whether you're wrong or not.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
AliceWonderMiscreations
Full Member
***
Offline Offline

Activity: 182
Merit: 100


View Profile WWW
March 12, 2017, 09:24:44 AM
 #161

Do you understand that 'fungibility' is the property that no units of a thing have differing characteristics from other units?
So for you, being part of the legacy UTXO set vs the Segwit UTXO set is an adequate characteristic to destroy fungibility?

Actually it is. That isn't disputed by most SegWit supporters.

I hereby reserve the right to sometimes be wrong
jbreher
Legendary
*
Offline Offline

Activity: 2912
Merit: 1515


lose: unfind ... loose: untight


View Profile
March 12, 2017, 09:29:22 AM
 #162

Do you understand that 'fungibility' is the property that no units of a thing have differing characteristics from other units?
So for you, being part of the legacy UTXO set vs the Segwit UTXO set is an adequate characteristic to destroy fungibility? What happens when *all* (in theory) keys are Segwit UTXOs? Is fungibility suddenly restored?

I think you are on the verge of understanding that issue.

Quote
Way to make a technical rebuttal, Lauda. You're certainly on your game tonight.
I've come to realize that it is pointless to even attempt that, since you only perceive what you want to. You are going to come to the same conclusion each time, regardless of whether you're wrong or not.

OK... sure. I'm quite certain you are unable to poke a hole in my scenario there. Why don't you try? Or even ... why don't you ping Harding with what I posted, and have him see if he can poke holes in it?

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 12, 2017, 09:32:34 AM
Last edit: March 12, 2017, 09:47:51 AM by Lauda
 #163

That isn't disputed by most SegWit supporters.
Source?

I think you are on the verge of understanding that issue.
I don't see why it is an issue. I see it as a non-issue, just as you see quadratic validation as a non-issue. Roll Eyes

OK... sure. I'm quite certain you are unable to poke a hole in my scenario there. Why don't you try? Or even ... why don't you ping Harding with what I posted, and have him see if he can poke holes in it?
To be frank, it was not worth bothering[1]; I just quickly went through it and saw your conclusion. I'm not going to be a messenger between you and someone with a clearly superior understanding. Find a way to contact him yourself.

[1] - Looks like I'm turning into Franky. Roll Eyes

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 12, 2017, 12:27:45 PM
Last edit: March 12, 2017, 12:48:49 PM by AngryDwarf
 #164

-snip-
nothing stops a native tx from sigops spamming
You still don't understand it. Spamming with native keys after Segwit has been activated is useless. You can't DoS the network with them.

I'd like to understand the reason spamming with native keys is useless after segwit activation.

However, it would seem that Core is mitigating the problem by putting restrictive policies in place right now:

Note: code from the 0.14 branch; I haven't backtracked to see when it was added - edit: checked, it is in the 0.13.x branch, so maybe not something new

policy.h

Code:
/** The maximum weight for transactions we're willing to relay/mine */
static const unsigned int MAX_STANDARD_TX_WEIGHT = 400000;

policy.cpp

Code:
    // Extremely large transactions with lots of inputs can cost the network
    // almost as much to process as they cost the sender in fees, because
    // computing signature hashes is O(ninputs*txsize). Limiting transactions
    // to MAX_STANDARD_TX_WEIGHT mitigates CPU exhaustion attacks.
    unsigned int sz = GetTransactionWeight(tx);
    if (sz >= MAX_STANDARD_TX_WEIGHT) {
        reason = "tx-size";
        return false;
    }

net_processing.cpp

Code:
    // Ignore big transactions, to avoid a
    // send-big-orphans memory exhaustion attack. If a peer has a legitimate
    // large transaction with a missing parent then we assume
    // it will rebroadcast it later, after the parent transaction(s)
    // have been mined or received.
    // 100 orphans, each of which is at most 99,999 bytes big is
    // at most 10 megabytes of orphans and somewhat more byprev index (in the worst case):
    unsigned int sz = GetTransactionWeight(*tx);
    if (sz >= MAX_STANDARD_TX_WEIGHT)
    {
        LogPrint("mempool", "ignoring large orphan tx (size: %u, hash: %s)\n", sz, hash.ToString());
        return false;
    }

wallet.cpp

Code:
    // Limit size
    if (GetTransactionWeight(wtxNew) >= MAX_STANDARD_TX_WEIGHT)
    {
        strFailReason = _("Transaction too large");
        return false;
    }

In other words, segwit activation is not needed for the change. It is effective right now. So what does segwit activation bring?
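For reference, the GetTransactionWeight these checks call is defined by BIP141 as weight = 3 x stripped size + total size. A hedged sketch (not Core's exact code) showing why the 400,000 weight cap reproduces the pre-segwit standardness cap of 100,000 bytes for legacy transactions:

Code:
#include <cstdint>
#include <iostream>

// Sketch of the BIP141 weight these policy checks rely on:
// weight = 3 * stripped_size + total_size (not Core's exact code).
std::int64_t WeightSketch(std::int64_t stripped_size, std::int64_t total_size) {
    return 3 * stripped_size + total_size;
}

int main() {
    // A legacy tx carries no witness data, so stripped == total and its
    // weight is exactly 4x its byte size. MAX_STANDARD_TX_WEIGHT = 400000
    // therefore corresponds to the old 100,000-byte standardness limit.
    std::cout << WeightSketch(100000, 100000) << "\n";  // 400000, right at the cap
}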

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 12, 2017, 12:55:19 PM
 #165

I'd like to understand the reason spamming with native keys is useless after segwit activation.
This is quite simple. Take a look:
1) 1 MB (current) -> No DoS risk (quadratic hashing).
2) 2 MB (bare block size increase) -> DoS risk (quadratic hashing).
3) 2 MB post Segwit (implies almost 100% Segwit usage) -> No DoS risk (linear hashing).
4) 1 MB post Segwit (implies 100% native keys) -> No DoS risk (quadratic hashing); the same as the first line.

I'll check the remainder of your post later today.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 12, 2017, 01:01:03 PM
 #166

I'd like to understand the reason spamming with native keys is useless after segwit activation.
This is quite simple. Take a look:
1) 1 MB (current) -> No DoS risk (quadratic hashing).
2) 2 MB (bare block size increase) -> DoS risk (quadratic hashing).
3) 2 MB post Segwit (implies almost 100% Segwit usage) -> No DoS risk (linear hashing).
4) 1 MB post Segwit (implies 100% native keys) -> No DoS risk (quadratic hashing); the same as the first line.


1) 1 MB (current) -> lower DoS risk (quadratic hashing).
2) 2 MB (bare block size increase) -> higher DoS risk (quadratic hashing).
3) 2 MB post Segwit (implies almost 100% Segwit usage) -> No DoS risk (linear hashing).
4) 1 MB post Segwit (implies 100% native keys) -> lower DoS risk (quadratic hashing); the same as the first line.

Is that a fair FTFY?

Also

5) 2 MB post Segwit (implies 100% native keys) -> higher DoS risk (quadratic hashing) - unless there are plans to limit native key space in blocks

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 12, 2017, 01:03:07 PM
 #167

1) 1 MB (current) -> lower DoS risk (quadratic hashing).
2) 2 MB (bare block size increase) -> higher DoS risk (quadratic hashing).
3) 2 MB post Segwit (implies almost 100% Segwit usage) -> No DoS risk (linear hashing).
4) 1 MB post Segwit (implies 100% native keys) -> lower DoS risk (quadratic hashing); the same as the first line.

Is that a fair FTFY?
Yes and no. Writing it like that seems rather vague considering that we don't have exact data on it. It would be nice if someone actually did some in-depth research into this and tried to construct the worst kind of TX possible (validation time wise).

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 500


View Profile
March 12, 2017, 01:06:39 PM
 #168

Yes and no. Writing it like that seems rather vague considering that we don't have exact data on it. It would be nice if someone actually did some in-depth research into this and tried to construct the worst kind of TX possible (validation time wise).

Has Core not done any research on this, then? - also, check the edits above.

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2903


Terminated.


View Profile WWW
March 12, 2017, 01:10:13 PM
 #169

5) 2 MB post Segwit (implies 100% native keys) -> higher DoS risk (quadratic hashing) - unless there are plans to limit native key space in blocks
No. You still don't understand Segwit. You cannot create a 2 MB block using 100% native keys once Segwit is activated; you can only create a 1 MB block if you're using 100% native keys.
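A one-line check of that claim, using the BIP141 weight definition (a sketch, assuming weight = 3 x stripped + total and the 4,000,000 weight cap):

Code:
#include <iostream>

int main() {
    // An all-native block has no witness bytes, so stripped == total and
    // every byte counts 4x toward the weight cap.
    const long long MAX_BLOCK_WEIGHT = 4000000;
    long long native_bytes = 1000000;                    // 1 MB of base data
    long long weight = 3 * native_bytes + native_bytes;  // = 4,000,000
    std::cout << (weight <= MAX_BLOCK_WEIGHT ? "fits" : "exceeds")
              << " the weight cap exactly at 1 MB\n";
}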

Has Core not done any research on this, then?
I'm saying that you and I don't have adequate data, and there is no exact data in this thread. There was an article somewhere about a block that takes longer than 10 minutes to validate at 2 MB.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
AngryDwarf
Sr. Member
****
Offline Offline