Bitcoin Forum
Author Topic: what is SegWit's arbitrary discount rate of witness data segment  (Read 1219 times)
throwaway lol (OP)
Newbie
*
Offline Offline

Activity: 2
Merit: 0


View Profile
June 14, 2017, 06:22:02 AM
 #1

i am trying to understand this part of bitmain's new blog post:

Quote
... if the arbitrary discount rate of witness data segment is removed. The weight parameter, which is designed for artificial rates, may need to be deleted and we need to be frank and straightforward in the software code about different limitations on different kind of blocks and other parameters. A SegWit without the artificial discount rate will treat legacy transaction type fairly and it will not give SegWit transactions an unfair advantage. It will also help the capacity increasing effect of SegWit more significantly than with the discounted rate. We will also push for and encourage changes in code, in main block or in extension block, that will make Lightning Network run more safely and reliably than Core’s present version of SegWit does.

so please explain what this is referring to without bringing all the drama into it. thanks!
throwaway lol (OP)
Newbie
*
Offline Offline

Activity: 2
Merit: 0


View Profile
June 14, 2017, 10:42:16 AM
 #2

so did nobody know, or did this get buried under all the spam in this board?
franky1
Legendary
*
Offline Offline

Activity: 4214
Merit: 4475



View Profile
June 14, 2017, 10:55:11 AM
 #3

in short he is saying

by removing the 'base:witness (1:3) block inside a block' and removing all the kludgy maths... then EVERYONE, meaning those who want to use segwit keypairs or legacy (old/native/traditional) keypairs, can all sit side by side in a REAL 4mb block and all have extra space to happily play in.

segwit only functions if people use segwit keypairs. the formation of a '1:3 block inside a block' to fool the old rules does not itself solve the issues segwit falsely promises to fix.. it is the people using segwit keypairs afterwards that causes any change.


but even those segwit keypairs, in a '(1:3) block inside a block', have to rely on getting their partial transaction data inside the base (1mb) area. which automatically reveals that segwit hits a few problems.

so by taking away the "base:witness (1:3) block inside a block" and just having a 4mb block.. everyone gets the cake and gets to eat it

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
gmaxwell
Staff
Legendary
*
Offline Offline

Activity: 4172
Merit: 8414



View Profile WWW
June 14, 2017, 11:06:26 AM
 #4

The resource usage of blocks is limited by the programming of the system.  This is necessary because we live in a physical world where resources have costs and Bitcoin's security depends on participation being widespread so the costs can't be too great, along with other reasons.

Users compete with each other to get access to the limited resources by bidding with fees. This is how transaction fees arise.

Prior to segwit the limit was effectively just size (there are other conditions but they're seldom hit).

One problem with this is that most of the size of a transaction is its signatures, which are prunable and never accessed after they are validated, so they are cheap for the network to deal with-- but most of the long-term resource costs of a transaction are its outputs, which go into a database that every node must have rapid access to. Yet additional outputs add very little size to a transaction (even less than the space they take for the node to store them).  This creates bad incentives where people are encouraged to make tiny outputs that never get spent, burdening the system.

Segwit eliminates the size limit and replaces it with a weight limit.  The weight is computed so non-witness data (e.g. the above outputs) counts 4x as much towards the limit as witness data (signatures).  This fixes the incentives by balancing the costs of signatures vs outputs to be roughly equal.  It's also how segwit achieves a capacity increase in a way which is fully backwards compatible with old nodes.

Finding a way to address the UTXO incentives issue was a major sticking point and breakthrough that made it possible to get many people to support any capacity increase at all.

You can see more about it here:  https://segwit.org/why-a-discount-factor-of-4-why-not-2-or-8-bbcebe91721e

As to why Bitmain would complain about it, I am aware of no sensible reason.  Unfortunately, some foolish/malicious people have incorrectly described this as _lowering fees_, and perhaps Bitmain is laboring under this misunderstanding; but this change in calculation does not lower fees at all-- it effectively shifts some of the fees from the time a coin is spent to the time a coin is created-- except to the extent that it increases capacity, and additional capacity will likely lower their fee income. Even a fairly small increase could dramatically lower miner fee income in the short term.  (But they claim to WANT an increase in capacity, one even bigger than segwit.)
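The 4x discount gmaxwell describes can be sketched in a few lines of Python. The formula is the BIP141 weight rule; the transaction byte counts below are made-up illustrative numbers, not real transactions.

```python
# Sketch of the BIP141 weight calculation: non-witness bytes count 4x,
# witness bytes count 1x. Byte counts here are illustrative only.

def tx_weight(non_witness_bytes: int, witness_bytes: int) -> int:
    """weight = 4 * non-witness size + 1 * witness size (BIP141)."""
    return 4 * non_witness_bytes + witness_bytes

MAX_BLOCK_WEIGHT = 4_000_000  # consensus block weight limit

# A hypothetical 250-byte transaction, with and without moving
# ~100 bytes of signature data into the witness:
legacy = tx_weight(250, 0)    # all bytes count 4x -> 1000 weight units
segwit = tx_weight(150, 100)  # witness bytes count 1x -> 700 weight units
print(legacy, segwit)         # the segwit form consumes less of the limit
```

With the same fee, the segwit form pays a higher fee per weight unit, which is the incentive shift the post describes.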
gmaxwell
Staff
Legendary
*
Offline Offline

Activity: 4172
Merit: 8414



View Profile WWW
June 14, 2017, 11:34:08 AM
 #5

I just saw this on reddit: https://segwit.org/segregated-witness-and-aligning-economic-incentives-with-resource-costs-7d987b135c00   it also covers this subject.
franky1
Legendary
*
Offline Offline

Activity: 4214
Merit: 4475



View Profile
June 14, 2017, 11:43:15 AM
 #6

gmax.. go twist your words elsewhere

1. the usage of the blocks is limited by the programming of the system, by which this means the developers
...world average internet, cpu, hard drive and ram have no issues with 4-8mb.. so stop telling the world 1mb is all the network can cope with and should stick with.

2. the UTXO database is a separate database than the blockchain..
anyone can make a UTXO database have the same data whether it's using a blockchain in a 1:3 template or a single 4mb template..

3. it was your crew that proposed the 'discount' as a discount.

anyway
to avoid bickering.. a single 4mb block with extra things like limiting txsigops to 4k or under, allows both native and segwit keys to fully utilise extra and real blocksize increase while mitigating the issues that segwit cannot solve

the arguments about the UTXO database are things outside of bitcoin protocol rules and block template designs so they can be dealt with separately at any time

P.S
gmax how about trying to bring back a fee priority formula and stopping the false pretences which have already pushed fees up to silly amounts

jbreher
Legendary
*
Offline Offline

Activity: 3038
Merit: 1660


lose: unfind ... loose: untight


View Profile
June 14, 2017, 11:51:51 AM
 #7

Segwit eliminates the size limit and replaces it with a weight limit.  The weight is computed so non-witness data (e.g. the above outputs) counts 4x as much towards the limit as witness data (signatures).  This fixes the incentives by balancing the costs of signatures vs outputs to be roughly equal.

It 'fixes' the incentives only in cases where nodes throw away data needed for any future validations of the transactions. Whether or not this 'pruning' is a good idea is a matter of reasonable debate.

Quote
It's also how segwit achieves a capacity increase in a way which is fully backwards compatible with old nodes.

Turning fully-validating nodes into non-validating nodes is a rather funny definition of 'backwards compatible'.

Quote
Finding a way to address the UTXO incentives issue was a major sticking point and breakthrough that made it possible to get many people to support any capacity increase at all.

UTXO set size is some function of (# users) * (# of addresses holding value per user). As far as privacy is concerned, best practice dictates distributing your value across several addresses. Are we to follow The SegWit Omnibus Changeset with a recommendation for each user to hold all their Bitcoin on a single address? Privacy be damned?

Quote
As to why Bitmain would complain about it,  I am aware of no sensible reason.  

Introduction of a new fixed centrally-planned variable? Preferential incentive for offchain transactions over onchain transactions? Myopic much?

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
gmaxwell
Staff
Legendary
*
Offline Offline

Activity: 4172
Merit: 8414



View Profile WWW
June 14, 2017, 12:22:50 PM
 #8

It 'fixes' the incentives only in cases where nodes throw away data needed for any future validations of the transactions. Whether or not this 'pruning' is a good idea is a matter of reasonable debate.
Incorrect. Regardless of whether you prune the data or not, you don't need to access it, so it doesn't impact your working set size.

Quote
Turning fully-validating nodes into non-validating nodes is a rather funny definition of 'backwards compatible'.
They continue to validate everything they've always validated. They continue to prevent inflation and prevent double spending-- they don't validate the new segwit things, but the user with the old node is not using those things themselves, and they know they don't validate them-- so they don't relay or mine them.

Quote
UTXO set size is some function of (# users) * (# of addresses holding value per user). As far as privacy is concerned, best practice dictates distributing your value across several addresses. Are we to follow The SegWit Omnibus Changeset with a recommendation for each user to hold all their Bitcoin on a single address? Privacy be damned?
Segwit provides absolutely no pressure to use fewer addresses for managing your own coins.  As I mentioned, it shifts fees from the time an output is spent to the time it is created-- so it is generally fee neutral for the user, except for outputs which are never spent.


Quote
Introduction of a new fixed centrally-planned variable?

"1" is also a variable, there is no such thing as a neutral option there.

Quote
Preferential incentive for offchain transactions over onchain transactions? Myopic much?
Nothing about segwit is "preferential for offchain"-- if anything it's slightly the opposite.

jbreher
Legendary
*
Offline Offline

Activity: 3038
Merit: 1660


lose: unfind ... loose: untight


View Profile
June 14, 2017, 01:58:16 PM
 #9

It 'fixes' the incentives only in cases where nodes throw away data needed for any future validations of the transactions. Whether or not this 'pruning' is a good idea is a matter of reasonable debate.
Incorrect. Regardless of whether you prune the data or not, you don't need to access it, so it doesn't impact your working set size.

How, pray tell, does one validate a transaction, if one does not possess the signature data?

Quote
Quote
Turning fully-validating nodes into non-validating nodes is a rather funny definition of 'backwards compatible'.
they don't validate the new segwit things

Exactly. The SegWit Omnibus Changeset renders them non-fully-validating. In my view, that does not comport with 'backwards compatible'.

Quote
Quote
UTXO set size is some function of (# users) * (# of addresses holding value per user). As far as privacy is concerned, best practice dictates distributing your value across several addresses. Are we to follow The SegWit Omnibus Changeset with a recommendation for each user to hold all their Bitcoin on a single address? Privacy be damned?
Segwit provides absolutely no pressure to use fewer addresses for managing your own coins.


I did not say that The SegWit Omnibus Changeset creates new pressure to use fewer addresses. I merely point out that any benefit to UTXO set size of The SegWit Omnibus Changeset is marginal at best.

Quote
Quote
Introduction of a new fixed centrally-planned variable?
"1" is also a variable, there is no such thing as a neutral option there.

Yes, '1' is a variable. However, there is indeed a neutral option. And it is '1'. Because what is being paid for is space on the chain, in the form of bytes contained in a transaction.

Quote
Quote
Preferential incentive for offchain transactions over onchain transactions? Myopic much?
Nothing about segwit is "preferential for offchain"-- if anything it's slightly the opposite.

I have yet to see a cogent argument which supports your case. Or does your position not include an implication that Lightning is the step-function scalability jump that is enabled by The SegWit Omnibus Changeset? Regardless, the challenge is to your assertion that "As to why Bitmain would complain about it,  I am aware of no sensible reason."

The One
Legendary
*
Offline Offline

Activity: 924
Merit: 1000



View Profile
June 14, 2017, 03:46:37 PM
 #10

The resource usage of blocks is limited by the programming of the system.  This is necessary because we live in a physical world where resources have costs and Bitcoin's security depends on participation being widespread so the costs can't be too great, along with other reasons.

Users compete with each other to get access to the limited resources by bidding with fees. This is how transaction fees arise.

Prior to segwit the limit was effectively just size (there are other conditions but they're seldom hit).

One problem with this is that most of the size of a transaction is its signatures, which are prunable and never accessed after they are validated, so they are cheap for the network to deal with-- but most of the long-term resource costs of a transaction are its outputs, which go into a database that every node must have rapid access to. Yet additional outputs add very little size to a transaction (even less than the space they take for the node to store them).  This creates bad incentives where people are encouraged to make tiny outputs that never get spent, burdening the system.

Segwit eliminates the size limit and replaces it with a weight limit.  The weight is computed so non-witness data (e.g. the above outputs) counts 4x as much towards the limit as witness data (signatures).  This fixes the incentives by balancing the costs of signatures vs outputs to be roughly equal.  It's also how segwit achieves a capacity increase in a way which is fully backwards compatible with old nodes.

Finding a way to address the UTXO incentives issue was a major sticking point and breakthrough that made it possible to get many people to support any capacity increase at all.

You can see more about it here:  https://segwit.org/why-a-discount-factor-of-4-why-not-2-or-8-bbcebe91721e

As to why Bitmain would complain about it, I am aware of no sensible reason.  Unfortunately, some foolish/malicious people have incorrectly described this as _lowering fees_, and perhaps Bitmain is laboring under this misunderstanding; but this change in calculation does not lower fees at all-- it effectively shifts some of the fees from the time a coin is spent to the time a coin is created-- except to the extent that it increases capacity, and additional capacity will likely lower their fee income. Even a fairly small increase could dramatically lower miner fee income in the short term.  (But they claim to WANT an increase in capacity, one even bigger than segwit.)

This is economically unsound and flawed. Bastiat: what is seen and what is unseen - the unseen part has not been recognised by developers who specialise in coding and not economics.

JaredR26
Full Member
***
Offline Offline

Activity: 219
Merit: 100


View Profile
June 14, 2017, 06:46:48 PM
 #11

Segwit eliminates the size limit and replaces it with a weight limit.  The weight is computed so non-witness data (e.g. the above outputs) counts 4x as much towards the limit as witness data (signatures).  This fixes the incentives by balancing the costs of signatures vs outputs to be roughly equal.  It's also how segwit achieves a capacity increase in a way which is fully backwards compatible with old nodes.

I'm really confused on this, can you clarify?

As far as I understand it, segwit does not eliminate the size limit.  The size limit is still 1mb.  It just adds a weight limit of 3+1=4mb.  Correct?

And the witness discount is only used for relative fee prioritization, right?

So in that light, miners could easily decide they can earn more by modifying the code to prioritize transactions differently, primarily by the limit that gets hit first(probably the 1mb limit).  Right?

It seems to me that the witness discount is an inherent property of the lower blocksize limit versus the higher blockweight limit.  And in the decade-long perspective, there would/should be no cost for the witness data, as that isn't the constraint the miners have to prioritize on (unless blockweight is consistently hit before blocksize).  Am I confused on how this works?  Does witness data weight actually count against the blocksize limit itself?
gmaxwell
Staff
Legendary
*
Offline Offline

Activity: 4172
Merit: 8414



View Profile WWW
June 14, 2017, 10:54:41 PM
 #12

Segwit eliminates the size limit and replaces it with a weight limit.  The weight is computed so non-witness data (e.g. the above outputs) counts 4x as much towards the limit as witness data (signatures).  This fixes the incentives by balancing the costs of signatures vs outputs to be roughly equal.  It's also how segwit achieves a capacity increase in a way which is fully backwards compatible with old nodes.

I'm really confused on this, can you clarify?

As far as I understand it, segwit does not eliminate the size limit.  The size limit is still 1mb.  It just adds a weight limit of 3+1=4mb.  Correct?

And the witness discount is only used for relative fee prioritization, right?

So in that light, miners could easily decide they can earn more by modifying the code to prioritize transactions differently, primarily by the limit that gets hit first(probably the 1mb limit).  Right?

It seems to me that the witness discount is an inherent property of the lower blocksize limit versus the higher blockweight limit.  And in the decade-long perspective, there would/should be no cost for the witness data, as that isn't the constraint the miners have to prioritize on (unless blockweight is consistently hit before blocksize).  Am I confused on how this works?  Does witness data weight actually count against the blocksize limit itself?

You are confused as to how it works.

Segwit actually does _eliminate_ the size limit.   The weight limit is constructed in a way which is compatible with the old limit, such that pre-segwit nodes will not think their limit is violated under any condition.

There are not two distinct limits, avoiding that was a design _requirement_ because multiple limits require multidimensional optimization in mining which would be a serious computational burden and because it would make accurate fee estimation intractable.  (because the fees you would need to pay would depend on the relative contention of the various limits, which depends on the compositions of the transactions in the future after you author your own.)

Weight = 3 x witness-stripped-size + size; and the limit is that the weight must not exceed 4 million.   Old nodes receive witness-stripped blocks and so they always accept the blocks under their own limits.

(And, FWIW, this is how Bitcoin has done all the calculations since 0.13-- the results are the same as the old logic when there are no segwit txns-- so the size limit is already gone, witness txs are just not yet in use).

Selecting transactions by highest fee per weight is the unique income-maximizing solution; no other priority order can produce more fee income (+/- small knapsack boundary effects-- e.g. you might skip a higher-rate transaction in order to fill the block more completely).
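The single-dimension selection described here can be sketched as a greedy sort by fee per weight unit against the one weight limit. The transactions, fees and sizes below are hypothetical, and real miners handle ancestor packages and knapsack boundary effects this sketch ignores.

```python
# Greedy block building against a single weight limit, sorted by
# fee per weight unit. Values are illustrative, not real transactions.

MAX_BLOCK_WEIGHT = 4_000_000

def weight(stripped_size: int, total_size: int) -> int:
    # weight = 3 * witness-stripped size + total size, per the post above
    return 3 * stripped_size + total_size

txs = [
    {"fee": 5_000, "stripped": 200, "total": 300},
    {"fee": 2_000, "stripped": 250, "total": 250},  # legacy: total == stripped
    {"fee": 9_000, "stripped": 150, "total": 400},
]

def select(txs, limit=MAX_BLOCK_WEIGHT):
    chosen, used = [], 0
    # highest fee per weight first; skip anything that would exceed the limit
    ordered = sorted(
        txs,
        key=lambda t: t["fee"] / weight(t["stripped"], t["total"]),
        reverse=True,
    )
    for tx in ordered:
        w = weight(tx["stripped"], tx["total"])
        if used + w <= limit:
            chosen.append(tx)
            used += w
    return chosen, used
```

Because there is only one limit, "fee per weight" gives a total order on transactions; with two independent limits (size and weight) no such single rate exists, which is the multidimensional-optimization problem the post says the design avoids.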


This is economically unsound and flawed. Bastiat: What is seen and what is unseen - the unseen part has not been recognised by developers who specialise in codings and not economics.
I doubt you know anything about the background and specialties of the developers.   And, the stable fee behavior in the network today is a testament to Satoshi's ideas actually working here.

How, pray tell, does one validate a transaction, if one does not possess the signature data?
By validating everything else.  You don't validate the new features that the transaction is using which you don't understand (but which you know you don't understand)-- but they're also not relevant to you.

Quote
Quote
Quote
Are we to follow The SegWit Omnibus Changeset with a recommendation for each user to hold all their Bitcoin on a single address? Privacy be damned?
Segwit provides absolutely no pressure to use fewer addresses for managing your own coins.

I did not say that The SegWit Omnibus Changeset creates new pressure to use fewer addresses. I merely point out that any benefit to UTXO set size of The SegWit Omnibus Changeset is marginal at best.
I think you need to reread your own message: you claimed that segwit recommends each user hold all their Bitcoin in a single address, and I pointed out that segwit creates no benefit for a typical user to do that. You presented no argument that the UTXO benefit is marginal.

Quote
Yes, '1' is a variable. However, there is indeed a neutral option. And it is '1'. Because what is being paid for is space on the chain, in the form of bytes contained in a transaction.
Space on the chain is not relevant to the operating costs of nodes today and will be increasingly irrelevant to the operating costs in the future.  If the system has fees related to irrelevant imaginary costs instead of actual costs, its capacity will necessarily be much lower.

Quote
Or does your position not include an implication that Lightning is the step-function scalability jump that is enabled by The SegWit Omnibus Changeset?
Lightning isn't "enabled" by segwit (and note that lightning was proposed long before segwit), other than the fact that flaws in the protocol make it exceptionally difficult to write correct and safe automatic transaction-processing software, so lightning implementations have based their work so far on using segwit.

Variogam
Sr. Member
****
Offline Offline

Activity: 276
Merit: 254


View Profile
June 15, 2017, 12:13:51 AM
Last edit: June 15, 2017, 12:31:19 AM by Variogam
 #13

Segwit eliminates the size limit and replaces it with a weight limit.  The weight is computed so non-witness data (e.g. the above outputs) counts 4x as much towards the limit as witness data (signatures).  This fixes the incentives by balancing the costs of signatures vs outputs to be roughly equal.

The problem is that the 4MB weight is an arbitrary number which only roughly reflects the exact ratio for optimally balancing the costs of signatures vs outputs. Around 6MB weight (a 1:5 ratio) should be more optimal, no? Exactly (29+141)/29 MB weight, if Luke-Jr is right at:
https://github.com/btc1/bitcoin/pull/11

Quote from: luke-jr
Non-witness data creates the need to produce witness data in the future to verify it. Currently a 1 MB block can create UTXOs such that it requires four 1 MB blocks to clean it up! This is why it is logical to weigh witness data (which cleans UTXOs out) at 1/4 the size of non-witness data (which creates UTXOs that must later be cleaned up). It balances the creation with the cleaning: 80 weight units for each.

BTW, a 1:4 ratio is not equal to 1/4, so it would be more logical to have 5MB weight instead in the quote above...

Quote from: luke-jr
Thinking on this topic tonight, it occurred to me that spending actually uses not just the 72 bytes of signature, but also the pubkey size, and the input txid and index: a total of 141 bytes. On the creation side, there is also the amount, so about 29 bytes. This is a 5:1 ratio. An alternative to simply increasing the weight limit might be to apply the discount also to the input txid and index and/or adjusting the weight ratio to 1:5.
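The figures in this exchange can be checked with back-of-envelope arithmetic; the 29-byte creation cost and 141-byte spending cost are luke-jr's approximations from the quote above, not exact sizes.

```python
# Back-of-envelope check of the byte counts quoted from luke-jr
# (approximate sizes, as stated in the quote).
create_bytes = 29    # output side: amount plus script overhead
spend_bytes = 141    # input side: txid + index + pubkey + ~72-byte signature

ratio = spend_bytes / create_bytes          # ~4.86, i.e. roughly a 1:5 ratio
print(round(ratio, 2))

# The "(29+141)/29 MB weight" figure from the post above:
weight_mb = (create_bytes + spend_bytes) / create_bytes
print(round(weight_mb, 2))                  # ~5.86, i.e. "around 6MB weight"
```

So the quoted "around 6MB weight" follows from treating the full 141-byte spending cost as discounted, while the chosen 1:4 / 4MB parameters round that ratio down.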
jbreher
Legendary
*
Offline Offline

Activity: 3038
Merit: 1660


lose: unfind ... loose: untight


View Profile
June 15, 2017, 02:46:15 AM
 #14

How, pray tell, does one validate a transaction, if one does not possess the signature data?
By validating everything else.  You don't validate the new features that the transaction is using which you don't understand (but which you know you don't understand)-- but they're also not relevant to you.

What is this - some sort of bait and switch? You seem to be confused and conflating two separate sub-threads of this discussion. Shall we review?

You said:
The weight is computed so non-witness data (e.g. the above outputs) counts 4x as much towards the limit as witness data (signatures).  This fixes the incentives by balancing the costs of signatures vs outputs to be roughly equal.

It seems to me that you must be referring to the so-called prunability of the signatures - no? It certainly seems to be the case from the following, because when I responded:
It 'fixes' the incentives only in cases where nodes throw away data needed for any future validations of the transactions. Whether or not this 'pruning' is a good idea is a matter of reasonable debate.

you replied with:
Regardless of whether you prune the data or not, you don't need to access it, so it doesn't impact your working set size.

Maybe we are conflicted about what you mean by 'working set size'? When one is validating, the witness data is part of the data you are working with, right? (duh - otherwise there would be nothing to validate against). Accordingly, is it not part of the 'working set'? Or are we dealing with an arcane ephemeral definition-of-passion?

So again I ask: How, pray tell, does one validate a transaction, if one does not possess the signature data?

Quote
Quote
Quote
Quote
UTXO set size is some function of (# users) * (# of addresses holding value per user). As far as privacy is concerned, best practice dictates distributing your value across several addresses. Are we to follow The SegWit Omnibus Changeset with a recommendation for each user to hold all their Bitcoin on a single address? Privacy be damned?
Segwit provides absolutely no pressure to use fewer addresses for managing your own coins.

I did not say that The SegWit Omnibus Changeset creates new pressure to use fewer addresses. I merely point out that any benefit to UTXO set size of The SegWit Omnibus Changeset is marginal at best.
I think you need to reread your own message: you claimed that segwit recommends each user hold all their Bitcoin in a single address, and I pointed out that segwit creates no benefit for a typical user to do that. You presented no argument that the UTXO benefit is marginal.

Are you intentionally misrepresenting my position again? It is there plain as day. I asked you a question trying to get you to clarify your position, as I was somewhat incredulous that you would claim that which you seem to claim. I never claimed that "segwit recommends each user hold all their Bitcoin in a single address". I merely pointed out that your claim of SegWit fixing some imagined flaw in UTXO set size is marginal at best.

Let me ask you directly: Do you claim that UTXO set size is NOT some function of (# users) * (# of addresses holding value per user)?

Sorry - let me quote you directly, so I don't misrepresent you in turn. Your claim was:
Quote
Finding a way to address the UTXO incentives issue was a major sticking point and breakthrough ...

So again, with SegWit's 'way to address the UTXO incentives issue' shown to be marginal at best, where did I claim what you assert I claim?

Quote
Quote
Yes, '1' is a variable. However, there is indeed a neutral option. And it is '1'. Because what is being paid for is space on the chain, in the form of bytes contained in a transaction.
Space on the chain is not relevant to the operating costs of nodes today and will be increasingly irrelevant to the operating costs in the future.  If the system has fees related to irrelevant imaginary costs instead of actual costs its capacity will necessarily be much lower.

Finally something we can agree on. So why, then, are you choking the baby in the crib by artificially constraining transaction throughput again?

Quote
Quote
Or does your position not include an implication that Lightning is the step-function scalability jump that is enabled by The SegWit Omnibus Changeset?
Lightning isn't "enabled" by segwit (and note that lightning was proposed long before segwit), other than the fact that flaws in the protocol make it exceptionally difficult to write ...

Hey! Another thing with which I can agree! However, it is only tangentially related to my point. Let us again review, removing the portion that seems to have distracted you:

Quote
Quote
Preferential incentive for offchain transactions over onchain transactions? Myopic much?
Nothing about segwit is "preferential for offchain"-- if anything it's slightly the opposite.

I have yet to see a cogent argument which supports your case. Regardless, the challenge is to your assertion that "As to why Bitmain would complain about it,  I am aware of no sensible reason."

dinofelis
Hero Member
*****
Offline Offline

Activity: 770
Merit: 629


View Profile
June 15, 2017, 04:52:56 AM
 #15

The resource usage of blocks is limited by the programming of the system.  This is necessary because we live in a physical world where resources have costs and Bitcoin's security depends on participation being widespread so the costs can't be too great, along with other reasons.

You very well know that this is not true. Bitcoin's security depends on proof of work, and the decentralization of those providing it. Period. Bitcoin has a proof-of-work consensus, meaning the consensus decisions (ALL decisions) in bitcoin are taken SOLELY by proof of work (mining). Nobody else can decide anything cryptographically/technically in bitcoin: not any "majority of full nodes" (too easy to Sybil), and since bitcoin chose not to be a proof-of-stake coin, stakeholders have no cryptographic/technical decision power either.

Of course, bitcoin being a value token, the other power in the system is the economic one, held by the people who buy and sell coins on the market. They determine market value. But the decisions in bitcoin, on which its security depends, are taken solely by proof of work, and it was designed that way on purpose. So no non-mining entity contributes whatsoever to any decision or security in the system. I'm not saying whether this is good or bad; I'm saying that this is the way the system was designed, and is working.

The resource usage of proof of work is many, many orders of magnitude larger than any other form of practical resource usage in bitcoin, and that's by design too. So if you can waste all the resources needed for proof of work, you can always spare the insignificantly small additional resources needed to keep the network running.
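The proof-of-work rule the post leans on can be sketched in a few lines: a block header's double-SHA256 digest, read as an integer, must fall at or below a difficulty target, and nothing about who operates a node enters that check. This is a toy illustration (the target value and `mine` helper are invented for the example; real difficulty is encoded in the header's compact "bits" field):

```python
import hashlib

def meets_pow_target(header_bytes: bytes, target: int) -> bool:
    """True when the header's double-SHA256, as an integer, is at or
    below the difficulty target -- the whole of the PoW check."""
    digest = hashlib.sha256(hashlib.sha256(header_bytes).digest()).digest()
    return int.from_bytes(digest, "big") <= target

def mine(prefix: bytes, target: int) -> int:
    """Grind nonces until the header satisfies the target (toy miner)."""
    nonce = 0
    while not meets_pow_target(prefix + nonce.to_bytes(8, "little"), target):
        nonce += 1
    return nonce

# An easy toy target (roughly 1-in-16 hashes qualify) mines almost instantly.
easy_target = 2 ** 252
nonce = mine(b"toy-header", easy_target)
```

Lowering `target` is what raising difficulty means: fewer of the 2^256 possible digests qualify, so more hashing (more expended resources) is needed per block.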

Variogam
Sr. Member
****
Offline Offline

Activity: 276
Merit: 254


View Profile
June 15, 2017, 03:38:43 PM
 #16

Maybe we are conflicted about what you mean by 'working set size'? When one is validating, the witness data is part of the data you are working with, right? (duh - otherwise there would be nothing to validate against). Accordingly, is it not part of the 'working set'? Or are we dealing with an arcane, ephemeral definition-of-passion?

So again I ask: How, pray tell, does one validate a transaction, if one does not possess the signature data?

Maybe you have noticed that when you sync a full node from zero, it goes fast at first and only the last months take so long to finish. The reason is that, up until the last checkpoint, no transaction signature data is ever validated, so you don't need old witness data to sync up to that point. Only after the last checkpoint is reached is signature data validated, and so witness data is required as well. Basically SegWit helps with syncing new full nodes, although MUCH better performance would come from just downloading the latest UTXO set and syncing from that point.
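The sync behaviour described above can be sketched roughly as follows. The height constant and helper names are invented for illustration; real clients derive the assumed-valid point from a hard-coded block hash rather than a bare height, and perform far more checks than these stubs:

```python
ASSUMED_VALID_HEIGHT = 481_000  # illustrative cutoff, not a real parameter

def connect_block(height: int, block: dict) -> bool:
    """Validate one block during initial sync.

    Cheap structural checks always run; the expensive script/signature
    checks are skipped for blocks buried below the assumed-valid point,
    which is why early sync is fast and the last stretch is slow.
    """
    if not check_structure(block):        # merkle root, size, PoW, etc. (stubbed)
        return False
    if height > ASSUMED_VALID_HEIGHT:     # only recent blocks pay for signature checks
        if not verify_all_signatures(block):
            return False
    return True

# Stubs so the sketch runs; each stands in for real consensus checks.
def check_structure(block: dict) -> bool:
    return block.get("well_formed", True)

def verify_all_signatures(block: dict) -> bool:
    return block.get("sigs_valid", True)
```

Note the consequence for witness data: below the cutoff, a syncing node never looks at signatures, so it never needs the old witness data at all.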
franky1
Legendary
*
Offline Offline

Activity: 4214
Merit: 4475



View Profile
June 15, 2017, 03:50:54 PM
Last edit: June 15, 2017, 04:22:10 PM by franky1
 #17

You very well know that this is not true. Bitcoin's security depends on proof of work, and the decentralization of those providing it. Period. Bitcoin has a proof-of-work consensus, meaning the consensus decisions (ALL decisions) in bitcoin are taken SOLELY by proof of work (mining). Nobody else can decide anything cryptographically/technically in bitcoin: not any "majority of full nodes" (too easy to Sybil), and since bitcoin chose not to be a proof-of-stake coin, stakeholders have no cryptographic/technical decision power either.

you are so, so wrong

even if a PoW hash is correct,
nodes can reject/orphan blocks for many reasons.. and they do.
E.g. they reject blocks if a tx was maliciously added with no taint (no history)
E.g. they reject blocks if a tx was maliciously added which brings the data above the size limit
E.g. they reject blocks if a tx was maliciously added with no signature proof of ownership
and many other reasons

dino, please go do some research.. pools only collate data into a 'batch' of transactions known as a block.. they then get ASICs to form a special hash and secure it.. so it's easy to spot if alterations are made later, because the hashes won't match..

it's then the symbiotic network of nodes that validates the block is good, honest and follows the rules, where all the transactions held within are correct and the hashes match, amongst other things
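The gate described in that list, where a valid PoW hash alone is not enough for a node to accept a block, could be sketched like this (the dict fields and the size constant are illustrative; real nodes check dozens more rules):

```python
MAX_BLOCK_SIZE = 1_000_000  # pre-segwit serialized-size limit, in bytes

def accept_block(block: dict, utxo_set: set) -> bool:
    """A node accepts a block only when every consensus rule passes;
    a correct proof-of-work hash is necessary but not sufficient."""
    if not block["pow_valid"]:
        return False
    if block["size"] > MAX_BLOCK_SIZE:                    # oversize -> reject
        return False
    for tx in block["txs"]:
        if any(inp not in utxo_set for inp in tx["inputs"]):
            return False                                   # spends a coin with no history
        if not tx["sig_valid"]:
            return False                                   # no valid proof of ownership
    return True
```

Each `return False` corresponds to one of the rejection examples in the post: a transaction with no history, a block over the size limit, or a spend without a valid signature all sink the block regardless of its hash.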

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
jbreher
Legendary
*
Offline Offline

Activity: 3038
Merit: 1660


lose: unfind ... loose: untight


View Profile
June 15, 2017, 06:23:46 PM
 #18

Maybe we are conflicted about what you mean by 'working set size'? When one is validating, the witness data is part of the data you are working with, right? (duh - otherwise there would be nothing to validate against). Accordingly, is it not part of the 'working set'? Or are we dealing with an arcane, ephemeral definition-of-passion?

So again I ask: How, pray tell, does one validate a transaction, if one does not possess the signature data?

Maybe you have noticed that when you sync a full node from zero, it goes fast at first and only the last months take so long to finish. The reason is that, up until the last checkpoint, no transaction signature data is ever validated, so you don't need old witness data to sync up to that point. Only after the last checkpoint is reached is signature data validated, and so witness data is required as well.

Yes - this is yet another 'enhancement' added during the evolution of the QT client. However, this change is a fundamental violation of the trust model. In and of itself, such a change to the trust model is nothing to balk at. Indeed, it does bring concrete benefits to several use cases.

The Sin, however, is in claiming that using this client allows one to use Bitcoin in a trustless manner. If one does not validate all transactions from t=0, then one is outsourcing the chain of custody to others. Others that you must trust.

A change containing this tradeoff should have been advertised far, wide, and loudly.

Quote
Basically SegWit helps with syncing new full nodes, although MUCH better performance would come from just downloading the latest UTXO set and syncing from that point.

WTF does any of the validation-skipping done in several versions of the client have to do with your statement about SegWit?

With that out of the way...

So again I ask: How, pray tell, does one validate a transaction, if one does not possess the signature data?

Variogam
Sr. Member
****
Offline Offline

Activity: 276
Merit: 254


View Profile
June 15, 2017, 08:25:24 PM
 #19

The Sin, however, is in claiming that using this client allows one to use Bitcoin in a trustless manner. If one does not validate all transactions from t=0, then one is outsourcing the chain of custody to others. Others that you must trust.

It has worked this way for many years already, with both Core and BU not validating old signatures. You can hardly use Bitcoin in a fully trustless manner; everyone must trust something, be it the node software, the operating system, or the hardware it runs on - you can't build or check everything in full yourself.
jbreher
Legendary
*
Offline Offline

Activity: 3038
Merit: 1660


lose: unfind ... loose: untight


View Profile
June 15, 2017, 09:48:17 PM
 #20

The Sin, however, is in claiming that using this client allows one to use Bitcoin in a trustless manner. If one does not validate all transactions from t=0, then one is outsourcing the chain of custody to others. Others that you must trust.

It has worked this way for many years already, with both Core and BU not validating old signatures. You can hardly use Bitcoin in a fully trustless manner; everyone must trust something, be it the node software, the operating system, or the hardware it runs on - you can't build or check everything in full yourself.

I have a sign above the entrance to my lair:

Welcome to jbreher's Bitcoin node
Proudly checking every block-included transaction since 2011.

Just because you are incapable of operating in a trustless environment does not mean we all are.
