Author Topic: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF  (Read 21354 times)
iCEBREAKER
Legendary
*
Offline Offline

Activity: 2156
Merit: 1072


Crypto is the separation of Power and State.


View Profile WWW
March 16, 2016, 09:48:08 PM
 #81


I really don't understand why we need to force our beloved wallet devs through this complicated mess.   Cry
New address format? How to explain to users? All infrastructure needs to be upgraded... What a gargantuan task...  Cry
Why do we need segwit again?  Cry

Have you been in a cave for the last 6 months?  Did you miss https://bitcoincore.org/en/2016/01/26/segwit-benefits/ ?

Segwit has been explained in many ways, from technical BIPS to colorful info-graphics.

Most wallet and other infrastructure providers had little to no trouble adding segwit support, because they like the idea: https://bitcoincore.org/en/segwit_adoption/

jl777 is only whining about his difficulties because he doesn't like anything Core supports and because he's a terrible dev who never finishes a single project he starts (eg SuperNET).

I find the shadowy linkages between alt-coin scammer jl777 and Classic fascinating.  The bitco.in alliance between him, the DashHoles, and the Frap.doc crew make for an interesting demographic.


achow101
Moderator
Legendary
*
expert
Offline Offline

Activity: 3374
Merit: 6535


Just writing some code


View Profile WWW
March 16, 2016, 09:54:00 PM
 #82

I was told by gmax himself that a node that doesnt validate all signatures should call itself a fully validating node.
As long as it fully validates all of the NEW blocks and transactions that it receives. HISTORICAL blocks and the transactions within them are not validated because they are HISTORICAL and are tens of thousands of blocks deep.

Also, I am making an optimized bitcoin core and one of these optimizations is rejecting a tx whose contents doesnt match the txid. The thinking being that if the hashes dont match, there is no point in wasting time calculating the signature

Not sure what libsecp256k1's speed has to do with it, given that signature verification is still much slower than calculating SHA256.
And how are you checking the txids if they are not provided? A tx message can be sent unsolicited with a new transaction and it does not contain the txid. In fact, there is no network message that I could find that sends a transaction with its txid. Of course, I think it is safe to assume that if a node requested a specific transaction that it would check the hash of the data it received so that it knows whether that data is correct. But for unsolicited transactions, then the only way to verify them is to check the signature.
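As a rough illustration of that last point (a hedged Python sketch, not code from any implementation discussed in this thread): a txid is just the double SHA-256 of the serialized transaction, so there is only something to check it against when the node already knows which hash it asked for.

Code:
import hashlib

def txid_of(raw_tx: bytes) -> str:
    # txid = double SHA-256 of the serialized transaction,
    # conventionally displayed byte-reversed
    digest = hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()
    return digest[::-1].hex()

def matches_request(raw_tx: bytes, requested_txid: str) -> bool:
    # Only meaningful for a transaction we asked for (e.g. via getdata);
    # an unsolicited tx message carries no txid to compare against.
    return txid_of(raw_tx) == requested_txid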

So my point again, is that all witness data needs to be stored permanently for a full node that RELAYS historical blocks to a bootstrapping node. If we are to lose this, then we might as well make bitcoin PoS as that is the one weakness for PoS vs PoW. So if you are saying that we need to view bitcoin as fully SPV all the time with PoS level security for bootstrapping nodes, ok, with those assumptions lots and lots of space is saved.
No, when bootstrapping historical blocks the witness data is not required because it doesn't need to validate historical blocks. See above.

However, with such drastic assumptions I can (and have) already saved lots more space without adding a giant amount of new protocol and processing.

So this controversy has at least clarified that segwit INCREASES the size of the permanently needed data for fully validating and relaying node. Of course for SPV nodes things are much improved, but my discussion is not about SPV nodes.

So the powers that be can call me whatever names they want. I still claim that:

N + 2*numtx + numvins > N

And as such, the claim that segwit is a way to save permanent blockchain space is invalid. Now, the cost of 2*numtx+numvins is not that big, so maybe it is worth it for all the benefits we get.

However on the benefits claims, one of them is the utxo dataset is becoming a lot more manageable. this is irrelevant as that is a local inefficiency that can be optimized without any external effects. I have it down to 4 bytes of RAM per utxo, but I could make it smaller if needed

It just seems a lot of unsupported (or plain wrong) claims are made to justify the segwit softfork. And the most massive change by far is being slipped in as a minor softfork update?
If you are going to run your node continuously from now until the end of time, save all of the data relevant to the blocks and transactions that it receives, and call all of that data "permanent blockchain data", then yes, I think it does require more storage than a simple 2 MB fork.

Since when has anyone ever claimed that segwit is "a way to save permanent blockchain space"?

What I still dont understand is how things will work when a segwit tx is sent to a non-segwit node and that is spent to another non-segwit node. How will the existing wallets deal with that?
Since you keep saying stuff about sending transactions between nodes, I don't think you understand how Bitcoin transactions work. It isn't sending between things but creating outputs from inputs after proving that the transaction creator can spend from those inputs. The inputs of a transaction don't affect the outputs of a transaction except for the amounts.

A transaction that spends a segwit input can still create a p2pkh and p2pk output which current nodes and wallets understand. p2pkh and p2pk are two output types that wallets currently understand. Those p2pkh and p2pk outputs can be spent from just like every other p2pkh and p2pk output is now. That will not change. The inputs and the scriptsigs of spending from those outputs will be the exact same as they are today. Segwit doesn't change that.

Rather, segwit spends to a special script called a witness program. For backwards compatibility this script is wrapped into a p2sh address, another output type which current wallets know about and can send to.

Segwit wallets would instead always create p2sh addresses because that is the only way that segwit can implement witness programs to be backwards compatible. Those p2sh addresses are distributed normally but can only be spent from with a witness program.
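For illustration, a minimal sketch of that wrapping (assuming the P2SH-nested pay-to-witness-public-key-hash form; simplified Python, not taken from any wallet discussed here; hashlib's ripemd160 may be missing on OpenSSL builds without the legacy provider):

Code:
import hashlib

def hash160(data: bytes) -> bytes:
    # RIPEMD160(SHA256(data))
    return hashlib.new('ripemd160', hashlib.sha256(data).digest()).digest()

def p2sh_p2wpkh_scripts(pubkey: bytes):
    # The witness program, used as the P2SH redeem script: OP_0 <20-byte pubkey hash>
    redeem_script = b'\x00\x14' + hash160(pubkey)
    # What old wallets see and can pay to: an ordinary P2SH output,
    # OP_HASH160 <20-byte script hash> OP_EQUAL
    script_pubkey = b'\xa9\x14' + hash160(redeem_script) + b'\x87'
    return redeem_script, script_pubkey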

What happens if an attacker created segwit rawtransactions and sent them to non-segwit nodes? there are no attack vectors?
Then the attacker is just sending the owner of an address a bunch of Bitcoin. If it is a bunch of spam outputs, then it can be annoying, but that is something that people can already do today.

what about in zeroconf environments? how does a full relaying node mine a block with segwit inputs? or do existing full nodes cease to be able to mine blocks after segwit softfork?
Well firstly, full nodes don't mine blocks.

The data that composes the block is the same data that currently makes up a block. The header is the same. The coinbase transaction just gains an OP_RETURN output that commits the witness root to the blockchain. The transactions are in the current format. If a block is requested by another node that wants the witness data, then the block is sent with the transactions serialized in the witness serialization format.

And even a simpleton like me can understand how to increase blocksizes with a hardfork, so why not do that before adding massive new changes like segwit? especially since it is more space efficient and not prone to misunderstandings
And in the future, what is to say that simpletons will be able to understand segwit then? In the future, someone would still be saying that segwit is too complicated and that we should not use it. It would still be a large change and it would still be prone to misunderstandings. Nothing will change except that instead of increasing the block size limit from 1 MB to 2 MB, people will be clamoring to increase it from 2 MB to 4 MB. The situation would be literally the same.



If that was you asking in #bitcoin-dev earlier, you need to wait around a bit for an answer on IRC-- I went to answer but the person who asked was gone.  BIPs are living documents and will be periodically updated as the functionality evolves. I thought they were currently up to date but haven't checked recently; make sure to look for pull reqs against them that haven't been merged yet.
Yeah, I asked on #bitcoin-core-dev as achow101 (I go by achow101 pretty much everywhere else except here, although I am also achow101 here). I logged off of IRC because I went to sleep, probably should have asked it earlier.

I will look at the BIP pulls and see if there is anything there.



A question that still niggles me is segwit as a soft fork. I know that just dredges up the same old discussion about pros and cons of soft vs hard but for a simpleton such as me it seems that if the benefits of segwit are so clear, then compromising on the elegance of implementation in order to make it a soft fork seems a strange decision.
It was originally proposed as a hard fork, but someone (luke-jr I think) pointed out that it could be done as a soft fork. Soft forks are preferred because they are backwards compatible. In this case, the backwards compatibility is that if you run non-upgraded software, you can continue as you were and have no ill effect. You just won't be able to take advantage of the new functionality provided by segwit.

Alternatively, if this were done as a hard fork, then everyone would be required to upgrade in order to deploy segwit and then that would essentially force everyone to use segwit.

BlindMayorBitcorn
Legendary
*
Offline Offline

Activity: 1260
Merit: 1115



View Profile
March 16, 2016, 10:11:40 PM
 #83

I asked some of these questions 3 months ago.  Never got a decent answer.

Blockstream wants soft-forked SegWit to fix the malleability problems (that would be needed for the LN, if they ever get it to work), and to force ordinary p2p bitcoin users to subsidize the costs of complicated multisig transactions (ditto).  But these reasons do not seem to explain the urgency and energy that they are putting into the SegWit soft fork.  Maybe they have other undeclared reasons?  Perhaps they intend to stuff more data into the extension records, which they would not have to justify or explain since, being in the extension part, "ordinary users can ignore it anyway"?

As for SegWit being a soft fork, that is technically true; but a soft fork can do some quite radical changes, like imposing a negative interest (demurrage) tax, or raising the 21 million limit.  One could also raise the block size limit that way.  These tricks would all let old clients work for a while, but eventually everybody will be forced to upgrade to use coins sent by the new version.

You've come to the right place for answers, professor. Openness is our middle name!

Now that that's all settled: What's Stolfi on about here? The 75% discount?

gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4158
Merit: 8382



View Profile WWW
March 16, 2016, 10:22:57 PM
 #84

I was told by gmax himself that a node that doesnt validate all signatures should call itself a fully validating node.
A node not verifying signatures in blocks during the initial block download with years of POW on them is not at all equivalent to not verifying signatures _at all_.

I agree it is preferable to verify more-- but we live in the real world, not black and white land; and offering multiple trade-offs is essential to decentralized scalability. If there are only two choices (run a thin client and verify _nothing_, or run a maximally costly node and verify EVERYTHING), then large amounts of decentralization will be lost, because everyone who cannot justify or afford the full cost will have no option but to not run a full node at all. This makes it essential to support half steps-- it's better to allow people to choose to save resources and not verify months-old data-- which is very likely correct unless the system has failed-- since the alternative is them verifying nothing at all.

Quote
Also, I am making an optimized bitcoin core and one of these optimizations is rejecting a tx whose contents doesnt match the txid. The thinking being that if the hashes dont match, there is no point in wasting time calculating the signature                                        
Every piece of Bitcoin software does this.  It is a little obnoxious that you spend so much time talking about these optimizations you're "adding" which are basic behaviors that _every_ piece of Bitcoin software ever written has always done, as if you're the only person to have thought of them or how they distinguish this hypothetical node software you claim to be writing.                                    
                                                                                                                                            
Quote
However, with such drastic assumptions I can (and have) already saved lots more space without adding a giant amount of new protocol and processing.
Your claims of saved space (10GB) earlier on the list, were already five times larger than what Bitcoin Core already does... another case of failing to understand the state of the art while thinking that some optimization you just came up with is vastly better while it's actually inferior.                                                                                                            
                                                                                                                                            
Segwit is not about saving space for plain full nodes; the space is already saved in Core (if the user chooses to save it). As you note, local space savings can be done purely locally.  Segwit increases flexibility; fixes design flaws; saves space for nodes acting as SPV servers; and saves _bandwidth_; and none of these can be done as purely local changes.

Quote
I still claim that:
N + 2*numtx + numvins > N
As I pointed out, that is purely a product of whatever serialization an implementation chooses to store the data.

Quote
However on the benefits claims, one of them is the utxo dataset is becoming a lot more manageable. this is irrelevant as that is a local inefficiency that can be optimized without any external effects. I have it down to 4 bytes of RAM per utxo, but I could make it smaller if needed
Taking a hint from your earlier pedantry... It sounds like you have a long way to go... Bitcoin Core uses 0 bytes of RAM per UTXO. By comparison, the unreleased implementation you are describing is embarrassingly inefficient-- Bitcoin core is infinity fold better. Smiley

What I still dont understand is how things will work when a segwit tx is sent to a non-segwit node and that is spent to another non-segwit node. How will the existing wallets deal with that? What happens if an attacker created segwit rawtransactions and sent them to non-segwit nodes? there are no attack vectors? what about in zeroconf environments? how does a full relaying node mine a block with segwit inputs? or do existing full nodes cease to be able to mine blocks after segwit softfork?
jl777, I already responded to pretty much this question directly just above. It seems like you are failing to put in any effort to read these things, disrespecting me and everyone else in this thread; it makes it seem like responding to you further is a waste of time. Sad

The segwit transactions are non-standard to old nodes. This means that old nodes/wallets ignore them until they are confirmed-- they don't show them in the wallet, they don't relay them, they don't mine them, so even confusion about unconfirmed transactions is avoided.
If you don't understand the concept of transaction standardness, you can learn about it from a few minutes of reading the Bitcoin developer guide: https://bitcoin.org/en/developer-guide#non-standard-transactions and by searching around a bit.

This is a really good explanation, thanks for taking the time to write it up. My understanding of Bitcoin doesn't come direct from the code (yet!) I have to rely on second hand information. The information you just provided has really deepened my understanding of the purpose of the scripting system over and above "it exists, and it makes the transactions work herp" which probably helps address your final paragraph...
[...]

Indeed it does. I am sincerely sorry for being a bit abrasive there: I've suffered too much exposure to people who aren't willing to reconsider positions-- and I was reading a stronger argument into your post than you intended--, and this isn't your fault.

Quote
I'm trying not to get (too) sucked into the conspiracy theories on either side, I'm only human though so sometimes I do end up with five when adding together two and two.

A question that still niggles me is segwit as a soft fork. I know that just dredges up the same old discussion about pros and cons of soft vs hard but for a simpleton such as me it seems that if the benefits of segwit are so clear, then compromising on the elegance of implementation in order to make it a soft fork seems a strange decision.
It would be a perfectly reasonable question, if it were the case there was indeed a compromise here.

If segwit were to be a hardfork, what would it be?

Would it change how transaction IDs were computed, like elements alpha did? Doing so is conceptually simpler and might save 20 lines of code in the implementation... But it's undeployable: even as a hardfork-- it would break all software, web wallets, thin wallets, lite wallets, hardware wallets, block explorers-- it would break them completely, along with all presigned nlocktime transactions, all transactions in flight. It would add more than 20 lines of code in having to handle the flag day.  So while that design might be 'cleaner' conceptually, the deployment would be so unclean as to be basically inconceivable. Functionally it would be no better; in flexibility it would be no better.  No one has proposed doing this.

Would it instead do the same as it does now, but put the commitment someplace else in the block rather than in a coinbase transaction OP_RETURN-- at the top of the hashtree?  This is what Gavin Andresen proposed in response to segwit. This would be deployable as a lite-client-compatible semi-hardfork, like the blocksize increase. Would this be more elegant?

In that case... All that would change is the position of the commitment from one location to another. Writing the 32+small extra bytes of data in one place in the block rather than another place. It would not change the implementation except some constants about where it reads from. It would not change storage, it would not change performance. It wouldn't be the most logical and natural way to deploy it (the above undeployable method would be).  Because it would be a hard fork, all nodes would have to upgrade for it at the same time.  So if you're currently on 0.10.2 because you have business related patches against that version which are costly to rebase-- or just because you are prohibited from upgrading without a security audit, you'll be kicked off the network under the hard fork model when you don't upgrade by the flag day. Under the proposed deployment mechanism you can simply ignore it with no cost to you (beyond the general costs of being on an older version) and upgrade whenever it makes sense to do so-- maybe against 0.14 when there finally are some new features that you feel justify your upgrade, rather than paying the upgrade costs multiple times.  One place vs the other doesn't make a meaningful difference in the functionality, though I agree top 'feels' a little more orderly. But again, it doesn't change the functionality, efficiency or performance, and it wouldn't make the implementation simpler at all. And there is other data that would make more sense to move to the top (e.g. stxo/utxo commitments) which hasn't been designed yet, so if segwit was moved to the top now that commitment at the top would later need to be redesigned for these other things in any case.  It's not clear, even greenfield, that this would be more elegant than the proposal, and the deployment-- while not impossible for this one-- would be much less elegant and more costly.

So in summary:  the elegance of a feature must be considered holistically. We must think about the feature itself, how it interacts with the future, and-- critically-- the effect of deploying it.  Considered together, the segwit deployment proposed is clearly the most elegant approach.  If deployment were ignored, the elements alpha approach would be slightly preferable, but only slightly -- it makes no practical difference-- but it is so unrealistic to deploy that in Bitcoin today that no one has proposed it. One person did propose changing the commitment location; but that different location would only be possible in a hardfork, makes no functional difference for the feature, and would add significant amounts of deployment cost and risk.
jl777 (OP)
Legendary
*
Offline Offline

Activity: 1176
Merit: 1132


View Profile WWW
March 16, 2016, 10:35:28 PM
 #85

Segwit is not about saving space for plain full nodes; the space is already saved in Core (if the user chooses to save it). As you note, local space savings can be done purely locally.  Segwit increases flexibility; fixes design flaws; saves space for nodes acting as SPV servers; and saves _bandwidth_; and none of these can be done as purely local changes.
Again I apologize for not being smart enough to instantly understand all the changes segwit does and I was misled by errant internet posts that segwit saved HDD space for the blockchain.

thank you for clarifying that it wont save space for full nodes.

Also, my understanding now is that iguana can just treat the segwit tx as standard p2sh and with the caveat that until it fully processes the witness data, it would just need to trust that any such tx that are mined are valid.

I would debate with you on many claims you make that I dont agree with, but I see no point to debate with words. I will make an iguana release that will demonstrate my claims. Fair enough?

James

achow101
Moderator
Legendary
*
expert
Offline Offline

Activity: 3374
Merit: 6535


Just writing some code


View Profile WWW
March 16, 2016, 10:41:39 PM
 #86

Also, my understanding now is that iguana can just treat the segwit tx as standard p2sh and with the caveat that until it fully processes the witness data, it would just need to trust that any such tx that are mined are valid.
Yes. If you are following the standardness and validation rules that Bitcoin Core uses, then it should be a non-issue.

2112
Legendary
*
Offline Offline

Activity: 2128
Merit: 1065



View Profile
March 17, 2016, 12:14:33 AM
 #87

My point, perhaps poorly expressed, was that if you think these problems are 'not hard', you must have solutions in mind, no?  I'd be interested in hearing your ideas.  I am genuinely interested, not being sarcastic here.
It wasn't only me that had those solutions in mind. In fact they are already included in the "segregated witness" proposal, but without the "segregation" part. The "segregation" just splits the transaction in two parts. In fact one could come up with a deficient "segregated witness" proposal that wouldn't fix the discussed problems. They are orthogonal concepts.
 

Which solutions are you referring to here?

The same we discussed less than an hour ago; 9:20am vs. 10:10am.
The advantage of segwit is that it elegantly fixes a couple of other hard problems (malleability, O(n^2) sigops issue)
What about fixing those "other problems" (I don't want to say "hard", because IMO they aren't "hard" by themselves) without the segregation? Impossible or just not worth it?

johnyj
Legendary
*
Offline Offline

Activity: 1988
Merit: 1012


Beyond Imagination


View Profile
March 17, 2016, 12:30:17 AM
 #88

https://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/segregated-witness-and-its-impact-on-scalability/

Quote
There are still malleability problems that remain, like Bitcoin selecting which part of the transaction is being signed, like the sighash flags. This remains possible, obviously. That's something that you opt-in to, though. This directly has an effect on scalability for various network payment transaction channels and systems like lightning and others

IMO, segwit is a clean-up of the transaction format, but in order to do that without a hard fork, it uses a strange twin-block structure, which causes unnecessary complexity. A raised level of complexity typically opens many new attack vectors, and so far this has not been fully analyzed.

And the 75% discount on witness data also changes the economics of blockchain space, in a way that seems specially designed to benefit the lightning network and other such things.

In fact, if you do it in a hard fork, you can redesign the whole transaction format at will, no need to do so many different hacks everywhere to make old nodes unaware of the change (these nodes can work against upgraded nodes in certain cases, especially when some of the upgraded hashing power do a roll back)

RHA
Sr. Member
****
Offline Offline

Activity: 392
Merit: 250


View Profile
March 17, 2016, 12:39:38 AM
 #89

Also I made the mistake of making sure the transaction hash matches for a transaction. I had assumed that if the transaction hash doesnt match, it is invalid rawbytes. Are you saying that we dont need to verify that the transaction hashes match? As you know verifying signatures is very time consuming compared to verifying txid. So if verifying txid is not available anymore, that would dramatically increase the CPU load for any validating node.
Anymore? It was never done in the first place. Verifying the transaction has always meant checking the signatures, because creating and verifying signatures involves the hash of the transaction.
Also, I am making an optimized bitcoin core and one of these optimizations is rejecting a tx whose contents doesnt match the txid. The thinking being that if the hashes dont match, there is no point in wasting time calculating the signature

Also, I am making an optimized bitcoin core and one of these optimizations is rejecting a tx whose contents doesnt match the txid. The thinking being that if the hashes dont match, there is no point in wasting time calculating the signature
Every piece of Bitcoin software does this.  It is a little obnoxious that you spend so much time talking about these optimizations you're "adding" which are basic behaviors that _every_ piece of Bitcoin software ever written has always done, as if you're the only person to have thought of them or how they distinguish this hypothetical node software you claim to be writing.                                    

Can't you, gmaxwell and knightdk, settle on verifying txid at last?
It's really hard to get info on SegWit here if even such an obvious thing (one would think) gets contradictory answers. Wink
hhanh00
Sr. Member
****
Offline Offline

Activity: 467
Merit: 266


View Profile
March 17, 2016, 12:42:53 AM
 #90

Segwit is not about saving space for plain full nodes; the space is already saved in Core (if the user chooses to save it). As you note, local space savings can be done purely locally.  Segwit increases flexibility; fixes design flaws; saves space for nodes acting as SPV servers; and saves _bandwidth_; and none of these can be done as purely local changes.
Again I apologize for not being smart enough to instantly understand all the changes segwit does and I was misled by errant internet posts that segwit saved HDD space for the blockchain.

thank you for clarifying that it wont save space for full nodes.

Also, my understanding now is that iguana can just treat the segwit tx as standard p2sh and with the caveat that until it fully processes the witness data, it would just need to trust that any such tx that are mined are valid.

I would debate with you on many claims you make that I dont agree with, but I see no point to debate with words. I will make an iguana release that will demonstrate my claims. Fair enough?

James

The problem is that you lost a lot of credibility by making your earlier claims, and now it'll be hard to take your software seriously. Basically, you are asking us to check out your rocket after you argued against the laws of gravity.

AliceGored
Member
**
Offline Offline

Activity: 117
Merit: 10


View Profile
March 17, 2016, 12:44:29 AM
 #91

I asked some of these questions 3 months ago.  Never got a decent answer.

Blockstream wants soft-forked SegWit to fix the malleability problems (that would be needed for the LN, if they ever get it to work), and to force ordinary p2p bitcoin users to subsidize the costs of complicated multisig transactions (ditto).  But these reasons do not seem to explain the urgency and energy that they are putting into the SegWit soft fork.  Maybe they have other undeclared reasons?  Perhaps they intend to stuff more data into the extension records, which they would not have to justify or explain since, being in the extension part, "ordinary users can ignore it anyway"?

As for SegWit being a soft fork, that is technically true; but a soft fork can do some quite radical changes, like imposing a negative interest (demurrage) tax, or raising the 21 million limit.  One could also raise the block size limit that way.  These tricks would all let old clients work for a while, but eventually everybody will be forced to upgrade to use coins sent by the new version.

You've come to the right place for answers, professor. Openness is our middle name!

Now that that's all settled: What's Stolfi on about here? The 75% discount?

The discount is the question you won't get a good answer for. Fundamental economics of Bitcoin, price per byte, changed drastically, with a soft fork.
jl777 (OP)
Legendary
*
Offline Offline

Activity: 1176
Merit: 1132


View Profile WWW
March 17, 2016, 12:45:35 AM
 #92

Segwit is not about saving space for plain full nodes; the space is already saved in Core (if the user chooses to save it). As you note, local space savings can be done purely locally.  Segwit increases flexibility; fixes design flaws; saves space for nodes acting as SPV servers; and saves _bandwidth_; and none of these can be done as purely local changes.
Again I apologize for not being smart enough to instantly understand all the changes segwit does and I was misled by errant internet posts that segwit saved HDD space for the blockchain.

thank you for clarifying that it wont save space for full nodes.

Also, my understanding now is that iguana can just treat the segwit tx as standard p2sh and with the caveat that until it fully processes the witness data, it would just need to trust that any such tx that are mined are valid.

I would debate with you on many claims you make that I dont agree with, but I see no point to debate with words. I will make an iguana release that will demonstrate my claims. Fair enough?

James

The problem is that you lost a lot of credibility by making your earlier claims, and now it'll be hard to take your software seriously. Basically, you are asking us to check out your rocket after you argued against the laws of gravity.

N + 2*numtxids + numvins > N

I still claim that is true, not sure how that loses me any credibility


achow101
Moderator
Legendary
*
expert
Offline Offline

Activity: 3374
Merit: 6535


Just writing some code


View Profile WWW
March 17, 2016, 01:10:19 AM
 #93

N + 2*numtxids + numvins > N

I still claim that is true, not sure how that loses me any credibility
I believe I have forgotten to address this. Can you please explain how you are getting this?

AFAIK the txids aren't in any structure used by Bitcoin except in the inventories. Those might be stored, depending on the implementation. However, when it comes to the wtxids, there is absolutely no reason to store them. Their sole purpose is to hash all of the data in a segwit transaction so that it can be committed to by the witness root hash in the coinbase transaction. There is no need to store the wtxids since nothing ever references them.

Where are you getting numvins from?

Anyways, your formula is wrong if you assume that the regular txid is currently being stored. Rather it should be

N + wtxid + numvins > N

and that is only if you are going to store wtxids which are not necessary to store anyways.
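To make the wtxid's role concrete, here is a hedged sketch (simplified Python; not the exact BIP141 rules) of how wtxids feed the witness root committed in the coinbase and are then discarded:

Code:
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def witness_merkle_root(wtxids: list) -> bytes:
    # The coinbase's slot in this tree is all zeros; only the resulting
    # root is kept (committed in a coinbase OP_RETURN output), so the
    # individual wtxids never need to be stored.
    level = [b'\x00' * 32] + list(wtxids)
    while len(level) > 1:
        if len(level) % 2:                 # odd count: duplicate the last hash
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]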

jl777 (OP)
Legendary
*
Offline Offline

Activity: 1176
Merit: 1132


View Profile WWW
March 17, 2016, 01:24:14 AM
 #94

N + 2*numtxids + numvins > N

I still claim that is true, not sure how that loses me any credibility
I believe I have forgotten to address this. Can you please explain how you are getting this?

AFAIK the txids aren't in any structure used by Bitcoin except in the inventories. Those might be stored, depending on the implementation. However, when it comes to the wtxids, there is absolutely no reason to store them. Their sole purpose is to hash all of the data in a segwit transaction so that it can be committed to by the witness root hash in the coinbase transaction. There is no need to store the wtxids since nothing ever references them.

Where are you getting numvins from?

Anyways, your formula is wrong if you assume that the regular txid is currently being stored. Rather it should be

N + wtxid + numvins > N

and that is only if you are going to store wtxids which are not necessary to store anyways.
I was told the extra space needed was 2 bytes per segwit tx plus 1 byte per vin, though maybe the 1 byte per vin can be reduced to 1 bit. Not sure how that is possible without new script opcodes, so maybe that is a possibility in the fullness of time sort of thing.

Regardless, the total space needed is more for a segwit tx than a normal tx; this is confirmed by wuille, lukejr and gmaxwell.

Now I never said segwit wasn't impressive tech, as that is quite a small overhead. My point is that segwit does not reduce the permanent space needed, and if you feel that the HDD space needed to store the blockchain (or the data that needs to be shared between full nodes) is a factor that is important to scalability, then segwit does not help scalability regarding those two factors.
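To put a rough scale on the per-transaction and per-input figures quoted above (hypothetical block composition; illustrative arithmetic, not numbers from the thread):

Code:
# Assumed block: 1 MB of legacy transaction data (N), 2,500 txs, 5,000 inputs.
base_block_bytes = 1_000_000
num_txs, num_vins = 2_500, 5_000
extra = 2 * num_txs + num_vins          # 2 bytes per tx + 1 byte per vin, as quoted above
print(extra, extra / base_block_bytes)  # 10000 bytes, 0.01 (about 1% of N)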

I do not speak about any other factors, only the permanent space used. Originally I was told that segwit did everything, including allowing improved scalability, and what confused me was that it was presented in a way that led me (and many others) to believe that segwit reduced the permanent storage needed.

Now that this is clarified, that segwit does not reduce the space needed and that the segwit softfork will force any node that wants to be able to validate segwit tx to also upgrade to segwit, I think the rest is about implementation details.

And maybe someone can clarify the text on the bitcoincore.org site that presents segwit as curing cancer and world hunger?

James

BlindMayorBitcorn
Legendary
*
Offline Offline

Activity: 1260
Merit: 1115



View Profile
March 17, 2016, 01:28:13 AM
 #95

I asked some of these questions 3 months ago.  Never got a decent answer.

Blockstream wants soft-forked SegWit to fix the malleability problems (that would be needed for the LN, if they ever get it to work), and to force ordinary p2p bitcoin users to subsidize the costs of complicated multisig transactions (ditto).  But these reasons do not seem to explain the urgency and energy that they are putting into the SegWit soft fork.  Maybe they have other undeclared reasons?  Perhaps they intend to stuff more data into the extension records, which they would not have to justify or explain since, being in the extension part, "ordinary users can ignore it anyway"?

As for SegWit being a soft fork, that is technically true; but a soft fork can do some quite radical changes, like imposing a negative interest (demurrage) tax, or raising the 21 million limit.  One could also raise the block size limit that way.  These tricks would all let old clients work for a while, but eventually everybody will be forced to upgrade to use coins sent by the new version.

You've come to the right place for answers, professor. Openness is our middle name!

Now that that's all settled: What's Stolfi on about here? The 75% discount?

The discount is the question you won't get a good answer for. Fundamental economics of Bitcoin, price per byte, changed drastically, with a soft fork.

How come? Huh

achow101
Moderator
Legendary
*
expert
Offline Offline

Activity: 3374
Merit: 6535


Just writing some code


View Profile WWW
March 17, 2016, 01:39:42 AM
 #96

I was told the extra space needed was 2 bytes per segwit tx plus 1 byte per vin, though maybe the 1 byte per vin can be reduced to 1 bit. Not sure how that is possible without new script opcodes, so maybe that is a possibility in the fullness of time sort of thing.
I think it might actually be 33 bytes per vin because of the implementation being used, which does not introduce a new address type. This is so that the p2sh script will still verify as true to old nodes. It is a 0 byte followed by a 32-byte hash of the witness script.

Regardless, the total space needed is more for a segwit tx than a normal tx; this is confirmed by wuille, lukejr and gmaxwell.

Now I never said segwit wasn't impressive tech, as that is quite a small overhead. My point is that segwit does not reduce the permanent space needed, and if you feel that the HDD space needed to store the blockchain (or the data that needs to be shared between full nodes) is a factor that is important to scalability, then segwit does not help scalability regarding those two factors.
And I don't think that anybody has ever said that it would reduce the space needed to store it. If you are believing everything you read on the internet, you need a reality check. When you read these things, make sure that they are actually backed up by reputable sources e.g. the technical papers.

I do not speak about any other factors, only the permanent space used. Originally I was told that segwit did everything, including allowing improved scalability, and what confused me was that it was presented in a way that led me (and many others) to believe that segwit reduced the permanent storage needed.
Could you cite the article(s) which did that? If it was something on bitcoin.org or bitcoincore.org then that could be fixed.

Now that this is clarified, that segwit does not reduce the space needed and that the segwit softfork will force any node that wants to be able to validate segwit tx to also upgrade to segwit, I think the rest is about implementation details.
Sure. Any other questions about implementation?

And maybe someone can clarify the text on the bitcoincore.org site that presents segwit as curing cancer and world hunger?
Does it portray segwit that positively? I read it and it didn't seem that way to me.

gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4158
Merit: 8382



View Profile WWW
March 17, 2016, 01:44:50 AM
 #97

What about fixing those "other problems" (I don't want to say "hard", because IMO they aren't "hard" by themselves) without the segregation? Impossible or just not worth it?
A strong malleability fix _requires_ segregation of signatures.

A less strong fix could be achieved without it if generality is abandoned (e.g. only works for a subset of script types, rather than all without question) and a new cryptographic signature system (something that provides unique signatures, not ECC signatures) was deployed.

And even with giving up on fixing malleability for most smart contracts, it's very challenging to be absolutely sure that a specific instance is actually non-malleable. This can be seen in the history of BIP62-- where at several points it was believed that it addressed all forms of malleability for the subset of transactions it attempted to fix, only to  later discover that there were additional forms.  If a design is inherently subject to malleability but you hope to fix it by disallowing all but one possible representation there is a near endless source of ways to get it wrong.

Segregation removes that problem. Scripts using segwit achieve a strong base level of non-malleability without doubt or risk of getting it wrong, both in design and by script authors. And only segregation applies to all scripts, not just a careful subset of "inherently non-malleable rules".

Getting signatures out from under TXIDs is the natural design to prevent problems from malleability and engineers were lamenting that Bitcoin didn't work that way as far back as 2011/late-2012.
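A toy illustration of that point (hedged Python sketch; the byte strings are placeholders, not real transaction serialization): if the txid hashes the signature bytes, a third party can re-encode the signature and change the txid; if the witness is excluded from the txid, the same tweak changes nothing.

Code:
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# 'body' stands in for everything a txid should commit to; 'sig' for the
# signature bytes a third party can re-encode without knowing the key.
body = b'version|inputs|outputs|locktime'
sig = b'ecdsa-signature-bytes'
malleated_sig = sig[:-1] + b'X'

legacy_before = dsha256(body + sig)            # signature hashed into the txid
legacy_after = dsha256(body + malleated_sig)
segwit_before = dsha256(body)                  # witness kept out of the txid
segwit_after = dsha256(body)

print(legacy_before != legacy_after)   # True: the txid was malleated
print(segwit_before == segwit_after)   # True: the txid is unchanged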

Can't you, gmaxwell and knightdk, settle on verifying txid at last?
It's really hard to get info on SegWits here if even such an obvious thing (one would think) gets contradictory answers. Wink
Knightdk will tell you to defer to me if there is a conflict on such things.

But here there isn't really, I think-- we're answering different statements. I was answering "The thinking being that if the hashes dont match, there is no point in wasting time calculating the signature".

Knightdk is responding about verifying loose transactions: there is no "verify the transaction ID", because no ID is even sent. You have nothing to verify against. All you can do is compute the ID.

I was referring to processing blocks. Generally the first step of validating a block, after connecting it to a chain, is checking the proof of work. The second step is hashing the transactions in the block to verify that the merkle root in the header is consistent with the data you received. If it is not, the information is discarded before performing further processing. Unlike with a loose transaction, you have a block header, and can actually validate against something.
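A hedged sketch of that second step (simplified Python; real implementations must also guard against the duplicate-transaction corner case): recompute the merkle root from the txids and compare it to the root in the block header before doing any expensive signature work.

Code:
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids: list) -> bytes:
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:                 # odd count: Bitcoin duplicates the last hash
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def block_data_matches_header(txids: list, header_merkle_root: bytes) -> bool:
    # If this check fails, the block data is discarded before any
    # signature validation is attempted.
    return merkle_root(txids) == header_merkle_root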

In fact, if you do it in a hard fork, you can redesign the whole transaction format at will, no need to do so many different hacks everywhere to make old nodes unaware of the change (these nodes can work against upgraded nodes in certain cases, especially when some of the upgraded hashing power do a roll back)
No, you can't-- not if you live in a world with other people in it.  The spherical cow "hardforks can change anything" ignores that a hardfork that requires all users shutting down the Bitcoin network, destroying all in flight transactions, and invalidating presigned transactions (thus confiscating some amount of coins) will just not be deployed.

Last year I tried proposing an utterly technically simple hard fork to fix the time-warp vulnerability and provide extranonce in the block header using the prev-hash bits that are currently always forced to zero (often requested by miners and ASIC makers-- and important for avoiding hardcoding block logic in asics) and it was _vigorously_ opposed by Mike Hearn and Gavin Andresen-- because it would have required that smartphone wallets upgrade to fix their header checks and difficulty calculation.  ... and that was for something that would be just a well-contained four or five lines of code changed.

I hope that that change eventually happens; but given that it was attacked so aggressively by the two biggest advocates of "hard forks are no big deal", I can't imagine a radical backwards incompatible change to the transaction format happening; especially when the alternative is so easy and good that I'd prefer to use it for increased similarity even in an explicitly incompatible system.

The discount is the question you won't get a good answer for. Fundamental economics of Bitcoin, price per byte, changed drastically, with a soft fork.
What? It's an explicit goal. Transaction "size" in a particular serialization (which isn't necessarily used for transmission or storage) does not well reflect the costs of a transaction to the system. This has created a misalignment of incentives which has been previously misused (e.g. a miner creating blocks which expand the UTXO set size by almost a megabyte twiddling around with dust-spam (known private keys)).  

At the end of the day signatures are transmitted at most once to a node and can be pruned. But data in the UTXO set must be in perpetual online storage. Its size sets a hard lower bound on the amount of resources needed to run a node. The fact that the size limit doesn't reflect the true cost has been a long-term concern, and it's one of the biggest issues raised with respect to blocksize limits (even acknowledged by strong proponents of a blocksize increase: e.g. http://gavinandresen.ninja/utxo-uhoh (ignore anything in it about storing the UTXO set in RAM, no version of Bitcoin Core has ever done that; that was just some confusion on the part of the author)). Prior problems with UTXO bloating attacks forced the introduction of the "dust limit" standardness rule, which is an ugly hack to reduce the bleeding from this misalignment of incentives.

At the Montreal Scaling Bitcoin conference, fixing this costing imbalance was _the_ ray of light that got lots of people thinking that some agreement on a capacity bump could be had: if capacity could be increased while _derisking_ UTXO impact, or at least making it no worse-- then many of the concerns related to capacity increases would be satisfied.  So I guess it's no shock to see avowed long-time Bitcoin attackers like jstolfi particularly picking on this aspect of a fix as a measure to try to undermine the ecosystem.

One of the challenges coming out of Montreal was that it wasn't clear how to decide how the corrected costing should work. The "perfect" figures depend on the relative costs of storage, bandwidth, cpu, initial sync delays, etc., which differ from party to party and over time-- though the current size counting is clearly poor across the board. Segwit addressed that open parameter, because optimizing its capacity required a discount which achieved the dual effect of also fixing the misaligned costing.
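For concreteness, a hedged sketch of how such a discount is applied in the eventual BIP141-style accounting (illustrative figures, not numbers from this thread): witness bytes are counted at one quarter the cost of base bytes.

Code:
from math import ceil

def virtual_size(base_bytes: int, witness_bytes: int) -> int:
    # weight = 4 * base + witness, i.e. witness data gets the 75% discount
    weight = 4 * base_bytes + witness_bytes
    return ceil(weight / 4)

# e.g. an assumed transaction with 200 base bytes and 110 witness bytes:
print(virtual_size(200, 110))   # 228 vbytes instead of 310 raw bytes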

The claims that the discounts have something to do with lightning and blockstream have no substance at all.
(1) Lightning predates Segwit significantly.
(2) Lightning HTLC transactions have tiny signatures, and benefit less than many transaction styles (in other words the recosting should slightly increase their relative costs), though no one should care because channel closures are relatively rare. Transactions that do large multisigs would benefit more, because the current size model radically over-costs them relative to their total cost to Bitcoin nodes.
(3) Blockstream has no plans to make any money from running Lightning on Bitcoin in any case; we started funding some work on Lightning because we believed it was long-term important for Bitcoin and Mike Hearn criticized us for not funding it if we thought it important, because one of our engineers _really_ wanted to work on it himself, and because we were able to work out a business case for using it to make sidechains scalable too.

N + 2*numtxids + numvins > N
I still claim that is true, not sure how that loses me any credibility
In one post you were claiming 42 bytes per a one in / one out transaction, the other you appeared to be claiming 800 bytes.  In any case, even your formula depends on what serialization is used; one could choose one where it was smaller and not bigger. The actual amount of true entropy added is on the order of a couple bits per transaction (are segwit coins being spent or not and what script versions).

To characterize that as "SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY", when the same signaling will allow the use of new signature schemes that reduce the size of transactions on average about _30%_ seems really deceptive, and it makes me sad that you're continuing with this argument even after having your misunderstandings corrected.

I thought you said you were actually going to write the software you keep talking about and speak through results, rather than continuing the factually incorrect criticisms you keep making of software and designs which you don't care to spend a minute to learn the first thing about? We're waiting.

In the mean time: Shame on you, and shame on you for having no shame.
jl777 (OP)
Legendary
*
Offline Offline

Activity: 1176
Merit: 1132


View Profile WWW
March 17, 2016, 02:02:35 AM
 #98

N + 2*numtxids + numvins > N
I still claim that is true, not sure how that loses me any credibility
In one post you were claiming 42 bytes per a one in / one out transaction, the other you appeared to be claiming 800 bytes.  In any case, even your formula depends on what serialization is used; one could choose one where it was smaller and not bigger. The actual amount of true entropy added is on the order of a couple bits per transaction (are segwit coins being spent or not and what script versions).

To characterize that as "SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY", when the same signaling will allow the use of new signature schemes that reduce the size of transactions on average about _30%_ seems really deceptive, and it makes me sad that you're continuing with this argument even after having your misunderstandings corrected.

I thought you said you were actually going to write the software you keep talking about and speak through results, rather than continuing the factually incorrect criticisms you keep making of software and designs which you don't care to spend a minute to learn the first thing about? We're waiting.

In the mean time: Shame on you, and shame on you for having no shame.
I corrected my mistaken estimates and I made it clear I didnt know the exact overheads. I did after all just start looking into segwit yesterday. Unlike you, I do make mistakes, but when I understand my mistake, I admit it. Maybe you can understand the limitations of mortals who are prone to make errors.

Last I was told, the vinscript that would otherwise be in the normal 1MB blockchain needs to go into the witness area. Is that not correct? If it goes from the 1MB space to the witness space, how is that 30% smaller? (I am talking about permanent storage for full relaying/verifying nodes)

I only responded to knightdk's questions, should I have ignored his direct question?

luke-jr told me it takes 2 bytes per tx and 1 byte per vin extra using segwit as opposed to a 2MB hardfork. I thought you also confirmed this. Now you are saying that using segwit reduces the total permanent space used by 30%, if that is really the case then I will change my view.

please explain to me how lukejr is wrong when he says it takes 2 bytes per tx and 1 byte per vin. i will update the title to match my understanding, without shame when I see my mistake. Imagine I am like rainman. I just care about the numbers

James

TooDumbForBitcoin
Legendary
*
Offline Offline

Activity: 1638
Merit: 1001



View Profile
March 17, 2016, 02:04:25 AM
 #99

Classic cargo cult now in bed with jl777, treating him like Jim Jones?  Amazing.  

jl777 views Classic as a headless steamroller with an empty driver's seat he can fill - accruing all glory and power.

Techno-eunuchs in Classic (Peter R, HashFastDefendantDoc, many others) are hypnotized by jl777's technobabble, and are now forced to type with one hand while they rub themselves up a genie with the other.

Classic/jl777 or GOP/Trump - which is more entertaining/sad/doomed?
achow101
Moderator
Legendary
*
expert
Offline Offline

Activity: 3374
Merit: 6535


Just writing some code


View Profile WWW
March 17, 2016, 02:10:07 AM
 #100

luke-jr told me it takes 2 bytes per tx and 1 byte per vin extra using segwit as opposed to a 2MB hardfork. I thought you also confirmed this. Now you are saying that using segwit reduces the total permanent space used by 30%, if that is really the case then I will change my view.

please explain to me how lukejr is wrong when he says it takes 2 bytes per tx and 1 byte per vin. i will update the title to match my understanding, without shame when I see my mistake. Imagine I am like rainman. I just care about the numbers
Where did luke-jr tell you this? Did he explain why? I don't understand the 1 byte per vin part and would like to see the explanation for it.

What gmaxwell is saying is that segwit allows for future upgrades. One of those future upgrades could be an upgrade to a different signature scheme which does have the 30% reduction.
