881  Bitcoin / Development & Technical Discussion / Re: are there any actual stats on chain reorgs, by depth? on: March 19, 2016, 12:26:03 AM
It seems 10 blocks at current hashrate would be very rare

Extremely! This is closely related to the odds of pulling off a double-spend 10 blocks deep. D&T explained it nicely at https://bitcoin.stackexchange.com/questions/1170/why-is-6-the-number-of-confirms-that-is-considered-secure.
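For anyone who wants to reproduce those odds, here is a minimal sketch of the attacker catch-up probability from section 11 of the bitcoin whitepaper (the same math behind that stackexchange answer); the 10% attacker hashrate is just an illustrative choice:

Code:
from math import exp, factorial

def attacker_success(q, z):
    """Probability an attacker with hashrate fraction q ever overtakes
    an honest chain that is z blocks ahead (whitepaper, section 11)."""
    p = 1.0 - q
    if q >= p:
        return 1.0          # a majority attacker always catches up eventually
    lam = z * (q / p)
    return 1.0 - sum(
        (lam ** k) * exp(-lam) / factorial(k) * (1.0 - (q / p) ** (z - k))
        for k in range(z + 1)
    )

print(attacker_success(0.10, 10))   # ~0.0000012, matching the whitepaper's table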

The worst was the value overflow bug; that reorg was around 53 blocks deep. https://en.bitcoin.it/wiki/Value_overflow_incident

My node only goes back to 2012, so I missed out on the overflow fork. But I have the next big one. My node saw the .7/.8 fork at a reorg of 30:

Code:
  {
    "height": 225459,
    "hash": "000000000000036e459f26b1dc02683c9beeb061eedf18689995143354467245",
    "branchlen": 30,
    "status": "valid-headers"
  },


Feel free to analyze my data at http://pastebin.com/LZxst5vD.
Fantastic!! Thanks for this invaluable history; 2012 is quite a ways back. Also, reorgs from bugs/new versions are different from "normal" reorgs caused by propagation,

and the hashrate was a small fraction back around block 225K. Not sure, but I think it is harder to reorg now than then, if only because the tech leaps in hashrate from CPU to GPU to FPGA to ASIC are already done.

It seems the vast majority are just 1 deep,
around 10 were 2 deep,
and just a few were 3 deep.

The 6-block reorg seems to be an outlier, along with the 30. For all the talk about eventual consensus, with bitcoin it looks like 99.999% of the time consensus is reached within 3 blocks

unless your node is special.
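For anyone who wants to tabulate their own node, a minimal sketch (assuming a getchaintips-style JSON dump like the pastebin above; the filename is hypothetical) that tallies stale branches by depth:

Code:
import json
from collections import Counter

with open('chaintips.json') as f:      # hypothetical dump of getchaintips output
    tips = json.load(f)

# every non-active tip is a branch the node abandoned; branchlen is its depth
depths = Counter(t['branchlen'] for t in tips if t['status'] != 'active')
for depth in sorted(depths):
    print(depths[depth], 'branch(es) of depth', depth)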

With propagation of around 15 seconds, though, it seems that other than at the edge of a block, it would take three fast blocks in a row to require a 3-block reorg. Maybe that 6-block reorg was an actual attack? It seems way out of bounds, and both your node and 2112's have it, so it is unlikely to be some local connectivity issue.

Since nowadays the miners are all probably doing their best to push new blocks as fast as possible, and the next-gen ASICs are probably not going to be much more than a 2x increase, maybe as little as 5 blocks would be enough. But there isn't much difference between 5 and 10, so it is probably best to set the backstop at 10 blocks.
882  Bitcoin / Development & Technical Discussion / Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY on: March 19, 2016, 12:08:27 AM
The modified hash only applies to signature operations initiated from witness data, so signature operations from the base block will continue to require lower limits.

The way it is worded makes it sound fantastic...

However, I couldn't find info about the witness data's immunity from these attacks. Are you saying that signature attacks are not possible inside the witness data?

Clearly, if signatures are moved from location A to location B, then signature attacks are no longer possible in location A. OK, that is good. But what about location B?

Are sigs in the witness data immune from malicious tx with lots of sigs? It is strange this isn't specifically addressed. Maybe it's just me and my low reading comprehension. But all the text on that segwit marketing page seems quite one-sided and of the form:

###
things are removed from the base block, so now there are no problems with the base block, without addressing whether the problems that used to be in the base block are actually solved or just moved into the witness data.
###

We could easily say SPV solves all signature attack problems: just make it so your node doesn't do much at all and it avoids all these pesky problems. But the important issue to many people is the effect on full nodes. And by full, I mean a node that doesn't prune, relays, validates signatures and enables other nodes to bootstrap.

Without that, doesn't bitcoin's security model degrade to the PoS level? I know how much you hate PoS.

James
883  Bitcoin / Development & Technical Discussion / Re: are there any actual stats on chain reorgs, by depth? on: March 18, 2016, 11:26:15 PM
What was the max depth on the one with 165 branches?
Code:
    {
        "height" : 363736,
        "hash" : "000000000000000013fe26675faa8f7dccd55ce5485bb6d0373fa66345901436",
        "branchlen" : 6,
        "status" : "valid-fork"
    },

Has anybody seen a 10 block reorg in the last year?

It sure seems like a good value for a backstop. The cost is a few minutes of recalc and offline time if a bigger reorg happens.
884  Bitcoin / Development & Technical Discussion / Re: are there any actual stats on chain reorgs, by depth? on: March 18, 2016, 11:00:54 PM
it could be 2112's node is special
The results I posted were from a portable laptop that had seen multiple ISPs but runs only occasionally. On the other laptop that had seen only one ISP but runs more often I had 165 branches instead of 15.

You'll need way more nodes to have meaningful statistics.

What was the max depth on the one with 165 branches?
885  Bitcoin / Development & Technical Discussion / Re: are there any actual stats on chain reorgs, by depth? on: March 18, 2016, 10:21:03 PM
Very nice! I didn't know it listed all reorgs.
So if I read that right, since block 395487 the max reorg depth has been 1.

If so, does that mean a node would need to have been running constantly for a year, and then run getchaintips, to get the entire year's history of reorgs?

It seems 10 blocks at current hashrate would be very rare

James
More or less. But remember that the reorganizations aren't really globally identical. You'll really want several long-running nodes on several ISPs to have meaningful statistics.

Also, testnet has way more and longer reorgs; it's very good for testing reorg handling.

Reorgs are totally dependent on each node, yes. But if your node is seeing a max reorg depth of 1, I would think any node that is actually connected to the internet and not being flooded would not be seeing reorgs of 10 blocks.

For testing I can just manually force a reorg; I just wanted to calibrate things with the right initial offset using current network characteristics.

3 seems too low, but 10 should be at most a monthly occurrence, which is fine for a several-minute recalc. But if anybody has more extensive data or much worse reorgs, please post!

it could be 2112's node is special
886  Bitcoin / Development & Technical Discussion / Re: are there any actual stats on chain reorgs, by depth? on: March 18, 2016, 09:54:43 PM
The official Bitcoin client has "getchaintips" command that gives you the output like this:
Code:
[
    {
        "height" : 403263,
        "hash" : "00000000000000000210fee63635c72da771c6b136a63f20ff9d058ce778618d",
        "branchlen" : 0,
        "status" : "active"
    },
    {
        "height" : 402922,
        "hash" : "00000000000000000692c289cc4a4e9726018d6a030ad10c84f70adc2e920597",
        "branchlen" : 1,
        "status" : "valid-fork"
    },
    {
        "height" : 402751,
        "hash" : "00000000000000000180755378324c16847b2f49af428c763b8ff411d51d7767",
        "branchlen" : 1,
        "status" : "valid-headers"
    },
    {
        "height" : 402723,
        "hash" : "000000000000000000263e1722b119e2547a4167b2f9d5bfc89f0d9dc28e0e18",
        "branchlen" : 1,
        "status" : "valid-headers"
    },
    {
        "height" : 399879,
        "hash" : "0000000000000000005642fd14f92213a8ff68aa5071588f84dd2f1c61e93d0e",
        "branchlen" : 1,
        "status" : "valid-fork"
    },
    {
        "height" : 399706,
        "hash" : "000000000000000006aef254935a76c8b321519dcd86d4f5f5092f51ef374251",
        "branchlen" : 1,
        "status" : "valid-fork"
    },
    {
        "height" : 399332,
        "hash" : "000000000000000006972f107e48524ce3e01752c24a282beafc7b046f29f221",
        "branchlen" : 1,
        "status" : "valid-headers"
    },
    {
        "height" : 399247,
        "hash" : "00000000000000000228b92c2c2c8a91673c45f6aa10fb9acabfb587f73ccc81",
        "branchlen" : 1,
        "status" : "valid-fork"
    },
    {
        "height" : 399138,
        "hash" : "00000000000000000382da0f1d9cb0bffc6edbbdd53ba8f87877ffe11263100d",
        "branchlen" : 1,
        "status" : "valid-headers"
    },
    {
        "height" : 399098,
        "hash" : "000000000000000001bb9eac7a0684d2d0481b9a7fe74568ada10ada5734c857",
        "branchlen" : 1,
        "status" : "valid-fork"
    },
    {
        "height" : 398487,
        "hash" : "0000000000000000008fb4567640f05fddc79d28fcb0601a187b8e5cb46cb84c",
        "branchlen" : 1,
        "status" : "valid-headers"
    },
    {
        "height" : 398217,
        "hash" : "0000000000000000078f1ed2154a4db3d7cfa5615c584585c07e3f53ddba6aa8",
        "branchlen" : 1,
        "status" : "valid-fork"
    },
    {
        "height" : 398205,
        "hash" : "0000000000000000067886752a9524c8043341b5adc4100fbe11e7e45af1020d",
        "branchlen" : 1,
        "status" : "valid-headers"
    },
    {
        "height" : 397486,
        "hash" : "00000000000000000158e2c73ded7c24f0087514d646ae59ff017347bfff4c15",
        "branchlen" : 1,
        "status" : "valid-fork"
    },
    {
        "height" : 395487,
        "hash" : "000000000000000002782e988a2da3aa252752f3accae8624242527ca99de06e",
        "branchlen" : 1,
        "status" : "headers-only"
    }
]
Very nice! I didn't know it listed all reorgs.
So if I read that right, since block 395487 the max reorg depth has been 1.

If so, does that mean a node would need to have been running constantly for a year, and then run getchaintips, to get the entire year's history of reorgs?

It seems 10 blocks at current hashrate would be very rare

James
887  Bitcoin / Development & Technical Discussion / are there any actual stats on chain reorgs, by depth? on: March 18, 2016, 08:12:21 PM
I know the odds of a deep reorg are small, but I was hoping some website is keeping stats on how often small reorgs happen. I guess it would be relative to a specific node, unless there are internal bitcoind stats that can be queried across all peers.

I am assuming that a 10-block-deep reorg is very rare, but wanted to get statistical confirmation.

It would incur a several-minute recalculation cost to regenerate the data for all the blocks after the reorg, so I don't want to set this at something that happens every day. But there is also no sense setting it for something expected to happen only once a year.

The shorter the delay, the faster things will work for realtime queries, as it minimizes the amount of data that needs to be scanned that is not in the parallel format. Though, now that I think about it, maybe I can make a special case for a smaller bundle at the end.

Regardless of the realtime calc speed, I currently delay creation of a bundle until 10 blocks after the last block of that bundle, and wanted to know the probability that it will need to be regenerated, based on real-world stats.
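To make the tradeoff concrete, a minimal sketch of that delayed-finalization rule (the names and the 2000-block bundle size are hypothetical, not iguana's actual code):

Code:
REORG_BACKSTOP = 10    # blocks that must bury a bundle before it is finalized
BUNDLE_SIZE = 2000     # hypothetical number of blocks per parallel-format bundle

def last_finalizable_bundle(chain_height):
    """Highest bundle index whose final block is buried at least
    REORG_BACKSTOP blocks deep; negative means none can be finalized yet."""
    safe_height = chain_height - REORG_BACKSTOP
    return max((safe_height + 1) // BUNDLE_SIZE - 1, -1)

print(last_finalizable_bundle(2008))   # -1: block 1999 is only 9 blocks deep
print(last_finalizable_bundle(2009))   #  0: bundle 0 (blocks 0..1999) is safe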

James
888  Bitcoin / Development & Technical Discussion / Re: Segwit details? on: March 17, 2016, 03:23:11 AM
OK, so maybe the fact that I am trying to analyze what the segwit softfork in the upcoming weeks will do explains my not understanding that future upgrades with a new signature scheme are part of the analysis... Would these changes require a hardfork, or can the usual softfork change the signature scheme? It is kind of hard to analyze something based on unspecified future upgrades with a different signature scheme.
Well, I think those changes could be soft forked in because they change the script version number, which I think would only affect the address type. I could be wrong though.

Maybe there can be just a single aggregated signature for all the tx in a block? I have no idea if that is possible, but if it is, then it could be added to the coinbase and then we won't need any witness data at all. Did I get that right?
I am fairly certain that this isn't possible, since it would require the private keys that can spend the inputs of all of the transactions to sign it. However, I could be wrong, as I am not well versed in many parts of cryptography. Maybe there is an algorithm which could combine all of the signatures; I don't know. You'll have to ask gmaxwell, he is the "chief cryptographer".
I would think that implementing a blockwide aggregated signature would at the least require a four-step process:

1. a block is mined to determine the tx that are in it
2. the txids of this protoblock are broadcast
3. nodes that are running and are party to a protoblock txid sign and return their shares to the miner(s)
4. the miner prunes out all the signatures that were aggregated and publishes the optimized block

Not sure if the libsecp256k1-zkp lib's schnorr routines are sufficient for this, and clearly it can't be done with all sigs. Of course, details of the timing and protocol for the above remain to be defined, like when the mining reward is earned, etc., so this is just a fantasy protocol for now.

I am not saying the above is possible, just that it is the minimum back-and-forth that would be needed, and it has some privacy issues, so some privacy enhancements are probably needed too. A bitmap of the aggregate signers would probably be needed, but that can be run-length encoded to take up a relatively small amount of space.
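To make the fantasy slightly more concrete, here is a toy sketch of naive Schnorr aggregation over a tiny Schnorr group (deliberately insecure parameters, purely to show why the extra nonce-commitment round trip above is needed). A real design, whatever libsecp256k1-zkp ends up providing, would also have to defend against rogue-key attacks, which this naive key sum does not:

Code:
import hashlib

# tiny Schnorr group: q divides p-1 and g generates the order-q subgroup
p, q, g = 607, 101, 64

def H(*parts):
    h = hashlib.sha256(b'|'.join(str(x).encode() for x in parts)).digest()
    return int.from_bytes(h, 'big') % q

# round 1: each signer i broadcasts X_i = g^x_i and a nonce commitment R_i = g^k_i
xs = [5, 7]                         # secret keys
ks = [11, 13]                       # secret nonces
Xs = [pow(g, x, p) for x in xs]
Rs = [pow(g, k, p) for k in ks]
X_agg = (Xs[0] * Xs[1]) % p         # naive key aggregation (rogue-key unsafe)
R_agg = (Rs[0] * Rs[1]) % p

# round 2: everyone signs the agreed message under the shared challenge
msg = 'protoblock txid list'
e = H(R_agg, X_agg, msg)
s = sum(k + e * x for x, k in zip(xs, ks)) % q

# a single (R_agg, s) pair now verifies for all signers at once
print(pow(g, s, p) == (R_agg * pow(X_agg, e, p)) % p)   # True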
889  Bitcoin / Development & Technical Discussion / Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF on: March 17, 2016, 02:31:12 AM
Classic cargo cult now in bed with jl777, treating him like Jim Jones?  Amazing.  

jl777 views Classic as a headless steamroller with an empty driver's seat he can fill - accruing all glory and power.

Techno-eunuchs in Classic (Peter R, HashFastDefendantDoc, many others) are hypnotized by jl777's technobabble, and are now forced to type with one hand while they rub themselves up a genie with the other.

Classic/jl777 or GOP/Trump - which is more entertaining/sad/doomed?
I did ask for an iguana childboard here, but was totally ignored on that, not even a rejection. Maybe if that hadn't just been ignored, I wouldn't be so active elsewhere. bitco.in gave me a child board the next day, so I am more active there. It is as simple as that.

I do not agree with Classic's position against RBF; that does not make sense to me. I still have not heard any rational explanation of how RBF breaks zeroconf. Zeroconf can't work when the blocks are full, as when the blocks are full you can't know when a tx in the mempool is likely to confirm. If anything, defining the RBF behavior allows a much better statistical model to predict when an unconfirmed tx will confirm.

Convince me with the math, or you can call me names and I stay unconvinced.

With RBF, I came here, asked some questions, got reasonable answers and made my analysis. I don't like the changing of the sequenceid into the RBF field, but it isn't the horrible devil's spawn that it is made out to be. However, the people there are much better behaved, and nobody trolled me or insinuated that I don't understand bitcoin at all.

James
890  Bitcoin / Development & Technical Discussion / Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY on: March 17, 2016, 02:22:32 AM
luke-jr told me it takes 2 bytes per tx and 1 byte per vin extra using segwit as opposed to a 2MB hardfork. I thought you also confirmed this. Now you are saying that using segwit reduces the total permanent space used by 30%; if that is really the case then I will change my view.

Please explain to me how luke-jr is wrong when he says it takes 2 bytes per tx and 1 byte per vin. I will update the title to match my understanding, without shame, when I see my mistake. Imagine I am like Rainman: I just care about the numbers.
Where did luke-jr tell you this? Did he explain why? I don't understand the 1 byte per vin part and would like to see the explanation for it.

What gmaxwell is saying is that segwit allows for future upgrades. One of those future upgrades could be an upgrade to a different signature scheme which does have the 30% reduction.
luke-jr: https://www.reddit.com/r/Bitcoin/comments/4amg1f/iguana_bitcoin_full_node_developer_jl777_argues/d12bcz7

Please note I didn't start any of the reddit threads; they seem to start spontaneously.

OK, so maybe the fact that I am trying to analyze what the segwit softfork in the upcoming weeks will do explains my not understanding that future upgrades with a new signature scheme are part of the analysis... Would these changes require a hardfork, or can the usual softfork change the signature scheme? It is kind of hard to analyze something based on unspecified future upgrades with a different signature scheme.

Maybe there can be just a single aggregated signature for all the tx in a block? I have no idea if that is possible, but if it is, then it could be added to the coinbase and then we won't need any witness data at all. Did I get that right?
891  Bitcoin / Development & Technical Discussion / Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY on: March 17, 2016, 02:02:35 AM
N + 2*numtxids + numvins > N
I still claim that is true, not sure how that loses me any credibility
In one post you were claiming 42 bytes per one-in/one-out transaction; in the other you appeared to be claiming 800 bytes. In any case, even your formula depends on what serialization is used; one could choose one where it was smaller and not bigger. The actual amount of true entropy added is on the order of a couple bits per transaction (are segwit coins being spent or not, and what script versions).

To characterize that as "SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY", when the same signaling will allow the use of new signature schemes that reduce the size of transactions on average about _30%_ seems really deceptive, and it makes me sad that you're continuing with this argument even after having your misunderstandings corrected.

I thought you said you were actually going to write the software you keep talking about and speak through results, rather than continuing the factually incorrect criticisms you keep making of software and designs which you don't care to spend a minute to learn the first thing about? We're waiting.

In the mean time: Shame on you, and shame on you for having no shame.
I corrected my mistaken estimates and I made it clear I didn't know the exact overheads. I did, after all, just start looking into segwit yesterday. Unlike you, I do make mistakes, but when I understand my mistake, I admit it. Maybe you can understand the limitations of mortals who are prone to make errors.

Last I was told, the vinscript that would otherwise be in the normal 1MB blockchain needs to go into the witness area. Is that not correct? If it goes from the 1MB space to the witness space, how is that 30% smaller? (I am talking about permanent storage for full relaying/verifying nodes)

I only responded to knightdk's questions; should I have ignored his direct question?

luke-jr told me it takes 2 bytes per tx and 1 byte per vin extra using segwit as opposed to a 2MB hardfork. I thought you also confirmed this. Now you are saying that using segwit reduces the total permanent space used by 30%; if that is really the case then I will change my view.

Please explain to me how luke-jr is wrong when he says it takes 2 bytes per tx and 1 byte per vin. I will update the title to match my understanding, without shame, when I see my mistake. Imagine I am like Rainman: I just care about the numbers.

James
892  Bitcoin / Development & Technical Discussion / Re: Segwit details? segwit wastes precious blockchain space permanently on: March 17, 2016, 01:24:14 AM
N + 2*numtxids + numvins > N

I still claim that is true, not sure how that loses me any credibility
I believe I have forgotten to address this. Can you please explain how you are getting this?

AFAIK the txids aren't in any structure used by Bitcoin except in the inventories. Those might be stored, depends on the implementation. However, when it comes to the wtxids, there is absolutely no reason to store them. Their sole purpose is to simply have a hash of all of the data in a segwit transaction and have that be applied to the witness root hash in the coinbase transaction. There is no need to store the wtxids since nothing ever references them.

Where are you getting numvins from?

Anyways, your formula is wrong if you assume that the regular txid is currently being stored. Rather it should be

N + wtxid + numvins > N

and that is only if you are going to store wtxids which are not necessary to store anyways.
I was told the extra space needed was 2 bytes per segwit tx plus 1 byte per vin, though maybe the 1 byte per vin can be reduced to 1 bit. Not sure how that is possible without new script opcodes, so maybe that is a possibility in the fullness of time.

Regardless, the total space needed is more for a segwit tx than a normal tx; this is confirmed by wuille, lukejr and gmaxwell.

Now, I never said segwit wasn't impressive tech, as that is quite a small overhead. My point is that segwit does not reduce the permanent space needed, and if you feel that the HDD space needed to store the blockchain (or the data that needs to be shared between full nodes) is a factor that is important to scalability, then segwit does not help scalability regarding those two factors.
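Just to put the quoted figures in perspective, a back-of-envelope sketch (the block composition here is hypothetical):

Code:
num_tx   = 2500     # transactions in a hypothetical full 1MB block
num_vins = 5000     # total inputs across those transactions
overhead = 2 * num_tx + 1 * num_vins
print(overhead)     # 10000 bytes, i.e. about 1% of the 1MB block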

I do not speak about any other factors, only the permanent space used. Originally I was told that segwit did everything, including allowing improved scalability, and what confused me was that it was presented in a way that led me (and many others) to believe that segwit reduced the permanent storage needed.

Now that it is clarified that segwit does not reduce the space needed, and that the segwit softfork will force any node that wants to be able to validate segwit tx to also upgrade to segwit, I think the rest is about implementation details.

And maybe someone can clarify the text on the bitcoincore.org site that presents segwit as curing cancer and world hunger?

James
893  Bitcoin / Development & Technical Discussion / Re: Segwit details? segwit wastes precious blockchain space permanently on: March 17, 2016, 12:45:35 AM
Segwit is not about saving space for plain full nodes; the space is already saved in Core (if the user chooses to save it). As you note, local space savings can be done purely locally. Segwit increases flexibility; fixes design flaws; saves space for nodes acting as SPV servers; and saves _bandwidth_; and none of these can be done as purely local changes.
Again I apologize for not being smart enough to instantly understand all the changes segwit makes; I was misled by errant internet posts claiming that segwit saved HDD space for the blockchain.

Thank you for clarifying that it won't save space for full nodes.

Also, my understanding now is that iguana can just treat segwit tx as standard p2sh, with the caveat that until it fully processes the witness data, it would just need to trust that any such tx that are mined are valid.

I would debate with you on many claims you make that I don't agree with, but I see no point in debating with words. I will make an iguana release that will demonstrate my claims. Fair enough?

James

The problem is that you lost a lot of credibility by making your claims earlier, and now it'll be hard to take your software seriously. Basically, you are asking us to check out your rocket after you argued against the laws of gravity.

N + 2*numtxids + numvins > N

I still claim that is true, not sure how that loses me any credibility

894  Bitcoin / Development & Technical Discussion / Re: address balances at specific block? on: March 16, 2016, 11:13:16 PM
Since nothing in the Bitcoin consensus algorithm works with balances, using them for comparison would be potentially unwise-- it's perfectly possible to have an incorrect 'balance' that is actually a latently corrupted state.  Bitcoin Core gettxoutsetinfo will give you a hash of the serialized utxo set for this kind of diagnostic purpose, though there is no text specification for the particular serialization it uses, you'd have to extract that from the implementation.
When does the (sum of vouts) - (sum of vins) for an address not match what it should be?

For things that use scripts with multisig, if statements, CLTV, etc., I just treat the p2sh hash as the "address" and don't deal with the probabilistic value allocations to the potential redeemers. I think even for the strange anyone-can-spend outputs, an output is either spent or not spent, so the difference of sums should work.

I have commissioned an independently tabulated set of balances that I will cross-check against. The plan is to analyze any that do not match, which will hopefully not be many. The gettxoutsetinfo call should be quite useful for that; I forgot about that one. Thanks.

After all, block explorers are displaying a balance (however potentially inaccurate it might be). I always assume my data is wrong until I get it verified. Granted, if the independently generated balances match iguana's output, it is still possible for something to slip through, but at least for an initial test it seems like a good place to start, and since the independent balances will be generated from bitcoin core, I know they are 100% correct.
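For reference, the difference-of-sums bookkeeping I mean is just this (a minimal sketch; the real data structures are far more compact):

Code:
from collections import defaultdict

balances = defaultdict(int)
utxos = {}    # (txid, vout index) -> (address, value)

def apply_tx(txid, vins, vouts):
    """vins: list of (prev_txid, prev_vout); vouts: list of (address, value).
    Coinbase transactions simply pass an empty vins list."""
    for prev in vins:                     # each spend subtracts from the payer
        addr, value = utxos.pop(prev)
        balances[addr] -= value
    for n, (addr, value) in enumerate(vouts):
        utxos[(txid, n)] = (addr, value)  # each output credits the payee
        balances[addr] += value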

James
895  Bitcoin / Development & Technical Discussion / Re: Segwit details? segwit wastes precious blockchain space permanently on: March 16, 2016, 10:35:28 PM
Segwit is not about saving space for plain full nodes; the space is already saved in Core (if the user chooses to save it). As you note, local space savings can be done purely locally. Segwit increases flexibility; fixes design flaws; saves space for nodes acting as SPV servers; and saves _bandwidth_; and none of these can be done as purely local changes.
Again I apologize for not being smart enough to instantly understand all the changes segwit makes; I was misled by errant internet posts claiming that segwit saved HDD space for the blockchain.

Thank you for clarifying that it won't save space for full nodes.

Also, my understanding now is that iguana can just treat segwit tx as standard p2sh, with the caveat that until it fully processes the witness data, it would just need to trust that any such tx that are mined are valid.

I would debate with you on many claims you make that I don't agree with, but I see no point in debating with words. I will make an iguana release that will demonstrate my claims. Fair enough?

James
896  Bitcoin / Development & Technical Discussion / Re: Segwit details? on: March 16, 2016, 09:22:46 PM
There is no such thing as "size"-- size is always a product of how you serialize it.  An idiotic implementation could store non-segwit transactions by prepending them with a megabyte of zeros-- would I argue that segwit saves a megabyte per transaction? No.  

It's likely that implementations will end up using an extra byte per scriptsig to code the script version, though they could do that more efficiently some other way... but who cares about a byte per input? It certainly doesn't deserve an ALL CAPS forum post title-- you can make some strained argument that you're pedantically correct; that doesn't make you any less responsible for deceiving people, quite the opposite because now it's intentional. And even that byte per input exists only for implementations that don't want to do extra work to compress it (and end up with ~1 bit per transaction).
Since you feel I am deceiving people by using uppercase, I changed it to lower case. The first responses I got did not make it clear the overhead was 2 bytes per tx and 1 byte per vin.

I was under the misunderstanding that segwit saved blockchain space; you know, cuz I am idiotic and believed stuff on the internet.

I am glad that now we all know that a 2MB hardfork would be more space efficient than segwit as far as permanent blockchain space.

What I still don't understand is how things will work when a segwit tx is sent to a non-segwit node and that is spent to another non-segwit node. How will the existing wallets deal with that? What happens if an attacker creates segwit rawtransactions and sends them to non-segwit nodes? Are there no attack vectors? What about in zeroconf environments? How does a full relaying node mine a block with segwit inputs? Or do existing full nodes cease to be able to mine blocks after the segwit softfork?

And even a simpleton like me can understand how to increase blocksizes with a hardfork, so why not do that before adding massive new changes like segwit, especially since it is more space efficient and not prone to misunderstandings?
897  Bitcoin / Development & Technical Discussion / Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY on: March 16, 2016, 09:11:41 PM
My reaction was based on the answers I was getting, and clearly it is a complex issue. Segwit is arguably more changes to bitcoin than all prior BIPs combined. I don't think anybody would say otherwise.
I agree with that, after all, there are 6 separate BIPs for it of which 4 (or 5?) are being implemented in one go.

Now please ignore the space savings for nodes that are not full nodes. I am assuming that to bootstrap a node it will need to get the witness data from somewhere, right? So it is needed permanently and thus part of the permanent HDD requirement.
Again, no. Full nodes don't have to validate the signatures of transactions in super old blocks. This is something that Bitcoin Core currently does and will continue to do. Since a syncing node doesn't check the signatures of transactions in historical blocks, those witnesses don't need to be downloaded, although I suppose they could be. There will of course be people who run nodes that store all of that data (I will probably be one of those people) in case someone wants it.

I still don't fully understand how the size of the truncated tx + witness data is as small as 2 bytes per tx + 1 byte per vin. But even if that is the case, my OP title is accurate, as N + 2*numtx + numvins is more than N.

that is the math I see.
From the viewpoint of a new node some years in the future, that data isn't needed and it most certainly won't be downloaded. But for now, yes, you will be storing that data, though it can be pruned away like the majority of the blockchain can be now. Keep in mind that a pruned node is still a full node. It still validates and relays every transaction and block it receives. Full nodes do not have to store the entire blockchain, they just need to download it. That extra data is also not counted in what is officially defined as the blockchain.

Also, I made the mistake of making sure the transaction hash matches for a transaction. I had assumed that if the transaction hash doesn't match, it is invalid raw bytes. Are you saying that we don't need to verify that the transaction hashes match? As you know, verifying signatures is very time-consuming compared to verifying the txid. So if verifying the txid is not available anymore, that would dramatically increase the CPU load for any validating node.
Anymore? It was never done in the first place. Verifying the transaction has always meant checking the signatures, because creating and verifying signatures involves the hash of the transaction.

Before I go making new threads about that, let us wait for some clarity on this issue.

I think if the witness data is assumed to be there permanently, then we don't increase the CPU load 10x or more by having to validate sigs instead of validating the txid, so it would be a moot point.
It won't increase the CPU load because that is what is currently being done and has always been done. In fact, signature validation in Bitcoin Core 0.12+ is significantly faster than in previous versions due to the use of libsecp256k1, which dramatically increased the performance of the validation.
I was told by gmax himself that a node that doesn't validate all signatures should not call itself a fully validating node.

Also, I am making an optimized bitcoin core, and one of those optimizations is rejecting a tx whose contents don't match the txid, the thinking being that if the hashes don't match, there is no point wasting time verifying the signature.

Not sure what libsecp256k1's speed has to do with it; signature verification is still much slower to compute than SHA256.

So my point, again, is that all witness data needs to be stored permanently by a full node that RELAYS historical blocks to a bootstrapping node. If we are to lose this, then we might as well make bitcoin PoS, as that is the one weakness of PoS vs PoW. So if you are saying that we need to view bitcoin as fully SPV all the time, with PoS-level security for bootstrapping nodes, OK; with those assumptions lots and lots of space is saved.

However, with such drastic assumptions I can (and have) already saved lots more space without adding a giant amount of new protocol and processing.

So this controversy has at least clarified that segwit INCREASES the size of the permanently needed data for a fully validating and relaying node. Of course for SPV nodes things are much improved, but my discussion is not about SPV nodes.

So the powers that be can call me whatever names they want. I still claim that:

N + 2*numtx + numvins > N

And as such, segwit as a way to save permanent blockchain space is an invalid claim. Now, the cost of 2*numtx + numvins is not that big, so maybe it is worth it for all the benefits we get.

However, on the benefits claims: one of them is that the utxo dataset becomes a lot more manageable. This is irrelevant, as that is a local inefficiency that can be optimized without any external effects. I have it down to 4 bytes of RAM per utxo, but I could make it smaller if needed.

It just seems a lot of unsupported (or plain wrong) claims are made to justify the segwit softfork. And the most massive change by far is being slipped in as a minor softfork update?

898  Bitcoin / Development & Technical Discussion / Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY on: March 16, 2016, 08:24:00 PM
I apologize for my technical ineptitudes.

I am trying to understand how segwit saves blockchain space, as that is how it is being marketed, with a softfork in the upcoming weeks.
It saves space in that it does not require that the signatures be downloaded. Since clients that are syncing do not need to verify the signatures of old transactions, the witness data is then not needed. This would mean that they do not need to download the witnesses and can thus save bandwidth and space. This is only for several years in the future, after segwit has been used for a while.

So I look at the BIP and I find each segwit tx needs a commitment hash, plus more. This is 32 bytes per tx that, to my ignorant simple-minded thinking, is in addition to what would be needed if it were a normal tx.
Not every transaction needs that hash, just like how not every transaction now has its txid stored in the blockchain. Just like it is done today, the merkle root of the wtxids is stored, and that is it.

Now I am being told that we don't need to count the space used by the witness data. This confuses me. I like to count all permanently required data as the permanent space cost.
The witnesses are not permanent and can be deleted. After a few years, they won't even need to be downloaded.

I apologize for asking questions like this, but I am just a simple C programmer trying to allocate space for segwit and noticing it seems to take more space for every tx. I await being properly educated on how the commitment hash does not actually need to be stored anywhere while still allowing a full node to properly validate all the transactions.
Checking the transaction hash is not part of the validation of transactions. Rather, the signature is validated against the witness data, which checks that the signature is valid for that transaction. The block contains a merkle root. This merkle root is calculated by hashing all of the transaction hashes. The same is done for the witness root hash, which is just the root over all of the wtxids.
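For concreteness, a minimal sketch of the bitcoin-style merkle root computation being described (the same routine covers both the txid tree and the wtxid tree):

Code:
import hashlib

def sha256d(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaf_hashes):
    """Bitcoin-style merkle root: pair and double-SHA256 each level,
    duplicating the last hash when a level has an odd count."""
    assert leaf_hashes, 'need at least one leaf'
    level = list(leaf_hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]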

Or did I misunderstand? Does segwit mean that it is no business of full nodes to verify all the transactions?
See above.

OK, I apologize again for misunderstanding the BIPs
that is why I am posting questions.
It is okay to ask questions, but when you spread around false information (like the title of this thread) then it is not okay. I admit, sometimes I do say false things, but I usually do say things with stuff like "I think" or "from what I understand" to indicate that what I say may not be the truth.

Can you show me specific rawtxbytes for a simple tx as it is now, and what the required bytes are if it were segwit? For the segwit case it would need to be two sets of bytes; I think I understand that part well enough.
Kind of, I can't build one right now and I don't have a segnet node set up yet so I can't pull one from there. I will give you a similar example, pulled from the BIP. In this example there are two inputs, one from p2pk (currently used) and one from a p2wpkh (segwit but this format will probably not be used).

Take for example this transaction:
Code:
01000000000102fff7f7881a8099afa6940d42d1e7f6362bec38171ea3edf433541db4e4ad969f00000000494830450221008b9d1dc26ba6a9cb62127b02742fa9d754cd3bebf337f7a55d114c8e5cdd30be022040529b194ba3f9281a99f2b1c0a19c0489bc22ede944ccf4ecbab4cc618ef3ed01eeffffffef51e1b804cc89d182d279655c3aa89e815b1b309fe287d9b2b55d57b90ec68a0100000000ffffffff02202cb206000000001976a9148280b37df378db99f66f85c95a783a76ac7a6d5988ac9093510d000000001976a9143bde42dbee7e4dbe6a21b2d50ce2f0167faa815988ac000247304402203609e17b84f6a7d30c80bfa610b5b4542f32a8a0d5447a12fb1366d7f01cc44a0220573a954c4518331561406f90300e8f3358f51928d43c212a8caed02de67eebee0121025476c2e83188368da1ff3e292e7acafcdb3566bb0ad253f62fc70f07aeee635711000000

Here it is broken down:
Code:
nVersion:  01000000
    marker:    00
    flag:      01
    txin:      02 fff7f7881a8099afa6940d42d1e7f6362bec38171ea3edf433541db4e4ad969f 00000000 494830450221008b9d1dc26ba6a9cb62127b02742fa9d754cd3bebf337f7a55d114c8e5cdd30be022040529b194ba3f9281a99f2b1c0a19c0489bc22ede944ccf4ecbab4cc618ef3ed01 eeffffff
                  ef51e1b804cc89d182d279655c3aa89e815b1b309fe287d9b2b55d57b90ec68a 01000000 00 ffffffff
    txout:     02 202cb20600000000 1976a9148280b37df378db99f66f85c95a783a76ac7a6d5988ac
                  9093510d00000000 1976a9143bde42dbee7e4dbe6a21b2d50ce2f0167faa815988ac
    witness    00
               02 47304402203609e17b84f6a7d30c80bfa610b5b4542f32a8a0d5447a12fb1366d7f01cc44a0220573a954c4518331561406f90300e8f3358f51928d43c212a8caed02de67eebee01 21025476c2e83188368da1ff3e292e7acafcdb3566bb0ad253f62fc70f07aeee6357
    nLockTime: 11000000
An upgraded node would request and receive this transaction like this. It would contain all of the witness data.

But a non-upgraded node (and a node that doesn't want witness data) would only receive
Code:
0100000002fff7f7881a8099afa6940d42d1e7f6362bec38171ea3edf433541db4e4ad969f00000000494830450221008b9d1dc26ba6a9cb62127b02742fa9d754cd3bebf337f7a55d114c8e5cdd30be022040529b194ba3f9281a99f2b1c0a19c0489bc22ede944ccf4ecbab4cc618ef3ed01eeffffffef51e1b804cc89d182d279655c3aa89e815b1b309fe287d9b2b55d57b90ec68a0100000000ffffffff02202cb206000000001976a9148280b37df378db99f66f85c95a783a76ac7a6d5988ac9093510d000000001976a9143bde42dbee7e4dbe6a21b2d50ce2f0167faa815988ac11000000
which is considerably smaller than the witness serialization because it doesn't include the witness.
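To check this yourself, here is a minimal sketch (a toy parser, not consensus code) that strips the witness from the BIP 144 serialization; run on the first hex dump above, it should reproduce the second, shorter one:

Code:
def read_varint(buf, pos):
    n = buf[pos]; pos += 1
    if n < 0xfd:  return n, pos
    if n == 0xfd: return int.from_bytes(buf[pos:pos+2], 'little'), pos + 2
    if n == 0xfe: return int.from_bytes(buf[pos:pos+4], 'little'), pos + 4
    return int.from_bytes(buf[pos:pos+8], 'little'), pos + 8

def strip_witness(raw):
    assert raw[4] == 0x00 and raw[5] == 0x01, 'not witness-serialized'
    pos = 6                              # skip nVersion, marker, flag
    n_in, pos = read_varint(raw, pos)
    for _ in range(n_in):
        pos += 36                        # outpoint (prev txid + index)
        slen, pos = read_varint(raw, pos)
        pos += slen + 4                  # scriptSig + nSequence
    n_out, pos = read_varint(raw, pos)
    for _ in range(n_out):
        pos += 8                         # value
        slen, pos = read_varint(raw, pos)
        pos += slen                      # scriptPubKey
    body_end = pos
    for _ in range(n_in):                # skip one witness stack per input
        items, pos = read_varint(raw, pos)
        for _ in range(items):
            ilen, pos = read_varint(raw, pos)
            pos += ilen
    # legacy form: nVersion + txins/txouts + nLockTime, witness dropped
    return raw[0:4] + raw[6:body_end] + raw[pos:pos+4]

# tx = bytes.fromhex(witness_hex)       # the first hex dump above
# print(strip_witness(tx).hex())        # should match the second dump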

If the combined overhead is as small as 3 bytes per tx, then it is amazing, but I don't understand how it is done.

But still, even if it is only 3 bytes more, it is more data required permanently, so it is reducing the total tx capacity per MB, and it is anti-scaling from that standpoint. I cannot speak to the scaling impact of optimized signature handling, etc.
Those three bytes (and whatever other overhead because I am fairly sure there is other overhead in the witness data) are not part of the transaction that goes into the count of the size of a block.

Also, it seems a 2MB hardfork is much, much simpler and provides the same or more tx capacity. Why isn't that done before segwit?
Because segwit adds more useful stuff and a 2 MB hard fork doesn't solve the O(n^2) hashing problem. The Classic 2 MB hard fork had some weird hackish workaround for the O(n^2) hashing problem. In terms of lines of code, I believe there was an analysis done on segwit and the Classic hard fork that found that the number of code lines added was roughly the same. This is because the hashing problem needed to be fixed with that hard fork.

I suppose you could also say that segwit is more "efficient". Segwit, with roughly the same amount of code as the Classic hard fork, brings much more functionality to Bitcoin than a 2 MB hard fork does.
My reaction was based on the answers I was getting, and clearly it is a complex issue. Segwit is arguably more changes to bitcoin than all prior BIPs combined. I don't think anybody would say otherwise.

Now please ignore the space savings for nodes that are not full nodes. I am assuming that to bootstrap a node it will need to get the witness data from somewhere, right? So it is needed permanently and thus part of the permanent HDD requirement.

I still don't fully understand how the size of the truncated tx + witness data is as small as 2 bytes per tx + 1 byte per vin. But even if that is the case, my OP title is accurate, as N + 2*numtx + numvins is more than N.

that is the math I see.

Also, I made the mistake of making sure the transaction hash matches for a transaction. I had assumed that if the transaction hash doesn't match, it is invalid raw bytes. Are you saying that we don't need to verify that the transaction hashes match? As you know, verifying signatures is very time-consuming compared to verifying the txid. So if verifying the txid is not available anymore, that would dramatically increase the CPU load for any validating node.

Before I go making new threads about that, let us wait for some clarity on this issue.

I think if the witness data is assumed to be there permanently, then we don't increase the CPU load 10x or more by having to validate sigs instead of validating the txid, so it would be a moot point.

but it negates your point that the witness data can just go away

James
899  Bitcoin / Development & Technical Discussion / Re: Segwit details? on: March 16, 2016, 07:33:29 PM
Oh, the amount of misinformation in this thread!!

jl777 you have misunderstood the segwit BIPs.

Transaction ID

A new data structure, witness, is defined. Each transaction will have 2 IDs.

Definition of txid remains unchanged: the double SHA256 of the traditional serialization format:

  [nVersion][txins][txouts][nLockTime]
  
A new wtxid is defined: the double SHA256 of the new serialization with witness data:

  [nVersion][marker][flag][txins][txouts][witness][nLockTime]

from the BIP...

the wtxid is based on all of the original, plus marker (1 byte?) flag (1 byte) and witness, which appears to be:

 1-byte - OP_RETURN (0x6a)
   1-byte - Push the following 36 bytes (0x24)
   4-byte - Commitment header (0xaa21a9ed)
  32-byte - Commitment hash: Double-SHA256(witness root hash|witness nonce)

All this seems to be above and beyond what would be needed normally; plus, the nVersion (4 bytes) and nLockTime (4 bytes) are duplicated. To a simple C programmer like me, it sure looks like instead of reducing the net amount, as required by anything claiming to save space, it is increasing the size by approx 50 bytes.
NO NO NO!!! AND PLEASE STOP SPREADING THIS!! IT IS WRONG.

The above with the OP_RETURN is how the witness root hash (the root of a hash tree whose leaves are the wtxids) is committed; it is added to the coinbase transaction of the block. The point of doing this is to commit the witnesses to the blockchain without adding all of the witnesses to the size calculation of the blockchain, except for these 38 bytes.

The wtxid (also called witness hash) is the hash of the transaction with all of the witness data and the transaction data. The transaction format if witness serialization is specified is 4 bytes for the transaction version (currently used), 1 byte for a marker (new, is always 0), 1 byte for a flag (new), the input count (currently used), the inputs (currently used), the output count (currently used), the outputs (currently used), the witnesses (new), and the locktime (currently used). All of this is what is hashed to get the wtxid. However, the regular txid is just the hash of everything that is currently used (so it contains none of the new stuff) so as to maintain backwards compatibility. If witness serialization is not specified then only the currently used stuff is sent in the tx message.

Likewise, with blocks, from what I understand, the block sent will contain the new transaction format if witness serialization is specified. Otherwise it will just include the transaction data that is currently used so that it maintains backwards compatibility. The only "bloat" caused by segwit is the 38 bytes in a second output of the Coinbase to commit the wtxids in a similar manner to the way that regular txids are committed and 2 extra bytes per transaction. Only upgraded nodes would get the witness data and those 2 extra bytes.
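In code terms, the two IDs are just double-SHA256 over the two serializations (a minimal sketch; displayed ids are byte-reversed by convention):

Code:
import hashlib

def sha256d(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def txid(legacy_serialization):     # unchanged by segwit, witness excluded
    return sha256d(legacy_serialization)[::-1].hex()

def wtxid(witness_serialization):   # commits to the witness data as well
    return sha256d(witness_serialization)[::-1].hex()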

P.S. So my understanding is that you need a special segwit address (that is somehow determined to be a segwit address using what mechanism?), so both sender and receiver need to already have the segwit version. I guess just ignoring all the existing nodes is at least some level of backward compatibility. But are you sure all users will quickly get used to having to deal with two types of addresses for every transaction, and that they will make sure they know what version the other party is running? Doesn't this bifurcate the bitcoin universe? Maybe the name should be "bifurcating softfork".
The addresses are different. An upgraded node will use p2sh addresses, which we currently use. Those p2sh addresses can be spent to using the current methods so non-upgraded users can still spend to those addresses. To spend from those addresses requires segwit, and the way that the scriptsig is set up, non-upgraded nodes will always validate those transactions as valid even if the witness (which those nodes cannot see) is invalid. Those segwit transactions still create outputs the regular way, so they can still send to p2pkh or p2pk outputs which non-upgraded users can still receive and spend from. Segwit transactions are considered by old nodes as transactions which spent an anyonecanspend output and thus are treated with a grain of salt. The best course of action is to of course wait for confirmations as we already should still be doing now.
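To illustrate the anyonecanspend point, a minimal sketch of the P2SH-wrapped witness program layout (per BIP 141; the 20-byte key hash here is a dummy placeholder):

Code:
import hashlib

def hash160(b):
    return hashlib.new('ripemd160', hashlib.sha256(b).digest()).digest()

key_hash      = bytes(20)                       # dummy pubkey hash
redeem_script = bytes([0x00, 0x14]) + key_hash  # OP_0 <20 bytes>: v0 witness program
script_sig    = bytes([0x16]) + redeem_script   # just pushes the redeem script
script_pubkey = (bytes([0xa9, 0x14])            # OP_HASH160 <hash160> OP_EQUAL
                 + hash160(redeem_script) + bytes([0x87]))

# An old node hashes the redeem script, sees it matches the scriptPubKey, then
# executes the redeem script itself: OP_0 pushes an empty item and the 20-byte
# push leaves a truthy top-of-stack, so the spend looks valid with no signature
# checked at all. Only upgraded nodes verify the real signature in the witness.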

Whoa, whoa, wait...

From the point of view of old clients, segwit adds one coinbase output that contains the root hash of the Merkle tree that commits to the witness transaction ids. It uses 47 extra bytes per block, so technically, yes, it "wastes precious blockchain space".

So, 47 bytes per block. That's not too unreasonable. But...

This then gets us to my question that is not being answered. On average, how many bytes in the blockchain will be needed for a standard payment sent via segwit?

Is this ever less than it would be now?
Is this ever the same as it is now?
Is this usually about 50 bytes more per tx?

50 bytes per transaction for fully-validating nodes? This needs to be answered.

I would like to see an answer to this too, it seems everyone is avoiding this question. Fully validating nodes are very important for those of us that want to verify the blockchain ourselves and are required for bootstrapping new nodes.


01000000000102fff7f7881a8099afa6940d42d1e7f6362bec38171ea3edf433541db4e4ad969f00000000494830450221008b9d1dc26ba6a9cb62127b02742fa9d754cd3bebf337f7a55d114c8e5cdd30be022040529b194ba3f9281a99f2b1c0a19c0489bc22ede944ccf4ecbab4cc618ef3ed01eeffffffef51e1b804cc89d182d279655c3aa89e815b1b309fe287d9b2b55d57b90ec68a0100000000ffffffff02202cb206000000001976a9148280b37df378db99f66f85c95a783a76ac7a6d5988ac9093510d000000001976a9143bde42dbee7e4dbe6a21b2d50ce2f0167faa815988ac000247304402203609e17b84f6a7d30c80bfa610b5b4542f32a8a0d5447a12fb1366d7f01cc44a0220573a954c4518331561406f90300e8f3358f51928d43c212a8caed02de67eebee0121025476c2e83188368da1ff3e292e7acafcdb3566bb0ad253f62fc70f07aeee635711000000

the above is from https://github.com/bitcoin/bips/blob/master/bip-0143.mediawiki
it is a 2 input 2 output tx in the witness space.

In addition to the above, the much smaller anyone-can-spend tx is needed too. I think it will be about 100 bytes?

So we have a combined space of around 800 bytes against the 1000 bytes the usual 2-input/2-output tx occupies. Or was it 400 bytes that the 2-input/2-output tx takes?
Again, NO. See above.
OK, I apologize again for misunderstanding the BIPs
that is why I am posting questions.

Can you show me specific rawtxbytes for a simple tx as it is now, and what the required bytes are if it were segwit? For the segwit case it would need to be two sets of bytes; I think I understand that part well enough.

If the combined overhead is as small as 3 bytes per tx, then it is amazing, but I don't understand how it is done.

But still, even if it is only 3 bytes more, it is more data required permanently, so it is reducing the total tx capacity per MB, and it is anti-scaling from that standpoint. I cannot speak to the scaling impact of optimized signature handling, etc.

Also, it seems a 2MB hardfork is much, much simpler and provides the same or more tx capacity. Why isn't that done before segwit?

900  Bitcoin / Development & Technical Discussion / Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY on: March 16, 2016, 07:28:09 PM
I apologize for my technical ineptitudes.

I am trying to understand how segwit saves blockchain space, as that is how it is being marketed, with a softfork in the upcoming weeks.

So I look at the BIP and I find each segwit tx needs a commitment hash, plus more. This is 32 bytes per tx that, to my ignorant simple-minded thinking, is in addition to what would be needed if it were a normal tx.

Now I am being told that we don't need to count the space used by the witness data. This confuses me. I like to count all permanently required data as the permanent space cost.

I apologize for asking questions like this, but I am just a simple C programmer trying to allocate space for segwit and noticing it seems to take more space for every tx. I await being properly educated on how the commitment hash does not actually need to be stored anywhere while still allowing a full node to properly validate all the transactions.

Or did I misunderstand? Does segwit mean that it is no business of full nodes to verify all the transactions?

James

****
OK, it was made clear that the commitment hash is just part of the wtxid calculation, so it doesn't really exist anywhere on its own. The cost is 2 bytes per tx and 1 byte per vin. Still, it increases the HDD space used permanently, which is what confused me, as segwit was marketed as saving HDD space and helping with scaling.