Bitcoin Forum
Author Topic: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF  (Read 21359 times)
BlindMayorBitcorn (Legendary)
March 17, 2016, 02:18:04 AM  #101

Classic cargo cult now in bed with jl777, treating him like Jim Jones?  Amazing.  

jl777 views Classic as a headless steamroller with an empty driver's seat he can fill - accruing all glory and power.

Techno-eunuchs in Classic (Peter R, HashFastDefendantDoc, many others) are hypnotized by jl777's technobabble, and are now forced to type with one hand while they rub themselves up a genie with the other.

Classic/jl777 or GOP/Trump - which is more entertaining/sad/doomed?

I'm so glad you're here! Can you put this in words the r/btc crowd could relate to? Me included.

Quote from: Gmax
What? It's an explicit goal. Transaction "size" in a particular serialization (which isn't necessarily used for transmission or storage) does not well reflect the costs of a transaction to the system. This has created a misalignment of incentives which has been previously misused (e.g. a miner creating blocks which expand the UTXO set size by almost a megabyte twiddling around with dust-spam (known private keys)).  

At the end of the day signatures are transmitted at most once to a node and can be pruned. But data in the UTXO set must be in perpetual online storage. Its size sets a hard lower bound on the amount of resources to run a node. The fact that the size limit doesn't reflect the true cost has been a long-term concern, and it's one of the biggest issues raised with respect to blocksize limits (even acknowledged by strong proponents of blocksize increase: e.g.  http://gavinandresen.ninja/utxo-uhoh (ignore anything in it about storing the UTXO set in RAM; no version of Bitcoin Core has ever done that; that was just some confusion on the part of the author)). Prior problems with UTXO bloating attacks forced the introduction of the "dust limit" standardness rule, which is an ugly hack to reduce the bleeding from this misalignment of incentives.

In Montreal scaling Bitcoin fixing this costing imbalance was _the_ ray of light that got lots of people thinking that some agreement to a capacity bump could be had: if capacity could be increased while _derisking_ UTXO impact, or at least making it no worse-- then many of the concerns related to capacity increases would be satisfied.  So I guess it's no shock to see avowed long time Bitcoin attackers like jstolfi particularly picking on this aspect of a fix as a measure to try to undermine the ecosystem.

One of the challenges coming out of Montreal was that it wasn't clear how the corrected costing should work. The "perfect" figures depend on the relative costs of storage, bandwidth, CPU, initial sync delays, etc., which differ from party to party and over time-- though the current size counting is clearly poor across the board. Segwit addressed that open parameter, because optimizing its capacity required a discount which achieved a dual effect of also fixing the misaligned costing.
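For concreteness, the corrected costing described above is what later shipped as BIP141's "weight" formula, where witness bytes cost a quarter of what base bytes cost. A minimal sketch (the 230/218-byte transaction in the example is a made-up illustration, not a real tx):

```python
import math

def tx_weight(base_size: int, witness_size: int) -> int:
    """BIP141 weight: base (non-witness) bytes count 4x, witness bytes 1x.
    weight = base_size * 3 + total_size, where total_size includes witness."""
    total_size = base_size + witness_size
    return base_size * 3 + total_size

def virtual_size(base_size: int, witness_size: int) -> int:
    """Virtual size in vbytes (rounded up); fee rates are quoted per vbyte."""
    return math.ceil(tx_weight(base_size, witness_size) / 4)

# Hypothetical 2-in/2-out segwit tx: ~230 base bytes + ~218 witness bytes.
print(tx_weight(230, 218))     # 230*3 + 448 = 1138 weight units
print(virtual_size(230, 218))  # ceil(1138/4) = 285 vbytes, vs 448 raw bytes
```

Because signature (witness) bytes are discounted while UTXO-affecting base bytes are not, this is the incentive realignment being argued about in this thread.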

Forgive my petulance and oft-times, I fear, ill-founded criticisms, and forgive me that I have, by this time, made your eyes and head ache with my long letter. But I cannot forgo hastily the pleasure and pride of thus conversing with you.
jl777 (OP) (Legendary)
March 17, 2016, 02:22:32 AM  #102

luke-jr told me it takes 2 bytes per tx and 1 byte per vin extra using segwit as opposed to a 2MB hardfork. I thought you also confirmed this. Now you are saying that using segwit reduces the total permanent space used by 30%, if that is really the case then I will change my view.

please explain to me how luke-jr is wrong when he says it takes 2 bytes per tx and 1 byte per vin. I will update the title to match my understanding, without shame, when I see my mistake. Imagine I am like Rainman: I just care about the numbers
Where did luke-jr tell you this? Did he explain why? I don't understand the 1 byte per vin part and would like to see the explanation for it.

What gmaxwell is saying is that segwit allows for future upgrades. One of those future upgrades could be an upgrade to a different signature scheme which does have the 30% reduction.
luke-jr: https://www.reddit.com/r/Bitcoin/comments/4amg1f/iguana_bitcoin_full_node_developer_jl777_argues/d12bcz7

plz note I didn't start any of the reddit threads; they seem to spontaneously start

OK, so maybe the fact that I am trying to analyze what the segwit softfork in the upcoming weeks will do explains my not understanding that future upgrades with a new signature scheme are part of the analysis... Would these changes require a hardfork, or can the usual softfork change the signature scheme? It is kind of hard to analyze something based on unspecified future upgrades with a different signature scheme.

maybe there can be just a single aggregated signature for all the tx in a block? I have no idea if that is possible, but if it is, then that could be added to the coinbase and then we won't need any witness data at all. Did I get that right?

http://www.digitalcatallaxy.com/report2015.html
100+ page annual report for SuperNET
jl777 (OP) (Legendary)
March 17, 2016, 02:31:12 AM  #103

Classic cargo cult now in bed with jl777, treating him like Jim Jones?  Amazing.  

jl777 views Classic as a headless steamroller with an empty driver's seat he can fill - accruing all glory and power.

Techno-eunuchs in Classic (Peter R, HashFastDefendantDoc, many others) are hypnotized by jl777's technobabble, and are now forced to type with one hand while they rub themselves up a genie with the other.

Classic/jl777 or GOP/Trump - which is more entertaining/sad/doomed?
I did ask for an iguana childboard here, but was totally ignored on that, not even a rejection. Maybe if that wasn't just ignored, I wouldn't be so active elsewhere. bitco.in gave me a child board the next day, so I am more active there. It is as simple as that.

I do not agree with classic's position against RBF; that does not make sense to me. I still have not heard any rational explanation of how RBF breaks zeroconf. Zeroconf can't work when the blocks are full, as when the blocks are full you can't know when a tx in the mempool is likely to confirm. If anything, defining the RBF behavior allows a much better statistical model to predict when an unconfirmed tx will confirm.

Convince me with the math, or you can call me names and I stay unconvinced.

With RBF, I came here, asked some questions, got reasonable answers and made my analysis. I don't like the changing of sequenceid into the RBF field, but it isn't the horrible devil's spawn that it is made out to be. However, the people there are much better behaved and nobody trolled me or insinuated that I don't understand bitcoin at all.

James

achow101 (Moderator, Legendary)
March 17, 2016, 03:03:36 AM  #104

OK, so maybe the fact that I am trying to analyze what segwit softfork in the upcoming weeks will do, that explains my not understanding that future upgrades with a new signature scheme are part of the analysis... Would these changes require a hardfork, or the usual softfork can change the signature scheme? It is kind of hard to analyze something based on unspecified future upgrades with a different signature scheme.
Well, I think those changes could be soft forked in because they change the script version number, which I think would only affect the address type. I could be wrong though.

maybe there can be just a single aggregated signature for all the tx in a block? I have no idea if that is possible, but if it is, then that could be added to the coinbase and then we wont need any witness data at all. Did I get that right?
I am fairly certain that this isn't possible since it would require the private keys that can spend the inputs of all of the transactions to sign it. However, I could be wrong, as I am not well versed in many parts of cryptography. There may be an algorithm which could combine all of the signatures; I don't know. You'll have to ask gmaxwell, he is the "chief cryptographer".

AliceGored (Member)
March 17, 2016, 03:20:32 AM (last edit: 03:30:47 AM)  #105

The discount is the question you won't get a good answer for. Fundamental economics of Bitcoin, price per byte, changed drastically, with a soft fork.

What? It's an explicit goal. Transaction "size" in a particular serialization (which isn't necessarily used for transmission or storage) does not well reflect the costs of a transaction to the system. This has created a misalignment of incentives which has been previously misused (e.g. a miner creating blocks which expand the UTXO set size by almost a megabyte twiddling around with dust-spam (known private keys)).  

“What?” Yes, it is an explicit goal, an under-publicized one. Glad to hear you acknowledge that you are realigning, in your view, the misaligned incentives of the current system, via a soft fork without a full node referendum.

At the end of the day signatures are transmitted at most once to a node and can be pruned. But data in the UTXO set must be in perpetual online storage. Its size sets a hard lower bound on the amount of resources to run a node. The fact that the size limit doesn't reflect the true cost has been a long-term concern, and it's one of the biggest issues raised with respect to blocksize limits (even acknowledged by strong proponents of blocksize increase: e.g.  http://gavinandresen.ninja/utxo-uhoh (ignore anything in it about storing the UTXO set in RAM; no version of Bitcoin Core has ever done that; that was just some confusion on the part of the author)). Prior problems with UTXO bloating attacks forced the introduction of the "dust limit" standardness rule, which is an ugly hack to reduce the bleeding from this misalignment of incentives.

I’m aware that Core is focused on encouraging a gradation of nodes on the network. To me, a full node means a full, archival, fully validating node, and that’s what I’m concerned with. You are applying economic favoritism in order to achieve benefits for these new partial full nodes, which is ok, as long as everyone is aware of it. With a handful of miners activating it, I’m not sure you have the full consent of the network to pursue this goal. With a soft fork, full consent is not required or even relevant.

In Montreal scaling Bitcoin fixing this costing imbalance was _the_ ray of light that got lots of people thinking that some agreement to a capacity bump could be had: if capacity could be increased while _derisking_ UTXO impact, or at least making it no worse-- then many of the concerns related to capacity increases would be satisfied.  So I guess it's no shock to see avowed long time Bitcoin attackers like jstolfi particularly picking on this aspect of a fix as a measure to try to undermine the ecosystem.

So… changing these incentives was _the_ ray of light that convinced “lots of people” (assuming blockstream here) that a capacity increase could be had, fascinating. Before your email became the core roadmap, and before the conclusion of the HK conference, almost everyone thought that we would be hard forking at least some block size increase. Interesting to hear that perspective was wrong all along.


One of the challenges coming out of Montreal was that it wasn't clear how the corrected costing should work. The "perfect" figures depend on the relative costs of storage, bandwidth, CPU, initial sync delays, etc., which differ from party to party and over time-- though the current size counting is clearly poor across the board. Segwit addressed that open parameter, because optimizing its capacity required a discount which achieved a dual effect of also fixing the misaligned costing.

This is all just you playing economic central planner, and the 1MB anti-DoS limit from 2010 has become your most valued control lever, kudos.

The claims that the discounts have something to do with lightning and blockstream have no substance at all.
(1) Lightning predates Segwit significantly.

Not surprising, segwit was designed with the "side" benefit of making sig heavy settlement tx cheaper, and a main benefit of fixing malleability which LN requires.

(2) Lightning HTLC transactions have tiny signatures, and benefit less than many transaction styles (in other words the recosting should slightly increase their relative costs), though no one should care because channel closures are relatively rare. Transactions that do large multisigs would benefit more, because the current size model radically over-costs them relative to their total cost to Bitcoin nodes.

Waves hands.

(3) Blockstream has no plans to make any money from running Lightning in Bitcoin in any case;  we started funding some work on Lightning because we believed it was long term important for Bitcoin and Mike Hearn criticized us for not funding it if we thought it important, because one of our engineers _really_ wanted to work on it himself, and because we were able to work out a business case for using it to make sidechains scalable too.

I will be paying attention as to whether this statement remains true. You got your jabs in at both Gavin and Mike, so, kudos again.
jl777 (OP) (Legendary)
March 17, 2016, 03:23:11 AM  #106

OK, so maybe the fact that I am trying to analyze what segwit softfork in the upcoming weeks will do, that explains my not understanding that future upgrades with a new signature scheme are part of the analysis... Would these changes require a hardfork, or the usual softfork can change the signature scheme? It is kind of hard to analyze something based on unspecified future upgrades with a different signature scheme.
Well I think those changes could be soft forked in because it changes the script version number, which I think would only affect the address type. I could be wrong though.

maybe there can be just a single aggregated signature for all the tx in a block? I have no idea if that is possible, but if it is, then that could be added to the coinbase and then we wont need any witness data at all. Did I get that right?
I am fairly certain that this isn't possible since it would require the private keys that can spend the inputs of all of the transactions to sign it. However, I could be wrong, as I am not well versed in many parts of cryptography. There may be an algorithm which could combine all of the signatures; I don't know. You'll have to ask gmaxwell, he is the "chief cryptographer".
I would think that implementing a blockwide aggregated signature would at the least require a four-step process:

1. block is mined to determine the tx that are in it
2. the txids of this protoblock would need to be broadcast
3. nodes that are online and have a tx in the protoblock would need to sign and return their signatures to the miner(s)?
4. miner prunes out all the signatures that are aggregated and publishes the optimized block

Not sure if the libsecp256k1-zkp lib's schnorr routines are sufficient for this, and clearly it can't be done with all sigs. Of course, details about timing and protocol for the above remain to be defined, like when the mining reward is earned, etc., so this is just a fantasy protocol for now.

I am not saying the above is possible, just that the above is the minimum back and forth that would be needed, and it has some privacy issues, so some privacy enhancements are probably needed too. A bitmap of the aggregate signers would probably be needed, but that can be run-length encoded to take up a relatively small amount of space.
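The run-length-encoded signer bitmap mentioned above is easy to sketch. This is a toy encoding showing why it stays small when most inputs sign, not any deployed format:

```python
def rle_encode(bits):
    """Run-length encode a signer bitmap into (bit, run_length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return [(b, n) for b, n in runs]

def rle_decode(runs):
    """Expand (bit, run_length) pairs back into the full bitmap."""
    return [b for b, n in runs for _ in range(n)]

# If nearly every input in a block signed, the bitmap collapses to a few runs:
bitmap = [1] * 500 + [0] * 3 + [1] * 497
runs = rle_encode(bitmap)          # [(1, 500), (0, 3), (1, 497)]
assert rle_decode(runs) == bitmap  # lossless round trip, 3 runs for 1000 bits
```

In the worst case (alternating bits) RLE expands rather than compresses, so a real protocol would likely fall back to the raw bitmap when that is smaller.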

iCEBREAKER (Legendary)
March 17, 2016, 03:28:12 AM  #107

Classic cargo cult now in bed with jl777, treating him like Jim Jones?  Amazing.  

jl777 views Classic as a headless steamroller with an empty driver's seat he can fill - accruing all glory and power.

I did ask for an iguana childboard here, but was totally ignored on that, not even a rejection. Maybe if that wasnt just ignored, I wouldnt be so active elsewhere. bitco.in gave me a child board the next day, so I am more active there. it is as simple as that.

I do not agree with classic's position against RBF, that does not make sense to me. I still have not heard any rational explanation about how RBF breaks zeroconf. zeroconf cant work when the blocks are full, as when the blocks are full you cant know when a tx in the mempool is likely to confirm. If anything, defining the RBF behavior allows a much better statistical model to predict when an unconfirmed tx will confirm.

Convince me with the math, or you can call me names and I stay unconvinced.

With RBF, I came here, asked some questions, got reasonable answers and made my analysis. I dont like the changing of sequenceid into the RBF field, but it isnt the horrible devils' spawn that it is made out to be. However, the people there are much better behaved and nobody trolled me or insinuated that I dont understand bitcoin at all.

James

We can't make a new childboard for every project you think of.  You start like 4 new things every month, and none of them are ever completed.  If you demonstrate the capacity to follow through on things you begin, perhaps your latest weekly brain fart vaporware might be taken seriously.

Glad you saw through Classic's FUD about RBF.  It doesn't make Classic look good to reject such an obviously beneficial feature, especially for the sake of preserving their false hope about zero-conf tx viability.

The problem in this thread is you contradicting yourself:
Quote
jl777: 'I'm just a simple C programmer' (everybody take a shot, per alt sub rules!)
jl777: 'I'm just here to ask questions'
jl777: 'I just heard about SEGWIT and I'm here to fix it!'
jl777: 'I can't be bothered to read the freaking SEGWIT manual (BIP docs) but will still post my melodramatic FUDDY conclusions about "wasting precious blockchain space"'

Do you see the problem ^there?

IMO, it looks like Gavin helped hype Iguana with the name-drop in order to introduce you as Classic's latest FUDster-In-Chief, following in the ignoble footsteps of Hearn's and Toomin's failures.

Your usual 'baffle them with techno-babble BS' strategy may work in the altcoin space, but there are much higher standards in Bitcoin.


johnyj (Legendary)
March 17, 2016, 03:42:25 AM  #108


In fact, if you do it in a hard fork, you can redesign the whole transaction format at will, no need to do so many different hacks everywhere to make old nodes unaware of the change (these nodes can work against upgraded nodes in certain cases, especially when some of the upgraded hashing power do a roll back)
No, you can't-- not if you live in a world with other people in it.  The spherical cow "hardforks can change anything" ignores that a hardfork that requires all users shutting down the Bitcoin network, destroying all in flight transactions, and invalidating presigned transactions (thus confiscating some amount of coins) will just not be deployed.


This is a networked society; I don't think a hard fork is as difficult as you said. Ethereum just had one and no one complained.

Just like a soft fork, you have a long period to inform all the users to upgrade; those who don't care, their software will just not be able to talk to the network and their transactions will be dropped. Anyone can make a hard fork right away, but if major exchanges and major service providers/merchants are not accepting his coins, there is no point to that minority coin.

When a large bank upgrades its system, all the users of that bank cannot access the banking service for hours or a whole night/weekend, and no one complains. And sometimes when they have an incident, it can happen in the middle of the day and suddenly no payments can be made in the whole country; still no one cares, only a piece of news appears in the newspaper.

Of course banks can always reverse transactions, so it's a bit different than bitcoin. However, bitcoin is use-at-your-own-risk; no one will compensate anyone's bitcoin loss due to incompetent devs or forks, so it is the user's responsibility to keep himself updated with the latest changes in bitcoin.


gmaxwell (Moderator, Legendary)
March 17, 2016, 05:53:37 AM (last edit: 06:05:23 AM)  #109

This is a networked society, I don't think a hard fork is that difficult as you said. Ethereum just had one and no one complains
You're getting caught up on terms, thinking that all hard forks are the same. They aren't.  Replacing the entire Bitcoin system with Ethereum, complete with the infinite inflation schedule of ethereum, would just be a hardfork. ... but uhhh.. it's not the same thing as, say, increasing the Bitcoin blocksize, which is not the same as allowing coinbase txn to spend coinbase outputs...

Quote
Just like a soft fork, you have a long period to inform all the users to upgrade, those who don't care, their software will just not be able to talk to the network and the transactions will be dropped.
That isn't like a soft fork; soft forks don't kick anyone out of the network. And you seem to have missed what I said: because of nLockTimed transactions, changing the transaction format would effectively confiscate some people's Bitcoins.

Quote
When a large bank upgrading their system, all the users of that bank can not access the banking service for at least hours or whole night/weekend, no one complains.
Yes, banks are centralized systems-- ones which usually only serve certain geographies and aren't operational 24/7. Upgrading them is a radically different proposition from upgrading a decentralized system.  A Bitcoin hard fork is a lot more like switching from English units to metric, except worse, because no one values measurement systems based on how immune to political influence they are.

I’m aware that Core is focused on encouraging a gradation of nodes on the network. To me, a full node means a full, archival, fully validating node, and that’s what I’m concerned with.
Your usage of the word full node is inconsistent with the Bitcoin community's since something like 2010 at least. A pruned node is a full node. You can invent new words if you like, but keep in mind the purpose of words is to communicate, and so when you make up new meanings just to argue that you're right, you are just wasting time.

You claim to be concerned with validating, but I do not see you complaining that classic has functionality so that miners will skip validation: https://www.reddit.com/r/Bitcoin/comments/4apl97/gavins_head_first_mining_thoughts/

Quote
So… changing these incentives was _the_ ray of light that allowed “lots of people” (assuming blockstream here) that a capacity increase could be had, fascinating. Before your email became the core roadmap, and before the conclusion of the HK conference, almost everyone thought that we would be hard forking at least some block size increase. Interesting to hear that perspective was wrong all along.
No, not blockstream people (go look for proposals from Blockstream people-- several blocksize hardforks have been suggested). Because of the constant toxic abuse, most of us have backed away from Bitcoin Core involvement in any case.

Quote
Not surprising, segwit was designed with the "side" benefit of making sig heavy settlement tx cheaper, and a main benefit of fixing malleability which LN requires.
Fixing this is a low enough priority that we canceled work on BIP62 before soft-fork segwit was invented. In spite of this considerable factual evidence, you're going to believe what you want, please don't waste my time like this again:

Quote
(2) Lightning HTLC transactions have tiny signatures, and benefit less than many transaction styles (in other words the recosting should slightly increase their relative costs), though no one should care because channel closures are relatively rare. Transactions that do large multisigs would benefit more, because the current size model radically over-costs them relative to their total cost to Bitcoin nodes.

Waves hands.

luke-jr told me it takes 2 bytes per tx and 1 byte per vin extra using segwit as opposed to a 2MB hardfork. I thought you also confirmed this. Now you are saying that using segwit reduces the total permanent space used by 30%, if that is really the case then I will change my view.

please explain to me how luke-jr is wrong when he says it takes 2 bytes per tx and 1 byte per vin. I will update the title to match my understanding, without shame, when I see my mistake. Imagine I am like Rainman: I just care about the numbers
Luke told you what the Bitcoin Core segwitness implementation stores. For ease of implementation it stores the flags that way. Any implementation could do something more efficient to save the tiny amount of additional space there, Core probably won't bother-- not worth the engineering effort because it's a tiny amount.

Part of what segwitness does is facilitate signature system upgrades. One of the proposed upgrades now saves an average of 30% on current usage patterns-- I linked it in an earlier response. It would save more if users did whole block coinjoins. The required infrastructure to do that is exactly the same as coinjoin (because it is a coinjoin), with a two round trip signature-- but the asymptotic gain is only a bit over 41%.  It'll be nice for coinjoins to have lower marginal fees than non-coinjoins; but given the modest improvement possible over current usage, it isn't particularly important to have whole block joins with that scheme; existing usage gets most of the gains.
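The shape of those savings can be illustrated with a toy byte count. All sizes below are assumptions (a ~226-byte 1-in/2-out tx, a ~72-byte DER signature, a 64-byte aggregate Schnorr signature); this does not reproduce the 30%/41% figures above, which depend on real usage patterns:

```python
def aggregation_savings(tx_bytes: int, sig_bytes: int,
                        agg_sig_bytes: int, n_txs: int) -> float:
    """Fraction of bytes saved when n_txs transactions each drop their own
    signature and share a single aggregate signature (as in a whole-block
    coinjoin). Purely illustrative arithmetic."""
    before = n_txs * tx_bytes
    after = n_txs * (tx_bytes - sig_bytes) + agg_sig_bytes
    return 1 - after / before

# One aggregate 64-byte signature shared across 1000 typical-size txs:
print(round(aggregation_savings(226, 72, 64, 1000), 3))  # 0.318
```

The savings fraction rises with the number of participants and approaches sig_bytes/tx_bytes asymptotically, which matches the point that whole-block joins add only modestly over per-transaction aggregation.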
johnyj (Legendary)
March 17, 2016, 08:10:19 AM  #110


Quote
Just like a soft fork, you have a long period to inform all the users to upgrade, those who don't care, their software will just not be able to talk to the network and the transactions will be dropped.

That isn't like a soft fork; soft forks don't kick anyone out of the network. And you seem to have missed what I said: because of nLockTimed transactions, changing the transaction format would effectively confiscate some people's Bitcoins.


The world will not collapse because of a bitcoin hard fork, and since it has been advertised as an experiment, everyone knows it can have many disruptions; they are all playing with risk capital and will tighten their security belts if well-informed. By successfully doing a hard fork, you clear the way for many difficult changes in the future. You can't conjure a new soft fork trick every time you want a backward-incompatible change. If you have to do a hard fork anyway in the future, the earlier the better.

If you are aiming for a million dollars per bitcoin, it is still a very early stage of development.

BlindMayorBitcorn (Legendary)
March 17, 2016, 08:14:47 AM (last edit: 08:54:26 AM)  #111


Quote
Just like a soft fork, you have a long period to inform all the users to upgrade, those who don't care, their software will just not be able to talk to the network and the transactions will be dropped.

That isn't like a soft fork; soft forks don't kick anyone out of the network. And you seem to have missed what I said: because of nLockTimed transactions, changing the transaction format would effectively confiscate some people's Bitcoins.


The world will not collapse because of a bitcoin hard fork, and since it has been advertised as an experiment, everyone knows it can have many disruptions; they are all playing with risk capital and will tighten their security belts if well-informed. By successfully doing a hard fork, you clear the way for many difficult changes in the future. You can't conjure a new soft fork trick every time you want a backward-incompatible change. If you have to do a hard fork anyway in the future, the earlier the better.

If you are aiming for a million dollars per bitcoin, it is still a very early stage of development.

I didn't know anything about anybody losing time-locked coins in a hard fork. That's not cool. Powerfully not cool!

JorgeStolfi (Hero Member)
March 17, 2016, 08:57:16 AM  #112

A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.
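The first point, computing txids with signatures skipped, can be sketched as follows. This is a toy JSON serialization for illustration only (Bitcoin's real wire format is binary, and segwit's actual txid likewise omits witness data); the field names and hex strings are hypothetical:

```python
import hashlib
import json

def dsha256(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def sig_skipping_txid(tx: dict) -> str:
    """Toy txid that ignores signature data: blank every input's scriptSig
    before hashing, so signature malleation cannot change the txid."""
    stripped = {
        "version": tx["version"],
        "vin": [{"prev": i["prev"], "scriptSig": ""} for i in tx["vin"]],
        "vout": tx["vout"],
        "locktime": tx["locktime"],
    }
    return dsha256(json.dumps(stripped, sort_keys=True).encode()).hex()

# Two copies of a tx differing only in the (malleated) signature:
tx_a = {"version": 1,
        "vin": [{"prev": "aa:0", "scriptSig": "3044..01"}],
        "vout": [{"value": 50, "script": "76a9.."}],
        "locktime": 0}
tx_b = dict(tx_a, vin=[{"prev": "aa:0", "scriptSig": "3045..83"}])
assert sig_skipping_txid(tx_a) == sig_skipping_txid(tx_b)  # same txid
```

Changing anything other than the signature (an output amount, an outpoint) still produces a different txid, so spends of unconfirmed outputs would no longer break when a third party mutates a signature.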

Quote
It's size sets a hard lower bound on the amount of resources to run a node. The fact that the size limit doesn't reflect the true cost has been a long term concern, and it's one of the biggest issues raised with respect to blocksize limits

Biggest issue of this week, perhaps?  

Surely you know that the non-mining relay nodes invalidate the few security guarantees that the protocol can offer. Simple clients should not connect to them, but to miners (or relay nodes that are known to be run by miners). It makes no sense to twist the protocol inside out in order to meet the CONJECTURAL needs of those nodes.

The only cost that really matters is the marginal cost for a miner to add another transaction to his candidate block.  That is the cost that the transaction fees have to cover.  The magnitude of that cost is one of the great mysteries of bitcoin, extensively discussed but never estimated. It seems to be very small (at least for competent miners) and probably depends only on the total size of the transaction.  But in any case the developers have no business worrying about that cost: the fees are payment to the miners; it should be the miners who decide how much to charge, and for what.

Academic interest in bitcoin only. Not owner, not trader, very skeptical of its longterm success.
ChronosCrypto
Newbie
*
Offline Offline

Activity: 25
Merit: 0


View Profile
March 17, 2016, 02:59:10 PM
 #113

Thanks for your answers, gmax. I understand segwit much better now, in areas like backward compatibility in the soft-fork scenario and the changes to the "base" block.
BlindMayorBitcorn
Legendary
*
Offline Offline

Activity: 1260
Merit: 1115



View Profile
March 17, 2016, 03:22:48 PM
 #114

Are Core developers against a hard fork because it will somehow confiscate time-locked coins? How many people aside from Blockstream employees have time-locked coins now? (I know this is off-topic; it might need a new thread.)

Forgive my petulance and oft-times, I fear, ill-founded criticisms, and forgive me that I have, by this time, made your eyes and head ache with my long letter. But I cannot forgo hastily the pleasure and pride of thus conversing with you.
achow101
Moderator
Legendary
*
expert
Offline Offline

Activity: 3388
Merit: 6631


Just writing some code


View Profile WWW
March 17, 2016, 03:45:07 PM
 #115

Are Core developers against a hard fork because it will somehow confiscate time-locked coins? How many people aside from Blockstream employees have time-locked coins now? (I know this is off-topic; it might need a new thread.)
No, they were against hard forking for changing the way that a txid was calculated.

sickpig
Legendary
*
Offline Offline

Activity: 1260
Merit: 1008


View Profile
March 17, 2016, 05:12:19 PM
 #116

A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

Quote
Its size sets a hard lower bound on the amount of resources to run a node. The fact that the size limit doesn't reflect the true cost has been a long term concern, and it's one of the biggest issues raised with respect to blocksize limits

Biggest issue of this week, perhaps?  

Surely you know that the non-mining relay nodes invalidate the few security guarantees that the protocol can offer. Simple clients should not connect to them, but to miners (or relay nodes that are known to be run by miners). It makes no sense to twist the protocol inside out in order to meet the CONJECTURAL needs of those nodes.

The only cost that really matters is the marginal cost for a miner to add another transaction to his candidate block.  That is the cost that the transaction fees have to cover.  The magnitude of that cost is one of the great mysteries of bitcoin, extensively discussed but never estimated. It seems to be very small (at least for competent miners) and probably depends only on the total size of the transaction.  But in any case the developers have no business worrying about that cost: the fees are payment to the miners; it should be the miners who decide how much to charge, and for what.

according to Adam Back SegWit discount applied to signature data will fix an incentive bug in bitcoin, see:

https://www.reddit.com/r/btc/comments/4aka3f/over_3000_classic_nodes/d11atxc
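The "discount" Adam Back refers to is the BIP 141 weight formula: non-witness bytes count four weight units each, witness bytes one, and fee rates are quoted per virtual byte (weight / 4), so witness data is effectively discounted 75%. A minimal sketch of the arithmetic:

```python
# Segwit weight accounting (BIP 141):
#   weight = base_size * 3 + total_size = 4 * base_size + witness_size
def tx_weight(base_size: int, witness_size: int) -> int:
    """base_size: bytes of non-witness data; witness_size: bytes of witness data."""
    return 4 * base_size + witness_size

def virtual_size(base_size: int, witness_size: int) -> float:
    # Fee rates are quoted per virtual byte = weight / 4,
    # so each witness byte counts as only a quarter of a base byte.
    return tx_weight(base_size, witness_size) / 4

# Example (made-up sizes): a 250-byte tx, 100 bytes of which are witness data.
w = tx_weight(150, 100)     # 150*4 + 100 = 700 weight units
v = virtual_size(150, 100)  # 700 / 4 = 175 vbytes, versus 250 raw bytes
assert w == 700 and v == 175.0
```

This is why moving signature data into the witness lets blocks exceed 1 MB of raw data while staying within the 4-million-weight-unit limit, and why it shifts relative fees away from signature-heavy (UTXO-consuming) transactions.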

Bitcoin is a participatory system which ought to respect the right of self determinism of all of its users - Gregory Maxwell.
molecular
Donator
Legendary
*
Offline Offline

Activity: 2772
Merit: 1019



View Profile
March 17, 2016, 05:35:16 PM
 #117

A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

(Thanks for answering the one question I had about the malleability fix. So it can simply be done by omitting the sigs from the txid hash input, cool. If not, please let me know.)

It seems to me many people have a problem with segwit because of the "hackish" softfork and/or because of the change of the economic model (2 classes of blockspace).

If we did the points listed by JorgeStolfi above as a hardfork, would that be an option for the proponents of segwit? Seems to me such a hardfork could gain wide consensus, maybe wide enough to be considered safe by everyone? It would certainly appeal to the people who just want a simple blocksize increase and it should (I don't know, though) also satisfy the people who want segwit now.

What would be missing compared to segwit? fraud proofs? change of economic model?


PGP key molecular F9B70769 fingerprint 9CDD C0D3 20F8 279F 6BE0  3F39 FC49 2362 F9B7 0769
achow101
Moderator
Legendary
*
expert
Offline Offline

Activity: 3388
Merit: 6631


Just writing some code


View Profile WWW
March 17, 2016, 05:48:34 PM
 #118

A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

(Thanks for answering the one question I had about the malleability fix. So it can simply be done by omitting the sigs from the txid hash input, cool. If not, please let me know.)

It seems to me many people have a problem with segwit because of the "hackish" softfork and/or because of the change of the economic model (2 classes of blockspace).

If we did the points listed by JorgeStolfi above as a hardfork, would that be an option for the proponents of segwit? Seems to me such a hardfork could gain wide consensus, maybe wide enough to be considered safe by everyone? It would certainly appeal to the people who just want a simple blocksize increase and it should (I don't know, though) also satisfy the people who want segwit now.

What would be missing compared to segwit? fraud proofs? change of economic model?


Maybe you should read gmaxwell's posts about doing a hard fork to change that calculation. They are a few posts above this in this thread.

Bergmann_Christoph
Sr. Member
****
Offline Offline

Activity: 409
Merit: 286


View Profile WWW
March 17, 2016, 06:23:11 PM
 #119

A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

Quote
Its size sets a hard lower bound on the amount of resources to run a node. The fact that the size limit doesn't reflect the true cost has been a long term concern, and it's one of the biggest issues raised with respect to blocksize limits

Biggest issue of this week, perhaps?  

Surely you know that the non-mining relay nodes invalidate the few security guarantees that the protocol can offer. Simple clients should not connect to them, but to miners (or relay nodes that are known to be run by miners). It makes no sense to twist the protocol inside out in order to meet the CONJECTURAL needs of those nodes.

The only cost that really matters is the marginal cost for a miner to add another transaction to his candidate block.  That is the cost that the transaction fees have to cover.  The magnitude of that cost is one of the great mysteries of bitcoin, extensively discussed but never estimated. It seems to be very small (at least for competent miners) and probably depends only on the total size of the transaction.  But in any case the developers have no business worrying about that cost: the fees are payment to the miners; it should be the miners who decide how much to charge, and for what.

according to Adam Back SegWit discount applied to signature data will fix an incentive bug in bitcoin, see:

https://www.reddit.com/r/btc/comments/4aka3f/over_3000_classic_nodes/d11atxc

Funny, r/btc gave him a flair as president of Blockstream.

Not funny how he plays with words.

He is asked

Quote
Next you'll claim "Classic isn't doing anything to combat UTXO bloat but Blockstream is!"

and he answers

Quote
well Bitcoin developers are yes, via the mechanism I described. Classic isnt doing that ...

Just a note, slightly off-topic --

--
My book: Bitcoin-Buch.org
Best Bitcoin marketplace in the Eurozone: Bitcoin.de
Best Bitcoin blog in the German-speaking world: bitcoinblog.de

Tips for keeping the blocksize thread informative and entertaining and for fighting misinformation:
Bitcoin: 1BesenPtt5g9YQYLqYZrGcsT3YxvDfH239
Ethereum: XE14EB5SRHKPBQD7L3JLRXJSZEII55P1E8C
rizzlarolla
Hero Member
*****
Offline Offline

Activity: 812
Merit: 1001


View Profile
March 17, 2016, 08:49:02 PM
 #120

A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

(Thanks for answering the one question I had about the malleability fix. So it can simply be done by omitting the sigs from the txid hash input, cool. If not, please let me know.)

It seems to me many people have a problem with segwit because of the "hackish" softfork and/or because of the change of the economic model (2 classes of blockspace).

If we did the points listed by JorgeStolfi above as a hardfork, would that be an option for the proponents of segwit? Seems to me such a hardfork could gain wide consensus, maybe wide enough to be considered safe by everyone? It would certainly appeal to the people who just want a simple blocksize increase and it should (I don't know, though) also satisfy the people who want segwit now.

What would be missing compared to segwit? fraud proofs? change of economic model?



Yeah, both the hackish softfork (although possibly beautiful code) and the economic model, if I understand that correctly.

I don't think segwit could ever achieve HF consensus, in my opinion. However, if a winning hard fork were achieved, I would respect that.
A soft fork is not right here, and could well be considered an attack.

Why not 2MB first, which is on every partisan roadmap? Then segwit, maybe. Maybe not.
(I am assuming 2MB is more easily coded than segwit, and not as complicated as segwit, as was stated earlier. Although ease of coding is only a small part of the reason segwit should not be introduced yet. Certainly not introduced by Core; a SF attack on nodes.)