TierNolan
Legendary
Offline
Activity: 1232
Merit: 1104
|
|
March 20, 2016, 08:56:11 PM |
|
But why does the fork have to make the existing outputs unspendable? I know it is possible to make any sort of fork, but who is proposing anything that would make these locktime tx unspendable?
The actual fork under discussion has this property. Restricting all transactions to 1MB would prevent the O(N²) part of the hashing problem. Even better would be to restrict transactions to 100kB. As I understand it, core already considers transactions above 100kB as non-standard. Restricting transactions to 100kB should improve things by a factor of 100 (assuming O(N²) scaling). The problem with doing that is locked transactions. There could be a locked 200kB transaction that spends some outputs and where an alternative transaction can no longer be created (private keys lost and/or multisig outputs). A soft fork which restricted transactions to 100kB unless the height is evenly divisible by 100 would be a reasonable compromise here. Locked transactions can still be spent, but only in every 100th block. Most likely nobody has 100kB+ locked transactions anyway.
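A minimal sketch of the rule described above (not from the post itself; the constant and function names are hypothetical, and real consensus code is far more involved):

Code:
# Sketch of the soft-fork rule described above: transactions larger than
# 100 kB are only allowed in blocks whose height is divisible by 100.
MAX_STANDARD_TX_SIZE = 100_000  # 100 kB
ESCAPE_INTERVAL = 100           # every 100th block allows oversized txs

def block_obeys_size_rule(height: int, tx_sizes: list[int]) -> bool:
    """Return True if every transaction respects the 100 kB cap,
    or if this is one of the 'anything goes' heights."""
    if height % ESCAPE_INTERVAL == 0:
        return True
    return all(size <= MAX_STANDARD_TX_SIZE for size in tx_sizes)

# Example: a 200 kB locked transaction is rejected at height 12345
# but accepted at height 12300.
assert not block_obeys_size_rule(12345, [200_000])
assert block_obeys_size_rule(12300, [200_000])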
|
1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
|
|
|
jl777 (OP)
Legendary
Offline
Activity: 1176
Merit: 1134
|
|
March 20, 2016, 09:04:08 PM |
|
But why does the fork have to make the existing outputs unspendable? I know it is possible to make any sort of fork, but who is proposing anything that would make these locktime tx unspendable?
The actual fork under discussion has this property. Restricting all transactions to 1MB would prevent the O(N²) part of the hashing problem. Even better would be to restrict transactions to 100kB. As I understand it, core already considers transactions above 100kB as non-standard. Restricting transactions to 100kB should improve things by a factor of 100 (assuming O(N²) scaling). The problem with doing that is locked transactions. There could be a locked 200kB transaction that spends some outputs and where an alternative transaction can no longer be created (private keys lost and/or multisig outputs). A soft fork which restricted transactions to 100kB unless the height is evenly divisible by 100 would be a reasonable compromise here. Locked transactions can still be spent, but only in every 100th block. Most likely nobody has 100kB+ locked transactions anyway.

If >100kB is nonstandard, then the odds are very, very high that there are no such pending transactions, and moving forward CLTV can be used. Cool idea to have an anything-goes block every 100. It probably isn't an issue, but since it is impossible to know for sure, probably a good idea to have something like that. For something that probably doesn't exist, though, 1 in 1000 should be good enough -- or just make it nonstandard, and as long as any single miner is mining them it will eventually get confirmed.
|
|
|
|
ChronosCrypto
Newbie
Offline
Activity: 25
Merit: 0
|
|
March 20, 2016, 09:06:19 PM |
|
But why does the fork have to make the existing outputs unspendable? I know it is possible to make any sort of fork, but who is proposing anything that would make these locktime tx unspendable?
There could be a locked 200kB transaction that spends some outputs and where an alternative transaction can no longer be created (private keys lost and/or multisig outputs).

There isn't. 100kB is a huge transaction (100 times bigger than a "normal" large transaction). IMO, that's a perfectly acceptable threshold. If larger is needed, you can always create a second transaction. The "one every 100 blocks" exception really isn't needed here. It's more cool than useful.
|
|
|
|
ChronosCrypto
Newbie
Offline
Activity: 25
Merit: 0
|
|
March 20, 2016, 09:37:35 PM |
|
But why does the fork have to make the existing outputs unspendable? I know it is possible to make any sort of fork, but who is proposing anything that would make these locktime tx unspendable?
There could be a locked 200kB transaction that spends some outputs and where an alternative transaction can no longer be created (private keys lost and/or multisig outputs).

There isn't. 100kB is a huge transaction (100 times bigger than a "normal" large transaction). IMO, that's a perfectly acceptable threshold. If larger is needed, you can always create a second transaction. The "one every 100 blocks" exception really isn't needed here. It's more cool than useful.

So would a hard fork to Classic result in the loss of time-locked coins?

You mean coins that are time-locked in transactions larger than 100kB? That's enormous. Of course there aren't any such coins. But no, I think Classic has a 1MB transaction-size upper bound, which is a reasonable solution.
|
|
|
|
BlindMayorBitcorn
Legendary
Offline
Activity: 1260
Merit: 1116
|
|
March 20, 2016, 09:45:07 PM |
|
But why does the fork have to make the existing outputs unspendable? I know it is possible to make any sort of fork, but who is proposing anything that would make these locktime tx unspendable?
There could be a locked 200kB transaction that spends some outputs and where an alternative transaction can no longer be created (private keys lost and/or multisig outputs).

There isn't. 100kB is a huge transaction (100 times bigger than a "normal" large transaction). IMO, that's a perfectly acceptable threshold. If larger is needed, you can always create a second transaction. The "one every 100 blocks" exception really isn't needed here. It's more cool than useful.

So would a hard fork to Classic result in the loss of time-locked coins?

You mean coins that are time-locked in transactions larger than 100kB? That's enormous. Of course there aren't any such coins. But no, I think Classic has a 1MB transaction-size upper bound, which is a reasonable solution.

Just checking.
|
Forgive my petulance and oft-times, I fear, ill-founded criticisms, and forgive me that I have, by this time, made your eyes and head ache with my long letter. But I cannot forgo hastily the pleasure and pride of thus conversing with you.
|
|
|
JorgeStolfi
|
|
March 21, 2016, 04:55:24 AM |
|
(Note that there is no way for a miner to determine when a transaction T1 was signed. Even if it spends an UTXO in a transaction T2 that was confirmed only yesterday, it is possible that both T1 and T2 were signed a long time ago.)
Your argument is technically specious. Transactions in Bitcoin have a 4-byte version field, which gives us the potential for billions of rule-sets to apply to old transactions. The correct question to ask is: why wasn't and isn't this changed as the rules get changed?

I am not sure if I understood your comment. Miners cannot apply old semantics when the transaction has an old version field, because that field can be faked by clients to sabotage the change. E.g., suppose that the change imposed a minimum output amount of 0.0001 BTC as a way to reduce spam attacks on the UTXO database. An attacker could frustrate that measure by issuing transactions with the pre-fork version tag. Does that answer your comment?
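A toy sketch of the point being made here (my illustration; the 0.0001 BTC dust threshold is the hypothetical from the post): if old-version transactions are exempted from a new rule, an attacker simply keeps stamping the old version.

Code:
# Toy model of why grandfathering by tx.version defeats a soft fork.
# Names are hypothetical; amounts are in BTC.
DUST_LIMIT = 0.0001   # new rule: outputs below this are invalid
FORK_VERSION = 2      # rule only enforced for version >= 2

def valid_with_grandfathering(tx_version: int, outputs: list[float]) -> bool:
    if tx_version < FORK_VERSION:
        return True                     # old-version txs keep old semantics
    return all(amount >= DUST_LIMIT for amount in outputs)

# An attacker who wants to keep spamming dust just signs new transactions
# with version 1 -- miners cannot tell it was not "mothballed" years ago.
assert valid_with_grandfathering(1, [0.00000001])   # attack still succeeds

def valid_without_grandfathering(outputs: list[float]) -> bool:
    return all(amount >= DUST_LIMIT for amount in outputs)

assert not valid_without_grandfathering([0.00000001])  # fork does its job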
|
Academic interest in bitcoin only. Not owner, not trader, very skeptical of its longterm success.
|
|
|
2112
Legendary
Offline
Activity: 2128
Merit: 1073
|
|
March 21, 2016, 05:42:54 AM |
|
(Note that there is no way for a miner to determine when a transaction T1 was signed. Even if it spends an UTXO in a transaction T2 that was confirmed only yesterday, it is possible that both T1 and T2 were signed a long time ago.)
Your argument is technically specious. Transactions in Bitcoin have a 4-byte version field, which gives us the potential for billions of rule-sets to apply to old transactions. The correct question to ask is: why wasn't and isn't this changed as the rules get changed?

I am not sure if I understood your comment. Miners cannot apply old semantics when the transaction has an old version field, because that field can be faked by clients to sabotage the change. E.g., suppose that the change imposed a minimum output amount of 0.0001 BTC as a way to reduce spam attacks on the UTXO database. An attacker could frustrate that measure by issuing transactions with the pre-fork version tag. Does that answer your comment?

You started writing really weird, conflated stuff. What do fees have to do with transaction syntax? The version field should be used to clearly describe the syntax rules governing the transaction format. The amount of fees doesn't change the syntax, so it doesn't require a change of the version. The existing client already has a "misbehavior" score to disconnect itself from peers that try to abuse it in various ways. There's no point in inventing new mechanisms to do it. All that could possibly be required is to tune the specific values of the various misbehavior demerits.
|
|
|
|
JorgeStolfi
|
|
March 21, 2016, 06:12:56 AM |
|
You started writing really weird, conflated stuff. What do fees have to do with transaction syntax? ... The amount of fees doesn't change the syntax, so it doesn't require a change of the version.
Sorry, I don't understand your objections. There are no "meta-rules" that specify what the validity rules can be. They are not limited to "syntax", whatever that means. Any computable predicate on bit strings could in principle be a validity rule, as long as it does not completely break the system.

Right now there are no validity rules that refer to fees. The minimum fee, like the Pirate Code, "is more what you'd call 'guideline' than actual rule"; each miner decides whether to require it (or even to require more than it). But the minimum could be made into a validity rule. The difference would be that each miner would not only impose it on his own blocks, but also reject blocks solved by other miners that contain transactions paying less than that fee (a toy sketch of this distinction follows below).

The version field should be used to clearly describe syntax rules governing the transaction format.

As I wrote, this cannot be guaranteed. If a fork (rule change) was executed to fix a bug or prevent an attack, the miners cannot continue to use the old rules for transactions that carry the old version tag; that would negate the purpose of the fork. They must reject such transactions. So, it is not safe to retain signed but unconfirmed transactions without broadcasting them.
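A toy sketch of the policy-versus-consensus distinction drawn above (the 0.00001 BTC threshold and helper names are made up for illustration):

Code:
# Sketch of policy vs. consensus treatment of a minimum fee.
MIN_FEE = 0.00001  # BTC, illustrative only

def build_own_block(mempool_txs):
    """Policy: a miner simply skips low-fee txs when building his own block."""
    return [tx for tx in mempool_txs if tx["fee"] >= MIN_FEE]

def block_is_valid(block_txs):
    """Consensus: the same threshold, but now a block from *any* miner
    containing a low-fee tx is rejected outright (this is the soft fork)."""
    return all(tx["fee"] >= MIN_FEE for tx in block_txs)

txs = [{"fee": 0.0005}, {"fee": 0.0}]
assert build_own_block(txs) == [{"fee": 0.0005}]   # policy: just filtered
assert not block_is_valid(txs)                      # consensus: block rejected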
|
Academic interest in bitcoin only. Not owner, not trader, very skeptical of its longterm success.
|
|
|
jl777 (OP)
Legendary
Offline
Activity: 1176
Merit: 1134
|
|
March 21, 2016, 04:34:45 PM |
|
So, it is not safe to retain signed but unconfirmed transactions without broadcasting them.
What do you mean by safe?

Hypothetically (not suggesting anybody has suggested this), wouldn't a softfork (or hardfork) be able to freeze a specific set of addresses? So KYC could be added to bitcoin via softfork, and only the majority of hashpower needs to be bought/convinced to conduct this softfork attack.

Since a hardfork is much more visible and requires buy-in by the community at large, the softfork attack appears to be much more of a threat than a hardfork attack. But if all the miners switched to a KYC version, along with all the big companies, then this seems a pretty viable attack vector, even as a hardfork.

James
|
|
|
|
JorgeStolfi
|
|
March 21, 2016, 09:37:59 PM |
|
So, it is not safe to retain signed but unconfirmed transactions without broadcasting them.
What do you mean by safe?

I mean that, even if your wallet is bug-free and up-to-date, you cannot be sure that your transaction can be confirmed until it is; and that risk increases with time -- because soft-fork changes to the protocol can render the transaction invalid. Since those mothballed transactions are not publicly accessible, there is no way for soft-fork proponents to make sure that they will not be invalidated. In some cases (such as security or bug fixes), they must be invalidated. Conversely, those who hold such transactions may not have the private keys or other conditions needed to create valid versions of them.

This may be bad news for the Lightning Network. The latest attempt at the LN design, IIUC, uses long-lived bidirectional channels, and unconfirmed and unbroadcast transactions ("cheques") that may have to be held by the participants for months or years. It was already pointed out that fee hikes could cause problems, forcing the receiver of a cheque to pay (via CPFP) the fees that the sender was supposed to pay. But soft forks could make the cheque completely unspendable. Then the receiver would lose all the payments that he received through the channel. If the channels have 100-year timeouts, maybe both parties would effectively lose all the coins that they put into the channel. Even if the risk of one cheque being invalidated is low -- say, 1 chance in 1'000'000 -- it may be unacceptable when there are 100'000 people doing 100 transactions per month on the LN (a quick back-of-the-envelope check follows below). Moreover, a single change can precipitate many such incidents in a short time.

Hypothetically (not suggesting anybody has suggested this), wouldn't a softfork (or hardfork) be able to freeze a specific set of addresses? So KYC could be added to bitcoin via softfork, and only the majority of hashpower needs to be bought/convinced to conduct this softfork attack.
Of course. A cooperating mining majority can do anything.
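A quick back-of-the-envelope check of the figures quoted above (they are the post's own illustrative numbers, not measurements):

Code:
# Expected number of invalidated LN cheques per month under the post's figures.
p_invalidated = 1 / 1_000_000     # chance a single held cheque is invalidated
users = 100_000
tx_per_user_per_month = 100

expected_incidents_per_month = users * tx_per_user_per_month * p_invalidated
print(expected_incidents_per_month)   # -> 10.0 incidents per month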
|
Academic interest in bitcoin only. Not owner, not trader, very skeptical of its longterm success.
|
|
|
TierNolan
Legendary
Offline
Activity: 1232
Merit: 1104
|
|
March 21, 2016, 11:06:19 PM |
|
Since those mothballed transactions are not publicly accessible, there is no way for soft-fork proponents to make sure that they will not be invalidated.
Barring emergency fixes, you can make it so that the change depends on the transaction version number. Any soft fork should be backwards compatible, unless there is a good reason not to.

In some cases (such as security or bug fixes), they must be invalidated.

Even for security and bug fixes, the objective should be to not make any transactions invalid. If that isn't possible, then keep the number to a minimum. Transactions which use an undefined version number are fair game, though.

This may be bad news for the Lightning Network. The latest attempt at the LN design, IIUC, uses long-lived bidirectional channels, and unconfirmed and unbroadcast transactions ("cheques") that may have to be held by the participants for months or years.

A soft fork which breaks the Lightning Network would have significant opposition. You are likely much safer if you use transactions of a type that is very popular. Breaking unusual edge cases is one thing; breaking extremely popular transaction formats is another.
|
1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
|
|
|
JorgeStolfi
|
|
March 22, 2016, 03:24:55 AM |
|
Any soft fork should be backwards compatible, unless there is a good reason not to. ... Even for security and bug fixes, the objective should be to not make any transactions invalid.
That is mathematically impossible. A soft fork, by definition, is a change that only makes the rules more restrictive: that is, some transactions that were valid by the old rules are invalid by the new ones, whereas all transactions that are valid by the new rules are also valid by the old ones.

Barring emergency fixes, you can make it so that the change depends on the transaction version number.
As I explained already, that is often not an option. Soft forks are often issued precisely because it is necessary or desirable to outlaw certain types of transactions. Note that miners cannot distinguish a genuine mothballed transaction from a new transaction that is using the old version number just to frustrate the fork.
|
Academic interest in bitcoin only. Not owner, not trader, very skeptical of its longterm success.
|
|
|
TierNolan
Legendary
Offline
Activity: 1232
Merit: 1104
|
|
March 22, 2016, 11:45:44 AM |
|
That is mathematically impossible. A soft fork, by definition, is a change that only makes the rules more restrictive: that is, some transactions that were valid by the old rules are invalid by the new ones, whereas all transactions that are valid by the new rules are also valid by the old ones.
That is why I mentioned using the version field. People who use undefined versions for their transactions need to accept that there is a risk.

The P2SH soft fork could easily have applied only to the outputs of version 2 (and above) transactions. The way it was actually done made it so that certain outputs could have become unspendable: if someone happened to have a locked transaction with a P2SH-matching output, then it would have ended up unspendable. Similarly, it could have used one of the NOPs as a trigger. Using undefined NOPs in locked transactions is also a risky thing to do.

<20 byte hash> OP_P2SH_VERIFY

That would even have used fewer bytes. It would be worth making a statement about what is reasonable to do with locked transactions. Using undefined versions and undefined NOPs would be risky.
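A toy interpreter (my sketch, not TierNolan's code; hash160 is replaced by a truncated SHA-256 stand-in and the scripts are fake placeholders) showing why a redefined NOP such as the OP_P2SH_VERIFY suggested above is soft-fork safe: old nodes skip it, upgraded nodes enforce the hash check.

Code:
import hashlib

def hash160(data: bytes) -> bytes:
    # Stand-in for Bitcoin's HASH160 (RIPEMD160 of SHA256), kept simple here.
    return hashlib.sha256(data).digest()[:20]

def run(script, stack, upgraded: bool) -> bool:
    for op in script:
        if op == "OP_P2SH_VERIFY":          # a redefined OP_NOPx
            if not upgraded:
                continue                    # old node: literally a no-op
            expected = stack.pop()
            redeem_script = stack.pop()
            if hash160(redeem_script) != expected:
                return False                # upgraded node enforces the check
            stack.append(b"\x01")           # check passed (a real node would
                                            # go on to run the redeem script)
        else:
            stack.append(op)                # treat everything else as a push
    return bool(stack) and bool(stack[-1])  # top of stack must be truthy

redeem = b"<serialized redeem script>"
output_script = [hash160(redeem), "OP_P2SH_VERIFY"]

# Old and new nodes both accept the honest spend...
assert run(output_script, [redeem], upgraded=False)
assert run(output_script, [redeem], upgraded=True)
# ...but only upgraded nodes reject a spend with the wrong redeem script.
assert run(output_script, [b"garbage"], upgraded=False)
assert not run(output_script, [b"garbage"], upgraded=True)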
|
1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
|
|
|
gmaxwell
Moderator
Legendary
Offline
Activity: 4270
Merit: 8805
|
|
March 24, 2016, 05:24:27 PM |
|
People who use undefined versions for their transactions need to accept that there is a risk.
The primary established mechanism for safe softforks is the reserved script NOPs (which will not be relayed or mined by unmodified software today), the secondary one is transaction versions. They are reserved for explicitly this purpose. The reason for this ranking is that version is global to all inputs and outputs in a transaction; which creates unwelcome tying-- one should be able to spend and create mixtures of coins under different rule sets. For changes that happen outside script, however, version is still available for use. For segwit the primary mechanism will be segwit witness script versions... which are more clear and flexible than the reserved NOPs.
|
|
|
|
watashi-kokoto
|
|
March 24, 2016, 07:50:19 PM |
|
Let's talk address format. If I remember correctly, segwit will use P2WPKH (20 bytes) and P2WSH (32 bytes). The reasoning is that the pay-to-script variant needs to defend against a certain security bug that would otherwise leave only 80 bits of security. But can we improve the alphabet itself? I mean, move from base58 to base56 by removing some two letters -- perhaps wide ones like W, w, m. Or even drop lowercase completely. This would provide 32 symbols: {ABCDEFGHJKLMNPQRSTUVXYZ123456789}
* 32 is a nice round number
* O, 0 and I are removed because of ambiguity
* W is removed because it is too wide for very narrow low-resolution fonts
Opinions?
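A quick sanity check of the proposed 32-symbol alphabet (copied verbatim from the post), plus a toy encoder for illustration only; this is not any standard address format.

Code:
ALPHABET = "ABCDEFGHJKLMNPQRSTUVXYZ123456789"
assert len(ALPHABET) == 32                        # a power of two: 5 bits per symbol
assert not {"0", "O", "I", "W"} & set(ALPHABET)   # ambiguous/wide symbols dropped

def encode_base32ish(data: bytes) -> str:
    """Encode bytes as base-32 digits of a big integer (leading zero bytes
    are not preserved -- toy illustration only)."""
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, rem = divmod(n, 32)
        out = ALPHABET[rem] + out
    return out or ALPHABET[0]

print(encode_base32ish(bytes.fromhex("00112233")))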
|
|
|
|
JorgeStolfi
|
|
March 25, 2016, 06:29:30 AM |
|
The primary established mechanism for safe softforks is the reserved script NOPs (which will not be relayed or mined by unmodified software today)
If a fork makes a previously illegal opcode legal, how can it be a soft fork?
|
Academic interest in bitcoin only. Not owner, not trader, very skeptical of its longterm success.
|
|
|
l8orre
Legendary
Offline
Activity: 1181
Merit: 1018
|
|
March 25, 2016, 08:00:29 AM |
|
The primary established mechanism for safe softforks is the reserved script NOPs (which will not be relayed or mined by unmodified software today)
If a fork makes a previously illegal opcode legal, how can it be a soft fork?

Good question -- maybe it's like the issue of being pregnant or not, and trying to skirt it by saying it could be possible to be sort of a 'bit' pregnant... But I am not technically qualified enough to make authoritative statements about the details of the bitcoin protocol.
|
|
|
|
CIYAM
Legendary
Offline
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
|
|
March 25, 2016, 08:00:29 AM |
|
The primary established mechanism for safe softforks is the reserved script NOPs (which will not be relayed or mined by unmodified software today)
If a fork makes a previously illegal opcode legal, how can it be a soft fork?

It was not previously illegal, and it will be interpreted as doing nothing by unmodified software (in scripts that might appear in later blocks). So although the unmodified software doesn't know what that op-code does, it won't worry about it as far as validating the script goes (which matters, assuming the soft-fork succeeds). Because the unmodified software doesn't know what the NOP is intended to do, however, it won't relay such a script (nor would an unmodified miner mine it). This is because the unmodified software knows enough to know it can't be sure if the script is valid or not. Got it? (There is a clear difference between relaying, mining and validating.)
|
|
|
|
TierNolan
Legendary
Offline
Activity: 1232
Merit: 1104
|
|
March 25, 2016, 02:11:05 PM |
|
The primary established mechanism for safe softforks is the reserved script NOPs (which will not be relayed or mined by unmodified software today), the secondary one is transaction versions.
Neither of which is being used for segregated witness. According to the BIP, it works like P2SH and uses a template:

OP_1 <0x{32-byte-hash-value}>

If an output is of that format, then it counts as a witness output (the OP_1 can be replaced by other values to give the SW version). An alternative would be to use

OP_1 <0x{32-byte-hash-value}> OP_SW_VERIFY

OP_SW_VERIFY would be one of the NOPs. This would ensure that an output matching the template could not end up unspendable. Outputs that don't include a checksig of some kind are already inherently unsafe to spend; at least, the P2SH and SW templates don't include OP_CHECKSIG calls.
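For illustration only (my sketch, not text from the BIP): a simplified recognizer for the template quoted above, treating the leading opcode as a small integer witness version (0-16) and ignoring the finer points of push encoding.

Code:
def is_witness_program(script: list) -> bool:
    """Version opcode (0..16) followed by a single 2-40 byte push."""
    if len(script) != 2:
        return False
    version_op, program = script
    return version_op in range(0, 17) and isinstance(program, bytes) and 2 <= len(program) <= 40

# The post's example: OP_1 followed by a 32-byte hash matches the template...
assert is_witness_program([1, bytes(32)])
# ...while a legacy pay-to-pubkey-hash script does not.
assert not is_witness_program(["OP_DUP", "OP_HASH160", bytes(20), "OP_EQUALVERIFY", "OP_CHECKSIG"])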
|
1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
|
|
|
JorgeStolfi
|
|
March 25, 2016, 07:16:48 PM |
|
Because the unmodified software doesn't know what the NOP is intended to do, however, it won't relay such a script (nor would an unmodified miner mine it). This is because the unmodified software knows enough to know it can't be sure if the script is valid or not.
If no miner will mine a transaction that has a NOP code, then the NOP is effectively illegal. I.e., those lines in the miner's software that say to reject such transactions are effectively part of the validity rules. Which means that making those opcodes legal is a relaxation of the existing rules, and therefore not a soft-fork type of change.

(there is a clear difference between relaying, mining and validating)

Each player can validate as much as he wants, by any rules that he wants. However, if he wants to use "the" bitcoin that "everybody" uses, he had better use rules that are compatible with theirs, in the sense that he must trust the same blockchain that they trust. As long as "everybody" prefers to trust the chain with the 1500 PH/s, "everybody" had better accept as valid whatever chain is created by the miners with the majority of that hashpower.

Likewise, each miner in theory can adopt any validity criteria that he likes. He can change them at any time, apply them if and when he wants, and build his blocks any way he wants. But, as long as he wants to earn bitcoins that he can sell, he must make blocks that end up included in some blockchain that enough potential buyers will trust. There is no algorithm for that: he must watch the "market" and try to guess how the humans will behave.

My point is that external observers cannot tell which validity rules a miner is using, nor when or whether he applies them. All they can see are the blocks that he broadcasts. In particular, there is no way to tell whether a miner is not accepting transactions with NOPs because NOPs are invalid in his version of the validity rules, or because he is afraid that someone else may consider them invalid, or because he thinks that they bring bad luck.

As for the non-mining relay nodes, they are aberrations that have no place in the protocol and break all its (already weak) security guarantees. They should not exist, and clients should not use them.
|
Academic interest in bitcoin only. Not owner, not trader, very skeptical of its longterm success.
|
|
|
|