Topic: BIP 17
Steve
January 23, 2012, 09:41:28 PM #21

Can someone remind me why BIP-12 has fallen out of favor?  OP_EVAL might add some amount of flexibility (regarding where a script is defined vs. when it is actually executed), but none of these proposals seems radically different from one another.
BIP 12 cannot be statically analyzed. AFAIK no practical need for static analysis has surfaced yet, but so long as things are being rushed, it's not safe to say they won't in the future either...
Ah, right.  Why do you say there's no practical need?  One practical need is to determine a priori whether a script is too computationally expensive to be allowed.  With OP_EVAL, I could push some code on the stack and evaluate many times over…it's possible you could trivially mitigate that problem by limiting the number of allowed OP_EVALs, but it does make it difficult (if not impossible) to determine up front, in all cases, the cost of running a given script.  You could push code onto the stack that pushes more code onto the stack and executes another OP_EVAL (creating an infinite recursion that may be difficult to detect).  For reasoning similar to the omission of loops and jumps, I would be hesitant to have an OP_EVAL.  I think the code separator & checkhashverify (or ideally pushcode) is the cleaner approach.  The whole objective here is a mechanism to hash the script required to spend a transaction.  OP_EVAL goes way beyond that.  And even BIP16, which also evaluates code you push on the stack, seems wrong to me (and would make the implementation more complex and static analysis more difficult).

With a general OP_CODESEPARATOR OP_PUSHCODE, you would have the flexibility of pushing any sequence of code onto the stack for the purposes of later computing a hash value, but you would never be executing code that was pushed onto the stack.  One improvement upon that would be to somehow ensure that only code which will actually execute would be hashed (to make it impossible to just push a random value onto the stack and then use its hash).  OP_CODEHASHVERIFY has that advantage, but requires that the hashed code immediately precede the operation.  It's possible I'm being overly paranoid though.
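
To make the static-analysis point concrete, here is a minimal sketch of the kind of one-pass sigop counter being discussed (a simplification, not Bitcoin's actual counting code: it ignores the OP_PUSHDATA1/2/4 forms).  The scan never executes anything, which is exactly why an OP_EVAL-style opcode defeats it: the code OP_EVAL would run arrives as stack data that the scan can't see.

Code:
#include <cstddef>
#include <cstdint>
#include <vector>

// Count signature operations by scanning the script once, without executing it.
int CountSigOpsStatically(const std::vector<uint8_t>& script)
{
    int sigops = 0;
    for (std::size_t i = 0; i < script.size(); ++i) {
        const uint8_t op = script[i];
        if (op >= 0x01 && op <= 0x4b) {
            i += op;             // direct push: skip over the pushed bytes
        } else if (op == 0xac || op == 0xad) {
            sigops += 1;         // OP_CHECKSIG / OP_CHECKSIGVERIFY
        } else if (op == 0xae || op == 0xaf) {
            sigops += 20;        // OP_CHECKMULTISIG(VERIFY), worst-case count
        }
        // A hypothetical OP_EVAL would execute bytes pushed above; this loop
        // can never see them, so their cost has no static bound.
    }
    return sigops;
}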

As for backward compatibility & chain forks, I think I would prefer a clean solution rather than one that is compromised for the sake of backward compatibility.  Then I would lobby to get people to upgrade to clients that accept/propagate the new transactions and perhaps create patches for some of the more popular old versions designed just to allow and propagate these new types of transactions.  Then when it's clear that the vast majority of nodes support the new transactions, declare it safe to start using them.  Any stragglers that haven't updated might find themselves off on a dying fork of the block chain…which will be a great motivator for them to upgrade.  Wink

Luke-Jr
January 23, 2012, 09:48:58 PM #22

Why do you say there's no practical need?  One practical need is to determine a priori whether a script is too computationally expensive to be allowed.  With OP_EVAL, I could push some code on the stack and evaluate many times over…
…and then stop evaluating when you hit the limit. Even with a static-analysis limit in place, I could waste just as much of your computation time by coming in just under the limit and then failing with OP_0 or such. Knowing the cost beforehand doesn't stop any known attacks.
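
A minimal sketch of that runtime cutoff (hypothetical interpreter loop, not the actual client code): the budget is spent as execution proceeds, so the attacker's cost is capped whether or not it was knowable up front.

Code:
#include <cstddef>
#include <cstdint>
#include <vector>

// Abort the moment the operation budget is exhausted; the script simply fails.
bool EvalWithBudget(const std::vector<uint8_t>& script, int& opBudget)
{
    for (std::size_t pc = 0; pc < script.size(); ++pc) {
        if (--opBudget < 0) return false;  // limit hit mid-execution
        // ... dispatch script[pc] here; a nested OP_EVAL would share opBudget ...
    }
    return true;
}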

Here is a BIP 17 transaction on testnet created with my new checkhashverify branch. Please help test/review!

Gavin Andresen (OP)
January 23, 2012, 09:59:37 PM #23

...And even BIP16, which also evaluates code you push on the stack, seems wrong to me (and would make the implementation more complex and static analysis more difficult).

BIP 16 explicitly states:
"Validation fails if there are any operations other than "push data" operations in the scriptSig."

Let me try again for why I think it is a bad idea to put anything besides "push data" in the scriptSig:

Bitcoin version 0.1 evaluated transactions by doing this:

Code:
Evaluate(scriptSig + OP_CODESEPARATOR + scriptPubKey)

That turned out to be a bad idea, because one person controls what is in the scriptPubKey and another the scriptSig.

Part of the fix was to change evaluation to:

Code:
stack = Evaluate(scriptSig)
Evaluate(scriptPubKey, stack)

That gives a potential attacker much less ability to leverage some bug or flaw in the scripting system.

Little-known fact of Bitcoin as it exists right now: you can insert extra "push data" opcodes at the beginning of the scriptSigs of transactions that don't belong to you, relay them, and the modified transaction (with a different transaction id!) may be mined.
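
To illustrate with a toy model (not real validation code): a standard pay-to-pubkey-hash scriptPubKey pops exactly the two items it needs, so anything pushed underneath is simply left on the stack and ignored, yet every added byte changes the serialized transaction, and the txid is the double-SHA256 of that serialization.

Code:
#include <iostream>
#include <string>
#include <vector>

int main()
{
    // The spender signed <sig> <pubkey>; a relayer prepends an extra push.
    std::vector<std::string> stack = {"junk", "<sig>", "<pubkey>"};

    // OP_DUP OP_HASH160 <hash> OP_EQUALVERIFY OP_CHECKSIG consumes the top
    // two items; pretend the signature check succeeds.
    stack.pop_back();            // <pubkey>
    stack.pop_back();            // <sig>
    stack.push_back("true");     // result of the (pretend) OP_CHECKSIG

    // Old rule: only the top of the stack decides validity.
    std::cout << "valid: " << stack.back()
              << ", ignored items below: " << stack.size() - 1 << '\n';
}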

Quote
As for backward compatibility & chain forks, I think I would prefer a clean solution rather than one that is compromised for the sake of backward compatibility.  Then I would lobby to get people to upgrade to clients that accept/propagate the new transactions and perhaps create patches for some of the more popular old versions designed just to allow and propagate these new types of transactions.  Then when it's clear that the vast majority of nodes support the new transactions, declare it safe to start using them.  Any stragglers that haven't updated might find themselves off on a dying fork of the block chain…which will be a great motivator for them to upgrade.  Wink

Are you volunteering to make that happen? After working really hard for over four months now to get a backwards-compatible change done, I'm not about to suggest an "entire network must upgrade" change...

Luke-Jr
January 23, 2012, 10:11:43 PM #24

Let me try again for why I think it is a bad idea to put anything besides "push data" in the scriptSig:

...

That turned out to be a bad idea, because one person controls what is in the scriptPubKey and another the scriptSig.
And BIP 16 makes this true again: the receiver now controls (to a degree) both scriptSig and "scriptPubKey". BIP 17 retains the current rules.

Little-known fact of Bitcoin as it exists right now: you can insert extra "push data" opcodes at the beginning of the scriptSigs of transactions that don't belong to you, relay them, and the modified transaction (with a different transaction id!) may be mined.
You can insert extra non-PUSH opcodes too, and mine them yourself... Basically, we can already put non-PUSH stuff in scriptSig, so if there is a vulnerability here, it's already in effect.

Quote
As for backward compatibility & chain forks, I think I would prefer a clean solution rather than one that is compromised for the sake of backward compatibility.  Then I would lobby to get people to upgrade to clients that accept/propagate the new transactions and perhaps create patches for some of the more popular old versions designed just to allow and propagate these new types of transactions.  Then when it's clear that the vast majority of nodes support the new transactions, declare it safe to start using them.  Any stragglers that haven't updated might find themselves off on a dying fork of the block chain…which will be a great motivator for them to upgrade.  Wink

Are you volunteering to make that happen? After working really hard for over four months now to get a backwards-compatible change done, I'm not about to suggest an "entire network must upgrade" change...
I think just about every developer agrees with Gavin that this is not worth a blockchain fork...

Steve
January 23, 2012, 11:12:17 PM #25

Quote
As for backward compatibility & chain forks, I think I would prefer a clean solution rather than one that is compromised for the sake of backward compatibility.  Then I would lobby to get people to upgrade to clients that accept/propagate the new transactions and perhaps create patches for some of the more popular old versions designed just to allow and propagate these new types of transactions.  Then when it's clear that the vast majority of nodes support the new transactions, declare it safe to start using them.  Any stragglers that haven't updated might find themselves off on a dying fork of the block chain…which will be a great motivator for them to upgrade.  Wink

Are you volunteering to make that happen? After working really hard for over four months now to get a backwards-compatible change done, I'm not about to suggest an "entire network must upgrade" change...
I think just about every developer agrees with Gavin that this is not worth a blockchain fork...

So, instead of a fork, you try to create a hacky solution that old clients won't completely reject, but at the same time won't actually fully verify?  It doesn't seem so clear cut that this is preferable to a fork.  It seems like a bigger risk to have clients passing along transactions that they aren't really validating.  And I'm not sure a block chain fork is the end of the world.  Consider that when a fork occurs (the first block with a newer transaction type not accepted by the old miners and clients), most transactions will still be completely valid in both forks.  The exceptions are coinbase transactions, P2SH transactions and any downstream transactions from those.

If you've convinced the majority of miners to upgrade before the split occurs (and miners take steps to avoid creating a fork block until some date after it's been confirmed that most miners support it), then miners that have chosen not to upgrade will quickly realize that they risk their block rewards being unmarketable coins.  So I'm pretty sure they'll quickly update after that point.  People running non-mining clients will also quickly follow when they realize mining activity on their fork is dying off (but the vast majority of the transactions that appear valid to them will also be just as valid in the newer fork).  The big risk for people that haven't upgraded is that someone double spends by sending them a plain old transaction after they've already spent those coins via a new-style transaction that the old clients don't accept.  But even then, it might be difficult for such a transaction to propagate if the vast majority of people have upgraded.

Steve
January 23, 2012, 11:31:58 PM #26

...And even BIP16, which also evaluates code you push on the stack, seems wrong to me (and would make the implementation more complex and static analysis more difficult).

BIP 16 explicitly states:
"Validation fails if there are any operations other than "push data" operations in the scriptSig."

Let me try again for why I think it is a bad idea to put anything besides "push data" in the scriptSig:

Bitcoin version 0.1 evaluated transactions by doing this:

Code:
Evaluate(scriptSig + OP_CODESEPARATOR + scriptPubKey)

That turned out to be a bad idea, because one person controls what is in the scriptPubKey and another the scriptSig.

Part of the fix was to change evaluation to:

Code:
stack = Evaluate(scriptSig)
Evaluate(scriptPubKey, stack)

That gives a potential attacker much less ability to leverage some bug or flaw in the scripting system.
The only practical difference between these is that by restarting evaluation you ensure that all execution context other than the stack is cleared.  I think you could have made OP_CODESEPARATOR ensure that everything other than the stack is wiped and achieved essentially the same objective.  If you had good tests for every opcode ensuring it behaves correctly and leaves all execution context in the correct state, you could breathe a little easier about such possible exploits.

Both BIP-16 and BIP-17 have an OP_CHECKSIG in the scriptSig.  It seems you're concerned about whether it executes in the same context as the rest of the scriptSig or in some other context (i.e. the scriptPubKey).  It seems the concern is the same one that originally motivated you to split the execution of scriptSig and scriptPubKey.  It feels like an irrational fear, but maybe the implementation of the opcodes has been historically buggy and the fear is warranted.

Quote
Little-known fact of Bitcoin as it exists right now: you can insert extra "push data" opcodes at the beginning of the scriptSigs of transactions that don't belong to you, relay them, and the modified transaction (with a different transaction id!) may be mined.
Well, that seems bad (though I can't imagine how it could actually be exploited…other than creating confusion about transaction IDs).  Is there a problem with the scope of what is getting signed?

makomk
January 24, 2012, 11:20:28 AM #27

  • Old clients and miners count each OP_CHECKMULTISIG in a scriptSig or scriptPubKey as 20 "signature operations (sigops)."  And there is a maximum of 20,000 sigops per block.  That means a maximum of 1,000 BIP-17-style multisig inputs per block.  BIP 16 "hides" the CHECKMULTISIGs from old clients, and (for example) counts a 2-of-2 CHECKMULTISIG as 2 sigops instead of 20. Increasing the MAX_SIGOPS limit would require a 'hard' blockchain split; BIP 16 gives 5-10 times more room for transaction growth than BIP 17 before bumping into block limits.
Unless I'm entirely mistaken, there was a rather nasty vulnerability in OP_EVAL caused by this added bit of complexity, one that BIP 16 would've inherited if you hadn't spotted and fixed it. While technically it was only a denial-of-service vulnerability that prevented nodes that supported it from mining any blocks, a denial-of-service vulnerability of this kind is enough to let someone create transactions spending other people's bitcoins from their P2SH addresses and get non-upgraded nodes to accept them even after the switch-on date, which is kind of a big deal.
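
For reference, a sketch of the two sigop accounting modes described in the quote above (simplified; the real BIP 16 rule inspects the serialized script that is about to be executed):

Code:
// Legacy counting charges every OP_CHECKMULTISIG its worst case.
int LegacySigOpCost() { return 20; }

// BIP 16's "accurate" counting charges n sigops when an OP_n (n = 1..16)
// immediately precedes the CHECKMULTISIG, falling back to 20 otherwise.
int AccurateSigOpCost(int n) { return (n >= 1 && n <= 16) ? n : 20; }

// Example: a 2-of-2 CHECKMULTISIG costs 20 legacy sigops but only 2 accurate
// ones, so roughly ten times as many such inputs fit under the 20,000-sigop
// block limit.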

DiThi
January 24, 2012, 12:58:40 PM (last edit: 01:09:23 PM) #28

Why the hard dates?
You both are struggling and rushing because the dates you set keep arriving before miners notice and upgrade. Here's an alternative idea:
  • Have P2SH* implemented, and announce P2SH support in the coinbase, but keep it disabled until a certain condition is met.
  • The condition is to have 55% or more blocks in any 2016-block span announcing support for P2SH.
  • When that condition is met, the remaining 45% of hashing power will have two weeks to update.
  • Then remove the P2SH announcement and just reject blocks with invalid P2SH transactions. The changes to the block limits also take effect at this point.
  • A future version of the software will remove the automatic switch logic.

* By P2SH I'm referring to both BIP 16 and 17. I have no preference for either of them if a soft schedule is chosen.

Unless I'm entirely mistaken, there was a rather nasty vulnerability in OP_EVAL caused by this added bit of complexity, one that BIP 16 would've inherited if you hadn't spotted and fixed it. While technically it was only a denial-of-service vulnerability that prevented nodes that supported it from mining any blocks, a denial-of-service vulnerability of this kind is enough to let someone create transactions spending other people's bitcoins from their P2SH addresses and get non-upgraded nodes to accept them even after the switch-on date, which is kind of a big deal.

BIP 16 was made specifically to address this. All the concerns expressed in this thread are, IMHO, solved by a soft schedule with an automatic 55% switchover, as long as clients honor the 6-confirmation convention.
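
A sketch of the 55%-of-2016-blocks trigger proposed above (hypothetical helper; it assumes each block's coinbase has already been parsed into a support flag):

Code:
#include <vector>

// True once at least 55% of the blocks in a 2016-block window announce
// P2SH support in their coinbase.
bool WindowActivatesP2SH(const std::vector<bool>& supportFlags)
{
    if (supportFlags.size() != 2016) return false;
    int supporting = 0;
    for (bool f : supportFlags) supporting += f ? 1 : 0;
    return supporting * 100 >= 55 * 2016;
}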

Steve
January 24, 2012, 01:28:48 PM #29

Why the hard dates?
You both are struggling and rushing because the dates you set keep arriving before miners notice and upgrade. Here's an alternative idea:
  • Have P2SH* implemented, and announce P2SH support in the coinbase, but keep it disabled until a certain condition is met.
  • The condition is to have 55% or more blocks in any 2016-block span announcing support for P2SH.
  • When that condition is met, the remaining 45% of hashing power will have two weeks to update.
  • Then remove the P2SH announcement and just reject blocks with invalid P2SH transactions. The changes to the block limits also take effect at this point.
  • A future version of the software will remove the automatic switch logic.

* By P2SH I'm referring to both BIP 16 and 17. I have no preference for either of them if a soft schedule is chosen.

Unless I'm entirely mistaken, there was a rather nasty vulnerability in OP_EVAL caused by this added bit of complexity, one that BIP 16 would've inherited if you hadn't spotted and fixed it. While technically it was only a denial-of-service vulnerability that prevented nodes that supported it from mining any blocks, a denial-of-service vulnerability of this kind is enough to let someone create transactions spending other people's bitcoins from their P2SH addresses and get non-upgraded nodes to accept them even after the switch-on date, which is kind of a big deal.

BIP 16 was made specifically to address this. All the concerns expressed in this thread are, IMHO, solved by a soft schedule with an automatic 55% switchover, as long as clients honor the 6-confirmation convention.

I agree (though I might suggest 64 or 70%…and maybe a month instead of 2 weeks for the activation).  It seems to me that people are compromising the design out of an irrational fear of a chain fork.  If you get the overwhelming majority of people to add P2SH support (without actually activating it yet), you've built a consensus that people want it.  If you then set a date for activation, you give everyone else that hasn't yet upgraded a chance to do so…and they will do it, because the consequence of not doing so is that they end up with some unspendable coins in their wallet (it's also worth noting that the majority of coins in old wallets would actually still be usable on both chain forks long after the activation).  After activation, you can then begin the work needed to add support for these transactions in the user interface.

piuk
January 24, 2012, 04:41:51 PM #30

  • The condition is to have 55% or more blocks in any 2016-block span announcing support for P2SH.

This is an excellent idea. It would give everyone in the network a longer time to upgrade. Also, the voting process should be standardised so that you include a BIP number, e.g. "/BIP_0016/" rather than "/P2SH/" or "CHC" etc.
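
For example, a sketch of matching such a standardised tag (the tag format comes from the suggestion above; the helper itself is hypothetical):

Code:
#include <cstdio>
#include <string>

// True if the coinbase text carries a vote tag like "/BIP_0016/".
bool AnnouncesBIP(const std::string& coinbaseText, int bipNumber)
{
    char tag[16];
    std::snprintf(tag, sizeof(tag), "/BIP_%04d/", bipNumber);
    return coinbaseText.find(tag) != std::string::npos;
}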

Inaba
January 24, 2012, 05:22:10 PM #31

Yes, my biggest problem with either BIP is the timeframe as well.  For my pool, I've disabled broadcasting of support for either BIP in all of my mining daemons for the moment.  I would like to see a consensus reached and a sane timeframe proposed before re-enabling support.

Two, three, or four weeks is not a sane timeframe for changes of this type to be tested, vetted, and deployed.

Luke-Jr
January 24, 2012, 05:28:39 PM #32

I set the vote/deadline for BIP 17 so soon because I figured Gavin wouldn't accept any delays. If Gavin is willing to tolerate a later schedule for BIP 17, I can update it.

Gavin Andresen (OP)
January 25, 2012, 03:00:17 PM #33

Why the hard dates?
You both are struggling and rushing because the dates you set keep arriving before miners notice and upgrade. Here's an alternative idea:
  • Have P2SH* implemented, and announce P2SH support in the coinbase, but keep it disabled until a certain condition is met.
  • The condition is to have 55% or more blocks in any 2016-block span announcing support for P2SH.
  • When that condition is met, the remaining 45% of hashing power will have two weeks to update.
  • Then remove the P2SH announcement and just reject blocks with invalid P2SH transactions. The changes to the block limits also take effect at this point.
  • A future version of the software will remove the automatic switch logic.

That's non-trivial to implement; it seems to me that a conscious decision by the miners/pools to support or not support is less work and safer.

Luke proposed something similar earlier, though; I'm surprised his patches don't implement it.

I like whoever proposed that the string in the coinbase refer to the BIP; in the future, that's the way it should be done.


RE: schedules:

Deadlines, as we've just seen, have a way of focusing attention.  OP_EVAL got, essentially, zero review/testing (aside from my own) until a month before the deadline.

It seems to me one-to-two months is about the right amount of time to get thorough review and testing of this type of backwards-compatible change. Longer deadlines just mean people get busy working on other things and ignore the issue.

mndrix (Michael Hendricks)
January 25, 2012, 03:57:47 PM #34

With BIP 17, both transaction outputs and inputs fail the old IsStandard() check, so old clients and miners will refuse to relay or mine both transactions that send coins into a multisignature transaction and transactions that spend multisignature transactions.  BIP 16 scriptSigs look like standard scriptSigs to old clients and miners. The practical effect is that, as long as less than 100% of the network is upgraded, it will take longer for BIP 17 transactions to get confirmed compared to BIP 16 transactions.
Since scriptSigs must always follow scriptPubKey, does this really make a big difference? I.e., if people can't send them, they can't receive them anyway.

So far, I favor BIP 16 because of this point alone.  Eyeballing the client version distributions suggests that roughly 70% of users are running clients older than 0.5.  We probably can't expect that 70% to upgrade anytime soon.

For Bitcoin businesses, slow propagation is a customer support headache.  In my experience, customers send their payment and expect prompt acknowledgement that it was received even if they still have to wait for confirmations.  The longer the propagation delay, the more customer support emails I get.  I'm guessing propagation delay will slow adoption of BIP 17 transactions among those needing them most: businesses with large balances.

Incidentally, I agree with sentiments expressed elsewhere on this thread that IsStandard() should be replaced with actual resource consumption metrics in the scripting evaluation engine.  It seems fairly straightforward to assign a cost to each opcode and fail any script that hits the resource limits.  That seems both more flexible and more durable than IsStandard().
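
A sketch of that per-opcode cost idea (the weights here are invented for illustration; real values would need benchmarking):

Code:
#include <cstdint>
#include <vector>

// Charge each opcode a weight and fail the script once the budget is spent,
// instead of pattern-matching whole scripts the way IsStandard() does.
int OpCost(uint8_t op)
{
    switch (op) {
        case 0xac: return 100;       // OP_CHECKSIG: an ECDSA verification
        case 0xae: return 100 * 20;  // OP_CHECKMULTISIG: worst case
        case 0xa9: return 10;        // OP_HASH160
        default:   return 1;         // cheap stack/arithmetic ops
    }
}

bool WithinResourceLimits(const std::vector<uint8_t>& script, int budget)
{
    for (uint8_t op : script) {
        budget -= OpCost(op);
        if (budget < 0) return false;
    }
    return true;
}
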
Luke-Jr
January 25, 2012, 04:04:34 PM #35

With BIP 17, both transaction outputs and inputs fail the old IsStandard() check, so old clients and miners will refuse to relay or mine both transactions that send coins into a multisignature transaction and transactions that spend multisignature transactions.  BIP 16 scriptSigs look like standard scriptSigs to old clients and miners. The practical effect is that, as long as less than 100% of the network is upgraded, it will take longer for BIP 17 transactions to get confirmed compared to BIP 16 transactions.
Since scriptSigs must always follow scriptPubKey, does this really make a big difference? I.e., if people can't send them, they can't receive them anyway.

So far, I favor BIP 16 because of this point alone.  Eyeballing the client version distributions suggests that roughly 70% of users are running clients older than 0.5.  We probably can't expect that 70% to upgrade anytime soon.
BIP 16 has the same problem as BIP 17 when sending to them.

Incidentally, I agree with sentiments expressed elsewhere on this thread that IsStandard() should be replaced with actual resource consumption metrics in the scripting evaluation engine.  It seems fairly straightforward to assign a cost to each opcode and fail any script that hits the resource limits.  That seems both more flexible and more durable than IsStandard().
IsStandard() is a permanent part of the protocol with BIP 16.

jojkaart
January 25, 2012, 04:12:09 PM #36


So far, I favor BIP 16 because of this point alone.  Eyeballing the client version distributions suggests that roughly 70% of users are running clients older than 0.5.  We probably can't expect that 70% to upgrade anytime soon.

For Bitcoin businesses, slow propagation is a customer support headache.  In my experience, customers send their payment and expect prompt acknowledgement that it was received even if they still have to wait for confirmations.  The longer the propagation delay, the more customer support emails I get.  I'm guessing propagation delay will slow adoption of BIP 17 transactions among those needing them most: businesses with large balances.

Incidentally, I agree with sentiments expressed elsewhere on this thread that IsStandard() should be replaced with actual resource consumption metrics in the scripting evaluation engine.  It seems fairly straightforward to assign a cost to each opcode and fail any script that hits the resource limits.  That seems both more flexible and more durable than IsStandard().

It would be pretty easy to fix this. All you need to do is modify the networking code (as well as the DNS seeds) to make sure there is connectivity between everyone running the new version without needing to pass through old versions in the process. Could someone more experienced with p2p networks than me propose an algorithm?
Gavin Andresen (OP)
January 25, 2012, 04:13:55 PM #37

IsStandard() is a permanent part of the protocol with BIP 16.

No, it really isn't.

Here's a possible future implementation of IsStandard():

Code:
bool
IsStandard()
{
    return true;
}

I like the idea of a future IsStandard() that allows more transaction types, but only if they're under some (sane) resource limits.

Steve
January 25, 2012, 04:14:24 PM #38

IsStandard() is a permanent part of the protocol with BIP 16.
Can you elaborate why that's the case?  If true, I think it's very bad.  IsStandard() needs to be lifted at some point (probably when there is a suite of tests around each and every opcode that verify that it does the specified thing and leaves all execution context in a valid state).

Edit: by "lifted" I mean to not restrict scripts to a small set of well known ones…but limits on resource utilization by a script is definitely needed.

Luke-Jr
January 25, 2012, 04:57:30 PM #39

IsStandard() is a permanent part of the protocol with BIP 16.
Can you elaborate why that's the case?  If true, I think it's very bad.  IsStandard() needs to be lifted at some point (probably when there is a suite of tests around each and every opcode that verify that it does the specified thing and leaves all execution context in a valid state).
Gavin is correct that the actual function named IsStandard() could be removed or replaced. However, with BIP 16, all implementations are required to check for the specific BIP 16 standard transaction and treat it differently.
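
Concretely, the shape every implementation must special-case under BIP 16 is the 23-byte scriptPubKey template from the BIP; the helper below is a sketch of that check:

Code:
#include <cstdint>
#include <vector>

// scriptPubKey form required by BIP 16: OP_HASH160 <20-byte hash> OP_EQUAL.
bool IsPayToScriptHash(const std::vector<uint8_t>& scriptPubKey)
{
    return scriptPubKey.size() == 23 &&
           scriptPubKey[0]  == 0xa9 &&  // OP_HASH160
           scriptPubKey[1]  == 0x14 &&  // push 20 bytes
           scriptPubKey[22] == 0x87;    // OP_EQUAL
}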

interlagos
January 25, 2012, 06:38:32 PM #40

IsStandard() is a permanent part of the protocol with BIP 16.
Can you elaborate why that's the case?  If true, I think it's very bad.  IsStandard() needs to be lifted at some point (probably when there is a suite of tests around each and every opcode that verify that it does the specified thing and leaves all execution context in a valid state).
Gavin is correct that the actual function named IsStandard() could be removed or replaced. However, with BIP 16, all implementations are required to check for the specific BIP 16 standard transaction and treat it differently.
+1
I'm still with Luke on this one.
While Gavin admits that he would love to make IsStandard more generic (moving from pre-defined scripts to generic resource-constrained ones), he still leaves the handling of multisig as an eternal "special case" for some reason.

On the other hand, this "special case" might just be a first step into another way of doing things, and if there are other "special cases" in the future it wouldn't look that odd. What frightens me is that there is no way back if we take this step and it turns out to be the wrong direction.
With Luke's solution we are not taking any steps into the unknown; we just implement the feature within the current framework.
This will allow us to have multisig as soon as we want it, while giving us more time to think about what steps we want to take, and in what direction.