2701  Bitcoin / Development & Technical Discussion / Re: Potential bug in bitcoin: long-range attacks. on: May 09, 2014, 04:15:05 AM
Let's go back to the reality. Since there is a 4x difficulty adjustment rule, does it mean an attack of this kind won't be able to reverse more than 4 blocks?
No, the idea is that they keep trying to mine blocks, each time specifying timestamps chosen to get the maximum possible ramp— mining 2016 blocks between each 4x difficulty change. The first steps are easy, obviously, but eventually they become very unlikely to ever get a block. The key point is that the last couple of their lucky steps represent more apparent work than all the history, if you assume exponential growth. The point that makes this unreal is that while the probability tends to one— assuming the attacker keeps a fixed arbitrary ratio to the network and the network grows exponentially— it's negligible over sane timeframes.
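To make the "last steps dominate" point concrete, here's a toy calculation (illustrative numbers only; each step is really 2016 blocks, which just scales everything by a constant):

Code:
// Toy illustration: with a 4x ramp, each step's expected work exceeds
// the sum of all earlier steps, since 4^n > (4^n - 1)/3.
#include <cstdio>

int main() {
    double step = 1.0, prior = 0.0;
    for (int i = 0; i < 10; ++i) {
        printf("step %2d: expected work %10.0f vs all prior steps %10.0f\n",
               i, step, prior);
        prior += step;
        step *= 4.0;
    }
    return 0;
}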

When talking about the compact SPV proofs for total chain work, the same kind of variance problem arises. That is to say: while the _expected_ work to produce a compact SPV proof is the same as the expected work for the long sequence of normal-difficulty blocks it replaces, the reduced sample count means that the probability of 'getting lucky' and being able to mine a compact proof with less work than it would have taken you to do regular blocks of the same difficulty is much higher.
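A toy Monte Carlo makes the variance difference visible (a sketch only; it models per-block work as exponentially distributed, which is the right idealization for hashcash):

Code:
// Compare: one block at difficulty 100 vs 100 blocks at difficulty 1.
// Same expected work, very different odds of "getting lucky".
#include <cstdio>
#include <random>

int main() {
    std::mt19937_64 rng(42);
    std::exponential_distribution<double> work(1.0); // mean-1 work per unit difficulty
    const int trials = 200000;
    int luckyOne = 0, luckyMany = 0;
    for (int t = 0; t < trials; ++t) {
        if (100.0 * work(rng) < 50.0) ++luckyOne;   // one big block in half the expected work
        double total = 0.0;
        for (int b = 0; b < 100; ++b) total += work(rng);
        if (total < 50.0) ++luckyMany;              // 100 small blocks, same threshold
    }
    printf("P(lucky, 1 block @ diff 100):  %.3f\n", (double)luckyOne / trials);
    printf("P(lucky, 100 blocks @ diff 1): %.5f\n", (double)luckyMany / trials);
    return 0;
}

The first probability comes out around 0.39 (that is, 1 − e^-0.5); the second is so small the simulation essentially never sees it.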

It occurred to me that by requiring additional edges in the proof you can ask about the work over alternative confidence intervals (bounded by base difficulty), and not just the expected work. E.g. "I am 50% confident that this chain had at least work X." In effect you can trade off proof size for decreased variance. But for most of the interesting applications of compact SPV proofs, variance 'attacks' were really only interesting right at the tip of the chain (e.g. making a lucky bogus block that has a skip 6 blocks back to where the transaction in question was committed), and so it would be easy enough to just demand that your last couple links in the proof be regular difficulty.

I suspect there may actually be an improvement possible to this variance game by having miners commit to lower-difficulty solutions and provide them according to some cut-and-choose, allowing a test of two chains against each other not just in terms of expected work but according to (e.g.) x-percentile work. It would probably take a lot of work to figure out the details here, and maybe under the exponential-growth-forever assumption it still doesn't work out, unless you toss scalability out the window by having to exponentially increase the data disclosed along with the exponential increases in hashrate.
2702  Bitcoin / Development & Technical Discussion / Re: Why will nodes not relay non-standard txs? on: May 09, 2014, 03:49:16 AM
This thread's got me wondering whether, assuming core
I want to transition to a blacklist instead: inhibit the no-op opcodes, non-canonical pushes/signatures, oddball versions, and enforce sanity limits on size and checksig count... everything else relayed (assuming it's valid and meets whatever fee criteria are in use). I think this is also the goal of everyone else working on core, on some timeframe or another: no one has any great affection for the whitelist approach afaik, it's just expedient, and changing this is not a top priority compared to all the things which have a more urgent need.
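In rough shape, something like this (purely a sketch of the idea, not Core code; the names and limits are illustrative, and a real version has to parse pushes rather than scan raw bytes):

Code:
#include <cstdint>
#include <vector>

// Hypothetical blacklist-style relay policy: reject a short list of
// opcodes and enforce sanity limits; relay everything else.
bool IsBlacklistedOp(uint8_t op) {
    return op >= 0xb0 && op <= 0xb9; // the reserved OP_NOP1..OP_NOP10 range
}

bool AcceptForRelay(const std::vector<uint8_t>& scriptOps,
                    size_t txSize, int nSigOps) {
    if (txSize > 100000 || nSigOps > 4000) return false; // illustrative sanity limits
    for (uint8_t op : scriptOps)
        if (IsBlacklistedOp(op)) return false;
    return true; // validity and fee checks happen elsewhere
}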
2703  Bitcoin / Development & Technical Discussion / Re: Why will nodes not relay non-standard txs? on: May 07, 2014, 08:05:32 PM
You make a good point about the multiplication. I thought bugs meant a bug in the multiplication code itself, not malicious usage of it.
The possibility of malicious use is a bug. A script isn't something only the user's system validates, the whole network must run it... they can't even just ignore scripts they don't like. So any possibility of malicious use is a severe bug.
2704  Bitcoin / Bitcoin Technical Support / Re: Spending own generated unconfirmed change in the v.0.9.0 era on: May 07, 2014, 05:38:41 PM
They didn't fix malleability, it is still possible for transactions to be modified.
They reduced the odds of modified transactions being accepted into the block chain.
Most of the changes in 0.9 related to malleability, and 100% of the changes made in response to the attacks, were changes to the wallet behavior. The wallet will not become confused now if change is mutated. There is now a switch to disable spending unconfirmed change, but by default it still spends it. It will, however, notice as soon as change is conflicted in the chain and not try to spend it, greatly decreasing the potential disruption.

(0.9 also included the fix from last September that expanded the definition of non-canonical, but that's not really important to the behavior change here.)

could prove catastrophic for any business that allows it.
That's pure hyperbole. The worst-case consequence there was that you get transactions stuck which require technical intervention to unstick in your wallet. An annoying DoS attack, but hardly "catastrophic to business".

It is a bit weird that nobody has replied to this very critical question, could somebody please shed some light on this one?
It's not weird; you asked in the wrong subforum and no one who cared to answer it noticed. This should have been posted in technical support.

If you'd prefer transactions to fail when you run out of confirmed inputs, instead of creating a risk of getting stuck, you should set spendzeroconfchange=0 in your configuration.
2705  Bitcoin / Development & Technical Discussion / Re: Why will nodes not relay non-standard txs? on: May 07, 2014, 05:27:00 PM
Scripting doesn't really achieve very much, as I posted here enabling all the scripting operations only really allows more complex multi-signature transactions and proof of work in order to spend transactions.
There are a lot more uses than you've listed there. For example, lottery transactions; and those puzzles you dismiss as "stupid" enable secure atomic cross-chain trades, payments that securely depend on an external zero-knowledge proof, and highly private coin trades.

Yeah, I don't get why they permanently disabled multiplication. The explanation says it's because of possible bugs, but come on, it'd be pretty hard to fuck up a simple multiplication.
Because multiplication increases the size of the data. Load a 510 byte number, then keep DUPing and multiplying. Each operation doubles the storage; in relatively few operations all nodes are crashing because they've run out of memory. Your dismissive response shows an extreme carelessness which would virtually ensure the existence of vulnerabilities. Even the simplest operation is easy to make mistakes in. Fortunately the people working on the software are more considerate than that.
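The growth is easy to see with a toy calculation (a sketch; it assumes no element size limits, which is exactly the situation an unrestricted multiply opcode would create):

Code:
// Start with a ~510-byte number; each DUP+MUL roughly doubles its size.
#include <cstdio>

int main() {
    double bytes = 510.0;
    for (int ops = 1; ops <= 30; ++ops) {
        bytes *= 2.0; // product of two n-byte numbers is ~2n bytes
        if (ops % 5 == 0)
            printf("after %2d multiplies: ~%.3g MB on the stack\n", ops, bytes / 1e6);
    }
    return 0; // ~550 GB after 30 multiplies -- and every validating node must do this
}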

What's the point of having script
Because you can go and put new things to use without first requiring a risky network upgrade for everyone. Some of the non-standardness is also required to preserve forward compatibility so that new things can be safely and non-disruptively added in the future.
2706  Bitcoin / Development & Technical Discussion / Re: Question on the scriptSig and scriptPubKey on: May 06, 2014, 08:40:45 PM
restrict Bitcoin keys to a subset of secp256k1 keys (i.e. Bitcoin keys must be odd or they are invalid)
And seriously disrupt all the kinds of clever derivation schemes that now exist, e.g. blinding for reality keys, etc. I'm glad Bitcoin was not hyperoptimized in that particular way. :)
2707  Bitcoin / Development & Technical Discussion / Re: Potential bug in bitcoin: long-range attacks. on: May 06, 2014, 08:32:35 PM
That's the beauty of it - the result doesn't require exponential growth (though it does help a bit). If the hashrate of attacker and network is fixed to eternity, the attacker still has a chance of 100% to succeed eventually. This is because the harmonic integral diverges (the cumulative PoW increases linearly, so his probability of success each day decreases inversely linearly. The sum of this goes to infinity and this can be translated to 100% probability of success).
But if the hashrate is not going to increase exponentially, you can prohibit difficulty adjustment patterns that do. :) Though practical fixes aren't needed against something whose probability becomes non-trivial only on life-of-the-solar-system timeframes, which is what I was going for when talking about working out the distribution and not just the asymptotic behavior.
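For reference, the divergence argument in the quote can be made explicit. Writing c for the attacker's fixed hashrate ratio, the success probability around day t is roughly p_t = c/t, so

  Pr[never succeed by day T] = ∏ (1 − c/t) ≤ exp(−c · ∑ 1/t) ≈ T^(−c) → 0,

using 1 − x ≤ e^(−x) and the divergence of the harmonic series. But inverting it: reaching even a 50% success chance takes T ≈ 2^(1/c) days, which for a small attacker (say c = 0.01, so about 2^100 days) is exactly that life-of-the-solar-system territory.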
2708  Bitcoin / Development & Technical Discussion / Re: Potential bug in bitcoin: long-range attacks. on: May 06, 2014, 08:18:21 PM
I think it's more interesting than you make it out to be. Consider the fact that if you try to reorg the entire blockchain, you have 100% chance to eventually succeed, no matter how low your hashrate (assuming that the ratio between your hashrate and the network's has a positive lower bound).
Indeed, while I was well aware of growth making the historical hashing inconsequential (http://bitcoin.sipa.be/powdays-50k.png) and of playing the reorg lottery, I hadn't considered that particular possibility before reading that paper (thanks for the link). Though it does also require exponential growth, which is physically senseless in some sufficiently long run. It would probably be interesting to explore the probability distribution with a relaxed form of that assumption.
2709  Bitcoin / Development & Technical Discussion / Re: Potential bug in bitcoin: long-range attacks. on: May 06, 2014, 07:25:52 AM
The fact that such an obvious and simple attack has never happened suggests it can't happen. Shouldn't you realize that?
Well, take care there— lots of things are busted without ever being noticed.
Then it is even easier to perform this attack, in theory.
All you would have to do is create a whole bunch of low-difficulty blocks with nearly the same timestamp; then the "difficulty adjustment" in your branch of the blockchain would result in a super large difficulty. Solve that one block and the blockchain is broken.
This from the guy who was going around claiming to sell a bogus magical ECDSA cracker. I guess the deadline has passed for my challenge, no keys broken? So sad for you.

In any case, no, this isn't actually interesting either— because you have to do as much work as the whole network to get ahead of it in terms of expectation. So you might as well say "you could go mine as much as the network until you get ahead of it"— something you can't do without more computing power than it (much more, in the case that you start far behind it) since the expected required computing power would be equal. The only change is the variance. (And indeed, you can construct some kind of not very interesting, very low probability example out of the difference in variance, but like your fraudulent ECDSA cracker, it's not very interesting in practice.)

(And— since you don't seem to understand any of the technical details about the system at all— I guess I also need to point out that the difficulty can only increase by a factor of four per retarget, though that's not really necessary for what you're talking about to not be a concern, though it does frustrate an attempt at a lucky roll.)
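For anyone following along, the factor-of-four bound comes from the clamp in the retargeting code; paraphrased from Bitcoin's GetNextWorkRequired:

Code:
#include <cstdint>

// The measured timespan is bounded to 4x in either direction, so a
// single retarget can move difficulty by at most a factor of four.
int64_t ClampTimespan(int64_t nActualTimespan, int64_t nTargetTimespan) {
    if (nActualTimespan < nTargetTimespan / 4)
        nActualTimespan = nTargetTimespan / 4;
    if (nActualTimespan > nTargetTimespan * 4)
        nActualTimespan = nTargetTimespan * 4;
    return nActualTimespan; // new target scales by clamped/nTargetTimespan
}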

2710  Alternate cryptocurrencies / Altcoin Discussion / Re: Turing complete language vs non-Turing complete (Ethereum vs Bitcoin) on: May 06, 2014, 12:58:59 AM
Quote
These sorts of schemes still erode the decentralization, but they are a step forward from the status quo.
I can't figure out how that was supposed to be an improvement on the "status quo" (presumably in Bitcoin). Truly decentralized mining works fine today, though it's not currently very popular; that text is talking about a world where it isn't economically possible.

Moreover, even accepting the motivation, the solution sounds like a handwave in other ways as well: pooling allows work delegation because of the huge ratio of work for solving vs verifying. Script work is all verification; barring complex and insanely expensive things like proofs of computation, the best I think you can do there is farm the work out to N miners and ask for the hash of the execution transcript, and if there are disagreements verify for yourself... but even that fails if the N miners are Sybils, and it has N-fold overhead. What am I missing here?
2711  Bitcoin / Development & Technical Discussion / Re: Potential bug in bitcoin: long-range attacks. on: May 05, 2014, 11:52:37 PM
checkpoints.
Have nothing to do with this. A general tip: if you are commenting on the security of Bitcoin and the word "checkpoint" comes to mind, you are probably confused. :)

This thread was answered completely and correctly in the very first response. This attack does not exist because Bitcoin chooses the chain with the most work, not the most blocks.
2712  Bitcoin / Development & Technical Discussion / Re: Could deterministic signatures be used to reduce Bitcoin's dependency on PRNG? on: May 05, 2014, 11:50:51 PM
I think a deterministic k-value will be less prone to errors (like the Android repeat k-value bug).  However, RFC6979 seems more complex than it needs to be.  Why can't I just take the 256-bit private key, concatenate it with the 256-bit hash I'm about to sign, apply SHA256 to the resulting 512-bit integer
   k = sha256(private_key || hash_to_sign)
and (assuming 0<k<n) use the resulting hash as my per-message secret number?
Because they are trying to do things like avoid extension attacks in hash functions (which all MD-structure hash functions have, at least in theory), as it's a spec which is hash-function neutral. Extension attacks might lead to things like preparing messages with particular structure to sign that reduce the apparent uniformity of k, leading to compromise. Right away I'd strongly recommend against your simple design and suggest using HMAC-SHA256 instead to close the extension attack concern. Pile on a few more layers of opinion and you probably get the RFC. Doubly so when you want to be hash-function and application independent (e.g. the extension concerns aren't so much of an issue in Bitcoin with sha256, and there are few applications where you'd blindly sign some hash without knowing what it was).
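The HMAC variant is only slightly more code than the concatenate-and-hash version (a minimal sketch of the hardening, not RFC 6979 itself, which layers retry loops and hash-agnostic framing on top; this uses OpenSSL's HMAC):

Code:
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <cstdint>

// k = HMAC-SHA256(key = private_key, msg = hash_to_sign)
// The keyed construction closes the length-extension concern that the
// plain sha256(priv || hash) design leaves open in principle.
void derive_k(const uint8_t priv[32], const uint8_t msghash[32], uint8_t k[32]) {
    unsigned int klen = 32;
    HMAC(EVP_sha256(), priv, 32, msghash, 32, k, &klen);
    // Caller must still reject k = 0 or k >= n and retry (e.g. with a
    // counter appended to the message); RFC 6979 specifies that loop.
}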
2713  Bitcoin / Development & Technical Discussion / Re: Could deterministic signatures be used to reduce Bitcoin's dependency on PRNG? on: May 05, 2014, 06:04:15 PM
Reading through that message thread it looks like a proposed plan was a compile time flag to enable libsecp256k1.  That was almost six months ago.  What ever happened to that idea?
It's part of making openssl optional. There are currently a half dozen pull reqs, basically waiting for the release of 0.9.2 to get merged, that move that subproject along.
2714  Bitcoin / Development & Technical Discussion / Re: Bitcoin node on OpenWRT router question on: May 05, 2014, 05:36:58 AM
I can't recommend strongly enough that people stay away from the RPI. They are obscenely slow even for their clockrate; I'd say they're more in competition with small microcontrollers. They're also notoriously unreliable.

I've had much better luck with the odroid products, e.g. http://hardkernel.com/main/products/prdt_info.php?g_code=G138733896281 which is a much better value. Coupled with a large eMMC (e.g. 64GB) it should run a reasonably respectable bitcoin node. The beaglebones are nice... but at the price the odroids have a lot more memory and performance.
2715  Bitcoin / Development & Technical Discussion / Re: Question on the scriptSig and scriptPubKey on: May 05, 2014, 01:10:48 AM
The basic signing process should just use hash(transaction without inputs | hash_type) as the signing hash.  The signing hash should be used to refer to the previous transaction.
Oh no, that wouldn't be good in general, at least unless you could opt out of it.

Consider, You pay Alice.  The transaction isn't confirming because your fees were not competitive. So you double spend its inputs in a new transaction with better fees in order to achieve atomic exclusion. Oops: Moments after your replacement transaction a prior payer, Peggy, poses a parallel payment and in your present position this is no perk since her payments were paired: price and pubkey parroted. Preclusion prevented by a profusion of parallel property, both payments are processed and Alice, pleased with her profit, parts leaving you peevish.
 
There certainly are cases where it would be good to be able to mask the inputs— generally where you're doing something interesting where you'd be absolutely sure to never reuse a public key as part of your protocol— but in the common case, the additional control precision is very important, not just for preventing stupidity but to avoid suffering losses due to the inconsistency which is inherent in a distributed system.

And fengshu, it's generally preferred that people not bump old threads.
2716  Alternate cryptocurrencies / Altcoin Discussion / Re: Turing complete language vs non-Turing complete (Ethereum vs Bitcoin) on: May 04, 2014, 09:41:34 PM
btw - gmaxwell, gave some more thought to your points.  I think you need to read into the latest from Ethereum.  It's a lot more than just expanding the OPCODE set.  They have a random access memory system there and other functions available to the script environment.  It's a LOT more than just expanding the OPCODEs.
I know they have, I was just responding to their form of "turing completeness" and not the other things. I guess we were talking past each other, sorry for that.

As far as the other things go people often don't realize that some of these mechanisms are not actually necessary and can be achieved in other ways. E.g. you don't need global "random access memory" if scripts can introspect their transactions enough to ensure that their final state is passed on in outputs— E.g. "exactly one of the outputs must be a copy of this script, plus the updated state".  Encapsulating state in this way creates good architectural isolation and makes it very easy to correctly implement reorganization logic and understand what will happen under reorganization. Obviously preserving the architecture matters a lot when you're talking about enhancements to an existing system, but the trickiness of reorganization means that I'd probably still adopt this kind of state management approach even in a totally greenfield environment. I am far from convinced that many of the people working on altcoins are even dimly aware of all the bullets Bitcoin has dodged in its design, even the ones which have been pretty widely discussed.
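A sketch of what "exactly one of the outputs must be a copy of this script, plus the updated state" could look like, if scripts could introspect their transaction (none of this is Bitcoin Script today; the types and the check are purely illustrative):

Code:
#include <string>
#include <vector>

struct Output { std::string script; std::string state; };

// The covenant-style rule: the spending transaction must carry the state
// forward in exactly one output that re-commits to this same script.
bool CheckStateCarried(const std::string& myScript,
                       const std::string& updatedState,
                       const std::vector<Output>& txOutputs) {
    int matches = 0;
    for (const Output& out : txOutputs)
        if (out.script == myScript && out.state == updatedState)
            ++matches;
    return matches == 1;
}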

Likewise, any "non-deterministic" inputs to a script (such as "locktime must be at least this high") can be made deterministic by just including the input directly in the ScriptSig and having part of the criteria in the ScriptPubKey verify them.

but we barely exploit Script 1.0 today!
Exactly. I am probably more enthused about the possibilities than most, but intellectual honesty keeps me from arguing for a bunch of new features in light of the reality that what we have today is hardly used at all, even though it very much could be from the perspective of the technology itself. No matter how sexy I think more expressive power might be, I can't argue for it with a straight face when we're not using what we have. It _might_ be the case that there is an expressiveness gap where we're not quite expressive enough to get more use, but I'm seldom hearing "I'd like to do X but can't but for missing Y", and often when I do hear that we're able to find a way around it (e.g. the lottery transactions). As a result I do think that any script enhancement needs to come along with an actual application (even if it's only command-line geek grade) so there is some evidence that it would get used, in addition to checking all the boxes.

I have a personal Script-2.0 laundry list that I've maintained for some time (but not currently published because I'm sick and tired of white-paper only altcoins taking ideas I've invented or promoted and selling them for a profit and not even implementing them!). Something like 1/3rd of the document describes contracts which tie into the features I suggest and which I think must be implemented to prove the design wisdom of an attempted implementation of the features.

it's really much more than just a money system.  There's people on the forums making all sorts of suggestions: "Lets make a Distributed Wikipedia!".
That one amuses me: one of the biggest reasons for my interest and eventual involvement in Bitcoin is that almost a decade ago some people argued that the Wikimedia Foundation shouldn't be formed because Wikipedia should just be decentralized, not only claiming it was possible but that it could be trivially implemented. I wrote one of my trademark rants on the physical impossibility of true decentralized consensus, as consensus is a necessary component of replicating the functionality of a singular resource as opposed to a grab-bag of assorted repositories. Bitcoin challenged that view but didn't change what was possible— my views weren't overtly wrong, Bitcoin just works under different assumptions which I hadn't considered at the time... primarily the ability to use hashcash and in-system compensation to create an incentive alignment and to force participants to make exclusive choices. It's far from clear— in fact, it now seems unlikely— that these different assumptions are anywhere near as strong for other applications as they are for Bitcoin, and the verdict is even still out on whether Bitcoin's properties are good enough for Bitcoin in the long run.

A lot of the things I've heard that crowd talk about don't make a lot of sense to me... e.g. implementing a freestanding rent extractor which does nothing you couldn't just do locally— which is a pretty common proposal because in that execution environment the agent can't keep any secrets from the public. It's the sort of argument that sounds good until someone not steeped in the excitement steps up and says "The Russians used a pencil." Some of it would require the network to perform IO, which you can't safely do in a consensus environment (except via trusted parties— in which case you could just have them compute for you too, and thus keep your program private), and even the things which aren't impossible run into the pointlessness problems that some of the verifiable computing stuff does: e.g. you can ask something else to compute for you, but with millionfold overhead and a loss of privacy that makes it pretty pointless in almost any conceivable circumstance.

Though I'm still glad people are excited about some novel ideas. I hope that excitement lasts once people realize that they're not going to make it rich off of them… I hope that making the world a freer, safer, and more interesting place is enough of a motivation to retain some of that excitement, although the 15-year-long winter of the cypherpunk movement suggests that some cynicism here is justified. Is all the new interest because people hadn't been exposed to these ideas— they'd never stumbled into tools like rpow or mixmaster years ago— or is it mostly because they think it's something they can cash in on? Not that I begrudge people making money, and a few will— no doubt— make a bunch, but most will not, and if it's primarily a profit motive sustaining this interest then I expect we'll have a return to the low level of progress that other tools in this space have had since the excitement in the early 90s.
2717  Bitcoin / Development & Technical Discussion / Re: Question about script on: May 04, 2014, 07:01:10 AM
There is nothing called "balance" in the bitcoin protocol.
Or even resembling one under a different name. The Bitcoin protocol tracks txouts— the closest analogy would be to think of it as tracking individual coins, which come in arbitrary denominations, with their requirements for spending stamped on them. They're spent atomically— in a process that melts down the original coins after checking their rules and mints new coins with new denominations and new rules stamped on them.
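Trimmed to their essentials, the protocol's own types make the picture concrete (a simplified rendering of Bitcoin's actual structures, with serialization and script details elided):

Code:
#include <cstdint>
#include <vector>

struct COutPoint {          // which old coin is being melted down
    uint8_t hash[32];       // txid of the transaction that minted it
    uint32_t n;             // index of the txout within that transaction
};

struct CTxIn {
    COutPoint prevout;
    std::vector<uint8_t> scriptSig;     // satisfies the coin's stamped rules
};

struct CTxOut {
    int64_t nValue;                     // arbitrary denomination, in satoshis
    std::vector<uint8_t> scriptPubKey;  // the spending rules stamped on the coin
};

struct CTransaction {                   // the atomic melt-and-mint
    std::vector<CTxIn> vin;
    std::vector<CTxOut> vout;
};

There is no balance anywhere in this picture; wallets invent "balance" by summing the txouts they can spend.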
2718  Alternate cryptocurrencies / Altcoin Discussion / Re: Turing complete language vs non-Turing complete (Ethereum vs Bitcoin) on: May 04, 2014, 02:16:13 AM
Why? The interpreter is already protected against this. Once the opcode limit is exceeded, the execution terminates, the transaction is rejected, and the peer is DoS banned.
Yep. The only issue there that I'm aware of is the issue of priority calculation... but there are straightforward ways to address that.
2719  Alternate cryptocurrencies / Altcoin Discussion / Re: Turing complete language vs non-Turing complete (Ethereum vs Bitcoin) on: May 04, 2014, 01:44:32 AM
I think you're really trivializing it.  We're talking about the building of a whole new VM emulation machine here.  There's going to have to be thread monitoring, security analysis, etc.
The simplest way to add loops to script.cpp, as a hardforking change, is to add a "jump" opcode to the big switch statement which, when executed, changes the instruction pointer to a new value within the size of the script and continues execution. The patch is roughly four lines of code. That's it: no "thread monitoring" (there is already an operation counter that will halt execution if more than 201 steps are taken), no "whole new VM", etc.
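Roughly this shape, boiled down to a self-contained toy (the opcode, names, and encoding are all invented for illustration— the point is just that the existing step counter is the only guard you need):

Code:
#include <cstddef>
#include <vector>

enum { OP_NOP = 0, OP_JUMP = 1 };  // OP_JUMP's operand follows it inline

bool EvalToy(const std::vector<int>& script) {
    size_t pc = 0;
    int nOpCount = 0;
    while (pc < script.size()) {
        if (++nOpCount > 201) return false;  // the existing-style step limit
        switch (script[pc]) {
        case OP_JUMP: {
            // the "roughly four lines": redirect execution within the script
            int target = (pc + 1 < script.size()) ? script[pc + 1] : -1;
            if (target < 0 || (size_t)target >= script.size()) return false;
            pc = (size_t)target;
            continue;
        }
        default:
            ++pc;
            break;
        }
    }
    return true;
}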

Like any change to the consensus code— which is a cryptosystem— it would, of course, require extensive review. And to make it efficient you'd want to do as I said and make the nOpCount explicit, turn it into a soft-forking change, or make it not quite so ugly as a bare loop, etc., so a real implementation would be moderately more complex— but not like you seem to be thinking. Perhaps people in altcoins are proposing far more complicated things, but the currently published ethereum code (the older stuff that has almost nothing to do with their recent whitepapers) is pretty much precisely this "simplest thing" which I described above.

Now, all the other things some people are talking about... I mean, some of these pure-whitepaper altcoins advertise features which I believe are impossible, while their authors seem to have no concern that they might not be. About that stuff, who knows? But looping? Looping is not _that_ big a deal, and it's also not obviously (to me) all that valuable either. "Meh."
2720  Alternate cryptocurrencies / Altcoin Discussion / Re: Turing complete language vs non-Turing complete (Ethereum vs Bitcoin) on: May 04, 2014, 12:31:31 AM
I think the complexity is manageable under certain conditions.

You have a list of operators, each has a cost. You add up the cost as you verify. Relayed transactions should carry the total cost with them; if the stated cost and the computed cost do not match exactly, you reject the transaction and ban the peer. There is currently an opcode limit in Script which is added this way, except the cost is always one and it's not signaled. A more complex opcode counter is an additional bit of consensus code you could get wrong, true, but it need not be excessively tricky, and if all transactions were carrying their own counts a counting bug would be more easily found in testing. Of course, it's best if the cost is always 1 except for a few special operations (e.g. expensive crypto operations like signature validation).
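The whole mechanism fits in a few lines (a sketch of the idea; it treats the script as a bare opcode list for simplicity, where a real version must also walk push data):

Code:
#include <cstdint>
#include <vector>

// Nearly everything costs 1; expensive crypto ops cost more.
// 0xac = OP_CHECKSIG, 0xae = OP_CHECKMULTISIG; 50 is an illustrative weight.
uint64_t OpCost(uint8_t op) {
    return (op == 0xac || op == 0xae) ? 50 : 1;
}

// Reject (and ban the peer) unless the transaction's declared total cost
// exactly matches the cost computed during verification.
bool CheckDeclaredCost(const std::vector<uint8_t>& scriptOps,
                       uint64_t statedCost) {
    uint64_t cost = 0;
    for (uint8_t op : scriptOps) cost += OpCost(op);
    return cost == statedCost;
}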

What I think really creates big challenges is when you expect the total execution time, not including single hotspots like the crypto ops, to be non-trivial. As soon as someone feels they need to diverge from the big-dumb-switch-statement style simulator (which simply counts one for each step through, and some more for certain operations) and starts doing fancy template matching or JIT compilation of scripts, it becomes very easy to have hard-to-detect operation-counting corner case bugs. But these sorts of risks can be mitigated by setting sane limits to begin with and ensuring that the execution itself will never be such a bottleneck that anyone feels they need to engage in risky optimizations of consensus-critical code. Since fancy (and different) execution implementations are risky regardless of things like looping operations, being mindful of complexity and avoiding the need to optimize addresses a wider set of issues than just the operation-counting ones.

Whether things like loops are actually worth doing is another question... the things I'd want to use them for in Script today can't be done for unrelated reasons.