I'm glad to see someone with an aggregate signatures proposal. From an anonymity perspective, I believe a cryptographic approach is unnecessary, and aggregate signatures are very difficult to deploy, but they may still be useful in the future.

There are certain malleability problems that arise when using aggregate signatures that would need to be carefully engineered out. For example: I believe if the user has received multiple coins using the same public key (something that is discouraged and which harms privacy today, but is commonly done in practice due to user (and wallet software author) ignorance, as well as convenience) then I could take an OWAS spend of a coin with a certain public key and make it instead spend a different coin with the same public key.

Your proposal to use OWAS may offer additional benefits beyond anonymity, however. I'd like to better understand why you write: "Furthermore, for our application, an even weaker form of security - the non-adaptive case - should be sufficient. This requires the adversary to output a forgery after making only one sign query."

The notion of compressing all the signatures in a block into a constant-size output is very attractive, even if it retains a linear validation cost. The improvement in anonymity could be viewed as a side benefit.
|
|
|
got over 100-200 btc a day for a few months
As others have pointed out, this is nonsense. On January 1st, 2013— long before any Avalon miners existed, regardless of what Chinese conspiracy theories you have— the difficulty was 2979636. At 60 GH/s this would have resulted in 10.13 BTC/day, an order of magnitude lower than you claim. By the time any were reported in anyone's hands anywhere, the difficulty was 2968775 ... very slightly _lower_, which is again evidence that no one was turning on some big mystery ASIC farms at the time. This bit of complete nonsense degrades what is otherwise an OK post; you ought to fix it.
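The 10.13 BTC/day figure is easy to check from the difficulty formula (expected blocks per day times the then-current 25 BTC subsidy):

```python
# sanity check: expected BTC/day from 60 GH/s against the 2013-01-01
# difficulty, with the 25 BTC block subsidy in effect at the time
hashrate = 60e9                        # hashes per second (60 GH/s)
difficulty = 2979636
blocks_per_day = hashrate * 86400 / (difficulty * 2**32)
btc_per_day = blocks_per_day * 25      # comes out to about 10.13
```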
|
|
|
Oh ok. Do you reckon there is any need to switch bitcoin over to Ed25519 at the moment? Or do you trust the magic numbers in Secp256k1?
If it's possible for any of these ECC systems to be intentionally insecure that would require some profound math which is unknown to the public. If we assume the existence of profound math which is unknown to the public, I do not see a reason to also assume Ed25519 is more secure. Including it would be a significant burden (a fast ecc signature validation implementation is not simple code, and would not overlap with our existing code) which would carry its own risks.
|
|
|
So. Um.
All this speculation. If they're going to make people's expected delivery dates— you would have thought that they'd have test hardware in the miner software folks' (remote) hands by now.
Just saying…
|
|
|
It would be kind of genius if the reason Bitcoin does not use secp256r1 was because Satoshi knew about its possible weaknesses.
No need to assume that. Secp256k1 was sort of the obvious choice for Bitcoin because of the performance considerations. (Today you would have chosen Ed25519 instead)
|
|
|
Is it known who submitted these transactions, and with what client?
I do. They were created by Genjix and his SX wallet software. Looks like it was due to a failure to initialize the version numbers in transactions. I was able to determine this because they spent funds sent to the well known and oft reused libbitcoin donation address two hops back, so I sent Genjix an email to ask and he confirmed and tracked down his bug.
|
|
|
How large are Lamport signatures with security equal to 256-bit ECDSA? 256-bit ECDSA has a security strength of 128 bits under classical computing. Under Lamport, that would require signatures of length 128 * 128 bits, which is 2 KB. The public keys would be 4 KB. ( Source) To have an estimated security strength of 128 bits under quantum computing, Lamport signatures would be 256 * 256 bits, which is 8 KB. ( Source)

In Bitcoin the public key would be part of the signature. When both are sent together there are several 'compression' schemes you can apply which allow you to avoid specifying unneeded parts of the keys (since both are tree structured); you can do something which has 256 bits of QC security in about 11 kbytes. This also benefits from reducing the size of the public key to a single hash— addresses would be no longer (or only somewhat longer, for 256-bit ones) than the addresses we use today.

There is no severe reuse issue: for a very minor increase in size you use a merkle signature scheme, where you have a tree of Lamport keys. This significantly reduces the reuse problem. In the context of Bitcoin some reuse would also not be completely fatal.

You could very nearly implement Lamport signatures in our existing script with no special functionality at all; we're basically only missing code to push the raw data-to-be-signed onto the stack... though doing it that way wouldn't get you public key compression.

I'm not sure if we should implement such things prophylactically: It would be great to have an already deployed answer to "OMG WHAT ABOUT QCs?!" or "OMG WHAT IF NSA ECDSA?!"... but I suspect a lot of people who would ask such things aren't really looking for answers. Our common infrastructure is also very sensitive to size— my "structure it so you can forget old signatures" is a major security model change which might have severe economic consequences in the long run... and additional block-chain bloat absolutely would have consequences. So ::shrugs::.
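The size arithmetic can be checked against a toy one-time Lamport scheme using 256-bit hashes and preimages (the quantum-resistance parameters discussed above). This is a minimal sketch, not production code: no key compression, no merkle tree, strictly one signature per key.

```python
import os
import hashlib

H = lambda b: hashlib.sha256(b).digest()
N = 256  # bits in the message digest being signed

def keygen():
    # private key: two random 32-byte preimages per message bit;
    # public key: their hashes
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(N)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(digest):
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(N)]

def sign(sk, msg):
    # reveal one preimage per bit of H(msg); the key must never be reused
    return [sk[i][b] for i, b in enumerate(bits(H(msg)))]

def verify(pk, msg, sig):
    return all(H(s) == pk[i][b]
               for i, (s, b) in enumerate(zip(sig, bits(H(msg)))))
```

The signature is 256 values of 32 bytes each, i.e. exactly the 8 KB quoted above, and the uncompressed public key is twice that.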
|
|
|
That thread is a _little_ misleading. The author is writing about the traditional characteristic-2 Koblitz curves (e.g. the NIST K-curves). Secp256k1 is a curve which is not part of the NIST standard. It is a generalization of the Koblitz curves to a prime field that admits similar optimizations. The design freedom for this generalization is somewhat larger than that available for characteristic-2 Koblitz curves ... but the design space is still substantially constrained, and I agree with the conclusion that it reduces the room for carefully selected values.
|
|
|
Can people please open up the debug console and run getpeerinfo when the error first happens and paste the results here?
|
|
|
I like the network dependency the way it is, because it's clearly defined how to handle alts. HASH256("BITCOIN"+K) means that I'd have to define it for all the alts as well, including testnet.
Hm. Is an internal network binding really desirable? Wouldn't it be preferable to use a prefix character? Selected KDF, AES, EC public key derivation, SHA256+RIPEMD160, HASH256, bigint math to go to a string (mod 58), another HASH256. All of this will significantly affect the effectiveness of a GPU-based attack.
The base58 encode can be done without bigint math, it's just a less obvious implementation. It does make an embedded C implementation more of a pain. You get your attack hardness from your KDF... these other operations make the implementation more complicated... which also may make people more likely to roll their own rather than use your scheme. I would encourage thinking about it some. I'm not sure what you're referring to:
Sorry, my inability to read. The word 'seed' in the text seems to have given me trouble; I kept reading past it. What you're encrypting is a root key; I kept thinking it was a salt. However, I'm still not entirely convinced I'd want to store a tree structure in the encoding. The point was to make this as compact as possible in order to create a paper wallet out of it. I guess some form of notation could be added alongside it to indicate the tree structure. But even then, I expect the tree structure to grow over time.
Yea, I just wanted to see if there was something simple like a depth counter that would satisfy people, serializing a tree would be kind of ugly in a compact format.
|
|
|
A shame. I can deal with waiting four days to access my coins or move my wallet to an uber fast machine if needed, but not everyone can. If my i5 machine is taking this long, how long would this take on a Core 2 Duo with 2 GB RAM?
Hm? reindex shouldn't take longer than an hour or two. What kind of media is this on?
|
|
|
Public key crypto is always going to be trickier to keep secure, as it all relies on assumptions, and that will lead to a never-ending "arms race".
Well, careful: symmetric ciphers depend on the existence of one-way functions. If P happened to practically equal NP, then one-way functions couldn't exist and I could solve for the symmetric keys that turn your ciphertext into ASCII (there is probably only one). NOT. BLOODY. LIKELY. (Kinda sadly, there would be a lot of other benefits to such a world.)

It's possible to construct public key signature systems that depend only on the existence of one-way functions. (Lamport!) The soundness assumptions in error correcting code cryptosystems are also generally pretty solid (well, we keep breaking them trying to make their overheads tolerable…) (decoding random linear codes is NP-hard ... the only question is whether the attacker can turn your public key back into an easy linear code).

Considering that for encrypted messages overhead is mostly immaterial, I'm surprised that no one has created a stone soup protocol that just takes "one from each column":

- NIST P-521 ECDH, just in case the NSA made it stronger
- 1024-bit ECDH with parameters selected by the best known public techniques (e.g. like the brainpool curves)
- Supersingular isogeny key agreement
- Wrapped up inside an error correcting code public key encryption scheme
- And that encrypted with a symmetric key which is from the recipient; a starter one is in the public key... though that's not very useful.

Feed it to a pair of orthogonal strong KDFs which then feed separate passes of multiple standard ciphers (unrelated keys) in some long block modes. Then inside the encrypted messages you send symmetric keys generated using H(random, data_thats_part_of_your_private_key), which your receiver will save and use as an additional key in your KDFs in messages they send to you in the future (perhaps up to N of them with octave spacing, so a spy who can break the public key stuff will get locked out with high probability if they miss any of your messages).

Perhaps then the whole message gets thrown through a gnarly unkeyed cryptographic permutation and coded up with an RS code, and you replace it with the non-systematic outputs and, at your option, send the message in as many parts as you like over different communications channels... so an attacker who can't snoop all of them learns almost nothing about the whole message. Care would need to be taken to avoid interactions that hurt security.. but for encrypted messages.. who gives a crap if there is 50K of overhead and it takes a half second to decrypt? There are plenty of applications where that's totally unacceptable, like Bitcoin... but also plenty where it isn't. ... wait. what board is this?? woah .. way offtopic.
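The "orthogonal KDFs feeding separate passes" idea can be sketched with nothing but stdlib primitives. This is a toy: the SHA-256 counter keystream stands in for real ciphers, and the KDF parameters are placeholders, not recommendations.

```python
import hashlib

def keystream(key, n):
    # toy CTR-style stream from SHA-256: an illustration, not a vetted cipher
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def cascade(password, salt, data):
    # two unrelated KDFs derive independent keys for two separate XOR passes;
    # breaking one derivation alone reveals nothing about the plaintext.
    # Because both passes are XOR, applying the function twice decrypts.
    k1 = hashlib.pbkdf2_hmac("sha256", password, salt, 10_000)
    k2 = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
    step1 = bytes(a ^ b for a, b in zip(data, keystream(k1, len(data))))
    return bytes(a ^ b for a, b in zip(step1, keystream(k2, len(step1))))
```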
|
|
|
Ok. This prevents it from failing but does not seem to make it go faster. Now at three days running machine non-stop. 15 weeks left to go.
It isn't expected to make it faster. The checklevel refers to only the brief startup sanity check that covers a few hundred blocks.
|
|
|
I'm not sure that in general it's completely true that a side-channel attack on a hash function like SHA512 involves only non-memory access, because the input to the hash function probably resides in memory, so there might be side-channel attacks that involve cache misses, etc.
In SHA512 none of the memory accesses are data dependent; every execution reads from the same locations. I believe this is true of all relatively modern hash functions (scrypt is the notable exception, though it's normally used in a way that probably makes this harmless). (Just a minor comment— I agree with everything you're writing.)
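The data-independence claim can be demonstrated on the SHA-256 message schedule (SHA-512's has the same structure with 64-bit words and different rotation counts): the W[] indices read at each step are fixed by the loop counter alone, so the access trace is identical for every input.

```python
def schedule_access_trace(block_words):
    # expands a 16-word block into the 64-word SHA-256 message schedule,
    # recording every W[] index that is read along the way
    assert len(block_words) == 16
    def rotr(x, n):
        return ((x >> n) | (x << (32 - n))) & 0xffffffff
    w = list(block_words) + [0] * 48
    trace = []
    for t in range(16, 64):
        # the indices t-15, t-2, t-16, t-7 depend only on t, never on the data
        trace += [t - 15, t - 2, t - 16, t - 7]
        s0 = rotr(w[t - 15], 7) ^ rotr(w[t - 15], 18) ^ (w[t - 15] >> 3)
        s1 = rotr(w[t - 2], 17) ^ rotr(w[t - 2], 19) ^ (w[t - 2] >> 10)
        w[t] = (w[t - 16] + s0 + w[t - 7] + s1) & 0xffffffff
    return trace
```

Two completely different inputs produce the exact same access trace, which is why there is nothing for a cache-timing attacker to observe.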
|
|
|
You may note that I revised my message to make abundantly clear that I am stating my personal opinion, for whatever worth people want to take it for.
I do not want to endure more legal threats from you. If you insist on making them, however, I am not afraid.
But after my interactions with you I do not believe the Bitcoin community would be well served by your services in this role.
As an aside, I do not understand why you persist in referring to yourself in the third person. I am also mildly confused at your commentary regarding pleadings for a judge. Is this a reference to your prior legal threats? I did not believe you were licensed to practice law.
|
|
|
As a Bitcoin-QT core developer, foundation member, and Bitcoin enthusiast I am strongly opposed to Trace Mayer on the foundation seat.
In the past he has treated me in a manner which I found to be disrespectful and hostile, which is saying a lot considering that I can handle reading the mining subforum here.
Contact from him which was, in my opinion, threatening made me seriously consider discontinuing my involvement with Bitcoin.
In light of this, I do not believe that his approach or interpersonal skills are suited for the role.
|
|
|
That's a neat idea (mixing large transactions) but unfortunately I cannot see how it could be implemented. When signing an input we sign a hash of the outputs, and thus adding new outputs would require re-signing the transaction (as you already stated). So the transaction must go back and forth (in order to re-sign it each time an output is added) and the miner essentially becomes the rendezvous server.
That isn't the case, and if you see the "taint rich" link in the post, you can see I went and performed these transactions with people with no back and forth; there is a single round trip: I offer inputs and outputs, you respond with inputs and outputs and your signature, I then add my signature. If you'd like we can do one together too. My main motivation in creating that long writeup was correcting that misconception. For SIGHASH_ALL this can be accomplished by simply agreeing on the outputs before any signing begins. (Obviously things are even simpler with SIGHASH_SINGLE, but that doesn't have the desirable privacy properties.)

Standardize some coin denominations, call them 'minted coins': I tried pretty hard a couple years ago to get pools to round up their payments to non-jagged numbers like 0.01, because the highly jagged outputs they produce are bad for privacy and produce more bloaty change... and had absolutely zero luck. I am not anticipating great success with any kind of denominationalizing of Bitcoin. Maybe if the block explorers that give the misleading "account" view go away and people use more clients that show a more accurate "coin" view, people will start to care more about the denomination of the coins they receive.
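The single-round-trip flow described above can be sketched with placeholder data. Tx, toy_sign, and the string values here are hypothetical stand-ins, not real Bitcoin structures; the point is only that the outputs are agreed before any signing, so no re-signing loop is needed.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class Tx:
    inputs: list
    outputs: list
    sigs: dict = field(default_factory=dict)

def toy_sign(who, tx):
    # stand-in for a SIGHASH_ALL signature: commits to all inputs and outputs
    msg = repr((sorted(tx.inputs), sorted(tx.outputs))).encode()
    return hashlib.sha256(who.encode() + msg).hexdigest()

# message 1: I offer my inputs and outputs
tx = Tx(inputs=["my_in"], outputs=["my_out"])

# message 2 (the single round trip): you add yours and sign the completed set
tx.inputs.append("your_in")
tx.outputs.append("your_out")
tx.sigs["you"] = toy_sign("you", tx)

# finally I sign the same completed transaction; no further interaction needed
tx.sigs["me"] = toy_sign("me", tx)
```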
|
|
|
Although I agree that the scrypt 2^14/8/8 KDF isn't very strong, consider that using a 10-character password with upper, lower, numbers and special chars (say 72 different characters) is still going to take you over 5,935 years on average to crack at 10M hashes per second. And that's ignoring the AES / EC public key generation / double SHA256 part that you need to do to verify whether it's correct.
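The 5,935-year figure checks out under the stated assumptions (uniformly random password, average case = half the keyspace):

```python
# 10 characters from a 72-symbol alphabet at 10 million guesses per second
keyspace = 72 ** 10
avg_seconds = keyspace / 10_000_000 / 2     # average case: half the keyspace
years = avg_seconds / (365 * 24 * 3600)     # roughly 5,936 years
```

Note this only holds for uniformly random passwords, which is exactly the assumption the reply below takes issue with.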
That ignores how people actually use passwords. If you have space to store that much entropy you're not actually far from just putting the whole key there. Typical passwords have much lower entropy than that and can be found with far fewer attempts than you'd expect from a uniform probability model.

That said, I'd actually forgotten how the scrypt memory hardness is affected by the parameters. I thought it was just N*128 bytes, which is hardly memory hard considering scrypt's memory/computation trade-offs, but it's actually N*R*128, which is a good improvement here. So I'm a bit less than 8 times less concerned than I was before.

Basically, there is a bit of a moral hazard with supporting an insecure KDF— users are known to use bad keys, and application developers are known to use the most inefficient code possible in JS and then force users to the insecure KDFs, and everyone has to follow along for compatibility. These are well established behaviors in the Bitcoin world. But I think your minimum is probably fine.

Currently there are 3 defined KDFs with 29 left to be defined. If you have any suggestions, that would be awesome.
The obvious thing to use instead of scrypt is Catena, since scrypt has data-dependent access patterns that leak key material in contexts where timing or power analysis are risks. Might be interesting to get a recommendation from Colin Percival.

Some other random questions. Is there a reason base64 was not considered? These keys are all too long for the one-click copying and reading applications that base58 is somewhat better for... base64 is 10% smaller and with the right padding can result in a deterministic length. We decided to go with base64 for signmessage and I don't think we've regretted it.

Is there a reason that the salt is HASH256(Base58Check(RIPEMD160(SHA256(K)))) instead of the simpler HASH256(K)? The latter avoids a needless entropy bottleneck, additional computations, and an indirect network binding. If you want to bind the network you could just add a network string: HASH256("BITCOIN"+K)

Is there a reason the date and KDF code are not included in the derivation of the salt, e.g. salt = length + prefix + date + HASH256(length + prefix + date + K)[0..3]? (length is 16/32/64 or such). This alternative construction puts the prefix and date under your authentication code, and also increases the space of possible salt values. (The latter is probably not very important, though OS password hardening seems to use 64-bit salts typically, but the former sounds useful and important.)

Minor bug: your input seed can be 64 bytes, but the rest of the text assumes 32, e.g. the offsets for the whitening and AES key. This should be clarified. (As an aside, I am happy you whiten the input to the ECB mode cipher.)

Should the master generation procedure just reference BIP32? Has there been no interest from wallet implementers in a possible span parameter, e.g. "this key has addresses assigned out to position X"?
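The two suggested salt constructions can be sketched as follows. The specific byte values for K, length, prefix, and date are made-up placeholders for illustration, not part of any spec:

```python
import hashlib

def hash256(b):
    # Bitcoin-style double SHA-256
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

K = b"\x01" * 32                                 # placeholder key bytes

# explicit network binding instead of the indirect address-based one:
network_bound_salt = hash256(b"BITCOIN" + K)[:4]

# alternative salt that puts prefix and date under the authentication code
# and widens the salt space (field values below are hypothetical):
length, prefix, date = b"\x20", b"\x0e", b"\x18\x2c"
salt = length + prefix + date + hash256(length + prefix + date + K)[:4]
```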
|
|
|
They just need a dead man's switch. When they are "compromised" they simply don't reset the switch and let it activate. Oh, of course, you'll say, the evil government agencies will instruct them to reset the switch.
There is a popular mining pool that has a dead man's switch to turn over control of the pool to the backup ops if the main ops go offline... It has fired accidentally once. These things are tricky to get right. Worse, they can create some perverse incentives. If we had a dead man's switch we might not tell you if we thought it would make attacks more likely.
|
|
|
|