are at risk. In context, of course— that's assuming a compromise of ECC on our curve.
|
|
|
Requests from whom?
From myself and probably any SPV wallet author too. This is a common feature in wallets which makes an _enormous_ performance difference for both full nodes and SPV nodes, and anything else without an enormous disk-wasting historical index. Attaching dates to keys is already supported in every SPV wallet as well as Bitcoin-qt, and it makes a huge impact on rescanning speed. What if the encrypted wallet has an incorrect date string? How is the client to know? If it just skips ahead to the date listed on the encrypted wallet, it could miss a whole bunch of important transactions. How do you propose we deal with that without re-scanning the entire blockchain anyway? Any solution I can think of removes the utility of a date code in the first place.
If it's incorrect and you're able to scan without it, then go ahead and ignore it. Asking for an approximately correct date while generating a key is simply not a burden. If you're unable to handle this then you're going to garble up the addresses and send coins off into space. Why are you speaking like this is so final? You haven't really given a compelling argument for the necessity of the date field, and I think I've given several strong arguments against it. I'd like to hear from others what they think.
Because it's a major requested feature and you can simply opt out of it either on the generation (set to 0) or use (ignore it) side.
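The skip-ahead behavior being debated can be sketched in a few lines. This is a hypothetical illustration: the Block/Key structures, field names, and one-week safety margin are made up for the sketch, not any wallet's actual format. A birth time of 0 is the opt-out, forcing a full rescan.

```python
from collections import namedtuple

# Hypothetical structures, not any wallet's actual on-disk format.
Block = namedtuple("Block", "height time")
Key = namedtuple("Key", "pubkey birth_time")  # birth_time 0 = "unknown, scan everything"

SAFETY_MARGIN = 7 * 24 * 3600  # start a week early to tolerate block-timestamp skew

def blocks_to_scan(blocks, keys):
    """Return only the blocks a rescan actually has to examine."""
    births = [k.birth_time for k in keys]
    if not births or 0 in births:
        return list(blocks)            # no usable date: full rescan
    start = min(births) - SAFETY_MARGIN
    return [b for b in blocks if b.time >= start]

# A wallet whose oldest key is from ~2013 skips everything earlier.
chain = [Block(1, 1231006505), Block(2, 1375533383)]
wallet = [Key("k1", 1375000000)]
print(len(blocks_to_scan(chain, wallet)))  # 1
```

An incorrect date only matters if it is too late; a too-early date just costs extra scanning, which is why an approximate date is enough.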
|
|
|
Well, I guess there is at least one element of honesty in the shilling: a 5% discount and an early shipment is worthless on a product that won't ever exist. So at least we can say that all the shills created through that offer at least earnestly believe that AMT is not a scam.
just wow, we have a "moderator" chiming in? I just lost a lot of respect for this site; I'm late to that game, I admit. Here I was thinking this was a reputable blog. gmaxwell... do you have anything intelligent to bring to the conversation, or just your worthless speculation? Do you get double pay for making a fool of yourself towards a moderator? I thought I was offering the best possible positive spin on an otherwise pretty terrible situation. Feel free to disagree.
|
|
|
Questions/comments?
You're kidding, right?
|
|
|
Hey HF, should I wait until before or after your customers receive their units to expose your true bullshit? Pfft, so much drama. Come on, it's not like you're going to show us that iCEBREAKER is on HF's payroll instead of just being independently crazy.
|
|
|
These boards don't have additional electronics. On the contrary, a lot of the expensive components have been removed, and the 2-layer board is cheaper too.
Yes, I know. I was attempting (and failing) to say that because the chips are so costly, the additional electronics are not that substantial, and considering the reliability concerns the efficiencies gained may not be worth it. E.g. if you save $x on support electronics but increase the failure rate by y%, then whether it's a win depends on the price of the chips. If chips cost a million dollars a piece and you save $20/chip on electronics at the cost of a 0.01% increase in failures, it's not a win.
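That break-even arithmetic, as a tiny sketch (using the hypothetical numbers from the post, not real BOM figures):

```python
# Back-of-the-envelope check of the trade-off above: per-chip savings on
# support electronics vs. expected loss from a higher failure rate.
def is_win(chip_cost, savings_per_chip, added_failure_rate):
    expected_loss = added_failure_rate * chip_cost
    return savings_per_chip > expected_loss

# Hypothetical numbers: $1M chips, $20 saved, +0.01% failures.
# Expected loss is $100 per chip, so the $20 saving is not a win.
print(is_win(1_000_000, 20, 0.0001))  # False
```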
|
|
|
Might be a more interesting design if the chips were priced anywhere near the marginal cost of manufacture, but with them priced like they were solid gold— the cost of the additional electronics and the reliability concerns may outweigh the benefits.
|
|
|
Any word on releasing the firmware source? I'd really like to correct the high cpu usage.
|
|
|
You can copy information but that isn't all there is. Alice wants to send some information to Bob and Bob wants to make sure that he is the only one who can make use of it. I cannot see why there could not be such a scheme where Alice would have surely rendered this information unusable for everyone else, and so on. Although it would be nice to know how this "spending" happens. But it's 5 AM and I don't think I'm making any sense. I'd like to see a proper formalization of this though. I am tired, but can you comment on this?
Please read my message again. There are problems both with defining "everyone else" and, ignoring that, getting them into agreement over which of multiple spends was the bad one. I speak to both of these things.
|
|
|
Sort of; first you have to be more specific about "the double spending problem". Information can be copied— so long as we're not using unclonable qubits to store it (google: quantum money)— so any money based on information can be replicated to yield double spends.
There are "change the rules" kinds of solutions where you make creating a double spend have consequences (e.g. give away your private key) but ignoring those the problem of double spending is really the problem of the whole universe (including the payee) coming to an irreversible agreement about what order two events happened in.
Relativity tells us that the order you perceive events to happen in depends on your relative position in space-time with the events, another party at another location will perceive another order.
With this in mind, it seems clear to me that you cannot autonomously achieve a consensus ordering without specifying a privileged location, even with perfect knowledge. Of course, we don't have perfect knowledge, so the problem is even harder.
For our purpose we also want the system to be anonymous— meaning the participants are unknown in advance and can come and go at any time— and resistant to malicious parties. This means we cannot use a consensus solution that involves asking everyone if they agree on the order (because you can't enumerate them, and even if you could— some would lie just to wedge the process), and besides, most of those have quadratic communications complexity.
What Bitcoin itself did was largely believed to be not possible— it achieved it by relaxing some of the definitions. The anonymous requirement means that you can never have a guarantee— perhaps there are some moon nazis on the dark side of the moon with a longer chain that we'll only discover a week from now, etc.
|
|
|
Well, I certainly intend it to replace BIP38. BIP38 was dropped onto the wiki without any public review or discussion and contains a number of unfortunate shortcomings which are corrected here.
WRT third party generation, I think this proposal does accommodate that in a not-quite-1:1 manner, in that you can send your encrypted key off to someone to transcribe. Though even if it didn't, that's just a single use case— the fact that BIP38 can only accommodate a single address is a reason something should replace it even for that use case. Though I'm not sure how common that use case will be in the future considering recent regulatory activity.
|
|
|
No. The use of any particular encryption scheme does not answer the very minor and very abstract concern I raised about parallel use of the private keying material. I'm getting a very concerning sense of the blind leading the blind here, and the incivility is really out of line. The "new tech" encryption-based "two factor" authentication protocol described above is insecure against a malware-infected host, because it operates in-band, the authentication credential is unauthenticated, and the malware can just steal the response. Presumably the purpose of having a second factor in the first place was to be secure against that. The sign-message mechanism— which could be given an exactly equivalent workflow, and which doesn't carry the same additional security considerations— can be free of this weakness if used to specifically authenticate the action requested. This has gone offtopic for this subforum. [Edit: For historical interest's sake, I should publish the weaknesses I reported privately: I found a weakness in this cryptosystem which allows me to compromise it for a single message with 2^64 known-ciphertext queries to a decryption oracle. E.g. the key-holders run a server (the oracle) that decrypts messages and returns the results, and I obtain the ciphertext of a message someone else created (which the oracle refuses to decrypt for me, otherwise this would be trivial); after making 2^64 queries to the decryption oracle I can decrypt the unknown message. To accomplish this I take the nonce from the message to be decrypted, combine it with the all-zeros ciphertext (or any other known ciphertext for that matter), and sweep the 64-bit MAC space until it passes. I then xor the resulting garbage decryption of zeros with the message, thus recovering the plaintext. This attack is a result of compromising the ECIES security claims by the reduction of the MAC size, though it only results in a complete break because of the use of counter mode AES.
Less cryptographically: using a centralized service to look up public keys creates weaknesses worse than just the 'obvious' privacy and availability ones, because the implementation here doesn't check that the pubkey returned matches the address, though it trivially could. This weakness is made exponentially worse because the public key is fetched over https using urllib2 which, as far as I can tell, doesn't do any certificate validation, so any MITM could substitute the public key. Altoz has subsequently opened up issues for all these items: https://github.com/coinmessage/coinmessage/issues
I still have the general reservations with using encryption for authentication, especially in-band auth— and with reusing a single private key for signing and encryption; both are generally inadvisable practices, though I'm not aware of any specific weakness they create here. Likewise, any usage that needlessly ties a user's identity to their finances could result in surprising losses of privacy, so I hope this approach isn't widely adopted even once the cryptographic weaknesses are fixed. If people want you to send them encrypted messages, get pubkeys from them! ]
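The counter-mode step of the attack in the edit above can be illustrated with a toy sketch. This uses a SHA-256-derived keystream as a stand-in for AES-CTR (only the XOR structure of counter mode matters here), and a direct function call in place of the 2^64-query MAC sweep against the oracle:

```python
import hashlib

# Toy stand-in for AES-CTR: a keystream generated from SHA-256.
def ctr_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def ctr_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # In CTR mode, encryption and decryption are the same XOR with the keystream.
    ks = ctr_keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

oracle_key = b"private key held only by the oracle"
victim_nonce = b"nonce copied from the victim's message"
victim_ct = ctr_xor(oracle_key, victim_nonce, b"attack at dawn")

# Attacker: have the oracle "decrypt" an all-zeros ciphertext under the
# victim's nonce. (In the real attack this costs ~2^64 queries to sweep the
# 64-bit MAC; here we just call the decryption directly.)
keystream = ctr_xor(oracle_key, victim_nonce, bytes(len(victim_ct)))

# The "garbage decryption of zeros" IS the keystream; XOR it with the
# victim ciphertext to recover the plaintext.
recovered = bytes(a ^ b for a, b in zip(victim_ct, keystream))
assert recovered == b"attack at dawn"
```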
|
|
|
If I'm not mistaken, this is an attack that can be performed on any elliptic curve, not just secp256k1. Not so, there are twist-secure curves like the one used by curve25519 where the points on the twist are equally secure. Is the fact that the private exponent is also used to sign messages somehow related to this attack?
The general statement cautioning against using the same keys for encryption and signing is because the parallel composition of signing and encryption is an unanalyzed construct. I might be able to take some signatures, combine them algebraically, ask for a decryption, and learn something about the private key as a result. Providing parallel access to the private key material, even if it's via constructs which are separately accepted as cryptographically strong, voids the security proofs and deployment confidences, and surprising weaknesses have shown up in the past as a result of it. ... so it's generally considered good practice to avoid it where possible. I'm disappointed to see that the conversation with Luke went unproductive there; he is responsible— AFAIK— for the largest and longest-standing use of bitcoin keys for identification/authentication purposes, which was one of your enumerated use cases. I actually asked him to come here and respond specifically to those use cases. Likewise, andytoshi has been active for some time in the Bitcoin wizards channel, where a lot of advanced cryptography is discussed. He's not a sock of anyone, and a negative tone is just going to discourage people from evaluating your system.
|
|
|
Software absolutely should have protection against stupidity. But the network is enormously hard to update, and as I pointed out, there are real use cases that involve paying high fees (and, indeed, fees greater than outputs), so what you are suggesting amounts to an unnecessary, arbitrary limitation.
We are trying to remove the IsStandard restrictions over time; adding more of them— especially ones that assume a particular value for a particular amount of coin— is entirely the wrong direction.
There is basically no boundary to the kinds of mistake poorly written node software can make. Perhaps they'll use a constant value as their DSA nonce— do you suggest we add code to screen for duplicate nonces on relay? Perhaps they'll use a 32 bit LCG to generate their private keys. Perhaps they'll confuse their main output and change output?
Brainwallet prefills the destination in the transaction maker with an address. If you're using a system that copies on highlight, then highlighting to erase the address will wipe out the address in your copy buffer, and then you may paste the default back in without noticing it. It's a mistake I've made several times while screwing with the site, but I'd never use it for an actual transaction— shall we blacklist that default output?
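For anyone wondering why the constant-nonce mistake mentioned above is fatal, the signing algebra alone shows it. In this sketch the key and nonce values are made up, and r is an arbitrary fixed value standing in for the x-coordinate of k*G (which is equally fixed whenever k is reused), so no curve arithmetic is needed:

```python
# secp256k1 group order; the rest of the curve isn't needed for the algebra.
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

d = 0x1234567890ABCDEF   # hypothetical private key
k = 0xCAFEBABE12345678   # the constant (reused) nonce
r = 0x0BADF00D           # stand-in for the x-coord of k*G; fixed because k is fixed

def sign(z: int):
    # Textbook ECDSA signing equation: s = k^-1 * (z + r*d) mod n
    s = pow(k, -1, n) * (z + r * d) % n
    return (r, s)

z1, z2 = 111, 222                       # hashes of two different messages
(r1, s1), (r2, s2) = sign(z1), sign(z2)

# Anyone who sees both signatures (identical r is the giveaway) can solve
# the two linear equations: first for k, then for the private key d.
k_rec = (z1 - z2) * pow(s1 - s2, -1, n) % n
d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n
assert (k_rec, d_rec) == (k, d)
```

Screening for this on relay would mean indexing every nonce ever seen, which is the point: the network can't cheaply protect against every class of client mistake.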
|
|
|
TierNolan, uh. Yes. Just do so. Write a transaction that spends all those coins and send them to a new address.
Even if you need to pay to do so doing so can reduce your fees in the future, assuming txn fees tend to be larger in the future than now.
Your actual question is just how to do it? In Bitcoin-qt (git), turn on coin control, select the subtree with those coins, and create a txn for their complete value with them selected as the source. With bitcoind: listunspent 0 999999999 '["address"]' and then spend those coins using createrawtransaction. If we had an explicit sweep function in Bitcoin-qt it would be neat if it had an easy path to be used for this, but we don't currently. .... but the client tech support stuff is mostly in other forums.
|
|
|
Personally I think a much better solution would be to have the fee *explicitly specifiable* in the script (a new op such as OP_CHECKFEE).
Shoving in a _fee_ operator is a really kludgy way to handle it: if it worked, it would break the sighash independence and the ability to use ANYONE CAN PAY to add fees to unstick a transaction whose fees were too low. But since the scriptSig is not under the signature hash, if a fee amount on the signature stack were the only thing preventing incorrect fees, a signature could be trivially rebound from a transaction with sane fees to one with insane fees by a hostile host/relayer. The only non-hardforking way I know to address that is to introduce a new checksig operator, which is not a small step at all. ... moreover, I don't think it would have helped here: Brainwallet looks up the input values (using blockchain.info), so it knew what they were. An outright rejection to protect people here feels like the dangers are not being addressed so much as people are being kept from using alternate tools.
Can you please step back for a moment and consider how this response feels from my shoes. You're basically accusing me of being in a conspiracy to prevent people from using "alternative tools" simply because I think that degrading the functionality of the network with a bunch of hyper-specific mistake detectors, to patch over _incompetently_ and _unsafely_ authored "alternative tools", is not a grand idea. I tend to think that making bitcoind subsume the functionality of other people's software would be the ultimate in keeping people from using alternative tools. But hey, if that's what you want, you always have the option of just using bitcoind directly and gaining the mistake protection it offers the user. But moreover, don't you think you could have expressed an opinion on the ease of alternative implementations without the suggestion that anyone here had any ill motivations?
|
|
|
I apologize for my naivete, but I'm trying to understand the attack. My algorithm sends the short version of the nonce point (x plus parity), so the attacker sending an invalid nonce means the attacker sends an x that's past p but less than 2^256. Say the receiver has a broken program that doesn't check the nonce and gets a garbage message. What would the receiver do at this point to inform the attacker: "here is the message I got"?
Right; imagine the receiver takes the form of a network-reachable service, and you can send it messages and it tells you what it decoded, or just tells you if the checksum passed. You can now blast candidate messages (e.g. sweeping the checksum) at it and learn data derived from secret*(twist point). With all that indirection, actually compromising something would be impressive, but it's clearly gone far outside the realm of being able to make solid statements about the security at that point.
|
|
|
Can you elaborate on this point? What nonce can you send that would leak the private key, and by whom? At least in my implementation, the nonce is generated by the message sender, who doesn't have the private key and may be malicious. The attack is that the sender picks a nonce which is not on the curve and then attempts to learn something about the point that the receiver has generated using the bogus point— e.g. probing it to see if the 'checksum' fails. The secret key * off-curve-point is equivalent to performing ECDH on the quadratic twist, and for secp256k1 the twist is not really cryptographically strong. In a trivial cryptosystem where the decrypting party just happens to tell you the secret they derive, compromising their private key is not hard; in practical systems it can be harder to exploit, but also hard to be confident it's never exploitable. Also, what is the danger in using the same key to sign and encrypt? Just curious.
The classic example is RSA, where even deployed systems have been compromised by sending blinded encrypted data, getting them to sign it, and unblinding the result to yield decrypted data with that key. Basically, the security assumptions of an algorithm can be broken by doing other things with the key material outside of the algorithm... usually it's fine. Sometimes it's not. Figuring out where it's fine or not is hard, so it's considered a better practice to just generate separate signing and encryption keys and sign the encryption key to bind them... sometimes there are important reasons to compromise on this rule of thumb, of course, but absent a good reason it's good to keep them separate (perhaps doubly so in that there are no strong proofs of ECDSA's security— only ones that make rather broad generalizations; if we can't prove ECDSA secure, then being confident about ECDSA plus a potential additional side channel to the secret key is harder still).
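That RSA blinding trick can be made concrete with textbook-sized numbers. This toy sketch uses hand-picked primes and a "sign anything except the target" oracle; real keys are 2048+ bits and real systems use padding, which is exactly what makes the textbook version so cleanly breakable:

```python
# Tiny textbook-RSA parameters, for illustration only.
p, q = 61, 53
N = p * q                               # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))       # private exponent

m = 42
c = pow(m, e, N)                        # ciphertext the attacker wants to read

def sign_oracle(x: int) -> int:
    # Victim applies the private exponent to anything except the exact
    # target ciphertext; "signing" and decrypting are the same operation.
    assert x != c
    return pow(x, d, N)

r = 7                                   # attacker's blinding factor, coprime to N
blinded = (c * pow(r, e, N)) % N        # looks unrelated to c
s = sign_oracle(blinded)                # = (m * r) mod N
recovered = (s * pow(r, -1, N)) % N     # unblind: recover the plaintext
assert recovered == m
```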
|
|
|
|