If Satoshi had left the reward fixed at 50 BTC/block forever, the system would still be deflationary, since the relative reward (compared to the total supply) would slowly diminish over time. Economically, it would work just fine and would eventually make transaction fees matter more and more.
There are a few good reasons, in my view, to introduce regular halvings and have a completely fixed nominal supply.
1. Psychological-1: the prospect of a drastic halving in a few years/months makes people hurry up and grab coins while there are more of them. Of course, in an "efficient market" knowledge of the future is already priced in, but real markets are made of real people (and we witness that every day on forums, mailing lists and exchanges).
2. Psychological-2: talking about the reward is easier when the supply is nominally fixed to some amount (in our case, 21M coins). We still have a lot of people who don't understand inflation/deflation and "real" vs. "nominal" prices and wages. An ever-increasing supply of coins that is actually deflationary is a hard concept for many to wrap their heads around.
3. Practical: having a fixed amount of coins makes it possible to fit all possible amounts into the 64-bit integer used in transactions. An ever-increasing block reward would eventually lead to an overflow and require a dynamically-sized field, which only complicates things and creates tons of opportunities for fatal mistakes.
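To make the practical point concrete, here is a quick check (a small Python sketch, not from the original post) that the total supply in satoshis fits comfortably into a signed 64-bit integer:

```python
# Sum the geometric halving schedule: 210,000 blocks per period,
# subsidy starts at 50 BTC and is halved (with integer truncation) each period.
reward = 50 * 100_000_000          # initial subsidy in satoshis
total = 0
while reward > 0:
    total += reward * 210_000
    reward //= 2                   # integer halving, as the protocol does
assert total < 2**63               # fits comfortably in a signed 64-bit integer
print(total)                       # just under 21M BTC, expressed in satoshis
```

With a reward that never decayed, cumulative amounts would grow without bound and eventually overflow any fixed-width field.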
|
|
|
I've updated the scheme:
1. It describes the data format and crypto in full detail.
2. Key derivation and signing are simpler (HMACs instead of ECDSA and BIP32).
3. Merkle tree support to allow efficient periodic "proof of storage" requests.
4. A method to efficiently timestamp backups on the blockchain so you know which one is the latest.
5. A method to do incremental backups if they are unusually large.
Let me know what you think: https://github.com/oleganza/bitcoin-papers/blob/master/AutomaticEncryptedWalletBackups.md Thanks!
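For item 3, a "proof of storage" could work roughly like this: the wallet asks the server for a random chunk plus its Merkle branch and checks it against a locally stored root. A minimal root computation, sketched in Python with single SHA256 for brevity (the linked paper may differ in details such as double-hashing):

```python
import hashlib

def merkle_root(leaves):
    # Hash each chunk, then hash pairs upward, duplicating the last
    # node when a level has an odd count (Bitcoin-style), until one remains.
    level = [hashlib.sha256(chunk).digest() for chunk in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

The wallet only needs to keep the 32-byte root; any single chunk plus its log-sized branch suffices to verify that the server still stores that chunk.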
|
|
|
A decentralized exchange requires all its parts to be decentralized.
If you exchange BTC for wire transfers, you already lose because banks are centralized and ask questions. LocalBitcoins is somewhat decentralized: it allows trading person-to-person, but it does not work in all jurisdictions, nor when you have a big turnover. And it's inherently unsafe: you rely on the reputation of the trader or the opinion of the arbiter.
Cash is decentralized. You can use joint escrow with a trader, swap cash for coins and go home. Both parties put an insurance deposit in a 2-of-2 multisig BTC output before meeting (it must be 200% of the value exchanged, from each side). When both come home safe, with valid cash and confirmed coins, they unlock the deposit. This is somewhat secure and better protected from the all-observing eye, but: 1) it requires both sides to own a considerable amount of BTC prior to the action; 2) it's a physical meetup, so some AML/KYC folks could kick in (especially if they have monitored the seller for a few deals already), seize all your belongings and maybe charge you with conspiring in some drug money laundering or whatever.
People who want to change their surveillancecoin (USD in banks) or drugcoin (physical paper cash) for Bitcoin must realize these inherent limitations. The best strategy is to not do anything illegal: buy a bunch of coins once on a safe platform or from trusted people you know, then simply secure and hold your stash until hard times are over and you can buy things with it directly, without exchanging back to fiat.
|
|
|
Ok, got it. Thanks. I misread "passphrase" in the second quote as meaning the mnemonic itself.
|
|
|
BIP39 describes how to generate a multi-word phrase and then how to convert it to a seed. It states that the phrase is directly hashed into a binary seed, which gives us plausible deniability ("any phrase can work"), but at the same time the phrase contains a checksum, so I can't provide just "any" phrase. If I tell some guys another phrase that happens to have a broken checksum, they will easily notice that. Should I understand that "plausible deniability" applies only to the set of all "valid" phrases, i.e. those with a valid checksum? Maybe this should be clarified better in the BIP. Quoting the BIP:

> First, an initial entropy of ENT bits is generated. A checksum is generated by taking the first (ENT / 32) bits of its SHA256 hash. This checksum is appended to the end of the initial entropy.

> Described method also provides plausible deniability, because every passphrase generates a valid seed (and thus a deterministic wallet) but only the correct one will make the desired wallet available.
https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki
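To illustrate the checksum structure described in the quoted text, here is a toy Python sketch (it omits the actual BIP39 wordlist and mnemonic encoding, showing only how the checksum bits constrain which word sequences are valid):

```python
import hashlib, os

ENT = 128                          # entropy bits for a 12-word phrase
entropy = os.urandom(ENT // 8)
cs_bits = ENT // 32                # 4 checksum bits for 128-bit entropy
# Checksum = first (ENT/32) bits of SHA256(entropy):
checksum = hashlib.sha256(entropy).digest()[0] >> (8 - cs_bits)
bits = (int.from_bytes(entropy, "big") << cs_bits) | checksum
# 132 bits split into 12 groups of 11 bits, each an index into the 2048-word list:
indices = [(bits >> (11 * i)) & 0x7FF for i in range(11, -1, -1)]
```

Because the last 4 bits must match the hash of the first 128, only 1 in 16 random 12-word sequences passes the check, which is exactly why an arbitrary made-up phrase is detectable.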
|
|
|
Thanks for the feedback, guys. Since people already implement RFC6979, let's use it.
Question: why is HMAC-SHA512 preferable to HMAC-SHA256? Only to match BIP32? Unlike BIP32, here we do not use the extra 32 bytes of output and would simply throw them away. Is it correct that SHA512 is faster than SHA256 on 64-bit systems, so using HMAC-SHA512 would allow better signing performance?
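For context, the general shape of a deterministic nonce is an HMAC of the private key and message hash, reduced modulo the curve order. This is a simplified sketch only, not the full RFC6979 procedure (which iterates an internal HMAC-DRBG-like state and retries out-of-range candidates):

```python
import hashlib, hmac

# secp256k1 group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def det_nonce(privkey: bytes, msg_hash: bytes) -> int:
    # Derive k deterministically from the key and message; the extra
    # 32 bytes of SHA512 output are simply folded into the reduction.
    digest = hmac.new(privkey, msg_hash, hashlib.sha512).digest()
    return int.from_bytes(digest, "big") % N
```

Note how the choice of SHA512 vs. SHA256 here changes only the width of the intermediate digest, not the 256-bit result after reduction.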
|
|
|
I want to start a discussion about deterministic signatures in the context of RNG failures, hardware wallets' auditability, malleability and CompactSignature, and wrap everything into a single canonical specification. Here's the proposal. It specifies how to produce k deterministically from the message hash and the private key. It also specifies a canonical format for the signature and the CompactSignature algorithm. This is an opportunity to have a single BIP covering everything we need regarding signatures in Bitcoin, so that it is easy for developers to understand signatures, learn about the hazards and embrace best practice.
https://github.com/oleganza/bips/blob/master/bip-oleganza-detsig.mediawiki
Previous discussion on the topic: https://bitcointalk.org/index.php?topic=285142.0
|
|
|
Thanks for the feedback! I'm glad someone validated this idea.

> Please do not use a #@$@ number without an assignment. Just call it BIP-oleganza-backup for the moment, until the text is ready. Otherwise we get a mess of number collisions and people calling things by colliding numbers they picked and not wanting to change them. (this isn't nitpicking, it's happened multiple times)

Ok, noted.

> Otherwise— this sounds useful! Should it perhaps specify more of the storage service? e.g. how much data can you expect to store, how would such a service be compensated? how would you know which service(s) you're using?
> The last in particular seems to be a tough question... but in general we should probably try to specify a "minimum interoperable unit", and I'm not sure if the message alone is terribly interesting.
That would be nice, but it can be added as an additional server-side BIP after a couple of actual implementations (and if people really want to produce a generalized API).

> WRT the spec. The IV really should be non-deterministic; it's already stored in the encrypted message. With a constant IV an observer can tell with AES-block precision where the first modification to an updated copy was (and perhaps some more elaborate attacks, e.g. it would be trivially insecure if the cipher mode selected was CTR). There is no need for the IV to be deterministic that I'm aware of... If you're worried about embedded device RNG quality, you could recommend that the IV be constructed as H(time||other-random||pubkey).
The IV is deterministic, but not static. I've made this clearer in the BIP. For each backup the wallet is supposed to pick the next index and derive another unpredictable IV. This is not mandatory (the IV is published anyway and can be random), but it allows us to have a good default that does not depend on RNGs and can be verified with test vectors.

> You appear to have no length encoded for the plaintext. AES-CBC is only capable of encoding an integral number of blocks, so something must encode the plaintext length. I might suggest it use self-descriptive padding, e.g. there is always at least 1 byte of padding, and the last byte says how many bytes of padding there are (up to 16, though perhaps some applications might want more padding to close a size sidechannel?). Another style of self-descriptive padding I've seen used is to pad with a 0 bit and then all ones until the end, and the receiver drops all trailing 1s and the last 0 (it has the advantage of fewer decodings being invalid).
Thanks for noting this. I myself used PKCS7 padding, which I think is exactly what you suggested. Now it's mentioned explicitly in the BIP.

> The signature encoding can be made constant length, and probably should be; doing so will save at least one byte (and probably several, depending on how you were planning on having a variable length signature encoding).
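For reference, the PKCS7 padding mentioned above is exactly the first scheme described: always at least one byte of padding, with the last byte encoding the padding length. A minimal sketch:

```python
def pkcs7_pad(data: bytes, block: int = 16) -> bytes:
    # Pad with n copies of the byte n, where 1 <= n <= block.
    n = block - len(data) % block
    return data + bytes([n]) * n

def pkcs7_unpad(padded: bytes, block: int = 16) -> bytes:
    n = padded[-1]
    if not 1 <= n <= block or padded[-n:] != bytes([n]) * n:
        raise ValueError("invalid padding")
    return padded[:-n]
```

A plaintext that is already block-aligned gains a whole extra block of padding, which is what makes the encoding unambiguous on decryption.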
> Is there a reason to keep the AuthFingerprint? It can be derived from the message itself and the signature (e.g. how bitcoin's signed message works); omitting it would save ~19 bytes.
Good point. I've replaced the auth fingerprint, the signature and its length prefix with a single 65-byte compact signature.

> Is there a particular motivation for using a digital signature instead of using a MAC? One reason I could see is that you might want to have multiple servers synchronizing their data without individually talking to the user, like the PGP SKS keyserver— but for that case you'd want to add a sequence number (so you know if an update you're getting is a newer message or not).
> Should these encrypted data chunks have a good-until date coded in them? I'd say it could be provided out of band, but not if we wanted it to be authenticated by the signatures (for the imagined synchronization network).
Initially I had an idea about adding a timestamp and making the whole thing verifiable without access to the private keys, but it was not well thought out. Now I've clarified this: the auth key is non-hardened, so the auth pubkey can be kept in memory or stored on disk unencrypted, and the wallet can verify various backup payloads without asking the user for his password. When a fresh valid backup is found (or the user has selected one of the available backups), the wallet asks for a password or Touch ID verification to unlock the private master key and derive decryption keys.

> Hm. Wow, a synchronizing server would be super cool for this, if we had a good way of avoiding abuse.
Maybe some proof of work would do? However, I'd prefer a payment scheme built in. We could pay a little bit upfront for X uploads, so the server has some incentive to stick around when we need to retrieve the data. Or maybe the payment is better done afterwards. Or with some sort of 2-of-2 bilateral deposit.
|
|
|
Hi,

My name is Oleg Andreev. I work on an iOS/OSX wallet and CoreBitcoin, a clean and well-documented Bitcoin toolkit in Objective-C.

As you all know, wallets are typically encrypted with a password (using some key stretching algorithm like PBKDF2 or Scrypt). Since the password is weaker than a purely random 128+ bit key, it's better if the user keeps their wallet in some private location that is relatively hard to access. Such a backup is better not thrown around on popular hosting services like Gmail or Dropbox.

HD wallets (BIP32) improve user experience by requiring the user to secure only the master key, and only once. The rest of the keys can be derived later to retrieve the funds. The problem is, wallets may have extra metadata which cannot be derived from the master key: e.g. user notes, invoice info, or, even more importantly, multisig pubkeys and P2SH scripts. To redeem a P2SH payment one needs to know the original script, which must be stored somewhere and securely backed up before any transaction involving that script is made. Asking the user to back up his password-protected wallet before each such transaction would be cumbersome.

I suggest an additional backup scheme where the user's wallet is encrypted using a truly unpredictable AES key derived from the wallet's master key. If the master key itself is not derived from a weak passphrase, but has 128+ bits of entropy, the AES key will be equally strong. Therefore the wallet can be automatically encrypted and uploaded to one or more backup services without any user action. When the user needs to restore the backup, he will have to restore the original master key first and then make his wallet connect to backup servers and retrieve the most recent backup of the full wallet contents. Backup servers cannot possibly decrypt wallets by brute force; they only need to allow reliable retrieval. The user's wallet may download the backup at regular intervals to detect whether one of the servers lost the data or went offline. In that case, another server may be used, or the user may be warned to make a manual backup as soon as possible.

Proposal: https://github.com/oleganza/bips/blob/master/bip-0081.mediawiki
UPD: https://github.com/oleganza/bips/blob/master/bip-oleganza-backups.mediawiki
UPD2: https://github.com/oleganza/bitcoin-papers/blob/master/AutomaticEncryptedWalletBackups.md

PS. I didn't want to create a pull request as the text might change and I don't want to have troubles with rebase (and accidentally lose the connection to a pull request). GitHub Issues seem to be disabled in the bitcoin/bips repo. So let's discuss it here for now.
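The core idea, deriving strong symmetric keys from the master key, might look like this sketch (the labels and the HMAC construction here are illustrative assumptions on my part, not the derivation specified in the linked paper):

```python
import hashlib, hmac

def backup_keys(master_key: bytes):
    # Derive independent encryption and authentication keys from the
    # wallet master key; both inherit its full entropy, so no password
    # stretching is needed and no user action is required per backup.
    enc_key = hmac.new(master_key, b"backup/encryption", hashlib.sha256).digest()
    auth_key = hmac.new(master_key, b"backup/authentication", hashlib.sha256).digest()
    return enc_key, auth_key
```

Because the derivation is deterministic, restoring the master key alone is enough to re-derive the keys and decrypt whatever the backup servers return.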
|
|
|
A standard OP_RETURN output is for attaching arbitrary data (e.g. a hash of some document). How you use it is your problem, but you are encouraged to add this data as an OP_RETURN output instead of, say, a 1-satoshi fake-address output. This way the index of unspent outputs will not be cluttered with provably unspendable outputs (my opinion: that does not matter, the UTXO set will grow huge anyway; we need other ways to optimize it). For smart contracts there is an entire scripting language built in. And there are some wiki pages on how you can build cool contracts with it: https://en.bitcoin.it/wiki/Contracts
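Building such an output script is trivial; here is a sketch (note that relay policy additionally caps the OP_RETURN payload size, 40 bytes at the time of writing, 80 in later versions):

```python
def op_return_script(data: bytes) -> bytes:
    # OP_RETURN (0x6a) followed by a direct push of the payload.
    # Direct push opcodes only cover lengths up to 75 bytes.
    assert len(data) <= 75
    return b"\x6a" + bytes([len(data)]) + data
```

An output paying 0 satoshis to this script carries the data without ever entering the spendable-output index.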
|
|
|
Regulations apply to people, not things or ideas. You should rephrase your question like this:
1) Should legal shops accepting Bitcoin adhere to certain regulations?
2) Should online exchanges adhere to certain regulations?
3) Should two individuals on the p2p network adhere to certain regulations before sending transactions?
4) Should individuals writing and deploying software that does automated Bitcoin transactions adhere to certain regulations?
5) Should ISPs and proxy servers that pass Bitcoin traffic through them adhere to certain regulations?
Etc.
|
|
|
Umberto Eco, Foucault’s Pendulum: "Gentlemen," he said, "I invite you to go and measure that kiosk. You will see that the length of the counter is one hundred and forty-nine centimeters-in other words, one hundred-billionth of the distance between the earth and the sun. The height at the rear, one hundred and seventy-six centimeters, divided by the width of the window, fifty-six centimeters, is 3.14. The height at the front is nineteen decimeters, equal, in other words, to the number of years of the Greek lunar cycle. The sum of the heights of the two front corners and the two rear corners is one hundred and ninety times two plus one hundred and seventy-six times two, which equals seven hundred and thirty-two, the date of the victory at Poitiers. The thickness of the counter is 3.10 centimeters, and the width of the cornice of the window is 8.8 centimeters. Replacing the numbers before the decimals by the corresponding letters of the alphabet, we obtain C for ten and H for eight, or C10H8, which is the formula for naphthalene." "Fantastic," I said. "You did all these measurements?" "No," Aglie said. "They were done on another kiosk, by a certain Jean-Pierre Adam. But I would assume that all lottery kiosks have more or less the same dimensions. With numbers you can do anything you like. Suppose I have the sacred number 9 and I want to get the number 1314, date of the execution of Jacques de Molay-a date dear to anyone who, like me, professes devotion to the Templar tradition of knighthood. What do I do? I multiply nine by one hundred and forty-six, the fateful day of the destruction of Carthage. How did I arrive at this? I divided thirteen hundred and fourteen by two, by three, et cetera, until I found a satisfying date. I could also have divided thirteen hundred and fourteen by 6.28, the double of 3.14, and I would have got two hundred and nine. That is the year in which Attalus I, king of Pergamon, joined the anti-Macedonian League. You see?" 
"Then you don’t believe in numerologies of any kind," Diotallevi said, disappointed. PDF: http://www.cs.utexas.edu/users/acharya/Inputs/Books/Foucault’s%20Pendulum.pdf
|
|
|
The actual solution would be to use multisig. If your server stores someone's funds, it's better to lock them up with 2-of-3 keys: one is the server's, another is derived from the user's password, and the third is an emergency key that belongs to the staff (in case the user forgets his password). Your server will automate part of its job, but if it's compromised, users do not lose everything.
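A 2-of-3 redeem script for this setup is straightforward to assemble; a sketch with placeholder keys (the key names are just labels for the roles described above):

```python
OP_2, OP_3, OP_CHECKMULTISIG = 0x52, 0x53, 0xAE

def multisig_2of3(server_key: bytes, user_key: bytes, emergency_key: bytes) -> bytes:
    # <OP_2> <key1> <key2> <key3> <OP_3> <OP_CHECKMULTISIG>
    script = bytes([OP_2])
    for key in (server_key, user_key, emergency_key):
        assert len(key) == 33                 # compressed public keys
        script += bytes([len(key)]) + key
    return script + bytes([OP_3, OP_CHECKMULTISIG])
```

Any two of the three keys can sign a spend, so a compromised server key alone is useless to an attacker.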
|
|
|
> Yup, n=3 is the limit for IsStandard right now.

That isn't true. If you read the code, the limit is on the size of the scriptSig if you are using P2SH: txin.scriptSig.size() > 500. Signatures are 72 bytes long and public keys are 33 bytes long (if compressed), so 4-of-6 is about the limit. I've managed 3-of-4 and it passed as a standard transaction. This 500-byte limit is in the IsStandardTx() check. You can still get your 10-of-20 multisig transaction included in a block if you really need it.
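A rough back-of-the-envelope estimate of the P2SH scriptSig size (my own approximation, assuming ~72-byte DER signatures and 33-byte compressed keys) shows why 3-of-4 fits under 500 bytes while 4-of-6 sits right at the edge:

```python
def p2sh_multisig_scriptsig_size(n: int, m: int) -> int:
    # Rough estimate of the scriptSig for an n-of-m P2SH spend.
    sig = 73                      # ~72-byte DER signature plus 1 push byte
    key = 34                      # 33-byte compressed pubkey plus 1 push byte
    redeem = 3 + m * key          # OP_n <keys...> OP_m OP_CHECKMULTISIG
    # Leading OP_0 (CHECKMULTISIG quirk), n signatures, then the pushed
    # redeem script (push-opcode overhead approximated as 2 bytes).
    return 1 + n * sig + 2 + redeem
```

By this estimate 3-of-4 comes to roughly 360 bytes, 4-of-6 to just over 500 (shorter signatures can squeeze it under), and 10-of-20 to well over a kilobyte.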
|
|
|
Yes. The official binary is compiled in a special environment that can be exactly replicated so that the binary can be verified. It's a bit difficult to set up, though.
Where can I read more about this environment? I'm very interested in having the same thing for my own app.
|
|
|
> It's relatively easy to do with Schnorr signatures. It would be a major advance to be able to do this with ECDSA.

I don't know much about Schnorr signatures. Could you please show an example of why/how it is trivial to do n-of-m in the Schnorr scheme?
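This isn't an answer from the thread, but the basic reason is that Schnorr signatures are linear in the secret values, so partial signatures from n signers simply add up into one signature against the sum of their public keys. A toy demonstration over a multiplicative group mod a prime (illustration only: real schemes use elliptic curves and need protection against rogue-key attacks, e.g. per-key aggregation coefficients as in MuSig):

```python
import hashlib

P = 2**127 - 1     # prime modulus; the group Z_P^* has order P-1
Q = P - 1          # exponents (keys, nonces, signatures) live mod Q
G = 3              # toy generator

def H(R: int, m: bytes) -> int:
    # Challenge hash e = H(R || m)
    return int.from_bytes(hashlib.sha256(str(R).encode() + m).digest(), "big") % Q

def sign_share(x: int, k: int, R: int, m: bytes) -> int:
    # Each signer's partial signature: s_i = k_i + e * x_i (mod Q)
    return (k + H(R, m) * x) % Q

# Two signers combine into one signature:
x1, x2 = 1234567, 7654321                       # private keys
k1, k2 = 111, 222                               # nonces (must be random in practice)
R = pow(G, k1, P) * pow(G, k2, P) % P           # combined nonce
m = b"hello"
s = (sign_share(x1, k1, R, m) + sign_share(x2, k2, R, m)) % Q
X = pow(G, x1, P) * pow(G, x2, P) % P           # combined public key
# Verification against the *sum* of the keys: g^s == R * X^e
assert pow(G, s, P) == R * pow(X, H(R, m), P) % P
```

ECDSA lacks this linearity because the nonce appears inside a modular inverse, which is why n-of-m aggregation there requires far heavier machinery.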
|
|
|
Wow, thanks for the paper. I will definitely check it out. My goal is to implement this idea:
1. People crowdfund a bunch of money for some company.
2. Unlike usual schemes, the company cannot use all that money however they want, but only in some pre-determined portions. E.g. when they start crowdfunding, they need a guarantee of $1M, but they will spend the first $100K on a prototype, then $300K on an initial batch, then, if everything goes well, the rest. The crowdfunding contract will take care of putting all the money in such buckets so it is not spendable right away.
3. If the founders begin spending money in a way investors don't like, the investors can unlock the funds and return them to everyone with a majority vote.
TLDR: "If you start fucking with us, we will automatically get most of the cash back."
Alternatively, every chunk of money could be allowed or denied via a majority vote, but that may be too cumbersome. It's probably more efficient to simply allow spending some smaller portions and rescue the rest in case of a problem. Alternatively (2), the crowdfunding process itself could be broken down into independent stages, but that is also cumbersome for the same reason. It's simpler to crowdfund the total $1M only once, begin the work, and then take it back if needed.
|
|
|
|