Bitcoin Forum
  Show Posts
461  Other / MultiBit / Re: Going forward: MultiBit HD and MultiBit Classic on: September 30, 2013, 06:44:20 PM
I figure if we have all of:
+ HD wallets only, with strong insistence that users write their seed mnemonic down
+ local backups (basically the backup system that is in MultiBit now)
+ cloud backup of an encrypted copy of your wallet

(i.e. triple redundancy)

then the overall failure rate (meaning a loss of bitcoins) will be as low as possible.

I'm not going to make myself popular by saying this, but the blockchain is a very effective way to store private keys when people upgrade their wallets to HD. Just derive a tag from the HD seed with T=H('multibit wallet tag' + seed), encrypt the private keys with AES using the seed as a key, and create some transactions putting that data into the UTXO set with CHECKMULTISIG:

1 tag <data> <data> 3 CHECKMULTISIG

"data" can be up to 120 bytes, 240 bytes in total per txout. Because we've used a tag it's easy for a SPV node to retrieve it later by adding the tag to their bloom filter; make sure the key birthday in the seed is prior to when the private keys are backed up. If you want to be nice you can use OP_RETURN to encode the data, or derive a spendable pubkey to use as the tags instead and spend the outputs. (a bit cheaper too)
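The scheme above can be sketched in a few lines of Python. This is a rough illustration, not MultiBit code: the tag derivation matches the T=H('multibit wallet tag' + seed) construction, a SHA256 counter-mode keystream stands in for the AES step to keep the sketch dependency-free, and the chunking just shows the 120-bytes-per-push limit:

```python
import hashlib

def derive_tag(seed: bytes) -> bytes:
    """Derive the wallet tag T = H('multibit wallet tag' + seed)."""
    return hashlib.sha256(b'multibit wallet tag' + seed).digest()

def keystream_encrypt(seed: bytes, plaintext: bytes) -> bytes:
    """XOR with a SHA256-in-counter-mode keystream.

    Stand-in for the AES step described above so the sketch has no
    third-party dependencies; a real implementation would use AES
    with a key properly derived from the seed. Applying it twice
    decrypts.
    """
    out = bytearray()
    for i in range(0, len(plaintext), 32):
        block = hashlib.sha256(seed + i.to_bytes(8, 'big')).digest()
        chunk = plaintext[i:i + 32]
        out += bytes(a ^ b for a, b in zip(chunk, block))
    return bytes(out)

def to_data_pushes(ciphertext: bytes, push_size: int = 120) -> list:
    """Split ciphertext into <=120-byte pushes; two such pushes fit
    in each 1-of-3 CHECKMULTISIG output (240 bytes per txout)."""
    return [ciphertext[i:i + push_size]
            for i in range(0, len(ciphertext), push_size)]
```

Recovery is the reverse: match outputs by tag via the bloom filter, concatenate the data pushes, and XOR with the same keystream.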

There's no reason not to implement this, at least from the point of view of your users.
462  Bitcoin / Bitcoin Discussion / Re: Once again, what about the scalability issue? on: September 30, 2013, 06:28:31 PM
Can we stop spreading incorrect information please?

MultiBit does not rely on a trusted third party. That's the point of it - it reads the block chain.

I ought to admit that I know almost nothing about MultiBit. Could you explain how it's possible to verify a transaction if you don't have a whole blockchain? You have to check the chain of ownership till you meet the block the coins were generated in. AND you have to check that none of the satoshis from these coins were double-spent.

Indeed.

Users have to understand that SPV does rely on a quorum of trusted third parties, specifically miners, which makes it significantly less secure. For instance, I could temporarily take over a large amount of hashing power by hacking a large pool, and use that to profitably rip off your business and many others with fake confirmations. If I manage to hack >50% of the hashing power - which right now would require nothing more than hacking into about 3 pools - I don't even need to isolate your SPV node from the network. (which is also easy, because SPV nodes have to trust their peers) I might even do it just to push the price of Bitcoin down and profit from shorting it, or I may have other, more nefarious motives.

Similarly MultiBit and other SPV wallets provide no protection against fraudulent inflation yet, and the architecture they encourage puts the censorship of transactions in the hands of a very few.
463  Bitcoin / Bitcoin Discussion / Re: FAQ on the payment protocol on: September 30, 2013, 05:29:21 PM
I think Mike's point is that any key in the WoT that became widely known enough and that signed enough keys is basically a CA, and that guy could be legally pressured in the same way as a PKI CA could (or even easier because they are unlikely to have a team of lawyers).

My point is that while mass keysigning parties and magic trust in the "strong set" are in fact quasi-religious, using the WoT isn't. Mike Perry evaluates the WoT as a stand-alone system and complains that you can't completely authenticate it in its entirety; that throws the baby out with the bathwater. My example of an Edward using the WoT to help him determine whether journalist Glenn's key is valid - based on evidence like many different PGP-signed articles by Glenn, and by other journalists who have signed Glenn's key - makes great use of the WoT.

There is indeed value in establishing continuity across a long term key, although key management is hard (and revocation of bogus keys is even harder). I think one of the aspects of the PKI that is often overlooked is that although people love to complain about how there are so many CAs, the fact that there are lots does make a global revocation actually achievable. If there were only 5 then revocation would be much harder or even infeasible. CAs know that being revoked means immediate death of the business, so they put a lot of effort into guarding against accidental screwups.

The other thing is that with cert transparency, it forces the hand of governments - either you subvert the entire global infrastructure publicly, atomically and noisily, or a CA you abuse will end up getting quickly revoked, making it a rather expensive proposition.

CA revocation is a good point, but if anything it shows even more clearly why the WoT is valuable: how do you authenticate the CA's key and reputation? With dozens or hundreds of CAs you pretty much can't, especially because they're all indistinguishable to you, and furthermore the very act of revocation is out of your control - the power is really in the hands of the likes of Google and Mozilla. On the other hand, if your "CA" is a person, you can easily reason about their reputation, their stake in it, and whom they might reasonably be able to authenticate in turn.

In addition, while a government trying to attack CAs can use any compromised CA to attack many different targets at once, attacking individuals may succeed, but any one attack only compromises the WoT in a small domain - the cost and effort expended per target is vastly higher.

In any case, if the global infrastructure is subverted there's little guarantee those efforts will actually be stopped ("We need to MITM the internet connections of terrorists and child pornographers!"), leaving us with the WoT; having a community that knows and uses the WoT with even marginal effectiveness makes those kinds of subversion attempts less attractive.
464  Bitcoin / Bitcoin Discussion / Re: FAQ on the payment protocol on: September 30, 2013, 04:55:02 PM
(Mike Perry's) description of the WoT as being a "quasi-religious hacker ritual" made me laugh. That pretty much sums it up.

Ritual bathing is a common religious practice, yet no-one uses its existence to disparage people who take showers.

WoT is a tool, and like every other tool it has strengths and weaknesses. Its main strength is that it's heavily decentralized: attacking it on a broad scale is hard and always will be hard. With CA infrastructure, attacking it is quite possible in the current legal environment (witness the occasional commercial availability of MITM boxes with trusted CA certs in them) and could be made trivial with the stroke of a legislature's pen. (it's well within the power of government to change the laws to force CAs to create bogus certs on demand) Absolutists like Mike Perry like to wank about scenarios with high-powered attackers, but forget that the human nature of the WoT makes those attacks expensive and risky to pull off, and downright impractical to pull off in an automated fashion.

Privacy concerns are a genuine weakness of the WoT, but they shouldn't be overstated either: in the case of "Edward" trying to find journalist "Glenn's" genuine PGP key, while Edward may have good reasons not to PGP sign Glenn's key himself, the fact that 10 other well-known journalists/writers/etc. signed Glenn's key makes it easier for Edward to validate it. At the same time those signatures pose no particular threat to Glenn: he's a public figure anyway. In the scenario where both parties need to maintain their privacy, ask yourself: how did those two parties find out about each other in the first place?

The real weakness of the WoT is more metaphysical: by being imperfect enough to invite hyperbole about its insecurities, it discourages people from using cryptography at all. In particular people discount the value of PGP signing their public work, like emails to mailing lists, publications and source code, and because people don't see the value in doing so, systems are frequently designed in ways that make doing so inconvenient or impossible. (like this forum) Perry appears to have some grasp of this point: "Every time I verify a signature from a key sent to an email address that is not mine ... add a tiny amount of trust to that key" but unfortunately goes on to talk about downloading keys from keyservers, somehow, without describing exactly what the keyserver is supposed to be validating. (Is this a PGP Corporation style email ownership verification? Timestamping/oldest key? Keyservers currently do absolutely no verification at all.)

Of course, if you're unable or unwilling to comprehend how PGP works you probably should just stick with central certificate authorities and hope that efforts like Google's CA transparency keep them honest, but keep in mind that you're outsourcing your security to someone else. If you are willing and able, don't let geeks wanking about how broken WoT is discourage you.
465  Bitcoin / Hardware wallets / Re: Trezor developer coordination on: September 29, 2013, 11:21:41 PM
Mea culpa.

I put some thought into doing P2P data storage a while ago; if the data is encrypted you'll wind up having created a useful general-purpose data-storage service, so you might as well architect it to be a good one. Rather than thinking about "blockchain vs. DHT"(1) models at this stage, step 1 should be to decide on what you're going to use for spam control: bitcoin days destroyed is the obvious choice - it's easy to prove (two merkle paths) and doesn't cost anything extra. Merkle paths are still kinda long, but the proofs can be discarded immediately. (and of course communications between nodes with full transaction indexes need not provide full paths)

1) Note how Freenet's weak anti-spam measure of retaining the most popular data and dropping the least doesn't work for a wallet backup system.

It's easy to define a consistent priority ordering for everyone's data by sorting all data in terms of (value*days destroyed)/size, or (value*days destroyed)/(size*data age) so that older data is eventually expired out. (these equations are all rough sketches obviously) People donating bandwidth and storage capacity to this scheme can now easily do so on a best-effort basis and know that there is some rational way to allocate their donation. Because the ordering is consistent it's also easy for nodes to advertise what the cut-off minimum priority is for the data they have in their store, thus making it easy to search for someone who might have your old data.
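As a rough sketch of that allocation rule (field names are illustrative, not from any real implementation; the formula is the (value*days destroyed)/(size*data age) variant above):

```python
def priority(item: dict) -> float:
    """Toy priority: (value * days destroyed) / (size * data age)."""
    return (item['value'] * item['days_destroyed']) / (item['size'] * item['age'])

def store_cutoff(items: list, capacity: int):
    """Keep the highest-priority items that fit in `capacity` bytes.

    Returns (kept_items, cutoff_priority); because the ordering is
    consistent, a node can advertise cutoff_priority so others know
    what it is likely to still have in its store.
    """
    kept, used, cutoff = [], 0, 0.0
    for it in sorted(items, key=priority, reverse=True):
        if used + it['size'] > capacity:
            break
        kept.append(it)
        used += it['size']
        cutoff = priority(it)
    return kept, cutoff
```

A node donating 1 GB just calls store_cutoff with its candidate set and capacity; everyone running the same rule converges on roughly the same retained data.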

Building on stick's idea of a metadata tree for the encryption keys, I'd also suggest the use of a metadata tag tree whose values can be made public. That tree is then used to derive tags associated with each bit of data stored, so that queries can be later done for the tags in an efficient way to recover a wallet fully. Of course, this invites making those tags be pubkeys and allowing updates, but that means you've got to charge for update bandwidth fairly somehow - leave it for version 2 IMO.

Double-spending your priority is an interesting thing... in a chain-based system it's not possible, but with a strict DHT it might be. I'd suggest just sticking to an "everyone has all the data" model at first. The other thing you can do is commit to the data in the transaction itself, for instance with an OP_RETURN <digest> output or even deriving the nonce in the signature in part from the digest. Doing so publicly with OP_RETURN almost has a nice benefit: you can hold nodes to account for failing to return the data when queried in the "everyone stores the same data set" model; with some signed advertisements, queries and responses you can basically create a short fraud proof.

Except that doesn't work because there's no guarantee the data was ever broadcast... except in the case of the transactions themselves which are guaranteed to be public by virtue of being in the blockchain. So now rather than doing UTXO commitments, a complex and invasive change that may not be implemented anytime soon, you can do UTXO set search query fraud proofs. Basically you're just creating a system where some service claims they'll honestly return queries for all transactions/UTXOs with some H(pushdata) in them, like sipa's searchrawtransactions, and you can hold those services to account if they fail to return transactions that matched. Of course, this also means that system can be later extended to this data storage service if we (merge) mine it, although the lack of actual economic incentives is a bit troubling.
466  Bitcoin / Hardware wallets / Re: Trezor developer coordination on: September 29, 2013, 12:30:29 PM
you'll need to make sure your users use a cryptographically strong password - at least 128 bits of entropy

If you've read the whole thread you would have learned that the plan is to use BIP32 - maybe a modified one, so the "metadata" tree would not collide with the main address tree. Smiley

Ha, fair enough.

So here's my suggestion: don't use the word "backup" anywhere when referring to such a feature. Describe it as something along the lines of "your transactions are being retrieved from the Bitcoin network" so the user never wrongly thinks the wallet itself is somehow being backed up.
467  Bitcoin / Hardware wallets / Re: Trezor developer coordination on: September 29, 2013, 12:06:09 PM
Mike, do you mean that users would maintain a small P2P filesharing network only for retaining backup copies of each others' encrypted wallets? We were thinking about an idea like that too. What do you think would be best, a tiny blockchain that is just a sticky ball of encrypted keys, or something more like DNS (and presumably keyservers?), with servers telling each other about/propagating new keys?

If wallets are being uploaded to a public network where attackers could get the encrypted data you'll need to make sure your users use a cryptographically strong password - at least 128 bits of entropy. Of course humans are awful at memorizing long passwords, so you'll have to make sure they write it down somewhere and back up the password.
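To see what 128 bits of entropy means for password length, a back-of-envelope calculation (assuming uniformly random characters/words, which is the only case where these formulas apply):

```python
import math

def password_entropy_bits(length: int, alphabet_size: int) -> float:
    """Entropy of a uniformly random password: length * log2(alphabet)."""
    return length * math.log2(alphabet_size)

def length_needed(bits: float, alphabet_size: int) -> int:
    """Minimum length for a random password to reach `bits` of entropy."""
    return math.ceil(bits / math.log2(alphabet_size))
```

For 128 bits that works out to 22 random alphanumeric characters (62-symbol alphabet), or 12 words from a 2048-word Diceware-style list - which is exactly why you have to assume users will write it down.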

But if they have a 128-bit secret backed up, why not just use BIP32 private key derivation directly? That's sufficient to recover users' funds; the only thing such a backup could provide in addition to that would be metadata like notes. Given how important it is to keep your seed/wallet password backed up, why complicate the issue with (to the user) confusing discussions of some kind of "cloud backup" service that'll just lead them to think their wallet is effectively backed up when it isn't?
468  Bitcoin / Bitcoin Discussion / Re: The Bitcoin Foundation has added protecting decentralization to its bylaws on: September 29, 2013, 06:13:15 AM
It isn't in any way a rule, but a statement that they support decentralization.

Section 2.2 is within ARTICLE II - PURPOSES and is as much a rule as Section 2.1: "More specifically, the purposes of the Corporation include, but are not limited to, promotion, protection, and standardization of distributed-digital currency and transactions systems including the Bitcoin system as well as similar and related technologies."

If anything given the "include, but are not limited to" language in 2.1 the text in Section 2.2 restricts the Bitcoin Foundation's activities more than anything else in the bylaws. If the foundation, say, decided to promote a new Bitcoin-QT release that included a PPCoin-style central blockchain checkpoint system it would be very easy to argue that they were violating their own bylaws by doing so; you could even sue the Foundation for doing that if you were a member. (but IANAL)
469  Bitcoin / Bitcoin Discussion / Re: The Bitcoin Foundation has added protecting decentralization to its bylaws on: September 29, 2013, 04:01:21 AM
One way or another, the bylaws are just words on a piece of paper, and the concepts are so ambiguous that there is an almost infinite amount of wiggle-room even if someone wanted to bother trying to pretend to adhere to them.  When the rubber meets the road I doubt that this text is much more than a marketing ploy.  We'll see.

I commented on the role and value of bylaws in the pull request:

Quote
Bylaws aren't magic, but they help you say, if needed, "The Bitcoin Foundation hasn't done x, y, and z, yet they are all things they should be doing and could be doing." with the result of either forcing the foundation to change, or removing the monetary and developer support through community outcry. It will result in some messy situations because it's hard to determine if an idea is being rejected due to subversion or because technically it doesn't work, (zerocoin) but it's the best we can do at the legal level, and does keep the legal and political activities of the foundation pointed in the right direction.
470  Bitcoin / Development & Technical Discussion / Re: Will the Trezor wallet be able to store multiple private keys or just one? on: September 29, 2013, 03:42:01 AM
You can use the same private keys for multiple alt currencies: https://bitcointalk.org/index.php?topic=265582.msg2886501#msg2886501
471  Bitcoin / Bitcoin Discussion / The Bitcoin Foundation has added protecting decentralization to its bylaws on: September 27, 2013, 09:39:08 PM
tl;dr: The following language was added to the bylaws to make it clear that the Bitcoin Foundation supports a decentralized and free Bitcoin:

Quote
Section 2.2  The Corporation shall promote and protect both the decentralized, distributed and private nature of the Bitcoin distributed-digital currency and transaction system as well as individual choice, participation and financial privacy when using such systems. The Corporation shall further require that any distributed-digital currency falling within the ambit of the Corporation's purpose be decentralized, distributed and private and that it support individual choice, participation and financial privacy.
- https://github.com/pmlaw/The-Bitcoin-Foundation-Legal-Repo/commit/ac2b734659d6f0fe250ca29337a5739e9f10e742

I think the final language chosen is a good compromise between John Dillon's original pull request, which in hindsight was probably too specific even if I thought its intent was good, and not having this section at all. Kudos to Patrick Murck for suggesting it - I spoke to Patrick at the May conference and I found him to be reasonable and thoughtful. The discussion that led to the acceptance of the pull request is worth reading: https://github.com/pmlaw/The-Bitcoin-Foundation-Legal-Repo/pull/4

It'll never be easy to agree on specific actions, but agreeing that a decentralized and free Bitcoin is a fundamental goal that all foundation members should share is a great starting point.

Reddit post: http://www.reddit.com/r/Bitcoin/comments/1n9wgp/the_bitcoin_foundation_has_finally_officially/
472  Bitcoin / Development & Technical Discussion / Re: fair coin toss with no extortion and no need to trust a third party on: September 27, 2013, 06:59:09 PM
For anti-coercion refund transactions, you actually want them to _not_ be in the mempool, since having them in the mempool will block the legitimate spend. At one point I was going to submit a pull to make penultimate-sequence unlocked transactions non-relayable/mempoolable specifically for that reason, but then we went and made all unlocked transactions non-relayable/mempoolable.

Another example is the nLockTime'd transaction anti-DoS mechanism I proposed for CoinJoin:

For anti-DoS the act of broadcasting these messages can be made expensive by requiring the senders to include a nLockTime'd transaction spending a txin to a scriptPubKey the sender controls. Usually this txin would be used in the mix, although technically it doesn't have to be. The key idea here is that by broadcasting such a tx the sender is guaranteed to spend some tx fees somehow, either in the nLockTime'd tx, or in a different tx with the same txin. The actual amount of fees required per KB of data broadcast can be adjusted automatically by supply and demand.
473  Bitcoin / Project Development / Re: [Fundraising 195btc] Finish “Ultimate blockchain compression” on: September 27, 2013, 08:29:29 AM
The performance cost of SHA256^2 compared to SHA256 is a lot less than 2. Heck, I'd bet you 1BTC that even in your existing Python implementation changing from SHA256 to SHA256^2 makes less than a 10% performance difference, and the difference in an optimized C++ version is likely unmeasurable. I really hope you understand why...

Enlighten me, please.

EDIT: After thinking about this for some time, the only conclusion I can come to is that you are assuming multiple blocks on the first pass. This is only true of about half of the nodes (most of which are two blocks), the other half are small enough to fit in a single block. While not a performance factor of 2, it's definitely >= 1.5.

Where do you think the first SHA256() gets the data it's working on? What about the second?
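The point being: the second SHA256 pass only ever hashes the 32-byte digest of the first pass - a single extra compression-function call - while the first pass has to read and hash the full input. A quick illustrative benchmark (node size and iteration count are arbitrary):

```python
import hashlib
import timeit

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def sha256d(data: bytes) -> bytes:
    # The second pass hashes only the 32-byte digest, so its cost is
    # one compression-function call regardless of the input size.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

if __name__ == '__main__':
    node = b'\x00' * 300  # a typical serialized-tree-node-sized input
    t1 = timeit.timeit(lambda: sha256(node), number=100_000)
    t2 = timeit.timeit(lambda: sha256d(node), number=100_000)
    print(f'single: {t1:.3f}s  double: {t2:.3f}s  ratio: {t2 / t1:.2f}')
```

On inputs much larger than 32 bytes the ratio approaches 1, because serialization and the first pass dominate; it's nowhere near 2.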
474  Bitcoin / Development & Technical Discussion / Re: Instant confirmation, call it "confirmed-by-owner" on: September 26, 2013, 08:14:36 PM
No one is realistically going to orchestrate a double-spend or MitM attack to get a free coffee at starbucks, or even a free TV at walmart. Even in the extremely unlikely event they somehow succeed, the criminal (fraudster) has likely been caught on security camera and it's no different than any other fraud/shoplifting event.

Don't assume that double-spending is hard: https://blockchain.info/create-double-spend

Even "check the tx has been broadcast" schemes don't work all that well because it's easy to craft transactions that some % of the hashing power will ignore, (like satoshidice bets or the negative nVersion bug) and double-spend those transactions with transactions that they aren't ignoring. Even double-spend alerts don't help here, because the attacker can just wait until they've left the shop/downloaded the file/whatever to broadcast the double-spend.

Things like security cameras and careful monitoring of losses are the way to go.
475  Bitcoin / Development & Technical Discussion / Re: Salvaging refund protocols from malleability attacks with P2SH on: September 26, 2013, 07:03:36 PM
FWIW jl2012 almost described what you just described, except he didn't make it explicit that you need P2SH: https://bitcointalk.org/index.php?topic=255145.msg2756623#msg2756623

Note that to further discourage miners from modifying transactions, you could spend an output of tx1 in a fee-paying transaction broadcast at the same time as tx1 is broadcast. Even better would be to pay the tx fee for tx1 via an out-of-band fee-paying protocol - like the fidelity-bonded rough consensus balance ideas I've talked about before - that identifies transactions solely by txid.

Finally make the protocol allow for Alice to request Bob to sign multiple transactions, spread out over time, so that Bob can't know if Alice is asking him to sign a dummy transaction or because tx1 was modified and Alice needs a new refund tx.

Note that in some cases you'll need to give Bob a SHA256 midstate so Bob knows what nLockTime/nSequence he's signing - the latter could be tricky to apply due to limitations on what information the midstate can hide - you may need an extra txin - so you're better off with a protocol where Bob learns tx-refund prior to relying on its contents.

EDIT: nah, actually I can't see how you'd prevent Bob from figuring out what tx to mutate based on the nLockTime value, so it looks like you just have to design the protocol so Bob learns tx-refund (and any dummies signed) prior to relying on the non-existence of an immediately-mineable refund tx.
476  Bitcoin / Development & Technical Discussion / Re: Instant confirmation, call it "confirmed-by-owner" on: September 26, 2013, 06:29:08 PM
In the end, if we can come up with a set of rules to accept 0-conf payments with very little risk, that would be very good news for bitcoin payments in person.
Sure, copy the person's photo ID. Limit sales to values that you'd be comfortable losing to shoplifting. Done.

There's a lot of bakeries and coffee shops out there that have found they can go as far as not even having any staff on duty during the day and relying 100% on the honesty of their patrons to actually pay them. One example: http://www.theglobeandmail.com/life/coffee-cookies-but-no-cashiers/article1058362/

Another example is how in many places newspapers are sold in unlocked containers with a slot to voluntarily drop a quarter in. Or for that matter how it's routine for campgrounds and huts in mountainous areas to be unstaffed with just a drop box to collect fees from hikers. Huts are a funny example: parks often find it cheaper to leave them unlocked entirely year-round than to replace windows broken by people who forgot their key or combo, or were in an emergency and needed the shelter.

we can do better than that...

simple rules, tx must have a fee, and look for double spend for 30 seconds, and you appear to have pretty damn good protection, even if a double spend is initiated.

asking for ID is a crappy solution.

Here's a better solution that expects nothing more from all parties than rational economic self-interest: https://bitcointalk.org/index.php?topic=251233.msg2669189#msg2669189
477  Bitcoin / Development & Technical Discussion / Re: Invoices/Payments/Receipts proposal discussion on: September 26, 2013, 06:10:49 PM
Really? It seems like a significant limitation/oversight if this is intended to be widely adopted in the real world.

Some mechanism for including a (non blockchain-embedded) message would be very useful. In fact, it's difficult to imagine a payment service or protocol (Paypal, etc.) being without that functionality. It would also make multiple payments to the same address much less ambiguous if that was desired for any reason: i.e. a supplier may choose to have each customer send funds to an address assigned to them specifically, or for ongoing subscription payments, etc.... in addition to the examples above.

The payment protocol is extensible; this is just version 1.0 (edit: though as Mike points out, if all you want is an informational message, you can do that - it's renegotiating the payment, as the OP asked, that can't be done. also don't reuse addresses)

The reason we need the payment protocol is for security for multi-signature wallets first and foremost; getting a reasonable first version implemented without bloating it with lots of non-essential features is what's most important right now.
478  Bitcoin / Project Development / Re: [Fundraising 195btc] Finish “Ultimate blockchain compression” on: September 26, 2013, 06:05:34 PM
You can't assume that slowing down block propagation is a bad thing for a miner. First of all block propagation is uneven - it'd be quite possible for large pools to have much faster propagation than the more decentralized solo/p2pool miners we want in the network. Remember, if you can reliably get your blocks to just over 50% of the hashing power, you're better off than if they get to more than that because there are more fee paying transactions for you to pick from, and because in the long run you drive difficulty down due to the work your competitors are wasting. Of course reliably getting to 50% isn't easy, but reliably getting to, say, no more than 80% is probably quite doable, especially if efficient UTXO set handling requires semi-custom setups with GPUs to parallelize it that most full-node operators don't have.
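As a toy model of the incentive (my own back-of-envelope, not from the linked thread): suppose your block reaches a fraction f of the other hashpower before the next block is found. Roughly, your block survives an orphan race with probability equal to the fraction of total hashpower mining on top of it, and the uninformed remainder wastes its work:

```python
def survival_probability(own_share: float, reach: float) -> float:
    """Toy approximation: probability your block wins an orphan race,
    taken as the fraction of total hashpower mining on top of it -
    your own share plus the informed fraction of everyone else."""
    return own_share + reach * (1.0 - own_share)

def wasted_competitor_work(own_share: float, reach: float) -> float:
    """Fraction of total hashpower wasting work on the stale tip."""
    return (1.0 - reach) * (1.0 - own_share)
```

E.g. a pool with 30% of the hashpower whose blocks reach only 80% of the rest still survives races ~86% of the time while 14% of the network wastes its work - which is the sense in which limiting propagation to "no more than 80%" can pay off.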

Still, you're playing a game that is not in your favor. In this case you are trading a few extra seconds of alone time hashing the next block against the likelihood that someone else will find a competing (and faster validating) block in the mean time. Unless you are approaching 50% hash power, it's not worth it. The faster your blocks get validated, the better off you are.

Read this before making assumptions: https://bitcointalk.org/index.php?topic=144895.0

Under the right circumstances those perverse incentives do exist.

As you say, they would have to make at the very least one set of adjustments for the coinbase which becomes newly active, requiring access to part of the datastructure. This could be facilitated by miners including a proof with the block header that provides just the paths necessary for making the coinbase update. Miners know that any full node will reject a header with improperly constructed commitment hashes, so I fail to see what the issue is.

Shit... your comment just made me realize that with the variant of the perverse block propagation incentives where you simply want to keep other miners from being able to collect transaction fees profitably, it's actually in your incentive to propagate whatever information is required to quickly update the UTXO set for the coinbase transaction that becomes spendable in the next block. Other miners will be able to build upon your block - making it less likely to be orphaned - but they won't have the information required to safely add transactions to it until they finish processing the rest of the UTXO updates some time later. Write the code to do this and miners will start using this to keep their profitability up regardless of the consequences down the road.

The full scriptPubKey needs to be stored, or else two lookups would be required - one from the wallet index to check existence, and one from the validation index to actually get the scriptPubKey. That is wasteful. The alternative is key by hash(scriptPubKey) and put scriptPubKey in resulting value: that nearly doubles the size of the index on disk. You'd have to have a really strong reason for doing so, which I don't think the one given is.

BTW, the structure is actually keyed by compress(scriptPubKey), using the same compression mechanism as ultraprune. A standard transaction (pubkey, pubkeyhash, or p2sh) compresses to the minimal number of bytes of either pubkey or hash data. "Attacking" pathways to pubkeyhash or p2sh keys requires multiple hash preimages. I put it in quotes because honestly, what exactly is the purpose of the attack?

Have some imagination: if the attack can cause technical problems with Bitcoin for whatever reason, there will be people and organizations with strong incentives to carry it out. For instance note how you could create a block that's fraudulent, but in a way where proving that it's fraudulent requires a proof that most implementations can't process because of the ridiculously large UTXO path length involved - an excellent way to screw up inflation fraud proofs and get some fake coins created.

Out of curiosity, have you ever worked on any safety critical systems and/or real-time systems? Or even just software where the consequences of it failing had a high financial cost? (prior to your work on Bitcoin that is...)

"Tricky and expensive" to create such a block? You'd have to create ~270,270,000 outputs, the majority of which are probably non-spendable, requiring *at minimum* 14675.661btc. For an "attack" which at worst requires more than the entire existing block chain history to construct, and adds only 22 minutes to people's resync times. I fail to see why this should be anything but an academic concern.

Creating those outputs requires 0 BTC, not 14,675 BTC - zero-satoshi outputs are legal. With one-satoshi outputs it requires about 2.7 BTC. That you used the dust value suggests you don't understand the attack.
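Checking the arithmetic (the 5430-satoshi figure is the IsDust threshold implied by the 14675.661btc number quoted above):

```python
OUTPUTS = 270_270_000
SATOSHI_PER_BTC = 100_000_000
DUST_SATOSHIS = 5_430  # dust threshold implied by the quoted figure

# Cost if every output carries the dust-threshold value:
dust_cost_btc = OUTPUTS * DUST_SATOSHIS / SATOSHI_PER_BTC     # 14675.661 BTC

# Cost with one-satoshi outputs:
one_sat_cost_btc = OUTPUTS * 1 / SATOSHI_PER_BTC              # ~2.70 BTC

# Cost with zero-satoshi outputs (legal in a block):
zero_sat_cost_btc = 0.0

print(dust_cost_btc, one_sat_cost_btc, zero_sat_cost_btc)
```

So the quoted 14675.661btc figure assumes dust-sized outputs; the attack itself only needs zero-value (or at worst one-satoshi) outputs.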

As I say, such extreme differences make it very likely that you'll run into implementations - or just different versions - that can't handle such blocks, while others can, leading to dangerous forks. Look at how the March fork happened because of a previously unknown limit on how many database operations could happen in one go in BDB.

re: single SHA256, I'd recommend SHA256^2 on the basis of length-extension attacks. Who cares if they matter; why bother analyzing it? Doubling the SHA256 primitive is a well accepted belt-and-suspenders countermeasure, and done already elsewhere in Bitcoin.

A performance factor of 2 is worth a little upfront analysis work. Padding prevents you from being able to construct length-extension attacks - append padding bytes and you no longer have a valid node structure. Length-extension is not an issue, but performance considerations are.

The performance cost of SHA256^2 compared to SHA256 is a lot less than 2. Heck, I'd bet you 1BTC that even in your existing Python implementation changing from SHA256 to SHA256^2 makes less than a 10% performance difference, and the difference in an optimized C++ version is likely unmeasurable. I really hope you understand why...
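The claim is easy to check empirically. A rough benchmark sketch (exact numbers depend on hardware and interpreter overhead, so no ratio is asserted here):

```python
import hashlib
import timeit

# Compare single SHA256 vs double SHA256 (SHA256^2) on 32-byte, tree-node
# sized inputs. The second hash runs on a short 32-byte digest, so it adds
# only one extra compression-function invocation; in a full implementation
# where hashing is just one part of each index update (serialization, disk
# I/O, tree traversal), the end-to-end cost difference is far below 2x.

data = b'\x00' * 32

def single(d=data):
    return hashlib.sha256(d).digest()

def double(d=data):
    return hashlib.sha256(hashlib.sha256(d).digest()).digest()

n = 100_000
t1 = timeit.timeit(single, number=n)
t2 = timeit.timeit(double, number=n)
print(f"single: {t1:.3f}s  double: {t2:.3f}s  ratio: {t2 / t1:.2f}")

# Sanity check: double-SHA256 of the empty string is a well-known constant,
# the same value Bitcoin produces for an empty input.
assert double(b'').hex() == (
    '5df6e0e2761359d30a8275058e299fcc0381534545f55cf43e41983f5d4c9456')
```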

Lots of optimizations, sure, but given that Bitcoin is a system where failure to update sufficiently fast is a consensus failure, if any of those optimizations have significantly different average-case and worst-case performance it could lead to a fork; that such optimizations exist isn't a good thing.

I think you mean it could lead to a reorg. "Failure to update sufficiently fast" does not in itself lead to consensus errors. Examples please?

A re-org is a failure of consensus, though one that heals itself. Currently re-orgs don't happen often because the difference in performance between the worst-, average-, and best-performing Bitcoin nodes is small, but if that stops being true serious problems develop. As an example, when I was testing the Bloom filter IO attack vulnerability fixed in 0.8.4, I found I was able to make the nodes I attacked fall behind the rest of the network by more than six blocks; an attacker could use such an attack against miners to construct a long fork to perform double-spend attacks.


FWIW, thinking about it further, I realized that your assertion that the UTXO tree can be updated in parallel is incorrect, because parallel updates don't allow for succinct fraud proofs of what modifications a given transaction made to the UTXO set. Blocks will need to commit to the top-level UTXO state hash for every individual transaction processed; without such fine-grained commitments you can't prove that a given UTXO should not have been added to the set without providing all the transactions in the block.
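A toy model of what such fine-grained commitments buy you. Everything here is illustrative (the names and the stand-in `apply_tx` are hypothetical, not from any actual proposal): by committing to the state root after each transaction, a fraud proof needs only one transaction plus the two adjacent roots, not the whole block.

```python
import hashlib

def H(b: bytes) -> bytes:
    # Double SHA256, as used elsewhere in Bitcoin.
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# Stand-in for "update the UTXO tree with this transaction's spends/creates";
# a real implementation would return the new Merkle root of the UTXO set.
def apply_tx(root: bytes, tx: bytes) -> bytes:
    return H(root + tx)

def per_tx_roots(genesis_root: bytes, txs: list) -> list:
    # The block commits to every intermediate root, not just the final one.
    roots = [genesis_root]
    for tx in txs:
        roots.append(apply_tx(roots[-1], tx))
    return roots

txs = [b'tx-a', b'tx-b', b'tx-c']
roots = per_tx_roots(H(b'genesis'), txs)

# A verifier handed only (roots[i], txs[i], roots[i+1]) can check that one
# step in isolation -- this is what makes the fraud proof succinct.
i = 1
assert apply_tx(roots[i], txs[i]) == roots[i + 1]
# And a bogus transaction at that position is detectable without the rest:
assert apply_tx(roots[i], b'tx-bogus') != roots[i + 1]
```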
479  Bitcoin / Development & Technical Discussion / Re: Invoices/Payments/Receipts proposal discussion on: September 26, 2013, 05:11:42 PM
Are there any restrictions on a payment request (signed or unsigned) being modified by the payor before submitting the transaction?

A couple of obvious instances would be:
1. "Here's the transaction for the restaurant meal, your request plus a 20% tip" 
2. "Here's the payment for invoice #12345, minus the credit note #34567 from 2 months ago"

(sorry if the answer is obvious, just want to make sure).

There's no mechanism in the protocol for those modifications to be communicated back to the payee.
480  Bitcoin / Project Development / Re: [Fundraising 195btc] Finish “Ultimate blockchain compression” on: September 26, 2013, 11:42:54 AM
To be clear, I'm not sure that example would have the effect you're imagining. Creating a purposefully time-wasting block to validate would hurt, not help, the miner. But presumably you meant broadcasting DoS transactions which, if they got into other miners' blocks, would slow down their block propagation. That's possible, but not economical, for reasons that I'll explain in a minute.

You can't assume that slowing down block propagation is a bad thing for a miner. First of all block propagation is uneven - it'd be quite possible for large pools to have much faster propagation than the more decentralized solo/p2pool miners we want in the network. Remember, if you can reliably get your blocks to just over 50% of the hashing power, you're better off than if they get to more than that because there are more fee paying transactions for you to pick from, and because in the long run you drive difficulty down due to the work your competitors are wasting. Of course reliably getting to 50% isn't easy, but reliably getting to, say, no more than 80% is probably quite doable, especially if efficient UTXO set handling requires semi-custom setups with GPUs to parallelize it that most full-node operators don't have.

Secondly, if block propagation does become an issue, miners will switch to mining empty or near-empty blocks building on top of the most recent block header they know about until they finally get around to receiving the block in full, as a way to preserve their income. Unfortunately even with UTXO commits they can still do that - just re-use the still-valid UTXO commitment from the previous block.(1) Obviously this has serious issues with regard to consensus... which is why I say over and over again that any UTXO commitment scheme needs to also include UTXO possession proofs to make sure miners can't do that.

1) Modulo the fact that the UTXO set must change because a previously unspendable coinbase output became spendable; if we don't do explicit UTXO possession proofs we should think very carefully about whether or not a miner (or group of miners) can extract the info required to calculate the delta to the UTXO set that the coinbase created more efficiently than just verifying the block honestly. I'm not sure yet one way or another if this is in fact true.

But stepping back from that example, block propagation is why sub-second index construction/validation would be ideal. It places index operations in the same ballpark as the other verification tasks, meaning that it won't significantly affect the time it takes for blocks to propagate through the network. If index construction is done in parallel (using GPU acceleration, for example), it might not have any effect at all.

Rather than just quote numbers I'll explain where they come from:

There are two indices: the validating index containing unspent transactions keyed by <txid>, and the wallet index containing unspent outputs keyed by <scriptPubKey, txid, index>.

Let T be the number of transactions with at least one unspent output, N the total number of unspent outputs, X the number of transactions in a new block, I the number of inputs, and O the number of outputs.

The number of hash operations required to perform an insert, delete, or update to the indices is 1, log(T) or log(N), and len(key) for the best, average, and worst case respectively. Since the keys of both indices involve hashed values (txid, and typically also the content of scriptPubKey), creating artificially high pathways through the index structure is a variant of the hash pre-image problem - you are essentially constructing transactions whose hash and output scriptPubKeys have certain values so as to occupy and branch specific nodes in the index structure, for the entire length of a given key. This is computationally expensive to do, and actually growing the pathway requires getting the transactions into the UTXO set, which requires some level of burnt BTC to avoid anti-dust protections. So it is not economically viable to construct anything more than a marginally worse pathway; the average case really is the only case worth considering.

re: content of the scriptPubKey, change your algorithm to index by H(scriptPubKey) please - a major part of the value of UTXO commitments is that they allow for compact fraud proofs. If your tree is structured by scriptPubKey the average-case and worst-case size of a UTXO proof differ by 454x; it's bad engineering that will bite us when someone decides to attack it. In any case the majority of core developers are of the mindset that in the future we should push everything to P2SH addresses anyway; making the UTXO set indexable by crap like "tags" - the rationale for using the scriptPubKey directly - just invites people to come up with more ways of stuffing data into the blockchain/UTXO set.

For the average case, the number of index update operations are:

validating: (X+I) * log(T)
wallet:     (I+O) * log(N)

Currently, these parameters have the following values:

T = 2,000,000
N = 7,000,000
X =     7,000 (max)
I =    10,000 (max)
O =    15,000 (max)

Those are approximate maximum, worst case values for X, I, and O given a maximum block size of 1MB. You certainly couldn't reach all three maximums simultaneously. If you did though, it'd be about 1,000,000 hash operations (half a "megahash," since we're talking single SHA-2), and more importantly 1,000,000 updates to the leveldb structure, which is about 4-5 seconds on a fast disk. My earlier calculation of a max 200,000 updates (computable in about 0.8s) was based on actual Bitcoin block data, and is somewhat more realistic as a worst case.

Worst case would be if a miner creates a block filled with transactions spending previously created 10,000 byte scriptPubKeys and creating no new ones. A minimal CTxIn is 37 bytes, so you'd get roughly 1,000,000/37=27,027 txouts spent. Each txout requires on the order of 10,000 hash operations to update it, so you've actually got an upper limit of ~270,270,000 updates, or 22 minutes by your metric... There's a good chance something else would break first, somewhere, which just goes to show that indexing by scriptPubKey directly is a bad idea. Sure it'd be tricky and expensive to create such a monster block, but it is possible and it leads to results literally orders of magnitude different than what is normally expected.
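Both sets of figures can be reproduced directly from the formulas quoted above (assuming base-2 logarithms, which the "about 1,000,000" figure implies):

```python
import math

# Reproduce the average-case update count from the quoted formulas
#   validating: (X + I) * log(T)
#   wallet:     (I + O) * log(N)
# and the raw-scriptPubKey worst case, using base-2 logs.

T, N = 2_000_000, 7_000_000        # txs with unspent outputs; unspent outputs
X, I, O = 7_000, 10_000, 15_000    # max txs, inputs, outputs per 1 MB block

validating = (X + I) * math.log2(T)
wallet = (I + O) * math.log2(N)
avg_total = validating + wallet
assert 900_000 < avg_total < 1_100_000   # "about 1,000,000 hash operations"

# Worst case: a 1 MB block of minimal 37-byte inputs, each spending a
# 10,000-byte scriptPubKey whose raw key costs ~10,000 hash ops to update.
spent = 1_000_000 // 37                  # minimal CTxIns that fit: 27,027
worst_total = spent * 10_000
assert spent == 27_027
assert worst_total == 270_270_000        # ~270x the average case
```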

re: single SHA256, I'd recommend SHA256^2 on the basis of length-extension attacks. Who cares if they matter; why bother analyzing it? Doubling the SHA256 primitive is a well accepted belt-and-suspenders countermeasure, and done already elsewhere in Bitcoin.

EDIT: And again, even if a call to 'getblocktemplate' takes multiple seconds to complete, the *next* call is likely to include mostly the same transactions again, with just a few additions or deletions. So successive getblocktemplate calls should take just milliseconds to compute by basing the index on the last returned tree.

Conceivably, when receiving and validating a block it's also possible to start with a pre-generated index based off the last block + mempool contents, and simply undo transactions that were not included. Assuming, of course, that the mempool is not getting backlogged, in which case it would make more sense to start with the last block. If you collect partial proof-of-works, then you could pre-compute updates from transactions which appear in many of the partial blocks. There's a large design space here to explore, in terms of optimization.

Lots of optimizations, sure, but given that Bitcoin is a system where failure to update sufficiently fast is a consensus failure, if any of those optimizations have significantly different average-case and worst-case performance it could lead to a fork; that such optimizations exist isn't a good thing.