22
|
Bitcoin / Development & Technical Discussion / Re: Initial replace-by-fee implementation is now available on testnet
|
on: May 09, 2013, 10:22:37 PM
|
Is there a subtle issue with recursive fee calculation that's prevented it being implemented?
In fact I've already got most of a recursive fee calculation implementation written, including unit tests. But because recursive fee calc is required to make the patch robust against certain types of DoS attacks, I want to hold off on making that code available, to give people time to experiment with tx replacement - in particular how their merchant code handles it - while still discouraging miners from using it on mainnet. Similarly, that vector-vs-set bug you found was a very nice catch, but again, I'm going to hold off fixing it for another two or three weeks. Ok, I wrote some code to do it and was making sure I hadn't missed some obvious-to-you-but-not-me issue before creating a pull request. I'll create a patch and send it to you privately instead - you may find it useful even if you've already done most of the work.
|
|
|
25
|
Bitcoin / Development & Technical Discussion / Re: A short introduction to TPMs
|
on: May 08, 2013, 05:26:16 PM
|
Someone contacted me recently about implementing the oracle system. I hope they pull through. If they do, the next step (after quorums of oracles?) would be to have TC secured and remotely attesting oracles. It's a larger project but all the pieces exist, someone just has to pick them up and put them together.
Yes, this is interesting. You just reminded me - I had a thought some time ago that it might be possible to create trusted oracles using some kind of secure 2-party computation, such that the oracle doesn't know what it is that it knows, and therefore cannot lie convincingly. I guess it's very similar underneath to your garbled circuit blacklisting idea - have you already considered this?
Something else that might be cool is an extension to the payment protocol such that you can provide a remote attestation in the PaymentACK message (is DAA interactive, I wonder?). Your wallet app would have the core wallet maintenance and signing code run in the TC secured environment; all the rest (GUI, network, etc.) would run in the standard domain. The goal is that you can provide proof you aren't going to (easily) double spend when uploading a transaction. Direct anonymous attestation allows you to control the linkability of attestations while nevertheless allowing blacklisting of compromised TPMs.
This goes the other way too. If I receive a PaymentACK saying the payment was rejected, it would be nice to have proof that the merchant won't still broadcast my payment anyway.
|
|
|
26
|
Bitcoin / Development & Technical Discussion / Re: A short introduction to TPMs
|
on: May 08, 2013, 03:51:06 PM
|
That comes up a lot, but what's the threat model here? If a node wants to DoS a client it can just not serve any data at all. If it's trying to make you think you didn't receive a payment when you did, comparing the results from a few different nodes should be enough to reveal the subterfuge. If that doesn't work, it means your internet connection is hijacked: see the recent discussion on bitcoin-development for ideas about what to do in that scenario.
OK, currently bitcoinj doesn't compare nodes with each other, mostly because nobody is attacking users in that way. But if we did start seeing it then we could implement multi-querying fairly easily.
Yes, I agree it's a solution looking for a problem, but we're brainstorming.
|
|
|
27
|
Bitcoin / Development & Technical Discussion / Re: A short introduction to TPMs
|
on: May 08, 2013, 03:29:34 PM
|
Joe already doesn't need to run a full node. They can use an SPV client. The chain is self-proving.
But the SPV client has no guarantee that he's been given all transactions relevant to him. If you can trust a remote peer, then you can be sure that you have been given everything. What might be better is for a trusted service running a full node to expose a 'getbalance <scriptPubkey>' command which looks up the scriptPubkey in the UTXO set and returns the total value of all unspent outputs to that scriptPubkey. This would mean that the transaction downloads could be from untrusted peers (like now) and then confirmation that you've received everything is done by querying a trusted node for the final balance and comparing it to your calculated balance.
|
|
|
28
|
Bitcoin / Development & Technical Discussion / Re: This guy spams the blockchain by sending 500BTC back and forth
|
on: May 07, 2013, 11:41:46 PM
|
This has illustrated an issue however.
As transactions are added to the block, each time one is added, its dependants are added to the queue. When we've exhausted all the transactions that link directly off the main chain, we're left with these dependant transactions, which always have zero priority. If they also have zero fee, then the ordering fails to choose one over another.
Thus the ordering of the priority queue ends up being determined by the order that the transactions were added to it.
When Mr 8's 500BTC transaction is added to the block, it has a very high priority, meaning that it's added early and its dependants are added to the queue first. Once the priority>0 transactions are all added, his next transaction is at the front of the queue. This again puts the next one in the queue early. It wouldn't even matter if the other transactions were of low value and huge. They'd be first in the queue. You could use this to fill the entire block with one transaction with zero fee, simply by making it depend on a high priority transaction (which also has zero fee).
One solution is to amend the ordering. It currently depends only on the priority value and the fee/kB, so it fails to distinguish transactions when both are equal, falling back to insertion order. If the third element (the tx pointer) of the TxPriority object were used for comparison too, then the zero-fee, zero-priority transactions would be ordered by where they are in RAM - which is at least not predictable. This does have the downside (?) that it is no longer deterministic. To keep it deterministic you'd want to order by hash instead - but that could get expensive.
EDIT: thinking a bit more, ordering by pointer doesn't really work either because the allocation strategy in std::map is (I'm guessing) likely to be to allocate a chunk of memory enough for multiple elements, so the chain of transactions coming in one after another will end up in a chunk of contiguous memory, and we end up falling back to insertion order again.
|
|
|
30
|
Bitcoin / Development & Technical Discussion / Re: This guy spams the blockchain by sending 500BTC back and forth
|
on: May 07, 2013, 11:13:29 PM
|
Ok, I think I solved the mystery.
Firstly, BTC Guild are almost certainly running 2 bitcoind instances, one of which has 'blockprioritysize' set to maximum, meaning that fee/kB is essentially ignored.
Secondly, a few blocks earlier, block 234941 was solved by BTC Guild only a few seconds after the previous block. Because of the short time period, there were very few transactions left in the memory pool which linked directly off the main chain. Instead it was filled with very low priority transactions and chains of transactions (zero priority). This block cleared out most of the low priority transactions, so all that was left was this guy's chain.
Fast forward a few blocks and BTC Guild solved another block (234946), and again in only a few seconds. This time there were no low priority transactions left, only zero priority transaction chains. So that is what got included in the block.
|
|
|
32
|
Bitcoin / Development & Technical Discussion / Re: This guy spams the blockchain by sending 500BTC back and forth
|
on: May 07, 2013, 06:38:49 PM
|
These transactions aren't in the same block. The chain of transactions was broadcast at the same time, but each transaction appeared in consecutive blocks. The first transaction, which links off a confirmed transaction, has high priority so it gets into the block; the rest of the chain has zero priority, so it doesn't. Now, coins comes from view.GetCoins(txin.prevout.hash, coins). The thing is, the coin view is updated as transactions are added to the block - see viewTemp.Flush(); later in the function - which means it looks like the transaction already has one confirmation even though that "confirmation" comes from the block being created right now!
The priority is calculated before the transactions are selected to go into the block. The view is NOT updated while this is happening - in fact, the view is not of the memory pool at all, it is of the previous block. For any chain of transactions in the memory pool, only the first (the one linking off the blockchain) has its nConf calculated.
|
|
|
35
|
Bitcoin / Development & Technical Discussion / Re: unique identifier of a transaction
|
on: May 04, 2013, 06:38:24 PM
|
But it is possible to create a transaction and spent all its outputs within one block, isn't it? Thus this transaction won't be live.
Heh, good point! Hmm, this is interesting. It basically says "This is caught by ConnectInputs()", but is it? Short answer: no. Longer answer: ConnectInputs() doesn't exist anymore in the code. It never explicitly checked for duplicates, because it was a check of a single transaction and so couldn't know about the other transactions in the block. However, if the transactions are duplicates then updating the UTXO set will fail (presumably what was meant). If it's a true collision of different transactions, this isn't true (as far as I can see). The same is true now; the code has just moved to CheckInputs and UpdateCoins.
|
|
|
36
|
Bitcoin / Development & Technical Discussion / Re: unique identifier of a transaction
|
on: May 04, 2013, 04:12:32 PM
|
Is it possible to have transactions with same hash in one block? BIP 30 doesn't say anything about it, but perhaps it is mentioned in some other rule.
It's really just the same rule. You can't have 2 live transactions (those with unspent outputs) with the same hash. See https://github.com/bitcoin/bitcoin/blob/master/src/main.cpp#L2105. If it is possible, then I guess the only identifier is a position in the blockchain, i.e. block hash + index of transaction within block.
Yes. Though you have to deal with the case where the transaction isn't in the blockchain separately. Basically from the view of the reference client, there is no need for the identifier to be unique forever. The only need for looking up transactions is to get their unspent outputs. Once the outputs are spent, they never need to be looked up again, so the identifier can be reused.
|
|
|
37
|
Bitcoin / Development & Technical Discussion / Re: Listunspent error
|
on: May 04, 2013, 12:31:22 AM
|
I don't know what you mean by 'out' transaction exactly, but if you spend some coins and have some change left over, you'll end up with a transaction where you're both spending and receiving.
Thus there'll be some overlap between the transactions shown by listunspent and transactions which spent some of your coins ('out' transactions?).
|
|
|
39
|
Bitcoin / Development & Technical Discussion / Re: 234078 Longest block to solve yet?
|
on: May 03, 2013, 04:00:08 PM
|
Deepceleron, thanks for taking the time to explain, though the convergence of the geometric to exponential distribution wasn't the issue I had. However, dealing with the exponential distribution made finding my error much simpler. My issue was the conversion from dealing with number of hashes to dealing with time. The re-targeting algorithm tries to keep the hashrate directly proportional to 1/p, so the variance in time quadratically increases with the hashrate. Right? My incorrect reasoning was that 1/p is the mean number of hashes, which we choose to be 10 minutes' worth of hashes at the current hashrate R, so 1/p = 600R; then 1/p² is 600²R², and I concluded that this is the variance in time, which therefore goes up with the square of the hashrate. Anyway, I worked out the error: transforming the exponential CDF for the number of hashes required into the CDF of the time required using 1/p = 600R gives an exponential distribution with a mean of 10 minutes - in particular the hashrate cancels itself out - and so the variance is constant, no matter the hashrate.
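For the record, that transformation can be written out in a couple of lines (my own sketch, using the exponential approximation to the geometric):

```latex
% N = number of hashes needed to find a block, N ~ Exp(p) approximately,
% with mean 1/p = 600R, where R is the hashrate in hashes per second.
% The time taken is T = N / R, so for t > 0:
P(T > t) \;=\; P(N > Rt) \;=\; e^{-pRt} \;=\; e^{-\frac{Rt}{600R}} \;=\; e^{-t/600}
% Hence T ~ Exp(1/600): mean 600 s and variance 600^2 s^2,
% with R cancelling out, so the variance is independent of the hashrate.
```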
|
|
|
40
|
Bitcoin / Development & Technical Discussion / Re: 234078 Longest block to solve yet?
|
on: May 02, 2013, 10:40:03 PM
|
Only 'technically', on a very small scale, is the variance different. In reality it is unobservable at different difficulties.
I was hoping you'd explain why the difference is unobservable. Anyway, if anyone has the time to do it, it might be interesting to see a chart of the cumulative distributions of the solution times during e.g. January and April, superimposed to see if the bell shape has noticeably widened with the increase in hashing power.
|
|
|
|