Even 0.7.0 still has CPU mining. Just not accessible through the GUI.
|
|
|
Just to give you guys some numbers. My current "ultraprune" branch (which, as Mike noted, currently doesn't prune at all) is mostly a rewrite of the block validation logic, which is faster than the current one, needs less storage, and supports block pruning. It also contains work done by Mike, which switches the database backend from BDB to LevelDB (which should give a nice speed improvement as well, in particular on slow storage). In idealized laboratory conditions, I was able to make this branch synchronize from scratch in around half an hour on my laptop. Real life use will not reach such speeds (for now, there are other improvements possible that'll take us closer), but a full chain sync over network (mine was from a local disk) should be doable in a few hours. To get an idea of the actual storage requirements (after pruning, which isn't done):
- Around 30 MB (right now) for the block chain index. This will grow linearly over time.
- Around 120 MB for the coin database. This grows roughly following the green line in the graph below, plus 10% to 20%.
- The actual block data: the size of the blocks themselves, as many as you can manage to keep.
- "block undo" data: this is specific to ultraprune's design, and requires around 1/9th of the size of the kept blocks
Here's a graph showing the size of the coin database (set of unspent transaction outputs), with and without pruning (note that the unpruned version of this isn't particularly useful - it's just there for comparison), and after removing several sizes of small/tiny outputs. The green line is the only actually meaningful one.
|
|
|
|
|
|
No, only one of the two is used. There is no point in having both: which address/pubkey is used is decided at key generation time, and only one address is ever exposed. There is no reason to expect payments to an address that was never given out.
|
|
|
-rescan scans the local block chain copy for transactions that are missing from the wallet.
It does not remove transactions from the wallet which conflict with the chain.
|
|
|
Generating a merkle root is exactly the same operation as the ASIC is specifically designed for already (double SHA256). No need to embed a CPU on the ASIC (an ASIC is a CPU, but one for a very specific purpose only).
That doesn't mean they should - it can be done at any stage (in the pool, in the computer controlling the ASIC, in some intermediate controller chip, ... I don't care). The point is that this is not hard; it's only different from how the current infrastructure works.
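To make the point above concrete, here is a minimal sketch (not Bitcoin Core's actual code) of what computing a merkle root involves: repeated double SHA-256 over pairs of transaction hashes, with the last hash duplicated when a level has odd length. This is exactly the operation mining ASICs already perform on block headers.

```python
# Sketch of merkle-root construction with double SHA-256.
# Note: this folds raw 32-byte hashes; real implementations must also
# take care of txid byte order when displaying hashes.
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids: list) -> bytes:
    """Fold a list of 32-byte txids into a merkle root.
    When a level has an odd number of hashes, the last one is duplicated."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

A device (or controller) that can hash headers can clearly also run this loop; the only question is where in the pipeline it happens.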
|
|
|
BIP 22 doesn't have anything at all to do with the block validity rules, which is the one thing this is all about. It's about a client-specific way of making mining software interact with the daemon (but it can, and hopefully will, be adopted by other software too). You could replace the BIP22 implementation in the daemon you're using with any equivalent, and nothing would break - nobody would even notice - as long as you patch the software that interacts with it as well.
BIP 34 however does concern block validity rules. It's not directly related to mining, so maybe it is harder to see why it is relevant in this discussion.
|
|
|
The bitcoin address is calculated by hashing the public key. I think the hash is always performed using the full point, and I know that using the full point works, because I've calculated addresses using the uncompressed key, sent coins to them, and spent them using the corresponding private key. I don't think that starting the hash process using a compressed key will work, but it is late, and I don't feel like digging through the code to find out for sure. Hopefully someone will chime in.
Actually, that is exactly what is done. A base58 address corresponding to a compressed public key is formed by using the normal encoding process (base58(0x00 + ripemd160(sha256(pubkey)) + checksum)) on the compressed public key. You can't even tell from an address which type of key was used. That is why it works in a backward-compatible way: old verifying nodes don't know anything about compressed public keys, but their software (which uses OpenSSL) accepts compressed public keys as public keys, so the verification works, and the hash also works out fine.
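The encoding process described above can be sketched as follows. This is an illustrative reimplementation, not wallet code; note that `hashlib.new("ripemd160")` depends on the local OpenSSL build providing RIPEMD-160.

```python
# Sketch of base58check address encoding:
#   address = base58(0x00 + ripemd160(sha256(pubkey)) + checksum)
# The same function works whether pubkey is the 33-byte compressed or
# the 65-byte uncompressed encoding; only the resulting hash differs.
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def base58check(payload: bytes) -> str:
    data = payload + dsha256(payload)[:4]      # append 4-byte checksum
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58[r] + out
    # each leading zero byte is encoded as a leading '1'
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

def pubkey_to_address(pubkey: bytes) -> str:
    h160 = hashlib.new("ripemd160", hashlib.sha256(pubkey).digest()).digest()
    return base58check(b"\x00" + h160)         # 0x00 = mainnet P2PKH version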
|
|
|
That is correct; a compressed and an uncompressed one.
EDIT: Bitcoin private keys in base58 wallet import format have a marker to distinguish the compressed and uncompressed one, although it is technically not part of the secret key. An importing wallet must know which address to associate with it though (and associating it to both would be wasteful and not useful, as keys should only be used once anyway).
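The WIF marker mentioned in the EDIT can be illustrated like this. The layout is standard (0x80 version byte, 32-byte secret, an extra 0x01 byte only for keys whose address uses the compressed pubkey, then a 4-byte checksum); the code itself is a sketch, not wallet code.

```python
# Sketch of wallet-import-format (WIF) encoding. The trailing 0x01
# byte is the marker distinguishing "compressed" keys; it is not part
# of the secret itself, only a hint about which pubkey encoding (and
# hence which address) the key was used with.
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def b58encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58[r] + out
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

def to_wif(secret: bytes, compressed: bool) -> str:
    payload = b"\x80" + secret + (b"\x01" if compressed else b"")
    return b58encode(payload + dsha256(payload)[:4])
```

On mainnet this makes the two forms visually distinct: uncompressed-key WIF strings are 51 characters starting with '5', compressed-key ones are 52 characters starting with 'K' or 'L'.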
|
|
|
I'm sure that when the alternative is Bitcoin becoming useless (either by scaling issues or broken cryptography), getting a consensus about the necessity for upgrade won't be a problem (most likely still very hard, but doable).
Good luck getting the Bitcoin community convinced that it's necessary because of a performance problem for miners, who are already making money.
|
|
|
Why is this stupid BIP in 0.7.0 that's been forced on everyone, that also removed functionality from bitcoind, necessary?
I think you're missing something here. 0.7.0 was most certainly not forced on anyone. It does implement BIP34 (a fully backward-compatible change), which only takes effect as soon as a majority of mining power participates in it. Therefore it's indeed preferable for miners and other infrastructure to upgrade to 0.7.0. But the key words here are backward compatible. Every change that has ever been done either had no protocol impact (like removing getmemorypool) or only a backward-compatible one (like BIP16 and BIP30).

What you are proposing is a completely incompatible upgrade. Blocks created by a new miner would simply be ignored by every single old node (not just old miners, everyone). The moment such a block gets created, and a majority of mining power is behind the change, there will instantly appear a fork in the block chain. One side maintained by the new nodes, one side by the old nodes. Every existing non-spent transaction output before the split would get to be spent once on each side. This would be a disaster.

The only way such a "hard fork", as it is called (essentially a non-backward-compatible change to the validity rules for blocks), is possible, is when it is very carefully planned in advance (let's say 1-2 years) and everyone agrees (not just a majority; there must be exceedingly high consensus about this).

I'm sorry, but no, a performance problem for miners is not worth a hard fork. Miners make money, I'm sure they'll find solutions on their own (like Stratum) which don't require the rest of the network to upgrade. If the hardware or the software can't deal with such high performance, switch to other hardware or software. Let the device do ntime rolling or calculate its own merkle root. Yes, I fully agree a 64-bit nonce would have made things easier, but there is simply no way of changing that right now. I don't mean hard - it's just impossible.
Some people would refuse to have a protocol change forced on them, and that's enough to ruin Bitcoin in the face of a hard fork.
|
|
|
If you held a pre-signed transaction that sends the funds back to you with a lockTime of 1 Jan 2013 that would work.
I've been out of the loop for a while.. Does lockTime work correctly nowadays? If not, when is it scheduled to be implemented?
nLockTime has always worked since it was introduced. What doesn't exist yet is transaction replacement (which is necessary for many, but not all, of nLockTime's applications). Thankfully, implementing transaction replacement doesn't need any protocol change or fork or miner support.
|
|
|
If the network hashing power is constantly increasing at 10% per 2016 blocks, those blocks will on average take 10% less time than 10 minutes, in the long run.
If this is what you're saying, yes.
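As a toy calculation (my own simplification, not consensus code): difficulty is retargeted so that the *previous* period's hashrate would yield 10-minute blocks, so it always lags one period behind. If hashrate then grows another 10%, each block takes 10/1.1 ≈ 9.09 minutes, i.e. strictly about 9.1% less, close to the 10% figure quoted above.

```python
# Toy model of the retarget lag: difficulty assumes last period's
# hashrate, so under steady growth blocks arrive faster than target.
TARGET_MINUTES = 10.0
BLOCKS_PER_PERIOD = 2016

def expected_block_time(growth_per_period: float) -> float:
    """Average minutes per block if hashrate grows by the given
    fraction immediately after each retarget (simplifying assumption)."""
    return TARGET_MINUTES / (1.0 + growth_per_period)

def period_length_days(growth_per_period: float) -> float:
    """How long 2016 blocks take under that assumption, in days."""
    return BLOCKS_PER_PERIOD * expected_block_time(growth_per_period) / (60 * 24)
```

With 10% growth per period, a retarget period shrinks from 14 days to roughly 12.7 days.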
|
|
|
Well what's the point of emailing RMS? His answer is always GPL...
No, it isn't. He has a fully nuanced answer. I will attempt to condense and summarize, but he explains it better, so it is worth the time to read it. By the way, at the London conference, we had a discussion with RMS about Bitcoin and the license of its implementation. Afterwards in his talk, he acknowledged that a weaker license than GPL may be appropriate for things like Bitcoin, to increase the ability of adopting the system.
|
|
|
What makes you think we're not working very hard to deal with the increasing load?
Yes, we are conservative when it comes to modifying the reference code, and I think we should be.
That said, I've also spent more than a few weeks' time already on rewriting the reference client's validation engine to be much more efficient. Without any compensation, by the way. I understand the decreasing performance is rapidly becoming an issue, and not seeing improvement must be frustrating. But please don't think we're ignorant. I'm not alone, of course. Mike helped by making Bitcoin run on top of LevelDB (instead of ancient BDB), which has much better performance, in particular on slow disks.
All this is finished and works. It just requires a massive amount of testing - we can't just switch to some faster code and hope that it behaves the same way. Even if it deviates from the old one in the tiniest way, we have a serious problem. This will take time to merge, and time is critical now. We're also just volunteers.
Anyway, expect 0.8 to be significantly faster than 0.7. I'm not talking about a few percent improvement. How much improvement will depend on a lot of factors, but I've done test runs (in idealized conditions) with full syncs in less than half an hour. In practice for most it will probably still be hours, but it shouldn't be days anymore.
Please, patience.
PS: I'm not on reddit.
|
|
|
Also, through gitian, everyone can build the binaries and verify they match byte-for-byte the distributed ones, and that their hash matches the signed checksums. We do not publish binaries before a few developers have successfully built the exact same binary.
The process is somewhat contrived, but it allows for very deep inspection of what is distributed.
|
|
|
Transactions actually do have a rarely-used feature called nLockTime. It can be used to specify the block height or UNIX timestamp after which the transaction is allowed to be included in the block chain.
|
|
|
Conclusion: In a scenario where derived secret keys are handed over to subbranches, the derived public keys MUST NOT be generated on the fly on "hot" machines. Instead, even Kpar must be stored "cold". To some extent this contradicts the original idea of hierarchical deterministic wallets.
Thanks for noticing this. I must admit that I never considered this scenario.
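The algebra behind the scenario above can be illustrated with a toy sketch (this is NOT real BIP32 code; the public-key bytes below are placeholders, since real derivation serializes the actual compressed parent pubkey). With non-hardened derivation, k_child = (k_parent + IL) mod n, where IL is an HMAC over *public* data, so anyone holding one child private key plus the parent's public extended key recovers k_parent by subtraction.

```python
# Toy illustration of the non-hardened derivation leak.
import hashlib
import hmac

# Order of secp256k1's group (a protocol constant).
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def il_value(chain_code: bytes, parent_pubkey: bytes, index: int) -> int:
    """Left half of HMAC-SHA512 over public data, as in BIP32's
    non-hardened child key derivation."""
    data = parent_pubkey + index.to_bytes(4, "big")
    h = hmac.new(chain_code, data, hashlib.sha512).digest()
    return int.from_bytes(h[:32], "big") % N

# Placeholder values for illustration only.
k_parent = 0x1122334455667788
chain_code = b"\x01" * 32
parent_pubkey = b"\x02" + b"\x03" * 32   # hypothetical 33-byte pubkey

IL = il_value(chain_code, parent_pubkey, 0)
k_child = (k_parent + IL) % N       # what the subbranch is handed
recovered = (k_child - IL) % N      # what an attacker can compute
```

Since IL is computable from public material, `recovered` equals `k_parent`, which is exactly why the parent extended public key must be treated as sensitive in this setup.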
|
|
|
It seems like almost every technical thread about bitcoin{d,-qt} needs to take a detour into the DB-land.
Not too surprising, it's one hell of a weakness right now. BDB is just not the right fit for how we use it. I just wanted to stress that the "append-only" is the key concept to understand what is required architecturally to implement Bitcoin efficiently. Incidentally it is also a key ingredient to make any Bitcoin implementation GAAP-compliant.
I was talking about the storage system, not wallet semantics (which is what you want changed to conform to those rules). Even if we move to an append-only wallet file, that doesn't mean anything will observably change. LevelDB unfortunately will not be "exactly what we need" unless a significant re-architecting is undertaken.
Mike Hearn had explained this succinctly. I'll find the link and post it here.
Mike Hearn also implemented a first Bitcoin-on-LevelDB port himself (see pull request 1619), which was abandoned after I modified it to work on top of my rewrite of the validation engine (see pull request 1677). The problem you probably were referring to is the fact that Bitcoin relied on reading uncommitted data during block validation, something that isn't supported by LevelDB (it just has atomic writes, no real database transactionality). Mike solved that in his port by writing a tiny caching layer around LevelDB. I solved it by avoiding the need for such operations altogether, with a nice performance improvement along the way.

He wasn't talking about the storage layer. He was referring to your request for changing the wallet semantics. I disagree that it would take that long, by the way, but I disagree it's within our scope right now. There are enough alternative wallets already, those are in a perfect place to experiment with different types of wallets.

By the way, changing the wallet storage (but just that) to use an append-only format is also already implemented (by me), but it had some issues left, and I felt other things were more important to work on.
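The "tiny caching layer" idea mentioned above can be sketched hypothetically: since LevelDB offers atomic batch writes but no reads of uncommitted data, a thin cache buffers writes, serves reads from the buffer first, and flushes everything as one atomic batch. This is a dictionary-backed illustration of the concept, not the actual code from either port.

```python
# Hypothetical sketch: read-your-own-writes caching over a key-value
# store that only supports atomic batch commits.
class CachedKV:
    def __init__(self, backing: dict):
        self.backing = backing     # stands in for the LevelDB handle
        self.pending = {}          # uncommitted writes (None = delete)

    def get(self, key):
        if key in self.pending:
            return self.pending[key]   # see our own uncommitted write
        return self.backing.get(key)

    def put(self, key, value):
        self.pending[key] = value      # buffered, not yet visible below

    def delete(self, key):
        self.pending[key] = None       # buffered deletion marker

    def flush(self):
        # In LevelDB this would be a single WriteBatch, applied atomically.
        for k, v in self.pending.items():
            if v is None:
                self.backing.pop(k, None)
            else:
                self.backing[k] = v
        self.pending.clear()
```

The alternative approach described in the post, restructuring validation so it never needs to read uncommitted data, removes this layer entirely.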
|
|
|
If you wonder about Satoshi's intentions, don't forget that he was against alternative implementations...
No, he wasn't. He mistakenly advocated MyBitcoin. Which - at the time, and as far as I know - didn't use an alternative implementation.
|
|
|
|