You could always add a new block version number to exist alongside the current block version. There is no need for a hard fork, nor to break everything in one day.
Doing that requires a hard fork, as it means some blocks will be valid to new nodes but not to old ones. The first block mined under the new rules will permanently push every old node onto a side chain that contains no new-version blocks.
|
|
|
Why is this client creating a new testnet wallet, and not using the existing one?
Because the old testnet is being obsoleted. 0.7.0 (and the current code in git, and this next-test branch) will use "testnet3", a completely reset version. Testnet2 has undergone too many changes, and too many incompatible chains exist on the network. They were very valuable to learn from, but not really usable for real tests.
|
|
|
Can you try to start the client with -upgradewallet as parameter, does this help?
That won't help. My guess is that his wallet is actually corrupted, but old clients fail to notice it. 0.6.0 introduced more strict consistency checks for wallets at startup.
|
|
|
I disagree with the strongly emotional reactions here. This is not necessarily a bad idea. It could be an interesting experiment.
It just isn't Bitcoin.
Obviously, a "hard fork" change can change anything (more precision, limited coin age, ...), but it requires an exceedingly high degree of consensus to pull off, as (almost) everyone needs to upgrade (not just miners). If the precision becomes a problem one day, a consensus for a change that increases it may be viable.
I don't believe a change such as this would ever be accepted by nearly-everyone, as it changes the philosophy behind the system.
|
|
|
"A normal CPU can generate work for several TH/s easily when implemented efficiently"
However, if we are talking more than an order of magnitude, then this statement is very questionable. That is one single CPU. If work generation becomes too heavy for one CPU, do it on two. If that becomes too much, do it on a GPU. By the time a GPU can't do work generation for your ASIC cluster behind it, it will be economically viable to move the work generation to an ASIC as well.
|
|
|
Bitcoin is certainly a lot more than just mining, but that doesn't mean the mining business is not part of it. As long as mining is profitable, economies of scale will always lead to research and development of more efficient (and specialized) hardware. You may argue against the potential for centralization this brings, but from an economic point of view, it is inevitable. A hard fork that would render miners' investments void is a problem, and it would undermine trust in the Bitcoin system.
Yes, cryptographic primitives get broken and better ones are being developed all the time. I've already said that a security flaw is a very good reason for a hard fork - presumably, few people with an interest in Bitcoin will object to a fix for a fatal security flaw.
However, you still haven't convinced me there is a problem. The current getwork()-based mining requires new work every 4 billion hashes, yes. But when combined with local work generation, or even direct mining on top of getmemorypool(), there is no performance problem at all. A normal CPU can generate work for several TH/s easily when implemented efficiently. I believe a few pools already use this.
Unless it becomes clear that there is an inherent problem with the current system that will limit mining operation in the future (as opposed to implementation issues because of the current state of operations), I see no reason at all for a hard fork.
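The back-of-the-envelope arithmetic behind the "new work every 4 billion hashes" figure can be sketched as follows (a minimal sketch; the hash rates are illustrative, and the helper name is mine):

```python
# A block header carries a 32-bit nonce, so one getwork result covers
# 2**32 (~4.29 billion) hash attempts before fresh work is needed.
NONCE_SPACE = 2**32

def headers_per_second(hashrate):
    """How many fresh block headers per second a miner at `hashrate` H/s consumes."""
    return hashrate / NONCE_SPACE

print(headers_per_second(1e12))  # 1 TH/s needs roughly 233 new headers per second
```

Generating a few hundred headers per second is trivial work for a CPU, which is why local work generation scales so much better than polling getwork for each one.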
|
|
|
Doing a hard fork requires an exceedingly high level of consensus, as it requires everyone in the network to upgrade (not just miners). Unless there is a security flaw in the protocol, I doubt we'll see one anytime soon.
The problem you are complaining about is an efficiency problem for miners. I doubt you'll get a large percentage of the Bitcoin community to even accept this is a problem. At least in my opinion, it is not, as there are much more scalable solutions already available, such as local work generation.
|
|
|
See BIP 22. It's a bit complex, but basically it allows moving the entire block-generation process to external processes, which only need to contact bitcoind to receive new transactions or blocks. I don't think a hard fork is warranted for an efficiency problem for miners. When a fork comes, we can always consider extending the nonce of course...
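To illustrate the kind of work such an external process does locally, here is a hedged sketch of Bitcoin-style merkle-root computation (double SHA-256, duplicating the last hash when a level has odd length); the function names are mine, and the byte-order details of real txids are glossed over:

```python
import hashlib

def dsha256(data):
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids):
    """txids: list of 32-byte transaction hashes. Pairs are hashed together
    level by level; an odd-length level duplicates its last entry."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

An external miner can thus reassemble the transaction set and merkle root itself, and only contact bitcoind when transactions or blocks change.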
|
|
|
"ok, let's make it a pi vulnerability (3.1415...)"
Let's use complex numbers. The imaginary part is the assumed mass hysteria the bug will cause. "We consider this bug to have severity log(-1)."
|
|
|
Maybe the key is already in the wallet?
|
|
|
Difficulty 1 corresponds to a maximum target of 0x00000000FFFF0000000000000000000000000000000000000000000000000000, i.e. a probability of 65535/2^48 that a single hash succeeds. Difficulty D corresponds to a probability of 65535/(D * 2^48).
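That arithmetic can be sketched in a few lines (a minimal sketch; the constant is the difficulty-1 target quoted above, and the helper names are mine):

```python
# Difficulty-1 target: 0xFFFF * 2**208, so target/2**256 == 65535/2**48.
MAX_TARGET = 0x00000000FFFF0000000000000000000000000000000000000000000000000000

def hash_probability(difficulty):
    """Probability that a single hash attempt meets the target at this difficulty."""
    return 65535 / (difficulty * 2**48)

def target_for(difficulty):
    """Integer target a block hash must fall below (ignoring nBits rounding)."""
    return MAX_TARGET // difficulty

print(hash_probability(1))  # roughly 2.3e-10 per hash at difficulty 1
```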
|
|
|
There is no problem in sharing the blockchain between several datadirs, as long as you don't run multiple clients on them simultaneously.
You need to symlink blkindex.dat, blk0001.dat and the database/ subdirectory to the common location. Also, if you did not shut down cleanly when running from datadir A, you'll need to first run from datadir A again and shut down cleanly before starting from any other datadir.
-detachdb is only necessary if you do symlink the database/ dir to both locations.
|
|
|
We're very close to adding much improved Tor support to Bitcoin, so it will most likely be in 0.7.0. Among other things:
- Support for running as and connecting to Bitcoin hidden services
- Peer exchange of Bitcoin hidden services
- Non-leaking DNS seeding via proxy
|
|
|
I think I got carried away using the term "auto update" here. I certainly don't mean full automatic installation of new versions, merely a message warning for new versions, and only when enough signatures are available.
|
|
|
By somewhat larger, how many more IPv6s can there be than possible IPv4s?
79228162514264337593543950336 (that is, 2^96) times more.
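As a worked check (assuming the full 2^32 IPv4 and 2^128 IPv6 address spaces):

```python
# IPv4 addresses are 32 bits, IPv6 addresses are 128 bits;
# the ratio of the two address spaces is 2**(128 - 32) == 2**96.
ratio = 2**128 // 2**32
print(ratio)  # 79228162514264337593543950336
```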
|
|
|
This is a known bug, and it's fixed in git HEAD already.
|
|
|
The binaries (at least for Windows and Linux) are built using gitian. This system performs the entire compilation process in a tightly controlled virtual machine, using a deterministic build process. This means that all developers (and others, if they like) can do the build themselves, and end up with the exact same binary (byte for byte identical). We then GPG sign the result, and upload it.
The (provisional) auto-update process uses these signatures (there have to be several) before installing an update.
|
|
|
Bitcoin transactions do not have a well-defined "from" address. Each transaction can have several inputs, each of which potentially has an identifiable address it was previously sent to. Those addresses may or may not be under control of the sender of the funds.
If you need to do refunds, ask people for a refund address.
If you need to identify individual payments, give a different (unique) receive address for each.
|
|
|
When a transaction comes in that depends on an unconfirmed transaction (for example one that was recently in the memory pool, but that was cleared due to a restart), it is simply rejected.
This is not a problem. Even if the entire network regularly wiped its memory pool, the sender and receiver of the transaction would keep rebroadcasting it until it is accepted into a block, including all its unconfirmed dependencies. In this sense, the memory pool itself is just an optimization.
|
|
|
I did a calculation which says that every application of SHA-256 reduces entropy by about 0.5734 bits. I have no idea if that's correct.
Assuming SHA-256 behaves as a random function (mapping every input to a uniformly random, independent output), you will end up with (on average) (1-1/e)*2^256 different outputs. This indeed means a loss of entropy of about half a bit. Further iterations map a smaller space onto an output space of 2^256, and the entropy loss of each further application drops very quickly. It's certainly not the case that you lose any significant amount by doing 1000 iterations or so.
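A toy simulation of that random-function model (a sketch only: it uses a 2^16-element space standing in for 2^256, and a seeded PRNG rather than SHA-256 itself):

```python
import random

random.seed(1)
N = 1 << 16                                  # toy space of 2**16 values
f = [random.randrange(N) for _ in range(N)]  # one fixed random function

# Fraction of the space covered by one application of f; the expected
# value is 1 - (1 - 1/N)**N, which tends to 1 - 1/e ~ 0.632 for large N.
image = set(f)
print(len(image) / N)
```

The measured fraction comes out close to 1 - 1/e, matching the image-size shrinkage described above.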
|
|
|
|