441
|
Bitcoin / Development & Technical Discussion / Re: Handle much larger MH/s rigs : simply increase the nonce size
|
on: June 25, 2012, 05:52:36 PM
|
You could always add a new block version number to exist alongside the current block version. There is no need for a hard fork, nor to break everything in one day.
Doing that requires a hard fork, as it means some blocks will be valid to new nodes but not to old ones. The first block mined under the new rules would permanently strand every old node on a side chain without any new-version blocks.
|
|
|
442
|
Bitcoin / Bitcoin Discussion / Re: Please test (if you dare): next-test 2012-06-21
|
on: June 25, 2012, 01:08:24 PM
|
Why is this client creating a new testnet wallet, and not using the existing one?
Because the old testnet is being obsoleted. 0.7.0 (and the current code in git, and this next-test branch) will use "testnet3", a completely reset version. Testnet2 has undergone too many changes, and too many incompatible chains exist on the network. They were very valuable to learn from, but not really usable for real tests.
|
|
|
444
|
Bitcoin / Bitcoin Discussion / Re: What about the Bitcoin lost?
|
on: June 24, 2012, 12:33:46 PM
|
I disagree with the strongly emotional reactions here. This is not necessarily a bad idea. It could be an interesting experiment.
It just isn't Bitcoin.
Obviously, a "hard fork" change can change anything (more precision, limited coin age, ...), but it requires an exceedingly high degree of consensus to pull off, as (almost) everyone needs to upgrade (not just miners). If the precision becomes a problem one day, a consensus for a change that increases it may be viable.
I don't believe a change such as this would ever be accepted by nearly-everyone, as it changes the philosophy behind the system.
|
|
|
445
|
Bitcoin / Development & Technical Discussion / Re: Handle much larger MH/s rigs : simply increase the nonce size
|
on: June 24, 2012, 08:46:38 AM
|
"A normal CPU can generate work for several TH/s easily when implemented efficiently"
However, if we are talking more than an order of magnitude, then this statement is very questionable. That is one single CPU. If work generation becomes too heavy for one CPU, do it on two. If that becomes too much, do it on a GPU. By the time a GPU can't do work generation for your ASIC cluster behind it, it will be economically viable to move the work generation to an ASIC as well.
|
|
|
446
|
Bitcoin / Development & Technical Discussion / Re: Handle much larger MH/s rigs : simply increase the nonce size
|
on: June 23, 2012, 01:20:45 PM
|
Bitcoin is certainly a lot more than just mining, but that doesn't mean the mining business is not part of it. While profitable, economies of scale will always lead to research and development of more efficient (and specialized) hardware. You may argue against the potential for centralization this brings, but from an economic point of view, it is inevitable. A hard fork that would render their investments void is a problem, and it would undermine trust in the Bitcoin system.
Yes, cryptographic primitives get broken and better ones are being developed all the time. I've already said that a security flaw is a very good reason for a hard fork - presumably, few people with an interest in Bitcoin will object to a fix for a fatal security flaw.
However, you still haven't convinced me there is a problem. The current getwork()-based mining requires new work every 2^32 (about 4 billion) hashes, yes. But when combined with local work generation, or even direct mining on top of getmemorypool(), there is no performance problem at all. A normal CPU can generate work for several TH/s easily when implemented efficiently. I believe a few pools already use this.
Unless it becomes clear that there is an inherent problem with the current system that will limit mining operation in the future (as opposed to implementation issues because of the current state of operations), I see no reason at all for a hard fork.
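The local work generation mentioned above can be sketched roughly as follows. This is a simplified, hypothetical Python illustration (the field layout, names, and the omitted endianness details are assumptions, not the real serialization): bumping an extranonce in the coinbase changes the coinbase txid, hence the merkle root, which hands the mining hardware a fresh 2^32 nonce space with only a handful of hashes of CPU work.

```python
import hashlib
import struct

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids: list) -> bytes:
    """Merkle root over a list of txids (simplified: real Bitcoin also
    byte-swaps hashes; that detail is omitted here)."""
    layer = list(txids)
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # duplicate the last hash on odd layers
        layer = [dsha256(layer[i] + layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]

def fresh_header(prev_hash: bytes, coinbase_base: bytes, txids: list,
                 extranonce: int) -> bytes:
    """Build a new 80-byte header for a given extranonce.

    Each distinct extranonce yields a different coinbase txid and thus a
    different merkle root, giving the miner a fresh 2^32 nonce space.
    """
    coinbase_txid = dsha256(coinbase_base + struct.pack('<Q', extranonce))
    root = merkle_root([coinbase_txid] + txids)
    # version | prev block hash | merkle root | time, bits, nonce (zeroed here)
    return struct.pack('<I', 2) + prev_hash + root + struct.pack('<III', 0, 0, 0)
```

The point is that generating each new header costs only a few hash evaluations, while the hardware behind it burns through 2^32 attempts per header.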
|
|
|
447
|
Bitcoin / Development & Technical Discussion / Re: Handle much larger MH/s rigs : simply increase the nonce size
|
on: June 23, 2012, 12:36:54 PM
|
Doing a hard fork requires an exceedingly high level of consensus, as it requires everyone in the network to upgrade (not just miners). Unless there is a security flaw in the protocol, I doubt we'll see one anytime soon.
The problem you are complaining about is an efficiency problem for miners. I doubt you'll get a large percentage of the Bitcoin community to even accept this is a problem. At least in my opinion, it is not, as there are much more scalable solutions already available, such as local work generation.
|
|
|
452
|
Bitcoin / Development & Technical Discussion / Re: Symbolic link for blockchain on Linux
|
on: June 21, 2012, 10:10:26 AM
|
There is no problem in sharing the blockchain between several datadirs, as long as you don't run multiple clients on them simultaneously.
You need to symlink blkindex.dat, blk0001.dat and the database/ subdirectory to the common location. Also, if you do not shut down cleanly when running from datadir A, you'll need to first run from datadir A again and shut down cleanly, before starting from any other datadir.
-detachdb is only necessary if you do symlink the database/ dir to both locations.
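For illustration, the layout described above could be set up with something like this (a minimal sketch; the function name and directory arguments are made up, and the filenames are the pre-0.8 ones named in the post):

```python
import os

def share_blockchain(common: str, datadir: str) -> None:
    """Symlink the shared block data (blkindex.dat, blk0001.dat and the
    database/ subdirectory) from a common location into a per-wallet
    datadir, so several datadirs reuse one copy of the chain."""
    os.makedirs(datadir, exist_ok=True)
    for name in ("blkindex.dat", "blk0001.dat", "database"):
        link = os.path.join(datadir, name)
        if not os.path.islink(link):
            os.symlink(os.path.join(common, name), link)
```

Remember the caveat above: never run two clients on these datadirs at the same time.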
|
|
|
453
|
Bitcoin / Development & Technical Discussion / Re: Bitcoin and TOR
|
on: June 18, 2012, 03:49:02 PM
|
We're very close to adding much improved Tor support to Bitcoin, so it will most likely be in 0.7.0. Among other things:
- Support for running as and connecting to Bitcoin hidden services
- Peer exchange of Bitcoin hidden services
- Non-leaking DNS seeding via proxy
|
|
|
457
|
Bitcoin / Development & Technical Discussion / Re: Satoshi client auto update
|
on: June 16, 2012, 11:24:34 AM
|
The binaries (at least for Windows and Linux) are built using gitian. This system performs the entire compilation process in a tightly controlled virtual machine, using a deterministic build process. This means that all developers (and others, if they like) can do the build themselves, and end up with the exact same binary (byte for byte identical). We then GPG sign the result, and upload it.
The (provisional) auto-update process uses these signatures (there have to be several) before installing an update.
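The core idea behind the gitian scheme above can be sketched in a few lines. This is a toy illustration, not the real verification code: the real system checks GPG signatures over the digests, which is stood in for here by simple digest comparison, and the threshold logic is an assumption for illustration.

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    """SHA-256 of the built binary. A deterministic build means every
    builder, compiling independently, arrives at this same digest."""
    return hashlib.sha256(data).hexdigest()

def enough_signatures(digest: str, signed_digests: list, threshold: int) -> bool:
    """Accept an update only if at least `threshold` builders attested to
    the exact same digest (stand-in for real GPG signature verification)."""
    return sum(1 for d in signed_digests if d == digest) >= threshold
```

Because the build is byte-for-byte reproducible, any builder whose digest disagrees immediately reveals a tampered or misbuilt binary.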
|
|
|
458
|
Bitcoin / Bitcoin Technical Support / Re: Bitcoind gettransaction problem?
|
on: June 12, 2012, 03:58:29 PM
|
Bitcoin transactions do not have a well-defined "from" address. Each transaction can have several inputs, each of which potentially has an identifiable address it was previously sent to. Those addresses may or may not be under the control of the sender of the funds.
If you need to do refunds, ask people for a refund address.
If you need to identify individual payments, give a different (unique) receive address for each.
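The unique-address-per-payment approach can be illustrated with a toy tracker (the class and method names are invented for this example; a real service would generate fresh addresses from its wallet rather than take a pre-made list):

```python
class PaymentTracker:
    """Toy illustration: hand out one unique receive address per invoice,
    so a payment arriving at that address identifies which invoice it pays."""

    def __init__(self, addresses):
        self._free = list(addresses)   # pre-generated receive addresses
        self._invoice_for = {}         # address -> invoice id

    def new_invoice(self, invoice_id):
        """Reserve a fresh, never-reused address for this invoice."""
        addr = self._free.pop()
        self._invoice_for[addr] = invoice_id
        return addr

    def identify_payment(self, to_address):
        """The output address alone tells us which invoice was paid."""
        return self._invoice_for.get(to_address)
```

This sidesteps the "from" address problem entirely: identification is based on where the money arrives, not where it came from.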
|
|
|
459
|
Bitcoin / Development & Technical Discussion / Re: bitcoind mempool - surviving a restart?
|
on: June 11, 2012, 01:05:26 PM
|
When a transaction comes in that depends on an unconfirmed transaction (for example, one that was recently in the memory pool but was cleared by a restart), it is simply rejected.
This is not a problem. Even if the entire network were to regularly wipe its memory pools, the sender and receiver of a transaction will keep rebroadcasting it until it is accepted into a block, including all its unconfirmed dependencies. In this sense, the memory pool is just an optimization.
|
|
|
460
|
Bitcoin / Development & Technical Discussion / Re: Double hashing: less entropy?
|
on: June 11, 2012, 12:48:03 PM
|
I did a calculation which says that every application of SHA-256 reduces entropy by about 0.5734 bits. I have no idea if that's correct.
Assuming SHA-256 behaves as a random function (mapping every input to a uniformly random, independent output), you will end up with (on average) (1-1/e)*2^256 ≈ 0.632*2^256 distinct outputs. This indeed corresponds to a loss of entropy of roughly half a bit. Further iterations map an already smaller space into an output space of 2^256, and the entropy lost by each further application drops off very quickly. It's certainly not the case that you lose any significant amount by doing 1000 iterations or so.
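The (1-1/e) figure is easy to check empirically on a small domain. This sketch models a hash as a random function on n elements (an assumption, not a property of SHA-256 itself) and counts how many outputs are actually hit:

```python
import random

def distinct_output_fraction(n: int, seed: int = 1) -> float:
    """Model a hash as a random function on a domain of size n: map each
    of the n inputs to a uniformly random output, then measure what
    fraction of the n possible outputs is hit at least once."""
    rng = random.Random(seed)
    hit = {rng.randrange(n) for _ in range(n)}
    return len(hit) / n

# Theory predicts a random function covers about 1 - 1/e ~ 63.2% of its range.
frac = distinct_output_fraction(200_000)
```

The same experiment iterated a second time shrinks the image only slightly further, matching the claim that the per-iteration entropy loss falls off quickly.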
|
|
|
|