Other problems (such as weakening crypto) will force a protocol redesign before then. Then we can use a real high-precision number instead of a hack. Satoshi said: unsigned int is good until 2106. Surely the network will have to be totally revamped at least once by then.
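The 2106 figure follows directly from the range of a 32-bit unsigned Unix timestamp; a quick sketch (plain Python, nothing Bitcoin-specific assumed):

```python
from datetime import datetime, timezone

# A 32-bit unsigned Unix timestamp counts seconds since 1970-01-01 UTC,
# so the last representable moment is 2**32 - 1 seconds after the epoch.
rollover = datetime.fromtimestamp(2**32 - 1, tz=timezone.utc)
print(rollover)  # 2106-02-07 06:28:15+00:00
```

So an unsigned int timestamp wraps around on February 7, 2106.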
|
|
|
Only one method of implementing a distributed pool has been discovered so far: each participant sends their "share blocks" to all other participants, and each participant pays out according to how much they like the share blocks they've seen from other people. p2pool uses this design, though the decision-making is simplified from earlier proposals (which causes some problems).
However, this method requires all participants to be full nodes, which will be very expensive in the future. The pool itself also generates a great deal of network traffic.
|
|
|
Something like that would make more sense. SMF doesn't support it, though.
|
|
|
240 PMs per day is already too many: no need to make it worse.
|
|
|
Run Bitcoin with the -rescan switch and it'll detect the transaction. You must do this every time you switch wallet files.
|
|
|
It's for everyone. Why are you sending so many PMs?
|
|
|
This has been discussed so many times already... There are currently 329,993 addresses in the Bitcoin network. Say that this many addresses are created every day for the next 140 years. That's 16,862,642,300 addresses. The chance that at least two of those addresses collide is about 9.7x10^-29, using the standard birthday-problem formula. If every person on Earth makes ten addresses per second for 20 years (2x10^18 total addresses), then the probability that two of these addresses collide is about 1.57x10^-12.
UUIDs have 2^128 possible identifiers. They are also designed to be collision-proof. Wikipedia says: "To put these numbers into perspective, one's annual risk of being hit by a meteorite is estimated to be one chance in 17 billion, which means the probability is about 0.00000000006 (6 × 10^-11), equivalent to the odds of creating a few tens of trillions of UUIDs in a year and having one duplicate. In other words, only after generating 1 billion UUIDs every second for the next 100 years would the probability of creating just one duplicate be about 50%. The probability of one duplicate would be about 50% if every person on earth owned 600 million UUIDs."
Compare this to Bitcoin's 2^160 possible addresses:
Bitcoin has: 1461501637330902918203684832716283019655932542976 addresses
UUIDs have: 340282366920938463463374607431768211456 identifiers
And... Bitcoin already supports OP_HASH256 in script, so it would be trivial to increase the number of addresses if it ever became a problem.
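The collision figure above follows from the first-order birthday bound p ≈ n²/(2N). A quick sketch reproducing the 140-year number (plain Python; the only inputs are the counts quoted above):

```python
# Birthday-problem approximation: for n random values drawn from a space
# of size N, the collision probability is roughly n^2 / (2N) when p is tiny.

ADDRESS_SPACE = 2**160  # Bitcoin addresses (160-bit hash)
UUID_SPACE = 2**128     # UUIDs, for comparison

# 329,993 new addresses per day, every day, for 140 years:
n = 329_993 * 365 * 140          # = 16,862,642,300 addresses
p = n * n / (2 * ADDRESS_SPACE)  # first-order birthday approximation
print(f"{n:,} addresses, collision probability ~ {p:.2g}")  # ~ 9.7e-29
```

Even with the smaller 2^128 space, UUID-scale generation rates stay far below any practical collision risk; the 2^160 address space only widens that margin.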
|
|
|
I doubt they can maintain that position for long. Other countries are already starting to get upset about the US's power over these domains. ICANN is under pressure to stop it.
|
|
|
Allowing variable output values would be a pretty major change. It might be worth thinking about, though, since it would solve many problems.
|
|
|
Ah, I missed that the fully-signed intermediate versions would be kept private. This would be safe. Great idea!
|
|
|
There were too many junk threads speculating about the value (in the wrong section, too). I also removed one about higher values.
If your thread talks about the BTC value and is less than a few paragraphs long, I may delete it. I find these to be worthless.
|
|
|
I think the generate_address_regex() function is broken. Looks like &b58[p] is calculated incorrectly.
I stuck a printf right before output_match() is called:
./vanitygen -r 1.{26}XX
Before output_match: 1H6d1q8niPVvci5zGnpbTkRfaBhWhWSXX5
After output_match: 1H6d1q8niPVvci5zGnpbTkRfaBhWhXcbEn
The address that the regex is being compared to is never a valid address, and the (valid) end address always differs in the last few characters.
Took me the longest time to figure out that this was why all my regexes with $ in them behaved ... unexpectedly.
This reply was just merged here.
|
|
|
Actually, I thought about this for a long time and I now think it might have some uses.
At first I thought that tx A could be broadcast without signing tx B, which would allow the withdrawer to burn coins. But the site just needs to count BTC as temporarily withdrawn until tx B is signed.
Then I thought that the side sending tx A would have a long time to generate a block and choose any version of tx B they want, ignoring sequence. But they'll actually have only tens of minutes to do this.
The risk still seems unacceptable for most sites. The attacker can't easily use network-based attacks as with regular 0-confirmation transactions, but they still only need to solve one block, and they can try the attack many times without cost until it works. Maybe the method could be used if other security measures were taken, such as delaying USD withdrawals out of exchanges.
|
|
|
I think Satoshi is an individual person. He writes with a consistent style. I doubt he's American, since he sometimes uses British spelling. "if im understanding that right.. he could literally prove who he is by using his private key to sign a message and send it via the client(?) to everyone!"
Other people have that alert key now. However, Satoshi has published a PGP public key of his own: https://forum.bitcoin.org/Satoshi_Nakamoto.asc
|
|
|
I wouldn't really consider MtGox full-reserve if they did that, since someone could "burn" their deposits.
|
|
|
Oh, I missed that you were using private keys. Never mind that, then.
Given all but a few bytes of an ECDSA private key, I would not be surprised if there was some way of getting the remaining bytes without a full brute-force.
|
|
|
It would take less than a second to find the code, since all of the used Bitcoin addresses are known. You could just search Bitcoin Block Explorer for the known part.
|
|
|
|